Friday, January 4th 2013 11:25:01 PM PST
The first generation of stars is believed to have formed about 150-300 million years after the big bang, when the universe would have been mostly neutral and completely dark to human eyes. These first objects are out of reach of telescopes currently in use, but they are ideally suited to computer simulation: the initial conditions for their formation are well known, and the physics governing their formation is relatively simple, making them ideal problem spaces for supercomputer laboratories.
To simulate the manner in which these stars form, a random realization of the universe is generated and seeded with the correct initial conditions. By evolving the set of equations that governs how matter behaves in that region (hydrodynamics, gravity, molecular chemistry), structures arise naturally. In regions where more interesting phenomena are occurring, such as overdense regions or regions unstable to collapse, higher-resolution elements are inserted. For a typical simulation this happens on the order of 30 times, so that the finest region simulated is 2^30 times smaller than the coarsest region in the very same simulation. The physical model describing how the gas evolves is valid over many orders of magnitude in density, so the available resolution easily covers scales from galaxies down to sub-stellar radii.
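The refinement idea above can be sketched in a few lines of code. This is a toy illustration only, not Enzo's actual implementation: the threshold, the density growth factor, and the one-cell bookkeeping are invented for clarity. It shows how repeatedly halving the cell size in a collapsing region opens up a factor-of-2^30 dynamic range in resolution.

```python
def follow_collapse(density, dx=1.0, threshold=4.0, growth=3.0, max_level=30):
    """Toy refinement loop: while the densest cell exceeds the threshold,
    halve the cell size there (one refinement level) and let the density
    grow, as it would during gravitational collapse.

    All parameter values here are invented for illustration.
    """
    level = 0
    while density > threshold and level < max_level:
        dx /= 2.0          # insert a finer grid: cells half the size
        density *= growth  # collapse drives the density ever higher
        level += 1
    return level, dx

level, dx = follow_collapse(density=5.0)
print(level, 1.0 / dx)  # 30 1073741824.0 -> a 2**30 range in resolution
```

In the real code the refinement criteria are physical (overdensity, Jeans length, cooling), and each level is a full AMR grid patch rather than a single cell, but the geometric growth in resolution works the same way.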
From previous calculations, it appeared that the first generation of stars was always very massive, forming in isolation. Recent studies, following the collapse with an expanded physical model and higher resolution, have suggested that perhaps it is not always so: sometimes the primordial clouds break into multiple clumps, each of which could subsequently form an individual star. However, because the number of calculations using this expanded physical model is still quite low, a statistical sampling needs to be conducted to constrain the rate of fragmentation and the resulting distribution of stellar masses, the "initial mass function."
Using the Triton Resource, we are running 20 different random realizations of the universe, each followed as it collapses to high densities. These calculations use not only a fuller physical model but also higher resolution than most previous calculations, which allows us to study the detailed hydrodynamic flow onto these pre-stellar objects. In this manner, we hope to constrain the way the objects collapse, and to use the results as a guide to whether these first stars really were lone giants.
The simulations use Enzo, a UCSD-based adaptive mesh refinement (AMR) C++/Fortran code for cosmological structure formation, to generate the data. The data are then analyzed using the yt toolkit, for which Matthew Turk serves as the development project lead. For more details on the tools and runs on Triton, see the following sources:
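To give a flavor of the kind of analysis involved, the sketch below builds a projected column-density map from a 3-D gas density field. It is a toy stand-in using plain NumPy, not yt's actual API: yt performs this operation on the full AMR hierarchy of an Enzo output, while here a small uniform grid of random data suffices to show the idea.

```python
import numpy as np

# Stand-in for a simulated 3-D gas density field (units are arbitrary;
# a lognormal distribution loosely mimics turbulent gas).
rng = np.random.default_rng(0)
density = rng.lognormal(mean=0.0, sigma=1.0, size=(64, 64, 64))
dx = 1.0 / 64  # cell size of this toy uniform grid

# Column density: integrate the density along the line of sight (z-axis).
column_density = density.sum(axis=2) * dx

print(column_density.shape)  # (64, 64) -> a 2-D map ready for plotting
```

In yt the equivalent projection handles the varying cell sizes of every refinement level automatically, which is exactly why a dedicated toolkit is needed for AMR data.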
Triton's strengths: Exceptional workflow support makes Triton unique as a processing platform for this type of simulation. Such tasks have been called "long and thin" because the jobs run on relatively few processors but have very long run times, typically on the order of several days.
Triton's user experience: The user experience on Triton is very accommodating. One can compile code, run analysis jobs, launch interactive parallel analysis sessions, and store it all in one place. It's a produce/consume setup that suits this type of workflow very well: kick off several medium-sized jobs, come back a few days later, and never have to bother with moving data around.
Use your own software: One can install a Python stack to analyze the simulations in parallel, and even run VTK-based interactive visualizations on Triton from a laptop.
Flexible disk capacity: Each of the project simulations requires about a terabyte of disk space, so the openness of /mirage, Triton's Lustre-based parallel file system, has been extremely helpful. With Data Oasis coming online this summer, that disk capacity will double to about 360 terabytes.
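The produce/consume workflow described above can be sketched as a parallel fan-out over simulation outputs. The output names and the analysis step here are hypothetical stand-ins for real Enzo data dumps and yt analysis scripts; the point is the pattern, where the 20 realizations accumulate on disk and one job consumes them all in parallel.

```python
from multiprocessing import Pool

import numpy as np

def analyze(output_name):
    """Pretend analysis step: 'load' one simulation output and reduce it
    to a single number (here, the peak density of fake random data)."""
    rng = np.random.default_rng(abs(hash(output_name)) % 2**32)
    density = rng.lognormal(size=(32, 32, 32))  # stand-in for real data
    return output_name, float(density.max())

if __name__ == "__main__":
    # Hypothetical names for the 20 realizations' final data dumps.
    outputs = [f"DD{i:04d}" for i in range(20)]
    with Pool(processes=4) as pool:
        results = dict(pool.map(analyze, outputs))
    print(len(results))  # 20 -> one reduced result per realization
```

Because each output is independent, this scales trivially to however many realizations are on disk, which is what makes the "come back a few days later" workflow practical.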
To learn more about this project, visit the following links: