yt scripts can be a bit intimidating, and at times a bit obtuse. But there's a lot you can do, and this section of the manual will help you figure out how to accomplish some fairly common tasks, which you can then combine with other Python code into more complicated and advanced scripts.
Note
All of these scripts are located in the mercurial repository at http://hg.enzotools.org/cookbook/
This is a simple recipe to show how to open a dataset and then plot a slice through it, centered at its most dense point. For more information, see Slices.
The latest version of this recipe can be downloaded here: http://hg.enzotools.org/cookbook/raw-file/tip/recipes/simple_slice.py .
from yt.mods import * # set up our namespace
fn = "RedshiftOutput0005" # parameter file to load
pf = load(fn) # load data
pc = PlotCollection(pf) # defaults to center at most dense point
pc.add_slice("Density", 0) # 0 = x-axis
pc.add_slice("Density", 1) # 1 = y-axis
pc.add_slice("Density", 2) # 2 = z-axis
pc.set_width(1.5, 'mpc') # change width of all plots in pc
pc.save(fn) # save all plots
Sample Output
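If you would rather center the plots somewhere else, you can find (or choose) a center yourself and hand it to the PlotCollection. A minimal sketch, reusing only calls that appear elsewhere in this section:
from yt.mods import *
pf = load("RedshiftOutput0005")
v, c = pf.h.find_max("Density") # or supply any [x, y, z] in code units
pc = PlotCollection(pf, center=c) # explicit center instead of the default
pc.add_slice("Density", 0)
pc.save("%s_centered" % pf)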
This is a simple recipe to show how to open a dataset and then take a weighted-average projection through it, centered at its most dense point. For more information see Projections.
The latest version of this recipe can be downloaded here: http://hg.enzotools.org/cookbook/raw-file/tip/recipes/simple_projection.py .
from yt.mods import * # set up our namespace
fn = "RedshiftOutput0005" # parameter file to load
pf = load(fn) # load data
pc = PlotCollection(pf) # defaults to center at most dense point
pc.add_projection("Density", 0, weight="Density") # 0 = x-axis
pc.add_projection("Density", 1, weight="Density") # 1 = y-axis
pc.add_projection("Density", 2, weight="Density") # 2 = z-axis
pc.set_width(1.5, 'mpc') # change width of all plots in pc
pc.save(fn) # save all plots
Sample Output
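The weight keyword is what makes these weighted-average projections; leaving it off, as the halo plotting recipes below do, gives an unweighted line integral along the axis instead:
pc.add_projection("Density", 0) # no weight: integrate Density along the x-axis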
This is a recipe to show how to open a dataset, calculate the angular momentum vector in a sphere, and then use that to take an oblique slice. See Derived Quantities and Cutting Planes for more information.
The latest version of this recipe can be downloaded here: http://hg.enzotools.org/cookbook/raw-file/tip/recipes/aligned_cutting_plane.py .
from yt.mods import * # set up our namespace
fn = "RedshiftOutput0005" # parameter file to load
pf = load(fn) # load data
# Now let's create a data object that describes the region of which we wish to
# take the angular momentum.
# First find the most dense point, which will serve as our center. We get the
# most dense value for free, too! This is an operation on the 'hierarchy',
# rather than the parameter file.
v, c = pf.h.find_max("Density")
print "Found highest density of %0.3e at %s" % (v, c)
# Now let's get a sphere centered at this most dense point, c. We have to
# convert '5' into Mpc, which means we have to use unit conversions provided by
# the parameter file. To convert *into* code units, we divide. (To convert
# back, we multiply.)
sp = pf.h.sphere(c, 5.0 / pf["mpc"])
# Now we have an object which contains all of the data within 5 megaparsecs of
# the most dense point. So we want to calculate the angular momentum vector of
# this 5 Mpc set of gas, and yt provides the facility for that inside a
# "derived quantity" property. So we use that, and it returns a vector.
L = sp.quantities["AngularMomentumVector"]()
print "Angular momentum vector: %s" % (L)
pc = PlotCollection(pf, center=c) # Make a new plot holder
pc.add_cutting_plane("Density", L) # Add our oblique slice
pc.set_width(2.5, 'mpc') # change the width
pc.save(fn) # save out with the pf as a prefix to the image name
Sample Output
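Since the unit-conversion convention above trips people up, here is a quick round trip, using the same conversion factors the recipe itself uses:
r_code = 5.0 / pf["mpc"] # 5 Mpc expressed in code units: divide to convert in
r_mpc = r_code * pf["mpc"] # multiply to convert back out; r_mpc is 5.0 again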
This recipe shows how to take a sphere, centered on the most dense point, and sum up the total mass in baryons and particles within that sphere. Note that this recipe will take advantage of multiple CPUs if executed with mpirun and supplied the --parallel command line argument. For more information, see Derived Quantities.
The latest version of this recipe can be downloaded here: http://hg.enzotools.org/cookbook/raw-file/tip/recipes/sum_mass_in_sphere.py .
from yt.mods import * # set up our namespace
fn = "RedshiftOutput0005" # parameter file to load
pf = load(fn) # load data
v, c = pf.h.find_max("Density")
sp = pf.h.sphere(c, 1.0/pf["mpc"])
baryon_mass, particle_mass = sp.quantities["TotalQuantity"](
    ["CellMassMsun", "ParticleMassMsun"], lazy_reader=True)
print "Total mass in sphere is %0.5e (gas = %0.5e / particles = %0.5e)" % \
(baryon_mass + particle_mass, baryon_mass, particle_mass)
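With both masses in hand, derived numbers such as the gas fraction of the sphere are simple arithmetic:
gas_fraction = baryon_mass / (baryon_mass + particle_mass)
print "Gas fraction within 1 Mpc: %0.4f" % gas_fraction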
This is a simple recipe to show how to open a dataset and then plot a phase plot showing mass distribution in the rho-T plane.
The latest version of this recipe can be downloaded here: http://hg.enzotools.org/cookbook/raw-file/tip/recipes/simple_phase.py .
from yt.mods import * # set up our namespace
fn = "RedshiftOutput0005" # parameter file to load
pf = load(fn) # load data
pc = PlotCollection(pf) # defaults to center at most dense point
pc.add_phase_sphere(10.0, "mpc", # how many of which unit at pc.center
    ["Density", "Temperature", "CellMassMsun"], # our fields: x, y, color
    weight=None) # don't take the average value in a cell, just sum them up
pc.save(fn) # save all plots
Sample Output
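If you need identical axes across several datasets, you can pin the binning explicitly; this sketch uses the same x_bins/x_bounds keyword arguments that the time-series phase recipe later in this section relies on:
pc.add_phase_sphere(10.0, "mpc",
    ["Density", "Temperature", "CellMassMsun"], weight=None,
    x_bins=128, x_bounds=(1e-32, 1e-24), # fix the Density bins
    y_bins=128, y_bounds=(1e2, 1e7)) # fix the Temperature bins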
This is a simple recipe to show how to open a dataset and then plot a profile showing mass-weighted average Temperature as a function of Density inside a sphere.
The latest version of this recipe can be downloaded here: http://hg.enzotools.org/cookbook/raw-file/tip/recipes/simple_profile.py .
from yt.mods import * # set up our namespace
fn = "RedshiftOutput0005" # parameter file to load
pf = load(fn) # load data
pc = PlotCollection(pf) # defaults to center at most dense point
pc.add_profile_sphere(10.0, "mpc", # how many of which unit at pc.center
    ["Density", "Temperature"], weight="CellMassMsun") # x, y, weight
pc.save(fn) # save all plots
Sample Output
This is a simple recipe to show how to open a dataset and then plot a radial profile showing mass-weighted average Density inside a sphere. For more information, see Multi-dimensional Profiles.
The latest version of this recipe can be downloaded here: http://hg.enzotools.org/cookbook/raw-file/tip/recipes/simple_radial_profile.py .
from yt.mods import * # set up our namespace
fn = "RedshiftOutput0005" # parameter file to load
pf = load(fn) # load data
pc = PlotCollection(pf) # defaults to center at most dense point
pc.add_profile_sphere(10.0, "mpc", # how many of which unit at pc.center
    ["RadiusMpc", "Density"], weight="CellMassMsun", # x, y, weight
    x_bounds=(1e-3, 10.0)) # cut out zero-radius and tiny-radius cells
# But ... weight defaults to CellMassMsun, so we're being redundant here!
pc.save(fn) # save all plots
Sample Output
This script shows the simplest way of getting halo information. For more information, see Halo Finding.
The latest version of this recipe can be downloaded here: http://hg.enzotools.org/cookbook/raw-file/tip/recipes/halo_finding.py .
from yt.mods import * # set up our namespace
fn = "RedshiftOutput0005" # parameter file to load
pf = load(fn) # load data
halos = HaloFinder(pf)
halos.write_out("%s_halos.txt" % pf)
Sample Output
RedshiftOutput0005_halos.txt
# HALOS FOUND WITH HOP
# Group Mass # part max dens x y z center-of-mass x y z vx vy vz max_r
0 3.349212591e+11 822 8.530865463e+03 9.474301815e-01 8.003573695e-01 6.135765693e-01 9.525055889e-01 8.015884263e-01 6.138615237e-01 1.705851494e+06 -1.249580137e+06 2.916880874e+05 3.264138852e-02
1 2.758414750e+11 677 8.202446551e+03 1.796526850e-01 7.625952107e-01 6.479191838e-01 1.800868084e-01 7.631527126e-01 6.476983977e-01 -2.122530412e+06 1.410529735e+06 -5.558529287e+05 3.346727634e-02
2 1.947595643e+11 478 4.555572625e+03 5.532828495e-01 3.234508336e-01 1.526681708e-01 5.502839176e-01 3.227445666e-01 1.463095679e-01 4.768146446e+05 5.700263869e+05 7.768371888e+05 2.631150506e-02
3 1.377170141e+11 338 3.242180406e+03 9.754010795e-01 7.632175520e-01 7.384444522e-01 9.754894863e-01 7.616535505e-01 7.426239921e-01 3.147013348e+05 6.912953967e+05 -3.158456248e+06 2.418755287e-02
4 1.075659518e+11 264 2.767986753e+03 9.062823292e-01 1.429594372e-01 6.518503248e-01 9.092065242e-01 1.360712140e-01 6.495222326e-01 -5.615869189e+05 -6.366013443e+05 1.681235144e+05 2.159628487e-02
5 8.311914461e+10 204 2.716905006e+03 7.370274945e-01 1.881616252e-02 2.038525792e-01 7.366971198e-01 1.833388582e-02 2.035289977e-01 1.663413733e+06 7.167873345e+04 -6.195454974e+05 2.220572281e-02
6 7.985957031e+10 196 3.728141647e+03 7.032178905e-01 4.612143097e-01 2.543813295e-01 7.029408322e-01 4.608586672e-01 2.556131836e-01 5.110343768e+05 7.263383815e+05 -1.357792572e+05 2.069425029e-02
7 7.374786850e+10 181 2.707414503e+03 8.504477878e-01 4.341667808e-03 1.388950015e-01 8.498287193e-01 3.363578745e-03 1.389072946e-01 -6.683213708e+05 -4.607395365e+05 1.314755282e+06 2.009615603e-02
8 5.419042271e+10 133 2.289804554e+03 3.776429757e-01 7.283994005e-01 6.446504099e-01 3.777433187e-01 7.269790920e-01 6.443397020e-01 -1.617485927e+06 1.406450423e+06 -3.942225913e+05 1.616496998e-02
9 5.256063556e+10 129 1.414660466e+03 8.358023677e-02 7.730337546e-01 6.259202318e-01 8.158372163e-02 7.746775670e-01 6.237812975e-01 -2.255594150e+05 -1.998432996e+06 1.754192578e+06 1.681364212e-02
10 3.381808334e+10 83 7.426291275e+02 7.586783008e-01 5.565828318e-01 2.499949337e-01 7.615372697e-01 5.602183627e-01 2.516651310e-01 -2.892032842e+05 -4.116040472e+04 1.797132864e+06 1.463921311e-02
11 3.218829620e+10 79 1.065896463e+03 7.922319010e-01 6.849626178e-01 3.618562838e-01 7.917738127e-01 6.854323398e-01 3.638244474e-01 2.104084591e+06 -8.047160687e+05 1.661570843e+05 1.497200164e-02
12 2.770638154e+10 68 6.274043905e+02 8.762047753e-01 7.390402458e-01 3.784275359e-01 8.745118455e-01 7.389001762e-01 3.775805493e-01 3.619881778e+05 -1.400392085e+06 1.095641286e+06 1.167731823e-02
13 2.648404117e+10 65 8.422493962e+02 7.145945477e-01 5.507938405e-01 4.833302836e-02 7.136183340e-01 5.517347708e-01 4.777013265e-02 1.228141163e+06 -9.438224166e+05 9.374767573e+05 1.181447170e-02
14 2.526170081e+10 62 5.496262872e+02 2.661371336e-01 4.547039190e-01 2.264844106e-01 2.654044204e-01 4.529011416e-01 2.280199884e-01 5.174012916e+05 1.096946582e+06 -5.720811299e+05 1.202792277e-02
15 2.363191366e+10 58 6.261403270e+02 5.537976778e-01 9.321957347e-01 8.870031559e-01 5.536553100e-01 9.301505474e-01 8.885347547e-01 5.270620662e+05 -7.490051223e+04 -1.859688949e+06 1.073805491e-02
16 2.240957330e+10 55 7.216382764e+02 2.869443986e-01 6.260920367e-01 1.654761036e-01 2.872379828e-01 6.265364612e-01 1.664006039e-01 -3.583231204e+05 3.269624338e+02 1.092627021e+06 1.221875078e-02
17 2.077978615e+10 51 7.207463353e+02 7.925281425e-01 6.605537872e-01 2.972905663e-01 7.922277380e-01 6.615778826e-01 2.962863502e-01 1.213295975e+06 -1.790677992e+06 2.286521021e+06 1.019183230e-02
18 1.996489258e+10 49 5.250658436e+02 8.289750280e-01 5.813463800e-01 5.786615919e-01 8.301333188e-01 5.818982175e-01 5.784557151e-01 1.797734829e+06 2.592494092e+06 6.666032929e+05 9.710146517e-03
19 1.955744579e+10 48 4.827141235e+02 8.332161433e-01 3.625128149e-01 7.783504082e-01 8.324564064e-01 3.624546115e-01 7.766859379e-01 2.889876718e+05 -2.176831107e+05 -8.195252305e+05 1.219180959e-02
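Rather than parsing the text file, you can also work with the halo objects directly: each halo knows its ID and can hand back a sphere enclosing its particles via the same get_sphere() call used in the halo mass recipe below. A minimal sketch (the sphere's center attribute is our assumption for reading the center back):
for halo in halos:
    sphere = halo.get_sphere() # sphere sized to the halo's maximum particle radius
    print "Halo %s is centered at %s" % (halo.id, sphere.center)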
This is a mechanism for plotting circles representing identified particle halos on an image. For more information, see Halo Finding.
The latest version of this recipe can be downloaded here: http://hg.enzotools.org/cookbook/raw-file/tip/recipes/halo_plotting.py .
from yt.mods import * # set up our namespace
fn = "RedshiftOutput0005" # parameter file to load
pf = load(fn) # load data
halos = HaloFinder(pf)
pc = PlotCollection(pf, [0.5, 0.5, 0.5])
p = pc.add_projection("Density", 0)
p.modify["hop_circles"](halos)
pc.save()
Sample Output
This is a simple mechanism for overplotting the particles belonging only to halos. For more information, see Halo Finding.
The latest version of this recipe can be downloaded here: http://hg.enzotools.org/cookbook/raw-file/tip/recipes/halo_particle_plotting.py .
from yt.mods import * # set up our namespace
fn = "RedshiftOutput0005" # parameter file to load
pf = load(fn) # load data
halos = HaloFinder(pf)
pc = PlotCollection(pf, [0.5, 0.5, 0.5])
p = pc.add_projection("Density", 0)
p.modify["hop_circles"](halos) # We like the circles for framing
# Only plot the first 100 halos. Also, by default the particles are
# semi-transparent, but the alpha parameter can be overridden to make them
# darker.
p.modify["hop_particles"](halos, max_number=100)
pc.save()
Sample Output
This is a simple recipe to show how to open a dataset, plot a slice through it, and add some extra vectors on top. Here we’ve used the imaginary fields magnetic_field_x, magnetic_field_y and magnetic_field_z.
The latest version of this recipe can be downloaded here: http://hg.enzotools.org/cookbook/raw-file/tip/recipes/arbitrary_vectors_on_slice.py .
from yt.mods import * # set up our namespace
fn = "RedshiftOutput0005" # parameter file to load
ax = 0 # x-axis
pf = load(fn) # load data
pc = PlotCollection(pf) # defaults to center at most dense point
p = pc.add_slice("Density", ax)
v1 = "magnetic_field_%s" % (axis_names[x_dict[ax]])
v2 = "magnetic_field_%s" % (axis_names[y_dict[ax]])
p.modify["quiver"](v1, v2) # This takes a few arguments, but we'll use the defaults
# here. You can control the 'skip' factor in the
# vectors.
pc.set_width(2.5, 'mpc') # change width of all plots in pc
pc.save(fn) # save all plots
This is a simple recipe to show how to open a dataset, plot a slice through it, and add contours of another quantity on top.
The latest version of this recipe can be downloaded here: http://hg.enzotools.org/cookbook/raw-file/tip/recipes/contours_on_slice.py .
from yt.mods import * # set up our namespace
fn = "RedshiftOutput0005" # parameter file to load
pf = load(fn) # load data
pc = PlotCollection(pf) # defaults to center at most dense point
p = pc.add_slice("Density", 0) # 0 = x-axis
p.modify["contour"]("Temperature")
pc.set_width(1.5, 'mpc') # change width of all plots in pc
pc.save(fn) # save all plots
Sample Output
This is a simple recipe to show how to open a dataset, plot a slice through it, and add velocity vectors on top.
The latest version of this recipe can be downloaded here: http://hg.enzotools.org/cookbook/raw-file/tip/recipes/velocity_vectors_on_slice.py .
from yt.mods import * # set up our namespace
fn = "RedshiftOutput0005" # parameter file to load
pf = load(fn) # load data
pc = PlotCollection(pf) # defaults to center at most dense point
p = pc.add_slice("Density", 0) # 0 = x-axis
p.modify["velocity"]() # This takes a few arguments, but we'll use the defaults
# here. You can control the 'skip' factor in the
# vectors.
pc.set_width(2.5, 'mpc') # change width of all plots in pc
pc.save(fn) # save all plots
Sample Output
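The comments above mention a 'skip' factor for the vectors; we believe this is exposed as the factor keyword (one vector per so many cells), but treat the name as an assumption and check your version if it differs:
p.modify["velocity"](factor=16) # assumed keyword: plot one vector per 16 cells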
This recipe finds the average value of a quantity through the entire box. (See Derived Quantities.) Note that this recipe will take advantage of multiple CPUs if executed with mpirun and supplied the --parallel command line argument.
The latest version of this recipe can be downloaded here: http://hg.enzotools.org/cookbook/raw-file/tip/recipes/average_value.py .
from yt.mods import *
fn = "RedshiftOutput0005" # parameter file to load
pf = load(fn) # load data
field = "Temperature" # The field to average
weight = "CellMassMsun" # The weight for the average
dd = pf.h.all_data() # This is a region describing the entire box,
# but note it doesn't read anything in yet!
# We now use our 'quantities' call to get the average quantity
average_value = dd.quantities["WeightedAverageQuantity"](
    field, weight, lazy_reader=True)
print "Average %s (weighted by %s) is %0.5e" % (field, weight, average_value)
This is a recipe to show how to find topologically connected sets of cells inside a dataset. It returns these clumps, which can be inspected or visualized like any other data object. More detail on this method can be found in astro-ph/0806.1653. For more information, see Contour Finding.
The latest version of this recipe can be downloaded here: http://hg.enzotools.org/cookbook/raw-file/tip/recipes/find_clumps.py .
from yt.mods import * # set up our namespace
fn = "RedshiftOutput0005" # parameter file to load
field = "Density" # this is the field we look for contours over -- we could do
# this over anything. Other common choices are 'AveragedDensity'
# and 'Dark_Matter_Density'.
step = 10.0 # This is the multiplicative interval between contours.
pf = load(fn) # load data
# We want to find clumps over the entire dataset, so we'll just grab the whole
# thing! This is a convenience parameter that prepares an object that covers
# the whole domain. Note, though, that it will load on demand and not before!
data_source = pf.h.all_data()
# Now we set some sane min/max values between which we want to find contours.
# This is how we tell the clump finder what to look for -- it won't look for
# contours connected below or above these threshold values.
c_min = 10**na.floor(na.log10(data_source[field]).min())
c_max = 10**na.floor(na.log10(data_source[field]).max()+1)
# Now we get our 'base' clump -- this one just covers the whole domain.
master_clump = Clump(data_source, None, field)
# This next command accepts our base clump and the range over which we want to
# contour. It recursively finds clumps within the master clump, at intervals
# defined by the step size we feed it. The current value is *multiplied* by
# the step size, rather than added to it -- so if you want to look at
# log10-spaced intervals, you would supply step = 10.0.
find_clumps(master_clump, c_min, c_max, step)
# As it goes, it appends the information about all the sub-clumps to the
# master-clump. Among different ways we can examine it, there's a convenience
# function for outputting the full hierarchy to a file.
f = open('%s_clump_hierarchy.txt' % pf, 'w')
write_clump_hierarchy(master_clump, 0, f)
f.close()
# We can also output some handy information, as well.
f = open('%s_clumps.txt' % pf, 'w')
write_clumps(master_clump, 0, f)
f.close()
# If you'd like to visualize these clumps, a list of clumps can be supplied to
# the "clumps" callback on a plot.
Sample Output
RedshiftOutput0005_clump_hierarchy.txt
Clump at level 0:
  Cells: 714638
  Mass: 2.867994e+12 Msolar
  Jeans Mass (vol-weighted): 1.945683e+10 Msolar
  Jeans Mass (mass-weighted): 4.959227e+10 Msolar
  Max grid level: 2
  Min number density: 7.299408e-09 cm^-3
  Max number density: 1.594844e-03 cm^-3
  Clump at level 1:
    Cells: 136
    Mass: 3.150459e+10 Msolar
    Jeans Mass (vol-weighted): 2.313084e+11 Msolar
    Jeans Mass (mass-weighted): 6.949434e+10 Msolar
    Max grid level: 2
    Min number density: 3.599559e-06 cm^-3
    Max number density: 4.158763e-04 cm^-3
    Clump at level 2:
      Cells: 1
      Mass: 3.698225e+09 Msolar
      Jeans Mass (vol-weighted): 3.532331e+09 Msolar
      Jeans Mass (mass-weighted): 3.532331e+09 Msolar
      Max grid level: 2
      Min number density: 4.144234e-04 cm^-3
      Max number density: 4.144234e-04 cm^-3
  Clump at level 1:
    Cells: 944
    Mass: 1.363021e+11 Msolar
    Jeans Mass (vol-weighted): 3.786709e+11 Msolar
    Jeans Mass (mass-weighted): 1.977858e+11 Msolar
    Max grid level: 2
    Min number density: 3.593325e-06 cm^-3
    Max number density: 7.125931e-04 cm^-3
    Clump at level 2:
      Cells: 2
      Mass: 1.205801e+10 Msolar
      Jeans Mass (vol-weighted): 2.555663e+09 Msolar
      Jeans Mass (mass-weighted): 2.555660e+09 Msolar
      Max grid level: 2
      Min number density: 6.737547e-04 cm^-3
      Max number density: 6.774670e-04 cm^-3
RedshiftOutput0005_clumps.txt
Clump:
  Cells: 1
  Mass: 3.698225e+09 Msolar
  Jeans Mass (vol-weighted): 3.532331e+09 Msolar
  Jeans Mass (mass-weighted): 3.532331e+09 Msolar
  Max grid level: 2
  Min number density: 4.144234e-04 cm^-3
  Max number density: 4.144234e-04 cm^-3
Clump:
  Cells: 2
  Mass: 1.205801e+10 Msolar
  Jeans Mass (vol-weighted): 2.555663e+09 Msolar
  Jeans Mass (mass-weighted): 2.555660e+09 Msolar
  Max grid level: 2
  Min number density: 6.737547e-04 cm^-3
  Max number density: 6.774670e-04 cm^-3
This is a simple recipe to show how to open a dataset, plot a couple of phase diagrams, save them, and quit. Note that this recipe will take advantage of multiple CPUs if executed with mpirun and supplied the --parallel command line argument. For more information, see Multi-dimensional Profiles.
The latest version of this recipe can be downloaded here: http://hg.enzotools.org/cookbook/raw-file/tip/recipes/global_phase_plots.py .
from yt.mods import * # set up our namespace
fn = "RedshiftOutput0005" # parameter file to load
pf = load(fn) # load data
dd = pf.h.all_data() # This is an object that describes the entire box
pc = PlotCollection(pf) # defaults to center at most dense point
# We plot the average x-velocity (mass-weighted) in our object as a function of
# Electron_Density and Temperature
plot = pc.add_phase_object(dd, ["Electron_Density", "Temperature", "x-velocity"],
    lazy_reader=True)
# We now plot the average value of x-velocity as a function of temperature
plot = pc.add_profile_object(dd, ["Temperature", "x-velocity"],
    lazy_reader=True)
# Finally, the average velocity magnitude as a function of electron density
plot = pc.add_profile_object(dd, ["Electron_Density", "VelocityMagnitude"],
    lazy_reader=True)
pc.save() # save all plots
This recipe finds halos and then prints out information about them. Note that this recipe will take advantage of multiple CPUs if executed with mpirun and supplied the --parallel command line argument. For more information, see Halo Finding.
The latest version of this recipe can be downloaded here: http://hg.enzotools.org/cookbook/raw-file/tip/recipes/halo_mass_info.py .
from yt.mods import *
fn = "RedshiftOutput0005" # parameter file to load
pf = load(fn) # load data
# First we run our halo finder to identify all the halos in the dataset. This
# can take arguments, but the defaults are pretty sane.
halos = HaloFinder(pf)
f = open("%s_halo_info.txt" % pf, "w")
# Now, for every halo, we get the baryon data and examine it.
for halo in halos:
    # The halo has a property called 'get_sphere' that obtains a sphere
    # centered on the point of maximum density (or the center of mass, if that
    # argument is supplied) and with the radius the maximum particle radius of
    # that halo.
    sphere = halo.get_sphere()
    # We use the quantities[] method to get the total mass in baryons and in
    # particles.
    baryon_mass, particle_mass = sphere.quantities["TotalQuantity"](
        ["CellMassMsun", "ParticleMassMsun"], lazy_reader=True)
    # Now we print out this information, along with the ID.
    f.write("Total mass in HOP group %s is %0.5e (gas = %0.5e / particles = %0.5e)\n" %
        (halo.id, baryon_mass + particle_mass, baryon_mass, particle_mass))
f.close()
Sample Output
RedshiftOutput0005_halo_info.txt
Total mass in HOP group 0 is 4.83999e+11 (gas = 6.41403e+10 / particles = 4.19859e+11)
Total mass in HOP group 1 is 4.21962e+11 (gas = 6.07831e+10 / particles = 3.61179e+11)
Total mass in HOP group 2 is 2.62190e+11 (gas = 3.86250e+10 / particles = 2.23565e+11)
Total mass in HOP group 3 is 1.95698e+11 (gas = 3.06816e+10 / particles = 1.65016e+11)
Total mass in HOP group 4 is 1.41878e+11 (gas = 2.12738e+10 / particles = 1.20604e+11)
Total mass in HOP group 5 is 1.16476e+11 (gas = 1.82813e+10 / particles = 9.81947e+10)
Total mass in HOP group 6 is 1.24188e+11 (gas = 2.27342e+10 / particles = 1.01454e+11)
Total mass in HOP group 7 is 7.82627e+10 (gas = 1.47010e+10 / particles = 6.35617e+10)
Total mass in HOP group 8 is 6.61199e+10 (gas = 6.63271e+09 / particles = 5.94872e+10)
Total mass in HOP group 9 is 7.63012e+10 (gas = 1.27395e+10 / particles = 6.35617e+10)
Total mass in HOP group 10 is 4.28431e+10 (gas = 4.95060e+09 / particles = 3.78926e+10)
Total mass in HOP group 11 is 4.31030e+10 (gas = 3.58066e+09 / particles = 3.95223e+10)
Total mass in HOP group 12 is 2.92851e+10 (gas = 1.17132e+09 / particles = 2.81138e+10)
Total mass in HOP group 13 is 3.13247e+10 (gas = 1.98850e+09 / particles = 2.93362e+10)
Total mass in HOP group 14 is 2.97262e+10 (gas = 7.97481e+08 / particles = 2.89287e+10)
Total mass in HOP group 15 is 2.42277e+10 (gas = 5.95763e+08 / particles = 2.36319e+10)
Total mass in HOP group 16 is 2.76922e+10 (gas = 1.20811e+09 / particles = 2.64840e+10)
Total mass in HOP group 17 is 2.18954e+10 (gas = 1.11559e+09 / particles = 2.07798e+10)
Total mass in HOP group 18 is 2.09572e+10 (gas = 5.84902e+08 / particles = 2.03723e+10)
Total mass in HOP group 19 is 2.55289e+10 (gas = 2.71190e+09 / particles = 2.28170e+10)
This recipe shows a slightly fancy way to save a couple of plots at several different widths, ensuring that across the plots we have the same min/max for the colorbar.
The latest version of this recipe can be downloaded here: http://hg.enzotools.org/cookbook/raw-file/tip/recipes/multi_width_save.py .
from yt.mods import *
fn = "RedshiftOutput0005" # parameter file to load
pf = load(fn) # load data
pc = PlotCollection(pf, center=[0.5, 0.5, 0.5]) # We get our Plot Collection object
# Note that when we save, we will be using string formatting to change all of
# the bits in here. You can add more, or remove some, if you like.
fn = "%(bn)s_%(width)010i_%(unit)s" # template for image file names
# Now let's set up the widths we want to use.
widths = [ (2, "mpc"), (1000, 'kpc')]
# We could add on more of these with:
# widths += [ ... ]
# Now we add a slice for x and y.
pc.add_slice("Density", 0)
pc.add_slice("Density", 1)
# So for all of our widths, we will set the width of the plot and then make
# sure that our limits for the colorbar are the min/max across all the plots.
# Then we save! Each saved file will have a descriptive name, so we can tell
# them apart.
for width, unit in widths:
    pc.set_width(width, unit)
    vmin = min([p.norm.vmin for p in pc.plots])
    vmax = max([p.norm.vmax for p in pc.plots])
    pc.set_zlim(vmin, vmax)
    # This is the string formatting we talked about earlier
    d = {'bn': pf.basename, 'width': width, 'unit': unit}
    pc.save(fn % d)
Sample Output
This is a recipe that takes a slice through the center of the domain, then creates a bunch of frames as it zooms in. It's important to note that this particular recipe is provided to show how to be more flexible and add annotations and the like; the basic zoom-in machinery is provided by the yt zoomin command.
The latest version of this recipe can be downloaded here: http://hg.enzotools.org/cookbook/raw-file/tip/recipes/zoomin_frames.py .
from yt.mods import * # set up our namespace
fn = "RedshiftOutput0005" # parameter file to load
n_frames = 5 # This is the number of frames to make -- below, you can see how
# this is used.
min_dx = 40 # This is the width of our last frame, in units of the smallest
# cell size (smallest_dx). Usually it should be set to something like 400,
# but for THIS dataset, we actually don't have that great of resolution.
pf = load(fn) # load data
frame_template = "frame_%05i" # Template for frame filenames
pc = PlotCollection(pf, center=[0.5, 0.5, 0.5]) # We make a plot collection
# centered on the middle of the domain.
p = pc.add_slice("Density", 2) # Add our slice, along z
p.modify["contour"]("Temperature") # We'll contour in temperature -- this kind
# of modification can't be done on the command
# line, so that's why we have the recipe!
# What we do now is a bit fun. "enumerate" returns a tuple for every item --
# the index of the item, and the item itself. This saves us having to write
# something like "i = 0" before the loop and "i += 1" inside it. The argument
# to enumerate is the 'logspace' function, which takes a minimum exponent, a
# maximum exponent, and the number of items to generate. It returns 10 raised
# to the power of each exponent it generates.
for i, v in enumerate(na.logspace(
        0, na.log10(pf.h.get_smallest_dx()*min_dx), n_frames)):
    # We set our width as necessary for this frame ...
    pc.set_width(v, '1')
    # ... and we save!
    pc.save(frame_template % (i))
Sample Output
This is a simple recipe to show how to open a dataset, plot a projection through it, and add particles on top. For more information see Plot Modification Mechanisms.
The latest version of this recipe can be downloaded here: http://hg.enzotools.org/cookbook/raw-file/tip/recipes/overplot_particles.py .
from yt.mods import * # set up our namespace
fn = "RedshiftOutput0005" # parameter file to load
pf = load(fn) # load data
pc = PlotCollection(pf, center=[0.5, 0.5, 0.5]) # center on the middle of the domain
p = pc.add_projection("Density", 0) # 0 = x-axis
# "nparticles" is slightly more efficient than "particles"
p.modify["nparticles"](1.0) # 1.0 is the 'width' we want for our slab of
# particles -- this governs the allowable locations
# of particles that show up on the image
# NOTE: we can also supply a *ptype* to cut based
# on a given (integer) particle type
pc.set_width(1.0, '1') # change width of our plot to the full domain
pc.save(fn) # save all plots
Sample Output
This is a simple recipe to show how to open a dataset and plot two slices of different fields side-by-side in a single figure, handling the colorbars manually. For more information, see get_multi_plot().
The latest version of this recipe can be downloaded here: http://hg.enzotools.org/cookbook/raw-file/tip/recipes/multi_plot.py .
from yt.mods import * # set up our namespace
import matplotlib.colorbar as cb
fn = "RedshiftOutput0005" # parameter file to load
orient = 'horizontal'
pf = load(fn) # load data
# There's a lot in here:
# From this we get a containing figure, a list-of-lists of axes into which we
# can place plots, and some axes into which we'll put colorbars.
# We feed it:
# Number of plots on the x-axis, number of plots on the y-axis, and how we
# want our colorbars oriented. (This governs where they will go, too.)
# bw is the base-width in inches; 4 is about right for most cases.
fig, axes, colorbars = raven.get_multi_plot(2, 1, colorbar=orient, bw=4)
# We'll use a plot collection, just for convenience's sake
pc = PlotCollection(pf, center=[0.5, 0.5, 0.5])
# Now we add a slice and set the colormap of that slice, but note that we're
# feeding it an axes -- the zeroth row, the zeroth column, and telling the plot
# "Don't make a colorbar." We'll make one ourselves.
p = pc.add_slice("Density", 0, figure = fig, axes = axes[0][0], use_colorbar=False)
p.set_cmap("bds_highcontrast") # this is our colormap
# We do this again, but this time we take the 1-index column.
p = pc.add_slice("Temperature", 0, figure=fig, axes=axes[0][1], use_colorbar=False)
p.set_cmap("hot") # a different colormap
pc.set_width(5.0, 'mpc') # change width of both plots
# Each 'p' is a plot -- this is the Density plot and the Temperature plot.
# Each 'cax' is a colorbar-container, into which we'll put a colorbar.
# zip means, give these two to me together.
for p, cax in zip(pc.plots, colorbars):
    # Now we make a colorbar, using the 'image' attribute of the plot.
    # 'image' is usually not accessed; we're making a special exception here,
    # though. 'image' will tell the colorbar what the limits of the data are.
    cbar = cb.Colorbar(cax, p.image, orientation=orient)
    # Now, we have to do a tiny bit of magic -- we tell the plot what its
    # colorbar is, and then we tell the plot to set the label of that colorbar.
    p.colorbar = cbar
    p._autoset_label()
# And now we're done! Note that we're calling a method of the figure, not the
# PlotCollection.
fig.savefig("%s" % pf)
Sample Output
This recipe extends the previous one to a 3x2 grid of slices: two fields, each sliced along all three axes. For more information, see get_multi_plot().
The latest version of this recipe can be downloaded here: http://hg.enzotools.org/cookbook/raw-file/tip/recipes/multi_plot_3x2.py .
from yt.mods import * # set up our namespace
import matplotlib.colorbar as cb
fn = "RedshiftOutput0005" # parameter file to load
orient = 'horizontal'
pf = load(fn) # load data
# There's a lot in here:
# From this we get a containing figure, a list-of-lists of axes into which we
# can place plots, and some axes into which we'll put colorbars.
# We feed it:
# Number of plots on the x-axis, number of plots on the y-axis, and how we
# want our colorbars oriented. (This governs where they will go, too.)
# bw is the base-width in inches; 4 is about right for most cases.
fig, axes, colorbars = raven.get_multi_plot(2, 3, colorbar=orient, bw=4)
# We'll use a plot collection, just for convenience's sake
pc = PlotCollection(pf, center=[0.5, 0.5, 0.5])
# Now we follow the method of "multi_plot.py" but we're going to iterate
# over the rows, one row per axis of slicing.
for ax in range(3):
    p = pc.add_slice("Density", ax, figure=fig, axes=axes[ax][0],
        use_colorbar=False)
    p.set_cmap("bds_highcontrast") # this is our colormap
    p.set_zlim(5e-32, 1e-29)
    # We do this again, but this time we take the 1-index column.
    p = pc.add_slice("Temperature", ax, figure=fig, axes=axes[ax][1],
        use_colorbar=False)
    p.set_zlim(1e3, 3e4) # Set this so it's the same for all.
    p.set_cmap("hot") # a different colormap
pc.set_width(5.0, 'mpc') # change width of all the plots
# Each 'p' is a plot -- the Density and Temperature plots for each axis.
# Each 'cax' is a colorbar-container, into which we'll put a colorbar.
# zip means, give these two to me together. Note that it cuts off after the
# shortest iterator is exhausted, in this case pc.plots.
for p, cax in zip(pc.plots, colorbars):
    # Now we make a colorbar, using the 'image' attribute of the plot.
    # 'image' is usually not accessed; we're making a special exception here,
    # though. 'image' will tell the colorbar what the limits of the data are.
    cbar = cb.Colorbar(cax, p.image, orientation=orient)
    # Now, we have to do a tiny bit of magic -- we tell the plot what its
    # colorbar is, and then we tell the plot to set the label of that colorbar.
    p.colorbar = cbar
    p._autoset_label()
# And now we're done! Note that we're calling a method of the figure, not the
# PlotCollection.
fig.savefig("%s_3x2" % pf)
Sample Output
This is a recipe to sit inside a directory and plot a phase diagram for every one of the outputs in that directory.
If run with mpirun and the --parallel flag, this will take advantage of multiple processors.
The latest version of this recipe can be downloaded here: http://hg.enzotools.org/cookbook/raw-file/tip/recipes/time_series_phase.py .
from yt.mods import * # set up our namespace
# this means get all the parameter files that it can autodetect and then supply
# them as parameter file objects to the loop.
for pf in all_pfs(max_depth=2):
    # We create a plot collection to hold our plot
    # If we don't specify the center, it will look for one -- but we don't
    # really care where it's centered for this plot.
    pc = PlotCollection(pf, center=[0.5, 0.5, 0.5])
    # Now we add a phase plot of a sphere with radius 1.0 in code units.
    # If your domain is not 0..1, then this may not cover it completely.
    p = pc.add_phase_sphere(1.0, '1', ["Density", "Temperature", "CellMassMsun"],
        weight=None, lazy_reader=True,
        x_bins=128, x_bounds=(1e-32, 1e-24),
        y_bins=128, y_bounds=(1e2, 1e7))
    # We've over-specified things -- but this will help ensure we have constant
    # bounds. lazy_reader gives it the go-ahead to run in parallel, and we
    # have asked for 128 bins from 1e-32 .. 1e-24 in Density-space and 128 bins
    # between 1e2 and 1e7 in Temperature space. This will lead to very fine
    # points of much lower mass, which is okay. You can reduce the number of
    # bins to get more mass in each bin. Additionally, weight=None means that
    # no averaging is done -- it just gets summed up, so the value of each bin
    # will be all the mass residing within that bin.
    # Now let's add a title with some fun information. p is the plot we were
    # handed previously. We will add the name of the parameter file and the
    # current redshift.
    p.modify["title"]("%s (z = %0.2f)" % (pf, pf["CosmologyCurrentRedshift"]))
    # Now let's save it out.
    pc.save()
Sample Output
This is a recipe to sit inside a directory and calculate a quantity for all of the outputs in that directory.
If run with mpirun and the --parallel flag, this will take advantage of multiple processors.
The latest version of this recipe can be downloaded here: http://hg.enzotools.org/cookbook/raw-file/tip/recipes/time_series_quantity.py .
from yt.mods import * # set up our namespace
# First set up our times and quantities lists
times = []
values = []
# this means get all the parameter files that it can autodetect and then supply
# them as parameter file objects to the loop.
for pf in all_pfs(max_depth=2):
    # Get the current time, convert to years from code units
    times.append(pf["InitialTime"] * pf["years"])
    # Now get a box containing the entire dataset
    data = pf.h.all_data()
    # Now we calculate the average. The first argument is the quantity to
    # average, the second is the weight.
    # "lazy_reader" has two meanings -- the first is that it will try to
    # operate on each individual grid, rather than a flattened array of all the
    # data. The second is that it will also distribute grids across multiple
    # processors, if multiple processors are in use.
    val = data.quantities["WeightedAverageQuantity"](
        "Temperature", "CellVolume", lazy_reader=True)
    values.append(val)
# Now we have our values and our time. We can plot this in pylab!
import pylab
pylab.semilogy(times, values, '-x')
pylab.xlabel(r"$\mathrm{Time\ [years]}$")
pylab.ylabel(r"$\mathrm{Average\ Temperature\ [K]}$")
pylab.savefig("average_temperature.png")
Sample Output
This is a recipe to show how to open a dataset and extract it to a file at a fixed resolution with no interpolation or smoothing. Additionally, this recipe shows how to insert a dataset into an external HDF5 file using h5py. For more information see covering_grid.
The latest version of this recipe can be downloaded here: http://hg.enzotools.org/cookbook/raw-file/tip/recipes/extract_fixed_resolution_data.py .
from yt.mods import *
# For this example we will use h5py to write to our output file.
import h5py
fn = "RedshiftOutput0005" # parameter file to load
pf = load(fn) # load data
# This is the resolution we will extract at
DIMS = 128
# Now, we construct an object that describes the data region and structure we
# want
cube = pf.h.covering_grid(2, # The level we are willing to extract to; higher
    # levels than this will not contribute to the data!
    # Now we set our spatial extent...
    left_edge=[0.0, 0.0, 0.0],
    right_edge=[1.0, 1.0, 1.0],
    # How many dimensions along each axis
    dims=[DIMS, DIMS, DIMS],
    # And any fields to preload (this is optional!)
    fields=["Density"])
# Now we open our output file using h5py
# Note that we open with 'w' which will overwrite existing files!
f = h5py.File("my_data.h5", "w")
# We create a dataset at the root node, calling it density...
f.create_dataset("/density", data=cube["Density"])
# We close our file
f.close()
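Reading the cube back in later is just as short; a sketch with plain h5py:
import h5py
f = h5py.File("my_data.h5", "r")
density = f["/density"][:] # a (128, 128, 128) array of Density values
f.close()
print density.shape, density.min(), density.max()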
This is a recipe for making radial profiles and projections of all of the halos within a cosmological simulation. See Halo Profiling for full documentation of the HaloProfiler.
The latest version of this recipe can be downloaded here: http://hg.enzotools.org/cookbook/raw-file/tip/recipes/run_halo_profiler.py .
import yt.extensions.HaloProfiler as HP
# Instantiate HaloProfiler for this dataset.
hp = HP.HaloProfiler("DD0242/DD0242")
# Add a filter to remove halos that have no profile points with overdensity
# above 200, and with virial masses less than 1e14 solar masses.
# Also, return the virial mass and radius to be written out to a file.
hp.add_halo_filter(HP.VirialFilter, must_be_virialized=True,
    overdensity_field='ActualOverdensity',
    virial_overdensity=200,
    virial_filters=[['TotalMassMsun', '>=', '1e14']],
    virial_quantities=['TotalMassMsun', 'RadiusMpc'])
# Add profile fields.
hp.add_profile('CellVolume', weight_field=None, accumulation=True)
hp.add_profile('TotalMassMsun', weight_field=None, accumulation=True)
hp.add_profile('Density', weight_field='CellMassMsun', accumulation=False)
hp.add_profile('Temperature', weight_field='CellMassMsun', accumulation=False)
# Make profiles and output filtered halo list to FilteredQuantities.out.
hp.make_profiles(filename="FilteredQuantities.out")
# Add projection fields.
hp.add_projection('Density', weight_field=None)
hp.add_projection('Temperature', weight_field='Density')
hp.add_projection('Metallicity', weight_field='Density')
# Make projections for all three axes using the filtered halo list and
# save data to hdf5 files.
hp.make_projections(save_cube=True, save_images=True,
    halo_list='filtered', axes=[0, 1, 2])
The following recipe will run the HaloProfiler (see Halo Profiling) on all the datasets in one simulation between z = 10 and 0.
The latest version of this recipe can be downloaded here: http://hg.enzotools.org/cookbook/raw-file/tip/recipes/simulation_halo_profiler.py .
import yt.extensions.EnzoSimulation as ES
import yt.extensions.HaloProfiler as HP
es = ES.EnzoSimulation("simulation_parameter_file", initial_redshift=10, final_redshift=0)
# Loop over all datasets in the requested time interval.
for output in es.allOutputs:
    # Instantiate HaloProfiler for this dataset.
    hp = HP.HaloProfiler(output['filename'])
    # Add a virialization filter.
    hp.add_halo_filter(HP.VirialFilter, must_be_virialized=True,
        overdensity_field='ActualOverdensity',
        virial_overdensity=200,
        virial_filters=[['TotalMassMsun', '>=', '1e14']],
        virial_quantities=['TotalMassMsun', 'RadiusMpc'])
    # Add profile fields.
    hp.add_profile('CellVolume', weight_field=None, accumulation=True)
    hp.add_profile('TotalMassMsun', weight_field=None, accumulation=True)
    hp.add_profile('Density', weight_field='CellMassMsun', accumulation=False)
    hp.add_profile('Temperature', weight_field='CellMassMsun', accumulation=False)
    # Make profiles and output filtered halo list to FilteredQuantities.out.
    hp.make_profiles(filename="FilteredQuantities.out")
    # Add projection fields.
    hp.add_projection('Density', weight_field=None)
    hp.add_projection('Temperature', weight_field='Density')
    hp.add_projection('Metallicity', weight_field='Density')
    # Make projections for all three axes using the filtered halo list and
    # save data to hdf5 files.
    hp.make_projections(save_cube=True, save_images=True,
        halo_list='filtered', axes=[0, 1, 2])
    del hp
The following recipe will make a light cone projection (see Light Cone Generator) of a single quantity over the redshift interval 0 to 0.4.
The latest version of this recipe can be downloaded here: http://hg.enzotools.org/cookbook/raw-file/tip/recipes/make_light_cone.py .
import yt.extensions.lightcone as LC
# All of the light cone parameters are given as keyword arguments at instantiation.
lc = LC.LightCone("128Mpc256grid_SFFB.param", initial_redshift=0.4,
    final_redshift=0.0, observer_redshift=0.0,
    field_of_view_in_arcminutes=450.0,
    image_resolution_in_arcseconds=60.0,
    use_minimum_datasets=True, deltaz_min=0.0,
    minimum_coherent_box_fraction=0.0,
    output_dir='LC', output_prefix='LightCone')
# Calculate a light cone solution and write out a text file with the details
# of the solution.
lc.calculate_light_cone_solution(seed=123456789, filename='lightcone.dat')
# This will be the field to be projected.
field = 'SZY'
# Make the light cone projection, save individual images of each slice
# and of the projection as well as an hdf5 file with the full data cube.
# Add a label of the slice redshift to each individual image.
# The return value is the PlotCollection that holds the image data for
# the final projection, allowing for additional customization of the
# final image.
pc = lc.project_light_cone(field, save_stack=True, save_slice_images=True,
    use_colorbar=False, add_redshift_label=True)
The following recipe will create 15 light cone projections that have at most 10% volume in common with each other.
The latest version of this recipe can be downloaded here: http://hg.enzotools.org/cookbook/raw-file/tip/recipes/unique_light_cones.py .
import yt.extensions.lightcone as LC
# Instantiate a light cone object as usual.
lc = LC.LightCone("128Mpc256grid_SFFB.param", initial_redshift=0.4,
    final_redshift=0.0, observer_redshift=0.0,
    field_of_view_in_arcminutes=120.0,
    image_resolution_in_arcseconds=60.0,
    use_minimum_datasets=True, deltaz_min=0.0,
    minimum_coherent_box_fraction=0.0,
    output_dir='LC', output_prefix='LightCone')
# Try to find 15 solutions that have at most 10% volume in
# common and give up after 50 consecutive failed attempts.
# The recycle=True setting tells the code to first attempt
# to use recycled solutions before trying completely
# independent solutions.
LC.find_unique_solutions(lc, max_overlap=0.10, failures=50,
    seed=123456789, recycle=True,
    solutions=15, filename='unique.dat')
# Make light cone projections with each of the random seeds
# found above. All output files will be written with unique
# names based on the random seed numbers. All keyword arguments
# accepted by project_light_cone can be given here as well.
field = 'SZY'
LC.project_unique_light_cones(lc, 'unique.dat', field)
The following recipe uses the HaloProfiler to locate halos of at least 10^14 solar masses in the sampled volume and create a boolean mask that covers them with circles whose radii are given by the virial radius of each halo. A text file is written out detailing the x and y positions in the light cone projection of all the halos in the mask, their redshifts, virial radii, and virial masses.
The latest version of this recipe can be downloaded here: http://hg.enzotools.org/cookbook/raw-file/tip/recipes/light_cone_halo_mask.py .
import yt.extensions.lightcone as LC
import yt.extensions.HaloProfiler as HP
# Instantiate a light cone object as usual.
lc = LC.LightCone("128Mpc256grid_SFFB.param", initial_redshift=0.4,
    final_redshift=0.0, observer_redshift=0.0,
    field_of_view_in_arcminutes=600.0,
    image_resolution_in_arcseconds=60.0,
    use_minimum_datasets=True, deltaz_min=0.0,
    minimum_coherent_box_fraction=0.0,
    output_dir='LC', output_prefix='LightCone')
# Calculate the light cone solution.
lc.calculate_light_cone_solution(seed=123456789, filename='lightcone.dat')
# The list of halos to be masked out will come from the HaloProfiler.
# Keyword arguments to be given to the halo profiler can be specified
# in the form of a dictionary.
halo_profiler_kwargs = {'halo_list_file': 'HopAnalysis.out'}
# Any actions the HaloProfiler is to perform are given in a list.
halo_profiler_actions = []
# Each list item contains a dictionary with entries for the function to
# be called ("function"), the arguments of the function ("args"), and the
# keyword arguments of the function ("kwargs").
# This item will add a virial filter.
halo_profiler_actions.append({'function': HP.HaloProfiler.add_halo_filter,
    'args': [HP.VirialFilter],
    'kwargs': {'must_be_virialized': True,
               'overdensity_field': 'ActualOverdensity',
               'virial_overdensity': 200,
               'virial_filters': [['TotalMassMsun', '>', '1e14']],
               'virial_quantities': ['TotalMassMsun', 'RadiusMpc']}})
# This item will call the make_profiles method to get the filtered halo list.
halo_profiler_actions.append({'function': HP.HaloProfiler.make_profiles,
    'kwargs': {'filename': "virial_filter.out"}})
# Specify the desired halo list is the filtered list.
# If 'all' is given instead, the full list will be used.
halo_list = 'filtered'
# Get the halo list for the active solution of this light cone using
# the HaloProfiler settings set up above.
# Write the boolean map to an hdf5 file called 'halo_mask.h5'.
# Write a text file detailing the location, redshift, radius, and mass
# of each halo in light cone projection.
lc.get_halo_mask(mask_file='halo_mask.h5', map_file='halo_map.out',
    halo_profiler_kwargs=halo_profiler_kwargs,
    halo_profiler_actions=halo_profiler_actions,
    halo_list=halo_list)
# This will be the field to be projected.
field = 'SZY'
# Make the light cone projection and apply the halo mask.
pc = lc.project_light_cone(field, save_stack=True, save_slice_images=True,
    add_redshift_label=True, apply_halo_mask=True)
This recipe shows how to volume render a dataset. There are a number of knobs to twiddle and some rough edges, and the process is still very much in beta. See Volume Rendering for more information.
The latest version of this recipe can be downloaded here: http://hg.enzotools.org/cookbook/raw-file/tip/recipes/simple_volume_rendering.py .
from yt.mods import * # set up our namespace
import yt.extensions.volume_rendering as vr
fn = "RedshiftOutput0005" # parameter file to load
pf = load(fn) # load data
dd = pf.h.all_data()
# Get the bounds of density
mi, ma = dd.quantities["Extrema"]("Density")[0]
# We supply the min/max we want the function to cover, in log.
# For this dataset it's -31 and -27.
tf = vr.ColorTransferFunction((-31, -27))
# Now we add some Gaussians on. Work is underway to transform this into a
# graphical user interface, and the initial steps can be found in
# transfer_function_widget.py .
# The first argument is where to center the Gaussian, the second is the width
# of the Gaussian, and the last is the RGBA values.
tf.add_gaussian(-30.0, 0.002, [0.3, 0.1, 0.2, 0.2])
tf.add_gaussian(-29.5, 0.001, [0.0, 0.1, 0.2, 0.3])
tf.add_gaussian(-29.5, 0.003, [0.5, 0.1, 0.1, 0.5])
tf.add_gaussian(-29.0, 0.008, [0.5, 0.5, 0.5, 1.0])
# Now we need a center of our volume to render. Here we'll just use
# 0.5,0.5,0.5, because volume renderings are not periodic.
c = [0.5, 0.5, 0.5]
# Our image plane will be normal to some vector. For things like collapsing
# objects, you could set it the way you would a cutting plane -- but for this
# dataset, we'll just choose an off-axis value at random. This gets normalized
# automatically.
L = [0.5, 0.2, 0.7]
# Our "width" is the width of the image plane as well as the depth -- so we set
# it to be 0.25 so we get half the domain.
W = 0.25
# Now we decide how big an image we want. 512x512 should be sufficient.
Nvec = 512
# Now we call the ray caster, which handles the rest.
# Note that we feed it whole_box, which means that it won't apply any cuts to
# the considered grids. This may be unnecessary for most applications.
grids, image, vectors, norm_vec, pos = vr.direct_ray_cast(
    pf, L, c, W, Nvec, tf, whole_box=True)
# Now we have "image", which is (Nvec, Nvec, 5): the first three channels are
# R, G, B, then the integrated alpha, and then the dt traversed. We strip off
# the A and T values and rescale between 0 and 1.
to_plot = image[:,:,:3]
to_plot = (to_plot - to_plot.min()) / (to_plot.max() - to_plot.min())
# We now massage pylab so that the image fills the figure exactly, with no borders.
import pylab
pylab.clf()
fig = pylab.gcf()
fig.set_dpi(100)
fig.set_size_inches((Nvec/100.0, Nvec/100.0))
fig.subplots_adjust(left=0.0, right=1.0, bottom=0.0, top=1.0,
    wspace=0.0, hspace=0.0)
# Now we just plot and save.
pylab.imshow(to_plot, interpolation='nearest')
pylab.savefig("%s_volume_rendered.png" % pf)
Sample Output