Volume Rendering

New in version 1.6.

Warning

The volume renderer is beta software! It’s still being developed, it’s kind of slow, and the interface is still rough around the edges. Watch this space, though!

Status

As of this release, the volume renderer is usable, but there are quirks and some artifacting remains. We're working on improving it for the next release. A gallery of visualizations produced with it by Sam Skillman is available at http://casa.colorado.edu/~skillman/simulation_gallery/simulation_gallery.html .

Method

Direct ray casting through a volume enables the generation of new types of visualizations and images describing a simulation. yt now includes a facility for generating volume renderings via direct ray casting. This approach comes with several important caveats, and other tools may produce better images faster. However, the ability to create volume renderings informed by other analysis mechanisms – for instance, halo location, angular momentum, or spectral energy distributions – is useful.

The volume rendering in yt follows a relatively straightforward approach.

  1. Create a transfer function, providing red, green, blue and alpha as a function of the variable of interest: f(v) \rightarrow (r,g,b,a).

  2. Generate vertex-centered data for all grids in the volume rendered domain.

  3. Partition all grids into non-overlapping, fully domain-tiling “bricks.” Each of these “bricks” contains the finest available data at any location.

  4. Order the bricks from back-to-front.

  5. Construct a plane of rays parallel to the image plane, with initial values set to zero, located at the back of the region to be rendered.

  6. For every brick, identify which rays intersect it. Each of these rays is then ‘cast’ through the brick.

    1. Every cell a ray intersects is sampled 5 times, and data values at each sampling point are trilinearly interpolated from the vertex-centered data.

    2. The value for the pixel corresponding to the current ray is updated with new values calculated by standard integration:

      v^{j+1}_{i} = v^{j}_{i} + \int_{v_0}^{v_1}f(v)_idv\Delta t

      where j and j+1 index the pixel value before and after passing through a sample, i is the color channel (red, green, or blue), f(v)_i is the transfer function for channel i, and \Delta t is the path length between samples.

  7. The image is returned to the user:

[Image: vr_sample.png – a sample volume rendering]
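The per-ray accumulation in steps 5–7 can be sketched in plain Python. This is a toy, emission-only illustration of the update rule above, not yt's actual implementation; the transfer function here is a hypothetical stand-in for a real one.

```python
def transfer_function(v):
    """Toy f(v) -> (r, g, b, a): red for low values, blue for high.

    A hypothetical stand-in for a real transfer function.
    """
    r = max(0.0, 1.0 - v)  # red fades out as v rises
    g = 0.0
    b = min(1.0, v)        # blue fades in
    a = 0.1                # constant opacity (unused in this emission-only sketch)
    return (r, g, b, a)

def cast_ray(samples, dt, tf=transfer_function):
    """Accumulate color along one ray, back to front.

    Implements v_i^{j+1} = v_i^j + f(v)_i * dt for each sample,
    where dt is the path length between samples. Like the equation
    in the text, this ignores alpha attenuation.
    """
    pixel = [0.0, 0.0, 0.0]  # initial values set to zero (step 5)
    for v in samples:        # back-to-front traversal (steps 4 and 6)
        r, g, b, _a = tf(v)
        for i, f_i in enumerate((r, g, b)):
            pixel[i] += f_i * dt
    return tuple(pixel)

# One ray passing through a region where the field rises from 0 to 1:
samples = [j / 9.0 for j in range(10)]
print(cast_ray(samples, dt=0.1))
```

In the real renderer this update runs for every ray intersecting every brick, with the field value at each sample obtained by trilinear interpolation rather than supplied directly.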

Step-by-step Example

This section of the documentation is still being written. In the meantime, the recipe Simple volume rendering walks through the process of volume rendering. The mailing list is a great place to ask questions if you have any!
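While that section is being finished, the trilinear interpolation used when sampling cells (step 6 above) can be sketched as follows. This is a standalone illustration of the technique, not yt's internal code.

```python
def trilinear(corners, x, y, z):
    """Interpolate a value inside a unit cell from its 8 vertex-centered
    corner values.

    corners[i][j][k] is the value at vertex (i, j, k), and (x, y, z)
    are fractional coordinates within the cell, each in [0, 1].
    """
    c = corners
    # Interpolate along x on each of the four x-aligned edges...
    c00 = c[0][0][0] * (1 - x) + c[1][0][0] * x
    c01 = c[0][0][1] * (1 - x) + c[1][0][1] * x
    c10 = c[0][1][0] * (1 - x) + c[1][1][0] * x
    c11 = c[0][1][1] * (1 - x) + c[1][1][1] * x
    # ...then along y...
    c0 = c00 * (1 - y) + c10 * y
    c1 = c01 * (1 - y) + c11 * y
    # ...then along z.
    return c0 * (1 - z) + c1 * z

# A cell whose vertex values equal i + j + k; trilinear interpolation
# reproduces linear fields exactly, so the center evaluates to 1.5.
cell = [[[i + j + k for k in (0, 1)] for j in (0, 1)] for i in (0, 1)]
print(trilinear(cell, 0.5, 0.5, 0.5))  # → 1.5
```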

