The pRender node converts the particle system to either an image or geometry. The default is a 3D particle system, which must be connected to a Renderer 3D to produce an image. This allows the particles to be integrated with other elements in a 3D scene before they are rendered.
pRender Node Inputs
The pRender node has one orange input, a green camera input, and a blue effects mask input. Like most particle nodes, the orange input accepts only other particle nodes. A green bitmap or mesh input appears on the node when you set the Region menu in the Region tab to either Bitmap or Mesh.
- Input: The orange input takes the output of other particle nodes.
- Camera Input: The optional green camera input accepts a Camera 3D node directly, or a 3D scene with a camera connected, which is used to frame the particles during rendering.
- Effect Mask: The optional blue input expects a mask shape created by polylines, basic primitive shapes, paint strokes, or bitmaps from other tools. Connecting a mask to this input for 2D particles crops the output of the particles so they are seen only within the mask.
pRender Node Setup
The pRender node is always placed at the end of a particle branch. If the pRender is set to 2D, then the output connects to other 2D nodes like a Merge node. If the pRender is set to 3D, the output connects to a 3D node like a Merge 3D.
Output Mode (2D/3D)
While the pRender defaults to 3D output, it can be made to render a 2D image instead. This is done with the 3D and 2D buttons on the Output Mode control. If the pRender is not connected to a 3D-only or 2D-only node, you can also switch it by selecting View > 2D Viewer from the viewer’s pop-up menu.
In 3D mode, the only controls in the pRender node that have any effect are Restart, Pre-Roll and Automatic Pre-Roll, Sub-Frame Calculation Accuracy, and Pre-Generate Frames. The remaining controls affect 2D particle renders only. The pRender node also has a Camera input on the node tree that allows the connection of a Camera 3D node. This can be used in both 2D and 3D modes to control the viewpoint used to render the output image.
Render and the Viewers
When the pRender node is selected in a node tree, all the onscreen controls from Particle nodes connected to it are presented in the viewers. This provides a fast, easy-to-modify overview of the forces applied to the particle system as a whole.
Particle nodes generally need to know the position of each particle on the previous frame before they can calculate the effect of the forces applied to them on the current frame. As a result, manually changing the current time by anything other than single-frame intervals is likely to produce an inaccurate image.
The controls here are used to help accommodate this by providing methods of calculating the intervening frames.
Restart

This control also works in 3D. Clicking the Restart button restarts the particle system at the current frame, removing any particles created up to that point and starting the system from scratch.
Pre-Roll

This control also works in 3D. Clicking this button causes the particle system to recalculate, starting from the beginning of the render range up to the current frame. It does not render the image; it only calculates the position of each particle. This provides a relatively quick way to ensure that the particles displayed in the viewers are correctly positioned.
If the pRender node is displayed when the Pre-Roll button is selected, the progress of the pre-roll is shown in the viewer, with each particle drawn in Point style only.
Automatic Pre-Roll

Selecting the Automatic Pre-Roll checkbox causes the particle system to automatically pre-roll the particles to the current frame whenever the current frame changes. This removes the need to select the Pre-Roll button manually when advancing through time in jumps larger than a single frame. The progress of the particle system during an Automatic Pre-Roll is not displayed in the viewers, to prevent distracting visual disruptions.
Pre-Roll is necessary because the state of a particle system depends entirely on the last known positions of its particles. If the current time changes to a frame for which the previous frame's particle state is unknown, the display of the particles is calculated from the last known positions, producing inaccurate results.
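This sequential dependency can be sketched in a few lines of plain Python (a toy model, not Fusion's implementation or API): the only way to know the state at a given frame is to step the simulation through every frame before it.

```python
import random

def simulate(target_frame, seed=42):
    """Step a toy particle system one frame at a time ("pre-roll").

    Each frame moves existing particles and emits one new particle,
    so the state at target_frame depends on every frame before it.
    """
    rng = random.Random(seed)
    particles = []  # each particle is [position, velocity]
    for _ in range(target_frame + 1):
        for p in particles:
            p[0] += p[1]                              # advance existing particles
        particles.append([0.0, rng.uniform(0.05, 0.15)])  # emit a new one
    return particles

# Jumping straight to frame 60 still requires recomputing frames 0-59;
# there is no way to evaluate frame 60 in isolation.
state = simulate(60)
```

This is why jumping the playhead forward without a pre-roll leaves the system showing only the particles it already knew about.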
To see how this works:

- Add a pEmitter and a pRender node to the composition.
- View the pRender in one of the viewers.
- Set the Velocity of the particles to 0.1.
- Place the pEmitter on the left edge of the screen.
- Set the Current Frame to 0.
- Set the render range to 0–100 and press the Play button.
- Observe how the particle system behaves.
- Stop the playback and return the current time to frame 0.
- In the pRender node, disable the Automatic Pre-Roll option.
- Use the current time number field to jump to frame 10, and then to frames 60 and 90.
Notice how the particle system only adds to the particles it has already created and does not try to create the particles that would have been emitted in the intervening frames. Try selecting the Pre-Roll button in the pRender node. Now the particle system state is represented correctly. For simple, fast-rendering particle systems, it is recommended to leave the Automatic Pre-Roll option on. For slower particle systems with long time ranges, it may be desirable to only Pre-Roll manually, as required.
Only Render in Hi-Q
Selecting this checkbox overrides the style of the particles whenever the Hi-Q checkbox is deselected, producing only fast-rendering Point-style particles. This is useful when working with large quantities of slow, Image-based or Blob-style particles. To see the particles as they would appear in a final render, simply enable the Hi-Q checkbox.
View

This drop-down menu determines the position of the camera view in a 3D particle system. The default option, Scene (Perspective), renders the particle system from the perspective of a virtual camera, whose position can be modified using the controls in the Scene tab. The other options provide orthographic views of the front, top, and side of the particle system.
It is important to realize that the position of the onscreen controls for Particle nodes is unaffected by this control. In 2D mode, the onscreen controls are always drawn as if the viewer were showing the front orthographic view; in 3D mode, the controls are positioned correctly at all times.
The View setting is ignored if a Camera 3D node is connected to the pRender node’s Camera input on the node tree, or if the pRender is in 3D mode.
pRender Node Controls
Blur, Glow, and Blur Blend
When generating 2D particles, these sliders apply a Gaussian blur, glow, and blur blending to the image as it is rendered, which can be used to soften and blend the particles. The result is no different from adding a Blur node after the pRender node in the node tree.
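The blurring step can be sketched in miniature as a convolution with a normalized kernel (plain Python, one image row, illustrative only; Fusion applies a full 2D Gaussian blur):

```python
def blur_1d(row, kernel):
    """Convolve one image row with a normalized blur kernel,
    clamping samples at the edges. A stand-in for the 2D blur
    the Blur slider applies to the rendered particle image."""
    n = len(row)
    half = len(kernel) // 2
    out = []
    for i in range(n):
        acc = 0.0
        for k, weight in enumerate(kernel):
            j = min(max(i + k - half, 0), n - 1)  # clamp at image edges
            acc += weight * row[j]
        out.append(acc)
    return out

kernel = [0.25, 0.5, 0.25]       # small Gaussian-like kernel, sums to 1
row = [0.0, 0.0, 1.0, 0.0, 0.0]  # a single bright particle
soft = blur_1d(row, kernel)      # brightness spreads to neighboring pixels
```

Because the kernel is normalized, the total brightness is preserved while each particle's energy spreads outward, which is what visually softens and blends the particles.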
Sub-Frame Calculation Accuracy
This determines the number of sub-samples taken between frames when calculating the particle system. Higher values increase the accuracy of the calculation but also increase the time required to render the particle system.
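Why sub-samples matter can be shown with a toy semi-implicit Euler integrator (plain Python, not Fusion's solver): the more sub-steps per frame, the closer the integrated motion gets to the analytic result.

```python
def advance_one_frame(x, v, accel, subsamples):
    """Advance a particle one frame using Euler sub-steps of 1/subsamples.

    Velocity is updated before position (semi-implicit Euler); more
    sub-samples converge toward the analytic trajectory.
    """
    dt = 1.0 / subsamples
    for _ in range(subsamples):
        v += accel * dt
        x += v * dt
    return x, v

# Particle starting at rest under constant acceleration for one frame.
exact = 0.5 * 9.8                                  # analytic: x = a*t^2 / 2
coarse, _ = advance_one_frame(0.0, 0.0, 9.8, 1)    # one sub-sample per frame
fine, _ = advance_one_frame(0.0, 0.0, 9.8, 16)     # sixteen sub-samples
```

The single-sample result overshoots badly, while sixteen sub-samples land much nearer the analytic value, at sixteen times the per-frame cost.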
Pre-Generate Frames

This control causes the particle system to pre-generate a set number of frames before its first valid frame, giving the particle system an initial state from which to start.
A good example of when this might be useful is in a shot where particles are used to create the smoke rising from a chimney. Set Pre-Generate Frames to a number high enough to ensure that the smoke is already present in the scene before the render begins, rather than having it just starting to emerge from the emitter for the first few frames.
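The effect can be sketched with a toy emitter (plain Python, illustrative only; the function name and emission rate are invented for the example):

```python
def particle_count(frame, pre_generate=0, emit_per_frame=5):
    """Toy emitter: total particles alive at a given render frame.

    Pre-generated frames run the emitter before frame 0, so the first
    rendered frame already contains an established particle system.
    """
    frames_simulated = frame + 1 + pre_generate
    return frames_simulated * emit_per_frame

# Without pre-generation, the chimney smoke is just starting at frame 0;
# with 50 pre-generated frames, it is already well established.
cold_start = particle_count(0)                     # 5 particles
established = particle_count(0, pre_generate=50)   # 255 particles
```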
Kill Particles That Leave the View
Selecting this checkbox automatically destroys any particles that leave the visible boundaries of the image, which can help to speed render times. Particles destroyed in this fashion never return, regardless of any external forces acting upon them.
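Conceptually this is a one-way filter applied every frame, as in this sketch (plain Python, not Fusion's implementation):

```python
def kill_offscreen(particles, width, height):
    """Destroy particles whose (x, y) position lies outside the image.

    Killed particles are simply dropped and never return, reducing
    the number of particles that must be simulated and rendered.
    """
    return [
        p for p in particles
        if 0.0 <= p[0] < width and 0.0 <= p[1] < height
    ]

particles = [(10.0, 20.0), (-5.0, 50.0), (640.0, 100.0), (320.0, 240.0)]
alive = kill_offscreen(particles, 640, 480)  # the two offscreen ones are culled
```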
Generate Z Buffer
Selecting this checkbox causes the pRender node to produce a Z Buffer channel in the image. The depth of each particle is represented in the Z Buffer. This channel can then be used for additional depth operations like Depth Blur, Depth Fog, and Downstream Z Merging.
Enabling this option is likely to increase the render times for the particle system dramatically.
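Conceptually, a Z buffer for point particles keeps the depth of the nearest sample at every pixel, as in this sketch (plain Python, not Fusion's renderer):

```python
def rasterize_points(points, width, height):
    """Render point particles into a color buffer plus a Z buffer.

    Each Z buffer pixel holds the depth of the nearest particle drawn
    there; downstream depth tools (Depth Blur, Depth Fog, Depth Merge)
    can then use that channel.
    """
    inf = float("inf")
    color = [[0.0] * width for _ in range(height)]
    zbuf = [[inf] * width for _ in range(height)]
    for x, y, z, brightness in points:
        if 0 <= x < width and 0 <= y < height and z < zbuf[y][x]:
            zbuf[y][x] = z          # the nearer particle wins the pixel
            color[y][x] = brightness
    return color, zbuf

# Two particles land on the same pixel; the nearer one (z = 2.0) wins.
points = [(1, 1, 5.0, 0.5), (1, 1, 2.0, 1.0)]
color, zbuf = rasterize_points(points, 4, 4)
```

The extra per-pixel bookkeeping hints at why enabling the option increases render times.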
Depth Merge Particles
Enabling this option causes the particles to be merged using Depth Merge techniques, rather than layer-based techniques.
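The difference between the two strategies can be sketched as follows (plain Python, illustrative only): a layer-based merge lets later layers paint over earlier ones, while a depth merge keeps whichever sample is nearest the camera.

```python
def layer_merge(layers):
    """Layer-based: later layers simply paint over earlier ones."""
    result = {}
    for layer in layers:
        for pixel, sample in layer.items():
            result[pixel] = sample
    return result

def depth_merge(layers):
    """Depth Merge: at each pixel the sample nearest the camera wins,
    regardless of the order in which the layers arrive."""
    result = {}
    for layer in layers:
        for pixel, (value, depth) in layer.items():
            if pixel not in result or depth < result[pixel][1]:
                result[pixel] = (value, depth)
    return result

near = {(0, 0): ("near", 1.0)}
far = {(0, 0): ("far", 9.0)}
# Layer order puts "far" on top; depth order correctly keeps "near".
painted = layer_merge([near, far])
merged = depth_merge([near, far])
```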
pRender Node Scene Tab
Z Clip

The Z Clip control sets a clipping plane in front of the camera. Particles that cross this plane are clipped, preventing them from striking the virtual lens of the camera and dominating the scene.
pRender Node Grid Tab
These controls do not apply to 3D particles.
The grid is a helpful, non-rendering display guide used to orient the 2D particles in 3D space. Like a center crosshair, the grid never appears in renders. The width, depth, number of lines, and grid color can be set using the controls in this tab.
pRender Node Image Tab
The controls in this tab are used to set the resolution, color depth, and pixel aspect of the rendered image produced by the node.
Process Mode

Use this menu to select the Fields Processing mode used by Fusion to render changes to the image. The default option is determined by the Has Fields checkbox in the Frame Format preferences.
Use Frame Format Settings
When this checkbox is selected, the width, height, and pixel aspect of the images rendered by the node are locked to the values defined in the composition's Frame Format preferences. If the Frame Format preferences change, the resolution of the image produced by the node changes to match. Disabling this option can be useful when building a composition at a different resolution than the eventual target resolution of the final render.
Width and Height

This pair of controls sets the width and height, in pixels, of the image rendered by the node.
Pixel Aspect

This control specifies the pixel aspect ratio of the rendered particles. An aspect ratio of 1:1 generates square pixels with the same dimensions on each side (like a computer monitor), while an aspect of 0.9:1 creates slightly rectangular pixels (like an NTSC monitor).
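The arithmetic behind pixel aspect is simple; a small sketch (hypothetical helper name, plain Python) shows how it changes the displayed width of a row of pixels:

```python
def display_width(pixel_width, pixel_aspect):
    """Display width, in square-pixel units, of a row of pixels
    with the given pixel aspect ratio (width / height per pixel)."""
    return pixel_width * pixel_aspect

# Square 1:1 pixels: a 720-pixel row displays 720 units wide.
square = display_width(720, 1.0)
# 0.9:1 pixels are narrower than tall, so the same 720-pixel row
# displays narrower when shown on square pixels.
ntsc = display_width(720, 0.9)
```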
Depth

The Depth menu sets the pixel color depth of the particles. 32-bit pixels require four times the memory of 8-bit pixels but have far greater color accuracy. Float pixels allow high-dynamic-range values outside the normal 0–1 range, for representing colors brighter than white or darker than black.
Source Color Space
You can use the Source Color Space menu to set the Color Space of the footage to help achieve a linear workflow. Unlike the Gamut tool, this doesn't perform any actual color space conversion; rather, it adds the source space data into the metadata, if that metadata doesn't already exist. The metadata can then be used downstream by a Gamut tool with the From Image option, or in a Saver, if explicit output spaces are defined there. There are two options to choose from:
- Auto: Automatically reads and passes on the metadata that may be in the image.
- Space: Displays a Color Space Type menu where you can choose the correct color space of the image.
Source Gamma Space
Using the Curve type menu, you can set the Gamma Space of the footage and choose to remove it by way of the Remove Curve checkbox when working in a linear workflow. There are three choices in the Curve type menu:
- Auto: Automatically reads and passes on the metadata that may be in the image.
- Space: Displays a Gamma Space Type menu where you can choose the correct gamma curve of the image.
- Log: Brings up the Log/Lin settings, similar to the Cineon tool.
Depending on the selected gamma space, or on the gamma space found in Auto mode, the gamma curve is removed from the material, or a log-lin conversion is performed on it, effectively converting the material to a linear output space.
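A minimal sketch of the gamma-removal step, assuming a simple 2.2 power-law curve (real gamma spaces such as sRGB, and log curves such as Cineon, use more involved formulas):

```python
def remove_gamma(value, gamma=2.2):
    """Convert a display-referred value to linear light by removing a
    power-law gamma curve (2.2 assumed here purely for illustration)."""
    return value ** gamma

def apply_gamma(value, gamma=2.2):
    """Re-apply the curve after linear processing, e.g. in a Saver."""
    return value ** (1.0 / gamma)

linear = remove_gamma(0.5)        # mid-gray is darker in linear light
round_trip = apply_gamma(linear)  # removing then re-applying is lossless
```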
Motion Blur

As with other 2D nodes in Fusion, motion blur is enabled from the Settings tab. You can set Quality, Shutter Angle, Sample Center, and Bias; the blur is applied to all moving particles.
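A sketch of how shutter-based motion blur samples a moving particle (plain Python; the parameter names mirror the Settings tab, but the sampling scheme here is a simplification, not Fusion's implementation):

```python
def shutter_samples(position_at, frame, quality=4,
                    shutter_angle=180.0, center=0.5):
    """Sample a particle's position across the shutter interval.

    A 360-degree shutter stays open for the whole frame, 180 degrees
    for half of it; 'center' places the interval within the frame.
    Averaging the samples smears fast-moving particles.
    """
    open_time = shutter_angle / 360.0
    start = frame + center - open_time / 2.0
    return [position_at(start + open_time * i / (quality - 1))
            for i in range(quality)]

# A particle moving 10 units per frame, sampled around frame 1.
samples = shutter_samples(lambda t: 10.0 * t, 1)
blurred = sum(samples) / len(samples)  # averaged, smeared position
```

Higher Quality takes more samples across the same interval, smoothing the smear at the cost of render time.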
About the Author
Justin Robinson is a Certified DaVinci Resolve, Fusion & Fairlight instructor who is known for simplifying concepts and techniques for anyone looking to learn any aspect of the video post-production workflow. Justin is the founder of JayAreTV, a training and premade-asset website offering affordable and accessible video post-production education. You can follow Justin on Twitter at @JayAreTV, on YouTube at JayAreTV, or on Facebook at MrJayAreTV.