Renderer 3D Node

The Renderer 3D node converts the 3D environment into a 2D image using either a default perspective camera or one of the cameras found in the scene. Every 3D scene in a composition terminates with at least one Renderer 3D node. The Renderer node includes a software and OpenGL render engine to produce the resulting image. Additional render engines may also be available via third-party plug-ins.

The software render engine uses the system’s CPU only to produce the rendered images. It is usually much slower than the OpenGL render engine, but produces consistent results on all machines, making it essential for renders that involve network rendering. The Software mode is required to produce soft shadows, and generally supports all available illumination, texture, and material features.

The OpenGL render engine uses the GPU on the graphics card to accelerate the rendering of the 2D images. The output may vary slightly from system to system, depending on the exact graphics card installed. The graphics card driver can also affect the results from the OpenGL renderer. The OpenGL render engine's speed makes it possible to provide customized supersampling and realistic 3D depth of field options. The OpenGL renderer cannot generate soft shadows; for soft shadows, the software renderer is recommended.

Like most nodes, the Renderer 3D node's motion blur settings can be found under the Common Controls tab. Be aware that scenes containing particle systems require that the Motion Blur settings on the pRender nodes exactly match the settings on the Renderer 3D node; otherwise, the subframe renders conflict, producing unexpected (and incorrect) results.

Renderer 3D Node Inputs

The Renderer 3D node has two inputs. The main scene input takes in the Merge 3D node or other 3D nodes that need to be converted to 2D, while the effect mask input limits the Renderer 3D output.

  • SceneInput: The orange scene input is a required input that accepts a 3D scene that you want to convert to 2D.
  • EffectMask: The blue effects mask input uses a 2D image to mask the output of the node.

Renderer 3D Node Setup

All 3D scenes must end with a Renderer 3D node. The Renderer 3D node is used to convert a 3D scene into a 2D image. Below, the Renderer 3D node takes the output of a Merge 3D node, and renders the 3D scene into a 2D image.

Renderer 3D Node Controls Tab

  • Camera
    The Camera menu is used to select which camera from the scene is used when rendering. The Default setting uses the first camera found in the scene. If no camera is located, the default perspective view is used instead.
  • Eye
    The Eye menu is used to configure rendering of stereoscopic projects. The Mono option ignores the stereoscopic settings in the camera. The Left and Right options translate the camera using the stereo Separation and Convergence options defined in the camera to produce either left- or right-eye outputs. The Stacked option places the two images one on top of the other instead of side by side.
  • Reporting
    The first two checkboxes in this section can be used to determine whether the node prints warnings and errors produced while rendering to the console. The second set of checkboxes tells the node whether it should abort rendering when a warning or error is encountered. The default for this node enables all four checkboxes.
  • Renderer Type
    This menu lists the available render engines. Fusion provides three: the software renderer, OpenGL renderer, and the OpenGL UV render engine. Additional renderers can be added via third-party plug-ins.

    All the controls found below this drop-down menu are added by the selected render engine and may change depending on the options available to each renderer, so each renderer is described in its own section below.
    • Software Controls
      • Output Channels
        Besides the usual Red, Green, Blue, and Alpha channels, the software renderer can also embed the following channels into the image. Enabling additional channels consumes additional memory and processing time, so these should be used only when required.
        • RGBA: This option tells the renderer to produce the Red, Green, Blue, and Alpha color channels of the image. These channels are required, and they cannot be disabled.
        • Z: This option enables rendering of the Z-channel. The pixels in the Z-channel contain a value that represents the distance of each pixel from the camera. Note that the Z-channel values cannot include anti-aliasing. In pixels where multiple depths overlap, the frontmost depth value is used for this pixel.
        • Coverage: This option enables rendering of the Coverage channel. The Coverage channel contains information about which pixels in the Z-buffer provide coverage (are overlapping with other objects). This helps nodes that use the Z-buffer to provide a small degree of anti-aliasing. The value of the pixels in this channel indicates, as a percentage, how much of the pixel is composed of the foreground object.
        • BgColor: This option enables rendering of the BgColor channel. This channel contains the color values from objects behind the pixels described in the Coverage channel.
        • Normal: This option enables rendering of the X, Y, and Z Normals channels. These three channels contain pixel values that indicate the orientation (direction) of each pixel in the 3D space. Each axis is represented by a color channel containing values in the range [–1, 1].
        • TexCoord: This option enables rendering of the U and V mapping coordinate channels. The pixels in these channels contain the texture coordinates of the pixel. Although texture coordinates are processed internally within the 3D system as three-component UVW, Fusion images store only UV components. These components are mapped into the Red and Green color channels.
        • ObjectID: This option enables rendering of the ObjectID channel. Each object in the 3D environment can be assigned a numeric identifier when it is created. The pixels in this floating-point image channel contain the values assigned to the objects that produced the pixel. Empty pixels have an ID of 0, and the channel supports values as high as 65534. Multiple objects can share a single Object ID. This buffer is useful for extracting mattes based on the shapes of objects in the scene.
        • MaterialID: This option enables rendering of the Material ID channel. Each material in the 3D environment can be assigned a numeric identifier when it is created. The pixels in this floating-point image channel contain the values assigned to the materials that produced the pixel. Empty pixels have an ID of 0, and the channel supports values as high as 65534. Multiple materials can share a single Material ID. This buffer is useful for extracting mattes based on a texture; for example, a mask containing all the pixels that comprise a brick texture.
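
As an illustration of how the ObjectID and MaterialID channels described above are used for mattes, here is a minimal Python/NumPy sketch that pulls a hold-out matte from an ObjectID buffer. It assumes the aux channel has already been loaded into a NumPy array (for example, from an EXR written by a Saver node); the buffer contents and the ID value are invented for the example.

```python
import numpy as np

# Illustrative sketch only: pulling a matte from a rendered ObjectID channel.
# Assumes the aux channel has already been loaded into a NumPy array (for
# example, from a multi-channel EXR written out by a Saver node). The buffer
# contents and the target ID value of 3 are invented for this example.

def matte_from_object_id(object_id: np.ndarray, target_id: int) -> np.ndarray:
    """Return a 0/1 matte covering every pixel produced by target_id."""
    # Empty pixels carry an ID of 0, so asking for target_id 0 would select
    # the background rather than an object.
    return (object_id == target_id).astype(np.float32)

# Tiny synthetic 4 x 4 ObjectID buffer standing in for a rendered frame.
object_id = np.array([
    [0, 0, 3, 3],
    [0, 3, 3, 3],
    [0, 3, 3, 0],
    [0, 0, 0, 0],
], dtype=np.float32)

print(matte_from_object_id(object_id, target_id=3))
```
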
      • Lighting
        • Enable Lighting: When the Enable Lighting checkbox is selected, objects are lit by any lights in the scene. If no lights are present, all objects are black.
        • Enable Shadows: When the Enable Shadows checkbox is selected, the renderer produces shadows, at the cost of some speed.
    • OpenGL Controls
      • Output Channels
        In addition to the usual Red, Green, Blue, and Alpha channels, the OpenGL render engine can also embed the following channels into the image. Enabling additional channels consumes additional memory and processing time, so these should be used only when required.
        • RGBA: This option tells the renderer to produce the Red, Green, Blue, and Alpha color channels of the image. These channels are required, and they cannot be disabled.
        • Z: This option enables rendering of the Z-channel. The pixels in the Z-channel contain a value that represents the distance of each pixel from the camera. Note that the Z-channel values cannot include anti-aliasing. In pixels where multiple depths overlap, the frontmost depth value is used for this pixel.
        • Normal: This option enables rendering of the X, Y, and Z Normals channels. These three channels contain pixel values that indicate the orientation (direction) of each pixel in the 3D space. Each axis is represented by a color channel containing values in the range [–1, 1].
        • TexCoord: This option enables rendering of the U and V mapping coordinate channels. The pixels in these channels contain the texture coordinates of the pixel. Although texture coordinates are processed internally within the 3D system as three-component UVW, Fusion images store only UV components. These components are mapped into the Red and Green color channels.
        • ObjectID: This option enables rendering of the ObjectID channel. Each object in the 3D environment can be assigned a numeric identifier when it is created. The pixels in this floating-point image channel contain the values assigned to the objects that produced the pixel. Empty pixels have an ID of 0, and the channel supports values as high as 65534. Multiple objects can share a single Object ID. This buffer is useful for extracting mattes based on the shapes of objects in the scene.
        • MaterialID: This option enables rendering of the Material ID channel. Each material in the 3D environment can be assigned a numeric identifier when it is created. The pixels in this floating-point image channel contain the values assigned to the materials that produced the pixel. Empty pixels have an ID of 0, and the channel supports values as high as 65534. Multiple materials can share a single Material ID. This buffer is useful for extracting mattes based on a texture—for example, a mask containing all the pixels that comprise a brick texture.
      • Anti-Aliasing
        Anti-aliasing can be enabled for each channel through the Channel menu. It produces an output image with higher-quality anti-aliasing by brute force: rendering a much larger image and then rescaling it down to the target resolution. The exact same result could be achieved by rendering a larger image in the first place and then using a Resize node to bring it down to the desired resolution; however, the supersampling built into the renderer offers two distinct advantages over this method.

        The rendering is not restricted by memory or image size limitations. For example, consider the steps to create a float-16 1920 x 1080 image with 16x supersampling. Using the traditional Resize node would require first rendering the image with a resolution of 30720 x 17280, and then using a Resize to scale this image back down to 1920 x 1080. Simply producing the image would require nearly 4 GB of memory. When anti-aliasing is performed on the GPU, the OpenGL renderer can use tile rendering to significantly reduce memory usage.
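
As a quick check on the memory figure quoted above, the arithmetic can be written out directly; this sketch assumes a single four-channel float-16 frame buffer (2 bytes per channel) and ignores any additional buffers a real renderer would allocate.

```python
# Rough memory estimate for brute-force 16x supersampling of a float-16
# RGBA frame, matching the example above. Assumes a single frame buffer of
# 4 channels at 2 bytes each; a real renderer may allocate more than this.

width, height = 1920, 1080
supersample = 16                      # scale factor per axis
channels, bytes_per_channel = 4, 2    # RGBA at float-16

big_w, big_h = width * supersample, height * supersample   # 30720 x 17280
bytes_needed = big_w * big_h * channels * bytes_per_channel

print(f"{big_w} x {big_h} needs about {bytes_needed / 2**30:.2f} GiB")  # ~3.96 GiB
```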

        The GL renderer can perform the rescaling of the image directly on the GPU more quickly than the CPU can manage it. Generally, the more GPU memory the graphics card has, the faster the operation is performed.

        Interactively, Fusion skips the anti-aliasing stage unless the HiQ button is selected in the Time Ruler. Final quality renders always include supersampling, if it is enabled.

        Because of hardware limitations, point geometry (particles) and lines (locators) are always rendered at their original size, independent of supersampling. This means that these elements are scaled down from their original sizes, and likely appear much thinner than expected.

        Anti-Aliasing of Aux Channels in the OpenGL Renderer
        The reason Fusion supplies separate anti-aliasing options for color and aux channels in the Anti-Aliasing preset is that supersampling of color channels is quite a bit slower than supersampling of aux channels. You may find that a 1 x 3 LowQ/HiQ rate is sufficient for color, but for world position or Z, you may require 4 x 12 to get adequate results. Color anti-aliasing is slower because the shaders for RGBA can be 10x to even 100x or 1000x more complex, and because color is rendered with sorting enabled, while aux channels are rendered using the much faster Z-buffer method.

        Do not mistake anti-aliasing for improved quality. Anti-aliasing an aux channel does not mean it's better quality; in fact, anti-aliasing an aux channel in many cases can make the results much worse. The only aux channels we recommend enabling anti-aliasing on are WorldCoord and Z.
    • Enable (LowQ/HiQ)
      These two checkboxes are used to enable anti-aliasing of the rendered image.
    • Supersampling LowQ/HiQ Rate
      The LowQ and HiQ rates tell the OpenGL renderer how much to scale up the image internally. For example, if the rate is set to 4 and the OpenGL renderer is set to output a 1920 x 1080 image, internally a 7680 x 4320 image is rendered and then scaled back down to produce the target image. Set the multiplier higher to get better edge anti-aliasing at the expense of render time. Typically, 8 x 8 supersampling (64 samples per pixel) is sufficient to reduce most aliasing artifacts.

      The rate doesn’t exactly define the number of samples done per destination pixel; the width of the reconstruction filter used may also have an impact.
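
To make the relationship between the rate and the internal render concrete, here is a minimal sketch of the scaling arithmetic; the per-pixel sample count shown is only nominal, for the reason given above.

```python
# Nominal relationship between the supersampling rate, the internal render
# size, and the samples contributing to each output pixel. As noted above,
# the reconstruction filter width means the true per-pixel sample count is
# only approximately rate squared.

def supersampled_size(width: int, height: int, rate: int):
    internal = (width * rate, height * rate)
    nominal_samples_per_pixel = rate * rate
    return internal, nominal_samples_per_pixel

print(supersampled_size(1920, 1080, rate=4))   # ((7680, 4320), 16)
print(supersampled_size(1920, 1080, rate=8))   # ((15360, 8640), 64)
```
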
    • Filter Type
      When downsampling the supersized image, the pixels surrounding a given pixel are often used to give a more realistic result. There are various filters available for combining these pixels. More complex filters can give better results but are usually slower to calculate. The best filter for the job often depends on the amount of scaling and on the contents of the image itself.
      • Box: This is a simple interpolation scale of the image.
      • Bi-Linear (triangle): This uses a simplistic filter, which produces relatively clean and fast results.
      • Bi-Cubic (quadratic): This filter produces a nominal result. It offers a good compromise between speed and quality.
      • Bi-Spline (cubic): This produces better results with continuous tone images but is slower than Quadratic. If the images have fine detail in them, the results may be blurrier than desired.
      • Catmull-Rom: This produces good results with continuous tone images that are resized down, producing sharp results with finely detailed images.
      • Gaussian: This is very similar in speed and quality to Quadratic.
      • Mitchell: This is similar to Catmull-Rom but produces better results with finely detailed images. It is slower than Catmull-Rom.
      • Lanczos: This is very similar to Mitchell and Catmull-Rom but is a little cleaner and also slower.
      • Sinc: This is an advanced filter that produces very sharp, detailed results; however, it may produce visible "ringing" in some situations.
      • Bessel: This is similar to the Sinc filter but may be slightly faster.
  • Window Method
    The Window Method menu appears only when the reconstruction filter is set to Sinc or Bessel.
    • Hanning: This is a simple tapered window.
    • Hamming: Hamming is a slightly tweaked version of Hanning.
    • Blackman: A window with a more sharply tapered falloff.
  • Accumulation Effects
    Accumulation effects are used for creating depth of field effects. Enable both the Enable Accumulation Effects and Depth of Field checkboxes, and then adjust the Quality and Amount sliders.

    The blurrier you want the out-of-focus areas to be, the higher the quality setting you need. A low amount setting causes more of the scene to be in focus.

    The accumulation effects work in conjunction with the Focal Plane setting located in the Camera 3D node. Set the Focal Plane to the same distance from the camera as the subject you want to be in focus. Animating the Focal Plane setting creates rack focus effects.
  • Lighting
    • Enable Lighting: When the Enable Lighting checkbox is selected, objects are lit by any lights in the scene. If no lights are present, all objects are black.
    • Enable Shadows: When the Enable Shadows checkbox is selected, the renderer produces shadows, at the cost of some speed.
  • Texturing
    • Texture Depth: Lets you specify the bit depth of texture maps.
    • Warn about unsupported texture depths: Enables a warning if texture maps are in an unsupported bit depth that Fusion can’t process.
  • Lighting Mode
    The Per-vertex lighting model calculates lighting at each vertex of the scene’s geometry. This produces a fast approximation of the scene’s lighting but tends to produce blocky lighting on poorly tessellated objects. The Per-pixel method uses a different approach that does not rely on the detail in the scene’s geometry for lighting, so it generally produces superior results.

    Although per-pixel lighting with the OpenGL renderer produces results closer to those produced by the more accurate software renderer, it still has some disadvantages. The OpenGL renderer is less capable of dealing correctly with semi-transparency, soft shadows, and colored shadows, even with per-pixel lighting. The color depth of the rendering is limited by the capabilities of the graphics card in the system.
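
The practical difference between the two modes can be sketched with a toy diffuse-lighting example: per-vertex lighting evaluates the light only at the vertices and interpolates the results, while per-pixel lighting interpolates the inputs and evaluates the light at every sample. The values below are invented purely for illustration and do not reflect Fusion's internal lighting code.

```python
import numpy as np

# Toy comparison of per-vertex vs. per-pixel diffuse lighting along a single
# edge. The geometry, normals, and light position are invented purely to
# illustrate why per-vertex lighting looks blocky on coarse geometry; this is
# not Fusion's internal lighting code.

def diffuse(position, normal, light_pos):
    """Simple Lambertian term for a point light (no distance falloff)."""
    to_light = light_pos - position
    to_light = to_light / np.linalg.norm(to_light)
    n = normal / np.linalg.norm(normal)
    return max(float(np.dot(n, to_light)), 0.0)

# Flat edge from x = -1 to x = +1 facing +Y, with a light hovering just
# above the middle of the edge.
v0, v1 = np.array([-1.0, 0.0]), np.array([1.0, 0.0])
n0 = n1 = np.array([0.0, 1.0])
light = np.array([0.0, 0.2])

samples = np.linspace(0.0, 1.0, 5)   # five "pixels" across the edge

# Per-vertex: light the two vertices, then interpolate the lit values.
l0, l1 = diffuse(v0, n0, light), diffuse(v1, n1, light)
per_vertex = [(1 - t) * l0 + t * l1 for t in samples]

# Per-pixel: interpolate position and normal, then light every sample.
per_pixel = [diffuse((1 - t) * v0 + t * v1, (1 - t) * n0 + t * n1, light)
             for t in samples]

print([round(v, 2) for v in per_vertex])  # flat ~0.2 everywhere; misses the hot spot
print([round(v, 2) for v in per_pixel])   # rises to 1.0 over the middle of the edge
```
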
  • Transparency
    The OpenGL renderer reveals this control for selecting which ordering method to use when calculating transparency. A short sketch after this list illustrates why the ordering matters for semi-transparent objects.
    • Z Buffer (fast): This mode is extremely fast and is adequate for scenes containing only opaque objects. The speed of this mode comes at the cost of accurate sorting; only the objects closest to the camera are certain to be in the correct sort order. So, semi-transparent objects may not be shown correctly, depending on their ordering within the scene.
    • Sorted (accurate): This mode sorts all objects in the scene (at the expense of speed) before rendering, giving correct transparency.
    • Quick Mode: This experimental mode is best suited to scenes that almost exclusively contain particles.
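
As referenced above, here is a minimal sketch, with made-up values, of why ordering matters for semi-transparency: the "over" blend is not commutative, so blending two semi-transparent surfaces in the wrong order produces a different color than the correct back-to-front result.

```python
# Minimal illustration, with made-up values, of why draw order matters for
# semi-transparent surfaces: the "over" blend is not commutative, so the fast
# Z-buffer mode can blend in the wrong order, while Sorted always goes back
# to front. Single grayscale values stand in for full RGB colors.

def over(fg_color, fg_alpha, bg_color):
    """Blend a semi-transparent foreground over an already-composited background."""
    return fg_color * fg_alpha + bg_color * (1.0 - fg_alpha)

background = 0.0                    # black backdrop
near_color, near_alpha = 1.0, 0.5   # surface closest to the camera
far_color, far_alpha = 0.8, 0.5     # surface behind it

# Correct back-to-front order: far surface first, near surface last.
correct = over(near_color, near_alpha, over(far_color, far_alpha, background))

# Wrong order: near surface blended first, far surface drawn on top of it.
wrong = over(far_color, far_alpha, over(near_color, near_alpha, background))

print(correct, wrong)               # 0.7 vs 0.65 -- a visibly different result
```
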
  • Shading Model
    Use this menu to select a shading model to use for materials in the scene. Smooth is the shading model employed in the viewers, and Flat produces a simpler and faster shading model.
  • Wireframe
    Renders the whole scene as wireframe. This shows the edges and polygons of the objects. The edges are still shaded by the material of the objects.
  • Wireframe Anti-Aliasing
    Enables anti-aliasing for the Wireframe render.
  • OpenGL UV Renderer
    The OpenGL UV renderer is a special case render engine. It is used to take a model with existing textures and render it out to produce an unwound flattened 2D version of the model. Optionally, lighting can be baked in. This is typically done so you can then paint on the texture and reapply it.

Renderer 3D Node Image and Settings Tabs

The remaining controls for the Image and Settings tabs are common to many 3D nodes. These common controls are described in detail here.


About the Author

Justin Robinson is a Certified DaVinci Resolve, Fusion & Fairlight instructor who is known for simplifying concepts and techniques for anyone looking to learn any aspect of the video post-production workflow. Justin is the founder of JayAreTV, a training and pre-made asset website offering affordable and accessible video post-production education. You can follow Justin on Twitter at @JayAreTV, on YouTube at JayAreTV, or on Facebook at MrJayAreTV.
