3D Nodes

Alembic Mesh 3D Node

At times, you may need to import 3D geometry from applications like Blender, Cinema4D, or Maya. One of the formats you can use for importing 3D geometry is the Alembic file format. This file type is a 3D scene interchange format that contains baked animation with its geometry. In other words, it eliminates the animation calculation times by embedding fixed, uneditable animation with 3D geometry. The animation is typically embedded using a point cache, which saves the dynamic data such as velocity after it has been calculated. Alembic objects can contain mesh geometry, cameras, points, UVs, normals, and baked animation.

You can import Alembic files (.abc) into Fusion in two ways:

  • Choose File > Import > Alembic Scene in Fusion or Fusion > Import > Alembic Scene in DaVinci Resolve’s Fusion page.
  • Add an AlembicMesh3D node to the Node Editor.

The first method is preferred: on their own, both the Alembic and FBX nodes import the entire model as one object, whereas the Import menu breaks down the model, lights, camera, and animation into a string of individual nodes. This makes it easy to edit, modify, and use subsections of the imported Alembic mesh. Also, transforms in the file are read into Fusion splines and into Transform 3D nodes, which get saved with the comp. Later, when reloading the comp, the transforms are loaded from the comp and not the Alembic file. Fusion handles the meshes differently, always reloading them from the Alembic file.

Arbitrary user data varies depending on the software creating the Alembic file, and therefore this type of metadata is mostly ignored.

Alembic Import Dialog

An Alembic Import dialog is displayed once you select the file to import.

The top half of the Import dialog displays information about the selected file, including the name of the plug-in/application that created the Alembic file, the version of the Alembic software development kit used during the export, the duration of the animation in seconds (if available), and the frame rate(s) in the file.

Various objects and attributes can be imported by selecting the checkboxes in the Import section.

  • Hierarchy: When enabled, the full parenting hierarchy is recreated in Fusion using multiple Transform 3D nodes. When disabled, the transforms in the Alembic file are flattened down into the cameras and meshes. The flattening results in several meshes/cameras connected to a single Merge node in Fusion. It is best to have this disabled when the file includes animation. If enabled, the many rigs used to move objects in a scene will result in an equally large number of nodes in Fusion, so flattening will reduce the number of nodes in your node tree.
  • Orphaned Transforms: When the Hierarchy option is enabled, an Orphaned Transforms setting is displayed. Activating the Orphaned Transforms setting imports transforms that do not parent a mesh or camera. For example, if you have a skeleton and an associated mesh model, the model is imported as an Alembic mesh, and the skeleton as a node tree of Merge3Ds. If this is disabled, the Merge3Ds are not created.
  • Cameras: When enabled, importing a file includes cameras along with Aperture, Angles of View, Plane of Focus, as well as Near and Far clipping plane settings. The resolution Gate Fit may be imported depending on whether the application used to export the file correctly tagged the resolution Gate Fit metadata. If your camera does not import successfully, check the setting for the Camera3D Resolution Gate Fit. Note that 3D Stereoscopic information is not imported.
  • InverseTransform: Imports the Inverse Transform (World to Model) for cameras.
  • Points: Alembic files support a Points type. This is a collection of 3D points with position information. Some 3D software exports particles as points. However, keep in mind that while position is included, the direction and orientation of the particles are lost.
  • Meshes: This setting determines whether importing includes 3D models from the Alembic file. If it is enabled, options to include UVs and normals are displayed.

Animation
This section includes one option for the Resampling rate. When exporting an Alembic animation, it is saved to disk using frames per second (fps). When importing Alembic data into Fusion, the fps are detected and entered into the Resample Rate field unless you have changed it previously in the current comp. Ideally, you should maintain the exported frame rate as the resample rate, so your samples match up with the original. The Detected Sampling Rates information at the top of the dialog can give an idea of what to pick if you are unsure. However, using this field, you can change the frame rate to create effects like slow motion.

Not all objects and properties in a 3D scene have an agreed-upon universal convention in the Alembic file format. That being the case, Lights, Materials, Curves, Multiple UVs, and Velocities are not currently supported when you import Alembic files.

Since the FBX file format does support materials and lights, we recommend using FBX for lights, cameras, and materials. Use Alembic for meshes only.

Alembic Mesh 3D Node Inputs

The AlembicMesh3D node has two inputs in the Node Editor. Both are optional since the node is designed to use the imported mesh.

  • SceneInput: The orange input can be used to connect an additional 3D scene or model. The imported Alembic objects combine with the other 3D geometry.
  • MaterialInput: The optional green input is used to apply a material to the geometry by connecting a 2D bitmap image. It applies the connected image to the surface of the geometry in the scene.

Alembic Mesh 3D Node Setup

The AlembicMesh3D node is designed to be part of a larger 3D scene. Typically, when imported, a 3D geometry model is represented by one node, and any transforms are in another node. The nodes imported as part of the Alembic file connect into a Merge 3D node along with a camera, lights, and other elements that may be required for the scene.

Alembic Mesh 3D Node Controls Tab

The first tab in the Inspector is the Controls tab. It includes a series of unique controls specific to the Alembic Mesh 3D node as well as six groupings of controls that are common to most 3D nodes. The “Common Controls” section at the end of this chapter includes detailed descriptions of the common controls.

Below are descriptions of the Alembic Mesh 3D specific controls.

  • Filename
    The complete file path of the imported Alembic file is displayed here. This field allows you to change or update the file linked to this node.
  • Object Name
    This text field shows the name of the imported Alembic mesh, which is also used to rename the Alembic Mesh 3D node in the Node Editor.

    When importing with the Alembic Mesh 3D node, if this text field is blank, the entire contents of the Alembic geometry are imported as a single mesh. When importing geometry using File > Import > Alembic Scene, this field is set by Fusion.
  • Wireframe
    Enabling this option causes the mesh to display only the wireframe for the object in the viewer. When enabled, there is a second option for wireframe anti-aliasing. You can also render these wireframes out to a file if the Renderer 3D node has the OpenGL render type selected.

Alembic Mesh 3D Node Controls, Materials, Transform, and Settings Tabs

The controls for Visibility, Lighting, Matte, Blend Mode, Normals/Tangents, and Object ID in the Controls tab are common to many 3D nodes. The Materials, Transform, and Settings tabs in the Inspector are also duplicated in other 3D nodes.

Bender 3D Node

The Bender 3D node is used to bend, taper, twist, or shear 3D geometry based on the geometry’s bounding box. It works by connecting any 3D scene or object to the orange input on the Bender 3D node, and then adjusting the controls in the Inspector. Only the geometry in the scene is modified. Any lights, cameras, or materials are passed through unaffected.

The Bender node does not produce new vertices in the geometry; it only alters existing vertices in the geometry. So, when applying the Bender 3D node to primitives, like the Shape 3D, or Text 3D nodes, increase the Subdivision setting in the primitive’s node to get a higher-quality result.
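To see why vertex density matters, consider what a twist does: each existing vertex is rotated around the chosen axis by an angle proportional to its position along that axis, and no new vertices are created in between. The following is a conceptual sketch in plain Python, not Fusion's internals, and the amount value is a hypothetical example:

```python
import math

def twist(vertices, amount):
    """Rotate each (x, y, z) vertex around the Y axis by an angle
    proportional to its height. Only existing vertices move, so a
    coarse mesh produces a faceted, chunky twist."""
    out = []
    for x, y, z in vertices:
        a = amount * y                      # rotation grows along the axis
        c, s = math.cos(a), math.sin(a)
        out.append((c * x + s * z, y, -s * x + c * z))
    return out

# A vertex at height 1 with a 90-degree-per-unit twist ends up rotated
# a quarter turn; a vertex at height 0 does not move at all.
print(twist([(1.0, 0.0, 0.0), (1.0, 1.0, 0.0)], math.pi / 2))
```

With only two vertices, everything between them is a straight line; adding subdivisions gives the twist more points to bend through, which is exactly why increasing the Subdivision setting on the source primitive improves the result.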

Bender 3D Node Inputs

The following inputs appear on the Bender 3D node in the Node Editor.

  • SceneInput: The orange scene input is the required input for the Bender 3D node. You use this input to connect another node that creates or contains a 3D scene or object.

Bender 3D Node Setup

The Bender 3D node works by connecting a 3D node that contains geometry, like an Image Plane 3D, Shape 3D, or Text 3D node. The element you connect to the Bender 3D node is distorted based on the controls in the Inspector. The Bender 3D node is designed to be part of a larger 3D scene, with the output typically connecting into a Merge 3D.

Bender 3D Node Controls Tab

The first tab in the Inspector is the Controls tab. It includes all the controls for the Bender 3D node.

  • Bender Type
    The Bender Type menu is used to select the type of deformation to apply to the geometry. There are four modes available: Bend, Taper, Twist, and Shear.
  • Amount
    Adjusting the Amount slider changes the strength of the deformation.
  • Axis
    The Axis control determines the axis along which the deformation is applied. It has a different meaning depending on the type of deformation. For example, when bending, this selects the elbow in conjunction with the Angle control. In other cases, the deform is applied around the specified axis.
  • Angle
    The Angle thumbwheel control determines what direction about the axis a bend or shear is applied. It is not visible for taper or twist deformations.
  • Range
    The Range control can be used to limit the effect of a deformation to a small portion of the geometry. The Range control is not available when the Bender Type is set to Shear.
  • Group Objects
    If the input of the Bender 3D node contains multiple 3D objects, either through a Merge 3D or strung together, the Group Objects checkbox treats all the objects in the input scene as a single object, and the common center is used to deform the objects, instead of deforming each component object individually.

Bender 3D Node Settings Tab

The Settings tab in the Inspector is common to all 3D nodes.

Camera 3D Node

The Camera 3D node generates a virtual camera for viewing the 3D environment. It closely emulates the settings used in real cameras to make matching live-action or 3D-rendered elements as seamless as possible. Adding a camera to a 3D composite allows you to frame the elements of a composite however you want and animate the camera during a scene to create moving camera shots.

Camera Projection
The Camera 3D node can also be used to perform Camera Projection by projecting a 2D image through the camera into 3D space. Projecting a 2D image can be done as a simple Image Plane aligned with the camera, or as an actual projection, similar to the behavior of the Projector 3D node, with the added advantage of being aligned precisely with the camera. The Image Plane, Projection, and Materials tabs do not appear until you connect a 2D image to the magenta image input on the Camera 3D node in the Node Editor.

Stereoscopic
The Camera node has built-in stereoscopic features. They offer control over eye separation and convergence distance. The camera for the right eye can be replaced using a separate camera node connected to the green left/right stereo camera input. Additionally, the plane of focus control for depth of field rendering is also available here.

If you add a camera by dragging the camera icon from the toolbar onto the 3D view, it automatically connects to the Merge 3D you are viewing. Also, the current viewer is set to look through the new camera.

Alternatively, it is possible to copy the current viewer to a camera (or spotlight or any other object) by selecting the Copy PoV To option in the viewer’s contextual menu, under the Camera submenu.

Camera 3D Node Inputs

There are three optional inputs on the Camera 3D node in the Node Editor.

  • SceneInput: The orange input is used to connect a 3D scene or object. When connected, the geometry links to the camera’s field of view. It acts similarly to an image attached to the Image Plane input. If the camera’s Projection tab has projection enabled, the image attached to the magenta image input projects onto the geometry.
  • ImageInput: The optional magenta input is used to connect a 2D image. When camera projection is enabled, the image can be used as a texture. Alternatively, when the camera’s image plane controls are used, the parented planar geometry is linked to the camera’s field of view.
  • RightStereoCamera: The green input should be connected to another Camera 3D node when creating 3D stereoscopic effects. It is used to override the internal camera used for the right eye in stereoscopic renders and viewers.

Camera 3D Node Setup

The output of a camera 3D node should be connected to a Merge 3D node. You then view the Merge 3D node and select the camera from the viewer’s right-click menu or by right-clicking over the axis label in the viewer.

Displaying a camera node directly in the viewer shows only an empty scene; there is nothing for the camera to see. To view the scene through the camera, view the Merge 3D node where the camera is connected, or any node downstream of that Merge 3D. Then right-click on the viewer and select Camera > [Camera name] from the contextual menu. Right-clicking on the axis label found in the lower corner of each 3D viewer also displays the Camera submenu.

The aspect of the viewer may be different from the aspect of the camera, so the camera view may not match the actual boundaries of the image rendered by the Renderer 3D node. Guides can be enabled to represent the portion of the view that the camera sees and assist you in framing the shot. Right-click on the viewer and select an option from the Guides > Frame Aspect submenu. The default option uses the format enabled in the Composition > Frame Format preferences. To toggle the guides on or off, select Guides > Show Guides from the viewers’ contextual menu, or use the Command-G (macOS) or Ctrl-G (Windows) keyboard shortcut when the viewer is active.

Camera 3D Node Controls Tab

The Camera3D Inspector includes six tabs along the top. The first tab, called the Controls tab, contains some of the most fundamental camera settings, including the camera’s clipping planes, field of view, focal length, and stereoscopic properties. Some tabs are not displayed until a required connection is made to the Camera 3D node.

  • Projection Type
    The Projection Type menu is used to select between Perspective and Orthographic cameras. Generally, real-world cameras are perspective cameras. An orthographic camera uses parallel orthographic projection, a technique where the view plane is perpendicular to the viewing direction. This produces a parallel camera output that is undistorted by perspective. Orthographic cameras present controls only for the near and far clipping planes, and a control to set the viewing scale.
  • Near/Far Clip
    The clipping planes are used to limit what geometry in a scene is rendered based on an object’s distance from the camera’s focal point. Clipping planes ensure objects that are extremely close to the camera, as well as objects that are too far away to be useful, are excluded from the final rendering.

    The default perspective camera ignores this setting unless the Adaptive Near/Far Clip checkbox located under the Near/Far Clip control is disabled.

    The clip values use units, so a far clipping plane of 20 means that any object more than 20 units from the camera is invisible to the camera. A near clipping plane of 0.1 means that any object closer than 0.1 units is also invisible.
  • Adaptive Near/Far Clip
    When selected, the renderer automatically adjusts the camera’s near/far clipping plane to match the extents of the scene. This setting overrides the values of the Near and Far clip range controls described above. This option is not available for orthographic cameras.
  • Viewing Volume Size
    When the Projection Type is set to Orthographic, the viewing volume size adjustment appears. It determines the size of the box that makes up the camera’s field of view.

    The Z-distance of an orthographic camera from the objects it sees does not affect the scale of those objects; only the viewing volume size does.
  • Angle of View Type
    Use the Angle of View Type buttons to choose how the camera’s angle of view is measured. Some applications use vertical measurements, some use horizontal, and others use diagonal measurements. Changing the Angle of View type causes the Angle of View control below to recalculate.
  • Angle of View
    Angle of View defines the area of the scene that can be viewed through the camera. Generally, the human eye can see more of a scene than a camera, and various lenses record different degrees of the total image. A large value produces a wider angle of view, and a smaller value produces a narrower, or more tightly focused, angle of view.

    Just as in a real-world camera, the angle of view and focal length controls are directly related. Smaller focal lengths produce a wider angle of view, so changing one control automatically changes the other to match.
  • Focal Length
    In the real world, a lens’ Focal Length is the distance from the center of the lens to the film plane. The shorter the focal length, the closer the focal plane is to the back of the lens. The focal length is measured in millimeters. The angle of view and focal length controls are directly related. Smaller focal lengths produce a wider angle of view, so changing one control automatically changes the other to match.

    The relationship between focal length and angle of view is angle = 2 * arctan[aperture / 2 / focal_length].

    Use the vertical aperture size to get the vertical angle of view and the horizontal aperture size to get the horizontal angle of view.
  • Plane of Focus (For Depth of Field)
    Like a focal point on a real-world camera, this setting defines the distance from the camera to an object. It is used by the OpenGL renderer in the Renderer 3D node to calculate depth of field.
  • Stereo
    The Stereo section includes options for setting up 3D stereoscopic cameras. 3D stereoscopic composites work by capturing two slightly different views, displayed separately to the left and right eyes. The mode menu determines if the current camera is a stereoscopic setup or a mono camera. When set to the default mono setting, the camera views the scene as a traditional 2D film camera. Three other options in the mode menu determine the method used for 3D stereoscopic cameras.
    • Toe In
      In a toe-in setup, both cameras rotate in toward a single focal point. Though the result is stereoscopic, the vertical parallax introduced by this method can cause discomfort for the audience. Toe-in stereoscopy works for convergence around the center of the images but exhibits keystoning, or image separation, toward the left and right edges. This setup can be used when the focus point and the convergence point need to be the same. It is also used in cases where it is the only way to match a live-action camera rig.
    • Off Axis
      Regarded as the correct way to create stereo pairs, this is the default method in Fusion. Off Axis introduces no vertical parallax, thus creating stereo images with less eye strain. Sometimes called a skewed-frustum setup, this is akin to a lens shift in the real world. Instead of rotating the two cameras inward as in a toe-in setup, Off Axis shifts the lenses inward.
    • Parallel
      The cameras are shifted parallel to each other. Since this is a purely parallel shift, there is no Convergence Distance control, which limits your control over placing objects in front of or behind the screen. However, Parallel introduces no vertical parallax, thus creating less strain on the eyes.
    • Rig Attached To
      This drop-down menu allows you to control which camera is used to transform the stereoscopic setup. Based on this menu, transform controls appear in the viewer either on the right camera, left camera, or between the two cameras. The ability to switch the transform controls through rigging can assist in matching the animation path to a camera crane or other live-action camera motion. The Center option places the transform controls between the two cameras and moves each evenly as the separation and convergence are adjusted. Left puts the transform controls on the left camera, and the right camera moves as the separation and convergence are adjusted. Right puts the transform controls on the right camera, and the left camera moves as adjustments are made to separation and convergence.
    • Eye Separation
      Eye Separation defines the distance between both stereo cameras. Setting Eye Separation to a value larger than 0 shows controls for each camera in the viewer when this node is selected. Note that there is no Convergence Distance control in Parallel mode.
    • Convergence Distance
      This control sets the stereoscopic convergence distance, defined as a point located along the Z-axis of the camera that determines where both left- and right-eye cameras converge. The Convergence Distance controls are only available when setting the Mode menu to Toe-In or Off Axis.
  • Film Back
    • Film Gate
      The size of the film gate represents the dimensions of the aperture. Instead of setting the aperture’s width and height, you can choose it using the list of preset camera types in the Film Gate menu. Selecting one of the options automatically sets the aperture width and aperture height to match.
    • Aperture Width/Height
      The Aperture Width and Height sliders control the dimensions of the camera’s aperture or the portion of the camera that lets light in on a real-world camera. In video and film cameras, the aperture is the mask opening that defines the area of each frame exposed. The Aperture control uses inches as its unit of measurement.
    • Resolution Gate Fit
      Determines how the film gate is fitted within the resolution gate. This only has an effect when the aspect of the film gate is not the same aspect as the output image.
      • Inside: The image source defined by the film gate is scaled uniformly until one of its dimensions (X or Y) fits the inside dimensions of the resolution gate mask. Depending on the relative dimensions of image source and mask background, either the image source’s width or height may be cropped to fit the dimension of the mask.
      • Width: The image source defined by the film gate is scaled uniformly until its width (X) fits the width of the resolution gate mask. Depending on the relative dimensions of image source and mask, the image source’s Y-dimension might not fit the mask’s Y-dimension, resulting in either cropping of the image source in Y or the image source not covering the mask’s height entirely.
      • Height: The image source defined by the film gate is scaled uniformly until its height (Y) fits the height of the resolution gate mask. Depending on the relative dimensions of image source and mask, the image source’s X-dimension might not fit the mask’s X-dimension, resulting in either cropping of the image source in X or the image source not covering the mask’s width entirely.
      • Outside: The image source defined by the film gate is scaled uniformly until one of its dimensions (X or Y) fits the outside dimensions of the resolution gate mask. Depending on the relative dimensions of image source and mask, either the image source’s width or height may be cropped or not fit the dimension of the mask.
      • Stretch: The image source defined by the film gate is stretched in X and Y to accommodate the full dimensions of the generated resolution gate mask. This might lead to visible distortions of the image source.
  • Control Visibility
    This section allows you to selectively activate the onscreen controls that are displayed along with the camera.
    • Show View Controls: Displays or hides all camera onscreen controls in the viewers.
    • Frustum: Displays the actual viewing cone of the camera.
    • View Vector: Displays a white line inside the viewing cone, which can be used to determine the shift when in Parallel mode.
    • Near Clip: The Near clipping plane. This plane can be subdivided for better visibility.
    • Far Clip: The Far clipping plane. This plane can be subdivided for better visibility.
    • Focal Plane: The plane based on the Plane of Focus slider explained in the Controls tab above. This plane can be subdivided for better visibility.
    • Convergence Distance: The point of convergence when using Stereo mode. This plane can be subdivided for better visibility.
  • Import Camera
    The Import Camera button displays a dialog to import a camera from another application.

    It supports the following file types:

    • LightWave Scene (*.lws)
    • Max Scene (*.ase)
    • Maya Ascii Scene (*.ma)
    • dotXSI (*.xsi)
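The angle-of-view formula given under Focal Length above can be sanity-checked with a short sketch. This is plain Python, not anything in Fusion; the 36 mm aperture and 50 mm focal length are hypothetical example values, and any length unit works as long as both measurements use the same one.

```python
import math

def angle_of_view(aperture, focal_length):
    """angle = 2 * arctan(aperture / (2 * focal_length)), in degrees.
    Both arguments must use the same length unit."""
    return math.degrees(2 * math.atan(aperture / (2 * focal_length)))

# Hypothetical example: a 36 mm-wide aperture behind a 50 mm lens gives
# a horizontal angle of view of roughly 39.6 degrees.
print(round(angle_of_view(36.0, 50.0), 1))  # 39.6
```

Because the relationship is a single equation in two variables, fixing one determines the other, which is why changing either the Angle of View or Focal Length control automatically updates its counterpart.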

Camera 3D Node Image Tab

When a 2D image is connected to the magenta image input on the Camera3D node, an Image tab is created at the top of the Inspector. The connected image is always oriented so it fills the camera’s field of view.

Except for the controls listed below, the options in this tab are identical to those commonly found in other 3D nodes. For more detail on visibility, lighting, matte, blend mode, normals/tangents, and Object ID, see “The Common Controls” section at the end of this chapter.

  • Enable Image Plane
    Use this checkbox to enable or disable the usage of the Image Plane.
  • Fill Method
    This menu configures how to scale the image plane if the camera has a different aspect ratio.
    • Inside: The image plane is scaled uniformly until one of its dimensions (X or Y) fits the inside dimensions of the resolution gate mask. Depending on the relative dimensions of image source and mask background, either the image source’s width or height may be cropped to fit the dimensions of the mask.
    • Width: The image plane is scaled uniformly until its width (X) fits the width of the mask. Depending on the relative dimensions of image source and the resolution gate mask, the image source’s Y-dimension might not fit the mask’s Y-dimension, resulting in either cropping of the image source in Y or the image source not covering the mask’s height entirely.
    • Height: The image plane is scaled uniformly until its height (Y) fits the height of the mask. Depending on the relative dimensions of image source and the resolution gate mask, the image source’s X-dimension might not fit the mask’s X-dimension, resulting in either cropping of the image source in X or the image source not covering the mask’s width entirely.
    • Outside: The image plane is scaled uniformly until one of its dimensions (X or Y) fits the outside dimensions of the resolution gate mask. Depending on the relative dimensions of image source and mask, either the image source’s width or height may be cropped or not fit the respective dimension of the mask.
  • Depth: The Depth slider controls the image plane’s distance from the camera.
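The Inside/Width/Height/Outside behaviors described here (and the matching Resolution Gate Fit options earlier) all come down to choosing scale factors for the source relative to the gate, with Stretch as the only non-uniform case. The following is a minimal illustrative sketch of that fitting logic; it is not Fusion's actual implementation, and the resolutions used are hypothetical examples.

```python
def fit_scale(src_w, src_h, dst_w, dst_h, mode):
    """Scale factors (sx, sy) applied to the source for each fit mode."""
    rx, ry = dst_w / src_w, dst_h / src_h
    if mode == "Width":        # match widths; height may crop or underfill
        return (rx, rx)
    if mode == "Height":       # match heights; width may crop or underfill
        return (ry, ry)
    if mode == "Inside":       # uniform scale until the source fits entirely
        s = min(rx, ry)
        return (s, s)
    if mode == "Outside":      # uniform scale until the source covers the gate
        s = max(rx, ry)
        return (s, s)
    if mode == "Stretch":      # non-uniform scale; may visibly distort
        return (rx, ry)
    raise ValueError(mode)

# A 4:3 source (1440x1080) fitted into a 16:9 gate (1920x1080):
print(fit_scale(1440, 1080, 1920, 1080, "Inside"))  # (1.0, 1.0)
```

Inside leaves uncovered bands when aspects differ, Outside crops the overflow, and Stretch covers everything at the cost of distortion.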

Camera 3D Node Materials Tab

The options presented in the Materials tab are identical to those commonly found in other 3D nodes. For more detail on the Diffuse, Specular, Transmittance, and Material ID controls, see the “Common Controls” section at the end of this chapter.

Camera 3D Node Projection Tab

When a 2D image is connected to the camera node, a fourth projection tab is displayed at the top of the Inspector. Using this Projection tab, it is possible to project the image into the scene. A projection is different from an image plane in that the projection falls onto the geometry in the scene exactly as if there were a physical projector present in the scene. The image is projected as light, which means the Renderer 3D node must be set to enable lighting for the projection to be visible.

  • Enable Camera Projection
    Select this checkbox to enable projection of the 2D image connected to the magenta input on the Camera node.
  • Projection Fit Method
    This menu can be used to select the method used to match the aspect of the projected image to the camera’s field of view.
  • Projection Mode
    • Light: Defines the projection as a spotlight.
    • Ambient Light: Defines the projection as an ambient light.
    • Texture: Allows a projection that can be relit using other lights. Using this setting requires a Catcher node connected to the applicable inputs of the specific material.

Camera 3D Node Transform and Settings Tabs

The options presented in the Transform and Settings tabs are commonly found in other 3D nodes. For more detail, see the “Common Controls” section at the end of this chapter.

Cube 3D Node

The Cube 3D node is a basic primitive geometry type capable of generating a simple cube.

The node also provides six additional image inputs that can be used to map a texture onto the six faces of the cube. Cubes are often used as shadow casting objects and for environment maps. For other basic primitives, see the Shape 3D node in this chapter.

Cube 3D Node Inputs

The following are optional inputs that appear on the Cube3D node in the Node Editor:

  • SceneInput: The orange scene input is used to connect another node that creates or contains a 3D scene or object. The additional geometry gets added to the Cube3D.
  • MaterialInput: These six inputs are used to define the materials applied to the six faces of the cube. You can connect either a 2D image or a 3D material to these inputs. Textures or materials added to the Cube3D do not get added to any 3D objects connected to the Cube’s SceneInput.

Cube 3D Node Setup

The output of a Cube 3D node typically connects to a Merge 3D node, integrating it into a larger scene. When 3D tracking, the Cube 3D is often used as a placeholder for proper geometry that is not available at the current time.

Cube 3D Node Controls Tab

The first tab in the Inspector is the Controls tab. It includes the primary controls for determining the overall size and shape of the Cube 3D node.

  • Lock Width/Height/Depth
    This checkbox locks the Width, Height, and Depth dimensions of the cube together. When selected, only a Size control is displayed; otherwise, separate Width, Height, and Depth sliders are shown.
  • Size or Width/Height/Depth
    If the Lock checkbox is selected, then only the Size slider is shown; otherwise, separate sliders are displayed for Width, Height, and Depth. The Size and Width sliders are the same control renamed, so any animation applied to Size is also applied to Width when the controls are unlocked.
  • Subdivision Level
    Use the Subdivision Level slider to set the number of subdivisions used when creating the cube’s faces.

    The 3D viewers and renderer use vertex lighting, meaning all lighting is calculated at the vertices on the 3D geometry and then interpolated from there. Therefore, the more subdivisions in the mesh, the more vertices are available to represent the lighting. For example, make a sphere and set the subdivisions to be small so it looks chunky. With lighting on, the object looks like a sphere but has some amount of fracturing resulting from the large distance between vertices. When the subdivisions are high, the vertices are closer and the lighting becomes more even. So, increasing subdivisions can be useful when working interactively with lights.
  • Cube Mapping
    Enabling the Cube Mapping checkbox causes the cube to wrap its first texture across all six faces using a standard cubic mapping technique. This approach expects a texture laid out in the shape of a cross.
  • Wireframe
    Enabling this checkbox causes the mesh to render only the wireframe for the object when rendering with the OpenGL renderer in the Renderer 3D node.
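
The effect of subdivisions on vertex lighting can be sketched outside Fusion. The Python snippet below is illustrative only (the lighting falloff function and the subdivision counts are made-up assumptions, not Fusion code): it samples a lighting curve only at vertex positions and interpolates between them, showing that a denser mesh tracks the lighting more closely.

```python
import math

def true_intensity(x):
    # Hypothetical smooth lighting falloff across a surface (x in 0..1).
    return math.sin(math.pi * x)

def vertex_lit(x, subdivisions):
    # Sample the lighting only at the vertices, then linearly interpolate,
    # which is what vertex lighting does between points on the mesh.
    step = 1.0 / subdivisions
    i = min(int(x / step), subdivisions - 1)
    x0, x1 = i * step, (i + 1) * step
    t = (x - x0) / step
    return (1 - t) * true_intensity(x0) + t * true_intensity(x1)

def max_error(subdivisions, samples=200):
    # Worst-case difference between the true lighting and the
    # vertex-interpolated approximation.
    return max(abs(true_intensity(s / samples) - vertex_lit(s / samples, subdivisions))
               for s in range(samples + 1))

coarse = max_error(4)   # few vertices: visibly "fractured" shading
fine = max_error(32)    # more vertices: much smoother shading
```

With 32 subdivisions the interpolated lighting is far closer to the true curve than with 4, which is why raising subdivisions evens out interactive lighting.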

Cube 3D Node Controls, Materials, Transform, and Settings Tabs

The remaining controls for Visibility, Lighting, Matte, Blend Mode, Normals/Tangents, and Object ID are common to many 3D nodes. The same is true of the Materials, Transform, and Settings tabs. Their descriptions can be found in the Common Controls section.

Custom Vertex 3D Node

The Custom Vertex 3D node is an advanced custom node for 3D geometry that performs per vertex manipulations. If you have moderate experience with scripting or C++ programming, you should find the structure and terminology used by the Custom node familiar.

Using scripting math functions and lookup tables from images, you can move vertex positions on 3D geometry. Vertices can be more than just positions in 3D space. You can manipulate normals, texture coordinates, vectors, and velocity.

For example, Custom Vertex 3D can be used to make a flat plane wave like a flag, or create spiral models.
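
As an illustration of the flag-wave idea (this is Python, not Fusion expression syntax; the amplitude and frequency values are arbitrary assumptions), the effect amounts to a per-vertex sine displacement, using the px, py, pz naming from the Vertex tab:

```python
import math

def wave_displace(px, py, pz, time, amplitude=0.2, frequency=3.0):
    # Per-vertex displacement in the spirit of a Custom Vertex 3D
    # expression: leave px/py untouched and push pz with a sine wave
    # that travels across the plane as time advances.
    new_pz = pz + amplitude * math.sin(frequency * px + time)
    return (px, py, new_pz)

# A row of vertices from a flat plane gains a rippling Z offset:
flat_row = [(x / 10.0, 0.0, 0.0) for x in range(11)]
waved = [wave_displace(px, py, pz, time=0.0) for (px, py, pz) in flat_row]
```

Animating the time term makes the ripple travel, producing the waving-flag motion.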

Besides providing a 3D scene input and three image inputs, the Inspector includes up to eight number fields and as many as eight XYZ position values from other controls and parameters in the node tree.

Custom Vertex 3D Node Inputs

The Custom Vertex 3D node includes four inputs. The orange scene input is the only one of the four that is required.

  • SceneInput: The orange scene input takes 3D geometry or a 3D scene from a 3D node output. This is the 3D scene or geometry that is manipulated by the calculations in the Custom Vertex 3D node.
  • ImageInput1, ImageInput2, ImageInput3: The three image inputs using green, magenta, and teal colors are optional inputs that can be used for compositing.

Custom Vertex 3D Node Setup

The object you want to manipulate connects to the orange scene input of the Custom Vertex 3D node. The output typically connects to a Merge 3D node, integrating it into a larger scene.

Custom Vertex 3D Node Vertex Tab

Using the fields in the Vertex tab, vertex calculations can be performed on the Position, Normals, Vertex Color, Texture Coordinates, Environment Coordinates, UV Tangents, and Velocity attributes.

The vertices are defined by three XYZ Position values in world space as px, py, pz. Normals are vectors that define the direction each vertex is pointing, as nx, ny, nz.

Vertex color is the Red, Green, Blue, and Alpha color of the point as vcr, vcg, vcb, vca.

Custom Vertex 3D Node Numbers Tab

  • Numbers 1-8
    Numbers are variables with a dial control that can be animated or connected to modifiers exactly as any other control might be. The numbers can be used in equations on vertices at the current time: n1, n2, n3, n4,… or at any time: n1_at(float t), n2_at(float t), n3_at(float t), n4_at(float t), where t is the time you want. The values of these controls are available to expressions in the Setup and Intermediate tabs. They can be renamed and hidden from the viewer using the Config tab.

Custom Vertex 3D Node Points Tab

  • Points 1-8
    The point controls represent points in the Custom Vertex 3D tool, not the vertices. These eight point controls include 3D X,Y,Z position controls for positioning points at the current time: (p1x, p1y, p1z, p2x, p2y, p2z) or at any time: p1x_at(float t), p1y_at(float t), p1z_at(float t), p2x_at(float t), p2y_at(float t), p2z_at(float t), where t is the time you want. For example, you can use a point to define a position in 3D space to rotate the vertices around. They can be renamed and hidden from the viewer using the Config tab. They are normal positional controls and can be animated or connected to modifiers as any other node might.

Custom Vertex 3D Node LUT Tab

  • LUTs 1-4
    The Custom Vertex 3D node provides four LUT splines. A LUT is a lookup table that will return a value from the height of the LUT spline. For example, getlut1(float x), getlut2(float x),… where x = 0 … 1 accesses the LUT values.

    The values of these controls are available to expressions in the Setup and Intermediate tabs using the getlut# function. For example, setting the R, G, B, and A expressions to getlut1(r1), getlut2(g1), getlut3(b1), and getlut4(a1) respectively, would cause the Custom Vertex 3D node to mimic the Color Curves node.

    These controls can be renamed using the options in the Config tab to make their meanings more apparent, but expressions still access the values as lut1, lut2, lut3, and lut4.
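
Conceptually, a getlut# call samples the height of the spline at a position between 0 and 1. The Python sketch below is a simplified stand-in (it uses linear interpolation and made-up control points, not Fusion's actual spline evaluation) to illustrate the lookup:

```python
def make_lut(points):
    # 'points' are (x, height) control points of the spline, with x in 0..1.
    # A real LUT spline is smooth; linear interpolation keeps the sketch simple.
    points = sorted(points)
    def getlut(x):
        x = min(max(x, points[0][0]), points[-1][0])
        for (x0, y0), (x1, y1) in zip(points, points[1:]):
            if x0 <= x <= x1:
                t = 0.0 if x1 == x0 else (x - x0) / (x1 - x0)
                return (1 - t) * y0 + t * y1
        return points[-1][1]
    return getlut

# A gamma-like curve: boosts midtones while pinning black and white.
getlut1 = make_lut([(0.0, 0.0), (0.5, 0.7), (1.0, 1.0)])
mid = getlut1(0.5)
```

Feeding a channel value through such a lookup is exactly how the getlut1(r1)-style expressions mimic a Color Curves adjustment.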

Custom Vertex 3D Node Setup Tab

  • Setups 1-8
    Up to eight separate expressions can be calculated in the Setup tab of the Custom Vertex 3D node. The Setup expressions are evaluated once per frame, before any other calculations are performed. The results are then made available to the other expressions in the node as variables s1 through s8.

    Think of them as global setup scripts that can be referenced by the intermediate and channel scripts for each vertex.

    For example, Setup scripts can be used to transform vertices from model space to world space.

Custom Vertex 3D Node Intermediate Tab

  • Intermediates 1-8
    An additional eight expressions can be calculated in the Intermediate tab. The Intermediate expressions are evaluated once per vertex, after the Setup expressions are evaluated. Results are available as variables i1, i2, i3, i4, i5, i6, i7, i8, which can be referenced by channel scripts. Think of them as “per vertex setup” scripts.

    For example, an Intermediate script can produce the new vertex (i.e., new position, normal, tangent, UVs, etc.) or transform it from world space back to model space.

Custom Vertex 3D Node Config Tab

  • Random Seed
    Use this to set the seed for the rand() and rands() functions. Click the Reseed button to set the seed to a random value. This control may be needed if multiple Custom Vertex 3D nodes are required with different random results for each.
  • Number Controls
    There are eight sets of Number controls, corresponding to the eight sliders in the Numbers tab. Disable the Show Number checkbox to hide the corresponding Number slider, or edit the Name for Number text field to change its name.
  • Point Controls
    There are eight sets of Point controls, corresponding to the eight controls in the Points tab. Disable the Show Point checkbox to hide the corresponding Point control and its crosshair in the viewer. Similarly, edit the Name for Point text field to change the control’s name.

Custom Vertex 3D Node Settings Tab

The Settings tab controls are common to many 3D nodes, and their descriptions can be found in the Common Controls section.

Displace 3D Node

The Displace 3D node is used to displace the vertices of an object along their normals based on a reference image. The texture coordinates on the geometry are used to determine where to sample the image.

When using Displace 3D, keep in mind that it only displaces existing vertices and does not subdivide surfaces to increase detail. To obtain a more detailed displacement, increase the subdivision amount for the geometry that is being displaced. Note that the pixels in the displacement image may contain negative values.

Displace 3D Node Inputs

The following two inputs appear on the Displace 3D node in the Node Editor:

  • SceneInput: The orange scene input is the required input for the Displace 3D node. You use this input to connect another node that creates or contains a 3D scene or object.
  • Input: This green input is used to connect a 2D image that is used to displace the object connected to the Scene input. If no image is provided, this node effectively passes the scene straight through to its output. So, although not technically a required input, there isn’t much use for adding this node unless you connect this input correctly.

Displace 3D Node Setup

The output of a Displace 3D node typically connects to a Merge 3D node, integrating it into a larger scene. The 3D geometry you want to displace is connected to the orange input, and in this example, a Fast Noise node is used to displace the geometry.

Displace 3D Node Controls Tab

The Displace 3D Inspector includes two tabs along the top. The primary tab, called the Controls tab, contains the dedicated Displace 3D controls.

  • Channel
    Determines which channel of the connected input image is used to displace the geometry.
  • Scale and Bias
    Use these sliders to scale (magnify) and bias (offset) the displacement. The bias is applied first and the scale afterward.
  • Camera Displacement
    • Point to Camera: When the Point to Camera checkbox is enabled, each vertex is displaced toward the camera instead of along its normal. One possible use of this option is for displacing a camera’s image plane. The displaced camera image plane would appear unchanged when viewed through the camera but is deformed in 3D space, allowing one to comp-in other 3D layers that correctly interact in Z.
    • Camera: This menu is used to select which camera in the scene is used to determine the camera displacement when the Point to Camera option is selected.
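
The displacement math described above can be summarized in a short sketch. This Python snippet is illustrative only (the vertex, normal, and slider values are assumptions): it applies the bias first, then the scale, then pushes the vertex along its normal.

```python
def displace_vertex(position, normal, sample, scale=1.0, bias=0.0):
    # The sampled image value is offset by the bias first,
    # then magnified by the scale, then applied along the normal.
    amount = (sample + bias) * scale
    return tuple(p + n * amount for p, n in zip(position, normal))

# A vertex with an upward-facing normal, displaced by a mid-gray sample.
# A bias of -0.5 recenters 0..1 samples around zero, so 0.5 maps to
# no displacement while darker/brighter samples push in or out:
moved = displace_vertex((0.0, 0.0, 0.0), (0.0, 1.0, 0.0),
                        sample=0.5, scale=2.0, bias=-0.5)
```

This also shows why negative values in the displacement image are meaningful: they pull vertices inward along the normal instead of pushing them out.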

Displace 3D Node Settings Tab

The Settings tab controls are common to many 3D nodes, and their descriptions can be found in the Common Controls section.

Duplicate 3D Node

Similar to the 2D version called the Duplicate node, the Duplicate 3D node can be used to duplicate any geometry in a scene, applying a successive transformation to each, and creating repeating patterns and complex arrays of objects. The options in the Jitter tab allow non-uniform transformations, such as random positioning or sizes.

Duplicate 3D Node Inputs

The Duplicate 3D node has a single input by default where you connect a 3D scene. An optional Mesh input appears based on the settings of the node.

  • SceneInput: The orange Scene Input is a required input. The scene or object you connect to this input is duplicated based on the settings in the Control tab of the Inspector.
  • MeshInput: A green optional mesh input appears when the Region menu in the Region tab is set to Mesh. The mesh can be any 3D model, either generated in Fusion or imported.

Duplicate 3D Node Setup

The output of a Duplicate 3D node typically connects to a Merge 3D node, integrating it into a larger scene. The 3D geometry you want duplicated, in this case a Cube 3D, is connected to the orange input.

Duplicate 3D Node Controls Tab

The Controls tab includes all the parameters you can use to create, offset, and scale copies of the object connected to the scene input on the node.

  • Copies
    Use this range control to set the number of copies made. Each copy is a copy of the last copy, so if this control is set to [0,3], the parent is copied, then the copy is copied, then the copy of the copy is copied, and so on. This allows some interesting effects when transformations are applied to each copy using the controls below.

    Setting the First Copy to a value greater than 0 excludes the original object and shows only the copies.
  • Time Offset
    Use the Time Offset slider to offset any animations that are applied to the source geometry by a set amount per copy. For example, set the value to -1.0 and use a cube set to rotate on the Y-axis as the source. The first copy shows the animation from a frame earlier; the second copy shows animation from a frame before that, etc. This can be used with great effect on textured planes—for example, where successive frames of a clip can be shown.
  • Transform Method
    • Linear: When set to Linear, transforms are multiplied by the number of the copy, and the total scale, rotation, and translation are applied in turn, independent of the other copies.
    • Accumulated: When set to Accumulated, each object copy starts at the position of the previous object and is transformed from there. The result is transformed again for the next copy.
  • Transform Order
    With this menu, the order in which the transforms are calculated can be set. It defaults to Scale-Rotation-Transform (SRT).

    Using different orders results in different positions of your final objects.
  • Translation
    The X, Y, and Z Offset sliders set the offset position applied to each copy. An X offset of 1 would offset each copy 1 unit along the X-axis from the last copy.
  • Rotation
    The buttons along the top of this group of rotation controls set the order in which rotations are applied to the geometry. Setting the rotation order to XYZ would apply the rotation on the X-axis first, followed by the Y-axis rotation, then the Z-axis rotation.

    The three Rotation sliders set the amount of rotation applied to each copy.
  • Pivot
    The pivot controls determine the position of the pivot point used when rotating each copy.
  • Scale
    • Lock: When the Lock XYZ checkbox is selected, any adjustment to the duplicate scale is applied to all three axes simultaneously. If this checkbox is disabled, the Scale slider is replaced with individual sliders for the X, Y, and Z scales.
    • Scale: The Scale controls tell Duplicate how much scaling to apply to each copy.

Duplicate 3D Node Jitter Tab

The options in the Jitter tab allow you to randomize the position, rotation, and size of all the copies created in the Controls tab.

  • Random Seed
    The Random Seed slider is used to generate a random starting point for the amount of jitter applied to the duplicated objects. Two Duplicate nodes with identical settings but different random seeds produce two completely different results.
  • Randomize
    Click the Randomize button to automatically generate a random seed value.
  • Jitter Probability
    Adjusting this slider determines the percentage of copies that are affected by the jitter. A value of 1.0 means 100% of the copies are affected, while a value of 0.5 means 50% are affected.
  • Time Offset
    Use the Time Offset slider to offset any animations that are applied to the source geometry by a set amount per copy. For example, set the value to –1.0 and use a cube set to rotate on the Y-axis as the source. The first copy shows the animation from a frame earlier; the second copy shows animation from a frame before that, etc. This can be used with great effect on textured planes—for example, where successive frames of a clip can be shown.
  • Translation Jitter
    Use these three controls to adjust the amount of variation in the X, Y, and Z translation of the duplicated objects.
  • Rotation Jitter
    Use these three controls to adjust the amount of variation in the X, Y, and Z rotation of the duplicated objects.
  • Pivot Jitter
    Use these three controls to adjust the amount of variation in the rotational pivot center of the duplicated objects. This affects only the additional jitter rotation, not the rotation produced by the Rotation settings in the Controls tab.
  • Scale Jitter
    Use this control to adjust the amount of variation in the scale of the duplicated objects. Disable the Lock XYZ checkbox to adjust the scale variation independently on all three axes.
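
The interaction of Random Seed and Jitter Probability can be sketched as follows. This Python snippet is an illustration with hypothetical positions and amounts, not Fusion code: it shows that the same seed reproduces the same jitter, while a probability of 0 leaves every copy untouched.

```python
import random

def jitter_positions(positions, amount, probability, seed):
    # The same seed always produces the same jitter; two different seeds
    # give completely different results, as with the Random Seed slider.
    rng = random.Random(seed)
    jittered = []
    for pos in positions:
        # The probability gate decides which copies are affected at all.
        if rng.random() < probability:
            pos = tuple(p + rng.uniform(-amount, amount) for p in pos)
        jittered.append(pos)
    return jittered

copies = [(float(i), 0.0, 0.0) for i in range(4)]
a = jitter_positions(copies, amount=0.5, probability=1.0, seed=7)
b = jitter_positions(copies, amount=0.5, probability=1.0, seed=7)  # identical to a
c = jitter_positions(copies, amount=0.5, probability=0.0, seed=7)  # unchanged
```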

Duplicate 3D Node Region Tab

The options in the Region tab allow you to define an area in the viewer where the copies can appear or are prevented from appearing. Like most parameters in Fusion, this area can be animated to cause the copied object to pop on and off the screen based on the region’s shape and setting.

  • Region
    The Region section includes two settings for controlling the shape of the region and the effect the region has on the duplicated objects.
    • Region Mode: There are three options in the Region Mode menu. The default, labeled “Ignore region,” bypasses the region entirely and leaves the copies exactly as they are set in the Controls and Jitter tabs. The option labeled “When inside region” causes the copied objects to appear only when their position falls inside the region defined in this tab. The last option, “When not inside region,” causes the copied objects to appear only when their position falls outside the region defined in this tab.
    • Region: The Region menu determines the shape of the region. The five options include cube, sphere, and rectangle primitive shapes. The mesh option allows you to connect a 3D model into the green mesh input on the node. The green input appears only after the Region menu is set to Mesh. The All setting refers to the entire scene. This allows the copies to pop on and off if the Region mode is animated. When the Region menu is set to Mesh, four other options are displayed. These are described below.
      • Winding Rule: Using four common techniques, the Winding Rule menu determines how the mesh of polygons is determined as an area of volume and consequently how copies locate the vertices in the mesh. Complex overlapping regions of a mesh can cause an irregular fit. Trying a different technique from this menu can sometimes create a better match between the mesh and how the copies interpret the mesh shape.
      • Winding Ray Direction: A 3D model is a mesh of vertices made up of flat polygons. When making this a volume for a region, the Winding Ray Direction is used to determine in which direction the volume of each polygon (like depth extrude) is aligned.
      • Limit by Object ID: When a scene with multiple meshes is connected to the green Mesh input on the node, all the meshes are used as the region. Enabling this checkbox allows you to use the Object ID slider to select the ID for the mesh you want to use as the Region.
      • Object ID: When the Limit by Object ID checkbox is enabled, this slider selects the number ID for the mesh object you want to use for the Region.

Duplicate 3D Node Settings Tab

The Settings tab controls are common to many 3D nodes, and their descriptions can be found in the Common Controls section.

FBX Exporter 3D Node

The FBX Exporter node provides a method of exporting a Fusion 3D scene to the FBX scene interchange format. Each node in Fusion is a single object in the exported file. Objects, lights, and cameras use the name of the node that created them. The node can be set to export a single file for the entire scene, or to output one frame per file.

Setting the Preferences > Global > General > Auto Clip Browse option in the Fusion Studio application, or the Fusion > Fusion Settings > General > Auto Clip Browse option in DaVinci Resolve to Enabled (default), and then adding this node to a composition automatically displays a file browser allowing you to choose where to save the file.

Once you have set up the node, the FBX Exporter is used similarly to a Saver node: clicking the Render button in the toolbar renders out the file.

Besides the FBX format, this node can also export to 3D Studio's .3ds, Collada's .dae, AutoCAD's .dxf, and Alias' .obj formats.

FBX Exporter 3D Node Inputs

The FBX Exporter node has a single orange input.

  • Input: The output of the 3D scene that you want to export connects to the orange input on the FBX Exporter node.

FBX Exporter 3D Node Setup

The input to the FBX Exporter 3D node is any 3D scene you want to export. Below, the node is placed as a separate branch off of the Duplicate 3D node. Only the objects generated by the Duplicate 3D node are exported.

FBX Exporter 3D Node Controls Tab

The Controls tab includes all the parameters you used to decide how the FBX file is created and what elements in the scene get exported.

  • Filename
    This Filename field is used to display the location and file that is output by the node. You can click the Browse button to open a file browser dialog and change the location where the file is saved.
  • Format
    This menu is used to set the format of the output file.

    Not all features of this node are supported in all file formats. For example, the .obj format does not handle animation.
  • Version
    The Version menu is used to select the available versions for the chosen format. The menu’s contents change dynamically to reflect the available versions for that format. If the selected format provides only a single option, this menu is hidden.

    Choosing Default for the FBX formats uses FBX2011.
  • Frame Rate
    This menu sets the frame rate that is saved in the FBX scene.
  • Scale Units By
    This slider changes the working units in the exported FBX file. Changing this can simplify workflows where the destination 3D software uses a different unit scale.
  • Geometry/Lights/Cameras
    These three checkboxes determine whether the node attempts to export the named scene element. For example, deselecting Geometry and Lights but leaving Cameras selected would output only the cameras currently in the scene.
  • Render Range
    Enabling this checkbox saves the Render Range information in the export file, so other applications know the time range of the FBX scene.
  • Reduce Constant Keys
    Enabling this option automatically removes keyframes if the adjacent keyframes have the same value.
  • File Per Frame (No Animation)
    Enabling this option forces the node to export one file per frame, resulting in a sequence of numbered files. This disables the export of animation. Enable this checkbox to reveal the Sequence Start Frame control where you can set the first frame in the sequence to a custom value.
  • Sequence Start Frame
    This thumbwheel control sets a specific start frame for the number sequence applied to the rendered filenames. For example, if Global Start is set to 1 and frames 1–30 are rendered, files are normally numbered 0001–0030. If the Sequence Start Frame is set to 100, the rendered output is numbered 0100–0129.
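
The numbering behavior can be sketched in a few lines. In this Python snippet the base name and extension are placeholders; it simply remaps a rendered frame range onto a custom sequence start:

```python
def sequence_filenames(base, first_frame, last_frame, sequence_start=None, digits=4):
    # Without a custom sequence start, files take their numbers from the
    # rendered frames; with one, numbering is remapped to begin there.
    start = first_frame if sequence_start is None else sequence_start
    count = last_frame - first_frame + 1
    return [f"{base}{start + i:0{digits}d}.fbx" for i in range(count)]

default_names = sequence_filenames("scene", 1, 30)                  # scene0001.fbx onward
remapped = sequence_filenames("scene", 1, 30, sequence_start=100)   # scene0100.fbx onward
```

Either way the same number of files is written; only the numbers in the filenames change.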

FBX Exporter 3D Node Settings Tab

The Settings tab controls are common to many 3D nodes, and their descriptions can be found in the Common Controls section.

FBX Mesh 3D Node

The FBX Mesh 3D node is used to import polygonal geometry from scene files that are saved in the FilmBox (FBX) format. It is also able to import geometry from OBJ, 3DS, DAE, and DXF scene files. This provides a method for working with more complex geometry than is available using Fusion's built-in primitives.

When importing geometry with this node, all the geometry in the FBX file is combined into one mesh with a single pivot and transformation. The FBX Mesh node ignores any animation applied to the geometry.

Alternatively, in Fusion Studio, the File > Import > FBX Scene or in DaVinci Resolve, the Fusion > Import > FBX Scene menu can be used to import an FBX scene. This option creates individual nodes for each camera, light, and mesh in the file. This menu option can also be used to preserve the animation of the objects.

Setting the Preferences > Global > General > Auto Clip Browse option in Fusion Studio, or the Fusion > Fusion Settings > General > Auto Clip Browse option in DaVinci Resolve to Enabled (default), and then adding this node to a composition automatically displays a file browser allowing you to choose the file to import.

FBX Mesh 3D Node Input

  • SceneInput: The orange scene input is an optional connection if you wish to combine other 3D geometry nodes with the imported FBX file.
  • MaterialInput: The green input is the material input that accepts either a 2D image or a 3D material. If a 2D image is provided, it is used as a diffuse texture map for the basic material tab in the node. If a 3D material is connected, then the basic material tab is disabled.

FBX Mesh 3D Node Setup

The FBX Mesh 3D node can be used as a stand-alone node without any other nodes connected to it. The output is connected to a Merge 3D, integrating the FBX model into a larger scene. Below, the FBX Mesh 3D node also has a chrome material connected to its material input.

FBX Mesh 3D Node Controls Tab

Most of the Controls tab is taken up by common controls. The FBX-specific controls included on this tab are primarily information and not adjustments.

  • Size
    The Size slider controls the size of the FBX geometry that is imported. FBX meshes have a tendency to be much larger than Fusion’s default unit scale, so this control is useful for scaling the imported geometry to match the Fusion environment.
  • FBX File
    This field displays the filename and file path of the currently loaded FBX mesh. Click the Browse button to open a file browser that can be used to locate a new FBX file. Despite the node’s name, this node is also able to load a variety of other formats.
    FBX ASCII (*.fbx)
    FBX 5.0 binary (*.fbx)
    AutoCAD DXF (*.dxf)
    3D Studio 3DS (*.3ds)
    Alias OBJ (*.obj)
    Collada DAE (*.dae)
  • Object Name
    This input shows the name of the mesh from the FBX file that is being imported. If this field is blank, then the contents of the FBX geometry are imported as a single mesh. You cannot edit this field; it is set by Fusion when using the File > Import > FBX Scene menu.
  • Take Name
    FBX files can contain multiple instances of an animation, called Takes. This field shows the name of the animation take to use from the FBX file. If this field is blank, then no animation is imported. You cannot edit this field; it is set by Fusion when using the File > Import > FBX Scene menu.
  • Wireframe
    Enabling this checkbox causes the mesh to render only the wireframe for the object. Only the OpenGL renderer in the Renderer 3D node supports wireframe rendering.

FBX Mesh 3D Node Controls, Materials, Transform, and Settings Tabs

The remaining controls for Visibility, Lighting, Matte, Blend Mode, Normals/Tangents, and Object ID are common to many 3D nodes. The same is true of the Materials, Transform, and Settings tabs. Their descriptions can be found in the Common Controls section.

Fog 3D Node

The Fog 3D node applies fog to the scene based on a depth cue. It is the 3D version of the Fog node in the Deep Pixel category. It is designed to work completely in 3D space and takes full advantage of antialiasing and depth of field effects during rendering.

The Fog 3D node essentially retextures the geometry in the scene by applying a color correction based on the object’s distance from the camera. An optional density texture image can be used to apply variation to the correction.

Fog 3D Node Input

The Fog 3D node has two inputs in the Node Editor, only one of which is required for the Fog 3D to project onto a 3D scene.

  • SceneInput: The required orange-colored input accepts the output of a 3D scene on which the fog is “projected.”
  • DensityTexture: This optional green-colored input accepts a 2D image. The color of the fog created by this node is multiplied by the pixels in this image. When creating the image for the density texture, keep in mind that the texture is effectively projected onto the scene from the camera.

Fog 3D Node Setup

The Fog 3D node is placed after the Merge 3D node that contains the scene. Viewing the Fog node will show the fog applied to the objects in the 3D scene based on their Z position.

Fog 3D Node Controls Tab

The Controls tab includes all the parameters you use to decide how the Fog looks and projects onto the geometry in the scene

  • Enable
    Use this checkbox to enable or disable parts of the node from processing. This is not the same as the red switch in the upper-left corner of the Inspector. The red switch disables the tool altogether and passes the image on without any modification. The Enable checkbox is limited to the effect part of the tool. Other parts, like scripts in the Settings tab, still process as normal.
  • Show Fog in View
    By default, the fog created by this node is visible only when the scene is viewed using a camera node. When this checkbox is enabled, the fog becomes visible in the scene from all points of view.
  • Color
    This control can be used to set the color of the fog. The color is also multiplied by the density texture image, if one is connected to the green input on the node.
  • Radial
    By default, the fog is created based on the perpendicular distance to a plane (parallel with the near plane) passing through the eye point. When the Radial option is checked, the radial distance to the eye point is used instead of the perpendicular distance. The problem with perpendicular distance fog is that when you move the camera about, as objects on the left or right side of the frustum move into the center, they become less fogged although they remain the same distance from the eye. Radial fog fixes this. Radial fog is not always desirable, however. For example, if you are fogging an object close to the camera, like an image plane, the center of the image plane could be unfogged while the edges could be fully fogged.
  • Type
    This control is used to determine the type of falloff applied to the fog.
    • Linear: Defines a linear falloff for the fog.
    • Exp: Creates an exponential nonlinear falloff.
    • Exp2: Creates a stronger exponential falloff.
  • Near/Far Fog Distance
    This control expresses the range of the fog in the scene as units of distance from the camera. The Near Distance determines where the fog starts, while the Far Distance sets the point where the fog has its maximum effect. Fog is cumulative, so the farther an object is from the camera, the thicker the fog should appear.
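
The three falloff types correspond to the classic fixed-function fog formulas. The Python sketch below is illustrative (the near, far, and density values are assumptions, and Fusion's exact math may differ); it shows how each mode attenuates a surface with distance.

```python
import math

def fog_factor(distance, mode, near=1.0, far=10.0, density=0.35):
    # Returns how much of the original surface color survives:
    # 1.0 = no fog, 0.0 = fully fogged (classic fixed-function formulas).
    if mode == "linear":
        f = (far - distance) / (far - near)
    elif mode == "exp":
        f = math.exp(-density * distance)
    elif mode == "exp2":
        f = math.exp(-((density * distance) ** 2))
    else:
        raise ValueError(mode)
    return min(max(f, 0.0), 1.0)

# The farther the object, the thicker the fog, in every mode:
near_obj = fog_factor(2.0, "exp")
far_obj = fog_factor(8.0, "exp")
```

At larger distances Exp2 falls off faster than Exp, which matches its description as a stronger exponential falloff.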

Fog 3D Node Settings Tab

The Settings tab controls are common to many 3D nodes, and their descriptions can be found in the Common Controls section.

Image Plane 3D Node

The Image Plane node produces 2D planar geometry in 3D space. The node is most commonly used to represent 2D images on “cards” in the 3D space. The aspect of the image plane is determined by the aspect of the image connected to the material input. If you do not want the aspect ratio of the image to modify the “card” geometry, then use a Shape 3D node instead.

Image Plane 3D Node Inputs

Of the two inputs on this node, the material input is the primary connection you use to add an image to the planar geometry created in this node.

  • SceneInput: This orange input expects a 3D scene. As this node creates flat, planar geometry, this input is not required.
  • MaterialInput: The green-colored material input accepts either a 2D image or a 3D material. It provides the texture and aspect ratio for the rectangle based on the connected source such as a Loader node in Fusion Studio or a MediaIn node in DaVinci Resolve. The 2D image is used as a diffuse texture map for the basic material tab in the Inspector. If a 3D material is connected, then the basic material tab is disabled.

Image Plane 3D Node Setup

The Image Plane 3D node is primarily used to bring a video clip into a 3D composite. The MediaIn or Loader node is connected to the Image Plane 3D node, and the Image Plane 3D is then connected to a Merge 3D node. Viewing the Merge 3D node will show all the Image Plane 3D nodes and other elements connected to it.

Image Plane 3D Node Controls Tab

Most of the Controls tab is taken up by common controls. The Image Plane specific controls at the top of the Inspector allow minor adjustments.

  • Lock Width/Height
    When checked, the subdivision of the plane is applied evenly in X and Y. When unchecked, there are two sliders for individual control of the subdivisions in X and Y. This defaults to on.
  • Subdivision Level
    Use the Subdivision Level slider to set the number of subdivisions used when creating the image plane. If the Open GL viewer and renderer are set to Vertex lighting, the more subdivisions in the mesh, the more vertices are available to represent the lighting. So, high subdivisions can be useful when working interactively with lights.
  • Wireframe
    Enabling this checkbox causes the mesh to render only the wireframe for the object when using the OpenGL renderer

Image Plane 3D Node Controls, Materials, Transform, and Settings Tabs

The remaining controls for Visibility, Lighting, Matte, Blend Mode, Normals/Tangents, and Object ID are common to many 3D nodes. The same is true of the Materials, Transform, and Settings tabs. Their descriptions can be found in the Common Controls section.

Locator 3D Node

The Locator 3D node’s purpose is to transform a point in 3D space to 2D coordinates that other nodes can use as part of expressions or modifiers.

When the Locator is provided with a camera and the dimensions of the output image, it transforms the coordinates of a 3D control into 2D screen space. The 2D position is exposed as a numeric output that can be connected to/from other nodes. For example, to connect the center of an ellipse to the 2D position of the Locator, right-click on the Mask center control and select Connect To > Locator 3D > Position.
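
This 3D-to-2D mapping can be sketched with a simple pinhole-projection model. The following Python sketch is purely illustrative (the function name, argument conventions, and camera orientation are assumptions, not Fusion's internals), but it shows why the Locator also needs a camera plus the image's width, height, and pixel aspect to produce pixel coordinates:

```python
import math

def project_to_2d(point_cam, aov_y_deg, width, height, pixel_aspect=1.0):
    """Illustrative pinhole projection of a 3D point (in camera space,
    camera at the origin looking down -Z) to 2D pixel coordinates,
    conceptually what a Locator 3D computes. Not Fusion's actual code."""
    x, y, z = point_cam
    if z >= 0:
        return None  # point is behind the camera
    f = 1.0 / math.tan(math.radians(aov_y_deg) / 2)  # focal factor from Y angle of view
    aspect = (width * pixel_aspect) / height
    ndc_x = (x / -z) * f / aspect  # normalized device coordinates in [-1, 1]
    ndc_y = (y / -z) * f
    # Map NDC to pixel coordinates (origin at the lower-left corner)
    return ((ndc_x * 0.5 + 0.5) * width, (ndc_y * 0.5 + 0.5) * height)

# A point centered in front of the camera lands at the image center:
print(project_to_2d((0.0, 0.0, -5.0), 45.0, 1920, 1080))  # (960.0, 540.0)
```

Without the output dimensions and pixel aspect, only the normalized position is defined, which is why the Width, Height, and Pixel Aspect controls must match the associated renderer.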

Locator 3D Node Inputs

Two inputs accept 3D scenes as sources. The orange scene input is required, while the green Target input is optional.

  • SceneInput: The required orange scene input accepts the output of a 3D scene. This scene should contain the object or point in 3D space that you want to convert to 2D coordinates.
  • Target: The optional green target input accepts the output of a 3D scene. When provided, the transform center of the scene is used to set the position of the Locator. The transformation controls for the Locator become offsets from this position.

Locator 3D Node Setup

The scene provided to the Locator’s input must contain the camera through which the coordinates are projected. So, the best practice is to place the Locator after the Merge that introduces the camera to the scene.

If an object is connected to the Locator node’s target input, the Locator is positioned at the object’s center, and the Transformation tab’s translation XYZ sliders function in the object’s local coordinate space instead of global scene space. This is useful for tracking an object’s position despite any additional transformations applied further downstream.

Locator 3D Node Controls Tab

Most of the controls for the Locator 3D are cosmetic, dealing with how the locator appears and whether it is rendered in the final output. However, the Camera Settings are critical to getting the results you’re looking for.

  • Size
    The Size slider is used to set the size of the Locator’s onscreen crosshair.
  • Color
    A basic Color control is used to set the color of the Locator’s onscreen crosshair.
  • Matte
    Enabling the Is Matte option applies a special texture to this object, causing this object to not only become invisible to the camera, but also making everything that appears directly behind it invisible as well. This option overrides all textures. For more information, see Chapter 85, “3D Compositing Basics,” in the DaVinci Resolve Reference Manual or Chapter 25 in the Fusion Reference Manual.
    • Is Matte: When activated, objects whose pixels fall behind the matte object’s pixels in Z do not get rendered.
    • Opaque Alpha: Sets the Alpha value of the matte object to 1. This checkbox is visible only when the Is Matte option is enabled.
    • Infinite Z: Sets the value in the Z-channel to infinity. This checkbox is visible only when the Is Matte option is enabled.
  • Sub ID
    The Sub ID slider can be used to select an individual subelement of certain geometry, such as an individual character produced by a Text 3D node or a specific copy created by a Duplicate 3D node.
  • Make Renderable
    Defines whether the Locator is rendered as a visible object by the OpenGL renderer. The software renderer is not currently capable of rendering lines and hence ignores this option.
  • Unseen by Camera
    This checkbox control appears when the Make Renderable option is selected. If the Unseen by Camera checkbox is selected, the Locator is visible in the viewers but not rendered into the output image by the Renderer 3D node.
  • Camera
    This drop-down control is used to select the Camera in the scene that defines the screen space used for 3D to 2D coordinate transformation.
  • Use Frame Format Settings
    Select this checkbox to override the width, height, and pixel aspect controls, and force them to use the values defined in the composition’s Frame Format preferences instead.
  • Width, Height, and Pixel Aspect
    In order for the Locator to generate a correct 2D transformation, it must know the dimensions and aspect of the image. These controls should be set to the same dimensions as the image produced by a renderer associated with the camera specified above. Right-clicking on these controls displays a contextual menu containing the frame formats configured in the composition’s preferences.

Locator 3D Node Transform and Settings tabs

The remaining Transform and Settings tabs are common to many 3D nodes. Their descriptions can be found in the Common Controls section.

Merge 3D Node

The Merge 3D node is the primary node in Fusion that you use to combine separate 3D elements into the same 3D environment.

For example, in a scene created with an image plane, a camera, and a light, the camera would not be able to see the image plane and the light would not affect the image plane until all three objects are introduced into the same environment using the Merge 3D node.

The Merge provides the standard transformation controls found on most nodes in Fusion’s 3D suite. Unlike those nodes, changes made to the translation, rotation, or scale of the Merge affect all the objects connected to the Merge. This behavior forms the basis for all parenting in Fusion’s 3D environment.
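
This parenting behavior can be sketched as matrix composition: the Merge's transform is applied on top of each connected object's own transform. The Python sketch below is purely conceptual (the helper names are made up for illustration; this is not Fusion code):

```python
# Conceptual sketch of Merge 3D parenting: the Merge's transform
# composes with each child's transform, so moving the Merge moves
# every object connected to it.

def translate(tx, ty, tz):
    # 4x4 translation matrix, row-major
    return [[1, 0, 0, tx], [0, 1, 0, ty], [0, 0, 1, tz], [0, 0, 0, 1]]

def matmul(a, b):
    return [[sum(a[i][k] * b[k][j] for k in range(4)) for j in range(4)]
            for i in range(4)]

def apply(m, p):
    x, y, z = p
    v = [x, y, z, 1]
    return tuple(sum(m[i][k] * v[k] for k in range(4)) for i in range(3))

merge_xf = translate(10, 0, 0)        # moving the Merge 3D...
child_xf = translate(0, 5, 0)         # ...also moves this child object
world = matmul(merge_xf, child_xf)    # parent transform applied on top
print(apply(world, (0.0, 0.0, 0.0)))  # (10.0, 5.0, 0.0)
```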

Merge 3D Node Inputs

The Merge node displays only two inputs initially, but as each input is connected, a new input appears on the node, ensuring there is always one free input for adding a new element to the scene.

  • SceneInput: These multicolored inputs are used to connect image planes, 3D cameras, lights, entire 3D scenes, as well as other Merge 3D nodes. There is no limit to the number of inputs this node can accept. The node dynamically adds more inputs as needed, ensuring that there is always at least one input available for connection.

Merge 3D Node Setup

The Merge 3D is the hub of a 3D composite. All elements in a 3D scene connect into a Merge 3D.

Multiple Merge 3D nodes can be strung together to control lighting or for neater organization. The last Merge 3D in a string must connect to a Renderer 3D to be output as a 2D image.

Merge 3D Node Controls Tab

The Controls tab is used only to pass through any lights connected to the Merge 3D node.

  • Pass Through Lights
    When the Pass Through Lights checkbox is selected, lights are passed through the Merge into its output to affect downstream elements. Normally, the lights are not passed downstream to affect the rest of the scene. This is frequently used to ensure projections are not applied to geometry introduced later in the scene.

Merge 3D Node Transform and Settings Tabs

The remaining controls for the Transform and Settings tabs are common to most 3D nodes. Their descriptions can be found in the Common Controls section.

Override 3D Node

The Override node lets you change object-specific options for every object in a 3D scene simultaneously. This is useful, for example, when you wish to set every object in the input scene to render as a wireframe. Additionally, this node is the only way to set the wireframe, visibility, lighting, matte, and ID options for 3D particle systems and the Text 3D node.

Override 3D Node Inputs

  • SceneInput: The orange Scene input accepts the output of a Merge 3D node or any node creating a 3D scene.

Override 3D Node Setup

The Override 3D node is frequently used in conjunction with the Replace Material node to produce isolated passes. For example, in the node tree below, a scene branches out to an Override node that turns off the Affected by Lights property of each node, then connects to a Replace Material node that applies a Falloff shader to produce a falloff pass of the scene.

Override 3D Node Controls Tab

The function of the controls found in the Controls tab is straightforward. First, you select the option to override using the Do [Option] checkbox. That reveals a control that can be used to set the value of the option itself. The individual options are not documented here; a full description of each can be found in any geometry creation node in this chapter, such as the Image Plane, Cube, or Shape nodes.

  • Do [Option]
    Enables the override for this option.
  • [Option]
    If the Do [Option] checkbox is enabled, then the control for the property itself becomes visible. The control values of the properties for all upstream objects are overridden by the new value.
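
The override pattern can be sketched conceptually: each enabled Do [Option] forces the corresponding property to a single value on every upstream object. The dictionary-based scene below is purely illustrative, not Fusion's data model:

```python
# Conceptual sketch of an Override-style node (illustrative only):
# for each enabled "Do [Option]" flag, the corresponding property is
# forced to the override value on every upstream object.

scene = [
    {"name": "Cube",  "wireframe": False, "visible": True},
    {"name": "Shape", "wireframe": False, "visible": True},
]

overrides = {"wireframe": True}  # only options with Do [Option] enabled

for obj in scene:
    for option, value in overrides.items():
        obj[option] = value  # override replaces each object's own value

print([obj["wireframe"] for obj in scene])  # [True, True]
```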

Override 3D Node Settings Tabs

The Settings tab includes controls common to most 3D nodes. Their descriptions can be found in the Common Controls section.

Point Cloud 3D Node

A point cloud is generally a large group of null objects created by 3D tracking or modeling software.

When produced by 3D tracking software, the points typically represent each of the patterns tracked to create the 3D camera path. These point clouds can be used to identify a ground plane and to orient other 3D elements with the tracked image. The Point Cloud 3D node creates a point cloud either by importing a file from a 3D tracking application or generating it when you use the Camera Tracker node.

Point Cloud 3D Node Inputs

The Point Cloud has only a single input for a 3D scene.

  • SceneInput: This orange input accepts a 3D scene.

Point Cloud 3D Node Setup

The Point Cloud 3D node is viewed and connected through a Merge 3D node, integrating it into the larger 3D scene.

Point Cloud 3D Node Controls Tab

The Controls tab is where you can import the point cloud from a file and control its appearance in the viewer.

  • Style
    The Style menu allows you to display the point cloud as crosshairs or points in the viewer.
  • Lock X/Y/Z
    Deselect this checkbox to provide individual control over the size of the X, Y, and Z arms of the points in the cloud.
  • Size X/Y/Z
    These sliders can be used to increase the size of the onscreen crosshairs used to represent each point.
  • Density
    This slider defines the probability of displaying a specific point. If the value is 1, then all points are displayed. A value of 0.2 displays only about one point in five.
  • Color
    Use the standard Color control to set the color of onscreen crosshair controls.
  • Import Point Cloud
    The Import Point Cloud button displays a dialog to import a point cloud from another application.

Supported file types are:

  • Alias’s Maya (.ma)
  • 3DS Max ASCII Scene Export (.ase)
  • NewTek’s LightWave (.lws)
  • Softimage XSI (.xsi)
  • Make Renderable
    Determines whether the point cloud is visible in the OpenGL viewer and in final renderings made by the OpenGL renderer. The software renderer does not currently support rendering of visible crosshairs for this node.
  • Unseen by Camera
    This checkbox control appears when the Make Renderable option is selected. If the Unseen by Camera checkbox is selected, the point cloud is visible in the viewers but not rendered into the output image by the Renderer 3D node.
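
The Density control's probabilistic behavior can be sketched as a per-point test. This Python sketch is illustrative only; the function name and the seeding are assumptions, not Fusion's implementation:

```python
import random

def visible_points(points, density, seed=1):
    """Illustrative sketch of a Density-style control: each point is
    displayed with probability equal to the density value."""
    rng = random.Random(seed)
    if density >= 1.0:
        return list(points)  # a density of 1 displays every point
    return [p for p in points if rng.random() < density]

points = list(range(1000))
print(len(visible_points(points, 1.0)))  # 1000: every point shown
print(len(visible_points(points, 0.2)))  # roughly 200 (about one in five)
```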

Point Cloud 3D Node Transform and Settings Tabs

The remaining Transform and Settings tabs are common to many 3D nodes. Their descriptions can be found in the Common Controls section.

Point Cloud 3D Node Onscreen Contextual Menu

Frequently, one or more of the points in an imported point cloud is manually assigned to track the position of a specific feature. These points usually have names that distinguish them from the rest of the points in the cloud. To see the current name for a point, hover the mouse pointer directly over a point, and after a moment a small tooltip appears with the name of the point.

When the Point Cloud 3D node is selected, a submenu is added to the viewer’s contextual menu with several options that make it simple to locate, rename, and separate these points from the rest of the point cloud.

The contextual menu contains the following options:

  • Find: Selecting this option from the viewer contextual menu opens a dialog to search for and select a point by name. Each point that matches the pattern is selected.
  • Rename: Rename any point by selecting Rename from the contextual menu. Type the new name into the dialog that appears and press Return. The point is given that name, with a four-digit number added to the end. For example, renaming a point to window results in window0000; renaming multiple points produces window0000, window0001, and so on. Names must be valid Fusion identifiers (i.e., no spaces allowed, and the name cannot start with a number).
  • Delete: Selecting this option deletes the currently selected points.
  • Publish: Normally, the exact position of a point in the cloud is not exposed. To expose the position, select the points, and then select the Publish option from this contextual menu. This adds a coordinate control to the control panel for each published point that displays the point’s current location.

Point Cloud 3D Node Additional Toolbar and Shortcuts

Delete Selected Points: Del
Select All: Shift+A
Find Points: Shift+F
Rename Selected Points: F2
Create New Point: Shift+C
Toggle Names on None/Selected/Published/All Points: Shift+N
Toggle Locations on None/Selected/Published/All Points: Shift+L
Publish Selected Points: Shift+P
Unpublish Selected Points: Shift+U
Create a Shape at Selected Points: Shift+S
Create and Fit an ImagePlane to Selected Points: Shift+I
Create a Locator at Selected Points: Shift+O

Projector 3D Node

The Projector 3D node is used to project an image upon 3D geometry. This can be useful in many ways: texturing objects with multiple layers, applying a texture across multiple separate objects, projecting background shots from the camera’s viewpoint, image-based rendering techniques, and more. The Projector node is just one of several nodes capable of projecting images and textures. Each method has advantages and disadvantages. For more information, see Chapter 85, “3D Compositing Basics,” in the DaVinci Resolve Reference Manual or Chapter 25 in the Fusion Reference Manual.

Projected textures can be allowed to “slide” across the object if the object moves relative to the Projector 3D. Alternatively, the projector and the object can be grouped with a Merge 3D so they move as one, keeping the texture locked to the object.

The Projector 3D node’s capabilities and restrictions are best understood if the Projector is considered to be a variant on the SpotLight node. The fact that the Projector 3D node is actually a light has several important consequences when used in Light or Ambient Light projection mode:

  • Lighting must be turned on for the results of the projection to be visible.
  • The light emitted from the projector is treated as diffuse/specular light. This means that it is affected by the surface normals and can cause specular highlights. If this is undesirable, set the Projector 3D to project into the Ambient Light channel.
  • Enabling Shadows causes Projector 3D to cast shadows.
  • Just as with other lights, the light emitted by a Projector 3D only affects objects that feed into the first Merge 3D that is downstream of the Projector 3D node in the node tree.
  • Enabling Merge 3D’s Pass Through Lights checkbox allows the projection to light objects further downstream.
  • The light emitted by a Projector 3D is controlled by the Lighting options settings on objects and the Receives Lighting options on materials.
  • Alpha values in the projected image do not clip geometry in Light or Ambient Light mode. Use Texture mode instead.
  • If two projections overlap, their light contributions are added.
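
The additive behavior of overlapping projections in the last point can be sketched in Python (illustrative only; the function name is an assumption):

```python
def combine_projections(*contributions):
    """Illustrative sketch: overlapping light projections sum per
    RGB channel rather than replacing one another."""
    return tuple(sum(channel) for channel in zip(*contributions))

# Two overlapping projections add their light contributions:
print(combine_projections((0.25, 0.125, 0.0), (0.25, 0.25, 0.5)))  # (0.5, 0.375, 0.5)
```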

To project re-lightable textures or textures for non-diffuse color channels (like Specular Intensity or Bump), use the Texture projection mode instead:

  • Projections in Texture mode only strike objects that use the output of the Catcher node for all or part of the material applied to that object.
  • Texture mode projections clip the geometry according to the Alpha channel of the projected image.

See the section for the Catcher node for additional details.

Camera Projection vs. Projector 3D Node

The Camera 3D node also provides a projection feature, and it should be used when the projection is meant to match a camera, as this node has more control over aperture, film back, and clip planes. The Projector 3D node was designed to be used as a custom light in 3D scenes for layering and texturing. The projector provides better control over light intensity, color, decay, and shadows.

Projector 3D Node Inputs

The Projector 3D has two inputs: one for the scene you are projecting on to and another for the projected image.

  • SceneInput: The orange scene input accepts a 3D scene. If a scene is connected to this input, then transformations applied to the spotlight also affect the rest of the scene.
  • ProjectiveImage: The white input expects a 2D image to be used for the projection. This connection is required.

Projector 3D Node Setup

As an example, the Projector 3D node below is used to project a texture (MediaIn2) onto 3D primitives as a way to create a simple 3D set. All the set elements are connected into a Merge 3D, which outputs the projected set into a larger scene with camera, lights, and other elements. As an alternative, the Projector 3D node could be inserted between the two Merge 3D nodes; however, the transform controls in the Projector 3D node would then affect the entire scene.

Projector 3D Node Controls Tab

  • Enabled
    When this checkbox is enabled, the projector affects the scene. Disable the checkbox to turn off the projector. This is not the same as the red switch in the upper-left corner of the Inspector. The red switch disables the tool altogether and passes the image on without any modification. The Enabled checkbox is limited to the effect part of the tool. Other parts, like scripts in the Settings tab, still process as normal.
  • Color
    The input image is multiplied by this color before being projected into the scene.
  • Intensity
    Use this slider to set the Intensity of the projection when the Light and Ambient Light projection modes are used. In Texture mode, this option scales the Color values of the texture after multiplication by the color.
  • Decay Type
    A projector defaults to No Falloff, meaning that its light has equal intensity on geometry, despite the distance from the projector to the geometry. To cause the intensity to fall off with distance, set the Decay type to either Linear or Quadratic modes.
  • Angle
    The Cone Angle of the node refers to the width of the cone where the projector emits its full intensity. The larger the value, the wider the cone, up to a limit of 90 degrees.
  • Fit Method
    The Fit Method determines how the projection is fitted within the projection cone.

    The first thing to know is that although this documentation may call it a “cone,” the Projector 3D and Camera 3D nodes do not project an actual cone; it’s more of a pyramid of light with its apex at the camera/projector. The Projector 3D node always projects a square pyramid of light—i.e., its X and Y angles of view are the same. The pyramid of light projected by the Camera 3D node can be non-square depending on what the Film Back is set to in the camera. The aspect of the image connected into the Projector 3D/Camera 3D does not affect the X/Y angles of the pyramid, but rather the image is scaled to fit into the pyramid based upon the fit options.

    When both the aspect of the pyramid (AovY/AovX) and the aspect of the image (height * pixelAspectY) / (width * pixelAspectX) are the same, there is no need for the fit options, and in this case the fit options all do the same thing. However, when the aspect of the image and the pyramid (as determined by the Film Back settings in Camera 3D) are different, the fit options become important.

    For example, Fit by Width fits the width of the image across the width of the Camera 3D pyramid. In this case, if the image has a greater aspect ratio than the aspect of the pyramid, some of the projection extends vertically outside of the pyramid.

    There are five options:
    • Inside: The image is uniformly scaled so that its largest dimension fits inside the cone. Another way to think about this is that it scales the image as big as possible subject to the restriction that the image is fully contained within the pyramid of the light. This means, for example, that nothing outside the pyramid of light ever receives any projected light.
    • Width: The image is uniformly scaled so that its width fits inside the cone. Note that the image could still extend outside the cone in its height direction.
    • Height: The image is uniformly scaled so that its height fits inside the cone. Note that the image could still extend outside the cone in its width direction.
    • Outside: The image is uniformly scaled so that its smallest dimension fits inside the cone. Another way to think about this is that it scales the image as small as possible subject to the restriction that the image covers the entire pyramid (i.e., the pyramid is fully contained within the image). This means that any pixel of any object inside the pyramid of light always gets illuminated.
    • Stretch: The image is non-uniformly scaled, so it exactly covers the cone of the projector.
  • Projection Mode
    • Light: Projects the texture as a diffuse/specular light.
    • Ambient Light: Uses an ambient light for the projection.
    • Texture: When used in conjunction with the Catcher node, this mode allows re-lightable texture projections. The projection strikes only objects that use the catcher material as part of their material shaders.

      One useful trick is to connect a Catcher node to the Specular Texture input on a 3D Material node (such as a Blinn). This causes any object using the Blinn material to receive the projection as part of the specular highlight. This technique can be used in any material input that uses texture maps, such as the Specular and Reflection maps.
  • Shadows
    Since the projector is based on a spotlight, it is also capable of casting shadows using shadow maps. The controls under this reveal are used to define the size and behavior of the shadow map.
    • Enable Shadows: The Enable Shadows checkbox should be selected if the light is to produce shadows. This defaults to selected.
    • Shadow Color: Use this standard Color control to set the color of the shadow. This defaults to black (0, 0, 0).
    • Density: The Shadow Density determines the transparency of the shadow. A density of 1.0 produces a completely opaque shadow, whereas lower values make the shadow more transparent.
    • Shadow Map Size: The Shadow Map Size control determines the size of the bitmap used to create the shadow map. Larger values produce more detailed shadow maps at the expense of memory and performance.
    • Shadow Map Proxy: The Shadow Map Proxy determines the size of the shadow map used for proxy and auto proxy calculations. A value of 0.5 would use a 50% shadow map.
    • Multiplicative and Additive Bias: Shadows are essentially textures applied to objects in the scene, so Z-fighting can occur, where portions of the object that should receive the shadow render over the top of it instead. Bias works by adding a small depth offset to move the shadow away from the surface it is shadowing, eliminating the Z-fighting. Too little bias, and objects can self-shadow; too much bias, and the shadow can become separated from the surface. Adjust the multiplicative bias first, then fine-tune the result using the additive bias control.
    • Force All Materials Non-Transmissive: Normally, an RGBAZ shadow map is used when rendering shadows. By enabling this option, you are forcing the renderer to use a Z-only shadow map. This can lead to significantly faster shadow rendering while using one-fifth as much memory. The disadvantage is that you can no longer cast “stained-glass”-like shadows.
    • Shadow Map Sampling: Sets the quality for sampling of the shadow map.
    • Softness: Soft edges in shadows are produced by filtering the shadow map when it is sampled. Fusion provides three separate filtering methods that produce different effects when rendering shadows.
      • None: Shadows have a hard edge. No filtering of the shadow map is done at all. The advantage of this method is that you only have to sample one pixel in the shadow map, so it is fast.
      • Constant: Shadow edges have a constant softness. A filter with a constant width is used when sampling the shadow map. Adjusting the Constant Softness slider controls the size of the filter. Note that the larger you make the filter, the longer it takes to render the shadows. If the Softness is set to constant, then a Constant slider appears. It can be used to set the overall softness of the shadow.
      • Variable: The softness of shadow edges grows the farther away the shadow receiver is from the shadow caster. The variable softness is achieved by changing the size of the filter based on the distance between the receiver and caster. When this option is selected, the Softness Falloff, Min Softness and Max Softness sliders appear.
        • Softness Falloff: The Softness Falloff slider appears when the Softness is set to Variable. This slider controls how fast the softness of shadow edges grows with distance. More precisely, it controls how fast the shadow map filter size grows based on the distance between shadow caster and receiver. Its effect is mediated by the values of the Min and Max Softness sliders.
        • Min Softness: The Min Softness slider appears when the Softness is set to Variable. This slider controls the minimum softness of the shadow. The closer the shadow is to the object casting the shadow, the sharper it is, up to the limit set by this slider.
        • Max Softness: The Max Softness slider appears when the Softness is set to Variable. This slider controls the maximum softness of the shadow. The farther the shadow is from the object casting the shadow, the softer it is, up to the limit set by this slider.
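
The five Fit Method options described above can be sketched as scale factors derived from the image and pyramid aspects. This Python sketch is illustrative, not Fusion's code; the normalization (a pyramid cross-section of pyramid_aspect × 1.0) and the function name are assumptions:

```python
def fitted_size(image_aspect, pyramid_aspect, method):
    """Illustrative sketch of the Fit Method options. Sizes are
    normalized so the pyramid cross-section is (pyramid_aspect x 1.0)
    and the unscaled image is (image_aspect x 1.0). Returns the scaled
    image size (width, height)."""
    s = pyramid_aspect / image_aspect  # uniform scale that matches widths
    if method == "Width":
        scale = (s, s)              # width fits; height may spill out
    elif method == "Height":
        scale = (1.0, 1.0)          # height fits; width may spill out
    elif method == "Inside":
        scale = (min(s, 1.0),) * 2  # image fully inside the pyramid
    elif method == "Outside":
        scale = (max(s, 1.0),) * 2  # pyramid fully covered by the image
    elif method == "Stretch":
        scale = (s, 1.0)            # non-uniform: exact cover
    else:
        raise ValueError(method)
    return (image_aspect * scale[0], 1.0 * scale[1])

# A 2:1 image projected into a square (1:1) pyramid:
print(fitted_size(2.0, 1.0, "Inside"))   # (1.0, 0.5): fully inside
print(fitted_size(2.0, 1.0, "Outside"))  # (2.0, 1.0): covers the pyramid
```

For this wider-than-pyramid image, Width happens to coincide with Inside and Height with Outside; the options differ only in which dimension is allowed to spill outside the pyramid.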

Projector 3D Node Transform and Settings Tabs

The remaining Transform and Settings tabs are common to many 3D nodes. Their descriptions can be found in the Common Controls section.

Renderer 3D Node

The Renderer 3D node converts the 3D environment into a 2D image using either a default perspective camera or one of the cameras found in the scene. Every 3D scene in a composition terminates with at least one Renderer 3D node. The Renderer node includes a software and OpenGL render engine to produce the resulting image. Additional render engines may also be available via third-party plug-ins.

The software render engine uses the system’s CPU only to produce the rendered images. It is usually much slower than the OpenGL render engine, but produces consistent results on all machines, making it essential for renders that involve network rendering. The Software mode is required to produce soft shadows, and generally supports all available illumination, texture, and material features.

The OpenGL render engine employs the GPU processor on the graphics card to accelerate the rendering of the 2D images. The output may vary slightly from system to system, depending on the exact graphics card installed. The graphics card driver can also affect the results from the OpenGL renderer. The OpenGL render engine’s speed makes it possible to provide customized supersampling and realistic 3D depth of field options. The OpenGL renderer cannot generate soft shadows. For soft shadows, the software renderer is recommended.

Like most nodes, the Renderer’s motion blur settings can be found under the Common Controls tab. Be aware that scenes containing particle systems require that the Motion Blur settings on the pRender nodes exactly match the settings on the Renderer 3D node. Otherwise, the subframe renders conflict, producing unexpected (and incorrect) results.

Renderer 3D Node Inputs

The Renderer 3D node has two inputs. The main scene input takes in the Merge 3D or other 3D nodes that need to be converted to 2D. The effect mask limits the Renderer 3D output.

  • SceneInput: The orange scene input is a required input that accepts a 3D scene that you want to convert to 2D.
  • EffectMask: The blue effects mask input uses a 2D image to mask the output of the node.

Renderer 3D Node Setup

All 3D scenes must end with a Renderer 3D node. The Renderer 3D node is used to convert a 3D scene into a 2D image. Below, the Renderer 3D node takes the output of a Merge 3D node, and renders the 3D scene into a 2D image.

Renderer 3D Node Controls Tab

  • Camera
    The Camera menu is used to select which camera from the scene is used when rendering. The Default setting uses the first camera found in the scene. If no camera is located, the default perspective view is used instead.
  • Eye
    The Eye menu is used to configure rendering of stereoscopic projects. The Mono option ignores the stereoscopic settings in the camera. The Left and Right options translate the camera using the stereo Separation and Convergence options defined in the camera to produce either left- or right-eye outputs. The Stacked option places the two images one on top of the other instead of side by side.
  • Reporting
    The first two checkboxes in this section can be used to determine whether the node prints warnings and errors produced while rendering to the console. The second set of checkboxes tells the node whether it should abort rendering when a warning or error is encountered. The default for this node enables all four checkboxes.
  • Renderer Type
    This menu lists the available render engines. Fusion provides three: the software renderer, OpenGL renderer, and the OpenGL UV render engine. Additional renderers can be added via third-party plug-ins.

    All the controls found below this drop-down menu are added by the render engine. They may change depending on the options available to each renderer. So, each renderer is described in its own section below.
    • Software Controls
      • Output Channels
        Besides the usual Red, Green, Blue, and Alpha channels, the software renderer can also embed the following channels into the image. Enabling additional channels consumes additional memory and processing time, so these should be used only when required.
        • RGBA: This option tells the renderer to produce the Red, Green, Blue, and Alpha color channels of the image. These channels are required, and they cannot be disabled.
        • Z: This option enables rendering of the Z-channel. The pixels in the Z-channel contain a value that represents the distance of each pixel from the camera. Note that the Z-channel values cannot include anti-aliasing. In pixels where multiple depths overlap, the frontmost depth value is used for this pixel.
        • Coverage: This option enables rendering of the Coverage channel. The Coverage channel contains information about which pixels in the Z-buffer provide coverage (are overlapping with other objects). This helps nodes that use the Z-buffer to provide a small degree of anti-aliasing. The value of the pixels in this channel indicates, as a percentage, how much of the pixel is composed of the foreground object.
        • BgColor: This option enables rendering of the BgColor channel. This channel contains the color values from objects behind the pixels described in the Coverage channel.
        • Normal: This option enables rendering of the X, Y, and Z Normals channels. These three channels contain pixel values that indicate the orientation (direction) of each pixel in the 3D space. A color channel containing values in a range from [–1,1] represents each axis.
        • TexCoord: This option enables rendering of the U and V mapping coordinate channels. The pixels in these channels contain the texture coordinates of the pixel. Although texture coordinates are processed internally within the 3D system as three-component UVW, Fusion images store only UV components. These components are mapped into the Red and Green color channels.
        • ObjectID: This option enables rendering of the ObjectID channel. Each object in the 3D environment can be assigned a numeric identifier when it is created. The pixels in this floating-point image channel contain the values assigned to the objects that produced the pixel. Empty pixels have an ID of 0, and the channel supports values as high as 65534. Multiple objects can share a single Object ID. This buffer is useful for extracting mattes based on the shapes of objects in the scene.
        • MaterialID: This option enables rendering of the Material ID channel. Each material in the 3D environment can be assigned a numeric identifier when it is created. The pixels in this floating-point image channel contain the values assigned to the materials that produced the pixel. Empty pixels have an ID of 0, and the channel supports values as high as 65534. Multiple materials can share a single Material ID. This buffer is useful for extracting mattes based on a texture; for example, a mask containing all the pixels that comprise a brick texture.
      • Lighting
        • Enable Lighting: When the Enable Lighting checkbox is selected, objects are lit by any lights in the scene. If no lights are present, all objects are black.
        • Enable Shadows: When the Enable Shadows checkbox is selected, the renderer produces shadows, at the cost of some speed.
    • OpenGL Controls
      • Output Channels
        In addition to the usual Red, Green, Blue, and Alpha channels, the OpenGL render engine can also embed the following channels into the image. Enabling additional channels consumes additional memory and processing time, so these should be used only when required.
        • RGBA: This option tells the renderer to produce the Red, Green, Blue, and Alpha color channels of the image. These channels are required, and they cannot be disabled.
        • Z: This option enables rendering of the Z-channel. The pixels in the Z-channel contain a value that represents the distance of each pixel from the camera. Note that the Z-channel values cannot include anti-aliasing. In pixels where multiple depths overlap, the frontmost depth value is used for this pixel.
        • Normal: This option enables rendering of the X, Y, and Z Normals channels. These three channels contain pixel values that indicate the orientation (direction) of each pixel in the 3D space. A color channel containing values in a range from [–1,1] represents each axis.
        • TexCoord: This option enables rendering of the U and V mapping coordinate channels. The pixels in these channels contain the texture coordinates of the pixel. Although texture coordinates are processed internally within the 3D system as three-component UVW, Fusion images store only UV components. These components are mapped into the Red and Green color channels.
        • ObjectID: This option enables rendering of the ObjectID channel. Each object in the 3D environment can be assigned a numeric identifier when it is created. The pixels in this floating-point image channel contain the values assigned to the objects that produced the pixel. Empty pixels have an ID of 0, and the channel supports values as high as 65534. Multiple objects can share a single Object ID. This buffer is useful for extracting mattes based on the shapes of objects in the scene.
        • MaterialID: This option enables rendering of the Material ID channel. Each material in the 3D environment can be assigned a numeric identifier when it is created. The pixels in this floating-point image channel contain the values assigned to the materials that produced the pixel. Empty pixels have an ID of 0, and the channel supports values as high as 65534. Multiple materials can share a single Material ID. This buffer is useful for extracting mattes based on a texture—for example, a mask containing all the pixels that comprise a brick texture.
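
As an illustration of how the ObjectID channel can drive a matte, the sketch below builds a binary matte from a small ID buffer. This is plain Python standing in for what a downstream masking tool does; it is not Fusion's API.

```python
# Illustrative sketch (not Fusion API): pulling a matte from an ObjectID
# aux channel. Each pixel stores the numeric ID of the object that
# produced it; 0 means empty. The matte is 1.0 where the ID matches.

def objectid_matte(id_channel, target_id):
    """Return a per-pixel matte: 1.0 where the pixel's ObjectID matches."""
    return [[1.0 if px == target_id else 0.0 for px in row]
            for row in id_channel]

ids = [[0, 3, 3],
       [5, 3, 0]]

matte = objectid_matte(ids, 3)
print(matte)  # [[0.0, 1.0, 1.0], [0.0, 1.0, 0.0]]
```
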
      • Anti-Aliasing
        Anti-aliasing can be enabled for each channel through the Channel menu. It produces a higher-quality anti-aliased output image by brute force: a much larger image is rendered and then rescaled down to the target resolution. The exact same result could be achieved by rendering a larger image in the first place and then using a Resize node to bring the image to the desired resolution, but the supersampling built into the renderer offers two distinct advantages over this method.

        The rendering is not restricted by memory or image size limitations. For example, consider the steps to create a float-16 1920 x 1080 image with 16x supersampling. Using the traditional Resize node would require first rendering the image with a resolution of 30720 x 17280, and then using a Resize to scale this image back down to 1920 x 1080. Simply producing the image would require nearly 4 GB of memory. When anti-aliasing is performed on the GPU, the OpenGL renderer can use tile rendering to significantly reduce memory usage.
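
The memory figure above can be checked directly; this quick calculation assumes four float-16 channels (RGBA) at two bytes each.

```python
# Checking the memory figure above: a float-16 RGBA image rendered at
# 16x supersampling of 1920 x 1080 is 30720 x 17280 pixels.

width, height, rate = 1920, 1080, 16
channels, bytes_per_channel = 4, 2           # RGBA, float-16

super_w, super_h = width * rate, height * rate
total_bytes = super_w * super_h * channels * bytes_per_channel

print(super_w, super_h)                      # 30720 17280
print(round(total_bytes / 2**30, 2), "GiB")  # 3.96 GiB -- "nearly 4 GB"
```
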

        The GL renderer can perform the rescaling of the image directly on the GPU more quickly than the CPU can manage it. Generally, the more GPU memory the graphics card has, the faster the operation is performed.

        Interactively, Fusion skips the anti-aliasing stage unless the HiQ button is selected in the Time Ruler. Final quality renders always include supersampling, if it is enabled.

        Because of hardware limitations, point geometry (particles) and lines (locators) are always rendered at their original size, independent of supersampling. This means that these elements are scaled down from their original sizes, and likely appear much thinner than expected.

        Anti-Aliasing of Aux Channels in the OpenGL Renderer
        The reason Fusion supplies separate anti-aliasing options for color and aux channels in the Anti-Aliasing preset is that supersampling of color channels is quite a bit slower than aux channels. You may find that 1 x 3 LowQ/HiQ Rate is sufficient for color, but for world position or Z, you may require 4 x 12 to get adequate results. The reasons color anti-aliasing is slower are that the shaders for RGBA can be 10x to even 100x or 1000x more complex, and color is rendered with sorting enabled, while aux channels get rendered using the much faster Z-buffer method.

        Do not mistake anti-aliasing for improved quality. Anti-aliasing an aux channel does not make it better quality; in fact, in many cases it can make the results much worse. The only aux channels we recommend enabling anti-aliasing on are WorldCoord and Z.
    • Enable (LowQ/HiQ)
      These two checkboxes enable anti-aliasing of the rendered image.
    • Supersampling LowQ/HiQ Rate
      The LowQ and HiQ rate tells the OpenGL render how large to scale the image. For example, if the rate is set to 4 and the OpenGL renderer is set to output a 1920 x 1080 image, internally a 7680 x 4320 image is rendered and then scaled back to produce the target image. Set the multiplier higher to get better edge anti-aliasing at the expense of render time. Typically 8 x 8 supersampling (64 samples per pixel) is sufficient to reduce most aliasing artifacts.

      The rate doesn’t exactly define the number of samples done per destination pixel; the width of the reconstruction filter used may also have an impact.
    • Filter Type
      When downsampling the supersized image, the surrounding pixels around a given pixel are often used to give a more realistic result. There are various filters available for combining these pixels. More complex filters can give better results but are usually slower to calculate. The best filter for the job often depends on the amount of scaling and on the contents of the image itself.
      • Box: This is a simple interpolation scale of the image.
      • Bi-Linear (triangle): This uses a simplistic filter, which produces relatively clean and fast results.
      • Bi-Cubic (quadratic): This filter produces a nominal result. It offers a good compromise between speed and quality.
      • Bi-Spline (cubic): This produces better results with continuous tone images but is slower than Quadratic. If the images have fine detail in them, the results may be blurrier than desired.
      • Catmull-Rom: This produces good results with continuous tone images.
      • Gaussian: This is very similar in speed and quality to Quadratic.
      • Mitchell: This is similar to Catmull-Rom but produces better results with finely detailed images. It is slower than Catmull-Rom.
      • Lanczos: This is very similar to Mitchell and Catmull-Rom but is a little cleaner and also slower.
      • Sinc: This is an advanced filter that produces very sharp, detailed results; however, it may produce visible "ringing" in some situations.
      • Bessel: This is similar to the Sinc filter but may be slightly faster.
  • Window Method
    The Window Method menu appears only when the reconstruction filter is set to Sinc or Bessel.
    • Hanning: This is a simple tapered window.
    • Hamming: Hamming is a slightly tweaked version of Hanning.
    • Blackman: A window with a more sharply tapered falloff.
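
The three windows above follow standard window-function formulas. The sketch below shows their shapes; Fusion's exact implementation may differ.

```python
# Standard window-function formulas, illustrating the descriptions
# above (not Fusion's internal code).
import math

def hanning(n, N):
    return 0.5 - 0.5 * math.cos(2 * math.pi * n / (N - 1))

def hamming(n, N):
    return 0.54 - 0.46 * math.cos(2 * math.pi * n / (N - 1))

def blackman(n, N):
    x = 2 * math.pi * n / (N - 1)
    return 0.42 - 0.5 * math.cos(x) + 0.08 * math.cos(2 * x)

N = 11
# Hanning tapers fully to zero at the edges; Hamming stops just short,
# which is the "slightly tweaked" difference noted above. Blackman
# tapers to zero with a sharper falloff.
print(round(hanning(0, N), 3))   # 0.0
print(round(hamming(0, N), 3))   # 0.08
print(round(blackman(0, N), 6))  # ~0
```
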
  • Accumulation Effects
    Accumulation effects are used for creating depth of field effects. Enable both the Enable Accumulation Effects and Depth of Field checkboxes, and then adjust the quality and Amount sliders.

    The blurrier you want the out-of-focus areas to be, the higher the quality setting you need. A low amount setting causes more of the scene to be in focus.

    The accumulation effects work in conjunction with the Focal Plane setting located in the Camera 3D node. Set the Focal Plane to the same distance from the camera as the subject you want to be in focus. Animating the Focal Plane setting creates rack focus effects.
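
As a hedged sketch of the relationship described above (not Fusion's actual defocus algorithm), blur can be thought of as growing with distance from the focal plane, scaled by the Amount setting, so a low Amount keeps more of the scene in focus:

```python
# Illustrative model only: blur grows with distance from the focal
# plane, scaled by the Amount setting.

def defocus_blur(depth, focal_plane, amount):
    """Illustrative blur radius for a point at a given depth."""
    return amount * abs(depth - focal_plane)

focal = 5.0
print(defocus_blur(5.0, focal, amount=0.5))  # 0.0 -- subject in focus
print(defocus_blur(9.0, focal, amount=0.5))  # 2.0
print(defocus_blur(9.0, focal, amount=0.1))  # lower amount -> less blur
```
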
  • Lighting
    • Enable Lighting: When the Enable Lighting checkbox is selected, any lights in the scene light objects. If no lights are present, all objects are black.
    • Enable Shadows: When the Enable Shadows checkbox is selected, the renderer produces shadows, at the cost of some speed.
  • Texturing
    • Texture Depth: Lets you specify the bit depth of texture maps.
    • Warn about unsupported texture depths: Enables a warning if texture maps are in an unsupported bit depth that Fusion can’t process.
  • Lighting Mode
    The Per-vertex lighting model calculates lighting at each vertex of the scene’s geometry. This produces a fast approximation of the scene’s lighting but tends to produce blocky lighting on poorly tessellated objects. The Per-pixel method uses a different approach that does not rely on the detail in the scene’s geometry for lighting, so it generally produces superior results.

    Although the per-pixel lighting with the OpenGL renderer produces results closer to that produced by the more accurate software renderer, it still has some disadvantages. The OpenGL renderer is less capable of dealing correctly with semi-transparency, soft shadows, and colored shadows, even with per-pixel lighting. The color depth of the rendering is limited by the capabilities of the graphics card in the system.
  • Transparency
    The OpenGL renderer reveals this control for selecting which ordering method to use when calculating transparency:
    • Z Buffer (fast): This mode is extremely fast and is adequate for scenes containing only opaque objects. The speed of this mode comes at the cost of accurate sorting; only the objects closest to the camera are certain to be in the correct sort order. So, semi-transparent objects may not be shown correctly, depending on their ordering within the scene.
    • Sorted (accurate): This mode sorts all objects in the scene (at the expense of speed) before rendering, giving correct transparency.
    • Quick Mode: This experimental mode is best suited to scenes that almost exclusively contain particles.
  • Shading Model
    Use this menu to select a shading model to use for materials in the scene. Smooth is the shading model employed in the viewers, and Flat produces a simpler and faster shading model.
  • Wireframe
    Renders the whole scene as wireframe. This shows the edges and polygons of the objects. The edges are still shaded by the material of the objects.
  • Wireframe Anti-Aliasing
    Enables anti-aliasing for the Wireframe render.
  • OpenGL UV Renderer
    The OpenGL UV renderer is a special case render engine. It is used to take a model with existing textures and render it out to produce an unwound flattened 2D version of the model. Optionally, lighting can be baked in. This is typically done so you can then paint on the texture and reapply it.

Render 3D Node Image and Settings Tabs

The remaining controls for the Image and Settings tabs are common to many 3D nodes. Their descriptions can be found in the Common Controls section.

Replace Material 3D Node

The Replace Material 3D node replaces the material applied to all the geometry in the input scene with its own material input. Any lights or cameras in the input scene are passed through unaffected.

The scope of the replacement can be limited using Object and Material identifiers in the Inspector. The scope can also be limited to individual channels, making it possible to use a completely different material on the Red channel, for example.

Since the Text 3D node does not include a material input, you can use the Replace Material to add material shaders to the text.

Replace Material 3D Node Inputs

The Replace Material node has two inputs: one for the 3D scene, object, or 3D text that contains the original material, and a material input for the new replacement material.

  • SceneInput: The orange scene input accepts the 3D scene or 3D text whose material you want to replace.
  • MaterialInput: The green material input accepts either a 2D image or a 3D material. If a 2D image is provided, it is used as a diffuse texture map for the basic material built into the node. If a 3D material is connected, then the basic material is disabled.

Replace Material 3D Setup

The Replace Material 3D node is inserted directly after the 3D object or scene whose material you want to replace. Below, it is used to replace the default material on a Text 3D node with a chrome shader.

Replace Material 3D Node Controls Tab

  • Enable
    This checkbox enables the material replacement. This is not the same as the red switch in the upper-left corner of the Inspector. The red switch disables the tool altogether and passes the image on without any modification. The enable checkbox is limited to the effect part of the tool. Other parts, like scripts in the Settings tab, still process as normal.
  • Replace Mode
    The Replace Mode section offers four methods of replacing each RGBA channel:
    • Keep: Prevents the channel from being replaced by the input material.
    • Replace: Replaces the material for the corresponding color channel.
    • Blend: Blends the materials together.
    • Multiply: Multiplies the channels of both inputs.
  • Limit by Object ID/Material ID
    When enabled, a slider appears where the desired IDs can be set. Other objects keep their materials. If both options are enabled, an object must satisfy both conditions.
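
The four Replace Mode behaviors can be sketched on a single channel value as below. The 50/50 blend weight is an assumption for illustration; Fusion's actual blend weighting is not specified here.

```python
# Illustrative sketch (not Fusion's internals) of the four per-channel
# Replace Mode behaviors applied to a single channel value.

def replace_mode(mode, original, replacement, blend=0.5):
    if mode == "Keep":
        return original          # input material is ignored
    if mode == "Replace":
        return replacement       # input material wins
    if mode == "Blend":
        # assumed 50/50 weighting for illustration
        return original * (1 - blend) + replacement * blend
    if mode == "Multiply":
        return original * replacement
    raise ValueError(mode)

print(replace_mode("Keep", 0.8, 0.2))               # 0.8
print(replace_mode("Replace", 0.8, 0.2))            # 0.2
print(replace_mode("Blend", 0.8, 0.2))              # 0.5
print(round(replace_mode("Multiply", 0.8, 0.2), 2)) # 0.16
```
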

Replace Material 3D Node Material and Settings Tabs

The remaining controls for the Material and Settings tabs are common to many 3D nodes. Their descriptions can be found in the Common Controls section.

Replace Normals 3D Node

In 3D modeling, normals are vectors used to determine the direction light reflects off surfaces. The Replace Normals node is used to replace the normals/tangents on incoming geometry, effectively adjusting the surface of an object between smooth and flat. All geometry connected to the scene input on the node is affected. Lights, cameras, point clouds, locators, materials, and other non-mesh nodes are passed through unaffected. The normals/tangents affected by this node are per-vertex normals/tangents, not per-face normals/tangents. The input geometry must have texture coordinates in order for tangents to be computed. Sometimes geometry does not have texture coordinates, or the texture coordinates were set to All by the FBX import because they were not present in the FBX file.

Replace Normals 3D Node Inputs

The Replace Normals node has a single input for the 3D scene or incoming geometry.

  • SceneInput: The orange scene input accepts a 3D scene or 3D geometry that contains the normal coordinates you want to modify.

Replace Normals 3D Node Setup

The Replace Normals 3D node is inserted directly after the 3D object or scene whose normals you want to modify. Below, it is used to smooth the material on an imported 3D model.

Replace Normals 3D Node Control Tab

The options in the Control tab deal with repairing 3D geometry and then recomputing normals/tangents.

  • Pre-Weld Position Vertices
    Sometimes position vertices are duplicated in a geometry even though they have the same position, causing normals/tangents to be computed incorrectly. Enabling this option welds such vertices together before the normals/tangents are computed. The results of the pre-weld are thrown away; they do not affect the output geometry's position vertices.
  • Recompute
    Controls when normals/tangents are recomputed.
    • Always: The normals on the mesh are always recomputed.
    • If Not Present: The normals on the mesh are recomputed only if they are not present.
    • Never: The normals are never computed. This option is useful when animating.
  • Smoothing Angle
    Adjacent faces whose angle, in degrees, is smaller than this value have their adjoining edges smoothed across. A typical value for the Smoothing Angle is between 20 and 60 degrees. There is special case code for the values 0.0f and 360.0f (f stands for floating-point value). When set to 0.0f, faceted normals are produced; this is useful for artistic effect.
  • Ignore Smooth Groups
    If set to False, two faces that have different Smooth Groups are not smoothed across (e.g., the faces of a cube or the top surfaces of a cylinder have different Smooth Groups). If you check this On and set the smoothing angle large enough, the faces of a cube are smoothed across. There is currently no way to visualize Smooth Groups within Fusion.
  • Flip Normals
    Flipping of tangents can sometimes be confusing. Flip has an effect only if the mesh has tangent vectors, and most meshes in Fusion don't have tangent vectors until they reach a Renderer 3D. When tangent vectors are viewed in the viewers, they are created if they don't already exist. The confusing part is that if you view a Cube 3D that has no tangent vectors and press the FlipU/FlipV button, nothing happens: there are no tangent vectors to flip, although the GL renderer can later create some (unflipped) tangent vectors.
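
The Smoothing Angle test described above can be sketched as follows: the angle between two adjacent face normals decides whether their shared edge is smoothed. This is illustrative Python, not Fusion's internals.

```python
# Sketch of the smoothing-angle test: compare the angle between two
# unit face normals against the Smoothing Angle threshold.
import math

def face_angle_deg(n1, n2):
    """Angle in degrees between two unit face normals."""
    dot = sum(a * b for a, b in zip(n1, n2))
    return math.degrees(math.acos(max(-1.0, min(1.0, dot))))

def is_smoothed(n1, n2, smoothing_angle=40.0):
    return face_angle_deg(n1, n2) < smoothing_angle

flat = (0.0, 1.0, 0.0)
tilted = (0.0, math.cos(math.radians(30)), math.sin(math.radians(30)))
steep = (0.0, 0.0, 1.0)

print(is_smoothed(flat, tilted))  # True  -- 30 deg < 40 deg threshold
print(is_smoothed(flat, steep))   # False -- a 90 deg edge stays faceted
```
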

Replace Normals 3D Node Settings Tab

The Settings tab is common to many 3D nodes. The description of these controls can be found in the Common Controls section.

Replicate 3D Node

The Replicate 3D node replicates input geometry at positions of destination vertices. The vertices can be mesh vertices as well as particle positions. For each copy of the replicated input geometry, various transformations can be applied. The options in the Jitter tab allow non-uniform transformations, such as random positioning or sizes.

Replicate 3D Node Inputs

There are two inputs on the Replicate 3D node: one for the destination geometry that contains the vertices, and one for the 3D geometry you want to replicate.

  • Destination: The orange destination input accepts a 3D scene or geometry with vertex positions, either from the mesh or 3D particle animations.
  • Input: The input accepts the 3D scene or geometry for replicating. Once this input is connected, a new input for alternating 3D geometry is created.

At least one connected input is required.

Replicate 3D Node Setup

In the example below, a Replicate 3D node is inserted directly after the pRender node. A spaceship FBX node is connected to the green input representing the object that will be replicated based on the particles. Each particle cell takes on the shape of the 3D geometry connected to the input.

Replicate 3D Node Controls Tab

  • Step
    Defines how many positions are skipped. For example, a step of 3 means that only every third vertex of the destination mesh is used, while a step of 1 means that all positions are used.

    The Step setting helps to keep reasonable performance for big destination meshes. On parametric geometry like a torus, it can be used to isolate certain parts of the mesh.

    Point clouds are internally represented by six points once the Make Renderable option has been set. To get a single point, use a step of 6 and set an X offset of –0.5 to get to the center of the point cloud. Use –0.125 for Locator 3Ds. Once these have been scaled, the offset may differ.
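
The Step behavior described above amounts to sampling every Nth destination vertex, as in this sketch:

```python
# Sketch of the Step behavior: with a step of N, only every Nth
# destination vertex receives a copy.

def stepped_positions(positions, step):
    return positions[::step]

verts = list(range(12))             # stand-in for destination vertices
print(stepped_positions(verts, 1))  # all 12 positions used
print(stepped_positions(verts, 3))  # [0, 3, 6, 9]
# A renderable point cloud is represented by six points internally,
# so a step of 6 yields one copy per point.
print(stepped_positions(verts, 6))  # [0, 6]
```
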
  • Input Mode
    This menu defines in which order multiple input scenes are replicated at the destination. No matter which setting you choose, if only one input scene is supplied this setting has no effect.
    • When set to Loop, the inputs are used successively. The first input is at the first position, the second input at the second position, and so on. If there are more positions in the destination present than inputs, the sequence is looped.
    • When set to Random, a definite but random input for each position is used based on the seed in the Jitter tab. This input mode can be used to simulate variety with few input scenes.
    • The Death of Particles setting causes the input geometries’ IDs to change; therefore, their copy order may change.
  • Time Offset
    Use the Time Offset slider to offset any animations that are applied to the input geometry by a set amount per copy. For example, set the value to –1.0 and use a cube set to rotate on the Y-axis as the source. The first copy shows the animation from a frame earlier; the second copy shows animation from a frame before that, etc.

    This can be used with great effect on textured planes—for example, where successive frames of a video clip can be shown.
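
The per-copy Time Offset described above can be sketched as simple arithmetic: copy i samples the animation at the current frame plus i times the offset.

```python
# Sketch of the per-copy Time Offset: with an offset of -1.0, each
# successive copy shows the animation one frame earlier.

def copy_frame(current_frame, copy_index, time_offset):
    return current_frame + copy_index * time_offset

frame = 10
print([copy_frame(frame, i, -1.0) for i in range(4)])
# [10.0, 9.0, 8.0, 7.0] -- each copy lags the previous by one frame
```
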
  • Alignment
    Alignment specifies how to align the copies in respect of the destination mesh normal or particle rotation.
    • Not Aligned: Does not align the copy. It stays rotated in the same direction as its input mesh.
    • Aligned: This mode uses the point’s normal and tries to reconstruct an upvector. It works best with organic meshes that have unwelded vertices, like imported FBX meshes, since it has the same rotations for vertices at the same positions. On plane geometric meshes, a gradual shift in rotation is noticeable. For best results, it is recommended to use this method at the origin before any transformations.
    • Aligned TBN: This mode results in a more accurate and stable alignment based on the tangent, binormal, and normal of the destination point. This works best for particles and geometric shapes. On unwelded meshes, two copies of multiple unwelded points at the same position may lead to different alignments because of their individual normals.
  • Color
    Affects the diffuse color or shader of each copy based on the input’s particle color.
    • Use Object Color: Does not use the color of the destination particle.
    • Combine Particle Color: Uses the shader of any input mesh and modifies the diffuse color to match the color from the destination particle.
    • Use Particle Color: Replaces the complete shader of any input mesh with a default shader. Its diffuse color is taken from the destination particle.
  • Translation
    These three sliders tell the node how much offset to apply to each copy. An X Offset of 1 would offset each copy one unit along the X-axis from the last copy.
  • Rotation Order
    These buttons can be used to set the order in which rotations are applied to the geometry. Setting the rotation order to XYZ would apply the rotation on the X-axis first, followed by the Y-axis rotation, and then the Z-axis rotation.
  • XYZ Rotation
    These three rotation sliders tell the node how much rotation to apply to each copy.
  • XYZ Pivot
    The pivot controls determine the position of the pivot point used when rotating each copy.
  • Lock XYZ
    When the Lock XYZ checkbox is selected, any adjustment to the scale is applied to all three axes simultaneously.

    If this checkbox is disabled, the Scale slider is replaced with individual sliders for the X, Y, and Z scales.
  • Scale
    The Scale control sets how much scaling to apply to each copy.
  • Jitter Tab
    The Jitter tab can be used to introduce randomness to various parameters.
  • Random Seed/Randomize
    The Random Seed is used to generate the jitter applied to the replicated objects. Two Replicate nodes with identical settings but different random seeds will produce two completely different results. Click the Randomize button to assign a Random Seed value.
  • Time Offset
    Use the Time Offset slider to offset any animations that are applied to the source geometry. Unlike Time Offset on the Controls tab, Jitter Time Offset is random, based on the Random Seed setting.
  • Translation XYZ Jitter
    Use these three controls to adjust the variation in the translation of the replicated objects.
  • Rotation XYZ Jitter
    Use these three controls to adjust the variation in the rotation of the replicated objects.
  • Pivot XYZ Jitter
    Use these three controls to adjust the variation in the rotational pivot center of the replicated objects. This affects only the additional jitter rotation, not the rotation produced by the rotation settings in the Controls tab.
  • Scale XYZ Jitter
    Use this control to adjust the variation in the scale of the replicated objects. Uncheck the Lock XYZ checkbox to adjust the scale variation independently on all three axes.

Replicate 3D Node Settings Tab

The Settings tab is common to many 3D nodes. These common controls are described in detail in the Common Controls section.

Ribbon 3D Node

Ribbon 3D generates an array of subdivided line segments or a single line between two points. It is quite useful for motion graphics, especially in connection with Replicate 3D to attach other geometry to the lines, and with Displace3D for creating lightning bolt-like structures. The array of lines is, by default, assigned with texture coordinates, so they can be used with a 2D texture. As usual, UVMap 3D can be used to alter the texture coordinates. This node relies heavily on certain OpenGL features and does not produce any visible result when the Renderer 3D node is set to use the software renderer.

Furthermore, the way lines are drawn is completely up to the graphics card capabilities, so the ribbon appearance may vary based on your computer’s graphics card.

Ribbon 3D Node Inputs

There are two inputs on the Ribbon 3D node: one for a 3D scene or geometry, and one for a 2D texture to apply to the ribbon.

  • 3D Scene: The orange input accepts a 3D scene or geometry.
  • Material: The input accepts the 2D texture for the ribbon.

Neither connected input is required.

Ribbon 3D Node Setup

In the example below, a Ribbon 3D node is used to generate lines. A gradient background is connected to “colorize” the lines. Additional nodes are then used after the Ribbon 3D to bend and distort the lines.

Ribbon 3D Node Controls Tab

The Controls tab determines the number of ribbon strands, their size, length, and spacing.

  • Number of Lines
    The number of parallel lines drawn between the start point and end point.
  • Line Thickness
    The user interface allows line thickness to take on a floating-point value, but some graphics cards support only integer values. Some cards may only allow lines equal to or thicker than one, or may max out at a certain value.
  • Subdivision Level
    The number of vertices on each line between the start and end points. The higher the number, the more precise and smoother any 3D displacement appears.
  • Ribbon Width
    Determines how far the lines are apart from each other.
  • Start
    XYZ control to set the start point of the ribbon.
  • End
    XYZ control to set the end point of the ribbon.
  • Ribbon Rotation
    Allows rotation of the ribbon around the virtual axis defined by the start and end points.
  • Anti-Aliasing
    Allows you to apply anti-aliasing to the rendered lines. Using anti-aliasing isn't necessarily recommended. When activated, there may be gaps between the line segments. This is especially noticeable with high values of line thickness. Again, the way lines are drawn is completely up to the graphics card, which means that these artifacts can vary from card to card.
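
A hedged sketch (not Fusion's implementation) of the geometry these controls describe: parallel lines between Start and End, each subdivided into evenly spaced vertices and spread across the Ribbon Width.

```python
# Illustrative generation of ribbon geometry: num_lines parallel lines
# between start and end, each with subdivisions+1 vertices, offset
# across the ribbon width on the X-axis.

def ribbon_lines(start, end, num_lines, subdivisions, width):
    lines = []
    for li in range(num_lines):
        # spread lines evenly across the ribbon width
        off = 0.0 if num_lines == 1 else (li / (num_lines - 1) - 0.5) * width
        line = []
        for vi in range(subdivisions + 1):
            t = vi / subdivisions
            line.append((start[0] + (end[0] - start[0]) * t + off,
                         start[1] + (end[1] - start[1]) * t,
                         start[2] + (end[2] - start[2]) * t))
        lines.append(line)
    return lines

lines = ribbon_lines((0, 0, 0), (0, 2, 0), num_lines=3,
                     subdivisions=4, width=1.0)
print(len(lines), len(lines[0]))       # 3 lines, 5 vertices each
print(lines[0][0][0], lines[2][0][0])  # outer lines at x = -0.5 and 0.5
```
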

Ribbon 3D Node Controls, Materials, and Settings Tabs

The controls for Visibility, Lighting, Matte, Blend Mode, Normals/Tangents, and Object ID in the Controls tab are common in many 3D nodes. The Materials tab and Settings tab in the Inspector are also duplicated in other 3D nodes. These common controls are described in detail in the Common Controls section.

Shape 3D Node

The Shape 3D node is used to produce several basic primitive 3D shapes, including planes, cubes, spheres, and cylinders.

Shape 3D Node Inputs

There are two optional inputs on the Shape 3D. The scene input can be used to combine additional geometry with the Shape 3D, while the material input can be used to texture map the Shape 3D object.

  • SceneInput: Although the Shape 3D creates its own 3D geometry, you can use the orange scene input to combine an additional 3D scene or geometry.
  • MaterialInput: The green input accepts either a 2D image or a 3D material. If a 2D image is provided, it is used as a diffuse texture map for the basic material built into the node. If a 3D material is connected, then the basic material is disabled.

Shape 3D Node Setup

In the example below, four Shape 3D nodes are used to create the primitives of a 3D set. Two of the Shape 3D nodes are connected creating a more complex primitive shape. Those shapes can then be used with a Projector 3D to texture them with a realistic material.

Shape 3D Node Controls Tab

The Controls tab allows you to select a shape and modify its geometry. Different controls appear based on the specific shape that you choose to create.

  • Shape
    This menu allows you to select the primitive geometry produced by the Shape 3D node. The remaining controls in the Inspector change to match the selected shape.
    • Lock Width/Height/Depth: [plane, cube] If this checkbox is selected, the width, height, and depth controls are locked together as a single size slider. Otherwise, individual controls over the size of the shape along each axis are provided.
    • Size Width/Height/Depth: [plane, cube] Used to control the size of the shape.
  • Cube Mapping
    When Cube is selected in the shape menu, the Cube uses cube mapping to apply the Shape node’s texture (a 2D image connected to the material input on the node).
  • Radius
    When a Sphere, Cylinder, Cone, or Torus is selected in the shape menu, this control sets the radius of the selected shape.
  • Top Radius
    When a cone is selected in the Shape menu, this control is used to define a radius for the top of a cone, making it possible to create truncated cones.
  • Start/End Angle
    When the Sphere, Cylinder, Cone, or Torus shape is selected in the Shape menu, this range control determines how much of the shape is drawn. A start angle of 180° and end angle of 360° would only draw half of the shape.
  • Start/End Latitude
    When a Sphere or Torus is selected in the Shape menu, this range control is used to crop or slice the object by defining a latitudinal subsection of the object.
  • Bottom/Top Cap
    When Cylinder or Cone is selected in the Shape menu, the Bottom Cap and Top Cap checkboxes are used to determine if the end caps of these shapes are created or if the shape is left open.
  • Section
    When the Torus is selected in the Shape menu, Section controls the thickness of the tube making up the torus.
  • Subdivision Level/Base/Height
    The Subdivision controls are used to determine the tessellation of the mesh on all shapes. The higher the subdivision, the more vertices each shape has.
  • Wireframe
    Enabling this checkbox causes the mesh to render only the wireframe for the object.

Shape 3D Node Controls, Materials, Transform and Settings Tabs

The controls for Visibility, Lighting, Matte, Blend Mode, Normals/Tangents, and Object ID in the Controls tab are common in many 3D nodes. The Materials tab, Transforms tab, and Settings tab in the Inspector are also duplicated in other 3D nodes. These common controls are described in detail in the Common Controls section at the end of this chapter.

Sphere Map vs. Connecting the Texture to a Sphere Directly

You can connect a LatLong (equirectangular) texture map directly to a sphere instead of piping it through the Sphere Map node first. This results in a different rendering if you set the start/end angle and latitude to less than 360°/180°. In the first case, the texture is squashed. When using the Sphere Map node, the texture is cropped.

Soft Clip Node

The Soft Clip node is used to fade out geometry and particles that get close to the camera. This helps avoid the visible “popping off” that affects many particle systems and 3D flythroughs.

This node is very similar to the Fog 3D node, in that it is dependent on the geometry’s distance from the camera.

Soft Clip Node Inputs

The Soft Clip node includes a single input for a 3D scene with a camera connected to it.

  • SceneInput: The orange scene input is a required connection. It accepts a 3D scene input that includes a Camera 3D node.

Soft Clip Node Setup

The Soft Clip node is usually placed just before the Renderer 3D node to ensure that downstream adjustments to lighting and textures do not affect the result. It can be placed in any part of the 3D portion of the node tree if the soft clipping effect is only required for a portion of the scene.

Soft Clip Node Controls Tab

The Controls tab determines how an object transitions between opaque and transparent as it moves closer to the camera.

  • Enable
    This checkbox can be used to enable or disable the node. This is not the same as the red switch in the upper-left corner of the Inspector. The red switch disables the tool altogether and passes the image on without any modification. The Enable checkbox is limited to the effect of the tool. Other parts, like scripts in the Settings tab, still process as normal.
  • Smooth Transition
    By default, an object coming closer to the camera fades out with a linear progression. With the Smooth Transition checkbox enabled, the transition changes to a nonlinear curve, arguably a more natural-looking transition.
  • Radial
    By default, soft clipping is based on the perpendicular distance to a plane (parallel to the near plane) passing through the eye point. When the Radial option is enabled, the radial distance to the eye point is used instead. The problem with perpendicular-distance soft clipping is that as you move the camera around, objects on the left or right side of the frustum become less clipped as they move toward the center, even though they remain the same distance from the eye. Radial soft clipping fixes this, but it is not always desirable.

    For example, if you apply soft clip to an object that is close to the camera, like an image plane, the center of the image plane could be unclipped while the edges could be fully clipped because they are farther from the eye point.
  • Show In Display Views
    Normally, the effect is only visible when the scene is viewed using a Camera node. When enabled, the soft clip becomes visible in the scene from all points of view.
  • Transparent/Opaque Distance
    Defines the range of the soft clip. The objects begin to fade in from an opacity of 0 at the Transparent distance and are fully visible at the Opaque distance. All units are expressed as distance from the camera along the Z-axis.
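    The fade described above can be sketched as a simple function of distance. The helper below is a hypothetical illustration (not Fusion code): it maps a point's distance from the camera to an opacity, using the default linear ramp or a smoothstep curve for the Smooth Transition option, and either perpendicular (depth) or radial distance.

    ```python
    import math

    def soft_clip_opacity(point, transparent_dist, opaque_dist,
                          smooth=False, radial=False):
        """Sketch of soft clipping: opacity 0 at transparent_dist,
        1 at opaque_dist, for a camera at the origin looking down -Z.
        Hypothetical helper, not Fusion's implementation."""
        x, y, z = point
        # Perpendicular distance is just the depth along the view axis;
        # radial distance is the true distance to the eye point.
        dist = math.sqrt(x*x + y*y + z*z) if radial else abs(z)
        # Normalize into the [transparent, opaque] range and clamp.
        t = (dist - transparent_dist) / (opaque_dist - transparent_dist)
        t = max(0.0, min(1.0, t))
        if smooth:
            t = t * t * (3.0 - 2.0 * t)  # smoothstep: nonlinear ease
        return t

    # A point at the transparent distance is fully faded out...
    print(soft_clip_opacity((0, 0, -1.0), 1.0, 5.0))   # 0.0
    # ...and fully visible at the opaque distance.
    print(soft_clip_opacity((0, 0, -5.0), 1.0, 5.0))   # 1.0
    # An off-axis point is clipped less once Radial is enabled,
    # because its radial distance is larger than its depth.
    print(soft_clip_opacity((3, 0, -4.0), 1.0, 5.0, radial=True))  # 1.0
    ```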

Soft Clip Node Settings Tab

The Settings tab in the Inspector is duplicated in other 3D nodes. These common controls are described in detail in the “Common Controls” section at the end of this chapter.

Spherical Camera Node

The Spherical Camera allows the 3D Renderer node to output an image covering all viewing angles, laid out in several different formats. This image may be used, for example, as a skybox texture or reflection map or viewed in a VR headset. The Image Width setting in the 3D Renderer sets the size of each square cube face, so the resulting image may be a multiple of this size horizontally and vertically.
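The relationship between the cube-face size and the final output can be sketched as simple arithmetic. The cross and strip multiples follow from the layout descriptions in the Controls tab; the 2:1 LatLong size is an assumption for illustration, not taken from Fusion:

```python
def spherical_output_size(face_size, layout):
    """Output resolution as a multiple of the square cube-face size.
    Illustrative sketch of the layout geometry, not Fusion's API."""
    sizes = {
        "HCross": (4 * face_size, 3 * face_size),  # 4:3 cross
        "VCross": (3 * face_size, 4 * face_size),  # 3:4 cross
        "HStrip": (6 * face_size, face_size),      # 6:1 strip
        "VStrip": (face_size, 6 * face_size),      # 1:6 strip
        "LatLong": (2 * face_size, face_size),     # assumed 2:1 equirectangular
    }
    return sizes[layout]

print(spherical_output_size(512, "HCross"))   # (2048, 1536)
print(spherical_output_size(512, "LatLong"))  # (1024, 512)
```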

Spherical Camera Node Inputs

The Spherical Camera node has two inputs; neither input is required.

  • Image: The orange image input accepts an image in a spherical layout: LatLong (2:1 equirectangular), Horizontal/Vertical Cross, or Horizontal/Vertical Strip.
  • Stereo Input: The green input accepts the right stereo camera when you are working in stereo VR.

Spherical Camera Node Setup

In many ways, the Spherical Camera is set up identically to the regular Camera 3D node. The output of the camera connects to a Merge 3D. Typically, the Merge 3D receives a LatLong or H Cross/V Cross formatted image, either directly or through a Panomap node. The image is wrapped around a sphere, and the camera is placed inside the sphere.

Spherical Camera Node Controls Tab

  • Layout
    • VCross and HCross: VCross and HCross are the six square faces of a cube laid out in a cross, vertical or horizontal, with the forward view in the center of the cross, in a 3:4 or 4:3 image.
    • VStrip and HStrip: VStrip and HStrip are the six square faces of a cube laid vertically or horizontally in a line, ordered as Left, Right, Up, Down, Back, Front (+X, -X, +Y, -Y, +Z, -Z), in a 1:6 or 6:1 image.
    • LatLong: LatLong is a single 2:1 image in equirectangular mapping.
  • Near/Far Clip
    The clipping plane is used to limit what geometry in a scene is rendered based on the object’s distance from the camera’s focal point. This is useful for ensuring that objects that are extremely close to the camera are not rendered and for optimizing a render to exclude objects that are too far away to be useful in the final rendering.

    The default perspective camera ignores this setting unless the Adaptively Adjust Near/Far Clip checkbox control below is disabled.

    The values are expressed in units, so a far clipping plane of 20 means that any objects more than 20 units from the camera are invisible to the camera. A near clipping plane of 0.1 means that any objects closer than 0.1 units are also invisible.
  • Adaptively Adjust Near/Far Clip
    When selected, the Renderer automatically adjusts the camera’s near/far clipping plane to match the extents of the scene. This setting overrides the values of the Near and Far clip range control described above. This option is not available for orthographic cameras.
  • Viewing Volume Size
    The Viewing Volume Size control appears only when the Projection Type is set to Orthographic. It determines the size of the box that makes up the camera’s field of view. The Z distance of an orthographic camera from the objects it sees does not affect the scale of those objects, only the viewing size does.
  • Plane of Focus (for Depth of Field)
    This value is used by the OpenGL renderer to calculate depth of field. It defines the distance to a virtual target in front of the camera.
  • Stereo Method
    This control allows you to adjust your stereoscopic method to your preferred working model.
  • Toe In
    Both cameras point at a single focal point. Though the result is stereoscopic, the vertical parallax introduced by this method can cause discomfort for the audience.
  • Off Axis
    Often regarded as the correct way to create stereo pairs, this is the default method in Fusion. Off Axis introduces no vertical parallax, thus creating less stressful stereo images.
  • Parallel
    The cameras are shifted parallel to each other. Since this is a purely parallel shift, there is no Convergence Distance control. Parallel introduces no vertical parallax, thus creating less stressful stereo images.
  • Eye Separation
    Defines the distance between the two stereo cameras. If the Eye Separation is set to a value larger than 0, controls for each camera are shown in the viewer when this node is selected.
  • Convergence Distance
    This control sets the stereoscopic convergence distance, defined as a point located along the Z-axis of the camera that determines where both left and right eye cameras converge.
  • Control Visibility
    Allows you to selectively activate the onscreen controls that are displayed along with the camera.
    • Frustum: Displays the actual viewing cone of the camera.
    • View Vector: Displays a white line inside the viewing cone, which can be used to determine the shift when in Parallel mode.
    • Near Clip: The Near clipping plane. This plane can be subdivided for better visibility.
    • Far Clip: The Far clipping plane. This plane can be subdivided for better visibility.
    • Plane of Focus: The camera focal point according to the Plane of Focus slider explained above. This plane can be subdivided for better visibility.
    • Convergence Distance: The point of convergence when using Stereo mode. This plane can be subdivided for better visibility.
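The LatLong layout described above uses equirectangular mapping: one sample per viewing direction, with longitude across the width and latitude down the height. A minimal sketch of that direction-to-image relationship (the axis convention is an assumption for illustration, not taken from Fusion):

```python
import math

def direction_to_latlong_uv(x, y, z):
    """Map a view direction to equirectangular (LatLong) UVs in [0, 1].
    Longitude spans 360 degrees horizontally, latitude 180 vertically.
    Illustrative sketch; the axis convention is an assumption."""
    longitude = math.atan2(x, -z)  # -pi..pi, 0 = looking down -Z
    latitude = math.asin(y / math.sqrt(x*x + y*y + z*z))  # -pi/2..pi/2
    u = longitude / (2 * math.pi) + 0.5
    v = 0.5 - latitude / math.pi
    return u, v

# Looking straight ahead lands in the center of the image.
print(direction_to_latlong_uv(0.0, 0.0, -1.0))  # (0.5, 0.5)
```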

Spherical Camera Node Settings Tab

The Settings tab in the Inspector is duplicated in other 3D nodes. These common controls are described in detail in the “Common Controls” section at the end of this chapter.

Text 3D Node

The Text 3D node is a 3D version of the 2D Text+ node. Its controls are nearly identical to those of the 2D version, except that Text 3D supports only one shading element.

The Text 3D node was based on a tool that predates the Fusion 3D environment, so some of the controls found in the basic primitive shapes and geometry loaders, such as many of the material, lighting, and matte options, are not found in this node’s controls. The Text 3D node has a built-in material, but unlike the other 3D nodes it does not have a material input. The Shading tab contains controls to adjust the diffuse and specular components. To replace this default material with a more advanced material, follow the Text 3D node with a Replace Material 3D node. The Override 3D node can be used to control the lighting, visibility, and matte options for this node.

When network rendering a comp that contains Text 3D nodes, each render machine is required to have the necessary fonts installed or the network rendering fails. Fusion does not share or copy fonts to render slaves.

Text 3D Node Inputs

  • SceneInput: The orange scene input accepts a 3D scene that can be combined with the 3D text created in the node.
  • ColorImage: The green color image input accepts a 2D image and wraps it around the text as a texture. This input is visible only when Image is selected in the Material Type menu located in the Shading tab.
  • BevelTexture: The magenta bevel texture input accepts a 2D image and wraps it around the bevel as a texture. This input is visible only when the Use One Material checkbox is disabled in the Shading tab and Image is selected in the Bevel Material Type menu.

Text 3D Node Setup

The Text 3D node generates text, so most often this node starts a branch of your node tree. However, to apply more realistic materials, a Replace Material 3D node is often added after the Text 3D node, prior to connecting into a Merge 3D node.

Text 3D Node Text Tab

The Text 3D text tab in the Inspector is divided into three sections: Text, Extrusion, and Advanced Controls. The Text section includes parameters that are familiar to anyone who has used a word processor. It includes commonly used text formatting options. The Extrusion section includes controls to extrude the text and create beveled edges for the text. The Advanced controls are used for kerning options.

  • Styled Text
    The Edit box in this tab is where the text to be created is entered. Any common character can be typed into this box. The common OS clipboard shortcuts (Command-C or Ctrl-C to copy, Command-X or Ctrl-X to cut, Command-V or Ctrl-V to paste) also work; however, right-clicking on the Edit box displays a custom contextual menu with several modifiers you can add for more animation and formatting options.
  • Font
    Two Font menus are used to select the font family and typeface such as Regular, Bold, and Italic.
  • Color
    This control sets the basic tint color of the text. This is the same Color control displayed in the Material type section of the Shading tab.
  • Size
    This control is used to increase or decrease the size of the text. This is not like selecting a point size in a word processor. The size is relative to the width of the image.
  • Tracking
    The Tracking parameter adjusts the uniform spacing between each character of text.
  • Line Spacing
    Line Spacing adjusts the distance between each line of text. This is sometimes called leading in word processing applications.
  • V Anchor
    The Vertical Anchor controls consist of three buttons and a slider. The three buttons are used to align the text vertically to the top, middle, or bottom baseline of the text. The slider can be used to customize the alignment. Setting the Vertical Anchor affects how the text is rotated but also the location for line spacing adjustments. This control is most often used when the Layout type is set to Frame in the Layout tab.
  • V Justify
    The Vertical Justify slider allows you to customize the vertical alignment of the text from the V Anchor setting to full justification so it is aligned evenly along the top and bottom edges. This control is most often used when the Layout type is set to Frame in the Layout tab.
  • H Anchor
    The Horizontal Anchor controls consist of three buttons and a slider. The three buttons justify the text alignment to the left edge, middle, or right edge of the text. The slider can be used to customize the justification. Setting the Horizontal Anchor affects how the text is rotated as well as the location for tracking adjustments. This control is most often used when the Layout type is set to Frame in the Layout tab.
  • H Justify
    The Horizontal Justify slider allows you to customize the justification of the text from the H Anchor setting to full justification so it is aligned evenly along the left and right edges. This control is most often used when the Layout type is set to Frame in the Layout tab.
  • Direction
    This menu provides options for determining the direction in which the text is to be written.
  • Line Direction
    These menu options are used to determine the text flow from top to bottom, bottom to top, left to right, or right to left.
  • Write On
    This range control is used to quickly apply simple Write On and Write Off animation to the text. To create a Write On effect, animate the End portion of the control from 0 to 1 over the length of time required. To create a Write Off effect, animate the Start portion of the range control from 0 to 1.
  • Extrusion Depth
    An extrusion of 0 produces completely 2D text. Any value greater than 0 extrudes the text to generate text with depth.
  • Bevel Depth
    Increase the value of the Bevel Depth slider to bevel the text. The text must have extrusion before this control has any effect.
  • Bevel Width
    Use the Bevel Width control to increase the width of the bevel.
  • Smoothing Angle
    Use this control to adjust the smoothing angle applied to the edges of the bevel.
  • Front/Back Bevel
    Use these checkboxes to enable beveling for the front and back faces of the text separately.
  • Custom Extrusion
    In Custom mode, the Smoothing Angle controls the smoothing of normals around the edges of a text character. The spline itself controls the smoothing along the extrusion profile. If a spline segment is smoothed, for example by using the shortcut Shift-S, the normals are smoothed as well. If the control point is linear, the shading edge is sharp. The first and last control points on the spline define the extent of the text.
  • Custom Extrusion Subdivisions
    Controls the number of subdivisions within the smoothed portions of the extrusion profile.
  • Force Monospaced
    This slider control can be used to override the kerning (spacing between characters) that is defined in the font. Setting this slider to zero (the default value) causes Fusion to rely entirely on the kerning defined with each character. A value of one causes the spacing between characters to be completely even, or monospaced.
  • Use Font Defined Kerning
    This enables kerning as specified in the TrueType font and is on by default.
  • Manual Font Kerning
    Manual Font Kerning is only performed using the Text+ node. To perform manual kerning on Text3D, create the text using the Text+ node and kern it in that tool. Then, right-click over the tool’s name in the Inspector and choose Copy. Once the settings are copied, select the Text 3D node and choose Paste Settings from the Inspector’s contextual menu. Once the manual kerning is pasted in the Text 3D node, the two buttons in the Inspector clear either the selected character’s kerning or all the kerning adjustment in the current text.
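The Write On range described above can be sketched as a substring selection: characters whose fractional position falls between the Start and End values are drawn. A hypothetical illustration of that behavior (not Fusion code):

```python
def write_on(text, start, end):
    """Return the visible portion of the text for a Write On range.
    Characters between the Start and End fractions are drawn.
    Illustrative sketch of the behavior, not Fusion's implementation."""
    n = len(text)
    first = round(start * n)
    last = round(end * n)
    return text[first:last]

# Animating End from 0 to 1 reveals the text left to right.
for end in (0.0, 0.5, 1.0):
    print(write_on("FUSION", 0.0, end))  # "", "FUS", "FUSION"
```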

Text 3D Node Layout Tab

The Layout Tab is used to position the text in one of four different layout types.

  • Layout Type
    This menu selects the layout type for the text.
    • Point: Point layout is the simplest of the layout modes. Text is arranged around an adjustable center point.
    • Frame: Frame layout allows you to define a rectangular frame used to align the text. The alignment controls are used to justify the text vertically and horizontally within the boundaries of the frame.
    • Circle: Circle layout places the text around the curve of a circle or oval. Control is offered over the diameter and width of the circular shape. When the layout is set to this mode, the Alignment controls determine whether the text is positioned along the inside or outside of the circle’s edge, and how multiple lines of text are justified.
    • Path: Path layout allows you to shape your text along the edges of a path. The path can be used simply to add style to the text, or it can be animated using the Position on Path control that appears when this mode is selected.
  • Center X, Y, and Z
    These controls are used to position the center of the layout. For instance, moving the center X, Y, and Z parameters when the layout is set to Frame moves the position of the frame the text is within.
  • Size
    This slider is used to control the scale of the layout element. For instance, increasing size when the layout is set to Frame increases the frame size the text is within.
  • Width and Height
    The Width control is visible when the Layout mode is set to Circle or Frame. The Height control is visible only when the Layout mode is set to Frame. They are used to adjust the dimensions and aspect of the Layout element.
  • Rotation Order
    These buttons allow you to select the order in which 3D rotations are applied to the text.
  • X, Y, and Z
    These angle controls can be used to adjust the angle of the Layout element along any axis.
  • Fit Characters
    This menu control is visible only when the Layout type is set to Circle. This menu is used to select how the characters are spaced to fit along the circumference.
  • Position on Path
    The Position on Path control is used to control the position of the text along the path. Values less than zero or greater than one cause the text to move beyond the ends of the path, continuing in the direction set by the last two points on the path.
  • Right-Click Here for Shape Animation
    This label appears only when the Layout type is set to Path. It is used to provide access to a contextual menu that provides options for connecting the path to other paths in the node tree, and animating the spline points on the path over time.
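The extrapolation described for Position on Path can be sketched on a simple polyline: values outside the 0–1 range continue along the direction of the nearest end segment. A hypothetical example (not Fusion code):

```python
import math

def point_on_path(points, t):
    """Evaluate a position along a polyline, where t in [0, 1] spans the
    path and values outside that range continue along the end segments.
    Illustrative sketch of the behavior, not Fusion's implementation."""
    segs = list(zip(points, points[1:]))
    lengths = [math.hypot(b[0] - a[0], b[1] - a[1]) for a, b in segs]
    total = sum(lengths)
    target = t * total
    run = 0.0
    for i, ((x0, y0), (x1, y1)) in enumerate(segs):
        last = (i == len(segs) - 1)
        # The first segment absorbs t < 0 and the last absorbs t > 1,
        # extrapolating in the direction set by that segment.
        if target <= run + lengths[i] or last:
            f = (target - run) / lengths[i]
            return (x0 + f * (x1 - x0), y0 + f * (y1 - y0))
        run += lengths[i]

path = [(0, 0), (1, 0), (1, 1)]
print(point_on_path(path, 0.5))   # (1.0, 0.0) — the corner
print(point_on_path(path, 1.5))   # (1.0, 2.0) — past the end
```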

Text 3D Node Transform Tab

There are actually two Transform tabs in the Text 3D Inspector. The first Transform tab is unique to the Text 3D tool, while the second is the common Transform tab found on many 3D nodes. The Text 3D-specific Transform tab is described below since it contains some unique controls for this node.

  • Transform
    This menu determines the portion of the text affected by the transformations applied in this tab. Transformations can be applied to line, word, and character levels simultaneously. This menu is only used to keep the number of visible controls to a reasonable number.
    • Characters: Each character of text is transformed along its own center axis.
    • Words: Each word is transformed separately on the word’s center axis.
    • Lines: Each line of the text is transformed separately on that line’s center axis.
  • Spacing
    The Spacing slider is used to adjust the amount of space between each line, word, or character. Values less than one usually cause the characters to begin overlapping.
  • Pivot X, Y, and Z
    This provides control over the exact position of the axis. By default, the axis is positioned at the calculated center of the line, word, or character. The pivot control works as an offset, such that a value of 0.1, 0.1 in this control would cause the axis to be shifted downward and to the right for each of the text elements. Positive values in the Z-axis slider move the axis further along the axis (away from the viewer). Negative values bring the axis of rotation closer.
  • Rotation Order
    These buttons are used to determine the order in which transforms are applied. X, Y, and Z would mean that the rotation is applied to X, then Y, and then Z.
  • X, Y, and Z
    These controls can be used to adjust the angle of the text elements in any of the three dimensions.
  • Shear X and Y
    Adjust these sliders to modify the slanting of the text elements along the X- and Y-axis.
  • Size X and Y
    Adjust these sliders to modify the size of the text elements along the X- and Y-axis.

Text 3D Node Shading Tab

The Shading tab for the Text 3D node controls the overall appearance of the text and how lights affect its surface.

  • Opacity
    Reducing the material’s opacity decreases the color and Alpha values of the specular and diffuse colors equally, making the material transparent and allowing hidden objects to be seen through the material.
  • Use One Material
    Deselecting this option reveals a second set of Material controls for the beveled edge of the text.
  • Type
    To use a solid color texture, select the Solid mode. Selecting the Image mode reveals a new external input on the node that can be connected to another 2D image.
  • Specular Color
    Specular Color determines the color of light that reflects from a shiny surface. The more specular a material is, the glossier it appears. Surfaces like plastics and glass tend to have white specular highlights, whereas metallic surfaces like gold have specular highlights that tend to inherit their color from the material color. The basic shader material does not provide an input for textures to control the specularity of the object. Use nodes from the 3D Material category when more precise control is required over the specular appearance.
  • Specular Intensity
    Specular Intensity controls the strength of the specular highlight. If the specular intensity texture port has a valid input, then this value is multiplied by the Alpha value of the input.
  • Specular Exponent
    Specular Exponent controls the falloff of the specular highlight. The greater the value, the sharper the falloff, and the smoother and glossier the material appears. The basic shader material does not provide an input for textures to control the specular exponent of the object. Use nodes from the 3D Material category when more precise control is required over the specular exponent.
  • Image Source
    This control determines the source of the texture applied to the material. If the option is set to Tool, then an input appears on the node that can be used to apply the output of a 2D node as the texture. Selecting Clip opens a file browser that can be used to select an image or image sequence from disk. The Brush option provides a list of clips found in the Fusion\brushes folder.
  • Bevel Material
    This option appears only when the Use One Material checkbox control is deselected. The controls under this option are an exact copy of the Material controls above but are applied only to the beveled edge of the text.
  • Position, Rotation, Shear, and Size
    These transform controls act similarly to the transform controls in the Transform tab when a single shading element is enabled from the top of the Shading tab. However, when two or more shading elements are enabled, these transform controls are applied to the currently selected shading element. This allows you to independently control the position, rotation, shearing, and size of borders, fill colors, and shadows.
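The Opacity behavior described above, where color and Alpha decrease together, corresponds to scaling a premultiplied color. A minimal sketch (not Fusion code), assuming RGBA values in the 0–1 range:

```python
def apply_opacity(rgba, opacity):
    """Scale color and alpha equally, as the Opacity control does.
    Because the color channels scale together with alpha, objects
    behind the material show through it as opacity drops.
    Illustrative sketch, not Fusion's implementation."""
    r, g, b, a = rgba
    return (r * opacity, g * opacity, b * opacity, a * opacity)

print(apply_opacity((0.8, 0.4, 0.2, 1.0), 0.5))  # (0.4, 0.2, 0.1, 0.5)
```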

Text 3D Node Transform and Settings Tabs

The Transform and Settings tabs in the Inspector are duplicated in other 3D nodes. These common controls are described in detail in the “Common Controls” section at the end of this chapter.

Text 3D Node Modifiers

Right-clicking within the Styled Text box displays a menu with the following text modifiers. Only one modifier can be applied to a Text 3D Styled Text box. Below is a brief list of the text-specific modifiers.

  • Animate
    Use the Animate command to set a keyframe on the entered text and animate the content over time.
  • Character Level Styling
    The Text 3D node doesn’t support Character Level Styling directly. However, you can create a Text+ node first and modify its text field with a Character Level Styling modifier. Then either connect the Text 3D’s text field to the modifier that is now available or copy the Text+ node and paste its settings to the Text 3D node (right-click > Paste Settings).
  • Comp Name
    Comp Name puts the name of the composition in the Styled Text box and is generally used as a quick way to create slates.
  • Follower
    Follower is a text modifier that can be used to ripple animation applied to the text across each character in the text.
  • Publish
    Publish the text for connection to other text nodes.
  • Text Scramble
    A text modifier used to randomize the characters in the text.
  • Text Timer
    A text modifier used to count down from a specified time or to output the current date and time.
  • Time Code
    A text modifier used to output the timecode for the current frame.
  • Connect To
    Use this option to connect the text generated by this Text node to the published output of another node.

Transform 3D Node

The Transform 3D node can be used to translate, rotate, or scale all the elements within a scene without requiring a Merge 3D node. This can be useful for hierarchical transformations or for offsetting objects that are merged into a scene multiple times. Its controls are identical to those found in other 3D nodes’ Transformation tabs.

Transform 3D Node Inputs

The Transform 3D node has a single required input for a 3D scene or 3D object.

  • Scene Input: The orange scene input is connected to a 3D scene or 3D object to apply a second set of transformation controls.

Transform 3D Node Setup

The Transform 3D node adds 3D position, rotation, and pivot control onto any existing transforms in the 3D node prior to it. You can combine multiple Transform 3D nodes together to build parenting or hierarchical movement.

Transform 3D Node Controls Tab

The Controls tab is the primary tab for the Transform 3D node. It includes controls to translate, rotate, or scale all elements within a scene without requiring a Merge 3D node.

  • Translation
    • X, Y, Z Offset: Controls are used to position the 3D element in 3D space.
  • Rotation
    • Rotation Order: Use these buttons to select the order used to apply the rotation along each axis of the object. For example, XYZ would apply the rotation to the X-axis first, followed by the Y-axis, and then the Z-axis.
    • X, Y, Z Rotation: Use these controls to rotate the object around its pivot point. If the Use Target checkbox is selected, then the rotation is relative to the position of the target; otherwise, the global axis is used.
  • Pivot Controls
    • X, Y, Z Pivot: A pivot point is the point around which an object rotates. Normally, an object rotates around its own center, which is considered to be a pivot of 0,0,0. These controls can be used to offset the pivot from the center.
  • Scale
    • X, Y, Z Scale: If the Lock X/Y/Z checkbox is checked, a single scale slider is shown. This adjusts the overall size of the object. If the Lock checkbox is unchecked, individual X, Y, and Z sliders are displayed to allow scaling in any dimension.
  • Use Target
    Selecting the Use Target checkbox enables a set of controls for positioning an XYZ target. When Use Target is enabled, the object always rotates to face the target. The rotation of the object becomes relative to the target.
  • Import Transform
    Opens a file browser where you can select a scene file saved or exported by your 3D application. It supports the following file types:
    • LightWave Scene: .lws
    • Max Scene: .ase
    • Maya Ascii Scene: .ma
    • dotXSI: .xsi
  • Onscreen Transformation Controls
    Onscreen Transformation controls provide an alternative way of using the controls in the Inspector. The viewer includes modes for transformation, rotation, and scaling. To change the mode of the onscreen controls, select one of the three buttons in the toolbar along the side of the viewer. The modes can also be toggled using the keyboard shortcut Q for translation, W for rotation, and E for scaling. In all three modes, an individual axis of the control may be dragged to affect just that axis, or the center of the control may be dragged to affect all three axes.

    The scale sliders for most 3D nodes default to locked, which causes uniform scaling of all three axes. Disable the Lock X/Y/Z checkbox to scale an object on a single axis only.
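The rotation order and pivot behavior described above can be sketched with basic rotation matrices: the per-axis rotations are composed in the selected order, and the pivot acts as a temporary origin of rotation. A hypothetical illustration, not Fusion's implementation:

```python
import math

def rot_x(a):
    c, s = math.cos(a), math.sin(a)
    return [[1, 0, 0], [0, c, -s], [0, s, c]]

def rot_y(a):
    c, s = math.cos(a), math.sin(a)
    return [[c, 0, s], [0, 1, 0], [-s, 0, c]]

def rot_z(a):
    c, s = math.cos(a), math.sin(a)
    return [[c, -s, 0], [s, c, 0], [0, 0, 1]]

def mat_vec(m, v):
    return tuple(sum(m[i][j] * v[j] for j in range(3)) for i in range(3))

def transform(point, angles, order="XYZ", pivot=(0, 0, 0)):
    """Rotate a point around a pivot, applying the per-axis rotations
    in the given order (e.g. 'XYZ' applies X first, then Y, then Z).
    Illustrative sketch of the controls, not Fusion's implementation."""
    rots = {"X": rot_x, "Y": rot_y, "Z": rot_z}
    # Offset so the pivot becomes the origin of rotation.
    p = tuple(point[i] - pivot[i] for i in range(3))
    for axis in order:
        p = mat_vec(rots[axis](math.radians(angles[axis])), p)
    return tuple(p[i] + pivot[i] for i in range(3))

angles = {"X": 90, "Y": 90, "Z": 0}
# Different rotation orders give different results for the same angles.
print(transform((1, 0, 0), angles, order="XYZ"))
print(transform((1, 0, 0), angles, order="YXZ"))
```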

Triangulate 3D Node

The Triangulate 3D node is unique in that it has no controls. This node turns polygon shapes into triangles. For instance, a quad made of four points becomes two triangles. It is used to convert complex polygon shapes into a mesh for easier processing.

Triangulate 3D Node Inputs

The Triangulate 3D node has a single required input for a 3D scene or 3D object.

  • Scene Input: The orange scene input is connected to the 3D scene or 3D object you want to triangulate.

Triangulate 3D Node Setup

The Triangulate 3D node is placed after the geometry you want to triangulate.

Triangulate 3D Node Controls Tab

There are no controls for this node.

Triangulate 3D Node Settings Tab

The Settings tab in the Inspector is duplicated in other 3D nodes. These common controls are described in detail in the “Common Controls” section at the end of this chapter.

UV Map 3D Node

The UV Map 3D node replaces the UV texture coordinates on the geometry in the scene. These coordinates tell Fusion how to apply a texture to an object. While it is possible to adjust the global properties of the selected mapping mode, it is not possible to manipulate the UV coordinates of individual vertices directly from within Fusion. The onscreen controls drawn in the viewers are for reference only and cannot be manipulated.

Camera Projections with UV Map 3D

The Camera Map mode makes it possible to project texture coordinates onto geometry through a camera. Select Camera from the Map mode menu, and then connect the Camera 3D node you want to use to create the UV coordinates.

See the Camera 3D and Projector 3D nodes for alternate approaches to projection.

The projection can optionally be locked to the vertices as it appears on a selected frame.

This fails if the number of vertices in the mesh changes over time, as Fusion must be able to match up the mesh at the reference time and the current time. To be more specific, vertices may not be created or destroyed or reordered. So, projection locking does not work for many particle systems, or for primitives with animated subdivisions, or with duplicate nodes using non-zero time offsets.

UV Map 3D Node Inputs

The UV Map 3D node has two inputs: one for a 3D scene or 3D object and another optional input for a Camera 3D node.

  • Scene Input: The orange scene input is connected to the 3D scene or 3D object whose UV texture coordinates you want to replace.
  • CameraInput: This input expects the output of the Camera 3D node. It is only visible when the Camera Map mode menu is set to Camera.

UV Map 3D Node Setup

The UV Map 3D node is placed after all the geometry and set to Camera Map. Connecting a camera to the UV map allows you to line up the texture based on a centered camera position and 3D geometry.

UV Map 3D Node Controls Tab

The UV Map 3D Controls tab allows you to select Planar, Cylindrical, Spherical, XYZ, and Cubic mapping modes, which can be applied to basic Fusion primitives as well as imported geometry. The position, rotation, and scale of the texture coordinates can be adjusted to allow fine control over the texture’s appearance. An option is also provided to lock the UV produced by this node to animated geometry according to a reference frame. This can be used to ensure that textures applied to animated geometry do not slide.

  • Map Mode
    The Map mode menu is used to define how the texture coordinates are created. You can think of this menu as a way to select the virtual geometry that projects the UV space on the object.
    • Planar: Creates the UV coordinates using a plane.
    • Cylindrical: Creates the UV coordinates using a cylindrical-shaped object.
    • Spherical: The UVs are created using a sphere.
    • XYZ to UVW: The position coordinates of the vertices are converted to UVW coordinates directly. This is used for working with procedural textures.
    • CubeMap: The UVs are created using a cube.
    • Camera: Enables the Camera input on the node. After connecting a camera to the node, the texture coordinates are created based on camera projection.
  • Orientation X/Y/Z
    Defines the reference axis for aligning the Map mode.
  • Fit
    Clicking this button fits the Map mode to the bounding box of the input scene.
  • Center
    Clicking this button moves the center of the Map mode to the bounding box center of the input scene.
  • Lock UVs on Animated Objects
    If the object is animated, the UVs can be locked to it by enabling this option. The option also reveals the Ref Time slider, where it is possible to choose a reference frame for the UV mapping. Using this feature, it is not required to animate the UV map parameters. It is enough to set up the UV map at the reference time.
  • Size X/Y/Z
    Defines the size of the projection object.
  • Center X/Y/Z
    Defines the position of the projection object.
  • Rotation/Rotation Order
    Use these buttons to select which order is used to apply the rotation along each axis of the object. For example, XYZ would apply the rotation to the X-axis first, followed by the Y-axis, and then the Z-axis.
  • Rotation X/Y/Z
    Sets the orientation of the projection object for each axis, independent from the rotation order.
  • Tile U/V/W
    Defines how often a texture fits into the projected UV space on the applicable axis. Note that the UVW coordinates are transformed, not the texture. This works best when used in conjunction with the Create Texture node.
  • Flip U/V/W
    Mirrors the texture coordinates around the applicable axis.
  • Flip Faces (Cube Map Mode Only)
    Mirrors the texture coordinates on the individual faces of the cube.
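To make the relationship between the Map mode controls and the resulting texture coordinates concrete, here is a minimal Python sketch of a planar projection using the Size, Center, Tile, and Flip controls described above. The function name and defaults are hypothetical; Fusion's actual implementation is internal.

```python
# Illustrative sketch (not Fusion's implementation): a planar map in the
# XY orientation projects each vertex position straight down the Z axis
# into UV space, scaled by Size and offset by Center.

def planar_uv(position, center=(0.0, 0.0, 0.0), size=(1.0, 1.0),
              tile=(1.0, 1.0), flip_u=False):
    x, y, _z = (p - c for p, c in zip(position, center))
    # Map the projection plane's extent onto the 0..1 UV range.
    u = x / size[0] + 0.5
    v = y / size[1] + 0.5
    if flip_u:                          # Flip U mirrors the coordinates
        u = 1.0 - u
    # Tile transforms the UVW coordinates, not the texture itself.
    return (u * tile[0], v * tile[1])

print(planar_uv((0.0, 0.0, 2.0)))    # center of the plane -> (0.5, 0.5)
print(planar_uv((0.5, -0.5, 0.0)))   # corner -> (1.0, 0.0)
```

Only the X and Y components of Size matter for a planar map; the other Map modes (cylindrical, spherical, cubic) substitute a different virtual projection shape in place of the plane.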

UV Map 3D Node Settings Tab

The Settings tab in the Inspector is duplicated in other 3D nodes. These common controls are described in detail in the Common Controls section at the end of this chapter.

Weld 3D Node

Sometimes 3D geometry has vertices that should have been joined when the geometry was created, but for one reason or another they are not joined. This can cause artifacts, especially when the two vertices have different normals.

For example, you may find:

  • The differing normals produce hard shading/lighting edges where none were intended.
  • Cracks appear when the vertices are displaced along their normals with Displace 3D.
  • Pixels in the rendered image are missing or doubled up.
  • Particles pass through the tiny invisible cracks.

Instead of round-tripping back to your 3D modeling application to fix the “duplicated” vertices, the Weld 3D node allows you to do this in Fusion. Weld 3D welds together vertices with the same or nearly the same positions. This can be used to fix cracking issues when vertices are displaced, by welding the geometry before the Displace 3D. There are no user controls to pick vertices. Currently, this node welds together only position vertices; it does not weld normals, texture coordinates, or any other vertex stream. So, although the positions of two vertices have been made the same, their normals still have their old values. This can lead to hard edges in certain situations.
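The welding operation described above can be sketched in a few lines of Python. This is an illustrative approximation, not Fusion's implementation: it quantizes positions into tolerance-sized cells so that nearby duplicates collapse into one vertex, and it remaps polygon indices accordingly. Note that, as the text says, only positions are merged; normals and other vertex streams are untouched.

```python
# Illustrative sketch (not Fusion's implementation): weld position
# vertices that lie within a tolerance of each other and remap the
# polygon indices so duplicates collapse into a single vertex.

def weld(vertices, polygons, tolerance=1e-6):
    """Return (welded_vertices, remapped_polygons)."""
    welded = []
    cell_to_new = {}   # quantized position -> new vertex index
    old_to_new = []    # old vertex index   -> new vertex index
    for v in vertices:
        # Coarse spatial hash: positions in the same tolerance-sized
        # cell are treated as the same vertex (an approximation; pairs
        # straddling a cell boundary can be missed).
        key = tuple(round(c / tolerance) for c in v)
        if key not in cell_to_new:
            cell_to_new[key] = len(welded)
            welded.append(v)
        old_to_new.append(cell_to_new[key])
    return welded, [[old_to_new[i] for i in poly] for poly in polygons]

# Two quads sharing an edge, but with the edge's two vertices duplicated:
verts = [(0, 0, 0), (1, 0, 0), (1, 1, 0), (0, 1, 0),
         (1, 0, 0), (2, 0, 0), (2, 1, 0), (1, 1, 0)]
quads = [[0, 1, 2, 3], [4, 5, 6, 7]]
new_verts, new_quads = weld(verts, quads)
print(len(new_verts))   # 6: the duplicated edge vertices are merged
print(new_quads)        # [[0, 1, 2, 3], [1, 4, 5, 2]]
```

After welding, the two quads share indices along the common edge, which is what removes cracks when the geometry is later displaced along its normals.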

Weld 3D Node Inputs

The Weld 3D node has a single input for a 3D scene or 3D object you want to repair.

  • Scene Input: The orange scene input is connected to the 3D scene or 3D object you want to fix.

Weld 3D Node Setup

The Weld 3D node is placed after the geometry that has duplicate vertices problems. Sometimes problems are exposed when displacing the geometry. In that case, placing the weld after the geometry but before the Displace 3D can repair the issues.

Weld 3D Node Controls Tab

The Controls tab for the Weld 3D node includes a simple Weld Mode menu. You can choose between welding vertices or fracturing them.

  • Fracture
    Fracturing is the opposite of welding, so all vertices are unwelded. This means that all polygon adjacency information is lost. For example, an Image Plane 3D normally consists of connected quads that share vertices. Fracturing the image plane causes it to become a bunch of unconnected quads.
  • Tolerance
    In auto mode, the Tolerance value is automatically detected. This should work in most cases. It can also be adjusted manually if needed.

Weld 3D Node Settings Tab

The Settings tab in the Inspector is duplicated in other 3D nodes. These common controls are described in the Common Controls section at the end of this chapter.

3D Node Modifier

Coordinate Transform 3D

Because of the hierarchical nature of the Fusion 3D node tree, the original position of an object in the 3D scene often fails to indicate the current position of the object. For example, an image plane might initially have a position at 1, 2, 1, but then be scaled, offset, and rotated by other nodes further downstream in the 3D scene, ending up with an absolute location of 10, 20, 5.

This can complicate connecting an object further downstream in the composition directly to the position of an upstream object. The Coordinate Transform modifier can be added to any set of XYZ coordinate controls and calculate the current position of a given object at any point in the scene hierarchy.

To add a Coordinate Transform modifier, right-click a number field on any node and choose Modify With > CoordTransform Position from the control’s contextual menu.

  • Target Object
    This control should be connected to the 3D node that produces the original coordinates to be transformed. To connect a node, drag and drop a node from the node tree into the Text Edit control, or right-click the control and select the node from the contextual menu. It is also possible to type the node’s name directly into the control.
  • Sub ID
    The Sub ID slider can be used to target an individual sub-element of certain types of geometry, such as an individual character produced by a Text 3D node or a specific copy created by a Duplicate 3D node.
  • Scene Input
    This control should be connected to the 3D node that outputs the scene containing the object at the new location. To connect a node, drag and drop a node from the node tree into the Text Edit control, or right-click the control and select an object from the Connect To submenu.

3D Node Common Controls

Nodes that handle 3D geometry share several identical controls in the Inspector. This section describes controls that are common among 3D nodes.

3D Node Common Controls Tab

These controls are often displayed in the lower half of the Controls tab. They appear in nodes that create or contain 3D geometry.

  • Visibility
    • Visible: If this option is enabled, the object is visible in the viewers and in final renders. When disabled, the object is not visible in the viewers nor is it rendered into the output image by the Renderer 3D node. Also, a non-visible object does not cast shadows.
    • Unseen by Cameras: When the Unseen by Cameras checkbox is enabled, the object is visible in the viewers (unless the Visible checkbox is disabled), except when viewed through a camera. Also, the object is not rendered into the output image by the Renderer 3D node. However, shadows cast by an unseen object are still visible when rendered by the software renderer in the Renderer 3D node, though not by the OpenGL renderer.
    • Cull Front Face/Back Face: Use these options to eliminate rendering and display of certain polygons in the geometry. If Cull Back Face is selected, polygons facing away from the camera are not rendered and do not cast shadows. If Cull Front Face is selected, polygons facing toward the camera are not rendered and do not cast shadows. Enabling both options has the same effect as disabling the Visible checkbox.
    • Suppress Aux Channels for Transparent Pixels: In previous versions of Fusion, transparent pixels were excluded by the Software and OpenGL renderer options in the Renderer 3D node. To be more specific, the Software renderer excluded pixels with R, G, B, A set to 0, and the OpenGL renderer excluded pixels with A set to 0. This is now optional. The reason you might want to do this is to get aux channels (e.g., Normals, Z, UVs) for the transparent areas. For example, suppose you want to replace the texture on a 3D element that is transparent in certain areas with a texture that is transparent in different areas. It would then be useful to have transparent areas set aux channels (particularly UVs). As another example, suppose you are adding depth of field. You probably do not want the Z-channel to be set on transparent areas, as this gives you a false depth. Also, keep in mind that the exclusion is based on the final pixel color including lighting, if it is on. So, if you have a specular highlight on a clear glass material, this checkbox does not affect it.
  • Lighting
    • Affected by Lights: Disabling this checkbox causes lights in the scene to not affect the object. The object does not receive nor cast shadows, and it is shown at the full brightness of its color, texture, or material.
    • Shadow Caster: Disabling this checkbox causes the object not to cast shadows on other objects in the scene.
    • Shadow Receiver: Disabling this checkbox causes the object not to receive shadows cast by other objects in the scene.
  • Matte
    Enabling the Is Matte option applies a special texture, causing the object to not only become invisible to the camera, but also making everything that appears directly behind it invisible as well. This option overrides all textures. For more information on Fog 3D and Soft Clipping, see Chapter 85, “3D Compositing Basics,” in the DaVinci Resolve Reference Manual or Chapter 25 in the Fusion Reference Manual.
    • Is Matte: When activated, objects whose pixels fall behind the matte object’s pixels in Z do not get rendered. Two additional options are displayed when the Is Matte checkbox is activated.
    • Opaque Alpha: When the Is Matte checkbox is enabled, the Opaque Alpha checkbox sets the Alpha value of the matte object to 1.
    • Infinite Z: This option sets the value in the Z-channel to infinite. This checkbox is visible only when the Is Matte option is enabled.
  • Blend Mode
    A Blend mode specifies which method is used by the renderer when combining this object with the rest of the scene. The blend modes are essentially identical to those listed in the section for the 2D Merge node. For a detailed explanation of each mode, see the section for that node.

    The blending modes were originally designed for use with 2D images. Using them in a lit 3D environment can produce undesirable results. For best results, use the blend modes in unlit 3D scenes rendered with the Software renderer option in the Renderer 3D node.
    • OpenGL Blend Mode: Use this menu to select the blending mode that is used when the geometry is processed by the OpenGL renderer in the Renderer 3D node. This is also the mode used when viewing the object in the viewers. Currently the OpenGL renderer supports a limited number of blending modes.
    • Software Blend Mode: Use this menu to select the blending mode that is used when the geometry is processed by the software renderer. Currently, the software renderer supports all the modes described in the Merge node documentation, except for the Dissolve mode.
  • Normal/Tangents
    Normals are imaginary lines perpendicular to each point on the surface of an object. They are used to illustrate the exact direction and orientation of every polygon on 3D geometry. Knowing the direction and orientation determines how the object gets shaded. Tangents are lines that exist along the surface’s plane. These lines are tangent to a point on the surface and are used to describe the direction of textures you apply to the surface of 3D geometry.
    • Scale: This slider increases or decreases the length of the vectors for both normals and tangents.
    • Show Normals: Displays blue vectors typically extending outside the surface of the geometry. These normal vectors help indicate how different areas of the surface are illuminated based on the angle at which the light hits it.
    • Show Tangents: Displays green vectors for Y and red vectors for X. The X and Y vectors represent the direction of the image or texture you are applying to the geometry.
  • Object ID
    Use this slider to select which ID is used to create a mask from an object in an image. Use the Sample button in the same way as the Color Picker to grab IDs from the image displayed in the viewer. The image or sequence must have been rendered from a 3D software package with those channels included.
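The ID-based masking described above amounts to a per-pixel comparison against the ObjectID auxiliary channel. The following Python sketch shows the idea on a tiny image; the function name is hypothetical and this is not Fusion's internal code.

```python
# Illustrative sketch (not Fusion's implementation): given an ObjectID
# auxiliary channel, build a matte by comparing each pixel's ID against
# the ID chosen with the slider or the Sample button.

def object_id_mask(object_id_channel, target_id):
    """Return a row-by-row alpha mask: 1.0 where the pixel's ID matches."""
    return [[1.0 if pixel_id == target_id else 0.0 for pixel_id in row]
            for row in object_id_channel]

# A 3x3 ObjectID channel containing three objects (IDs 0, 1, and 2):
ids = [[0, 0, 2],
       [0, 2, 2],
       [1, 1, 2]]
print(object_id_mask(ids, 2))
# [[0.0, 0.0, 1.0], [0.0, 1.0, 1.0], [0.0, 0.0, 1.0]]
```

Because the comparison is an exact integer match, the source image must carry the ID channel unfiltered, which is why it needs to come from a 3D render that includes those channels.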

3D Node Common Materials Tab

The controls in the Materials tab are used to determine the appearance of the 3D object when lit. Most of these controls directly affect how the object interacts with light using a basic shader. For more advanced control over the object’s appearance, you can use tools from the 3D Materials category of the Effects Library. These tools can be used to assemble a more finely detailed and precise shader.

When a shader is constructed using the 3D Material tools and connected to the 3D Object’s material input, the controls in this tab are replaced by a label that indicates that an external material is currently in use.

  • Diffuse
    Diffuse describes the base surface characteristics without any additional effects like reflections or specular highlights.
    • Diffuse Color
      The Diffuse Color determines the basic color of an object when the surface of that object is either lit indirectly or lit by an ambient light. If a valid image is provided to the tool’s diffuse texture input, then the RGB values provided here are also multiplied by the color values of the pixels in the diffuse texture. The Alpha channel of the diffuse material can be used to control the transparency of the surface.
    • Alpha
      This slider sets the material’s Alpha channel value. This affects diffuse and specular colors equally, and affects the Alpha value of the material in the rendered output. If the tool’s diffuse texture input is used, then the Alpha value provided here is multiplied by the Alpha channel of the pixels in the image.
    • Opacity
      Reducing the material’s Opacity decreases the color and Alpha values of the specular and diffuse colors equally, making the material transparent and allowing hidden objects to be seen through the material.
  • Specular
    The Specular section provides controls for determining the characteristics of light that reflects toward the viewer. These controls affect the appearance of the specular highlight that appears on the surface of the object.
    • Specular Color
      Specular Color determines the color of light that reflects from a shiny surface. The more specular a material is, the glossier it appears. Surfaces like plastics and glass tend to have white specular highlights, whereas metallic surfaces like gold have specular highlights that tend to inherit their color from the material color. The basic shader material does not provide an input for textures to control the specularity of the object. Use tools from the 3D Material category when more precise control is required over the specular appearance.
    • Specular Intensity
      Specular Intensity controls how strong the specular highlight is. If the specular intensity texture input has a valid connection, then this value is multiplied by the Alpha value of the input.
    • Specular Exponent
      Specular Exponent controls the falloff of the specular highlight. The greater the value, the sharper the falloff, and the smoother and glossier the material appears. The basic shader material does not provide an input for textures to control the specular exponent of the object. Use tools from the 3D Material category when more precise control is required over the specular exponent.
  • Transmittance
    Transmittance controls the way light passes through a material. For example, a solid blue sphere casts a black shadow, but one made of translucent blue plastic would cast a much lower density blue shadow.

    There is a separate Opacity option. Opacity determines how transparent the actual surface is when it is rendered. Fusion allows adjusting opacity and transmittance separately, which at first can be counterintuitive to artists unfamiliar with 3D software. It is possible to have a surface that is fully opaque but transmits 100% of the light arriving upon it, effectively making it a luminous/emissive surface.
    • Attenuation
      Attenuation determines how much color is transmitted through the object. For an object to have transmissive shadows, set the attenuation to (1, 1, 1), which means 100% of the red, green, and blue light passes through the object. Setting this color to RGB (1, 0, 0) means that the material transmits 100% of the red light arriving at the surface but none of the green or blue light. This allows “stained glass” shadows.
    • Alpha Detail
      When the Alpha Detail slider is set to 0, the Alpha channel of the object is ignored and the entire object casts a shadow. If it is set to 1, the Alpha channel determines what portions of the object cast a shadow.
    • Color Detail
      The Color Detail slider modulates light passing through the surface by the diffuse color + texture colors. Use this to cast a shadow that contains color details of the texture applied to the object. Increasing the slider from 0 to 1 brings more of the diffuse color + texture color into the shadow. Note that the Alpha and Opacity of the object are ignored when transmitting color, allowing an object with a solid Alpha to still transmit its color to the shadow.
    • Saturation
      The Saturation slider controls the saturation of the color component transmitted to the shadow. Setting this to 0.0 results in monochrome shadows.
    • Receives Lighting/Shadows
      These checkboxes control whether the material is affected by lighting and shadows in the scene. If turned off, the object is always fully lit and/or unshadowed.
    • Two-Sided Lighting
      This makes the surface effectively two-sided by adding a second set of normals facing the opposite direction on the back side of the surface. This is normally off, to increase rendering speed, but can be turned on for 2D surfaces or for objects that are not fully enclosed, to allow the reverse or interior surfaces to be visible as well.

      Normally, in a 3D application, only the front face of a surface is visible and the back face is culled, so that if a camera were to revolve around a plane in a 3D application, when it reached the backside, the plane would become invisible. Making a plane two-sided in a 3D application is equivalent to adding another plane on top of the first but rotated by 180 degrees so the normals are facing the opposite direction on the backside. Thus, when you revolve around the back, you see the second image plane that has its normals facing the opposite way.

      Fusion does exactly the same thing as 3D applications when you make a surface two-sided. The confusion about what two-sided lighting does arises because Fusion does not cull back-facing polygons by default. If you revolve around a one-sided plane in Fusion, you still see it from the backside (but you are seeing the frontside bits duplicated through to the backside as if it were transparent). Making the plane two-sided effectively adds a second set of normals to the backside of the plane.

      Note that this can become rather confusing once you make the surface transparent, as the same rules still apply and produce a result that is counterintuitive. If you view a transparent two-sided surface from the front while it is illuminated from the back, it looks unlit.
  • Material ID
    This control is used to set the numeric identifier assigned to this material. The Material ID is an integer number that is rendered into the MatID auxiliary channel of the rendered image when the Material ID option is enabled in the Renderer 3D tool. For more information, see Chapter 85, “3D Compositing Basics,” in the DaVinci Resolve Reference Manual or Chapter 25 in the Fusion Reference Manual.
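The interaction of the Transmittance controls (Attenuation, Color Detail, and Saturation) can be sketched as a small shading function. This is an illustrative model, not Fusion's renderer, and the function name and blend formula are assumptions made for the example.

```python
# Illustrative sketch (not Fusion's renderer): the shadow cast by a
# transmissive surface is the light color filtered by Attenuation,
# optionally tinted by the diffuse (+ texture) color via Color Detail,
# and desaturated by the Saturation control.

def transmitted_shadow(light, attenuation, diffuse=(1.0, 1.0, 1.0),
                       color_detail=0.0, saturation=1.0):
    # Attenuation: fraction of each RGB component that passes through.
    shadow = [l * a for l, a in zip(light, attenuation)]
    # Color Detail blends in the diffuse (+ texture) color.
    shadow = [s * ((1.0 - color_detail) + color_detail * d)
              for s, d in zip(shadow, diffuse)]
    # Saturation 0 collapses the result to a monochrome average.
    gray = sum(shadow) / 3.0
    return tuple(gray + saturation * (s - gray) for s in shadow)

# RGB attenuation (1, 0, 0) transmits only the red light, giving the
# "stained glass" red shadow described above:
print(transmitted_shadow(light=(1.0, 1.0, 1.0), attenuation=(1.0, 0.0, 0.0)))
```

With `saturation=0.0` the same call yields equal R, G, and B values, matching the note that a Saturation of 0.0 produces monochrome shadows.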

3D Node Common Transform Tab

Many tools in the 3D category include a Transform tab used to position, rotate, and scale the object in 3D space.

  • Translation
    • X, Y, Z Offset
      These controls can be used to position the 3D element.
  • Rotation
    • Rotation Order
      Use these buttons to select which order is used to apply rotation along each axis of the object. For example, XYZ would apply the rotation to the X axis first, followed by the Y axis and then finally the Z axis.
    • X, Y, Z Rotation
      Use these controls to rotate the object around its pivot point. If the Use Target checkbox is selected, then the rotation is relative to the position of the target; otherwise, the global axis is used.
  • Pivot
    • X, Y, Z Pivot
      A Pivot point is the point around which an object rotates. Normally, an object rotates around its own center, which is considered to be a pivot of 0,0,0. These controls can be used to offset the pivot from the center.
  • Scale
    • X, Y, Z Scale
      If the Lock X/Y/Z checkbox is checked, a single Scale slider is shown. This adjusts the overall size of the object. If the Lock checkbox is unchecked, individual X, Y, and Z sliders are displayed to allow individual scaling in each dimension. Note: If the Lock checkbox is checked, scaling of individual dimensions is not possible, even when dragging specific axes of the Transformation Widget in scale mode.
  • Use Target
    Selecting the Use Target checkbox enables a set of controls for positioning an XYZ target. When target is enabled, the object always rotates to face the target. The rotation of the object becomes relative to the target.
  • Import Transform
    Opens a file browser where you can select a scene file saved or exported by your 3D application. It supports the following file types:
  • LightWave Scene (.lws)
  • Max Scene (.ase)
  • Maya Ascii Scene (.ma)
  • dotXSI (.xsi)

The Import Transform button imports only transformation data. For 3D geometry, lights, and cameras, consider using the File > FBX Import option.
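Why the Rotation Order buttons matter can be shown with a short Python sketch: rotations about different axes do not commute, so applying the same X, Y, and Z angles in a different order generally produces a different final orientation. This is a generic rotation-matrix illustration, not Fusion's internal code.

```python
# Illustrative sketch (not Fusion's implementation): compose per-axis
# rotation matrices in the sequence chosen by the Rotation Order buttons.
import math

def axis_rotation(axis, degrees):
    """3x3 rotation matrix about a single axis ('X', 'Y', or 'Z')."""
    r = math.radians(degrees)
    c, s = math.cos(r), math.sin(r)
    return {
        "X": [[1, 0, 0], [0, c, -s], [0, s, c]],
        "Y": [[c, 0, s], [0, 1, 0], [-s, 0, c]],
        "Z": [[c, -s, 0], [s, c, 0], [0, 0, 1]],
    }[axis]

def matmul(a, b):
    return [[sum(a[i][k] * b[k][j] for k in range(3)) for j in range(3)]
            for i in range(3)]

def rotate(order, angles):
    """Compose rotations so the first axis in `order` is applied first."""
    m = [[1, 0, 0], [0, 1, 0], [0, 0, 1]]
    for axis in order:
        m = matmul(axis_rotation(axis, angles[axis]), m)
    return m

angles = {"X": 90, "Y": 90, "Z": 0}
mx = rotate("XYZ", angles)
my = rotate("YXZ", angles)
print(mx == my)  # False: the two orders orient the object differently
```

Because the orders disagree, an animated rig exported from another application only matches in Fusion when the same rotation order is selected here.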

Onscreen Transformation Controls
Most of the controls in the Transform tab are represented in the viewer with onscreen controls for transformation, rotation, and scaling. To change the mode of the onscreen controls, select one of the three buttons in the toolbar in the upper left of the viewer. The modes can also be toggled using the keyboard shortcuts Q (translation), W (rotation), and E (scaling). In all three modes, individual axes of the control may be dragged to affect just that axis, or the center of the control may be dragged to affect all three axes.

The scale sliders for most 3D tools default to locked, which causes uniform scaling of all three axes. Unlock the Lock X/Y/Z Scale checkbox to scale an object on a single axis only.

3D Node Settings Tab

The Common Settings tab can be found on most tools in Fusion. The following controls are specific settings for 3D nodes.

  • Hide Incoming Connections
    Enabling this checkbox can hide connection lines from incoming nodes, making a node tree appear cleaner and easier to read. When enabled, empty fields for each input on a node are displayed in the Inspector. Dragging a connected node from the node tree into the field hides that incoming connection line as long as the node is not selected in the node tree. When the node is selected in the node tree, the line reappears.
  • Comment Tab
    The Comment tab contains a single text control that is used to add comments and notes to the tool. When a note is added to a tool, a small red dot icon appears next to the Settings tab icon and a text bubble appears on the node. To see the note in the Node Editor, hold the mouse pointer over the node for a moment. The contents of the Comments tab can be animated over time, if required.
  • Scripting Tab
    The Scripting tab is present on every tool in Fusion. It contains several edit boxes used to add scripts that process when the tool is rendering. For more details on the contents of this tab, please consult the scripting documentation.

Justin Robinson

Justin Robinson is a DaVinci Resolve & Fusion instructor who is known for simplifying concepts and techniques for anyone looking to learn any aspect of the video post-production workflow. Justin is the founder of JayAreTV, a training and pre-made asset website offering affordable and accessible video post-production education. You can follow Justin on Twitter at @JayAreTV, on YouTube at JayAreTV, or on Facebook at MrJayAreTV.
