Get 30+ hr of DaVinci Resolve courses & 400+ pre-made assets
As little as $15/month for all courses and pre-made assets
Camera tracking, also known as match moving, is a vital link between 2D and 3D, allowing compositors to integrate 3D renders into live-action scenes. The Camera Tracker node calculates the path of a live-action camera and generates a virtual camera in 3D space. This virtual camera’s motion is intended to be identical to the motion of the actual camera that shot the scene. Using the calculated position and movement of the virtual camera provides the flexibility to add 3D elements to a live-action scene. The Camera Tracker also creates a point cloud in 3D space that can be used to align objects and other 3D models to the live-action scene.
Camera Tracker Node Inputs
The Camera Tracker has two inputs:
- Background: The orange image input accepts a 2D image you want tracked.
- Occlusion Mask: The white occlusion mask input is used to mask out regions that do not need to be tracked. Regions where this mask is white will not be tracked. For example, a person moving in front of the scene and occluding parts of it can confuse the tracker; a quickly created rough mask around the person can be used to tell the tracker to ignore the masked-out areas.
Camera Tracker Node Setup
The Camera Tracker background input is used to connect the image you want tracked. Polygon masks can be connected into the occlusion mask input to identify areas the tracker should ignore.
Camera Tracker Node Track Tab
The Track tab contains the controls you need to set up an initial analysis of the scene.
Auto Track
Automatically detects trackable features and tracks them through the source footage. Tracks are automatically terminated when the track error becomes too high, and new tracks are created as needed. The Detection Threshold and Minimum Feature Separation sliders control the number and distribution of auto tracks.
Delete
Deletes all the data internal to the Camera Tracker node, including the tracking data and the solve data (camera motion path and point cloud). To delete only the solve data, use the Delete button on the Solve tab.
Preview AutoTrack Locations
Turning this checkbox on will show where the auto tracks will be distributed within the shot. This is helpful for determining if the Detection Threshold and Minimum Feature Separation need to be adjusted to get an even spread of trackers.
Detection Threshold
Determines the sensitivity for detecting features. Automatically generated tracks will be assigned to the shot, and the Detection Threshold will force them into locations of either high or low contrast.
Minimum Feature Separation
Determines the spacing between the automatically generated tracking points. Decreasing this slider causes more auto tracks to be generated. Keep in mind that a large number of tracking points will also result in a lengthier solve.
Track Channel
Used to nominate a color channel to track: red, green, blue, or luminance. When nominating a channel, choose one that has a high level of contrast and detail.
Track Range
Used to determine which frames are tracked:
- Global: The global range, which is the full duration of the Timeline.
- Render: The render duration set on the Timeline.
- Valid: The valid range is the duration of the source media.
- Custom: A user determined range. When this is selected, a separate range slider appears to set the start and end of the track range.
Bidirectional Tracking
Enabling this will force the tracker to track backward after the initial forward tracking. When tracking backward, new tracks are not started but rather existing tracks are extended backward in time. It is recommended to leave this option on, as long tracks help give better solved cameras and point clouds.
Gutter Size
Trackers can become unstable when they get close to the edge of the image, where they either drift, jitter, or completely lose their pattern. The Camera Tracker automatically terminates any tracks that enter the gutter region. Gutter size is given as a percentage of pattern size; by default it is 100%, so a pattern size of 0.04 gives a gutter of 0.04.
New Track Defaults
There are three methods in which the Camera Tracker node can analyze the scene, and each has its own strengths when dealing with certain types of camera movement.
- Tracker: Internally, all the Trackers use the Optical Flow Tracker to follow features over time and then further refine the tracks with the trusted Fusion Tracker or Planar Tracker. The Planar Tracker method allows the pattern to warp over time by various types of transforms to find the best fit. These transforms are:
- Translation
- Translation and Rotation
- Translation, Rotation, and Scale
- Affine
- Perspective
It is recommended to use the default Translation, Rotation, and Scale (TRS) setting when using the Planar Tracker. The Affine and Perspective settings need large patterns in order to track accurately.
- Close Tracks When Track Error Exceeds: Tracks will be automatically terminated when the tracking error gets too high. When tracking a feature, a snapshot of the pixels around a feature are taken at the reference time of the track. This is called a pattern, and that same pattern of pixels is searched for at future times. The difference between the current time pattern and the reference time pattern is called the track error. Setting this option higher produces longer but increasingly less accurate tracks.
- Solve Weight: By default, each track is weighted evenly in the solve process. Increasing a track’s weight means it has a stronger effect on the solved camera path. This is an advanced option that should be rarely changed.
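The track error described above can be thought of as a normalized difference between the snapshot of pixels taken at the reference time and the pixels under the pattern at the current time. A minimal sketch of the idea, using an RMS difference as the metric (the exact metric Fusion uses internally is not documented here, so treat this as an illustration only):

```python
import numpy as np

def track_error(reference_pattern, current_pattern):
    """Roughly estimate how much a tracked pattern has drifted.

    Both arguments are small 2D arrays of pixel values (0..1) sampled
    around the feature -- the 'pattern'. Returns an RMS difference:
    0.0 means a perfect match, larger values mean more drift.
    """
    diff = current_pattern.astype(np.float64) - reference_pattern.astype(np.float64)
    return float(np.sqrt(np.mean(diff ** 2)))

# A track would be terminated once this value exceeds the
# "Close Tracks When Track Error Exceeds" threshold.
ref = np.full((11, 11), 0.5)      # 11x11 pattern of mid-gray pixels
drifted = ref + 0.1               # uniform brightness change
print(track_error(ref, ref))      # 0.0 -> perfect match
print(track_error(ref, drifted))  # 0.1
```

Raising the threshold keeps tracks alive longer even as this difference grows, which is why higher settings produce longer but less accurate tracks.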
Auto Track Defaults
Set a custom prefix name and/or color for the automatically generated tracks. This custom color will be visible when Track Colors in the Options tab is set to User Assigned.
Camera Tracker Node Camera Tab
The controls of the Camera tab let you specify the physical aspects of the live-action camera, which will be used as a starting point when searching for solve parameters that match the real-world camera. The more accurate the information provided in this section, the more accurate the camera solve. The Camera tab includes controls relating to the lens and gate aspects of the camera being solved for.
Focal Length
Specify the known, constant focal length used to shoot the scene, or provide a guess if the Refine Focal Length option is activated in the Solve tab.
Film Gate
Choose a film gate preset from the drop-down menu, or manually enter the film back size in the Aperture Width and Aperture Height inputs. Note that these values are in inches.
Aperture Width
In the event that the camera used to shoot the scene is not in the preset drop-down menu, manually enter the aperture width (in inches).
Aperture Height
In the event that the camera used to shoot the scene is not in the preset drop-down menu, manually enter the aperture height (in inches).
Resolution Gate Fit
This defines how the image fits the sensor size. Often, film sensors are sized to cover a number of formats, and only a portion of the sensor area is recorded into an image.
For example, a 16:9 image is saved out of a full aperture-sized sensor.
Typically, fit to Width or Height is the best setting. The other fit modes are Inside, Outside, or Stretched.
Center Point
This defines where the camera lens is aligned to the camera. The default is (0.5, 0.5), which is the middle of the sensor.
Use Source Pixel Aspect
This will use the pixel aspect squeeze of the image that is loaded. HD uses square pixels, but NTSC has a pixel aspect ratio of 0.9:1, and anamorphic CinemaScope is 2:1. Disabling this option exposes Pixel X and Y number fields where you can customize the source pixel aspect.
Auto Camera Planes
When enabled, the camera’s image plane and far plane are automatically moved to enclose the point cloud whenever a solve completes. Sometimes, though, the solver can fling points deep into the scene, pushing the image plane very far out and making the resulting scene unwieldy to work with in the 3D views. In these cases, disable this option to override the default behavior (or delete the offending tracks).
Camera Tracker Node Solve Tab
The Solve tab is where the tracking data is used to reconstruct the camera’s motion path along with the point cloud. It is also where cleanup of bad or false tracks is done, and other operations on the tracks can be performed, such as defining which markers are exported in the Point Cloud 3D. The markers can also have their weight set to affect the solve calculations.
For example, a good camera solve may have already been generated, but there are not enough locators in the point cloud in an area where an object needs to be placed, so adding more tracks and setting their Solve Weight to zero will not affect the solved camera but will give more points in the point cloud.
Solve
Pressing Solve will launch the solver, which uses the tracking information and the camera specifications to generate a virtual camera path and point cloud, approximating the motion of the physical camera in the live-action footage. The console will automatically open, displaying the progress of the solver.
Delete
Delete will remove any solved information, such as the camera and the point cloud, but will keep all the tracking data.
Average Solve Error
Once the camera has been solved, a summary of the solve calculation is displayed at the top of the Inspector. Chief among those details is the Average Solve Error. This number is a good indicator of whether the camera solve was successful. It can be thought of as the difference (measured in pixels) between tracks in the 2D image and the reconstructed 3D locators reprojected back onto the image through the reconstructed camera. The goal is a low solve error: any value less than 1.0 pixels will generally result in a good track, and a value between 0.6 and 0.8 pixels is considered excellent.
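Conceptually, the solve error is a reprojection error. A minimal sketch of the idea, assuming a simple pinhole projection; the camera matrix and function names here are illustrative, not Fusion's internals:

```python
import numpy as np

def average_solve_error(points_3d, tracks_2d, projection):
    """Average pixel distance between each observed 2D track position
    and its reconstructed 3D locator reprojected through the camera."""
    errors = []
    for X, observed in zip(points_3d, tracks_2d):
        h = projection @ np.append(X, 1.0)   # project homogeneous 3D point
        reprojected = h[:2] / h[2]           # perspective divide -> pixels
        errors.append(np.linalg.norm(reprojected - observed))
    return float(np.mean(errors))

# Toy pinhole camera: 1000 px focal length, principal point (960, 540).
K = np.array([[1000.0,    0.0, 960.0],
              [   0.0, 1000.0, 540.0],
              [   0.0,    0.0,   1.0]])
P = np.hstack([K, np.zeros((3, 1))])        # camera at the origin
points_3d = [np.array([0.0, 0.0, 10.0])]    # locator 10 units down the axis
tracks_2d = [np.array([960.5, 540.0])]      # observed track, 0.5 px off
print(average_solve_error(points_3d, tracks_2d, P))  # 0.5
```

A result of 0.5 pixels here would fall comfortably under the 1.0-pixel threshold described above.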
Clean Tracks by Filter
Clicking this button selects tracks based on the Track Filtering options. If the Auto Delete Tracks By Filter checkbox is activated, the selected tracks will be deleted as well.
Clean Foreground Tracks
Clicking this button makes a selection of the tracks on fast-moving objects that would otherwise cause a high solve error. The selection is determined by the Foreground Threshold slider.
Foreground Threshold
This slider sets the detection threshold for finding the tracks on fast-moving objects. The higher the value, the more forgiving.
Auto Delete Tracks by Filter
With this checkbox enabled, tracks that are selected by the Clean Tracks By Filter button will be deleted. Enable the checkbox, and then press Clean Tracks By Filter. Any track that meets the filtering options is then selected and deleted.
Auto Delete Foreground Tracks
With this checkbox enabled, tracks that are selected by the Clean Foreground Tracks button will be deleted. Enable the checkbox, and then press Clean Foreground Tracks. Any track that meets the foreground threshold criteria is deleted.
Accept Solve Error
This slider sets an acceptable maximum threshold level for the solve error. If the solve error is greater than this value, the Camera Tracker sweeps the focal length setting in an attempt to bring the solve error under the Accept Solve Error value. If the solver cannot find a solution, the Camera Tracker displays a message in the console that the solver failed. In that case, try entering the correct focal length, or manually clean some noisy tracks, and then re-solve.
Auto Select Seed Frames
With this enabled, the Camera Tracker nominates two frames that will be used as a reference for initiating the solve. These two frames are initially solved for and a camera is reconstructed, and then gradually more frames are added in, and the solution is “grown” outward from the seed frames. The choice of seed frames strongly affects the entire solve and can easily cause the solve to fail. Seed frames can be found automatically or defined manually.
Disabling this will allow the user to select their own two frames. Manual choice of seed frames is an option for advanced users. When choosing seed frames, it is important to satisfy two conflicting desires: the seed frames should have lots of tracks in common yet be far apart in perspective (i.e., the baseline distance between the two associated cameras is long).
Refine Focal Length
Enabling this will allow the solver to adjust the focal length of the lens to match the tracking points. To prevent the focal length from being adjusted, disable this option and set the known Focal Length parameter in the Camera tab.
Enable Lens Parameter
When enabled, lens distortion parameters are exposed to help in correcting lens distortion when solving.
- Refine Center Point: Normally disabled. Camera lenses are usually centered in the middle of the film gate, but this may differ on some cameras. For example, a cine camera may be set up for Academy 1.85, which has a sound stripe on the left; when shooting Super 35, the lens is offset to the right.
- Refine Lens Parameters: This will refine the lens distortion or curvature of the lens. There tends to be larger distortion on wide-angle lenses.
Enable Lens Parameters
When disabled, the Camera Tracker does not do any lens curvature simulations. This is the default setting and should remain disabled if there is a very low distortion lens or the lens distortion has already been removed from the source clip. Activating the Enable Lens Parameters checkbox determines which lens parameters will be modeled and solved for. Parameters that are not enabled will be left at their default values. The following options are available:
- Radial Quadratic: Model only Quadratic radial lens curvature, which is either barrel or pincushion distortion. This is the most common type of distortion. Selecting this option causes the low and high order distortion values to be solved for.
- Radial Quartic: Model only Quartic radial lens curvature, which combines barrel and pincushion distortion. This causes the low and high order distortion values to be solved for.
- Radial & Tangential: Model and solve for both radial and tangential distortion. Tangential relates to misaligned elements in a lens.
- Division Quadratic: Provides a more accurate simulation of Quadratic radial lens curvature. This causes the low and high order distortion values to be solved for.
- Division Quartic: Provides a more accurate simulation of Quartic radial lens curvature. This causes the low and high order distortion values to be solved for.
- Lower Order Radial Distortion: This slider is available for all simulations. It determines the quadratic lens curvature.
- Higher Order Radial Distortion: This slider is available only for Quartic simulations. Determines the quartic lens curvature.
- Tangential Distortion X/Y: These sliders are available only for Tangential simulations. Determines skew distortion.
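The radial models above amount to polynomial warps of the image coordinates about the center. A rough illustration of the quadratic and quartic terms; the coefficient names and this exact parameterization are assumptions for the sketch, not Fusion's actual lens model:

```python
def radial_distort(x, y, k_low, k_high=0.0):
    """Apply radial distortion about the normalized image center (0, 0).

    k_low scales the quadratic (r^2) term; k_high scales the quartic
    (r^4) term. Negative coefficients pull points inward (barrel
    distortion), positive push them outward (pincushion).
    """
    r2 = x * x + y * y
    scale = 1.0 + k_low * r2 + k_high * r2 * r2
    return x * scale, y * scale

# Barrel distortion: a point at the frame edge moves toward the center.
print(radial_distort(1.0, 0.0, k_low=-0.1))                # (0.9, 0.0)
# A quartic term bends the curve back out at the edges.
print(radial_distort(1.0, 0.0, k_low=-0.1, k_high=0.02))   # (0.92, 0.0)
```

The lower-order slider corresponds to `k_low` in this sketch and is active for all simulations; the higher-order slider corresponds to `k_high` and only matters for the Quartic models.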
Track Filtering
The Camera Tracker can produce a large number of automatically generated tracks. Rather than spending a lot of time individually examining the quality of each track, it is useful to have some less time-intensive ways to filter out large swaths of potentially bad tracks. The following input sliders are useful for selecting large numbers of tracks based on certain quality metrics, after which a number of different operations can be performed on them. For example, weaker tracks can be selected and deleted, yielding a stronger set of tracks to solve from. Each filter can be individually enabled or disabled.
Minimum Track Length (Number of Markers)
Selects tracks that have a duration shorter than the slider’s value. Short tracks usually don’t get a chance to move very far and thus provide less perspective information to the solver than a longer track, yet both short and long tracks are weighted evenly in the solve process, making long tracks more valuable to the solver. Locators corresponding to shorter tracks are also less accurately positioned in 3D space than those corresponding to longer tracks. If the shot has a lot of long tracks, it can be helpful to delete the short tracks. For typical shots, using a value in the range of 5 to 10 is suggested. If there are not a lot of long tracks (e.g., the camera is quickly rotating, causing tracks to start and move off frame quickly), using a value closer to 3 is recommended.
Maximum Track Error
Selects tracks that have an average track error greater than the slider’s value. When tracking, tracks are automatically terminated when their track error exceeds some threshold. This auto termination controls the maximum track error, while this slider controls the average track error. For example, tracks following the foliage in a tree tend to be inaccurate and sometimes may be detected by their high average error.
Maximum Solve Error
Selects tracks that have a solve error greater than the slider’s value. One of the easiest ways to increase the accuracy of a camera solve is to select the 20% of the tracks with the highest solve error and simply delete them (although this can sometimes make things worse).
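The three filters above amount to simple threshold tests applied per track. A sketch of how such filtering might work; the `Track` structure and threshold values here are hypothetical, chosen only to illustrate the logic:

```python
from dataclasses import dataclass

@dataclass
class Track:
    name: str
    length: int          # number of tracked frames (markers)
    track_error: float   # average 2D pattern-match error
    solve_error: float   # reprojection error after solving, in pixels

def select_by_filters(tracks, min_length=5, max_track_error=0.25, max_solve_error=1.0):
    """Return tracks that fail any filter -- candidates for deletion."""
    return [t for t in tracks
            if t.length < min_length
            or t.track_error > max_track_error
            or t.solve_error > max_solve_error]

tracks = [
    Track("Track1", length=40, track_error=0.05, solve_error=0.4),  # good
    Track("Track2", length=3,  track_error=0.04, solve_error=0.7),  # too short
    Track("Track3", length=25, track_error=0.05, solve_error=2.3),  # bad solve
]
bad = select_by_filters(tracks)
print([t.name for t in bad])   # ['Track2', 'Track3']
```

Deleting the failing tracks and re-solving with the surviving set is exactly the cleanup loop the Clean Tracks By Filter button automates.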
Select Tracks Satisfying Filters
Selects the tracks within the scene that meet the above Track Filtering values. Note that when this button is pressed, the tracks that satisfy the filter values are displayed in the Selected Tracks area of the Solve tab and are colored in the viewer. This button is useful when Auto Select Tracks While Dragging Sliders is turned off or if the selection, for example, was accidentally lost by mis-clicking in the viewer.
Auto Select Tracks While Dragging Sliders
When this is ticked on, dragging the above sliders (Minimum Track Length, Maximum Track Error, Maximum Solve Error) will cause the corresponding tracks to be interactively selected in the viewer.
Operations on Selected Tracks
Tracks selected directly in the viewer with the mouse or selected via track filtering can have the following operations applied:
| Operation | Description |
| --- | --- |
| Delete | Removes the tracks from the set. When there are bad tracks, the simplest and easiest option is to simply delete them. |
| Trim Previous | Cuts the tracked frames from the current frame to the start of the track. Sometimes it can be more useful to trim a track than to delete it: for example, high-quality long tracks that become inaccurate when the feature they are tracking starts to become occluded or moves too close to the edge of the image. |
| Trim Next | Cuts the tracked frames from the current frame to the end of the track. |
| Rename | Replaces the current auto-generated name with a new name. |
| Set Color | Allows for a user-assigned color of the tracking points. |
| Export Flag | Controls whether the locators corresponding to the selected tracks will be exported in the point cloud. By default, all locators are flagged as exportable. |
| Solve Weight | By default, all the tracks are used and equally weighted when solving for the camera’s motion path. The most common use of this option is to set a track’s weight to zero so it does not influence the camera’s motion path but still has a reconstructed 3D locator. Setting a track’s weight to values other than 1.0 or 0.0 should only be done by advanced users. |
Onscreen display of track names and values is controlled by the following options:
| Option | Description |
| --- | --- |
| None | Clears/hides the selected tracks. |
| Toggle | Inverts the selection: selected tracks become unselected, and vice versa. |
| All | Selects all tracks. |
| Show Names | Displays the track names; by default these are numbers. |
| Show Frame Range | Displays the start and end frame of a track. |
| Show Solve Error | Displays the amount of solve error each selected track has. |
Selected Tracks
This area displays the properties of a track point or group of points. It has options to:
- Clear: Deselects all tracks and clears this area.
- Invert: Deselects the current selected tracks and selects the other tracks.
- Visible: Selects all the trackers at the current frame.
- All: Selects all trackers on all frames.
- Search: Selects tracks whose names contain a given substring.
Camera Tracker Node Export Tab
The Export tab lets you turn the tracked and solved data this node has generated into a form that can be used for compositing.
The Export button will create a basic setup that can be used for 3D match moving:
- A Camera 3D with animated translation and rotation that matches the motion of the live-action camera and an attached image plane.
- A Point Cloud 3D containing the reconstructed 3D positions of the tracks.
- A Shape 3D set to generate a ground plane.
- A Merge 3D merging together the camera, point cloud, and ground plane. When the Merge 3D is viewed through the camera in a 3D viewer, the 3D locators should follow the tracked footage.
- A Renderer 3D set to match the input footage.
The export of individual nodes can be enabled/disabled in the Export Options tab.
Update Previous Export
When this button is clicked, the previously exported nodes are updated with any new data generated. These previously exported nodes are remembered in the Previous Export section at the bottom of this section. Here’s an example of how this is handy:
- Solve the camera and export.
- Construct a complex Node Editor based around the exported nodes for use in set extension.
- The camera is not as accurate as preferred or perhaps the solver is rerun to add additional tracks to generate a denser point cloud. Rather than re-exporting the Camera 3D and Point Cloud 3D nodes and connecting them back in, just press the Update Previous Export button to “overwrite” the existing nodes in place.
Automatically Update Previous Export After Solves
This will cause the already exported nodes (Camera 3D, Point Cloud 3D, Lens Distort, Renderer 3D, and the ground plane) to auto update on each solve.
3D Scene Transform
Although the camera is solved, it has no idea where the ground plane or center of the scene is located. By default, the solver will always place the camera in Fusion’s 3D virtual environment so that on the first frame it is located at the origin (0, 0, 0) and is looking down the -Z axis. You have the choice to export this raw scene without giving the Camera Tracker any more information, or you can set the ground plane and origin to simplify your job when you begin working in the 3D scene. The 3D Scene Transform controls provide a mechanism to correctly orient the physical ground plane in the footage with the virtual ground plane in the 3D viewer. Adjusting the 3D Scene Transform does not modify the camera solve but simply repositions the 3D scene to best represent the position of the live-action camera.
The Aligned/Unaligned menu locks or unlocks the origin and ground plane settings. When set to Unaligned, you can select the ground plane and origin either manually or by selecting locators in the viewer. When in unaligned mode, a 3D Transform control in the 3D viewer can be manually manipulated to adjust the origin.
Once alignment of the ground plane and origin has been completed, the section is locked by switching the menu to Aligned.
Set from Selection
When set to Unaligned, buttons labeled Set from Selection are displayed under the Origin, Orientation, and Scale sections. Clicking these buttons takes the selected locators in the viewer and aligns the ground plane or origin based on the selection.
For instance, to set the ground plane, do the following:
- After solving, set the 3D Scene Transform menu to Unaligned.
- Find a frame where the ground plane is at its largest and clearest point.
- In the viewer, drag a selection rectangle around all the ground plane locators.
- Hold Shift and drag again to add to the selection.
- In the Orientation section, make sure the Selection Is menu correctly matches the orientation of the selected locators.
- Click the Set from Selection button located under the Orientation parameters.
- Set the 3D Scene Transform menu back to Aligned.
To get the best result when setting the ground plane, try to select as many points as possible that belong to the ground and have a wide separation.
Setting the origin can help you place 3D objects in the scene with more precision. To set the origin, you can follow similar steps, but only one locator is required for the origin to be set. When selecting a locator for the origin, select one that has a very low solve error.
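Geometrically, aligning the ground plane from selected locators comes down to fitting a plane through their 3D positions and rotating the scene so that the plane's normal points straight up. The sketch below shows only that geometric idea (a least-squares plane fit plus Rodrigues' rotation formula); it is not Fusion's actual algorithm, and all names are illustrative:

```python
import numpy as np

def ground_plane_rotation(locators):
    """Fit a plane to the selected locators (least squares via SVD) and
    return the rotation matrix that maps the plane's normal onto +Y."""
    pts = np.asarray(locators, dtype=float)
    centered = pts - pts.mean(axis=0)
    _, _, vt = np.linalg.svd(centered)
    normal = vt[-1]                        # direction of least variance
    if normal[1] < 0:
        normal = -normal                   # pick the upward-facing normal
    up = np.array([0.0, 1.0, 0.0])
    v = np.cross(normal, up)
    c = float(np.dot(normal, up))
    vx = np.array([[0.0, -v[2], v[1]],
                   [v[2], 0.0, -v[0]],
                   [-v[1], v[0], 0.0]])
    # Rodrigues' rotation formula, simplified for unit vectors.
    return np.eye(3) + vx + vx @ vx / (1.0 + c)

# Locators lying on a plane tilted 45 degrees about the x-axis:
n = np.array([0.0, 1.0, 1.0]) / np.sqrt(2.0)
u = np.array([1.0, 0.0, 0.0])
w = np.array([0.0, 1.0, -1.0]) / np.sqrt(2.0)
pts = [a * u + b * w for a, b in [(0, 0), (1, 0), (0, 1), (2, 1), (1, 3)]]
R = ground_plane_rotation(pts)
print(np.allclose(R @ n, [0.0, 1.0, 0.0]))  # True
```

This is also why selecting many widely separated ground points helps: the plane fit averages out the positional noise of individual locators.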
Ground Plane Options
These controls let you adjust the ground plane for the scene, which is a crucial step in making sure the composite looks correct.
| Control | Description |
| --- | --- |
| Color | Sets the color of the ground plane. |
| Size | Sets the size of the ground plane. |
| Subdivision Level | Sets how many polygons make up the ground plane. |
| Wireframe | Sets whether the ground plane is displayed as a wireframe or a solid surface in 3D. |
| Line Thickness | Adjusts how wide the lines are drawn in the view. |
| Offset | By default, the center of the ground plane is placed at the origin (0, 0, 0). This can be used to shift the ground plane up and down along the y-axis. |
Export Options
Provides a checkbox list of what will be exported as nodes when the Export button is pressed. The options are Camera, Point Cloud, Ground Plane, Renderer, Lens Distortion, and Enable Image Plane in the camera.
Animation
The Animation menu allows you to choose between animating the camera and animating the point cloud. Animating the camera leaves the point cloud in a locked position while the camera is keyframed to match the live-action shot. Animating the point cloud does the opposite: the camera is locked in position while the entire point cloud is keyframed to match the live-action shot.
When the Update Previous Export button is clicked, the previously exported nodes listed here are updated with any new data generated (this includes the camera path and attributes, the point cloud, and the renderer).
Camera Tracker Node Options Tab
The Options tab lets you customize the Camera Tracker’s onscreen controls so you can work most effectively with the scene material you have.
Trail Length
Displays trail lines of the tracks overlaid on the viewer. The number of frames forward and back from the current frame is set by the trail length.
Locator Size
In the 3D viewer, the point cloud locators can be sized by this control.
Track Colors, Locator Colors, and Export Colors each have options for setting their color to one of the following:
- User Assigned
- Solve Error
- Take From Image
| Option | Description |
| --- | --- |
| Track Colors | Onscreen tracks in the 2D viewer. |
| Locator Colors | Point cloud locators in the 3D viewer. |
| Export Colors | Colors of the locators that get exported within the Point Cloud node. |
Darken Image
Dims the brightness of the image in viewers to better see the overlaid tracks. This affects both the 2D and 3D viewers.
Visibility
Toggles which overlays will be displayed in the 2D and 3D viewers. The options are Tracker Markers, Trails, Tooltips in the 2D Viewer, Tooltips in the 3D Viewer, Reprojected Locators, and Tracker Patterns.
Colors
Sets the color of the overlays.
- Selection Color: Controls the color of selected tracks/locators.
- Preview New Tracks Color: Controls the color of the points displayed in the viewer when the Preview AutoTrack Locations option is enabled.
- Solve Error Gradient: By default, tracks and locators are colored by a green-yellow-red gradient to indicate their solve error. This gradient is completely user adjustable.
Reporting
Outputs various parameters and information to the Console.
Understanding Camera Tracking
On large productions, camera tracking or 3D match moving is often handed over to experts who have experience with the process of tracking and solving difficult shots. There is rarely a shot where you can press a couple of buttons and have it work perfectly. It does take an understanding of the whole process and what is essential to get a good solved track.
The Camera Tracker must solve for hundreds of thousands of unknown variables, which is a complex task. For the process to work, it is essential to get good tracking data that exists in the shot for a long time. False or bad tracks will skew the result. This section explains how to clean up false tracks and other techniques to get a good solve.
Getting a good solve is an iterative process:
Track > Solve > Refine Filters > Solve > Cleanup tracks > Solve > Cleanup from point cloud > Solve > Repeat.
Initially, there are numerous tracks, and not all are good, so a process of filtering and cleaning up unwanted tracks to get to the best set is required. At the end of each cleanup stage, pressing Solve ideally gives you a progressively lower solve error. This needs to be below 1.0 for it to be good for use with full HD content, and even lower for higher resolutions. Refining the tracks often but not always results in a better solve.
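The Track > Solve > Cleanup > Solve cycle above can be sketched as a simple loop. The solve and cleanup functions here are toy stand-ins for what you do interactively in the node, not a real API:

```python
def refine_until_good(tracks, solve, cleanup, target_error=1.0, max_passes=10):
    """Alternate solving and cleanup until the average solve error
    drops below the target (below 1.0 px for full HD footage)."""
    for _ in range(max_passes):
        camera, error = solve(tracks)
        if error < target_error:
            return camera, error          # good enough to export
        tracks = cleanup(tracks)          # filter, trim, and delete bad tracks
    raise RuntimeError("target solve error not reached; "
                       "check the focal length, seed frames, and tracks")

# Toy stand-ins: each cleanup pass drops the worst 20% of tracks, and
# the fake solver's error shrinks as the bad tracks disappear.
tracks = list(range(100))
fake_solve = lambda ts: ("camera", len(ts) / 60.0)
fake_cleanup = lambda ts: ts[: int(len(ts) * 0.8)]
camera, error = refine_until_good(tracks, fake_solve, fake_cleanup)
print(round(error, 2))  # 0.85
```

In practice each cleanup pass is one of the stages described below: filtering by track quality, deleting onscreen false tracks, or pruning stray points from the point cloud.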
False tracks are caused by a number of conditions, such as moving objects in a shot, or reflections and highlights from a car. There are other types of false tracks, like parallax errors, where two objects are at different depths and the intersection gets tracked, or moiré effects that cause the track to creep. Recognizing these false tracks and eliminating them is the most important step in the solve process.
Getting a good set of long tracks is essential; the longer the tracks are, the better the solve. The Bidirectional Tracking option in the Track tab is used to extend the beginning of tracks in time. The longer a track exists, and the more tracks that overlap in time across a shot, the more consistent and accurate the solve.
Two seed frames are used in the solve process. The algorithm chooses two frames that are as far apart in time as possible yet still share many of the same tracks. That is why longer tracks make a more significant difference in the selection of seed frames.
The two Seed frames are used as the reference frames, which should be from different angles of the same scene. The solve process will use these as a master starting point to fit the rest of the tracks in the sequence.
There is an option in the Solve tab, Auto Select Seed Frames, which is the default setting and most often a good idea. However, auto selecting seed frames can make for a longer solve. When refining the trackers and re-solving, disable the checkbox and use the Seed 1 and Seed 2 sliders to enter the previous solve’s seed frames. These seed frames can be found in the Solve Summary at the top of the Inspector after the initial solve.
After the first solve, all the Trackers will have extra data generated. These are solve errors and tracking errors.
Use the track filters to reduce unwanted tracks, such as setting the minimum track length to eight frames. As the value for each filter is adjusted, the Solve dialog will indicate how many tracks are affected by the filter. Then solve again.
Under the Options tab, set the trail length to 20. This will display each track on the footage with a ±20 frame trail. When scrubbing or playing through the footage, false tracks can be seen and selected onscreen, then deleted by pressing the Delete key. This process takes an experienced eye to spot tracks that go bad. Then solve again.
You can view the exported scene in a 3D perspective viewer, where the point cloud will be visible. Move and pan around the point cloud, and select and delete points that do not align with the image and the scene space. Then solve again.
Repeat the process until the solve error is below 1.0 before exporting.
Camera Tracker Node Settings Tab
The Settings tab in the Inspector is duplicated in the other tracking nodes. These common controls are described in detail in the common node controls documentation.
About the Author
Justin Robinson is a Certified DaVinci Resolve, Fusion & Fairlight instructor who is known for simplifying concepts and techniques for anyone looking to learn any aspect of the video post-production workflow. Justin is the founder of JayAreTV, a training and premade asset website offering affordable and accessible video post-production education. You can follow Justin on Twitter at @JayAreTV, on YouTube at JayAreTV, or on Facebook at MrJayAreTV.