the how to
In this tutorial I cover how to set up and use the UV projection mode in the ScanlineRender node to make cleanup a breeze in Nuke. In my experience, I've found myself using this method to paint out dolly tracks or other objects on a flat surface. It's also come in handy when doing cleanup on patterned surfaces where painting from the camera's perspective might be challenging. The only thing you need to get this method working is a tracked camera for your shot. The better the camera track, the easier this type of cleanup will be.
To get started, you'll need to place a card in 3D space over the area where you want to do your cleanup. Think of this card as your cleanup canvas: anything that falls inside the card's area you'll be able to work on; anything that falls outside it won't be usable. This method can be repeated with multiple cards if necessary. I used a PointCloudGenerator node to help confirm that my card was where I wanted it in 3D space. Next, take a Project3D node and hook its inputs into the tracked camera and the BG plate, then plug your card into the Project3D. Drop in a ScanlineRender, hook that into the card, and set the "projection mode" knob on the ScanlineRender to "UV". This makes the ScanlineRender output the UVs of whatever geometry is plugged in, which is where we'll be doing the cleanup. This ScanlineRender doesn't require a camera to be plugged in, because UV maps exist independently of the camera. Think of it like modeling geometry in a 3D package: you can model the geo with UVs without needing a camera in your scene. With the viewer hooked up to the UV ScanlineRender, start working through the cleanup.
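If you prefer building this from the Script Editor, here's a sketch of the same tree in Python. Node class and knob names follow Nuke's Python API as best I recall, and 'Camera1' and 'BG' are placeholder node names — swap in your own.

```python
# Runs inside Nuke's Script Editor. Assumes a tracked camera named
# 'Camera1' and a Read node named 'BG' already exist in the script.
import nuke

bg  = nuke.toNode('BG')       # the plate
cam = nuke.toNode('Camera1')  # the tracked camera

project = nuke.nodes.Project3D()
project.setInput(0, bg)       # image being projected
project.setInput(1, cam)      # camera doing the projecting

card = nuke.nodes.Card2()     # the cleanup canvas, placed over the area
card.setInput(0, project)

uv_render = nuke.nodes.ScanlineRender()
uv_render.setInput(1, card)   # obj/scn input
uv_render['projection_mode'].setValue('uv')
# No camera on this ScanlineRender: UVs exist independently of a camera.
```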
After you've finished the cleanup, it's time to reverse the UV stabilization we set up above. Copy the card that was created earlier and hook it up at the bottom of your cleanup. Next, place a ScanlineRender node and connect its camera input to the tracked cam and its obj/scn input to the copied card. This ScanlineRender needs its "projection mode" knob left at the default "render camera". Finally, merge your completed cleanup over the BG plate. All done!
The image below shows the basic setup for using this stabilization technique.
Thanks for reading and happy comping!
When I first read about Han Cao's AutoFlare tool, I was itching to try it out on my next comp that needed a tasteful lens flare. I've used H_AutoFlare countless times; it's intuitive and produces great-looking lens flares quickly, so there's no need to build your own setup from scratch each time. I often found myself using only a handful of the preset components at a time; I never needed the entire kit at once. The biggest headache I noticed when grabbing a few of the components was how long the AutoFlare node took to compute. It would quickly bog down my comp due to the large number of Directional Blur nodes inside the AutoFlare group. I'd find myself setting the look inside the AutoFlare group, then picking out the pieces I needed before deleting the group entirely. This was a pain when I needed to go back and customize the look or feel of the flare: all the GUI sliders were gone, and I had to search out which node was linked to what and how it was controlling the rest of the flare generator.
I took Han's sweet AutoFlare tool and added simple disable boolean knobs for each of the components. This way, I can tick which components I want enabled, and the components I don't want aren't processed by Nuke when caching or at render time. Furthermore, I won't have to disassemble the group when I want to extract a small portion of the flare generator. The Directional Blur node is great at adding flavor to many of these lens flare components, but it bogs down Nuke very fast and should be disabled when possible. My small change greatly speeds up the performance of the group, especially when only one or two components are enabled.
The images below show the profiles of an example comp with the regular AutoFlare on the left and the disable-option AutoFlare on the right. The nodes are color coded using Nuke's built-in Python performance timer, which calculates how resource intensive each node is as it's being processed. The colors start at green (not intensive) and work their way up to red (most intensive); white denotes that no calculations are needed. Both of these node graphs output the same image. In this example, I created a flare without the "Mirror1", "Ghost1", or "Circle1" components, and without using a texture. You can see that the regular AutoFlare on the left still has to process every single component, even when their values are set to 0. With the added ability to disable the components you don't need, the amount of resources needed to calculate the final image is significantly less. This keeps the node customizable without needless extra computations.
You might see the extra green on the regular AutoFlare and think those processes won't take much time to calculate, but every extra computation you force Nuke to do adds up in the end, especially as your comp gets more complex. Saving Nuke a bit of computation time here and there can mean less waiting for caching or rendering, less precomping when the script gets heavy, and, perhaps most importantly, more time for drinks at the end of the day.
Hope you learned something!
a long while ago...
A long while ago, the CG supervisor at our studio asked for a hand in making a Nuke tool that would help him and his team better display the on-set references alongside the rendered references and the CG render. He wanted a tool that could display the on-set chrome ball, matte ball, MacBeth chart, and reference objects in an adjustable way to fit any rendered shot. To meet these challenges, I created the TurntableOverlay node for Nuke.
how do I use TurntableOverlay?
To set up the overlay, begin by plugging the pipes on the TurntableOverlay node into their respective images. Plug the Chrome and Diffuse pipes into an image that contains the chrome ball and diffuse ball, respectively. Plug the ColourChart into the MacBeth chart or a similar colour reference guide. Finally, plug the ref pipe into any reference image. The pipes can be plugged into the same input, different inputs, or no input at all. Use the checkboxes at the bottom of the TurntableOverlay tab to select which inputs will be displayed in the overlay. The crop boxes for each input mark the rectangular section of the image that will be displayed in the overlay. Use the appearance tab to adjust the background and border colours of the overlay, as well as the transparency. The offset position and scale knobs move the overlay around the viewport, while the position knob has a few predetermined positions to snap the overlay to.
how does it work?
The guts of the group work by taking each piped-in input and reformatting it to the dimensions the user specifies with the crop boxes. The reformatted image is merged with the background tile and border before being placed on a card. There are two sets of cards for each input: one for the vertical layout, used when the position knob is set to right or left, and one for the horizontal layout. The cards are transformed and scaled uniformly so the overlay can move anywhere in the viewport before exiting the group.
This project is an ongoing learning process for myself and I’m sure there are many adjustments to be made to improve this tool. Feel free to reach out with any ideas on how to sharpen this concept and work out the kinks. I hope you find this tool useful in some capacity.
Add this code to your menu.py, which lives in the .nuke directory
Download the script from Nukepedia here:
I've written a handy Python script that takes a look at the selected read node and sets the project full-size format, aspect ratio, and frame range to that of the read node. If the full-size format does not exist in the current user's list of formats, it prompts the user to name the new format and adds the new format to the dropdown menu before setting the project settings. Useful when first creating a script and you want to quickly set your project settings.
I've used this script a handful of times when starting up a fresh Nuke project. The script makes setting project information snappy when you know that the read node you're bringing in has exactly the right resolution, frame range, and aspect ratio for the output. The idea for this script came from a programming adage along the lines of "if you have to do a dull, simple task more than twice, automate it," and that's exactly what this script aims to address. Setting the project settings has to be done every single time you start on a new shot, and it requires multiple steps. Having the ability to adjust all the project information at once saves a few clicks and lets you get working faster.
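The Nukepedia download has the full script; here's a condensed sketch of the logic. It runs inside Nuke, assumes a single Read node is selected, and leans on nuke.addFormat, which takes a 'width height pixelAspect name' string.

```python
import nuke

def set_project_from_read():
    """Copy the selected Read node's format and frame range to the project."""
    read = nuke.selectedNode()
    fmt = read.format()

    name = fmt.name()
    if not name:
        # Unnamed format: ask the user for a name and register it so it
        # shows up in the project settings format dropdown.
        name = nuke.getInput('Name this new format', 'newFormat')
        nuke.addFormat('%d %d %s %s' % (fmt.width(), fmt.height(),
                                        fmt.pixelAspect(), name))

    root = nuke.root()
    root['format'].setValue(name)
    root['first_frame'].setValue(read['first'].value())
    root['last_frame'].setValue(read['last'].value())
```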
Part 1: The Label
The first step I take to organizing my Tracker node is to display the Reference Frame as well as the type of motion that the Tracker is currently using. Being able to view the Reference Frame without having to dive into the node is quick and can help when creating roto-shapes that will be attached to this Tracker. Seeing the output type is helpful for a similar reason, with a glance I can tell what type of motion I should be seeing when viewing the Tracker. This can be achieved with some simple Python code added to your menu.py file.
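For example, this can be done with a single knobDefault line ('Tracker4' is the Tracker's node class in recent Nuke versions, and the '[value ...]' TCL is evaluated live in the label):

```python
import nuke

# Display the reference frame and the transform type (e.g. match-move)
# in every new Tracker's label:
nuke.knobDefault('Tracker4.label',
                 'ref frame: [value reference_frame]\n[value transform]')
```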
Furthermore, I find it helpful to have the Tracker's reference frame automatically set to the frame when the Tracker was created. When I'm looking for an element to track, I place my Tracker node on the frame where I want to start my track. This line of code allows me to skip the extra clicks of going inside the node and selecting 'set to current frame'.
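One way to do this is an onUserCreate callback in menu.py (a sketch — 'reference_frame' is the knob name as I recall it on Tracker4):

```python
import nuke

def _set_tracker_ref_frame():
    # Runs when a user creates a Tracker: default the reference frame
    # to the frame currently shown in the timeline.
    nuke.thisNode()['reference_frame'].setValue(nuke.frame())

nuke.addOnUserCreate(_set_tracker_ref_frame, nodeClass='Tracker4')
```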
Finally, there are a few more defaults that I adjust for my Tracker node. These are mostly personal preferences and are pretty straightforward, so I'll refrain from writing more than what's necessary for them.
I set my shutter offset to centered because real-world cameras produce motion blur that's centered around an object; it's strange that Nuke doesn't ship with this as the default for so many nodes. That said, I rarely use the Tracker for calculating transforms because it's quite slow compared to a baked Transform or CornerPin (more on that later).
I check 'adjust for luminance changes' which I find quite often gets me a better track.
I also check 'hide progress bar', which is a total personal preference; I don't like seeing the bar pop up when I start tracking. I've seen some compers who like to put the progress bar in its own window pane within Nuke's UI as well.
I set the default export dropdown to 'Transform (match-move, baked)'; more on that later. In the end, the code in my menu.py looks like this:
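Here's a hedged reconstruction of that menu.py section. The knob names and values are from memory — in particular 'adjust_for_luminance_changes', 'hide_progress_bar', and the export knob name are assumptions worth double-checking in your Nuke version (note that Nuke spells the shutter offset value 'centred'):

```python
import nuke

# Show the reference frame and motion type in every new Tracker's label
nuke.knobDefault('Tracker4.label',
                 'ref frame: [value reference_frame]\n[value transform]')

# Default the reference frame to the frame the node is created on
def _tracker_ref_frame():
    nuke.thisNode()['reference_frame'].setValue(nuke.frame())
nuke.addOnUserCreate(_tracker_ref_frame, nodeClass='Tracker4')

# Personal-preference defaults (knob names are assumptions; verify in your build)
nuke.knobDefault('Tracker4.shutteroffset', 'centred')
nuke.knobDefault('Tracker4.adjust_for_luminance_changes', 'true')
nuke.knobDefault('Tracker4.hide_progress_bar', 'true')
nuke.knobDefault('Tracker4.export', 'Transform (match-move, baked)')
```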
Part 2: Baked Tracker/Corner Pin Naming
When tracking, I like to use the 'Transform (match-move, baked)' export option. I've found that linking Transform nodes to Trackers slows down my comps; the slowdown may be caused by Nuke having to process both the Transform and the Tracker node when drawing up a frame.
The big downside is that, by default, the baked Transform/CornerPin created from the Tracker is difficult to differentiate from regular old Transforms, and there's no easy way to tell which Tracker created it. Being able to tell where a baked Transform came from is super helpful for other compers picking up shots and when I'm iterating through multiple Trackers. I've written a Python script that is called when the user presses the 'create' button with the Transform (match-move, baked) or CornerPin (match-move, baked) dropdown selected. I've left out the stabilize versions of these, as I typically check the 'invert' checkbox within these nodes to swap between matchmove and stabilize transformations. The script creates a Transform/CornerPin node and sets the node's name with the format 'TrackerName_Matchmove' or 'TrackerName_CornerPin', depending on the dropdown. It also adds the Tracker's reference frame to the label of the Transform. Here's a link to a GitHub page with the 'customTrackerIS.py' script.
To use this script, you have to add these lines of code to your menu.py file:
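The original lines were embedded here; at minimum, menu.py needs to import the script so its create-button override is registered. This assumes customTrackerIS.py sits next to menu.py in your .nuke folder, which is already on Nuke's plugin path:

```python
import customTrackerIS
```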
Part 3: Fast Toggle Tracker TRS Checkboxes
The final amendment to Nuke's native Tracker node that I've implemented is a Python button, built into the node, that allows me to quickly check or uncheck the translate, rotate, and scale options for all tracks within the Tracker. I do a lot of my tracking in Mocha, and when I copy a Tracker node from Mocha into Nuke, all the tracks inside the node are toggled off by default. I'd find myself having to click through every translate, rotate, and scale checkbox, toggling them all on. Using 'manage user knobs' in the Tracker node, I added three boolean knobs, one each for translate, rotate, and scale, plus a PyScript button. When pressed, the PyScript button looks at the three checkboxes and changes all tracks in the node to match the selected boxes. So if the user ticks just translate and presses the button, only the T (translate) knobs for all trackers will be checked. Here's a GitHub link to the TRS custom tracker node.
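For reference, here's roughly what the PyScript button's body does. This is a sketch: the 'enable_t/r/s' knob names stand in for the custom booleans, the track-count line is a heuristic that assumes default "track N" names, and the 31-column table with T/R/S at columns 6–8 is what I've seen in the Tracker4 versions I've used — verify against yours.

```python
# Runs inside Nuke from the PyScript button on the Tracker node.
import nuke

node = nuke.thisNode()
tracks = node['tracks']          # Tracker4 stores all tracks in one table knob

COLS = 31                        # columns per track in the tracks table
T_COL, R_COL, S_COL = 6, 7, 8    # translate / rotate / scale checkboxes

# Heuristic track count: each row serializes with a default "track N" name.
n_tracks = tracks.toScript().count('"track')

# The three custom boolean knobs added via 'manage user knobs':
states = {T_COL: node['enable_t'].value(),
          R_COL: node['enable_r'].value(),
          S_COL: node['enable_s'].value()}

for i in range(n_tracks):
    for col, on in states.items():
        tracks.setValue(1.0 if on else 0.0, COLS * i + col)
```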
Additionally, I've added a separate Python script that allows me to quickly tick a specific column using a hotkey. To check all the boxes, I can press shift+t, shift+r, and shift+e for translate, rotate, and scale, respectively (shift+s is bound by default to Nuke's settings menu). Here's a GitHub link to the script. This script also needs some code added to the menu.py:
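A hedged example of the menu.py hookup — the module and function names here are placeholders, so match them to whatever the downloaded script actually exposes:

```python
import nuke
import trackerTRS  # placeholder name for the script from GitHub

menu = nuke.menu('Nuke').addMenu('Tracker Tools')
menu.addCommand('Toggle Translate', 'trackerTRS.toggle_translate()', 'shift+t')
menu.addCommand('Toggle Rotate',    'trackerTRS.toggle_rotate()',    'shift+r')
menu.addCommand('Toggle Scale',     'trackerTRS.toggle_scale()',     'shift+e')
```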
That about wraps it up! I'm always looking for more ways to streamline my compositing process, if there's a better way I could be running things, shoot me a chat.
Recently, I've been working on comps that required me to use Nuke's native 3D particle system to create a basic simulation to enhance live-action elements in a few shots. I had various particle nodes piping down into a scanline render at the bottom of the comp before being merged with the background plate. Many compositors know all too well that nodes like the scanline render can become sandbags in your comp; it only takes a few of them to get the machine grinding, especially with high sample counts. The main issue was that when previewing my work, a low sample count was acceptable to get the gist of what's going on in the shot, but when I sent the render to the farm, I wanted a higher sample count for the final output.
I tried a few different methods to solve this, such as creating a NoOp, adding a slider knob, and expression-linking every samples slider in every scanline render node to that NoOp. This quickly got messy, with expression lines everywhere, and I still had to remember to change the value in the NoOp when I was ready to render. Furthermore, this method assumed that all scanline renders would use the same sample count; if I wanted a scanline with a different sample count, I'd have to create a new NoOp, link all the values, and remember to change that node at render time too. The project lead saw me struggling with this pile of jank and offered up a much simpler solution: the $gui expression.
the good stuff.
This simple TCL expression checks whether Nuke's GUI is active and returns a boolean value: True if the GUI is active, False if it's not. When working on the box, the GUI is running and the expression returns True, but when the script goes out to the farm, the GUI is no longer active with a farm license and it returns False. This gives us a way to tell Nuke how to proceed depending on whether a shot is being viewed on the box or rendered on the farm. We can use the expression in a switch node to swap inputs between a scanline render with low samples and a scanline render with high samples. The image below shows the setup I was using for many of the shots on this project. The switch node has the expression '$gui' in the 'which' knob. On my machine, the switch will be set to 1 because the GUI is active and the expression returns True, enabling the low sample, faster-to-preview scanline render. When the shot goes off to the farm, the GUI is no longer active, the expression returns False, and the switch is set to 0, which enables the high sample scanline render.
This solution is faster to preview and easy to read if another artist picks up the shot. The script automatically swaps inputs at render time, so we don't have to be bothered with increasing the sample count before the shot goes to the farm. Furthermore, this allows us to have unique sample counts for each scanline render node if the shot requires it. This TCL snippet is not restricted to the scanline render node; it can be used in any scenario where you want to change values in a node at render time. I find myself using this method often with any node that contains a 'samples' or similar knob: motion blur, transforms, defocus, vector blur, and god rays, in addition to the scanline render, to name a handful.
Let's get a tiny bit more technical here. We can apply further boolean logic with TCL to swap samples in a node without the need for the switch node. In a knob's expression editor, a TCL conditional works just fine for setting knob values based on whether Nuke's GUI is enabled. The syntax would be as such:
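For example, on a scanline render's samples knob (the 2 and 64 here are just example values), the ternary form of the conditional goes straight into the knob's expression editor:

```tcl
$gui ? 2 : 64
```

This evaluates to 2 while the GUI is active on your machine and 64 when the script renders on the farm's command-line licenses.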
Again, with TCL, you can invert the '$gui' expression to get the opposite value using '!$gui'. I find that adding the expression inside the node, as shown above, is a less intuitive solution. If another artist picks up the shot, they may not be able to tell that there is an expression inside the node, which could lead to problems when troubleshooting a shot. Nuke scripts are a lot like handwriting: each person has their own unique style, but overall your handwriting should be readable to everyone else! Using the switch is a more intuitive method, with a clear visual language that displays what you've done in your comp to make it work.
I hope this brief write up is helpful. Happy comping!
One of my favorite aspects of Nuke is how easily customizable the interface and user experience are. Even a minuscule amount of programming knowledge in Python can allow you to tailor Nuke in a fluid way that fits your personal compositing style. The ability to assign custom hotkeys for specific nodes or operations, add custom menus for tools, and alter the default properties of nodes has helped save me time as well as keep my node graph more readable. The key ingredient to flavoring Nuke to your heart's content is editing the menu.py file found in your .nuke folder. Foundry's online learning database for Python is an excellent resource to turn to if you're unfamiliar with menu.py and the .nuke folder and are looking for a way to get the ball rolling.
menu.py and init.py files
Customizing the UI with menu.py
Recently, I registered for Ben McEwan’s Python for Nuke 101 course and quickly learned some simple snippets of python code that have helped me speed up my compositing workflow. I’d highly recommend this course for anyone who’s new to python and integrating it within nuke. The second lesson from the course was focused on adding shortcuts, menus, and default parameters to nodes for nuke and I’d like to share with you some of my new additions to my menu.py after completing this module. These are primarily shortcuts that I’ve thought up recently and they’ve already made an impact in my compositing workflow. You’ll notice that many of the hotkey changes save only a few keystrokes, but in my experience being able to get the information I need in a shorter time, greatly improves my mental flow when compositing. Saving 2 or 3 clicks here and there may not sound like much but the few seconds it takes to open a menu and find the option or value that you’re looking for can add up through the workweek. The more time you save now, the more time you’ll have for getting started on that next shot, learning something new, or drinking beers at the end of the day.
I first learned of the IBK stacking technique from the Compositing Mentor blog at compositingmentor.com. I'm not going to go into meticulous detail on how this works under the hood, so check out the Compositing Mentor's blog for more information. In short, stacking IBKColour nodes will likely give you a better result than a single IBKColour node. The method works by first creating a "main IBKColour" in which the user sets the screen size to a low value (1) and fiddles with the darks, lights, and erode until the subject is covered with black. Then, the user duplicates the main IBKColour, sets the erode to 0 and patch black to 1. The user continues to duplicate the main IBKColour, each time setting an erode value of 0 and a patch black value that doubles with each copy (1, 2, 4, 8, etc.).
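The doubling pattern is easy to express in code; here's a tiny pure-Python helper (my own illustration, not part of any Nuke tool) that generates the erode / patch black settings for each duplicated IBKColour:

```python
def ibk_stack_values(copies):
    """Settings for each duplicate of the main IBKColour.

    Every copy gets erode = 0, and the patch black value doubles
    with each copy: 1, 2, 4, 8, ...
    """
    return [(0, 2 ** i) for i in range(copies)]

print(ibk_stack_values(4))  # [(0, 1), (0, 2), (0, 4), (0, 8)]
```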
I was working on a commercial spot that involved heavy amounts of keying with multiple keying setups, and found that the process of copy/pasting the IBKColour nodes and adjusting their values was prone to human error and extremely time-inefficient. So I challenged myself to create a Python script that would automate the stacking process and speed up my work when keying with IBK.
The video below shows how increasing the stack size creates new IBKColour nodes with exponentially increasing patch black values while also keeping the values of other knobs the same. In this example, there are 5 nodes in the stack (IBK01 through 05) with IBK00 being the “main IBKColour”.
The next video shows how the IBKStacker_IS tool creates the same outcome as the large stack of IBKColour nodes. Overall, using this tool saves only a few seconds at a time, but those seconds add up when you have lots of keying to do or when experimenting with different methods or values.
Add these lines of code to your menu.py file, located in your .nuke folder.
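The exact lines ship with the download; here's a hedged example of what the hookup looks like, assuming IBKStacker_IS is a gizmo sitting in your .nuke folder (the menu path is a placeholder — put it wherever fits your toolbar):

```python
import nuke

toolbar = nuke.menu('Nodes')
toolbar.addCommand('Keyer/IBKStacker IS', "nuke.createNode('IBKStacker_IS')")
```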