The Train is coming (VFX Breakdown)
- The Train is coming... A project by Kevin George, Nestor Prado and Ross Macaluso
This project was started in a graduate preparation class at SCAD in the Fall of 2011. The main idea was to create a viral video that would integrate a photorealistic CG element into a live-action plate.
As it was meant to be a viral video, some key requirements had to be met: the shot had to be plausible, it had to be handheld and it had to be one continuous take.
When the idea was finalized, pre-production started. The type of train was established and the modeling stage began. Once we had the first boxcar modeled, we did a first survey of the location. We gathered some high dynamic range images of the set, although the lighting conditions were not the ones we planned to shoot in, as it was a cloudy, overcast day. We also shot a first rehearsal of the whole scene with the actual props that we intended to use on the day of the shoot.
This first shoot gave us an idea of what we were up against and a better sense of what the cinematography of the shot was going to be. The final shot was indeed very similar to that first rough sequence, as we felt its intensity and feel were already right. We also used this shoot to build a storyboard of the whole sequence.
- Modeling & Texturing Part I: Tank Cars, Box Car and Woodchip Car
Modeling and texturing ran in parallel until the boxcar, woodchip car and tank cars were all finished. The whole set of models was properly UVed to support more detailed and photorealistic textures.
- The Shoot
On the day of the shoot we used a Panasonic HPX- for the main photography, plus a Nikon D90 and a Canon Rebel T2i for reference pictures and HDR pictures of the mirror ball; the two DSLRs also served as reference cameras during the shoot.
We set tracking markers on the trailers that served as a backdrop in our sequence. We also used tennis balls on the tracks and in the grass for the anticipated whip-pans. For the last part of the shot we didn’t use tracking markers because we decided that we had enough elements that had clear and distinct features to track. The camera move was modified slightly to include more dollying to give us enough parallax to get a good solve on that part. Our data wrangler took detailed measurements of the set in order to have even more information for the tracking stage.
Right after finishing the shoot we took two mirror ball HDRs of two different parts of the set: a clear, sunny area and a shaded area where a big tree occluded the sun. This was done to capture as much lighting information about the scene as possible in case we wanted to recreate the set later on. As additional lighting reference, we filmed an 18% grey card down the tracks with the same video camera used on the shoot, giving us clear lighting and color-balancing reference throughout the entire set.
- Tracking stage
After the shoot we decided on the take that worked best and tracking began with that take. We started in PFTrack, as our tracker was most familiar with that tool. The first, out-of-the-box approach of using auto-tracking to solve any portion of the shot proved unsuccessful due to the amount of motion blur introduced by shooting the whole sequence handheld, without any stabilization equipment.
It was clear then that a hand-tracking approach would be needed to get a workable solve for the entire shot. This method, although extremely tedious, gave us some promising results on the first portion of the sequence. Our first intention was to track the different parts of the sequence as three separate shots and then combine the solves into one master solve. But to make the clean-plate process of erasing the tracking markers and the C-stand easier, we finally agreed to get a track for the entire continuous sequence. That meant a solve for 2200 frames of a continuous handheld shot. Using the hand-tracking approach we got all the parts of the shot tracked, but when trying to solve in PFTrack we ran into problems that seemed to confirm that a single solve for the whole 2200 frames was nearly impossible.
At that point in the tracking process we decided to try a different tracking package, and Syntheyes was our final choice. To transfer all the 2D hand-tracking data created in PFTrack, we used a MEL script (Survey Solver 2D Import & Export Converter V1.1) that translates 2D tracks from one tracking application to another.
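The conversion itself is conceptually simple: each 2D track just has to be re-expressed in the target package's coordinate convention. The sketch below is a hypothetical Python equivalent, not the actual MEL script; both file formats shown are assumptions for illustration.

```python
# Hypothetical 2D-track transfer sketch (formats are assumptions):
# input, PFTrack-style: "name frame x y" per line, pixel coordinates;
# output, Syntheyes-style: coordinates normalized to -1..1 around
# the image center.

def pftrack_to_syntheyes(lines, width, height):
    """Convert 'name frame x y' pixel tracks to center-origin
    normalized coordinates in the -1..1 range."""
    out = []
    for line in lines:
        name, frame, x, y = line.split()
        u = (float(x) / width) * 2.0 - 1.0
        v = (float(y) / height) * 2.0 - 1.0
        out.append(f"{name} {frame} {u:.6f} {v:.6f}")
    return out
```

A track sitting at the exact center of a 1280x720 frame, for example, maps to (0, 0) in the normalized convention.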
The first solve obtained from Syntheyes was promising: it seemed to hold up for most of the shot. We continued to refine the 3D camera solution in Syntheyes by adding more hand-tracked 2D features, giving the program more information about the parallax in every part of the shot. Once we had a solve that was solid enough, we exported it to Maya and created a series of playblasts to confirm how the track held up through the sequence.
- Clean plate stage
Once we confirmed we had a workable solve, we imported the camera into Nuke and used this 3D data to create a clean version of the plate. We used Nuke's 3D cards and camera-projection system to build clean plates for the whole sequence. This proved to be a very successful, fast and fairly simple way to remove most of the tracking elements from the raw plate.
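The camera-projection trick rests on standard pinhole projection: pixels from a clean reference frame are projected onto 3D cards placed in the solved scene, then re-rendered through the moving camera, so the patch sticks to the set as the camera moves. A minimal sketch of the projection math (ignoring lens distortion, and assuming square pixels) might look like:

```python
def project(p_cam, focal, h_aperture, width, height):
    """Project a camera-space point (x, y, z) to pixel coordinates.

    focal and h_aperture (horizontal film-back width) share the
    same units, e.g. millimeters; the camera looks down -z.
    Lens distortion is ignored in this sketch.
    """
    x, y, z = p_cam
    u = focal * x / -z                        # film-plane position
    v = focal * y / -z
    v_aperture = h_aperture * height / width  # square-pixel assumption
    px = (u / h_aperture + 0.5) * width
    py = (v / v_aperture + 0.5) * height
    return px, py
```

A point on the camera axis lands at the center of the frame; as the solved camera moves, the same world point projects to different pixels, which is exactly the mapping a projected card exploits.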
- HDR creation and lighting approach
The first thing we did to recreate the lighting was to build an HDR image from the mirror ball photos we had gathered on the day of the shoot. We used the different shots of the mirror ball, taken at 120° intervals, to erase the photographer from it. We also removed the main light source, the sun, from the HDR image, and finally graded the HDR to match the grade of the plate. By doing so we had effectively created a fairly accurate indirect lighting source that matched the plate we had shot.
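Unwrapping a mirror ball into an environment map depends on knowing, for each pixel on the ball, which world direction it reflects; shooting the ball from three positions 120° apart puts the photographer's reflection in a different region of each unwrap, so the three can be merged into a clean map. A minimal sketch of the per-pixel reflection direction, under an orthographic (distant-camera) assumption:

```python
import math

def mirror_ball_dir(u, v):
    """Reflection direction for a mirror-ball pixel at (u, v),
    both in [-1, 1] across the ball image. Assumes the camera is
    far away, viewing the ball along +z (orthographic approximation)."""
    nz = math.sqrt(max(0.0, 1.0 - u * u - v * v))  # ball normal z
    # Incoming view ray d = (0, 0, 1); reflect about the normal n:
    # r = d - 2 (d . n) n, with d . n = nz
    return (-2.0 * u * nz, -2.0 * v * nz, 1.0 - 2.0 * nz * nz)
```

The center of the ball reflects straight back at the camera, while the rim reflects directions behind the ball, which is why a single mirror-ball photo captures (almost) the full sphere.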
Here is where the footage of the 18% gray card really paid off. We used extractions of that footage to check whether the indirect lighting of our HDR correctly matched the plate. To do that we created a simple CG plane and sphere and assigned a lambert shader with 18% gray as the output color. Then we rendered them in the shaded area of the plate and matched the result against the real gray card reference.
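The comparison can be reduced to a simple ratio: if the rendered 18% gray patch comes out darker or brighter than the filmed card, the HDR's intensity can be scaled by the difference. The helper below is a hypothetical illustration of that ratio, not the actual workflow, which judged the match against the filmed reference:

```python
def hdr_gain(plate_gray, render_gray):
    """Per-channel multiplier to apply to the HDR environment so a
    rendered 18% gray patch matches the filmed gray card. Both
    inputs are average linear RGB values sampled from the patches."""
    return tuple(p / r for p, r in zip(plate_gray, render_gray))
```

For example, a render coming out at half the plate's brightness calls for a 2x gain on the environment.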
The next step was to create a CG light to act as the sun. We determined the correct position and orientation of the sun, then used the gray card footage from a sunny part of the plate to recreate the color, direction and intensity of the CG light. Once we were happy with the results, we had a lighting environment that fairly accurately recreated the real conditions the plate was shot in. This became the main light rig we used to render our final images.
- Modeling and Texturing Part II: The Engine
The freight train engine was one of the last things to be finished. As it was the central piece for most of the sequence, we spent a lot of time packing in as much detail as we could before rendering, and look development on this element continued until there was no more time before the first deadline. After that first deadline, the engine was further refined with more geometric detail as well as texture and shader detail. The other cars were also modified to ensure a unified scale and level of detail across all of the train elements.
- Look development
Look development continued to the very end of the pipeline so we could keep refining the look and feel of the CG elements in context with the shot and the different camera angles they were seen from.
Using the lighting rig created in the match lighting stage, we were able to have a sandbox to further develop the shader and textures on the engine and boxcars until the very end of the production.
Significant challenges were encountered in this stage pertaining to the use of Prman with Maya. A great deal of time was invested in learning the quirks of Prman’s operation to get results comparable to something we could accomplish with a traditional raytracing renderer like mental ray.
- Shader creation
For this project we used Pixar's Prman to render all the elements in the shot, and we explored Slim to create the shaders for the models. It was a self-teaching experience for the whole team, trying to recreate all the shaders in Slim and make them look as photoreal as possible.
There were quite a few bumps along the way, with Slim misbehaving and losing all connections between nodes. To deal with this issue, we opted to develop a universal layered shader that we could compile out of Slim and use within Hypershade as a traditional shader. The shader was based on Prman's All-Purpose Shader in Slim and consisted of 5 layers (metal, rust, paint, grime, dust) and various map inputs internally perturbed by procedural effects. All the relevant parameters were exposed on the compiled shader, and that single shader was used to shade all of the cars.
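Structurally, such a layered shader is a stack of "over" operations: each layer's color is blended over the accumulated result by its mask. The sketch below illustrates the idea in plain Python; the layer order and mask semantics are assumptions for illustration, not the actual Slim network:

```python
def layer_stack(base, layers):
    """Blend (color, mask) layers over a base color, bottom-up,
    in the spirit of a metal -> rust -> paint -> grime -> dust
    stack. Each mask is a scalar coverage value in 0..1."""
    out = list(base)
    for color, mask in layers:
        out = [o * (1.0 - mask) + c * mask for o, c in zip(out, color)]
    return tuple(out)
```

Driving each mask with painted maps perturbed by procedural noise, as described above, is what breaks up the transitions between layers.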
- Compositing
The first step in the compositing process was building a clean plate. With such a solid track, and tracking markers placed on geometrically simple objects, it was very simple to remove the markers using Nuke's 3D system with correctly placed cards and camera projections.
Next, the train was roughed in so that only the minimum required amount of rotoscoping had to be performed (just enough to cover the actors' interaction with the train, shadows, or any clean-plate patches that would otherwise cover them). The roto was quite challenging due to all the motion blur, but the handheld camerawork was also fairly forgiving, so the roto could stay fairly loose in places.
With the rendering pipeline and required passes worked out well in advance, the actual composite of the train was fairly straightforward. We did however end up using a fresnel pass to darken down the diffuse contribution, since our pass structure out of Prman did not appear to be energy conserving and was causing distracting overbright effects at glancing angles on the lighter colored cars. Motion vectors were generated for the wheels and other elements separately and combined with mattes to avoid problems with occlusions and the different directions of motion, and saved a great deal of time compared with rendering the motion blur in-camera.
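The fresnel fix can be sketched as follows, assuming a Schlick-style fresnel term (the actual pass out of Prman may have used a different falloff): the diffuse contribution is scaled down by the reflectance so that diffuse plus specular stays within the incoming energy at glancing angles.

```python
def schlick_fresnel(cos_theta, f0=0.04):
    """Schlick approximation: reflectance rises toward 1 as the
    viewing angle becomes grazing (cos_theta -> 0)."""
    return f0 + (1.0 - f0) * (1.0 - cos_theta) ** 5

def darken_diffuse(diffuse_rgb, cos_theta, f0=0.04):
    """Scale the diffuse pass by (1 - fresnel) so diffuse plus
    specular doesn't exceed the incoming light at grazing angles,
    where the overbright artifacts showed up on the lighter cars."""
    k = 1.0 - schlick_fresnel(cos_theta, f0)
    return tuple(c * k for c in diffuse_rgb)
```

Facing surfaces keep essentially all of their diffuse contribution, while grazing surfaces lose most of it to reflection, which is what tames the glancing-angle hot spots described above.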
Some rough rotoscoping and matte passes were used to make the wheels appear to ride on top of the train track. Train transforms were brought into Nuke with the camera for placement of the train engineers and other elements like headlight flares and engine smoke/heat haze. The final step of the initial composite was to match the plate grain, sharpening, and bloom effects in each channel to really sit the train into the backplate.
For the YouTube viral-video effect, Nuke expressions were written to take the per-frame delta magnitude of the camera's rotation and drive a skew effect, cheaply simulating rolling shutter. Nuke's Curve tool was also used to find the average intensity of a cropped region over the whole shot, which later drove an exposure tool to simulate a cheap cell phone's auto-exposure. Initial tests were very distracting, so the effect was dialed back for the final export.
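Both tricks translate into small expressions. As a hedged Python sketch (the function names, smoothing constants, and target values below are invented for illustration, not taken from the actual Nuke script):

```python
def skew_amount(rot_prev, rot_curr, strength=0.02):
    """Skew driven by the magnitude of the per-frame camera
    rotation delta (degrees), faking rolling-shutter wobble:
    the faster the camera whips, the stronger the skew."""
    delta = [c - p for p, c in zip(rot_prev, rot_curr)]
    return strength * sum(d * d for d in delta) ** 0.5

def auto_exposure_gain(avg_intensity, prev_gain=1.0,
                       target=0.4, smooth=0.8):
    """Lagged exposure gain: ease toward target/average brightness
    instead of snapping, like a cheap phone's auto-exposure."""
    goal = target / max(avg_intensity, 1e-6)
    return prev_gain * smooth + goal * (1.0 - smooth)
```

The lag term is what sells the effect: a real phone camera hunts for exposure over several frames rather than correcting instantly.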
- Please be kind enough to hit the appreciate button down below if you've enjoyed this breakdown. Thanks!