Thursday, January 22, 2009

Monsters vs. Aliens

MvA directors Rob Letterman and Conrad Vernon introduce us to the film's title monsters.

The Curious Case of Benjamin Button

Digital Domain" shows us what it took and all of the technology used to transform Brad Pit into Benjamin Button.

Nominees for the 81st Academy Awards

The list of nominees for the 81st Academy Awards




Best animated short film

* “La Maison en Petits Cubes” A Robot Communications Production, Kunio Kato
* “Lavatory - Lovestory” A Melnitsa Animation Studio and CTB Film Company Production, Konstantin Bronzit
* “Oktapodi” (Talantis Films) A Gobelins, L’école de l’image Production, Emud Mokhberi and Thierry Marchand
* “Presto” (Walt Disney) A Pixar Animation Studios Production, Doug Sweetland
* “This Way Up”, A Nexus Production, Alan Smith and Adam Foulkes


Achievement in visual effects

* “The Curious Case of Benjamin Button” (Paramount and Warner Bros.), Eric Barba, Steve Preeg, Burt Dalton and Craig Barron
* “The Dark Knight” (Warner Bros.), Nick Davis, Chris Corbould, Tim Webber and Paul Franklin
* “Iron Man” (Paramount and Marvel Entertainment), John Nelson, Ben Snow, Dan Sudick and Shane Mahan


Best animated feature film of the year

* “Bolt” (Walt Disney), Chris Williams and Byron Howard
* “Kung Fu Panda” (DreamWorks Animation, Distributed by Paramount), John Stevenson and Mark Osborne
* “WALL-E” (Walt Disney), Andrew Stanton

Featurette: DreamWorks Animation discusses 3-D



Saturday, January 3, 2009

The Making of Bacardi "Sundace"

Digital Domain got the party started by transforming sparkling liquid into swirling dancers for this spot. Motion-captured dancers provided the moves that artists used to bring the CG liquid figures to life with Digital Domain’s Academy Award-winning fluid simulation technology.


The Making of: SPEED RACER

Born to race cars, Speed Racer (Emile Hirsch) is aggressive, instinctive and, most of all, fearless. His only real competition is the memory of the brother he idolized - the legendary Rex Racer, whose death in a race has left behind a legacy that Speed is driven to fulfill. Speed is loyal to the family racing business, led by his father, Pops Racer (John Goodman), the designer of Speed’s thundering Mach 5. When Speed turns down a lucrative and tempting offer from Royalton Industries, he not only infuriates the company’s maniacal owner (Roger Allam) but uncovers a terrible secret - some of the biggest races are being fixed by a handful of ruthless moguls who manipulate the top drivers to boost profits. If Speed won’t drive for Royalton, Royalton will see to it that the Mach 5 never crosses another finish line. The only way for Speed to save his family’s business and the sport he loves is to beat Royalton at his own game. With the support of his family and his loyal girlfriend, Trixie (Christina Ricci), Speed teams with his one-time rival - the mysterious Racer X (Matthew Fox) - to win the race that had taken his brother’s life: the death-defying, cross-country rally known as The Crucible.


The Making of The Mummy: Tomb of the Dragon Emperor

The blockbuster global "Mummy" franchise takes a spellbinding turn as the action shifts to Asia for the next chapter in the adventure series, "The Mummy: Tomb of the Dragon Emperor." Brendan Fraser returns as explorer Rick O'Connell to combat the resurrected Han Emperor (Jet Li) in an epic that races from the catacombs of ancient China high into the frigid Himalayas. Rick is joined in this all-new adventure by son Alex (newcomer Luke Ford), wife Evelyn (Maria Bello) and her brother, Jonathan (John Hannah). And this time, the O'Connells must stop a mummy awoken from a 2,000-year-old curse who threatens to plunge the world into his merciless, unending service.

Behind the Scenes: Vettayadu Vilayadu Tamil movie


Friday, January 2, 2009

Hold the Line

Going back to nature - Tata water






Mount Everest Mineral Water, a Tata enterprise, recently launched the Himalayan brand in its new, international and aspirational avatar. Rediffusion DYR, the creative agency for Himalayan, worked on a 60-second TVC with the VFX and post production done by Prime Focus India. The commercial is on air in the complete 60-second version as well as shorter cuts. The project was significant as it was the first ad created using Prime Focus‘ Indo-US pipeline, with the tree sequence being executed at Frantic Films. A total of 19 professionals worked on the advertisement for one and a half months. The advertisement uses VFX extensively to convey the creative theme of ‘going back to nature‘. The TVC was directed by Ravi Udyawar and produced by Kalpana Udyawar of Ravi Udyawar Films.

Raj Tambaku, VFX Head, Prime Focus, told AnimationXpress.com, "This is the first time that the complete VFX of a TVC - which uses special effects to such a large extent - has been done in India. Besides, the quality and feel of the VFX is of international standard. This was possible due to the trust placed in us by the agency and the director, Ravi Udyawar."

"The Himalayan is obtained from the origin of Ganges. The idea in the creative brief was to take everything back to the original form
. In the TVC, the scarf goes to sheep, the shell to the snail and the shirt to the cotton-ball, so on and so forth. We had to make great efforts to get the aesthetics right. It helped that our US arm Frantic Films worked on the complex tree sequence. We had to revert them at certain times but overall the pipeline functioned smoothly," added Raj.

"The Prime Focus team working on the TVC prepared 10 drafts of each VFX sequence before we achieved the desired realistic look. Coming from a team that comprised mostly of freshers, this is an excellent job."


The Ad division of Prime Focus in Lower Parel has recently worked on the VFX and post production for TVCs on Toyota, Skoda and Tropicana Twister.


A blue screen chroma shot. The trees and landscape were added in post as VFX.


Chroma shot of the backpack breaking up into straw. Since the backpack sequence could not be worked out practically, the entire bag was CG-generated and then animated.


Blue screen chroma shot. The CG-generated butterflies are carefully modelled so that the motion of the wings is smooth and as real as possible. Besides, the texture has been carefully designed so that every individual butterfly looks exotic yet not too unreal.


Trees and landscape were added in post, and the entire CG tree sequence was done by Frantic Films, US.



The already ripped t-shirt was animated to show the ripping process.

Matte Painting Series Part 1 : Painting it BIG

GMC “Big Dig” is big - the carmaker’s vision, as well as the task laid at the feet of the filmmakers.
The spot was helmed by Space Program’s Justin Klarenbeck, and completed by rhinofx in NYC. We spoke to Arman Matin, Creative Director/CG Director of rhinofx, and not only do we review before and after shots, but we reprint the original pitch document for the visual effects - so you can judge for yourself how the team performed.

As the ad’s VO attests, “you may not have the contract to build the next great suspension bridge, or the work force to dig the next Panama Canal; and you may not be working on the next big sports complex – but your truck could.”

View the final spot

Download a Quicktime before and after comparison of shots

To communicate the scope of these enormous undertakings, the team researched locations exhaustively and shot them in HD. These plates were combined with extensive CGI to create the proper look and atmosphere. The spot remained the same from agency board to completion, with the exception of the stadium, which was supposed to be the West Side Stadium in NYC but was changed to a stadium of the future. All in all, the effects team was required to build three huge locations: a suspension bridge, a canal, and a sports stadium.

The matte paintings were done in phases. First, the team gathered relevant references of bridge construction, the Panama Canal, and stadium construction. Based upon these photographs and some creative conversations with the director, the team chose the shot angles and decided on an approach for each shot. After creating a rough 3d layout, or previs, the team agreed on the camera movements. During this time, they analyzed the amount of parallax each shot would have and whether a 2d or 3d matte painting might be required. Composition was created based upon location and reference photographs, and the director decided on color palettes for each vignette before the shoot so that work on concept frames could begin.
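
To make that parallax analysis concrete, here is a rough sketch (our illustration, not rhinofx's actual tool) of the kind of test involved: project a feature through a simple pinhole camera before and after the camera move, and compare the apparent shift of near and far scenery. The focal length, filmback and pixel threshold below are invented for the example.

    # Rough sketch: decide 2d vs 3d matte painting from expected parallax.
    # Pinhole-camera assumption; all constants are illustrative only.

    def screen_x(point_x, point_z, cam_x, focal_mm=35.0, filmback_mm=24.0, width_px=2048):
        """Project a point (top view) through a pinhole camera at cam_x."""
        return focal_mm * (point_x - cam_x) / (point_z * filmback_mm) * width_px

    def parallax_px(near_z, far_z, cam_travel, feature_x=0.0):
        """Apparent shift (pixels) between near and far features as the camera moves."""
        near_shift = screen_x(feature_x, near_z, cam_travel) - screen_x(feature_x, near_z, 0.0)
        far_shift = screen_x(feature_x, far_z, cam_travel) - screen_x(feature_x, far_z, 0.0)
        return abs(near_shift - far_shift)

    shift = parallax_px(near_z=50.0, far_z=2000.0, cam_travel=3.0)  # metres of dolly
    print("2d painting ok" if shift < 2.0 else "needs 3d matte painting", round(shift, 1))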

These rough previs frames were used, along with the photographs, to create concept frames. "We tried to incorporate some of the architectural style of the original Panama Canal into modern day construction sites to create a balance of nostalgia and modern day engineering marvels," explains Arman Matin. These concept frames, based upon the 3d previs, served as crucial elements in the whole production process, including client approval, on-set supervision, and final compositing.

Based upon these concept frames, 3d models were created and textured, and the light direction defined. The resolution of the matte paintings was kept above 2K; the team actually created all the comps at 2K but down-converted to NTSC for delivery. From a 3d tracking, match modeling or compositing standpoint, working at 2K provided rhinofx with the necessary detail and clarity.

A major part of the production was modeling and laying out the sites. The team relied heavily on references to pick every single model that they built: "specific cranes that would be used for specific types of construction, the type of scaffolding that goes around a bridge versus a stadium, just to mention a few," he adds.


Although the end results were 2d, all the matte paintings were based upon textured and lit 3d models. For shots that had moving cameras, the team rendered textured occlusion, highlight, and other passes that were composited using Digital Fusion. "For compositing we used After Effects, Digital Fusion and Inferno, based upon the preferences of the compositors," explains Matin.


A two-day shoot provided the elements required. The crew shot around LA under the Vincent Thomas Bridge and in a mining quarry. The parking lot of the Vincent Thomas Bridge served as the location of the stadium element shoot. Due to the limited time, the team ended up shooting passes for the stadium shots in the middle of the day with hard shadows. "All these shots required extensive lighting and shadow passes to change the noon light to an afternoon light," says Matin. Spark, dust and background action plates were also shot, and the team ended up adding these elements in CG as well.


Once shot, plates were transferred at HD resolution. Every shot that required a background change was tracked in 3d using Boujou. Shots that utilized a 2d background were also tracked in 3d to confirm the amount of parallax that might happen. This ensured that the 2d plates could be positioned and tracked correctly.

Once tracked, CGI models were aligned to the provided matte painting, allowing the team to generate a multitude of passes which included dust, glows, fog, close-up rain, sheets of rain, rain on lens, rain splatter on ground, workers, sparks, and more. Rhinofx used Maya and Softimage for all the 3d work and Digital Fusion, After Effects and Inferno for compositing. The matte painters also provided the creatives with rough comps of the live action and matte painting, "which were very helpful for the direction of compositing". Rendering was all done at HD resolution and 16-bit depth.


"Due to the extremely tight deadline and the scope of the spot, the project took careful planning and orchestration of all facets of production. Complete cooperation between the Director and VFX Supervisor/Director was necessary to create a project like this" explains Matin.




What follows is the original visual effects treatment presented to the director and agency. Read this treatment and judge for yourself how the team at rhinofx performed.



Reprint of Original Treatment

GMC Sierra notes:

I am very excited about this project and its potential. Due to our tight schedule, I have put together some thoughts that will be helpful in planning the production from day one till delivery. Here are some thoughts in no particular order:

Location: This is a key part of our production process. The locations will determine what we will be creating at the end. As we are representing 3 locations by filming in 2, preplanning the shots will be very helpful for a convincing result. My recommendation would be to try to take digital stills of the location from the angles that you would want to shoot from. These photographs can be used to create previsualizations of complex shots and also provide material for the conceptualization phase.

Concept art: Based upon your direction, we would use the location photographs to create a rough idea of what the final structures and shots might look like. This phase is important, as this is where we lay the foundation for all digital matte and 3d work. As our post-shoot schedule is limited, we have to make as much headway as we can. Based upon this concept art, we would create rough models that can be used for previsualization.

Previs: This is where we work with a 3d camera in 3d space to create a blueprint for the shots that we will be shooting. Using low-res models, concept art, storyboards and location photographs, we can work out complex camera moves, composition, lenses etc. Not all shots need to be prevised; however, my recommendation would be to plan and frame at least all the VFX-heavy shots.

Complexity: Any moving camera shot is complex in terms of 3d matchmoving and tracking. We have technology to address moving camera shots, however a large amount of moving shots can add complexity to the whole post process and become very time consuming. Simple pans, dolly and crane shots are easily trackable, however a combination of zoom and any one of the above is very tricky to track in 3d.

However, we can always use 2d tracking, which is fairly easy to do and applies to all shots where our structures are far away and don’t have much perspective change as the camera moves. Any time you expect to see a lot of perspective change and parallax, we have to track the shot in 3d. If you have a foreground element that is practical in camera and we are dropping in a structure behind it, and there is no perspective change visible in the background, that calls for 2d tracking, which is simpler.

Another limitation of 3d tracking is that not all 35mm lenses can be correctly tracked. We have to shoot lens and distortion charts for all wide-angle lenses, as each lens handles distortion differently.
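
To see why those charts matter, consider a common polynomial radial distortion model of the Brown-Conrady type - a standard formulation, assumed here for illustration rather than quoted from rhinofx's toolset, with invented coefficients:

    # Sketch of a Brown-Conrady style radial distortion model.
    # k1, k2 are per-lens coefficients recovered from a distortion chart;
    # the values below are made up for illustration.

    def distort(x, y, k1=-0.18, k2=0.03):
        """Map an ideal (undistorted) image point to its distorted position.
        Coordinates are normalized so the image edge is near radius 1."""
        r2 = x * x + y * y
        scale = 1.0 + k1 * r2 + k2 * r2 * r2
        return x * scale, y * scale

    # A straight line of points bows visibly toward the edge of frame:
    for x in (0.0, 0.25, 0.5, 0.75, 1.0):
        print((x, 0.8), "->", distort(x, 0.8))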

Normally we are very relaxed about these issues and try to allow maximum flexibility, however the time factor is a big one in this case, and we would like to help you create gorgeous imagery without being bogged down by technical challenges.

Look: This is also important, as we would like to start working on digital extensions and matte paintings before the shoot. I wanted to know how you feel about placing each location at a different time of day - dawn, daylight, sunset, dusk or night. This will give us direction in terms of the lighting and mood of the day.
I recommend atmospheric elements, like rain or dust.
A nighttime construction site might look great. Specific times of day and atmospherics like these can provide directionality and increase believability in all the effects work, versus a midday flatly lit shot.
The concept art can be based upon this initial direction of look and lighting.

Shoot prep: We will work with you to determine, based upon the previs, the elements we might need for the effects work.
We will cover our own reference still photographs, which will be used to build the environments; however, certain elements need to be shot with a 35mm motion camera for consistency of color, sharpness, lens and movement. The previs will help us tremendously in figuring out all the necessary elements and what we need to create in 3d.

Post shoot: If all goes well, we should have already started on the master matte paintings and built most of the 3d models that we might need. This will all be based on the previs and the concept art. After the transfer we start the process of tracking all the shots, whether in 2d or 3d based upon the type of shot.

Using matte paintings, digital stills, transferred footage and 3d elements, we start creating the shots. Not all shots will require all these elements; some can be reused, and some shots might not need any digital work besides stabilization and cleanup. We would like to create most elements before the transfer and start compositing the shots right after the shoot. This way we will have the most amount of time to work on compositing all the elements, which is where time needs to be spent to create believability.

Edit: In most cases VFX shots should be approved in the edit as soon as possible. In this case it is imperative that we solve most camera framings and movements of VFX-heavy shots in the previs stage and try to capture the same shot as closely as possible on film, which in turn is used in the edit. This way complex shots have the maximum time to be worked on, and less complex or effects-free shots can be played with in the edit.

We hope to work with you closely on creating this stunning spot. Please feel free to call or email me any time to discuss your vision and the ideas that you would want realized.
We plan to help you in any way, and at every possible stage of the process, to make this vision a reality.

How to Paint a Digital Car

Posted on May 30, 2008 by Mike Seymour
Cars are the backbone of the advertising industry and car finishes are equally important in films. Just this month, new releases Iron Man and Speed Racer needed to produce photorealistic car finishes. In this special technical report, we explore how the best facilities in the world go about creating car finishes that are beyond 'photoreal' and are now 'damn near perfect'.

For some aspects of computer graphics, the motion or animation is the key to realism. But for one class of CGI objects it often comes down to just how real it looks standing still. Long, slow-moving shots over digital cars give the lighting team nowhere to hide. A car is an object we all know; we have all seen beautifully lit cars a hundred if not a thousand times in real ads. It is here that subtle highlights and broad reflections rule.

But the issues and lessons are not restricted to cars per se. Cars that transform into robots, superhero suits with car-like finishes, and the look of car paintwork and metal paneling are the focus of our article this week. To find out the tricks of the trade we spoke to some real experts at Digital Domain, ILM, Pixar and The Orphanage. With complex car finishes required for blockbusters such as Speed Racer, Iron Man, Transformers, Cars and others, these teams have researched and developed some of the most accurate car shaders in both Renderman and Mental Ray.

Speed Racer



We start by talking to the heroes behind the digital Mach 6 and its fellow cars: Kim Libreri and Richard Morton at Digital Domain.

Speed Racer
While Speed Racer used digital HDR 'environment bubbles' for the now famous Speed Racer environments, the actual car sequences were pretty much standard 3D digital cars in digital environments. The 'bubble' technique was used in some of the races, such as in the work done by BUF and Sony Pictures Imageworks for the mountain rally race (see our separate Speed Racer story). The Thunderhead race, for example, is fantastic - but still uses 'traditional' 3D animation techniques with cars that needed to look completely real.

Even with the complex saturated lighting environments, making the hero Mach 5 and Mach 6 cars look real was of paramount importance to the extremely experienced and award winning team at Digital Domain.

Two prop cars (the Mach 5 and the Shooting Star) were made for the film before the actual designs of the cars were finalized, but these were only used in a few static scenes. As such, the filmmakers were not primarily focused on matching to a real car and intercutting the paintwork and details of a digital car with a live action car. Instead, they were focused on producing the best digital car they could. This is in stark contrast to the paintwork in, say, Iron Man, where 'car-like' paintwork was digitally created both to intercut with live action and, in the same shot, to extend the real with digital (see below).

The racing cars in Speed Racer, such as the Mach 6, had no real-world equivalents. To make the computer generated cars look real they needed to be built with correct suspension and dynamics. But to achieve plot points, the animators also had to be able to override these 'correct' dynamics and let the cars do the impossible - namely car-fu, a ballet of leaping and rolling car choreography.

Cars on the Thunderhead track
Yet as crazy as the animation was to become, the team at Digital Domain started with real world references. At the beginning of the process they got four different Corvettes. "We shot them in different lighting environments - it was most definitely the most comprehensive car study I have been involved in," explains Richard Morton, the head of DD's CGI car efforts and a veteran of many digital cars from DD's commercials division. "It was interesting to look at the data we got and compare that to the last ten years of our digital cars, 'cause up until now we've been doing it pretty much by eye."

For Digital Domain, the secret behind digital cars and finished painted metal work is ray tracing (Whitted 1980). "It has to reflect the environment - that is the number one thing," explains Morton. On Speed Racer the team utilized Mental Ray, "which is a very good ray tracer." All the shaders were completely physically accurate and were calibrated to work in the same lighting conditions: direct lighting, indirect lighting, night lighting. "It was interesting because some of the artists had not worked in a ray tracer before, with physically accurate lighting." The team had to walk through the process of lighting a car much in the same way a DOP does, relying less on tricks and traditional 3D approaches. In the same way, the surrounding environments (such as the road surfaces) needed to be properly dealt with, as they would be reflecting as well as reflected in the cars.
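
The core operation Morton is describing is the classic mirror-reflection step of Whitted-style ray tracing: bounce the view ray off the paintwork and look up whatever it hits, which is why the environment shows up in the car "for free". A minimal sketch (ours, not DD's Mental Ray shader code):

    # The heart of 'it has to reflect the environment': mirror a view ray
    # about the surface normal, then shade with whatever that ray sees.

    def reflect(d, n):
        """Reflect direction d about unit normal n: r = d - 2(d.n)n."""
        dot = sum(di * ni for di, ni in zip(d, n))
        return tuple(di - 2.0 * dot * ni for di, ni in zip(d, n))

    view = (0.0, -0.7071, 0.7071)   # ray from camera hitting the hood
    normal = (0.0, 1.0, 0.0)        # hood normal pointing straight up
    print(reflect(view, normal))    # -> (0.0, 0.7071, 0.7071), up toward the sky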


This led to a two-tiered approach. The first stage was to calibrate all the shaders and light the set. Then there was a second stage consisting of beauty lighting. "This is the stage that makes it look not only real but beautiful," says Morton, and here the team used techniques learned from their experience with car commercials. The team used long digital lighting boxes, "which is what makes a car look so great, producing these long gradated reflections."

In the races there were two very different environments for DD. The Thunderhead was "pretty much one type of look. It was old, it was rusty, it was very much just warm tones," while in the later Grand Prix the team had a lot more variety of lighting setups, says Morton. "The approach I took was to work out the primary lighting and then look for some secondary complementary lighting that would bring out a cyan or green, but always with those lovely lighting box reflections."

DD's render pipeline is set up as full HDR lighting - not a partial solution simulating HDRs with a few hundred point lights, but a full image-based lighting model. Their system is true image-based lighting and global illumination, built with help from Autodesk's Mental Ray team; the standard Mental Ray process needed augmentation for the DD system.
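
Conceptually, full image-based lighting treats every direction of the HDR panorama as a light source rather than approximating it with a few hundred points. A toy Monte Carlo diffuse gather over an invented two-tone environment shows the idea (a sketch, not DD's pipeline):

    import math, random

    # Toy image-based lighting: every direction of the environment acts as
    # a light. 'hdr_lookup' stands in for a real HDR panorama.

    def hdr_lookup(d):
        sky, ground = (1.5, 1.6, 2.0), (0.2, 0.15, 0.1)   # invented radiance
        return sky if d[2] > 0.0 else ground

    def sample_hemisphere(n):
        while True:  # rejection-sample a unit direction above the normal
            d = tuple(random.uniform(-1, 1) for _ in range(3))
            l2 = sum(c * c for c in d)
            if 0.0 < l2 <= 1.0:
                d = tuple(c / math.sqrt(l2) for c in d)
                if sum(a * b for a, b in zip(d, n)) > 0.0:
                    return d

    def diffuse_ibl(normal, samples=2048):
        total = [0.0, 0.0, 0.0]
        for _ in range(samples):
            d = sample_hemisphere(normal)
            cos_term = sum(a * b for a, b in zip(d, normal))
            for i in range(3):
                total[i] += hdr_lookup(d)[i] * cos_term
        # uniform-hemisphere pdf (1/2pi) and albedo/pi fold into a factor of 2
        return [2.0 * t / samples for t in total]

    print(diffuse_ibl((0.0, 0.0, 1.0)))   # diffuse colour of an upward-facing panel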

Accurate and artistic lighting was needed
Too often, lighters spend their time trying to make things look photo-real, says Kim Libreri. But with their fully physically based HDR lighting model implementation, DD's lighters get "photo-real out of the box," he says. "It is then about getting lighters that know how to make things look beautiful. They should not have to fight for photo-real; they should not have to fight with ambient occlusion and spot lights - these other old world tricks - spending 3/4 of their time just trying to make it look real. That should be guaranteed...we want to make our lighters more like directors of photography and gaffers than traditional computer scientists," relates Libreri.

Actually getting the sheet metal work correct is much, much harder than leather or rubber tires. "The only way to make car paintwork look real," explains Libreri, is "to model the clear coat." As the name implies, clear coat is a final layer of clear paint that protects the actual paint below on any real car. And it is the ray tracing interaction and specular highlight properties of the clear coat that DD feels are vital for good CGI car paintwork. The team worked with what they called a multilayered approach, dealing with a quite thick layer of clear paint and bending rays as the light refracted through it. After the clear coat, the team would then deal with the metallic flakes which are often mixed into car paints in the real world. Libreri claims they threw out previous models of how to solve these problems and really focused on a new, very accurate solution.
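
A hedged sketch of that layered idea (not DD's proprietary shader): the clear coat contributes its own sharp Fresnel reflection, and the energy that gets through it shades the base paint and flake glints underneath. All constants are illustrative.

    import math, random

    # Two-layer car paint sketch: clear coat reflection on top, base paint
    # plus metallic-flake glints underneath.

    def fresnel_schlick(cos_theta, f0=0.04):
        """Schlick approximation for the clear coat's reflectance."""
        return f0 + (1.0 - f0) * (1.0 - cos_theta) ** 5

    def car_paint(cos_view, base_color=(0.6, 0.05, 0.05), flake_density=0.15):
        coat = fresnel_schlick(cos_view)              # sharp top reflection
        through = 1.0 - coat                          # energy entering the coat
        flake = flake_density * random.random()       # sparkly flake glint
        return tuple(through * (c + flake) for c in base_color), coat

    for angle in (0.0, 60.0, 85.0):                   # degrees from the normal
        cv = math.cos(math.radians(angle))
        base, coat = car_paint(cv)
        print(f"{angle:4.0f} deg: coat reflection {coat:.3f}, base {base}")

Note how the coat term dominates at grazing angles, which is exactly the behaviour that makes paintwork read as wet and glassy.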

In general terms, global illumination works by estimating how much light arrives from the lighting environment and the rest of the scene at each surface point. In large part, this is a matter of visibility: light from visible parts of the environment (lights, sky, sun etc.) and light that is blocked by other parts of the scene, with the addition of an estimate of light reflected from other surfaces. This becomes the colour of that pixel in the shot. It is the last part - the estimate of reflected light - that leads to modeling radiosity, as every object is also considered a bounce light source. The process is very computationally expensive. For further reading, see the book High Dynamic Range Imaging by Reinhard, Ward, Pattanaik and Debevec.


Each shot could have up to 40 layers
For most shots there was a complex set of passes rendered out for compositing. From 3D, a file would be transmitted with image pixel data (RGB) containing multiple renders for multiple layers (background, mid-ground, cars etc.), along with the alphas, a z-depth pass and the full 3d motion vectors for each pass. Motion blur needed to be dealt with creatively, allowing the filmmakers to blur different parts of the shot at different rates, so the final CGI files that were passed to the compositors had multiple data sets.

Each image had about 40 buffers in Nuke, and there was a Nuke Gizmo built called "the car relight tool" which would allow the Nuke compositors to tweak the cars. The renderings from 3D were completely real, but then in Nuke the "Speed Racer" look was applied, allowing the Nuke artist to add selective motion blur and, say, re-colour the reflections in the side of the cars, explains Morton.

The team decided to split out the primary key lights in a scene from the final gather - which is effectively the global illumination component - so a Nuke artist could balance key and fill light. There would be extra passes for, say, the light tunnel, so extra interesting lights could be added to the cars - technically unmotivated, but giving "extra bling and liquid reflection lines" on the surface of the car, says Libreri.

The techniques that Digital Domain has developed are expected to be generalized and then re-introduced back into their commercials pipeline, feeding back into their general car commercial work. These include car motion racing simulators and lighting techniques.

ILM and Iron Man



While Iron Man is not a digital car film, it does deal with many of the same issues in its use of car-like highly polished metallic paintwork and trim. We spoke to Ben Snow, ILM's VFX supervisor on Iron Man.

In ILM's early tests for the film the team created a "hot rod" look, not unlike "a gleaming red Ferrari". This was around the time of Transformers at ILM, and although visual effects supervisor Ben Snow was not on that film himself, he found the same issues had been looked at for its cars and trucks.

Iron Man uses car-style finishes
For Snow, clear coat was part of the original plan for Iron Man, as Transformers had "perfected a really nice clear coat", he explained. It was used in the test, but for the majority of the actual metal suit in the film, Snow did not use clear coat. This was influenced by the need to match to the practical on-set suit made by Stan Winston Productions. The real prop suit was not made with clear coat, but "with an auto paint finish. It was an extremely expensive red auto paint...but it did not have that clear coat effect," Snow points out.

Clear coat does provide a little bit of reaction between the top and bottom coats, but according to Snow it is the variation caused by the two different reflections and specular highlights that really is key, rather than refraction. While the concept of a clear coat was not used, the idea of having "two possibly, nearly always three" different specular highlight contributions was key to ILM achieving close-ups of the suit, explains Snow.

"We call them three lobes of specular; three different looks to the highlight. One might be soft and not quite as intense, the other might be sharp and gives you a hot glint when it hits the Sun. You layer those things together and so when you have a clear coat on top of the normal surface you might have two lots of that happening, so it can pretty complicated". Snow points out in our podcast interview that not all of them play all the time. "Attempting to model a specular highlight, you can really get a sense of (the complexity) from a BRDF".


ILM used Renderman
A BRDF is a Bidirectional Reflectance Distribution Function. While a perfect mirror reflects everything along the mirror direction and a completely diffuse surface scatters light evenly in all directions, most real surfaces lie somewhere in between, and a BRDF captures the real properties such as anisotropy and retroreflection. In other words, partial reflection that can be directionally dependent AND can reflect back at odd angles, not just like a mirror along the angle of incidence - just as we see with metallic car paint, due to the metallic flakes in the coat under the clear coat. The BRDF shows how light bounces at all angles, and in that way it captures other very important aspects of car paint, such as the mirror-like quality non-mirror finishes show at oblique angles: things we otherwise need to fake with tricks such as a Fresnel CG pass.

ILM sent paint chips from Stan Winston Studios to a lab for BRDF measurement. The real brushed metal proved too complex for the lab to provide a BRDF, but the red paint chips did provide a meaningful scan. These were incorporated into a complex ILM in-house visualisation tool showing the specular highlights graphically.

Unlike Digital Domain, who used Mental Ray, ILM primarily used Renderman. Snow points out that the brickmapping Pixar introduced to Renderman helped a lot with render times. Brickmapping is able to use a voxel-type approach to store irradiance from photon scattering passes, to increase render performance dramatically. A brickmap is a "3d, sparse, mip-mapped, octree," which basically means it's a fast, memory-efficient 3d texture (as opposed to point clouds / kd-trees, which are fast, but are memory hogs and hard to filter). The advantage of brickmaps over baked 2D textures is that they are independent of UVs and able to be blurred spatially (3D blur). Using spatial blurring instead of ray tracing can get you killer speed wins (think rough reflections or sss). (Source: CGtalk 06-01-2006)
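
The quoted description can be sketched with a toy structure that keeps only the "sparse" and "mip-mapped" ideas - real brickmaps store 8^3-voxel bricks in an octree, which this simplification does not:

    from collections import defaultdict

    # Toy 'brickmap-like' cache: sparse voxel grids at several resolutions.
    # Lookups try the finest level first and fall back to coarser ones,
    # which is also what makes cheap spatial (3D) blurring possible.

    class ToyBrickmap:
        def __init__(self, num_levels=4):
            self.levels = [defaultdict(lambda: [0.0, 0]) for _ in range(num_levels)]

        def _key(self, p, level):
            cells = 2 ** (level + 1)                  # finer grid at each level
            return tuple(int(c * cells) for c in p)   # p lives in [0,1)^3

        def insert(self, p, irradiance):
            for level, grid in enumerate(self.levels):   # store at every level
                cell = grid[self._key(p, level)]
                cell[0] += irradiance
                cell[1] += 1

        def lookup(self, p):
            for level in reversed(range(len(self.levels))):  # finest first
                cell = self.levels[level].get(self._key(p, level))
                if cell:
                    return cell[0] / cell[1]          # mean irradiance in voxel
            return 0.0

    bm = ToyBrickmap()
    bm.insert((0.40, 0.50, 0.50), 2.0)
    bm.insert((0.41, 0.50, 0.50), 1.0)
    print(bm.lookup((0.405, 0.5, 0.5)))   # nearby query averages stored samples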


Pixar and Renderman


Pixar had to solve similar issues with Cars. Although the cars in the film were not photo-real, they still required ray tracing and extensions to Renderman. Since the release of Cars, Pixar's Renderman has moved further toward technologies for implementing color bleeding effects in film production, from new methods like point-based color bleeding to optimizing traditional photon map techniques for production.
We asked Per Christensen at Pixar about the differences between ray tracing and photon tracing.

Christensen: In ray tracing, the rays are traced starting at the *camera*. In a pure ray-tracing renderer, one or more rays are traced from the camera through each pixel in the image that we're rendering, and further rays are traced from the points that those rays hit (to compute reflections, shadows, etc.). The end result is a color value for each pixel in the image.

In photon mapping, the photons are traced from the *light sources*. In other words, the photons are shot in the opposite direction to rays from the camera. Each time a photon hits a diffuse surface it is stored in the photon map. The result is a photon map -- a collection of little points (stored photons) in 3D space.

Other than that, there are many similarities between ray tracing and photon tracing. We actually use the same algorithms to compute the intersection of a photon with a surface as we use to compute the intersection of a ray with a surface. The main difference is the direction of the rays/photons and whether the goal is to compute pixels or 3D points (photons).
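
Christensen's point - same intersection machinery, opposite directions, different outputs - can be shown with a toy scene consisting of a single floor plane (our sketch, not PRMan code):

    # Same intersection machinery, opposite directions: rays start at the
    # camera and yield pixel colors; photons start at the lights and yield
    # stored 3d points. One diffuse floor plane (z = 0) keeps this runnable.

    def hit_floor(origin, direction):
        """Shared ray/photon intersection test with the plane z = 0."""
        ox, oy, oz = origin
        dx, dy, dz = direction
        if abs(dz) < 1e-9 or oz / -dz < 0.0:
            return None
        t = oz / -dz
        return (ox + t * dx, oy + t * dy, 0.0)

    def ray_trace_pixel(camera, pixel_dir):
        point = hit_floor(camera, pixel_dir)
        return (0.5, 0.5, 0.5) if point else (0.0, 0.0, 0.2)   # floor grey / sky

    def trace_photon(light, photon_dir, power, photon_map):
        point = hit_floor(light, photon_dir)
        if point:
            photon_map.append((point, power))   # photon stored as a 3d point

    photons = []
    trace_photon((0, 0, 5), (0.3, 0.0, -1.0), 1.0, photons)
    print(ray_trace_pixel((0, 0, 2), (0.0, 0.0, -1.0)), photons)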

As DD mentioned, ray tracing only gets one so far in producing photo-real cars mathematically. To get the level of quality they wanted, DD moved to a full global illumination model that incorporated ray tracing, using Mental Ray. This is similar to what Pixar discovered. To get the realism and scene complexity needed for feature film "ray tracing," Pixar combined photon tracing and global illumination with the addition of the brickmaps mentioned above. Christensen describes ray tracing as global visibility, and lighting calculations that make use of this facility are called global illumination calculations.

There are a few global illumination methods:

Finite element methods
  • radiosity

Monte Carlo simulation
  • distribution ray tracing
  • path tracing
  • bi-directional path tracing
  • photon mapping

Pixar's chosen method was to build on the Monte Carlo approach of photon mapping which they then extended with the brickmap representation of photon information. The Pixar method is an extension of the photon map method used in ray tracing.

Assume a complex scene with complex shapes, such as a car with lots of other objects around it. The first step is photon tracing. The photons are stored in a collection of photon maps that together cover the entire scene. Pixar calls this collection of photon maps a photon atlas; importantly, at this stage it is camera independent. In the second step, the irradiance is estimated at each photon position and, for each photon map, a brickmap representation of the irradiance is constructed. Pixar calls this collection of irradiance brickmaps an irradiance atlas. The last step is rendering using final gathering, with the irradiance atlas providing a rough estimate of the global illumination.

A normal photon mapping method is very general and flexible, but it is constrained by memory limits on the number of bounces that can be done - as Snow mentions when discussing ILM's ray tracing in our fxpodcast.

Historically, the photon map method for computing global illumination was introduced by Henrik Wann Jensen. It is a three-pass method (a toy sketch follows the list):

  • First photons are emitted from the light sources, traced through the scene, and stored in a photon map at every diffuse surface they hit.
  • Next, the unorganized collection of stored photons is sorted into a "kd-tree". A kd-tree is a k-dimensional space-partitioning search structure developed by Jon Louis Bentley. Photon mapping gives each photon an "energy" attribute; each time the photon collides with an object, this attribute is also stored in the photon map, and the energy is subsequently lowered. Once the energy of the photon falls below a certain pre-determined threshold, the photon stops reflecting.
  • Finally, the scene is rendered using final gathering: a single level of distribution ray tracing. The irradiance at final gather ray hit points is estimated from the density and power of the nearest photons. Irradiance interpolation is also used to reduce the number of final gathers; this can speed up rendering by a factor of 5 to 7.
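
Here is the toy sketch promised above, covering only the radiance-estimate idea of the final pass: take the k nearest stored photons and divide their total power by the disc they cover. Real implementations use the kd-tree; this brute-force search is just for clarity.

    import math

    # Toy irradiance estimate at a final-gather hit point.

    def irradiance_estimate(photon_map, point, k=4):
        def dist2(photon):
            return sum((a - b) ** 2 for a, b in zip(photon[0], point))
        nearest = sorted(photon_map, key=dist2)[:k]    # kd-tree query in practice
        radius2 = dist2(nearest[-1])                   # disc containing the k photons
        power = sum(p[1] for p in nearest)
        return power / (math.pi * radius2)             # watts per square metre

    photon_map = [((0.1, 0.0, 0.0), 0.02), ((0.0, 0.2, 0.0), 0.02),
                  ((0.3, 0.1, 0.0), 0.02), ((0.0, 0.0, 0.4), 0.02),
                  ((2.0, 2.0, 0.0), 0.02)]             # (position, power) pairs
    print(irradiance_estimate(photon_map, (0.0, 0.0, 0.0)))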

Cars require reflections to look real, but at this stage complex film shots would still be prohibitively expensive to render. So to this photon mapping approach Pixar added the brickmap. The brickmap is a tiled, 3D MIP map representation of surface and volume data, built as an adaptive octree with a brick in each node. A brick is a 3D generalization of a tile, with each brick having 8^3 voxels holding sparse irradiance values. This representation is designed to enable efficient caching. The data stored in the brickmaps are radiosity values (radiosity is watts per square meter). The brick approach can be used to provide an irradiance map of a car and, unlike plain ray tracing, which hits memory limits, one can now render ray tracing and global illumination in production scenes with the same complexity as, say, scanline rendering.

(per Per Christensen, Pixar, June 2005 DTU talk, and Dana Batali, Pixar)

The Orphanage and Iron Man



As stated above, Mental Ray can also address the issue of ray tracing complex scenes to give a car 'liquid reflections' and accurate spec highlights. Another company that worked on Iron Man was The Orphanage in California. "We developed an Orphanage shader called Omega for Iron Man," says CG Supervisor Jonathan Harman. "Omega is a Mental Ray shader that spits out a text file readable by a series of scripts we have written for Nuke. The Omega text file contains information about what passes have been output and allows a compositor to either use only the beauty render or rebuild the beauty out of all the passes, with all the proper math operations, to allow complete control in comp."
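
The Omega text file and the Nuke scripts are proprietary, but the general idea - emit a manifest of rendered passes together with the math needed to recombine them - can be sketched as follows (pass names and operations invented to mirror the article):

    import json

    # Sketch of a pass manifest and a comp-side beauty rebuild.
    # Not the actual Omega format; ops/names are illustrative.

    manifest = {"passes": [
        {"name": "paint",                "op": "plus"},
        {"name": "blurry_reflection",    "op": "plus"},
        {"name": "clearcoat_reflection", "op": "plus"},
        {"name": "self_shadow",          "op": "multiply"},
    ]}

    def rebuild_beauty(images, manifest):
        """Recombine per-pass images (single floats here) into a beauty."""
        beauty = 0.0
        for p in manifest["passes"]:
            value = images[p["name"]]
            beauty = beauty + value if p["op"] == "plus" else beauty * value
        return beauty

    images = {"paint": 0.5, "blurry_reflection": 0.2,
              "clearcoat_reflection": 0.15, "self_shadow": 0.9}
    print(json.dumps(manifest))              # what the shader would emit
    print(rebuild_beauty(images, manifest))  # what the comp scripts rebuild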

"The shader we used had a two layer reflection component --a blurry reflection and a sharper 'clear coat pass' -- as well as a paint only pass separated form the diffuse lighting pass. We rendered environment reflections separate from a 'Sun reflection pass' and broke the sun out of HDR images to allow for manipulation of the suns reflection in Ironman. We ran a separate smudge matte pass that allowed us to add additional blur to reflections where necessary in comp.," relates Harman.

Early on in the Iron Man project, Orphanage VFX supervisor Jonathan Rothbart wanted a way to "control the sun" in their reflection pass, so they basically removed the sun from their HDRIs and replaced it with a spec pass generated from a point light. The spec pass was then added on top of the "sunless" reflection pass and would be broken up by the smudge matte pass in comp. If they needed additional highlight reflections they would put HDRI bounce cards in the scene to add a rim or additional highlight kicks. The Orphanage shot HDRIs of various lights on set to generate maps for the bounce cards.
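
A crude illustration of the sun-removal trick, with an invented clamp threshold (real HDRIs are full images, and the actual Orphanage process is not documented here): clamp out the hottest texels so the environment reflection is "sunless", then add the sun back as a separately controllable highlight.

    # Split an HDR environment into 'sunless' reflection + controllable sun.

    hdr = [0.8, 1.2, 0.9, 5000.0, 1.1]       # one row of texels; 5000 = the sun

    SUN_THRESHOLD = 50.0                      # illustrative clamp level
    sunless = [min(v, SUN_THRESHOLD) for v in hdr]
    sun_energy = sum(v - min(v, SUN_THRESHOLD) for v in hdr)

    def sun_spec(intensity, gain=1.0):
        """Stand-in for the point-light spec pass added back over 'sunless'."""
        return intensity * gain

    print(sunless, sun_spec(sun_energy, gain=0.5))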

"Since we were using Mental Ray's ray-tracing capabilities we were able to use HDRIs as our reflection environment as well as a for our image based lighting," says Harman. "We converted all texture maps to Mental Ray .map file format which is a float mip mapped tile-able texture file format, in order to optimize memory usage and disk read times. We did not utilize an fancy BRDF measurement tools. We did however we use a conserve energy relationship between reflection and diffuse light which aided in quickly getting a photo-real metal surface this utilized a fresnel/facing ratio color ramp to change the hue and intensity of reflections as the normals face away from camera. We used Mental Ray's final gather technique for our global illumination method to calculate bounce diffuse light within our scenes."

During look development, the team at The Orphanage shot turntables of the Winston Iron Man suit as well as suit reference images for all shots. They then set out to develop a shader that was optimized for anisotropic, or directionally dependent, multi-layer ray-traced reflections. According to Harman, there were a few requirements for this new "Omega Shader".

First, render times needed to remain under 1 hour per frame for an average full screen 2K 3D motion-blurred Iron Man. "Due to the massive number of reflection rays, our shader utilized an in-house adaptive ray tracing technique developed by Will Anielewicz to intelligently reduce the render time," Harman adds. This technique casts a minimum number of rays and then tests each new ray for how much it would change the final color. Once this threshold is reached, no more rays need to be cast (Adaptive Ray Tracing Omega).
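
In the spirit of that adaptive scheme (our sketch, not Anielewicz's code): keep averaging new reflection rays until an additional ray changes the running result by less than a threshold.

    import random

    # Adaptive reflection sampling sketch: cast a minimum number of rays,
    # then stop once an extra ray moves the running mean less than 'eps'.

    def sample_reflection_ray():
        """Stand-in for tracing one glossy reflection ray; returns radiance."""
        return 0.5 + random.uniform(-0.2, 0.2)

    def adaptive_reflection(min_rays=8, max_rays=256, eps=1e-3):
        total, n = 0.0, 0
        for _ in range(min_rays):
            total += sample_reflection_ray()
            n += 1
        while n < max_rays:
            old_mean = total / n
            total += sample_reflection_ray()
            n += 1
            if abs(total / n - old_mean) < eps:   # converged: stop casting
                break
        return total / n, n

    color, rays_used = adaptive_reflection()
    print(f"mean radiance {color:.3f} using {rays_used} rays")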

Next, the shader needed to be "deep raster" so the scene could be computed once and then all the necessary passes could be split out (MR Omega). The passes used were:

  • Beauty
  • Paint
  • Lighting (diffuse)
  • Self Shadows
  • Refraction
  • Motion Vector (optional)
  • Ambient Occlusion
  • GI
  • Smudge Maps
  • Surface Normals
  • Fresnel
  • Dirt
  • RGB XYZ map
  • Tinted Blurred Reflection
  • Clear Sharp Reflection
  • Sun "ppec pass"
  • Four"special passes" for mattes, each with RGBA = 16 mattes

The team needed the ability to easily read in all passes and reconstruct the beauty pass within Nuke for complete control (Nuke O-pass). This multi-pass approach is a popular use of Nuke, and one of the reasons the industry has started moving to Nuke.

Clearly the various solutions range from highly accurate full global illumination to more creative, passes-based solutions. Either way, the industry is now expected to be able to achieve a level of realism that means a real reduction in the amount of time spent trying to light and film real cars in real studios, and a move to CG cars in both CG and real environments...even if those car finishes are not used on cars!

Hollywood Visual Effects for Live Action Integration


In this new CGWorkshop, Hollywood technical director and visual effects specialist Allan McKay will take you through visual effects from a production standpoint, following the process from the initial pre-production stages through to the integration of your special effect into a scene. You will master all areas of FX and put them into practice in a simulated production pipeline. Rather than following the practice of just creating FX examples, we are going to be working on real shots: from pre-production, to setting the shot up, creating all of the FX, dynamics sims and fluid simulations, then looking at rendering techniques, after which we go through and actually composite our CGI over live action footage or still image plates. This workshop is designed to take things to the next level and prepare artists for real world pipelines and big studios. You’ll be using FumeFX and Afterburn to create amazing effects and then use Fusion to composite your effects into your chosen scene.

Have a look at Allan's Showreel here.

Some of Allan's more recent effects sequences have been for ‘Superman Returns’, ‘Blade Trinity’, ‘Exorcist: The Beginning’, ‘Paycheck’, ‘Scooby Doo 2’ and ‘Inspector Gadget’, just to name a few.

By workshop end, students should be very comfortable with effects, having developed a very good insight into the creation of some of today's bigger-budget FX films.

“The complete and thorough introduction to VFX. It was great to have an industry expert sharing both great tutorials and practical industry tips and tricks.”
Chad Naeger, Student


The Curious Case of Aging Visual Effects

Audiences are reeling in amazement at the artistry & technical polish of David Fincher's The Curious Case of Benjamin Button. But even those with an appreciation of the power of vfx will be stunned to learn that for the first 52 minutes of this epic motion picture, the head of Brad Pitt's character is a CGI creation. Bill Dawes speaks to DD VFX Supervisor Eric Barba about the process Barba dubbed "emotion capture".

digital head replacement



fxg: Eric, congratulations on The Curious Case of Benjamin Button. You must be very pleased how it all turned out. It's a stunning film.

Eric Barba: I actually am. It's taken a long time to get to that point where I can relax and be happy with it. Now that people are really responding warmly to it, it's great.




fxg: Eric, before we talk about how you created the stunning shots for this movie, can I ask you to take us back to your initial discussion with David Fincher about Benjamin Button. How did you think you would be able to achieve the visual effects in this film?


Director David Fincher
Eric Barba: I've been working with David since 2002, working on commercials and music videos and bidding on various feature film projects. There were a dozen or more projects before Zodiac, but since the very early days we have been talking about Benjamin Button. I'd heard about the project because Digital Domain had looked at it in a previous incarnation. David told me more and eventually I got a script, then David and I would talk about it more and more while we were working on other projects - for instance, "How are you going to do the tracking?" and "What do you think of this technology?"

At that point David was pushing the digital workflow and we were using the Viper camera. We got really excited about Benjamin Button in about 2004. We were paid to do a test which we had a really short amount of time to do. We built a maquette and it turned out to be the scene where you first see Benjamin Button as a kid sitting at the dining table banging his spoon. For the test we did a head replacement with another actor in about 4-5 weeks, and people thought we were crazy to attempt this. But because he did not have to talk or emote too much we could build it specifically for the shot.

The test went pretty well and it was received very warmly by the studio. I remember showing it to producer Frank Marshall and he thought it was the most exciting thing he'd seen since the first dinosaur test walk for Jurassic Park. So then we knew we were onto something. But the movie wasn't greenlit at that point, so I went back to doing music videos with David and ultimately Zodiac. When Zodiac finished we found that Benjamin wasn't dead, and these commercials came up that were the perfect test bed for Benjamin Button.

(These were three ads for iconic US popcorn brand Orville Redenbacher in 2007. The Colonel Sanders of the popcorn world, Orville was famous for his direct-to-camera promotional spots that aired on US television in the 70s and 80s. After his death, Fincher used CGI techniques to bring him back for a spot in 2007.)

Brad Pitt on set
"We thought we could use those spots as paid R&D for Benjamin Button, testing out rendering and tracking technologies. Principal photography for Benjamin Button had already begun in 2006. We learned what ideas wouldn't work and what would need further refinement. We started working with the Mova Contour guys (developers of the Contour Reality Capture System), and we were really excited to be working with them.

They were using techniques similar to those we had used to create the test scene for Benjamin Button in 2004, but these guys had a really good handle on how it all worked. Our use of Mova Contour on the Orville Redenbacher commercials didn't get the warmest reception. Part of the reason was we were doing a guy who was already dead, so there was no way to capture a performance from him. We cast a voice that was similar and then different actors to do the face and the body, which ended up as three very disparate performances.

Another problem was that as you art direct a commercial you start to get away from the original performance and you tend to move into that "Uncanny Valley" territory. Doing that commercial was a great learning experience and it also humbled my team that had previously worked with David Fincher to put out some pretty amazing work. Up until that point we thought we had a handle on it and then we did Orville and realized we still had a lot to do to make it all work.




fxg: What was the difference between the approach you developed for the test shot in 2004 and how you achieved the character in Benjamin Button?


The first 52 minutes feature no live Brad Pitt performances
Eric Barba: The part that carried over from 2004 was taking a life cast of the actor and sculpting a maquette from it. With Orville the maquette was not very good, so we knew we would need some fantastic maquettes showing Brad aged 80 that the studio could sign off on.

In the 2004 test, because we did not have the character doing a major performance, the rigging system was fairly primitive. We knew we'd have to build a much more elaborate system. When we started Orville our character supervisor rigged and animated Orville's performance. We were going to use the Mova Contour data to help us but we had some technical problems and we still needed to do a lot of homework on it. For Benjamin Button we knew what we needed to get out of that system and they knew how to get it all working. Six months later when we got Brad into the rig we were able to get everything we needed, which was basically a series of facial expressions.

The Mova system is designed to capture 24fps data, which can then be used to identically recreate that performance on a CGI version of the character. But because we were going to be using Brad Pitt's performance to drive older versions of his physiology - basically retargeting the performance to an older version of himself - that wasn't really going to work for us. And because the Mova capture system requires that the actor is seated and can't move very much, yet we needed to put him in an environment where he is moving around and interacting with other actors, there were going to be some limitations on what we could use that data for. So we used the Mova rig to capture Brad's facial shapes, and we then built systems to use Brad's performance to drive Benjamin.




Brad Pitt's performances were achieved with what DD called emotion capture, done with the Mova Contour capture system
fxg: Can you describe the rig that was used to capture Brad Pitt's performance?

Eric Barba: The Mova Contour capture rig is like a bunch of speed rails arrayed 150 degrees around the actor, who sits in a chair. It holds 23 cameras, all aimed at the face, which is covered with phosphorescent green make-up. This allows for frame-by-frame tracking of patterns, and each point can be tracked in 3D space.

This is the first system to truly capture someone's face moving in realtime and provide a moving mesh that can be subdivided, rebuilt and then retargeted to another mesh to drive a CGI performance. Since Orville Redenbacher, Mova have been constantly working to refine the Contour capture system, and we had been working with them on this. It was a collaborative effort. They were invited in to collaborate on Benjamin Button and, like all of the vendors, were incredibly excited.



Cate Blanchett on set
Daisy also required digital work to help her dance and age - much of which was done by Hydraulx and Lola Visual Effects




fxg: So where does the digital Brad begin and end in this film?

Eric Barba: The baby was a mixture of a live action robotic maquette which was enhanced by Hydraulx, I believe. But for the first 52 minutes of the film it's a full 3D head - there were 352 individual shots. There's no projection, there are no 2D techniques. Once our work stops, about 52 minutes in, Brad takes over in makeup. Ultimately, as he gets younger, he wears less and less makeup until it's just Brad.

As he gets very young, Lola did some touchup work on his physical makeup, then when he gets back to the dance studio Lola takes over doing the younger version of Brad. Ultimately there's a couple of child actors and a baby at the end. The bulk of the work in head replacement happens in the first 52 minutes. We prevized the U-boat battle and I planned it out with David. I had planned on doing it and we built a bunch of the assets, but ultimately we handed that off to Asylum because of time and economics, and I think they did a pretty good job of it.

"The first "digital head" shot is the one we did the test on, where there's a long dolly and pan until the audience sees Benjamin sitting at the table banging his spoon. That's the first body double for Ben in his 80s, as he grows younger we have another body double take over for him in his 70s, when he goes out on the tugboat with Cap'n Mike and goes to the bar. The bulk of our work is the "Ben70" character, and the "Ben60" when he leaves home. One of our last shots is when he is reading the letter from Daisy on the back of the tugboat. The line where he tells the Captain, "Well you do drink a lot", that's where the real Brad takes over.







fxg: Did the number of head replacement shots grow during the filmmaking process, or was it always intended to be that many?

Eric Barba: David always said that he didn't want to shoot this movie around the fact that it was a CG character. He wanted to shoot it like he was shooting an actor. That means we don't shy away from difficult shots such as seeing him naked in the bathtub or getting a haircut or getting drunk and stumbling. It allowed David to tell the story the way he wanted to tell it which I think is the right way to do it.




fxg: Were there many challenges in the job of attaching the digital Brad to the various bodies used during the live shoot?

Eric Barba: The body doubles had different neck lengths and shoulder sizes, and even the arch of the neck was different, so there was a lot of massaging that had to go on to make those heads feel like they belonged. We had to create a tracking system that was far more robust than traditional tracking, because traditional tracking doesn't give you accurate Z depth. It doesn't give you a spine, which we needed to have so the head would follow the body exactly as it moved around. During the shoot we would have four of the principal Viper cameras running in perfect sync. We devised software to take any of the different cameras and, once the plates were locked, triangulate and retrack to get the accuracy we needed. This was a huge undertaking, but we learned with Orville that there wasn't anything out there that could do the tracking the way we needed to have it, so that you buy the illusion every time.
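
The reason multiple synced cameras give accurate Z depth is triangulation: a feature tracked in two or more calibrated views pins down a 3d point. A closest-point-of-two-rays sketch under idealized calibrated-camera assumptions (this is not DD's tracker):

    # Triangulate a 3d point from two camera rays (closest-point method).
    # Real multi-camera tracking solves this in least squares across views.

    def sub(a, b): return tuple(x - y for x, y in zip(a, b))
    def dot(a, b): return sum(x * y for x, y in zip(a, b))

    def triangulate(o1, d1, o2, d2):
        """Midpoint of the shortest segment between rays o + t*d."""
        w = sub(o1, o2)
        a, b, c = dot(d1, d1), dot(d1, d2), dot(d2, d2)
        d, e = dot(d1, w), dot(d2, w)
        denom = a * c - b * b                  # ~0 means parallel rays
        t1 = (b * e - c * d) / denom
        t2 = (a * e - b * d) / denom
        p1 = tuple(o + t1 * di for o, di in zip(o1, d1))
        p2 = tuple(o + t2 * di for o, di in zip(o2, d2))
        return tuple((x + y) / 2.0 for x, y in zip(p1, p2))

    # Two cameras 2 m apart, both seeing a head marker near (0, 1, 3):
    print(triangulate((-1, 1, 0), (0.316, 0.0, 0.949),
                      ( 1, 1, 0), (-0.316, 0.0, 0.949)))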






fxg: In many of the shots the Brad Pitt character is wearing a hat or glasses. Did that present additional challenges?

Eric Barba: My intention when we started shooting was to always shoot Benjamin with the actor wearing a hat so that the body was shadowed correctly. My character supervisor was really worried about that approach, but I thought that it would help with the illusion, so the audience would not know where the plate stopped and Benjamin started. Most of the hats in the movie are the live hats.

There were some issues with lineups where Brad's head proportion didn't exactly match the body double. We didn't want to distort Brad's head to fit the hat because we didn't think that was the right way to go. So sometimes we would just add in a CG hat and hope the audience wouldn't notice, but I thought they needed to be in the shots so that Claudio Miranda (Director of Photography) and David could frame their shots. It did help us sell the illusion ultimately. However glasses were a huge problem.

Even back in 2004 when we were doing the first test scene, David said he wanted these glasses to be as big as coke bottles. He wanted the audience to be able to see the eyes enlarged through refraction so they got the sense that he cannot see. At that stage I was going "uh, oh," because refraction adds problems, the main one being that when we are looking through refracting glasses with thicker lenses, it often distorts our perception of where that person is looking. So maintaining focus and an eyeline was very challenging.



fxg: Was there a lot of work in compositing as well?

Eric Barba: It was massive. One of the things that we learned from the initial test scene and Orville - where we had 15 shots in the three spots we did for Orville - was that even with some really good compositors (and I had my A-team on that project), there's only a handful of people who have the right sensitivity to colour, so that once they get these head renders and put them in a scene, the skin tones don't come apart, or look ashen or too pink or too red. One of my compositing supervisors for the Orville spot had that sensitivity, although the spots didn't end up working that well for other reasons.

So part of our plan going into Benjamin Button was to build a lighting system that would give us as close to perfect as possible out of the box; then we needed to build a compositing template system that allowed us to hand off shots to 20 other compositors and get the same results. That's a really tall order with normal rendered CG, but for Benjamin Button it was off the meter in terms of difficulty. Hats off to my lighting supervisors, who did an amazing job.

We used a lot of common techniques, such as HDR for set reconstruction. Our lighting system was different from the normal workflow. Traditionally your tracking team hands over their data, the lighting team renders off the pieces, and these get handed off to the compositing team, so compositing comes in last. On this show our compositing actually came in first, and they really enjoyed that. The integration team that was on set with us for seven months basically surveyed every piece of equipment and set; we took HDRs for all the different head positions as we were shooting, and all that information was catalogued as we shot it.

We then built a system in 3D that would ingest that data and build the HDRs. We then had modelers rebuild the set from the survey data. We built shader systems that could take HDRs and project them back onto the set geometry. Because we had the tracking of the head, we could relight it within Nuke: we could reproject those HDRs onto the CG head to recreate a corrected HDR for the head position.






Digital Domain's Emotion Capture Approach

- Eric Barba: " Since our goal was not to create Benjamin's performance in animation, but rather to 'xerox' Brad Pitt's performance onto this CG head, we had to develop a brand new process that we call emotion capture."

The overall process included:

1. Working from life-casts of Brad Pitt and body actors to create three photo-real maquettes representing Benjamin in his 80s, 70s and 60s, then shooting them in different lighting conditions using a light stage. (Rick Baker and Kazu Tsuji created the maquettes).

2. Creating 3D computer scans of each of the three maquettes.

3. Shooting scenes on set with body actors in blue hoods.

4. Creating computer-based lighting to match the on-set lighting for every frame where Benjamin appears.

5. Having Brad perform facial expressions while being volumetrically captured (with Mova/Contour), and creating a library of 'micro-expressions.'

6. Shooting Brad in high definition performing the role, from four camera angles, and using image analysis technology data to get animation curves and timings.

7. Matching the library of expressions to Brad's live performance of Benjamin.

8. Re-targeting the performance and expression data to the digital models of Benjamin (created from scanning the maquettes) at the specific age required in the shot. (A toy sketch of this retargeting idea follows the list.)

9. Finessing the performance to match current-Brad expressions to old-Benjamin physiology using hand animation.

10. Creating software systems for hair, eyes, skin, teeth, and all elements that make up Benjamin.

11. Creating software to track the exact movements of the body actor and the camera, to integrate the CG head precisely with the body.

12. Compositing all of Benjamin's elements to integrate animation, lighting, and create the final shot.
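
As an illustration of step 8 only (the production retargeting pipeline was far more involved), expressions can be treated as weights on a library of shapes, solved from the capture data and replayed on an aged model that carries corresponding shapes. Everything below is invented for the example:

    # Toy retargeting: per-frame expression weights solved from capture
    # drive the same-named blendshapes on the aged 'Benjamin' model.

    def apply_blendshapes(neutral, shapes, weights):
        """vertex = neutral + sum_i w_i * delta_i, per coordinate."""
        out = list(neutral)
        for name, delta in shapes.items():
            w = weights.get(name, 0.0)
            out = [v + w * d for v, d in zip(out, delta)]
        return out

    # One vertex of the aged mesh (x, y, z): same expression set, older geometry.
    ben80_neutral = [0.0, -0.2, 0.1]                   # sagged, aged rest pose
    shapes_ben80 = {"smile": [0.3, 0.1, 0.0],          # aged shape deltas
                    "brow_raise": [0.0, 0.25, 0.05]}

    frame_weights = {"smile": 0.8, "brow_raise": 0.2}  # solved from Brad's take
    print(apply_blendshapes(ben80_neutral, shapes_ben80, frame_weights))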