I'll begin with this statement:
Anything can be used to make something.
I realize that I'm weird. When I hear about some new technology, my first thought is how I might use it to do something completely different. So I was instantly excited when I heard that Maya 2012 would have a new viewport shading algorithm that would support anti-aliasing and screen-space ambient occlusion. I tossed my head back and giggled like a schoolgirl. "I'm done rendering forever!" I shouted, despite the odd looks from my wife.
Then, when I finally got to play with the feature, I immediately ran into some of the drawbacks of this early tech. It's certainly not feature complete, and not without flaws, as we will explore below. And this is where most people just turn VP2.0 off and shake their fists at Autodesk. Fools, they are, I say! While this is by no means a replacement for Mental Ray or RenderMan, when you compare quality of output to render time, the techniques we developed on Starfish Ninja should sound appetizing to smaller non-photoreal productions, or simply to animators who don't need to spend days rendering and comping their animation tests. It's my hope that more rendering technologies begin to move in the realtime direction, and that this is only the beginning of a new approach to producing art in our industry. Let's dive in.
I discovered that if you render a bunch of passes as render layers, you can overcome many of the drawbacks (which I will cover later) and get a single pass, such as ambient occlusion, to spit out at 2 to 4 fps. We were also able to use CGFX shaders, similar to game engine shaders, to cheat things like rim lighting and depth passes at incredibly high frame rates, all at full HD.
|-displayed at 50fps on a modest workstation|
I'm looking for a graphic style, not a photocopy of reality. So I want my artists to spend time pushing, pulling, shifting, and otherwise redirecting the imagery instead of paying huge overhead for a render farm. If you control the effect and build it into the style and visual vocabulary of the show, it will work, in most cases.
And now, with technologies like Source Filmmaker out there, and even just the improvements to Maya's VP2.0 in 2013, artists have more potential tools to get the job done cheaply than ever before. Another bit of software I'm very excited to find time to play with is the FurryBall render plugin for Maya. Here you'll see the best of not only VP2.0 but also Unreal Engine and CryEngine taken to the extreme, and ready for TV production. Take a look at the features in the latest release: realtime, in-viewport indirect lighting and cheated radiosity; realtime reflection and refraction; not to mention particle and fluid effects. With all the R&D that has been happening in the game industry, it's a shame that more film studios aren't riding in the wake of game development for GPU-processed subdivision and displacement, at least for non-photorealistic production. (After drafting this up, a friend pointed me to this article as an example of some smart thinking, imho.)
But I digress. What I'd like to do next is break down the workflow we hacked together in Maya's VP2.0 so you can reap similar benefits, if it makes sense for you. If you've seen the teaser for my project, Starfish Ninja, you've seen VP2.0 in action in every single frame.
What you see here is a test composite that was developed to break compositors into our workflow. Everything you see, with the exception of the godrays streaming in from the top, comes from either a Hardware 1.0 or Hardware 2.0 (Viewport 2.0) pass out of Maya, both of which are just glorified playblasts.
|Test Composite - for training purposes|
The final look is a product of the combination of about a dozen images that each contain a specific lighting or compositing pass. As seen below, the lighter can preview the composite of all the passes to make tweaks before handing it off to be put together (in our case in Nuke or After Effects).
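To give a feel for what that combination actually does, here's a toy per-pixel sketch in plain Python. The pass names and blend order are my illustrative assumptions, not our exact Nuke or After Effects graph:

```python
# Toy per-pixel pass combine: multiply AO over the diffuse pass, then
# screen the rim pass on top. Values are normalized 0-1 floats.
# Pass names and blend order are illustrative, not the actual comp script.

def multiply(base, layer):
    return base * layer

def screen(base, layer):
    # Screen blend: brightens without clipping past 1.0, which is why
    # it's a common choice for rim and glow passes.
    return 1.0 - (1.0 - base) * (1.0 - layer)

def comp_pixel(diffuse, ao, rim):
    shaded = multiply(diffuse, ao)   # darken with occlusion
    return screen(shaded, rim)       # add the rim highlight

# Example: mid-grey diffuse, partial occlusion, strong rim
print(round(comp_pixel(0.5, 0.8, 0.6), 3))  # -> 0.76
```

The real comp stacks about a dozen passes this way, but every layer boils down to one of a handful of blend operations like these.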
To get passes such as depth and automated rim lighting, we made use of a few CGFX shaders developed for game purposes. With a budget to write shaders, I'm sure we could have made a more efficient water shader, though reflection maps and some compositing cheats got us through the variety of shots in the teaser.
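The rim trick those game-style shaders rely on is simple math: a Fresnel-style falloff driven by the dot product of the surface normal and the view vector. A hedged pure-Python sketch of that math (not the actual CGFX source we used; the exponent value is an assumption):

```python
# Fresnel-style rim term: brightest where the surface turns away from
# the camera. This mirrors the math a typical CGFX rim shader evaluates
# per pixel; the power value is illustrative.

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def rim_term(normal, view, power=2.0):
    # normal and view are unit vectors; facing is 1 head-on, 0 edge-on.
    facing = max(0.0, dot(normal, view))
    return (1.0 - facing) ** power   # strongest at silhouette edges

print(rim_term((0.0, 0.0, 1.0), (0.0, 0.0, 1.0)))  # facing camera -> 0.0
print(rim_term((1.0, 0.0, 0.0), (0.0, 0.0, 1.0)))  # silhouette edge -> 1.0
```

A depth pass is even simpler: the shader just outputs each pixel's camera-space distance remapped to a 0-1 grayscale value.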
That covers most of what is worth going over here. Keep reading if you want to see how we used it in more detail, and we'll conclude with a summary of pros and cons.
Below are some screen caps from another shot from the teaser. These are the main passes needed to composite the shot, although some others were left out for the purposes of this discussion. I've left the heads up display on to emphasize that this is what you see in viewport. Click to see larger:
For ambient occlusion in Maya 2012, we found there were problems with grainy artifacts. Since this was what was available for the project, we worked around it: applying a median filter and slightly blurring the occlusion in post achieved acceptable results. Below is first the version with artifacts, then without.
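The cleanup step is a standard median filter. Here's a minimal sketch over a grayscale grid, just to show why it works on speckle noise (the real filtering happened in the compositing package, not Python):

```python
from statistics import median

# 3x3 median filter over a grayscale image stored as a list of rows.
# A median kills isolated speckle (like the grainy AO artifacts) while
# preserving edges better than a plain blur; we then blurred slightly.
def median_filter(img):
    h, w = len(img), len(img[0])
    out = [row[:] for row in img]
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            window = [img[y + dy][x + dx]
                      for dy in (-1, 0, 1) for dx in (-1, 0, 1)]
            out[y][x] = median(window)
    return out

# A single bright speckle in an otherwise dark patch disappears:
noisy = [[0, 0, 0],
         [0, 9, 0],
         [0, 0, 0]]
print(median_filter(noisy)[1][1])  # -> 0
```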
Below we compare a quick composite before the AO, and after.
And here is an enlargement of some of the problems you run into when filtering AO that has artifacts. We were able to tweak and massage most of these issues out of the final composites, given a little extra time. Keep in mind, though, that everything you see in the teaser is generated in near realtime, as opposed to rendering for hours, so we can afford to tweak a bit by hand.
|Maya 2012 artifacting - fixed in 2013|
The good news is that Autodesk has fixed the problems with the AO in Maya 2013, so the images below show what AO looks like on our octopus model: artifact free! To set up a shader for an AO pass, we create a new render layer and assign a layer override with a white Lambert that has a white ambient color. Then we turn on AO in the Viewport 2.0 settings (assign a layer override for this as well, so it's not on the other render layers). The radius and filtering settings can be tweaked to get the right amount of falloff.
|Tight AO Pass - to capture fine detail|
|Broad AO Pass - to get larger shading|
|2 AO Passes - multiplied together for final result|
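Combining the two AO passes is just a per-pixel multiply. A toy sketch with assumed values, normalized 0-1:

```python
# Multiply the tight and broad AO passes per pixel (0-1 floats).
# The tight pass holds crevice detail, the broad pass the soft
# large-scale shading; multiplying keeps the darkest of each,
# so neither pass can brighten what the other darkened.
def multiply_passes(tight, broad):
    return [[t * b for t, b in zip(tr, br)]
            for tr, br in zip(tight, broad)]

tight = [[1.0, 0.4],
         [0.9, 1.0]]
broad = [[0.8, 0.8],
         [0.5, 1.0]]
print(multiply_passes(tight, broad))
```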
So let's end with a comparison of the benefits and downsides. First, the good:
- What you see in the viewport is what renders.
- Very short render times: 4 sec/1080p frame for all passes. Each shot can then be rendered on an individual machine in minutes, not hours or days.
- Ambient Occlusion that renders in realtime
- Anti-aliasing - up to 16 samples
- Depth of Field - better done in AE or Nuke
- Motion Blur - it's crappy, but yes
And the bad:

- Limited anti-aliasing. More than 16 samples would be great in many cases.
- Hardware 2.0 does not support particles or animated textures, though Hardware 1.0 does to an extent.
- No raytracing (reflection maps can be used, soft depth map shadows can be used)
- No displacement (though this may be resolved in the future)
- Limited to 1920 x 1080 maximum resolution (perfect for TV, though)
- No light linking or negative lights, and a limit of 16 lights per pass. This is the main reason that so many passes must be output.
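As noted in the list above, depth of field is better handled in comp using the depth pass. The core of that trick is just blending a pre-blurred copy of the frame over the sharp one, weighted by distance from the focal depth. A toy per-pixel sketch (names and values are illustrative, not our actual comp setup):

```python
# Depth-driven defocus mix: lerp each pixel between the sharp frame and
# a pre-blurred copy, weighted by distance from the focal depth. This is
# the essence of comp-side DOF from a depth pass.
def dof_mix(sharp, blurred, depth, focus, falloff):
    # weight is 0 at the focal plane, 1 at >= falloff distance from it
    w = min(1.0, abs(depth - focus) / falloff)
    return sharp * (1.0 - w) + blurred * w

print(dof_mix(1.0, 0.2, depth=0.5, focus=0.5, falloff=0.25))  # in focus -> 1.0
print(dof_mix(1.0, 0.2, depth=1.0, focus=0.5, falloff=0.25))  # far away -> 0.2
```

A real comp does this with a variable-size blur rather than a single pre-blurred copy, but the depth-weighted mix is the same idea.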
I hope you found this interesting. Next week, we'll look at some neat effects from the teaser that were created frame by frame, by hand. Why? Tune in next week to find out!