
Monday, August 27, 2012

"Realtime" Rendering with Viewport 2.0


The day is finally here!  In this week's post I will be explaining some of the secrets to how we knocked render times from minutes per frame down to mere seconds.  This is not a step-by-step, but it will be slightly technical, so with an intermediate level of knowledge, you should be able to follow the concepts.  If you have yet to watch the teaser for Starfish Ninja, check it out, then come right back :)

I'll begin with this statement:

Anything can be used to make something.


By this I mean that sometimes a great solution can come from an unexpected tool - something that is perhaps meant for a completely different task. So when you're looking for a solution to a difficult problem, try thinking of how you might "misuse" every tool you know to get the job done. Try it.

I realize that I'm weird. When I hear about some new technology, my first thought is how I might use it to do something completely different. So I was instantly excited when I heard that Maya 2012 would have a new viewport shading algorithm that would support anti-aliasing and screen-space AO. I tossed my head back and giggled like a schoolgirl. "I'm done rendering forever!" I shouted, despite the odd looks from my wife.

Then when I finally got to play with the feature, I immediately ran into some of the drawbacks of this early tech. It's certainly not feature complete, and not without flaws, as we will explore below. And this is where most people just turn VP2.0 off and shake their fists at Autodesk. Fools, they are, I say! While this is not a replacement for Mental Ray or Renderman by any means, when you compare quality of output to render time, the techniques we developed on Starfish Ninja should sound appetizing to smaller non-photoreal productions, or simply to animators who don't need to spend days rendering and comping their animation tests. It's my hope that more rendering technologies begin to move in the realtime direction, and that this is only the beginning of a new way of producing art in our industry. Let's dive in.

I discovered that if you render a bunch of passes as render layers, you can overcome many of the drawbacks (which I will cover later) and get a single pass such as ambient occlusion to spit out at 2 to 4 fps. We were also able to use CGFX shaders, similar to game engine shaders, to cheat things like rim lighting and depth passes at incredibly high frame rates, all at full HD.
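
To make that concrete, here is a minimal Python sketch, and only a sketch, of how the per-layer capture can be driven; our actual production setup differed, and the output directory and sizes here are placeholder values. It simply steps through each render layer and playblasts it off-screen as an image sequence, which is exactly the kind of "glorified playblast" described further down.

```python
# Hedged sketch: playblast every render layer as its own image-sequence pass.
# Paths, resolution, and naming are placeholders, not the Starfish Ninja setup.
import maya.cmds as cmds

def playblast_render_layers(out_dir="/tmp/passes", width=1920, height=1080):
    start = cmds.playbackOptions(query=True, minTime=True)
    end = cmds.playbackOptions(query=True, maxTime=True)
    for layer in cmds.ls(type="renderLayer") or []:
        if layer == "defaultRenderLayer":
            continue
        # Activate the layer so its overrides (shaders, VP2 settings) take effect
        cmds.editRenderLayerGlobals(currentRenderLayer=layer)
        cmds.playblast(
            format="image",
            filename="{}/{}".format(out_dir, layer),
            startTime=start,
            endTime=end,
            widthHeight=[width, height],
            percent=100,
            quality=100,
            offScreen=True,   # keep overlapping windows from dirtying frames
            viewer=False,
        )

playblast_render_layers()
```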

Displayed at 50 fps on a modest workstation
Now instantly the critics out there will say, "Wait, that's SSAO, which is not accurate." Yes, I am aware. I am also aware that starfish do not talk, and do not know the ways of the ninja. So there. Ha.

I'm looking for a graphic style, not a photocopy of reality. So I want my artists to spend time pushing, pulling, shifting, and otherwise redirecting the imagery instead of paying huge overhead for a render farm.  If you control the effect and build it into the style and visual vocabulary of the show, it will work, in most cases.

And now with technologies like Source Filmmaker out there, and even just the improvements to Maya's VP2.0 in 2013, artists have more potential tools to get the job done cheaply than ever before. Another bit of software I'm very excited to get time to play with is the Furryball render plugin for Maya. Here you'll see the best of not only VP2.0 but also Unreal Engine and CryEngine taken to the extreme, and ready for TV production. Take a look at the features in the latest release: realtime, in-viewport indirect lighting and cheated radiosity; realtime reflection and refraction; not to mention particle and fluid effects. With all the R&D that has been happening in the game industry, it's a shame that more film studios aren't riding in the wake of game development for GPU-processed subdivision and displacement, at least for non-photorealistic production. (After drafting this up, a friend pointed me to this article, as an example of some smart thinking, imho.)

But I digress. What I would like to do next is break down the workflow we hacked together in Maya's VP2.0 so you can reap similar benefits, if it makes sense for you. If you've seen the teaser for my project Starfish Ninja, you've seen VP2.0 in action in every single frame.

What you see here is a test composite that was developed to break compositors into our workflow. Everything you see here, with the exception of the godrays streaming in from the top, comes from either a hardware 1.0 or hardware 2.0 (Viewport 2.0) pass from Maya, both of which are just glorified playblasts.

Test Composite - for training purposes

The final look is a product of the combination of about a dozen images that each contain a specific lighting or compositing pass.  As seen below, the lighter can preview the composite of all the passes to make tweaks before handing it off to be put together (in our case in Nuke or After Effects).


To get passes such as depth and automated rim lighting, we made use of a few CGFX shaders developed for game purposes.  With a budget to write shaders, I'm sure we could have made a more efficient water shader, though reflection maps and some compositing cheats got us through the variety of shots in the teaser.
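
If you're curious what hooking one of those shaders up looks like in script form, here is a hedged sketch; "depth.cgfx" is a stand-in for whichever game-style effect file you have on hand, and the cgfxShader attribute names can vary between Maya versions, so treat this as an outline rather than a recipe.

```python
# Sketch: assign a game-style CGFX effect (e.g. a z-depth shader) to geometry.
# The .cgfx path is a placeholder; check the node's attributes in your Maya version.
import maya.cmds as cmds

def assign_cgfx(fx_path, objects, name="depthPass_cgfx"):
    cmds.loadPlugin("cgfxShader", quiet=True)                  # CGFX support is a plugin
    shader = cmds.shadingNode("cgfxShader", asShader=True, name=name)
    cmds.setAttr(shader + ".shader", fx_path, type="string")   # point at the effect file
    cmds.select(objects, replace=True)
    cmds.hyperShade(assign=shader)                             # assign to the selection
    return shader

# e.g. put every mesh on the depth shader before playblasting a z-depth pass
assign_cgfx("/path/to/depth.cgfx", cmds.ls(type="mesh"))
```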

That covers most of what is worth going over here.  Keep reading if you want to see how we used it in more detail, and we'll conclude with a summary of pros and cons.

Below are some screen caps from another shot from the teaser. These are the main passes needed to composite the shot, although some others were left out for the purposes of this discussion. I've left the heads-up display on to emphasize that this is what you see in the viewport. Click to see larger:



For Ambient Occlusion in Maya 2012, we found there were some problems with grainy artifacts. Since this was what was available for the project, we applied a median filter and slightly blurred the occlusion in post, which gave us acceptable results. Below is the version with artifacts first, then without.
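
For anyone who wants to see the idea outside of a comp package, here is a rough Python equivalent of that cleanup, a median filter followed by a slight blur; in production this was simply a couple of filter nodes in the comp, and the file names below are placeholders.

```python
# Rough sketch of the AO cleanup: median filter to kill the grain, then a light blur.
# In production this was done with comp nodes, not a script; file names are placeholders.
import imageio
import numpy as np
from scipy.ndimage import median_filter, gaussian_filter

def clean_ao(in_path, out_path, median_size=3, blur_sigma=1.0):
    ao = imageio.imread(in_path).astype(np.float32) / 255.0
    if ao.ndim == 3:                           # don't filter across the channel axis
        size, sigma = (median_size, median_size, 1), (blur_sigma, blur_sigma, 0)
    else:
        size, sigma = median_size, blur_sigma
    ao = median_filter(ao, size=size)          # removes the speckled SSAO grain
    ao = gaussian_filter(ao, sigma=sigma)      # softens whatever is left
    imageio.imwrite(out_path, (np.clip(ao, 0.0, 1.0) * 255).astype(np.uint8))

clean_ao("ao_raw.0001.png", "ao_clean.0001.png")
```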

 Below we compare a quick composite before the AO, and after.


And here is an enlargement of some of the problems you run into when filtering AO that has artifacts. Most of these issues could be tweaked and massaged out of the final composites, given a little extra time. Keep in mind, though, that everything you see in the teaser is generated in near real time, as opposed to rendering for hours, so we can afford to tweak a bit by hand.

Maya 2012 artifacting - fixed in 2013

The good news is that Autodesk has fixed the problems with the AO in Maya 2013, so the images below show what AO looks like on our octopus model, artifact free! To set up a shader for an AO pass, we create a new render layer and assign a layer override: a white lambert with a white ambient color. Then turn on AO in the Viewport 2.0 settings (assign a layer override for this too, so it isn't on for the other render layers). The radius and filtering settings can be tweaked to get the right amount of falloff.
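
Scripted, that layer setup looks roughly like the sketch below. The hardwareRenderingGlobals attribute names are from the 2013-era SSAO settings, so double-check them against your Maya version; the shader and layer names are arbitrary.

```python
# Hedged sketch of the AO render layer: white lambert override plus per-layer SSAO settings.
import maya.cmds as cmds

def build_ao_layer(objects, radius=16.0, filter_radius=16, samples=32):
    # White lambert with a white ambient color, so only the SSAO term shades the layer
    shader = cmds.shadingNode("lambert", asShader=True, name="aoWhite_lambert")
    cmds.setAttr(shader + ".color", 1, 1, 1, type="double3")
    cmds.setAttr(shader + ".ambientColor", 1, 1, 1, type="double3")

    # New layer containing the objects; assigning the shader while the layer is
    # current records the assignment as a layer override
    layer = cmds.createRenderLayer(objects, name="AO_pass", makeCurrent=True)
    cmds.select(objects, replace=True)
    cmds.hyperShade(assign=shader)

    # Layer overrides on the VP2 SSAO settings so AO is only on for this layer
    for attr, value in [("ssaoEnable", 1),
                        ("ssaoRadius", radius),
                        ("ssaoFilterRadius", filter_radius),
                        ("ssaoSamples", samples)]:
        plug = "hardwareRenderingGlobals." + attr
        cmds.editRenderLayerAdjustment(plug)   # register the override on the current layer
        cmds.setAttr(plug, value)
    return layer

build_ao_layer(cmds.ls(type="mesh"))
```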

Tight AO Pass - to capture fine detail
Broad AO Pass - to get larger shading
2 AO Passes - multiplied together for final result
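
The multiply itself is about as simple as compositing gets; in practice it is just two multiply operations in Nuke or After Effects, but here is a tiny Python sketch of the same math, assuming the passes share resolution and channel count (file names are placeholders).

```python
# Sketch of the two-AO-pass combine: tight x broad, then multiplied under the beauty.
import imageio
import numpy as np

def read_float(path):
    return imageio.imread(path).astype(np.float32) / 255.0

def combine_ao(tight_path, broad_path, beauty_path, out_path):
    ao = read_float(tight_path) * read_float(broad_path)   # fine detail x broad shading
    comp = read_float(beauty_path) * ao                     # darken the beauty by the AO
    imageio.imwrite(out_path, (np.clip(comp, 0.0, 1.0) * 255).astype(np.uint8))

combine_ao("ao_tight.0001.png", "ao_broad.0001.png", "beauty.0001.png", "comp.0001.png")
```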


So let's end with a comparison of the benefits and downsides:


The Pros:


  • What you see in the viewport is what renders.
  • Very short render times:  4 sec/1080p frame for all passes.  Each shot can then be rendered on an individual machine in minutes, not hours or days.
  • Ambient Occlusion that renders in realtime
  • Anti-aliasing - up to 16 samples
  • Depth of field - supported, though better done in AE or Nuke
  • Motion blur - it's crappy, but yes


The Cons:

  • Limited anti-aliasing.  More than 16 samples would be great in many cases.
  • Hardware 2.0 does not support particles or animated textures, though Hardware 1.0 does to an extent.
  • No raytracing (reflection maps can be used, soft depth map shadows can be used)
  • No displacement (though this may be resolved in the future)
  • Limited to a 1920 x 1080 maximum resolution (perfect for TV, though)
  • No light linking or negative lights, and a limit of 16 lights per pass.  This is the main reason that so many passes must be output (one way to script around it is sketched below).
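
To illustrate that last point, here is a hypothetical helper (not our production tooling) that chunks the scene's lights into render layers of at most 16 each, so every lighting pass stays under the cap and can be playblasted on its own; whether a given light contributes to a pass then comes down to layer membership.

```python
# Hypothetical workaround for the 16-light cap: one render layer per group of <=16 lights.
import maya.cmds as cmds

MAX_LIGHTS = 16

def split_lights_into_layers(geometry):
    """geometry: list of meshes/groups that should appear in every lighting pass."""
    lights = cmds.ls(type="light") or []            # all light shapes in the scene
    layers = []
    for i in range(0, len(lights), MAX_LIGHTS):
        chunk = lights[i:i + MAX_LIGHTS]
        layer = cmds.createRenderLayer(
            list(geometry) + chunk,
            name="lightPass_{:02d}".format(i // MAX_LIGHTS + 1),
            makeCurrent=False,
        )
        layers.append(layer)
    return layers

split_lights_into_layers(cmds.ls(geometry=True))
```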

I hope you found this interesting.  Next week, we'll look at some neat effects from the teaser that were created frame by frame, by hand.  Why?  Tune in next week to find out!


Comments:

Asif said...

Great article Joe! Now I can pass this on to some friends that were trying to understand the workflow. Also, the link to Autodesk talking about vector displacement maps in VP2 is amazing! Hopefully they integrate that into Maya soon; I want EXRs!!

Unknown said...

Ah, yes. The .exr format would be a nice option, as well as motion vector output for better motion blur. There is also a weird bug with depthmap shadows not working if there is geometry that covers the whole scene.

If anyone hears of news related to realtime gpu rendering or something similar to what we've done here, feel free to leave a comment. I'd love to check it out.
Cheers!

Unknown said...

Pretty cool. I just finished a project a couple of months ago and I had to use GPU render too, cos there would be no way to model, animate and render 7 min of 1080p animation in two weeks.

Some samples:
http://www.rodrigosenna.com.br/videos.php?id=22

Hope things evolve in this direction, it's a real money and time saver.

Unknown said...

@ Rodrigo
Very nice work, and a great example of what can be done on the GPU. Was that done in the Maya viewport, or with a renderer like Furryball?

Thanks for sharing. And I'm also enjoying your blog. Cheers!

Unknown said...

Basically on VP2.0. For one thing or another we had to use MR or MS, like the water and the yellow gear, which has some raytraced reflections.

Depth map shadows are still not good enough, we had some flickering issues.

But the real time lighting and rendering is just great for fine tuning the colors and contrast.

Hope to see Starfish Ninja finished soon! It's looking great!

Sam Gannaway said...

When I read that you were doing serious rendering with VP2.0, I thought a MEL script I wrote might be useful for you. If you don't already have a tool for batch playblasting, check out my Shotlist Manager:
https://sites.google.com/site/shotlistmanager/
The main focus is for render farm submission, but many people have said that one of their favorite tools is the batch playblaster. It lets you playblast a series of shots (plus layers and/or cameras). If you have any questions about it (or are using something better), let me know.

Unknown said...

Hey Sam, that script looks great, I'm going to check that out, for sure. Thanks!

Unknown said...

Hi there!

Amazing work you are doing!

Can you share with us which CGFX shader you used for z-depth?

I am trying to do the same thing but can't find a solution to do it, nor a CGFX shader that does it.

I tried using vertex normals and a point light, but some objects are not working this way.



Unknown said...

Great job and very good article. Thanks for sharing! I am currently working on a project with a limited budget and I'm experimenting with the viewport too. I was really inspired by your article. There are a few things that I could mention:

1 - First of all, there is a solution for both the limited resolution and the anti-aliasing of the viewport. If you use playblast with the image format and set it to get the resolution from the render settings, you can render up to 4096 x 2304 (you should enable render offscreen, otherwise Maya might crash). This takes care of the resolution limit, and because of the big output you can then shrink it in post and have a full HD frame with less of an aliasing problem.

2 - With Maya 2015, you can even render hair with XGen. If you do a batch render you won't see the hair in the output, but if you use the playblast method you get perfect XGen hair on your characters.

3 - Using mental ray to bake textures for the background, and then adding the shadows of moving objects with the negative-light technique, gives you a kind of realtime GI and FG!

4 - It should also be mentioned that with the new viewport you can see the tessellation of your displacement maps in realtime, too. The viewport now also supports nParticles and some fluids, so with a little bit of tweaking, rendering godrays and volumetrics is not totally impossible! :)

Unknown said...

Thanks for the comment, Adel. It's amazing how much has changed in Maya with vp2 since Maya 2012. Many great features have been added, for sure.

I was disappointed to see that DX11 displacement is calculated after the SSAO, meaning they don't match up. This pretty much makes displacement unusable for me, since realtime AO is one of the best things about the renderer. Not sure if this has been addressed in 2015 with the OpenSubdiv displacement implementation.

As far as rendering at double resolution, what kind of video card do you have? Mine caps out at 2048 square. Time for me to upgrade, I think ;)

I have yet to use 2015. Does it support negative lights? Because 2014 had yet to implement it.

Cheers!


Unknown said...

Hi, how can I render XGen fur or hair in realtime?


