Jan 2012

Hi,

Can anyone tell me the difference between, and the advantages of, rendering 3D channels in the following two ways:

Diffuse, Occlusion, Shadows as separate channels under tga or iff format
vs
all channels under one exr file.

I'm using Nuke to composite these, so please let me know how each of the above approaches affects compositing in Nuke.

Thanks.

Diffuse, Occlusion, Shadows as separate channels under tga or iff format

It's easier to find a compliant editor for these file formats, say if you wanted to retouch something outside of your normal workflow because some data needs to be saved out.

all channels under one exr file.

Everything is in one package, so you are less likely to lose part of the set; on the other hand, you also lose it all in one go if the file corrupts. You get a wide range of bit-depth options. But of course some of the advantage is lost if you notice later that a layer needs to be added to the set.
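On the Nuke side the difference is mostly about how you get at the passes. Here is a minimal Python sketch of the two read patterns; the file paths and layer names (diffuse, occlusion, shadow) are made-up examples and depend on how your renderer labels its passes:

import nuke

# Separate files per pass (TGA/IFF style): one Read node per pass.
diffuse_read = nuke.nodes.Read(file="renders/diffuse/diffuse.%04d.tga")
occlusion_read = nuke.nodes.Read(file="renders/occlusion/occlusion.%04d.tga")
shadow_read = nuke.nodes.Read(file="renders/shadow/shadow.%04d.tga")

# One multi-channel EXR: a single Read node, then Shuffle nodes to pull
# each layer into rgba so it can be graded on its own.
beauty_read = nuke.nodes.Read(file="renders/beauty/beauty.%04d.exr")

diffuse = nuke.nodes.Shuffle(inputs=[beauty_read])
diffuse["in"].setValue("diffuse")      # layer name as written by the renderer

occlusion = nuke.nodes.Shuffle(inputs=[beauty_read])
occlusion["in"].setValue("occlusion")

The comp math is identical either way; the trade-off is really about file management, as described above.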

It's easier to find a compliant editor for these file formats, say if you wanted to retouch something outside of your normal workflow because some data needs to be saved out<<

You mean it would be easier to retouch if we render in TGA or IFF format. But we can still do this even if we render in EXR format, right?

Everything is in one package, so you are less likely to lose part of the set,<<

What is it that we might lose?

on the other hand, you also lose it all in one go if the file corrupts.<<

I agree, but still, files getting corrupted would be a rare occurrence...

Generally, I need to understand the difference in rendering speed between these modes.

Thanks for your time; looking forward to the answers.

Even if you render in EXR you don't have to put everything in one file; it is just something you can do if you want.
My company renders in EXR, but with all passes separated.

For your passes, some software like Maya gives you options to split your renders for free. Let me explain: the software already computes some passes to create the main render, and it gives you the option to write them out as EXR layers for almost 'free' render time. If you use a dedicated render layer to create a pass, however, that is a new render that needs to be done. (See the sketch below.)
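A very rough maya.cmds sketch of that distinction; the image-format code and the "imfkey" string are assumptions from mental ray-era Maya and may differ on your version, and the layer name is made up:

import maya.cmds as cmds

# Ask the batch renderer to write OpenEXR (these values are assumptions;
# check the Common tab of your Render Settings for your version/renderer).
cmds.setAttr("defaultRenderGlobals.imageFormat", 51)
cmds.setAttr("defaultRenderGlobals.imfkey", "exr", type="string")

# Passes pulled out of the main render's framebuffers (diffuse, shadow, etc.)
# cost almost no extra render time, because the beauty render already
# computes them.

# A dedicated render layer, by contrast, triggers a whole new render:
cmds.createRenderLayer(name="occlusion_layer", empty=True)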

Does that make sense?

Yes, when I use EXR I don't put them together. But I can understand the rationale that lost files are a distinct possibility when you're dealing with big productions. At the moment I'm not even doing VFX work, just some really serious batches of FEM calculations*, but we still managed to misfile a number of 370-gigabyte files**; seriously, you'd think they would be easy to find. But alas, there are a bit over 10^9 files on the server in question, so no luck...

* Well, it's exactly like VFX work, just with a smaller audience.

** Yeah, imagine rendering video, but instead of 2D pixel files with multiple channels you have the same thing in three dimensions, with obviously longer frame render times. So instead of file sizes growing with resolution^2, you have files that grow with resolution^3.
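A back-of-the-envelope sketch of that scaling, with purely illustrative numbers:

# 2D frame vs. 3D volume at the same resolution, both stored as
# 4 channels of 32-bit float (4 bytes per channel).
res = 2048
channels = 4
bytes_per_channel = 4

frame_2d = res ** 2 * channels * bytes_per_channel    # grows with res^2
volume_3d = res ** 3 * channels * bytes_per_channel   # grows with res^3

print(frame_2d / 2 ** 20, "MiB per 2D frame")    # ~64 MiB
print(volume_3d / 2 ** 30, "GiB per 3D frame")   # ~128 GiB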

Yes fxnurbs, all your replies make sense. I see that many companies render in EXR format, but even though they could combine all passes into one single EXR file, to be on the safer side they render the individual passes as separate EXR files.

Do you have any thoughts on the following point I mentioned above?

"Generally, i need to understand the difference in the speed of rendering between these modes.???"

Thanks Joojaa, I understand the effect of missing passes, which creates intense pressure during production.

** Yeah, imagine rendering video, but instead of 2D pixel files with multiple channels you have the same thing in three dimensions, with obviously longer frame render times. So instead of file sizes growing with resolution^2, you have files that grow with resolution^3.<< - Can you elaborate on this point?

I just did a rendering test comparing 32-bit IFF and 32-bit EXR. Both seemed to render fine, until one of the passes in 32-bit IFF rendered wrongly. While checking the Maya documentation, we came to know that IFF only supports up to 16 bits. So using the IFF format is now also ruled out.

So finally, if we are going to color correct a lot or adjust brightness and contrast while compositing, it is better to render out 32-bit EXR from Maya. But if we are not going to adjust colors much in compositing, it is better to render as 8-bit IFF.

Am I right?

Use 16-bit renders; there is plenty of info in them to be a good compromise between file size and grading safety. Just remember that some passes (depth, position and the like) need to be 32-bit.

As far as I know there is no rendering speed difference between multi-layer EXR and the other methods. But as I said, when you use the render pass output from Maya, some of the passes you can split from the main render cost almost zero render time, since they are part of the calculation of the main render anyway.

Viewing EXRs is indeed a pain. You can try to use RV or FrameCycler, but it is still slow. Use Nuke; it can play an EXR sequence from cache. If you use ZIP compression, that will help. Rendering a quick JPEG sequence can be good for checking purposes (see the sketch below).
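For the quick JPEG check, a minimal Nuke Python sketch; the paths and frame range are placeholders:

import nuke

# Read the EXR sequence (hypothetical path and range).
read = nuke.nodes.Read(file="renders/beauty/beauty.%04d.exr", first=1, last=100)

# Write an sRGB JPEG proof sequence for fast flipbooking.
write = nuke.nodes.Write(inputs=[read],
                         file="proofs/beauty_proof.%04d.jpg",
                         file_type="jpeg",
                         colorspace="sRGB")

nuke.execute(write, 1, 100)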

For color differences in Maya, check that your setup is correct! If you use a linear workflow, check that you have the correct LUT applied to your Render View (in its properties); there is a gamma to add in the viewer settings too. Make some tests, but what you see in your viewer must match what you see in Nuke (with the same LUT applied, of course, let's say sRGB).
And don't forget that with this linear process your render viewed from outside will look different; it needs an sRGB LUT to be seen with the proper gamma curve!
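For reference, the sRGB LUT being talked about is essentially this per-channel transfer curve applied on top of the linear render; a small sketch:

def linear_to_srgb(x):
    # Standard sRGB encoding of a linear value in the 0-1 range.
    if x <= 0.0031308:
        return 12.92 * x
    return 1.055 * x ** (1.0 / 2.4) - 0.055

# A linear mid-grey ends up noticeably different once encoded for display,
# which is why a raw linear render looks wrong without the LUT.
print(linear_to_srgb(0.18))   # ~0.46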

I totally agree with you that a 16-bit file is good enough, as it gives a proper depth range.

Thanks for recommending RV, which can be used for previewing EXR files instead of fCheck. I will request a trial license to try it out.

With regard to the difference between the Maya preview render output and the Maya EXR batch render output, we tried the following settings and it worked great.

Under Maya preview render:
Display > 32-bit floating-point (HDR)
Display > Color Management > Image Color Profile > Linear
Display > Color Management > Display Color Profile > sRGB (default)

Now whatever we see inside Maya exactly matches the EXR batch-rendered frame.