POV-Ray : Newsgroups : povray.off-topic : This is another "free" unbiased engine: Indigo Render (Messages 41 to 50 of 54)
From: Tom York
Subject: Re: This is another "free" unbiased engine: Indigo Render
Date: 27 Oct 2007 21:50:00
Message: <web.4723ea7a8d03a40e7d55e4a40@news.povray.org>
John VanSickle <evi### [at] hotmailcom> wrote:
> For animations this is a show-stopper.  Picture quality *must* be
> consistent from frame to frame, and that rules out any perceptible
> degree of graininess.

I think you can have Maxwell (at least; I don't know about the others) stop
when a selected noise level is reached. Animations are possible, but I would
think the main argument against them is the crippling render times. When
one frame can take hours to render, it's really not practical:

http://www.maxwellrender.com/img/gallery/videos/promotional/whentheyfall.mov

The noise seems to be at a consistent level in that.

> in their docs they say that the only real drawback to
> ray-tracing is the requirement that the entire scene be containable in
> memory

In RAM, they mean? If so, I don't think that can be correct (or up to date);
look for Ingo Wald's work on out-of-core (and realtime) raytracing. I think
they must have some use for raytracing or they wouldn't have bothered adding it
to PRMan 11. Apart from OpenRT, which supports this as a matter of course,
there's also

graphics.cs.uni-sb.de/Publications/2004/Boeing_EGSR2004.ppt

which describes the challenges of raytracing a 350-million-triangle
model in real time (35-70 GB of data, apparently).

Tom



From: Nicolas Alvarez
Subject: Re: This is another "free" unbiased engine: Indigo Render
Date: 27 Oct 2007 21:50:53
Message: <4723eafd@news.povray.org>

> Darren New wrote:
>> Color me unimpressed. Maybe it's because I'm not an expert, but some 
>> of the sub-surface scattering stuff is the only stuff that looks 
>> particularly good to me. Balanced against most of their proud gallery 
>> being obnoxiously grainy, I don't see it as a win just from the photos.
>>
>> Is it possible to automatically know when a scene is good enough? Or 
>> does it take human intervention to say "ok, stop now and move on to 
>> the next frame"?
> 
> For animations this is a show-stopper.  Picture quality *must* be 
> consistent from frame to frame, and that rules out any perceptible 
> degree of graininess.  Letting the unbiased renderers go until the grain 
> is gone is not practical, because that requires a human to monitor the 
> render, and requires that human to decide consistently from one frame to 
> the next.  The only way an unbiased renderer could be used in animation 
> work is to let it render the first frame of every shot, decide on an 
> acceptable quality level, and then allow that much time for each frame, 
> and hope that the movement of some object or the camera doesn't increase 
> the time requirement significantly.
> 
You can always process more later. I played with an open-source 
forward raytracer (not sure if it's really unbiased) where you could 
save the render data to a file and, at any moment, reload it and render 
some more. You could also start up instances on different computers, let 
them run for a few hours or days, and then merge the results.

So it would be possible to render up to a certain quality level for all 
frames, then (in some automated way) reload each frame, render a few 
more passes on them, and save them back. Or, render only one pass on 
each frame, so that you get the whole animation done pretty fast (even 
though it would be EXTREMELY grainy), and then repeatedly render single 
passes on all frames. That way all frames would slowly get better at the 
same rate.
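The merging step works because an unbiased renderer's pixel value is just a sample mean. A minimal sketch, assuming each instance saves per-pixel sample sums plus a pass count rather than a finished image (the function and data layout here are hypothetical, not from the renderer mentioned above):

```python
def merge(partials):
    # Each partial result is (per-pixel sums of samples, pass count).
    # Independent runs combine exactly: add the sums, add the counts,
    # and divide at the end.
    n = len(partials[0][0])
    total = [0.0] * n
    passes = 0
    for sums, count in partials:
        for i, s in enumerate(sums):
            total[i] += s
        passes += count
    image = [s / passes for s in total]
    return image, passes
```

Saving (sums, count) instead of the final image is what makes both reload-and-refine and cross-machine merging possible.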



From: Warp
Subject: Re: This is another "free" unbiased engine: Indigo Render
Date: 27 Oct 2007 23:08:45
Message: <4723fd3d@news.povray.org>
Darren New <dne### [at] sanrrcom> wrote:
> Warp wrote:
> >   I suppose that if you are calculating the lightmaps into something else
> > than bitmaps you could do adaptive supersampling 

> Hmmm. Unless you're talking about how to turn a bunch of radiosity chips 
> in 3D into a bitmap of the 3D structure as seen from a particular point 
> in space, I am confused. Either you're talking about something other 
> than what I learned, or my memory of what I learned doesn't match what 
> the "radiosity" algorithm really does.

  When you wrote

"you could build it in a way that let the areas with lots of detail have
finer resolution than the areas with less detail"

it seemed clear to me that you were talking about the lightmap resolution.
Were you talking about something else?

  The basic radiosity algorithm calculates the illumination of surfaces
into lightmaps (which can then be applied to those surfaces).
A lightmap is essentially just an image map, but instead of giving the
color of the surface at each point (which is what a texture map does),
it gives the lighting (i.e. the brightness and coloration) of the surface
at that point. A rendering engine filters the texture map with the light
map in order to get the final surface color.

  Radiosity is an algorithm for calculating such lightmaps. For each pixel
in the lightmap, the "camera" is placed on the surface of the object, at
the point corresponding to that lightmap pixel, facing outwards, and the
hemisphere seen from that point of view is averaged into that lightmap
pixel. This is done multiple times in order to capture the diffuse
inter-reflection of light between surfaces.

  The great thing about radiosity is that calculating the lightmaps can
be done with 3D hardware, making it quite fast (although still not
real-time).
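The gathering step described above can be sketched as follows. This is a toy under stated assumptions: the hemisphere is sampled stochastically instead of rasterized with 3D hardware as mentioned, and `radiance` is a hypothetical callback standing in for whatever returns the light arriving from a given direction.

```python
import math
import random

def gather_texel(point, normal, radiance, n_samples=256, seed=0):
    # One gathering step for one lightmap texel: average the radiance
    # arriving over the hemisphere above the surface point.
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n_samples):
        # Uniform direction on the sphere (normalized Gaussian vector),
        # flipped into the hemisphere around the surface normal.
        d = [rng.gauss(0.0, 1.0) for _ in range(3)]
        norm = math.sqrt(sum(c * c for c in d)) or 1.0
        d = [c / norm for c in d]
        if sum(a * b for a, b in zip(d, normal)) < 0.0:
            d = [-c for c in d]
        total += radiance(point, d)
    return total / n_samples
```

Repeating this over every texel, and then iterating with the updated lightmaps feeding back into `radiance`, is what produces the diffuse inter-reflection described above.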

-- 
                                                          - Warp



From: Warp
Subject: Re: This is another "free" unbiased engine: Indigo Render
Date: 27 Oct 2007 23:16:36
Message: <4723ff14@news.povray.org>
John VanSickle <evi### [at] hotmailcom> wrote:
> The only way an unbiased renderer could be used in animation 
> work is to let it render the first frame of every shot, decide on an 
> acceptable quality level, and then allow that much time for each frame, 
> and hope that the movement of some object or the camera doesn't increase 
> the time requirement significantly.

  Even if the quality level is consistent in all frames, if there's *any*
graininess visible at all, it will probably flicker randomly from frame to
frame, which is probably not very pleasant to watch.

  I have been thinking about one thing when I look at some of the example
images made by those renderers, especially the ones which show a car.
If I understood correctly, it takes the renderer quite a humongous amount
of time to render such a picture (I think someone mentioned 12 hours
somewhere?).
  Many of the car pictures look like you could create an almost identical
picture with POV-Ray (at least with POV-Ray 3.7, thanks to its HDRI support)
and render it in far less than an hour; probably in less than half an
hour.

  It just feels that sometimes using "less accurate" rendering methods
which nevertheless produce a completely acceptable image is more feasible
than using 12+ hours to render a "physically accurate" picture which to
the layman doesn't look any different... :P

-- 
                                                          - Warp



From: Fa3ien
Subject: Re: This is another "free" unbiased engine: Indigo Render
Date: 28 Oct 2007 01:02:22
Message: <472417de$1@news.povray.org>


> Is it possible to automatically know when a scene is good enough? Or 
> does it take human intervention to say "ok, stop now and move on to the 
> next frame"?

It might be possible to detect something like "amount of noise / 
graininess" in an image, and set a threshold.

Fabien.



From: Nicolas Alvarez
Subject: Re: This is another "free" unbiased engine: Indigo Render
Date: 28 Oct 2007 01:37:52
Message: <47242030@news.povray.org>


> 
>> Is it possible to automatically know when a scene is good enough? Or 
>> does it take human intervention to say "ok, stop now and move on to 
>> the next frame"?
> 
> It might be possible to detect something like "amount of noise / 
> graininess" in an image, and set a threshold.
> 

See the question about stop criteria:
http://www.winosi.onlinehome.de/FAQ.htm



From: John VanSickle
Subject: Re: This is another "free" unbiased engine: Indigo Render
Date: 28 Oct 2007 12:45:43
Message: <4724cac7$1@news.povray.org>
Tom York wrote:
> John VanSickle <evi### [at] hotmailcom> wrote:
> 
>>For animations this is a show-stopper.  Picture quality *must* be
>>consistent from frame to frame, and that rules out any perceptible
>>degree of graininess.
> 
> I think you can have Maxwell (at least, don't know about the others) cut off
> when a selected noise level is reached.

The only real issue here is whether the noise is really noise or merely 
the way the scene is actually supposed to look.  I suppose the best way 
to tell, from an automated standpoint, is to measure successive changes 
to the pixels; when those changes are consistently below a certain 
threshold, the rendering can be assumed to be good enough.
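That criterion can be sketched directly. A hypothetical helper, assuming `history` holds the running per-pixel averages after each refinement pass:

```python
def converged(history, threshold, streak=3):
    # Declare the render "good enough" once the largest per-pixel
    # change between successive passes stays below the threshold for
    # `streak` passes in a row -- which distinguishes shrinking
    # sampling noise from texture that is supposed to look grainy.
    run = 0
    for prev, cur in zip(history, history[1:]):
        delta = max(abs(a - b) for a, b in zip(prev, cur))
        run = run + 1 if delta < threshold else 0
        if run >= streak:
            return True
    return False
```

Requiring several consecutive quiet passes (rather than one) guards against stopping early when one pass happens to change little by chance.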

> Animations are possible, but I would
> think the main reason against them would be the crippling render times. When
> one frame can take hours to render, it's really not practical.

For getting stuff out the door quickly, long render times are a 
show-stopper. But for a feature film with a five-year production 
timetable, if your render farm has a thousand boxes in it (such a farm 
can be had for less than a megabuck, so the bigger houses can easily 
afford one), each box only needs to render 130 to 170 frames; a render 
time of half a day per frame is acceptable under those particular 
circumstances.
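The back-of-the-envelope arithmetic behind those figures, assuming a standard 24 fps feature (the function name and parameters are illustrative):

```python
FPS = 24  # standard feature-film frame rate

def farm_schedule(runtime_minutes, boxes, hours_per_frame):
    # Total frames in the film, frames each box must render,
    # and how many days of rendering that costs per box.
    frames = runtime_minutes * 60 * FPS
    frames_per_box = frames / boxes
    days_per_box = frames_per_box * hours_per_frame / 24
    return frames, frames_per_box, days_per_box
```

A 100-minute film is 144,000 frames, i.e. 144 frames per box on a thousand-box farm; at 12 hours per frame that is about 72 days of rendering per box, comfortably inside a five-year schedule. Runtimes of roughly 90 to 118 minutes give the 130-170 frames per box quoted above.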

> In RAM, they mean? If so, I don't think that can be correct (or up to date);
> look for Ingo Wald's work on out-of-core (and realtime) raytracing. I think
> they must have some use for raytracing or they wouldn't have bothered adding it
> to PRMan 11.

They did; while most reflective surfaces can be simulated well enough 
with an environment map (which is how they did reflections before 
_Cars_), reflective objects that are themselves in the scene, and moving 
within it, require ray tracing in order to achieve the efficiency that 
Pixar requires.  Pixar trades accuracy for speed when the accuracy is 
only going to be noticed by people like CGI hobbyists, but sometimes the 
accuracy is discernible by the average Joe, and that's when quality gets 
the emphasis.

Regards,
John



From: John VanSickle
Subject: Re: This is another "free" unbiased engine: Indigo Render
Date: 28 Oct 2007 12:52:10
Message: <4724cc4a$1@news.povray.org>
Warp wrote:
>   It just feels that sometimes using "less accurate" rendering methods
> which nevertheless produce a completely acceptable image is more feasible
> than using 12+ hours to render a "physically accurate" picture which to
> the layman doesn't look any different... :P

This is precisely Pixar's position on rendering.  As long as it looks 
good enough to support the story without distracting from the story, 
it's good enough.

Regards,
John



From: Tim Attwood
Subject: Re: This is another "free" unbiased engine: Indigo Render
Date: 28 Oct 2007 14:57:08
Message: <4724e994$1@news.povray.org>
> The problem being that doing equivalent scenes in POV-Ray (particularly 
> interior scenes) is, from a practical point of view, impossible. Light 
> sources in POV-Ray are much too primitive for that, there's no support for 
> true area lights or good-looking blurred reflections and while there are 
> situations where it's more or less possible to simulate this (using 
> various tricks), in most cases it just doesn't work. Jaime is probably the 
> POV-Ray artist who has done the most research in that area, and his best 
> results, impressive as they are from a POV-Ray perspective, are just not 
> in the same league, and are plagued with radiosity artifacts and area 
> light graininess.
> http://www.ignorancia.org/en/index.php?page=Modern_interior
>
> Even with the grain, the quality of the illumination in unbiased 
> renderers is unparalleled, simply because there's no cheating involved.

I agree that there are "ease of use" issues with POV.  The workflow
of getting the right radiosity and media settings is not very robust:
small changes can make big differences in render time and quality.

Some "newer" features are missing from POV: SSS and blurred reflections.

Overall, many images I have seen from POV are somewhat sharp-edged,
sort of 3D-cartoony, though the best images are almost photo-realistic.
I didn't get that impression from the Indigo gallery; the only one I
remember being that way is the whisky glass.

Being able to get fast previews in Indigo, or to change the lighting
after the fact as in Maxwell, is obviously good.

It might be an acceptable trade-off to get slightly grainier pictures
in exchange for spending less time fiddling with technical settings.

The last time I looked at Blender it had a very bad UI, maybe I should
download a newer version and see if it has changed.



From: Jim Charter
Subject: Re: This is another "free" unbiased engine: Indigo Render
Date: 28 Oct 2007 15:00:05
Message: <4724ea45@news.povray.org>
Warp wrote:

>   It just feels that sometimes using "less accurate" rendering methods
> which nevertheless produce a completely acceptable image is more feasible
> than using 12+ hours to render a "physically accurate" picture which to
> the layman doesn't look any different... :P
> 

Sure, and I think we all understand that; furthermore, the point is 
always worth mentioning, always one leg of a platform that will fail if 
we don't constantly remind ourselves of it.  But we stabilize and 
reinforce our sanity platform precisely so we can rest on it, use it as 
a base, while we take our flights of fancy.

While reading these 'v4.0' discussions I have been forced to think a 
little about what the appeal of POV-Ray really is for me.  The idea that 
I keep coming back to is one I term 'THE HUNT'.  I get the idea from a 
text written by the French artist Jean Dubuffet, and I think I have 
mentioned it here before.  It is a text I originally read in one of the 
compilations by Herschel Chipp while I was in art school in the '70s; I 
am sorry I cannot provide a reference beyond that. The title was, I 
think, "L'Impreints". In it Dubuffet takes the reader inside the 
creative process in a way few others have.  The medium he was using was 
a personal variant on printmaking which involved pulling monoprints from 
a surface spread with a film of india ink, not printing ink, and 
perturbed (POV pun intended) with foreign matter such as dust and sand. 
  As he pulls proof after proof he describes the worlds he discovers 
within the ink patterns with the immediacy of the still-drying swirls 
and grit.  Early in the essay he says (in English translation, and to 
the best of my memory), "It is like a hunt."

And I believe that this is at the root of POV's appeal for me, and 
probably for others: its support for 'THE HUNT'.

POV, an elder statesman of CG, has, like its maturing colleagues, 
embodied the basic disparity between the pure goal and the impure 
solution.  Artists drawn to it must navigate this basic contradiction 
philosophically and creatively.  We understand we are working with an 
analogue, but an analogue that comes wickedly, tantalizingly close, 
sometimes, to its subject.

This realization first came to me while reading the discussions 
surrounding syntax.  There was a recognition of the need to change the 
syntax, to 'abstract' the syntax from the functionality.  The gain could 
be greater ease of development and greater variety of function. There 
was recognition that a more robust syntax might leverage everything from 
model making to material attributes through the techniques of object 
orientation. At times personal stylistic preferences did seem to creep 
in. A preoccupation with syntax as an end in itself did seem to creep 
in. There was also a recognition that the current syntax has properties 
of its own innate appeal, though the appeal is difficult to define.  The 
attempt to define that appeal more clearly for myself led to my greater 
sense of what the appeal of POV is for me: The Hunt.  The ability to 
start at any point and have the chase of ideas intensify. What starts as 
a lighting test ends as a scene of breadth and scope. What starts as a 
grand plan to render a universe ends as a lighting test of gemlike 
beauty. It involves a syntax of ease, flexibility, and immediacy.  
Improve the syntax, I say; it is innate to the creative process. But 
always support The Hunt.

Now, again, with the renewed appeal for rendering quality as a priority, 
we are reminded of the primitive urge at the heart of The Hunt: the 
perhaps irrational, Promethean hope of making it real.  Early on, one 
individual called for the need to define the purpose of 'v4.0' on a 
general level before indulging in particulars.  Well, I thought, there 
is always someone who wants to define the rules for making the rules for 
making the rules.  But that cautionary cry has come back to haunt us. 
What really is at the very heart of this?  The POV community has always 
favoured the interwoven strands of purist means as a path to truth, and 
cutting-edge technique in the hope of attaining it.




Copyright 2003-2023 Persistence of Vision Raytracer Pty. Ltd.