From: Fa3ien
Subject: Re: This is another "free" unbiased engine: Indigo Render
Date: 28 Oct 2007 01:02:22
Message: <472417de$1@news.povray.org>
> Is it possible to automatically know when a scene is good enough? Or
> does it take human intervention to say "ok, stop now and move on to the
> next frame"?
It might be possible to detect something like "amount of noise /
graininess" in an image, and set a threshold.
Fabien.
From: Nicolas Alvarez
Subject: Re: This is another "free" unbiased engine: Indigo Render
Date: 28 Oct 2007 01:37:52
Message: <47242030@news.povray.org>
>
>> Is it possible to automatically know when a scene is good enough? Or
>> does it take human intervention to say "ok, stop now and move on to
>> the next frame"?
>
> It might be possible to detect something like "amount of noise /
> graininess" in an image, and set a threshold.
>
See the question about stop criteria:
http://www.winosi.onlinehome.de/FAQ.htm
From: John VanSickle
Subject: Re: This is another "free" unbiased engine: Indigo Render
Date: 28 Oct 2007 12:45:43
Message: <4724cac7$1@news.povray.org>
Tom York wrote:
> John VanSickle <evi### [at] hotmailcom> wrote:
>
>>For animations this is a show-stopper. Picture quality *must* be
>>consistent from frame to frame, and that rules out any perceptible
>>degree of graininess.
>
> I think you can have Maxwell (at least, don't know about the others) cut off
> when a selected noise level is reached.
The only real issue here is whether the noise is really noise or merely
the way the scene is actually supposed to look. I suppose that the best
way to tell from an automated standpoint is to measure subsequent
changes to the pixels, and when those changes are consistently below a
certain threshold, the rendering is assumed to be good enough.
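That stop test can be sketched in a few lines. This is only an illustration of the idea, not any renderer's actual logic; the tolerance value is an assumption, and a real implementation would likely use a perceptual metric rather than a plain mean:

```python
import numpy as np

def keep_rendering(prev_pass, next_pass, threshold=0.005):
    # Mean absolute per-pixel change between two successive render passes.
    # The 0.005 tolerance is an assumed value, not taken from any renderer.
    return np.mean(np.abs(next_pass - prev_pass)) >= threshold

# Two passes that still differ noticeably, then two that barely differ:
a = np.zeros((4, 4))
b = np.full((4, 4), 0.1)
c = np.full((4, 4), 0.1001)
print(keep_rendering(a, b))  # True: keep rendering
print(keep_rendering(b, c))  # False: changes are below threshold, so stop
```

In practice one would demand the change stay below the threshold for several consecutive passes, so a single quiet pass doesn't stop the render early.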
> Animations are possible, but I would
> think the main reason against them would be the crippling render times. When
> one frame can take hours to render, it's really not practical.
For getting stuff out the door quickly, long render times are a
show-stopper, but for a feature film with a five-year production
timetable, if your render farm has a thousand boxes in it (such a farm can
be had for less than a megabuck, so the bigger houses can easily afford
them), each box only needs to render 130 to 170 frames; a render time of
half a day is acceptable under those particular circumstances.
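The arithmetic behind those numbers works out if we assume a roughly 90-minute feature at 24 frames per second (both figures are assumptions for illustration, not from the post):

```python
frames = 90 * 60 * 24     # ~129,600 frames in a 90-minute feature at 24 fps
boxes = 1000              # size of the hypothetical render farm
per_box = frames / boxes  # about 130 frames per box, matching the post
days = per_box * 0.5      # at half a day of render time per frame
print(per_box)  # 129.6
print(days)     # 64.8 days of rendering per box, easily within five years
```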
> In RAM, they mean? If so, I don't think that can be correct (or up to date);
> look for Ingo Wald's work on out-of-core (and realtime) raytracing. I think
> they must have some use for raytracing or they wouldn't have bothered adding it
> to PRMan 11.
They did; while most reflective surfaces can be simulated well enough
with an environment map (which is how they did reflection before
_Cars_), reflecting objects that are in the scene and which are moving
in the scene require ray tracing in order to achieve the efficiency that
Pixar requires. Pixar trades accuracy for speed when the accuracy is
only going to be noticed by people like CGI hobbyists, but sometimes the
accuracy is discernible by the average Joe, and that's when quality gets
emphasis.
Regards,
John
From: John VanSickle
Subject: Re: This is another "free" unbiased engine: Indigo Render
Date: 28 Oct 2007 12:52:10
Message: <4724cc4a$1@news.povray.org>
Warp wrote:
> It just feels that sometimes using "less accurate" rendering methods
> which nevertheless produce a completely acceptable image is more feasible
> than using 12+ hours to render a "physically accurate" picture which to
> the layman doesn't look any different... :P
This is precisely Pixar's position on rendering. As long as it looks
good enough to support the story without distracting from the story,
it's good enough.
Regards,
John
From: Tim Attwood
Subject: Re: This is another "free" unbiased engine: Indigo Render
Date: 28 Oct 2007 14:57:08
Message: <4724e994$1@news.povray.org>
> The problem being that doing equivalent scenes in POV-Ray (particularly
> interior scenes) is, from a practical point of view, impossible. Light
> sources in POV-Ray are much too primitive for that, there's no support for
> true area lights or good-looking blurred reflections and while there are
> situations where it's more or less possible to simulate this (using
> various tricks), in most cases it just doesn't work. Jaime is probably the
> POV-Ray artist who has done the most research in that, and his best
> results, impressive as they are from a POV-Ray perspective, are just not
> in same league and are plagued with radiosity artifacts and area light
> graininess.
> http://www.ignorancia.org/en/index.php?page=Modern_interior
>
> Even with the grain, the quality of the illumination in unbiased
> renderers is unparalleled, simply because there's no cheating involved.
I agree that there are "ease of use" issues with POV. The workflow
of getting the right radiosity and media settings is not very robust.
Small changes can make big differences in render time and quality.
Some "newer" features are missing from POV, such as SSS and blurred
reflections. Overall, many images I have seen from POV are somewhat
sharp-edged, sort of 3D cartoony. However, the best images are almost
photo-realistic; I didn't get that impression from the Indigo gallery.
The only one I remember being that way is the whisky glass.
Being able to get fast previews in Indigo, or to change lighting after
the fact like Maxwell, is obviously good.
It might be an OK trade-off to get slightly grainier pictures
in exchange for spending less time messing with technical settings.
The last time I looked at Blender it had a very bad UI; maybe I should
download a newer version and see if it has changed.
From: Jim Charter
Subject: Re: This is another "free" unbiased engine: Indigo Render
Date: 28 Oct 2007 15:00:05
Message: <4724ea45@news.povray.org>
Warp wrote:
> It just feels that sometimes using "less accurate" rendering methods
> which nevertheless produce a completely acceptable image is more feasible
> than using 12+ hours to render a "physically accurate" picture which to
> the layman doesn't look any different... :P
>
Sure, and I think we all understand that, and that furthermore, the
point is always worth mentioning, always one leg of a platform that will
fail if we don't constantly remind ourselves of it. But we stabilize
and reinforce our sanity platform precisely so we can rest on it, use it
as a base, while we take our flights of fancy.
While reading these 'v4.0' discussions I have been forced to think a
little about what the appeal of POV-Ray really is for me. The idea that
I keep coming back to is one I term, 'THE HUNT'. I get the idea from a
text written by the French artist Jean Dubuffet and I think I have
mentioned it here before. It is a text I originally read in one of the
compilations by Herchel Chipp while I was in art school in the 70's. I
am sorry I cannot provide a reference beyond that. The title was, I
think, "L'Impreints". In it Dubuffet takes the reader inside the
creative process in a way few others have. The medium he was using was
a personal variant on printmaking which involved pulling monoprints from
a surface spread with a film of india ink, not printing ink, and
perturbed (POV pun intended) with foreign matter such as dust and sand.
As he pulls proof after proof he describes the worlds he discovers
within the ink patterns with the immediacy of the still drying swirls
and grit. Early in the essay he says, (in English translation, and to
the best of my memory,) "It is like a hunt."
And I believe that this is at the root of POV's appeal for me and
probably for others: its support for 'THE HUNT'.
POV, an elder statesman of CG, has embodied, like its maturing
colleagues, the basic disparity between the pure goal and the impure
solution. Artists drawn to it must navigate this basic contradiction
philosophically and creatively. We understand we are working with an
analogue, but an analogue that comes wickedly, and tantalizingly close
sometimes, to its subject.
This realization first came to me while reading the discussions
surrounding syntax. There was a recognition of the need to change the
syntax to 'abstract' the syntax from the functionality. The gain could
be greater ease of development and greater variety of function. There was
recognition that more robust syntax might leverage everything from model
making to material attributes through the techniques of object
orientation. At times personal stylistic preferences did seem to creep
in. A preoccupation with syntax as an end in itself did seem to creep
in. There was also a recognition that the current syntax holds
properties of its own innate appeal, though the appeal is difficult to
define. The attempt to define that appeal more clearly for myself led
to my greater sense of what the appeal of POV is for me. The Hunt. The
ability to start at any point and have the chase of ideas intensify.
What starts as a lighting test ends as a scene of breadth and scope.
What starts as a grand plan to render a universe ends as a lighting test
of gemlike beauty. It involves a syntax of ease, flexibility, and
immediacy. Improve the syntax, I say, it is innate to the creative
process. But always support The Hunt.
Now again, with the renewed appeal for rendering quality as a priority,
we are reminded of the primitive urge at the heart of The Hunt. The
perhaps irrational, Promethean hope of making it real. Early on, one
individual called for the need to define the purpose of 'v4.0' on a
general level before indulging in particulars. Well, I thought, there
is always someone who wants to define the rules for making the rules for
making the rules. But that cautionary cry has come back to haunt us.
What really is at the very heart of this? The POV community has always
favoured the interwoven strands of purist means as a path to truth and
cutting-edge technique in the hope of gaining it.
From: Darren New
Subject: Re: This is another "free" unbiased engine: Indigo Render
Date: 28 Oct 2007 17:16:36
Message: <47250a44@news.povray.org>
Warp wrote:
> it seemed clear to me that you were talking about the lightmap resolution.
Given your definition of lightmap, yes, I think that's what I was
talking about. When I learned it, it was with the term "chips", which
were basically 3D polygons in space, each of which had a particular
reflectivity spectrum and emission spectrum (for glowing places like
lightbulbs).
> The basic radiosity algorithm is basically calculating the illumination
> of surfaces into lightmaps (which can be applied to those surfaces).
Yes. Good so far.
> A lightmap is basically just an image map, but instead of telling the
> color of the surface at that point (which is something a texture map does)
> it tells the lighting (ie. brightness and coloration) of the surface at
> that point. A rendering engine filters the texture map with the light map
> in order to get the final surface color.
The version I learned had each chip having a particular color. There
weren't any surfaces big enough to apply a texture to. You applied the
texture to the wall, and the resolution of the texture gave you the
resolution of the chips, basically.
> Radiosity is an algorithm for calculating such lightmaps. For each pixel
> in the lightmap, the "camera" is put onto the surface of the object
> corresponding to that lightmap pixel, facing outwards, and the half-world
> seen from that point of view is averaged into that lightmap pixel. This is
> done multiple times in order to get diffuse inter-reflection of light
> between surfaces.
Hmmmm... What you describe might be isomorphic to what I learned. What I
remember is this:
You start with your 3D surface and break it down into "chips":
triangles, for convenience, each with a normal and an
emissive/reflective color.
Then, for each chip, you calculate how much of each other chip it can
see, and at what angle, and add the reflection of that into the chip
under consideration.
The nice thing was you could do this with one giant matrix multiply (one
row and column for each chip) with everything but the diagonal being
zero to start with (IIRC). And once you've taken it to the precision you
want, you have the color of each chip, and you can redraw from different
angles without recalculating the lighting.
The bad thing, of course, is that you don't have reflection because each
chip's contribution is 100% diffuse over the surface of that chip.
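A toy version of that matrix formulation, with made-up form factors and reflectivities, might look like this (solved by repeated application rather than one big inversion, which amounts to the same fixed point):

```python
import numpy as np

# Three diffuse "chips". F[i, j] is an assumed form factor: the fraction
# of light leaving chip j that arrives at chip i. The diagonal is zero,
# since a flat chip cannot see itself.
F = np.array([[0.0, 0.3, 0.2],
              [0.3, 0.0, 0.4],
              [0.2, 0.4, 0.0]])
E = np.array([1.0, 0.0, 0.0])    # chip 0 emits light (a "lightbulb" chip)
rho = np.array([0.5, 0.8, 0.6])  # diffuse reflectivity of each chip

# Iterate B = E + rho * (F @ B): each pass adds one more bounce of
# diffuse inter-reflection, until the chip brightnesses settle.
B = E.copy()
for _ in range(200):
    B_next = E + rho * (F @ B)
    if np.max(np.abs(B_next - B)) < 1e-9:
        break
    B = B_next
print(B)  # view-independent brightness of each chip
```

Once B has settled, the scene can be redrawn from any angle without recomputing the lighting, exactly as described above.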
And, having written that, yes, I think what you described and what I
described are describing the same thing.
I think maybe your "lightmap bitmap" concept is assigning one pixel of a
bitmap to each of the "chips" in the algorithm I know.
I was thinking it could be adaptive because if you made some chips
larger (like, a smooth plain wall with inch-square chips) and some chips
smaller (like the surfaces of the paintings on the walls) your matrix
would be smaller. If you're already assuming you're calculating a bitmap
to be layered over a surface, it would be difficult to have
variable-sized pixels in it.
> The great thing about radiosity is that calculating the lightmaps can
> be done with 3D hardware, making it quite fast (although still not
> real-time).
Yeah, it seemed the kind of algorithm that one could put in hardware
rather easily.
--
Darren New / San Diego, CA, USA (PST)
Remember the good old days, when we
used to complain about cryptography
being export-restricted?
From: Warp
Subject: Re: This is another "free" unbiased engine: Indigo Render
Date: 28 Oct 2007 18:31:24
Message: <47251bcb@news.povray.org>
Darren New <dne### [at] sanrrcom> wrote:
> You start with your 3D surface and break it down into "chips":
> triangles, for convenience, each with a normal and an
> emissive/reflective color.
That indeed reminds me of an alternative (but much less popular) method
for calculating radiosity.
Instead of calculating the lighting into lightmaps, the lighting is
instead calculated at and stored in the vertices of each polygon (basically
in the same way as you would do it for each individual pixel in the
lightmap). The polygon itself is gouraud/phong-shaded using these vertex
lighting values.
In order to get more accuracy polygons can be subdivided into smaller
polygons and radiosity lighting calculated at each of the new vertices.
This would indeed allow subdividing more at places where there is more
variation in lighting.
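A minimal sketch of that vertex-lighting scheme: radiosity values computed at a triangle's three vertices are interpolated across its interior with barycentric weights (the function name and the numbers here are made up for illustration):

```python
import numpy as np

def shade_point(bary, vertex_light):
    # Gouraud-style interpolation: weight each vertex's stored radiosity
    # value by the point's barycentric coordinate within the triangle.
    return float(np.dot(bary, vertex_light))

# Radiosity values computed (elsewhere) at one triangle's three vertices:
vertex_light = [0.2, 0.8, 0.5]
print(shade_point([1.0, 0.0, 0.0], vertex_light))  # 0.2 exactly at vertex 0
print(shade_point([1/3, 1/3, 1/3], vertex_light))  # centroid averages all three
```

Subdividing the triangle just means computing more vertex values and interpolating over smaller patches, which is how the adaptive refinement above would work.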
AFAIK this method is not very popular because rendering lightmaps onto
polygons is much faster and more efficient than rendering several orders
of magnitude more polygons (which still need gouraud/phong shading, which
is not significantly faster than lightmapping; it could even be slower).
This is especially true in real-time rendering using hardware, as polygon
counts should be minimized, possibly being replaced with more detailed
textures.
--
- Warp
From: Darren New
Subject: Re: This is another "free" unbiased engine: Indigo Render
Date: 29 Oct 2007 01:25:50
Message: <47257cee$1@news.povray.org>
Warp wrote:
> In order to get more accuracy polygons can be subdivided into smaller
> polygons and radiosity lighting calculated at each of the new vertices.
I think that's what I was talking about. But without the
multiple-normals-per-polygon, even. It was a pretty theoretical class.
> This is especially true in real-time rendering using hardware, as polygon
> counts should be minimized, possibly being replaced with more detailed
> textures.
This class was long before anyone was making special-purpose graphics
chips. Heck, this was probably a decade before software jpeg compression
was feasible. :-) I think a pen plotter and a Tektronix tube were
cutting-edge graphics.
--
Darren New / San Diego, CA, USA (PST)
Remember the good old days, when we
used to complain about cryptography
being export-restricted?
From: Orchid XP v7
Subject: Re: This is another "free" unbiased engine: Indigo Render
Date: 29 Oct 2007 13:24:03
Message: <47262543$1@news.povray.org>
Tom York wrote:
> No, being a biased method it definitely isn't guaranteed to (even ignoring
> limits on quality settings that others have mentioned).
Well, not being a true simulation of quantum-dynamical behaviour (such
as wave-particle duality and superposition), an unbiased renderer isn't
guaranteed to produce scientifically correct images either. But who
cares? Computer graphics is all about finding something close enough
without actually simulating the entire Real World. ;-)
>> Your point?
>
> The nice thing (or one of the nice things) about the unbiased
> methods is that you can wind them up and let them go and the quality will
> definitely increase over time - minimising tweaking/re-rendering to avoid
> stubborn artefacts.
That does indeed sound nice.
There's probably a way to add this to POV-Ray without drastically
altering the algorithm though. ;-)