From: Warp
Subject: Re: This is another "free" unbiased engine: Indigo Render
Date: 27 Oct 2007 19:31:11
Message: <4723ca3f@news.povray.org>
Darren New <dne### [at] sanrrcom> wrote:
> My memory of the algorithm is that you could build it in a way that let
> the areas with lots of detail have finer resolution than the areas with
> less detail, and still get all the benefits of the "real" radiosity
> algorithm. Am I mistaken here?
  I suppose that if you are calculating the lightmaps into something other
than bitmaps you could do adaptive supersampling (i.e. if two adjacent samples
differ too much, take an additional sample in between). I also suppose that
if you do that you lose the efficiency of having lightmaps as bitmaps...
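  In sketch form (all names below are made up, not from POV-Ray or any other
actual renderer), that adaptive scheme would look something like this:

#include <cmath>
#include <cstdio>

// sample() is a stand-in for evaluating the lighting at position x.
static double sample(double x) { return x * x; }

// If two adjacent samples differ too much, take an additional sample in
// between and recurse on both halves; otherwise just average the endpoints.
static double refine(double x0, double v0, double x1, double v1,
                     double threshold, int depth)
{
    if (depth == 0 || std::fabs(v1 - v0) < threshold)
        return 0.5 * (v0 + v1);
    double xm = 0.5 * (x0 + x1);
    double vm = sample(xm);                   // the extra in-between sample
    return 0.5 * (refine(x0, v0, xm, vm, threshold, depth - 1) +
                  refine(xm, vm, x1, v1, threshold, depth - 1));
}

int main()
{
    // Average over [0,1], refining only where neighbouring samples disagree.
    std::printf("%f\n", refine(0.0, sample(0.0), 1.0, sample(1.0), 0.01, 8));
}

Note that the refined samples no longer lie on a regular grid, which is
exactly why the result stops being storable as a plain bitmap.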
--
- Warp
From: Warp
Subject: Re: This is another "free" unbiased engine: Indigo Render
Date: 27 Oct 2007 19:32:37
Message: <4723ca94@news.povray.org>
Gilles Tran <gitran_nospam_@wanadoo.fr> wrote:
> You still won't get specular highlights. Unless of course you give every
> material the physically correct blurred reflection necessary to obtain
> specularity. Good luck with that.
  Combining the blurred reflection trick with 'exponent' should make it at
least partially possible. Why do you need luck with that?
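  The relation being leaned on: a Phong-style 'exponent' defines a glossy
lobe around the mirror direction, and a specular highlight is just that lobe
evaluated toward a light source. A minimal sketch, with every name below made
up rather than taken from POV-Ray's internals:

#include <cmath>
#include <cstdio>

struct Vec3 { double x, y, z; };

static double dot(Vec3 a, Vec3 b) { return a.x*b.x + a.y*b.y + a.z*b.z; }

// Mirror the incoming view direction about the unit surface normal.
static Vec3 reflect(Vec3 in, Vec3 n)
{
    double d = 2.0 * dot(in, n);
    return { in.x - d*n.x, in.y - d*n.y, in.z - d*n.z };
}

// Larger exponent = tighter lobe = sharper, smaller highlight.
static double highlight(Vec3 view, Vec3 normal, Vec3 to_light, double exponent)
{
    double c = dot(reflect(view, normal), to_light);
    return c > 0.0 ? std::pow(c, exponent) : 0.0;
}

int main()
{
    Vec3 view{0, 0, -1}, normal{0, 0, 1}, to_light{0, 0.6, 0.8};
    std::printf("%f\n", highlight(view, normal, to_light, 60.0));
}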
--
- Warp
Because Indigo has (ONLY) been tested on Windows XP(R) and Windows 2000(R)
I just want to say, give me another SDL!
****** SDL Simple DirectMedia Layer
****** SDL Specification and Description Language (CCITT)
****** SDL Space Dynamics Laboratory
****** SDL Specification and Design Language
****** SDL SAW (Surface Acoustic Wave) Delay Line
****** SDL Security Development Lifecycle
****** SDL Self Directed Learning
***** SDL Service Description Language
***** SDL Systems Development Laboratory (JPL)
**** SDL System Description Language
**** SDL Software Development Library
**** SDL Schematic Driven Layout
**** SDL System Design Language
**** SDL Storage Definition Language
**** SDL System Design Laboratory
**** SDL Sogosogo Duavata Ni Lewenivanua (United Fiji Party, Fiji)
**** SDL Software Development Laboratory
**** SDL Serial Data Link
**** SDL Search Digital Libraries
**** SDL Standard Distribution List
**** SDL Structure Description Language
**** SDL Sundsvall, Sweden - Sundsvall (Airport Code)
**** SDL Soft Defect Localization (scanning laser microscopy methodology)
**** SDL Secure Domain Logon
**** SDL Superimposed Dead Load
*** SDL Signaling Data Link
*** SDL Sample Detection Limit
*** SDL Surface Data Logging
*** SDL Structured Design Language
*** SDL Satellite Data Link
*** SDL Scouts du Liban (Lebanon)
*** SDL Stammdienststelle der Luftwaffe (German)
*** SDL State Designated Level
*** SDL Sensor Data Link
*** SDL Switched Delay Line (fiber optics)
*** SDL Synchronous Delay Line
*** SDL Screen Definition Language
*** SDL Service Delivery Lead
* SDL Shared Distribution List
* SDL Scottsdale, Arizona - Municipal (airport code)
* SDL Supplementary Defect List
* SDL Smart Data Loopback (Hekimian)
* SDL Solution Demonstration Laboratory
* SDL Standard Direct Layer (low level computer graphics)
* SDL Subcontractor Data List
* SDL Supplier Document List
* SDL Solution Defeating Lunacy
* SDL Stomach Damaging Lecture
* SDL Spirit Destroying Life
* SDL Spirit Destroying Location
* SDL Spirit Destroying Linkage
* SDL So Damn Lucky
liquidation
locomotion
lullaby
lipectomy
lesson
liberation
lost_cause
lock
lining
lechery
lamination
From: Tom York
Subject: Re: This is another "free" unbiased engine: Indigo Render
Orchid XP v7 <voi### [at] devnull> wrote:
> Tom York wrote:
> > What unbiased methods give you is certainty. If you leave them long enough
> > they *will* approach the true solution.
>
> So will POV-Ray's radiosity system, if you turn the settings up high
> enough. (And wait a damn long time...)
No, being a biased method it definitely isn't guaranteed to (even ignoring
limits on quality settings that others have mentioned). The true solution isn't
the only solution that looks good, of course, so whether or not this is a
problem depends upon the scene; but the main point is that you can put in
additional rendering time using POV's radiosity without necessarily improving
the image in the way you want (some artefacts may never disappear).
> Your point?
My point in the previous message was the rest of that paragraph, the part you
didn't quote. The nice thing (or one of the nice things) about the unbiased
methods is that you can wind them up and let them go and the quality will
definitely increase over time - minimising tweaking/re-rendering to avoid
stubborn artefacts.
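As a toy illustration of that property (nothing renderer-specific here): the
running average of an unbiased Monte Carlo estimator keeps converging, with
the standard error shrinking like 1/sqrt(N), so extra render time always buys
extra quality.

#include <cstdio>
#include <random>

int main()
{
    std::mt19937 rng(42);
    std::uniform_real_distribution<double> u(0.0, 1.0);
    double sum = 0.0;
    for (long n = 1; n <= 1000000; ++n) {
        double x = u(rng);
        sum += x * x;    // integrand x^2; its true integral over [0,1] is 1/3
        if (n == 100 || n == 10000 || n == 1000000)
            std::printf("N=%7ld  estimate=%.6f  (true value 0.333333)\n",
                        n, sum / n);
    }
}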
Tom
From: Darren New
Subject: Re: This is another "free" unbiased engine: Indigo Render
Date: 27 Oct 2007 20:59:23
Message: <4723deeb$1@news.povray.org>
Warp wrote:
> I suppose that if you are calculating the lightmaps into something else
> than bitmaps you could do adaptive supersampling
Hmmm. Unless you're talking about how to turn a bunch of radiosity chips
in 3D into a bitmap of the 3D structure as seen from a particular point
in space, I am confused. Either you're talking about something other
than what I learned, or my memory of what I learned doesn't match what
the "radiosity" algorithm really does.
--
Darren New / San Diego, CA, USA (PST)
Remember the good old days, when we
used to complain about cryptography
being export-restricted?
From: John VanSickle
Subject: Re: This is another "free" unbiased engine: Indigo Render
Date: 27 Oct 2007 21:04:19
Message: <4723e013@news.povray.org>
Darren New wrote:
> Color me unimpressed. Maybe it's because I'm not an expert, but some of
> the sub-surface scattering stuff is the only stuff that looks
> particularly good to me. Balanced against most of their proud gallery
> being obnoxiously grainy, I don't see it as a win just from the photos.
>
> Is it possible to automatically know when a scene is good enough? Or
> does it take human intervention to say "ok, stop now and move on to the
> next frame"?
For animations this is a show-stopper. Picture quality *must* be
consistent from frame to frame, and that rules out any perceptible
degree of graininess. Letting the unbiased renderers go until the grain
is gone is not practical, because that requires a human to monitor the
render, and requires that human to decide consistently from one frame to
the next. The only way an unbiased renderer could be used in animation
work is to let it render the first frame of every shot, decide on an
acceptable quality level, and then allow that much time for each frame,
and hope that the movement of some object or the camera doesn't increase
the time requirement significantly.
(And if you want grain for some reason, other renderers, and post-processors
too, can supply it in a way that is much easier to control.)
Ray-tracing and z-buffering deliver consistency from frame to frame,
which is why animators use those rendering algorithms. Pixar's renderer
uses a z-buffering architecture, combined with ray-tracing for certain
situations; in their docs they say that the only real drawback to
ray-tracing is the requirement that the entire scene be containable in
memory (which for Pixar's work is a show-stopper; their scenes can use
insane amounts of data). To this I'd add that z-buffering handles
displacement mapping much more efficiently than ray-tracing does.
Regards,
John
From: Tom York
Subject: Re: This is another "free" unbiased engine: Indigo Render
John VanSickle <evi### [at] hotmailcom> wrote:
> For animations this is a show-stopper. Picture quality *must* be
> consistent from frame to frame, and that rules out any perceptible
> degree of graininess.
I think you can have Maxwell (at least, don't know about the others) cut off
when a selected noise level is reached. Animations are possible, but I would
think the main reason against them would be the crippling render times. When
one frame can take hours to render, it's really not practical:
http://www.maxwellrender.com/img/gallery/videos/promotional/whentheyfall.mov
The noise seems to be at a consistent level in that.
> in their docs they say that the only real drawback to
> ray-tracing is the requirement that the entire scene be containable in
> memory
In RAM, they mean? If so, I don't think that can be correct (or up to date);
look for Ingo Wald's work on out-of-core (and realtime) raytracing. I think
they must have some use for raytracing or they wouldn't have bothered adding it
to PRMan 11. Apart from OpenRT, which supports this as a matter of course,
there's also
graphics.cs.uni-sb.de/Publications/2004/Boeing_EGSR2004.ppt
which describes the challenges of raytracing a 350-million-triangle model in
realtime (35-70GB of data, apparently).
Tom
From: Nicolas Alvarez
Subject: Re: This is another "free" unbiased engine: Indigo Render
Date: 27 Oct 2007 21:50:53
Message: <4723eafd@news.povray.org>
> Darren New wrote:
>> Color me unimpressed. Maybe it's because I'm not an expert, but some
>> of the sub-surface scattering stuff is the only stuff that looks
>> particularly good to me. Balanced against most of their proud gallery
>> being obnoxiously grainy, I don't see it as a win just from the photos.
>>
>> Is it possible to automatically know when a scene is good enough? Or
>> does it take human intervention to say "ok, stop now and move on to
>> the next frame"?
>
> For animations this is a show-stopper. Picture quality *must* be
> consistent from frame to frame, and that rules out any perceptible
> degree of graininess. Letting the unbiased renderers go until the grain
> is gone is not practical, because that requires a human to monitor the
> render, and requires that human to decide consistently from one frame to
> the next. The only way an unbiased renderer could be used in animation
> work is to let it render the first frame of every shot, decide on an
> acceptable quality level, and then allow that much time for each frame,
> and hope that the movement of some object or the camera doesn't increase
> the time requirement significantly.
>
You can always process more later. I played with an open-source
forward raytracer (not sure if it's really unbiased) where you could
save data to a file, and at any moment reload it and render some more.
You could also start up instances on different computers, let them run
for a few hours/days, and then merge the results.
So it would be possible to render up to a certain quality level for all
frames, then (in some automated way) reload each frame, render a few
more passes on them, and save them back. Or, render only one pass on
each frame, so that you get the whole animation done pretty fast (even
though it would be EXTREMELY grainy), and then repeatedly render single
passes on all frames. That way all frames would slowly get better at the
same rate.
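In sketch form, assuming a buffer layout invented for the example (a per-pixel
sample sum plus a sample count), the merge step is just addition:

#include <cstddef>
#include <vector>

// A partial render of one frame: a running sum of sample values per pixel,
// plus how many samples have been taken. (Hypothetical format, not from any
// particular renderer.)
struct PassBuffer {
    std::vector<double> sum;   // accumulated per-pixel radiance
    long samples = 0;          // samples accumulated per pixel so far
};

// Combine two partial renders of the same frame - a resumed run, or runs
// from different machines. The displayable image is sum[i] / samples for
// each pixel i.
void merge(PassBuffer& a, const PassBuffer& b)
{
    for (std::size_t i = 0; i < a.sum.size(); ++i)
        a.sum[i] += b.sum[i];
    a.samples += b.samples;
}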
From: Warp
Subject: Re: This is another "free" unbiased engine: Indigo Render
Date: 27 Oct 2007 23:08:45
Message: <4723fd3d@news.povray.org>
Darren New <dne### [at] sanrrcom> wrote:
> Warp wrote:
> > I suppose that if you are calculating the lightmaps into something else
> > than bitmaps you could do adaptive supersampling
> Hmmm. Unless you're talking about how to turn a bunch of radiosity chips
> in 3D into a bitmap of the 3D structure as seen from a particular point
> in space, I am confused. Either you're talking about something other
> than what I learned, or my memory of what I learned doesn't match what
> the "radiosity" algorithm really does.
When you wrote
"you could build it in a way that let the areas with lots of detail have
finer resolution than the areas with less detail"
it seemed clear to me that you were talking about the lightmap resolution.
Were you talking about something else?
  The basic radiosity algorithm calculates the illumination of surfaces
into lightmaps (which can then be applied to those surfaces).
  A lightmap is essentially just an image map, but instead of telling the
color of the surface at a given point (which is what a texture map does),
it tells the lighting (i.e. brightness and coloration) of the surface at
that point. A rendering engine modulates the texture map with the light map
in order to get the final surface color.
Radiosity is an algorithm for calculating such lightmaps. For each pixel
in the lightmap, the "camera" is put onto the surface of the object
corresponding to that lightmap pixel, facing outwards, and the half-world
seen from that point of view is averaged into that lightmap pixel. This is
done multiple times in order to get diffuse inter-reflection of light
between surfaces.
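  In sketch form (every name below is invented for the example, and
renderHemisphere() is a stub standing in for the actual hemicube render):

#include <vector>

struct Color { double r = 0, g = 0, b = 0; };
struct Vec3  { double x = 0, y = 0, z = 0; };

// Stand-in for rendering the half-world visible from 'point' along 'normal'
// (e.g. with a hemicube on the graphics hardware) and averaging it down.
static Color renderHemisphere(Vec3 point, Vec3 normal)
{
    (void)point; (void)normal;
    return Color{};             // stub: a real engine would render here
}

struct Lightmap {
    int width = 0, height = 0;
    std::vector<Color> texels;      // the lighting values being computed
    std::vector<Vec3>  positions;   // world-space point of each texel
    std::vector<Vec3>  normals;     // outward surface normal of each texel
};

// One gathering pass: each texel gets the average of what a camera sitting
// on the surface at that texel, facing outward, would see. Repeating the
// pass carries the light one diffuse bounce further each time.
static void gatherPass(Lightmap& lm)
{
    for (int i = 0; i < lm.width * lm.height; ++i)
        lm.texels[i] = renderHemisphere(lm.positions[i], lm.normals[i]);
}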
The great thing about radiosity is that calculating the lightmaps can
be done with 3D hardware, making it quite fast (although still not
real-time).
--
- Warp
From: Warp
Subject: Re: This is another "free" unbiased engine: Indigo Render
Date: 27 Oct 2007 23:16:36
Message: <4723ff14@news.povray.org>
John VanSickle <evi### [at] hotmailcom> wrote:
> The only way an unbiased renderer could be used in animation
> work is to let it render the first frame of every shot, decide on an
> acceptable quality level, and then allow that much time for each frame,
> and hope that the movement of some object or the camera doesn't increase
> the time requirement significantly.
  Even if the quality level is consistent across frames, if there's *any*
graininess visible at all, it will probably flicker randomly from frame to
frame, which is not very pleasant.
I have been thinking about one thing when I look at some of the example
images made by those renderers, especially the ones which show a car.
If I understood correctly, it takes the renderer quite a humongous amount
of time to render such a picture (I think someone mentioned 12 hours
somewhere?).
  Many of the car pictures look like you could create an almost identical
picture with POV-Ray (at least with POV-Ray 3.7, thanks to its HDRI support)
and have it render in far less than an hour, probably in less than half
an hour.
It just feels that sometimes using "less accurate" rendering methods
which nevertheless produce a completely acceptable image is more feasible
than using 12+ hours to render a "physically accurate" picture which to
the layman doesn't look any different... :P
--
- Warp