Is it possible that, instead of, or in addition to, the currently
implemented average rendering speed indicator of the windoze GUI, a
current speed indicator (updated every second) could be displayed, as
well as an estimate based on one or both of them (selectable)? With the
current implementation it is nearly impossible to predict how long a
picture will take to render if you haven't done a complete render of
it before (unless you've been raytracing too long :) ). Also, it would
be useful if, when using mosaic preview, an estimate were made after
each pass has finished. Do these ideas seem reasonable? If so,
are they difficult to implement?
Peter
In article <36bb7727.2503144@news.povray.org> , pet### [at] usanet (Peter
Popov) wrote:
> Is it possible that, instead of, or in addition to the currently
> implemented average rendering speed indicator of the windoze GUI, a
> current speed indicator (updated every second) is displayed, as well
> as an estimate based on one or both of them (selectable)? With the
> current implementation it is nearly impossible to predict how long a
> picture will take to render if you haven't done a complete render of
> it before (unless you've been raytracing too long :) ). Also, it would
> be useful if, when using mosaic preview, an estimate is made after
> each pass has been finished. Do these ideas seem reasonable? If so,
> are they difficult to implement?
For the mosaic preview part that might be possible (and help to do the
general prediction). Any other prediction of the render time based on the
lines rendered so far (the only "easy" way I can think of) is not very
useful for most scenes. For example, you might have just a sky in the upper
half of the image, which renders fast, and then in the lower part you might
have a few hundred glass spheres; with a high max_trace_level and
anti-aliasing, the first line that intersects a few of the spheres might take
a few times as long as the whole image took up to the middle...
I usually use a very small preview image (like 32 * 24 for a 640 * 480
image) with the _same_ options I will use for the final image and multiply
the time it took for the small one to estimate the final render. Most of the
time an additional factor of 1.05 to 1.1 is needed to account for background
system tasks etc.
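Thorsten's rule of thumb could be sketched roughly like this (a minimal illustration; the function name and the 1.08 default overhead are my assumptions, not anything POV-Ray provides):

```python
# Sketch of the preview-based estimate described above (illustrative only;
# the function name and default overhead factor are assumptions).

def estimate_render_time(preview_seconds, preview_px, final_px, overhead=1.08):
    """Scale a small preview render's time up to the final resolution,
    with a factor of ~1.05-1.1 for background system tasks."""
    scale = final_px / preview_px  # e.g. (640*480) / (32*24) = 400
    return preview_seconds * scale * overhead

# A 32x24 preview that took 2 seconds suggests roughly 864 s at 640x480:
print(estimate_render_time(2.0, 32 * 24, 640 * 480))
```

The key assumption is that per-pixel cost is roughly constant across resolutions, which is exactly what the anti-aliasing discussion below calls into question.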
Thorsten
On Thu, 04 Feb 1999 23:14:48 -0600, "Thorsten Froehlich"
<fro### [at] charliecnsiitedu> wrote:
>For the mosaic preview part that might be possible (and help to do the
>general prediction). Any other prediction of the render time based on the
>lines rendered so far (the only "easy" way I can think of) is not very
>useful for most scenes. For example you might have just a sky in the upper
>half of the image which renders fast, and then in the lower part you might
>have a few hundred glass spheres; with a high max_trace_level and
>anti-aliasing, the first line that intersects a few of the spheres might take
>a few times what the whole image took up to the middle...
I know, that's why I am suggesting that an estimate be made based on
both speed reports. The mosaic preview will be the most accurate, but
then again, even a faked estimate is better than none at all, right?
>I usually use a very small preview image (like 32 * 24 for a 640 * 480
>image) with the _same_ options I will use for the final image and multiply
>the time it took for the small one to estimate the final render. Most of the
>time an additional factor of 1.05 to 1.1 is needed to account for background
>system tasks etc.
Antialiasing takes longer on smaller images if you account for
samples/pixel. At least that's what my practice shows. I usually
render a 160x120 test with no AA, multiply the render time accordingly,
multiply it by [1+AA/3] and add the parsing time. It usually works pretty
well, but why should *I* bother if the box can do it for me?
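Peter's formula might be sketched as follows (hypothetical names; the bracketed [1+AA/3] term from the post is treated as a scene-dependent correction, with AA as a value the user picks):

```python
# Sketch of the rule of thumb above (names are illustrative, not POV-Ray's):
def estimate_total_time(parse_s, test_render_s, test_px, final_px, aa):
    """Scale a 160x120 no-AA test render to the final size, apply the
    [1 + AA/3] anti-aliasing correction, and add the parse time."""
    return parse_s + test_render_s * (final_px / test_px) * (1 + aa / 3)

# A 30 s test at 160x120, 10 s parse, 640x480 final render, AA value 2:
print(estimate_total_time(10.0, 30.0, 160 * 120, 640 * 480, 2))
```

With those numbers the estimate comes out near 810 seconds: the 16x pixel count scales the 30 s test to 480 s, the AA term multiplies that by 5/3, and the parse time is added once.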
Come to think of it, can antialiasing be done *after* the image has
been traced? I.e., scan the pic and apply AA where needed. This has
several advantages over the current implementation:
1) instead of the up-and-left-only AA check, all adjacent pixels will
be checked
2) one can see if AA is needed at all. Then one can do a pass with a
large threshold (0.5, e.g.) and lower it until the result is
satisfactory. If parse time is much lower than render time, this is a
pretty time-saving approach
3) dual processors or two computers can easily be made to work in
tandem with the field interlace option (two POV instances in the first
case); then one of them will only do the AA pass.
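Idea (1) above could be sketched like this: a hypothetical post-trace detection pass, not existing POV-Ray code. The 0.3 default mirrors POV-Ray's documented r+g+b threshold; the function name and image representation are my assumptions:

```python
# Hypothetical post-trace AA detection pass: flag a pixel if its colour
# differs from ANY of its 8 neighbours by more than `threshold` summed
# over r+g+b (0.3 mirrors POV-Ray's documented default).
def needs_aa(image, x, y, threshold=0.3):
    h, w = len(image), len(image[0])
    r, g, b = image[y][x]
    for dy in (-1, 0, 1):
        for dx in (-1, 0, 1):
            if dx == 0 and dy == 0:
                continue
            nx, ny = x + dx, y + dy
            if 0 <= nx < w and 0 <= ny < h:
                nr, ng, nb = image[ny][nx]
                if abs(r - nr) + abs(g - ng) + abs(b - nb) > threshold:
                    return True
    return False

# A hard black/white edge is flagged; a flat region is not:
edge = [[(0.0, 0.0, 0.0), (1.0, 1.0, 1.0)]]
print(needs_aa(edge, 0, 0))   # True
flat = [[(0.5, 0.5, 0.5), (0.5, 0.5, 0.5)]]
print(needs_aa(flat, 0, 0))   # False
```

Because the whole image exists before the check runs, the pass can look in all eight directions, which the scanline-order check discussed in the thread cannot.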
> Thorsten
Regards,
Peter
In article <36bd807b.4891785@news.povray.org> , pet### [at] usanet (Peter
Popov) wrote:
> I know, that's why I am suggesting an estimate to be made based on
> both speed reports. The mosaic preview will be the most accurate, but
> then again, even a faked estimate is better than none at all, right?
Well, for experienced users that might be the case, but I got horrified
thinking of all the new users complaining that POV-Ray is slower than it
says, asking what is wrong, or even filing bug reports over it...
> Antialiasing takes longer on smaller images if you account for
> samples/pixel. At least that's what my practice shows. I usually
Yes, that is possible. For my typical scenes (only sometimes rendered using
Windows, most of the time on the Mac, but that should make little difference
for the estimate) it comes out well for me that way. (Maybe I should add
that I don't run POV-Ray with full priority (I use the defaults) to be able
to use the computer during very long renders (e.g. read e-mail)).
> render a 160x120 no AA, multiply the render time accordingly, multiply
> it by [1+AA/3] and add it to the parsing time. Usually works pretty
> fine, but why should *I* bother if the box can do it for me?
Interesting formula, I will try it for my next full priority render.
> Come to think of it, can antialiasing be made *after* the image has
> been traced? I.e., scan the pic and apply aa where needed. This has
It is possible, but all first run pixel values would need temporary
storage...
> several advantages over the current implementation:
> 1) instead of the up-and-left-only aa check all adjacent pixels will
> be checked
Hmm, I don't understand what you mean by "up-and-left-only aa check".
> 2) one can see if aa is needed at all. Then one can do a pass with a
> large threshold (0.5 for eg.) and lower it until the result is
> satisfactory. If parse time is much lower than render time, this is a
> pretty time-saving approach
Are you sure you know how the adaptive anti-alias in POV-Ray works? This
sounds like you want to do it the same way :-) Maybe I misunderstand
you!?!
> 3) dual processors or two computers can be easily made to work in
> tandem with the field interlace option (two pov instances in the first
> case), then one of them will only do the aa pass.
This would be possible without the AA option as well...
Thorsten
On Sat, 06 Feb 1999 08:52:17 -0600, "Thorsten Froehlich"
<fro### [at] charliecnsiitedu> wrote:
>In article <36bd807b.4891785@news.povray.org> , pet### [at] usanet (Peter
>Popov) wrote:
>> I know, that's why I am suggesting an estimate to be made based on
>> both speed reports. The mosaic preview will be the most accurate, but
>> then again, even a faked estimate is better than none at all, right?
>
>Well, for experienced users that might be the case, but I got horrified when
>thinking of all the new users posting bug reports saying that POV-Ray is
>slower than it says, and then they ask what is wrong or even make bug
>reports out of it...
Heh heh, I guess you are right, but then again, using the current and
average-up-to-now speeds means that the "estimate" is updated in real
time... which renders it kind of senseless. OTOH, the more accurate
mosaic-preview-based one might turn out to be a good idea.
<snip!>
>> Come to think of it, can antialiasing be made *after* the image has
>> been traced? I.e., scan the pic and apply aa where needed. This has
>
>It is possible, but all first run pixel values would need temporary
>storage...
Why? There's file output for this purpose, right? That's the main
idea. D/l a pic, it's edgy, d/l the source and do an anti-alias render
with it. It should take several percent of the total render time.
Render at higher resolution? OK, but why recalculate already rendered
pixels? Use a 320x240 input file for a 640x480 render and you save 1
in every 4 pixels. Cool, isn't it?
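The pixel-reuse arithmetic can be checked with a small sketch (hypothetical; POV-Ray has no such feature, and the function name is mine):

```python
# Hypothetical sketch of reusing a half-resolution render: every pixel of
# the 320x240 input maps onto one pixel of the 640x480 output, so a
# quarter of the final pixels need no retracing.
def reusable_pixels(small_w, small_h, big_w, big_h):
    """Coordinates in the final image that can be copied from the small
    render, assuming the small image samples every (big/small)-th pixel."""
    step_x, step_y = big_w // small_w, big_h // small_h
    return [(x, y) for y in range(0, big_h, step_y)
                   for x in range(0, big_w, step_x)]

saved = len(reusable_pixels(320, 240, 640, 480)) / (640 * 480)
print(saved)  # 0.25, i.e. 1 in every 4 pixels
```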
>> several advantages over the current implementation:
>> 1) instead of the up-and-left-only aa check all adjacent pixels will
>> be checked
>
>Hmm, I don't understand what you mean by "up-and-left-only aa check".
Last time I thoroughly read the docs on non-adaptive anti-aliasing,
they said that if a pixel differs from its left and/or upper neighbour
by more than a certain amount of r+g+b (0.3 by default), both are
anti-aliased. This makes sense, since the ones to the right and below
have not yet been rendered at the time of this check.
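That reading of the docs might look roughly like this in scanline order (a sketch of the described behaviour, not POV-Ray's actual source; function names are mine):

```python
# Sketch of the documented non-adaptive check: in scanline order, only the
# left and upper neighbours exist yet, so only those are compared. A pixel
# pair differing by more than `threshold` in summed r+g+b is flagged.
def differs(c1, c2, threshold=0.3):
    return sum(abs(a - b) for a, b in zip(c1, c2)) > threshold

def flag_pixel(row_above, current_row, x, threshold=0.3):
    c = current_row[x]
    if x > 0 and differs(c, current_row[x - 1], threshold):
        return True
    if row_above is not None and differs(c, row_above[x], threshold):
        return True
    return False

print(flag_pixel([(0.0, 0.0, 0.0)], [(1.0, 1.0, 1.0)], 0))                # True
print(flag_pixel(None, [(0.0, 0.0, 0.0), (0.05, 0.05, 0.05)], 1))        # False
```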
>> 2) one can see if aa is needed at all. Then one can do a pass with a
>> large threshold (0.5 for eg.) and lower it until the result is
>> satisfactory. If parse time is much lower than render time, this is a
>> pretty time-saving approach
>
>Are you sure you know how the adaptive anti-alias in POV-Ray works? This
>sounds like you want to do it the same way :-) Maybe I misunderstand
>you!?!
Of course I know :) Tell me this: can you render your favourite, e.g.,
1024x768-35-meg-source-with-media-and-a-zillion-glass-spheres
scene with +am2 +a0.5 and, if after 165.34 hours of rendering during
your Hawaii vacation all you see is pixels and jaggies, re-render it
using +a0.25 +am2 in, say, only 3h? Currently, no. You'll have to
recalculate all that stuff that's already there plus only about 5%
more, which is a real waste of CPU time and neuron cells.
>> 3) dual processors or two computers can be easily made to work in
>> tandem with the field interlace option (two pov instances in the first
>> case), then one of them will only do the aa pass.
>This would be possible without the AA option as well...
Yes but it would be jaggy.
> Thorsten
Regards,
Peter
Peter Popov <pet### [at] usanet> wrote in article
<36bd807b.4891785@news.povray.org>...
> On Thu, 04 Feb 1999 23:14:48 -0600, "Thorsten Froehlich"
> <fro### [at] charliecnsiitedu> wrote:
> On Thu, 04 Feb 1999 23:14:48 -0600, "Thorsten Froehlich"
> <fro### [at] charliecnsiitedu> wrote:
>
> >For the mosaic preview part that might be possible (and help to do the
> >general prediction). Any other prediction of the render time based on the
> >lines rendered so far (the only "easy" way I can think of) is not very
> >useful for most scenes. For example you might have just a sky in the upper
> >half of the image which renders fast, and then in the lower part you might
> >have a few hundred glass spheres; with a high max_trace_level and
> >anti-aliasing, the first line that intersects a few of the spheres might
> >take a few times what the whole image took up to the middle...
>
> I know, that's why I am suggesting an estimate to be made based on
> both speed reports. The mosaic preview will be the most accurate, but
> then again, even a faked estimate is better than none at all, right?
I would have to say NO to that. A faked estimate is far worse than none. It
fools you into thinking you know how long it will take, when you actually
have no idea!