In a few of my recent projects I have found that, while doing a high-quality
render (0-threshold anti-aliasing, hefty radiosity settings, etc.), about
98-99% of the scene renders in a reasonable amount of time, and then it
seems as if no progress is ever made again. The one I am rendering now
completed 98% in about 4 hours, but in the past 15 hours, with 10 patches
remaining to render, it has made no progress whatsoever.
This particular scene has only 2 light sources: a sunlight source as a 3x3 area
light, and one other light source, also 3x3. Most of the scene consists of
isosurfaces, usually some sort of block whose edges and faces have been
perturbed to give them texture. The finishes are all fresnel, and there is a
tiny bit of reflection added in. (A previous scene with this problem completed
98% in less than a day, and then made no progress for another week before I gave
up.)
Lowering the radiosity settings or the anti-aliasing settings allows the
rendering to finish, but at reduced quality.
Is there any way to turn on additional debugging output to see where the render
engine is spending its time? What I'm afraid of is that some interaction
between anti-aliasing, radiosity, and reflection is causing an infinite loop
measuring against thresholds.
Also, can someone verify that the rendering rate in the status bar (for Windows,
at least) is just total pixels rendered / total time? Is there any way to get a
rendering rate over the past N minutes instead? For most scenes, once the
render settles down, the rate gives me a good estimate of the time to complete,
but in these long renders it's not even close.
-- Chris R.
On 2023-05-05 at 11:56, Chris R wrote:
> I have found in a few of my last projects that while doing a high-quality
> render, (0 threshold anti-aliasing, hefty radiosity settings, etc.), that about
> 98-99% of the scene is rendered in a reasonable amount of time, and then it
> seems as if no progress is ever made again. The one I am rendering now
> completed 98% in about 4 hours, but in the past 15 hours, with 10 patches
> remaining to render, it has made no progress whatsoever.
>
> This particular scene only has 2 light sources, a sunlight source as a 3x3 area
> light, and one other light source also 3x3. Most of the scene consists of
> isosurfaces, usually some sort of block whose edges and faces have been
> perturbed to give them texture. The finishes are all fresnel, and there is a
> tiny bit of reflection added in. (A previous scene with this problem completed
> 98% in less than a day, and then made no progress for another week before I gave
> up.)
>
> Lowering the radiosity settings or the anti-aliasing settings allows the
> rendering to finish, but at reduced quality.
>
> Is there any way to turn additional debugging output to see where the render
> engine is spending its time? What I'm afraid of is that some interaction
> between anti-aliasing, radiosity, and reflection is causing an infinite loop
> measuring against thresholds.
>
> Also, can someone verify that the rendering rate in the status bar (for Windows
> at least), is just total pixels rendered/total time? Is there any way to get a
> rendering rate over the past N minutes instead? For most scenes, once the
> render settles down, the rate gives me a good estimate of the time to complete,
> but in these long renders its not even close.
>
> -- Chris R.
>
>
Some ideas; some things to try.
If it's inter-reflection, then lowering max_trace_level could help.
Alternatively, slightly increasing adc_bailout may help.
Can you get by with a reduced recursion level for your radiosity?
Don't use a threshold of zero for the anti-aliasing. Instead, use a small
value, maybe something like 1/256, 1/768, or even 1/1500.
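In INI-file form (or +A0.004 on the command line) that might look like the
following; the exact threshold value is illustrative:

```ini
Antialias=on
Antialias_Threshold=0.004  ; about 1/256; try 0.0013 (~1/768) or 0.00067 (~1/1500)
```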
For your radiosity, try pretrace_end 0.00125 or 0.000625.
You may need to set minimum_reuse to a similar value:
pretrace_end 0.000625
minimum_reuse 0.00062
I've found that using the two-value version of count often helps improve
the quality without increasing the render time. It could allow you to use a
lower count value without sacrificing quality.
count 8000, 100001
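Put together in a scene's global_settings, those suggestions might look like
this (a sketch; the values come from the advice above and will need tuning per
scene):

```pov
global_settings {
  radiosity {
    pretrace_end 0.000625
    minimum_reuse 0.00062
    count 8000, 100001   // two-value form: samples, then size of the direction pool
  }
}
```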
Use the importance feature. Give a lower importance to bigger objects:
#default{radiosity{importance 250/8000}}
For a small object, especially one used as a light source:
... radiosity{importance 1}...
For a big object:
... radiosity{importance 100/8000}...
If you find a pathological object, try duplicating it.
One copy gets the full texture plus the options no_reflection and no_radiosity.
The other copy gets a simplified texture, maybe even a slightly simplified
geometry, and no_image. For an isosurface, that could mean halving the weight
of the surface perturbation, or reducing its frequency.
This can even be done for all objects in the scene.
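A hypothetical sketch of that duplication trick; the function, textures, and
numbers here are placeholders, not from the original scene:

```pov
#include "functions.inc"  // for f_noise3d

#declare Simple_Texture = texture { pigment { rgb <0.7, 0.6, 0.5> } }
#declare Full_Texture   = texture {
  pigment { rgb <0.7, 0.6, 0.5> }
  finish { reflection 0.05 }
}

// Copy 1: full detail and texture, but hidden from reflected and radiosity rays.
isosurface {
  function { y - 0.25*f_noise3d(4*x, 4*y, 4*z) }   // full-strength perturbation
  contained_by { box { <-1, -0.5, -1>, <1, 0.5, 1> } }
  texture { Full_Texture }
  no_reflection
  no_radiosity
}

// Copy 2: cheaper stand-in seen only by those secondary rays, not the camera.
isosurface {
  function { y - 0.125*f_noise3d(4*x, 4*y, 4*z) }  // half the perturbation weight
  contained_by { box { <-1, -0.5, -1>, <1, 0.5, 1> } }
  texture { Simple_Texture }
  no_image
}
```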
"Chris R" <car### [at] comcastnet> wrote:
>
> What I'm afraid of is that some interaction
> between anti-aliasing, radiosity, and reflection is causing an infinite loop
> measuring against thresholds.
>
That's a really hard-core set of features to throw at POV-ray all at once, ha.
Especially with isosurfaces added to the mix. I have similar scenes where I had
to eliminate anti-aliasing altogether, if I didn't want to exclusively tie up my
machine for days...and this on an 8 core/16 thread machine!
My gut-feeling is that the slowdown for your last block is AA-related.
One suggestion would be to reduce the size of your Render_Blocks. The default is
32x32 pixels, I think. I run complex-feature scenes at 8x8, which does improve
the overall rendering speed on multi-core machines, and *helps* to keep that
final render block from taking so much time. As Thorsten F. pointed out in an
earlier post last year, each Render_Block uses an individual core (or thread --
I get the two confused). By reducing the block size, the processor seems to work
more efficiently, and doesn't have to spend all of its time on a single block
(with a single core) while the other cores are sitting idle. Or something like
that!
I may have the technicalities wrong, but a smaller block size can sometimes
speed up the render. It definitely works for media and isosurfaces (but I'm not
patient enough to throw in AA as well; most of my renders lately are just
relatively quick tests of one thing or another.)
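In INI-file form that suggestion is (equivalent to +BS8 on the command line):

```ini
Render_Block_Size=8   ; default is 32
```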
On 2023-05-06 at 07:40, Kenneth wrote:
> "Chris R" <car### [at] comcastnet> wrote:
>>
>> What I'm afraid of is that some interaction
>> between anti-aliasing, radiosity, and reflection is causing an infinite loop
>> measuring against thresholds.
>>
>
> That's a really hard-core set of features to throw at POV-ray all at once, ha.
> Especially with isosurfaces added to the mix. I have similar scenes where I had
> to eliminate anti-aliasing altogether, if I didn't want to exclusively tie up my
> machine for days...and this on an 8 core/16 thread machine!
>
> My gut-feeling is that the slowdown for your last block is AA-related.
>
> One suggestion would be to reduce the size of your Render_Blocks. The default is
> 32X32 pixels, I think. I run complex-feature scenes at 8X8, which does improve
> the overall rendering speed on multi-core machines, and *helps* to keep that
> final render block from taking so much time. As Thorsten F. pointed out in an
> earlier post last year, each Render_Block uses an individual core (or thread-- I
> get the two confused.) By reducing the block size, the processor seems to work
> more efficiently, and doesn't have to spend all of its time on a single block
> (with a single core) while the other cores are sitting idle. Or something like
> that!
>
> I may have the technicalities wrong, but a smaller block size can sometimes
> speed-up the render. It definitely works for media and isosurfaces (but I'm not
> patient enough to throw in AA as well; most of my renders lately are just
> relatively quick tests of one thing or another.)
>
Yes, using smaller render blocks can absolutely help with some slower
renders -- that is, renders that are not particularly fast and don't tend
to have long I/O pipes in use.
In multithreaded computing, the number of cores (or virtual cores) used IS the
thread count. Cores = physical aspect. Threads = logical aspect. POV-Ray uses
one thread per core/virtual core unless +WTn is used.
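For example (file name and resolution here are illustrative):

```shell
# One render thread per logical core by default; +WT4 caps it at 4
povray +Iscene.pov +W1920 +H1080 +WT4
```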
"Kenneth" <kdw### [at] gmailcom> wrote:
> "Chris R" <car### [at] comcastnet> wrote:
> >
> > What I'm afraid of is that some interaction
> > between anti-aliasing, radiosity, and reflection is causing an infinite loop
> > measuring against thresholds.
> >
>
> That's a really hard-core set of features to throw at POV-ray all at once, ha.
> Especially with isosurfaces added to the mix. I have similar scenes where I had
> to eliminate anti-aliasing altogether, if I didn't want to exclusively tie up my
> machine for days...and this on an 8 core/16 thread machine!
>
> My gut-feeling is that the slowdown for your last block is AA-related.
>
> One suggestion would be to reduce the size of your Render_Blocks. The default is
> 32X32 pixels, I think. I run complex-feature scenes at 8X8, which does improve
> the overall rendering speed on multi-core machines, and *helps* to keep that
> final render block from taking so much time. As Thorsten F. pointed out in an
> earlier post last year, each Render_Block uses an individual core (or thread-- I
> get the two confused.) By reducing the block size, the processor seems to work
> more efficiently, and doesn't have to spend all of its time on a single block
> (with a single core) while the other cores are sitting idle. Or something like
> that!
>
> I may have the technicalities wrong, but a smaller block size can sometimes
> speed-up the render. It definitely works for media and isosurfaces (but I'm not
> patient enough to throw in AA as well; most of my renders lately are just
> relatively quick tests of one thing or another.)
Thanks for the suggestion. I had done that in the past, but forgot about it. I
should just make it my default instead, since it would probably help most of my
renders anyway.
I'm going to try some of the other suggestions as well. The latest render
completed 99% in about 2 hours, and then just sat there for 2 days on the last 2
blocks. I'm afraid that without any other adjustments, I'll get to 99.9% using
smaller blocks, but that last 0.1% will then take two days.
-- Chris R.
On 2023-05-08 at 10:04, Chris R wrote:
> "Kenneth" <kdw### [at] gmailcom> wrote:
>> "Chris R" <car### [at] comcastnet> wrote:
>>>
>>> What I'm afraid of is that some interaction
>>> between anti-aliasing, radiosity, and reflection is causing an infinite loop
>>> measuring against thresholds.
>>>
>>
>> That's a really hard-core set of features to throw at POV-ray all at once, ha.
>> Especially with isosurfaces added to the mix. I have similar scenes where I had
>> to eliminate anti-aliasing altogether, if I didn't want to exclusively tie up my
>> machine for days...and this on an 8 core/16 thread machine!
>>
>> My gut-feeling is that the slowdown for your last block is AA-related.
>>
>> One suggestion would be to reduce the size of your Render_Blocks. The default is
>> 32X32 pixels, I think. I run complex-feature scenes at 8X8, which does improve
>> the overall rendering speed on multi-core machines, and *helps* to keep that
>> final render block from taking so much time. As Thorsten F. pointed out in an
>> earlier post last year, each Render_Block uses an individual core (or thread-- I
>> get the two confused.) By reducing the block size, the processor seems to work
>> more efficiently, and doesn't have to spend all of its time on a single block
>> (with a single core) while the other cores are sitting idle. Or something like
>> that!
>>
>> I may have the technicalities wrong, but a smaller block size can sometimes
>> speed-up the render. It definitely works for media and isosurfaces (but I'm not
>> patient enough to throw in AA as well; most of my renders lately are just
>> relatively quick tests of one thing or another.)
>
> Thanks for the suggestion. I had done that in the past, but forgot about it. I
> should just make it my default instead since it would probably help most of my
> renders anyway.
>
> I'm going to try some of the other suggestions as well. The latest render
> completed 99% in about 2 hours, and then just sat there for 2 days on the last 2
> blocks. I'm afraid without any other adjustments, I'll get to 99.9% using
> smaller blocks, but that last 0.1% will then take two days.
>
> -- Chris R.
>
>
Using smaller blocks, I got a render that would get stuck on the last
render block for days to instead finish in only a few hours.
Alain Martel <kua### [at] videotronca> wrote:
> > "Chris R" <car### [at] comcastnet> wrote:
> >>
> >> What I'm afraid of is that some interaction
> >> between anti-aliasing, radiosity, and reflection is causing an infinite loop
> >> measuring against thresholds.
> >>
> >
> > That's a really hard-core set of features to throw at POV-ray all at once, ha.
> > Especially with isosurfaces added to the mix. I have similar scenes where I had
> > to eliminate anti-aliasing altogether, if I didn't want to exclusively tie up my
> > machine for days...and this on an 8 core/16 thread machine!
> >
> > My gut-feeling is that the slowdown for your last block is AA-related.
> >
> > One suggestion would be to reduce the size of your Render_Blocks. The default is
> > 32X32 pixels, I think. I run complex-feature scenes at 8X8, which does improve
> > the overall rendering speed on multi-core machines, and *helps* to keep that
> > final render block from taking so much time. As Thorsten F. pointed out in an
> > earlier post last year, each Render_Block uses an individual core (or thread-- I
> > get the two confused.) By reducing the block size, the processor seems to work
> > more efficiently, and doesn't have to spend all of its time on a single block
> > (with a single core) while the other cores are sitting idle. Or something like
> > that!
> >
> > I may have the technicalities wrong, but a smaller block size can sometimes
> > speed-up the render. It definitely works for media and isosurfaces (but I'm not
> > patient enough to throw in AA as well; most of my renders lately are just
> > relatively quick tests of one thing or another.)
> >
> Yes, using smaller render blocks can absolutely help with some slower
> renders. That is, renders that are not particularly fast and don't tend
> to have long I/O pipes in use.
>
> In multithread computing, the cores, virtual cores, used IS the thread
> count. Cores = physical aspect. Thread = logical aspect. POV-Ray use one
> thread per core/virtual core unless +WTn is used.
I realized later that I had already increased my AA thresholds for the latest
run, to no effect. However, a combination of decreasing the max_trace_level
from 30 to 10 and decreasing the block size from 32 to 8 seems to have
markedly increased the overall speed, as well as eliminating the days-long
rendering of the few problem blocks.
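That combination corresponds to roughly the following (an illustrative sketch,
not the actual scene file):

```pov
// In the scene file:
global_settings { max_trace_level 10 }  // down from 30
// Plus Render_Block_Size=8 (or +BS8) in the INI file / on the command line.
```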
I think my radiosity settings were already looser than you were suggesting, and
I have been using the 2-value counts as recommended by Thomas de Groot a while
back. However, if I start to see issues on other scenes, I'll do some
experiments with the importance feature again.
-- Chris R.
On 5/8/23 10:53, Chris R wrote:
> I realized later that I had already increased my AA thresholds for the latest
> run, to no effect. However, a combination of decreasing the max_trace_level
> from 30 to 10, as well as decreasing the block size from 32 to 8 seems to have
> markedly increased the over all speed, as well as eliminating the days-long
> rendering of the few problem blocks.
What version of POV-Ray are you using?
---
I'll mention a couple things I didn't note others bringing up.
1) If you have any transparency - and IIRC - since v3.7, rays transit
through transparent surfaces with an ior of 1.0 without increasing the
max_trace count. This was done to avoid part of the old problem of black
pixels on hitting max_trace_level.
What I've had happen, very occasionally since, is some smallish number
of rays skimming a 'numerically bumpy' surface. The rays end up
transiting in and out of the shape a large number of times. Alain taught
me the trick of changing the ior to something like 1.0005 so those
transits through transparency count again toward the max_trace_level.
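A sketch of that trick (the pigment is a placeholder; the key line is the ior):

```pov
// Transparent material whose ray transits should count toward max_trace_level:
material {
  texture { pigment { rgbt <1, 1, 1, 0.9> } }
  interior { ior 1.0005 }  // nudged away from exactly 1.0
}
```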
2) With isosurfaces, using 'all_intersections' (really a max of 10) when
you don't need them 'all' can be expensive. I usually start with
max_trace at 2. Sometimes I cheat down to 1, if the object is simple and
opaque and I don't see artefacts.
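A sketch of that starting point (the function here is a placeholder):

```pov
isosurface {
  function { x*x + y*y + z*z - 1 }   // placeholder: a unit sphere
  max_trace 2          // instead of all_intersections; try 1 for simple opaque shapes
  contained_by { sphere { 0, 1.1 } }
  pigment { rgb 0.8 }
}
```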
There is also the quality level. If, of the radiosity, subsurface, and
media features, all you use is radiosity, you can drop from 9 (=10 & =11)
to 8 and check performance without radiosity. Dropping to 7 cuts out
reflection, refraction, and transparency(a). Plus, you can use the start/end
row and start/end column settings to render only the regions running slow, for
performance testing.
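In INI-file form (the region fractions here are illustrative):

```ini
Quality=8            ; radiosity/media/subsurface off; 7 also cuts reflection etc.
Start_Row=0.45       ; values between 0 and 1 are fractions of the image
End_Row=0.55
Start_Column=0.45
End_Column=0.55
```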
Bill P.
(a) - Makes me wonder if a different bucketing of some of those features
relative to the quality level would be useful, given we don't today use
all the values? The internal cost of the conditionals would be similar
(the same?), I think. It would make the quality setting different from
what folks are used to using. Something to think about, I guess.
On 5/8/23 18:09, William F Pokorny wrote:
> (a) - Makes me wonder if different bucketing of some of those features
> relative to the quality level would be useful given we don't today use
> all the values? The internal cost of the conditionals would be similar
> (the same?), I think. It would make the quality setting different than
> what folks are used to using. Something to think about I guess.
While taking a break from Cousin Ricky's interior id assignment crash, I
decided to look at the idea above.
What I found is that Christoph had already worked to extend / clean up
the quality settings in post-v3.7 code. I didn't run down when
exactly he did the work, but it's been in place for years - 2018 or
earlier. Meaning, what we have documented for quality levels and
behavior is not what is in v3.8 beta 2.
For quality levels our v3.8 documentation has:
0, 1 - Just show quick colors. Use full ambient lighting only.
Quick colors are used only at 5 or below.
2, 3 - Show specified diffuse and ambient light.
4 - Render shadows, but no extended lights.
5 - Render shadows, including extended lights.
6, 7 - Compute texture patterns, compute photons
8 - Compute reflected, refracted, and transmitted rays.
9, 10, 11 - Compute media, radiosity and subsurface light transport.
Which, even in v3.7, was not quite right, I know, as 9 was the true highest
quality level. Aside: 'extended lights' above means 'area lights'.
What is in coretypes.h for all recent v3.8 / v4.0 code is:
explicit QualityFlags(int level) :
ambientOnly (level <= 1),
quickColour (level <= 5),
shadows (level >= 4),
areaLights (level >= 5),
refractions (level >= 6),
reflections (level >= 8),
normals (level >= 8),
media (level >= 9),
radiosity (level >= 9),
photons (level >= 9),
subsurface (level >= 9)
{}
So, even today we can pull apart refractions / transparent rays from
reflected rays by run-time flag or ini setting. If we suspect internal
reflections are the cause of long run times, we could set the quality
level to 7 to test the thought.
Disclaimer. The hooks are there in the code, but I've not tested each
v3.8/v4.0 quality level to be sure it works.
For my povr fork play (as an idea for v4.0) I'm thinking for a start
I'll change the bucketing to:
explicit QualityFlags(int level) :
ambientOnly (level <= 1),
quickColour (level <= 5),
shadows (level >= 4),
areaLights (level >= 5),
refractions (level >= 6),
reflections (level >= 7),
normals (level >= 8),
media (level >= 10),
radiosity (level >= 11),
photons (level >= 9),
subsurface (level >= 12)
{}
and make the default quality level 12 instead of 9.
It seems to me that in lumping so many of the most expensive features
together, we lose the debugging capability we could get from the quality
level feature.
For 'quality' you do want to run media, radiosity, photons and
subsurface together. They are entangled and affect each other.
Bill P.
On 2023-05-10 at 16:01, William F Pokorny wrote:
> On 5/8/23 18:09, William F Pokorny wrote:
>> (a) - Makes me wonder if different bucketing of some of those features
>> relative to the quality level would be useful given we don't today use
>> all the values? The internal cost of the conditionals would be similar
>> (the same?), I think. It would make the quality setting different than
>> what folks are used to using. Something to think about I guess.
>
> While taking a break from Cousin Ricky's interior id assignment crash, I
> decided to look at the idea above.
>
> What I found is that Christoph had already worked to extend / clean up
> the quality settings up in post v3.7 code. I didn't run down when
> exactly he did the work, but it's been in place for years - 2018 or
> earlier. Meaning, what we have documented for quality levels and
> behavior is not what is in v3.8 beta 2.
>
> For quality levels our v3.8 documentation has:
>
> 0, 1 - Just show quick colors. Use full ambient lighting only.
> Quick colors are used only at 5 or below.
> 2, 3 - Show specified diffuse and ambient light.
> 4 - Render shadows, but no extended lights.
> 5 - Render shadows, including extended lights.
> 6, 7 - Compute texture patterns, compute photons
> 8 - Compute reflected, refracted, and transmitted rays.
> 9, 10, 11 - Compute media, radiosity and subsurface light transport.
>
> Which even in v3.7 was not quite right I know as 9 was the true highest
> quality level. Aside: 'extended lights' above means 'area lights'.
>
>
> What is in coretypes.h for all recent v3.8 / v4.0 code is:
>
> explicit QualityFlags(int level) :
> ambientOnly (level <= 1),
> quickColour (level <= 5),
> shadows (level >= 4),
> areaLights (level >= 5),
> refractions (level >= 6),
> reflections (level >= 8),
> normals (level >= 8),
> media (level >= 9),
> radiosity (level >= 9),
> photons (level >= 9),
> subsurface (level >= 9)
> {}
>
> So, even today we can pull apart refractions / transparent rays from
> reflected rays by run time flag or ini setting. On suspecting internal
> reflections are the cause of long run times, we could set the quality
> level to 7 to test the thought.
>
> Disclaimer. The hooks are there in the code, but I've not tested each
> v3.8/v4.0 quality level to be sure it works.
>
>
> For my povr fork play (as an idea for v4.0) I'm thinking for a start
> I'll change the bucketing to:
>
> explicit QualityFlags(int level) :
> ambientOnly (level <= 1),
> quickColour (level <= 5),
> shadows (level >= 4),
> areaLights (level >= 5),
> refractions (level >= 6),
> reflections (level >= 7),
> normals (level >= 8),
> media (level >= 10),
> radiosity (level >= 11),
> photons (level >= 9),
> subsurface (level >= 12)
> {}
>
> and make the default quality level 12 instead of 9.
>
> It seems to me in lumping so many of the most expensive features
> together we lose the debugging capability we can get from the quality
> level feature.
>
> For 'quality' you do want to run media, radiosity, photons and
> subsurface together. They are tangled and affect each other.
>
> Bill P.
>
>
>
>
From my experience, when using +q0 to +q3, transparent pigments do show
as transparent, both filter and transmit.
Then, from +q4 to +q7, anything transparent shows as black, for any
pigment and any amount of filter or transmit.