Micha Riser <mri### [at] gmxnet> wrote:
> That's not true. I had scenes with max_trace_level 1000 which also reached
> this level.
The problem is that other scenes cause a crash if they reach that
max_trace_level. The amount of stack space needed for each recursion
varies from scene to scene.
I think that even the current limit of 256 causes a stack overflow crash
if proper conditions are met. This happens with very few scenes, though.
--
#macro N(D)#if(D>99)cylinder{M()#local D=div(D,104);M().5,2pigment{rgb M()}}
N(D)#end#end#macro M()<mod(D,13)-6mod(div(D,13)8)-3,10>#end blob{
N(11117333955)N(4254934330)N(3900569407)N(7382340)N(3358)N(970)}// - Warp -
> I think that even the current limit of 256 causes a stack overflow crash
> if proper conditions are met. This happens with very few scenes, though.
This could lead to a little contest ;-))
Thorsten Froehlich wrote:
> In article <3d60a321@news.povray.org> , Micha Riser <mri### [at] gmxnet>
> wrote:
>
> Of course it depends on the scene to some extent. Yes, you can create
> really simple scenes that will not crash, but that does not mean that
> adding a few semitransparent objects with reflection and refraction won't
> get you there.
> POV-Ray cannot know in advance which objects you will have, in what order,
> in your scene, so it cannot tell whether the scene will work or not. It
> would have to know the scene before it has read it to gather such
> information, which would still be complex to do even if this could be
> avoided by placing max_trace_level at the end of the scene.
But this does not speak for a fixed limit on max_trace_level. You don't
limit the number of objects either, even though too many objects will leave
you without enough memory.
> BTW, by far most scenes do not have enough perfectly reflecting objects
> to even reach a max_trace_level of 256 before reaching adc_bailout.
I agree with that, but since completely transparent containers are used for
media, and complete transparency may also be used for stacked clouds or the
like, I can see that this could potentially become a limit.
> Nobody said you cannot change it, or did I miss something? If you think
> your program configuration (it has absolutely nothing to do with the OS,
> btw) will handle it, just define MAX_TRACE_LEVEL_LIMIT to something bigger
> in config.h and compile POV-Ray yourself. No big deal.
Of course one can change it (now that the source is available). But the fact
that something is changeable does not mean the default doesn't have to be
reasonable.
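For reference, the change Thorsten describes is a one-line edit; a sketch only, since the macro name MAX_TRACE_LEVEL_LIMIT comes from his post while the value here is arbitrary:

```c
/* config.h -- raise the compile-time ceiling on max_trace_level.
   The stock build uses 256; 2048 is just an example value. */
#undef  MAX_TRACE_LEVEL_LIMIT
#define MAX_TRACE_LEVEL_LIMIT 2048
```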
IMO it *is* a matter of OS. I cannot see any way to get my Linux into
trouble with the stack size other than explicitly limiting it or making a
scene that needs a stack larger than my whole memory (which happens at a
max_trace_level of about 500000). I do not see how that is any different
from running out of memory due to too many objects.
- Micha
--
http://objects.povworld.org - the POV-Ray Objects Collection
In article <3d615ba7@news.povray.org> , Micha Riser <mri### [at] gmxnet> wrote:
> But this does not speak for a fixed limit on max_trace_level. You don't
> limit the number of objects either because with too many objects there will
> be not enough memory.
<snip>
> I do not see how that is any different from
> running out of memory due to too many objects.
Your stack will grow into the heap and then there will most likely be a
crash. This is something completely different from running out of memory
because of too many objects and getting a simple nice error message.
Thorsten
____________________________________________________
Thorsten Froehlich, Duisburg, Germany
e-mail: tho### [at] trfde
Visit POV-Ray on the web: http://mac.povray.org
Thorsten Froehlich wrote:
> In article <3d615ba7@news.povray.org> , Micha Riser <mri### [at] gmxnet>
> wrote:
>
> Your stack will grow into the heap and then there will most likely be a
> crash. This is something completely different from running out of memory
> because of too many objects and getting a simple nice error message.
Yes, it crashes, but only when out of memory. So the program is terminated
by the same condition as with too many objects.
--
http://objects.povworld.org - the POV-Ray Objects Collection
In article <3d615f26@news.povray.org> , Micha Riser <mri### [at] gmxnet> wrote:
>> Your stack will grow into the heap and then there will most likely be a
>> crash. This is something completely different from running out of memory
>> because of too many objects and getting a simple nice error message.
>
> Yes, it crashes, but only when out of memory. So the program is terminated
> by the same condition as with too many objects.
Sorry, I don't understand what you are saying.
Thorsten
____________________________________________________
Thorsten Froehlich, Duisburg, Germany
e-mail: tho### [at] trfde
Visit POV-Ray on the web: http://mac.povray.org
"Micha Riser" <mri### [at] gmxnet> wrote in message news:3d615ba7@news.povray.org...
>
> I agree with that, but since completely transparent containers are used for
> media, and complete transparency may also be used for stacked clouds or the
> like, I can see that this could potentially become a limit.
>
Just out of curiosity, do completely transparent containers (w/o ior, etc) use
max_trace? If so, why?
I just wanted to point out that the difference between a stack overflow
and a memory allocation failure does not seem that big to me.
But anyway, it is not an important issue. I will just remember to raise
the trace-level limit the next time I recompile POV :).
- Micha
--
http://objects.povworld.org - the POV-Ray Objects Collection
In article <3d62164d@news.povray.org> , Micha Riser <mri### [at] gmxnet> wrote:
> I just wanted to point out that the difference between a stack overflow
> and a memory allocation failure does not seem that big to me.
Well, that then depends on the runtime library and OS. A good memory
allocator checks whether the heap is coming too close to the stack and
fails (easy, assuming one knows where the application stack pointer is). But
apparently that is not the case everywhere :-(
Thorsten
____________________________________________________
Thorsten Froehlich, Duisburg, Germany
e-mail: tho### [at] trfde
Visit POV-Ray on the web: http://mac.povray.org
AFAIK the stack overlapping with the heap is a problem which happens only
in Windows (and perhaps MacOS?).
If I'm not completely mistaken, in most Unix systems the stack of a program
starts from the maximum memory address which the system supports (which in
32-bit systems would be at least 2 gigabytes, if not 4). Since the heap starts
from a very low memory address (typically some tens of kilobytes), it's highly
unlikely that they will ever overlap (because usually you run out of memory
before this happens).
--
#macro M(A,N,D,L)plane{-z,-9pigment{mandel L*9translate N color_map{[0rgb x]
[1rgb 9]}scale<D,D*3D>*1e3}rotate y*A*8}#end M(-3<1.206434.28623>70,7)M(
-1<.7438.1795>1,20)M(1<.77595.13699>30,20)M(3<.75923.07145>80,99)// - Warp -