POV-Ray : Newsgroups : povray.beta-test : intersection tests
  intersection tests (Message 13 to 22 of 22)
From: Thorsten Froehlich
Subject: Re: intersection tests
Date: 22 Sep 2007 20:57:25
Message: <46f5b9f5$1@news.povray.org>
Grassblade wrote:
> "Bruno Cabasson" <bru### [at] alcatelaleniaspacefr> wrote:
>> Warp <war### [at] tagpovrayorg> wrote:
>>> Bruno Cabasson <bru### [at] alcatelaleniaspacefr> wrote:
>>>> I am aware of that, of course. But what happens in our scenes? Does every
>>>> pixel concern a different object? Certainly not.
>>>   But how can the program know that without actually testing for it?
>>>
>>> --
>>>                                                           - Warp
>> As a first embryonic try for this idea, it might be enough to try to
>> intersect the previous object that was hit. If it succeeds, time is saved;
>> if it fails, go back to the normal process, and no significant time is
>> lost. I think that statistically (I insist heavily on that word) we can
>> gain something, because most objects occupy an entire area of the scene,
>> more or less, even if there are plenty of them.
>>
>> Bruno.
> 
> I know nothing of how a ray-tracer works internally, so I may be spouting
> nonsense,

No, he is not on to something. Warp already pointed out that his idea makes
no sense.

	Thorsten, POV-Team



From: Daniel Nilsson
Subject: Re: intersection tests
Date: 23 Sep 2007 04:47:56
Message: <46f6283c$1@news.povray.org>
Grassblade wrote:
> 
> Regarding tracing inside the box: if a ray gives a hit on the object, skip
> the next ray in both the up and side directions, and trace the second ray
> instead. If that's a hit too, then assume the intermediate (skipped) ray is
> a hit as well. If it's a miss, go back to the skipped ray and trace it.
> I can already hear purists screaming to the top of their lungs that it's not
> accurate, and I happen to agree, but since it appears we're writing the
> wish-list of Pov 4, I might as well chime in. This method might be useful
> for previews, quick tests, big renders and hopefully for real-time
> raytracing. The final render would take the usual route, although that
> should be up to the user.
> 

IIRC something like what you describe is used by Federation Against 
Nature in their RealStorm engine: http://www.realtimeraytrace.de/.
That demo was cool a couple of years ago, but advances in graphics 
hardware have made it less spectacular. The image quality is not even 
close to what POV-Ray can do.

-- 
Daniel Nilsson



From: Orchid XP v3
Subject: Re: intersection tests
Date: 23 Sep 2007 05:06:32
Message: <46f62c98$1@news.povray.org>
Warp wrote:

>   Testing against the same object in the next pixel and getting a hit
> doesn't tell us anything. There could be another object in front of
> the object at that next pixel, so it would have to be tested anyways.

You are right of course - we still need to do more tests if we get a 
hit. But I'm wondering, does getting a hit allow us in any way to avoid 
testing objects "further away" than the one hit?

(I don't know how POV-Ray's bounding and space partitioning works - 
especially the new octree thing. And obviously POV-Ray supports infinite 
objects and so forth...)

-- 
http://blog.orphi.me.uk/



From: Bruno Cabasson
Subject: Re: intersection tests
Date: 23 Sep 2007 13:20:00
Message: <web.46f69e7facb0bd7983ccb21a0@news.povray.org>
Orchid XP v3 <voi### [at] devnull> wrote:
> Warp wrote:
>
> >   Testing against the same object in the next pixel and getting a hit
> > doesn't tell us anything. There could be another object in front of
> > the object at that next pixel, so it would have to be tested anyways.
>
> You are right of course - we still need to do more tests if we get a
> hit. But I'm wondering, does getting a hit allow us in any way to avoid
> testing objects "further away" than the one hit?
>
> (I don't know how POV-Ray's bounding and space partitioning works -
> especially the new octree thing. And obviously POV-Ray supports infinite
> objects and so forth...)
>
> --
> http://blog.orphi.me.uk/

Well ...

Thinking about it for more than five seconds, it is so obvious that an
(unpredictable?) object can be interposed in front of the currently tested
object, and therefore be intercepted by the next ray but not by the previous
one, that I did not even see the situation!!!

Of course my suggestion is nonsense!!

However, maybe my remark about the final image (ie once entirely traced)
still has some value. Couldn't we do a quick pretrace of the scene (also
with radiosity and shadow rays), make some kind of geometrical analysis,
and feed a predictor?

For example, simplifying the situation by considering only primary rays:
make an undersampled pretrace of the scene (factor 1/2) and determine
which object was hit by each ray. Then, using this information, isn't it
possible somehow to predict at the pixel or sub-pixel level, and for
anti-aliasing?
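
To make the idea concrete, here is a minimal Python sketch of such an
undersampled pretrace used as a predictor. `trace_pixel` and `test_object`
are hypothetical stand-ins for a real tracer; note that a confirmed hit on
the predicted object only helps if nothing lies in front of it, which
`test_object` is assumed to verify.

```python
# Sketch of the undersampled-pretrace idea: trace every other pixel,
# record which object each primary ray hit, and use that record as a
# prediction (test the predicted object first) for the skipped pixels.
# `trace_pixel` and `test_object` are hypothetical stand-ins.

def pretrace_predict(width, height, trace_pixel, test_object):
    hits = {}
    # Pass 1: undersampled pretrace (factor 1/2).
    for y in range(0, height, 2):
        for x in range(0, width, 2):
            hits[(x, y)] = trace_pixel(x, y)  # returns an object id or None

    image = dict(hits)
    # Pass 2: for each skipped pixel, try the nearest pretraced
    # neighbour's object first; fall back to a full trace on a miss.
    for y in range(height):
        for x in range(width):
            if (x, y) in image:
                continue
            predicted = hits.get((x - x % 2, y - y % 2))
            if predicted is not None and test_object(predicted, x, y):
                image[(x, y)] = predicted          # prediction confirmed
            else:
                image[(x, y)] = trace_pixel(x, y)  # full test
    return image
```

The saving comes entirely from Pass 2 pixels where the prediction is
confirmed cheaply; where objects change between neighbouring pixels, the
full test still runs.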

Is there really no way to predict, even approximately, the result of a
ray-object intersection? Is it really impossible to use statistical aspects
of a scene to speed up intersection tests?

    Bruno



From: Warp
Subject: Re: intersection tests
Date: 23 Sep 2007 14:17:40
Message: <46f6adc4@news.povray.org>
Btw, I don't think it's the first recursion level (ie. the rays shot
from the camera) which consumes the majority of the rendering time. The
majority is consumed by the additional rays spawned recursively from
the first and subsequent intersections.

  Even if your idea worked it would only speed up the first recursion
level, but not the others.

-- 
                                                          - Warp



From: honnza
Subject: Re: intersection tests
Date: 28 Sep 2007 08:45:00
Message: <web.46fcf5dcacb0bd79a9ce4df50@news.povray.org>
Warp <war### [at] tagpovrayorg> wrote:
> Btw, I don't think it's the first recursion level (ie. the rays shot
> from the camera) which consume the majority of the rendering time. The
> majority is consumed by the additional rays spawned recursively from
> the first and subsequent intersecions.
>
>   Even if your idea worked it would only speed up the first recursion
> level, but not the others.
>
> --
>                                                           - Warp

Maybe you could use marching squares to get the outlines of the objects
being rendered and then only intersect the objects in front of the seen
one. You can also predict multiple subsequent levels.
As this already traces most of the pixels, you don't have to retrace
those (except for AA).
BTW, implementing marching squares would also increase AA quality, because
it misses fewer sub-pixel lines (should pixels found to be wrongly
antialiased be retraced if they are a bit off, i.e. under-antialiased?)
Still, I think most of the time is spent not figuring out whether there is
an intersection but where it is (think of parametric surfaces).
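
A small sketch of the outline idea in Python, assuming a pretrace has
already produced a grid of per-pixel object ids (`boundary_cells` is a
hypothetical helper; marching squares would go further and turn these cells
into actual contours):

```python
# Sketch of the outline idea: after a pretrace that records an object id
# per pixel, only the 2x2 cells whose corners disagree (the silhouette
# edges that marching squares would contour) need extra rays; uniform
# regions can be accepted as-is.

def boundary_cells(ids):
    """ids: 2-D list of object ids. Returns the (x, y) of each 2x2 cell
    whose four corner pixels are not all the same object."""
    cells = []
    for y in range(len(ids) - 1):
        for x in range(len(ids[0]) - 1):
            corners = {ids[y][x], ids[y][x + 1],
                       ids[y + 1][x], ids[y + 1][x + 1]}
            if len(corners) > 1:          # an object edge crosses this cell
                cells.append((x, y))
    return cells
```

Only the returned cells would then be supersampled or retraced; everything
else keeps its pretraced value.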



From: Le Forgeron
Subject: Re: intersection tests
Date: 28 Sep 2007 10:54:14
Message: <46fd1596@news.povray.org>
On 28.09.2007 14:38, honnza wrote:
> 
> Maybe you could use marching squares to get the outlines of the objects
> being rendered and then only intersect the objects in front of the seen
> one. You can also predict multiple subsequent levels.
> As this already traces most of the pixels, you don't have to retrace
> those (except for AA).
> BTW, implementing marching squares would also increase AA quality, because
> it misses fewer sub-pixel lines (should pixels found to be wrongly
> antialiased be retraced if they are a bit off, i.e. under-antialiased?)
> Still, I think most of the time is spent not figuring out whether there is
> an intersection but where it is (think of parametric surfaces).
> 
> 
1. Marching cubes is patented... ready, aim, fire, lawyers!
 (Has it expired yet?)
2. You are somehow describing bounding boxes. No need for marching cubes.
3. You're right; you'd be better off spending the time optimising the
parametric solver than that part.

-- 
The superior man understands what is right;
the inferior man understands what will sell.
-- Confucius



From: Grassblade
Subject: Re: intersection tests
Date: 24 Oct 2007 15:50:00
Message: <web.471fa121acb0bd79654d6f060@news.povray.org>
Thorsten Froehlich <tho### [at] trfde> wrote:

> No, he is not on to something. Warp already pointed out that his idea makes
> no sense.
>
>  Thorsten, POV-Team

I have been pondering this awhile, and I don't see why not. In the vast
majority of scenes, the objects making up the scene are pretty much
differentiable. By definition, if you know the value of the ray-object
intersection, the normal and the texture, you can posit that there exists a
neighborhood of said point with similar attributes. In particular, no
intervening object will obstruct rays in a neighborhood of an already
traced ray. The question is just how big this neighborhood is. That's where
stochastic methods would come in. I realize that raytracing is deeply rooted
in what I'd call brute-force deterministic algorithms, and that a
full-blown stochastic algorithm is not trivial. I also realize that if it
were a truly stochastic process, then you'd end up with ellipsoidal
confidence regions centered upon each traced ray, which would downgrade the
raytracer to a splatting algorithm.

IMO, a simple alternative could be to trace a point and compare its distance
in color space with the last traced point. If the distance in color space is
greater than a user-defined threshold, trace the middle point; otherwise
guess that the middle point is the average of the two traced points. I have
tried it in the toy raytracer found in the help file, and it cuts down
render time by almost 40% (well, it's parse time in there, but it would
translate to render time in POV-Ray) if the color-space threshold is big
enough. My code is limited and only skips columns, but a gifted coder could
skip rays on rows too, and save even more time. Obviously, in more complex
scenes, the time saved would be much smaller, but if it saves even 10% of
render time, that still means more time for tweaking a scene and less for
rendering. Complex textured scenes, fractals and parts of infinite checkered
planes going off into the horizon would defeat it, obviously, and some
caution would have to be exercised, as any feature that spans only one pixel
would have a 50% chance of being skipped.

What I propose is essentially a dual to the existing quality settings: do
not trace every ray, but keep the whole information of the ones you do
trace. The end-user would have all the info needed to make an informed
decision on what quality setting to use. Visual quality doesn't seem to
take a significant hit, see:
http://news.povray.org/povray.binaries.images/message/%3Cweb.471f9b31e04cdb41654d6f060%40news.povray.org%3E/#%3Cweb.471f9b31e04cdb41654d6f060%40news.povray.org%3E
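
A rough Python rendition of the column-skipping scheme (the SDL toy
raytracer from the help file is not reproduced here; `shade` is a
hypothetical full per-pixel trace returning an RGB tuple):

```python
# Sketch of the skip-and-compare scheme: trace every other column, and
# for each skipped pixel either average its traced neighbours (if they
# are close in colour space) or trace it properly. `shade(x, y)` is a
# hypothetical full per-pixel trace returning an (r, g, b) tuple.

def colour_dist(a, b):
    # Chebyshev distance in colour space; any metric would do.
    return max(abs(ca - cb) for ca, cb in zip(a, b))

def adaptive_row(width, y, shade, threshold):
    row = [None] * width
    for x in range(0, width, 2):          # trace even columns only
        row[x] = shade(x, y)
    for x in range(1, width, 2):          # fill the skipped columns
        left = row[x - 1]
        right = row[x + 1] if x + 1 < width else left
        if colour_dist(left, right) <= threshold:
            # Neighbours agree: guess the midpoint as their average.
            row[x] = tuple((l + r) / 2 for l, r in zip(left, right))
        else:
            row[x] = shade(x, y)          # disagreement: trace for real
    return row
```

As with any interpolation, a feature narrower than two pixels can fall
entirely between the traced columns and be missed, regardless of the
threshold.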



From: Warp
Subject: Re: intersection tests
Date: 24 Oct 2007 18:31:58
Message: <471fc7de@news.povray.org>
Grassblade <nomail@nomail> wrote:
> IMO, a simple alternative could be to trace a point, compare the distance in
> the color space with the last traced point. If distance in color-space is
> greater than a user defined threshold, trace the middle-point, otherwise
> guess that the middle point is an average of the two traced points.

  That may miss a thin object or other detail at that middle point.

  Also, how to adapt that to antialiasing?

-- 
                                                          - Warp



From: Grassblade
Subject: Re: intersection tests
Date: 25 Oct 2007 17:55:00
Message: <web.47210fb0acb0bd7945deea430@news.povray.org>
Warp <war### [at] tagpovrayorg> wrote:
> Grassblade <nomail@nomail> wrote:
> > IMO, a simple alternative could be to trace a point, compare the distance in
> > the color space with the last traced point. If distance in color-space is
> > greater than a user defined threshold, trace the middle-point, otherwise
> > guess that the middle point is an average of the two traced points.
>
>   That may miss a thin object or other detail at that middle point.
>
>   Also, how to adapt that to antialiasing?
>
> --
>                                                           - Warp

Yup, correct; pixel-wide features would have a 50% chance of being skipped.
I guess there is no free lunch. :-(

Regarding antialiasing, assume the middle pixel is guessed as the average
of the bordering pixels. Then you can write pixel color = "true" color +
error. If you apply AA to a block containing guessed points, you get the
true color plus only a fraction of the error(s). So AA on top of the
algorithm actually helps smooth out the errors.
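
A tiny numeric illustration of that error-dilution argument, with made-up
values:

```python
# "Guessed pixel = true colour + error": averaging a block that contains
# one guessed pixel dilutes that pixel's error by the block size.
# All values here are invented purely for illustration.

true_block = [0.40, 0.42, 0.41, 0.43]    # true colours of a 2x2 AA block
error = 0.08                             # error of the one guessed pixel
rendered = [true_block[0] + error] + true_block[1:]

true_avg = sum(true_block) / 4
aa_avg = sum(rendered) / 4
# The AA average carries only a quarter of the original error.
assert abs(aa_avg - true_avg - error / 4) < 1e-12
```

With a 3x3 AA block the surviving fraction drops to one ninth, which is why
AA and the guessing scheme compose well.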




Copyright 2003-2023 Persistence of Vision Raytracer Pty. Ltd.