  Re: intersection tests  
From: Grassblade
Date: 24 Oct 2007 15:50:00
Message: <web.471fa121acb0bd79654d6f060@news.povray.org>
Thorsten Froehlich <tho### [at] trfde> wrote:

> No, he is not on to something. Warp already pointed out that his idea makes
> no sense.
>
>  Thorsten, POV-Team

I have been pondering this for a while, and I don't see why not. In the vast
majority of scenes, the objects making up the scene are pretty much
differentiable. By definition, if you know the value of the ray-object
intersection, the normal and the texture, you can posit that there exists a
neighborhood of said point with similar attributes. In particular, no
intervening object will obstruct rays in a neighborhood of an already
traced ray. The question is just how big that neighborhood is. That's where
the stochastic stuff would come in. I realize that raytracing is deeply
rooted in what I'd call brute-force deterministic algorithms, and that a
full-blown stochastic algorithm is not trivial. I also realize that if it
were a truly stochastic process, you'd end up with ellipsoidal confidence
regions centered on each traced ray, which would downgrade the raytracer to
a splatting algorithm.
IMO, a simple alternative would be this: trace a point and compare its
distance in color space to the last traced point. If the distance is
greater than a user-defined threshold, trace the midpoint between them;
otherwise guess that the midpoint is the average of the two traced points.
I have tried it in the toy raytracer found in the help file, and it cuts
render time by almost 40% (well, it's parse time in there, but it would
translate to render time in POV-Ray) if the color-space threshold is big
enough. My code is limited and only skips rays along columns, but a gifted
coder could skip rays along rows too and save even more time. Obviously, in
more complex scenes the time saved would be much smaller, but even a 10%
saving in render time means more time for tweaking a scene and less for
rendering. Complex textured scenes, fractals and the part of an infinite
checkered plane receding to the horizon would defeat it, obviously, and
some caution would have to be exercised, since any feature that spans only
one pixel has a 50% chance of being skipped.
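For what it's worth, here is a minimal sketch of the idea in C++ (Trace,
ColorDistance and RenderRow are names I made up for illustration; they are
not POV-Ray internals, and the real thing would hook into the actual
per-pixel trace):

#include <cmath>
#include <vector>

struct Color { double r, g, b; };

// Stand-in for the real per-pixel ray trace; just a gradient here so the
// sketch compiles on its own.
Color Trace(int x, int y)
{
    return { x * 0.01, y * 0.01, 0.5 };
}

// Euclidean distance between two colors in RGB space.
double ColorDistance(const Color& a, const Color& b)
{
    double dr = a.r - b.r, dg = a.g - b.g, db = a.b - b.b;
    return std::sqrt(dr * dr + dg * dg + db * db);
}

Color Average(const Color& a, const Color& b)
{
    return { (a.r + b.r) / 2.0, (a.g + b.g) / 2.0, (a.b + b.b) / 2.0 };
}

// Render one row, tracing only every other pixel.  When two traced
// neighbours differ by more than `threshold` in color space, the pixel
// between them is traced for real; otherwise it is guessed as their
// average.
void RenderRow(int y, int width, double threshold, std::vector<Color>& row)
{
    row.resize(width);
    row[0] = Trace(0, y);
    for (int x = 2; x < width; x += 2)
    {
        row[x] = Trace(x, y);
        if (ColorDistance(row[x - 2], row[x]) > threshold)
            row[x - 1] = Trace(x - 1, y);             // detail found: trace it
        else
            row[x - 1] = Average(row[x - 2], row[x]); // smooth area: interpolate
    }
    if (width % 2 == 0)                // last pixel has no right neighbour
        row[width - 1] = Trace(width - 1, y);
}

In effect this is adaptive antialiasing run in reverse: instead of adding
rays where neighbouring pixels differ, it removes them where they don't.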
What I propose is essentially a dual to the existing quality settings: don't
trace every ray, but keep all the information from the ones you do trace.
The end user would have all the info needed to make an informed decision
about which quality setting to use. Visual quality doesn't seem to take a
significant hit; see:
http://news.povray.org/povray.binaries.images/message/%3Cweb.471f9b31e04cdb41654d6f060%40news.povray.org%3E/#%3Cweb.471f9b31e04cdb41654d6f060%40news.povray.org%3E

