POV-Ray : Newsgroups : povray.beta-test : Testing real-time raytracing
  Testing real-time raytracing (Message 21 to 30 of 40)  
From: Invisible
Subject: Re: Testing real-time raytracing
Date: 2 Nov 2006 04:50:39
Message: <4549bf6f@news.povray.org>
>>  Then, in theory, it could be possible to attach a light source to
>>each of the cameras in the clockless animation mode?
> 
> 
> technically yes, but we lose the benefits of bounding then (i.e. the light
> would have to be placed into the list of infinite objects). either that, or
> some sort of manipulation of bounding information would need to occur each
> time the camera moved outside of its previous bounding volume.
> 
> that said, though, if we say that the light is always at the eye point, we
> could take advantage of the fact that we are already tracing a ray from that
> point out into the scene, and thus anything that the primary ray intersects
> then is by definition visible to the light source - meaning we don't need to
> test it separately.

Ah, the joys of programming... features that sound easy sometimes 
aren't. And features that sound hard are sometimes easy... ;-)
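The shortcut quoted above can be sketched in a few lines. This is an illustrative toy, not POV-Ray internals: all the names are invented, and the occlusion test is a stand-in that only counts how often it is called.

```python
# Sketch of the "headlight" shortcut: if the only light source sits
# exactly at the camera's eye point, any surface a primary ray hits is
# by definition visible to that light, so the usual shadow ray can be
# skipped. Names here are illustrative, not POV-Ray internals.

shadow_rays_traced = 0

def trace_shadow_ray(point, light_pos):
    """Stand-in for a real occlusion test; just counts invocations."""
    global shadow_rays_traced
    shadow_rays_traced += 1
    return False  # pretend nothing occludes the light

def light_visible(hit_point, light_pos, eye, is_primary_ray):
    # The shortcut: a light at the eye is always visible from any point
    # that a primary ray managed to hit -- no shadow test required.
    if is_primary_ray and light_pos == eye:
        return True
    return not trace_shadow_ray(hit_point, light_pos)

eye = (0.0, 0.0, 0.0)

# Headlight: visibility is known for free.
assert light_visible((0.0, 0.0, 5.0), eye, eye, is_primary_ray=True)
assert shadow_rays_traced == 0

# An ordinary off-eye light still pays for a shadow ray.
assert light_visible((0.0, 0.0, 5.0), (1.0, 2.0, 3.0), eye, is_primary_ray=True)
assert shadow_rays_traced == 1
```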



From: Chris Cason
Subject: Re: Testing real-time raytracing
Date: 2 Nov 2006 07:43:16
Message: <4549e7e4@news.povray.org>
Invisible wrote:
> Actually, that's an interesting point... Would presumably mean the very 
> first frame still takes 20 minutes, and all the subsequent ones are 
> reasonably fast unless/until you expose new geometry.

well, it might be interesting to work this out at some point ... if you were
comparing a scene lit completely with radiosity (and presuming that the
radiosity cache had already been built) against one with several traditional
point light sources, at what point do the savings of not having to trace
shadow rays overcome the cache lookup time?

I would suspect that the answer is 'very early', except perhaps in very
simple scenes with little geometry. The more geometry there is, the more time
is spent traversing the bounding structure (BVH or BSP) for each shadow
ray; and the more light sources, the more shadow rays, with the relationship
being fairly linear.

the rad cache lookup, however, would not suffer from this problem.

therefore it is entirely possible that, as counterintuitive as it may seem,
there is potential for a scene with global illumination to be *faster* in
interactive rendering than one without.
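Chris's cost argument can be written down as a toy model. The constants below are invented purely for illustration; the point is only the shape of the curves: per-shade cost with point lights grows linearly with the light count, while a radiosity-cache lookup is a single, roughly constant cost.

```python
# Toy cost model (all constants invented) of shadow rays vs. a radiosity
# cache lookup. Each point light needs a shadow ray tested against the
# bounding structure, so the cost scales with the light count; the cache
# lookup is paid once per shading point regardless of light count.

SHADOW_RAY_COST = 5.0    # hypothetical units per shadow ray (BVH traversal)
CACHE_LOOKUP_COST = 8.0  # hypothetical units per radiosity-cache lookup

def shade_cost_point_lights(n_lights):
    return n_lights * SHADOW_RAY_COST

def shade_cost_radiosity():
    return CACHE_LOOKUP_COST

# With these made-up numbers, the crossover is already at two lights:
assert shade_cost_point_lights(1) < shade_cost_radiosity()
assert shade_cost_point_lights(2) > shade_cost_radiosity()
```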

-- Chris



From: Chris Cason
Subject: Re: Testing real-time raytracing
Date: 2 Nov 2006 07:47:43
Message: <4549e8ef$1@news.povray.org>
Invisible wrote:
> Now if you could get a *cluster* of PCs to chew on the problem in 
> parallel, you might be able to render "non-trivial" scenes in almost 

You may find the paper "An Application of Scalable Massive Model Interaction
using Shared-Memory Systems" from http://www.cs.utah.edu/~boulos/research.htm
an interesting read. (They used 64- or 128-CPU machines with 64 GB of RAM for
interactive rendering of a 350-million-polygon aircraft model.) There are also
some other papers on distributed rendering on that page.

-- Chris



From: Invisible
Subject: Re: Testing real-time raytracing
Date: 2 Nov 2006 08:26:15
Message: <4549f1f7$1@news.povray.org>
>>Actually, that's an interesting point... Would presumably mean the very 
>>first frame still takes 20 minutes, and all the subsequent ones are 
>>reasonably fast unless/until you expose new geometry.
> 
> 
> therefore it is entirely possible that, as counterintuitive as it may seem,
> there is potential for a scene with global illumination to be *faster* in
> interactive rendering than one without.

That's a pretty interesting point... global illumination might 
actually be faster. At least, *after* the cache has been built. (For a 
single frame, the time taken to build the cache usually makes the 
overall render time [drastically] slower than "normal mode". But if 
you're reusing that data for an animation... sure.)


OOC, how does POV-Ray efficiently retrieve radiosity samples during 
rendering? (I thought about writing a renderer myself, but I couldn't 
think of a fast lookup algorithm.) Presumably the same problem applies 
to photon maps also...?



From: Warp
Subject: Re: Testing real-time raytracing
Date: 2 Nov 2006 10:36:12
Message: <454a106c@news.povray.org>
Chris Cason <del### [at] deletethistoopovrayorg> wrote:
> therefore it is entirely possible that, as counterintuitive as it may seem,
> there is potential for a scene with global illumination to be *faster* in
> interactive rendering than one without.

  It's easier to understand when one knows how the global illumination
algorithm works in POV-Ray, and how radiosity works in scanline rendering.
In the latter, global illumination is precalculated into lightmaps, and
these lightmaps are then just used as texture modifiers on the object
surfaces (this is very fast because of hardware support). Rendering using
lightmaps is much faster than using dozens of light sources.

  The data structure used by POV-Ray to store global illumination samples
is not very far from a lightmap. It's a kind of "sparse 3D lightmap".
Retrieving values from that data structure is not slow, and thus once it
has been sufficiently calculated (i.e. no more illumination samples are
necessary), rendering becomes quite fast, especially if there are no
light sources.

  (Of course the main problem with this technique has to do with the
"sparse" part. When samples are too far apart, illumination will be
quite inaccurate.)
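A "sparse 3D lightmap" of the kind Warp describes can be sketched with a hash grid: samples are binned by quantized position, so a lookup touches one cell rather than every stored sample. This is a minimal illustration, not POV-Ray's actual data structure, and all names are invented.

```python
import math
from collections import defaultdict

# Minimal sketch of a "sparse 3D lightmap": irradiance samples are
# binned into a hash grid keyed by quantized position, so retrieval
# inspects only the query point's cell instead of scanning all samples.
# (A real renderer would also search neighbouring cells, weight samples
# by distance, and fall back to tracing new samples when a cell is
# empty -- the "sparse" failure mode Warp mentions.)

CELL = 1.0  # grid cell size

def cell_of(p):
    return tuple(math.floor(c / CELL) for c in p)

class SparseLightmap:
    def __init__(self):
        self.cells = defaultdict(list)  # cell -> [(position, irradiance)]

    def insert(self, pos, irradiance):
        self.cells[cell_of(pos)].append((pos, irradiance))

    def lookup(self, pos):
        # Average the samples sharing the query point's cell, or report
        # a miss (None) when no sample is near enough.
        samples = self.cells.get(cell_of(pos), [])
        if not samples:
            return None
        return sum(s for _, s in samples) / len(samples)

lm = SparseLightmap()
lm.insert((0.2, 0.1, 0.3), 0.8)
lm.insert((0.6, 0.4, 0.2), 0.6)
```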

-- 
                                                          - Warp



From: Tom York
Subject: Re: Testing real-time raytracing
Date: 2 Nov 2006 12:40:01
Message: <web.454a2c6ebf0a728b7d55e4a40@news.povray.org>
Warp <war### [at] tagpovrayorg> wrote:
> Tom York <alp### [at] zubenelgenubi34spcom> wrote:
> > I got about 1fps
> > at 160x128 on a complex scene involving complex meshes, refraction,
> > transparency, image maps and so on
>
>   It'd be interesting to know how much it speeds up if you use the
> quality parameter (+q) to turn off the slowest features.
>
> --
>                                                           - Warp

I did some more precise testing today. The scene involves a 180-degree pan
that starts with the camera pointing at several complicated (self-shadowing)
mesh objects. The pan then crosses two largely empty regions (just a
procedural background texture) with a few meshes appearing here and there,
and then finally ends up pointing at a large object made of meshes, and
tori/cylinders/other finite solid primitives. There is a lot of refraction
going on at the end, although the refracting surface is just a simple cone.
As expected, the frame rate is quite variable.

I used 100 cameras and let the animation run over 600 frames at 160x128. I
also turned on hyperthreading in the PC's BIOS settings, since I was a bit
curious to see if anything would happen (I disabled it a long time ago
since some software didn't like it). It may have done absolutely nothing
(can't be sure as I wasn't careful with the timing yesterday).

The scene data is about 16 million tokens worth, plus some large (2048x2048,
24 bit) image maps, and takes about 30-40 seconds to parse. Initially,
max_trace_level was set to 20 (what was used in the original scene) and the
+Q quality setting was left at default (full quality). That's the "base"
figure:

1) Base: 1.7-1.8 fps
2) With +Q2: 2.8 fps
3) With max_trace_level 5: 1.7-1.8 fps
4) With max_trace_level 2: 1.8 fps
5) With max_trace_level 1: 1.8-1.9 fps

The fps values quoted above came from the average reported in the status bar
after 600 frames. The average was reasonably stable after 600 frames,
although it sometimes varied enough that I've stated a range of values
where necessary.

The aspects of +Q2 that had the most effect on speed were presumably the
elimination of shadows and refractive surfaces. max_trace_level 1
eliminates the refraction, I guess, so perhaps that leaves shadows as the
most time-consuming feature. On the other hand, the refraction is only
present at the end, but there are shadowed objects through most of the
animation. Visually, at 160x128 there is not much difference between +Q2
and full quality; the absence of refraction is the most noticeable effect.

Tom



From: Tom York
Subject: Re: Testing real-time raytracing
Date: 2 Nov 2006 12:50:01
Message: <web.454a2e8ebf0a728b7d55e4a40@news.povray.org>
Invisible <voi### [at] devnull> wrote:
> In fact, I've built scenes like this, and as you say, if you turn off
> enough stuff you can get "almost realtime" rendering if your PC is a big
> enough brute. And yes, I would imagine disabling the parser makes it
> faster still. (Although on the other hand, how long does it take to
> parse half a page of text?)

I know that my use case probably isn't the same as the one this feature was
intended for. But consider that with 3.6 if I want to render 100 frames of
the scene I've been using here, just to check that the textures/finishes
look good in some sort of motion, POV spends ~30 seconds per frame simply
parsing and loading stuff that doesn't change from frame to frame. That's
at least 50 minutes spent before I even add rendering. Meshes often take a
relatively long time to parse but render very quickly (at least in my
experience) compared to some other primitives, so it can be quite easy to
get in a situation where parsing makes up 50% of the total time to produce
an animation. When the parsing isn't necessary (as in a preview of
materials), it just seems like a waste of energy and HDD light.
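Tom's figure checks out as simple arithmetic, taking the low end of his 30-40 second parse time:

```python
# Parse-overhead arithmetic from the post above: 100 frames at ~30 s of
# parsing each is 50 minutes of pure parsing before any rendering.
frames = 100
parse_seconds_per_frame = 30          # low end of the 30-40 s figure
total_parse_minutes = frames * parse_seconds_per_frame / 60
assert total_parse_minutes == 50.0    # "at least 50 minutes"
```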

Tom



From: Chris Cason
Subject: Re: Testing real-time raytracing
Date: 2 Nov 2006 13:30:07
Message: <454a392f$1@news.povray.org>
FYI, some tests:

  http://www.legitreviews.com/article/412/10/
  http://www.legitreviews.com/article/412/19/

-- Chris



From: Warp
Subject: Re: Testing real-time raytracing
Date: 2 Nov 2006 18:21:16
Message: <454a7d6c@news.povray.org>
Tom York <alp### [at] zubenelgenubi34spcom> wrote:
> I
> also turned on hyperthreading in the PC's BIOS settings, since I was a bit
> curious to see if anything would happen (I disabled it a long time ago
> since some software didn't like it). It may have done absolutely nothing
> (can't be sure as I wasn't careful with the timing yesterday).

  My experience is that hyperthreading in a Pentium4 gives about 10-20%
speedup with POV-Ray 3.7.

-- 
                                                          - Warp



From: Ben Chambers
Subject: Re: Testing real-time raytracing
Date: 3 Nov 2006 03:58:58
Message: <454b04d2@news.povray.org>
Chris Cason wrote:
> For those interested, please visit http://www.povray.org/beta/rtr/
> 
> It's not very well tested (only on the systems I had available to me), so
> YMMV. Expect it to have some rough edges. Please report issues here.
> 
> -- Chris

 From that page: "If your computer does not have SSE2, the program will 
crash."

Although I'd love to test it out myself, I probably wouldn't be using it 
for anything useful, and my machine would croak on it anyway.  Oh well, 
just another reason* for me to upgrade :)

...Chambers


*as if I didn't have enough reasons already...




Copyright 2003-2023 Persistence of Vision Raytracer Pty. Ltd.