Newsgroups: povray.beta-test
Subject: Re: v3.8+ crackle instability (facets?) with >1 uses per thread.
From: Thorsten
Date: 11 Jun 2024 14:17:07
Message: <666894a3$1@news.povray.org>
On 11.06.2024 17:39, William F Pokorny wrote:
> When I think about really implementing that approach, I also start to 
> think about how a different approach to parallelism than our block based 
> approach could be good. One where we spin up the combined 
> shape/solver(s) as processes to which we'd send batches of rays at a 
> time and get back batches of intersections... Yeah, I'm practically 
> dreaming, but pretty sure that sort of set up would be best for the 
> merged univariate polynomial solver/shape approach. How well that 
> structure would work overall - I'm not at all sure. 😄

Well, yes, an intersection-based approach would probably offer the most 
potential performance on a shared-memory system. You would also end up 
with a stack-based approach automatically that way. However, you hit 
something of a wall once you get to the really big multi-die systems 
like Epyc and the newer Xeons, because at best only the last-level 
cache is shared by all cores. So in the end the best performance 
probably hides somewhere in a hybrid of the two, with blocks still 
offering some benefit on large multi-core systems. That is, of course, 
assuming the ray order doesn't disrupt the first- and second-level 
caches too much. The complexity of modern CPUs makes it impossible to 
predict, I think.
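
To make the batching idea concrete, here is a minimal C++ sketch of 
what such an interface might look like. None of these names exist in 
the POV-Ray source; the sphere solver is just a toy stand-in for a 
real shape/solver:

// Hypothetical sketch only -- not actual POV-Ray code. A shape/solver
// is handed whole batches of rays and returns batches of intersections,
// instead of being called once per ray.
#include <cmath>
#include <cstddef>
#include <vector>

struct Ray { double origin[3], dir[3]; };
struct Hit { std::size_t ray_index; double depth; bool valid; };

class BatchedSolver
{
public:
    virtual ~BatchedSolver() = default;

    // One call per batch: the solver keeps its own state warm and can
    // process the rays in whatever order suits its caches.
    virtual void Intersect(const std::vector<Ray>& rays,
                           std::vector<Hit>& hits) const = 0;
};

// Toy example: a unit sphere at the origin, solved per batch.
class SphereSolver final : public BatchedSolver
{
public:
    void Intersect(const std::vector<Ray>& rays,
                   std::vector<Hit>& hits) const override
    {
        hits.resize(rays.size());
        for (std::size_t i = 0; i < rays.size(); ++i)
        {
            const Ray& r = rays[i];
            // Quadratic for |o + t*d|^2 = 1 (d assumed normalised).
            double b = 2.0 * (r.origin[0]*r.dir[0] +
                              r.origin[1]*r.dir[1] +
                              r.origin[2]*r.dir[2]);
            double c = r.origin[0]*r.origin[0] +
                       r.origin[1]*r.origin[1] +
                       r.origin[2]*r.origin[2] - 1.0;
            double disc = b*b - 4.0*c;
            hits[i].ray_index = i;
            hits[i].valid = (disc >= 0.0);
            hits[i].depth = hits[i].valid ? (-b - std::sqrt(disc)) / 2.0
                                          : 0.0;
        }
    }
};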

Another benefit would be that the "texturing" becomes a completely 
separate task and could actually be done (sans reflection and 
refraction) after tracing, which, if nothing else, would make for a 
cool-looking render preview. The other effect would be that at least 
the bounding optimisations and mesh intersection testing could be done 
on a GPU.
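
As a rough illustration (again just a sketch with made-up names, not 
POV-Ray code): the trace pass fills a buffer of intersection records, 
and the texturing becomes a plain loop over that buffer, which is 
exactly what could be re-run for a quick preview.

// Hypothetical sketch only. Tracing writes per-pixel surface records;
// texturing is a separate pass over them (no reflection/refraction).
#include <cstddef>
#include <vector>

struct Colour { double r, g, b; };

struct Surface             // per-pixel result of the trace pass
{
    int    object_id;      // -1 = no hit
    double point[3];       // world-space intersection
    double normal[3];
    double u, v;           // texture coordinates
};

Colour ShadePixel(const Surface& s)   // placeholder "texturing"
{
    if (s.object_id < 0)
        return {0.0, 0.0, 0.0};
    // Preview shading: map the surface normal to a colour.
    return {0.5 + 0.5 * s.normal[0],
            0.5 + 0.5 * s.normal[1],
            0.5 + 0.5 * s.normal[2]};
}

void ShadePass(const std::vector<Surface>& buffer,
               std::vector<Colour>& image)
{
    image.resize(buffer.size());
    for (std::size_t i = 0; i < buffer.size(); ++i)
        image[i] = ShadePixel(buffer[i]);   // no tracing needed here
}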

Yet another benefit of separating the tracing and the texturing is 
that you end up with a sort of frame buffer that contains object data. 
An idea I never pursued to the end, 20 or so years ago, was that this 
makes it possible to edit a ray-traced scene on the fly: because you 
have access to the objects making up an individual pixel, you can swap 
objects in and out of the scene as long as the camera doesn't move.
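
A tiny sketch of that idea, with invented names: since the per-pixel 
records remember which object produced each pixel, a texture edit only 
needs a re-shade of the affected pixels, not a new trace.

// Hypothetical sketch only. An "object buffer" keeps the object id per
// pixel, so editing one object's appearance is a re-shade, not a re-trace
// (as long as camera and geometry stay fixed).
#include <cstddef>
#include <vector>

struct PixelRecord { int object_id; double normal[3]; };
struct Colour      { double r, g, b; };

// Illustrative per-object texture lookup: one flat colour per object id.
Colour ObjectColour(int id, const std::vector<Colour>& palette)
{
    return (id >= 0 && id < (int)palette.size()) ? palette[id]
                                                 : Colour{0.0, 0.0, 0.0};
}

// Re-shade only the pixels belonging to the edited object.
void ReshadeObject(int edited_id,
                   const std::vector<PixelRecord>& buffer,
                   const std::vector<Colour>& palette,
                   std::vector<Colour>& image)
{
    for (std::size_t i = 0; i < buffer.size(); ++i)
        if (buffer[i].object_id == edited_id)
            image[i] = ObjectColour(edited_id, palette);
}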

Thorsten

