> It all depends on what you want, but personally I don't like the idea. Even if
> that sample was added to three others (forming a camera-oriented tetrahedron),
> there's still a likelihood that those last DE samples will have been taken at
> different relative distances to the surface across the render. Does that make
> sense?
Indeed, my personal goal is to keep the scene moving at a decent
frame rate (whilst adding more complexity) rather than to aim for a
more realistic render.
>> When thinking about optimising distance field renderers I always get a
>> feeling that you should be able to do something clever with voronoi
>> diagrams but I have never worked out what yet...
>
> Hmm. Are you talking about breaking up the DE into separate regions, so that no
> particular region will have too many components (shape operations)? If so, what
> happens when a shadow or reflection ray needs to travel between regions?
Yes, I'm thinking that with complex scenes I am still computing the
distance field for every single object at every step. What I think
would speed things up is grouping objects together and creating a much
simpler, conservative "group" distance function: a kind of n-tree
structure where a relatively cheap check at each level rules out
nodes that you don't need to descend into and evaluate any further.
I guess POV does something similar.
You would do this for every distance field evaluation, so it wouldn't
matter if a ray jumped from one region to another.
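A minimal sketch of that idea, assuming each group is enclosed in a bounding sphere (the names `Node`, `sphere_de` and `scene_de` are illustrative, not from any existing library): the distance from the sample point to a group's bounding sphere is a lower bound on the distance to anything inside the group, so if that bound already exceeds the best distance found so far, the whole subtree can be skipped.

```python
import math

def sphere_de(p, center, radius):
    # Signed distance to a sphere: a cheap conservative lower bound
    # for the distance to everything enclosed by it.
    return math.dist(p, center) - radius

class Node:
    def __init__(self, center, radius, children=None, leaf_de=None):
        self.center = center      # bounding-sphere centre of the group
        self.radius = radius      # bounding-sphere radius of the group
        self.children = children or []
        self.leaf_de = leaf_de    # exact DE, present only on leaves

def scene_de(p, node, best=float("inf")):
    # Cheap check first: if even the bounding sphere is further away
    # than the current minimum, nothing in this subtree can improve it.
    bound = sphere_de(p, node.center, node.radius)
    if bound >= best:
        return best
    if node.leaf_de is not None:
        return min(best, node.leaf_de(p))
    for child in node.children:
        best = scene_de(p, child, best)
    return best

# Two unit spheres grouped under one bounding sphere.
root = Node((5.0, 0.0, 0.0), 6.0, children=[
    Node((0.0, 0.0, 0.0), 1.0,
         leaf_de=lambda p: math.dist(p, (0.0, 0.0, 0.0)) - 1.0),
    Node((10.0, 0.0, 0.0), 1.0,
         leaf_de=lambda p: math.dist(p, (10.0, 0.0, 0.0)) - 1.0),
])
```

Because `scene_de` is called the same way for every field evaluation, it doesn't matter which region a shadow or reflection ray has wandered into; seeding `best` with the current march step size would prune even more aggressively.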
I'm also not sure if such a scheme would actually make much improvement
on a GPU.