Here's a question about the trace function.
When tracing from a starting point A to an object B, does the distance
between A and B have any effect on rendering time?
My guess would be no, because:
Given the equations of any two non-parallel lines (in 2D), finding the point
where the two lines intersect is achieved by solving a simple simultaneous
equation. One does not (and could not) compare every possible value of y1
and y2, given x, to find the point of intersection. I say "could not" in
the sense that any line segment can be divided up into infinitely many
points, and the point of intersection may not be a rational number.
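To make that concrete, here is a small sketch in POV-Ray SDL (the
slope/intercept values are just made-up examples): the intersection
falls out of a single algebraic solve, with no point-by-point search.

// Two 2D lines in slope/intercept form: y = M1*x + B1 and y = M2*x + B2.
// Assuming M1 != M2 (non-parallel), one solve gives the intersection.
#declare M1 =  2;  #declare B1 = 1;
#declare M2 = -1;  #declare B2 = 4;
#declare Xi = (B2 - B1) / (M1 - M2);  // from M1*Xi + B1 = M2*Xi + B2
#declare Yi = M1 * Xi + B1;
#debug concat("Intersection at <", str(Xi,0,3), ", ", str(Yi,0,3), ">\n")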
I don't think that POV-Ray would look at every possible point between the
starting point and the target object. However, imagine using trace where
the target object was a complex CSG object, mesh, isosurface, etc. The
maths involved here would be more than a mere simultaneous equation and
admittedly, at this point, I would simply throw my hands up in despair if
asked to do it by hand. On the other hand, I don't know how complex (or simple)
this sort of geometry really is.
Can anyone shed a bit of light on this?
You can point me to the documentation if it's already explained there. If
trace works in the same way as the raytracing algorithm (which I'm
beginning to suspect) I'll have to make a thorough study of it.
BTW: Is trace generally considered an "advanced" POV-Ray function? I didn't
find it difficult to use, as I already knew what a normal vector was from
high-school geometry, but I've noticed that the normal vector is the part
that most often throws people.
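For reference, here is a minimal trace() usage sketch (the object, the
starting point and the ray direction are just placeholder values). The
fourth argument receives the surface normal, and the usual check is
that a zero-length normal means the ray missed the object.

#declare Target = sphere { <0, 0, 5>, 1 }

#declare Norm = <0, 0, 0>;
#declare Hit  = trace(Target, <0, 0, -10>, <0, 0, 1>, Norm);

// trace() returns <0,0,0> and leaves Norm at zero length on a miss,
// so test the normal rather than the returned point.
#if (vlength(Norm) > 0)
  sphere { Hit, 0.05 pigment { rgb <1, 0, 0> } }  // mark the hit point
#end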
FrogRay <nomail@nomail> wrote:
> When tracing from a starting point A to an object B, does the distance
> between A and B have any effect on rendering time?
1) Why would you even think that it could have some effect?
2) If you are having such doubts, why don't you just test it?
> One does not (and could not) compare every possible value of y1
> and y2, given x, to find the point of intersection.
Ever heard of so-called root-finding algorithms?
http://en.wikipedia.org/wiki/Root-finding_algorithm
Not that POV-Ray uses that, but equations can be approximated
iteratively too. Of course POV-Ray doesn't do that because it doesn't
need to (except for isosurfaces, which is a different story and not
really related to the question of whether a longer ray takes more time
to calculate). Even if POV-Ray used such an algorithm, a clever algorithm
wouldn't be very much dependent on the ray length.
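As a rough illustration of that point (the macro below is not POV-Ray's
internal code, just a sketch of a closed-form ray/sphere solve), the
amount of arithmetic is the same whether the hit turns out to be at
t = 1 or at t = 1000000:

// Illustrative only: closed-form ray/sphere intersection.
#macro Ray_Sphere(Orig, Dir, Cent, Rad)  // Dir assumed to be unit length
  #local L    = Orig - Cent;
  #local B    = 2 * vdot(Dir, L);
  #local C    = vdot(L, L) - Rad * Rad;
  #local Disc = B * B - 4 * C;
  #if (Disc < 0)
    #local T = -1;                      // ray misses the sphere
  #else
    #local T = (-B - sqrt(Disc)) / 2;   // distance to the nearest hit
  #end
  T
#end

#debug concat("t = ", str(Ray_Sphere(<0,0,0>, <0,0,1>, <0,0,10>, 2), 0, 3), "\n")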
> If trace works in the same way as the raytracing algorithm
Why would you think that trace() uses anything other than what the
raytracing process itself uses?
trace() was originally added (initially to megapov iirc) exactly
because it was so simple to add. There was already an internal trace
function in the source code to trace objects (used to raytrace the
objects in the scene), and the SDL trace() simply calls that for the
given object. I bet the implementation of trace() is a one-liner.
> BTW: Is trace generally considered an "advanced" povray function.
"Advanced" is completely a question of opinion. To some people
trace() is one of the simplest functions in the whole SDL, while
to others it's really advanced stuff.
I suppose that in general it's considered quite advanced, since it
requires some spatial understanding and visualization skills in order
to be used properly (more so than e.g. just placing objects in a scene).
Of course being able to use the normal vector usefully requires some
knowledge too.
--
- Warp
FrogRay wrote:
> I don't think that POV-Ray would look at every possible point between the
> starting point and the target object. However, imagine using trace where
> the target object was a complex CSG object, mesh, isosurface, etc. The
> maths involved here would be more than a mere simultaneous equation and
> admittedly, at this point, I would simply throw my hands up in despair if
> asked to do it by hand. On the other hand, I don't know how complex (or simple)
> this sort of geometry really is.
It's really quite simple. For CSG, POV-Ray traces each member object,
and then deals with the set of results.
It's easier to explain with an example, so here's one:
union {
  merge {
    A
    B
  }
  C
}
When tracing this object, POV-Ray will find each intersection between
the ray with the three separate objects A, B and C. We'll call the
resulting set of points R (or would it be R[] or [R]? It's been a while
since I've done this kind of notation).
A and B are merged, so every point in R from A is checked to see whether
it's inside of B. If it is, then it is removed. If it isn't, then it is
retained. Then, the same thing is done in reverse, checking points from B
to see if they're inside of A.
The resulting set is union'ed with C, so the intersections from C are
just added to R.
Then, POV-Ray just looks for the closest point in R, and returns that as
your result.
So, the overall algorithm is a bit more complex, but only a little, and
it still ends up being very fast. It doesn't really change the math at
all, it just changes how you process the result.
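A quick way to see that behaviour (the spheres and the ray below are
just placeholder values) is to trace() the combined object directly;
the returned point is the closest surface of the whole CSG shape, and
the interior surfaces removed by the merge are never candidates:

#declare A = sphere { <-0.5, 0, 5>, 1 }
#declare B = sphere { < 0.5, 0, 5>, 1 }
#declare C = sphere { < 0.0, 2, 5>, 1 }

#declare Obj = union { merge { object { A } object { B } } object { C } }

#declare Norm = <0, 0, 0>;
#declare Hit  = trace(Obj, <0, 0, -10>, <0, 0, 1>, Norm);
// Hit is the nearest surface point of the whole union along the ray.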
...Chambers
PS Thorsten, Warp, if either of you are going to nitpick something I
said, please remember that I have the best of intentions and I am
sincerely sorry for any errors! :)