clipka wrote:
> As a matter of fact, there already is a constant with similar use in POV: The
> MIN_ISECT_DEPTH (currently hard-coded to 1e-4) is the minimum distance from a
> ray's origin (camera or an earlier intersection) to accept an intersection; any
> intersection closer than that is currently ignored, probably mainly to avoid
> "re-intersecting" at the ray's origin.
Ah, that's why I was getting what I thought was a clipping issue with a
very small distance between two image_map (filtering) surfaces (around
1e-5, since the gap was less than a millimeter while my base unit was
feet). I sort of figured that was what was happening when it disappeared
after a change in scale.
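To make the scale dependence concrete, here's a minimal Python sketch of the acceptance test clipka describes. The constant's value is from his post; everything else (the hit representation, the function itself) is my illustration, not POV-Ray's actual C++ code:

```python
# Hypothetical sketch of the near-intersection cutoff, for illustration
# only; the real logic lives in POV-Ray's C++ tracer.

MIN_ISECT_DEPTH = 1e-4  # minimum ray-origin-to-hit distance (per clipka)

def closest_accepted_hit(hit_distances):
    """Return the nearest hit farther than MIN_ISECT_DEPTH, or None."""
    accepted = [t for t in hit_distances if t > MIN_ISECT_DEPTH]
    return min(accepted) if accepted else None

# A gap of 1e-5 scene units between two filtering surfaces: the nearer
# one is silently skipped, so the ray sees straight through to the next.
print(closest_accepted_hit([1e-5, 0.25]))   # 0.25

# Scale the whole scene up by 100 and the same gap survives the cutoff,
# which is why the artifact vanished with a change of scale.
print(closest_accepted_hit([1e-3, 25.0]))   # 0.001
```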
> But I guess attempting to auto-tune those key distances would be faced with too
> many unknowns. Distance between camera and look_at seems like the best
> reference value to me for such a purpose (or focal plane if focal blur is used
> in the shot), but still someone might e.g. have chosen look_at just because the
> direction was ok, without paying attention to its distance. Or a shot may have
> look_at point somewhere in the distance, but highly detailed stuff visible in
> the foreground.
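A naive version of that auto-tuning idea might look like the sketch below. To be clear, the whole approach and the 1e-7 scale factor are hypothetical, nothing like this exists in POV-Ray:

```python
import math

def auto_epsilon(camera_loc, look_at, factor=1e-7):
    """Guess an intersection epsilon from the camera-to-look_at distance.

    Purely illustrative: both the heuristic and the factor are made up.
    """
    return math.dist(camera_loc, look_at) * factor

# The failure mode from the quote: if look_at was chosen for direction
# only (a normalized vector added to the location, so distance = 1),
# the guess ignores the actual scale of the scene entirely.
print(auto_epsilon((0, 0, 0), (0, 0, 1)))  # 1e-07, regardless of scene scale
```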
Exactly: any single method has too many potential problems. I know I've
sometimes set my camera's look_at by normalizing a direction vector and
adding it to the location, though that's infrequent. I've almost never
used a sky_sphere, since I either use a box to contain an interior scene
or a real sphere when doing HDRI lighting (for brightness control). And
focal blur usually isn't turned on until the final tests and the
'production' render, so any initial shape testing wouldn't trigger it.
Using an arbitrarily large object just to have a nearly infinite
backdrop, yet still doing something to it with CSG, happens too, so the
largest-object method I threw out there has its own set of problems.
Even more so when someone uses a big difference CSG and doesn't manually
set a bounding box.
Having a loop that runs through all the possible intersections in the
scene, counts the 'intersections' (maybe only between objects that have
any transmit or filter, or even just those with a defined ior) and the
amount of overlap for each, and then sets the value to catch some
percentage of them, would be a bear to code. It could also slow down
parsing significantly.
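Just to sketch why that scan gets expensive: something like the following would have to visit every pair of objects, which is O(n^2) in the object count. Everything here (the bounding-box format, the per-axis gap estimate, the percentile and divisor) is my own invention for illustration:

```python
from itertools import combinations

def box_gap(a, b):
    """Conservative gap between two axis-aligned boxes, each given as a
    ((min, max), ...) tuple per axis; 0.0 if they overlap or touch."""
    per_axis = [max(lo2 - hi1, lo1 - hi2)
                for (lo1, hi1), (lo2, hi2) in zip(a, b)]
    return max(0.0, max(per_axis))

def suggest_epsilon(boxes, keep_fraction=0.99):
    """Pick an epsilon that would preserve keep_fraction of the positive
    inter-object gaps. Visits every pair: O(n^2), hence slow parsing."""
    gaps = sorted(g for g in (box_gap(a, b)
                              for a, b in combinations(boxes, 2)) if g > 0)
    if not gaps:
        return 1e-4  # fall back to the current hard-coded value
    idx = int((1.0 - keep_fraction) * len(gaps))
    return gaps[idx] / 10.0  # comfortably below the smallest kept gap

# Two unit cubes one unit apart along x:
a = ((0, 1), (0, 1), (0, 1))
b = ((2, 3), (0, 1), (0, 1))
print(suggest_epsilon([a, b]))  # 0.1
```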
Though having intersection-detection code could make some things easier
for the user, like "drop all these spheres and stuff until they touch
the floor" ... yeah, that's not what POV-Ray is for anyway.