CShake enlightened us on 2009-03-19 00:55 -->
> clipka wrote:
>> As a matter of fact, there already is a constant with similar use in
>> POV: The
>> MIN_ISECT_DEPTH (currently hard-coded to 1e-4) is the minimum distance
>> from a
>> ray's origin (camera or an earlier intersection) to accept an
>> intersection; any
>> intersection closer than that is currently ignored, probably mainly to
>> avoid
>> "re-intersecting" at the ray's origin.
> Ah, that's why I was getting what I thought was a clipping issue with a
> very small distance between two image_map (filtering) surfaces (1e-5 or
> so, since it was a distance of less than a mm while my base unit was the
> ft). I sorta figured that was what was happening when it disappeared
> after a change in scale.
>
>> But I guess attempting to auto-tune those key distances would be faced
>> with too
>> many unknowns. Distance between camera and look_at seems like the best
>> reference value to me for such a purpose (or focal plane if focal blur
>> is used
>> in the shot), but still someone might e.g. have chosen look_at just
>> because the
>> direction was ok, without paying attention to its distance. Or a shot
>> may have
>> look_at point somewhere in the distance, but highly detailed stuff
>> visible in
>> the foreground.
> Exactly, it has too many potential problems to pick any single method. I
> know I've made my camera look_at by normalizing a vector of the right
> direction and adding it to the location, though it is infrequent. I've
> almost never used a sky_sphere, since I either use a box to contain an
> interior scene or a real sphere when doing HDRI lighting (for brightness
> control). Focal blur is usually not turned on until final tests and the
> 'production' render, so any initial shape testing would not trigger it.
> Using an arbitrarily large object just to have a nearly infinite backdrop
> yet still be able to do something to it with CSG happens too, so the
> largest-object method I threw out there has its own set of problems. Even
> more so when someone uses a big difference CSG and doesn't manually set a
> bounding box.
>
> Having a loop that runs through all the possible intersections in the
> scene and then maybe finding the number of 'intersections' (if only
> between objects that have any transmit or filter, maybe even just with
> defined ior) and the amount of overlap for each, then setting the value
> to catch some percentage of them would be a bear to code. It could also
> slow down parsing significantly.
>
> Though having intersection detection code could make some things easier
> for the user, like "drop all these spheres and stuff until they touch
> the floor" ... yeah, that's not what povray is for anyway.
And there is the case where you have a camera with a look_at point one unit in
front of it that gets rotated and translated to its final orientation and
location. That location may be 1000 units from the main objects of the scene.
--
Alain
-------------------------------------------------
You know you've been raytracing too long when your 18 year-old daughter asks if
she can marry one of the POV Team, and you give her your complete blessing.
Taps a.k.a. Tapio Vocadlo