"Bald Eagle" <cre### [at] netscapenet> wrote:
> "Kenneth" <kdw### [at] gmailcom> wrote:
> > IMO, min_extent/max_extent mistakenly begin working in reverse at that
> > point, sensing the now-'far' point as the 'near' one, for example. That
> > doesn't seem right.
> Maybe you need to define a _relative_ "min" and "max" based on the
> distances of the corners from the camera. Then I think it should all
> work out.
Yep, that sounds like the trick to use. Thanks.
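Something like this is what I'll try, I suppose-- a rough, untested sketch. It just
runs through all eight corners of the object's bounding box (built from min_extent
and max_extent) and keeps the smallest and largest distances to the camera; 'Cam'
here stands in for wherever the main camera happens to be:

```pov
// Sketch: camera-RELATIVE near/far distances, from the bounding-box corners.
// Obj = the object; Cam = the main camera's current location (my placeholder name).
#macro NearFarRatio (Obj, Cam)
  #local Lo = min_extent (Obj);
  #local Hi = max_extent (Obj);
  // all 8 corners of the axis-aligned bounding box
  #local Corners = array[8] {
    <Lo.x, Lo.y, Lo.z>, <Hi.x, Lo.y, Lo.z>,
    <Lo.x, Hi.y, Lo.z>, <Hi.x, Hi.y, Lo.z>,
    <Lo.x, Lo.y, Hi.z>, <Hi.x, Lo.y, Hi.z>,
    <Lo.x, Hi.y, Hi.z>, <Hi.x, Hi.y, Hi.z>
  }
  #local Dnear = vlength (Corners[0] - Cam);
  #local Dfar  = Dnear;
  #for (I, 1, 7)
    #local D = vlength (Corners[I] - Cam);
    #if (D < Dnear) #local Dnear = D; #end
    #if (D > Dfar)  #local Dfar  = D; #end
  #end
  // the macro 'returns' the ratio, which can never exceed 1.0
  (Dnear/Dfar)
#end
```

That way the 'near' and 'far' corners are re-chosen every frame from the camera's
actual position, instead of relying on which corner min_extent/max_extent happens
to report.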
I'm still mighty curious about why min_extent and max_extent (as used or derived
from the moving 'MAIN camera' location) 'flip' their meaning of 'near' and
'far', depending on the relative locations of camera and object. (I'm
simply using my L_1/L_2 equation to gauge that-- it seems like an accurate and
simple-enough test, as it's just the ratio of min_extent/max_extent measured
from the camera position-- which *should* never go past 1.0.) Given that the
found min and max points are simply coordinates in space, why doesn't, say,
min_extent *always* pick the bounding-box corner closest to the camera? Yet the
'near'/'far' relationship never seems to change for an object, as if it were set
in stone the moment the object is made (the coordinates apparently relative to
the origin at <0,0,0>). Or else min_extent and max_extent aren't