  Re: Chromadepth scaling to model  
From: Mr
Date: 19 Feb 2018 04:45:00
Message: <web.5a8a9ba8b61f565716086ed00@news.povray.org>
"Kenneth" <kdw### [at] gmailcom> wrote:
> Mike Horvath <mik### [at] gmailcom> wrote:
> > I ended up using this:
> >
> >
> > // This script assumes the depth effect is *linear* from near to far,
> > // which may not be the case.
> >
>
> The idea looks interesting! And your model proves that it works.
>
> In the interim, I had a brainstorm. I had to work it out graphically on
> paper, discarding one idea after another, but finally came up with a solution.
> It's more complex than yours, but it doesn't have any fudge factor that I know
> of. I have to describe it in words; if you can put it into an equation, kudos
> ;-) (I'm really tired and the ol' brain is fizzling out...)
>
> 1) Choose a camera position; it can be anywhere. (So can the object.)
>
> 2) Get the bounding-box coordinates of the object-- the farthest and nearest
> corner locations, whatever they happen to be.
>
> 3) Find the vlength from the camera to the nearest bounding-box corner. Call it L-1.
>
> 4) Find the vlength from the camera to the farthest bounding-box corner. Call it L-2.
>
> 5) L-1 / L-2 = S.   This will be somewhere between 0.0 and 1.0-- it depends on
> how far your object is from the camera. The farther apart they are, the larger
> this will be.
>
> 6)  Then, (1.0 - S) = T
>
> 7) Using T, change your spherical color_map's original 0.0-to-1.0 index values
> to a more 'squashed' version, closer to the outer spherical 'surface'. (And don't
> scale it up yet.)
>
>      [0.0 BLUE] // outer radius of spherical pattern
>      [.5*T WHITE]
>      [T  RED ]  // what used to be the center point of the pattern; now farther
>                 // out toward the edge
>
> 8) Now scale up the spherical pattern by L-2. That *should* put the outer
> edge of the BLUE at or near the object's farthest bounding-box corner; and RED
> should begin at or near the nearest corner.
>
> CAVEAT: No guarantees are implied... :-P
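
For reference, the numbered steps above translate roughly into the SDL sketch
below. It is an untested sketch: "MyObject" and the camera location are
placeholder assumptions, and only the two corners returned by
min_extent()/max_extent() are measured rather than all eight corners of the
bounding box.

    #include "colors.inc"

    #declare CamPos = <0, 2, -10>;          // placeholder camera position
    #declare Obj    = object { MyObject }   // placeholder object

    camera { location CamPos look_at <0, 0, 0> }

    // Steps 2-4: bounding box and distances to its extreme corners.
    #declare BBmin = min_extent(Obj);
    #declare BBmax = max_extent(Obj);
    #declare L1 = min(vlength(BBmin - CamPos), vlength(BBmax - CamPos));
    #declare L2 = max(vlength(BBmin - CamPos), vlength(BBmax - CamPos));

    // Steps 5-6.
    #declare S = L1 / L2;
    #declare T = 1 - S;

    // Step 7 ("squashed" map) and step 8 (scale by L-2).
    // The spherical pattern is 1 at its centre and 0 at radius 1, so after
    // scaling by L-2, index 0.0 sits at the farthest corner and index T
    // sits at distance L-1, the nearest corner.
    object {
        Obj
        pigment {
            spherical
            color_map {
                [ 0.0   Blue  ]  // outer radius: farthest corner
                [ 0.5*T White ]
                [ T     Red   ]  // begins at the nearest corner
            }
            scale L2
            translate CamPos     // centre the pattern on the camera
        }
    }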


Maybe this is asking a lot, but I believe such features should be built into
POV-Ray, with a pass system where it could render a z (depth) pass, a motion
vector pass, an ambient occlusion (proximity pattern) pass, etc., so they would
be rendered in the same process as the main image, stored as secondary layers
when using the EXR format, and be usable for compositing.


