Subject: Re: Chromadepth scaling to model
From: Kenneth
Date: 17 Feb 2018 11:40:01
Message: <web.5a885a2fb61f5657a47873e10@news.povray.org>
Mike Horvath <mik### [at] gmailcom> wrote:

>
>  scale vlength(CameraLocation - CameraLookAt) * 2
>  translate CameraLocation

Just from sketching some graphical ideas on paper, I think there are probably
two problems involved-- although I hope that I'm understanding the intended use
of your macro!

The spherical pattern by default occupies a sphere of radius 1.0. The macro is
finding the distance from the *center* of that sphere to the center of your
model (I'm guessing that's what you have in mind, anyway.) Then the center of
the scaled-up pattern is translated to the camera location. So far, the idea
sounds OK. But the spherical pattern's outer 'surface' is already 1 unit away
from the camera location to start with-- so I'm wondering how far that 'new'
outer surface extends. And it looks like the *2 multiplier is making it much
larger than it needs to be(?)
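
To make sure I'm picturing it correctly, here's roughly how I imagine the
pattern part of your macro looks-- this is only my own sketch, with made-up
camera values, not your actual code:

  #declare CameraLocation = <0, 2, -10>;  // placeholder values for this sketch
  #declare CameraLookAt   = <0, 0, 0>;

  pigment {
    spherical   // value 1.0 at the center, falling to 0.0 at radius 1.0
    color_map {
      [0.0 color srgb <0,0,1>]   // far end of the gradient
      [0.5 color srgb <1,1,1>]
      [1.0 color srgb <1,0,0>]   // right at the camera
    }
    scale vlength(CameraLocation - CameraLookAt) * 2   // the *2 in question
    translate CameraLocation
  }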

If that's the case, then the color_map entries are also getting 'stretched out'.
Maybe they need pre-compressing, so to speak.

Instead of this...
          [0.0 color srgb <0,0,1>]
          [0.5 color srgb <1,1,1>]
          [1.0 color srgb <1,0,0>]

... maybe *something* like this...

          [0.4 color srgb <0,0,1>]
          [0.5 color srgb <1,1,1>]
          [0.6 color srgb <1,0,0>]
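
Those 0.4/0.6 positions are just eyeballed, of course. If pre-compressing turns
out to be the right approach, the entry positions could even be computed from
the distances instead of guessed-- something like the following, where
ModelRadius is a name I'm inventing for the rough half-depth of your model:

  #declare Dist        = vlength(CameraLocation - CameraLookAt);
  #declare ModelRadius = 2;   // made-up value: roughly half the model's depth

  // With the pattern scaled by Dist*2, a point d units from the camera sits
  // at pattern value 1 - d/(Dist*2), so the model's near and far sides are:
  #declare NearVal = 1 - (Dist - ModelRadius)/(Dist*2);
  #declare FarVal  = 1 - (Dist + ModelRadius)/(Dist*2);

  color_map {
    [FarVal  color srgb <0,0,1>]   // farthest point of the model
    [0.5     color srgb <1,1,1>]   // the look_at point
    [NearVal color srgb <1,0,0>]   // nearest point of the model
  }

(With Dist = 10 and ModelRadius = 2, that works out to 0.4 and 0.6-- the same
numbers as above.)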

Or maybe the vlength formula should be

scale vlength(CameraLocation - CameraLookAt) - 1

???

