<Mienai> wrote in message
news:web.44166001d29a596193db03e80@news.povray.org...
> So I scanned some real objects with two colors from two different angles
> to
> produce an image that should more or less be a normal map. I'm wondering
> if anyone knows of a quick/easy way to convert these images into height
> fields. I think I could run it through some 4-dimensional derivative to
> get something right, but I'm not positive.
>
> with the y-axis being straight up, I have a red light coming from the x
> direction and a green light from the z. My vectors are <red,green> with
> values from 0-255 with <128,128> being straight up, <255,128> pointing to
> positive x, and <128,255> pointing to positive z.
>
Hi Mienai,
I think there are a few problems with this approach.
One big one is that the images don't contain some of the information you
would need to create a height field - in particular, the height itself. You
would need to know the distance of each point from the camera, which the
colour information in the images doesn't give you.
Except for certain shapes, the colour information doesn't give you all of
the normal information either, because a colour of <200,200> could belong to
a normal pointing slightly up or slightly down - the vertical component is
only determined up to sign.
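To make that sign ambiguity concrete, here's a small sketch (Python chosen just for illustration; the linear 0-255 encoding with <128,128> meaning "straight up" is taken from your description, and the y-up/x-red/z-green channel assignment is assumed):

```python
import math

def decode(red, green):
    # Map 0-255 channel values to components in roughly [-1, 1],
    # with <128,128> meaning "straight up" (y-axis), per the post.
    nx = (red - 128) / 127.0
    nz = (green - 128) / 127.0
    # The vertical (y) component is only determined up to sign:
    # both +ny and -ny would render to the same <red,green> colour.
    s = max(1.0 - nx * nx - nz * nz, 0.0)
    ny = math.sqrt(s)
    return (nx, ny, nz), (nx, -ny, nz)

# A colour of <200,200> yields two equally valid normals,
# one tilted slightly above horizontal and one slightly below.
up, down = decode(200, 200)
```

Both returned normals are consistent with the same pixel colour, which is why the two images alone can't pin the surface orientation down.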
Even if you introduced a third colour channel and could recover good
approximate normals, you would still have to make some quite unreliable
assumptions about the contours and smoothness of the surface to derive
positional information from them. For example, it would still be possible to
have a step change in the surface height that is imperceptible in the image.
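If you were prepared to accept those smoothness assumptions anyway, the usual trick is to turn the normals into per-pixel slopes and integrate them. Here's a minimal path-integration sketch (my own illustration, not a robust method; the slope formulas dh/dx = -nx/ny, dh/dz = -nz/ny assume ny > 0 everywhere, i.e. the surface never tilts past horizontal):

```python
def integrate_heights(gx, gz):
    """Naively integrate per-pixel slopes into a height field.

    gx[i][j] and gz[i][j] are the slopes dh/dx and dh/dz, which you
    could estimate from decoded normals as -nx/ny and -nz/ny.
    Integration fixes heights only up to a constant per path, so a
    step discontinuity in the real surface is silently smoothed over.
    """
    rows, cols = len(gx), len(gx[0])
    h = [[0.0] * cols for _ in range(rows)]
    # Integrate down the first column, then across each row.
    for i in range(1, rows):
        h[i][0] = h[i - 1][0] + gz[i][0]
    for i in range(rows):
        for j in range(1, cols):
            h[i][j] = h[i][j - 1] + gx[i][j]
    return h
```

Note that different integration paths can disagree wherever the slope data is noisy or inconsistent - which is exactly where the unreliable-assumptions problem shows up in practice.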
Regards,
Chris B.