  Re: Convert a normal map to a height field?  
From: Mienai
Date: 14 Mar 2006 13:40:00
Message: <web.44170b34925a6bc093db03e80@news.povray.org>
"Chris B" <c_b### [at] btconnectcomnospam> wrote:
> <Mienai> wrote in message
> news:web.44166001d29a596193db03e80@news.povray.org...
> > So I scanned some real objects with two colors from two different angles
> > to
> > produce an image that should more or less be a normal map.  I'm wondering
> > if anyone knows of a quick/easy way to convert these images into height
> > fields.  I think I could run it through some 4-dimensional derivative to
> > get something right, but I'm not positive.
> >
> > with the y-axis being straight up, I have a red light coming from the x
> > direction and a green light from the z.  My vectors are <red,green> with
> > values from 0-255 with <128,128> being straight up, <255,128> pointing to
> > positive x, and <128,255> pointing to positive z.
> >
>
> Hi Mienai,
>
> I think there are a few problems with this approach.
>
> One biggy is that the images won't contain some of the information you would
> need in order to create a height field - In particular the height. You would
> need to know the distance of a point from the camera, which the colour
> information in the images doesn't give you.
> With the exception of certain shapes the colour information doesn't give you
> all of the normal information either, because a colour of <200,200> could be
> pointing slightly up or slightly down.
>
> Even if you introduced a third colour and could get good approximate normal
> information you would have to make some quite unreliable assumptions about
> the contours and smoothness of the surface to attempt to derive positional
> information from that. For example, it would still be possible to have a
> step change in the surface height that is imperceptible in the image.
>
> Regards,
> Chris B.

"Chris B" <c_b### [at] btconnectcomnospam> wrote:
> Hi Mienai,
>
> I think there are a few problems with this approach.
>
> One biggy is that the images won't contain some of the information you would
> need in order to create a height field - In particular the height. You would
> need to know the distance of a point from the camera, which the colour
> information in the images doesn't give you.
> With the exception of certain shapes the colour information doesn't give you
> all of the normal information either, because a colour of <200,200> could be
> pointing slightly up or slightly down.
>
> Even if you introduced a third colour and could get good approximate normal
> information you would have to make some quite unreliable assumptions about
> the contours and smoothness of the surface to attempt to derive positional
> information from that. For example, it would still be possible to have a
> step change in the surface height that is inperceptible in the image.
>
> Regards,
> Chris B.

The way I was thinking of it was that the normal provides the delta in
height (the slope), which is why I was thinking I could just run the image
through an integration (the antiderivative, hence the +c) to get an
approximate height up to some constant (basic calculus, right?).  While I
agree that there are going to be some inconsistencies (especially on
vertical slopes), I still think this should give relatively accurate
results, at least accurate enough to model surfaces with a height map.
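
For what it's worth, here is a rough sketch of that integration idea in
Python/NumPy. It is only my own illustration, not anything from POV-Ray: it
assumes the two channels have already been decoded into per-pixel slopes
dh/dx and dh/dz (a decoding sketch is further down), and the naive
cumulative sums will drift on noisy data.

import numpy as np

def integrate_slopes(slope_x, slope_z, c=0.0):
    # slope_x[i,j] ~ dh/dx and slope_z[i,j] ~ dh/dz at pixel (i,j),
    # both 2-D NumPy arrays of the same shape.
    # Integrate each direction independently with a running sum...
    h_from_x = np.cumsum(slope_x, axis=1)
    h_from_z = np.cumsum(slope_z, axis=0)
    # ...then average the two estimates; 'c' is the free "+c" constant.
    return (h_from_x + h_from_z) / 2.0 + c

The result could then be rescaled to 0-255 and written out as a grayscale
image for use with height_field.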

As for not all the info being contained, maybe I didn't make myself clear:
the y component of the normal never points down, but since it's the normal,
the slope can still decrease.  The point <200,200> would be pointing halfway
between positive x and z with a slight incline in the y direction.
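
In case it helps, here is how that decoding could look, again just a sketch
under the encoding described above (<128,128> straight up, <255,128> toward
+x, <128,255> toward +z), with the y component reconstructed by assuming a
unit-length normal that never points below the horizon:

import numpy as np

def decode_normals(red, green):
    # red, green: 0-255 NumPy arrays taken from the two image channels.
    # Map the channel values to normal components in roughly [-1, 1].
    nx = (red.astype(float) - 128.0) / 127.0
    nz = (green.astype(float) - 128.0) / 127.0
    # Rebuild the y component from unit length; clip guards against noise.
    ny = np.sqrt(np.clip(1.0 - nx**2 - nz**2, 0.0, None))
    ny = np.maximum(ny, 1e-6)  # avoid division by zero on near-vertical pixels
    # For a height field h(x,z) the normal is proportional to
    # (-dh/dx, 1, -dh/dz), so the slopes are:
    return -nx / ny, -nz / ny

For the <200,200> example that gives nx = nz of about 0.57 and ny of about
0.6, i.e. tilted halfway between +x and +z but still pointing upward, which
matches what I described.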

I'm still looking for a good way to convert this.

