"Chris B" <c_b### [at] btconnect com nospam> wrote:
> <Mienai> wrote in message
> news:web.44166001d29a596193db03e80@news.povray.org...
> > So I scanned some real objects with two colors from two different angles
> > to
> > produce an image that should more or less be a normal map. I'm wondering
> > if anyone knows of a quick/easy way to convert these images into height
> > fields. I think I could run it through some 4-dimensional derivative to
> > get something right, but I'm not positive.
> >
> > with the y-axis being straight up, I have a red light coming from the x
> > direction and a green light from the z. My vectors are <red,green> with
> > values from 0-255 with <128,128> being straight up, <255,128> pointing to
> > positive x, and <128,255> pointing to positive z.
> >
>
> Hi Mienai,
>
> I think there are a few problems with this approach.
>
> One biggy is that the images won't contain some of the information you would
> need in order to create a height field - in particular, the height itself. You
> would need to know the distance of a point from the camera, which the colour
> information in the images doesn't give you.
> With the exception of certain shapes the colour information doesn't give you
> all of the normal information either, because a colour of <200,200> could be
> pointing slightly up or slightly down.
>
> Even if you introduced a third colour and could get good approximate normal
> information you would have to make some quite unreliable assumptions about
> the contours and smoothness of the surface to attempt to derive positional
> information from that. For example, it would still be possible to have a
> step change in the surface height that is imperceptible in the image.
>
> Regards,
> Chris B.
"Chris B" <c_b### [at] btconnect com nospam> wrote:
> Hi Mienai,
>
> I think there are a few problems with this approach.
>
> One biggy is that the images won't contain some of the information you would
> need in order to create a height field - in particular, the height itself. You
> would need to know the distance of a point from the camera, which the colour
> information in the images doesn't give you.
> With the exception of certain shapes the colour information doesn't give you
> all of the normal information either, because a colour of <200,200> could be
> pointing slightly up or slightly down.
>
> Even if you introduced a third colour and could get good approximate normal
> information you would have to make some quite unreliable assumptions about
> the contours and smoothness of the surface to attempt to derive positional
> information from that. For example, it would still be possible to have a
> step change in the surface height that is imperceptible in the image.
>
> Regards,
> Chris B.
The way I was thinking of it was that the normal provides the delta in
height (the slope), which is why I was thinking I could just run the image
through an integration (an antiderivative, so the height is only known up to
some +c constant - basic calculus, right?). While I agree that there will be
some inconsistencies (especially on vertical slopes), I still think this
should give relatively accurate results, at least accurate enough to model
surfaces with a height map.

As for not all the information being contained, maybe I didn't make myself
clear: the y vector never points down, but since it's the normal, the slope
can decrease. The point <200,200> would be pointing halfway between positive
x and z with a slight incline in the y direction.

I'm still looking for a good way to convert this.
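One way to sketch that "integrate the slopes, up to a +c" idea is a least-squares
gradient integration in the frequency domain (the Frankot-Chellappa method),
which sidesteps the path-dependence you'd get from naively summing along rows.
This is only a sketch, not code from the thread: it assumes numpy, assumes the
colour encoding described above (<128,128> straight up, <255,128> full tilt
toward +x, <128,255> full tilt toward +z), and the function name is made up.

```python
import numpy as np

def normals_to_height(red, green):
    """Recover a relative height field from the two colour channels.

    red, green: 2-D uint8 arrays of equal shape (rows = z, cols = x).
    Returns a float height field, defined only up to an additive constant.
    """
    # Decode colour values (0-255, 128 = zero tilt) to normal components in [-1, 1].
    nx = (red.astype(float) - 128.0) / 127.0
    nz = (green.astype(float) - 128.0) / 127.0
    # The y component is implied: the normal never points down, so ny >= 0.
    ny = np.sqrt(np.clip(1.0 - nx**2 - nz**2, 1e-6, 1.0))

    # For a surface y = h(x, z), the normal is proportional to
    # <-dh/dx, 1, -dh/dz>, so the slopes are:
    p = -nx / ny   # dh/dx
    q = -nz / ny   # dh/dz

    # Frankot-Chellappa: find the h whose gradients best match (p, q)
    # in a least-squares sense, solved in the Fourier domain.
    rows, cols = p.shape
    wx = np.fft.fftfreq(cols) * 2.0 * np.pi   # angular frequency along x
    wz = np.fft.fftfreq(rows) * 2.0 * np.pi   # angular frequency along z
    WX, WZ = np.meshgrid(wx, wz)
    denom = WX**2 + WZ**2
    denom[0, 0] = 1.0                         # avoid divide-by-zero at DC
    Fp = np.fft.fft2(p)
    Fq = np.fft.fft2(q)
    Fh = (-1j * WX * Fp - 1j * WZ * Fq) / denom
    Fh[0, 0] = 0.0                            # the "+c": absolute height is unknown
    return np.real(np.fft.ifft2(Fh))
```

The DC term being zeroed out is exactly the +c from the calculus argument, and
the vertical-slope problem shows up as ny approaching zero, where the decoded
slopes blow up; the clip keeps the maths finite but the height there is unreliable.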