POV-Ray : Newsgroups : povray.general : Generate New Kind of Normal Map
From: Josh
Subject: Generate New Kind of Normal Map
Date: 11 May 2020 13:25:01
Message: <web.5eb98959b86ff09fdc1270cd0@news.povray.org>
Hi all,

I was able to create a good looking normal map thanks to the great discussion
and code in the following thread. Thanks for that!

http://news.povray.org/povray.binaries.images/thread/%3Cweb.5d9699cfbd113864eec112d0%40news.povray.org%3E

This works perfectly for certain shapes: spheres, cones, etc. But for shapes
where the surface is very distorted, I need a different effect.

Imagine, for example, a sphere with a very bumpy/distorted surface. On the far
left of the sphere, where the normal would normally point almost directly
left, the surface distortion makes it actually point right.

The code above renders that normal as pointing right, as expected.

However, if you place a light to the far right, this point wouldn't be lit by
the light since it is on the far left of the sphere and blocked by (in the
shadow of) the main body of the sphere.

So I would want that pixel's left-right (red) value to be 127 (the neutral
midpoint of the usual 8-bit encoding, pointing neither left nor right), since
whether a light was on the left or the right of the object, that pixel
wouldn't be lit.

I was able to get decent results by simply rendering the scene twice - once
with red and green lights at the left and top, and once with them at the right
and bottom - then using image-editing software to invert the second image and
combine the two:
  // rendered twice, selecting the lights with a pass variable declared
  // on the command line, e.g. Declare=Pass=2 and Declare=Pass=3
  #switch(Pass)
    #case(2)
      light_source { <-50, 0, 0> color rgb <1, 0, 0> /*shadowless*/ }
      light_source { < 0, 50, 0> color rgb <0, 1, 0> /*shadowless*/ }
    #break
    #case(3)
      light_source { < 50, 0, 0> color rgb <1, 0, 0> /*shadowless*/ }
      light_source { < 0,-50, 0> color rgb <0, 1, 0> /*shadowless*/ }
    #break
  #end

I'm looking for better results and a more elegant solution if possible.

1) Does anyone have any ideas about how such a normal map could be rendered
directly? (Like the example above, but where normals that would normally point
in some direction, yet are shadowed from light in that direction, are given a
value of 127...)

2) If that's too hard, I would just like to improve my results. I could still
render the normal map above, then render a mask for it and run both through
image software to create the final result. How would I render a mask for the
normal map generated above? I'm envisioning a render where every pixel has a
value proportional to the amount of light falling on it from each cardinal
direction (left, right, top, bottom). Maybe the correct finish settings alone
would achieve this, but I'm not very knowledgeable about all the settings...
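
Something like this is what I have in mind for one pass of such a mask - a
rough sketch, where Asteroid is a stand-in for my actual object and the plain
white, diffuse-only finish is a guess at the right settings:

  // one "mask" pass: plain white, purely diffuse object, and a single
  // light from one cardinal direction (repeat for right, top, bottom)
  object { Asteroid               // hypothetical identifier for the model
    pigment { color rgb 1 }
    finish { ambient 0 diffuse 1 specular 0 }
  }
  light_source { <-50, 0, 0> color rgb 1 }   // the "left" pass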

Hope that makes sense!

Josh



From: Bald Eagle
Subject: Re: Generate New Kind of Normal Map
Date: 11 May 2020 18:40:01
Message: <web.5eb9d36bd6104eb1fb0b41570@news.povray.org>
"Josh" <nomail@nomail> wrote:
> Hi all,
>
> I was able to create a good looking normal map thanks to the great discussion
> and code in the following thread. Thanks for that!

Sure thing.  :)   Any idea when we get to see some of the results?  ;)


As for the rest of it.... Whoa.

I don't think there's a way to get the illumination state before rendering -
which is what you need.

So I think you need a two-pass render, but you could use an image_map of the
first pass in your second render - maybe as some sort of mask or filter
between the scene and the camera, like in screen.inc.

The rest here is me just speculating off-the-cuff.

I think what you're going to need to do, if you want to do it all in one go,
is combine all the features you need into either a clever lighting setup with
negative-colored lights and reversed normal definitions, or some sort of
light-less scene where you use emission in a geometrically calculated pigment
pattern based on a function.
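
For the light-less idea, one concrete form it could take is the built-in slope
pattern, which is evaluated from the surface normal at each point - a sketch
for the red (left-right) channel, with placeholder colors:

  // slope { x } rises from 0 to 1 as the surface normal swings from -x
  // to +x, so pure emission can encode the normal's x component as red
  texture {
    pigment {
      slope { x }
      color_map { [0 color rgb <0, 0, 0>] [1 color rgb <1, 0, 0>] }
    }
    finish { emission 1 diffuse 0 }
  }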


I'm just thinking through the process here - the following may not be the actual
best way to go about it.  But I'll go through it just so you can see the logic
(or insanity) involved.

I did a scene to encode the height of an object - to make a heightfield - and
that approach obviously occluded any regions under a concavity. I did it in
two ways: one with a gradient pigment pattern, and the other with an
orthographic camera and boxes pigmented based on the result of the trace()
function.
Now trace() not only gives you the position of the _first_ surface it
intersects (so you can simulate light - and shadow) - it also returns the
normal of that surface.
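
For instance, something along these lines - a sketch, where Asteroid and P are
hypothetical stand-ins for the object and the surface point being tested:

  #declare LightPos = <-50, 0, 0>;
  #declare Norm     = <0, 0, 0>;
  // fire a ray from the "light" toward the surface point P; trace()
  // returns the first intersection and fills Norm with its normal
  #declare Hit = trace(Asteroid, LightPos, vnormalize(P - LightPos), Norm);
  #if (vlength(Norm) > 0 & vlength(Hit - P) < 0.001)
    // P is the first surface the light sees: it is lit from this
    // direction, so keep the encoded normal
  #else
    // P is missed or shadowed from this direction: encode neutral 127
  #end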

There's also eval_pigment(), which returns the base pigment color at a given point.

So maybe you could create a series of objects acting as the "pixels" of an
image, colored by a scanning macro whose "from" vector is where your light
source would be. You would then have a union {} of box {} objects, all tightly
packed into a sheet that perfectly fits the pixels of the screen (1 pixel = 1
POV unit). But you don't have to render that - it just gets stored as an
object definition in memory.
Now you can create another similar object based on a different "light source"
using the trace() function from a different direction, and so on, until you have
all the data you need.

Then you can use eval_pigment to look at the pigment value of each of those
sheets at each pixel location and average them together to pigment a final sheet
of box{} objects that you DO render....
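
As a sketch of that last step (SheetA_Pig and SheetB_Pig standing in for the
pigments of two per-light sheets, W and H for the image size in pixels):

  #include "functions.inc"   // provides eval_pigment()

  #declare Final = union {
    #for (Y, 0, H - 1)
      #for (X, 0, W - 1)
        #local P = <X + 0.5, Y + 0.5, 0>;
        // average the two per-light encodings at this pixel's center
        #local C = (eval_pigment(SheetA_Pig, P)
                  + eval_pigment(SheetB_Pig, P)) / 2;
        box { <X, Y, 0>, <X + 1, Y + 1, 0.1>
          pigment { color C }
          finish { emission 1 diffuse 0 }   // render with no lights at all
        }
      #end
    #end
  }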

I'll let you digest all of that.

Diagrams and detailed descriptions of what you need and the value ranges would
help figure this all out...



From: Josh
Subject: Re: Generate New Kind of Normal Map
Date: 11 May 2020 19:00:00
Message: <web.5eb9d880d6104eb1dc1270cd0@news.povray.org>
"Bald Eagle" <cre### [at] netscapenet> wrote:
> "Josh" <nomail@nomail> wrote:
> > Hi all,
> >
> > I was able to create a good looking normal map thanks to the great discussion
> > and code in the following thread. Thanks for that!
>
> Sure thing.  :)   Any idea when we get to see some of the results?  ;)
>
>
> As for the rest of it.... Whoa.
>
> I don't think there's a way to get the illumination state before rendering -
> which is what you need.
>
> So I think you need a 2-part render, but you could use an image_map of the first
> in your second render - maybe as some sort of mask or filter in between the
> scene and the camera, like in screen.inc.
>
> The rest here is me just speculating off-the-cuff.
>
> I think what you're going to need to do if you want to do it all in one go is
> come up with a way to combine all the features that you need in either a clever
> lighting setup with negative-colored lights, and reversed normal definitions, or
> some sort of light-less scene where you use emission in a geometrically
> calculated pigment pattern based on a function.
>
>
> I'm just thinking through the process here - the following may not be the actual
> best way to go about it.  But I'll go through it just so you can see the logic
> (or insanity) involved.
>
> I did a scene to encode the height of an object - to make a heightfield, and
> that obviously occluded any regions under a concavity.  I did that in two ways -
> one was with a gradient pigment pattern, and the other was with an orthographic
> camera and boxes pigmented based on the result of the trace() function.
> Now trace() not only gives you the position of the _first_ surface it intersects
> (so you can simulate light --- and shadow) but it also returns the normal of
> that surface.
>
> There's also eval_pigment which returns the base pigment color of that point.
>
> So maybe you could create a series of objects which were the "pixels" of an
> image that were colored based on a scanning macro with the "from" vector being
> where your light source would be.  So you would then have a union {} of box{}
> objects all tightly packed into a sheet that perfectly fit the pixels of the
> screen (1 pixel = 1 POV unit).   But you don't have to render that - it just
> gets stored as an object definition in memory.
> Now you can create another similar object based on a different "light source"
> using the trace() function from a different direction, and so on, until you have
> all the data you need.
>
> Then you can use eval_pigment to look at the pigment value of each of those
> sheets at each pixel location and average them together to pigment a final sheet
> of box{} objects that you DO render....
>
> I'll let you digest all of that.
>
> Diagrams and detailed descriptions of what you need and the value ranges would
> help figure this all out...

Ya, I'll have to think about it - it may be too involved for what I need. It's
only one asteroid that's greatly affected; for the others, the 'regular'
normal map should be mostly OK.

I'm not done yet, but here are some samples of asteroids...

http://grauman.com/aster/crackled.png
http://grauman.com/aster/cratered.png
http://grauman.com/aster/emerald.png
http://grauman.com/aster/goldvein.png
http://grauman.com/aster/greenvein.png
http://grauman.com/aster/liquidgold.png
http://grauman.com/aster/metal.png
http://grauman.com/aster/rocky_blue.png
http://grauman.com/aster/rocky.png
http://grauman.com/aster/ruts.png

Josh


