  Re: Generate New Kind of Normal Map  
From: Bald Eagle
Date: 11 May 2020 18:40:01
Message: <web.5eb9d36bd6104eb1fb0b41570@news.povray.org>
"Josh" <nomail@nomail> wrote:
> Hi all,
>
> I was able to create a good looking normal map thanks to the great discussion
> and code in the following thread. Thanks for that!

Sure thing.  :)   Any idea when we get to see some of the results?  ;)


As for the rest of it.... Whoa.

I don't think there's a way to get the illumination state before rendering -
which is what you need.

So I think you need a two-pass render, but you could use an image_map of the
first pass in your second render - maybe as some sort of mask or filter in
between the scene and the camera, like in screen.inc.
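
Something along these lines for the second pass, maybe (the filename,
resolution, and placement are all just placeholders) - an orthographic camera
looking at a screen-filling plane that carries the first render as an
image_map; the rest of the second-pass scene would sit behind it, and the
pigment could carry filter/transmit values if it's meant to act as a mask:

// Hypothetical second pass: re-use the first render (saved as "pass1.png")
// as an image_map on a plane that exactly fills the view.
#declare ImgW = 800;   // must match the first pass's output resolution
#declare ImgH = 600;

camera {
  orthographic
  location <0, 0, -1>
  look_at  <0, 0, 0>
  right    x*ImgW/ImgH
  up       y
}

plane {
  -z, 0
  pigment {
    image_map { png "pass1.png" once }   // add "filter all 0.5" etc. for a mask
    translate <-0.5, -0.5, 0>            // image_map spans <0,0>..<1,1>
    scale     <ImgW/ImgH, 1, 1>          // stretch to the camera's aspect ratio
  }
  finish { emission 1 diffuse 0 }        // show the image unlit
}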

The rest here is me just speculating off-the-cuff.

I think what you're going to need to do, if you want to do it all in one go, is
come up with a way to combine all the features you need in either a clever
lighting setup with negative-colored lights and reversed normal definitions, or
some sort of light-less scene where you use emission with a geometrically
calculated pigment pattern based on a function.
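
For the light-less version, I'm imagining something along these lines - the
function here is just a stand-in for whatever geometric calculation you'd
actually use:

// No light_source at all - brightness comes purely from emission,
// driven by a user-defined function.  Fake_Light is only a placeholder.
#declare Fake_Light = function (x, y, z) { max (0, min (1, 0.5 + 0.5*y)) }

sphere {
  <0, 0, 0>, 1
  texture {
    pigment {
      function { Fake_Light (x, y, z) }
      color_map { [0 rgb 0] [1 rgb 1] }
    }
    finish { emission 1 diffuse 0 ambient 0 }
  }
}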


I'm just thinking through the process here - the following may not be the actual
best way to go about it.  But I'll go through it just so you can see the logic
(or insanity) involved.

I did a scene to encode the height of an object - to make a heightfield, and
that obviously occluded any regions under a concavity.  I did that in two ways -
one was with a gradient pigment pattern, and the other was with an orthographic
camera and boxes pigmented based on the result of the trace() function.
Now trace() not only gives you the position of the _first_ surface it intersects
(so you can simulate light - and shadow) but it also returns the normal of that
surface.
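
Quick reminder of how that looks in SDL (the object and the ray here are just
examples):

// trace (Object, Start, Direction, Normal) returns the first intersection
// point; if the optional 4th argument is given it gets filled with the
// surface normal there, and stays <0,0,0> when nothing is hit.
#declare MyObject = sphere { <0, 0, 0>, 1 }

#declare Norm = <0, 0, 0>;
#declare Hit  = trace (MyObject, <0, 5, 0>, -y, Norm);

#if (vlength (Norm) > 0)   // always check for a miss before using the result
  #debug concat ("Hit at <", vstr (3, Hit, ", ", 0, 3),
                 ">, normal <", vstr (3, Norm, ", ", 0, 3), ">\n")
#end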

There's also eval_pigment, which returns the base pigment color at a given point.
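
That one lives in functions.inc and takes a pigment plus a point, e.g.:

#include "functions.inc"   // eval_pigment() is a macro defined here

#declare Test_Pigment = pigment { gradient y color_map { [0 rgb 0] [1 rgb 1] } }

// Sample the pigment at a point; the result is a plain colour vector.
#declare C = eval_pigment (Test_Pigment, <0, 0.25, 0>);
#debug concat ("red component = ", str (C.red, 0, 3), "\n")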

So maybe you could create a series of objects which act as the "pixels" of an
image, colored by a scanning macro with the "from" vector being where your light
source would be.  You would then have a union {} of box{} objects all tightly
packed into a sheet that perfectly fits the pixels of the screen (1 pixel = 1 POV
unit).   But you don't have to render that - it just gets stored as an object
definition in memory.
Now you can create another similar object based on a different "light source"
using the trace() function from a different direction, and so on, until you have
all the data you need.
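
Very roughly, and with everything here (resolution, target object, probe
direction) made up purely for illustration, the first sheet might be built
something like this:

// One "pass": a W x H sheet of pixel boxes, each shaded by probing the
// scene with trace() along a pretend light direction.  The raw values
// are also kept in an array for the averaging step later.
#declare Target   = sphere { <32, 24, 20>, 15 }     // placeholder scene
#declare LightDir = vnormalize (<1, 1, -1>);        // pretend "from" direction
#declare W = 64;
#declare H = 48;
#declare Shade_A = array[W][H];

#declare Sheet_From_Light =
union {
  #for (I, 0, W-1)
    #for (J, 0, H-1)
      #declare P    = <I + 0.5, J + 0.5, 20>;
      #declare Norm = <0, 0, 0>;
      #declare Hit  = trace (Target, P - LightDir*100, LightDir, Norm);
      // crude N.L shading; 0 where the probe misses everything
      #declare Shade = (vlength (Norm) > 0 ? max (0, vdot (Norm, -LightDir)) : 0);
      #declare Shade_A[I][J] = Shade;
      box {
        <I, J, 0>, <I+1, J+1, 0.01>
        pigment { rgb Shade }
        finish  { emission 1 diffuse 0 }
      }
    #end
  #end
}
// Not rendered yet - it just sits in memory as an object definition.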

Then you can use eval_pigment to look at the pigment value of each of those
sheets at each pixel location and average them together to pigment a final sheet
of box{} objects that you DO render....
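
One wrinkle: eval_pigment() samples a pigment rather than a whole object, so
in practice it may be easier to keep the per-pixel values in arrays while
building each sheet (as in the sketch above) and average those instead.
Assuming a second array Shade_B filled from another direction, the final sheet
could look like:

// Average the stored per-pixel values from two passes and build the
// sheet that actually gets rendered.  Shade_B is assumed to have been
// filled the same way as Shade_A, just from a different direction.
#declare Final_Sheet =
union {
  #for (I, 0, W-1)
    #for (J, 0, H-1)
      #declare Avg = (Shade_A[I][J] + Shade_B[I][J]) / 2;
      box {
        <I, J, 0>, <I+1, J+1, 0.01>
        pigment { rgb Avg }
        finish  { emission 1 diffuse 0 }
      }
    #end
  #end
}

object { Final_Sheet }   // this one DOES get rendered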

I'll let you digest all of that.

Diagrams and detailed descriptions of what you need and the value ranges would
help figure this all out...


