POV-Ray : Newsgroups : povray.programming : adding normal map from image capability? + other stuff
From: Raiford, Michael
Subject: adding normal map from image capability? + other stuff
Date: 17 Nov 2014 13:30:12
Message: <546a3eb4$1@news.povray.org>
So, I've begun poking and prodding the POV-Ray code a bit lately, due to 
my first foray into adding an on-screen indicator of which blocks are 
actively rendering. (It works, but I'm not proud of how hacky it is for 
some situations :) )

When I say normal map, I mean the RGB values are mapped to the XYZ of a 
surface normal.
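
For reference, the usual normal-map convention (a sketch, not existing POV-Ray code; the names here are illustrative) is to remap each channel from [0,1] to [-1,1] and renormalize the result:

```cpp
#include <cassert>
#include <cmath>

// Illustrative decode of one normal-map pixel: each RGB channel in [0,1]
// is remapped to [-1,1], and the resulting vector is renormalized.
struct Vec3 { double x, y, z; };

Vec3 DecodeNormal(double r, double g, double b)
{
    Vec3 n { 2.0 * r - 1.0, 2.0 * g - 1.0, 2.0 * b - 1.0 };
    double len = std::sqrt(n.x * n.x + n.y * n.y + n.z * n.z);
    // Guard against a degenerate pixel (all channels exactly 0.5).
    if (len > 0.0) { n.x /= len; n.y /= len; n.z /= len; }
    return n;
}
```

The characteristic "flat" normal-map color (0.5, 0.5, 1.0) decodes to the straight-up normal (0, 0, 1) under this convention.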

I've been looking at the bump map code and from what I can see, it 
wouldn't be terribly hard to add this capability, controlled by a switch 
on the bump_map image specification.

Are there any caveats that I am missing here... was there a technical 
reason why this option wasn't implemented?

I'm thinking of adding something like rgb_normals to the bump_map as a 
switch to change how the maps are handled.
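
In scene files, the proposed switch might look something like this (entirely hypothetical syntax, since rgb_normals is only being suggested in this post; the file name is a placeholder):

```
normal {
    bump_map {
        png "my_normals.png"
        rgb_normals   // hypothetical: treat RGB as XYZ of the normal
    }
}
```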

The other thing I am investigating is putting together a client and 
server architecture to allow for clustering, since the front and 
back-end are now separate.

Probably with a "virtual" back-end on the front-end piece that would 
pass to a coordinating server, and a virtual front-end to the server 
piece that would pass the messages to the actual render engine (I think 
I saw a few weeks ago that someone here was doing something similar...)
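
The "virtual back-end" idea above could be sketched roughly as follows (all names hypothetical, not actual POV-Ray interfaces): the front-end talks to an abstract back-end, and the virtual implementation forwards each message through a transport to the coordinating server instead of rendering locally.

```cpp
#include <functional>
#include <string>
#include <utility>

// Hypothetical abstract back-end interface as seen by the front-end.
struct Backend
{
    virtual ~Backend() = default;
    virtual void Submit(const std::string& msg) = 0;
};

// "Virtual" back-end: forwards messages through an injected transport
// (here just a callback, standing in for a real network connection).
struct VirtualBackend : Backend
{
    std::function<void(const std::string&)> send;

    explicit VirtualBackend(std::function<void(const std::string&)> s)
        : send(std::move(s)) {}

    void Submit(const std::string& msg) override { send(msg); }
};
```

On the server side, a mirror-image "virtual front-end" would receive these messages and hand them to the real render engine, so neither end needs to know it is talking over a network.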



From: clipka
Subject: Re: adding normal map from image capability? + other stuff
Date: 4 Dec 2014 04:00:04
Message: <54802294$1@news.povray.org>
On 17.11.2014 19:30, Raiford, Michael wrote:
> So, I've begun poking and prodding the POV-Ray code a bit lately, due to
> my first foray into adding an on-screen indicator of which blocks are
> actively rendering. (It works, but I'm not proud of how hacky it is for
> some situations :) )
>
> When I say normal map, I mean the RGB values are mapped to the XYZ of a
> surface normal.
>
> I've been looking at the bump map code and from what I can see, it
> wouldn't be terribly hard to add this capability, controlled by a switch
> on the bump_map image specification.
>
> Are there any caveats that I am missing here... was there a technical
> reason why this option wasn't implemented?
>
> I'm thinking of adding something like rgb_normals to the bump_map as a
> switch to change how the maps are handled.


The simple reason why it hasn't been implemented yet is the lack of 
those magical round tuits (as in, "I'll do it as soon as I get a round 
tuit").

If you do embark on this, I'd recommend paying attention to object 
and/or texture transformations: how are rotation, shearing and 
anisotropic scaling supposed to affect the feature, and when and how 
does scaling in general affect the "depth" of the effect? The same goes 
for the interaction with the shape itself and/or other normal 
perturbations; these are the areas where I reckon there might be some 
pitfalls.
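
The anisotropic-scaling pitfall in particular can be illustrated with a small sketch (names illustrative, not POV-Ray internals): a surface normal must be transformed by the inverse transpose of the object transform, not by the transform itself. For a diagonal scale S = diag(sx, sy, sz), the inverse transpose is simply diag(1/sx, 1/sy, 1/sz).

```cpp
#include <cassert>
#include <cmath>

struct Vec3 { double x, y, z; };

Vec3 Normalize(Vec3 v)
{
    double len = std::sqrt(v.x * v.x + v.y * v.y + v.z * v.z);
    return Vec3 { v.x / len, v.y / len, v.z / len };
}

// Correct normal transform under a diagonal scale: divide by each scale
// factor (i.e. multiply by the inverse transpose), then renormalize.
Vec3 TransformNormal(Vec3 n, double sx, double sy, double sz)
{
    return Normalize(Vec3 { n.x / sx, n.y / sy, n.z / sz });
}
```

For example, the 45-degree normal (1, 1, 0)/sqrt(2) on an object scaled by (2, 1, 1) becomes (1, 2, 0)/sqrt(5); naively applying the scale itself would instead give (2, 1, 0)/sqrt(5), tilting the normal the wrong way.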


> The other thing I am investigating is putting together a client and
> server architecture to allow for clustering, since the front and
> back-end are now separate.
>
> Probably with a "virtual" back-end on the front-end piece that would
> pass to a coordinating server, and a virtual front-end to the server
> piece that would pass the messages to the actual render engine (I think
> I saw a few weeks ago that someone here was doing something similar...)

If you do want to invest time into this, please contact Chris Cason, as 
there are already some ideas and concepts floating around among the dev 
team. (As a matter of fact the whole architectural overhaul for 3.7 was 
originally geared towards this; only later did the issue of SMP come up 
and get priority.)



Copyright 2003-2023 Persistence of Vision Raytracer Pty. Ltd.