POV-Ray : Newsgroups : povray.general : Height field to mesh - more resolution
  Re: Height field to mesh - more resolution  
From: Bald Eagle
Date: 2 Dec 2023 18:40:00
Message: <web.656bbf5c2c1eda441f9dae3025979125@news.povray.org>
"Kenneth" <kdw### [at] gmailcom> wrote:

> Of course, I could pull out the two pieces of information that I want-- PRIOR to
> turning the image into a function-- by using eval_pigment + loop on the image or
> image_map itself.

You can.  Which, in most regards, is not really any different from what we're
talking about here.
If you look at the old thread and the source code I quoted, eval_pigment is
making a function {pigment {image_map}} and using that to get your pixel info.

> But Ingo's comment implies that there is a code trick
> POST-function to get (at least one of?) those values, which he would then enter
> into an array.

Ingo said, "you can use an image as a pattern and you can use a pattern in a
function. Then
you can sample the "image function" to fill arrays. But, the image will already
be interpolated."

All he's doing there is your loop / eval_pigment thing.
The point he's trying to make is that stuff under the hood is going to be doing
interpolation between pixels, which may or may not be wanted.
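
For what it's worth, that interpolation can be controlled in the image_map
itself.  A minimal sketch (the file name is a placeholder):

// No interpolate keyword: nearest-pixel sampling - best when you want
// the exact stored pixel values
#declare ImgRaw = pigment {image_map {png "MY_IMAGE.png" once} }
// interpolate 2 = bilinear smoothing between neighboring pixels
#declare ImgSmooth = pigment {image_map {png "MY_IMAGE.png" once interpolate 2} }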

> > The image is never "in function form"...
>
> 'under the hood' that is, if I understand your older comments...but the user
> creates and sees a single entity, 'the function', at the SDL-code level

Yes.  The function.
But the function is the function.   The image still remains untouched.

The function is merely a pointer into the image, like a computer-controlled
laser pointer might be used to light up a certain point on a painting.


> Let's say that I turn an image into a function...
> #declare MY_IMAGE_AS_FUNCTION=
> function{pigment{image_map{png "MY_IMAGE.png"}}}

You instantiate a "function" that is linked to a specific image.

> Now, I could manipulate the function to pull out the grayscale values of only
> the RED color channel for example, using dot notation...
> pigment{function{MY_IMAGE_AS_FUNCTION(x,y,z).red}}

Much of what goes on with the dot notation is simply a matter of POV-Ray's
function parser not being very robust or flexible.
You do not need to use dot notation with a pigment function when directly
using the result.

https://wiki.povray.org/content/Reference:Function
"Declaring User-Defined Color Functions
Right now you may only declare color functions using one of the special function
types. The only supported type is the pigment function. You may use every valid
pigment. This is a very simple example:"

The peculiarities of the function parser require you to choose a single scalar
color channel if the result will be further used in a more complex function, but
that's the function parser's problem, not anything to do with what the pigment
function natively returns, which is an <r, g, b> color vector.
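
To make that distinction concrete, here's a minimal sketch (the file name and
sample position are placeholders of my own):

#declare ImgFn = function {pigment {image_map {png "MY_IMAGE.png"} } }

// Used directly, the result is the full color vector - no dot notation needed:
#declare Pixel = ImgFn (0.5, 0.5, 0);
sphere {0, 1 pigment {color Pixel} }

// Only when the value feeds a scalar function must a single channel be chosen:
#declare GrayFn = function (x, y, z) {ImgFn (x, y, z).red}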

> then create an image_map of just that segregated channel. But that is for ALL of
> the pixels at once. The dot notation trick provides no individual-pixel
> 'location' information; nor do the (x,y,z) function arguments-- they can be
> thought of as overall pixel 'scalers', like (3*x,y,z).

It does nothing to the image pixels.  It only ever returns information for a
single pixel at any given time.  The image_map is restricted to the <0, 0, 0> to
<1, 1, 0> range due to the way POV-Ray imports it into an internal data
structure.
A sane way to handle this is to use max_extent to get the pixel resolution of
the image and scale the image_map by that.  Then you link a function to it, and
every x, y position has a 1:1 correspondence with the image pixels.
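
Sketched out (the names and the +0.5 pixel-center offset are my own choices,
not from the scene file):

#declare Img = pigment {image_map {png "MY_IMAGE.png" once} }
#declare Res = max_extent (Img);     // pixel dimensions, e.g. <640, 480, 0>
#declare Fn  = function {pigment {Img} }

// Sample the pixel at integer coordinates (PX, PY) by mapping back into
// the unit square; the +0.5 lands on the pixel center:
#declare PX = 10;
#declare PY = 20;
#declare Pixel = Fn ((PX + 0.5)/Res.x, (PY + 0.5)/Res.y, 0);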

> Yet, your comments and Ingo's seem to imply that there IS a further
> code-manipulation trick to extract a single pixel's color value (a grayscale
> value in my example). And/or "It's *location* is the location that you specify
> in the function."

In the carpet I made for the 2014 Secret Passage TCRTC competition, I used TdG's
eval_pigment method to make a nice carpet from an image.

Here's a direct copy-paste from the .inc file that I wrote for that scene.

#declare ImageMap = pigment {image_map {jpeg "NYLON-carpet-2001.jpg" once} };
#declare Resolution = max_extent (ImageMap);
#declare Resolution = Resolution + <0, 0, 1>;

#declare Carpet =
union {
#declare CarpetX = 0;
#while (CarpetX < Resolution.x)
 #declare CarpetY = 0;
 #while (CarpetY < Resolution.y)
  #declare Tilt = (rand (Seed1)-0.5)*4;
  #declare Rotate = rand (Seed2)*360;

  #declare X = CarpetX/Resolution.x;
  #declare Y = CarpetY/Resolution.y;
  #declare Strand = eval_pigment (ImageMap, <X, Y, 0>);
  #declare Point1 = <X*XSize, Y*(Resolution.y/Resolution.x)*XSize, 0>;
  #declare Point2 = <X*XSize, Y*(Resolution.y/Resolution.x)*XSize, 0> + <0, 0, -0.5>;
  #declare Length = 0.25*rand (Seed3);

  cone {0, 0.25, <0, 0, -(0.25+Length)>, 0.125 pigment {Strand} rotate x*Tilt rotate y*Rotate translate Point1}

  #debug concat("Strand = <", vstr(3, Point2, ", ", 0, 1), ">, to <", vstr(3, Point1, ", ", 0, 1), ">, rgb <", vstr(3, Strand, ", ", 0, 1), "> "," \n")

  #declare CarpetY = CarpetY + 5;
 #end
#declare CarpetX = CarpetX + 5;
#end
}

As you can see, I use fractional values based on the resolution, which span the
0-1 range of the imported image - that's just an arbitrary choice; it works
just as well either way.

#declare Strand = eval_pigment (ImageMap, <X, Y, 0>);

gives me a full color vector that I then use to pigment my carpet-strand
primitives.

NO. MAGIC.

The very interesting thing that I do see here is that I have:
#macro eval_pigment (pigm, vec)
 #local fn = function {pigment {pigm} }
 #local result = (fn(vec.x, vec.y, vec.z));
 result
#end

which apparently allowed me to use dot notation in function _arguments_, which
I've had trouble with on numerous occasions.   I'll have to look into it and
experiment to find what the limitations on that are.
(Thanks for making me look back at that scene :) )
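
A plausible explanation (my assumption, not verified): the macro arguments are
evaluated as ordinary SDL expressions when the macro expands, so the function
parser never sees the dot notation - only the three scalar results.  A quick
test sketch, reusing ImageMap from above:

#declare P = <0.25, 0.75, 0>;
// P.x etc. are resolved during macro expansion, before fn() is parsed:
#declare C = eval_pigment (ImageMap, P);
#debug concat("rgb = <", vstr(3, C, ", ", 0, 4), ">\n")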

- BW


Copyright 2003-2023 Persistence of Vision Raytracer Pty. Ltd.