POV-Ray : Newsgroups : povray.binaries.images : Sci-Fi Scene Assets : Re: Sci-Fi Scene Assets
  Re: Sci-Fi Scene Assets  
From: Bald Eagle
Date: 21 Feb 2021 10:00:08
Message: <web.6032744aa906d8e31f9dae300@news.povray.org>
"Kenneth" my namesake, wrote:

> ...take all of
> the arrays' eval_pigment color values (and positions in the image), and somehow
> shove them into a FUNCTION of some sort-- to ultimately be applied as a *single*
> pigment function to a single flat box. But HOW to do this has always eluded me.

Because it's hiding under your nose - under the hood.
It's the image_map "function" that gets shoved into ... one pigment pattern.

Throughout all of my experiments over the years, I've used POV-Ray to help teach
myself math, and computer graphics, which has helped me learn about POV-Ray, and
what actually goes on (or is supposed to go on) under the hood.

I dabble in ShaderToy because there is no (or very little) "under the hood".
You have to write it all yourself.  And that dispels the illusions of things
like light sources, shadows, cameras, 3D space....   it's all just "one function"
that evaluates to a certain rgb value at every x,y position of the screen or
image file.

So how does a function work?
We take <x, y, z> and plug those into the function, which computes a color
value.
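In POV-Ray terms, that's a function pigment. A minimal sketch (the ripple formula here is just an arbitrary example, not anything from your scene):

```pov
// A pigment defined purely as a function of position:
// every <x,y,z> point evaluates to a number in [0,1],
// which the color_map then turns into an rgb value.
pigment {
    function { 0.5 + 0.5*sin(10*sqrt(x*x + y*y)) }  // arbitrary example formula
    color_map {
        [0 rgb <0, 0, 0.5>]
        [1 rgb <1, 1, 0>]
    }
}
```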

But you could also pre-calculate the color values, and store them - in a file -
to reference when you're looping through all of the position values - and that's
called an image file.  That gets implemented via the pigment {image_map} syntax.
IIRC, POV-Ray even has a mechanism whereby you can "render" something into
memory and use that as an image_map-style thing, without ever saving it to an
image file first.
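If I'm remembering the syntax right, that's the "function image" form of image_map (3.7+): the pigment gets evaluated into an in-memory image of whatever resolution you specify, and no file ever touches the disk. Something like:

```pov
// Evaluate a pigment into a 300x300 in-memory image,
// then use it exactly like a file-based image_map
// (no image file is ever written or read).
pigment {
    image_map {
        function 300, 300 {
            pigment { agate color_map { [0 rgb 0] [1 rgb <1, 0.5, 0>] } }
        }
    }
}
```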

Now, you could manually construct a function as a giant polynomial that
passes through all the right rgb values given the right xyz values --- but
that's just encoding the image file data into an equation - which is probably wasteful
of effort, time, and storage.  But it's also likely the kind of thing that _I'd_
be likely to do, in order to learn how to do it, prove that it's possible, and
possibly learn another dozen things and raise 34 more questions in the process.

You likely don't have to store ALL of the image data, if a lot of it is the same
- and then you can write a function, or an algorithm, that encodes a lot of data
with less data.   That is image compression.
In the 80's, I was making a world map on an Atari 800 XL, and it seemed silly to
store EVERY "pixel" of the map, when I was only dealing with 2 colors, and I had
long stretches of same-color pixels.  So I defined them as "lines" (or "boxes")
and just saved the starting point and the length.  Like assembling a hardwood
floor.
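That same trick, in modern POV-Ray syntax, might look something like this (the run data here is made up for illustration):

```pov
// Run-length encoded "pixels": each entry stores <row, start_column, run_length>
// instead of every individual pixel of the row.
#declare Runs = array[3] { <0, 2, 5>, <1, 0, 8>, <2, 4, 3> };

#for (I, 0, dimension_size(Runs, 1) - 1)
    #local R = Runs[I];
    box {
        <R.y, -R.x - 1, 0>,       // left edge of the run, one row tall
        <R.y + R.z, -R.x, 0.1>    // right edge = start + length
        pigment { rgb 1 }         // the single "on" color
    }
#end
```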

So how does one take 48,000 boxes and color them all with one function?

union {
    // ... all your boxes ...
    pigment {
        image_map { png "YourImage.png" once interpolate 2 }  // file name is a placeholder
    }
    // the image spans the unit square <0,0>..<1,1> in the x-y plane,
    // so translate/scale the pigment to cover your whole grid of boxes
}

All those boxes get immersed in the region where the image_map maps onto the
3D space coordinates.  It creates a 1:1 correspondence - a look-up table - a
function - that does what you'd otherwise be doing from scratch, in a
roundabout way, with a hand-written function.
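For comparison, here's roughly what the "from scratch" route looks like: wrap an image_map in a user-defined function and sample it yourself by position (the file name is a placeholder):

```pov
// The look-up table made explicit: turn an image_map pigment into a
// callable function, then sample it at each <x,y> like any other function.
#declare Img = function { pigment { image_map { png "YourImage.png" once } } }

pigment {
    function { Img(x, y, 0).gray }   // grayscale look-up at each point
    color_map { [0 rgb 0] [1 rgb 1] }
}
```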


At least that's how it all works in my head at this point in time.   There may
be some nuances and POV-Ray specific source code things that don't quite work in
exactly that way...   but I hope this helps you "wrap your head around it"?


- Bill


