POV-Ray : Newsgroups : povray.binaries.images : Sci-Fi Scene Assets : Re: Sci-Fi Scene Assets
From: Kenneth
Date: 20 Feb 2021 19:05:00
Message: <web.6031a295a906d8e3d98418910@news.povray.org>
"Robert McGregor" <rob### [at] mcgregorfineartcom> wrote:
> "Kenneth" <kdw### [at] gmailcom> wrote:
>
> > ...So I put together something similar(?) to your idea: Rendering a scene
> > the normal way, then bringing the image back into POV-ray and doing an
> > image-to-function-to-pigments conversion on it (all 3 color channels)...
>
> Yes, this sounds very much like what I did (which was originally inspired by
> reading your bright pixels AA problems post and pondering ways to fix that!)

Ha! This is what I love about the newsgroups here: similar ideas feeding off
each other to find a common solution.

> > I looked up 'gaussian blur', and I see that it
> > involves matrix use in some way...
>
> Okay, I didn't use a matrix in that sense, i.e., the matrix keyword as
> used for transformations.

Yes, I had a half-formed idea that this might be the case. Honestly, I had no
real idea as to how I was going to proceed with a 'typical' matrix, if I could
even get the darn thing to run ;-)

> I just built my own Gaussian smoothing matrix using a 2d array,
> like this (note the symmetry):
>
> #declare ConvolutionKernel = array[5][5] {   // Smoothing matrix
>    {1,  4,  7,  4, 1},
>    {4, 16, 26, 16, 4},
>    {7, 26, 41, 26, 7},
>    {4, 16, 26, 16, 4},
>    {1,  4,  7,  4, 1}
> }

[clip]
Ah, the key! That's brilliant and elegantly simple. I shall experiment with it.
Thanks again for your willingness to share, and for your clear comments.
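For my own future reference, here's how I imagine applying that kernel with
eval_pigment-- just a sketch of the general idea, NOT your actual code, and the
image name and dimensions are placeholders:

// A sketch of applying the 5x5 smoothing kernel with eval_pigment.
// The image name and dimensions here are placeholders.
#include "functions.inc"   // provides the eval_pigment() macro

#declare Img  = pigment { image_map { png "myrender.png" } }
#declare ImgW = 800;
#declare ImgH = 600;

#declare ConvolutionKernel = array[5][5] {   // same smoothing matrix
   {1,  4,  7,  4, 1},
   {4, 16, 26, 16, 4},
   {7, 26, 41, 26, 7},
   {4, 16, 26, 16, 4},
   {1,  4,  7,  4, 1}
}
#declare KernelSum = 273;   // sum of all 25 entries, for normalization

// Returns the smoothed color at pixel (X, Y):
#macro SmoothPixel(X, Y)
   #local Acc = <0, 0, 0>;
   #for (J, -2, 2)
      #for (I, -2, 2)
         // image_map covers the unit square, so scale pixel coords down
         #local C = eval_pigment(Img, <(X+I+0.5)/ImgW, (Y+J+0.5)/ImgH, 0>);
         #local Acc = Acc + ConvolutionKernel[J+2][I+2]*<C.red, C.green, C.blue>;
      #end
   #end
   (Acc/KernelSum)
#end

(Dividing by the sum of all 25 entries keeps the overall brightness unchanged.)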

Here's an alternate 'brute force' idea for finding / masking off pixels in an
image that go over a certain brightness limit: Years ago, I wrote some code
using eval_pigment to go over each and every pixel of a pre-rendered image and
store all of the found color values in a 2-D array (or, alternately, just the
brightest pixel values according to a particular threshold, plus their x/y
positions in the image). The final step was to 're-create' the image by
assigning each original pixel's color-- or black, as the case may be-- to a tiny
flat box or polygon in an x/y grid. For example, an 800x600 re-created image
would be made of 480,000 tiny box shapes! Crude but effective.
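From memory, the skeleton of that old code looked roughly like the following--
the file name, size, and brightness threshold are made-up placeholders, so
treat this only as a sketch of the idea:

// A rough sketch of the 'many boxes' approach described above.
#include "functions.inc"   // for eval_pigment()

#declare Img = pigment { image_map { png "myrender.png" } }
#declare W = 800;
#declare H = 600;
#declare Threshold = 0.9;   // brightness cutoff for the mask

union {
   #for (Y, 0, H-1)
      #for (X, 0, W-1)
         #local C = eval_pigment(Img, <(X+0.5)/W, (Y+0.5)/H, 0>);
         #local Col = <0, 0, 0>;   // default: black
         // keep only the pixels over the brightness limit
         #if ((C.red + C.green + C.blue)/3 > Threshold)
            #local Col = <C.red, C.green, C.blue>;
         #end
         box {
            <X/W, Y/H, 0>, <(X+1)/W, (Y+1)/H, 0.01>
            pigment { rgb Col }
            finish { ambient 1 diffuse 0 }   // show the raw color, unshaded
         }
      #end
   #end
}

As you can imagine, parsing 480,000 boxes takes a LONG time-- which is exactly
why a single-function version would be so much nicer.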

But a more elegant solution to this 'many boxes' idea would be to take all of
the array's eval_pigment color values (and their positions in the image) and
somehow shove them into a FUNCTION of some sort-- to ultimately be applied as a
*single* pigment function to a single flat box. But HOW to do this has always
eluded me. In other words, how can a single function be 'built' using a
repetitive iteration scheme-- other than by assigning each and every pixel color
to its own initial pigment function, then combining/adding(?) all 480,000
functions into one for the final image output? (Which by itself leaves out the
*positions* of the individual pixels-- another major problem.)

Can such a pigment function be built piece-by-piece-- in a #for loop, for
example-- AND in correct final order of the pixels? I was wondering if you have
ever devised a scheme to do something like this.

