Anthony D. Baye wrote:
> In between working on other projects over the past month, I've been
> doing some experiments with image processing in POV. My first
> installment is my latest experiment.
>
> While this is an interesting technique, I have yet to find any way
> to make it practical. The biggest problem is one of memory: my computer
> has 2 GB of RAM, but POV still crashes when parsing this effect at
> dimensions somewhere above 9.21875' by 5.25'. This is due, no doubt, to
> the rather LARGE number of objects generated by my macro - at this size,
> there are roughly 148,680 individual objects.
>
> I plan to do some further experimentation to see whether or not I
> can make it more memory efficient, but I'm not holding my breath.
>
> The first image is a single-color demonstration; this method is more
> memory-efficient, but not as impressive as the full-color demo, IMHO.
>
> The second image is a full-color shot of the entire image, dimensions:
> 9.21875' X 5.25'. The third image is a cropped closeup render.
>
> <A href="http://speedy.sdsmt.edu/~1305761/images/LED.jpg">Monochrome LED
> Display.</A>
> <A href="http://speedy.sdsmt.edu/~1305761/images/LED3.jpg">Full Color
> LED Display.</A>
> <A href="http://speedy.sdsmt.edu/~1305761/images/LEDCloseup.jpg">Full
> Color Closeup.</A>
>
> Questions/Comments?
>
> Regards,
>
> ADB
Yes, it's Castle in the Sky.
Alright, here's the basic layout.
I have three macros:
LED(dT, dC, dL, dP) // This macro defines a single light-emitting
diode of a given type (the first parameter is largely vestigial; I just
haven't taken it out yet). The second parameter controls the color, the
third is the hue, and the fourth controls position via a transform block.
ColorBlock(rV, gV, bV, pos) // Defines a 3/8" square brick
containing three LEDs in a molded block (this is one place where
refinement might be possible). The macro takes hue values for the red,
green, and blue channels; the fourth parameter is a transform block.
Display(dSize, bSize, dImage) // The first two parameters take
<u,v> pairs: the height and width of the display, and the height and
width of the individual color blocks. The second parameter is obsolete;
it's simply a carry-over from one of my other macros that I
reverse-engineered. The third parameter is the source image file, in
JPEG format.
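To make the interfaces concrete, here's a rough skeleton of the first two macros. This is only a sketch: the real bodies contain the full CSG model, the sphere and box here are placeholders, the 1/8" spacing is an assumption, and the way the color/hue parameters combine is my guess at the intent.

```
#include "colors.inc"      // for the CHSV2RGB() macro

// Placeholder LED: dC = brightness, dL = hue, dP = transform (assumed roles)
#macro LED(dT, dC, dL, dP)
  sphere { <0,0,0>, 0.05
    pigment { color CHSV2RGB(<dL, 1, dC>) }
    finish { ambient 1 }   // make it glow like a lit diode
    transform { dP }
  }
#end

// Hypothetical 3/8" block holding a red, a green, and a blue LED
#macro ColorBlock(rV, gV, bV, pos)
  union {
    #local TR = transform { translate -x/8 }
    #local TG = transform { translate 0*x }
    #local TB = transform { translate  x/8 }
    object { LED(0, rV,   0, TR) }  // red diode:   hue 0,   brightness rV
    object { LED(0, gV, 120, TG) }  // green diode: hue 120, brightness gV
    object { LED(0, bV, 240, TB) }  // blue diode:  hue 240, brightness bV
    box { <-3/16,-3/16,0>, <3/16,3/16,1/16> pigment { color rgb 0.1 } }
    transform { pos }
  }
#end
```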
Display(...) reads the image into an image_map, which is then
scaled to the display size. Calculations are performed to determine how
many times the individual blocks will fit into the display; these counts
are defined as (a) and (b).
Two while() loops are initiated, (b) on the outside and (a) on the
inside. Loop (a) handles horizontal translation while loop (b) handles
vertical translation. The image is laid out row by row, starting at the
bottom.
As each block is placed, the macro reads the image_map for red,
green and blue values at the same location (if you haven't guessed yet,
this uses the eval_pigment() macro). These values are passed to the
ColorBlock(...) macro along with a position vector.
The inner loop increments until an entire row is complete, then the
outer loop increments.
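Put together, the nested loops and the sampling step look something like the sketch below. Img, A, B, BS and "source.jpg" are stand-ins, not the real identifiers, and I'm assuming a ColorBlock() macro as described above is in scope.

```
#include "functions.inc"   // provides the eval_pigment() macro

#declare Img = pigment { image_map { jpeg "source.jpg" } } // scaled to display size elsewhere
#declare A  = 10;   // (a) blocks per row
#declare B  = 6;    // (b) number of rows
#declare BS = 3/8;  // block size

#declare J = 0;                    // outer loop (b): rows, bottom first
#while (J < B)
  #declare I = 0;                  // inner loop (a): left to right
  #while (I < A)
    #local P = <(I + 0.5)*BS, (J + 0.5)*BS, 0>; // centre of this block
    #local C = eval_pigment(Img, P);            // red/green/blue at that point
    #local T = transform { translate P }
    ColorBlock(C.red, C.green, C.blue, T)
    #declare I = I + 1;
  #end
  #declare J = J + 1;
#end
```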
In the case of the monochrome display, I simply averaged the three
color values and passed the result straight to the LED(...) macro along
with a standard color for the display.
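For the monochrome version, the sampling step simply collapses the three channels into one value (again a sketch; DisplayHue is a stand-in for the fixed display color):

```
#local C = eval_pigment(Img, P);            // sample the image_map
#local V = (C.red + C.green + C.blue) / 3;  // average the three channels
#local T = transform { translate P }
LED(0, V, DisplayHue, T)  // one diode at the standard display color
```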
There are probably quite a few ways to simplify my model. One of
my current ideas is to replace my LED CSG model with an isosurface
created from a cylindrical pigment. I'd have to try it to know whether
it would work, though.
I don't know about texturing the pieces as a whole; I'd have to
re-think my entire model. I know that the display doesn't transform well
once complete, so I've considered writing a sub-process that would let
it be built around a given point, but we'll see.
Thanks for your input.
Regards,
ADB