>The approach I would take is to not define 1-d constraints, but
>to use 2-d constraints.
>
>What I would use is two images, plus an alpha channel which has ones
>where the constraint image ( z1[x,y] ) should be used, and zeros
>where the background, terrain-generator-created terrain ( z0[x,y] )
>should be used. Image 1 would be defined over regions where the
>alpha channel is 1, and would be "don't care" elsewhere.
I've had very similar thoughts, but wouldn't it really be a 3D
constraint? X and Y come from the pixel row and column, and Z from
the color value (right-handed system). Also, why create a second
file that needs to be recombined with the original? Wouldn't it be
better to store the pixel channels in a two-dimensional array of
structures containing the channel info, do the interpolation, and
output directly to a completed image file? So what if it takes a
little more memory. You only really need two channels on the input:
I think POV uses red for the height (correct me if I'm wrong), plus
the alpha. Of course, you could always copy the calculated values
into all three RGB channels for the output, if it matters.
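
Something like this is what I have in mind; just a sketch in C, with
made-up names (Cell, blend_heights), assuming both height fields have
already been read into float arrays:

typedef struct {
    float height;  /* from the red channel of the constraint image */
    float alpha;   /* 1 = use the constraint, 0 = use generated terrain */
} Cell;

/* Blend the constraint heights into the generated terrain z0 in place. */
static void blend_heights(float *z0, const Cell *c, int w, int h)
{
    for (int i = 0; i < w * h; i++)
        z0[i] = c[i].alpha * c[i].height + (1.0f - c[i].alpha) * z0[i];
}

A side benefit: a fractional alpha instead of strict 0/1 gives you a
free feathered transition at the edges of the constrained region.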
Regarding the interpolation algorithm, that's something I'd need to
experiment with; I've never had to do more than one dimension before.
The 'diffusion' algorithm sounds like a possible starting point. I've
also had thoughts of finding contour paths and normals (2D), but
that's more complex than I would like.
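
For the diffusion idea, maybe something as simple as repeatedly
averaging each unconstrained cell with its four neighbours, so the
known heights bleed outward until the surface settles. A rough sketch
(the fixed iteration count is a placeholder; real code would test for
convergence):

/* Constrained pixels (alpha >= 1) keep their heights; everything else
   relaxes toward the average of its four neighbours. */
static void diffuse(float *z, const float *alpha, int w, int h, int iters)
{
    for (int it = 0; it < iters; it++)
        for (int y = 1; y < h - 1; y++)
            for (int x = 1; x < w - 1; x++) {
                int i = y * w + x;
                if (alpha[i] >= 1.0f)
                    continue;
                z[i] = 0.25f * (z[i - 1] + z[i + 1] + z[i - w] + z[i + w]);
            }
}

This is essentially relaxing toward a solution of Laplace's equation,
so the filled-in region comes out smooth right up to the constrained
boundary.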
Adrian Pederson