"Kenneth" <kdw### [at] gmailcom> writes:
> But there's one aspect of my example that is still mysterious to me:
> By pre-#declaring TEMP_COLOR=srgb <.5,.3,.7> and then plugging that into the
> light as
> color TEMP_COLOR*50000
> it would *appear* that the syntax result is again simply srgb <.5,.3,.7>*50000
> -- or maybe a segregated (srgb <.5,.3,.7>)*50000 ? I'm not seeing the
> essential difference that the #declare produces. So it looks like the parser
> *is* making some kind of mathematical distinction, whatever that is.
According to the documentation, colors are stored internally as
five-component vectors: the color components in the ordinary RGB space
(linear, if you're using 'assumed_gamma 1'), plus filter and transmit.
So your 'TEMP_COLOR' variable does not remember that it was set using
the 'srgb' keyword, nor the original sRGB color values; the conversion
to linear RGB happened once, at parse time. In the light-source
declaration, you're then multiplying each of the five components of the
stored color vector by 50000, i.e., the ordinary (linear) RGB color
components, along with filter and transmit, which don't matter in this
context.
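To make the parse-time conversion concrete, here is a rough sketch in Python of the standard sRGB decoding that the 'srgb' keyword effectively performs (with 'assumed_gamma 1'); the function name is mine, not POV-Ray's, and this only illustrates the math, not the parser itself:

```python
def srgb_to_linear(c):
    """Decode one sRGB component (0..1) to linear light,
    per the sRGB transfer function (IEC 61966-2-1)."""
    if c <= 0.04045:
        return c / 12.92
    return ((c + 0.055) / 1.055) ** 2.4

# Approximately what 'srgb <.5, .3, .7>' stores internally:
linear = [srgb_to_linear(c) for c in (0.5, 0.3, 0.7)]

# Multiplying by 50000 afterwards scales these *linear* values;
# the original sRGB values are long gone by then.
scaled = [50000 * c for c in linear]
```

This is why the '#declare' makes no difference: whether the conversion happens inline or through a variable, by the time the multiplication runs, only the linear components exist.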
Perhaps we could use a macro like this in the 'colors.inc' file:
// Converts a color given in sRGB space to one in the ordinary RGB
// space determined by 'assumed_gamma'.
#macro CsRGB2RGB(Color)
  #local Result = srgbft Color;
  Result
#end
Now ‘#declare C4 = CsRGB2RGB(rgb <.5, .3, .7>)*50000;’ does work.