>> I have used "XYZ=(3.815429E-06,1.523114E-06,0.000000E+00)", then converted
>> XYZ to linear RGB with your help. I set "light_source {<sun_x,sun_y,sun_z>
>> color rgb <1.00229e-005,3.46021e-006,-5.65828e-007>"
The fact that the blue component is negative indicates that this colour
is (just) outside the sRGB colour gamut. I think POV will cope with
this whilst raytracing, but I don't know how it will interpret negative
colour values when writing to HDR.
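As an aside, the out-of-gamut check is easy to do by hand. Here is a minimal sketch using the standard IEC 61966-2-1 XYZ-to-linear-sRGB matrix (D65 white point); note this may not be the exact matrix the original poster used, since his quoted RGB values do not reproduce exactly with it:

```python
# Sketch: convert CIE XYZ to linear sRGB and flag out-of-gamut colours.
# Standard sRGB matrix (D65); the poster's exact matrix is an assumption.

def xyz_to_linear_srgb(x, y, z):
    """Convert CIE XYZ tristimulus values to linear sRGB components."""
    r =  3.2406 * x - 1.5372 * y - 0.4986 * z
    g = -0.9689 * x + 1.8758 * y + 0.0415 * z
    b =  0.0557 * x - 0.2040 * y + 1.0570 * z
    return r, g, b

rgb = xyz_to_linear_srgb(3.815429e-06, 1.523114e-06, 0.0)
# Any negative component means the colour lies outside the sRGB gamut.
out_of_gamut = any(c < 0 for c in rgb)
```

A renderer can still carry such values through its (linear) lighting maths; the problem only appears when the result has to be clamped or encoded for a display-referred format.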
>> , and set the output file type to "*.HDR". The first column is the value
>> of each pixel in the HDR image simulated by POV; the second column is the
>> value of each pixel in my reflection image, which is a real image. My goal
>> is for the simulated HDR image to equal my reflection image, but I find my
>> real image is 10 times the HDR image. The trend of the two images is the
>> same; there is only a factor of 10 between them. So I think there must be
>> some relation here.
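Before hunting for a physical explanation, it is worth confirming that a single constant factor really does relate the two images. A minimal sketch (the pixel lists here are hypothetical placeholders, not data from the thread) fits the scale by least squares:

```python
# Sketch: test whether measured pixels are a constant multiple of the
# simulated HDR pixels by fitting one scale factor (least squares).

def fit_scale(simulated, measured):
    """Return k minimising sum((measured - k * simulated)**2)."""
    num = sum(s * m for s, m in zip(simulated, measured))
    den = sum(s * s for s in simulated)
    return num / den

simulated = [0.004, 0.008, 0.012]   # hypothetical POV output pixels
measured  = [0.04,  0.08,  0.12]    # hypothetical real-image pixels
k = fit_scale(simulated, measured)  # → 10.0 for these example values
```

If the fitted k is close to 10 and the residuals are small, the relation is a pure scale, which usually points to a units or exposure/calibration mismatch rather than a modelling error.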
Where did your "real image" come from? Is it a photograph?
I also struggle to see how you are getting output pixels with a value
around 0.04 when your light source is at a level of .00001. Can you post
your whole scene (or a simplified version of it) to better understand
what you are doing?