  Re: question about light_source  
From: CAS
Date: 19 Mar 2014 08:05:01
Message: <web.532987827ece86556a92a3a00@news.povray.org>
scott <sco### [at] scottcom> wrote:
> >> I have used "XYZ=(3.815429E-06,1.523114E-06,0.000000E+00)", then converted
> >> XYZ to linear RGB with your help. I set "light_source {<sun_x,sun_y,sun_z>
> >> color rgb <1.00229e-005,3.46021e-006,-5.65828e-007>"
>
> The fact that the blue component is negative indicates that this colour
> is outside of the sRGB colour space (just). I think POV will cope with
> this whilst raytracing, but I don't know how it will interpret negative
> colour values when writing to HDR.
>
> > and set the output file type to "*.HDR". The first column is the value of
> > each pixel in the HDR image simulated by POV; the second column is the
> > value of each pixel in my reflection image, which is a real image. My goal
> > is for the simulated HDR image to equal my reflection image, but I find my
> > real image is 10 times the HDR image. The two images follow the same
> > trend; there is only a factor of 10 between them, so I think there must be
> > some relation between them.
>
> Where did your "real image" come from? Is it a photograph?
>
> I also struggle to see how you are getting output pixels with a value
> around 0.04 when your light source is at a level of .00001. Can you post
> your whole scene (or a simplified version of it) to better understand
> what you are doing?
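[The out-of-gamut condition scott describes can be checked directly. Below is a minimal sketch, assuming the standard CIE XYZ to linear sRGB (D65) conversion matrix; the matrix the original conversion used may have been different, since these coefficients do not reproduce the quoted RGB triple exactly:]

```python
def xyz_to_linear_srgb(X, Y, Z):
    """Convert CIE XYZ to linear sRGB using the standard D65 matrix."""
    r =  3.2406 * X - 1.5372 * Y - 0.4986 * Z
    g = -0.9689 * X + 1.8758 * Y + 0.0415 * Z
    b =  0.0557 * X - 0.2040 * Y + 1.0570 * Z
    return (r, g, b)

# The XYZ triple from this thread; note Z is exactly zero.
rgb = xyz_to_linear_srgb(3.815429e-06, 1.523114e-06, 0.0)

# Any negative component means the colour lies outside the sRGB gamut.
out_of_gamut = any(c < 0 for c in rgb)
print(rgb, out_of_gamut)
```

[With these coefficients the blue component indeed comes out negative, i.e. the colour is (slightly) outside the sRGB gamut, as scott says.]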

// Camera position from zenith angle (camera_za) and azimuth (camera_aa),
// converted from spherical to Cartesian coordinates.
#declare camera_za   = 0;
#declare camera_aa   = 0;
#declare camera_h    = 4267.2;
#declare camera_dist = 4267.2 + 435;
#declare camera_y  = camera_dist*cos(radians(camera_za));
#declare camera_LL = camera_dist*sin(radians(camera_za));
#declare camera_x  = camera_LL*sin(radians(camera_aa));
#declare camera_z  = camera_LL*cos(radians(camera_aa));

camera {
  angle 7.54371
  location <camera_x, camera_y, camera_z>
  look_at <0, 0, 0>
  rotate <0, -31, 0>
  right x
  up y
}

// Sun position from solar zenith angle (sun_za) and azimuth (sun_aa),
// at roughly one astronomical unit from the origin.
#declare sun_za   = 31.65;
#declare sun_aa   = 187.89;
#declare sun_dist = 152589828000;
#declare sun_y  = sun_dist*cos(radians(sun_za));
#declare sun_LL = sun_dist*sin(radians(sun_za));
#declare sun_x  = sun_LL*sin(radians(sun_aa));
#declare sun_z  = sun_LL*cos(radians(sun_aa));

light_source {
  <sun_x, sun_y, sun_z>
  color rgb <1.00229e-005, 3.46021e-006, -5.65828e-007>
}

#declare geom_file_name = "pov-xyz.txt";
#declare spec_file_name = "pov-ref.txt";
#declare n = 0;
#fopen Input_geom_file geom_file_name read
#fopen Input_spec_file spec_file_name read
// Note: center_x, center_y and center_z are not #declared anywhere in the
// snippet above; they must be defined before this loop or parsing will fail.
#while (defined(Input_geom_file))
  #read (Input_geom_file, x1, y1, z1)
  #read (Input_spec_file, spectral)
  sphere {
    <x1-center_x, y1-center_y, z1-center_z>, 1.55
    pigment { color rgb spectral }
  }
  #declare n = n + 1;
#end
#debug concat(str(n,15,2), "\n")
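[POV-Ray's #read directive expects plain comma-separated values in the files it parses. A hypothetical generator for the two input files is sketched below (the file names come from the scene; the coordinates and reflectance values are invented for illustration):]

```python
# Write the two input files in the comma-separated form POV-Ray's
# #read directive expects: one long list of values, comma-delimited.
points  = [(10.0, 0.0, 5.0), (12.5, 0.0, 7.5)]  # hypothetical x,y,z samples
reflect = [0.2209, 0.3758]                       # hypothetical per-point values

with open("pov-xyz.txt", "w") as geom:
    geom.write(",".join(f"{v:.6f}" for xyz in points for v in xyz))

with open("pov-ref.txt", "w") as spec:
    spec.write(",".join(f"{v:.6f}" for v in reflect))
```

[Each #read(Input_geom_file, x1, y1, z1) in the loop then consumes the next three comma-separated floats, and each #read of the spectral file consumes one.]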

That is my code. The real image is a photograph. The pixel values in the real
image look like this:
0.220900
0.375800
0.375800
0.346500
0.277700
0.355800
0.396900
0.441800
0.442800
0.382300
I doubt the statement "pigment {color rgb spectral}" is right. "spectral" is
read from "pov-ref.txt", which stores the value of each pixel of the real
image. The parameters I set are all real values calculated from the real
image; my experiment is to simulate the process that produced the real image,
so all the parameters are derived from it, and the process of producing the
real image is the same as in POV.
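[Whether the discrepancy really is a single constant factor of 10 is easy to verify numerically. A minimal sketch, assuming paired pixel values from both images are available as flat lists (the numbers below are invented for illustration):]

```python
# Hypothetical paired samples: simulated HDR pixel values and the
# corresponding real-image pixel values at the same locations.
hdr  = [0.02209, 0.03758, 0.03465]
real = [0.2209, 0.3758, 0.3465]

ratios = [r / h for r, h in zip(real, hdr)]
mean_ratio = sum(ratios) / len(ratios)

# If the spread of the ratios is small, the discrepancy is a single
# scale factor, and scaling the light_source intensity (or the output)
# by that factor should reconcile the two images.
spread = max(ratios) - min(ratios)
print(mean_ratio, spread)
```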


