I am currently working on a UTD-MoM hybrid solver.
This is a solver for electromagnetic waves that uses a hybrid method combining
geometrical optics and direct solving of the Maxwell equations.
For the geometrical-optics part I am planning to use direct ray tracing
(source to camera/point of measurement).
My brother suggested I consult this website and use parts of the ray tracer as
a base for my program.
I have encountered a problem understanding direct ray tracing.
Unlike in backward ray tracing, the number of rays entering every pixel is not
fixed (it depends on the geometry and the density of the incoming rays).
Say I double the incoming ray density: I should then get double the number of
rays hitting every pixel, but the actual number of rays depends on the
geometry.
So how do I normalize the value inside the pixel?
Does anyone know of good forward ray tracing tutorials that could help me?
tsachi enlightened us on 2008-11-23 14:23:
> I am currently working on a UTD-MoM hybrid solver.
> This is a solver for electromagnetic waves that uses a hybrid method combining
> geometrical optics and direct solving of the Maxwell equations.
> For the geometrical-optics part I am planning to use direct ray tracing
> (source to camera/point of measurement).
> My brother suggested I consult this website and use parts of the ray tracer as
> a base for my program.
> I have encountered a problem understanding direct ray tracing.
> Unlike in backward ray tracing, the number of rays entering every pixel is not
> fixed (it depends on the geometry and the density of the incoming rays).
> Say I double the incoming ray density: I should then get double the number of
> rays hitting every pixel, but the actual number of rays depends on the
> geometry.
> So how do I normalize the value inside the pixel?
> Does anyone know of good forward ray tracing tutorials that could help me?
>
>
You can't do "pure" forward tracing with POV-Ray.
You can do mixed forward/backward tracing when you use the photons feature.
When using that feature, rays are still traced from the camera. Rays coming
from a light source and passing through a transparent object, or bouncing off
reflective objects, are calculated if the objects in question are designated as
"target".
The rays coming from a light source are evenly spaced. If you double the number
of rays, each ray will have half the intensity, so there is an inbuilt normalisation.
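That inbuilt normalisation can be sketched in a few lines of Python. Everything scene-specific here is made up: `hit_probability` is just a stand-in for whatever fraction of rays the geometry steers into one pixel. The point is only that each ray carries an equal share of the source power, so doubling the ray count halves the per-ray energy and the accumulated pixel value stays the same up to Monte Carlo noise.

```python
import random

def accumulate_pixel(total_power, n_rays, hit_probability=0.25):
    """Forward-traced pixel estimate. Each ray carries an equal share
    of the source power, so the sum is independent of n_rays (up to
    Monte Carlo noise). hit_probability is a hypothetical stand-in
    for the geometry that steers rays into this pixel."""
    energy_per_ray = total_power / n_rays
    pixel = 0.0
    for _ in range(n_rays):
        if random.random() < hit_probability:   # this ray lands in the pixel
            pixel += energy_per_ray
    return pixel
```

Doubling `n_rays` doubles the expected hit count but halves `energy_per_ray`, so the estimate converges to the same value either way.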
--
Alain
-------------------------------------------------
Moonies: Only really happy shit happens.
> I am currently working on a UTD-MoM hybrid solver.
> This is a solver for electromagnetic waves that uses a hybrid method combining
> geometrical optics and direct solving of the Maxwell equations.
> For the geometrical-optics part I am planning to use direct ray tracing
> (source to camera/point of measurement).
> My brother suggested I consult this website and use parts of the ray tracer as
> a base for my program.
> I have encountered a problem understanding direct ray tracing.
> Unlike in backward ray tracing, the number of rays entering every pixel is not
> fixed (it depends on the geometry and the density of the incoming rays).
A ray travels from the camera and strikes an object. That object then casts
rays in the direction of the light sources, and if the lights are unobstructed
the color is determined by the phong and/or specular highlight models and the
object color.
The object could be reflective and/or transparent, though; if so, a ray is
traced from the object at the proper angle and travels on to strike a second
object, etc. This process is repeated until max_trace_level is reached, or
the contribution of the next object would fall below the adc_bailout value.
This final value then becomes the resulting pixel value at that spot.
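The recursion with its two stopping rules can be sketched like this. The "scene" here is a made-up single mirror-like surface with fixed direct light and reflectivity, not a real intersection test; it just makes the depth and weight cutoffs concrete.

```python
# Sketch of the backward-tracing recursion: bounce until the depth
# exceeds max_trace_level or the remaining weight falls below
# adc_bailout. The constant direct/reflectivity values are
# hypothetical stand-ins for a real scene.
MAX_TRACE_LEVEL = 5
ADC_BAILOUT = 1.0 / 255.0

def trace(depth=1, weight=1.0, direct=0.5, reflectivity=0.5):
    if depth > MAX_TRACE_LEVEL or weight < ADC_BAILOUT:
        return 0.0                       # contribution too small to matter
    color = direct                       # light picked up at this hit
    # secondary ray: its contribution is scaled by the reflectivity
    color += reflectivity * trace(depth + 1, weight * reflectivity,
                                  direct, reflectivity)
    return color

pixel = trace()   # geometric series 0.5 * (1 + 0.5 + 0.25 + 0.125 + 0.0625)
```

With these numbers the weight never drops below `ADC_BAILOUT`, so the recursion stops at the depth limit after five bounces.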
> Say I double the incoming ray density: I should then get double the number
> of rays hitting every pixel, but the actual number of rays depends on the
> geometry.
> So how do I normalize the value inside the pixel?
> Does anyone know of good forward ray tracing tutorials that could help me?
POV can use adaptive anti-aliasing (super-sampling) of pixel values.
Anti-aliasing is adaptive: it will only super-sample pixels that differ
in color from neighboring pixels. The threshold value controlling
the trigger for performing super-sampling on a pixel is supplied by
the user, though, so it can be set very low if required.
In general, higher levels of anti-aliasing result in a squared increase in
the number of rays cast from the camera, though the actual number
can vary by method.
Search the documentation under "+a" for more details.
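A rough sketch of the adaptive idea, not POV-Ray's actual algorithm: a one-dimensional row of single-ray pixel colors, where a pixel is only super-sampled when it differs from its left neighbour by more than the user-supplied threshold. The neighbour-averaging "extra samples" are a simplification just to count the work done.

```python
def adaptive_supersample(row, threshold=0.3, extra_samples=4):
    """Super-sample a pixel only when its color differs from its left
    neighbour by more than `threshold` (cf. the +a option). `row`
    holds single-ray pixel colors; the extra rays are simulated by
    averaging with the neighbour, purely for illustration."""
    rays_cast = len(row)                 # one initial ray per pixel
    out = list(row)
    for i in range(1, len(row)):
        if abs(row[i] - row[i - 1]) > threshold:
            rays_cast += extra_samples   # super-sample triggered here
            out[i] = (row[i] + row[i - 1]) / 2.0
    return out, rays_cast
```

Only the pixel at the edge between flat regions triggers extra rays, which is why the adaptive scheme is so much cheaper than super-sampling everything.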
Alain <ele### [at] netscapenet> wrote:
> tsachi enlightened us on 2008-11-23 14:23:
> > I am currently working on a UTD-MoM hybrid solver.
> > This is a solver for electromagnetic waves that uses a hybrid method combining
> > geometrical optics and direct solving of the Maxwell equations.
> > For the geometrical-optics part I am planning to use direct ray tracing
> > (source to camera/point of measurement).
> > My brother suggested I consult this website and use parts of the ray tracer as
> > a base for my program.
> > I have encountered a problem understanding direct ray tracing.
> > Unlike in backward ray tracing, the number of rays entering every pixel is not
> > fixed (it depends on the geometry and the density of the incoming rays).
> > Say I double the incoming ray density: I should then get double the number of
> > rays hitting every pixel, but the actual number of rays depends on the
> > geometry.
> > So how do I normalize the value inside the pixel?
> > Does anyone know of good forward ray tracing tutorials that could help me?
> >
> >
> You can't do "pure" forward tracing with POV-Ray.
> You can do mixed forward/backward tracing when you use the photons feature.
>
> When using that feature, rays are still traced from the camera. Rays coming
> from a light source and passing through a transparent object, or bouncing off
> reflective objects, are calculated if the objects in question are designated as
Thank you, I think I have started to understand it.
But still, can you recommend a good tutorial on the subject? I think I
need pure forward ray tracing (I am interested in the field everywhere, not
only at the camera), and every tutorial/book I have found was about backward
ray tracing.
> "target".
>
> The rays coming from a light source are evenly spaced. If you double the number
> of rays, each ray will have half the intensity, so there is an inbuilt normalisation.
>
> --
> Alain
> -------------------------------------------------
> Moonies: Only really happy shit happens.
> Thank you, I think I have started to understand it.
> But still, can you recommend a good tutorial on the subject? I think I
> need pure forward ray tracing (I am interested in the field everywhere, not
> only at the camera), and every tutorial/book I have found was about backward
> ray tracing.
Forward ray tracing is generally known as "unbiased" ray tracing.
With unbiased ray tracing the rays originate from the light sources.
They travel out (in some random direction) and strike an object;
each ray then picks up color from the object, deposits some light
information on the surface, and bounces off at an appropriate angle,
bouncing around until it strikes the camera.
Because the directions are somewhat random, there is no way of
knowing how many rays will be deposited onto the forming image,
or at what pixel locations, so in practice such rendering
programs let the user interactively decide when the image
looks complete and then stop the process.
Indigo is free; you might check whether you can find its source.
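A minimal sketch of that light-tracing process, with everything physical made up: a one-dimensional "film" and a uniformly random landing pixel standing in for real light transport. Dividing each deposit by the total ray count keeps the film's total energy fixed, so the image simply converges as more rays are fired and the user can stop whenever it looks done.

```python
import random

def light_trace(n_rays, width=4, seed=1):
    """Rays leave the light in random directions and deposit energy
    wherever they land on a 1-D film. The random landing pixel is a
    hypothetical stand-in for real transport; each ray deposits an
    equal 1/n_rays share so the estimate is normalized by ray count."""
    rng = random.Random(seed)
    film = [0.0] * width
    for _ in range(n_rays):
        pixel = rng.randrange(width)    # where this ray happens to land
        film[pixel] += 1.0 / n_rays     # equal energy share per ray
    return film
```

Because every deposit is pre-divided by `n_rays`, the total film energy is the same no matter how many rays are fired; only the per-pixel noise shrinks.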