  Re: povray output as a film/CCD  
From: spitz
Date: 13 Jul 2012 18:15:01
Message: <web.50009d33faf6826b9b0928190@news.povray.org>
I'll look more into the mesh camera. My goal is to simulate the Lytro camera,
which consists of a lens and a microlens array in front of the CCD. Once I get
the simulated CCD image, I plan to use MATLAB to perform digital refocusing.
Do you think this is feasible using POV-Ray...?


clipka <ano### [at] anonymousorg> wrote:
> On 06.07.2012 21:11, spitz wrote:
> > There was a nice thread on LuxRender about this
> >
> > http://www.luxrender.net/forum/viewtopic.php?f=14&t=4766
> >
> > In this case the CCD sensor, or image plane, is simply a plane with a
> > translucent material, with transmission set to 1 and reflection set to 0. An
> > orthographic camera was used to sample this image plane.
> >
> > Would this be possible in POV-Ray...?
>
> At this quality? Pretty likely, yes.
>
> The key issue here is how to model the sensor; I can think of two
> approaches:
>
>
> (A) Use a non-solid object (e.g. a disc, or a union of two triangles);
> give it a perfectly white pigment, no highlights, no reflection, and
> "diffuse 0.0, 1.0" (sic!). This syntax, introduced in POV-Ray 3.7,
> specifies that 0% of the incident illumination should be scattered /back/
> diffusely (first parameter), while 100% should be scattered /through/ the
> surface instead (second parameter); this makes sure your sensor does not
> scatter light back into the scene.
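>
> Untested, but as an illustration such a sensor object might look roughly
> like this (the disc's position, orientation and radius are placeholders
> you'd adapt to your setup):
>
>   // sketch of approach (A): a thin, non-solid "sensor" that scatters
>   // all incident light through the surface rather than back
>   disc {
>     <0, 0, 0>,      // centre (placeholder)
>     -z,             // surface normal, facing the incoming light (placeholder)
>     0.5             // radius (placeholder)
>     pigment { colour rgb 1 }    // perfectly white
>     finish {
>       ambient 0
>       specular 0 phong 0        // no highlights
>       reflection { 0 }          // no reflection
>       diffuse 0.0, 1.0          // 0% scattered back, 100% scattered through
>     }
>   }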
>
> To actually make use of this approach you'll need radiosity, and you
> need to use special settings that make sure you get a lot of radiosity
> samples; "maximum_reuse" is of particular importance in this context and
> should be set very low. Radiosity sample density will be a problem,
> though (as it limits the resolution of your sensor), and you might
> actually end up with a high memory footprint.
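>
> Again untested, but the kind of radiosity block I have in mind would be
> something along these lines (all values are just starting points to
> experiment with):
>
>   global_settings {
>     radiosity {
>       count 400            // many samples per evaluation
>       error_bound 0.25     // low value forces a dense set of samples
>       maximum_reuse 0.005  // keep this very low, as noted above
>       minimum_reuse 0.001
>       recursion_limit 2
>     }
>   }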
>
>
> (B) Actually, the "sensor" used in the LuxRender example does nothing
> but apply pretty random perturbations to the rays going through it. It
> so happens that POV-Ray has a dedicated feature to perturb camera rays,
> aptly referred to as "Camera Ray Perturbation" in the docs, which works
> by adding a "normal" statement to the camera statement. If the specified
> perturbation is fine enough and has a suitable distribution, and you
> make generous use of anti-aliasing, it'll make the /perfect/
> sensor-imitation screen for your camera: It'll cause no backscatter at
> all, you don't have to worry about positioning it relative to the
> camera, you don't need to use radiosity if you don't like it, and so on.
>
>
> That said, I'm not yet sure what formula is used for the perturbation
> effect (whether it is equivalent to looking at a correspondingly
> perturbed mirror, or whether the ray directions are modified directly as
> if they were the normals), nor what kind of angular dependency is
> realistic for a CCD's response to incoming light (Lambert's law might
> be a first approximation, but I guess it's not that simple), let alone
> what pattern would most faithfully model the scattering effect. But for
> starters I'd try with "normal { bumps 0.5 }" or something along these
> lines. There's always the possibility of changing the pattern later to
> make it more realistic once you're convinced that it's worth the effort
> and have figured out what actually /is/ realistic anyway.
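>
> Concretely (untested; treat all numbers as placeholders to be tuned),
> that would be something like the following, rendered with generous
> anti-aliasing (e.g. +A0.0 +AM2 +R3 on the command line):
>
>   camera {
>     perspective
>     location <0, 1, -5>      // placeholder camera setup
>     look_at  <0, 1,  0>
>     normal {
>       bumps 0.5              // strength of the ray perturbation
>       scale 0.01             // keep the pattern fine relative to the image
>     }
>   }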
>
>
> As another alternative, you could try a heavily out-of-focus focal blur,
> setting the focal point maybe halfway between the camera and the lens,
> and "aperture" to the difference between the camera diagonal size and
> the lens' diameter. I don't know whether this is a suitable model for
> the angular dependency of a CCD's response though, and you can't tweak
> it as freely as you can with the camera perturbation approach.
>
>
> Oh, and then I just read about the mesh camera: You /can/ use it for
> your purposes, using distribution type 0. Use multiple meshes, each
> defining the same points from which to shoot a ray, but use different
> randomized surface normals for each mesh to jitter the directions in
> which rays are shot (you can use the very same normal for all triangles
> in a mesh, or randomize them even within one mesh). This gives you more
> direct control over the sample directions as compared to the perturbed
> camera approach. See the docs or the wiki for more details on mesh cameras.

