Well, now that the 3.5 source is available and I still can't code in
C/C++, I can only hope one of you likes these ideas and is willing to
implement them. Both ideas are camera related and I have no idea of
their complexity, so ...
1. dof_normal.
Currently the plane of focus is always parallel to the "lens plane".
In photography this is generally also the case, but pro cameras offer
the option to tilt the lens plane to change the orientation of the
depth of field. The amount of control is limited, though; for more
detail, Google for the Scheimpflug and Hinge rules.
In raytracing one could go beyond this by specifying an optional
dof_normal to orient the plane of focus.
camera {
  perspective
  location <0,0,-5>
  look_at <0,0,0>
  aperture 0.5
  focal_point <0,0,5>
  dof_normal <1,0,1>
}
This would create a focal plane at a 45 degree angle with the "lens
plane". A practical use would be, for example, a camera looking up or
down at a building at an angle. Using focal blur in the current
situation would blur part of the building and keep part in focus.
By adding a dof_normal the whole building can be kept in focus while
the rest of the scene is blurred.
2. dof_pattern.
Instead of only allowing planar focal blur, controlled by focal_point
and aperture, one could use a pattern to control the position and
amount of blur.
camera {
  perspective
  location <0,0,-5>
  look_at <0,0,0>
  aperture 0.5
  dof_pattern {
    spherical
    translate <0,0,3>
  }
}
Now all parts that are black in the pattern are maximally blurred, and
all parts that are white are minimally blurred. The point of focus has
become a small spherical area. An inverted spherical pattern can be
used to blur a small area of the scene. Think of what an inverted
leopard pattern would do, or turbulated wood!
TIA,
Ingo
On 3 Aug 2002 03:08:46 -0400, ingo <ing### [at] homenl> wrote:
>1. dof_normal.
The camera is a coordinate system defined by the up, right and
direction vectors. These do not have to be perpendicular, so what you
are suggesting is not incompatible with the current implementation.
Moreover, I think it can be done without a patch, using a simple
transformation: all one has to do is shear the camera along the
direction vector.
I will be very busy this weekend (I am beta testing a massive database)
and will probably lack the time to do this, but I'll try it later on
if no one has bitten the bullet by then.
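For anyone who wants to try this in the meantime, a sketch of the shear
as a matrix transformation (the 0.5 shear factor and its sign are
guesses; both will need experimenting):

```pov
// shear the camera along its direction vector (+z here): the x basis
// vector maps to <1,0,0.5>, so z' = z + 0.5*x, and the plane of
// focus should tilt around the y axis
camera {
  perspective
  location <0,0,-5>
  look_at <0,0,0>
  aperture 0.5
  focal_point <0,0,5>
  matrix <1, 0, 0.5,
          0, 1, 0,
          0, 0, 1,
          0, 0, 0>
}
```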
>2. dof_pattern.
Have you tried using a camera normal to achieve that same effect? It
is perfectly possible to do that right now, but I don't want to think
about the render times (not that focal blur is much faster, mind you).
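For reference, the camera normal is ordinary 3.5 syntax; something like
this perturbs the rays with a pattern on top of the usual focal blur
(the bumps amount and scale are just starting-point guesses):

```pov
// standard POV-Ray 3.5: perturb the camera rays with a normal
// pattern, combined with focal blur
camera {
  perspective
  location <0,0,-5>
  look_at <0,0,0>
  aperture 0.5
  focal_point <0,0,5>
  normal { bumps 0.3 scale 0.1 }
}
```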
Peter Popov ICQ : 15002700
Personal e-mail : pet### [at] vipbg
TAG e-mail : pet### [at] tagpovrayorg
in news:5c6nkus25q3fe3uj0v4df2ng6u6a4k22td@4ax.com Peter Popov wrote:
>>1. dof_normal.
>
> Moreover, I think it can be done without a patch using a simple
> transformation. All one has to do is shear the camera along the
> direction vector.
Shearing the camera indeed affects the focal plane; I tried this long
ago. The problem is that it also affects the scene and, if I remember
correctly, the whole thing acts exactly opposite to what one wants.
Using a dof_normal would make the focal blur "independent" of the
camera position (you could even make a horizontal plane of focus with
everything above and beneath it blurred).
>>2. dof_pattern.
>
> Have you tried using a camera normal to achieve that same effect?
Yes, combining focal blur and camera perturbation can give nice effects,
but the result is very different from what I envision with pattern-based
blurring.
Ingo
ingo wrote:
> 2. dof_pattern.
Isn't this impossible, because it would require non-straight rays to be
traced?
Anders
In article <3d4c4043$1@news.povray.org>,
"Anders K." <and### [at] prostard2gcom> wrote:
> ingo wrote:
> > 2. dof_pattern.
>
> Isn't this impossible because it would require nonstraight rays to be
> traced?
No. It would only need to vary the blurring parameters based on the
current pixel.
Better idea: use a function. Functions are better suited for this kind
of thing, and you could still use patterns. Another thing that would be
useful: letting finish parameters be controlled by functions. This
would allow anything in the finish to be mapped with images, patterns,
or whatever the user wants. With functions that can handle vector math
and the right hooks into materials, you would essentially have shaders.
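A hypothetical spelling of the finish idea (none of this exists in 3.5;
the functions here are just illustrative):

```pov
// hypothetical syntax: finish parameters driven by functions of the
// intersection point, e.g. reflection fading out with height
finish {
  diffuse function { 0.6 + 0.2*y }
  reflection function { max(0, 0.5 - 0.25*y) }
}
```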
--
Christopher James Huff <chr### [at] maccom>
POV-Ray TAG e-mail: chr### [at] tagpovrayorg
TAG web site: http://tag.povray.org/
Christopher James Huff wrote:
> Another thing that would be useful: letting
> finish parameters be controlled with functions.
I suggested that in this group long ago... :)
> This would allow anything in the finish to be
> mapped with images, patterns, or whatever the
> user wants. With functions that can handle
> vector math and the right hooks into materials,
> you would essentially have shaders.
That would require access not just to the intersection point (as in
POV-Ray 3.5) but also to the direction of the incoming ray, as well as
the normal of the surface (not like trace, but from inside the
function). Oh, and the color of the ray. Did I miss anything?
And the part about functions being able to handle vector math would
mean a huge change to functions, in my opinion. That technique of using
separate functions for the x, y and z components just isn't useful for
advanced vector math, I think.
But I still like the concept; I just don't see it coming in the near
future.
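For reference, the component-by-component technique in question looks
like this in 3.5, which shows why it gets clumsy for real vector math:

```pov
// real 3.5 SDL: a "vector function" split into three scalar
// functions, one per component, then reassembled by hand
#declare FnX = function(x, y, z) { y*z };
#declare FnY = function(x, y, z) { x*z };
#declare FnZ = function(x, y, z) { x*y };
#declare V = <FnX(1,2,3), FnY(1,2,3), FnZ(1,2,3)>;  // <6, 3, 2>
```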
Rune
--
3D images and anims, include files, tutorials and more:
rune|vision: http://runevision.com (updated July 12)
POV-Ray Ring: http://webring.povray.co.uk
On Sun, 4 Aug 2002 01:24:31 +0200, Rune wrote:
> That would require access to not just the intersection point (like in
> POV-Ray 3.5) but also the direction of the incoming ray, as well as the
> normal of the surface (not like trace, but from inside the function).
> Oh, and the color of the ray. Did I miss something?
You missed the d/du and d/dv of the surface, which are available in SL but
not anywhere in POV.
--
#macro R(L P)sphere{L F}cylinder{L P F}#end#macro P(V)merge{R(z+a z)R(-z a-z)R(a
-z-z-z a+z)torus{1F clipped_by{plane{a 0}}}translate V}#end#macro Z(a F T)merge{
P(z+a)P(z-a)R(-z-z-x a)pigment{rgbt 1}hollow interior{media{emission T}}finish{
reflection.1}}#end Z(-x-x.2y)Z(-x-x.4x)camera{location z*-10rotate x*90}
Ron Parker wrote:
> You missed the d/du and d/dv of the surface,
> which are available in SL but not anywhere in POV.
I don't even know what that means or what it would be used for when
creating "shaders"... :)
Rune
--
3D images and anims, include files, tutorials and more:
rune|vision: http://runevision.com (updated July 12)
POV-Ray Ring: http://webring.povray.co.uk
On Sun, 4 Aug 2002 17:04:54 +0200, Rune wrote:
> Ron Parker wrote:
>> You missed the d/du and d/dv of the surface,
>> which are available in SL but not anywhere in POV.
>
> I don't even know what that means or what it would be used for when
> creating "shaders"... :)
SL is the RenderMan Shading Language. The d/du and d/dv are my attempt
to represent the partial derivatives of the location of the surface
with respect to the surface's u and v coordinates. d/du and d/dv are
used in antialiasing, to decide how intricate the texture should be at
a given point. For example, if your shader is a checkerboard, but the
rate of change at the location you're sampling is so high that there
would be multiple squares in the same pixel, you're better off
returning an average of the two base colors.
--
#macro R(L P)sphere{L F}cylinder{L P F}#end#macro P(V)merge{R(z+a z)R(-z a-z)R(a
-z-z-z a+z)torus{1F clipped_by{plane{a 0}}}translate V}#end#macro Z(a F T)merge{
P(z+a)P(z-a)R(-z-z-x a)pigment{rgbf 1}hollow interior{media{emission 3-T}}}#end
Z(-x-x.2x)camera{location z*-10rotate x*90normal{bumps.02scale.05}}
Ron Parker wrote:
> For example, if your shader is a checkerboard, but
> the rate of change at the location you're sampling
> is so high it'd have multiple squares in the same
> pixel, you're better off returning an average of
> the two base colors.
Ah, clever. :)
Rune
--
3D images and anims, include files, tutorials and more:
rune|vision: http://runevision.com (updated July 12)
POV-Ray Ring: http://webring.povray.co.uk