From: Samuel Benge
Subject: Projecting an image outward from the camera
Date: 4 Jul 2007 14:40:22
Message: <468be996@news.povray.org>

Hi,

The recent ambient occlusion thread(s) got me thinking about a way to
use a radiosity-derived dirtmap pigment which would not affect (final)
render times too much. The idea is to pre-render the object with a white
background and radiosity, and use that image as a mask for the final
image. But I want the image data available as a pigment for the target
object. screen.inc won't accomplish this, AFAIK, plus I like to use a
fisheye lens for all my scenes (I always hide the black circle border).

My most successful attempt used a translated and scaled image with
warp{spherical} applied. Point_At_Trans aligned the pigment to the
target object. The pigment was then translated to the camera's position,
but was applied to the target object. From there I could add different
pigments/textures to the white and black areas, thus creating the effect
of rust in corners, as an example.
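
Roughly, the setup looked something like this (simplified; the file name,
CamPos, TargetPos and ImgScale stand in for my actual values):

#include "transforms.inc"   // for Point_At_Trans

// sketch only: pre-rendered mask projected outward from the camera
#declare ProjPig = pigment {
  image_map { png "prerender.png" once interpolate 2 }
  translate <-0.5, -0.5, 0>           // center the image on the origin
  scale ImgScale                      // one of the values I had to fudge
  warp { spherical }                  // wrap the flat image around the origin
  Point_At_Trans(TargetPos - CamPos)  // aim the projection at the target
  translate CamPos                    // ...then project it outward from the camera
}
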
The problem is that the edges of the image pigment don't match up to the
edges of the render window. I got pretty darn close by experimenting
with different values, but the image's edge curve is always different.
Has anyone run into a similar problem? Any solutions? Can anyone think
of something that *might* work? I would like to keep using my fisheye
lens camera, but I'm afraid a real solution to my problem would only
work with the standard type.
Thanks in advance~
~Sam

From: Tim Attwood
Subject: Re: Projecting an image outward from the camera

My first thought would be to use an orthographic
camera for the pigment generation, then use a
planar mapped pigment. The rear of the object
would be textured wrong, but from the front it
should look correct.
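
Something along these lines, maybe (untested; CamPos, TargetPos, Width,
Height and the file name are just placeholders):

// pass 1: render the dirt map with an orthographic camera
camera {
  orthographic
  location CamPos
  look_at  TargetPos
  right    Width*x
  up       Height*y
}

// pass 2: apply that render as a planar pigment facing the view direction
#include "transforms.inc"   // for Reorient_Trans
#declare DirtPig = pigment {
  image_map { png "dirtpass.png" once interpolate 2 }
  translate <-0.5, -0.5, 0>   // center the unit-square image
  scale <Width, Height, 1>    // match the ortho window
  Reorient_Trans(z, vnormalize(TargetPos - CamPos))
  translate CamPos
}
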
This is a real stumper... what you'd really want is
some sort of inverted spherical camera, where the
rays are shot from a sphere surrounding the look_at.
You might be able to get a reasonable map for some
sorts of simple objects by making them hollow and
rendering with a spherical camera placed inside at the
center.
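
For a simple object centred at some point Ctr (a placeholder), the map
pass might be as simple as this, with the result wrapped back on using
spherical mapping (map_type 1):

// untested: spherical camera at the object's centre
camera {
  spherical
  angle 360 180   // full horizontal and vertical coverage
  location Ctr
  look_at  Ctr + z
}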

From: Alain
Subject: Re: Projecting an image outward from the camera

Tim Attwood brought us his insights on 2007/07/05 04:28:
> My first thought would be to use an orthographic
> camera for the pigment generation, then use a
> planar mapped pigment, the rear of the object
> would be textured wrong, but from the front it
> should look correct.
>
> This is a real stumper... what you'd really want is
> some sort of inverted spherical camera, where the
> rays are shot from a sphere surrounding the look_at.
>
> You might be able to get a reasonable map for some
> sorts of simple objects by making them hollow and
> rendering with a spherical camera placed inside at the
> center.
>
>
You don't need to add the hollow keyword. The purpose of hollow is to allow an
object to contain media, nothing else. hollow will only remove a warning in
this case.
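
For example, hollow only matters in a case like this:

// minimal illustration: hollow so the interior can hold media
sphere {
  0, 1
  hollow
  pigment { rgbt 1 }                    // fully transparent surface
  interior { media { emission 0.5 } }
}
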
--
Alain
-------------------------------------------------
You know you've been raytracing too long when you have ever "Hand-Coded" a
bezier patch.
Stephan Ahonen

From: Samuel Benge
Subject: Re: Projecting an image outward from the camera
Date: 5 Jul 2007 15:17:41
Message: <468d43d5@news.povray.org>

Tim Attwood wrote:
> My first thought would be to use an orthographic
> camera for the pigment generation, then use a
> planar mapped pigment, the rear of the object
> would be textured wrong, but from the front it
> should look correct.
I tried aligning a planar mapped pigment from the camera to the target,
but the result is terrible. Perspective causes many unwanted problems.
> This is a real stumper... what you'd really want is
> some sort of inverted spherical camera, where the
> rays are shot from a sphere surrounding the look_at.
Actually, a spherical camera would probably work, and I've thought of
this, but I would have to render a huge image for the dirt map just to
match the resolution of the final render.
> You might be able to get a reasonable map for some
> sorts of simple objects by making them hollow and
> rendering with a spherical camera placed inside at the
> center.
The whole point of projecting an image from the camera location is that
everything the camera sees would be okay (minus reflective surfaces
catching the backside).
Thank you for your ideas!
~Sam

From: Tim Attwood
Subject: Re: Projecting an image outward from the camera

About creating a dirtmap pigment with radiosity pre-renders...

Idea 1. Orthographic camera with planar mapping
> I tried aligning a planar mapped pigment from the camera to the target,
> but the result is terrible. Perspective causes many unwanted problems.
...
> The whole point of projecting an image from the camera location is that
> everything the camera sees would be ok (minus reflective surface catching
> the backside).
Yeah, you'd need to calculate the angle from the camera to the object
in order to rotate the pigment to hide the bad back. So any animation
would be out, and if the back shows up in reflections it'd be wrong.
To fix that you might start piecing maps together, but by then you
might as well have drawn your dirt map by hand. I think this idea
might work in some cases, but it would always be a lot of work to
align correctly.

Idea 2. Inverted-spherical camera with look_at in center of object
>> This is a real stumper... what you'd really want is
>> some sort of inverted spherical camera, where the
>> rays are shot from a sphere surrounding the look_at.
>
> Actually, a spherical camera would probably work, and I've thought of
> this, but I would have to render a huge image for the dirt map just to
> match the resolution of the final render.
I think an inverted-spherical camera would work without excessive
image sizes, because every ray would be cast toward the look_at point,
so the entire map image would be of the object. Of course there is
no such camera; it'd need to be implemented... in MegaPov I think
you can set up user-defined cameras based on functions of u and v.
The resulting image pigment would be applied with spherical mapping.
It should be do-able.
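
The application step could then look something like this (the file name
and Ctr are placeholders, and it would probably also need a rotate to
line up with whatever orientation the camera pass used):

#declare DirtPig = pigment {
  image_map { png "dirtmap.png" map_type 1 interpolate 2 }  // map_type 1 = spherical
  translate Ctr   // move the mapping centre onto the object's centre
}
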
Idea 3. Spherical camera placed at center of object
>> You might be able to get a reasonable map for some
>> sorts of simple objects by making them hollow and
>> rendering with a spherical camera placed inside at the
>> center.
This should work for objects where the whole surface is viewable
from the center of the object; the resulting image pigment could
then be spherical-mapped onto the object. I haven't tested this,
and I see a couple of flaws in the idea too. First off, a complex
object is likely to have portions of the interior that are obscured.
Secondly, I'm not at all sure that the radiosity will appear correct
from the inside of an object.

From: Tim Attwood
Subject: Re: Projecting an image outward from the camera
Date: 5 Jul 2007 17:26:21
Message: <468d61fd@news.povray.org>

> You don't need to add the hollow keyword. The purpose of hollow is to
> allow an object to contain a media, nothing else. hollow will only remove
> a warning in this case.

Yeah, but you might as well use it. Besides that, you'd need to make sure to
use merge to get rid of internal surfaces. Or maybe inverse.

From: Tim Attwood
Subject: Re: Projecting an image outward from the camera

> Idea 2. Inverted-spherical camera with look_at in center of object
I had some moderate success generating a dirt map with this camera
in MegaPov, but it generates poor results if parts of the object are
obscured from the sphere. I guess that's similar to the problem with idea 3.
// inverse-spherical camera: rays start on a sphere of radius 50 around
// the origin and are shot back toward the centre
#declare cx = function(u,v,r) {r*cos(u*pi*2)*sin(v*pi)};
#declare cy = function(v,r) {-r*cos(v*pi)};
#declare cz = function(u,v,r) {r*sin(u*pi*2)*sin(v*pi)};
camera {
  user_defined
  location {
    // ray origins on the surrounding sphere
    function {cx(u,v,50)}
    function {cy(v,50)}
    function {cz(u,v,50)}
  }
  direction {
    // same functions with r = -1, so each ray points back at the origin
    function {cx(u,v,-1)}
    function {cy(v,-1)}
    function {cz(u,v,-1)}
  }
}

From: Alain
Subject: Re: Projecting an image outward from the camera

Tim Attwood brought us his insights on 2007/07/05 17:26:
>> You don't need to add the hollow keyword. The purpose of hollow is to
>> allow an object to contain a media, nothing else. hollow will only remove
>> a warning in this case.
>
> Yeah, but you might as well use it, besides that you'd need to make sure to
> use merge to get rid of internal surfaces. Or maybe inverse.
>
>
You need merge. inverse will only invert the insideness of the objects, not the
location nor the presence of surfaces.
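
For example:

// union keeps the two internal caps where the spheres overlap;
// merge removes them, which matters when the camera is inside looking out
merge {
  sphere { <-0.5, 0, 0>, 1 }
  sphere { < 0.5, 0, 0>, 1 }
  pigment { rgb 1 }
}
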
--
Alain
-------------------------------------------------
Did you know that Al Capone's business card said he was a used furniture dealer.

From: Samuel Benge
Subject: Re: Projecting an image outward from the camera
Date: 6 Jul 2007 14:03:14
Message: <468e83e2@news.povray.org>

Tim Attwood wrote:
>> Idea 2. Inverted-spherical camera with look_at in center of object
> I had some moderate success generating a dirt map with this camera
> in MegaPov, but it generates poor results if parts of the object are
> obscured to the sphere, I guess that's similar to problem with idea 3.
Wow, something else I did not know. User-defined camera types! With
this, I might be able to make a camera similar to a fisheye lens, but
one which works in conjunction with my idea of projecting the image from
the camera!
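
Maybe something like this would do it, assuming u and v run from 0 to 1
the way they seem to in your example (completely untested, and the camera
here just sits at the origin looking down +z):

// untested fisheye-style user_defined camera, 180 degrees across the width
#declare FovAng = radians(180);
#declare FR = function(u,v) { sqrt(pow(u-0.5,2) + pow(v-0.5,2)) };
camera {
  user_defined
  location {
    function { 0 } function { 0 } function { 0 }
  }
  direction {
    // the ray angle grows linearly with the distance from the image centre
    function { sin(FR(u,v)*FovAng)*(u-0.5)/max(FR(u,v), 1e-6) }
    function { sin(FR(u,v)*FovAng)*(v-0.5)/max(FR(u,v), 1e-6) }
    function { cos(FR(u,v)*FovAng) }
  }
}
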
Thanks Tim, for trying this idea. I'm thinking the best way is to make a
map that fills the camera's view of the image. Reflective surfaces might
show badly-textured back sides, but I think I can live with that. I'll
keep your inverted spherical projection as a reference. Thanks again!
~Sam

From: Rune
Subject: Re: Projecting an image outward from the camera

Erm, I'm a bit late here, but this might be what you're looking for:

http://runevision.com/3d/include/#96

"Illusion Include File" - don't let the name and the description throw you off.
What it really does is just project an image from the camera onto the scene,
with parameters that make sure the projection matches a given camera
definition exactly.
Rune
--
http://runevision.com
Samuel Benge wrote:
> Hi,
>
> The recent ambient occlusion thread(s) got me thinking about a way to
> use a radiosity-derived dirtmap pigment which would not affect (final)
> render times too much. The idea is to pre-render the object with a
> white background and radiosity, and use that image as a mask for the
> final image. But I want the image data available as a pigment for the
> target object. screen.inc won't accomplish this, AFAIK, plus I like
> to use a fisheye lens for all my scenes (I always hide the black
> circle border).
> My most successful attempt used a translated and scaled image with
> warp{spherical} applied. Point_At_Trans aligned the pigment to the
> target object. The pigment was then translated to the camera's
> position, but was applied to the target object. From there I could
> add different pigments/textures to the white and black areas, thus
> creating the effect of rust in corners, as an example.
>
> The problem is that the edges of the image pigment don't match up to
> the edges of the render window. I got pretty darn close by
> experimenting with different values, but the image's edge curve is
> always different.
> Has anyone run into a similar problem? Any solutions? Can anyone think
> of something that *might* work? I would like to keep using my fisheye
> lens camera, but I'm afraid a real solution to my problem would only
> work with the standard type.
>
> Thanks in advance~
>
> ~Sam