In article <471164bc@news.povray.org>, evi### [at] hotmail com
says...
> Patrick Elliott wrote:
> > Only problem I could see with that is that someone may want to use
> > multiple cameras, for, say, stereoscopic effects. I mean, it's not
> > totally absurd to consider someone wanting either a) to render both
> > eyes at the same time, if they have the speed and the language
> > allows it, or b) to do something crazier, like having a camera that
> > "looks" at some other location, which is then used as part of the
> > image somewhere else. A good example would be something like a
> > security booth. You have a monitor on the desk, which shows a kind
> > of post-processed "filtered" effect in black and white of what is
> > "seen" by a camera that is also *visible* in the larger scene, but
> > in color, as is most of the rest of what is in the camera view.
> >
> > Yeah, I know, you can do two renders, one from the camera view,
> > then post-process that, then use the result as an image map on the
> > monitor, but that is just an example. It's possible that someone
> > could have a scene where this was either seriously impractical, or
> > completely impossible, unless the "camera view" was being produced
> > in the same render as the final image. Then again, what do I know. lol
>
> So which is better:
>
> A) Letting the users simply render two sets of images, with one being
> used as a texture in another; this is very easy, requiring only two
> scenes that are almost identical (the camera is positioned differently
> in the camera view), or
>
> B) Writing the renderer so that a texture can be based on the camera
> view (with rays being traced, anti-aliased, etc.) of another place in
> the scene.
>
> We already have A up and running just fine. I use it all the time.
> Leaving out the trouble of render-wrangling a second set of images
> (i.e., putting them in a different directory after rendering), A is no
> more work than B for the user; the camera still has to be positioned and
> test-rendered for either method.
>
> B will require work from programmers who are already busy with tasks
> that can't be pushed off onto the user, or that can be only at the
> expense of significantly reducing the user's productivity. It also
> increases the size of the renderer, in order to provide the extra
> feature.
>
> Now if the memory situation is very tight, then method B does have the
> advantage that the texture does not need to be stored in memory; but in
> almost every case of this kind the video image occupies less than a
> quarter of the final view, which means that the texture requires only
> one-quarter of the memory of the rendered image. If your memory
> situation is that tight, maybe you need to get some more RAM.
>
> Regards,
> John
>
(A) doesn't easily let you place the camera where you are looking, so
that it sees its own result, which contains its own result, and so on.
Yeah, it's still possible, but it usually requires feeding the same
scene through multiple times. It's not **that** much different from
using a reflective surface, though, in terms of how it would work: you
are just displacing the path of your ray to a new location, then
continuing, until you reach your recursion limit. It would be fun to
see how practical that was, at least, even if it proved too
inconvenient to use.
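The "displaced ray" idea can be sketched roughly like this. It's a toy
tracer, not POV-Ray code, and the whole scene (one red sphere, a flat
"monitor" panel, the secondary camera position, and all the names) is
my own invented example, just to show the reflection-like redirect and
the recursion limit:

```python
# Toy sketch: when a ray hits the "monitor" panel, restart it from a
# secondary camera and keep tracing -- like a reflection bounce, but
# teleported instead of mirrored. Hypothetical geometry throughout.
from dataclasses import dataclass

@dataclass
class Sphere:
    center: tuple
    radius: float
    color: tuple

def sub(a, b): return (a[0]-b[0], a[1]-b[1], a[2]-b[2])
def add(a, b): return (a[0]+b[0], a[1]+b[1], a[2]+b[2])
def dot(a, b): return a[0]*b[0] + a[1]*b[1] + a[2]*b[2]
def scale(a, s): return (a[0]*s, a[1]*s, a[2]*s)

def hit_sphere(origin, direction, sph):
    """Smallest positive ray parameter t hitting the sphere, or None."""
    oc = sub(origin, sph.center)
    a = dot(direction, direction)
    b = 2.0 * dot(oc, direction)
    c = dot(oc, oc) - sph.radius ** 2
    disc = b * b - 4 * a * c
    if disc < 0:
        return None
    t = (-b - disc ** 0.5) / (2 * a)
    return t if t > 1e-6 else None

SPHERES = [Sphere((0.0, 0.0, 5.0), 1.0, (1.0, 0.0, 0.0))]  # red sphere
MONITOR_Z = 3.0                 # flat panel at z = 3, |x|,|y| <= 0.5
SECOND_CAMERA = (0.0, 0.0, 3.5) # where the in-scene camera sits

def trace(origin, direction, depth=0, max_depth=3):
    if depth > max_depth:
        return (0.0, 0.0, 0.0)  # recursion limit reached: go black
    # Check the monitor panel first (in this toy scene it is always
    # nearer to the main camera than the sphere).
    if direction[2] > 1e-6:
        t = (MONITOR_Z - origin[2]) / direction[2]
        p = add(origin, scale(direction, t))
        if t > 1e-6 and abs(p[0]) <= 0.5 and abs(p[1]) <= 0.5:
            # Displace the ray: continue from the secondary camera,
            # aimed by the panel coordinates.
            color = trace(SECOND_CAMERA, (p[0], p[1], 1.0),
                          depth + 1, max_depth)
            gray = (color[0] + color[1] + color[2]) / 3.0
            return (gray, gray, gray)  # B&W "security monitor" look
    best = None
    for sph in SPHERES:
        t = hit_sphere(origin, direction, sph)
        if t is not None and (best is None or t < best[0]):
            best = (t, sph.color)
    return best[1] if best else (0.2, 0.2, 0.2)  # background gray
```

A ray through the panel comes back grayscale (the secondary camera's
view of the red sphere), while a ray past the panel's edge still sees
the sphere in color. And if the secondary camera could see the monitor
itself, the rays would re-enter the panel until `max_depth` was hit and
the innermost picture went black, which is exactly the recursion-limit
behavior described above.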
--
void main () {
call functional_code()
else
call crash_windows();
}
<A HREF='http://www.daz3d.com/index.php?refid=16130551'>Get 3D Models,
3D Content, and 3D Software at DAZ3D!</A>