Hi,
I'd like to make a suggestion for the next version of POV-Ray with regard to
alpha channel transparency support.
Right now, if alpha transparency is enabled, all areas of the image where
the background is partly or fully visible will be partly or fully
transparent. So far so good.
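(For reference, I am enabling alpha output with INI settings along these
lines; the file name is just a placeholder.)

Input_File_Name=ships.pov
Output_File_Type=N   ; PNG output
Output_Alpha=on      ; same as +UA on the command line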
I have a complex animation where two highly-detailed and textured spaceships
swoop back and forth from the point-of-view of the camera in such a manner
that sometimes spaceship "A" is obscuring parts of spaceship "B", and
sometimes spaceship "B" is obscuring spaceship "A".
Because each of these spaceships takes a LONG time to render by itself
(and even longer when they are combined in a scene), I want to render each
spaceship on a separate computer using POV-Ray's alpha transparency feature,
and combine the frames together in post-production along with a third
background layer for the scene background. However, because the spaceship
that is "on top" or "in front" changes during the length of the animation, I
can't always layer the frames from spaceship "A" on top of the frames from
spaceship "B"... sometimes "B" is on top/in front.
Here's what I'm suggesting: a new POV-Ray object modifier called
"alpha_only". When applied to an object, "alpha_only" would cause any light
ray striking that object to be rendered as transparent in the alpha channel.
The way I would use this feature is as follows: On computer #1 I would
render spaceships "A" and "B" together, but spaceship "B" would have the
"alpha_only" modifier applied. As the invisible spaceship "B" moved in front
of spaceship "A" during the animation, it would mask out the areas on
spaceship "A" where "B" blocks it from the camera. I would then reverse the
process on computer #2: spaceships "A" and "B" would repeat exactly the same
animation script, only this time spaceship "A" would be "alpha_only" and
would mask itself when obscuring spaceship "B".
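To make this concrete, the scene on computer #1 might contain something like
the following (hypothetical syntax, of course; ShipA and ShipB stand in for
my actual ship models):

// Computer #1: ship A renders normally, ship B (with the proposed
// "alpha_only" keyword) only cuts matching holes in the alpha channel.
object { ShipA }
object { ShipB alpha_only }
// Computer #2 would run the same animation with the roles reversed:
// object { ShipA alpha_only } and a normal object { ShipB }.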
Since an object with "alpha_only" applied would not need its textures,
reflections, etc. computed, it seems that adding such an object to a scene
would carry minimal overhead. It would certainly make the
post-production step of combining the frames simpler... *especially* if
there are dozens of spaceships (or whatever) in a scene vs. only two.
Does any of this make any sense? Is it doable/feasible/desirable?
--sg
Scott Gammans wrote:
>Does any of this make any sense? Is it doable/feasible/desirable?
What you suggest is completely pointless. In effect you would do twice the
work compared to rendering both ships in a single image. No matter what you
do, it will be slower.
You should really think about solving your problem with what exists rather
than trying to invent things you think you need but that won't help you
(and in this case only hurt you). Doing so will be much more productive ;-)
Thorsten
Well, a point about this is that many rendering packages render in multiple
passes, because doing it all in a single pass would create way too much
overhead.

The problem for POV is that this feature would have to be implemented at a
fundamental level. As I see it, alpha_only would have to switch off all
textures, image_maps, normals, etc. and just do object hits in order to get
the speed-up you want.

But in this case, I think it would be way too much work to implement it in
POV 3.5. This might be a nifty little feature for POV 4, but until then I
see it as Thorsten does: there's no point. POV 3.5 will probably still load
all image_maps and need to parse everything as before, so you won't get a
significant speed-up that way.
--
Tim Nikias v2.0
Homepage: http://www.digitaltwilight.de/no_lights
Email: Tim### [at] gmxde
How about simply rendering even frames on one computer and odd frames on
the other? Or the first half of the animation on one and the latter half
on the other?
A simple solution, and it saves post-processing trouble (and disk space).
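For example, if the animation is driven from an INI file, something like
this would split the work (the frame numbers are only placeholders):

; Machine 1 renders the first half, machine 2 the second half.
Initial_Frame=1
Final_Frame=300
Subset_Start_Frame=1    ; on machine 2: Subset_Start_Frame=151
Subset_End_Frame=150    ; on machine 2: Subset_End_Frame=300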
--
#macro N(D)#if(D>99)cylinder{M()#local D=div(D,104);M().5,2pigment{rgb M()}}
N(D)#end#end#macro M()<mod(D,13)-6mod(div(D,13)8)-3,10>#end blob{
N(11117333955)N(4254934330)N(3900569407)N(7382340)N(3358)N(970)}// - Warp -
I think that you can do something similar already, although in three
steps (maybe the kings of povray abuse and obfuscation can help to do
it in two).
1. Render spaceship A only (otherwise full scene).
2. Render spaceship B only (otherwise full scene).
3. Render a 3-colour picture. One colour for each spaceship and one
for the background.
Postprocessing to merge the three should be trivial.
You already know how to do the first two steps. For the third, extinguish
all light sources and change the textures of the spaceships and the
background to pigment{color ___}finish{ambient 1}.
There is a catch to this idea: the reflections, shadows and radiosity of
spaceship A will not be visible on spaceship B.
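For concreteness, the third render might be set up along these lines (ShipA
and ShipB stand for whatever identifiers your scene already uses; if the
ships carry textures on their components, substitute the flat textures
inside the models instead):

// Matte pass: no light_sources, flat fully-ambient colours.
#declare MatteA = texture { pigment { color red 1 } finish { ambient 1 diffuse 0 } }
#declare MatteB = texture { pigment { color green 1 } finish { ambient 1 diffuse 0 } }
object { ShipA texture { MatteA } }
object { ShipB texture { MatteB } }
background { color blue 1 }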
If your ships use only a two-dimensional colour space (e.g. they are shades
of grey with red lights), two steps suffice. You can recode your colours to
use only red and green and put depth information into blue. For merging,
take the colour with less blue (the pixel that is nearer to the camera) and
undo the colour recoding.
--
merge{#local i=-11;#while(i<11)#local
i=i+.1;sphere{<i*(i*i*(.05-i*i*(4e-7*i*i+3e-4))-3)10*sin(i)30>.5}#end
pigment{rgbt 1}interior{media{emission x}}hollow}// Mark Weyer
Thank you for your usual nasty reply, Thorsten. At least the others gave
useful replies without the sour attitude.
"Thorsten" <nomail@nomail> wrote in message
news:web.3e7653d81663f353f50c5f410@news.povray.org...
> What you suggest is completely pointless. In effect you do twice the work
> compared to rendering both ships in one image at once. No matter what you
> do it will be slower.
> You should really think about solving your problem with what exists rather
> than trying to invent things you think you need but that won't help you
> (and in this case only hurt you). Doing so will be much more productive
;-)
>
> Thorsten
>
Post a reply to this message
|
|
| |
| |
|
|
|
|
| |
| |
|
|
"Tim Nikias v2.0" <tim### [at] gmxde> wrote in message
news:3e765b30@news.povray.org...
> Well, a point about this is that many rendering packages render in
> multiple passes, because doing it all in a single pass would create way
> too much overhead.
>
> The problem for POV is that this feature would have to be implemented at
> a fundamental level. As I see it, alpha_only would have to switch off all
> textures, image_maps, normals, etc. and just do object hits in order to
> get the speed-up you want.
If this proposed feature existed, I would turn off textures, image_maps,
etc. myself using conditional parsing of the scene file.
>
> But in this case, I think it would be way too much work to implement it
> in POV 3.5. This might be a nifty little feature for POV 4, but until
> then I see it as Thorsten does: there's no point. POV 3.5 will probably
> still load all image_maps and need to parse everything as before, so you
> won't get a significant speed-up that way.
Like I said, I would define my objects so that they would only load all the
computationally-intensive stuff if a conditional parsing variable were set
in my scene script.
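Something along these lines, for example (MattePass and T_ShipB_Full are
made-up names standing in for my own):

// Set to 1 on the machine where ship B acts only as a mask.
#declare MattePass = 1;

#if (MattePass)
  // Cheap stand-in: flat black, so nothing expensive is parsed or loaded.
  #declare ShipB_Tex = texture { pigment { color rgb 0 } finish { ambient 0 diffuse 0 } }
#else
  #declare ShipB_Tex = texture { T_ShipB_Full } // the full, expensive layered texture
#end

object { ShipB texture { ShipB_Tex } }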
"Warp" <war### [at] tagpovrayorg> wrote in message
news:3e767821@news.povray.org...
> How about simply rendering even frames on one computer and odd frames on
> the other? Or the first half of the animation on one and the latter half
> on the other?
> A simple solution, and it saves post-processing trouble (and disk space).
I take it you're assuming this is for an animation with interlaced frames
(which it isn't). But even if it were interlaced, I still don't understand
how that would help... when one spaceship was in front of the other,
wouldn't the spaceships seem to dissolve into each other every other frame?
"Mark Weyer" <wey### [at] informatikuni-freiburgde> wrote in message
news:3E7### [at] informatikuni-freiburgde...
>
> I think that you can do something similar already, although in three
> steps (maybe the kings of povray abuse and obfuscation can help to do
> it in two).
> 1. Render spaceship A only (otherwise full scene).
> 2. Render spaceship B only (otherwise full scene).
> 3. Render a 3-colour picture. One colour for each spaceship and one
> for the background.
> Postprocessing to merge the three should be trivial.
I actually tried that already, and the anti-aliasing along the edges looked
terrible. Maybe it's a limitation of the image-editing tool I use (Jasc
Paint Shop Pro), but combining PNG or TGA images using alpha transparency
just seems to give much better results than using a color key.
Well, then there isn't much point to it, is there? After all,
the raytracing has to be done either way, so no matter
how complicated your object is, it will always affect the
rendering time. You'd only save the texturing/reflection/
refraction/normal work, but not the object intersections themselves.
Or is there something I don't get? The only thing you might
want would be something like invisible depth information
in the image, so that a program capable of "seeing" that
depth could combine the images... It would then just choose,
depending on depth, which image to pick each pixel from.
The only thing I can think of would be to ask Christoph Hormann if
he could implement such a feature in his HCR-Edit, but POV doesn't
write two output files at once (that might be a nice patch, though), so
you'd be stuck with tracing four images instead of just two (one per ship)
or just the single combined render.
--
Tim Nikias v2.0
Homepage: http://www.digitaltwilight.de/no_lights
Email: Tim### [at] gmxde