My question is: when would I use mcpov over radiosity? Also, is mcpov more
conducive to multiple processors than stock pov 3.6?
"jhu" <nomail@nomail> wrote:
> My question is: when would I use mcpov over radiosity? Also, is mcpov more
> conducive to multiple processors than stock pov 3.6?
From what I've seen so far, I'd say you want to use mcpov when you want a shot
that doesn't show any artifacts whatsoever... provided you have enough time to
spend waiting for the render to... well... "finish", if that word makes any
sense in the mcpov context ;)
(Well, at least no lighting artifacts; it doesn't help with geometry issues.)
For some hard-to-grasp reason, I find that mcpov renders (and path-traced
scenes in general, it seems) have a certain very pleasing look - if the
geometry and the textures are done well, it all just seems "right".
Radiosity is good, given its speed. But there are some subtleties about a
radiosity-lit scene that make it somehow look inferior to mcpov renders -
nothing I could really put my finger on; it just doesn't have the same look.
However, beware of media in mcpov - you can't use those reliably.
Likewise, beware of mirrors combined with radiosity in 3.6 - they make for some
nice artifact generators.
When speed is an issue, I guess mcpov is not what you want. But it may be worse
on a single-core system than on a quad-core, because on a quad-core you can
always run multiple instances of mcpov, using different random seeds, and
overlay them later for a higher-quality shot.
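To illustrate why that overlaying works (my own sketch, not mcpov code; the noise model and numbers are made up): independently-seeded renders have independent noise, so a plain per-pixel average of N of them cuts the noise by roughly a factor of sqrt(N).

```python
import numpy as np

# Toy sketch (not mcpov code): N independently-seeded renders of the same
# scene differ only in noise, so their per-pixel average is less noisy.
rng = np.random.default_rng(0)
truth = 0.5  # pretend "converged" pixel value
renders = [truth + rng.normal(0.0, 0.1, size=(64, 64)) for _ in range(4)]

combined = np.mean(renders, axis=0)  # plain per-pixel average

print(np.std(renders[0]))  # noise of one render, about 0.1
print(np.std(combined))    # about 0.05 - averaging 4 shots halves the noise
```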
"clipka" <nomail@nomail> schreef in bericht
news:web.49d0398d6108ca7088b6cd970@news.povray.org...
> When speed is an issue, I guess mcpov is not what you want. But it may be
> worse on a single-core system than on a quad-core, because you can always run
> multiple instances of mcpov, using different random seeds, to overlay them
> later for a higher-quality shot.
That overlaying of images intrigues me. It's a merging of two or more images
in a paint program, isn't it? What is used in that case? Averaging?
Thomas
> When speed is an issue, I guess mcpov is not what you want.
Depends whether you are OK with doing something else (e.g. sleeping) while it
renders. With mcpov you need virtually zero setup/tweaking time compared to
using radiosity.
I've been re-rendering a lot of my radiosity scenes with mcpov recently;
usually I just copy & paste in the mcpov global header code and then let it
run overnight. Scenes that caused hours of frustration with artifacts under
radiosity magically rendered perfectly :-) It's really, really cool; it
should be an option in the standard POV build IMO.
"scott" <sco### [at] scottcom> wrote:
> I've been re-rendering a lot of my radiosity scenes with mcpov recently;
> usually I just copy & paste in the mcpov global header code and then let it
> run overnight. Scenes that caused hours of frustration with artifacts under
> radiosity magically rendered perfectly :-) It's really, really cool; it
> should be an option in the standard POV build IMO.
I'd better re-render my ghostlight image with mcpov by the sounds of it. I never
did get rid of all the artifacts; the best updated version eventually resorted
to sneaky area lights...
*goes to look at the mcpov homepage*
Thomas de Groot escreveu:
> That overlaying of images intrigues me. It is merging of two or more images
> in a paint program isn't it?
Yes.
> What is used in that case? Averaging?
Probably.
--
a game sig: http://tinyurl.com/d3rxz9
"Thomas de Groot" <tDOTdegroot@interDOTnlANOTHERDOTnet> wrote:
> That overlaying of images intrigues me. It is merging of two or more images
> in a paint program isn't it? What is used in that case? Averaging?
I guess I'd do it with *POV* itself - because (a) it allows for best-quality
input, using the HDR file format, and (b) I know for sure that it will use the
maximum possible precision for computation - using an orthographic camera to
process the shots; and yes, that would be plain averaging. Which is exactly
what each instance of mcpov does anyway for all of its iterations (although
there it's done at float precision; but when parallelizing mcpov runs, we're
probably talking about just 4 or 8 shots that finally need to be mixed).
One thing to pay attention to would be to take runtime differences into account
appropriately. For example, if three threads had been running for 4 hours each,
and another render was started 2 hours later and consequently ran only 2 hours,
I'd give that 2-hour shot a lower weight.
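A quick numerical sketch of that runtime weighting (purely my illustration; the runtimes and pixel values are invented):

```python
import numpy as np

# Toy sketch (invented runtimes and pixel values): weight each shot by its
# runtime, as a rough proxy for how many samples it has accumulated.
hours = np.array([4.0, 4.0, 4.0, 2.0])  # three 4-hour shots, one 2-hour shot
shots = np.stack([np.full((2, 2), v) for v in (0.50, 0.52, 0.48, 0.60)])

combined = np.average(shots, axis=0, weights=hours)  # weighted per-pixel mean
print(combined[0, 0])  # (4*0.50 + 4*0.52 + 4*0.48 + 2*0.60) / 14 ≈ 0.514
```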
If using Photoshop instead, there are two ways to go; both involve just plain
normal layer combination, with some transparency on the layers:
(a) process the shots in pairs, giving the lower layer full opacity and the
higher one 50%: if, say, you had shots A, B, C, D, E, F, G and H, you'd first
merge A+B, C+D, E+F and G+H, then merge the resulting images in pairs again,
and so on, until a single shot remains.
(b) stack all the shots; give the lowest layer 100% opacity, the next higher one
50%, the next higher one 33%, then 25%, 20%, 17%, and so on. Merge to a single
shot.
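For what it's worth, the opacity series in (b) is a running average in disguise: blending layer k in at opacity 1/k reproduces the plain mean. A toy sketch with made-up pixel values:

```python
# Toy sketch with made-up pixel values: blending layer k at opacity 1/k
# on top of the stack so far keeps a running mean of all layers.
shots = [0.2, 0.6, 0.4, 0.8]

out = 0.0
for k, s in enumerate(shots, start=1):
    a = 1.0 / k                    # 100%, 50%, 33%, 25%, ...
    out = out * (1.0 - a) + s * a  # standard "normal" blend at opacity a

print(out)  # ≈ 0.5, identical to the plain average sum(shots)/len(shots)
```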
I have no idea which of these approaches would leave you with the least loss. I
guess (a), but it depends on how Photoshop does things internally. If, for
example, it stays in the 8-bit integer domain for its computations, (b) is
obviously crap; but if it uses floating-point math (or at least 16-bit
arithmetic) during any "merge layers" operation, yet converts back to 8-bit
after such an operation, you're definitely better off with (b).
In any case, Photoshop can never do as well as POV.
Additionally, with POV you can stay in the linear domain (speaking of gamma
here) throughout the whole process before outputting the final results; I don't
know whether Photoshop takes gamma issues into account when merging images. This
will probably not be an issue when the images are low-noise already, but if you
still have a good deal of graininess it may make a difference.
Note however that when working with linear output from the actual renders, you
*do* want to use HDR, otherwise you'll lose a lot of detail in dark parts of
the shot.
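The gamma point can be made concrete with a toy example (my own illustration; a display gamma of 2.2 is assumed): averaging a noisy pixel's gamma-encoded values gives a different result than averaging the linear light first and encoding afterwards.

```python
# Toy example, my own illustration (display gamma 2.2 assumed): averaging
# a noisy pixel's gamma-ENCODED values is not the same as averaging the
# LINEAR light and encoding the result - the encoded-space average is darker.
g = 2.2
lin = [0.1, 0.9]  # two noisy linear samples of the same pixel

avg_then_encode = (sum(lin) / len(lin)) ** (1 / g)           # correct order
encode_then_avg = sum(v ** (1 / g) for v in lin) / len(lin)  # wrong order

print(round(avg_then_encode, 3))  # ≈ 0.730
print(round(encode_then_avg, 3))  # ≈ 0.652 - visibly darker
```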
clipka escreveu:
> "Thomas de Groot" <tDOTdegroot@interDOTnlANOTHERDOTnet> wrote:
>> That overlaying of images intrigues me. It is merging of two or more images
>> in a paint program isn't it? What is used in that case? Averaging?
>
> I geuss I'd do it with *POV* - because (a) it allows for best-quality input,
> using HDR file format, and (b) I know for sure that it will use the maximum
> possible precision for computation - using an orthographic scene process; and
> yes, that would be plain averaging. Which is exactly what each instance of
> mcpov does anyways for all of its iterations (although that's done even at
> float precision; but when parallelizing mcpov runs, we're probably talking
> about just 4 or 8 shots that finally need to be mixed).
>
> One thing to pay attention to would be to take runtime differences into account
> appropriately. For example, if three threads had been running for 4 hours each,
> and another render was started 2 hours later and consequently ran only 2 hours,
> I'd give that 2-hour shot a lower weight.
>
>
> If using Photoshop instead, there's two possibilities to go; both involve just
> plain normal layer combination, with some transparency to the layers:
>
> (a) process the shots in pairs, giving the lower layer full opacity and the
> higher one 50%. If, say, you had shots A, B, C, D, E, F, G and H, you'd first
>
> (b) stack all the shots; give the lowest layer 100% opacity, the next higher one
> 50%, the next higher one 33%, then 25%, 20%, 17%, and so on. Merge to a single
> shot.
>
> I have no idea which of these approaches would leave you with the least loss. I
> guess (a), but it depends on how Photoshop does things internally. If for
> example it stays in the 8 bit integer domain for its computations, (b) is
> obviously crap; but if it uses floating-point math (or at least 16-bit
> arithmetics) during any "merge layers" operation, but converts back to 8-bit
> after such an operation, you're definitely better off with (b).
>
> In any case, Photoshop can never do as good as POV.
>
>
> Additionally, with POV you can stay in the linear domain (speaking of gamma
> here) throughout the whole process before outputting the final results; I don't
> know whether Photoshop takes gamma issues into account when merging images. This
> will probably not be an issue when images are low-noise already, but if you
> still have some deal of graininess it may make a difference.
>
> Note however that when working with linear output from the actual renders, you
> *do* want to use HDR, otherwise you'll lose a lot of detail in dark parts of
> the shot.
>
>
Yes, what he said. Just took the words right out of my mouth. ;)
--
a game sig: http://tinyurl.com/d3rxz9
"scott" <sco### [at] scottcom> wrote:
> I've been re-rendering a lot of my radiosity scenes with mcpov recently;
> usually I just copy & paste in the mcpov global header code and then let it
> run overnight. Scenes that caused hours of frustration with artifacts under
> radiosity magically rendered perfectly :-) It's really, really cool; it
> should be an option in the standard POV build IMO.
I must say that I do agree with that. I'd particularly like a hybrid that does
full-fledged path tracing for diffuse or ambient-based illumination, but still
allows it to be intermixed with the classic lighting model for everything else
- if only to have a reference for radiosity...
... which may be a valid reason for me to actually go ahead and *do* integrate
it; I can always claim it's just a debugging aid for me :P
(Indeed, I can't use mcpov as a reference because it doesn't combine with
classic lighting.)
However, we are actually talking about a whole bunch of features, which might be
integrated into POV (and make sense) all separately:
* The ability to use something like "uncached radiosity" to get artifact-free
diffuse interreflection (provided you are patient enough), both as a reference
for radiosity and for top-quality shots when rendering time is not an
issue.
* Native support for blurred reflections (which arguably would not give much
different results nor any higher speed than the common averaged-texture
micro-/macronormals approach - except for one important detail: Standardized
easy setup of materials that use it; right now, the most convenient way to do
this would be to use some framework that provides macros for it; but there may
be various such frameworks out there, each with its own "macro syntax").
* The ability to optimize combinations of various effects that need to shoot
many rays - like focal blur + area lights + media + uncached radiosity +
diffuse reflections - by restricting each of these steps to follow only a few
randomly chosen rays (or to follow even just a single random one), and let
oversampling take care of the rest at the "root".
* Support for yet another oversampling approach besides static or adaptive
anti-aliasing and focal blur, which does not depend on a pre-set parameter, but
on the user's decision that the image is now pretty enough. Or that he made a
blunder and it's not worth wasting more rendering time on the shot.
* The ability to mark objects or regions as particularly strong sources of
diffuse illumination ("portals"; radiosity could benefit from these, too)
(Did I forget a thing here?)
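The "follow only a few randomly chosen rays and let root oversampling take care of the rest" idea above can be sketched as a toy Monte Carlo estimate (my own illustration; the uniform distributions just stand in for area-light and focal-blur sampling, and `shade` is a hypothetical name):

```python
import random

# Toy sketch: instead of m light samples x n blur samples per ray, take ONE
# random sample of each per root ray and let root-level oversampling average
# it out. The expected value is the same as full nested sampling.
random.seed(1)

def shade():
    light = random.uniform(0.4, 0.6)  # one random area-light sample
    blur = random.uniform(0.9, 1.1)   # one random focal-blur sample
    return light * blur

n = 200000
est = sum(shade() for _ in range(n)) / n
print(round(est, 2))  # ≈ 0.50 = E[light] * E[blur] (independent samples)
```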
Plus, there's another thing I'd like to see implemented, which even mcpov
doesn't have:
* True subsurface scattering simulation, using not some smart optimized
mathematical approximation, but simulating the path of individual photons
through the material. Of course this will be awfully slow, I bet, but there may
be situations where the averaged SSS approximation just doesn't quite grasp
some detail (like the subtle effect of the bones in a finger); it might again
be of benefit as a reference for SSS approximations, but also for top-quality
shots when rendering time is not an issue.
"clipka" <nomail@nomail> wrote:
> However, beware of media in mcpov - you can't use those reliably.
Actually, media works very well as long as you stick with absorption and
emission, as they do not depend on lights. It works great for glass or other
translucent materials, although it's certainly not subsurface scattering.
Overall, I'd say MCPov is nice if you're lazy and want to get all the benefits
of photons, radiosity, etc. without any extra work or artifacts. Somehow it
just feels more natural.
- Ricky