Warp wrote:
>
> Actually, thinking about it, you confused me.
>
> POV-Ray doesn't need to keep the entire image in memory in order to render
> it (after all, POV-Ray was developed on systems with a limited amount of memory,
> yet was able to render images larger than any conceivable RAM size back then).
I'm too much involved in 3.7 development to have 3.6 anywhere on the
radar ;-)
At present, the architecture of POV-Ray 3.7 does not allow image output
to be written before rendering has actually finished, so the entire image
must be buffered in memory.
> (Of course you won't be able to *save* that image anywhere because you
> would encounter limitations in the file system. But that doesn't stop
> POV-Ray from *rendering* the image.)
You'd probably have to disable preview as well :-)
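As a rough illustration of why that buffering matters at extreme resolutions, here is a back-of-the-envelope sketch in Python; the 16 bytes per pixel (RGBA as single-precision floats) is an assumed figure for illustration, not necessarily POV-Ray 3.7's actual internal pixel format:

    # Rough estimate of the RAM needed to buffer a complete render.
    # 16 bytes/pixel is an assumption made for illustration only.
    def buffer_size_mib(width, height, bytes_per_pixel=16):
        return width * height * bytes_per_pixel / 2**20

    for w, h in [(800, 600), (8000, 3000), (100000, 50000)]:
        print(f"{w}x{h}: ~{buffer_size_mib(w, h):,.0f} MiB")

Even at 8000x3000 that is only a few hundred MiB, but a truly huge render quickly outgrows any realistic amount of RAM.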
TC wrote:
> I realize of course that render-time is dependent on the scene to be
> rendered. I just hoped to get to know if render-time behaves proportionally
> to the number of pixels to be rendered or if render time increases
> exponentially with the number of pixels at really high resolutions.
>
> Do you have any experience?
So far, I haven't seen any significant exceptions to the rule-of-thumb
that render time is proportional to the number of pixels.
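If that rule-of-thumb holds, extrapolating from a small test render is straightforward; a quick sketch, where the test timing is a made-up example value:

    # Extrapolate total render time from a low-resolution test render,
    # assuming render time scales linearly with the number of pixels.
    test_w, test_h, test_seconds = 800, 600, 120   # hypothetical test render
    full_w, full_h = 8000, 3000
    scale = (full_w * full_h) / (test_w * test_h)  # 50x more pixels
    estimate = test_seconds * scale
    print(f"~{scale:.0f}x the pixels -> ~{estimate / 3600:.1f} hours")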
clipka wrote:
> So far, I haven't seen any significant exceptions to the rule-of-thumb
> that render time is proportional to the number of pixels.
...not forgetting that there are scenes where the parse-time dwarfs the
render-time. ;-)
(And then there are things like radiosity pre-trace, photon mapping, etc.)
--
http://blog.orphi.me.uk/
http://www.zazzle.com/MathematicalOrchid*
Orchid XP v8 wrote:
> Does this amount of storage actually exist somewhere? (E.g., what kind
> of space does somebody like Google or Amazon have?)
I'd guess maybe about a million drives with maybe 250G each, tops? I'd read
somewhere they had a half-million computers, and they all use commodity 160G
drives, so something like that. What does that work out to? 250 petabytes?
--
Darren New, San Diego CA, USA (PST)
I ordered stamps from Zazzle that read "Place Stamp Here".
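For what it's worth, both guesses worked out explicitly (decimal petabytes; all input figures are the rough estimates from the post above, not real Google numbers):

    # Back-of-the-envelope totals for the two guesses above.
    half_million_machines = 500_000 * 160e9    # 160 GB drives  -> 80 PB
    million_drives        = 1_000_000 * 250e9  # 250 GB each    -> 250 PB
    print(half_million_machines / 1e15, "PB to", million_drives / 1e15, "PB")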
Warp wrote:
> But that would not be 1 petabyte as one partition. It would be 1 petabyte
> of disk storage in total, among many smaller drives/partitions.
One partition spread amongst many disks. :-) Actually, I think Windows
calls it a "volume": a partition is part of a disk, while a volume holds a
file system.
--
Darren New, San Diego CA, USA (PST)
I ordered stamps from Zazzle that read "Place Stamp Here".
Orchid XP v8 wrote:
> ...not forgetting that there are scenes where the parse-time dwarfs the
> render-time. ;-)
But probably not on an 8000x3000 resolution picture. :-)
--
Darren New, San Diego CA, USA (PST)
I ordered stamps from Zazzle that read "Place Stamp Here".
On Thu, 22 Oct 2009 13:57:41 -0700, Darren New wrote:
> Orchid XP v8 wrote:
>> Does this amount of storage actually exist somewhere? (E.g., what kind
>> of space does somebody like Google or Amazon have?)
>
> I'd guess maybe about a million drives with maybe 250G each, tops? I'd
> read somewhere they had a half-million computers, and they all use
> commodity 160G drives, so something like that. What does that work out
> to? 250 petabytes?
There was an article recently about someone at Google talking about
needing to manage 10 million machines....
Jim
clipka <ano### [at] anonymousorg> wrote:
> So far, I haven't seen any significant exceptions to the rule-of-thumb
> that render time is proportional to the number of pixels.
I'm sure one could artificially construct a scene which renders fast
at one resolution but extremely slow if you make the resolution even
slightly larger (by having some extremely-slow-to-render detail be so
small that no ray hits it at the lower resolution).
--
- Warp
Jim Henderson wrote:
> There was an article recently about someone at Google talking about
> needing to manage 10 million machines....
Your numbers are probably closer than mine, assuming it wasn't a "we plan
our systems so that we can manage 10 million, even though at the moment we
have only 1 million." :-) My numbers are old and were estimated from outside
the company.
--
Darren New, San Diego CA, USA (PST)
I ordered stamps from Zazzle that read "Place Stamp Here".
Warp wrote:
> I'm sure one could artificially construct a scene which renders fast
> at one resolution but extremely slow if you make the resolution even
> slightly larger (by having some extremely-slow-to-render detail be so
> small that no ray hits it at the lower resolution).
On average, that will not change a thing. So you'd have to make the
detail not only particularly small, but also place it strategically.
But a scene coded so that the detail level is driven by the image_height
and image_width variables would do - which would even make sense in some
cases, especially for fractal geometry.
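A minimal sketch of that resolution-driven detail idea, in Python rather than scene description language; the names and the factor-of-two subdivision are illustrative assumptions, while an actual scene would key the same calculation off the image_width / image_height identifiers:

    import math

    # Choose a recursion depth for a fractal object from the output width,
    # so the smallest generated features stay roughly a pixel across instead
    # of vanishing between rays at low resolutions.
    def detail_level(image_width, base_feature_px=400, subdivision=2):
        if image_width <= base_feature_px:
            return 0
        return int(math.log(image_width / base_feature_px, subdivision))

    for w in (320, 800, 1920, 8000):
        print(f"image_width={w}: detail level {detail_level(w)}")

With detail scaling like this, the per-pixel workload stays roughly constant, so render time keeps tracking the pixel count rather than jumping at some threshold resolution.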