Orchid XP v8 <voi### [at] devnull> wrote:
> Given that a single drive can hold 1 TB, it's not infeasible that
> somebody could amass 1,000 of those in a data-center somewhere. That
> would give you 1 PB. But 1 EB? Is that really possible yet?
But that would not be 1 petabyte as one partition. It would be 1 petabyte
of disk storage in total, among many smaller drives/partitions.
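(Back-of-the-envelope, with decimal units -- 1 TB = 10^12 bytes, 1 PB = 1000 TB, 1 EB = 1000 PB:)

```python
# Drive counts needed for 1 PB and 1 EB, using decimal (SI) units.
TB = 10**12
PB = 10**15
EB = 10**18

drive = 1 * TB  # one commodity 1 TB drive

print(PB // drive)  # 1000 drives for 1 PB
print(EB // drive)  # 1000000 drives for 1 EB
```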
--
- Warp
>> I was under the impression that with AA enabled, *all* pixels are
>> supersampled. (But with adaptive supersampling, it shoots 4 rays before
>> deciding whether to supersample further...)
>
> Nope. What do you think the factor after the +a option means?
The threshold for deciding whether or not to subdivide further...
--
http://blog.orphi.me.uk/
http://www.zazzle.com/MathematicalOrchid*
Warp wrote:
> Orchid XP v8 <voi### [at] devnull> wrote:
>> Given that a single drive can hold 1 TB, it's not infeasible that
>> somebody could amass 1,000 of those in a data-center somewhere. That
>> would give you 1 PB. But 1 EB? Is that really possible yet?
>
> But that would not be 1 petabyte as one partition. It would be 1 petabyte
> of disk storage in total, among many smaller drives/partitions.
...unless you can find a RAID controller that supports 1,000 array
elements. o_O
--
http://blog.orphi.me.uk/
http://www.zazzle.com/MathematicalOrchid*
Orchid XP v8 <voi### [at] devnull> wrote:
> >> I was under the impression that with AA enabled, *all* pixels are
> >> supersampled. (But with adaptive supersampling, it shoots 4 rays before
> >> deciding whether to supersample further...)
> >
> > Nope. What do you think the factor after the +a option means?
> The threshold for deciding whether or not to subdivide further...
No, it's the threshold for starting antialiasing in the first place.
The comparison is against neighbour pixels.
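(Sketched roughly like so -- a simplification for illustration only, not POV-Ray's actual code; 0.3 is the documented default for +a:)

```python
def needs_antialiasing(pixel, neighbour, threshold=0.3):
    """Start supersampling only where two adjacent pixels differ by
    more than the +a threshold (simplified per-channel check)."""
    return any(abs(a - b) > threshold for a, b in zip(pixel, neighbour))

# Smooth region: colours nearly identical, no supersampling needed.
print(needs_antialiasing((0.5, 0.5, 0.5), (0.52, 0.5, 0.5)))  # False
# Hard edge: large colour jump triggers supersampling.
print(needs_antialiasing((0.0, 0.0, 0.0), (1.0, 1.0, 1.0)))   # True
```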
--
- Warp
Warp wrote:
> If POV-Ray were to be compiled on a system without those limitations,
> it could probably achieve images of those sizes
Even then, 2e9 x 2e9 couldn't quite be reached (unless you had a >64-bit
system); something like 1e9 x 1e9 would be the limit then.
(Again, speaking of POV-Ray 3.7. I don't think anyone would want to
render such a big image without multiprocessor support ;-))
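(One way to see where a limit in that ballpark could come from -- an illustration only, assuming a 64-bit address space and a hypothetical 8 bytes of buffered data per pixel:)

```python
# Pixel-buffer sizes for the two image dimensions discussed above,
# assuming (hypothetically) 8 bytes of buffered data per pixel.
bytes_per_pixel = 8  # illustrative figure, not POV-Ray's actual layout

huge = (2 * 10**9) ** 2 * bytes_per_pixel  # 2e9 x 2e9 image
big = (10**9) ** 2 * bytes_per_pixel       # 1e9 x 1e9 image

print(huge > 2**64)  # True: beyond 64-bit addressing
print(big < 2**64)   # True: still (barely) addressable
```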
Warp wrote:
> Orchid XP v8 <voi### [at] devnull> wrote:
>>>> I was under the impression that with AA enabled, *all* pixels are
>>>> supersampled. (But with adaptive supersampling, it shoots 4 rays before
>>>> deciding whether to supersample further...)
>>> Nope. What do you think the factor after the +a option means?
>
>> The threshold for deciding whether or not to subdivide further...
>
> No, it's the threshold for starting antialiasing in the first place.
> The comparison is against neighbour pixels.
OK, I didn't know that...
So yes, POV-Ray will need to buffer some of the neighboring pixels. :-}
--
http://blog.orphi.me.uk/
http://www.zazzle.com/MathematicalOrchid*
Warp wrote:
>
> Actually, thinking about it, you confused me.
>
> POV-Ray doesn't need to keep the entire image in memory in order to render
> it (after all, POV-Ray was developed on systems with limited amount of memory,
> yet was able to render images larger than any conceivable RAM size back then).
I'm too much involved in 3.7 development to have 3.6 anywhere on the
radar ;-)
At present, the architecture of POV-Ray 3.7 does not allow for writing
image output before rendering has actually finished, so the whole smash
must be buffered.
> (Of course you won't be able to *save* that image anywhere because you
> would encounter limitations in the file system. But that doesn't stop
> POV-Ray from *rendering* the image.)
You'd probably have to disable preview as well :-)
TC wrote:
> I realize of course that render-time is dependent on the scene to be
> rendered. I just hoped to get to know if render-time behaves proportionally
> to the number of pixels to be rendered or if render time increases
> exponentially with the number of pixels at really high resolutions.
>
> Do you have any experience?
So far, I haven't seen any significant exceptions to the rule-of-thumb
that render time is proportional to the number of pixels.
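(That rule of thumb makes scaling estimates straightforward -- hypothetical numbers:)

```python
def estimate_render_time(base_seconds, base_w, base_h, w, h):
    """Scale a measured render time linearly with the pixel count."""
    return base_seconds * (w * h) / (base_w * base_h)

# If 800x600 takes 60 s, then 1600x1200 (4x the pixels) ~ 240 s.
print(estimate_render_time(60, 800, 600, 1600, 1200))  # 240.0
```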
clipka wrote:
> So far, I haven't seen any significant exceptions to the rule-of-thumb
> that render time is proportional to the number of pixels.
...not forgetting that there are scenes where the parse-time dwarfs the
render-time. ;-)
(And then there's things like radiosity pre-trace, photon mapping, etc.)
--
http://blog.orphi.me.uk/
http://www.zazzle.com/MathematicalOrchid*
Orchid XP v8 wrote:
> Does this amount of storage actually exist somewhere? (E.g., what kind
> of space does somebody like Google or Amazon have?)
I'd guess maybe a million drives with maybe 250 GB each, tops? I'd read
somewhere they had half a million computers, and they all use commodity
160 GB drives, so something like that. What does that work out to? 250
petabytes?
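(Both guesses are easy to check -- decimal units, and of course the figures themselves are pure speculation:)

```python
GB = 10**9
PB = 10**15

# A million drives at 250 GB apiece:
print(1_000_000 * 250 * GB / PB)  # 250.0 PB
# Half a million machines with one 160 GB drive each:
print(500_000 * 160 * GB / PB)    # 80.0 PB
```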
--
Darren New, San Diego CA, USA (PST)
I ordered stamps from Zazzle that read "Place Stamp Here".