>> Both NTFS and VFS in theory support up to 8 exabytes on a partition.
>
> Actually wikipedia claims that NTFS supports partitions and files of
> 16 exabytes. That's a bit better than ReiserFS which supports "only"
> 8 terabytes.
Does this amount of storage actually exist somewhere? (E.g., what kind
of space does somebody like Google or Amazon have?)
--
http://blog.orphi.me.uk/
http://www.zazzle.com/MathematicalOrchid*
Orchid XP v8 wrote:
>>
>> Actually wikipedia claims that NTFS supports partitions and files of
>> 16 exabytes. That's a bit better than ReiserFS which supports "only"
>> 8 terabytes.
>
> Does this amount of storage actually exist somewhere? (E.g., what kind
> of space does somebody like Google or Amazon have?)
>
Well, I have 2 terabytes at home, and AFAIK that's not uncommon these
days. Extrapolating from there, I'd think Google or Amazon really do
have at least exabytes.
XFS also seems to support 8 exabytes, so I won't have a problem when
scaling up the array some day :p.
-Aero
>> Does this amount of storage actually exist somewhere? (E.g., what kind
>> of space does somebody like Google or Amazon have?)
>
> Well, I have 2 terabytes at home and AFAIK that's not uncommon these
> days. Thinking from there, I think Google or Amazon really does have at
> least exabytes.
Yes, 1 or 2 TB HDs are on sale if you have the cash.
Note that before you get to exabytes you must first pass through
petabytes - we're talking about *two* scale units, not just one.
Given that a single drive can hold 1 TB, it's not infeasible that
somebody could amass 1,000 of those in a data-center somewhere. That
would give you 1 PB. But 1 EB? Is that really possible yet?
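To put those two scale units in concrete numbers, here's a quick back-of-the-envelope check (a Python sketch, assuming decimal SI prefixes — 1 PB = 1000 TB, 1 EB = 1000 PB — rather than binary ones):

```python
# Bytes per unit, decimal SI prefixes.
TB = 10**12
PB = 10**15
EB = 10**18

drive = 1 * TB  # one commodity 1 TB hard drive

drives_per_pb = PB // drive
drives_per_eb = EB // drive

print(drives_per_pb)  # 1000 drives for one petabyte
print(drives_per_eb)  # 1000000 drives for one exabyte
```

So 1 PB is a plausible data-center's worth of drives, but 1 EB would take a million of them.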
--
http://blog.orphi.me.uk/
http://www.zazzle.com/MathematicalOrchid*
Orchid XP v8 <voi### [at] devnull> wrote:
> Nicolas Alvarez wrote:
> > You need to store the previous row in POV-Ray anyway, to know if you need to
> > antialias the current pixel :)
> I was under the impression that with AA enabled, *all* pixels are
> supersampled. (But with adaptive supersampling, it shoots 4 rays before
> deciding whether to supersample further...)
Nope. What do you think the factor after the +a option means?
--
- Warp
Orchid XP v8 <voi### [at] devnull> wrote:
> Given that a single drive can hold 1 TB, it's not infeasible that
> somebody could amass 1,000 of those in a data-center somewhere. That
> would give you 1 PB. But 1 EB? Is that really possible yet?
But that would not be 1 petabyte as one partition. It would be 1 petabyte
of disk storage in total, among many smaller drives/partitions.
--
- Warp
>> I was under the impression that with AA enabled, *all* pixels are
>> supersampled. (But with adaptive supersampling, it shoots 4 rays before
>> deciding whether to supersample further...)
>
> Nope. What do you think the factor after the +a option means?
The threshold for deciding whether or not to subdivide further...
--
http://blog.orphi.me.uk/
http://www.zazzle.com/MathematicalOrchid*
Warp wrote:
> Orchid XP v8 <voi### [at] devnull> wrote:
>> Given that a single drive can hold 1 TB, it's not infeasible that
>> somebody could amass 1,000 of those in a data-center somewhere. That
>> would give you 1 PB. But 1 EB? Is that really possible yet?
>
> But that would not be 1 petabyte as one partition. It would be 1 petabyte
> of disk storage in total, among many smaller drives/partitions.
...unless you can find a RAID controller that supports 1,000 array
elements. o_O
--
http://blog.orphi.me.uk/
http://www.zazzle.com/MathematicalOrchid*
Orchid XP v8 <voi### [at] devnull> wrote:
> >> I was under the impression that with AA enabled, *all* pixels are
> >> supersampled. (But with adaptive supersampling, it shoots 4 rays before
> >> deciding whether to supersample further...)
> >
> > Nope. What do you think the factor after the +a option means?
> The threshold for deciding whether or not to subdivide further...
No, it's the threshold for starting antialiasing in the first place.
The comparison is against neighbour pixels.
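A minimal sketch of that decision rule (hypothetical helper names, not POV-Ray's actual source): a pixel's colour is compared against already-traced neighbours, and supersampling kicks in only when the difference exceeds the threshold given after +a.

```python
def color_diff(a, b):
    """Maximum per-channel difference between two RGB colours in 0.0-1.0."""
    return max(abs(x - y) for x, y in zip(a, b))

def needs_antialiasing(pixel, left_neighbor, above_neighbor, threshold=0.3):
    """Supersample only if the pixel differs from a neighbour
    by more than the threshold (the value after +a)."""
    return (color_diff(pixel, left_neighbor) > threshold or
            color_diff(pixel, above_neighbor) > threshold)

# A pixel nearly identical to its neighbours is left alone...
print(needs_antialiasing((0.5, 0.5, 0.5), (0.5, 0.5, 0.5), (0.52, 0.5, 0.5)))  # False
# ...while one on a sharp edge triggers supersampling.
print(needs_antialiasing((0.1, 0.1, 0.1), (0.9, 0.9, 0.9), (0.1, 0.1, 0.1)))  # True
```

This also shows why the previous row has to be buffered: the "above" neighbour must still be available when the current pixel is tested.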
--
- Warp
Warp wrote:
> If POV-Ray were to be compiled on a system without those limitations,
> it could probably achieve images of those sizes
Even then, 2e9 x 2e9 couldn't quite be reached (unless you had a >64-bit
system); something like 1e9 x 1e9 would be the limit.
(Again, speaking of POV-Ray 3.7. I don't think anyone would want to
render such a big image without multiprocessor support ;-))
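A rough sanity check of those limits — assuming the whole image is addressed via a signed 64-bit byte offset and on the order of 8 bytes per pixel (the per-pixel size here is an illustrative assumption, not a figure from POV-Ray):

```python
SIGNED_64_MAX = 2**63 - 1
BYTES_PER_PIXEL = 8  # assumption for illustration

def fits_in_64_bits(width, height):
    """Can the whole image be addressed with a signed 64-bit byte offset?"""
    return width * height * BYTES_PER_PIXEL <= SIGNED_64_MAX

print(fits_in_64_bits(10**9, 10**9))          # True  -- ~1e9 x 1e9 still fits
print(fits_in_64_bits(2 * 10**9, 2 * 10**9))  # False -- 2e9 x 2e9 overflows
```

Under those assumptions, 1e9 x 1e9 squeaks under the signed 64-bit limit while 2e9 x 2e9 does not.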
Warp wrote:
> Orchid XP v8 <voi### [at] devnull> wrote:
>>>> I was under the impression that with AA enabled, *all* pixels are
>>>> supersampled. (But with adaptive supersampling, it shoots 4 rays before
>>>> deciding whether to supersample further...)
>>> Nope. What do you think the factor after the +a option means?
>
>> The threshold for deciding whether or not to subdivide further...
>
> No, it's the threshold for starting antialiasing in the first place.
> The comparison is against neighbour pixels.
OK, I didn't know that...
So yes, POV-Ray will need to buffer some of the neighboring pixels. :-}
--
http://blog.orphi.me.uk/
http://www.zazzle.com/MathematicalOrchid*