POV-Ray : Newsgroups : povray.unofficial.patches : More on hair : Re: More on hair
  Re: More on hair  
From: Chris Huff
Date: 8 Dec 2000 09:00:34
Message: <chrishuff-531591.09012108122000@news.povray.org>
In article <slr### [at] tealhhjpat>, 
hjp### [at] SiKituwsracat (Peter J. Holzer) wrote:

> Hmm, that gives me an idea (together with "media-like"). You could
> indeed just convert all those hairs into a density field. But I think
> that such a density field would need really humongous amounts of memory
> - many gigabytes even for moderate resolutions. Using an octree
> representation might bring it back into the "feasible" range, though.

You are right, that would be even more memory-consuming than using 
splines...a 512*512*512 density field (which may or may not be enough 
for reasonable detail) would take 128MB, assuming only 8-bit grayscale 
precision. And you would likely want to store information on the 
"normal" of the hair, its color, etc...enough to more than quadruple 
the memory usage, though much of this information could be stored at a 
lower resolution, and you could use a data structure that uses less 
memory for large solid areas with no hair.
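The arithmetic is easy to check (a quick sketch; the "more than quadruple" 
figure below just illustrates one plausible per-voxel layout, it isn't from 
any actual patch):

```python
# Memory for a dense 512^3 voxel grid at 8-bit grayscale precision.
res = 512
density_bytes = res ** 3                  # one byte per voxel
density_mb = density_bytes / 2 ** 20
print(density_mb)                         # 128.0 MB

# Adding per-voxel extras would multiply this, e.g. 3 bytes RGB color
# plus 3 bytes for a packed normal on top of the density byte:
full_mb = res ** 3 * (1 + 3 + 3) / 2 ** 20
print(full_mb)                            # 896.0 MB
```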
However, I was not talking about using a media density field...not even 
close. I'm talking about a new effect that works similarly to media, but 
is specialized for doing hair and fur, with highlights, etc.


> The t value isn't necessary for all types of splines (e.g. Bezier
> splines don't need it), but that's just hair splitting :-)

Ok...I was assuming cubic splines.
BTW, another thing which I forgot: each hair needs a color. You could 
use a single color for several hairs, and maybe include something to 
randomly modify a base color, but it still drives memory use up higher. 
Especially if you have hair that changes in color along its length.
Worst case scenario for 1 million 5-point hairs (n = 5 points per hair, 
4 doubles per point for x, y, z and t, an RGB float color per point, 
plus 8+4+48 bytes of per-hair overhead):
4*8*n + 3*4*n + 8+4+48 = 280 bytes per hair, or 267MB total
Or without a t value:
3*8*n + 3*4*n + 8+4+48 = 240 bytes per hair, or 228.9MB total
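Those totals can be reproduced directly (same assumptions as above: 5 
points per hair, doubles for coordinates, floats for color):

```python
hairs = 1_000_000
n = 5                                    # control points per hair

# With a t value: 4 doubles (x, y, z, t) per point,
# 3 floats (RGB) per point, plus 8 + 4 + 48 bytes per-hair overhead.
with_t = (4 * 8 * n + 3 * 4 * n + 8 + 4 + 48) * hairs
print(with_t / 2 ** 20)                  # ~267.0 MB

# Without t: only 3 doubles (x, y, z) per point.
without_t = (3 * 8 * n + 3 * 4 * n + 8 + 4 + 48) * hairs
print(without_t / 2 ** 20)               # ~228.9 MB
```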


> A PIII/500 MHz with 256 MB of RAM. Which is probably better than
> average, but not in the "have to win the lottery to afford that" range.
> 
> I have had scenes which required about 700 MB of virtual memory. After
> the parsing, the slowdown isn't that bad.

That explains part of it...this machine only has 96MB, and my other 
machine 128MB. I consider a 50MB scene big.


> Only those not contributing to the image, which shouldn't be that many
> (about half of them).

I would call removing half of the processing a pretty good 
improvement...and I consider 50% of several thousand (or several 
million) hairs a pretty significant number.


> I would skip these for a hair object, too (a hair is too thin to have an
> interesting normal).

But the lighting and texture calculations *need* those; you can't just 
skip calculating them. That is why I don't think it should be an object, 
but a separate effect with a specialized rendering algorithm.
BTW, you are wrong about hairs not having useful normals...how else 
could you do shiny hair? Skip it, and all the hair will be flat-colored.
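One standard way to shade a shiny fiber (this is the Kajiya-Kay model, 
my example rather than anything a patch actually defines) sidesteps the 
"no single normal" problem by shading from the hair's tangent instead:

```python
def fiber_specular(tangent, half_vector, exponent=50.0):
    """Specular term for a thin fiber: instead of a surface normal,
    use the tangent T along the hair. The highlight is strongest
    when the half-vector H is perpendicular to T.
    Vectors are assumed to be normalized 3-tuples."""
    t_dot_h = sum(t * h for t, h in zip(tangent, half_vector))
    # sin^2 of the angle between T and H, raised to a shininess power
    return max(0.0, 1.0 - t_dot_h ** 2) ** (exponent / 2.0)

# H perpendicular to the fiber: full highlight.
print(fiber_specular((0, 0, 1), (1, 0, 0)))   # 1.0
# H parallel to the fiber: no highlight at all.
print(fiber_specular((0, 0, 1), (0, 0, 1)))   # 0.0
```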


> Note that I wrote "building on the same idea". The idea seems to be to
> model individual hairs.

That is what I am talking about, generating individual hairs "on the 
fly" as needed instead of storing hundreds of thousands or millions of 
them, many of which won't affect the results.
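Generating hairs on demand only works if the same hair comes out 
bit-identical every time it is requested. A hypothetical sketch of the 
idea (function and parameter names are mine; real hair roots would come 
from the mesh, not a unit square):

```python
import random

def hairs_for_patch(patch_id, count=4, points=5):
    """Regenerate the hairs of one surface patch deterministically.
    Seeding the RNG from the patch id means a ray that revisits this
    patch sees exactly the same hairs, so none need to be stored."""
    rng = random.Random(patch_id)            # per-patch seed
    hairs = []
    for _ in range(count):
        root = (rng.random(), rng.random(), 0.0)
        # grow a short point list upward with slight random drift
        pts = [root]
        for _ in range(points - 1):
            x, y, z = pts[-1]
            pts.append((x + rng.uniform(-0.01, 0.01),
                        y + rng.uniform(-0.01, 0.01),
                        z + 0.05))
        hairs.append(pts)
    return hairs

# Same patch id -> identical hairs, with nothing cached between calls.
assert hairs_for_patch(42) == hairs_for_patch(42)
```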


> If you find an efficient way to calculate the effect of many hairs, 
> it probably isn't the same idea any more.

I don't know what you mean...


> Anti-aliasing doesn't lose detail, it gains additional detail (or at
> least accuracy).

It finds a more accurate color for each pixel...and since hairs are so 
fine, they will virtually disappear. Because of their size, they 
contribute very little to the pixel, and the anti-aliasing algorithm 
would have to hit a hair before it knows to supersample that pixel.
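The scale of the problem shows up with rough numbers (the widths here 
are hypothetical, purely for illustration): a hair covering a small 
fraction of a pixel shifts the pixel's color by about that fraction, 
and a sparse supersampling grid can step over it entirely.

```python
# A hair ~0.05 pixel widths across at this viewing distance:
hair_width = 0.05
pixel_width = 1.0
coverage = hair_width / pixel_width
print(coverage)        # 0.05 -> the hair changes the pixel color by ~5%

# A regular n x n supersampling grid samples at spacing 1/n pixel;
# with n = 4 the samples are 0.25 pixels apart, so a 0.05-wide hair
# can fall between all 16 samples and adaptive AA never refines there.
n = 4
sample_spacing = pixel_width / n
print(sample_spacing > hair_width)   # True: the grid can miss the hair
```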

-- 
Christopher James Huff
Personal: chr### [at] maccom, http://homepage.mac.com/chrishuff/
TAG: chr### [at] tagpovrayorg, http://tag.povray.org/

<><
