Subject: Re: More on hair
From: Chris Huff
Date: 11 Dec 2000 15:32:12
Message: <chrishuff-526002.15330411122000@news.povray.org>
In article <slr### [at] tealhhjpat>, 
hjp### [at] SiKituwsracat (Peter J. Holzer) wrote:

> I didn't forget it. Many hairs will have the same color, so can share a
> common texture, all they need is a pointer to it (that's the lonely 4
> bytes in my formula). 

You could get it lower by using an indexed palette of colors instead, 
but then you are limiting yourself to solid-colored hairs. Many hairs, 
especially animal hairs, have different colors along their length.
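
Something like this, roughly (made-up names, not actual POV-Ray 
source):

    /* one shared entry, 12 bytes, stored once for many hairs */
    typedef struct { float r, g, b; } PaletteEntry;

    typedef struct {
        unsigned char color;  /* 1-byte index into a shared palette */
        /* ...geometry fields... */
    } SolidColorHair;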


> Ah, I see you misinterpreted my formula. That was actually only for a
> single hair, and $n$ was the number of control points per hair. For a
> whole fur, that would be:
> (4*8*c+8+4+48)*h

I didn't misinterpret the formula, I simply neglected to show the 
multiplication by 1000000, which I assumed would be taken for granted. I 
understood that you meant the number of control points when you said 
"n". However, there are a couple things that I forgot or don't 
understand...

My revised calculations:
D = sizeof(DBL)
S = sizeof(SNGL)
C = number of control points (5 in this case)
H = number of hairs (1000000 in this case)

Each hair is: Point*C + Color*C + Radius*C. A Point consists of a 3D 
vector and a DBL for the t value (though the t value isn't necessary if 
you restrict yourself to certain spline types).
(4*D*C + 3*S*C + D*C)*H/(1024^2) = 247.96 MB
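
Or, spelled out in C (this just plugs the numbers in, nothing 
POV-Ray-specific):

    #include <stdio.h>

    int main(void)
    {
        double D = 8, S = 4;       /* sizeof(DBL), sizeof(SNGL) */
        double C = 5, H = 1000000; /* control points, hairs     */
        double bytes = (4*D*C + 3*S*C + D*C) * H;
        printf("%.2f MB\n", bytes / (1024 * 1024)); /* 247.96 MB */
        return 0;
    }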

Still out of reach for most people's systems, especially if they have 
anything else in the scene. It could be lowered further by things like 
using a color_map shared among several hairs instead of colors for each 
point, using single precision where possible, etc., but storing every 
hair will still take a lot of memory. And this ignores the other 
potential problems of using hair objects: antialiasing, missing fine 
detail, speed, etc.
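
For example, a trimmed-down layout (purely hypothetical, and ignoring 
struct padding) might be:

    /* single precision, no t values (uniform spline), shared color_map */
    typedef struct {
        float point[3];      /* 12 bytes instead of 4 DBLs */
        float radius;        /*  4 bytes instead of a DBL  */
    } HairPoint;

    typedef struct {
        HairPoint pt[5];     /* 80 bytes for C = 5                */
        unsigned short cmap; /* index into a shared color_map set */
    } Hair;

    /* ~82 bytes/hair instead of 260: about 78 MB for a million hairs */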


> So, assuming a somewhat linear correlation between RAM and the size of
> the largest possible scenes, I'd guess that a 200 MB scene should be
> about as feasible on your machines as a 700 MB scene is on mine.

Maybe under OS X...the Classic Mac OS memory management is pretty bad 
(I'm being too nice, it stinks). OS X is based on BSD UNIX, so it 
handles this type of thing much better.


> Oh, if an optimization cuts computing time or memory consumption in
> half, it's definitely worth doing. It just doesn't make the difference
> between feasible and infeasible. If I can tolerate a rendering time of
> 10 hours, I can also tolerate one of 20 hours. If I can't tolerate one
> of 1 week, 3.5 days will still be too much. (There are exceptions, of
> course - e.g. if you just miss a deadline).

I might tolerate a render time of 10 hours, but I wouldn't tolerate a 
render time of 20 except under unusual circumstances. Your logic is 
badly flawed...following it, you could say anyone would tolerate an 
infinitely long render.


> Ok, I admit that I am not that familiar with the internal workings of
> povray. But I wasn't actually expecting that hairs would use much of the
> normal object code. An individual hair would be a very special object
> created only internally by povray and be heavily optimized.

Ok, the way you said things implied that it would be an ordinary object.


> I was just disagreeing with somebody's statement that only a single 
> hair needs to be in memory at a time. While strictly speaking, this 
> is also true, I think it would be prohibitively slow, as you would have 
> to recalculate a lot of hairs for each ray.

So you are saying it is "clearly infeasible"? :-)
Actually, as I understand it, that is how the program that started this 
thread does things. Since it is a scan line algorithm, it doesn't have 
to do it per pixel, though...it can compute a hair and add it to the 
scene using the z-buffer to manage what is in front of what.
For raytracing, you would have to recalculate a lot of hairs, and it 
would be slow, but maybe not as slow as you think. You could do tricks 
like not calculating additional hairs when they won't make a noticeable 
contribution. I do think there is a better way, though.
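
Roughly like this, I'd imagine (a toy sketch, every name made up; a 
real version would walk each hair's spline and splat each pixel it 
covers):

    #define WIDTH  320
    #define HEIGHT 240

    static float depth[WIDTH][HEIGHT]; /* init to a large value first */
    static unsigned char image[WIDTH][HEIGHT];

    /* draw one fragment of a hair; nearest z at each pixel wins, so
       each hair can be generated, splatted, and thrown away */
    static void splat(int x, int y, float z, unsigned char shade)
    {
        if (x < 0 || x >= WIDTH || y < 0 || y >= HEIGHT) return;
        if (z < depth[x][y]) {
            depth[x][y] = z;
            image[x][y] = shade;
        }
    }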


> |calculating the effect of many hairs that are too small to be seen
> |individually because they would only cover a small fraction of a pixel.
> 
> I wasn't sure what you meant with that, but it sounded like you expected
> to be able to calculate the effect of many hairs without having to test
> a ray against individual hairs. Maybe by computing some kind of hair 
> texture. This would not be the same idea any more (but it might be a
> good idea, nonetheless).

No...again, I am thinking of a ***MEDIA-LIKE*** algorithm. Not a 
texture, not a real object, a separate, fully 3D effect...some way to 
simulate the effect of a mass of hair without calculating every 
individual hair. Actually, a combination of this and an object method 
would likely give the best results...you wouldn't have to compute or 
store nearly as many individual hairs, because the other algorithm would 
"fill in the blanks".


> There are also a lot of hairs on each pixel, so they should contribute a
> lot and the chance of one being hit by a ray should be high. Single
> hairs would most probably vanish, however, so you can only raytrace
> well-groomed critters :-)

This is the loss of detail I was talking about.

-- 
Christopher James Huff
Personal: chr### [at] maccom, http://homepage.mac.com/chrishuff/
TAG: chr### [at] tagpovrayorg, http://tag.povray.org/

<><

