Newsgroups: povray.unofficial.patches
Subject: Re: More on hair
From: Peter J. Holzer
Date: 8 Dec 2000 16:02:14
Message: <slrn932he4.gcq.hjp-usenet@teal.h.hjp.at>
On Fri, 08 Dec 2000 09:01:21 -0500, Chris Huff wrote:
>In article <slr### [at] tealhhjpat>, 
>hjp### [at] SiKituwsracat (Peter J. Holzer) wrote:
>> The t value isn't necessary for all types of splines (e.g. Bezier
>> splines don't need it), but that's just hair splitting :-)
>
>Ok...I was assuming cubic splines.
>BTW, another thing I forgot: each hair needs a color. You could use a 
>single color for several hairs, and maybe include something to randomly 
>modify a base color, but it still drives memory use higher, especially 
>if you have hair that changes in color along its length.

I didn't forget it. Many hairs will have the same color, so they can
share a common texture; all each hair needs is a pointer to it (that's
the lone 4 bytes in my formula).

>Worst case scenario for 1 million 5-point hairs:
>4*8*n+3*4*n+8+4+48 = 267MB

Ah, I see you misinterpreted my formula. That was actually only for a
single hair, and n was the number of control points per hair. For a
whole fur, that would be:

(4*8*c+8+4+48)*h

(where c is the number of control points per hair and h is the number of
hairs). Additionally, there is some overhead for a data structure to
find hair/ray intersections rapidly. I ignored this because I don't know
what it should look like and therefore cannot give a good estimate for
it. (Actually, now that I look at it again, the 48-byte bounding box per
hair was obviously intended as a lower estimate for the "search
structure".)

>> A PIII/500 MHz with 256 MB of RAM. Which is probably better than
>> average, but not in the "have to win the lottery to afford that" range.
>> 
>> I have had scenes which required about 700 MB of virtual memory. After
>> the parsing, the slowdown isn't that bad.
>
>That explains part of it...this machine only has 96MB, and my other 
>machine 128MB. I consider a 50MB scene big.

So, assuming a roughly linear relationship between RAM and the size of
the largest feasible scene, I'd guess that a 200 MB scene should be
about as feasible on your machines as a 700 MB scene is on mine
(700 MB * 96/256 is about 260 MB, so 200 MB even leaves some headroom).

>> Only those not contributing to the image, which shouldn't be that many
>> (about half of them).
>
>I would call the removal of half of the processing a pretty good 
>improvement...and I consider 50% of several thousands or millions a 
>pretty significant number.

Oh, if an optimization cuts computing time or memory consumption in
half, it's definitely worth doing. It just doesn't make the difference
between feasible and infeasible. If I can tolerate a rendering time of
10 hours, I can also tolerate one of 20 hours. If I can't tolerate one
of 1 week, 3.5 days will still be too much. (There are exceptions, of
course - e.g. if you just miss a deadline).


>> I would skip these for a hair object, too (a hair is too thin to have an
>> interesting normal).
>
>But the lighting and texture calculations *need* those, you can't just 
>skip calculating them.

Ok, I admit that I am not that familiar with the internal workings of
povray. But I wasn't actually expecting that hairs would use much of the
normal object code. An individual hair would be a very special object,
created only internally by povray and heavily optimized.

>That is why I don't think it should be an object, but a separate effect
>with a specialized rendering algorithm. BTW, you are wrong about hairs
>not having useful normals...how else could you do shiny hair? Skip it,
>and all the hair will be flat colored.

I was actually only thinking about the normal modifier. Of course the
hair itself has a surface and therefore a normal.

>> Note that I wrote "building on the same idea". The idea seems to be to
>> model individual hairs.
>
>That is what I am talking about, generating individual hairs "on the 
>fly" as needed instead of storing hundreds of thousands or millions of 
>them, many of which won't affect the results.

I guess we aren't in as much disagreement as it seems.

Generating hairs on the fly is fine (it would be necessary even if they
were all generated during the parsing stage - clearly a scene file with
a million individual hair statements would not be useful unless you had
a second program to generate it), as long as a reasonable number of
them (a few thousand?) is cached. I was just disagreeing with somebody's
statement that only a single hair needs to be in memory at a time. While
strictly speaking this is also true, I think it would be prohibitively
slow, as you would have to recalculate a lot of hairs for each ray.
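
To illustrate what I mean by caching, here is a rough C sketch. The
direct-mapped scheme, the names, and the placeholder geometry are all
just illustration, not a proposal for the actual data structures - the
only requirements are that hair i can be regenerated deterministically
from i, and that a recently used hair is usually still in the cache:

#include <stdio.h>
#include <stdlib.h>

#define CACHE_SLOTS 4096   /* "a few thousand" cached hairs */
#define POINTS      5

typedef struct {
    long   index;              /* which hair this slot holds; -1 = empty */
    double points[POINTS][4];  /* control points, as in the memory estimate */
} Hair;

static Hair cache[CACHE_SLOTS];

/* Deterministically (re)generate hair i, e.g. by seeding a PRNG with i,
 * so a cached copy is identical to a freshly generated one. The
 * jittered geometry below is only a placeholder. */
static void generate_hair(long i, Hair *out)
{
    srand((unsigned)i);
    for (int p = 0; p < POINTS; p++) {
        out->points[p][0] = rand() / (double)RAND_MAX;  /* x */
        out->points[p][1] = rand() / (double)RAND_MAX;  /* y */
        out->points[p][2] = p / (double)(POINTS - 1);   /* z along the hair */
        out->points[p][3] = 0.0;                        /* e.g. thickness */
    }
    out->index = i;
}

/* Direct-mapped cache: regenerate a hair only on a miss, so rays that
 * walk through the same patch of fur reuse the cached copies. */
static Hair *get_hair(long i)
{
    Hair *slot = &cache[i % CACHE_SLOTS];
    if (slot->index != i)
        generate_hair(i, slot);
    return slot;
}

int main(void)
{
    for (long s = 0; s < CACHE_SLOTS; s++)
        cache[s].index = -1;
    Hair *h = get_hair(123456);
    printf("hair %ld, first point: (%f, %f, %f)\n",
           h->index, h->points[0][0], h->points[0][1], h->points[0][2]);
    return 0;
}

Since neighboring rays tend to walk through the same patch of fur, most
lookups should be hits, and almost nothing is recalculated.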

I also wanted to point out that even calculating all the hairs in
advance isn't so far beyond the range of current PCs that it must be
called "clearly infeasible". I did not want to suggest that there could
not be a more efficient way.

>> If you find an efficient way to calculate the effect of many hairs, 
>> it probably isn't the same idea any more.
>
>I don't know what you mean...

You wrote:

|calculating the effect of many hairs that are too small to be seen
|individually because they would only cover a small fraction of a pixel.

I wasn't sure what you meant by that, but it sounded like you expected
to be able to calculate the effect of many hairs without having to test
a ray against individual hairs - maybe by computing some kind of hair
texture. This would not be the same idea any more (but it might be a
good idea nonetheless).

>> Anti-aliasing doesn't lose detail, it gains additional detail (or at
>> least accuracy).
>
>It finds a more accurate color for each pixel...and since hairs are so 
>fine, they will virtually disappear. Because of their size, they 
>contribute very little to the pixel, and the anti-aliasing algorithm 
>would have to hit a hair before it knows to supersample that pixel.

There are also a lot of hairs covering each pixel, so together they
should contribute a lot, and the chance of at least one of them being
hit by a ray should be high. Single hairs would most probably vanish,
however, so you can only raytrace well-groomed critters :-)
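
A back-of-envelope C sketch of that chance, under the admittedly
unrealistic assumption that each hair independently covers a fixed
fraction of the pixel (both numbers below are made up):

#include <math.h>
#include <stdio.h>

int main(void)
{
    /* If h hairs each cover fraction f of a pixel, and they overlap
     * independently, a single sample ray misses all of them with
     * probability (1-f)^h. */
    double f = 0.02;   /* fraction of the pixel one hair covers */
    for (int h = 1; h <= 256; h *= 4)
        printf("%3d hairs: P(hit) = %.2f\n", h, 1.0 - pow(1.0 - f, h));
    return 0;
}

With those numbers a single hair is hit 2% of the time, but 64 hairs
are already hit 73% of the time - which is why dense fur should show up
even without supersampling, while a stray single hair mostly won't.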

	hp

-- 
   _  | Peter J. Holzer    | It was not the subject of the vote to
| |   | hjp### [at] wsracat      | redefine the numbers.
__/   | http://www.hjp.at/ |    -- Johannes Schwenke <jby### [at] ginkode>
