POV-Ray : Newsgroups : povray.unofficial.patches : More on hair
More on hair (Message 16 to 25 of 25)
From: Peter Popov
Subject: Re: More on hair
Date: 28 Nov 2000 16:43:35
Message: <56l62t8avc6bmpn3dje6if3c6u3bhvem82@4ax.com>
On 27 Nov 2000 04:41:35 -0500, Warp <war### [at] tagpovrayorg> wrote:

>  For home use, the PC has the best price/efficiency ratio.
>  However, in many places they don't use PCs but other computers (Sparc
>servers, Alpha servers, Crays, SGIs...). For a reason.

There must be a reason why only the secretaries at the Uni I go to use
PCs (and most of them are still on Netware). Try to convince an
industrial designer that AutoCAD is better than CATIA :)


Peter Popov ICQ : 15002700
Personal e-mail : pet### [at] vipbg
TAG      e-mail : pet### [at] tagpovrayorg



From: Peter J  Holzer
Subject: Re: More on hair
Date: 3 Dec 2000 14:02:04
Message: <slrn92l1n5.7l8.hjp-usenet@teal.h.hjp.at>
On Fri, 24 Nov 2000 20:37:44 -0500, Chris Huff wrote:
>> As for the memory hit, only one hair is in memory at a time, so      
>> there is no large memory hit.                                        

I don't think you can have only one hair in memory at a time with
povray. At the very least you must have all hairs which might intersect
the current ray.

>If done one at a time, you trade time for memory, since you have to
>recalculate the hairs for every pixel(storing enough hairs is obviously
>not feasible, even with sharing mesh data...).

I don't think that this is "clearly not feasible". A human has about
100,000 hairs on his head. I don't know about furry animals, but 1
million hairs seems about right to me. If we represent a hair as a
spline with a thickness, a pointer to the texture (many hairs will share
the same texture) and a bounding box, that's 24*n+8+4+48 bytes per hair,
if the spline has n control points. Assuming 5 CP's to be enough for
curvy hair, that's 180 bytes/hair or 180 MB for 1 million hairs.

A lot - but not unfeasible if you want to render a single animal (or
even a small number of them).
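
To make the estimate concrete, here is a rough sketch of such a hair
record in C (the names are invented for illustration; the sizes match
the figures above on a 32-bit machine, ignoring struct padding):

    #define CP 5  /* control points per hair */

    typedef double DBL;
    typedef struct { DBL x, y, z; } VECTOR;   /* 3*8 = 24 bytes */

    typedef struct
    {
        VECTOR points[CP];  /* spline control points: 24*n bytes     */
        DBL    thickness;   /* 8 bytes                               */
        void  *texture;     /* 4 bytes; many hairs share one texture */
        VECTOR bbox[2];     /* two box corners: 48 bytes             */
    } HAIR;                 /* 24*5 + 8 + 4 + 48 = 180 bytes         */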


>It might be possible to partly compensate with some kind of bounding
>scheme, so only hairs that might be visible are tested, but it will
>still be a significant cost.

Yes, the bounding scheme is very important. Clearly you don't want to
test against 1 million hairs for every ray.
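
For illustration, one possible scheme in C (nothing povray actually
does - all names here are invented): chop the fur's bounding box into a
uniform grid, register each hair in every cell its bounding box
overlaps, and let a ray test only the hairs in the cells it actually
passes through.

    #define GRID 64   /* 64*64*64 cells over the fur's bounding box */

    typedef struct { int *hairs; int count; } CELL;
    static CELL grid[GRID][GRID][GRID];

    /* Map one coordinate to a cell index along one axis. */
    static int cell_of(double v, double min, double max)
    {
        int i = (int)((v - min) * GRID / (max - min));
        return i < 0 ? 0 : i >= GRID ? GRID - 1 : i;
    }

    /* A ray walks the cells it crosses (e.g. with a 3-D DDA) and
       tests the few hundred hairs registered there instead of all
       one million. */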

>A media-like rendering algorithm still seems to be the best option,

Maybe. But if a media-like rendering algorithm builds on the same idea,
I fail to see how it can possibly be much faster or less memory-hungry.

	hp

-- 
   _  | Peter J. Holzer    | It was not the subject of the vote
| |   | hjp### [at] wsracat      | to redefine the numbers.
__/   | http://www.hjp.at/ |	-- Johannes Schwenke <jby### [at] ginkode>



From: Chris Huff
Subject: Re: More on hair
Date: 5 Dec 2000 05:41:08
Message: <chrishuff-B90875.05415205122000@news.povray.org>
In article <slr### [at] tealhhjpat>, 
hjp### [at] SiKituwsracat (Peter J. Holzer) wrote:

> I don't think you can have only one hair in memory at a time with
> povray. At the very least you must have all hairs which might intersect
> the current ray.

Of course you can...just generate each hair and then check for 
"intersection" instead of reading it from memory and testing it. Or 
generate each hair and add its contribution into the total.


> that's 24*n+8+4+48 bytes per hair, if the spline has n control 
> points. Assuming 5 CP's to be enough for curvy hair, that's 180 
> bytes/hair or 180 MB for 1 million hairs.

That would be 32*n+...each control point has a vector (3 doubles) as well 
as a t value (another double). That makes 220 bytes per hair, or 209.8MB 
for 1,000,000 hairs.


> A lot - but not unfeasible if you want to render a single animal (or
> even a small number of them).

Um, >170MB per animal for several animals as well as (possibly more 
hundreds of megabytes of) scenery? What computer are you using?!? Not 
many people are going to render empty shells of hair...though that might 
be interesting, especially with an explode include file and a poodle 
model. :-)


> Maybe. But if a media-like rendering algorithm builds on the same idea,
> I fail to see how it can possibly be much faster or less memory-hungry.

Faster...because it can skip calculating many hairs, and avoid certain 
calculations that must be done for objects such as normals and perturbed 
normals, instead calculating the effect of many hairs that are too small 
to be seen individually because they would only cover a small fraction 
of a pixel. Less memory-hungry, since it can keep only the needed hairs 
in memory, maybe caching them for later access if there is enough memory.
Also, an object-based hair will require anti-aliasing and could lose a 
lot of detail in that process; hair with a media-like algorithm wouldn't 
necessarily have this problem. Also, it has a chance of running on my 
machines without going to virtual memory for most of the scene.

-- 
Christopher James Huff
Personal: chr### [at] maccom, http://homepage.mac.com/chrishuff/
TAG: chr### [at] tagpovrayorg, http://tag.povray.org/

<><



From: Peter J  Holzer
Subject: Re: More on hair
Date: 5 Dec 2000 16:02:45
Message: <slrn92qhtv.n8s.hjp-usenet@teal.h.hjp.at>
On Tue, 05 Dec 2000 05:41:52 -0500, Chris Huff wrote:
>In article <slr### [at] tealhhjpat>, 
>hjp### [at] SiKituwsracat (Peter J. Holzer) wrote:
>
>> I don't think you can have only one hair in memory at a time with
>> povray. At the very least you must have all hairs which might
>> intersect the current ray.
>
>Of course you can...just generate each hair and then check for
>"intersection" instead of reading it from memory and testing it. Or
>generate each hair and add its contribution into the total.

Hmm, that gives me an idea (together with "media-like"). You could
indeed just convert all those hairs into a density field. But I think
that such a density field would need really humongous amounts of memory
- many gigabytes even for moderate resolutions. Using an octree
representation might bring it back into the "feasible" range, though.
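
A sketch of what a node of such an octree might look like (hypothetical
layout, just to show where the savings come from):

    typedef struct OCTREE_NODE
    {
        struct OCTREE_NODE *children[8];  /* all NULL for a leaf     */
        float density;                    /* average density in cube */
    } OCTREE_NODE;

    /* Empty space and uniformly dense clumps stay single leaves, so
       memory is spent only on subdividing the detailed boundary
       regions, not on the gigabytes a full 3D grid would need. */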

>> that's 24*n+8+4+48 bytes per hair, if the spline has n control
>> points. Assuming 5 CP's to be enough for curvy hair, that's 180
>> bytes/hair or 180 MB for 1 million hairs.
>
>That would be 32*n+...each control point has a vector (3 doubles) as
>well as a t value (another double). That makes 220 bytes per hair, or
>209.8MB for 1,000,000 hairs.

The t value isn't necessary for all types of splines (e.g. Bezier
splines don't need it), but that's just hair splitting :-)

>> A lot - but not unfeasible if you want to render a single animal (or
>> even a small number of them).
>
>Um, >170MB per animal for several animals as well as (possibly more
>hundreds of megabytes of) scenery? What computer are you using?!?

A PIII/500 MHz with 256 MB of RAM. Which is probably better than
average, but not in the "have to win the lottery to afford that" range.

I have had scenes which required about 700 MB of virtual memory. After
the parsing, the slowdown isn't that bad.

So 2 or 3 animals would probably be feasible on that machine.

>Not many people are going to render empty shells of hair...though that
>might be interesting, especially with an explode include file and a
>poodle model. :-)
>
>
>> Maybe. But if a media-like rendering algorithm builds on the same
>> idea, I fail to see how it can possibly be much faster or less
>> memory-hungry.
>
>Faster...because it can skip calculating many hairs,

Only those not contributing to the image, which shouldn't be that many
(about half of them).

>and avoid certain calculations that must be done for objects such as
>normals and perturbed normals,

I would skip these for a hair object, too (a hair is too thin to have an
interesting normal).

>calculating the effect of many hairs that are too small to be seen
>individually because they would only cover a small fraction of a pixel.

Note that I wrote "building on the same idea". The idea seems to be to
model individual hairs. If you find an efficient way to calculate the
effect of many hairs, it probably isn't the same idea any more.


>Less memory-hungry, since it can keep only the needed hairs in memory,
>maybe caching them for later access if there is enough memory.

True.

>Also, an object-based hair will require anti-aliasing and could lose a
>lot of detail in that process;

Anti-aliasing doesn't lose detail, it gains additional detail (or at
least accuracy).

	hp

-- 
   _  | Peter J. Holzer    | It was not the subject of the vote
| |   | hjp### [at] wsracat      | to redefine the numbers.
__/   | http://www.hjp.at/ |	-- Johannes Schwenke <jby### [at] ginkode>



From: Paul Blaszczyk
Subject: Re: More on hair
Date: 8 Dec 2000 01:18:16
Message: <3A307DB1.2ACDE33D@alpharay.de>
Hi,

"Peter J. Holzer" wrote:

> On Fri, 24 Nov 2000 20:37:44 -0500, Chris Huff wrote:
> I don't think that this is "clearly not feasible". A human has about
> 100,000 hairs on his head. I don't know about furry animals, but 1
> million hairs seems about right to me. If we represent a hair as a
> spline with a thickness, a pointer to the texture (many hairs will share
> the same texture) and a bounding box, that's 24*n+8+4+48 bytes per hair,
> if the spline has n control points. Assuming 5 CP's to be enough for
> curvy hair, that's 180 bytes/hair or 180 MB for 1 million hairs.

180 MB uncompressed (!). Numeric values alone can often be compressed by a
factor of 20 or higher...what about this idea?

And I've found a homepage with a hair-rendering technique using texels. Is
it possible to render texels in POV?
Here's the URL:
http://dspace.dial.pipex.com/adrian.skilling/texels/texels.html  (with C++
code)

Bye
 Paul



From: Chris Huff
Subject: Re: More on hair
Date: 8 Dec 2000 09:00:34
Message: <chrishuff-531591.09012108122000@news.povray.org>
In article <slr### [at] tealhhjpat>, 
hjp### [at] SiKituwsracat (Peter J. Holzer) wrote:

> Hmm, that gives me an idea (together with "media-like"). You could
> indeed just convert all those hairs into a density field. But I think
> that such a density field would need really humongous amounts of memory
> - many gigabytes even for moderate resolutions. Using an octree
> representation might bring it back into the "feasible" range, though.

You are right, that would be even more memory-consuming than using 
splines...a 512*512*512 density field (which may or may not be enough 
for reasonable detail) would take 128MB, assuming only 8-bit grayscale 
precision. And you will likely want to store information on the "normal" 
of the hair, color, etc...enough to more than quadruple the memory 
usage. Much of this information could be stored at a lower resolution, 
though, and you could use a data structure that uses less memory for 
large solid areas with no hair.
However, I was not talking about using a media density field...not even 
close. I'm talking about a new effect that works similarly to the way 
media does, but specialized for doing hair and fur, with highlights, etc.


> The t value isn't necessary for all types of splines (e.g. Bezier
> splines don't need it), but that's just hair splitting :-)

Ok...I was assuming cubic splines.
BTW, another thing which I forgot: each hair needs a color. You could 
use a single color for several hairs, and maybe include something to 
randomly modify a base color, but it still drives memory use higher. 
Especially if you have hair that changes in color along its length.
Worst case scenario for 1 million 5-point hairs:
4*8*n+3*4*n+8+4+48 = 267MB
Or without a t value:
3*8*n+3*4*n+8+4+48 = 228.9MB


> A PIII/500 MHz with 256 MB of RAM. Which is probably better than
> average, but not in the "have to win the lottery to afford that" range.
> 
> I have had scenes which required about 700 MB of virtual memory. After
> the parsing, the slowdown isn't that bad.

That explains part of it...this machine only has 96MB, and my other 
machine 128MB. I consider a 50MB scene big.


> Only those not contributing to the image, which shouldn't be that many
> (about half of them).

I would call the removal of half of the processing a pretty good 
improvement...and I consider 50% of several thousands or millions a 
pretty significant number.


> I would skip these for a hair object, too (a hair is too thin to have an
> interesting normal).

But the lighting and texture calculations *need* those; you can't just 
skip calculating them. That is why I don't think it should be an object, 
but a separate effect with a specialized rendering algorithm.
BTW, you are wrong about hairs not having useful normals...how else 
could you do shiny hair? Skip it, and all the hair will be flat colored.


> Note that I wrote "building on the same idea". The idea seems to be to
> model individual hairs.

That is what I am talking about, generating individual hairs "on the 
fly" as needed instead of storing hundreds of thousands or millions of 
them, many of which won't affect the results.


> If you find an efficient way to calculate the effect of many hairs, 
> it probably isn't the same idea any more.

I don't know what you mean...


> Anti-aliasing doesn't lose detail, it gains additional detail (or at
> least accuracy).

It finds a more accurate color for each pixel...and since hairs are so 
fine, they will virtually disappear. Because of their size, they 
contribute very little to the pixel, and the anti-aliasing algorithm 
would have to hit a hair before it knows to supersample that pixel.

-- 
Christopher James Huff
Personal: chr### [at] maccom, http://homepage.mac.com/chrishuff/
TAG: chr### [at] tagpovrayorg, http://tag.povray.org/

<><



From: Chris Huff
Subject: Re: More on hair
Date: 8 Dec 2000 09:08:27
Message: <chrishuff-AC70F7.09091508122000@news.povray.org>
In article <3A307DB1.2ACDE33D@alpharay.de>, Paul Blaszczyk 
<3d### [at] alpharayde> wrote:

> 180 MB uncompressed (!). Numeric values alone can often be compressed 
> by a factor of 20 or higher...what about this idea?

Think of how slow raytracing is...now think of how slow it could be if 
it has to constantly compress and decompress large amounts of data at 
high compression ratios. Not impossible, but not easy either, and the 
compression ratio would have to be a lot lower.


> And I've found a homepage with a hair-rendering technique using 
> texels. Is it possible to render texels in POV? Here's the URL: 
> http://dspace.dial.pipex.com/adrian.skilling/texels/texels.html  
> (with C++ code)

POV supports 3D density files that can be used as patterns, and it could 
probably do something like this. That rendering algorithm seems to be 
the same as or very similar to the official POV-Ray media.
See my other message about resolution and memory consumption; also 
notice how coarse and sometimes pixelated those "hairs" are, and the 
lack of shininess.

-- 
Christopher James Huff
Personal: chr### [at] maccom, http://homepage.mac.com/chrishuff/
TAG: chr### [at] tagpovrayorg, http://tag.povray.org/

<><



From: Peter J  Holzer
Subject: Re: More on hair
Date: 8 Dec 2000 16:02:14
Message: <slrn932he4.gcq.hjp-usenet@teal.h.hjp.at>
On Fri, 08 Dec 2000 09:01:21 -0500, Chris Huff wrote:
>In article <slr### [at] tealhhjpat>, 
>hjp### [at] SiKituwsracat (Peter J. Holzer) wrote:
>> The t value isn't necessary for all types of splines (e.g. Bezier
>> splines don't need it), but that's just hair splitting :-)
>
>Ok...I was assuming cubic splines.
>BTW, another thing which I forgot: each hair needs a color. You could 
>use a single color for several hairs, and maybe include something to 
>randomly modify a base color, but it still drives memory use higher. 
>Especially if you have hair that changes in color along its length.

I didn't forget it. Many hairs will have the same color, so they can
share a common texture; all they need is a pointer to it (that's the
lonely 4 bytes in my formula).

>Worst case scenario for 1 million 5-point hairs:
>4*8*n+3*4*n+8+4+48 = 267MB

Ah, I see you misinterpreted my formula. That was actually only for a
single hair, and n was the number of control points per hair. For a
whole fur, that would be:

(4*8*c+8+4+48)*h

(where c is the number of control points per hair and h is the number of
hairs). Additionally, there is some overhead for a data structure to
find hair/ray intersections rapidly. I ignored this because I don't know
what it should look like and therefore cannot give a good estimate for
it. (Actually, now that I look at it again, the 48 bytes of bounding box
per hair were obviously intended as a lower estimate for the "search
structure".)

>> A PIII/500 MHz with 256 MB of RAM. Which is probably better than
>> average, but not in the "have to win the lottery to afford that" range.
>> 
>> I have had scenes which required about 700 MB of virtual memory. After
>> the parsing, the slowdown isn't that bad.
>
>That explains part of it...this machine only has 96MB, and my other 
>machine 128MB. I consider a 50MB scene big.

So, assuming a somewhat linear correlation between RAM and the size of
the largest possible scenes, I'd guess that a 200 MB scene should be
about as feasible on your machines as a 700 MB scene is on mine.

>> Only those not contributing to the image, which shouldn't be that many
>> (about half of them).
>
>I would call the removal of half of the processing a pretty good 
>improvement...and I consider 50% of several thousands or millions a 
>pretty significant number.

Oh, if an optimization cuts computing time or memory consumption in
half, it's definitely worth doing. It just doesn't make the difference
between feasible and infeasible. If I can tolerate a rendering time of
10 hours, I can also tolerate one of 20 hours. If I can't tolerate one
of 1 week, 3.5 days will still be too much. (There are exceptions, of
course - e.g. if you just miss a deadline).


>> I would skip these for a hair object, too (a hair is too thin to have an
>> interesting normal).
>
>But the lighting and texture calculations *need* those; you can't just 
>skip calculating them.

Ok, I admit that I am not that familiar with the internal workings of
povray. But I wasn't actually expecting that hairs would use much of the
normal object code. An individual hair would be a very special object
created only internally by povray and be heavily optimized.

>That is why I don't think it should be an object, but a separate effect
>with a specialized rendering algorithm. BTW, you are wrong about hairs
>not having useful normals...how else could you do shiny hair? Skip it,
>and all the hair will be flat colored.

I was actually only thinking about the normal modifier. Of course the
hair itself has a surface and therefore a normal.

>> Note that I wrote "building on the same idea". The idea seems to be to
>> model individual hairs.
>
>That is what I am talking about, generating individual hairs "on the 
>fly" as needed instead of storing hundreds of thousands or millions of 
>them, many of which won't affect the results.

I guess we aren't that much in disagreement as it seems.

Generating hairs on the fly is fine (it would be necessary even if they
are all generated during the parsing stage - clearly a scene file with
a million individual hair statements would not be useful unless you
have a second program to generate it), as long as a reasonable number of
them (a few thousand?) is cached. I was just disagreeing with somebody's
statement that only a single hair needs to be in memory at a time. While
strictly speaking, this is also true, I think it would be prohibitively
slow, as you would have to recalculate a lot of hairs for each ray.
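
To illustrate why the recalculation can at least be made cheap (a
sketch only - the hash and the growth rule are invented): if each hair
is derived deterministically from its index, any hair can be rebuilt
bit-for-bit on demand, and only a cache of the few thousand most
recently used ones has to stay in memory.

    #include <stdint.h>

    #define CP 5  /* control points per hair */

    typedef struct { double x, y, z; } VECTOR;
    typedef struct { VECTOR pt[CP]; double thickness; } HAIR;

    /* Integer hash: hair i always yields the same pseudo-random
       stream, so hairs never need to be stored, only regenerated. */
    static uint32_t hash(uint32_t i)
    {
        i ^= i >> 16;  i *= 0x45d9f3bu;
        i ^= i >> 16;  i *= 0x45d9f3bu;
        return i ^ (i >> 16);
    }

    static void make_hair(uint32_t i, HAIR *h)
    {
        uint32_t s = hash(i);
        int c;
        for (c = 0; c < CP; c++)
        {
            s = hash(s);
            /* toy growth rule: wander upward from a random root */
            h->pt[c].x = (s & 0xffffu) / 65536.0 + 0.001 * c;
            h->pt[c].y = 0.02 * c;
            h->pt[c].z = (s >> 16) / 65536.0;
        }
        h->thickness = 0.0005;
    }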

I also wanted to point out that even calculating all the hairs in
advance isn't so far out of the range of current PCs that it must be
called "clearly infeasible". I did not want to suggest that there could
not be a more efficient way.

>> If you find an efficient way to calculate the effect of many hairs, 
>> it probably isn't the same idea any more.
>
>I don't know what you mean...

You wrote:

|calculating the effect of many hairs that are too small to be seen
|individually because they would only cover a small fraction of a pixel.

I wasn't sure what you meant by that, but it sounded like you expected
to be able to calculate the effect of many hairs without having to test
a ray against individual hairs. Maybe by computing some kind of hair
texture. This would not be the same idea any more (but it might be a
good idea, nonetheless).

>> Anti-aliasing doesn't lose detail, it gains additional detail (or at
>> least accuracy).
>
>It finds a more accurate color for each pixel...and since hairs are so 
>fine, they will virtually disappear. Because of their size, they 
>contribute very little to the pixel, and the anti-aliasing algorithm 
>would have to hit a hair before it knows to supersample that pixel.

There are also a lot of hairs on each pixel, so they should contribute a
lot and the chance of one being hit by a ray should be high. Single
hairs would most probably vanish, however, so you can only raytrace
well-groomed critters :-)

	hp

-- 
   _  | Peter J. Holzer    | It was not the subject of the vote
| |   | hjp### [at] wsracat      | to redefine the numbers.
__/   | http://www.hjp.at/ |	-- Johannes Schwenke <jby### [at] ginkode>



From: Chris Huff
Subject: Re: More on hair
Date: 11 Dec 2000 15:32:12
Message: <chrishuff-526002.15330411122000@news.povray.org>
In article <slr### [at] tealhhjpat>, 
hjp### [at] SiKituwsracat (Peter J. Holzer) wrote:

> I didn't forget it. Many hairs will have the same color, so they can
> share a common texture; all they need is a pointer to it (that's the
> lonely 4 bytes in my formula).

You could get it lower by using an indexed palette of colors instead, 
but you are limiting yourself to solid-colored hairs. Many hairs, 
especially animal hairs, have different colors along their length.


> Ah, I see you misinterpreted my formula. That was actually only for a
> single hair, and $n$ was the number of control points per hair. For a
> whole fur, that would be:
> (4*8*c+8+4+48)*h

I didn't misinterpret the formula; I simply neglected to show the 
multiplication by 1000000, which I assumed would be taken for granted. I 
understood that you meant the number of control points when you said 
"n". However, there are a couple of things that I forgot or don't 
understand...

My revised calculations:
D = sizeof(DBL)
S = sizeof(SNGL)
C = number of control points (5 in this case)
H = number of hairs (1000000 in this case)

Each hair is: Point*C + Color*C + Radius*C, where a Point consists of a 
3D vector and a DBL for the t value (though the t value isn't necessary 
if you restrict yourself to certain spline types).
(4*D*C + 3*S*C + D*C)*H/(1024^2) = 247.96MB
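
The arithmetic, as a throwaway C check of the formula above:

    #include <stdio.h>

    int main(void)
    {
        double D = 8, S = 4;        /* sizeof(DBL), sizeof(SNGL) */
        double C = 5, H = 1000000;  /* control points, hairs     */

        /* point (3 doubles + t) + color (3 singles) + radius (a
           double), all per control point */
        double bytes = (4*D*C + 3*S*C + D*C) * H;
        printf("%.2f MB\n", bytes / (1024 * 1024));  /* 247.96 MB */
        return 0;
    }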

Still out of reach for most people's systems, especially if they have 
anything else in the scene. It could be lowered further by things like 
using a color_map shared among several hairs instead of colors for each 
point, using single precision where possible, etc, but storing every 
hair will still take a lot of memory. And this ignores the other 
potential problems of using hair objects: antialiasing, missing fine 
detail, speed, etc.


> So, assuming a somewhat linear correlation between RAM and the size of
> the largest possible scenes, I'd guess that a 200 MB scene should be
> about as feasible on your machines as a 700 MB scene is on mine.

Maybe under OS X...the Classic Mac OS memory management is pretty bad 
(I'm being too nice, it stinks). OS X is based on BSD UNIX, so it 
handles this type of thing much better.


> Oh, if an optimization cuts computing time or memory consumption in
> half, it's definitely worth doing. It just doesn't make the difference
> between feasible and infeasible. If I can tolerate a rendering time of
> 10 hours, I can also tolerate one of 20 hours. If I can't tolerate one
> of 1 week, 3.5 days will still be too much. (There are exceptions, of
> course - e.g. if you just miss a deadline).

I might tolerate a render time of 10 hours, I wouldn't tolerate a render 
time of 20 except under unusual circumstances. Your logic is badly 
flawed...following it, you could say anyone will tolerate an infinitely 
long render.


> Ok, I admit that I am not that familiar with the internal workings of
> povray. But I wasn't actually expecting that hairs would use much of the
> normal object code. An individual hair would be a very special object
> created only internally by povray and be heavily optimized.

Ok, the way you said things implied that it would be an ordinary object.


> I was just disagreeing with somebody's statement that only a single 
> hair needs to be in memory at a time. While strictly speaking, this 
> is also true, I think it would be prohibitively slow, as you would have 
> to recalculate a lot of hairs for each ray.

So you are saying it is "clearly infeasible"? :-)
Actually, as I understand it, that is how the program that started this 
thread does things. Since it is a scan line algorithm, it doesn't have 
to do it per pixel, though...it can compute a hair and add it to the 
scene using the z-buffer to manage what is in front of what.
For raytracing, you would have to recalculate a lot of hairs, and it 
would be slow, but maybe not as slow as you think. You could do tricks 
like not calculating additional hairs when they won't make a noticeable 
contribution. I do think there is a better way though.
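
Roughly like this, I imagine (a toy sketch, not the actual program):
one hair is rasterized at a time, the z-buffer settles visibility, and
the hair is thrown away afterwards.

    #define W 640
    #define H 480

    typedef struct { float r, g, b; } RGB;

    /* zbuf is filled with a large depth (e.g. 1e30f) before rendering */
    static float zbuf[H][W];   /* nearest depth so far, per pixel */
    static RGB   image[H][W];

    /* Called for each sample along a hair's spline; only the z-buffer
       and the frame buffer persist, never the hair itself. */
    static void plot(int x, int y, float z, RGB c)
    {
        if (z < zbuf[y][x])
        {
            zbuf[y][x]  = z;
            image[y][x] = c;
        }
    }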


> |calculating the effect of many hairs that are too small to be seen
> |individually because they would only cover a small fraction of a pixel.
> 
> I wasn't sure what you meant by that, but it sounded like you expected
> to be able to calculate the effect of many hairs without having to test
> a ray against individual hairs. Maybe by computing some kind of hair
> texture. This would not be the same idea any more (but it might be a
> good idea, nonetheless).

No...again, I am thinking of a ***MEDIA-LIKE*** algorithm. Not a 
texture, not a real object, a separate, fully 3D effect...some way to 
simulate the effect of a mass of hair without calculating every 
individual hair. Actually, a combination of this and an object method 
would likely give the best results...you wouldn't have to compute or 
store nearly as many individual hairs, because the other algorithm would 
"fill in the blanks".


> There are also a lot of hairs on each pixel, so they should contribute a
> lot and the chance of one being hit by a ray should be high. Single
> hairs would most probably vanish, however, so you can only raytrace
> well-groomed critters :-)

This is the loss of detail I was talking about.

-- 
Christopher James Huff
Personal: chr### [at] maccom, http://homepage.mac.com/chrishuff/
TAG: chr### [at] tagpovrayorg, http://tag.povray.org/

<><



From: Tony[B]
Subject: Re: More on hair
Date: 12 Dec 2000 09:08:03
Message: <3a363143@news.povray.org>
One thing I think could help save some memory is eliminating hairs that
aren't seen... say, on the other side of the object, where the camera
won't see them (as long as you don't have any reflective objects to
show that side). Of course, that might affect shadows... oh, well, you guys
are intelligent, you'll think of something. :)



