POV-Ray : Newsgroups : povray.advanced-users : volume calculations
  volume calculations (Message 18 to 27 of 27)  
From: David El Tom
Subject: Re: volume calculations
Date: 12 Jan 2007 20:00:15
Message: <45a82f1f@news.povray.org>
Rune schrieb:
> Warp wrote:
>> Mark Weyer <nomail@nomail> wrote:
>>> If you make the surface transparent and fill the interior with a
>>> uniformly emitting medium, then the sum over the pixels of an
>>> orthographic rendering should give you a reasonable approximation of
>>> the volume.
>>  How can a 2-dimensional projection of a 3-dimensional object give you
>> the volume of the object?
> 
> Because each pixel's color represents the volume of the object contained 
> within the confines of the area of the object that is behind that pixel? The 
> volume-to-color mapping is non-linear though, so one would have to know or 
> test the mapping used, but that should be quite simple.
> 
> Rune

I just had the same idea, ..., but the other way around.

white background, orthographic camera, cylindrical light, transparent
surface and a black color fade inside (light attenuation). I think this
would be even more accurate, as the fade is calculated directly from the
length of the ray traveling inside the object, while with media you can
also influence the result by means of the intervals/samples parameter.
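
Something along these lines ought to show the idea. It is only a rough
sketch: the test object, window size, and fade parameters are placeholders,
there is no light source (a bright background does the job, as later replies
note), and the fade-to-darkness curve would still have to be calibrated
against a shape of known volume.

#declare MyShape = sphere { 0, 1 };    // placeholder test object

camera {
   orthographic
   location <0, 0, -10>
   look_at  0
   right 4*x up 4*y                    // window just larger than the object
}
background { color rgb 1 }             // bright backdrop instead of a light

object {
   MyShape
   pigment { color rgbt 1 }            // fully transparent surface
   finish  { ambient 0 diffuse 0 }     // no ambient glow from the surface
   interior { fade_power 1 fade_distance 1 fade_color rgb 0 }
}

The darker a pixel, the longer the ray spent inside the object; once the
fade curve has been mapped, summing the converted pixel values and
multiplying by the area one pixel covers gives the volume.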

dave



From: Rune
Subject: Re: volume calculations
Date: 12 Jan 2007 20:24:44
Message: <45a834dc@news.povray.org>
David El Tom wrote:
> I just had the same idea, ..., but the other way around.
>
> white background, orthographic camera, cylindrical light, transparent
> surface and a black color fade inside (light attenuation). I think
> this would be even more accurate, as the fade is calculated directly
> from the length of the ray traveling inside the object, while with
> media you can also influence the result by means of the
> intervals/samples parameter.

Yes, I agree, except that I don't think you need any light source.

Rune
-- 
http://runevision.com



From: Tim Attwood
Subject: Re: volume calculations
Date: 12 Jan 2007 21:09:15
Message: <45a83f4b@news.povray.org>
> For objects that fill a reasonably high percentage of the bounding box 
> (such as a sphere) it wouldn't be a big problem unless you really care 
> about high accuracy. For other objects though, such as, say, a cylinder 
> from <-1,-1,-1> to <1,1,1> with radius 0.01, testing all points (with some 
> proximity) within the bounding box is extremely inefficient. However, 
> pre-testing with trace lines within the bounding box parallel with one or 
> more axes could be used to weed out by far the most points beforehand, and 
> this would only be O(n^2).

It just occurred to me that the object can be rotated and the bounding
box tested for minimum volume, to reduce the bounding volume on long,
narrow objects; it wouldn't do much for spiky things, though.

>Come to think of it, the tracing could be used iteratively to do the volume 
>calculation in the first place. This would work in all cases and would only 
>be O(n^2*m) where m represents the complexity of the object.

So, if you start with a grid and trace across, measuring the surface
locations with the trace to determine the distance of the line that is
inside the object at each grid point, and then average all the distances,
then the average divided by the depth of the bounding box should be the
percentage of the bounding volume that represents the object volume?
Sort of like sticking a bunch of long pins into it and measuring how much
is inside? Wouldn't some objects still squeeze through the grid?
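
A rough sketch of those pins in SDL (the test object, grid resolution, and
identifiers are only illustrative, not anything from the thread): each pin
is followed through successive surface hits with trace(), and only the
segments whose midpoints lie inside the object are counted, so non-convex
shapes are handled too.

#declare MyShape = torus { 1.0, 0.3 };        // placeholder test object
#declare N   = 100;                           // N x N grid of pins
#declare Mn  = min_extent(MyShape);
#declare Mx  = max_extent(MyShape);
#declare Dx  = (Mx.x - Mn.x)/N;
#declare Dy  = (Mx.y - Mn.y)/N;
#declare Eps = 0.000001;
#declare Total = 0;
#declare i = 0;
#while (i < N)
   #declare j = 0;
   #while (j < N)
      // start each pin just in front of the bounding box, pointing along +z
      #declare P = <Mn.x + (i+0.5)*Dx, Mn.y + (j+0.5)*Dy, Mn.z - 1>;
      #declare Done = false;
      #while (!Done)
         #declare Norm = <0,0,0>;
         #declare Hit = trace(MyShape, P, z, Norm);
         #if (vlength(Norm) = 0)
            #declare Done = true;             // no more surface crossings
         #else
            #if (inside(MyShape, (P + Hit)/2))
               // the segment from P to Hit ran through the interior
               #declare Total = Total + vlength(Hit - P);
            #end
            #declare P = Hit + Eps*z;         // step just past the surface
         #end
      #end
      #declare j = j + 1;
   #end
   #declare i = i + 1;
#end
#declare Volume = Total*Dx*Dy;                // chord length times cell area
#debug concat("estimated volume: ", str(Volume, 0, 4), "\n")

Anything thinner than the pin spacing can still slip between the pins,
though, so the grid has to be fine relative to the smallest feature.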



From: Tim Attwood
Subject: Re: volume calculations
Date: 12 Jan 2007 23:09:57
Message: <45a85b95@news.povray.org>
Reducing the bounding volume seems to keep the
errors smaller when estimating the object volume.

#macro reduce_bounding(Obj)
   // Try rotations about y, then x, then z in one-degree steps and keep
   // whichever orientation gives the smallest axis-aligned bounding box.
   #local num = 360;

   #local mn = min_extent(Obj);
   #local mx = max_extent(Obj);
   #local O = object{ Obj // center the object on the origin
      translate -(mn+mx)/2};
   #local BVol = (mx.x-mn.x)*(mx.y-mn.y)*(mx.z-mn.z);
   #local RotY = 0;
   #local c = 1;
   #while (c<num)
      #local OY = object{O rotate <0,(360/num)*c,0>};
      #local mn = min_extent(OY);
      #local mx = max_extent(OY);
      #local TV = (mx.x-mn.x)*(mx.y-mn.y)*(mx.z-mn.z);
      #if (BVol > TV)
         #local BVol = TV;
         #local RotY = (360/num)*c;
      #end
      #local c = c + 1;
   #end
   #local O = object{O rotate <0,RotY,0>}; // RotY is already in degrees

   #local RotX = 0;
   #local c = 1;
   #while (c<num)
      #local OX = object{O rotate <(360/num)*c,0,0>};
      #local mn = min_extent(OX);
      #local mx = max_extent(OX);
      #local TV = (mx.x-mn.x)*(mx.y-mn.y)*(mx.z-mn.z);
      #if (BVol > TV)
         #local BVol = TV;
         #local RotX = (360/num)*c;
      #end
      #local c = c + 1;
   #end
   #local O = object{O rotate <RotX,0,0>};

   #local RotZ = 0;
   #local c = 1;
   #while (c<num)
      #local OZ = object{O rotate <0,0,(360/num)*c>};
      #local mn = min_extent(OZ);
      #local mx = max_extent(OZ);
      #local TV = (mx.x-mn.x)*(mx.y-mn.y)*(mx.z-mn.z);
      #if (BVol > TV)
         #local BVol = TV;
         #local RotZ = (360/num)*c;
      #end
      #local c = c + 1;
   #end
   #local O = object{O rotate <0,0,RotZ>};

   object{O}
#end
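
For instance, it might be used like this (Spike is just an illustrative
object, not something from the thread):

#declare Spike = cylinder { <-1,-1,-1>, <1,1,1>, 0.01 };
#declare Tight = reduce_bounding(Spike)
// min_extent/max_extent of Tight now describe a far smaller box, so a
// point-sampling volume estimate wastes far fewer samples on empty space.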



From: Grassblade
Subject: Re: volume calculations
Date: 13 Jan 2007 07:40:00
Message: <web.45a8d2b9174c9403a194287f0@news.povray.org>
"Tim Attwood" <tim### [at] comcastnet> wrote:
> > For objects that fill a reasonably high percentage of the bounding box
> > (such as a sphere) it wouldn't be a big problem unless you really care
> > about high accuracy. For other objects though, such as, say, a cylinder
> > from <-1,-1,-1> to <1,1,1> with radius 0.01, testing all points (with some
> > proximity) within the bounding box is extremely inefficient. However,
> > pre-testing with trace lines within the bounding box parallel with one or
> > more axes could be used to weed out by far the most points beforehand, and
> > this would only be O(n^2).
>
> It just occurred to me that the object can be rotated and the bounding
> box tested for minimum volume, to reduce the bounding volume on long,
> narrow objects; it wouldn't do much for spiky things, though.
>
The shape of the object is irrelevant for the algorithm you suggested, as
long as the pseudo-random number generator has "good properties". Likewise,
rotating the object to minimise the bounding box doesn't have any effect, on
average. [subliminal message] Of course, averaging the result over m tests
would increase the accuracy. [/subliminal message] Seriously, you don't
really need 5000 points in one go: you can cut that down to as few as 30
points per sample, and run m = 5000/n tests; n=100 seems reasonable. Then
average the result of the m tests.
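
A rough sketch of that batching (MyShape, n, and m here are placeholders;
the spread between the batch estimates also gives a handy idea of the
error):

#declare MyShape = sphere { 0, 20 };    // placeholder test object
#declare n = 100;                       // points per batch
#declare m = 50;                        // number of batches (5000/n)
#declare Mn = min_extent(MyShape);
#declare Mx = max_extent(MyShape);
#declare BVol = (Mx.x-Mn.x)*(Mx.y-Mn.y)*(Mx.z-Mn.z);
#declare Rs = seed(1234);
#declare Sum = 0;
#declare b = 0;
#while (b < m)
   #declare Hits = 0;
   #declare k = 0;
   #while (k < n)
      // uniform random point inside the bounding box
      #declare P = Mn + <rand(Rs)*(Mx.x-Mn.x),
                         rand(Rs)*(Mx.y-Mn.y),
                         rand(Rs)*(Mx.z-Mn.z)>;
      #if (inside(MyShape, P))
         #declare Hits = Hits + 1;
      #end
      #declare k = k + 1;
   #end
   #declare Sum = Sum + BVol*Hits/n;    // this batch's volume estimate
   #declare b = b + 1;
#end
#debug concat("estimated volume: ", str(Sum/m, 0, 4), "\n")

Averaging the m batch results gives the same mean as one big run, but how
much the batches disagree is a usable error or stopping criterion.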
> >Come to think of it, the tracing could be used iteratively to do the volume
> >calculation in the first place. This would work in all cases and would only
> >be O(n^2*m) where m represents the complexity of the object.
>
> So, if you start with a grid and trace across, measuring the surface
> locations
> with the trace to determine the distance of the line that is inside the
> object
> at each grid point and then average all the distances, then the average
> divided by the depth of the bounding box should be the percentage of the
> bounding volume that represents the object volume? Sort of like sticking
> a bunch of long pins into it and measuring how much is inside? Wouldn't
> some objects still squeeze thru the grid?
The trace will depend on the rounding POV-Ray uses. If it doesn't use
banker's rounding, it will systematically over- (or under-) estimate each
trace, resulting in an over- (or under-) estimate of the volume.
If the grid is too loose and the object is complex, you might indeed lose
some volume.

Regarding media, suggested by others, I don't know much, but it seems to me
that the boundary of emitting media is brighter than the inside (is that
correct?), so the method would be unsuitable for non-convex items.
Absorbing media, on the other hand, would suffer from the rounding issue
mentioned above, and from the limited number of grays available. Not only
that, but for maximum accuracy (minimum inaccuracy, actually) you would
have to fine-tune the absorption value such that the furthest value is
exactly black.



From: Alain
Subject: Re: volume calculations
Date: 13 Jan 2007 17:44:06
Message: <45a960b6@news.povray.org>
Rune enlightened us on 12-01-2007 20:24:
> David El Tom wrote:
>> I just had the same idea, ..., but the other way around.

>> white background, orthographic camera, cylindrical light, transparent
>> surface and a black color fade inside (light attenuation). I think
>> this would be even more accurate, as the fade is calculated directly
>> from the length of the ray traveling inside the object, while with
>> media you can also influence the result by means of the
>> intervals/samples parameter.

> Yes, I agree, except that I don't think you need any light source.

> Rune
No. All you need is an ambient 1 white plane, or to set the background to rgb 1.

-- 
Alain
-------------------------------------------------
An organizer for the "Million Agoraphobics March" expressed disappointment in 
the turnout for last weekend's event.



From: Tim Attwood
Subject: Re: volume calculations
Date: 13 Jan 2007 18:49:15
Message: <45a96ffb@news.povray.org>
> The shape of the object is irrelevant for the algorithm you suggested,
> as long as the pseudo-random number generator has "good properties".
> Likewise, rotating the object to minimise the bounding box doesn't have
> any effect, on average.

Objects that occupy only a small percentage of their bounding-box
volume run into rounding error from the floating-point
implementation. Minimizing the bounding volume for these objects
shifts the numbers to be larger and gives more accuracy.
A test I ran on a long, narrow cylinder went from being off by
1.3% to 0.18% after rotating. It had no effect on objects that
fit well into their bounding box in the first place (0.3% off).

> ...[subliminal message] Of course, averaging the result over m tests
> would increase the accuracy. [/subliminal message] Seriously, you don't
> really need 5000 points in one go: you can cut that down to as few as
> 30 points per sample, and run m = 5000/n tests; n=100 seems reasonable.
> Then average the result of the m tests.

The average already uses all the samples. The code just
checks the average every 5000 samples to decide whether the
average is still changing. When the average stops changing,
the sampling stops. I tried smaller blocks, but there can be
false stops and big errors. A block size of 5000 forces at
least 10,000 samples. A fixed number of samples might be better.

> The trace will depend on the rounding POV-ray uses. If it doesn't use
> banker's rounding, it will be over-(or under-)estimating systematically
> each trace, resulting in an over(or under) estimate of the volume.

It isn't so much that the coordinates of the point that trace returns
are accurate, as that you know all the points on the line from
the trace origin to the trace hit are either inside or outside the
object with only one test. So it might be possible that a few traces
yield many times the number of samples in the same parsing time.



From: OJCIT
Subject: Re: volume calculations
Date: 19 Jan 2007 19:10:00
Message: <web.45b15d1c174c9403b43007370@news.povray.org>
These are some great ideas.  Thanks for all the responses.

-ojc



From: gregjohn
Subject: Re: volume calculations
Date: 25 Jan 2007 12:45:00
Message: <web.45b8ec65174c940340d56c170@news.povray.org>
Thorsten Froehlich <tho### [at] trfde> wrote:
> OJCIT wrote:
> > Is there a way to calculate and/or export the internal volumes
> > of the finite CSG objects in a POV-Ray scene?
>
> <http://tag.povray.org/povQandT/languageQandT.html#wireframes>


The answer is yes!


I put some objects through my volume macro
( http://www.irtc.org/pipermail/irtc-l/2001-August/010802.html )

CASE ONE
First, a sphere of radius 20! I took the liberty of pasting the full debug
stream (slightly modified SDL from the link above). The debug stream also
shows the elapsed parse time.

# of sects  VOLUME CALC

   3       45037.0370
   4       32000.0000
   5       41472.0000
   6       40296.2963
   7       33399.4169
   8       35000.0000
   9       34150.8916
  10       35328.0000
  11       35534.1848
  12       33777.7778
  13       34636.3223
 0:00:01 Parsing 1405K tokens
  14       34332.3615
  15       33962.6667
  16       34000.0000
 0:00:02 Parsing 2896K tokens
  17       33257.0731
  18       34150.8916
 0:00:03 Parsing 4381K tokens
  19       34477.3291
 0:00:04 Parsing 5872K tokens
  20       33792.0000
  21       34173.4154
 0:00:05 Parsing 7355K tokens
  22       33755.0714
 0:00:06 Parsing 8768K tokens
  23       33680.6115
 0:00:08 Parsing 11672K tokens
  24       33370.3704
 0:00:09 Parsing 13152K tokens
  25       33656.8320
 0:00:10 Parsing 14643K tokens
  26       33966.3177
 0:00:12 Parsing 17614K tokens
  27       33799.7257
 0:00:14 Parsing 20575K tokens
  28       33632.6531
 0:00:16 Parsing 23524K tokens
  29       33832.9575
 0:00:18 Parsing 26478K tokens
  30       33962.6667
 0:00:20 Parsing 29411K tokens
  31       33330.8717
 0:00:23 Parsing 33863K tokens
  32       33703.1250
 0:00:26 Parsing 38317K tokens
  33       33575.2010
 0:00:29 Parsing 42629K tokens
  34       33660.8997
 0:00:32 Parsing 46963K tokens
  35       33697.9592
 0:00:36 Parsing 52721K tokens
  36       33558.2990
 0:00:40 Parsing 58510K tokens
  37       33792.2729
 0:00:44 Parsing 64278K tokens
  38       33553.5792
 0:00:49 Parsing 71598K tokens
  39       33557.4099
 0:00:54 Parsing 78956K tokens
  40       33552.0000
 0:00:59 Parsing 86364K tokens
  41       33556.7969
 0:01:05 Parsing 95205K tokens
  42       33710.3984
 0:01:11 Parsing 103976K tokens
  43       33688.4048
 0:01:18 Parsing 114265K tokens
  44       33598.7979
 0:01:25 Parsing 124562K tokens
  45       33594.6447
 0:01:33 Parsing 136299K tokens
  46       33601.7095
 0:01:41 Parsing 148106K tokens
  47       33555.5705
 0:01:49 Parsing 159903K tokens
  48       33481.4815
 0:01:58 Parsing 173054K tokens
  49       33490.8074
 0:02:08 Parsing 187765K tokens
  50       33665.0240
 0:02:18 Parsing 202533K tokens
  51       33579.3624
 0:02:29 Parsing 218729K tokens
  52       33602.1848
 0:02:40 Parsing 234686K tokens
  53       33619.1621
 0:02:52 Parsing 252150K tokens
  54       33617.6396
 0:03:05 Parsing 270935K tokens
  55       33570.8129
 0:03:19 Parsing 291231K tokens
  56       33562.6822
 0:03:33 Parsing 311744K tokens
  57       33602.3068
 0:03:48 Parsing 333600K tokens
  58       33525.9338
 0:04:04 Parsing 357075K tokens
  59       33587.2314


CASE TWO
Now, M.I.M.E. Man, a complex blob with hundreds of components in the shape
of a person.


# sects       volume
   3           0.0000
   4           0.0000
   5           0.0000
   6           0.0000
   7           0.0000
   8           0.0000
 0:00:01 Parsing 347K tokens
   9           0.0000
  10           0.0000
  11           0.0000
 0:00:02 Parsing 862K tokens
  12           0.0000
  13        5634.5571
 0:00:03 Parsing 1370K tokens
  14       11278.3546
 0:00:04 Parsing 1878K tokens
  15       11003.6640
 0:00:05 Parsing 2380K tokens
  16        4533.3699
 0:00:06 Parsing 2883K tokens
  17        7558.9998
 0:00:08 Parsing 3884K tokens
  18        7429.1713
 0:00:10 Parsing 4881K tokens
  19        9023.9992
 0:00:12 Parsing 5879K tokens
  20       10831.7318
 0:00:14 Parsing 6872K tokens
  21       12030.2450
 0:00:17 Parsing 8360K tokens
  22       13369.6378
 0:00:20 Parsing 9848K tokens
  23       13226.6448
 0:00:23 Parsing 11332K tokens
  24       12536.7266
 0:00:27 Parsing 13307K tokens
  25       11883.9572
 0:00:32 Parsing 15781K tokens
  26       11973.4339
 0:00:36 Parsing 17757K tokens
  27       12264.0289
 0:00:42 Parsing 20720K tokens
  28       11278.3546
 0:00:48 Parsing 23684K tokens
  29       12943.0322
 0:00:54 Parsing 26640K tokens
  30       11691.3931
 0:01:01 Parsing 30082K tokens
  31       12465.9683
 0:01:09 Parsing 34018K tokens
  32       12088.9864
 0:01:20 Parsing 38515K tokens
  33       11539.6552
 0:01:31 Parsing 42857K tokens
  34       11653.4581
 0:01:42 Parsing 47839K tokens
  35       13714.4792
 0:01:55 Parsing 53366K tokens
  36       14593.0151
 0:02:09 Parsing 59566K tokens
  37       17718.3256
 0:02:23 Parsing 65886K tokens
  38       20078.3981
 0:02:41 Parsing 72856K tokens
  39       20660.0429
 0:02:57 Parsing 80019K tokens
  40       20116.0733
 0:03:09 Parsing 85049K tokens

The system choked here because I'd used up all the memory on the machine.



From: Tim Attwood
Subject: Re: volume calculations
Date: 25 Jan 2007 20:11:28
Message: <45b95540$1@news.povray.org>
> First, a sphere of radius 20! I took the liberty of pasting the full
> debug stream (slightly modified SDL from the link above). The debug
> stream also shows the elapsed parse time.

The actual volume of a sphere with a 20-unit
radius is 33510.3216.

According to your test data, the accuracy of
your estimates is similar to that of the
algorithm I used.

That is an interesting use of eval_pigment...
but aren't you just using it to test if a
point is inside the object?
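
Presumably the macro samples an object-pattern pigment, something along
these lines (a guess at the approach, not gregjohn's actual code), which
is indeed equivalent to an inside() test:

#include "functions.inc"            // provides the eval_pigment() macro
#declare Ball = sphere { 0, 20 };
#declare BallTest = pigment { object { Ball color rgb 0, color rgb 1 } };
#declare C    = eval_pigment(BallTest, <5, 5, 5>);  // rgb 1 inside, rgb 0 outside
#declare IsIn = inside(Ball, <5, 5, 5>);            // same answer as a plain 0/1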

seg  %Error
3    34.40
4     4.51
5    23.76
6    20.25
7     0.33
8     4.45
9     1.91
10    5.42
11    6.04
12    0.80
13    3.36
14    2.45
15    1.35
16    1.46
17    0.76
18    1.91
19    2.89
20    0.84
21    1.98
22    0.73
23    0.51
24    0.42
25    0.44
26    1.36
27    0.86
28    0.37
29    0.96
30    1.35
31    0.54
32    0.58
33    0.19
34    0.45
35    0.56
36    0.14
37    0.84
38    0.13
39    0.14
40    0.12
41    0.14
42    0.60
43    0.53
44    0.26
45    0.25
46    0.27
47    0.14
48    0.09
49    0.06
50    0.46
51    0.21
52    0.27
53    0.32
54    0.32
55    0.18
56    0.16
57    0.27
58    0.05
59    0.23



