Daniel Nilsson wrote:
> Tim Attwood wrote:
>>> Is there a way to calculate and/or export the internal volumes of
>>> the finite
>>> CSG objects in a POV-Ray scene?
>>
>> Using extents of an object you can calculate the
>> volume of the bounding box, then use a large number of
>> random point samples within the bounding box to determine
>> the percent of samples that are inside the object, then
>> the volume can be estimated to be that percent of the
>> bounding volume. This is sort of like filling a real object
>> with water then measuring the water.
>
>
> A better way to sample the volume (although requiring external
> processing) may be to render a series of slices of the object and then
> sum the number of covered pixels in those images. This can be done
> using an object pattern mapped to a plane and animating the
> translation of the pattern along the axis perpendicular to the plane,
> much like a CAT-scan really.
This would require exactly the same number of insideness tests as Tim
Attwood's suggestion; they would just be done during rendering instead
of during parsing.
Rune
--
http://runevision.com
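Tim's point-sampling scheme is easy to sketch outside of SDL. A minimal Python version (all names here are illustrative; in POV-Ray itself one would use `inside()` together with `min_extent`/`max_extent`):

```python
import random

def inside_sphere(p):
    # Illustrative "insideness" test: unit sphere at the origin.
    x, y, z = p
    return x*x + y*y + z*z <= 1.0

def mc_volume(inside, lo, hi, n=200_000, seed=1):
    """Estimate an object's volume from its bounding box [lo, hi]
    by uniform random point sampling, as Tim describes: the object
    volume is (fraction of hits) * (bounding box volume)."""
    random.seed(seed)
    bbox_vol = 1.0
    for a, b in zip(lo, hi):
        bbox_vol *= (b - a)
    hits = sum(
        inside(tuple(random.uniform(a, b) for a, b in zip(lo, hi)))
        for _ in range(n)
    )
    return bbox_vol * hits / n

vol = mc_volume(inside_sphere, (-1, -1, -1), (1, 1, 1))
# True volume is 4/3*pi ~ 4.18879; the estimate converges as n grows.
```

The estimate's accuracy depends only on the hit fraction and the sample count, which is why thin objects in large boxes fare badly.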
Rune wrote:
> Daniel Nilsson wrote:
>> Tim Attwood wrote:
>>> Using extents of an object you can calculate the
>>> volume of the bounding box, then use a large number of
>>> random point samples within the bounding box to determine
>>> the percent of samples that are inside the object, then
>>> the volume can be estimated to be that percent of the
>>> bounding volume. This is sort of like filling a real object
>>> with water then measuring the water.
>>
>> A better way to sample the volume (although requiring external
>> processing) may be to render a series of slices of the object and then
>> sum the number of covered pixels in those images. This can be done
>> using an object pattern mapped to a plane and animating the
>> translation of the pattern along the axis perpendicular to the plane,
>> much like a CAT-scan really.
>
> This would require exactly the same number of insideness tests as Tim
> Attwood's suggestion; they would just be done during rendering instead
> of during parsing.
Yes, that is true. But POV-Ray's built-in adaptive anti-aliasing will
save some samples. Implementing adaptive sampling in SDL is probably not
as fast.
Anyway, I don't think this is what the OP was looking for.
--
Daniel Nilsson
haven't really used povray for several years, still lurking here though
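Daniel's slice idea can be sketched the same way: treat each slice as an image of the object pattern and count covered pixels. A hedged Python stand-in, where a plain insideness test replaces the actual slice render (all names illustrative):

```python
def inside_sphere(x, y, z):
    # Illustrative insideness test: unit sphere at the origin.
    return x*x + y*y + z*z <= 1.0

def slice_volume(inside, lo, hi, res=64, slices=64):
    """Stack of 'slice renders', like the CAT-scan idea: each slice
    is a res x res image of the object pattern, and every covered
    pixel contributes one voxel of volume."""
    dx = (hi[0] - lo[0]) / res
    dy = (hi[1] - lo[1]) / res
    dz = (hi[2] - lo[2]) / slices
    voxel = dx * dy * dz
    covered = 0
    for k in range(slices):
        z = lo[2] + (k + 0.5) * dz
        for i in range(res):
            x = lo[0] + (i + 0.5) * dx
            for j in range(res):
                y = lo[1] + (j + 0.5) * dy
                if inside(x, y, z):
                    covered += 1
    return covered * voxel

v = slice_volume(inside_sphere, (-1, -1, -1), (1, 1, 1))
```

The triple loop makes Rune's point explicit: res * res * slices insideness tests, exactly like sampling on a regular 3D grid.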
Rune enlightened us on 10-01-2007 12:33:
> Rune wrote:
>> For objects that fill a reasonably high percentage of the bounding
>> box (such as a sphere) it wouldn't be a big problem unless you really
>> care about high accuracy. For other objects though, such as, say, a
>> cylinder from <-1,-1,-1> to <1,1,1> with radius 0.01, testing all
>> points (with some proximity) within the bounding box is extremely
>> inefficient. However, pre-testing with trace lines within the
>> bounding box parallel to one or more axes could be used to weed out
>> the vast majority of points beforehand, and this would only be O(n^2).
>
> ...in most common cases that is. It won't work for, say, a sphere with a
> slightly smaller sphere carved out of it.
>
> Come to think of it, the tracing could be used iteratively to do the volume
> calculation in the first place. This would work in all cases and would only
> be O(n^2*m) where m represents the complexity of the object.
>
> Rune
What about a complex shape with another complex shape carved out of it,
making a strange hole with a single opening?
--
Alain
-------------------------------------------------
Age is a very high price to pay for maturity.
> If you want any accuracy at all, it would take a very LONG time.
>
> I would estimate that getting one additional decimal of accuracy
> (or whichever base is used) requires O(n^3) more points to be sampled.
> You can figure out that that number grows quite fast.
I ran a test on a half sphere that came out in 13 seconds
with an accuracy of about 0.3%.
The half sphere's volume should be (2/3)*pi*r^3 = 2.094,
since I used r = 1, and the estimate was 2.087.
I ran a second test with the half sphere scaled up by 100
and got the same sort of result: 13 seconds and 2087920.
So the adaptive loop in my volume-estimate algorithm isn't
sensitive to object size.
I did a third test with a narrow cylinder (rotated so that the
bounding box was larger); it was off by about 1.3% in 11 seconds.
If I remember my statistics right, it only takes a fixed number
of samples to represent a larger population to
some confidence level, which is why you see the plus or minus
4% on a lot of news polls. So it should run in about O(n) time, though
that time increases as O(n^3) as you modify the algorithm to be more
accurate.
I could see that a narrow enough object might return
zero volume if the real volume is only a few percent of the
bounding box.
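The statistics argument can be made concrete: for a hit fraction p out of n uniform samples, the binomial standard error of the volume estimate is bbox_vol * sqrt(p*(1-p)/n), so one extra decimal of accuracy costs about 100x the samples. A small Python sketch (the numbers below assume the half-sphere test above sits in a 2 x 2 x 1 bounding box of volume 4; names are illustrative):

```python
import math

def mc_stderr(p, n, bbox_vol):
    """Standard error of a Monte Carlo volume estimate when a
    fraction p of n uniform samples lands inside the object."""
    return bbox_vol * math.sqrt(p * (1.0 - p) / n)

def samples_for(p, bbox_vol, abs_err):
    """Samples needed for one standard error to equal abs_err.
    Shrinking abs_err by 10x (one more decimal) costs 100x the
    samples, since the error only falls as 1/sqrt(n)."""
    return math.ceil(p * (1.0 - p) * (bbox_vol / abs_err) ** 2)

# Half sphere, r = 1, assumed 2 x 2 x 1 bounding box (volume 4):
# hit fraction p = 2.094 / 4, target error 0.3% of 2.094.
n = samples_for(2.094 / 4.0, 4.0, 0.003 * 2.094)
# n comes out around 100,000 samples - a fixed count for a fixed
# confidence level, independent of object scale, matching the tests.
```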
> Is there a way to calculate and/or export the internal volumes of the finite
> CSG objects in a POV-Ray scene?
This might be the same as what Rune mentioned:
If you make the surface transparent and fill the interior with a uniformly
emitting medium, then the sum over the pixels of an orthographic rendering
should give you a reasonable approximation of the volume.
Mark Weyer
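The reason the rendering trick can work: with an orthographic camera, each pixel's brightness encodes the thickness of the object along that pixel's ray, and summing thickness times pixel area over the image is a numerical integral of the volume. A Python sketch with an analytic thickness function standing in for the render (illustrative names, not POV-Ray syntax):

```python
import math

def sphere_thickness(x, y, r=1.0):
    """Chord length of an axis-parallel ray through a sphere of
    radius r, for the ray hitting the image plane at (x, y)."""
    d2 = r*r - x*x - y*y
    return 2.0 * math.sqrt(d2) if d2 > 0.0 else 0.0

def volume_from_thickness(thickness, lo, hi, res=256):
    """Sum per-pixel object thickness over a res x res orthographic
    'image'; each pixel contributes thickness * pixel_area."""
    dx = (hi[0] - lo[0]) / res
    dy = (hi[1] - lo[1]) / res
    total = 0.0
    for i in range(res):
        x = lo[0] + (i + 0.5) * dx
        for j in range(res):
            y = lo[1] + (j + 0.5) * dy
            total += thickness(x, y)
    return total * dx * dy

v = volume_from_thickness(sphere_thickness, (-1.0, -1.0), (1.0, 1.0))
# v approximates the unit sphere's volume, 4/3*pi ~ 4.18879
```

In the rendered version the hard part is recovering thickness from pixel color, which is where the non-linear mapping discussed below comes in.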
Mark Weyer <nomail@nomail> wrote:
> If you make the surface transparent and fill the interior with a uniformly
> emitting medium, then the sum over the pixels of an orthographic rendering
> should give you a reasonable approximation of the volume.
How can a 2-dimensional projection of a 3-dimensional object give you
the volume of the object?
--
- Warp
Warp wrote:
> Mark Weyer <nomail@nomail> wrote:
>> If you make the surface transparent and fill the interior with a
>> uniformly emitting medium, then the sum over the pixels of an
>> orthographic rendering should give you a reasonable approximation of
>> the volume.
>
> How can a 2-dimensional projection of a 3-dimensional object give you
> the volume of the object?
Because each pixel's color represents the volume of the object contained
in the column of space behind that pixel? The volume-to-color mapping is
non-linear, though, so one would have to know or test the mapping used,
but that should be quite simple.
Rune
--
http://runevision.com
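If the fade or media follows exponential attenuation, the non-linear volume-to-color mapping can be inverted with a logarithm before summing the pixels. A one-line sketch (k is an assumed attenuation coefficient for illustration, not a POV-Ray keyword):

```python
import math

def thickness_from_pixel(pixel, k):
    """Invert exponential attenuation: a ray crossing thickness t of
    fading material arrives with brightness exp(-k*t), so
    t = -ln(pixel) / k.  k is an assumed attenuation coefficient."""
    return -math.log(pixel) / k

# Round trip: a slab 0.5 units thick with k = 2.0.
pixel = math.exp(-2.0 * 0.5)
assert abs(thickness_from_pixel(pixel, 2.0) - 0.5) < 1e-12
```

With the per-pixel thicknesses recovered, summing them times the pixel area gives the volume, as in the orthographic-projection approach.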
Rune wrote:
> Warp wrote:
>> Mark Weyer <nomail@nomail> wrote:
>>> If you make the surface transparent and fill the interior with a
>>> uniformly emitting medium, then the sum over the pixels of an
>>> orthographic rendering should give you a reasonable approximation of
>>> the volume.
>> How can a 2-dimensional projection of a 3-dimensional object give you
>> the volume of the object?
>
> Because each pixel's color represents the volume of the object contained
> within the confines of the area of the object that is behind that pixel? The
> volume-to-color mapping is non-linear though, so one would have to know or
> test the mapping used, but that should be quite simple.
>
> Rune
I just had the same idea, ..., but the other way around:
white background, orthographic camera, cylindrical light, transparent
surface and a black color fade inside (light attenuation). I think this
would be even more accurate, as the fade is calculated directly from the
length of the ray traveling inside the object, while with media you can
also influence the result through the intervals/samples parameters.
dave
David El Tom wrote:
> I just had the same idea, ..., but the other way around:
>
> white background, orthographic camera, cylindrical light, transparent
> surface and a black color fade inside (light attenuation). I think
> this would be even more accurate, as the fade is calculated directly
> from the length of the ray traveling inside the object, while with
> media you can also influence the result through the
> intervals/samples parameters.
Yes, I agree, except that I don't think you need any light source.
Rune
--
http://runevision.com
> For objects that fill a reasonably high percentage of the bounding box
> (such as a sphere) it wouldn't be a big problem unless you really care
> about high accuracy. For other objects though, such as, say, a cylinder
> from <-1,-1,-1> to <1,1,1> with radius 0.01, testing all points (with some
> proximity) within the bounding box is extremely inefficient. However,
> pre-testing with trace lines within the bounding box parallel to one or
> more axes could be used to weed out the vast majority of points beforehand,
> and this would only be O(n^2).
It just occurred to me that the object can be rotated and the bounding
box tested for minimum volume to reduce the bounding volume on long
narrow objects; it wouldn't do much for spiky things though.
> Come to think of it, the tracing could be used iteratively to do the volume
> calculation in the first place. This would work in all cases and would only
> be O(n^2*m) where m represents the complexity of the object.
So, if you start with a grid and trace across, measuring the surface
locations with the trace to determine the length of the line that is inside
the object at each grid point, and then average all the lengths, the average
divided by the depth of the bounding box should be the percentage of the
bounding volume that represents the object volume? Sort of like sticking
a bunch of long pins into it and measuring how much is inside? Wouldn't
some objects still squeeze through the grid?
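That pins reading is right, and the last worry is real: any feature thinner than the pin spacing contributes nothing. A Python sketch (illustrative names; `chord` stands in for a pair of trace calls returning the inside-length along the ray):

```python
def cyl_chord(x, y, r=0.01):
    """Inside-length along z of a thin cylinder (radius r, length 2,
    axis along z) for the 'pin' at (x, y)."""
    return 2.0 if x*x + y*y <= r*r else 0.0

def pin_volume(chord, lo, hi, res):
    """Shoot a res x res grid of parallel 'pins' through the bounding
    box, average the inside-length, and scale by the cross-section
    area: this equals (average depth fraction) * bounding volume."""
    dx = (hi[0] - lo[0]) / res
    dy = (hi[1] - lo[1]) / res
    total = 0.0
    for i in range(res):
        x = lo[0] + (i + 0.5) * dx
        for j in range(res):
            y = lo[1] + (j + 0.5) * dy
            total += chord(x, y)
    return (total / (res * res)) * (hi[0] - lo[0]) * (hi[1] - lo[1])

# A 16 x 16 pin grid has spacing 0.125, so every pin misses the
# r = 0.01 cylinder: the object "squeezes through the grid" and the
# estimate is exactly zero.
coarse = pin_volume(cyl_chord, (-1, -1), (1, 1), 16)
```

For fat objects the same routine converges quickly; the failure mode is purely a matter of pin spacing versus feature size.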