POV-Ray : Newsgroups : povray.general : Amapi is free
  Amapi is free (Message 31 to 40 of 55)
From: Jide
Subject: Re: Amapi is free
Date: 19 Apr 2003 05:32:19
Message: <3ea117a3@news.povray.org>
Timothy R. Cook wrote:
> ingo wrote:
> > How many hours did it take DAZ to create Vicky? Does she look realistic?
>
> Vicky was not built by hand.  Vicky was made using a 3d-scanner.
> Hideously expensive to own, not exactly cheap to rent use of one
> either, but that's how they build human meshes that look realistic.

Well, AFAIK the models created by a 3D scanner are only used as a reference
for digitally sculpting a lower-resolution mesh that will animate more
smoothly, or the scanned model is used to derive a displacement map for a
low-res cage. There are also lots of people who make realistic human models
'by hand'; Steven Stahlberg and Peter Syomka are in my top 10. It's actually
not even that complicated once you have an app you can work with and you
know anatomy well enough.

--
-Jide



From: Simon Adameit
Subject: Re: Amapi is free
Date: 19 Apr 2003 06:15:12
Message: <3ea121b0@news.povray.org>
Thorsten Froehlich wrote:
> In article <3ea07c18$1@news.povray.org> , Simon Adameit 
> <sim### [at] gaussschule-bsde>  wrote:
> 
> 
>>Are you sure? Cinema4d seems to do raytracing
> 
> 
> Yes, it is the closest you can get to plain ray-tracing.  Still, it is a
> hybrid method, as it relies at least on various light and reflection tricks;
> at least that is what the images suggest when looking closely at light,
> shadow and reflection compared to known ray-traced images.

A friend of mine has it, and you have the option of choosing between hard 
shadows, shadow maps and area lights. It might be that the images you mean 
were rendered with shadow maps, though I don't know what the reflection 
trick could be.

>I actually
> downloaded the manual (the whole 42 MB), but it does not say anything about
> technical issues at all :-(

Perhaps their c++ sdk doc can tell you more:
http://www.plugincafe.com/r8sdk/index.html



From: Warp
Subject: Re: Amapi is free
Date: 19 Apr 2003 09:23:24
Message: <3ea14dcc@news.povray.org>
Thorsten Froehlich <tho### [at] trfde> wrote:
> I actually
> downloaded the manual (the whole 42 MB), but it does not say anything about
> technical issues at all :-(

  I think that's one difference between "professional" commercial renderers
and amateur free ones. :)

-- 
plane{-x+y,-1pigment{bozo color_map{[0rgb x][1rgb x+y]}turbulence 1}}
sphere{0,2pigment{rgbt 1}interior{media{emission 1density{spherical
density_map{[0rgb 0][.5rgb<1,.5>][1rgb 1]}turbulence.9}}}scale
<1,1,3>hollow}text{ttf"timrom""Warp".1,0translate<-1,-.1,2>}//  - Warp -



From: Thorsten Froehlich
Subject: Re: Amapi is free
Date: 19 Apr 2003 10:48:28
Message: <3ea161bc$1@news.povray.org>
In article <3ea14dcc@news.povray.org> , Warp <war### [at] tagpovrayorg>  wrote:

>> I actually
>> downloaded the manual (the whole 42 MB), but it does not say anything about
>> technical issues at all :-(
>
>   I think that's one difference between "professional" commercial renderers
> and amateur free ones. :)

No, the manual was included in German and English, and there were plenty of
example movies (or something like that, I didn't watch them).  So they are
just better at cheating ;-)

    Thorsten

____________________________________________________
Thorsten Froehlich, Duisburg, Germany
e-mail: tho### [at] trfde

Visit POV-Ray on the web: http://mac.povray.org



From: Ken
Subject: Re: Amapi is free
Date: 19 Apr 2003 11:05:48
Message: <3EA165EC.F4F37F61@pacbell.net>
Thorsten Froehlich wrote:
> 
> In article <3ea14dcc@news.povray.org> , Warp <war### [at] tagpovrayorg>  wrote:
> 
> >> I actually
> >> downloaded the manual (the whole 42 MB), but it does not say anything about
> >> technical issues at all :-(
> >
> >   I think that's one difference between "professional" commercial renderers
> > and amateur free ones. :)
> 
> No, the manual was included in German and English, and there where plenty of
> example movies (or something like that, I didn't watch them).  So they are
> just better cheating ;-)

Actually I think Warp was referring to the fact that the documentation for a
commercial rendering package never discusses the technology behind how it
works, whereas POV-Ray (an amateur free program) does an excellent job of it.

-- 
Ken Tyler



From: Christopher James Huff
Subject: Re: Amapi is free
Date: 19 Apr 2003 13:40:18
Message: <cjameshuff-E9934B.13392619042003@netplex.aussie.org>
In article <3ea07c18$1@news.povray.org>,
 Simon Adameit <sim### [at] gaussschule-bsde> wrote:

> > No, here you are wrong.  It simply happens that CSG is harder to do if you
> > don't use plain ray-tracing.  
> 
> I think even if people used plain raytracing, they wouldn't use CSG as 
> their primary modelling method.

Most modellers use polygon representations to provide a real-time 
preview of the scene, and faster but lower-quality scanline rendering, 
so of course when they raytrace they use the same data. Primitive CSG is 
generally faster and uses less memory, but it requires primitives that 
are harder to preview and scanline-render; you generally have to 
tessellate them first. If they used pure raytracing, they would use CSG 
of primitives unless meshes were absolutely necessary, as it would be the 
most efficient way to do the job.
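[The interval arithmetic that makes primitive CSG cheap for a ray tracer can be sketched in a few lines. This is a minimal illustration, not code from any renderer discussed here; all names are made up. Each primitive yields an entry/exit interval along the ray, and a boolean intersection is just the overlap of intervals.]

```python
# Illustrative sketch (not from the thread) of why CSG of primitives is
# cheap for a ray tracer: each primitive yields an entry/exit interval
# along the ray, and boolean operations just combine those intervals.
import math

def ray_sphere_interval(origin, direction, center, radius):
    """Return (t_enter, t_exit) for the ray and sphere, or None on a miss.
    The direction is assumed normalized."""
    oc = [o - c for o, c in zip(origin, center)]
    b = sum(d * o for d, o in zip(direction, oc))
    c = sum(o * o for o in oc) - radius * radius
    disc = b * b - c
    if disc < 0:
        return None
    s = math.sqrt(disc)
    return (-b - s, -b + s)

def csg_intersection(i1, i2):
    # The CSG intersection of two solids along a ray is the interval overlap.
    if i1 is None or i2 is None:
        return None
    lo, hi = max(i1[0], i2[0]), min(i1[1], i2[1])
    return (lo, hi) if lo < hi else None

# A ray along +x through two overlapping unit spheres: the combined
# interval is the lens-shaped intersection of the two solids.
a = ray_sphere_interval((-5, 0, 0), (1, 0, 0), (0, 0, 0), 1.0)
b = ray_sphere_interval((-5, 0, 0), (1, 0, 0), (0.5, 0, 0), 1.0)
print(csg_intersection(a, b))   # → (4.5, 6.0)
```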

-- 
Christopher James Huff <cja### [at] earthlinknet>
http://home.earthlink.net/~cjameshuff/
POV-Ray TAG: chr### [at] tagpovrayorg
http://tag.povray.org/



From: Simon Adameit
Subject: Re: Amapi is free
Date: 19 Apr 2003 14:58:15
Message: <3ea19c47@news.povray.org>
Christopher James Huff wrote:
> In article <3ea07c18$1@news.povray.org>,
>  Simon Adameit <sim### [at] gaussschule-bsde> wrote:
> 
> 
>>>No, here you are wrong.  It simply happens that CSG is harder to do if you
>>>don't use plain ray-tracing.  
>>
>>I think even if people used plain raytracing, they wouldn't use CSG as 
>>their primary modelling method.
> 
> 
> Most modellers use polygon representations to provide a real-time 
> preview of the scene, and faster but lower-quality scanline rendering, 
> so of course when they raytrace they use the same data. Primitive CSG is 
> generally faster and uses less memory, but it requires primitives that 
> are harder to preview and scanline-render; you generally have to 
> tessellate them first. If they used pure raytracing, they would use CSG 
> of primitives unless meshes were absolutely necessary, as it would be the 
> most efficient way to do the job.
> 

It's just that most of the time meshes are absolutely necessary. 
Primitive CSG is too limited in what kinds of objects you can create, so 
it would be used, but probably not much. I think the reason CSG is used 
so much in POV is that it's easy to create objects this way using the 
SDL. And pure raytracing doesn't mean that you are not able to model and 
preview in real-time.



From: Thorsten Froehlich
Subject: Re: Amapi is free
Date: 19 Apr 2003 17:04:59
Message: <3ea1b9fb$1@news.povray.org>
In article <3ea19c47@news.povray.org> , Simon Adameit 
<sim### [at] gaussschule-bsde>  wrote:

> And pure raytracing doesn't mean that you are not able to model and
> preview in real-time.

Well, if the preview excludes everything (most importantly the recursive
part of the process) that makes ray-tracing what it is, sure, but then you
can just as well use other approaches which will actually produce better
results more quickly for everything but the most complex scenes.

    Thorsten

____________________________________________________
Thorsten Froehlich, Duisburg, Germany
e-mail: tho### [at] trfde

Visit POV-Ray on the web: http://mac.povray.org



From: Simon Adameit
Subject: Re: Amapi is free
Date: 19 Apr 2003 18:56:24
Message: <3ea1d418$1@news.povray.org>
Thorsten Froehlich wrote:
> In article <3ea19c47@news.povray.org> , Simon Adameit 
> <sim### [at] gaussschule-bsde>  wrote:
> 
> 
>>And pure raytracing doesn't mean that you are not able to model and
>>preview in real-time.
> 
> 
> Well, if the preview excludes everything (most importantly the recursive
> part of the process) that makes ray-tracing what it is, sure, but then you
> can just as well use other approaches which will actually produce better
> results more quickly for everything but the most complex scenes.
> 

But is it still pure raytracing? ;-)

Regarding your last comment, how does raytracing compare with other 
techniques as complexity grows?



From: Thorsten Froehlich
Subject: Re: Amapi is free
Date: 20 Apr 2003 04:57:21
Message: <3ea260f1@news.povray.org>
In article <3ea1d418$1@news.povray.org> , Simon Adameit 
<sim### [at] gaussschule-bsde>  wrote:

> regarding your last comment, how compares raytracing with other
> techniques when complexity grows?

In short:

With scanline rendering you have to draw every object potentially
intersecting the view frustum in order to determine its visibility.  That
is, the bounding volumes a scene contains are only good for determining
whether objects (actually, their bounding volumes) intersect the view
frustum.  Then you have to transform them and draw those objects.  As
objects can only consist of polygons, you have to transform all those points
(or build the mesh on the fly, which is even more work), and then start
drawing the triangles.  And it may turn out that only a tiny fraction of
those triangles is really visible: the more objects are in the scene, the
more of the points drawn will in fact be hidden behind others.

With ray-tracing you can use bounding everywhere.  Once you have identified
the object with the closest bounding volume, you calculate the intersection.
If there is none, pick the object with the second-closest bounding volume
(which you already have, as you found it while searching for the closest
one), and so on.  And there are a lot of ways to easily determine which
bounding volumes are closest.  Certain algorithms allow storing bounding
information such that you have an almost sorted list of only the bounding
volumes within the view frustum.  Or you use other data structures to
represent the scene.
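[The nearest-first search described above can be sketched as follows. This is an illustrative sketch assuming spherical bounding volumes; the helper names and scene numbers are invented, not taken from any actual renderer.]

```python
# Sketch (illustrative) of the nearest-first bounding-volume search:
# sort bounding spheres by distance to the viewpoint, test the ray
# against them in order, and stop at the first one actually hit.
import math

def hits_sphere(origin, direction, center, radius):
    # Standard ray/sphere test; direction is assumed normalized.
    oc = [o - c for o, c in zip(origin, center)]
    b = sum(d * o for d, o in zip(direction, oc))
    c = sum(o * o for o in oc) - radius * radius
    disc = b * b - c
    return disc >= 0 and -b + math.sqrt(disc) > 0

def first_hit(origin, direction, bounds):
    # bounds: list of (center, radius) pairs
    ordered = sorted(bounds, key=lambda s: math.dist(origin, s[0]))
    for tests, (center, radius) in enumerate(ordered, start=1):
        if hits_sphere(origin, direction, center, radius):
            return tests, center   # number of tests needed, and the hit
    return None

bounds = [((0, 0, 10), 1.0), ((3, 0, 5), 1.0), ((0, 0, 20), 2.0)]
print(first_hit((0, 0, 0), (0, 0, 1), bounds))   # → (2, (0, 0, 10))
```

The nearest bounding sphere is tested first and misses; the second test hits, so the two farther volumes never need triangle-level work.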

So, you end up with something like this for scanline rendering:

Let's assume you have a scene with 50 million triangles within the view
frustum at one frame (which is about as many triangles as scanline rendering
hardware can draw per second today - and I am talking about realistic
numbers, not the theoretical maximum marketing people list somewhere).

Let's further assume we have a screen with about one megapixel, that is,
1152*870 pixels (a format not too common today, but it was years ago).

So, now there are on average 50 triangles contributing to each pixel.  In
fact, assuming an average triangle size of 10 pixels, there are 500 triangle
pixels per pixel, but let's ignore that number for now - the 50 triangles are
already bad enough.  So, for every pixel you will have to perform at least
one transformation (assuming every triangle has two points in common with
other triangles - this isn't quite the case, they will share fewer than two
on average), which is a matrix multiplication plus a transformation into
screen coordinates (to get your triangles from 3D to 2D).

Let's assume this computation takes 20 nanoseconds (ns), which is realistic
on hardware today.  So, 50 * 20 ns = 1000 ns for the 50 triangles per pixel
(and thus one second for all 50 million triangles).
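[The scanline figures above can be checked with quick arithmetic. These are the post's assumed numbers, not measurements.]

```python
# Back-of-the-envelope check of the scanline estimate above, using the
# post's assumed figures (assumptions, not measurements).
triangles = 50_000_000        # triangles inside the view frustum
pixels = 1152 * 870           # roughly one megapixel
transform_ns = 20             # assumed cost of transforming one triangle

per_pixel = triangles / pixels             # triangles contributing per pixel
total_seconds = triangles * transform_ns / 1e9
print(round(per_pixel), total_seconds)     # → 50 1.0
```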

And for ray-tracing, using only simple algorithms, without (!!!) advanced
algorithms that do better than plain bounding volumes, you get:

Assume that the 50 million triangles belong to 1000 objects with 50000
triangles each.  So we end up with 1000 bounding volumes, which will take no
more than 1000 * lg 1000 = (about) 10000 steps to sort by distance from the
viewpoint.  Let's assume every step takes 10 ns (it probably takes less), so
the sort takes 100000 ns.  Let's also ignore that it took another 100 ns or
so to determine the distance of each bounding volume from the viewpoint.
Thus, this takes about 1/10000 of a second, and we will ignore it for now
because the really important computation follows.
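[The sort estimate works out as follows, again using the post's assumed per-step cost.]

```python
# The sort-cost estimate above: n log2 n comparison steps at an assumed
# 10 ns each gives on the order of a ten-thousandth of a second.
import math

n = 1000                       # bounding volumes to sort
steps = n * math.log2(n)       # about 10000 steps
seconds = steps * 10 / 1e9     # 10 ns per step (assumed)
print(round(steps), seconds)   # ~9966 steps, ~1e-4 seconds
```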

Compute the intersection of a ray with the bounding volume.  Let's assume for
simplicity that this volume is a sphere, and that the intersection test takes
20 ns on average (which is fairly realistic).  But how many will we have to
compute on average?  This is the really important point, and it depends on
the size of the objects.  Remember we assumed before that every triangle
covers about 10 visible pixels?  Let's now also assume that objects have a
volume, that is, they have two sides, so we end up with about 50000
triangles per object * 10 pixels per triangle = 500000 pixels per object,
and half are not facing us, thus 250000 pixels is the average size of an
object.  This yields about 250 objects per pixel.  Thus, it takes about
four intersection tests with bounding volumes to find the first one that
intersects (fewer, actually, as bounding volumes won't be optimal).  Let's
assume we always miss the first object, so there will be on average 20 ns
* 8 = 160 ns of intersection tests with bounding volumes.  That leaves
about 840 ns for finding the actual intersection with one of the 50000
triangles.  This is possible because the triangles are again separated into
several bounding volumes.  I am not going to go into detail here, but you
can find many algorithms on the internet.  Let's assume the bounding
hierarchy splits the space into even boxes, 16 * 16 * 16 of them (4096);
this yields on average 12 triangles per box, and we can walk them following
a 3D line (which costs us essentially nothing).  Let's assume an
intersection test takes 30 ns on average (triangle intersection is a bit
more complex, but allows early bailout if no intersection can occur); then
we can do 840 ns / 30 ns = 28 intersection tests per pixel.
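[The per-pixel budget arithmetic, using the assumed test costs from the post:]

```python
# The per-pixel time budget above: match the scanline renderer's ~1000 ns
# per pixel, spend part on bounding-sphere tests, and the rest on triangle
# tests (all costs are the post's assumptions).
budget_ns = 1000                      # per-pixel budget from the scanline case
bounding_ns = 8 * 20                  # 8 bounding-sphere tests at 20 ns each
remaining_ns = budget_ns - bounding_ns
triangle_tests = remaining_ns // 30   # triangle test assumed to take 30 ns
print(bounding_ns, remaining_ns, triangle_tests)   # → 160 840 28
```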

In conclusion, just counting the transformations, and ignoring the fact that
the scanline renderer will have to test 500 million pixels for visibility by
comparing against the z-buffer, ray-tracing will already be faster at
finding the intersection - without the scanline renderer's 500 million
comparisons, which take 1/5 of a second for sure and generate 1.9 GB of
memory traffic, plus another 1.9 GB for storing the pixels and overwriting
them whenever a closer one is found, plus the 0.8 GB it takes to read the
triangles.  That gives a total of 4.6 GB.  Texturing and so on isn't even
included.

For the plain pixel drawing, ray-tracing will only generate 4 MB of memory
traffic.  The bounding volume tests will require 8 spheres * 16 bytes
(position plus radius of the sphere) per pixel, plus 28 triangles * 36 bytes
(three points), which yields only about 1.1 GB of memory traffic, roughly a
quarter of what the scanline renderer needed.
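[A quick tally of the per-pixel traffic under the same assumed sizes (16 bytes per bounding sphere, 36 bytes per triangle):]

```python
# Tallying the ray tracer's per-pixel memory traffic with the sizes
# assumed above; the total comes to roughly 1.1 GB over a megapixel.
pixels = 1_000_000
bytes_per_pixel = 8 * 16 + 28 * 36      # bounding spheres + triangle tests
total_gb = pixels * bytes_per_pixel / 1e9
print(bytes_per_pixel, total_gb)        # → 1136 1.136
```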

So, as you can see (if you managed to read this far), as scene complexity
rises, ray-tracing gets more economical, even for things like triangle
meshes.

    Thorsten

____________________________________________________
Thorsten Froehlich, Duisburg, Germany
e-mail: tho### [at] trfde

Visit POV-Ray on the web: http://mac.povray.org




Copyright 2003-2023 Persistence of Vision Raytracer Pty. Ltd.