From: Statler
Subject: New User, question about mesh duplication.
Hello all! I'm new here; I've been using POV-Ray for about a month now,
going through the tutorials that shipped with it. I have some background in
both modelling and rendering. I went to college initially to be a drafter,
and worked with AutoCAD quite extensively. I started in R12, and used
AutoCAD through ACAD2000. I worked for an architectural design firm for
a while, and while I was there, I got to use a piece of software from
Autodesk called Lightscape. This was my first experience with
photorealistic rendering. I was hooked immediately. Lightscape's
raytrace/radiosity engine was leaps and bounds ahead of the internal
AutoCAD engine that I was used to working with (especially the flat shader
in R12). Anyhow, I switched career fields 4-1/2 years ago and had kind of
forgotten about rendering. About a year ago I got myself a copy of
IntelliCAD 2001i for free (they were open sourcing it at the time).
IntelliCAD is a virtual clone of AutoCAD R14, except that the free version
had no built-in rendering engine. Searching the net for an open source
renderer was kind of a disappointment; I couldn't find anything that
produced the quality I had seen when I was using Lightscape. Well, almost
nothing: I did find one. It was called POV-Ray. The quality of its renders
rivalled that of the high-end renderers I had seen. The only problem was
that I had to learn this "SDL", so I put it by the wayside for a while.
About a month ago, I revisited the copy I had downloaded and decided to
put my mind to becoming a "raytracer". I downloaded v3.6 (WIN) and started
chopping through the tutorials. Once I got into it, I realized that it's
not all that different from AutoCAD.
That brings us to today:
I've pretty well completed the tutorials, and feel that I have a pretty good
grasp of the SDL (well, except that isosurfaces give me a headache, but I
imagine that happens to every new user who doesn't have some kind of
advanced math degree). I'm working on designing a scene of my own (my first
large-scale project with POV-Ray) and I'm trying to do it all with POV. I
know I won't learn the software adequately if I simply design in IntelliCAD
and render in POV-Ray. Anyhow, I've read a bit on here about POV-Ray
storing meshes in memory to speed up rendering (or maybe just parsing?)
time. I don't recall seeing that in the tutorial, but I may have missed it.
Do you just declare the mesh, then place multiple copies, or is it more
complicated than that?
Thanks in advance for your help. I'm sure I'll have lots more to ask down
the road.
Oh, and sorry to be so long-winded; I'm like that sometimes. :)
From: Christoph
Subject: Re: New User, question about mesh duplication.
Statler wrote:
> [...]
> Anyhow, I've read a bit on here about POV-Ray
> storing meshes in memory to speed up rendering (or maybe just parsing?)
> time. I don't recall seeing that in the tutorial, but I may have missed it.
> Do you just declare the mesh, then place multiple copies, or is it more
> complicated than that?
POV-Ray stores all geometry in memory during the render. The special
thing about meshes is that they are instanced, i.e. the geometry data is
only stored once if the same mesh is used multiple times.
Christoph
--
POV-Ray tutorials, include files, Sim-POV,
HCR-Edit and more: http://www.tu-bs.de/~y0013390/
Last updated 06 Jul. 2004 _____./\/^>_*_<^\/\.______
From: Warp
Subject: Re: New User, question about mesh duplication.
Basically if you have this:
#declare MyMesh = mesh2 { ... };
object { MyMesh translate <1, 2, 3> }
object { MyMesh translate <-1, -2, -3> }
object { MyMesh translate <-4, 5, 6> }
object { MyMesh translate <4, -5, -6> }
object { MyMesh translate <1, -2, 3> }
you will have 5 instances of the mesh in your scene, but POV-Ray will
only have one instance of it in memory. That is, this scene only takes
the memory needed for one instance of the mesh (and not five).
When POV-Ray is raytracing the scene, it will read the mesh data from
this single mesh in memory for each instance you have in the scene
(even though all the instances may be in different places and rotated
and scaled differently).
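Very roughly, the in-memory picture looks something like this C++ sketch
(the types and names are invented for illustration; this is not POV-Ray's
actual source):

#include <vector>

struct Vec3      { double x, y, z; };
struct Triangle  { int a, b, c; };        // indices into the vertex list
struct Transform { double m[4][4]; };     // one matrix per instance

// The heavy data (vertices, triangles) exists exactly once in memory...
struct MeshData {
    std::vector<Vec3>     vertices;
    std::vector<Triangle> triangles;
};

// ...and each "object { MyMesh translate ... }" in the scene is only a
// lightweight reference to that data plus its own transformation.
struct MeshInstance {
    const MeshData* data;   // all five objects point at the same MeshData
    Transform       trans;  // small and cheap, one per instance
};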
(If you can't figure out how this is possible, it has to do with how
POV-Ray manages transformations internally. I can explain that if you
want.)
--
#macro M(A,N,D,L)plane{-z,-9pigment{mandel L*9translate N color_map{[0rgb x]
[1rgb 9]}scale<D,D*3D>*1e3}rotate y*A*8}#end M(-3<1.206434.28623>70,7)M(
-1<.7438.1795>1,20)M(1<.77595.13699>30,20)M(3<.75923.07145>80,99)// - Warp -
From: Doppelganger
Subject: Re: New User, question about mesh duplication.
Date: 31 Aug 2004 14:00:30
Message: <4134bcbe@news.povray.org>
Even if Statler doesn't want that explanation, I'd rather like it, as I'm
getting more and more interested in the internals of raytracing.
Thanks in advance (I'll thank you again after I read it, mind you, but that
will be a long while, as I'm a little short on internet access right now
and I don't want to seem rude). ;)
"Warp" <war### [at] tagpovrayorg> wrote in message
news:413465ea@news.povray.org...
> Basically if you have this:
>
> #declare MyMesh = mesh2 { ... };
>
> object { MyMesh translate <1, 2, 3> }
> object { MyMesh translate <-1, -2, -3> }
> object { MyMesh translate <-4, 5, 6> }
> object { MyMesh translate <4, -5, -6> }
> object { MyMesh translate <1, -2, 3> }
>
> you will have 5 instances of the mesh in your scene, but POV-Ray will
> only have one instance of it in memory. That is, this scene only takes
> the memory needed for one instance of the mesh (and not five).
>
> When POV-Ray is raytracing the scene, it will read the mesh data from
> this single mesh in memory for each instance you have in the scene
> (even though all the instances may be in different places and rotated
> and scaled differently).
> (If you can't figure out how this is possible, it has to do with how
> POV-Ray manages transformations internally. I can explain that if you
> want.)
>
> --
> #macro M(A,N,D,L)plane{-z,-9pigment{mandel L*9translate N color_map{[0rgb x]
> [1rgb 9]}scale<D,D*3D>*1e3}rotate y*A*8}#end M(-3<1.206434.28623>70,7)M(
> -1<.7438.1795>1,20)M(1<.77595.13699>30,20)M(3<.75923.07145>80,99)// - Warp -
From: Warp
Subject: Re: New User, question about mesh duplication.
Doppelganger <ped### [at] netcabopt> wrote:
> even if Statler doesn't want that explanation, I'd rather like it, as I'm
> getting more and more interested in the internals of raytracing.
If one does not know how transformations should be applied to objects,
one could make the naive assumption that when you transform a mesh, all
the vertex points are just modified according to the transformations.
This could actually work (and most scanline renderers and 3D cards
probably do exactly this), but that would cause at least two problems:
1. If you want to make several instances of the mesh, naturally all
of them transformed in different ways, you'll have to make copies of
the mesh. (An alternative is to transform all the vertex points
each time you test the intersection of a ray with a mesh instance,
but that would probably be quite slow, especially if the mesh is very
large.)
2. It would work only with meshes. You can't use the same idea with
other raytraceable primitives.
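To make the naive, vertex-rewriting approach concrete, here is a minimal
C++ sketch (made-up types; not code from POV-Ray or any particular
renderer):

#include <vector>

struct Vec3    { double x, y, z; };
struct Matrix4 { double m[4][4]; };   // affine transform, last row 0 0 0 1

// Multiply a point (w = 1) by a 4x4 transformation matrix.
Vec3 transformPoint(const Matrix4& M, const Vec3& p)
{
    return Vec3{ M.m[0][0]*p.x + M.m[0][1]*p.y + M.m[0][2]*p.z + M.m[0][3],
                 M.m[1][0]*p.x + M.m[1][1]*p.y + M.m[1][2]*p.z + M.m[1][3],
                 M.m[2][0]*p.x + M.m[2][1]*p.y + M.m[2][2]*p.z + M.m[2][3] };
}

// The naive way: bake the transformation into the mesh by rewriting every
// vertex. It works, but every differently-placed copy then needs its own
// full set of vertices, and it can't be done at all for primitives that
// have no vertices to rewrite (spheres, quartics, isosurfaces, ...).
void bakeTransform(std::vector<Vec3>& vertices, const Matrix4& M)
{
    for (Vec3& v : vertices)
        v = transformPoint(M, v);
}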
Point 2 is quite important in a raytracer. Imagine you have
a box: how do you apply transformations to it?
Translate? No problem. Uniform scale? No problem. Non-uniform scale?
Uh... you can probably manage. Rotate? Uh*2... Maybe. But then...
"rotate x*30 scale <1,.1,1> rotate x*-30"... Uh! No way! The box would
not have 90-degree angles anymore...
You could get away with making the box a mesh... but then there are
the more complex primitives. For example, imagine an infinitely large
polynomial object (e.g. a paraboloid). You can't convert everything to
meshes, yet the raytracer handles those as well, transformations
and all.
So obviously transformations are done in quite a different way.
In fact, transformations work with *any* raytraceable object, no matter
how complex it is. There's actually a quite ingenious way to transform
*any* object in raytracing.
Another ingenious thing about transformations is that it doesn't matter
how many of them you apply to an object: it will not slow down its
rendering. You can apply a thousand transformations to an object and
it will still render as fast as if you had applied only one (assuming
its size is about the same on screen, etc.; the point is that the
*number* of transformations does not affect its rendering speed).
Have you ever wondered why there's such a limited number of different
transformations available? Only so-called linear transformations are
available. And there's a reason for that.
Instead of transforming the object itself (which can be quite a complex
operation, if not impossible), the ray is transformed with the inverse
transformation before testing it against the object.
That is, every transformation you think you are applying to the object
is in fact applied inversely, and in reverse order, to the rays that are
tested against that object.
This is actually a quite simple but ingenious idea: the object itself
does not need any support for transformations; it only needs to be
raytraceable. This method is thus a generic way of
"transforming" *any* object, no matter what that object is.
This is how mesh duplication works: Transform the ray with the inverse
transformations of one mesh "instance" and test against the (single) mesh
data in memory. Then transform the ray with the inverse transformations
of another mesh "instance" and test against the same mesh data in memory.
And so on.
This exact same principle could be applied to any object, not just
meshes. (The reasons why POV-Ray does not do exactly this for every
object are varied, and perhaps partly historical.)
This also explains why only linear transformations can be applied to
objects: it's the ray that is transformed, not the object, and the
ray must remain straight after the transformation.
So why doesn't the number of transformations applied to an object
affect its rendering speed?
Because the transformations are all accumulated into a single
transformation matrix associated with the object, rather than being kept
separately.
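A small sketch of that accumulation (again with invented names, not
POV-Ray's actual code):

struct Matrix4 { double m[4][4]; };

// Plain 4x4 matrix multiplication.
Matrix4 multiply(const Matrix4& A, const Matrix4& B)
{
    Matrix4 R{};
    for (int i = 0; i < 4; ++i)
        for (int j = 0; j < 4; ++j)
            for (int k = 0; k < 4; ++k)
                R.m[i][j] += A.m[i][k] * B.m[k][j];
    return R;
}

// Every translate/rotate/scale in the scene file is folded into the
// object's single matrix (and its inverse) at parse time, so a thousand
// of them still leave exactly one matrix pair to apply per ray at
// render time.
void applyTransform(Matrix4& objTrans, Matrix4& objInverse,
                    const Matrix4& T, const Matrix4& Tinv)
{
    objTrans   = multiply(T, objTrans);       // new = T * old
    objInverse = multiply(objInverse, Tinv);  // inverse composes in reverse order
}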
Try googling for "transformation matrix" for tons of info if you
are interested.
--
#macro N(D)#if(D>99)cylinder{M()#local D=div(D,104);M().5,2pigment{rgb M()}}
N(D)#end#end#macro M()<mod(D,13)-6mod(div(D,13)8)-3,10>#end blob{
N(11117333955)N(4254934330)N(3900569407)N(7382340)N(3358)N(970)}// - Warp -
From: Florian Brucker
Subject: Re: New User, question about mesh duplication.
Date: 1 Sep 2004 19:53:33
Message: <413660fd$1@news.povray.org>
> This is how mesh duplication works: [...]
> This exact same principle could be applied to any object, not just
> meshes. (The reason why POV-Ray does not do exactly this are different,
> and perhaps partially historical.)
I always wondered why POV does not do the same with "normal"
primitives/csg objects. Are there any technical reasons aside from the
historical ones you mention?
Florian
--
If all goes well, you should see an ugly, loathsome, repulsive,
deformed window manager called twm, probably the smallest window
manager available. (Gentoo Linux Handbook)
[------------ http://www.torfbold.com - POV-Ray gallery ------------]
From: Slime
Subject: Re: New User, question about mesh duplication.
> I always wondered why POV does not do the same with "normal"
> primitives/csg objects. Are there any technical reasons aside from the
> historical ones you mention?
One good reason is that when primitives are copied instead of referenced,
they can be optimized better. For instance, if you copy a box and translate
the copy, POV-Ray can simply change the actual coordinates of the copied box
and thereby avoid having to use a transformation matrix at all. Or, if the
original box was already transformed, new transformations can simply be
folded into the existing transformation matrix rather than the ray having
to be transformed more than once. There may be other similar optimizations.
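For the box example, the idea is roughly this (invented names, not
POV-Ray's actual code):

struct Vec3 { double x, y, z; };

struct Box {
    Vec3 corner1, corner2;   // the box's own axis-aligned corners

    // Because each copy owns its own data, a plain translate can be folded
    // straight into the corner coordinates: no transformation matrix, and
    // no extra ray transformation at render time.
    void translate(const Vec3& t)
    {
        corner1.x += t.x;  corner1.y += t.y;  corner1.z += t.z;
        corner2.x += t.x;  corner2.y += t.y;  corner2.z += t.z;
    }
};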
- Slime
[ http://www.slimeland.com/ ]
From: Tom Melly
Subject: Re: New User, question about mesh duplication.
"Slime" <fak### [at] emailaddress> wrote in message news:41366329$1@news.povray.org...
> more than once. There may be other similar optimizations.
<snip>
Adding to this, I would hazard a guess that the memory requirements for
POV primitives are pretty insignificant (i.e. making a new copy doesn't
generally use much more memory than storing a pointer would).
There would probably be some mileage to be gained if complex csg objects were
handled like meshes, and a pointer to the whole csg object could be supplied,
but meshes are the obvious candidate for this treatment.
From: Warp
Subject: Re: New User, question about mesh duplication.
Tom Melly <tom### [at] tomandlucouk> wrote:
> There would probably be some mileage to be gained if complex csg objects were
> handled like meshes, and a pointer to the whole csg object could be supplied,
> but meshes are the obvious candidate for this treatment.
Perhaps there should be an option like "don't make a deep copy if making
only a reference would save at least a certain amount of memory".
--
#macro M(A,N,D,L)plane{-z,-9pigment{mandel L*9translate N color_map{[0rgb x]
[1rgb 9]}scale<D,D*3D>*1e3}rotate y*A*8}#end M(-3<1.206434.28623>70,7)M(
-1<.7438.1795>1,20)M(1<.77595.13699>30,20)M(3<.75923.07145>80,99)// - Warp -
From: Doppelganger
Subject: Re: New User, question about mesh duplication.
Date: 3 Sep 2004 08:03:49
Message: <41385da5@news.povray.org>
I'd just add a define_once keyword. So if you did something like #declare
myobj = box { <0,0,0>, <1,1,1> define_once }, POV-Ray would know that all
uses of myobj should point to the same representation of the object.
Thanks for the transform explanation, Warp. I've spent 3 years studying
advanced maths, so the actual transformation matrix side isn't a problem,
just the trick with the inverse. Quite smart! :)