ingo wrote:
> in news:404a7076$1@news.povray.org Jim Charter wrote:
>
>
>>*easier* is a fairly big leap for me.
>>
>
>
> easier in the sense that the mesh2 object is very close to arrays, and that
> the output of macros that generate vertices will generally be arrays. So if
> you keep some structure in your generation macro, it needs relatively little
> code to turn it into a mesh2. My first version of the makemesh macro
> wrote mesh{} and was quite a bit longer than the current one.
>
> Ingo
I'll take a closer look. There are definitely some interesting
possibilities there I think.
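For anyone following along, here is a minimal sketch of the idea as I understand it (the array names and contents are made up for illustration, not taken from ingo's actual makemesh macro):

//vertex and face data kept in arrays, as a generator macro might emit them
#declare V = array[4] { <0,0,0>, <1,0,0>, <1,0,1>, <0,0,1> }
#declare F = array[2] { <0,1,2>, <0,2,3> }

//turning the arrays into a mesh2 takes little more than two loops
mesh2 {
vertex_vectors {
dimension_size(V,1)
#local I = 0;
#while (I < dimension_size(V,1))
, V[I]
#local I = I + 1;
#end
}
face_indices {
dimension_size(F,1)
#local I = 0;
#while (I < dimension_size(F,1))
, F[I]
#local I = I + 1;
#end
}
pigment { rgb 1 }
}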
Gilles Tran wrote:
> But now, current machines have enough RAM to digest gigantic meshes
> without complaining, so primitives have become much less competitive.
> This is why, for instance, my Maketree objects are no longer interesting:
> we now have POV-Tree, which exports to mesh (with a better algorithm) and
> allows the creation of entire forests thanks to mesh instantiation.
>
This concept still confuses me.
//case 1: mesh instantiation:
//create tree
#declare SomeTree =
mesh {
triangle {}
triangle {}
triangle {}
...
texture { T }
}
//create forest
#local I = 0;
#while (I < 10000)
object { SomeTree translate RandomLocation }
#local I = I + 1;
#end
//case 2: primitive instantiation
//create tree
#declare SomeTree =
union {
cone {}
sphere {}
triangle {}
...
texture { T }
}
//create forest
#local I = 0;
#while (I < 10000)
object { SomeTree translate RandomLocation }
#local I = I + 1;
#end
How is it that case 1 gets a performance/memory gain and case 2 doesn't?
Especially since, as I found out recently, every intersection of a mesh
element spawns a texture calculation anyway, up to a limit of 100.
> The big remaining issue for hobbyists, however, is uv-mapping. AFAIK there
> is no free uv-mapping tool allowing real-time 3D painting and some
> automation of vertex unwrapping (such as the expensive Bodypaint).
> Until we have such a tool, mapping will remain a limitation for mesh users.
>
I take it that real-time 3D painting obviates the need for uv coordinate
unwrapping by hand?
news:404a2ce2@news.povray.org...
> Recently ran across a uv-mapping program called ManifoldLab, (go here:
> pub58.ezboard.com/bggaliens )
I'll have a look, thanks.
G.
--
**********************
http://www.oyonale.com
**********************
- Graphic experiments
- POV-Ray and Poser computer images
- Posters
news:404bbcb8$1@news.povray.org...
> How is it that case 1 gets a performance/memory gain and case 2 doesn't?
> Especially since, as I found out recently, every intersection of a mesh
> element spawns a texture calculation anyway, up to a limit of 100.
This is something that should be asked of the programmers... There's a thread
somewhere about this issue, and IIRC someone from the POV-Team agreed that
the other primitives should be instantiated as well. There's also the issue
that it's not possible to instantiate a mesh independently of its textures.
> I take it that real-time 3D painting obviates the need for uv coordinate
> unwrapping by hand?
Real-time 3D painting and automatic unwrapping are two different features.
In 3D painting, one paints the model through a 3D interactive view, just
like one would do in real life. UV Mapper Pro lets you switch between the 3D
view and the 2D painting program but it's not quite the same.
Automatic unwrapping tries to guess the best possible uv layout for a given
model in order to avoid distortions. It's still a tough job anyway. To be
fair, I still have to learn Bodypaint so I'm not completely aware yet of
what it can or cannot do, and how difficult it is...
G.
Gilles Tran wrote:
> Real-time 3D painting and automatic unwrapping are two different features.
Okay, fair enough. I really wasn't confused about that, but as I try to
explain myself further, I see that my question was more complex than it
seemed at first.
The whole uv-mapping process is about relating 2d information, a
pattern of colors, to 3d information, a surface which carves through
space, via intermediary 2d uv coordinates. In this way the
3d surface can be colored with a specific pattern. Techniques, and the
interfaces which support them, traditionally try to picture the 3d
surface flattened onto a 2d uv register in the form of a pattern or
template. But doing this by simply collapsing one of the 3 dimensions
usually leads to a tangled pattern of mappings between vertices and uv
coordinates, which then corresponds awkwardly to any coherent pattern of
colors.
So instead, the 3d surface is "unwrapped" by splitting edges
systematically along well chosen lines, so that the surface being
colored can be spread out, so to speak, onto the 2d register in the most
coherent way. Now the pattern of color can be more easily corresponded
to the pattern of uv coordinates, and hence mapped to the 3d surface.
This unwrapping process is supported by tools native to modellers or by
independent ones such as uvmapper, and can support processes that are
'manual' and 'automated' to varying degrees. Some strategies for
splitting up the 3d surface and unwrapping it onto the 2d uv register
are so complex as to be possible only in an automated way. This is what
I understand by the term "auto unwrap". Other strategies involve an
automated first guess at what the flattened pattern might be, and then
the tool supports further 'unwrapping' of the surface by allowing
'manual' adjustment of the positions of the 3d vertices on the uv
register. Either way, this whole approach might be viewed as bringing the
3d information to the 2d information.
> In 3D painting, one paints the model through a 3D interactive view, just
> like one would do in real life.
Which, as I picture it, is like unwrapping in reverse. Again you are
mapping from a 2d space to a 3d space but in this case the "2d"
information, the pattern of the colours, is corresponded to the 3d
information, the pattern of the vertices in space, through automated
support of a manual process (painting).
So my question was, since I have never had any first-hand experience with
such a tool: does it indeed obviate any need for manually
rearranging vertex mappings on a uv template, such as we get involved in
when using uvmapper? If so, it would seem to be a real productivity
boost. Basically I was just oohing and aahing.
-Jim
news:404c835d@news.povray.org...
> So my question was, since I have never had any experience with such a
> tool first hand, does it indeed obviate any need for manually
> rearranging vertex mappings on a uv template, such as we get involved in
> when using uvmapper?
From what I've seen, no. You still have to move the vertices yourself if
necessary, due to the limitations of the uv wizards. The added productivity
comes from the following: 1) the wizards are more sophisticated than the
primitives (box, sphere, cylinder) offered by UVMapper, and 2) real-time
painting removes the usual worries about putting paint in the right
place. But it's still quite theoretical to me, and I still need to plunge
headfirst into a real BP project before I can really talk about it.
G.
Jim Charter <jrc### [at] msncom> wrote:
> How is it that case 1 gets a performance/memory gain and case 2 doesn't?
When you create instances of a mesh, the mesh data is not copied. However,
when creating instances of a union, the contents are copied for each
instance.
The reason is that transformations made to the instance may change the
contents of the union.
In order to get the same advantage as with a mesh, all transformation
optimizations would need to be removed from all existing primitives.
This might have a negative impact on the rendering speed of some
scenes... (It's not an impossible idea, but it would be nice to know
how much it would impact in practice.)
--
plane{-x+y,-1pigment{bozo color_map{[0rgb x][1rgb x+y]}turbulence 1}}
sphere{0,2pigment{rgbt 1}interior{media{emission 1density{spherical
density_map{[0rgb 0][.5rgb<1,.5>][1rgb 1]}turbulence.9}}}scale
<1,1,3>hollow}text{ttf"timrom""Warp".1,0translate<-1,-.1,2>}// - Warp -
Warp wrote:
> Jim Charter <jrc### [at] msncom> wrote:
>
>>How is it that case 1 gets a performance/memory gain and case 2 doesn't?
>
>
> When you create instances of a mesh, the mesh data is not copied. However,
> when creating instances of a union, the contents are copied for each
> instance.
> The reason is that transformations made to the instance may change the
> contents of the union.
> In order to get the same advantage as with a mesh, all transformation
> optimizations would need to be removed from all existing primitives.
> This might have a negative impact on the rendering speed of some
> scenes... (It's not an impossible idea, but it would be nice to know
> how much it would impact in practice.)
>
Thanks Warp, that helps. For the record, I did check your FAQ before
asking.
Two follow up questions if I may...
I have:
#local Shape =
sphere { ... }
#local Pattern =
union {
sphere { ... }
sphere { ... }
...
}
#local Result =
object { Shape
clipped_by { Pattern }
}
#local I = 0;
#while (I < 10000) // many times
object { Result translate ... }
#local I = I + 1;
#end
1) How expensive is this...
compared with if Result were just a simple unclipped sphere primitive?
I.e. if the clipping Pattern is some complex thing, how does that
contribute to the expense of the instantiated Result?
2) Is there any general way of estimating the computational expense of
different primitives? I.e. are two triangle primitives exactly twice as
expensive as one box?
I just noticed this thread this morning and thought I'd put in my two cents
worth. I highly recommend Moray 3.5 for mesh2 modeling. Moray will produce a
UV map of the mesh object, flattened out into 2D. For my head model which I
posted to p.b.i., I made a screenshot of the 2D UV map and imported it into
Corel PhotoPaint, where I was able to paint over it on new layers, being
able to see where everything I painted would show up on the UV map. After
doing all the painting I wanted, I deleted the UV map screenshot layer and
combined the newly painted layers into a .jpg file, which I then used as
the UV map texture for the mesh. Everything painted on the flat surface goes
exactly where it should on the 3D mesh.
Steve Shelby
"Jim Charter" <jrc### [at] msncom> wrote in message
news:40479763@news.povray.org...
snip
I agree with you whole-heartedly on the value of SDL. While mesh models
certainly are valuable for many things, as a machinist I daily carve parts
out of solid blocks of metal and simply THINK in CSG. The ability of POV to
use multi-dimensional, array-defined splines is a powerful yet poorly
documented feature that has taken it out of hobby status and solidly put it
on my workbench. I hope future versions will offer more tools, like pipes, to
get the data directly into CAD/CAM instead of having to dump it via #debug
into a file.
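As a sketch of that spline feature (the control points and values here are invented for illustration, not from any real part program):

//tool-path control points kept in an array
#declare P = array[4] { <0,0,0>, <1,2,0>, <2,2,1>, <3,0,1> }

//build a spline from the array
#declare Path =
spline {
natural_spline
#local I = 0;
#while (I < dimension_size(P,1))
I, P[I]
#local I = I + 1;
#end
}

//evaluate it, e.g. to mark points along the path
union {
#local T = 0;
#while (T < 3.01)
sphere { Path(T), 0.05 }
#local T = T + 0.25;
#end
pigment { rgb <1,0,0> }
}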