In article <Xns### [at] povray org>,
ingo <ing### [at] tag povray org> wrote:
> Just for my understanding; every time I see a statement similar to this I
> wonder: does it mean building a CSG from POV-Ray primitives and then
> tessellating it, or tessellating the "POV-Ray primitives" to a certain level
> and then doing the CSG (as I saw it in a very old version of 3D Studio)?
It generally refers to doing CSG with meshes. Making sure you don't end
up with gaps or odd seams where the formerly separate meshes touch is
not an easy problem to solve. Unions are easy, but merge, difference,
and intersection are not.
There are algorithms for tessellating arbitrary objects, but they are
never optimal. You end up with huge numbers of triangles in places where
you don't need them, and you can still lose small details, such as the
point of a cone or the edges of a box. You can tessellate a box with 12
triangles, but with one of these algorithms you are likely to end up
with hundreds of thousands just to get rid of the faceting artifacts.
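To put rough numbers on that claim, here is a back-of-envelope sketch (my own illustrative formula, not any renderer's actual algorithm) of how fast the triangle count grows as you tighten the faceting tolerance on a uniformly tessellated sphere:

```python
import math

def sphere_triangles_for_tolerance(radius, tol):
    """Triangles a uniform lat/long tessellation needs so the chordal
    deviation from the true sphere stays below tol (rough estimate)."""
    # An edge spanning angle theta sags below the surface by roughly
    # radius * (1 - cos(theta / 2)); solve that for theta.
    theta = 2 * math.acos(1 - tol / radius)
    lon = max(3, math.ceil(2 * math.pi / theta))   # slices around the equator
    lat = max(2, math.ceil(math.pi / theta))       # stacks pole to pole
    return 2 * lat * lon                           # two triangles per quad

# Tightening the tolerance by 100x multiplies the count by roughly 100x:
# tol = 1e-2 gives hundreds of triangles, 1e-4 tens of thousands,
# 1e-6 millions -- which is where the faceting finally disappears.
```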
CSG of multiple meshes is really the best way to go; it's just really
hard.
One possible compromise would be to just avoid the problem of joining
the seams by using solid geometry CSG on the meshes, like what POV does
now with other primitives and meshes with an inside_vector specified.
This is not really the best possible solution, but could give some
improvements.
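The inside_vector mechanism boils down to a point-in-mesh parity test: shoot a ray in a fixed direction and count how many triangles it crosses; an odd count means the point is inside the (closed) mesh. A minimal sketch of that test, with hypothetical helper names:

```python
def _cross(a, b):
    return (a[1]*b[2] - a[2]*b[1], a[2]*b[0] - a[0]*b[2], a[0]*b[1] - a[1]*b[0])

def _dot(a, b):
    return a[0]*b[0] + a[1]*b[1] + a[2]*b[2]

def _sub(a, b):
    return (a[0]-b[0], a[1]-b[1], a[2]-b[2])

def ray_triangle_t(orig, direction, v0, v1, v2, eps=1e-9):
    """Moller-Trumbore ray/triangle test; returns hit distance t > 0 or None."""
    e1, e2 = _sub(v1, v0), _sub(v2, v0)
    p = _cross(direction, e2)
    det = _dot(e1, p)
    if abs(det) < eps:              # ray parallel to the triangle's plane
        return None
    inv = 1.0 / det
    s = _sub(orig, v0)
    u = _dot(s, p) * inv
    if u < 0.0 or u > 1.0:
        return None
    q = _cross(s, e1)
    v = _dot(direction, q) * inv
    if v < 0.0 or u + v > 1.0:
        return None
    t = _dot(e2, q) * inv
    return t if t > eps else None

def point_inside_mesh(point, triangles, direction=(0.613, 0.507, 0.791)):
    """Parity test: an odd number of forward crossings means inside.
    Only meaningful for closed meshes (which is what inside_vector
    requires too). The skewed default direction avoids edge grazing."""
    hits = sum(1 for tri in triangles
               if ray_triangle_t(point, direction, *tri) is not None)
    return hits % 2 == 1
```

With an inside test like this, a mesh can participate in solid CSG exactly like an analytic primitive, without ever stitching seams.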
--
Christopher James Huff <cja### [at] earthlink net>
http://home.earthlink.net/~cjameshuff/
POV-Ray TAG: chr### [at] tag povray org
http://tag.povray.org/
Hi ingo, you recently wrote in povray.general:
> wonder: does it mean building a CSG from POV-Ray primitives and then
> tessellating it,
Well, you can't really do that.... I mean, what would you build?
> or tessellating the "POV-Ray primitives" to a certain level
> and then doing the CSG (as I saw it in a very old version of 3D Studio)?
Yes, this is how you do it. You have to do CSG on triangle meshes, and
that just gets very ugly. It is doable; Moray does this. But the
result is not pretty. If (in Moray) you've ever evaluated a CSG,
converted it to a mesh, and then gone to edit that mesh, you'll know
what I mean.
- Lutz
email : lut### [at] stmuc com
Web : http://www.stmuc.com/moray
Hi Ray Gardener, you recently wrote in povray.general:
> Well, CSG is undoubtedly a challenge.
> As long as "extremely hard" doesn't
> wind up becoming "intractable" I'm
> willing to forge ahead. In the reverse
> direction, I would love to see Moray
> display POV CSG objects.
Well, it can display them (after you evaluate), but it doesn't always
work, and the result, while displaying nicely on the screen, is
actually quite ugly once you look at it closely :-) But if all you want
is screen display, then yes, it's not intractable.
> In REYES, which tesselates down to
> micropolygons, one can consider the
> 3D location of the polygon to be
> inside or outside another primitive, and
> thus cull it at render time.
That would be a nice way, but you're running into boundary conditions
there as well that you need to be aware of, because you're often going
to end up testing polygons against the boundary of the object... but if
you kept track of which object each polygon belonged to, you could
avoid testing against it... sounds promising.
I guess the problem can be solved differently at rendering time.
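The bookkeeping described above can stay quite small. A sketch of the cull pass for a CSG difference, assuming each micropolygon remembers which object diced it (all names hypothetical):

```python
def cull_difference(micropolys, subtrahends):
    """REYES-style CSG difference at dicing time: drop any micropolygon
    whose center lies inside one of the objects being subtracted, but
    never test a micropolygon against the object it was diced from."""
    kept = []
    for owner, center in micropolys:            # (object id, center point)
        inside_other = any(inside(center)
                           for obj_id, inside in subtrahends
                           if obj_id != owner)  # skipping the owner avoids
                                                # the boundary-condition
                                                # problem at its own surface
        if not inside_other:
            kept.append((owner, center))
    return kept
```

Intersection is the mirror image: keep only centers that *are* inside the other object.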
Regards,
- Lutz
email : lut### [at] stmuc com
Web : http://www.stmuc.com/moray
In article <web.3edc9d6bc13294582ff34a90@news.povray.org>,
tom### [at] compsoc man ac uk says...
> Patrick Elliott wrote:
> >With the number of component we could fit into a new card
> >the result would be no bigger than an existing NVidia card, however the
> >development time needed to produce the chip, make sure it was stable and
> >then pray that game companies that are used to using the cheats supplied
> >through OpenGL and DirectX will actually use it makes the odds of anyone
> >seeing such a card on the market any time soon very unlikely.
>
> Cheats? Was there ever a 30fps algorithm free of them? :-)
>
True enough, but in this case I mean 'cheat' as in faking real geometry
like boxes and spheres using triangles. I consider it a cheat because it
takes advantage of the existing hardware capability to produce something
that only looks real by using so many triangles that you can't place
anything else in the scene with it. That is the nature of most cheats:
something that looks good enough for your purpose, but doesn't really
reproduce the result accurately.
> >The irony being that in many cases you have to upload large chunks of
> >additional geometry and pre-made textures every few seconds in a game,
> >while a true raytrace engine would only have to upload terrain geometry
> >and those textures you couldn't produce using procedural systems (which for
> >most games would be maybe 10% of them...)
>
> Why won't a raytracing card have to load new geometry regularly? It's not
> necessarily going to know ahead of time which objects will be needed during
> a game, and they are not all going to fit in memory at the same time for a
> modern game.
>
True, but some things don't need to be fed back in continually, and
those based on primitives would take less room to store, meaning you can
keep more of them in memory than normal. This could cut the amount of
data you have to cram into the card each frame in half, possibly more.
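Some illustrative arithmetic (my numbers, not measurements) on why primitive storage is so much smaller than triangle storage:

```python
BYTES_PER_FLOAT = 4

# A parametric sphere is just a center and a radius.
sphere_primitive_bytes = 4 * BYTES_PER_FLOAT  # (x, y, z, r) = 16 bytes

def sphere_mesh_bytes(lat, lon):
    """Bytes for a lat/long tessellation storing 3 positions per triangle:
    no vertex sharing, but also no normals or UVs, so this is already a
    deliberately optimistic mesh."""
    triangles = 2 * lat * lon        # two triangles per lat/long quad
    return triangles * 3 * 3 * BYTES_PER_FLOAT

# Even a modest 32x64 tessellation (4096 triangles, 147456 bytes) is
# roughly 9000x larger than the 16-byte primitive it approximates.
```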
> Procedural textures will have to be calculated by the graphics card, which
> surely has enough to do already (if you get the CPU to do it and send it to
> the graphics card, you might as well send an image). I thought anyway that
> procedural textures are not specific to raytracing?
>
I think some new cards may use them, but the same issue existed for them
as for a true card-based engine: the methods used to produce them were
simply too complex to 'fit' into the existing architecture.
> Whilst I'm sure that, given sufficient time, you could reproduce 90% (or
> 100%) of game image textures with procedural textures, the question surely
> is whether it is worth the extra time taken to generate the textures in the
> first place, every frame. I can especially see problems producing things
> like text at speed.
>
Hard to say, since the option isn't exactly common, and where it does
exist it isn't used, from what I have seen.
--
void main () {
call functional_code()
else
call crash_windows();
}
Lutz Kretzschmar wrote:
> If (in Moray) you've ever evaluated a CSG and
> then converted it to mesh and went to edit that mesh, you'll know what
> I mean.
Perhaps a triangle beautification process would be an idea :)
--
Rick
Kitty5 NewMedia http://Kitty5.co.uk
POV-Ray News & Resources http://Povray.co.uk
TEL : +44 (01270) 501101 - FAX : +44 (01270) 251105 - ICQ : 15776037
PGP Public Key
http://pgpkeys.mit.edu:11371/pks/lookup?op=get&search=0x231E1CEA
In article <3edb8f1d@news.povray.org>,
"Ray Gardener" <ray### [at] daylongraphics com> wrote:
> Sorry for the confusion. For my purposes,
> I don't require all primitives, at least in
> the short to mid-term. Further along, it
> would be desirable to have as many POV
> primitives supported as possible.
What *are* your purposes? You still haven't explained this.
> I need the "lots of shapes" capability to implement shaders,
What do shaders have to do with the number of shapes?
> For shapes like hair strands and grass blades,
> however, supporting a spline (rope?) primitive
> should be done.
And what does this have to do with scanlining?
> I also don't see how an isosurface would
> be able to plant lots of trees or independent
> rocks on the terrain and maintain the
> same memory/rendering performance. If I
> was limited to raytracing, I would definitely
> brush up on isosurfaces, but otherwise...
Simple: Make a few dozen variations of tree meshes and plant a few tens
of thousands (or more if you like) on the landscape. Blobs and
isosurfaces can make good rocks, or you could use meshes for those too.
And when viewed from a sufficient distance, a noisy isosurface can make
a good bush.
> Like I mentioned, I think I have to try
> a prototype scanline implementation in POV to
> see how it goes. The jury is kind of out
> on conclusions just yet, and I don't
I haven't seen anyone but you suggest this is a good idea. Personally, I
agree with Thorsten: adding scanline rendering to POV-Ray is silly and
not that useful.
> want to endorse anything until I have
> accurate benchmarks. By Mr. Froehlich's
> own admission, raytracing speed doesn't
> compete with scanlining until you have
> billions of objects, so there's definitely
> room for exploration under that limit.
That's not what he said. Scanlining can compete with rendering triangles
when memory is limited for the data, but you're ignoring other
primitives. Your only reason given for thinking scanline would be better
is an old paper for a computer system vastly different from modern
computers, and your messages show extremely poor understanding of
optimization.
Basically, POV doesn't need it, it can't really benefit from it, it
would be extremely limited compared to the raytracer, and it would be a
huge amount of work to implement. You've given memory use as one reason,
but raytracing can be optimized similarly, and the fact that POV doesn't
currently use those optimizations doesn't mean it never will. You've
given speed as another, but you haven't compared a scanline rendering
with an analogous raytracing. The speed advantages of scanlining are
much smaller once you include all the work to make it do what raytracing
does automatically. You've used video games as an example... POV isn't a
video game; it is designed under entirely different constraints. If you
want a preview, a much simpler and more effective solution would be an
OpenGL preview.
--
Christopher James Huff <cja### [at] earthlink net>
http://home.earthlink.net/~cjameshuff/
POV-Ray TAG: chr### [at] tag povray org
http://tag.povray.org/
In article <3edb7069$1@news.povray.org>,
"Ray Gardener" <ray### [at] daylongraphics com> wrote:
> I have considered that. I was about to write
> or modify a RIB renderer, for example.
> But I wasn't aware of the camera statement
> flexibility per se; thanks. If the POV-Team
> didn't mind me copying their parser code,
> the argument for doing a new app would
> certainly be better advanced. POV also
> has all the other infrastructure, such
> as texture file loading, in one nice spot.
A simple look at the license would tell you that you can only use the
source for a custom version of POV-Ray, and it must only add to the
functionality of the program (you can't remove the raytracer). Aside
from this, the parser would not be that useful to you: it builds the
entire scene in memory for the raytracer to render, changing it to do
something else for your scanliner would require rewriting huge amounts
of it (like pretty much all the code handling objects).
> I believe POV's future lies in being a more powerful platform for
> creating 3D graphics in general, not just as a raytracer.
The future of the Persistence Of Vision Raytracer is to be a scanline
renderer?
> For all I know, this experiment
> may one day lead to POV-Ray becoming the dominant
> film production tool for CG effects, and enable
> a whole new population of movie producers.
Why? That's not the goal of POV, and there are already tools for those
jobs. POV is primarily a hobbyist tool, oriented more toward complex
stills with extremely realistic effects.
--
Christopher James Huff <cja### [at] earthlink net>
http://home.earthlink.net/~cjameshuff/
POV-Ray TAG: chr### [at] tag povray org
http://tag.povray.org/
> What *are* your purposes? You still haven't explained this.
I have several. Myself, I just want to draw landscapes
how I want. My company would like to provide end users
with a standalone renderer (preferably one they already
know and use) that can do landscapes more efficiently
and with procedural shaders.
I like displacement mapping, and it appears to be a
lot easier and faster with scanline rendering. Or at
least, I know how to do it that way here and now,
which is more expedient than trying to figure out
how to get a raytracer to do it. Besides, POV doesn't
handle procedural shaders anyway... that's another
thing I'd add. That'll make it a totally custom patch,
but that's alright. And down the road, I'd like
to get into film production, but I don't feel
like shelling out for PrMan when I'm perfectly
willing to study and write code, and perhaps there are
others who feel the same. And making something like
that available to everyone has a nice feel to it.
Maybe it won't be based on POV in the end, but
right now, to have something I can experiment with,
using POV saves me a ton of time. If it also happens
to let people test what POV would be like with
such features, that's a useful bonus, even if in
the end the majority thinks it isn't worthwhile.
> What do shaders have to do with the number of shapes?
It's the way I write some of my shaders: I generate lots of geometry.
E.g., a fractal cubeflake has lots of cubes in it. It's a crude
approach, I guess, but it works, and it's easy to do. It's the same
reason some people use shadeops in RenderMan instead of SL.
> > For shapes like hair strands and grass blades,
> > however, supporting a spline (rope?) primitive
> > should be done.
>
> And what does this have to do with scanlining?
They'd draw faster scanlined and take less memory.
> Simple: Make a few dozen variations of tree meshes and plant a few tens
> of thousands (or more if you like) on the landscape. Blobs and
> isosurfaces can make good rocks, or you could use meshes for those too.
> And when viewed from a sufficient distance, a noisy isosurface can make
> a good bush.
Cool. But I'd rather not take up the memory,
even if it's just pointers, and, well --
I just don't like using isosurfaces. I think they're
very neat, but they're not my cup of tea.
Ray
> The future of the Persistence Of Vision Raytracer is to be a scanline
> renderer?
No, I wouldn't dream of replacing the raytracer
functionality. Having both raytracing and scanline
code is more desirable.
> Why? That's not the goal of POV, and there are already tools for those
> jobs. POV is primarily a hobbyist tool, and is oriented more for complex
> stills with extremely realistic effects.
What are POV's goals, actually? Does the
POV-Team have a specific mission statement
in mind, or do they incrementally review
and adjust the code on an as-things-crop-up basis?
Is POV a renderer (method unimportant) or
a raytracer? Is the goal to produce graphics
or specifically to raytrace?
It does have an animation feature. The thing
to ask is, does that feature exist solely
to do raytraced animations, was it an afterthought
to placate people who wanted to make short clips,
is it there just to augment film producers'
other footage generators, or is it a core
feature to let POV evolve towards doing
full production film work?
When one considers the scripting and anim features
combined, it's pretty powerful. For something
that isn't meant to do movies, it's also
pretty compelling in that sense. It seems almost
a shame not to leverage that.
Ray
In article <3edcd7d5$1@news.povray.org>,
"Ray Gardener" <ray### [at] daylongraphics com> wrote:
> One difficulty I have with doing that is that
> sometimes it's easier to drive a shader using
> some stepped coordinate system of the primitive rather
> than the world locations a raytracer returns.
What stops you from doing the same thing with raytracing? What makes you
think you can only compute world coordinates?
> For example, when scanlining a heightfield,
> I do it cell by cell, so rocks can be distributed
> based on how likely a cell is to be occupied.
> With raytracing, I have to derive the cell
> coords, and then maintain some kind of map
> to keep track of which cell was painted with what.
No, you don't... why would you do this? You're just scattering rocks
around. Just use trace() to place them on the surface at random
locations. If you want an uneven distribution, use some function to
control the probability of a rock landing.
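The uneven-distribution idea is plain rejection sampling wrapped around trace(). A renderer-agnostic sketch, with sample_height standing in for a trace() call and all names hypothetical:

```python
import random

def scatter(count, sample_height, density, xmin, xmax, zmin, zmax, seed=42):
    """Drop `count` objects at random (x, z) spots, keeping each candidate
    with probability density(x, z) in [0, 1] -- the same idea as looping
    over POV's trace() and rand() in SDL, with no cell map to maintain."""
    rng = random.Random(seed)
    placed = []
    while len(placed) < count:
        x = rng.uniform(xmin, xmax)
        z = rng.uniform(zmin, zmax)
        if rng.random() < density(x, z):            # rejection step
            placed.append((x, sample_height(x, z), z))
    return placed
```

Any function works as the density: slope, altitude, distance from a path, or noise, so clustering comes for free.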
> Displacement shading is also easier when scanlining.
> At least, I haven't figured an easy way to do it
> when raytracing.
Sounds like you're talking about something like this:
http://www.cs.utah.edu/~bes/papers/height/paper.html
There is a landscape example with the equivalent of 1,000,000,000,000
triangles. And instead of generating and discarding millions (or
billions) of unused microtriangles, it generates them as needed.
--
Christopher James Huff <cja### [at] earthlink net>
http://home.earthlink.net/~cjameshuff/
POV-Ray TAG: chr### [at] tag povray org
http://tag.povray.org/