On Tue, 3 Jun 2003 17:28:29 -0700, "Ray Gardener" <ray### [at] daylongraphics com>
wrote:
> My company would like to provide end users
> with a standalone renderer (preferably one they already
> know and use) that can do landscapes more efficiently
> and with procedural shaders.
To be clear: you mean your company would sell your modified POV?
ABX
In article <3edd3d1f$1@news.povray.org>,
"Ray Gardener" <ray### [at] daylongraphics com> wrote:
> > What *are* your purposes? You still haven't explained this.
>
> I have several. Myself, I just want to draw landscapes
> how I want. My company would like to provide end users
> with a standalone renderer (preferably one they already
> know and use) that can do landscapes more efficiently
> and with procedural shaders.
POV textures *are* procedural. Image maps are relatively rarely used.
> Besides, POV doesn't
> handle procedural shaders anyway... that's another
> thing I'd add. That'll make it a totally custom patch,
> but that's alright.
Uh, you may want to look at the POVMAN patch, which lets you use
Renderman shaders on top of the existing procedural texture system.
> Maybe it won't be based on POV in the end, but
> right now, to have something I can experiment with,
> using POV saves me a ton of time. If it also happens
> to let people test what POV would be like with
> such features, that's a useful bonus, even if in
> the end the majority thinks it isn't worthwhile.
I don't think you're going to save any time by building a scanline
renderer on POV. Instead, you'll be spending lots of time figuring out
how to hack your renderer into a system that was never designed with it
in mind. I don't think you have any idea what you're getting into.
You're free to try, of course, but I honestly don't expect to ever see
anything substantial out of this. You'd be far better off staying with a
separate program than trying to wedge a scanline renderer with those
features into POV.
> > What do shaders have to do with the number of shapes?
>
> It's the way I write some of my shaders. I generate lots of geometry.
> e.g., a fractal cubeflake has lots of cubes in it. It's a crude
> approach, I guess, but it works, and it's way easy to do.
> The same reason some people use shadeops in Renderman
> instead of SL.
I'm not familiar with Renderman, so...what are you talking about? Some
kind of procedural geometry? That can be raytraced, within certain
limits. Either generate and store it beforehand or use a specialized
algorithm to do it at render time (as was used in that paper I mentioned
in another reply).
> Cool. But I'd rather not take up the memory,
> even if it's just pointers, and, well --
> I just don't like using isosurfaces. I think they're
> very neat, but they're not my cup of tea.
You like procedural geometry but don't like isosurfaces?
--
Christopher James Huff <cja### [at] earthlink net>
http://home.earthlink.net/~cjameshuff/
POV-Ray TAG: chr### [at] tag povray org
http://tag.povray.org/
On Tue, 3 Jun 2003 21:55:41 -0700, "Ray Gardener" <ray### [at] daylongraphics com>
wrote:
> > Why? That's not the goal of POV, and there are already tools for those
> > jobs. POV is primarily a hobbyist tool, and is oriented more for complex
> > stills with extremely realistic effects.
>
> What are POV's goals, actually? Does the
> POV-Team have a specific mission statement
> in mind, or do they incrementally review
> and adjust the code on an as-things-crop-up basis?
> Is POV a renderer (method unimportant) or
> a raytracer? Is the goal to produce graphics
> or specifically to raytrace?
The registered trademark is 'POV-Ray', not 'POV'. I suppose the '-Ray' exists
there for some reason.
ABX
In article <cja### [at] netplex aussie org>,
Christopher James Huff <cja### [at] earthlink net> wrote:
> Sounds like you're talking about something like this:
> http://www.cs.utah.edu/~bes/papers/height/paper.html
>
> There is a landscape example with the equivalent of 1,000,000,000,000
> triangles. And instead of generating and discarding millions (or
> billions) of unused microtriangles, it generates them as needed.
Ah, look, one billion triangles rendered in 43 hours* for 1080 Kpixels.
Scaling your previously reported result of 16.8 million triangles drawn in
12 minutes at 275 Kpixels, your scanline method would take over 47 hours.
And their scene render time includes global illumination and shadows.
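For anyone who wants to check the arithmetic, here is a rough back-of-the-envelope version of that scaling (my sketch only: the 16.8-million-triangle/12-minute/275-Kpixel figures are the previously reported result, and I'm assuming cost scales linearly in both triangle count and pixel count):

```python
# Back-of-the-envelope scaling of the reported scanline result
# to the paper's scene size and output resolution.
base_triangles = 16.8e6   # triangles in the earlier reported test
base_minutes   = 12.0     # reported render time
base_kpixels   = 275.0    # reported output resolution, in Kpixels

target_triangles = 1e9    # triangles in the paper's landscape scene
target_kpixels   = 1080.0 # the paper's output resolution, in Kpixels

# Assume render cost is linear in triangle count and in pixel count.
minutes = base_minutes * (target_triangles / base_triangles) \
                       * (target_kpixels / base_kpixels)
print(f"estimated scanline time: {minutes / 60:.1f} hours")
```

which lands in the same ballpark as the 43 CPU-hour figure above.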
Thorsten
* Note that they give time in CPU hours, not actual runtime - they used a
shared-memory supercomputer, but each node is no faster than a desktop
processor. So runtime is probably a few minutes. BTW, all existing
scanline algorithms have problems scaling well on supercomputers or
clusters when working on the same image. This is not the case for ray
tracing...
____________________________________________________
Thorsten Froehlich, Duisburg, Germany
e-mail: tho### [at] trf de
Visit POV-Ray on the web: http://mac.povray.org
In article <3edda353$1@news.povray.org> , "Thorsten Froehlich"
<tho### [at] trf de> wrote:
> your
This response was to Ray Gardener, not to you, Chris...
Thorsten
____________________________________________________
Thorsten Froehlich, Duisburg, Germany
e-mail: tho### [at] trf de
Visit POV-Ray on the web: http://mac.povray.org
3edd7bbf$1@news.povray.org...
> What are POV's goals, actually?
The real question is, is there presently another free graphic application
that does what POV-Ray accomplishes already, with the same level of
popularity and user support? And how many apps have we seen that started up
with a bang only to die in silence a few years later? POV-Ray's development
model is certainly bizarre in some ways, but, with all its shortcomings, it
has been working for more than 10 years now.
If you really want a "goal" in POV-Ray, it is to be what its users want it
to be. It's self-evolving. I'm not really sure that stating an ambitious
goal, while laudable and a normal procedure for commercial software where
one has to convince financial backers, is the way to go: it's a little like
putting the cart before the horse in that case.
In these forums, we've seen a lot of very well-meaning, talented people offer
help of this sort, but in the end it all comes down to this: do users really
care? Do I want to see POV-Ray used in commercial movies? No interest. A
fast preview? Some interest, but the one I have is fast enough already so it
has a rather low priority. But do I want better, faster, artefact-free
global illumination and volumetrics? Programmable shaders? Full HDRI
support? A usable atmospheric (clouds and sky) model? You bet!
G.
**********************
http://www.oyonale.com
**********************
- Graphic experiments
- POV-Ray and Poser computer images
- Posters
Patrick Elliott wrote:
>True enough, but in this case I mean 'cheat' as in faking real geometry
>like boxes and spheres using triangles. I consider it a cheat because it
>takes advantage of the existing hardware capability to produce something
>that only looks real by using so many triangles that you can't place
>anything else in the scene with it. That is the nature of most cheats,
>something that looks good enough for your purpose, but doesn't really
>reproduce the result accurately.
You can exactly reproduce a mathematical box with 12 triangles. The sphere
is a better example, but where does this approach end? You end up with a
plethora of "fundamental" objects (including triangles!) to avoid
impersonating one geometry with another. This is great for many purposes
(see: POV), but I assert that it's not too good for games - being able to
manufacture anything out of a single shape (especially one with the
projection properties of a triangle) is valuable.
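For what it's worth, the 12-triangle box is easy to write down explicitly (a minimal sketch; the vertex indexing here is my own arbitrary choice):

```python
# A unit axis-aligned box as 12 triangles: 8 corner vertices,
# two triangles per face, indexed into the vertex list.
# Vertex index = 4*x + 2*y + z for corner (x, y, z).
vertices = [(x, y, z) for x in (0, 1) for y in (0, 1) for z in (0, 1)]

# Each quad face of the cube, split along one diagonal.
faces = [
    (0, 1, 3), (0, 3, 2),  # x = 0 face
    (4, 6, 7), (4, 7, 5),  # x = 1 face
    (0, 4, 5), (0, 5, 1),  # y = 0 face
    (2, 3, 7), (2, 7, 6),  # y = 1 face
    (0, 2, 6), (0, 6, 4),  # z = 0 face
    (1, 5, 7), (1, 7, 3),  # z = 1 face
]
assert len(faces) == 12
```

Twelve triangles, and every point of the mathematical box is covered exactly - the sphere really is the better counterexample.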
Nobody is going to bother with perfect spheres if conventional
triangle-based characters slow to a comparative crawl.
As for real geometry, I have yet to see a perfect triangle, sphere or box
in the real world :-)
>True, but some things don't need to be fed back in continually and
>those based on primitives would take less room to store, meaning you can
>leave more of them in memory than normal. This should cut in half the
>amount of stuff you have to cram into the card in each frame, possibly
>more.
I don't see why a raytracing card can retain an object but a scanline card
cannot, *if* both know in advance that an object will persist from frame to
frame (which they most likely won't).
Looking at typical objects that appear in games, a fast triangle is very
valuable. There once was a game that built characters from ellipsoids, but
they too are an approximation and the concept could not obviously benefit
from improving system speed. Objects in games are often not simple spheres.
>Think some new cards may use them, but the same issue existed for them as
>for a true card-based engine, the methods used to produce them were
>simply too complex to 'fit' in the existing architecture.
I thought some modern cards implement things like pixel shaders, and even
small shading languages, which fit in nicely. I don't think that procedural
textures (or textures generated on the fly) are unique to raytracing, or
are intrinsically harder in scanline than raytracing (or vice versa). I've
seen some raytracers that rely entirely on image mapping.
>Kind of hard to say, since the option isn't exactly common or if it does
>exist is not used from what I have seen.
I think some games companies are still getting used to programmable shaders
on cards (and also have the problem that on some platforms, a lot of cards
are not modern enough to have them). Perhaps that's another point - a new
raytracing card for games would have to be compatible with already-released
games, or nobody would buy it. It would need to support the existing OpenGL
or DirectX interfaces.
Procedural textures (or programmable textures on these cards) are
increasingly useful, but they do take time to compute and hence seem mainly
restricted to special effects. When you need to output 800x600 pixel frames
at >24fps, almost every trick becomes useful.
Ray Gardener wrote:
> What are POV's goals, actually? Does the
> POV-Team have a specific mission statement
> in mind, or do they incrementally review
> and adjust the code on a as-things-crop-up basis?
> Is POV a renderer (method unimportant) or
> a raytracer? Is the goal to produce graphics
> or specifically to raytrace?
I think you may be surprised to learn that the goal of the POV-Team is not
so much POV-Ray itself as much as it is the programming challenges involved
in making it, which are many and varied. The core developers of the program
are either professional programmers who work on it for fun in their spare
time or are computer science students working on it to help develop their
programming skills. That we as a public get to enjoy the fruits of their
labor is secondary to that goal. With that in mind, the development model
of POV-Ray is not clearly defined and is pretty much at the whim of the
developers and whatever tickles their fancy at the time. For a commercial
application that would be a disaster. For a hobbyist's toy and student
learning tool, it is an amazing success.
The above is based solely on my own personal observations and does not
necessarily represent the views and opinions of the POV-Team.
--
Ken Tyler
In article <web.3eddb93bc1329458541c87100@news.povray.org>,
"Tom York" <tom### [at] compsoc man ac uk> wrote:
> You can exactly reproduce a mathematical box with 12 triangles. The sphere
> is a better example, but where does this approach end? You end up with a
> plethora of "fundamental" objects (including triangles!) to avoid
> impersonating one geometry with another. This is great for many purposes
> (see: POV), but I assert that it's not too good for games - being able to
> manufacture anything out of a single shape (especially one with the
> projection properties of a triangle) is valuable.
And why is being able to manufacture things out of many shapes worse
than only having one shape to use?
(Actually, more than one shape, at least in OpenGL: points, lines,
line-loop, line-strip, triangles, triangle-strip, triangle-fan, quads,
quad-strip, polygon. And utilities to make disks, cylinders, spheres,
other quadrics, NURBSs, etc...though those are tessellated. GLUT even
includes a *teapot* primitive!)
> Nobody is going to bother with perfect spheres if conventional
> triangle-based characters slow to a comparative crawl.
Nobody will use the faster option if the slower option is slower? Or
nobody will use the raytracing card if it is slower, even if it gives
better quality?
> As for real geometry, I have yet to see a perfect triangle, sphere or box
> in the real world :-)
Well, a triangle doesn't model anything in the real world; you need at
least 4 of them to get a decent approximation of a real-world object,
and you don't see many tetrahedrons lying around. There are many things
that a box models quite closely, to a level that would be invisible on a
typical camera image with comparable resolution.
> I don't see why a raytracing card can retain an object but a scanline card
> cannot, *if* both know in advance that an object will persist from frame to
> frame (which they most likely won't).
Both can, and there have been ways to do so for quite some time. The
difference is that a few thousand primitives can be stored in the space
of a few-thousand-triangle mesh.
> Looking at typical objects that appear in games, a fast triangle is very
> valuable. There once was a game that built characters from ellipsoids, but
> they too are an approximation and the concept could not obviously benefit
> from improving system speed. Objects in games are often not simple spheres.
Marathon 1, an old Mac first-person shooter, had some left-over bits and
pieces for a sphere renderer in its code...wasn't ever used in the game,
though, and this predated 3D accelerator cards for home computers. But
why couldn't a game benefit from a sphere primitive? With procedural
shaders, you could do a lot with a sphere, like virtually any kind of
energy bolt or blast effect. With more complex CSG available, you could
build a complex room with primitives and procedural shaders and still
have space available on the card for character meshes and skins.
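To illustrate the storage point (my own sketch, not anything from an actual card): a sphere primitive is a center plus a radius - four numbers - and a ray hits it in a handful of arithmetic operations, versus the hundreds of stored triangles a smooth-looking tessellated sphere needs:

```python
import math

def ray_sphere(origin, direction, center, radius):
    """Nearest positive hit distance along a normalized ray,
    or None on a miss. The entire 'geometry' is 4 floats."""
    oc = [o - c for o, c in zip(origin, center)]
    b = sum(d * o for d, o in zip(direction, oc))
    c = sum(o * o for o in oc) - radius * radius
    disc = b * b - c
    if disc < 0:
        return None           # ray misses the sphere entirely
    t = -b - math.sqrt(disc)  # nearer of the two roots
    return t if t > 0 else None

# A ray fired down the -z axis at a unit sphere on the origin:
t = ray_sphere((0, 0, 5), (0, 0, -1), (0, 0, 0), 1.0)  # hits at t = 4
```

The same compactness argument applies to boxes, cylinders, and the rest.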
> I think some games companies are still getting used to programmable shaders
> on cards (and also have the problem that on some platforms, a lot of cards
> are not modern enough to have them). Perhaps that's another point - a new
> raytracing card for games would have to be compatible with already-released
> games, or nobody would buy it. It would need to support the existing OpenGL
> or DirectX interfaces.
That is the primary problem: R&D would cost money and resources that
could be used on a better but more conventional card, and at release
there would be nobody ready to make use of the card, and no guarantee
that there ever would be. It requires a card supporting those features,
software to use the new features of the card, and game designers to use
the new features of the software.
> Procedural textures (or programmable textures on these cards) are
> increasingly useful, but they do take time to compute and hence seem mainly
> restricted to special effects. When you need to output 800x600 pixel frames
> at >24fps, almost every trick becomes useful.
There is no limit on how long they can take, but that doesn't mean they
are too slow to be useful. Dedicated hardware should be able to evaluate
procedural textures extremely quickly, more quickly than an image map if
it has to drag the image data back from main memory. Here you have that
size issue again: image maps are big, and video card memory is limited,
so things often have to be shuffled between video card memory and main
system memory, which is surprisingly slow.
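As a toy illustration of that size issue (purely my sketch; real shading hardware obviously works differently): a procedural checker pattern carries one parameter of state and is evaluated on demand, while an image map of the same pattern needs every texel stored and shuffled into card memory:

```python
def checker(u, v, cell=0.25):
    """Procedural checker: computed per point from one parameter,
    with no stored texel data at all."""
    return (int(u // cell) + int(v // cell)) % 2

# The equivalent image map: size*size texels stored up front.
size = 512
image_map = [[checker(x / size, y / size) for x in range(size)]
             for y in range(size)]

# Same value either way - the difference is the memory footprint:
# one parameter versus a quarter-million stored texels.
u, v = 0.3, 0.6
assert checker(u, v) == image_map[int(v * size)][int(u * size)]
```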
--
Christopher James Huff <cja### [at] earthlink net>
http://home.earthlink.net/~cjameshuff/
POV-Ray TAG: chr### [at] tag povray org
http://tag.povray.org/
Christopher James Huff wrote:
>And why is being able to manufacture things out of many shapes worse
>than only having one shape to use?
>(Actually, more than one shape, at least in OpenGL: points, lines,
>line-loop, line-strip, triangles, triangle-strip, triangle-fan, quads,
>quad-strip, polygon.
Complexity (and so cost) of the card. And the primitives that the 3D API
gives you are not necessarily the same as the primitives that get passed to
the card (for instance, DirectX now deals in pixel-sized triangles instead
of points, and seems to have done away with the 2D DirectDraw stuff).
>And utilities to make disks, cylinders, spheres,
>other quadrics, NURBSs, etc...though those are tessellated. GLUT even
>includes a *teapot* primitive!)
The card isn't going to see any of these.
>Nobody will use the faster option if the slower option is slower? Or
>nobody will use the raytracing card if it is slower, even if it gives
>better quality?
Nobody will use the raytracing card for games if the quality gain is
insufficient given the speed drop. I *assume* that there will be a
speed drop because I have seen many real-time scanline-based engines that
didn't use a 3D card. I have seen one real-time raytracer, and that was one
of the highly hand-optimised demos that used to be popular in the '90s. The
resolution was very low, the framerate was very low, and the reflection at
that resolution was indistinguishable from a reflection map. I would be
very happy for someone to prove me wrong with a realtime raytracer that can
compete on equal terms with a good realtime scanline renderer (in software,
of course - no 3D accelerator).
>Well, a triangle doesn't model anything in the real world, you need at
>least 4 of them to get a decent approximation of a real-world object,
>and you don't see many tetrahedrons lying around. There are many things
>that a box models quite closely, to a level that would be invisible on a
>typical camera image with comparable resolution.
I don't see a single box convincing anyone nowadays. You must use groups of
them, just as you must use groups of triangles to resemble anything useful.
And triangles certainly have their uses - what about terrain?
I think the ability to deform/split up objects in realtime using a triangle
mesh has quite a few advantages in games. Can you explode a box? Not as
easily as you can explode a few triangles. If you model an object using
more and more complex primitives, you necessarily have problems if at some
point you want to treat the object as a collection of smaller items.
>Both can, and there are have been ways to do so for quite some time.
In a game, an object may be removed at almost any time, either due to player
action directly or to something else. Surely unpredictable, especially
given trends towards destroyable/interacting scenery.
>The
>difference is that a few thousand primitives can be stored in the space
>of a few thousand triangle mesh.
But who's going to construct a game with thousands of sphere or box
primitives but no triangles? Room games maybe, but games in the open or set
in space? Surely you're not proposing the isosurface as a practical
realtime primitive shape? :-)
>Marathon 1, an old Mac first-person shooter, had some left-over bits and
>pieces for a sphere renderer in its code...wasn't ever used in the game,
>though, and this predated 3D accelerator cards for home computers. But
>why couldn't a game benefit from a sphere primitive? With procedural
>shaders, you could do a lot with a sphere, like virtually any kind of
>energy bolt or blast effect. With more complex CSG available, you could
>build a complex room with primitives and procedural shaders and still
>have space available on the card for character meshes and skins.
Yes, the game I mentioned also predated 3D cards (part of the reason they
were impelled to try ellipsoids, perhaps). For things like energy bolts, a
player will usually have insufficient time to see the difference between a
sphere and a couple of crossed texture-mapped polygons (or a more
complicated model). For blast effects, I have seen textures mapped to a
sphere used as a kind of blast, and it generally looks terrible IMO. The
edge is too sharp, too uniform (same problem as in Quake 2-style blasts,
which were done with a sort of simple polygonal explosion model). I think
some sort of volumetric method would be far better here. With polygons, you
can have the procedural shader and a flexible type that doesn't have to
enforce spherical or elliptical symmetry, or be a closed surface.
For CSG applied to rooms, it would need to be as fast as the BSP-style
methods used for static geometry at the moment (although I guess BSPs are
not used for interactive scenery).
>That is the primary problem: R&D would cost money and resources that
>could be used on a better but more conventional card, and at release
>there would be nobody ready to make use of the card, and no guarantee
>that there ever would be. It requires a card supporting those features,
>software to use the new features of the card, and game designers to use
>the new features of the software.
Yes. I think you could get away with introducing a new card, even if nobody
used the new features, but it would have to support existing games,
performing at least as well as the conventional cards. This would be
difficult, particularly since it would inevitably be more expensive.
>There is no limit on how long they can take, but that doesn't mean they
>are too slow to be useful. Dedicated hardware should be able to evaluate
>procedural textures extremely quickly, more quickly than an image map if
>it has to drag the image data back from main memory.
Why? The procedural texture must be calculated using a (probably)
user-specified
formula for every pixel that uses it. The image map must certainly be
projected every pixel (a single unchanging operation), but the
time-consuming step of actually acquiring the bitmap from system memory
hopefully occurs only once for a scene.
>Here you have that
>size issue again: image maps are big, and video card memory is limited,
>so things often have to be shuffled between video card memory and main
>system memory, which is surprisingly slow.
I think 3D card time is more limited. Bitmaps have the advantage that
texture loading time is independent of surface texture complexity. A flat
grey texture will take no more time to apply than a weathered texture with
"hello world" scrawled on it of the same resolution. Obviously, there's
going to be a significant difference with a procedural approach. Procedural
shaders are certainly going to be useful for particular effects, but I
don't believe they're going to dominate any time soon, if ever.