POV-Ray : Newsgroups : povray.general : Scanline rendering in POV-Ray
  Scanline rendering in POV-Ray (Message 71 to 80 of 96)  
From: ABX
Subject: Re: The Meaning of POV-Ray
Date: 5 Jun 2003 04:37:34
Message: <f3vtdv8kobsem4upmcc2m23789nkuuha7l@4ax.com>
On Thu, 5 Jun 2003 00:49:56 -0700, "Ray Gardener" <ray### [at] daylongraphicscom>
wrote:
> Hmm... I understand, but what I now wonder is
> what happens when a critical mass of users have
> made large investments in using POV-Ray.

They have it, they use it.

> Speaking hypothetically, let's say that all copies of POV-Ray
> somehow vanished, along with any source code versions, and the
> developers got tired of the whole thing as well. Essentially,
> no more POV-Ray.

That could only happen in the event of a whole-universe disaster.

> Intermediate
> scenarios (e.g., the POV-Team decides on a radical change
> that breaks half of all scene files, but another group instantly
> springs up forking POV-Ray to preserve compatibility, or
> the backlash is so strong the POV-Team changes their minds)
> would also demonstrate the same effect. It's similar to
> what people notice about political leaders: yes, the people
> at the top technically have absolute power, but in practice,
> they do not.

You are blowing this problem out of proportion.
I do not need any Team to play with scenes using the existing binaries. I do not
need any Team to make the extensions I need myself. But it was the Team who
sparked my interest, started my fun, and released the application for me. I love
it. I use it.

> Let's imagine another case: an outsider develops a really
> popular patch. Nearly everyone loves it, but (again, hypothetically)
> the POV-Team doesn't. What would happen? The code would be
> forked and the majority of the user base would follow
> whatever group managed the fork.

Excuse me, how long have you been with POV? Have you noticed the differences in
features between POV-Ray 3.1, MegaPOV 0.7, POV-Ray 3.5 and MegaPOV 1.0, and the
announcements about MegaPOV 1.1?

ABX



From: Tom York
Subject: Re: Scanline rendering in POV-Ray
Date: 5 Jun 2003 05:10:02
Message: <web.3edf071ec1329458541c87100@news.povray.org>
Patrick Elliott wrote:
>But it will see the triangles that make them up, which takes lots of
>space.

I think the textures take up rather more. Games often minimise geometry when
they can get away with better texture maps.

>POV-Ray has a built in example of real time raytracing. It is small, but
>then you are dealing with an engine that is running on top of an OS and
>can't take full advantage of the hardware, since it will 'never' have
>100% total access to the processor. A card based one would likely be far more
>optimized, support possible speed improvements that don't exist in POV-Ray and
>have complete access to the full power of the chip running it. This would be
>slower why?

Neither will a software-only (I emphasize software-only, with no card support)
scanline renderer under Windows, yet those can work in realtime at higher
resolutions. Surely there is an example that is optimised for game-style
geometry, rather than being the raytracing equivalent of Lightwave?
I emphasize that I'm not denying your argument on the basis of strong evidence,
but I have not seen a realtime raytracer do what a decent realtime
scanline/S-buffer based system can do. I'd really like to, mind.

>That is a point, but nothing prevents you from making explodable objects
>from triangles. In fact, the increase in available memory from using
>primitives for those things that are not going to undergo such a change
>means you can use even more triangles and make the explosion even more
>realistic. Current AGP technology is reaching its limits as to how much
>you can shove through the door and use. Short of a major redesign of both
>the cards and the motherboards, simply adding more memory or a faster
>chip isn't going to cut it.

Until it actually happens I remain unconvinced. We've always been on the
edge of running out of bandwidth since the first 3D cards. Of course there's a
limit, but I hear convincing speculation each way on when we'll reach it.
If a competitive realtime raytracer on a chip is possible, why would it
require favours in terms of the competition running out of room for
triangles? It should be able to do better at any time.

>Again. Why would anyone design one that 'only' supported such primitives?
>That's like asking why POV-Ray supports meshes if we all think primitives
>are so great. You use what is appropriate for the circumstances. If you
>want a space ship that explodes into a hundred fragments use a mesh, if
>you want one that gets sliced in half by a beam weapon, then use a mesh
>along the cut line and primitive where it makes sense. Duh!

Speed and complexity (hence cost). Current cards (at the high end for games)
can cost a couple of hundred dollars. Any simplification that can be made
can save cost, and having a card that doesn't need to switch over from
using triangles to drawing boxes halfway through a scene is good for
efficiency. Your particular example seems to require conversion between
triangles and other primitives (the spaceship is at one point a mesh, and
then it's half a mesh and half something else), which is not exactly rapid
in many cases. Aside from that, what's POV got to do with it? Cards don't
support NURBS as primitives either, despite them being popular in
non-realtime scanline renderers. I don't think a raytracer on a card
designed for realtime game use has to solve exactly the same problems as a
top-level raytracer for non-realtime use. In the same way that the Quake
engine and 3D Studio aren't kin.

>Well, that kind of describes most of the new cards that come out. lol
>Yes, it would need compatibility with the previous systems, but that
>isn't exactly an impossibility.

It is more difficult when you have completely changed the philosophy behind
the card, but want it to remain compatible with the previous philosophy.
You don't agree?

>There is nothing to prevent using a second chip dedicated to processing
>such things and having it drop the result into a block of memory to be
>used like a 'normal' bitmap. This assumes that the speed increase gained
>by building the rendering engine into the card wouldn't offset the time
>cost of the procedural texture. In any case, there are ways around this
>issue, especially if such methods turn out to already be in use on the
>newer DirectX cards.

Then aren't you going to lose the advantage of generating textures on the
card? If I generate a bitmap by procedure or by artist and subsequent
loading, I must still store it. Newer cards do procedural shading on a
pixel as it's rendered (or so I thought), so no extra storage is required.
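The storage trade-off Tom describes — a procedural pattern is evaluated per pixel as it is shaded, while an image map must be generated and stored up front — can be sketched in plain Python (a hedged illustration, not GPU shader code; the checker function and the 64x64 resolution are invented for the example):

```python
def checker(u, v, scale=8):
    """Procedural checker pattern: computed on demand, no texels stored."""
    return (int(u * scale) + int(v * scale)) % 2

W, H = 64, 64

# Image-map route: bake the same pattern into W*H stored texels up front.
baked = [[checker(x / W, y / H) for x in range(W)] for y in range(H)]

# Procedural route: shade a pixel by calling the function directly.
def shade_pixel(x, y):
    return checker(x / W, y / H)  # identical result, zero texels stored

assert shade_pixel(10, 20) == baked[20][10]
```

The baked version pays W*H texels of memory for a cheap lookup; the procedural version pays per-pixel computation for zero storage — which is exactly the processor-vs-memory bottleneck question raised later in the thread.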



From: ABX
Subject: Re: Perhaps a "process_geometry" keyword
Date: 5 Jun 2003 05:11:25
Message: <hj0udvgot7jf3c67ne14kol96iuid9kt78@4ax.com>
On Wed, 4 Jun 2003 13:32:10 -0700, "Ray Gardener" <ray### [at] daylongraphicscom>
wrote:
> Sorry, I meant 'procedural' in the user-definable sense.
> POV-Ray's procedural textures are predefined.
>
> To be fair, I thought I remembered reading
> somewhere that POV 3.5 let textures use
> user-defined functions, although just going
> through the docs again I can't find
> references to that.

http://www.povray.org/search/?s=user-defined

> There's definitely
> no displacement shading, at any rate,
> except with isosurfaces (more below).

I think you are confusing texturing and geometry in the SDL syntax (though they
are connected in reality). You can use the same information to create both, but
you will not find anything about changing geometry in the chapters about
creating textures.

> > You like procedural geometry but don't like isosurfaces?
>
> Well, isosurfaces have a sampling function interface
> analogous to regular shading in Renderman. For some
> shapes, this is preferable -- e.g., a sphere is
> just x^2 + y^2 + z^2.

That's not the equation of f_sphere but of the f_r internal function.
And because you do not see the difference, you wrote the next sentence.

> But for other shapes,
> determining from (x,y,z) whether the point is
> inside or outside gets complicated.

Actually, the test for inside/outside is far simpler than looking for an
intersection. Inside are all points where the value of the function is below the
threshold; outside are all points where the value is above the threshold; on the
surface are all points where the value equals the threshold.
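ABX's threshold test can be sketched in a few lines of Python (a rough illustration only, not POV-Ray source; `f_r` and `f_sphere` below are modeled on the POV-Ray 3.5 internal functions of the same names, where f_sphere(x,y,z,R) is the distance from the origin minus R — and note that x^2 + y^2 + z^2 is the *square* of f_r):

```python
import math

def f_r(x, y, z):
    """Distance from the origin (POV-Ray's f_r internal function)."""
    return math.sqrt(x*x + y*y + z*z)

def f_sphere(x, y, z, r):
    """Sphere of radius r: negative inside, zero on the surface, positive outside."""
    return f_r(x, y, z) - r

def classify(f, point, threshold=0.0):
    """Inside if f < threshold, outside if f > threshold, else on the surface."""
    value = f(*point)
    if value < threshold:
        return "inside"
    if value > threshold:
        return "outside"
    return "surface"

unit_sphere = lambda x, y, z: f_sphere(x, y, z, 1.0)
print(classify(unit_sphere, (0.5, 0.0, 0.0)))  # inside
print(classify(unit_sphere, (2.0, 0.0, 0.0)))  # outside
```

Finding a ray/surface *intersection*, by contrast, means searching along the ray for a root of f minus the threshold, which is why the inside/outside test is the cheap part.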

> POV-Ray might benefit from including a
> procedural geometry keyword

Adding one keyword does not change functionality. It's the internal support for
the keyword that makes a feature useful ;-)

> which would
> have an option of emitting the geometry
> to the scene's object tree (for raytracing
> along with the other objects)

If it's permanent, why can't this be done by a script?

> There would be a DLL (on win32) named landscape_detail.dll
> that would be passed the object, along with a reference
> to the zbuffer, the object tree, and cam/lighting.
> The DLL could then proceed to create whatever geometry
> it wanted, adding it to the tree or rasterizing it
> into the zbuffer (the bool arg after the DLL name indicates which).

You can read about adding support for DLLs, for example, in this thread:
http://news.povray.org/povray.programming/31596/

> The geometry_processing keyword would also be available
> outside global_settings, so that one could easily change
> the rendering manner for sections of a script in specific ways.

Two side notes. Since the number of your posts is increasing, can I kindly ask
you to make your lines longer? Is there any particular reason you sometimes use
fewer than 50 characters per line? It would make reading your long posts easier
for those who do not have a mouse wheel and use only the keyboard. Thanks.

And before you start discussing all the features you want to discuss, I just
want to tell you about the existence of the povray.unofficial.patches and
povray.programming groups. Their purpose is well described at
http://news.povray.org/povray.announce.frequently-asked-questions/

ABX



From: Tom York
Subject: Re: Scanline rendering in POV-Ray
Date: 5 Jun 2003 05:50:01
Message: <web.3edf118dc1329458bd7f22250@news.povray.org>
Christopher James Huff wrote:
>True, as I said, they were reduced to triangles. The API could probably
>hook up to built-in primitives if they were available, though.

Oh, no doubt. You might be able to implement an OpenGL-conformant interface,
so that the teapot really does go through as a primitive to the card... :-)

>Real-time involves several simplifications in computation and geometry,
>which does push the advantage to scanline. But do you need 200FPS? If
>the raytracing card is *fast enough*, its advantages could outweigh
>those of a scanline card.

I expect most people would be happy with 30 fps or above (some people need
higher in multiplayer games, but they turn off detail). If the card cannot
match other cards on scenes of similar complexity at that rate, it will be
visually annoying.

>(There was some RTRT...Real Time Ray-Tracing...demo called something
>like Heaven 7, you may want to look at it. It was quite impressive on my
>PC.)

Yes, and Rubicons 1 and 2, and Fresnel 1 and 2. The Rubicons did the
raytracing at low resolution and used interpolation. Fresnel 2 (and 1?)
actually took advantage of a 3D card to accelerate the raytracing,
apparently. Heaven 7 looks incredibly good (it only needed one light source
and no shadow checking!). All of these did no space subdivision (no octrees
or BSP methods). Are there then only RTRT *demos*? No equivalent of the
various toy engines that you can find on places like the 3D Engine List?

>I didn't say they were useless. Triangles are definitely useful for
>things like terrain, though I wonder if a hardware isosurface solver
>could compete...

It takes long enough on an Athlon to solve those things...

>And the advantage of a box is memory: one box takes up less memory than
>one triangle. You *can* use groups of them, nobody has said you would be
>limited to single primitives.

Sure, but people tend to minimise geometry in favour of textures in games. I
know it's often the other way round on non-PC graphics systems (hence
OpenGL being caught behind the times when DirectX started supporting all
these fancy texturing methods), so perhaps a primitive rich card would
reverse this.

>There was a POV-Ray include that used CSG to "explode" objects. If that
>method wasn't suitable, an animated mesh would probably be the best way
>to go. Again, I'm not saying meshes are useless, just that there are
>often better things to use.

I know, I've used it (and very useful it is too :-) However, isn't
differencing famously slow? Also, wouldn't a differenced object be doing
calculations for all the empty "subtracted" pixels as well?
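The cost Tom is asking about can be illustrated with a deliberately simplified 1D sketch (an assumption-laden toy, not POV-Ray's actual CSG code): along a single ray, computing a difference A - B means intersecting the ray with *both* operands and merging the resulting interval lists, even along stretches where the final result is empty.

```python
def difference(a_spans, b_spans):
    """Subtract a union of intervals b_spans from a_spans (1D along one ray)."""
    result = []
    for a0, a1 in a_spans:
        pieces = [(a0, a1)]
        for b0, b1 in b_spans:            # every subtracted object is tested
            next_pieces = []
            for p0, p1 in pieces:
                if b1 <= p0 or b0 >= p1:  # no overlap: piece survives intact
                    next_pieces.append((p0, p1))
                    continue
                if p0 < b0:               # keep the part before the hole
                    next_pieces.append((p0, b0))
                if b1 < p1:               # keep the part after the hole
                    next_pieces.append((b1, p1))
            pieces = next_pieces
        result.extend(pieces)
    return result

# Ray crosses object A on [1, 5]; subtracted object B covers [2, 3]:
print(difference([(1, 5)], [(2, 3)]))  # [(1, 2), (3, 5)]
```

Note that the work against B is done even when B swallows A entirely and nothing is left to shade, which is one intuition for why heavily differenced objects trace slowly.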

>This is pretty much irrelevant. I just said you can add and remove
>geometry. What you are talking about would be more switching between
>different models for different frames of the animation.

I see.

>Who said no triangles? "More primitives" != "Ban triangles"!

I'm guessing that triangles would remain the most popular shape, except in
the simplest room-based games (irrespective of technical arguments, it
would take a while for programmers/artists to start using the other
primitives extensively).

>With a raytracing engine, the sphere would likely be faster than the
>crossed polygons.

I've no idea whether that's true, but I imagine it's unlikely to be true
with a volumetrically shaded sphere.

>I was not talking about texture-mapped spheres. I was specifically
>talking about using procedural shaders: something volumetric or based on
>angle of incidence to the surface, or something more like the glow patch.

Aren't atmosphere/volumetric effects often faster with scanline methods (I
don't know)?

>You're assuming all the texture maps and models fit in the card memory.
>Yes, when the image map is already local, using it would be faster than
>all but the simplest shaders. That doesn't mean procedural shaders are
>too slow. If a specific shader is too slow, you use a faster one, maybe
>resorting to an image map if it will do the job and there is memory for
>it.

They usually do, because the programmers know what sort of minimum memory
requirements on the card they're prepared to tolerate. When they don't, you
really notice. However, using predominantly procedural shaders throws the
bottleneck onto the card's processor, and card memory is cheaper.



From: Andreas Kreisig
Subject: Re: Scanline rendering in POV-Ray
Date: 5 Jun 2003 09:21:58
Message: <3edf43f5@news.povray.org>
Christopher James Huff wrote:



> (There was some RTRT...Real Time Ray-Tracing...demo called something
> like Heaven 7, you may want to look at it. It was quite impressive on my
> PC.)

If I remember correctly, Heaven 7 is an OpenGL demo, not RTRT?

Andreas

-- 
http://www.render-zone.com



From: Tim Cook
Subject: Re: The Meaning of POV-Ray
Date: 5 Jun 2003 09:46:14
Message: <3edf49a6$1@news.povray.org>
Ray Gardener wrote:
> scenarios (e.g., the POV-Team decides on a radical change
> that breaks half of all scene files

Wasn't there some thread a while back about completely ditching all
the POV primitives and using isosurfaces for everything instead? :)

-- 
Tim Cook
http://home.bellsouth.net/p/PWP-empyrean

-----BEGIN GEEK CODE BLOCK-----
Version: 3.12
GFA dpu- s: a?-- C++(++++) U P? L E--- W++(+++)>$
N++ o? K- w(+) O? M-(--) V? PS+(+++) PE(--) Y(--)
PGP-(--) t* 5++>+++++ X+ R* tv+ b++(+++) DI
D++(---) G(++) e*>++ h+ !r--- !y--
------END GEEK CODE BLOCK------



From: Ray Gardener
Subject: Re: The Meaning of POV-Ray
Date: 5 Jun 2003 10:32:39
Message: <3edf5487$1@news.povray.org>
> You are blowing this problem out of proportion.

What problem? I was just responding to Ken's assertion that
"the development model of POV-Ray was pretty much at the whim
of the developers", when that is not actually true. I'm not making any
connection between this particular subject and the scanline stuff;
I'm just pondering it independently as an interesting socioeconomic
phenomenon in its own right.

Roughly speaking, whoever supports a file format best gains the
majority of the user base and relegates other supporters to the sidelines,
or even displaces them entirely. But the users decide what constitutes
"best", so whatever feeling of control developers have is illusory.

The only point I can make relating that and the scanline feature
is that *if* the majority of users decided scanlining was desirable,
then it wouldn't matter what the developers thought; the feature
would inherently find its way in. OTOH, sometimes users don't know
what they want, and they realize only in retrospect that they badly needed
a certain feature (it explains those "How did I ever live without X?"
sayings).
Software is prone to that phenomenon because myths build up easily about
software, and the only way to change thinking is to just develop working
code and demo it. So I'm risking failure to demo scanlining, just in case
it turns out to be desirable only in retrospect.

The point about the different unofficial POVs is interesting too,
actually. They demonstrate what happens when a large but minor part
(say 30% or 40%) of the user base prefers a feature. A parallel fork
winds up being separately maintained. A scanlining feature could
end up like that too.

As for my narrow lines of text, I find reading long lines difficult
(it's why newspaper columns are narrow -- imagine if newspaper articles
ran their text all the way across a page. It's also why all the
main news sites like Salon, ABC, MSNBC, etc. use columns). I'd rather
scroll (I just hit the down arrow key) than keep losing track of
which line is next.

Ray



From: ABX
Subject: Re: The Meaning of POV-Ray
Date: 5 Jun 2003 10:44:03
Message: <dfludvgjjbrkpmjg835455iqksqq84906s@4ax.com>
On Thu, 5 Jun 2003 07:32:58 -0700, "Ray Gardener" <ray### [at] daylongraphicscom>
wrote:
> > You are blowing this problem out of proportion.
>
> What problem?

The importance of the possible futures you listed.

ABX



From: Ray Gardener
Subject: Re: Perhaps a "process_geometry" keyword
Date: 5 Jun 2003 10:50:48
Message: <3edf58c8@news.povray.org>
> Adding one keyword does not change functionality. It's the internal
> support for the keyword that makes a feature useful ;-)

Well, duh. Obviously there has to be code implementing
what the keyword does. I wouldn't be digging into
the POV source otherwise.


> > which would
> > have an option of emitting the geometry
> > to the scene's object tree (for raytracing
> > along with the other objects)
>
> If it's permanent, why can't this be done by a script?

Because the SDL is not always the preferred language.
A person might have legacy code in another language,
or prefer to use C++ because it's object oriented,
or compiled code runs faster when object generation
involves some complex process, etc. All else being
equal, I'd rather write in C/C++ just because... I'm
tired of having to type "#declare" every time I
want to assign a value to a variable in SDL.

Ray



From: Gilles Tran
Subject: Re: The Meaning of POV-Ray
Date: 5 Jun 2003 11:55:49
Message: <3edf6805$1@news.povray.org>

3edf5487$1@news.povray.org...
> Roughly speaking, whoever supports a file format best gains the
> majority of the user base and relegates other supporters to the sidelines,
> or even displaces them entirely. But the users decide what constitutes
> "best", so whatever feeling of control developers have is illusory.

The point you're missing is that in the case of POV-Ray, the users are also
the developers. It's not a users-vs-developers game. The two latest members
of the POV-Team (Nathan Kopp and Ron Parker) started out as patchers, took
it upon themselves to add features that became extremely popular, maintained
and supported complex multipatches for a while (thus demonstrating their
goodwill and ability to interact with users), and were quickly "promoted" to
developers.
The fact is, if you're a proven, talented developer and create POV-Ray code that
1) works, 2) is actually useful, and 3) doesn't turn into a support headache,
your work will find its way into a future POV-Ray. I too was wondering about
the forking problem a few years ago (I've been using patches since 1996) but
it just didn't happen and the system self-regulated very nicely... which is
an amazing socioeconomic phenomenon indeed...

>Software is prone to that phenomenon because myths build up easily about
>software, and the only way to change thinking is to just develop working
>code and demo it.

That's certainly true, and could be the case for a scanline alternative.
But beware of developer's hubris, something we've seen a lot in these groups,
i.e. announcements/proposals from programmers about radical features
supposed to change the future of Pov-Ray, but that eventually came to nought
for various reasons, the main one being that the developers weren't familiar
enough with POV-Ray itself and how it is actually used.
My only advice would be for you to start using POV-Ray yourself to create
complex scenes and animations (I'm not talking about demo scenes, but images
created for the IRTC or another real-life purpose) and then rethink your
patch from this experience.

G.

--

**********************
http://www.oyonale.com
**********************
- Graphic experiments
- POV-Ray and Poser computer images
- Posters




Copyright 2003-2023 Persistence of Vision Raytracer Pty. Ltd.