POV-Ray : Newsgroups : povray.general : Scanline rendering in POV-Ray Server Time
  Scanline rendering in POV-Ray (Message 67 to 76 of 96)  
From: Ray Gardener
Subject: Benchmarking
Date: 4 Jun 2003 18:35:06
Message: <3ede741a@news.povray.org>
> Ah, look, one billion triangles rendered in 43 hours* for 1080 Kpixels.
> Scaling your previously reported result of 16.8 million triangles drawn in
> 12 minutes at 275 Kpixels, your scanline method would take over 47 hours.
> And their scene render time includes global illumination and shadows.
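The quoted scaling can be reproduced with a quick back-of-the-envelope calculation (a sketch of my own, assuming render time is linear in both triangle count and pixel count):

```python
# Reported: 16.8 million triangles rendered in 12 minutes at 275 Kpixels.
# Claim: scaled to one billion triangles at 1080 Kpixels, that is about 47 hours.
tris_reported, minutes_reported, kpix_reported = 16.8e6, 12, 275
tris_target, kpix_target = 1e9, 1080

# Assume time is linear in both triangle count and pixel count.
minutes_scaled = (minutes_reported
                  * (tris_target / tris_reported)
                  * (kpix_target / kpix_reported))
hours_scaled = minutes_scaled / 60
print(round(hours_scaled, 1))  # roughly 47 hours, matching the quoted claim
```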

That scene I submitted did generate a lot of
geometry that was ultimately occluded by nearer geometry.
Here is another scene,
http://www.daylongraphics.com/products/leveller/render/testimg_001.jpg

which took 67 seconds.

When rendering, after a few seconds, the image looks like this:
http://www.daylongraphics.com/products/leveller/render/testimg_004.jpg

Sometimes it's nice to use scanline rendering to watch the object
draw that way, instead of having the render window draw
line by line (or mosaic into smaller and smaller squares).

Has anyone implemented displacement shading in POV-Ray
using that algorithm? If not, I'd volunteer.


Ray


From: Christopher James Huff
Subject: Re: Raytracing displacement shading
Date: 4 Jun 2003 18:39:38
Message: <cjameshuff-4B5D53.17322304062003@netplex.aussie.org>
In article <3ede7089$1@news.povray.org>,
 "Ray Gardener" <ray### [at] daylongraphicscom> wrote:

> That is very neat. It has some restrictions on
> how the displacements can be expressed, but still,
> offers a compelling option over bumpmapping.

As the paper mentions, it is more restrictive than REYES, but is the 
same type used in Maya, so in practice it may not be as restrictive as 
it first seems.


> Would you know if this algorithm is being considered
> for inclusion in POV-Ray?

In a near-term official version, it is unlikely. 3.5 is near the end of 
life for this code base, and there are plans for a redesign and rewrite. 
There are several patch writers interested in it though, and the POV Team 
is aware of it. I'm going to try to implement it in my own raytracer; if 
I succeed, I will get to work on a POV patch. This and Schaufler and 
Jensen's point geometry raytracing algorithm both look quite 
interesting. (In fact, I've been wondering if the point geometry 
rendering could be extended with a similar feature, generating 
additional displaced points on the fly.)

-- 
Christopher James Huff <cja### [at] earthlinknet>
http://home.earthlink.net/~cjameshuff/
POV-Ray TAG: chr### [at] tagpovrayorg
http://tag.povray.org/


From: Francois Labreque
Subject: Re: The Meaning of POV-Ray
Date: 4 Jun 2003 19:58:28
Message: <3EDE86E1.8080600@videotron.ca>
Program ended abnormally on 6/4/03 4:32 AM, due to a catastrophic Gilles
Tran error:
> But do I want better, faster, artefact-free
> global illumination and volumetrics? Programmable shaders? Full HDRI
> support? A usable atmospheric (clouds and sky) model? You bet!
> 
> 

In your hands, these weapons would be deadly.

;)

-- 
/*Francois Labreque*/#local a=x+y;#local b=x+a;#local c=a+b;#macro P(F//
/*    flabreque    */L)polygon{5,F,F+z,L+z,L,F pigment{rgb 9}}#end union
/*        @        */{P(0,a)P(a,b)P(b,c)P(2*a,2*b)P(2*b,b+c)P(b+c,<2,3>)
/*   videotron.ca  */}camera{orthographic location<6,1.25,-6>look_at a }


From: Ray Gardener
Subject: Re: The Meaning of POV-Ray
Date: 5 Jun 2003 03:49:38
Message: <3edef612$1@news.povray.org>
> ... The core developers of the program
> are either professional programmers who work on it for fun in their spare
> time or are computer science students working on it to help develop their
> programming skills. That we as a public get to enjoy the fruits of their
> labor is secondary to that goal. With that in mind, the development model
> of POV-Ray is not clearly defined and is pretty much at the whim of the
> developers and whatever tickles their fancy at the time.

Hmm... I understand, but what I now wonder is
what happens when a critical mass of users have
made large investments in using POV-Ray.

Speaking hypothetically, let's say that all copies of POV-Ray
somehow vanished, along with any source code versions, and the
developers got tired of the whole thing as well. Essentially,
no more POV-Ray.

But there would be terabytes of POV-Ray scene files still left,
and tons of POV-Ray experience in its user base. So how much
time would elapse before someone would recreate it? I would
guess that a new POV-Team would spring up in a few days.

It's a weird extreme scenario, but I think it demonstrates
that at a certain point, a program's existence isn't
confined tightly to its development team. Intermediate
scenarios (e.g., the POV-Team decides on a radical change
that breaks half of all scene files, but another group instantly
springs up forking POV-Ray to preserve compatibility, or
the backlash is so strong the POV-Team changes their minds)
would also demonstrate the same effect. It's similar to
what people notice about political leaders: yes, the people
at the top technically have absolute power, but in practice,
they do not.

Let's imagine another case: an outsider develops a really
popular patch. Nearly everyone loves it, but (again, hypothetically)
the POV-Team doesn't. What would happen? The code would be
forked and the majority of the user base would follow
whatever group managed the fork.

Here's a really disturbing thought: Like they did with
C/C++ and web browsing, Microsoft clones POV-Ray and
does the usual embrace and extend. They do a good enough
job that most of the user base eventually migrates all of its
files to use MS-POV (arrgh, I actually get queasy just
thinking about it :). As the years go by, the POV-Team is
left maintaining an increasingly incompatible and
not-as-powerful program. To most users, whatever code
the POV-Team is maintaining is no longer POV-Ray, because
they're not using that code anymore.

I'm not trying to sound weird or anything, but I
thought it would be interesting to ponder the
deeper nature of project ownership when the project
has reached a critical mass of users and legacy data.
I would say that it really does rest with the users; the
developers basically have no choice but to keep
doing what the users want, or risk having their
leadership given to others.

Ray


From: ABX
Subject: Re: The Meaning of POV-Ray
Date: 5 Jun 2003 04:37:34
Message: <f3vtdv8kobsem4upmcc2m23789nkuuha7l@4ax.com>
On Thu, 5 Jun 2003 00:49:56 -0700, "Ray Gardener" <ray### [at] daylongraphicscom>
wrote:
> Hmm... I understand, but what I now wonder is
> what happens when a critical mass of users have
> made large investments in using POV-Ray.

They have it, they use it.

> Speaking hypothetically, let's say that all copies of POV-Ray
> somehow vanished, along with any source code versions, and the
> developers got tired of the whole thing as well. Essentially,
> no more POV-Ray.

That would only happen in the case of a whole-universe disaster.

> Intermediate
> scenarios (e.g., the POV-Team decides on a radical change
> that breaks half of all scene files, but another group instantly
> springs up forking POV-Ray to preserve compatibility, or
> the backlash is so strong the POV-Team changes their minds)
> would also demonstrate the same effect. It's similar to
> what people notice about political leaders: yes, the people
> at the top technically have absolute power, but in practice,
> they do not.

You are blowing this problem out of proportion.
I do not need any Team to play with scenes using the existing binaries, and I
do not need any Team to make the extensions I need myself. But it's the Team
who sparked my interest, started my fun, and released the application for me.
I love it. I use it.

> Let's imagine another case: an outsider develops a really
> popular patch. Nearly everyone loves it, but (again, hypothetically)
> the POV-Team doesn't. What would happen? The code would be
> forked and the majority of the user base would follow
> whatever group managed the fork.

Excuse me, how long have you been with POV? Have you noticed the differences
in features between POV-Ray 3.1, MegaPOV 0.7, POV-Ray 3.5, and MegaPOV 1.0,
and the announcements about MegaPOV 1.1?

ABX


From: Tom York
Subject: Re: Scanline rendering in POV-Ray
Date: 5 Jun 2003 05:10:02
Message: <web.3edf071ec1329458541c87100@news.povray.org>
Patrick Elliott wrote:
>But it will see the triangles that make them up, which takes lots of
>space.

I think the textures take up rather more. Games often minimise geometry when
they can get away with better texture maps.

>POV-Ray has a built in example of real time raytracing. It is small, but
>then you are dealing with an engine that is running on top of and OS and
>can't take full advantage of the hardware, since it will 'never' have
>100% total access to the processor. A card based one would likely be far more
>optimized, support possible speed improvements that don't exist in POV-Ray and
>have complete access to the full power of the chip running it. This would be
>slower why?

Neither will a software-only (I emphasize software-only: no card support)
scanline renderer under Windows, yet such renderers can work in realtime at
higher resolutions. Surely there is an example that is optimised for
game-style geometry, rather than being the raytracing equivalent of
Lightwave? I emphasize that I'm not denying your argument on the basis of
strong evidence, but I have not seen a realtime raytracer do what a decent
realtime scanline/S-buffer based system can do. I'd really like to, mind.

>That is a point, but nothing prevents you from making explodable objects
>from triangles. In fact, the increase in available memory by using
>primitives in those things that are not going to undergo such a change
>means you can use even more triangles and make the explosion even more
>realistic. Current AGP technology is reaching its limits as to how much
>you can shove through the door and use. Short of a major redesign of both
>the cards and the motherboards, simply adding more memory or a faster
>chip isn't going to cut it.

Until it actually happens I remain unconvinced. We've always been on the
edge of running out of bandwidth since the first 3D cards. Of course,
there's a limit, but I hear convincing speculation each way on when we'll
reach it.
If a competitive realtime raytracer on a chip is possible, why would it
require favours in terms of the competition running out of room for
triangles? It should be able to do better at any time.

>Again. Why would anyone design one that 'only' supported such primitives?
>That's like asking why POV-Ray supports meshes if we all think primitives
>are so great. You use what is appropriate for the circumstances. If you
>want a space ship that explodes into a hundred fragments use a mesh, if
>you want one that gets sliced in half by a beam weapon, then use a mesh
>along the cut line and primitive where it makes sense. Duh!

Speed and complexity (hence cost). Current cards (at the high end for games)
can cost a couple of hundred dollars. Any simplification that can be made
can save cost, and having a card that doesn't need to switch over from
using triangles to drawing boxes halfway through a scene is good for
efficiency. Your particular example seems to require conversion between
triangles and other primitives (the spaceship is at one point a mesh, and
then it's half a mesh and half something else), which is not exactly rapid
in many cases. Aside from that, what's POV got to do with it? Cards don't
support NURBS as primitives either, despite them being popular in
non-realtime scanline renderers. I don't think a raytracer on a card
designed for realtime game use has to solve exactly the same problems as a
top-level raytracer for non-realtime use. In the same way that the Quake
engine and 3D Studio aren't kin.

>Well, that kind of describes most of the new cards that come out. lol
>Yes, it would need compatibility with the previous systems, but that
>isn't exactly an impossibility.

It is more difficult when you have completely changed the philosophy behind
the card, but want it to remain compatible with the previous philosophy.
You don't agree?

>There is nothing to prevent using a second chip dedicated to processing
>such things and having it drop the result into a block of memory to be
>used like a 'normal' bitmap. This assumes that the speed increase gained
>by building the rendering engine into the card wouldn't offset the time
>cost of the procedural texture. In any case, there are ways around this
>issue, especially if such methods turn out to already be in use on the
>newer DirectX cards.

Then aren't you going to lose the advantage of generating textures on the
card? If I generate a bitmap by procedure or by artist and subsequent
loading, I must still store it. Newer cards do procedural shading on a
pixel as it's rendered (or so I thought), so no extra storage is required.
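The trade-off here (evaluating a shader per pixel versus precomputing and storing a bitmap) can be sketched generically. This illustrates the principle only, not any particular card's pipeline; the `checker` function is a made-up example shader:

```python
def checker(u, v, scale=8):
    """A tiny procedural shader, evaluated on demand; nothing is stored."""
    return float((int(u * scale) + int(v * scale)) % 2)

# Procedural route: compute the value only when a pixel actually needs it.
sample = checker(0.3, 0.7)

# Bitmap route: precompute and store the whole texture up front.
SIZE = 256
bitmap = [[checker(x / SIZE, y / SIZE) for x in range(SIZE)]
          for y in range(SIZE)]
# Lookups into 'bitmap' are cheap, but the storage cost is SIZE * SIZE
# samples, which is exactly the memory the procedural route avoids.
```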


From: ABX
Subject: Re: Perhaps a "process_geometry" keyword
Date: 5 Jun 2003 05:11:25
Message: <hj0udvgot7jf3c67ne14kol96iuid9kt78@4ax.com>
On Wed, 4 Jun 2003 13:32:10 -0700, "Ray Gardener" <ray### [at] daylongraphicscom>
wrote:
> Sorry, I meant 'procedural' in the user-definable sense.
> POV-Ray's procedural textures are predefined.
>
> To be fair, I thought I remembered reading
> somewhere that POV 3.5 let textures use
> user-defined functions, although just going
> through the docs again I can't find
> references to that.

http://www.povray.org/search/?s=user-defined

> There's definitely
> no displacement shading, at any rate,
> except with isosurfaces (more below).

I think you are confusing texturing and geometry in SDL syntax (though they
are connected in reality). You can use the same information to create both,
but you will not find information about changing geometry in the chapters
about creating textures.

> > You like procedural geometry but don't like isosurfaces?
>
> Well, isosurfaces have a sampling function interface
> analogous to regular shading in Renderman. For some
> shapes, this is preferable -- e.g., a sphere is
> just x^2 + y^2 + z^2.

That's not the equation of f_sphere but of the internal function f_r.
And because you do not see the difference, you wrote the next sentence.

> But for other shapes,
> determining from (x,y,z) whether the point is
> inside or outside gets complicated.

Actually, the test for inside/outside is far simpler than looking for an
intersection. Inside are all points where the value of the function is below
the threshold; outside are all points where it is above the threshold; on
the surface are all points where it equals the threshold.
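The classification described above can be written down directly. Here is a minimal sketch (the function and names are mine; this uses the squared implicit form, whereas POV-Ray's built-in f_sphere subtracts the radius from the sqrt distance):

```python
def sphere_value(x, y, z, radius=1.0):
    # Implicit function of a sphere: negative inside, zero on the
    # surface, positive outside (with threshold 0). Note the bare
    # x^2 + y^2 + z^2 is the square of the f_r-style distance function.
    return x*x + y*y + z*z - radius*radius

def classify(value, threshold=0.0, eps=1e-9):
    # Inside: value below threshold; outside: above; surface: equal.
    if abs(value - threshold) < eps:
        return "surface"
    return "inside" if value < threshold else "outside"

print(classify(sphere_value(0.5, 0.0, 0.0)))  # inside
print(classify(sphere_value(1.0, 0.0, 0.0)))  # surface
print(classify(sphere_value(2.0, 0.0, 0.0)))  # outside
```

Finding a ray intersection, by contrast, requires solving for where the function value crosses the threshold along the ray, which is the genuinely hard part.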

> POV-Ray might benefit from including a
> procedural geometry keyword

Adding one keyword does not change functionality. It's the internal support
behind the keyword that makes a feature useful ;-)

> which would
> have an option of emitting the geometry
> to the scene's object tree (for raytracing
> along with the other objects)

If it is permanent, why can't this be done by a script?

> There would be a DLL (on win32) named landscape_detail.dll
> that would be passed the object, along with a reference
> to the zbuffer, the object tree, and cam/lighting.
> The DLL could then proceed to create whatever geometry
> it wanted, adding it to the tree or rasterizing it
> into the zbuffer (the bool arg after the DLL name indicates which).

You can read about adding support for dlls for example in this thread
http://news.povray.org/povray.programming/31596/

> The geometry_processing keyword would also be available
> outside global_settings, so that one could easily change
> the rendering manner for sections of a script in specific ways.

Two side notes: since the number of your posts is increasing, can I kindly
ask you to make your lines longer? Is there any particular reason you
sometimes use fewer than 50 characters? Longer lines would make reading your
posts easier for those who do not have a mouse wheel and use only the
keyboard. Thanks.

And before you start discussing all the features you want to discuss, I just
want to tell you about the existence of the povray.unofficial.patches and
povray.programming groups. Their purpose is well described at
http://news.povray.org/povray.announce.frequently-asked-questions/

ABX


From: Tom York
Subject: Re: Scanline rendering in POV-Ray
Date: 5 Jun 2003 05:50:01
Message: <web.3edf118dc1329458bd7f22250@news.povray.org>
Christopher James Huff wrote:
>True, as I said, they were reduced to triangles. The API could probably
>hook up to built-in primitives if they were available, though.

Oh, no doubt. You might be able to implement an OpenGL-conformant interface,
so that the teapot really does go through as a primitive to the card... :-)

>Real-time involves several simplifications in computation and geometry,
>which does push the advantage to scanline. But do you need 200FPS? If
>the raytracing card is *fast enough*, its advantages could outweigh
>those of a scanline card.

I expect most people would be happy with 30 fps or above (some people need
higher in multiplayer games, but they turn off detail). If the card cannot
match other cards on scenes of similar complexity at that rate, it will be
visually annoying.

>(There was some RTRT...Real Time Ray-Tracing...demo called something
>like Heaven 7, you may want to look at it. It was quite impressive on my
>PC.)

Yes, and Rubicons 1 and 2, and Fresnel 1 and 2. The Rubicons did the
raytracing at low resolution and used interpolation. Fresnel 2 (and 1?)
actually took advantage of a 3D card to accelerate the raytracing,
apparently. Heaven 7 looks incredibly good (it only needed one light source
and no shadow checking!). None of these used space subdivision (no octrees
or BSP methods). Are there then only RTRT *demos*? No equivalent of the
various toy engines that you can find on places like the 3D Engine List?

>I didn't say they were useless. Triangles are definitely useful for
>things like terrain, though I wonder if a hardware isosurface solver
>could compete...

It takes long enough on an Athlon to solve those things...

>And the advantage of a box is memory: one box takes up less memory than
>one triangle. You *can* use groups of them, nobody has said you would be
>limited to single primitives.

Sure, but people tend to minimise geometry in favour of textures in games. I
know it's often the other way round on non-PC graphics systems (hence
OpenGL being caught behind the times when DirectX started supporting all
these fancy texturing methods), so perhaps a primitive-rich card would
reverse this.

>There was a POV-Ray include that used CSG to "explode" objects. If that
>method wasn't suitable, an animated mesh would probably be the best way
>to go. Again, I'm not saying meshes are useless, just that there are
>often better things to use.

I know, I've used it (and very useful it is too :-) However, isn't
differencing famously slow? Also, wouldn't a differenced object be doing
calculations for all the empty "subtracted" pixels as well?

>This is pretty much irrelevant. I just said you can add and remove
>geometry. What you are talking about would be more switching between
>different models for different frames of the animation.

I see.

>Who said no triangles? "More primitives" != "Ban triangles"!

I'm guessing that triangles would remain the most popular shape, except in
the simplest room-based games (irrespective of technical arguments, it
would take a while for programmers/artists to start using the other
primitives extensively).

>With a raytracing engine, the sphere would likely be faster than the
>crossed polygons.

I've no idea whether that's true, but I imagine it's unlikely to be true
with a volumetrically shaded sphere.

>I was not talking about texture-mapped spheres. I was specifically
>talking about using procedural shaders: something volumetric or based on
>angle of incidence to the surface, or something more like the glow patch.

Aren't atmosphere/volumetric effects often faster with scanline methods (I
don't know)?

>You're assuming all the texture maps and models fit in the card memory.
>Yes, when the image map is already local, using it would be faster than
>all but the simplest shaders. That doesn't mean procedural shaders are
>too slow. If a specific shader is too slow, you use a faster one, maybe
>resorting to an image map if it will do the job and there is memory for
>it.

They usually do, because the programmers know what sort of minimum memory
requirements on the card they're prepared to tolerate. When they don't, you
really notice. However, using predominantly procedural shaders throws the
bottleneck onto the card's processor, and card memory is cheaper.


From: Andreas Kreisig
Subject: Re: Scanline rendering in POV-Ray
Date: 5 Jun 2003 09:21:58
Message: <3edf43f5@news.povray.org>
Christopher James Huff wrote:



> (There was some RTRT...Real Time Ray-Tracing...demo called something
> like Heaven 7, you may want to look at it. It was quite impressive on my
> PC.)

If I remember correctly, Heaven 7 is an OpenGL demo, not RTRT?

Andreas

-- 
http://www.render-zone.com


From: Tim Cook
Subject: Re: The Meaning of POV-Ray
Date: 5 Jun 2003 09:46:14
Message: <3edf49a6$1@news.povray.org>
Ray Gardener wrote:
> scenarios (e.g., the POV-Team decides on a radical change
> that breaks half of all scene files

Wasn't there some thread a while back about completely ditching all
the POV primitives and using isosurfaces for everything instead? :)

-- 
Tim Cook
http://home.bellsouth.net/p/PWP-empyrean

-----BEGIN GEEK CODE BLOCK-----
Version: 3.12
GFA dpu- s: a?-- C++(++++) U P? L E--- W++(+++)>$
N++ o? K- w(+) O? M-(--) V? PS+(+++) PE(--) Y(--)
PGP-(--) t* 5++>+++++ X+ R* tv+ b++(+++) DI
D++(---) G(++) e*>++ h+ !r--- !y--
------END GEEK CODE BLOCK------



Copyright 2003-2023 Persistence of Vision Raytracer Pty. Ltd.