POV-Ray : Newsgroups : povray.general : Scanline rendering in POV-Ray
Scanline rendering in POV-Ray (Message 77 to 86 of 96)
From: Ray Gardener
Subject: Re: The Meaning of POV-Ray
Date: 5 Jun 2003 10:32:39
Message: <3edf5487$1@news.povray.org>
> You are blowing this problem out of proportion.

What problem? I was just responding to Ken's assertion that
"the development model of POV-Ray was pretty much at the whim
of the developers", when that is not actually true. I'm not making any
connection between this particular subject and the scanline stuff;
I'm just pondering it independently as an interesting socioeconomic
phenomenon in its own right.

Roughly speaking, whoever supports a file format best gains the
majority of the user base and relegates other supporters to the sidelines,
or even displaces them entirely. But the users decide what constitutes
"best", so whatever feeling of control developers have is illusory.

The only point I can make relating that and the scanline feature
is that *if* the majority of users decided scanlining was desirable,
then it wouldn't matter what the developers thought; the feature
would inherently find its way in. OTOH, sometimes users don't know
what they want, and they realize only in retrospect that they badly needed
a certain feature (it explains those "How did I ever live without X?"
sayings).
Software is prone to that phenomenon because myths build up easily about
software, and the only way to change thinking is to develop working
code and demo it. So I'm risking failure by demoing scanlining, just in
case it's one of those features that is only desirable in retrospect.

The point about the different unofficial POVs is interesting too,
actually. They demonstrate what happens when a sizable minority
(say 30% or 40%) of the user base prefers a feature. A parallel fork
winds up being separately maintained. A scanlining feature could
end up like that too.

As for my narrow lines of text, I find reading long lines difficult
(it's why newspaper columns are narrow -- imagine if newspaper articles
ran their text all the way across a page. It's also why all the
main news sites like Salon, ABC, MSNBC, etc. use columns). I'd rather
scroll (I just hit the down arrow key) than keep losing track of
which line is next.

Ray



From: ABX
Subject: Re: The Meaning of POV-Ray
Date: 5 Jun 2003 10:44:03
Message: <dfludvgjjbrkpmjg835455iqksqq84906s@4ax.com>
On Thu, 5 Jun 2003 07:32:58 -0700, "Ray Gardener" <ray### [at] daylongraphicscom>
wrote:
> > You are blowing this problem out of proportion.
>
> What problem?

The importance attached to the listed possibilities for the future.

ABX



From: Ray Gardener
Subject: Re: Perhaps a "process_geometry" keyword
Date: 5 Jun 2003 10:50:48
Message: <3edf58c8@news.povray.org>
> Adding one keyword does not change functionality. That's internal support of
> keyword which makes feature useful ;-)

Well, duh. Obviously there has to be code implementing
what the keyword does. I wouldn't be digging into
the POV source otherwise.


> > which would
> > have an option of emitting the geometry
> > to the scene's object tree (for raytracing
> > along with the other objects)
>
> If permanently, why this can't be done by script?

Because the SDL is not always the preferred language.
A person might have legacy code in another language,
or prefer to use C++ because it's object oriented,
or compiled code runs faster when object generation
involves some complex process, etc. All else being
equal, I'd rather write in C/C++ just because... I'm
tired of having to type "#declare" every time I
want to assign a value to a variable in SDL.
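To make the compiled-language point concrete, here is a minimal sketch (all names hypothetical) of the kind of generator a C++ user might write: a loop that computes sphere centers along a helix, using plain `=` assignments instead of `#declare`, and emits ordinary POV-Ray SDL:

```cpp
#include <cmath>
#include <cstdio>
#include <string>
#include <vector>

// Hypothetical generator: compute sphere centers along a helix in C++
// and emit them as plain POV-Ray SDL sphere statements.
std::vector<std::string> helix_spheres(int n, double radius, double pitch) {
    std::vector<std::string> lines;
    for (int i = 0; i < n; ++i) {
        double t = i * 0.5;                    // plain assignment, no #declare
        double x = radius * std::cos(t);
        double y = pitch * t;
        double z = radius * std::sin(t);
        char buf[128];
        std::snprintf(buf, sizeof buf,
                      "sphere { <%.3f, %.3f, %.3f>, 0.1 }", x, y, z);
        lines.push_back(buf);
    }
    return lines;
}
```

Today the output would go through a text file and `#include`; the proposed keyword would skip that round trip and feed the geometry to the object tree directly.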

Ray



From: Gilles Tran
Subject: Re: The Meaning of POV-Ray
Date: 5 Jun 2003 11:55:49
Message: <3edf6805$1@news.povray.org>

3edf5487$1@news.povray.org...
> Roughly speaking, whoever supports a file format best gains the
> majority of the user base and relegates other supporters to the sidelines,
> or even displaces them entirely. But the users decide what constitutes
> "best", so whatever feeling of control developers have is illusory.

The point you're missing is that in the case of POV-Ray, the users are also
the developers. It's not a users vs. developers game. The two latest members
of the POV-Team (Nathan Kopp and Ron Parker) started out as patchers, took
it on their own to add features that became extremely popular, maintained
and supported complex multipatches for a while (thus demonstrating their
goodwill and ability to interact with users) and were quickly "promoted" as
developers.
The fact is, if you're a proven talented developer, create POV-Ray code that
1) works 2) is actually useful and 3) doesn't turn into a support headache,
your work will find its way into a future POV-Ray. I too was wondering about
the forking problem a few years ago (I've been using patches since 1996) but
it just didn't happen and the system self-regulated very nicely... which is
an amazing socioeconomic phenomenon indeed...

>Software is prone to that phenomenon because myths build up easily about
>software, and the only way to change thinking is to just develop working
>code and demo it.

That's certainly true, and could be the case for a scanline alternative.
But beware of developer's hubris, something we've seen a lot in this group,
i.e. announcements/proposals from programmers about radical features
supposed to change the future of Pov-Ray, but that eventually came to nought
for various reasons, the main one being that the developers weren't familiar
enough with POV-Ray itself and how it is actually used.
My only advice would be for you to start using POV-Ray yourself to create
complex scenes and animations (I'm not talking about demo scenes, but images
created for the IRTC or another real-life purpose) and then rethink your
patch from this experience.

G.

--

**********************
http://www.oyonale.com
**********************
- Graphic experiments
- POV-Ray and Poser computer images
- Posters



From: Christopher James Huff
Subject: Re: Scanline rendering in POV-Ray
Date: 5 Jun 2003 13:09:29
Message: <cjameshuff-C1B5D2.12004005062003@netplex.aussie.org>
In article <3edf43f5@news.povray.org>,
 Andreas Kreisig <and### [at] gmxde> wrote:

> > (There was some RTRT...Real Time Ray-Tracing...demo called something
> > like Heaven 7, you may want to look at it. It was quite impressive on my
> > PC.)
> 
> If I remember correctly, Heaven 7 is an OpenGL demo, not RTRT?

You remember incorrectly, it is RTRT.
http://www.acm.org/tog/resources/RTNews/demos/overview.htm

-- 
Christopher James Huff <cja### [at] earthlinknet>
http://home.earthlink.net/~cjameshuff/
POV-Ray TAG: chr### [at] tagpovrayorg
http://tag.povray.org/



From: Christopher James Huff
Subject: Re: Scanline rendering in POV-Ray
Date: 5 Jun 2003 14:49:33
Message: <cjameshuff-1216D5.13405205062003@netplex.aussie.org>
In article <web.3edf118dc1329458bd7f22250@news.povray.org>,
 "Tom York" <tom### [at] compsocmanacuk> wrote:

> >I didn't say they were useless. Triangles are definitely useful for
> >things like terrain, though I wonder if a hardware isosurface solver
> >could compete...
> 
> It takes long enough on an Athlon to solve those things...

Primarily because an Athlon is not designed to solve them. I was 
specifically talking about a hardware solver...no function VM overhead, 
the functions would execute natively, and as much as possible hard-wired 
into the logic circuits. And remember that raytracing doesn't have the 
same kind of CPU constraints as scanlining, it can be made very parallel.
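The "very parallel" claim is easy to sketch: every pixel is independent, so rows can be dealt out to threads with no locking at all. Here `trace_pixel()` is a dummy stand-in for a real ray evaluation (entirely hypothetical):

```cpp
#include <thread>
#include <vector>

// Sketch of why raytracing parallelizes so well: every pixel is independent,
// so rows can be dealt out to threads with no shared mutable state beyond the
// framebuffer rows each thread owns. trace_pixel() is a dummy stand-in.
int trace_pixel(int x, int y) { return (x ^ y) & 0xff; }

std::vector<int> render(int w, int h, int nthreads) {
    std::vector<int> fb(w * h);
    std::vector<std::thread> pool;
    for (int t = 0; t < nthreads; ++t)
        pool.emplace_back([&fb, w, h, nthreads, t] {
            for (int y = t; y < h; y += nthreads)       // interleaved rows
                for (int x = 0; x < w; ++x)
                    fb[y * w + x] = trace_pixel(x, y);  // no locking needed
        });
    for (auto& th : pool) th.join();
    return fb;
}
```

The same structure maps onto multiple hardware processors as naturally as onto threads, which is the point being made about raytracing hardware.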


> >And the advantage of a box is memory: one box takes up less memory than
> >one triangle. You *can* use groups of them, nobody has said you would be
> >limited to single primitives.
> 
> Sure, but people tend to minimise geometry in favour of textures in games. I
> know it's often the other way round on non-PC graphics systems (hence
> OpenGL being caught behind the times when DirectX started supporting all
> these fancy texturing methods), so perhaps a primitive rich card would
> reverse this.

Geometry is minimized because it is expensive, most of it has to be sent 
to the card for each frame. Cheaper geometry == more geometry possible.


> I know, I've used it (and very useful it is too :-) However, isn't
> differencing famously slow?

No. It is slower than a plain primitive, that doesn't automatically mean 
"too slow to be useable".


> Also, wouldn't a differenced object be doing
> calculations for all the empty "subtracted" pixels as well?

No, that's what bounding is for.
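A sketch of the kind of bounding test meant here (the standard slab method, a simplified illustration rather than POV-Ray's actual code): if a ray misses the bounding box of the CSG result, none of the per-primitive intersection or inside/outside tests ever run, so the "subtracted" regions cost nothing for most rays:

```cpp
#include <algorithm>

struct Ray { double ox, oy, oz, dx, dy, dz; };
struct Box { double min[3], max[3]; };

// Slab-method ray vs. axis-aligned bounding box test. A ray that returns
// false here never reaches the expensive CSG difference evaluation at all.
bool hit_aabb(const Ray& r, const Box& b) {
    double o[3] = {r.ox, r.oy, r.oz}, d[3] = {r.dx, r.dy, r.dz};
    double tmin = 0.0, tmax = 1e30;
    for (int i = 0; i < 3; ++i) {
        double inv = 1.0 / d[i];              // IEEE infinity handles d[i] == 0
        double t0 = (b.min[i] - o[i]) * inv;
        double t1 = (b.max[i] - o[i]) * inv;
        if (t0 > t1) std::swap(t0, t1);
        tmin = std::max(tmin, t0);
        tmax = std::min(tmax, t1);
    }
    return tmin <= tmax;
}
```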


> >With a raytracing engine, the sphere would likely be faster than the
> >crossed polygons.
> 
> I've no idea whether that's true, but I imagine it's unlikely to be true
> with a volumetrically shaded sphere.

Unless you're doing scattering media in the thing, it can be done very 
quickly. If you're doing a simple surface shader, even faster. You're 
picking the slowest possible way to do it with raytracing and comparing 
it to the fastest possible scanlining method, which is ridiculous.


> >I was not talking about texture-mapped spheres. I was specifically
> >talking about using procedural shaders: something volumetric or based on
> >angle of incidence to the surface, or something more like the glow patch.
> 
> Aren't atmosphere/volumetric effects often faster with scanline methods (I
> don't know)?

You can fake them quickly using multiple layers of transparent surfaces, 
but getting anything realistic is nearly impossible and you can fake 
them exactly the same way with raytracing. Again, a hardware volume 
evaluator might speed things up greatly.
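The layered fake amounts to back-to-front "over" compositing, which either kind of engine can do equally well; a grayscale sketch (one channel for brevity, a real engine does this per color channel):

```cpp
#include <vector>

// The layered-transparency fake: composite semi-transparent slices
// back-to-front with the "over" operator. Neither a scanliner nor a
// raytracer doing this is computing a real scattering integral.
struct Layer { double color; double alpha; };

double composite(const std::vector<Layer>& back_to_front, double background) {
    double c = background;
    for (const Layer& l : back_to_front)
        c = l.alpha * l.color + (1.0 - l.alpha) * c;  // "over" operator
    return c;
}
```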


> They usually do, because the programmers know what sort of minimum memory
> requirements on the card they're prepared to tolerate. When they don't, you
> really notice. However, using predominantly procedural shaders you throw
> the bottleneck on the card's processor, and card memory is cheaper.

You can't keep all the data on the card, a lot of it has to be sent 
every frame. Bus bandwidth is considerably more expensive than card 
memory and more limited than processor power. You could reduce memory, 
use the extra space for some additional raytracer processors, and still 
have a card capable of far more geometry and more realistic rendering at 
perfectly reasonable frame rates.

-- 
Christopher James Huff <cja### [at] earthlinknet>
http://home.earthlink.net/~cjameshuff/
POV-Ray TAG: chr### [at] tagpovrayorg
http://tag.povray.org/



From: Ray Gardener
Subject: Re: The Meaning of POV-Ray
Date: 5 Jun 2003 15:44:16
Message: <3edf9d90$1@news.povray.org>
> But beware of developer's hubris, something we've seen a lot in this
> group...

Yeah, I've certainly seen some of that here and there. :)

I'm just speculating out loud what the implications might be if I
do get a scanlining patch working (however minimal at first).
The feature is fairly nontrivial... it's a significant
paradigm change for the entire program, and for a lot of people,
it changes the very nature of POV-Ray. I'm starting to debate
in my mind whether I should take ABX's suggestion and do
a totally different application, even if writing a parser
will be a headache. Maybe a simple app with a simple SDL,
just to demonstrate the idea in isolation at first.

I think hybrid rendering is the future, because you see the technique
being used more and more. Gritz developed Entropy at Exluna and
managed to sell a few copies before Pixar shut him down. The question
I'm perhaps asking is, has the time come for an open source/free
hybrid renderer? And if so, what's the most sensible route to it?
One thing I worry about is people saying "This is really nice,
but why on Earth didn't you make it a POV patch? Now I have
to maintain some scenes in one format and some in another."
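As a toy illustration of the hybrid idea (invented scene and names; not Entropy's or POV-Ray's architecture): rasterize primary visibility with edge functions, then trace a shadow ray from each covered pixel against an occluder:

```cpp
struct P { double x, y; };

// Twice the signed area of triangle (a, b, c); >= 0 means p is on the
// inside of edge a->b for a counterclockwise triangle.
static double edge(P a, P b, P c) {
    return (b.x - a.x) * (c.y - a.y) - (b.y - a.y) * (c.x - a.x);
}

// Hypothetical hybrid pass: a screen-space triangle is rasterized with edge
// functions (the scanline half), then each covered pixel fires a shadow ray
// straight toward a directional light, testing a sphere occluder (the
// raytracing half). Counts come back through the out-parameters.
void hybrid(int w, int h, int& covered, int& shadowed) {
    P a{1, 1}, b{6, 1}, c{1, 6};              // screen-space triangle at z = 0
    double sx = 3.5, sy = 3.5, r = 1.0;       // sphere between surface and light
    covered = shadowed = 0;
    for (int y = 0; y < h; ++y)
        for (int x = 0; x < w; ++x) {
            P p{x + 0.5, y + 0.5};
            // raster pass: inside test via the three edge functions
            if (edge(a, b, p) < 0 || edge(b, c, p) < 0 || edge(c, a, p) < 0)
                continue;
            ++covered;
            // ray pass: vertical shadow ray; hit if it pierces the sphere
            double dx = p.x - sx, dy = p.y - sy;
            if (dx * dx + dy * dy <= r * r) ++shadowed;
        }
}
```

The interesting part is the division of labor: cheap coherent work (primary visibility) goes to the scanliner, and the effects scanlining fakes badly (shadows, reflection) go to rays.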


> i.e. announcements/proposals from programmers about radical features
> supposed to change the future of Pov-Ray, but that eventually came to nought
> for various reasons, the main one being that the developers weren't familiar
> enough with POV-Ray itself and how it is actually used.

I can believe that. For what it's worth, I've given the issue a
lot of thought prior to this thread. I've done several large
projects (Corel Draw for Mac, a PostScript interpreter, a vector
file format, Leveller, and an internal tool for EA I can't
talk about), and I fully appreciate what the logistics are.
I wouldn't do this if I felt I wasn't doing it properly or
was just going to stop halfway through.



> My only advice would be for you to start using POV-Ray yourself to create
> complex scenes and animations (I'm not talking about demo scenes, but
> images created for the IRTC or another real-life purpose) and then rethink
> your patch from this experience.

It's a good idea, and I'd love to, but I'm afraid I don't have the time.
I did do some map diagram work for a music festival once
with POV-Ray, so I'm not too unfamiliar with it I hope.

Ray



From: Tom Galvin
Subject: Re: The Meaning of POV-Ray
Date: 5 Jun 2003 15:58:37
Message: <Xns9391A2785E019tomatimporg@204.213.191.226>
"Ray Gardener" <ray### [at] daylongraphicscom> wrote in
news:3edf9d90$1@news.povray.org: 

>> created for the IRTC or another real-life purpose) and then rethink
>> your patch from this experience.
>
> It's a good idea, and I'd love to, but I'm afraid I don't have the
> time.

That's actually taking the long way around. Gilles' suggestion will
actually save you time if you are looking to patch POV-Ray.



From: Patrick Elliott
Subject: Re: Scanline rendering in POV-Ray
Date: 5 Jun 2003 17:04:45
Message: <MPG.19495df84baf0efe98981a@news.povray.org>
In article <web.3edf071ec1329458541c87100@news.povray.org>, 
tom### [at] compsocmanacuk says...
> Speed and complexity (hence cost). Current cards (at the high end for games)
> can cost a couple of hundred dollars. Any simplification that can be made
> can save cost, and having a card that doesn't need to switch over from
> using triangles to drawing boxes halfway through a scene is good for
> efficiency.
Cost will always be high in the first generation of any new technology. This
is a given. As for the card changing the way it does things when it goes from
a mesh to a box, any implementation would optimize this, assuming of course
that calculating a single point on the surface of a triangle defined by three
points is 'really' that incredibly different from calculating a point on the
surface of a sphere or box. They both require very similar calculations. I am
not sure how you would get a major change in speed from whatever minor
transition actually happens.
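The "very similar calculations" claim holds up: both intersection routines below reduce to a handful of dot and cross products (standard textbook formulations, sketched here as an illustration):

```cpp
#include <cmath>

struct V { double x, y, z; };

static double dot(V a, V b) { return a.x * b.x + a.y * b.y + a.z * b.z; }
static V sub(V a, V b) { return {a.x - b.x, a.y - b.y, a.z - b.z}; }
static V cross(V a, V b) {
    return {a.y * b.z - a.z * b.y, a.z * b.x - a.x * b.z,
            a.x * b.y - a.y * b.x};
}

// Ray-sphere: substitute the ray into |p - c|^2 = r^2 and solve the
// quadratic. Assumes the direction d is unit length.
bool hit_sphere(V o, V d, V c, double r, double& t) {
    V oc = sub(o, c);
    double b = dot(oc, d);
    double disc = b * b - (dot(oc, oc) - r * r);
    if (disc < 0) return false;
    t = -b - std::sqrt(disc);                   // nearest root
    return t > 0;
}

// Ray-triangle (Moller-Trumbore): also just dots and crosses.
bool hit_triangle(V o, V d, V a, V b, V c, double& t) {
    V e1 = sub(b, a), e2 = sub(c, a);
    V p = cross(d, e2);
    double det = dot(e1, p);
    if (std::fabs(det) < 1e-12) return false;   // ray parallel to triangle
    double inv = 1.0 / det;
    V s = sub(o, a);
    double u = dot(s, p) * inv;
    if (u < 0 || u > 1) return false;
    V q = cross(s, e1);
    double v = dot(d, q) * inv;
    if (v < 0 || u + v > 1) return false;
    t = dot(e2, q) * inv;
    return t > 0;
}
```

Hardware that already has fast dot/cross/multiply-add units for one of these is most of the way to the other.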

> I don't think a raytracer on a card
> designed for realtime game use has to solve exactly the same problems as a
> top-level raytracer for non-realtime use. In the same way that the Quake
> engine and 3D Studio aren't kin.
> 
Obviously, however, neither you nor I know exactly what set of features or
capabilities would really need to be supported or would make sense to
support, or even necessarily how. All I have is POV-Ray as an example of a
top-level raytracer; stripped down to the bare bones, it might be able to
produce decent real-time performance by itself in a 640x480 mode (maybe even
at higher resolutions). However, you would have to severely strip it down to
do so.
As it is, most of the time spent is parsing the SDL, which in the 3D card 
would be replaced by feeding it much more specific data that wouldn't 
require parsing. Even the bounding boxes could be pre-defined and 
supplied as part of the object. If you eliminate the #1 biggest time 
waster and then optimize for those features most useful in a real time 
game... It's not like there are a lot of examples of stuff like this that
people have written. I think there were a few that used such predefined
objects and pre-calculated information to do real-time demos for the
Apple IIgs in the 80s, and they managed a frame rate good enough for a
simple game on a 2.5 MHz system. Are you honestly telling me that someone
couldn't do a thousand times better on even a 500 MHz machine, let alone a
1 GHz one? And that is just running it as software.

The key issues are development time, cost of the product, and backward
compatibility. The last item is the one that would bury any attempt made
right now to do it.

> >Well, that kind of describes most of the new cards that come out. lol
> >Yes, it would need compatibility with the previous systems, but that
> >isn't exactly an impossibility.
> 
> It is more difficult when you have completely changed the philosophy behind
> the card, but want it to remain compatible with the previous philosophy.
> You don't agree?
> 
Not necessarily. OpenGL's structure for storing objects differs from
POV-Ray's SDL, for example, but the underlying data is more or less
identical. You need a converter only because no native support exists to
load such an object. If you designed a new card, nothing would stop you from
incorporating the ability to support the same structures that previous
cards already use. There is no practical reason not to do so.

> >There is nothing to prevent using a second chip dedicated to processing
> >such things and having it drop the result into a block of memory to be
> >used like a 'normal' bitmap. This assumes that the speed increase gained
> >by building the rendering engine into the card wouldn't offset the time
> >cost of the procedural texture. In any case, there are ways around this
> >issue, especially if such methods turn out to already be in use on the
> >newer DirectX cards.
> 
> Then aren't you going to lose the advantage of generating textures on the
> card? If I generate a bitmap by procedure or by artist and subsequent
> loading, I must still store it. Newer cards do procedural shading on a
> pixel as it's rendered (or so I thought), so no extra storage is required.
> 
This contradicts your previous suggestion that somehow using such
procedural systems is tied to complexity and that a bitmap has added
advantages. If that were true, then there would be no reason not to simply
generate a bitmap from the procedural texture and use it. Now you say
this isn't needed, since newer cards already do what I said..?

-- 
void main () {

    call functional_code()
  else
    call crash_windows();
}



From: Gilles Tran
Subject: Re: The Meaning of POV-Ray
Date: 5 Jun 2003 19:04:05
Message: <3edfcc65@news.povray.org>

3edf9d90$1@news.povray.org...
>
>The question I'm perhaps asking is, has the time come for an open source/free
>hybrid renderer? And if so, what's the most sensible route to it?

You could also consider adding raytracing to open source scanline renderers
such as Blender and OpenFX, which are after all already more suited to
animation (and particularly character animation) than POV-Ray since both
come with a native GUI modeller.

> It's a good idea, and I'd love to, but I'm afraid I don't have the time.

Well, what's the hurry? We're talking raytracing after all; time is not an
issue ;)
Take 3-4 months off, play with the features, try to make nice pictures, show
them here. I certainly trust your competence as far as software development
per se is concerned, but I think that the objectives for such an ambitious
project will be much better defined if you get a first-hand knowledge of
what it is to work and develop real scenes with POV-Ray. Only then will you
fully understand what the software can/cannot do, not from a theoretical
point of view but from a practical one.
There are lots of YARs (Yet Another Renderer) and YAMs (Yet Another
Modeller) out there and I wish the developers had taken the time to
understand what people needed before wasting their own time (and the time of
their users)...

G.


--
**********************
http://www.oyonale.com
**********************
- Graphic experiments
- POV-Ray and Poser computer images
- Posters




Copyright 2003-2023 Persistence of Vision Raytracer Pty. Ltd.