POV-Ray : Newsgroups : povray.general : zRCube (POV Clone) : Re: zRCube (POV Clone)
  Re: zRCube (POV Clone)  
From: Warp
Date: 31 May 2001 10:02:16
Message: <3b164ee8@news.povray.org>
This is an interesting case of "our program is better than POV-Ray" when it
isn't. Fancy words, but mostly crap.

  "zRcube aims to be more modern than pov, using more recent algorithms."

  This sums up the whole crap: it assumes that POV-Ray is old, that it
uses old algorithms, and that it is not developed anymore.
  What is old in POV-Ray? Take any modern raytracer and its basic algorithms
will be the same as in POV-Ray.
  Moreover, POV-Ray even uses more modern and efficient algorithms than many
other raytracers (such as LightFlow etc). These include vista buffers, light
buffers, automatic bounding, polynomial objects, adaptive antialiasing and so
on. The upcoming 3.5 will have even more sophisticated features, such as
functions, isosurfaces and photon mapping. The many optimizations in POV-Ray
often make it faster than other raytracers on similar scenes.
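  To give an idea of what adaptive antialiasing means in practice, here is a
minimal toy sketch (all names, the neighbour test and the threshold are
invented for illustration; POV-Ray's actual implementation differs): the idea
is to supersample a pixel only when its color differs noticeably from a
neighbour's, instead of supersampling every pixel.

```python
def color_distance(a, b):
    # Manhattan distance between two RGB colors with components in [0, 1].
    return sum(abs(x - y) for x, y in zip(a, b))

def adaptive_aa(sample, width, height, threshold=0.3, subsamples=4):
    """sample(x, y) -> (r, g, b); returns the image as a list of rows."""
    # First pass: one sample per pixel.
    coarse = [[sample(x, y) for x in range(width)] for y in range(height)]
    image = []
    for y in range(height):
        row = []
        for x in range(width):
            c = coarse[y][x]
            # Supersample only where the color jumps relative to the
            # left or upper neighbour.
            needs_aa = (
                (x > 0 and color_distance(c, coarse[y][x - 1]) > threshold) or
                (y > 0 and color_distance(c, coarse[y - 1][x]) > threshold)
            )
            if needs_aa:
                # Average an n*n grid of subpixel samples.
                n = subsamples
                acc = [0.0, 0.0, 0.0]
                for j in range(n):
                    for i in range(n):
                        s = sample(x + (i + 0.5) / n - 0.5,
                                   y + (j + 0.5) / n - 0.5)
                        acc = [a + b for a, b in zip(acc, s)]
                c = tuple(v / (n * n) for v in acc)
            row.append(c)
        image.append(row)
    return image
```

  On a smooth image this does almost no extra work; only pixels near edges
pay the supersampling cost.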

  "The idea behind this project is that most of available free
rendering software were designed a long time ago, when computers were
really slower than now, and consequently these engines were a
compromise between rendering quality and speed. But now, our computers
are fast, maybe too fast for what we use them for... I don't care if I
spent 40 seconds or 25 secs for my final rendering if the final result
is better for the longer rendering..."

  The crap continues.
  For some reason they assume that image quality was compromised in order
to get faster rendering. I really don't understand where they got that
idea.
  What I understand from that paragraph is that they think the developers
of POV-Ray thought something like "ok, we could do this to make it look
better, but it would be very slow, so we'll just shave something off the
quality in order to get more speed". This makes no sense. I would like to
hear some examples of features in POV-Ray where this kind of compromise was
made. Which feature could look better if we didn't worry about rendering
time (and which you have no way of tuning so that you could get the better
result)?

  They also have this funny notion that on current computers the scenes
people make usually take much less than one minute to render.
What crap. You can look at almost any high-quality image (no matter
which renderer) and see how much time it took to render: the render
time of a final rendering is usually several hours, no matter what
computer was used. For example, most IRTC images take many hours to
render (often even several days).
  What they are saying in the paragraph above is "since on current computers
scenes take less than a minute to render, it doesn't matter if it takes
half a minute or one minute". This is crap. If you are making a good-quality
final render, it will usually take hours.
  Besides, this has a funny drawback: What they are actually saying is
that their renderer is 1.6 times slower than POV-Ray. This means that
something that takes 1 hour to render with POV-Ray will take more than
1 hour and a half to render with their program. Not too positive. Not
something that encourages you to switch to their program.
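  Their own numbers make that ratio explicit; a trivial calculation using
the 40 s and 25 s figures from the quoted text:

```python
# Speed ratio implied by the quoted "40 seconds or 25 secs" figures.
slow, fast = 40, 25
ratio = slow / fast               # 1.6: their renderer is 1.6x slower

# Scale a one-hour POV-Ray render by the same ratio.
pov_minutes = 60
other_minutes = pov_minutes * ratio   # 96 minutes, i.e. 1 h 36 min
```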

  "When we began to think about this project, we first said:
'radiosity is a great algorithm for illumination, it makes no
compromises and the pictures rendered using it are really realistic'.
So we have to use it. Then, one of us said 'But we cannot handle refraction
and reflexion with radiosity, only ray-tracing can do that'."

  I really don't understand why everyone speaks about "radiosity" as if
it were a _rendering_ algorithm. That is, as if you _rendered_ a scene with
the "radiosity" algorithm and got a final image.
  I know how the "radiosity" algorithm works and I have seen how it is
calculated (at the level of the mathematical formulas).
  "Radiosity" is _NOT_ a rendering technique. I just don't understand why
everyone speaks about it as if it were.
  What "radiosity" does is to calculate the illumination of the surfaces.
That is, it calculates how a surface (or a point on the surface, depending
on how accurately it is implemented) is illuminated by all the other
surfaces in the scene.
  That is, "radiosity" just calculates illumination values. It does _NOT_
project the surfaces on screen, it does _NOT_ write anything to the screen
(or an image file for that matter). You _DON'T_ get a final image from
the radiosity calculations, just illumination values (for example in
the form of a light map or something similar).
  In order to get the final image from the polygons you have to use a
rendering algorithm. The most viable ones are scanline rendering (which
is the most commonly used) and raytracing (which isn't very rare in
radiosity renderings either).
  That is, the polygons are rendered with scanline rendering or raytracing,
taking into account the lighting values given by the radiosity algorithm.

  (People who talk about "radiosity" as a rendering technique often oppose
it to raytracing, as if they were comparable and mutually exclusive
techniques. This is just plain laughable. Radiosity just calculates
illumination values; it couldn't care less what rendering algorithm is
actually used to draw the polygons on screen; it can be scanline rendering
or raytracing; it doesn't care.)
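  To make the separation concrete, here is a toy sketch (the patch format,
the gathering loop and the stand-in "renderer" are all invented for
illustration, not any real implementation): one pass computes illumination
values only, and a completely independent rendering pass projects surfaces
to pixels using those stored values.

```python
def radiosity_pass(patches, iterations=50):
    """patches: list of dicts with 'emission', 'reflectance' and
    'form_factors' (how much each other patch is seen from this one).
    Returns illumination values only -- no image is produced here."""
    b = [p["emission"] for p in patches]  # initial radiosity = emission
    for _ in range(iterations):
        # Gathering step: B_i = E_i + rho_i * sum_j F_ij * B_j
        b = [
            p["emission"] + p["reflectance"] *
            sum(f * b[j] for j, f in enumerate(p["form_factors"]))
            for p in patches
        ]
    return b  # just numbers: a light map, not a picture

def render_pass(patches, illumination, width):
    """Stand-in for the actual renderer (scanline or raytracer): for
    each 'pixel', find the visible patch and shade it with the
    precomputed illumination value."""
    image = []
    for x in range(width):
        hit = x * len(patches) // width   # fake visibility lookup
        image.append(illumination[hit])
    return image
```

  Note that swapping `render_pass` for a real raytracer or a scanline
renderer would change nothing in `radiosity_pass` -- which is exactly the
point.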

  I personally think of this kind of people as ignorant. And these guys
seem to fall into that category.


  By the way, looking at their "complete features" list is quite funny.
  Their "modern" raytracer lacks most of POV-Ray's "old" (and perhaps
obsolete?) features, such as CSG, media, matrix transformations, most
objects, most camera types, normal modifiers, antialiasing and almost every
flow control feature of the POV-Ray SDL (i.e. #if, #while, #switch, #macro
and so on).

  Many of those omissions are probably explained by the fact that they
use polygons, and some things (e.g. CSG) are quite hard to tessellate.
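  As an aside on why CSG is so natural in a raytracer: along a ray, each
solid primitive yields "inside" intervals, and CSG boolean operations are
just set operations on those intervals -- something a polygon mesh simply
doesn't offer. A toy sketch (the interval lists are hypothetical (t0, t1)
pairs of entry/exit distances along one ray):

```python
def intersect_intervals(a, b):
    """CSG intersection: overlap of two sorted interval lists."""
    out = []
    for a0, a1 in a:
        for b0, b1 in b:
            lo, hi = max(a0, b0), min(a1, b1)
            if lo < hi:
                out.append((lo, hi))
    return sorted(out)

def subtract_intervals(a, b):
    """CSG difference a - b of sorted interval lists."""
    out = a
    for b0, b1 in b:
        nxt = []
        for a0, a1 in out:
            if a1 <= b0 or a0 >= b1:      # no overlap: keep as-is
                nxt.append((a0, a1))
            else:                          # clip the overlap away
                if a0 < b0:
                    nxt.append((a0, b0))
                if a1 > b1:
                    nxt.append((b1, a1))
        out = nxt
    return sorted(out)
```

  The first visible surface of the CSG object is simply the start of the
first remaining interval; doing the same with tessellated meshes requires
robust mesh booleans, which are notoriously hard.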

  Modern my ass.



  PS: Yes, every time someone makes a stupid claim of something being "better"
than POV-Ray, I see red.
  This is not because I think POV-Ray is perfect and the best renderer
possible in every aspect, but because I consider it completely stupid
to compare different renderers and declare that one is better than
the other. Every renderer is good in its own field of expertise and
good for certain things. There's no such thing as a renderer
which is better than another renderer.
  Especially people of the "I have made a renderer better than POV-Ray"
type are extremely irritating.

-- 
#macro N(D,I)#if(I<6)cylinder{M()#local D[I]=div(D[I],104);M().5,2pigment{
rgb M()}}N(D,(D[I]>99?I:I+1))#end#end#macro M()<mod(D[I],13)-6,mod(div(D[I
],13),8)-3,10>#end blob{N(array[6]{11117333955,
7382340,3358,3900569407,970,4254934330},0)}//                     - Warp -


