POV-Ray : Newsgroups : povray.off-topic : CGI
From: Orchid Win7 v1
Subject: CGI
Date: 24 Jun 2012 17:20:10
Message: <4fe7848a$1@news.povray.org>
When I first got into computer graphics, "3D" meant a white wire-frame 
on a black background. Because that's just about all the video hardware 
could handle.

Later on, I got to work with programs like Expert 4D, Imagine 3D, Real 
3D, Cinema 4D and so on. All of these run on the Commodore Amiga, and 
(with the exception of Real 3D) they all draw polygons with texturing 
and light sourcing, and maybe shadows. Mostly they are distinguished by 
their particular mesh editing abilities and texturing options. Imagine 
3D has a special "ray tracing" mode, in which reflections work (but only 
if the objects are added to the scene in the correct order), and sadly 
it makes the render about 45x slower.

Remember that the Amiga is powered by the Motorola 68000 series, and may 
or may not have an FPU installed. My dad's Amiga 600 took about 2 hours 
to render a single torus mesh with one point light source and a simple 
wood-grain procedural texture. It was slow times, my friend, slow times. 
(Amiga 600 = M68000 running at 7.09 MHz and no FPU.)

Real 3D allows spherical spheres, conical cones, and so forth, plus it 
has CSG. It also has a nasty GUI that makes it really fiddly to actually 
select the thing you want. The named hierarchy of objects is useful though.

Then along came the rise of graphics cards, and with them a whole 
industry of drawing texture-mapped polygons really, really fast. I 
played all of Quake II without a 3D graphics card. Because, remember, 
back then this stuff cost serious money. (Our PC was expensive enough 
when we bought it - on finance, obviously.) Nobody has that kind of 
money lying around. Well, not in 1995 anyway.

I still remember playing Quake II with hardware acceleration for the 
first time, and being amazed at the glory of trilinear filtering and 
coloured light sources. (And framerates exceeding 10 FPS.) If that 
sounds silly, try playing without those things, and you'll quickly see 
what I mean!

(I was also surprised when the game /didn't/ take 20 minutes to load 
each level. I had assumed it was normal for it to take so long; 
apparently it was just our PC's 16MB of RAM causing massive swapping!)

And then, of course, I discovered POV-Ray. While computer games amuse 
themselves drawing polygons really, really fast, POV-Ray directly 
simulates things like curved surfaces, 3D textures, reflection, 
refraction, and so forth.

I wrote elaborate Pascal programs to draw wireframe models, to perform a 
backface cull, and to do simple Gouraud shading, do rotations, and so 
forth. In game graphics, more sophisticated trickery is used to fake 
curved surfaces, rough textures, and even reflections.

POV-Ray uses no such trickery. It just directly solves the rendering 
problem, in a simple and elegant way. Every now and then somebody would 
pop up and say "why bother with POV-Ray when a modern graphics card can 
easily out-do it?" But the example images offered never looked anywhere 
near as good as what a few lines of code with POV-Ray could produce.

The SDL user interface is radical in its own way too. With a visual 
modeller, it would be ridiculously difficult to position a sphere so 
that it exactly coincides with the end of a cylinder. With SDL, it's 
trivial. Complex shapes can be built up using nothing but perfect 
geometric primitives. And if you apply a wood texture, surfaces with 
different orientations show different grain in the correct way. You 
can't do /that/ with 2D pixel maps. And let's not forget, of course, 
that if bump maps just don't cut it for you, with an isosurface you can 
make genuinely curved geometry. (Although the render time explodes...)

Then of course, computers keep getting faster. POV-Ray added photon 
maps. It added radiosity. It added area lights. It added focal blur, 
atmospheric media, and dispersion. Ever more rendering power, requiring 
ever longer to draw, but computers keep getting faster.

(The old benchmark, skyvase.pov, used to take /hours/ to render. My 
current PC rips through it at previously unthinkable resolutions in 
just a second or two.)



Now, however, I sense that the tables have turned. Once, it was the 
scanline renderers that went through all sorts of elaborate tricks to 
simulate constructs that POV-Ray just renders directly, physically 
correctly, without any tricks or gimmicks. POV-Ray simply simulates 
the laws of optics, and all these individual effects are just an 
automatic result of that.

But now we have the so-called unbiased renderers, the path tracers, 
whatever you want to call them. With POV-Ray I need to configure photon 
maps and set up area lights and waste hours tweaking and tuning 
radiosity parameters. And then I see a demo that runs IN A FRIGGING WEB 
BROWSER which uses my GPU to almost instantly render a Cornell box that 
would take multiple hours to draw with POV-Ray.

Now the boot is on the other foot: These renderers just shoot lots and 
lots of rays in all directions, and average the results. The longer you 
run it, the sharper the image becomes. No tweaking parameters, no 
setting up different simulations for different aspects of the result. 
Just tell it what to draw, and wait until it looks good enough. All the 
effects fall out of the simulation; they don't have to be configured 
individually.
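That averaging is plain Monte Carlo integration, and it's why the image sharpens as it runs: the error falls roughly as 1/sqrt(N). A toy sketch (the "sample" here is just a noisy stand-in for one traced path's contribution, not real rendering):

```python
import random

def estimate_pixel(n_samples, seed=0):
    # Stand-in for one pixel: each "sample" plays the role of one
    # traced path's contribution. The true answer here is 0.5, and
    # each sample is just uniform noise around it.
    rng = random.Random(seed)
    return sum(rng.random() for _ in range(n_samples)) / n_samples

# More samples -> less speckle: the error shrinks roughly as
# 1/sqrt(N), which is why the picture improves quickly at first
# and then ever more slowly (the last bit of grain takes forever).
```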

I will freely admit, however, that almost all of the actual demo images 
are still suspiciously grainy. Which suggests that for "real" scenes, 
rendering still isn't instant. For example,

http://tinyurl.com/87oyfmh

You'd need some insane radiosity settings to get that out of POV-Ray. 
I'm not sure if it's even /possible/ to get that glow around the light 
sources. On the other hand, no one can deny the image is slightly grainy 
in places.

And then I see something like this:

http://tinyurl.com/7r72o7k

At first glance, this is an amazing, near-photographic image. And then I 
notice THE POLYGONS! IT BURNS!!! >_< KILL IT WITH FIRE!

It seems that while these renderers have vastly superior lighting 
capabilities, they're still stuck in the obsolete technology of drawing 
lots and lots and lots of tiny flat surfaces and desperately trying to 
pretend that they're actually curved. Yuk!

If only there was a way I could use [something like] SDL to control a 
GPU-powered renderer that has all the great features of POV-Ray, yet has 
a modern illumination system...



Now here's an interesting question: Does anybody know how this stuff 
actually /works/?

In the simplest form, it seems that for each pixel, you shoot a ray 
backwards, and let it bounce off things at random angles (subject to how 
rough or smooth the materials are) until it finds a light source. For 
one ray, the result is pretty random. But average together a few 
bazillion rays, and slowly the random speckle converges to a beautiful, 
smooth (but slightly grainy) image.
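A heavily stripped-down sketch of that loop - no geometry or actual optics at all, just the bounce-until-you-find-a-light structure, with the albedo and hit probabilities invented purely for illustration:

```python
import random

def trace_path(rng, albedo=0.5, p_hit_light=0.2, light_emission=1.0,
               max_bounces=16):
    # One backward path: bounce at random until we stumble into a
    # light (probability p_hit_light per bounce stands in for the
    # geometry), losing energy to the surface albedo at each bounce.
    throughput = 1.0
    for _ in range(max_bounces):
        if rng.random() < p_hit_light:
            return throughput * light_emission  # found a light: done
        throughput *= albedo
    return 0.0  # gave up without ever finding a light

def render_pixel(n_samples, seed=1):
    # Any single path is "pretty random"; the pixel is their average.
    rng = random.Random(seed)
    return sum(trace_path(rng) for _ in range(n_samples)) / n_samples
```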

Trouble is, depending on what your light sources are like, the vast 
majority of rays will never hit one. It's an old problem; you can start 
from the lights and worry about never hitting the camera, or you can 
start from the camera and worry about never hitting the lights. POV-Ray 
and its ilk solve this problem by "knowing" exactly where all the light 
sources are and shooting rays directly at them. But this misses any 
light reflected from other surfaces - even trivial specular reflections. 
(Hence, photon maps or radiosity.)
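The pay-off from "knowing" where the lights are can be shown with a toy estimator: both strategies below compute the same direct lighting on average, but aiming straight at a small light has vastly less variance than shooting rays at random and hoping. All the numbers are made up:

```python
import random

LIGHT_FRACTION = 0.01  # fraction of directions that hit the small light
LIGHT_RADIANCE = 5.0
# True direct lighting at this point: 0.01 * 5.0 = 0.05

def blind_sampling(rng):
    # Shoot in a random direction and hope. Almost every sample is 0,
    # with a rare huge spike -> correct mean, huge variance (grain).
    return LIGHT_RADIANCE if rng.random() < LIGHT_FRACTION else 0.0

def light_sampling(rng):
    # "Know" where the light is and aim straight at it, weighting by
    # how much of the sky it covers -> same mean, zero variance here.
    return LIGHT_RADIANCE * LIGHT_FRACTION

def mean_and_variance(estimator, n=100000, seed=2):
    rng = random.Random(seed)
    xs = [estimator(rng) for _ in range(n)]
    m = sum(xs) / n
    return m, sum((x - m) ** 2 for x in xs) / n
```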

You can force it so the last bounce /always/ hits a known light source. 
(Or, the other direction, the last ray always hits the camera.) 
Apparently "bidirectional path tracing" works by starting one end from 
the camera, the other end from a light source, and forcing them to 
connect in the middle.

Wikipedia asserts "Contrary to popular myth, Path Tracing is /not/ a 
generalisation of ray tracing." This statement makes no sense 
whatsoever. :-P

Then there's this whole idea of "unbiased rendering". The general idea 
is that, while any individual ray shot might produce any old colour, /on 
average/ each pixel's colour will converge to the physically correct 
one. The term comes from statistics. A statistical estimator may or may 
not be "consistent", and may or may not be "unbiased". Does /anybody/ 
understand the difference? Because I sure as hell don't!
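For what it's worth, the textbook example of the distinction: estimating the upper bound of a Uniform(0, theta) distribution. One estimator is unbiased (right on average at any sample size), the other is biased but consistent (always low on average, yet converging to the truth as the sample grows). A sketch, nothing renderer-specific:

```python
import random

def sample_data(n, theta=1.0, seed=0):
    # n draws from Uniform(0, theta); we pretend theta is unknown.
    rng = random.Random(seed)
    return [rng.random() * theta for _ in range(n)]

def est_unbiased(xs):
    # 2 * sample mean: correct ON AVERAGE for any sample size n
    # (unbiased), and it also converges as n grows (consistent).
    return 2.0 * sum(xs) / len(xs)

def est_consistent(xs):
    # Sample maximum: underestimates theta on average for every
    # finite n (biased: for n = 3 its average is only 3/4 * theta),
    # yet it converges to theta as n grows (consistent).
    return max(xs)

def average_over_trials(estimator, n, trials=20000):
    # Average an estimator over many independent small samples,
    # to expose its bias (or lack of it).
    return sum(estimator(sample_data(n, seed=t))
               for t in range(trials)) / trials
```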

Then today, I made the grave mistake of attempting to understand 
"Metropolis light transport".

http://graphics.stanford.edu/papers/metro/gamma-fixed/metro.pdf

Does this paper make /any/ sense to anybody?

In particular, the basic idea is that, rather than tracing /random/ ray 
paths, you start with a random path, and then gradually "adjust" it, 
such that each path is similar to the previous one. This immediately 
raises two important questions:

1. How does this not utterly bias the calculation, greatly favouring one 
set of paths rather than exploring the entire path space?

2. Why does this produce superior results?

The actual mathematics is too dense for me to follow. In particular, 
it's difficult to follow the physical significance of some of the terms 
and notations.
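As far as I can tell, the answer to question 1 is the accept/reject rule: a mutation is accepted with probability proportional to how much light the new path carries, and that is exactly what keeps the long-run distribution unbiased. A toy 1D Metropolis sampler (the target f is just a Gaussian standing in for "path brightness", nothing to do with actual light transport):

```python
import math
import random

def f(x):
    # Unnormalised target: think "how much light path x carries".
    # Here just a Gaussian bump, purely for illustration.
    return math.exp(-0.5 * x * x)

def metropolis(n_steps, step=2.0, seed=3):
    rng = random.Random(seed)
    x, samples = 0.0, []
    for _ in range(n_steps):
        y = x + rng.uniform(-step, step)  # a "mutation" of the path
        # Accept with probability min(1, f(y)/f(x)); on rejection we
        # KEEP the old sample. This rule is what stops the correlated
        # mutations from biasing the result: in the long run, x visits
        # each region in proportion to f.
        if rng.random() < f(y) / f(x):
            x = y
        samples.append(x)
    return samples
```

Why it helps (question 2): once a mutation finds a bright, hard-to-reach path, its neighbours get explored too, instead of being rediscovered from scratch by blind luck.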

I'd almost be tempted to write a renderer myself - except that running 
on a CPU, it will undoubtedly take several billion millennia to draw 
anything. (There's a /reason/ nobody has tried this until today...)



From: nemesis
Subject: Re: CGI
Date: 24 Jun 2012 20:10:01
Message: <web.4fe7ac42f858a57f8372724d0@news.povray.org>
Andy, andy... you won't be getting much popularity here tossing around these
rough arguments...

Orchid Win7 v1 <voi### [at] devnull> wrote:
> And then, of course, I discovered POV-Ray.

you discovered povray after Cinema 4D and quake... okey, dokey...

> While computer games amuse
> themselves drawing polygons really, really fast, POV-Ray directly
> simulates things like curved surfaces, 3D textures, reflection,
> refraction, and so forth.

yeah, now put povray to directly simulate the curved surfaces of a woman's body.

You may use parametric surfaces if writing the right isosurface function is
proving too much of a challenge.


> But now we have the so-called unbiased renderers, the path tracers,
> whatever you want to call them. With POV-Ray I need to configure photon
> maps and set up area lights and waste hours tweaking and tuning
> radiosity parameters. And then I see a demo that runs IN A FRIGGING WEB
> BROWSER which uses my GPU to almost instantly render a Cornell box that
> would take multiple hours to draw with POV-Ray.

Truth be told, a cornell box today doesn't take even a minute in povray...

> I will freely admit, however, that almost all of the actual demo images
> are still suspiciously grainy. Which suggests that for "real" scenes,
> rendering still isn't instant. For example,
>
> http://tinyurl.com/87oyfmh

Scenes like that may take several hours on CPU, just like raytraced scenes in
the old days.

> You'd need some insane radiosity settings to get that out of POV-Ray.
> I'm not sure if it's even /possible/ to get that glow around the light
> sources.

the glow most likely is just a post-processing 2D effect.

> On the other hand, no one can deny the image is slightly grainy
> in places.

I prefer film grain rather than ugly splotches in every corner.  Also, a night
shot like that will look noisy in an actual photograph too.  You either get the
best lenses possible or let it render for more time to gather more light
samples.


> And then I see something like this:
>
> http://tinyurl.com/7r72o7k
>
> At first glance, this is an amazing, near-photographic image. And then I
> notice THE POLYGONS! IT BURNS!!! >_< KILL IT WITH FIRE!

I don't notice any polygons in particular, but the surfaces are all too flat and
regular, betraying it as CG.  Luxrender, like povray these days, is
conspicuously short on true artists, as opposed to programmers trying to show off
the software.

That scene, I believe, took about 1 day or 2 to be that smooth if memory serves
me well...

This is a better scene, from an artist I truly enjoy, Enrico Cerica:

http://enricocerica.cgsociety.org/gallery/962953/

Modeled in Blender, rendered in Octane, a commercial path tracer that runs
exclusively on GPU.

Truly a photograph of a virtual environment.


> Now here's an interesting question: Does anybody know how this stuff
> actually /works/?
>
> In the simplest form, it seems that for each pixel, you shoot a ray
> backwards, and let it bounce off things at random angles (subject to how
> rough or smooth the materials are) until it finds a light source. For
> one ray, the result is pretty random. But average together a few
> bazillion rays, and slowly the random speckle converges to a beautiful,
> smooth (but slightly grainy) image.

http://www.pbrt.org/

The book explains all and source code for the renderer is GPLed.  In fact, it's
the base for Luxrender.


> Wikipedia asserts "Contrary to popular myth, Path Tracing is /not/ a
> generalisation of ray tracing." This statement makes no sense
> whatsoever. :-P

To me pathtracing looks as much of a generalisation of raytracing as raytracing
was of raycasting (which BTW, is progressively being more used in games for
things like LOD management).  But I believe they were talking about how it is
not simply raytracing with (possibly) infinite ray bouncing.


> Does this paper make /any/ sense to anybody?

certainly not to someone without the particular math and rationale insight. :)

> I'd almost be tempted to write a renderer myself - except that running
> on a CPU, it will undoubtedly take several billion millennia to draw
> anything. (There's a /reason/ nobody has tried this until today...)

No, just a few hours or days like povray back in 1990's.



From: Patrick Elliott
Subject: Re: CGI
Date: 25 Jun 2012 02:52:30
Message: <4fe80aae$1@news.povray.org>
On 6/24/2012 2:20 PM, Orchid Win7 v1 wrote:
> The SDL user interface is radical in its own way too. With a visual
> modeller, it would be ridiculously difficult to position a sphere so
> that it exactly coincides with the end of a cylinder. With SDL, it's
> trivial.

Number one irritation that I have with the damn things. In Wings 3D you can 
bring up the "literal location", or whatever, find the real centre, and 
then move your other object to coincide, but it would still be better if 
it combined that with the hierarchical system in, say, Moray, so you could 
just uncurve/move/rotate the damn thing relative to what you are trying 
to position it against, and fix the problem.

Still, you can get a version of Wings, from here:

http://s331378245.onlinehome.us/

which includes a mess of stuff for everything from greebling to Carve 
3D CSG functions (which appear to be buggy, as I reported, when you try 
to CSG an object/surface with a face, instead of where you cross "two" 
faces, i.e. a side). No idea if they have figured out where the bug in 
that is or not, but someone else made the code for the library, so...

But, this is the sort of thing that **should be** trivial, if the people 
that made the application took any thought, at all, to the idea that 
people might want to be at all precise about anything, instead of just 
throwing it in, and hoping. lol

BTW though-- if they can find the bugs, Carve 3D would be a crazy 
addition to Moray. ;)



From: Stephen
Subject: Re: CGI
Date: 25 Jun 2012 03:46:16
Message: <4fe81748@news.povray.org>
On 24/06/2012 10:20 PM, Orchid Win7 v1 wrote:
>
> The SDL user interface is radical in its own way too. With a visual
> modeller, it would be ridiculously difficult to position a sphere so
> that it exactly coincides with the end of a cylinder. With SDL, it's
> trivial.


In Moray you can type coordinates to get an exact location. In Bishop3D you can 
enter formulas as well. Both Moray and Bishop3D have a snap function 
when placing objects using the mouse.

-- 
Regards
     Stephen



From: Invisible
Subject: Re: CGI
Date: 25 Jun 2012 04:27:12
Message: <4fe820e0@news.povray.org>
On 25/06/2012 01:09 AM, nemesis wrote:
> Andy, andy... you won't be getting much popularity here tossing around these
> rough arguments...

I'm sure I can't be the only person feeling a little dissatisfied, or at 
least a bit concerned.

>> And then, of course, I discovered POV-Ray.
>
> you discovered povray after Cinema 4D and quake... okey, dokey...

What, that's a problem somehow?

I didn't say they were /created/ in that order, merely that I discovered 
them in that order. If I said that I discovered Queen before I 
discovered Bach, nobody would be the slightest bit surprised, despite 
the /several hundred years/ between these. :-P

>> While computer games amuse
>> themselves drawing polygons really, really fast, POV-Ray directly
>> simulates things like curved surfaces, 3D textures, reflection,
>> refraction, and so forth.
>
> yeah, now put povray to directly simulate the curved surfaces of a woman's body.
>
> You may use parametric surfaces if writing the right isosurface function is
> proving too much of a challenge.

Drawing such a thing is equally impossible with POV-Ray or with a mesh 
renderer, so it's not a useful comparison. (Although, maybe if I had 
infinite point-cloud dartah... :-P )

Saying that, you might be able to model something acceptable with blobs. 
Or maybe if POV-Ray supported spline surfaces...

>> But now we have the so-called unbiased renderers, the path tracers,
>> whatever you want to call them. With POV-Ray I need to configure photon
>> maps and set up area lights and waste hours tweaking and tuning
>> radiosity parameters. And then I see a demo that runs IN A FRIGGING WEB
>> BROWSER which uses my GPU to almost instantly render a Cornell box that
>> would take multiple hours to draw with POV-Ray.
>
> Truth be told, a cornell box today doesn't take even a minute in povray...

Funny you should say that. I spent yesterday trying to set up an 
animation of a Cornell box, and it appears that to get stable lighting, 
I'm going to have to crank the radiosity settings up to the point where 
it takes about 30 minutes per frame...

>> I will freely admit, however, that almost all of the actual demo images
>> are still suspiciously grainy. Which suggests that for "real" scenes,
>> rendering still isn't instant.
>
> Scenes like that may take several hours on CPU, just like raytraced scenes in
> the old days.

There's a /reason/ POV-Ray does all this complicated radiosity trickery 
rather than just directly simulating the whole thing. It's to make it 
render in less than a decade. ;-) The trick, it seems, is to use the GPU 
to make it faster.

>> You'd need some insane radiosity settings to get that out of POV-Ray.
>> I'm not sure if it's even /possible/ to get that glow around the light
>> sources.
>
> the glow most likely is just a post-processing 2D effect.

Can you actually do that? I mean, automatically figure out all the parts 
of the image which are bright enough to need glow? I suppose the only 
way it would fail is if there's a curved reflection of something that 
glows; the glow should be curved, but if it's post-processed, it 
wouldn't be.
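For what it's worth, a naive screen-space bloom really is just threshold-and-blur, and being purely 2D is exactly why it can't follow a curved reflection. A sketch with invented parameter names, on a grey-scale HDR image stored as a list of rows of floats:

```python
def bloom(image, threshold=1.0, radius=1, strength=0.5):
    # Naive 2D bloom: extract pixels brighter than `threshold`, blur
    # the excess with a box filter, and add the result back on top.
    # HDR input: values may exceed 1.0.
    h, w = len(image), len(image[0])
    bright = [[max(v - threshold, 0.0) for v in row] for row in image]
    out = [row[:] for row in image]
    for y in range(h):
        for x in range(w):
            total, count = 0.0, 0
            for dy in range(-radius, radius + 1):
                for dx in range(-radius, radius + 1):
                    if 0 <= y + dy < h and 0 <= x + dx < w:
                        total += bright[y + dy][x + dx]
                        count += 1
            out[y][x] += strength * total / count
    return out
```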

>> On the other hand, no one can deny the image is slightly grainy
>> in places.
>
> I prefer film grain rather than ugly splotches every corner.

Sure. Same here.

(I am surprised nobody has come up with a filter to estimate what's 
sampling noise and attempt to filter it out... It can't be that hard.)
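As a crude first stab, even a plain median filter kills isolated bright speckles ("fireflies") while leaving flat regions alone. Real denoisers are far smarter, but the idea is this simple:

```python
def median_denoise(image, radius=1):
    # Crude firefly filter: replace each pixel with the median of its
    # neighbourhood. An isolated bright speckle never wins the median
    # vote, so it vanishes; uniform regions are left untouched.
    h, w = len(image), len(image[0])
    out = [row[:] for row in image]
    for y in range(h):
        for x in range(w):
            neigh = [image[y + dy][x + dx]
                     for dy in range(-radius, radius + 1)
                     for dx in range(-radius, radius + 1)
                     if 0 <= y + dy < h and 0 <= x + dx < w]
            neigh.sort()
            out[y][x] = neigh[len(neigh) // 2]
    return out
```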

>> And then I see something like this:
>>
>> http://tinyurl.com/7r72o7k
>>
>> At first glance, this is an amazing, near-photographic image. And then I
>> notice THE POLYGONS! IT BURNS!!!>_<  KILL IT WITH FIRE!
>
> I don't notice any polygons in particular

You didn't notice that the table is hexagonal??

> but the surfaces are all too flat and
> regular, betraying it as CG.  Luxrender, like povray these days, is
> conspicuously short on true artists, as opposed to programmers trying to show off
> the software.

Heh, well, I wouldn't hold that against it. ;-)

> That scene, I believe, took about 1 day or 2 to be that smooth if memory serves
> me well...

That's /a lot/ of render time, man... o_O

>> Now here's an interesting question: Does anybody know how this stuff
>> actually /works/?
>
> http://www.pbrt.org/
>
> The book explains all and source code for the renderer is GPLed.  In fact, it's
> the base for Luxrender.

I'll have to go read that.

>> Wikipedia asserts "Contrary to popular myth, Path Tracing is /not/ a
>> generalisation of ray tracing." This statement makes no sense
>> whatsoever. :-P
>
> To me pathtracing looks as much of a generalisation of raytracing as raytracing
> was of raycasting

Indeed.

>> I'd almost be tempted to write a renderer myself - except that running
>> on a CPU, it will undoubtedly take several billion millennia to draw
>> anything. (There's a /reason/ nobody has tried this until today...)
>
> No, just a few hours or days like povray back in 1990's.

A day is a /long/ time to wait while debugging your code to see if it 
works correctly. o_O



From: Stephen
Subject: Re: CGI
Date: 25 Jun 2012 05:00:19
Message: <4fe828a3$1@news.povray.org>
On 25/06/2012 9:27 AM, Invisible wrote:

>> yeah, now put povray to directly simulate the curved surfaces of a
>> woman's body.
>>
>> You may use parametric surfaces if writing the right isosurface
>> function is
>> proving too much of a challenge.
>
> Drawing such a thing is equally impossible with POV-Ray or with a mesh
> renderer, so it's not a useful comparison. (Although, maybe if I had
> infinite point-cloud dartah... :-P )
>
> Saying that, you might be able to model something acceptable with blobs.
> Or maybe if POV-Ray supported spline surfaces...
>

Ian MacKay has made a credible attempt in his TC-RTC entry Lilli Marlene 
using CSG.
And Poser meshes can fool many people. The shapes are very good now.



-- 
Regards
     Stephen



From: nemesis
Subject: Re: CGI
Date: 25 Jun 2012 15:50:00
Message: <web.4fe8c009f858a57f352a052d0@news.povray.org>
On  25 Jun 2012 08:27:12, Invisible wrote:
> What, that's a problem somehow?

I shouldn't be surprised by your quirks by now, but it still happens now and
then... :p

>> yeah, now put povray to directly simulate the curved surfaces of a woman's body.
>>
>> You may use parametric surfaces if writing the right isosurface function is
>> proving too much of a challenge.

> Drawing such a thing is equally impossible with POV-Ray or with a mesh
> renderer, so it's not a useful comparison. (Although, maybe if I had
> infinite point-cloud dartah... :-P )

http://forums.cgsociety.org/showthread.php?f=121&t=1036380

http://forums.cgsociety.org/showthread.php?f=121&t=829134

Should I also point out several of the Poser meshes people have been posting in
povray images for years?  Or game and movie meshes?

Hint:  they are not scans of human models.  People have been sculpting digitally
for quite some time now in software like Z-Brush and then generate from that a
less dense polygon mesh with normal mapped details.

> Saying that, you might be able to model something acceptable with blobs.

Not really.  And this:

http://tc-rtc.co.uk/imagenewdisplay/stills/389/Lilli_Marlene.html

truly shows povray's age... it would be terribly unacceptable as a scene in a
game today, let alone in the upcoming generation of game hardware... seems like
those perfect curves don't make for quality renders...


>> Truth be told, a cornell box today doesn't take even a minute in povray...

> Funny you should say that. I spent yesterday trying to set up an
> animation of a Cornell box, and it appears that to get stable lighting,
> I'm going to have to crank the radiosity settings up to the point where
> it takes about 30 minutes per frame...

Then you're doing it wrong.  Radiosity is there to provide only smooth indirect
lighting effects, not extra shadows or something.  If it's getting too long and
still displaying lots of lame splotches, you're not using it like it was meant
to be used...

>> Scenes like that may take several hours on CPU, just like raytraced scenes in
>> the old days.

> There's a /reason/ POV-Ray does all this complicated radiosity trickery
> rather than just directly simulating the whole thing. It's to make it
> render in less than a decade. ;-) The trick, it seems, is to use the GPU
> to make it faster.

It doesn't take a decade, never did.  It takes hours or days like you were used
to, but produces on CPUs today far better results than what was possible with
povray back then.  That's not even accounting for GPUs, which can bring it down
to minutes or barely more than an hour.


>> the glow most likely is just a post-processing 2D effect.

> Can you actually do that? I mean, automatically figure out all the parts
> of the image which are bright enough to need glow?

Yes, it's a trivial 2D effect in most 2D graphical editing software.  Luxrender
comes with a few such filters.


> You didn't notice that the table is hexagonal??

oh, now that you bring it up, I see it.  Yeah, the guy evidently overlooked it,
but if it wasn't for the contrast with the bright light outside, it wouldn't be
noticeable anyway.  Hey, keeping geometry as simple as possible always helps
with render times.  I wonder how tessellation in nextgen game hardware will help
that...

Other than that, you don't notice any edges on the smooth chairs and the
table's curved legs...


> A day is a /long/ time to wait while debugging your code to see if it
> works correctly. o_O

that's the beauty of progressive rendering:  you don't need to wait 8 hours
for the first few rows of the image to be ready and then, after 36 hours, notice
the last few rows show horrid splotches near the table legs:  you get a rough,
noisy preview early on that already shows where every shadow is and all the
colours are right...



From: Darren New
Subject: Re: CGI
Date: 28 Jun 2012 20:54:43
Message: <4fecfcd3@news.povray.org>
On 6/24/2012 23:52, Patrick Elliott wrote:
> Still, you can get a version of Wings, from here:

I must say that SketchUp was the first (and only) modeler I've ever used 
where I wasn't manipulating the modeler's data structures. I didn't have to 
know anything about how it's represented internally. You want a box with a 
circle drilled thru it? Make a box, draw a circle, push the circle out the 
other side. You're done.

-- 
Darren New, San Diego CA, USA (PST)
   "Oh no! We're out of code juice!"
   "Don't panic. There's beans and filters
    in the cabinet."



From: Kevin Wampler
Subject: Re: CGI
Date: 28 Jun 2012 21:56:58
Message: <4fed0b6a$1@news.povray.org>
On 6/28/2012 5:54 PM, Darren New wrote:
> On 6/24/2012 23:52, Patrick Elliott wrote:
>> Still, you can get a version of Wings, from here:
>
> I must say that SketchUp was the first (and only) modeler I've ever used
> where I wasn't manipulating the modeler's data structures.

I haven't tried the most recent version, but I also felt the same way 
about Sculptris (http://www.pixologic.com/sculptris/), which is a free 
zbrush-like modelling program.



From: Orchid Win7 v1
Subject: Re: CGI
Date: 29 Jun 2012 02:20:17
Message: <4fed4921$1@news.povray.org>
On 29/06/2012 01:54 AM, Darren New wrote:
> On 6/24/2012 23:52, Patrick Elliott wrote:
>> Still, you can get a version of Wings, from here:
>
> I must say that SketchUp was the first (and only) modeler I've ever used
> where I wasn't manipulating the modeler's data structures. I didn't have
> to know anything about how it's represented internally. You want a box
> with a circle drilled thru it? Make a box, draw a circle, push the
> circle out the other side. You're done.

...except that, eventually, you discover that your "circle" is actually 
an N-gon. Which seems an unfortunate break in the modeller's 
abstraction...




Copyright 2003-2023 Persistence of Vision Raytracer Pty. Ltd.