POV-Ray : Newsgroups : povray.off-topic : Tell me it isn't so!
Tell me it isn't so! (Message 331 to 340 of 473)
From: Darren New
Subject: Re: Tell me it isn't so!
Date: 28 Jul 2009 11:40:50
Message: <4a6f1c02$1@news.povray.org>
Warp wrote:
>   The reason why languages like C++ don't come up with libraries which would
> be handy (such as graphics and sound) is complicated.

Actually, for graphics and sound and other ubiquitous stuff like that, it's 
not all that hard to come up with a decent library. Win32, X, etc all fit 
the bill.

My rant was more along the lines of higher-level stuff. Take, for example, 
an HTTP client library. You would think it's straightforward, but how do you 
allocate memory such that it's easy to free? (C++ solved this problem, 
obviously, which is one of the big wins for C++ libraries over C libraries.) 
How do you open a socket? How do you hash passwords for DIGEST 
authentication? How do you keep from blocking on DNS lookups? Each of these 
libraries has layers and layers under it, and nobody wants to redo all those 
layers using only the standard libraries.

So the solution is to make the standard libraries bigger, and you start 
getting things like Java and C#, which nobody wants to use because the 
standard libraries are too big. :-)

I did read and understand the rest of your post, but I guess I wasn't really 
clear in my late-night rant as to what I thought the problem was.


>   Also such library would be highly non-portable. For example, if the C++
> standard dictated some kind of GUI library, how would you implement it eg.
> for the iPhone? You just can't. This would mean that for such systems you
> would have to use a crippled version of the language which may or may not
> compile some existing code.

Exactly! That's what I'm getting at. If it's not in the standard library, 
then two chunks of code both wanting to do graphics are going to have 
trouble working together. I want to play a video in a web page, but the web 
client uses Qt and the video uses X, or something like that.

If it *is* in the standard library, then you either get something like "it 
won't run on an iPhone" (say, .NET) or you get "the standard library is 
too big to fit on your hardware."

The most recent solution seems to be to put a layer of interpreter on top of 
everything, using DOM and javascript as a graphics layer for your application.

>   A standard library to play sounds would run into similar problems, 

Here I think a framework would probably be sufficient. Unlike graphics, 
programs don't seem to generate many kinds of sounds from scratch. It's 
mostly format-independent stuff, like "read a file and play it", or "here's 
a gunshot that should sound like it's 20 feet away and off to the left a 
bit", or some such. When they do generate sound from scratch (synthesizers, sound 
editors, etc) they can use a decent open sound format and convert it when 
they're done.

The problem here is layering it on top of other libraries. How do you 
prebuffer the sound? What version of select/poll/threads/etc are you using 
if you're pulling sound off a network connection? What standard are you 
using to timestamp sounds so you can lip-sync to the video of the person 
talking?  If you want to build software to (say) play video (my current 
bugaboo), you need all those things underneath first, and since they're not 
standard, you can't mix-and-match the sound system with the video 
decompression system with the content negotiation system with the DMA system 
with the networking system.

These aren't hard problems to solve. The problem is they've been solved over 
and over, and if you're trying to put multiple pieces of code together, 
you're going to run into incompatibilities in the lower layers. Exactly like 
with the graphics: the problem isn't that windowing systems are hard, but 
that there are so many to choose from, and programs for one won't run on 
others, even though a simple adaptation layer could make it work. But then 
everyone writes their own adaptation layer, and then you get adapters between 
the adapters, and you get layers and layers of abstraction, all of which are 
unnecessary for simple tasks but none of which you can throw away once it's 
in the mix.

-- 
   Darren New, San Diego CA, USA (PST)
   "We'd like you to back-port all the changes in 2.0
    back to version 1.0."
   "We've done that already. We call it 2.0."



From: Clarence1898
Subject: Re: Tell me it isn't so!
Date: 28 Jul 2009 12:00:00
Message: <web.4a6f2031ac52dfd4b533f5a90@news.povray.org>
Invisible <voi### [at] devnull> wrote:
> Darren New wrote:
>
> > That's one solution, but not really because of the memory. The Amiga had
> > only 128K or 256K, and it handled clipping windows and saving the
> > clipped parts elsewhere just fine.
>
> Which model? I don't recall seeing one with less than 1MB RAM.

The original Amiga 1000 came with either 256K or 512K memory.  I had one of the
256K models.  A pretty amazing machine for its time.

Isaac.



From: Darren New
Subject: Re: Tell me it isn't so!
Date: 28 Jul 2009 12:13:12
Message: <4a6f2398$1@news.povray.org>
Invisible wrote:
> Which model? I don't recall seeing one with less than 1MB RAM.

Amiga 1000. From Wikipedia:
"Machines began shipping in September with a base configuration of 256 KB of 
RAM at the retail price of 1295 USD."

Which fits my memory. I think the 500 had 128K? It wasn't a whole lot, but 
the Amiga made efficient use of it.

-- 
   Darren New, San Diego CA, USA (PST)
   "We'd like you to back-port all the changes in 2.0
    back to version 1.0."
   "We've done that already. We call it 2.0."



From: clipka
Subject: Re: Tell me it isn't so!
Date: 28 Jul 2009 12:25:00
Message: <web.4a6f2566ac52dfd4dcf616650@news.povray.org>
Invisible <voi### [at] devnull> wrote:
> >> I am 75% sure that the C64 would *automatically* indent your code. As
> >> in, if you wrote a FOR-NEXT loop, the loop body would automatically
> >> appear indented, and there was nothing you could do about it.
> >
> > I'm 99% sure it did *un-indent* the code no matter how hard you tried... (just
> > googled for a few C64 BASIC code snippets, and found them all not indented)
>
> Hmm. Perhaps it was the Sinclair Spectrum then...

Code snippets on the internet indicate otherwise. Next try... :)

(Hint: It wasn't the Amstrad CPC either; I don't know of any home computer that
had an auto-indent feature.)



From: clipka
Subject: Re: Tell me it isn't so!
Date: 28 Jul 2009 12:40:00
Message: <web.4a6f2988ac52dfd4dcf616650@news.povray.org>
Darren New <dne### [at] sanrrcom> wrote:
> That's one solution, but not really because of the memory. The Amiga had
> only 128K or 256K, and it handled clipping windows and saving the clipped
> parts elsewhere just fine. I suspect it was more a question of (a) amount of
> effort put into the graphics layer and (b) the fact that there was no
> hardware accel for Windows boxes so redrawing from scratch was probably
> close to as fast as blitting the saved window anyway.

Typically, yes. After all, in those days of Windows 3.1 (and earlier), windows
were just solid-filled rectangles with a border around them, and buttons likewise.

Having a dedicated blitter chip made the approach much more worthwhile. And I
guess the Amiga only had it because it was designed for GUI operation right
from the start.



From: clipka
Subject: Re: Tell me it isn't so!
Date: 28 Jul 2009 12:45:00
Message: <web.4a6f2a04ac52dfd4dcf616650@news.povray.org>
Neeum Zawan <m.n### [at] ieeeorg> wrote:
>  Ever used Fractint on DOS? _That_ program probably supported more video
> cards than any other. The amount of collaboration for that piece of
> software was truly impressive.

Why, yes, of course!

That was a freakin' awesome piece of software in every respect.



From: clipka
Subject: Re: Tell me it isn't so!
Date: 28 Jul 2009 13:00:01
Message: <web.4a6f2e6bac52dfd4dcf616650@news.povray.org>
Invisible <voi### [at] devnull> wrote:
> > That's one solution, but not really because of the memory. The Amiga had
> > only 128K or 256K, and it handled clipping windows and saving the
> > clipped parts elsewhere just fine.
>
> Which model? I don't recall seeing one with less than 1MB RAM.

http://en.wikipedia.org/wiki/Amiga_models_and_variants

allegedly 256K on the A1000.



From: clipka
Subject: Re: Tell me it isn't so!
Date: 28 Jul 2009 13:10:00
Message: <web.4a6f2feeac52dfd4dcf616650@news.povray.org>
Darren New <dne### [at] sanrrcom> wrote:
> My rant was more along the lines of higher-level stuff. Take, for example,
> an HTTP client library. You would think it's straightforward, but how do you
> allocate memory such that it's easy to free? (C++ solved this problem,
> obviously, which is one of the big wins for C++ libraries over C libraries.)

Um... what's wrong with malloc() and free()??



From: clipka
Subject: Re: Tell me it isn't so!
Date: 28 Jul 2009 13:45:00
Message: <web.4a6f382eac52dfd4dcf616650@news.povray.org>
Darren New <dne### [at] sanrrcom> wrote:
> Invisible wrote:
> > Which model? I don't recall seeing one with less than 1MB RAM.
>
> Amiga 1000. From wikipedia
> "Machines began shipping in September with a base configuration of 256 KB of
> RAM at the retail price of 1295 USD."
>
> Which fits my memory. I think the 500 had 128K? It wasn't a whole lot, but
> the Amiga made efficient use of it.

It actually had more memory than the original A1000 (512K), and was released later;
but by that time, memory prices must have dropped enough that they could offer it
at a significantly lower price - hence, it seems, the lower model number.



From: Darren New
Subject: Re: Tell me it isn't so!
Date: 28 Jul 2009 13:45:01
Message: <4a6f391d$2@news.povray.org>
clipka wrote:
> Um... what's wrong with malloc() and free()??

It exposes the internals of the memory allocations to the caller. And it's 
difficult to manage if you don't have at least destructors. And it can be 
inefficient if you do it a lot. And sometimes you need certain kinds of 
memory (like DMA-accessible memory) so you wind up with multiple kinds of 
malloc and free.

But mostly because it exposes the internals of the memory allocations to the 
caller, which is usually less of a problem until you combine it with the 
other problems, like threading.

Say you're generating some audio by reading hardware-compressed samples off 
a microphone, then shipping it out over a connection that's packetized and 
interleaved with other data (RTP, for example).  You don't know how big the 
data you're reading is, and you don't know how big the data you're writing 
is. One thread is reading from the microphone and filling buffers. One is 
coordinating. One is writing the buffers to the socket. Who is responsible 
for allocating buffers? Who is responsible for freeing them? Especially 
given that the buffers you're writing might not be the right size for the 
buffers you're reading.

Sure, you can use malloc and free. But I've seen maybe 5% of the programs 
I've worked with *not* layer other memory management stuff on top, and each 
"on top" is incompatible with the other "on top"s. You can't read a buffer 
from John's DVD reader, feed it into a Python data structure, then take that 
and stuff it out to a MPEG decoder hardware interface without copying it 
three times. Heck, Qt is written in C++ and it doesn't make use of any of 
the STL *or* C strings. You know how you print out the URL stored in a 
"QUrl" object?  myQurl.toAscii().constData() gives you an actual char* (or 
something like that). Because Qt didn't like STL for some reason.

Everyone builds up big abstractions to help. If you have to do it yourself, 
you're going to build abstractions suited to the program you're writing. If 
you don't have to do it yourself, you have to haul around the whole library 
from someone else and keep up with *their* changes.  I think this is a big 
part of what the .NET stuff is trying to solve with all the "component" part 
of the framework.

-- 
   Darren New, San Diego CA, USA (PST)
   "We'd like you to back-port all the changes in 2.0
    back to version 1.0."
   "We've done that already. We call it 2.0."




Copyright 2003-2023 Persistence of Vision Raytracer Pty. Ltd.