On 26-3-2013 11:25, Orchid Win7 v1 wrote:
>>> The mouse was drawn as a hardware sprite overlay, so moving the mouse
>>> pointer involves poking new coordinates into some video registers. This
>>> takes a handful of compute cycles, and is easily implemented by a single
>>> interrupt handler.
>>
>> IIRC it was not even that. I don't have my hardware books lying around
>> anymore, but I think the horizontal and vertical motion of the mouse was
>> determined by two bit gray codes. These two bits were directly connected
>> to the video chip and the coordinates of the mouse were updated in
>> hardware. No interrupt required.
>
> I'm pretty much certain that's not correct.
Well, they were directly connected to Denise, and there were hardware
counters in Denise for X and Y.
> Under sufficiently heavy CPU load, the mouse pointer became slightly
> less responsive. There's no reason for that to happen if it's all
> implemented in hardware.
I cannot find the logic circuit of Denise, so perhaps you are right and
the contents of the counter registers had to be fetched and written into
the mouse sprite's X and Y positions.
>> At the user level it was not fully preemptive. You had to do an explicit
>> call to allow other programs to get some time too.
>
> What in the world makes you think that?
Because I have written programs at that level.
It is a great way to avoid a busy wait on an event in a high-level
language: just check whether your condition is met, and if not, pass
control back to the scheduler. Repeat until you have something to do.
What I remember from those days is that a process could be interrupted
by a higher-priority process, but user programs ran until they
voluntarily gave control back to the scheduler.*
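
The wait-by-yielding pattern described above can be sketched like this
(a toy round-robin scheduler in plain Python, nothing Amiga-specific;
all names are mine):

```python
# Cooperative multitasking in miniature: each task is a generator that
# does a bit of work and then yields, voluntarily handing control back
# to the scheduler -- the model described above.

def scheduler(tasks):
    """Run generator tasks round-robin until all have finished."""
    queue = list(tasks)
    while queue:
        task = queue.pop(0)
        try:
            next(task)          # run the task until it yields again
            queue.append(task)  # not done yet: back of the queue
        except StopIteration:
            pass                # task returned: drop it

def wait_for(condition):
    """Busy-wait replacement: check, and if not met, yield to the scheduler."""
    while not condition():
        yield

log = []

def producer(state):
    for i in range(3):
        state['ready'] = i + 1
        log.append(('produced', i + 1))
        yield

def consumer(state):
    yield from wait_for(lambda: state.get('ready', 0) >= 3)
    log.append(('consumed', state['ready']))

state = {}
scheduler([producer(state), consumer(state)])
# The consumer only proceeds once the producer has run three times.
```

The point is that a misbehaving task that never yields would starve
everyone else, which is exactly the weakness of the cooperative model
being debated here.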
> If you run two programs that
> both try to use 100% CPU, they both end up getting approximately 50%
> CPU. (And everything else becomes fairly slow.)
Because all the programs you used behaved decently and passed control to
the scheduler regularly? Remember, we are talking about a multitasking
single-user machine. There is no point in annoying the user by not
giving other programs a chance to run, and not even a point in trying to
get more than your fair share. But see the note below.
*) Trying to google things: the wiki page on Exec_(Amiga) claims that
Linus once mistakenly said that AmigaOS was cooperative. In earlier
versions of that page it apparently *was* considered cooperative,
because Google quotes a text that is no longer there. Another source
says that the first version of AmigaOS was cooperative, implying that
later versions weren't. Possibly I was programming the earlier version
and later versions were indeed fully preemptive, but the system calls
still worked the same, so I did not notice the change.
Cooperative multitasking was exactly what I needed. E.g. I had
multichannel data acquisition running on a PC that was connected to the
Amiga over a parallel port. I had one channel live in a separate
screen** and a data analysis program that could use the same parallel
connection to fetch the full data of a part of the recording. On the
Amiga side I had several programs running, all cooperatively, including
the parallel communication program (a sort of device driver, but at
user level). On the PC side I had to solve it with one big event loop.
**) For the non-Amiga users: the Amiga could display several 'screens'
at the same time, each with its own resolution and color depth.
Transitions between screens always fell on a horizontal line, and the
display line where the switch happened was handled by another custom
chip. You could move one screen in front of another, revealing or hiding
as much as you wanted.