Memory-mapped I/O
From: Orchid Win7 v1
Subject: Memory-mapped I/O
Date: 2 Feb 2015 13:16:26
Message: <54cfbefa$1@news.povray.org>
OK, so I don't know if anybody here will actually know the answer to 
this, but I'll ask anyway...



Back in the good old days (i.e., about 30 years ago), you could program 
any hardware device connected to your computer by just poking the right 
codes into the correct memory addresses. I'm curious to what extent 
that's still the case.

For example, if you have a C64, you can poke some codes into the VIC-II 
chip that generates the TV signal, thus convincing it to switch from 
text mode to graphics mode. (And losing the ability to see your BASIC 
listings or the command prompt, by the way. BASIC has no idea you just 
did this!) With that done, you can *literally* change the colours of 
individual pixels by just writing codes into the region of RAM occupied 
by the framebuffer.
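
(In C terms - say, with the cc65 cross-compiler - the whole exercise is 
just volatile pointer writes; the VIC-II addresses below are the 
standard documented ones, though treat this as a sketch from memory:)

  /* Sketch: flip the C64's VIC-II into hires bitmap mode and set one
     pixel, using the standard documented register addresses (cc65 C
     instead of BASIC POKEs - same memory writes either way). */
  #include <stdint.h>

  #define VIC_CTRL1  (*(volatile uint8_t *)0xD011) /* control register 1 */
  #define VIC_MEMPTR (*(volatile uint8_t *)0xD018) /* memory setup       */

  static volatile uint8_t *const bitmap = (volatile uint8_t *)0x2000;

  void hires_on(void)
  {
      VIC_CTRL1 |= 0x20;   /* bit 5: bitmap mode - BASIC is now blind */
      VIC_MEMPTR = 0x18;   /* screen matrix at $0400, bitmap at $2000 */
  }

  void set_pixel(unsigned x, unsigned y)   /* x < 320, y < 200 */
  {
      /* VIC-II bitmap layout: 8x8 cells of 8 consecutive bytes */
      unsigned offset = (y & 0xF8) * 40 + (y & 7) + (x & 0xF8);
      bitmap[offset] |= 0x80 >> (x & 7);
  }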

As I sit here with my very expensive "IBM PC-compatible", I'm 
wondering... Is it even still *possible* to map the graphics card's 
framebuffer into the CPU's address space? Or do you have to do something 
more elaborate? Do you "talk to" the graphics card directly, or do you 
have to talk to the PCI-Express chipset and explicitly ask it to deliver your 
message packets? Clearly the data all passes through the PCIe chipset; 
what I'm asking is whether that's transparent to the CPU or not.

Similarly, if I want to ask the harddisk to do something, do I need to 
explicitly construct ATAPI commands, or does the chipset do that for me?

I'm not trying to actually *write* a device driver, I'm just curious as 
to how this stuff works these days.



From: clipka
Subject: Re: Memory-mapped I/O
Date: 2 Feb 2015 16:47:52
Message: <54cff088$1@news.povray.org>
On 02.02.2015 at 19:16, Orchid Win7 v1 wrote:
> Back in the good old days (i.e., about 30 years ago), you could program
> any hardware device connected to your computer by just poking the right
> codes into the correct memory addresses. I'm curious to what extent
> that's still the case.

You're actually wrong there. Partially, that is.

It actually depended on the CPU architecture. While some - like the 
C64's 6510 - had only one single address space to mess around in, others 
- like Intel's 8080 or the related Zilog Z80 - had another dedicated 
address space for I/O controllers.

With Intel's 8086 being closely related to the 8080, it might come as no 
big surprise that PCs also have a dedicated I/O address space, and 
indeed external hardware traditionally resides there.

That said, the I/O address space of the 8086 is tiny: just 64k. Thus, 
stuff that required a big address window - most 
notably indeed the display adaptor's framebuffer - was made to reside in 
the memory address space. You still had to mess with I/O address space 
to access the actual display controller though, in order to e.g. switch 
the display into a different resolution, or switch from text to graphics 
mode.
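
(To make the distinction concrete: on x86 the I/O space is reached with 
dedicated IN/OUT instructions, while memory space is just ordinary loads 
and stores. A rough user-space sketch using Linux's port wrappers - the 
VGA CRTC port is merely a handy real example:)

  /* Sketch: the two x86 address spaces, from user space on Linux.
     0x3D4/0x3D5 are the classic VGA CRTC index/data ports; needs root. */
  #include <stdio.h>
  #include <sys/io.h>

  int main(void)
  {
      /* I/O address space: dedicated opcodes, reached via outb()/inb() */
      if (ioperm(0x3D4, 2, 1) != 0) { perror("ioperm"); return 1; }
      outb(0x0A, 0x3D4);                 /* select CRTC register 0x0A */
      printf("CRTC reg 0x0A = 0x%02x\n", inb(0x3D5));

      /* Memory address space: a device mapped *here* needs no special
         opcode at all - just a volatile pointer dereference, e.g.
         *(volatile unsigned char *)framebuffer_address = 0x07; */
      return 0;
  }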

The 8086 family also added DMA to the mix, which allows for serial data 
transfer from some memory address range to some I/O address (or vice 
versa) at the request of a peripheral, without intervention of the CPU.

> As I sit here with my very expensive "IBM PC-compatible", I'm
> wondering... Is it even still *possible* to map the graphics card's
> framebuffer into the CPU's address space? Or do you have to do something
> more elaborate? Do you "talk to" the graphics card directly, or do you
> have to talk to the PCI-Express chipset and explicitly ask it to deliver your
> message packets? Clearly the data all passes through the PCIe chipset;
> what I'm asking is whether that's transparent to the CPU or not.

Nah, it's still all I/O and memory accesses like in the good old days - 
thanks to the BIOS and OS, which do all the setting-up of the PCIe bus 
for you (buzzword: plug & play) so that you can talk to the peripherals 
without having to bother about the bus details.

As for the framebuffer, as a matter of fact nowadays the display memory 
sometimes /is/ genuine main memory.

> Similarly, if I want to ask the harddisk to do something, do I need to
> explicitly construct ATAPI commands, or does the chipset do that for me?

That probably depends on the sophistication of the hard disk controller, 
which is the thing you'll be talking to in this case. But the actual 
data transfer will take place either via memory-mapped I/O or (more 
likely) DMA, so in either case you'll write the data to some memory 
address range.

> I'm not trying to actually *write* a device driver, I'm just curious as
> to how this stuff works these days.

Nothing much has changed since the good old days: It all boils down to 
I/O and memory accesses. Except that the C64's CPU happened to not have 
the concept of a dedicated I/O address space.



From: scott
Subject: Re: Memory-mapped I/O
Date: 3 Feb 2015 03:30:39
Message: <54d0872f$1@news.povray.org>
> As I sit here with my very expensive "IBM PC-compatible", I'm
> wondering... Is it even still *possible* to map the graphics card's
> framebuffer into the CPU's address space? Or do you have to do something
> more elaborate? Do you "talk to" the graphics card directly, or do you
> have to talk to the PCI-Express chipset and explicitly ask it to deliver your
> message packets? Clearly the data all passes through the PCIe chipset;
> what I'm asking is whether that's transparent to the CPU or not.

You can see the address ranges if you go to the properties of your 
graphics card in device manager (under the "Resources" tab). I suspect 
the small IO address ranges are for sending commands to the GPU (which 
you can just use CPU store instructions to access), and the larger ones 
are a map of the RAM on the card (which you use DMA for).
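
(On Linux you can actually play with those ranges yourself: each PCI BAR 
appears as a resourceN file in sysfs that root can mmap(). The device 
address below is made up - lspci tells you the real one:)

  /* Sketch: map the first BAR of a PCI device into user space via
     sysfs and peek at it. The device address is an example only. */
  #include <fcntl.h>
  #include <stdio.h>
  #include <stdint.h>
  #include <sys/mman.h>
  #include <unistd.h>

  int main(void)
  {
      const char *bar = "/sys/bus/pci/devices/0000:01:00.0/resource0";
      int fd = open(bar, O_RDWR);
      if (fd < 0) { perror("open"); return 1; }

      /* a 4 KB window onto the start of the BAR */
      volatile uint32_t *regs = mmap(NULL, 4096, PROT_READ | PROT_WRITE,
                                     MAP_SHARED, fd, 0);
      if (regs == MAP_FAILED) { perror("mmap"); return 1; }

      printf("first register word: 0x%08x\n", regs[0]);
      munmap((void *)regs, 4096);
      close(fd);
      return 0;
  }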

Typically, though, I don't think you will find much documentation on 
*how* to communicate with a modern desktop GPU. You might find this 
interesting, though: whilst not the same architecture as a PC, it does 
give a bit of insight into a modern GPU architecture (it's the one on 
the Raspberry Pi):

http://www.broadcom.com/docs/support/videocore/VideoCoreIV-AG100-R.pdf



From: Orchid Win7 v1
Subject: Re: Memory-mapped I/O
Date: 3 Feb 2015 16:55:03
Message: <54d143b7@news.povray.org>
On 02/02/2015 09:47 PM, clipka wrote:
> Am 02.02.2015 um 19:16 schrieb Orchid Win7 v1:
>> Back in the good old days (i.e., about 30 years ago), you could program
>> any hardware device connected to your computer by just poking the right
>> codes into the correct memory addresses. I'm curious to what extent
>> that's still the case.
>
> You're actually wrong there. Partially, that is.
>
> It actually depended on the CPU architecture. While some - like the
> C64's 6510 - had only one single address space to mess around in, others
> - like Intel's 8080 or the related Zilog Z80 - had another dedicated
> address space for I/O controllers.

Ah yes, I/O-mapped I/O. I've read about this, but I've never actually 
seen it in action. Presumably it just means that you use a different 
op-code to access stuff in the I/O address space? (And that the same 
address can exist independently in both address spaces. And presumably 
the two addresses can be different widths...)

> With Intel's 8086 being closely related to the 8080, it might come as no
> big surprise that PCs also have a dedicated I/O address space, and
> indeed external hardware traditionally resides there.

OK.

> That said, the I/O address space of the 8086 is tiny: just 64k. Thus,
> stuff that required a big address window - most
> notably indeed the display adaptor's framebuffer - was made to reside in
> the memory address space. You still had to mess with I/O address space
> to access the actual display controller though, in order to e.g. switch
> the display into a different resolution, or switch from text to graphics
> mode.

So I'm imagining that for stuff that doesn't have many registers (e.g., 
the keyboard controller), the whole thing is I/O-mapped. Whereas for 
something like the sound card, you poke a bunch of (main) memory 
addresses into I/O-mapped registers and say "have at it!"

> The 8086 family also added DMA to the mix, which allows for serial data
> transfer from some memory address range to some I/O address (or vice
> versa) at the request of a peripheral, without intervention of the CPU.

As I see it, a DMA controller is just another I/O device - one whose 
only job is to copy chunks of RAM around while you're not looking. Given 
the previous paragraph of speculation, I'd expect all the stuff for 
controlling DMA to be I/O-mapped, but who knows?

>> As I sit here with my very expensive "IBM PC-compatible", I'm
>> wondering... Is it even still *possible* to map the graphics card's
>> framebuffer into the CPU's address space? Or do you have to do something
>> more elaborate? Do you "talk to" the graphics card directly, or do you
>> have to talk to the PCI-Express chipset and explicitly ask it to deliver your
>> message packets? Clearly the data all passes through the PCIe chipset;
>> what I'm asking is whether that's transparent to the CPU or not.
>
> Nah, it's still all I/O and memory accesses like in the good old days -
> thanks to the BIOS and OS, which do all the setting-up of the PCIe bus
> for you (buzzword: plug & play) so that you can talk to the peripherals
> without having to bother about the bus details.

But remember, the BIOS and the OS are JUST PROGRAMS. ;-) They have to do 
this stuff too, eventually. But yeah, your typical application program 
will never, ever have to care; it just passes instructions to the OS, 
which then uses a dozen device drivers and protocol stacks...

Fun fact: My PC doesn't *have* a BIOS. It has an EFI. ;-)

> As for the framebuffer, as a matter of fact nowadays the display memory
> sometimes /is/ genuine main memory.

The latest Intel Core-i chips have an on-board GPU. I *presume* that if 
you utilise this, everything happens in main RAM.

By contrast, my nVidia GeForce GTX 650 has almost *one gigabyte* of RAM 
on-board. I presume the vast majority of this is for holding texture 
data, which is moot if you're only trying to run MS Word. Even so, I 
wonder - does all that RAM ever get mapped into the host address space? 
Or do you have to manually pass texture packets across the PCIe bus and 
then tell the GPU what shaders to run?

>> Similarly, if I want to ask the harddisk to do something, do I need to
>> explicitly construct ATAPI commands, or does the chipset do that for me?
>
> That probably depends on the sophistication of the hard disk controller,
> which is the thing you'll be talking to in this case. But the actual
> data transfer will take place either via memory-mapped I/O or (more
> likely) DMA, so in either case you'll write the data to some memory
> address range.

So, like I say, you poke some numbers into an I/O register somewhere, 
and a little while later you get an interrupt to say your data is now at 
the specified memory address. That's how I imagine it working, anyway.

>> I'm not trying to actually *write* a device driver, I'm just curious as
>> to how this stuff works these days.
>
> Nothing much has changed since the good old days: It all boils down to
> I/O and memory accesses. Except that the C64's CPU happened to not have
> the concept of a dedicated I/O address space.

Like I said, I'm just wondering if you talk to (say) the harddisk by 
writing a few bytes into some registers telling it to copy LBA #### into 
memory address ####. Or whether you have to do something like

* Build an ATA command packet in memory.
* Construct a PCIe bus message frame around it.
* Poke numbers into the PCIe controller telling it to transmit that 
chunk of memory.

or similar. I'd imagine talking to USB devices probably gets crazy. 
(Given, you know, the arbitrary tree topology...)



From: Orchid Win7 v1
Subject: Re: Memory-mapped I/O
Date: 3 Feb 2015 17:27:07
Message: <54d14b3b$1@news.povray.org>
> You can see the address ranges if you go to the properties of your
> graphics card in device manager (under the "Resources" tab). I suspect
> the small IO address ranges are for sending commands to the GPU (which
> you can just use CPU store instructions to access), and the larger ones
> are a map of the RAM on the card (which you use DMA for).

That would be my guess also.

> Typically, though, I don't think you will find much documentation on
> *how* to communicate with a modern desktop GPU. You might find this
> interesting, though: whilst not the same architecture as a PC, it does
> give a bit of insight into a modern GPU architecture (it's the one on
> the Raspberry Pi):
>
> http://www.broadcom.com/docs/support/videocore/VideoCoreIV-AG100-R.pdf

I'm not sure it answers my question, but it's certainly a very 
interesting read...

Sections 8 and 9 in particular seem to be saying that there are about 
two dozen memory-mapped registers that tell the GPU where the control 
lists are, and then it sucks those into the GPU and goes away to do its 
thing.



From: clipka
Subject: Re: Memory-mapped I/O
Date: 3 Feb 2015 23:58:18
Message: <54d1a6ea$1@news.povray.org>
On 03.02.2015 at 22:55, Orchid Win7 v1 wrote:

> Ah yes, I/O-mapped I/O. I've read about this, but I've never actually
> seen it in action. Presumably it just means that you use a different
> op-code to access stuff in the I/O address space? (And that the same
> address can exist independently in both address spaces. And presumably
> the two addresses can be different widths...)

Exactly.
Examples from the home computer era would be the Amstrad CPC, and 
probably the ZX Spectrum, which both feature a Z80 processor. IIRC the 
Amstrad CPC's BASIC even had dedicated commands for I/O access.

> So I'm imagining that for stuff that doesn't have many registers
> (e.g., the keyboard controller), the whole thing is I/O-mapped. Whereas for
> something like the sound card, you poke a bunch of (main) memory
> addresses into I/O-mapped registers and say "have at it!"

Basically yes. Though for memory-mapped devices like the graphics card 
you traditionally had hard-wired memory addresses, so you wouldn't need 
to poke the memory addresses to any I/O address anywhere, and with PnP 
the job of poking the memory address would have been done by the BIOS or 
OS already.
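
(Your keyboard controller example is a good one, by the way: on the PC 
it really is just two I/O ports, 0x60 and 0x64. Roughly - bare metal or 
kernel context assumed:)

  /* Sketch: poll the legacy PC keyboard controller. Two ports is all
     there is; only meaningful on bare metal / inside a kernel. */
  #include <stdint.h>

  static inline uint8_t inb(uint16_t port)
  {
      uint8_t v;
      __asm__ volatile ("inb %1, %0" : "=a"(v) : "Nd"(port));
      return v;
  }

  uint8_t read_scancode(void)
  {
      while (!(inb(0x64) & 0x01))  /* status: bit 0 = output buffer full */
          ;
      return inb(0x60);            /* data port: the scancode itself */
  }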

As for the sound card, IIRC traditionally it was not memory-mapped. You 
see, the sound card operates sequentially on non-repeating data, which 
is exactly the mode of operation the DMA controller was designed for, so 
to save memory address space for true memory (including devices' ROMs, 
which are always memory-mapped) the card would present only two I/O 
addresses for data (one for reading, one for writing). Same for disk I/O.

> As I see it, a DMA controller is just another I/O device - one whose
> only job is to copy chunks of RAM around while you're not looking. Given
> the previous paragraph of speculation, I'd expect all the stuff for
> controlling DMA to be I/O-mapped, but who knows?

It is indeed.
The fun fact though is that it can not only copy RAM around, but also 
transfer between RAM and an individual I/O address, at a pace dictated 
by the I/O device, which is great for anything involving serial data 
processing, because it means you don't necessarily have to sacrifice 
precious memory address space.
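
(From memory, setting up such a RAM-to-I/O transfer on the PC's 8237 DMA 
controller looked roughly like this - Sound Blaster style, channel 1; 
the port numbers are the standard documented ones, but don't quote me:)

  /* Sketch: program 8237 DMA channel 1 to stream a (physically
     contiguous, 64k-page-limited) buffer out to an ISA device. */
  #include <stdint.h>

  extern void outb(uint8_t value, uint16_t port);  /* assumed port helper */

  void dma1_to_device(uint32_t phys, uint16_t len)
  {
      outb(0x05, 0x0A);                    /* mask channel 1 (0x04 | ch)  */
      outb(0x00, 0x0C);                    /* reset the address flip-flop */
      outb(0x49, 0x0B);                    /* single mode, memory to
                                              device, channel 1           */
      outb(phys         & 0xFF, 0x02);     /* address, low byte           */
      outb((phys >> 8)  & 0xFF, 0x02);     /* address, high byte          */
      outb((phys >> 16) & 0xFF, 0x83);     /* 64k page register for ch 1  */
      outb((len - 1)        & 0xFF, 0x03); /* count, low (N-1 convention) */
      outb(((len - 1) >> 8) & 0xFF, 0x03); /* count, high                 */
      outb(0x01, 0x0A);                    /* unmask channel 1 - go       */
  }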

On the other hand, nowadays memory address space is plenty (even on a 32 
bit machine), while DMA channels (i.e. number of concurrent DMA 
transfers) are scarce (and increasingly complicated, due to virtual 
memory mapping), so I guess today's hardware prefers memory-mapped 
buffers over DMA.

> But remember, the BIOS and the OS are JUST PROGRAMS. ;-) They have to do
> this stuff too, eventually. But yeah, your typical application program
> will never, ever have to care; it just passes instructions to the OS,
> which then uses a dozen device drivers and protocol stacks...

If you really want to go down to that level, I'd suggest getting a 
Raspberry Pi and trying to write some standalone application (i.e. one 
that doesn't need an OS) to run on it. The hardware seems to be well 
enough documented to pull off that stunt.

> Fun fact: My PC doesn't *have* a BIOS. It has an EFI. ;-)

... which is a BIOS on steroids.

>> As for the framebuffer, as a matter of fact nowadays the display memory
>> sometimes /is/ genuine main memory.
>
> The latest Intel Core-i chips have an on-board GPU. I *presume* that if
> you utilise this, everything happens in main RAM.

Indeed, this feature is typical for on-chip GPUs. Though I think there 
are also external chips out there that are designed to utilize main memory.

> By contrast, my nVidia GeForce GTX 650 has almost *one gigabyte* of RAM
> on-board. I presume the vast majority of this is for holding texture
> data, which is moot if you're only trying to run MS Word. Even so, I
> wonder - does all that RAM ever get mapped into the host address space?
> Or do you have to manually pass texture packets across the PCIe bus and
> then tell the GPU what shaders to run?

Well, my graphics card (an ASUS R9 280) utilizes the following resources:

I/O range 03B0-03BB (legacy VGA)
I/O range 03C0-03DF (legacy CGA/EGA/VGA)
I/O range A000-A0FF

IRQ -5

Memory range 000A0000-000BFFFF (128k, legacy MDA/CGA/EGA/VGA)
Memory range FBA80000-FBABFFFF (256k)
Memory range D0000000-DFFFFFFF (.25 GB)

Obviously this is not nearly enough to access all 6 GB of the graphics 
card at once, but .25 GB is certainly large enough for a "window" to a 
portion of the graphics card's RAM; even traditional Super-VGA cards 
used such a window to access a much larger actual framebuffer, using I/O 
registers to choose which "page" of their RAM was mapped to the window, 
so I guess that's exactly the way it's done with today's graphics cards.
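
(In code, the old SVGA scheme boiled down to something like this - 
set_bank() is deliberately left abstract, since every card had its own 
bank register or VESA BIOS call:)

  /* Sketch: classic SVGA bank switching. A 64k window at 0xA0000 gives
     access to a much larger framebuffer, one "page" at a time.
     set_bank() is hypothetical - the mechanism was card-specific. */
  #include <stdint.h>

  #define WINDOW      ((volatile uint8_t *)0xA0000)
  #define WINDOW_SIZE 0x10000u

  extern void set_bank(unsigned bank);   /* card-specific I/O pokes */

  void put_pixel(uint32_t offset, uint8_t color)
  {
      set_bank(offset / WINDOW_SIZE);        /* map the right 64k page */
      WINDOW[offset % WINDOW_SIZE] = color;  /* then it's a plain store */
  }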

And no, you don't need to tell the PCIe controller anything about this - 
unless you're an OS developer and want to place the window somewhere 
else in the physical CPU address space.

(As for the 256k memory window, from the size of it I'd guess it's some 
kind of ROM.)

> So, like I say, you poke some numbers into an I/O register somewhere,
> and a little while later you get an interrupt to say your data is now at
> the specified memory address. That's how I imagine it working, anyway.

Absolutely.

> Like I said, I'm just wondering if you talk to (say) the harddisk by
> writing a few bytes into some registers telling it to copy LBA #### into
> memory address ####. Or whether you have to do something like
>
> * Build an ATA command packet in memory.
> * Construct a PCIe bus message frame around it.
> * Poke numbers into the PCIe controller telling it to transmit that
> chunk of memory.

My presumption is that you'd indeed build the whole ATA command packet 
in memory, but then just tell the HDD controller to transmit it to one 
particular drive. The PCIe bus is totally transparent in this respect 
(except of course for the initial setting-up at system startup). So are 
the peculiarities of the SATA bus.
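
(The legacy interface, at least, worked much like your first 
alternative and is well documented: the primary channel's task-file 
registers sit at I/O ports 0x1F0-0x1F7, and reading a sector really is 
just a handful of port writes. A from-memory sketch:)

  /* Sketch: read one sector through the legacy ATA task-file registers
     (PIO, primary channel, 28-bit LBA). inb/outb/insw are assumed
     port-access helpers. */
  #include <stdint.h>

  extern uint8_t inb(uint16_t port);
  extern void    outb(uint8_t value, uint16_t port);
  extern void    insw(uint16_t port, void *buf, unsigned count);

  void ata_read_sector(uint32_t lba, uint16_t *buf)
  {
      outb(0xE0 | ((lba >> 24) & 0x0F), 0x1F6); /* drive 0, LBA bits 24-27 */
      outb(1,                  0x1F2);          /* sector count: 1         */
      outb( lba        & 0xFF, 0x1F3);          /* LBA low                 */
      outb((lba >>  8) & 0xFF, 0x1F4);          /* LBA mid                 */
      outb((lba >> 16) & 0xFF, 0x1F5);          /* LBA high                */
      outb(0x20,               0x1F7);          /* command: READ SECTORS   */

      while (!(inb(0x1F7) & 0x08))              /* poll status for DRQ     */
          ;
      insw(0x1F0, buf, 256);                    /* 256 words = one sector  */
  }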

> or similar. I'd imagine talking to USB devices probably gets crazy.
> (Given, you know, the arbitrary tree topology...)

I'm sure it is. You don't map individual USB devices to memory 
addresses, nor even I/O addresses for that matter. All you talk to is 
the USB host controller. But again you don't need to worry whether that 
host controller is connected via PCIe, PCI, or even classic ISA.



From: scott
Subject: Re: Memory-mapped I/O
Date: 4 Feb 2015 03:10:43
Message: <54d1d403@news.povray.org>
> Ah yes, I/O-mapped I/O. I've read about this, but I've never actually
> seen it in action. Presumably it just means that you use a different
> op-code to access stuff in the I/O address space? (And that the same
> address can exist independently in both address spaces. And presumably
> the two addresses can be different widths...)

Just for interest, the original ARMs in the Acorn machines split the 
logical address space into two. The first 32MB of address space was what 
normal user programs had access to ("user space"), and the OS mapped 
whatever was needed into this space (including the screen buffer(s)). 
Actual physical devices had addresses in the range 32MB to 64MB (the 
original ARMs, although 32-bit, only had a 26-bit address space; the 
other 8 bits of the program counter were used as status and mode 
flags). The physical RAM was mapped from 32MB to 48MB, all I/O was 
mapped between 48MB and 52MB, and above that if you *read* you read 
from the OS ROM chips, but if you *wrote* you wrote to various other 
bits of special hardware (video/sound controllers etc.).

One pretty clever result (or reason?) of the above was that the screen 
memory was always mapped from 32MB downwards, and the physical screen 
memory was at 32MB upwards. This allowed some nifty vertical scrolling; 
code could automatically "wrap around" its graphics drawing by just 
deliberately going past the 32MB boundary into the start of physical 
RAM. Needless to say you could tell the video controller to start 
anywhere in memory (including just before the 32MB boundary) to 
physically draw the screen.

Obviously the above scheme limited physical RAM to 16MB (which wasn't an 
issue at the end of the 80s); I'll have to look up what they changed to 
overcome that. I know that somewhere between ARM3 and ARM6 they added a 
true 32bit addressing mode (which of course broke everything that 
assumed addresses were only 26 bits wide).



From: scott
Subject: Re: Memory-mapped I/O
Date: 4 Feb 2015 04:42:36
Message: <54d1e98c$1@news.povray.org>
>> http://www.broadcom.com/docs/support/videocore/VideoCoreIV-AG100-R.pdf
>
> I'm not sure it answers my question, but it's certainly a very
> interesting read...
>
> Sections 8 and 9 in particular seem to be saying that there are about
> two dozen memory-mapped registers that tell the GPU where the control
> lists are, and then it sucks those into the GPU and goes away to do its
> thing.

As clipka said, the Raspberry Pi gives a good opportunity to start 
tinkering around with the bare hardware without the restrictions of any 
OS. This is a good guide; in particular, this page introduces 
communication with the GPU (just to create and get a pointer to a 
framebuffer):

http://www.cl.cam.ac.uk/projects/raspberrypi/tutorials/os/screen01.html
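
(The heart of that page, in C rather than assembler: the ARM posts the 
address of a little framebuffer-description structure to the GPU through 
a memory-mapped "mailbox". Register offsets as I remember them from the 
tutorial, so double-check:)

  /* Sketch: the BCM2835 mailbox used to request a framebuffer from the
     GPU (original Pi base address; offsets per the tutorial above -
     double-check against the datasheet). */
  #include <stdint.h>

  #define MAILBOX_BASE 0x2000B880u
  #define MAIL_READ   (*(volatile uint32_t *)(MAILBOX_BASE + 0x00))
  #define MAIL_STATUS (*(volatile uint32_t *)(MAILBOX_BASE + 0x18))
  #define MAIL_WRITE  (*(volatile uint32_t *)(MAILBOX_BASE + 0x20))
  #define MAIL_FULL   0x80000000u
  #define MAIL_EMPTY  0x40000000u

  void mailbox_write(uint32_t data, uint8_t channel)
  {
      while (MAIL_STATUS & MAIL_FULL)
          ;                            /* wait until there's room       */
      MAIL_WRITE = data | channel;     /* low 4 bits select the channel */
  }

  uint32_t mailbox_read(uint8_t channel)
  {
      for (;;) {
          while (MAIL_STATUS & MAIL_EMPTY)
              ;                        /* wait for any message          */
          uint32_t v = MAIL_READ;
          if ((v & 0xF) == channel)    /* discard other channels' mail  */
              return v & ~0xFu;
      }
  }

  /* Usage: put a 16-byte-aligned framebuffer-info struct in memory,
     mailbox_write(its_address, 1); a mailbox_read(1) of 0 means the
     GPU accepted it and filled in the framebuffer pointer. */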



From: Orchid Win7 v1
Subject: Re: Memory-mapped I/O
Date: 5 Feb 2015 03:33:32
Message: <54d32adc@news.povray.org>
>> Ah yes, I/O-mapped I/O. I've read about this, but I've never actually
>> seen it in action. Presumably it just means that you use a different
>> op-code to access stuff in the I/O address space? (And that the same
>> address can exist independently in both address spaces. And presumably
>> the two addresses can be different widths...)
>
> Exactly.
> Examples from the home computer era would be the Amstrad CPC, and
> probably the ZX Spectrum, which both feature a Z80 processor. IIRC the
> Amstrad CPC's BASIC even had dedicated commands for I/O access.

Although I've used all of those systems, I only ever did hardware-level 
programming with the C64. (The others all had BASIC commands for 
graphics and sound; the C64 didn't. The Commodore Plus4, by contrast, 
had *excellent* commands for graphics and sound. Weird...)

>> So I'm imagining that for stuff that doesn't have many registers
>> (e.g., the keyboard controller), the whole thing is I/O-mapped. Whereas for
>> something like the sound card, you poke a bunch of (main) memory
>> addresses into I/O-mapped registers and say "have at it!"
>
> Basically yes. Though for memory-mapped devices like the graphics card
> you traditionally had hard-wired memory addresses, so you wouldn't need
> to poke the memory addresses to any I/O address anywhere, and with PnP
> the job of poking the memory address would have been done by the BIOS or
> OS already.

Right. So by the time the OS is up and running, you just have to know 
which chunk of RAM is mapped to each device.

> As for the sound card, IIRC traditionally it was not memory-mapped. You
> see, the sound card operates sequentially on non-repeating data, which
> is exactly the mode of operation the DMA controller was designed for, so
> to save memory address space for true memory (including devices' ROMs,
> which are always memory-mapped) the card would present only two I/O
> addresses for data (one for reading, one for writing). Same for disk I/O.

Intriguing... I didn't know you could do that.

>> As I see it, a DMA controller is just another I/O device - one whose
>> only job is to copy chunks of RAM around while you're not looking. Given
>> the previous paragraph of speculation, I'd expect all the stuff for
>> controlling DMA to be I/O-mapped, but who knows?
>
> It is indeed.
> The fun fact though is that it can not only copy RAM around, but also
> transfer between RAM and an individual I/O address, at a pace dictated
> by the I/O device, which is great for anything involving serial data
> processing, because it means you don't necessarily have to sacrifice
> precious memory address space.

I've never done hardware-level programming on a system that actually 
supports DMA. This is a possibility I wasn't aware of. I kinda assumed 
that the I/O device itself handles reading data from RAM at the right 
speed. But for purely sequential stuff like disk or sound, it makes 
perfect sense...

> On the other hand, nowadays memory address space is plenty (even on a 32
> bit machine), while DMA channels (i.e. number of concurrent DMA
> transfers) are scarce (and increasingly complicated, due to virtual
> memory mapping), so I guess today's hardware prefers memory-mapped
> buffers over DMA.

Don't look at me. ;-)

> If you really want to go down to that level, I'd suggest getting a
> Raspberry Pi and trying to write some standalone application (i.e. one
> that doesn't need an OS) to run on it. The hardware seems to be well
> enough documented to pull off that stunt.

Or just, you know, the Pi emulator, which doesn't cost money. ;-)

>> Fun fact: My PC doesn't *have* a BIOS. It has an EFI. ;-)
>
> ... which is a BIOS on steroids.

It isn't. It really isn't. That's what everybody told me, but the more I 
read about it, the more it becomes vividly clear that it's a totally 
new, different system, unrelated to what went before.

(It's actually very cool. Or, the way it's *supposed* to work is very 
cool; whether anyone will implement it correctly remains to be seen. 
Currently this stuff seems incredibly buggy...)

>>> As for the framebuffer, as a matter of fact nowadays the display memory
>>> sometimes /is/ genuine main memory.
>>
>> The latest Intel Core-i chips have an on-board GPU. I *presume* that if
>> you utilise this, everything happens in main RAM.
>
> Indeed, this feature is typical for on-chip GPUs. Though I think there
> are also external chips out there that are designed to utilize main memory.

Yeah, I would think all those motherboards with on-board graphics do 
this. (Having said that, those boards are getting rare now that all the 
CPUs have the GPU on the die...)

> Well, my graphics card (an ASUS R9 280) utilizes the following resources:
>
> Obviously this is not nearly enough to access all 6 GB of the graphics
> card at once, but .25 GB is certainly large enough for a "window" to a
> portion of the graphics card's RAM; even traditional Super-VGA cards
> used such a window to access a much larger actual framebuffer, using I/O
> registers to choose which "page" of their RAM was mapped to the window,
> so I guess that's exactly the way it's done with today's graphics cards.

So the CPU can only access a limited window at a time. (Mind you, .25 GB 
should be easily enough to hold the entire frame buffer. This becomes 
more of an issue if you're trying to dump 0.8 GB of texture data into 
the card before you start rendering...) And of course, if you're doing 
3D drawing, the CPU has almost nothing to do with it; it's all the GPU.

> And no, you don't need to tell the PCIe controller anything about this -
> unless you're an OS developer and want to place the window somewhere
> else in the physical CPU address space.

So aside from assigning memory mappings, PCIe is transparent to the CPU. (?)

> My presumption is that you'd indeed build the whole ATA command packet
> in memory, but then just tell the HDD controller to transmit it to one
> particular drive. The PCIe bus is totally transparent in this respect
> (except of course for the initial setting-up at system startup). So are
> the peculiarities of the SATA bus.
>
>> or similar. I'd imagine talking to USB devices probably gets crazy.
>> (Given, you know, the arbitrary tree topology...)
>
> I'm sure it is. You don't map individual USB devices to memory
> addresses, nor even I/O addresses for that matter. All you talk to is
> the USB host controller. But again you don't need to worry whether that
> host controller is connected via PCIe, PCI, or even classic ISA.

So, again, the PCIe bus from the CPU to the PATA / SATA / USB / FireWire 
/ whatever controller is transparent, but USB itself isn't transparent, 
and you still need to know how to build ATA commands or whatever. (?)



From: clipka
Subject: Re: Memory-mapped I/O
Date: 5 Feb 2015 07:25:02
Message: <54d3611e$1@news.povray.org>
On 05.02.2015 at 09:33, Orchid Win7 v1 wrote:

>> As for the sound card, IIRC traditionally it was not memory-mapped. You
>> see, the sound card operates sequentially on non-repeating data, which
>> is exactly the mode of operation the DMA controller was designed for, so
>> to save memory address space for true memory (including devices' ROMs,
>> which are always memory-mapped) the card would present only two I/O
>> addresses for data (one for reading, one for writing). Same for disk I/O.
>
> Intriguing... I didn't know you could do that.

It's actually quite a simple mechanism, at least in basic theory. The 
idea is that the CPU has an input telling it to stall any bus access for 
a moment; the DMA controller asserts this signal to take over the bus 
for one memory access and another I/O access, then releases the bus 
again. The DMA controller in turn has an input by which the device 
hardware can signal that it is ready to transfer the next byte.

This was great stuff especially in the times when memory could go a good 
deal faster than the CPU could make use of it.

Well, things have changed: Memory access is the biggest bottleneck of 
all now.

>> If you really want to go down to that level, I'd suggest getting a
>> Raspberry Pi and trying to write some standalone application (i.e. one
>> that doesn't need an OS) to run on it. The hardware seems to be well
>> enough documented to pull off that stunt.
>
> Or just, you know, the Pi emulator, which doesn't cost money. ;-)

Because a Raspberry Pi is soooooo extraordinarily expensive, sure... :P

Drawback of the Pi emulator is that it doesn't come with all those neat 
I/O pins ;)

>> And no, you don't need to tell the PCIe controller anything about this -
>> unless you're an OS developer and want to place the window somewhere
>> else in the physical CPU address space.
>
> So aside from assigning memory mappings, PCIe is transparent to the CPU.
> (?)

Memory mappings, IRQ mappings, I/O mappings, assignment of number of 
PCIe lanes... But yes - that's the gist of it.

> So, again, the PCIe bus from the CPU to the PATA / SATA / USB / FireWire
> / whatever controller is transparent, but USB itself isn't transparent,
> and you still need to know how to build ATA commands or whatever. (?)

Yup.


