Personal computing discussed

bitvector
Grand Gerbil Poohbah
Topic Author
Posts: 3293
Joined: Wed Jun 22, 2005 4:39 pm
Location: San Francisco, CA

Sun Jul 15, 2007 4:24 pm

l33t-g4m3r wrote:
bitvector wrote:
You're confusing PAE and address remapping, and you're also confusing virtual and physical addresses. We're talking about physical here, not virtual, so stop posting crap about 64-bit apps, and the /3GB switch. It has NOTHING TO DO WITH THIS.

You're the one confused.
PAE is only necessary for 32-bit windows.
read the articles.
thank you.

Yes, PAE is only necessary for 32-bit Windows. But as I've said several times, address remapping isn't the same as PAE, and address remapping IS still necessary to see all of your 4GB even on a 64-bit OS. Again, a 64-bit OS is neither necessary nor sufficient to address this particular issue, because the size of the virtual address space isn't the issue here.

Convert wrote:
Because there is a lot of information and misinformation out there and people who claim they have it all figured out never take the time to start from the beginning and explain how everything works and *explain* the things that usually get rehashed 20 thousand times.

Well, I wanted to keep it as short as I could. In my experience, the longer and more informative a post is, the less likely it is that people actually read it. In fact, I get the feeling that no one ever reads my posts that are longer than a few sentences.

Convert wrote:
The problem lies with you as well bitvector, we learn by reading this stuff and it is extremely easy to pick up bad information or tie factual information into something that it doesn't apply to.

You basically give a higher level overview of the situation and expect people to take your word for it, even when something else they read appears to contradict that.

Fine, I apologize for posting this. I just wanted to clear up some misinformation I repeatedly see and instead it just got parroted back to me in this thread. I guess I should stick to RWT in the future.
 
Convert
Grand Gerbil Poohbah
Posts: 3452
Joined: Fri Nov 14, 2003 6:47 am

Sun Jul 15, 2007 4:29 pm

bitvector wrote:
Fine, I apologize for posting this. I just wanted to clear up some misinformation I repeatedly see and instead it just got parroted back to me in this thread. I guess I should stick to RWT in the future.

There isn't a need to apologize; I am just trying to explain why it happens. It isn't as if the person posting the information is to blame. The person reading it should try to verify the information, but in doing so, well, you know what ends up happening.

I find that posting detailed information may make people's eyes glaze over as well. Two things usually happen when you do, though: first, people might take the time to read it, ask questions, and finally understand. Second, they might just say screw it and assume you are right. Of course, there is always the chance they will glaze over and continue posting what they always have, but they will just get referred back to this thread and eventually fall under one of the two categories, hopefully.

Either way it was informative so I thank you for taking time out of your day to share it.
Tachyonic Karma: Future decisions traveling backwards in time to smite you now.
 
l33t-g4m3r
Minister of Gerbil Affairs
Posts: 2059
Joined: Mon Dec 29, 2003 2:54 am

Sun Jul 15, 2007 4:45 pm

Then this would be the most relevant article.
http://en.wikipedia.org/wiki/X86-64

You're talking about whether or not the hardware supports the memory.
(The article you linked to originally was about PAE/software support.)
That's not really an issue with the Athlon 64 since the memory controller is built in.

So basically this is more about old Intel boards that have memory controllers/BIOSes that weren't designed for large amounts of RAM.

The percentage of 64-bit boards that have the issue you're talking about is probably very very small.

Maybe 32-bit systems that use the P3 and Athlon XP, but who's gonna put 4GB on something like that?
 
FubbHead
Grand Gerbil Poohbah
Posts: 3482
Joined: Mon Apr 08, 2002 9:04 pm
Location: Borås, Sweden

Sun Jul 15, 2007 5:06 pm

Hasn't memory remapping (or mapping, I'm getting confused) been available since protected memory came about (i.e., since the 286, I believe), and isn't it pretty heavily depended upon these days anyway? AFAIK, memory allocated to any application appears as one large chunk of contiguous memory, but (usually) consists of several different fragments of the physical memory (courtesy of the memory controller)?
Amiga 1200, 68020@28MHz, 4MB+2MB RAM, Conner 80MB harddrive
 
just brew it!
Administrator
Posts: 54500
Joined: Tue Aug 20, 2002 10:51 pm
Location: Somewhere, having a beer

Sun Jul 15, 2007 6:27 pm

l33t-g4m3r wrote:
Then this would be the most relevant article.
http://en.wikipedia.org/wiki/X86-64

You're talking about whether or not the hardware supports the memory.

No, he's not.

He's talking about whether the hardware supports splitting the physical RAM addresses into two blocks, to avoid the addresses which are reserved for I/O devices in the 3-4GB range. This is a different issue.
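A minimal sketch of the arithmetic (all numbers are assumptions for illustration -- 4GB installed, a hypothetical 1GB MMIO hole just below 4GB -- not any particular chipset's register layout):

[code]
GIB = 1 << 30

installed_ram   = 4 * GIB      # what's in the DIMM slots
mmio_hole_start = 3 * GIB      # hypothetical start of the PCI/MMIO region
mmio_hole_end   = 4 * GIB      # the hole always ends at 4 GiB

def visible_ram(remap_enabled, os_max_phys_addr):
    """RAM the OS can actually use, given chipset remap and the OS physical limit."""
    below_hole = min(installed_ram, mmio_hole_start)
    shadowed   = installed_ram - below_hole           # RAM hidden behind MMIO
    if not remap_enabled:
        return below_hole                             # shadowed RAM is simply lost
    # The chipset relocates the shadowed RAM to start at the 4 GiB boundary.
    remapped_top = mmio_hole_end + shadowed
    usable_above = max(0, min(remapped_top, os_max_phys_addr) - mmio_hole_end)
    return below_hole + usable_above

print(visible_ram(False, 4 * GIB) / GIB)   # 3.0 -- no remap: the familiar ~3GB
print(visible_ram(True,  4 * GIB) / GIB)   # 3.0 -- remap alone doesn't help a non-PAE 32-bit OS
print(visible_ram(True,  1 << 36) / GIB)   # 4.0 -- remap + an OS that can address >4GB
[/code]

So you need both halves: the chipset has to move the RAM out from under the I/O region, and the OS has to be able to address the place it was moved to.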
Nostalgia isn't what it used to be.
 
just brew it!
Administrator
Posts: 54500
Joined: Tue Aug 20, 2002 10:51 pm
Location: Somewhere, having a beer

Sun Jul 15, 2007 6:30 pm

FubbHead wrote:
Hasn't memory remapping (or mapping, I'm getting confused) been available since protected memory came about (i.e., since the 286, I believe), and isn't it pretty heavily depended upon these days anyway? AFAIK, memory allocated to any application appears as one large chunk of contiguous memory, but (usually) consists of several different fragments of the physical memory (courtesy of the memory controller)?

Separate issue.

What you're talking about is the mapping from virtual to physical addresses.

The issue bitvector is talking about is the ability to split the physical RAM addresses into two chunks, to avoid addressing conflicts between your RAM and your I/O devices. Virtual addresses don't even enter into it.
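To make the distinction concrete, here is a toy virtual-to-physical lookup (the frame numbers are invented for illustration): a contiguous virtual range maps to scattered physical frames via the page tables, and none of that depends on where the chipset places RAM relative to the I/O hole.

[code]
PAGE = 4096

# Four contiguous virtual pages of some process, backed by scattered physical frames.
page_table = {0: 0x12345, 1: 0x00042, 2: 0x7fff0, 3: 0x00100}

def translate(vaddr):
    vpn, offset = divmod(vaddr, PAGE)
    return page_table[vpn] * PAGE + offset

print(hex(translate(0x0000)))   # virtual 0x0000 -> physical 0x12345000
print(hex(translate(0x1010)))   # virtual page 1, offset 0x10 -> physical 0x42010
[/code]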
Nostalgia isn't what it used to be.
 
SuperSpy
Minister of Gerbil Affairs
Posts: 2403
Joined: Thu Sep 12, 2002 9:34 pm
Location: TR Forums

Sun Jul 15, 2007 6:49 pm

OK, I'm gonna post this question here because I believe threads like this are made of 100% pure awesome... :D

Given a fully compliant BIOS/motherboard, how much memory can I reasonably expect a 32-bit operating system, such as Windows XP, to support (and properly use)?
Desktop: i7-4790K @4.8 GHz | 32 GB | EVGA Gefore 1060 | Windows 10 x64
Laptop: MacBook Pro 2017 2.9GHz | 16 GB | Radeon Pro 560
 
just brew it!
Administrator
Posts: 54500
Joined: Tue Aug 20, 2002 10:51 pm
Location: Somewhere, having a beer

Sun Jul 15, 2007 6:55 pm

SuperSpy wrote:
OK, I'm gonna post this question here because I believe threads like this are made of 100% pure awesome... :D

Given a fully compliant BIOS/motherboard, how much memory can I reasonably expect a 32-bit operating system, such as Windows XP, to support (and properly use)?

For XP Pro, I believe the hard upper limit is 4GB of physical RAM.

Individual user processes are still limited to 2GB of virtual RAM, unless the /3GB switch is used in the boot.ini file, and the application was built with the LARGE_ADDRESS_AWARE option.
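As a rough sketch of how those two knobs interact (documented defaults for 32-bit XP on x86; nothing measured on a particular machine):

[code]
GIB = 1 << 30

def user_va_limit(boot_3gb, large_address_aware):
    """Per-process user virtual address space on 32-bit XP (x86)."""
    kernel_share = 1 * GIB if boot_3gb else 2 * GIB
    user_share = 4 * GIB - kernel_share
    # An EXE only gets the larger share if it opted in at link time.
    return user_share if large_address_aware else 2 * GIB

print(user_va_limit(False, False) / GIB)  # 2.0 -- default 2GB/2GB split
print(user_va_limit(True,  False) / GIB)  # 2.0 -- /3GB alone doesn't help an unflagged app
print(user_va_limit(True,  True)  / GIB)  # 3.0 -- /3GB + LARGE_ADDRESS_AWARE
[/code]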
Nostalgia isn't what it used to be.
 
SuperSpy
Minister of Gerbil Affairs
Posts: 2403
Joined: Thu Sep 12, 2002 9:34 pm
Location: TR Forums

Sun Jul 15, 2007 7:02 pm

just brew it! wrote:
For XP Pro, I believe the hard upper limit is 4GB of physical RAM.

Individual user processes are still limited to 2GB of virtual RAM, unless the /3GB switch is used in the boot.ini file, and the application was built with the LARGE_ADDRESS_AWARE option.

Oh, I understand that, but what I was asking was how much one can reasonably expect to be able to use, i.e., minus all the crap at the upper bounds of the 4GB. Is it something that can be measured, for instance by tallying up all the space reserved by all the hardware listed in Device Manager?

Even with 4GB worth of memory sticks sitting in the motherboard, Windows is still going to report something less than 4096 MB available, I assume.
Desktop: i7-4790K @4.8 GHz | 32 GB | EVGA Gefore 1060 | Windows 10 x64
Laptop: MacBook Pro 2017 2.9GHz | 16 GB | Radeon Pro 560
 
UberGerbil
Grand Admiral Gerbil
Posts: 10368
Joined: Thu Jun 19, 2003 3:11 pm

Sun Jul 15, 2007 7:28 pm

just brew it! wrote:
SuperSpy wrote:
OK, I'm gonna post this question here because I believe threads like this are made of 100% pure awesome... :D

Given a fully compliant BIOS/motherboard, how much memory can I reasonably expect a 32-bit operating system, such as Windows XP, to support (and properly use)?

For XP Pro, I believe the hard upper limit is 4GB of physical RAM.
But some of that won't be available because of overlap with device address space, as per BitVector's post that began this thread.

However, the various 32bit server versions of Windows can make use of up to 36 bits (64GB) of physical memory via PAE, again as BitVector described. The actual limits for each version are given in this table (including the funky Datacenter version that can go up to 128GB on special hardware).

Linux, given the right distro and build options, can use PAE in 32bits to get at all 4GB (again, provided your motherboard supports the necessary remapping of memory) or more if you have a server motherboard that supports >4GB of memory.
Individual user processes are still limited to 2GB of virtual RAM, unless the /3GB switch is used in the boot.ini file, and the application was built with the LARGE_ADDRESS_AWARE option.
Note, however, that PAE is somewhat incompatible with /3GB, because /3GB squeezes the OS down to 1GB of address space and PAE requires it to grow its data structures significantly to manage the extra memory, so /3GB effectively limits a server to 16GB of physical memory, even if more is installed.

But we're getting pretty far afield here, and the bottom line is this:

If you install 4GB of memory in your system, you will not see all of it unless:
1. Your motherboard supports remapping (or whatever it happens to be called in the bios) of memory that overlaps the PCI bus region
and
2. Your OS is either 64bit (of any sort), 32bit Windows Server, or a 32bit Linux build that supports PAE (and turns it on)

Note that not all devices are compatible with PAE. Devices see physical addresses, always (this is why we have this issue with physical RAM and DMA device addresses overlapping) and some of them can't handle addresses beyond the 4GB boundary (this is not a driver or OS issue, this is a "cutting corners to make cheap hardware" issue). On top of that, some otherwise-functional devices have drivers that blow up when they see addresses > 4GB. This is why Windows XP doesn't turn on PAE automatically to give you that full 4GB (even though it is using it internally to handle the xD/NX Data Execution Protection bit). The hardware compatibility list for Windows 2K3 Server is shorter than the list for Windows XP, and that's one of the reasons. That's also one of the reasons some devices don't, and will never, have 64bit drivers.

But here's the thing: even if everything works perfectly -- your hardware and drivers can handle those addresses, your motherboard supports remapping, and you're running a 32bit OS (Windows or Linux) that turns on PAE so you see the full 4GB... you still aren't making optimal use of that memory. If you've got 4GB, you should be using a 64bit OS. Virtual memory management is built around a couple of fundamental assumptions, and one of those is that there's considerably more virtual address space than physical address space (ie, at least several times as much). When you're running a 32bit OS with 4GB of RAM, they're equal. And that creates problems: you get virtual address fragmentation, collisions, and other problems that cause the OS to work harder and less efficiently. PAE is a clever solution to a problem, but it is only a solution in particular circumstances, and most of those involve servers running certain workloads. It is a bad solution to the problem of using more than 4GB of physical memory for general purpose computing. The good solution is a 64bit OS.

If you've got 4GB, go 64bit. If you're thinking of getting 4GB, make sure your mobo supports remapping, and plan to go 64bit.

(Let me also take this opportunity to thank bitvector for starting a thread to pull this all into one place and for attempting to explain the issue and the solutions as clearly as possible.)
Last edited by UberGerbil on Sun Jul 15, 2007 7:40 pm, edited 1 time in total.
 
SuperSpy
Minister of Gerbil Affairs
Posts: 2403
Joined: Thu Sep 12, 2002 9:34 pm
Location: TR Forums

Sun Jul 15, 2007 7:38 pm

So basically, one really shouldn't be running a system with more than 2 GB on a 32-bit operating system. Even if you can hack-and-slash your way into making the OS see all of your physical memory, it's still going to be getting awfully cramped in there, and probably not operating optimally.
Desktop: i7-4790K @4.8 GHz | 32 GB | EVGA Gefore 1060 | Windows 10 x64
Laptop: MacBook Pro 2017 2.9GHz | 16 GB | Radeon Pro 560
 
bitvector
Grand Gerbil Poohbah
Topic Author
Posts: 3293
Joined: Wed Jun 22, 2005 4:39 pm
Location: San Francisco, CA

Sun Jul 15, 2007 7:42 pm

derFunkenstein wrote:
I also found this article about enabling PAE (which my CPU supports according to CPU-Z) and while I ran the cmd prompt as admin and entered the command line, and the message returned was "The operation completed successfully" I'm still seeing only 2047. I turned off the memory remap again and now I'm back to 3007.

Yep, derFunk... the reason you're seeing this is as follows: when you turn on remap, the system puts 0-2GB at the normal place and puts 2GB-3GB at physical addresses 4GB-5GB. That's exactly the mapping that the AMD system I posted uses (it remaps everything between 2GB-4GB rather than just 3GB-4GB or 3.5GB-4GB). So that last 1GB requires PAE to access on your 32-bit Vista. Which is fine, but it seems that whatever you did to enable PAE didn't actually work (or do anything). FWIW, other people report the same problem (force-enabling PAE doesn't actually do so). I don't know enough about Windows to know why this would be the case, but it makes sense that you'd see almost 3GB before going down to 2GB: enabling the remap moved the last 1GB above the 4GB boundary.
 
bitvector
Grand Gerbil Poohbah
Topic Author
Posts: 3293
Joined: Wed Jun 22, 2005 4:39 pm
Location: San Francisco, CA

Sun Jul 15, 2007 7:43 pm

UberGerbil wrote:
But here's the thing: even if everything works perfectly -- your hardware and drivers can handle those addresses, your motherboard supports remapping, and you're running a 32bit OS (Windows or Linux) that turns on PAE so you see the full 4GB... you still aren't making optimal use of that memory.

Yes, I generally wouldn't recommend people futz around with this just to get the satisfaction of using >=4GB in a 32-bit OS. My purpose in starting the thread wasn't to encourage people to do so, but to point out that the typical "OMG 64-bit OS" answer doesn't really solve the problem in question. It is half of a (the best) solution, but not necessary or sufficient.

UberGerbil wrote:
Note that not all devices are compatible with PAE. Devices see physical addresses, always (this is why we have this issue with physical RAM and DMA device addresses overlapping) and some of them can't handle addresses beyond the 4GB boundary (this is not a driver or OS issue, this is a "cutting corners to make cheap hardware" issue). On top of that, some otherwise-functional devices have drivers that blow up when they see addresses > 4GB.

FWIW, a lot of OSs (like Linux) deal with devices that can't handle larger physical addresses by using IO bounce buffers in low physical memory. Of course, this artificially increases memory pressure because some physical frames are not okay for some uses. And you incur an extra copy. Legacy baggage is a bitch. IO address virtualization is a nicer solution to this (like AMD's IO MMU).
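A toy sketch of that bounce-buffer pattern (the helper names and the 4GB DMA mask are illustrative assumptions, not any real kernel's API):

[code]
GIB = 1 << 30

class ToyDevice:
    dma_mask = 4 * GIB - 1      # a PCI device that truncates addresses to 32 bits

def dma_to_device(dev, src_phys, length, alloc_low, memcpy, start_dma):
    """Hand `length` bytes at physical address `src_phys` to the device."""
    if src_phys + length - 1 <= dev.dma_mask:
        start_dma(dev, src_phys, length)      # device can reach the buffer directly
        return
    bounce = alloc_low(length)                # buffer guaranteed to sit below 4GB
    memcpy(bounce, src_phys, length)          # the extra copy mentioned above
    start_dma(dev, bounce, length)            # the device only ever sees a low address

# Stub plumbing so the sketch runs standalone.
log = []
dma_to_device(ToyDevice(), src_phys=5 * GIB, length=4096,
              alloc_low=lambda n: 1 * GIB,
              memcpy=lambda dst, src, n: log.append(("copy", hex(src), hex(dst))),
              start_dma=lambda d, addr, n: log.append(("dma", hex(addr), n)))
print(log)   # the copy through low memory, then the DMA from the bounce buffer
[/code]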
Last edited by bitvector on Sun Jul 15, 2007 7:48 pm, edited 2 times in total.
 
UberGerbil
Grand Admiral Gerbil
Posts: 10368
Joined: Thu Jun 19, 2003 3:11 pm

Sun Jul 15, 2007 7:47 pm

SuperSpy wrote:
So basically, one really shouldn't be running a system with more than 2 GB on a 32-bit operating system. Even if you can hack-and-slash your way into making the OS see all of your physical memory, it's still going to be getting awfully cramped in there, and probably not operating optimally.
Exactly. Except remove the "probably."

(Linus Torvalds says it starts getting bad after 1GB, but that's because he's stuck worrying about the ugly guts of things and knows how much better things could be in theory. If you want to see his commentary on this, I linked to it here).
 
bitvector
Grand Gerbil Poohbah
Topic Author
Posts: 3293
Joined: Wed Jun 22, 2005 4:39 pm
Location: San Francisco, CA

Sun Jul 15, 2007 7:56 pm

UberGerbil wrote:
(Linus Torvalds says it starts getting bad after 1GB, but that's because he's stuck worrying about the ugly guts of things and knows how much better things could be in theory. If you want to see his commentary on this, I linked to it here).

Not that I disagree with him about PAE and the inanity of having a virtual address space that is smaller than the physical address space, but frequently Linus voices his opinions in very strong terms (which you noted in your linked post). Linus also said that Linux would never EVER support more than 2GB on a 32-bit platform (for the same reason: "no, get a bigger address space"). :lol:

Another one of my favorite Linus quotes (re: strong opinions) is his recent, "Solaris is ****" aphorism. :D
 
Krogoth
Emperor Gerbilius I
Posts: 6049
Joined: Tue Apr 15, 2003 3:20 pm
Location: somewhere on Core Prime
Contact:

Sun Jul 15, 2007 9:24 pm

Wow, a bunch of information and confusion.

The problem is that the BIOS and older hardware-level resource addressing by default require a reservation of memory in the x86 standard. The designers of the x86 architecture placed this reservation at the upper limit of 4GB. That is why this problem rears its ugly head when you try to address beyond 2GB of memory. The same designers in the 1980s thought that x86 would never realistically reach beyond 2GB of local memory.

The current memory swapping trick overcomes this problem by forcing the memory to be addressed outside this reservation space.

The more permanent solution would simply be complete elimination of old resource allocation kludges in the x86 standard. I think EFI might do the trick.
Gigabyte X670 AORUS-ELITE AX, Raphael 7950X, 2x16GiB of G.Skill TRIDENT DDR5-5600, Sapphire RX 6900XT, Seasonic GX-850 and Fractal Define 7 (W)
Ivy Bridge 3570K, 2x4GiB of G.Skill RIPSAW DDR3-1600, Gigabyte Z77X-UD3H, Corsair CX-750M V2, and PC-7B
 
bitvector
Grand Gerbil Poohbah
Topic Author
Posts: 3293
Joined: Wed Jun 22, 2005 4:39 pm
Location: San Francisco, CA

Sun Jul 15, 2007 9:32 pm

Krogoth wrote:
The more permanent solution would simply be complete elimination of old resource allocation kludges in the x86 standard. I think EFI might do the trick.

EFI doesn't fix PCI cards that can't deal with physical addresses > 4GB (and broken PCI-E/PCI-X cards that truncate 64-bit addresses). IO address virtualization can handle that particular thorn, but I have a feeling we'll just end up living with it for a long time.
 
UberGerbil
Grand Admiral Gerbil
Posts: 10368
Joined: Thu Jun 19, 2003 3:11 pm

Sun Jul 15, 2007 9:34 pm

bitvector wrote:
UberGerbil wrote:
But here's the thing: even if everything works perfectly -- your hardware and drivers can handle those addresses, your motherboard supports remapping, and you're running a 32bit OS (Windows or Linux) that turns on PAE so you see the full 4GB... you still aren't making optimal use of that memory.

Yes, I generally wouldn't recommend people futz around with this just to get the satisfaction of using >=4GB in a 32-bit OS. My purpose in starting the thread wasn't to encourage people to do so, but to point out that the typical "OMG 64-bit OS" answer doesn't really solve the problem in question. It is half of a (the best) solution, but not necessary or sufficient.
Yeah. I think we need to emphasize that the sufficient answer is the right mobo + 64bit OS, and just leave PAE to die. I've been guilty of bringing it up in the past -- because I have seen it work as a solution in the right circumstances, when 64bit wasn't an option -- but I think at this point, with 64bit versions of XP and Vista available, as well as Linux (and presumably OSX Real Soon Now), pushing 64bit is the way to go. People may have driver issues with 64bit, but they may have them with PAE too, and there's a much better chance the 64bit issues will get fixed. Plus on the Windows side there's no cheap way to get PAE (unless you're piggybacking on MSDN Universal or buying 2K3 Server with an educational discount or something).
UberGerbil wrote:
Note that not all devices are compatible with PAE. Devices see physical addresses, always (this is why we have this issue with physical RAM and DMA device addresses overlapping) and some of them can't handle addresses beyond the 4GB boundary (this is not a driver or OS issue, this is a "cutting corners to make cheap hardware" issue). On top of that, some otherwise-functional devices have drivers that blow up when they see addresses > 4GB.
FWIW, a lot of OSs (like Linux) deal with devices that can't handle larger physical addresses by using IO bounce buffers in low physical memory. Of course, this artificially increases memory pressure because some physical frames are not okay for some uses. Legacy baggage is a bitch. IO address virtualization is a nicer solution to this (like AMD's IO MMU).
Yeah, I thought about mentioning this but Didn't Want To Go There. Believe it or not, the NT kernel (or, technically, the HAL codebase) has actually supported an IOMMU for over a decade -- support was added for the Alpha and MIPS, which both had it (well, the MIPS R4000 did, not sure about earlier). AFAIK the code was never used on the x86 because there's never been the hardware for it. My understanding of the Linux code on AMD64 is that it reprograms the AGP GART to act as an IOMMU, which is possible because it's in the chip -- it's a great hack, but it's not a true IOMMU. Neither Intel nor AMD have seen fit to support real hardware IOMMU (and in fact when they do get around to it they'll need to do it so as to support virtualized IO on virtualized OSes as well).

The bounce buffers aren't all that bad, apparently, in terms of performance impact; I suspect the biggest issue is cache pollution.
 
derFunkenstein
Gerbil God
Posts: 25427
Joined: Fri Feb 21, 2003 9:13 pm
Location: Comin' to you directly from the Mothership

Sun Jul 15, 2007 9:38 pm

bitvector wrote:
derFunkenstein wrote:
I also found this article about enabling PAE (which my CPU supports according to CPU-Z) and while I ran the cmd prompt as admin and entered the command line, and the message returned was "The operation completed successfully" I'm still seeing only 2047. I turned off the memory remap again and now I'm back to 3007.

Yep, derFunk... the reason you're seeing this is as follows: when you turn on remap, the system puts 0-2GB at the normal place and puts 2GB-3GB at physical addresses 4GB-5GB. That's exactly the mapping that the AMD system I posted uses (it remaps everything between 2GB-4GB rather than just 3GB-4GB or 3.5GB-4GB). So that last 1GB requires PAE to access on your 32-bit Vista. Which is fine, but it seems that whatever you did to enable PAE didn't actually work (or do anything). FWIW, other people report the same problem (force-enabling PAE doesn't actually do so). I don't know enough about Windows to know why this would be the case, but it makes sense that you'd see almost 3GB before going down to 2GB: enabling the remap moved the last 1GB above the 4GB boundary.


It was pretty simplified in my mind, but that's the more fleshed-out version of what I *thought* it had done, and Vista just wasn't capable of doing anything about it.

I've used 3GB with XP before and never had that sort of disappearing RAM. I'm tempted to blame Vista32, but I'm not 100% sure it's the OS - it was an AMD system that I'd used with that much RAM previously, and with their onboard memory controller, maybe it wrapped the memory around the "dead" space differently. I don't know. I broke down and gave MS my $20, though, so I'll turn memory remapping back on, upgrade to Vista64, and see if that does it.

IO address virtualization can handle that particular thorn, but I have a feeling we'll just end up living with it for a long time.

Eventually, though, won't every OS just have to work around it automatically (assuming memory controller support)? Linux apparently handles it differently than Vista does, at least, or enabling PAE and turning on the remap in my BIOS would have resulted in 3072MB of memory "seen" by the OS.

I guess what you mean by living with it is more on the development side dealing with segmented memory addresses (if that's the wrong terminology, let me know) rather than users always having this case of "disappearing" memory.
I do not understand what I do. For what I want to do I do not do, but what I hate I do.
Twittering away the day at @TVsBen
 
bitvector
Grand Gerbil Poohbah
Topic Author
Posts: 3293
Joined: Wed Jun 22, 2005 4:39 pm
Location: San Francisco, CA

Sun Jul 15, 2007 9:43 pm

UberGerbil wrote:
My understanding of the Linux code on AMD64 is that it reprograms the AGP GART to act as an IOMMU, which is possible because it's in the chip -- it's a great hack, but it's not a true IOMMU. Neither Intel nor AMD have seen fit to support real hardware IOMMU (and in fact when they do get around to it they'll need to do it so as to support virtualized IO on virtualized OSes as well).

Well, I was talking about the fact that AMD has proposed a real IOMMU that they want to deliver as a complement to Pacifica. The specs are here.

On their site they clarify that some people refer to the hacked-up GART as an IOMMU, but they're trying to deliver a real full-fledged IOMMU too: "Existing AMD64 devices already include a more limited address translation facility, called a GART (Graphics Address Remapping Table), right on chip. The on-chip GART has been used for device address translation in existing systems, and is sometimes itself referred to as an IOMMU (especially in discussions of the Linux kernel), which can lead to confusion between the existing GART and the new IOMMU specification that we're discussing here."
 
Krogoth
Emperor Gerbilius I
Posts: 6049
Joined: Tue Apr 15, 2003 3:20 pm
Location: somewhere on Core Prime
Contact:

Sun Jul 15, 2007 9:44 pm

bitvector wrote:
Krogoth wrote:
The more permanent solution would simply be complete elimination of old resource allocation kludges in the x86 standard. I think EFI might do the trick.

EFI doesn't fix PCI cards that can't deal with physical addresses > 4GB (and broken PCI-E/PCI-X cards that truncate 64-bit addresses). IO address virtualization can handle that particular thorn, but I have a feeling we'll just end up living with it for a long time.


Again, those are just kludges that depend on legacy resource addressing. The sooner the industry gets rid of IRQs, DMAs, and such aging nonsense, the quicker these problems will become a thing of the past.
Gigabyte X670 AORUS-ELITE AX, Raphael 7950X, 2x16GiB of G.Skill TRIDENT DDR5-5600, Sapphire RX 6900XT, Seasonic GX-850 and Fractal Define 7 (W)
Ivy Bridge 3570K, 2x4GiB of G.Skill RIPSAW DDR3-1600, Gigabyte Z77X-UD3H, Corsair CX-750M V2, and PC-7B
 
UberGerbil
Grand Admiral Gerbil
Posts: 10368
Joined: Thu Jun 19, 2003 3:11 pm

Sun Jul 15, 2007 9:46 pm

Krogoth wrote:
The problem is that the BIOS and older hardware-level resource addressing by default require a reservation of memory in the x86 standard. The designers of the x86 architecture placed this reservation at the upper limit of 4GB. That is why this problem rears its ugly head when you try to address beyond 2GB of memory. The same designers in the 1980s thought that x86 would never realistically reach beyond 2GB of local memory.
Considering hard drives of the era were in the 10s of MB range, yeah, even 1GB was unimaginably huge. The first IBM motherboards didn't support more than 512K (and that cost so much that hardly anybody had more than 64K).

But this isn't a BIOS issue. And it actually arrived later, with PCI. Devices that do DMA (Direct Memory Access) have to be able to... address memory. Directly. That means they have to live somewhere in the address space. It doesn't really matter where, so it was decided to stick them at the top of the address space where (until recently) they wouldn't interfere with real physical memory. But they have to go somewhere, and wherever that might be, it's going to preclude RAM from occupying those same addresses.
The current memory swapping trick overcomes this problem by forcing the memory to be addressed outside this reservation space.
It's not swapping. The physical addresses for RAM are relocated above the 4GB boundary, where they happily live. As long as the OS can cope with addresses above 4GB (and 32bit OSes can, through PAE), everybody is happy... well, everybody that deals with virtual addresses (because only the OS knows where in physical memory those virtual addresses go). As I mentioned, devices can still blow up if they're asked to transfer data into that remapped memory because they see real physical addresses.
The more permanent solution would simply be complete elimination of old resource allocation kludges in the x86 standard. I think EFI might do the trick.
This isn't a BIOS problem, so, as Bitvector said, EFI isn't going to fix it. Anyway, EFI is only supported on 64bit, and if you have a 64bit OS your problem is solved anyway (again, assuming your mobo supports remapping).
 
Krogoth
Emperor Gerbilius I
Posts: 6049
Joined: Tue Apr 15, 2003 3:20 pm
Location: somewhere on Core Prime
Contact:

Sun Jul 15, 2007 9:51 pm

UberGerbil wrote:
Krogoth wrote:
The problem is that the BIOS and older hardware-level resource addressing by default require a reservation of memory in the x86 standard. The designers of the x86 architecture placed this reservation at the upper limit of 4GB. That is why this problem rears its ugly head when you try to address beyond 2GB of memory. The same designers in the 1980s thought that x86 would never realistically reach beyond 2GB of local memory.
Considering hard drives of the era were in the 10s of MB range, yeah, even 1GB was unimaginably huge. The first IBM motherboards didn't support more than 512K (and that cost so much that hardly anybody had more than 64K).

But this isn't a BIOS issue. And it actually arrived later, with PCI. Devices that do DMA (Direct Memory Access) have to be able to... address memory. Directly. That means they have to live somewhere in the address space. It doesn't really matter where, so it was decided to stick them at the top of the address space where (until recently) they wouldn't interfere with real physical memory. But they have to go somewhere, and wherever that might be, it's going to preclude RAM from occupying those same addresses.
The current memory swapping trick overcomes this problem by forcing the memory to be addressed outside this reservation space.
It's not swapping. The physical addresses for RAM are relocated above the 4GB boundary, where they happily live. As long as the OS can cope with addresses above 4GB (and 32bit OSes can, through PAE), everybody is happy... well, everybody that deals with virtual addresses (because only the OS knows where in physical memory those virtual addresses go). As I mentioned, devices can still blow up if they're asked to transfer data into that remapped memory because they see real physical addresses.
The more permanent solution would simply be complete elimination of old resource allocation kludges in the x86 standard. I think EFI might do the trick.
This isn't a BIOS problem, so, as Bitvector said, EFI isn't going to fix it. Anyway, EFI is only supported on 64bit, and if you have a 64bit OS your problem is solved anyway (again, assuming your mobo supports remapping).


EFI being 64-bit from the ground up will fix the problem by forcing hardware guys to forsake legacy x86. I suppose that a new dynamic memory addressing scheme might be an even better solution.
Gigabyte X670 AORUS-ELITE AX, Raphael 7950X, 2x16GiB of G.Skill TRIDENT DDR5-5600, Sapphire RX 6900XT, Seasonic GX-850 and Fractal Define 7 (W)
Ivy Bridge 3570K, 2x4GiB of G.Skill RIPSAW DDR3-1600, Gigabyte Z77X-UD3H, Corsair CX-750M V2, and PC-7B
 
derFunkenstein
Gerbil God
Posts: 25427
Joined: Fri Feb 21, 2003 9:13 pm
Location: Comin' to you directly from the Mothership

Sun Jul 15, 2007 9:55 pm

Krogoth, I think you're looking at a sky-high pie, man. It sounds nice, but I don't see that happening in x86-based (and AMD64 has its roots firmly in x86) operating systems anytime soon. Too many consumers are concerned with backwards compatibility. Look at the stink over what doesn't run on Vista and multiply that by about 10M.
I do not understand what I do. For what I want to do I do not do, but what I hate I do.
Twittering away the day at @TVsBen
 
UberGerbil
Grand Admiral Gerbil
Posts: 10368
Joined: Thu Jun 19, 2003 3:11 pm

Sun Jul 15, 2007 9:55 pm

Krogoth wrote:
Again, those are just kludges that depend on legacy resource addressing. The sooner the industry gets rid of IRQs, DMAs, and such aging nonsense, the quicker these problems will become a thing of the past.
Uh... you're not going to get rid of interrupts or DMA transfers. Do you really want your CPU to be polling devices and copying data manually? Have you noticed the difference between PIO and DMA access with hard drives? The legacy IRQ implementation isn't really a constraint anymore, and you're definitely not getting rid of DMA.

Now, a full hardware IO MMU with virtualization, prioritized IO, etc, would be a Good Thing, but that's essentially a separate issue.
 
UberGerbil
Grand Admiral Gerbil
Posts: 10368
Joined: Thu Jun 19, 2003 3:11 pm

Sun Jul 15, 2007 9:57 pm

Krogoth wrote:
EFI being 64-bit from the ground up will fix the problem by forcing hardware guys to forsake legacy x86.
I don't think EFI does what you think it does.
I suppose that a new dynamic memory addressing scheme might be an even better solution.
I don't know what this even means.
 
derFunkenstein
Gerbil God
Posts: 25427
Joined: Fri Feb 21, 2003 9:13 pm
Location: Comin' to you directly from the Mothership

Sun Jul 15, 2007 9:58 pm

UberGerbil wrote:
I don't know what this even means.

I took him to mean that he wants to start over, but essentially what he's suggesting is a whole new platform.
I do not understand what I do. For what I want to do I do not do, but what I hate I do.
Twittering away the day at @TVsBen
 
Krogoth
Emperor Gerbilius I
Posts: 6049
Joined: Tue Apr 15, 2003 3:20 pm
Location: somewhere on Core Prime
Contact:

Sun Jul 15, 2007 10:01 pm

UberGerbil wrote:
Krogoth wrote:
EFI being 64-bit from the ground up will fix the problem by forcing hardware guys to forsake legacy x86.
I don't think EFI does what you think it does.
I suppose that a new dynamic memory addressing scheme might be an even better solution.
I don't know what this even means.


Dynamic = it can change on the fly to meet the demands of the system rather than having some arbitrary static values.

derFunkenstein wrote:
I took him to mean that he wants to start over, but essentially what he's suggesting is a whole new platform.


For the most part, that is the general direction.
Gigabyte X670 AORUS-ELITE AX, Raphael 7950X, 2x16GiB of G.Skill TRIDENT DDR5-5600, Sapphire RX 6900XT, Seasonic GX-850 and Fractal Define 7 (W)
Ivy Bridge 3570K, 2x4GiB of G.Skill RIPSAW DDR3-1600, Gigabyte Z77X-UD3H, Corsair CX-750M V2, and PC-7B
 
UberGerbil
Grand Admiral Gerbil
Posts: 10368
Joined: Thu Jun 19, 2003 3:11 pm

Sun Jul 15, 2007 10:07 pm

bitvector wrote:
Not that I disagree with him about PAE and the inanity of having a virtual address space that is smaller than the physical address space, but frequently Linus voices his opinions in very strong terms (which you noted in your linked post). Linus also said that Linux would never EVER support more than 2GB on a 32-bit platform (for the same reason: "no, get a bigger address space"). :lol:
Oh, yeah, which is why I linked to my post and not to the RWT discussion directly. He is very... passionate about certain things. I actually got into an argument with him about threading: he insisted the only good reason to thread was to make use of extra cores, which of course is a completely reasonable position from the standpoint of the OS, but he didn't want to accept that maintaining a responsive UI, in an application with a UI, is a reasonable use of separate UI and worker threads.
bitvector wrote:
Well, I was talking about the fact that AMD has proposed a real IOMMU that they want to deliver as a complement to Pacifica. The specs are here.
Yeah, sorry, we were talking at cross-purposes. I know about AMD's proposal; I just thought you were referring to what's out there being called an IOMMU right now.

I suspect it'll take a few iterations before they get it working to the point where it's a benefit, like the virtualization features or message-signaled interrupts (which have been in chipsets, in a broken form, for some time but are now supported in Vista because the hardware bugs have been ironed out). And if it's like hardware virtualization support, we'll also have two different implementations. Of course this is one area where AMD could forge ahead and become the de facto standard, if they can get MS to adopt it, much as they did with x86-64 (so it's no surprise they're out in front with the specs, if not the implementation).
 
just brew it!
Administrator
Posts: 54500
Joined: Tue Aug 20, 2002 10:51 pm
Location: Somewhere, having a beer

Sun Jul 15, 2007 10:15 pm

Krogoth wrote:
Dynamic = it can change on the fly to meet the demands of the system rather than having some arbitrary static values.

Sounds like you are advocating something along the lines of another layer of memory management, underneath the current virtual memory scheme. That would be huge overkill for the problem you're trying to solve, and would also likely hurt performance.
Nostalgia isn't what it used to be.