Personal computing discussed

 
UberGerbil
Grand Admiral Gerbil
Posts: 10368
Joined: Thu Jun 19, 2003 3:11 pm

Sun Jul 15, 2007 10:21 pm

ShadowEyez wrote:
Most motherboards that support 64-bit CPUs do have support for memory address remapping, as Intel and other tech company engineers saw this coming years ago.
Actually, until recently only server boards had the option for this. I don't know how the actual numbers stack up (it's compounded by the option not being in the BIOS even if the chipset is capable, so the percentage may change as new BIOS are released) but there are plenty of 64bit-capable mobos out there without this option. And as BitVector noted, some of those may have been built cheaply without the requisite address lines, even if the chipset is capable, so no amount of BIOS updates will make this option available.
As for PAE, while I have never personally used it, I have heard that because of the way it maps virtual memory (a sort of 36-bit -> 32-bit memory address converter) it runs very slowly.
This is completely wrong. What you're describing <i>might</i> be AWE, which is a completely different technology that allows applications to manually map additional memory in and out of their address space. But PAE uses exactly the same page tables (PTEs) and hardware resources that regular address translation does; it just adds another layer (and expands their size). In fact, if you're running XP SP2 on a CPU that supports hardware NX/xD (Data Execution Prevention) you're using PAE right now (it's just hidden from the rest of the system). And of course everybody complained about how slow SP2 was compared to pre-SP2.... no? Also, if you think PAE is slow, you must hate x64, since it uses one <i>more</i> layer of PTEs than PAE does.
Last edited by UberGerbil on Sun Jul 15, 2007 10:49 pm, edited 1 time in total.
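[Editor's note: the page-table layering UberGerbil describes is easy to picture with a little bit-slicing. This is an illustrative sketch only, not any OS's real code; the example address is arbitrary.]

```python
# Illustrative sketch (not any OS's real code): how the MMU slices a
# 32-bit virtual address into page-table indices. PAE doesn't change
# the translation mechanism; it adds one more level and widens PTEs
# from 4 to 8 bytes, which is where the room for 36-bit physical
# addresses (and the NX/xD bit) comes from.

def split_classic(va):
    """Classic 32-bit paging: 10-bit directory index, 10-bit table
    index, 12-bit page offset (two levels of 4-byte PTEs)."""
    return (va >> 22) & 0x3FF, (va >> 12) & 0x3FF, va & 0xFFF

def split_pae(va):
    """PAE paging: a 2-bit PDPT index on top, then 9+9 bits for the
    directory/table levels (8-byte PTEs), same 12-bit offset. x64
    later stacks one more level (the PML4) on top of this."""
    return (va >> 30) & 0x3, (va >> 21) & 0x1FF, (va >> 12) & 0x1FF, va & 0xFFF

va = 0xC0123456  # arbitrary example address
print(tuple(hex(x) for x in split_classic(va)))  # ('0x300', '0x123', '0x456')
print(tuple(hex(x) for x in split_pae(va)))      # ('0x3', '0x0', '0x123', '0x456')
```

Either way the whole 32-bit address is consumed (10+10+12 or 2+9+9+12 bits), which is why the extra level by itself costs so little: the work is still one hardware walk per TLB miss.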
 
titan
Grand Gerbil Poohbah
Posts: 3376
Joined: Mon Feb 18, 2002 7:00 pm
Location: Great Smoky Mountains
Contact:

Re: Dude, where's my 4GB?

Sun Jul 15, 2007 10:28 pm

bitvector wrote:
titan wrote:
bitvector wrote:
The 4GB limitation in virtual addressing has to do with a single application...

I think you mean 2GB, right?

No, I meant 4GB. A 32-bit application sees 2^32 bytes == 4GB of virtual address space. The split of the virtual address space between user/kernel data is OS-specific: on Linux this would usually be 3GB/1GB, but even this is just one possible choice. In fact, Ingo Molnar wrote a patch (the 4G/4G patch) to allow userspace to use almost the full 4GB. It reserves only 16MB at the top of the address space for a kernel stack and a few other things, and the kernel uses almost entirely separate page table entries when you perform a syscall. Of course this has high overhead because x86 does not have a tagged TLB, so you incur a TLB flush on every syscall. But there's definitely more than one way to skin a cat: 4GB is the only general, non-OS-specific limit for a 32-bit address space.

One of my gripes about these threads that led me to write this is that they tend to get derailed on issues like the size of virtual address space, which has nothing to do with this particular problem. The people asking this question are going into "My Computer" properties or running some system diagnostic program and seeing how much RAM that Windows reports as total system memory.


Oh, okay, I got ya now. I didn't know you were being nonspecific on the OS. Thanks for clearing that up.
The best things in life are free.
http://www.gentoo.org
Guy 1: Surely, you will fold with me.
Guy 2: Alright, but don't call me Shirley.
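[Editor's note: the numbers bitvector cites above check out with simple arithmetic. A quick sketch; the split names are labels for the configurations mentioned in the thread, and the ~16MB figure for the 4G/4G patch comes from his post.]

```python
# Quick arithmetic check on the 32-bit virtual address space and the
# user/kernel splits discussed above. Sizes only, nothing OS-internal.
GB = 2**30
MB = 2**20
total = 2**32                     # bytes reachable through a 32-bit pointer
assert total == 4 * GB

splits = {
    "Windows default 2G/2G": (2 * GB, 2 * GB),
    "Linux default 3G/1G":   (3 * GB, 1 * GB),
    "4G/4G patch":           (total - 16 * MB, 16 * MB),  # ~16MB kept for the kernel
}
for name, (user, kernel) in splits.items():
    # every split has to account for the whole 4GB
    assert user + kernel == total
    print(f"{name}: user {user / GB:.3f}GB, kernel {kernel / GB:.3f}GB")
```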
 
UberGerbil
Grand Admiral Gerbil
Posts: 10368
Joined: Thu Jun 19, 2003 3:11 pm

Sun Jul 15, 2007 10:46 pm

derFunkenstein wrote:
I also found this article about enabling PAE (which my CPU supports, according to CPU-Z). I ran the cmd prompt as admin and entered the command line, and the message returned was "The operation completed successfully", but I'm still seeing only 2047. I turned off the memory remap again and now I'm back to 3007. That's close enough to the full 3GB that I'm not going to spend the $20 or whatever Microsoft wants for a 64-bit version of the media that I already paid for (bought Home Premium academic version at the campus bookstore thinking it'd have both 32 and 64-bit binaries... oops)
Yeah, Ultimate is the only edition with both 32bit and 64bit media in the box, AFAIK.

I'm pretty sure that forcing PAE on the non-server versions of Windows does nothing to turn it on, and this "tweak" is bogus. In reality PAE is already on -- just not exposed outside the HAL -- as long as hardware DEP is available, and I don't think Vista Home will expose it even if you try to force it. I don't have a 32bit version of Vista to test on, but I suspect that's a no-op and it just tells you it completed because it didn't fail (hey, it <b>is</b> [still] on, right?). MS doesn't want to expose PAE for consumers because some consumer hardware can't handle it, and they have enough issues with compatibility in Vista as it is. Yeah, it would be nice if there were a "no airbags" switch that you could use with the assumption that if you blew yourself up you could handle it... and I believe that switch is called "Linux." The Windows world has it too, actually, but it's not enough to just (think you) know what you're doing: you also have to be on the Longhorn Server beta (or have the pocketbook to buy it when it ships).
derFunkenstein wrote:
IO address virtualization can handle that particular thorn, but I have a feeling we'll just end up living with it for a long time.
Eventually, though, won't every OS just have to work around it automatically (assuming memory controller support)? Linux apparently handles it differently than Vista does, at least, or enabling PAE and turning on the remap in my BIOS would have resulted in 3072MB of memory "seen" by the OS.
Well, mobos could ship with the remapping option turned on by default, and users could use a 64bit OS by default, and the problem goes away for everybody except the OS developers and driver writers (and even there it's not bad, once you're living in a 64bit world). An IOMMU gets rid of the problem completely for driver writers, too (which I think is the answer to Krogoth's "Dynamic Memory" suggestion). That would just leave people who insist on running 32bit for whatever reason, who'll have to get the right Linux configuration or Windows Server edition to get to all their memory. But they're eventually going to be like people running Win98 or DOS.
Last edited by UberGerbil on Mon Jul 16, 2007 1:14 am, edited 1 time in total.
 
bitvector
Grand Gerbil Poohbah
Topic Author
Posts: 3293
Joined: Wed Jun 22, 2005 4:39 pm
Location: San Francisco, CA

Sun Jul 15, 2007 10:53 pm

UberGerbil wrote:
ShadowEyez wrote:
Most motherboards that support 64-bit CPUs do have support for memory address remapping, as Intel and other tech company engineers saw this coming years ago.
Actually, until recently only server boards had the option for this. I don't know how the actual numbers stack up (it's compounded by the option not being in the BIOS even if the chipset is capable, so it may get better as new BIOS are released) but there are plenty of 64bit-capable mobos out there without this option.

Yeah, this is one reason we keep seeing this problem on message boards. I first became cognizant of this over a year ago when the company I worked for put 4GB into a box with an EM64T-capable CPU running 64-bit Linux and could only get 3.3GB. At the time, there wasn't as much info about this problem, and it led me to look into it. The chipset itself was (according to Intel) incapable of remapping. It was really infuriating because they advertised it as capable of accepting 4GB of RAM, and they had to know it wouldn't all be usable.

Even today, a lot of laptops with Core 2 Duos suffer from this, even some laptops with chipsets that are theoretically capable. In addition to the lazy BIOS writer explanation (which I'm sure happens sometimes), I saw another plausible hardware explanation: according to Ian Griffiths, earlier Pentium Ms supposedly only had 32 physical address lines to save power. Now, C2Ds have enough address lines, but perhaps we see some mobo manufacturers not connecting the top address lines > 32 to the northbridge for similar reasons (or cost). I don't know if his statements are accurate, but I do know earlier Pentium Ms didn't support PAE (because I've been bitten by it) so it seems at least feasible.
 
UberGerbil
Grand Admiral Gerbil
Posts: 10368
Joined: Thu Jun 19, 2003 3:11 pm

Sun Jul 15, 2007 11:34 pm

bitvector wrote:
Even today, a lot of laptops with Core 2 Duos suffer from this, even some laptops with chipsets that are theoretically capable. In addition to the lazy BIOS writer explanation (which I'm sure happens sometimes), I saw another plausible hardware explanation: according to Ian Griffiths, earlier Pentium Ms supposedly only had 32 physical address lines to save power. Now, C2Ds have enough address lines, but perhaps we see some mobo manufacturers not connecting the top address lines > 32 to the northbridge for similar reasons (or cost). I don't know if his statements are accurate, but I do know earlier Pentium Ms didn't support PAE (because I've been bitten by it) so it seems at least feasible.
Yeah, laptops are an issue. Of course, it's only been very recently that it was possible to stuff more than 2GB into a laptop, so for a lot of older machines this is going to be a purely theoretical limitation. And yeah, I believe the Pentium M pre-65nm didn't support 36bit physical addresses; Yonah got it because they were going to need it for Sossaman (remember that? Did any actually ship?). I'm sure power was an issue, but they probably left it out of the design primarily because they didn't see the need to put it in: in ~2001, when it was being designed, what madman imagined we'd need to address >4GB with a laptop processor? It's not like there were SODIMMs that made that even theoretically possible. DEP (xD bit) is a closely related issue: you don't need the actual address lines, but you do need 64bit PTEs so that you have somewhere to store the bit, so they had to add that support back in (even if they didn't take it all the way to 36bit physical addresses). So one doesn't imply the other, but if you don't have xD you're definitely not going to have PAE (etc).
 
FubbHead
Grand Gerbil Poohbah
Posts: 3482
Joined: Mon Apr 08, 2002 9:04 pm
Location: Borås, Sweden

Mon Jul 16, 2007 2:58 am

just brew it! wrote:
FubbHead wrote:
Hasn't memory remapping (or mapping, I'm getting confused) been available since protected memory came along (i.e. since the 286, I believe), and isn't it pretty heavily depended upon these days anyway? AFAIK, memory allocated to any application appears like a large chunk of contiguous memory, but (usually) consists of several different fragments of the physical memory (courtesy of the memory controller)?

Separate issue.

What you're talking about is the mapping from virtual to physical addresses.

The issue bitvector is talking about is the ability to split the physical RAM addresses into two chunks, to avoid addressing conflicts between your RAM and your I/O devices. Virtual addresses don't even enter into it.

As far as I can tell (and I want to strongly emphasize that :D ), it isn't that much of a separate "issue", since it is this mapping that makes it all possible: being able to "map" the physical memory wherever you want, just as it can "pretend" you're writing to memory when you're writing to devices. That's sort of why it's basically a non-problem on 64-bit systems. The memory can be "moved" anywhere; the MMU just hands it to you on a silver platter. This splitting of memory (and BIOS support for it on consumer boards) has only become needed as an option on today's 64-bit systems, but I think it has been possible since the 386 days. No?

The real issue here, as I see it, is that the PCI devices can only communicate in 32 bits and need their mapped addresses to be within this range, while at the same time the 4GB of memory occupies the whole 32-bit address range as well. Which is why (not counting lacking support from manufacturers) it's a problem on 32-bit systems/software only.

It seems to me that this IOMMU is a good idea, which brings a bit more flexibility for the future. Or something.

But to reiterate, this layman isn't an engineer, and is just speaking out of his rather general but limited knowledge. :-)
Amiga 1200, 68020@28MHz, 4MB+2MB RAM, Conner 80MB harddrive
 
SnowboardingTobi
Gerbil Team Leader
Posts: 276
Joined: Wed May 21, 2003 10:56 am
Location: in your house, on the toilet, reading the newspaper
Contact:

Mon Jul 16, 2007 3:38 am

Just as an example...

I have 4GB of RAM on my Asus P5B Deluxe and running Vista 64bit Ultimate. The only way that the BIOS and Vista could see all 4GB was if I enabled the Memory Remap Feature in the BIOS. If it were disabled then both BIOS and Vista only saw 3+GB.
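[Editor's note: the arithmetic behind SnowboardingTobi's observation is simple to sketch. The 1GB hole size below is hypothetical, chosen for illustration; the actual reservation depends on the installed devices.]

```python
# Sketch of why the remap option matters with 4GB installed. The MMIO
# hole size is hypothetical; real systems reserve roughly 0.7-1GB just
# below the 4GB line for devices, BIOS, etc.
GB = 2**30
installed = 4 * GB
mmio_hole = 1 * GB               # hypothetical device/BIOS reservations below 4GB

# Remap disabled: RAM shadowed by the hole is simply unreachable
visible_no_remap = installed - mmio_hole

# Remap enabled: the chipset relocates the shadowed RAM above 4GB,
# where a 64-bit OS (or a PAE-aware server OS) can still reach it
ram_below_4gb = 4 * GB - mmio_hole
ram_above_4gb = installed - ram_below_4gb
visible_with_remap = ram_below_4gb + ram_above_4gb

print(visible_no_remap // GB, visible_with_remap // GB)  # 3 4
```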
 
bhtooefr
Lord High Gerbil
Posts: 8198
Joined: Mon Feb 16, 2004 11:20 am
Location: Newark, OH
Contact:

Mon Jul 16, 2007 6:34 am

You know, I had an idea here.

Obviously, you'd have to ditch all the legacy cruft, and you'd end up restricting it to a Vista Second Edition, if you will, but...

Rather than working AROUND the hardware (like we have with the 640KB-1MB space, and the 2GB-4GB space (with remapping on; usually 3 to 3.3GB without)), why not just shove the hardware at the beginning of the address space?

Ideal memory map (not using hex addresses here, deal with it):

0 GiB

Firmware: 0-31 MiB
Hardware: 32-2047 MiB

2 GiB

Beginning of RAM

Yes, I AM making the same mistake of defining my firmware size and all, but I think 32 megs will be PLENTY for a while. At least with my addressing scheme, you can hit the disk for more (although that triggers Old Compaq Syndrome: wipe the disk, and you have no BIOS) and not affect anything. ;) Firmware being there of course means that it'll start executing there when you clear the PC, and firmware can pass control as needed.
 
FubbHead
Grand Gerbil Poohbah
Posts: 3482
Joined: Mon Apr 08, 2002 9:04 pm
Location: Borås, Sweden

Mon Jul 16, 2007 7:34 am

Yeah, I was thinking the same: you would circumvent the "issue" of having to split the memory (even though I don't think that's so much of an issue, more like lazy manufacturers), and it would give a more elegant structure. But the core problem is still that there aren't enough bits. And I'm wondering if an IOMMU doesn't have an equal (if not bigger) chance of becoming an accepted complete replacement solution. If it has to change, let's make it proper. :-)

Although I don't know if IOMMU is dependent on HyperTransport.
 
barkotron
Gerbil In Training
Posts: 3
Joined: Mon Jul 16, 2007 7:49 am

Mon Jul 16, 2007 8:12 am

bhtooefr wrote:
Ideal memory map (not using hex addresses here, deal with it):

0 GiB

Firmware: 0-31 MiB
Hardware: 32-2047 MiB

2 GiB

Beginning of RAM

Yes, I AM making the same mistake of defining my firmware size and all, but I think 32 megs will be PLENTY for a while. At least with my addressing scheme, you can hit the disk for more (although that triggers Old Compaq Syndrome: wipe the disk, and you have no BIOS) and not affect anything. ;) Firmware being there of course means that it'll start executing there when you clear the PC, and firmware can pass control as needed.


Long time lurker, first time poster etc. Hi folks :).

Please forgive me if I've completely misunderstood any of the above, but surely this scheme is already not big enough for hardware addresses, and would be a pig (requiring constant intervention) to maintain?

As I understand it, the more RAM on a peripheral, the more address space it uses up - e.g. someone with two 8800GTXs will lose 1.5 GB of usable RAM in the non-server 32-bit Windows OS, regardless of the BIOS remapping (well, except the BIOS remapping would have them lose more, from the sounds of it).

If someone has two of the 1GB HD2900XT cards in Crossfire, that's 2 GB of address space right there. Where does their sound card RAM go? Their physics card etc etc? What happens when someone starts using 4-way crossfire? Graphics cards are probably going to be the big problem as they seem to increase the amount of RAM onboard every couple of generations at least.

This being the case, I don't see how it makes sense to carve out a defined size right at the beginning of the address space for this kind of thing. Every couple of GPU generations we'd have those willing to pay over the odds for the very high end cards demanding a differently-sized memory hole at the beginning of the address space. We'd need to update the BIOS/motherboard so it supported a larger remapped space. Does it not make more sense to once again stick all this DMA stuff right at the top of the 64-bit address space and work up to it, at the same time allowing the addressing to extend downwards as the hardware need increases? To my mind this would a) give us a lot more time before a rethink was needed (i.e. shove the hardware stuff at the top of the 128-bit space in 10-15 years' time ;)) and b) need less maintenance in the meantime.
 
Inkedsphynx
Gerbil Jedi
Posts: 1514
Joined: Fri Nov 19, 2004 9:57 am
Location: Seattle, WA

Mon Jul 16, 2007 8:58 am

This is probably the dumbest question ever, since it seems so painfully obvious to me, when most of the stuff in this thread is 4 leagues above my head, but...

Why don't they just take all the crap that has to be assigned that isn't physical memory and make it so that it stuffs all that after your physical memory, on a dynamic basis.

So if you put 2gb in, it all goes in the 3-4gb range, if you put 4gb in, it goes in the 5-6gb range, etc etc? Wouldn't that solve the problem without making it so confusing for us end-users?
 
FubbHead
Grand Gerbil Poohbah
Posts: 3482
Joined: Mon Apr 08, 2002 9:04 pm
Location: Borås, Sweden

Mon Jul 16, 2007 9:10 am

barkotron wrote:
As I understand it, the more RAM on a peripheral, the more address space it uses up - e.g. someone with two 8800GTXs will lose 1.5 GB of usable RAM in the non-server 32-bit Windows OS, regardless of the BIOS remapping (well, except the BIOS remapping would have them lose more, from the sounds of it).

If someone has two of the 1GB HD2900XT cards in Crossfire, that's 2 GB of address space right there. Where does their sound card RAM go? Their physics card etc etc? What happens when someone starts using 4-way crossfire? Graphics cards are probably going to be the big problem as they seem to increase the amount of RAM onboard every couple of generations at least.

Hmm... As with other devices, you're still using system memory as usual: you put data there and ask the card where to find it, and it copies it to itself, by itself (through DMA). Thus, the card's own memory doesn't have to be addressable directly.

When it comes to graphics adapters, I believe they've enjoyed an IOMMU since AGP, called the GART (I'm betting you've heard/read that before). It handles the exchange of data between the graphics memory and the system memory, but can do a lot more by itself than standard devices. And I'm guessing the Aperture Size is the maximum amount of RAM that the graphics IOMMU will be allowed to play with. But more importantly, it is allocated only when needed (like when playing a game). Or something like that... :D

Memory-mapped I/O, on the other hand, which is the problem here, is when the device's registers (which are how you control it and tell it what to do) are accessible to the system through regular memory addresses, i.e. it's like they're part of the memory, but writing to them actually sends instructions to the device. The CPU (and you) can instruct devices the same way you read from and write to memory.

Bottom line: you don't access a device's own memory directly. These register addresses don't take up nearly as much space as you would have to reserve to address all of the memory on e.g. graphics cards and such.

And again, read my disclaimer. :D
 
morphine
TR Staff
Posts: 11600
Joined: Fri Dec 27, 2002 8:51 pm
Location: Portugal (that's next to Spain)

Mon Jul 16, 2007 9:11 am

Inkedsphynx wrote:
So if you put 2gb in, it all goes in the 3-4gb range, if you put 4gb in, it goes in the 5-6gb range, etc etc? Wouldn't that solve the problem without making it so confusing for us end-users?

Because 4GB is the maximum addressable limit with 32 bits. That's why everything needs/needed to stay addressable under 4GB.
 
FubbHead
Grand Gerbil Poohbah
Posts: 3482
Joined: Mon Apr 08, 2002 9:04 pm
Location: Borås, Sweden

Mon Jul 16, 2007 9:17 am

Inkedsphynx wrote:
Why don't they just take all the crap that has to be assigned that isn't physical memory and make it so that it stuffs all that after your physical memory, on a dynamic basis.

So if you put 2gb in, it all goes in the 3-4gb range, if you put 4gb in, it goes in the 5-6gb range, etc etc? Wouldn't that solve the problem without making it so confusing for us end-users?

Well, that would be sort of how it works in a 64-bit system that runs in 64-bit mode, except that the physical memory's addresses have to be split at 3GB and continue at 4GB (I guess the PCI specs are written in stone like that).

The problem is that in a 32-bit system, if you check the maximum value you get from 32 bits, it is 2^32 = 4294967296 = 4GB. If the addresses assigned to all the different devices are above 4GB, they cannot be addressed by the 32-bit CPU.

edit: Oh well, a little late.. :D
 
Inkedsphynx
Gerbil Jedi
Posts: 1514
Joined: Fri Nov 19, 2004 9:57 am
Location: Seattle, WA

Mon Jul 16, 2007 9:30 am

Ah, thanks. That makes sense, with 32-bit hitting the ceiling at 4gb.

So then, why not make it so 64-bit CPU systems dynamically move everything? Including the PCI assignments? Or would that require the PCI stuff to be re-built to take into account the fact that it would have to dynamically search for whatever is stored in the 3-4gb range?
 
barkotron
Gerbil In Training
Posts: 3
Joined: Mon Jul 16, 2007 7:49 am

Mon Jul 16, 2007 9:58 am

FubbHead wrote:
barkotron wrote:
As I understand it, the more RAM on a peripheral, the more address space it uses up - e.g. someone with two 8800GTXs will lose 1.5 GB of usable RAM in the non-server 32-bit Windows OS, regardless of the BIOS remapping (well, except the BIOS remapping would have them lose more, from the sounds of it).

If someone has two of the 1GB HD2900XT cards in Crossfire, that's 2 GB of address space right there. Where does their sound card RAM go? Their physics card etc etc? What happens when someone starts using 4-way crossfire? Graphics cards are probably going to be the big problem as they seem to increase the amount of RAM onboard every couple of generations at least.

Hmm... As with other devices, you're still using system memory as usual: you put data there and ask the card where to find it, and it copies it to itself, by itself (through DMA). Thus, the card's own memory doesn't have to be addressable directly.

When it comes to graphics adapters, I believe they've enjoyed an IOMMU since AGP, called the GART (I'm betting you've heard/read that before). It handles the exchange of data between the graphics memory and the system memory, but can do a lot more by itself than standard devices. And I'm guessing the Aperture Size is the maximum amount of RAM that the graphics IOMMU will be allowed to play with. But more importantly, it is allocated only when needed (like when playing a game). Or something like that... :D

Memory-mapped I/O, on the other hand, which is the problem here, is when the device's registers (which are how you control it and tell it what to do) are accessible to the system through regular memory addresses, i.e. it's like they're part of the memory, but writing to them actually sends instructions to the device. The CPU (and you) can instruct devices the same way you read from and write to memory.

Bottom line: you don't access a device's own memory directly. These register addresses don't take up nearly as much space as you would have to reserve to address all of the memory on e.g. graphics cards and such.

And again, read my disclaimer. :D


I'm not sure that's right, to be honest. It wouldn't explain why the Dan's Data article someone mentioned earlier shows a screenshot of 256MB of address space being set aside for his 7800GT, for instance, which does seem to suggest that things on the PCI-E bus need address space for all of their RAM as well as just the control registers.

EDIT: Actually, found an HP document which states exactly that (direct link to PDF, so look out) h20331.www2.hp.com/Hpsub/downloads/3_plus_GB_RAM_w-Windows_08Ap07.pdf
 
FubbHead
Grand Gerbil Poohbah
Posts: 3482
Joined: Mon Apr 08, 2002 9:04 pm
Location: Borås, Sweden

Mon Jul 16, 2007 10:19 am

Well, the PCI bus address space (I guess you can say the bits available going to its registers) is also only 32 bits wide, so that's the main reason those addresses have to stay below 4GB to be read from/written to.

A 64-bit system is only a 64-bit system if it's actually running in 64-bit mode on a 64-bit OS. And if it is, there really isn't a problem apart from some manufacturers leaving out the Remap Memory option. The memory manager in the CPU can easily cope with having the physical memory split in two; it has to translate between virtual and physical addresses anyway.

I also don't know how aware a CPU really is of exactly what it is actually reading from and writing to.
 
FubbHead
Grand Gerbil Poohbah
Posts: 3482
Joined: Mon Apr 08, 2002 9:04 pm
Location: Borås, Sweden

Mon Jul 16, 2007 10:50 am

barkotron wrote:
I'm not sure that's right, to be honest. It wouldn't explain why the Dan's Data article someone mentioned earlier shows a screenshot of 256MB of address space being set aside for his 7800GT, for instance, which does seem to suggest that things on the PCI-E bus need address space for all of their RAM as well as just the control registers.

EDIT: Actually, found an HP document which states exactly that (direct link to PDF, so look out) h20331.www2.hp.com/Hpsub/downloads/3_plus_GB_RAM_w-Windows_08Ap07.pdf

I still have trouble believing you have direct access to the video memory through the PCI address range. Rather, I believe it's the aperture memory that is located there, which is system memory in addition to the video memory, which the device can access directly. With AGP you could change the aperture size, since the GART was part of the chipset. But AFAIK with PCIe it is handled by the device/driver, and I don't know how much it reserves.

But hey, I may very well be completely wrong. :lol:

edit: But yeah, either way it will eat up memory. Just not necessarily the amount of RAM each device has. E.g. graphics adapters with lots of video RAM locally are a lot less dependent on using system memory.
 
barkotron
Gerbil In Training
Posts: 3
Joined: Mon Jul 16, 2007 7:49 am

Mon Jul 16, 2007 10:54 am

No, fair enough, some kind of remapping appears to be a) necessary and b) not a strain on the system: all I was objecting to was the idea of setting aside a defined area at the beginning of the address space that would automatically be used.

EDIT: Argh. You posted while I was doing something else and hadn't read your interim post :). The HP article does seem to imply (well, explicitly state, nearly) that graphics cards are going to map all of their local memory into the PCI address space:

"The PCI address range is used to manage much of the computer’s components including the BIOS, IO cards, networking, PCI hubs, bus bridges, PCI-Express, and today’s high performance video/graphics cards (including their video memory)."

"3.0 to 3.4 GB is the typical range. The more IO cards and graphics cards with large amounts of video memory, the lower this limit will be."
Last edited by barkotron on Mon Jul 16, 2007 11:04 am, edited 2 times in total.
 
FubbHead
Grand Gerbil Poohbah
Posts: 3482
Joined: Mon Apr 08, 2002 9:04 pm
Location: Borås, Sweden

Mon Jul 16, 2007 11:01 am

Some day I think I am going to try to find out what exactly the PCI address range consists of. 0.7-1GB is a lot of registers. :-)
 
nishto75
Gerbil
Posts: 26
Joined: Thu Aug 18, 2005 6:04 pm
Location: Los Angeles

Mon Jul 16, 2007 12:09 pm

Sorry to drop in late like this, but I'm having this problem with a friends video editing rig we just put together.

XP Pro 32-bit with PAE enabled, 4GB RAM. When remapping is disabled in the BIOS, only 3GB are read on POST and Windows reports just under 3GB. With remapping enabled, the BIOS sees all 4GB, but Windows only reports 2GB. Now, maybe I'm misunderstanding, but I thought that with PAE enabled and remapping enabled in the BIOS, XP should be able to see the memory that was mapped above 4GB, or is anything that gets mapped above 4GB lost to Windows entirely?
 
derFunkenstein
Gerbil God
Posts: 25427
Joined: Fri Feb 21, 2003 9:13 pm
Location: Comin' to you directly from the Mothership

Mon Jul 16, 2007 1:10 pm

nishto75 wrote:
XP Pro 32-bit with PAE enabled, 4GB RAM. When remapping is disabled in the BIOS, only 3GB are read on POST and Windows reports just under 3GB. With remapping enabled, the BIOS sees all 4GB, but Windows only reports 2GB.

This is exactly what happened to me back on page 1 of this thread. My solution is going to be to try a 64-bit version of Windows. bitvector said (in much more detail than this) that the BIOS is probably mapping the additional memory beyond 2GB outside the 32-bit address range, so a 32-bit OS doesn't see it.
I do not understand what I do. For what I want to do I do not do, but what I hate I do.
Twittering away the day at @TVsBen
 
bhtooefr
Lord High Gerbil
Posts: 8198
Joined: Mon Feb 16, 2004 11:20 am
Location: Newark, OH
Contact:

Mon Jul 16, 2007 1:29 pm

I was under the impression from this thread that XP Pro didn't support PAE (meaning that it only supported a 32-bit address space, not the full 36-bit address space of a system that supports PAE.)

Try Windows Server 2003.
 
derFunkenstein
Gerbil God
Posts: 25427
Joined: Fri Feb 21, 2003 9:13 pm
Location: Comin' to you directly from the Mothership

Mon Jul 16, 2007 1:43 pm

bhtooefr wrote:
I was under the impression from this thread that XP Pro didn't support PAE (meaning that it only supported a 32-bit address space, not the full 36-bit address space of a system that supports PAE.)

Try Windows Server 2003.

http://www.microsoft.com/whdc/system/pl ... AEmem.mspx

Support for PAE is provided under Windows 2000 and 32-bit versions of Windows XP and Windows Server 2003.
 
nishto75
Gerbil
Posts: 26
Joined: Thu Aug 18, 2005 6:04 pm
Location: Los Angeles

Mon Jul 16, 2007 1:45 pm

derFunkenstein,

Yes, it was your post that made me jump in here. In fact, my friend is using the same mobo as you.

I guess I'm still not understanding. I thought that the whole point of PAE was to allow a 32-bit OS to see the memory that is mapped beyond the 32-bit address range.

Looking at this MS page confuses me:
http://www.microsoft.com/whdc/system/pl ... AEdrv.mspx
"* Total physical address space is limited to 4 GB on these versions of Windows."
Does this mean that even though you can enable PAE to support more than 4GB of physical memory, you'll not be able to use it because the address space is limited to 4GB?
 
bitvector
Grand Gerbil Poohbah
Topic Author
Posts: 3293
Joined: Wed Jun 22, 2005 4:39 pm
Location: San Francisco, CA

Mon Jul 16, 2007 1:53 pm

nishto75 wrote:
Does this mean that even though you can enable PAE to support more than 4GB of physical memory, you'll not be able to use it because the address space is limited to 4GB?

Well, like UberGerbil said earlier... Microsoft seems to have made enabling PAE actually do nothing on most of their consumer OSs, like XP SP2 and Vista. So it's turned "on" and it claims to be PAE, but it still clips everything to 32-bit addresses to prevent problems with drivers and such. So you get zero benefit and still lose any memory mapped above the 4GB line.
 
derFunkenstein
Gerbil God
Posts: 25427
Joined: Fri Feb 21, 2003 9:13 pm
Location: Comin' to you directly from the Mothership

Mon Jul 16, 2007 1:58 pm

nishto75 wrote:
Does this mean that even though you can enable PAE to support more than 4GB of physical memory, you'll not be able to use it because the address space is limited to 4GB?

That's what I'm starting to think. Which is why I broke down and ordered the 64-bit media for Vista, so I can answer it once and for all. All of my hardware has signed 64-bit Windows drivers, so unless I'm using some 16-bit app I don't know about, it shouldn't hurt me at all.
 
bhtooefr
Lord High Gerbil
Posts: 8198
Joined: Mon Feb 16, 2004 11:20 am
Location: Newark, OH
Contact:

Mon Jul 16, 2007 2:01 pm

Now I'm confused as hell.

So PAE turns on... but the address space is still confined to 32-bit, meaning that PAE does nothing at all?

Can that really be called PAE support?
 
bitvector
Grand Gerbil Poohbah
Topic Author
Posts: 3293
Joined: Wed Jun 22, 2005 4:39 pm
Location: San Francisco, CA

Mon Jul 16, 2007 2:11 pm

According to MS:
"To constrain compatibility issues, Windows XP Service Pack 2 includes hardware abstraction layer (HAL) changes that mimic the 32-bit HAL DMA behavior. The altered HAL grants unlimited map registers when the system is running in PAE mode. In addition, the kernel memory manager ignores any physical address above 4 GB. Any system RAM beyond the 4 GB barrier would be made unaddressable by Windows and be unusable in the system."

So yes, PAE is on, but you still can't access physical addresses above 4GB. Why is it like this? For compatibility reasons. Why is PAE on then? For NX/XD support (DEP).
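[Editor's note: a sketch of what that quoted policy amounts to. This is a hypothetical helper for illustration, not actual kernel code: given the physical RAM ranges the firmware reports, a clipped kernel simply drops everything above the 4GB line.]

```python
# Hypothetical illustration of the clipping bitvector describes: PAE
# is enabled (so 36-bit physical addresses exist in the page tables),
# but the consumer-Windows memory manager ignores RAM above 4GB.
LIMIT_32 = 2**32

def usable_ranges(ranges, clip_at=LIMIT_32):
    """Given (start, length) physical RAM ranges from the firmware,
    return what a kernel that clips at `clip_at` would actually use."""
    out = []
    for start, length in ranges:
        if start >= clip_at:
            continue                       # entirely above the clip: lost
        end = min(start + length, clip_at)
        out.append((start, end - start))
    return out

GB = 2**30
# 4GB installed, 1GB remapped above the 4GB line by the BIOS
firmware_map = [(0, 3 * GB), (4 * GB, 1 * GB)]
print(usable_ranges(firmware_map))                  # clipped: only 3GB usable
print(usable_ranges(firmware_map, clip_at=2**36))   # an unclipped PAE kernel sees it all
```

This matches the behavior reported earlier in the thread: with remapping on, the BIOS sees all 4GB, but a clipped 32-bit Windows loses whatever was moved above the line.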
 
nishto75
Gerbil
Posts: 26
Joined: Thu Aug 18, 2005 6:04 pm
Location: Los Angeles

Mon Jul 16, 2007 2:30 pm

bitvector wrote:
According to MS:
"To constrain compatibility issues, Windows XP Service Pack 2 includes hardware abstraction layer (HAL) changes that mimic the 32-bit HAL DMA behavior. The altered HAL grants unlimited map registers when the system is running in PAE mode. In addition, the kernel memory manager ignores any physical address above 4 GB. Any system RAM beyond the 4 GB barrier would be made unaddressable by Windows and be unusable in the system."

So yes, PAE is on, but you still can't access physical addresses above 4GB. Why is it like this? For compatibility reasons. Why is PAE on then? For NX/XD support (DEP).


Crap. OK, well that's good to know.

Great thread, by the way. One of the most enlightening I've seen in quite a while.
