by: Helge, published: Jan 9, 2008, updated: Mar 13, 2021

Windows x64 Part 1: Virtual Memory

I will start the new year with a small series on Windows x64 in which I will explain why 64-bit computing is not only necessary but inevitable. I will then go on to explain in detail where Windows x64 differs from the 32-bit versions and what that means for all those who are responsible for the design, operation, and support of 64-bit systems. All the while I will be focusing on terminal servers, but most facts and conclusions are valid for other system types, too.

It is not by chance that Microsoft announced that Windows Server 2008 would be their last server operating system to also be released in a 32-bit version. The 32-bit architecture has, like any other architecture, inherent limitations. That by itself would not be so bad. The fact that we are reaching these boundaries today is.

In order to understand why, we need to take a look at the prevailing 32-bit x86 architecture.

Where are we today?

It is called “32-bit” because the processor registers that are used to store memory addresses are 32 bits wide. Since 32 bits can represent 4,294,967,296 different values, a 32-bit processor can only address that many memory cells. And yes, 4,294,967,296 bytes are of course equivalent to 4 GB. Thus x86 processors can only address 4 GB of RAM – more only with tricks like PAE.
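If you want to check the arithmetic yourself, a small C sketch like the following will do; on a 32-bit build it also reports a pointer size of 4 bytes.

    #include <stdio.h>
    #include <stdint.h>

    int main(void)
    {
        /* A 32-bit address register can hold 2^32 different values,
           i.e. it can distinguish 2^32 byte addresses. */
        uint64_t addresses = 1ULL << 32;                 /* 4,294,967,296 */

        printf("Distinct addresses: %llu\n", (unsigned long long)addresses);
        printf("Addressable memory: %llu GB\n",
               (unsigned long long)(addresses / (1024 * 1024 * 1024)));

        /* On a 32-bit build, pointers themselves are 4 bytes wide. */
        printf("Pointer size      : %u bytes\n", (unsigned)sizeof(void *));
        return 0;
    }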

Unfortunately, around 20% of those 4 GB cannot even be accessed on most systems. Ask those who plugged 4 GB of RAM into their computers and were quite disappointed when only around 3.2 GB showed up as physical RAM in Task Manager. The upper 800 MB are usually masked by memory-mapped devices such as PCI cards. In general, only server computers are equipped with both a BIOS and device drivers that support remapping this memory to an area above the 4 GB boundary, making all physical RAM accessible to the operating system. You might need to enable PAE for this to work, though.

Virtualization? Old hat!

So much for physical RAM. Let us move on to the much more interesting realm of virtual memory. On x86 systems, every process has its own separate virtual address space of 4 GB. This area is, however, divided into two equal parts: 2 GB are reserved for the kernel; only the remaining 2 GB can be used by the process. It is crucial to note that, while each process has its own private 2 GB of virtual address space, the kernel’s 2 GB exist only once per system. Think of the kernel as just another program with 2 GB of usable address space.
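You can see the boundary for yourself: a small sketch using the documented GetSystemInfo API prints the address range available to user-mode code, and on a default x86 system the highest address reported should lie just below the 2 GB mark.

    #include <windows.h>
    #include <stdio.h>

    int main(void)
    {
        SYSTEM_INFO si;
        GetSystemInfo(&si);

        /* Everything between these two addresses is available to the
           process; everything above belongs to the kernel. */
        printf("Lowest  user-mode address: %p\n", si.lpMinimumApplicationAddress);
        printf("Highest user-mode address: %p\n", si.lpMaximumApplicationAddress);
        return 0;
    }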

I have not yet explained what virtual memory actually is. Well, here it comes. As I told you, every process has its own private 2 GB of virtual memory to be used in whatever way pleases the programmer. It would, of course, be a great waste of resources to actually back every bit requested by an application with physical RAM. Furthermore, memory management is much easier if every program has its own private address space and does not need to worry about which memory segment might be used by which process. But the real reason for virtual memory is security – only by guaranteeing that processes cannot directly write to arbitrary physical memory addresses, but only to the segments allocated to them, can multi-tasking systems be reliable. Errors in one application do not affect other applications, because each application only has access to its own virtual memory. This is the oldest virtualization technology I know of in the x86 world.
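To make the idea of a private address space a bit more tangible, here is a small sketch using the documented VirtualAlloc and VirtualFree APIs. The address it prints is only meaningful inside this one process; another process might be handed the very same numeric address for completely different data.

    #include <windows.h>
    #include <stdio.h>
    #include <string.h>

    int main(void)
    {
        /* Reserve and commit 64 KB in this process's private virtual
           address space. */
        SIZE_T size = 64 * 1024;
        void *p = VirtualAlloc(NULL, size, MEM_RESERVE | MEM_COMMIT,
                               PAGE_READWRITE);
        if (p == NULL) {
            fprintf(stderr, "VirtualAlloc failed: %lu\n", GetLastError());
            return 1;
        }

        printf("Committed %lu bytes at virtual address %p\n",
               (unsigned long)size, p);

        /* Touching the pages is what actually consumes physical RAM. */
        memset(p, 0xAB, size);

        VirtualFree(p, 0, MEM_RELEASE);
        return 0;
    }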

Virtual memory works by mapping small chunks of the virtual address space, called pages and typically 4 KB in size, onto physical memory (or onto the paging file). Large amounts of physical memory can be saved if several processes need the same data in physical memory for read-only access. This is the case with system DLLs that are used by many, if not all, running processes. By mapping the same physical page into multiple processes’ virtual address spaces, every process gets read-only access to the DLL’s code. But what happens if one process wants to modify a page shared with other processes? In such a case the operating system uses a technique called “copy on write”. When one process tries to write to a shared page, Windows copies that page and makes the copy exclusively available to the writing process. All other processes still share the original read-only version of the memory page.
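Copy on write can even be observed directly with the documented file-mapping APIs. The following sketch creates one page of pagefile-backed shared memory, maps it once as an ordinary shared view and once as a copy-on-write view, and shows that writing through the copy-on-write view leaves the shared page untouched.

    #include <windows.h>
    #include <stdio.h>
    #include <string.h>

    int main(void)
    {
        /* One page of pagefile-backed shared memory. */
        HANDLE section = CreateFileMapping(INVALID_HANDLE_VALUE, NULL,
                                           PAGE_READWRITE, 0, 4096, NULL);
        if (section == NULL) return 1;

        /* An ordinary shared read/write view ... */
        char *shared = MapViewOfFile(section, FILE_MAP_WRITE, 0, 0, 0);
        /* ... and a copy-on-write view of the same page. */
        char *cow = MapViewOfFile(section, FILE_MAP_COPY, 0, 0, 0);
        if (shared == NULL || cow == NULL) return 1;

        strcpy(shared, "original");

        /* This write triggers the copy: the writer gets its own private
           page, while the shared page stays as it was. */
        strcpy(cow, "modified");

        printf("shared view: %s\n", shared);   /* still "original" */
        printf("cow view   : %s\n", cow);      /* now "modified"   */

        UnmapViewOfFile(cow);
        UnmapViewOfFile(shared);
        CloseHandle(section);
        return 0;
    }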

Another important benefit of virtual memory is the ability to move pages from physical memory to a special area on disk, the paging file. And back, of course. By moving rarely used pages to the paging file, the operating system tries to make the most of RAM, a scarce resource. If an application needs to access a memory area that has been paged out, the corresponding data is read from disk and stored in a free page of physical memory, which is then mapped to the original address in the application’s virtual address space. All that takes time, of course. Reading from a hard disk is orders of magnitude slower than reading from RAM. That is the reason why computers slow down so much if they are not equipped with enough RAM to satisfy the needs of the running applications.
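How much headroom the paging file adds can be read from the documented GlobalMemoryStatusEx API; the commit limit it reports is roughly physical RAM plus page-file space. A small sketch:

    #include <windows.h>
    #include <stdio.h>

    int main(void)
    {
        MEMORYSTATUSEX ms = { sizeof(ms) };   /* dwLength must be set */
        if (!GlobalMemoryStatusEx(&ms)) return 1;

        printf("Physical RAM total: %llu MB\n", ms.ullTotalPhys / (1024 * 1024));
        printf("Physical RAM free : %llu MB\n", ms.ullAvailPhys / (1024 * 1024));
        /* The commit limit includes the paging file(s) on disk. */
        printf("Commit limit      : %llu MB\n", ms.ullTotalPageFile / (1024 * 1024));
        printf("Commit available  : %llu MB\n", ms.ullAvailPageFile / (1024 * 1024));
        printf("Memory load       : %lu %%\n", ms.dwMemoryLoad);
        return 0;
    }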

But even if enough free RAM is available, a system may still suffer from memory exhaustion. To understand why, we need to dig into the kernel. In the next article in this mini-series I will explain how the kernel uses memory and why this easily leads to memory shortages, especially on terminal servers.
