Tech Support Forum

Registered · 173 Posts · Discussion Starter · #1
I've heard two different explanations of the page file and hope someone can clarify. The first says that when physical memory is used up, pages of information start getting transferred to the page file. The second says that a full copy of what's stored in RAM is maintained in the page file at all times, so nothing is frantically transferred out of RAM when it reaches its limit, because the information is already there; it is simply dropped from RAM. Which is correct? Thanks everyone!
 

TSF Team, Emeritus · 2,619 Posts
Both are partly true.

The pagefile is used to store infrequently accessed data, thus leaving more RAM for more important uses. This is an ongoing process; the system does not wait until RAM is full before beginning. The time required for copying data to disk is essentially free, as other activities can take place while it happens. Note that this is for infrequently accessed data only; frequently accessed data will probably never be copied to the pagefile. Also note that the data is copied to the pagefile while also remaining in RAM. If that RAM is later needed for more important purposes, it can be reused immediately, since the data is already safely stored in the pagefile. If data thus displaced from RAM is needed again, it will be loaded from the pagefile.
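The behavior described above can be sketched as a toy model. This is not Windows code - the class and its policy are purely illustrative - but it shows the key idea: stale pages are copied to the pagefile ahead of time while staying in RAM, so when RAM is needed they can be dropped instantly without a disk write.

```python
# Toy model (not Windows internals) of proactive lazy paging: pages that
# haven't been touched recently are copied to a "pagefile" in the background,
# so later eviction is free -- the RAM copy is simply dropped.

class ToyMemoryManager:
    def __init__(self, ram_slots):
        self.ram_slots = ram_slots
        self.ram = {}        # page_id -> (data, last_access_tick)
        self.pagefile = {}   # page_id -> data (lazily written copies)
        self.tick = 0

    def touch(self, page_id, data=None):
        """Access a page, reloading it from the pagefile on a miss."""
        self.tick += 1
        if page_id in self.ram:
            d = self.ram[page_id][0] if data is None else data
        elif page_id in self.pagefile:
            d = self.pagefile[page_id]   # must be read back from "disk"
            self._make_room()
        else:
            d = data
            self._make_room()
        self.ram[page_id] = (d, self.tick)

    def background_copy(self, idle_threshold):
        """Lazily copy stale pages to the pagefile; they also stay in RAM."""
        for pid, (d, last) in self.ram.items():
            if self.tick - last >= idle_threshold:
                self.pagefile[pid] = d

    def _make_room(self):
        if len(self.ram) >= self.ram_slots:
            # Prefer evicting a page that already has a pagefile copy:
            # it can be dropped with no write at all.
            backed = [p for p in self.ram if p in self.pagefile]
            victim = backed[0] if backed else min(
                self.ram, key=lambda p: self.ram[p][1])
            if victim not in self.pagefile:
                self.pagefile[victim] = self.ram[victim][0]  # forced write
            del self.ram[victim]
```

For example, with two RAM slots, touching pages "a" and "b", running a background copy, and then touching "c" evicts "a" at no cost, because its pagefile copy already exists.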

The pagefile isn't normally used to store program code. There is no need, as there is already a perfectly good copy in the executable files the code was loaded from.

The guiding principle behind all of this is to keep frequently accessed data and code in RAM, with the remainder in the pagefile or the original executable files. The memory manager also strives for full memory utilization at all times: free memory is wasted memory.
 

Administrator, Manager, Microsoft Support, MVP · 34,403 Posts
If the system needs more physical memory (RAM) than is available, it will use virtual memory (the page file). Windows also tracks the mapping between virtual addresses and their locations in RAM or the page file. Windows 7 allocates a base page file size of RAM + 300 MB.

RAM, Virtual Memory, Pagefile and all that stuff

Virtual Memory - Mark Russinovich - TechNet Blogs

Physical Memory - Mark Russinovich - TechNet Blogs

To check your system's virtual memory usage, run one of the following WMI apps -

HTML output - IE will open w/ output - WMI - "Recoveros" and Page File Settings (HTML)

Text file output - Notepad will open w/ output - WMI - "Recoveros" and Page File Settings (TEXT)

Regards. . .

jcgriff2

 

Registered · 173 Posts · Discussion Starter · #5
One more thing: for those who run without a page file, what happens when their RAM becomes full? Does the memory manager just dump the less-used info anyway? Thanks!
 

TSF Team, Emeritus · 2,619 Posts
Running without a pagefile is usually a bad idea and will impair performance. It means that all modified data must remain in RAM at all times, even if it has not been accessed in a long time and may never be needed again. Many people think this is a good thing, but it isn't. When the memory manager needs memory and not enough is free (a common situation, even with 4 GB of RAM), it must reassign memory that is in use. It cannot reuse RAM holding rarely used data, because without a pagefile there is nowhere else to store it; instead it must take RAM holding more frequently accessed code, which can always be reused because a copy already exists in the original executable files. This usually leads to more paging and worse performance.

Remember that "Available" memory in XP and the upper portion of the memory graph in Vista and Windows 7 is mostly in use - not free. It contains code or data that is next in line to be reused when necessary.
 

Visiting BSOD Expert, Microsoft Support Team · 781 Posts
One more thing: for those who run without a page file, what happens when their RAM becomes full? Does the memory manager just dump the less-used info anyway? Thanks!
That depends on the system load. In general, the NT memory manager keeps track of memory pages, and when an application hasn't "touched" a page allocated to it (one not marked non-pageable) for a certain amount of time, that page goes onto what is called the standby list (modified private pages go to a separate list, the modified page list). The standby list holds pages that are not in active use and *can* be repurposed as necessary; however, they are left in RAM in case the process requests the data later, and they stay there as long as system load doesn't dictate that those pages be re-allocated to other processes.

This is also where "soft" page faults come from: a soft page fault occurs when a process requests one of its pages that was moved to the standby list, and it is faulted back into the process's working set directly from RAM. Compare this with a "hard" fault, where the page must be read back into the working set from disk, i.e. from the paging file.

Note that if you run with *no* paging file, you run the risk of a bugcheck: without a registry change, Windows will attempt to page out portions of paged pool and the system executive, and when this fails, you bugcheck. For this reason, knowing your system load and doing performance monitoring and load-pattern analysis is pretty important if you're going to run without a paging file.
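The soft-vs-hard fault distinction described above can be illustrated with a small sketch. This is a toy, not Windows internals code - the names `working_set`, `standby`, and `pagefile` are just illustrative dictionaries - but the transitions match the description: a trimmed page on the standby list is still in RAM (cheap soft fault), while a repurposed page must come back from disk (hard fault).

```python
# Toy illustration of page-fault types: a page trimmed from a working set
# goes to a standby list but stays in RAM; if its RAM is repurposed, the
# contents survive only in the pagefile and must be read back from disk.

working_set = {"p1": "data1", "p2": "data2"}
standby = {}       # trimmed pages, still resident in RAM
pagefile = {}      # pages whose RAM frame was given to someone else

def trim(page):
    """Remove a page from the working set onto the standby list."""
    standby[page] = working_set.pop(page)

def repurpose(page):
    """Reuse a standby page's RAM; its contents now live only on 'disk'."""
    pagefile[page] = standby.pop(page)

def access(page):
    """Touch a page and report what kind of access it was."""
    if page in working_set:
        return "hit"
    if page in standby:                      # soft fault: no disk I/O
        working_set[page] = standby.pop(page)
        return "soft fault"
    working_set[page] = pagefile.pop(page)   # hard fault: read from disk
    return "hard fault"
```

Trimming "p1" and touching it again yields a soft fault; trimming it, repurposing its frame, and touching it again yields a hard fault.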

Back on topic: if the system gets really busy and the need for RAM increases to the point where the memory manager no longer has many pages on the free list or the zero page list (read Mark Russinovich's Windows Internals book, or chat further with someone who is MCTS Windows Internals certified, if you want to go deeper into what these are - that's more than can be covered on a simple forum :wink:), it starts to cannibalize pages from the standby list: it pages out the contents of a standby page and then allocates that page to the requestor. If even the standby list is exhausted, Windows can start taking pages from the modified page list, but at that point you most likely have other performance problems and are very likely seeing horrible performance, aka "running the system into the ground". It's pretty obvious when this starts to happen.

As to how the paging file is used by Windows: any allocation that is not memory mapped and not marked as non-pageable does have a "backing page" reserved for it in the paging file. However, data is not written to the paging file at that time, but lazily, and for the most part only when really required. There's a whole set of "onion layers" you can peel back by reading Windows Internals (4th or 5th edition) if you're really interested in how it works, but the above should be sufficient to answer your questions. You can also see why adding more RAM, especially on an x64 system, is so beneficial to Windows performance: the larger the cache in RAM is allowed to be, the more pages stay in RAM and off the paging file. There will still be paging, yes, but it should be fairly infrequent (and given sufficient RAM you can even shrink your paging file, or not run one at all - now you know some of the reasons why this is possible).
 

TSF Team, Emeritus · 2,619 Posts
Disabling the pagefile is generally a bad idea because it restricts the memory manager's options and forces it to make suboptimal decisions about what should be paged out. Whether this makes a noticeable difference in performance depends on how much RAM you have and on the system workload. With a truly large amount of RAM (8 GB or more on a 64-bit system) and a light workload it would probably make no real difference, and in some very unusual situations it may even improve performance.

My standard recommendation regarding pagefile configuration: unless you have a very specific need, and you understand what you are doing, leave pagefile configuration at the default settings. This will usually be optimal, or close enough to make no difference.

Be aware that there is an enormous amount of confusion on the Internet concerning the pagefile.
 

Visiting BSOD Expert, Microsoft Support Team · 781 Posts
I would agree - however, setting it to something small (if you know what you're doing) is fine. With 8GB or more of RAM, a paging file of 1GB or less is fine if you've tested your system under normal load with perfmon and know what you're looking for. I personally run with a 512MB paging file, as this is sufficient (and I set DisablePagingExecutive) on my 8 and 16GB systems. Knowing that the memory manager isn't going to have to page means I don't have to worry about it, but I'm not running Photoshop or CAD programs either, just regular programs and virtual machines (which you don't want to page anyway).
 

TSF Team, Emeritus · 2,619 Posts
There is one more consequence of disabling the pagefile that should be mentioned: it causes a major reduction in the commit limit, usually by 50% or more. This is rather difficult to explain, but the practical results of exceeding that limit can be severe - often an application failure with no opportunity to save your work, or even a BSOD. With adequate RAM this will not be an issue.
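The commit-limit arithmetic behind this point can be sketched in a few lines. The figures are simplified examples (real Windows accounting has more terms), but the core relation - commit limit is roughly RAM plus total pagefile size, and commit fails outright at the limit - matches the discussion above.

```python
# Simplified sketch of commit-limit arithmetic (all figures in GB):
# disabling the pagefile shrinks the commit limit to roughly RAM alone.

def commit_limit_gb(ram_gb: float, pagefile_gb: float) -> float:
    """Approximate commit limit: physical RAM plus total pagefile size."""
    return ram_gb + pagefile_gb

def try_commit(requested_gb: float, committed_gb: float,
               limit_gb: float) -> bool:
    """A commit request fails outright once the limit would be exceeded --
    there is no overflow beyond it, which is why apps fail or a BSOD occurs."""
    return committed_gb + requested_gb <= limit_gb

with_pf = commit_limit_gb(4, 6)     # 4 GB RAM + 6 GB pagefile = 10 GB limit
without_pf = commit_limit_gb(4, 0)  # pagefile disabled: limit = 4 GB
```

With 2 GB already committed, a further 3 GB request succeeds against the 10 GB limit but fails against the 4 GB limit - a 60% reduction in headroom from disabling the pagefile, consistent with the "usually 50% or more" figure above.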

I have no problem with people who reduce the pagefile size or even disable it if they understand what they are doing. Most people don't even know what proper testing means, let alone how to do it. My issue is with people who blindly follow Internet "tweaking guides" with no understanding of the consequences.
 

Visiting BSOD Expert, Microsoft Support Team · 781 Posts
Correct, because the paging file is considered part of your commit limit - hence what I said before, if you hit a situation where you *need* to page during load and you have no paging file, you bugcheck. I mentioned that already:
Note at this point, if you're running with *no* paging file, you run the risk of a bugcheck, because (without a registry change) Windows will attempt to page out portions of paged pool and the system executive - when this fails, you bugcheck.
As someone who does this for a living and has the MCTS cert, I can say with some authority that it is safe to disable the paging file (or reduce it to a small size), but only if you've done your homework beforehand and verified you have enough RAM for the system load. I would never recommend it in a production server environment, but on client desktops this can be done in production. I still recommend 512MB at the low end, "just in case", but in general you have to set DisablePagingExecutive on x64 systems for xperf and xbootmgr to run, so this is part of a standard build anyway.
 

Visiting BSOD Expert, Microsoft Support Team · 781 Posts
I have no problem with people who reduce the pagefile size or even disable it if they understand what they are doing. Most people don't even know what proper testing means, let alone how to do it. My issue is with people who blindly follow Internet "tweaking guides" with no understanding of the consequences.
Oh, and I agree with you on that last bit. People "tweaking" Windows 7 tend to do much more harm than good in the long run. Shotgun -> foot, trigger pull most often.
 

Registered · 173 Posts · Discussion Starter · #17
"A frequently asked question is how big should I make the pagefile? There is no single answer to this question because it depends on how much RAM is installed and how much virtual memory the workload requires. If there is no other information available, the typical recommendation is 1.5 times the amount of RAM in the computer. On server systems, a common objective is to have enough RAM so that there is never a shortage and so that the pagefile is essentially not used. On these systems, having a very large pagefile may serve no useful purpose. On the other hand, disk space is usually plentiful, so having a large pagefile (for example, 1.5 times the installed RAM) does not cause a problem and eliminates the concern about how large to make it." - From Microsoft Support
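The sizing rules mentioned in this thread reduce to simple arithmetic: the classic 1.5x-RAM rule of thumb from the quote above, and the Windows 7 default base size of RAM + 300 MB mentioned earlier. The example machine size is just an illustration.

```python
# Pagefile sizing rules from this thread, as arithmetic (all figures in MB).

def rule_of_thumb_mb(ram_mb: int) -> int:
    """Classic recommendation: 1.5 times installed RAM."""
    return int(ram_mb * 1.5)

def win7_base_mb(ram_mb: int) -> int:
    """Windows 7 default base (initial) pagefile size: RAM + 300 MB."""
    return ram_mb + 300

ram = 4 * 1024                   # example: a 4 GB machine
print(rule_of_thumb_mb(ram))     # 6144
print(win7_base_mb(ram))         # 4396
```

So a 4 GB machine gets a 6 GB pagefile under the rule of thumb, but only a ~4.3 GB base size under the Windows 7 default.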

Link: RAM, Virtual Memory, Pagefile and all that stuff
 

Moderator - Microsoft Support · 7,755 Posts
Hi - we could be here for a week; to most people, what has been posted is as clear as mud. This is not a criticism, just an attempt to bring it back to where most users should be. For a home computer you should leave the pagefile at its default setting. Moving the pagefile, or setting it to 0, will create problems for home users: Windows crash dumps require a pagefile (this can be worked around through registry mods). My point is that with Vista and Seven, most of us should leave the system to manage this.
 

Visiting BSOD Expert, Microsoft Support Team · 781 Posts
I'm capable of running a live debugger, and as such I don't need the .dmp - I have a crash cart available. Also, most kernel dumps (even on x64) tend to fit fine into dump files of about 400 MB.
 