Search Results

Search found 99 results on 4 pages for 'pagefile'.

Page 2 of 4

  • Does having your page file on decrease the life expectancy of your hard drive?

    - by user695874
    If I have my page file turned on in Windows, as opposed to having it turned off, would having the page file turned on decrease the life expectancy of my hard drive? If so, roughly how much would the life decrease with regular use (4 hours a day)? I'm thinking it would decrease some, just because there would be more writing to the hard drive, but I wasn't sure if it would be too negligible to even matter.

    Read the article

  • Memory mapped files and "soft" page faults. Unavoidable?

    - by Robert Oschler
    I have two applications (processes) running under Windows XP that share data via a memory mapped file. Despite all my efforts to eliminate per-iteration memory allocations, I still get about 10 soft page faults per data transfer. I've tried every flag there is in CreateFileMapping() and MapViewOfFile() and it still happens. I'm beginning to wonder if it's just the way memory mapped files work. If anyone out there knows the O/S implementation details behind memory mapped files, I would appreciate comments on the following theory: if two processes share a memory mapped file and one process writes to it while another reads it, then the O/S marks the pages written to as invalid. When the other process goes to read the memory areas that now belong to invalidated pages, this causes a soft page fault (by design) and the O/S knows to reload the invalidated page. The number of soft page faults would therefore be directly proportional to the size of the data write. My experiments seem to bear out this theory. When I share data I write one contiguous block of data; in other words, the entire shared memory area is overwritten each time. If I make the block bigger, the number of soft page faults goes up correspondingly. So, if my theory is true, there is nothing I can do to eliminate the soft page faults short of not using memory mapped files, because that is how they work (using soft page faults to maintain page consistency). What is ironic is that I chose a memory mapped file instead of a TCP socket connection because I thought it would be more efficient. If soft page faults are intrinsically harmless, please say so; I've heard that at some point, if the number is excessive, the system's performance can be marred. If soft page faults are not significantly harmful, I'd like to hear any guidelines as to what number per second is "excessive". Thanks. (A sketch of the sharing pattern follows this entry.)

    Read the article
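    A minimal sketch of the cross-process sharing pattern described above, using the .NET MemoryMappedFile wrapper (which sits on the same CreateFileMapping/MapViewOfFile calls); the mapping name and values are assumptions for illustration:

        # Writer side: create a small named mapping and dirty one page.
        Add-Type -AssemblyName System.Core
        $mmf  = [System.IO.MemoryMappedFiles.MemoryMappedFile]::CreateOrOpen('Local\PagefileDemo', 4096)
        $view = $mmf.CreateViewAccessor()
        $view.Write(0, [int]12345)

        # Reader side (run in a second PowerShell process): the first access to a
        # page the writer dirtied is satisfied by a soft fault, which is what the
        # asker's counters are showing.
        $mmf2  = [System.IO.MemoryMappedFiles.MemoryMappedFile]::OpenExisting('Local\PagefileDemo')
        $view2 = $mmf2.CreateViewAccessor()
        $view2.ReadInt32(0)    # -> 12345
        $view2.Dispose(); $mmf2.Dispose(); $view.Dispose(); $mmf.Dispose()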

  • How can I get read-ahead bytes?

    - by Bruno Martinez
    Operating systems read more from disk than what a program actually requests, because a program is likely to need nearby information in the future. In my application, when I fetch an item from disk, I would like to show an interval of information around the element. There's a trade-off between how much information I request and show, and speed. However, since the OS already reads more than what I requested, accessing these bytes already in memory is free. What API can I use to find out what's in the OS caches? Alternatively, I could use memory mapped files. In that case, the problem reduces to finding out whether a page is swapped to disk or not. Can this be done in any common OS? (A sketch for Windows follows this entry.)

    Read the article
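    On Windows, the documented QueryWorkingSetEx API reports, among other attributes, whether a given virtual page is currently resident. A hedged sketch via a small P/Invoke shim (the probed buffer is illustrative only; if you paste this, the here-string's closing "@ must sit at column 0, the indentation here is for readability):

        # Ask the kernel whether a page of this process is resident.
        Add-Type -TypeDefinition @"
        using System;
        using System.Runtime.InteropServices;

        [StructLayout(LayoutKind.Sequential)]
        public struct WsExInfo
        {
            public IntPtr VirtualAddress;
            public IntPtr VirtualAttributes;   // bit 0 = Valid (page is resident)
        }

        public static class WsQuery
        {
            [DllImport("psapi.dll", SetLastError = true)]
            public static extern bool QueryWorkingSetEx(IntPtr hProcess, ref WsExInfo info, int cb);

            public static bool IsResident(IntPtr address)
            {
                var info = new WsExInfo { VirtualAddress = address };
                QueryWorkingSetEx(System.Diagnostics.Process.GetCurrentProcess().Handle,
                                  ref info, Marshal.SizeOf(typeof(WsExInfo)));
                return ((long)info.VirtualAttributes & 1) == 1;   // Valid bit
            }
        }
        "@

        # Probe a page we own: pin a buffer and test its first page.
        $buf    = New-Object byte[] 4096
        $handle = [System.Runtime.InteropServices.GCHandle]::Alloc($buf, 'Pinned')
        [WsQuery]::IsResident($handle.AddrOfPinnedObject())   # True while the page is resident
        $handle.Free()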

  • Preventing a heavy process from sinking in the swap file

    - by eran
    Our service tends to fall asleep during the night on our client's server, and then has a hard time waking up. What seems to happen is that the process heap, which is sometimes several hundred MB, is moved to the swap file. This happens at night, when our service is not used and others are scheduled to run (DB backups, AV scans, etc). When this happens, after a few hours of inactivity the first call to the service takes up to a few minutes (subsequent calls take seconds). I'm quite certain it's an issue of virtual memory management, and I really hate the idea of forcing the OS to keep our service in physical memory. I know doing that will hurt other processes on the server and decrease the overall server throughput. That said, our clients just want our app to be responsive; they don't care if nightly jobs take longer. I vaguely remember there's a way to force Windows to keep pages in physical memory, but I really hate that idea. I'm leaning more towards some internal or external watchdog that will initiate higher-level functionality (there is already an internal scheduler that does very little, and it makes no difference). A third-party tool that provided that kind of service would be just as good. I'd love to hear any comments, recommendations, and common solutions to this kind of problem. The service is written in VC2005 and runs on Windows servers. (A watchdog sketch follows this entry.)

    Read the article
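    A hedged sketch of the external-watchdog approach the asker leans towards: exercise the service with a cheap call at intervals overnight so its hot pages stay resident. The endpoint is hypothetical; any lightweight request that walks the service's heap will do. (The "force pages to stay in RAM" alternative the asker dislikes would be VirtualLock/SetProcessWorkingSetSize inside the service itself.)

        # Hypothetical keep-warm loop; schedule it for the idle hours.
        $healthUrl = 'http://localhost:8080/ping'   # hypothetical health-check endpoint
        while ($true) {
            try {
                (New-Object System.Net.WebClient).DownloadString($healthUrl) | Out-Null
                Write-Host ("{0:s}  keep-alive OK" -f (Get-Date))
            } catch {
                Write-Host ("{0:s}  keep-alive failed: {1}" -f (Get-Date), $_.Exception.Message)
            }
            Start-Sleep -Seconds 300   # touch the service every 5 minutes
        }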

  • Windows 7: How to place SuperFetch cache on an SSD?

    - by Ian Boyd
    I'm thinking of adding a solid state drive (SSD) to my existing Windows 7 installation. I know I can (and should) move my paging file to the SSD: Should the pagefile be placed on SSDs? Yes. Most pagefile operations are small random reads or larger sequential writes, both of which are types of operations that SSDs handle well. In looking at telemetry data from thousands of traces and focusing on pagefile reads and writes, we find that Pagefile.sys reads outnumber pagefile.sys writes by about 40 to 1, Pagefile.sys read sizes are typically quite small, with 67% less than or equal to 4 KB, and 88% less than 16 KB. Pagefile.sys writes are relatively large, with 62% greater than or equal to 128 KB and 45% being exactly 1 MB in size. In fact, given typical pagefile reference patterns and the favorable performance characteristics SSDs have on those patterns, there are few files better than the pagefile to place on an SSD. What I don't know is if I even can put a SuperFetch cache (i.e. ReadyBoost cache) on the solid state drive. I want to get the benefit of Windows being able to cache gigabytes of frequently accessed data on a relatively small (e.g. 30GB) solid state drive. This is exactly what SuperFetch+ReadyBoost (or SuperFetch+ReadyDrive) was designed for. Will Windows offer (or let) me place a ReadyBoost cache on a solid state flash drive connected via SATA? A problem with the ReadyBoost cache over the ReadyDrive cache is that the ReadyBoost cache does not survive between reboots. The cache is encrypted with a per-session key, making its existing contents unusable during boot and for SuperFetch pre-fetching during login. Update One: I know that Windows Vista limited you to only one ReadyBoost.sfcache file (I do not know if Windows 7 removed that limitation): Q: Can you use multiple devices for EMDs? A: Nope. We've limited Vista to one ReadyBoost per machine. Q: Why just one device? A: Time and quality. Since this is the first revision of the feature, we decided to focus on making the single device exceptional, without the difficulties of managing multiple caches. We like the idea, though, and it's under consideration for future versions. I also know that the 4GB limit on the cache file was a limitation of the FAT filesystem used on most USB sticks - an SSD drive would be formatted with NTFS: Q: What's the largest amount of flash that I can use for ReadyBoost? A: You can use up to 4GB of flash for ReadyBoost (which turns out to be 8GB of cache w/ the compression). Q: Why can't I use more than 4GB of flash? A: The FAT32 filesystem limits our ReadyBoost.sfcache file to 4GB. Can a ReadyBoost cache on an NTFS volume be larger than 4GB? Update Two: The ReadyBoost cache is encrypted with a per-boot session key. This means that the cache has to be re-built after each boot, and cannot be used to help speed boot times, or reduce the latency from login to a usable desktop. Windows ReadyDrive technology takes advantage of non-volatile (NV) memory (i.e. flash) that is incorporated with some hybrid hard drives. This flash cache can be used to help Windows boot, or resume from hibernation, faster. Will Windows 7 use an internal SSD drive as a ReadyBoost/ReadyDrive/SuperFetch cache? Is it possible to make Windows store a SuperFetch cache (i.e. ReadyBoost) on a non-removable SSD? Is it possible to not encrypt the ReadyBoost cache, and if so, will Windows 7 use the cache at boot time? See also: ReadyBoost + SSD = ? (SuperUser.com); Windows 7 - ReadyBoost & SSD drives?; Support and Q&A for Solid-State Drives; Using SSD as a cache for HDD, is there a solution?; Performance increase using SSD for paging/fetch/cache or ReadyBoost? (Win7); Windows 7 To Boost SSD Performance; How to Disable Nonvolatile Caching

    Read the article

  • 10 Windows Tweaking Myths Debunked

    - by Chris Hoffman
    Windows is big, complicated, and misunderstood. You'll still stumble across bad advice from time to time when browsing the web. These Windows tweaking, performance, and system maintenance tips are mostly just useless, but some are actively harmful. Luckily, most of these myths have been stomped out on mainstream sites and forums. However, if you start searching the web, you'll still find websites that recommend you do these things.

    Erase Cache Files Regularly to Speed Things Up: You can free up disk space by running an application like CCleaner, another temporary-file-cleaning utility, or even the Windows Disk Cleanup tool. In some cases, you may even see an old computer speed up when you erase a large amount of useless files. However, running CCleaner or similar utilities every day to erase your browser's cache won't actually speed things up. It will slow down your web browsing, as your web browser is forced to redownload the files all over again and reconstruct the cache you regularly delete. If you've installed CCleaner or a similar program and run it every day with the default settings, you're actually slowing down your web browsing. Consider at least preventing the program from wiping out your web browser cache.

    Enable ReadyBoost to Speed Up Modern PCs: Windows still prompts you to enable ReadyBoost when you insert a USB stick or memory card. On modern computers, this is completely pointless - ReadyBoost won't actually speed up your computer if you have at least 1 GB of RAM. If you have a very old computer with a tiny amount of RAM - think 512 MB - ReadyBoost may help a bit. Otherwise, don't bother.

    Open the Disk Defragmenter and Manually Defragment: On Windows 98, users had to manually open the defragmentation tool and run it, ensuring no other applications were using the hard drive while it did its work. Modern versions of Windows are capable of defragmenting your file system while other programs are using it, and they automatically defragment your disks for you. If you're still opening the Disk Defragmenter every week and clicking the Defragment button, you don't need to do this - Windows is doing it for you unless you've told it not to run on a schedule. Modern computers with solid-state drives don't have to be defragmented at all.

    Disable Your Pagefile to Increase Performance: When Windows runs out of empty space in RAM, it swaps out data from memory to a pagefile on your hard disk. If a computer doesn't have much memory and it's running slow, it's probably moving data to the pagefile or reading data from it. Some Windows geeks seem to think that the pagefile is bad for system performance and disable it completely. The argument seems to be that Windows can't be trusted to manage a pagefile and won't use it intelligently, so the pagefile needs to be removed. As long as you have enough RAM, it's true that you can get by without a pagefile. However, if you do have enough RAM, Windows will only use the pagefile rarely anyway. Tests have found that disabling the pagefile offers no performance benefit.

    Enable CPU Cores in MSConfig: Some websites claim that Windows may not be using all of your CPU cores, or that you can speed up your boot time by increasing the number of cores used during boot. They direct you to the MSConfig application, where you can indeed select an option that appears to increase the number of cores used. In reality, Windows always uses all of the processor cores your CPU has. (Technically, only one core is used at the beginning of the boot process, but the additional cores are quickly activated.) Leave this option unchecked. It's just a debugging option that allows you to set a maximum number of cores, so it would be useful if you wanted to force Windows to use only a single core on a multi-core system - but all it can do is restrict the number of cores used.

    Clean Your Prefetch to Increase Startup Speed: Windows watches the programs you run and creates .pf files in its Prefetch folder for them. The Prefetch feature works as a sort of cache - when you open an application, Windows checks the Prefetch folder, looks at the application's .pf file (if it exists), and uses that as a guide to start preloading data that the application will use. This helps your applications start faster. Some Windows geeks have misunderstood this feature. They believe that Windows loads these files at boot, so your boot time will slow down due to Windows preloading the data specified in the .pf files. They also argue you'll build up useless files as you uninstall programs and .pf files will be left over. In reality, Windows only loads the data in these .pf files when you launch the associated application, and only stores .pf files for the 128 most recently launched programs. If you were to regularly clean out the Prefetch folder, not only would programs take longer to open because they won't be preloaded, Windows would have to waste time recreating all the .pf files. You could also modify the PrefetchParameters setting to disable Prefetch, but there's no reason to do that. Let Windows manage Prefetch on its own.

    Disable QoS to Increase Network Bandwidth: Quality of Service (QoS) is a feature that allows your computer to prioritize its traffic. For example, a time-critical application like Skype could choose to use QoS and prioritize its traffic over a file-downloading program so your voice conversation would work smoothly, even while you were downloading files. Some people incorrectly believe that QoS always reserves a certain amount of bandwidth and this bandwidth is unused until you disable it. This is untrue. In reality, 100% of bandwidth is normally available to all applications unless a program chooses to use QoS. Even if a program does choose to use QoS, the reserved space will be available to other programs unless the program is actively using it. No bandwidth is ever set aside and left empty.

    Set DisablePagingExecutive to Make Windows Faster: The DisablePagingExecutive registry setting is set to 0 by default, which allows drivers and system code to be paged to the disk. When set to 1, drivers and system code will be forced to stay resident in memory. Once again, some people believe that Windows isn't smart enough to manage the pagefile on its own and believe that changing this option will force Windows to keep important files in memory rather than stupidly paging them out. If you have more than enough memory, changing this won't really do anything. If you have little memory, changing this setting may force Windows to push programs you're using to the page file rather than push unused system files there - this would slow things down. This is an option that may be helpful for debugging in some situations, not a setting to change for more performance.

    Process Idle Tasks to Free Memory: Windows does things, such as creating scheduled system restore points, when you step away from your computer. It waits until your computer is "idle" so it won't slow your computer and waste your time while you're using it. Running the "Rundll32.exe advapi32.dll,ProcessIdleTasks" command forces Windows to perform all of these tasks while you're using the computer. This is completely pointless and won't help free memory or anything like that - all you're doing is forcing Windows to slow your computer down while you're using it. This command only exists so benchmarking programs can force idle tasks to run before performing benchmarks, ensuring idle tasks don't start running and interfere with the benchmark.

    Delay or Disable Windows Services: There's no real reason to disable Windows services anymore. There was a time when Windows was particularly heavy and computers had little memory - think Windows Vista and those "Vista Capable" PCs Microsoft was sued over. Modern versions of Windows like Windows 7 and 8 are lighter than Windows Vista and computers have more than enough memory, so you won't see any improvements from disabling system services included with Windows. Some people argue for not disabling services, however - they recommend setting services from "Automatic" to "Automatic (Delayed Start)". By default, the Delayed Start option just starts services two minutes after the last "Automatic" service starts. Setting services to Delayed Start won't really speed up your boot time, as the services will still need to start - in fact, it may lengthen the time it takes to get a usable desktop, as services will still be loading two minutes after booting. Most services can load in parallel, and loading the services as early as possible will result in a better experience. The "Delayed Start" feature is primarily useful for system administrators who need to ensure a specific service starts later than another service.

    If you ever find a guide that recommends you set a little-known registry setting to improve performance, take a closer look - the change is probably useless. Want to actually speed up your PC? Try disabling the useless startup programs that run on boot, increase your boot time, and consume memory in the background. This is a much better tip than doing any of the above, especially considering most Windows PCs come packed to the brim with bloatware. (A registry-check sketch for two of the settings above follows this entry.)

    Read the article
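    A hedged sketch for checking, without changing anything, two of the registry values the article discusses (both paths are the documented locations):

        # DisablePagingExecutive: 0 = default (system code may be paged), 1 = forced resident
        $mm = 'HKLM:\SYSTEM\CurrentControlSet\Control\Session Manager\Memory Management'
        (Get-ItemProperty -Path $mm).DisablePagingExecutive

        # EnablePrefetcher: 0 = off, 1 = application launch, 2 = boot, 3 = both (default)
        (Get-ItemProperty -Path "$mm\PrefetchParameters").EnablePrefetcher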

  • Will using FAT32 provide better pagefile performance than NTFS?

    - by llazzaro
    Hello. I was discussing this with my other personalities, and we came up with a conflict. According to http://technet.microsoft.com/en-us/library/cc938440.aspx, FAT32 is faster when using smaller volumes. OK, a separate disk will give more performance than the same disk. But has anyone tested this? Scenario 1: a separate hard disk with a small FAT32 volume. Scenario 2: a separate hard disk with NTFS. Which one will win, and what is the minimum gain?

    Read the article

  • Windows 8.1 Update 1 Disk Usage 100%

    - by Gookjin Jeong
    Background Information / Computer Specs: I have a 14-inch Samsung Series 5 Ultra. Core i5 CPU, 750GB HDD, 8GB RAM, Intel HD Graphics 4000. I've had the computer for about 1.5 years with no major problems. Problem: The issue appeared at the beginning of April this year, when I updated the OS to Windows 8.1 Update 1 (not from 8 to 8.1). After being on continually (except for at night, when I put it in sleep mode) for about 48 hours, the disk usage as seen by Task Manager hits 100%. When this happens, everything from opening/closing applications to typing, and even bringing up the Start screen by pressing the Windows key, becomes extremely slow. The only way to make the disk usage decrease is to restart the computer. Then the problem repeats. I've used my current laptop (as well as my previous laptops) this way - putting it in sleep mode at night and restarting it only when Windows needs to install updates - for a long time, so I know the 100% disk usage is not due to the way I use the computer. The thing that causes the spike varies. Sometimes it's System, sometimes it's one of the various applications I installed (e.g. Chrome, Evernote, Spotify, Wunderlist, iTunes, etc.), and sometimes it's Antimalware Service Executable, etc. Tried Solutions: I think I tried almost every solution out there for this problem: running the check disk command (chkdsk /b /f /v /scan c:) from an admin command prompt; running Windows Memory Diagnostic; disabling Superfetch and Windows Search from services.msc; running "Fix problems with Windows Update" from Control Panel -- Troubleshooting; updating and rolling back the graphics driver (Intel HD 4000); disabling "Use hardware acceleration when available" in Chrome settings; disabling Intel Rapid Storage Technology; running the SFC /SCANNOW command as recommended here; running a quick scan and a full scan with Windows Defender (no threats found); taking the hard drive out and putting it back; refreshing the computer, from the Update and recovery -- Recovery option in Windows settings. NONE of the above worked for me. I was about to give up but then noticed that one of the main culprits of the disk usage spike, as shown in the "Disk Activity" section of the Resource Monitor, was C:\System (pagefile.sys). I googled around and found that one of the recommended solutions was to disable the pagefile. I then went to Control Panel -- System and Security -- System -- Advanced system settings -- Advanced tab -- Performance settings -- Advanced tab -- "Change" under Virtual memory, and discovered that the number for "Currently allocated" at the bottom was 1280MB, although the number for "Recommended" was 4533MB. I immediately changed it to 4533MB and checked my family members' computers to see what the numbers were like. All of theirs had a currently allocated space that was only slightly smaller than the recommended space (screenshot omitted). This might fix the problem; I'll have to wait a couple more days. But if it doesn't, what in the world should I do next? I'm guessing the hard drive isn't failing because this computer is less than 2 years old, and Speccy says that the status of the HDD is good. Update 5/27/2014: The "4533MB" solution did not work. I had to reboot the computer about 30 minutes ago because the disk usage again hit 100%. When I opened Resource Monitor, C:\System (pagefile.sys) was again shown to be the culprit. I have now disabled the pagefile entirely via the same dialog described above. The number for "Currently allocated" is now 0MB. Will update again in a couple of days, or if the problem occurs again, whichever comes sooner. (A sketch for reading these numbers from WMI follows this entry.)

    Read the article
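    A sketch (standard WMI classes; sizes in MB) for reading the same pagefile numbers the Virtual Memory dialog shows:

        # Current allocation and usage of each pagefile
        Get-CimInstance Win32_PageFileUsage |
            Select-Object Name, AllocatedBaseSize, CurrentUsage, PeakUsage

        # Configured initial/maximum sizes (no rows when system-managed or disabled)
        Get-CimInstance Win32_PageFileSetting |
            Select-Object Name, InitialSize, MaximumSize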

  • differencing disk opinions

    - by troth
    I've read about the performance issues with differencing disks, but I still think there is a solid place for them, and that's the OS boot partition. If I'm going to have 20 VMs on a CSV-based volume, I don't want to waste the 20+ gigs per guest just for the OS boot. If I get a good base disk with all of the most-used applications installed, and have the pagefile located somewhere else, I don't think the deltas would be that great, so it should not create a performance issue. Also, on SAN-based CSV volumes, does it make any sense to have the pagefile go to a separate CSV volume? Any opinions on this? Thanks. (A sketch of the layout follows this entry.)

    Read the article
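    A hedged sketch of the layout described above, using the Hyper-V PowerShell module (Server 2012 or later; all paths are hypothetical): one shared base image, plus one small differencing child per guest for the OS boot volume.

        $base = 'C:\ClusterStorage\Volume1\Base\ServerBase.vhdx'   # hypothetical parent disk
        1..20 | ForEach-Object {
            New-VHD -Path ("C:\ClusterStorage\Volume1\VM{0:d2}-os.vhdx" -f $_) `
                    -ParentPath $base -Differencing
        }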

  • windows server backup 2008 R2 - what is generating all the change data?

    - by bobjandal
    We have a small, relatively idle Windows Server 2008 R2 installation that does basic file sharing and Exchange for about 10 not very active users. When running a Windows Server Backup, the daily incremental data is about 20GB. This is not coming from users' shared files, nor from changes in their mailbox sizes. The total size of the installation is 249GB, which is mostly old files. Where is all this data coming from, and how can I reduce it? Online backup of the VHD file from the backup is taking a while because of this daily change. Is there some way I can at least see what files are changing and contributing to this data? (A sketch follows this entry.) Options I can think of but am not sure about: 1) pagefile churning - although the backup does not include the pagefile, perhaps the changed blocks left behind are included? 2) logs or something? But the installation size stays the same every day. 3) Should I zero free space using sdelete before backing up, perhaps?

    Read the article
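    A crude sketch for spotting file-level churn (the scan root is an assumption; note that block-level backup tracks changed blocks rather than files, so this only approximates the answer):

        # Largest files modified in the last 24 hours
        Get-ChildItem -Path 'D:\Shares' -Recurse -Force -ErrorAction SilentlyContinue |
            Where-Object { -not $_.PSIsContainer -and $_.LastWriteTime -gt (Get-Date).AddDays(-1) } |
            Sort-Object Length -Descending |
            Select-Object -First 25 FullName, LastWriteTime,
                          @{n='MB'; e={[math]::Round($_.Length / 1MB, 1)}}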

  • Memory mapped files cause low physical memory

    - by harik
    I have 2GB of RAM, and when I run a memory-intensive application the system goes into a low-available-physical-memory state and stops responding to user actions, like opening any application or menu invocation, etc. How do I trigger, or tell the system, to swap memory out to the pagefile and free physical memory? I'm using Windows XP. If I run the same application on a 4GB RAM machine this is not the case; system response is good. There, after available physical memory is exhausted, the system automatically swaps to the pagefile and frees physical memory - not as bad as on the 2GB system. To overcome this problem (on the 2GB machine), I attempted to use memory mapped files for the large datasets the application allocates. In this case the virtual memory of the application (process) is fine, but the system cache is high, and I have the same problem as above: physical memory is low. Even though the memory mapped file is not mapped into the process's virtual memory, the system cache is high. Why? Any help is appreciated. Thanks. (A counter-watching sketch follows this entry.)

    Read the article
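    A sketch for watching available memory against the file cache while the application runs (standard Memory-object counter paths; Get-Counter needs PowerShell 2.0 or later):

        Get-Counter -Counter '\Memory\Available MBytes',
                             '\Memory\System Cache Resident Bytes' `
                    -SampleInterval 5 -MaxSamples 12 |
            ForEach-Object { $_.CounterSamples | Format-Table Path, CookedValue -AutoSize }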

  • Problems with XP, Office, and PC in general - any ideas?

    - by molecule
    Hi all. This may not make a whole lot of sense, so please bear with me... I am about to perform a routine check on one of my users' PCs. Some background: the PC has a Xeon processor and 4GB of RAM and is running XP SP3. He has 2xHDD, and the pagefile is hosted on the secondary HDD (D:) with min/max values set to 4096. NO pagefile on C:. This user has 6 monitors, so he has an NVIDIA Quadro NVS440 hosting 4 monitors and an NVIDIA Quadro NVS290 hosting 2 monitors. There is a video card driver from NVIDIA which is compatible with both the NVS440 and NVS290, and he is on the latest version of that driver. (Note: the makes of the video cards are different - one is from Leadtek and the other from NVIDIA.) He is a heavy Bloomberg, Outlook, Word, and Excel user and runs two Citrix applications. Other apps are FoxIt PDF and IE. Problems: Outlook and Excel frequently crash. I am going to perform an Outlook and Excel repair and also check/remove unnecessary add-ins. Will he lose any customizations if I repaired and chose "Restore my shortcuts while repairing" and did not select "Discard my customized settings and restore default settings"? Does repair really repair anything? FYI - it stopped crashing ever since I moved a large spreadsheet he had open to his local HDD instead of over the network. This spreadsheet "refreshes" constantly, as it is pulling live data to update cells, and I suspect it was auto-saving so frequently that it caused crashes when saving over the network. At times, his right click completely fails to respond. His left click works fine, but he can't right-click on anything in any window, even on the desktop. Sometimes he needs to close certain applications, such as Adobe, before the right click starts functioning again. I removed Adobe and installed FoxIt, as I figured it was a resource issue, but I do not think so, as he does have sufficient resources when the problem is happening. Sometimes he can't bring up Task Manager until he kills certain apps. It definitely sounds like a resource issue, but I am not confident that is the root cause. Also, I am not sure if this is related to one of the apps installed, but his Start bar flickers (does not completely disappear) intermittently from time to time. The taskbar icons which are hidden appear and then get hidden again, as if it was having "fits". I have performed registry scans, malware scans, etc., but the problems do not go away. I am planning to run sfc /scannow and an Office repair, but would like to know if anyone has any other suggestions. What about setting a "small" pagefile on C:? I have heard that this is recommended, and the lack of one may be the reason why a minidump file was not generated when he encountered a blue screen. Also, any feedback on his video cards? Do you think different models would cause problems? The drivers seem to work, but he only has 2.5GB out of 4GB available RAM, as I believe the video cards chomped up a portion of this. I have recommended creating a new profile for him, but due to the amount of customisations he has, and the amount of time and effort it will take to get him up and running again, he prefers to bear with the problem rather than go down that path. However, at least once a week his PC acts up, and I can't think of any other tools or techniques to rectify his problems. I guess we are at a stage where we just want to "stabilize" things so he won't encounter issues that frequently. Any feedback is very much appreciated.

    Read the article

  • Windows 7 Paging file apparently not being used

    - by Daniel F.
    I'm running Windows 7 Home Premium 32-bit on a motherboard with 24GB of RAM. Of those 24GB, 20GB are assigned as a RAMDISK via ASRock XFastRAM. This RAMDISK has the drive letter X assigned to it. On X:\ I'm storing the temporary files folder, as well as pagefile.sys. Pagefile.sys is 6GB in size. X:\ usually has around 14GB of free space, so the temporary files are negligible; it's mostly the browsers which are storing their caches on there. Now, my issue is that Firefox is crashing a lot on me. No error message pops up, but I know this is because it's out of memory. I could kind of live with that, but now that I've switched from Eclipse to Android Studio, I know that I'm in trouble, because Java isn't capable of allocating, and Android Studio, together with the Java instances it launches, is quite a memory hog. So I tried to figure out what's wrong, and apparently Windows isn't swapping memory out to the paging file. While my applications are crashing (Firefox) or not starting (Java VMs), the paging file stays constantly at only around 15% usage (checked with Performance Monitor); 15% equals roughly 1GB. I know that the correct solution would be to switch to 64-bit Windows, but I had to use the 32-bit version because of driver issues I had about two years ago, and I guess I'll have them again if I reformat and install the 64-bit version. Also, the machine is running quite stably, and the only issue is the memory, so I'd like to use it as it is (as the apps are installed and configured). Is there a way to make Windows use the paging file more efficiently? None of my processes require more than 1GB; I'd just like it to swap out some seldom-used stuff, like GoogleCrashHandler.exe, in order to have more physical memory available. Is that possible?

    Read the article

  • Why is my browser using so much memory?

    - by Steve
    Hi. I've recently had problems with Firefox running very slowly when I have many tabs open; say, 20 tabs. My whole system would slow down. I decided to give Google Chrome a try, and it started out fine. But lately I am finding that it, too, slows down my whole system. Looking at Task Manager, chrome.exe is using about 250MB of memory across about 6 different entries in Task Manager. However, when I shut Chrome down, memory usage is reduced by about 600MB. How can this be? (Screenshot showing the drop in memory usage after ending Chrome omitted.) When my system locks up with Chrome having many tabs open, it takes 10 seconds to load the Start Menu, 10 seconds to expand All Programs and each folder and subfolder, and 30 seconds for a program to be highlighted under my mouse. It also takes 10 seconds to switch to Notepad. Why does Chrome appear to use so much more memory than Task Manager indicates? Why is my pagefile being used when I have around 1.1GB of memory free? Can I set Chrome to run in RAM and not in the pagefile? How can 20 tabs use 600MB? That's 30MB per tab. Thanks for your help. (A sketch for totalling Chrome's memory follows this entry.)

    Read the article
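    A sketch for summing what all chrome.exe processes hold, distinguishing working set (what is actually in RAM) from private bytes (committed memory, which may be pagefile-backed):

        $chrome = Get-Process chrome -ErrorAction SilentlyContinue
        $ws     = ($chrome | Measure-Object WorkingSet64 -Sum).Sum / 1MB
        $priv   = ($chrome | Measure-Object PrivateMemorySize64 -Sum).Sum / 1MB
        "{0:N0} MB working set, {1:N0} MB private bytes in {2} processes" -f $ws, $priv, @($chrome).Count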

  • check RAM, page file, /PAE, /3GB, SQL Server memory using PowerShell

    - by Manjot
    I am a PowerShell novice. After days of searching, I have put together a small PowerShell script (below) to check the page file, the /PAE switch, the /3GB switch, and SQL Server max/min RAM. I am running this on one server. If I want to run it on many servers (from a .txt file), how can I change it? How can I change it to search the boot.ini file's contents for a given server? (A loop sketch follows this entry.)

        Clear-Host
        $strComputer = "."

        # Page file size
        $PageFile = Get-WmiObject Win32_PageFile -ComputerName $strComputer
        Write-Host "Page File Size in MB: " ($PageFile.FileSize / 1MB)

        # Total physical RAM (the original computed this but never printed it)
        $colItems = Get-WmiObject Win32_PhysicalMemory -Namespace root\CIMv2 -ComputerName $strComputer
        $total = 0
        foreach ($objItem in $colItems) { $total = $total + $objItem.Capacity }
        Write-Host "Total RAM in MB: " ($total / 1MB)

        # /PAE switch
        $os = Get-WmiObject Win32_OperatingSystem -ComputerName $strComputer
        Write-Host "Is PAE Enabled: " $os.PAEEnabled

        # /3GB switch: read boot.ini via the target's admin share instead of piping
        # Write-Host into Get-Content, which does nothing
        $bootIni = if ($strComputer -eq ".") { "C:\boot.ini" } else { "\\$strComputer\c`$\boot.ini" }
        Write-Host "Is /3GB Enabled: " (Select-String -Path $bootIni -Pattern "/3GB" -Quiet)

        # SQL Server memory via SMO (load the assembly and pass the server name)
        [void][System.Reflection.Assembly]::LoadWithPartialName("Microsoft.SqlServer.Smo")
        $smo = New-Object Microsoft.SqlServer.Management.Smo.Server $strComputer
        $memSrv = $smo.Information.PhysicalMemory
        $memMin = $smo.Configuration.MinServerMemory.RunValue
        $memMax = $smo.Configuration.MaxServerMemory.RunValue
        Write-Host "Server RAM available: " -NoNewline; Write-Host "$memSrv MB" -ForegroundColor Blue
        Write-Host "SQL memory Min: " -NoNewline; Write-Host "$memMin MB"
        Write-Host "SQL memory Max: " -NoNewline; Write-Host "$memMax MB"

    Any comments on how this can be improved? Thanks in advance.

    Read the article
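    A sketch of the multi-server loop the asker wants (the list-file path is an assumption): feed computer names from the text file and run the same checks per name.

        # servers.txt: one computer name per line (hypothetical path)
        Get-Content 'C:\scripts\servers.txt' | ForEach-Object {
            $strComputer = $_
            Write-Host "==== $strComputer ===="
            # ...run the same checks as above, passing -ComputerName $strComputer...
            Write-Host "Is /3GB Enabled: " (Select-String -Path "\\$strComputer\c`$\boot.ini" -Pattern "/3GB" -Quiet)
        }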

  • Not able to Defrag my drive for shrink even using PerfectDisk on Windows 7

    - by Mithun Sasidharan
    I want to partition my C: drive, which has over 450GB capacity, of which hardly 30GB is being used. I deleted pagefile.sys and also disabled hibernation and the cache memory. I then defragmented and consolidated free space using PerfectDisk 12, and also ran a boot-time defragmentation. What remains are metadata files that prevent me from shrinking the volume beyond half the size of the disk. Please tell me what to do.

    Read the article

  • Windows 2003 X64 Std page file usage

    - by duhaas
    Just trying to understand why I'm seeing what I'm seeing on this system. Pagefile performance counters are telling me I'm at about 1.5% usage of my page file, and the settings for the file are 2GB-4GB, but Task Manager was showing 13GB usage. Oddly enough, it just sank back down. This machine has IBM DB2 9.5 Workgroup Edition running on it. Thoughts? Actually, I just learned the developer had stopped DB2, hence the huge drop; I'm just not understanding the difference between the PF usage shown in Task Manager and the perf counters. (A counter sketch follows this entry.)

    Read the article
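    A sketch comparing the two measurements (standard counter paths; on Server 2003, Task Manager's "PF Usage" graph tracks the total commit charge rather than actual pagefile allocation, which would explain the gap):

        Get-Counter -Counter '\Paging File(_Total)\% Usage',
                             '\Memory\Committed Bytes',
                             '\Memory\Commit Limit' |
            ForEach-Object { $_.CounterSamples | Format-Table Path, CookedValue -AutoSize }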

  • How can the Private Bytes of a process be significantly less than its effect on the system commit charge?

    - by bacar
    On a 64-bit Windows Server 2003, I can see using taskmgr or Process Explorer that the total commit charge is around 3.5GB, yet when I sum the Private Bytes consumed by each process (by running pslist -m and adding all values under the Priv column) the total comes in at 1.6GB. I know which process seems to be causing this (sqlservr.exe), as when I kill the process, the commit charge drops dramatically. However, the process in question is consuming only ~220MB of Private Bytes, yet killing the process drops the commit charge by ~1.6GB. How is this possible? How can the commit charge be so significantly greater than Private Bytes, which should represent the amount of committed memory? If some other factor contributes to the commit charge, what is that factor, and how can I view its impact in Process Explorer? Note: I claim that I understand the difference between reserved and committed memory already; my investigations above relate specifically to Private Bytes, which includes only committed memory and excludes reserved memory. The Virtual Size of the process in this case is over 4GB, but this should be irrelevant - Virtual Size in procexp represents reserved, not committed, memory, and should not contribute to the commit charge. I'm particularly interested in generalised answers to this question: I'm assuming that if sqlservr.exe can behave in this way, any process potentially could. Further Investigations: I notice that pointing Sysinternals VMMap at this process reports a committed "Private Data" of 1.6GB, despite procexp's reported Private Bytes of 220MB. This is particularly strange given that the documentation for this field in the "Windows Sysinternals Administrator's Reference" states that: Private Data memory is memory that is allocated by VirtualAlloc and that is not further handled by the Heap Manager or the .NET runtime, or assigned to the Stack category... VMMap's definition of "Private Data" is more granular than that of Process Explorer's "private bytes." Procexp's "private bytes" includes all private committed memory belonging to the process. I.e., VMMap's committed "Private Data" should be smaller than procexp's "Private Bytes". Also, in the "Process committed memory" section of Mark Russinovich's excellent Pushing the Limits of Windows: Virtual Memory, he highlights two cases which won't show up in Private Bytes: file mapping views with copy-on-write semantics (however, according to VMMap there is no significant space allocated to Mapped Files), and pagefile-backed virtual memory (however, I tried testlimit with the -l flag as suggested, and no significant memory is consumed by pagefile-backed sections). (A measurement sketch follows this entry.)

    Read the article
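    A sketch reproducing the comparison in the question (the original was done with pslist; here the sum of Private Bytes across all processes is set against the system commit charge):

        $privSum = (Get-Process | Measure-Object PrivateMemorySize64 -Sum).Sum
        $commit  = (Get-Counter '\Memory\Committed Bytes').CounterSamples[0].CookedValue
        "{0:N0} MB summed Private Bytes vs {1:N0} MB commit charge" -f ($privSum / 1MB), ($commit / 1MB)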

  • Windows 7/Ubuntu 10.10 Dual-Triple Boot Partitioning Recommendation for HP Laptop OEM

    - by Denja
    Hi Linux community, I find myself struggling with the ever slow and buggy Windows OS once again. It's time for me to go the Ubuntu/Linux way for a better and faster operating system. As a computer technician I want to learn and use both systems, but possibly introduce new users to more affordable Linux-based systems. For now, I'm in the process of creating dual-boot or even triple-boot layouts on my laptop. Here's the layout in use now:
    * (C:) Windows 7 system partition, NTFS - 284.89GB (Primary, Boot, Pagefile, Dump)
    * HP_TOOLS system partition, FAT32 - 99MB (Primary)
    * (D:) RECOVERY partition, NTFS - 12.90GB (Primary)
    * SYSTEM partition, NTFS - 199MB (Primary)
    Here's the layout I want to make:
    * (C:) Windows 7 system partition, NTFS - 60GB (Primary) (sda1)
    * (D:) Windows data partition (user files), NTFS - 60GB (Extended or Primary) (sda2); I want to share this with Linux
    * Linux root, ext4 - 10GB (Primary) (sda3)
    * Linux swap - RAM size, 3GB (sda4)
    * Linux home, ext4 - 164.9GB (Extended) (sda5)
    Question 1: Based on my layout, what is your suggestion for a triple-boot layout with an additional Linux OS (like Puppy)? Thank you in advance for your advice and suggestions.

    Read the article

  • Should you disable page file with SSD?

    - by Pyrolistical
    I've been reading http://serverfault.com/questions/23621/any-benefit-or-detriment-from-removing-a-pagefile-on-an-8gb-ram-machine, and it has a lot of great information. But assuming you have more than enough RAM, I think the page file should be disabled on an SSD to extend its lifetime. I know you would lose the core dump on a crash, but not many people need that information. From my understanding, without a page file, reaching the limit of your RAM might trigger thrashing on disk. But for SSDs there is no concept of thrashing; reads are fast. What do you guys think?

    Read the article

  • 256GB SSD / 2TB internal drive, i7, 16GB RAM... explorer.exe still slow

    - by web_dvlp_sd
    OK, so I just got done tweaking my brand new PC, and explorer.exe still manages to be sluggish, not as snappy as I'd like or expected at all. Specs: i7 4th-gen processor, 16GB RAM, Windows 7 Home Premium 64-bit on a 256GB SSD, secondary internal 2TB drive. I assumed the system would be lightning fast. I've done a bunch of research and switched a lot of different settings, including enabling/disabling indexing, turning the pagefile on/off or placing it on the separate drive, and turning folder optimization options on/off. The changes have made minimal or no difference at all. Any further advice, or anything I might be missing besides a new/repair install? The PC is literally 3 days old; I don't feel like I need a new install at all.

    Read the article

  • Is computer's DRAM size not as important once we get a Solid State Drive?

    - by Jian Lin
    I am thinking of getting a Dell X11 netbook; it can go up to 8GB of DRAM, together with a 256GB solid state drive. In that case, it could handle quite a bit of Virtual PC running Linux, Win XP, etc. But is the 8GB of RAM not so important any more? Won't 2GB or 4GB be quite good if a solid state drive is used? I think the biggest worry is that when memory is not enough, less often used data is swapped to the pagefile on the hard disk and things become really slow; but with an SSD, is that problem much less of a concern? Is there a ballpark comparison: if DRAM speed is n, then how many n is SSD speed, and how many n is hard disk speed?

    Read the article

  • What files should be excluded from a complete Windows backup?

    - by tro
    I'm starting to use CrashPlan to back up my Win 7 PC. I've got it writing to my external HD (for quick local restores) and to CrashPlan Central (for offsite storage). I'd like to back up my entire C:\ drive (the only partition) in a way that: preserves all of my installed software and configuration, but avoids backing up log files and other ephemeral/temporary files that are regenerated during normal operation of the OS. Which files and/or directories should I be excluding from backups? I'd like to make this a community wiki, so that we could all contribute towards a definitive list. Here's the list of regular expressions identifying the directories and files that CrashPlan excludes on Windows by default, as listed at http://support.crashplan.com/doku.php/articles/admin_excludes (one per line):

        .*/(?:42|\d{8,})/(?:cp|~).*
        (?i).*/CrashPlan.*/(?:cache|log|conf|manifest|upgrade)/.*
        .*\.part
        .*/iPhoto Library/iPod Photo Cache/.*
        .*\.cprestoretmp.*
        *\.rbf
        :/Config\\.Msi.*
        .*/Google/Chrome/.*cache.*
        .*/Mozilla/Firefox/.*cache.*
        .*\$RECYCLE\.BIN/.*
        .*/System Volume Information/.*
        .*/RECYCLER/.*
        .*/I386.*
        .*/pagefile.sys
        .*/MSOCache.*
        .*UsrClass\.dat\.LOG
        .*UsrClass\.dat
        .*/Temporary Internet Files/.*
        (?i).*/ntuser.dat.*
        .*/Local Settings/Temp.*
        .*/AppData/Local/Temp.*
        .*/AppData/Temp.*
        .*/Windows/Temp.*
        (?i).*/Microsoft.*/Windows/.*\.log
        .*/Microsoft.*/Windows/Cookies.*
        .*/Microsoft.*/RecoveryStore.*
        (?i).:/Config\\.Msi.*
        (?i).*\\.rbf
        .*/Windows/Installer.*

    Other excludes:

        .*\.(class|obj)
        .*/hiberfil.sys
        (?i).*\.tmp
        (?i).*/temp/
        (?i).*/tmp/
        .*Thumbs\.db
        .*/Local Settings/History/
        .*/NetHood/
        .*/PrintHood/
        .*/Cookies/
        .*/Recent/
        .*/SendTo/

    Read the article
