Search Results

Search found 13619 results on 545 pages for 'memory mapped'.


  • What amount of physical RAM would a typical "commodity class" server have, as of late 2013?

    - by marathon
    I'm trying to spec out servers for my company's infrastructure group to build. They tell me anything more than 2GB is too much, which I find ridiculous considering cheap DRAM costs about 15 bucks a DIMM in bulk and our particular software runs better with more memory. I tried to find out how much RAM Google's servers use, and pinning down a number is hard. The best I could find in a Google research paper was that in 2008 their commodity servers were using 2 GB and 4 GB DIMMs, but the paper never said how many. I realize "commodity server" is a vague term, but I'm just looking for a rough range of RAM used. I suspect at least 16GB is going to be the norm.

    Read the article

  • Performance effects of compressing Program Files on Windows / NTFS

    - by SRobertJames
    What are the performance effects of compressing Program Files on Windows (NTFS)? On a fast, multicore machine, the overhead of decompression is minimal. Machines are generally disk bound, and if you can reduce the disk load through compression, you often speed things up. (Microsoft says that the built-in compression of Windows Search indexes actually improves speed for this reason.) On the other hand, Windows' virtual memory is complicated. Perhaps if files are compressed, they can't be paged out simply. And there may be other issues. In short: on a fast, multicore machine with a relatively slow disk, what performance effects will compressing Program Files have?

    Read the article

  • Why does installing an NVidia 9600GT graphics card take 1GB of RAM away from Windows?

    - by Nick G
    Hi, I've changed graphics cards in my PC and now Windows 7 (32-bit) is reporting that I have a whole gigabyte less physical RAM in my PC. Why is this? Firstly, the machine has 4GB of physical RAM. The old card was an ATI 2600XT with 256MB and the new card is an NVidia 9600GT with 512MB. With the ATI card Windows sees 3326MB. With the NVidia card, Windows sees 2558MB. I realise that due to address space restrictions I will not see all 4GB with 32-bit Windows, but why is there such a massive loss of RAM when simply changing cards (bearing in mind BOTH cards have their own RAM and borrow no main memory like some onboard chipsets do)? Would using 64-bit Windows solve this? Thanks, Nick.

    Read the article

  • How do I open a 60 MB PNG image on OS X?

    - by Topener
    Alright, so I've been looking around on this site for how to open a big PNG image. The question I found was about a 10 MB PNG; Xee apparently did the job. So I downloaded Xee for my 60 MB file, but it crashes. So do iPhoto, Pixelmator and Preview. In the Pixelmator and Xee cases I actually had to kill the computer and restart it; it crashed so 'hard' I couldn't get it to respond again. How do I open this file (and zoom it)? Specs: newly acquired MacBook Pro, 4 GB memory, 2.3 GHz i7. The PNG is roughly 58 MB, approx. 15000x30000 pixels.
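    One workaround sketch, assuming the goal is just to view the image rather than edit it at full resolution: OS X ships with the sips tool, which can write a downscaled copy that Preview should open without trouble (the filenames and target size below are placeholders).

        # Downscale so the longest side is at most 4000 px; adjust to taste
        sips -Z 4000 huge.png --out huge-preview.png
        open huge-preview.png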

    Read the article

  • Why does the Java VM process eat up more RAM than specified in the -Xmx parameter?

    - by evilpenguin
    I have multiple servers running CentOS 5.4 and only one application running on a Java VM. I've configured the Java VM with the following arguments:
        java -Xmx4500M -server -XX:+UseConcMarkSweepGC -XX:+CMSIncrementalMode -XX:NewSize=1024m -Djava.net.preferIPv4Stack=true -Dcom.sun.management.jmxremote=true
    The machines I'm running the VM on have 6 GB RAM and no other applications running. After a while, the java process starts to hit the swap space really hard; I get this info out of the top command:
        7658 root 25 0 11.7g 3.9g 4796 S 39.4 67.3 543:54.17 java
    On the other hand, if I connect via JConsole, it reports the Java VM has 2.6 GB used, 4.6 GB committed and 4.6 GB max. java -version returns:
        java version "1.6.0_17"
        Java(TM) SE Runtime Environment (build 1.6.0_17-b04)
        Java HotSpot(TM) 64-Bit Server VM (build 14.3-b01, mixed mode)
    Why is the Java VM expanding so far past its allocated heap size? And where does that memory go, if it's not reported in JConsole?
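    For context, -Xmx only bounds the Java heap; thread stacks, PermGen, the JIT code cache, NIO direct buffers and the JVM's own native allocations all sit on top of it. A rough sketch of the flags that cap those extra pieces, with illustrative (not recommended) values, plus a way to see where the resident memory actually lives:

        # Each flag caps one non-heap region; the real footprint is roughly heap + all of these:
        #   -Xss                       stack size per thread (multiply by thread count)
        #   -XX:MaxPermSize            class metadata (PermGen on Java 6)
        #   -XX:MaxDirectMemorySize    NIO direct buffers
        #   -XX:ReservedCodeCacheSize  JIT-compiled code
        java -Xmx4500m -Xss512k -XX:MaxPermSize=256m \
             -XX:MaxDirectMemorySize=256m -XX:ReservedCodeCacheSize=128m \
             -jar app.jar    # app.jar is a placeholder for the real launch command
        # Dump the process mappings to see which regions hold the resident memory:
        pmap -x 7658 | sort -k2 -n | tail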

    Read the article

  • How to choose a size for a cloud server (Rackspace)

    - by Emil
    We're going to test the Rackspace cloud next week to see how it works with our web app. It's a LAMP environment with a lot of MySQL databases. How do I choose the "right" server size? On Rackspace I can choose slices with 256, 512, 1024, 2048, 4096 MB of memory, etc. Right now we don't have a lot of traffic (approx. 1000 visitors/day), but I thought the whole "cloud" idea was to not be limited and to auto-scale. Update: What I'm looking for now is a specification of what I need. I know it's too complex. I'm looking for examples, case studies, etc. It would be interesting to hear something like "Yes, we're serving 10,000 daily requests without spikes on a LAMP stack with only one 2 GB slice".

    Read the article

  • Ubuntu 9.1 Only Sees 244 MB RAM, while BIOS and Windows See 1.5 GB

    - by nicorellius
    I have 1.5 GB of RAM installed on an older Dell, Pentium 4. I just installed Ubuntu 9.1 and the system is only seeing 244 MB of RAM, even though there is 1.5 GB on the system. The BIOS sees all of it. I ran a Knoppix disc and it only saw 25 MB upon booting. I made no particular changes to the installation that would affect this. I looked through the BIOS and the only setting I could see was the AGP aperture; not even sure what that is. Anyone know where I went wrong? I also tried moving the memory modules around on the board. Booted with just the 1 GB stick, still saw 244 MB.
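    A few quick checks that might narrow this down (a diagnostic sketch, not a fix): compare what the running kernel, the boot command line and the firmware tables each report.

        free -m                          # what the running kernel thinks it has
        cat /proc/cmdline                # look for a stray mem= boot parameter
        dmesg | grep -i e820             # the memory map the BIOS handed to the kernel
        sudo dmidecode --type memory     # what the firmware reports per DIMM slot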

    Read the article

  • Are there pitfalls to using incompatible RAM (frequencies) in motherboards?

    - by osij2is
    I'd like to use 2 x 4GB DDR3 1600 DIMMs in a motherboard capable of only DDR3 1066. The DDR3 1600 is on sale and the cost is identical to the 1066 DIMMs. It'd be nice to have these faster sticks around should I upgrade the motherboard. I assume the RAM can underclock itself or be changed in the BIOS. While it's obviously a less than ideal situation, I don't know if there are other unintended consequences in terms of stability, performance and longevity of the board and said RAM. Am I doing any damage to the memory controller or RAM? I've always bought RAM at the max speed specified for the motherboard and never gone over, so I'm not sure if there are any caveats to this at all. Edit: I intend to use the RAM in pairs. I know that mixing RAM speeds is just a bad idea.

    Read the article

  • Is a computer's DRAM size not as important once we get a Solid State Drive?

    - by Jian Lin
    I am thinking of getting a Dell X11 netbook, and it can go up to 8GB of DRAM, together with a 256GB Solid State Drive. So in that case, it can handle quite a bit of Virtual PC running Linux, Win XP, etc. But is the 8GB of RAM not so important any more? Won't 2GB or 4GB be quite good if a Solid State Drive is used? I think the biggest worry is that the memory is not enough and the less often used data is swapped to the pagefile on the hard disk and everything becomes really slow, but with an SSD, is that problem a lot less of a concern? Is there a comparison as to, if DRAM speed is n, then SSD speed is how many n and hard disk speed is how many n, just as a ballpark comparison?
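    One way to get your own ballpark n's, sketched here for a Linux guest (the disk paths are placeholders, the figures vary wildly by hardware, and this only measures sequential throughput, not the random-access latency that makes swapping painful):

        # RAM-backed tmpfs, roughly memory speed
        dd if=/dev/zero of=/dev/shm/ramtest bs=1M count=512
        # A file on the SSD or HDD, bypassing the page cache so the device itself is measured
        dd if=/dev/zero of=/home/user/disktest bs=1M count=512 oflag=direct
        rm /dev/shm/ramtest /home/user/disktest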

    Read the article

  • What is Performance Monitor telling me when my page faults / second are high?

    - by David Robison
    I have a Windows 7 x64 computer that is having performance issues. After some investigation, I have discovered that the page faults / second on it, as reported by Performance Monitor, are really high. Everything else seems to be normal. Resource Monitor reports no hard faults and lots of available memory. Is this a potential cause for problems, or is it a red herring? If it is something that could be causing problems, what should I do next to figure out what is causing it? Here is a screen shot of the Performance Monitor. Notice that the average page faults / second is 75,887. On another computer that does not have problems, this number is closer to 3,000. Here is a screen shot of the Resource Monitor, sorted by hard faults / second, which is currently 0 for all processes.

    Read the article

  • Only some motherboards can support faster RAM?

    - by Wesley
    This is a question relating to one of my builds. Here are the specs:
        ECS P4VXASD2+ V5.0 motherboard
        Intel P4 Northwood 2.8 GHz (533 MHz FSB, 512 KB L2)
        2x 1GB PC3200 DDR RAM
        Maxtor 300GB IDE HDD
        16 MB NVIDIA TNT2 Pro AGP
        OKIA 300W ATX PSU
        Gigabyte 52x CD-ROM
    The issue right now is that I'm trying to install Windows XP from the CD drive, but the computer randomly restarts partway through installation. My other build was BSODing due to RAM latency errors. The ECS board manual states that memory modules "up to 333 MHz" (i.e. PC2700) are supported. However, I am running PC3200 modules, which are clearly faster than PC2700. Would this be causing the computer to randomly restart? EDIT 1: I also wanted to mention that my eMachines T2482 is actually running 2x 512MB PC3200 DDR RAM when it should only support PC1600 and PC2100 DDR RAM, yet there are no issues with it.

    Read the article

  • Hypertransport sync flood error

    - by Carl B
    What is it? And what causes it? Is it only for uncorrectable DIMM errors (Troubleshooting DIMM errors)? When a UCE occurs, the memory controller causes an immediate reboot of the system. During reboot, the BIOS checks the Machine Check registers and determines that the previous reboot was due to a UCE, then reports this in POST after the memtest stage: "A Hypertransport Sync Flood occurred on last boot". The BIOS also reports this event in the service processor’s system event log (SEL), as shown in the sample IPMItool output. The suggested causes I have found seem to target everything but the computer case: bad caps, BIOS versions (happens in one version, not another), graphics card issues, lack of power to the CPU. System specs:
        Windows Home Premium 64
        Motherboard - MSI 790FX-GD70 (MS-7577) / BIOS v1.9 (American Megatrends Inc.)
        RAM - Patriot G Series ‘Sector 5’ Edition 4GB DDR3 1600
        CPU - AMD Phenom II X2 555 Black Edition Callisto 3.2GHz Socket AM3 80W (note: unlocked 2 cores; CPU-Z IDs it as Phenom II X4 B55)
        Graphics - 2 x Radeon 5750 in CrossFire
        PSU - ABS 900W
        HDDs - 2x Seagate 1.5 TB SATA
        SSD - 1x OCZ 120 GB Vertex Plus R2

    Read the article

  • Memory leak in Windows page file when calling a shell command

    - by Arno
    I have an issue on our Windows 2003 x64 build server when invoking shell commands from a script. Each call causes a "memory leak" in the page file, so it grows quite rapidly until it reaches the maximum and the machine stops working. I can reproduce the problem very nicely by running a Perl script like:
        for ($count=1; $count<5000; $count++) { system "echo huhu"; }
    It is independent of the scripting language, as the same happens with Lua:
        for i=1,5000 do os.execute("echo huhu") end
    I found somebody describing the same issue with PHP at http://www.issociate.de/board/post/454835/Memory_leak_occurs_when_exec%28%29_function_is_used_on_Windows_platform.html His solution (firewall/virus scanner) does not apply; neither is running on the machine. We can also reproduce the issue on other developer machines running XP 64, but not on XP 32-bit. The guilty party for the allocation is C:\WINDOWS\System32\svchost.exe -k netsvcs, which runs all the basic Windows services. Does anybody know the issue and how to resolve it?

    Read the article

  • Why is my Asus P5PL2-E using 640MB of RAM (out of 4GB)? [closed]

    - by Tom
    Possible Duplicate: Windows XP and RAM 3.5GB+ I've recently installed 4GB of RAM in my server, which is running Windows XP SP2 32-bit, and My Computer showed that only 3.37GB was installed. After digging through Google for a bit, I couldn't find anything helpful, but I do remember reading a forum post about the motherboard using 640MB of RAM. Digging in my own BIOS, I've also found that my motherboard has reserved that amount for itself. Why does my motherboard reserve this memory, and how can I tune it down to, say, 128MB?

    Read the article

  • Tuning OpenVZ Containers to work better with Java?

    - by Daniel
    I have an 8 GB RAM server (dedicated) and currently have KVM virtual machines running on there (successfully); however, I'm considering moving to OpenVZ, as KVM seems a bit overkill with a lot of overhead for what I use it for. In the past I have used OpenVZ containers, hosted by myself and by other providers, and Java doesn't seem to work well with them. One example is that if I give a container 2 GB RAM (no burst; with or without vswap, doesn't matter), a Java instance can only be tuned to use at the very most 1500 MB of that RAM (-Xmx, -Xms). Ideally, I wish to be able to create "mini" containers with about 256 MB, 512 MB, 768 MB RAM and run some Java instances in them. My question is: I'm trying to find an ideal way to tune an OpenVZ container configuration to work better with Java memory. Please don't suggest anything related to Java settings; I'm looking for OpenVZ-specific answers, though I welcome any suggestion if you feel it may help me. Much appreciated, Daniel
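    For reference, these are the OpenVZ knobs usually involved when a JVM cannot reserve its full heap (a sketch only; the container ID and values are placeholders, and which pair applies depends on whether the host kernel uses vswap or the older user beancounters):

        # vswap kernels: plain RAM + swap limits
        vzctl set 101 --ram 2G --swap 512M --save
        # older beancounter kernels: the JVM's heap reservation counts against privvmpages,
        # so barrier:limit must exceed -Xmx plus non-heap overhead (values are in 4 KB pages)
        vzctl set 101 --privvmpages 786432:786432 --save
        # see which counters inside the container are hitting their limits (failcnt column)
        vzctl exec 101 cat /proc/user_beancounters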

    Read the article

  • Making the most of dual channel DDR2: how do I arrange the sticks?

    - by andrz_001
    I've searched online, but I can't find anything definitive that will put me at ease, so I turn to Super User. This is how I have the RAM sticks arranged now: To make the most of the RAM and the dual-channel capability, it occurred to me that perhaps I have the sticks arranged incorrectly. Should I move the stick in the DDR2_2 slot one over, to the adjacent red slot? I appreciate any suggestions. (BTW, can something in the BIOS tell me whether I'm running at "optimal" memory speeds?) Edit: I'm running Windows XP SP3, 32-bit. Thanks.

    Read the article

  • PC with 2 RAM sticks of different frequencies (1GB 533MHz + 2GB 667MHz) showing only 2GB RAM

    - by srinivas
    I have a 6-year-old PC with 1.25GB (1GB + 256MB) of DDR2 533MHz RAM. As the computer has become very slow and memory usage reaches its limit most of the time, I decided to upgrade the RAM. Unfortunately, I'm not finding retailers selling DDR2 533MHz RAM near my area. Since I was not sure about mixing RAM of different frequencies, I searched the web. Most results said they can be mixed, but the computer would run at the lower frequency of the two. So I bought 2GB of DDR2 667MHz RAM, expecting my PC to have 3GB RAM running at 533 MHz. But the computer properties page shows only 2GB RAM. The BIOS settings too show the slot with 2GB 667MHz as ACTIVE and the other slot with 1GB 533MHz as "NOT INSTALLED". When I remove the 2GB stick, the computer shows only 1GB RAM. So for some reason, the 1GB stick is not getting detected/used when both are installed. What is the problem here? Thanks in advance.

    Read the article

  • Apache freezing: how to detect which virtual host is getting hit?

    - by mr-euro
    I have a production server that in the last 24 hours has been hard-rebooted 4 times due to freezes. Ping is fine, but all other services time out (Apache, SSHd, etc). I have now diagnosed it: Apache runs out of memory due to an exorbitant number of child processes forking suddenly within seconds of starting Apache. Stopping Apache just after rebooting keeps the server stable again. My two questions are: Is there a way to detect which of the vhosts is suddenly being hammered, without looking into each vhost's access log one by one? Is there a way to quickly enable/disable vhosts without commenting (#) them all out in httpd.conf?
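    For the first question, a crude sketch (assuming each vhost writes its own access log; the log path and interval below are placeholders): snapshot the per-vhost log line counts twice and see which one grows fastest.

        for log in /var/log/httpd/*access*.log; do wc -l "$log"; done > /tmp/hits.before
        sleep 30
        for log in /var/log/httpd/*access*.log; do wc -l "$log"; done > /tmp/hits.after
        diff /tmp/hits.before /tmp/hits.after   # the vhost whose count jumps is the one being hammered

    For the second question, if each vhost lives in its own file pulled in by an Include directive, moving the file out of the included directory and reloading is usually quicker than commenting blocks out of httpd.conf.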

    Read the article

  • Chipset GPU causes a massive slowdown

    - by zyboxenterprises
    My AMD Radeon HD 7700 recently broke (the fan stopped working and the GPU overheated), and now I'm running on the internal chipset graphics, which causes a massive slowdown of the whole PC. I've changed the graphics memory from 32MB (the minimum) to 256MB (the highest), and it hasn't made any difference whatsoever. I'm using Windows Aero, and disabling it should have made a small difference, but it didn't; the whole PC is still slow. I know it's not the computer build, because I built it myself and it was a lot faster with the AMD Radeon HD 7700 in it, which is why I believe the internal chipset graphics are causing the problem. Is this behavior normal? I don't have the cash right now to go out and buy a new dedicated GPU. I'm using an ASRock N68C-GS FX motherboard with an AMD FX 4100 (overclocked to 4.3GHz) and 4GB RAM. The overclock was an attempt to work around this issue; it is not the cause of the slowdown the integrated graphics are producing.

    Read the article

  • What are the best possible ways to benchmark RAM (non-ECC) under Linux / ARM?

    - by moul
    I want to test the integrity and overall performance of non-ECC memory chips on a custom board. Are there tools that run under Linux so I can monitor the system and the global temperature at the same time? Are there any non-ECC-specific tests to do in general? EDIT 1: I already know how to monitor temperature (I use a special platform feature, /sys/devices/platform/......../temp1_input). For now:
        wazoox: it works, but I have to code my own tests
        Jason Huntley: ramspeed: does not work on ARM
        stream benchmark: it works and is very fast, so I'll check whether it's accurate and complete
        memtest: I'll try later, since it does not run directly from Linux
        stress for Fedora: I'll try later too; it's too problematic for me to install Fedora now
    I also found this distribution: http://www.stresslinux.org/sl/ I'll continue to check tools that run directly under Linux without too many dependencies; after that I'll maybe give a try to solutions like stresslinux, memtest and stress for Fedora. Thanks for your answers, I'll continue to investigate.
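    One more candidate that does run directly under Linux is memtester (a userspace test, no reboot needed); a minimal sketch that logs the board temperature alongside a run, keeping the sensor path elided as above since it is platform-specific, with size and iteration count as placeholders:

        # log the temperature every 5 s in the background while the test runs
        while sleep 5; do
            echo "$(date +%s) $(cat /sys/devices/platform/......../temp1_input)"
        done >> temp.log &
        LOGGER=$!
        memtester 512M 3      # test 512 MB of RAM for 3 iterations (run as root so it can lock the region)
        kill "$LOGGER"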

    Read the article

  • Need help upgrading MacBookPro3,1 RAM to 4GB.

    - by Fantomas
    My questions are: 1) Where to buy it and what to buy? I have heard that this RAM is generic enough and it does not have to come from Apple. 2) Can I reuse my existing stick(s)? Would I have a single 2GB module, or 2 x 1GB modules? 3) If I have 2GB already, is it a good idea to have one old stick and one new one? Which one is better placed at the top and which one at the bottom? Let me know what questions you have. My computer's info: Hardware Overview: Model Name: MacBook Pro Model Identifier: MacBookPro3,1 Processor Name: Intel Core 2 Duo Processor Speed: 2.4 GHz Number Of Processors: 1 Total Number Of Cores: 2 L2 Cache: 4 MB Memory: 2 GB Bus Speed: 800 MHz Boot ROM Version: MBP31.0070.B07 SMC Version (system): 1.16f11

    Read the article

  • Is a computer's DRAM size not as important once we get a Solid State Drive?

    - by Jian Lin
    I am thinking of getting a Dell X11 netbook, and it can go up to 8GB of DRAM, together with a 256GB Solid State Drive. So in that case, it can handle quite a bit of Virtual PC running Linux, Win XP, etc. But is the 8GB of RAM not so important any more? Won't 2GB or 4GB be quite good if a Solid State Drive is used? I think the most worrying thing is that the memory is not enough and the less often used data is swapped to the pagefile on the hard disk and it will become really slow, but with an SSD, the problem is a lot less of a concern? Is there a comparison as to, if DRAM speed is n, then SSD speed is how many n and hard disk speed is how many n, just as a ballpark comparison?

    Read the article

  • Apache conf for a high-traffic CMS with backend users?

    - by Annan
    I'm in a situation where a website is going to have a high number of web users and a few backend webmasters. Webmasters will upload images (plus other high-memory tasks), and this bumps the memory allocation of the httpd child processes up to 100-150 MB. In order to stop swapping I'm currently setting MaxClients in httpd.conf to 20, but this lowers the maximum number of simultaneous requests. Will this be a problem when the website goes live? What is the best configuration? Info: Drupal 6, PHP 5, Apache 2.2 (prefork at the moment). I'm thinking about the worker MPM, two Apache instances, or a low MaxRequestsPerChild.
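    A rough sizing sketch, not a recommendation, for picking MaxClients on a prefork setup: measure the average child size and divide the RAM you are willing to give Apache by it (the process name may be apache2 rather than httpd on Debian-style systems; the budget figure below is a placeholder).

        # average resident size of the current httpd children, in MB
        ps -o rss= -C httpd | awk '{sum += $1; n++} END { if (n) printf "%d children, avg %.0f MB\n", n, sum/n/1024 }'
        # if, say, 2048 MB is reserved for Apache and a child averages 120 MB,
        # then MaxClients of roughly 2048 / 120 = 17 keeps the box out of swap

    A modest MaxRequestsPerChild can also keep children that have ballooned after an image upload from holding that memory indefinitely.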

    Read the article

  • Add entire 300 GB filesystem to Git Annex repository?

    - by Ryan Lester
    By default, I get an error that the process has too many open files. If I lift that limit manually, I get an error that I'm out of memory. For whatever reason, it seems that Git Annex in its current state is not optimised for this sort of task (adding thousands of files to a repository at once). As a possible solution, my next thought was to do something like:
        cd /
        find . -type d | git annex add --$NONRECURSIVELY
        find . -type f | git annex add
        # Need to add parent directories of each file first or adding files fails
    The problem with this solution is that, from the documentation, there doesn't seem to be a way to non-recursively add a directory in Git Annex. Is there something I'm missing, or a workaround for this? If my proposed solution is a dead end, are there other ways that people have solved this problem?
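    If the real limit is per-invocation resources rather than repository size, one workaround sketch is to raise the file-descriptor limit (where the hard limit allows) and feed the files to git annex in fixed-size batches; the batch size and limit below are placeholders, and this assumes git annex add accepts explicit file paths without the parent directories being added first.

        ulimit -n 65536                    # raise the open-file limit for this shell
        cd /
        find . -type f -print0 | xargs -0 -n 1000 git annex add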

    Read the article

  • The cache is filling up so fast

    - by CompilingCyborg
    The memory and the cache fill up quite quickly under my Linux Mint 9 (Isadora) system. I used Ubuntu and Debian before, and they did not have this issue at all. At the moment I find myself frequently typing the following command to empty the cache: "echo 3 > /proc/sys/vm/drop_caches". Is there any way around this, or do you know what's going wrong? I am only programming on this machine; no graphics, no games, nothing. Thanks in advance for your help!
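    For what it's worth, a full-looking page cache is normally harmless, because the kernel reclaims cached pages on demand; a quick check sketch to see how much memory applications actually use versus what is just cache:

        free -m            # on kernels of this era, the "-/+ buffers/cache" row is the real usage
        grep -E 'MemFree|Buffers|^Cached|SwapFree' /proc/meminfo
        # dropping caches needs root and is mostly useful for benchmarking:
        sync && echo 3 > /proc/sys/vm/drop_caches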

    Read the article
