Search Results

Search found 7118 results on 285 pages for 'sam ram san'.

  • Windows web server and SQL Server on same dedicated server

    - by asinc
    I'm currently trying to decide on the best approach to hosting a few moderate-traffic websites for production e-commerce and online applications. We'd like to move to a dedicated server, and this is the most likely machine: a quad-core Intel Core 2 Quad Q9550 at 2.83 GHz with 4 GB of Kingston RAM. It would run Windows Web Server 2008 R2 x64, and potentially also SQL Server 2008 Web Edition and SmarterMail. Since we already pay for a high-end VPS for development, testing, and shared version control, we'd like to avoid running two servers for production. We'd also like to avoid shared SQL Server hosting. We have thought of using the development server as the database server too, but that is a potential security risk, since internal and contract users develop on it.

    The questions are:

    - Do you feel there would be performance degradation from running everything on the same machine?
    - Are there significant issues to be concerned about if we do this? We understand that best practice would be separate DB and app servers, but the volume of traffic is currently not that high, and adding another server just for the database is currently too costly.
    - What are others doing out there?

    Alternatively, would you recommend going with two separate VPS servers with 2 GB of RAM each on Hyper-V instead, which would cost about the same as the single dedicated server above? Thanks!
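
    If you do co-host, the usual mitigation is to cap SQL Server's memory so it cannot starve IIS and SmarterMail. A minimal sketch, assuming the default instance and Windows authentication (the 2048 MB cap is illustrative, not a sizing recommendation for this workload):

        sqlcmd -S localhost -E -Q "EXEC sp_configure 'show advanced options', 1; RECONFIGURE; EXEC sp_configure 'max server memory (MB)', 2048; RECONFIGURE;"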

  • Why does Mass Effect 1 run so slowly on my machine if I have an XFX NVIDIA 9400 GT video card? [closed]

    - by Papuccino1
    I'm so sick and tired of having my components pass the minimum requirements of a game and then getting 15 FPS with everything on low. Shouldn't PC developers say "use at least this video card for a smooth 30 FPS"? Here are my specs:

        Windows 7
        2 GB DDR2 RAM
        XFX NVIDIA 9400 GT
        Intel Pentium D dual core, 2.8 GHz

    I should be getting at LEAST 30 FPS with everything on low, right? Please tell me what I can do to make games run as they should, or whether my video card just isn't good enough for these games. Here are the recommended requirements from the official site:

        Recommended System Requirements for Mass Effect on the PC
        Operating System: Windows XP or Vista
        Processor: 2.6+ GHz Intel or 2.4+ GHz AMD
        Memory: 2 GB RAM
        Video Card: NVIDIA GeForce 7900 GTX or higher; ATI X1800 XL series or higher
        Hard Drive Space: 12 GB
        Sound Card: DirectX 9.0c compatible sound card and drivers; 5.1 sound card recommended

    My video card is a 9400 GT; how is that worse than a 7900 GTX? Edit 2: I should note that I get poor frames even when running the game at absolute BOTTOM settings: lowest resolution, no particles, etc. Absolute zero, and still poor framerates.

  • ColdFusion on a VPS: how much JVM heap memory?

    - by Steven Filipowicz
    Recently I got a VPS server and I'm running ColdFusion. The website ran fine until it got more and more traffic and I started to encounter 'OutOfMemory' exceptions. I thought simply raising the VPS server's memory would help, but it didn't. After doing some Google searches I found a setting in the CF Admin to set the JVM heap memory. It was at the defaults: max heap size 512 MB, min heap size empty. After playing around a bit I have now set it to min 50 MB and max 200 MB; the good thing is that I'm not getting the 'OutOfMemory' exceptions anymore. So far so good!

    But with about 50 active visitors, the website starts to get slow. CPU usage is only about 8% (Windows Task Manager), and Task Manager also shows only about 30% of the 3 GB of RAM in use. So I'm thinking my values could be tweaked to use more of the RAM. Honestly, I don't understand these JVM heap settings, so I have no clue what a good setting is for me. I found a CF script that displays the memory usage; the details are:

        Heap Memory Usage - Committed          194 MB
        Heap Memory Usage - Initial            50.0 MB
        Heap Memory Usage - Max                194 MB
        Heap Memory Usage - Used               163 MB
        JVM - Free Memory                      31.2 MB
        JVM - Max Memory                       194 MB
        JVM - Total Memory                     194 MB
        JVM - Used Memory                      163 MB
        Memory Pool - Code Cache - Used        13.0 MB
        Memory Pool - PS Eden Space - Used     6.75 MB
        Memory Pool - PS Old Gen - Used        155 MB
        Memory Pool - PS Perm Gen - Used       64.2 MB
        Memory Pool - PS Survivor Space - Used 1.07 MB
        Non-Heap Memory Usage - Committed      77.4 MB
        Non-Heap Memory Usage - Initial        18.3 MB
        Non-Heap Memory Usage - Max            240 MB
        Non-Heap Memory Usage - Used           77.2 MB

        Free Allocated Memory: 30mb
        Total Memory Allocated: 194mb
        Max Memory Available to JVM: 194mb
        % of Free Allocated Memory: 16%
        % of Available Memory Allocated: 100%

    My JVM arguments are:

        -server -Dsun.io.useCanonCaches=false -XX:MaxPermSize=192m -XX:+UseParallelGC -Dcoldfusion.rootDir={application.home}/../ -Dcoldfusion.libPath={application.home}/../lib

    Can I give the JVM more memory? If so, what settings should I use? Thanks very much!!
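
    Judging by those stats (163 MB used of a 194 MB heap, PS Old Gen nearly full), the heap ceiling itself is the likely bottleneck, not the machine. A sketch of roomier settings, assuming this ColdFusion edition reads its flags from jvm.config (the path varies by CF version; 256m/1024m are starting points to test against a 3 GB box, not tuned values):

        # e.g. {cf_root}/runtime/bin/jvm.config -- back the file up first
        java.args=-server -Xms256m -Xmx1024m -XX:MaxPermSize=192m -XX:+UseParallelGC -Dsun.io.useCanonCaches=false -Dcoldfusion.rootDir={application.home}/../ -Dcoldfusion.libPath={application.home}/../lib

    Restart the ColdFusion service afterwards and watch the same memory script: if usage climbs straight back to the new ceiling under the same load, the application is holding references and heap size alone won't fix it.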

  • PHP CLI not respecting memory limit in php.ini

    - by user13743
    I am using drush, a command-line PHP app for managing a Drupal website. I am running a command that imports a lot of data, which is causing me to hit PHP's memory limit:

        PHP Fatal error: Allowed memory size of 536870912 bytes exhausted ...

    That is 512 MB if I'm doing the math correctly (536870912 / 1024 / 1024 = 512). I've changed the directive in the php.ini that drush uses:

        $> drush status
        ...
        PHP configuration : /etc/php5/cli/php.ini

        $> grep memory /etc/php5/cli/php.ini
        ; Maximum amount of memory a script may consume (128MB)
        ; http://php.net/memory-limit
        memory_limit = 1024M

    But I'm still hitting the 512 MB limit! I am running in a virtual machine, whose memory I changed from 512 to 1024 MB of RAM to allow drush to run.

        $> free -m
                     total       used       free     shared    buffers     cached
        Mem:          1010        578        431          0         14        392
        -/+ buffers/cache:        172        837
        Swap:          382          0        382

    So it says it has some 431 MB free, now that I've bumped the VM up to 1024 MB. I guess half the memory is being used to run the GUI, but I don't understand how the GUI ran okay when the VM had 512 MB of RAM. Why is the PHP CLI still hitting a 512 MB memory limit? If it were hitting a system memory limit, shouldn't it die around 431 MB, which is what the free command says is available?
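
    A 512 MB limit that survives a php.ini edit usually means something sets memory_limit at runtime: drush itself, or Drupal's settings.php via ini_set(). These standard PHP CLI switches will show what the CLI actually ends up with (the drush path below is a placeholder):

        # which ini files the CLI really loads
        php --ini

        # the limit after all ini files are parsed
        php -r 'echo ini_get("memory_limit"), PHP_EOL;'

        # one-off override for a single run, assuming drush is a PHP script at this path
        php -d memory_limit=1024M /usr/bin/drush ...

    If php -r reports 1024M but drush still dies at 512 MB, grep the site's settings.php and the drush code for ini_set('memory_limit', ...).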

  • How to find virtualization performance bottlenecks?

    - by Martin
    We have recently started moving our C++ build server(s) from real machines into VMs (MS Hyper-V). We have some performance issues that I currently have no idea how to address. We have:

        Test-Box      - desktop workstation hardware my co-worker used to set up the VM before we moved it to the actual server hardware
        Srv-Box       - the server hardware
        Test-Box-Real - Windows running directly on the Test-Box hardware
        Test-Box-VM   - Windows in a Hyper-V VM on the Test-Box hardware
        Srv-Box-Real  - Server 2008 R2 running directly on the Srv-Box hardware
        Srv-Box-VM    - Windows in a Hyper-V VM on the Srv-Box hardware, i.e. on Srv-Box-Real

    Now, the problem: we compared build times between Test-Box-Real and Test-Box-VM and they were basically equal (within about 2%). Then we moved the VM to the Srv-Box machine, and there we see significant performance degradation between Srv-Box-Real and Srv-Box-VM. That is, where we saw no difference on the test hardware, we now see major differences on the actual server hardware: builds run about 50% slower inside the VM. I should add that both the Test-Box and the Srv-Box are running only this one single VM and doing nothing else. I should also note that the "Real" OS is Win2008R2 (64-bit) and the VM-hosted OS is Win2003R2 (32-bit).

    Hardware specs:

        Srv-Box:  Intel Xeon E5640 @ 2.67 GHz (8 cores with hyperthreading on the real system and "only" 4 cores in the VM, since Hyper-V doesn't allow for hyperthreading, but the number of cores doesn't seem to explain the problem here), 16 GB RAM (4 GB assigned to the VM), virtual DELL RAID 1 (2x 450 GB HUS156045VLS600 Hitachi 15k SAS drives)
        Test-Box: Intel Xeon E3-1245 @ 3.3 GHz, 16 GB RAM, WD VelociRaptor 600 GB 10k RPM SATA

    Note again that I'm only concerned with the difference between Srv-Box-Real and Srv-Box-VM (high) vs. the difference between Test-Box-Real and Test-Box-VM (low). Why would one machine show parity when comparing VM vs. real performance, while the other (server-grade hardware, no less) shows a large disparity? (Both being Xeon CPUs...)
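
    Since C++ builds are mostly small-file I/O, the first thing worth isolating is storage: a RAID controller running write-through (battery absent or write cache disabled) can easily halve build speed versus a desktop disk with its write cache on. A sketch using Microsoft's diskspd tool, assuming you can run it in both Srv-Box-Real and Srv-Box-VM (flags per its documentation; sizes illustrative):

        diskspd.exe -c1G -d30 -r -w40 -t4 -o8 -b8K -L C:\testfile.dat

    A large gap in the reported latency percentiles between the two runs points at the virtual disk / controller cache rather than CPU scheduling.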

  • .NET Framework corrupted

    - by Samsudeen B
    Hi, we are facing a .NET Framework corruption problem for one of our clients, with the following environment:

        OS:        Windows 2008 Server SP2
        Framework: .NET Framework 3.5 SP1

        Application details:
        Database:  SQL Server 2008
        Server:    WCF-hosted web service
        Client:    WPF-based UI

    Problem: the config files inside "..\Windows\Microsoft.NET\Framework\v2.0.50727\CONFIG" are suddenly deleted, and the application no longer works. We are not able to repair .NET or run SQL Server. The only option so far has been to restore an earlier image of that machine. Any help is much appreciated. Sam
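
    Before falling back to image restores, it may be worth copying the CONFIG directory from a healthy machine at the exact same framework and service-pack level; this assumes the files were stock, and anything customized (e.g. machine.config edits) would need merging by hand:

        rem run on a known-good Windows 2008 SP2 / .NET 3.5 SP1 box; \\brokenserver is a placeholder
        robocopy "C:\Windows\Microsoft.NET\Framework\v2.0.50727\CONFIG" "\\brokenserver\c$\Windows\Microsoft.NET\Framework\v2.0.50727\CONFIG" /E

    Enabling object-access auditing on that folder should also reveal what is deleting the files in the first place.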

  • Running emacs in GNU Screen overrides .emacs settings for [home] key binding in FreeBSD 8.2

    - by javanix
    If I use the following .emacs file, I am able to go to the beginning/end of the current line using the Home/End keys as I would expect:

        (keyboard-translate ?\C-h ?\C-?)
        (add-to-list 'load-path "/home/sam/programs/go/go/misc/emacs/" t)
        (require 'go-mode-load)
        (global-set-key [kp-home] 'beginning-of-line) ; [Home]
        (global-set-key [home] 'beginning-of-line)    ; [Home]
        (global-set-key [kp-end] 'end-of-line)        ; [End]
        (global-set-key [end] 'end-of-line)           ; [End]

    However, if I open up a screen session it does not function like this (the Home key still brings me to the beginning of the buffer for some reason). Here is my .screenrc file, if anyone can spot anything funky in there:

        term xterm
        defutf8 on
        defflow off
        startup_message off

        # terminfo and termcap for nice 256 color terminal
        # allow bold colors - necessary for some reason
        attrcolor b ".I"

        # tell screen how to set colors. AB = background, AF = foreground
        termcapinfo xterm 'Co#256:AB=\E[48;5;%dm:AF=\E[38;5;%dm'

        # use bash as the default login shell
        defshell -bash
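
    One likely culprit: "term xterm" makes screen advertise itself as xterm while still emitting its own escape sequences, so Emacs decodes Home differently inside screen. A sketch that remaps the raw sequences screen typically sends, assuming Emacs 23+ (where input-decode-map exists); verify the actual sequence your setup sends with C-q followed by the Home key:

        (unless window-system
          (define-key input-decode-map "\e[1~" [home])  ; sequence screen usually sends for Home
          (define-key input-decode-map "\e[4~" [end]))  ; and for End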

  • Determine which active directory object is referred to by GUID in a Directory Synchronization Error Report

    - by Michael Shimmins
    We have Active Directory synchronization set up between our on-premises AD server and Microsoft-hosted Exchange (Business Productivity Online Services). I've started getting a daily error report that details an error for a specific AD user, but it references the user by GUID. I can't find any info on how to translate that object GUID into something meaningful so I can find and fix the problem. The error is reported as:

        Error 005: Unable to set the alias for this object in Microsoft Online Services because either the primary SMTP address, the e-mail nickname, or the SAM account name in the local Active Directory contains an invalid character.

    in reference to the object GUID:

        CN={8443cbb4-5199-49f0-9529-ce965430dca6}

    How can I translate that object GUID into a friendly object name?
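
    With the Active Directory PowerShell module (shipped with RSAT on Server 2008 R2), Get-ADObject accepts an objectGUID directly as its identity, so the GUID resolves to a friendly name in one call:

        Import-Module ActiveDirectory
        Get-ADObject -Identity '8443cbb4-5199-49f0-9529-ce965430dca6' -Properties mail, sAMAccountName

    That returns the distinguished name of the offending object, after which you can inspect its SMTP address, nickname, and SAM account name for the invalid character.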

  • Intel Core i7 QuadCore on HP Pavilion dv7 Overheating Issues

    - by kellax
    I bought a brand new HP notebook, an HP Pavilion dv7-6b21em BeatsAudio edition. The notebook is about two months old and has a pretty nasty overheating problem. I mainly use it for development, but I do play some games. The disturbing thing is that the computer is loud on pretty simple tasks. Here are the specs:

        CPU: Intel Core i7-2670QM quad core (8 threads) @ 2.20 GHz
        RAM: 8 GB (2x 4 GB @ 1066)
        HDD: 1 TB 7200 rpm
        GPU: ATI Radeon HD 6770M, 1 GB dedicated DDR
        OS:  Windows 7 64-bit Enterprise

    I have an external 22" Samsung SyncMaster S24B300 monitor running on the VGA port. CPU heat statistics:

        Platform: rPGA 988B (Socket G2)
        Frequency: ca. 3000 MHz
        VID: 1.1809 - 1.2059 V
        Revision: D2, CPUID: 0x206A7
        TDP: 45.0 W, Lithography: 32 nm
        Tj. Max: 100°C, Power: 4.5 - 5.9 W
        Core #0: 63°C    (load on all cores is about 0 to 2%)
        Core #1: 65°C
        Core #2: 66°C
        Core #3: 67°C

    I opened the notebook; the fan is working fine and there is no dust, but the fan is still pretty loud right now, even though all I have open is Firefox. When I run a game, the temperature jumps to a whopping 90-97°C. It has not shut down due to overheating yet, but the loud fan is pretty annoying considering I'm not really doing anything stressful. Is there anything I can do to fix this? Is it maybe a BIOS issue? I have all drivers updated to the latest. I have very few background processes running, consuming barely 2 GB of RAM and about 2% of CPU. I had it serviced and they said there is nothing wrong with it, but I feel that a notebook that costs 1.2k euros can't be like this.

  • Large virtual memory size of ElasticSearch JVM

    - by wfaulk
    I am running a JVM to support ElasticSearch. I am still working on sizing and tuning, so I left the JVM's max heap size at ElasticSearch's default of 1 GB. After putting data in the database, I find that the JVM's process is showing 50 GB in SIZE in top output. It appears that this is actually causing performance problems on the system; other processes are having trouble allocating memory.

    In asking the ElasticSearch community, they suggested that it's "just" filesystem caching. In my experience, filesystem caching doesn't show up as memory used by a particular process. Of course, they may have been talking about something other than the OS's filesystem cache, maybe something that the JVM or ElasticSearch itself is doing on top of the OS. But they also said that it would be released if needed, and that didn't seem to be happening.

    So can anyone help me figure out how to tune the JVM, or maybe ElasticSearch itself, to not use so much RAM? The system is Solaris 10 x86 with 72 GB RAM. The JVM is "Java(TM) SE Runtime Environment (build 1.7.0_45-b18)".
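
    What top reports as SIZE is virtual address space, and ElasticSearch's default mmapfs store maps index files into the process, so 50 GB of SIZE next to a 1 GB heap is consistent with memory-mapped index files rather than a heap leak. Two things worth checking (the store setting is from the 0.x/1.x-era docs; confirm it against your version before relying on it):

        # see what the address space is actually made of (Solaris)
        pmap -x <es_pid> | sort -n -k2 | tail

        # elasticsearch.yml: read indexes via plain NIO instead of mmap
        index.store.type: niofs

    If the big mappings turn out to be index files, RSS rather than SIZE is the number that reflects real memory pressure on the other processes.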

  • ZFS - zpool ARC cache plus L2ARC benchmarking

    - by jemmille
    I have been doing lots of I/O testing on a ZFS system I will eventually use to serve virtual machines. I thought I would try adding SSDs for use as cache to see how much faster I can get the read speed. I also have 24 GB of RAM in the machine that acts as ARC. vol0 is 6.4 TB and the cache disks are 60 GB SSDs. The pool is as follows:

        pool: vol0
        state: ONLINE
        scrub: none requested
        config:

                NAME                       STATE     READ WRITE CKSUM
                vol0                       ONLINE       0     0     0
                  c1t8d0                   ONLINE       0     0     0
                cache
                  c3t5001517958D80533d0    ONLINE       0     0     0
                  c3t5001517959092566d0    ONLINE       0     0     0

    The issue is that I'm not seeing any difference with the SSDs installed. I've tried bonnie++ benchmarks and some simple dd commands to write a file then read the file, run before and after adding the SSDs. I've ensured the file sizes are at least double my RAM, so there is no way it can all get cached locally. Am I missing something here? When am I going to see benefits of having all that cache? Am I simply not seeing them under these circumstances? Are the benchmark programs not good for testing the effect of cache because of the way (and what) they write and read?
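
    L2ARC only pays off for a read working set that gets re-read after being evicted from ARC, and it warms deliberately slowly, so a single-pass dd or bonnie++ run will barely touch it. On Solaris-derived systems the arcstats kstat shows whether the cache devices are doing anything (standard arcstats field names):

        # L2ARC size, hits and misses
        kstat -p zfs:0:arcstats | grep -E 'l2_(size|hits|misses)'

    Re-reading the same data set repeatedly, then re-running the benchmark, should show l2_hits climbing if the cache devices are actually being used.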

  • ATI radeon graphics card and screen freeze problem

    - by Thomas
    I recently upgraded my machine with new hardware components: a Gigabyte motherboard, an Intel i3 at 3.6 GHz, 4 GB of RAM, and an ATI Radeon 4350 1 GB graphics card. My OS is Windows XP. When I try to play Call of Duty: Black Ops the screen freezes, and when I try to play other games, like Medal of Honor, the game suddenly closes after 15 or 20 minutes. I am not able to find out where the problem is: whether it is in the RAM or the graphics card. I asked a few hardware people, and one of them told me I should install Windows 7 rather than Windows XP. Is that true? Please help me understand the problem, and also tell me what I should do to fix it. Please discuss in detail. Thanks in advance. Update: yes, I already installed the latest driver for the ATI Radeon 4350, but the problem still persists. Do I need to install Windows 7 instead of Windows XP because my processor is an Intel i3?

  • What can be the causes of an HTTP server crash?

    - by mithunmo
    Hello, I am using WAMP server on Windows XP:

        Apache 2.2.11
        MySQL 5.1.36 (InnoDB engine)
        PHP 5.3.0

    I observe that my WAMP server crashes in the following scenarios:

    - if I use a low-end PC (low processor speed and low RAM);
    - after making some changes to the httpd.conf file, e.g. changing the Allow from IP address (but here it crashes only once and then starts to work fine);
    - random crashes.

    Crash log:

        szAppName : httpd.exe    szAppVer : 2.2.11.0
        szModName : php5ts.dll   szModVer : 5.3.0.0    offset : 0000c309
        C:\DOCUME~1\blrcom\LOCALS~1\Temp\WERc677.dir00\httpd.exe.mdmp
        C:\DOCUME~1\blrcom\LOCALS~1\Temp\WERc677.dir00\appcompat.txt

    My questions:

    - Can high CPU utilization or low RAM also cause the HTTP server to crash?
    - Excessive file reading, as in every 10 seconds?
    - Unlimited script execution time? I have set the maximum execution time in my PHP script to 0, as the script sometimes has to execute for 2-3 days. Is there any way to avoid this?
    - Access to the database: should we take a lock before reading and writing?

    Can these be the reasons for random WAMP server crashes, or is it some other programming error? Please guide me. Regards, Mithun
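
    Given that the faulting module is php5ts.dll and one script is allowed to run for days, the long-running job is the first suspect; such jobs are generally safer run under the PHP CLI than inside Apache. A minimal sketch of the pattern, with hypothetical fetch_next_batch()/process() helpers standing in for your logic (gc_collect_cycles() exists as of PHP 5.3):

        <?php
        // run via: php.exe long_job.php  (CLI, not through Apache)
        set_time_limit(0);                    // no execution-time limit for this process
        while ($batch = fetch_next_batch()) { // hypothetical: read the next chunk of work
            process($batch);                  // hypothetical: do the work
            unset($batch);                    // release references eagerly
            gc_collect_cycles();              // reclaim cyclic garbage between batches
        }

    Moving multi-day work out of httpd.exe means a PHP fault can no longer take the web server down with it.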

  • How do I improve my incremental-backup performance?

    - by Alistair Bell
    I'm currently using the traditional rsync+cp -al method to create incremental/snapshot backups of our server tree. The backups are going onto a pair of eight-disk towers connected to the backup machine (a Sandy Bridge machine with 16 GB of RAM, running CentOS 5.5) via four eSATA connections (four disks per connection). Each disk is a regular 2 TB disk, so we have 32 TB of disk space connected to the backup machine. We're backing up about 20 TB of data on the servers with this.

    The problem is that each daily backup is taking more than 24 hours, and the real time-killer isn't the actual rsync, but the time it takes to perform a cp -al of the tree locally on the backup machine. It's taking more than 12 hours just to make the shadow copy of the tree, and as far as I can tell the performance backlog is at the disk (top shows the cp using a lot of RAM but not a lot of CPU, and mostly in uninterruptible-sleep state). We have the server data split into four major volumes (and a few minor ones), and each of these backups runs in parallel (with some offsets in the cron to try to get some disks' cp done first). There are two volumes on the backup drive, both striped LVM volumes of 16 TB each.

    So obviously I need to improve the performance, because it's unusable as it stands. The first question is: when CentOS 6 comes out, with support for btrfs, will making snapshots of subvolumes with btrfs substantially increase this performance? The second is: is there a way, with ext3 or something else supported in CentOS 5 or 6, to 'encourage' it to put the directories/inodes in one part of a volume (which could happen to be the part that's on an SSD, via LVM) and the files in another? That would presumably solve the problem, but I don't know of ways to hint ext3 like that.
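
    One structural change that removes the cp -al pass entirely is rsync's --link-dest option, which builds the hard-link farm and copies changed files in a single traversal. A sketch with placeholder paths:

        # daily.1 is yesterday's snapshot; daily.0 is being created now
        rsync -a --delete \
              --link-dest=/backup/vol1/daily.1 \
              server:/data/vol1/ \
              /backup/vol1/daily.0/

    Unchanged files become hard links against daily.1 during the same pass that transfers the changes, so the separate 12-hour metadata-only cp walk disappears.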

  • Apache reaching MaxClients and locking the server

    - by Rodrigo Sieiro
    Hi. I currently have an Apache2 server running with mpm-prefork and mod_php on an OpenVZ VPS with 512 MB real / 1024 MB burstable RAM (no swap). After running some tests, I found that the maximum process size Apache gets is 23 MB, so I've set MaxClients to 25 (23 MB x 25 = 575 MB, OK for me).

    I decided to run some load tests on my server, and the results left me puzzled. I'm using ab on my desktop machine, requesting the main page from a WordPress blog. When I run ab with 24 concurrent connections, everything seems fine: sure, CPU goes up and free RAM goes down, but the result is about 2-3 s response time per request. But if I run ab with 25 concurrent connections (my server limit), Apache just hangs after a couple of seconds. It starts processing the requests, then it stops responding, CPU goes back to 100% idle, and ab times out. The Apache log says it reached MaxClients. When this happens, Apache keeps itself locked up with 25 running processes (they're all in "W" if I check server status), and only after the TimeOut setting (in my case set to 45) do the processes start to die and the server start responding again.

    My question: is that expected behaviour? Why does Apache just die when it reaches MaxClients? If it works with 24 connections, shouldn't it work with 25, just taking maybe more time to respond to each request and queueing up the rest? It sounds kind of strange to me that any kid running ab can single-handedly kill a web server just by setting the concurrent connections to the server's MaxClients.
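
    This behaviour is consistent with keep-alive: ab holds its 25 connections open, so every prefork slot sits in the 'W'/keep-alive state until TimeOut expires, and nothing is left to accept new requests. A sketch of settings that bound the damage (Apache 2.2 prefork directives; values illustrative for a 512 MB VPS):

        <IfModule mpm_prefork_module>
            StartServers          5
            MinSpareServers       5
            MaxSpareServers      10
            ServerLimit          25
            MaxClients           25
            MaxRequestsPerChild 500
        </IfModule>
        KeepAlive          On
        KeepAliveTimeout    3
        Timeout            20

    A short KeepAliveTimeout releases a worker a few seconds after its client goes quiet, instead of holding it for the full 45-second Timeout.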

  • Bad Performance when SQL Server hits 99% Memory Usage

    - by user15863
    I've got a server whose 8 GB of RAM is reported as 99% used. When I restart SQL Server, usage drops to about 5%, but it gradually builds back up to 99% over about 2 hours. Yet the sqlserver process is reported as using only about 100k of RAM, and generally never goes much above or below that number. In fact, if I add up all the processes in my Task Manager, they barely scratch the surface of my total available memory (yet Task Manager still shows 99% memory usage with "All processes shown"). It appears that SQL Server has a huge memory leak going on, but it's not reporting it. The server ran fine for nearly two years, with this only starting to manifest itself in the last 3-4 weeks. Has anyone seen this, or have any insight into the problem?

    EDIT: when the server hits 99%, performance goes downhill. All queries to the server, apps, etc. slow to a crawl. Restarting the service makes things zippy again, until two hours have passed and the server hits 99% once more.
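
    When SQL Server runs with the Lock Pages in Memory privilege (or AWE on 32-bit), the buffer pool does not appear in Task Manager's per-process numbers, which matches what's described: SQL Server grows to nearly all RAM by design unless capped. A sketch (the 6144 MB figure is illustrative for an 8 GB box; leave room for the OS and anything else on the machine):

        -- how much SQL Server actually holds right now
        SELECT cntr_value / 1024 AS total_server_memory_mb
        FROM sys.dm_os_performance_counters
        WHERE counter_name = 'Total Server Memory (KB)';

        -- cap the buffer pool
        EXEC sp_configure 'show advanced options', 1; RECONFIGURE;
        EXEC sp_configure 'max server memory (MB)', 6144; RECONFIGURE;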

  • Server Performance

    - by sb12
    I know very little about performance tuning of servers, so I thought I'd put this up here as I start some research on it, just to get some direction. I am in the process of migrating from my old server to a new one; both are 64-bit machines. One is a few years old, the other brand new (PowerEdge R410).

        Old server: 2 CPUs, 3.4 GHz Pentiums, 8 GB of RAM, Fedora 11 currently installed
        New server: 16 CPUs, 3.2 GHz Xeon, 16 GB of RAM, CentOS 6.2 installed, RAID 10 (no RAID on the old one)

    Both servers currently have the same MySQL database with the same data migrated. I wrote a Perl script that simply steps through each row of a table in the database (about 18000 rows) and updates a value in that row; every row in the table is updated. Out of curiosity I ran this Perl script on both machines, just to see how the new server would perform vs. the old one, and it produced an interesting result: the old server was twice as fast as the new one. Looking at the databases, both are configured exactly the same (the new one being a dump of the old one). Has anyone any ideas why this would be, given the hardware gap between them? As I said, I'm about to start some digging, but thought I'd put this up here to maybe get some good direction. Many thanks in advance.
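
    A row-by-row update loop mostly benchmarks commit latency rather than server speed: with autocommit on, each of the 18000 UPDATEs forces a log flush, so the result hinges on whether the storage acknowledges flushes from a volatile disk cache (old server) or waits for the RAID 10 array (new server, if its controller write cache is off or write-through). A quick way to test that theory from the Perl side, assuming the script uses DBI, is to batch the loop into one transaction:

        $dbh->begin_work;                    # one transaction instead of 18000 autocommits
        for my $row (@rows) {
            $sth->execute($new_value, $row->{id});
        }
        $dbh->commit;                        # single log flush at the end

    If the gap collapses, look at the R410's RAID controller write-cache policy (and MySQL's innodb_flush_log_at_trx_commit) rather than CPU or RAM.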

  • process and memory issue on linux server

    - by zapping
    I need some assistance in analyzing Apache and PHP processes running on a Linux server: an 8-core Intel processor with 4 GB of RAM. When the website on it runs, top displays:

          PID USER      PR  NI  VIRT  RES  SHR S %CPU %MEM    TIME+ COMMAND
        23459 username1  16   0  151m  27m 8388 S 11.3  0.7  0:11.71 php5
        23730 username1  16   0  151m  28m 8388 S 11.3  0.7  0:03.87 php5
        23458 username1  16   0  151m  28m 8388 S  3.0  0.7  0:19.20 php5
        16202 mysql      15   0  459m  38m 4624 S  0.7  1.0 62:33.81 mysqld
        24141 nobody     15   0  311m 5832 2304 S  0.3  0.1  0:00.03 httpd

    Why does the command column say php5 when the website is accessed? Both Apache and PHP were preconfigured, so I am not sure what was done there. I tried setting up the same site and DB on a different server, but on it the process always shows httpd, never php5. The site uses a MySQL DB.

    The problem is that the server load goes up to about 5.x when the website is accessed by about 16 users. When the free -m command is given, the output shows:

                     total       used       free     shared    buffers     cached
        Mem:          3941       3727        213          0        236       2734
        -/+ buffers/cache:        756       3184
        Swap:         4095          0       4095

    Lots of memory seems to be in cache and free memory is low. Even when the website is not accessed, that is, leaving it pretty much idle for about two days, the free memory shows just about 190 MB. When the site is accessed, the free memory goes down to about 90 MB, then increases to about 150 MB; it always seems to remain at just about 200 MB. Is this somehow related to the server load showing 5.x? Will adding some more RAM resolve the load issue?
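
    Separate php5 processes in top usually mean PHP is running as CGI/FastCGI (for example under suexec or suPHP, common on preconfigured hosts) rather than as mod_php inside httpd, which would also explain the difference from your other server. Quick checks, assuming a standard Red Hat-style Apache layout:

        # is mod_php loaded into Apache at all?
        httpd -M 2>/dev/null | grep -i php

        # how is the .php handler declared?
        grep -ri 'php' /etc/httpd/conf/httpd.conf /etc/httpd/conf.d/ | grep -i 'handler\|wrapper'

    Also note that the large "cached" figure in free -m is reclaimable page cache, not a leak: the -/+ buffers/cache line (756 MB used) is the real application footprint, so a load of 5.x with 16 users points at CPU/IO in those PHP processes rather than at RAM exhaustion.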

  • Linux Kernel Packet Forwarding Performance

    - by Bob Somers
    I've been using a Linux box as a router for some time now. Nothing too fancy: just enabling forwarding in the kernel, turning on masquerading, and setting up iptables to poke a few holes in the firewall.

    Recently a friend of mine pointed out a performance problem. Single TCP connections seem to experience very poor performance; you have to open multiple parallel TCP connections to get decent speed. For example, I have a 10 Mbit internet connection. When I download a file from a known-fast source using something like the DownThemAll! extension for Firefox (which opens multiple parallel TCP connections), I can get it to max out my downstream bandwidth at around 1 MB/s. However, when I download the same file using the built-in download manager in Firefox (which uses only a single TCP connection), it starts fast and the speed tanks until it tops out around 100 KB/s to 350 KB/s.

    I've checked the internal network and it doesn't seem to have any problems; everything goes through a 100 Mbit switch. I've also run iperf both internally (from the router to my desktop) and externally (from my desktop to a Linux box I own out on the net) and haven't seen any problems: it tops out around 1 MB/s like it should. Speedtest.net also reports 10 Mbit speeds. The load on the Linux machine is around 0.00, 0.00, 0.00 all the time, and it's got plenty of free RAM. It's an older laptop with a Pentium M 1.6 GHz processor and 1 GB of RAM. The internal network is connected to the built-in Intel NIC, and the cable modem is connected to a Netgear FA511 32-bit PCMCIA network card.

    I think the problem is with the packet forwarding in the router, but I honestly am not sure where the problem could be. Is there anything that would substantially slow down a single TCP stream?
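
    A classic culprit for this exact single-stream symptom is the TCP window never growing, either because window scaling is off somewhere or because an offload on the PCMCIA card is stalling the stream. Some standard checks on the router (ethtool offload names vary by driver and kernel; skip any your card doesn't support):

        # window scaling should be enabled
        sysctl net.ipv4.tcp_window_scaling

        # look for drops/overruns on both interfaces
        ip -s link show eth0
        ip -s link show eth1

        # try disabling segmentation offloads on the FA511
        ethtool -K eth1 tso off gso off

    A tcpdump capture of one slow download, viewed in Wireshark's TCP stream graph, will show directly whether the window plateaus or retransmissions dominate.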

  • Swap space maxing out - JVM dying

    - by travega
    I have a server running three WordPress instances, MySQL, Apache, and the Play framework 2.0 on a 64 MB initial and max heap. If I increase the max heap of the JVM that Play is running in by even 16 MB, I see the 128 MB of swap space steadily fill up until the JVM dies. I notice that it is only when I am plugging away at the WordPress sites that the JVM dies; I assume this is because the JVM is not asking for memory at the time, so it gets collected. I notice that when I restart Apache I reclaim about half of my swap and RAM. So is there some way I can configure Apache to consume less memory? Also, what could be causing the swap space to get so heavily thrashed with just 16 MB added to the max heap size of the JVM?

        Server: Ubuntu 12.04
        RAM:    408 MB
        Swap:   128 MB

        Apache mods:
        alias.conf alias.load auth_basic.load authn_file.load authz_default.load
        authz_groupfile.load authz_host.load authz_user.load autoindex.conf autoindex.load
        cgi.load deflate.conf deflate.load dir.conf dir.load env.load mime.conf mime.load
        negotiation.conf negotiation.load php5.conf php5.load proxy_ajp.load
        proxy_balancer.conf proxy_balancer.load proxy.conf proxy_connect.load
        proxy_ftp.conf proxy_ftp.load proxy_http.load proxy.load reqtimeout.conf
        reqtimeout.load rewrite.load setenvif.conf setenvif.load status.conf status.load
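
    On a 408 MB box, prefork Apache with mod_php is the usual memory hog: each child carries a full PHP interpreter, and a burst of WordPress traffic spawns enough children to push the JVM into swap until the kernel's OOM killer takes it. Some checks and trims (Ubuntu commands; trim the module list to what you actually have loaded):

        # confirm the kernel is OOM-killing the JVM
        dmesg | grep -i -e oom -e 'killed process'

        # measure a typical Apache child's real footprint (RSS, in KB)
        ps -C apache2 -o rss=,comm= | sort -n | tail

        # drop modules WordPress doesn't need (you load the whole proxy stack plus cgi/autoindex)
        a2dismod proxy proxy_http proxy_ftp proxy_ajp proxy_balancer proxy_connect cgi autoindex
        service apache2 restart

    Then cap MaxClients in the prefork config to roughly the RAM you can spare for Apache divided by the per-child RSS, so the worst case fits without touching swap.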

  • PHP running too slow, always showing "504 Gateway Time-out"

    - by komase
    PHP is running too slowly, always showing "504 Gateway Time-out". My server spec:

        Dual-core Atom 330 CPU
        2 GB RAM
        nginx with PHP via FastCGI
        eAccelerator
        CPU: 74.3% idle
        RAM used: 350 MB of 2 GB

    I have lots of sites on this server, with cron jobs running every minute, all the time; in some minutes, two or three cron jobs even run at the same time. All my cron jobs are heavy; they usually run for more than one minute. My nginx.conf had become so big that nginx refused to start because of too many sites in it (solved by increasing server_names_hash_max_size), and I'm planning to add more sites to the server.

    Now, opening my website always shows a 504 Gateway Time-out. I have tested many eAccelerator and PHP settings, but the 504 Gateway Time-out still happens. It disappears when cron is disabled. I have no idea: is this because of insufficient processor power? What should I do? Upgrade my processor?

    Added: this is top for my CPU just now:

        Cpu(s): 17.5%us, 3.8%sy, 0.1%ni, 71.6%id, 6.9%wa, 0.1%hi, 0.1%si, 0.0%st
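
    A 504 from nginx means the FastCGI backend didn't answer within the timeout, which fits heavy every-minute crons starving a small pool of PHP workers. Two levers that usually help; the nginx directives are standard, and the nice/ionice wrapper assumes the crons are plain PHP CLI scripts (path is a placeholder):

        # nginx: give slow PHP responses more headroom
        location ~ \.php$ {
            include fastcgi_params;
            fastcgi_pass 127.0.0.1:9000;
            fastcgi_read_timeout 300s;
        }

        # crontab: run the heavy jobs at the lowest CPU and IO priority
        * * * * * nice -n 19 ionice -c3 php /path/to/cron_job.php

    Keeping the cron jobs on the CLI binary, outside the FastCGI pool that serves web requests, matters more than either setting.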

  • Atlassian Crucible very slow on large repository

    - by Mitch Lindgren
    Hi everyone, my company has been running a trial of Atlassian Crucible for some months now. For repositories where it's working properly, users have given very positive feedback about the tool. The problem I'm having is that we have several different projects, each with its own repository, and some of those repositories are very large. One repository in particular has a large number of branches and probably around 9,000 files per branch. Browsing that repository in Crucible is extremely slow.

    Crucible is running on a CentOS VM. The VM has 4 GB of RAM, and I've set Crucible's maximum at 3 GB, of which it is currently using 2 GB. I've brought this up in a support ticket with Atlassian, and they suggested the following:

        "In particular because you have a rather large SVN repository you will likely find that Fisheye will be creating a large index file on disk. To help improve performance a few things you can try are:
        - Increasing the memory available to Fisheye (see the document above).
        - Migrating to an external database: confluence.atlassian.com/display/FISHEYE/Migrating+to+an+External+Database
        - Excluding files and directories from your index that aren't needed: confluence.atlassian.com/display/FISHEYE/Allow+(Process)"

    (Sorry for not hyperlinking; don't have the rep.) I've tried all of these things to an extent, but so far none have helped greatly. I was originally running Crucible on a Windows box with 2 GB of RAM using the built-in HSQL DB. Moving to MySQL on CentOS brought a performance increase for some repositories, and made Crucible much more stable, but did not seem to help much with our biggest repository. There are only so many files/branches I can exclude from indexing while maintaining the tool's usefulness. That being the case, does anyone have any tips on how to speed up Crucible on large repositories, without investing in insanely powerful hardware? Thanks!

    Edit: to clarify, since I didn't mention it explicitly above, I am using FishEye.
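
    For what it's worth, FishEye/Crucible reads its JVM sizing from the FISHEYE_OPTS environment variable at startup, so it's worth confirming the 3 GB is actually taking effect. A sketch using Atlassian's documented variable (the start script name may differ by version):

        export FISHEYE_OPTS="-Xms512m -Xmx3072m"
        $FISHEYE_INST/bin/start.sh

    If the heap ceiling is confirmed live and the big repository still crawls, the win is more likely in indexing scope (excluding branches via the allow/process patterns) than in more memory.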

  • Ripping CD Audio simultaneously from 2 drives on one PC via USB or PATA - rip accuracy preserved?

    - by Rob
    I'm considering ripping (reading) audio from CDs using two drives simultaneously to speed up the process, i.e. two CDs at a time rather than one. Are there any issues with achieving maximum rip accuracy? In general, I wondered if people have tried this, and whether the simultaneous streams from both rip activities would overload the host machine and cause packet loss or read retries, resulting in a sub-standard CD-DA audio rip. If it just means each rip is slightly slower (but still faster than doing one rip after another) while still at maximum accuracy, that is OK for me. I will be using dBpoweramp to rip the CDs, converting to the lossless FLAC format.

    The two machines I intend to do it on:

    - A Toshiba NB100 1.6 GHz Atom netbook, 2 GB RAM, running Windows XP Home, with one external LG DVD/CD burner and one external LG Blu-ray burner attached via USB 2.0, ripping to the machine's 5400 rpm internal hard drive. It rips from one CD drive very well, more than adequately; it is a nippy, fast little machine for its specification.
    - A desktop PC running Windows 7 Home Premium, with an MSI P4M900M2-L / MS-7255 v2.0 motherboard, 1.86 GHz Intel Core 2 Duo E6320, 7200 rpm hard drive, and 2 GB RAM, with an internal LG PATA DVD/CD burner (master) and a Philips DVD/CD burner (slave) on the same PATA bus (perhaps separate buses would be another option to consider here).

    Thoughts?
