Search Results

Search found 7359 results on 295 pages for 'triple channel ram'.

Page 103/295 | < Previous Page | 99 100 101 102 103 104 105 106 107 108 109 110  | Next Page >

  • Remoting connection over TCP

    - by Voyager Systems
    I've got a remoting server and client built. The server is set up as such: BinaryServerFormatterSinkProvider serverProv = new BinaryServerFormatterSinkProvider(); serverProv.TypeFilterLevel = TypeFilterLevel.Full; BinaryClientFormatterSinkProvider clientProv = new BinaryClientFormatterSinkProvider(); IDictionary props = new Hashtable(); props["port"] = port; TcpChannel channel = new TcpChannel( props, clientProv, serverProv ); ChannelServices.RegisterChannel( channel, false ); RemotingConfiguration.RegisterWellKnownServiceType( typeof( Controller ), "Controller", WellKnownObjectMode.Singleton ); The client is set up as such: ChannelServices.RegisterChannel( new TcpChannel( 0 ), false ); m_Controller = (Controller)RemotingServices.Connect( typeof( Controller ), "tcp://" + ip + ":2594/Controller" ); When I try to connect to the server from the same computer with the IP specified as 'localhost', it connects fine. However, when I try to connect from a remote computer, not on the LAN, given the server's IP address, I receive the following exception on the client after a long wait: "A connection attempt failed because the connected party did not properly respond after a period of time." How can I resolve this? I'm sure it's a configuration problem, because the ports are forwarded properly on the server and there is no firewall active. Thanks

    Read the article

  • Detecting TCP dropout over an unreliable network

    - by yx
    I am doing some experimentation over an unreliable radio network (home brewed) using very rudimentary Java socket programming to transfer messages back and forth between the end nodes. The setup is as follows: Node A --- Relay Node --- Node B One problem I am constantly running into is that somehow the connection drops out and neither Node A nor B knows that the link is dead, and yet each continues to transmit data. The TCP connection does not time out either. I have added in a heartbeat message that causes a timeout after a while, but I still would like to know the underlying cause of why TCP does not time out. Here are the options I am enabling when setting up a socket: channel.socket().setKeepAlive(false); channel.socket().setTrafficClass(0x08); // for max throughput This behavior is strange, since it is totally different from what I see on a wired network. On a wired network, I can simulate a disconnected connection by pulling out the Ethernet cord; however, once I plug the cord back in, the connection is re-established and messages begin to be passed through once more. On the radio network, the connection is never re-established, and once it silently dies, the messages never resume. Is there some other Java implementation detail or socket setting I can use? Also, why am I seeing this behavior in the first place? And yes, before anyone says anything, I know TCP is not the preferred choice over an unreliable network, but in this case I wanted to ensure no packet loss.
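
    As an aside (not from the original post): TCP on its own will not notice a dead link until its retransmission timers give up, which can take many minutes, so an application-level timeout is usually the practical remedy. A minimal sketch using a plain blocking java.net.Socket; the host, port, and the 10/30-second values are illustrative assumptions, not settings from the question:

        import java.io.IOException;
        import java.io.InputStream;
        import java.net.InetSocketAddress;
        import java.net.Socket;

        public class RadioLink {
            public static Socket open(String host, int port) throws IOException {
                Socket socket = new Socket();
                // Fail the connect itself after 10 s instead of hanging.
                socket.connect(new InetSocketAddress(host, port), 10_000);
                // Ask the OS to probe idle connections; note that default kernel
                // keepalive timers are long (often ~2 hours before the first probe).
                socket.setKeepAlive(true);
                // A blocking read that sees no data (and no heartbeat) for 30 s
                // throws SocketTimeoutException, so the application notices a
                // dead link instead of waiting indefinitely.
                socket.setSoTimeout(30_000);
                return socket;
            }

            public static int read(Socket socket, byte[] buf) throws IOException {
                InputStream in = socket.getInputStream();
                return in.read(buf); // throws SocketTimeoutException after 30 s of silence
            }
        }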

    Read the article

  • Best way to run multiple queries per second on database, performance wise?

    - by Michael Joell
    I am currently using Java to insert and update data multiple times per second. Never having used databases with Java, I am not sure what is required, and how to get the best performance. I currently have a method for each type of query I need to do (for example, update a row in a database). I also have a method to create the database connection. Below is my simplified code. public static void addOneForUserInChannel(String channel, String username) throws SQLException { Connection dbConnection = null; PreparedStatement ps = null; String updateSQL = "UPDATE " + channel + "_count SET messages = messages + 1 WHERE username = ?"; try { dbConnection = getDBConnection(); ps = dbConnection.prepareStatement(updateSQL); ps.setString(1, username); ps.executeUpdate(); } catch(SQLException e) { System.out.println(e.getMessage()); } finally { if(ps != null) { ps.close(); } if(dbConnection != null) { dbConnection.close(); } } } And my DB connection private static Connection getDBConnection() { Connection dbConnection = null; try { Class.forName(DB_DRIVER); } catch (ClassNotFoundException e) { System.out.println(e.getMessage()); } try { dbConnection = DriverManager.getConnection(DB_CONNECTION, DB_USER,DB_PASSWORD); return dbConnection; } catch (SQLException e) { System.out.println(e.getMessage()); } return dbConnection; } This seems to be working fine for now, with about 1-2 queries per second, but I am worried that once I expand and it is running many more, I might have some issues. My questions: Is there a way to have a persistent database connection throughout the entire run time of the process? If so, should I do this? Are there any other optimizations that I should do to help with performance? Thanks
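
    As an illustration of the persistent-connection idea (not code from the original post): a connection pool keeps a small number of connections open for the lifetime of the process and hands one out per query, instead of opening and closing a connection every time. A minimal sketch assuming a pooling library such as HikariCP is on the classpath; the JDBC URL, credentials, and pool size are placeholder assumptions:

        import java.sql.Connection;
        import java.sql.PreparedStatement;
        import java.sql.SQLException;
        import com.zaxxer.hikari.HikariConfig;
        import com.zaxxer.hikari.HikariDataSource;

        public class MessageCounter {
            // One pool for the lifetime of the process; connections are reused
            // rather than opened and closed on every query.
            private static final HikariDataSource DS = createPool();

            private static HikariDataSource createPool() {
                HikariConfig cfg = new HikariConfig();
                cfg.setJdbcUrl("jdbc:mysql://localhost:3306/chat"); // placeholder URL
                cfg.setUsername("DB_USER");                         // placeholder
                cfg.setPassword("DB_PASSWORD");                     // placeholder
                cfg.setMaximumPoolSize(10);                         // illustrative size
                return new HikariDataSource(cfg);
            }

            public static void addOneForUserInChannel(String channel, String username) throws SQLException {
                // As in the original method, the table name comes from 'channel',
                // so that value must be trusted input.
                String sql = "UPDATE " + channel + "_count SET messages = messages + 1 WHERE username = ?";
                try (Connection conn = DS.getConnection();          // borrowed from the pool
                     PreparedStatement ps = conn.prepareStatement(sql)) {
                    ps.setString(1, username);
                    ps.executeUpdate();
                } // closing the Connection returns it to the pool
            }
        }

    The same idea applies with other pools (DBCP, c3p0) or a container-managed DataSource; the point is that connections are opened once and reused, not opened per query.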

    Read the article

  • cocos2d - how to draw a bottle sprite with dynamically changing water level

    - by Oliver
    I am trying to draw a (2d) sprite in cocos2d showing a bottle. The bottle shall be able to have a dynamic water level (i.e. the amount of water in the bottle can change over the lifetime of the sprite). I am wondering how to do this. I currently have a PNG file of the empty bottle. I adjusted the alpha channel of that PNG, so when rendering the sprite I can draw a blue rectangle and render the bottle texture over it. That gives the impression of the water being inside the bottle. However, the bottle's shape is not a rectangle itself of course, so the water can be seen outside the bounds of the bottle. I can change the bottle image so that only the bottle itself is transparent and set the "outside world" to an opaque color and alpha value, but that in turn prevents the "world background" from being visible in that area. I simply don't have a clue how to realize this in a sane manner. Do I really have to read every pixel of the bottle image, identify which pixels are "inside" the bottle, and then draw the water pixel by pixel? There must be an easier way, right? ;) Any best practices for these kinds of tasks? Edit: see the picture below to make it somewhat clearer what I am talking about ;) http://i47.tinypic.com/10rqww0.png

    Read the article

  • How to reduce the Number of threads running at instance in jetty server ?

    - by Thirst for Excellence
    I would like to reduce the number of live threads on the server, in order to reduce the bandwidth consumed by data transfer (data pulled at application launch time) from my application to its clients. I have made the settings below. Are these settings enough to reduce the bandwidth consumption on the Jetty server? Please help me, anyone. 1) In jetty.xml: <Set name="ThreadPool"> <New class="org.eclipse.jetty.util.thread.QueuedThreadPool"> <Set name="minThreads">1</Set> <Set name="maxThreads">50</Set> </New> </Set> 2) In services-config.xml: <channel-definition id="my-longpolling-amf" class="mx.messaging.channels.AMFChannel"> <endpoint url="http://MyIp:8400/blazeds/messagebroker/amflongpolling" class="flex.messaging.endpoints.AMFEndpoint"/> <properties> <polling-enabled>true</polling-enabled> <polling-interval-seconds>1</polling-interval-seconds> <wait-interval-millis>60000</wait-interval-millis> <client-wait-interval-millis>1</client-wait-interval-millis> <max-waiting-poll-requests>50</max-waiting-poll-requests> </properties> </channel-definition>
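
    For comparison (an aside, not from the original post), the same thread-pool bounds can be set through Jetty's embedded Java API. A minimal sketch assuming a Jetty 9+ style setup; the port and pool sizes are illustrative, Jetty needs a handful of threads for its own acceptors and selectors (so a minimum of 1 is usually too low), and the pool bounds concurrency rather than bandwidth:

        import org.eclipse.jetty.server.Server;
        import org.eclipse.jetty.server.ServerConnector;
        import org.eclipse.jetty.util.thread.QueuedThreadPool;

        public class SmallPoolServer {
            public static void main(String[] args) throws Exception {
                // Bound the request thread pool: at most 50 threads, shrinking
                // toward 8 when idle (illustrative values, not recommendations).
                QueuedThreadPool pool = new QueuedThreadPool(50, 8);
                Server server = new Server(pool);

                ServerConnector connector = new ServerConnector(server);
                connector.setPort(8400); // port taken from the endpoint URL above
                server.addConnector(connector);

                server.start();
                server.join();
            }
        }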

    Read the article

  • Gmail Sync on Android phone

    - by sunocky
    Android has the Gmail push feature, which means a new message arrives in the mailbox without checking or refreshing the mailbox. As I understand it, the sync process is like this: 1) The user turns on sync 2) There will be an alert message, and the sync flag in the Gmail DB for this device will be set to True 3) When a new email reaches the Gmail server, it will check the device's sync value; if it is True, it sends the email. OK, here is what I don't quite understand about how exactly it works. For a WiFi or cell-signal connection, does the phone keep a TCP socket open listening to the Gmail server, or, when a new email arrives, does the server send an SMS alert to the phone, which then opens the data channel to fetch the email? Do the two types of connection use different approaches? My second question is which method takes priority. Say you are in the middle of receiving data (emails) and the phone suddenly connects to a wireless network: will the data socket be closed and then reopened over WiFi? What is the behavior when the connection flips between the carrier's data channel and WiFi? I have also downloaded the source code; does anyone know which part I should look into in order to answer my questions? I found a folder called "email" inside the folder "package"; should I be looking at its code? I know I asked quite a few questions here, but I'd appreciate it if you know the answer to any of them. Thanks very much!

    Read the article

  • Windows web server and SQL Server on same dedicated server

    - by asinc
    I'm currently trying to decide on the best approach to hosting a few moderate-traffic websites for production e-commerce and online applications. We'd like to move to a dedicated server and are looking at this as the most likely machine: quad-core Intel Core2Quad Q9550 processor, 2.83 GHz x 4, 4 GB Kingston RAM. This would run Windows Web Server 2008 R2 x64 and potentially also SQL Server 2008 Web edition and SmarterMail server. Given that we already pay for a high-end VPS for development, testing, and shared version control, we'd like to avoid going with two servers for production. We'd like to avoid shared SQL Server hosting, and have thought of using the development server as the database server as an option too - but that is potentially a security risk due to its use for development by internal and contract users. The questions are: - Do you feel there would be performance degradation by running this on the same machine? - Are there significant issues to be concerned about if we do this? We understand that best practice would be to run separate DB and app servers, but the volume of traffic is currently not that high, and adding another server just for the database is currently too costly. - What are others doing out there? Alternatively, would you recommend instead going with two separate VPS servers with 2GB RAM each on Hyper-V, which would be about the same cost as the single dedicated server above? Thanks!

    Read the article

  • Why does Mass Effect 1 run so slow on my machine if I have an XFX NVidia 9400GT video card? [closed]

    - by Papuccino1
    I'm so sick and tired of having my components pass the minimum requirements of a game and then getting 15 FPS with everything on low. Shouldn't PC developers say 'use at least this video card for a smooth 30 FPS'? Here are my specs: Windows 7, 2GB DDR2 RAM, XFX NVIDIA 9400GT, Intel Pentium D dual core 2.8GHz. I should be getting at LEAST 30 FPS with everything on low, right? Please tell me what I can do to make games run as they should, or is my video card not good enough for these games? Here are the recommended requirements from the official site: Recommended System Requirements for Mass Effect on the PC Operating System: Windows XP or Vista Processor: 2.6+GHZ Intel or 2.4+GHZ AMD Memory: 2 Gigabyte Ram Video Card: NVIDIA GeForce 7900 GTX or higher. ATI X1800 XL series or higher Hard Drive Space: 12 Gigabytes Sound Card: DirectX 9.0c compatible sound card and drivers – 5.1 sound card recommended My video card is a 9400GT; how is that worse than a 7900GTX? :S Edit 2: I should note that I get poor frame rates even when running the game at the absolute BOTTOM settings: lowest resolution, no particles, etc. Absolutely everything at zero, and still poor framerates.

    Read the article

  • Coldfusion on VPS, how much JVM heap memory?

    - by Steven Filipowicz
    Recently I got a VPS server and I'm running ColdFusion. The website was running fine until it got more and more traffic and I started to encounter 'OutOfMemory' exceptions. I thought simply to raise the memory of the VPS server, but this didn't help. After doing some Google searches I found a setting in the CF Admin settings to set the JVM heap memory. It was at the default: max heap size 512MB, and min heap size was empty. After playing around a bit I have now set it to min 50MB and max 200MB; the good thing is that I'm not getting the 'OutOfMemory' exceptions anymore. So far so good! But with about 50 active visitors on the website, the website starts to get slow. The CPU usage is only about 8% (Windows Task Manager), and the Task Manager also shows only about 30% of the 3GB RAM in use. So I'm thinking that my values could be tweaked to use more of the RAM. Honestly, I don't understand these JVM heap memory settings, so I have no clue what a good setting is for me. I found a CF script that displays the memory usage; the details are: Heap Memory Usage - Committed 194 MB Heap Memory Usage - Initial 50.0 MB Heap Memory Usage - Max 194 MB Heap Memory Usage - Used 163 MB JVM - Free Memory 31.2 MB JVM - Max Memory 194 MB JVM - Total Memory 194 MB JVM - Used Memory 163 MB Memory Pool - Code Cache - Used 13.0 MB Memory Pool - PS Eden Space - Used 6.75 MB Memory Pool - PS Old Gen - Used 155 MB Memory Pool - PS Perm Gen - Used 64.2 MB Memory Pool - PS Survivor Space - Used 1.07 MB Non-Heap Memory Usage - Committed 77.4 MB Non-Heap Memory Usage - Initial 18.3 MB Non-Heap Memory Usage - Max 240 MB Non-Heap Memory Usage - Used 77.2 MB Free Allocated Memory: 30mb Total Memory Allocated: 194mb Max Memory Available to JVM: 194mb % of Free Allocated Memory: 16% % of Available Memory Allocated: 100% My JVM arguments are: -server -Dsun.io.useCanonCaches=false -XX:MaxPermSize=192m -XX:+UseParallelGC -Dcoldfusion.rootDir={application.home}/../ -Dcoldfusion.libPath={application.home}/../lib Can I give the JVM more memory? If so, what settings should I use? Thanks very much!!
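
    For reference (not from the original post): the heap bounds correspond to the standard -Xms (initial) and -Xmx (maximum) JVM flags that sit in the ColdFusion JVM arguments alongside the options already listed. A hedged example, with values that are purely illustrative and would need to leave room for the OS and anything else running on the 3GB box:

        -server -Xms256m -Xmx1024m -XX:MaxPermSize=192m -XX:+UseParallelGC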

    Read the article

  • PHP CLI not respecting memory limit in php.ini

    - by user13743
    I am using drush, which is a command-line PHP app to manage a Drupal website. I am running a command to import a lot of data, which is causing me to hit PHP's memory limit. PHP Fatal error: Allowed memory size of 536870912 bytes exhausted ... Which is 512MB if I'm doing the math correctly (536870912 / 1024 / 1024 = 512). I've changed the directive in the php.ini that drush uses: $> drush status ... PHP configuration : /etc/php5/cli/php.ini $> grep memory /etc/php5/cli/php.ini ; Maximum amount of memory a script may consume (128MB) ; http://php.net/memory-limit memory_limit = 1024M But I'm still hitting the 512 MB limit! I am running in a virtual machine, whose memory allocation I changed from 512 to 1024 MB of RAM to allow drush to run. $> free -m total used free shared buffers cached Mem: 1010 578 431 0 14 392 -/+ buffers/cache: 172 837 Swap: 382 0 382 So it says it has some 431 MB free, now that I've bumped the VM up to 1024 MB. I guess half the memory is being used to run the GUI, but I don't understand how the GUI was running okay when the VM had 512 MB of RAM. Why is the PHP CLI still hitting a 512 MB memory limit? If it were hitting a system memory limit, shouldn't it die around 431 MB, which is what the free command says is available?

    Read the article

  • How to find virtualization performance bottlenecks?

    - by Martin
    We have recently started moving our C++ build server(s) from real machines into VMs (MS Hyper-V). We have some performance issues that I currently have no idea how to address. We have: Test-Box - this is a piece of desktop workstation hardware my co-worker used to set up the VM before we moved it to the actual server hardware. Srv-Box - this is the server hardware. Test-Box-Real - this is Windows running directly on the Test-Box HW. Test-Box-VM - this is Windows in a Hyper-V VM on the Test-Box HW. Srv-Box-Real - this is Server2008R2 running on the Srv-Box HW. Srv-Box-VM - this is Windows running in a Hyper-V VM on the Srv-Box HW, i.e. on Srv-Box-Real. Now, the problem is that we compared build times between Test-Box-Real and Test-Box-VM and they were basically equal (within about 2%). Then we moved the VM to the Srv-Box machine, and what we saw there is a significant performance degradation between Srv-Box-Real and Srv-Box-VM; that is, where we saw no difference on the test HW, we now see major differences in performance on the actual server HW. (Builds are about ~50% slower inside the VM.) I should add that both the Test-Box and the Srv-Box are only running this one single VM and doing nothing else. I should also note that the "Real" OS is Win2008R2 (64-bit) and the VM-hosted OS is Win2003R2 (32-bit). Hardware specs: Srv-Box: Intel XEON E5640 @ 2.67GHz (this means 8 cores with hyperthreading on the real system and "only" 4 cores on the VM, since Hyper-V doesn't allow for hyperthreading, but the number of cores doesn't seem to explain the problem here.) 16GB RAM (we have 4GB assigned to the VM) Virtual DELL RAID 1 (2x 450GB HUS156045VLS600 Hitachi 15k SAS drives) Test-Box: Intel XEON E3-1245 @ 3.3GHz 16GB RAM WD VelociRaptor 600GB 10k RPM SATA Note again that I'm only concerned with the difference between Srv-Box-Real and Srv-Box-VM (high) vs. the difference seen between Test-Box-Real and Test-Box-VM (low). Why would one machine have parity when comparing VM vs. real performance while the other (server-grade HW, no less) has a large disparity? (Both being XEON CPUs...)

    Read the article

  • DPM 2010 RC Mailbox Recovery Fails

    - by ITGuy24
    I am testing DPM 2010 with Exchange 2010. I am attempting to restore a single mailbox from a previous backup to a Recovery Database. I created and mounted the Recovery Database on the Exchange 2010 server and set the overwrite property. When I run the restore for the mailbox and point it to the recovery database, I get the following error: The recovery jobs for Exchange Mailbox Database MailboxDatabase01 that started at with the destination of EXCHANGE2010.domain.com, have completed. Most or all jobs failed to recover the requested data. (ID 3111) DPM encountered an error while performing an operation for E:\DatabaseFiles\MailboxDatabase01.edb on EXCHANGE2010.domain.com (ID 2033 Details: The process cannot access the file because it is being used by another process (0x80070020)) MailboxDatabase01 is one of our MDBs and not the RDB I set up for the recovery. I am confused why it is even trying to access this, as I have triple-checked that the recovery is pointed at the RDB. Any idea what I am doing wrong?

    Read the article

  • Intel Core i7 QuadCore on HP Pavilion dv7 Overheating Issues

    - by kellax
    I bought a brand new HP notebook: HP Pavilion dv7-6b21em BeatsAudio edition. The notebook is about 2 months old and has a pretty nasty overheating problem. I mainly use it for development; however, I do play some games. The disturbing thing is that the computer is loud on pretty simple tasks. Here are the specs: CPU: Intel Core i7-2670QM quad core (8 threads) @ 2.20 GHz RAM: (8GB) 2x 4GB @ 1066 HDD: 1TB 7200 GPU: ATI Radeon HD 6770M 1GB dedicated DDR OS: Windows 7 64-bit Enterprise I have an external monitor running on the VGA port, a 22" Samsung SyncMaster S24B300. CPU heat statistics: Platform: rPGA 988B (Socket G2), Frequency: approx. 3000 MHz, VID: 1.1809 - 1.2059 V, Revision: D2, CPUID: 0x206A7, TDP: 45.0 Watts, Lithography: 32 nm. Heat: Tj. Max: 100°C, Power 4.5 - 5.9 Watts. Core #0: 63°C, Core #1: 65°C, Core #2: 66°C, Core #3: 67°C, with the load on all of them at about 0 to 2%. I opened the notebook; the fan is working fine and there is no dust, but still, right now the fan is pretty loud even though all I have open is Firefox. When I run a game the heat jumps to a whopping 90-97°C. It has not shut down due to overheating yet, but the loud fan is pretty annoying considering I'm not really doing anything stressful. Is there anything I can do to fix this? Is it maybe a BIOS issue? I have all drivers updated to the latest. I have very few background processes running, consuming barely 2GB of RAM and about 2% of CPU. I had it serviced and they said there is nothing wrong with it. But I feel that a notebook that costs 1.2k Euros can't be like this.

    Read the article

  • Large virtual memory size of ElasticSearch JVM

    - by wfaulk
    I am running a JVM to support ElasticSearch. I am still working on sizing and tuning, so I left the JVM's max heap size at ElasticSearch's default of 1GB. After putting data in the database, I find that the JVM's process is showing 50GB in SIZE in top output. It appears that this is actually causing performance problems on the system; other processes are having trouble allocating memory. When I asked the ElasticSearch community, they suggested that it's "just" filesystem caching. In my experience, filesystem caching doesn't show up as memory used by a particular process. Of course, they may have been talking about something other than the OS's filesystem cache, maybe something that the JVM or ElasticSearch itself is doing on top of the OS. But they also said that it would be released if needed, and that didn't seem to be happening. So can anyone help me figure out how to tune the JVM, or maybe ElasticSearch itself, to not use so much RAM? The system is Solaris 10 x86 with 72GB RAM. The JVM is "Java(TM) SE Runtime Environment (build 1.7.0_45-b18)".
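
    One likely contributor (an assumption, not something stated in the question) is that ElasticSearch memory-maps its index files: mapped files count toward a process's virtual size (SIZE/VSZ in top) even though the pages live in the OS page cache rather than the Java heap, and they can be reclaimed under pressure. A minimal sketch of the effect in Java, with a hypothetical file path:

        import java.io.RandomAccessFile;
        import java.nio.MappedByteBuffer;
        import java.nio.channels.FileChannel;

        public class MmapSizeDemo {
            public static void main(String[] args) throws Exception {
                // Map (up to) 1 GB of a file read-only. The mapping enlarges the
                // process's virtual size immediately, but physical memory is only
                // consumed as the OS faults pages in, and those pages are reclaimable.
                try (RandomAccessFile raf = new RandomAccessFile("/data/big.segment", "r"); // hypothetical path
                     FileChannel ch = raf.getChannel()) {
                    long len = Math.min(ch.size(), 1L << 30);
                    MappedByteBuffer buf = ch.map(FileChannel.MapMode.READ_ONLY, 0, len);
                    System.out.println("Mapped " + buf.capacity() + " bytes; check SIZE/VSZ in top now");
                    Thread.sleep(60_000); // keep the process alive so it can be inspected
                }
            }
        }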

    Read the article

  • ZFS - zpool ARC cache plus L2ARC benchmarking

    - by jemmille
    I have been doing lots of I/O testing on a ZFS system I will eventually use to serve virtual machines. I thought I would try adding SSDs for use as cache to see how much faster I can get the read speed. I also have 24GB of RAM in the machine that acts as ARC. vol0 is 6.4TB and the cache disks are 60GB SSDs. The pool is as follows: pool: vol0 state: ONLINE scrub: none requested config: NAME STATE READ WRITE CKSUM vol0 ONLINE 0 0 0 c1t8d0 ONLINE 0 0 0 cache c3t5001517958D80533d0 ONLINE 0 0 0 c3t5001517959092566d0 ONLINE 0 0 0 The issue is I'm not seeing any difference with the SSDs installed. I've tried bonnie++ benchmarks and some simple dd commands to write a file and then read it back. I have run benchmarks before and after adding the SSDs. I've ensured the file sizes are at least double my RAM, so there is no way it can all get cached locally. Am I missing something here? When am I going to see the benefits of having all that cache? Am I simply not going to under these circumstances? Are the benchmark programs not good for testing the effect of cache because of the way (and what) they write and read?

    Read the article

  • what can be causes of http server crash?

    - by mithunmo
    Hello, I am using WAMP server on Windows XP: Apache 2.2.11, MySQL 5.1.36 (InnoDB engine), PHP 5.3.0. I observe that my WAMP server crashes in the following scenarios: if I use a low-end PC (low processor speed and low RAM); after making some changes to the httpd.conf file, for example changing the Allow from IP address (but here it crashes only once and then starts to work fine); and random crashes. CRASH LOG szAppName : httpd.exe szAppVer : 2.2.11.0 szModName : php5ts.dll szModVer : 5.3.0.0 offset : 0000c309 C:\DOCUME~1\blrcom\LOCALS~1\Temp\WERc677.dir00\httpd.exe.mdmp C:\DOCUME~1\blrcom\LOCALS~1\Temp\WERc677.dir00\appcompat.txt My questions: Can high CPU utilization or low RAM also cause the HTTP server to crash? Can excessive file reading, as in every 10 seconds? Can unlimited script execution time? I have set the maximum execution time in the PHP script to 0, as my script sometimes has to run for 2-3 days; is there any way to avoid this? Database access: should we use a lock before reading and writing? Can these be the reasons for the random WAMP server crashes, or is it some other programming error? Please guide me. Regards, Mithun

    Read the article

  • SketchUp: Weird regions in sketch

    - by Josh M.
    I'm using SketchUp 8.0.15158. My sketch has weird regions/lines in it that I can't explain or get rid of. I've redrawn almost the whole thing and the lines persist! See here, highlighted in yellow: These dotted lines only show up when I triple-click the back face to select the whole object. Here's what the sketch looks like without selections: What are these lines and how can I get rid of them? They are causing weird issues that I can't seem to get around.

    Read the article

  • ATI radeon graphics card and screen freeze problem

    - by Thomas
    Recently I upgraded my machine with new hardware components. My motherboard is a Gigabyte, the processor an Intel i3 3.6 GHz, the RAM 4 GB, and the graphics card an ATI Radeon 4350 1 GB. My OS installed is Windows XP. When I try to play Call of Duty: Black Ops, the screen freezes, and when I try to play another game like Medal of Honor, the game suddenly closes after 15 or 20 minutes. I am not able to find out the problem, whether it is with the RAM or the graphics card. I asked a few hardware people, and one of them told me that I should install Windows 7 rather than Windows XP. Is that true? Please help me understand the problem and also tell me what I should do to fix it. Please discuss in detail. Thanks in advance. Update: yes, I have already installed the latest driver for the ATI Radeon 4350, but the problem still persists. Do I need to install Windows 7 instead of Windows XP because my processor is an Intel i3?

    Read the article

  • How do I improve my incremental-backup performance?

    - by Alistair Bell
    I'm currently using the traditional rsync+cp -al method to create incremental/snapshot backups of our server tree. The backups are going onto a pair of eight-disk towers connected to the backup machine (a Sandy Bridge machine with 16 GB of RAM, running CentOS 5.5) via four eSATA connections (four disks per connection). Each disk is a regular 2 TB disk, so we have 32 TB of disk space connected to the backup machine. We're backing up about 20 TB of data on the servers with this. The problem is that each daily backup is taking more than 24 hours, and the real time-killer isn't the actual rsync, but the time it takes to perform a cp -al of the tree locally on the backup machine. It's taking more than 12 hours just to make the shadow copy of the tree, and as far as I can tell the performance backlog is at the disk (top shows the cp using a lot of RAM but not a lot of CPU and mostly in uninterruptible-sleep state) We have the server data split into four major volumes (and a few minor ones), and each of these backups runs in parallel (with some offsets in the cron to try to get some disks' cp done first). There are two volumes on the backup drive, both striped LVM volumes of 16 TB each. So obviously I need to improve the performance because it's unusable as it stands. The first question is: when CentOS 6 comes out, with support for btrfs, will making snapshots of subvolumes with btrfs substantially increase this performance? The second is: is there a way, with ext3 or something else supported in CentOS 5 or 6, to 'encourage' it to put the directories/inodes in one part of a volume (which could happen to be the part that's on an SSD, via LVM) and the files in another? That would presumably solve the problem, but I don't know of ways to hint ext3 like that.

    Read the article

  • Bad Performance when SQL Server hits 99% Memory Usage

    - by user15863
    I've got a server that reports 8 GB of RAM used up at 99%. When I restart SQL Server, it drops down to about 5% usage, but gradually builds back up to 99% over about 2 hours. When I look at the sqlserver process, it's reported as only using 100K of RAM, and generally never goes above or below that number by very much. In fact, if I add up all the processes in my Task Manager, it's barely scratching the surface of my total available (yet Task Manager still shows 99% memory usage with "All processes shown"). It appears that SQL Server has a huge memory leak going on, but it's not reporting it. The server has run fine for nearly two years, with this only starting to manifest itself in the last 3-4 weeks. Anyone seen this or have any insight into the problem? EDIT When the server hits 99%, performance goes downhill. All queries to the server, apps, etc. slow to a crawl. Restarting the service makes things zippy again, until 2 hours have passed and the server hits 99% once again.

    Read the article

  • Apache reaching MaxClients and locking the server

    - by Rodrigo Sieiro
    Hi. I currently have an Apache2 server running with mpm-prefork and mod_php on an OpenVZ VPS with 512M real / 1024M burstable RAM (no swap). After running some tests, I found that the maximum process size Apache gets is 23M, so I've set MaxClients to 25 (23M x 25 = 575 MB, ok for me). I decided to run some load tests on my server, and the results left me puzzled. I'm using ab on my desktop machine requesting the main page from a wordpress blog. When I run ab with 24 concurrent connections, everything seems fine. Sure, CPU goes up, free RAM goes down, and the result is about 2-3s response time per request. But if I run ab with 25 concurrent connections (my server limit), Apache just hangs after a couple of seconds. It starts processing the requests, then it stops responding, CPU goes back to 100% idle and ab times out. The Apache log says it reached MaxClients. When this happens, Apache keeps itself locked up with 25 running processes (they're all in "W" if I check server status), and only after the TimeOut setting do the processes start to die and the server starts responding again (in my case it's set to 45). My question: is that expected behaviour? Why does Apache just die when it reaches MaxClients? If it works with 24 connections, shouldn't it work with 25, just taking maybe more time to respond to each request and queueing up the rest? It sounds kinda strange to me that any kid running ab can single-handedly kill a webserver just by setting the concurrent connections to the server's MaxClients.

    Read the article

  • Server Performance

    - by sb12
    I know very little about performance tuning of servers etc., so I thought I'd put this up here as I start some research on it, just to get some direction. I am in the process of migrating from my old server to a new one - both are 64-bit machines. One is a few years old, the other brand new (PowerEdge R410). The old server spec is: 2 CPUs, 3.4GHz Pentiums, 8G of RAM, Fedora 11 currently installed. The new server spec is: 16 CPUs, 3.2GHz Xeon, 16G of RAM, CentOS 6.2 installed. Also, RAID 10 is on the new server - no RAID on the old one. Both servers currently have the same database (MySQL) with the same data migrated. I wrote a Perl script that simply steps through each row of a table in the database (about 18000 rows) and updates a value in that row. Every row in the table is updated. Out of curiosity I ran this Perl script on both machines, just to see how the new server would perform vs. the old one, and it produced interesting results: the old server completed twice as fast as the new one. Looking at the database, both are configured exactly the same (the new one being a dump of the old one...)... Does anyone have any ideas why this would be, given the hardware gap between the two? As I said, I'm about to start some digging, but I thought I'd put this up here to maybe get some good direction... Many thanks in advance.

    Read the article

  • process and memory issue on linux server

    - by zapping
    I need some assistance in analyzing the Apache and PHP processes running on a Linux server. It's an 8-core Intel processor with 4GB of RAM. When the website on it is running, top displays this: PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND 23459 username1 16 0 151m 27m 8388 S 11.3 0.7 0:11.71 php5 23730 username1 16 0 151m 28m 8388 S 11.3 0.7 0:03.87 php5 23458 username1 16 0 151m 28m 8388 S 3.0 0.7 0:19.20 php5 16202 mysql 15 0 459m 38m 4624 S 0.7 1.0 62:33.81 mysqld 24141 nobody 15 0 311m 5832 2304 S 0.3 0.1 0:00.03 httpd Why does the command show php5 when the website is accessed? Both Apache and PHP were preconfigured, so I'm not sure what was done there. I tried setting up the same site and DB on a different server, but on it the process always shows as httpd, not php5. The site uses a MySQL DB. The problem is that the server load seems to go up to about 5.x when the website is accessed by about 16 users. When the free -m command is given, the output shows: total used free shared buffers cached Mem: 3941 3727 213 0 236 2734 -/+ buffers/cache: 756 3184 Swap: 4095 0 4095 A lot of memory seems to be in cache and free memory is low. Even when the website is not accessed, leaving it very much idle for about 2 days, the free memory showed just 190 MB. When the site is accessed, the free memory seems to drop to about 90 MB and then climb back to about 150 MB. It always seems to remain at just about 200 MB. Is this somehow related to the server load showing 5.x? Will adding some more RAM resolve the load issue?

    Read the article

  • Linux Kernel Packet Forwarding Performance

    - by Bob Somers
    I've been using a Linux box as a router for some time now. Nothing too fancy, just enabling forwarding in the kernel, turning on masquerading, and setting up iptables to poke a few holes in the firewall. Recently a friend of mine pointed out a performance problem. Single TCP connections seem to experience very poor performance. You have to open multiple parallel TCP connections to get decent speed. For example, I have a 10 Mbit internet connection. When I download a file from a known-fast source using something like the DownThemAll! extension for Firefox (which opens multiple parallel TCP connections) I can get it to max out my downstream bandwidth at around 1 MB/s. However, when I download the same file using the built-in download manager in Firefox (uses only a single TCP connection) it starts fast and the speed tanks until it tops out around 100 KB/s to 350 KB/s. I've checked the internal network and it doesn't seem to have any problems. Everything goes through a 100 Mbit switch. I've also run iperf both internally (from the router to my desktop) and externally (from my desktop to a Linux box I own out on the net) and haven't seen any problems. It tops out around 1 MB/s like it should. Speedtest.net also reports 10 Mbits speeds. The load on the Linux machine is around 0.00, 0.00, 0.00 all the time, and it's got plenty of free RAM. It's an older laptop with a Pentium M 1.6 GHz processor and 1 GB of RAM. The internal network is connected to the built in Intel NIC and the cable modem is connected to a Netgear FA511 32-bit PCMCIA network card. I think the problem is with the packet forwarding in the router, but I honestly am not sure where the problem could be. Is there anything that would substantially slow down a single TCP stream?

    Read the article
