Search Results

Search found 440 results on 18 pages for 'spike williams'.

Page 15/18 | < Previous Page | 11 12 13 14 15 16 17 18  | Next Page >

  • Temporary boot problem after thunderstorm - likely causes?

    - by alastairs
    The village where I live sat under a thundercloud for most of Friday, and we suffered a few power fluctuations (specifically, what seemed to be split-second outages). When I got back home from work, I found that my PCs had shut down during one of these outages. When I went to boot one of them back up, I couldn't get anything to display on screen, nor did the boot seem to complete correctly. I tried a number of things - unplugging different bits of hardware, swapping graphics adaptors, etc. - to no avail. I thought I was looking at a fried motherboard or CPU. Power seemed to be distributed correctly to the peripherals (the drives all appeared to be working), so I figured it couldn't be the PSU. Eventually I unplugged it from the mains and left it overnight (approximately 12 hours unplugged). I tried it again this morning, and it booted up correctly. Woo-hoo! I have all my equipment protected by surge-protected power strips, so I don't think a spike caused these problems. Obviously it has something to do with the power fluctuations, and maybe the PSU in the problem machine got itself confused somehow. The questions are, for future reference and to help people with similar problems: What are the likely causes of the boot failure I experienced? Is a UPS a simple and cost-effective solution, or might other things help prevent this happening in future? What UPS can you recommend (my budget is limited)?

    Read the article

  • Tomcat performing terribly for no apparent reason

    - by John
    We're running a game application .WAR on Tomcat 6 on an Amazon EC2 server with an 8-core processor and 7 GB of RAM. The application uses a MySQL database hosted on Amazon RDS. This Facebook application takes ages to access when a mere 20-30 users are playing it - a big difference from 1-2 users. The entire .WAR is ~4 MB, with all static content hosted elsewhere. The server has never been close to running out of RAM. CPU utilization has never been higher than 13.5-14%, even with the ~500 users that brought everything to a standstill. The thread count and thread pools aren't close to being maxed out. I raised maxThreads but it didn't make a noticeable difference. My theory is that Tomcat can only use one processor core, which would explain why it slowed to a halt even though CPU usage sat stably at 13-14% during the activity spike. But I'm struggling to understand why it would only use one CPU core. There is no processor cap in server.xml. The app contains several servlets (4 or 5). There is no mention of SingleThreadModel in the Java code. What could be causing the application to run extremely slowly? If there are only 1-5 people on the application it runs fine. With 20-30 people it's barely reachable. (See the diagnostic sketch after this entry.)

    Read the article
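
    A flat CPU around 13-14% while the user count climbs usually means request threads are blocked on something shared rather than hitting a one-core ceiling. The sketch below is a hypothetical diagnostic of my own, not part of the original application: it prints the core count the JVM sees plus a thread dump, so many HTTP worker threads waiting on the same lock (or on slow RDS round trips) become visible.

        import java.lang.management.ManagementFactory;
        import java.lang.management.ThreadInfo;
        import java.lang.management.ThreadMXBean;

        // Hypothetical diagnostic, not part of the original app: print the cores the JVM
        // sees and a thread dump. Many worker threads BLOCKED/WAITING on the same lock
        // points to a shared bottleneck (a synchronized block, one DB connection, slow
        // database round trips) rather than a one-core limit.
        public class ThreadDiagnostics {
            public static void main(String[] args) {
                System.out.println("Cores visible to the JVM: "
                        + Runtime.getRuntime().availableProcessors());

                ThreadMXBean mx = ManagementFactory.getThreadMXBean();
                for (ThreadInfo info : mx.dumpAllThreads(true, true)) {
                    System.out.printf("%-40s %-13s lock=%s%n",
                            info.getThreadName(), info.getThreadState(), info.getLockName());
                }
            }
        }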

  • RabbitMQ message consumers stop consuming messages

    - by Bruno Thomas
    Hi Server Fault, our team is in a spike sprint to choose between ActiveMQ and RabbitMQ. We made two little producer/consumer spikes sending an object message with an array of 16 strings, a timestamp, and 2 integers. The spikes are OK on our dev machines (messages are consumed properly). Then came the benchmarks. We first noticed that sometimes, on our machines, when we were sending a lot of messages the consumer would hang. It was still there, but the messages were accumulating in the queue. Then we went to the bench platform: a cluster of 2 RabbitMQ machines (4 cores/3.2 GHz, 4 GB RAM), load balanced by a VIP; one to six consumers running on the RabbitMQ machines, saving the messages in a MySQL DB (same type of machine for the DB); and 12 producers running on 12 AS (Tomcat) machines, driven by JMeter running on another machine. The load is about 600 to 700 HTTP requests per second on the servlets, which produce the same rate of RabbitMQ messages. We noticed that sometimes consumers hang (well, they are not blocked, but they don't consume messages anymore). We can see this because each consumer saves around 100 msg/sec to the database, so when one stops consuming, the overall messages saved per second in the DB falls by the same ratio (if, say, 3 consumers stop, we fall from around 600 msg/sec to 300 msg/sec). During that time the producers are fine and still produce at the JMeter rate (around 600 msg/sec). The messages stay in the queues and are taken by the consumers that are still "alive". We load all the servlets with the producers first, then launch the consumers one by one, checking that the connections are OK, then run JMeter. We are sending messages to one direct exchange, and all consumers listen on one persistent queue bound to the exchange. That point is major for our choice. Have you seen this with RabbitMQ? Do you have an idea of what is going on? (A minimal consumer sketch follows this entry.) Thank you for your answers.

    Read the article
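
    For what it's worth, a consumer that goes quiet while the queue keeps filling is often an acknowledgement/prefetch issue on the client rather than a broker fault. The following is only a minimal sketch assuming the RabbitMQ Java client, with placeholder host and queue names (the original spike code is not shown in the question); a bounded basicQos prefetch with explicit acks turns a stalled consumer into a visible backlog of unacked messages instead of a silent hang.

        import com.rabbitmq.client.*;

        // Minimal consumer sketch; the host and queue names are placeholders, not the
        // original setup. A bounded prefetch plus explicit acks makes a stalled consumer
        // visible: unacked messages pile up instead of the consumer silently going quiet.
        public class QueueConsumer {
            public static void main(String[] args) throws Exception {
                ConnectionFactory factory = new ConnectionFactory();
                factory.setHost("rabbit-host");              // placeholder host
                Connection conn = factory.newConnection();
                final Channel channel = conn.createChannel();

                channel.basicQos(50);                        // at most 50 unacknowledged messages
                channel.basicConsume("work-queue", false, new DefaultConsumer(channel) {
                    @Override
                    public void handleDelivery(String tag, Envelope env,
                                               AMQP.BasicProperties props, byte[] body)
                            throws java.io.IOException {
                        // save the message to MySQL here; ack only after the insert succeeds
                        channel.basicAck(env.getDeliveryTag(), false);
                    }
                });
            }
        }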

  • I am under DDoS. What can I do?

    - by Falcon Momot
    This is a Canonical Question about DoS and DDoS mitigation. I found a massive traffic spike on a website that I host today; I am getting thousands of connections a second and I see I'm using all 100 Mbps of my available bandwidth. Nobody can access my site because all the requests time out, and I can't even log into the server because SSH times out too! This has happened a couple of times before, and each time it has lasted a couple of hours and gone away on its own. Occasionally, my website has another distinct but related problem: my server's load average (which is usually around 0.25) rockets up to 20 or more, and nobody can access my site, just as in the other case. It also goes away after a few hours. Restarting my server doesn't help; what can I do to make my site accessible again, and what is happening? Relatedly, I once found that for a day or two, every time I started my service, it got a connection from a particular IP address and then crashed. As soon as I started it up again, this happened again and it crashed again. How is that similar, and what can I do about it?

    Read the article

  • Help with memory usage issues on VPS

    - by Niall Collins
    Hi there, I am running a VPS with six .NET web sites/applications on it. I am having performance issues with the server, mainly that it runs out of memory. I contacted the company that leases the server to me and they told me it was because I also had SQL Server 2008 Express running on the server. So I went ahead and removed it, uninstalled it, etc. However, I still seem to be having issues. For example, at present, looking at resource consumption, the virtual memory is: ID: vprvmem; Current Use: 894,328,832 bytes; Limit: 1,073,741,824 bytes. That means usage of ~80%. Is there any way I can check exactly which applications, web sites, or software are taking up most of the server's memory, so I can look at rectifying it? I feel that 80% is much too high to allow any contingency for a spike in traffic. I had extra memory resources added to the box recently, but I would prefer to find the source of the problem rather than throwing extra memory at it. Maybe these levels are correct and everything is running OK, but I would like to investigate to make sure. My knowledge of hardware is limited, as I mostly deal in the spectrum of software. So, are there any tools out there that can help me, or any pertinent advice?

    Read the article

  • Windows 2008 64-bit: applications and Explorer always hang

    - by Phil Farthing
    I set up a couple of Windows 2008 64-bit systems about 5 months ago. Initially all seemed well. Now, however, for no apparent reason, things are dog slow: apps hang, Explorer hangs, and just clicking on something can cause a CPU spike to 100%, and often it's Explorer that is eating it up. As I have two on identical hardware and they experience the same problem, it doesn't seem related to add-on software. The only thing these have in common is Kaspersky, and I've tried disabling/uninstalling it to no avail. There are no useful error messages in the event logs. Actually, the system never even reports app hangs. Sometimes it's similar to what I've seen on Windows 7 systems, where the screen goes milky and asks if I want to troubleshoot, but that's only when I get impatient and click-happy. The really odd thing is that it will NOT do this for a few minutes at a time, and then it starts up again. For example, I will click on the Start menu and browse for the Admin Tools; the Start menu will hang at some point and I'll have to wait about a minute, then it's OK. The next time I do this, a few seconds later, it's fine. Every click seems to hang the first time around, then be OK the second time if I do the exact same thing. If anyone has any suggestions, please PLEASE let me know! thanks =)

    Read the article

  • Optimizing Disk I/O & RAID on Windows SQL Server 2005

    - by David
    I've been monitoring our SQL Server for a while, and have noticed that I/O hits 100% every so often in Task Manager and Perfmon. I have normally been able to correlate this spike with SUSPENDED processes in SQL Server Management Studio when I execute "exec sp_who2". The RAID controller is managed by LSI MegaRAID Storage Manager. We have the following setup: the system drive (Windows) on a RAID 1 with two 280 GB drives, and SQL on a RAID 10 (two spans of mirrored 280 GB drives). This is a database that is hammered during the day but is pretty inactive at night. The DB size is currently about 13 GB, and it is used by approximately 200 (and growing) users a day. I have a couple of ideas I'm toying around with: (1) checking for indexes and reindexing some tables; (2) adding an additional RAID 1 (with two new, smaller HDs) and moving the SQL transaction log file (LDF) onto the new RAID. For #2, my question is this: would we really be increasing disk performance (I/O) by moving data off the RAID 10 onto a RAID 1? RAID 10 obviously has better performance than RAID 1, and SQL must write to the transaction log before writing to the database. But on the flip side, we'd be reducing the amount of data written to the RAID 10, which is where all of the "meat" is - thereby improving that RAID's performance for read requests. Is there any way to find out what our current limiting factor is (the drives vs. the RAID controller)? If the limiting factor is the drives, then maybe adding the additional RAID 1 makes sense. But if the limiting factor is the controller itself, then I think we're approaching this the wrong way. Finally, are we just wasting our time? Should we instead be focusing our efforts on #1 (reindexing tables, reducing network latency where possible, etc.)? (A small measurement sketch follows this entry.)

    Read the article
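
    One way to answer the drives-versus-controller question is to look at the per-file I/O stall counters SQL Server keeps itself. The sketch below is illustrative only - the JDBC URL and credentials are placeholders and it assumes the Microsoft JDBC driver is on the classpath - and the same query can just as easily be run from Management Studio; files whose io_stall values grow much faster than their read/write counts are waiting on storage.

        import java.sql.*;

        // Illustrative sampler with placeholder URL and credentials. High io_stall_*_ms
        // relative to num_of_reads/num_of_writes points at the disks; low stalls with a
        // saturated controller point elsewhere.
        public class IoStallSampler {
            public static void main(String[] args) throws Exception {
                String url = "jdbc:sqlserver://dbhost;databaseName=master;user=monitor;password=secret";
                String sql = "SELECT DB_NAME(database_id) AS db, file_id, "
                           + "num_of_reads, io_stall_read_ms, num_of_writes, io_stall_write_ms "
                           + "FROM sys.dm_io_virtual_file_stats(NULL, NULL)";
                try (Connection c = DriverManager.getConnection(url);
                     Statement s = c.createStatement();
                     ResultSet rs = s.executeQuery(sql)) {
                    while (rs.next()) {
                        System.out.printf("%s file %d: reads=%d stall=%dms writes=%d stall=%dms%n",
                                rs.getString("db"), rs.getInt("file_id"),
                                rs.getLong("num_of_reads"), rs.getLong("io_stall_read_ms"),
                                rs.getLong("num_of_writes"), rs.getLong("io_stall_write_ms"));
                    }
                }
            }
        }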

  • Looking for comprehensive computer monitoring software

    - by cornjuliox
    Summer in this country is insanely hot. So hot, in fact, that I think we just lost a machine to overheating (the last recorded CPU/GPU temp was close to 100 C; now it won't start and lets out a long series of short beeps on power up), but that's not my concern here. Since heat is such a problem for me, I use several different pieces of software to monitor temperatures in my machine. I use MSI Afterburner to monitor GPU temp, control GPU fan speed, and do some light overclocking, and then I use Speccy and SpeedFan to monitor the rest of the system, CPU temp and everything. The setup works fine for now, but I want to consolidate all this into one program so I'm not juggling several windows at once. Is there any program out there that will let me monitor the following from a single window: CPU temp and fan speed; CPU clock; GPU temp and fan speed; GPU clock, both core and memory; and additionally, motherboard temp (Speccy lists both CPU and motherboard temp; I assume the latter refers to north- and southbridge temperatures)? I'm also looking for the ability to chart these data points on a graph over time, basically to see just how high the temperatures spike under load and for general analysis. It'd be nice if it could handle overclocking of both CPU and GPU in real time too. Any suggestions? Edit: I forgot to mention that I'm on Windows 7, 64-bit.

    Read the article

  • MySQL thread count

    - by Ryan M.
    We have a web application that uses Apache and MySQL. Generally (according to Munin) our MySQL thread count sits between 2 and 4 at all times. The other day, our server almost came to a halt: HTTP requests were slow or wouldn't go through at all, SSH would work but would take 30+ seconds to register keystrokes, etc. So we pulled up Munin, and the only thing out of normal boundaries was the MySQL thread count. CPU usage was under 1%, load was under 1.0, and there was plenty of available RAM. As mentioned before, the thread count floats around 2 to 4; at the time of our slowdowns it had spiked to 14. So I started poking around the Internet, and I see that in most cases you'll start to see a higher thread count when you start running into slow queries. If I understand it correctly, a request comes in that takes a while to process, other requests come in in the meantime, and new threads are created to work on them (yes?). But at the time of the slowdown, we had 0 slow queries. My question is: what else can cause MySQL to create additional threads, and could this sudden spike in threads cause the server to slow down? To fix the issue, we restarted Apache and everything went back to its beautiful, normal self. Considering that the server vitals (CPU, RAM, network, etc.) were all ideal and the thread count was the only thing out of place, this seems like the most logical thing to pursue as the possible cause. If it matters, we're on MySQL 5.1.40. The server is FreeBSD 7.2, and the server in question is inside a jail. (A small monitoring sketch follows this entry.)

    Read the article
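
    In MySQL's default one-thread-per-connection model the thread count largely tracks open client connections, so a spike usually means requests were arriving faster than they finished - possible even with zero entries in the slow query log if statements stall just under long_query_time or wait on locks. The illustrative poller below (placeholder host and credentials, Connector/J assumed on the classpath) records what those extra threads are doing the next time the count jumps.

        import java.sql.*;

        // Illustrative poller, not part of the original setup: records thread counters
        // and what each thread is doing (Sleep, Locked, Sending data, ...) over time.
        public class ThreadWatcher {
            public static void main(String[] args) throws Exception {
                try (Connection c = DriverManager.getConnection(
                        "jdbc:mysql://dbhost:3306/", "monitor", "secret")) {
                    while (true) {
                        try (Statement s = c.createStatement();
                             ResultSet rs = s.executeQuery("SHOW GLOBAL STATUS LIKE 'Threads_%'")) {
                            while (rs.next()) {
                                System.out.println(rs.getString(1) + " = " + rs.getString(2));
                            }
                        }
                        try (Statement s = c.createStatement();
                             ResultSet rs = s.executeQuery("SHOW FULL PROCESSLIST")) {
                            while (rs.next()) {
                                System.out.printf("  %s %ss %s %s%n", rs.getString("Command"),
                                        rs.getString("Time"), rs.getString("State"), rs.getString("Info"));
                            }
                        }
                        Thread.sleep(10000);   // sample every 10 seconds
                    }
                }
            }
        }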

  • Cisco Access switch is dropping large amount of end points

    - by user135458
    This afternoon, with no changes to the network, a switch suddenly started dropping lots of connections. These connections would come back up a few minutes later, then another area connected to the switch would drop off. This is an older 4006 chassis switch, which could in and of itself be a problem, but I'm looking to see what else you all would look for in trying to find a root cause. The switch is connected via ports 1/1 and 1/2 in an EtherChannel to a VSS core (1/1/42 and 2/1/42). Both sides are up and working; however, the CPU on the switch will spike up to 99%, and that's when CRC errors start to hit the VSS core on one of those interfaces and end points start dropping off. We tried new transceivers and SFPs on each side of the link, with the same result. When we swapped the fiber patch cables on the access switch, the CRC errors did not follow the fiber cables; they stayed with port 1/2 on the access switch. So port 1/2 on the supervisor module looks like the culprit. We also tried to create a new member of the EtherChannel by using a fiber-to-Cat5 media converter and making that a member of the port channel, but when we plugged it in you couldn't even reach the switch. I'm guessing that's unrelated and a problem with the media converter. As of right now we have left it in a state with only one fiber cable running to one side of the VSS core (access switch 1/1 to 2/1/42). I've sent some info to TAC and they are looking into the situation, but does anyone else have any commands I could run or some troubleshooting I could look into in the meantime?

    Read the article

  • Why are certain folders in my XP network share really, really slow?

    - by bikefixxer
    I have a workgroup set up with Windows XP. My file "server" is running XP Pro and the clients are running XP Home. I've turned simple file sharing off on the server because certain clients need access to certain folders and not to others, and I want to keep it that way. Therefore, I've used the granular sharing/security settings to give certain clients access to certain folders. I'm using the net use command in a batch file on the clients to add the share when they log on, so it's always available via a mapped drive or a shortcut. On some clients "My Documents" points to the mapped drive, but all of the local and application settings stay local. Everything works well except for accessing one particular folder on the network. It contains a lot of random batch files and self-executable programs I use for diagnostics and whatnot, and nearly every time I open the folder the computer hangs for 15-60 seconds. This happens on every machine, including the server (but not nearly as often as on the clients). I've searched high and low and cannot figure it out, and it's driving me crazy. Here are all the things I've tried, to no avail:
      - Disabled the firewall (XP) and anti-virus (ESET NOD32)
      - Deleted any desktop.ini file I could find in the share
      - Disabled "automatically search for network folders and printers"
      - Disabled "remember each folder's view settings"
      - Set HKCU\Software\Microsoft\Windows\CurrentVersion\Policies\Explorer NoRecentDocsNetHood = 1
      - Tried with mapped drives and with UNC shortcuts
      - Ran CHKDSK
      - Removed the Read-Only attribute from all folders (well, tried to remove it - it always came back with a half check)
      - Added the server's static IP to the hosts file on the clients
    I've tried monitoring the server's performance to see if anything makes sense. Occasionally the issue coincides with a spike in pages/sec (memory), but not always. Other than that, everything else seems normal. The anti-virus would seem to be the most likely cause to me, considering the batch files and whatnot, but it still hangs when it is completely disabled. I'm at a loss, and if anyone can help me with this I'd greatly appreciate it!

    Read the article

  • Suddenly getting lock timeouts with MySQL

    - by Marc Hughes
    We've got a web app hosted on Amazon Web Services. Our database is a multi-AZ RDS MySQL server running 5.1.57, and 3-4 app servers talk to it. Today we started seeing a lot of errors along the lines of "Lock wait timeout exceeded; try restarting transaction" - almost 1% of POST requests are seeing this. There have been no modifications to the code running on the site. There have been no schema changes. We haven't had a big spike in traffic. I've been looking at the processes running, and none seem out of control. I tried scaling our RDS instance from a small to a large, with no effect. Two days ago, Amazon had some outages. As part of the recovery from that, our RDS server and our app servers ended up in different availability zones, but all within the same region. But yesterday everything was fine, so I'm not convinced that's related. The lock timeouts occur in different types of requests and in different InnoDB tables. I have noticed that the number of open connections jumped when we started seeing problems, but that may be a symptom and not a cause. What are my next steps in debugging this? (A small debugging sketch follows this entry.)

    Read the article
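
    A reasonable next step is to capture InnoDB's own view of the locks at the moment a timeout happens: SHOW ENGINE INNODB STATUS lists the waiting transaction and the locks involved. The sketch below is purely illustrative and not the application's code; it wraps a statement so that a MySQL 1205 lock-wait-timeout error immediately dumps that status output for later inspection.

        import java.sql.*;

        // Illustrative debugging aid, not the app's code: when a statement fails with
        // MySQL error 1205 (ER_LOCK_WAIT_TIMEOUT), capture SHOW ENGINE INNODB STATUS so
        // the TRANSACTIONS section shows which transaction is waiting and on what.
        public class LockTimeoutLogger {
            public static void runWithDiagnostics(Connection conn, String sql) throws SQLException {
                try (Statement st = conn.createStatement()) {
                    st.executeUpdate(sql);
                } catch (SQLException e) {
                    if (e.getErrorCode() == 1205) {
                        try (Statement st = conn.createStatement();
                             ResultSet rs = st.executeQuery("SHOW ENGINE INNODB STATUS")) {
                            if (rs.next()) {
                                System.err.println(rs.getString("Status"));
                            }
                        }
                    }
                    throw e;
                }
            }
        }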

  • Netgear CG3000D new Modem/Router - Random High Ping

    - by justin.chmura
    Cox recently came out, looked at my internet connection, and decided that the modem I had was causing high latency issues. The speed was fine, but the ping would spike to around 100 or more when gaming or putting any load heavier than browsing on the line. After they replaced it, it seems like I get better latency, but when it spikes I get upwards of 300 ping with something like 500 jitter. I figured I would hit the Server Fault universe before sending another email to Cox. I opted not to do the Cox setup, as it was an extra $20 which I thought would have just set up the wireless (which I can handle). Is it a setting or something that I missed that needs to be set up? The firmware for the CG3000D is awful and not fun to use. I did change some hidden settings on the RgServices.asp page (I'll attach a screenshot). I've also heard that the router/modem combos are awful and that I should go back and just ask for a stand-alone modem. Any input is helpful. All screenshots: http://imgur.com/a/JX6qu#0

    Read the article

  • Amazon EC2: Instances, IPs and a wordpress blog (LAMP)

    - by JustinXXVII
    I had a link to my blog posted on Reddit yesterday, and MySQL crashed on my EC2 Micro instance. I know I didn't have that many visitors because I used a marketing link that tracks hits. The link got 167 hits over the course of the last 18 hours, and MySQL crashed twice. So anyway, 167 visits is not a lot, so I've done some short-term optimizations like restricting the number of Apache threads to limit the MySQL calls. I also set up WP Super Cache to serve static content, and soon I'm going to offload all of my images to S3 or CloudFront. So this leads me to my question: if this doesn't seem to help, and if I have another traffic "spike", how do AMIs work when you have a MySQL database? I think I understand that if you have more than one instance and assign the same Elastic IP to both of them, the incoming traffic gets distributed between them. But what happens when the MySQL database gets updated on one of the instances? I just need to wrap my mind around what happens when I create an AMI and then launch a new instance to help with traffic. Thanks for your suggestions.

    Read the article

  • Losing SQL connections

    - by john pavelka
    SQL Server 2005 Standard; one dedicated SQL Server machine (a VM); Windows Server 2003; small databases. About once a week we lose all SQL connections. It seems to fix itself after about 5-10 minutes. System.Web.HttpUnhandledException: Exception of type 'System.Web.HttpUnhandledException' was thrown. --- System.Data.SqlClient.SqlException: Timeout expired. The timeout period elapsed prior to completion of the operation or the server is not responding. We don't have a fully qualified DBA; it's kind of a joint effort here. Can somebody give me some general ideas for troubleshooting the network side and the application side? We already ran a few tuning profiles and ran through the Database Tuning Advisor to apply indexing recommendations. It would sure be nice if there were a way to take a snapshot of what was running on SQL Server when these 100% CPU spikes occurred, but sometimes we're not around (a small snapshot sketch follows this entry). Is it common to throttle CPU for certain processes? Can this be done with Windows Server 2003? For example, if security apps were making the CPU spike to 100%, is there a way to limit their CPU usage? Any advice is appreciated. Thanks.

    Read the article
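
    For the wish to take a snapshot of what was running when nobody is around, a small scheduled sampler gives you something to look back at after a spike. The sketch below is illustrative only (placeholder connection string, Microsoft JDBC driver assumed); the same SELECT against sys.dm_exec_requests could equally be run from a SQL Agent job into a logging table.

        import java.sql.*;
        import java.text.SimpleDateFormat;
        import java.util.Date;

        // Illustrative "flight recorder", not from the original environment: run it on a
        // schedule so there is a record of active requests during unattended CPU spikes.
        public class ActivitySnapshot {
            public static void main(String[] args) throws Exception {
                String url = "jdbc:sqlserver://dbhost;databaseName=master;user=monitor;password=secret";
                SimpleDateFormat ts = new SimpleDateFormat("yyyy-MM-dd HH:mm:ss");
                try (Connection c = DriverManager.getConnection(url);
                     Statement s = c.createStatement();
                     ResultSet rs = s.executeQuery(
                         "SELECT session_id, status, command, cpu_time, wait_type "
                       + "FROM sys.dm_exec_requests WHERE session_id > 50")) {
                    while (rs.next()) {
                        System.out.printf("%s spid=%d %s %s cpu=%dms wait=%s%n",
                                ts.format(new Date()), rs.getInt(1), rs.getString(2),
                                rs.getString(3), rs.getInt(4), rs.getString(5));
                    }
                }
            }
        }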

  • computer fails to boot during/after POST for five or six boots, then works

    - by N13
    For the last few days, my computer has had issues booting. I've seen two different behaviors: (1) the screen displays the graphics card information, then begins to list the RAM, hard drives, etc.; at different points in this process (after the graphics info), the computer shuts off, and after five or six attempts it then boots normally; (2) in roughly the same time frame, the computer freezes and fails to boot, and I think it boots successfully on the next attempt. I've also noticed that in some instances the computer freezes on shutdown: it gets right to the point where it should shut off, but doesn't. I recently combined the best parts of two different machines into this one. I'm booting to GRUB, with Ubuntu 12.04, Linux Mint 11 and Windows Vista (unfortunately) as my OS options. It has an Enermax Modu82+ 525W power supply, and I've used an online calculator to determine that my load shouldn't exceed 400W. I even unplugged a hard drive, but that didn't help. I found the latest BIOS, patched it and checked the settings, but that didn't fix it. I'm fairly certain this issue didn't exist at first, but it might have started when the power at my new apartment dropped for a second. The machine is plugged into a surge protector strip, but it's old and I've heard they lose effectiveness with age. Is a power dip as damaging as a spike? If something were damaged, why would it boot successfully after five or six attempts? It's almost like the BIOS or PSU needs to be primed. The trouble with debugging is that there seems to be a "grace period" after shutdown where the issue doesn't present itself again. What should I try next?

    Read the article

  • PC in POST loop

    - by Antony Scott
    Hi, I have a custom-built PC using a Gigabyte GA-EP35-DS3P motherboard with a Q6600 CPU. For the last 2 days it has been getting stuck in a POST loop. That said, I don't think it actually got into the BIOS: it repeatedly lit up the LEDs and then not much more. Sometimes I could see the CPU fan twitch. Today I re-seated the DIMMs and it powered up straight away. Could this be a sign of an impending hardware failure? The PC is hooked up to a UPS, so I don't think it's a power spike or anything like that, as I have 2 other PCs on the same UPS and they're both fine. Yesterday, the first time this happened, I was getting a message which I think said "Scanning BIOS image on hard drive". I've been building and using PCs for well over 25 years and that's a new one on me! I don't think it's an overheating problem, as when the PC does finally boot up the CPU is running at 35-40C. Any help or suggestions would be greatly appreciated.

    Read the article

  • Thousands of visits a day from untraceable traffic to website - Serious issue

    - by kel
    At the end of January we noticed a spike in traffic to what Jetpack stats said was the home/archive page and what Google was classifying as going to /gaming/, which is an archive list in WordPress. This started off at ~3,000 unique visitors and jumped up to 65,000 unique visitors in one day, again all to the "home" page. This happened over the course of a couple of weeks and we thought we were getting attacked. The traffic then dropped off for a few days, but came back at only about ~15,000 uniques a day and has been like that every day since. We came to the conclusion that something wasn't tracking right somewhere, that this was legitimate traffic, and brushed it off. Now here comes the problem: Google AdSense has just disabled our account for "invalid clicks". We are trying to figure out where this traffic is coming from and stop it if it's not legitimate, or figure out a way to track it correctly. Specs for the site: a dedicated server running CentOS 6 with nginx, php-fpm and MySQL. The site is built in WordPress and we use CloudFlare and W3 Total Cache. Analytics being used are Google Analytics, Quantcast, Alexa and Compete. Any kind of help would be awesome. UPDATE: I'm finding more people with the same type of problem and there doesn't seem to be a solution. http://netmeg.com/bot-attack/ http://stkywll.com/2012/03/02/annoying-cyborgs-attach-distort-analytics/ After looking at the access logs I noticed the requests were all coming from CloudFlare IPs. I looked into that and found out CloudFlare acts as a proxy and there is a way to fix the logs in nginx so they show the real client IP. The visitors are coming from many different ISPs in the US. They are going to /games/ or /gaming/ (/games/ redirects to /gaming/) and all seem to have the same user agent of Mozilla/5.0 (compatible; MSIE 9.0; Windows NT 6.1; Trident/5.0).

    Read the article

  • Nginx Multiple If Statements Cause Memory Usage to Jump

    - by Justin Kulesza
    We need to block a large number of requests by IP address with nginx. The requests are proxied by a CDN, so we cannot block on the actual client IP address (it would be the IP address of the CDN, not the actual client). Instead, we have $http_x_forwarded_for, which contains the IP we need to block for a given request. Similarly, we cannot use iptables, as blocking the IP address of the proxied client would have no effect. We need to use nginx to block the request based on the value of $http_x_forwarded_for. Initially, we tried multiple simple if statements: http://pastie.org/5110910 However, this caused our nginx memory usage to jump considerably. We went from somewhere around a 40MB resident size to over a 200MB resident size. If we changed things up and created one large regex that matched the necessary IP addresses, memory usage was fairly normal: http://pastie.org/5110923 Keep in mind that we're trying to block many more than 3 or 4 IP addresses... more like 50 to 100, which may be included in several (20+) nginx server configuration blocks. Thoughts? Suggestions? I'm interested both in why memory usage would spike so greatly using multiple if blocks, and also in whether there are any better ways to achieve our goal.

    Read the article

  • Oracle 11g Data Guard over a WAN

    - by Dave LeJeune
    Hi - we are in the process of looking at using Oracle's Data Guard to replicate our 11g instance from a colo facility in Washington, DC to Chicago. To give some basics, we have approximately 25 TB of storage and a healthy transaction rate in the 1-2K/sec range. Also, because we are processing data in real time, we have a 24x7x365 requirement for processing data. We don't have any respites as far as volume goes, except for system upgrades (once every few months) where we take the system offline, but then of course we experience a spike in transactions when we bring the system back online. Ideally we would want the second instance in the DG configuration semi-online in a read-only fashion for reports, etc. We evaluated DG in 10g and were not overly impressed, and research seemed to show that earlier versions had issues with replication over a WAN, but I have heard good things about the changes the product has gone through with 11g. Can anyone confirm an instance of this size and transaction rate being replicated over a WAN, and if so, what is the general latency? Any information or experiences with a DG implementation of this size and scope would be really helpful (or larger - I also realize we are still relatively small compared to many others out there). Many thanks in advance.

    Read the article

  • What is the fall-off of sub-second throughput on Ethernet network interfaces?

    - by Kyle Brandt
    On a network interface, speeds are given in terms of data over time - in particular, bits per second. However, in the uber-fast world of computing, a second is kind of a really long time. So, for example, given a linear falloff, a 1 Gbit per second interface would do 500 Mbit per half second, 250 Mbit per quarter second, etc. I imagine that at certain units of time this is no longer linear - perhaps it is set by Ethernet frequencies, system clock speeds, interrupt timers, etc. I am sure this varies depending on the system, but does anyone have more information or whitepapers on this? One of the main reasons I am curious is to understand output drops on interfaces. Even if the speed per second is much lower than the interface can handle, perhaps there are spikes that cause drops for only small numbers of milliseconds. Perhaps various coalescing would hide this effect - or perhaps increase it on the receiving interface? Do queues make a difference here? Example: if this is linear down to the millisecond, we would have 1 Mbit/ms, and if Wireshark isn't distorting what I see, should I see drops when I have a spike beyond 1 Mbit in a millisecond? (A toy calculation follows this entry.)

    Read the article
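
    As a toy back-of-the-envelope model of my own (not from the question, and ignoring framing overhead and interrupt coalescing): a sub-second burst only avoids output drops if the part that exceeds line rate over that window fits in the interface's output queue. The sketch below works through the 1 Mbit-per-millisecond example.

        // Toy model (my own illustration, not a measurement): a burst survives a
        // sub-second spike only if the excess over line rate fits in the output
        // queue/buffer. Framing overhead and coalescing are ignored.
        public class MicroburstModel {
            // bits that must be buffered when burstBits arrive within windowSeconds
            static double excessBits(double lineRateBps, double windowSeconds, double burstBits) {
                return burstBits - lineRateBps * windowSeconds;
            }

            public static void main(String[] args) {
                double lineRate   = 1e9;             // 1 Gbit/s
                double window     = 0.001;           // 1 ms of line capacity = 1 Mbit
                double burst      = 1.5e6;           // 1.5 Mbit arriving inside that 1 ms
                double bufferBits = 512 * 1024 * 8;  // hypothetical 512 KB output queue

                double excess = excessBits(lineRate, window, burst);
                System.out.printf("Excess to buffer: %.0f bits (%.1f KB)%n", excess, excess / 8 / 1024);
                System.out.println(excess > bufferBits ? "Expect output drops" : "Queue absorbs the spike");
            }
        }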

  • SQL queries break our game! (Back-end server is at capacity)

    - by TimH
    We have a Facebook game that stores all persistent data in a MySQL database running on a large Amazon RDS instance. One of our tables is 2 GB in size. If I run any query on that table that takes more than a couple of seconds, any SQL actions performed by our game fail with the error "HTTP/1.1 503 Service Unavailable: Back-end server is at capacity". This obviously brings down our game! I've monitored CPU usage on the RDS instance during these periods, and though it does spike, it doesn't go much over 50%. Previously we were on a smaller instance size and it did hit 100%, so I'd hoped just throwing more CPU capacity at the problem would solve it. I now think it's an issue with the number of open connections. However, I've only been working with SQL for 8 months or so, so I'm no expert on MySQL configuration. Is there perhaps some configuration setting I can change to prevent these queries from overloading the server, or should I just not run them while our game is up? I'm using MySQL Workbench to run the queries. Here's an example:
        SELECT * FROM BlueBoxEngineDB.Transfer WHERE Amount = 1000 AND FromUserId = 4 AND Status = 'Complete';
    As you can see, it's not overly complex. There are only 5 columns in the table. (An illustrative EXPLAIN check follows this entry.) Any help would be very much appreciated - thanks!

    Read the article
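
    Whether the Transfer table has a usable index is not stated in the question, but EXPLAIN is a quick way to find out why a query on a 2 GB table takes seconds and starves other connections. The sketch below is illustrative only, with a placeholder RDS endpoint and credentials; the EXPLAIN itself can of course be run directly in MySQL Workbench.

        import java.sql.*;

        // Illustrative check with placeholder endpoint and credentials: EXPLAIN shows
        // whether the ad-hoc query scans the whole table. A full scan ("type" ALL with a
        // huge row estimate) holds a connection and churns the buffer pool for seconds,
        // which is usually what starves the game's own connections rather than CPU.
        public class ExplainQuery {
            public static void main(String[] args) throws Exception {
                String url = "jdbc:mysql://rds-endpoint:3306/BlueBoxEngineDB";
                try (Connection c = DriverManager.getConnection(url, "user", "password");
                     Statement s = c.createStatement();
                     ResultSet rs = s.executeQuery(
                         "EXPLAIN SELECT * FROM Transfer "
                       + "WHERE Amount = 1000 AND FromUserId = 4 AND Status = 'Complete'")) {
                    while (rs.next()) {
                        System.out.printf("table=%s type=%s key=%s rows=%s%n",
                                rs.getString("table"), rs.getString("type"),
                                rs.getString("key"), rs.getString("rows"));
                    }
                }
            }
        }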

  • Games on Windows 8 in Boot Camp lag even on lowest graphics

    - by Jackson Gariety
    I've been playing Crysis 2 and Skyrim on my Retina MacBook Pro (10,1) for months now. The two games used to run super smoothly, even on nearly maxed-out settings. This laptop has an Nvidia GeForce GT 650M graphics card inside; it runs great. But I recently replaced my Windows 8 Consumer Preview with the retail copy, and since then 3D games lag in this odd way, no matter what the graphics settings. Every second, Skyrim and Crysis alternate between running smoothly and lagging. It's a cyclical lag that comes and goes like clockwork. I can turn the graphics down to 800x600 with no antialiasing and low texture quality, and it runs much smoother on the "up" part of the cycle, but every second it moves back into this lag spike. I've tried installing beta graphics drivers, reinstalling the operating system, reinstalling the Boot Camp support software, and freeing up space (I have about 20 GB free). I can't figure out what suddenly caused this other than some obscure difference between the Consumer Preview and the retail version. What can I try? Is my video card failing? Are there some other drivers I can install? This isn't normal lag from maxing out the card.

    Read the article

  • Apache round-robin load balancing behaves unusually when one of the Tomcat servers is brought down

    - by srayker
    We are facing unusual behavior with round-robin load balancing on Apache when one of the Tomcat servers is brought down. Our setup: we have 2 Apache web servers on the front end using the mod_jk module with round-robin load distribution, and we have also enabled session stickiness. Behind them are 4 Tomcat servers on which the applications run. Sometimes under heavy load, if there is slowness in our DB tier, we find that eventually one of the Tomcats goes into a hung state and needs a restart. The moment we bounce that Tomcat, we see a spike in requests on one of the other servers, which then also goes into a hung state and needs a restart. Eventually all the servers end up in the same condition. What beats me is why Apache transfers the whole load to one server instead of distributing it. We are now trying worker.balancer.method=B to see if this helps resolve our issue. Any thoughts on resolving the issue are greatly appreciated. Regards, srayker

    Read the article

  • Come see us at JavaU at JavaOne!

    - by tmcginn
    In just a little under a month, JavaOne will be in full swing (no pun intended) and thousands of Java developers will gather to hear the latest Java news, immerse themselves in Java technology and learn some new things. This year, I am fortunate enough to be able to attend, along with my Java curriculum development colleagues Matt Heimer and Mike Williams. We start our week at JavaOne teaching a one-day session at JavaU on Sunday morning. If you have never attended a training session through JavaU, you should check it out. There are some terrific sessions this year, and it might help to justify your trip to JavaOne if you can say it was for training! This year I am teaching a one-day session on Java SE 7 New Features - a great session for anyone interested in the specific details of what is new in Java SE 7. Matt is teaching a one-day session on developing portable Java EE applications with the Enterprise JavaBeans 3.1 API and the Java Persistence 2.0 API, and Mike is doing a one-day session on developing rich client applications with Java SE 7 using JavaFX 2. I asked Matt and Mike to tell me what developers can expect from their sessions. Matt: "My session will get you up to speed on everything you need to know to create portable Java EE 6 applications using EJB 3.1 and JPA 2. I am going to cover why everyone can benefit from using EJBs (and why developers should relearn them if they haven't looked at them for years). Students who attend my session will see JPA examples showcasing how to use relational databases in enterprise applications without programming to JDBC and without writing SQL statements. EJB and JPA benefit from being paired together, so I will also show how transaction management is easier in a container. I encourage students to bring a laptop and code as they learn!" Mike: "My session covers how to develop a rich client application using JavaFX 2. Starting with the basic concepts of JavaFX, students will see how a JavaFX application is built, from its layout to its controls to its data structures. In addition, more advanced controls like charts, smart tables, and transitions will be added to the application. Finally, a quick review of JavaFX concurrency and data binding is included. Blended with the core concepts, the session will include some of the latest JavaFX technology. This includes using Scene Builder to create a JavaFX UI and connecting your XML UI definition to Java code. In addition, packaging of the JavaFX application will be covered, with some examples of the new native packaging features." As I mentioned, my session covers the changes in Java SE 7, including the language changes that were voted into Java SE 7 from Project Coin (a small illustrative sample appears after this entry). We will also look at how you can take advantage of the new I/O library (NIO.2) for writing applications that work with files, directories and file systems, as well as the changes in asynchronous I/O that are part of NIO.2. We will spend some time looking at the changes to the Java Virtual Machine, including support for dynamically typed languages (JSR-292), and at the Java concurrency enhancements (JSR-166), including the new Fork/Join framework. And we'll round out the day with a look at changes in Swing, XML and a number of smaller API changes. And if these topics aren't grabbing your interest, take a look at the other 10 sessions, which range from architecture to how to pass the Oracle Certified Programmer I and II exams.
See you soon!

    Read the article
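
    For a flavour of the Project Coin and NIO.2 material the Java SE 7 session covers, here is a small, self-contained sample of my own (it is not course material): try-with-resources, strings in switch, and the java.nio.file API.

        import java.io.IOException;
        import java.nio.charset.StandardCharsets;
        import java.nio.file.*;
        import java.util.Arrays;

        // A small taste of the session topics (my own illustration, not course material):
        // try-with-resources and strings-in-switch from Project Coin, plus the NIO.2 file
        // API. The file name is hypothetical.
        public class Java7Sampler {
            public static void main(String[] args) throws IOException {
                Path notes = Paths.get("session-notes.txt");
                Files.write(notes, Arrays.asList("JavaOne", "JavaU"), StandardCharsets.UTF_8);

                // try-with-resources closes the stream automatically, even on an exception
                try (DirectoryStream<Path> dir = Files.newDirectoryStream(Paths.get("."))) {
                    for (Path p : dir) {
                        System.out.println(p.getFileName());
                    }
                }

                // Strings in switch, another Project Coin change
                switch (args.length > 0 ? args[0] : "quiet") {
                    case "verbose": System.out.println("wrote " + notes.toAbsolutePath()); break;
                    default:        System.out.println("done");
                }
            }
        }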
