Search Results

Search found 12267 results on 491 pages for 'out of memory'.


  • Amazon EC2 - many micro-instances vs single small/medium instance

    - by shashankaholic
    I have a chat application built on a stack of Openfire, Tomcat 6 and MySQL. Currently I have all of these servers installed on a single Linux micro instance (613 MB of memory), and even with a low user base of 10-20 I am running into CPU overload, which is not surprising. As I am new to Amazon EC2, can somebody suggest how to scale this architecture according to traffic? Should I use a separate micro instance for each server (Openfire, MySQL, Tomcat 6), or a single small or medium instance for the whole stack? Some factors in context: heavy reliance on MySQL, high memory usage due to file transfers, and a web application that interacts with other Amazon services such as S3 and SES.
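
    A rough memory budget helps frame the choice. The sketch below uses assumed per-service footprints (not measurements) and the nominal RAM of the instance types involved at the time; swap in figures taken from ps/free on your own servers before deciding.

        # Rough memory-budget sketch; all per-service figures are assumptions.
        instance_ram_mb = {"t1.micro": 613, "m1.small": 1700, "m1.medium": 3750}

        assumed_mb = {
            "mysql":    300,  # assumed InnoDB buffer pool + connection overhead
            "tomcat6":  256,  # assumed JVM heap for the web app
            "openfire": 256,  # assumed JVM heap for the XMPP server
            "os_misc":  100,  # assumed kernel, sshd, cron, cache headroom
        }

        stack_total = sum(assumed_mb.values())
        for itype, ram in instance_ram_mb.items():
            print(f"{itype:10s} {ram:5d} MB RAM -> {ram - stack_total:+5d} MB headroom")

    Under these assumptions the whole stack does not fit in 613 MB at all, while a small or medium instance (or one micro per Java service plus one for MySQL) leaves real headroom.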

    Read the article

  • laptop suddenly became very slow

    - by Ieyasu Sawada
    I have a Compaq laptop that I've been using for two years. Today it suddenly became very slow: it took almost 5 minutes from power-on to the logon screen, and after I clicked my username it took 3 minutes to show the desktop. I opened My Computer > Properties to check whether it still reports 2 GHz for the Core 2 Duo processor and 2 GB of memory; it took 10 minutes for the information to appear, but it still shows 2 GB and 2 GHz, so I conclude the problem is probably not the memory or the CPU. It was running perfectly last night and I have seen no signs of failure. Things I have already tried: rebooting, and shutting down and turning it on again. How do I determine what is causing this, and how do I fix it?

    Read the article

  • Slow website load with CNAME, fast when using IP

    - by Nate Strandberg
    I set up two DNS servers on my network: ns1.byte-werx.com and ns2.byte-werx.com. I can ping the DNS servers with a fairly good response time, and digging them also gives a reasonable response, but any website resolved through them is painfully slow (upwards of 20 seconds), verifiable by performing a tracert or opening the URL in a browser. The DNS servers run CentOS 6.3 and BIND9 with 500 MB of memory (I figure that should be more than enough?). I have a reverse-lookup zone (1.168.192) along with two website zones (www.byte-werx.com and www.stayhomedental.com). If I access the websites by their IP the pages load almost instantly, so I do not believe the issue is with the hosting server, which runs Ubuntu Server 12.04 and Apache2 with 12 GB of memory. Any thoughts? I do not have the named.conf file in front of me, but I can edit this post to include it if you think it would be useful. Thanks for any advice!
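
    To separate "slow DNS" from "slow web server", it helps to time the name lookup and the TCP connect independently from a client that uses ns1/ns2 as its resolver. A minimal sketch (the hostname is taken from the question; everything else is standard library):

        import socket, time

        host, port = "www.byte-werx.com", 80

        t0 = time.time()
        addrs = socket.getaddrinfo(host, port, proto=socket.IPPROTO_TCP)
        ip = addrs[0][4][0]
        print(f"DNS lookup:  {time.time() - t0:.3f}s -> {ip}")

        t1 = time.time()
        with socket.create_connection((ip, port), timeout=10):
            pass
        print(f"TCP connect: {time.time() - t1:.3f}s (bypasses DNS)")

    If the lookup takes most of the 20 seconds while the connect is near-instant, the delay is in resolution itself (for example, a client trying an unreachable or non-recursing resolver first and timing out before a working one answers) rather than in Apache, which would fit the fast-by-IP symptom.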

    Read the article

  • Animated HTTP request visualisation on Apache

    - by Simon Bennett
    This is more a question to jog my memory about something I saw a while ago. I remember being introduced to a real-time server visualisation tool that showed the requests Apache was currently handling as a kind of fireworks effect on screen: each request, or group of requests, would be shot across the screen in varying colours. I can't for the life of me remember what it was called, and hunting around here and on Google has left me empty-handed. Just wondering whether anybody else can dislodge this gem from memory and ease my pain! Thanks

    Read the article

  • How can I remove LibUSB on Windows 7?

    - by Cokegod
    I've installed LibUSB, software that modifies your USB driver so that programs can access it easily. I didn't know that on Windows 7 you have to run it in compatibility mode for Windows XP, so now Windows 7 can't access any USB port (including keyboard and mouse). That means I can't use my computer at all, and it has no PS/2 ports. I tried Safe Mode, but it didn't change anything. I tried running a System Restore to a point from before the LibUSB installation, but every restore point gives me the error: "The instruction at 0x73888f18 referenced memory at 0x00000004. The memory could not be read." I also booted Hiren's BootCD and removed all the files associated with LibUSB, but that didn't change anything either. So how can I remove LibUSB and get access to my computer again without formatting the hard drive? (I have important things on it.)

    Read the article

  • Alternatives to Crashplan for VPS?

    - by Chloe
    I use SFTP Net Drive to mount a remote VPS so I can back it up. However, it has taken over three days to scan! Running 'ls -lR' from my desktop over the mounted network drive took only about five minutes to list all the files, and there are only about 5,000 files and 2 GB in total. I know CrashPlan can run headless on the VPS itself, but that sounds like a pain to set up and it uses a lot of memory on the server; the VPS has less memory to spare than my desktop. Is there another program that speaks the CrashPlan backup protocol and has a command-line interface?

    Read the article

  • How to have a soft-real-time process in the presence of a heavily swapping, IO-intensive background load?

    - by Vi
    I have given the player process real-time CPU and I/O priority:

        schedtool: PID 32301: PRIO 4, POLICY R: SCHED_RR, NICE -20, AFFINITY 0xf
        ionice: realtime: prio 4

    but the music stutters anyway. The background load runs at low priority (SCHED_IDLEPRIO, idle ionice) but uses a lot of memory (more than is physically available) and does a lot of I/O and computation. latencytop shows about 1500 ms, both for the background load and for unrelated processes, on:

        Following symlink
        Writing buffer to disk (sync)
        Page fault
        Writing a page to disk

    Load average is 10 and counting. Why can't the system give mplayer, say, 200 MHz of one core, 32 MB of memory and a guaranteed I/O opportunity at least once per second to keep it happy, while continuing the calculations in the background? Or: why can't it leave the background task and swap to thrash each other while keeping the rest of the system as responsive as if there were no background load? How can I have RT processes AND a heavy background load at the same time (without virtual machines)?
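
    One blunt way to keep the batch job from dragging everything into swap is to cap its address space (and niceness) before it starts, so it fails an allocation instead of forcing the player's pages out. A sketch, assuming a hypothetical ./heavy_batch_job command and a 2 GiB cap chosen to leave room for the rest of the system; cgroups (memory and blkio controllers) are the finer-grained alternative:

        import os
        import resource
        import subprocess

        CAP_BYTES = 2 * 1024**3  # assumption: cap sized to leave RAM for mplayer

        def confine_child():
            # Runs in the child just before exec: limit virtual memory, drop priority.
            resource.setrlimit(resource.RLIMIT_AS, (CAP_BYTES, CAP_BYTES))
            os.nice(19)

        proc = subprocess.Popen(
            ["./heavy_batch_job", "--input", "data.bin"],  # hypothetical command
            preexec_fn=confine_child,
        )
        print("background job started, pid", proc.pid)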

    Read the article

  • APC PHP cache size does not exceed 32MB, even though settings allow for more

    - by hardy101
    I am setting up APC (v3.1.9) on a high-traffic WordPress installation on CentOS 6.0 64-bit. I have figured out many of APC's quirks, but something is still not quite right: no matter what settings I change, APC never actually caches more than 32 MB, and I am trying to bump it up to 256 MB. 32 MB is the default for apc.shm_size, so I am wondering if it is stuck there somehow. I have run

        echo '2147483648' > /proc/sys/kernel/shmmax

    to increase the system's shared memory to 2 GB (half of my 4 GB box), then ran ipcs -lm, which returns:

        ------ Shared Memory Limits --------
        max number of segments = 4096
        max seg size (kbytes) = 2097152
        max total shared memory (kbytes) = 8388608
        min seg size (bytes) = 1

    I also made the change in /etc/sysctl.conf and ran sysctl -p to make the setting stick on the server, and rebooted for good measure. In my APC settings mmap is enabled (the default in recent versions of APC). php.ini looks like:

        apc.stat=0
        apc.shm_size="256M"
        apc.max_file_size="10M"
        apc.mmap_file_mask="/tmp/apc.XXXXXX"
        apc.ttl="7200"

    I am aware that mmap mode ignores apc.shm_segments, so I have left it at the default of 1. phpinfo() reports the following about APC:

        Version                 3.1.9
        APC Debugging           Disabled
        MMAP Support            Enabled
        MMAP File Mask          /tmp/apc.bPS7rB
        Locking type            pthread mutex Locks
        Serialization Support   php
        Revision                $Revision: 308812 $
        Build Date              Oct 11 2011 22:55:02

        Directive                    Local Value
        apc.cache_by_default         On
        apc.canonicalize             On
        apc.coredump_unmap           Off
        apc.enable_cli               Off
        apc.enabled                  On
        apc.file_md5                 Off
        apc.file_update_protection   2
        apc.filters                  no value
        apc.gc_ttl                   3600
        apc.include_once_override    Off
        apc.lazy_classes             Off
        apc.lazy_functions           Off
        apc.max_file_size            10M
        apc.mmap_file_mask           /tmp/apc.bPS7rB
        apc.num_files_hint           1000
        apc.preload_path             no value
        apc.report_autofilter        Off
        apc.rfc1867                  Off
        apc.rfc1867_freq             0
        apc.rfc1867_name             APC_UPLOAD_PROGRESS
        apc.rfc1867_prefix           upload_
        apc.rfc1867_ttl              3600
        apc.serializer               default
        apc.shm_segments             1
        apc.shm_size                 256M
        apc.slam_defense             On
        apc.stat                     Off
        apc.stat_ctime               Off
        apc.ttl                      7200
        apc.use_request_time         On
        apc.user_entries_hint        4096
        apc.user_ttl                 0
        apc.write_lock               On

    apc.php shows the same graph no matter how long the server runs: the cache size fluctuates and hovers at just under 32 MB (see http://i.stack.imgur.com/2bwMa.png). You can see that the cache is trying to allocate 256 MB, but the used portion keeps getting recycled at 32 MB. This is confirmed by refreshing the apc.php page: the cached file counts move up and down, implying that the cache is not holding on to all of its files. Does anyone have an idea of how to get APC to use more than 32 MB for its cache size? Note that identical behaviour occurs with eAccelerator, XCache and APC. I read at http://www.litespeedtech.com/support/forum/archive/index.php/t-5072.html that suEXEC could cause this problem.
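
    For what it's worth, a quick way to read back the kernel shared-memory ceilings (shmmax caps a single SysV segment, shmall caps the system-wide total in pages) is sketched below. With MMAP Support enabled, as the phpinfo() output above shows, APC allocates its cache via mmap rather than SysV shared memory, so to the best of my understanding these limits should not be the binding cap; the check mainly rules them out.

        import os

        def read_int(path):
            with open(path) as f:
                return int(f.read().strip())

        page   = os.sysconf("SC_PAGE_SIZE")
        shmmax = read_int("/proc/sys/kernel/shmmax")  # bytes, per segment
        shmall = read_int("/proc/sys/kernel/shmall")  # pages, system-wide total

        print(f"shmmax: {shmmax / 2**20:.0f} MiB per segment")
        print(f"shmall: {shmall * page / 2**20:.0f} MiB total")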

    Read the article

  • Leave Windows Session Logged On

    - by Kyle Brandt
    Is it a bad idea, for any reason, to leave accounts logged on to Windows Remote Desktop sessions, i.e. instead of logging off, just closing the session so that it locks? In this case the limited number of Remote Desktop connections is not an issue. I am just wondering whether anyone has seen sessions leak memory over time, or security issues with doing this, etc. I can see that programs left open might hold or leak memory, but has anyone seen this with Microsoft software such as Control Panel applets, management consoles, or Exchange System Administrator?

    Read the article

  • VPS hang when one Virtual CPU usage is 100%

    - by garconcn
    We are using XenCenter to manage all of our cPanel VPS servers. Each host has two Intel(R) Xeon(R) CPUs and 32 GB of memory, and runs 4 cPanel VPSes, each with 8 GB of memory and 4 virtual CPUs. Every month or two, one of the VPSes hangs because one virtual CPU sits at 100% usage and will not release the CPU unless we force a reboot. We have 10 similar hosts, and this brings one of our servers down almost every day. We have tried to avoid running the Statistics Processing and Fantastico updates during the night, but the problem still happens at random times. I cannot find anything in the server logs when it hangs. Any clue? Thank you.

    Read the article

  • Windows doesn't get access to the internet though Linux easily does

    - by flashnik
    We have a very interesting problem. The network is configured this way:

        - The internet is connected to a Trendnet switch (TS).
        - A DHCP server at 192.168.0.1, running on Ubuntu (S), is connected to switch TS.
        - DNS is also configured on 192.168.0.1 on S.
        - D-Link Wi-Fi boosters are connected to switch TS.
        - PCs use D-Link PCI-E Wi-Fi cards to get access to the network.
        - PCs dual-boot Ubuntu and Windows 7.

    There are about 40 PCs. When a PC is booted into Ubuntu it easily gets access to the internet, but when it is booted into Windows 7 it gets a valid IP address and no internet access. The address, mask, DNS and gateway are exactly the same as when it is booted into Ubuntu, and S is reachable and pingable. Sometimes, when we are lucky, a PC gets internet access under Windows, but it can lose it again after a reboot; when a Windows PC does have access, its settings are exactly the same as when it does not. What can be done?

    UPDATE: I have shared a Dropbox folder with two traffic captures, both in tcpdump format and made with Wireshark on the Windows PC: ping.pcap is a capture of pinging 8.8.8.8, and google-browser.pcap is a capture of opening google.com in a browser. The MAC of the Windows PC ends in b7:63 and its IP is 192.168.0.130.

    UPDATE 2: This is the ifconfig output from the Ubuntu server:

        eth0      Link encap:Ethernet  HWaddr 00:1e:67:13:d5:8d
                  inet addr:193.200.211.74  Bcast:193.200.211.78  Mask:255.255.255.0
                  inet6 addr: fe80::21e:67ff:fe13:d58d/64 Scope:Link
                  UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
                  RX packets:196284 errors:0 dropped:44 overruns:0 frame:0
                  TX packets:190682 errors:0 dropped:0 overruns:0 carrier:0
                  collisions:0 txqueuelen:1000
                  RX bytes:158032255 (158.0 MB)  TX bytes:156441225 (156.4 MB)
                  Interrupt:19 Memory:c1400000-c1420000

        eth0:2    Link encap:Ethernet  HWaddr 00:1e:67:13:d5:8d
                  inet addr:192.168.0.1  Bcast:192.168.0.254  Mask:255.255.255.0
                  UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
                  Interrupt:19 Memory:c1400000-c1420000

        eth1      Link encap:Ethernet  HWaddr 00:1e:67:13:d5:8c
                  UP BROADCAST MULTICAST  MTU:1500  Metric:1
                  RX packets:0 errors:0 dropped:0 overruns:0 frame:0
                  TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
                  collisions:0 txqueuelen:1000
                  RX bytes:0 (0.0 B)  TX bytes:0 (0.0 B)
                  Interrupt:16 Memory:c1300000-c1320000

    nslookup from Windows results in a DNS request timeout, and nbtstat reports "not found".

    Read the article

  • building a new pc - no display, no beeps

    - by Adam
    Hi. I am building a new PC using a GA-MA785GMT-UD2H motherboard and a 500 W power supply (1 x 20-pin and 1 x 4-pin connectors). The CPU fan, hard drive and power supply all spin up, but there is no display on the monitor and no beeps. I have already tried taking out all of the memory (still no beeps) and using a different power supply (still no display). Only the motherboard, memory, CPU, heatsink/fan and power supply are connected. Any ideas? Do I have a faulty motherboard?

    Read the article

  • Server not responding to SSH and HTTP but ping works

    - by yes123
    Hello guys, I requested a hard reboot because neither SSH nor HTTP was responding; ping worked normally. Which logs should I check to understand what the problem was? Thanks! (Debian 6 running a LAMP stack.)

    Edit: my memory and swap:

        Mem:  4040068k total, 1114920k used, 2925148k free, 109212k buffers
        Swap: 1051384k total,       0k used, 1051384k free, 283820k cached

    That is 4 GB of RAM (and more than 1 TB of HDD). The trouble started two days ago, when swap usage climbed by more than 60% in less than 10 hours. My control panel reports the top 5 processes by memory usage; if every apache2 process really is about 190 MB, that is bad, because top shows 262 sleeping processes, most of them apache2. My Apache mpm_prefork settings are:

        <IfModule mpm_prefork_module>
            StartServers          5
            MinSpareServers       5
            MaxSpareServers      10
            ServerLimit        1500
            MaxClients         1500
            MaxRequestsPerChild 2000
        </IfModule>

        KeepAlive On
        MaxKeepAliveRequests 100
        KeepAliveTimeout 4
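
    Below is a back-of-the-envelope check of those prefork numbers against the quoted memory figures; 190 MB of RSS overstates the true per-child cost because it counts shared pages, and the reserved figure is an assumption for MySQL/PHP/OS headroom, but the gap is still enormous.

        ram_mb        = 4040068 // 1024   # from the top output above (~3945 MB)
        apache_rss_mb = 190               # reported per-process size
        reserved_mb   = 1024              # assumption: MySQL, OS, filesystem cache

        safe_max_clients = (ram_mb - reserved_mb) // apache_rss_mb
        print(f"RAM left for Apache: {ram_mb - reserved_mb} MB")
        print(f"Processes that fit at ~{apache_rss_mb} MB each: {safe_max_clients}")
        print(f"Worst case at MaxClients 1500: {1500 * apache_rss_mb // 1024} GB")

    With MaxClients and ServerLimit at 1500, a traffic spike can spawn far more children than a 4 GB box can hold, driving the machine deep into swap so that SSH and HTTP stop responding while ping, which is answered in the kernel, still works; that is consistent with what you saw.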

    Read the article

  • What are the most likely bottlenecks determining the performance of CamStudio screen recording?

    - by Steve314
    When doing screen recording I can get a frame rate of maybe 15 frames per second for the full screen of my 1080p monitor using the XVID codec. I can increase the speed a bit by recording a region, changing screen modes and tweaking other settings, but I'm curious which hardware upgrades might give me the biggest bang for my buck. My PC is budget but modern: an Athlon II X4 645 (3.1 GHz, quad core, limited cache), 4 GB of single-channel DDR3-1066 RAM, an ASRock motherboard with the NVIDIA GeForce 7025/nForce 630a chipset, and an ATI Radeon HD 5450 graphics card with 512 MB on board, not configured to steal system RAM. I dual-boot Windows XP and Windows 7; for the moment XP is my bigger performance concern, as it's still my getting-things-done OS as opposed to my browser-host OS. My goal is to make a few programming-related tutorials. For a lot of that I don't need screen recording: I can make up some slides, record audio with the PC switched off, and so on. When I do need screen recording, I'll mostly be recording Notepad++, Visual Studio or a command prompt; occasionally I may record some kind of graphics or diagram program while using my cheap pre-Bamboo Wacom tablet. I have the CS2 versions of Photoshop and Illustrator, but I'd much more likely be using Microsoft Paint. Basically, what I'll be recording won't make huge demands on the machine, but recording a fair number of pixels (720p preferred) would be useful. What's particularly weird is that not so long ago I still had a five-year-old Pentium 4 based PC, and with the same 1080p monitor it could record at not far from the same frame rate, so clearly the performance issues are more subtle than just throw-money-at-it. My first guess would be that the main bottleneck is the bandwidth for transferring data to and from the graphics card. Is that likely to be correct? In support of that, a Radeon HD 5450 review lists a memory bandwidth of only 12.8 GB/s; if you can't get data out of graphics memory quickly, you can't transfer it back to system memory quickly. Apparently that's slower than some top-end cards from 2002.
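
    As a sanity check on the bandwidth theory, the raw frame data for uncompressed 1080p capture is easy to work out; this ignores colour-space conversion and the codec, so treat it as an order-of-magnitude sketch.

        width, height, bytes_per_pixel = 1920, 1080, 4   # 32-bit BGRA frame

        frame_mb = width * height * bytes_per_pixel / 1e6
        for fps in (15, 30, 60):
            print(f"{fps:2d} fps -> {frame_mb * fps:6.1f} MB/s of raw frames")

    That is roughly 8.3 MB per frame, or about 124 MB/s at 15 fps, which is tiny next to the card's 12.8 GB/s internal figure. It suggests, though only profiling will confirm, that the capture path on the CPU and the XVID encode are more plausible bottlenecks than the card's raw memory bandwidth.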

    Read the article

  • nginx static file buffer

    - by Philip
    I have an NFS share to which several frontend servers are connected, making the files stored on it available for HTTP download. It looks like I have problems with the way Apache serves the files: there seems to be a very small buffer, or no buffer at all, which results in a lot of disk seeks. I did some testing in which I loaded the whole requested file into memory at once and served it to the client from memory, and with that technique I needed far fewer disk seeks per download stream. Since I don't want to implement this myself for production use, I thought I could perhaps use nginx, because its documentation says it uses buffers for static file serving. Is it possible to increase the buffer size to a few MB, and if so, which config parameter do I have to change? Does anyone have experience with large buffers for static file serving? Is there a better way to reduce disk seeks?
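
    The underlying trade-off is simply that a larger read buffer means fewer reads, and on a network or seek-bound backend that means fewer round trips per stream. A tiny illustration of the arithmetic (the file path is hypothetical):

        import os

        path = "/mnt/nfs/sample.bin"   # hypothetical file on the NFS mount
        size = os.path.getsize(path)
        for chunk in (4 * 1024, 64 * 1024, 1024 * 1024):
            reads = -(-size // chunk)  # ceiling division
            print(f"{chunk // 1024:5d} KiB buffer -> {reads:8d} read() calls")

    In nginx terms, if memory serves, the knobs usually involved for non-sendfile static serving are the output_buffers directive and, on an NFS backend, turning sendfile off so that those buffers are actually used; treat that as a pointer to the documentation rather than a tested recipe.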

    Read the article

  • VMWare workstation: guest OS becomes sluggish after being idle for 12+ hours?

    - by GenEric35
    Hi. My VM becomes sluggish after roughly 12 hours of being idle. There is no impact on the host, just the guest OS, which becomes sluggish. It has lots of RAM, runs on RAID 0 with a quad-core i5 750, and everything is defragmented, but the only way I have found to restore its responsiveness is to shut it down (which dumps the memory) and then start it again; a restart of the guest OS doesn't dump the memory, so I need a full stop of the VM followed by a start. Coming from Hyper-V I had to learn VMware, and after a few months of fine-tuning I'm quite impressed with how configurable it is. This is the only small issue I haven't been able to fix. Has anyone else encountered this?

    Read the article

  • Will adding extra RAM in my computer speed it up?

    - by Harry Simpson
    I have a five-year-old Dell Inspiron 530 desktop that is slowly grinding to a halt. Someone told me that adding extra RAM will speed it up. Inside the computer there are four memory slots, but only two are populated, with 1 GB each. If I bought another two 1 GB modules and put them in the free slots, would it speed the computer up (would it be twice as fast?), and is it as simple as just putting them in, or are there other things I need to do?

    Read the article

  • How do I make subsonic (media server) work with SSL?

    - by John Baber
    The roughly out-of-the-box setup as a regular user works fine (meaning the site appears at http://myserver.com:4040). From ps aux:

        java -Xmx100m -Dsubsonic.home=/var/subsonic -Dsubsonic.host=0.0.0.0 -Dsubsonic.port=4040 -Dsubsonic.httpsPort=0 -Dsubsonic.contextPath=/ -Dsubsonic.defaultMusicFolder=/var/music -Dsubsonic.defaultPodcastFolder=/var/music/Podcast -Dsubsonic.defaultPlaylistFolder=/var/playlists -Djava.awt.headless=true -verbose:gc -jar subsonic-booter-jar-with-dependencies.jar

    but merely giving it an HTTPS port:

        java -Xmx100m -Dsubsonic.home=/var/subsonic -Dsubsonic.host=0.0.0.0 -Dsubsonic.port=4040 -Dsubsonic.httpsPort=6060 -Dsubsonic.contextPath=/ -Dsubsonic.defaultMusicFolder=/var/music -Dsubsonic.defaultPodcastFolder=/var/music/Podcast -Dsubsonic.defaultPlaylistFolder=/var/playlists -Djava.awt.headless=true -verbose:gc -jar subsonic-booter-jar-with-dependencies.jar

    makes http://myserver.com:4040 respond with:

        HTTP ERROR: 404 NOT_FOUND
        RequestURI=/index.view
        Powered by jetty://

    and https://myserver.com:6060 with "Unable to connect". The only change I am making is

        # SUBSONIC_ARGS="--port=80 --https-port=443 --max-memory=120"
        SUBSONIC_ARGS="--max-memory=100 --https-port=6060"

    in /etc/default/subsonic, followed by sudo service subsonic restart (this is Ubuntu Oneiric).

    Read the article

  • Running 32 bit SQL Server 2005 on 64 bit Windows Server?

    - by TooFat
    If I have a 32-bit version of SQL Server 2005 running on a 64-bit Windows Server, does the maximum amount of memory available to the SQL Server process increase from 2 GB to 4 GB? Reading this blog entry by Mark Russinovich, in which he states that "All Microsoft server products and data intensive executables in Windows are marked with the large address space awareness flag" and "Because the address space on 64-bit Windows is much larger than 4GB, something I'll describe shortly, Windows can give 32-bit processes the maximum 4GB that they can address and use the rest for the operating system's virtual memory", leads me to believe that the answer is "yes", but I am not totally confident.
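
    The arithmetic behind that conclusion is just the size of a 32-bit address space and how Windows splits it. A quick sketch:

        GiB = 1024**3
        print("32-bit address space:            ", 2**32 // GiB, "GiB")
        print("user share on 32-bit Windows:    ", 2**31 // GiB, "GiB (3 GiB with /3GB)")
        print("large-address-aware under WOW64: ", 2**32 // GiB, "GiB")

    So a large-address-aware 32-bit process under 64-bit Windows can address the full 4 GB of user-mode virtual address space instead of the default 2 GB, which supports the "yes" reading of Russinovich's post; how much of that SQL Server actually uses still depends on the instance's own memory settings.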

    Read the article

  • Applications start very slowly from a network path

    - by Snowfox
    Hi. We have a Windows 2008 server which hosts the network share \\srvcompany\lib. The share contains several applications needed for daily business, and every client/user (all Windows XP) has desktop shortcuts to these apps. The problem is that on several (but not all) clients the apps start very slowly; if I copy an application's program files to a local folder, it starts quickly. When I watch memory usage in Task Manager on a "slow" machine while an application starts, I notice that memory usage grows much more slowly than when I start the app on a "fast" machine. Yet when I copy files from the share with Windows Explorer, the speed is nearly the same on both. I have also checked the network driver; both tested clients have the same network card with the same driver version. Does anyone have an idea where or what I should check next to solve this problem? Thanks for any answers.

    Read the article

  • Several devices in Device Manager have Code 3 error

    - by John Straka
    One of our users' machines (a Dell OptiPlex 380 running Windows 7 32-bit) is having a weird driver issue. A bunch of devices report Error Code 3: "The driver for this device might be corrupted, or your system may be running low on memory or other resources. (Code 3)". The system isn't low on memory (only 1 GB in use out of whatever the maximum for 32-bit is). Here's a screenshot of the affected devices. I tried reinstalling the chipset and audio drivers to no avail. I have no idea what prompted it, and I'm not sure how the basic Windows Generic PnP Monitor driver could even be corrupted. What might be causing this error?

    Read the article

  • My new Xbox 360 drive doesn't show up

    - by RobbieGee
    I bought an Xbox 360 Arcade a while ago, and today I got a 120 GB drive from a shop that was having a closing-down sale. I attached the drive to the side of the console, as shown in the picture on the back of the drive. When I go to Settings and look at Memory, it only finds the built-in memory chip, not the hard drive. Am I doing it correctly? I have hardly ever used my Xbox, so I'm not sure whether there is anything more to it. I don't think I fitted the drive incorrectly either; it seems pretty much impossible to do wrong. The box came with only the drive; it doesn't include a transfer kit or the like.

    Read the article

  • Terminal Server Spoolsv.exe error

    - by Voyager
    We have a terminal server, an IBM x3650 with 8 GB of RAM. On many occasions, at least once a day, we get the error: "The instruction at "0x7c8199b2" referenced memory at "0x9ddc2ade". The memory could not be "read". Click OK to terminate the program. Click on CANCEL to debug the program." I have searched many sites, Microsoft's included, but none of them has given a conclusive solution to this problem. While the message is displayed, all our Business Objects reports hang; as soon as we press OK or Cancel, our ERP application (VB / MS SQL) starts working normally again. We have already installed all the printer drivers on the terminal server. Can anyone help?

    Read the article

  • Excel 2010 -Excel cannot complete this task with available resources

    - by Jestep
    I am getting this error when trying to sort a document: "Excel cannot complete this task with available resources." The document isn't particularly large, about 4,000 lines, and I can't figure out why this would start happening; I can sort the same file fine in every version back to Excel 2000 on older, slower computers. The computer runs Windows 7 x64 with 16 GB of RAM and another 16 GB of virtual memory. There is no way all of that memory is actually being exhausted when I can perform the same sort on an old XP machine with 512 MB of RAM, unless Excel 2010's memory usage is inconceivably poorly designed. I found a few forum posts suggesting there might be a bug related to a security update. Any suggestions would be appreciated.

    Read the article
