Search Results

Search found 22993 results on 920 pages for 'global load balancing'.


  • Understanding ulimit -u

    - by tripleee
    I'd like to understand what's going on here.

        linvx$ ( ulimit -u 123; /bin/echo nst )
        nst
        linvx$ ( ulimit -u 122; /bin/echo nst )
        -bash: fork: Resource temporarily unavailable
        Terminated
        linvx$ ( ulimit -u 123; /bin/echo one; /bin/echo two; /bin/echo three )
        one
        two
        three
        linvx$ ( ulimit -u 123; /bin/echo one & /bin/echo two & /bin/echo three )
        -bash: fork: Resource temporarily unavailable
        Terminated
        one

    I speculate that the first 122 processes are consumed by Bash itself, and that the remaining ulimit governs how many concurrent processes I am allowed to have. The documentation is not very clear on this. Am I missing something? More importantly, for a real-world deployment, how can I know what sort of ulimit is realistic? It's a long-running daemon which spawns worker threads on demand, and reaps them when the load decreases. I've had it spin the server to its death a few times. The most important limit is probably memory, which I have now limited to 200M per process, but I'd like to figure out how I can enforce a limit on the number of children (the program does allow me to configure a maximum, but how do I know there are no bugs in that part of the code?)
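    For sizing such a limit, a hedged starting point is to measure how many kernel tasks (processes plus threads; ulimit -u counts both) the daemon's account actually owns under peak load, then add headroom. A minimal sketch, assuming a Linux box with GNU ps and a hypothetical service account called appuser:

        # Count tasks (processes + threads) currently owned by the account;
        # RLIMIT_NPROC / ulimit -u applies to this total, not just direct children.
        user=appuser
        ps -L -u "$user" --no-headers | wc -l

        # Sample every 10 seconds and keep the running peak, to size the limit with headroom.
        max=0
        while sleep 10; do
            n=$(ps -L -u "$user" --no-headers | wc -l)
            [ "$n" -gt "$max" ] && max=$n && echo "new peak: $max tasks"
        done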

    Read the article

  • Svchost.exe connecting to different IPs with remote port 445

    - by Coll911
    I'm using Windows XP Professional SP2. Whenever I start Windows, svchost.exe starts connecting to all the possible IPs on the LAN, from 192.168.1.2 to 192.168.1.200. The local port ranges from 1000-1099 and the remote port is 445. After it's done with the local IPs, it starts connecting to other random IPs. I tried blocking connections to port 445 using the local security policies, but it didn't work. Is there any way I could prevent svchost from connecting to these IPs without installing a firewall? My PC slows down due to the load. I scanned my PC with MalwareBytes and found out it was infected with a worm; it's deleted now, but svchost is still connecting to the IPs. I also found that in my Windows Firewall settings, under Internet Control Message Protocol (ICMP), there's a tick on "allow incoming echo request" (usually disabled) which is locked so I can't disable it. Its description is as follows: "Messages sent to this computer will be repeated back to the sender. This is used for troubleshooting, e.g. to ping a machine. Requests of this type are automatically allowed if TCP port 445 is enabled." Any solutions? I can't face going through the whole reinstall-Windows routine again.

    Read the article

  • LAMP -- editing a PHP file doesn't change the web output -- not even die()

    - by Reid W
    The server is a standard Linux server on Amazon Web Services: CentOS 5, Apache, PHP 5.3, no APC. It's worked fine for over a year, but now when I edit some (but not all) PHP files on the server using vi, the changes don't affect the web output. For example, I edit myfile.php and put a die() at the top, but when I load the page in my web browser, instead of the die() I see the content that would show up if the die() weren't there. svn updating the file in question doesn't help either. Files are on an Amazon EBS partition symlinked to /var/www/html. Just to reiterate -- this has worked fine for a long time. Restarting Apache didn't help, nor did rebooting the server. What's weird is that it's just some of the files, not all. File ownership/permissions are the same for the "good" and "problem" files. I'm not a Linux newbie but am at a complete loss with this, and couldn't find anything on Google either. Any hints would be much appreciated!
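    One hedged first step, assuming shell access, is to confirm that Apache is actually reading the copy of the file you edited -- with a symlinked EBS docroot it is easy to edit one path while the web server resolves another, and a stray opcode cache produces exactly this "edits ignored" symptom even when APC is supposedly absent. The EBS mount path below is hypothetical:

        # Follow the symlink chain to the file the web server really resolves.
        readlink -f /var/www/html/myfile.php

        # Compare the resolved copy against the one you edited (hypothetical EBS path).
        md5sum /var/www/html/myfile.php /mnt/ebs/site/myfile.php

        # Is the new die() present in the resolved file at all?
        grep -n 'die(' /var/www/html/myfile.php

        # Rule out an opcode cache despite "No APC".
        php -m | grep -iE 'apc|opcache|eaccelerator|xcache'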

    Read the article

  • Understanding where an Amazon EC2 instance runs

    - by kenzo450D
    I am currently using the AWS API from my local desktop. I can successfully take backups of my Amazon volumes, and even create an AMI from them. Now, when I want to run an instance built from this AMI, where does the instance run -- in their Elastic Compute Cloud, or on the computer from which the command was issued? Suppose I want to create the new instance in a new region (locations as defined in ec2-describe-regions) -- how would I do that? It seems I have a poor understanding of the relationship between Amazon volumes and instances; please explain it. I am only allowed to use the CLI tools to do all of my work. I made a new snapshot of the existing instance, made an AMI using ec2-register, made a keypair, and then followed these steps: http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/launching-an-instance.html#launching-an-instance-cli but I got this error:

        Client.InvalidParameterValue: The requested instance type's architecture (i386) does not match the architecture in the manifest for aki-fc37bacc (x86_64)

    My local computer is 32-bit, but I do not want to run the instance on my local computer -- I want it on Amazon's servers.
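    For what it's worth, the instance always runs inside EC2, in whatever region and availability zone you target; the CLI on your desktop only sends API calls. A hedged sketch with the classic ec2-api-tools, using hypothetical IDs, that avoids the architecture error by pairing the x86_64 kernel image with a 64-bit instance type:

        # The AMI references a 64-bit kernel (the aki is x86_64), so choose a 64-bit
        # instance type such as m1.large; the 32-bit local machine is irrelevant.
        ec2-run-instances ami-12345678 -t m1.large -k my-keypair \
            --region eu-west-1 -z eu-west-1a

        # To launch in a different region, the AMI (or its snapshot) must first exist
        # there -- e.g. via ec2-copy-image / ec2-copy-snapshot -- before ec2-run-instances
        # with --region will find it.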

    Read the article

  • Plesk Uninstall Memory issue

    - by user115079
    I am trying to uninstall Plesk from my VPS by running the following command:

        yum remove sw-* psa-* plesk-*

    When I run this command I get the following error:

        Running rpm_check_debug
        Running Transaction Test
        memory alloc (4 bytes) returned NULL.

    The first time I ran the command, the number in the memory alloc error was very large, something like 67864987. Then I googled it, found some clear/ulimit suggestions, executed them, rebooted the system, stopped all processes and ran the command again, but I still get the 4-byte error and don't know how to get rid of it. I also tried ulimit after the reboot with no success, and yes, there is no swap attached. These are the stats of my system:

        [root@vps ~]# free -m
                     total       used       free     shared    buffers     cached
        Mem:           384         67        316          0          0          0
        -/+ buffers/cache:          67        316
        Swap:            0          0          0

        top - 21:01:07 up 3:12, 1 user, load average: 0.24, 0.08, 0.03
        Tasks: 31 total, 2 running, 29 sleeping, 0 stopped, 0 zombie
        Cpu(s): 0.0%us, 0.0%sy, 0.0%ni,100.0%id, 0.0%wa, 0.0%hi, 0.0%si, 0.0%st
        Mem: 393216k total, 69832k used, 323384k free, 0k buffers
        Swap: 0k total, 0k used, 0k free, 0k cached

    Is there any other way to achieve my goal of uninstalling Plesk? Thanks.
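    One hedged workaround, given 384 MB of RAM and no swap, is to give yum temporary swap space before retrying. The sketch below assumes a few hundred MB of free disk and a kernel that permits swap files; many OpenVZ/Virtuozzo containers do not, in which case the extra memory has to come from the hosting plan:

        # Create and enable a temporary 512 MB swap file, retry the removal, then clean up.
        dd if=/dev/zero of=/swapfile bs=1M count=512
        chmod 600 /swapfile
        mkswap /swapfile
        swapon /swapfile

        yum remove 'sw-*' 'psa-*' 'plesk-*'

        swapoff /swapfile && rm -f /swapfile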

    Read the article

  • Working with different PHP versions at the same time -- php_value extension_dir not working?

    - by Gremo
    I need both PHP 5.4.7 and 5.3.17 running on Windows 7 x64 with Apache 2.2.23. This is my virtual host configuration:

        <VirtualHost *:80>
            DocumentRoot "C:/WAMP/Apache/htdocs/php54"
            ServerName php54.local
            PHPIniDir "C:/WAMP/PHP54"
            LoadModule php5_module "C:/WAMP/PHP54/php5apache2_2.dll"
            php_value extension_dir "C:/WAMP/PHP54/ext"
            <Directory "C:/WAMP/Apache/htdocs/php54">
                Order allow,deny
                Allow from all
            </Directory>
        </VirtualHost>

    The PHPIniDir and LoadModule directives work fine, and phpinfo() inside my script prints the right PHP version. But I need to load extensions, and this is where it fails: php_value extension_dir should be C:/WAMP/PHP54/ext but it is the default, C:/php. What am I missing here? EDIT: Of course I can set this value directly in C:/WAMP/PHP54/php.ini, but I would prefer passing it in the vhost configuration:

        ; Directory in which the loadable extensions (modules) reside.
        ; http://php.net/extension-dir
        ; extension_dir = "./"
        ; On windows:
        extension_dir = "C:/WAMP/PHP54/ext"

    Read the article

  • Which components should I invest in for a backup machine?

    - by Senthil
    I am a freelance developer. I have a PC, a laptop and an old testing and file server machine, and I might add one or two more in future. I want an on-site backup machine that can handle backups of ALL these machines -- file backups, MySQL backups, backups of a Subversion repository, etc. When building the machine, which components should I invest more in? Examples: the cabinet should have lots of room for expansion; hard disk size should be large, but I guess hard disk speed need not be high (?). But for other components -- RAM, PSU, processor, network card, cooling, etc. -- how much relative importance do these have in a backup machine? Which of these components should be high-end or large, and which ones need not be? Some idea of the load: there will be TBs of data. File backups and Subversion repository backups will be done at least daily, MySQL backups weekly. Assume 3 machines at the moment and somewhere around 10 machines in the future.

    Read the article

  • Network bandwidth usage dashboard?

    - by SkippyFlipjack
    I have a couple of wifi access points hooked up to my home network, one of which I keep unsecured for some development I do; there are only a couple other homes within range and they've got their own wifi so it's not a big concern. I also have a Sonos system, Tivo, Roku, a couple laptops, a couple phones, an iPad and a desktop machine, all of which are internet-smart. So when my internet bandwidth tanks and it takes five minutes to load a YouTube video, I want to know what's going on, and there are many potential culprits. I'd like to be able to plug my MacBook into the primary router and see a nice little dashboard of the units on the network and what kind of bandwidth each is using at that moment. I could figure this out from WireShark or tcpdump but figure there has to be an easier way. I've tried a few different commercial products but none really presented the right info. Suggestions? (This may be a question for superuser since my Apple Time Capsule's SNMP capabilities are limited, but I figure admins of small business networks would have dealt w/ the same issue..)
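    A hedged stopgap while looking for a proper dashboard: purpose-built tools such as iftop or ntop give a live per-host view, and even plain tcpdump can rank hosts by chattiness over a short window. A minimal sketch, assuming the MacBook actually sees the LAN traffic on interface en0 and the network is 192.168.1.0/24 (adjust both; packet counts are only a rough proxy for bandwidth):

        # Sample 2000 packets and rank local source hosts by how many they account for.
        sudo tcpdump -i en0 -n -q -c 2000 'net 192.168.1.0/24' 2>/dev/null \
          | awk '$2 == "IP" {
                    split($3, a, ".");                      # src looks like 192.168.1.5.52000
                    host = a[1] "." a[2] "." a[3] "." a[4];
                    count[host]++
                 }
                 END { for (h in count) print count[h], h }' \
          | sort -rn | head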

    Read the article

  • Arduino IDE "launch 4j" error

    - by John
    I have a computer running Windows XP. I am trying to run the Arduino IDE 0022. I double-click on arduino.exe, it waits about 30 seconds on the load-up title screen, and then it gives me this error:

        Launch 4j: an error occurred while starting the application

    My only choice is to click "OK"; the error goes away, and the Arduino IDE closes. If I try to delete the Arduino files (to try overwriting with some different files), I get an error that doesn't allow me to do so:

        Cannot delete awt.dll: Access denied
        Make sure the disk is not full or write protected and that the file is not currently in use.

    The only way to delete the file is by restarting the computer. So something must still be trying to run after that first error. I have noticed in Task Manager that some Java programs are still running: javaw.exe (3 processes). I think this is a problem with Java, but I checked and updated all of my Java software and it is all up to date. I have looked on other forums for this issue and none of them seemed to help. From the forums I have tried:

        Different Arduino IDE versions
        Updating Java
        Opening arduino.exe as Administrator

    Nothing has worked. Anyone have any suggestions?

    Read the article

  • Our client's site is redirecting to a scammy pill site [closed]

    - by Alex Demchak
    Possible Duplicate: My server's been hacked EMERGENCY

    We usually host our clients' sites, but we aren't hosting this one. The website itself (weddle-funeral.com) works just fine, but if you load Google, search for weddle funeral stayton oregon, and click that link, the site redirects to a scammy pill site. I went through the site and there were some PHP files in the WordPress plugins that got quarantined by my antivirus. I removed ALL non-essential files and uploaded fresh versions of all the plugins, but it's STILL redirecting from Google. I tried logging in to the cPanel (on a virtual private server), and the cPanel flashed a red warning screen:

        The site's security certificate is not trusted! You attempted to reach XXXXX.com, but the server
        presented a certificate issued by an entity that is not trusted by your computer's operating system.
        This may mean that the server has generated its own security credentials, which Google Chrome cannot
        rely on for identity information, or an attacker may be trying to intercept your communications.
        You should not proceed, especially if you have never seen this warning before for this site.

    (Keep in mind, that's for the HOSTING account's cPanel.) Is there something in the SERVER, probably, that's causing the redirect? EDIT: .htaccess file contents:

        # BEGIN WordPress
        <IfModule mod_rewrite.c>
        RewriteEngine On
        RewriteBase /
        RewriteRule ^index\.php$ - [L]
        RewriteCond %{REQUEST_FILENAME} !-f
        RewriteCond %{REQUEST_FILENAME} !-d
        RewriteRule . /index.php [L]
        </IfModule>
        # END WordPress
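    As a hedged next step (assuming shell or at least FTP access to the document root): referrer-conditional pill spam is usually injected PHP rather than anything in that .htaccess, so it is worth sweeping the tree for the common obfuscation patterns, checking what changed recently, and reproducing the cloaked redirect directly. The docroot path below is a placeholder:

        # Sweep the WordPress tree for common injection patterns (eval/base64 obfuscation).
        grep -rn --include='*.php' -E 'eval\s*\(|base64_decode\s*\(|gzinflate\s*\(' /path/to/docroot

        # PHP files modified in the last two weeks -- compromised files often cluster by date.
        find /path/to/docroot -name '*.php' -mtime -14 -printf '%TY-%Tm-%Td %p\n' | sort

        # Reproduce the cloaking: request the page as a visitor arriving from a Google search.
        curl -IL -A 'Mozilla/5.0' -e 'https://www.google.com/search?q=weddle+funeral' http://weddle-funeral.com/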

    Read the article

  • How can I optimize my ajax calls to deliver at 60 ms?

    - by Quintin Par
    I am building autocomplete functionality for my site, and Google's instant results are my benchmark. When I look at Google, the 50-60 ms response times baffle me -- they look insane. In comparison, here's how mine looks. To give you an idea, my results are cached on the load balancer and served from a machine that has httpd slow start and initcwnd fixed; my site is also behind CloudFlare. From a server-side perspective I don't think I can do anything more. Can someone help me take this 500 ms response time to 60 ms? What more should I be doing to achieve Google-level performance? Edit: People, you seemed to be angry that I did a comparison to Google and that the question is very generic. Sorry about that. To rephrase: how can I bring the response time down from 500 ms to 60 ms, given that my server response time is just a fraction of a millisecond? Assume the results are served from Nginx -> Varnish with a cache hit. Here are some answers I would give myself, assuming the response sizes remain more or less the same:

        Ensure results are HTTP compressed.
        Ensure SPDY if you are on HTTPS.
        Ensure initcwnd is set to 10 and disable slow start on Linux machines.
        Etc.

    I don't think I'll end up at Google's 60 ms level, but your collective expertise can easily help shave off 100 ms, and that's a big win.
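    Before tuning further, a hedged way to see where the 500 ms actually goes (DNS, TCP and TLS handshakes, server wait, transfer) is curl's timing breakdown; the endpoint below is a placeholder:

        # Break one autocomplete request into its phases (values are in seconds).
        curl -s -o /dev/null -w '
        dns         %{time_namelookup}
        tcp connect %{time_connect}
        tls         %{time_appconnect}
        first byte  %{time_starttransfer}
        total       %{time_total}
        ' 'https://example.com/autocomplete?q=test'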

    Read the article

  • How do you permanently disable the 'This Connection is Untrusted' page on Firefox

    - by TheIronChef9
    I'm going insane. Can someone please help me COMPLETELY DISABLE the 'This Connection is Untrusted' page in Firefox. Facts:

        I am running Firefox 23.0 on an Ubuntu machine (downloaded and installed Ubuntu today).
        It is a work computer and I have to use my employer's proxy.
        Visiting webpages/webapps like Gmail or Google brings up the 'This Connection is Untrusted' page and I have to go through the whole tedious task of selecting 'I understand the Risks', adding exceptions, etc.
        The fact is, I don't care about the risks. I would rather this computer melt into the ground than have to see that page ever again. I want to dance naked in untrusted pages and not give a damn about the consequences. I just never want to see that page again. Ever.
        For some sites (e.g. Wikipedia), the CSS doesn't load and I end up seeing them in plain text. As a result these sites are completely useless. Wasted hours trying to solve this for stackoverflow.com.
        These issues happen in Firefox on my Windows XP machine as well (also using the same proxy).

    I don't want to export/import certificates or create exceptions for every site that shows this bloody page. I just want this page gone. I don't want Firefox to tell me what's safe and what's not. Also, my system time and date are correct. I've also tried the lies on this page too, with no good results. Edit: I've also tried going into the Advanced -> Certificates -> Validation setup page and unchecking the 'Use the Online Certificate Status Protocol (OCSP) to confirm the current validity of certificates' checkbox. Nothing happened, even after restarting Firefox or rebooting. I need help. Thanks.

    Read the article

  • Change the background color of selected text in Google Docs to increase readability [migrated]

    - by gene_wood
    How can I override or change the background color of text selected in Google Docs? It is difficult for me to see the difference and I would like to increase the contrast. After Google restyled Google Docs last year (or earlier this year), I've been unable to see selected text. It's possible this is a visual deficiency with my eyes. In Google Docs, under both Google Chrome (17.0.963.83 (Official Build 127885) m) and Firefox (11.0), when I select text inside a Google Doc, the selected text has a background color of #d6e0f5, compared to the default browser selection background color of #2f65c0. (I determined the color of the selected-text background by taking a screenshot and using the color picker tool in Photoshop.) I've tested this using a brand new Firefox profile as well as a brand new Google Chrome profile. Here's a section of a screenshot showing the selected text. I've tried using a userscript to override the CSS and go back to the default text selection color, using the "Stylish" plugin with this CSS:

        ::selection { background:#2f65c0; color:#ffffff; }
        ::-moz-selection { background:#2f65c0; color:#ffffff; }
        ::-webkit-selection { background:#2f65c0; color:#ffffff; }

    This code works on other sites, but I'm unable to get it to work on Google Docs. (I tested on other sites by applying the userscript to a different domain and using bright yellow instead of the default dark blue #2f65c0.) When you use Google Docs, do you have the same background color for selected text or something different? (To test this, browse to docs.google.com, create a document, type text into the document, select the text with the mouse by dragging over it, take a screenshot, load the screenshot in an image editor and determine the background color of the selected text.) This color differential (between light blue #d6e0f5 and white #ffffff) may be easy for others to see, and the problem may lie with my eyes.

    Read the article

  • How to find out which process is hogging a Linux server?

    - by user1149518
    We have a RHEL server. Today it suddenly became slow. Symptoms: it was responding slowly to ping queries from another server, and when I tried to log in using ssh, it took about 10 seconds. I was able to resolve the problem by doing some guesswork: I killed one process which I thought was the culprit, which fixed it. However, I would like to know the proper approach to detecting the culprit in this kind of "slow server" situation, and the proper way of resolving such slowness issues. These were the conditions when the server was slow:

        # vmstat 3 3
        procs -----------memory---------- ---swap-- -----io---- --system-- -----cpu------
         r  b   swpd    free    buff   cache   si   so    bi    bo    in    cs us sy id wa st
         1  1    176 6730868  285052 4899676    0    0     3     4     0     0  1  1 97  1  0
         0  0    176 6751576  285064 4899704    0    0     0   115 15307 37171  1  1 96  3  0
         0  0    176 6751948  285068 4899700    0    0     0    23 14813 39559  1  1 98  1  0

        # top
        top - 16:38:18 up 150 days, 19:36, 64 users, load average: 1.68, 1.46, 1.44
        Tasks: 1287 total, 2 running, 1284 sleeping, 1 stopped, 0 zombie
        Cpu(s): 1.3%us, 1.7%sy, 0.1%ni, 95.9%id, 0.7%wa, 0.0%hi, 0.2%si, 0.0%st
        Mem: 16620824k total, 9867124k used, 6753700k free, 287424k buffers
        Swap: 8193140k total, 176k used, 8192964k free, 4898996k cached

          PID USER  PR  NI  VIRT  RES  SHR S %CPU %MEM    TIME+  COMMAND
        26258 khk   34  19  130m  47m 7088 S 11.2  0.3 385:32.42 edm
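    For next time, a hedged checklist rather than guesswork: first identify the heaviest CPU and memory consumers, then check whether the box is actually I/O- or memory-bound, since slow logins with a mostly idle CPU (as in the top output above) often point at disk, swap, DNS, or sheer process count rather than a single CPU hog:

        # Heaviest CPU and memory consumers right now.
        ps aux --sort=-%cpu | head -15
        ps aux --sort=-%mem | head -15

        # Is the box I/O-bound? High %iowait / await points at disk rather than CPU.
        iostat -x 3 3          # from the sysstat package
        vmstat 3 3             # the si/so columns reveal swapping

        # Processes in uninterruptible sleep (state D) are usually stuck waiting on I/O.
        ps -eo state,pid,user,wchan:30,cmd | awk '$1 == "D"'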

    Read the article

  • Why is my browser using so much memory?

    - by Steve
    Hi. I've recently had problems with Firefox running very slowly when I have many tabs open -- say 20 tabs -- and my whole system would slow down. I decided to give Google Chrome a try, and it started out fine, but lately I am finding that it too slows down my whole system. Looking at Task Manager, chrome.exe is using about 250 MB of memory across about 6 different entries. However, when I shut Chrome down, memory usage is reduced by about 600 MB. How can this be? (Screenshot shows the drop in memory usage after ending Chrome.) When my system locks up with Chrome having many tabs open, it takes 10 seconds to load the Start Menu, 10 seconds to expand All Programs and each folder and subfolder, 30 seconds for the program to be highlighted under my mouse, and 10 seconds to switch to Notepad. Why does Chrome appear to use so much more memory than Task Manager indicates? Why is my pagefile being used when I have around 1.1 GB of memory? Can I set Chrome to run in RAM and not in the pagefile? How can 20 tabs use 600 MB? That's 30 MB per tab. Thanks for your help.

    Read the article

  • troubleshooting postfix -> Exchange connection issues

    - by Systemspoet
    I have three Linux-based mail routers that run postfix and relay mail to our on-premise Exchange server as well as to outlook.com, splitting the mail based on LDAP attributes. What I've observed sporadically since upgrading this spring from Exchange 2007 to 2010 is that all three of the mail relays will, for about 20 minutes, fail to connect to Exchange. Postfix logs it as "lost connection with exchange.contosso.edu"; the problem almost always hits all three mail relays at the same time and lasts for slightly under 20 minutes. If I can catch it while it's occurring, and I manually "telnet exchange.contosso.edu 25" from one mail relay and force a message through (helo, mail from, rcpt to, data, etc.), then that relay clears up. The Exchange "server" is actually two machines with the Hub Transport (HT) role on them, load balanced via Windows NLB. I've worked pretty hard to figure out what's happening from the postfix side and I can't see any evidence of misbehavior. My question is: how do I attack the problem from the Exchange side? Is there a connection log, or a debug setting, or something I can do to log all of the inbound connections and tell me what's causing Exchange to drop them?

    Read the article

  • Transfer iptables rules to another server (almost) real time

    - by MrShunz
    I'm running 2 cPanel servers with the ConfigServer Security & Firewall (CSF) plugin. One of the functions of the plugin is to block via iptables (temporarily and/or permanently) IPs which fail various authentications (POP3/IMAP, SMTP, FTP, webmail, mod_security and such). Now I'd like to push those IP blocks to the border router, to drop packets as soon as possible (and in doing so protect the other machines on the network). Keep in mind that after N failed logins an IP is blocked for 5 minutes, then re-allowed; if multiple bans occur in an hour the IP is blocked permanently and has to be unblocked "by hand". So I need a near-realtime solution. What I'm looking for is a better way than firing some cronjobs both on the cPanel machines and on the border router to:

        dump the rules to a file
        transfer the file to the border router (via scp/sftp)
        load the rules from the file on the border router

    I'm aware that I will need some scripts to parse and modify the rules, as the cPanel machines have one ethernet interface and some aliases while the border router has two ethernet interfaces and some loopbacks. All machines involved run Linux. EDIT (as per @pjmorse's comment): the plugin consists of a bunch of perl and config files. The part I'm interested in is a process (lfd) which scans logfiles and installs iptables rules (and sends an alert email). It upgrades quite often (once or twice a week) and is itself 7000 lines of perl, so I'm not comfortable tampering with it.
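    A hedged sketch of those three steps collapsed into one near-realtime push, assuming key-based SSH from each cPanel box to the router, that the plugin keeps its blocks in CSF's usual DENYIN chain (verify the chain name on your install), and a pre-created EDGE_BLOCKS chain on the router referenced from its INPUT/FORWARD chains; run it from cron every minute or so rather than patching lfd itself:

        #!/bin/bash
        # Push this relay's current block list to the border router whenever it changes.
        ROUTER=border-router.example.com        # hypothetical hostname
        STATE=/var/run/edge_blocks.last

        # Extract just the blocked source addresses from the CSF chain.
        iptables -S DENYIN 2>/dev/null \
            | awk '{ for (i = 1; i <= NF; i++) if ($i == "-s") print $(i+1) }' \
            | sort -u > "${STATE}.new"

        # Only talk to the router when the list actually changed.
        if ! cmp -s "$STATE" "${STATE}.new"; then
            ssh "$ROUTER" '
                iptables -F EDGE_BLOCKS
                while read -r net; do
                    iptables -A EDGE_BLOCKS -s "$net" -j DROP
                done
            ' < "${STATE}.new"
            mv "${STATE}.new" "$STATE"
        fi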

    Read the article

  • memory usage setting

    - by user127610
    Everybody, the memory usage is too high -- what can I do?

        top - 12:54:37 up 7 days, 4:38, 1 user, load average: 0.00, 0.00, 0.00
        Tasks: 18 total, 2 running, 16 sleeping, 0 stopped, 0 zombie
        Cpu(s): 0.0%us, 0.0%sy, 0.0%ni,100.0%id, 0.0%wa, 0.0%hi, 0.0%si, 0.0%st
        Mem: 1048800k total, 917424k used, 131376k free, 0k buffers
        Swap: 0k total, 0k used, 0k free, 0k cached

          PID USER   PR  NI  VIRT  RES  SHR S %CPU %MEM   TIME+  COMMAND
            1 root   15   0  2840 1364 1204 S  0.0  0.1  0:02.17 init
         1161 root   14  -4  2320  600  420 S  0.0  0.1  0:00.00 udevd
         1391 root   18   0 35512 1288  948 S  0.0  0.1  0:03.53 rsyslogd
         1409 root   15   0  8432 1164  700 S  0.0  0.1  0:03.87 sshd
         1416 root   18   0  3156  868  692 S  0.0  0.1  0:00.00 xinetd
         1423 root   18   0  8672  716  292 S  0.0  0.1  0:00.00 saslauthd
         1424 root   18   0  8672  488   64 S  0.0  0.0  0:00.00 saslauthd
         1431 root   15   0  7020 1168  616 S  0.0  0.1  0:00.99 crond
         1450 root   25   0  6236 1444 1228 S  0.0  0.1  0:00.05 sh
         3328 mysql  15   0  799m  42m 4892 S  0.0  4.1  0:02.07 mysqld
        15479 root   15   0 11304 3332 2688 R  0.0  0.3  0:00.06 sshd
        15482 root   15   0  6372 1688 1404 S  0.0  0.2  0:00.00 bash
        15497 root   15   0  2536 1044  864 R  0.0  0.1  0:00.00 top
        20137 www    15   0 20672  14m  864 S  0.0  1.4  0:00.87 nginx
        22351 www    16   0 52324  26m 9244 S  0.0  2.6  0:13.94 php-fpm
        24231 www    16   0 51928  25m 9260 S  0.0  2.5  0:13.52 php-fpm
        32682 root   15   0 35832 3228  864 S  0.0  0.3  0:02.18 php-fpm
        32686 root   18   0  7368 1616  888 S  0.0  0.2  0:00.00 nginx

    Read the article

  • What is the typical maximum number of database connections for Oracle running on Windows Server?

    - by Sake
    We are maintaining a database server that serves a large number of clients, each typically running several client applications. The total number of connections to the database server (Oracle 9i) reaches 800 connections at peak load, and the Windows 2003 server is starting to run out of memory. We are now planning to move to 64-bit Windows in order to gain higher memory capability. As a developer I suggest moving to a multi-tier architecture with connection pooling, which I believe is the natural solution to this problem. However, in order to support my idea, I want information on: what exactly is the typical number of connections allowed for an Oracle database? What is the problem when the number of connections is too high -- too much memory consumption, too many open sockets, or too much context switching between threads? To be a little more specific, how could an Oracle Forms application scale to thousands of users without facing this problem? Should Oracle RAC be applied in this case? I'm sure the answer depends on quite a number of factors, like the exact spec of the hardware being used; I'm expecting a rough estimate or some experience from the real world.
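    As a hedged starting point for the capacity question (assuming sqlplus access on the database server), Oracle's own ceiling is whatever the processes and sessions init parameters allow, and v$resource_limit shows how close the current 800 connections are to it; a small bash-driven check, assuming OS authentication as the Oracle software owner:

        # Show the configured limits and current/high-water utilization.
        printf '%s\n' \
            'SHOW PARAMETER processes' \
            'SHOW PARAMETER sessions' \
            'SELECT resource_name, current_utilization, max_utilization, limit_value' \
            '  FROM v$resource_limit' \
            "  WHERE resource_name IN ('processes', 'sessions');" \
          | sqlplus -s / as sysdba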

    Read the article

  • Apache on CentOS 5.9 VM serves my optimized images corrupted (but my Mac doesn't)

    - by Robert K
    I'm using a Vagrant VM to mirror the client's environment as closely as I can. As part of our build process we do no optimization of assets early on; that comes as we're ready to take a site live. Needless to say, this issue is beginning to worry me as we need to take the site live very soon. I use ImageOptim to automate optimization of image assets, which runs a whole series of tools (Zopfli, PNGOUT, OptiPNG, AdvPNG, PNGCrush). I always set the optimizations to their maximum setting. After optimization, my PNGs start looking like this: What's weird is, if I serve the same file through my Mac's copy of Apache, not through Vagrant, the image loads fine. In fact, the only time it's ever corrupt like this is when the image is served from the Vagrant VM and its install of Drupal. All optimized JPEGs display only the first ~20% of the image. And PNGs, depending on the image, may show either a portion or the "progressive"-style corruption below. The browser itself makes no difference, the same browser will serve an uncorrupted image from my Mac's Apache instance and a corrupt image from the VM. When I disable all PNG optimizations except PNGCrush, and the removal of the PNG metadata, the image is served corrupted. I'm optimizing JPEG images with JPEGmini. The server is running CentOS 5.9, Apache 2.2.3-85, PHP 5.3.3, and Drupal 7. As best as I can tell the error lies somewhere within the VM, either with Apache or with (perhaps) the network stack. Seems like the tools that optimize the compression of the PNGs and JPEGs are what trigger this error. I've already determined that the .htaccess file isn't interfering with how the images load. What should I try to troubleshoot this?
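    One hedged thing to rule out, since this only happens inside the VM: Apache's sendfile/mmap optimizations are known to serve stale or truncated static files from VirtualBox/Vagrant shared folders, and re-optimized images are exactly the files whose size just changed. A quick test, assuming the httpd config lives in /etc/httpd:

        # Check whether sendfile/mmap are currently enabled anywhere in the config.
        grep -ri 'EnableSendfile\|EnableMMAP' /etc/httpd/

        # Turn both off for the shared-folder docroot and restart; if the corruption
        # disappears, the culprit is the shared-folder + sendfile combination.
        cat >> /etc/httpd/conf.d/vagrant-sendfile.conf <<'EOF'
EnableSendfile Off
EnableMMAP Off
EOF
        service httpd restart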

    Read the article

  • Can a VM perform better when only two cores instead of four cores are presented to it?

    - by arcain
    We had a VMware VM at work with two cores allocated to it that ran a pretty heinous process in IIS. Under load the process was maxing out the CPU usage on both cores, so we asked our system engineers to present the other two cores of the physical processor to the VM. The engineer immediately said that this would not improve performance at all, but would make the VM perform worse. That statement didn't make much sense to me, and I'm wondering how what the engineer said could be true. Are there actually cases where four cores presented to a VM would cause worse performance than two cores on the same physical hardware? Let's assume an ideal situation where there's only one VM on the host server, so nothing is being shared with other OS instances. I believe the physical server had a single quad-core processor, and was most likely hosting multiple VMs. I don't really know what version of ESX was running on the host, nor do I know with certainty what the physical processor configuration was, but from within the VM I had access to, I saw two 3.33 GHz AMD processors. In the end, I never got to test the engineer's assertion because 1) while we were trying to get the VM upgraded, we were able to optimize the process and reduce its CPU consumption, and 2) we ended up migrating to a different VM on another ESX server which had four cores presented to it.

    Read the article

  • How to fix Windows 7 device removal notification loop

    - by Barry Kelly
    Bit of an odd one, this. One of our PCs is getting caught in a loop some time after being turned on, usually after a USB storage device has been attached -- sometimes an iPod, sometimes a GPS. Specifically, Windows Explorer starts showing a drive icon and letter (E:, as of right now) for the System partition (the small hidden one at the start of the boot drive). Then the icon disappears. Then it reappears. And disappears. It does this very quickly, at what looks like maybe 50 times a second. CPU usage in this loop is also very high, averaging about 66%. This machine has an i7 920 CPU, which is quad-core with hyperthreading, so this usage rate works out to about five 100%-busy threads, along with whatever normal idle load there is (particularly Task Manager itself). Inspecting with Process Explorer shows that the device removal notification infrastructure has gone berserk: the threads in system service processes (i.e. apart from Windows Explorer) which are using all the CPU power relate to device notification. The Disk Management MMC snap-in also fails to run when the loop starts. The only way to break the loop, it seems, is to reboot the machine. Has anyone seen anything similar to this, and does anyone know of a way to fix it? Machine details:

        Windows 7 x64, fully patched
        i7 920, 12 GB RAM
        Intel SSD 80 GB (X25-M, I believe; not G2)
        2 TB 5.2K disk for bulk storage
        AMD HD 5870

    Further hardware details await. I'm going to go through and update all the drivers I can find.

    Read the article

  • Ubuntu's garbage collection cron job for PHP sessions takes 25 minutes to run, why?

    - by Lamah
    Ubuntu has a cron job set up which looks for and deletes old PHP sessions:

        # Look for and purge old sessions every 30 minutes
        09,39 * * * *   root   [ -x /usr/lib/php5/maxlifetime ] && [ -d /var/lib/php5 ] \
            && find /var/lib/php5/ -depth -mindepth 1 -maxdepth 1 -type f \
               -cmin +$(/usr/lib/php5/maxlifetime) ! -execdir fuser -s {} 2> /dev/null \; -delete

    My problem is that this process is taking a very long time to run, with lots of disk IO. On my CPU usage graph, the cleanup running is represented by the teal spikes. At the beginning of the period, PHP's cleanup jobs were scheduled at the default 09 and 39 minute times. At 15:00 I removed the 39 minute time from cron, so a cleanup job twice the size runs half as often (you can see the peaks get twice as wide and half as frequent). The corresponding graphs for IO time and disk operations show the same pattern. At the peak, where there were about 14,000 sessions active, the cleanup can be seen to run for a full 25 minutes, apparently using 100% of one core of the CPU and what seems to be 100% of the disk IO for the entire period. Why is it so resource intensive? An ls of the session directory /var/lib/php5 takes just a fraction of a second. So why does it take a full 25 minutes to trim old sessions? Is there anything I can do to speed this up? The filesystem for this device is currently ext4, running on Ubuntu Precise 12.04 64-bit. EDIT: I suspect that the load is due to the unusual process "fuser" (since I expect a simple rm to be a damn sight faster than the performance I'm seeing). I'm going to remove the use of fuser and see what happens.
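    A hedged way to confirm that suspicion before changing the cron job is to run the same find over the live session directory twice, once without and once with the per-file fuser call, and compare wall-clock times; both commands below use -print instead of -delete, so nothing is removed:

        # Time the expiry scan WITHOUT the per-file fuser call (non-destructive).
        time find /var/lib/php5/ -depth -mindepth 1 -maxdepth 1 -type f \
            -cmin +$(/usr/lib/php5/maxlifetime) -print | wc -l

        # Time it WITH the fuser check, exactly as the cron job does (still -print, not -delete).
        time find /var/lib/php5/ -depth -mindepth 1 -maxdepth 1 -type f \
            -cmin +$(/usr/lib/php5/maxlifetime) ! -execdir fuser -s {} 2>/dev/null \; -print | wc -l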

    Read the article

  • cannot get mssql working with sql server 2005

    - by Ryan
    I'm a MySQL/Apache user trying my hand at IIS and SQL Server, so please have patience if this is a stupid question. I'm using IIS 7.5, PHP 5.3.13 and SQL Server 2005. IIS is running on port 90; not sure if that makes a difference or not. I know my SQL Server is running because I can explore/connect to it in Server Management Studio. I know PHP is configured properly because //localhost:90/phpinfo.php works fine. I updated the extension line in php.ini to:

        extension=ext/php_msql.dll

    EDIT: However, when I run phpinfo(), under the "configure command" row this is present: --without-mssql. I found/downloaded ntwdblib.dll and placed it in both system32 and the PHP root. All these things were supposed to fix the issue, and they haven't. This is the code I'm using, straight from php.net:

        <?php
        // Server in this format: <computer>\<instance name> or
        // <server>,<port> when using a non-default port number
        $server = 'localhost';

        // Connect to MSSQL
        $link = mssql_connect($server, 'uname', 'pwd');
        if (!$link) {
            die('Something went wrong while connecting to MSSQL');
        }
        ?>

    Obviously I'm using a real username and password, but when I load the file in my browser, I receive a 500 error. Upon checking the log, this is what is displayed:

        2012-06-25 12:41:29 ::1 GET /test.php - 90 - ::1 Mozilla/5.0+(Windows+NT+6.1;+WOW64)+AppleWebKit/536.5+(KHTML,+like+Gecko)+Chrome/19.0.1084.56+Safari/536.5 500 0 0 5

    That (to me) doesn't help much. What am I doing wrong? Thank you

    Read the article

  • MicroSD card getting corrupted for no good reason

    - by ChaosR
    I recently bought an MicroSD card online. It's a Sandisk 16GB class 2. However, it has a nasty problem. Every time I fill it with my data, the fat tables get corrupted. I've tried reformatting it, blanking it, doesn't seem to solve the problem. I have tried windows and linux (ubuntu), both have the problem. I've used my usb microsd readers, and even tried putting it in my phone and putting data on it from there. All have this problem. Now the really odd thing is, besides the corrupted file tables, no programs can find anything wrong with the hardware. I've tried both chkdisk and "badblocks -w", neither give any type of error. Now I don't know if the actual data gets corrupted, or if its just filesystem tables. What happens is that one or more folders start showing a load of chinese-charred (random UTF8 symbols I suppose) folders and files, and it is impossible to do anything with those. All the other data (outside of the corrupted folders) seems fine. I've tried to test it, and the problem doesn't seem to show up until I fill the disk upto about 3~4GB. After that I can still access the data. But as soon as I eject/safely remove/unmount it, the bad things happen somehow. Next time I plug it in, the folders I most recently wrote to (but sometimes also the folders I wrote the time before last time to) are all gibberish. Does anybody have any clue what might be going on here?

    Read the article
