Search Results

Search found 10731 results on 430 pages for 'k day'.


  • Optimize Apache performance

    - by Phliplip
    I'm looking for ways to optimize our current web server, hosted in-house. I'm trying to supply as much relevant information as possible below; please let me know if you would require additional information in order to assist.

    The server runs one single website, an online pizza-ordering platform built on Zend Framework (ver 1). Traffic stats from the last month show approx. 6,000 page loads per day, concentrated mainly around dinnertime, with peaks of around 1,500 loads/hour in that period. We recently upgraded from a 2/2 Mbit ADSL line to 100/100 Mbit fiber (we had assumed the 2 Mbit line was the issue), and we still have performance issues at dinnertime. The website is pretty snappy in low-load periods.

    Hardware:

        CPU: Intel(R) Xeon(R) CPU 5160 @ 3.00GHz (3000.13-MHz K8-class CPU)
        Mem: 328M Active, 4427M Inact, 891M Wired, 244M Cache, 623M Buf, 33M Free
        Swap: 16G Total, 468K Used, 16G Free (6GB physical, 16GB swap)

        Filesystem   Type   Size  Used  Avail  Capacity  Mounted on
        /dev/ad7s1a  ufs    4.8G  768M   3.7G    17%     /
        devfs        devfs  1.0K  1.0K     0B   100%     /dev
        /dev/ad7s1g  ufs    176G  5.2G   157G     3%     /home
        /dev/ad7s1e  ufs    4.8G  2.8M   4.5G     0%     /tmp
        /dev/ad7s1f  ufs     19G  3.5G    14G    19%     /usr
        /dev/ad7s1d  ufs    4.8G  550M   3.9G    12%     /var

    Server OS: FreeBSD 8.2-RELEASE

    Software: apache-2.2.17, php5-5.3.8, mysql-server-5.5

    Apache footprint (example, taken from top):

        31140 www  1  45  0  377M  41588K  lockf  2  0:00  0.00%  httpd
        31122 www  1  44  0  375M  35416K  lockf  2  0:00  0.00%  httpd
        31109 www  1  44  0  375M  38188K  lockf  2  0:00  0.00%  httpd
        31113 www  1  44  0  375M  35188K  lockf  2  0:00  0.00%  httpd

    Apache is using the prefork MPM with APC (Alternative PHP Cache). The SSL module is loaded but not utilized (it doesn't really work, so it isn't used). There is a file containing settings for the MPM modules, but as far as I can see it's not included in httpd.conf (the Include line is commented out), so I would guess the prefork MPM is running on default values too. Here are some other Apache conf values I found, which are included in httpd.conf:

        Timeout 300
        KeepAlive On
        MaxKeepAliveRequests 100
        KeepAliveTimeout 5
        UseCanonicalName Off
        HostnameLookups Off
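
    Since the prefork settings appear to be at their defaults, a first thing worth checking is whether the default process limits are throttling you at peak. A minimal sketch of explicit prefork values (the numbers are assumptions to be sized against your roughly 35-40 MB resident httpd processes and 6 GB of RAM, not recommendations):

        <IfModule mpm_prefork_module>
            StartServers          10
            MinSpareServers       10
            MaxSpareServers       25
            ServerLimit          150
            MaxClients           150
            MaxRequestsPerChild 2000
        </IfModule>

    With KeepAlive On and a 5-second KeepAliveTimeout, each of the roughly 1,500 loads/hour can also pin a child for several idle seconds; dropping KeepAliveTimeout to 2 is a common first tweak when children are exhausted at peak.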

    Read the article

  • How to verify that a physical volume is encrypted? (Ubuntu 10.04 w/ LUKS)

    - by Bob B.
    I am very new to LUKS. During installation, I tried to set up an encrypted physical volume so that everything underneath it would be encrypted. I chose "Use as: physical volume for encryption," the installation completed and I have a working environment. How can I verify that the PV is indeed encrypted? I was never prompted to provide a passphrase, so I most likely missed a step somewhere. At the end of the day, I'd like whole disk encryption if that's possible, so I don't have to worry about which parts of the file system are encrypted and which aren't. If I did miss something, do I have to start over and try again, or can it be done (relatively easily?) after the fact? I would prefer not to introduce more complexity by using TrueCrypt, etc. Environment details: The drives are md raid1. One volume group. A standard boot lv. An encrypted swap lv using a random key (which seems to be working fine). Thank you in advance for your help. This is very much a learn-as-I-go experience.
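
    For reference, a few commands can confirm whether the encryption layer actually exists (a sketch; device and mapping names are assumptions, since the installer usually stacks LVM on top of the crypt device, e.g. md1 -> md1_crypt -> volume group):

        sudo cryptsetup isLuks /dev/md1 && echo "md1 is a LUKS container"
        sudo cryptsetup status md1_crypt      # cipher and key size, if the mapping is active
        sudo dmsetup ls --target crypt        # lists every active crypt mapping

    If dmsetup shows no crypt mapping underneath your root logical volume, the PV was set up unencrypted, which would also explain why you are never prompted for a passphrase at boot.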

    Read the article

  • Directory service unavailable, new hardware, same settings

    - by Alex
    I'm working on a project with 2 sites connected by a VPN. Site 1 has the main server, and there is a secondary server at site 2 which I am trying to replace. The current setup works perfectly, but I can't for the life of me get the replacement server at site 2 up and running. I'm trying to replace like for like, just with upgraded hardware. I have installed the OS (all Server 2003 Standard SP2) and used exactly the same settings as the old server. I have set up Active Directory, and the DNS Server, DHCP Server and WINS Server are configured, all with the same settings as the old server (except IP address and name). I can access Active Directory but I can't do anything; add, edit and delete all return "the directory service is unavailable". No one can log in on any of the computers at site 2 and the internet is down. Plugging the old server back in and connecting it to the network rectifies the issue (so both new and old are connected at site 2): everyone can log in and the internet is back (curious, since the modem connects directly to the switch, and even with the new server online I can connect to the router via IP but not the net). I really don't have much experience, but I've been roped into doing this because my company is too cheap to hire a real network admin. Any suggestions on where I can start to troubleshoot this? It's driving me crazy and I only have a day before all the users are back on site.
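
    If the new box is meant to be a domain controller, the Windows Server 2003 Support Tools include diagnostics that usually pin down whether this is a promotion, replication or DNS problem. A sketch of a starting point to run on the new server (the DC hostname is a placeholder):

        dcdiag /v
        repadmin /showrepl
        netdiag /v
        nslookup site1-dc.yourdomain.local

    "The directory service is unavailable" very often means the new server never completed dcpromo against the existing domain, or cannot resolve and replicate with the site 1 DC, so its copy of the directory never comes online; dcdiag's failed tests should say which.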

    Read the article

  • SharePoint web part fails intermittently

    - by pringly
    I have a MOSS 2007 environment: 2 web servers and a DB server, load-balanced between the two web servers. I deployed a web part recently, which worked fine for a while but failed on web server 2 after a day. When it fails, it gets the error message: 'A Web Part or Web Form Control on this Page cannot be displayed or imported. The type could not be found or it is not registered as safe.' Once it has failed, it will stay that way until an IIS reset is done. The other web server never fails. I tried to force the second web server to fail to recreate the issue and have been unable to do it; I placed it under heavy HTTP traffic and it handled it fine. Put it back in the pool and it failed again after about 7 hours. Also, if I remove the .dll for the web part from the affected web server, the web part doesn't stop working. Is this normal behavior? I checked the bin directory for the site and the global assembly cache, and there is no other copy of the .dll anywhere else on the server. Also, when checking the web part gallery: if the web part has failed it will still appear in the gallery, but when trying to add a new web part, the .dll won't be listed. I have no idea how to continue troubleshooting from here or even fix it. Any ideas?
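
    For what it's worth, the "not registered as safe" message is governed by the SafeControl entries in each web application's web.config, so a registration that differs between the two load-balanced servers could produce exactly this one-sided failure. A sketch of what the entry should look like (assembly name, token and namespace are placeholders for your web part's actual values):

        <SafeControl Assembly="MyWebPart, Version=1.0.0.0, Culture=neutral, PublicKeyToken=0123456789abcdef"
                     Namespace="MyCompany.WebParts" TypeName="*" Safe="True" />

    Diffing this line, and the deployed .dll versions in the GAC and bin, between web server 1 and web server 2 would be a reasonable next step.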

    Read the article

  • Amazon EC2 - Free memory

    - by Damo
    We have an Amazon EC2 small instance running, and over the past few days we noticed that the memory is going down and down. On the small instance we are running Apache and Tomcat 6. Tomcat is started with the following JVM parameters:

        -Xms32m -Xmx128m -XX:PermSize=128m -XX:MaxPermSize=256m

    We use Nagios to monitor things like pending updates, free disk space and memory. Everything else is behaving as expected, but our memory is going down all the time. Our app receives approx half a million hits a day. When I shut down Apache and Tomcat and ran free -m, we had only 594 MB of memory free out of the 1.7 GB of memory. Not much else is running on the small instance, and when running the top command I cannot see where the memory is going. The app we run on Tomcat is a Grails webapp. Could there be a memory leak within our application? I read online that a small Amazon instance is perfect for running Apache and Tomcat. I found a few posts online that showed how to set up Apache and Tomcat to limit the memory usage, and I have already performed those steps. The memory is not being used up as quickly, but it is still decreasing over time. We have other Amazon EC2 small instances running Grails apps, and the memory is fairly steady on those nodes, but they would not be receiving as much traffic. Just to add: when I run the top command on the problem server, I cannot see where all the memory is being used. Any help with this is greatly appreciated. The output of free -m when run on my server is as follows:

                             total    used    free  shared  buffers  cached
        Mem:                  1657    1380     277       0      158     773
        -/+ buffers/cache:             447    1209
        Swap:                  895       0     895

    In your opinion, does this look OK? At what stage would the OS give back memory? Would it wait until free memory reaches 0%, or is this OS-dependent?
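
    One point that may put your mind at ease: Linux deliberately uses otherwise-idle RAM for buffers and page cache and counts it as "used" on the first line of free, but gives it back automatically under memory pressure. The number to watch is therefore the -/+ buffers/cache line. A quick sanity check against the output above:

        # effectively available = free + buffers + cached
        echo $((277 + 158 + 773))    # => 1208 MB, matching the 1209 MB "free" on the -/+ line

    So about 1.2 GB of the 1.7 GB is effectively available, and a slow decline in the first "free" column is most likely just the page cache filling up rather than a leak; a real leak would show the -/+ "used" figure climbing as well.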

    Read the article

  • Why would the Apache parent process restart silently?

    - by miracle
    I run apache 2.2.9 with mpm prefork on debian lenny. Following http://httpd.apache.org/docs/2.2/mod/prefork.html, I would expect that there is one parent process, running as root and listening as configured, which would start child processes as defined by the Min/Max/etc. directives. I expect the children to be restarted as per MaxRequestsPerChild, but the parent process to stay put with one process id until I restart it manually. Out of a little paranoia, I started monitoring listening ports including process ids: I have a cron job every 20 minutes to run netstat -ap | grep LISTEN and diff the output. Sometimes (about once per day) I see a series of this:

        8c8
        < tcp6   0   0  [::]:www    [::]:*   LISTEN   6194/apache2
        ---
        > tcp6   0   0  [::]:www    [::]:*   LISTEN   6607/apache2
        10c10
        < tcp6   0   0  [::]:https  [::]:*   LISTEN   6194/apache2
        ---
        > tcp6   0   0  [::]:https  [::]:*   LISTEN   6607/apache2

    Over a period of an hour or three, the parent would change its pid at least once every 20 minutes, without any explanation in the log files or any other hint that anything is going wrong. This is not what I expected. What am I missing?
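
    A changing parent pid means something is doing a full stop/start: a graceful restart (SIGUSR1) or reload (SIGHUP) keeps the parent's pid. On Debian the usual suspect is a cron-driven job such as logrotate calling the init script with "restart" rather than "reload". A couple of checks worth running (a sketch):

        grep -rn apache2 /etc/logrotate.d/ /etc/cron.d/ /etc/cron.daily/   # who touches apache?
        cat /var/run/apache2.pid   # compare against the pid netstat reports

    If the postrotate block in /etc/logrotate.d/apache2 (or some monitoring tool such as monit) issues a restart, that alone would explain silent pid changes with nothing in the error log.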

    Read the article

  • Cloned CentOS 6.4 web server for test purposes: virtual host, .htaccess and URL redirect issue

    - by Shogoot
    I see similar questions, but not my exact challenge. What I have done so far: I cloned a prod server over to VMware to use it as a test server for new functionality I'm going to write. I'm not a sysadmin by trade, but I'm new to this company and I have to do some things that are outside of my comfort zone (that's a good thing :) ). The prod server has 2 sites on it, s1.com and s2.com. In /html/s1/ and /html/s2/ there's an .htaccess file under each s*/, looking like this:

        RewriteEngine ON
        RewriteBase /
        RewriteCond %{QUERY_STRING} id=([0-9]+)
        RewriteRule ^.* %1.htm
        RewriteCond %{QUERY_STRING} page=modules/checkout
        RewriteRule ^.* order.php
        RewriteCond %{QUERY_STRING} page=pages/sidekart
        RewriteRule ^.* pages/sidekart.htm

    The issue is that s1 has a lot of pages that really belong under a third domain, s3; the rules on lines 4 and 5 redirect them to /html/s1/. An example of such a URL is:

        s3.com/?page=modules/product&id=521614

    I'm trying to get those URLs (without modifying the URL) to resolve against s3's /html/s3/ directory structure. I set this up by adding a new virtual host, s3, to the test server's httpd.conf with tests3.com as ServerName (and changing the other sites to tests1.com and tests2.com), adding an .htaccess to this s3 root directory as well, and making an html/s3/ directory structure which I populated with an index.html, etc. But when I take the same URL and change it to tests3.com/?page=modules/product&id=521614, I get s1's index page showing up in my browser. I've poked around about a day now and I can't figure out why this happens.
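
    When Apache serves the first site's content for a hostname it doesn't recognize, the request usually never matched the new vhost, because the first VirtualHost defined acts as the catch-all default. A minimal sketch of what needs to be in place for the s3 test site (names and paths mirror the question but are assumptions):

        NameVirtualHost *:80

        <VirtualHost *:80>
            ServerName tests3.com
            DocumentRoot /html/s3
            <Directory /html/s3>
                AllowOverride All    # without this, the s3 .htaccess is ignored
            </Directory>
        </VirtualHost>

    Three things worth double-checking: NameVirtualHost *:80 appears exactly once and matches the vhosts' address; tests3.com actually resolves to the test server (hosts file or DNS) from the machine running the browser; and the ServerName matches the hostname in the URL exactly. If any of those fail, Apache silently falls back to the first vhost, which is s1.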

    Read the article

  • 5-year-old server upgrade

    - by rizzo0917
    I am looking to upgrade a server for a web app. Currently the application is running very sluggishly. We've made some adjustments to MySQL (that's another issue in itself) and arranged for the heaviest queries to run against a copy of the database on another server that we keep as a backup; however, this will not last much longer, and we are looking to upgrade. The current server's CPUs are (4) Intel(R) XEON(TM) CPU 2.00GHz, with 1 GB of RAM. The database is 442.5 MiB, with about 1,743,808 records. There are two parts to the program: one side, side A, inserts and updates most of the data; side B reads the data and does some minor updates. Currently our biggest day for side A is 800 users (of 40,000 users all year) entering data into the system, and our side B load is currently unknown, though we have a total of 1,000 clients. The system is most likely going to cap out at 5,000 side B clients, with about 300,000 side A users a year. The current database is 5 years old, so we can expect it to grow pretty rapidly, possibly doubling each year (we can most likely archive older records if it comes to that). So, with that being said, should we get a server for each side of the app, side A being the master and side B the slave, with any updates made on side B routed to side A? So the question is: should I get 2 of these or 1?

        2 x Intel Nehalem Xeon E5520 2.26GHz (8 cores)
        12 GB DDR3 memory
        500 GB SATA II HDD
        100 Mbps port speed

    And naturally I would need a redundant backup, so it could potentially be 4 of them.
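
    If you do split the app across two boxes, the master/slave arrangement you describe maps directly onto standard MySQL replication. A minimal sketch of the relevant my.cnf pieces (server IDs and the read_only choice are illustrative assumptions, not a complete setup):

        # side A (master)
        [mysqld]
        server-id = 1
        log-bin   = mysql-bin

        # side B (slave)
        [mysqld]
        server-id = 2
        relay-log = mysql-relay
        read_only = 1    # side B's own updates are then sent to A, as you describe

    The slave is pointed at the master with CHANGE MASTER TO and started with START SLAVE; side B's minor updates would be issued against side A and replicate back down, which keeps a single write master and avoids conflicts.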

    Read the article

  • How much RAM do I need on my VPS package? Am I being ripped off?

    - by Tamerax
    Hello! So, I'm currently on a VPSVille Cpanel3 account that has 768 MB guaranteed RAM and 2048 MB burst RAM (full details here: http://www.vpsville.ca/cpanel-vps). It's running CentOS, cPanel, Apache and FastCGI. On the server itself I have a Joomla community site with a forum that generally has about 20 people on it at most at any point, and in the evening often no one. It's a pretty small site but it has a number of modules running on it, and it gets about 6,000 visits a month. Also on the server are a WordPress site that gets about 80-150 visits a day, 2 other WordPress sites that aren't developed yet so they get no traffic at all, and 2 static HTML websites that only get about 500 hits a month. All in all, no huge sites. The issue is that I get "out of memory" errors fairly frequently; they kill my server and I need to reboot it to get all my sites up and running again. It seems to me that I shouldn't have these issues with that much RAM allotted to my account, and every time I send in a support ticket, they just tell me to upgrade the RAM. Now, I'm still pretty new to all this, so I'm not a good judge of how much I really need for my sites to run. I don't know if my sites really do need this much, OR if VPSVille has oversold their servers, doesn't actually have those resources available, and I'm getting ripped off. So, how much RAM should I be using with my current setup? Thanks!
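
    Before paying for more RAM, it is worth catching the moment the errors occur to see where the memory actually goes; on OpenVZ-style VPS plans like this, the "burst" figure is not guaranteed, so the realistic budget is the 768 MB. A sketch of what to run when things degrade:

        free -m                                   # overall usage vs. the 768 MB guarantee
        ps aux --sort=-rss | head -n 15           # the 15 biggest resident processes
        cat /proc/user_beancounters 2>/dev/null   # OpenVZ only: failcnt > 0 means a limit was hit

    A handful of Apache children plus MySQL under cPanel can consume 768 MB on their own, so capping Apache's MaxClients and PHP's memory_limit is usually the first lever, whatever the host says.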

    Read the article

  • Windows 7: synchronize folders while some files are open

    - by Nick
    I need some way to synchronize 2 drives I have, and I want to do this often (once a day or so). There is the main drive that I want to clone/synchronize onto a backup hard disk. The problem is that there is an open TrueCrypt file on it, mounted as a drive, and it is live. If you don't know what TrueCrypt is: basically you create a file on a hard disk and that file is encrypted; it's constantly open and modified live. Also, it's pretty large, 100 GB+. I will use FreeFileSync (there are many tools that can do this). My question is: is it safe to copy the encrypted file while it's open? Does the software freeze the file in one state and copy it, or will I get a corrupted file on the other drive? It was not clear to me how Windows handles that. I read something about shadow copy, and the software says that it supports this. Does anyone know something on the matter that can help me, or some software that will work in my scenario? Thanks
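
    One way to sidestep the consistency question entirely is to dismount the volume before the nightly copy: TrueCrypt flushes everything on dismount, so the container becomes just a large static file. A sketch of a pre-sync step using TrueCrypt's command-line switches (the X: drive letter is an assumption):

        rem dismount volume X: and exit; add /force only if open handles are acceptable to kill
        "C:\Program Files\TrueCrypt\TrueCrypt.exe" /q /d X

    Copying the container while it is mounted is riskier: even a VSS shadow copy only gives a crash-consistent snapshot of a file whose encrypted blocks may be mid-update, so a dismounted (or at least idle) copy is the safer nightly routine.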

    Read the article

  • Will the removal of NAT (with the use of IPv6) be bad for consumers? [closed]

    - by Jonathan.
    Possible Duplicate: How will IPv6 impact everyday users? (World IPv6 Day)

    As I understand it, when we have finally made the switch to IPv6, not only will NAT be unnecessary, it is incompatible with IPv6? Will that mean that ISPs will have to serve multiple IP addresses per customer? Will they provide a range of addresses for each customer, or will each device, as it connects, get an IP address that isn't necessarily near those of the other devices in the house? But overall, will this be bad for Internet users? Surely it will allow ISPs to see exactly how many devices are being used, and so allow them to charge for the use of additional IP addresses? And then, if that happens, what happens when you try to connect an extra device to your network? Will it simply not get an IP address? In my home we have about 15-20 devices connected at once, but for places where there are hundreds of devices, it seems like the perfect opportunity for ISPs to charge more? I think I may have it completely wrong, so is there somewhere an explanation of how things will work when IPv6 becomes the norm?

    Read the article

  • Looking for a PowerShell script that can pull a file from a set of PCs and FTP it to a server

    - by DangeRuss
    I'm looking to write a script (preferably PowerShell) that will essentially copy a file from a bunch of PCs and FTP it to a server. The structure of the environment is that we have a file on multiple PCs (around 50 or so) that needs to be placed on a server. Sometimes one of the PCs may be turned off, so the script would first need to ensure the PC is up and running (maybe a ping result); then it would need to go into a directory on that PC, pull a file off it, rename the file, place it into a source directory, then remove the original. The naming convention doesn't matter, but a date/time stamp would be easiest. Ideally, it would be best to first move all the files to the source directory to save on FTP bandwidth, but since the files will all be named the same, they must be renamed during the move. Move, not copy, because the directory needs to be empty so the file can be re-created the next day. Once everything is moved to the source directory, all the files need to be FTP'd to a server for processing. After all of this, we need to know which PCs on the list did not respond so we can manually retrieve the file, so the script should output a file (txt is fine) listing the PCs that were offline. Everything is on one domain and the script will be run from a server with admin creds. Thank you!
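
    A minimal sketch of the shape this could take; every path, the PC list and the FTP details below are assumptions to replace with your own:

        # pcs.txt holds one computer name per line
        $pcs     = Get-Content 'C:\scripts\pcs.txt'
        $staging = 'C:\staging'
        $offline = @()

        foreach ($pc in $pcs) {
            if (Test-Connection -ComputerName $pc -Count 1 -Quiet) {
                $src = "\\$pc\c$\data\report.txt"    # hypothetical source path
                if (Test-Path $src) {
                    # timestamp the name during the move so the files never collide
                    $stamp = Get-Date -Format 'yyyyMMdd-HHmmss'
                    Move-Item $src (Join-Path $staging "$pc-$stamp.txt")
                }
            } else {
                $offline += $pc    # no ping response; collect for the manual-retrieval list
            }
        }
        $offline | Set-Content 'C:\scripts\offline.txt'

        # one FTP upload per staged file, via .NET's WebClient
        $wc = New-Object System.Net.WebClient
        $wc.Credentials = New-Object System.Net.NetworkCredential('ftpuser', 'ftppass')
        Get-ChildItem $staging -Filter *.txt | ForEach-Object {
            $wc.UploadFile("ftp://ftp.example.com/incoming/$($_.Name)", $_.FullName)
        }

    Moving rather than copying empties the remote directory as required, and offline.txt is exactly the set of PCs that did not answer the ping.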

    Read the article

  • Windows Server 2008 alerting to low memory

    - by t1nt1n
    I have a file and print server running Windows 2008 R2, fully patched, in a vSphere environment (ESXi 5.1, fully updated). Every evening between 19:20 and 19:30 our monitoring software reports that available memory is down to 1% and performance is dire. There is nothing in the event logs to point to an issue, and at that point in the evening I am generally the only user on the system, checking to see why these alerts are going off. Things I have done:

        - Checked to see if any backups are running: none at all.
        - Checked scheduled tasks: none before or during this time period.
        - Moved the VM to another host.
        - Disabled AV to rule that out as the issue.

    The server does not have any problems with memory during the day, when fully loaded with about 50 users. The server had 4 GB of RAM provisioned, but I have increased this to 5 GB. Running PerfMon at the time (I will save the graphs tonight) shows very little CPU usage while RAM usage goes up.
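
    To catch the culprit without sitting at the console at 19:20, a scheduled task could snapshot the biggest memory consumers into a file (a sketch; the output path is an assumption):

        Get-Process | Sort-Object WS -Descending |
            Select-Object -First 10 Name, Id, @{n='WS(MB)'; e={[math]::Round($_.WS/1MB)}} |
            Out-File C:\temp\mem-snapshot.txt -Append

    If no single process accounts for the drop, the usual suspects on a virtualized file server are the system file cache and the VMware balloon driver reclaiming guest pages; the VM's ballooning counter in vSphere for that time window would be worth a look.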

    Read the article

  • 1 PC, 2 consoles (as in 2 monitors, keyboards and mice)

    - by ciuly
    I have this desire to "kill 2 birds with one shot". Currently I have 1 server running round the clock, 1 laptop that runs about 8 hours a day, 7 days a week, and a desktop that runs about the same length of time. All 3 are ... old, to say the least. So there is a great need to upgrade (well, the server might handle its job for another year or so, but that only depends on how much time I have to put it to "work"). Now I'm "dreaming" of only one PC. I'm thinking VMware's ESX: there would be a VM for the server, a VM for the "laptop" and one for the "desktop". And obviously I'll have to somehow "link" a set of monitor/keyboard/mouse with one of the laptop/desktop VMs. The server doesn't need such things, obviously (it doesn't have them at this moment either). Is something like this possible? ESX is not a requirement; it's just something I found that answers part of my problem. There still remain the 2 keyboard/video/mouse sets that need connecting and "linking" to the appropriate VMs. Why would I want to do this? Well, first of all, it's much cheaper to upgrade one PC than 3. Then, the power consumption is obviously lower. Plus the extra space. Plus, it allows me to better separate networks and services. Thanks.

    Read the article

  • Jabber client for Windows 7

    - by Anders
    I am looking for a Jabber client with some specific functions. I have spent a day and a half looking for one and it is getting tiresome. Clients that I have been using, which have what I need but which I am not interested in for one reason or another, are: Pidgin (does not show complete messages in its popups) and Miranda IM (I have a constant disconnect issue that does not seem to be resolvable in my case). What I need:

    Popups. A popup that shows broadcasts to users, and a popup that shows when my username is typed in a conference chat. I need to be able to view the full message in the popup, with no theme configuration required to enable this (or a working theme for it already available). Preferred placement is the top right of the screen. Popups must be able to appear over full-screen applications, much like games.

    Conferences. Easy access to bookmarked conferences: I do not want to go through submenus to rejoin a disconnected or closed conference. If I close the conference window I want to stay connected to the conference until I exit the client. Tabbed interface.

    Configuration. Sober configuration: options are great, but there is a limit, and the above needs to be available in the options in an understandable manner.

    What I wish for: MSN: not needed! If it is available, it is a big plus. Facebook: not needed! If it is available, it is a big plus. Conferences/chats: not needed! Eye candy is always nice.

    Read the article

  • Windows 7 - "Magic" frequent folder

    - by TheAdamGaskins
    Every week I export an mp3 file from Audacity into a folder named for that day's date (e.g. this past Sunday I exported the file to a folder named 20130609). Then I close everything, and that's it for a while. A few hours later I come back to upload the file to FTP. I usually have some folders open, so to open a new one I right-click on the folder icon on the taskbar... to open a new folder window and browse to the folder I just created, right? Well, I look up a little bit in the jump list, and there the freshly created folder already sits under the frequent folders. So I click it and upload the file, and it actually saves me 30 seconds, which is really awesome... but what in the world? It happens every single week without fail. I create the folder inside the Audacity export window, and the folder stays on the frequent list until I create a new folder the following week. This was definitely not an advertised feature of Windows 7, and it's extremely handy... but it really just seems like magic to me. How does it work?

    Read the article

  • Faster caching method

    - by pataroulis
    I have a service that provides HTML code which at some point is not updated anymore. The code is always generated dynamically from a database with 10 million entries, so each HTML page rendering searches there for, say, 60 or 70 of those entries and then renders the page. For those expired pages, I want to use a caching system which will be VERY simple (just enter a record with the rendered HTML and, if needed, remove it). I tried doing it file-based, but the check for the existence of a file, and then passing it through PHP to actually render it, seems like too much for what I want to do. I was thinking of doing it in MySQL with a table of MEDIUMBLOBs (each page is around 100k); it would hold about 150,000 such records (for now, at least). My question is: would it be faster to let MySQL do the lookup of the file and the passing to PHP, or is the file-based approach faster? The lookup code for the file-based version looks like this:

        $page = @file_get_contents(getCacheFilename($pageId));
        if ($page != NULL) {
            echo $page;
        } else {
            renderAndCachePage($pageId);
        }

    which does one lookup whether it finds the file or not. The MySQL table would just have an ID (the page id) and the blob entry. The disk of the system is a simple SATA RAID 1, and the mysql daemon can grab up to 2.5 GB of memory (I have a proxy running too, eating the rest of the 16 GB of the machine). In general the disk is quite busy already. The reason I'm not using PEAR Cache is that I think (please feel free to correct me on this) it adds overhead I do not need, because the page rendering code is called about 2M times per day and I wouldn't want to go through the whole code each time (and yes, I have eAccelerator to cache the code too). Any pointer as to which direction I should go would be greatly welcome. Thanks!
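
    For comparison, the MySQL variant is barely more code. A sketch using PDO, assuming a table like cache(page_id INT PRIMARY KEY, html MEDIUMBLOB) and an existing $pdo connection (both assumptions):

        $stmt = $pdo->prepare('SELECT html FROM cache WHERE page_id = ?');
        $stmt->execute(array($pageId));
        $page = $stmt->fetchColumn();    // false when the row does not exist
        if ($page !== false) {
            echo $page;
        } else {
            renderAndCachePage($pageId);
        }

    With a primary-key lookup the query itself is cheap either way; on an already-busy disk the deciding factor is usually which cache is more likely to keep your hot pages in RAM: the filesystem's page cache for the file approach, or InnoDB's buffer pool (capped at your 2.5 GB) for the table approach.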

    Read the article

  • Bing Desktop not updating the wallpaper anymore

    - by warmth
    For some reason, first my workstation and then my tablet stopped updating the wallpaper. At first I thought it was my company preventing the app from working properly, but then I started noticing that the app itself is a mess. It has two stores, with different formats, for the wallpapers: C:\Users\<username>\AppData\Local\Microsoft\BingDesktop\en-US\Apps\Wallpaper_5386c77076d04cf9a8b5d619b4cba48e\VersionIndependent\images, with a #####.jpg (single number) image format, and C:\Users\<username>\AppData\Local\Microsoft\BingDesktop\themes, with a ####-##-##.jpg (date) image format. I read that deleting the themes folder causes it to be remade with the new images, and that worked. However, those are not the files used by the Wallpaper app, and deleting the images folder won't get the same result. I have added Bing Desktop to the firewall whitelist and the issue is still there. Any ideas? Currently I'm using DisplayFusion to set the wallpaper manually, because the company doesn't allow changing wallpapers (policies). Note: I wrote to the DisplayFusion developers to suggest adding a feature to support Bing wallpapers. They told me there was no API support to implement it, but they will study the possibility of a workaround for the future: http://stackoverflow.com/questions/10639914/is-there-a-way-to-get-bings-photo-of-the-day

    Read the article

  • What ports, besides 80, need to be available to send (only send) email using phpmailer to gmail over SSL?

    - by Wobblefoot
    Using PHPMailer I keep getting a 110 timeout and "Unable to connect to host" when sending email from my web server. The authentication details are right and they work on another server I have (login, password, ports, etc., and the Gmail account is set up for SSL connections on 465), but sending fails on my new server.

    FIREWALL: I allow related/established, port 80 and a port for SSH on INPUT, then this on OUTPUT:

        7906  474K  DROP    tcp -- any any anywhere               anywhere              tcp dpt:smtp
           0     0  ACCEPT  tcp -- any any localhost.localdomain  yw-in-f109.1e100.net  tcp dpt:submission
           0     0  ACCEPT  tcp -- any any localhost.localdomain  gx-in-f109.1e100.net  tcp dpt:ssmtp
           0     0  DROP    tcp -- any any anywhere               anywhere              tcp dpt:submission
           9   540  DROP    tcp -- any any anywhere               anywhere              tcp dpt:ssmtp

    This OUTPUT chain works on my other server, and disabling it doesn't get mail delivered either.

    WEB SERVER: Varnish (80), Nginx (8088), Drupal 7, PHP5-FPM, APC, MySQL. All works beautifully, except for outgoing email.

    What else could it be? I understand PHPMailer does NOT require a local MTA or procmail (this is sort of the point: I don't want the security or admin overhead of a full-blown MTA on my web server). Am I wrong? Do I need an MTA as well? What local ports and programs are used to authenticate over SSL and route mail using PHPMailer? Any ideas at all greatly appreciated; wasted a day on this nonsense already!
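
    One detail stands out in the counters above: the two ACCEPT rules are pinned to specific Google hosts (yw-in-f109.1e100.net, gx-in-f109.1e100.net), but smtp.gmail.com resolves to a rotating pool of addresses, so connections to whichever address is current likely fall through to the DROP rules (note the 9 hits on the ssmtp DROP). A sketch of a destination-agnostic alternative:

        # allow outbound SMTPS (465/ssmtp) to any destination
        iptables -I OUTPUT -p tcp --dport 465 -j ACCEPT

    And to answer the MTA question: none is needed for this path. PHPMailer's SMTP mode opens the TCP connection to smtp.gmail.com:465 itself from PHP, so the only local requirements are that outbound 465 and DNS be allowed.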

    Read the article

  • Very high Magento/Apache memory usage even without visitors (are we fooled by our hosting company?)

    - by MrDobalina
    I am no server guy and we have issues with our speed, so I come here asking for advice. We have a VPS with 2 cores and 2 GB of RAM at a Magento-specialized hosting company. Over the course of the last weeks our site speed has gotten worse, even though our store is new, has fewer than 1000 SKUs and not even 100 visitors a day. At magespeedtest.com we only get 1.87 trans/sec @ 2.11 secs each with a mere 5 concurrent users. Our Magento log files are clean and we have no huge database tables or anything like that. When we look at our server's real-time stats, we see that memory usage jumped from about 34% to 71%, and now 82%, in just a few days at idle, with no visitors on the site. Our hosting company said we do not need to worry about that, as it is maybe related to MySQL creating buffers (which are maybe not even actually being used), and that what is important is CPU and swap, where stats are OK. They also said the low benchmark scores are caused by bad extensions or template modifications on our side. We are not sure we can trust that statement, as we only have 4 plugins installed (all from Aheadworks and Amasty, which are known to be among the best Magento extension developers). Our template modifications are purely HTML and CSS, with no modifications to the PHP code. Our PageSpeed score is 93/100 in Firebug and Magento is properly configured, so the problem really only becomes obvious when there are a handful of users on the site at the same time. Can anyone confirm our hosting company's statement about memory usage, and where can I start looking for a solution?

    Read the article

  • Non-volatile cache RAID controllers: what kind of protection is there against NVCACHE failure?

    - by astrostl
    The battery back-up (BBU) model:

        - admin enables write-back cache with BBU
        - writes are cached to the RAID controller's RAM (major performance benefit)
        - the battery saves uncommitted and cached data in the event of a power loss (reliability)

    If I lose power and come back within a day or so, my data should be both complete and uncorrupted. The downside to this is that, if the battery is dead or low, OR EVEN IF IT IS IN A RELEARN CYCLE (drain/charge loops to ensure the battery's health), the controller reverts to write-through mode and performance will suffer. What's more, the relearn cycles are usually automated on a schedule which may or may not happen in the middle of big traffic, so that has to be manually disabled and manually scheduled for off-hours if it's a concern. Annoying either way. NV caches have capacitors with a sufficient charge to commit any uncommitted-to-disk data to flash. Not only is that more survivable in longer loss situations, but you don't have to concern yourself with battery death, wear-out, or relearning. All of that sounds great to me. What doesn't sound great to me is the prospect of that flash module having an issue, though. What if it's completely hosed? What if it's only partially hosed? A bit corrupted at the edges? Relearn cycles can tell when something like a simple battery is failing, but is there a similar process to verify that the flash is functional? I'm just far more trusting of a battery, warts and all. I know the card's RAM can fail, the card itself can fail - that's common territory, though. In case you didn't guess, yeah, I've experienced a shocking-to-me amount of flash/SSD/etc. failure :)

    Read the article

  • Windows Server 2003 speed issues

    - by farzinSH
    I have an HP server with Windows Server 2003 and 50 Windows XP clients. For the past week and a half, the network's speed has suddenly dropped 2-3 times per day. It gets so slow that none of the clients can work with the HIS program installed on them. We have tried many different things, such as replacing the hubs, switches and even some wires. Each time, one of these changes solves the problem and the network goes back to its normal state. I checked everything; even when I disconnected all the clients from the server and connected it to just one computer, the problem still remained for 2 hours. I have narrowed the problem down to a couple of likely suspects, as follows:

        - Viruses? (Updated Kaspersky running on the server shows none.)
        - Server hardware failure?
        - Physical memory usage on the server? (The last time the problem occurred, none of the changes above solved the issue, so I restarted the server and checked the physical memory usage, which was 2 GB. But I noticed it is increasing over time to over 9 GB... the server has 16 GB of RAM.)

    I surfed the internet and got nothing. Any help would mean a lot to us... thanks in advance.

    Read the article

  • rsync via cron. How do I enable logging?

    - by tetranz
    Hi all. I'm backing up a remote server to another computer using rsync. In cron.daily I have a file with this:

        rsync -avz -e ssh [email protected]:/ /mybackup/

    It uses a public/private key pair to log in. This seems to work well most of the time; however, I've (foolishly) only ever really checked it by looking at the dates on some important files (MySQL dumps) that I know change every day. Obviously, an error could occur after that file. Sometimes it fails: when I run it manually, something like "client reset" sometimes happens. What is the best way to log it so that I can check with certainty whether it completed or not? The cron log doesn't indicate any errors. I haven't tried it, but the rsync man page on the oldish version of CentOS on the backup machine doesn't show the --log-file option. I guess I could redirect stdout with >, but I don't really want to know about every file; I just want to know whether it all worked or not. Thanks
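
    A small wrapper script gives a definitive record of each run without logging every file (a sketch; the log path is an assumption, and user@remotehost stands in for the obfuscated address above):

        #!/bin/sh
        # /etc/cron.daily/backup
        LOG=/var/log/rsync-backup.log
        echo "=== $(date): starting ===" >> "$LOG"
        rsync -az -e ssh user@remotehost:/ /mybackup/ >> "$LOG" 2>&1
        echo "=== $(date): finished, rsync exit status $? ===" >> "$LOG"

    Dropping -v means only errors appear between the marker lines, and the exit status on the closing line says unambiguously whether the run succeeded (0) or not. Alternatively, leave stderr unredirected and cron will mail you the errors only when something actually goes wrong.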

    Read the article

  • Python 2.7 and Ubuntu 10.10: X11 fails

    - by c.a.p.
    Against every recommendation, I ran apt-get remove python on Ubuntu 10.10 (build 2.6.24-29-server x86_64) and then apt-get install python2.7. Some built-in software with python2.6 dependencies (like Firefox, other minor stuff) was fixed just by apt-get reinstalling or reinstalling from source, and the system was stable for one day, until I rebooted. Upon booting I got the message "ubuntu is running in low-graphics mode". I have an NVIDIA Quadro NVS 295 (256 MB) running in an HP Z600 workstation, so I sudoed:

        apt-get remove --purge nvidia*
        apt-get remove --purge xserver-xorg
        apt-get install linux-headers-generic
        apt-get install nvidia-*
        apt-get install xserver-xorg
        dpkg-reconfigure xserver-xorg
        nvidia-xconfig

    Upon rebooting I get the same "ubuntu is running in low-graphics mode". If I tell the boot menu to restart X, the Ubuntu 10.10 "load screen" shows up and does not do anything for hours. If I run X, the screen remains black for hours; Ctrl-C shows that /etc/X11/xorg.conf did not produce any errors, just a warning (Type "ONE_LEVEL"...). /var/log/Xorg.0.log issues the following warnings (no errors):

        (WW) AllowEmptyInput is on
        (WW) NVIDIA(0): UBB is incompatible

    If I run startx I get:

        /usr/bin/python: can't find '__main__.py' in '/usr/share/command-not-found'

    Before reinstalling xserver-xorg, however, startx ran, just changing the resolution of the tty1 console. Any hints?
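
    The startx error suggests the python swap is still biting: /usr/bin/python now points at 2.7, while Ubuntu 10.10's command-not-found hook ships only python2.6 modules, and that hook fires whenever a command lookup fails, masking the real error from startx. A possible repair sketch before fighting the NVIDIA side (package names are the standard ones for 10.10, but treat this as an assumption):

        apt-get install --reinstall python python-minimal python2.6 command-not-found

    Restoring python2.6 as the default /usr/bin/python should at least make the X startup scripts report their real failure instead of the command-not-found traceback.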

    Read the article
