Search Results

Search found 2978 results on 120 pages for 'amazon aws'.

Page 95 of 120

  • Google bots are severely affecting site performance

    - by Lynn
    I have an aggregator site on a Linux server that pulls in feeds from a universe of about 2,000 blogs. It's in WordPress 3.4.2, and I have a cron job, staggered to run five times an hour on another server, that pulls in the stories and then publishes them to the front page of this site. This is so I don't put too much pressure on one server. However, the Google bots, which visit a few times every hour, bring the server to its knees in the morning and evening when there is an increase in traffic on the site. The bots have something like 30,000 links to follow at this point. How do I throttle the bots to simply grab the new stories off the front page and stop there? EDIT - Details of my server configuration: The way we have this set up is that the server that handles all the publishing is an unmanaged instance on AWS. It mounts the NFS server and connects to the RDS to update content, etc. You get to this publishing instance via a plugin that detects the wp-admin link and then redirects you there. The front-end app server also mounts the NFS and requests data from the RDS. It is the only one that has WP Super Cache on it. The OS is Ubuntu on the app server and the NFS server runs CentOS. The front end is Nginx and the publishing server is Apache.
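
    For illustration, a minimal robots.txt sketch that keeps crawlers out of the deep archive while leaving the front page crawlable. The /20, /page/, /tag/ and /author/ paths are assumptions about a typical WordPress permalink layout, not details from the question, and note that Googlebot ignores Crawl-delay, so its crawl rate has to be lowered in Google Webmaster Tools instead:

      # robots.txt (sketch only; adjust paths to the site's permalink structure)
      User-agent: *
      Disallow: /20       # date-based archive permalinks (assumed layout)
      Disallow: /page/    # paginated archive pages
      Disallow: /tag/
      Disallow: /author/
      Crawl-delay: 10     # honored by Bing/Yandex; Googlebot ignores it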

    Read the article

  • Is Family Tree Maker 2011 the right upgrade for a FTM 11 user?

    - by bill weaver
    My father has used Family Tree Maker for years but hasn't upgraded since version 11. It is difficult to tell from reviews on Amazon and elsewhere whether upgrading to FTM 2011 is a good choice. File incompatibilities and upgrade woes suggest that customer service is lacking, and I've read reports of it uploading your data to their database and then trying to sell you a download of that same data. Looking at the ancestry.com site makes me think it's solely about selling add-ons and upgrades. On the other hand, the feature set seems fairly rich and the software has a pretty strong following. I was able to get Gramps working on my system, but that's not going to work for my dad. Any advice on a good upgrade path?

    Read the article

  • Why cache static files with Varnish instead of passing them?

    - by Saif Bechan
    I have a system running Nginx / PHP-FPM / Varnish / WordPress and Amazon S3. I have looked at a lot of configuration files while setting up the system, and in all of them I found something like this: /* If the request is for pictures, javascript, css, etc */ if (req.url ~ "\.(jpg|jpeg|png|gif|css|js)$") { /* Remove the cookie and make the request static */ unset req.http.cookie; return (lookup); } I do not understand why this is done. Most of the examples also run Nginx as the web server. So the question is: why would you use the Varnish cache to cache these static files? It makes much more sense to me to cache only the dynamic requests so that PHP-FPM / MySQL don't get hit that much. Am I correct, or am I missing something here? UPDATE I want to add some info to the question based on the answer given. If you have a dynamic website where the content actually changes a lot, caching does not make sense. But if you use WordPress for a static website, for example, it can be cached for long periods of time. That said, what matters more to me is static content. I have found a link with some tests and benchmarks of different cache apps and web server apps: http://nbonvin.wordpress.com/2011/03/14/apache-vs-nginx-vs-varnish-vs-gwan/ Nginx is actually faster at serving static content, so it makes more sense to just let those requests pass. Nginx works great with static files. Apart from that, most of the time static content is not even on the web server itself; it is stored on a CDN somewhere, maybe AWS S3 or something like that. I think the Varnish cache is the last place where you want to have your static content stored.
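
    For illustration, the change being debated amounts to swapping return (lookup) for return (pass) on those extensions, so Varnish hands static requests straight to Nginx instead of caching them. A minimal Varnish 3 sketch, to be merged into the existing vcl_recv:

      # vcl_recv fragment (Varnish 3 syntax) - sketch only
      if (req.url ~ "\.(jpg|jpeg|png|gif|css|js)$") {
          # Let Nginx serve static assets directly instead of caching them in Varnish
          return (pass);
      }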

    Read the article

  • Unable to ping remote server from Nagios

    - by williamsowen
    We've recently set up Nagios on one of our Amazon EC2 instances to act as a monitoring server for our other instances. NRPE was installed on our staging server stager and appears to be working fine: monitoring_server~: /usr/lib/nagios/plugins/check_nrpe -H xx.xx.xx.xx -p 5666 NRPE v2.12 The issue is that when viewing the remote server stager within the Nagios admin screen, it appears to be 'DOWN'. The check_ping command reveals: monitoring_server~: /usr/lib/nagios/plugins/check_ping -H 'xx.xx.xx.xx' -w 5000,100% -c 5000,100% -p 1 PING CRITICAL - Packet loss = 100%|rta=5000.000000ms;5000.000000;5000.000000;0.000000 pl=100%;100;100;0 Can anyone provide some direction on how to get this working? Not sure what else to do.
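
    A hedged sketch of one workaround, assuming the root cause is that the EC2 security group blocks ICMP while TCP/5666 is open: let Nagios decide host up/down by talking to NRPE instead of pinging. The command name, file path and host template below are assumptions, not part of the original setup; the alternative is simply to allow ICMP echo from the monitoring server's security group, after which check_ping works as-is.

      # /etc/nagios3/conf.d/stager.cfg (sketch; paths and template names assumed)
      define command {
          command_name    check-host-alive-nrpe
          command_line    /usr/lib/nagios/plugins/check_nrpe -H $HOSTADDRESS$ -p 5666
      }

      define host {
          use               generic-host
          host_name         stager
          address           xx.xx.xx.xx
          check_command     check-host-alive-nrpe
      }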

    Read the article

  • Fast, reliable data transfers from/to China

    - by Nils
    We are a small company and we will need to transfer rather large amounts of data (10 GB+ each time) between Europe and China in the near future. As many may have experienced, Internet connections to or from China can be unreliable and slow at times without any apparent reason. For example, while sending data to China via FTP generally works well, it can be painfully slow in the other direction. Currently, we are investigating new ways to get high transfer rates in both directions. So far we have tried: FTP (see above); FTP over VPN services (generally slower than direct connections); F2F (like Retroshare or Freenet - slow!); Aspera (fast but expensive!); BitTorrent (unreachable end nodes, because of firewalls we are not allowed to reconfigure). We would like to try: cloud storage (e.g. Amazon S3, Google Storage) - are those services always and reliably reachable from inside China?; point-to-point VPN (currently not possible because of the network, see above). I'd be especially grateful to hear from people who have already dealt with this kind of problem before.
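
    As a rough sketch of the resumable-transfer angle (host names and paths are placeholders), rsync over SSH with --partial lets an interrupted transfer continue where it stopped instead of restarting, which helps a lot on a flaky long-haul link. Wrapping it in a retry loop (until rsync ...; do sleep 60; done) lets it ride out dropped connections unattended.

      # Resumable, compressed transfer; re-run (or loop) until it completes
      rsync -avz --partial --progress --timeout=120 \
          /data/outgoing/ user@cn-office.example.com:/data/incoming/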

    Read the article

  • nodejs server hanging from time to time

    - by Johann Philipp Strathausen
    I have a Node server (0.6.6) running an Express application, along with Mongoose and S3, on an Ubuntu 11.04 machine. Several times per hour, the server hangs. That means the application is working fine, I see the Express log output, and then all of a sudden the server stops responding. No errors, no traces, no log output, and strangely enough the browser won't show the request even in the network debugging window. From any machine on the local network it's the same behaviour. I restart the server and it's okay again for several minutes, then it starts to hang again, each time while doing something different. The same application on Amazon, on the same Ubuntu version, works fine and never hangs. I know all this is kind of vague, but I don't know where to start. Have any of you seen something like this before? Any ideas?
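
    A hedged diagnostic sketch for the next hang, assuming the usual suspects on node 0.6 (file-descriptor exhaustion or a stalled outbound S3/MongoDB socket); the pgrep pattern is a placeholder for however the app is actually started:

      NODE_PID=$(pgrep -f 'node app.js' | head -n1)  # adjust the pattern to your start command
      ulimit -n                                      # fd limit inherited by the node process
      ls /proc/$NODE_PID/fd | wc -l                  # descriptors the process currently holds
      lsof -a -p $NODE_PID -i                        # network sockets it may be blocked on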

    Read the article

  • How to keep the Varnish cache populated after the backend is down for an extended period?

    - by Nicholas Tolley Cottrell
    We have Varnish 3.0.2 running on Amazon Linux and it works great. We have a TTL of 48 hours for most content pages and much longer for images, PDFs, etc. This weekend we've taken the backend down for some maintenance, so I upped the TTL to 5 days earlier in the week. I had assumed that anything in the cache would continue to be served for up to 5 days, but much to our disappointment we checked varnishstat this morning, and the cache was almost completely empty and Varnish was serving "page not found" messages. I know that this is not what Varnish is designed to do, but why would it drop its cache when the backend is down? And how can I prevent it next time?
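
    What's being described is usually handled with Varnish's grace mode rather than a longer TTL: expired objects are kept in storage and served stale while the backend is unhealthy. A minimal Varnish 3 sketch; the 6h value is an example, and req.backend.healthy only works if a health probe is defined on the backend:

      # vcl_recv fragment
      if (req.backend.healthy) {
          set req.grace = 30s;
      } else {
          set req.grace = 6h;     # accept objects well past their TTL while the backend is down
      }

      # vcl_fetch fragment
      set beresp.grace = 6h;      # keep expired objects around so there is something to serve stale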

    Read the article

  • RewriteRule Works With "Match Everything" Pattern But Not Directory Pattern

    - by kgrote
    I'm trying to redirect newsletter URLs from my local server to an Amazon S3 bucket. So I want to redirect from: https://mysite.com/assets/img/newsletter/Jan12_Newsletter.html to: https://s3.amazonaws.com/mybucket/newsletters/legacy/Jan12_Newsletter.html Here's the first part of my rule: RewriteEngine On RewriteBase / # Is it in the newsletters directory RewriteCond %{REQUEST_URI} ^(/assets/img/newsletter/)(.+) [NC] # Is not a 2008-2011 newsletter RewriteCond %{REQUEST_URI} !(.+)(11|10|09|08)_Newsletter.html$ [NC] ## -> RewriteRule to S3 Here <- ## If I use this RewriteRule to point to the new subdirectory on S3 it will NOT redirect: RewriteRule ^(/assets/img/newsletter/)(.+) https://s3.amazonaws.com/mybucket/newsletters/legacy/$2 [R=301,L] However if I use a blanket expression to capture the entire file path it WILL redirect: RewriteRule ^(.*)$ https://s3.amazonaws.com/mybucket/newsletters/legacy/$1 [R=301,L] Why does it only work with a "match everything" expression but not a more specific expression?
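
    One hedged explanation, assuming these rules live in a .htaccess file (the RewriteBase line suggests they do): in per-directory context Apache strips the leading slash before RewriteRule sees the URL, so ^(/assets/... never matches while ^(.*)$ matches anything. A sketch of the anchored rule without the slash:

      RewriteEngine On
      RewriteBase /
      # Skip the 2008-2011 newsletters
      RewriteCond %{REQUEST_URI} !(08|09|10|11)_Newsletter\.html$ [NC]
      # No leading slash in the pattern -- that is the per-directory difference
      RewriteRule ^assets/img/newsletter/(.+)$ https://s3.amazonaws.com/mybucket/newsletters/legacy/$1 [R=301,L,NC]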

    Read the article

  • 503 error from Varnish cache when eAccelerator is enabled

    - by Netismine
    I have a Magento installation running on an x-large Amazon server. I have Varnish, memcached and eAccelerator installed on the server. At first everything was working fine, but at some point it stopped working, throwing a 503 error with the Varnish cache stamp below it. When I disable eAccelerator, the error is gone and the site works. This is my eAccelerator config: extension="eaccelerator.so" eaccelerator.shm_size = "512" eaccelerator.cache_dir = "/var/cache/php-eaccelerator" eaccelerator.enable = "1" eaccelerator.optimizer = "1" eaccelerator.debug = 0 eaccelerator.log_file = "/var/log/httpd/eaccelerator_log" eaccelerator.name_space = "" eaccelerator.check_mtime = "1" eaccelerator.filter = "" eaccelerator.shm_ttl = "0" eaccelerator.shm_prune_period = "0" eaccelerator.shm_only = "0" eaccelerator.allowed_admin_path = "" Any hints?
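
    A hedged first-pass checklist, assuming the common failure modes for this kind of setup (cache directory missing or not writable by the web server user, or the extension not loading cleanly with a 512 MB segment); the apache user name is an assumption:

      ls -ld /var/cache/php-eaccelerator                       # does the cache_dir exist?
      sudo -u apache touch /var/cache/php-eaccelerator/.test   # is it writable by the web server user?
      tail -n 50 /var/log/httpd/eaccelerator_log               # the log file the config points to
      php -i | grep -i eaccelerator                            # does the extension load cleanly from the CLI?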

    Read the article

  • SSH tunnel for socks5 proxy is slow with concurrent load

    - by RawwrBag
    I SSH to a remote AWS server using Ubuntu. I use SSH's port-forwarding capabilities to do this. I have tried forwarding a dynamic port (ssh -D) and a single port (ssh -L, with Dante running as a remote SOCKS server). Both are equally slow. I also tried different ciphers (ssh -c). Concurrent TCP connections pretty much do not work. For example, I can go to speedtest.net and start a test (which is fairly fast, probably maxing out my line speed), but if I try to do anything else (i.e. load google.com) while the test is still running, all the additional connections seem to hang until the speed test is over. I realize OpenSSH is single-threaded. Is this the problem? It doesn't even register in top. The same goes for sshd on the remote server -- no processor hit. Is there any way to bump SSH performance, or should I step up to OpenVPN or something better suited for this?
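
    A hedged way to test whether multiplexing everything over one SSH/TCP connection is the bottleneck: run two independent dynamic forwards and point the bulk transfer and normal browsing at different local ports. If the stalls disappear, the single-connection tunnel is the limiting factor and a real VPN is likely the better fit.

      # Two independent SOCKS tunnels to the same host (local ports are arbitrary)
      ssh -N -D 1080 -o ServerAliveInterval=30 user@aws-host &
      ssh -N -D 1081 -o ServerAliveInterval=30 user@aws-host &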

    Read the article

  • Using Cygwin in Windows 8, chmod 600 does not work as expected?

    - by Castaa
    I'm trying to change the permissions on my key file key.pem in Cygwin 1.7.11. It has the permission flags: -rw-rw---- chmod -c 600 key.pem reports: mode of 'key.pem' changed from 0660 (rw-rw----) to 0600 (rw-------) However, ls -l key.pem still reports that key.pem's permission flags are -rw-rw---- The reason I'm asking is that ssh complains: Permissions 0660 for 'key.pem' are too open. when I try to SSH into my Amazon EC2 instance. Is this an issue with Cygwin and Windows 8 NTFS, or am I missing something?
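
    A commonly used workaround (sketch only, with placeholder paths): mount the directory holding the key with the noacl option so Cygwin stops mapping POSIX modes onto NTFS ACLs, after which chmod 600 sticks the way ssh expects:

      # Add to Cygwin's /etc/fstab (one line), then restart the Cygwin terminal:
      C:/Users/me/.ssh /home/me/.ssh ntfs binary,noacl 0 0

      # Afterwards:
      chmod 600 ~/.ssh/key.pem
      ls -l ~/.ssh/key.pem    # should now show -rw-------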

    Read the article

  • In Varnish, is it normal for the number of freed bytes to be 60% of those allocated?

    - by user331397
    I have an installation of Varnish 3.0.2 on an Amazon EC2 Medium Linux instance in front of two relatively low-traffic websites. After an uptime of 2 hours, there are 3,400 objects in the cache. Using varnishstat, I checked the variables SMA.s0.c_bytes and SMA.s0.c_freed, which I assume correspond to the total number of bytes allocated since startup and the number freed, respectively. No objects should have had time to expire during these two hours, yet about 60% of the memory allocated since startup (330 MB out of 560 MB) has already been freed. Do you know if this is normal? If not, do you know what kind of configuration could be wrong?
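
    A hedged way to narrow it down with varnishstat (Varnish 3 counter names): if n_lru_nuked is climbing, objects are being evicted because the storage is too small, which would explain freed bytes without expiry; if it stays at zero, look instead at short or zero TTLs coming from the backend.

      varnishstat -1 | egrep 'n_lru_nuked|n_expired|SMA.s0'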

    Read the article

  • How to keep subtree removal (`rm -rf`) from starving other processes for Disk I/O?

    - by David Eyk
    We have a very large (multi-GB) Nginx cache directory for a busy site, which we occasionally need to clear all at once. I've solved this in the past by moving the cache folder to a new path, making a new cache folder at the old path, and then rm -rfing the old cache folder. Lately, however, when I need to clear the cache on a busy morning, the I/O from rm -rf is starving my server processes of disk access, as both Nginx and the server it fronts for are read-intensive. I can watch the load average climb while the CPUs sit idle and rm -rf takes 98-99% of Disk IO in iotop. I've tried ionice -c 3 when invoking rm, but it seems to have no appreciable effect on the observed behavior. Is there any way to tame rm -rf to share the disk more? Do I need to use a different technique that will take its cues from ionice? Update: The filesystem in question is an AWS EC2 instance store (the primary disk is EBS). The /etc/fstab entry looks like this: /dev/xvdb /mnt auto defaults,nobootwait,comment=cloudconfig 0 2
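
    A hedged sketch of two things worth checking, since ionice -c3 only has an effect under the CFQ I/O scheduler and is silently ignored under noop/deadline (common defaults for EC2 instance-store volumes); the batch-delete loop is an alternative that throttles itself regardless of scheduler:

      cat /sys/block/xvdb/queue/scheduler           # e.g. "noop [deadline] cfq"
      echo cfq > /sys/block/xvdb/queue/scheduler    # only if CFQ is acceptable for this volume

      # Throttled delete: remove the renamed cache dir in small batches with pauses
      find /mnt/old-cache -type f -print0 | xargs -0 -n 500 sh -c 'rm -f "$@"; sleep 1' sh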

    Read the article

  • I need a few minutes of dedicated server time a week, not for hosting, just to convert MP3s to Ogg etc.

    - by talkingnews
    I'm completely happy with my web hosting; it's just that I need to do one little thing they won't allow, and that's run an instance of SoX to convert about 30 MP3s to Ogg files, in various directories, a couple of times a week, to be done automatically in response to detecting the upload of an MP3. I'm probably looking at a minute of server time over the whole week. I've had unhelpful suggestions on other forums like "why not leave your home PC on 24 hours a day and use all your ISP bandwidth to do this", which doesn't work for me. I know that I can host files on, say, Amazon S3, but is there something similar for my needs? All it would need to do is: wget/FTP the MP3 files, convert them to Ogg, and FTP the files back to my hosting. Of course, none of this would be needed if there were such a thing as a compiled binary of SoX (or any MP3-to-Ogg converter) for CentOS that I could upload without needing root access, but I've given up asking for that one - though I'm always open to suggestions!
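
    For what it's worth, the job itself is tiny; a sketch of the fetch-convert-push cycle described above (host, paths and transfer tool are placeholders - swap rsync for wget/FTP as the hosting allows, and SoX needs MP3 and Vorbis support compiled in, e.g. the libsox-fmt-all package on Debian/Ubuntu):

      WORK=$(mktemp -d)
      rsync -av 'user@myhost.example.com:public_html/audio/*.mp3' "$WORK/"
      for f in "$WORK"/*.mp3; do
          sox "$f" "${f%.mp3}.ogg"    # transcode MP3 -> Ogg Vorbis
      done
      rsync -av "$WORK"/*.ogg user@myhost.example.com:public_html/audio/
      rm -rf "$WORK"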

    Read the article

  • Backing up a Linux VPS with RSync to Vista

    - by Frank
    I've been working to set up a Linux VPS to host a couple of WordPress sites and eventually a Mercurial server. I've set up one site and things have gone well. However, before I start moving other things to the VPS, I need to set up a backup solution. My provider, Linode, suggests rsync (among a couple of other options) for backups. I've seen a few posts on this site that suggest other backup solutions, including going to the Amazon cloud, but that costs money and the VPS is all the money I want to spend on this for the time being. So, to help solve that, I want my backup computer to be my home desktop. Assuming I'm using rsync, is it possible to use my Vista-based home computer as the destination for the backup? And if it is possible, what type of command or connection would I need to configure on the Vista machine? Any insight would be helpful. It's probably obvious, but I've never used rsync.
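
    The usual approach is to install rsync on the Vista machine via Cygwin or cwRsync and pull the backup from the VPS over SSH. A sketch with placeholder paths, user and host:

      # Run on the Vista machine from a Cygwin/cwRsync shell
      rsync -avz --delete \
          -e "ssh -i /cygdrive/c/Users/Frank/.ssh/linode_key" \
          frank@my-vps.example.com:/var/www/ \
          /cygdrive/c/Backups/vps/www/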

    Read the article

  • How to show a warning message when entering a folder?

    - by Valter Henrique
    I don't know if this is possible, but I have a folder that should show a warning message when a user enters it. In my case it would say that the folder can be deleted without prior warning to save some disk space. I have already created a file inside the folder with the warning message: WARNING! ########################################################################################################################################################## Please, be advised, that the folder /company-backup/amazon-s3 can be deleted without previous WARNING to save disk space as the INFRASTRUCTURE TEAM judge necessary. Best regards, Infrastructure Team. ########################################################################################################################################################### Is that possible? Any ideas?
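
    There is no filesystem-level hook for this, but for interactive bash users a cd wrapper does the job: whenever someone changes into a directory containing a WARNING file, it gets printed. A sketch to drop into /etc/bash.bashrc or ~/.bashrc:

      cd() {
          builtin cd "$@" || return
          # Show the warning banner if the directory carries one
          if [ -r ./WARNING ]; then
              cat ./WARNING
          fi
      }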

    Read the article

  • Bash script getting automatically deleted from Ubuntu 12.04 Server?

    - by Kris Anderson
    I'm running a bash script on an Ubuntu 12.04 server through cron. The script works fine for a few weeks (it runs daily backups of websites and MySQL databases and copies them to Amazon S3). However, twice now I've noticed that backups stopped happening. Both times, the backup script (backupscript.sh) located in my home folder was no longer there. No one else has access to this server, so nothing was manually changed on the server and no one deleted the file by mistake. The cron job (in /etc/crontab) still references the script, but the script itself disappears. What could cause this to happen? Does Ubuntu delete the script if it runs into some sort of error?
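
    A hedged way to catch the culprit next time: recreate the script and put an audit watch on it, so the next unlink is logged together with the process and user responsible (the path is a placeholder for your home folder):

      sudo apt-get install auditd
      sudo auditctl -w /home/kris/backupscript.sh -p wa -k backupscript
      # After the script vanishes again:
      sudo ausearch -k backupscript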

    Read the article

  • Is it safe to use an IDE to SATA power adapter for an extended period of time?

    - by qwertymk
    I just bought a computer from HP and they failed to include SATA power connectors on the power supply beyond the ones for the single hard drive and DVD drive. Meanwhile, I have two IDE-to-SATA power adapters that came with my "USB 2.0 to SATA/IDE cable" http://www.amazon.com/USB-2-0-SATA-Cable-Adapter/dp/B001OORN06 (3rd pic on the left). I was wondering: if I just opened up my computer, used them to power my SATA drives from the IDE power leads, and connected the drives to the motherboard, would it damage my drives in the long run or have any other significant effects? A friend told me he knows people who have had their hard drives burn out because of this.

    Read the article

  • Disabling default gestures in Scrybe

    - by RoboShop
    I just upgraded my touchpad drivers and they came with a Scrybe program that lets you make a gesture to automatically load a website or an application, etc. It sounds like it could potentially be very useful. The only thing I don't really like about it is that it comes preloaded with all of these default gestures that take you to Facebook, Amazon, eBay, etc. I'm sure they purposely put them there because they get money for referrals, but is there a way to turn them off? I would just like my own links in there. It takes probably 2-3 seconds for a gesture to be recognized, and that might be due largely to the fact that it has to compare the gesture against all these default gestures. If I could somehow disable them, I'm sure it would work faster. Alternatively, I'd be happy with a recommendation for a program similar to Scrybe that works faster.

    Read the article

  • Installing a fake microphone on Windows Server

    - by Adrian
    My application, which runs on Windows Server (an Amazon EC2 instance), requires Skype to be able to make phone calls. The server, of course, does not have a microphone installed, and I don't need it to have one, because my application changes the input source to a WAV file once the call is established. However, Skype has a strict rule that a microphone must be installed before a call can be made. So I want to install a fake microphone that will satisfy Skype's check. So far, I have been able to start and run the Windows sound service, which enabled all of the sound settings. Any ideas are very welcome!

    Read the article

  • Windows Console .exe won't run if it's downloaded from the internet

    - by Jason Kester
    I have a nightly job on Windows Server 2003 that automatically updates itself by downloading its .exe from Amazon S3. I've noticed that when it performs the download and tries to run the newly downloaded .exe, it is immediately kicked back to the command line without actually running anything. I can verify this by placing the new version of the code directly on the server and watching it execute successfully, then uploading it to the "update" server, running the bootstrapper, then running the .exe and observing it fail to execute. I can only assume that this is due to Windows protecting me from running code from outside its trusted zone. How does a fella go about configuring it to allow code from this particular external location to execute? Thanks!

    Read the article

  • Nagios remote monitoring: NRPE Vs. SSH

    - by sam
    We use Nagios to monitor quite a few (~130) servers. We monitor CPU, disk, RAM and a few other things on each server. I've always used SSH to run the remote commands, purely because it requires little to no additional configuration on the remote server: just install nagios-plugins, create the nagios user, and add the SSH key, all of which I've automated in a shell script. I've never actually considered the performance implications of using SSH over NRPE. I'm not too bothered about the load hit on the Nagios server (it's probably over-specced for what it does; it's never been over 10% CPU), but we run each remote check every 30 seconds and each server has 5 different checks performed. I assume SSH requires more resources per check, but is there a huge difference? (i.e. enough of a difference to warrant the switch to NRPE). If it's any help, we monitor a mix of physical servers (normally with 8, 12 or 16 physical cores) and Amazon EC2 medium/large instances.
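
    One hedged observation: if you stay on SSH, most of the per-check cost is the key exchange rather than the check itself, and OpenSSH connection multiplexing (ControlMaster/ControlPersist, OpenSSH 5.6+) reuses one session per host so each check_by_ssh only opens a new channel. A sketch for the nagios user's ~/.ssh/config (its location depends on the install):

      Host *
          ControlMaster auto
          ControlPath ~/.ssh/cm-%r@%h-%p
          ControlPersist 10m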

    Read the article

  • OpenVPN bridge network from routed clients

    - by gphilip
    I have the following setup: subnet 1 - 10.0.1.0/24, with a machine used for NAT that also runs an OpenVPN client; subnet 2 - 192.168.1.0/24, with an OpenVPN server (the client machine in subnet 1 connects here); subnet 3 - 10.0.2.0/24, which uses the NAT machine (subnet 1) to access the internet, so all non-local traffic is routed there via its eth0 interface. The OpenVPN client creates the tun0 interface and appropriate routing so that I can access machines in 192.168.1.0/24: [root@ip-10-0-1-208 ~]# telnet 192.168.1.186 8081 Trying 192.168.1.186... Connected to 192.168.1.186. Escape character is '^]'. [root@ip-10-0-1-208 ~]# route -n Kernel IP routing table Destination Gateway Genmask Flags Metric Ref Use Iface 0.0.0.0 10.0.1.1 0.0.0.0 UG 0 0 0 eth0 10.0.1.0 0.0.0.0 255.255.255.0 U 0 0 0 eth0 10.8.0.1 10.8.0.5 255.255.255.255 UGH 0 0 0 tun0 10.8.0.5 0.0.0.0 255.255.255.255 UH 0 0 0 tun0 169.254.169.254 0.0.0.0 255.255.255.255 UH 0 0 0 eth0 192.168.0.0 10.8.0.5 255.255.0.0 UG 0 0 0 tun0 However, when I try the same from subnet 3, it can't reach that machine. [root@ip-10-0-2-61 ~]# telnet 192.168.1.186 8081 Trying 192.168.1.186... I suspect that this is because subnet 3 is routed to eth0 on the NAT machine in subnet 1 and it cannot jump to tun0. What's the easiest way to resolve this? I don't want to use iptables. I can't change the routing on machines in subnet 1 because it's done in AWS and so it works only with specific interfaces. Also, the NAT machine gets its IP with DHCP, so bridging is a bit complicated. IP forwarding is set on the NAT machine: [root@ip-10-0-1-208 ~]# cat /proc/sys/net/ipv4/ip_forward 1 Thank you!
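
    A hedged sketch of the usual no-iptables fix: packets from subnet 3 reach the OpenVPN server with their original 10.0.2.x source address, so the server (and the 192.168.1.x hosts) need to know that this subnet lives behind the subnet-1 client. The ccd file name below is a placeholder for the client certificate's common name, and the 192.168.1.x hosts (or their default gateway) also need a return route for 10.0.2.0/24 pointing at the OpenVPN server's LAN address.

      # server.conf additions on the OpenVPN server:
      # a kernel route sending 10.0.2.0/24 into the tun device, plus a client-config dir
      client-config-dir /etc/openvpn/ccd
      route 10.0.2.0 255.255.255.0

      # /etc/openvpn/ccd/<client-cn> -- tells OpenVPN that subnet sits behind this client
      iroute 10.0.2.0 255.255.255.0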

    Read the article

  • Are ZFS snapshots + S3 a viable backup system for several VMs and general fileserver storage?

    - by AllanA
    I've been tasked with setting up a backup system for my small office (around 12 people). Most of our production stuff is on the AWS cloud, so what I need to back up are some small office/development files (under 100 GB right now), plus our operational VMs and development systems, which add up to a bit under 1 TB. I just need something reliable, convenient, and straightforward. I'm comfortable with Linux, FreeBSD, and to some extent Solaris 10, so I'm leaning toward a full server rather than an appliance system a la Openfiler or FreeNAS. What I'm contemplating is a small fileserver for general storage and nightly backups of the virtual machines, followed by an offsite backup to Amazon's S3 storage service. It would be the usual incremental backups nightly and a full backup weekly. My question is whether using ZFS snapshots, both locally and dumped to S3 via 'zfs send [-i]', is a viable backup tool. Or should I stick to using Duplicity, or some other method entirely? ZFS snapshots on the internal fileserver/backup machine sound like a perfect way to provide quick and convenient data recovery, so I'm likely to go with that for local redundancy. (If you folks see scenarios where relying on ZFS snapshots would be worse than a more traditional archiving backup, feel free to convince me.) But are snapshots flexible enough to lean on for recovery from the loss of my backup server? Or am I better off with something more traditional? (Feel free to recommend free or commercial backup solutions you favor.)
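
    As a sketch of the nightly cycle being proposed (dataset, bucket and the s3cmd upload tool are placeholders; GNU date assumed), and keeping in mind that an incremental zfs send chain is only restorable if the initial full stream and every increment in between survive intact:

      TODAY=$(date +%F)
      YESTERDAY=$(date -d yesterday +%F)

      zfs snapshot tank/backups@"$TODAY"
      # Incremental stream since yesterday's snapshot, compressed, then shipped offsite
      zfs send -i tank/backups@"$YESTERDAY" tank/backups@"$TODAY" | gzip > /var/tmp/backups-"$TODAY".zfs.gz
      s3cmd put /var/tmp/backups-"$TODAY".zfs.gz s3://my-office-backups/zfs/
      rm /var/tmp/backups-"$TODAY".zfs.gz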

    Read the article

  • Replicate portion of an LDAP directory to external server

    - by colemanm
    We're in the process of setting up a Jabber server on Amazon EC2 right now, and we'd like to have our internal users authenticate via LDAP so we don't have to create and manage a separate set of user accounts from the master directory in the office. My question is: is there a way to copy, unidirectionally, a segment of our internal LDAP directory (the user accounts OU) to an external LDAP server and authenticate Jabber against that? We're trying to avoid having our externally hosted machines out in the cloud access our internal network directly. If we can replicate only a subset of the user accounts, in one direction only, then a compromise of the external server doesn't necessarily become a critical security breach into our internal network.
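
    This is what OpenLDAP's syncrepl is designed for: a read-only consumer on the EC2 host that pulls just the user-accounts OU over TLS. A sketch of the consumer side (all DNs, hosts and credentials are placeholders, and the consumer database's suffix has to cover the replicated subtree):

      # slapd.conf on the external (EC2) server - consumer side, sketch only.
      # searchbase limits replication to the user-accounts OU; interval is one hour.
      syncrepl rid=001
        provider=ldaps://ldap.internal.example.com
        type=refreshOnly
        interval=00:01:00:00
        searchbase="ou=Users,dc=example,dc=com"
        scope=sub
        bindmethod=simple
        binddn="cn=replicator,dc=example,dc=com"
        credentials=secret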

    Read the article
