Search Results

Search found 3025 results on 121 pages for 'amazon ec2'.


  • Amazon EC2: Instances, IPs and a WordPress blog (LAMP)

    - by JustinXXVII
    I had a link to my blog posted on Reddit yesterday and MySQL crashed on my EC2 Micro instance. I know I didn't have that many visitors because I used a marketing link that tracks hits. The link got 167 hits over the course of the last 18 hours, and MySQL crashed twice. So anyway, 167 visits is not a lot, so I've done some short-term optimizations like restricting the number of Apache threads to limit the MySQL calls. I also set up WP Super Cache to serve static content. Soon I'm going to offload all of my images to S3 or CloudFront. So this leads me to my question. If this doesn't seem to help, and if I have another traffic "spike", how do AMIs work when you have a MySQL database? I think I understand that if you have more than one instance and assign the same Elastic IP to both of them, the incoming traffic gets distributed among both. But what happens when the MySQL database gets updated on one of the instances? I just need to wrap my mind around what happens when I create an AMI and then launch a new instance to help with traffic. Thanks for your suggestions.
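    As a rough sketch of the AMI step (the instance ID, image name and AMI ID below are placeholders, not taken from the post), creating an image of the running server and launching a second instance from it might look like this. Note that the copy's MySQL data is frozen at the moment the image was taken and will diverge as soon as either instance takes writes:

```bash
# create an AMI from the running instance (this snapshots its volumes)
aws ec2 create-image --instance-id i-0123456789abcdef0 --name "wp-lamp-copy" --no-reboot

# launch a second instance from that image once it becomes available
aws ec2 run-instances --image-id ami-00000000 --count 1 --instance-type t1.micro

# an Elastic IP maps to exactly one instance at a time, so two copies only share
# traffic behind a separate load balancer or DNS round-robin, not via the same IP
```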

    Read the article

  • Enabling mod_rewrite on Amazon Linux

    - by L. De Leo
    I'm trying to enable mod_rewrite on an Amazon Linux instance. My Directory directives look like this: <Directory /> Order deny,allow Allow from all Options None AllowOverride None </Directory> <Directory "/var/www/vhosts"> Order allow,deny Allow from all Options None AllowOverride All </Directory> And then further down in httpd.conf I have the LoadModule directive: ... other modules... #LoadModule substitute_module modules/mod_substitute.so LoadModule rewrite_module modules/mod_rewrite.so #LoadModule proxy_module modules/mod_proxy.so ... other modules... I have commented out all the Apache modules not needed by Wordpress. Still when I issue http restart and then check the loaded modules with /usr/sbin/httpd -l I get only: [root@foobar]# /usr/sbin/httpd -l Compiled in modules: core.c prefork.c http_core.c mod_so.c Inside the virtual host containing the Wordpress site I have an .htaccess containing: # BEGIN WordPress <IfModule mod_rewrite.c> RewriteEngine On RewriteBase / RewriteRule ^index\.php$ - [L] RewriteCond %{REQUEST_FILENAME} !-f RewriteCond %{REQUEST_FILENAME} !-d RewriteRule . /index.php [L] </IfModule> # END WordPress The .htaccess is owned by apache which is the user apache runs under. The apachectl -t command returns Syntax OK What am I doing wrong? What should I check?
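    One general Apache detail worth knowing here (not something stated in the post): httpd -l only lists statically compiled modules, so a dynamically loaded mod_rewrite will never show up there. A quick check might look like:

```bash
# -M (DUMP_MODULES) also shows shared modules pulled in via LoadModule
sudo /usr/sbin/httpd -M | grep rewrite    # expect "rewrite_module (shared)"

# after editing httpd.conf, validate and restart, then re-check
sudo apachectl -t
sudo service httpd restart
sudo apachectl -M | grep rewrite
```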

    Read the article

  • Amazon VPC NAT not working

    - by rpkelly
    I'm trying to create a NAT instance for my VPC to allow instances on private subnets to connect to the internet (most importantly, S3). I tried following the instructions here: http://docs.amazonwebservices.com/AmazonVPC/2011-07-15/UserGuide/index.html?VPC_NAT_Instance.html . Unfortunately, the instances in the private subnet (call it 10.10.2.0/24) cannot reach the internet. I have done the following: Created a NAT instance (Amazon's ami-vpc-nat-1.0.0-beta.i386-ebs (ami-d8699bb1)) in the public subnet (call it 10.10.1.0/24). Changed "Source / Dest Check" to disabled. Created a new entry in the default routing table (which is used by 10.10.2.0/24) and had it point to the ID of the newly created instance. Associated an Elastic IP address with the NAT instance. Allowed all outbound traffic on the security group of the NAT instance. Ensured that all traffic could pass between the two subnets. I've also tried doing this with an existing instance using iptables, but had no luck. And I have verified that net.ipv4.ip_forward is 1, just in case anyone was wondering. And I still have no internet connectivity from the instances on 10.10.2.0/24. Does anyone have any suggestions?
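    For comparison, a hand-rolled NAT instance (the interface name and private subnet below match the post; the rest is a generic sketch) usually needs only IP forwarding plus a MASQUERADE rule:

```bash
# on the NAT instance
sudo sysctl -w net.ipv4.ip_forward=1
echo 'net.ipv4.ip_forward = 1' | sudo tee -a /etc/sysctl.conf

# rewrite traffic from the private subnet out of the public-facing interface
sudo iptables -t nat -A POSTROUTING -o eth0 -s 10.10.2.0/24 -j MASQUERADE

# source/dest check must stay disabled on this instance (already done per the post)
```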

    Read the article

  • disk space keeps filling up on EC2 instance with no apparent files/directories

    - by sasher
    How come the OS shows 6.5G used but I see only 3.6G in files/directories? Running as root on an Amazon Linux AMI (seems like CentOS), lots of free memory available, no swapping going on, no apparent file descriptors issue. The only thing I can think of is a log file that was deleted while applications append to it. Disk space usage is slowly but continuously rising towards full capacity (~1k/min with very small decreases from time to time). Any explanation? Solution? du --max-depth=1 -h / 1.2G /usr 4.0K /cgroup 22M /lib64 11M /sbin 19M /etc 52K /dev 2.1G /var 4.0K /media 0 /sys 4.0K /selinux du: cannot access `/proc/14024/task/14024/fd/4': No such file or directory du: cannot access `/proc/14024/task/14024/fdinfo/4': No such file or directory du: cannot access `/proc/14024/fd/4': No such file or directory du: cannot access `/proc/14024/fdinfo/4': No such file or directory 0 /proc 18M /home 4.0K /logs 8.1M /bin 16K /lost+found 12M /tmp 4.0K /srv 35M /boot 79M /lib 56K /root 67M /opt 4.0K /local 4.0K /mnt 3.6G / df -h Filesystem Size Used Avail Use% Mounted on /dev/xvda1 7.9G 6.5G 1.4G 84% / tmpfs 3.7G 0 3.7G 0% /dev/shm sysctl fs.file-nr fs.file-nr = 864 0 761182
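    The deleted-but-still-open log file theory is easy to confirm with generic commands (nothing here is specific to this instance):

```bash
# list open files whose on-disk link count is 0, i.e. deleted but still held open
sudo lsof +L1

# or, more loosely
sudo lsof | grep -i deleted

# restarting (or reloading) whichever process still holds the file releases the space
```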

    Read the article

  • configure Reverse DNS for VPS with Amazon Route 53: ISP or Amazon Route 53 issue?

    - by Oscar Cabrero
    I have a VPS hosted with myhosting.com, the domain is registered with GoDaddy, and the DNS records are managed in Amazon Route 53. I was told by myhosting support that I should create a PTR record in my DNS, but I have read that in order to reverse-resolve an IP, this record should be created in the ISP's records, which makes sense: if I want to get the domain name via an IP, the request will never be forwarded to Amazon; instead, it will ask the ISP for it. Am I right, or is MyHosting support correct that I should set up the PTR record on Amazon (which I already did)? Thanks, Oscar
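    A quick way to see who actually answers reverse lookups for the address (the IP below is a documentation placeholder; substitute the VPS's IP):

```bash
# the PTR answer, if any, comes from whoever the reverse zone is delegated to
dig -x 203.0.113.10 +short

# show which nameservers are authoritative for the /24 reverse zone
# (normally the ISP or hosting provider, not Route 53, unless they delegate it to you)
dig NS 113.0.203.in-addr.arpa +short
```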

    Read the article

  • Connectivity issues with dual NIC machine in EC2

    - by Matt Sieker
    I'm trying to get some servers set up in EC2 in a Virtual Private Cloud. To do this, I have two subnets: 10.0.42.0/24 - Public subnet 10.0.83.0/24 - Private subnet To bridge these two, I have a Funtoo instance with a pair of NICs: eth0 10.0.42.10 eth1 10.0.83.10 Which has the following routing table: Kernel IP routing table Destination Gateway Genmask Flags Metric Ref Use Iface 10.0.83.0 * 255.255.255.0 U 0 0 0 eth1 10.0.83.0 * 255.255.255.0 U 203 0 0 eth1 10.0.42.0 * 255.255.255.0 U 202 0 0 eth0 loopback * 255.0.0.0 U 0 0 0 lo default 10.0.42.1 0.0.0.0 UG 0 0 0 eth0 default 10.0.42.1 0.0.0.0 UG 202 0 0 eth0 An elastic IP is attached to the eth0 interface, and I can connect to it fine remotely. However, I cannot ping anything in the 10.0.83.0 subnet. For now iptables is not set up on the box, so there's no rules that would get in the way (Eventually this will be managed by Shorewall, but I should get basic connectivity done first) Subnet details from the VPC interface: CIDR: 10.0.83.0/24 Destination Target 10.0.0.0/16 local 0.0.0.0/0 [ID of eth1 on NAT box] Network ACL: Default Inbound: Rule # Port (Service) Protocol Source Allow/Deny 100 ALL ALL 0.0.0.0/0 ALLOW * ALL ALL 0.0.0.0/0 DENY Outbound: Rule # Port (Service) Protocol Destination Allow/Deny 100 ALL ALL 0.0.0.0/0 ALLOW * ALL ALL 0.0.0.0/0 DENY   CIDR: 10.0.83.0/24 VPC: Destination Target 10.0.0.0/16 local 0.0.0.0/0 [Internet Gateway ID] Network ACL: Default (replace) Inbound: Rule # Port (Service) Protocol Source Allow/Deny 100 ALL ALL 0.0.0.0/0 ALLOW * ALL ALL 0.0.0.0/0 DENY Outbound: Rule # Port (Service) Protocol Destination Allow/Deny 100 ALL ALL 0.0.0.0/0 ALLOW * ALL ALL 0.0.0.0/0 DENY I've been trying to work this out most of the evening, but I'm just stuck. I'm either missing something obvious, or am doing something very wrong. I would think I'd be able to ping from either interface on this box without issue. Hopefully some more pairs of eyes on this configuration will help. EDIT: I am an idiot. After I bothered to install nmap to run some more tests, I discover I can see the ports, and connect to them, pings are just being blocked.
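    Since the ports turned out to be reachable and only ICMP was blocked, one follow-up is to allow ping within the VPC. The group ID below is a placeholder and the exact flag syntax is an assumption worth checking against aws ec2 authorize-security-group-ingress help:

```bash
# permit all ICMP from the VPC's address space to the NAT box's security group
aws ec2 authorize-security-group-ingress \
    --group-id sg-0123456789abcdef0 \
    --protocol icmp --port -1 \
    --cidr 10.0.0.0/16
```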

    Read the article

  • Scaling a video processing application on EC2?

    - by Stpn
    I am approaching the need to scale a video-processing application that runs on EC2. So far the setup is one machine: Backbonejs frontend Rails 3.2 Postgresql Resque + S3 for storage The flow of the app is as follows: 1) Request from frontend. Upload a video. 2) Storing video 3) Querying external APIs. 4) Processing / encoding videos. 5) Post to frontend. I can separate the backend and frontend without any problems, but when it comes to distributing the backend between several servers I am a bit puzzled. I can probably come up with a temporary solution (like just duplicating apps, making several instances), but since I don't really have expertise in backend system administration, there can be some fundamental mistakes. Also I would rather have something that is scalable. I wonder if anyone can give some feedback on the following plan: A) Frontend machine. Just frontend, talks to backend via a REST API of sorts. B) Backend server (BS), main database. Gets request from 1), posts to 2) saves uploads to 3) C) S3 storage. D) Server for querying APIs. Basically just Resque workers that post info back to 2) E) Server for video encoding. Processes videos uploaded on 3) and uploads them back. So I will have: A)frontend \ \ B)MAIN_APP/DB ----- C)S3 Storage (Files) / \ / / \ / D)ExternalAPI_queries E)Video_Processing (redundant DB) (redundant DB) All this will supposedly talk to each other via HTTP requests. My reason for this is that the Video Processing part is really the most resource-intensive and I would just run a barebones application that accepts requests and starts processing them. Questions: 1) In this setup I will have the main database at B) and all other servers will communicate with it via HTTP requests (and store duplicates of the databases too, I guess, for safety reasons). Is this the right approach, or should I have one database that everyone connects to (how then?) 2) Is it a good idea to separate API queries from the Video Processing part? Logically they are very close (processing is determined by the result of API queries), but resource-wise Video Processing is waaay more intensive. 3) What should I use to distribute calls between backend apps based on load?
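    On question 2), Resque already makes that split cheap as long as every box points at the same Redis: each concern gets its own queue, and each server runs workers only for the queues it should handle. The queue names below are made up for illustration:

```bash
# on the API-query box: only drain the api_queries queue
QUEUE=api_queries bundle exec rake resque:work

# on the encoding box: only drain the video_encoding queue
# (run several of these if the box has spare cores)
QUEUE=video_encoding bundle exec rake resque:work

# both boxes need RAILS_ENV set and a Resque initializer pointing at the shared Redis
```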

    Read the article

  • Running an rsync sweep before initializing lsyncd for synchronizing instances on EC2

    - by chrisallenlane
    My company uses several EC2 servers that will scale up and down according to the load we're receiving on our sites at any given moment. For the sake of our discussion here, we're running four instances: master.ourdomain.com - the file syncing "hub" of the webservers www1/www2/www3.ourdomain.com - three webservers which turn on or off as dictated by load I'm using lsyncd to keep all of the webservers in sync, and for the most part, it's working quite well. We're using a two-way syncing scheme, such that each webserver syncs against master, and master syncs against each webserver. Thus, the webservers are kept in sync, even though they aren't syncing against each other directly. I'm having one problem that I'm having a hard time solving,though. It occurs under these circumstances: When changes are made on master (perhaps after we've pushed new code), while some of the redundant webservers are sleeping And then a sleeping webserver wakes-up to absorb load Under that circumstance, I would like the following to happen: First, the newly-awoken webserver should sync its file structure - one way - against master, to bring its web application code up-to-date. Then, and only then, should it begin pushing changes in its file structure back to master. Unfortunately, currently, when a sleeping server is started, when lsyncd starts up, it pushes changes back to master before updating its own codebase, thus overwriting new code with old. Thus, before lsyncd starts, I'd like to be able to synchronize the webservers code against master's, perhaps by running a simple one-way rsync against the two machines. We're running lsyncd v.2, and I've tried to make this happen by using the "bash" configuration options documented in the lsyncd manual. My configuration file looks like this: settings = { logfile = "/home/user/log/lsyncd/log.txt", statusFile = "/home/user/log/lsyncd/status.txt", maxProcesses = 2, nodaemon = false, } bash = { onStartup = "rsync [email protected]:/home/user/www /home/user/www" } sync{ default.rsyncssh, source="/home/user/www/", host="[email protected]", targetdir="/home/user/www/", rsyncOpts="-ltus", excludeFrom="/home/user/conf/lsyncd/exclude" } (I've obviously redacted that file somewhat to protect the identities of the guilty.) Simply put, though, this just isn't working. How else might I approach this problem? I was looking at the --delete-after option in man rsync, but I don't think that does what I'm looking for. Are there any suggestions about how I should approach this problem? Thanks for lending your time and expertise. Chris
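    One low-tech way to get the ordering right, independent of lsyncd's own config syntax, is to wrap the daemon in a small start script that pulls from master first. Hosts, paths and rsync options below are taken from the post; the lsyncd config filename is a guess:

```bash
#!/bin/bash
# catch up one-way from master before any two-way syncing begins
rsync -ltus [email protected]:/home/user/www/ /home/user/www/

# only start lsyncd once the local tree matches master
exec lsyncd /home/user/conf/lsyncd/lsyncd.conf.lua
```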

    Read the article

  • SQL Express 2008 R2 on Amazon EC2 instance: tons of free memory, poor performance

    - by gravyface
    The old SQL Express 2005 was running on a low-end single Xeon CPU Dell server, RAID 5 7200 disks, 2 GB RAM (SBS 2003). I have not done any baseline measurements on the old physical server, but the Web app is used by half a dozen people (maybe 2 concurrently), so I figured "how bad can an Amazon EC2 instance be?". It's pretty horrible: a difference of 8 seconds of load time on one screen. First of all, I'm not a SQL guru, but here's what I've tried: Had a Small Instance, now running a c1.medium (High Cpu Medium) Windows 2008 32-bit R2 EBS-backed instance running IIS 7.5 and SQL Express 2008 R2. No noticeable improvement. Changed Page File from fixed 256 to Automatic. Setup a Striped Mirror from within Disk Management with two attached 1 GB EBS volumes. Moved database and transaction log, left everything else on the boot EBS volume. No noticeable change. Looked at memory, ~1000 MB of physical memory free (1.7 GB total). Changed SQL instance to use a minimum of 1024 RAM; restarted server, no change in memory usage. SQL still only using ~28MB of RAM(!). So I'm thinking: this database is tiny (28MB), why isn't the whole thing cached in RAM? Surely that would speed up performance. The transaction log is 241 MB. Seems kind of large in comparison -- has this not been committed? Is it a cause of performance degradation? I recall something about Recovery Models and log sizes somewhere in my travels, but not positive. Another thing: the old server was running SQL Express 2005. Not sure if that has any impact, but I tried changing the compatibility level from SQL 2000 to 2008, but that had no effect. Anyways, what else can I try here? Seems ridiculous to throw more virtual hardware at this thing. I know I/O is going to be rough on EBS volumes, but surely others are successfully running small .NET/SQL apps on reasonably priced instances?

    Read the article

  • How to debug/solve 500 Internal Server Error on AWS micro EC2 with suEXEC, Apache and PHP CGI

    - by Oudin
    I'm running WordPress multi-site on an amazon micro ec2 with suexec, Apache and php CGi On Ubuntu 12.04 However I've been experiencing a lot of Internal server 500 errors and I'm in the process of debugging it to find a solution. I've posted my error logs below example.com error.log: [Fri Oct 26 10:10:08 2012] [warn] [client 23.23.xxx.xx] (104)Connection reset by peer: mod_fcgid: error reading data from FastCGI server [Fri Oct 26 10:10:08 2012] [error] [client 23.23.xxx.xx] Premature end of script headers: wp-cron.php [Fri Oct 26 10:50:04 2012] [warn] [client 190.213.xxx.xxx] (104)Connection reset by peer: mod_fcgid: error reading data from FastCGI server, referer: https://www.example.com/wp-admin/ [Fri Oct 26 10:50:04 2012] [error] [client 190.213.xxx.xxx] Premature end of script headers: admin.php, referer: https://www.example.com/wp-admin/ [Fri Oct 26 10:58:14 2012] [warn] [client 190.213.xxx.xxx] (104)Connection reset by peer: mod_fcgid: error reading data from FastCGI server, referer: https://www.example.com/wp-admin/network/index.php [Fri Oct 26 10:58:15 2012] [error] [client 190.213.xxx.xxx] Premature end of script headers: admin-ajax.php, referer: https://www.example.com/wp-admin/network/index.php [Fri Oct 26 10:58:56 2012] [warn] [client 190.213.xxx.xxx] (104)Connection reset by peer: mod_fcgid: error reading data from FastCGI server, referer: https://www.example.com/wp-admin/network/index.php [Fri Oct 26 10:58:57 2012] [error] [client 190.213.xxx.xxx] Premature end of script headers: plugins.php, referer: https://www.example.com/wp-admin/network/index.php [Fri Oct 26 10:59:18 2012] [warn] [client 190.213.xxx.xxx] (104)Connection reset by peer: mod_fcgid: error reading data from FastCGI server, referer: https://www.example.com/wp-admin/network/index.php [Fri Oct 26 10:59:18 2012] [error] [client 190.213.xxx.xxx] Premature end of script headers: admin-ajax.php, referer: https://www.example.com/wp-admin/network/index.php [Fri Oct 26 11:01:49 2012] [warn] [client 190.213.xxx.xxx] (104)Connection reset by peer: mod_fcgid: error reading data from FastCGI server, referer: https://www.example.com/wp-admin/ [Fri Oct 26 11:01:49 2012] [warn] [client 190.213.xxx.xxx] (104)Connection reset by peer: mod_fcgid: ap_pass_brigade failed in handle_request_ipc function, referer: https://www.example.com/wp-admin/ Apache Log: php (pre-forking): Cannot allocate memory php (pre-forking): Cannot allocate memory Recipient names must be specified Recipient names must be specified php (pre-forking): Cannot allocate memory php (pre-forking): Cannot allocate memory php (pre-forking): Cannot allocate memory [Fri Oct 26 10:49:33 2012] [warn] mod_fcgid: cleanup zombie process 2852 [Fri Oct 26 10:49:33 2012] [warn] mod_fcgid: cleanup zombie process 2851 [Fri Oct 26 10:49:33 2012] [warn] mod_fcgid: cleanup zombie process 2853 [Fri Oct 26 10:58:22 2012] [warn] mod_fcgid: process 2892 graceful kill fail, sending SIGKILL php (pre-forking): Cannot allocate memory [Fri Oct 26 10:59:21 2012] [warn] mod_fcgid: process 2894 graceful kill fail, sending SIGKILL [Fri Oct 26 10:59:25 2012] [warn] mod_fcgid: process 2866 graceful kill fail, sending SIGKILL suexec.log: [2012-10-25 16:05:36]: uid: (1002/username) gid: (1002/username) cmd: php-fcgi [2012-10-25 18:09:38]: uid: (1002/username) gid: (1002/username) cmd: php-fcgi [2012-10-25 18:09:51]: uid: (1002/username) gid: (1002/username) cmd: php-fcgi [2012-10-25 18:14:03]: uid: (1002/username) gid: (1002/username) cmd: php-fcgi [2012-10-25 18:14:06]: uid: (1002/username) 
gid: (1002/username) cmd: php-fcgi [2012-10-25 18:14:35]: uid: (1002/username) gid: (1002/username) cmd: php-fcgi [2012-10-25 20:20:27]: uid: (1002/username) gid: (1002/username) cmd: php-fcgi [2012-10-25 20:20:29]: uid: (1002/username) gid: (1002/username) cmd: php-fcgi [2012-10-25 20:20:31]: uid: (1002/username) gid: (1002/username) cmd: php-fcgi [2012-10-25 21:42:12]: uid: (1002/username) gid: (1002/username) cmd: php-fcgi [2012-10-25 22:56:50]: uid: (1002/username) gid: (1002/username) cmd: php-fcgi [2012-10-26 02:34:43]: uid: (1002/username) gid: (1002/username) cmd: php-fcgi [2012-10-26 04:25:07]: uid: (1002/username) gid: (1002/username) cmd: php-fcgi [2012-10-26 06:35:19]: uid: (1002/username) gid: (1002/username) cmd: php-fcgi [2012-10-26 06:40:05]: uid: (1002/username) gid: (1002/username) cmd: php-fcgi [2012-10-26 07:22:45]: uid: (1002/username) gid: (1002/username) cmd: php-fcgi [2012-10-26 10:10:05]: uid: (1002/username) gid: (1002/username) cmd: php-fcgi [2012-10-26 10:49:24]: uid: (1002/username) gid: (1002/username) cmd: php-fcgi [2012-10-26 10:49:24]: uid: (1002/username) gid: (1002/username) cmd: php-fcgi based on the logs can any determine what might be the cause of this? Thinking that it might be the micro instance I'm thinking of upgrading to a small. Any help would be greatly appreciated.
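    The "php (pre-forking): Cannot allocate memory" lines in the Apache log point at the instance simply running out of RAM. Before resizing to a small, a swap file is a cheap experiment on a micro (the size and path below are arbitrary):

```bash
sudo dd if=/dev/zero of=/swapfile bs=1M count=1024
sudo chmod 600 /swapfile
sudo mkswap /swapfile
sudo swapon /swapfile

# make it survive reboots
echo '/swapfile none swap sw 0 0' | sudo tee -a /etc/fstab
```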

    Read the article

  • Long connection times from PHP to MySQL on EC2

    - by Erik Giberti
    I'm having an intermittent issue connecting to a database slave with InnoDB. Intermittently I get connections taking longer than 2 seconds. These servers are hosted on Amazon's EC2. The app server is PHP 5.2/Apache running on Ubuntu. The DB slave is running Percona's XtraDB 5.1 on Ubuntu 9.10. It's using an EBS RAID array for the data storage. We already use skip-name-resolve and bind to address 0.0.0.0. This is a stub of the PHP code that's failing: $tmp = mysqli_init(); $start_time = microtime(true); $tmp->options(MYSQLI_OPT_CONNECT_TIMEOUT, 2); $tmp->real_connect($DB_SERVERS[$server]['server'], $DB_SERVERS[$server]['username'], $DB_SERVERS[$server]['password'], $DB_SERVERS[$server]['schema'], $DB_SERVERS[$server]['port']); if(mysqli_connect_errno()){ $timer = microtime(true) - $start_time; mail($errors_to,'DB connection error',$timer); } There's more than 300Mb available on the DB server for new connections and the server is nowhere near the max allowed (60 of 1,200). Loading on both servers is < 2 on 4 core m1.xlarge instances. Some highlights from the mysql config: max_connections = 1200 thread_stack = 512K thread_cache_size = 1024 thread_concurrency = 16 innodb-file-per-table innodb_additional_mem_pool_size = 16M innodb_buffer_pool_size = 13G Any help on tracing the source of the slowdown is appreciated. [EDIT] I have been updating the sysctl values for the network but they don't seem to be fixing the problem. I made the following adjustments on both the database and application servers. net.ipv4.tcp_window_scaling = 1 net.ipv4.tcp_sack = 0 net.ipv4.tcp_timestamps = 0 net.ipv4.tcp_fin_timeout = 20 net.ipv4.tcp_keepalive_time = 180 net.ipv4.tcp_max_syn_backlog = 1280 net.ipv4.tcp_synack_retries = 1 net.core.rmem_max = 16777216 net.core.wmem_max = 16777216 net.ipv4.tcp_rmem = 4096 87380 16777216 net.ipv4.tcp_wmem = 4096 87380 16777216 [EDIT] Per jaimieb's suggestion, I added some tracing and captured the following data using time. This server handles about 51 queries/second at this time of day. The connection error was raised once (at 13:06:36) during the 3 minute window outlined below. Since there was 1 failure and roughly 9,200 successful connections, I think this isn't going to produce anything meaningful in terms of reporting. Script: date >> /root/database_server.txt ; (time mysql -h database_Server -D schema_name -u appuser -p apppassword -e '') > /dev/null 2>> /root/database_server.txt Results: === Application Server 1 === Mon Feb 22 13:05:01 EST 2010 real 0m0.008s user 0m0.001s sys 0m0.000s Mon Feb 22 13:06:01 EST 2010 real 0m0.007s user 0m0.002s sys 0m0.000s Mon Feb 22 13:07:01 EST 2010 real 0m0.008s user 0m0.000s sys 0m0.001s === Application Server 2 === Mon Feb 22 13:05:01 EST 2010 real 0m0.009s user 0m0.000s sys 0m0.002s Mon Feb 22 13:06:01 EST 2010 real 0m0.009s user 0m0.001s sys 0m0.003s Mon Feb 22 13:07:01 EST 2010 real 0m0.008s user 0m0.000s sys 0m0.001s === Database Server === Mon Feb 22 13:05:01 EST 2010 real 0m0.016s user 0m0.000s sys 0m0.010s Mon Feb 22 13:06:01 EST 2010 real 0m0.006s user 0m0.010s sys 0m0.000s Mon Feb 22 13:07:01 EST 2010 real 0m0.016s user 0m0.000s sys 0m0.010s [EDIT] Per a suggestion received on a LinkedIn question, I tried setting the back_log value higher. We had been running the default value (50) and increased it to 150. We also raised the kernel value /proc/sys/net/core/somaxconn (maximum socket connections) to 256 on both the application and database server from the default 128.
We did see some elevation in processor utilization as a result but still received connection timeouts.

    Read the article

  • simple and reliable centralized logging inside Amazon VPC

    - by Nakedible
    I need to set up centralized logging for a set of servers (10-20) in an Amazon VPC. The logging should be set up so as not to lose any log messages in case any single server goes offline - or in the case that an entire availability zone goes offline. It should also tolerate packet loss and other normal network conditions without losing or duplicating messages. It should store the messages durably, at the minimum on two different EBS volumes in two availability zones, but S3 is a good place as well. It should also be realtime so that the messages arrive within seconds of their generation to two different availability zones. I also need to sync logfiles not generated via syslog, so a syslog-only centralized logging solution would not fulfill all the needs, although I guess that limitation could be worked around. I have already reviewed a few solutions, and I will list them here: Flume to Flume to S3: I could set up two logservers as Flume hosts which would store log messages either locally or in S3, and configure all the servers with Flume to send all messages to both servers, using the end-to-end reliability options. That way the loss of a single server shouldn't cause lost messages and all messages would arrive in two availability zones in realtime. However, there would need to be some way to join the logs of the two servers, deduplicating all the messages delivered to both. This could be done by adding a unique id on the sending side to each message and then writing some manual deduplication runs over the logfiles. I haven't found an easy solution to the duplication problem. Logstash to Logstash to ElasticSearch: I could install Logstash on the servers and have them deliver to a central server via AMQP, with the durability options turned on. However, for this to work I would need to use some of the clustering-capable AMQP implementations, or fan out the delivery just as in the Flume case. AMQP seems to be yet another moving part with several implementations and no real guidance on what works best for this sort of setup. And I'm not entirely convinced that I could get actual end-to-end durability from logstash to elasticsearch, assuming crashing servers in between. The fan-out solutions run into the deduplication problem again. The best solution that would seem to handle all the cases would be Beetle, which seems to provide high availability and deduplication via a Redis store. However, I haven't seen any guidance on how to set this up with Logstash, and Redis is one more moving part again for something that shouldn't be terribly difficult. Logstash to ElasticSearch: I could run Logstash on all the servers, have all the filtering and processing rules in the servers themselves and just have them log directly to a remote ElasticSearch server. I think this should bring me reliable logging and I can use the ElasticSearch clustering features to share the database transparently. However, I am not sure if the setup actually survives Logstash restarts and intermittent network problems without duplicating messages in a failover case or similar. But this approach sounds pretty promising. rsync: I could just rsync all the relevant log files to two different servers. The reliability aspect should be perfect here, as the files should be identical to the source files after a sync is done. However, doing an rsync several times per second doesn't sound fun. Also, I need the logs to be untamperable after they have been sent, so the rsyncs would need to be in append-only mode.
And log rotations mess things up unless I'm careful. rsyslog with RELP: I could set up rsyslog to send messages to two remote hosts via RELP and have a local queue to store the messages. There is the deduplication problem again, and RELP itself might also duplicate some messages. However, this would only handle the things that log via syslog. None of these solutions seem terribly good, and they have many unknowns still, so I am asking for more information here from people who have set up centralized reliable logging as to what are the best tools to achieve that goal.
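    For the rsyslog/RELP option, the piece the post asks about (disk-backed queues so messages survive outages and restarts) is built from stock rsyslog directives. This is a minimal sketch assuming the omrelp module (rsyslog-relp package) is installed, with a placeholder hostname and port; a second action block with its own queue name would fan out to the second host:

```bash
sudo tee /etc/rsyslog.d/60-relp.conf >/dev/null <<'EOF'
$ModLoad omrelp
$WorkDirectory /var/spool/rsyslog
# queue the next action to disk and retry forever if the receiver is unreachable
$ActionQueueType LinkedList
$ActionQueueFileName relp_spool_a
$ActionQueueMaxDiskSpace 1g
$ActionQueueSaveOnShutdown on
$ActionResumeRetryCount -1
*.* :omrelp:logs-a.internal:2514
EOF

sudo service rsyslog restart
```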

    Read the article

  • GitLab on a fresh Ubuntu 13 EC2 instance

    - by Polly
    I've spun up a fresh Amazon EC2 instance for a micro Ubuntu 13 server to be used as a GitLab server. I know the specs are a little low, but it should serve well for my purposes. It has an elastic (static) IP address that I have created an A record for git.mydomain.com. The first thing I did to the instance was add 1GB of swap to keep it happy from a memory perspective. I then set the hostname of the box to be git.mydomain.com and followed https://github.com/gitlabhq/gitlabhq/blob/6-2-stable/doc/install/installation.md to the letter. Everything seems to have worked, except for the web server side of things. Doing a gitlab:check shows the following: Checking Environment ... Git configured for git user? ... yes Has python2? ... yes python2 is supported version? ... yes Checking Environment ... Finished Checking GitLab Shell ... GitLab Shell version >= 1.7.4 ? ... OK (1.7.4) Repo base directory exists? ... yes Repo base directory is a symlink? ... no Repo base owned by git:git? ... yes Repo base access is drwxrws---? ... yes update hook up-to-date? ... yes update hooks in repos are links: ... can't check, you have no projects Running /home/git/gitlab-shell/bin/check Check GitLab API access: /usr/local/lib/ruby/2.0.0/net/http.rb:878:in `initialize': Connection refused - connect(2) (Errno::ECONNREFUSED) from /usr/local/lib/ruby/2.0.0/net/http.rb:878:in `open' from /usr/local/lib/ruby/2.0.0/net/http.rb:878:in `block in connect' from /usr/local/lib/ruby/2.0.0/timeout.rb:52:in `timeout' from /usr/local/lib/ruby/2.0.0/net/http.rb:877:in `connect' from /usr/local/lib/ruby/2.0.0/net/http.rb:862:in `do_start' from /usr/local/lib/ruby/2.0.0/net/http.rb:851:in `start' from /home/git/gitlab-shell/lib/gitlab_net.rb:62:in `get' from /home/git/gitlab-shell/lib/gitlab_net.rb:29:in `check' from /home/git/gitlab-shell/bin/check:11:in `<main>' gitlab-shell self-check failed Try fixing it: Make sure GitLab is running; Check the gitlab-shell configuration file: sudo -u git -H editor /home/git/gitlab-shell/config.yml Please fix the error above and rerun the checks. Checking GitLab Shell ... Finished Checking Sidekiq ... Running? ... yes Number of Sidekiq processes ... 1 Checking Sidekiq ... Finished Checking GitLab ... Database config exists? ... yes Database is SQLite ... no All migrations up? ... yes GitLab config exists? ... yes GitLab config outdated? ... no Log directory writable? ... yes Tmp directory writable? ... yes Init script exists? ... yes Init script up-to-date? ... yes projects have namespace: ... can't check, you have no projects Projects have satellites? ... can't check, you have no projects Redis version >= 2.0.0? ... yes Your git bin path is "/usr/bin/git" Git version >= 1.7.10 ? ... yes (1.8.3) Checking GitLab ... Finished It seems like I'm very nearly there. Searching on this error I have only found advice that unfortunately hasn't helped. I'm not using any kind of SSL setup, which a lot of the posts I found were about. I have tried appending 127.0.0.1 git.mydomain.com to /etc/hosts and giving the instance a reboot but there was no change. My config/gitlab.yml file has host: git.mydomain.com in it, and my gitlab-shell/config.yml has gitlab_url: "http://git.mydomain.com/" in it. I'm sure I'm missing something simple, but I've been through every relevant link I can find and have had no positive results; thank you in advance for any help!
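    Errno::ECONNREFUSED from the gitlab-shell check usually just means nothing is answering on the URL in gitlab-shell/config.yml. Before digging deeper it is worth confirming the Rails side is actually up; the commands assume the init scripts from the install guide:

```bash
sudo service gitlab status     # unicorn and sidekiq should both report running
sudo service nginx status

# gitlab-shell calls this same URL from the box itself, so test it locally too
curl -I http://git.mydomain.com/
```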

    Read the article

  • Using CloudFormation provisioned security group with specific subnet

    - by Fred Clausen
    Summary I'm attempting to create an AWS CloudFormation template which contains an instance for which I want to select a particular subnet. If I specify the subnet ID then I get the following error The parameter groupName cannot be used with the parameter subnet. From reading this thread it appears I need to provide security group IDs - not names. How can I create a security group in CloudFormation and then get its ID after the fact? Details The relevant part of the instance config is as follows "WebServerHost": { "Type" : "AWS::EC2::Instance", <..skipping metadata...> "Properties": { "ImageId" : { "ami-1234" }, "InstanceType" : { "Ref" : "WebServerInstanceType" }, "SecurityGroups" : [ {"Ref" : "WebServerSecurityGroup"} ], "SubnetId" : "subnet-abcdef123", and the security group looks as follows "WebServerSecurityGroup" : { "Type" : "AWS::EC2::SecurityGroup", "Properties" : { "GroupDescription" : "Enable HTTP and SSH", "SecurityGroupIngress" : [ {"IpProtocol" : "tcp", "FromPort" : "80", "ToPort" : "80", "CidrIp" : "0.0.0.0/0"}, {"IpProtocol" : "tcp", "FromPort" : "22", "ToPort" : "22", "CidrIp" : "0.0.0.0/0"} ] } }, How can I create and then get that security group's ID?

    Read the article

  • Read Linux-formatted (ext3) EBS volume mounted on Windows Server 2008 instance

    - by Greg Owen
    I've got a Windows Server 2008 R2 instance set up in Amazon EC2. I've also got some Ubuntu instances on the same EC2 account. I'd like to be able to mount the EBS volume from one of the Ubuntu instances onto the Windows instance as an external drive and then access that drive from the Windows instance. I've looked at tools like ext2fsd and ext2 IFS, but these haven't worked (I couldn't get the former to work, and the latter says that it supports Windows 2008 but gives an error when I try to install it, saying that it only supports up to Windows 2003). I know that there are all kinds of tools to view Linux partitions and that there are filesystems that are compatible with both Linux and Windows, but neither of those options works here (I want to be able to attach and detach the Ubuntu volumes on command, rather than have a permanent partition, and Ubuntu EBS volumes are ext3 by default). Anybody know a good tool I should use?
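    Not the in-place Windows driver being asked for, but a workaround that avoids ext2fsd/ext2 IFS entirely: attach the volume to one of the Ubuntu instances on the same account, mount it there, and pull the files across the network. Volume/instance IDs and device names below are placeholders, and the volume must be in the same availability zone as the instance:

```bash
aws ec2 attach-volume --volume-id vol-0abc12345678 --instance-id i-0abc12345678 --device /dev/sdf

# on the Ubuntu instance; the device typically appears as /dev/xvdf
sudo mkdir -p /mnt/ebsvol
sudo mount /dev/xvdf /mnt/ebsvol

# detach cleanly when finished
sudo umount /mnt/ebsvol
aws ec2 detach-volume --volume-id vol-0abc12345678
```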

    Read the article

  • Copy/move EBS volume from one Region to another

    - by Gnanam
    Background of our setup: We've hosted our web-based application in the Amazon EC2 US East (Virginia) Region. Our instance is based on a Linux distribution (CentOS) and the AMI is S3-backed. One EBS volume (400 GB) is attached to this instance. Question: We've planned to migrate our deployment to the US West (N. California) Region. From the AWS docs, I understood that for moving an AMI there is a command-line tool available, ec2-migrate-bundle. But for moving an EBS volume across Regions, currently there is no tool available. I'm looking for the easiest and/or fastest way of copying/moving an EBS volume from one Region to another. Also, are there any hidden risks involved during and/or after the migration? Experts' ideas/suggestions/recommendations on this are highly appreciated.
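    With no snapshot-copy tool available, the usual fallback is a file-level or block-level copy between an instance in each Region. Hostnames, paths and devices below are placeholders; the target volume must already exist, be attached and, for the rsync variant, be formatted and mounted:

```bash
# file-level copy, preserving permissions, hard links, ACLs and xattrs
rsync -aHAX --numeric-ids -e ssh /data/ [email protected]:/data/

# or a raw block-level copy of an unmounted volume
# dd if=/dev/xvdf bs=1M | gzip -c | ssh [email protected] 'gunzip -c | sudo dd of=/dev/xvdf bs=1M'
```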

    Read the article

  • Pass User Data to AWS client

    - by bearrito
    Has anyone successfully passed user data to the AWS CLI? I have tried various incantations of the following but it does not work. The docs say the string must be base64 encoded: http://docs.aws.amazon.com/cli/latest/reference/ec2/run-instances.html The instance logs never indicate that the script is executed or that Chef is installed. aws ec2 run-instances --image-id ami-a73264ce --count 1 --instance-type t1.micro --key-name scrubbed --iam-instance-profile Arn=arn:aws:iam::scrubbed:instance-profile/scrubbed --user-data $(base64 chef_user_data.sh --wrap=0) chef_user_data.sh #!/bin/bash curl -L https://www.opscode.com/chef/install.sh | sudo bash
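    One commonly suggested variant (behaviour differs between CLI versions, so treat this as an assumption to verify against your version's docs): let the CLI read the script with file:// instead of shelling out to base64, then check what actually reached the instance via the metadata service:

```bash
aws ec2 run-instances --image-id ami-a73264ce --count 1 --instance-type t1.micro \
    --key-name scrubbed \
    --iam-instance-profile Arn=arn:aws:iam::scrubbed:instance-profile/scrubbed \
    --user-data file://chef_user_data.sh

# on the launched instance: what user data was actually delivered?
curl -s http://169.254.169.254/latest/user-data
```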

    Read the article

  • mod_security: How to allow ssh/http access for admin?

    - by mattesque
    I am going to be installing mod_security on my AWS EC2 Linux instance tonight and need a little help/reassurance. The only thing I am truly worried about right now is making sure my (admin) access to the instance and webserver is maintained w/o compromising security. I use ssh (port 22) and http (80) to access this and I've read horror stories from other EC2 users claiming they were locked out of their sites once they put up a firewall. So my question boils down to: What settings should I put in the mod_security conf file to make sure I can get in on those ports? IP at home is not static. (Hence the issue) Thanks so, so, so much.
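    Two reassurances worth encoding (the paths are typical Amazon Linux locations and may need adjusting): mod_security only inspects HTTP requests inside Apache, so it cannot touch SSH on port 22 at all, and running it in detection-only mode first means nothing gets blocked while the rules are tuned:

```bash
# log what the rules *would* block, without blocking anything yet
sudo sed -i 's/^SecRuleEngine .*/SecRuleEngine DetectionOnly/' /etc/httpd/conf.d/mod_security.conf
sudo service httpd restart

# watch the audit log while browsing/administering the site as usual
sudo tail -f /var/log/httpd/modsec_audit.log
```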

    Read the article

  • Amazon Web Services and SQL Server support

    - by user25011
    Hi all, I have built my application using SQL Server 2008 and .NET Framework 3.5. I am looking for a scalable hosting service and have come to think of Amazon Web Services. Does Amazon also support hosting of SQL Server 2008 databases? What hosting services do you advise? Thank you.

    Read the article

  • 501 Error during Libjingle PCP on Amazon EC2 running Openfire

    - by AeroBuffalo
    I am trying to implement Google's Libjingle (version: 0.6.14) PCP example and I am getting a 501: feature not implemented error during execution. Specifically, the error occurs after each "account" has connected, been authenticated and began communicating with the other. An abbreviated log of the interaction is provided at the end. I have set up my own jabber server (using OpenFire on an Amazon EC2 server), have opened all of the necessary ports and have added each "account" to the other's roster. The server has been set to allow for file transfers. My being new to working with servers, I am not sure why this error is occur and how to go about fixing it. Thanks in advance, AeroBuffalo P.S. Let me know if there is any additional information needed (i.e. the full program log for either/both ends). Receiving End: [018:217] SEND >>>>>>>>>>>>>>>>>>>>>>>>> : Thu Jul 5 14:17:15 2012 [018:217] <iq to="[email protected]/pcp" type="set" id="5"> [018:217] <jingle xmlns="urn:xmpp:jingle:1" action="session-initiate" sid="402024303" initiator="[email protected]/pcp"> [018:217] <content name="securetunnel" creator="initiator"> [018:217] <description xmlns="http://www.google.com/talk/securetunnel"> [018:217] <type>send:winein.jpeg</type> [018:217] <client-cert>--BEGIN CERTIFICATE--END CERTIFICATE--</client-cert> [018:217] </description> [018:217] <transport xmlns="http://www.google.com/transport/p2p"/> [018:217] </content> [018:217] </jingle> [018:217] <session xmlns="http://www.google.com/session" type="initiate" id="402024303" initiator="[email protected]/pcp"> [018:217] <description xmlns="http://www.google.com/talk/securetunnel"> [018:217] <type>send:winein.jpeg</type> [018:217] <client-cert>--BEGIN CERTIFICATE--END CERTIFICATE--</client-cert> [018:217] </description></session> [018:217] </iq> [018:217] RECV <<<<<<<<<<<<<<<<<<<<<<<<< : Thu Jul 5 14:17:15 2012 [018:217] <presence to="[email protected]/pcp" from="forgesend" type="error"> [018:217] <error code="404" type="cancel"> [018:217] <remote-server-not-found xmlns="urn:ietf:params:xml:ns:xmpp-stanzas"/> [018:217] </error></presence> [018:218] RECV <<<<<<<<<<<<<<<<<<<<<<<<< : Thu Jul 5 14:17:15 2012 [018:218] <presence to="[email protected]/pcp" from="forgesend" type="error"> [018:218] <error code="404" type="cancel"> [018:218] <remote-server-not-found xmlns="urn:ietf:params:xml:ns:xmpp-stanzas"/> [018:218] </error></presence> [018:264] RECV <<<<<<<<<<<<<<<<<<<<<<<<< : Thu Jul 5 14:17:15 2012 [018:264] <iq type="result" id="3" to="[email protected]/pcp"> [018:264] <query xmlns="google:jingleinfo"> [018:264] <stun> [018:264] <server host="stun.xten.net" udp="3478"/> [018:264] <server host="jivesoftware.com" udp="3478"/> [018:264] <server host="igniterealtime.org" udp="3478"/> [018:264] <server host="stun.fwdnet.net" udp="3478"/> [018:264] </stun> [018:264] <publicip ip="65.101.207.121"/> [018:264] </query></iq> [018:420] RECV <<<<<<<<<<<<<<<<<<<<<<<<< : Thu Jul 5 14:17:15 2012 [018:420] <iq to="[email protected]/pcp" type="set" id="5" from="[email protected]/pcp"> [018:420] <jingle xmlns="urn:xmpp:jingle:1" action="session-initiate" sid="3548650675" initiator="[email protected]/pcp"> [018:420] <content name="securetunnel" creator="initiator"> [018:420] <description xmlns="http://www.google.com/talk/securetunnel"> [018:420] <type>recv:wineout.jpeg</type> [018:420] <client-cert>--BEGIN CERTIFICATE--END CERTIFICATE--</client-cert> [018:420] </description> [018:420] <transport xmlns="http://www.google.com/transport/p2p"/> [018:420] 
</content></jingle> [018:420] <session xmlns="http://www.google.com/session" type="initiate" id="3548650675" initiator="[email protected]/pcp"> [018:420] <description xmlns="http://www.google.com/talk/securetunnel"> [018:420] <type>recv:wineout.jpeg</type> [018:420] <client-cert>--BEGIN CERTIFICATE--END CERTIFICATE--</client-cert> [018:420] </description></session></iq> [018:421] TunnelSessionClientBase::OnSessionCreate: received=1 [018:421] Session:3548650675 Old state:STATE_INIT New state:STATE_RECEIVEDINITIATE Type:http://www.google.com/talk/securetunnel Transport:http://www.google.com/transport/p2p [018:421] TunnelSession::OnSessionState(Session::STATE_RECEIVEDINITIATE) [018:421] SEND >>>>>>>>>>>>>>>>>>>>>>>>> : Thu Jul 5 14:17:15 2012 [018:421] <iq to="[email protected]/pcp" id="5" type="result"/> [018:465] RECV <<<<<<<<<<<<<<<<<<<<<<<<< : Thu Jul 5 14:17:15 2012 [018:465] <iq to="[email protected]/pcp" id="5" type="result" from="[email protected]/pcp"/> [198:665] RECV <<<<<<<<<<<<<<<<<<<<<<<<< : Thu Jul 5 14:20:15 2012 [198:665] <iq type="get" id="162-10" from="forgejabber.com" to="[email protected]/pcp"> [198:665] <ping xmlns="urn:xmpp:ping"/> [198:665] /iq> [198:665] SEND >>>>>>>>>>>>>>>>>>>>>>>>> : Thu Jul 5 14:20:15 2012 [198:665] <iq type="error" id="162-10" to="forgejabber.com"> [198:665] <ping xmlns="urn:xmpp:ping"/> [198:665] <error code="501" type="cancel"> [198:665] <feature-not-implemented xmlns="urn:ietf:params:xml:ns:xmpp-stanzas"/> [198:665] </error> [198:665] </iq> Sender: [019:043] SEND >>>>>>>>>>>>>>>>>>>>>>>>> : Thu Jul 5 14:17:15 2012 [019:043] <iq type="get" id="3"> [019:043] <query xmlns="google:jingleinfo"/> [019:043] </iq> [019:043] SEND >>>>>>>>>>>>>>>>>>>>>>>>> : Thu Jul 5 14:17:15 2012 [019:043] <iq to="[email protected]/pcp" type="set" id="5"> [019:043] <jingle xmlns="urn:xmpp:jingle:1" action="session-initiate" sid="3548650675" initiator="[email protected]/pcp"> [019:043] <content name="securetunnel" creator="initiator"> [019:043] <description xmlns="http://www.google.com/talk/securetunnel"> [019:043] <type>recv:wineout.jpeg</type> [019:043] <client-cert>--BEGIN CERTIFICATE----END CERTIFICATE--</client-cert> [019:043] </description> [019:043] <transport xmlns="http://www.google.com/transport/p2p"/> [019:043] </content> [019:043] </jingle> [019:043] <session xmlns="http://www.google.com/session" type="initiate" id="3548650675" initiator="[email protected]/pcp"> [019:043] <description xmlns="http://www.google.com/talk/securetunnel"> [019:043] <type>recv:wineout.jpeg</type> [019:043] <client-cert>--BEGIN CERTIFICATE--END CERTIFICATE--</client-cert> [019:043] </description></session></iq> [019:043] RECV <<<<<<<<<<<<<<<<<<<<<<<<< : Thu Jul 5 14:17:15 2012 [019:043] <presence to="[email protected]/pcp" from="forgereceive" type="error"> [019:043] <error code="404" type="cancel"> [019:043] <remote-server-not-found xmlns="urn:ietf:params:xml:ns:xmpp-stanzas"/> [019:043] </error></presence> [019:044] RECV <<<<<<<<<<<<<<<<<<<<<<<<< : Thu Jul 5 14:17:15 2012 [019:044] <presence to="[email protected]/pcp" from="forgereceive" type="error"> [019:044] <error code="404" type="cancel"> [019:044] <remote-server-not-found xmlns="urn:ietf:params:xml:ns:xmpp-stanzas"/> [019:044] </error></presence> [019:044] RECV <<<<<<<<<<<<<<<<<<<<<<<<< : Thu Jul 5 14:17:15 2012 [019:044] <iq to="[email protected]/pcp" type="set" id="5" from="[email protected]/pcp"> [019:044] <jingle xmlns="urn:xmpp:jingle:1" action="session-initiate" sid="402024303" initiator="[email protected]/pcp"> 
[019:044] <content name="securetunnel" creator="initiator"> [019:044] <description xmlns="http://www.google.com/talk/securetunnel"> [019:044] <type>send:winein.jpeg</type> [019:044] <client-cert>--BEGIN CERTIFICATE--END CERTIFICATE--</client-cert> [019:044] </description> [019:044] <transport xmlns="http://www.google.com/transport/p2p"/> [019:044] </content></jingle> [019:044] <session xmlns="http://www.google.com/session" type="initiate" id="402024303" initiator="[email protected]/pcp"> [019:044] <description xmlns="http://www.google.com/talk/securetunnel"> [019:044] <type>send:winein.jpeg</type> [019:044] <client-cert>--BEGIN CERTIFICATE--END CERTIFICATE--</client-cert> [019:044] </description></session></iq> [019:044] TunnelSessionClientBase::OnSessionCreate: received=1 [019:044] Session:402024303 Old state:STATE_INIT New state:STATE_RECEIVEDINITIATE Type:http://www.google.com/talk/securetunnel Transport:http://www.google.com/transport/p2p [019:044] TunnelSession::OnSessionState(Session::STATE_RECEIVEDINITIATE) [019:044] SEND >>>>>>>>>>>>>>>>>>>>>>>>> : Thu Jul 5 14:17:15 2012 [019:044] <iq to="[email protected]/pcp" id="5" type="result"/> [019:088] RECV <<<<<<<<<<<<<<<<<<<<<<<<< : Thu Jul 5 14:17:15 2012 [019:088] <iq type="result" id="3" to="[email protected]/pcp"> [019:088] <query xmlns="google:jingleinfo"> [019:088] <stun> [019:088] <server host="stun.xten.net" udp="3478"/> [019:088] <server host="jivesoftware.com" udp="3478"/> [019:088] <server host="igniterealtime.org" udp="3478"/> [019:088] <server host="stun.fwdnet.net" udp="3478"/> [019:088] </stun> [019:088] <publicip ip="65.101.207.121"/> [019:088] </query> [019:088] </iq> [019:183] RECV <<<<<<<<<<<<<<<<<<<<<<<<< : Thu Jul 5 14:17:15 2012 [019:183] <iq to="[email protected]/pcp" id="5" type="result" from="[email protected]/pcp"/> [199:381] RECV <<<<<<<<<<<<<<<<<<<<<<<<< : Thu Jul 5 14:20:15 2012 [199:381] <iq type="get" id="474-11" from="forgejabber.com" to="[email protected]/pcp"> [199:381] <ping xmlns="urn:xmpp:ping"/> [199:381] </iq> [199:381] SEND >>>>>>>>>>>>>>>>>>>>>>>>> : Thu Jul 5 14:20:15 2012 [199:381] <iq type="error" id="474-11" to="forgejabber.com"> [199:381] <ping xmlns="urn:xmpp:ping"/> [199:381] <error code="501" type="cancel"> [199:381] <feature-not-implemented xmlns="urn:ietf:params:xml:ns:xmpp-stanzas"/> [199:382] </error></iq>

    Read the article

  • What's your opinion on Amazon S3?

    - by Space Cracker
    I'm searching for the best online file storage and I've found a lot with different features... I feel that Amazon S3 is the best... Could anyone who has tried such a service give me their opinion on it, and are there any others that are more valuable than Amazon S3?

    Read the article

  • Amazon S3 not sending Content-Type header

    - by Luke____
    I have an application that downloads content from various sources. It relies on the "Content-Type" header being set on images. The majority of web servers do this correctly, but it appears the Amazon S3 server is not setting the Content-Type. I assume Amazon's servers are configured correctly, so what could be the problem? Are these images not uploaded correctly? Or should I not be relying on the content type being set? Example Thanks
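    S3 returns whatever Content-Type was stored when the object was uploaded; if the uploader omits it, a generic default (typically binary/octet-stream) is stored instead, so this is usually an upload-side fix. Sketched here with placeholder bucket and key names:

```bash
# set the header explicitly at upload time
aws s3 cp wine.jpg s3://my-bucket/images/wine.jpg --content-type image/jpeg

# confirm what S3 will now send back
curl -sI https://my-bucket.s3.amazonaws.com/images/wine.jpg | grep -i '^content-type'
```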

    Read the article

  • ASP.NET State Server on EC2 not connecting

    - by CountCet
    I am trying to set up an ASP.NET State Server on Amazon EC2. The single web server using this State Server is also on EC2. I've done the following things: I've added the IIS role on the State Server. I changed the value in the registry to allow connections for the service and started the aspstate service. I verified it is listening on port 42424 by checking netstat. I edited the web.config of the web server to point to this server. I added the TCP port to my EC2 security group and allowed it for all incoming IPs. Is there anything else I'm not doing?

    Read the article
