Search Results

Search found 2923 results on 117 pages for 'amazon ami'.


  • Auto re-attach EBS volume on start-up?

    - by Phillip Oldham
    I'm setting up a database server on EC2, and I need to ensure that an EBS volume is automatically attached and available before the database service starts up. I'm using SMF, so I can test whether a particular filesystem is available before starting the db service; there's no problem from that perspective. However, I'm not quite sure how to tell the server to auto-attach the EBS volume during or after boot. What would be the best strategy for this?
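
    One possible approach, sketched below: have a boot script (or the SMF start method itself) attach the volume and wait for the device node before mounting. This assumes the EC2 API tools and credentials are available on the instance; the volume ID, device name and mount point are placeholders.

        # attach the EBS volume to this instance, then wait for the device node
        INSTANCE_ID=$(wget -q -O - http://169.254.169.254/latest/meta-data/instance-id)
        ec2-attach-volume vol-xxxxxxxx -i $INSTANCE_ID -d /dev/sdf
        while [ ! -e /dev/sdf ]; do sleep 2; done   # the device appears asynchronously
        mount /dev/sdf /data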

    Read the article

  • What are the steps needed to set up and use security for AWS command line tools?

    - by chris
    I've been trying to set up the AWS command-line tools following Eric's most useful guide at http://alestic.com/2012/09/aws-command-line-tools. I can't seem to find a good how-to for generating the x.509 certificate and private key, and for how they relate to the various security files the guide creates. Update: I have found a couple of links that describe some of the steps. These steps seem to work, but I'm not sure whether this is secure and the best way to do it:

    1) Create a private key:

        openssl genrsa -out my-private-key.pem 2048

    2) Create an x.509 cert:

        openssl req -new -x509 -key my-private-key.pem -out my-x509-cert.pem -days 365

    Hit enter to accept all of the defaults. Then, from the IAM Dashboard, under Users, select a user and click on the "Security Credentials" tab. Click on "Manage Signing Certificates", then "Upload Signing Certificate", paste in the contents of my-x509-cert.pem, click OK, and it should be accepted. One step that is discussed, but was not required for me, was the addition and subsequent removal of a passphrase on the private key. Should I have been prompted for one, and is my cert potentially unsafe because of this?
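
    For what it's worth, a passphrase only protects the key file at rest; the uploaded certificate is the same either way. A sketch of how one could be added to (or stripped from) the key generated above, using only standard openssl commands:

        # add a passphrase (prompts for one); -des3 encrypts the key file
        openssl rsa -des3 -in my-private-key.pem -out my-private-key-protected.pem
        # remove it again (prompts for the existing passphrase)
        openssl rsa -in my-private-key-protected.pem -out my-private-key.pem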

    Read the article

  • Mounting Gluster Volumes

    - by Roman Newaza
    I have created a Hosted Zone with the 2 IP addresses of a Gluster cluster; both IPs are returned by dig. After mounting the Gluster volume, I cannot ls the mount point, as it takes a long time. mount shows it as mounted, but df doesn't. Finally, I get: ls: cannot access /mnt/storage: Transport endpoint is not connected. But if I mount it with just one of the IPs, there is no problem - the volume contents are accessible.

        OS: Ubuntu 11.10
        GlusterFS: 3.2.6
        Log: http://pastie.org/private/2jgp4h1hnqgzych3djtg

    I can telnet to the storage from the client - the ports are open.
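
    If your version of mount.glusterfs supports it, the backupvolfile-server option is one commonly suggested way to tolerate the loss of the server named in the mount, instead of relying on round-robin DNS - a sketch, with hypothetical hostnames and volume name:

        mount -t glusterfs -o backupvolfile-server=gluster2.example.com \
            gluster1.example.com:/myvolume /mnt/storage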

    Read the article

  • AWS Load balancer connection reset

    - by joshmmo
    I have an ELB set up with two instances. The issue is that when I do not add www., the ELB just hangs. This is some info I get when I spider with wget:

        Spider mode enabled. Check if remote file exists.
        --2013-06-20 13:40:54--  http://learning.example.com/
        Resolving learning.example.com... 54.xxx.x.x53, 50.xx.xxx.x71
        Connecting to learning.example.com|54.xxx.x.x53|:80... connected.
        HTTP request sent, awaiting response... No data received. Retrying.

    When I add www. it works great. I have a GoDaddy SSL cert, added in the listener section, that covers 3 domains: www.learning.example.com, files.learning.example.com and learning.example.com. These are my listener settings:

        HTTP 80   HTTPS 443   N/A      N/A
        SSL  443  SSL   443   Change   canvasNew (Change)

    My EC2 instances are running apache2 on Ubuntu 12.04. I will be happy to post my vhosts file if needed. However, when I ran the server with the domains pointing to just one EC2 instance, things worked fine. How can I fix this issue for learning.example.com? Why does www work just fine? A second question: what is the difference between instance protocol and load balancer protocol?

    EDIT: Here are the dig results for learning.example.com from yesterday. I changed the DNS entry to point to one instance to make sure it was the ELB. When I switch it back, I will do the same for www.learning.example.com.

        ; <<>> DiG 9.9.1-P2 <<>> learning.example.com
        ;; global options: +cmd
        ;; Got answer:
        ;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 20210
        ;; flags: qr rd ra; QUERY: 1, ANSWER: 3, AUTHORITY: 0, ADDITIONAL: 0

        ;; QUESTION SECTION:
        ;learning.example.com.        IN    A

        ;; ANSWER SECTION:
        learning.example.com.    2559    IN    CNAME    canvas-22222222222.us-west-1.elb.amazonaws.com.
        canvas-22222222222.us-west-1.elb.amazonaws.com.    60    IN    A    54.xxx.x.x53
        canvas-22222222222.us-west-1.elb.amazonaws.com.    60    IN    A    50.xx.xxx.x71

        ;; Query time: 83 msec
        ;; SERVER: 10.x.xx.20#53(10.x.xx.20)
        ;; WHEN: Thu Jun 20 13:40:47 2013
        ;; MSG SIZE  rcvd: 137

    EDIT 2: Here is some more info that might be helpful.

        Port Configuration:
        80 (HTTP) forwarding to 443 (HTTPS)
            Backend Authentication: Disabled
            Stickiness: Disabled (edit)
        443 (SSL, Certificate: canvasNew) forwarding to 443 (SSL)
            Backend Authentication: Disabled

    So I switched everything to one EC2 IP address, bypassing the ELB, to make sure things are working. It's running great: www and the non-www URL both work perfectly. It's only when I switch things to the ELB that learning.example.com hangs while www.learning.example.com works. Hopefully you can get some ideas flowing.
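
    One quick sanity check while everything is pointed back at the ELB: confirm that both names resolve to the same ELB CNAME. ELB addresses rotate over time, so any record pinned to bare IPs will eventually go stale and hang exactly like this.

        dig +short www.learning.example.com
        dig +short learning.example.com   # both should show the ELB CNAME, not bare IPs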

    Read the article

  • Need a recommendation for shared storage on auto-scaling ec2 w/ scalr

    - by john h.
    I have come across so many answers to this question that I am completely lost! I am moving our 2 sites to a load-balanced EC2 system with Scalr as our cloud manager. Now the question comes up of persistent storage for users' uploaded content and other files. Could someone please give me a suggestion, and possibly a link to a tutorial, for the following setup and goals:

        2 websites (1 forum, 1 ecommerce)
        1 LB
        1 app server (to scale out to as many as needed)
        1 DB server (to scale out to as many as needed)

    Our sites will need to autoscale, and from what I am learning about Scalr, that means as new instances load up, I need to run a script to set the basics up on that server (git, php mods, pull site from git, move keys, etc.). What I don't understand is how I should handle user-uploaded content like profile pictures, avatars, product images, themes, etc. Do I mount an EBS or s3fs folder to hold the websites (maybe /var/www/websitefolder), or do I do something like mounting just the avatar folders (/var/www/websitefolder/images/avatars)? I am not sure where to go with this. Could someone give me some detailed help? -John
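
    As one illustration of the narrower option mentioned above - mounting only the upload folders from S3 via s3fs so every instance sees the same files - a sketch with a hypothetical bucket name:

        s3fs avatars-bucket /var/www/websitefolder/images/avatars -o allow_other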

    Read the article

  • List DB2 version, OS and hardware on Linux? (AWS image)

    - by mestika
    Hello everybody, I'm not that familiar with Linux, but I'm currently working on an AWS image for an assignment, and I need to display the DB2 version, the OS and the hardware. Is there a command or program of some sort I can use for this purpose? I tried an rpm called "Bonnie", but that only reports the system's I/O throughput. Thanks, Mestika
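
    A few standard commands that together cover all three, assuming a working DB2 environment for the first one:

        db2level              # DB2 version and fixpack
        uname -a              # kernel and architecture
        cat /etc/issue        # distribution name and release
        cat /proc/cpuinfo     # CPU details
        free -m               # installed memory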

    Read the article

  • How to move S3 bucket to different location

    - by skrat
    We use S3 for storing millions of entries in our webapp. We are now moving the whole thing to EC2 servers in the EU, and we also want to move the S3 data to the EU. But the bucket we use is in the US, and there seems to be no tool to move a whole bucket's contents to a different bucket. There is also the problem of how to synchronize the data later, when we switch over to the EU bucket - the data that gets created while the migration is running.
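
    Newer versions of s3cmd can copy objects server-side between buckets, which suggests one possible migration sketch (bucket names hypothetical); re-running the sync afterwards picks up objects created during the initial copy:

        s3cmd mb s3://my-eu-bucket --bucket-location=EU
        s3cmd cp --recursive s3://my-us-bucket s3://my-eu-bucket
        s3cmd sync s3://my-us-bucket s3://my-eu-bucket   # catch up on new objects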

    Read the article

  • Connect to MySQL EC2 Instance outside of VPC

    - by Brian W
    I have a VPC set up with a few EC2 instances inside. I'm attempting to connect to a MySQL database on an EC2 instance outside the VPC, with no luck. I have the security groups on the VPC EC2 instances set to allow outbound 0.0.0.0/0, which I assumed would let them connect to anything outbound. I also followed a tutorial on creating a NAT, but wasn't exactly sure how to use it to connect to an external database. In any case, if anyone has experience and knows the proper way to connect to a database outside the VPC, it would be greatly appreciated!
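
    Note that in a VPC, outbound access depends on the subnet's route table (NAT or internet gateway) as well as the security group, and the database instance's own security group must allow inbound 3306 from the address the traffic arrives from (for a private subnet, the NAT's public IP). A quick connectivity test from a VPC instance, with a hypothetical hostname:

        nc -zv db.example.com 3306     # checks TCP reachability
        mysql -h db.example.com -u appuser -p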

    Read the article

  • Debugging logrotate postrotate script

    - by robert
    Following is my logrotate conf:

        /mnt/je/logs/apache/jesites/web/*.log {
            missingok
            rotate 0
            size 5M
            copytruncate
            notifempty
            sharedscripts
            postrotate
                /home/bitnami/.conf/compress-and-upload.sh /mnt/je/logs/apache/jesites/web/ web
            endscript
        }

    And the compress-and-upload.sh script:

        #!/bin/sh
        # Perform rotated log file compression
        tar -czPf $1/log.gz $1/*.1

        # Fetch the instance id from the instance metadata
        EC2_INSTANCE_ID="`wget -q -O - http://169.254.169.254/latest/meta-data/instance-id`"
        if [ -z $EC2_INSTANCE_ID ]; then
            echo "Error: Couldn't fetch Instance ID .. Exiting .."
            exit;
        else
            /usr/local/bin/s3cmd put $1/log.gz s3://xxxx/logs/$(date +%Y)/$(date +%m)/$(date +%d)/$2/$EC2_INSTANCE_ID-$(date +%H:%M:%S)-$2.gz
        fi

        # Remove the rotated, compressed log file
        rm -f $1/log.gz

    The files are rotated, but the shell script is not executed. I don't know how to debug the postrotate script. Is there a log file I can check to see whether there are permission issues? If I execute the script directly from the command line, the file upload works. Thanks.
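
    One way to debug this, assuming the conf file lives under /etc/logrotate.d (the filename here is hypothetical):

        logrotate -d /etc/logrotate.d/jesites    # dry run with debug output
        logrotate -vf /etc/logrotate.d/jesites   # force a rotation, verbose

    The dry run prints whether each log would rotate (with size 5M, nothing happens until a log exceeds 5MB) and whether the postrotate script would be invoked.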

    Read the article

  • File replication among EC2 instances

    - by Peuge
    I am pretty new to AWS, so please excuse my ignorance. We want a setup with a SQL DB instance + web server instance. However, we would like the web server to sit behind an ELB, thus allowing us to use autoscaling. My question, however, is how do we replicate the web app across instances? Say, for example, we have two web servers running and we need to make a critical update to the web app; ultimately we would only want to upload to one instance, not both. Is it even best practice to store your web app on the instance, or are there better ways to store and share the app between instances?
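
    One simple pattern, assuming a single "source of truth" deploy box and hypothetical hostnames: push each release to all instances rather than uploading to each by hand.

        # push the updated app from the deploy box to every web server
        for host in web1 web2; do
            rsync -az --delete /srv/webapp/ "$host":/var/www/webapp/
        done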

    Read the article

  • One EC2 source with distributed varnish machines

    - by Elad Lachmi
    I have a web site hosted on an EC2 instance (2008 R2 + IIS 7.5 + SQL Server). I put up one Linux box running RHEL with Varnish. After some configuration trial and error, I found a configuration that works. Now I want to duplicate the Varnish boxes to other availability zones, but continue to pull the pages from the original Windows box. It is my understanding that I can put the Varnish boxes in different zones and pull from the Windows box via its external IP. But what do I need to do in order for each user to receive content from the box physically closest to them? Is this even possible? Thank you!

    Read the article

  • Are EC2 security group changes effective immediately for running instances?

    - by Jonik
    I have an EC2 instance running, and it belongs to a security group. If I add a new allowed connection to that security group through AWS Management Console, should that change be effective immediately? Or perhaps only after restart of the instance? In my case, I'm trying to allow access to PostgreSQL's default port (tcp 5432 5432 0.0.0.0/0), and I'm not sure if it's the EC2 firewall or PostgreSQL's settings that are refusing the connection.
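
    (Security group changes do apply to running instances without a restart, so a rule like the following - using the old EC2 API tools, group name hypothetical - should open the port immediately. If the connection is still refused, check that PostgreSQL is listening on the external interface (listen_addresses in postgresql.conf) and that pg_hba.conf permits the client.)

        ec2-authorize my-db-group -P tcp -p 5432 -s 0.0.0.0/0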

    Read the article

  • Recommended method for routing www to zone apex (naked domain) using AWS Route 53

    - by Dan Christian
    In my AWS Route 53 control panel, I simply have 2 A records currently set up for the 'www' and 'non-www' names. Both point to the Elastic IP address associated with my EC2 instance. This works well, and my website is available at both variations, but I really want all 'www' traffic to route to the 'non-www' name. What is the recommended method, using AWS Route 53, for routing all traffic that comes to www.example.com to example.com?
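
    Route 53 itself only answers DNS queries; the redirect has to happen at the web server (or an equivalent service). A minimal sketch, assuming Apache on the instance and the hypothetical domain above:

        <VirtualHost *:80>
            ServerName www.example.com
            Redirect permanent / http://example.com/
        </VirtualHost>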

    Read the article

  • How to group EC2 instances in order to delegate administration to different teams?

    - by Olivier
    Is it possible (using ARNs) to make several groups of instances, and then use different policies to grant access to one group of instances only and not the others? For example:

        {
          "Statement": [
            {
              "Action": "ec2:*",
              "Effect": "Allow",
              "Resource": "*"
            },
            {
              "Effect": "Allow",
              "Action": "elasticloadbalancing:*",
              "Resource": "*"
            },
            {
              "Effect": "Allow",
              "Action": "cloudwatch:*",
              "Resource": "*"
            },
            {
              "Effect": "Allow",
              "Action": "autoscaling:*",
              "Resource": "*"
            }
          ]
        }

    Instead of "*", could we use a group or something like that? Like a specific subnet? A tag? Or whatever... Thanks for your help.
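
    EC2 resource-level permissions were rolled out after this question was asked, but with a recent enough API, one sketch is to condition on an instance tag. Note that only some EC2 actions support resource-level permissions, and the region, account ID and team-a tag value below are hypothetical:

        {
          "Statement": [
            {
              "Effect": "Allow",
              "Action": ["ec2:StartInstances", "ec2:StopInstances",
                         "ec2:RebootInstances", "ec2:TerminateInstances"],
              "Resource": "arn:aws:ec2:us-east-1:123456789012:instance/*",
              "Condition": {
                "StringEquals": { "ec2:ResourceTag/team": "team-a" }
              }
            }
          ]
        }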

    Read the article

  • Move files from ftp server to s3

    - by lev
    I would like to set up an FTP server where users will upload files, and, for each file, put it on S3 storage and delete it from the FTP server. (The server runs on EC2 Ubuntu.) Here is what I have already tried, with no success:

    1) Mount the S3 bucket using s3fs. I followed the instructions, but there is a bug in the latest version of s3fs that prevents it from working. The bug was fixed on the develop branch, but I don't want to use an unstable version in production.

    2) Use vsftpd and s3cmd sync via cron to sync the files periodically. The problem with that approach is that s3cmd can start running in the middle of a file upload and start syncing the incomplete file. Also, s3cmd doesn't give any feedback if the sync fails, so I have no way of knowing whether I can delete the files after the sync command finishes running.

    3) Use pure-ftpd's upload script feature (which allows running a script after a file finishes uploading), but I noticed that if the upload failed in the middle, the script runs anyway, and I have no way of knowing whether the upload was successful or not.

    I've been at it for a few days now, and I'm at a loss. Any suggestions will be welcomed.
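
    On the pure-ftpd route, the upload script receives the uploaded file's full path as its first argument, so checking s3cmd's exit status at least covers the "no feedback" concern - a sketch, with a hypothetical bucket name:

        #!/bin/sh
        # invoked by pure-uploadscript; $1 is the uploaded file's path
        if /usr/local/bin/s3cmd put "$1" s3://my-bucket/uploads/; then
            rm -f "$1"    # delete only after a confirmed successful upload
        fi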

    Read the article

  • CloudFront with Custom Origin and ELB

    - by kmfk
    We are using CloudFront for our static assets but also wanted to allow for gzip. We set up a new distribution with a custom origin pointing back to our application servers, which are behind an Elastic Load Balancer. We manually keep the files in sync across the cluster and update them when we publish. However, with this setup, we get nothing but Misses and RefreshHits from CloudFront, which so far has defeated the purpose. Are there any additional settings needed in order to use an ELB as your custom origin? The docs reference this as a viable solution. When we point the distribution to a single server in our production cluster, CloudFront properly caches our assets. Is it possible that the sticky-sessions cookie, and the subsequent header that gets added by it, could be an issue?

        Cache-Control: no-cache="set-cookie"   // added by the load balancer

    Any ideas? FYI - currently we have our custom origin pointing to a single EC2 instance, so caching is working correctly - in case you try to curl the file below. Example headers:

        curl -I http://static.quick-cdn.com/css/9850999.css
        HTTP/1.0 200 OK
        Accept-Ranges: bytes
        Cache-Control: max-age=3700
        Cache-Control: no-cache="set-cookie"
        Content-Length: 23038
        Content-Type: text/css
        Date: Thu, 12 Apr 2012 23:03:52 GMT
        Last-Modified: Thu, 12 Apr 2012 23:00:14 GMT
        Server: Apache/2.2.17 (Ubuntu)
        Vary: Accept-Encoding
        X-Cache: RefreshHit from cloudfront
        X-Amz-Cf-Id: K_q7Zy3_jdzlEJ85ukELVtdx1GmuXqApAbZZ7G0fPt0mxRMqPKX5pQ==,RzJmPku-rEIO9WlvuSoKa8hiAaR3dLk5KC4cQMWWrf_MDhmjWe8n6A==
        Via: 1.0 28c34f9fbf559a21ee16594849e4fc9c.cloudfront.net (CloudFront)
        Connection: close

    Read the article

  • Using an AWS EC2 server to host a busy website; I need to set up load balancing

    - by Philip Isaacs
    My company has one EC2 server running on AWS, with a MySQL DB and Apache on the same instance. This one instance hosts a website built on the PHP Zend Framework. The site runs like crap when it starts to get busy with a lot of traffic, so I'm looking for some advice on how to set up something that can handle the load better. My first question is: should I move the MySQL DB onto a separate EC2 instance, or perhaps use AWS's RDS service, which looks like a nice option? I'm sort of new to some of this, but I'm guessing I'll need at least two EC2 instances for serving the website and some sort of load-balancing mechanism to distribute traffic. But maybe not - I'm not sure. Also, what are some best practices for replicating the data so that it stays in sync on both instances? Okay, I know these are a lot of questions, but I don't know where to start, so any advice will help.

    Read the article

  • monitoring load on AWS EC2

    - by hortitude
    I'm interested in monitoring our EC2 instances to ensure we scale up when necessary. Right now we are monitoring idle CPU time as our metric. We aren't measuring disk I/O, as we are not a very disk-intensive application. When running on our own hardware in a datacenter, I also usually monitor "load" from the top command. My question is: does it make sense to monitor "load" in a shared environment such as EC2? If so, how do you interpret the results?
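
    Load average means the same thing on EC2 as anywhere else (runnable processes averaged over 1, 5 and 15 minutes, read against the number of cores); the shared-tenancy wrinkle is CPU steal time, shown as %st in top, which indicates the hypervisor withholding cycles. A quick comparison:

        uptime                               # 1/5/15-minute load averages
        grep -c ^processor /proc/cpuinfo     # core count to compare against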

    Read the article

  • s3fs changing s3 permissions?

    - by magd1
    My developer believes that s3fs is changing my bucket's permissions. Is this possible? I want my bucket to be public, but it keeps reverting back to private. Here's my fstab:

        s3fs#production /mnt/production fuse use_cache=/tmp,use_rrs=1,allow_other,uid=1000,gid=1000 0 0

    My developer mentioned the -o default_acl option (default="private"). The documentation refers to "canned ACLs", but I don't understand what those are.
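
    Canned ACLs are S3's predefined permission sets (private, public-read, public-read-write, authenticated-read, ...). s3fs applies its default_acl option to every object it writes, and the default is private - which would explain uploads "reverting". A sketch of the same fstab line with the option added:

        s3fs#production /mnt/production fuse use_cache=/tmp,use_rrs=1,allow_other,uid=1000,gid=1000,default_acl=public-read 0 0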

    Read the article

  • Tracking costs within one AWS account

    - by caius howcroft
    I have what I'm sure is a very common problem. Our company has many projects and groups working for different clients. We do a lot of our development work in the cloud and deploy our solutions there. We have a VPC set up that isolates projects from each other in their own subnets, and that VPC has a hardware VPN connection back to HQ. We need to keep track of the costs run up by every project. The way I currently implement this is by providing my own tools for starting and stopping instances, which log which user (and thus which project) to bill the instance to. This works okay for BoxUsage costs, but not for other costs. I could create a separate account for each project and use consolidated billing; I think this would allow me to pay once but track costs per "project". However, I would then not be able to share common resources (like bringing account B's running instances inside the same VPC). Does anyone have any suggestions? Cheers, C
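
    One option worth checking: AWS's cost allocation tags, which break the detailed billing report down by tag once activated in the billing preferences. Your start/stop tooling could tag each instance with its project at launch - a sketch using the old EC2 API tools, with hypothetical IDs:

        ec2-create-tags i-1234abcd --tag Project=client-x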

    Read the article

  • NAT and NGINX on the same server

    - by Morten
    I'm setting up a VPC cluster for my collaborative todo list application www.getdoneapp.com. To have my servers on the private network, I need a NAT server so that the servers on the private network can connect to the internet to receive updates and whatnot. The NAT server will consume an Elastic IP address, so I'm wondering if I can just have that NAT server also run nginx to direct HTTP traffic to my internal servers. So the question is: is it a bad idea to run nginx and NAT on the same server, or should I go for consuming 2 Elastic IP addresses?

    Read the article

  • Adding Multiple Interfaces to EC2 Ubuntu 12.04

    - by nocode
    I have an m1.medium Ubuntu 12.04 instance with two ENIs. I have a VPC set up with a private and a public subnet:

        Private: 10.50.1.0/24
        Public:  10.50.101.0/24

    I initiated the instance on the private subnet. I configured a NAT instance, and all servers in the private subnet route through it for internet access. The route tables on the private subnet point to the NAT instance, and the route table on the public subnet points to the internet gateway. I am trying to add a public interface to the machine so that I can put it behind an ELB. When I added the second ENI, configured a static IP in /etc/network/interfaces, and restarted the network services, I could no longer connect from the public subnet to the private subnet.

        Works:          private -> private, private -> public
        Does not work:  public -> private

    For public -> private, I ran tcpdump on the private machine and can see the request coming in. My guess is that the reply is trying to route over the new public interface instead of the private one. Here's my route table:

        default      10.50.1.1  0.0.0.0        UG  100  0  0  eth0
        10.50.1.0    *          255.255.255.0  U   0    0  0  eth0
        10.50.101.0  *          255.255.255.0  U   0    0  0  eth1

    My networking knowledge is limited, and I believe I have to add some routes, but I'm unsure of the command/syntax needed.
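
    The symptom matches replies to public-subnet traffic leaving via eth0 (the default route) instead of eth1. The usual fix is a second routing table so traffic from eth1's address goes back out eth1 - a sketch, assuming 10.50.101.1 is the public subnet's gateway (in a VPC, the gateway is the subnet's .1 address) and 10.50.101.50 is eth1's hypothetical address:

        echo "100 public" >> /etc/iproute2/rt_tables
        ip route add default via 10.50.101.1 dev eth1 table public
        ip rule add from 10.50.101.50/32 table public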

    Read the article

  • Sticky Load Balancing with AWS

    - by John Wheal
    I have just set up a load balancer with AWS for a few instances, as search engine crawlers were bringing down the site (it has millions of pages). Parts of the site allow you to log in, so I selected "Enable Application Generated Cookie Stickiness", and everything works fine. I now wonder how this will affect my SEO and the crawlers. As I selected sticky load balancing, does this mean that a crawler will be stuck on one server and therefore defeat the point of the load balancer? Any recommendations will be appreciated.

    Read the article

  • growing EBS RAID volume

    - by Ryan Fernandes
    I've created a RAID0 configuration with two 1GB EBS volumes, mounted at /dev/md0 using mdadm and formatted with XFS. Next, I copied some files over to fill the volume to around 30% of its capacity (of 2GB). I then created snapshots of the volumes using ec2-consistent-snapshot and created volumes from those snapshots, but specified the volume size to be 2GB (effectively doubling the capacity of each disk). I then spun up a new instance, assembled the RAID0 configuration on /dev/md0 from the 2 volumes mentioned above, and mounted it on /vol. df -hT showed /vol as 2GB (as expected). Now I ran sudo xfs_growfs -d /vol. The command completed normally but reported blocks changed from 523776 to 524160 (only!), and df -hT still showed /vol as 2GB (instead of the expected 4GB). I rebooted, remounted, and reassembled the RAID, but it still reports the old size.

    EDIT: Trying to grow the RAID using mdadm --grow yields:

        mdadm: raid0 array /dev/md0 cannot be reshaped

    Is there any other way I can grow a RAID0 array?
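
    Since each member volume was enlarged rather than a member added, mdadm has nothing to reshape, and the array keeps its original size. One workaround that is sometimes reported to work - destructive if anything differs, so snapshot everything first - is to recreate the array over the same devices with the same device order and chunk size, then grow the filesystem. Device names and chunk size below are hypothetical:

        mdadm --stop /dev/md0
        mdadm --create /dev/md0 --level=0 --raid-devices=2 --chunk=256 /dev/sdf /dev/sdg
        mount /dev/md0 /vol
        xfs_growfs /vol   # should now see the larger devices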

    Read the article
