Search Results

Search found 2891 results on 116 pages for 'amazon ecs'.

Page 25/116 | < Previous Page | 21 22 23 24 25 26 27 28 29 30 31 32  | Next Page >

  • Running docker in VPC and accessing container from another VPC machine

    - by Bogdan Gaza
    I'm having issues while running Docker in an AWS VPC. Here is my setup: I've got two machines running in the VPC, 10.0.100.150 and 10.0.100.151, both with Elastic IPs assigned and both running in the same internet-enabled subnet. Let's say I'm running a web server that serves static files in a container on the 10.0.100.150 machine; the container has IP 172.17.0.2, and its port 8111 is forwarded to port 8111 on the machine.

    If I try to access the static files from my local machine (or another non-VPC machine; I also tried an EC2 instance not running in the VPC), it works flawlessly. But if I try to access the files from the other machine (10.0.100.151), it hangs. I'm using wget to pull the files.

    I tried to debug it with tcpdump and ngrep, and what I have seen is that the request reaches the container. If I ngrep on the host machine, I see the requests going in but no response going back. If I ngrep on the container, I see the requests going in and the responses going back. I've tried multiple iptables setups (with POSTROUTING enabled, with manually forwarded ports, etc.) but no success. Help in any way - even debugging directions - would be much appreciated. Thanks!
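
    Since debugging directions are explicitly welcome, a minimal sketch of where to look next - the commands below assume the default docker0 bridge and eth0 interface names:

        # watch both sides of the NAT on the host while wget runs on 10.0.100.151
        sudo tcpdump -ni eth0 'port 8111'
        sudo tcpdump -ni docker0 'port 8111'

        # a reply that leaves docker0 but never leaves eth0 often points at
        # reverse-path filtering or a missing conntrack entry
        sysctl net.ipv4.ip_forward
        sysctl net.ipv4.conf.all.rp_filter net.ipv4.conf.eth0.rp_filter
        sudo iptables -t nat -L -n -v    # compare DNAT/MASQUERADE rule counters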

    Read the article

  • AWS EC2 - How to specify an IAM role for an instance being launched via awscli

    - by Skaperen
    I am using the "aws ec2 run-instances" command (from the awscli package) to launch an instance in AWS EC2, and I want to set an IAM role on the instance I am launching. The IAM role is configured, and I can use it successfully when launching an instance from the AWS web UI. But when I try to do this using that command and the "--iam-instance-profile" option, it fails. "aws ec2 run-instances help" shows Arn= and Name= subfields for the value.

    When I try to look up the Arn using "aws iam list-instance-profiles", it gives this error message: A client error (AccessDenied) occurred: User: arn:aws:sts::xxxxxxxxxxxx:assumed-role/shell/i-15c2766d is not authorized to perform: iam:ListInstanceProfiles on resource: arn:aws:iam::xxxxxxxxxxxx:instance-profile/ (where xxxxxxxxxxxx is my AWS 12-digit account number). I looked up the Arn string via the web UI and used it via "--iam-instance-profile Arn=arn:aws:iam::xxxxxxxxxxxx:instance-profile/shell" on the run-instances command, and that failed with: A client error (UnauthorizedOperation) occurred: You are not authorized to perform this operation.

    If I leave off the "--iam-instance-profile" option entirely, the instance launches, but it does not have the IAM role setting I need. So the permission seems to have something to do with using "--iam-instance-profile" or accessing IAM data. I repeated this several times in case of AWS glitches (they happen sometimes), with no success. I suspected that perhaps there is a restriction that an instance with an IAM role is not allowed to launch an instance with a more powerful IAM role. But in this case, the instance I am running the command on has the same IAM role, named "shell", that I am trying to use (though I also tried using another one - no luck).

    Is setting an IAM role not even permitted from an instance (via its IAM role credentials)? Is there some higher IAM permission needed to use IAM roles than is needed for just launching a plain instance? Is "--iam-instance-profile" the appropriate way to specify an IAM role? Do I need to use a subset of the Arn string, or format it in some other way? Is it possible to set up an IAM role that can do any IAM role accesses (maybe a "Super Root IAM" ... making up this name)?

    FYI, everything involves Linux running on the instances. Also, I am running all this from an instance because I could not get these tools installed on my desktop. That, and I do not want to put my IAM user credentials on any AWS storage, as advised by AWS here.

    After it was answered: I did not mention that the launching instance has the "PowerUserAccess" permission (vs. "AdministratorAccess") because I did not realize additional access was needed at the time the question was asked. I assumed that the IAM role was "information" attached to the launch. But it really is more than that: it is a granting of permission.
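
    For what it's worth, a sketch of the combination that is typically required - passing a role at launch needs iam:PassRole on that role in addition to ec2:RunInstances, so the launching instance's own role must carry a statement like the one below (account ID and AMI are placeholders):

        aws ec2 run-instances --image-id ami-xxxxxxxx --instance-type t1.micro \
            --iam-instance-profile Name=shell

        # policy statement to add to the role of the instance running the command
        {
          "Effect": "Allow",
          "Action": "iam:PassRole",
          "Resource": "arn:aws:iam::xxxxxxxxxxxx:role/shell"
        }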

    Read the article

  • On AWS EC2, Unable to run sudo command after modifying permissions to /usr folder

    - by Kayote
    All, we have searched quite a bit, and a few of Eliah Kagan's posts are great for getting access back to sudo. However, our server is on AWS EC2, and I am a complete newbie to this. We are trying to set up cron jobs for backing up our server data.

    What we did: using PuTTY, we created a script file, /usr/share/site-db-backup/backupToS3.php. However, Ubuntu was not saving the changes we made, reporting that we did not have permission as user 'ubuntu'. The error details are: "Upload of file backupToS3.php was successful but error occurred while setting the permission &/ or timestamp. If the problem persists, turn on 'ignore permission errors' option. Permission denied. Error code: 3 Request code 9"

    So we ran the command "sudo chmod -R a+rwx /usr" to grant permission to the /usr folder. Now, however, whatever sudo command we run, we get the error: "/usr/lib/sudo/sudoers.so must be only be writable by owner. fatal error, unable to load plugins."

    We are complete newbies to Ubuntu and EC2, so we do need step-by-step guidance on how to get sudo back and successfully write to the crontab script sitting in the /usr folder.
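
    Because sudo itself is broken, the usual EC2 recovery is to repair the permissions from a second instance - a sketch, assuming the broken root volume is attached to a helper instance as /dev/xvdf and mounted at /mnt (device name and exact modes are assumptions; the key point is that a blanket chmod stripped the setuid bit sudo needs):

        # on the helper instance, after stopping the broken instance
        # and attaching its root EBS volume
        sudo mount /dev/xvdf /mnt

        sudo find /mnt/usr -type d -exec chmod 755 {} \;   # directories back to 755
        sudo chmod 4755 /mnt/usr/bin/sudo                  # restore setuid root
        sudo chmod 644 /mnt/usr/lib/sudo/sudoers.so
        sudo umount /mnt

    After moving the volume back as the root device and booting, sudo should load its plugin again.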

    Read the article

  • Mounting Gluster Volumes

    - by Roman Newaza
    I have created a Hosted Zone with the 2 IP addresses of the Gluster cluster, and both IPs are returned by dig. After mounting Gluster, I cannot ls the mount point, as it takes a long time. mount shows it as mounted, but df doesn't. Finally, I get this: ls: cannot access /mnt/storage: Transport endpoint is not connected. But if I mount it with just one of the IPs, there's no problem - the volume contents are accessible. OS: Ubuntu 11.10. GlusterFS: 3.2.6. Log: http://pastie.org/private/2jgp4h1hnqgzych3djtg. I can telnet to storage from the client - the ports are open.
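
    One pattern that avoids depending on round-robin DNS for the initial volfile fetch is to name an explicit backup server at mount time - a sketch, assuming hostnames gluster1/gluster2 and a volume named storage (verify the option against your 3.2.6 client):

        sudo mount -t glusterfs -o backupvolfile-server=gluster2 \
            gluster1:/storage /mnt/storage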

    Read the article

  • Auto re-attach EBS volume on start-up?

    - by Phillip Oldham
    I'm setting up a database server on EC2, and I need to ensure that an EBS volume is automatically attached and available before the database service starts up. I'm using SMF, so I can test whether a particular filesystem is available before starting the DB service; there's no problem from that perspective. However, I'm not quite sure how to tell the server to auto-attach the EBS volume during/after boot. What would be the best strategy for this?
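
    A sketch of the attach step as a boot-time script, assuming the awscli is available on the instance and the IDs are filled in (the wait loop exists because attach-volume returns before the device appears; newer kernels may expose it as /dev/xvdf):

        #!/bin/sh
        VOL=vol-xxxxxxxx
        DEV=/dev/sdf
        INSTANCE=$(wget -q -O - http://169.254.169.254/latest/meta-data/instance-id)

        aws ec2 attach-volume --volume-id $VOL --instance-id $INSTANCE --device $DEV

        while [ ! -e $DEV ] && [ ! -e /dev/xvdf ]; do sleep 2; done
        mount $DEV /data 2>/dev/null || mount /dev/xvdf /data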

    Read the article

  • What are the steps needed to set up and use security for AWS command line tools?

    - by chris
    I've been trying to set up the AWS command-line tools following Eric's most useful guide at http://alestic.com/2012/09/aws-command-line-tools. I can't seem to find a good how-to for generating the x509 certificate and private key, and for how that relates to the various security files the guide creates.

    Update: I have found a couple of links that describe some of the steps. These steps seem to work, however I'm not sure if this is secure and the best way to do it:

    1) Create a private key:

        openssl genrsa -out my-private-key.pem 2048

    2) Create the x.509 cert:

        openssl req -new -x509 -key my-private-key.pem -out my-x509-cert.pem -days 365

    Hit enter to accept all of the defaults. Then, from the IAM Dashboard, under Users, select a user and click on the "Security Credentials" tab. Click on "Manage Signing Certificates", then "Upload Signing Certificate", paste in the contents of my-x509-cert.pem, click OK, and it should be accepted.

    One step that is discussed, but was not required for me, was the addition and subsequent removal of a pass phrase on the private key. Should I have been prompted for one, and is my cert potentially unsafe because of this?
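
    For sanity-checking the pair before uploading, a couple of standard openssl invocations (no assumptions beyond the filenames above):

        # confirm the cert's subject and validity window
        openssl x509 -noout -subject -dates -in my-x509-cert.pem

        # confirm the key and cert actually belong together (hashes must match)
        openssl rsa -noout -modulus -in my-private-key.pem | openssl md5
        openssl x509 -noout -modulus -in my-x509-cert.pem | openssl md5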

    Read the article

  • Need a recommendation for shared storage on auto-scaling ec2 w/ scalr

    - by john h.
    I have come across so many answers to this question that I am completely lost! I am moving our 2 sites to a load-balanced EC2 system with Scalr as our cloud manager. Now the question comes up of persistent storage for the users' uploaded content and other files. Could someone please give me a suggestion, and possibly a link to a tutorial, for the following setup and goals: 2 websites (1 forum, 1 ecommerce); 1 LB; 1 app server (to scale out to as many as needed); 1 DB server (to scale out to as many as needed).

    Our sites will need to autoscale, and according to what I am learning about Scalr, that means as new instances come up I need to run a script to set up the basics on that server (git, PHP mods, pull the site from git, move keys, etc.). What I don't understand is how I should handle user-uploaded content like profile pictures, avatars, product images, themes, etc. Do I mount an EBS or s3fs folder to hold the websites (maybe /var/www/websitefolder), or do I do something like mount just the avatar folders (/var/www/websitefolder/images/avatars)? I am not sure where to go with this. Could someone give me some detailed help? -John
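
    One common pattern is to keep only the user-uploaded directories on shared storage so the code itself deploys per-instance - a sketch using s3fs, with the bucket name and paths as placeholders (assumes s3fs is already installed in the instance bootstrap):

        sudo s3fs my-uploads-bucket /var/www/websitefolder/images/avatars \
            -o allow_other -o use_cache=/tmp/s3fs

    An EBS volume can only be attached to one instance at a time, which is why S3-backed mounts (or having the app write uploads straight to S3) tend to win once autoscaling is involved.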

    Read the article

  • AWS Load balancer connection reset

    - by joshmmo
    I have an ELB set up with two instances. The issue I have with it is that when I do not add www., the ELB just hangs. This is some info I get when I spider with wget:

        Spider mode enabled. Check if remote file exists.
        --2013-06-20 13:40:54--  http://learning.example.com/
        Resolving learning.example.com... 54.xxx.x.x53, 50.xx.xxx.x71
        Connecting to learning.example.com|54.xxx.x.x53|:80... connected.
        HTTP request sent, awaiting response... No data received.
        Retrying.

    When I add www., it works great. I have a GoDaddy SSL cert, added in the listener section, that covers 3 domains: www.learning.example.com, files.learning.example.com and learning.example.com. These are my listener settings (load balancer protocol/port -> instance protocol/port, cipher, certificate):

        HTTP 80  -> HTTPS 443    N/A       N/A
        SSL 443  -> SSL 443      (Change)  canvasNew (Change)

    My EC2 instances are running apache2 on Ubuntu 12.04. I will be happy to post my vhosts file if needed. However, when I ran the server with the domains pointing to just one EC2 instance, things worked fine. How can I fix this issue for learning.example.com? Why does www work just fine? A second question would be: what is the difference between instance protocol and load balancer protocol?

    EDIT: Here are the dig results for learning.example.com from yesterday. (I changed the DNS entry to point to one instance to make sure it was the ELB; when I switch it back I will do the same for www.learning.example.com.)

        ; <<>> DiG 9.9.1-P2 <<>> learning.example.com
        ;; global options: +cmd
        ;; Got answer:
        ;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 20210
        ;; flags: qr rd ra; QUERY: 1, ANSWER: 3, AUTHORITY: 0, ADDITIONAL: 0

        ;; QUESTION SECTION:
        ;learning.example.com.    IN    A

        ;; ANSWER SECTION:
        learning.example.com.    2559    IN    CNAME    canvas-22222222222.us-west-1.elb.amazonaws.com.
        canvas-22222222222.us-west-1.elb.amazonaws.com.    60    IN    A    54.xxx.x.x53
        canvas-22222222222.us-west-1.elb.amazonaws.com.    60    IN    A    50.xx.xxx.x71

        ;; Query time: 83 msec
        ;; SERVER: 10.x.xx.20#53(10.x.xx.20)
        ;; WHEN: Thu Jun 20 13:40:47 2013
        ;; MSG SIZE  rcvd: 137

    EDIT 2: Here is some more info that might be helpful. Port configuration:

        80 (HTTP) forwarding to 443 (HTTPS)
            Backend Authentication: Disabled
            Stickiness: Disabled (edit)
        443 (SSL, Certificate: canvasNew) forwarding to 443 (SSL)
            Backend Authentication: Disabled

    So I switched everything to one EC2 IP address, bypassing the ELB, to make sure things are working. It's running great: www and the non-www URL both work perfectly fine. It's only when I switch things to the ELB that learning.example.com hangs while www.learning.example.com works. Hopefully you can get some ideas flowing.
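
    A quick way to separate DNS problems from vhost problems is to hit the ELB's own DNS name directly while forcing each Host header - a sketch using the ELB name from the dig output above:

        curl -v -H 'Host: learning.example.com'     http://canvas-22222222222.us-west-1.elb.amazonaws.com/
        curl -v -H 'Host: www.learning.example.com' http://canvas-22222222222.us-west-1.elb.amazonaws.com/

    If the first hangs while the second answers, the ELB is fine and the difference lies in Apache's vhost matching; if both answer, suspect the non-www DNS record instead.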

    Read the article

  • List DB2 version, OS and hardware on Linux? (aws image)

    - by mestika
    Hello everybody, I'm not that familiar with Linux, but I'm currently working on an AWS image for an assignment, and I need to display the DB2 version, the OS and the hardware. Is there a command or program of some sort I can use for this purpose? I tried an rpm called "Bonnie", but that only reports the throughput for the system. Thanks, Mestika
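
    A sketch of the standard commands, assuming a stock Linux image with DB2 installed (db2level ships with DB2 itself):

        db2level              # DB2 product and fixpack level
        uname -a              # kernel and architecture
        cat /etc/*-release    # distribution name and version
        cat /proc/cpuinfo     # CPU details
        free -m               # memory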

    Read the article

  • How to move S3 bucket to different location

    - by skrat
    We use S3 for storing millions of entries in our webapp. Now we are moving the whole thing to EC2 on EU servers, and we also want to move that S3 data to the EU. But the bucket we use is in the US, and there seems to be no tool to move a whole bucket's contents to a different bucket. There is also the problem of how to synchronize the data later, when we switch to the EU bucket: the data that will have been created while the migration was running.
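
    Buckets themselves can't change region, but their contents can be copied and then re-synced after cutover - a sketch with the modern awscli, bucket names as placeholders (at the time of the question, s3cmd sync between buckets was the usual equivalent):

        aws s3 sync s3://my-us-bucket s3://my-eu-bucket \
            --source-region us-east-1 --region eu-west-1

        # run the same command again after switching the app to the EU bucket
        # to pick up objects written during the first pass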

    Read the article

  • Create an AWS AMI for Ubuntu with GUI which automatically launches web browser

    - by Rory MacDonald
    I've got an Ubuntu AMI set up with the Ubuntu desktop installed and Chrome installed and set to start on boot (via the startup programmes menu within the Ubuntu desktop). I've created an image of this AMI, but any time I launch a new instance from it, the Ubuntu GUI doesn't seem to load until I SSH into the machine, enable VNC, and then connect to the machine via Chicken VNC. At that point, the desktop appears to load and starts the browser. I really need the machine to boot and the browser to load without having to VNC into the machine. Any help would be appreciated.

    Read the article

  • Connect to MySQL EC2 Instance outside of VPC

    - by Brian W
    I have a VPC set up with a few EC2 instances inside. I'm attempting to connect to a MySQL database on an EC2 instance outside the VPC, with no luck. I have the security groups on the VPC EC2 instances set to allow outbound 0.0.0.0/0, which I assumed would permit any outbound connection. I also followed a tutorial on creating a NAT, but wasn't exactly sure how to use it to connect to an external database. In any case, if anyone has experience and knows the proper way to connect to a database outside the VPC, it would be greatly appreciated!
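
    A quick connectivity check from inside the VPC narrows it down to routing versus MySQL configuration - a sketch, with the external instance's address and user as placeholders:

        nc -zv mysql-host.example.com 3306           # raw TCP reachability
        mysql -h mysql-host.example.com -u app -p    # actual login

    If the TCP check times out, look at VPC routing (instances in a private subnet need the NAT to reach out) and the target's security group; if it connects but the login fails, check MySQL's bind-address and grants on the external instance.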

    Read the article

  • Debugging logrotate postrotate script

    - by robert
    Following is my logrotate conf:

        "/mnt/je/logs/apache/jesites/web/*.log" {
            missingok
            rotate 0
            size 5M
            copytruncate
            notifempty
            sharedscripts
            postrotate
                /home/bitnami/.conf/compress-and-upload.sh /mnt/je/logs/apache/jesites/web/ web
            endscript
        }

    And the compress-and-upload.sh script:

        #!/bin/sh
        # Perform Rotated Log File Compression
        tar -czPf $1/log.gz $1/*.1

        # Fetch the instance id from the instance
        EC2_INSTANCE_ID="`wget -q -O - http://169.254.169.254/latest/meta-data/instance-id`"
        if [ -z $EC2_INSTANCE_ID ]; then
            echo "Error: Couldn't fetch Instance ID .. Exiting .."
            exit;
        else
            /usr/local/bin/s3cmd put $1/log.gz s3://xxxx/logs/$(date +%Y)/$(date +%m)/$(date +%d)/$2/$EC2_INSTANCE_ID-$(date +%H:%M:%S)-$2.gz
        fi

        # Removing Rotated Compressed Log File
        rm -f $1/log.gz

    The files are rotated, but the shell script is not executed. I don't know how to debug the postrotate script. Is there any log file I can check to see whether there are permission issues? If I execute the script directly from the command line, the file upload works. Thanks.
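
    For the debugging question itself, logrotate has a dry-run mode and a force flag, and records its state in a status file - a sketch, with the config filename as a placeholder:

        sudo logrotate -d /etc/logrotate.d/jesites     # debug: show decisions, change nothing
        sudo logrotate -f -v /etc/logrotate.d/jesites  # force a rotation, verbosely
        cat /var/lib/logrotate/status                  # when each log was last rotated

    One detail worth checking: with rotate 0, logrotate removes old logs rather than keeping them, so the $1/*.1 glob in the script may never match anything by the time postrotate runs.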

    Read the article

  • File replication among EC2 instances

    - by Peuge
    I am pretty new to AWS, so please excuse my ignorance. We want a setup with a SQL DB instance + web server instance, but we would like the web server to sit behind an ELB, allowing us to use autoscaling. My question, however, is how do we replicate the web app across instances? Say, for example, we have two web servers running and we need to make a critical update to the web app; ultimately we would only want to upload to one instance, not both. Is it even best practice to store your web app on the instance, or are there better ways to store and share the app between instances?
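
    A minimal sketch of the push-to-many approach, assuming SSH access to each instance and a role tag on the web servers (tag name, user, and paths are placeholders):

        for host in $(aws ec2 describe-instances \
                --filters Name=tag:role,Values=web Name=instance-state-name,Values=running \
                --query 'Reservations[].Instances[].PublicDnsName' --output text); do
            rsync -az --delete ./webapp/ ubuntu@"$host":/var/www/webapp/
        done

    The more autoscaling-friendly alternative is to bake the app into the AMI, or have each new instance pull a tagged release from git or S3 at boot, so instances the scaler launches are born up to date.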

    Read the article

  • One EC2 source with distributed varnish machines

    - by Elad Lachmi
    I have a web site hosted on an EC2 instance (2008 R2 + IIS 7.5 + SQL Server). I put up one Linux box running RHEL with Varnish, and after some configuration trial and error, I found a configuration that works. Now I want to duplicate the Varnish boxes to other availability zones, but continue to pull the pages from the original Windows box. It is my understanding that I can put the Varnish boxes in different zones and pull from the Windows box via its external IP. But what do I need to do so that each user receives content from the box physically closest to them? Is this even possible? Thank you!
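
    The usual AWS answer to "closest box" is Route 53 latency-based routing: one record per Varnish box, each tagged with its region. A sketch with the awscli (zone ID, name, and IP are placeholders):

        aws route53 change-resource-record-sets --hosted-zone-id Z1EXAMPLE \
          --change-batch '{"Changes":[{"Action":"CREATE","ResourceRecordSet":{
            "Name":"www.example.com.","Type":"A","SetIdentifier":"varnish-us-east",
            "Region":"us-east-1","TTL":60,
            "ResourceRecords":[{"Value":"203.0.113.10"}]}}]}'

    Each resolver is then answered with the record from the region showing the lowest measured latency to it.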

    Read the article

  • Recommended method for routing www to zone apex (naked domain) using AWS Route 53

    - by Dan Christian
    In my AWS Route 53 control panel I simply have 2 A records currently set up, for the 'www' and the 'non-www' names. Both point to the Elastic IP address associated with my EC2 instance. This works well, and my website is available at both variations, but I really want all 'www' traffic to route to the 'non-www'. What is the recommended method, using AWS Route 53, for routing all traffic that comes to www.example.com to example.com?
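
    Route 53 only answers DNS queries, so the redirect itself has to be issued by something speaking HTTP; with an EC2 instance already behind the Elastic IP, a sketch of an Apache vhost that does it (file name is a placeholder):

        # /etc/apache2/sites-available/www-redirect
        <VirtualHost *:80>
            ServerName www.example.com
            Redirect permanent / http://example.com/
        </VirtualHost>

    Enable it (a2ensite www-redirect, then reload Apache) and keep both A records in Route 53 as they are.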

    Read the article

  • Are EC2 security group changes effective immediately for running instances?

    - by Jonik
    I have an EC2 instance running, and it belongs to a security group. If I add a new allowed connection to that security group through the AWS Management Console, should that change be effective immediately, or perhaps only after a restart of the instance? In my case, I'm trying to allow access to PostgreSQL's default port (tcp 5432 5432 0.0.0.0/0), and I'm not sure whether it's the EC2 firewall or PostgreSQL's settings that are refusing the connection.
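
    Security group changes do take effect on running instances without a restart, so a quick client-side check tells the two layers apart - host as a placeholder:

        nc -zv ec2-xx-xx-xx-xx.compute-1.amazonaws.com 5432

    A timeout usually means packets are still being dropped before the instance (security group or another firewall); an immediate refusal or a PostgreSQL authentication error means the port is open and it's postgresql.conf (listen_addresses) or pg_hba.conf doing the refusing.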

    Read the article

  • Move files from ftp server to s3

    - by lev
    I would like to set up an FTP server where users will upload files, and for each file, put it on S3 storage and delete it from the FTP server. (The server runs on EC2 Ubuntu.) Here is what I have already tried, with no success:

    1. Mount the S3 bucket using s3fs. I followed those instructions, but there is a bug in the latest version of s3fs that prevents it from working. The bug was fixed on the develop branch, but I don't want to use an unstable version in production.

    2. Use vsftpd and s3cmd sync via cron to sync the files periodically. The problem with that approach is that s3cmd can start running in the middle of a file upload and start syncing the incomplete file. Also, s3cmd doesn't give any feedback if the sync fails, so I have no way of knowing whether I can delete the files after the sync command finishes running.

    3. Use pure-ftpd's upload script feature (which allows running a script after a file finishes uploading), but I noticed that if the file upload failed in the middle, the script runs anyway, and I have no way of knowing whether the upload was successful or not.

    I've been at it for a few days now, and I'm at a loss here. Any suggestions will be welcomed.
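
    For what it's worth, option 3 can be made safe against the delete-on-failure worry by checking s3cmd's exit status - a sketch of an upload handler, assuming pure-ftpd is started with pure-uploadscript so the script receives the uploaded file's full path as its first argument:

        #!/bin/sh
        # hypothetical /usr/local/bin/ftp-to-s3.sh
        f="$1"
        bucket="s3://my-bucket/incoming"

        # s3cmd exits non-zero on failure, so the file is only
        # removed when the upload actually succeeded
        if /usr/local/bin/s3cmd put "$f" "$bucket/$(basename "$f")"; then
            rm -f -- "$f"
        fi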

    Read the article

  • How to group EC2 instances in order to delegate administration to different teams?

    - by Olivier
    Is it possible (using ARNs) to make several groups of instances, then use different policies to grant access to one group of instances only and not the other instances? For example:

        {
          "Statement": [
            { "Action": "ec2:*", "Effect": "Allow", "Resource": "*" },
            { "Effect": "Allow", "Action": "elasticloadbalancing:*", "Resource": "*" },
            { "Effect": "Allow", "Action": "cloudwatch:*", "Resource": "*" },
            { "Effect": "Allow", "Action": "autoscaling:*", "Resource": "*" }
          ]
        }

    Instead of "*", could we use a group or something like that? Like a specific subnet? A tag? Or whatever... Thanks for your help.
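
    A sketch of the tag-based variant - some EC2 actions support resource-level permissions with an ec2:ResourceTag condition, though only a subset of ec2:* honors it, so each action needs checking before relying on this (account ID and tag are placeholders):

        {
          "Statement": [
            {
              "Effect": "Allow",
              "Action": ["ec2:StartInstances", "ec2:StopInstances", "ec2:RebootInstances"],
              "Resource": "arn:aws:ec2:us-east-1:111122223333:instance/*",
              "Condition": {
                "StringEquals": { "ec2:ResourceTag/team": "web" }
              }
            }
          ]
        }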

    Read the article

  • What differences are there between an official Ubuntu AMI image and a base install from an ISO?

    - by David Winter
    When creating a new instance on AWS using an official Ubuntu 12.04 server AMI, what differences are there compared to a standard server install on a computer of my own? For example: the default user is 'ubuntu'; an SSH public key is added to that user's authorized_keys file; sudo is passwordless for that user; PasswordAuthentication is disabled for SSH; etc. Configurations have been changed from their defaults, and I'd like to know if there is a list, or somewhere I could find out what modifications are made.
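
    Much of what's listed is driven by cloud-init rather than baked into the filesystem, so the image itself documents a good part of the answer - a sketch of where to look on a running official AMI (standard Ubuntu paths):

        cat /etc/cloud/cloud.cfg           # default user, sudo rule, SSH key import
        ls /etc/cloud/cloud.cfg.d/         # per-image overrides
        grep PasswordAuthentication /etc/ssh/sshd_config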

    Read the article

  • CloudFront with Custom Origin and ELB

    - by kmfk
    We are using CloudFront for our static assets but also wanted to allow for gzip. We set up a new distribution with a custom origin pointing back to our application servers, which are behind an elastic load balancer. We manually keep the files in sync across the cluster and update them when we publish. However, with this setup, we get nothing but Miss and RefreshHits from CloudFront, which so far has defeated the purpose.

    Are there any additional settings needed in order to use an ELB as your custom origin? In the docs, it is referenced as a viable solution. It appears that when we point the distribution to a single server in our production cluster, CloudFront properly caches our assets. Is it possible that the sticky-sessions cookie, and the subsequent header that gets added by it, could be the issue?

        Cache-Control: no-cache="set-cookie"  // added by the load balancer

    Any ideas? FYI - currently we have our custom origin pointing to a single EC2 instance, so caching is working correctly, in case you try to curl the file below. Example headers:

        curl -I http://static.quick-cdn.com/css/9850999.css
        HTTP/1.0 200 OK
        Accept-Ranges: bytes
        Cache-Control: max-age=3700
        Cache-Control: no-cache="set-cookie"
        Content-Length: 23038
        Content-Type: text/css
        Date: Thu, 12 Apr 2012 23:03:52 GMT
        Last-Modified: Thu, 12 Apr 2012 23:00:14 GMT
        Server: Apache/2.2.17 (Ubuntu)
        Vary: Accept-Encoding
        X-Cache: RefreshHit from cloudfront
        X-Amz-Cf-Id: K_q7Zy3_jdzlEJ85ukELVtdx1GmuXqApAbZZ7G0fPt0mxRMqPKX5pQ==,RzJmPku-rEIO9WlvuSoKa8hiAaR3dLk5KC4cQMWWrf_MDhmjWe8n6A==
        Via: 1.0 28c34f9fbf559a21ee16594849e4fc9c.cloudfront.net (CloudFront)
        Connection: close

    Read the article

  • Using an AWS EC2 server to host a busy website, and I need to set up load balancing

    - by Philip Isaacs
    My company has one EC2 server running on AWS, with a MySQL DB and Apache on the same instance. This one instance hosts a website built on the PHP Zend Framework. The site runs like crap when it starts to get busy with a lot of traffic, so I'm looking for some advice on how to set up something that can handle the load better. My first question is: should I move the MySQL DB onto a separate EC2 instance, or perhaps use AWS's RDS service, which looks like a nice option? I'm sort of new to some of this, but I'm guessing I'll need at least two EC2 instances for serving the website, plus some sort of load-balancing mechanism to distribute traffic. But maybe not - I'm not sure. Also, what are some best practices for replicating the data so that it stays in sync on both instances? Okay, I know these are a lot of questions, but I don't know where to start, so any advice will help.

    Read the article

  • monitoring load on AWS EC2

    - by hortitude
    I'm interested in monitoring our EC2 instances to ensure we scale up when necessary. Right now we are monitoring idle CPU time as our metric. We aren't measuring disk IO, as we are not a very disk-intensive application. When running on our own hardware in a datacenter, I also usually monitor "load" from the top command. My question is: does it make sense to monitor "load" on a shared environment such as EC2? If so, how do you interpret the results?
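
    For the CPU side, CloudWatch already records per-instance numbers that can be pulled with the awscli - a sketch, with the instance ID and time range as placeholders:

        aws cloudwatch get-metric-statistics \
            --namespace AWS/EC2 --metric-name CPUUtilization \
            --dimensions Name=InstanceId,Value=i-xxxxxxxx \
            --start-time 2013-06-20T00:00:00Z --end-time 2013-06-20T23:59:59Z \
            --period 300 --statistics Average Maximum

    Load average, by contrast, is only visible from inside the guest (uptime or /proc/loadavg). It still measures runnable-task pressure on your virtual CPUs, but on shared hardware it's worth reading alongside steal time - the %st column in top.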

    Read the article
