Search Results

Search found 26618 results on 1065 pages for 'amazon instance store'.


  • How does the Amazon ELB health check work?

    - by diegodias
    I am having problems configuring ELB for my servers. I start 2 micro instances with the exact same configuration and try to do load balancing. However, they never pass the health check (HTTP, port 80, path "/"). Ping to the website is OK, and so is telnet on port 80. How does the health check work? Am I doing anything really wrong? EDIT: Both direct browser access and GET (via curl) work correctly (status 200).
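
    A minimal sketch, assuming the classic ELB API and a hypothetical load balancer name, of setting the health check target explicitly and asking the ELB why instances are failing:

        # Assumes the AWS CLI and a classic load balancer named "my-elb" (placeholder name).
        aws elb configure-health-check --load-balancer-name my-elb \
            --health-check Target=HTTP:80/,Interval=30,Timeout=5,UnhealthyThreshold=2,HealthyThreshold=3

        # Shows each registered instance's state and the reason it is OutOfService, if any.
        aws elb describe-instance-health --load-balancer-name my-elb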

    Read the article

  • How to solve an out-of-memory error in Java on an Amazon EC2 server

    - by sathishkumar
    Can anyone explain this error message? We are using the IBM JRE to run a Java application, and it is occupying more and more space on the server.

        JVMDUMP006I Processing dump event "systhrow", detail "java/lang/OutOfMemoryError" - please wait.
        JVMDUMP006I Processing dump event "systhrow", detail "java/lang/OutOfMemoryError" - please wait.
        JVMDUMP032I JVM requested Heap dump using '/home/sathish/jetty6/heapdump.20110417.114115.18926.0001.phd' in response to an event
        JVMDUMP010I Heap dump written to /home/sathish/jetty6/heapdump.20110417.114115.18926.0001.phd
        JVMDUMP032I JVM requested Heap dump using '/home/sathish/jetty6/heapdump.20110417.114115.18926.0002.phd' in response to an event
        JVMDUMP010I Heap dump written to /home/sathish/jetty6/heapdump.20110417.114115.18926.0002.phd
        JVMDUMP032I JVM requested Heap dump using '/home/sathish/jetty6/heapdump.20110417.114115.18926.0003.phd' in response to an event
        JVMDUMP010I Heap dump written to /home/sathish/jetty6/heapdump.20110417.114115.18926.0003.phd
        JVMDUMP032I JVM requested Java dump using '/home/sathish/jetty6/javacore.20110417.114115.18926.0004.txt' in response to an event
        JVMDUMP010I Java dump written to /home/sathish/jetty6/javacore.20110417.114115.18926.0004.txt
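
    The dumps show the JVM hitting its heap ceiling. A hedged sketch of the usual first step, raising the maximum heap for the Jetty process (the -Xmx value and start command are illustrative, not taken from the question, and must fit within the instance's RAM):

        # Illustrative only: give the Jetty JVM a larger, explicit heap.
        java -Xms256m -Xmx512m -jar start.jar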

    Read the article

  • Problem with script that excludes large files using Duplicity and Amazon S3

    - by Jason
    I'm trying to write a backup script that will exclude files over a certain size. If I run the script, duplicity gives an error. However, if I copy and paste the same command generated by the script, everything works... Here is the script:

        #!/bin/bash
        # Export some ENV variables so you don't have to type anything
        export AWS_ACCESS_KEY_ID="accesskey"
        export AWS_SECRET_ACCESS_KEY="secretaccesskey"
        export PASSPHRASE="password"

        SOURCE=/home/
        DEST=s3+http://s3bucket
        GPG_KEY="gpgkey"

        # exclude files over 100MB
        exclude () {
            find /home/jason -size +100M \
            | while read FILE; do
                echo -n " --exclude "
                echo -n \'**${FILE##/*/}\' | sed 's/\ /\\ /g'  # Replace whitespace with "\ "
            done
        }

        echo "Using Command"
        echo "duplicity --encrypt-key=$GPG_KEY --sign-key=$GPG_KEY `exclude` $SOURCE $DEST"
        duplicity --encrypt-key=$GPG_KEY --sign-key=$GPG_KEY `exclude` $SOURCE $DEST

        # Reset the ENV variables.
        export AWS_ACCESS_KEY_ID=
        export AWS_SECRET_ACCESS_KEY=
        export PASSPHRASE=

    When the script is run I get the error:

        Command line error: Expected 2 args, got 6

    Where am I going wrong?
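
    A hedged sketch of one way around this: the excludes built with echo contain literal quote characters that the shell never re-parses when the function's output is substituted into the command line, so filenames with spaces split into extra arguments. Collecting the options in a bash array sidesteps the quoting entirely (paths and variables reuse the question's placeholders):

        # Build the --exclude options as array elements so whitespace in filenames survives.
        EXCLUDES=()
        while IFS= read -r FILE; do
            EXCLUDES+=( --exclude "**${FILE##/*/}" )
        done < <(find /home/jason -size +100M)

        duplicity --encrypt-key="$GPG_KEY" --sign-key="$GPG_KEY" "${EXCLUDES[@]}" "$SOURCE" "$DEST"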

    Read the article

  • Can't authorize a server for Amazon RDS

    - by Parris
    We are attempting to slowly migrate a website over to AWS, among other things. We decided the first thing to move was the database. We have a dedicated server with a different hosting provider, and we only have one IP. I am having trouble authorizing that IP so the old server can connect to RDS. Using the mysql CLI it simply hangs for a while, then responds:

        ERROR 2003 (HY000): Can't connect to MySQL server on 'db.address.us-east-1.rds.amazonaws.com' (110)

    It did work from my laptop, though. I am not quite sure what is wrong; I have a feeling I don't quite understand CIDR notation. I simply took the IP address and tacked /32 on the end. Then I gleaned that it also has to do with the subnet mask? ifconfig reports 255.255.255.0, and after running it through a calculator the IP changed a bit and ended in /24. That still didn't work. One other note: perhaps I don't know enough about the differences between operating systems. The hosting provider is using CentOS, while our development machines are all Ubuntu. Any insight would be extremely helpful! THANKS :)
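
    A hedged sketch of the usual fix: authorize the dedicated server's public IP as a /32 (the 255.255.255.0 mask from ifconfig describes the local interface and is not what RDS sees), using a placeholder DB security group name and address:

        # Placeholder group name and IP; use the address the server presents to the internet.
        aws rds authorize-db-security-group-ingress \
            --db-security-group-name default \
            --cidrip 203.0.113.10/32

        # Then retest from the old server.
        mysql -h db.address.us-east-1.rds.amazonaws.com -u admin -p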

    Read the article

  • Backup script that excludes large files using Duplicity and Amazon S3

    - by Jason
    I'm trying to write a backup script that will exclude files over a certain size. My script generates the proper command, but when run within the script it outputs an error. However, if the same command is run manually, everything works. Here is the script, based on one easily found with Google:

        #!/bin/bash
        # Export some ENV variables so you don't have to type anything
        export AWS_ACCESS_KEY_ID="accesskey"
        export AWS_SECRET_ACCESS_KEY="secretaccesskey"
        export PASSPHRASE="password"

        SOURCE=/home/
        DEST=s3+http://s3bucket
        GPG_KEY="7743E14E"

        # exclude files over 100MB
        exclude () {
            find /home/jason -size +100M \
            | while read FILE; do
                echo -n " --exclude "
                echo -n \'**${FILE##/*/}\' | sed 's/\ /\\ /g'  # Replace whitespace with "\ "
            done
        }

        echo "Using Command"
        echo "duplicity --encrypt-key=$GPG_KEY --sign-key=$GPG_KEY `exclude` $SOURCE $DEST"
        duplicity --encrypt-key=$GPG_KEY --sign-key=$GPG_KEY `exclude` $SOURCE $DEST

        # Reset the ENV variables.
        export AWS_ACCESS_KEY_ID=
        export AWS_SECRET_ACCESS_KEY=
        export PASSPHRASE=

    If run, I receive the error:

        Command line error: Expected 2 args, got 6
        Enter 'duplicity --help' for help screen.

    Any help you could offer would be greatly appreciated.

    Read the article

  • Allowing a private subnet EC2 access to the internet - Amazon AWS

    - by Xavier Hutchinson
    I have a VPC ("VPC with Public and Private Subnets") created via the VPC wizard, which should include NAT for the private subnet; however, it's not working. Instances in the private subnet are unable to browse the internet, resolve internet names or ping internet IPs. This is a stock-standard configuration, I am very sure of that, so I am unsure why it's not working. Perhaps there is something additional I am supposed to do that I don't know about? Thank you, Xavier.
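
    A hedged sketch of the two things that most often break this setup (instance and route table IDs are placeholders): the NAT instance must have source/destination checking disabled, and the private subnet's route table needs a default route pointing at it:

        # Disable source/dest check on the NAT instance (placeholder ID).
        aws ec2 modify-instance-attribute --instance-id i-0abc1234 --no-source-dest-check

        # Route all outbound traffic from the private route table through the NAT instance.
        aws ec2 create-route --route-table-id rtb-0abc1234 \
            --destination-cidr-block 0.0.0.0/0 --instance-id i-0abc1234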

    Read the article

  • Amazon EC2 - Unable to connect to MySQL

    - by alexus
    I'm having an issue connecting from one VM to another:

        # nmap -p3306 ip-XX-XX-XX-XX.ec2.internal

        Starting Nmap 6.40 ( http://nmap.org ) at 2014-06-10 17:50 EDT
        Nmap scan report for ip-XX-XX-XX-XX.ec2.internal (XX.XX.XX.XX)
        Host is up (0.000033s latency).
        PORT     STATE  SERVICE
        3306/tcp closed mysql
        Nmap done: 1 IP address (1 host up) scanned in 1.05 seconds

    In my security group I allowed inbound connectivity via protocol TCP, port range 3306 and source 0.0.0.0/0, so theoretically it should work, but in reality it doesn't. I'm running Red Hat Enterprise Linux 7 on both VMs. mariadb.service is running fine on the other VM and I am able to connect to it locally. On the DB server:

        # netstat -anp | grep 3306
        tcp   0   0 0.0.0.0:3306   0.0.0.0:*   LISTEN   2324/mysqld

        # iptables -L
        Chain INPUT (policy ACCEPT)
        target  prot opt source    destination
        Chain FORWARD (policy ACCEPT)
        target  prot opt source    destination
        Chain OUTPUT (policy ACCEPT)
        target  prot opt source    destination

    Any ideas what else I missed?
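
    A hedged sketch of narrowing this down: "closed" from nmap means the probe reached the host and was refused rather than silently dropped, so it is worth confirming which security group the DB instance actually carries and what its rules say (the group ID is a placeholder), then retrying with the MySQL client itself:

        # Placeholder group ID; confirm the rule really covers TCP 3306 from the client's source.
        aws ec2 describe-security-groups --group-ids sg-0abc1234 \
            --query 'SecurityGroups[].IpPermissions'

        # Retry from the client VM with a short timeout to see the exact failure.
        mysql -h ip-XX-XX-XX-XX.ec2.internal -u root -p --connect-timeout=5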

    Read the article

  • Git and Amazon EC2 public key denied

    - by MrNart
    I had git working before on /var/html/projectfolder, realized it was a security risk, so I made a new folder /projects off the root folder and tried to replicate what I did, and now it doesn't work. Here is the backlog of what I did on my local machine and the EC2 server.

    Server (EC2):

        1. Added my public key to the authorized_user file in the ~/.ssh folder
        2. Created a bare repository: git init --bare
        3. Changed folder permissions:
           sudo chgrp -R ec2-user *
           sudo chmod -R g+ws *

    Local machine:

        1. Created a local repository with git init
        2. touch, add, commit a readme file
        3. Pointed origin master to EC2 via: git remote add origin ssh://ec2-user@remote-ip/path/to/folder

    This is my output:

        Permission Denied (publickey)
        fatal: The remote end hung up unexpectedly
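
    A hedged sketch of the usual checks: the server file SSH reads is ~/.ssh/authorized_keys (not authorized_user), and both it and the .ssh directory need strict permissions; testing plain SSH first separates key problems from git problems (the remote IP is the question's placeholder):

        # On the EC2 server: make sure the key lives in authorized_keys with tight permissions.
        chmod 700 ~/.ssh
        chmod 600 ~/.ssh/authorized_keys

        # From the local machine: verbose SSH shows which keys are offered and why they are rejected.
        ssh -v ec2-user@remote-ip
        git push origin master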

    Read the article

  • sysbench memory test on ec2 small instance

    - by caribio
    I'm seeing a problem with sysbench memory test (the default version that's compiled in). This is on Ubuntu Maverick, sysbench installed via apt-get install sysbench. Running the same thing on Ubuntu @ Rackspace worked just as expected. While the CPU and I/O tests worked fine on EC2 servers, the memory test just runs without doing anything (notice the 0M in the test results). The instance used was the publicly available 'stock' Ubuntu image with no changes to it:

        ./ec2-run-instances ami-ccf405a5 --instance-type m1.small --region us-east-1 --key mykey

    Supplying more arguments (such as: --memory-block-size=1K --memory-total-size=102400M) didn't help. What am I doing wrong? Thanks.

        sysbench --num-threads=4 --test=memory run
        sysbench 0.4.12:  multi-threaded system evaluation benchmark

        Running the test with following options:
        Number of threads: 4

        Doing memory operations speed test
        Memory block size: 1K

        Memory transfer size: 0M

        Memory operations type: write
        Memory scope type: global
        Threads started!
        Done.

        Operations performed: 0 ( 0.00 ops/sec)

        0.00 MB transferred (0.00 MB/sec)

        Test execution summary:
            total time:                          0.0003s
            total number of events:              0
            total time taken by event execution: 0.0000
            per-request statistics:
                 min:                            18446744073709.55ms
                 avg:                            0.00ms
                 max:                            0.00ms

        Threads fairness:
            events (avg/stddev):           0.0000/0.00
            execution time (avg/stddev):   0.0000/0.00

    Read the article

  • How to Access an AWS Instance with RDC when behind a Private Subnet of a VPC

    - by dalej
    We are implementing a typical Amazon VPC with public and private subnets - with all servers running the Windows platform. The MS SQL instances will be on the private subnet, with all IIS/web servers on the public subnet. We have followed the detailed instructions at "Scenario 2: VPC with Public and Private Subnets" and everything works properly - until the point where you want to set up a Remote Desktop Connection to the SQL server(s) on the private subnet. At this point, the instructions assume you are accessing a server on the public subnet, and it is not clear what is required to RDC to a server on a private subnet. It would make sense that some sort of port redirection is necessary - perhaps accessing the EIP of the NAT instance to hit a particular SQL server? Or perhaps using an Elastic Load Balancer (even though this is really for HTTP protocols)? But it is not obvious what additional setup is required for such a Remote Desktop Connection.
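
    A hedged sketch of one common approach, assuming the wizard's NAT instance is the standard Amazon Linux NAT AMI and using placeholder addresses: open an SSH tunnel through the NAT/bastion and point the Remote Desktop client at the forwarded local port (the alternative is simply to RDP to a public-subnet Windows host first and hop from there):

        # Forward local port 13389 to the private-subnet SQL server's RDP port via the NAT instance.
        ssh -i mykey.pem -N -L 13389:10.0.1.25:3389 ec2-user@nat-elastic-ip
        # Then connect the RDP client to localhost:13389.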

    Read the article

  • How to monitor an Amazon EC2 instance

    - by sektor
    Is there a way to monitor EC2 instances without using CloudWatch? I ask this since EC2 instances are basically VPSes, and using output from commands like top, vmstat and htop in scripts may not give a clear picture, as CPU cycles are shared with other instances as well. What should one keep in mind while monitoring CPU usage on a VPS? Should one have alerts based on load average, or on the percentage of CPU used by user processes, coupled with other factors like processes waiting on disk I/O and hardware interrupts?
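
    A hedged sketch of what those same tools can still tell you on shared hardware: the "steal" column (st in vmstat, %st in top) is time the hypervisor gave to other guests, so watching it alongside load average and I/O wait gives a fairer picture than raw CPU percentages alone:

        # Sample the steal, I/O-wait and run-queue figures a few times.
        vmstat 5 3
        # The %st field in the CPU summary line is the steal time.
        top -b -n 1 | head -5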

    Read the article

  • Unable to mount Amazon Public Dataset using ec2-create-volume

    - by the0ther
    I am trying to use a Public Dataset with the snapshot id of snap-e1608d88. I am looking at these instructions, but they do not seem to help. The first suggestion there says I should click on Volumes and create a new volume, set its size and availability zone, as well as specifying the snapshot id. The problem is, the snapshot id is a dropdown, not a text field, and there are over 100 options in the dropdown. Next I installed the EC2 command line tools and tried to run the ec2-create-volume command. For my first attempt I tried:

        ec2-create-volume --snapshot snap-e1608d88 --availability-zone us-east-1

    but that gave output indicating I need to provide a certificate with the --cert switch. Which certificate exactly? I tried my SSH cert at ~/.ssh/id_rsa. No dice. I got the following Java error:

        org.codehaus.xfire.fault.XFireFault: General security error;
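
    A hedged sketch of what the old EC2 API tools expect: they authenticate with the account's X.509 certificate and private key (downloadable from the AWS security credentials page), not an SSH key, and --availability-zone wants a specific zone such as us-east-1a rather than the region name. File paths below are placeholders:

        # Point the tools at the X.509 credential pair instead of passing --cert each time.
        export EC2_CERT=~/.ec2/cert-XXXXXXXX.pem
        export EC2_PRIVATE_KEY=~/.ec2/pk-XXXXXXXX.pem

        ec2-create-volume --snapshot snap-e1608d88 --availability-zone us-east-1a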

    Read the article

  • use Amazon EC2 tools in Capistrano to get the servers to push the code

    - by APZ
    I am trying to use the EC2 tools to get all the machines with a particular tag into some kind of array in the /config/deploy/prod.rb file in Capistrano. Something like this (untested pseudocode) in the prod.rb file:

        workers-array[]=$(ec2-describe-instances -F vpc-id=1234 -F tag:Env=prod -F tag:SystemType=worker)
        for(i=0;i<workers-array.len;i++){
            role :worker-A, workers-array[i]
        }

    I am not sure how we can do this in Capistrano; I'm new to Ruby too. Any help on this would be really appreciated.
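
    A hedged sketch of getting the hostnames out of EC2 from the shell, which the Capistrano config could then read (this uses the newer AWS CLI rather than the old ec2-describe-instances tool; filter values are the question's placeholders):

        aws ec2 describe-instances \
            --filters "Name=vpc-id,Values=vpc-1234" \
                      "Name=tag:Env,Values=prod" \
                      "Name=tag:SystemType,Values=worker" \
            --query 'Reservations[].Instances[].PrivateDnsName' \
            --output text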

    Read the article

  • Copy an Amazon EC2 Instance to use locally

    - by Excolo
    OK, so we have a spare server that I have installed Debian Wheezy on and set up Xen on for virtual machines. It has better performance than all our EC2 instances combined, and will cost less to run (for various reasons). I would like to get the EC2 instances downloaded to my server and converted to run under Xen, but I'm having difficulty finding anything specific. I did not set up the EC2 instances myself and am not very familiar with them. Everything I have found (which isn't much) just says "Do XYZ" and I have no idea how to do those steps, so being as specific as possible would be helpful. Also, confusingly, I see people writing in forums saying you can only export Linux images (which mine are, Ubuntu images), but then I see Amazon's export tool saying you can only export Windows Server? Am I missing something here? Is that not the right place to be looking? Thanks

    Read the article

  • amazon ec2-medium apache requests per second terrible

    - by TheDayIsDone
    EDITED -- test running from localhost now to rule out the network... I have a c1.medium using EBS. When I do an Apache benchmark and I'm just printing a "hello" for the test from localhost - no database hits - it's very slow. I can repeat this test many times with the same results. Any thoughts? Thanks in advance.

        ab -n 1000 -c 100 http://localhost/home/test/

        Benchmarking localhost (be patient)
        Completed 100 requests
        Completed 200 requests
        Completed 300 requests
        Completed 400 requests
        Completed 500 requests
        Completed 600 requests
        Completed 700 requests
        Completed 800 requests
        Completed 900 requests
        Completed 1000 requests
        Finished 1000 requests

        Server Software:        Apache/2.2.23
        Server Hostname:        localhost
        Server Port:            80

        Document Path:          /home/test/
        Document Length:        5 bytes

        Concurrency Level:      100
        Time taken for tests:   25.300 seconds
        Complete requests:      1000
        Failed requests:        0
        Write errors:           0
        Total transferred:      816000 bytes
        HTML transferred:       5000 bytes
        Requests per second:    39.53 [#/sec] (mean)
        Time per request:       2530.037 [ms] (mean)
        Time per request:       25.300 [ms] (mean, across all concurrent requests)
        Transfer rate:          31.50 [Kbytes/sec] received

        Connection Times (ms)
                      min  mean[+/-sd] median   max
        Connect:        0    7   21.0      0      73
        Processing:    81 2489  665.7   2500    4057
        Waiting:       80 2443  654.0   2445    4057
        Total:         85 2496  653.5   2500    4057

        Percentage of the requests served within a certain time (ms)
          50%   2500
          66%   2651
          75%   2842
          80%   2932
          90%   3301
          95%   3506
          98%   3762
          99%   3838
         100%   4057 (longest request)

    Read the article

  • How to programmatically add x509 certificate to local machine store using c#

    - by David
    I understand the question title may be a duplicate, but I have not found an answer for my situation yet, so here goes. I have this simple piece of code:

        // Convert the filename to an X509 certificate
        X509Certificate2 cert = new X509Certificate2(certificateFilePath);

        // Get the server certificate store
        X509Store store = new X509Store(StoreName.TrustedPeople, StoreLocation.LocalMachine);
        store.Open(OpenFlags.MaxAllowed);
        store.Add(cert);
        // x509 certificate created from a user supplied filename

    But I keep being presented with an "Access Denied" exception. I have read some information that suggests using StorePermissions would solve my issue, but I don't think this is relevant in my code. Having said that, I did test it to be sure and I couldn't get it to work. I also found suggestions that changing folder permissions within Windows was the way to go, and while this may work (not tested), it doesn't seem practical for what will become distributed code. I also have to add that, as the code will be running as a service on a server, adding the certificates to the current user store also seems wrong. Is there any way to programmatically add a certificate into the local machine store?

    Read the article

  • GitLab on a fresh Ubuntu 13 EC2 instance

    - by Polly
    I've spun up a fresh Amazon EC2 instance for a micro Ubuntu 13 server to be used as a GitLab server. I know the specs are a little low, but it should serve well for my purposes. It has an elastic (static) IP address that I have created an A record for, git.mydomain.com. The first thing I did to the instance was add 1GB of swap to keep it happy from a memory perspective. I then set the hostname of the box to git.mydomain.com and followed https://github.com/gitlabhq/gitlabhq/blob/6-2-stable/doc/install/installation.md to the letter. Everything seems to have worked, except for the web server side of things. Doing a gitlab:check shows the following:

        Checking Environment ...
        Git configured for git user? ... yes
        Has python2? ... yes
        python2 is supported version? ... yes
        Checking Environment ... Finished

        Checking GitLab Shell ...
        GitLab Shell version >= 1.7.4 ? ... OK (1.7.4)
        Repo base directory exists? ... yes
        Repo base directory is a symlink? ... no
        Repo base owned by git:git? ... yes
        Repo base access is drwxrws---? ... yes
        update hook up-to-date? ... yes
        update hooks in repos are links: ... can't check, you have no projects
        Running /home/git/gitlab-shell/bin/check
        Check GitLab API access: /usr/local/lib/ruby/2.0.0/net/http.rb:878:in `initialize': Connection refused - connect(2) (Errno::ECONNREFUSED)
            from /usr/local/lib/ruby/2.0.0/net/http.rb:878:in `open'
            from /usr/local/lib/ruby/2.0.0/net/http.rb:878:in `block in connect'
            from /usr/local/lib/ruby/2.0.0/timeout.rb:52:in `timeout'
            from /usr/local/lib/ruby/2.0.0/net/http.rb:877:in `connect'
            from /usr/local/lib/ruby/2.0.0/net/http.rb:862:in `do_start'
            from /usr/local/lib/ruby/2.0.0/net/http.rb:851:in `start'
            from /home/git/gitlab-shell/lib/gitlab_net.rb:62:in `get'
            from /home/git/gitlab-shell/lib/gitlab_net.rb:29:in `check'
            from /home/git/gitlab-shell/bin/check:11:in `<main>'
        gitlab-shell self-check failed
        Try fixing it:
        Make sure GitLab is running;
        Check the gitlab-shell configuration file: sudo -u git -H editor /home/git/gitlab-shell/config.yml
        Please fix the error above and rerun the checks.
        Checking GitLab Shell ... Finished

        Checking Sidekiq ...
        Running? ... yes
        Number of Sidekiq processes ... 1
        Checking Sidekiq ... Finished

        Checking GitLab ...
        Database config exists? ... yes
        Database is SQLite ... no
        All migrations up? ... yes
        GitLab config exists? ... yes
        GitLab config outdated? ... no
        Log directory writable? ... yes
        Tmp directory writable? ... yes
        Init script exists? ... yes
        Init script up-to-date? ... yes
        projects have namespace: ... can't check, you have no projects
        Projects have satellites? ... can't check, you have no projects
        Redis version >= 2.0.0? ... yes
        Your git bin path is "/usr/bin/git"
        Git version >= 1.7.10 ? ... yes (1.8.3)
        Checking GitLab ... Finished

    It seems like I'm very nearly there. Searching on this error I have only found advice that unfortunately hasn't helped. I'm not using any kind of SSL setup, which a lot of the posts I found were about. I have tried appending 127.0.0.1 git.mydomain.com to /etc/hosts and giving the instance a reboot, but there was no change. My config/gitlab.yml file has host: git.mydomain.com in it, and my gitlab-shell/config.yml has gitlab_url: "http://git.mydomain.com/" in it. I'm sure I'm missing something simple, but I've been through every relevant link I can find and have had no positive results; thank you in advance for any help!
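
    A hedged sketch of the usual checks when gitlab-shell reports ECONNREFUSED: make sure the GitLab Rails service itself is up and answering on the URL in gitlab-shell/config.yml before re-running the self-check (commands assume the standard source install layout from the guide above):

        sudo service gitlab status
        sudo service gitlab restart
        curl -I http://git.mydomain.com/
        cd /home/git/gitlab && sudo -u git -H bundle exec rake gitlab:check RAILS_ENV=production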

    Read the article

  • How do you transfer an AWS RDS snapshot to a different AWS account

    - by Webmonger
    Hi, I have an RDS database that I need to transfer a snapshot of to another AWS account. I understand there are issues with doing this between availability zones, so I'm really unsure if this is possible. The RDS instance is MySQL. If it's not possible to transfer the snapshot, please could you explain how to transfer the data from one RDS instance to another without downloading any of the contents (the DB is over 200GB)? Thanks in advance
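
    A hedged sketch of how snapshot sharing can work: a manual RDS snapshot can be shared with another account ID, which can then restore it in its own account, avoiding any download of the data. The snapshot name and account ID below are placeholders:

        aws rds modify-db-snapshot-attribute \
            --db-snapshot-identifier mydb-manual-snapshot \
            --attribute-name restore \
            --values-to-add 123456789012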

    Read the article

  • Vagrant-aws not provisioning

    - by SuperCabbage
    I'm trying to spin up and provision an EC2 instance with Vagrant. It successfully brings the instance up and I can then use vagrant ssh to SSH into it, but Puppet doesn't seem to carry out any provisioning. Upon running vagrant up --provider=aws --provision I get the following output:

        Bringing machine 'default' up with 'aws' provider...
        WARNING: Nokogiri was built against LibXML version 2.8.0, but has dynamically loaded 2.9.1
        [default] Warning! The AWS provider doesn't support any of the Vagrant high-level network configurations (`config.vm.network`). They will be silently ignored.
        [default] Launching an instance with the following settings...
        [default]  -- Type: m1.small
        [default]  -- AMI: ami-a73264ce
        [default]  -- Region: us-east-1
        [default]  -- Keypair: banderton
        [default]  -- Block Device Mapping: []
        [default]  -- Terminate On Shutdown: false
        [default] Waiting for SSH to become available...
        [default] Machine is booted and ready for use!
        [default] Rsyncing folder: /Users/benanderton/development/projects/my-project/aws/ => /vagrant
        [default] Rsyncing folder: /Users/benanderton/development/projects/my-project/aws/manifests/ => /tmp/vagrant-puppet/manifests
        [default] Rsyncing folder: /Users/benanderton/development/projects/my-project/aws/modules/ => /tmp/vagrant-puppet/modules-0
        [default] Running provisioner: puppet...
        An error occurred while executing multiple actions in parallel.
        Any errors that occurred are shown below.

        An error occurred while executing the action on the 'default'
        machine. Please handle this error then try again:

        No error message

    I can then SSH into the instance by using vagrant ssh, but none of my provisioning has taken place, so I'm assuming that errors have occurred but I'm not being given any useful information relating to them. My Vagrantfile is as follows:

        Vagrant.configure("2") do |config|
          config.vm.box = "ubuntu_aws"
          config.vm.box_url = "https://github.com/mitchellh/vagrant-aws/raw/master/dummy.box"

          config.vm.provider :aws do |aws, override|
            aws.access_key_id = "REDACTED"
            aws.secret_access_key = "REDACTED"
            aws.keypair_name = "banderton"
            override.ssh.private_key_path = "~/.ssh/banderton.pem"
            override.ssh.username = "ubuntu"
            aws.ami = "ami-a73264ce"
          end

          config.vm.provision :puppet do |puppet|
            puppet.manifests_path = "manifests"
            puppet.module_path = "modules"
            puppet.options = ['--verbose']
          end
        end

    My Puppet manifest is as follows:

        package { [
          'build-essential',
          'vim',
          'curl',
          'git-core',
          'nano',
          'freetds-bin' ]:
          ensure => 'installed',
        }

    None of the packages are installed.
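
    A hedged sketch of how to surface the swallowed error and check one common culprit (the Puppet provisioner needs Puppet already present on the guest, which a stock Ubuntu cloud image typically lacks):

        # Re-run only the provisioning step with full logging.
        VAGRANT_LOG=debug vagrant provision

        # Check whether the guest actually has Puppet available.
        vagrant ssh -c "which puppet || echo 'puppet not installed on the guest'"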

    Read the article

  • Are EC2 security group changes effective immediately for running instances?

    - by Jonik
    I have an EC2 instance running, and it belongs to a security group. If I add a new allowed connection to that security group through AWS Management Console, should that change be effective immediately? Or perhaps only after restart of the instance? In my case, I'm trying to allow access to PostgreSQL's default port (tcp 5432 5432 0.0.0.0/0), and I'm not sure if it's the EC2 firewall or PostgreSQL's settings that are refusing the connection.
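
    A hedged sketch of how to check both sides: security group changes apply to running instances immediately, with no restart needed, so if connections are still refused it is worth confirming that PostgreSQL is listening beyond localhost and that pg_hba.conf permits the client (paths assume a Debian/Ubuntu layout):

        # Is PostgreSQL bound only to 127.0.0.1?
        grep listen_addresses /etc/postgresql/*/main/postgresql.conf
        # Which hosts does pg_hba.conf actually allow?
        grep -v '^#' /etc/postgresql/*/main/pg_hba.conf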

    Read the article

  • Getting Started with Amazon Web Services in NetBeans IDE

    - by Geertjan
    When you need to connect to Amazon Web Services, NetBeans IDE gives you a nice start. You can drag and drop the "itemSearch" service into a Java source file and then various Amazon files are generated for you. From there, you need to do a little bit of work because the request to Amazon needs to be signed before it can be used. Here are some references and places that got me started:

        http://associates-amazon.s3.amazonaws.com/signed-requests/helper/index.html
        http://docs.aws.amazon.com/AWSSimpleQueueService/latest/SQSGettingStartedGuide/AWSCredentials.html
        https://affiliate-program.amazon.com/gp/flex/advertising/api/sign-in.html

    You definitely need to sign up to the Amazon Associates program and also register/create an Access Key ID, which will also get you a Secret Key as well. Here's a simple Main class that I created that hooks into the generated RestConnection/RestResponse code created by NetBeans IDE:

        public static void main(String[] args) {
            try {
                String searchIndex = "Books";
                String keywords = "Romeo and Juliet";
                RestResponse result = AmazonAssociatesService.itemSearch(searchIndex, keywords);
                String dataAsString = result.getDataAsString();
                int start = dataAsString.indexOf("<Author>") + 8;
                int end = dataAsString.indexOf("</Author>");
                System.out.println(dataAsString.substring(start, end));
            } catch (Exception ex) {
                ex.printStackTrace();
            }
        }

    Then I deleted the generated properties file and the authenticator and changed the generated AmazonAssociatesService.java file to the following:

        public class AmazonAssociatesService {
            private static void sleep(long millis) {
                try {
                    Thread.sleep(millis);
                } catch (Throwable th) {
                }
            }
            public static RestResponse itemSearch(String searchIndex, String keywords) throws IOException {
                SignedRequestsHelper helper;
                RestConnection conn = null;
                Map queryMap = new HashMap();
                queryMap.put("Service", "AWSECommerceService");
                queryMap.put("AssociateTag", "myAssociateTag");
                queryMap.put("AWSAccessKeyId", "myAccessKeyId");
                queryMap.put("Operation", "ItemSearch");
                queryMap.put("SearchIndex", searchIndex);
                queryMap.put("Keywords", keywords);
                try {
                    helper = SignedRequestsHelper.getInstance(
                            "ecs.amazonaws.com",
                            "myAccessKeyId",
                            "mySecretKey");
                    String sign = helper.sign(queryMap);
                    conn = new RestConnection(sign);
                } catch (IllegalArgumentException | UnsupportedEncodingException | NoSuchAlgorithmException | InvalidKeyException ex) {
                }
                sleep(1000);
                return conn.get(null);
            }
        }

    Finally, I copied this class into my application, which you can see is referred to above: http://code.google.com/p/amazon-product-advertising-api-sample/source/browse/src/com/amazon/advertising/api/sample/SignedRequestsHelper.java

    Here's the completed app, mostly generated via the drag/drop shown at the start, but slightly edited as shown above. That's all, now everything works as you'd expect.

    Read the article

  • Importing AMIs from Hyper-V VHDX

    - by jwdaigle
    I have a couple of VHDX files that we use to template locally hosted VMs. I would like to try (some) of these on Amazon, so I need to build an AMI to upload to AWS. I have found http://aws.amazon.com/ec2/vmimport/ , which is very helpful to get started. It appears that AWS does not yet support VHDX, so I found some info that told me to export the image out of Hyper-V as a VHD file and then convert/upload it, which I am in the process of trying out. But the real question is that, when I was looking for info, I came across http://stackoverflow.com/questions/14346114/unable-to-rdp-to-ec2-instance . The AWS documentation seems to imply that all I need to do is run the importer and all will be well. True? I don't want to waste all the upload bandwidth and then find out it won't work. Is there something I need to install into the Hyper-V VM before converting/uploading it using the AWS command line tools? EC2Config? Any help greatly appreciated.
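
    A hedged sketch of the conversion and import steps (flag spellings are from memory and worth checking against the tool's built-in help before committing to a long upload; bucket, key and file names are placeholders):

        # On the Hyper-V host (PowerShell): Convert-VHD -Path .\template.vhdx -DestinationPath .\template.vhd
        # converts VHDX to the VHD format the importer accepts.

        # From the EC2 API/CLI tools: import the VHD as a Windows instance.
        ec2-import-instance template.vhd -f VHD -t m1.small -p Windows \
            -b my-import-bucket -o $AWS_ACCESS_KEY -w $AWS_SECRET_KEY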

    Read the article

  • AWS CloudFormation: Requires capabilities: [CAPABILITY_IAM] (child stack)

    - by Drew Khoury
    I'm running a CloudFormation template in the AWS Console.

    Running the stack directly: I started with a template that used IAM resources, and the console prompts me to acknowledge IAM capabilities when running the stack directly.

    Running the stack as a child: I then tried to call the same stack from a parent stack and did not receive the same prompt. The stack then failed with the message:

        Requires capabilities : [CAPABILITY_IAM]

    Research: The docs indicate that I can run CF scripts in a number of ways. There's plenty of documentation around the CLI/API and supplying the capability parameter, but there appears to be no information about how to make sure it's applied when running through the console: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/using-iam-template.html (IAM Resources in AWS CloudFormation Templates; CF Console, CLI, API).

    What I've done / what I think: I've raised an issue via the forum for now, but no response (yet): https://forums.aws.amazon.com/thread.jspa?threadID=139160 I suspect this is a bug in the console, as there doesn't appear to be any documentation of how to change the behaviour via the console and, as far as I'm aware, this should just work. Has anyone come across the same problem, or can anyone report that it's working fine for them?
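
    A hedged sketch of the CLI workaround, which makes the acknowledgement explicit when the parent stack is created (stack name and template URL are placeholders); a nested child stack that creates IAM resources needs the capability acknowledged on the parent:

        aws cloudformation create-stack \
            --stack-name parent-stack \
            --template-url https://s3.amazonaws.com/my-bucket/parent.template \
            --capabilities CAPABILITY_IAM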

    Read the article

  • EC2 Configuration

    - by user123683
    I am trying to create a server structure for my EC2 account. The design I have chosen consists of 2 instances running in different availability zones, an elastic load balancer, an auto-scaling group with CloudWatch monitoring configured, and a security group defining rules for access to the instances. This setup is to support an online web application written in PHP.

    I am trying to decide which is the better policy:

        - Store the MySQL DB on a separate instance
        - Store the MySQL DB on an attached EBS volume (from what I know, auto-scaling will not replicate the attached EBS volume but will generate new instances from a chosen AMI - is this view correct?)

    Regarding the AMI, I plan to use a basic Amazon Linux 64-bit AMI and install Bastille (maybe OSSEC), but I am looking to also use an encrypted file system. Are there any issues with using an encrypted file system and communication between the DB and web app that I need to be aware of? Are there any comms issues using the encrypted filesystem on the instance housing the web app?

    I was going to launch a second instance, or attach a second volume in the second availability zone, to act as a standby for the database. I'm just looking for some suggestions about how to get the two DBs to talk - will this be a big task?

    Regarding updates for security, is it best to create a recent snapshot and just relaunch and allow Amazon to install updates on launch, or is the yum update mechanism a suitable alternative? Is it better practice to relaunch instead of installing updates which force a restart?

    I plan to create two AMI snapshots, one for the app server and one for the DB, each with the same security measures in place - is this reasonable? I just figure it is a better policy than having additional applications that are unnecessary included in an AMI that I intend on using.

    My plan for backup is to create periodic snapshots of the webapp and DB instances (if I use an additional EBS volume instead of separate instances, my understanding is that the EBS volume will persist in S3 storage in the event of an unexpected termination, and I can create snapshots of the volume for backup purposes).

    Thanks in advance for suggestions and advice. I am new to EC2 and I may have described unnecessary overkill, but I want to try to implement what can be considered a best-practice solution, so all advice is appreciated.

    Read the article
