Search Results

Search found 2864 results on 115 pages for 'amazon sns'.

Page 23/115 | < Previous Page | 19 20 21 22 23 24 25 26 27 28 29 30  | Next Page >

  • When should one use the following: Amazon EC2, Google App Engine, Microsoft Azure and Salesforce.com

    - by vicky21
    I am asking this in a very general sense, from both the cloud provider's and the cloud consumer's perspective. The question is not about any specific kind of application; the intention is to know which types of applications/domains can fit into which cloud slab - SaaS, PaaS, IaaS. My understanding so far is:

        IaaS: Raw Hardware (Processors, Networks, Storage).
        PaaS: OS, System Software, Development Framework, Virtual Machines.
        SaaS: Software Applications.

    It would be great if Stackoverflowers can share their understanding and experiences of the cloud computing concept.

    EDIT: OK, I will put it in a more specific way:

    Amazon EC2: You don't have control over the hardware layer, but you can take your choice of OS image, dev framework (.NET, J2EE, LAMP) and application and put it on EC2 hardware. Can you deploy an application built with Google App Engine or Azure on EC2?

    Google App Engine: You don't have control over hardware and OS, and you get a specific dev framework to build your application. Can you take any existing Java or Python application and port it to GAE? Or vice versa: can applications that were built on GAE be taken out of GAE and ported to an application server like WebSphere or WebLogic?

    Azure: You don't have control over hardware and OS, and you get a specific dev framework to build your application. Can you take any existing .NET application and port it to Azure? Or vice versa: can applications that were built on Azure be taken out of Azure and ported to an application server like BizTalk?

    Read the article

  • Amazon EC2 and jbossws

    - by avjaz
    Hi - I've deployed a web service to a JBoss instance running on Amazon EC2. The web service works fine locally, but when I deploy on EC2 and go to the /jbossws/services page, the endpoint address for the web service is the private DNS of the EC2 instance (domU-X-X-X-X etc...), not the public DNS (which I would like it to be). I've tried loading the WSDL by changing the private hostname to the public IP; that works, but when I try to call any of the operations I get a HostNotFoundException, I'm guessing due to the fact that the generated WSDL has the stanza:

        <service name='XXXService'>
          <port binding='tns:XXXBinding' name='XXXPort'>
            <soap:address location='http://domU-XX-XX-XX-XX-XX-XX.compute-1.internal:8080/xx/xx/xx'/>
          </port>
        </service>

    where http://domU-XX-XX-XX-XX-XX-XX.compute-1.internal is the internal DNS of the EC2 instance. The WSDL is auto-generated - is there a JAXB annotation I can use to force the generated WSDL to use the public DNS of the EC2 instance? Many thanks -
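
    The advertised WSDL host is a JBossWS container setting rather than a JAXB one; a minimal sketch, assuming a JBoss AS 5-era jbossws deployer whose jboss-beans.xml exposes these properties (the exact file path varies by JBoss version, and the hostname below is a placeholder):

        <!-- in the jbossws deployer's jboss-beans.xml; replace with your public DNS -->
        <property name="webServiceHost">ec2-XX-XX-XX-XX.compute-1.amazonaws.com</property>
        <property name="modifySOAPAddress">true</property>

    With modifySOAPAddress enabled, JBossWS rewrites the soap:address in the generated WSDL to the configured host instead of the instance's internal name.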

    Read the article

  • Amazon access key showing in URL for Carrierwave and Fog

    - by kcurtin
    I just switched from storing my images uploaded via Carrierwave locally to using Amazon S3 via the fog gem in my Rails 3.1 app. Images are being added fine, but when I click on an image in my application, the URL is exposing my access key and a signature. Here is a sample URL (XXX replaces the string with the info):

        https://s3.amazonaws.com/bucketname/uploads/photo/image/2/IMG_4842.jpg?AWSAccessKeyId=XXX&Signature=XXX%3D&Expires=1332093418

    This is happening in development (localhost:3000) and when I am using Heroku for production. Here is my uploader:

        class ImageUploader < CarrierWave::Uploader::Base
          include CarrierWave::RMagick

          storage :fog

          def store_dir
            "uploads/#{model.class.to_s.underscore}/#{mounted_as}/#{model.id}"
          end

          process :convert => :jpg
          process :resize_to_limit => [640, 640]

          version :thumb do
            process :convert => :jpg
            process :resize_to_fill => [280, 205]
          end

          version :avatar do
            process :convert => :jpg
            process :resize_to_fill => [120, 120]
          end
        end

    And my config/initializers/fog.rb:

        CarrierWave.configure do |config|
          config.fog_credentials = {
            :provider              => 'AWS',
            :aws_access_key_id     => 'XXX',
            :aws_secret_access_key => 'XXX',
          }
          config.fog_directory = 'bucketname'
          config.fog_public    = false
        end

    Anyone know how to make sure this information isn't available?
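
    For context: with config.fog_public = false, Carrierwave generates signed (query-string authenticated) URLs, which is where the AWSAccessKeyId and Signature parameters come from; the access key is not the secret key, so nothing sensitive actually leaks. A minimal sketch of the alternatives, assuming the same initializer:

        CarrierWave.configure do |config|
          # serve files publicly; no signed query string is appended to URLs
          config.fog_public = true
          # or keep files private and just limit how long a signed URL stays valid
          # config.fog_authenticated_url_expiration = 600  # seconds
        end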

    Read the article

  • architecture and tools for a remote control application?

    - by slothbear
    I'm working on the design of a remote control application. From my iPhone or a web browser, I'll send a few commands. Soon my home computer will perform the commands and send back results. I know there are remote desktop apps, but I want something programmable, something simpler, and something that I wrote.

    My current direction is to use Amazon Simple Queue Service (SQS) as the message bus. The iPhone places some messages in a queue. My local Java/JRuby program notices the messages on the queue, performs the work and sends back status via a different queue. This will be a very low-volume application. At $1.00 per million requests (plus a handful of data transfer charges), Amazon SQS looks a lot more affordable than having my own server of any type. And it's super reliable, which is important to me too.

    Are there better/standard toolkits or architectures for this kind of remote control? Cost is not a big issue, but I prefer the tons I learn by doing it myself. I'm moderately concerned about security, but doubt it will be a problem. The list of commands recognized will be very short, and only recognized in specific contexts. No "erase hard drive" stuff.

    Update: I'll probably distribute these programs to some other people who want the same function, but who don't have Amazon SQS accounts. For now, they'll use anonymous access to my queues, with random 80-character queue names.
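
    A minimal worker-side sketch of the queue-polling loop, using the modern AWS CLI (which postdates this question - the Java/JRuby client would make the equivalent SDK calls); queue URLs and the command whitelist are hypothetical:

        #!/bin/bash
        # Poll the command queue, run whitelisted commands, report status back.
        CMD_QUEUE=https://queue.amazonaws.com/123456789012/commands   # hypothetical
        STATUS_QUEUE=https://queue.amazonaws.com/123456789012/status  # hypothetical
        while true; do
          msg=$(aws sqs receive-message --queue-url "$CMD_QUEUE" --wait-time-seconds 20)
          [ -z "$msg" ] && continue
          body=$(echo "$msg" | jq -r '.Messages[0].Body')
          receipt=$(echo "$msg" | jq -r '.Messages[0].ReceiptHandle')
          case "$body" in                  # only recognize a short whitelist
            status) result=$(uptime) ;;
            *)      result="unknown command" ;;
          esac
          aws sqs send-message --queue-url "$STATUS_QUEUE" --message-body "$result"
          aws sqs delete-message --queue-url "$CMD_QUEUE" --receipt-handle "$receipt"
        done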

    Read the article

  • Can't attach EC2 instance to Network Interface

    - by Ian Warburton
    When trying to attach a network interface, it says:

        No instances were found for this availability zone.

    My instance is in us-east-1c and my network interface is in us-east-1b. Is that significant? If so, how do I create the VPC in the same zone, and if not, why this error?
    EDIT: I've re-created the VPC, and the network interface is now in us-east-1c and the EC2 instance is also in us-east-1c. Same error message, though!
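
    An ENI attaches only to an instance in the same availability zone and the same VPC - an instance launched outside a VPC (EC2-Classic) won't show up in that list at all. A quick check with the modern AWS CLI (identifiers are placeholders):

        # Availability zone, VPC and subnet of the instance
        aws ec2 describe-instances --instance-ids i-0123456789abcdef0 \
          --query 'Reservations[].Instances[].[Placement.AvailabilityZone,VpcId,SubnetId]'
        # Availability zone, VPC and subnet of the network interface
        aws ec2 describe-network-interfaces --network-interface-ids eni-0123456789abcdef0 \
          --query 'NetworkInterfaces[].[AvailabilityZone,VpcId,SubnetId]'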

    Read the article

  • Using Cloud Formation provisioned security group with specific subnet

    - by Fred Clausen
    Summary: I'm attempting to create an AWS CloudFormation template which contains an instance for which I want to select a particular subnet. If I specify the subnet ID then I get the following error:

        The parameter groupName cannot be used with the parameter subnet.

    From reading this thread it appears I need to provide security group IDs - not names. How can I create a security group in CloudFormation and then get its ID after the fact?

    Details: The relevant part of the instance config is as follows:

        "WebServerHost": {
          "Type" : "AWS::EC2::Instance",
          <..skipping metadata...>
          "Properties": {
            "ImageId" : "ami-1234",
            "InstanceType" : { "Ref" : "WebServerInstanceType" },
            "SecurityGroups" : [ { "Ref" : "WebServerSecurityGroup" } ],
            "SubnetId" : "subnet-abcdef123",

    and the security group looks as follows:

        "WebServerSecurityGroup" : {
          "Type" : "AWS::EC2::SecurityGroup",
          "Properties" : {
            "GroupDescription" : "Enable HTTP and SSH",
            "SecurityGroupIngress" : [
              { "IpProtocol" : "tcp", "FromPort" : "80", "ToPort" : "80", "CidrIp" : "0.0.0.0/0" },
              { "IpProtocol" : "tcp", "FromPort" : "22", "ToPort" : "22", "CidrIp" : "0.0.0.0/0" }
            ]
          }
        },

    How can I create and then get that security group's ID?
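
    A sketch of the usual fix, using this entry's template as the base: give the security group a VpcId (so CloudFormation creates a VPC security group, whose Ref resolves to the group ID) and switch the instance to the SecurityGroupIds property; the vpc-... value is a placeholder:

        "WebServerSecurityGroup" : {
          "Type" : "AWS::EC2::SecurityGroup",
          "Properties" : {
            "GroupDescription" : "Enable HTTP and SSH",
            "VpcId" : "vpc-abcdef12",
            "SecurityGroupIngress" : [ ... ]
          }
        },

        "WebServerHost": {
          "Type" : "AWS::EC2::Instance",
          "Properties": {
            "SecurityGroupIds" : [ { "Ref" : "WebServerSecurityGroup" } ],
            "SubnetId" : "subnet-abcdef123"
          }
        }

    For a VPC security group, { "Ref" : "WebServerSecurityGroup" } returns the sg-... ID, and { "Fn::GetAtt" : [ "WebServerSecurityGroup", "GroupId" ] } is an equivalent way to read it.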

    Read the article

  • growing EBS RAID volume

    - by Ryan Fernandes
    I've created a RAID0 configuration with two 1GB EBS volumes, mounted at /dev/md0 using mdadm and formatted with XFS. Next, I copied some files over to fill the volume to around 30% of its capacity (of 2GB). I then created snapshots of the volumes using ec2-consistent-snapshot and created volumes from the said snapshots, but specified the volume size to be 2GB (effectively doubling the capacity of each disk).

    I then spun up a new instance, assembled the RAID0 configuration on /dev/md0 from the 2 volumes mentioned above, and mounted it at /vol. df -hT showed /vol as 2GB (as expected). Now I ran:

        sudo xfs_growfs -d /vol

    The command completed normally but reported blocks changed from 523776 to 524160 (only!), and df -hT still showed /vol as 2GB (instead of the expected 4GB). I rebooted, remounted, reassembled the RAID, but it still reports the old size. Any clue as to what went wrong?
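
    xfs_growfs can only expand into space the md device itself reports, and mdadm of that era could not grow a RAID0 array (--grow --size applies to RAID levels 1/4/5/6), so /dev/md0 still ends at 2GB despite the larger members. A sketch of how to confirm, with placeholder device names; the usual workaround was to build a fresh stripe on new, larger volumes and copy the data across:

        mdadm --detail /dev/md0     # Array Size will still report ~2GB
        cat /proc/mdstat            # same story
        # workaround: create a new array from enlarged volumes, then copy data over
        mdadm --create /dev/md1 --level=0 --raid-devices=2 /dev/xvdh /dev/xvdi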

    Read the article

  • AWS RDS connection count

    - by wmarbut
    I am using AWS RDS with MySQL for a project and have a "large" instance. The documentation is clear on what this means as far as compute resources and RAM go, but I can't find anything that documents how many open database connections I can have. The app that I am using is PHP and it utilizes PDO with persistent connections, which means that the number of open connections could reach the maximum number of PHP child processes running at any given point. How do I ensure that my RDS instance has a max connections setting high enough to be comfortable with this?
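
    RDS takes max_connections from a DB parameter group - for MySQL the default is the formula {DBInstanceClassMemory/12582880} - so the knob is the parameter group attached to the instance, not the instance class itself. A sketch with the modern AWS CLI (group name and value are hypothetical):

        # Check the current value from any MySQL client
        mysql -h myinstance.xxxx.rds.amazonaws.com -u admin -p \
          -e "SHOW VARIABLES LIKE 'max_connections'"
        # Raise it in a custom parameter group attached to the instance
        aws rds modify-db-parameter-group \
          --db-parameter-group-name my-custom-params \
          --parameters "ParameterName=max_connections,ParameterValue=1000,ApplyMethod=immediate"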

    Read the article

  • Unable to connect to EC2 instance after "reboot"

    - by KPL
    I am not able to connect to my m1.small instance after rebooting it. I have already associated the public IP with this instance. Upon checking the system log, this seems to be the issue:

        cloud-init-nonet[11.84]: waiting 10 seconds for network device
        cloud-init-nonet[21.85]: waiting 120 seconds for network device
        cloud-init-nonet[141.85]: gave up waiting for a network device.
        Cloud-init v. 0.7.3 running 'init' at Sun, 18 May 2014 07:02:55 +0000. Up 142.54 seconds.
        ci-info: +++++++++++++++++++++++Net device info++++++++++++++++++++++++
        ci-info: +--------+-------+-----------+-----------+-------------------+
        ci-info: | Device |   Up  |  Address  |    Mask   |     Hw-Address    |
        ci-info: +--------+-------+-----------+-----------+-------------------+
        ci-info: |   lo   |  True | 127.0.0.1 | 255.0.0.0 |         .         |
        ci-info: |  eth0  | False |     .     |     .     | 02:43:xx:xx:xx:xx |
        ci-info: +--------+-------+-----------+-----------+-------------------+
        ci-info: !!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!Route info failed!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!

    A bunch of these follow the above message:

        2014-05-18 07:02:56,178 - url_helper.py[WARNING]: Calling
        http://169.254.169.254/2009-04-04/meta-data/instance-id failed 0/120s:
        request error [HTTPConnectionPool(host='169.254.169.254', port=80):
        Max retries exceeded with url: /2009-04-04/meta-data/instance-id
        (Caused by [Errno 101] Network is unreachable)]

    This is obviously related to the network interface not working correctly. I have tried the following so far:

    - Relaunch a new instance from the custom AMI (created from EBS) of the failing instance. The same error shows up in the logs.
    - Attach a new network interface to the EC2 instance. The error still persists. eth1 shows up in the list, but its "Up" column is False.

    Read the article

  • Static NAT in AWS's Virtual Private Cloud (VPC)

    - by user1050797
    Currently, in a VPC with a public and a private subnet, all internet-bound traffic from the private subnet can be routed via a NAT instance. The NAT instance port-address-translates the packet's source IP to the NAT instance's elastic IP, so the public server can reply to this public address. This is a PAT mechanism.

    My question: is there a way for me to do static NAT on my NAT instance -- using the same NAT instance to statically NAT an unassociated but reserved elastic IP to a private subnet host? This NAT instance would behave like a physical firewall doing static NATing for a bunch of private IPs.
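
    One-to-one NAT on a Linux NAT instance is plain iptables DNAT/SNAT; a hedged sketch, assuming the reserved EIP is associated with a secondary private IP (10.0.0.55 here) on the NAT instance and all addresses are placeholders:

        # Inbound: traffic hitting the secondary address goes to the private host
        iptables -t nat -A PREROUTING  -d 10.0.0.55 -j DNAT --to-destination 10.0.1.20
        # Outbound: replies from the private host leave stamped with that address
        iptables -t nat -A POSTROUTING -s 10.0.1.20 -j SNAT --to-source 10.0.0.55

    On the EC2 side the elastic IP must be associated with that secondary private IP, and source/destination checking disabled on the NAT instance, for this to pass traffic.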

    Read the article

  • SSH broken after hostname change on EC2-hosted Ubuntu

    - by dimadima
    I changed my instance's hostname using the hostname utility and then set it in /etc/hostname so that the new name survives reboot. My main motivation was for differentiating between instances at the prompt using the \h format in PS1.

    EDIT: I also changed permissions on my home directory. I made my home directory group writeable. END EDIT

    Now I can no longer SSH into the machine. The short of it is the error Permission denied (publickey). Running ssh -v, the more verbose output is:

        debug1: Authentications that can continue: publickey
        debug1: Next authentication method: publickey
        debug1: Offering RSA public key: /Users/dmitry/.ssh/id_rsa
        debug1: Authentications that can continue: publickey
        debug1: Trying private key: /Users/dmitry/.ssh/ec2key.pem
        debug1: read PEM private key done: type RSA
        debug1: Authentications that can continue: publickey
        debug1: No more authentication methods to try.
        Permission denied (publickey).

    Should I have done something after changing the hostname? Now I can't get into the instance! :(
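
    The group-writeable home directory is the likely culprit, not the hostname: with StrictModes (the sshd default), sshd refuses public-key auth when the home directory or ~/.ssh is writable by anyone but the owner. A sketch of the fix, run from a rescue session or by mounting the volume on another instance since SSH is locked out (the ubuntu username is an assumption):

        chmod 755 /home/ubuntu                      # remove group write from $HOME
        chmod 700 /home/ubuntu/.ssh
        chmod 600 /home/ubuntu/.ssh/authorized_keys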

    Read the article

  • Deleting multiple objects in an AWS S3 bucket with s3curl.pl?

    - by user183394
    I have been trying to use the AWS "official" command line tool s3curl.pl to test out the recently announced multi-object delete. Here is what I have done:

    First, I tested s3curl.pl with a set of credentials without a hitch:

        $ s3curl.pl --id=s3 -- http://testbucket-0.s3.amazonaws.com/|xmllint --format -
          % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                         Dload  Upload   Total   Spent    Left  Speed
        100   884    0   884    0     0   4399      0 --:--:-- --:--:-- --:--:--  5703
        <?xml version="1.0" encoding="UTF-8"?>
        <ListBucketResult xmlns="http://s3.amazonaws.com/doc/2006-03-01/">
          <Name>testbucket-0</Name>
          <Prefix/>
          <Marker/>
          <MaxKeys>1000</MaxKeys>
          <IsTruncated>false</IsTruncated>
          <Contents>
            <Key>file_1</Key>
            <LastModified>2012-03-22T17:08:17.000Z</LastModified>
            <ETag>"ee0e521a76524034aaa5b331842a8b4e"</ETag>
            <Size>400000</Size>
            <Owner>
              <ID>e6d81ea69572270e58d3814ab674df8c8f1fd5d502669633a4951bdd5185f7f4</ID>
              <DisplayName>zackp</DisplayName>
            </Owner>
            <StorageClass>STANDARD</StorageClass>
          </Contents>
          <Contents>
            <Key>file_2</Key>
            <LastModified>2012-03-22T17:08:19.000Z</LastModified>
            <ETag>"6b32cbf8219a59690a9f69ba6ff3f590"</ETag>
            <Size>600000</Size>
            <Owner>
              <ID>e6d81ea69572270e58d3814ab674df8c8f1fd5d502669633a4951bdd5185f7f4</ID>
              <DisplayName>zackp</DisplayName>
            </Owner>
            <StorageClass>STANDARD</StorageClass>
          </Contents>
        </ListBucketResult>

    Then I followed s3curl.pl's usage instructions:

        $ s3curl.pl --help
        Usage /usr/local/bin/s3curl.pl --id friendly-name (or AWSAccessKeyId) [options] -- [curl-options] [URL]
         options:
          --key SecretAccessKey       id/key are AWSAcessKeyId and Secret (unsafe)
          --contentType text/plain    set content-type header
          --acl public-read           use a 'canned' ACL (x-amz-acl header)
          --contentMd5 content_md5    add x-amz-content-md5 header
          --put <filename>            PUT request (from the provided local file)
          --post [<filename>]         POST request (optional local file)
          --copySrc bucket/key        Copy from this source key
          --createBucket [<region>]   create-bucket with optional location constraint
          --head                      HEAD request
          --debug                     enable debug logging
         common curl options:
          -H 'x-amz-acl: public-read' another way of using canned ACLs
          -v                          verbose logging

    Then I tried the following, and always got back an error. I would appreciate it very much if someone could point out where I made a mistake.

        $ s3curl.pl --id=s3 --post multi_delete.xml -- http://testbucket-0.s3.amazonaws.com/?delete
        <?xml version="1.0" encoding="UTF-8"?>
        <Error><Code>SignatureDoesNotMatch</Code><Message>The request signature we calculated does not match the signature you provided. Check your key and signing method.</Message><StringToSignBytes>50 4f 53 54 0a 0a 0a 54 68 75 2c 20 30 35 20 41 70 72 20 32 30 31 32 20 30 30 3a 35 30 3a 30 38 20 2b 30 30 30 30 0a 2f 7a 65 74 74 61 72 2d 74 2f 3f 64 65 6c 65 74 65</StringToSignBytes><RequestId>707FBE0EB4A571A8</RequestId><HostId>mP3ZwlPTcRqARQZd6gU4UvBrxGBNIVa0VVe5p0rqGmq5hM65RprwcG/qcXe+pmDT</HostId><SignatureProvided>edkNGuugiSFe0ku4eGzkh8kYgHw=</SignatureProvided><StringToSign>POST


        Thu, 05 Apr 2012 00:50:08 +0000
        /zettar-t/?delete

    The file multi_delete.xml contains the following:

        $ cat multi_delete.xml
        <?xml version="1.0" encoding="UTF-8"?>
        <Delete>
          <Quiet>true</Quiet>
          <Object>
            <Key>file_1</Key>
            <VersionId> </VersionId>
          </Object>
          <Object>
            <Key>file_2</Key>
            <VersionId> </VersionId>
          </Object>
        </Delete>

    Thanks for any help! --Zack
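
    For comparison: the StringToSign AWS expected ends in ?delete, and the stock s3curl.pl of that era reportedly did not include the delete subresource in its hard-coded signing list, which would produce exactly this mismatch; one fix was patching that list. Today the same operation is a single call with the AWS CLI (bucket and keys taken from this entry):

        aws s3api delete-objects --bucket testbucket-0 \
          --delete '{"Quiet": true, "Objects": [{"Key": "file_1"}, {"Key": "file_2"}]}'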

    Read the article

  • Unable to terminate extra EC2 instances

    - by Deborah Cole
    I'm just setting up my AWS server and I'm trying to use the EC2 Console to terminate some extra instances that I generated via the AWS Toolkit for Eclipse's "New AWS Java Web Project" utility. Unfortunately, every time I stop, then terminate such an instance via the EC2 Console, it automatically recreates and reactivates itself! I really don't want to be paying for 4 dev systems when I only need 1, so can somebody please clue me in? Please explain gently... I'm new to this environment.
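
    Self-resurrecting instances usually mean an Auto Scaling group is replacing them - the Eclipse toolkit deploys to an Elastic Beanstalk environment, which maintains one. A sketch of shutting down the environment instead of its instances (the environment name is hypothetical):

        # See whether an Auto Scaling group is relaunching the instances
        aws autoscaling describe-auto-scaling-groups
        # Terminate the whole Beanstalk environment rather than individual instances
        aws elasticbeanstalk terminate-environment --environment-name my-java-web-env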

    Read the article

  • EC2 EBS AMI Instance stopping/restarting doesn't start services

    - by tgm
    I've recently been moving our instances to EBS-backed instances (CentOS) and still have a bit of confusion about what's happening when I "stop" an instance. I have some of my services set to on for runlevels 345, but when I start a stopped instance the services don't start. What's actually happening when I issue a stop command to the instance, and how do I get my services to start automatically when I start the instance up again?
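
    A stop/start cycle is an OS shutdown followed by a fresh boot (usually on new hardware), so anything properly enabled for the boot runlevel should come up; on CentOS the usual suspect is the service not actually being registered. A quick check, using sshd as a stand-in for the real services:

        chkconfig --list sshd      # confirm runlevels 3/4/5 say "on"
        chkconfig sshd on          # register it for the default runlevels
        service sshd status        # verify after the next stop/start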

    Read the article

  • rsync to EC2 using ssh -i

    - by isomorphismes
    I'm able to ssh -i mykey.pem to EC2. I'm able to scp -i mykey.pem to EC2. But when I try rsync -avz -e "ssh -i mykey.pem", I get this error:

        Warning: Identity file mykey.pem not accessible: No such file or directory.
        Permission denied (publickey).
        rsync: connection unexpectedly closed (0 bytes received so far) [sender]
        rsync error: unexplained error (code 255) at io.c(605) [sender=3.0.9]

    Any suggestions what I've done wrong?
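
    The warning is the clue: the ssh that rsync spawns can't resolve the relative path mykey.pem from its working context. A sketch of the usual fix - give the identity file an absolute path (host and paths are placeholders):

        rsync -avz -e "ssh -i /home/me/.ssh/mykey.pem" \
          ./localdir/ ubuntu@ec2-XX-XX-XX-XX.compute-1.amazonaws.com:/remote/dir/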

    Read the article

  • Looking for the best EC2 setup for 3 sites totaling 1.5 million in monthly traffic

    - by john h.
    I am looking to consolidate our current AWS setup of 2 large Ubuntu EC2 servers and 2 large RDS servers for our 3 websites, which have a total of about 1.5 million hits a month and increasing every month, with the majority of traffic (1 mil) going to one forum site in the group and the rest to an ecommerce site and a small WordPress site. So here is my question/thought:
    Would it be better for us to combine the two large EC2 servers into just one, and the same with the 2 RDS servers, so we run all three sites off one large EC2 instance and one RDS?
    -or- Should we set up maybe 2-3 smaller EC2 servers, load balanced, and a single RDS?
    -or- Something completely different?
    One concern is that if one site crashes it takes the others with it. It happened in the past, but I am pretty sure that's because of the forum software and not the server setup. -john

    Read the article

  • beanstalk using php-git on windows client

    - by ntidote
    I am trying to install beanstalk for PHP using Git on a Windows client machine. I am done with the prerequisite installations and credentials setup, and I am following the link http://docs.amazonwebservices.com/elasticbeanstalk/latest/dg/create_deploy_PHP.sdlc.html

    The following step does not work (I use Git Bash for git-related commands). From your Git repository directory, type the following command:

        git aws.config

    This gives the error:

        git: 'aws.config' is not a git command.

    Please suggest how to deal with the issue.
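
    git only gains an aws.config subcommand once the AWS DevTools helper scripts have been registered in the repository; a sketch, assuming the Elastic Beanstalk command line package is already unzipped (the path below is a placeholder for wherever it was extracted):

        # From the Git repository directory, run the setup script shipped in the
        # DevTools package - a .bat on Windows, a .sh on Linux/OS X
        /c/AWS-ElasticBeanstalk-CLI/AWSDevTools/Windows/AWSDevTools-RepositorySetup.bat
        # afterwards the alias should exist:
        git aws.config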

    Read the article

  • How do you transfer an AWS RDS snapshot to a different AWS account

    - by Webmonger
    Hi, I have an RDS database that I need to transfer a snapshot of to another AWS account. I understand there are issues with being able to do this between availability zones, so I'm really unsure if this is possible. The RDS instance is MySQL. If it's not possible to transfer the snapshot, please could you explain how to transfer the data from one RDS instance to another without downloading any of the contents (the DB is over 200GB)? Thanks in advance
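
    Cross-account snapshot sharing postdates this question but is now a single call; a sketch with the current CLI (identifiers are placeholders):

        # Share a manual snapshot with another account
        aws rds modify-db-snapshot-attribute \
          --db-snapshot-identifier my-mysql-snapshot \
          --attribute-name restore \
          --values-to-add 123456789012
        # The other account can then restore the shared snapshot into its own instance

    Without that feature, the usual path was a mysqldump piped straight from the source instance into the target from an EC2 host, so nothing lands on a local machine.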

    Read the article

  • Connecting to RDS database from EC2 instance using bind9 CNAME alias

    - by mptre
    I'm trying to get internal DNS up and running on an EC2 instance. The main goal is to be able to define CNAME aliases for other AWS services. For example, instead of using the RDS endpoint, which might change over time, an alias mysql.company.int can be used instead. I'm using bind9 and here are my config files:

    /etc/bind/named.conf.local:

        zone "company.int" {
            type master;
            file "/etc/bind/db.company.int";
        };

    /etc/bind/db.company.int:

        ; $TTL 3600
        @       IN      SOA     company.int. company.localhost. (
                                20120617    ; Serial
                                604800      ; Refresh
                                86400       ; Retry
                                2419200     ; Expire
                                604800 )    ; Negative Cache TTL
        ;
        @       IN      NS      company.int.
        @       IN      A       127.0.0.1
        @       IN      AAAA    ::1
        ; CNAME
        mysql   IN      CNAME   xxxx.eu-west-1.rds.amazonaws.com.

    The dig command assures me my alias is working as expected:

        $ dig mysql.company.int
        ...
        ;; ANSWER SECTION:
        mysql.company.int.      3600    IN    CNAME    xxxx.eu-west-1.rds.amazonaws.com.
        xxxx.eu-west-1.rds.amazonaws.com.    60    IN    CNAME    ec2-yyy-yy-yy-yyy.eu-west-1.compute.amazonaws.com.
        ec2-yyy-yy-yy-yyy.eu-west-1.compute.amazonaws.com.    589575    IN    A    zzz.zz.zz.zzz
        ...

    As far as I can understand, a reverse zone isn't needed for a simple CNAME alias. However, when I try to connect to MySQL using my newly created alias, the operation gives me a timeout:

        $ mysql -uuser -ppassword -hmysql.company.int
        ERROR 2003 (HY000): Can't connect to MySQL server on 'mysql.company.int' (110)

    Any ideas? Thanks in advance!

    Read the article

  • Automatically Snapshotting AWS instances (or other backup strategy)

    - by user1172468
    I just realized that my AWS instance count has risen into the double digits. I'm currently backing up portions of my folders and DBs and moving them off to a backup instance. What I think I should be doing is taking a snapshot of the instances (automatically) and persisting them on S3, so I have a running 7-day collection of daily backups. There is a question asking the same thing here; however, the answers don't go into depth. The closest answer seems to be: use a cron job to snapshot the instance.

    So do I run the cron job on the instance itself? Or do I have a micro instance to run these snapshots? Could I get an example script or the command for, say, a Linux flavor? What software must I have installed to get this to run? Thanks.
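
    EBS snapshots are stored in S3 automatically and are taken through the API, so the cron job can live anywhere that has credentials - the instance itself or a small admin instance both work. A minimal sketch with the modern AWS CLI (the volume ID is a placeholder; setups contemporary with this question used the ec2-create-snapshot tool instead):

        # /etc/cron.d/ebs-snapshot - daily snapshot at 03:10
        10 3 * * * root aws ec2 create-snapshot --volume-id vol-0123456789abcdef0 \
          --description "daily backup $(date +\%F)"

    Pruning to a rolling 7 days takes a second script that lists snapshots per volume (aws ec2 describe-snapshots) and deletes the ones older than the cutoff (aws ec2 delete-snapshot).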

    Read the article

  • PostgreSQL 9.2: where is initdb located on Ubuntu?

    - by thanikkal
    I am trying to install Postgres on EC2 / EBS. I am following this article and am stuck at the following step:

        sudo su -
        su postgres -
        /usr/pgsql-9.0/bin/initdb -D /pgdata

    I can't find the initdb command at the stated location; as a matter of fact, I can't find the pgsql* directory at all under the /usr folder. Was this changed for Postgres 9.2, or is there an alternate command that would help me initdb?

    Edit 1: I know the folder pgsql-9.0 is version-specific, so I was expecting to see something more like pgsql-9.2 or similar.
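
    The /usr/pgsql-9.x layout belongs to the PGDG RPM packages on RedHat-flavored systems; Ubuntu's Debian packaging keeps version-specific binaries elsewhere and wraps initdb in pg_createcluster. A sketch for Ubuntu with PostgreSQL 9.2:

        # Ubuntu keeps the version-specific binaries here (not on PATH by default)
        ls /usr/lib/postgresql/9.2/bin/initdb
        # Run initdb directly...
        sudo -u postgres /usr/lib/postgresql/9.2/bin/initdb -D /pgdata
        # ...or use the Debian wrapper, which also writes the cluster config
        sudo pg_createcluster 9.2 main -d /pgdata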

    Read the article

  • When to increase AWS RDS MySQL Server instance to larger CPU/RAM?

    - by rksprst
    I'm wondering at what stage I need to move the RDS MySQL server to a larger CPU/RAM instance. The CPU utilization graph is near 0. Average free memory is around 150MB. Average swap usage is 420MB. Read latency is 0-20 ms/op and spikes up randomly. Average write latency is around 5 ms/op but spikes up to 10-20 ms/op. Are there some common rules here that I should follow? Thanks!

    Read the article

  • AWS RDS MySQL remote connection extremely slow

    - by nute
    I have a site hosted on AWS EC2 (Elastic Beanstalk), with a MySQL database hosted on AWS RDS. Everything works fine on the production server, fast and all. However, when I try to connect remotely from my local machine, it sometimes gets extremely slow (like 4 minutes to load the list of tables) or simply times out. I added my IP to the security group (which I did correctly, since it sometimes works). When it doesn't work, I check the prod server at the same time and it still looks good.

    Read the article

  • Available instance types for marketplace AMIs

    - by Christian
    I based my autoscaling AMIs on the TurnKey Linux nginx AMI from the marketplace. I am now unable to select any of the newer-generation instance types; for instance, my autoscaling uses the m3.large type, but I'd really like it to use the c3.xlarge type. Every time I try to create a c3.xlarge instance with my AMI I get the error:

        The instance configuration for this AWS Marketplace product is not supported.

    My question is: can I override this? I'm not using TKL support or any of their services, just the AMI. If I can't override it, do I have any other options besides creating a brand new AMI from scratch?

    Read the article

  • Odd behavior of setting REMOTE_ADDR between Apache, Nginx, and AWS ELB

    - by Chris Drumgoole
    I have encountered a strange issue and am curious if others have encountered it as well, and if there is anything that can be done.

    We have a setup with multiple AWS EC2 Linux machines sitting behind an ELB. The EC2 machines are running Nginx. Let's refer to these as my production machines (because they are!). I also have a Rackspace cloud machine running Apache. Completely separate. Let's call this the test server.

    Now, there's an ISP here in Singapore that seems to be funneling traffic through a transparent proxy or something, and when you do an IP check, the IP often changes. In fact, I noticed that when I check on http://www.whatismyip.com, the IP seems to be stable (doesn't change) across refreshes. But on http://www.whatismyipaddress.com, on refreshing, the IP changes! (So my ISP is doing weird stuff.)

    Now, back to my setup, I noticed a couple of things:

    - Checking the REMOTE_ADDR variable from PHP when connecting to a single Nginx production machine (bypassing the load balancer), it is set to the stable IP that doesn't change.
    - Checking the REMOTE_ADDR variable from PHP when connecting to the test Apache server, it is set to the IP that does change on refreshes.
    - Checking the headers when connecting to the Nginx production machines through the ELB, the ELB sets HTTP_X_FORWARDED_FOR to the stable IP.

    Has anyone experienced this odd behavior? Is there nothing that I can do? And which IP should I "trust" (the one Apache gives, or the one ELB and Nginx give)? Thanks! Chris
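
    Behind an ELB, REMOTE_ADDR is the load balancer's address; the client IP only survives in X-Forwarded-For, which Nginx can promote back into REMOTE_ADDR. A hedged sketch using the standard ngx_http_realip_module (the CIDR below is a placeholder for whatever range the ELB occupies in your network):

        # inside the http or server block of nginx.conf
        set_real_ip_from 10.0.0.0/8;          # trust X-Forwarded-For only from the ELB's range
        real_ip_header   X-Forwarded-For;

    With that in place, PHP's REMOTE_ADDR matches what the ELB saw, which is generally the address to trust.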

    Read the article
