Search Results

Search found 2912 results on 117 pages for 'amazon vpc'.

Page 60 of 117

  • "Unable to associated Elastic IP with cluster" in Eclipse Plugin Tutorial

    - by Jeffrey Chee
    Hi all, I am currently trying to evaluate AWS for my company and was trying to follow the tutorial at http://developer.amazonwebservices.com/connect/entry.jspa?externalID=2241. However, I get the error below during startup of the server instance:

      Unable to associated Elastic IP with cluster: Unable to detect that the Elastic IP was correctly associated.
      java.lang.Exception: Unable to detect that the Elastic IP was correctly associated
        at com.amazonaws.ec2.cluster.Cluster.associateElasticIp(Cluster.java:802)
        at com.amazonaws.ec2.cluster.Cluster.start(Cluster.java:311)
        at com.amazonaws.eclipse.wtp.ElasticClusterBehavior.launch(ElasticClusterBehavior.java:611)
        at com.amazonaws.eclipse.wtp.Ec2LaunchConfigurationDelegate.launch(Ec2LaunchConfigurationDelegate.java:47)
        at org.eclipse.debug.internal.core.LaunchConfiguration.launch(LaunchConfiguration.java:853)
        at org.eclipse.debug.internal.core.LaunchConfiguration.launch(LaunchConfiguration.java:703)
        at org.eclipse.debug.internal.core.LaunchConfiguration.launch(LaunchConfiguration.java:696)
        at org.eclipse.wst.server.core.internal.Server.startImpl2(Server.java:3051)
        at org.eclipse.wst.server.core.internal.Server.startImpl(Server.java:3001)
        at org.eclipse.wst.server.core.internal.Server$StartJob.run(Server.java:300)
        at org.eclipse.core.internal.jobs.Worker.run(Worker.java:55)

    Then, after a while, another error occurs:

      Unable to publish server configuration files: Unable to copy remote file after trying 4 times. local file: 'XXXXXXXX/XXX.zip'
      Results from first attempt: Unexpected exception: java.net.ConnectException: Connection timed out: connect
      root cause: java.net.ConnectException: Connection timed out: connect
        at com.amazonaws.eclipse.ec2.RemoteCommandUtils.copyRemoteFile(RemoteCommandUtils.java:128)
        at com.amazonaws.eclipse.wtp.tomcat.Ec2TomcatServer.publishServerConfiguration(Ec2TomcatServer.java:172)
        at com.amazonaws.ec2.cluster.Cluster.publishServerConfiguration(Cluster.java:369)
        at com.amazonaws.eclipse.wtp.ElasticClusterBehavior.publishServer(ElasticClusterBehavior.java:538)
        at org.eclipse.wst.server.core.model.ServerBehaviourDelegate.publish(ServerBehaviourDelegate.java:866)
        at org.eclipse.wst.server.core.model.ServerBehaviourDelegate.publish(ServerBehaviourDelegate.java:708)
        at org.eclipse.wst.server.core.internal.Server.publishImpl(Server.java:2731)
        at org.eclipse.wst.server.core.internal.Server$PublishJob.run(Server.java:278)
        at org.eclipse.core.internal.jobs.Worker.run(Worker.java:55)

    Can anyone point me to what I'm doing wrong? I followed the tutorials and the video tutorials on YouTube exactly. Best Regards ~Jeffrey


  • OpenStreetMap and Hadoop

    - by portoalet
    Hi, I need some ideas for a weekend project involving Hadoop and OpenStreetMap. I have access to an AWS EC2 instance with an OpenStreetMap snapshot on my EBS volume. The OpenStreetMap data is in a PostgreSQL database. What kind of MapReduce function could be run on the OpenStreetMap data, assuming I can export it into XML format and then place it into HDFS? In other words, I am having a brain cramp at the moment and cannot think of a MapReduce operation that could extract valuable insight from the OpenStreetMap XML (e.g. extract all the places designated as a park or golf course; but this only needs to be done once, not continuously). Many Thanks
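
    To make the brainstorming concrete, here is a minimal Hadoop Streaming sketch in Python for the park/golf-course idea. It is an assumption-laden toy: it treats the export as plain lines and only matches single-line <tag k="leisure" v="..."/> elements, whereas real OSM nodes and ways span multiple lines, so a production job would want an XML-aware input format.

      #!/usr/bin/env python
      # mapper.py - emit a count for every leisure=park / leisure=golf_course tag
      import re
      import sys

      TAG_RE = re.compile(r'<tag k="leisure" v="(park|golf_course)"')

      for line in sys.stdin:
          match = TAG_RE.search(line)
          if match:
              # key\tvalue pairs for Hadoop Streaming's shuffle
              print("%s\t1" % match.group(1))

      #!/usr/bin/env python
      # reducer.py - sum the 1s per leisure value (input arrives sorted by key)
      import sys

      current, total = None, 0
      for line in sys.stdin:
          key, _, value = line.rstrip("\n").partition("\t")
          if key != current and current is not None:
              print("%s\t%d" % (current, total))
              total = 0
          current = key
          total += int(value)
      if current is not None:
          print("%s\t%d" % (current, total))

    This is the classic word-count shape; swapping the regex (or emitting coordinates instead of counts) turns it into any other one-shot extraction over the same dump.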


  • Can't ssh to ec2 permission denied (publickey)

    - by Chris Barnes
    I have existing instances running and I can connect to them fine. Even if I start a new instance from one of my saved AMIs I can connect to it fine, but with any new public or community AMI (I've tried 2 official Ubuntu AMIs and 1 Fedora quickstart AMI) I get permission denied (publickey). The permissions are good on my key file. I've also tried creating a new keyfile. My EC2 firewall rules are good, and I've also tried creating a new group. This is the error I'm getting:

      ssh -v -i ec2-keypair [email protected]
      OpenSSH_5.2p1, OpenSSL 0.9.7l 28 Sep 2006
      debug1: Reading configuration data /Users/chris/.ssh/config
      debug1: Reading configuration data /etc/ssh_config
      debug1: Connecting to ec2-xxx.xxx.xxx.xxx.compute-1.amazonaws.com [xxx.xxx.xxx.xxx] port 22.
      debug1: Connection established.
      debug1: identity file ec2-keypair type -1
      debug1: Remote protocol version 2.0, remote software version OpenSSH_5.1p1 Debian-6ubuntu2
      debug1: match: OpenSSH_5.1p1 Debian-6ubuntu2 pat OpenSSH*
      debug1: Enabling compatibility mode for protocol 2.0
      debug1: Local version string SSH-2.0-OpenSSH_5.2
      debug1: SSH2_MSG_KEXINIT sent
      debug1: SSH2_MSG_KEXINIT received
      debug1: kex: server->client aes128-ctr hmac-md5 none
      debug1: kex: client->server aes128-ctr hmac-md5 none
      debug1: SSH2_MSG_KEX_DH_GEX_REQUEST(1024<1024<8192) sent
      debug1: expecting SSH2_MSG_KEX_DH_GEX_GROUP
      debug1: SSH2_MSG_KEX_DH_GEX_INIT sent
      debug1: expecting SSH2_MSG_KEX_DH_GEX_REPLY
      debug1: Host 'ec2-xxx.xxx.xxx.xxx.compute-1.amazonaws.com' is known and matches the RSA host key.
      debug1: Found key in /Users/chris/.ssh/known_hosts:13
      debug1: ssh_rsa_verify: signature correct
      debug1: SSH2_MSG_NEWKEYS sent
      debug1: expecting SSH2_MSG_NEWKEYS
      debug1: SSH2_MSG_NEWKEYS received
      debug1: SSH2_MSG_SERVICE_REQUEST sent
      debug1: SSH2_MSG_SERVICE_ACCEPT received
      debug1: Authentications that can continue: publickey
      debug1: Next authentication method: publickey
      debug1: Trying private key: ec2-keypair
      debug1: read PEM private key done: type RSA
      debug1: Authentications that can continue: publickey
      debug1: No more authentication methods to try.
      Permission denied (publickey).


  • Boto - How to delete a record set from Route53 - "Tried to delete resource record set but it was not found"

    - by Tampa
    I am using the following to delete Route53 records. I get no error messages.

      conn = Route53Connection(aws_access_key_id, aws_secret_access_key)
      changes = ResourceRecordSets(conn, zone_id)
      change = changes.add_change("DELETE", sub_domain, "A", 60, weight=weight, identifier=identifier)
      change.add_value(ip_old)
      changes.commit()

    All required fields are present and they match: weight, identifier, ttl=60, etc. For example:

      test.com. A 111.111.111.111 60 1 id1
      test.com. A 111.111.111.222 60 1 id2

    I want to delete 111.111.111.222 and the record set. So, what is the proper way to delete a record set? For a record set, I will have multiple values that are distinguished by a unique identifier. When an IP address becomes inactive I want to remove it from Route53. I am using a poor man's load balancing. Here is the metadata of the record I want to delete:

      {'alias_dns_name': None, 'alias_hosted_zone_id': None, 'identifier': u'15754-1', 'name': u'hui.com.', 'resource_records': [u'103.4.xxx.xxx'], 'ttl': u'60', 'type': u'A', 'weight': u'1'}

      Traceback (most recent call last):
        File "/home/ubuntu/workspace/rtbopsConfig/classes/redis_ha.py", line 353, in <module>
          deleteRedisSubDomains(aws_access_key_id, aws_secret_access_key, platform=platform, sub_domain=sub_domain, redis_domain=redis_domain, zone_id=zone_id, ip_address=ip_address, weight=1, identifier=identifier)
        File "/home/ubuntu/workspace/rtbopsConfig/classes/redis_ha.py", line 341, in deleteRedisSubDomains
          changes.commit()
        File "/usr/local/lib/python2.7/dist-packages/boto-2.3.0-py2.7.egg/boto/route53/record.py", line 131, in commit
          return self.connection.change_rrsets(self.hosted_zone_id, self.to_xml())
        File "/usr/local/lib/python2.7/dist-packages/boto-2.3.0-py2.7.egg/boto/route53/connection.py", line 291, in change_rrsets
          body)
      boto.route53.exception.DNSServerError: DNSServerError: 400 Bad Request
      <?xml version="1.0"?>
      <ErrorResponse xmlns="https://route53.amazonaws.com/doc/2011-05-05/"><Error><Type>Sender</Type><Code>InvalidChangeBatch</Code><Message>Tried to delete resource record set hui.com., type A, SetIdentifier 15754-1 but it was not found</Message></Error><RequestId>9972af89-cb69-11e1-803b-7bde5b9c457d</RequestId></ErrorResponse>

    Thanks
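
    For reference, one way to guarantee the DELETE matches exactly is to build it from the record Route53 itself returns, rather than from reconstructed values, since name (including the trailing dot), type, TTL, weight, and SetIdentifier must all agree. A sketch using the same boto API as above (get_all_rrsets, ResourceRecordSets, and add_change are real boto calls; the zone, name, and identifier are the example values from the question):

      from boto.route53.connection import Route53Connection
      from boto.route53.record import ResourceRecordSets

      conn = Route53Connection(aws_access_key_id, aws_secret_access_key)

      # Walk the live record sets and copy every field from what Route53
      # returns, so the DELETE request matches the stored record verbatim.
      for rrset in conn.get_all_rrsets(zone_id, type="A", name="hui.com."):
          if rrset.identifier == "15754-1":
              changes = ResourceRecordSets(conn, zone_id)
              change = changes.add_change(
                  "DELETE", rrset.name, rrset.type, rrset.ttl,
                  weight=rrset.weight, identifier=rrset.identifier)
              for value in rrset.resource_records:
                  change.add_value(value)
              changes.commit()
              break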


  • EC2 AMI for CentOS 5.x 64-bit

    - by Etienne
    Which AMI would you suggest for CentOS 5.x 64-bit? There is quite a large list, but I am clueless as to how to make my decision based on the list here: http://developer.amazonwebservices.com/connect/kbcategory.jspa?categoryID=208&resultOffset=0&sortField=107&sortOrder=0&filterEntryTypeID=-1 (I tried sorting by 'Rating', but that's too subjective.) I also don't want to build my own AMI (for now).


  • How to set up matlabpool for multiple processors?

    - by JohnIdol
    I just set up an Extra Large Heavy Computation EC2 instance to throw at my genetic algorithms problem, hoping to speed things up. This instance has 8 Intel Xeon processors (around 2.4GHz each) and 7 GB of RAM. On my own machine I have an Intel Core Duo, and MATLAB is able to work with my two cores just fine by running:

      matlabpool open 2

    On the EC2 instance, though, MATLAB is only capable of detecting 1 out of 8 processors, and if I try running:

      matlabpool open 8

    I get an error saying that the ClusterSize is 1 since there's only 1 core on my CPU. True, there is only 1 core on each CPU, but I have 8 CPUs on the given EC2 instance! So the difference between my machine and the EC2 instance is that I have my 2 cores on a single processor locally, while the EC2 instance has 8 distinct processors. My question is: how do I get MATLAB to work with those 8 processors? I found this paper, but it seems to be about setting up MATLAB across multiple EC2 instances (not about multiple processors on the same instance, EC2 or not), which is not my problem. Any help appreciated!


  • Container Options in AWS Elastic Beanstalk

    - by Sangram Anand
    We have deployed a Java web application in Elastic Beanstalk with a minimum instance count of 1 and a maximum instance count of 2 for autoscaling. The custom AMI we are using is c1.medium with Sun JDK 6. The environment status changed to yellow and then red. After checking the log file from the snapshot logs, we found an exception: Caused by: java.lang.OutOfMemoryError: Java heap space. We assume this could be one of the possible reasons for the environment failure. The settings we have configured in the Environment Container options are:

      Initial JVM Heap Size (MB): 256m
      Maximum JVM Heap Size (MB): 512m (the maximum heap size the Java virtual machine will ever consume, specified on the JVM launch command line using -Xmx)
      Maximum JVM Permanent Generation Size (MB): 512m

    Should I increase the heap size from 512m, or is it fine as it is?


  • Rails based S3 file manager

    - by Jim Jones
    Hi, I'm looking for an open source project that provides a file manager type interface to S3: the ability to view files and "folders", add/edit/delete files and folders, etc. I've seen http://s3fm.com, but I'd like to host something like that myself. Does anything like this exist? Thanks.


  • Hadoop or Hadoop Streaming for MapReduce on AWS

    - by aeolist
    I'm about to start a MapReduce project which will run on AWS, and I am presented with a choice: use either Java or C++. I understand that writing the project in Java would make more functionality available to me; however, C++ could pull it off too, through Hadoop Streaming. Mind you, I have little background in either language. A similar project has been done in C++, and the code is available to me. So my question: is this extra functionality available through AWS, or is it only relevant if you have more control over the cloud? Is there anything else I should bear in mind in order to make a decision, like the availability of plugins for Hadoop that work better with one language or the other? Thanks in advance


  • Adding S3 metadata using jets3t

    - by billintx
    I'm just starting to use the jets3t API for S3, using version 0.7.2. I can't seem to save metadata with the S3Objects I'm creating. What am I doing wrong? The object is successfully saved when I putObject, but I don't see the metadata after I get the object.

      S3Service s3Service = new RestS3Service(awsCredentials);
      S3Bucket bucket = s3Service.getBucket(BUCKET_NAME);
      String key = "/1783c05a/p1";
      String data = "This is test data at key " + key;
      S3Object object = new S3Object(key, data);
      object.addMetadata("color", "green");
      for (Iterator iterator = object.getMetadataMap().keySet().iterator(); iterator.hasNext();) {
          String type = (String) iterator.next();
          System.out.println(type + "==" + object.getMetadataMap().get(type));
      }
      s3Service.putObject(bucket, object);
      S3Object retreivedObject = s3Service.getObject(bucket, key);
      for (Iterator iterator = object.getMetadataMap().keySet().iterator(); iterator.hasNext();) {
          String type = (String) iterator.next();
          System.out.println(type + "==" + object.getMetadataMap().get(type));
      }

    Here's the output before putObject:

      Content-Length==37
      color==green
      Content-MD5==AOdkk23V6k+rLEV03171UA==
      Content-Type==text/plain; charset=utf-8
      md5-hash==00e764936dd5ea4fab2c4574df5ef550

    Here's the output after putObject/getObject:

      Content-Length==37
      ETag=="00e764936dd5ea4fab2c4574df5ef550"
      request-id==9ED1633672C0BAE9
      Date==Wed Mar 24 09:51:44 CDT 2010
      Content-MD5==AOdkk23V6k+rLEV03171UA==
      Content-Type==text/plain; charset=utf-8


  • Getting ID of an instance newly launched with ec2-api-tools

    - by Jonik
    I'm launching an EC2 instance by invoking ec2-run-instances from a simple bash script, and I want to perform further operations on that instance (e.g. associate an Elastic IP), for which I need the instance id. The command is something like:

      ec2-run-instances ami-dd8ea5a9 -K pk.pem -C cert.pem --region eu-west-1 -t c1.medium -n 1

    and its output:

      RESERVATION  r-b6ea58c1  696664755663  default
      INSTANCE  i-945af9e3  ami-dd8ea5b9  pending  0  c1.medium  2010-04-15T10:47:56+0000  eu-west-1a  aki-b02a01c4  ari-39c2e94d

    In this example, i-945af9e3 is the id I'm after. So I'd need a simple way to parse the id from what the command returns; how would you go about doing it? My AWK is a little rusty... Feel free to use any tool available on a typical Linux box. (If there's a way to get it directly using the EC2 API tools, all the better. But AFAIK there's no EC2 command to e.g. return the id of the most recently launched instance.)
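
    A small sketch of one way to do the parsing, in Python rather than awk (a hypothetical wrapper around the command shown above; it assumes the output format in the question, where the INSTANCE line carries the instance id in its second column):

      import subprocess

      def launch_and_get_id():
          # Run ec2-run-instances and capture its whitespace-separated output.
          out = subprocess.check_output([
              "ec2-run-instances", "ami-dd8ea5a9",
              "-K", "pk.pem", "-C", "cert.pem",
              "--region", "eu-west-1", "-t", "c1.medium", "-n", "1"],
              universal_newlines=True)
          for line in out.splitlines():
              fields = line.split()
              # The INSTANCE row's second field is the id (e.g. i-945af9e3).
              if fields and fields[0] == "INSTANCE":
                  return fields[1]
          raise RuntimeError("no INSTANCE line in ec2-run-instances output")

    The awk equivalent of the extraction step would be piping the command's output through awk '/^INSTANCE/ {print $2}'.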


  • Rails upload to s3 performance issue

    - by Denis
    Hello, I'm building an app to store files on my S3 account. I use Rails 3.0.0beta. A lot of files can be uploaded at the same time, and the cost (from a performance point of view) of an upload is quite heavy, so my app will be busy handling uploads all the time! Maybe a solution is to upload directly to S3, but I still need a submit to my app, at least to store the file's name. I'm wondering what the best solution is?
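
    For the direct-to-S3 idea, a hedged sketch of one shape it can take, written with Python's boto (which appears elsewhere on this page) rather than Ruby: the app hands the client a short-lived signed PUT URL, the client uploads straight to S3, and the app only records the file name. S3Connection.generate_url is a real boto call; the bucket name and the surrounding wiring are assumptions, and a browser-based flow would also need the S3 POST-policy variant or CORS handling.

      from boto.s3.connection import S3Connection

      conn = S3Connection(aws_access_key_id, aws_secret_access_key)

      def signed_upload_url(filename):
          # 15-minute signed PUT URL; the client PUTs the file body directly
          # to S3, so the app server never proxies the heavy bytes.
          return conn.generate_url(
              expires_in=900, method="PUT",
              bucket="my-upload-bucket", key=filename)

    The app's own form submit then only carries the file name for bookkeeping, which keeps the Rails side cheap.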


  • Cache front end for the JetS3t API

    - by Joshua
    Storage via the JetS3t REST API seems to be very slow. Is there a caching front end for the JetS3t API, to avoid a network hit on the fetch calls? (See S3Service.getObject: http://jets3t.s3.amazonaws.com/api/org/jets3t/service/S3Service.html#getObject(org.jets3t.service.model.S3Bucket, java.lang.String, java.util.Calendar, java.util.Calendar, java.lang.String[], java.lang.String[], java.lang.Long, java.lang.Long))


  • Technology stack for very frequent gps data collection

    - by gvaswani
    I am working on a project that involves GPS data collection from many users (say 1000) every second (while they move). I am planning on using a dedicated database instance on EC2, with MySQL on persistent block storage (EBS), and running a Ruby on Rails application with an nginx frontend. I haven't worked on such a data collection application before. Am I missing something here? I will have another instance which will act as the application server and use the data from the same EBS. If anybody has dealt with such a system before, any advice would be much appreciated.


  • Securing S3 via your own application

    - by Neil Middleton
    Imagine the following use case: you have a Basecamp-style application hosting files with S3. Accounts all have their own files, but they are stored on S3. How, therefore, would a developer go about securing files so that users of account 1 couldn't somehow get to the files of account 2? We're talking Rails, if that's a help.
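
    One common shape of the answer is to keep the bucket private and have the application vend short-lived signed URLs only after doing its own ownership check, so S3 never has to know about accounts at all. A hedged sketch in Python with boto rather than Rails (Key.generate_url and get_bucket are real boto calls; the bucket name, model fields, and helper are hypothetical):

      from boto.s3.connection import S3Connection

      conn = S3Connection(aws_access_key_id, aws_secret_access_key)
      bucket = conn.get_bucket("app-private-files")   # bucket stays non-public

      def file_url_for(user, file_record):
          # The application, not S3, decides who may see what.
          if file_record.account_id != user.account_id:
              raise PermissionError("not your account's file")
          key = bucket.get_key(file_record.s3_key)
          if key is None:
              raise LookupError("file missing from S3")
          # 60-second signed GET URL: useless to anyone who copies it later.
          return key.generate_url(expires_in=60)

    The same pattern translates directly to Rails with the aws-s3 or right_aws gems of that era: authorize in the controller, then redirect to a signed, expiring URL.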


  • paperclip callbacks or simple processor?

    - by holden
    I wanted to run the after_post_process callback, but it doesn't seem to work in Rails 3.0.1 using Paperclip 2.3.8. It gives an error:

      undefined method `_post_process_callbacks' for #<Class:0x102d55ea0>

    I want to call the Panda API after the file has been uploaded. I would have created my own processor for this, but as Panda handles the processing, can upload the files as well, and queues itself for an undetermined duration, I thought a callback would do fine. But the callbacks don't seem to work in Rails 3.

      after_post_process :panda_create

      def panda_create
        video = Panda::Video.create(:source_url => mp3.url.gsub(/[?]\d*/,''), :profiles => "f4475446032025d7216226ad8987f8e9", :path_format => "blah/1234")
      end

    I tried require and include for Paperclip in my model, but it didn't seem to matter. Any ideas?


  • Django as S3 proxy

    - by schneck
    Hi there, I extended a ModelAdmin with a custom field "Download file", which is a link to a URL in my Django project, like: http://www.myproject.com/downloads/1. There, I want to serve a file which is stored in an S3 bucket. The files in the bucket are not publicly readable, and the user may not have direct access to them. Now I want to:

      - avoid loading the file into server memory (these are multi-GB files)
      - avoid having temp files on the server

    The ideal solution would be to let Django act as a proxy that streams S3 chunks directly to the user. I use boto, but did not find a possibility to stream the chunks. Any ideas? Thanks.
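
    boto's Key object is itself an iterator that yields the object body in buffered chunks, which is enough for exactly this kind of proxy. A minimal sketch (get_bucket, get_key, and Key iteration are real boto behavior; the bucket name, URL wiring, and key_name_for lookup are assumptions; StreamingHttpResponse is Django 1.5+, so on older Django the iterator would be passed to HttpResponse instead):

      from boto.s3.connection import S3Connection
      from django.http import Http404, StreamingHttpResponse

      conn = S3Connection(aws_access_key_id, aws_secret_access_key)
      bucket = conn.get_bucket("my-private-bucket")

      def download(request, pk):
          key = bucket.get_key(key_name_for(pk))  # key_name_for: your own lookup
          if key is None:
              raise Http404
          # Iterating a boto Key yields the object in chunks, so neither the
          # whole file nor a temp copy ever lands on the Django server.
          response = StreamingHttpResponse(key, content_type=key.content_type)
          response["Content-Length"] = key.size
          return response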


  • Distributed datastore

    - by Julien Genestoux
    We're trying to add some kind of persistence to our app. The app generates about 250 entries per second. Each of these entries belongs to one of 2M files. For each file, we want to keep the last 10 entries, so we can look them up later. The way our client application works:

      - it gets a stream of all the data
      - it fetches the right file (GET)
      - it adds the new content
      - it saves the file back (PUT)

    We're looking for an efficient way to store this data that can scale horizontally, as the amount of data we're getting is doubling every few weeks. We initially looked at S3. It works fine, but becomes very expensive very fast ($1000 monthly just in PUT operations!). We then gave Riak a shot, but it seems we can't get more than 60 writes/sec on each node, which is very, very slow. Any other solution out there?
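
    For reference, the read-modify-write cycle described above looks roughly like this against S3 with boto (a sketch of the pattern under discussion, not a fix for its cost; the bucket name and JSON serialization are assumptions):

      import json
      from boto.s3.connection import S3Connection

      conn = S3Connection(aws_access_key_id, aws_secret_access_key)
      bucket = conn.get_bucket("entries-bucket")

      def append_entry(file_id, entry):
          key = bucket.get_key(file_id)
          if key is None:
              key = bucket.new_key(file_id)   # first entry for this file
              entries = []
          else:
              entries = json.loads(key.get_contents_as_string())  # GET
          entries = (entries + [entry])[-10:]  # keep only the last 10
          # Every call below is a billed PUT, which at 250 entries/sec is
          # where the $1000/month figure comes from.
          key.set_contents_from_string(json.dumps(entries))       # PUT

    The fact that each tiny append costs one GET plus one PUT is the crux; any alternative store mainly has to make that append path cheap.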

