Search Results

Search found 2923 results on 117 pages for 'amazon ami'.

Page 60/117 | < Previous Page | 56 57 58 59 60 61 62 63 64 65 66 67  | Next Page >

  • "Unable to associated Elastic IP with cluster" in Eclipse Plugin Tutorial

    - by Jeffrey Chee
    Hi all, I am currently trying to evaluate AWS for my company and was trying to follow the tutorials on the web: http://developer.amazonwebservices.com/connect/entry.jspa?externalID=2241 However, I get the error below during startup of the server instance:

        Unable to associated Elastic IP with cluster: Unable to detect that the Elastic IP was correctly associated.
        java.lang.Exception: Unable to detect that the Elastic IP was correctly associated
            at com.amazonaws.ec2.cluster.Cluster.associateElasticIp(Cluster.java:802)
            at com.amazonaws.ec2.cluster.Cluster.start(Cluster.java:311)
            at com.amazonaws.eclipse.wtp.ElasticClusterBehavior.launch(ElasticClusterBehavior.java:611)
            at com.amazonaws.eclipse.wtp.Ec2LaunchConfigurationDelegate.launch(Ec2LaunchConfigurationDelegate.java:47)
            at org.eclipse.debug.internal.core.LaunchConfiguration.launch(LaunchConfiguration.java:853)
            at org.eclipse.debug.internal.core.LaunchConfiguration.launch(LaunchConfiguration.java:703)
            at org.eclipse.debug.internal.core.LaunchConfiguration.launch(LaunchConfiguration.java:696)
            at org.eclipse.wst.server.core.internal.Server.startImpl2(Server.java:3051)
            at org.eclipse.wst.server.core.internal.Server.startImpl(Server.java:3001)
            at org.eclipse.wst.server.core.internal.Server$StartJob.run(Server.java:300)
            at org.eclipse.core.internal.jobs.Worker.run(Worker.java:55)

    Then, after a while, another error occurs:

        Unable to publish server configuration files: Unable to copy remote file after trying 4 times. Local file: 'XXXXXXXX/XXX.zip'
        Results from first attempt: Unexpected exception: java.net.ConnectException: Connection timed out: connect
        root cause: java.net.ConnectException: Connection timed out: connect
            at com.amazonaws.eclipse.ec2.RemoteCommandUtils.copyRemoteFile(RemoteCommandUtils.java:128)
            at com.amazonaws.eclipse.wtp.tomcat.Ec2TomcatServer.publishServerConfiguration(Ec2TomcatServer.java:172)
            at com.amazonaws.ec2.cluster.Cluster.publishServerConfiguration(Cluster.java:369)
            at com.amazonaws.eclipse.wtp.ElasticClusterBehavior.publishServer(ElasticClusterBehavior.java:538)
            at org.eclipse.wst.server.core.model.ServerBehaviourDelegate.publish(ServerBehaviourDelegate.java:866)
            at org.eclipse.wst.server.core.model.ServerBehaviourDelegate.publish(ServerBehaviourDelegate.java:708)
            at org.eclipse.wst.server.core.internal.Server.publishImpl(Server.java:2731)
            at org.eclipse.wst.server.core.internal.Server$PublishJob.run(Server.java:278)
            at org.eclipse.core.internal.jobs.Worker.run(Worker.java:55)

    Can anyone point me to what I'm doing wrong? I followed the tutorials and the video tutorials on YouTube exactly. Best regards, ~Jeffrey

    Read the article

  • OpenStreetMap and Hadoop

    - by portoalet
    Hi, I need some ideas for a weekend project involving Hadoop and OpenStreetMap. I have access to an AWS EC2 instance with an OpenStreetMap snapshot on my EBS volume. The OpenStreetMap data is in a PostgreSQL database. What kind of MapReduce job could be run on the OpenStreetMap data, assuming I can export it into XML format and then place it into HDFS? In other words, I am having a brain cramp at the moment and cannot think of a MapReduce operation that would extract valuable insight from the OpenStreetMap XML (e.g. extract all the places designated as a park or golf course; this only needs to be done once, not continuously). Many thanks
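
    For illustration, here is a minimal Hadoop Streaming mapper for the park/golf-course idea above, written in Python. It assumes the OSM export has been flattened so each <tag .../> element sits on its own line (or that a streaming XML record reader is used); the file paths in the comments are placeholders, not part of the question.

        #!/usr/bin/env python
        # Streaming mapper: emit a count for each leisure tag of interest.
        import sys

        INTERESTING = ('park', 'golf_course')

        for line in sys.stdin:
            if 'k="leisure"' not in line:
                continue
            for value in INTERESTING:
                if 'v="%s"' % value in line:
                    print '%s\t1' % value

        # A matching reducer just sums the 1s per key; the job is launched
        # roughly like this (paths are placeholders):
        #   hadoop jar hadoop-streaming.jar \
        #       -input /osm/planet.xml -output /osm/leisure-counts \
        #       -mapper mapper.py -reducer reducer.py \
        #       -file mapper.py -file reducer.py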

    Read the article

  • Boto - How to delete a record set from route53 - "Tried to delete resource record set but it was not found"

    - by Tampa
    I am using the following to delete Route 53 records. I get no error messages.

        conn = Route53Connection(aws_access_key_id, aws_secret_access_key)
        changes = ResourceRecordSets(conn, zone_id)
        change = changes.add_change("DELETE", sub_domain, "A", 60, weight=weight, identifier=identifier)
        change.add_value(ip_old)
        changes.commit()

    All required fields are present and they match: weight, identifier, ttl=60, etc. For example:

        test.com. A 111.111.111.111 60 1 id1
        test.com. A 111.111.111.222 60 1 id2

    I want to delete 111.111.111.222 and its record set. So, what is the proper way to delete a record set? For a record set, I will have multiple values that are distinguished by a unique identifier. When an IP address becomes inactive I want to remove it from Route 53. I am using a poor man's load balancing. Here is the metadata of the record I want to delete:

        {'alias_dns_name': None, 'alias_hosted_zone_id': None, 'identifier': u'15754-1', 'name': u'hui.com.', 'resource_records': [u'103.4.xxx.xxx'], 'ttl': u'60', 'type': u'A', 'weight': u'1'}

        Traceback (most recent call last):
          File "/home/ubuntu/workspace/rtbopsConfig/classes/redis_ha.py", line 353, in <module>
            deleteRedisSubDomains(aws_access_key_id, aws_secret_access_key, platform=platform, sub_domain=sub_domain, redis_domain=redis_domain, zone_id=zone_id, ip_address=ip_address, weight=1, identifier=identifier)
          File "/home/ubuntu/workspace/rtbopsConfig/classes/redis_ha.py", line 341, in deleteRedisSubDomains
            changes.commit()
          File "/usr/local/lib/python2.7/dist-packages/boto-2.3.0-py2.7.egg/boto/route53/record.py", line 131, in commit
            return self.connection.change_rrsets(self.hosted_zone_id, self.to_xml())
          File "/usr/local/lib/python2.7/dist-packages/boto-2.3.0-py2.7.egg/boto/route53/connection.py", line 291, in change_rrsets
            body)
        boto.route53.exception.DNSServerError: DNSServerError: 400 Bad Request
        <?xml version="1.0"?>
        <ErrorResponse xmlns="https://route53.amazonaws.com/doc/2011-05-05/"><Error><Type>Sender</Type><Code>InvalidChangeBatch</Code><Message>Tried to delete resource record set hui.com., type A, SetIdentifier 15754-1 but it was not found</Message></Error><RequestId>9972af89-cb69-11e1-803b-7bde5b9c457d</RequestId></ErrorResponse>

    Thanks
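
    For context, a Route 53 DELETE only succeeds when every field of the change (name, type, TTL, weight, set identifier, and all resource record values) exactly matches what the service has stored; otherwise it answers with the InvalidChangeBatch error shown above. A minimal boto 2.x sketch along those lines, reusing the credentials and zone_id from the question and treating the record name and identifier as placeholders:

        from boto.route53.connection import Route53Connection
        from boto.route53.record import ResourceRecordSets

        conn = Route53Connection(aws_access_key_id, aws_secret_access_key)

        # Look up the record set as Route 53 actually stores it, then echo
        # those exact values back in the DELETE change.
        existing = None
        for rrset in conn.get_all_rrsets(zone_id, type='A', name='test.com.'):
            if rrset.identifier == 'id2':  # placeholder identifier
                existing = rrset
                break

        if existing is not None:
            changes = ResourceRecordSets(conn, zone_id)
            change = changes.add_change('DELETE', existing.name, existing.type,
                                        ttl=existing.ttl,
                                        weight=existing.weight,
                                        identifier=existing.identifier)
            for value in existing.resource_records:
                change.add_value(value)
            changes.commit()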

    Read the article

  • How to setup matlabpool for multiple processors?

    - by JohnIdol
    I just set up an Extra Large Heavy Computation EC2 instance to throw at my Genetic Algorithms problem, hoping to speed things up. This instance has 8 Intel Xeon processors (around 2.4 GHz each) and 7 GB of RAM. On my machine I have an Intel Core Duo, and MATLAB is able to work with my two cores just fine by running: matlabpool open 2 On the EC2 instance, though, MATLAB is only capable of detecting 1 out of 8 processors, and if I try running: matlabpool open 8 I get an error saying that the ClusterSize is 1 since there is only 1 core on my CPU. True, there is only 1 core on each CPU, but I have 8 CPUs on the given EC2 instance! So the difference between my machine and the EC2 instance is that I have my 2 cores on a single processor locally, while the EC2 instance has 8 distinct processors. My question is, how do I get MATLAB to work with those 8 processors? I found this paper, but it seems related to setting up MATLAB with multiple EC2 instances (not to multiple processors on the same instance, EC2 or not), which is not my problem. Any help appreciated!

    Read the article

  • Rails based S3 file manager

    - by Jim Jones
    Hi, I'm looking for an open source project that provides a file-manager-type interface to S3: the ability to view files and "folders", add/edit/delete files and folders, and so on. I've seen http://s3fm.com, but I'd like to host something like that myself. Does anything like this exist? Thanks.

    Read the article

  • Hadoop or Hadoop Streaming for MapReduce on AWS

    - by aeolist
    I'm about to start a MapReduce project which will run on AWS, and I am presented with a choice: use either Java or C++. I understand that writing the project in Java would make more functionality available to me; however, C++ could pull it off too, through Hadoop Streaming. Mind you, I have little background in either language. A similar project has been done in C++, and the code is available to me. So my question: is this extra functionality available through AWS, or is it only relevant if you have more control over the cloud? Is there anything else I should bear in mind in order to make a decision, like the availability of plugins for Hadoop that work better with one language or the other? Thanks in advance

    Read the article

  • Adding S3 metadata using jets3t

    - by billintx
    I'm just starting to use the JetS3t API for S3, with version 0.7.2. I can't seem to save metadata with the S3Objects I'm creating. What am I doing wrong? The object is successfully saved when I call putObject, but I don't see the metadata after I get the object back.

        S3Service s3Service = new RestS3Service(awsCredentials);
        S3Bucket bucket = s3Service.getBucket(BUCKET_NAME);
        String key = "/1783c05a/p1";
        String data = "This is test data at key " + key;
        S3Object object = new S3Object(key, data);
        object.addMetadata("color", "green");
        for (Iterator iterator = object.getMetadataMap().keySet().iterator(); iterator.hasNext();) {
            String type = (String) iterator.next();
            System.out.println(type + "==" + object.getMetadataMap().get(type));
        }
        s3Service.putObject(bucket, object);
        S3Object retreivedObject = s3Service.getObject(bucket, key);
        for (Iterator iterator = retreivedObject.getMetadataMap().keySet().iterator(); iterator.hasNext();) {
            String type = (String) iterator.next();
            System.out.println(type + "==" + retreivedObject.getMetadataMap().get(type));
        }

    Here's the output before putObject:

        Content-Length==37
        color==green
        Content-MD5==AOdkk23V6k+rLEV03171UA==
        Content-Type==text/plain; charset=utf-8
        md5-hash==00e764936dd5ea4fab2c4574df5ef550

    Here's the output after putObject/getObject:

        Content-Length==37
        ETag=="00e764936dd5ea4fab2c4574df5ef550"
        request-id==9ED1633672C0BAE9
        Date==Wed Mar 24 09:51:44 CDT 2010
        Content-MD5==AOdkk23V6k+rLEV03171UA==
        Content-Type==text/plain; charset=utf-8

    Read the article

  • Technology stack for very frequent gps data collection

    - by gvaswani
    I am working on a project that involves GPS data collection from many users (say 1000) every second (while they move). I am planning to use a dedicated database instance on EC2 with MySQL on persistent block storage (EBS), and to run a Ruby on Rails application with an nginx frontend. I haven't worked on this kind of data-collection application before. Am I missing something here? I will have another instance which will act as the application server and use the data from the same EBS. If anybody has dealt with such a system before, any advice would be much appreciated.

    Read the article

  • Rails upload to s3 performance issue

    - by Denis
    Hello, I'm building an app to store files on my S3 account. I use Rails 3.0.0beta. A lot of files can be uploaded at the same time, and the cost (from a performance point of view) of an upload is quite heavy, so my app will be busy handling uploads all the time! Maybe a solution is to upload directly to S3, but I would still need a request to hit my app, at least to store the file's name. I'm wondering what the best solution is?

    Read the article

  • Cache front end for the JetS3t API

    - by Joshua
    Storage via the JetS3t REST API seems to be very slow. Is there a caching front end for the JetS3t API that avoids a network hit on fetch calls such as getObject? http://jets3t.s3.amazonaws.com/api/org/jets3t/service/S3Service.html#getObject(org.jets3t.service.model.S3Bucket, java.lang.String, java.util.Calendar, java.util.Calendar, java.lang.String[], java.lang.String[], java.lang.Long, java.lang.Long)

    Read the article

  • Securing S3 via your own application

    - by Neil Middleton
    Imagine the following use case: you have a Basecamp-style application hosting files on S3. Accounts all have their own files, but they are stored on S3. How, therefore, would a developer go about securing the files so that users of account 1 couldn't somehow get to the files of account 2? We're talking Rails, if that's a help.
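
    One common approach is to keep every object private and have the application hand out short-lived signed URLs only after checking that the signed-in user owns the file. The question is about Rails, but the mechanism is easiest to sketch with boto in Python; the bucket layout, key naming, and helper name below are illustrative assumptions, and Ruby S3 libraries of the same era (e.g. aws-s3, right_aws) offer similar signed-URL helpers.

        from boto.s3.connection import S3Connection

        def download_url(aws_key_id, aws_secret_key, bucket_name, account_id, filename):
            # Objects stay private in S3; only this URL, valid for 60 seconds,
            # grants access, and the app decides who gets one.
            conn = S3Connection(aws_key_id, aws_secret_key)
            key_name = 'account-%s/%s' % (account_id, filename)  # illustrative key layout
            return conn.generate_url(60, 'GET', bucket=bucket_name, key=key_name)

    After its own permission check, the app simply redirects the browser to the returned URL.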

    Read the article

  • paperclip callbacks or simple processor?

    - by holden
    I wanted to run the after_post_process callback, but it doesn't seem to work in Rails 3.0.1 with Paperclip 2.3.8. It gives an error: undefined method `_post_process_callbacks' for #<Class:0x102d55ea0> I want to call the Panda API after the file has been uploaded. I would have created my own processor for this, but since Panda handles the processing, can upload the files as well, and queues itself for an undetermined duration, I thought a callback would do fine. But the callbacks don't seem to work in Rails 3.

        after_post_process :panda_create

        def panda_create
          video = Panda::Video.create(:source_url => mp3.url.gsub(/[?]\d*/,''),
                                      :profiles => "f4475446032025d7216226ad8987f8e9",
                                      :path_format => "blah/1234")
        end

    I tried require and include for Paperclip in my model, but it didn't seem to matter. Any ideas?

    Read the article

  • Django as S3 proxy

    - by schneck
    Hi there, I extended a ModelAdmin with a custom field "Download file", which is a link to a URL in my Django project, like http://www.myproject.com/downloads/1. There, I want to serve a file which is stored in an S3 bucket. The files in the bucket are not publicly readable, and the user may not have direct access to them. Now I want to avoid having the file loaded into server memory (these are multi-GB files) and avoid temp files on the server. The ideal solution would be to let Django act as a proxy that streams S3 chunks directly to the user. I use boto, but did not find a way to stream the chunks. Any ideas? Thanks.
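
    For what it's worth, boto's Key objects are iterable and yield the object body in chunks, so one way to proxy without buffering the whole file is to hand the key straight to the response. A rough sketch, assuming a hypothetical Download model that maps the URL id to a bucket/key pair and the usual AWS settings names (permission checks omitted):

        from boto.s3.connection import S3Connection
        from django.conf import settings
        from django.http import HttpResponse, Http404

        def download(request, pk):
            # Hypothetical model holding bucket_name, key_name and filename.
            obj = Download.objects.get(pk=pk)
            conn = S3Connection(settings.AWS_ACCESS_KEY_ID, settings.AWS_SECRET_ACCESS_KEY)
            bucket = conn.get_bucket(obj.bucket_name, validate=False)
            key = bucket.get_key(obj.key_name)
            if key is None:
                raise Http404
            # Passing the Key as an iterator lets Django stream it chunk by
            # chunk instead of reading the whole object into memory.
            response = HttpResponse(key, content_type='application/octet-stream')
            response['Content-Length'] = key.size
            response['Content-Disposition'] = 'attachment; filename="%s"' % obj.filename
            return response

    Note that each download still ties up a worker for its duration; that is the price of keeping the bucket private without handing out signed URLs.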

    Read the article

  • Distributed datastore

    - by Julien Genestoux
    We're trying to add some kind of persistence to our app. The app generates about 250 entries per second. Each of these entries belongs to one of 2M files. For each file, we want to keep the last 10 entries, so we can look them up later. The way our client application works: it gets a stream of all the data, fetches the right file (GET), adds the new content, and saves the file back (PUT). We're looking for an efficient way to store this data that can scale horizontally, as the amount of data we're getting is doubling every few weeks. We initially looked at S3. It works fine, but becomes very expensive very fast ($1000 monthly just in PUT operations!). We then gave Riak a shot, but it seems we can't get more than 60 writes/sec on each node, which is very, very slow. Any other solution out there?

    Read the article

  • How do you distinguish your EC2 instances?

    - by Erik
    The ec2-describe-instances command is not very helpful in distinguishing the instances. Are there command line tools that give a better overview? Perhaps somewhat like http://github.com/newbamboo/manec2 but with support for different regions etc.
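
    If a small script is acceptable, boto can cover the multi-region part. A rough sketch, assuming instances carry a Name tag (the column layout is arbitrary):

        import boto.ec2

        # Print one line per instance in every region, using the Name tag
        # (when present) to tell instances apart.
        for region in boto.ec2.regions():
            conn = region.connect()
            for reservation in conn.get_all_instances():
                for instance in reservation.instances:
                    name = instance.tags.get('Name', '-')
                    print '%-14s %-11s %-20s %-10s %s' % (
                        region.name, instance.id, name,
                        instance.state, instance.instance_type)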

    Read the article

  • Using a "local" S3 emulation layer as a replacement for HDFS?

    - by user183394
    I have been testing out the most recent Cloudera CDH4 hadoop-conf-pseudo (i.e. MRv2 or YARN) on a notebook, which has 4 cores, 8 GB RAM, an Intel X25-M G2 SSD, and runs an S3 emulation layer my colleagues and I wrote in C++. The OS is Ubuntu 12.04 LTS 64-bit. So far so good. Looking at "Setting up hadoop to use S3 as a replacement for HDFS", I would like to do the same on my notebook. Nevertheless, I can't find where to change jets3t.properties to set the endpoint to localhost. I downloaded hadoop-2.0.1-alpha.tar.gz and searched the source without finding a clue. There is a similar question on SO, "Using s3 as fs.default.name or HDFS?", but I want to use our own lightweight and fast S3 emulation layer, instead of AWS S3, for our experiments. I would appreciate a hint as to how I can change the endpoint to a different hostname. Regards, --Zack
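
    For reference, JetS3t (which backs Hadoop's s3/s3n filesystems) reads a jets3t.properties file from the classpath, so dropping one into the Hadoop configuration directory is the usual way to repoint it; whether CDH4's pseudo-distributed packaging picks it up from there is an assumption worth verifying. A sketch with placeholder port, bucket, and credentials:

        # jets3t.properties, placed on the Hadoop classpath (e.g. /etc/hadoop/conf)
        s3service.s3-endpoint=localhost
        s3service.s3-endpoint-http-port=8080
        s3service.https-only=false
        s3service.disable-dns-buckets=true

        # core-site.xml then points the default filesystem at the emulated store,
        # e.g. fs.default.name=s3n://mybucket together with fs.s3n.awsAccessKeyId
        # and fs.s3n.awsSecretAccessKey.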

    Read the article

  • Serving files over HTTPS dynamically based on request.ssl? with Attachment_fu

    - by Marston A.
    I see there is a :use_ssl option in attachment_fu which checks the amazon_s3.yml file in order to serve files via https://. In s3_backend.rb you have this method:

        def self.protocol
          @protocol ||= s3_config[:use_ssl] ? 'https://' : 'http://'
        end

    But this then makes it serve ALL S3 attachments with SSL. I'd like to make it dynamic, depending on whether the current request was made with https://, i.e.:

        if request.ssl?
          @protocol = "https://"
        else
          @protocol = "http://"
        end

    How can I make it work this way? I've tried modifying the method, and then I get: NameError: undefined local variable or method `request' for Technoweenie::AttachmentFu::Backends::S3Backend:Module

    Read the article

  • How to scale MongoDB

    - by terence410
    I know that MongoDB can scale vertically, but what if I am running out of disk? I am currently using EC2 with EBS. As you know, an EBS volume has to be allocated with a fixed size. What if the MongoDB data grows bigger than the EBS size? Do I have to create a larger EBS volume and copy the files over? Or should we start more MongoDB instances, each connected to a different EBS disk? In that case, I could connect to a different instance for different databases.

    Read the article
