Search Results

Search found 2891 results on 116 pages for 'amazon ecs'.

Page 60/116 | < Previous Page | 56 57 58 59 60 61 62 63 64 65 66 67  | Next Page >

  • How to setup matlabpool for multiple processors?

    - by JohnIdol
    I just set up an Extra Large Heavy Computation EC2 instance to throw at my genetic algorithms problem, hoping to speed things up. This instance has 8 Intel Xeon processors (around 2.4 GHz each) and 7 GB of RAM. On my machine I have an Intel Core Duo, and MATLAB is able to work with my two cores just fine by running: matlabpool open 2. On the EC2 instance, though, MATLAB is only capable of detecting 1 out of 8 processors, and if I try running matlabpool open 8 I get an error saying that the ClusterSize is 1 since there's only 1 core on my CPU. True, there is only 1 core on each CPU, but I have 8 CPUs on the given EC2 instance! So the difference between my machine and the EC2 instance is that I have my 2 cores on a single processor locally, while the EC2 instance has 8 distinct processors. My question is: how do I get MATLAB to work with those 8 processors? I found this paper, but it seems related to setting up MATLAB across multiple EC2 instances (not to multiple processors on the same instance, EC2 or not), which is not my problem. Any help appreciated!

    Read the article

  • Container Options in AWS Elastic Beanstalk

    - by Sangram Anand
    We have deployed a Java web application on Elastic Beanstalk with a minimum instance count of 1 and a maximum of 2 for autoscaling. We are using a custom AMI on a c1.medium instance with Sun JDK 6. The environment status changed to yellow and then red. After checking the log file from the snapshot logs, we found an exception: Caused by: java.lang.OutOfMemoryError: Java heap space. We assume this could be one of the possible reasons for the environment failure. The settings we have configured in the environment's container options are:

      Initial JVM Heap Size (MB): 256m
      Maximum JVM Heap Size (MB): 512m (the maximum heap size the JVM will ever consume, passed on the launch command line via -Xmx)
      Maximum JVM Permanent Generation Size (MB): 512m

    Should I increase the heap size beyond 512m, or is it fine as it is?
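
    If it helps to adjust these from a script rather than the console, here is a rough sketch using boto3 (which postdates this question) to raise the heap settings; the environment name and values are placeholders, and the Tomcat JVM-options namespace is an assumption about how the stack exposes them:

      import boto3

      eb = boto3.client("elasticbeanstalk", region_name="us-east-1")

      # hypothetical environment name; namespace/option names assume a Tomcat-based stack
      eb.update_environment(
          EnvironmentName="my-java-app-env",
          OptionSettings=[
              {"Namespace": "aws:elasticbeanstalk:container:tomcat:jvmoptions",
               "OptionName": "Xms", "Value": "256m"},
              {"Namespace": "aws:elasticbeanstalk:container:tomcat:jvmoptions",
               "OptionName": "Xmx", "Value": "1024m"},
          ],
      )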

    Read the article

  • Rails based S3 file manager

    - by Jim Jones
    Hi, I'm looking for an open source project that provides a file-manager-type interface to S3: the ability to view files and "folders", add/edit/delete files and folders, etc. I've seen http://s3fm.com, but I'd like to host something like that myself. Does anything like this exist? Thanks.
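
    For background on what such a tool does under the hood, here is a minimal sketch (Python/boto3, bucket name made up) of listing the "folders" and files at one level of an S3 bucket; S3 itself only has flat keys, and the folder view comes from a delimiter-based listing:

      import boto3

      s3 = boto3.client("s3")

      def list_level(bucket, prefix=""):
          # keys sharing the prefix up to the next "/" are grouped as CommonPrefixes ("folders")
          resp = s3.list_objects_v2(Bucket=bucket, Prefix=prefix, Delimiter="/")
          folders = [p["Prefix"] for p in resp.get("CommonPrefixes", [])]
          files = [o["Key"] for o in resp.get("Contents", [])]
          return folders, files

      print(list_level("my-bucket", "photos/"))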

    Read the article

  • Hadoop or Hadoop Streaming for MapReduce on AWS

    - by aeolist
    I'm about to start a MapReduce project which will run on AWS, and I am presented with a choice: use either Java or C++. I understand that writing the project in Java would make more functionality available to me; however, C++ could pull it off too, through Hadoop Streaming. Mind you, I have little background in either language. A similar project has been done in C++, and that code is available to me. So my question: is this extra functionality available through AWS, or is it only relevant if you have more control over the cloud? Is there anything else I should bear in mind in order to make a decision, like the availability of plugins for Hadoop that work better with one language or the other? Thanks in advance.

    Read the article

  • Adding S3 metadata using jets3t

    - by billintx
    I'm just starting to use the JetS3t API for S3 (version 0.7.2), and I can't seem to save metadata with the S3Objects I'm creating. What am I doing wrong? The object is successfully saved when I call putObject, but I don't see the metadata after I get the object.

      S3Service s3Service = new RestS3Service(awsCredentials);
      S3Bucket bucket = s3Service.getBucket(BUCKET_NAME);
      String key = "/1783c05a/p1";
      String data = "This is test data at key " + key;

      S3Object object = new S3Object(key, data);
      object.addMetadata("color", "green");
      for (Iterator iterator = object.getMetadataMap().keySet().iterator(); iterator.hasNext();) {
          String type = (String) iterator.next();
          System.out.println(type + "==" + object.getMetadataMap().get(type));
      }

      s3Service.putObject(bucket, object);

      S3Object retreivedObject = s3Service.getObject(bucket, key);
      for (Iterator iterator = object.getMetadataMap().keySet().iterator(); iterator.hasNext();) {
          String type = (String) iterator.next();
          System.out.println(type + "==" + object.getMetadataMap().get(type));
      }

    Here's the output before putObject:

      Content-Length==37
      color==green
      Content-MD5==AOdkk23V6k+rLEV03171UA==
      Content-Type==text/plain; charset=utf-8
      md5-hash==00e764936dd5ea4fab2c4574df5ef550

    Here's the output after putObject/getObject:

      Content-Length==37
      ETag=="00e764936dd5ea4fab2c4574df5ef550"
      request-id==9ED1633672C0BAE9
      Date==Wed Mar 24 09:51:44 CDT 2010
      Content-MD5==AOdkk23V6k+rLEV03171UA==
      Content-Type==text/plain; charset=utf-8
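
    For comparison only (not the JetS3t answer), here is a hedged Python/boto3 sketch of the same idea, writing user metadata with an object and reading it back after the round trip; the bucket and key names are made up:

      import boto3

      s3 = boto3.client("s3")

      s3.put_object(Bucket="my-bucket", Key="1783c05a/p1",
                    Body=b"This is test data",
                    Metadata={"color": "green"})          # stored as x-amz-meta-color

      head = s3.head_object(Bucket="my-bucket", Key="1783c05a/p1")
      print(head["Metadata"])                             # {'color': 'green'}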

    Read the article

  • Getting ID of an instance newly launched with ec2-api-tools

    - by Jonik
    I'm launching an EC2 instance by invoking ec2-run-instances from a simple bash script, and I want to perform further operations on that instance (e.g. associate an elastic IP), for which I need the instance ID. The command is something like:

      ec2-run-instances ami-dd8ea5a9 -K pk.pem -C cert.pem --region eu-west-1 -t c1.medium -n 1

    and its output:

      RESERVATION r-b6ea58c1 696664755663 default
      INSTANCE i-945af9e3 ami-dd8ea5b9 pending 0 c1.medium 2010-04-15T10:47:56+0000 eu-west-1a aki-b02a01c4 ari-39c2e94d

    In this example, i-945af9e3 is the ID I'm after. So I need a simple way to parse the ID from what the command returns; how would you go about doing it? My AWK is a little rusty... Feel free to use any tool available on a typical Linux box. (If there's a way to get it directly using the EC2 API tools, all the better. But as far as I know there's no EC2 command to, e.g., return the ID of the most recently launched instance.)
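
    One way to do the parsing, as a rough sketch in Python rather than awk; it assumes the EC2 API tools are on the PATH and that the instance ID is the second column of the INSTANCE line, as in the output above:

      import shlex
      import subprocess

      cmd = ("ec2-run-instances ami-dd8ea5a9 -K pk.pem -C cert.pem "
             "--region eu-west-1 -t c1.medium -n 1")
      output = subprocess.check_output(shlex.split(cmd)).decode()

      instance_id = None
      for line in output.splitlines():
          fields = line.split()
          if fields and fields[0] == "INSTANCE":
              instance_id = fields[1]      # e.g. i-945af9e3
              break

      print(instance_id)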

    Read the article

  • Rails upload to s3 performance issue

    - by Denis
    Hello, I'm building an app to store files on my S3 account, using Rails 3.0.0beta. A lot of files can be uploaded at the same time, and the cost (from a performance point of view) of an upload is quite heavy, so my app will be busy handling uploads all the time! Maybe a solution is to upload directly to S3, but I still need a request back to my app, at least to store the file's name. I'm wondering what the best solution is?
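
    As a sketch of the "upload directly to S3" idea (in Python with boto3 rather than Ruby, and with a made-up bucket name): the app hands the browser a short-lived presigned POST, the browser uploads straight to S3, and the app only records the key afterwards.

      import boto3

      s3 = boto3.client("s3")

      # generate a form policy the browser can use to upload directly to S3
      post = s3.generate_presigned_post(
          Bucket="my-uploads-bucket",
          Key="uploads/${filename}",
          ExpiresIn=600,                      # valid for 10 minutes
      )
      print(post["url"])      # where the browser POSTs the file
      print(post["fields"])   # hidden form fields to include with the upload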

    Read the article

  • Cache front end for the JetS3t API

    - by Joshua
    Storage via the JetS3t REST API seems to be very slow. Is there a caching front end for the JetS3t API that avoids a network hit on the fetch calls, i.e. on S3Service.getObject? http://jets3t.s3.amazonaws.com/api/org/jets3t/service/S3Service.html#getObject(org.jets3t.service.model.S3Bucket, java.lang.String, java.util.Calendar, java.util.Calendar, java.lang.String[], java.lang.String[], java.lang.Long, java.lang.Long)
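
    In case it helps to roll one by hand, here is a language-agnostic sketch of the idea (written in Python, not JetS3t): a read-through cache keyed by bucket and key, so repeated fetches of the same object skip the network round trip. It ignores invalidation, which you would want to handle via ETags or a TTL.

      class ReadThroughCache:
          """Cache fetched objects in memory, falling back to a real fetch on a miss."""

          def __init__(self, fetch):
              self._fetch = fetch      # fetch(bucket, key) -> bytes; does the real network call
              self._store = {}

          def get(self, bucket, key):
              if (bucket, key) not in self._store:
                  self._store[(bucket, key)] = self._fetch(bucket, key)
              return self._store[(bucket, key)]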

    Read the article

  • Technology stack for very frequent gps data collection

    - by gvaswani
    I am working on a project that involves GPS data collection from many users (say 1000), every second, while they move. I am planning on using a dedicated database instance on EC2, with MySQL on persistent block storage (EBS), and running a Ruby on Rails application with an nginx front end. I haven't worked on such a data collection application before. Am I missing something here? I will have another instance which will act as the application server and use the data from the same EBS volume. If anybody has dealt with such a system before, any advice would be much appreciated.

    Read the article

  • Securing S3 via your own application

    - by Neil Middleton
    Imagine the following use case: you have a Basecamp-style application hosting files on S3. Accounts all have their own files, but they are stored on S3. How, therefore, would a developer go about securing files so users of account 1 couldn't somehow get to the files of account 2? We're talking Rails, if that's any help.
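
    One common pattern, sketched below in Python with boto3 rather than Rails and with made-up names: keep every object private, check in your own app that the signed-in user owns the file, and only then hand back a short-lived signed URL for that one object.

      import boto3

      s3 = boto3.client("s3")

      def download_url(current_account_id, file_record):
          # authorization happens in the application, not in S3
          if file_record["account_id"] != current_account_id:
              raise PermissionError("not your file")
          return s3.generate_presigned_url(
              "get_object",
              Params={"Bucket": "app-private-files", "Key": file_record["s3_key"]},
              ExpiresIn=300,        # link is only valid for 5 minutes
          )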

    Read the article

  • paperclip callbacks or simple processor?

    - by holden
    I wanted to run the after_post_process callback, but it doesn't seem to work in Rails 3.0.1 using Paperclip 2.3.8. It gives an error:

      undefined method `_post_process_callbacks' for #<Class:0x102d55ea0>

    I want to call the Panda API after the file has been uploaded. I would have created my own processor for this, but as Panda handles the processing, can upload the files as well, and queues itself for an undetermined duration, I thought a callback would do fine. But the callbacks don't seem to work in Rails 3.

      after_post_process :panda_create

      def panda_create
        video = Panda::Video.create(:source_url => mp3.url.gsub(/[?]\d*/,''),
                                    :profiles => "f4475446032025d7216226ad8987f8e9",
                                    :path_format => "blah/1234")
      end

    I tried require and include for Paperclip in my model, but it didn't seem to matter. Any ideas?

    Read the article

  • Django as S3 proxy

    - by schneck
    Hi there, I extended a ModelAdmin with a custom field "Download file", which is a link to a URL in my Django project, like: http://www.myproject.com/downloads/1 There, I want to serve a file which is stored in an S3 bucket. The files in the bucket are not publicly readable, and the user may not have direct access to them. Now I want to avoid the file having to be loaded into server memory (these are multi-GB files), and avoid temp files on the server. The ideal solution would be to let Django act as a proxy that streams S3 chunks directly to the user. I use boto, but did not find a way to stream the chunks. Any ideas? Thanks.
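
    A minimal sketch of the proxy idea, using boto3 and Django's StreamingHttpResponse (both newer than the boto version in the question); the bucket name and key scheme are placeholders, and the real key lookup would come from your own models:

      import boto3
      from django.http import StreamingHttpResponse

      s3 = boto3.client("s3")

      def download(request, pk):
          key = "downloads/%s" % pk                     # placeholder: resolve the real key here
          obj = s3.get_object(Bucket="my-private-bucket", Key=key)
          body = obj["Body"]                            # streaming body; never fully in memory
          response = StreamingHttpResponse(
              body.iter_chunks(chunk_size=8 * 1024 * 1024),
              content_type=obj.get("ContentType", "application/octet-stream"),
          )
          response["Content-Length"] = str(obj["ContentLength"])
          return response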

    Read the article

  • Distributed datastore

    - by Julien Genestoux
    We're trying to add some kind of persistence to our app. The app generates about 250 entries per second. Each of these entries belongs to one of 2M files. For each file, we want to keep the last 10 entries, so we can look them up later. The way our client application works is:

      it gets a stream of all the data
      it fetches the right file (GET)
      it adds the new content
      it saves the file back (PUT)

    We're looking for an efficient way to store this data that can scale horizontally, as the amount of data we're getting is doubling every few weeks. We initially looked at S3. It works fine, but becomes very expensive very fast ($1000 monthly just in PUT operations!). We then gave Riak a shot, but it seems we can't get more than 60 writes/sec on each node, which is very, very slow. Any other solution out there?
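
    For concreteness, here is a rough sketch (Python/boto3, made-up bucket and key names) of the per-file read-modify-write cycle described above, run against S3 and trimming each file to its 10 most recent entries; the cost problem comes from the PUT on every cycle, not from anything subtle in the code:

      import json

      import boto3
      from botocore.exceptions import ClientError

      s3 = boto3.client("s3")
      BUCKET = "entries-store"

      def append_entry(file_id, entry):
          key = "files/%s.json" % file_id
          try:
              body = s3.get_object(Bucket=BUCKET, Key=key)["Body"].read()
              entries = json.loads(body)
          except ClientError:                      # first write for this file
              entries = []
          entries = (entries + [entry])[-10:]      # keep only the 10 most recent entries
          s3.put_object(Bucket=BUCKET, Key=key, Body=json.dumps(entries).encode())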

    Read the article

  • How do you distinguish your EC2 instances?

    - by Erik
    The ec2-describe-instances command is not very helpful in distinguishing the instances. Are there command line tools that give a better overview? Perhaps somewhat like http://github.com/newbamboo/manec2 but with support for different regions etc.
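
    One approach, if scripting is acceptable: tag each instance with a Name and print a compact overview across regions. A rough sketch with boto3 (which did not exist when this was asked); the region list is just an example:

      import boto3

      for region in ("eu-west-1", "us-east-1"):
          ec2 = boto3.client("ec2", region_name=region)
          for reservation in ec2.describe_instances()["Reservations"]:
              for inst in reservation["Instances"]:
                  tags = {t["Key"]: t["Value"] for t in inst.get("Tags", [])}
                  print(region, inst["InstanceId"], inst["InstanceType"],
                        inst["State"]["Name"], tags.get("Name", "-"))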

    Read the article

  • Deployment on EC2 using poolparty and chef server

    - by Pravin
    Hi, has anyone done a Rails application deployment on EC2 using the PoolParty gems and Chef Server (not Chef Solo)? Please share your experiences if you know of any blogs or code links (other than poolpartyrb.com and sites related to it). The PoolParty script must be able to launch a selected AMI instance with two EBS volumes (data and DB), use one elastic IP, fetch the code repo, and install Chef Server on the selected instance. Or, if you have used Chef Server for a Rails deployment, please share your experience. Thanks, Pravin

    Read the article

  • Using a "local" S3 emulation layer as a replacement for HDFS?

    - by user183394
    I have been testing out the most recent Cloudera CDH4 hadoop-conf-pseudo (i.e. MRv2 or YARN) on a notebook which has 4 cores, 8 GB RAM, and an Intel X25-M G2 SSD, and which runs an S3 emulation layer my colleagues and I wrote in C++. The OS is Ubuntu 12.04 LTS 64-bit. So far so good. Looking at "Setting up Hadoop to use S3 as a replacement for HDFS", I would like to do the same on my notebook. Nevertheless, I can't find where I can change jets3t.properties to set the endpoint to localhost. I downloaded hadoop-2.0.1-alpha.tar.gz and searched the source without finding a clue. There is a similar question on SO, "Using s3 as fs.default.name or HDFS?", but I want to use our own lightweight and fast S3 emulation layer, instead of AWS S3, for our experiments. I would appreciate a hint as to how I can change the endpoint to a different hostname. Regards, --Zack

    Read the article

  • Serving files over HTTPS dynamically based on request.ssl? with Attachment_fu

    - by Marston A.
    I see there is a :use_ssl option in attachment_fu which checks the amazon_s3.yml file in order to serve files via https://. In s3_backend.rb you have this method:

      def self.protocol
        @protocol ||= s3_config[:use_ssl] ? 'https://' : 'http://'
      end

    But this then makes it serve ALL S3 attachments with SSL. I'd like to make it dynamic, depending on whether the current request was made with https://, i.e.:

      if request.ssl?
        @protocol = "https://"
      else
        @protocol = "http://"
      end

    How can I make it work this way? I've tried modifying the method, but then I get the error:

      NameError: undefined local variable or method `request' for Technoweenie::AttachmentFu::Backends::S3Backend:Module

    Read the article

  • How to scale MongoDB

    - by terence410
    I know that MongoDB can scale vertically, but what if I run out of disk? I am currently using EC2 with EBS. As you know, EBS volumes have to be provisioned at a fixed size. What if MongoDB grows bigger than the EBS volume? Do I have to create a larger EBS volume and copy the files over? Or should we start more MongoDB instances, each connected to a different EBS disk? In that case, I could connect to a different instance for different databases.

    Read the article

  • nginx error: (99: Cannot assign requested address)

    - by k-g-f
    I am running Ubuntu Hardy 8.04 and nginx 0.7.65, and when I try starting my nginx server with:

      $ sudo /etc/init.d/nginx start

    I get the following error:

      Starting nginx: [emerg]: bind() to IP failed (99: Cannot assign requested address)

    where "IP" is a placeholder for my IP address. Does anybody know why that error might be happening? This is running on EC2. My nginx.conf file looks like this:

      user www-data www-data;
      worker_processes 4;

      events {
          worker_connections 1024;
      }

      http {
          include mime.types;
          default_type application/octet-stream;
          access_log /usr/local/nginx/logs/access.log;
          sendfile on;
          tcp_nopush on;
          tcp_nodelay on;
          keepalive_timeout 3;
          gzip on;
          gzip_comp_level 2;
          gzip_proxied any;
          gzip_types text/plain text/css application/x-javascript text/xml application/xml application/xml+rss text/javascript;
          include /usr/local/nginx/sites-enabled/*;
      }

    and my /usr/local/nginx/sites-enabled/example.com looks like:

      server {
          listen IP:80;
          server_name example.com;
          rewrite ^/(.*) https://example.com/$1 permanent;
      }

      server {
          listen IP:443 default ssl;
          ssl on;
          ssl_certificate /etc/ssl/certs/myssl.crt;
          ssl_certificate_key /etc/ssl/private/myssl.key;
          ssl_protocols SSLv3 TLSv1;
          ssl_ciphers ALL:!ADH:RC4+RSA:+HIGH:+MEDIUM:-LOW:-SSLv2:-EXP;
          server_name example.com;
          access_log /home/example/example.com/log/access.log;
          error_log /home/example/example.com/log/error.log;
      }

    Read the article

  • How to load secure S3 images into Flex with temporary URLs

    - by Yarin
    I have some secure images on S3 that I need to load into Flex. I was expecting to be able to do this using signed temporary URLs, but can't get it working. I know the URLs I'm generating are correct, because they load fine in my browser's address bar. Moreover, Flex has no problem loading my images with a non-signed URL when they are public, but as soon as I try signing the URLs all the images fail, whether public or not. I've tried image.source = signedURL, image.load(signedURL), etc. If I try loading the file with URLLoader/URLStream, it looks like I'm getting the data OK, but I'm not sure how to translate those results to an Image control. Is this just an issue with the Image control not being able to recognize signed URLs? Do I have to load the image from a byte array? What would that look like?

    Read the article
