Search Results

Search found 3025 results on 121 pages for 'amazon ec2'.


  • Is there a way to send personal documents to the Kindle for Mac app?

    - by Sid
    I have the Kindle app on my Mac and an Android phone. When I email documents to my [email protected] address, I am able to see them in my library and subsequently send them to my Android device. However, I'm not able to send them to the Kindle app for my Mac. The Kindle for Mac FAQs clearly state that magazines, personal documents, etc. are not supported. However, I came across a mention here that there is a workaround for this, although I've not been able to figure out what it is.

    Read the article

  • Amazon Careers website - are resumes processed in plain text format only?

    - by sapphiremirage
    The submission site has the following options: "Please upload your resume (Word Document, max size: 512 KB) OR Please copy and paste the text version of your file here", with a text box below the latter option. I went ahead and uploaded my shiny LaTeX resume (as a PDF), despite the fact that they seem to want a Word document, and there didn't seem to be any issues. However, when I went back to edit my profile, there was no evidence that my PDF had been uploaded, other than a text version of my resume, awfully formatted and clearly stripped from the PDF, sitting in the text box below "Please copy and paste the text version of your file here". Exasperated, I did a quick and dirty copy of the text from my resume into a Word doc and uploaded that. Same result: no evidence of a file uploaded, just a stripped text version in the text box. What I'm wondering now is, are they only going to look at the text version of my resume? If that's the case then I'm obviously going to edit it so that it looks halfway decent and doesn't contain such atrocities from the conversion as "Other Skills: LTEX". I can prettify plain-text files without too much effort, so this isn't that big of a deal. However, my LaTeX resume is going to look better than anything I can do in plain text, so if the site is actually keeping a copy of that, then I certainly don't want to overwrite it. Has anyone here either gone through the Amazon hiring process or interviewed candidates and knows how this works? (i.e., when on site with Amazon, did the interviewers have diversely formatted resumes, or did they all look suspiciously similar?)

    Read the article

  • How do I automatically start Clamz with AMZ files for Amazon MP3 downloads?

    - by Takkat
    Chromium can open downloaded files with the default application (e.g. a PDF in Evince). In my setup a downloaded .amz file (for Amazon MP3) always opened with Gedit. However, I would like all downloaded .amz files to automatically open with Clamz, a command-line download tool that works like a charm. As my .amz files were associated with Gedit in Nautilus too, I thought it was a good idea to add a clamz.desktop file in ~/.local/share/applications (according to this answer):

      [Desktop Entry]
      Encoding=UTF-8
      Name=Clamz
      Comment=Open AMZ files for Amazon MP3 download
      Exec=/usr/bin/clamz %u
      Terminal=True
      Type=Application
      Icon=
      Categories=Application;
      StartupNotify=true
      MimeType=audio/x-amzxml;
      NoDisplay=true

    This lets me choose Clamz as the default application in Nautilus. But when opening an .amz file in Nautilus it still does not open with Clamz as expected; it is treated as an executable text file instead (note that the executable bit is not set!). Is there any other way to make Chromium or Nautilus always open an .amz file with Clamz? Did I miss a setting somewhere else?

    Read the article

  • Openfire scalability question (XMPP server)

    - by candoyo
    Hello! Do you guys know how well Openfire scales? My users will be using the application for normal chatting like MSN; no file transfer for now. We will be using Amazon's EC2 to run the chat server, and we would like to support over 1 million users in total and around 30-50K active users during peak times. Since clustering is now open source, I thought Openfire might be the way to go. How much will the Coherence license cost, or can I bypass that somehow? Also, I want to develop a plugin for Openfire if we go with it. Any pointers on how to set up a dev environment and get going would be helpful too! Thanks, y'all! :)

    Read the article

  • How to scale MongoDB

    - by terence410
    I know that MongoDB can scale vertically, but what about when I run out of disk? I am currently using EC2 with EBS. As you know, an EBS volume has to be assigned a fixed size. What if MongoDB grows bigger than the EBS volume? Do I have to create a larger EBS volume and copy the files over? Or should we start more MongoDB instances, each connected to a different EBS disk? In that case, I could connect to a different instance for different databases.
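    A tiny sketch of that last idea, assuming the pymongo driver (the hostnames and database names below are placeholders, not from the question): the application holds one connection per MongoDB instance, each backed by its own EBS volume, and picks the right one per database.

      from pymongo import Connection  # pymongo 1.x/2.x API

      # One instance (and EBS volume) per logical database -- hypothetical hosts.
      events_db = Connection('mongo-events.internal', 27017)['events']
      users_db = Connection('mongo-users.internal', 27017)['users']

      users_db.accounts.insert({'name': 'example'})
      events_db.clicks.insert({'path': '/home'})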

    Read the article

  • Free service that allows storing game data online?

    - by StackedCrooked
    I have created a small game in Java and I would like to add the ability for a player to publish his high scores online. I'm willing to write the server software myself (it's easy these days with Ruby Mongrel, or even C++). All I need is some sort of hosting. One solution that immediately comes to mind is Amazon EC2, but that's kind of expensive for my needs. Since the requirements are very minimal (I don't even need a website, just a web service) I think there may be a cheaper solution out there. Does anyone know of a free or cheap provider for this kind of thing?

    Read the article

  • Problem with openssl_get_privatekey returning false

    - by Joe Corkery
    I am trying to generate a signed URL for Amazon's CloudFront service but am running into problems: the openssl_get_privatekey function appears to be returning false and I can't quite figure out why. Here is the code (PHP) that I am using:

      $priv_key = file_get_contents(path_to_my_pem_file);
      $priv_keyid = openssl_get_privatekey($privkey);

    Unfortunately, every time I try this openssl_get_privatekey fails silently and I run into errors when I try to sign with openssl_sign later on. I've tried printing out the contents of $priv_key after it has been read in and it appears to be correct. I'm running this on RHEL 5.4 using PHP 5.2.13. I've confirmed that the pem file is readable and I've also run dos2unix on it just in case (it didn't work before or after). Any thoughts would be greatly appreciated as I am relatively new to both PHP and OpenSSL.

    Read the article

  • 404 redirect with cloud storage

    - by Jeremy DeGroot
    I'm hoping to reach someone with some experience using a service like Amazon's S3 with this question. On my site we have a dedicated image server, and on this server we have an automatic 404 redirect through Apache so that, if a user tries to access an image that doesn't exist, they'll see a snazzy "Image Not Available" image. We're looking to move the hosting of these images to a cloud storage solution (S3 or Rackspace's CloudFiles), and I'm wondering if anyone has had any success replicating this behavior on a cloud storage service and, if so, how they did it.
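    For what it's worth, S3's website hosting feature can reproduce the Apache behaviour with an error document. A minimal boto sketch (the library and bucket/key names are assumptions, not from the question); note this only applies when the bucket is served through its website endpoint:

      import boto

      conn = boto.connect_s3()
      bucket = conn.get_bucket('images.example.com')  # hypothetical bucket
      # Requests for missing keys get this object back with a 404 status,
      # much like Apache's ErrorDocument directive.
      bucket.configure_website(suffix='index.html',
                               error_key='image-not-available.jpg')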

    Read the article

  • What can I use for voluntary donations? (like Tipjoy)

    - by Ken
    There used to be a site called Tipjoy that would let me put a small "donate" button on a webpage, and users could donate small amounts (like 25c) to me easily. I think it was a pretty neat idea, since I want to have a way for people to give me money, I don't like advertisements, and I don't update regularly enough to sell subscriptions like bloggers do. I just have some simple web services and an open-source program, and I want an easy way for people to drop me some change if they think they're useful. I've found out that Amazon used to have a similar service, but it's also been shut down. Is there any similar web service available today? If not, what's the closest alternative: a PayPal link?

    Read the article

  • s3 / php script looping (strace)

    - by Neil
    Anyone using the following PHP S3 client library? http://undesigned.org.za/2007/10/22/amazon-s3-php-class It had been working fine for me for a few days, but I just noticed that a script I have in place now ends up hanging. Running this through strace, I see something like:

      poll([{fd=4, events=POLLOUT}], 1, 1000) = 1 ([{fd=4, revents=POLLHUP}])
      poll([{fd=4, events=POLLOUT}], 1, 0) = 1 ([{fd=4, revents=POLLHUP}])
      poll([{fd=4, events=POLLOUT}], 1, 1000) = 1 ([{fd=4, revents=POLLHUP}])
      poll([{fd=4, events=POLLOUT}], 1, 0) = 1 ([{fd=4, revents=POLLHUP}])
      poll([{fd=4, events=POLLOUT}], 1, 1000) = 1 ([{fd=4, revents=POLLHUP}])

    Looking at what's running, I see that it's not even getting to the point where it makes the curl call. Any thoughts? Thanks!

    Read the article

  • Using PIG with Hadoop, how do I regex match parts of text with an unknown number of groups?

    - by lmonson
    I'm using Amazon's Elastic MapReduce. I have log files that look something like this:

      random text foo="1" more random text foo="2" more text
      noise foo="1" blah blah blah
      foo="1" blah blah foo="3" blah blah foo="4"
      ...

    How can I write a Pig expression to pick out all the numbers in the 'foo' expressions? I'd prefer tuples that look something like this:

      (1,2)
      (1)
      (1,3,4)

    I've tried the following:

      TUPLES = foreach LINES generate FLATTEN(EXTRACT(line, 'foo="([0-9]+)"'));

    But this yields only the first match in each line:

      (1)
      (1)
      (1)
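    The single-match behaviour above is the same distinction Python draws between re.search and re.findall; a small illustration of the semantics being asked for (in Pig this usually means a findall-style UDF rather than a single EXTRACT call):

      import re

      line = 'foo="1" blah blah foo="3" blah blah foo="4"'
      # A single group extraction stops at the first occurrence on the line...
      print(re.search(r'foo="([0-9]+)"', line).group(1))  # 1
      # ...while findall returns every occurrence, i.e. the desired (1,3,4).
      print(re.findall(r'foo="([0-9]+)"', line))          # ['1', '3', '4']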

    Read the article

  • PIG doesn't read my custom InputFormat

    - by Simon Guo
    I have a custom MyInputFormat that is supposed to deal with the record boundary problem for multi-line inputs. But when I put MyInputFormat into my UDF load function, as follows:

      public class EccUDFLogLoader extends LoadFunc {
          @Override
          public InputFormat getInputFormat() {
              System.out.println("I am in getInputFormat function");
              return new MyInputFormat();
          }
      }

      public class MyInputFormat extends TextInputFormat {
          public RecordReader createRecordReader(InputSplit inputSplit, JobConf jobConf) throws IOException {
              System.out.println("I am in createRecordReader");
              // MyRecordReader is supposed to handle the record boundary
              return new MyRecordReader((FileSplit)inputSplit, jobConf);
          }
      }

    each mapper prints out "I am in getInputFormat function" but not "I am in createRecordReader". I am wondering if anyone can provide a hint on how to hook up my custom MyInputFormat to Pig's UDF loader? Many thanks. I am using Pig on Amazon EMR.

    Read the article

  • AWS SES for bulk mail: require email verification?

    - by weotch
    We're thinking of moving to Amazon's SES for sending bulk mail. It appears that we make a separate API call for each email we want to send, so if there are 20k emails to send, we make 20k API calls. My question is, do we need to verify these email addresses before we send to them? We have an existing database of users and I'd rather the transition to SES be transparent to them. I noticed that SES has an API method for verifying emails. If we aren't required to verify, why would someone use this method?
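    For context, SES only requires recipient addresses to be verified while an account is in sandbox mode; with production access, only the sender addresses you send from need verification. A minimal sending sketch with boto (an assumption, since the question names no client library; the addresses are placeholders):

      import boto

      ses = boto.connect_ses()
      # One-time verification of the From: address (in the sandbox the To:
      # address would also need to be verified).
      ses.verify_email_address('sender@example.com')
      ses.send_email(source='sender@example.com',
                     subject='Hello',
                     body='Plain-text body',
                     to_addresses=['user@example.com'])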

    Read the article

  • What is the cheapest non-colocation way to serve about 10 static files at a rate of 100 megabits per second?

    - by Mark Maunder
    I've looked at Amazon S3 and it costs roughly $4746 per month for 100 megabits/s (which translates into 31,640 gigabytes of data transferred, at a rate of $0.15 per GB). I haven't found a cheaper "cloud" option. I'm curious if there's any other cloud hosting option out there cheaper than S3. Uptime is not an issue because I can build failover for most things into the browser, e.g. I can use JavaScript to say "if the image didn't load then go to this other URL instead." FYI, I'm currently using a colocation facility which is about 30% cheaper than S3, and I'm familiar with colo prices, so this question is really about "cloud" services, by which I mean services where I don't have to worry about the infrastructure.
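    The quoted figure checks out as back-of-envelope arithmetic (assuming a 30-day month and the $0.15/GB transfer rate mentioned above):

      mbits_per_sec = 100
      seconds_per_month = 30 * 24 * 60 * 60                        # 2,592,000 s
      gigabytes = mbits_per_sec / 8.0 * seconds_per_month / 1024   # MB/s -> MB -> GB
      print(round(gigabytes))           # ~31,641 GB transferred per month
      print(round(gigabytes * 0.15))    # ~$4,746 per month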

    Read the article

  • Database storage for high sample rate data in web app

    - by Jim
    I've got multiple sensors feeding data to my web app. Each channel is 5 samples per second and the data gets uploaded bundled together in 1-minute JSON messages (containing 300 samples). The data will be graphed using flot at multiple zoom levels, from 1 day down to 1 minute. I'm using Amazon SimpleDB and I'm currently storing the data in the 1-minute chunks that I receive it in. This works well for high zoom levels, but for full days there will simply be too many rows to retrieve. The idea I currently have is that every hour I can crawl through the data, collect together 300 samples for the last hour, and store them in an hourly domain (table, if you like). Does this sound like a reasonable solution? How have others implemented the same sort of systems?
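    A minimal sketch of that hourly roll-up, assuming boto for SimpleDB access; the domain names, attribute layout, and downsampling step are illustrative rather than taken from the question:

      import json
      import boto

      sdb = boto.connect_sdb()
      minute_dom = sdb.get_domain('sensor_minutes')  # hypothetical domains
      hour_dom = sdb.get_domain('sensor_hours')

      def rollup_hour(sensor_id, hour_start, hour_end):
          query = ("select * from `sensor_minutes` where sensor_id = '%s' "
                   "and ts >= '%s' and ts < '%s'" % (sensor_id, hour_start, hour_end))
          samples = []
          for item in minute_dom.select(query):
              samples.extend(json.loads(item['samples']))  # 300 samples per minute row
          # Keep ~300 evenly spaced samples for the whole hour, so a full-day
          # graph touches 24 hourly rows instead of 1,440 minute rows.
          step = max(1, len(samples) // 300)
          hour_dom.put_attributes('%s_%s' % (sensor_id, hour_start),
                                  {'sensor_id': sensor_id,
                                   'hour': hour_start,
                                   'samples': json.dumps(samples[::step][:300])})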

    Read the article

  • Serving gzipped files from S3 using the Asset Pipeline

    - by kmurph79
    I have a Rails 3.2.3 app on Heroku and I'm using the asset_sync gem to serve my assets from S3, via these instructions. It works great, except S3 is not serving up the gzipped CSS/JS files (just the uncompressed versions). I've enabled gzip compression, to no avail: config.gzip_compression = true. According to "Using GZIP with html pages served from Amazon S3" I need to add metadata to the S3 object when uploading. How would I do this in concert with the asset pipeline? Thank you for any help.
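    The metadata in question is a Content-Encoding header stored on the S3 object. Not the asset_sync configuration itself, but a boto sketch (an assumption; the bucket and key names are placeholders) of what the uploaded object needs to carry so browsers will decompress it:

      import boto

      conn = boto.connect_s3()
      bucket = conn.get_bucket('my-assets-bucket')  # hypothetical bucket
      key = bucket.new_key('assets/application.css.gz')
      # Without Content-Encoding: gzip the browser receives compressed bytes
      # and never inflates them, so the stylesheet appears broken.
      key.set_contents_from_filename('public/assets/application.css.gz',
                                     headers={'Content-Type': 'text/css',
                                              'Content-Encoding': 'gzip'},
                                     policy='public-read')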

    Read the article

  • Why wouldn't an S3 ACL "stick"?

    - by Chris Phillips
    We would like to set an ACL to allow access to one of our buckets with a partner account. We've tested the process on a test account and everything works fine. On our production account/buckets, however, we can set the ACL and see the update but as soon as we attempt to access the bucket from the other account we get a forbidden response. Afterwards, when we look at the ACL list for the bucket, the permission is gone. We've tried using both Amazon's new S3 tool in the AWS Management Console and CloudBerry Explorer and both tools exhibit exactly the same behavior. Using the same process to update an ACL from our test account works as expected ( the ACL update "sticks" ). What would cause the ACL to not "stick"? Does anyone have any ideas on how to fix/workaround the problem?
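    A boto sketch (an assumption; the bucket name and canonical user id are placeholders) that grants the partner account READ on the bucket and immediately re-reads the ACL, which at least shows whether the grant survives the round trip or is being rewritten afterwards by another tool:

      import boto

      conn = boto.connect_s3()
      bucket = conn.get_bucket('production-bucket')  # hypothetical name
      # Grant bucket-level READ to the partner's AWS canonical user id.
      bucket.add_user_grant('READ', 'partner-canonical-user-id')
      # Re-fetch the ACL: if the grant is already gone here the PUT failed;
      # if it is present now but gone later, something else is overwriting it.
      print(bucket.get_acl().to_xml())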

    Read the article

  • CSS not displayed depending on page

    - by Kanjiroushi
    I have a friend who has a really strange issue with my website. When he clicks on http://www.copeo.fr/ the page displays fine, but when he clicks on a link like www.copeo.fr/user/ the CSS is not applied, even after a refresh; the raw HTML does display. I asked him to open the CSS, which is hosted on Amazon S3 at hcopeoressources.s3.amazonaws.com/style/futurvert/style.css, and it displays fine. The page validates on the W3C validator and so does the CSS. I am at a loss as to the origin of the issue. Could it be his company's cache? The configuration of IE7 on his machine? If this has happened to someone else who could explain the issue to me, I am all ears. Thanks

    Read the article

  • How to host a naked domain on a CDN?

    - by rjw79
    If I have a domain that I wish to serve "naked", e.g. http://examp.le/, and efficiently with a CDN, what are my options? The issue is that the CDNs I looked at all want you to use a CNAME so that they can do geo-IP lookup. CNAMEs are not meant to be served at the same level as other records, and this apparently breaks some DNS resolvers; you at least need SOA and MX records at the same level for a naked domain. The only solutions are: having A records in your own DNS, thus skipping the geo-IP, or finding a CDN who will allow delegation of the whole domain so they can do geo-IP things for the A record directly. I've tried googling and can't find any CDN that offers this. Any ideas? I looked closely at Amazon CloudFront and Rackspace Cloud Files; I couldn't work it out for those.

    Read the article

  • Ruby: Streaming large AWS S3 object freezes

    - by Peter
    Hi, I am using the Ruby aws/s3 library to retrieve files from Amazon S3. I stream an object and write it to a file as per the documentation (with debug output every 100 chunks to confirm progress). This works for small files, but it randomly freezes when downloading large (150MB) files on an Ubuntu VPS. Fetching the same files (150MB) from my Mac on a much slower connection works just fine. When it hangs there is no error thrown, and the last line of debug output is the 'Finished chunk' line. I've seen it write between 100 and 10,000 chunks before freezing. Has anyone come across this or have ideas on what the cause might be? Thanks. The code that hangs:

      i=1
      open(local_file, 'w') do |f|
        AWS::S3::S3Object.value(key, @s3_bucket) do |chunk|
          puts("Writing chunk #{i}")
          f.write chunk.read_body
          puts("Finished chunk #{i}")
          i=i+1
        end
      end

    Read the article

  • What payment gateways do real customers really use when given the choice?

    - by ??????
    I would like to give customers the option of paying however they can whether that be through a proper gateway (e.g. SagePay) or through something else such as PayPal, Amazon Checkout or Google Checkout. Personally I have not bought anything through the Amazon Checkout except for on Amazon.co.uk and my PayPal buys have been limited. As for Google Checkout I have no idea what that is or how it works from a consumer perspective. I understand that people buying from smaller sites are happier to pay by PayPal as they have an account already and trust PayPal. As for Amazon Payments and Google Checkout, do people actually use them if given the choice? There are a lot of people on Kindles these days, happy to buy stuff via Amazon on their Kindle. Would Amazon Payments make sense to this growing crowd? With too many payment gateways on offer it might be confusing at the checkout. Does anyone know if this is a problem for genuine customers? I also have not seen many 'pay by Amazon Payments' icons on websites (you see PayPal all the time). Does advertising the fact that you can pay by Amazon Payments increase sales, e.g. to Kindle owners that have a nebulous book-buying account that 'their other half doesn't know about'?

    Read the article

  • Loading eZ Components and the AWS PHP SDK together makes eZ Components freak out

    - by David
    Hi, I am trying to work with eZ Components and the AWS PHP SDK at the same time. I have a file called resize.php which just handles resizing images using the eZ Components ImageTransition tools. I queue the images for resizing in Amazon SQS. If I load the AWS PHP SDK and eZ Components in the same file, PHP always complains about not finding the eZ Components classes. The code looks something like this:

    amazonSQS.php:

      require 'modules/resize.php';
      require 'modules/aws/sdk.class.php';
      $sqs = new AmazonSQS();
      $response = $sqs->send_message($queue_url, $message);

    resize.php:

      function resize_image($filename) {
          $settings = new ezcImageConverterSettings(
              array(
                  //new ezcImageHandlerSettings( 'GD', 'ezcImageGdHandler' ),
                  new ezcImageHandlerSettings( 'ImageMagick', 'ezcImageImagemagickHandler' ),
              )
          );

    Error message:

      Fatal error: Class 'ezcImageConverterSettings' not found in /home/www.com/public_html/modules/resize.php on line 10

    If I call resize.php from another PHP file which does not have the AWS SDK included, it works fine. I load eZ Components like this:

      require 'ezc/Base/ezc_bootstrap.php';

    It is installed as a PEAR package. Any ideas?

    Read the article

  • Heroku and Refinerycms: Application failed to start ~ attachment_fu problem

    - by John Deely
    OK, so I'm trying to get Refinery CMS working with Heroku, and I'm new at all of this. I've set up an Amazon S3 account and added the keys and IDs to the amazon_s3.yml files. When launched on Heroku at gart.heroku.com I get the following error:

      App failed to start
      /disk1/home/slugs/141557_e8490b3_d5eb/mnt/vendor/plugins/attachment_fu/lib/technoweenie/attachment_fu/backends/s3_backend.rb:187:in `read': No such file or directory - /disk1/home/slugs/141557_e8490b3_d5eb/mnt/config/amazon_s3.yml (Errno::ENOENT)
        from /disk1/home/slugs/141557_e8490b3_d5eb/mnt/vendor/plugins/attachment_fu/lib/technoweenie/attachment_fu/backends/s3_backend.rb:187:in `included'
        from /disk1/home/slugs/141557_e8490b3_d5eb/mnt/vendor/plugins/attachment_fu/lib/technoweenie/attachment_fu.rb:123:in `include'
        from /disk1/home/slugs/141557_e8490b3_d5eb/mnt/vendor/plugins/attachment_fu/lib/technoweenie/attachment_fu.rb:123:in `has_attachment'
        from /disk1/home/slugs/141557_e8490b3_d5eb/mnt/app/models/image.rb:13
        from /usr/local/lib/ruby/site_ruby/1.8/rubygems/custom_require.rb:31:in `gem_original_require'
        from /usr/local/lib/ruby/site_ruby/1.8/rubygems/custom_require.rb:31:in `require'
        from /usr/local/lib/ruby/gems/1.8/gems/activesupport-2.3.5/lib/active_support/dependencies.rb:158:in `require'
        from /usr/local/lib/ruby/gems/1.8/gems/activesupport-2.3.5/lib/active_support/dependencies.rb:265:in `require_or_load'
        ... 42 levels...
        from /usr/local/lib/ruby/gems/1.8/gems/rack-1.0.1/lib/rack/builder.rb:29:in `instance_eval'
        from /usr/local/lib/ruby/gems/1.8/gems/rack-1.0.1/lib/rack/builder.rb:29:in `initialize'
        from /home/heroku_rack/heroku.ru:1:in `new'
        from /home/heroku_rack/heroku.ru:1

    Line 187 of s3_backend.rb contains:

      @@s3_config = @@s3_config = YAML.load(ERB.new(File.read(@@s3_config_path)).result)[RAILS_ENV].symbolize_keys

    Any help would be great!

    Read the article

  • Knowing the selections made on a 'multichooser' box in a Mechanical Turk HIT (using Command Line Tools)

    - by gveda
    Hi all, I am new to Amazon Mechanical Turk and wanted to create a HIT with a qualification task. I am using the command line tools interface. One of the questions in my qualification task involves users selecting a number of options, so I use a 'multichooser' selection type. Now I want to grade the responses based on the selections, where each selection has a different score. So, for example, s1 has a score of 5, s2 of 10, s3 of 6, and so on. If the user selects s1 and s3, he/she gets a score of 11. Unfortunately, doing something like the following does not work:

      <AnswerOption>
        <SelectionIdentifier>s1</SelectionIdentifier>
        <AnswerScore>5</AnswerScore>
      </AnswerOption>
      <AnswerOption>
        <SelectionIdentifier>s2</SelectionIdentifier>
        <AnswerScore>10</AnswerScore>
      </AnswerOption>
      <AnswerOption>
        <SelectionIdentifier>s3</SelectionIdentifier>
        <AnswerScore>6</AnswerScore>
      </AnswerOption>

    If I do this, when I select multiple things, I get a score of 0. If I select only one option, say s1, then I get the appropriate score. Can you please help me figure out how to go about this? I could ask the same question 5 times with the same options, but then users might choose the same answer multiple times - something I wish to avoid. Thanks! Gaurav

    Read the article

  • S3 Backup Memory Usage in Python

    - by danpalmer
    I currently use WebFaction for my hosting, with the basic package that gives us 80MB of RAM. This is more than adequate for our needs at the moment, apart from our backups. We do our own backups to S3 once a day. The backup process is this: dump the database, tar.gz all the files into one backup named with the correct date of the backup, and upload it to S3 using the Python library provided by Amazon. Unfortunately, it appears (although I don't know this for certain) that either my code for reading the file or the S3 code is loading the entire file into memory. As the file is approximately 320MB (for today's backup), it is using about 320MB just for the backup. This causes WebFaction to quit all our processes, meaning the backup doesn't happen and our site goes down. So this is the question: is there any way to not load the whole file into memory, or are there any other Python S3 libraries that are much better with RAM usage? Ideally it needs to be about 60MB at the most! If this can't be done, how can I split the file and upload separate parts? Thanks for your help. This is the section of code (in my backup script) that caused the processes to be quit:

      filedata = open(filename, 'rb').read()
      content_type = mimetypes.guess_type(filename)[0]
      if not content_type:
          content_type = 'text/plain'
      print 'Uploading to S3...'
      response = connection.put(BUCKET_NAME, 'daily/%s' % filename,
                                S3.S3Object(filedata),
                                {'x-amz-acl': 'public-read', 'Content-Type': content_type})
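    One approach that keeps memory flat is S3's multipart upload, sketched here with boto (an assumption; it is a different library from the S3.py module used above). The archive is read and sent in fixed-size parts instead of a single .read():

      import math
      import os
      import boto

      conn = boto.connect_s3()
      bucket = conn.get_bucket(BUCKET_NAME)
      part_size = 10 * 1024 * 1024  # 10 MB per part keeps the footprint small
      total_size = os.path.getsize(filename)
      parts = int(math.ceil(total_size / float(part_size)))

      mp = bucket.initiate_multipart_upload('daily/%s' % filename,
                                            headers={'x-amz-acl': 'public-read'})
      with open(filename, 'rb') as fp:
          for part_num in range(1, parts + 1):
              # Each call reads and sends at most part_size bytes.
              bytes_left = total_size - fp.tell()
              mp.upload_part_from_file(fp, part_num=part_num,
                                       size=min(part_size, bytes_left))
      mp.complete_upload()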

    Read the article
