Search Results

Search found 2842 results on 114 pages for 'amazon route53'.

Page 28/114 | < Previous Page | 24 25 26 27 28 29 30 31 32 33 34 35  | Next Page >

  • Paperclip and Amazon S3 Issue

    - by Jimmy
    Hey everyone, I have a Rails app running on Heroku. I am using Paperclip for some simple image uploads (user avatars and a few other things), with S3 set as my backend, and everything seems to be working fine, except that when pushing to S3 I get the following error:

        The AWS Access Key Id you provided does not exist in our records.

    Thinking I had mis-pasted my access key and secret key, I tried again; still no luck. Thinking maybe it was just a buggy key, I deactivated it and generated a new one. Still no luck. For both keys I have used the S3 Browser app on OS X and have been able to connect to each, view my current buckets, and add/delete buckets. Is there something I should be looking out for? I have my application's S3 and Paperclip setup like so:

        development:
          bucket: (unique name)
          access_key_id: ENV['S3_KEY']
          secret_access_key: ENV['S3_SECRET']
        test:
          bucket: (unique name)
          access_key_id: ENV['S3_KEY']
          secret_access_key: ENV['S3_SECRET']
        production:
          bucket: (unique_name)
          access_key_id: ENV['S3_KEY']
          secret_access_key: ENV['S3_SECRET']

        has_attached_file :cover,
          :styles => { :thumb => "50x50" },
          :storage => :s3,
          :s3_credentials => "#{RAILS_ROOT}/config/s3.yml",
          :path => ":class/:id/:style/:filename"

    Note: I just added the (unique name) bits; those aren't actually there. I have also verified the bucket names, but I don't even think it is getting that far. I also have my Heroku environment vars set up correctly, and have them set up on dev.
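    A quick way to sanity-check the key pair outside of Rails and Paperclip entirely (a minimal sketch in Python with boto3; boto3 is an assumption, not from the post, and only the S3_KEY / S3_SECRET names mirror the setup above):

        # check_keys.py -- verify an AWS key pair independently of the app.
        # If list_buckets() succeeds, the keys themselves are valid and the
        # problem is likely in how the credentials file is being read (for
        # example, a YAML file is not ERB-processed unless something
        # explicitly evaluates it).
        import os
        import boto3

        s3 = boto3.client(
            "s3",
            aws_access_key_id=os.environ["S3_KEY"],
            aws_secret_access_key=os.environ["S3_SECRET"],
        )

        for bucket in s3.list_buckets()["Buckets"]:
            print(bucket["Name"])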

    Read the article

  • Amazon EC2 RSA key stopped authenticating - Permission denied (publickey)

    - by shedd
    Authenticating to our Ubuntu EC2 instance worked fine until a little while ago. All of a sudden, the key is being rejected. When we create a new instance with the keypair, we're able to connect to the instance perfectly, so it appears to be an issue with the existing instance. Port 22 is open. Any suggestions on what to look at from a configuration standpoint so we can fix this? Any thoughts on how we can get into the box? Here is the SSH debug output. Is there anything obviously amiss? Thanks so much!

        $ ssh -v -i ~/zzz.pem ubuntu@###.###.###.###
        OpenSSH_5.2p1, OpenSSL 0.9.8l 5 Nov 2009
        debug1: Reading configuration data /etc/ssh_config
        debug1: Connecting to ###.###.###.### [###.###.###.###] port 22.
        debug1: Connection established.
        debug1: identity file zzz.pem type -1
        debug1: Remote protocol version 2.0, remote software version OpenSSH_5.1p1 Debian-6ubuntu2
        debug1: match: OpenSSH_5.1p1 Debian-6ubuntu2 pat OpenSSH*
        debug1: Enabling compatibility mode for protocol 2.0
        debug1: Local version string SSH-2.0-OpenSSH_5.2
        debug1: SSH2_MSG_KEXINIT sent
        debug1: SSH2_MSG_KEXINIT received
        debug1: kex: server->client aes128-ctr hmac-md5 none
        debug1: kex: client->server aes128-ctr hmac-md5 none
        debug1: SSH2_MSG_KEX_DH_GEX_REQUEST(1024<1024<8192) sent
        debug1: expecting SSH2_MSG_KEX_DH_GEX_GROUP
        debug1: SSH2_MSG_KEX_DH_GEX_INIT sent
        debug1: expecting SSH2_MSG_KEX_DH_GEX_REPLY
        debug1: Host '###.###.###.###' is known and matches the RSA host key.
        debug1: Found key in /zzz/.ssh/known_hosts:18
        debug1: ssh_rsa_verify: signature correct
        debug1: SSH2_MSG_NEWKEYS sent
        debug1: expecting SSH2_MSG_NEWKEYS
        debug1: SSH2_MSG_NEWKEYS received
        debug1: SSH2_MSG_SERVICE_REQUEST sent
        debug1: SSH2_MSG_SERVICE_ACCEPT received
        debug1: Authentications that can continue: publickey
        debug1: Next authentication method: publickey
        debug1: Offering public key: /zzz/.ssh/id_rsa
        debug1: Authentications that can continue: publickey
        debug1: Offering public key: zzz.txt
        debug1: Authentications that can continue: publickey
        debug1: Trying private key: zzz.pem
        debug1: read PEM private key done: type RSA
        debug1: Authentications that can continue: publickey
        debug1: No more authentication methods to try.
        Permission denied (publickey).

    Read the article

  • Amazon ELB CNAME record not working

    - by BarthQuay
    Hi there, I have set up my EC2 infrastructure behind an ELB instance, and by using the ELB's DNS name everything works as expected. Now I wanted to forward a subdomain of my main project domain to the ELB's DNS name with a CNAME entry. I did this about 12 hours ago and it doesn't seem to work, and I don't know why. The subdomain just can't be resolved. This is the DNS entry, which my DNS provider processed without errors yesterday:

        @         IN A     111.111.111.111
        localhost IN A     127.0.0.1
        mail      IN A     111.111.111.111
        www       IN A     111.111.111.111
        ftp       IN CNAME www
        beta      IN CNAME myelbnamehere.eu-west-1.elb.amazonaws.com
        imap      IN CNAME www
        loopback  IN CNAME localhost
        pop       IN CNAME www
        relay     IN CNAME www
        smtp      IN CNAME www
        @         IN MX 10 mail

    Using nslookup, the main domain and all the other subdomains get looked up correctly, but beta.domain.com doesn't. I get "** server can't find beta.domain.com: NXDOMAIN". What am I doing wrong? Do I need to wait longer? When I use the ELB DNS name directly, everything works as expected. When I do an nslookup against my provider's DNS server, the CNAME gets resolved, but it looks like other DNS servers can't find the subdomain. Thanks in advance.
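    A quick way to see whether the record has propagated beyond the provider's own nameserver (a sketch in Python using dnspython; the domain and the resolver addresses are placeholders, not from the post):

        # check_cname.py -- compare answers from the provider's nameserver
        # and a public recursive resolver. Assumes dnspython >= 2.0.
        import dns.resolver

        RESOLVERS = {
            "provider": "203.0.113.53",   # your DNS provider's nameserver
            "public": "8.8.8.8",          # a public recursive resolver
        }

        for label, ip in RESOLVERS.items():
            resolver = dns.resolver.Resolver(configure=False)
            resolver.nameservers = [ip]
            try:
                answer = resolver.resolve("beta.example.com", "CNAME")
                print(label, [r.target.to_text() for r in answer])
            except dns.resolver.NXDOMAIN:
                print(label, "NXDOMAIN")

    If the provider's server answers but public resolvers return NXDOMAIN for hours, the change may not have been published to the zone's authoritative servers at all, rather than just being slow to propagate.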

    Read the article

  • Best practices for building a simple, scalable cluster on Amazon EC2 for a Java web app

    - by Alex B
    I want to build a Java web app and deploy it on EC2. It will be written in Java and will use MySQL. I was hoping to get some pointers on the actual deployment process and configuration. In particular, I'm interested in the following topics:

      - machine images (DIY vs. ready-made)
      - MySQL replication and backup to S3
      - ways of deploying and redeploying the app to EC2 without interruptions
      - firewalls?
      - load balancing and auto scaling
      - cloudtools (or alternative tools)

    Read the article

  • Amazon S3 Change file download name

    - by Daveo
    I have files stored on S3 with a GUID as the key name, and I am using a pre-signed URL to download them, as per the S3 REST API. I store the original file name in my own database. When a user clicks to download a file from my web application, I want to return their original file name, but currently all they get is a GUID. How can I achieve this? My web app is in Salesforce, so I do not have much control: doing response.redirects, or downloading the file to the web server and then renaming it, are out due to governor limitations. Is there some HTML redirect, meta refresh, or JavaScript I can use? Is there some way to change the download file name for S3? (The only thing I can think of is copying the object to a new name, downloading it, then deleting it.) I want to avoid creating a bucket per user, as we will have a lot of users and there is still no guarantee that each file within each bucket will have a unique name. Any other solutions?
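    For what it's worth, S3 pre-signed GET URLs can carry a response-content-disposition override that renames the download without touching the stored object. A minimal sketch in Python with boto3 (boto3, the bucket, and the filename are assumptions, not from the post):

        # presign_with_filename.py -- generate a pre-signed GET URL that asks
        # S3 to send a Content-Disposition header with the original filename.
        # Assumes boto3 is installed and credentials are configured.
        import boto3

        s3 = boto3.client("s3")
        url = s3.generate_presigned_url(
            "get_object",
            Params={
                "Bucket": "my-bucket",                    # placeholder
                "Key": "9f1c2e34-guid",                   # the GUID key
                "ResponseContentDisposition": 'attachment; filename="report.pdf"',
            },
            ExpiresIn=300,  # URL valid for 5 minutes
        )
        print(url)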

    Read the article

  • Amazon S3 File Uploads

    - by Abdul Latif
    I can upload files from a form using POST, but I am trying to find out how to add extra fields to the form, e.g. file description, type, etc. Also, can I upload more than one file at once? I know the documentation says you can't using POST, but are there any workarounds? Thanks in advance.
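    For the extra-fields part: browser POST uploads can include custom x-amz-meta-* form fields, as long as the signed policy covers them. A sketch using boto3's presigned-POST helper (boto3, the bucket, and the field values are assumptions); each POST still carries exactly one file, so multiple files mean multiple form submissions:

        # presigned_post.py -- build a browser POST policy that carries extra
        # metadata fields. Assumes boto3 is installed.
        import boto3

        s3 = boto3.client("s3")
        post = s3.generate_presigned_post(
            Bucket="my-bucket",                              # placeholder
            Key="uploads/${filename}",
            Fields={
                "x-amz-meta-description": "holiday photo",   # extra form field
                "x-amz-meta-type": "image",
            },
            Conditions=[
                # The policy must mention every extra field it allows.
                ["starts-with", "$x-amz-meta-description", ""],
                ["starts-with", "$x-amz-meta-type", ""],
            ],
            ExpiresIn=3600,
        )
        print(post["url"], post["fields"])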

    Read the article

  • Cannot install Apache Web Server on Ubuntu, Amazon WS

    - by Eugene Retunsky
    I enter the command apt-get install apache2 --fix-missing (as the root user) and this is what I receive:

        Reading package lists... Done
        Building dependency tree
        Reading state information... Done
        The following extra packages will be installed:
          apache2-mpm-worker apache2-utils apache2.2-bin apache2.2-common libapr1
          libaprutil1 libaprutil1-dbd-sqlite3 libaprutil1-ldap ssl-cert
        Suggested packages:
          apache2-doc apache2-suexec apache2-suexec-custom openssl-blacklist
        The following NEW packages will be installed:
          apache2 apache2-mpm-worker apache2-utils apache2.2-bin apache2.2-common
          libapr1 libaprutil1 libaprutil1-dbd-sqlite3 libaprutil1-ldap ssl-cert
        0 upgraded, 10 newly installed, 0 to remove and 36 not upgraded.
        Need to get 2,945 kB/3,141 kB of archives.
        After this operation, 10.4 MB of additional disk space will be used.
        Do you want to continue [Y/n]? y
        Err http://us-west-1.ec2.archive.ubuntu.com/ubuntu/ oneiric-updates/main apache2.2-bin i386 2.2.20-1ubuntu1.1
          404 Not Found [IP: 10.161.51.124 80]
        Err http://security.ubuntu.com/ubuntu/ oneiric-security/main apache2.2-bin i386 2.2.20-1ubuntu1.1
          404 Not Found [IP: 91.189.92.167 80]
        Err http://security.ubuntu.com/ubuntu/ oneiric-security/main apache2-utils i386 2.2.20-1ubuntu1.1
          404 Not Found [IP: 91.189.92.167 80]
        Err http://security.ubuntu.com/ubuntu/ oneiric-security/main apache2.2-common i386 2.2.20-1ubuntu1.1
          404 Not Found [IP: 91.189.92.167 80]
        Err http://security.ubuntu.com/ubuntu/ oneiric-security/main apache2-mpm-worker i386 2.2.20-1ubuntu1.1
          404 Not Found [IP: 91.189.92.167 80]
        Err http://security.ubuntu.com/ubuntu/ oneiric-security/main apache2 i386 2.2.20-1ubuntu1.1
          404 Not Found [IP: 91.189.92.167 80]
        Failed to fetch http://security.ubuntu.com/ubuntu/pool/main/a/apache2/apache2.2-bin_2.2.20-1ubuntu1.1_i386.deb  404 Not Found [IP: 91.189.92.167 80]
        Failed to fetch http://security.ubuntu.com/ubuntu/pool/main/a/apache2/apache2-utils_2.2.20-1ubuntu1.1_i386.deb  404 Not Found [IP: 91.189.92.167 80]
        Failed to fetch http://security.ubuntu.com/ubuntu/pool/main/a/apache2/apache2.2-common_2.2.20-1ubuntu1.1_i386.deb  404 Not Found [IP: 91.189.92.167 80]
        Failed to fetch http://security.ubuntu.com/ubuntu/pool/main/a/apache2/apache2-mpm-worker_2.2.20-1ubuntu1.1_i386.deb  404 Not Found [IP: 91.189.92.167 80]
        Failed to fetch http://security.ubuntu.com/ubuntu/pool/main/a/apache2/apache2_2.2.20-1ubuntu1.1_i386.deb  404 Not Found [IP: 91.189.92.167 80]
        Unable to correct missing packages.
        E: Aborting install.

    Any help is appreciated.

    Read the article

  • Amazon Elastic MapReduce: Exception from FileSystem

    - by S.N
    Hi, I run my application using the Ruby client:

        ruby elastic-mapreduce -j j-20PEKMT9BRSUC --jar s3n://sakae55/lib/edu.cit.som.jar --main-class edu.cit.som.hadoop.SOMDriver --arg s3n://sakae55/repository/input/ecoli/ --arg s3n://sakae55/repository/output/ecoli/pl/ --arg s3n://sakae55/repository/data/ecoli/som.txt

    Then I am seeing the following error:

        java.lang.IllegalArgumentException: This file system object (file:///) does not support access to the request path
        'hdfs://ip-10-195-207-230.ec2.internal:9000/mnt/var/lib/hadoop/tmp/mapred/system/job_201004221221_0017/job.jar'
        You possibly called FileSystem.get(conf) when you should of called FileSystem.get(uri, conf) to obtain a file system supporting your path.
            at org.apache.hadoop.fs.FileSystem.checkPath(FileSystem.java:320)
            at org.apache.hadoop.fs.RawLocalFileSystem.pathToFile(RawLocalFileSystem.java:52)
            at org.apache.hadoop.fs.RawLocalFileSystem.getFileStatus(RawLocalFileSystem.java:416)
            at org.apache.hadoop.fs.FilterFileSystem.getFileStatus(FilterFileSystem.java:259)
            at org.apache.hadoop.fs.FileSystem.isDirectory(FileSystem.java:676)
            at org.apache.hadoop.fs.FileUtil.copy(FileUtil.java:200)
            at org.apache.hadoop.fs.FileSystem.copyFromLocalFile(FileSystem.java:1184)
            at org.apache.hadoop.fs.FileSystem.copyFromLocalFile(FileSystem.java:1160)
            at org.apache.hadoop.fs.FileSystem.copyFromLocalFile(FileSystem.java:1132)
            at org.apache.hadoop.mapred.JobClient.configureCommandLineOptions(JobClient.java:662)
            at org.apache.hadoop.mapred.JobClient.submitJob(JobClient.java:729)
            at org.apache.hadoop.mapred.JobClient.runJob(JobClient.java:1026)
            at edu.cit.som.hadoop.SOMDriver.runIteration(SOMDriver.java:106)
            at edu.cit.som.hadoop.SOMDriver.train(SOMDriver.java:69)
            at edu.cit.som.hadoop.SOMDriver.run(SOMDriver.java:52)
            at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:65)
            at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:79)
            at edu.cit.som.hadoop.SOMDriver.main(SOMDriver.java:36)
            at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
            at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
            at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
            at java.lang.reflect.Method.invoke(Method.java:597)
            at org.apache.hadoop.util.RunJar.main(RunJar.java:155)
            at org.apache.hadoop.mapred.JobShell.run(JobShell.java:54)
            at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:65)
            at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:79)
            at org.apache.hadoop.mapred.JobShell.main(JobShell.java:68)

    I am not sure why the error refers to "file:///", even though none of the arguments I pass use that scheme.

    Read the article

  • Amazon SimpleDB Identity Seed equivalent

    - by Zaff
    Is there an equivalent to an identity seed in SimpleDB? If the answer is no, how do you handle creating something like a customer number or order number in a way that prevents the creation of duplicate numbers? My experience is mainly with SQL Server, in which I would either create a primary key with an identity seed or use transactions in a stored procedure to increment the number. Thanks for your help!
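    Since SimpleDB has no auto-increment, one pattern people use is a counter item updated with a conditional put (optimistic concurrency). A minimal sketch in Python with boto3; the domain and item names are invented, and boto3 itself is an assumption:

        # next_number.py -- emulate an identity seed with a SimpleDB counter
        # item and a conditional put. Assumes boto3 is installed and the
        # "counters" domain exists.
        import boto3
        from botocore.exceptions import ClientError

        sdb = boto3.client("sdb")

        def next_number(domain="counters", item="order_number"):
            while True:
                got = sdb.get_attributes(
                    DomainName=domain, ItemName=item,
                    AttributeNames=["value"], ConsistentRead=True,
                )
                attrs = got.get("Attributes", [])
                current = int(attrs[0]["Value"]) if attrs else 0
                try:
                    sdb.put_attributes(
                        DomainName=domain, ItemName=item,
                        Attributes=[{"Name": "value",
                                     "Value": str(current + 1),
                                     "Replace": True}],
                        # Fail (and retry) if someone else incremented first.
                        Expected=({"Name": "value", "Value": str(current)}
                                  if attrs
                                  else {"Name": "value", "Exists": False}),
                    )
                    return current + 1
                except ClientError as e:
                    if e.response["Error"]["Code"] != "ConditionalCheckFailed":
                        raise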

    Read the article

  • Ejabberd clustering problem with Amazon EC2 server

    - by user353362
    Hello guys! I have been trying to install an ejabberd server on an Amazon EC2 instance, and I am kind of stuck at this step right now. I am following this guide: http://tdewolf.blogspot.com/2009/07/clustering-ejabberd-nodes-using-mnes... From the guide I have successfully completed the "Set up First Node (on ejabberd1)" part, but am stuck in part 4 of "Set up Second Node (on ejabberd2)". So, all in all, I created the main node and am able to run the server on that node and access its admin console from the internet. On the second node I have installed ejabberd, and I execute this command:

        erl -sname ejabberd@domU-12-31-39-0F-7D-14 -mnesia dir '"/var/lib/ejabberd/"' -mnesia extra_db_nodes "['ejabberd@domU-12-31-39-02-C8-36']" -s mnesia

    and get a crashing error:

        root@domU-12-31-39-0F-7D-14:/var/lib/ejabberd# erl -sname ejabberd@domU-12-31-39-0F-7D-14 -mnesia dir '"/var/lib/ejabberd/"' -mnesia extra_db_nodes "['ejabberd@domU-12-31-39-02-C8-36']" -s mnesia
        {error_logger,{{2010,5,28},{23,52,25}},"Protocol: ~p: register error: ~p~n",["inet_tcp",{{badmatch,{error,duplicate_name}},[{inet_tcp_dist,listen,1},{net_kernel,start_protos,4},{net_kernel,start_protos,3},{net_kernel,init_node,2},{net_kernel,init,1},{gen_server,init_it,6},{proc_lib,init_p_do_apply,3}]}]}
        {error_logger,{{2010,5,28},{23,52,25}},crash_report,[[{pid,<0.21.0>},{registered_name,net_kernel},{error_info,{exit,{error,badarg},[{gen_server,init_it,6},{proc_lib,init_p_do_apply,3}]}},{initial_call,{net_kernel,init,['Argument__1']}},{ancestors,[net_sup,kernel_sup,<0.8.0>]},{messages,[]},{links,[#Port<0.52>,<0.18.0>]},{dictionary,[{longnames,false}]},{trap_exit,true},{status,running},{heap_size,610},{stack_size,23},{reductions,518}],[]]}
        {error_logger,{{2010,5,28},{23,52,25}},supervisor_report,[{supervisor,{local,net_sup}},{errorContext,start_error},{reason,{'EXIT',nodistribution}},{offender,[{pid,undefined},{name,net_kernel},{mfa,{net_kernel,start_link,[['ejabberd@domU-12-31-39-0F-7D-14',shortnames]]}},{restart_type,permanent},{shutdown,2000},{child_type,worker}]}]}
        {error_logger,{{2010,5,28},{23,52,25}},supervisor_report,[{supervisor,{local,kernel_sup}},{errorContext,start_error},{reason,shutdown},{offender,[{pid,undefined},{name,net_sup},{mfa,{erl_distribution,start_link,[]}},{restart_type,permanent},{shutdown,infinity},{child_type,supervisor}]}]}
        {error_logger,{{2010,5,28},{23,52,25}},crash_report,[[{pid,<0.7.0>},{registered_name,[]},{error_info,{exit,{shutdown,{kernel,start,[normal,[]]}},[{application_master,init,4},{proc_lib,init_p_do_apply,3}]}},{initial_call,{application_master,init,['Argument_1','Argument_2','Argument_3','Argument_4']}},{ancestors,[<0.6.0>]},{messages,[{'EXIT',<0.8.0>,normal}]},{links,[<0.6.0>,<0.5.0>]},{dictionary,[]},{trap_exit,true},{status,running},{heap_size,233},{stack_size,23},{reductions,123}],[]]}
        {error_logger,{{2010,5,28},{23,52,25}},std_info,[{application,kernel},{exited,{shutdown,{kernel,start,[normal,[]]}}},{type,permanent}]}
        {"Kernel pid terminated",application_controller,"{application_start_failure,kernel,{shutdown,{kernel,start,[normal,[]]}}}"}
        Crash dump was written to: erl_crash.dump
        Kernel pid terminated (application_controller) ({application_start_failure,kernel,{shutdown,{kernel,start,[normal,[]]}}})
        root@domU-12-31-39-0F-7D-14:/var/lib/ejabberd#

    Any idea what's going on? I am not really sure how to solve this problem :S Also, how do I let ejabberd only accept registration from one particular server?

    Follow-up: "Is that the right way of copying the .erlang.cookie file?" (submitted by privateson on Sat, 2010-05-29 00:11)

    Before this I was getting the error below, which I solved by running:

        chmod 400 .erlang.cookie

    Also, to copy the cookie I simply created a file using vi on the second server and copied the secret code from server one to the second server. Is that the right way of copying the .erlang.cookie file?

    ERROR:

        root@domU-12-31-39-0F-7D-14:/etc/ejabberd# erl -sname ejabberd@domU-12-31-39-0F-7D-14 -mnesia dir '"/var/lib/ejabberd/"' -mnesia extra_db_nodes "['ejabberd@domU-12-31-39-02-C8-36']" -s mnesia
        {error_logger,{{2010,5,28},{23,28,56}},"Cookie file /root/.erlang.cookie must be accessible by owner only",[]}
        {error_logger,{{2010,5,28},{23,28,56}},crash_report,[[{pid,<0.20.0>},{registered_name,auth},{error_info,{exit,{"Cookie file /root/.erlang.cookie must be accessible by owner only",[{auth,init_cookie,0},{auth,init,1},{gen_server,init_it,6},{proc_lib,init_p_do_apply,3}]},[{gen_server,init_it,6},{proc_lib,init_p_do_apply,3}]}},{initial_call,{auth,init,['Argument__1']}},{ancestors,[net_sup,kernel_sup,<0.8.0>]},{messages,[]},{links,[<0.18.0>]},{dictionary,[]},{trap_exit,true},{status,running},{heap_size,987},{stack_size,23},{reductions,439}],[]]}
        {error_logger,{{2010,5,28},{23,28,56}},supervisor_report,[{supervisor,{local,net_sup}},{errorContext,start_error},{reason,{"Cookie file /root/.erlang.cookie must be accessible by owner only",[{auth,init_cookie,0},{auth,init,1},{gen_server,init_it,6},{proc_lib,init_p_do_apply,3}]}},{offender,[{pid,undefined},{name,auth},{mfa,{auth,start_link,[]}},{restart_type,permanent},{shutdown,2000},{child_type,worker}]}]}
        {error_logger,{{2010,5,28},{23,28,56}},supervisor_report,[{supervisor,{local,kernel_sup}},{errorContext,start_error},{reason,shutdown},{offender,[{pid,undefined},{name,net_sup},{mfa,{erl_distribution,start_link,[]}},{restart_type,permanent},{shutdown,infinity},{child_type,supervisor}]}]}
        {error_logger,{{2010,5,28},{23,28,56}},crash_report,[[{pid,<0.7.0>},{registered_name,[]},{error_info,{exit,{shutdown,{kernel,start,[normal,[]]}},[{application_master,init,4},{proc_lib,init_p_do_apply,3}]}},{initial_call,{application_master,init,['Argument_1','Argument_2','Argument_3','Argument_4']}},{ancestors,[<0.6.0>]},{messages,[{'EXIT',<0.8.0>,normal}]},{links,[<0.6.0>,<0.5.0>]},{dictionary,[]},{trap_exit,true},{status,running},{heap_size,233},{stack_size,23},{reductions,123}],[]]}
        {error_logger,{{2010,5,28},{23,28,56}},std_info,[{application,kernel},{exited,{shutdown,{kernel,start,[normal,[]]}}},{type,permanent}]}
        {"Kernel pid terminated",application_controller,"{application_start_failure,kernel,{shutdown,{kernel,start,[normal,[]]}}}"}
        Crash dump was written to: erl_crash.dump
        Kernel pid terminated (application_controller) ({application_start_failure,kernel,{shutdown,{kernel,start,[normal,[]]}}})

    And the ejabberd log:

        root@domU-12-31-39-0F-7D-14:/var/lib/ejabberd# cat /var/log/ejabberd/ejabberd.log
        =INFO REPORT==== 2010-05-28 22:48:53 ===
        I(<0.321.0>:mod_pubsub:154) : pubsub init "localhost" [{access_createnode, pubsub_createnode}, {plugins, ["default","pep"]}]
        =INFO REPORT==== 2010-05-28 22:48:53 ===
        I(<0.321.0>:mod_pubsub:210) : ** tree plugin is nodetree_default
        =INFO REPORT==== 2010-05-28 22:48:53 ===
        I(<0.321.0>:mod_pubsub:214) : ** init default plugin
        =INFO REPORT==== 2010-05-28 22:48:53 ===
        I(<0.321.0>:mod_pubsub:214) : ** init pep plugin
        =ERROR REPORT==== 2010-05-28 23:40:08 ===
        ** Connection attempt from disallowed node 'ejabberdctl1275090008486951000@domU-12-31-39-0F-7D-14' **
        =ERROR REPORT==== 2010-05-28 23:41:10 ===
        ** Connection attempt from disallowed node 'ejabberdctl1275090070163253000@domU-12-31-39-0F-7D-14' **

    Read the article

  • Amazon S3 Add METADATA to existing KEY

    - by Daveo
    In the S3 REST API, I am adding metadata to an existing object by using the PUT (Copy) command, copying a key to the same location with 'x-amz-metadata-directive' = 'REPLACE'. What I want to do is change the download file name by setting:

        Content-Disposition: attachment; filename=foo.bar;

    This sets the metadata correctly, but when I download the file it still uses the key name instead of 'foo.bar'. I use a software tool, S3 Browser, to view the metadata, and it looks correct (apart from 'Content-Disposition' being all lower case, as that was how S3 asked me to sign it). Then, using S3 Browser, I just pressed save without changing anything, and now it works??? What am I missing? How come setting the metadata 'Content-Disposition: attachment; filename=foo.bar;' from my web app does not work, but it does work from S3 Browser?
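    For comparison, a minimal sketch of the same copy-in-place in Python with boto3 (boto3, the bucket, and the key are assumptions, not from the post). One common pitfall is sending Content-Disposition as user metadata (x-amz-meta-*) rather than as the reserved header, in which case S3 stores it but never serves it:

        # replace_disposition.py -- copy an object onto itself, replacing its
        # metadata so downloads get a Content-Disposition header.
        # Assumes boto3 is installed.
        import boto3

        s3 = boto3.client("s3")
        s3.copy_object(
            Bucket="my-bucket",
            Key="object-key",
            CopySource={"Bucket": "my-bucket", "Key": "object-key"},
            MetadataDirective="REPLACE",
            # Set via the dedicated parameter, not as x-amz-meta-*.
            ContentDisposition='attachment; filename="foo.bar"',
            ContentType="application/octet-stream",
        )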

    Read the article

  • Amazon EC2 Development Stack

    - by sirmak
    I want to use EC2 for some reasons, and EC2 provides both Windows and Linux deployments, but Linux is much cheaper (Linux reserved instances are approx. 63%-85% of the price of the Windows ones, and spot instances are 50% cheaper for Linux). I need a type-safe language and a mainstream platform, and I prefer to use the .NET/C# stack (but not Mono, for some reasons), but in this situation Java seems a better fit for the future (when EC2 instance counts begin to increase). So, is it worth it to use .NET? Best regards,

    Read the article

  • Amazon S3 collisions with heroku and paperclip

    - by poseid
    I have an app on my localhost for development and an app for testing on Heroku. Image upload with localhost and Paperclip always works. However, doing the same experiment with image upload on my Heroku app, the app hangs... and the upload seems to go on forever. I suspect that there is a collision going on. What is needed to get the uploads working? Or do I need to use different buckets for each environment?

    Read the article

  • Amazon S3 - HTTPS/SSL - Is it possible?

    - by Kerry
    I saw a few other questions regarding this without any real answers or information (or so it appeared). I have an image here:

        http://furniture.retailcatalog.us/products/2061/6262u9665.jpg

    which is redirecting to:

        http://furniture.retailcatalog.us.s3.amazonaws.com/products/2061/6262u9665.jpg

    I need it to be:

        https://furniture.retailcatalog.us/products/2061/6262u9665.jpg

    So I installed a wildcard SSL certificate on retailcatalog.us (we have other subdomains), but it wasn't working. I went to check

        https://furniture.retailcatalog.us.s3.amazonaws.com/products/2061/6262u9665.jpg

    and it wasn't working either. How do I make this work?

    Read the article

  • Special chars in Amazon S3 keys?

    - by Martin
    Is it possible to have special characters like åäö in the key? If I URL-encode the key before storing, it works, but I can't really find a way to access the object. If I write åäö in the URL, I get Access Denied (as I would if the object were not found). If I URL-encode the URL I paste into the browser, I get "InvalidURI: Couldn't parse the specified URI". Is there some way to do this?
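    A sketch of the encoding round-trip in Python (assuming the key is stored as the raw UTF-8 string and percent-encoded only when the URL is built; the bucket name is a placeholder):

        # unicode_keys.py -- store the key un-encoded, percent-encode it only
        # in the URL. Encoding both at store time and in the URL (double
        # encoding) is a common cause of Access Denied / InvalidURI errors.
        from urllib.parse import quote

        key = "images/åäö.jpg"   # store this exact string as the S3 key
        url = "https://my-bucket.s3.amazonaws.com/" + quote(key, safe="/")
        print(url)
        # https://my-bucket.s3.amazonaws.com/images/%C3%A5%C3%A4%C3%B6.jpg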

    Read the article

  • Map Reduce job on Amazon: argument for custom jar

    - by zero51
    Hi all, this is one of my first tries with MapReduce on AWS in its Management Console. I have uploaded to AWS S3 my runnable jar, developed on Hadoop 0.18, and it works on my local machine. As described in the documentation, I have passed the S3 paths for input and output as arguments of the jar: all right, but the problem is the third argument, which is another path (as a string) to a file that I need to load while the job is executing. That file resides in an S3 bucket too, but it seems that my jar doesn't recognize the path, and I get a FileNotFoundException while it tries to load it. That is strange, because this is a path exactly like the other two... Anyone have any idea? Thank you, Luca

    Read the article

  • Serving images from Amazon S3 in PHP application

    - by luckytaxi
    So it just occurred to me that once I upload profile pics to S3, I have to figure out a way to keep track of the files. For example, if "susan" uploads 3 profile pics, I need to recall those 3 pictures and display them on her profile page if someone views her page. With that said, would the following work (see the sketch below)?

      1. User uploads a picture from a form
      2. Save file information (filename, user info, etc...) into the DB and reference the URL from S3
      3. Upload the photo to S3
      4. When displaying pictures, query the DB for the info and display the images from S3 accordingly
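    A sketch of steps 2-4 (in Python with boto3 rather than PHP, purely to illustrate the flow; the table, column, and bucket names are invented):

        # store_profile_pic.py -- upload the image to S3 and record its
        # location in the database, keyed by user. Assumes boto3 is installed
        # and a "pictures" table exists with the columns used below.
        import sqlite3
        import boto3

        s3 = boto3.client("s3")

        def save_profile_pic(db, user_id, local_path, filename):
            key = f"profile-pics/{user_id}/{filename}"
            s3.upload_file(local_path, "my-app-uploads", key)   # step 3
            db.execute(                                         # step 2
                "INSERT INTO pictures (user_id, filename, s3_key) VALUES (?, ?, ?)",
                (user_id, filename, key),
            )
            db.commit()

        def pics_for(db, user_id):                              # step 4
            rows = db.execute(
                "SELECT s3_key FROM pictures WHERE user_id = ?", (user_id,)
            )
            return [f"https://my-app-uploads.s3.amazonaws.com/{k}"
                    for (k,) in rows]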

    Read the article

  • How to securely stream video from amazon S3

    - by JP.
    I have a couple of copyrighted videos available in my S3 buckets. I want to stream them on my website, but at the same time I don't want users to be able to rip the video from the video player. I tried to google about it, but I am still not confident on this, because I do not know the intricacies of the options available, such as:

      1. Server-side encryption: None / AES-256
      2. A very interesting option is under the Metadata tab - it shows a couple of keys & values. How can I use them to secure my video content?
      3. Add more metadata and related options?
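    One building block commonly used for this is the time-limited signed URL; server-side encryption only protects data at rest and does not stop a viewer from saving what the player receives. A sketch in Python with boto3 (boto3, the bucket, and the key are assumptions, not from the post), with the caveat that a determined user can still capture the stream:

        # signed_video_url.py -- hand the player a URL that expires quickly,
        # so links can't be shared or hot-linked. Assumes boto3 is installed
        # and the bucket blocks public reads.
        import boto3

        s3 = boto3.client("s3")
        url = s3.generate_presigned_url(
            "get_object",
            Params={"Bucket": "my-videos", "Key": "movies/episode1.mp4"},
            ExpiresIn=60,  # seconds; long enough for the player to start
        )
        print(url)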

    Read the article

  • NoSQL on Amazon EC2 (any suggestions?)

    - by terence410
    There is a lot of discussion on this, but I still don't have a clear idea of what the best solution is. I am currently considering MongoDB. Do you think it's good? What about Cassandra? Besides, ThruDB looks good, but it seems there is no official release.

    Read the article

  • Python CGI on Amazon AWS EC2 micro-instance -- a how-to!

    - by user595585
    How can you make an EC2 micro instance serve CGI scripts from lighttpd? For instance, Python CGI? Well, it took half a day, but I have gotten Python CGI running on a free Amazon AWS EC2 micro-instance, using the lighttpd server. I think it will help my fellow noobs to put all the steps in one place. Armed with the simple steps below, it will take you only 15 minutes to set things up! My question for the more experienced users reading this is: are there any security flaws in what I've done? (See file and directory permissions.)

    Step 1: Start your EC2 instance and ssh into it.

    [Obviously, you'll need to sign up for Amazon EC2 and save your key pairs to a *.pem file. I won't go over this, as Amazon tells you how to do it.]

    Sign into your AWS account and start your EC2 instance. The web has tutorials on doing this. Notice that the default instance size that Amazon presents to you is "small". This is not "micro", and so it will cost you money. Be sure to manually choose "micro". (Micro instances are free only for the first year...)

    Find the public DNS name for your running instance. To do this, click on the instance in the top pane of the dashboard and you'll eventually see the "Public DNS" field populated in the bottom pane. (You may need to fiddle a bit.) The public DNS looks something like:

        ec2-174-129-110-23.compute-1.amazonaws.com

    Start your Unix console program. (On Mac OS X, it's called Terminal, and lives in the Applications > Utilities folder.) cd to the directory on your desktop system that has your *.pem file containing your AWS keypairs. ssh to your EC2 instance using a command like:

        ssh -i <<your *.pem filename>> ec2-user@<<Public DNS address>>

    So, for me, this was:

        ssh -i amzn_ec2_keypair.pem ec2-user@ec2-174-129-110-23.compute-1.amazonaws.com

    Your EC2 instance should let you in.

    Step 2: Download lighttpd to your EC2 instance.

    To install lighttpd, you will need root access on your EC2 instance. The problem is: Amazon will not let you sign in as root. (Not straightforwardly, at least.) But there is a workaround. Type this command:

        sudo /bin/bash

    The system prompt character will change from $ to #. We won't exit from "sudo" until the very last step in this whole process.

    Install the lighttpd application (version 1.4.28-1.3.amzn1 for me):

        yum install lighttpd

    Install the FastCGI libraries for lighttpd (not needed, but why not?):

        yum install lighttpd-fastcgi

    Test that your server is working:

        /etc/init.d/lighttpd start

    Step 3: Let the outside world see your server.

    If you now tried to hit your server from the browser on your desktop, it would fail. The reason: by default, Amazon AWS does not open any ports to your EC2 instance. So, you have to open the ports manually. Go to your EC2 dashboard in your desktop's browser. Click on "Security Groups" in the left pane. One or more security groups will appear in the upper right pane. Choose the one that was assigned to your EC2 instance when you launched your instance. A table called "Allowed Connections" will appear in the lower right pane. A pop-up menu will let you choose "HTTP" as the connection method. The other values in that line of the table should be: tcp, 80, 80, 0.0.0.0/0.

    Now hit your EC2 instance's server from the desktop in your browser. Use the Public DNS address that you used earlier to ssh in. You should see the lighttpd generic web page. If you don't, I can't help you, because I am such a noob. :-(

    Step 4: Configure lighttpd to serve CGI.

    Back in the console program, cd to the configuration directory for lighttpd:

        cd /etc/lighttpd

    To enable CGI, you want to uncomment one line in the modules.conf file. (I could have enabled FastCGI, but baby steps are best!) You can do this with the "ed" editor as follows:

        ed modules.conf
        /include "conf.d\/cgi.conf"/
        s/#//
        w
        q

    Create the directory where CGI programs will live. (The /etc/lighttpd/lighttpd.conf file determines where this will be.) We'll create our directory in the default location, so we don't have to do any editing of configuration files:

        cd /var/www/lighttpd
        mkdir cgi-bin
        chmod 755 cgi-bin

    Almost there! Of course you need to put a test CGI program into the cgi-bin directory. Here is one:

        cd cgi-bin
        ed
        a
        #!/usr/bin/python
        print "Content-type: text/html\n\n"
        print "<html><body>Hello, pyworld.</body></html>"
        .
        w hellopyworld.py
        q
        chmod 655 hellopyworld.py

    Restart your lighttpd server:

        /etc/init.d/lighttpd restart

    Test your CGI program. In your desktop's browser, hit this URL, substituting your EC2 instance's public DNS address:

        http://<<Public DNS>>/cgi-bin/hellopyworld.py

    For me, this was:

        http://ec2-174-129-110-23.compute-1.amazonaws.com/cgi-bin/hellopyworld.py

    Step 5: That's it! Clean up, and give thanks!

    To exit from the "sudo /bin/bash" command given earlier, type:

        exit

    Acknowledgements: heaps of thanks to:

        wiki.vpslink.com/Install_and_Configure_lighttpd
        www.cyberciti.biz/tips/lighttpd-howto-setup-cgi-bin-access-for-perl-programs.html
        aws.typepad.com/aws/2010/06/building-three-tier-architectures-with-security-groups.html

    Good luck, amigos! I apologize for the non-traditional nature of this "question", but I have gotten so much help from Stack Overflow that I was eager to give something back.

    Read the article

  • How to add a reverse PTR record on Amazon Route 53?

    - by Oscar Cabrero
    If I have the IP below:

        168.144.254.X

    and I would like to add a PTR record in Amazon in the form of X.254.144.168.in-addr.arpa, what should be in the name field and what should be in the value field? I have a zone created with a name like mydomain.com, which hosts the DNS records for my IP. Amazon won't let me add a value of X.254.144.168.in-addr.arpa in the name field. Do I need to create a new zone for the IP in order to allow this?
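    For illustration, here is roughly what creating such a record looks like via the API once a hosted zone for the reverse namespace (e.g. 254.144.168.in-addr.arpa) exists as its own zone, separate from mydomain.com. A sketch in Python with boto3; the zone ID and hostname are placeholders:

        # add_ptr.py -- create a PTR record inside a reverse-lookup hosted
        # zone. Z123EXAMPLE and host.mydomain.com. are placeholders; the zone
        # 254.144.168.in-addr.arpa must already exist. Assumes boto3.
        import boto3

        r53 = boto3.client("route53")
        r53.change_resource_record_sets(
            HostedZoneId="Z123EXAMPLE",
            ChangeBatch={
                "Changes": [{
                    "Action": "CREATE",
                    "ResourceRecordSet": {
                        "Name": "X.254.144.168.in-addr.arpa",
                        "Type": "PTR",
                        "TTL": 300,
                        "ResourceRecords": [{"Value": "host.mydomain.com."}],
                    },
                }]
            },
        )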

    Read the article
