Search Results

Search found 1351 results on 55 pages for 'aws s3'.

Page 23 of 55

  • Using Amazon S3/CloudFront and Encoding.com to deliver web video – step by step for iPhone/iPod/iPad

    - by joelvarty
      The Amazon AWS newsletter for May 2010 had a great link to this article by Encoding.com on how you can use their service to encode your video for multi-format, multi-bandwidth streaming to many devices, including iPhone, iPad, and Flash with H.264. It looks like this doesn't actually take advantage of CloudFront streaming, but merely splits your encoded files into the available chunks and includes all of the M3U8 files that point to the different bitrates and such. It looks like a pretty sweet service in general, especially since they seem to have an API as well, so it may be very useful to those of you out there looking to host video. More later – joel
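
      For reference, the bitrate-switching piece mentioned above is just an HLS master M3U8 that lists one variant playlist per encoded bitrate. A minimal sketch (the paths and BANDWIDTH values here are only examples):

        #EXTM3U
        #EXT-X-STREAM-INF:PROGRAM-ID=1,BANDWIDTH=400000
        low/prog_index.m3u8
        #EXT-X-STREAM-INF:PROGRAM-ID=1,BANDWIDTH=800000
        mid/prog_index.m3u8
        #EXT-X-STREAM-INF:PROGRAM-ID=1,BANDWIDTH=1500000
        high/prog_index.m3u8

      The player on the iPhone/iPad picks whichever variant fits the available bandwidth.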

    Read the article

  • Using AWS or Azure, what to do about emails?

    - by Paul
    I'm coming from a background of paying a hosting company X amount per month for a server. This server comes with IIS, WebsitePanel and SmarterMail all bundled together. When I create a new domain using WebsitePanel it automatically creates my email account; all I then need to do is point my DNS to the server. I've decided that it is more cost-efficient to move to AWS / Azure. Has anyone come from a similar background and moved onto a cloud system? I'd be interested to know what you did about email. So far, these are the suggestions I've seen: use Google Apps for each domain; use something like Elastic Email to send out emails; or launch a new instance and host an email server on it. The first option seems like quite a lot of manual configuration, the second works well for outgoing email but what about receiving? Option 3 would make it less cost-effective. What is your experience?
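
    If the Google Apps route is chosen, receiving mail is mostly a DNS exercise: each domain just needs MX records pointing at Google's mail servers. A sketch of the zone entries Google's setup instructions asked for at the time (example.com stands in for the real domain):

      example.com.    IN  MX  1   ASPMX.L.GOOGLE.COM.
      example.com.    IN  MX  5   ALT1.ASPMX.L.GOOGLE.COM.
      example.com.    IN  MX  5   ALT2.ASPMX.L.GOOGLE.COM.
      example.com.    IN  MX  10  ASPMX2.GOOGLEMAIL.COM.
      example.com.    IN  MX  10  ASPMX3.GOOGLEMAIL.COM.

    Outgoing application mail can still go through something like Elastic Email.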

    Read the article

  • REST authentication: S3-like HMAC-SHA1 signature vs. symmetric data encryption

    - by coulix
    Hello stackers, I was arguing for an S3-like approach: an authorization hash computed with a secret key as the seed and some data from the request as the message, signed with HMAC-SHA1 (the Amazon S3 way), versus another developer who favours symmetric encryption of the data with a secret key known to both the emitter and the server. What are the advantages of signing data with HMAC-SHA1 over a symmetric key, other than the fact that with the former we do not need to encrypt the username or password? Which would be harder to break: symmetric encryption, or HMAC-SHA1 signing à la S3? If all the big players are using OAuth and similar schemes without symmetric keys, there must be obvious advantages; what are those?
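
    For context, the S3-style signature being discussed boils down to HMAC-SHA1 over a canonical string built from the request, base64-encoded and sent alongside the (public) access key. A rough shell sketch of AWS Signature Version 2, where the variable names and the exact string-to-sign layout are simplified placeholders:

      string_to_sign="GET\n\n\n${date}\n/${bucket}/${object}"
      signature=$(printf "${string_to_sign}" \
          | openssl dgst -sha1 -hmac "${AWS_SECRET_KEY}" -binary \
          | openssl base64)
      # the request then carries:  Authorization: AWS ${AWS_ACCESS_KEY}:${signature}

    The secret never travels with the request, which is the main point of contention with the symmetric-encryption approach.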

    Read the article

  • Why am I having DNS problems going through Network Solutions DNS to Amazon AWS?

    - by BestPractices
    Network Solutions appears to have an issue with AWS hostnames. This AWS ELB has been out there for months and is resolvable from every major DNS provider except Network Solutions. Any idea as to why?

    WORKING (4.2.2.2 DNS)

      $ nslookup testloadbalancer-1761726467.us-west-2.elb.amazonaws.com
      Server:   4.2.2.5
      Address:  4.2.2.5#53
      Non-authoritative answer:
      Name:     testloadbalancer-1761726467.us-west-2.elb.amazonaws.com
      Address:  50.112.251.201

    NOT WORKING (Network Solutions DNS)

      $ nslookup testloadbalancer-1761726467.us-west-2.elb.amazonaws.com ns1.worldnic.com
      Server:   ns1.worldnic.com
      Address:  205.178.190.1#53
      ** server can't find testloadbalancer-1761726467.us-west-2.elb.amazonaws.com.localhost: SERVFAIL
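
    Two checks that might narrow it down, offered only as a sketch: query both resolvers directly with dig, and note that the failing lookup shows ".localhost" appended to the name, which suggests a search domain was applied; a trailing dot makes the name absolute.

      dig @4.2.2.2 testloadbalancer-1761726467.us-west-2.elb.amazonaws.com +short
      dig @205.178.190.1 testloadbalancer-1761726467.us-west-2.elb.amazonaws.com +short
      nslookup testloadbalancer-1761726467.us-west-2.elb.amazonaws.com. ns1.worldnic.com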

    Read the article

  • Should I have a Heroku worker dyno to poll an AWS SQS queue?

    - by Luccas
    I'm confused about where I should have a script polling an AWS SQS queue inside a Rails application. If I use a thread inside the web app, it will probably burn CPU cycles listening to this queue forever and hurt performance. But if I reserve a single Heroku worker dyno, it costs $34.50 per month. Does it make sense to pay that price just to poll a single queue? Or is a worker not the right tool for this? The script code:

      queue = AWS::SQS::Queue.new(SQSADDR['my_queue'])
      queue.poll(:idle_timeout => 20) do |msg|
        # code here
      end

    I need help!! Thanks

    Read the article

  • Using an AWS EC2 server to host a busy website and I need to set up load balancing

    - by Philip Isaacs
    My company has one EC2 server running on AWS, with a MySQL DB and Apache on the same instance. This one instance hosts a website built on the PHP Zend Framework. The site runs like crap when it starts to get busy with a lot of traffic, so I'm looking for some advice on how to set up something that can handle the load better. My first question is: should I move the MySQL DB onto a separate EC2 instance, or perhaps use AWS's RDS service, which looks like a nice option? I'm sort of new to some of this, but I'm guessing I'll need at least two EC2 instances to serve the website from and some sort of load-balancing mechanism to distribute traffic. But maybe not, I'm not sure. Also, what are some best practices for replicating the data so that it stays in sync on both instances? Okay, I know these are a lot of questions, but I don't know where to start, so any advice will help.

    Read the article

  • Cloned CentOS 6.4 web server for test purposes: virtual host, .htaccess, URL redirect issue

    - by Shogoot
    I see similar questions, but not my exact challenge. What I have done so far: I cloned a prod server over to a VMware VM to use it as a test server for new functionality I'm going to write. I'm not a sysadmin by trade, but I'm new to this company and I have to do some things that are outside of my comfort zone (that's a good thing :) ). The prod server has 2 sites on it, s1.com and s2.com. In /html/s1/ and /html/s2/ there's an .htaccess file under each s*/, looking like this:

      RewriteEngine ON
      RewriteBase /
      RewriteCond %{QUERY_STRING} id=([0-9]+)
      RewriteRule ^.* %1.htm
      RewriteCond %{QUERY_STRING} page=modules/checkout
      RewriteRule ^.* order.php
      RewriteCond %{QUERY_STRING} page=pages/sidekart
      RewriteRule ^.* pages/sidekart.htm

    The issue is that s1 has a lot of pages that really belong under a third domain, s3; the rules in lines 4 and 5 redirect them to /html/s1/. An example of such a URL is: s3.com/?page=modules/product&id=521614. I'm then trying to get those URLs (without modifying the URL) to resolve against s3's /html/s3/ server structure, which I set up by making a new virtual host, s3, in the test server's httpd.conf with test3.com as the ServerName (and changing the other sites to tests1.com and tests2.com), adding an .htaccess to this s3 root directory as well, and making an html/s3/ directory structure that I populated with an index.html, etc. But when I take the same URL (s3.com/?page=modules/product&id=521614) and change it to tests3.com/?page=modules/product&id=521614, I get s1's index page showing up in my browser. I've poked around for about a day now and I can't figure out why this happens.
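
    For reference, the kind of name-based virtual host block being described would look roughly like the sketch below; the paths and names are examples only, and the real DocumentRoot and ServerName come from the test server's httpd.conf:

      # NameVirtualHost *:80 is needed once, globally, on Apache 2.2 (CentOS 6)
      NameVirtualHost *:80
      <VirtualHost *:80>
          ServerName tests3.com
          DocumentRoot /var/www/html/s3
          <Directory "/var/www/html/s3">
              AllowOverride All
          </Directory>
      </VirtualHost>

    Getting s1's index page usually means the request did not match any vhost's ServerName and fell through to the first vhost defined; it is worth double-checking the test3.com vs. tests3.com spelling quoted above.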

    Read the article

  • Easy GUI way to auto-scale EC2 and RDS: AWS console, Scalr, Ylastic...?

    - by Zillo
    I am managing all my instances with the AWS Management Console (the GUI web console), but now I want to use Auto Scaling and it seems that this cannot be done from that console. Yes, there is CloudWatch, but there I can only create alarms (e-mail notifications); it seems that CloudWatch needs you to add the auto scaling policy somewhere else (via the command-line tools?). I would like to use some easy GUI interface. Ylastic and Scalr seem to be good options. Which one do you think is better? Regarding Scalr, is there any difference between the open source software Scalr and the service Scalr.net? I mean, is the GUI interface the same? I like the idea of Scalr because I do not need to give my Secret Access Key to a third party (as with Ylastic or Scalr.net). One question about the Scalr software: does it have to be installed on the instances, or on another machine? And do I need to set up all my security permissions, AMIs, snapshots, etc. again, or can I use the AWS Management Console for everything and Scalr just for auto scaling?

    Read the article

  • What are the steps needed to set up and use security for AWS command line tools?

    - by chris
    I've been trying to set up the AWS command-line tools following Eric's most useful guide at http://alestic.com/2012/09/aws-command-line-tools. I can't seem to find a good how-to on generating the X.509 certificate and private key, and how those relate to the various security files the guide creates. Update: I have found a couple of links that describe some of the steps. These steps seem to work; however, I'm not sure if this is secure and the best way to do it:

      # 1) Create a private key
      openssl genrsa -out my-private-key.pem 2048
      # 2) Create the X.509 cert
      openssl req -new -x509 -key my-private-key.pem -out my-x509-cert.pem -days 365

    Hit enter to accept all of the defaults. Then, from the IAM Dashboard, select a user and click on the "Security Credentials" tab. Click on "Manage Signing Certificates", then "Upload Signing Certificate", paste in the contents of my-x509-cert.pem, click OK, and it should be accepted. One step that is discussed, but was not required for me, was the addition and subsequent removal of a passphrase on the private key. Should I have been prompted for one, and is my cert potentially unsafe because of this?

    Read the article

  • RewriteRule Works With "Match Everything" Pattern But Not Directory Pattern

    - by kgrote
    I'm trying to redirect newsletter URLs from my local server to an Amazon S3 bucket. So I want to redirect from:

      https://mysite.com/assets/img/newsletter/Jan12_Newsletter.html

    to:

      https://s3.amazonaws.com/mybucket/newsletters/legacy/Jan12_Newsletter.html

    Here's the first part of my rule:

      RewriteEngine On
      RewriteBase /
      # Is it in the newsletters directory
      RewriteCond %{REQUEST_URI} ^(/assets/img/newsletter/)(.+) [NC]
      # Is not a 2008-2011 newsletter
      RewriteCond %{REQUEST_URI} !(.+)(11|10|09|08)_Newsletter.html$ [NC]
      ## -> RewriteRule to S3 Here <- ##

    If I use this RewriteRule to point to the new subdirectory on S3, it will NOT redirect:

      RewriteRule ^(/assets/img/newsletter/)(.+) https://s3.amazonaws.com/mybucket/newsletters/legacy/$2 [R=301,L]

    However, if I use a blanket expression to capture the entire file path, it WILL redirect:

      RewriteRule ^(.*)$ https://s3.amazonaws.com/mybucket/newsletters/legacy/$1 [R=301,L]

    Why does it only work with a "match everything" expression but not a more specific expression?
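
    If these rules live in an .htaccess file, a likely explanation (offered as a sketch rather than a confirmed answer) is that in per-directory context mod_rewrite strips the per-directory prefix, including the leading slash, before the pattern is applied, so a pattern beginning with /assets/ never matches while ^(.*)$ still does. Written the way per-directory matching sees the path, the specific rule would look like:

      RewriteRule ^assets/img/newsletter/(.+)$ https://s3.amazonaws.com/mybucket/newsletters/legacy/$1 [R=301,L]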

    Read the article

  • Problem with wake after suspend using USB remote.

    - by Bod
    Hi, I'm a Linux newbie looking for some help. I'm currently setting up an XBMC HTPC using a laptop and 10.10, and all works great except for waking from suspend using the power button on the remote. Suspending from the remote works fine, as does resuming with the power button on the laptop. I've checked /proc/acpi/wakeup, which initially showed the following:

      Device  S-state   Status     Sysfs node
      C096      S5    *disabled   pci:0000:00:1e.0
      C0F1      S3    *disabled   pci:0000:00:1d.0
      C0F8      S3    *disabled   pci:0000:00:1d.1
      C0F9      S3    *disabled   pci:0000:00:1d.2
      C0FA      S3    *disabled   pci:0000:00:1d.3
      C0FB      S3    *disabled   pci:0000:00:1d.7
      C102      S5    *disabled   pci:0000:00:1c.0
      C22B      S5    *disabled   pci:0000:08:00.0
      C115      S5    *disabled   pci:0000:00:1c.2
      C22C      S5    *disabled
      C118      S5    *disabled   pci:0000:00:1c.3
      C22C      S5    *disabled

    I've since configured the above so that the S3 devices are enabled. I've confirmed that they are the correct devices using lspci:

      00:1d.0 USB Controller: Intel Corporation N10/ICH 7 Family USB UHCI Controller #1 (rev 01)
      00:1d.1 USB Controller: Intel Corporation N10/ICH 7 Family USB UHCI Controller #2 (rev 01)
      00:1d.2 USB Controller: Intel Corporation N10/ICH 7 Family USB UHCI Controller #3 (rev 01)
      00:1d.3 USB Controller: Intel Corporation N10/ICH 7 Family USB UHCI Controller #4 (rev 01)
      00:1d.7 USB Controller: Intel Corporation N10/ICH 7 Family USB2 EHCI Controller (rev 01)

    None of this has worked, unfortunately, and I'm now stuck. It simply refuses to wake up from the remote. The USB receiver shows no activity LED while suspended. Suspend/resume from the remote works fine from Windows 7, so I know the laptop is OK with it. Any ideas? I need to get this sorted to gain Wife Approval for this system. Thanks, Bod.
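
    For anyone following along, the usual way to flip those entries (and the per-device USB wakeup flag) is along these lines; a sketch only, with the device name and USB path taken as examples rather than known-good values for this laptop:

      # toggle an entry in the ACPI wakeup table (writing the name toggles enabled/disabled)
      echo C0F1 | sudo tee /proc/acpi/wakeup
      # enable wakeup on the specific USB receiver as well (the 1-1 path is an example)
      echo enabled | sudo tee /sys/bus/usb/devices/1-1/power/wakeup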

    Read the article

  • Merely installing PHP5 causes my AWS Ubuntu server to die minutes later from a massive CPU spike

    - by Mark Amery
    I have an AWS server with Ubuntu 11.04 as the OS, running an Apache2 web server (the site itself is Python-based, using Django). We recently needed to add PHP5 support to let us use a third-party PHP library (incidentally, for serving minified versions of JS and CSS files). However, for no reason any of us can discern, if we simply run sudo apt-get install php5 on the server, the install appears to finish successfully, but, without us taking any further action (including not yet running sudo apt-get install libapache2-mod-php5, which I think would be the next step, and without actually running any PHP scripts on the server), a few minutes later the server becomes impossible to connect to. Looking at the 'Monitoring' tab for the server in the EC2 Management Console reveals that a while after the installation, CPU usage spikes to 100% and stays there permanently (until we reboot the server from the AWS Console). After rebooting, the server also reliably dies within a few (between 0 and 10) minutes. We restored the server to a pre-PHP state from an AMI image, observed that it was stable, and then tried installing PHP5 again and observed the server die in exactly the same way, so we're pretty much certain that installing PHP5 is what causes the symptoms. What on earth could be causing this behaviour, and how can we get PHP installed on the server without it dying?
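
    One low-tech way to catch whatever is eating the CPU, and to see exactly what the php5 package pulled in, sketched here with example paths:

      # while the box is still reachable, log the top CPU consumers every 10s in the background
      nohup top -b -d 10 -n 360 > /tmp/top-watch.log 2>&1 &
      # after the next reboot, review what the install actually did
      grep -E ' (install|configure) ' /var/log/dpkg.log | tail -40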

    Read the article

  • Are ZFS snapshots + S3 a viable backup system for several VMs and general fileserver storage?

    - by AllanA
    I've been tasked with setting up a backup system for my small office (around 12 people). Most of our production stuff is on the AWS cloud, so what I need to back up are some small office/development files (under 100G right now), plus our operational VMs and development, which round out to a bit under 1T. I just need something reliable, convenient, and straightforward. I'm comfortable with Linux, FreeBSD, and to some extent Solaris 10, so I'm leaning toward a full server rather than an appliance system à la Openfiler or FreeNAS. What I'm contemplating is a small fileserver for general storage and nightly backups of the virtual machines, followed up by an offsite backup to Amazon's S3 storage service. It'd be the usual incremental backups nightly and a full backup weekly. My question is whether using ZFS snapshots, both locally and dumped to S3 via 'zfs send [-i]', is a viable backup tool. Or should I stick to using Duplicity, or some other method entirely? ZFS snapshots on the internal fileserver/backup machine sound like a perfect way to provide quick and convenient data recovery, so I'm likely to go with that for local redundancy. (If you folks see scenarios where relying on ZFS snapshots would be worse than a more traditional archiving backup, feel free to convince me.) But are snapshots flexible enough to lean on for recovery from the loss of my backup server? Or am I better off with something more traditional? (Feel free to recommend free or commercial backup solutions you favor.)
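
    For what it's worth, the 'zfs send [-i]' workflow mentioned above looks roughly like the sketch below; the pool, dataset, snapshot names and bucket are placeholders, and the upload step assumes an s3cmd-style client:

      # nightly: snapshot, send the increment since last night, ship it offsite
      zfs snapshot tank/office@2012-09-02
      zfs send -i tank/office@2012-09-01 tank/office@2012-09-02 | gzip > /backup/office-2012-09-02.zfs.gz
      s3cmd put /backup/office-2012-09-02.zfs.gz s3://mybucket/backups/

    The catch to weigh against Duplicity: a 'zfs send' stream is only useful if it can be received into a ZFS pool again, so restoring from S3 means rebuilding a pool first.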

    Read the article

  • What is the correct approach I should use for an application that requires Amazon S3 uploads and SimpleDB data management?

    - by Luis Oscar
    I am developing an application for iOS and that is going smoothly; the problem is that I am very new to server-side things. I am totally confused about how to correctly use Amazon Web Services for this purpose. What I want to do is very simple: I want my application to be able to query a servlet hosted in EC2 to retrieve pictures and data, based on some criteria, from S3 and SimpleDB respectively. The application should also be able to upload pictures into an S3 bucket and register the information in SimpleDB. My main concerns are security and costs. So far I was using the Amazon Token Vending Machine, but I haven't been successful in customizing it, and while researching I discovered that in the long run it is very expensive. The ultimate goal is to handle a "social" picture service for my iOS application: being able to register new users, authenticate those users, and see what permissions they have to which pictures in the bucket. And all this without having to worry about third parties accessing the private pictures of my users. Sorry for this question, but I am really clueless about how to handle this... I have tried reading many articles, but all this server stuff looks very scary.

    Read the article

  • How to find the reason for a weekly downtime on an Ubuntu web server hosted by AWS?

    - by IceSheep
    We started monitoring our web server using Pingdom and found out that we have a downtime of a few minutes every Sunday at 0:00 UTC. The test runs every minute and checks if a successful HTTP response (code 200) is returned on port 80. The test fails due to a timeout (no response after 30 seconds). Here's what we've already checked, without success:

      - Since we run our webserver behind a load balancer, I've set the Pingdom test on the load balancer's public DNS and the webserver's public DNS in order to find out if there's a problem with the AWS load balancer; both tests return the same result.
      - We set up Munin on our webserver. Everything looked fine even after the failure. Since the last failure lasted only 2 minutes, I suppose Munin couldn't capture a potential problem (it only checks every 5 minutes).
      - I have checked /var/log/apache2/error.log and /var/log/syslog for suspicious entries.
      - I have checked /etc/cron.weekly and /etc/crontab for suspicious entries.
      - I have searched for files created or last modified between 0:00 and 0:15 using this method (nothing found):

          touch -t 201209020000 start
          touch -t 201209020015 end
          find / -newer start -and ! -newer end

    Has anybody experienced a similar problem? Any proposals on how to find the reason for this behavior? It's Ubuntu 10.04 LTS running on an AWS m1.large instance. Thanks!
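
    One way to catch the culprit in the act is to schedule a capture just before the window; a sketch, assuming the server clock is UTC and using example paths:

      # /etc/cron.d/outage-watch: start a capture at 23:58 on Saturday, two minutes before the window
      58 23 * * 6 root /usr/local/bin/outage-snapshot.sh

      #!/bin/bash
      # /usr/local/bin/outage-snapshot.sh: sample system state every 30 seconds for ~10 minutes
      for i in $(seq 1 20); do
          { date; ps aux --sort=-%cpu | head -15; netstat -tn | wc -l; } >> /var/log/outage-snapshot.log
          sleep 30
      done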

    Read the article

  • How to reduce memory consumption on an AWS EC2 t1.micro instance (free tier) running Ubuntu Server 14.04 LTS (EBS)

    - by CMPSoares
    Hi, I'm working on my bachelor thesis and for that I need to host a node.js web application on AWS. In order to avoid costs I'm using a t1.micro instance with 30GB disk space (from what I know, the maximum I get in the free tier), which is barely used. Instead I have problems with memory consumption: it's using all of it. I tried the approach of creating a virtual swap area, as mentioned at "Why don't EC2 ubuntu images have swap?", with these commands:

      sudo dd if=/dev/zero of=/var/swapfile bs=1M count=2048 &&
      sudo chmod 600 /var/swapfile &&
      sudo mkswap /var/swapfile &&
      echo /var/swapfile none swap defaults 0 0 | sudo tee -a /etc/fstab &&
      sudo swapon -a

    But this swap area isn't being used somehow. Is something missing in this approach, or is there another way of reducing memory consumption on this type of AWS instance? Bottom line: this causes server freezes and crashes, and that's what I want to stop, either by using the swap, reducing memory usage, or both.
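
    A quick way to confirm whether the swap ever became active, offered as a sketch (the swappiness tweak is an assumption about what might be worth trying, not a confirmed fix):

      sudo swapon -s                      # is /var/swapfile listed with the expected size?
      free -m                             # does the Swap line show roughly 2048 total?
      cat /proc/sys/vm/swappiness         # how eagerly the kernel swaps (default is usually 60)
      sudo sysctl vm.swappiness=60        # raise it if it was set very low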

    Read the article

  • How to sort output of "s3cmd ls"

    - by dba.in.ua
    Amazon "s3cmd ls" takes like this output: 2010-02-20 21:01 1458414588 s3://file.tgz.00 2010-02-20 21:10 1458414527 s3://file.tgz.01 2010-02-20 22:01 1458414588 s3://file.tgz.00 2010-02-20 23:10 1458414527 s3://file.tgz.01 2010-02-20 23:20 1458414588 s3://file.tgz.02 How to select all files of archive, ending at 0 ... X, with the latest date of fileset ? Bash, regexp ? Thanx!

    Read the article

  • How to reliably receive a message from AWS that my instance was rebooted / terminated / stopped?

    - by Andrew Smith
    I have Nagios, and I want it to stop monitoring instances when they are stopped from the console. The requirements are:

      - The message passed from AWS is 100% reliable, e.g. when Nagios is down and the message cannot be delivered, it will be re-delivered promptly once Nagios is up.
      - The message will arrive quickly.
      - There is no need to scan the status of all instances via the EC2 API all the time, only once in a while.

    Many thanks!

    Read the article

  • AWS Free Usage Tier + Cloudflare... possible?

    - by crashintoty
    If I throw my MySQL/PHP app up on a Amazon EC2 instance (using their AWS Free Usage Tier program) and couple it with CloudFlare (the free plan of course) roughly how many daily visitors can I comfortably handle before performance starts to suffer? Just looking for a rough estimate or educated guess - I understand this setup might be less than ideal but I'm still very curious nonetheless. Thanks in advance

    Read the article

  • HAProxy and CNAME

    - by user123354
    I want to create a simple load balancer for two servers. The problem is with CNAME records, I think. Let's say I have two copies of the same application on AppFog.com: app1.aws.af.cm and app2.aws.af.cm. Here is my haproxy.cfg file:

      global
          maxconn 2000
          daemon
      defaults
          mode http
          clitimeout 60000
          srvtimeout 30000
          contimeout 4000
          option httpclose
      listen http_proxy
          bind [myip]:80
          mode http
          stats enable
          stats auth user:passwd
          stats uri /stats
          balance source
          option httpchk
          option forwardfor
          server host01 app1.aws.af.cm:80 maxconn 300 check
          server host02 app2.aws.af.cm:80 maxconn 300 check

    But this only resolves the IP for the domains app1.aws.af.cm and app2.aws.af.cm, which obviously doesn't work if I open that IP in a web browser. The problem is that AppFog doesn't have a public IP per application (same as OpenShift). How do I get HAProxy to make a proper connection between the load balancer and these two servers? Example: this is a real app, http://freechat.eu01.aws.af.cm. HAProxy only resolves the IP for this domain, which is 46.51.204.8:80. Of course this IP will not show my application, only an error page. Sorry for my poor English.

    Read the article

  • Should I stick only to AWS RDS Automated Backup or DB Snapshots?

    - by James Wise
    I am using AWS RDS for MySQL. When it comes to backups, I understand that Amazon provides two types: automated backups and database (DB) snapshots. The difference is explained here: http://aws.amazon.com/rds/faqs/#23. However, I am still confused about whether I should stick to automated backups only, or use both automated backups and manual DB snapshots. What do you think? What's your own setup? I've heard from others that automated backups are not reliable, because the database can sometimes be unrecoverable when the DB instance crashes, so DB snapshots are the way to rescue you. If I take daily DB snapshots on a schedule similar to the automated backups, I'm going to pay quite a lot more. I hope someone can enlighten me or advise me on the right setup. Thanks. James

    Read the article

  • Keeps "SSH timeout" error in our AWS instance- how do i diagnose?

    - by ming yeow
    I am befuddled by this error. We keep failing to SSH into our AWS instance, whether during deployment or via the console. I have tried rebooting a few times, but it does not seem to be helping. Here are a couple of error messages I keep getting:

      connection failed for: HOST.NAME.amazonaws.com (Errno::ETIMEDOUT: Operation timed out - connect(2))
      111.222.333.444: ssh connection failed at 2010-07-02 03:39:37

    I also SSHed in while it was up and monitored "top" around the times SSH was timing out; looking at the memory logs, it does not look like any program was hogging resources.

    Read the article
