Search Results

Search found 71 results on 3 pages for 'cloudfront'.

Page 2 of 3

  • Force CloudFront distribution/file update

    - by Martin
    I'm using Amazon's CloudFront to serve the static files of my web apps. Is there no way to tell a CloudFront distribution that it needs to refresh its files, or to point out a single file that should be refreshed? Amazon recommends that you version your files (logo_1.gif, logo_2.gif, and so on) as a workaround for this problem, but that seems like a pretty stupid solution. Is there absolutely no other way?
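
    For what it's worth, CloudFront also exposes an invalidation API that tells a distribution to refetch specific objects from the origin. A minimal sketch using boto3 (the distribution ID and object path below are placeholders, not taken from the question):

        import time
        import boto3

        cloudfront = boto3.client("cloudfront")

        # Ask the distribution to drop its cached copy of one object so the next
        # request goes back to the origin.
        response = cloudfront.create_invalidation(
            DistributionId="E1EXAMPLE12345",          # hypothetical distribution ID
            InvalidationBatch={
                "Paths": {"Quantity": 1, "Items": ["/images/logo.gif"]},
                "CallerReference": str(time.time()),  # must be unique per request
            },
        )
        print(response["Invalidation"]["Status"])     # typically "InProgress"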

    Read the article

  • Creating Signed URLs for Amazon CloudFront

    - by Zack
    Short version: how do I make signed URLs "on-demand" in Python to mimic Nginx's X-Accel-Redirect behavior (i.e. protecting downloads) with Amazon CloudFront/S3? I've got a Django server up and running with an Nginx front-end. I've been getting hammered with requests and recently had to run it as a Tornado WSGI application to prevent it from crashing in FastCGI mode. Now my server is getting bogged down (i.e. most of its bandwidth is being used up) by too many requests for media, so I've been looking into CDNs, and I believe Amazon CloudFront/S3 would be the proper solution for me. I've been using Nginx's X-Accel-Redirect header to protect the files from unauthorized downloading, but I don't have that ability with CloudFront/S3; they do, however, offer signed URLs. I'm far from a Python expert and don't know how to create a signed URL properly, so a link explaining how to make these URLs "on-demand", or an explanation here, would be greatly appreciated. Also, is this even the proper solution? I'm not too familiar with CDNs; is there one that would be better suited for this?
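
    One possible shape for the "on-demand" part, sketched with botocore's CloudFrontSigner helper and the cryptography package (both are assumptions, not from the question); the key-pair ID, key path, and distribution domain are placeholders:

        from datetime import datetime, timedelta

        from botocore.signers import CloudFrontSigner
        from cryptography.hazmat.primitives import hashes, serialization
        from cryptography.hazmat.primitives.asymmetric import padding

        KEY_PAIR_ID = "APKAEXAMPLE"                               # hypothetical CloudFront key-pair ID
        PRIVATE_KEY_PATH = "/path/to/cloudfront-private-key.pem"  # placeholder

        def rsa_signer(message):
            # CloudFront canned policies are signed with the key pair's RSA key (SHA-1).
            with open(PRIVATE_KEY_PATH, "rb") as f:
                key = serialization.load_pem_private_key(f.read(), password=None)
            return key.sign(message, padding.PKCS1v15(), hashes.SHA1())

        def signed_url(resource, lifetime_seconds=300):
            # Return a short-lived canned-policy URL for a single CloudFront resource.
            signer = CloudFrontSigner(KEY_PAIR_ID, rsa_signer)
            expires = datetime.utcnow() + timedelta(seconds=lifetime_seconds)
            return signer.generate_presigned_url(resource, date_less_than=expires)

        # In a Django view this would replace the X-Accel-Redirect response, e.g.
        # return HttpResponseRedirect(signed_url("https://d111abc.cloudfront.net/media/file.mp4"))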

    Read the article

  • Cloud computing - database loading question

    - by workwise
    Following is the situation. I want to know whether what I want is possible in cloud computing and whether it is the best way for me:
    1) My main site has a database with tables of millions of rows, and entries are added almost every second.
    2) I will set up a MySQL mirror, so there will be a backup database always in sync with the main one.
    3) There are a few tens of thousands of images and growing, so say a total of a few tens of gigabytes of image data. I will keep the image data in sync on the backup server as well.
    4) There can be short periods where traffic goes to 100x the average.
    5) I will be using memcache heavily; most database rows and even frequently used disk files/images will be in RAM.
    I want the main site to run on a dedicated server. The backup server is, say, an Amazon EC2 instance. Note that since it is a live backup, I need to run a small instance continuously. When I anticipate high traffic, I want to be able to run a large instance on the cloud and shift the traffic there. The main point is that I do not want to spend time "loading" the database on the large instance, as that can typically take minutes or even hours (from experience). So is it possible to just scale memory/CPU on demand, without having to load the database or sync up the filesystem? I want to set up my backup scripts etc. just ONCE. Thanks JP

    Read the article

  • Flowplayer RTMP streaming, MP4, Amazon CloudFront and iPad/iPhone

    - by circey
    I've been working on a site where two video clips are streamed using Amazon CloudFront and Flowplayer. You can see one video/page here: http://graemeclarkoration.org.au/gcorationp1.htm (it works as a Highslide popup/modal window, hence the lack of adornment). While it works in all browsers and on Android devices, I can't get it to work on an iPad or an iPhone; the page opens fine and the video box appears, but the video never loads. Does anyone have any ideas on how to fix this, or even why the video won't load? MTIA

    Read the article

  • Secure Streaming CDN Video Content

    - by Donalds
    Hi, I am using Amazon CloudFront to stream paid video content to my users and I am having problems securing the videos. Wowza does that by creating a secure token, but using Wowza would be much more costly. Is there any way I can better protect my content using CloudFront or another CDN? Thanks!

    Read the article

  • OSMF seek with Amazon CloudFront

    - by giorrrgio
    I've written a little OSMF player that streams via RTMP from Amazon CloudFront. There's a known issue: the MP3 duration is not read correctly from the metadata, so the seek function does not work. I know there's a workaround involving NetConnection's getStreamLength call, which I successfully implemented in a previous non-OSMF player, but now I don't know how and when to call it in terms of OSMF events and traits. This code is not working:

    protected function initApp():void {
        // the pointer to the media
        var resource:URLResource = new URLResource( STREAMING_PATH );
        // Create a mediafactory instance
        mediaFactory = new DefaultMediaFactory();
        // creates and sets the MediaElement (generic) with a resource and path
        element = mediaFactory.createMediaElement( resource );
        var loadTrait:NetStreamLoadTrait = element.getTrait(MediaTraitType.LOAD) as NetStreamLoadTrait;
        loadTrait.addEventListener(LoaderEvent.LOAD_STATE_CHANGE, _onLoaded);
        player = new MediaPlayer( element );
        // Marker 5: Add MediaPlayer listeners for media size and current time change
        player.addEventListener( DisplayObjectEvent.MEDIA_SIZE_CHANGE, _onSizeChange );
        player.addEventListener( TimeEvent.CURRENT_TIME_CHANGE, _onProgress );
        initControlBar();
    }

    private function onGetStreamLength(result:Object):void {
        Alert.show("The stream length is " + result + " seconds");
        duration = Number(result);
    }

    private function _onLoaded(e:LoaderEvent):void {
        if (e.newState == LoadState.READY) {
            var loadTrait:NetStreamLoadTrait = player.media.getTrait(MediaTraitType.LOAD) as NetStreamLoadTrait;
            if (loadTrait && loadTrait.netStream) {
                var responder:Responder = new Responder(onGetStreamLength);
                loadTrait.connection.call("getStreamLength", responder, STREAMING_PATH);
            }
        }
    }

    Read the article

  • AWS Elastic load balancer doesn't decrease instances from Alarm Trigger

    - by jchysk
    I have a load balancer for which I created an auto-scaling group and launch config. I created the auto-scaling group with a min size of 1 and a max size of 20. I have a scale-down policy:

    as-put-scaling-policy SBMScaleDownPolicy --auto-scaling-group SBMAutoScaleGroup --adjustment=-1 --type ChangeInCapacity --cooldown 300

    Then I set up an alarm:

    mon-put-metric-alarm SBMLowCPUAlarm --comparison-operator LessThanThreshold --evaluation-periods 1 --metric-name CPUUtilization --namespace "AWS/EC2" --period 600 --statistic Average --threshold 35 --alarm-actions arn:aws:autoscaling:us-east-1:policystuffhere:autoScalingGroupName/SBMAutoScaleGroup:policyName/SBMScaleDownPolicy --dimensions "AutoScalingGroupName=SBMAutoScaleGroup"

    When average CPU usage over 10 minutes is under 35, the alarm shows up in CloudWatch as "In Alarm State" but doesn't decrease the number of instances. Also, if only one instance is running, the group spins a second one up even when no scale-up alarm has fired. It seems like the default value is somehow just set to 2. How can I change this?
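
    One thing worth checking, sketched here with boto3 (an assumption; the question uses the older command-line tools): the group's desired capacity, which is what actually drives the instance count within the min/max bounds.

        import boto3

        autoscaling = boto3.client("autoscaling", region_name="us-east-1")

        # Inspect the group named in the question.
        group = autoscaling.describe_auto_scaling_groups(
            AutoScalingGroupNames=["SBMAutoScaleGroup"]
        )["AutoScalingGroups"][0]
        print(group["MinSize"], group["MaxSize"], group["DesiredCapacity"])

        # If DesiredCapacity reads 2, drop it back to 1 (honoring the scale-down cooldown).
        autoscaling.set_desired_capacity(
            AutoScalingGroupName="SBMAutoScaleGroup",
            DesiredCapacity=1,
            HonorCooldown=True,
        )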

    Read the article

  • Options for physical architecture of a Rails site regarding caching server or CDN

    - by timpone
    I have a Rails app that currently runs on a single server. On production, I force_ssl for everything. I am interested in using a caching server for images (I'm fine with CSS and JS being served from the origin for the time being). Would nginx or Varnish (which I have no experience with) be a better solution (as of October 2012)? I'd imagine that it would be easy to switch these around while still on this single-server architecture. Or would something like CloudFront (which I also have no experience with) make sense for hosting image files? I know this is a vague question but appreciate any current feedback. Thanks in advance.

    Read the article

  • How to host a naked domain on a CDN?

    - by rjw79
    If I have a domain that I wish to serve "naked" (e.g. http://examp.le/) and efficiently via a CDN, what are my options? The issue is that the CDNs I looked at all want you to use a CNAME so that they can do geo-IP lookup. CNAMEs are not meant to sit at the same level as other records, and this apparently breaks some DNS resolvers; you at least need SOA and MX records at that level for a naked domain. The only solutions I can see are: having A records in your own DNS, thus skipping the geo-IP lookup, or finding a CDN that will allow delegation of the whole domain so they can do geo-IP things for the A record directly. I've tried googling and can't find any CDN that offers this. Any ideas? I looked closely at Amazon CloudFront and Rackspace Cloud Files but couldn't work it out for either.
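
    If the zone itself were hosted in Route 53, one assumption-laden sketch is an alias record, which behaves like an A record at the apex but resolves to a CloudFront distribution (the hosted zone ID and distribution domain are placeholders; Z2FDTNDATAQYW2 is the fixed zone ID used for CloudFront alias targets):

        import boto3

        route53 = boto3.client("route53")

        # Point the naked domain at a CloudFront distribution without a CNAME.
        route53.change_resource_record_sets(
            HostedZoneId="Z1EXAMPLEZONE",     # hypothetical hosted zone for examp.le
            ChangeBatch={
                "Changes": [{
                    "Action": "UPSERT",
                    "ResourceRecordSet": {
                        "Name": "examp.le.",
                        "Type": "A",
                        "AliasTarget": {
                            "HostedZoneId": "Z2FDTNDATAQYW2",
                            "DNSName": "d111abcdef8.cloudfront.net.",
                            "EvaluateTargetHealth": False,
                        },
                    },
                }]
            },
        )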

    Read the article

  • Choosing between cloud (Cloud Foundry) and virtual servers - for developers

    - by Mike Z
    I just came across some articles on how to set up your own cloud using Cloud Foundry and Ubuntu, which got me thinking about choosing our infrastructure. If we want to use our own servers, what's the advantage of running a cloud on top of virtual servers vs. just using virtual servers and a VPN? If we develop for the cloud now, we can quickly move to a cloud provider later if we need to, but other than that, what are the advantages and disadvantages of a private cloud in these areas:
    speed of development, testing, deployment
    server management
    security
    Will having an extra layer (the cloud) take a hit on server performance, and how big? Any other advantages/disadvantages?

    Read the article

  • Using Amazon S3/Cloudfront and Encoding.com to deliver web video – step by step for iPhone/iPod/iPad

    - by joelvarty
    The Amazon AWS newsletter for May 2010 had a great link to this article by encoding.com on how you can use their service to encode your video for multi-format, multi-bandwidth streaming to many devices, including iPhone, iPad, and Flash with H.264. This looks like it doesn't actually take advantage of CloudFront streaming, but merely splits your encoded files into the available chunks and includes all of the M3U8 files that point to the different bitrates and such. It looks like a pretty sweet service in general, especially since they seem to have an API as well, so it may be very useful to those of you out there looking to host video. more later – joel

    Read the article

  • Serving Compressed Files: Amazon vs Lighttpd

    - by tike
    We are currently using Amazon CloudFront to serve CSS, and according to Amazon itself, CloudFront can serve both compressed and uncompressed files from an origin server. But when I check compression, everything looks fine on the origin server, while the same file served through CloudFront shows up as not compressed. E.g. http://www.port80software.com/tools/compresscheck.asp?url=http%3A%2F%2Fimgsrv.mydomain.com%2Fen-UK%2Fsomething.css results in "Compression status: (gzip)", while with CloudFront http://www.port80software.com/tools/compresscheck.asp?url=http%3A%2F%2Fhereisit.cloudfront.net%2Fsomething.css gives "Compression status: Uncompressed". The origin server is running lighttpd with mod_deflate; the relevant config is: deflate.allowed_encodings = ("bzip2", "gzip", "deflate") (I would think listing extra allowed encodings shouldn't hurt as such.) Here I am clueless as to what the real issue is.
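
    A quick way to compare the two responses directly, as a minimal Python sketch (the URLs are stand-ins for the real origin and distribution): request the same file from each with an explicit Accept-Encoding header and look at what comes back.

        import urllib.request

        URLS = [
            "http://imgsrv.mydomain.com/en-UK/something.css",      # origin (lighttpd)
            "http://hereisit.cloudfront.net/en-UK/something.css",  # CloudFront
        ]

        for url in URLS:
            req = urllib.request.Request(url, headers={"Accept-Encoding": "gzip"})
            with urllib.request.urlopen(req) as resp:
                # A gzip-capable response should report Content-Encoding: gzip.
                print(url)
                print("  Content-Encoding:", resp.headers.get("Content-Encoding", "(none)"))
                print("  Vary:            ", resp.headers.get("Vary", "(none)"))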

    Read the article

  • Securing ClickOnce hosted with Amazon S3 Storage

    - by saifkhan
    Well, since my post on hosting ClickOnce with Amazon S3 Storage, I've received quite a few emails asking how to secure the deployment. At the time of this post I regret to say that there is no way to secure your ClickOnce deployment hosted with Amazon S3. The S3 storage is secured by ACLs, meaning that credentials have to be provided before access. Amazon CloudFront, which sits on top of S3, allows you to apply security settings to your CloudFront distribution by applying encryption to the URL or restricting by IP. The problem with CloudFront is that the encryption of the URL is mandatory, and ClickOnce does not provide a way to pass the "Amazon Public Key" to the CloudFront URL (you probably could if you started editing the XML and HTML files ClickOnce generates, but that defeats the purpose of ClickOnce altogether). What would be nice is if Amazon allowed users to restrict by IP addresses or IP blocks. I sent them an email and received a response that this is something they are looking into... I won't hold my breath though. Alternative: I suggest you look at Rackspace Cloud hosting (http://www.rackspacecloud.com); they have very competitive pricing and recently started hosting Windows virtual servers. What you can do is rent a virtual server and set up IIS to host your ClickOnce applications. You can then use IIS security settings to restrict which IPs/blocks can access your ClickOnce payloads. Note: you don't really need Windows Server to host ClickOnce; any web server will do. If you are familiar with Linux you can run that VM with Rackspace for half the price of Windows. I hope you found this information helpful.

    Read the article

  • Need a CDN with SSL

    - by Till
    We currently use EdgeCast through Speedyrails. Back when I did my research they were both fast and very cost-effective. I haven't looked in a while, but now we need SSL on our assets as well. I reached out to our current provider and they want a setup fee plus something like 260 USD per host per month (we currently use multiple hosts). I looked at AWS CloudFront and it seems the most cost-effective way to get SSL, but then it's not a custom domain (e.g. cdn.example.org), which I could live with. Has anyone else researched this lately and has any providers to get in touch with? They can be resellers or direct buys. I'm not looking for a bargain, I just want to get an idea what these things cost. Edit, 2012-08-23: a must-have is custom origin, e.g. I don't want to manually upload files somewhere else. EdgeCast and CloudFront both support this.

    Read the article

  • Rewriting Apache URLs to use only paths and set response headers

    - by jabley
    I have Apache httpd in front of an application running in Tomcat. The application exposes URLs of the form:

    /path/to/images?id={an-image-id}

    The entities returned by such URLs are images (even though URIs are opaque, I find human-friendly ones easier to work with!). The application does not set caching directives on the image response, so I've added that via Apache:

    # LocationMatch to set caching directives on image responses
    <LocationMatch "^/path/to/images$">
        # Can't have Set-Cookie on the response, otherwise the downstream caching proxy won't cache!
        Header unset Set-Cookie
        # Mark the response as cacheable.
        Header append Cache-Control "max-age=8640000"
    </LocationMatch>

    Note that I can't use ExpiresByType, since not all images served by the app have versioned URIs. I know that the ones served by the /path/to/images resource handler are versioned URIs though, which don't perform any sort of content negotiation and thus are ripe for far-future Expires management. This is working well for us. Now a requirement has come up to put something else in front of the app (in this case, Amazon CloudFront) to further distribute and cache some of the content. Amazon CloudFront will not pass query string parameters through to my origin server. I thought I would be able to work around this by changing my Apache config appropriately:

    # Rewrite to map new Amazon CloudFront friendly URIs to the application resources
    RewriteRule ^/new/path/to/images/([0-9]+) /path/to/images?id=$1 [PT]

    # LocationMatch to set caching directives on image responses
    <LocationMatch "^/path/to/images$">
        # Can't have Set-Cookie on the response, otherwise the downstream caching proxy won't cache!
        Header unset Set-Cookie
        # Mark the response as cacheable.
        Header append Cache-Control "max-age=8640000"
    </LocationMatch>

    This works fine in terms of serving the content, but there are no longer caching directives with the response. I've tried playing around with [PT] and [P] for the RewriteRule, and adding a new LocationMatch directive:

    # Rewrite to map new Amazon CloudFront friendly URIs to the application resources
    # /new/path/to/images/12345 -> /path/to/images?id=12345
    RewriteRule ^/new/path/to/images/([0-9]+) /path/to/images?id=$1 [PT]

    # LocationMatch to set caching directives on image responses
    <LocationMatch "^/path/to/images$">
        # Can't have Set-Cookie on the response, otherwise the downstream caching proxy won't cache!
        Header unset Set-Cookie
        # Mark the response as cacheable.
        Header append Cache-Control "max-age=8640000"
    </LocationMatch>

    <LocationMatch "^/new/path/to/images/">
        # Can't have Set-Cookie on the response, otherwise the downstream caching proxy won't cache!
        Header unset Set-Cookie
        # Mark the response as cacheable.
        Header append Cache-Control "max-age=8640000"
    </LocationMatch>

    Unfortunately, I'm still unable to get the Cache-Control header added to the response with the new URL format. Please point out what I'm missing to get /new/path/to/images/12345 returning a 200 response with a Cache-Control: max-age=8640000 header. Pointers as to how to debug Apache in cases like this would be appreciated as well!

    Read the article

  • Create 301 Redirection in Amazon Route 53 for Wildcard Subdomains

    - by Eric Yin
    My domain name is hosted on Route 53 DNS. Amazon has a guide for doing a 301 redirect from www. to the naked domain by pointing the www. version at an S3 static website with the 301 set up. My question is: how can I have *.domain.com all 301-redirect to the naked domain name? I guess the options are either:
    Some way to get all wildcard subdomains to end up in one S3 bucket (how?). Or:
    Use CloudFront on the www. version S3 site and put the wildcard subdomains on CloudFront (but how?). Or:
    There's some hidden setting in Route 53 (then where?). Or:
    Use EC2 (better not to suggest this one; it's too costly for this task).
    Please advise.
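
    For the S3 piece of the first option, a minimal boto3 sketch (an assumption; the bucket name is a placeholder): an S3 website bucket can be told to 301 every request it receives to the naked domain, leaving the wildcard DNS side to be solved separately.

        import boto3

        s3 = boto3.client("s3")

        # Any request reaching this bucket's website endpoint is answered with a
        # 301 to the naked domain.
        s3.put_bucket_website(
            Bucket="redirect.domain.com",     # hypothetical bucket name
            WebsiteConfiguration={
                "RedirectAllRequestsTo": {
                    "HostName": "domain.com",
                    "Protocol": "http",
                }
            },
        )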

    Read the article

  • Cloud just for hosting big files?

    - by yes123
    I need a solution to store my big files (50 MB+ each). Currently I am using a European dedicated server (100 Mbit/s) with 8,000 GB/month at 60 USD. I would like to use a cloud service that automatically fetches my files from my server the first time users request them (like a classic CDN), so I can keep all files stored on a single server. I was looking at Amazon CloudFront and, to get the same 8,000 GB/month of bandwidth, I would have to pay something like 2,000 USD vs. the 60 USD for my dedicated server. Is there a cheaper alternative?

    Read the article

  • Can't seem to sign AWS CloudFront URL properly

    - by Joe Corkery
    Hi everybody, I found a lot of detailed examples online on how to sign an Amazon CloudFront URL for private content. Unfortunately, whenever I implement these examples my URL doesn't seem to work. The resource path is correct, because I can download the file when it is set for world read, but the URL doesn't work when it is set just for authorized users. The PHP code I am using is below. If anybody has any insight into what I might be doing wrong (I'm guessing it is something obvious that I am just not seeing right now), it would be greatly appreciated.

    function urlCloudFront($resource) {
        $AWS_CF_KEY = 'APKA...';
        $priv_key = file_get_contents(path_to_pem_file);
        $pkeyid = openssl_get_privatekey($priv_key);
        $expires = strtotime("+ 3 hours");
        $policy_str = '{"Statement":[{"Resource":"'.$resource.'","Condition":{"DateLessThan":{"AWS:EpochTime":'.$expires.'}}}]}';
        $policy_str = trim( preg_replace( '/\s+/', '', $policy_str ) );
        $res = openssl_sign($policy_str, $signature, $pkeyid, OPENSSL_ALGO_SHA1);
        $signature_base64 = (base64_encode($signature));
        $repl = array('+' => '-','=' => '_','/' => '~');
        $signature_base64 = strtr($signature_base64,$repl);
        $url = $resource . '?Expires=' .$expires. '&Signature=' . $signature_base64 . '&Key-Pair-Id='. $AWS_CF_KEY;
        print '<p><A href="' .$url. '">Download VIDA (CloudFrount)</A>';
    }

    urlCloudFront("http://mydistcloud.cloudfront.net/mydir/myfile.tar.gz");

    Thanks.

    Read the article

  • How do I login once I promote my Windows Server 2012 to domain controller in my Amazon VPC?

    - by Developr
    I am following this guide: http://d36cz9buwru1tt.cloudfront.net/pdf/EC2_AD_How_to.pdf to set up my domain controller. I get AD installed correctly, but when I do the promotion to DC, the server restarts, and when I try to access it I am unable to log in using any of the local system accounts. I even created my own separate user account, but that did not help. I made sure to disable the Amazon settings for renaming the machine; the machine has a static IP and has been renamed.

    Read the article

  • Building a distributed system on Amazon Web Services

    - by Songo
    Would simply using AWS to build an application make that application a distributed system? For example, if someone uses RDS for the database server, EC2 for the application itself, and S3 for hosting user-uploaded media, does that make it a distributed system? If not, then what should it be called, and what is this application lacking for it to be distributed? Update: here is my take on the application, to clarify my approach to building the system. The application I'm building is a social game for Facebook. I developed it locally on a LAMP stack using Symfony2. For production I used a single EC2 micro instance for hosting the app itself, RDS for my database, S3 for user-uploaded files, and CloudFront for static content. I know this may sound like a naive approach, so don't be shy about expressing your ideas.

    Read the article

  • Is HR The New IT?

    - by Scott Ewart
    As recruitment, on-boarding and development head to the cloud and mobile devices put sophisticated tools into everyone's hands, HR leaders are discovering that technology savvy and analytical skills are key to effective talent management. In this article by Ladan Nikravan in the September edition of Talent Management magazine, Oracle's own Chris Leone, SVP of Fusion Strategy, gives his take on how technology trends such as social, mobile, big data and the cloud are creating a fundamental change in how employees and HR create value and relationships within the networked organization. Read the full article here: http://d27vj430nutdmd.cloudfront.net/23555/122778/122778.1.pdf

    Read the article

  • What is the advantage to hosting static resources on a separate domain?

    - by Michael Ekstrand
    I notice a lot of sites host their static resources on a separate domain from the main site, e.g. StackExchange using sstatic.net, Barnes & Noble using imagesbn.com, etc. I understand that there are benefits to putting your static resources on a separate host, possibly with an efficient static-file web server like nginx, freeing up the main server to focus on serving dynamic content. Similarly, outsourcing to a shared CDN like CloudFront or Akamai is logical. What is the benefit of using a separate domain otherwise, though? Why sstatic.net instead of static.stackexchange.com? Update: several answers miss the core question. I understand that there is benefit to splitting between multiple hosts — parallel downloads, a slimmer web server, etc. But what is more elusive is why multiple domains. Why sstatic.net rather than static.stackexchange.com as the host for shared resources? So far, only one answer has addressed that.

    Read the article

  • Can I make Google Analytics set its cookies on just a subdomain? (I.e. www.domain.com, not domain.com)

    - by Paul D. Waite
    I’m using Google Analytics on a site — let’s call it www.domain.com. My Google Analytics website profile is for www.domain.com, and my only report is set up for www.domain.com. Requests to domain.com redirect permanently to www.domain.com. I’ve got the regular Analytics JavaScript on my index page for the domain. For some reason, it seems to be setting its cookies for domain.com instead of www.domain.com. This is unfortunate, as I’ve got cdn.domain.com set up as a CDN using Amazon Cloudfront, so I’d rather not have useless cookies (Analytics seems to set four cookies) cluttering up those requests. How can I make Analytics set cookies for www.domain.com instead of domain.com?

    Read the article
