Search Results

Search found 2842 results on 114 pages for 'amazon route53'.


  • What does BucketAlreadyOwnedByYou error (from Amazon S3) actually mean? I can't find any reason affe

    - by Phyo Wai Win
    Hi there, I am using Amazon S3 to back up my Rails app's MySQL database, using the astrails-safe plugin, and I get the "Your previous request to create the named bucket succeeded and you already own it. (AWS::S3::BucketAlreadyOwnedByYou)" error back whenever I try to update it. I have checked that the folder I back up into already exists in my account; it's just that I can't upload the files from the code (using astrails-safe). Any help would be appreciated! Thanks.
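
    S3 returns BucketAlreadyOwnedByYou when a CreateBucket request names a bucket that already exists and belongs to the caller (outside us-east-1, which silently returns 200 instead), so backup tools that unconditionally call create-bucket can run into it. A minimal boto3 sketch, not astrails-safe's own code, showing how the error surfaces and why it is usually safe to ignore; the bucket name is hypothetical:

        import boto3
        from botocore.exceptions import ClientError

        s3 = boto3.client("s3")
        bucket = "my-backup-bucket"   # hypothetical name

        try:
            s3.create_bucket(Bucket=bucket)
        except ClientError as err:
            if err.response["Error"]["Code"] == "BucketAlreadyOwnedByYou":
                pass   # the bucket exists and belongs to this account; carry on
            else:
                raise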

    Read the article

  • Selecting keys based on metadata, possible with Amazon S3?

    - by nbv4
    I'm sending files to my S3 bucket that are basically gzipped database dumps. The keys are a human-readable date ("2010-05-04.dump"), and along with that I'm setting a metadata field to the UNIX time of the dump. I want to write a script that retrieves the latest dump from the bucket, that is, the key with the largest UNIX time metadata value. Is this possible with Amazon S3, or is this not how S3 is meant to work? I'm using both the command line tool aws and the Python library boto.
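
    S3 has no server-side query on user metadata, so the usual approach is to list the keys and HEAD each one to read its metadata, then pick the largest timestamp client-side (with date-shaped keys like these, simply taking the lexicographically largest key would give the same answer). A sketch using boto3 rather than the older boto, with a hypothetical bucket and metadata field name:

        import boto3

        s3 = boto3.client("s3")
        bucket = "my-dump-bucket"          # hypothetical

        latest_key, latest_ts = None, -1
        for page in s3.get_paginator("list_objects_v2").paginate(Bucket=bucket):
            for obj in page.get("Contents", []):
                head = s3.head_object(Bucket=bucket, Key=obj["Key"])
                ts = int(head["Metadata"].get("unix-time", 0))   # hypothetical field name
                if ts > latest_ts:
                    latest_key, latest_ts = obj["Key"], ts

        print(latest_key)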

    Read the article

  • How do I use Amazon's new RRS for S3?

    - by pbarney
    Reduced Redundancy Storage (RRS) is a new service from Amazon that is a bit cheaper than S3 because there is less redundancy. However, I cannot find any information on how to specify that my data should use RRS rather than standard S3. In fact, there doesn't seem to be any website interface for S3 at all: if I log into AWS, there are only options for EC2, Elastic MapReduce, CloudFront and RDS, none of which I use. Any insight? Thanks.
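
    RRS is not a bucket-level toggle; it is a per-object storage class supplied when the object is written (the x-amz-storage-class header underneath). A minimal sketch, assuming boto3 and a hypothetical bucket, key and file:

        import boto3

        s3 = boto3.client("s3")
        with open("site-backup.tar.gz", "rb") as body:
            s3.put_object(
                Bucket="my-bucket",
                Key="backups/site-backup.tar.gz",
                Body=body,
                StorageClass="REDUCED_REDUNDANCY",   # instead of the default STANDARD
            )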

    Read the article

  • With the attachment_fu rails plugin, is there any way to delete files uploaded to Amazon S3?

    - by Eric Nguyen
    Let's say I'm using attachment_fu to attach profile pics to user profiles in a system, with Amazon S3 used as the actual file storage. When users upload new profile pics, I'd like to replace the attached file with the new one. I can do this within my database (i.e. the file metadata) easily, but attachment_fu doesn't seem to provide methods for deleting the files from S3. Am I missing something, or am I approaching this the wrong way? Many thanks!
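
    attachment_fu aside (its S3 backend is Ruby), the underlying operation when replacing a profile pic is just a DeleteObject call on the key the plugin stored, which can also be issued directly if the plugin doesn't expose it. A boto3 sketch with a hypothetical bucket and key layout:

        import boto3

        s3 = boto3.client("s3")
        s3.delete_object(Bucket="my-app-assets", Key="profile_pics/42/old_avatar.png")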

    Read the article

  • Which is better for saving a website's images: Amazon S3, Flickr or Picasa?

    - by Amr ElGarhy
    I have a website and am going to extend it so users can upload their images; I want to save users' images in another storage service. Users will save images and view them, and also share them with others. I know that I can do this using Amazon S3, Flickr or Picasa, but I want to know which is better than the others: which one should I use and why? Based on your experience, can you recommend one, or advise me if you know a better service than those three?

    Read the article

  • What are the options for hosting a small Plone site?

    - by Tina Russell
    I’ve developed a portfolio website for myself using Plone 4, and I’m looking for someplace to host it. Most Plone hosting services seem to focus on large, corporate deployments, but I need something that I can afford on a very limited budget and fits a small, single-admin website. My understanding is that my basic options are thus: I can go with a hosting service that specifically provides Plone. I know of WebFaction, but what others exist? Also, I’d have two stipulations for a Plone hosting service: (a) It needs to use Plone 4, for which I’ve developed my site, and (b) it needs to allow me SSH access to a home directory (including the Plone configuration), so that I may use my custom development eggs and such. I could use a VPS hosting service. What are my options here? Again, I need something cheap and scaled to my level. I could use Amazon EC2 or a similar service (please tell me of any) and pay by the tiniest unit of data. I’m a little scared of this because I have no idea how to do a cost-benefit analysis between this and a regular VPS host. The advantage of this approach would be that I only pay for what I use, making it very scalable, but I don’t know how the overall cost would compare to any VPS host under similar circumstances. What factors enter into the cost of Amazon EC2? What can I expect to pay under either option for regular traffic for a new website? Which one is more desirable for when a rush of visitors drive up my bandwidth bill? One last note: I know Plone isn’t common for websites for individuals, but please don’t try to talk me out of it here; that’s a completely different subject. For now, assume I’m sticking with Plone for good. Also, I have seen the Plone hosting services list on Plone.org—it’s twenty pages long, and the first page was nothing but professional Plone consulting services that sometimes offer hosting for business clients. So, that wasn’t much help. Thank you!

    Read the article

  • Permission denied on /dev/xvdf despite sudo?

    - by Sid
    I'm trying to get a stream off port 9999 and write to /dev/xvdf. I'm using Amazon EC2 with the Ubuntu 11.10 server image. By default I log in as 'ubuntu', which has sudo privileges. However, when I run the following netcat command, I get this error:

        ubuntu@ip-10-252-35-122:~$ ls -al /dev/xv*
        brw-rw---- 1 root disk 202,  1 2011-11-30 22:22 /dev/xvda1
        brw-rw---- 1 root disk 202, 80 2011-11-30 22:27 /dev/xvdf
        ubuntu@ip-10-252-35-122:~$ sudo netcat -p 9999 -l > /dev/xvdf
        bash: /dev/xvdf: Permission denied

    Any idea why I get the permission denied error and how I can work around it? Update: Something mysterious is running in the background that resets the permissions? Check the snippet below; the permission flags seem to automatically reset when I try using /dev/xvdf !?!

        ubuntu@ip-10-252-35-122:~$ sudo chmod 777 /dev/xvdf
        ubuntu@ip-10-252-35-122:~$ ls -al /dev/xvdf
        brwxrwxrwx 1 root disk 202, 80 2011-11-30 22:43 /dev/xvdf
        ubuntu@ip-10-252-35-122:~$ sudo nc -p 9999 -l > /dev/xvdf
        This is nc from the netcat-openbsd package. An alternative nc is available
        in the netcat-traditional package.
        usage: nc [-46DdhklnrStUuvzC] [-i interval] [-P proxy_username] [-p source_port]
                  [-s source_ip_address] [-T ToS] [-w timeout] [-X proxy_protocol]
                  [-x proxy_address[:port]] [hostname] [port[s]]
        ubuntu@ip-10-252-35-122:~$ ls -al /dev/xvdf
        brw-rw---- 1 root disk 202, 80 2011-11-30 22:43 /dev/xvdf

    I'm using stock Ubuntu Amazon EC2 images from http://alestic.com/ (links to images at the top)
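
    The likely cause is that the '> /dev/xvdf' redirection is performed by the unprivileged login shell before sudo ever runs netcat, so the device is opened as the ubuntu user; common workarounds are sudo sh -c '... > /dev/xvdf' or piping into sudo dd of=/dev/xvdf (the reverted permissions are probably udev re-applying its device rules, though that part is a guess). A small Python sketch of the same idea, run under sudo so the privileged process opens the device itself; device and port match the question:

        import socket

        DEVICE = "/dev/xvdf"
        PORT = 9999

        srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
        srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
        srv.bind(("0.0.0.0", PORT))
        srv.listen(1)
        conn, _ = srv.accept()

        # the device is opened by this (root) process, not by the calling shell
        with open(DEVICE, "wb") as dev:
            while True:
                chunk = conn.recv(1 << 20)
                if not chunk:
                    break
                dev.write(chunk)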

    Read the article

  • How can I prevent double file uploading with Amazon S3?

    - by Tony
    I decided to use Amazon S3 for document storage for an app I am creating. One issue I've run into is that, while the files need to be uploaded to S3, I also need to create a document object in my app so my users can perform CRUD actions. One solution is to allow a double upload: a user uploads a document to the server my Rails app lives on, I validate and create the object, then pass it on to S3. One issue with this is that progress indicators become more complicated; most out-of-the-box plugins would show the client that the file has finished uploading because it is on my server, but then there would be a decent delay while the file goes from my server to S3. It also uses bandwidth that does not seem necessary. The other solution I am thinking about is to upload the file directly to S3 with one AJAX request, and when that is successful, make a second AJAX request to store the object in my database. One issue here is that I would have to validate the file after it is uploaded, which means I have to run some clean-up code in S3 if the validation fails. Both seem equally messy. Does anyone have something more elegant working that they would not mind sharing? I would imagine this is a common situation with "cloud storage" being quite popular today. Maybe I am looking at this wrong.
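
    One common shape for the second option is to have the app only sign the upload, let the browser POST straight to S3, and record the document once the client reports success; basic constraints such as a size limit can be baked into the signed policy so some validation happens before the bytes land. A sketch using boto3's presigned POST support; bucket, key and limits are hypothetical:

        import boto3

        s3 = boto3.client("s3")
        post = s3.generate_presigned_post(
            Bucket="my-app-uploads",
            Key="documents/doc-123.pdf",                            # hypothetical key
            Conditions=[["content-length-range", 0, 10 * 1024 * 1024]],
            ExpiresIn=600,
        )
        # hand post["url"] and post["fields"] to the browser, which submits a
        # multipart/form-data POST directly to S3; the second AJAX call then
        # tells the app which key to attach to the new document record
        print(post["url"], post["fields"])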

    Read the article

  • s3fs Input/output error

    - by shadow_of__soul
    I'm trying to set up a backup system with s3fs and the Amazon S3 service. I followed these two guides: http://qugstart.com/blog/linux/how-to-mount-an-amazon-s3-bucket-as-virtual-drive-on-centos-5-2/ and http://blog.eberly.org/2008/10/27/how-i-automated-my-backups-to-amazon-s3-using-rsync/. Anyway, tailing /var/log/messages I get: Aug 28 13:37:46 server s3fs:###response=403. I already tried creating the authentication file at /etc/passwd-s3fs and setting the access and private key there, and passing them through the command line; I checked the credentials several times and used them with S3Fox, and they work. I have also set the time of the machine (with the date command) to be the same as the Amazon S3 servers (I got the time of the S3 server by uploading a file with the file manager). Not only does rsync not work; commands like ls or cp in /mnt/s3 don't work either. Any help on how I can solve/debug this? Regards, Shadow.
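
    A 403 from s3fs usually comes down to the credentials, the request signature, or the clock, so one quick way to narrow it down is to hit the same bucket from the same machine with the same key pair but without s3fs. A sketch assuming boto3 is available; bucket name and keys are placeholders:

        import boto3

        s3 = boto3.client(
            "s3",
            aws_access_key_id="AKIA...",          # the same pair given to s3fs
            aws_secret_access_key="...",
        )
        # raises botocore.exceptions.ClientError on 403/404
        s3.head_bucket(Bucket="my-backup-bucket")
        print("credentials, clock and bucket access look fine; suspect the s3fs config")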

    Read the article

  • AWS RDS MySQL with Beanstalk Hibernate app: character encoding issue

    - by TeraTon
    I'm running a webapp on Amazon RDS with Tomcat 7 and Spring, which uses Hibernate as the persistence layer. The application and UTF-8 encoding work properly on localhost, but for some reason the UTF-8 encoding breaks when I deploy to Amazon. I use MySQL 5.5.27 on Amazon RDS, and the table that we wish to update has its collation set to utf8 - utf8_unicode_ci. In Hibernate I have set: <prop key="hibernate.connection.charSet">UTF-8</prop>. UTF-8 characters get replaced by ??? and this is of course especially bad for passwords and usernames + email, as it basically kills them. Has anyone else encountered character encoding breaking when deploying to Amazon?
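
    Independent of Hibernate, it can help to establish whether the mangling happens inside MySQL/RDS or in the Java connection settings (the usual JDBC-side fix is useUnicode=true&characterEncoding=UTF-8 on the connection URL). A small diagnostic sketch, assuming pymysql is available; host, credentials and database are placeholders:

        import pymysql

        conn = pymysql.connect(
            host="mydb.xxxxxx.us-east-1.rds.amazonaws.com",
            user="admin",
            password="secret",
            database="myapp",
            charset="utf8mb4",       # explicit connection charset
        )
        with conn.cursor() as cur:
            cur.execute("SELECT %s", ("pässwörd-ÅÄÖ",))
            print(cur.fetchone())    # should round-trip unmangled if the server side is fine
        conn.close()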

    Read the article

  • Remove IP address from the URL of website using apache

    - by sapatos
    I'm on an EC2 instance and have a domain domain.com linked to the EC2 nameservers, and it happily serves my pages if I type domain.com in the URL. However, when a page is served the URL resolves to 1.1.1.10/directory/page.php. Using Apache I've set up the following VirtualHost, following the examples provided at http://httpd.apache.org/docs/2.0/dns-caveats.html:

        Listen 80
        NameVirtualHost 1.1.1.10:80
        <VirtualHost 1.1.1.10:80>
            DocumentRoot /var/www/html/directory
            ServerName domain.com
            # Other directives here ...
            <FilesMatch "\.(ico|pdf|flv|jpg|jpeg|png|gif|js|css|swf)$">
                Header set Cache-Control "max-age=290304000, public"
            </FilesMatch>
        </VirtualHost>

    However, I'm not getting any changes to how the URL is displayed. This is the only VirtualHost configured on this site, and I've confirmed it's the one being used because I've managed to break it a number of times whilst experimenting with the configuration. The Route 53 entries I have are:

        domain.com  A    1.1.1.10
        domain.com  NS   ns-11.awsdns-11.com ns-111.awsdns-11.net ns-1111.awsdns-11.org ns-1111.awsdns-11.co.uk
        domain.com  SOA  ns-11.awsdns-11.com. awsdns-hostmaster.amazon.com. 1 1100 100 1101100 11100

    Read the article

  • How to hide Amazon webstore default toolbar with Prototype?

    - by melaos
    Hi guys, after playing around this morning, I've found that there's this default chunk of HTML code in the Amazon webstore which adds a toolbar on top of the page. The HTML looks like below:

        <td id="wba_logo_bg">
          <table class="logo" border="0" cellpadding="0" cellspacing="0" width="100%">
            <tbody><tr><td align="left"></td>
              <td class="wba_account" style="padding: 5px;" align="right" valign="top">
                <table border="0" cellpadding="0" cellspacing="0">
                  <form action="#" id="searchForm" method="get" name="searchForm"></form>
                  <tbody><tr>
                    <td class="wba_account_link">
                      <a xmlns:xhtml="http://www.w3.org/1999/xhtml" class="myAccountNav" href="#" onclick="return false;">home</a></td>
                    <td class="myAccountDots"></td>
                    <td class="wba_account_link"><a class="myAccountNav" href="#" onclick="return false;">view cart</a></td>
                    <td class="myAccountDots"></td>
                    <td class="wba_account_link"><a class="myAccountNav" href="#" onclick="return false;">my account</a></td>
                    <td class="myAccountDots"></td>
                    <td class="wba_account_link"><a class="myAccountNav" href="#" onclick="return false;">order status</a></td>
                    <td><img src="pageEditor_files/1_pixel.gif" hspace="7"></td>
                    <td><input name="keyword" tabindex="1" type="text"></td>
                    <td><img alt="Search" class="wba_search_btn" onclick="return false;" onkeyup="if (13==event.keyCode) searchForm.submit();" src="pageEditor_files/btn_search.gif" style="cursor: pointer;" tabindex="2" title="Search" hspace="3">
                    </td></tr></tbody>
                </table>
              </td></tr></tbody>
          </table>
        </td>

    Thus far I was able to use Prototype to find those with the class name of wba_account_link and hide them via the code below:

        function hideAmazonToolbar()
        {
            $("wba_logo_bg").hide();
        }//end function

    but what I really want to do is hide the whole tbody instead, and with my limited Prototype skills I don't really know how to do this. Can anybody point me to the right resources on how to get this done? Thanks! EDIT: Went higher up and apparently there's a td with an id, and solved it using Prototype's hide function! Man, I love JavaScript frameworks :) thanks :)

    Read the article

  • Is it possible to gzip and upload this string to Amazon S3 without ever being written to disk?

    - by BigJoe714
    I know this is probably possible using Streams, but I wasn't sure the correct syntax. I would like to pass a string to the Save method and have it gzip the string and upload it to Amazon S3 without ever being written to disk. The current method inefficiently reads/writes to disk in between. The S3 PutObjectRequest has a constructor with InputStream input as an option.

        import java.io.*;
        import java.util.zip.GZIPOutputStream;
        import com.amazonaws.auth.PropertiesCredentials;
        import com.amazonaws.services.s3.AmazonS3;
        import com.amazonaws.services.s3.AmazonS3Client;
        import com.amazonaws.services.s3.model.PutObjectRequest;

        public class FileStore {

            public static void Save(String data) throws IOException {
                File file = File.createTempFile("filemaster-", ".htm");
                file.deleteOnExit();
                Writer writer = new OutputStreamWriter(new FileOutputStream(file));
                writer.write(data);
                writer.flush();
                writer.close();

                String zippedFilename = gzipFile(file.getAbsolutePath());
                File zippedFile = new File(zippedFilename);
                zippedFile.deleteOnExit();

                AmazonS3 s3 = new AmazonS3Client(new PropertiesCredentials(
                        new FileInputStream("AwsCredentials.properties")));
                String bucketName = "mybucket";
                String key = "test/" + zippedFile.getName();
                s3.putObject(new PutObjectRequest(bucketName, key, zippedFile));
            }

            public static String gzipFile(String filename) throws IOException {
                try {
                    // Create the GZIP output stream
                    String outFilename = filename + ".gz";
                    GZIPOutputStream out = new GZIPOutputStream(new FileOutputStream(outFilename));

                    // Open the input file
                    FileInputStream in = new FileInputStream(filename);

                    // Transfer bytes from the input file to the GZIP output stream
                    byte[] buf = new byte[1024];
                    int len;
                    while ((len = in.read(buf)) > 0) {
                        out.write(buf, 0, len);
                    }
                    in.close();

                    // Complete the GZIP file
                    out.finish();
                    out.close();
                    return outFilename;
                } catch (IOException e) {
                    throw e;
                }
            }
        }
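
    For what it's worth, the same fully in-memory idea sketched in Python with boto3 rather than the Java SDK (the Java equivalent would gzip into a ByteArrayOutputStream and pass a ByteArrayInputStream plus ObjectMetadata to PutObjectRequest): gzip into a byte buffer and hand the bytes straight to PutObject, so nothing touches disk. Bucket and key are hypothetical:

        import gzip
        import io

        import boto3

        def save(data: str, bucket: str = "mybucket", key: str = "test/page.htm.gz") -> None:
            buf = io.BytesIO()
            with gzip.GzipFile(fileobj=buf, mode="wb") as gz:
                gz.write(data.encode("utf-8"))
            boto3.client("s3").put_object(
                Bucket=bucket,
                Key=key,
                Body=buf.getvalue(),          # the gzipped bytes, never written to disk
                ContentEncoding="gzip",
            )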

    Read the article

  • What Amazon S3 .NET Library is most useful and efficient?

    - by Geo
    There are two main open source .NET Amazon S3 libraries: Three Sharp and LitS3. I am currently using LitS3 in our MVC demo project, but there is some criticism about it. Has anyone here used both libraries so they can give an objective point of view? Below are some sample calls using LitS3. On the demo controller:

        private S3Service s3 = new S3Service()
        {
            AccessKeyID = "Thekey",
            SecretAccessKey = "testing"
        };

        public ActionResult Index()
        {
            ViewData["Message"] = "Welcome to ASP.NET MVC!";
            return View("Index", s3.GetAllBuckets());
        }

    On the demo view:

        <% foreach (var item in Model) { %>
            <p>
                <%= Html.Encode(item.Name) %>
            </p>
        <% } %>

    EDIT 1: Since I have to keep moving and there is no clear indication of which library is more effective and kept more up to date, I have implemented a repository pattern with an interface that will allow me to change libraries if I need to in the future. Below is a section of the S3Repository that I have created, which will let me change libraries in case I need to:

        using LitS3;

        namespace S3Helper.Models
        {
            public class S3Repository : IS3Repository
            {
                private S3Service _repository;

                #region IS3Repository Members

                public IQueryable<Bucket> FindAllBuckets()
                {
                    return _repository.GetAllBuckets().AsQueryable();
                }

                public IQueryable<ListEntry> FindAllObjects(string BucketName)
                {
                    return _repository.ListAllObjects(BucketName).AsQueryable();
                }

                #endregion

    If you have any information about this question please let me know in a comment, and I will get back and edit the question. EDIT 2: Since this question is not getting attention, I integrated both libraries in my web app to see the differences in design. I know this is probably a waste of time, but I really want a good long-run solution. Below you will see two samples of the same action with the two libraries; maybe this will motivate some of you to let me know your thoughts. WITH THREE SHARP LIBRARY:

        public IQueryable<T> FindAllBuckets<T>()
        {
            List<string> list = new List<string>();
            using (BucketListRequest request = new BucketListRequest(null))
            using (BucketListResponse response = service.BucketList(request))
            {
                XmlDocument bucketXml = response.StreamResponseToXmlDocument();
                XmlNodeList buckets = bucketXml.SelectNodes("//*[local-name()='Name']");
                foreach (XmlNode bucket in buckets)
                {
                    list.Add(bucket.InnerXml);
                }
            }
            return list.Cast<T>().AsQueryable();
        }

    WITH LITS3 LIBRARY:

        public IQueryable<T> FindAllBuckets<T>()
        {
            return _repository.GetAllBuckets()
                              .Cast<T>()
                              .AsQueryable();
        }

    Read the article

  • Incremental backups from Rackspace Cloud Files to Amazon Glacier

    - by Martin Wilson
    Is there a software product/module (open-source or commercial) that can provide incremental backups from Rackspace Cloud Files to Amazon Glacier? We are looking for something that will provide the following functionality (or achieve the same result, i.e. a cost-effective backup strategy for files stored in Rackspace Cloud Files): Work out which files have been added to or modified in a Rackspace Cloud account (since the last backup). Create a ZIP (or similar) of these files and store them in Amazon Glacier. Keep a record of which files are in which ZIPs. Ideally, restore either a single file or all files from Glacier back into Rackspace.
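
    I can't point to a specific off-the-shelf product, but the workflow described above is scriptable: keep a manifest of name-to-ETag entries from the previous run, diff it against the current Cloud Files listing, zip what changed, and push the zip to Glacier while recording which files went into which archive. A hedged sketch; fetching the Cloud Files listing and object bodies (e.g. with pyrax) is assumed to happen elsewhere, and boto3 handles the Glacier side:

        import json
        import zipfile
        from pathlib import Path

        import boto3

        MANIFEST = Path("manifest.json")     # name -> etag from the previous run

        def backup(current_listing, fetch, vault="cloudfiles-backups"):
            """current_listing: iterable of (name, etag); fetch(name) -> bytes."""
            previous = json.loads(MANIFEST.read_text()) if MANIFEST.exists() else {}
            changed = [(n, e) for n, e in current_listing if previous.get(n) != e]
            if not changed:
                return None

            archive = Path("incremental.zip")
            with zipfile.ZipFile(archive, "w", zipfile.ZIP_DEFLATED) as zf:
                for name, _ in changed:
                    zf.writestr(name, fetch(name))

            resp = boto3.client("glacier").upload_archive(
                vaultName=vault,
                archiveDescription="incremental: %d files" % len(changed),
                body=archive.read_bytes(),
            )
            # roll the manifest forward; resp["archiveId"] is what you would store
            # alongside the list of files that went into this zip
            previous.update(dict(changed))
            MANIFEST.write_text(json.dumps(previous))
            return resp["archiveId"]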

    Read the article

  • Sendmail relay out of Amazon EC2?

    - by Stephen Belanger
    I have a site running on CentOS 5.4 through Amazon EC2. Unfortunately, Amazon has had nothing but trouble with their entire IP range regularly getting blacklisted by spam-tracking services. I need a mail server, so I set up an SMTP server elsewhere that I want to send to, but I can't just send directly to it through PHP because the direct SMTP request is way too slow. What I want to do is relay through sendmail, but I've never used sendmail before, so I have no idea how to configure it. All I want is for all emails sent from localhost to be relayed to one specific external server, but I don't know how to do that. I tried to find a tutorial online, but couldn't find anything that was particularly clear about how to go about it. Can anyone help me out?

    Read the article

  • Jungledisk - choosing between Amazon S3 and Rackspace ?

    - by Dietmar
    Hi, JungleDisk is owned by Rackspace and offers an option to use Amazon S3, which charges for traffic, whereas if data is stored on Rackspace there are no traffic charges. For a small company that just wants to back up files with JungleDisk from Ubuntu Linux computers, I wonder which to choose. Commercially and psychologically, the traffic pricing and the Amazon brand make it appear (or maybe it really is) of better quality than the cheaper Rackspace offer, which is fine when it comes to just backing up. But for backing up around 30 GB that updates from time to time, what would you consider a good choice?

    Read the article

  • Post-Purchase Social Media

    - by David Dorf
    When you make a particularly good purchase, the natural tendency is to share the experience with friends. You show them your cool new toy or garment, then explain how you discovered such a great deal, all the while implying you are the world's most savvy shopper. My wife does it with clothes, housewares, and books, and I do it with whiz-bang techie stuff. Post-purchase euphoria or buyer's remorse is associated with most purchases beyond day-to-day needs. So now let's add social media to the mix. Haul videos are a YouTube phenomenon where a shopper describes their latest haul on video. Blair Fowler, aka juicystar07, is an excellent example. She and her older sister's haul videos have been viewed 75,000,000 times, at times causing particular items to sell out after being showcased. If you're not already on this bandwagon, check out Blair's haul video from her trip to Forever21. There are a couple of good articles on this trend from ABC's GMA, Slate, and NPR. Some retailers are already sending free products to these fashionistas in the hope they'll be reviewed on camera. For those less willing to exert themselves, there's Blippy, a service that automatically tweets your purchases. Similar to Twitter, your purchases are tweeted so your friends can see what you've purchased and your network can make comments. In the example to the right, co-founder Philip Kaplan purchased a gift for his wife from the store Does Your Mother Know, proving the point that the need for privacy is overblown. Blippy has partnerships with selected merchants like Apple, Amazon, and Netflix and can also get purchases from the credit cards you've registered. When you register, you can configure whether to tweet each purchase automatically or approve them first. No sense in broadcasting my need for Rogaine, right? This is a good thing for retailers, as it helps spread the word about purchases and gives other people ideas. Rick just bought an ooma from Amazon. What the heck is ooma? Oh, it's like Vonage but with no monthly bills. I'm there.

    Read the article

  • What are the options for hosting a small Plone site? [closed]

    - by Tina Russell
    Possible Duplicate: How to find web hosting that meets my requirements? I’ve developed a portfolio website for myself using Plone 4, and I’m looking for someplace to host it. Most Plone hosting services seem to focus on large, corporate deployments, but I need something that I can afford on a very limited budget and fits a small, single-admin website. My understanding is that my basic options are thus: I can go with a hosting service that specifically provides Plone. I know of WebFaction, but what others exist? Also, I’d have two stipulations for a Plone hosting service: (a) It needs to use Plone 4, for which I’ve developed my site, and (b) it needs to allow me SSH access to a home directory (including the Plone configuration), so that I may use my custom development eggs and such. I could use a VPS hosting service. What are my options here? Again, I need something cheap and scaled to my level. I could use Amazon EC2 or a similar service (please tell me of any) and pay by the tiniest unit of data. I’m a little scared of this because I have no idea how to do a cost-benefit analysis between this and a regular VPS host. The advantage of this approach would be that I only pay for what I use, making it very scalable, but I don’t know how the overall cost would compare to any VPS host under similar circumstances. What factors enter into the cost of Amazon EC2? What can I expect to pay under either option for regular traffic for a new website? Which one is more desirable for when a rush of visitors drive up my bandwidth bill? One last note: I know Plone isn’t common for websites for individuals, but please don’t try to talk me out of it here; that’s a completely different subject. For now, assume I’m sticking with Plone for good. Also, I have seen the Plone hosting services list on Plone.org—it’s twenty pages long, and the first page was nothing but professional Plone consulting services that sometimes offer hosting for business clients. So, that wasn’t much help. Thank you!

    Read the article

  • Installing Yaws server on Ubuntu 12.04 (Using a cloud service)

    - by Lee Torres
    I'm trying to get a Yaws web server working on a cloud service (Amazon AWS). I've compiled and installed a local copy on the server. My problem is that I can't get Yaws to run on either port 8000 or port 80. I have the following configuration in yaws.conf:

        port = 8000
        listen = 0.0.0.0
        docroot = /home/ubuntu/yaws/www/test
        dir_listings = true

    This produces the following successful launch/result:

        Eshell V5.8.5  (abort with ^G)
        =INFO REPORT==== 16-Sep-2012::17:21:06 ===
        Yaws: Using config file /home/ubuntu/yaws.conf
        =INFO REPORT==== 16-Sep-2012::17:21:06 ===
        Ctlfile : /home/ubuntu/.yaws/yaws/default/CTL
        =INFO REPORT==== 16-Sep-2012::17:21:06 ===
        Yaws: Listening to 0.0.0.0:8000 for <3> virtual servers:
         - http://domU-12-31-39-0B-1A-F6:8000 under /home/ubuntu/yaws/www/trial
         -
        =INFO REPORT==== 16-Sep-2012::17:21:06 ===
        Yaws: Listening to 0.0.0.0:4443 for <1> virtual servers:
         -

    When I try to access the URL (http://ec2-72-44-47-235.compute-1.amazonaws.com), it never connects. I've tried using paping (http://code.google.com/p/paping/) to check whether port 80 or 8000 is open, and I get a "Host can not be resolved" error, so obviously something isn't working. I've also tried setting yaws.conf so it's at port 80, like this:

        port = 80
        listen = 0.0.0.0
        docroot = /home/ubuntu/yaws/www/test
        dir_listings = true

    and I get the following error:

        =ERROR REPORT==== 16-Sep-2012::17:24:47 ===
        Yaws: Failed to listen 0.0.0.0:80 : {error,eacces}
        =ERROR REPORT==== 16-Sep-2012::17:24:47 ===
        Can't listen to socket: {error,eacces}
        =ERROR REPORT==== 16-Sep-2012::17:24:47 ===
        Top proc died, terminate gserv
        =ERROR REPORT==== 16-Sep-2012::17:24:47 ===
        Top proc died, terminate gserv
        =INFO REPORT==== 16-Sep-2012::17:24:47 ===
            application: yaws
            exited: {shutdown,{yaws_app,start,[normal,[]]}}
            type: permanent
        {"Kernel pid terminated",application_controller,"{application_start_failure,yaws,{shutdown,{yaws_app,start,[normal,[]]}}}"}

    I've also opened up port 80 using iptables. Running sudo iptables -L gives this output:

        Chain INPUT (policy ACCEPT)
        target     prot opt source                        destination
        ACCEPT     tcp  --  ip-192-168-2-0.ec2.internal   ip-192-168-2-16.ec2.internal  tcp dpt:http
        ACCEPT     tcp  --  0.0.0.0                       anywhere                      tcp dpt:http
        ACCEPT     all  --  anywhere                      anywhere                      ctstate RELATED,ESTABLISHED
        ACCEPT     tcp  --  anywhere                      anywhere                      tcp dpt:http
        ACCEPT     tcp  --  anywhere                      anywhere                      tcp dpt:http

        Chain FORWARD (policy ACCEPT)
        target     prot opt source                        destination

        Chain OUTPUT (policy ACCEPT)
        target     prot opt source                        destination

    In addition, I've gone to the security group panel in the Amazon AWS configuration area and added ports 80, 8000, and 8080 with IP source 0.0.0.0. Please note: if you try to access the URL of the virtual server now, it likely won't connect because I'm not currently running the yaws daemon. I've tested it when running yaws either through yaws or yaws -i. Thanks for the patience.
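
    One piece of this is not EC2-specific: the {error,eacces} when Yaws tries to listen on port 80 is the ordinary Unix rule that binding ports below 1024 requires root (or CAP_NET_BIND_SERVICE), independent of iptables and the security group. A minimal, Yaws-free check in Python:

        import socket

        s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
        try:
            s.bind(("0.0.0.0", 80))    # as an unprivileged user this raises EACCES
            print("bound to port 80 (running privileged)")
        except PermissionError as err:
            print("same class of error Yaws hits:", err)
        finally:
            s.close()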

    Read the article

  • Web Designer and/or Developer

    - by chimps
    We've outsourced our app development; the devs have created a DB hosted on Amazon EC2. We're in talks with a web designer for the website, but the designer does not do any backend integration, i.e. connecting the website with the DB created by the app developers. Do you recommend getting designs from the designer and having a freelancer do the front-end/back-end integration? I mean, would there be issues/complications? Or should we go with a designer who provides the complete package?

    Read the article

  • Hosting multiple low traffic websites on ec2

    - by Niko Sams
    We have around 30 websites with almost no traffic (fewer than ~10 visits a day) which are currently hosted on a dedicated server. We are evaluating hosting on Amazon EC2, but I'm not sure how to do it properly: one (micro) instance per website is too expensive, and ~10 websites on one instance (using Apache virtual hosts) makes auto scaling impossible (or at least difficult). Or is cloud computing not suitable for such a use case?

    Read the article

  • AdBlock users statistics

    - by DataSmarter
    Are there statistics on internet users who use AdBlock or other ad-blocking plug-ins? Is there a statistical breakdown, for example per country (I assume it must vary a lot)? I was unable to Google the information I am looking for. The reason I am asking is that I have just signed up for the "Amazon Partners" program and see that this affiliate program is listed on the AdBlock blacklist.

    Read the article

  • How to Make the Kindle Fire Silk Browser *Actually* Fast!

    - by The Geek
    Not that long ago, we reviewed the Kindle Fire, and one of our biggest complaints was how lousy the browser is, but we've discovered the trick to making it actually fast. Here's how to fix it.

    Read the article
