Search Results

Search found 3025 results on 121 pages for 'amazon ec2'.

Page 105 of 121

  • What considerations should be made for a web app to be released on a cloud hosted system?

    - by Rhubarb
    I have a web app that is primarily a WordPress app, but it pulls content from a Django app, simply by calling a service that uses Django models. My understanding of cloud computing is a bit vague. If the site needs to scale up with short notice, does the cloud provider (Amazon, Rackspace, whomever) simply spin up new instances (copies) of my initially configured server? How is state managed between all of them? Are there any good primers on this subject? It's hard to find much out there without getting caught up in the marketing.
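
    As one concrete illustration of what managing state across identical instances usually looks like on the Django side (the backend choice and hostname below are assumptions, not from the question): every instance points at the same external session/cache store, so any copy of the server can handle any request and new instances need no local state.

        # settings.py -- a minimal sketch; backend and hostname are assumed
        CACHES = {
            "default": {
                "BACKEND": "django.core.cache.backends.memcached.MemcachedCache",
                "LOCATION": "cache.internal.example.com:11211",
            }
        }
        # sessions live in the shared cache instead of on any one instance's disk
        SESSION_ENGINE = "django.contrib.sessions.backends.cache"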

    Read the article

  • Tried and tested Apache Axis2 books

    - by josh
    I am looking to get my feet wet with Java web services. Apache CXF and Axis2 seem to be in demand a lot. I googled a bit to find a good Axis2 book with plenty of working examples. Some titles, like Quick Start Apache Axis 2, seem to have bad reviews on Amazon, and others, like Web Services with Apache CXF and Axis2, seem to lack working examples. I want opinions from people who currently work with Axis2 and have read a book or two on the subject. Which is a good book for an Axis2 n00b?

    Read the article

  • RDF Usage Rates for Syndication

    - by David in Dakota
    Is RDF still widely used for content syndication? Specifically, I know of only Slashdot as a large-scale website syndicating content in that format (versus, say, RSS). Understandably this might seem too vague to answer, so more specifically:

      - Can anyone list any larger sites, similar in scale to Amazon or CNN, using it?
      - Are there any web-based publishing platforms (WordPress, Joomla, etc.) that generate syndication feeds with this XML vocabulary?
      - Is there any other, more quantifiable evidence that it is used for syndication online?

    I understand that RDF may be a parent specification, but in this case I'm talking about sites that syndicate content using rdf:RDF as the root element and heavily leveraging elements from the RDF namespace: http://www.w3.org/1999/02/22-rdf-syntax-ns#
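
    For reference, a feed of the kind described -- rdf:RDF as the root element, with the RDF namespace used throughout -- is an RSS 1.0 feed; a minimal skeleton looks like this:

        <?xml version="1.0" encoding="UTF-8"?>
        <rdf:RDF xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#"
                 xmlns="http://purl.org/rss/1.0/">
          <channel rdf:about="http://example.org/feed.rdf">
            <title>Example Site</title>
            <link>http://example.org/</link>
            <description>Example feed</description>
            <items>
              <rdf:Seq>
                <rdf:li rdf:resource="http://example.org/post-1"/>
              </rdf:Seq>
            </items>
          </channel>
          <item rdf:about="http://example.org/post-1">
            <title>First post</title>
            <link>http://example.org/post-1</link>
          </item>
        </rdf:RDF>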

    Read the article

  • Expression Tree : C#

    - by nettguy
    My understanding of expression trees is: expression trees are an in-memory representation of an expression, such as an arithmetic or boolean expression. The expression is stored as a parsed tree, so we can easily translate it into another language. LINQ to SQL uses expression trees: when we write a LINQ to SQL query, the compiler translates it into a parsed expression tree, which is passed to SQL Server as T-SQL statements. SQL Server executes the T-SQL query and sends the results back. That is why, when you execute LINQ to SQL, you get IQueryable<T>, not IEnumerable<T> -- because IQueryable contains:

        public interface IQueryable : IEnumerable
        {
            Type ElementType { get; }
            Expression Expression { get; }
            IQueryProvider Provider { get; }
        }

    Questions:

      1. Microsoft uses expression trees to implement LINQ to SQL. What are the different ways I can use expression trees to improve my own code?
      2. Apart from LINQ to SQL and LINQ to Amazon, who has used expression trees in their applications?
      3. LINQ to Objects returns IEnumerable and LINQ to SQL returns IQueryable -- what does LINQ to XML return?
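
    As a small self-contained illustration of working with expression trees directly, outside any LINQ provider (an example only, not from the question):

        using System;
        using System.Linq.Expressions;

        class ExpressionDemo
        {
            static void Main()
            {
                // Assigning a lambda to Expression<...> builds a tree instead of a delegate;
                // the tree can be inspected, rewritten, or translated before it is ever run.
                Expression<Func<int, bool>> isEven = x => x % 2 == 0;

                Console.WriteLine(isEven.Body);        // prints: ((x % 2) == 0)

                // Compile the tree into a delegate and execute it.
                Func<int, bool> compiled = isEven.Compile();
                Console.WriteLine(compiled(10));       // True
            }
        }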

    Read the article

  • SSL port didn't work on nginx

    - by Jin Lin
    I set up Unicorn and nginx on one of my EC2 machines, and requests load fine with nginx listening on port 80. But when I enable SSL, listening on port 443, it doesn't work, although the site still works on port 80. The server block:

        server {
          listen 443 ssl;

          # replace with your domain name
          server_name domain.com;
          # replace this with your static Sinatra app files, root + public
          root /home/ubuntu/domain/public;

          ssl on;
          ssl_certificate /etc/ssl/domain.crt;
          ssl_certificate_key /etc/ssl/domain.key;

          # maximum accepted body size of client request
          client_max_body_size 4G;
          # the server will close connections after this time
          keepalive_timeout 5;

          location ~ ^/assets/ {
            add_header ETag "";
            gzip_static on;
            expires max;
            add_header Cache-Control public;
          }

          location / {
            proxy_set_header X-Forwarded-Proto https;
            try_files $uri @app;
          }

          location @app {
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
            proxy_set_header Host $http_host;
            proxy_redirect off;
            # pass to the upstream unicorn server mentioned above
            proxy_pass http://unicorn_server;
          }
        }
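
    Two quick checks worth running for a setup like this (the commands assume a standard nginx install and the domain above; note also that on EC2 the instance's security group must allow inbound traffic on port 443 before any external client can reach the SSL listener):

        # confirm the config parses and make sure the running nginx picked it up
        sudo nginx -t && sudo nginx -s reload

        # see whether anything answers on 443 and which certificate it serves
        openssl s_client -connect domain.com:443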

    Read the article

  • What database works well with 200+GB of data?

    - by taw
    I've been using MySQL (with InnoDB, on Amazon RDS) because it's the sort-of-universal default, but it's been ridiculously under-performing, and tweaking it only delays the inevitable. The data is mostly relatively short blobs (<1 kB each) of information about 100M+ URLs. There is (or should be; MySQL can't seem to handle it) a very high volume of inserts / updates / retrievals but few complex queries -- not that complex queries wouldn't be useful, but MySQL is so slow that it's far faster to get the data out, process it locally, and cache the results somewhere. I can keep tweaking MySQL and throwing more hardware at it, but it seems increasingly futile. So what are the options? SQL / the relational model / etc. are optional -- anything will do as long as it's fast, networked, and language-independent.
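
    For what it's worth, the workload described is essentially a key-value store; a sketch of its shape (table and column names assumed):

        -- a sketch of the access pattern: short blobs keyed by URL
        CREATE TABLE url_info (
          url_hash BINARY(16) NOT NULL PRIMARY KEY,  -- e.g. MD5 of the URL
          data     BLOB       NOT NULL               -- the <1 kB blob
        ) ENGINE=InnoDB;

        -- the hot path is single-row writes and reads by key
        REPLACE INTO url_info (url_hash, data)
          VALUES (UNHEX(MD5('http://example.com/')), '...');
        SELECT data FROM url_info WHERE url_hash = UNHEX(MD5('http://example.com/'));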

    Read the article

  • Enabling mod_wsgi in Apache for a Django app on Gentoo

    - by hobbes3
    I installed Apache, Django, and mod_wsgi on Gentoo using emerge (on Amazon EC2). I know that mod_wsgi is configured in /etc/apache2/modules.d/70_mod_wsgi.conf:

        <IfDefine WSGI>
        LoadModule wsgi_module modules/mod_wsgi.so
        </IfDefine>
        # vim: ts=4 filetype=apache

    So in my /etc/conf.d/apache I added the WSGI define:

        APACHE2_OPTS="-D DEFAULT_VHOST -D INFO -D SSL -D SSL_DEFAULT_VHOST -D LANGUAGE -D WSGI"

    But when I try to list the loaded modules, mod_wsgi isn't listed:

        root ~ # apache2 -M | grep wsgi
        Syntax OK

    I also know that mod_wsgi isn't loading properly because the Apache configuration doesn't recognize WSGIScriptAlias. By the way, for Django to work I need to include a custom Apache configuration file. Where should I insert the line below?

        Include "/var/www/localhost/htdocs/mysite/apache/apache_django_wsgi.conf"

    I currently have that in httpd.conf, but I feel like that file will get reset whenever I upgrade Gentoo or a related package.

    EDIT: it seems the mod_wsgi file is located in /usr/lib64/apache2/modules/mod_wsgi.so. Here are my detailed Apache settings:

        root@ip-99-99-99-99 /usr/portage/eclass # apache2 -V
        Server version: Apache/2.2.21 (Unix)
        Server built:   Mar 7 2012 06:52:30
        Server's Module Magic Number: 20051115:30
        Server loaded:  APR 1.4.5, APR-Util 1.3.12
        Compiled using: APR 1.4.5, APR-Util 1.3.12
        Architecture:   64-bit
        Server MPM:     Prefork
          threaded:     no
            forked:     yes (variable process count)
        Server compiled with....
         -D APACHE_MPM_DIR="server/mpm/prefork"
         -D APR_HAS_SENDFILE
         -D APR_HAS_MMAP
         -D APR_HAVE_IPV6 (IPv4-mapped addresses enabled)
         -D APR_USE_SYSVSEM_SERIALIZE
         -D APR_USE_PTHREAD_SERIALIZE
         -D APR_HAS_OTHER_CHILD
         -D AP_HAVE_RELIABLE_PIPED_LOGS
         -D DYNAMIC_MODULE_LIMIT=128
         -D HTTPD_ROOT="/usr"
         -D SUEXEC_BIN="/usr/sbin/suexec"
         -D DEFAULT_PIDLOG="/var/run/httpd.pid"
         -D DEFAULT_SCOREBOARD="logs/apache_runtime_status"
         -D DEFAULT_LOCKFILE="/var/run/accept.lock"
         -D DEFAULT_ERRORLOG="logs/error_log"
         -D AP_TYPES_CONFIG_FILE="/etc/apache2/mime.types"
         -D SERVER_CONFIG_FILE="/etc/apache2/httpd.conf"
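
    Not from the question, but one conventional place for site-specific directives on Gentoo is a file under /etc/apache2/vhosts.d/, which httpd.conf normally includes and which package upgrades leave alone; a sketch, with the filename assumed:

        # /etc/apache2/vhosts.d/10_mysite_wsgi.conf  (filename assumed)
        <IfDefine WSGI>
            Include "/var/www/localhost/htdocs/mysite/apache/apache_django_wsgi.conf"
        </IfDefine>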

    Read the article

  • Ngram IDF smoothing

    - by adi92
    I am trying to use IDF scores to find interesting phrases in my pretty huge corpus of documents. I basically need something like Amazon's Statistically Improbable Phrases, i.e. phrases that distinguish a document from all the others. The problem I am running into is that some (3,4)-grams in my data which have super-high IDF actually consist of component unigrams and bigrams which have really low IDF. For example, "you've never tried" has a very high IDF, while each of the component unigrams has very low IDF. I need to come up with a function that can take in the document frequencies of an n-gram and all its component (n-k)-grams and return a more meaningful measure of how much the phrase will distinguish the parent document from the rest. If I were dealing with probabilities, I would try interpolation or backoff models. I am not sure what assumptions/intuitions those models leverage to perform well, and so how well they would do for IDF scores. Does anybody have any better ideas?
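
    For illustration only, the interpolation idea mentioned at the end can be applied directly to the IDF scores; the weight and function below are assumptions, not an established method:

        def smoothed_idf(ngram_idf, component_idfs, lam=0.5):
            """Interpolate an n-gram's own IDF with the mean IDF of its components.

            A phrase like "you've never tried" (high n-gram IDF, low component IDFs)
            is pulled down, while a phrase built from genuinely rare parts keeps a
            high score. lam is a tuning weight; 0.5 is an arbitrary starting point.
            """
            return lam * ngram_idf + (1 - lam) * sum(component_idfs) / len(component_idfs)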

    Read the article

  • How to make an ISO copy of the Linux filesystem and user files of a Debian-based VPS?

    - by moogeek
    Hello! I have a Debian-based VPS with some hosting provider. I want to migrate away from it, and I need to make a full copy of the whole Linux filesystem (and installed packages) plus the home directory with the website files, and then pack/convert it into an ISO image so I can use it on cloud hosts like Amazon. The problem is that I only have ssh root access, and hosting support can't do this for me. Another part of the question: is it possible to enlarge the Linux filesystem without re-installing it, by using the free space of the home directory? I guess it is possible with rsync or something like that. Will my MySQL databases be copied together with all the other data? Thanks in advance!
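
    A minimal sketch of the rsync approach mentioned above (the destination path and exclusions are assumptions; a live MySQL server is better dumped with mysqldump than copied file-by-file):

        # pull a full copy of the VPS filesystem over ssh, preserving permissions, ACLs and xattrs
        rsync -aAXz --numeric-ids \
          --exclude={"/proc/*","/sys/*","/dev/*","/run/*","/tmp/*"} \
          root@your-vps:/ /backup/vps-image/

        # dump the databases separately so the copy is consistent
        ssh root@your-vps "mysqldump --all-databases" > /backup/vps-image/all-databases.sql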

    Read the article

  • Ruby ways to authenticate using headers?

    - by webdestroya
    I am designing an API system in Ruby on Rails, and I want to be able to log queries and authenticate users. However, I do not have a traditional login system; I want to use an API key and a signature that users can submit in the HTTP headers of the request (similar to how Amazon's services work). Instead of requesting /users/12345/photos/create, I want to be able to request /photos/create, submit a header that says X-APIKey: 12345, and then validate the request with a signature. Are there any gems that can be adapted to do that? Or better yet, any gems that do this without adaptation? Or do you feel it would be wiser to just have them send the API key with each request in the POST/GET vars?
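
    A rough sketch of the header-based approach in a plain Rails controller (the header names, the User model, and the HMAC scheme are all assumptions, not an existing gem):

        # app/controllers/api_controller.rb -- hypothetical names throughout
        require "openssl"

        class ApiController < ApplicationController
          before_filter :authenticate_api_user!   # before_action in newer Rails

          private

          def authenticate_api_user!
            api_key   = request.headers["X-APIKey"]
            signature = request.headers["X-Signature"]
            user      = User.find_by_api_key(api_key)

            # recompute the signature server-side from the shared secret and the raw request
            if user
              expected = OpenSSL::HMAC.hexdigest(OpenSSL::Digest::SHA256.new, user.api_secret,
                                                 "#{request.method}#{request.fullpath}#{request.raw_post}")
            end

            head :unauthorized unless user && signature == expected
          end
        end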

    Read the article

  • Alternative databases to use when putting IIS Logs into a database using LogParser

    - by Robin Day
    We have run some scripts that use LogParser to dump our IIS logs into a SQL Server database. We can then query this to get simple stats on hits, usage, etc. It's also good when linking it to error log databases and a performance counter database to compare usage with errors, and so on. Having implemented this for just one system, and for only the last 2-3 weeks, we already have a 5 GB database with around 10 million records. This is making any queries against this database quite slow and will no doubt cause storage issues if we continue to log as we are. Can anyone suggest any alternative databases that we could use for this data that would be more efficient for such logs? I'd be particularly interested in any experience of Google's BigTable or Amazon's SimpleDB. Are either of these suitable for reporting queries? COUNTs, GROUP BYs, PIVOTs?

    Read the article

  • Cannot Access Shared Folder From IIS

    - by Tim Scott
    From IIS I need to access a folder on another computer. Both servers are Windows 2008 SP2, and they live in a Virtual Private Cloud on Amazon EC2. They reach one another by private IP -- they are in WORKGROUP, not a domain. I can access the shared folder manually when logged in to the client as Administrator, but IIS gets "access denied." Here's what I have done:

      - Set File Sharing = ON
      - Set Password Protected Sharing = OFF
      - Set Public Folder Sharing = ON
      - Shared the folder
      - Added permission to the share: Everyone, Full Control
      - Added permission to the share: NETWORK SERVICE, Full Control
      - Verified that File & Printer Sharing is checked in Windows Firewall
      - Opened port 445 to inbound traffic from local sources

    I tried adding <remote-machine-name>\NETWORK SERVICE to the share, but it says it does not recognize the machine, which makes sense, I guess. As I said, from the other computer I have no trouble accessing the shared folder from my user account, but IIS is shut out. How does the file server even know the difference? I would assume that with Everyone given full control and password-protected sharing turned off, it would not matter what the client user account is. In any case, how to solve? UPDATE: To clarify, I am not trying to serve up files on the share directly through IIS. Rather, I am writing files to the share from my code (System.IO).

    Read the article

  • memory tuning with rails/unicorn running on ubuntu

    - by user970193
    I am running Unicorn on Ubuntu 11, Rails 3.0, and Ruby 1.8.7. It is an 8-core EC2 box, and I am running 15 workers. CPU never seems to get pinned, and I seem to be handling requests pretty nicely. My question concerns memory usage, and what concerns I should have with what I am seeing (if any). Here is the scenario: under constant load (about 15 reqs/sec coming in from nginx), over the course of an hour, each server in the 3-server cluster loses about 100 MB/hour. This is a linear slope for about 6 hours, then it appears to level out, but still seems to lose about 10 MB/hour. If I drop my page caches using the Linux command echo 1 > /proc/sys/vm/drop_caches, the available free memory shoots back up to what it was when I started the Unicorns, and the memory loss pattern begins again over the following hours.

    Before:

                     total       used       free     shared    buffers     cached
        Mem:       7130244    5005376    2124868          0     113628     422856
        -/+ buffers/cache:    4468892    2661352
        Swap:     33554428          0   33554428

    After:

                     total       used       free     shared    buffers     cached
        Mem:       7130244    4467144    2663100          0        228      11172
        -/+ buffers/cache:    4455744    2674500
        Swap:     33554428          0   33554428

    My Ruby code does use memoization, and I'm assuming Ruby/Rails/Unicorn keeps its own caches... what I'm wondering is: should I be worried about this behaviour? FWIW, my Unicorn config:

        worker_processes 15
        listen "#{CAPISTRANO_ROOT}/shared/pids/unicorn_socket", :backlog => 1024
        listen 8080, :tcp_nopush => true
        timeout 180
        pid "#{CAPISTRANO_ROOT}/shared/pids/unicorn.pid"

        GC.respond_to?(:copy_on_write_friendly=) and GC.copy_on_write_friendly = true

        before_fork do |server, worker|
          STDERR.puts "XXXXXXXXXXXXXXXXXXX BEFORE FORK"
          print_gemfile_location

          defined?(ActiveRecord::Base) and ActiveRecord::Base.connection.disconnect!
          defined?(Resque) and Resque.redis.client.disconnect

          old_pid = "#{CAPISTRANO_ROOT}/shared/pids/unicorn.pid.oldbin"
          if File.exists?(old_pid) && server.pid != old_pid
            begin
              Process.kill("QUIT", File.read(old_pid).to_i)
            rescue Errno::ENOENT, Errno::ESRCH
              # already killed
            end
          end

          File.open("#{CAPISTRANO_ROOT}/shared/pids/unicorn.pid.ok", "w"){|f| f.print($$.to_s)}
        end

        after_fork do |server, worker|
          defined?(ActiveRecord::Base) and ActiveRecord::Base.establish_connection
          defined?(Resque) and Resque.redis.client.connect
        end

    Is there a need to experiment with enforcing more stringent garbage collection using OobGC (http://unicorn.bogomips.org/Unicorn/OobGC.html)? Or is this just normal behaviour, and when/as the system needs more memory, it will empty the caches by itself, without me manually running that cache command? Basically, is this normal, expected behaviour? tia

    Read the article

  • How to decouple an app's agile development from a database using BDUF?

    - by Rob Wells
    G'day, I was reading the article "Database as a Fortress" by Dan Chak from the excellent book "97 Things Every Software Architect Should Know" (sanitised Amazon link), which suggests that databases should not be designed using an agile approach. There's an SO question on agile approaches and databases, "Agile development and database changes", which has some excellent answers covering agile development approaches. In fact, one of the answers supplies a brilliant idea of what's needed for each update of the DB. ;-) But after reading Dan Chak's article, I am left wondering if an agile approach is really suitable for large-scale systems. This of course leads on to the question of how best to decouple an agile approach for the application from the BDUF database design it interacts with, without adding complicated translation layers to the final design. Any suggestions? cheers,

    Read the article

  • GlusterFS as elastic file storage?

    - by Christopher Vanderlinden
    Is there any way to run GlusterFS in a replicated mode, but with the ability to dynamically scale the volume up and down? Say you have 3 servers all running glusterd; your Gluster volume would have to be set up with replica 3:

        gluster volume create test-volume replica 3 \
          192.168.0.150:/test-volume 192.168.0.151:/test-volume 192.168.0.152:/test-volume

    You would then mount it as, say, /mnt/gfs_test. What happens when I want to add 2 more servers to the storage pool and then also use them in this volume? Is there any easy way to expand the volume AND increase that replica count to 5? My end goal is to run this on EC2 instances -- say 3 Apache front ends, with the webroot set up on the Gluster volume mount. My concern is that if I ever need to spin up a server, I would want it to be not only an additional Apache front end, but also another server in the Gluster file system, adding to fault tolerance as well as possibly giving a slight boost in read speed. Maybe there are better options that would fit the bill here? Thanks.
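
    For what it's worth, more recent GlusterFS releases (3.3+) can grow a replicated volume and raise the replica count in a single add-brick call; a sketch, with the new server addresses assumed:

        # add two bricks and raise the replica count from 3 to 5
        gluster volume add-brick test-volume replica 5 \
          192.168.0.153:/test-volume 192.168.0.154:/test-volume

        # confirm the new brick count and replica level
        gluster volume info test-volume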

    Read the article

  • PHP displays blank white page even with all error reporting enabled

    - by Andy Shinn
    I am trying to debug a broken page in a Drupal application and am having a hard time getting PHP to spit out anything useful. I have the following set:

        error_reporting = E_ALL
        display_errors = On
        display_startup_errors = On
        log_errors = On
        error_log = /var/log/php/php_error.log

    I have a file showing me phpinfo() which confirms these variables are set correctly for the environment. I have increased memory_limit to 256M (which should be more than enough). Yet the only indication I get is a status 500 code in the Apache access log and a blank white page from PHP. The Apache virtual host has LogLevel set to debug, and the error log only outputs:

        [Sat Jun 16 20:03:03 2012] [debug] mod_deflate.c(615): [client 173.8.175.217] Zlib: Compressed 0 to 2 : URL /index.php, referer: http://ec2-174-129-192-237.compute-1.amazonaws.com/admin/reports/updates
        [Sat Jun 16 20:03:03 2012] [error] [client 173.8.175.217] File does not exist: /var/www/favicon.ico
        [Sat Jun 16 20:03:03 2012] [debug] mod_deflate.c(615): [client 173.8.175.217] Zlib: Compressed 42 to 44 : URL /favicon.ico

    The PHP error log outputs nothing at all. kernel and syslog show nothing related to Apache or PHP. I have also tried installing suphp, and checking its log just confirms the user is executing correctly:

        [Sat Jun 16 20:02:59 2012] [info] Executing "/var/www/index.php" as UID 1000, GID 1000
        [Sat Jun 16 20:05:03 2012] [info] Executing "/var/www/index.php" as UID 1000, GID 1000

    This is on Ubuntu 12.04 x86_64 with the following PHP modules:

        ii  php5         5.3.10-1ubuntu3.1  server-side, HTML-embedded scripting language (metapackage)
        ii  php5-cgi     5.3.10-1ubuntu3.1  server-side, HTML-embedded scripting language (CGI binary)
        ii  php5-cli     5.3.10-1ubuntu3.1  command-line interpreter for the php5 scripting language
        ii  php5-common  5.3.10-1ubuntu3.1  Common files for packages built from the php5 source
        ii  php5-curl    5.3.10-1ubuntu3.1  CURL module for php5
        ii  php5-gd      5.3.10-1ubuntu3.1  GD module for php5
        ii  php5-mysql   5.3.10-1ubuntu3.1  MySQL module for php5

    So, what am I missing here? Why no error reporting?
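
    One low-level check, not from the question: force the settings at runtime at the very top of index.php, which also reveals whether the CGI binary is reading a different php.ini than the one edited (remove the lines again afterwards):

        <?php
        // temporary debugging lines -- force error output and show which php.ini is loaded
        error_reporting(E_ALL);
        ini_set('display_errors', '1');
        ini_set('display_startup_errors', '1');
        var_dump(php_ini_loaded_file());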

    Read the article

  • Server Clustering (Django, Apache, Nginx, Postgres)

    - by system-matrix
    I have a project deployed with Django, Apache, Nginx and Postgres. The project has a requirement that live data be viewable to customers. The project's main points are:

      1. Devices in the field send data to the server after logging in (the devices are also website users).
      2. A background import process loads the uploaded data into Postgres.
      3. The web users of the system use this data and can send commands to the devices, which the devices read when they log in.
      4. There are also background analysis routines running on the data.

    All of the above is deployed on one Amazon EC2 machine. The project currently supports over 600 devices and 400 users, but as the number of devices increases over time, the performance of the server is going down. We want to extend this project so that it can support more and more devices. My initial thinking is that we will create one more server like the current one and divide the devices between the two servers, but we still need a central user and device management point through the Django admin. Any ideas? What are the best possible ways to create a scalable architecture? How can I create a Postgres cluster and use it with Django, if possible?
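
    Not an answer from the question itself, but Django's multi-database support gives one common shape for the Postgres side: keep a primary for writes and send reads to a replica through a database router (the aliases, hostnames and router name below are assumptions):

        # settings.py -- a primary/replica split (hostnames assumed)
        DATABASES = {
            "default": {
                "ENGINE": "django.db.backends.postgresql_psycopg2",
                "NAME": "projectdb",
                "HOST": "pg-primary.internal",
            },
            "replica": {
                "ENGINE": "django.db.backends.postgresql_psycopg2",
                "NAME": "projectdb",
                "HOST": "pg-replica.internal",
            },
        }
        DATABASE_ROUTERS = ["myproject.routers.PrimaryReplicaRouter"]

        # myproject/routers.py
        class PrimaryReplicaRouter(object):
            """Send reads to the replica and writes to the primary."""
            def db_for_read(self, model, **hints):
                return "replica"
            def db_for_write(self, model, **hints):
                return "default"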

    Read the article

  • How to bind a control to a singleton in Cocoa?

    - by SphereCat1
    I have a singleton in my FTP app designed to store all of the types of servers that the app can handle, such as FTP or Amazon S3. These types are plugins which are located in the app bundle. Their path is located by applicationWillFinishLoading: and sent to the addServerType: method inside the singleton to be loaded and stored in an NSMutableDictionary. My question is this: How do I bind an NSDictionaryController to the dictionary inside the singleton instance? Can it be done in IB, or do I have to do it in code? I need to be able to display the dictionary's keys in an NSPopupButton so the user can select a server type. Thanks in advance! SphereCat1
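
    One way to do the binding in code, sketched with assumed class and property names (the singleton exposes the dictionary as a KVO-compliant property called serverTypes):

        // In a window controller or app delegate that has an outlet to the
        // NSDictionaryController from the nib. ServerTypeRegistry is the assumed
        // singleton class; addServerType: must mutate the dictionary in a
        // KVO-compliant way (e.g. willChangeValueForKey:/didChangeValueForKey:)
        // or the controller will not see updates.
        - (void)awakeFromNib
        {
            [serverTypesController bind:@"contentDictionary"
                               toObject:[ServerTypeRegistry sharedRegistry]
                            withKeyPath:@"serverTypes"
                                options:nil];
        }

        // The NSPopUpButton can then be bound in IB: Content to the dictionary
        // controller's arrangedObjects, Content Values to arrangedObjects.key.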

    Read the article

  • Java interface and abstract class issue

    - by George2
    Hello everyone, I am reading the book Hadoop: The Definitive Guide, http://www.amazon.com/Hadoop-Definitive-Guide-Tom-White/dp/0596521979/ref=sr_1_1?ie=UTF8&s=books&qid=1273932107&sr=8-1 In chapter 2 (page 25), it is mentioned: "The new API favors abstract class over interfaces, since these are easier to evolve. For example, you can add a method (with a default implementation) to an abstract class without breaking old implementations of the class." What does this mean (in particular, what does "breaking old implementations of the class" mean)? I would appreciate it if anyone could show me a sample of why, from this perspective, an abstract class is better than an interface. Thanks in advance, George
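
    A minimal illustration of the point the book is making (the class names are made up):

        // Version 1 of a library type, plus a user's implementation of it.
        interface Mapper {
            void map(String record);
        }

        class WordCountMapper implements Mapper {
            public void map(String record) { /* ... */ }
        }

        // If version 2 added "void configure(String jobName);" to the interface,
        // WordCountMapper would stop compiling until its author implemented it --
        // that is what "breaking old implementations" means.
        //
        // With an abstract class, the library can add the method together with a
        // default body, and existing subclasses keep compiling unchanged:
        abstract class MapperBase {
            public abstract void map(String record);

            // added in version 2 -- old subclasses simply inherit this default
            public void configure(String jobName) { }
        }

        class OldMapper extends MapperBase {
            public void map(String record) { /* ... */ }
        }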

    Read the article

  • Embedding EasyVideoPlayer code into a WordPress theme - video not showing

    - by bbacarat
    I'm attempting to place some embed code into a premium WordPress theme. NOTE: I'm not great when it comes to PHP. The embed code is produced by a video player called EasyVideoPlayer (basically it allows me to use Amazon S3 and gives me feedback on when people stop watching the video). This is the embed code I have: _evpInit('ZXh0cmEtbW9uZXktZnJvbS1ob21lLTEubW92'); I've opened the theme's index.php file and placed this embed code in between the markup that represents the area of the website where I want it to show up. However, the video is not showing. Setting both the theme and the video player aside, would you expect the PHP to accept what I've done, or is this not the way to go about adding this embed code? NOTE: I've contacted both the WordPress premium theme support at Woothemes.com and the video player's support at EasyVideoPlayer.com, however both tend to stop at the point that another paid product is involved! Grrreat. The website is www.extramoneyfromhome.co.uk

    Read the article

  • Is GAE Really GZipping My Content? Slow Response Times with GAE as CDN

    - by viatropos
    I am testing out Google App Engine as a free content delivery network, and it feels like it's taking a long time to serve up my content. Why does this GAE page take, say, half a second to download, while your typical Stack Overflow page downloads much faster even with a ton more content? What am I missing here? All I have done is create an app and upload an image according to that tutorial, but content seems to be served very slowly. Any suggestions? (Not considering Amazon or other CDNs right now, just looking for help with GAE.) Note: I am using Safari when I visit those links; maybe Safari is causing problems?

    Read the article

  • How do API Keys and Secret Keys work?

    - by viatropos
    I am just starting to think about how api keys and secret keys work. Just 2 days ago I signed up for Amazon S3 and installed the S3Fox Plugin. They asked me for both my Access Key and Secret Access Key, both of which require me to login to access. So I'm wondering, if they're asking me for my secret key, they must be storing it somewhere right? Isn't that basically the same thing as asking me for my credit card numbers or password and storing that in their own database? How are secret keys and api keys supposed to work? How secret do they need to be? Are these applications that use the secret keys storing it somehow? Thanks for the insight.
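
    For illustration, the usual pattern (and the one S3 itself follows) is that the secret key never travels with the request: the client uses it to sign the request, and the server, which also stores the secret, recomputes the signature and compares. A rough sketch in Python, with the header contents and string-to-sign simplified:

        import hashlib
        import hmac

        ACCESS_KEY = "AKIA-EXAMPLE"      # public identifier, sent with every request
        SECRET_KEY = b"example-secret"   # shared secret, never sent over the wire

        def sign(method, path, body=b""):
            """Client side: derive a signature for this request from the secret key."""
            string_to_sign = method.encode() + b"\n" + path.encode() + b"\n" + body
            return hmac.new(SECRET_KEY, string_to_sign, hashlib.sha256).hexdigest()

        # the client sends ACCESS_KEY, the request itself, and this signature
        signature = sign("PUT", "/my-bucket/photo.jpg")

        # the server looks up the secret for ACCESS_KEY, recomputes, and compares
        assert hmac.compare_digest(signature, sign("PUT", "/my-bucket/photo.jpg"))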

    Read the article

  • How important is caching for a site's speed with PHP?

    - by benhowdle89
    I've just made a user-content-oriented website, http://www.humanisms.co.uk. It's done in PHP, MySQL and jQuery's AJAX. At the moment there are only a dozen or so submissions, and I can already feel it lagging slightly when it goes to a new page (and therefore runs a new MySQL query). Is it more important for me to try to optimise my MySQL queries (with prepared statements), or is it worth looking at CDNs (Amazon S3) and caching (much like the WordPress plugin WP Super Cache), which works by serving static HTML files when no new content has been submitted? Which route is the more beneficial one for me, as a developer, to take -- i.e. where am I better off concentrating my efforts to speed up the site?
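
    For reference, the prepared-statement route mentioned above looks like this with PDO (the table and column names are assumptions):

        <?php
        // connect once and reuse the handle (credentials assumed)
        $pdo = new PDO('mysql:host=localhost;dbname=humanisms', 'user', 'password',
                       array(PDO::ATTR_ERRMODE => PDO::ERRMODE_EXCEPTION));

        // prepare once; parameters are bound separately, never interpolated into the SQL
        $stmt = $pdo->prepare('SELECT id, title, body FROM submissions
                               WHERE category = :category ORDER BY created_at DESC LIMIT 10');

        $category = 'science';  // example value
        $stmt->execute(array(':category' => $category));
        $rows = $stmt->fetchAll(PDO::FETCH_ASSOC);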

    Read the article

  • Looking for suggestions for hosting Windows 2000 Server in the cloud / VPS / etc?

    - by JohnyD
    I have a Windows 2000 Server, currently virtualized in Hyper-V, that I would like to get running off-site as a backup (cloud, VPS, etc). You can't virtualize in EC2 and I'm fairly certain there are no Server 2000 AMI's floating about (correct me if I'm wrong!). If anyone has a recommendation on how I can get a virtualized Windows 2000 Server running in a secure, remote environment I would be grateful. As far as locations go I'd be interested in both North America as well as Australia and Europe. In a nutshell, we're ploughing our way out of a legacy codebase and this server is the last that remains of the legacy apps. However, it is still very much used by our clients. Everything is backed up each night (data, images, etc) to tape which is then taken offsite. However, in the event of a fire I would love to have a backup legacy server to point DNS records to. So while I am rebuilding from the ashes our services would already be available. It would save a lot of time and make my managers all the more happy (and that's what it's all about, riighhtt? :D) Thank you all for your suggestions. Please let me know if I've left out any important information. Additional info: - the legacy codebase does not function properly in Server 2003

    Read the article

  • How can one use online backup with large amounts of static data?

    - by Billy ONeal
    I'd like to set up an offsite backup solution for about 500 GB of data that's currently stored between my various machines. I don't care about data retention rates, as this is only a backup of my data, not primary storage for it. If the backup is stored on crappy non-redundant systems, that does not matter. The data set is almost entirely static, and mostly consists of things like installers for Visual Studio and installer disk images for all of my games. I have found two services which meet most of this:

      - Mozy
      - Carbonite

    However, both services impose low bandwidth caps, on the order of 50 kb/s, which prevent me from backing up a dataset of this size effectively (it would take somewhere on the order of 6 weeks), despite the fact that I get multiple MB/s upload speeds everywhere else from this location. Carbonite has the additional problem that it ignores pretty much every file in my backup set by default, because the files are mostly iso and vmdk files, which aren't backed up by default. There are other services, such as EC2, which don't have such bandwidth caps, but such services typically store data on highly redundant servers and therefore cost on the order of 10 cents/GB/month, which is insanely expensive for storage of this kind of data set. (At $50/month I could build my own NAS to hold the data, which would pay for itself after ~2-3 months.) (To be fair, they're offering quite a bit more service than I'm looking for at that price, such as offering public HTTP access to the data.) Does anything exist meeting those requirements, or am I basically hosed?

    Read the article
