Search Results

Search found 71852 results on 2875 pages for 'data load'.


  • jQuery slider to load content one at a time

    - by Barrett
    I have a slider that loads all of my content at once into a div, like so (external.php):

        $get_users = mysql_query("SELECT * FROM user WHERE id!='$user_id'");
        while ($rows = mysql_fetch_assoc($get_users)) {
            $id        = $rows['id'];
            $firstname = $rows['firstname'];
            $display_info .= '<div class="f_outer" id="' . $id . '"><div class="f_name likeu">' . $firstname . '</div></div>';
        }
        echo $display_info;

    I call this page from my find.php page using bxSlider. Here is my find.php page below:

        <script type="text/javascript">
        $(function() {
            var slider = $("#slider1").bxSlider();
            $("#slider-like").live('click', function() {
                slider.goToNextSlide();
                return false;
            });
        });
        </script>
        <div id="slider-like">Yes</div>
        <div id="slider1">
            <?php include("external.php"); ?>
        </div>

    So what I get is all of my .f_outer divs on the find.php page. I have hundreds of users and they will all be loaded at once. I would like to load only one slide at a time, so that when I click #slider-like it loads one div from my external page.
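
    A minimal sketch of one way to fetch a single slide per click instead of including them all up front. The "offset" parameter is hypothetical (not from the question), and it assumes external.php can resolve $user_id on its own (e.g. from the session):

        // find.php: request one slide at a time
        var offset = 0;
        $("#slider-like").live('click', function() {
            $.get('external.php', { offset: offset }, function(html) {
                $("#slider1").append(html);   // add the newly fetched slide
                // depending on the bxSlider version, the slider may need to be re-initialised here
                slider.goToNextSlide();
                offset++;
            });
            return false;
        });

        // external.php: return exactly one user per request
        $offset = (int) $_GET['offset'];
        $result = mysql_query("SELECT id, firstname FROM user WHERE id != '" . mysql_real_escape_string($user_id) . "' LIMIT $offset, 1");
        if ($row = mysql_fetch_assoc($result)) {
            echo '<div class="f_outer" id="' . $row['id'] . '"><div class="f_name likeu">' . $row['firstname'] . '</div></div>';
        }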

    Read the article

  • NGINX access logging with subdomain

    - by user353877
    We are trying to log requests made through an nginx load balancer. When we make requests to our server on a subdomain (api.blah.com), the requests do not show up in the access logs. However, requests made directly to blah.com do show up in the access logs.
    Configuration info: we have a DNS record that creates a CNAME for the subdomain 'api'.
    Tried so far: we have looked in nginx.conf for exclusions (or anything that would tell it not to log), and we have tried adding server entries for the subdomain specifically and telling those to log, but nothing seems to make a difference.
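
    One thing worth checking: if no server block matches api.blah.com, nginx serves it from the default server, whose access_log setting may differ. A minimal sketch of an explicit catch for the subdomain (the log path and upstream name are hypothetical):

        server {
            listen      80;
            server_name api.blah.com;
            access_log  /var/log/nginx/api.access.log combined;

            location / {
                proxy_pass http://backend_pool;   # assumed upstream name
            }
        }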

    Read the article

  • How to make Nginx fire 504 immediately if server is not available?

    - by Georgiy Ivankin
    I have Nginx set up as a load balancer with cookie-based stickiness. The logic is: if the cookie is NOT there, use round-robin to choose a server from the cluster. If the cookie is there, go to the server that is associated with the cookie value. The server is then responsible for setting the cookie. What I want to add is this: if the cookie is there but the server is down, fall back to the round-robin step to choose the next available server. So I already have load balancing and want to add failover support on top of it. I have managed to do that with the help of the error_page directive, but it doesn't work as I expected it to. The problem: the 504 (and the fallback associated with it) fires only after a 30s timeout, even if the server is not physically available. So what I want Nginx to do is fire a 504 (or any other error, it doesn't matter) immediately (I suppose this means: when the TCP connection fails). This is the behavior we can see in browsers: if we go directly to a server when it is down, the browser immediately tells us that it can't connect. Moreover, Nginx seems to be doing this for the 502 error: if I intentionally misconfigure my servers, Nginx fires 502 immediately. Configuration (stripped down to basics):

        http {
            upstream my_cluster {
                server 192.168.73.210:1337;
                server 192.168.73.210:1338;
            }

            map $cookie_myCookie $http_sticky_backend {
                default 0;
                value1  192.168.73.210:1337;
                value2  192.168.73.210:1338;
            }

            server {
                listen 8080;

                location @fallback {
                    proxy_pass http://my_cluster;
                }

                location / {
                    error_page 504 = @fallback;

                    # Create a map of choices
                    # see https://gist.github.com/jrom/1760790
                    set $test HTTP;
                    if ($http_sticky_backend) {
                        set $test "${test}-STICKY";
                    }
                    if ($test = HTTP-STICKY) {
                        proxy_pass http://$http_sticky_backend$uri?$args;
                        break;
                    }
                    if ($test = HTTP) {
                        proxy_pass http://my_cluster;
                        break;
                    }
                    return 500 "Misconfiguration";
                }
            }
        }

    Disclaimer: I am pretty far from systems administration of any kind, so there may be some basics that I miss here. EDIT: I'm interested in a solution with the standard free version of Nginx, not Nginx Plus. Thanks.
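
    A note on one way to make the fallback trigger quickly (a sketch, not a verified fix for this setup): when the upstream host refuses the connection outright, nginx raises 502 at once, but when the host is unreachable the connect attempt can hang until proxy_connect_timeout expires (60s by default). Lowering that timeout and routing 502 as well as 504 to the fallback location keeps the sticky branch from waiting:

        location / {
            proxy_connect_timeout 2s;          # fail fast when the sticky backend is unreachable
            error_page 502 504 = @fallback;    # both errors retry via the round-robin cluster
            # ... rest of the location block as above ...
        }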

    Read the article

  • How do I mirror a MySQL database?

    - by user45745
    I'm running two load-balanced servers for one website, and I'd like the databases to be synchronized. Queries may be run on either of the two servers because they are both production sites, so the replication can't just work one way. It doesn't have to be in real time, just fairly accurate, so people don't notice a difference when they get switched to a different server.
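
    A common way to get this is MySQL master-master (circular) replication; a minimal sketch with hypothetical host names and credentials. The auto-increment offsets keep generated keys from colliding when both sides take writes:

        # /etc/mysql/my.cnf on server A
        [mysqld]
        server-id                = 1
        log_bin                  = mysql-bin
        auto_increment_increment = 2
        auto_increment_offset    = 1

        # /etc/mysql/my.cnf on server B
        [mysqld]
        server-id                = 2
        log_bin                  = mysql-bin
        auto_increment_increment = 2
        auto_increment_offset    = 2

        -- on each server, point replication at the other one
        CHANGE MASTER TO
            MASTER_HOST='the-other-server',
            MASTER_USER='repl',
            MASTER_PASSWORD='secret',
            MASTER_LOG_FILE='mysql-bin.000001',
            MASTER_LOG_POS=4;
        START SLAVE;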

    Read the article

  • SQL SERVER – SHRINKFILE and TRUNCATE Log File in SQL Server 2008

    - by pinaldave
    Note: Please read the complete post before taking any action. This blog post discusses SHRINKFILE and TRUNCATE Log File. An email I received from a reader contains the following questionable code:

    “Hi Pinal, If you remember, my manager and I met you at TechEd in Bangalore. We just upgraded to SQL Server 2008. One of our jobs failed as it was using the following code. The error was:

        Msg 155, Level 15, State 1, Line 1
        'TRUNCATE_ONLY' is not a recognized BACKUP option.

    The code was:

        DBCC SHRINKFILE(TestDBLog, 1)
        BACKUP LOG TestDB WITH TRUNCATE_ONLY
        DBCC SHRINKFILE(TestDBLog, 1)
        GO

    I have modified that code to the following and it works fine. But are there other suggestions you have at the moment?

        USE [master]
        GO
        ALTER DATABASE [TestDb] SET RECOVERY SIMPLE WITH NO_WAIT
        DBCC SHRINKFILE(TestDbLog, 1)
        ALTER DATABASE [TestDb] SET RECOVERY FULL WITH NO_WAIT
        GO

    Configuration of our server and system is as follows: [Removed not relevant data]“

    An email like this popping up early in the morning is alarming. Because I was extremely busy, I had only a minute to reply, so I quickly wrote down the following note. (As I said, it was a one-minute email, so it is not completely accurate.) Here is that quick email, shared with all of you.

    “Hi Mr. DBA [removed the name], Thanks for your email. I suggest you stop this practice. There are many issues here, but I would list two major ones: 1) By setting the database to simple recovery, shrinking the file, and once again setting it to full recovery, you are in fact losing your valuable log data and will not be able to restore to a point in time. Not only that, you will also not be able to use subsequent log backups. 2) Shrinking a file or database adds fragmentation. There are a lot of things you can do. First, start taking proper log backups using the following command instead of truncating the log and losing it frequently:

        BACKUP LOG [TestDb]
        TO DISK = N'C:\Backup\TestDb.bak'
        GO

    Remove the code that shrinks the file. If you are taking proper log backups, your log file usually (again, usually; special cases are excluded) does not grow very big. There are so many things to add here, but you can call me on my [phone number]. Before you call me, I suggest that for accuracy you read Paul Randal‘s two posts here and here and Brent Ozar‘s post here. Kind Regards, Pinal Dave”

    I guess this post is very clear to you. Please leave your comments here. As mentioned, this is a huge subject; I have just touched the tip of the iceberg and have tried to point to authentic knowledge. Reference: Pinal Dave (http://blog.SQLAuthority.com) Filed under: Pinal Dave, SQL, SQL Authority, SQL Backup and Restore, SQL Data Storage, SQL Query, SQL Scripts, SQL Server, SQL Tips and Tricks, T SQL, Technology
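
    A minimal sketch of the "proper log backup" routine suggested above, with a date-stamped file name so repeated backups do not overwrite each other (the path and naming scheme are illustrative only):

        DECLARE @file NVARCHAR(255) =
            N'C:\Backup\TestDb_' + CONVERT(NVARCHAR(20), GETDATE(), 112) + N'_'
            + REPLACE(CONVERT(NVARCHAR(20), GETDATE(), 108), ':', '') + N'.trn';

        -- requires the database to be in FULL (or BULK_LOGGED) recovery
        BACKUP LOG [TestDb] TO DISK = @file;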

    Read the article

  • Want a headless build server for SSDT without installing Visual Studio? You’re out of luck!

    - by jamiet
    An issue that regularly seems to rear its head on my travels is that of headless build servers for SSDT. What does that mean exactly? Let me give you my interpretation of it. A SQL Server Data Tools (SSDT) project incorporates a build process that will basically parse all of the files within the project and spit out a .dacpac file. Where an organisation employs a Continuous Integration process they will likely want to automate the building of that dacpac whenever someone commits a change to the source control repository. In order to do that the organisation will use a build server (e.g. TFS, TeamCity, Jenkins) and hence that build server requires all the pre-requisite software that understands how to build an SSDT project. The simplest way to install all of those pre-requisites is to install SSDT itself, however a lot of folks don’t like that approach because it installs a lot of unnecessary components, not least Visual Studio itself. Those folks (of which I am one) are of the opinion that it should be unnecessary to install a heavyweight GUI in order to simply get a few software components required to do something that inherently doesn’t even need a GUI. The phrase “headless build server” is often used to describe a build server that doesn’t contain any heavyweight GUI tools such as Visual Studio, and it is a desirable state for a build server. In his blog post Headless MSBuild Support for SSDT (*.sqlproj) Projects, Gert Drapers outlines the steps necessary to obtain a headless build server for SSDT: “This article describes how to install the required components to build and publish SQL Server Data Tools projects (*.sqlproj) using MSBuild without installing the full SQL Server Data Tools hosted inside the Visual Studio IDE.” http://sqlproj.com/index.php/2012/03/headless-msbuild-support-for-ssdt-sqlproj-projects/ Frankly, however, going through these steps is a royal PITA, and folks like myself have longed for Microsoft to support headless builds for SSDT by providing a distributable installer that installs only the pre-requisites for building SSDT projects. Yesterday, in the MSDN forum thread “Building a VS2013 headless build server - it's sooo hard”, Mike Hingley complained about this very thing, and it prompted a response from Kevin Cunnane of the SSDT product team: “The official recommendation from the TFS / Visual Studio team is to install the version of Visual Studio you use on the build machine.” I, like many others, would rather not have to install full-blown Visual Studio, and so I asked: is there any chance you'll ever support any of these scenarios?
    1. Installation of all build/deploy pre-requisites without installing the VS shell
    2. TFS shipping with all of the pre-requisites for doing SSDT project builds/deploys
    3. 3rd party build servers (e.g. TeamCity) shipping with all of the pre-requisites for doing SSDT project builds/deploys
    I have to say that the lack of a single installer containing all the pre-requisites for SSDT build/deploy puzzles me. Surely the DacFx installer would be a perfect vehicle for that? Kevin replied again: “The answer is no for all 3 scenarios. We looked into this issue, discussed it with the Visual Studio / TFS team, and in the end agreed to go with their latest guidance, which is to install Visual Studio (e.g. VS2013 Express for Web) on the build machine. This is how Visual Studio Online is doing it and it's the approach recommended for customers setting up their own TFS build servers. I would hope this is compatible with 3rd party build servers but have not verified whether this works with TeamCity etc. Note that the DacFx MSI isn't a suitable release vehicle for this as we don't want to include Visual Studio/MSBuild dependencies in that package. It's meant to just include the core DacFx DLLs used by SSMS, SqlPackage.exe on the command line, etc. What this means is we won't be providing a separate MSI installer or NuGet package with just the necessary build DLLs you need to run your build and tests. If someone wanted to create a script that generated a NuGet package based on our DLLs and targets files, then release that somewhere on the web for easier integration with 3rd party build servers, we've no problem with that.” Again, here’s the link to the thread, and it’s worth reading in its entirety if this is something that interests you. So there you have it. Microsoft will not be providing support for headless build servers for SSDT, but if someone in the community wants to go ahead and roll their own, go right ahead. @Jamiet
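
    For reference, the build and deploy steps themselves need nothing more than the command line once the pre-requisites are in place; a sketch with hypothetical project, server and path names:

        REM build the project to produce MyDatabase.dacpac
        msbuild MyDatabase.sqlproj /t:Build /p:Configuration=Release

        REM publish the dacpac to a target server
        SqlPackage.exe /Action:Publish ^
            /SourceFile:bin\Release\MyDatabase.dacpac ^
            /TargetServerName:MYSERVER /TargetDatabaseName:MyDatabase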

    Read the article

  • SQL – Quick Start with Admin Sections of NuoDB – Manage NuoDB Database

    - by Pinal Dave
    In yesterday’s blog post we saw that it is extremely easy to install the NuoDB database on your local machine. Now that the application is properly set up, let us explore NuoDB a bit more and get you familiar with how it works and which important areas of NuoDB you should learn. As we have already installed NuoDB, we will now quickly start with two of the important areas in NuoDB: 1) Admin and 2) Explorer. In this blog post I will explore how the Admin section of the NuoDB Console works. In the next blog post we will learn how the Explorer section works. Let us go to the NuoDB Console by typing the following URL in your browser: http://localhost:8080/ It will bring you to the following screen. On this screen you can see a big Start QuickStart button. Click on the button and it will bring you to the following screen, where you will find very important information about Domain and Database Settings. It is our habit not to read what is written on the screen and to keep clicking Continue without reading. While we are familiar with most wizards, we can often miss a very important message on the screen. Please note the Domain Settings and Database Settings from the following screen before clicking on Create Database.
    Domain Settings: User: quickstart, Password: quickstart
    Database Settings: User: dba, Password: goalie, Database: test, Schema: HOCKEY
    Once you click on the Create Database button it will immediately start creating the sample database. First it will start a Storage Manager, and right after that it will start a Transaction Engine. Once the engine is up, it will create a schema and sample data. When the sample database has been created successfully, it will show the following screen. Now is the time when we can explore the NuoDB Admin or NuoDB Explorer. If you click on Admin, it will first show the following login screen. Enter “domain” for the username and “bird” for the password. Alternatively, you can enter “quickstart” twice, for both username and password; that works too. Once you enter the Admin section, on the left side you can see information about NuoDB and the Admin Console, and on the right side you can see the domain overview area. From this administrative section you can do any of the following tasks: create a view of the entire domain; add and remove databases; start and stop NuoDB Transaction Engines and Storage Managers; and monitor transactions across all the NuoDB databases. On the right side of the Admin section we can see various information about a particular NuoDB domain. You can quickly view various alerts, find out the number of host machines that are provisioned for the domain, and see the number of databases and processes that are running in the domain. If you click on the “1 host” link you will be able to see the various processes, CPU usage and other information. In the Processes section you can see that there are two different types of processes: the first (with the floppy drive icon) represents a running Storage Manager process, and the second a running Transaction Engine process. You can click on the links for the Storage Manager and Transaction Engine to see further statistical details, right down to the last byte of the data. There are various charts available for analysis as well. I think the product is quite mature, and the user can add different monitoring charts to the Admin section. Additionally, the Admin section is the place where you can create and manage new databases.
    I hope today’s tutorial gives you enough confidence to try out NuoDB and check out various administrative activities with the database. I am personally impressed with their dashboards for the various counters. For more information about how the NuoDB architecture works and what a Storage Manager or Transaction Engine does, check out the short video with NuoDB CTO Seth Proctor. In the next blog post, we will try out the Explorer section of NuoDB, which allows us to run SQL queries and write SQL code. Meanwhile, I strongly suggest you download and install NuoDB and get yourself familiar with the product. Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: Big Data, PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, T SQL, Technology Tagged: NuoDB

    Read the article

  • Vagrant up doesn't load chef configs and doesn't keep an error log

    - by la_f0ka
    I'm trying to set up a Vagrant box and I'm running into all sorts of trouble. Right now I'm getting a strange error message which states that there's a stack trace file with more info, but that file is nowhere to be found. This is the error:

        stdin: is not a tty
        [Sun, 16 Sep 2012 18:31:47 +0000] INFO: *** Chef 0.10.0 ***
        [Sun, 16 Sep 2012 18:31:48 +0000] INFO: Setting the run_list to ["recipe[apt]", "recipe[openssl]", "recipe[apache2]", "recipe[mysql]", "recipe[mysql::server]", "recipe[php]", "recipe[php::module_apc]", "recipe[php::module_curl]", "recipe[php::module_mysql]", "recipe[apache2::mod_php5]", "recipe[apache2::mod_rewrite]"] from JSON
        [Sun, 16 Sep 2012 18:31:48 +0000] INFO: Run List is [recipe[apt], recipe[openssl], recipe[apache2], recipe[mysql], recipe[mysql::server], recipe[php], recipe[php::module_apc], recipe[php::module_curl], recipe[php::module_mysql], recipe[apache2::mod_php5], recipe[apache2::mod_rewrite]]
        [Sun, 16 Sep 2012 18:31:48 +0000] INFO: Run List expands to [apt, openssl, apache2, mysql, mysql::server, php, php::module_apc, php::module_curl, php::module_mysql, apache2::mod_php5, apache2::mod_rewrite]
        [Sun, 16 Sep 2012 18:31:48 +0000] INFO: Starting Chef Run for natty.talifun.com
        [Sun, 16 Sep 2012 18:31:48 +0000] ERROR: Running exception handlers
        [Sun, 16 Sep 2012 18:31:48 +0000] ERROR: Exception handlers complete
        [Sun, 16 Sep 2012 18:31:48 +0000] FATAL: Stacktrace dumped to /tmp/vagrant-chef-1/chef-stacktrace.out
        [Sun, 16 Sep 2012 18:31:48 +0000] FATAL: NameError: wrong constant name Chef-symfony2Console
        Chef never successfully completed! Any errors should be visible in the output above. Please fix your recipes so that they properly complete.

    And this is what my Vagrantfile looks like:

        Vagrant::Config.run do |config|
          config.vm.box = "ubuntu-1104-server-i386"
          config.vm.network :hostonly, "33.33.33.33"
          config.vm.forward_port 80, 8000
          config.vm.share_folder "symfony.tests", "/var/www/symfony.tests", "data", :nfs => true

          config.vm.provision :chef_solo do |chef|
            chef.cookbooks_path = ["../my-recipes/cookbooks", "site-cookbooks"]
            chef.add_recipe "apt"
            chef.add_recipe "openssl"
            chef.add_recipe "apache2"
            chef.add_recipe "mysql"
            chef.add_recipe "mysql::server"
            chef.add_recipe "php"
            chef.add_recipe "php::module_apc"
            chef.add_recipe "php::module_curl"
            chef.add_recipe "php::module_mysql"
            chef.add_recipe "apache2::mod_php5"
            chef.add_recipe "apache2::mod_rewrite"
            chef.add_recipe "Symfony"

            chef.json = {
              :mysql => {
                :server_root_password => 'root',
                :bind_address         => '127.0.0.1'
              }
            }
          end
        end
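
    Worth noting (an assumption based on how the chef_solo provisioner runs, not something stated in the question): the stack trace path in that output refers to a file inside the guest VM, not on the host, so it can be read by SSHing into the box:

        # open a shell inside the VM and dump the stacktrace Chef wrote there
        vagrant ssh -c "cat /tmp/vagrant-chef-1/chef-stacktrace.out"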

    Read the article

  • SQL SERVER – Identifying Column Data Type of uniqueidentifier without Querying System Tables

    - by pinaldave
    I love interesting conversations related to SQL Server. One of my friends, Madhivanan, always comes up with an interesting point of conversation. Here is one of the conversations between us. I am confident this blog post will teach you something new. Madhi: How do I know if any table has a uniqueidentifier column used in it? Pinal: I am sure you know that you can do it through some DMV or catalogue views. Madhi: I know that, but how can we do that without using DMVs or catalogue views? Pinal: Hm… what can I use? Madhi: You can use the table name. Pinal: Easy, just say SELECT YourUniqueIdentCol FROM Table. Madhi: Hold on, the question does not seem clear to you – your answer assumes you know the name of the column. As a matter of fact, you do not even know whether the table has a uniqueidentifier column. The only information you have is the table name. Pinal: Madhi, this seems like you are changing the question when I am close to the answer. Madhi: Well, are you clear now? Let me say it again – how do I know if any table has a uniqueidentifier column, and what is its value, without using any DMVs or system catalogues? The only information you know is the table name, and you are allowed to return any kind of error if the table does not have a uniqueidentifier column. Pinal: Do you know the answer? Madhi: Yes. I just wanted to test your knowledge of SQL. Pinal: I will have to think. Let me accept that I do not know it right away. Can you share the answer, please? Madhi: I won! Here it goes! Pinal: When I have friends like you – who needs enemies? Madhi: (laughter which did not stop for a minute).

        CREATE TABLE t (
            GuidCol UNIQUEIDENTIFIER DEFAULT newsequentialid() ROWGUIDCOL,
            data VARCHAR(60)
        )
        INSERT INTO t (data) SELECT 'test'
        INSERT INTO t (data) SELECT 'test1'
        SELECT $rowguid FROM t
        DROP TABLE t

    This is indeed very interesting to me. Please note that this is not the optimal way, and there will be many other ways to retrieve the uniqueidentifier name and value. What I learned from this was that if I am in a rush to check whether a table has a uniqueidentifier column and I do not know its name, I can use SELECT TOP (1) $rowguid and quickly find the name of the column. I can later use the same column name in my query. Madhi did teach me this new trick. Did you know this? What other ways are there to check for the existence of a uniqueidentifier column in a database? Reference: Pinal Dave (http://blog.SQLAuthority.com) Filed under: PostADay, SQL, SQL Authority, SQL Puzzle, SQL Query, SQL Server, SQL Server Management Studio, SQL Tips and Tricks, T SQL, Technology
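
    For contrast, a sketch of the conventional catalogue-view check that the puzzle deliberately rules out (note that, unlike $rowguid, this finds any uniqueidentifier column, not just the one marked ROWGUIDCOL):

        SELECT COLUMN_NAME
        FROM INFORMATION_SCHEMA.COLUMNS
        WHERE TABLE_NAME = 't'
          AND DATA_TYPE = 'uniqueidentifier';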

    Read the article

  • Slow transfer to external USB3 hard drive

    - by JMP
    Trying to back up data from a hard drive before reloading Windows, following some issue with loading it. Having trouble with the file transfer to a USB 3.0/2.0 external hard drive (NTFS). Getting a transfer speed of about 116.7 kB/sec; in other words it's taking about 5 hours to transfer 1.4GB. I've got about 80GB to go, so the transfer is going to take 11 days. Seems a little on the slow side. Am I missing something? Is there a way to make this faster? There is no issue with the external drive transferring this amount in Windows, but I don't have that option at the moment.
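
    If the copy is being run from a Linux live environment (the question does not say which environment is in use, so this is an assumption, and the mount points below are hypothetical), rsync reports throughput as it goes and can resume cleanly after an interruption, which makes it easier to tell whether the bottleneck is the drive, the port, or the tool:

        rsync -ah --progress /mnt/olddisk/ /media/usbdrive/backup/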

    Read the article

  • When to use an Array vs When to use a Vector, when dealing with GameObjects?

    - by user32465
    I understand from other answers that Arrays and Vectors are the best choices. Many on SE claim that Linked Lists and Maps are bad for video game programming. I understand that for the most part I can use Arrays. However, I don't really understand exactly when to use Vectors over Arrays. Why even use Vectors? Wouldn't it be best if I simply always used an Array, so that I know how much memory my game needs? Specifically, my game would only ever load a single "Map" area of tiles, such as Map[100][100], so I could very easily have an array of GameObjectContainer GameObjects[100][100], which would reserve an entire map's worth of possible game objects, correct? So why use a Vector instead? Memory is quite large on modern hardware.
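
    A short C++ sketch of the trade-off (GameObject here is a hypothetical placeholder type, not from the question): a fixed-size array reserves the whole map up front, while a vector only holds what is actually placed and can be resized if the map dimensions ever change.

        #include <array>
        #include <vector>

        struct GameObject { int x, y, spriteId; };   // placeholder type

        // Fixed size: all 100x100 slots exist whether or not they are used.
        std::array<std::array<GameObject, 100>, 100> mapObjects{};

        // Dynamic: starts empty; reserve() avoids reallocation if the count is known.
        std::vector<GameObject> activeObjects;

        void init() {
            activeObjects.reserve(100 * 100);        // worst case, still a single allocation
        }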

    Read the article

  • Copying 500GB Data to EC2 Instances Local Drive

    - by iCode
    Please do not ask me why (they made me), but I have to copy 500GB of data to the local drive of every one of the 200 nodes/instances that I am launching in EC2. For reasons beyond this post, this data must be on the local drive and not an EBS drive, so I cannot benefit from snapshots. What is the fastest way I can manage to do this? Copying from S3 to each node takes a long time. I tried attaching an EBS volume containing the data to every node and then copying the data from EBS to the local drive, but that also takes a long time (several hours). Now I am also thinking of using BitTorrent, but I am not sure how well that will work. What is the best way to copy 500GB of static data to the local drive of each of 200 EC2 instances? The 500GB of data is composed of several hundred files of varying size, but the biggest file is 20GB.
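
    One hedged sketch of an alternative to a single source feeding all 200 nodes: a fan-out copy, where every node that has finished receiving the data immediately becomes a source for another node, so the number of seeded nodes roughly doubles each round. Host names, paths and SSH key handling below are hypothetical, and each source must be able to SSH to its destination; error handling is omitted.

        #!/bin/bash
        # seeds: nodes that already hold /data; targets: nodes that still need it
        seeds=(node001)
        targets=($(seq -f "node%03g" 2 200))

        while [ ${#targets[@]} -gt 0 ]; do
          round=()                                        # destinations for this round
          for src in "${seeds[@]}"; do
            [ ${#targets[@]} -eq 0 ] && break
            dst=${targets[0]}
            targets=("${targets[@]:1}")
            round+=("$dst")
            ssh "$src" "rsync -a /data/ $dst:/data/" &    # each seed pushes to one new node
          done
          wait                                            # wait for this round's copies to finish
          seeds+=("${round[@]}")                          # every newly copied node becomes a seed
        done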

    Read the article

  • Recover all previous data of a partition in Windows

    - by Komal Sorathiya
    All the data on my drive was lost because I installed Ubuntu. After that, I formatted my laptop and made new drives, one of which is not accessible because it is a raw partition. I recovered some data from that partition using iCare Recovery software. Then I fully formatted that partition, put data in that place, and later removed all files from it. My problem is that I want to recover the data that was deleted because of the Ubuntu install. I can recover the most recent data with iCare Recovery, but I cannot recover the previous data. Please help me with this; I have been trying for many days. Thanks

    Read the article

  • 403 forbidden while submitting a POST request with image data via iPhone application

    - by binnyb
    I am creating an iOS application which allows users to send image/text data to my webserver via a POST request. I am successfully sending POSTs to the server when image data is not included in the request. Any time I POST with image data, the server spits back a 403 forbidden. I have tried adding the following to the .htaccess file in the directory of the script, with no luck:

        Options +Indexes FollowSymLinks +ExecCGI
        Order allow,deny
        Allow from all

    Web browsers and Android devices can successfully POST with image data to the script; the only device which cannot is the iPhone. POSTing with image data to other hosting providers works as expected - it is just this host (ipowerweb.com). I noticed that trying to POST with image data to ANY script on the server returns a 403 forbidden. Another note: I can successfully POST to another server that is hosted by ipowerweb, but mine can't seem to handle it. My host has tried to resolve the issue but cannot, and they have marked it on their end as "resolved", so no more help from them. I wish to keep this host, as moving would be a pain; I will change hosts only as a last resort, so please help me! Why am I getting this 403 forbidden error only when I submit data via my iPhone application? How can I resolve the issue so I can successfully POST data? Any advice on what I can do would be greatly appreciated. Edit: as requested, here are the response headers:

        Connection = close;
        "Content-Length" = 217;
        "Content-Type" = "text/html; charset=iso-8859-1";
        Date = "Wed, 12 Jan 2011 19:11:19 GMT";
        Server = "Apache/2";

    Edit: as requested, here are the request headers:

        "Accept-Encoding" = gzip;
        "Content-Length" = 5781;
        "Content-Type" = "multipart/form-data; charset=utf-8; boundary=0xKhTmLbOuNdArY";
        "User-Agent" = "YeahIAteThat 1.0 (iPhone; iPhone OS 4.2.1; en_US)";
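
    One way to narrow this down from outside the app (a diagnostic sketch; the URL, field names and file are placeholders): replay the same request shape with curl, first with the app's User-Agent and multipart body, then without the custom User-Agent, to see which part of the request the host's filtering (e.g. a mod_security rule) objects to.

        # multipart POST with the iPhone app's User-Agent string
        curl -v -A "YeahIAteThat 1.0 (iPhone; iPhone OS 4.2.1; en_US)" \
             -F "image=@test.jpg" -F "text=hello" \
             http://example.com/path/to/script.php

        # same POST with curl's default User-Agent, to isolate the header
        curl -v -F "image=@test.jpg" -F "text=hello" \
             http://example.com/path/to/script.php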

    Read the article

  • SQL 2008 R2: Data\Log partitions

    - by Reese Hirth
    I have a SQL Server setup that a previous IT person configured with a 2TB data partition and a 1TB log partition. The OS partition is 244GB, and SQL is installed on a separate 1TB partition. We have an additional 8TB of storage that I would like the new IT staff to bring online. He wants to create 4 new 2TB data partitions. I see this as confusing. Can't we just back up the current data partition, blow it away, and create a new 10TB data partition? I'm responsible for administering the data on the server but am not allowed to do the setup myself. This is a GIS server running ArcGIS Server with around 60 geodatabases, ranging from 20GB up to a couple that may grow to over a TB. So: five 2TB data partitions, or one 10TB partition? Thanks for the advice.
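
    Worth noting either way (a sketch, with hypothetical database, file and drive names): SQL Server can spread a single filegroup across several volumes by adding one data file per volume, so multiple 2TB partitions do not have to mean splitting the geodatabases up by hand.

        ALTER DATABASE [GISDB]
        ADD FILE
            (NAME = GISDB_Data2, FILENAME = 'E:\SQLData\GISDB_Data2.ndf', SIZE = 100GB, FILEGROWTH = 10GB),
            (NAME = GISDB_Data3, FILENAME = 'F:\SQLData\GISDB_Data3.ndf', SIZE = 100GB, FILEGROWTH = 10GB)
        TO FILEGROUP [PRIMARY];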

    Read the article

  • /data/tmp on database server?

    - by Mellon
    I am on an Ubuntu Linux machine with MySQL installed. My teacher gave out an assignment which mentioned "copy cars.dat to /data/tmp on the MySQL database server" without any explanation, and I do not know what "/data/tmp on the database server" means exactly. Basically, after that I need to execute an SQL statement like:

        LOAD DATA INFILE '/data/tmp/cars.dat' INTO TABLE cars

    So what does "copy cars.dat to /data/tmp on the database server" mean, given that there is no /data/tmp directory at all? I checked the /etc/mysql/my.cnf file, which contains the definitions:

        ...
        basedir = /usr
        datadir = /var/lib/mysql
        tmpdir  = /tmp
        ...

    Does it mean to copy cars.dat to the tmpdir, which is just /tmp under the root directory?
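
    A sketch of one plausible reading of the assignment: /data/tmp is simply a directory to be created on the machine running MySQL. The user, database name and privileges below are assumptions, and depending on the server's configuration LOAD DATA INFILE may also require the FILE privilege and a location permitted by secure_file_priv.

        sudo mkdir -p /data/tmp
        sudo cp cars.dat /data/tmp/
        sudo chmod 644 /data/tmp/cars.dat
        mysql -u root -p yourdb -e "LOAD DATA INFILE '/data/tmp/cars.dat' INTO TABLE cars;"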

    Read the article

  • Datacenter Backup Strategy

    - by EasyEcho
    What are common approaches to backup solutions in remote data centers? I am already familiar with general backup principles and have a very good backup strategy for our local data center, but am having great difficulty extending it to a remote data center. We currently do a full backup on Friday, differentials Monday through Thursday, and rotate disks offsite Friday morning; rinse and repeat week after week. By the way, we use disks and have been very happy with this approach. We could buy a large storage server and back everything up to it, but this solution doesn't give us an offsite copy. We could encrypt and upload to Amazon or some other online storage, but that would take a large amount of time given the amount of data, and would be rather expensive, paying for the bandwidth leaving the data center and arriving at Amazon. We could drive to the data center every Friday and continue to rotate disks as we do now, but that just seems old-fashioned. What am I missing? Are there better options?
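
    One hedged sketch of a middle ground (host names, paths and the SSH setup are hypothetical): replicate the existing disk backups between the two data centers over an encrypted link, so each site acts as the other's offsite copy and only changed data crosses the wire after the initial seed.

        # push last night's backup set to the remote data center over SSH
        rsync -az --delete --partial \
            /backups/weekly/ backup@dc2.example.com:/backups/dc1-replica/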

    Read the article

  • How to achieve redundancy across data centers?

    - by BrandonBT
    I have a LAMP server with a lot of hardware redundancy built in. I am not worried about the server becoming unavailable. What I am worried about, however, are potential network issues in the data center the server is in. What I would like to have is another server in another data center for redundancy. Load balancing is less of a concern. With that said, I am relatively clueless on two points:
    1. How to have two servers in two geographically separate data centers that hold exactly the same data, in terms of both files and MySQL databases.
    2. How to ensure that all traffic coming into one data center is automatically redirected to the other data center in the case of a network or server failure at the first one.
    Any guidance on how to accomplish the above two problems would be greatly appreciated.

    Read the article
