Search Results

Search found 31902 results on 1277 pages for 'sql backup'.

  • Backing up Windows Server 2008 R2 to FTP server

    - by Adrian Grigore
    Hi, I'm looking for an inexpensive way of backing up my Windows 2008 R2 dedicated server to an FTP server. To be of any use, the software should also be able to restore the server by using a bootable CD and the backup set stored on the FTP server. So Windows Server Backup seems to be out of the question. Can anyone recommend any suitable products? Preferably some you have actually tried yourself? Thanks, Adrian Edit: Just to clarify, by inexpensive I mean something that costs 250 EUR or less...

    Read the article

  • Back up Windows 2008 SBS to iSCSI disk

    - by Farseeker
    I've almost no experience with SBS 2008, so please excuse my noob question! SBS 2008 only has the most basic backup utility built in as far as I can tell (similar to Vista), and it will only back up to physical volumes. I've read that you can set up a batch task to backup to a network volume, but right now I just need to get something deployed ASAP. We have an iSCSI target with plenty of free space. Is it worth backing up to an iSCSI target? Or am I wasting my time? If I need to do a recovery from the iSCSI disk, how would I go about it?

    Read the article

  • Backing Up vs. Redundancy

    - by TK Kocheran
    I'm currently in stage 2 of 3 of building my home workstation. What this means is that my RAID-0 array of solid state disks will be backed up nightly to a RAID-5 or RAID-6 array of traditional spinning hard disks. However, it recently dawned on me that redundancy is not backup. The main reason for setting up a RAID array with redundancy was to protect myself in the event of a drive failure to serve as an effective backup solution. Wait. What if a bolt of lightning finds a way to travel into my house, through my surge-protector, into my power supply and physically destroys all of my hard disks and SSDs? Well, in that case, I guess I'd be fine because I generally keep most important files (music, pictures, videos) stored in multiple places like on my laptop, my wife's laptop, and an encrypted USB hard drive. Wait. What if a giant hedgehog meteor attacks my house from space traveling at mach 3 and all machines and hard disks are blown to smithereens. Well, I guess I could find a way to do ridiculously slow and cumbersome rsyncs or backups to Amazon's Glacier. Wait. What if there's a nuclear apocalypse... and at this point I start laughing hysterically. At what point does backing up become irrelevant? I completely understand situation one (mechanical drive failure), situation two (workstation compromised or destroyed somehow), possibly even situation three (all machines and disks destroyed), but situation four? There's no questioning the need for backups. None. However, there are three questions I'd really like addressed: To what level should one backup? I definitely understand the merits of physical disk redundancy. I also believe in keeping important files on multiple machines and thinning out the possibility of losing all of my files. Online backups make sense, but they beg the following question. What should I be backing up remotely and how often? It's no problem storage-wise to back up important files (music, pictures, videos) and even configuration and temporal data for all of the machines in my network (all Linux based)... albeit locally. Transferring to the cloud is another story. Worst-case scenario, if I lost all of my configuration for my individual computers, the reality is that I probably lost the machines too. The cloud is a long way away from here; I can run backups over CAT-6 here and see 100MB/s easily, but I'm afraid that I'm only going to see 2MB/s at best when transferring up to the cloud.

    Read the article

  • Cannot connect to a SQL Server 2008 named instance hosted in an Azure virtual machine

    - by emardini
    When I try to connect to a named instance of SQL Server hosted in an Azure VM I get this message: A network-related or instance-specific error occurred while establishing a connection to SQL Server. The server was not found or was not accessible. Verify that the instance name is correct and that SQL Server is configured to allow remote connections. (provider: SQL Network Interfaces, error: 26 - Error Locating Server/Instance Specified) (Microsoft SQL Server, Error: -1) The problem is that the SQL Browser is not working properly: when I start the SQL Browser service it closes after a few seconds and the event log says "There are no instances of SQL Server or SQL Server Analysis Services." But I do have a named instance, and I can connect to it locally. I've re-installed the SQL Browser and the instance but it does not work. The host is an Azure virtual machine running Windows Server 2008 Datacenter. Please help. Thank you
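
    While the SQL Browser service is down, one workaround (a sketch, not a fix for the Browser itself) is to give the named instance a fixed TCP port in SQL Server Configuration Manager, open that port on the VM's endpoint/firewall, and connect with an explicit host,port pair so the instance name never has to be resolved through the Browser's UDP 1434 lookup. The host name, port and login below are placeholders.

      # Python sketch using pyodbc; server name, port and credentials are assumptions.
      import pyodbc

      conn_str = (
          "DRIVER={SQL Server};"
          "SERVER=myvm.cloudapp.net,1433;"   # host,port form bypasses the SQL Browser
          "DATABASE=master;"
          "UID=myuser;PWD=mypassword;"
      )
      conn = pyodbc.connect(conn_str, timeout=10)
      print(conn.cursor().execute("SELECT @@SERVERNAME, @@VERSION").fetchone())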

    Read the article

  • Windows 7 Image Restore with a smaller hard drive

    - by Vaccano
    I have a 500 GB drive that I have made a system image of. I would like to move that to a 250 GB drive (because it is a solid-state drive). I have made a Windows 7 backup image of my 500 GB drive. I am currently only using 163 GB of that drive. Can I just restore that to the target drive, or will the restore be expecting a 500 GB drive? If it is expecting that, I can shrink my partition to less than 250 GB and back up again. But I would rather not if that is not needed. Will the restore realize that it is not using all the space and just take what it needs?

    Read the article

  • Western Digital SmartWare - still problematic?

    - by FrustratedWithFormsDesigner
    I've seen a lot of bad press on WD SmartWare (I think it comes on most WD backup devices now, such as their MyBook product line), mostly related to how it's impossible to remove properly or replace. There are allegations (I couldn't tell how true they were) that it has/is a rootkit, as well. Most of the articles are a couple of years old, so I'm wondering if SmartWare is still just as problematic as it was. Does it still have a nasty rootkit reputation and should I just stick with the Windows 7 built-in backup system, or is the current SmartWare generation improved and better behaved?

    Read the article

  • Deleting pagefile.sys on shutdown

    - by Daniel E. Shub
    I have a Windows XP machine (it is a VM running in Xen) that I would like to backup. I have enabled ClearPageFileAtShutdown by following MS KB 314834. If I cleanly shutdown the XP machine and then mount the drive in another machine (which is trivial since the machine is virtual) I still have a large pagefile.sys. I was hoping that enabling ClearPageFileAtShutdown would result in a pagefile.sys with a size near zero. I have two questions. First, is it possible to have pagefile.sys be deleted, or have a drastic size reduction, at shutdown? Second, can I exclude pagefile.sys from my backup?
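
    For what it's worth, the ClearPageFileAtShutdown value from KB 314834 overwrites the contents of the page file at shutdown; it does not delete the file or shrink it, so a large pagefile.sys after a clean shutdown is expected, and excluding pagefile.sys from the backup is usually the simpler answer to the second question. Below is a minimal sketch of setting the value programmatically (registry path as per the KB; run with administrative rights):

      # Python sketch: enable ClearPageFileAtShutdown via the registry (Windows only).
      import winreg

      KEY = r"SYSTEM\CurrentControlSet\Control\Session Manager\Memory Management"

      with winreg.OpenKey(winreg.HKEY_LOCAL_MACHINE, KEY, 0, winreg.KEY_SET_VALUE) as k:
          winreg.SetValueEx(k, "ClearPageFileAtShutdown", 0, winreg.REG_DWORD, 1)
      # The page file contents are wiped at the next clean shutdown, not before.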

    Read the article

  • Using mongodump with an auth-enabled MongoDB server

    - by bb-generation
    I'm trying to do a daily backup of my MongoDB server (auth enabled) using the mongodump tool. mongodump provides two parameters to set the credentials: -u [ --username ] arg username -p [ --password ] arg password Unfortunately they don't provide any parameter to read the password from stdin. Therefore every time I run this command, everyone on the server can read the password (e.g. by using ps aux). The only workaround I have found is stopping the database and directly accessing the database files using the --dbpath parameter. Is there any other solution which allows me to back up the MongoDB database without stopping the server and without "publishing" my password? I am using Debian squeeze 6.0.5 amd64 with mongodb 1.4.4-3.
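
    Not a complete fix for the ps visibility problem, but a common compromise is to keep the credentials in a root-only file (mode 0600) and have a small wrapper read them at run time, so the password at least never appears in crontabs, scripts or shell history. A sketch follows; the credential file path and output directory are made up, and the password is still briefly visible in the process list while mongodump runs, since that generation of mongodump has no config-file or stdin option that I know of.

      # Python wrapper sketch; /root/.mongodump-cred is a hypothetical file containing "user:password".
      import subprocess
      from pathlib import Path

      user, password = Path("/root/.mongodump-cred").read_text().strip().split(":", 1)

      subprocess.run(
          [
              "mongodump",
              "--username", user,
              "--password", password,   # unavoidable here: still shows up in ps while running
              "--out", "/var/backups/mongodb",
          ],
          check=True,
      )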

    Read the article

  • Is it a good idea to take onsite/offsite backups of server images?

    - by ServerAdminGuy45
    Assuming a non-virtualized environment, is it a good idea to take actual images of servers (using something like Acronis True Image) and store them on/off site? Backing up data is great, but I feel it would be good to have copies of OS images so that in the event hardware dies or an upgrade gets botched I can always revert back. What would be your recommended way to do this (preferably using a NAS and an online backup service)? I was talking with the Iron Mountain folks and the service they described is more geared toward taking incremental snapshots of data. I'm not sure if there's a way to back up images in an incremental way such that only the changes between them are saved (that way I'm not wasting X GB each time I take an image).

    Read the article

  • Extract duplicity difftar files manually

    - by isnogud
    I have a duplicity backup which I am not able to recover with duplicity. Calling duplicity file:///path/to/backups /path/to/dir returns "Local and Remote metadata are synchronized, no sync needed.", but /path/to/dir is empty. I decrypted all backup volumes and I'm able to view and extract the files from the different difftar files. My only problem is that there are files partitioned and saved in folders named after the files. Can anyone give me a simple script, or at least a hint, on how to untar these difftar files so I get the actual files instead of the partitioned ones?
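
    A rough sketch of the reassembly, assuming every decrypted difftar volume has already been untarred into one directory: whole files normally land under snapshot/<path>, while large files are split into numbered pieces under multivol_snapshot/<path>/1, 2, 3, ... (those are the "folders named after the files"). The directory names are assumptions from memory of duplicity's diff tar layout, so check them against what you actually extracted before trusting the result.

      # Python sketch: rebuild files from an already-extracted set of duplicity difftar volumes.
      import shutil
      from pathlib import Path

      EXTRACT_DIR = Path("extracted")   # where every decrypted difftar volume was untarred
      OUTPUT_DIR = Path("restored")

      # Whole files: copy snapshot/<path> straight across.
      snapshot = EXTRACT_DIR / "snapshot"
      if snapshot.is_dir():
          for f in snapshot.rglob("*"):
              if f.is_file():
                  dest = OUTPUT_DIR / f.relative_to(snapshot)
                  dest.parent.mkdir(parents=True, exist_ok=True)
                  shutil.copy2(f, dest)

      # Split files: multivol_snapshot/<path> is a directory of numbered parts to concatenate.
      multivol = EXTRACT_DIR / "multivol_snapshot"
      if multivol.is_dir():
          for d in (p for p in multivol.rglob("*") if p.is_dir()):
              parts = sorted((c for c in d.iterdir() if c.is_file() and c.name.isdigit()),
                             key=lambda c: int(c.name))
              if not parts:
                  continue   # an intermediate directory, not a split file
              dest = OUTPUT_DIR / d.relative_to(multivol)
              dest.parent.mkdir(parents=True, exist_ok=True)
              with dest.open("wb") as out:
                  for part in parts:
                      out.write(part.read_bytes())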

    Read the article

  • Cloud storage provider lost my data. How to back up next time?

    - by tomcam
    What do you do when cloud storage fails you? First, some background. A popular cloud storage provider (rhymes with Booger Link) damaged a bunch of my data. Getting it back was an uphill battle with all the usual accusations that it was my fault, etc. Finally I got the data back. Yes, I can back this up with evidence. Idiotically, I stayed with them, so I totally get that the rest of this is on me. The problem had been with a shared folder that works with all 12 computers my business and family use with the service. We'll call that folder the Tragic Briefcase. It is a sort of global folder that's publicly visible to all computers on the service. It's our main repository. Today I decided to deal with some residual effects of the Crash of '11. Part of the damage they did was that in just one of my computers (my primary, of course) all the documents in the Tragic Briefcase were duplicated in the Windows My Documents folder. I finally started deleting them. But guess what. Though they appeared to be duplicated in the file system, removing them from My Documents on the primary PC caused them to disappear from the Tragic Briefcase too. They efficiently disappeared from all the other computers' Tragic Briefcases as well. So now, 21 gigs of files are gone, and of course I don't know which ones. I want to avoid this in the future. Apart from using a different storage provider, the bigger picture is this: how do I back up my cloud data? A complete backup every week or so from web to local storage would cause me to exceed my ISP's bandwidth. Do I need to back up each of my 12 PCs locally? I do use Backupify for my primary Google Docs, but I have been storing taxes, confidential documents, Photoshop source, video source files, and so on using the web service. So it's a lot of data, but I need to keep it safe. Backup locally would also mean 2 backup drives or some kind of RAID per PC, right, because you can't trust a single point of failure? Assuming I move to DropBox or something of its ilk, what is the best way to make sure that if the next cloud storage provider messes up I can restore?

    Read the article

  • SharePoint 2010 and Windows Server Backup

    - by Enrique Lima
    A couple of months ago, a friend found a bit of information on TechNet that has proven to be quite useful. See, I am of the opinion SharePoint allows for smaller deployments to be made, and with that said, I am talking about SharePoint Foundation 2010 being used for the most part. But truly the point here is not to discuss whether or not a deployment of SharePoint Foundation 2010 or SharePoint Server 2010 is right or not. The fact is they do take place and happen. And information will reside there. Now, the point of this post is to raise awareness of options available for companies that have implemented it and maybe are a bit “iffy” on how to protect the information being placed in libraries and lists. In many cases I have found SharePoint comes first and business continuity becomes an afterthought. The documentation piece from TechNet states: “You can register SharePoint Server 2010 with Windows Server Backup by using the stsadm.exe -o registerwsswriter operation to configure the Volume Shadow Copy Service (VSS) writer for SharePoint Server. Windows Server Backup then includes SharePoint Server 2010 in server-wide backups. When you restore from a Windows Server backup, you can select Microsoft SharePoint Foundation (no matter which version of SharePoint 2010 Products is installed), and all components reported by the VSS writer for SharePoint Server 2010 on that server at the time of the backup will be restored. Windows Server Backup is recommended only for use with single-server deployments.” Even in the event of single-server deployments you will have options to safeguard your data. The process requires that, after you have executed the stsadm command above, you use Windows Server Backup to do a Full Server Backup. Then, when a restore operation is needed, you will be able to select specifically the section that has the SharePoint technologies backup. Hope you find this to be a helpful post. I have found this to be especially handy in SharePoint deployments that are part of a Team Foundation Server deployment and that are isolated from any other SharePoint farm and such. Credits: Sean McDonough for passing along the information available on TechNet.
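
    If you want to script the two steps the post describes, here is a rough sketch: register the SharePoint VSS writer once with stsadm, then kick off a full server backup with Windows Server Backup. The stsadm path and the wbadmin arguments are assumptions for a default single-server SharePoint 2010 install, so verify them against your environment before relying on this.

      # Python sketch: register the SharePoint VSS writer, then run a one-off full server backup.
      import subprocess

      STSADM = r"C:\Program Files\Common Files\Microsoft Shared\Web Server Extensions\14\BIN\STSADM.EXE"

      # One-time registration so Windows Server Backup picks up the SharePoint components.
      subprocess.run([STSADM, "-o", "registerwsswriter"], check=True)

      # Full server backup to a dedicated backup volume (E: is an assumption).
      subprocess.run(
          ["wbadmin", "start", "backup",
           "-backupTarget:E:", "-include:C:", "-allCritical", "-vssFull", "-quiet"],
          check=True,
      )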

    Read the article

  • Difference between SQL 2005 and SQL 2008 for inserting multiple rows with XML

    - by Sam Dahan
    I am using the following SQL code for inserting multiple rows of data in a table. The data is passed to the stored procedure using an XML variable : INSERT INTO MyTable SELECT SampleTime = T.Item.value('SampleTime[1]', 'datetime'), Volume1 = T.Item.value('Volume1[1]', 'float'), Volume2 = T.Item.value('Volume2[1]', 'float') FROM @xml.nodes('//Root/MyRecord') T(item) I have a whole bunch of unit tests to verify that I am inserting the right information, the right number of records, etc.. when I call the stored procedure. All fine and dandy - that is, until we began to monkey around with the compatibility level of the database. The code above worked beautifully as long as we kept the compatibility level of the DB at 90 (SQL 2005). When we set the compatibility level at 100 (SQL 2008), the unit tests failed, because the stored procedure using the code above times out. The unit tests are dropping the database, re-creating it from scripts, and running the tests on the brand new DB, so it's not - I think - a question of the 'old compatibility level' sticking around. Using the SQL Management studio, I made up a quick test SQL script. Using the same XML chunk, I alter the DB compat level , truncate the table, then use the code above to insert 650 rows. When the level is 90 (SQL 2005), it runs in milliseconds. When the level is 100 (SQL 2008) it sometimes takes over a minute, sometimes runs in milliseconds. I'd appreciate any insight anyone might have into that. EDIT The script takes over a minute to run with my actual data, which has more rows than I show here, is a real table, and has an index. With the following example code, the difference goes between milliseconds and around 5 seconds. --use [master] --ALTER DATABASE MyDB SET compatibility_level =100 use [MyDB] declare @xml xml set @xml = '<?xml version="1.0"?> <Root xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xmlns:xsd="http://www.w3.org/2001/XMLSchema"> <Record> <SampleTime>2009-01-24T00:00:00</SampleTime> <Volume1>0</Volume1> <Volume2>0</Volume2> </Record> ..... 653 records, sample time spaced out 4 hours ........ </Root>' DECLARE @myTable TABLE( ID int IDENTITY(1,1) NOT NULL, [SampleTime] [datetime] NOT NULL, [Volume1] [float] NULL, [Volume2] [float] NULL) INSERT INTO @myTable select T.Item.value('SampleTime[1]', 'datetime') as SampleTime, Volume1 = T.Item.value('Volume1[1]', 'float'), Volume2 = T.Item.value('Volume2[1]', 'float') FROM @xml.nodes('//Root/Record') T(item) I uncomment the 2 lines at the top, select them and run just that (the ALTER DATABASE statement), then comment the 2 lines, deselect any text and run the whole thing. When I change from 90 to 100, it runs all the time in 5 seconds (I change the level once, but I run the series several times to see if I have consistent results). When I change from 100 to 90, it runs in milliseconds all the time. Just so you can play with it too. I am using SQL Server 2008 R2 standard edition.

    Read the article

  • Custom Online Backup Solution Advice

    - by Martín Marconcini
    I have to implement a way so our customers can back up their SQL 2000/5/8 databases online. The application they use is a C#/.NET35 Winforms application that connects to a SQL Server (can be 2000/2005/2008, sometimes Express editions). The SQL Server is on the same LAN. Our application has a very specific UI and we must code each form following those guidelines. There’s lots of GDI+ to give it the look and feel we want. For that reason, using a 3rd party application is not a very good idea. We need to charge the customer on a monthly/annual basis for the service. Preferably, the customer doesn’t need to care about bandwidth and storage space. It must be transparent. Given the above reqs., my first thoughts are: Solution 1: Code some sort of basic FTP functionality with a behind-the-scenes SQL backup mechanism, then hire a hosting service and compress-transfer the .BAK to the hosting. Maintain a series of folders (one for each customer). They won’t see what’s happening. They will just see a list of their files and a big “Backup now” button that will perform the SQL backup, compress it and upload it (and update the file list) ;) Pros: Not very complicated to implement, simple to use, fairly simple to configure (could have a dedicated ftp user/pass). Cons: Finding an “ftp only” hosting plan is probably not going to be easy, they usually come with a bunch of stuff. FTP is not always the best protocol. More? Solution 2: Similar to 1, but instead of FTP, find a cloud computing service like Amazon S3, Mosso or similar. Pros: Cloud storage is fast, reliable, etc. It’s kind of easy to implement (especially if there are APIs like AWS or Mosso). Cons: I have been unable to come up with a service optimized for resellers where I can give multiple sub-accounts (one for each customer). Billing is going to be a nightmare because these services bill per GB, and with one account it’s impossible to differentiate each customer. Solution 3: Similar to 2, but letting the user create their own account on Amazon S3 (for example). Pros: You forget about billing and such. Cons: A mess for the customer, who has to open the Amazon (or whatever) account and will be charged for that by them and not by you. You can’t really charge the customer (since you’re just not doing anything). Solution 4: Use one of the many online backup solutions that use the tech in cloud storage. Pros: Many of these include SQL Server backup, and a lot of features that we’d have to implement. Plus web access and stuff like that will come included. Cons: Still have the billing problem described in number 2. Few of these companies (if any) offer “reseller” accounts. You have to eventually use their software (some offer certain branding). Any better approach? Summary: You have a piece of software (.NET WinApp). You want your users to be able to back up their SQL Server databases online (and be able to retrieve the backups if needed). You ideally would like to charge the customer for this service (i.e. XX € a year).
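
    To make Solution 1/2 concrete, here is a language-agnostic sketch of the per-customer flow (the real application is C#, so this Python version is only an illustration): run a native BACKUP DATABASE against the LAN server, compress the .BAK, and upload it under a per-customer prefix. Server name, database, bucket, file paths and the S3 client (boto3) are all assumptions, not a recommendation of a specific provider.

      # Python sketch: SQL Server backup -> gzip -> upload under a per-customer S3 prefix.
      import gzip
      import shutil

      import boto3
      import pyodbc

      DB, CUSTOMER = "CustomerDb", "customer-0001"                    # placeholders
      BAK, GZ = r"C:\Backups\CustomerDb.bak", r"C:\Backups\CustomerDb.bak.gz"

      # BACKUP DATABASE cannot run inside a transaction, hence autocommit=True.
      conn = pyodbc.connect("DRIVER={SQL Server};SERVER=.\\SQLEXPRESS;Trusted_Connection=yes;",
                            autocommit=True)
      cur = conn.cursor()
      cur.execute(f"BACKUP DATABASE [{DB}] TO DISK = N'{BAK}' WITH INIT")
      while cur.nextset():   # drain informational result sets until the backup completes
          pass

      with open(BAK, "rb") as src, gzip.open(GZ, "wb") as dst:
          shutil.copyfileobj(src, dst)

      boto3.client("s3").upload_file(GZ, "my-backup-bucket", f"{CUSTOMER}/CustomerDb.bak.gz")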

    Read the article

  • Mercurial local repository backup

    - by Ricket
    I'm a big fan of backing things up. I keep my important school essays and such in a folder of my Dropbox. I make sure that all of my photos are duplicated to an external drive. I have a home server where I keep important files mirrored across two drives inside the server (like a software RAID 1). So for my code, I have always used Subversion to back it up. I keep the trunk folder with a stable copy of my application, but then I create a branch named with my username, and inside there is my working copy. I make very few changes between commits to that branch, with the understanding that the code in there is my backup. Now I'm looking into Mercurial, and I must admit I haven't truly used it yet so I may have this all wrong. But it seems to me that you have a server-side repository, and then you clone it to a working directory in the form of a local repository. Then as you work on something, you make commits to that local repository, and when things are in a state to be shared with others, you hg push to the parent repository on the server. Between pushes of stable, tested, bug-free code, where is the backup? After doing some thinking, I've come to the conclusion that it is not meant for backup purposes and it assumes you've handled that on your own. I guess I need to keep my Mercurial local repositories in my dropbox or some other backed-up location, since my in-progress code is not pushed to the server. Is this pretty much it, or have I missed something? If you use Mercurial, how do you backup your local repositories? If you had turned on your computer this morning and your hard drive went up in flames (or, more likely, the read head went bad, or the OS corrupted itself, ...), what would be lost? If you spent the past week developing a module, writing test cases for it, documenting and commenting it, and then a virus wipes your local repository away, isn't that the only copy? So then on the flip side, do you create a remote repository for every local repository and push to it all the time? How do you find a balance? How do you ensure your code is backed up? Where is the line between using Mercurial as backup, and using a local filesystem backup utility to keep your local repositories safe?
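
    If you do end up treating a second repository as the backup, the usual pattern is the one hinted at above: keep a mirror repository per local clone (on a Dropbox folder, NAS share or second drive) and push to it on a schedule, so even work-in-progress changesets that were never pushed to the team server survive a dead disk. A small sketch, with the two root paths as assumptions:

      # Python sketch: push every local Mercurial clone to a per-repo mirror for backup.
      import subprocess
      from pathlib import Path

      REPO_ROOT = Path.home() / "src"          # where the local clones live (assumption)
      BACKUP_ROOT = Path("/mnt/backup/hg")     # Dropbox folder, NAS share, second drive...

      for repo in REPO_ROOT.iterdir():
          if not (repo / ".hg").is_dir():
              continue                         # not a Mercurial repository
          mirror = BACKUP_ROOT / repo.name
          if not (mirror / ".hg").is_dir():
              mirror.mkdir(parents=True, exist_ok=True)
              subprocess.run(["hg", "init", str(mirror)], check=True)
          # -f: the mirror is a dumb backup, so pushing extra heads to it is fine.
          result = subprocess.run(["hg", "push", "-f", str(mirror)], cwd=repo)
          if result.returncode not in (0, 1):  # 1 only means "no changes found"
              raise RuntimeError(f"hg push failed for {repo}")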

    Read the article

  • Web site backup in PHP?

    - by Pekka
    Does anybody know a clean PHP-based solution that can backup remote web sites using FTP? Must haves: Recursive FTP backups Possible to run through a cron job Easy to configure (easy adding of multiple sites) Local storage of backup files is sufficient Would be nice: Backed up sites are stored as zip files A nice interface to manage things Provides notification when backup has succeeded or failed Does incremental backups Does MySQL Database backups I need this to be PHP (or Perl) based because it's going to be used on shared hosting packages that do not allow usage of the standard GNU/Linux tools available.

    Read the article

  • Best choice for off-site backup: dd vs tar

    - by plok
    I have two 1TB single-partition hard disks configured as RAID1, of which I would like to make an off-site backup on a third disk, which I am still to buy. The idea is to store the backup at a relative's house, considerably far away from my place, in the hope that all the information will be safe in the case of a global thermonuclear apocalypse. Of course, this backup would be well encrypted. What I still have to decide is whether I am going to simply tar the entire partition or, instead, use dd to create an image of the disks. Is there any non-trivial difference between these two approaches that I could be overlooking? This off-site backup would be updated no more than two or three times a year, in the best of the cases, so performance should not be a factor to be pondered at all. What, and why, would you use if you were me? dd, tar, or a third option?

    Read the article

  • Java database backup and restore

    - by jawath
    How do I back up / restore any kind of database inside my Java application to flat files? Are there any tools or frameworks available to back up a database to a flat file like CSV, XML, or a secure encrypted file, and to restore from CSV or XML files to databases? It should also be capable of table-wise backup and restore.

    Read the article

  • Back up multiple Exchange accounts without direct access to the Exchange server

    - by Mike Wallace
    For e-mail, we use Microsoft Exchange and it is hosted by 1and1.com. We have about 30 Exchange accounts that I would like to back up to PST files. That is, for each account that we have (all 30), I would like to create a single PST file (1.pst thru 30.pst). I do not have direct access to the Exchange server. Basically, for each Exchange account, I can supply: the IP address for the Exchange server or the URL to the OWA, the username, and the password. Is there a tool out there that can do this for me? It seems that Microsoft's "Online Services Migration Tools" comes awfully close, but it appears that it's geared to pull data out of any Exchange server and push it into Microsoft Online. I don't believe it can be used to simply pull the data out and generate PSTs.

    Read the article

  • Backup and Restore ADAM database

    - by kuoson
    Hi, I was trying to back up and restore an ADAM database to a different server the other day. I copied all files under the "Program Files/Microsoft ADAM" folder to the same path on the destination server and started up the ADAM service on the destination server. Although the service came back up successfully and I was able to connect to the instance with the ADAM ADSI Edit MMC snap-in, I found I had to reset every single user's password before they could log in again. Has anyone run into this issue before? Is the password encrypted with the server IP address or something like that?

    Read the article

  • Unlimited and multi-computer online storage solutions with automatic backup

    - by JRL
    As the title says, what are the existing online storage solutions that provide: unlimited storage automatic backup and allow for an unlimited number of computers (use not tied to a single computer)? There are several existing questions on this site related to online storage solutions, but none that is specifically targeted to what I want, so I thought I'd ask the question. This wikipedia article lists some of them, are there others? How do they compare in terms of price, feature set and ease of use? Update: Kinda disappointed no one has any answers to this so far. JungleDisk looks promising, anyone have experience with it? Update 2: To answer the comments, what I'm looking for definitely DOES exist. These solutions all seem to fit the bill: BackMii CrashPlan DataPreserve Humyo JungleDisk KeepVault SpiderOak And some of them are quite cheap (CrashPlan is $100 a year). For unlimited space and computers, I'd say that's pretty good. Does anyone have experience with CrashPlan or any other of the above solutions?

    Read the article

  • Win a place at a SQL Server Masterclass with Kimberly Tripp and Paul Randal

    - by Testas
    The top things YOU need to know about managing SQL Server - in one place, on one day - presented by two of the best SQL Server industry trainers! And you could be there courtesy of the UK SQL Server User Group and SQL Server Magazine! This week the UK SQL Server User Group will provide you with details of how to win a place at this must-see seminar. You can also register for the seminar yourself at www.regonline.co.uk/kimtrippsql. More information about the seminar - Where: Radisson Edwardian Heathrow Hotel, London. When: Thursday 17th June 2010. This one-day MasterClass will focus on many of the top issues companies face when implementing and maintaining a SQL Server-based solution. In the case where a company has no dedicated DBA, IT managers sometimes struggle to keep the data tier performing well and the data available. This can be especially troublesome when the development team is unfamiliar with the effect application design choices have on database performance. The Microsoft SQL Server MasterClass 2010 is presented by Paul S. Randal and Kimberly L. Tripp, two of the most experienced and respected people in the SQL Server world. Together they have over 30 years combined experience working with SQL Server in the field, and on the SQL Server product team itself. This is a unique opportunity to hear them present at a UK event which will: debunk many of the ingrained misconceptions around SQL Server's behaviour; show you disaster recovery techniques critical to preserving your company's life-blood - the data; explain how a common application design pattern can wreak havoc in the database; and walk through the top-10 points to follow around operations and maintenance for a well-performing and available data tier! Please note: the agenda may be subject to change. Session abstracts - KEYNOTE: Bridging the Gap Between Development and Production. Applications are commonly developed with little regard for how design choices will affect performance in production. This is often because developers don't realize the implications of their design on how SQL Server will be able to handle a high workload (e.g. blocking, fragmentation) and/or because there's no full-time trained DBA that can recognize production problems and help educate developers. The keynote sets the stage for the rest of the day, discussing some of the issues that can arise, explaining how some can be avoided, and highlighting some of the features in SQL 2008 that can help developers and DBAs make better use of SQL Server and troubleshoot when things go wrong. SESSION ONE: SQL Server Mythbusters. It's amazing how many myths and misconceptions have sprung up and persisted over the years about SQL Server - after many years helping people out on forums, newsgroups, and customer engagements, Paul and Kimberly have heard it all. Are there really non-logged operations? Can interrupting shrinks or rebuilds cause corruption? Can you override the server's MAXDOP setting? Will the server always do a table-scan to get a row count? Many myths lead to poor design choices and inappropriate maintenance practices, so these are just a few of the many, many myths that Paul and Kimberly will debunk in this fast-paced session on how SQL Server operates and should be managed and maintained. SESSION TWO: Database Recovery Techniques Demo-Fest. Even if a company has a disaster recovery strategy in place, they need to practice to make sure that the plan will work when a disaster does strike.
    In this fast-paced demo session Paul and Kimberly will repeatedly do nasty things to databases and then show how they are recovered - demonstrating many techniques that can be used in production for disaster recovery. Not for the faint-hearted! SESSION THREE: GUIDs: Use, Abuse, and How To Move Forward. Since the addition of the GUID (Microsoft’s implementation of the UUID), my life as a consultant and "tuner" has been busy. I’ve seen databases designed with GUID keys run fairly well with small workloads but completely fall over and fail because they just cannot scale. And, I know why GUIDs are chosen - it simplifies the handling of parent/child rows in your batches so you can reduce round-trips or avoid dealing with identity values. And, yes, sometimes it's even for distributed databases and/or security that GUIDs are chosen. I'm not entirely against ever using a GUID, but overusing and abusing GUIDs just has to be stopped! Please, please, please let me give you better solutions and explanations on how to deal with your parent/child rows, round-trips and clustering keys! SESSION FOUR: Essential Database Maintenance. In this session, Paul and Kimberly will run you through their top-ten database maintenance recommendations, with a lot of tips and tricks along the way. These are distilled from almost 30 years combined experience working with SQL Server customers and are geared towards making your databases more performant, more available, and more easily managed (to save you time!). Everything in this session will be practical and applicable to a wide variety of databases. Topics covered include: backups, shrinks, fragmentation, statistics, and much more! Focus will be on 2005, but we'll explain some of the key differences for 2000 and 2008 as well. Speaker biographies - Paul S. Randal and Kimberly L. Tripp: Paul and Kimberly are a husband-and-wife team who own and run SQLskills.com, a world-renowned SQL Server consulting and training company. They are both SQL Server MVPs and Microsoft Regional Directors, with over 30 years of combined experience on SQL Server. Paul worked on the SQL Server team for nine years in development and management roles, writing many of the DBCC commands, and ultimately with responsibility for the core Storage Engine for SQL Server 2008. Paul writes extensively on his blog (SQLskills.com/blogs/Paul) and for TechNet Magazine, for which he is also a Contributing Editor. Kimberly worked on the SQL Server team in the early 1990s as a tester and writer before leaving to found SQLskills and embrace her passion for teaching and consulting. Kimberly has been a staple at worldwide conferences since she first presented at TechEd in 1996, and she blogs at SQLskills.com/blogs/Kimberly. They have written Microsoft whitepapers and books for SQL Server 2000, 2005 and 2008, and are regular, top-rated presenters worldwide on database maintenance, high availability, disaster recovery, performance tuning, and SQL Server internals. Together they teach the SQL MCM certification and teach throughout Microsoft. In their spare time, they like to find frogfish in remote corners of the world.

    Read the article

  • SQL Server Driver for PHP 2.0 CTP2 is now released

    - by The Official Microsoft IIS Site
    It is our pleasure to announce the release of Community Technology Preview 2 (CTP2) of the SQL Server Driver for PHP 2.0! We would like to...(read more)

    Read the article
