Search Results

Search found 27337 results on 1094 pages for 'trv sql'.


  • SQL Sharding and SQL Azure…

    - by Dave Noderer
    Herve Roggero has just published a paper that outlines patterns for scaling using SQL Azure and the sharding API from Blue Syntax (his and Scott Klein’s company). You can find the paper at: http://www.bluesyntax.net/files/EnzoFramework.pdf Herve and Scott have also just released an Apress book, Pro SQL Azure. The idea of being able to split (shard) database operations automatically and control them from a web-based management console is very appealing. These ideas have been talked about for a long time and implemented in thousands of very custom ways that have been costly, complicated and fragile. Now there is light at the end of the tunnel: scaling database access will become easier and move into the mainstream of application development. The main cost is going through an API whenever you access the database. The API directs each query to the correct database(s), which may be located locally or in the cloud. It is inevitable that the API will change in the future, perhaps by being incorporated into a Microsoft offering. Even if that happens, your application will already be architected around these patterns, and the details of the actual API will matter less. Herve does a great job of laying out concepts that every developer and architect should be familiar with!

    Read the article

  • MDX Studio download #mdx #ssas

    - by Marco Russo (SQLBI)
    Short version: the latest available version of MDX Studio can be downloaded from http://www.sqlbi.com/tools/mdx-studio/ Long version: Last week Stacia Misner tweeted that the online version of MDX Studio was no longer available. It was hosted on http://mdx.mosha.com. That was sad news, and it is also not good that nobody is maintaining the desktop version of MDX Studio. The latest release is 0.4.14 and, as I write, it is still available on a SkyDrive link provided by Mosha Pasumansky, who wrote MDX Studio. Mosha no longer works at Microsoft, and the entire BI community hopes that somebody will continue his work on this product. Unfortunately, it cannot be published on CodePlex because of some IP restrictions. Only bad news? Well, I hope not. The first piece of good news is that MDX Studio also works with Analysis Services 2012 in Multidimensional mode. The second is that, after checking that we were allowed to do so, we created a web page on the SQLBI web site to download the latest available release of MDX Studio. I hope it will be necessary to update it in the future; for now it is just a way to make finding and downloading this precious tool easier, and to ensure it will not disappear if the SkyDrive link currently hosting the download is discontinued, as happened to the MDX Studio online version. Now a question for the BI community: I know there was some tutorial content available for MDX Studio. I’d like to gather it all in a single place. If you have such content, please contact me directly by writing to marco (dot) russo (at) sqlbi [dot] com. Thanks!

    Read the article

  • Formatting Keywords to UPPERCASE In Oracle SQL Developer

    - by thatjeffsmith
    I received this question from a customer today, and it took me more than a few minutes to remember where this preference was located in SQL Developer. That tells me the topic is ripe for blogging. How do I go FROM: select * from scott.emp where ename like '%JEFF%' TO: SELECT * FROM scott.emp WHERE ename LIKE '%JEFF%' It’s all in the formatting. You need to access the formatting preferences under the Tools menu. It takes a bit of navigating to get there, so bear with me: Tools > Database > SQL Formatter > Oracle Formatting > click ‘Edit’ on the profile > Other > Case change: ‘Keywords Uppercase’. It’s easy to find once you know where to look. You can tell it to leave the case alone, upper everything, upper only the keywords, or lower everything. Accessing the Formatter Options: we allow separate formatting options for different RDBMSes, so you need to make sure you’re accessing the ‘Oracle Formatting’ page in the preferences. You can then choose to edit the default options, or you can do what I have done and save the defaults as a new set of options. I’ve called my profile ‘JeffCustom.’ I can now switch back and forth through different sets of formatting options. You need to hit the ‘Edit’ button to get to the formatting options editor; a good number of people seem to miss this. Select your profile, then hit the ‘Edit’ button.
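
    For reference, here is the before and after from the post as a standalone snippet; only the keyword case changes, the identifiers and the literal are untouched:

        -- Before formatting (keywords in lowercase)
        select * from scott.emp where ename like '%JEFF%';

        -- After 'Keywords Uppercase' is applied
        SELECT * FROM scott.emp WHERE ename LIKE '%JEFF%';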

    Read the article

  • Connecting to SQL database using SQLCMD

    - by kaleidoscope
    As we all know, there are a number of ways you can connect to your SQL Azure database. One of the quickest options is SQLCMD. To start the SQLCMD utility and connect to a named instance of SQL Server: open a Command Prompt window and type sqlcmd -S myServer\instanceName, replacing myServer\instanceName with the name of the computer and the instance of SQL Server that you want to connect to, then press ENTER. The sqlcmd prompt (1>) indicates that you are connected to the specified instance of SQL Server. SQL Server Management Studio also lets you use SQLCMD syntax from within SQL scripts via SQLCMD Mode. How to enable SQLCMD mode in the Transact-SQL editor (for how to start the editor, see How to: Start the Transact-SQL Editor): To toggle SQLCMD mode from the Data menu: 1. Open the query in the Transact-SQL editor. 2. On the Data menu, point to Transact-SQL Editor, and click SQLCMD Mode. To toggle SQLCMD mode from the toolbar: 1. Open the query in the Transact-SQL editor. 2. On the Transact-SQL Editor toolbar, click SQLCMD Mode. To toggle SQLCMD mode from the shortcut menu: 1. Open the query in the Transact-SQL editor. 2. Right-click anywhere in the editor window, and then click SQLCMD Mode. For more information, follow the link below: http://msdn.microsoft.com/en-us/library/ms170207.aspx   Geeta, G
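
    A minimal sketch of the steps above; the server, instance and login names are placeholders:

        REM Windows authentication against a named instance
        sqlcmd -S myServer\instanceName

        REM Or a SQL login (hypothetical credentials)
        sqlcmd -S myServer\instanceName -U myLogin -P myPassword

        REM At the 1> prompt, run a quick sanity check
        1> SELECT @@SERVERNAME;
        2> GO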

    Read the article

  • Upgrade to 2008 R2

    - by DavidWimbush
    I don't like it, Carruthers. It's just too quiet. Well, I've done the pre-production server, the main live server and the Reporting/BI server with remarkably little trouble. Pre-production and live were rebuilds. I failed live over to our log shipping standby for the duration, which has a gotcha I blogged about before. When I failed back to the primary live server again, it was very quick to bring the databases online. I understand the databases don't actually get upgraded until you recover them, but there was no noticeable delay. It's gone from 2005 Workgroup - limited to 4GB of memory - to 2008 R2 Standard, so it can now use nearly all of the 30GB in the server. It's soo much faster. The reporting/BI server I upgraded in situ. This took a while but, again, went smoothly. Just watch out, because the master database was left at compatibility level 90. Also, the upgrade decided to use the reporting service's credentials for database access when running reports. It didn't preserve the existing credentials and I had to go into the Reporting Configuration Manager to put them back in. Make sure you know what credentials your server is using before you upgrade. All things considered, a fairly painless experience. Now I just have to upgrade and reset our log shipping standby server again!
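
    A quick post-upgrade check for the compatibility-level gotcha might look like this (a sketch; the database name is a placeholder, and 100 is the 2008/2008 R2 level):

        -- List compatibility levels after the upgrade (90 = 2005, 100 = 2008/2008 R2)
        SELECT name, compatibility_level FROM sys.databases;

        -- Raise a database that was left behind
        ALTER DATABASE [MyDatabase] SET COMPATIBILITY_LEVEL = 100;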

    Read the article

  • SQL Server Windows Auth Login sees Domain as untrusted...

    - by Mr Shoubs
    I've had someone set up a domain controller on Windows 2008 on one server, and SQL Server 2008 on another. The domain seems to be working fine; I'm logged on as a domain user on both servers, and nothing seems to be a problem there. However, when I try to add a domain user/group to SQL Server security (e.g. clicking OK on the Create Login screen), it says it can't find the account (even though I used the search to find the correct account in the first place). When I try to log on (even though I haven't added the login yet), it says something about the account being part of an untrusted domain instead of saying I don't have permission to log on. Anyone have any ideas on what is set up incorrectly?
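
    One hedged troubleshooting step (domain and account names hypothetical) is to create the login through T-SQL instead of the dialog, which rules out a GUI-only lookup problem; if this also fails with an untrusted-domain error, the issue is more likely DNS or Kerberos/SPN configuration between the two servers:

        -- Hypothetical domain and account; run in a new query window
        CREATE LOGIN [MYDOMAIN\SomeUser] FROM WINDOWS;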

    Read the article

  • The Data Scientist

    - by BuckWoody
    A new term - well, perhaps not that new - has come up and I’m actually very excited about it. The term is Data Scientist, and since it’s new, it’s fairly undefined. I’ll explain what I think it means, and why I’m excited about it. In general, I’ve found the term deals at its most basic with analyzing data. Of course, we all do that, and the term itself in that definition is redundant. There is no science that I know of that does not work with analyzing lots of data. But the term seems to refer to more than the common practices of looking at data visually, putting it in a spreadsheet or report, or even using simple coding to examine data sets. The term Data Scientist (as far as I can make out this early in its use) is someone who has a strong understanding of data sources, relevance (statistical and otherwise) and processing methods as well as front-end displays of large sets of complicated data. Some - but not all - Business Intelligence professionals have these skills. In other cases, senior developers, database architects or others fill these needs, but in my experience, many lack the strong mathematical skills needed to make these choices properly. I’ve divided the knowledge base for someone that would wear this title into three large segments. It remains to be seen if a given Data Scientist would be responsible for knowing all these areas or would specialize. There are pretty high requirements on the math side, specifically in graduate-degree level statistics, but in my experience a company will only have a few of these folks, so they are expected to know quite a bit in each of these areas. Persistence The first area is finding, cleaning and storing the data. In some cases, no cleaning is done prior to storage - it’s just identified and the cleansing is done in a later step. This area is where the professional would be able to tell if a particular data set should be stored in a Relational Database Management System (RDBMS), across a set of key/value pair storage (NoSQL) or in a file system like HDFS (part of the Hadoop landscape) or other methods. Or do you examine the stream of data without storing it in another system at all? This is an important decision - it’s a foundation choice that deals not only with a lot of expense of purchasing systems or even using Cloud Computing (PaaS, SaaS or IaaS) to source it, but also the skillsets and other resources needed to care for and feed the system for a long time. The Data Scientist sets something into motion that will probably outlast his or her career at a company or organization. Often these choices are made by senior developers, database administrators or architects in a company. But sometimes each of these has a certain bias towards making a decision one way or another. The Data Scientist would examine these choices in light of the data itself, starting perhaps even before the business requirements are created. The business may not even be aware of all the strategic and tactical data sources that they have access to. Processing Once the decision is made to store the data, the next set of decisions are based around how to process the data. An RDBMS scales well to a certain level, and provides a high degree of ACID compliance as well as offering a well-known set-based language to work with this data. In other cases, scale should be spread among multiple nodes (as in the case of Hadoop landscapes or NoSQL offerings) or even across a Cloud provider like Windows Azure Table Storage. 
In fact, in many cases - most of the ones I’m dealing with lately - the data should be split among multiple types of processing environments. This is a newer idea. Many data professionals simply pick a methodology (RDBMS with Star Schemas, NoSQL, etc.) and put all data there, regardless of its shape, processing needs and so on. A Data Scientist is familiar not only with the various processing methods, but also with how they work, so that they can choose the right one for a given need. This is a huge time commitment, hence the need for a dedicated title like this one. Presentation This is where the need for a Data Scientist is most often already being filled, sometimes with more or less success. The latest Business Intelligence systems are quite good at allowing you to create amazing graphics - but it’s the data behind the graphics that is the most important component of truly effective displays. This is where the mathematics requirement of the Data Scientist title is the most unforgiving. In fact, someone without a good foundation in statistics is not a good candidate for creating reports. Even a basic level of statistics can be dangerous. Anyone who works in analyzing data will tell you that there are multiple errors possible when data just seems right - and basic statistics bears out that you’re on the right track - that are only solvable when you understand why the statistical formula works the way it does. And there are lots of ways of presenting data. Sometimes all you need is a “yes” or “no” answer that can only come after heavy analysis work. In that case, a simple e-mail might be all the reporting you need. In others, complex relationships and multiple components require a deep understanding of the various graphical methods of presenting data. Knowing which kind of chart, color, graphic or shape conveys a particular datum best is essential knowledge for the Data Scientist. Why I’m excited I love this area of study. I like math, stats, and computing technologies, but it goes beyond that. I love what data can do - how it can help an organization. I’ve been fortunate enough in my professional career these past two decades to work with lots of folks who perform this role at companies from aerospace to medical firms, from manufacturing to retail. Interestingly, the size of the company really isn’t germane here. I worked with one very small bio-tech (cryogenics) company that worked deeply with analysis of complex interrelated data. So watch this space. No, I’m not leaving Azure or distributed computing or Microsoft. In fact, I think I’m perfectly situated to investigate this role further. We have a huge set of tools, from RDBMS to Hadoop, to allow me to explore. And I’m happy to share what I learn along the way.

    Read the article

  • Connecting MS SQL using freetds and unixodbc: isql - no default driver specified

    - by Dejan
    I am trying to connect to the MS SQL database using freetds and unixodbc. I have read various guides on how to do it, but none of them works for me. When I try to connect to the database using the isql tool, I get the following error: $ isql -v TS username password [IM002][unixODBC][Driver Manager]Data source name not found, and no default driver specified [ISQL]ERROR: Could not SQLConnect Has anybody successfully established a connection to an MS SQL database using freetds and unixodbc on Ubuntu 12.04? I would really appreciate some help. Below is the procedure I used to configure freetds and unixodbc. Thanks for your help in advance! Procedure First, I installed the following packages sudo apt-get unixodbc unixodbc-dev freetds-dev tdsodbc and configured freetds as follows: --- /etc/freetds/freetds.conf --- [TS] host = SERVER port = 1433 tds version = 7.0 client charset = UTF-8 Using the tsql tool I can successfully connect to the database by executing tsql -S TS -U username -P password As I need an odbc connection, I configured odbcinst.ini as follows: --- /etc/odbcinst.ini --- [FreeTDS] Description = FreeTDS Driver = /usr/lib/x86_64-linux-gnu/odbc/libtdsodbc.so Setup = /usr/lib/x86_64-linux-gnu/odbc/libtdsS.so FileUsage = 1 CPTimeout = CPResuse = client charset = utf-8 and odbc.ini as follows: --- /etc/odbc.ini --- [TS] Description = "test" Driver = FreeTDS Servername = SERVER Server = SERVER Port = 1433 Database = DBNAME Trace = No Trying to connect to the database using the isql tool with this configuration results in the following error: $ isql -v TS username password [IM002][unixODBC][Driver Manager]Data source name not found, and no default driver specified [ISQL]ERROR: Could not SQLConnect
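
    When isql reports "no default driver specified", unixODBC usually cannot resolve the Driver entry in odbc.ini against a registered driver. These standard unixODBC commands help confirm which config files are being read and what is registered (output will vary by machine):

        # Show the config file paths unixODBC is actually using
        odbcinst -j

        # List registered drivers; 'FreeTDS' must appear here
        odbcinst -q -d

        # List configured data sources; 'TS' must appear here
        odbcinst -q -s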

    Read the article

  • SSMS Tools Pack now supports Denali CTP1

    - by AaronBertrand
    Earlier today, Mladen Prajdic ( blog | twitter ) released an updated version of his SSMS Tools Pack (v.1.9.4), a free add-in for Management Studio that provides a ton of helpful functionality that isn't available with the native tools. I'm really glad this happened, because I've installed Denali on all of my VMs and have been using it for most of my work, and I've been missing some of the little things the tool adds. In addition to adding Denali support, Mladen also fixed a handful of minor bugs...(read more)

    Read the article

  • SQL in the City (Charlotte) Wrap Up

    - by drsql
    Ok, it has been quite a while since the event, two weeks and a day to be exact, but I needed a rest before hitting Windows Live Writer again. Speaking is exhausting, traveling is exhausting, and well, I replaced my laptop and had to get all of my software back together. (Between Windows 8.1 sync features, Dropbox and Skydrive, it has never been easier…but I digress.) There are plenty of great vendors out there, but one of my favorites has always been Red-Gate. I have written half of a book with them,...(read more)

    Read the article

  • Smart defaults [SSDT]

    - by jamiet
    I’ve just discovered a new, somewhat hidden, feature in SSDT that I didn’t know about and figured it would be worth highlighting here because I’ll bet not many others know it either; the feature is called Smart Defaults. It gets around the problem of adding a NOT NULLable column to an existing table that has data in it – prior to SSDT you would need to define a DEFAULT constraint, yet it feels rather cumbersome to create an object purely for the purpose of pushing through a deployment – that’s the situation Smart Defaults is meant to alleviate. The Smart Defaults option exists in the advanced section of a Publish Profile file: The description of the setting is “Automatically provides a default value when updating a table that contains data with a column that does not allow null values”; in other words, checking that option will cause SSDT to insert an arbitrary default value into your newly created NOT NULLable column. In case you’re wondering how it does it, here’s how: SSDT creates a DEFAULT constraint at the same time as the column is created and then immediately removes that constraint: ALTER TABLE [dbo].[T1] ADD [C1] INT NOT NULL, CONSTRAINT [SD_T1_1df7a5f76cf44bb593506d05ff9a1e2b] DEFAULT 0 FOR [C1]; ALTER TABLE [dbo].[T1] DROP CONSTRAINT [SD_T1_1df7a5f76cf44bb593506d05ff9a1e2b]; You can then update the value as appropriate in a Post-Deployment script. Pretty cool! On the downside, you can only specify this option for the whole project, not for an individual table or even an individual column – I’m not sure that I’d want to turn this on for an entire project as it could hide problems that a failed deployment would highlight; in other words, Smart Defaults could be seen to be “papering over the cracks”. If you think that should be improved, go and vote (and leave a comment) at [SSDT] Allow us to specify Smart defaults per table or even per column. @Jamiet
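
    The two generated statements, reformatted for readability, followed by a hypothetical post-deployment step (the value 1 is a placeholder for whatever real default the data warrants):

        ALTER TABLE [dbo].[T1]
            ADD [C1] INT NOT NULL,
                CONSTRAINT [SD_T1_1df7a5f76cf44bb593506d05ff9a1e2b] DEFAULT 0 FOR [C1];

        ALTER TABLE [dbo].[T1]
            DROP CONSTRAINT [SD_T1_1df7a5f76cf44bb593506d05ff9a1e2b];

        -- Hypothetical Post-Deployment script: replace the arbitrary 0 with a real value
        UPDATE [dbo].[T1] SET [C1] = 1 WHERE [C1] = 0;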

    Read the article

  • T-SQL Tuesday #005: On Technical Reporting

    - by Adam Machanic
    Reports. They're supposed to look nice. They're supposed to be a method by which people can get vital information into their heads. And that's obvious, right? So obvious that you're undoubtedly getting ready to close this tab and go find something better to do with your life. "Why is Adam wasting my time with this garbage?" Because apparently, it's not obvious. In the world of reporting we have a number of different types of reports: business reports, status reports, analytical reports, dashboards,...(read more)

    Read the article

  • Always use dtexec.exe to test performance of your dataflows. No exceptions.

    - by jamiet
    Earlier this evening I posted a blog post entitled Investigation: Can different combinations of components effect Dataflow performance? where I compared the performance of three different dataflows all working to the same overall goal. I wanted to make one last point related to the results but I thought it warranted a blog post all of its own. Here is a screenshot of one of the dataflows that I was testing: Pretty complicated, I’m sure you’ll agree. Now, when I executed this dataflow in the test it executed in ~19 seconds; however, in that case I was executing using the command-line tool dtexec. I also tried executing inside the BIDS development environment and in that case it took much longer – 139 seconds. That’s more than seven times as long. The point I want to make is very simple: if you are testing your dataflows for performance, please use dtexec. Nothing else will suffice. @Jamiet
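
    For reference, a file-system package execution from the command line looks like this (the package path is hypothetical):

        REM Execute a package from the file system; time this run, not a BIDS run
        dtexec /F "C:\packages\MyDataflow.dtsx"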

    Read the article

  • SQL Saturday Birmingham #328 Database Design Precon In One Week

    - by drsql
    On September 22, I will be doing my "How to Design a Relational Database" pre-conference session in Birmingham, Alabama. You can see the abstract here if you are interested, and you can sign up there too, naturally. At just $100, which includes a free ebook copy of my database design book, it is a great bargain and I totally promise it will be a little over 7 hours of talking about and designing databases, which will certainly be better than what you do on a normal work day, even a Friday....(read more)

    Read the article

  • Does multiple files in SQL Server when using RAID help reduce conflicts in growth and file-locking?

    - by Dr Giles M
    I've been reading around and get the impression that if you are using RAID, then using multiple SQL Server files within a filegroup won't yield any further improvement, and the benefits are purely administrative (if you started to run out of space, or wanted to partition off data into manageable chunks for backups or for balancing the data around your big server room). However, being a reasonably savvy software person, it's not unthinkable to hypothesise that, even for smaller databases, SQL Server will perform growth and locking operations (for writes) on a LOGICAL file basis, so even if you are using RAID it seems to make sense to have multiple files in a filegroup to balance I/O. Or does the time taken to reconstruct the data from distributed filegroups outweigh the benefits of reduced locking? I'm also aware that the behaviour and benefits may be different for tables/indexes/log. Is there a good site that distinguishes the benefits of multiple files when RAID is already in place?
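
    For anyone who wants to experiment with balancing I/O across files, adding a file to a filegroup is straightforward (all names, sizes and paths hypothetical):

        -- Add a second data file to the primary filegroup
        ALTER DATABASE [MyDatabase]
            ADD FILE (
                NAME = MyDatabase_Data2,
                FILENAME = 'D:\Data\MyDatabase_Data2.ndf',
                SIZE = 1GB,
                FILEGROWTH = 256MB
            ) TO FILEGROUP [PRIMARY];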

    Read the article

  • Should we have a database independent SQL like query language in Django?

    - by Yugal Jindle
    Note: I know we already have the Django ORM, which keeps things database independent and converts to database-specific SQL queries. Once things start getting complicated, it is often preferable to write raw SQL queries for better efficiency. But when you write raw SQL queries, your code gets tied to the database you are using. I also understand it's important to use the full power of your database, which cannot be achieved with the Django ORM alone. My question: until I use a database-specific feature, why should I be tied to the database? For instance: we have a query with multiple joins and we decide to write a raw SQL query. Now that makes my website Postgres-specific, even though I have not used any Postgres-specific feature. I feel there should be some fake SQL language which can translate to any database's SQL query. Even Django's ORM could be built over it. That way, if you go outside the ORM but avoid database-specific features, you can still remain database independent. I asked the same question of Jacob Kaplan-Moss (in person): he advised me to stay with the database that I like and harness its whole power, with which I agree. But my point was not that we should always be database independent; my point is we should be database independent until we use a database-specific feature. Please explain: why should there be a fake SQL layer over the actual SQL?

    Read the article

  • Is the SAN dying???

    - by RickHeiges
    Is the SAN dying? The reason that I ask this question is that MSFT has unleashed technologies this year that point in that direction: AlwaysOn Availability Groups shun shared storage; Windows 2012 has storage replication technology that does not require a SAN; Windows 2012 has Hyper-V Replica technology that does not require a SAN; and PDW v2 continues to reinforce an approach that avoids shared storage. I'm not saying that SAN technology does not have its place or does not have benefits inherent to the beast....(read more)

    Read the article

  • Defaults for Exporting Data in Oracle SQL Developer

    - by thatjeffsmith
    I was testing a reported bug in SQL Developer today – so the bug I was looking for wasn’t there (YES!) but I found a different one (NO!) – and I was getting frustrated by having to check the same boxes over and over again. What I wanted was INSERT STATEMENTS to the CLIPBOARD. Not what I want! I’m always doing the same thing, over and over again. And I never go to FILE – that’s too permanent for my type of work. I either want stuff to the clipboard or to the worksheet. Surely there’s a way to tell SQL Developer how to behave? Oh yeah, check the preferences So you can set the defaults for this dialog. Go to: Tools – Preferences – Database – Utilities – Export Now I will always start with ‘INSERT’ and ‘Clipboard’ – woohoo! Now, I can also go INTO the preferences for each of the different formats to save me a few more clicks. I prefer pointy hats (^) for my delimiters, don’t you? So, spend a few minutes and set each of these to what you’re normally doing and save yourself a bunch of time going forward.

    Read the article

  • SQL Server 2008 R2 Service Pack 2 CTP is available

    - by AaronBertrand
    You can download the Service Pack 2 CTP from the following URL: http://www.microsoft.com/en-us/download/details.aspx?id=29848 The build # is 10.50.3720. This service pack contains all of the fixes from Service Pack 1 and Cumulative Updates 1 through 5, plus a few other minor fixes (a couple of SSRS bugs and a bug where an ALTER TABLE batch was not cached correctly). It does not include fixes from Service Pack 1 Cumulative Update #6, which I mentioned recently. You should *NOT* install this...(read more)

    Read the article

  • Oracle Database 12c By Example – SQL Developer and Multitenant

    - by thatjeffsmith
    As you may have heard, Oracle Database 12c is now available. In addition to the binaries and docs going out, we also published a few new Oracle By Example (OBE) chapters. You can find those links here on our product page. Do you know who found these, practically the minute they were published? An enterprising DBA-extraordinaire who just happened to be presenting at the ODTUG KScope13 conference in New Orleans. He thought it would be a good idea to download the new software over hotel Wi-Fi, install it and create a new multitenant database, watch a few OBEs, and then demo that live for his ‘SQL Developer for DBAs‘ session. Pretty crazy, right? Well, he did it, and I was there to watch. Way cool. You can listen to @leight0nn tell his story in his own words via this ODTUG interview with @oraclenered. In case you’re too giddy to sit through the video, I’ll give you a preview – he successfully cloned a pluggable database in about a minute with only a couple of clicks using Oracle SQL Developer 3.2.20.09 while connected to a 12c database.
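
    Under the covers, a clone like that boils down to something along these lines (a sketch with hypothetical PDB names; SQL Developer drives it through the GUI, and in 12.1 the source PDB must be open read-only):

        -- Clone one pluggable database from another, then open the copy
        CREATE PLUGGABLE DATABASE pdb_clone FROM pdb_original;
        ALTER PLUGGABLE DATABASE pdb_clone OPEN;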

    Read the article

  • SQL Server Add Primary Key

    - by Derek D.
    Adding a primary key can be done either after a table is created, or at the same time the table is created. It is important to note that, by default, a primary key is clustered. This may or may not be the preferred method of creation. For more information on clustered vs non [...]
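
    A minimal sketch of both approaches (table and constraint names hypothetical); note how NONCLUSTERED overrides the clustered default:

        -- 1) Primary key declared at creation time (clustered by default)
        CREATE TABLE dbo.Employees (
            EmployeeID INT NOT NULL CONSTRAINT PK_Employees PRIMARY KEY
        );

        -- 2) Primary key added after the table exists, overriding the default
        CREATE TABLE dbo.Departments (DepartmentID INT NOT NULL);
        ALTER TABLE dbo.Departments
            ADD CONSTRAINT PK_Departments PRIMARY KEY NONCLUSTERED (DepartmentID);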

    Read the article

  • SQL Server 2000 need to prevent logons whilst performing a backup for a side by side migration

    - by pigeon
    I'm looking for a way to prevent logons from occurring so I can take a full backup of a database to migrate it from its current SQL Server 2000 instance to a new SQL Server 2005 instance. A friend of mine suggested running a script which would put the DB into a rollback state. Not being a DBA, my DDL is very poor, and running a script that I don't understand may not be the best idea. One option which might be easier is to simply detach the database and copy it to the new server. Any suggestions would be greatly appreciated.
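
    The "rollback state" script the friend suggested is probably along these lines (a sketch; the database name and backup path are hypothetical). RESTRICTED_USER blocks ordinary logons, and WITH ROLLBACK IMMEDIATE rolls back any open transactions:

        -- Block ordinary logons and roll back in-flight work
        ALTER DATABASE [MyDatabase] SET RESTRICTED_USER WITH ROLLBACK IMMEDIATE;

        -- Take the full backup for the migration
        BACKUP DATABASE [MyDatabase] TO DISK = 'D:\Backups\MyDatabase.bak';

        -- Re-open the database to normal users afterwards
        ALTER DATABASE [MyDatabase] SET MULTI_USER;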

    Read the article
