Search Results

Search found 30014 results on 1201 pages for 'go yourself'.


  • Why "Tailoring" Your Resume Is Bad

    - by Mike C
    I was just writing a response to a comment on my "Sell Yourself!" presentation ( http://sqlblog.com/blogs/michael_coles/archive/2010/12/05/sell-yourself-presentation.aspx#comments ), and it started getting a little lengthy so I decided to turn it into a blog post. The "Sell Yourself!" post got a couple of very good comments on the blog, and quite a few more comments offline. I think I'll start this one with a great exchange from the movie "The Princess Bride": Vizzini: HE DIDN'T FALL? INCONCEIVABLE....(read more)

    Read the article

  • EFS recovery given everything but the Registry

    - by Joel in Gö
    I have an unfortunate problem: my old Windows XP installation has died, probably due to the hard drive failing. The drive now fails all SMART tests, but I can get files off it OK. I have now installed Windows 7 on a new drive, and want to transfer files from the old drive. However, some sensitive files were in an encrypted folder (I think EFS?). How can I decrypt them, given that I have essentially my entire old XP installation on disk? Thanks!

    Read the article

  • Something about traveling in China?

    - by user79989
    Well, I am Chinese and I live in China. If you are going to travel to China, you must go to Beijing and to the city of Xi'an. Everyone in China wants to go to Beijing and visit Tiananmen Square. Behind the square is the home of the ancient Chinese emperors, and you should also know about the Great Wall (Chang Cheng), which you can also see near Beijing. There is no need to say much more: you must see Beijing. It also has a lot of modern culture, such as the 798 arts center and the Sanlitun Village, and many foreigners love to go to Nan Luo Gu Xiang. ChinaTour.com is a reliable China travel agency based in the USA, which has specialized in inbound China travel for decades. We provide a spectrum of private China tours, China group tours, customized China tours, China hotel booking and China-USA air ticket booking services for individuals, groups, families, students etc.

    Read the article

  • Windows 7 taskbar items not grouping properly

    - by Joel in Gö
    I don't understand the Windows 7 taskbar behaviour. For some programs it will not group the running instances, or will group some of them but not all. I have set taskbar items to "always combine", but this has not helped. It seems to be two separate issues: one with an app whose taskbar icon when running differs from its launcher icon; and one with Visual Studio, where instances started by double-clicking a project file group separately from instances started from the IDE's .exe. Is there any way to force the items to combine? I quite like the Win7 taskbar, and would like it to work consistently...

    Read the article

  • Windows 7/Ubuntu 12 dual boot deleted for Windows 8 installation. How to make grub rescue go away?

    - by dimious
    I had a Windows 7/Ubuntu 12 dual boot and I decided to clean install Windows 8 over them. The problem is that after I deleted all partitions and installed Windows I was getting an "Operating system not found" message; however, after pressing Enter the system would boot into Windows 8 normally. I realized that Windows did its trick and put the "System" flag (as shown in Disk Management) on my media hard drive instead of on the main drive. After trying to fix the boot record/MBR so I could boot from my main drive, the "Operating system not found" message changed to the "grub rescue" prompt. I know that I cannot use that prompt because I have deleted the GRUB files. Windows can still boot as long as I choose to boot from the media drive. The question is: is there any way to move the "System" flag, whatever it is now, to the main drive and have the PC boot from there, while making GRUB disappear? And if that is possible, can I then just make the media drive inactive, or will I have to somehow remove the "System" flag?

    Read the article

  • Can't change picture import settings in Windows 7

    - by Joel in Gö
    I have foolishly set the picture import settings wrongly for when a camera card is put in my computer. It correctly asks me whether I would like to import the pictures using Windows picture import, and when I click on this option it flashes up the "import pictures" dialog very briefly, and then immediately gets on with importing the pictures to the wrong place. I simply want to click on the "import settings" link on the dialog - but it's too quick for me, and even if I do manage it it just ignores me. Windows Help, useful as ever, tells me to click on the link :) Any ideas? Thanks!

    Read the article

  • Is it possible to increase the levels of abstraction I can hold in my head/reason about at once? How would I go about this? [closed]

    - by invaliduser
    I'd like to be able to read through fifteen pages of assembly code and know what it does. I'd like to be able to write programs that write programs that write programs that write programs. We've made a lot of strides in taking load off our brains with good tools (Chrome dev mode/Firebug for web stuff, REPLs for many languages, IDEs), but I'd like my brain to be able to handle a bigger load, as opposed to paring the load down with tools.

    Read the article

  • In China. Want to set up my own private proxy. Already have website/webhosting. Help please! n00b with respect to coding/programming, go easy on me [closed]

    - by user1725461
    I am in China and have used Freegate in the past (http://en.wikipedia.org/wiki/Freegate). Recently I've been having too many problems with that and some other web-based proxies I usually use. I have a website that is hosted in the US which I can access from China. Is there an easy way for me to set up my own secure private proxy? I'm sick of all my internet problems and looking for a new workable solution. Thank you! PS: and I really hope this is the right place for such a question...

    Read the article

  • Been doing .NET for several years and am thinking about a platform change. Where do people suggest I go?

    - by rsteckly
    Hi, I've been programming in .NET for several years now and am thinking maybe it's time to do a platform switch. Any suggestions about which platform would be the best to learn? I've been thinking about going back to C++ development or just focusing on T-SQL within the Microsoft stack. I'm thinking of switching because: a) I feel that the .NET platform is increasingly becoming commodified--meaning that it's more about learning a GUI and certain things to click around than really understanding programming. I'm concerned that this will lead to developers on that stack being paid increasingly less. b) It's very frustrating to spend your entire day essentially debugging something that should work but doesn't. Usually, Microsoft releases something that suggests anyone can just click here and there and poof, there's your application. Most of the time it doesn't work and winds up sucking so much more time than it was supposed to save. c) I recently led a team in a small startup to build a WPF application. We were really hit hard with people complaining about having to download the runtime. Our code was also not portable to any other platform. Added to which, the RAM usage and slow load time of the app were remarkable for its size. I researched it and we could not find a way to optimize it. d) I'm a little concerned about being wedded to the Windows platform. What are the pros and cons of adding another platform, and which platform do people suggest? Thanks!

    Read the article

  • Four Words That Go Together Well: Oracle. Technology. Network. Lounge.

    - by Oracle OpenWorld Blog Team
    Again this year the Oracle Technology Network (OTN) Lounge will be in the Howard Street Tent, on Howard Street (surprise!) between Moscone North and South. The OTN Lounge is a central meeting place for all Oracle OpenWorld and JavaOne attendees to network with Oracle experts, ACEs, and peers. And to discuss areas of common technology interest - or discuss anything at all. Check out this OTN blog post to get the details on exciting special activities, drawings, and contests happening at the lounge the week of the conference. Stop by - there could be a t-shirt in it for you.

    OTN Lounge Hours
    Sunday, September 30     7:00 p.m. - 8:30 p.m.
    Monday, October 1        8:00 a.m. - 7:00 p.m.
    Tuesday, October 2       8:00 a.m. - 7:00 p.m.
    Wednesday, October 3     8:00 a.m. - 5:00 p.m.
    Thursday, October 4      8:00 a.m. - 2:00 p.m.

    Read the article

  • SQL SERVER – How to Ignore Columnstore Index Usage in Query

    - by pinaldave
    Earlier I wrote about SQL SERVER – Fundamentals of Columnstore Index, and the very first question I received in email was the following: "We are using SQL Server 2012 CTP3 and so far so good. In our data warehouse solution we have created one non-clustered columnstore index on our large fact table. We have a very unique situation but your article did not cover it. We are running a few queries on our fact table which work very efficiently, but there is one query which used to run fine and, after creating this non-clustered columnstore index, is now running very slow. We dropped the columnstore index and suddenly this one query runs fast, but the other queries which benefited from the columnstore index are now running slow. Any workaround in this situation?" In summary, the question in simple words is: "How can we ignore the columnstore index in selective queries?" Very interesting question – I can understand that there may be cases when a columnstore index is not ideal and needs to be ignored. You can use the query hint IGNORE_NONCLUSTERED_COLUMNSTORE_INDEX to ignore the columnstore index; the SQL Server engine will then use whichever other index is best. Here is a quick script to prove the same. We will first create a sample table and then create a columnstore index on it. Once the columnstore index is created we will write a simple query that uses it. We will then show the usage of the query hint.

    USE AdventureWorks
    GO
    -- Create New Table
    CREATE TABLE [dbo].[MySalesOrderDetail](
    [SalesOrderID] [int] NOT NULL,
    [SalesOrderDetailID] [int] NOT NULL,
    [CarrierTrackingNumber] [nvarchar](25) NULL,
    [OrderQty] [smallint] NOT NULL,
    [ProductID] [int] NOT NULL,
    [SpecialOfferID] [int] NOT NULL,
    [UnitPrice] [money] NOT NULL,
    [UnitPriceDiscount] [money] NOT NULL,
    [LineTotal] [numeric](38, 6) NOT NULL,
    [rowguid] [uniqueidentifier] NOT NULL,
    [ModifiedDate] [datetime] NOT NULL
    ) ON [PRIMARY]
    GO
    -- Create clustered index
    CREATE CLUSTERED INDEX [CL_MySalesOrderDetail] ON [dbo].[MySalesOrderDetail]
    ( [SalesOrderDetailID])
    GO
    -- Create Sample Data Table
    -- WARNING: This query may run up to 2-10 minutes based on your system's resources
    INSERT INTO [dbo].[MySalesOrderDetail]
    SELECT S1.*
    FROM Sales.SalesOrderDetail S1
    GO 100
    -- Create ColumnStore Index
    CREATE NONCLUSTERED COLUMNSTORE INDEX [IX_MySalesOrderDetail_ColumnStore]
    ON [MySalesOrderDetail]
    (UnitPrice, OrderQty, ProductID)
    GO

    Now that the columnstore index is created, if we run the following query it will surely use that index.

    -- Select Table using the ColumnStore Index
    SELECT ProductID, SUM(UnitPrice) SumUnitPrice, AVG(UnitPrice) AvgUnitPrice,
    SUM(OrderQty) SumOrderQty, AVG(OrderQty) AvgOrderQty
    FROM [dbo].[MySalesOrderDetail]
    GROUP BY ProductID
    ORDER BY ProductID
    GO

    We can specify the query hint IGNORE_NONCLUSTERED_COLUMNSTORE_INDEX as in the following query, and it will not use the columnstore index.

    -- Select Table ignoring the ColumnStore Index
    SELECT ProductID, SUM(UnitPrice) SumUnitPrice, AVG(UnitPrice) AvgUnitPrice,
    SUM(OrderQty) SumOrderQty, AVG(OrderQty) AvgOrderQty
    FROM [dbo].[MySalesOrderDetail]
    GROUP BY ProductID
    ORDER BY ProductID
    OPTION (IGNORE_NONCLUSTERED_COLUMNSTORE_INDEX)
    GO

    Let us clean up the database.

    -- Cleanup
    DROP INDEX [IX_MySalesOrderDetail_ColumnStore] ON [dbo].[MySalesOrderDetail]
    GO
    TRUNCATE TABLE dbo.MySalesOrderDetail
    GO
    DROP TABLE dbo.MySalesOrderDetail
    GO

    Again, make sure that you use this hint sparingly and understand its implications properly.
    Make sure that you test it with and without the hint, and select the best option after reviewing it with your administrator. Here is the question for you – have you started to use SQL Server 2012 for your validation and development (not on production)? It will be interesting to know the answer. Reference: Pinal Dave (http://blog.SQLAuthority.com) Filed under: PostADay, SQL, SQL Authority, SQL Index, SQL Optimization, SQL Performance, SQL Query, SQL Server, SQL Tips and Tricks, T SQL, Technology

    Read the article

  • Deploying Oracle ADF Essentials Applications to Glassfish

    - by Shay Shmeltzer
    With the new Oracle ADF Essentials offering you can now deploy applications that leverage Oracle ADF on the open source GlassFish 3.1 server. Deployment is documented in the official JDeveloper and ADF documentation (here), but below is a summary of the steps and a video of the steps you'll need to take to get a basic Oracle ADF Essentials application to work on GlassFish. Note - to make starting/stopping GlassFish easier for my demo I used my GlassFish extension that you can get here.

    First we'll install some ADF Runtime libraries on GlassFish. Download and install GlassFish (note - if you also have an Oracle DB on the same machine, you'll want to switch GlassFish's HTTP port to something else instead of 8080). Download the Oracle ADF Essentials packaging - this will get you an adf_essentials.zip file. Copy the adf_essentials.zip to the lib directory of your GlassFish domain - on a default Windows install this would be: C:\glassfish3\glassfish\domains\domain1\lib. Go to the above lib directory and issue an unzip -j adf_essentials.zip. This will extract the ADF libraries to the directory. Now you can start the GlassFish server.

    Now let's configure GlassFish to handle applications of the ADF type. Invoke the admin console of GlassFish (http://localhost:4848) and log into your admin account. Go to Configurations->Server-config->JVM Settings and choose the JVM Options tab. Add the following entries: -XX:MaxPermSize=512m (note this entry should already exist, so just make sure it has a big enough value) and -Doracle.mds.cache=simple. While we are in the admin console, we can also define JDBC connections that will be used by our application. Go into Resources->JDBC->JDBC Connection Pools and click to create a new one. Give it a name, choose the resource type to be javax.sql.XADataSource, and choose Oracle as the Database Driver vendor. Click Next. Scroll down to the Additional Properties section and start filling in the information for your database. The values for an Oracle XE will be (user=hr, databaseName = XE, Password=hr, ServerName=localhost, DriverType=thin, PortNumber=1521). Click Finish, then click Ping to check your connection works. Now define a new JDBC Resource that will use the pool you just defined. In my example I called the resource jdbc/HRDS. You will need this name to match the name in your Application Module connection configuration. Now you can re-start the GlassFish server for the changes to take effect.

    Get an ADF application going (you can use the regular Fusion Application template for this). Go into the project properties of your viewController project; under the deployment section click to edit the deployment profile that is defined there. Go to Platform and choose Glassfish 3.1 from the drop down list. Click OK to go back to your project. Go to Application -> Application Properties -> Deployment, go to Platform and choose Glassfish 3.1 from the drop down list, and click OK to go back to your project. This step will make sure that JDeveloper will automatically add the necessary ADF libraries to the EAR file that is being generated for deployment on GlassFish. Go to Application->Deploy and deploy either to an EAR file or directly to a GlassFish server connection that you created. Things should just work, but if they don't, look at the server.log in the log directory and check what error is in there.
    Here is a video demo of the various steps. Note - right now the deployment of an ADF application takes about 2 minutes on my machine; we are hoping to be able to improve this timing in the future. People who are more familiar with GlassFish might want to explore using exploded directory deployment and see if they can get it to work.

    Read the article

  • Plan Caching and Query Memory Part I – When not to use stored procedure or other plan caching mechanisms like sp_executesql or prepared statement

    - by sqlworkshops
    The most common performance mistake SQL Server developers make: SQL Server estimates the memory requirement for a query at compilation time. This mechanism is fine for dynamic queries that need memory, but not for queries that cache the plan. With dynamic queries the plan is not reused for different sets of parameter values / predicates, and hence a different amount of memory can be estimated for each set of parameter values / predicates. Common memory-allocating queries are those that perform Sort and Hash Match operations like Hash Join, Hash Aggregation or Hash Union. This article covers Sort with examples. It is recommended to read Plan Caching and Query Memory Part II after this article, which covers Hash Match operations.

    When the plan is cached by using a stored procedure or another plan caching mechanism like sp_executesql or a prepared statement, SQL Server estimates the memory requirement based on the first set of execution parameters. Later, when the same stored procedure is called with a different set of parameter values, the same amount of memory is used to execute the stored procedure. This might lead to underestimation / overestimation of memory on plan reuse. Overestimation of memory might not be a noticeable issue for Sort operations, but underestimation of memory will lead to a spill over tempdb, resulting in poor performance.

    This article covers underestimation / overestimation of memory for Sort. Plan Caching and Query Memory Part II covers underestimation / overestimation for Hash Match operations. It is important to note that underestimation of memory for Sort and Hash Match operations leads to spills over tempdb and hence negatively impacts performance. Overestimation of memory affects the memory needs of other concurrently executing queries. In addition, with Hash Match operations, overestimation of memory can actually lead to poor performance. To read additional articles I wrote click here.

    In most cases it is cheaper to pay the compilation cost of dynamic queries than the huge cost of a spill over tempdb, unless the memory requirement for a stored procedure does not change significantly based on predicates. The best way to learn is to practice. To create the below tables and reproduce the behavior, join the mailing list by using this link: www.sqlworkshops.com/ml and I will send you the table creation script. Most of these concepts are also covered in our webcasts: www.sqlworkshops.com/webcasts

    Enough theory; let's see an example where we initially sort 1 month of data and then use the stored procedure to sort 6 months of data. Let's create a stored procedure that sorts customers by name within a certain date range.

    --Example provided by www.sqlworkshops.com
    create proc CustomersByCreationDate @CreationDateFrom datetime, @CreationDateTo datetime
    as
    begin
          declare @CustomerID int, @CustomerName varchar(48), @CreationDate datetime
          select @CustomerName = c.CustomerName, @CreationDate = c.CreationDate from Customers c
                where c.CreationDate between @CreationDateFrom and @CreationDateTo
                order by c.CustomerName
          option (maxdop 1)
    end
    go

    Let's execute the stored procedure initially with a 1 month date range.

    set statistics time on
    go
    --Example provided by www.sqlworkshops.com
    exec CustomersByCreationDate '2001-01-01', '2001-01-31'
    go

    The stored procedure took 48 ms to complete. The stored procedure was granted 6656 KB based on 43199.9 rows being estimated.
    The estimated number of rows, 43199.9, is similar to the actual number of rows, 43200, and hence the memory estimation should be OK. There were no Sort Warnings in SQL Profiler. Now let's execute the stored procedure with a 6 month date range.

    --Example provided by www.sqlworkshops.com
    exec CustomersByCreationDate '2001-01-01', '2001-06-30'
    go

    The stored procedure took 679 ms to complete. The stored procedure was granted 6656 KB based on 43199.9 rows being estimated. The estimated number of rows, 43199.9, is way different from the actual number of rows, 259200, because the estimation is based on the first set of parameter values supplied to the stored procedure, which covered 1 month in our case. This underestimation will lead to a sort spill over tempdb, resulting in poor performance. There were Sort Warnings in SQL Profiler. To monitor the amount of data written to and read from tempdb, one can execute select num_of_bytes_written, num_of_bytes_read from sys.dm_io_virtual_file_stats(2, NULL) before and after the stored procedure execution; for additional information refer to the webcast: www.sqlworkshops.com/webcasts.

    Let's recompile the stored procedure and then execute it first with the 6 month date range. In a production instance it is not advisable to use sp_recompile; instead one should use DBCC FREEPROCCACHE (plan_handle). This is due to locking issues involved with sp_recompile; refer to our webcasts for further details.

    exec sp_recompile CustomersByCreationDate
    go
    --Example provided by www.sqlworkshops.com
    exec CustomersByCreationDate '2001-01-01', '2001-06-30'
    go

    Now the stored procedure took only 294 ms instead of 679 ms. The stored procedure was granted 26832 KB of memory. The estimated number of rows, 259200, is similar to the actual number of rows of 259200. The better performance of this stored procedure is due to better estimation of memory and avoiding a sort spill over tempdb. There were no Sort Warnings in SQL Profiler. Now let's execute the stored procedure with the 1 month date range.

    --Example provided by www.sqlworkshops.com
    exec CustomersByCreationDate '2001-01-01', '2001-01-31'
    go

    The stored procedure took 49 ms to complete, similar to our very first stored procedure execution. This stored procedure was granted more memory (26832 KB) than necessary (6656 KB), based on the 6 months of data estimation (259200 rows) instead of the 1 month of data estimation (43199.9 rows). This is because the estimation is based on the first set of parameter values supplied to the stored procedure, which covered 6 months in this case. This overestimation did not affect performance, but it might affect the performance of other concurrent queries requiring memory, and hence overestimation is not recommended. This overestimation might affect the performance of Hash Match operations; refer to the article Plan Caching and Query Memory Part II for further details.

    Let's recompile the stored procedure and then execute it first with a 2 day date range.

    exec sp_recompile CustomersByCreationDate
    go
    --Example provided by www.sqlworkshops.com
    exec CustomersByCreationDate '2001-01-01', '2001-01-02'
    go

    The stored procedure took 1 ms. The stored procedure was granted 1024 KB based on 1440 rows being estimated. There were no Sort Warnings in SQL Profiler. Now let's execute the stored procedure with the 6 month date range.
    --Example provided by www.sqlworkshops.com
    exec CustomersByCreationDate '2001-01-01', '2001-06-30'
    go

    The stored procedure took 955 ms to complete, way higher than the 679 ms or 294 ms we noticed before. The stored procedure was granted 1024 KB based on 1440 rows being estimated. But we noticed earlier that this stored procedure with the 6 month date range needed 26832 KB of memory to execute optimally without a spill over tempdb. This is a clear underestimation of memory and the reason for the very poor performance. There were Sort Warnings in SQL Profiler. Unlike before, this was a Multiple pass sort instead of a Single pass sort; this occurs when the granted memory is too low.

    Intermediate summary: This issue can be avoided by not caching the plan for memory allocating queries. Another possibility is to use the recompile hint or the optimize for hint to allocate memory for a predefined date range. Let's recreate the stored procedure with the recompile hint.

    --Example provided by www.sqlworkshops.com
    drop proc CustomersByCreationDate
    go
    create proc CustomersByCreationDate @CreationDateFrom datetime, @CreationDateTo datetime
    as
    begin
          declare @CustomerID int, @CustomerName varchar(48), @CreationDate datetime
          select @CustomerName = c.CustomerName, @CreationDate = c.CreationDate from Customers c
                where c.CreationDate between @CreationDateFrom and @CreationDateTo
                order by c.CustomerName
          option (maxdop 1, recompile)
    end
    go

    Let's execute the stored procedure initially with the 1 month date range and then with the 6 month date range.

    --Example provided by www.sqlworkshops.com
    exec CustomersByCreationDate '2001-01-01', '2001-01-30'
    exec CustomersByCreationDate '2001-01-01', '2001-06-30'
    go

    The stored procedure took 48 ms and 291 ms, in line with the previous optimal execution times. The stored procedure with the 1 month date range has good estimation like before. The stored procedure with the 6 month date range also has good estimation and memory grant like before, because the query was recompiled with the current set of parameter values. The compilation time and compilation CPU of 1 ms is not expensive in this case compared to the performance benefit. Let's recreate the stored procedure with an optimize for hint for the 6 month date range.

    --Example provided by www.sqlworkshops.com
    drop proc CustomersByCreationDate
    go
    create proc CustomersByCreationDate @CreationDateFrom datetime, @CreationDateTo datetime
    as
    begin
          declare @CustomerID int, @CustomerName varchar(48), @CreationDate datetime
          select @CustomerName = c.CustomerName, @CreationDate = c.CreationDate from Customers c
                where c.CreationDate between @CreationDateFrom and @CreationDateTo
                order by c.CustomerName
          option (maxdop 1, optimize for (@CreationDateFrom = '2001-01-01', @CreationDateTo ='2001-06-30'))
    end
    go

    Let's execute the stored procedure initially with the 1 month date range and then with the 6 month date range.

    --Example provided by www.sqlworkshops.com
    exec CustomersByCreationDate '2001-01-01', '2001-01-30'
    exec CustomersByCreationDate '2001-01-01', '2001-06-30'
    go

    The stored procedure took 48 ms and 291 ms, in line with the previous optimal execution times. The stored procedure with the 1 month date range has overestimation of rows and memory; this is because we provided a hint to optimize for 6 months of data. The stored procedure with the 6 month date range has good estimation and memory grant because we provided a hint to optimize for 6 months of data.
    Let's execute the stored procedure with a 12 month date range using the currently cached plan for the 6 month date range.

    --Example provided by www.sqlworkshops.com
    exec CustomersByCreationDate '2001-01-01', '2001-12-31'
    go

    The stored procedure took 1138 ms to complete. 2592000 rows were estimated based on the optimize for hint value for the 6 month date range. The actual number of rows is 524160 due to the 12 month date range. The stored procedure was granted enough memory to sort the 6 month date range and not the 12 month date range, so there will be a spill over tempdb. There were Sort Warnings in SQL Profiler. As we see above, the optimize for hint cannot guarantee enough memory and optimal performance compared to the recompile hint.

    This article covers underestimation / overestimation of memory for Sort. Plan Caching and Query Memory Part II covers underestimation / overestimation for Hash Match operations. It is important to note that underestimation of memory for Sort and Hash Match operations leads to spills over tempdb and hence negatively impacts performance. Overestimation of memory affects the memory needs of other concurrently executing queries. In addition, with Hash Match operations, overestimation of memory can actually lead to poor performance.

    Summary: A cached plan might lead to underestimation or overestimation of memory because the memory is estimated based on the first set of execution parameters. It is recommended not to cache the plan if the amount of memory required to execute the stored procedure has a wide range of possibilities. One can mitigate this by using the recompile hint, but that will lead to compilation overhead. However, in most cases it might be OK to pay for compilation rather than spilling a sort over tempdb, which could be very expensive compared to the compilation cost. The other possibility is to use the optimize for hint, but in case one sorts more data than hinted by the optimize for hint, this will still lead to a spill. On the other side there is also the possibility of overestimation leading to unnecessary memory issues for other concurrently executing queries; in the case of Hash Match operations, this overestimation of memory might lead to poor performance. When the data for the values used in the optimize for hint has been archived out of the database, the estimation will be wrong, leading to worse performance, so one has to exercise caution before using the optimize for hint; the recompile hint is better in this case. I explain these concepts with detailed examples in my webcasts (www.sqlworkshops.com/webcasts); I recommend you to watch them. The best way to learn is to practice. To create the above tables and reproduce the behavior, join the mailing list at www.sqlworkshops.com/ml and I will send you the relevant SQL scripts.

    Register for the upcoming 3 Day Level 400 Microsoft SQL Server 2008 and SQL Server 2005 Performance Monitoring & Tuning Hands-on Workshop in London, United Kingdom during March 15-17, 2011; click here to register / Microsoft UK TechNet. These are hands-on workshops with a maximum of 12 participants, not lectures. For consulting engagements click here.

    Disclaimer and copyright information: This article refers to organizations and products that may be the trademarks or registered trademarks of their various owners. Copyright of this article belongs to R Meyyappan / www.sqlworkshops.com. You may freely use the ideas and concepts discussed in this article with acknowledgement (www.sqlworkshops.com), but you may not claim any of it as your own work. 
This article is for informational purposes only; you use any of the suggestions given here entirely at your own risk.   R Meyyappan [email protected] LinkedIn: http://at.linkedin.com/in/rmeyyappan

    Read the article

  • Odd behavior when recursively building a return type for variadic functions

    - by Dennis Zickefoose
    This is probably going to be a really simple explanation, but I'm going to give as much backstory as possible in case I'm wrong. Advanced apologies for being so verbose. I'm using gcc4.5, and I realize the c++0x support is still somewhat experimental, but I'm going to act on the assumption that there's a non-bug related reason for the behavior I'm seeing. I'm experimenting with variadic function templates. The end goal was to build a cons-list out of std::pair. It wasn't meant to be a custom type, just a string of pair objects. The function that constructs the list would have to be in some way recursive, with the ultimate return value being dependent on the result of the recursive calls. As an added twist, successive parameters are added together before being inserted into the list. So if I pass [1, 2, 3, 4, 5, 6] the end result should be {1+2, {3+4, 5+6}}. My initial attempt was fairly naive. A function, Build, with two overloads. One took two identical parameters and simply returned their sum. The other took two parameters and a parameter pack. The return value was a pair consisting of the sum of the two set parameters, and the recursive call. In retrospect, this was obviously a flawed strategy, because the function isn't declared when I try to figure out its return type, so it has no choice but to resolve to the non-recursive version. That I understand. Where I got confused was the second iteration. I decided to make those functions static members of a template class. The function calls themselves are not parameterized, but instead the entire class is. My assumption was that when the recursive function attempts to generate its return type, it would instantiate a whole new version of the structure with its own static function, and everything would work itself out. The result was: "error: no matching function for call to BuildStruct<double, double, char, char>::Go(const char&, const char&)" The offending code: static auto Go(const Type& t0, const Type& t1, const Types&... rest) -> std::pair<Type, decltype(BuildStruct<Types...>::Go(rest...))> My confusion comes from the fact that the parameters to BuildStruct should always be the same types as the arguments sent to BuildStruct::Go, but in the error code Go is missing the initial two double parameters. What am I missing here? If my initial assumption about how the static functions would be chosen was incorrect, why is it trying to call the wrong function rather than just not finding a function at all? It seems to just be mixing types willy-nilly, and I just can't come up with an explanation as to why. If I add additional parameters to the initial call, it always burrows down to that last step before failing, so presumably the recursion itself is at least partially working. This is in direct contrast to the initial attempt, which always failed to find a function call right away. Ultimately, I've gotten past the problem, with a fairly elegant solution that hardly resembles either of the first two attempts. So I know how to do what I want to do. I'm looking for an explanation for the failure I saw. Full code to follow since I'm sure my verbal description was insufficient. First some boilerplate, if you feel compelled to execute the code and see it for yourself. Then the initial attempt, which failed reasonably, then the second attempt, which did not. 
#include <iostream> using std::cout; using std::endl; #include <utility> template<typename T1, typename T2> std::ostream& operator <<(std::ostream& str, const std::pair<T1, T2>& p) { return str << "[" << p.first << ", " << p.second << "]"; } //Insert code here int main() { Execute(5, 6, 4.3, 2.2, 'c', 'd'); Execute(5, 6, 4.3, 2.2); Execute(5, 6); return 0; } Non-struct solution: template<typename Type> Type BuildFunction(const Type& t0, const Type& t1) { return t0 + t1; } template<typename Type, typename... Rest> auto BuildFunction(const Type& t0, const Type& t1, const Rest&... rest) -> std::pair<Type, decltype(BuildFunction(rest...))> { return std::pair<Type, decltype(BuildFunction(rest...))> (t0 + t1, BuildFunction(rest...)); } template<typename... Types> void Execute(const Types&... t) { cout << BuildFunction(t...) << endl; } Resulting errors: test.cpp: In function 'void Execute(const Types& ...) [with Types = {int, int, double, double, char, char}]': test.cpp:33:35: instantiated from here test.cpp:28:3: error: no matching function for call to 'BuildFunction(const int&, const int&, const double&, const double&, const char&, const char&)' Struct solution: template<typename... Types> struct BuildStruct; template<typename Type> struct BuildStruct<Type, Type> { static Type Go(const Type& t0, const Type& t1) { return t0 + t1; } }; template<typename Type, typename... Types> struct BuildStruct<Type, Type, Types...> { static auto Go(const Type& t0, const Type& t1, const Types&... rest) -> std::pair<Type, decltype(BuildStruct<Types...>::Go(rest...))> { return std::pair<Type, decltype(BuildStruct<Types...>::Go(rest...))> (t0 + t1, BuildStruct<Types...>::Go(rest...)); } }; template<typename... Types> void Execute(const Types&... t) { cout << BuildStruct<Types...>::Go(t...) << endl; } Resulting errors: test.cpp: In instantiation of 'BuildStruct<int, int, double, double, char, char>': test.cpp:33:3: instantiated from 'void Execute(const Types& ...) [with Types = {int, int, double, double, char, char}]' test.cpp:38:41: instantiated from here test.cpp:24:15: error: no matching function for call to 'BuildStruct<double, double, char, char>::Go(const char&, const char&)' test.cpp:24:15: note: candidate is: static std::pair<Type, decltype (BuildStruct<Types ...>::Go(BuildStruct<Type, Type, Types ...>::Go::rest ...))> BuildStruct<Type, Type, Types ...>::Go(const Type&, const Type&, const Types& ...) [with Type = double, Types = {char, char}, decltype (BuildStruct<Types ...>::Go(BuildStruct<Type, Type, Types ...>::Go::rest ...)) = char] test.cpp: In function 'void Execute(const Types& ...) [with Types = {int, int, double, double, char, char}]': test.cpp:38:41: instantiated from here test.cpp:33:3: error: 'Go' is not a member of 'BuildStruct<int, int, double, double, char, char>'

    Read the article

  • Random directions, with no repeat.. (Bad Description)

    - by Neurofluxation
    Hey there, So I'm knocking together a random pattern generation thing. My code so far: int permutes = 100; int y = 31; int x = 63; while (permutes > 0) { int rndTurn = random(1, 4); if (rndTurn == 1) { y = y - 1; } //go up if (rndTurn == 2) { y = y + 1; } //go down if (rndTurn == 3) { x = x - 1; } //go right if (rndTurn == 4) { x = x + 1; } //go left setP(x, y, 1); delay(250); } My question is, how would I go about getting the code to not go back on itself? e.g. The code says "Go Left" but the next loop through it says "Go Right", how can I stop this? NOTE: setP turns a specific pixel on. Cheers peoples!

    Read the article

  • How to rename database without first stopping SQL instance to flush connections

    - by John Galt
    Is there a way to force a database into single user mode so a script can be run to rename databases? I find I have to Restart the instance of SQL (to force off any connections from a web app, etc.) and then I can run this script: USE master go sp_dboption MDS, "single user", true go sp_dboption StagingMDS, "single user", true go sp_renamedb MDS, LastMonthMDS go sp_renamedb StagingMDS, MDS go sp_dboption LastMonthMDS, "single user", false go sp_dboption MDS, "single user", false go After this script runs, I can restart IIS for my web app and it can connect to the new production database. All the above works well and we've been doing this for years but now we've upgraded to SQL 2008 and the SQL2008 instance also hosts other databases that support other web apps. So, rather than using a Restart of the whole SQL instance to enable subsequent single-user mode on 2 databases, is there a less intrusive way of accomplishing this? Thanks.
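
    One less intrusive approach (a sketch of a possible solution, not from the original question) is to force each database into single-user mode with ALTER DATABASE ... WITH ROLLBACK IMMEDIATE, which rolls back and disconnects existing sessions without restarting the whole instance, and so leaves the other databases on the SQL 2008 instance untouched:

      USE master
      GO
      -- Kick out existing connections and take exclusive access; no instance restart needed
      ALTER DATABASE MDS SET SINGLE_USER WITH ROLLBACK IMMEDIATE;
      ALTER DATABASE StagingMDS SET SINGLE_USER WITH ROLLBACK IMMEDIATE;
      -- Rename (ALTER DATABASE ... MODIFY NAME is the newer equivalent of sp_renamedb)
      ALTER DATABASE MDS MODIFY NAME = LastMonthMDS;
      ALTER DATABASE StagingMDS MODIFY NAME = MDS;
      -- Reopen both databases for normal use
      ALTER DATABASE LastMonthMDS SET MULTI_USER;
      ALTER DATABASE MDS SET MULTI_USER;
      GO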

    Read the article

  • If a table has two xml columns, will inserting records be a lot slower?

    - by Lieven Cardoen
    Is it a bad thing to have two xml columns in one table? + How much slower are these xml columns in terms of updating/inserting/reading data? In profiler this kind of insert normally takes 0 ms, but sometimes it goes up to 160ms: declare @p8 xml set @p8=convert(xml,N'<interactions><interaction correct="false" score="0" id="0" gapid="0" x="61" y="225"><feedback/><element id="0" position="0" elementtype="1"><asset/></element></interaction><interaction correct="false" score="0" id="1" gapid="1" x="64" y="250"><feedback/><element id="0" position="0" elementtype="1"><asset/></element></interaction><interaction correct="false" score="0" id="2" gapid="2" x="131" y="250"><feedback/><element id="0" position="0" elementtype="1"><asset/></element></interaction></interactions>') declare @p14 xml set @p14=convert(xml,N'<contentinteractions/>') exec sp_executesql N'INSERT INTO [dbo].[PackageSessionNodes]([dbo].[PackageSessionNodes].[PackageSessionId], [dbo].[PackageSessionNodes].[TreeNodeId],[dbo].[PackageSessionNodes].[Duration], [dbo].[PackageSessionNodes].[Score],[dbo].[PackageSessionNodes].[ScoreMax], [dbo].[PackageSessionNodes].[Interactions],[dbo].[PackageSessionNodes].[BrainTeaser], [dbo].[PackageSessionNodes].[DateCreated], [dbo].[PackageSessionNodes].[CompletionStatus], [dbo].[PackageSessionNodes].[ReducedScore], [dbo].[PackageSessionNodes].[ReducedScoreMax], [dbo].[PackageSessionNodes].[ContentInteractions]) VALUES (@ins_dboPackageSessionNodesPackageSessionId, @ins_dboPackageSessionNodesTreeNodeId, @ins_dboPackageSessionNodesDuration, @ins_dboPackageSessionNodesScore, @ins_dboPackageSessionNodesScoreMax, @ins_dboPackageSessionNodesInteractions, @ins_dboPackageSessionNodesBrainTeaser, @ins_dboPackageSessionNodesDateCreated, @ins_dboPackageSessionNodesCompletionStatus, @ins_dboPackageSessionNodesReducedScore, @ins_dboPackageSessionNodesReducedScoreMax, @ins_dboPackageSessionNodesContentInteractions) ; SELECT SCOPE_IDENTITY() as new_id This is the table: CREATE TABLE [dbo].[PackageSessionNodes]( [PackageSessionNodeId] [int] IDENTITY(1,1) NOT NULL, [PackageSessionId] [int] NOT NULL, [TreeNodeId] [int] NOT NULL, [Duration] [int] NULL, [Score] [float] NOT NULL, [ScoreMax] [float] NOT NULL, [Interactions] [xml] NOT NULL, [BrainTeaser] [bit] NOT NULL, [DateCreated] [datetime] NULL, [CompletionStatus] [int] NOT NULL, [ReducedScore] [float] NOT NULL, [ReducedScoreMax] [float] NOT NULL, [ContentInteractions] [xml] NOT NULL, CONSTRAINT [PK_PackageSessionNodes] PRIMARY KEY CLUSTERED ( [PackageSessionNodeId] ASC )WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, IGNORE_DUP_KEY = OFF, ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON) ON [PRIMARY] ) ON [PRIMARY] GO ALTER TABLE [dbo].[PackageSessionNodes] WITH CHECK ADD CONSTRAINT [FK_PackageSessionNodes_PackageSessions] FOREIGN KEY([PackageSessionId]) REFERENCES [dbo].[PackageSessions] ([PackageSessionId]) ON UPDATE CASCADE ON DELETE CASCADE GO ALTER TABLE [dbo].[PackageSessionNodes] CHECK CONSTRAINT [FK_PackageSessionNodes_PackageSessions] GO ALTER TABLE [dbo].[PackageSessionNodes] WITH CHECK ADD CONSTRAINT [FK_PackageSessionNodes_TreeNodes] FOREIGN KEY([TreeNodeId]) REFERENCES [dbo].[TreeNodes] ([TreeNodeId]) GO ALTER TABLE [dbo].[PackageSessionNodes] CHECK CONSTRAINT [FK_PackageSessionNodes_TreeNodes] GO ALTER TABLE [dbo].[PackageSessionNodes] ADD CONSTRAINT [DF_PackageSessionNodes_Score] DEFAULT ((-1)) FOR [Score] GO ALTER TABLE [dbo].[PackageSessionNodes] ADD CONSTRAINT [DF_PackageSessionNodes_ScoreMax] DEFAULT ((-1)) FOR [ScoreMax] GO ALTER TABLE 
[dbo].[PackageSessionNodes] ADD CONSTRAINT [DF_PackageSessionNodes_DateCreated] DEFAULT (getdate()) FOR [DateCreated] GO ALTER TABLE [dbo].[PackageSessionNodes] ADD CONSTRAINT [DF_PackageSessionNodes_ReducedScore] DEFAULT ((-1)) FOR [ReducedScore] GO ALTER TABLE [dbo].[PackageSessionNodes] ADD CONSTRAINT [DF_PackageSessionNodes_ReducedScoreMax] DEFAULT ((-1)) FOR [ReducedScoreMax] GO

    Read the article

  • How do I get the earliest DateTime of a set, where there are a few conditions

    - by radbyx
    Create script for Product:

    SET ANSI_NULLS ON
    GO
    SET QUOTED_IDENTIFIER ON
    GO
    SET ANSI_PADDING ON
    GO
    CREATE TABLE [dbo].[Product](
    [ProductID] [int] IDENTITY(1,1) NOT NULL,
    [ProductName] [varchar](50) NOT NULL,
    CONSTRAINT [PK_Products] PRIMARY KEY CLUSTERED ( [ProductID] ASC )
    WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, IGNORE_DUP_KEY = OFF, ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON) ON [PRIMARY]
    ) ON [PRIMARY]
    GO
    SET ANSI_PADDING OFF
    GO

    Create script for StateLog:

    SET ANSI_NULLS ON
    GO
    SET QUOTED_IDENTIFIER ON
    GO
    CREATE TABLE [dbo].[StateLog](
    [StateLogID] [int] IDENTITY(1,1) NOT NULL,
    [ProductID] [int] NOT NULL,
    [Status] [bit] NOT NULL,
    [TimeStamp] [datetime] NOT NULL,
    CONSTRAINT [PK_Uptime] PRIMARY KEY CLUSTERED ( [StateLogID] ASC )
    WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, IGNORE_DUP_KEY = OFF, ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON) ON [PRIMARY]
    ) ON [PRIMARY]
    GO
    ALTER TABLE [dbo].[StateLog] WITH CHECK ADD CONSTRAINT [FK_Uptime_Products] FOREIGN KEY([ProductID]) REFERENCES [dbo].[Product] ([ProductID])
    GO
    ALTER TABLE [dbo].[StateLog] CHECK CONSTRAINT [FK_Uptime_Products]
    GO

    I have this and it's not enough:

    select top 5 [ProductName], [TimeStamp]
    from [Product]
    inner join StateLog on [Product].ProductID = [StateLog].ProductID
    where [Status] = 0
    order by TimeStamp desc;

    (My query gives the 5 latest TimeStamps where Status is 0 (false).) But I need one thing more: where there is a set of latest TimeStamps for a product where Status is 0, I only want the earliest of them (not the latest). Example: let's say for Product X I have: TimeStamp1 (status = 0), TimeStamp2 (status = 1), TimeStamp3 (status = 0), TimeStamp4 (status = 0), TimeStamp5 (status = 1), TimeStamp6 (status = 0), TimeStamp7 (status = 0), TimeStamp8 (status = 0). The correct answer would then be TimeStamp6, because it is the first of the latest run of status 0 timestamps.
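
    One way to express this against the schema above (a sketch, not taken from the original thread): for each product, keep only the Status = 0 rows that come after that product's most recent Status = 1 row, and take the minimum TimeStamp of what is left.

      select top 5 p.[ProductName], min(s.[TimeStamp]) as DownSince
      from [Product] p
      inner join [StateLog] s on s.[ProductID] = p.[ProductID]
      where s.[Status] = 0
        -- only failures after the product's last success (products with no success at all are kept)
        and s.[TimeStamp] > isnull((select max(s2.[TimeStamp])
                                    from [StateLog] s2
                                    where s2.[ProductID] = p.[ProductID]
                                      and s2.[Status] = 1), '19000101')
      group by p.[ProductName]
      order by DownSince desc;

    For the Product X example above, the rows after TimeStamp5 (the last status 1) are TimeStamp6-8, and the minimum of those is TimeStamp6, as required.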

    Read the article

  • SQL SERVER – Index Created on View not Used Often – Limitation of the View 12

    - by pinaldave
    I have previously written on the subject SQL SERVER – The Limitations of the Views – Eleven and more…. This was indeed a very popular series and I had received lots of feedback on that topic. Today we are going to discuss something very interesting as well. During my recent performance tuning seminar in Hyderabad, I presented on the subject of Views. During the seminar, one of the attendees asked a question: We create a table and create a View on the top of it. On the same view, if we create Index, when querying View, will that index be used? The answer is NOT Always! (There is only one specific condition when it will be used. We will write about that later in the next post). Let us see the test case for the same. In our script we will do following: USE tempdb GO IF EXISTS (SELECT * FROM sys.views WHERE OBJECT_ID = OBJECT_ID(N'[dbo].[SampleView]')) DROP VIEW [dbo].[SampleView] GO IF EXISTS (SELECT * FROM sys.objects WHERE OBJECT_ID = OBJECT_ID(N'[dbo].[mySampleTable]') AND TYPE IN (N'U')) DROP TABLE [dbo].[mySampleTable] GO -- Create SampleTable CREATE TABLE mySampleTable (ID1 INT, ID2 INT, SomeData VARCHAR(100)) INSERT INTO mySampleTable (ID1,ID2,SomeData) SELECT TOP 100000 ROW_NUMBER() OVER (ORDER BY o1.name), ROW_NUMBER() OVER (ORDER BY o2.name), o2.name FROM sys.all_objects o1 CROSS JOIN sys.all_objects o2 GO -- Create View CREATE VIEW SampleView WITH SCHEMABINDING AS SELECT ID1,ID2,SomeData FROM dbo.mySampleTable GO -- Create Index on View CREATE UNIQUE CLUSTERED INDEX [IX_ViewSample] ON [dbo].[SampleView] ( ID2 ASC ) GO -- Select from view SELECT ID1,ID2,SomeData FROM SampleView GO Let us check the execution plan for the last SELECT statement. You can see from the execution plan. That even though we are querying View and the View has index, it is not really using that index. In the next post, we will see the significance of this View and where it can be helpful. Meanwhile, I encourage you to read my View series: SQL SERVER – The Limitations of the Views – Eleven and more…. Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: Pinal Dave, SQL, SQL Authority, SQL Query, SQL Scripts, SQL Server, SQL Tips and Tricks, SQL Training, SQL View, T SQL, Technology

    Read the article

  • SQL SERVER – Reduce the Virtual Log Files (VLFs) from LDF file

    - by pinaldave
    Earlier, I wrote a quick note on SQL SERVER – Detect Virtual Log Files (VLF) in LDF. Because of this I got responses suggesting that too many VLFs are bad for the log file. This prompts a simple question: "How many is 'too many' VLFs?" I suggest that you go and read an article written by Kimberly over here. I am sure that you are going to have a clear understanding of what a good number for your VLFs is from that article. If you have lots of VLFs, you can reduce them right away using the following method (I am just attempting to write a working script over here):

    USE AdventureWorks
    GO
    BACKUP LOG AdventureWorks TO DISK='d:\adtlog.bak'
    GO
    -- Get Logical file name of the log file
    sp_helpfile
    GO
    DBCC SHRINKFILE(AdventureWorks_Log,TRUNCATEONLY)
    GO
    ALTER DATABASE AdventureWorks MODIFY FILE
    (NAME = AdventureWorks_Log,SIZE = 1GB)
    GO
    DBCC LOGINFO
    GO

    Again, here I have assumed that your initial log size is 1 GB, but in reality you should select the number based on your own ideal size of the log file. If your log file grows to 10 GB every day, you may want to put the value as 10 GB. For accuracy, read what Kimberly's original article says over here. Reference: Pinal Dave (http://blog.SQLAuthority.com) Filed under: Pinal Dave, PostADay, SQL, SQL Authority, SQL Query, SQL Scripts, SQL Server, SQL Tips and Tricks, T SQL, Technology
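
    For a quick before/after check, DBCC LOGINFO (the last step of the script above) returns one row per VLF, so its row count is the VLF count. On newer builds (SQL Server 2016 SP2 and later, as far as I am aware) the sys.dm_db_log_info dynamic management function exposes the same information and is easier to aggregate; a small sketch:

      -- One row per VLF; compare the count before and after resizing the log
      SELECT COUNT(*) AS VLFCount
      FROM sys.dm_db_log_info(DB_ID('AdventureWorks'));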

    Read the article

  • SQL SERVER – FIX: ERROR Msg 5169, Level 16: FILEGROWTH cannot be greater than MAXSIZE for file

    - by pinaldave
    I am writing this blog post right after resolving this error on one of the systems. Recently one of my friends, who is an expert in infrastructure as well as private cloud, was working on a SQL Server installation. Please note he is seriously expert in what he does, but he has never worked with SQL Server before and has absolutely no experience with its installation. He was modifying a database file and kept on getting the following error. As soon as he saw me he asked me where the max file size setting is so he could change it. Let us quickly re-create the scenario he was facing.

    Error Message:
    Msg 5169, Level 16, State 1, Line 1
    FILEGROWTH cannot be greater than MAXSIZE for file 'NewDB'.

    Creating Scenario:
    CREATE DATABASE [NewDB]
    ON PRIMARY (NAME = N'NewDB', FILENAME = N'D:\NewDB.mdf' , SIZE = 4096KB, FILEGROWTH = 1024KB, MAXSIZE = 4096KB)
    LOG ON (NAME = N'NewDB_log', FILENAME = N'D:\NewDB_log.ldf', SIZE = 1024KB, FILEGROWTH = 10%)
    GO

    Now let us see what exact command was creating the error for him.

    USE [master]
    GO
    ALTER DATABASE [NewDB] MODIFY FILE ( NAME = N'NewDB', FILEGROWTH = 1024MB )
    GO

    Workaround / Fix / Solution: The reason for the error is very simple. He was trying to set the filegrowth to a much higher value than the maximum file size specified for the database. There are two ways we can fix it.

    Method 1: Reduce the filegrowth to a value lower than the maxsize of the file
    USE [master]
    GO
    ALTER DATABASE [NewDB] MODIFY FILE ( NAME = N'NewDB', FILEGROWTH = 1024KB )
    GO

    Method 2: Increase the maxsize of the file so it is greater than the new filegrowth
    USE [master]
    GO
    ALTER DATABASE [NewDB] MODIFY FILE ( NAME = N'NewDB', FILEGROWTH = 1024MB, MAXSIZE = 4096MB)
    GO

    I think this blog post will help everybody who is facing similar issues. Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: PostADay, SQL, SQL Authority, SQL Error Messages, SQL Query, SQL Server, SQL Tips and Tricks, T SQL, Technology
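
    When in doubt about which side of the comparison is off, it can help to look at the current settings first. A small sketch (sizes in sys.database_files are stored in 8 KB pages, and max_size = -1 means unlimited):

      USE NewDB
      GO
      SELECT name,
             size * 8 / 1024 AS SizeMB,
             CASE max_size WHEN -1 THEN NULL ELSE max_size * 8 / 1024 END AS MaxSizeMB,
             CASE WHEN is_percent_growth = 1
                  THEN CAST(growth AS varchar(20)) + ' %'
                  ELSE CAST(growth * 8 / 1024 AS varchar(20)) + ' MB'
             END AS Growth
      FROM sys.database_files;
      GO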

    Read the article

  • Best solution for getting referral information in PHP

    - by absentx
    I am currently redoing some link structuring on a website. In the past we have used specific php files on the last step to direct the user to the proper place. Example: www.mysite.com/action/go-to-blue.php or www.mysite.com/action/short/go-to-red.php www.mysite.com/action/tall/go-to-red.php We are now restructuring to eliminate the /short/ or /tall/ directory. What this means is now "go-to-blue.php" will be doing some extra processing to make sure it sends the visitor to the proper place. The static method of the past was quite effective, because, well, if they left from that page we knew we had it right. Now since we are 301 redirecting action/short/go-to-red.php to just action/go-to-red.php it is quite important on "go-to-red.php" that we realize a user may have been redirected from /short/ or /tall/. So right now I am using HTTP_REFERRER and of course in my testing that works fine, but after a lot of reading it is clear that this is not a solid solution, so I was starting to brainstorm on other ways to check and make sure we get the proper referral information. If we could check HTTP_REFERRER plus some other test, I would feel confident we have a pretty good system in place to send the visitor to the right place. Some questions/comments: Could I use a session variable or a cookie to accomplish this goal? If so, would that be maintained through the 301 redirect? I don't see why it wouldn't be.. Passing the url in the url is not an option in this case.

    Read the article

  • Move SQL Server transaction log to another disk

    - by Jim Lahman
    When restoring a database backup, by default, SQL Server places the database files in the master database file directory.  In this example, that location is in L:\MSSQL10.CHTL\MSSQL\DATA as shown by the issuance of sp_helpfile   Hence, the restored files for the database CHTL_L2_DB are in the same directory     Per SQL Server best practices, the log file should be on its own disk drive so that the database and log file can operate in a sequential manner and perform optimally. The steps to move the log file is as follows: Record the location of the database files and the transaction log files Note the future destination of the transaction log file Get exclusive access to the database Detach from the database Move the log file to the new location Attach to the database Verify new location of transaction log Record the location of the database file To view the current location of the database files, use the system stored procedure, sp_helpfile 1: use chtl_l2_db 2: go 3:   4: sp_helpfile 5: go   Note the future destination of the transaction log file The future destination of the transaction log file will be located in K:\MSSQLLog   Get exclusive access to the database To get exclusive access to the database, alter the database access to single_user.  If users are still connected to the database, remove them by using with rollback immediate option.  Note:  If you had a pane connected to the database when the it is placed into single_user mode, then you will be presented with a reconnection dialog box. 1: alter database chtl_l2_db 2: set single_user with rollback immediate 3: go Detach from the database   Now detach from the database so that we can use windows explorer to move the transaction log file 1: use master 2: go 3:   4: sp_detach_db 'chtl_l2_db' 5: go   After copying the transaction log file re-attach to the database 1: use master 2: go 3:   4: sp_attach_db 'chtl_l2_db', 5: 'L:\MSSQL10.CHTL\MSSQL\DATA\CHTL_L2_DB.MDF', 6: 'K:\MSSQLLog\CHTL_L2_DB_4.LDF', 7: 'L:\MSSQL10.CHTL\MSSQL\DATA\CHTL_L2_DB_1.NDF', 8: 'L:\MSSQL10.CHTL\MSSQL\DATA\CHTL_L2_DB_2.NDF', 9: 'L:\MSSQL10.CHTL\MSSQL\DATA\CHTL_L2_DB_3.NDF' 10: GO
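
    An alternative on SQL Server 2005 and later (a sketch, not part of the original post) avoids the detach/attach step: update the file's path in the catalog with ALTER DATABASE ... MODIFY FILE, take the database offline, move the physical file, and bring it back online. The logical log file name used here is an assumption; check sp_helpfile for the real one.

      ALTER DATABASE CHTL_L2_DB
          MODIFY FILE (NAME = CHTL_L2_DB_log,   -- assumed logical name, verify with sp_helpfile
                       FILENAME = 'K:\MSSQLLog\CHTL_L2_DB_4.LDF');
      GO
      ALTER DATABASE CHTL_L2_DB SET OFFLINE WITH ROLLBACK IMMEDIATE;
      GO
      -- Move the .ldf file to K:\MSSQLLog using the operating system, then:
      ALTER DATABASE CHTL_L2_DB SET ONLINE;
      GO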

    Read the article
