Search Results

Search found 49435 results on 1978 pages for 'query string'.


  • SQL SERVER – Monitoring SQL Server Database Transaction Log Space Growth – DBCC SQLPERF(logspace) – Puzzle for You

    - by pinaldave
    First of all – if you are going to say this is a very old subject, I agree: it is a very (very) old subject. In earlier times this was the only option we had to monitor log space. As newer versions of SQL Server were released, we were equipped with DMVs, Performance Counters, Extended Events and many more enhancements. However, throughout all these years I have kept using DBCC SQLPERF(logspace) to get the details of the logs, perhaps because I learned this command at the start of my career and it has always done what I needed. Recently I received an interesting question, and I thought I should request your help. Before I do, let us look at the traditional usage of DBCC SQLPERF(logspace). Every time I need the details of the log, I run the following script. I also like to store the time at which each log file snapshot was taken, so I can go back and review the history of log file growth; this gives me a fair estimate of when the log file was growing. CREATE TABLE dbo.logSpaceUsage ( id INT IDENTITY (1,1), logDate DATETIME DEFAULT GETDATE(), databaseName SYSNAME, logSize DECIMAL(18,5), logSpaceUsed DECIMAL(18,5), [status] INT ) GO INSERT INTO dbo.logSpaceUsage (databaseName, logSize, logSpaceUsed, [status]) EXEC ('DBCC SQLPERF(logspace)') GO SELECT * FROM dbo.logSpaceUsage GO I used to record the details of log file growth every hour of the day, and then we would plot charts using Reporting Services (and Excel in much earlier times). If you look at the script above, it is a very simple script. Now here is the puzzle for you. Puzzle 1: Write a script, based on the table, which gives you the time period with the highest growth recorded in the table. Puzzle 2: Write a script, based on the table, which gives you the amount of log file growth from the first recording in the table to the latest one. You may have to run the above script at some interval to collect enough data samples of the log file to answer these puzzles. To make things simple, I am giving you a sample script, with the expected answers listed below for both puzzles. Here is the sample script for the puzzles: -- This is sample query for puzzle CREATE TABLE dbo.logSpaceUsage ( id INT IDENTITY (1,1), logDate DATETIME DEFAULT GETDATE(), databaseName SYSNAME, logSize DECIMAL(18,5), logSpaceUsed DECIMAL(18,5), [status] INT ) GO INSERT INTO dbo.logSpaceUsage (databaseName, logDate, logSize, logSpaceUsed, [status]) SELECT 'SampleDB1', '2012-07-01 7:00:00.000', 5, 10, 0 UNION ALL SELECT 'SampleDB1', '2012-07-01 9:00:00.000', 16, 10, 0 UNION ALL SELECT 'SampleDB1', '2012-07-01 11:00:00.000', 9, 10, 0 UNION ALL SELECT 'SampleDB1', '2012-07-01 14:00:00.000', 18, 10, 0 UNION ALL SELECT 'SampleDB3', '2012-06-01 7:00:00.000', 5, 10, 0 UNION ALL SELECT 'SampleDB3', '2012-06-04 7:00:00.000', 15, 10, 0 UNION ALL SELECT 'SampleDB3', '2012-06-09 7:00:00.000', 25, 10, 0 GO Expected Result of Puzzle 1 You will notice that there are two entries for database SampleDB3, as there were two instances where the log file grew by the same amount. Expected Result of Puzzle 2 Please leave a comment with a valid answer and I will post the valid answers, with due credit, next week. Not to mention that winners will get a surprise gift from me. Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, T SQL, Technology Tagged: DBCC
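
    For readers who want a starting point, here is one possible sketch against the sample table above. It is only an illustration of the approach, not the official answer, and it simply reuses the table and column names from the sample script.

        -- Puzzle 1: the interval(s) with the highest recorded growth per database (ties included)
        ;WITH growth AS
        (
            SELECT  cur.databaseName,
                    cur.logDate               AS periodStart,
                    nxt.logDate               AS periodEnd,
                    nxt.logSize - cur.logSize AS logGrowth
            FROM    dbo.logSpaceUsage cur
            CROSS APPLY
            (
                SELECT TOP (1) n.logDate, n.logSize
                FROM   dbo.logSpaceUsage n
                WHERE  n.databaseName = cur.databaseName
                  AND  n.logDate > cur.logDate
                ORDER BY n.logDate
            ) nxt
        )
        SELECT  g.*
        FROM    growth g
        WHERE   g.logGrowth = (SELECT MAX(g2.logGrowth)
                               FROM   growth g2
                               WHERE  g2.databaseName = g.databaseName)
        GO

        -- Puzzle 2: total growth from the first recording to the latest one, per database
        SELECT  f.databaseName,
                l.logSize - f.logSize AS totalGrowth
        FROM    dbo.logSpaceUsage f
        INNER JOIN dbo.logSpaceUsage l
                ON l.databaseName = f.databaseName
        WHERE   f.logDate = (SELECT MIN(logDate) FROM dbo.logSpaceUsage WHERE databaseName = f.databaseName)
          AND   l.logDate = (SELECT MAX(logDate) FROM dbo.logSpaceUsage WHERE databaseName = l.databaseName)
        GO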

    Read the article

  • Find all substrings of a string - StringIndexOutOfBoundsException

    - by nazar_art
    I created class Word. Word has a constructor that takes a string argument and one method getSubstrings which returns a String containing all substring of word, sorted by length. For example, if the user provides the input "rum", the method returns a string that will print like this: r u m ru um rum I want to concatenate the substrings in a String, separating them with a newline ("\n"). Then return the string. Code: public class Word { String word; public Word(String word) { this.word = word; } /** * Gets all the substrings of this Word. * @return all substrings of this Word separated by newline */ public String getSubstrings() { String str = ""; int i, j; for (i = 0; i < word.length(); i++) { for (j = 0; j < word.length(); j++) { str = word.substring(i, i + j); str += "\n"; } } return str; } But it throws exception: java.lang.StringIndexOutOfBoundsException: String index out of range: -1 at java.lang.String.substring(String.java:1911) I stuck at this point. Maybe, you have other suggestions according this method signature public String getSubstrings(). How to solve this issue?

    Read the article

  • XSLT multiple string replacement with recursion

    - by John Terry
    I have been attempting to perform multiple (different) string replacement with recursion and I have hit a roadblock. I have sucessfully gotten the first replacement to work, but the subsequent replacements never fire. I know this has to do with the recursion and how the with-param string is passed back into the call-template. I see my error and why the next xsl:when never fires, but I just cant seem to figure out exactly how to pass the complete modified string from the first xsl:when to the second xsl:when. Any help is greatly appreciated. <xsl:template name="replace"> <xsl:param name="string" select="." /> <xsl:choose> <xsl:when test="contains($string, '&#13;&#10;')"> <xsl:value-of select="substring-before($string, '&#13;&#10;')" /> <br/> <xsl:call-template name="replace"> <xsl:with-param name="string" select="substring-after($string, '&#13;&#10;')"/> </xsl:call-template> </xsl:when> <xsl:when test="contains($string, 'TXT')"> <xsl:value-of select="substring-before($string, '&#13;TXT')" /> <xsl:call-template name="replace"> <xsl:with-param name="string" select="substring-after($string, '&#13;')" /> </xsl:call-template> </xsl:when> <xsl:otherwise> <xsl:value-of select="$string"/> </xsl:otherwise> </xsl:choose> </xsl:template>

    Read the article

  • What's wrong with this code to un-camel-case a string?

    - by omair iqbal
    Here is my attempt to solve the About.com Delphi challenge to un-camel-case a string. unit challenge1; interface uses Windows, Messages, SysUtils, Variants, Classes, Graphics, Controls, Forms, Dialogs, StdCtrls; type check = 65..90; TForm1 = class(TForm) Edit1: TEdit; Button1: TButton; procedure Button1Click(Sender: TObject); private { Private declarations } public { Public declarations } end; var Form1: TForm1; var s1,s2 :string; int : integer; implementation {$R *.dfm} procedure TForm1.Button1Click(Sender: TObject); var i: Integer; checks : set of check; begin s1 := edit1.Text; for i := 1 to 20 do begin int :=ord(s1[i]) ; if int in checks then insert(' ',s1,i-1); end; showmessage(s1); end; end. check is a set that contains capital letters so basically whenever a capital letter is encountered the insert function adds space before its encountered (inside the s1 string), but my code does nothing. ShowMessage just shows text as it was entered in Edit1. What have I done wrong?

    Read the article

  • How do I strip multiple (optional) parts of a SQL string using .NET Regular Expressions?

    - by Luc
    I've been working on this for a few hours now and can't find any help on it. Basically, I'm trying to strip a SQL string into various parts (fields, from, where, having, groupBy, orderBy). I refuse to believe that I'm the first person to ever try to do this, so I'd like to ask for some advise from the StackOverflow community. :) To understand what I need, assume the following SQL string: select * from table1 inner join table2 on table1.id = table2.id where field1 = 'sam' having table1.field3 > 0 group by table1.field4 order by table1.field5 I created a regular expression to group the parts accordingly: select\s+(?<fields>.+)\s+from\s+(?<from>.+)\s+where\s+(?<where>.+)\s+having\s+(?<having>.+)\s+group\sby\s+(?<groupby>.+)\s+order\sby\s+(?<orderby>.+) This gives me the following results: fields => * from => table1 inner join table2 on table1.id = table2.id where => field1 = 'sam' having => table1.field3 > 0 groupby => table1.field4 orderby => table1.field5 The problem that I'm faced with is that if any part of the SQL string is missing after the 'from' clause, the regular expression doesn't match. To fix that, I've tried putting each optional part in it's own (...)? group but that doesn't work. It simply put all the optional parts (where, having, groupBy, and orderBy) into the 'from' group. Any ideas?

    Read the article

  • How to split this string and identify first sentence after last '*'?

    - by DaveDev
    I have to get a quick demo for a client, so this is a bit hacky. Please don't flame me too much! :-) I'm getting a string similar to the following back from the database: The object of the following is to do: * blah 1 * blah 2 * blah 3 * blah 4. Some more extremely uninteresting text. Followed by yet another sentence full of extrememly uninteresting text. Thankfully this is the last sentence. I need to format this so that each * represents a bullet point, and the sentence after the last * goes onto a new line, ideally as follows: The object of the following is to do: blah 1 (StackOverflow wants to add bullet points here, but I just need '*') blah 2 blah 3 blah 4. Some more extremely uninteresting text. Followed by yet another sentence full of extrememly uninteresting text. Thankfully this is the last sentence. It's easy enough to split the string by the * character and replace that with <br /> *. I'm using the following for that: string description = GetDescription(); description = description.Replace("*", "<br />*"); // it's going onto a web page. but the result this gives me is: The object of the following is to do: blah 1 blah 2 blah 3 blah 4. Some more extremely uninteresting text. Followed by yet another sentence full of extrememly uninteresting text. Thankfully this is the last sentence. I'm having a bit of difficulty identifying the fist sentence after the last '*' so I can put a break there too. Can somebody show me how to do this?

    Read the article

  • How to parse a string (by a "new" markup) with R ?

    - by Tal Galili
    Hi all, I want to use R to do string parsing that (I think) is like a simplistic HTML parsing. For example, let's say we have the following two variables: Seq <- "GCCTCGATAGCTCAGTTGGGAGAGCGTACGACTGAAGATCGTAAGGtCACCAGTTCGATCCTGGTTCGGGGCA" Str <- ">>>>>>>..>>>>........<<<<.>>>>>.......<<<<<.....>>>>>.......<<<<<<<<<<<<." Say that I want to parse "Seq" According to "Str", by using the legend here Seq: GCCTCGATAGCTCAGTTGGGAGAGCGTACGACTGAAGATCGTAAGGtCACCAGTTCGATCCTGGTTCGGGGCA Str: >>>>>>>..>>>>........<<<<.>>>>>.......<<<<<.....>>>>>.......<<<<<<<<<<<<. | | | | | | | || | +-----+ +--------------+ +---------------+ +---------------++-----+ | Stem 1 Stem 2 Stem 3 | | | +----------------------------------------------------------------+ Stem 0 Assume that we always have 4 stems (0 to 3), but that the length of letters before and after each of them can very. The output should be something like the following list structure: list( "Stem 0 opening" = "GCCTCGA", "before Stem 1" = "TA", "Stem 1" = list(opening = "GCTC", inside = "AGTTGGGA", closing = "GAGC" ), "between Stem 1 and 2" = "G", "Stem 2" = list(opening = "TACGA", inside = "CTGAAGA", closing = "TCGTA" ), "between Stem 2 and 3" = "AGGtC", "Stem 3" = list(opening = "ACCAG", inside = "TTCGATC", closing = "CTGGT" ), "After Stem 3" = "", "Stem 0 closing" = "TCGGGGC" ) I don't have any experience with programming a parser, and would like advices as to what strategy to use when programming something like this (and any recommended R commands to use). What I was thinking of is to first get rid of the "Stem 0", then go through the inner string with a recursive function (let's call it "seperate.stem") that each time will split the string into: 1. before stem 2. opening stem 3. inside stem 4. closing stem 5. after stem Where the "after stem" will then be recursively entered into the same function ("seperate.stem") The thing is that I am not sure how to try and do this coding without using a loop. Any advices will be most welcomed.

    Read the article

  • Winforms connection strings from App.config

    - by Geo Ego
    I have a Winforms app that I am developing in C# that will serve as a frontend for a SQL Server 2005 database. I rolled the executable out to a test machine and ran it. It worked perfectly fine on the test machine up until the last round of changes that I made. However, now on the test machine, it throws the following exception immediately upon opening: System.NullReferenceException: Object reference not set to an instance of an object. at PSRD_Specs_Database_Administrat.mainMenu.mainMenu_Load(Object sender, EventArgs e) at System.Windows.Forms.Form.OnLoad(EventArgs e) at System.Windows.Forms.Form.OnCreateControl() at System.Windows.Forms.Control.CreateControl(Boolean fIgnoreVisible) at System.Windows.Forms.Control.CreateControl() at System.Windows.Forms.Control.WmShowWindow(Message& m) at System.Windows.Forms.Control.WndProc(Message& m) at System.Windows.Forms.ScrollableControl.WndProc(Message& m) at System.Windows.Forms.ContainerControl.WndProc(Message& m) at System.Windows.Forms.Form.WmShowWindow(Message& m) at System.Windows.Forms.Form.WndProc(Message& m) at System.Windows.Forms.Control.ControlNativeWindow.OnMessage(Message& m) at System.Windows.Forms.Control.ControlNativeWindow.WndProc(Message& m) at System.Windows.Forms.NativeWindow.Callback(IntPtr hWnd, Int32 msg, IntPtr wparam, IntPtr lparam) The only thing that I changed in this version that pertains to mainMenu_Load is the way that the connection string to the database is called. Previously, I had set a string with the connection string on every form that I needed to call it from, like: string conString = "Data Source = SHAREPOINT;Trusted_Connection = yes;" + "database = CustomerDatabase;connection timeout = 15"; As my app grew and I added forms to it, I decided to add an App.config to the project. I defined the connection string in it: <connectionStrings> <add name="conString" providerName="System.Data.SqlClient" connectionString="Data Source = SHAREPOINT;Trusted_Connection = yes;database = CustomerDatabase;connection timeout = 15" /> </connectionStrings> I then created a static string that would return the conString: public static string GetConnectionString(string conName) { string strReturn = string.Empty; if (!(string.IsNullOrEmpty(conName))) { strReturn = ConfigurationManager.ConnectionStrings[conName].ConnectionString; } else { strReturn = ConfigurationManager.ConnectionStrings["conString"].ConnectionString; } return strReturn; } I removed the conString variable and now call the connection string like so: PublicMethods.GetConnectionString("conString").ToString() It appears that this is giving me the error. I changed these instances to directly call the connection string from App.config without using GetConnectionString. For instance, in a SQLConnection: using (SqlConnection con = new SqlConnection(ConfigurationManager.ConnectionStrings["conString"].ConnectionString)) This also threw the exception. However, when I went back to using the conString variable on each form, I had no issues. What I don't understand is why all three methods work fine on my development machine, while using the App.config directly or via the static string I created throw exceptions.

    Read the article

  • StringIndexOutOfBoundsException error in main method

    - by Ro Siv
    I am obtaining a StringIndexOutOfBoundsError when running my main method. Here is the output of my program in the command line. "Please enter the shift, 1 for day, 2 for night" 1 "you entered a number for the shift" "Please enter the hourly pay Rate" 2 "you entered a number for the pay Rate" "Please enter the employees name" brenda "cat6b" "your value you entered is correct 0-9 or a - z" "Please enter the employee number" 100e "cat41" "your value you entered is correct 0-9 or a - z" "Please enter current date in XXYYZZZZ format, X is day, Y is month, Z is year" 10203933 "cat81 " "your value you entered is correct 0-9 or a - z" 90 1 valye of array is 1 81 0 value of array is 0 82 2 value of array is 2 83 0 value of array is 0 84 3 value of array is 3 85 9 value of array is 9 86 3 value of array is 3 87 3 value of array is 3 "Exception in thread "main" java.lang.StringIndexOutOfBoundsException: String index out of range: 8 at java.lang.String.charAt(String.java:658) at ProductionWorker<init>(ProductionWorker.java:66) at labBookFiftyFour.main(labBookFiftyFour.java:58)" "Press any key to continue . . ." Ignore the cat parts in the code, i was using a println statement to test the code. Ignore the value of array output as well, as i wanted to use an array in the program later on. Here is my main method. import java.math.*; import java.text.DecimalFormat; import java.io.*; import java.util.*; public class labBookFiftyFour { public static void main(String[] args) { Scanner myInput = new Scanner(System.in); int shift = -1; double pRate = -2; String name = " "; String number = " "; String date = " "; while(shift < 0 || pRate < 0 ) { System.out.println("Please enter the shift, 1 for day, 2 for night"); if(myInput.hasNextInt()){ System.out.println("you entered a number for the shift"); shift = myInput.nextInt(); } System.out.println("Please enter the hourly pay Rate"); if(myInput.hasNextDouble()){ System.out.println("you entered a number for the pay Rate"); pRate = myInput.nextDouble(); } else if (myInput.hasNext()) { System.out.println("Please enter a proper value"); myInput.next(); } else { System.err.println("No more input"); System.exit(1); } } myInput.nextLine(); //consume newLine System.out.println("Please enter the employees name"); name = myInput.nextLine(); //use your isValid method if(isValid(name)) { System.out.println("your value you entered is correct 0-9 or a - z "); } System.out.println("Please enter the employee number"); number = myInput.nextLine(); //use your isValid method if(isValid(number)) { System.out.println("your value you entered is correct 0-9 or a - z "); } System.out.println("Please enter current date in XXYYZZZZ format, X is day, Y is month, Z is year"); date = myInput.nextLine(); //use your isValid method if(isValid(date)) { System.out.println("your value you entered is correct 0-9 or a - z "); } ProductionWorker myWorker = new ProductionWorker(shift, pRate, name, number, date); //int day and night , double payRate System.out.println("THis is the shift " + myWorker.getShift() + " This is the pay Rate " + myWorker.getPRate() + " " + myWorker.getName() + " " + myWorker.getNumber() + " " + myWorker.getDate()); } //Made this method for testing String input for 0-9 or a - z values , put AFTER main method, but before end of class public static boolean isValid(String stringName) //This method has to be static, for some reason? 
{ System.out.println("cat" + stringName.length() + stringName.charAt(0)); boolean flag = true; int index = 0; while(index < stringName.length()) { if(Character.isLetterOrDigit(stringName.charAt(index))) { flag = true; } else { flag = false; } ++index; } return flag; } } Here is my employeeOne. java Superclass public class employeeOne { private String name; private String number; private String date; public employeeOne(String name, String number, String date) { this.name = name; this.number = number; this.date = date; } public String getName() { return name; } public String getNumber() { return number; } public String getDate() { return date; } } Here is my ProductionWorker.java subclass, which extends employeeOne public class ProductionWorker extends employeeOne { private int shift; //shift represents day or night, day = 1, night = 2 private double pRate; //hourly pay rate public ProductionWorker(int shift, double pRate, String name, String number, String date) { super(name, number, date); this.shift = shift; if(this.shift >= 3 || this.shift <= 0) { System.out.println("You entered an out of bounds shift date, enter 1 for day or 2 for night, else shift will be day"); this.shift = 1; } this.pRate = pRate; boolean goodSoFar = true; int indexNum = 0; int indexDate = 0; if(name.length() <= 10 && number.length() <= 4 && date.length() < 9 ) { goodSoFar = true; } else { goodSoFar = false; } while(goodSoFar && indexNum < 3) //XXXL XXX digits 1-9, L is a letter A -M { if(Character.isDigit(number.charAt(indexNum))) { goodSoFar = true; } else { goodSoFar = false; } ++indexNum; } while(goodSoFar && indexNum < 4) { if(Character.isLetter(number.charAt(indexNum))) { goodSoFar = true; } else if(Character.isDigit(number.charAt(indexNum))) { goodSoFar = false; } else if(Character.isDigit(number.charAt(indexNum)) == false && Character.isLetter(number.charAt(indexNum)) == false) { goodSoFar = false; } ++indexNum; } int[] dateValues = new int[date.length()]; while(goodSoFar && indexDate <= date.length()) //XXYYZZZZ { System.out.println("" + date.length() + indexDate + " " + date.charAt(indexDate)); if(Character.isDigit(date.charAt(indexDate))) { dateValues[indexDate] = Character.getNumericValue(date.charAt(indexDate)); System.out.println("value of array is " + dateValues[indexDate]); ++indexDate; } else { goodSoFar = false; } } if(goodSoFar) { System.out.println("your input is good so far"); } else { System.out.println("your input is wrong for name or number or date"); } } public int getShift() { return shift; } public double getPRate() { return pRate; } }

    Read the article

  • SQLAuthority News – Feature Pack for Microsoft SQL Server 2005 SP4

    - by pinaldave
    If you are still using SQL Server 2005, I suggest that you consider migrating to a later version of SQL Server (2008/2008 R2). If, for any reason, you want to continue using SQL Server 2005, I suggest that you take a look at the Feature Pack for Microsoft SQL Server 2005 SP4. There are many different tools and features available in the pack, which can be very handy and can solve real issues. Microsoft ADOMD.NET Microsoft Core XML Services (MSXML) 6.0 Microsoft OLEDB Provider for DB2 Microsoft SQL Server Management Pack for MOM 2005 Microsoft SQL Server 2000 PivotTable Services Microsoft SQL Server 2000 DTS Designer Components Microsoft SQL Server Native Client Microsoft SQL Server 2005 Analysis Services 9.0 OLE DB Provider Microsoft SQL Server 2005 Backward Compatibility Components Microsoft SQL Server 2005 Command Line Query Utility Microsoft SQL Server 2005 Datamining Viewer Controls Microsoft SQL Server 2005 JDBC Driver Microsoft SQL Server 2005 Management Objects Collection Microsoft SQL Server 2005 Compact Edition Microsoft SQL Server 2005 Notification Services Client Components Microsoft SQL Server 2005 Upgrade Advisor Microsoft .NET Data Provider for mySAP Business Suite, Preview Version Reporting Add-In for Microsoft Visual Web Developer 2005 Express Microsoft Exception Message Box Data Mining Managed Plug-in Algorithm API for SQL Server 2005 Microsoft SQL Server 2005 Reporting Services Add-in for Microsoft SharePoint Technologies Microsoft SQL Server 2005 Data Mining Add-ins for Microsoft Office 2007 SQL Server 2005 Performance Dashboard Reports SQL Server 2005 Best Practices Analyzer Download Feature Pack for Microsoft SQL Server 2005 SP4 Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: SQL, SQL Authority, SQL Documentation, SQL Download, SQL Query, SQL Server, SQL Service Pack, SQL Tips and Tricks, SQLAuthority News, T SQL, Technology

    Read the article

  • SQL SERVER – Find Max Worker Count using DMV – 32 Bit and 64 Bit

    - by pinaldave
    During several recent training courses, I found it very interesting that the worker thread is not well known to everyone, despite being a very important feature. At some point in the discussion, one of the attendees mentioned that we can double the worker thread count if we double the CPUs (add the same number of CPUs that we have on the current system). That same discussion triggered this quick article. Here is the DMV which can be used to find out the Max Worker Count: SELECT max_workers_count FROM sys.dm_os_sys_info Let us run the above query on my system and look at the results. As my system is 32 bit and I have two CPUs, the Max Worker Count is displayed as 512. To address the earlier discussion, adding more CPUs does not necessarily double the worker count. In fact, the logic behind this simple principle is as follows: For x86 (32-bit), up to 4 logical processors: max worker threads = 256. For x86 (32-bit), more than 4 logical processors: max worker threads = 256 + ((# Procs – 4) * 8). For x64 (64-bit), up to 4 logical processors: max worker threads = 512. For x64 (64-bit), more than 4 logical processors: max worker threads = 512 + ((# Procs – 4) * 8). In addition to this, you can configure Max Worker Threads by using SSMS: go to the server node, right click and select Properties, then select the Processors page and modify the setting under Worker Threads. According to Books Online, the default worker thread setting is appropriate for most systems. Reference: Pinal Dave (http://blog.SQLAuthority.com) Filed under: Pinal Dave, SQL, SQL Authority, SQL Query, SQL Scripts, SQL Server, SQL System Table, SQL Tips and Tricks, T SQL, Technology Tagged: SQL DMV
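
    As a rough cross-check of the formulas above, the following sketch applies them to the logical processor count exposed by the same DMV and shows both candidate values next to the actual one (it assumes 'max worker threads' is left at its default of 0, i.e. auto-configured):

        SELECT  cpu_count,
                max_workers_count,
                CASE WHEN cpu_count <= 4 THEN 512
                     ELSE 512 + ((cpu_count - 4) * 8)
                END AS expected_if_x64,
                CASE WHEN cpu_count <= 4 THEN 256
                     ELSE 256 + ((cpu_count - 4) * 8)
                END AS expected_if_x86
        FROM    sys.dm_os_sys_info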

    Read the article

  • SQL SERVER – Tricks to Comment T-SQL in SSMS – SQL in Sixty Seconds #019 – Video

    - by pinaldave
    Commenting code is one of the most common tasks developers perform. There are two major reasons why developers comment code: 1) during debugging and 2) to document the code. While debugging T-SQL code I have often seen developers struggling to comment code. They spend (or waste) more time commenting and uncommenting than doing the actual debugging of the procedure. When I see a developer struggling to comment code, I feel a little uncomfortable, because commenting should be a very easy task. Today we will see three quick methods to comment T-SQL code in the Query Editor. There are three different methods to comment and uncomment statements in SQL Server Management Studio: using keyboard shortcuts, using the toolbar, and using the menu bar. Method 1: Using Keyboard Shortcuts. Commenting the statement – CTRL+K, CTRL+C. Uncommenting the statement – CTRL+K, CTRL+U. Method 2: Using the Toolbar. Use the toolbar buttons. (See Video) Method 3: Using the Menu Bar. Commenting the statement – Menu Bar >> Edit >> Advanced >> Click on Comment Selection. Uncommenting the statement – Menu Bar >> Edit >> Advanced >> Click on Uncomment Selection. Related reading: Two Different Ways to Comment Code – Explanation and Example. I encourage you to submit your ideas for SQL in Sixty Seconds. We will try to accommodate as many as we can. If we like your idea, we promise to share educational material with you. Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: Database, Pinal Dave, PostADay, SQL, SQL Authority, SQL in Sixty Seconds, SQL Query, SQL Scripts, SQL Server, SQL Server Management Studio, SQL Tips and Tricks, T SQL, Technology, Video
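
    For completeness, the shortcuts and menu commands above toggle line comments (two dashes). T-SQL also supports block comments, which are handy for disabling a whole section of a script at once. A small illustration:

        -- CTRL+K, CTRL+C prefixes every selected line with a line comment like this one
        SELECT name
        FROM   sys.databases  -- an inline comment at the end of a line
        WHERE  database_id > 4
        /*
           A block comment: useful for temporarily disabling a whole
           section of a script without touching each individual line.
        */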

    Read the article

  • SQL SERVER – CTRL+SHIFT+] Shortcut to Select Code Between Two Parenthesis

    - by pinaldave
    Every weekend brings creative ideas, and accidents bring the best unknown secrets in front of us. Just the other day, while working with complex SQL Server code in SSMS, I came across a very interesting shortcut which I had never used before, and I instantly fell in love with it. It is entirely possible that you are already familiar with it, but for me it was the first time and I was surprised I had not known this shortcut until now. The shortcut key is CTRL+SHIFT+]. It can be very useful when dealing with multiple subqueries, CTEs, or a query with multiple parentheses. When exercised, this shortcut selects the T-SQL code between two parentheses. Let us see a few examples to understand it. In each of the examples I have put the cursor at the position displayed and pressed CTRL+SHIFT+], and it has selected the code between the two corresponding parentheses. Cursor position 1 Cursor position 2 Cursor position 3 If you are a developer and have to work with complex queries, you will appreciate how much development time this feature can save. I still remember my experience as a developer, when I lost many hours just balancing parentheses. As I said, I found this shortcut accidentally. How many of you were aware of this feature? Is there any other useful feature you would like to share with us? Please leave a comment, and if I have not covered it earlier, I will share it with due credit on this blog. Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Server Management Studio, SQL Tips and Tricks, T SQL, Technology Tagged: SQL Shortcut
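
    If you want to try the shortcut right away, here is a small nested query to practice on (it assumes the AdventureWorks sample database, which is used elsewhere on this blog); place the cursor next to any parenthesis and press CTRL+SHIFT+]:

        USE AdventureWorks
        GO
        SELECT  soh.SalesOrderID
        FROM    Sales.SalesOrderHeader soh
        WHERE   soh.SalesOrderID IN
                (
                    SELECT sod.SalesOrderID
                    FROM   Sales.SalesOrderDetail sod
                    WHERE  sod.OrderQty > (SELECT AVG(OrderQty) FROM Sales.SalesOrderDetail)
                )
        GO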

    Read the article

  • SQL SERVER – What is Page Life Expectancy (PLE) Counter

    - by pinaldave
    During performance tuning consultations there are plenty of counters and values I come across. Today we will quickly talk about the Page Life Expectancy counter, commonly known as PLE. You can find the value of PLE by running the following query. SELECT [object_name], [counter_name], [cntr_value] FROM sys.dm_os_performance_counters WHERE [object_name] LIKE '%Manager%' AND [counter_name] = 'Page life expectancy' The commonly recommended value for the PLE counter is 300 seconds. I have seen this value as low as 45 seconds on a busy system and as high as 1250 seconds on a lightly used system. Page Life Expectancy is the number of seconds a page will stay in the buffer pool without references. In simple words, if your pages stay longer in the buffer pool (the memory cache area), your PLE is higher, leading to better performance, because every time a request comes in there is a good chance it finds its data in the cache instead of going to the hard drive to read it. Now check your system and post back what this counter's value is at various times of the day. Does this counter relate in any way to performance issues on your system? Note: There are various other counters which are important during performance tuning; this counter is not everything. Reference: Pinal Dave (http://blog.SQLAuthority.com) Filed under: Pinal Dave, SQL, SQL Authority, SQL Optimization, SQL Performance, SQL Query, SQL Scripts, SQL Server, SQL Tips and Tricks, SQLServer, T SQL, Technology
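
    On servers with more than one NUMA node, the Buffer Manager value is an aggregate; a hedged variation of the query above reads the per-node values from the Buffer Node object, where the instance exposes it:

        SELECT [object_name], [counter_name], [cntr_value]
        FROM sys.dm_os_performance_counters
        WHERE [object_name] LIKE '%Buffer Node%'
        AND [counter_name] = 'Page life expectancy'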

    Read the article

  • SQLAuthority News – Author Visit to Nepal TechMela – 2 Technical Sessions

    - by pinaldave
    Microsoft MDP Nepal is going to organize a Tech Mela for the IT community of Nepal on March 29 & 30, 2010 (2066 Chaitra 16 & 17), Monday and Tuesday, at the Russian Center for Science & Culture, Kamalpokhari, Kathmandu. The objective of the event is to enhance and exchange knowledge about Information Technology, as well as Microsoft products and technologies, with the IT community. I am very excited to attend this one-of-a-kind event in Nepal. I will be giving two presentations at the event: 1) Become An Efficient Developer – Learn The Tricks of SQL Server Management Studio (SSMS). There are so many features in SQL Server Management Studio that we may not know or use all of them. This presentation will impart tricks and tips of SQL Server Management Studio to attendees, aiming to make you a more efficient developer with an edge over others. 2) Good, Bad and Ugly: The Story of Index. An index is often considered the sure-shot tool for improving the performance of any query. Learn the basics with examples and discover the good, bad and ugly sides of indexes. This session will help you write efficient queries in the future. I am very excited about this special event, as this is the very first time I will be presenting technical sessions in Nepal. If you are in Nepal, I strongly suggest that you go to this once-in-a-lifetime IT fair. Reference: Pinal Dave (http://blog.SQLAuthority.com) Filed under: Pinal Dave, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, SQLAuthority Author Visit, SQLAuthority News, T SQL, Technology

    Read the article

  • SQL SERVER – SELECT TOP Shortcut in SQL Server Management Studio (SSMS)

    - by pinaldave
    This tip is pretty old, yet it always comes in handy. I had a great trip at TechEd in India, and during one of my presentations I was asked if there are any shortcuts to SELECT only the TOP 100 records from SSMS. I immediately told him that if he explores the table in SSMS, he can just right click on it and SELECT TOP 1000 records. If he wanted only 100 records, he could change that 1000 to 100 by going to Options: go to Options, then hover the mouse over SQL Server Object Explorer, then proceed to Commands, and change the Value for Select Top <n> Audit Records. After narrating the steps, he told me that he was not looking for the right-click option; rather, he was asking if there is any keyboard shortcut, for convenience's sake. Actually, a keyboard shortcut is also possible. SQL Server Management Studio (SSMS) lets you configure such a shortcut. Here is how you can do it: go to Options, then to Environment, proceed to Keyboard, and from there assign the desired T-SQL text to a shortcut. Now open an SSMS New Query Window, click and type in any table name, and then just hit the shortcut you made earlier. Doing this should display the TOP 100 records in the Results window. I am sure this trick is quite old, but it is still helpful to many. Reference: Pinal Dave (http://blog.SQLAuthority.com) Filed under: Pinal Dave, SQL, SQL Add-On, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, T SQL, Technology

    Read the article

  • SQL SERVER – Find Most Expensive Queries Using DMV

    - by pinaldave
    The title of this post says all I need to express in this quick blog post. In a recent query tuning consultation project, I was asked if I could share the script I use to figure out which queries are the most expensive ones running on a SQL Server. This script is very basic and very simple; there are many different versions available online. This basic script does the job I expect it to do – find the most expensive queries on a SQL Server box. SELECT TOP 10 SUBSTRING(qt.TEXT, (qs.statement_start_offset/2)+1, ((CASE qs.statement_end_offset WHEN -1 THEN DATALENGTH(qt.TEXT) ELSE qs.statement_end_offset END - qs.statement_start_offset)/2)+1), qs.execution_count, qs.total_logical_reads, qs.last_logical_reads, qs.total_logical_writes, qs.last_logical_writes, qs.total_worker_time, qs.last_worker_time, qs.total_elapsed_time/1000000 total_elapsed_time_in_S, qs.last_elapsed_time/1000000 last_elapsed_time_in_S, qs.last_execution_time, qp.query_plan FROM sys.dm_exec_query_stats qs CROSS APPLY sys.dm_exec_sql_text(qs.sql_handle) qt CROSS APPLY sys.dm_exec_query_plan(qs.plan_handle) qp ORDER BY qs.total_logical_reads DESC -- logical reads -- ORDER BY qs.total_logical_writes DESC -- logical writes -- ORDER BY qs.total_worker_time DESC -- CPU time You can change the ORDER BY clause to order the result by different parameters. I invite my readers to share their scripts. Reference: Pinal Dave (http://blog.SQLAuthority.com) Filed under: Pinal Dave, SQL, SQL Authority, SQL Optimization, SQL Performance, SQL Query, SQL Scripts, SQL Server, SQL Tips and Tricks, SQLAuthority News, SQLServer, T SQL, Technology Tagged: SQL DMV

    Read the article

  • SQL SERVER – Guest Post – Jonathan Kehayias – Wait Type – Day 16 of 28

    - by pinaldave
    Jonathan Kehayias (Blog | Twitter) is a MCITP Database Administrator and Developer, who got started in SQL Server in 2004 as a database developer and report writer in the natural gas industry. After spending two and a half years working in TSQL, in late 2006, he transitioned to the role of SQL Database Administrator. His primary passion is performance tuning, where he frequently rewrites queries for better performance and performs in depth analysis of index implementation and usage. Jonathan blogs regularly on SQLBlog, and was a coauthor of Professional SQL Server 2008 Internals and Troubleshooting. On a personal note, I think Jonathan is extremely positive person. In every conversation with him I have found that he is always eager to help and encourage. Every time he finds something needs to be approved, he has contacted me without hesitation and guided me to improve, change and learn. During all the time, he has not lost his focus to help larger community. I am honored that he has accepted to provide his views on complex subject of Wait Types and Queues. Currently I am reading his series on Extended Events. Here is the guest blog post by Jonathan: SQL Server troubleshooting is all about correlating related pieces of information together to indentify where exactly the root cause of a problem lies. In my daily work as a DBA, I generally get phone calls like, “So and so application is slow, what’s wrong with the SQL Server.” One of the funny things about the letters DBA is that they go so well with Default Blame Acceptor, and I really wish that I knew exactly who the first person was that pointed that out to me, because it really fits at times. A lot of times when I get this call, the problem isn’t related to SQL Server at all, but every now and then in my initial quick checks, something pops up that makes me start looking at things further. The SQL Server is slow, we see a number of tasks waiting on ASYNC_IO_COMPLETION, IO_COMPLETION, or PAGEIOLATCH_* waits in sys.dm_exec_requests and sys.dm_exec_waiting_tasks. These are also some of the highest wait types in sys.dm_os_wait_stats for the server, so it would appear that we have a disk I/O bottleneck on the machine. A quick check of sys.dm_io_virtual_file_stats() and tempdb shows a high write stall rate, while our user databases show high read stall rates on the data files. A quick check of some performance counters and Page Life Expectancy on the server is bouncing up and down in the 50-150 range, the Free Page counter consistently hits zero, and the Free List Stalls/sec counter keeps jumping over 10, but Buffer Cache Hit Ratio is 98-99%. Where exactly is the problem? In this case, which happens to be based on a real scenario I faced a few years back, the problem may not be a disk bottleneck at all; it may very well be a memory pressure issue on the server. A quick check of the system spec’s and it is a dual duo core server with 8GB RAM running SQL Server 2005 SP1 x64 on Windows Server 2003 R2 x64. Max Server memory is configured at 6GB and we think that this should be enough to handle the workload; or is it? This is a unique scenario because there are a couple of things happening inside of this system, and they all relate to what the root cause of the performance problem is on the system. 
If we were to query sys.dm_exec_query_stats for the TOP 10 queries, by max_physical_reads, max_logical_reads, and max_worker_time, we may be able to find some queries that were using excessive I/O and possibly CPU against the system in their worst single execution. We can also CROSS APPLY to sys.dm_exec_sql_text() and see the statement text, and also CROSS APPLY sys.dm_exec_query_plan() to get the execution plan stored in cache. Ok, quick check, the plans are pretty big, I see some large index seeks, that estimate 2.8GB of data movement between operators, but everything looks like it is optimized the best it can be. Nothing really stands out in the code, and the indexing looks correct, and I should have enough memory to handle this in cache, so it must be a disk I/O problem right? Not exactly! If we were to look at how much memory the plan cache is taking by querying sys.dm_os_memory_clerks for the CACHESTORE_SQLCP and CACHESTORE_OBJCP clerks we might be surprised at what we find. In SQL Server 2005 RTM and SP1, the plan cache was allowed to take up to 75% of the memory under 8GB. I’ll give you a second to go back and read that again. Yes, you read it correctly, it says 75% of the memory under 8GB, but you don’t have to take my word for it, you can validate this by reading Changes in Caching Behavior between SQL Server 2000, SQL Server 2005 RTM and SQL Server 2005 SP2. In this scenario the application uses an entirely adhoc workload against SQL Server and this leads to plan cache bloat, and up to 4.5GB of our 6GB of memory for SQL can be consumed by the plan cache in SQL Server 2005 SP1. This in turn reduces the size of the buffer cache to just 1.5GB, causing our 2.8GB of data movement in this expensive plan to cause complete flushing of the buffer cache, not just once initially, but then another time during the queries execution, resulting in excessive physical I/O from disk. Keep in mind that this is not the only query executing at the time this occurs. Remember the output of sys.dm_io_virtual_file_stats() showed high read stalls on the data files for our user databases versus higher write stalls for tempdb? The memory pressure is also forcing heavier use of tempdb to handle sorting and hashing in the environment as well. The real clue here is the Memory counters for the instance; Page Life Expectancy, Free List Pages, and Free List Stalls/sec. The fact that Page Life Expectancy is fluctuating between 50 and 150 constantly is a sign that the buffer cache is experiencing constant churn of data, once every minute to two and a half minutes. If you add to the Page Life Expectancy counter, the consistent bottoming out of Free List Pages along with Free List Stalls/sec consistently spiking over 10, and you have the perfect memory pressure scenario. All of sudden it may not be that our disk subsystem is the problem, but is instead an innocent bystander and victim. Side Note: The Page Life Expectancy counter dropping briefly and then returning to normal operating values intermittently is not necessarily a sign that the server is under memory pressure. The Books Online and a number of other references will tell you that this counter should remain on average above 300 which is the time in seconds a page will remain in cache before being flushed or aged out. This number, which equates to just five minutes, is incredibly low for modern systems and most published documents pre-date the predominance of 64 bit computing and easy availability to larger amounts of memory in SQL Servers. 
As food for thought, consider that my personal laptop has more memory in it than most SQL Servers did at the time those numbers were posted. I would argue that today, a system churning the buffer cache every five minutes is in need of some serious tuning or a hardware upgrade. Back to our problem and its investigation: There are two things really wrong with this server; first the plan cache is excessively consuming memory and bloated in size and we need to look at that and second we need to evaluate upgrading the memory to accommodate the workload being performed. In the case of the server I was working on there were a lot of single use plans found in sys.dm_exec_cached_plans (where usecounts=1). Single use plans waste space in the plan cache, especially when they are adhoc plans for statements that had concatenated filter criteria that is not likely to reoccur with any frequency.  SQL Server 2005 doesn’t natively have a way to evict a single plan from cache like SQL Server 2008 does, but MVP Kalen Delaney, showed a hack to evict a single plan by creating a plan guide for the statement and then dropping that plan guide in her blog post Geek City: Clearing a Single Plan from Cache. We could put that hack in place in a job to automate cleaning out all the single use plans periodically, minimizing the size of the plan cache, but a better solution would be to fix the application so that it uses proper parameterized calls to the database. You didn’t write the app, and you can’t change its design? Ok, well you could try to force parameterization to occur by creating and keeping plan guides in place, or we can try forcing parameterization at the database level by using ALTER DATABASE <dbname> SET PARAMETERIZATION FORCED and that might help. If neither of these help, we could periodically dump the plan cache for that database, as discussed as being a problem in Kalen’s blog post referenced above; not an ideal scenario. The other option is to increase the memory on the server to 16GB or 32GB, if the hardware allows it, which will increase the size of the plan cache as well as the buffer cache. In SQL Server 2005 SP1, on a system with 16GB of memory, if we set max server memory to 14GB the plan cache could use at most 9GB  [(8GB*.75)+(6GB*.5)=(6+3)=9GB], leaving 5GB for the buffer cache.  If we went to 32GB of memory and set max server memory to 28GB, the plan cache could use at most 16GB [(8*.75)+(20*.5)=(6+10)=16GB], leaving 12GB for the buffer cache. Thankfully we have SQL Server 2005 Service Pack 2, 3, and 4 these days which include the changes in plan cache sizing discussed in the Changes to Caching Behavior between SQL Server 2000, SQL Server 2005 RTM and SQL Server 2005 SP2 blog post. In real life, when I was troubleshooting this problem, I spent a week trying to chase down the cause of the disk I/O bottleneck with our Server Admin and SAN Admin, and there wasn’t much that could be done immediately there, so I finally asked if we could increase the memory on the server to 16GB, which did fix the problem. It wasn’t until I had this same problem occur on another system that I actually figured out how to really troubleshoot this down to the root cause.  I couldn’t believe the size of the plan cache on the server with 16GB of memory when I actually learned about this and went back to look at it. SQL Server is constantly telling a story to anyone that will listen. 
As the DBA, you have to sit back and listen to all that it’s telling you and then evaluate the big picture and how all the data you can gather from SQL about performance relate to each other. One of the greatest tools out there is actually a free in the form of Diagnostic Scripts for SQL Server 2005 and 2008, created by MVP Glenn Alan Berry. Glenn’s scripts collect a majority of the information that SQL has to offer for rapid troubleshooting of problems, and he includes a lot of notes about what the outputs of each individual query might be telling you. When I read Pinal’s blog post SQL SERVER – ASYNC_IO_COMPLETION – Wait Type – Day 11 of 28, I noticed that he referenced Checking Memory Related Performance Counters in his post, but there was no real explanation about why checking memory counters is so important when looking at an I/O related wait type. I thought I’d chat with him briefly on Google Talk/Twitter DM and point this out, and offer a couple of other points I noted, so that he could add the information to his blog post if he found it useful.  Instead he asked that I write a guest blog for this. I am honored to be a guest blogger, and to be able to share this kind of information with the community. The information contained in this blog post is a glimpse at how I do troubleshooting almost every day of the week in my own environment. SQL Server provides us with a lot of information about how it is running, and where it may be having problems, it is up to us to play detective and find out how all that information comes together to tell us what’s really the problem. This blog post is written by Jonathan Kehayias (Blog | Twitter). Reference: Pinal Dave (http://blog.SQLAuthority.com) Filed under: MVP, Pinal Dave, PostADay, Readers Contribution, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, SQL Wait Stats, SQL Wait Types, T SQL, Technology

    Read the article

  • SQL SERVER – Online Session on What is New in Denali – Today Online

    - by pinaldave
    I will be presenting today on the subject Inside of Next Generation SQL Server – Denali, online at Zeollar.com. These sessions are really fun as they are online, downloadable, and 100% demo oriented. I will be using SQL Server ‘Denali’ CTP 1 to present on the subject of What is New in Denali. The webcast will start at 12:30 PM sharp and will end at 1 PM India Time. It will be 100% demo oriented, with no slides. I will be covering the following topics in the session. SQL SERVER – Denali Feature – Zoom Query Editor SQL SERVER – Denali – Improvement in Startup Options SQL SERVER – Denali – Clipboard Ring – CTRL+SHIFT+V SQL SERVER – Denali – Multi-Monitor SSMS Windows SQL SERVER – Denali – Executing Stored Procedure with Result Sets SQL SERVER – Performance Improvement with of Executing Stored Procedure with Result Sets in Denali SQL SERVER – ‘Denali’ – A Simple Example of Contained Databases SQL SERVER – Denali – ObjectID in Negative – Local TempTable has Negative ObjectID SQL SERVER – Server Side Paging in SQL Server Denali – A Better Alternative SQL SERVER – Server Side Paging in SQL Server Denali Performance Comparison SQL SERVER – Denali – SEQUENCE is not IDENTITY SQL SERVER – Denali – Introduction to SEQUENCE – Simple Example of SEQUENCE If time permits, we will cover a few more topics as well. The session will be recorded. My earlier session on the topic of Best Practices Analyzer is also available to watch online here: SQL SERVER – Video – Best Practices Analyzer using Microsoft Baseline Configuration Analyzer Reference: Pinal Dave (http://blog.SQLAuthority.com) Filed under: Pinal Dave, PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, SQLServer, T SQL, Technology

    Read the article

  • SQL Server – Learning SQL Server Performance: Indexing Basics – Interview of Vinod Kumar by Pinal Dave

    - by pinaldave
    I recently wrote a blog post about Learning SQL Server Performance: Indexing Basics, and I received lots of requests to share some insight into the course. Every single time performance is discussed, indexes are mentioned along with it. In recent times, data and application complexity has been growing continuously. The demand from organizations for faster query response, performance, and scalability is increasing, and developers and DBAs now need to write efficient code to achieve this. When we developed the course, we made sure that it remains practical and demo-heavy instead of just theories on the subject. Vinod Kumar and I often thought about this and realized that a practical understanding of indexes is very important. One cannot master every single aspect of indexes; however, there is some minimum expertise one should gain if performance is a concern. Here is a 200-second interview of Vinod Kumar I took right after completing the course. Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: PostADay, SQL, SQL Authority, SQL Index, SQL Performance, SQL Query, SQL Server, SQL Tips and Tricks, SQLServer, T SQL, Technology, Video

    Read the article

  • SQLAuthority News – Fast Track Data Warehouse 3.0 Reference Guide

    - by pinaldave
    http://msdn.microsoft.com/en-us/library/gg605238.aspx I am very excited that the Fast Track Data Warehouse 3.0 reference guide has been announced. As a consultant, I have always enjoyed working on Fast Track Data Warehouse projects, as they truly express the potential of the SQL Server engine. Here are a few details of the enhancements in the Fast Track Data Warehouse 3.0 reference architecture. The SQL Server Fast Track Data Warehouse initiative provides a basic methodology and concrete examples for the deployment of a balanced hardware and database configuration for a data warehousing workload. Balance is measured across the key components of a SQL Server installation; storage, server, application settings, and configuration settings for each component are evaluated. The guide covers (item – note): FTDW 3.0 Architecture – basic component architecture for FT 3.0 based systems; New Memory Guidelines – minimum and maximum tested memory configurations by server socket count; Additional Startup Options – notes for T-834 and the setting for Lock Pages in Memory; Storage Configuration – RAID1+0 is now standard (RAID1 was used in FT 2.0); Evaluating Fragmentation – a query is provided for evaluating logical fragmentation; Loading Data – additional options for CI table loads; MCR – additional detail and explanation of the FTDW MCR Rating. Read the white paper on Fast Track data warehousing. Reference: Pinal Dave (http://blog.SQLAuthority.com) Filed under: Business Intelligence, Data Warehousing, PostADay, SQL, SQL Authority, SQL Documentation, SQL Download, SQL Query, SQL Server, SQL Tips and Tricks, SQL White Papers, SQLAuthority News, T SQL, Technology

    Read the article

  • SQL SERVER – Auto Recovery File Settings in SSMS – SQL in Sixty Seconds #034 – Video

    - by pinaldave
    Every developer once in a while faces an unfortunate situation: they have not yet saved their work and SQL Server Management Studio crashes. Well, you can minimize the loss by optimizing the auto recovery settings. In this video we see how to set them. Go to SSMS >> Tools >> Options >> Environment >> AutoRecover. There are two different settings: 1) Save AutoRecover Information Every Minutes. This option saves the SQL query file at the specified interval. Set this option to the minimum value possible to avoid loss; if you have set this value to 5, in the worst possible case you can lose the last 5 minutes of work. 2) Keep AutoRecover Information for Days. This option preserves the AutoRecover information for the specified number of days. That said, I suggest that in case of an accident you open SQL Server Management Studio right away and recover your file; do not postpone this important task to a future date. Related Tips in SQL in Sixty Seconds: Manage Help Settings – CTRL + ALT + F1 SSMS 2012 Reset Keyboard Shortcuts to Default A Cool Trick – Restoring the Default SQL Server Management Studio – SSMS Color Coding SQL Server Management Studio Status Bar – SQL in Sixty Seconds #023 – Video Clear Drop Down List of Recent Connection From SQL Server Management Studio SELECT TOP Shortcut in SQL Server Management Studio (SSMS) What would you like to see in the next SQL in Sixty Seconds video? Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: Database, Pinal Dave, PostADay, SQL, SQL Authority, SQL in Sixty Seconds, SQL Query, SQL Scripts, SQL Server, SQL Server Management Studio, SQL Tips and Tricks, T SQL, Technology, Video Tagged: Excel

    Read the article

  • SQL SERVER – Copy Statistics from One Server to Another Server

    - by pinaldave
    I was recently working on a performance tuning project in Dubai (yes, I was able to see the tallest tower from the window of my workplace), and I had a very interesting learning experience there. There was a situation where we wanted to receive the schema of the original database from a certain client; however, the client was not able to provide us any data due to privacy issues. The schema alone was not enough, because without access to the underlying data it was a bit difficult to judge the queries. For example, without any production data all the queries ran in 0 (zero) milliseconds and all of them used nested loops, as there was no data to be returned. Even though we had CPU-offending queries, they were not doing anything without data in the tables. This was really a challenge, as I did not have access to the production server's data and could not recreate the production scenarios without it. I was confused, but Ruben from Solid Quality Mentors, Spain taught me a new trick. He suggested that when the table schema is generated, we can script the statistics along with it. Here is how we did that: once the statistics are created along with the schema, even without data in the tables, all the queries behave the way they do on the production server. This way, without access to the data, we were able to recreate the same scenario as the production server on the development server. When you observe the generated script, you will find that the statistics are generated along with the schema; you will find the statistics included in a WITH STATS_STREAM clause. What a simple and effective script. Reference: Pinal Dave (http://blog.SQLAuthority.com) Filed under: Pinal Dave, PostADay, SQL, SQL Authority, SQL Query, SQL Scripts, SQL Server, SQL Tips and Tricks, SQLServer, T SQL, Technology Tagged: SQL Statistics, Statistics
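
    The steps themselves were shown as screenshots in the original post. Broadly, the Generate Scripts wizard in SSMS offers a "Script Statistics" choice among its advanced scripting options; with statistics (and histograms) scripted along with the schema, the resulting file contains UPDATE STATISTICS ... WITH STATS_STREAM statements. Once such a script has been applied on the development server, the copied histograms can be sanity-checked with DBCC SHOW_STATISTICS; the object and statistics names below are purely illustrative:

        -- Verify that the development copy now carries the production histogram
        DBCC SHOW_STATISTICS ('Sales.SalesOrderDetail',
                              'PK_SalesOrderDetail_SalesOrderID_SalesOrderDetailID')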

    Read the article

  • SQL SERVER – Demo Script – Keeping CPU Busy

    - by pinaldave
    I recently faced a very interesting situation: during a presentation at an event, I was asked a very famous question: “My CPU is very high all the time, how can I reduce it?” This is a very interesting question, there are many answers, and a single blog post is not enough to do the subject justice. I presented a few scenarios to the person who asked the question. The member of the audience who asked it came to me afterwards with a few detailed questions. To answer him, I quickly wrote a query which simulates high CPU. Here is the script I wrote, which increased CPU from 10% to 80%. I was wondering if there are any similar scripts which can simulate high CPU usage. If you have one, share it with me and I will publish it with due credit. Here is my script: USE AdventureWorks GO DECLARE @Flag INT SET @Flag = 1 WHILE(@Flag < 1000) BEGIN ALTER INDEX [PK_SalesOrderDetail_SalesOrderID_SalesOrderDetailID] ON [Sales].[SalesOrderDetail] REBUILD SET @Flag = @Flag + 1 END GO Reference: Pinal Dave (http://blog.SQLAuthority.com) Filed under: Pinal Dave, PostADay, SQL, SQL Authority, SQL Query, SQL Scripts, SQL Server, SQL Tips and Tricks, T SQL, Technology
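
    One commonly shared alternative, offered here only as a hedged sketch and not taken from the original post, is a pure compute loop that keeps a scheduler busy without rebuilding any index; running one copy per core loads multiple schedulers:

        -- SQL Server 2008+ syntax (DECLARE with an initial value)
        DECLARE @i BIGINT = 0, @x FLOAT
        WHILE (@i < 50000000)
        BEGIN
            SET @x = SQRT(@i) * PI()   -- throwaway math, purely to burn CPU
            SET @i = @i + 1
        END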

    Read the article

  • MySQL – Introduction to User Defined Variables

    - by Pinal Dave
    MySQL supports user-defined variables to hold data that can be used in a later part of your query. You can save a value to a variable using a SELECT statement and access its value later. Unlike other RDBMSs, you do not need to declare the data type of a variable; the data type is assumed automatically when you assign a value. A value can be assigned to a variable using a SET command as shown below: SET @server_type:='MySQL'; When the above command is executed, the value MySQL is assigned to the variable called @server_type. Now you can use this variable in a later part of the code. If you want to display the value, you can use a SELECT statement: SELECT @server_type; The result is MySQL. Once the value is assigned, it remains for the entire session until changed by later statements. So, unlike SQL Server, you do not need to assign it as part of the execution code every time (in SQL Server, variables are execution scoped and dropped after the execution). You can give it a column name as below: SELECT @server_type AS server_type; You can also use a SELECT statement to declare and select the values for a variable: SELECT @message:='Welcome to MySQL' AS MESSAGE; The result is Message -------- Welcome to MySQL You can make use of variables to apply many kinds of logic effectively. One useful method is to generate a row number for each row, as shown in this post: MySQL – Generating Row Number for Each Row using Variable. Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: MySQL, PostADay, SQL, SQL Authority, SQL Query, SQL Tips and Tricks, T SQL
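
    As a small, hedged sketch of the row-number technique mentioned above (MySQL syntax; the employees table name is purely illustrative):

        -- Initialize the counter, then increment it once per returned row
        SET @row_number := 0;

        SELECT (@row_number := @row_number + 1) AS row_num,
               name
        FROM   employees
        ORDER  BY name;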

    Read the article
