Search Results

Search found 23955 results on 959 pages for 'insert query'.


  • SQL SERVER – Demo Script – Keeping CPU Busy

    - by pinaldave
    Recently I faced a very interesting situation. During a presentation at an event, I was asked a very common question: “My CPU is very high all the time, how can I reduce it?” This is an interesting question, and there are many answers – a single blog post is not enough to do the subject justice. I presented a few scenarios to the person who asked, and afterwards the audience member came to me with a few detailed questions. To answer him, I quickly wrote a query which simulates high CPU. Here is the script I wrote, which increased CPU from 10% to 80%. I am wondering whether there are similar scripts which can simulate high CPU usage – if you have one, share it with me and I will publish it with due credit. Here is my script:

      USE AdventureWorks
      GO
      DECLARE @Flag INT
      SET @Flag = 1
      WHILE (@Flag < 1000)
      BEGIN
      ALTER INDEX [PK_SalesOrderDetail_SalesOrderID_SalesOrderDetailID]
      ON [Sales].[SalesOrderDetail] REBUILD
      SET @Flag = @Flag + 1
      END
      GO

    Reference: Pinal Dave (http://blog.SQLAuthority.com) Filed under: Pinal Dave, PostADay, SQL, SQL Authority, SQL Query, SQL Scripts, SQL Server, SQL Tips and Tricks, T SQL, Technology
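    The rebuild loop above drives its load through the storage engine. For a purely compute-bound alternative, a tight arithmetic loop is a minimal sketch – note that a single loop keeps only one scheduler busy, so run copies in several parallel sessions to load multiple cores:

      DECLARE @i BIGINT = 0;
      DECLARE @x FLOAT;
      WHILE @i < 50000000
      BEGIN
          SET @x = SQRT(@i); -- arbitrary computation; the point is simply to burn CPU cycles
          SET @i = @i + 1;
      END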

    Read the article

  • MySQL – Introduction to User Defined Variables

    - by Pinal Dave
    MySQL supports user-defined variables to hold data that can be used in a later part of your query. You can save a value into a variable and access it later. Unlike other RDBMSs, you do not need to declare the data type of a variable; the data type is inferred when you assign a value. A value can be assigned to a variable using the SET command as shown below:

      SET @server_type:='MySQL';

    When the above command is executed, the value MySQL is assigned to the variable called @server_type. Now you can use this variable in a later part of the code. If you want to display the value, you can use a SELECT statement:

      SELECT @server_type;

    The result is MySQL. Once the value is assigned, it remains for the entire session until changed by a later statement. So unlike SQL Server, you do not need to make the assignment part of the execution code every time (in SQL Server, variables are execution-scoped and dropped after execution). You can give a column name as below:

      SELECT @server_type AS server_type;

    You can also use a SELECT statement to both assign and select the value of a variable:

      SELECT @message:='Welcome to MySQL' AS MESSAGE;

    The result is

      Message
      --------
      Welcome to MySQL

    You can make use of variables to implement many kinds of logic. One useful technique is generating a row number for each row, as shown in the post MySQL – Generating Row Number for Each Row using Variable and in the sketch below. Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: MySQL, PostADay, SQL, SQL Authority, SQL Query, SQL Tips and Tricks, T SQL
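    A minimal sketch of that row-number technique (MySQL dialect; the table name mysql_testing is borrowed from a later post and stands in for any table):

      SET @rownum := 0;
      SELECT @rownum := @rownum + 1 AS row_number,
             db_names
      FROM mysql_testing;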

    Read the article

  • SQL SERVER – 2000 – DBCC SQLPERF(waitstats) – Wait Type – Day 24 of 28

    - by pinaldave
    I have received many comments, emails, suggestions and much motivation for my current series on wait types and wait statistics. One of the questions I keep receiving almost every other day is whether all of the discussion I have presented so far also applies to SQL Server 2000. Additionally, I receive another question asking whether wait statistics matter in SQL Server 2000 at all, and if they do, how to measure wait types there. In SQL Server 2000, you can run the following command to get a list of all the wait types:

      DBCC SQLPERF(waitstats)

    The command above also works in SQL Server 2005/2008/R2 because of backward compatibility. As you might have noticed, I have been discussing everything with SQL Server 2005+ in mind and have given little consideration to SQL Server 2000. However, I am pretty sure that most of the suggestions I have provided are applicable to SQL Server 2000. The wait types I have been discussing mostly exist in SQL Server 2000 as well; the difference is that the 2000 version lacks the wait types added in more recent releases, but the technique is still worth using. Wait types are essential for measuring performance bottlenecks; needless to say, I am a big fan of using them to identify performance bottlenecks. Please read all the posts in the Wait Types and Queue series. Note: The information presented here is from my experience and there is no way that I claim it to be accurate. I suggest reading Books Online for further clarification. All the discussion of Wait Stats in this blog is generic and varies from system to system. It is recommended that you test this on a development server before implementing it on a production server. Reference: Pinal Dave (http://blog.SQLAuthority.com) Filed under: Pinal Dave, PostADay, SQL, SQL Authority, SQL Query, SQL Scripts, SQL Server, SQL Tips and Tricks, SQL Wait Stats, SQL Wait Types, T SQL, Technology
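    For measuring waits over a defined interval, the same command also accepts a CLEAR argument to reset the counters – a small sketch, worth verifying on your own version:

      -- SQL Server 2000 syntax
      DBCC SQLPERF(waitstats, CLEAR)
      -- SQL Server 2005 and later, addressing the DMV by name
      DBCC SQLPERF('sys.dm_os_wait_stats', CLEAR)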

    Read the article

  • NLP - Queries using semantic wildcards in full text searching, maybe with Lucene?

    - by Zsolt
    Let's say I have a big corpus (for example in English or an arbitrary language), and I want to perform some semantic search on it. For example I have the query: "Be careful: [art] armada of [sg] is coming to [do sg]!" And the corpus contains the following sentence: "Be careful: an armada of alien ships is coming to destroy our planet!" As you can see, my query string can contain "semantic placeholders", such as: [art] – a placeholder for articles (for example a / an in English); [sg], [do sg] – placeholders for NPs and VPs (subjects and predicates). I would like to develop a library capable of handling these queries efficiently. I suspect that some kind of POS-tagging would be necessary for parsing the text, but because I don't want to fully reimplement an existing full-text search engine to make this work, I'm wondering how I could integrate this behaviour into a search engine like Lucene. I know there are SpanQueries which can behave similarly in some cases, but as far as I can see, Lucene doesn't do any semantic processing of stored text. Is it possible to implement behavior like this? Or do I have to write my own search engine?

    Read the article

  • SQL SERVER – Order By Numeric Values Formatted as String

    - by pinaldave
    When I was writing this blog post I had a hard time coming up with its title, so I did my best. Here is the reason why. I wrote a blog post earlier, SQL SERVER – Find First Non-Numeric Character from String, and one of the questions asked was how that post could be useful in a real-life scenario. This blog post is the answer. Let us first see the problem: we have a table with a column containing alphanumeric data. The data always begins with an integer and ends with a string. The business requirement is to order the data based on the leading integer of the alphanumeric value. The problem is that no matter how we use ORDER BY, the result is not produced as expected. Let us understand this with an example. Prepare the sample data:

      -- How to find the first non-numeric character
      USE tempdb
      GO
      CREATE TABLE MyTable (ID INT, Col1 VARCHAR(100))
      GO
      INSERT INTO MyTable (ID, Col1)
      SELECT 1, '1one'
      UNION ALL
      SELECT 2, '11eleven'
      UNION ALL
      SELECT 3, '2two'
      UNION ALL
      SELECT 4, '22twentytwo'
      UNION ALL
      SELECT 5, '111oneeleven'
      GO
      -- Select Data
      SELECT * FROM MyTable
      GO

    The above query returns the rows in insert order. Now let us use ORDER BY Col1 and observe the result along with the original SELECT:

      -- Select Data
      SELECT * FROM MyTable
      GO
      -- Select Data
      SELECT * FROM MyTable
      ORDER BY Col1
      GO

    The result is not as expected, because the integer prefixes sort as strings. Here is a good example of how we can use PATINDEX:

      -- Use of PATINDEX
      SELECT ID,
             LEFT(Col1, PATINDEX('%[^0-9]%', Col1) - 1) 'Numeric Character',
             Col1 'Original Character'
      FROM MyTable
      ORDER BY LEFT(Col1, PATINDEX('%[^0-9]%', Col1) - 1)
      GO

    We use PATINDEX to identify the length of the digit portion of the alphanumeric string (remember: our strings always begin with an integer – this will not work in any other scenario). The LEFT function then extracts the integer portion from the alphanumeric string so the data can be ordered by it. Note that LEFT returns a string, so for a strictly numeric ordering you may also want to CAST the extracted portion to INT. You can easily clean up the script by dropping the table:

      DROP TABLE MyTable
      GO

    Here is the complete script so you can easily refer to it:

      -- How to find the first non-numeric character
      USE tempdb
      GO
      CREATE TABLE MyTable (ID INT, Col1 VARCHAR(100))
      GO
      INSERT INTO MyTable (ID, Col1)
      SELECT 1, '1one'
      UNION ALL
      SELECT 2, '11eleven'
      UNION ALL
      SELECT 3, '2two'
      UNION ALL
      SELECT 4, '22twentytwo'
      UNION ALL
      SELECT 5, '111oneeleven'
      GO
      -- Select Data
      SELECT * FROM MyTable
      GO
      -- Select Data
      SELECT * FROM MyTable
      ORDER BY Col1
      GO
      -- Use of PATINDEX
      SELECT ID,
             LEFT(Col1, PATINDEX('%[^0-9]%', Col1) - 1) 'Numeric Character',
             Col1 'Original Character'
      FROM MyTable
      ORDER BY LEFT(Col1, PATINDEX('%[^0-9]%', Col1) - 1)
      GO
      DROP TABLE MyTable
      GO

    Well, isn’t it an interesting solution? Any suggestions for a better solution? Additionally, any suggestions for changing the title of this blog post? Reference: Pinal Dave (http://blog.SQLAuthority.com) Filed under: PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL String, SQL Tips and Tricks, T SQL, Technology

    Read the article

  • SQL SERVER – Contest – Summary of 5 Day and Additional Information

    - by pinaldave
    I am overwhelmed by the response to the contest we ran earlier this week. Every day we are giving away USD 198 worth of giveaways to readers in the USA and India. If you have not participated so far, I encourage you to participate today. Here are the links to our 5-day contest; the winners will be announced on August 20th.

    Query Hint – Contest Win Joes 2 Pros Combo (USD 198) – Day 1 of 5
    Identity Fields – Contest Win Joes 2 Pros Combo (USD 198) – Day 2 of 5
    Clustered Index and Primary Key – Contest Win Joes 2 Pros Combo (USD 198) – Day 3 of 5
    Expanding Views – Contest Win Joes 2 Pros Combo (USD 198) – Day 4 of 5
    Understanding XML – Contest Win Joes 2 Pros Combo (USD 198) – Day 5 of 5

    Here are a few important notes related to the contest. A few people asked what to do if they forgot to mention their country in their response: please resubmit with the correct data; we will only consider the latest entry from each person. What if you are not from the USA or India? Participate in the Bonus Quiz: leave a comment on each of the questions above with your favorite article and you may be eligible to win something cool. What if you win two of the 5 contests? In that case, we will send you one set of the Combo Kit and an Amazon Gift Card of USD 100 for the other contest you won. Can you exchange your kit for other stuff? No – if you do not want the kit, give it to someone who needs it. By the way, I strongly suggest you participate in the Bonus Quiz; there is something cool for everyone! Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: Database, DBA, Joes 2 Pros, PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, T SQL, Technology

    Read the article

  • SQLAuthority News – Scaling Up Your Data Warehouse with SQL Server 2008 R2

    - by pinaldave
    Data warehouses are supposed to contain huge amounts of data from the beginning. However, there are cases when even "big" is not big enough, and every data warehouse admin will agree that they have faced situations where they needed to scale up their data warehouse. Microsoft has released a white paper discussing exactly this. Here is the abstract from the official Microsoft site: SQL Server 2008 introduced many new functional and performance improvements for data warehousing, and SQL Server 2008 R2 includes all these and more. This paper discusses how to use SQL Server 2008 R2 to get great performance as your data warehouse scales up. We present lessons learned during extensive internal data warehouse testing on a 64-core HP Integrity Superdome during the development of the SQL Server 2008 release, and via production experience with large-scale SQL Server customers. Our testing indicates that many customers can expect their performance to nearly double on the same hardware they are currently using, merely by upgrading to SQL Server 2008 R2 from SQL Server 2005 or earlier, and compressing their fact tables. We cover techniques to improve manageability and performance at high-scale, encompassing data loading (extract, transform, load), query processing, partitioning, index maintenance, indexed view (aggregate) management, and backup and restore. Scaling Up Your Data Warehouse with SQL Server 2008 R2 Reference: Pinal Dave (http://blog.SQLAuthority.com) Filed under: PostADay, SQL, SQL Authority, SQL Documentation, SQL Download, SQL Query, SQL Server, SQL Tips and Tricks, T SQL, Technology
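    The fact-table compression the abstract mentions is a one-statement change. A minimal sketch, assuming a hypothetical fact table named dbo.FactSales (page compression requires SQL Server 2008 or later, Enterprise Edition):

      -- estimate the space savings first
      EXEC sp_estimate_data_compression_savings 'dbo', 'FactSales', NULL, NULL, 'PAGE';
      -- then rebuild the table with page-level compression
      ALTER TABLE dbo.FactSales REBUILD WITH (DATA_COMPRESSION = PAGE);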

    Read the article

  • SQL SERVER – Using expressor Composite Types to Enforce Business Rules

    - by pinaldave
    One of the features that distinguishes the expressor Data Integration Platform from other products in the data integration space is its concept of composite types, which provide an effective and easily reusable way to clearly define the structure and characteristics of data within your application. An important feature of the composite type approach is that it allows you to easily adjust the content of a record to its ultimate purpose. For example, a record used to update a row in a database table is easily defined to include only the minimum set of columns: a value for the key column and values for only those columns that need to be updated.

    Much like a class in higher-level programming languages, you can also use the composite type as a way to enforce business rules on your data by encapsulating a datum's name, data type, and constraints (for example, maximum, minimum, or acceptable values) as a single entity, which ensures that your data cannot assume an invalid value. To what extent you use this functionality is a decision you make when designing your application; the expressor design paradigm does not force this approach on you.

    Let's take a look at how these features are used. Suppose you want to create a group of applications that maintain the employee table in your human resources database. Your table might have a structure similar to the HumanResources.Employee table in the AdventureWorks database. This table includes two columns, EmployeeID and rowguid, that are maintained by the relational database management system; you cannot provide values for these columns when inserting new rows into the table. Additionally, there are columns such as VacationHours and SickLeaveHours that you might choose to update for all employees on a monthly basis, which justifies creating a dedicated application. By creating distinct composite types for the read, insert and update operations against this table, you can more easily manage this table's content.

    When developing this application within expressor Studio, your first task is to create a schema artifact for the database table. This process is completely wizard-driven, only requiring that you select the desired database schema and table. The resulting schema artifact defines the mapping of result set records to a record within the expressor data integration application. The structure of the record within the expressor application is a composite type that is given the default name CompositeType1. As you can see in the following figure, all columns from the table are included in the result set and mapped to an identically named attribute in the default composite type.

    If you are developing an application that needs to read this table, perhaps to prepare a year-end report of employees by department, you would probably not be interested in the data in the rowguid and ModifiedDate columns. A typical approach would be to drop this unwanted data in a downstream operator, but an alternative composite type provides a better approach, in which the unwanted data never enters your application. While working in expressor Studio's schema editor, simply create a second composite type within the same schema artifact, which you could name ReadTable, and remove the attributes corresponding to the unwanted columns.

    The value of an alternative composite type is even more apparent when you want to insert into or update the table. In the composite type used to insert rows, remove the attributes corresponding to the EmployeeID primary key and rowguid uniqueidentifier columns, since these values are provided by the relational database management system. And to update just the VacationHours and SickLeaveHours columns, use a composite type that includes only the attributes corresponding to the EmployeeID, VacationHours, SickLeaveHours and ModifiedDate columns. By specifying this schema artifact and composite type in a Write Table operator, your upstream application need only deal with the four required attributes, and there is no risk of unintentionally overwriting a value in a column that does not need to be updated.

    Now, what about the option to use the composite type to enforce business rules? If you review the composition of the default composite type CompositeType1, you will note that the constraints defined for many of the attributes mirror the table column specifications. For example, the maximum number of characters in the NationalIDNumber, LoginID and Title attributes is equivalent to the maximum width of the target column, and the size of the MaritalStatus and Gender attributes is limited to a single character, as required by the table column definition. If your application code leads to a violation of these constraints, an error will be raised. The expressor design paradigm then allows you to handle the error in a way suitable for your application: for example, a string value could be truncated or a numeric value could be rounded.

    Moreover, you have the option of specifying additional constraints that support business rules unrelated to the table definition. Let's assume that the only acceptable values for marital status are S, M, and D. Within the schema editor, double-click on the MaritalStatus attribute to open the Edit Attribute window, then click the Allowed Values checkbox and enter the acceptable values into the Constraint Value text box. The schema editor is updated accordingly.

    There is one more option that the expressor semantic type paradigm supports. Since the MaritalStatus attribute now clearly specifies how this type of information should be represented (a single character limited to S, M or D), you can convert this attribute definition into a shared type, which allows you to quickly incorporate the definition into another composite type or into the description of an output record from a transform operator. Again, double-click on the MaritalStatus attribute and, in the Edit Attribute window, click Convert, which opens the Share Local Semantic Type window that you use to name this shared type. There is no requirement that you give the shared type the same name as the attribute from which it was derived; you should supply a name that makes it obvious what the shared type represents.

    In this posting, I've given an overview of the expressor semantic type paradigm and shown how it can be used to make your application development process more productive. The beauty of this feature is that you choose when and to what extent you utilize the functionality, but I'm certain that if you opt to follow this approach your efforts will become more efficient and your work will progress more quickly. As always, I encourage you to download and evaluate expressor Studio for your current and future data integration needs.
Reference: Pinal Dave (http://blog.SQLAuthority.com) Filed under: CodeProject, Pinal Dave, PostADay, SQL, SQL Authority, SQL Documentation, SQL Query, SQL Server, SQL Tips and Tricks, SQLServer, T SQL, Technology

    Read the article

  • SQLAuthority News – Whitepaper Download – Using Star Join and Few-Outer-Row Optimizations to Improve Data Warehousing Queries

    - by pinaldave
    Database sizes are growing every day, and many organizations nowadays have more than a terabyte of data in their systems, so performance is always part of the picture. Microsoft is paying close attention to this and is focusing on improving performance for data warehousing. Microsoft has recently released a white paper on this data warehousing performance tuning subject. Here is the abstract from the official site: In this white paper we discuss two of the new features introduced in SQL Server 2008, Star Join and Few-Outer-Row optimizations. These two features are in SQL Server 2008 R2 as well. We test the performance of SQL Server 2008 on a set of complex data warehouse queries designed to highlight the effect of these two features and observed a significant performance gain over SQL Server 2005 (without these two features). The results observed also apply to SQL Server 2008 R2. On average, about 75 percent of the query execution time has been reduced, compared to SQL Server 2005. We also include data that shows a reduction in the number of rows processed and improved balance in parallel queries, both of which highlight the important role the Star Join and Few Outer-Row features played. I encourage everyone interested in data warehousing to read it and see what tricks they can learn. Using Star Join and Few-Outer-Row Optimizations to Improve Data Warehousing Queries Reference: Pinal Dave (http://blog.SQLAuthority.com) Filed under: PostADay, SQL, SQL Authority, SQL Documentation, SQL Download, SQL Query, SQL Server, SQL Tips and Tricks, SQLAuthority News, T SQL, Technology

    Read the article

  • SQL SERVER – A Cool Trick – Restoring the Default SQL Server Management Studio – SSMS

    - by pinaldave
    “I do not know where my windows went!” “I just closed my Object Explorer and now I cannot find it.” “How do I get my original window layout back in SQL Server Management Studio?” “How do I get the window that was on the left side back again?” For the last 2-3 years, every single day I have received more than 5 emails about SSMS and its layout. It is very common for beginners to get confused when they change SQL Server Management Studio’s window layout: they change it and then cannot get the original layout back. Other people never change the layout in their whole career, which leads to an uncomfortable feeling when they work on someone else’s computer where the windows are placed differently. Today’s blog post is dedicated to all the beginners in SQL Server. It is extremely simple to reset SSMS to its default layout, which involves 2 major things: 1) Object Explorer on the left side and 2) the query window on the right side (80% of the screen estate). Personally I am so used to this that if anything changes, I do not enjoy working in the environment. Well, the solution to reset the SSMS layout is very simple – one can do it in a split second. To restore the default configuration, on the Window menu, click Reset Window Layout. Have you ever used this feature? Do you feel uncomfortable when the SSMS layout is not in its default state? How do you address this situation? Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Server Management Studio, SQL Tips and Tricks, T SQL, Technology

    Read the article

  • SQL SERVER – Weekly Series – Memory Lane – #007

    - by pinaldave
    Here is a list of selected articles from SQLAuthority.com across all these years. Instead of listing every article, I have selected a few of my favorites and listed them here with additional notes. Let me know which of the following is your favorite article from memory lane.

    2006
    Find Stored Procedure Related to Table in Database – Search in All Stored Procedures
    In 2006 I wrote a small script which helps users find all the stored procedures (SPs) related to one or more specific tables. This was quite a popular script; however, in SQL Server 2012 the same can be achieved using the new DMV sys.sql_expression_dependencies. I recently blogged about it in Find Referenced or Referencing Object in SQL Server using sys.sql_expression_dependencies.

    2007
    SQL SERVER – Versions, CodeNames, Year of Release
    1993 – SQL Server 4.21 for Windows NT
    1995 – SQL Server 6.0, codenamed SQL95
    1996 – SQL Server 6.5, codenamed Hydra
    1999 – SQL Server 7.0, codenamed Sphinx
    1999 – SQL Server 7.0 OLAP, codenamed Plato
    2000 – SQL Server 2000 32-bit, codenamed Shiloh (version 8.0)
    2003 – SQL Server 2000 64-bit, codenamed Liberty
    2005 – SQL Server 2005, codenamed Yukon (version 9.0)
    2008 – SQL Server 2008, codenamed Katmai (version 10.0)
    2012 – SQL Server 2012, codenamed Denali (version 11.0)
    Search String in Stored Procedure
    Searching for a string in stored procedures is one of the most frequent tasks developers do. They might be searching for a table, a view or any other detail. I have written a script to do the same in SQL Server 2000 and SQL Server 2005. This blog post is worth bookmarking. There is an alternative way to do the same as well – here is the example.

    2008
    SQL SERVER – Refresh Database Using T-SQL
    NO! Some questions have a single answer: NO! You may want to read the question in the original blog post. I had a great time saying No!
    SQL SERVER – Delete Backup History – Cleanup Backup History
    SQL Server stores the history of every backup taken, forever, in the msdb database. Often the older history is no longer required. The following stored procedure can be executed with a parameter specifying how many days of history to keep. In the example, 30 is passed to keep a month of history.

    2009
    Stored Procedures are Compiled on First Run – SP Taking Longer to Run First Time
    Are stored procedures pre-compiled? Why does a stored procedure take a long time to run the first time? These are very common questions often discussed by developers and DBAs. There is an absolutely definite answer, but the question has been discussed forever. There is a misconception that stored procedures are pre-compiled. They are not: they are compiled only during the first run, and for every subsequent run they are, for sure, pre-compiled. Read the entire article for an example and demonstration.
    Removing Key Lookup – Seek Predicate – Predicate – An Interesting Observation Related to Datatypes
    This is one of the most important performance tuning lessons on my blog. I suggest you spend time this weekend reading it, and let me know what you think about the concepts demonstrated in the four-part series. Part 1 | Part 2 | Part 3 | Part 4. The Seek Predicate is the operation that describes the b-tree portion of the Seek; the Predicate is the operation that describes the additional filter using non-key columns. Based on this description, it is clear that a Seek Predicate is better than a Predicate, as it searches the index, whereas with a Predicate the search is on non-key columns – which implies that the search is on the data in the page files itself.
    Policy Based Management – Create, Evaluate and Fix Policies
    This article covers one of the most spectacular features of SQL Server – policy-based management – and how configuring SQL Server with a policy-based management architecture can make a powerful difference. Policy-based management is loaded with advantages: it can help you implement policies for reliable configuration of the system, and it provides additional administrative assistance to DBAs, helping them effortlessly manage various tasks of SQL Server across the enterprise.

    2010
    Recycle Error Log – Create New Log File without Server Restart
    Once I observed a DBA restarting SQL Server when he needed a new error log file. This was both funny and sad at the same time. There is no need to restart the server to create a new log file or recycle the log file: you can run sp_cycle_errorlog and achieve the same result.
    Get Database Backup History for a Single Database
    A simple but effective script!
    Reducing CXPACKET Wait Stats for High Transactional Database
    The subject is very complex and I have done my best to simplify it. In simpler words: when a parallel plan is created for a SQL query, there are multiple threads for the single query, each dealing with a different set of the data (or rows). For various reasons, one or more of the threads lag behind, creating the CXPACKET wait stat. Threads that finish first have to wait for the slower thread to finish, and the wait recorded by a completed thread is called the CXPACKET wait stat.
    Information Related to DATETIME and DATETIME2
    There is quite a lot of confusion around DATETIME and DATETIME2; DATETIME2 is also one of the most underutilized datatypes in SQL Server. In this blog post I wrote a follow-up to my earlier datetime series, clarifying a few concepts related to datetime: Difference Between GETDATE and SYSDATETIME; Difference Between DATETIME and DATETIME2 – WITH GETDATE; Difference Between DATETIME and DATETIME2.

    2011
    Introduction to CUME_DIST – Analytic Functions Introduced in SQL Server 2012
    SQL Server 2012 introduces the new analytic function CUME_DIST(), which provides the cumulative distribution value. It is difficult to explain in words, so I attempt a small example. Instead of creating a new table, I use the AdventureWorks sample database, as most developers use it for experiments.
    Introduction to FIRST_VALUE and LAST_VALUE – Analytic Functions Introduced in SQL Server 2012
    SQL Server 2012 introduces the new analytic functions FIRST_VALUE() and LAST_VALUE(), which return the first and last value from a list. Again, a brief example explains it better than words, using the AdventureWorks sample database.
    OVER Clause with FIRST_VALUE and LAST_VALUE – Analytic Functions Introduced in SQL Server 2012 – ROWS BETWEEN UNBOUNDED PRECEDING AND UNBOUNDED FOLLOWING
    “Don’t you think there is a bug in your first example, where FIRST_VALUE remains the same but LAST_VALUE changes on every line? I think LAST_VALUE should be the highest value in the window or set of results.”
    Puzzle – Functions FIRST_VALUE and LAST_VALUE with OVER Clause and ORDER BY
    You can see that rows 2, 3, 4 and 5 have the same SalesOrderID = 43667. FIRST_VALUE is 78 and LAST_VALUE is 77. If these functions worked on maximum and minimum values, they should have given 77 and 80 respectively, instead of 78 and 77. Also, the FIRST_VALUE (78) is greater than the LAST_VALUE (77). Why? Explain in detail.
    Introduction to LEAD and LAG – Analytic Functions Introduced in SQL Server 2012
    SQL Server 2012 introduces the new analytic functions LEAD() and LAG(). These functions access data from a subsequent row (LEAD) or a previous row (LAG) in the same result set without the use of a self-join – see the sketch below. As before, the example uses the AdventureWorks sample database.
    A Real Story of a Book Going ‘Out of Stock’, and a 25% Discount Story
    Our book went out of stock within 48 hours of arriving in stock! We got a call from the online store requesting more copies within 12 hours, but we had printed only as many as we had sent them – there were no extra copies. We finally talked to the printer to get more copies; however, due to festivals and holidays, the copies could not be shipped to the online retailer for two days. We knew for sure that they would be out of the book for 48 hours. This is the story of how we overcame that situation!
    Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: Memory Lane, PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, T SQL, Technology
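    As a quick taste of the LEAD and LAG functions mentioned above – a minimal sketch against the AdventureWorks sample database (requires SQL Server 2012 or later):

      SELECT SalesOrderID,
             OrderQty,
             LAG(OrderQty)  OVER (ORDER BY SalesOrderDetailID) AS PreviousQty,
             LEAD(OrderQty) OVER (ORDER BY SalesOrderDetailID) AS NextQty
      FROM Sales.SalesOrderDetail;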

    Read the article

  • SQL SERVER – Load Generator – Free Tool From CodePlex

    - by pinaldave
    One of the most common questions I receive is whether there is any tool available to generate load on SQL Server. Absolutely – there is a fabulous free tool on CodePlex for generating load on SQL Server. It was released in 2008 but is still extremely relevant and works fabulously. CodePlex is a project initiated by Microsoft for hosting open source software. The best part of this SQL Server Load Generator is that users can run multiple simultaneous queries against SQL Server using different login accounts and different application names. The interface of the tool is extremely easy to use and very intuitive. One thing I felt needed improvement was the default configuration: every time I added a query, the default settings showed up and I had to change them manually. However, when I went to Menu >> Tools >> Options I was really happy, as it has options to change every single default. Here one can set a default username, password and database name, as well as various configuration settings; application logging is also possible through the options. A couple of other useful details I noticed were the button to reset counters and the status bar showing Total Threads, Completed Queries and Failed Queries. I use this tool frequently for my load testing. What tool do you use as a SQL Server load generator? Download SQL Load Generator Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, SQL Utility, T SQL, Technology

    Read the article

  • SQL SERVER – Resetting Identity Values for All Tables

    - by pinaldave
    Sometimes an email requesting help generates more questions than motivation to answer them. Let us go over one such example. I have converted the complete email conversation to chat format for easy consumption; I almost got a headache after around 20 email exchanges. I am sure if you read it you will feel my pain.

    DBA: “I deleted all of the data from my database and now it contains the table structure only. However, when I tried to insert new data in my tables I noticed that my identity values start from the same number where they were before I deleted the data.”
    Pinal: “How did you delete the data?”
    DBA: “By running DELETE in a loop.”
    Pinal: “What was the need for that?”
    DBA: “It was my development server and I needed to repopulate the database.”
    Pinal: “Oh, so why did you not use TRUNCATE? It would have reset the identity of your table to the original seed when the data was deleted. This works only if you want the identity reset to its original value; if you want to set any other value, this will not work.”
    DBA: (silence for 2 days)
    DBA: “I did not realize that. Meanwhile I scripted out every table’s schema, dropped the tables and re-created them.”
    Pinal: “Oh no, that is an extremely long and incorrect way to do it. A very bad solution.”
    DBA: “I understand. Should I just take a backup of the database before I insert the data, and when I need to, restore from that original backup? This way I will have identities beginning with 1.”
    Pinal: “This is going totally downhill. It is wrong on multiple levels. Did you even read my earlier email about TRUNCATE?”
    DBA: “Yeah. I found it in the spam folder.”
    Pinal: (I decided to stay silent)
    DBA: (after 2 days) “Can you provide me a script to reseed the identity for all of my tables to 1, without asking further questions?”
    Pinal:

      USE YourDatabaseName; -- substitute the name of your database
      EXEC sp_MSForEachTable '
      IF OBJECTPROPERTY(object_id(''?''), ''TableHasIdentity'') = 1
      DBCC CHECKIDENT (''?'', RESEED, 1)'
      GO

    Our conversation ended here. If you jumped directly to this script, I encourage you to read the conversation through once. There is a difference between reseeding an identity value to 1 and reseeding it to its original value – I will write another blog post on this subject in the future. Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, T SQL, Technology
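    A minimal sketch of the TRUNCATE point from the conversation – DELETE preserves the identity counter while TRUNCATE resets it to the original seed (the table name is hypothetical):

      CREATE TABLE dbo.IdentityDemo (ID INT IDENTITY(1,1), Val VARCHAR(10));
      INSERT INTO dbo.IdentityDemo (Val) VALUES ('a'), ('b');
      DELETE FROM dbo.IdentityDemo;       -- the next inserted ID will be 3
      TRUNCATE TABLE dbo.IdentityDemo;    -- the next inserted ID will be 1 again
      DROP TABLE dbo.IdentityDemo;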

    Read the article

  • SQL SERVER – Validating Unique Columnname Across Whole Database

    - by pinaldave
    I sometimes come across very strange requirements, and often I do not receive a proper explanation for them. Here is one of those examples.

    Asker: “Our business requirement is that when we add a new column, we want its name to be unique across the current database.”
    Pinal: “Why do you have such a requirement?”
    Asker: “Do you know the solution?”
    Pinal: “Sure, I can come up with the answer, but it would help me find an optimal answer if I knew the business need.”
    Asker: “Thanks – what would the answer be in that case?”
    Pinal: “Honestly, I am just curious about the reason why you need your column names to be unique across the database.”
    (Silence)
    Pinal: “Alright – here is the answer. I guess you do not want to tell me the reason.”

    Option 1: Check if a column exists anywhere in the current database

      IF EXISTS (SELECT * FROM sys.columns
                 WHERE Name = N'NameofColumn')
      BEGIN
      SELECT 'Column Exists'
      -- add other logic
      END
      ELSE
      BEGIN
      SELECT 'Column Does NOT Exist'
      -- add other logic
      END

    Option 2: Check if a column exists in a specific table of the current database

      IF EXISTS (SELECT * FROM sys.columns
                 WHERE Name = N'NameofColumn'
                 AND OBJECT_ID = OBJECT_ID(N'tableName'))
      BEGIN
      SELECT 'Column Exists'
      -- add other logic
      END
      ELSE
      BEGIN
      SELECT 'Column Does NOT Exist'
      -- add other logic
      END

    I guess the user did not want to share the reason why he required column names to be unique across the database. Here is my question back to you – have you ever faced a situation where you needed unique column names across a database? If not, can you guess what the reason for this kind of requirement could be? Additional Reference: SQL SERVER – Query to Find Column From All Tables of Database Reference: Pinal Dave (http://blog.SQLAuthority.com) Filed under: PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL System Table, SQL Tips and Tricks, T SQL, Technology
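    And if such a requirement were real, one way to audit it – a sketch listing every column name that appears more than once among the user tables of the current database:

      SELECT c.name AS ColumnName, COUNT(*) AS Occurrences
      FROM sys.columns c
      INNER JOIN sys.tables t ON t.OBJECT_ID = c.OBJECT_ID
      GROUP BY c.name
      HAVING COUNT(*) > 1
      ORDER BY Occurrences DESC;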

    Read the article

  • MySQL – Export the Resultset to CSV file

    - by Pinal Dave
    In SQL Server, you can use the BCP command to export a result set to a CSV file. In MySQL too, you can export data from a table or result set as a CSV file in many ways. Here are two methods.

    Method 1: Use MySQL Workbench. If you are using Workbench as a querying tool, you can make use of its Export option in the result window. Run the following code in Workbench:

      SELECT db_names FROM mysql_testing;

    The result will be shown in the result window. There is an option called “File”: click on it and it will prompt you with a window to save the result set (the original post includes a screenshot showing how the File option is used). Choose the directory and type in the name of the file.

    Method 2: Use the INTO OUTFILE clause. You can do the export with a query as shown below:

      SELECT db_names FROM mysql_testing
      INTO OUTFILE 'C:/testing.csv'
      FIELDS ENCLOSED BY '"' TERMINATED BY ';' ESCAPED BY '"'
      LINES TERMINATED BY '\r\n';

    After the above code executes, you will find a file named testing.csv in the C drive of the server. Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: MySQL, PostADay, SQL, SQL Authority, SQL Query, SQL Tips and Tricks, T SQL Tagged: CSV

    Read the article

  • Ubuntu 12.04 MySQL 5.5 MyODBC 5.1 or 3.1 query hangs

    - by jorgearr
    I have been able to install Ubuntu 12.04 with LAMP and MySQL version 5.5.x. It works fine within Linux, and it allows me to connect via MyODBC from Windows Vista or Windows 7. I have configured networking access and have been able to connect from Windows Vista using PuTTY and other TCP connections like MySQL Query Browser. I have also configured (or disabled) the ufw firewall and AppArmor. The connection works fine until I query data from the tables: it lets me query small amounts of data like SELECT name FROM users LIMIT 20, but if I do a SELECT * FROM users, it goes into a never-ending loop. This happens even on tables with very few records – 5 or even fewer. The problem occurs only from Windows; I tried ssh from Linux Mint and it worked fine. I need to be able to work using MyODBC, either 3.51 or 5.1, since my client program is written in VB6 and connects to the MySQL server via TCP/IP. The server is an HP ProLiant ML350 G6 with a 64-bit Intel Xeon. I tried several Ubuntu Server versions (12.04 64-bit, 10.10 64-bit, 11.04 32-bit) and none has worked; I even tried CentOS 6.3 with the same result. As a reference, it works fine with another Ubuntu Server 6.x install on an HP ProLiant 150 and MySQL 5.0.x that is about 7 years old and never updated. Help please.

    Read the article

  • SQLAuthority News – Microsoft Whitepaper – AlwaysOn Solution Guide: Offloading Read-Only Workloads to Secondary Replicas

    - by pinaldave
    SQL Server 2012 has many interesting features, but the most talked-about one is AlwaysOn. Performance tuning is always a hot topic; I see lots of demand for it and lots of business around it. However, many times when people talk about performance tuning they think of it only as query tuning or server tuning. All are valid points, but a performance tuning expert usually understands the business workload and business logic before making suggestions. For example, if the expert analyzes the workload and realizes that there are plenty of reports and read-only queries on the server, they can consider alternate options. If read-only data is not required in real time, or slightly delayed data is acceptable, it makes sense to divide the workload: a secondary replica of the original data can serve all the read-only queries and reports in most cases where a large part of the workload does not depend on real-time data. The AlwaysOn feature in SQL Server 2012 fits this scenario very well and provides a solution for read-only workloads. Microsoft has recently published a white paper on exactly this subject. I recommend it to every SQL enthusiast who is going to implement a solution to offload read-only workloads to secondary replicas. Download white paper AlwaysOn Solution Guide: Offloading Read-Only Workloads to Secondary Replicas Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: PostADay, SQL, SQL Authority, SQL Backup and Restore, SQL Query, SQL Server, SQL Tips and Tricks, T SQL, Technology Tagged: AlwaysOn
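    For context, routing read-only work to a secondary is requested from the client side via the ApplicationIntent keyword – a hypothetical connection string (the listener and database names are placeholders):

      Server=tcp:MyAGListener,1433;Database=SalesDB;Integrated Security=SSPI;ApplicationIntent=ReadOnly;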

    Read the article

  • SQL SERVER – A Puzzle Part 2 – Fun with SEQUENCE in SQL Server 2012 – Guess the Next Value

    - by pinaldave
    Before continuing with this blog post, please read the first part of the SEQUENCE puzzle: A Puzzle – Fun with SEQUENCE in SQL Server 2012 – Guess the Next Value, where we played a simple guessing game about predicting the next value. The answer to that puzzle is shared in the comments on the blog post. Now here is the next puzzle, based on yesterday’s. First execute the script below; the only difference from yesterday’s script is that I have removed MINVALUE 1 from the syntax. Now guess what the next value will be, as requested in the query.

      USE TempDB
      GO
      -- Create sequence
      CREATE SEQUENCE dbo.SequenceID AS BIGINT
      START WITH 3
      INCREMENT BY 1
      MAXVALUE 5
      CYCLE
      NO CACHE;
      GO
      -- Following will return 3
      SELECT next value FOR dbo.SequenceID;
      -- Following will return 4
      SELECT next value FOR dbo.SequenceID;
      -- Following will return 5
      SELECT next value FOR dbo.SequenceID;
      -- Following will return which number?
      SELECT next value FOR dbo.SequenceID;
      -- Clean up
      DROP SEQUENCE dbo.SequenceID;
      GO

    The above script gave me the following result set: 3 is the starting value and 5 is the maximum value. Once the sequence reaches its maximum value, what happens – and WHY? I (kindly) suggest you try to answer this question without running the code in SQL Server 2012. I am very confident that irrespective of the SQL Server version you are running, you will learn something. I will follow up with the answer in the comments below. Recently my friend Vinod Kumar wrote an excellent blog post, SQL Server 2012: Using SEQUENCE – you can head over there to learn about sequences in detail. Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: PostADay, SQL, SQL Authority, SQL Puzzle, SQL Query, SQL Server, SQL Tips and Tricks, T SQL, Technology

    Read the article

  • SQL SERVER – Get Schema Name from Object ID using OBJECT_SCHEMA_NAME

    - by pinaldave
    Sometimes a simple solution has an even simpler alternative, but we often do not practice it because we do not see the value in it. Well, today’s blog post is about something I have not seen practiced much in code: we get so comfortable with one usage that we do not feel like switching how we query the data. I was going over the forums and noticed that in one place a user had used the following code to get the schema name from an object ID:

      USE AdventureWorks2012
      GO
      SELECT s.name AS SchemaName,
             t.name AS TableName,
             s.schema_id,
             t.OBJECT_ID
      FROM sys.tables t
      INNER JOIN sys.schemas s ON s.schema_id = t.schema_id
      WHERE t.name = OBJECT_NAME(46623209)
      GO

    Before I continue, let me say I see nothing wrong with this script. It is just fine, and it is one way to get the schema name from an object ID. However, I have been using the function OBJECT_SCHEMA_NAME for this. If I had to write the same code from scratch, I would write it as follows:

      SELECT OBJECT_SCHEMA_NAME(46623209) AS SchemaName,
             t.name AS TableName,
             t.schema_id,
             t.OBJECT_ID
      FROM sys.tables t
      WHERE t.name = OBJECT_NAME(46623209)
      GO

    Both of the above queries give you exactly the same result. If you remove the WHERE condition, they will return information for all the tables in the database. Now, which one is better? Honestly, it is not about one being better than the other – use the one you prefer. I prefer the second one, as it requires less typing. Let me ask you the same question: which method do you use to get the schema name, and why? Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL System Table, SQL Tips and Tricks, T SQL, Technology

    Read the article

  • Query a Log4Net-database

    - by pinhack
    So if you use Log4Net to log into a database (i.e. using the AdoNetAppender), how can you conveniently get an overview of what has happened? Well, you could try the following query (T-SQL):

      SELECT convert(varchar(10), LogDB.Date, 121) as Datum,
             LogDB.Level,
             LogDB.Logger,
             COUNT(LogDB.Logger) as Counter
      FROM Log4Net.dbo.Log as LogDB
      WHERE Level <> 'DEBUG'
        AND convert(varchar(10), LogDB.Date, 121) like '2010-03-25'
      GROUP BY convert(varchar(10), LogDB.Date, 121), LogDB.Level, LogDB.Logger
      ORDER BY Counter desc

    This query gives you the number of events per logger on a specified date – and it is easy to customize: just adjust the date and the level to your needs. Need a bit more information than that? How about this query:

      SELECT convert(varchar(10), LogDB.Date, 121) as Datum,
             LogDB.Level,
             LogDB.Message,
             LogDB.Logger,
             count(LogDB.Message) as Counter
      FROM Log4Net.dbo.Log as LogDB
      WHERE Level <> 'DEBUG'
        AND convert(varchar(10), LogDB.Date, 121) like '2010-03-25'
      GROUP BY convert(varchar(10), LogDB.Date, 121), LogDB.Level, LogDB.Message, LogDB.Logger
      ORDER BY Counter desc

    Similar to the first one, but including the message – which will return a much larger result set.

    Read the article

  • Database and query to store and retreive friend list [migrated]

    - by amr Kamboj
    I am developing a module for a website to save and retrieve a friend list. I am using Zend Framework, and for DB handling I am using Doctrine (ORM). There are two models: 1) users, which stores all the users, and 2) my_friends, which stores the friend list (a reference table with an M:M relation between users). The structure of my_friends is the following:

      id    user_id    friend_id    approved
      10    20         25           1
      10    21         25           1
      10    22         30           1
      10    25         30           1

    The Doctrine query to retrieve the friend list is the following:

      $friends = Doctrine_Query::create()->from('my_friends as mf')
          ->leftJoin('mf.users as friend')
          ->where("mf.user_id = 25")
          ->andWhere("mf.approved = 1");

    Suppose I am viewing user no. 25. With this query I only get user no. 30, whereas user no. 25 is also an approved friend of users no. 20 and 21. Please guide me: what should the query be to find all friends, and is there any need to change the DB structure?
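    Since a friendship row can hold the viewed user in either column, one possible approach (a sketch in plain SQL; the Doctrine query would mirror it) is to check both directions and take the union:

      SELECT friend_id AS friend
      FROM my_friends
      WHERE user_id = 25 AND approved = 1
      UNION
      SELECT user_id
      FROM my_friends
      WHERE friend_id = 25 AND approved = 1;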

    Read the article

  • Query something and return the reason if nothing has been found

    - by Daniel Hilgarth
    Assume I have a Query – as in CQS – that is supposed to return a single value. Let's assume that the case where no value is found is not exceptional, so no exception will be thrown in that case; instead, null is returned. However, if no value has been found, I need to act according to the reason why no value was found. Assuming that the Query knows the reason, how would I communicate it to the caller of the Query? A simple solution would be to return not the value directly but a container object that holds both the value and the reason:

      public class QueryResult<TValue, TReason>
      {
          public TValue Value { get; private set; }
          public TReason ReasonForNoValue { get; private set; }
      }

    But that feels clumsy, because if a value is found, ReasonForNoValue makes no sense, and if no value has been found, Value makes no sense. What other options do I have to communicate the reason? What do you think of one event per reason? For reference: this is going to be implemented in C#.

    Read the article

  • How to deal with transactions when creating a database connection for each query

    - by webnoob
    In line with this post here I am going to change my website to create a connection per query, to take advantage of .NET's connection pooling. With this in mind, I don't know how I should deal with transactions. At the moment I do something like this (pseudo code):

      GlobalTransaction = GlobalDBConnection.BeginTransaction();
      try
      {
          ExecSQL("insert into table ..");
          ExecSQL("update some_table ..");
          ....
          GlobalTransaction.Commit();
      }
      catch
      {
          GlobalTransaction.Rollback();
          throw;
      }

    ExecSQL would be like this:

      using (SqlCommand Command = GlobalDBConnection.CreateCommand())
      {
          Command.Connection = GlobalDBConnection;
          Command.Transaction = GlobalTransaction;
          Command.CommandText = SQLStr;
          Command.ExecuteNonQuery();
      }

    I'm not quite sure how to adapt this concept to deal with transactions if the connection is created within ExecSQL, because I would want the transaction to be shared between both the insert and update routines.

    Read the article

  • Dynamic Query Generation : suggestion for better approaches

    - by Gaurav Parmar
    I am currently designing functionality in my web application where a verified user can execute any query he wishes from a predefined set of queries, with the WHERE clause varying as per the user's choice. For example, table ABC contains the following template query called SecretReport: "Select def as FOO, ghi as BAR from MNO where ". SecretReport can have the parameters XYZ and ILP. XYZ can have the values 1 and 2, and ILP can have 3 and 4; so if the user chooses ILP=3, he will get the result of the following query on his screen: "Select def as FOO, ghi as BAR from MNO where ILP=3". The user is also allowed permutations of XYZ / ILP. My initial thought is that the user will be shown a list of report names, and each report will have parameters and corresponding values. But this approach, although technically simple, does not appear intuitive. I would like to extend this functionality to a more generic level, such that the user can choose a table and build a query based on his requirements. Of course we do not want the end user to take complete control of the DB, only of the tables and fields that are relevant to him. At present we define what is relevant in the code, but I want the admin to take over this functionality so that he can decide what is relevant and expose it to the user. On the user's side it should be intuitive what is available to him and what queries he can form. Please share your thoughts on the most user-friendly way to provide this feature to the end user.
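    If the backend is SQL Server, one hedged sketch of executing the chosen template safely is to keep the user's value out of the string and pass it as a parameter via sp_executesql, so the predefined template cannot be hijacked:

      DECLARE @sql NVARCHAR(200) =
          N'SELECT def AS FOO, ghi AS BAR FROM MNO WHERE ILP = @ILP';
      -- the user's chosen value is bound as a typed parameter, never concatenated
      EXEC sp_executesql @sql, N'@ILP INT', @ILP = 3;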

    Read the article

  • MySQL returning slow queries with result sets bigger than 30 rows

    - by josephs8
    Whenever I run a query that returns more than 30 rows, the time to get the data goes from less than a second to over 10 seconds. For example, a query returning 29 rows takes 0.1 seconds, while a query returning 31 rows takes 11.2 seconds. I am running MySQL on Windows Server 2008, dual core 2.6 GHz with 3 GB of memory. The machine doesn't run anything else; it does have an instance of MSSQL installed, but that does not get used at all. This only happens via PHP right now – if I run the query manually on the server, it returns in less than a second. The queries are not complicated either; I have included one below:

      SELECT Name, Value
      FROM `bis_co`.`departments`
      LIMIT 31

    What would be causing this issue and how can I correct it? Am I missing a configuration setting in MySQL or something? Thanks

    Read the article
