Search Results

Search found 5101 results on 205 pages for 'ssms 2005'.

  • Getting average from 3 columns in MS SQL

    - by barbarian
    I have a table with 3 smallint columns in MS SQL 2005: Table Ratings (ratin1 smallint, ratin2 smallint, ratin3 smallint). These columns can have values from 0 to 5. How do I select the average value of these fields, but only count fields whose value is greater than 0? So if the column values are 1, 3, 5, the average should be 3; if the values are 0, 3, 5, the average should be 4.
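
    One way to do this (a minimal sketch, assuming the columns are never NULL): since a zero contributes nothing to the sum, divide the plain sum by a count of the non-zero columns, and let NULLIF turn an all-zero row into NULL instead of a divide-by-zero error:

        SELECT CAST(ratin1 + ratin2 + ratin3 AS DECIMAL(5,2))
               / NULLIF(  CASE WHEN ratin1 > 0 THEN 1 ELSE 0 END
                        + CASE WHEN ratin2 > 0 THEN 1 ELSE 0 END
                        + CASE WHEN ratin3 > 0 THEN 1 ELSE 0 END, 0) AS avg_rating
        FROM Ratings;

    With values 1, 3, 5 this yields 9 / 3 = 3; with 0, 3, 5 it yields 8 / 2 = 4, matching the expected results.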

    Read the article

  • How to filter a duration in SQL Profiler

    - by user341460
    Hi friends, I need to run Profiler on SQL 2005 to capture the SPs which took longer than 1/10th of a second. Can you please let me know how I can do that? I don't see the option. Also, is the duration measured in seconds or minutes? I would appreciate your help. Thanks.
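
    A hedged pointer (an aside, not part of the original thread): in SQL Server 2005 the engine records Duration in microseconds, while the Profiler GUI displays milliseconds by default (switchable under Tools > Options). In the GUI the filter is added via Events Selection > Column Filters > Duration > Greater than or equal. For a server-side trace, a 1/10th-of-a-second filter would look something like this, assuming @TraceID came from an earlier sp_trace_create call:

        DECLARE @TraceID INT;    -- assumed to have been returned by sp_trace_create
        DECLARE @duration BIGINT;
        SET @duration = 100000;  -- 1/10th of a second = 100,000 microseconds
        -- column 13 = Duration, logical operator 0 = AND, comparison operator 4 = >=
        EXEC sp_trace_setfilter @TraceID, 13, 0, 4, @duration;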

    Read the article

  • SQL SERVER – Generate Report for Index Physical Statistics – SSMS

    - by pinaldave
    A few days ago, I wrote about SQL SERVER – Out of the Box – Activity and Performance Reports from SSMS (Link). A user asked me whether we can use similar reports to get details about indexes. Yes, it is possible to do the same. Similar types of reports are available at the database level, just like those available at the server instance level. You can right-click on the database name and click Reports. Under Standard Reports, you will find the following reports:
      Disk Usage
      Disk Usage by Top Tables
      Disk Usage by Table
      Disk Usage by Partition
      Backup and Restore Events
      All Transactions
      All Blocking Transactions
      Top Transactions by Age
      Top Transactions by Blocked Transactions Count
      Top Transactions by Locks Count
      Resource Locking Statistics by Objects
      Object Execute Statistics
      Database Consistency History
      Index Usage Statistics
      Index Physical Statistics
      Schema Change History
      User Statistics
    Select the report named Index Physical Statistics. Once clicked, a report containing all the index names, along with other index-related information such as Index Type and number of partitions, will be visible. One column that caught my interest was Operation Recommended. In some places it suggested that the index needs to be rebuilt. It is also possible to click and expand the column of partitions and see additional details about the index as well. DBAs and developers who just want to get an idea of their indexes and their physical statistics can use this tool. Click to Enlarge. Note: Please note that I will not rebuild my indexes just because this report recommends it. There are many other parameters you need to consider before rebuilding indexes. However, this tool gives you accurate stats for your index, and the report can be exported right away to Excel or PDF by right-clicking on it. Reference: Pinal Dave (http://blog.SQLAuthority.com) Filed under: Pinal Dave, SQL, SQL Authority, SQL Index, SQL Optimization, SQL Performance, SQL Query, SQL Server, SQL Server Management Studio, SQL Tips and Tricks, SQL Utility, T SQL, Technology
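
    If you prefer a query to the report UI, here is a minimal sketch along the same lines. My assumption is that the report draws on the sys.dm_db_index_physical_stats DMV, so the report's actual query may differ:

        SELECT OBJECT_NAME(ips.object_id) AS table_name,
               i.name AS index_name,
               ips.index_type_desc,
               ips.partition_number,
               ips.avg_fragmentation_in_percent
        FROM sys.dm_db_index_physical_stats(DB_ID(), NULL, NULL, NULL, 'LIMITED') AS ips
        INNER JOIN sys.indexes AS i
            ON i.object_id = ips.object_id
           AND i.index_id = ips.index_id
        ORDER BY ips.avg_fragmentation_in_percent DESC;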

    Read the article

  • SQL SERVER – Tricks to Comment T-SQL in SSMS – SQL in Sixty Seconds #019 – Video

    - by pinaldave
    Code commenting is one of the most common tasks developers perform. There are two major reasons why developers comment code: 1) during debugging and 2) to document the code. While debugging T-SQL code I have often seen developers struggling to comment code. They spend (or waste) more time commenting and uncommenting than doing the actual debugging of the procedure. When I see a developer struggling to comment code I feel a little uncomfortable, as commenting should be a very easy task. Today we will see three quick methods to comment T-SQL code in Query Editor. There are three different methods to comment and uncomment statements in SQL Server Management Studio:
    Method 1: Using Keyboard Shortcuts. Commenting the statement – CTRL+K, CTRL+C. Uncommenting the statement – CTRL+K, CTRL+U.
    Method 2: Using the Tool Bar. Use the tool bar buttons. (See Video)
    Method 3: Using the Menu Bar. Commenting the statement – Menu Bar >> Edit >> Advanced >> Click on Comment Selection. Uncommenting the statement – Menu Bar >> Edit >> Advanced >> Click on Uncomment Selection.
    More on commenting code: Two Different Ways to Comment Code – Explanation and Example. I encourage you to submit your ideas for SQL in Sixty Seconds. We will try to accommodate as many as we can. If we like your idea we promise to share with you educational material. Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: Database, Pinal Dave, PostADay, SQL, SQL Authority, SQL in Sixty Seconds, SQL Query, SQL Scripts, SQL Server, SQL Server Management Studio, SQL Tips and Tricks, T SQL, Technology, Video
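
    For reference, a trivial sketch of the two comment styles these shortcuts and menu items work with in T-SQL itself:

        -- A single-line comment; CTRL+K, CTRL+C prefixes each selected line like this
        /*
           A block comment, handy for commenting out
           a multi-line section by hand
        */
        SELECT GETDATE(); -- a comment can also trail a statement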

    Read the article

  • SQL SERVER – Change Database Access to Single User Mode Using SSMS

    - by pinaldave
    I have previously written about how, using a T-SQL script, we can convert the database access to single user mode before backup. I was recently asked if the same can be done using SQL Server Management Studio. Yes! You can do it from the database properties (right-click on the database and select Properties) and follow the image. Reference: Pinal Dave (http://blog.SQLAuthority.com) Filed under: Pinal Dave, SQL, SQL Authority, SQL Query, SQL Server, SQL Server Management Studio, SQL Tips and Tricks, T SQL, Technology
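
    For reference, a sketch of the T-SQL route referred to above (MyDatabase is a placeholder name; ROLLBACK IMMEDIATE forcibly disconnects other sessions, so use it with care):

        -- Switch to single user mode, rolling back any open transactions immediately
        ALTER DATABASE MyDatabase SET SINGLE_USER WITH ROLLBACK IMMEDIATE;
        -- ... take the backup ...
        -- Return to normal multi-user access
        ALTER DATABASE MyDatabase SET MULTI_USER;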

    Read the article

  • SQL SERVER – SSMS: Top Object and Batch Execution Statistics Reports

    - by Pinal Dave
    The month of June till the middle of July has been a fever of sports. First it was Wimbledon tennis, and then soccer fever was all over. There is a huge number of fan followers, and it is great to see the level at which people sometimes worship these sports. Being an Indian, I cannot forget to mention the India tour of England in the later part of July. Following these sports as the events unfold to the finals, there are a number of ways the statisticians can slice and dice the numbers. Taking a cue from soccer, I can surely say there is team performance against another team, and then there are individual members' performances against a particular opponent. Such statistics give us a fair idea of how teams have fared against each other in the past or the recent past: head-to-head stats during the World Cup and during other neutral-venue games. All these statistics are just pointers. In reality, they don't reflect the calibre of the current team, because the individuals who performed in each of these games are totally different (a typical example being the Brazil vs Germany semi-final match in FIFA 2014). So at times these numbers are misleading, and it is worth investigating to get the next level of information. Similar to these statistics, SQL Server Management Studio is also equipped with a number of reports, such as a) the Object Execution Statistics report and b) the Batch Execution Statistics report. In terms of the analogy, the team scorecard is like Batch Execution Statistics and individual stats are like Object Level Statistics. The analogy can be taken only this far; trust me, there is no correlation between SQL Server functioning and playing sports – it is like I think about diet all the time except while I am eating.
    Performance – Batch Execution Statistics. Let us view the first report, which can be invoked from Server Node -> Reports -> Standard Reports -> Performance – Batch Execution Statistics. Most of the values displayed in this report come from the DMVs sys.dm_exec_query_stats and sys.dm_exec_sql_text(sql_handle). This report contains 3 distinct sections, as outlined below.
    Section 1: A bar-graph representation of Average CPU Time, Average Logical Reads and Average Logical Writes for individual batches. The batch numbers are indicative; the details of each individual batch are available in section 3 (detailed below).
    Section 2: A pie chart of all the batches by Total CPU Time (%) and Total Logical IO (%). This graphical representation tells us which batch consumed the highest CPU and IO since the server started, provided the plan is available in the cache.
    Section 3: The section where we can find the SQL statements associated with each of the batch numbers. This also gives us the Average CPU, Average Logical Reads and Average Logical Writes in the system for the given batch, with object details. Expanding the rows, I also get the # Executions and # Plans Generated for each of the queries.
    Performance – Object Execution Statistics. The second report worth a look is Object Execution Statistics. This is a similar report to the previous one, but turned on its head by SQL Server objects. The report has 3 areas to look at, as above. Section 1 gives the Average CPU and Average IO bar charts for specific objects. Section 2 is a graphical representation of Total CPU by objects and Total Logical IO by objects. The final section details the various objects with the Avg. CPU, IO and other details, which are self-explanatory.
    At a high level, both reports are based on queries over two DMVs (sys.dm_exec_query_stats and sys.dm_exec_sql_text), and they build their values from calculations over the columns in them:

        SELECT *
        FROM sys.dm_exec_query_stats s1
        CROSS APPLY sys.dm_exec_sql_text(sql_handle) AS s2
        WHERE s2.objectid IS NOT NULL
          AND DB_NAME(s2.dbid) IS NOT NULL
        ORDER BY s1.sql_handle;

    This is one of the simplest forms of report, and in future blogs we will look at more complex ones. I truly hope these reports can give DBAs and developers a hint about possible performance tuning areas. As a closing point I must emphasize that all the above reports pick up data from the plan cache. If a particular query consumed a lot of resources earlier but its plan is not available in the cache, none of the above reports will show that bad query. Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: SQL, SQL Authority, SQL Query, SQL Server, SQL Server Management Studio, SQL Tips and Tricks, T SQL Tagged: SQL Reports

    Read the article

  • SQL SERVER – SSMS: Disk Usage Report

    - by Pinal Dave
    Let us start with humor! With this series on various reports, we have come to a logical point: we have covered all the reports at the server level. This means the reports we saw were targeted towards activities related to instance-level operations. These are mostly like how a doctor diagnoses a patient. At this point I am reminded of a dialog which I read somewhere:
    Patient: Doc, it hurts when I touch my head.
    Doc: OK, go on. What else have you experienced?
    Patient: It hurts even when I touch my eye, it hurts when I touch my arms, it even hurts when I touch my feet, etc.
    Doc: Hmmm…
    Patient: I feel it hurts when I touch anywhere in my body.
    Doc: Ahh… now I get it. You need a plaster on your finger, John.
    Sometimes the server level gives an indicator of what is happening in the system, but we need to get to the root cause for a specific database. So this is the first blog in the series where we start discussing database-level reports. To launch database-level reports, expand the selected server in Object Explorer, expand the Databases folder, and then right-click any database for which we want to look at reports. From the menu, select Reports, then Standard Reports, and then any of the database-level reports. In this blog, we will talk about four "disk" reports because they are similar:
      Disk Usage
      Disk Usage by Top Tables
      Disk Usage by Table
      Disk Usage by Partition
    Disk Usage. This report shows multiple pieces of information about the database. Let us discuss them one by one; we have divided the output into 5 different sections.
    Section 1 shows a high-level summary of the database. It shows the space used by the database files (mdf and ldf). Under the hood, the report uses various DMVs and DBCC commands; here it is using sys.data_spaces and DBCC SHOWFILESTATS.
    Sections 2 and 3 are pie charts, one for data file allocation and another for the transaction log file. The pie chart for "Data Files Space Usage (%)" shows the space consumed by data and indexes, and the unallocated space, which is allocated to the SQL Server database files but not yet filled with anything. "Transaction Log Space Usage (%)" uses DBCC SQLPERF (LOGSPACE) and shows how much empty space we have in the physical transaction log file.
    Section 4 shows the data from the Default Trace and looks at Event IDs 92, 93, 94 and 95, which are for "Data File Auto Grow", "Log File Auto Grow", "Data File Auto Shrink" and "Log File Auto Shrink" respectively. Here is an expanded view of that section. If the default trace is not enabled, this section is replaced by the message "Trace Log is disabled", as highlighted below.
    Section 5 of the report uses DBCC SHOWFILESTATS to get information. Here is the enhanced version of that section; it shows the physical layout of the file. In case you have In-Memory objects in the database (from SQL Server 2014), the report shows information about those as well. Here is a screenshot taken for a different database, which has an In-Memory table. I have highlighted the new things, which are shown only for a database with In-Memory objects. The new sections highlighted above use sys.dm_db_xtp_checkpoint_files, sys.database_files and sys.data_spaces. The new type for In-Memory OLTP is 'FX' in sys.data_spaces.
    The next set of reports is targeted at information about a table and its storage. These reports can answer questions like: Which is the biggest table in the database? How many rows do we have in a table? Is there any table which has a lot of reserved space but it's unused?
    Which partition of the table holds more data?
    Disk Usage by Top Tables. This report provides detailed data on the utilization of disk space by the top 1000 tables within the database. The report does not provide data for memory-optimized tables.
    Disk Usage by Table. This report is the same as the previous one, with a few differences: the first report shows only 1000 rows, and the first report orders by values in the DMV sys.dm_db_partition_stats, whereas this one orders by the name of the table. Both reports have an interactive sort facility; we can click on any column header and change the sorting order of the data.
    Disk Usage by Partition. This report shows the distribution of the data in a table based on the partitions in the table. It is similar to the previous output, but with the partition details added. Here is the query taken from Profiler:

        SELECT ROW_NUMBER() OVER (ORDER BY a1.used_page_count DESC, a1.index_id) AS row_number,
               (DENSE_RANK() OVER (ORDER BY a5.name, a2.name)) % 2 AS l1,
               a1.object_id,
               a5.name AS [schema],
               a2.name,
               a1.index_id,
               a3.name AS index_name,
               a3.type_desc,
               a1.partition_number,
               a1.used_page_count * 8 AS total_used_pages,
               a1.reserved_page_count * 8 AS total_reserved_pages,
               a1.row_count
        FROM sys.dm_db_partition_stats a1
        INNER JOIN sys.all_objects a2
            ON (a1.object_id = a2.object_id)
           AND a1.object_id NOT IN (SELECT object_id FROM sys.tables WHERE is_memory_optimized = 1)
        INNER JOIN sys.schemas a5
            ON (a5.schema_id = a2.schema_id)
        LEFT OUTER JOIN sys.indexes a3
            ON (a1.object_id = a3.object_id)
           AND (a1.index_id = a3.index_id)
        WHERE (SELECT MAX(DISTINCT partition_number)
               FROM sys.dm_db_partition_stats a4
               WHERE (a4.object_id = a1.object_id)) >= 1
          AND a2.type <> N'S'
          AND a2.type <> N'IT'
        ORDER BY a5.name ASC, a2.name ASC, a1.index_id, a1.used_page_count DESC, a1.partition_number;

    Using all of the above reports, you should be able to get the usage of the database files and also the space used by tables. I think this is too much disk information for a single blog, and I hope you have used these reports in the past to get data. Do let me know if you found anything interesting using these reports in your environments. Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Server Management Studio, SQL Tips and Tricks, T SQL Tagged: SQL Reports

    Read the article

  • Code formatter for SSMS

    - by blakmk
    I was searching recently for a code formatter for T-SQL and I came across this nice little utility that I wanted to share: http://www.wangz.net/cgi-bin/pp/gsqlparser/sqlpp/sqlformat.tpl I've been dealing with a lot of legacy code lately, and there is nothing I find more infuriating than unformatted code. This tool seems to work quite well: just one click and it formats everything nicely. There is also a free web version.

    Read the article

  • SQL SERVER – SSMS: Database Consistency History Report

    - by Pinal Dave
    Doctor and Database. The last place I like to visit is a hospital. With the monsoon season starting and intermittent rains, it has become sort of a routine to get a cycle of fever every other year (seriously, I hate it). So when I visit my doctor, it is always interesting the way he quizzes me. The routine questions: "How many days have you had this?", "Is there any pattern?", "Did you get drenched in the rain?", "Do you have any other symptoms?" and so on. The idea here is that the doctor wants to find an anomaly or a pattern that will guide him to a viral or bacterial type. Most of the time they get it based on experience, and sometimes after a battery of tests. So if there is consistent behavior to your problem, there is always a solution out there. SQL Server has its own way to find whether the server data / files are in a consistent state, using the DBCC commands.
    Back to SQL Server. In real life, the database consistency check is one of the critical operations a DBA generally doesn't give much priority to. Many readers of my blogs have asked many times: how do we know if the database is consistent? How do I read the output of DBCC CHECKDB and find out whether everything is right or not? My common answer to all of them is to look at the bottom of the checkdb (or checktable) output and look for the line below:
    CHECKDB found 0 allocation errors and 0 consistency errors in database 'DatabaseName'.
    The above is a "good sign" because we are seeing zero allocation errors and zero consistency errors. If you are seeing non-zero errors, then there is some problem with the database. Sample output is shown below:
    CHECKDB found 0 allocation errors and 2 consistency errors in database 'DatabaseName'.
    repair_allow_data_loss is the minimum repair level for the errors found by DBCC CHECKDB (DatabaseName).
    If we see non-zero errors, then most of the time (not always) we get repair options, depending on the level of corruption. There is risk involved with the above option (repair_allow_data_loss): we would lose data. Sometimes the option is repair_rebuild, which is a little safer. Though these options are available, it is important to find the root cause of the problem. Among the standard reports there is one which can show the history of checkdb executions for the selected database. Since this is a database-level report, we need to right-click on the database, click Reports, click Standard Reports and then choose the "Database Consistency History" report. The information in this report is picked from the default trace. If the default trace is disabled, or there has been no checkdb run, or the information is no longer in the default trace (because it rolled over), we get a report like the one below. As we can see, the report says it very clearly: Currently, no execution history of CHECKDB is available or default trace is not enabled.
    To demonstrate, I caused corruption in one of the databases and performed the steps below:
    1. Run CheckDB so that errors are reported.
    2. Fix the corruption, losing the data, by using the repair option.
    3. Run CheckDB again to check whether the corruption is cleared.
    After that I launched the report, and below is what we see. If you are lazy like me and don't want to run the report manually for each database, the query below is handy: it provides the same report for all databases. This query is run behind the scenes by the report; all I have done is remove the filter for the database name (at the end – highlighted).
        DECLARE @curr_tracefilename VARCHAR(500);
        DECLARE @base_tracefilename VARCHAR(500);
        DECLARE @indx INT;

        SELECT @curr_tracefilename = path FROM sys.traces WHERE is_default = 1;
        SET @curr_tracefilename = REVERSE(@curr_tracefilename);
        SELECT @indx = PATINDEX('%\%', @curr_tracefilename);
        SET @curr_tracefilename = REVERSE(@curr_tracefilename);
        SET @base_tracefilename = LEFT(@curr_tracefilename, LEN(@curr_tracefilename) - @indx) + '\log.trc';

        SELECT SUBSTRING(CONVERT(NVARCHAR(MAX), TEXTData), 36,
                   PATINDEX('%executed%', TEXTData) - 36) AS command,
               LoginName,
               StartTime,
               CONVERT(INT, SUBSTRING(CONVERT(NVARCHAR(MAX), TEXTData),
                   PATINDEX('%found%', TEXTData) + 6,
                   PATINDEX('%errors %', TEXTData) - PATINDEX('%found%', TEXTData) - 6)) AS errors,
               CONVERT(INT, SUBSTRING(CONVERT(NVARCHAR(MAX), TEXTData),
                   PATINDEX('%repaired%', TEXTData) + 9,
                   PATINDEX('%errors.%', TEXTData) - PATINDEX('%repaired%', TEXTData) - 9)) AS repaired,
               SUBSTRING(CONVERT(NVARCHAR(MAX), TEXTData),
                   PATINDEX('%time:%', TEXTData) + 6,
                   PATINDEX('%hours%', TEXTData) - PATINDEX('%time:%', TEXTData) - 6) + ':' +
               SUBSTRING(CONVERT(NVARCHAR(MAX), TEXTData),
                   PATINDEX('%hours%', TEXTData) + 6,
                   PATINDEX('%minutes%', TEXTData) - PATINDEX('%hours%', TEXTData) - 6) + ':' +
               SUBSTRING(CONVERT(NVARCHAR(MAX), TEXTData),
                   PATINDEX('%minutes%', TEXTData) + 8,
                   PATINDEX('%seconds.%', TEXTData) - PATINDEX('%minutes%', TEXTData) - 8) AS time
        FROM ::fn_trace_gettable(@base_tracefilename, DEFAULT)
        WHERE EventClass = 22
          AND SUBSTRING(TEXTData, 36, 12) = 'DBCC CHECKDB'
        -- AND DatabaseName = @DatabaseName;

    Don't get worried about the logic above. All it is doing is reading the trace files, parsing the entry below and pulling out the underlined pieces of information:
    DBCC CHECKDB (CorruptedDatabase) executed by sa found 2 errors and repaired 0 errors. Elapsed time: 0 hours 0 minutes 0 seconds. Internal database snapshot has split point LSN = 00000029:00000030:0001 and first LSN = 00000029:00000020:0001.
    Hopefully from now on you will run checkdb and understand its importance. As responsible DBAs I am sure you are already doing it; let me know how often you actually run it in your production environment. Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Server Management Studio, SQL Tips and Tricks, T SQL Tagged: SQL Reports

    Read the article

  • SSMS Tools Pack 1.8 is out!

    - by Mladen Prajdic
    This is a release that fixes all known major bugs and most of the minor ones. The main feature list hasn't changed. The only addition is the ability to export and import only SQL snippets. Previously, you could only export/import all settings, which included the snippets. You can download the new version here. Enjoy it!

    Read the article

  • SSMS hanging without error when connecting to SQL

    - by Rob Farley
    Scary day for me last Thursday. I had gone up to Brisbane, and was due to speak at the Queensland SQL User Group on Thursday night. Unfortunately, disaster struck about an hour beforehand. Nothing to do with the recent floods (although we were meeting in a different location because of them). It was actually down to the fact that I’d been fiddling with my machine to get Virtual Server running on Windows 7, and SQL had finally picked up a setting from then. I could run Management Studio, but it couldn’t connect at all. No error, it just seemed to hang. One of the things you have to do to get Virtual Server installed is to tweak the Group Policy settings. I’d used gpupdate /force to get Windows to pick up the new setting, which allowed me to get Virtual Server running properly, but at the time, SQL was still using the previous settings. Finally when in Brisbane, my machine picked up the new settings, and caused me pain. Dan Benediktson describes the situation. If the SQL client picks up the wrong value out of the GetOverlappedResult API (which is required for various changes in Windows 7 behaviour), then Virtual Server can be installed, but SQL Server won’t allow connections. Yay. Luckily, it’s easy enough to change back using the Group Policy editor (gpedit.msc). Then restarting the machine (again!, as gpupdate /force didn’t cut it either, because SQL had already picked up the value), and finally I could reconnect. On Thursday I simply borrowed another machine for my talk. Today, one of my guys had seen and remembered Dan’s post. Thanks, both of you.

    Read the article

  • RegEx-Based Finding and Replacing of Text in SSMS

    So often, one sees developers doing repetitive coding in SQL Server Management Studio or Visual Studio that could be made much quicker and easier by using the regular-expression-based Find/Replace functionality. That is understandable, since the syntax is odd and some features are missing, but it is still worth knowing about.

    Read the article

  • Backup a Single Table in SQL Server using SSMS

    - by Greg Low
    Our buddy Buck Woody made an interesting post about a common question: "How do I back up a single table in SQL Server?" That got me thinking about what a backup of a table really is. BCP is often used to get the data, but you want the schema as well. For reasonable-sized tables, the easiest way to do this now is to create a script using SQL Server Management Studio. To do this, you:
    1. Right-click the database (note: not the table)
    2. Choose Tasks > Generate Scripts
    3. In the Choose Objects pane,...(read more)
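
    As a purely hypothetical illustration of the BCP half mentioned above: the data alone can be exported with a one-liner such as bcp MyDatabase.dbo.MyTable out MyTable.dat -n -T -S MyServer (native format, trusted connection; all names here are placeholders), which is exactly why the Generate Scripts steps are still needed to capture the schema.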

    Read the article

  • Does running a SQL Server 2005 database in compatibility level 80 have a negative impact on performance?

    - by Templar
    Our software must be able to run on SQL Server 2000 and 2005. To simplify development, we're running our SQL Server 2005 databases in compatibility level 80. However, database performance seems slower on SQL 2005 than on SQL 2000 in some cases (we have not confirmed this using benchmarks yet). Would upgrading the compatibility level to 90 improve performance on the SQL 2005 servers?
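
    Not an answer to the benchmark question itself, but a minimal sketch for checking and switching the level on SQL Server 2005 ('OurDb' is a placeholder; sp_dbcmptlevel is the 2005-era mechanism, since ALTER DATABASE ... SET COMPATIBILITY_LEVEL only arrived in SQL Server 2008):

        -- Check the current compatibility level (80 = SQL 2000, 90 = SQL 2005)
        SELECT name, compatibility_level FROM sys.databases WHERE name = 'OurDb';
        -- Try level 90, benchmark, and switch back with 80 if needed
        EXEC sp_dbcmptlevel 'OurDb', 90;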

    Read the article

  • Visual Studio 2008 problem

    - by Thomas Manalil
    I am using Visual Studio Team System 2008 and VSS 2005. I took the latest copy of a project from VSS. Now when I try to open that project, it shows the error "This version of Visual Studio does not support source control" and "Unexpected error encountered. Restart the application. Error: no such interfaces are supported. File: vsee\internal\vscomptr.inl". When I open Solution Explorer, all projects show as unavailable. I tried VS > Tools > Options > Source Control > Plug-in Selection and set the plug-in to "Microsoft Visual SourceSafe", and when I open the "Environment" tab it shows "an error occurred while loading this property page". Can someone help me?

    Read the article

  • "Place code in separate file" in Visual Studio 2008 with ASP.NET

    - by Ben McCormack
    I'm reading a book that recommends clicking a check box that says "Place code in separate file" when adding a new Web Form to an existing ASP.NET project. The book is using Visual Studio 2005 and there is a check box for "Place code in separate file" when you open the "Add New Item" dialog. I am using Visual Studio 2008 and I do not see the "Place code in separate file" checkbox when I try to add a new item. Is there something I need to do to enable this? Was this functionality no longer possible/important in VS 2008?

    Read the article

  • Subtotal error on calculated field in a Reporting Services Matrix

    - by peacedog
    I've got a Reporting Services report that has two row groups: Category and SubCategory. For columns it has LastYearDataA, ThisYearDataA, LastYearDataB, ThisYearDataB. I added two columns (one for A and one for B) to handle an expression calculation (to show the percentage difference from LastYear to ThisYear for each). That's working. The problem comes in the subtotal for each category. The raw numbers are totaling correctly: if SubCat1 has 10//5 for LastYear/ThisYear A, and SubCat2 has 5//1, then I get 15/5 for the totals. But I get the percentage reported in the total column as "50%", matching SubCat1. Percentages for each SubCategory are being calculated correctly (according to my backup math, anyway), but the subtotal % always matches the first SubCategory in the group. Is this impossible to do in Reporting Services 2005?

    Read the article

  • Msg 64, Level 20, State 0, Line 0 SQL Server Error

    - by Brettski
    I am running a sproc on a SQL Server 2005 server, which results in the following error: Msg 64, Level 20, State 0, Line 0 A transport-level error has occurred when receiving results from the server. (provider: TCP Provider, error: 0 - The specified network name is no longer available.) Once the error occurs I lose my connection to the server, but I am able to reconnect. There is nothing in the Event logs. The database is still functional and running its website fine. EDIT: This occurs every time I run this sproc, or when it's called by an application. Any suggestions on what may be causing this error?

    Read the article

  • SQL Server deadlocks between select/update or multiple selects

    - by RobW
    All of the documentation on SQL Server deadlocks talks about the scenario in which operation 1 locks resource A then attempts to access resource B, and operation 2 locks resource B and attempts to access resource A. However, I quite often see deadlocks between a select and an update, or even between multiple selects, in some of our busy applications. I find some of the finer points of the deadlock trace output pretty impenetrable, but I would really just like to understand what can cause a deadlock between two single operations. Surely if a select has a read lock, the update should just wait before obtaining an exclusive lock, and vice versa? This is happening on SQL Server 2005, not that I think this makes a difference.
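
    A hedged side note for anyone digging into such traces: on SQL Server 2005 the full deadlock graph can be written to the error log with trace flag 1222 (1204 is the older, terser format), which makes the resources and lock modes on each side much easier to read than the raw trace output:

        -- Instance-wide: write detailed deadlock information to the SQL Server error log
        DBCC TRACEON (1222, -1);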

    Read the article

  • LINQ to Entities: using ToLower() on NText fields

    - by Julien N
    I'm using SQL Server 2005, with a case-sensitive database. In a search function, I need to create a LINQ to Entities (L2E) query with a "where" clause that compares several strings with the data in the database, with these rules: The comparison is a "Contains" match, not a strict comparison: easy, as the string's Contains() method is allowed in L2E. The comparison must be case-insensitive: I use ToLower() on both elements to perform an insensitive comparison. All of this performs really well, but I ran into the following exception: "Argument data type ntext is invalid for argument 1 of lower function" on one of my fields. It seems that the field is an NText field and I can't perform ToLower() on it. What can I do to perform a case-insensitive Contains() on that NText field?
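
    One server-side fix, sketched under the assumption that the schema can be changed (the table and column names below are hypothetical): ntext does not support LOWER(), but NVARCHAR(MAX) does, so converting the column makes ToLower() translatable again. ntext is deprecated in any case.

        -- Convert the deprecated ntext column so LOWER() / ToLower() become legal
        ALTER TABLE dbo.Documents ALTER COLUMN Body NVARCHAR(MAX) NULL;
        -- Optional: rewrite the values so existing data is stored in the new format
        UPDATE dbo.Documents SET Body = Body;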

    Read the article

  • SQL Server: Profiling statements inside a User-Defined Function

    - by Craig Walker
    I'm trying to use SQL Server Profiler (2005) to track down some application performance problems. One of the calls being made is to a table-valued user-defined function. This function wraps a select that joins several tables together. In SQL Server Profiler, the call to the UDF is logged. However, the select that underlies the UDF isn't being logged at all. Because of this, I'm not getting useful data on which tables & indexes are being hit. I'd like to feed this info into the Database Tuning Advisor for some indexing advice. Is there any way (short of unwrapping the queries themselves) to log the tables called by UDFs in Profiler?

    Read the article

  • TransactionScope and Connection Pooling

    - by Graham
    Hi, I'm trying to get a handle on whether we have a problem in our application with database connections using incorrect IsolationLevels. Our application is a .NET 3.5 database app using SQL Server 2005. I've discovered that the IsolationLevel of a connection is not reset when it is returned to the connection pool (see here), and was also really surprised to read in this blog post that each new TransactionScope created gets its own connection pool assigned to it. Our database updates (via our business objects) take place within a TransactionScope (a new one is created for each business object graph update), but our fetches do not use an explicit transaction. So what I'm wondering is: could we ever get into a situation where our fetch operations (which should be using the default IsolationLevel, Read Committed) reuse a connection from the pool which has been used for an update and inherit the update IsolationLevel (RepeatableRead)? Or would our updates be guaranteed to use a different connection pool, seeing as they are wrapped in a TransactionScope? Thanks in advance, Graham
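
    Whatever the pooling behaviour turns out to be, one defensive sketch (my suggestion, not from the original post) is to reset the level explicitly at the start of read work on any pooled connection, since sp_reset_connection on SQL Server 2005 does not restore the isolation level:

        -- Issue after opening a pooled connection used for plain reads
        SET TRANSACTION ISOLATION LEVEL READ COMMITTED;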

    Read the article
