Search Results


  • When someone deletes a shared data source in SSRS

    - by Rob Farley
    SQL Server Reporting Services plays nicely. You can have things in the catalogue that get shared. You can have Reports that have Links, Datasets that can be used across different reports, and Data Sources that can be used in a variety of ways too. So if you find that someone has deleted a shared data source, you potentially have a bit of a horror story going on. And this works for this month’s T-SQL Tuesday theme, hosted by Nick Haslam, who wants to hear about horror stories. I don’t write about LobsterPot client horror stories, so I’m writing about a situation that a fellow MVP friend asked me about recently instead.

    The best thing to do is to grab a recent backup of the ReportServer database, restore it somewhere, and figure out what’s changed. But of course, this isn’t always possible. And it’s much nicer to help someone with this kind of thing than to be trying to fix it yourself when you’ve just deleted the wrong data source. Unfortunately, SSRS lets you delete data sources without trying to scream that the data source is shared across over 400 reports in over 100 folders, as was the case for my friend’s colleague. So, suddenly there’s a big problem – lots of reports are failing, and the time to turn it around is small. You probably know which data source has been deleted, but getting the shared data source back isn’t the hard part (that’s just a connection string really). The nasty bit is all the re-mapping, to get those 400 reports working again.

    I know from exploring this kind of stuff in the past that the ReportServer database (using its default name) has a table called dbo.Catalog to represent the catalogue, and that Reports are stored here. However, the information about which data sources these deployed reports are configured to use is stored in a different table, dbo.DataSource. You could be forgiven for thinking that shared data sources would live in this table, but they don’t – they’re catalogue items just like the reports. Let’s have a look at the structure of these two tables (although if you’re reading this because you have a disaster, feel free to skim past). Frustratingly, there doesn’t seem to be a Books Online page for this information, sorry about that. I’m also not going to look at all the columns, just ones that I find interesting enough to mention, and that are related to the problem at hand. These fields are consistent all the way through to SQL Server 2012 – there don’t seem to have been any changes here for quite a while.

    dbo.Catalog

    The Primary Key is ItemID. It’s a uniqueidentifier. I’m not going to comment any more on that. A minor nice point about using GUIDs in unfamiliar databases is that you can more easily figure out what’s what. But foreign keys are for that too…

    Path, Name and ParentID tell you where in the folder structure the item lives. Path isn’t actually required – you could’ve done recursive queries to get there. But as that would be quite painful, I’m more than happy for the Path column to be there. Path contains the Name as well, incidentally.

    Type tells you what kind of item it is. Some examples are 1 for a folder and 2 for a report. 4 is linked reports, 5 is a data source, 6 is a report model. I forget the others for now (but feel free to put a comment giving the full list if you know it).

    Content is an image field, remembering that image doesn’t necessarily store images – these days we’d rather use varbinary(max), but even in SQL Server 2012, this field is still image. It stores the actual item definition in binary form, whether it’s actually an image, a report, whatever.

    LinkSourceID is used for Linked Reports, and has a self-referencing foreign key (allowing NULL, of course) back to ItemID.

    Parameter is an ntext field containing XML for the parameters of the report. Not sure why this couldn’t be a separate table, but I guess that’s just the way it goes. This field gets changed when the default parameters get changed in Report Manager.

    There is nothing in dbo.Catalog that describes the actual data sources that the report uses. The default data sources would be part of the Content field, as they are defined in the RDL, but when you deploy reports, you typically choose NOT to replace the data sources. Anyway, they’re not in this table. Maybe it was already considered a bit wide to throw in another ntext field, I’m not sure. They’re in dbo.DataSource instead.

    dbo.DataSource

    The Primary Key is DSID. Yes, it’s a uniqueidentifier...

    ItemID is a foreign key reference back to dbo.Catalog.

    Fields such as ConnectionString, Prompt, UserName and Password do what they say on the tin, storing information about how to connect to the particular source in question.

    Link is a uniqueidentifier, which refers back to dbo.Catalog. This is used when a data source within a report refers back to a shared data source, rather than embedding the connection information itself. You’d think this should be enforced by a foreign key, but it’s not. It does allow NULLs though.

    Flags is an int, and I’ll come back to this.

    When a Data Source gets deleted out of dbo.Catalog, you might assume that it would be disallowed if there are references to it from dbo.DataSource. Well, you’d be wrong. And not because of the lack of a foreign key either. Deleting anything from the catalogue is done by calling a stored procedure called dbo.DeleteObject. You can look at the definition in there – it feels very much like the kind of Delete stored procedures that many people write, the kind of thing that means they don’t need to worry about allowing cascading deletes with foreign keys – because the stored procedure does the lot. Except that it doesn’t quite do that. If it deleted everything on a cascading delete, we’d’ve lost all the data sources as configured in dbo.DataSource, and that would be bad. This is fine if the ItemID from dbo.DataSource hooks in – if the report is being deleted. But if a shared data source is being deleted, you don’t want to lose the existence of the data source from the report. So it sets it to NULL, and it marks it as invalid. We see this code in that stored procedure:

        UPDATE [DataSource]
        SET [Flags] = [Flags] & 0x7FFFFFFD, -- broken link
            [Link] = NULL
        FROM [Catalog] AS C
        INNER JOIN [DataSource] AS DS ON C.[ItemID] = DS.[Link]
        WHERE (C.Path = @Path OR C.Path LIKE @Prefix ESCAPE '*')

    Unfortunately there’s no semi-colon on the end (but I’d rather they fix the ntext and image types first), and don’t get me started about using the table name in the UPDATE clause (it should use the alias DS). But there is a nice comment about what’s going on with the Flags field.

    What I’d LIKE it to do would be to set the connection information to a report-embedded copy of the connection information that’s in the shared data source, the one that’s about to be deleted. I understand that this would cause someone to lose the benefit of having the data sources configured in a central point, but I’d say that’s probably still slightly better than LOSING THE INFORMATION COMPLETELY. Sorry, rant over. I should log a Connect item – I’ll put that on my todo list.

    So it sets the Link field to NULL, and marks the Flags to tell you they’re broken. So this is your clue to fixing it. A bitwise AND with 0x7FFFFFFD is basically stripping out the ‘2’ bit from a number. So numbers like 2, 3, 6, 7, 10, 11, etc, whose binary representation ends in either 11 or 10, get turned into 0, 1, 4, 5, 8, 9, etc. We can test for it using a WHERE clause that matches the SET clause we’ve just used. I’d also recommend checking for Link being NULL and also having no ConnectionString. And join back to dbo.Catalog to see where the broken reports are – in case you get a surprise from a different data source having been broken in the past.

        SELECT c.Path, ds.Name
        FROM dbo.[DataSource] AS ds
        JOIN dbo.[Catalog] AS c ON c.ItemID = ds.ItemID
        WHERE ds.[Flags] = ds.[Flags] & 0x7FFFFFFD
        AND ds.[Link] IS NULL
        AND ds.[ConnectionString] IS NULL;

    When I just ran this on my own machine, having deleted a data source to check my code, I noticed a Report Model in the list as well – so if you had thought it was just going to be reports that were broken, you’d be forgetting something.

    So to fix those reports, get your new data source created in the catalogue, and then find its ItemID by querying Catalog, using Path and Name to find it (there’s a sketch of that lookup at the end of this post). And then use this value to fix them up. To fix the Flags field, just add 2. I prefer to use bitwise OR, which should do the same. Use the OUTPUT clause to get a copy of the DSIDs of the ones you’re changing, just in case you need to revert something later after testing (doing it all in a transaction won’t help, because you’ll just lock out the table, stopping you from testing anything).

        UPDATE ds
        SET [Flags] = [Flags] | 2,
            [Link] = '3AE31CBA-BDB4-4FD1-94F4-580B7FAB939D' /*Insert your own GUID*/
        OUTPUT deleted.Name, deleted.DSID, deleted.ItemID, deleted.Flags
        FROM dbo.[DataSource] AS ds
        JOIN dbo.[Catalog] AS c ON c.ItemID = ds.ItemID
        WHERE ds.[Flags] = ds.[Flags] & 0x7FFFFFFD
        AND ds.[Link] IS NULL
        AND ds.[ConnectionString] IS NULL;

    But please be careful. Your mileage may vary. And there’s no reason why 400-odd broken reports needs to be quite the nightmare that it could be. Really, it should be less than five minutes.

    @rob_farley
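
    (As a footnote: a quick sketch of the ItemID lookup mentioned above, assuming the replacement shared data source was created at /Data Sources/MySharedSource – substitute your own path and name. Type 5 is the data source type listed earlier.)

        -- Sketch: find the new shared data source's ItemID (path/name are placeholders)
        SELECT ItemID
        FROM dbo.[Catalog]
        WHERE [Path] = N'/Data Sources/MySharedSource'
        AND [Name] = N'MySharedSource'
        AND [Type] = 5; -- 5 = data source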

    Read the article

  • Event ID 17890 (A significant part... paged out.) with SQL Server 2008

    - by Godeke
    I have a machine that has SQL Server 2008 Standard installed. Periodically (about once an hour) I am getting Event ID 17890 several times in a row. An example:

    6:28:54 "A significant part of sql server process memory has been paged out. This may result in a performance degradation. Duration: 0 seconds. Working set (KB): 10652, committed (KB): 628428, memory utilization: 1%%."
    6:34:27 "A significant part of sql server process memory has been paged out. This may result in a performance degradation. Duration: 332 seconds. Working set (KB): 169780, committed (KB): 546124, memory utilization: 31%%."
    6:38:55 "A significant part of sql server process memory has been paged out. This may result in a performance degradation. Duration: 600 seconds. Working set (KB): 245068, committed (KB): 546124, memory utilization: 44%%."

    This pattern repeated at 7:26 - 7:37, 8:26 - 8:36, 9:24 - 9:35 and so on, with the same increasing working set and memory utilization pattern. I don't have any (known) background tasks running at this time. Backups run at 2:00. This subsided from 11:00 at night until it resumed at 4:00 in the morning, and the intermittent 10-minute glitch periods have been continuing since. As this server has plenty of RAM (the commit charge has peaked at 2,871,564 of 4,194,012 physical), I disabled the paging files after reading several items I dug up searching Google, but none of them changed the situation. The pattern I am documenting here is from after removing the paging files, so I'm not even sure where the paged-out SQL process memory could be going. I also changed the SQL process memory to have a minimum of 500MB and a maximum of 2GB of RAM (as this is a light-duty database server serving only a small workgroup). Has anyone encountered this? Prior to disabling the page files this error would cause 5 minutes of disk thrashing that disabled access to the databases, files, IIS webs and so on. Since disabling the page files it just logs strange things, but at least I'm not seeing a performance drop. Any suggestions would be welcome.
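
    (For reference, the min/max memory settings described above can also be set with sp_configure – a sketch using the question's 500MB/2GB values; adjust for your own workload:)

        -- Sketch: cap SQL Server memory (values from the question; adjust to taste)
        EXEC sp_configure 'show advanced options', 1;
        RECONFIGURE;
        EXEC sp_configure 'min server memory (MB)', 500;
        EXEC sp_configure 'max server memory (MB)', 2048;
        RECONFIGURE;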

    Read the article

  • SQL Server 2005 Import from Excel

    - by user327045
    I'd like to know what my best option would be for importing data from an Excel file on a weekly or monthly basis. At first, I thought I would use SSIS, but after much struggle with seemingly simple tasks, I'm starting to rethink my plan. Would it be better/easier to just write the SQL by hand or use the services of an SSIS package? The basic process will be as follows:

    1. A separate process will download an .xls file to a local fileshare. The xls file will have a filename like 'myfilename MON YY'.
    2. I will need to read the month and year from the filename, reformat it to a SQL date and then query a DimDate table to find the corresponding date key.
    3. For each row (after the first 2 header rows), insert the data with the date key, unless the row is a total row, in which case ignore it.

    Here are some of the issues I've been encountering with SSIS: I can parse the date string from a flat file data source, but can't seem to do it with an Excel data source. Also, once parsed, I cannot seem to convert the string to a date in order to perform the lookup for the date key. For example, I want to do something like this:

        select DateKey from DimDate
        where ActualDate = convert(datetime, '01-' + 'JAN-10', 120)

    but I don't think it is possible to use the 'convert' or 'datetime' keywords in an expression builder. I have also been unable to find where I can edit the SQL to ignore the first 2 rows of data. I'm very skeptical of using SSIS because it seems like a kludgy way of doing something that can probably be accomplished more efficiently by writing the SQL yourself, but I may be forced to use SSIS. Thoughts?
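
    (For what it's worth, the date-key lookup itself is straightforward in plain T-SQL once the 'MON YY' token has been parsed out of the filename – a sketch using SQL Server 2005 syntax and a made-up DimDate schema matching the question; the default date parser handles the 'dd-MON-yy' form under us_english:)

        -- Sketch: @monthYear is assumed to have been extracted from the filename
        DECLARE @monthYear varchar(10);
        SET @monthYear = 'JAN 10';  -- e.g. parsed from 'myfilename JAN 10.xls'

        DECLARE @actualDate datetime;
        SET @actualDate = CONVERT(datetime, '01-' + REPLACE(@monthYear, ' ', '-'));

        SELECT DateKey
        FROM dbo.DimDate
        WHERE ActualDate = @actualDate;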

    Read the article

  • How do I push my initial snapshot to a subscriber server in SQL Server 2000?

    - by Kev
    I'm configuring Transactional Replication using the push model. The scenario is: The SQL Servers: SQL01 (publisher) and SQL02 (subscriber) - both running SQL 2000 SP4. Both servers are standalone (i.e. not domain members). Both servers have their FQDN and NETBIOS names in their HOSTS files. I've managed to configure SQL01 to publish my database and configured a push subscription for SQL02 using the Push New Subscription wizard, and set the Distribution Agent to update the subscription continuously. On the Push Subscription wizard's "Initialise Subscription" page I selected "Yes, initialise the schema and data" and ticked the "Start the Snapshot Agent to begin the initialisation process immediately" option. All the required services are running (SQL Agent). When I complete the wizard and browse the Replication - Publications folder I can see my publication (blue book with arrow). The publication shows the push subscription and its status is Pending. If I look in the C:\Program Files\Microsoft SQL Server\Mssql\Repldata folder I see a number of snapshot files for each table, e.g. Products.bcp, Products.sch, Products.idx. What should happen now? Should my replicated database now (magically) appear on the subscription server?
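
    (One thing worth trying, as a sketch: the snapshot is produced by a Snapshot Agent job under SQL Server Agent, so it can be kicked off manually from msdb. The job name below is only a placeholder – the real name appears in SQL Agent's job list:)

        -- Sketch: job name is a placeholder; find the actual Snapshot Agent job
        -- under SQL Server Agent -> Jobs on the distributor (SQL01 here).
        EXEC msdb.dbo.sp_start_job
            @job_name = N'SQL01-MyDatabase-MyPublication-1';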

    Read the article

  • Question about Reporting and Data Warehousing Software bundled with SQL Server 2005

    - by anonymous user
    We currently use SQL Server 2005 Enterprise for our fairly large application, which has its roots in pre-SQL Server 7.0. The tables are normalized and designed mainly for the application. The developers for the most part have the legacy SQL Server mindset, only using the part of T-SQL that existed back in 7.0 and none of the new features bundled with 2005. We're currently trying to build on-demand reports using some crappy third-party software, and will eventually try to build a data warehouse using more of the same crappy third-party software (name removed to protect the guilty; don't ask, I will not tell). The rationale for this was that we didn't want to spend more money to buy this additional software from Microsoft (this was not my decision, I had no input, but it is my problem now). But from what I can tell, Enterprise includes all of these tools, or am I missing something? What comes bundled with SQL Server 2005 Enterprise as far as reporting and data warehousing? Will we need to purchase anything else? Is there actually anything else that can be purchased from Microsoft in this regard?

    Read the article

  • SQL putting two single quotes around datetime fields and fails to insert record

    - by user82613
    I am trying to INSERT into an SQL database table, but it doesn't work. So I used the SQL Server Profiler to see how it was building the query; what it shows is the following:

        declare @p1 int
        set @p1=0
        declare @p2 int
        set @p2=0
        declare @p3 int
        set @p3=1
        exec InsertProcedureName
            @ConsumerMovingDetailID=@p1 output,
            @UniqueID=@p2 output,
            @ServiceID=@p3 output,
            @ProjectID=N'0',
            @IPAddress=N'66.229.112.168',
            @FirstName=N'Mike',
            @LastName=N'P',
            @Email=N'[email protected]',
            @PhoneNumber=N'(254)637-1256',
            @MobilePhone=NULL,
            @CurrentAddress=N'',
            @FromZip=N'10005',
            @MoveInAddress=N'',
            @ToZip=N'33067',
            @MovingSize=N'1',
            @MovingDate=''2009-04-30 00:00:00:000'', /* Problem here */
            @IsMovingVehicle=0,
            @IsPackingRequired=0,
            @IncludeInSaveologyPlanner=1
        select @p1, @p2, @p3

    As you can see, it puts two pairs of single quotes around the datetime field, which produces a syntax error in SQL. I wonder if there is anything I must configure somewhere? Any help would be appreciated. Here are the environment details: Visual Studio 2008, .NET 3.5, MS SQL Server 2005. Here is the .NET code I'm using:

        //call procedure for results
        strStoredProcedureName = "usp_SMMoverSearchResult_SELECT";
        Database database = DatabaseFactory.CreateDatabase();
        DbCommand dbCommand = database.GetStoredProcCommand(strStoredProcedureName);
        dbCommand.CommandTimeout = DataHelper.CONNECTION_TIMEOUT;
        database.AddInParameter(dbCommand, "@MovingDetailID", DbType.String, objPropConsumer.ConsumerMovingDetailID);
        database.AddInParameter(dbCommand, "@FromZip", DbType.String, objPropConsumer.FromZipCode);
        database.AddInParameter(dbCommand, "@ToZip", DbType.String, objPropConsumer.ToZipCode);
        database.AddInParameter(dbCommand, "@MovingDate", DbType.DateTime, objPropConsumer.MoveDate);
        database.AddInParameter(dbCommand, "@PLServiceID", DbType.Int32, objPropConsumer.ServiceID);
        database.AddInParameter(dbCommand, "@FromAreaCode", DbType.String, pFromAreaCode);
        database.AddInParameter(dbCommand, "@FromState", DbType.String, pFromState);
        database.AddInParameter(dbCommand, "@ToAreaCode", DbType.String, pToAreaCode);
        database.AddInParameter(dbCommand, "@ToState", DbType.String, pToState);
        DataSet dstSearchResult = new DataSet("MoverSearchResult");
        database.LoadDataSet(dbCommand, dstSearchResult, new string[] { "MoverSearchResult" });

    Read the article

  • Sql Compact and __sysobjects

    - by Scott Wisniewski
    I have some SQL Compact queries that create tables inside of a transaction. This is mainly because I need to simulate temporary tables, which SQL Compact does not support. I do this by creating a real table, and then dropping it at the end of the transaction. This mostly works. Sometimes, however, when creating the tables SQL Compact will try to acquire PAGE-level locks on the __sysobjects table. If there are several concurrent queries running that create "temp" tables, the attempt to acquire a page lock can result in a deadlock followed by a SqlLockTimeout exception. For normal tables I could fix this using a "with (rowlock)" hint. However, because I'm not writing the query that inserts into __sysobjects (SQL Server does that in response to "create table") I can't do this. Does anyone know of a way I could get around this? I've thought about pulling the table creation out of the transaction, but that opens up the possibility of phantom temporary tables that I'd then need to clean up regularly. Ideally I'd like to avoid that if possible.

    Read the article

  • C# Sql Connection Best Practises 2013

    - by Pete Petersen
    With the new year approaching I'm drawing up a development plan for 2013. I won't bore you with the details, but I started thinking about whether the way I do things is actually the 'correct' way - in particular, how I'm interfacing with SQL. I predominantly create WPF desktop applications and often some Silverlight web applications. All of my programs are very data-centric. When connecting to SQL from WPF I tend to use stored procedures stored on the server and call them using ADO.NET (e.g. SqlConnection(), ExecuteQuery()). However, with Silverlight I have a WCF service and use LINQ to SQL (and I'm using LINQ much more in WPF). My question really is: am I doing anything wrong, in the sense that my approach is a little old-fashioned? I've tried to look this up online but couldn't find anything useful after about 2010, and of what I did find half said 'LINQ is dead!' and the other half 'Always use LINQ'. I just want to make sure that going forward I'm doing the right things the right way, or at least the advised way :). What principles are you using when connecting to SQL? Are they the same for WPF and Silverlight/WCF?

    Read the article

  • '<=' operator is not working in sql server 2000

    - by Lalit
    Hello. The scenario is: the database is in the maintenance phase. It was not developed by our developers; it is an existing database developed by the 'xyz' company in SQL Server 2000. This is a live database, where I am working now. I wanted to write a stored procedure which will retrieve the records from date1 to date2, so the query is:

        Select * from MyTableName
        Where colDate >= '3-May-2010' and colDate <= '5-Oct-2010'
        and colName = 'xyzName'

    My understanding is that I should get data including both the lower bound and upper bound dates, but somehow I am getting records from '3-May-2010' (which is fine, but) to '10-Oct-2010'. Looking at the table design, for colDate the developer used varchar to store the date. I know this was the wrong remedy on their part. So in my stored procedure I also used varchar parameters @FromDate1 and @ToDate to take the inputs for the SP, and this gives me the result I have explained. I tried to make the parameter type 'Datetime', but it shows an error while saving/altering the stored procedure: "@FromDate1 has invalid datatype", and the same for "@ToDate". The situation is that I cannot change the table design at all. What do I have to do here? I know we can use user-defined table types in SQL Server 2008, but this is SQL Server 2000, which does not support them. Please guide me through this scenario.

    Edited: I am trying to write the SP like this:

        CREATE PROCEDURE USP_Data
            (@Location varchar(100),
             @FromDate DATETIME,
             @ToDate DATETIME)
        AS
        SELECT * FROM dbo.TableName
        Where CAST(Dt AS DATETIME) >= @FromDate
        and CAST(Dt AS DATETIME) <= @ToDate
        and Location = @Location
        GO

    but I'm getting an error in SQL Server 2000: "Arithmetic overflow error converting expression to data type datetime." What could that be? Am I wrong somewhere? Also, the "(202 row(s) affected)" message changes every time in a circular manner: first time it says (122 row(s) affected), run it again and it says (80 row(s) affected), again (202 row(s) affected), again (122 row(s) affected). I cannot understand what is going on.
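
    (A sketch of one way around both problems, assuming Dt really is varchar: wrapping the conversion in CASE with ISDATE means rows holding non-date strings are never cast, which is the likely cause of the arithmetic overflow error:)

        -- Sketch: guard the CAST with ISDATE via CASE so non-date strings in the
        -- varchar column are never converted (CASE evaluates the WHEN first).
        SELECT *
        FROM dbo.TableName
        WHERE Location = @Location
        AND CASE WHEN ISDATE(Dt) = 1 THEN CAST(Dt AS DATETIME) END >= @FromDate
        AND CASE WHEN ISDATE(Dt) = 1 THEN CAST(Dt AS DATETIME) END <= @ToDate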

    Read the article

  • SQL Table design question

    - by Projapati
    Please ignore this question if it sounds stupid to you. I have a SQL table (SQL Server) for photo albums; it has 20+ columns and it will hold millions of albums. I need to designate some albums as Promoted and some as Featured every week. I also need a very efficient way to get these albums (page by page) when I show them to users. How should I design this?

    Option 1: I can create another table just to store the ids of the promoted and featured albums, like this, and then join to the main albums table to get the set of columns I need:

        table designated_albums:
        album_id | promoted_featured
        1        | 1
        5        | 0
        7        | 1
        15       | 0

    The query for promoted will return 1, 7. The query for featured will return 5, 15.

    Option 2: I can add one column that stores 1 if promoted and 0 if featured, and NULL otherwise. I can then query that column for 1 to get promoted albums and for 0 to get featured albums.

    Option 3: I can add 2 bit columns: one for promoted (0/1) and one for featured (0/1) – see the sketch below.

    Which way would perform better? EDIT: The design should be efficient in SQL 2008 as well. Right now I have SQL 2005.
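
    (A sketch of option 3 with supporting indexes – the table and column names are made up. On SQL Server 2008 the two indexes could become filtered indexes (WHERE IsPromoted = 1), which stay tiny when only a handful of albums are flagged each week:)

        -- Sketch of option 3: table/column names are placeholders.
        ALTER TABLE dbo.Albums ADD
            IsPromoted bit NOT NULL DEFAULT 0,
            IsFeatured bit NOT NULL DEFAULT 0;

        CREATE INDEX IX_Albums_IsPromoted ON dbo.Albums (IsPromoted);
        CREATE INDEX IX_Albums_IsFeatured ON dbo.Albums (IsFeatured);

        -- Page through promoted albums, SQL 2005 style (first page of 20):
        SELECT AlbumId
        FROM (SELECT AlbumId,
                     ROW_NUMBER() OVER (ORDER BY AlbumId) AS rn
              FROM dbo.Albums
              WHERE IsPromoted = 1) AS t
        WHERE rn BETWEEN 1 AND 20;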

    Read the article

  • SQL Monitor’s data repository: Alerts

    - by Chris Lambrou
    In my previous post, I introduced the SQL Monitor data repository, and described how the monitored objects are stored in a hierarchy in the data schema, in a series of tables with a _Keys suffix. In this post I had planned to describe how the actual data for the monitored objects is stored in corresponding tables with _StableSamples and _UnstableSamples suffixes. However, I’m going to postpone that until my next post, as I’ve had a request from a SQL Monitor user to explain how alerts are stored. In the SQL Monitor data repository, alerts are stored in tables belonging to the alert schema, which contains the following five tables:

        alert.Alert
        alert.Alert_Cleared
        alert.Alert_Comment
        alert.Alert_Severity
        alert.Alert_Type

    In this post, I’m only going to cover the alert.Alert and alert.Alert_Type tables. I may cover the other three tables in a later post. The most important table in this schema is alert.Alert, as each row in this table corresponds to a single alert. So let’s have a look at it.

        SELECT TOP 100 AlertId, AlertType, TargetObject, [Read], SubType
        FROM alert.Alert
        ORDER BY AlertId DESC;

        AlertId | AlertType | TargetObject                                                                                     | Read | SubType
        65550   | 39        | 7:Cluster,1,4:Name,s29:srp-mr03.testnet.red-gate.com,9:SqlServer,1,4:Name,s0:,                  | 1    | 0
        65549   | 38        | 7:Cluster,1,4:Name,s29:srp-mr03.testnet.red-gate.com,7:Machine,1,4:Name,s0:,                    | 1    | 0
        65548   | 18        | 7:Cluster,1,4:Name,s7:granger,9:SqlServer,1,4:Name,s0:,8:Database,1,4:Name,s15:FavouriteThings, | 0    | 0
        65547   | 15        | 7:Cluster,1,4:Name,s7:granger,9:SqlServer,1,4:Name,s0:,8:Database,1,4:Name,s15:FavouriteThings, | 0    | 0
        65546   | 14        | 7:Cluster,1,4:Name,s7:granger,9:SqlServer,1,4:Name,s0:,8:Database,1,4:Name,s15:FavouriteThings, | 0    | 0
        65545   | 18        | 7:Cluster,1,4:Name,s7:granger,9:SqlServer,1,4:Name,s0:,8:Database,1,4:Name,s14:SqlMonitorData,  | 0    | 0
        65544   | 15        | 7:Cluster,1,4:Name,s7:granger,9:SqlServer,1,4:Name,s0:,8:Database,1,4:Name,s14:SqlMonitorData,  | 0    | 0
        65543   | 14        | 7:Cluster,1,4:Name,s7:granger,9:SqlServer,1,4:Name,s0:,8:Database,1,4:Name,s14:SqlMonitorData,  | 0    | 0
        65542   | 18        | 7:Cluster,1,4:Name,s7:granger,9:SqlServer,1,4:Name,s0:,8:Database,1,4:Name,s4:msdb,             | 0    | 0
        65541   | 14        | 7:Cluster,1,4:Name,s7:granger,9:SqlServer,1,4:Name,s0:,8:Database,1,4:Name,s4:msdb,             | 0    | 0
        ...

    So what are we seeing here, then? Well, AlertId is an auto-incrementing identity column, so ORDER BY AlertId DESC ensures that we see the most recent alerts first. AlertType indicates the type of each alert, such as Job failed (6), Backup overdue (14) or Long-running query (12). The TargetObject column indicates which monitored object the alert is associated with. The Read column acts as a flag to indicate whether or not the alert has been read. And finally the SubType column is used in the case of a Custom metric (40) alert, to indicate which custom metric the alert pertains to.

    Okay, now let’s look at some of those columns in more detail. The AlertType column is an easy one to start with, and it brings us nicely to the next table, alert.Alert_Type. Let’s have a look at what’s in this table:

        SELECT AlertType, Event, Monitoring, Name, Description
        FROM alert.Alert_Type
        ORDER BY AlertType;

        AlertType | Event | Monitoring | Name | Description
        1  | 0 | 0 | Processor utilization | Processor utilization (CPU) on a host machine stays above a threshold percentage for longer than a specified duration
        2  | 1 | 0 | SQL Server error log entry | An error is written to the SQL Server error log with a severity level above a specified value.
        3  | 1 | 0 | Cluster failover | The active cluster node fails, causing the SQL Server instance to switch nodes.
        4  | 1 | 0 | Deadlock | SQL deadlock occurs.
        5  | 0 | 0 | Processor under-utilization | Processor utilization (CPU) on a host machine remains below a threshold percentage for longer than a specified duration
        6  | 1 | 0 | Job failed | A job does not complete successfully (the job returns an error code).
        7  | 0 | 0 | Machine unreachable | Host machine (Windows server) cannot be contacted on the network.
        8  | 0 | 0 | SQL Server instance unreachable | The SQL Server instance is not running or cannot be contacted on the network.
        9  | 0 | 0 | Disk space | Disk space used on a logical disk drive is above a defined threshold for longer than a specified duration.
        10 | 0 | 0 | Physical memory | Physical memory (RAM) used on the host machine stays above a threshold percentage for longer than a specified duration.
        11 | 0 | 0 | Blocked process | SQL process is blocked for longer than a specified duration.
        12 | 0 | 0 | Long-running query | A SQL query runs for longer than a specified duration.
        14 | 0 | 0 | Backup overdue | No full backup exists, or the last full backup is older than a specified time.
        15 | 0 | 0 | Log backup overdue | No log backup exists, or the last log backup is older than a specified time.
        16 | 0 | 0 | Database unavailable | Database changes from Online to any other state.
        17 | 0 | 0 | Page verification | Torn Page Detection or Page Checksum is not enabled for a database.
        18 | 0 | 0 | Integrity check overdue | No entry for an integrity check (DBCC DBINFO returns no date for dbi_dbccLastKnownGood field), or the last check is older than a specified time.
        19 | 0 | 0 | Fragmented indexes | Fragmentation level of one or more indexes is above a threshold percentage.
        24 | 0 | 0 | Job duration unusual | The duration of a SQL job deviates from its baseline duration by more than a threshold percentage.
        25 | 0 | 1 | Clock skew | System clock time on the Base Monitor computer differs from the system clock time on a monitored SQL Server host machine by a specified number of seconds.
        27 | 0 | 0 | SQL Server Agent Service status | The SQL Server Agent Service status matches the status specified.
        28 | 0 | 0 | SQL Server Reporting Service status | The SQL Server Reporting Service status matches the status specified.
        29 | 0 | 0 | SQL Server Full Text Search Service status | The SQL Server Full Text Search Service status matches the status specified.
        30 | 0 | 0 | SQL Server Analysis Service status | The SQL Server Analysis Service status matches the status specified.
        31 | 0 | 0 | SQL Server Integration Service status | The SQL Server Integration Service status matches the status specified.
        33 | 0 | 0 | SQL Server Browser Service status | The SQL Server Browser Service status matches the status specified.
        34 | 0 | 0 | SQL Server VSS Writer Service status | The SQL Server VSS Writer status matches the status specified.
        35 | 0 | 1 | Deadlock trace flag disabled | The monitored SQL Server’s trace flag cannot be enabled.
        36 | 0 | 0 | Monitoring stopped (host machine credentials) | SQL Monitor cannot contact the host machine because authentication failed.
        37 | 0 | 0 | Monitoring stopped (SQL Server credentials) | SQL Monitor cannot contact the SQL Server instance because authentication failed.
        38 | 0 | 0 | Monitoring error (host machine data collection) | SQL Monitor cannot collect data from the host machine.
        39 | 0 | 0 | Monitoring error (SQL Server data collection) | SQL Monitor cannot collect data from the SQL Server instance.
        40 | 0 | 0 | Custom metric | The custom metric value has passed an alert threshold.
        41 | 0 | 0 | Custom metric collection error | SQL Monitor cannot collect custom metric data from the target object.
    Basically, alert.Alert_Type is just a big reference table containing information about the 34 different alert types supported by SQL Monitor (note that the largest id is 41, not 34 – some alert types have been retired since SQL Monitor was first developed). The Name and Description columns are self-evident, and I’m going to skip over the Event and Monitoring columns as they’re not very interesting. The AlertType column is the primary key, and is referenced by the AlertType column in the alert.Alert table. As such, we can rewrite our earlier query to join these two tables, in order to provide a more readable view of the alerts:

        SELECT TOP 100 AlertId, Name, TargetObject, [Read], SubType
        FROM alert.Alert a
        JOIN alert.Alert_Type at ON a.AlertType = at.AlertType
        ORDER BY AlertId DESC;

        AlertId | Name                                            | TargetObject                                                                                     | Read | SubType
        65550   | Monitoring error (SQL Server data collection)   | 7:Cluster,1,4:Name,s29:srp-mr03.testnet.red-gate.com,9:SqlServer,1,4:Name,s0:,                  | 0    | 0
        65549   | Monitoring error (host machine data collection) | 7:Cluster,1,4:Name,s29:srp-mr03.testnet.red-gate.com,7:Machine,1,4:Name,s0:,                    | 0    | 0
        65548   | Integrity check overdue                         | 7:Cluster,1,4:Name,s7:granger,9:SqlServer,1,4:Name,s0:,8:Database,1,4:Name,s15:FavouriteThings, | 0    | 0
        65547   | Log backup overdue                              | 7:Cluster,1,4:Name,s7:granger,9:SqlServer,1,4:Name,s0:,8:Database,1,4:Name,s15:FavouriteThings, | 0    | 0
        65546   | Backup overdue                                  | 7:Cluster,1,4:Name,s7:granger,9:SqlServer,1,4:Name,s0:,8:Database,1,4:Name,s15:FavouriteThings, | 0    | 0
        65545   | Integrity check overdue                         | 7:Cluster,1,4:Name,s7:granger,9:SqlServer,1,4:Name,s0:,8:Database,1,4:Name,s14:SqlMonitorData,  | 0    | 0
        65544   | Log backup overdue                              | 7:Cluster,1,4:Name,s7:granger,9:SqlServer,1,4:Name,s0:,8:Database,1,4:Name,s14:SqlMonitorData,  | 0    | 0
        65543   | Backup overdue                                  | 7:Cluster,1,4:Name,s7:granger,9:SqlServer,1,4:Name,s0:,8:Database,1,4:Name,s14:SqlMonitorData,  | 0    | 0
        65542   | Integrity check overdue                         | 7:Cluster,1,4:Name,s7:granger,9:SqlServer,1,4:Name,s0:,8:Database,1,4:Name,s4:msdb,             | 0    | 0
        65541   | Backup overdue                                  | 7:Cluster,1,4:Name,s7:granger,9:SqlServer,1,4:Name,s0:,8:Database,1,4:Name,s4:msdb,             | 0    | 0

    Okay, the next column to discuss in the alert.Alert table is TargetObject. Oh boy, this one’s a bit tricky! The TargetObject of an alert is a serialized string representation of the position in the monitored object hierarchy of the object to which the alert pertains. The serialization format is somewhat convenient for parsing in the C# source code of SQL Monitor, and has some helpful characteristics, but it’s probably very awkward to manipulate in T-SQL. I could document the serialization format here, but it would be very dry reading, so perhaps it’s best to consider an example from the table above. Have a look at the alert with an AlertId of 65543. It’s a Backup overdue alert for the SqlMonitorData database running on the default instance of granger, my laptop. Each different alert type is associated with a specific type of monitored object in the object hierarchy (I described the hierarchy in my previous post). The Backup overdue alert is associated with databases, whose position in the object hierarchy is root → Cluster → SqlServer → Database. The TargetObject value identifies the target object by specifying the key properties at each level in the hierarchy, thus:

        Cluster: Name = "granger"
        SqlServer: Name = "" (an empty string, denoting the default instance)
        Database: Name = "SqlMonitorData"

    Well, look at the actual TargetObject value for this alert: "7:Cluster,1,4:Name,s7:granger,9:SqlServer,1,4:Name,s0:,8:Database,1,4:Name,s14:SqlMonitorData,".
    It is indeed composed of three parts, one for each level in the hierarchy:

        Cluster: "7:Cluster,1,4:Name,s7:granger,"
        SqlServer: "9:SqlServer,1,4:Name,s0:,"
        Database: "8:Database,1,4:Name,s14:SqlMonitorData,"

    Each part is handled in exactly the same way, so let’s concentrate on the first part, "7:Cluster,1,4:Name,s7:granger,". It comprises the following:

        "7:Cluster," – This identifies the level in the hierarchy.
        "1," – This indicates how many different key properties there are to uniquely identify a cluster (we saw in my last post that each cluster is identified by a single property, its Name).
        "4:Name,s7:granger," – This represents the Name key property and its corresponding value, granger. It’s split up like this:
            "4:Name," – Indicates the name of the key property.
            "s" – Indicates the type of the key property, in this case, a string.
            "7:granger," – Indicates the value of the property.

    At this point, you might be wondering about the format of some of these strings. Why is the string "Cluster" stored as "7:Cluster,"? Well, an encoding scheme is used, which consists of the following:

        "7" – This is the length of the string "Cluster".
        ":" – This is a delimiter between the length of the string and the actual string’s contents.
        "Cluster" – This is the string itself. 7 characters.
        "," – This is a final terminating character that indicates the end of the encoded string.

    You can see that "4:Name,", "8:Database," and "14:SqlMonitorData," also conform to the same encoding scheme. In the example above, the "s" character is used to indicate that the value of the Name property is a string. If you explore the TargetObject property of alerts in your own SQL Monitor data repository, you might find other characters used for other non-string key property values. The different value types you might possibly encounter are as follows:

        "I" – Denotes a bigint value. For example, "I65432,".
        "g" – Denotes a GUID value. For example, "g32116732-63ae-4ab5-bd34-7dfdfb084c18,".
        "d" – Denotes a datetime value. For example, "d634815384796832438,". The value is stored as a bigint, rather than a native SQL datetime value. I’ll describe how datetime values are handled in the SQL Monitor data repository in a future post.

    I suggest you have a look at the alerts in your own SQL Monitor data repository for further examples, so you can see how the TargetObject values are composed for each of the different types of alert. Let me give one further example, though, that represents a Custom metric alert, as this will help in describing the final column of interest in the alert.Alert table, SubType. Let me show you the alert I’m interested in:

        SELECT AlertId, a.AlertType, Name, TargetObject, [Read], SubType
        FROM alert.Alert a
        JOIN alert.Alert_Type at ON a.AlertType = at.AlertType
        WHERE AlertId = 65769;

        AlertId | AlertType | Name          | TargetObject                                                                                                          | Read | SubType
        65769   | 40        | Custom metric | 7:Cluster,1,4:Name,s7:granger,9:SqlServer,1,4:Name,s0:,8:Database,1,4:Name,s6:master,12:CustomMetric,1,8:MetricId,I2, | 0    | 2

    An AlertType value of 40 corresponds to the Custom metric alert type. The Name taken from the alert.Alert_Type table is simply Custom metric, but this doesn’t tell us anything about the specific custom metric that this alert pertains to. That’s where the SubType value comes in. For custom metric alerts, this provides us with the Id of the specific custom alert definition that can be found in the settings.CustomAlertDefinitions table.
    I don’t really want to delve into custom alert definitions yet (maybe in a later post), but an extra join in the previous query shows us that this alert pertains to the CPU pressure (avg runnable task count) custom metric alert.

        SELECT AlertId, a.AlertType, at.Name, cad.Name AS CustomAlertName, TargetObject, [Read], SubType
        FROM alert.Alert a
        JOIN alert.Alert_Type at ON a.AlertType = at.AlertType
        JOIN settings.CustomAlertDefinitions cad ON a.SubType = cad.Id
        WHERE AlertId = 65769;

        AlertId | AlertType | Name          | CustomAlertName                        | TargetObject                                                                                                          | Read | SubType
        65769   | 40        | Custom metric | CPU pressure (avg runnable task count) | 7:Cluster,1,4:Name,s7:granger,9:SqlServer,1,4:Name,s0:,8:Database,1,4:Name,s6:master,12:CustomMetric,1,8:MetricId,I2, | 0    | 2

    The TargetObject value in this case breaks down like this:

        "7:Cluster,1,4:Name,s7:granger," – Cluster named "granger".
        "9:SqlServer,1,4:Name,s0:," – SqlServer named "" (the default instance).
        "8:Database,1,4:Name,s6:master," – Database named "master".
        "12:CustomMetric,1,8:MetricId,I2," – Custom metric with an Id of 2.

    Note that the hierarchy for a custom metric is slightly different compared to the earlier Backup overdue alert. It’s root → Cluster → SqlServer → Database → CustomMetric. Also notice that, unlike Cluster, SqlServer and Database, the key property for CustomMetric is called MetricId (not Name), and the value is a bigint (not a string). Finally, delving into the custom metric tables is beyond the scope of this post, but for the sake of avoiding any future confusion, I’d like to point out that whilst the SubType references a custom alert definition, the MetricId value embedded in the TargetObject value references a custom metric definition. Although in this case both the custom metric definition and custom alert definition share the same Id value of 2, this is not generally the case.

    Okay, that’s enough for now, not least because as I’m typing this, it’s almost 2am, I have to go to work tomorrow, and my alarm is set for 6am – eek! In my next post, I’ll either cover the remaining three tables in the alert schema, or I’ll delve into the way SQL Monitor stores its monitoring data, as I’d originally planned to cover in this post.
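
    (Not from the original post, but a handy corollary of the schema described above – counting unread alerts by type uses exactly the same join:)

        -- Not from the post: count unread alerts by type using the same join.
        SELECT at.Name, COUNT(*) AS UnreadAlerts
        FROM alert.Alert a
        JOIN alert.Alert_Type at ON a.AlertType = at.AlertType
        WHERE a.[Read] = 0
        GROUP BY at.Name
        ORDER BY UnreadAlerts DESC;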

    Read the article

  • SQL Server Licensing in a VMware vSphere Cluster

    - by Helvick
    If I have SQL Server 2008 instances running in virtual machines on a VMware vSphere cluster with vMotion\DRS enabled, so that the VMs can (potentially) run on any one of the physical servers in the cluster, what precisely are the license requirements? For example, assume that I have 4 physical ESX hosts with dual physical CPUs and 3 separate single-vCPU virtual machines running SQL Server 2008 in that cluster.

    How many SQL Standard Processor licenses would I need? Is it 3 (one per VM) or 12 (one per VM on each physical host) or something else?

    How many SQL Enterprise Processor licenses would I need? Is it 3 (one per VM) or 8 (one for each physical CPU in the cluster) or, again, something else?

    The range in the list prices for these options goes from $17k to $200k, so getting it right is quite important. Bonus question: If I choose the Server+CAL licensing model, do I need to buy multiple Server instance licenses for each of the ESX hosts (so 12 copies of the SQL Server Standard server license, so that there are enough licenses on each host to run all VMs), or again can I just license the VM, and what difference would using Enterprise per-server licensing make?

    Edited to Add: Having spent some time reading the SQL 2008 Licensing Guide (63 pages! Includes maps!*) I've come across this:

    • Under the Server/CAL model, you may run unlimited instances of SQL Server 2008 Enterprise within the server farm, and move those instances freely, as long as those instances are not running on more servers than the number of licenses assigned to the server farm.
    • Under the Per Processor model, you effectively count the greatest number of physical processors that may support running instances of SQL Server 2008 Enterprise at any one time across the server farm and assign that number of Processor licenses.

    And earlier: "..For SQL Server, these rule changes apply to SQL Server 2008 Enterprise only."

    By my reading this means that for my 3 VMs I only need 3 SQL 2008 Enterprise Processor licenses, or one copy of Server Enterprise + CALs for the cluster. By implication it means that I have to license all processors if I choose SQL 2008 Standard Processor licensing, or that I have to buy a copy of SQL Server 2008 Standard for each ESX host if I choose to use CALs.

    *There is a map to demonstrate that a Server Farm cannot extend across an area broader than 3 timezones unless it's in the European Free Trade Area. I wasn't expecting that when I started reading it.

    Read the article

  • SQL Server 2008 to SQL Server 2008

    - by Sakhawat Ali
    I have MDF and LDF files from SQL Server 2005. I attached them to SQL Server 2008 and made some changes to the data. Now, when I attach the database back to SQL Server 2005 Express Edition, it gives a version error:

        The database 'E:\DB\JOBPERS.MDF' cannot be opened because it is version 655. This server supports version 612 and earlier. A downgrade path is not supported.
        Could not open new database 'E:\DB\JOBPERS.MDF'. CREATE DATABASE is aborted.
        An attempt to attach an auto-named database for file E:\DB\Jobpers.mdf failed. A database with the same name exists, or specified file cannot be opened, or it is located on UNC share.

    Read the article

  • Connection Pool error LINQ to SQL

    - by Ved
    Guys, in our project I have used LINQ to SQL for every kind of database interaction. Now, from what I know, when I create and use a DataContext object it opens the connection, creates a transaction, performs the query and closes the connection. Every now and then we are getting a connection pool error in our services and on the server: "The timeout period elapsed prior to obtaining a connection from the pool. This may have occurred because all pooled connections were in use and max pool size was reached." Any idea? Am I completely off here? Is this related to SQL Server itself? We are using LINQ, .NET 3.5 and SQL Server 2008. NOTE: NO DATAREADER IS BEING USED ANYWHERE IN THE SYSTEM. Thanks
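
    (A sketch for checking the server side of the story – sys.dm_exec_sessions on SQL Server 2008 shows whether connections really are piling up, and from where:)

        -- Sketch: see whether sessions are accumulating, and from which
        -- host/application (sys.dm_exec_sessions exists on SQL Server 2008).
        SELECT host_name, program_name, COUNT(*) AS open_sessions
        FROM sys.dm_exec_sessions
        WHERE is_user_process = 1
        GROUP BY host_name, program_name
        ORDER BY open_sessions DESC;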

    Read the article

  • Removing offline/defunct files in SQL server 2008

    - by philox
    How do I remove traces of files marked as OFFLINE or DEFUNCT in Microsoft SQL Server 2008? I have been playing around with a setup where I create a database with 3 filegroups: Primary, FileGroupData and FileGroupIndex. The clustered index uses FileGroupData and a non-clustered index is set to use FileGroupIndex. To simulate a disk failure I shut down SQL Server and manually deleted the files in the index filegroup. To start the database I marked the files 'OFFLINE', but after that I can't delete the index files, which are now offline. I don't have a backup of the files as they are merely indices, but that has the implication that I can't restore the files and return their status to "ONLINE". How would you recommend removing the files and the filegroup? They still show up in Management Studio under files/filegroups, and Management Studio is not able to delete them. As far as I can tell this is different from the question posted at http://stackoverflow.com/questions/462637/how-do-i-remove-offline-files-from-a-sql-server-2005-database /Philip
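
    (A sketch of the direct route, worth trying first – database, file and filegroup names are placeholders. Note that REMOVE FILE generally only succeeds once SQL Server considers the file empty or defunct, which may not hold for files that were deleted out from under it:)

        -- Sketch: MyDb, IndexFile1 and FileGroupIndex are placeholder names.
        ALTER DATABASE MyDb REMOVE FILE IndexFile1;
        ALTER DATABASE MyDb REMOVE FILEGROUP FileGroupIndex;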

    Read the article

  • sql server 2008 export to mdb issue

    - by Vivek Chandraprakash
    Hi, I'm trying to import records from an MDB file into SQL Server 2008. I have a separate DB server and application server. I wrote a stored proc to import data from the MDB, but it fails because the MDB is on a network share; it works if the file is on the DB server itself. I'm using a SQL Server account. If I log in using a Windows account and try the same SP, it works. How do I make it work for a SQL Server account? -Vivek
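
    (For context, a sketch of the kind of ad hoc read involved – the provider and UNC path are examples only. Under a SQL login, the file access happens as the SQL Server service account rather than as you, which is why a local path works and a network share doesn't; granting that service account rights on the share is the usual fix:)

        -- Sketch: provider and UNC path are examples only.
        SELECT *
        FROM OPENROWSET('Microsoft.Jet.OLEDB.4.0',
                        '\\fileserver\share\data.mdb'; 'Admin'; '',
                        'SELECT * FROM SomeTable');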

    Read the article

  • SQL Server 2005 sp_send_dbmail

    - by Jit
    Hi Friends, when we use sp_send_dbmail to send email with an attachment, the attachment gets copied into a folder inside C:\Windows\Temp. As we have many emails to be sent every day, the temp folder grows rapidly. This is the case with SQL Server 2005. We noticed that, with SQL Server 2008, we don't see these files under the temp folder. Is there any setting to turn the above behavior off? Does SQL Server 2008 store the files in some other folder and not in temp? Appreciate your help and time. Thanks.
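
    (For reference, a minimal sp_send_dbmail attachment call of the kind described – profile, recipient and attachment path are placeholders:)

        -- Sketch: profile, recipient and attachment path are placeholders.
        EXEC msdb.dbo.sp_send_dbmail
            @profile_name = N'MyMailProfile',
            @recipients = N'someone@example.com',
            @subject = N'Daily report',
            @file_attachments = N'C:\Reports\daily.csv';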

    Read the article

  • SQL Server 2008 SQLCLR Issue

    - by Joe Bloggs
    I have a SQLCLR assembly that was originally running on a SQL Server 2005 DB. I have since rebuilt it to run on SQL Server 2008. The assembly requires a reference to WindowsBase.dll, which I added as an assembly to the SQL Server 2008 DB. When I try to use the rebuilt assembly I get the message:

        Could not load file or assembly 'WindowsBase, Version=3.0.0.0, Culture=neutral, PublicKeyToken=31bf3856ad364e35' or one of its dependencies. Assembly in host store has a different signature than assembly in GAC. (Exception from HRESULT: 0x80131050)

    I tried to drop and recreate the WindowsBase assembly (making sure to use the dll from the GAC) but I still get the above error. Any suggestions on how to resolve this issue would be greatly appreciated. Cheers

    Read the article

  • SQL Server 2005, Sudden increase of connections - SharePoint 2007

    - by CrazyNick
    We observed a sudden increase in SQL connections during a specific hour; this is the backend of a SharePoint 2007 farm.

    From the SharePoint 2007 perspective:
    1. Incremental crawling is scheduled at that time, and a few of the timer jobs (normal timer jobs) are scheduled to run every minute / every 10 minutes.
    2. The number of user requests is low.

    From the SQL Server 2005 perspective:
    1. Transaction log backup is scheduled at that time.
    2. No other scheduled jobs are running at that time.

    So, how do I narrow down the issue? What could be causing the sudden increase in SQL connections?
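
    (A sketch for catching the spike in the act on SQL Server 2005 – run during the window in question to see which host and application own the extra connections:)

        -- Sketch for SQL Server 2005: run during the spike window to see which
        -- host/application owns the extra connections (spids <= 50 are system).
        SELECT hostname, program_name, loginame, COUNT(*) AS connections
        FROM master.dbo.sysprocesses
        WHERE spid > 50
        GROUP BY hostname, program_name, loginame
        ORDER BY connections DESC;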

    Read the article

  • How to handle encrypted column in SQL Server Database in LINQ to SQL?

    - by Raja
    Hi Guys, we are in the process of encrypting all the SSN columns (we had to do it now due to a security audit). We have already built the DBML and now we have to change it. Can you let me know if there is a good way of handling this in LINQ to SQL? We use stored procedures for Insert, Update and Delete, but use LINQ to SQL for all our selects (for ease of use). Do I have to change the mapping for those tables with SSNs, or is there any other way we can handle it in LINQ to SQL? Thanks, Raja
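
    (A sketch of the server-side pattern this implies, assuming cell-level encryption with a symmetric key – the key, certificate, table and column names are placeholders. The encrypted column comes back as varbinary, which is why either the DBML column type has to change or the selects have to move behind a view/proc that decrypts:)

        -- Sketch: SsnKey/SsnCert and the table/column names are placeholders.
        OPEN SYMMETRIC KEY SsnKey DECRYPTION BY CERTIFICATE SsnCert;

        SELECT CONVERT(varchar(11), DecryptByKey(SsnEncrypted)) AS Ssn
        FROM dbo.Person;

        CLOSE SYMMETRIC KEY SsnKey;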

    Read the article

  • Upgrading from SQL Server 2008 Express to 2008 Developer

    - by josecortesp
    Hey Guys, this one is a quick question: what is the best way (or THE way) to change my SQL Express 2008 (with Advanced Services) installation to the 2008 Developer edition? I need to keep the databases, along with the logins and so on. I need to upgrade because I want to use all the features in TFS 2010. Do I have to make backups of all the data, uninstall Express and install Developer? Is there a quicker way? Thanks in advance. SOLVED: In the SQL Server Installation Center, there is an Edition Upgrade option found under Maintenance. The only thing is that you have to choose the processor type as x86 (as Express is x86 only) in the options in the Installation Center. Now my SQL Server is Developer...

    Read the article

  • SQL Server Account will work for SQL Server but fails to ReportServer

    - by Larry
    I'm setting up SSRS. Under Service Account, I set a domain account as the Report Server service account and click Apply. The SQL Server Connection Dialog pops up, asking me to specify an account with admin privileges. I set Credentials Type = SQL Server Account and use the sa account (which works, verified many ways with SQL Server Management Studio). It fails with the following:

        System.InvalidOperationException: Cannot start service ReportServer on computer 'DEVDB5'. ---> System.ComponentModel.Win32Exception: The service did not start due to a logon failure
        --- End of inner exception stack trace ---
        at System.ServiceProcess.ServiceController.Start(String[] args)
        at System.ServiceProcess.ServiceController.Start()
        at ReportServicesConfigUI.Panels.WindowsServiceIdentityPanel.StartWindowsServicePostChangeWindowsServiceIdentity(ServiceController rsService)

    What am I doing wrong? BTW: it was working fine as of yesterday. I originally set up SSRS a few days ago using the sa account.

    Read the article

  • transaction log shipping sql server 2005 to 2008

    - by Andrew Jahn
    I have a reporting setup with SSRS on our SQL Server 2005 database. Because SQL Server 2008 is not supported by the main program that populates our database, we are stuck with 2005 on our prod database. Unfortunately, when I run our weekly check reports the web interface constantly times out because the server can't do the conversion to PDF. I've read that SQL Server 2008's SSRS is a lot better with memory management. I was wondering if I can do some kind of transaction log shipping subscription/publication from 2005 to 2008? Am I chasing a dream here?

    Read the article

  • SQL 2008 Database tuning advisor won't start

    - by Andrew Hancox
    For some reason I can't get DTA to connect to my development machine. It connects to a remote DB just fine, but when I point it to my dev machine I get an error saying: "Failed to initialize MSDB database for tuning (exit code: -1073741819)." I'm pretty sure it's not a permissions issue, since I've used Profiler to capture what it's doing and all of the commands it has run so far look fine and are being run under my account, which is associated with the sysadmin role; when I run them in SQL Management Studio they go through fine. I'm pretty convinced that the problem is related to creating the objects in MSDB that are used by DTA, but I tried creating these manually (I found scripts on the web) and it just pushed the problem slightly further along. I'm going out of my mind - I've even tried reinstalling SQL but that hasn't fixed it. I'm using SQL 2008 with SP1 (10.0.2531) on Windows Server 2008 (patched up to date). SAVE ME!!!!!

    Read the article
