Search Results

Search found 3343 results on 134 pages for 'sqlserver 2000'.

Page 15 of 134

  • Possible uncommitted transactions causing "System.Data.SqlClient.SqlException: Timeout expired" error

    - by Michael
    My application requires a user to log in and allows them to edit a list of things. However, it seems that if the same user repeatedly logs in and out and edits the list, that user will eventually run into a "System.Data.SqlClient.SqlException: Timeout expired." error. I've read comments about increasing the timeout period, but I've also read a comment that it could be caused by uncommitted transactions, and I do have one going in the application. I'll provide the code I'm working with; there is an IF statement in there that I was a little iffy about, but it seemed like a reasonable thing to do. Here is what's going on: there is a list of objects to update or add to the database. New objects created in the application are given an ID of 0, while existing objects have their own IDs generated by the DB. If the user chooses to delete some objects, their IDs are stored in a separate list of Integers. Once the user is ready to save their changes, the two lists are passed into this method. By use of the IF statement, objects with an ID of 0 are added (using the Add stored procedure) and objects with non-zero IDs are updated (using the Update stored procedure). After all this, a FOR loop goes through the integers in the "removal" list and uses the Delete stored procedure to remove them. A transaction is used for all of this.

        Public Shared Sub UpdateSomethings(ByVal SomethingList As List(Of Something), ByVal RemovalList As List(Of Integer))
            Using DBConnection As New SqlConnection(conn)
                DBConnection.Open()
                Dim MyTransaction As SqlTransaction
                MyTransaction = DBConnection.BeginTransaction()
                Try
                    For Each SomethingItem As Something In SomethingList
                        Using MyCommand As New SqlCommand()
                            MyCommand.Connection = DBConnection
                            If SomethingItem.ID > 0 Then
                                MyCommand.CommandText = "UpdateSomething"
                            Else
                                MyCommand.CommandText = "AddSomething"
                            End If
                            MyCommand.Transaction = MyTransaction
                            MyCommand.CommandType = CommandType.StoredProcedure
                            With MyCommand.Parameters
                                If MyCommand.CommandText = "UpdateSomething" Then
                                    .Add("@id", SqlDbType.Int).Value = SomethingItem.ID
                                End If
                                .Add("@stuff", SqlDbType.VarChar).Value = SomethingItem.Stuff
                            End With
                            MyCommand.ExecuteNonQuery()
                        End Using
                    Next
                    For Each ID As Integer In RemovalList
                        Using MyCommand As New SqlCommand("DeleteSomething", DBConnection)
                            MyCommand.Transaction = MyTransaction
                            MyCommand.CommandType = CommandType.StoredProcedure
                            With MyCommand.Parameters
                                .Add("@id", SqlDbType.Int).Value = ID
                            End With
                            MyCommand.ExecuteNonQuery()
                        End Using
                    Next
                    MyTransaction.Commit()
                Catch ex As Exception
                    MyTransaction.Rollback()
                    'Exception handling goes here
                End Try
            End Using
        End Sub

    There are three stored procedures used here, as well as some looping, so I can see how something could be holding everything up if the list is large enough. Other users can log in to the system at the same time just fine, though. I'm using Visual Studio 2008 to debug, and SQL Server 2000 for the DB.
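
    A quick diagnostic sketch for this situation: on SQL Server 2000, DBCC OPENTRAN reports the oldest active transaction in a database, which can reveal an uncommitted transaction left holding locks (the database name below is hypothetical):

        -- Reports the oldest active transaction in the named database, if any
        DBCC OPENTRAN ('MyAppDatabase')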

    Read the article

  • Adding a clustered index to a SQL table: what dangers exist for a live production system?

    - by MoSlo
    Right, keep in mind I need to describe this while abstracting all possibly confidential info: I've been put in charge of a 10-year-old transactional system in which the majority of the business logic is implemented at the database level (triggers, stored procedures, etc). Windows 2000 Server, MSSQL 2000 Enterprise. No immediate plans for replacing/updating the system are being considered :(

    The core process is a program that executes transactions - specifically, it executes a stored procedure with various parameters; let's call it sp_ProcessTrans. The program executes the stored procedure at asynchronous intervals. By itself, things work fine. But there are 30 instances of this program on remotely located workstations, all of them asynchronously executing sp_ProcessTrans and then retrieving data from the SQL Server (execution is pretty regular, ranging from 0 to 60 times a minute, depending on what items the program instance is responsible for). Performance of the system has dropped considerably with 10 years of data growth: the reason is deadlocks, and specifically deadlock wait times. The deadlock is on the Employee table. I have discovered: in sp_ProcessTrans's execution, it selects from the Employee table 7 times (don't ask); the select is done on a field that is NOT the primary key; no index exists on this field, so a table scan is performed. 7 times. Per transaction. So the reason for the deadlocks is clear. I created a non-unique ordered clustered index on the field (the field looks good: almost unique, NUM(7), very rarely changes). Immediate improvement in the test environment.

    The problem is that I cannot simulate the deadlocks in a test environment (I'd need 30 workstations, and I'd need to simulate 'realistic' activity on those stations, so virtualization is out). I need to know if I must schedule downtime. Creating an index shouldn't be a risky operation for MSSQL, but is there any danger (data corruption in transactions/select statements, extra wait time, etc.) in creating this index on the production database while transactions are still taking place? (I can pick a time when transactions are fairly quiet across the 30 stations.) Are there any hidden dangers I'm not seeing? (I'm not looking forward to needing to restore the DB if something goes wrong; restoring would take a lot of time with 10 years of data.)
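
    For illustration, a minimal sketch of the kind of index described, with a hypothetical column name. Note that SQL Server 2000 has no ONLINE index build option (that arrived in SQL Server 2005), so building a clustered index takes an exclusive table lock while the table is physically reordered:

        -- Building a clustered index physically re-sorts the whole table,
        -- so the Employee table is locked for the duration of the build
        CREATE CLUSTERED INDEX IX_Employee_EmpNum
            ON dbo.Employee (EmpNum)   -- EmpNum is a hypothetical column name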

    Read the article

  • If record exists in database, UPDATE a single column

    - by Doug
    I have a bulk uploading object in place that is being used to bulk upload roughly 25-40 image files at a time. Each image is about 100-150 KB in size. During the upload, I've created a for-each loop that takes the file name of the image (minus the file extension) and writes it into a column named "sku". Also, for each file being uploaded, the date is recorded to a column named DateUpdated, along with some image path data. Here is my C# code:

        protected void graphicMultiFileButton_Click(object sender, EventArgs e)
        {
            // graphicMultiFile is the ID of the bulk uploading object
            // (provided by Dean Brettle: http://www.brettle.com/neatupload)
            if (graphicMultiFile.Files.Length > 0)
            {
                foreach (UploadedFile file in graphicMultiFile.Files)
                {
                    // Strip ".jpg" from the file name (will be assigned as SKU)
                    string sku = file.FileName.Substring(0, file.FileName.Length - 4);

                    // Assign the directory where the images will be stored on the server
                    string directoryPath = Server.MapPath("~/images/graphicsLib/" + file.FileName);

                    // Ensure that if the image exists on the server it gets
                    // overwritten the next time it's uploaded:
                    file.MoveTo(directoryPath, MoveToOptions.Overwrite);

                    // Current SQL that inserts a record into the db
                    SqlCommand comm;
                    SqlConnection conn;
                    string connectionString = ConfigurationManager.ConnectionStrings["DataConnect"].ConnectionString;
                    conn = new SqlConnection(connectionString);
                    comm = new SqlCommand("INSERT INTO GraphicsLibrary (sku, imagePath, DateUpdated) VALUES (@sku, @imagePath, @DateUpdated)", conn);
                    comm.Parameters.Add("@sku", System.Data.SqlDbType.VarChar, 50);
                    comm.Parameters["@sku"].Value = sku;
                    comm.Parameters.Add("@imagePath", System.Data.SqlDbType.VarChar, 300);
                    comm.Parameters["@imagePath"].Value = "images/graphicsLib/" + file.FileName;
                    comm.Parameters.Add("@DateUpdated", System.Data.SqlDbType.DateTime);
                    comm.Parameters["@DateUpdated"].Value = DateTime.Now;
                    conn.Open();
                    comm.ExecuteNonQuery();
                    conn.Close();
                }
            }
        }

    After images are uploaded, managers will go back and re-upload images that have previously been uploaded, because these product images are always being revised and improved. For each new/improved image, the file name and extension will remain the same - so when image 321-54321.jpg was first uploaded to the server, the new/improved version of that image will still have the file name 321-54321.jpg. I can't say for sure whether the file sizes will remain in the 100-150 KB range; I'll assume the image file size will grow eventually. When images get uploaded (again), there will of course already be an existing record in the database for that image. What is the best way to: check the database for the existing record (stored procedure, SqlDataReader, a DataSet...?); then, if the record exists, simply UPDATE it so that the DateUpdated column gets today's date; and if no record exists, do the INSERT as normal? Things to consider: if the record exists, we'll still let the actual image be uploaded - it will simply overwrite the existing image so that the new version gets displayed on the web. We're using SQL Server 2000 in a hosted environment (DiscountASP). I'm programming in C#. The uploading process will be used by about 2 managers a few times a month (each), which to me is not a lot of usage. Although I'm a jr. developer, I'm guessing that a stored procedure would be the way to go - it just seems more efficient to do this record check away from the for-each loop - but I'm not sure.
    I'd need extra help writing a sproc, since I don't have too much experience with them. Thank everyone...
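
    A minimal sketch of the kind of stored procedure being asked for, compatible with SQL Server 2000 (which predates MERGE); the table and columns follow the INSERT above, while the procedure name is made up:

        CREATE PROCEDURE dbo.UpsertGraphicsLibrary
            @sku VARCHAR(50),
            @imagePath VARCHAR(300)
        AS
        BEGIN
            SET NOCOUNT ON

            IF EXISTS (SELECT 1 FROM GraphicsLibrary WHERE sku = @sku)
                -- Record exists: refresh the path and stamp today's date
                UPDATE GraphicsLibrary
                SET imagePath = @imagePath,
                    DateUpdated = GETDATE()
                WHERE sku = @sku
            ELSE
                -- No record yet: insert as normal
                INSERT INTO GraphicsLibrary (sku, imagePath, DateUpdated)
                VALUES (@sku, @imagePath, GETDATE())
        END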

    Read the article

  • SQL SERVER – Introduction to Extended Events – Finding Long Running Queries

    - by pinaldave
    The job of an SQL consultant is always interesting. The month before, I was busy doing query optimization and performance tuning projects for our clients, and this month I am busy delivering my Microsoft SQL Server 2005/2008 Query Optimization and Performance Tuning course. I recently read a white paper about Extended Events by SQL Server MVP Jonathan Kehayias. You can read the white paper here: Using SQL Server 2008 Extended Events. I also read another appealing chapter by Jonathan in the book Professional SQL Server 2008 Internals and Troubleshooting (see my SQLAuthority book review). After reading these excellent notes by Jonathan, I decided to upgrade my course and include Extended Events as one of the modules. This week, I delivered the Extended Events session twice, and attendees really liked it. They think Extended Events is one of the most powerful tools available. Extended Events can do many things; I suggest you read the white paper I mentioned to learn more about this tool. Instead of writing long theory, I am going to share a very quick script for Extended Events. This event session captures all the longest-running queries since the event session was started. One of the many advantages of Extended Events is that it can be configured very easily, and it is a robust method of collecting the information needed for troubleshooting. There are many targets where you can store the information, including the XML file target, which I really like. In the following event session, we write the details of the event to two locations: 1) the ring buffer; and 2) an XML file. It is not necessary to write to both places; either of the two will do.

        -- Extended Event for finding *long running query*
        IF EXISTS(SELECT * FROM sys.server_event_sessions WHERE name='LongRunningQuery')
            DROP EVENT SESSION LongRunningQuery ON SERVER
        GO
        -- Create Event
        CREATE EVENT SESSION LongRunningQuery ON SERVER
        -- Add event to capture event
        ADD EVENT sqlserver.sql_statement_completed
        (
            -- Add action - event property
            ACTION (sqlserver.sql_text, sqlserver.tsql_stack)
            -- Predicate - time 1000 milliseconds
            WHERE sqlserver.sql_statement_completed.duration > 1000
        )
        -- Add target for capturing the data - XML file
        ADD TARGET package0.asynchronous_file_target(
            SET filename='c:\LongRunningQuery.xet', metadatafile='c:\LongRunningQuery.xem'),
        -- Add target for capturing the data - ring buffer
        ADD TARGET package0.ring_buffer
            (SET max_memory = 4096)
        WITH (max_dispatch_latency = 1 seconds)
        GO
        -- Enable Event
        ALTER EVENT SESSION LongRunningQuery ON SERVER STATE=START
        GO
        -- Run long query (longer than 1000 ms)
        SELECT *
        FROM AdventureWorks.Sales.SalesOrderDetail
        ORDER BY UnitPriceDiscount DESC
        GO
        -- Stop the event
        ALTER EVENT SESSION LongRunningQuery ON SERVER STATE=STOP
        GO
        -- Read the data from the ring buffer
        SELECT CAST(dt.target_data AS XML) AS xmlLockData
        FROM sys.dm_xe_session_targets dt
        JOIN sys.dm_xe_sessions ds ON ds.Address = dt.event_session_address
        JOIN sys.server_event_sessions ss ON ds.Name = ss.Name
        WHERE dt.target_name = 'ring_buffer'
        AND ds.Name = 'LongRunningQuery'
        GO
        -- Read the data from the XML file
        SELECT
            event_data_XML.value('(event/data[1])[1]','VARCHAR(100)') AS Database_ID,
            event_data_XML.value('(event/data[2])[1]','INT') AS OBJECT_ID,
            event_data_XML.value('(event/data[3])[1]','INT') AS object_type,
            event_data_XML.value('(event/data[4])[1]','INT') AS cpu,
            event_data_XML.value('(event/data[5])[1]','INT') AS duration,
            event_data_XML.value('(event/data[6])[1]','INT') AS reads,
            event_data_XML.value('(event/data[7])[1]','INT') AS writes,
            event_data_XML.value('(event/action[1])[1]','VARCHAR(512)') AS sql_text,
            event_data_XML.value('(event/action[2])[1]','VARCHAR(512)') AS tsql_stack,
            CAST(event_data_XML.value('(event/action[2])[1]','VARCHAR(512)') AS XML).value('(frame/@handle)[1]','VARCHAR(50)') AS handle
        FROM
        (
            SELECT CAST(event_data AS XML) event_data_XML, *
            FROM sys.fn_xe_file_target_read_file('c:\LongRunningQuery*.xet', 'c:\LongRunningQuery*.xem', NULL, NULL)
        ) T
        GO
        -- Clean up. Drop the event
        DROP EVENT SESSION LongRunningQuery ON SERVER
        GO

    Just run the above script; afterwards you will find the following result set. This result set contains the query that ran for over 1000 ms. In our example I used the XML file target, which does not reset when the SQL service or the computer restarts (if you use the DMV, it resets when the SQL service restarts). This event session can be very helpful for troubleshooting. Let me know if you want me to write more about Extended Events. I am totally fascinated with this feature, so I'm planning to acquire more knowledge about it so I can determine its other uses. Reference: Pinal Dave (http://blog.SQLAuthority.com)
    Filed under: Pinal Dave, SQL, SQL Authority, SQL Optimization, SQL Performance, SQL Query, SQL Scripts, SQL Server, SQL Tips and Tricks, SQL Training, SQLServer, T SQL, Technology Tagged: SQL Extended Events

    Read the article

  • Query.fetch(limit=2000) only moves cursor forward by 1000 entities?

    - by Liron
    Let's say I have 2500 MyModel entities in my datastore, and I run this code:

        query = MyModel.all()
        first_batch = query.fetch(2000)
        len(first_batch)  # 2000
        next_query = MyModel.all().with_cursor(query.cursor())
        next_batch = next_query.fetch(2000)

    What do you think len(next_batch) is? 500, right? Nope - it's 1500. Apparently the query cursor never moves forward by more than 1000, even when the query itself returns more than 1000 entities. Should I do something different, or is it just an App Engine bug?

    Read the article

  • Ikoula: 500 dedicated virtual servers free of charge, available in two configurations and including MySQL and SQLServer

    Ikoula: 500 dedicated virtual servers free of charge, available in two configurations and including MySQL and SQLServer. Ikoula, a cloud computing leader in France, is running a limited promotion on its dedicated virtual server offering. 500 Flex'Servers are being given away to the first customers who sign up for this promotion, which is valid until 31 January, for 3 months and with no term commitment. Built on a robust, high-performance, high-end architecture (Dell, Cisco, ...), iKoula's Flex'Servers can potentially adapt to any need. The number of pro...

    Read the article

  • SQL Monitor’s data repository: Alerts

    - by Chris Lambrou
    In my previous post, I introduced the SQL Monitor data repository, and described how the monitored objects are stored in a hierarchy in the data schema, in a series of tables with a _Keys suffix. In this post I had planned to describe how the actual data for the monitored objects is stored in corresponding tables with _StableSamples and _UnstableSamples suffixes. However, I'm going to postpone that until my next post, as I've had a request from a SQL Monitor user to explain how alerts are stored. In the SQL Monitor data repository, alerts are stored in tables belonging to the alert schema, which contains the following five tables:

        alert.Alert
        alert.Alert_Cleared
        alert.Alert_Comment
        alert.Alert_Severity
        alert.Alert_Type

    In this post, I'm only going to cover the alert.Alert and alert.Alert_Type tables. I may cover the other three tables in a later post. The most important table in this schema is alert.Alert, as each row in this table corresponds to a single alert. So let's have a look at it.

        SELECT TOP 100 AlertId, AlertType, TargetObject, [Read], SubType
        FROM alert.Alert
        ORDER BY AlertId DESC;

        AlertId | AlertType | TargetObject | Read | SubType
        65550 | 39 | 7:Cluster,1,4:Name,s29:srp-mr03.testnet.red-gate.com,9:SqlServer,1,4:Name,s0:, | 1 | 0
        65549 | 38 | 7:Cluster,1,4:Name,s29:srp-mr03.testnet.red-gate.com,7:Machine,1,4:Name,s0:, | 1 | 0
        65548 | 18 | 7:Cluster,1,4:Name,s7:granger,9:SqlServer,1,4:Name,s0:,8:Database,1,4:Name,s15:FavouriteThings, | 0 | 0
        65547 | 15 | 7:Cluster,1,4:Name,s7:granger,9:SqlServer,1,4:Name,s0:,8:Database,1,4:Name,s15:FavouriteThings, | 0 | 0
        65546 | 14 | 7:Cluster,1,4:Name,s7:granger,9:SqlServer,1,4:Name,s0:,8:Database,1,4:Name,s15:FavouriteThings, | 0 | 0
        65545 | 18 | 7:Cluster,1,4:Name,s7:granger,9:SqlServer,1,4:Name,s0:,8:Database,1,4:Name,s14:SqlMonitorData, | 0 | 0
        65544 | 15 | 7:Cluster,1,4:Name,s7:granger,9:SqlServer,1,4:Name,s0:,8:Database,1,4:Name,s14:SqlMonitorData, | 0 | 0
        65543 | 14 | 7:Cluster,1,4:Name,s7:granger,9:SqlServer,1,4:Name,s0:,8:Database,1,4:Name,s14:SqlMonitorData, | 0 | 0
        65542 | 18 | 7:Cluster,1,4:Name,s7:granger,9:SqlServer,1,4:Name,s0:,8:Database,1,4:Name,s4:msdb, | 0 | 0
        65541 | 14 | 7:Cluster,1,4:Name,s7:granger,9:SqlServer,1,4:Name,s0:,8:Database,1,4:Name,s4:msdb, | 0 | 0
        ...

    So what are we seeing here, then? Well, AlertId is an auto-incrementing identity column, so ORDER BY AlertId DESC ensures that we see the most recent alerts first. AlertType indicates the type of each alert, such as Job failed (6), Backup overdue (14) or Long-running query (12). The TargetObject column indicates which monitored object the alert is associated with. The Read column acts as a flag to indicate whether or not the alert has been read. And finally the SubType column is used in the case of a Custom metric (40) alert, to indicate which custom metric the alert pertains to. Okay, now let's look at some of those columns in more detail. The AlertType column is an easy one to start with, and it brings us nicely to the next table, alert.Alert_Type. Let's have a look at what's in this table:

        SELECT AlertType, Event, Monitoring, Name, Description
        FROM alert.Alert_Type
        ORDER BY AlertType;

        AlertType | Event | Monitoring | Name | Description
        1 | 0 | 0 | Processor utilization | Processor utilization (CPU) on a host machine stays above a threshold percentage for longer than a specified duration.
        2 | 1 | 0 | SQL Server error log entry | An error is written to the SQL Server error log with a severity level above a specified value.
        3 | 1 | 0 | Cluster failover | The active cluster node fails, causing the SQL Server instance to switch nodes.
        4 | 1 | 0 | Deadlock | SQL deadlock occurs.
        5 | 0 | 0 | Processor under-utilization | Processor utilization (CPU) on a host machine remains below a threshold percentage for longer than a specified duration.
        6 | 1 | 0 | Job failed | A job does not complete successfully (the job returns an error code).
        7 | 0 | 0 | Machine unreachable | Host machine (Windows server) cannot be contacted on the network.
        8 | 0 | 0 | SQL Server instance unreachable | The SQL Server instance is not running or cannot be contacted on the network.
        9 | 0 | 0 | Disk space | Disk space used on a logical disk drive is above a defined threshold for longer than a specified duration.
        10 | 0 | 0 | Physical memory | Physical memory (RAM) used on the host machine stays above a threshold percentage for longer than a specified duration.
        11 | 0 | 0 | Blocked process | SQL process is blocked for longer than a specified duration.
        12 | 0 | 0 | Long-running query | A SQL query runs for longer than a specified duration.
        14 | 0 | 0 | Backup overdue | No full backup exists, or the last full backup is older than a specified time.
        15 | 0 | 0 | Log backup overdue | No log backup exists, or the last log backup is older than a specified time.
        16 | 0 | 0 | Database unavailable | Database changes from Online to any other state.
        17 | 0 | 0 | Page verification | Torn Page Detection or Page Checksum is not enabled for a database.
        18 | 0 | 0 | Integrity check overdue | No entry for an integrity check (DBCC DBINFO returns no date for dbi_dbccLastKnownGood field), or the last check is older than a specified time.
        19 | 0 | 0 | Fragmented indexes | Fragmentation level of one or more indexes is above a threshold percentage.
        24 | 0 | 0 | Job duration unusual | The duration of a SQL job deviates from its baseline duration by more than a threshold percentage.
        25 | 0 | 1 | Clock skew | System clock time on the Base Monitor computer differs from the system clock time on a monitored SQL Server host machine by a specified number of seconds.
        27 | 0 | 0 | SQL Server Agent Service status | The SQL Server Agent Service status matches the status specified.
        28 | 0 | 0 | SQL Server Reporting Service status | The SQL Server Reporting Service status matches the status specified.
        29 | 0 | 0 | SQL Server Full Text Search Service status | The SQL Server Full Text Search Service status matches the status specified.
        30 | 0 | 0 | SQL Server Analysis Service status | The SQL Server Analysis Service status matches the status specified.
        31 | 0 | 0 | SQL Server Integration Service status | The SQL Server Integration Service status matches the status specified.
        33 | 0 | 0 | SQL Server Browser Service status | The SQL Server Browser Service status matches the status specified.
        34 | 0 | 0 | SQL Server VSS Writer Service status | The SQL Server VSS Writer status matches the status specified.
        35 | 0 | 1 | Deadlock trace flag disabled | The monitored SQL Server's trace flag cannot be enabled.
        36 | 0 | 0 | Monitoring stopped (host machine credentials) | SQL Monitor cannot contact the host machine because authentication failed.
        37 | 0 | 0 | Monitoring stopped (SQL Server credentials) | SQL Monitor cannot contact the SQL Server instance because authentication failed.
        38 | 0 | 0 | Monitoring error (host machine data collection) | SQL Monitor cannot collect data from the host machine.
        39 | 0 | 0 | Monitoring error (SQL Server data collection) | SQL Monitor cannot collect data from the SQL Server instance.
        40 | 0 | 0 | Custom metric | The custom metric value has passed an alert threshold.
        41 | 0 | 0 | Custom metric collection error | SQL Monitor cannot collect custom metric data from the target object.
    Basically, alert.Alert_Type is just a big reference table containing information about the 34 different alert types supported by SQL Monitor (note that the largest id is 41, not 34 - some alert types have been retired since SQL Monitor was first developed). The Name and Description columns are self-evident, and I'm going to skip over the Event and Monitoring columns as they're not very interesting. The AlertType column is the primary key, and is referenced by AlertType in the alert.Alert table. As such, we can rewrite our earlier query to join these two tables, in order to provide a more readable view of the alerts:

        SELECT TOP 100 AlertId, Name, TargetObject, [Read], SubType
        FROM alert.Alert a
        JOIN alert.Alert_Type at ON a.AlertType = at.AlertType
        ORDER BY AlertId DESC;

        AlertId | Name | TargetObject | Read | SubType
        65550 | Monitoring error (SQL Server data collection) | 7:Cluster,1,4:Name,s29:srp-mr03.testnet.red-gate.com,9:SqlServer,1,4:Name,s0:, | 0 | 0
        65549 | Monitoring error (host machine data collection) | 7:Cluster,1,4:Name,s29:srp-mr03.testnet.red-gate.com,7:Machine,1,4:Name,s0:, | 0 | 0
        65548 | Integrity check overdue | 7:Cluster,1,4:Name,s7:granger,9:SqlServer,1,4:Name,s0:,8:Database,1,4:Name,s15:FavouriteThings, | 0 | 0
        65547 | Log backup overdue | 7:Cluster,1,4:Name,s7:granger,9:SqlServer,1,4:Name,s0:,8:Database,1,4:Name,s15:FavouriteThings, | 0 | 0
        65546 | Backup overdue | 7:Cluster,1,4:Name,s7:granger,9:SqlServer,1,4:Name,s0:,8:Database,1,4:Name,s15:FavouriteThings, | 0 | 0
        65545 | Integrity check overdue | 7:Cluster,1,4:Name,s7:granger,9:SqlServer,1,4:Name,s0:,8:Database,1,4:Name,s14:SqlMonitorData, | 0 | 0
        65544 | Log backup overdue | 7:Cluster,1,4:Name,s7:granger,9:SqlServer,1,4:Name,s0:,8:Database,1,4:Name,s14:SqlMonitorData, | 0 | 0
        65543 | Backup overdue | 7:Cluster,1,4:Name,s7:granger,9:SqlServer,1,4:Name,s0:,8:Database,1,4:Name,s14:SqlMonitorData, | 0 | 0
        65542 | Integrity check overdue | 7:Cluster,1,4:Name,s7:granger,9:SqlServer,1,4:Name,s0:,8:Database,1,4:Name,s4:msdb, | 0 | 0
        65541 | Backup overdue | 7:Cluster,1,4:Name,s7:granger,9:SqlServer,1,4:Name,s0:,8:Database,1,4:Name,s4:msdb, | 0 | 0

    Okay, the next column to discuss in the alert.Alert table is TargetObject. Oh boy, this one's a bit tricky! The TargetObject of an alert is a serialized string representation of the position in the monitored object hierarchy of the object to which the alert pertains. The serialization format is somewhat convenient for parsing in the C# source code of SQL Monitor, and has some helpful characteristics, but it's probably very awkward to manipulate in T-SQL. I could document the serialization format here, but it would be very dry reading, so perhaps it's best to consider an example from the table above. Have a look at the alert with an AlertId of 65543. It's a Backup overdue alert for the SqlMonitorData database running on the default instance of granger, my laptop. Each different alert type is associated with a specific type of monitored object in the object hierarchy (I described the hierarchy in my previous post). The Backup overdue alert is associated with databases, whose position in the object hierarchy is root → Cluster → SqlServer → Database. The TargetObject value identifies the target object by specifying the key properties at each level in the hierarchy, thus:

        Cluster:   Name = "granger"
        SqlServer: Name = "" (an empty string, denoting the default instance)
        Database:  Name = "SqlMonitorData"

    Well, look at the actual TargetObject value for this alert: "7:Cluster,1,4:Name,s7:granger,9:SqlServer,1,4:Name,s0:,8:Database,1,4:Name,s14:SqlMonitorData,".
    It is indeed composed of three parts, one for each level in the hierarchy:

        Cluster:   "7:Cluster,1,4:Name,s7:granger,"
        SqlServer: "9:SqlServer,1,4:Name,s0:,"
        Database:  "8:Database,1,4:Name,s14:SqlMonitorData,"

    Each part is handled in exactly the same way, so let's concentrate on the first part, "7:Cluster,1,4:Name,s7:granger,". It comprises the following:

        "7:Cluster," – This identifies the level in the hierarchy.
        "1," – This indicates how many different key properties there are to uniquely identify a cluster (we saw in my last post that each cluster is identified by a single property, its Name).
        "4:Name,s7:granger," – This represents the Name property and its corresponding value, granger. It's split up like this:
            "4:Name," – Indicates the name of the key property.
            "s" – Indicates the type of the key property; in this case, it's a string.
            "7:granger," – Indicates the value of the property.

    At this point, you might be wondering about the format of some of these strings. Why is the string "Cluster" stored as "7:Cluster,"? Well, an encoding scheme is used, which consists of the following:

        "7" – The length of the string "Cluster".
        ":" – A delimiter between the length of the string and the actual string's contents.
        "Cluster" – The string itself. 7 characters.
        "," – A final terminating character that indicates the end of the encoded string.

    You can see that "4:Name,", "8:Database," and "14:SqlMonitorData," also conform to the same encoding scheme. In the example above, the "s" character is used to indicate that the value of the Name property is a string. If you explore the TargetObject property of alerts in your own SQL Monitor data repository, you might find other characters used for other non-string key property values. The different value types you might possibly encounter are as follows:

        "I" – Denotes a bigint value. For example, "I65432,".
        "g" – Denotes a GUID value. For example, "g32116732-63ae-4ab5-bd34-7dfdfb084c18,".
        "d" – Denotes a datetime value. For example, "d634815384796832438,". The value is stored as a bigint, rather than a native SQL datetime value. I'll describe how datetime values are handled in the SQL Monitor data repository in a future post.

    I suggest you have a look at the alerts in your own SQL Monitor data repository for further examples, so you can see how the TargetObject values are composed for each of the different types of alert. Let me give one further example, though, that represents a Custom metric alert, as this will help in describing the final column of interest in the alert.Alert table, SubType. Let me show you the alert I'm interested in:

        SELECT AlertId, a.AlertType, Name, TargetObject, [Read], SubType
        FROM alert.Alert a
        JOIN alert.Alert_Type at ON a.AlertType = at.AlertType
        WHERE AlertId = 65769;

        AlertId | AlertType | Name | TargetObject | Read | SubType
        65769 | 40 | Custom metric | 7:Cluster,1,4:Name,s7:granger,9:SqlServer,1,4:Name,s0:,8:Database,1,4:Name,s6:master,12:CustomMetric,1,8:MetricId,I2, | 0 | 2
    An AlertType value of 40 corresponds to the Custom metric alert type. The Name taken from the alert.Alert_Type table is simply Custom metric, but this doesn't tell us anything about the specific custom metric that this alert pertains to. That's where the SubType value comes in. For custom metric alerts, this provides us with the Id of the specific custom alert definition, which can be found in the settings.CustomAlertDefinitions table. I don't really want to delve into custom alert definitions yet (maybe in a later post), but an extra join in the previous query shows us that this alert pertains to the CPU pressure (avg runnable task count) custom metric alert.

        SELECT AlertId, a.AlertType, at.Name, cad.Name AS CustomAlertName, TargetObject, [Read], SubType
        FROM alert.Alert a
        JOIN alert.Alert_Type at ON a.AlertType = at.AlertType
        JOIN settings.CustomAlertDefinitions cad ON a.SubType = cad.Id
        WHERE AlertId = 65769;

        AlertId | AlertType | Name | CustomAlertName | TargetObject | Read | SubType
        65769 | 40 | Custom metric | CPU pressure (avg runnable task count) | 7:Cluster,1,4:Name,s7:granger,9:SqlServer,1,4:Name,s0:,8:Database,1,4:Name,s6:master,12:CustomMetric,1,8:MetricId,I2, | 0 | 2

    The TargetObject value in this case breaks down like this:

        "7:Cluster,1,4:Name,s7:granger," – Cluster named "granger".
        "9:SqlServer,1,4:Name,s0:," – SqlServer named "" (the default instance).
        "8:Database,1,4:Name,s6:master," – Database named "master".
        "12:CustomMetric,1,8:MetricId,I2," – Custom metric with an Id of 2.

    Note that the hierarchy for a custom metric is slightly different compared to the earlier Backup overdue alert. It's root → Cluster → SqlServer → Database → CustomMetric. Also notice that, unlike Cluster, SqlServer and Database, the key property for CustomMetric is called MetricId (not Name), and the value is a bigint (not a string). Finally, delving into the custom metric tables is beyond the scope of this post, but for the sake of avoiding any future confusion, I'd like to point out that whilst the SubType references a custom alert definition, the MetricId value embedded in the TargetObject value references a custom metric definition. Although in this case both the custom metric definition and the custom alert definition share the same Id value of 2, this is not generally the case. Okay, that's enough for now, not least because as I'm typing this, it's almost 2am, I have to go to work tomorrow, and my alarm is set for 6am – eek! In my next post, I'll either cover the remaining three tables in the alert schema, or I'll delve into the way SQL Monitor stores its monitoring data, as I'd originally planned to cover in this post.

    Read the article

  • How do I send a newsletter to 2000 emails on a shared server?

    - by Bogdan
    Consider this scenario: I'm running Magento 1.4 Community Edition; I have a newsletter that I send out 2 times per week; my list has 2000 subscribers; I'm on a shared hosting plan (Linux on Apache 2.2 with cPanel); and my hosting provider limits me to 6 emails/minute. How can I send 2000 emails x 2 times/week x 4 weeks/month in this scenario? (At 6 emails/minute, a single 2000-recipient send takes over 5.5 hours.) Is there a server configuration I can ask my hosting company to make? Is there any software that can temporize the email send?

    Read the article

  • How to convert lots of database files from MSSQL 2000 to MSSQL 2005?

    - by Tech
    Hi all, I am moving from SQL Server 2000 (MSSQL 2000) to SQL Server 2005 (MSSQL 2005), and I found this article on the web: http://www.aspfree.com/c/a/MS-SQL-Server/Moving-Data-from-SQL-Server-2000-to-SQL-Server-2005/ It works, but the problem is that it only moves one database at a time. Because I have so many databases, is there an easier way to do this, or is there a batch utility that would let me do it? Thank you.
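
    One common batch approach is backup/restore, since restoring a SQL Server 2000 backup on a 2005 instance upgrades the database in place. A sketch that generates one BACKUP statement per user database on the 2000 server (the backup path is hypothetical):

        -- Run on the SQL Server 2000 instance; execute the generated statements,
        -- then RESTORE each .bak file on the 2005 server
        SELECT 'BACKUP DATABASE [' + name + '] TO DISK = ''D:\Backups\' + name + '.bak'''
        FROM master.dbo.sysdatabases
        WHERE name NOT IN ('master', 'model', 'msdb', 'tempdb')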

    Read the article

  • How do I position a 2D camera in OpenGL?

    - by Elfayer
    I can't understand how the camera is working. It's a 2D game, so I'm displaying a game map from (0, 0, 0) to (mapSizeX, 0, mapSizeY). I'm initializing the camera as follows:

        Camera::Camera(void)
            : position_(0.0f, 0.0f, 0.0f), rotation_(0.0f, 0.0f, -1.0f)
        {}

        void Camera::initialize(void)
        {
            glMatrixMode(GL_PROJECTION);
            glLoadIdentity();
            glTranslatef(position_.x, position_.y, position_.z);
            gluPerspective(70.0f, 800.0f/600.0f, 1.0f, 10000.0f);
            gluLookAt(0.0f, 6000.0f, 0.0f, 0.0f, 0.0f, -1.0f, 0.0f, 1.0f, 0.0f);
            glMatrixMode(GL_MODELVIEW);
            glLoadIdentity();
            glEnable(GL_DEPTH_TEST);
            glDepthFunc(GL_LEQUAL);
        }

    So the camera is looking down. I currently see the upper-right border of the map in the center of my window, and the map extends to the lower-left border of my window. I would like to center the map. The logical thing to do should be to move the camera to eyeX = mapSizeX / 2, and the same for z. My map has 10 x 10 cases with CASE = 400, so I should have:

        gluLookAt((10 / 2) * CASE /* = 2000 */, 6000.0f, (10 / 2) * CASE /* = 2000 */,
                  0.0f, 0.0f, -1.0f,
                  0.0f, 1.0f, 0.0f);

    But that doesn't move the camera; it seems to rotate it instead. Am I doing something wrong?

    EDIT: I tried this:

        gluLookAt(2000.0f, 6000.0f, 0.0f, 2000.0f, 0.0f, -1.0f, 0.0f, 1.0f, 0.0f);

    which correctly centers the map in the window along the width, but I can't move it correctly along the height - it always comes back along the Z axis: when I go up, it goes down, and the same for right and left. I don't see the map anymore when I do:

        gluLookAt(2000.0f, 6000.0f, 2000.0f, 2000.0f, 0.0f, 2000.0f, 0.0f, 1.0f, 0.0f);

    Read the article

  • CONTAINSTABLE with wildcard works differently in SQLServer 2005 and SQLServer 2008?

    - by musuk
    I have two copies of the same database, one on SQL Server 2005 and one on SQL Server 2008. Both have the same SQL_Latin1_General_CP1_CI_AS collation, and the full-text search catalogs have the same settings. The two databases contain a table with the same data, an NTEXT string: "...kræve en forklaring fra miljøminister Connie Hedegaard..." My problem is: CONTAINSTABLE on SQL Server 2008 finds nothing with this query:

        select * from ContainsTable(SearchIndex_7, Content, '"miljø*"') ct

    but SQL Server 2005 works perfectly and finds the necessary record. SQL Server 2008 finds the necessary record if the query is:

        select * from ContainsTable(SearchIndex_7, Content, '"milj*"') ct

    or

        select * from ContainsTable(SearchIndex_7, Content, '"miljøminister"')

    What could be the reason for such strange behavior?

    Read the article

  • SQLServer using too much memory

    - by Israel Pereira Valverde
    I have installed SQL Server 2008 R2 Express on my desktop machine (Windows 7). I have only one local server running (./SQLEXPRESS), but the sqlserver process is taking all the RAM it can. On a machine with 3GB of RAM things start to get slow, so I limited the maximum amount of memory in the server, and now SQL Server constantly gives error messages saying that there is not enough memory. It's using 1GB of RAM with only one LOCAL server and 2 completely empty databases - how is 1GB of RAM not enough? When the process starts, it uses a really acceptable amount of memory (around 80MB), but it keeps increasing until it reaches the defined maximum and starts to complain about not having enough memory available. At that point I have to restart the server to use it again. I have read about a hotfix to solve one of the errors I got from SQL Server: "There is insufficient system memory in resource pool 'internal' to run this query." But it's already installed on my SQL Server. Why is it using so much memory?
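
    For reference, a minimal sketch of how the memory cap itself is set from T-SQL - the 512 MB figure is purely illustrative. Note that 'max server memory' caps the buffer pool, not SQL Server's total process memory, which is one reason usage can exceed the configured limit:

        EXEC sp_configure 'show advanced options', 1;
        RECONFIGURE;
        -- Cap the buffer pool at 512 MB (illustrative value)
        EXEC sp_configure 'max server memory (MB)', 512;
        RECONFIGURE;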

    Read the article

  • SQL Profiler: Read/Write units

    - by Ian Boyd
    I've picked a query out of SQL Server Profiler that says it took 1,497 reads:

        EventClass: SQL:BatchCompleted
        TextData:   SELECT Transactions....
        CPU:        406
        Reads:      1497
        Writes:     0
        Duration:   406

    So I've taken this query into Query Analyzer to try to reduce the number of reads. But when I turn on SET STATISTICS IO ON to see the IO activity for the query, I get nowhere close to one thousand reads:

        Table                 Scan Count  Logical Reads
        ===================   ==========  =============
        FintracTransactions   4           20
        LCDs                  2           4
        LCTs                  2           4
        FintracTransacti...   0           0
        Users                 1           2
        MALs                  0           0
        Patrons               0           0
        Shifts                1           2
        Cages                 1           1
        Windows               1           3
        Logins                1           3
        Sessions              1           6
        Transactions          1           7

    If I do my math right, that is a total of 52 logical reads - not 1,497. So I assume Reads in SQL Profiler is an arbitrary metric. Does anyone know the conversion of SQL Server Profiler Reads to IO Reads? See also: SQL Profiler CPU / duration unit; Query Analyzer VS. Query Profiler Reads, Writes, and Duration Discrepancies.

    Read the article

  • com0com silent install (test signed com0com.sys shows up as signed in explorer but not in Device Manager)

    - by Andrew
    My goal is to have the com0com serial driver install without popping up the install wizard on both WinXP and Win2000. I am working on WinXP x86. I have followed the test signing instructions for the com0com driver, replacing amd64 with i386 at line 60. I have added my test certificate as both a root and trustedprovider using the following commands: certmgr /add com0com.cer /r localMachine root certmgr /add com0com.cer /r localMachine trustedprovider And verified that it is listed under both locations. I then run the newly built setup.exe. This installs the signed com0com.sys file into C:\WINDOWS\system32\DRIVERS and sets up a pair of virtual serial ports and a bus between them. Using explorer, I go to the DRIVERS directory, right click on the com0com.sys file and verify that it has the "test" digital signature. I then go into Device Manager, open the "com0com serial port emulators" entry, pick an entry and do Properties-Driver and see that it says "Not digitally signed". I click details for the driver and can see that it is referring to the com0com.sys driver file that I just confirmed is signed. I found what might be a related issue but I'm not sure. Does WinXP demand a WHQL signature? If so, does that explain why the com0com.sys file is signed but the device driver entries say they aren't signed?

    Read the article

  • Trigger Code on a table in my ERP Database

    - by David Stein
    My ERP vendor has the following trigger on a table:

        SET ANSI_NULLS ON
        GO
        SET QUOTED_IDENTIFIER ON
        GO
        CREATE TRIGGER [dbo].[SOItem_DeleteCheck]
            ON [dbo].[soitem]
            FOR DELETE
        AS
        BEGIN
            DECLARE @RecCnt int, @LogInfo varchar(256)

            SET @RecCnt = (SELECT COUNT(*) FROM deleted)

            IF @RecCnt > 150
            BEGIN
                RAISERROR (54010, 18, 1, 'SOItem') WITH LOG
                ROLLBACK TRANSACTION
            END

            SET @LogInfo = 'Deleting ' + LTRIM(STR(@RecCnt)) + ' Rows From SOItem'
            EXEC LogDeletes @LogInfo
        END
        GO

    This seems very inefficient to me. Doesn't SELECT COUNT(*) take longer than COUNT(specific field)?

    Read the article

  • SQL Server: Can INFORMATION_SCHEMA Tell Me When a SQL Object Was Created?

    - by Jim G.
    Given: I added a non-nullable foreign key to a table. I settled on a way to populate the foreign key with default values. I checked in both changes to a re-runnable DB creation script. Other developers ran this script, and now they have the foreign key populated with default values on their developer machines. A few days later however... I changed my mind. Now I'd like to populate the foreign key's default values differently. My Question: Can SQL Server or INFORMATION_SCHEMA tell me when SQL objects were created? Ideally, I'd like to drop and re-add the foreign key if it was created before a certain date/time. Any help or alternative strategies would be greatly appreciated. Obviously, I'd like to avoid going to each developer's cube, asking them to drop the foreign key manually.
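
    For what it's worth, INFORMATION_SCHEMA exposes no creation-date column for constraints, but sys.objects (SQL Server 2005 and later) does. A sketch with hypothetical constraint and table names, and an illustrative cutoff date:

        -- sys.objects records when each object was created
        SELECT name, create_date, modify_date
        FROM sys.objects
        WHERE type = 'F'                        -- foreign key constraints
          AND name = 'FK_Orders_CustomerId';    -- hypothetical constraint name

        -- Drop the key only if it predates the cutoff, so it can be re-added
        IF EXISTS (SELECT 1 FROM sys.objects
                   WHERE name = 'FK_Orders_CustomerId'
                     AND type = 'F'
                     AND create_date < '2010-05-01')
            ALTER TABLE dbo.Orders DROP CONSTRAINT FK_Orders_CustomerId;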

    Read the article

  • SQL Server 2005 Table Alter History

    - by Kayes
    Hi. Does SQL Server maintain any history for tracking table alterations, such as column adds, deletes, renames, or type/length changes? I have found many suggestions to do this manually with stored procedures, but I'm curious whether SQL Server keeps such a history in any system tables? Thanks.
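
    As a sketch of the manual approach: SQL Server 2005 keeps no alter history itself, but it does support DDL triggers, which can record one. The dbo.DdlLog table here is hypothetical:

        CREATE TRIGGER trgTrackTableAlters
        ON DATABASE
        FOR ALTER_TABLE
        AS
        BEGIN
            -- EVENTDATA() returns XML describing the DDL statement that fired the trigger
            INSERT INTO dbo.DdlLog (EventDate, EventData)
            VALUES (GETDATE(), EVENTDATA());
        END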

    Read the article

  • SQL Server: Must numbers all be specified with Latin numeral digits?

    - by Ian Boyd
    Does SQL Server expect numbers to be specified with digits from the Latin alphabet, e.g.:

        0123456789

    Is it valid to give SQL Server digits in other scripts? Rosetta Stone:

        Latin:   0123456789
        Arabic:  ٠١٢٣٤٥٦٧٨٩
        Bengali: ০১২৩৪৫৬৭৮৯

    I know that the client (ADO) will convert 8-bit strings to 16-bit Unicode strings using the current culture. But the client is also converting numbers to strings using the current culture, e.g.:

        SELECT * FROM Inventory WHERE Quantity > ١٢٣,٤٥

    which throws SQL Server for fits. I know that the server/database has its defined code page and locale, but that is for strings. Will SQL Server interpret numbers using the active (or per-login specified) locale, or must all numeric values be specified with Latin numeral digits?

    Read the article

  • What can be a cookie? How to set one from an OUTPUT parameter or RETURN_VALUE?

    - by Ronnie Chester Lynwood
    Hello. I think I have some problems setting cookie data with this code:

        Set cmdDB = Server.CreateObject("ADODB.Command")
        With cmdDB
            .ActiveConnection = ADOConM
            .CommandText = "usp_jaljava_member_select"
            .CommandType = adCmdStoredProc
            .Parameters.Append .CreateParameter("RETURN_VALUE", adInteger, adParamReturnValue, 0)
            .Parameters.Append .CreateParameter("@TLoginName", adVarChar, adParamInput, 15, lcase(TLoginName))
            .Parameters.Append .CreateParameter("@TPassword", adVarChar, adParamInput, 20, TPassword)
            .Parameters.Append .CreateParameter("@retval", adVarChar, adParamOutput, 50)
            ' .Parameters.Append .CreateParameter("@TPinCode", adVarChar, adParamInput, 15, TPinCode)
            .Execute , , adExecuteNoRecords
            RetVal = .Parameters("@retval")
            Ret = Trim(.Parameters("RETURN_VALUE"))
            'Set .ActiveConnection = Nothing
        End With
        Set cmdDB = Nothing

        UTid = RetVal

        if Ret = 100 then
            deleteInvalidLogin(TLoginName)
            SetDomainCookie "UTid", UTid
            SetDomainCookie "Uid", TLoginName
            if redirect_domain <> "" then
                Response.Write "<form name=frm action=" & urlserver & " method=post><input type=hidden name=loginname value='" & TLoginName & "'><input type=hidden name=id value=""" & Request.Cookies("UTID") & """></form><script>frm.submit();</script>"
                Response.End
            else
                Response.Redirect ("kologin.asp?id=OK")
                Response.End
            end if

    RETURN_VALUE comes back as 100, but I don't know what UTid should be. If I set UTid the same as Uid, will it work? Thanks.

    Read the article

  • Delphi Application using COMMIT and ROLLBACK for Multiple SQL Updates

    - by Matt
    Is it possible to use SQL's BEGIN TRANSACTION, COMMIT TRANSACTION and ROLLBACK TRANSACTION when embedding SQL queries into an application that makes multiple calls to SQL for table updates? For example, I have the following code:

        Q.SQL.Add(<UPDATE A RECORD>);
        Q.ExecSQL;
        Q.Close;
        Q.SQL.Clear;

        Q.SQL.Add(<Select Some Data>);
        Q.Open;
        { set some variables }
        Q.Close;
        Q.SQL.Clear;

        Q.SQL.Add(<UPDATE A RECORD>);
        Q.ExecSQL;

    What I would like to do is roll back the first update if the second update fails. If I set a unique notation for the BEGIN, COMMIT and ROLLBACK so as to specify what is being committed or rolled back, is it feasible? I.e., before the first update specify BEGIN TRANSACTION_A, then after the last update specify COMMIT TRANSACTION_A. I hope that makes sense. If I were doing this in a SQL stored procedure, I would be able to specify this at the start and end of the procedure, but I have had to break the code down into manageable chunks due to process blocks and deadlocks on a heavily loaded SQL Server.
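
    For reference, T-SQL does support named transactions in the spirit described. A minimal sketch with hypothetical table and column names (note that only the outermost transaction can be rolled back by name):

        BEGIN TRANSACTION TranA;

        UPDATE dbo.Orders SET Status = 'Shipped' WHERE OrderID = 1;
        IF @@ERROR <> 0
        BEGIN
            ROLLBACK TRANSACTION TranA;  -- undoes everything since BEGIN
            RETURN;
        END

        UPDATE dbo.Stock SET Quantity = Quantity - 1 WHERE ItemID = 42;
        IF @@ERROR <> 0
        BEGIN
            ROLLBACK TRANSACTION TranA;  -- first update is rolled back too
            RETURN;
        END

        COMMIT TRANSACTION TranA;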

    Read the article
