Search Results

Search found 33316 results on 1333 pages for 'sql team'.

Page 527/1333 | < Previous Page | 523 524 525 526 527 528 529 530 531 532 533 534  | Next Page >

  • Why does a trigger query fail in SQLite with Qt?

    - by dexterous_stranger
    I am a beginner in SQL. I am using SQLite with Qt on embedded systems. I want to add a trigger: whenever the primary key Id is greater than 32145, channelNum=101 should be set. I also want to set the attrib name (a text column), but I get a compilation issue. I believe that creating a trigger is part of DDL (Data Definition Language); please let me know if I am wrong here. Here is my database-creation code; I get an SQL query error. Please also suggest how to set the text attrib = "Comedy". /** associate db with query **/ QSqlQuery query ( m_demo_db ); /** Foreign keys are disabled by default in sqlite **/ /** Here is the pragma to turn them on first **/ query.exec("PRAGMA foreign_keys = ON;"); if ( false == query.exec()) { qDebug()<<"Pragma failed"; } /** Create Table for storing user preference LCN for DTT **/ qDebug()<<"Create Table postcode.db"; query.prepare(" CREATE TABLE dttServiceList (Id INTEGER PRIMARY KEY, attrib varchar(20), channelNum integer )" ); if ( false == query.exec()) { qDebug()<<"Create dttServiceList table failed"; } /** Try placing trigger here **/ triggerQuery = "CREATE TRIGGER upd_check BEFORE INSERT ON dttServiceList \ FOR EACH ROW \ BEGIN \ IF Id > 32145 THEN SET channelNum=101; \ END IF; \ END; "; query.prepare(triggerQuery); if ( false == query.exec()) { qDebug()<<"Trigger failed !!"; qDebug() << query.lastError(); } Also, how do I set the text name in the trigger? I want to SET attrib = "Comedy".
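
    A sketch of how the trigger could be written in SQLite's own dialect, since the IF ... THEN SET ... END IF block above is MySQL-style syntax that SQLite rejects. SQLite uses a WHEN clause instead of IF, and a BEFORE INSERT trigger cannot modify the NEW row, so the usual workaround is an AFTER INSERT trigger that updates the freshly inserted row (table and column names taken from the question; treat this as a sketch, not tested on the device):

        CREATE TRIGGER upd_check AFTER INSERT ON dttServiceList
        FOR EACH ROW
        WHEN NEW.Id > 32145
        BEGIN
            -- set both the channel number and the text attribute on the row just inserted
            UPDATE dttServiceList
            SET channelNum = 101,
                attrib     = 'Comedy'
            WHERE Id = NEW.Id;
        END;

    Passed to query.prepare() as a single string, this answers the attrib question too: the text column is set with a normal UPDATE assignment rather than a SET statement.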

    Read the article

  • Why use Entity Framework over Linq2SQL if...

    - by Refracted Paladin
    To be clear, I am not asking for a side-by-side comparison, which has already been asked ad nauseam here on SO. I am also not asking if Linq2Sql is dead, as I don't care. What I am asking is this: I am building internal apps only for a non-profit organization. I am the only developer on staff. We ALWAYS use SQL Server as our database backend. I design and build the databases as well. I have used L2S successfully a couple of times already. Taking all this into consideration, can someone offer me a compelling reason that I should use EF instead of L2S? I was at Code Camp this weekend and, after an hour-long demonstration on EF, all of which I could have done in L2S, I asked this same question. The speaker's answer was, "L2S is dead..." Very well then! NOT! (see here) I understand EF is what MS WANTS us to use in the future (see here) and that it offers many more customization options. What I can't figure out is if any of that should, or does, matter for me in this environment. One particular issue we have here is that I inherited the Core App, which was built on 4 different SQL databases. L2S has great difficulty with this, but when I asked the aforementioned speaker if EF would help me in this regard he said "No!"

    Read the article

  • How to check data in a column separated by a colon (:)

    - by tonsils
    Hi, I have an Oracle database column, say col1, that holds values such as: Col1 (A:B:C). I now need to add further values to Col1, but only if they don't already exist, and I am unsure how to check whether Col1 already contains them. The scenarios are as follows: 1) Need to add B => check Col1: B exists, do not add. 2) Need to add A:C => check Col1: A and C exist, do not add. 3) Need to add C:D => check Col1: C exists but D doesn't, so do not add C but do add D. 4) Need to add G => check Col1: G doesn't exist, so add G. Using Oracle SQL or PL/SQL, I am unsure how to process the above to determine whether items exist or not and whether or not to append them to Col1. Hoping someone can assist with a means of checking the data based on the above. Thanks
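
    A sketch of one way to test membership in the colon-delimited list with plain Oracle SQL: wrap both the column and the candidate value in ':' delimiters and use INSTR, appending only when the value is absent. The table name my_table and the literal 'D' are placeholders:

        -- does Col1 already contain the value 'D'?
        SELECT CASE
                 WHEN INSTR(':' || col1 || ':', ':' || 'D' || ':') > 0 THEN 'exists'
                 ELSE 'missing'
               END AS d_status
        FROM   my_table;

        -- append 'D' only where it is not already present
        UPDATE my_table
        SET    col1 = col1 || ':' || 'D'
        WHERE  INSTR(':' || col1 || ':', ':' || 'D' || ':') = 0;

    For a multi-valued input such as C:D, the same check can be run once per element inside a small PL/SQL loop over the split values.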

    Read the article

  • UPDATE Table SET Field

    - by davlyo
    This is my very first post! Bear with me. I have an UPDATE statement and I am trying to understand how SQL Server handles it: UPDATE a SET a.vField3 = b.vField3 FROM tableName a INNER JOIN tableName b ON a.vField1 = b.vField1 AND b.nField2 = a.nField2 - 1. This is my query in its simplest form: vField1 is a varchar, nField2 is an int (autonumber), vField3 is a varchar. I have left the WHERE clause out, so understand there is logic that otherwise makes this a necessity. Say vField1 is a customer number and that customer has 3 records; the values in nField2 are 1, 2, and 3 consecutively, and vField3 is a status. When the update comes to a.nField2 = 1 there is no a.nField2 - 1, so it continues. When the update comes to a.nField2 = 2, b.nField2 = 1. When the update comes to a.nField2 = 3, b.nField2 = 2. So when the update is on a.nField2 = 2, alias b reflects the row prior (b.nField2 = 1) and it SETs a.vField3 = b.vField3. When the update is on a.nField2 = 3, alias b reflects the row prior (b.nField2 = 2) and it (should) SET a.vField3 = b.vField3. When the process is complete, the second of the three records looks as expected: the value in vField3 of the second record reflects the value in vField3 from the first record. However, vField3 of the third record does not reflect the value in vField3 from the second record. I think this demonstrates that SQL Server evaluates the whole statement against the data as it stood before the update began, rather than row by row. Question: how can I get the database to update after each step so I can reference the values generated by the previous step?
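
    The behaviour described is expected: a single UPDATE is atomic, so alias b always sees the values the rows held before the statement started, never the values written earlier in the same statement. A sketch of one way to propagate a value down the chain step by step is to repeat the update until a pass changes nothing; this assumes vField3 is non-null and that a simple inequality is a safe test for "not yet propagated":

        DECLARE @rows INT;
        SET @rows = 1;
        WHILE @rows > 0
        BEGIN
            UPDATE a
            SET    a.vField3 = b.vField3
            FROM   tableName AS a
            INNER JOIN tableName AS b
                    ON b.vField1 = a.vField1
                   AND b.nField2 = a.nField2 - 1
            WHERE  a.vField3 <> b.vField3;   -- only rows whose predecessor holds a different value

            SET @rows = @@ROWCOUNT;          -- stop once a pass updates nothing
        END;

    Each pass pushes the value one position further down the chain, which gives the "update after each step" effect asked about.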

    Read the article

  • Database PK-FK design for future-effective-date entries?

    - by Scott Balmos
    Ultimately I'm going to convert this into a Hibernate/JPA design. But I wanted to start out from purely a database perspective. We have various tables containing data that is future-effective-dated. Take an employee table with the following pseudo-definition: employee id INT AUTO_INCREMENT ... data fields ... effectiveFrom DATE effectiveTo DATE employee_reviews id INT AUTO_INCREMENT employee_id INT FK employee.id Very simplistic. But let's say Employee A has id = 1, effectiveFrom = 1/1/2011, effectiveTo = 1/1/2099. That employee is going to be changing jobs in the future, which would in theory create a new row, id = 2 with effectiveFrom = 7/1/2011, effectiveTo = 1/1/2099, and id = 1's effectiveTo updated to 6/30/2011. But now, my program would have to go through any table that has a FK relationship to employee every night, and update those FK to reference the newly-effective employee entry. I have seen various postings in both pure SQL and Hibernate forums that I should have a separate employee_versions table, which is where I would have all effective-dated data stored, resulting in the updated pseudo-definition below: employee id INT AUTO_INCREMENT employee_versions id INT AUTO_INCREMENT employee_id INT FK employee.id ... data fields ... effectiveFrom DATE effectiveTo DATE employee_reviews id INT AUTO_INCREMENT employee_id INT FK employee.id Then to get any actual data, one would have to actually select from employee_versions with the proper employee_id and date range. This feels rather unnatural to have this secondary "versions" table for each versioned entity. Anyone have any opinions, suggestions from your own prior work, etc? Like I said, I'm taking this purely from a general SQL design standpoint first before layering in Hibernate on top. Thanks!
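
    A sketch of the two-table "versions" shape described above, plus the kind of query every read then needs. The DDL follows the MySQL-style pseudo-definitions in the question and the names are assumptions:

        CREATE TABLE employee (
            id INT AUTO_INCREMENT PRIMARY KEY
        );

        CREATE TABLE employee_versions (
            id            INT AUTO_INCREMENT PRIMARY KEY,
            employee_id   INT NOT NULL,      -- FK employee.id
            -- ... data fields ...
            effectiveFrom DATE NOT NULL,
            effectiveTo   DATE NOT NULL
        );

        CREATE TABLE employee_reviews (
            id          INT AUTO_INCREMENT PRIMARY KEY,
            employee_id INT NOT NULL          -- FK employee.id: stable, never re-pointed
        );

        -- the version of employee 1 that is effective on a given date
        SELECT v.*
        FROM   employee_versions AS v
        WHERE  v.employee_id = 1
          AND  '2011-07-15' BETWEEN v.effectiveFrom AND v.effectiveTo;

    The reviews table (and any other child table) keeps pointing at the stable employee.id, so the nightly FK rewrite disappears; the trade-off is that every read of versioned data carries the date-range predicate.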

    Read the article

  • How to efficiently use LOCK_ESCALATION mssql 2008

    - by Avias
    I'm currently having trouble with frequent deadlocks on a specific user table in MS SQL 2008. Here are some facts about this particular table: it has a large number of rows (1 to 2 million); all the indexes used on this table have only "use row lock" ticked in their options; rows are frequently updated by multiple transactions but are unique (e.g. probably a thousand or more update statements are executed against different unique rows every hour); the table does not use partitions. Upon checking the table in sys.tables, I found that lock_escalation is set to TABLE. I'm very tempted to set lock_escalation for this table to DISABLE, but I'm not really sure what side effects this would incur. From what I understand, using DISABLE will minimize escalating locks to TABLE level, which, combined with the row-lock settings of the indexes, should theoretically minimize the deadlocks I am encountering. From what I have read in Determining threshold for lock escalation, it seems that locking automatically escalates when a single transaction fetches 5000 rows. What does a single transaction mean in this sense? A single session/connection getting 5000 rows through individual update/select statements? Or is it a single SQL update/select statement that fetches 5000 or more rows? Any insight is appreciated. BTW, n00b DBA here. Thanks
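
    For reference, a short sketch of how to inspect and change the setting being discussed; the table name is a placeholder, and LOCK_ESCALATION is an ALTER TABLE option in SQL Server 2008:

        -- current escalation setting for the table
        SELECT name, lock_escalation_desc
        FROM   sys.tables
        WHERE  name = N'MyBusyTable';

        -- stop escalation to table level (row and page locks are still taken as normal)
        ALTER TABLE dbo.MyBusyTable SET (LOCK_ESCALATION = DISABLE);

    On the threshold question: the documented trigger is a single statement acquiring roughly 5,000 locks on one table or index within a transaction, so it is the individual UPDATE or SELECT, not the session's running total, that counts.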

    Read the article

  • Dynamic Data Extract Tools

    - by Kevin McGovern
    I've been searching around for a few weeks now for a tool that either is fully built or a direction of something I could build for dynamically extracting data via a web interface. Basically, what I'm looking for is a way to give users a list of all available data objects from our database and then let them pick ones from the list they'd like to view and set parameters then export the results to an excel file. Right now we're doing it purely with SQL statements but we have hundreds of objects so as you might imagine, those statements are really complex and prone to errors. It would be great if there was a tool available to do this or if someone had an idea of an easy way to organize this. Any help would be greatly appreciated. We've looked at BI tools like QlikView and Tableau but that is probably overkill for what we're trying to do. The open-source BI tools we've looked at seemed really primitive in their functionality. The other thing we looked at was MSAS (our DB is SQL Server) but I'd prefer something that was more database-agnostic and lived on a web server instead of on the database.

    Read the article

  • Displaying a message after adding duplicate records in database

    - by user1770370
    I wrote a program in C# WinForms with SQL Server and LINQ to SQL. I use a user control instead of a form. In my user control I put 3 textboxes: txtStartNumber, txtEndNumber, txtQuantity. The user fills in the textboxes and, when the button is clicked, the code inserts records according to the value of txtQuantity. When a duplicate number is generated, I want it not to be added to the database and a message to be displayed instead. How do I do this? Should I write the code in the code-behind or on the server side? Should I set this up in the stored procedure or in a trigger? private void btnSave_Click(object sender, EventArgs e) { long from = Convert.ToInt64(txt_barcode_f.Text); long to = Convert.ToInt64(txt_barcode_t.Text); long quantity = Convert.ToInt64(to - from); int card_Type_ID=Convert.ToInt32(cmb_BracodeType .SelectedValue); long[] arrCardNum = new long[(to - from)]; arrCardNum[0]=from; for (long i = from; i < to; i++) { for(int j=0; j<(to-from) ;j++) { arrCardNum[j]=from+j; string r = arrCardNum[j].ToString(); sp.SaveCards(r, 2, card_Type_ID, SaveDate, 2); } } } Stored procedure code: ALTER PROCEDURE dbo.SaveCards @Barcode_Num int ,@Card_Status_ID int ,@Card_Type_ID int ,@SaveDate varchar(10) ,@Save_User_ID int AS BEGIN INSERT INTO [Parking].[dbo].[TBL_Cards] ([Barcode_Num] ,[Card_Status_ID] ,[Card_Type_ID] ,[Save_User_ID]) VALUES (@Barcode_Num ,@Card_Status_ID ,@Card_Type_ID ,@Save_User_ID) END
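
    One common pattern is to enforce uniqueness in the database and have the stored procedure report duplicates instead of inserting them; the message can then be shown from the C# side when the call fails. A sketch against the objects shown above (the constraint name is invented, and treating Barcode_Num as the unique value is an assumption from the question):

        ALTER TABLE [Parking].[dbo].[TBL_Cards]
            ADD CONSTRAINT UQ_TBL_Cards_Barcode UNIQUE (Barcode_Num);

        -- inside SaveCards: insert only when the barcode is new, otherwise raise an error
        IF NOT EXISTS (SELECT 1 FROM [Parking].[dbo].[TBL_Cards] WHERE Barcode_Num = @Barcode_Num)
            INSERT INTO [Parking].[dbo].[TBL_Cards]
                   (Barcode_Num, Card_Status_ID, Card_Type_ID, Save_User_ID)
            VALUES (@Barcode_Num, @Card_Status_ID, @Card_Type_ID, @Save_User_ID);
        ELSE
            RAISERROR('Barcode %d already exists.', 16, 1, @Barcode_Num);

    The code-behind can catch the resulting SqlException (or check a return value) and display the message box, while the constraint guarantees no duplicate slips through even if the C# loop misbehaves.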

    Read the article

  • dm_exec_query_stats returning stale data?

    - by VoiceOfUnreason
    I've been testing my app on a SQL Server 2005 database, and am trying to establish a preliminary picture of the query performance using sys.dm_exec_query_stats. Problem: there's a particular query that I'm interested in, because total_elapsed_time and last_elapsed_time are both large numbers. When I tickle my app to invoke that query (this runs successfully), then refresh my view of the stats, I find that 1) execution_count has incremented (expected) 2) last_execution_time has updated to now (expected) 3) last_elapsed_time is still a large value (not expected - I anticipated a new value) 4) total_elapsed_time is unchanged (contradiction?) If last_elapsed_time refers to the execution that happened @ last_execution_time, then the total_elapsed_time should have increased? This documentation: http://msdn.microsoft.com/en-us/library/ms189741(SQL.90).aspx tells me that last_execution_time is the last time the plan was executed, and last_elapsed_time comes from the "most recently executed plan", but doesn't tell me why those might be different. The query itself is uncomplicated (SELECT/WHERE/ORDER BY - parameters appearing in the where clause, but no clever operations), the table has maybe 25 rows in it right now. Questions: 1) What's the real relationship between execution_count, last_execution_time, and last_elapsed_time? 2) Where is the documentation of this relationship (manual, third party book, blog, bug ticket, stone tablets...) ?
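
    A sketch of one way to watch all four counters side by side for the statement in question while reproducing the behaviour; the LIKE filter is a placeholder for anything that identifies the query text:

        SELECT qs.execution_count,
               qs.last_execution_time,
               qs.last_elapsed_time,
               qs.total_elapsed_time,
               st.text
        FROM   sys.dm_exec_query_stats AS qs
        CROSS APPLY sys.dm_exec_sql_text(qs.sql_handle) AS st
        WHERE  st.text LIKE '%MyInterestingTable%'   -- placeholder filter
        ORDER BY qs.last_execution_time DESC;

    One thing worth ruling out: the DMV keeps one row per cached plan, so a recompile produces a new row rather than updating the old one, and the large stale numbers may simply belong to a different plan for the same text.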

    Read the article

  • What does Postgres do when BEGIN is run on a connection in autocommit mode?

    - by DNS
    I'm trying to better understand the concept of 'autocommit' when working with a Postgres (psycopg) connection. Let's say I have a fresh connection, set its isolation level to ISOLATION_LEVEL_AUTOCOMMIT, then run this SQL directly, without using the cursor begin/rollback methods (as an exercise; not saying I actually want to do this): INSERT A INSERT B BEGIN INSERT C INSERT D ROLLBACK What happens to INSERTs C & D? Is autocommit purely an internal setting in psycopg that affects how it issues BEGINs? In that case, the above SQL is unaffected; INSERTs A & B are committed as soon as they're done, while C & D are run in a transaction and rolled back. What isolation level is that transaction run under? Or is autocommit a real setting on the connection itself? In that case, how does it affect the handling of BEGIN? Is it ignored, or does it override the autocommit setting to actually start a transaction? What isolation level is that transaction run under? Or am I completely off-target?
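
    For what it's worth, Postgres itself has no server-side autocommit switch; psycopg's autocommit setting only means the driver stops issuing BEGIN on your behalf. An explicit BEGIN written into the SQL therefore still opens a real transaction on the server, roughly as in this sketch (table name assumed):

        INSERT INTO t VALUES ('A');   -- sent on its own, committed immediately
        INSERT INTO t VALUES ('B');   -- sent on its own, committed immediately
        BEGIN;                        -- explicit transaction starts here despite the driver setting
        INSERT INTO t VALUES ('C');
        INSERT INTO t VALUES ('D');
        ROLLBACK;                     -- C and D are discarded; A and B remain

    Because the BEGIN carries no isolation clause, the transaction runs at the session default (READ COMMITTED unless default_transaction_isolation says otherwise).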

    Read the article

  • A good (elegant) way to retrieve records with counts

    - by user93422
    Context: ASP.NET MVC 2.0, C#, SQL Server 2007, IIS7 I have a 'scheduledMeetings' table in the database. There is a one-to-many relationship: scheduledMeeting - meetingRegistration, so you could have 10 people registered for a meeting. meetingRegistration has fields Name and Gender (for example). I have a "calendar view" on my site that shows all coming events, as well as a gender count for each event. At the moment I use LINQ to SQL to pull the data: var meetings = db.Meetings.Select( m => new { MeetingId = m.Id, Girls = m.Registrations.Count(r => r.Gender == 0), Boys = m.Registrations.Count(r=>r.Gender == 1) }); (the actual query is half a page long) Because there is anonymous-type use going on I can't extract it into a method (since I have several different flavors of calendar view, with different information on each, and I don't want to create a new class for each). Any suggestions on how to improve this? Is a database view the answer? Or should I go ahead and create a named type? Any feedback/suggestions are welcome. My DataLayer is huge, I want to trim it, I just don't know how. Pointers to good reading would be good too.
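
    If the database-view route is appealing, a sketch of what it might look like on the SQL side; the table and column names are guesses from the description, and each calendar flavour can then select just the columns it needs from the view instead of building a new anonymous type:

        CREATE VIEW dbo.MeetingGenderCounts
        AS
        SELECT  m.Id AS MeetingId,
                SUM(CASE WHEN r.Gender = 0 THEN 1 ELSE 0 END) AS Girls,
                SUM(CASE WHEN r.Gender = 1 THEN 1 ELSE 0 END) AS Boys
        FROM    dbo.ScheduledMeetings AS m
        LEFT JOIN dbo.MeetingRegistrations AS r
                ON r.ScheduledMeetingId = m.Id
        GROUP BY m.Id;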

    Read the article

  • android : SQL Exception while querying

    - by Ram
    Team, Can you please help me understand why I'm getting the following exception? 05-07 10:57:20.652: ERROR/AndroidRuntime(470): android.database.sqlite.SQLiteException: near "1": syntax error: , while compiling: SELECT Id,Name FROM act WHERE Id 1-IJUS-1 Thanks in advance,
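
    The parser fails because the WHERE clause has no comparison operator and the value is unquoted, so SQLite tries to read 1-IJUS-1 as an expression starting at "1". A guess at the statement that was probably intended, assuming Id is matched as text:

        SELECT Id, Name FROM act WHERE Id = '1-IJUS-1';

    In Android code the safer form is to pass the value through selection arguments (?) rather than concatenating it into the SQL string.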

    Read the article

  • Duplicate information from sql result

    - by puddleJumper
    I looked at about 18 other posts on here, and most people are asking how to delete the records, not just hide them. So, my problem: I have a database with staff members who are associated with locations. Many of the staff members are associated with more than one location. What I want to do is display only the first location listed in the MySQL result and skip over the others. I have the SQL query linking the tables together, and it works aside from showing the same information multiple times for staff members who are in more than one location. This is the SQL statement I have currently: SELECT staff_tbl.staffID, staff_tbl.firstName, staff_tbl.middleInitial, staff_tbl.lastName, location_tbl.locationID, location_tbl.staffID, officelocations_tbl.locationID, officelocations_tbl.officeName, staff_title_tbl.title_ID, staff_title_tbl.staff_ID, titles_tbl.titleID, titles_tbl.titleName FROM staff_tbl INNER JOIN location_tbl ON location_tbl.staffID = staff_tbl.staffID INNER JOIN officelocations_tbl ON location_tbl.locationID = officelocations_tbl.locationID INNER JOIN staff_title_tbl ON staff_title_tbl.staff_ID = staff_tbl.staffID INNER JOIN titles_tbl ON staff_title_tbl.title_ID = titles_tbl.titleID and my PHP is <?php do { ?> <tr> <td><?php echo $row_rs_Staff_Info['firstName']; ?>&nbsp; <?php echo $row_rs_Staff_Info['lastName']; ?></td> <td><?php echo $row_rs_Staff_Info['titleName']; ?>&nbsp; </td> <td><?php echo $row_rs_Staff_Info['officeName']; ?>&nbsp; </td> </tr> <?php } while ($row_mysqlResult = mysql_fetch_assoc($rs_mysqlResult)); ?> What I would like to know is: is there a way, using PHP, to select only the first entry listed for each person, display it, and skip over the others? I was thinking it could be done by adding the staffIDs to an array and, if an ID is already there, skipping the next one listed in the staff_title_tbl, but I wasn't quite sure how to write it that way. Any help would be great, thank you in advance.
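
    This can also be handled in the query itself rather than in PHP: keep only one location row per staff member by picking, say, the lowest locationID for each person. A sketch using the tables from the query above:

        SELECT  s.staffID, s.firstName, s.lastName,
                t.titleName,
                o.officeName
        FROM    staff_tbl              AS s
        INNER JOIN staff_title_tbl     AS st ON st.staff_ID  = s.staffID
        INNER JOIN titles_tbl          AS t  ON t.titleID    = st.title_ID
        INNER JOIN location_tbl        AS l  ON l.staffID    = s.staffID
        INNER JOIN officelocations_tbl AS o  ON o.locationID = l.locationID
        WHERE   l.locationID = (SELECT MIN(l2.locationID)
                                FROM   location_tbl AS l2
                                WHERE  l2.staffID = s.staffID);

    The PHP loop then never sees the extra locations, so no array bookkeeping is needed.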

    Read the article

  • Prevent cached objects to end up in the database with Entity Framework

    - by Dirk Boer
    We have an ASP.NET project with Entity Framework and SQL Azure. A big part of our data only needs to be updated a few times a day; other data is very volatile. The data that barely changes we cache in memory at startup, detach from the context and then use mainly for reading, drastically lowering the number of database requests we have to do. The volatile data is requested every time by a DbContext per HTTP request. When we do an update to the cached data, we send a message to all instances to fetch a fresh version of all the data from the SQL server. So far, so good. Until we introduced a bug that linked one of these 'cached' objects to the 'volatile' data, and did a SaveChanges. Well, that was quite a mess. The whole data tree was added again and again by every update, corrupting the whole database with a whole lot of duplicated data. As a complete hack I added a completely arbitrary column with a UniqueConstraint and some gibberish data on one of the root tables; hopefully failing the SaveChanges() next time we introduce such a bug because it will violate the Unique Constraint. But it is of course hacky, and I'm still pretty scared ;P Are there any better ways to prevent whole trees of cached objects from ending up in the database? More information Project is ASP.NET MVC I cache this data, because it is mainly read only, and this saves a ton of extra database calls per HTTP request

    Read the article

  • Categorize data without consolidating?

    - by sqlnoob
    I have a table with about 1000 records and 2000 columns. What I want to do is categorize each row such that all records with equal column values for all columns except 'ID' are given a category ID. My final answer would look like:

        ID   A  B  C  .....  Category ID
         1   1  0  3            1
         2   2  1  3            2
         3   1  0  3            1
         4   2  1  3            2
         5   4  5  6            3
         6   4  5  6            3

    where all columns (besides ID) are equal for IDs 1,3 so they get the same category ID and so on. I guess my thought was to just write a SQL query that does a group by on every single column besides 'ID' and assign a number to each group and then join back to my original table. My current input is a text file, and I have SAS, MS Access, and Excel to work with. (I could use proc sql from within SAS). Before I go this route and construct the whole query, I was just wondering if there was a better way to do this? It will take some work just to write the query, and I'm not even sure if it is practical to join on 2000 columns (never tried), so I thought I'd ask for ideas before I got too far down the wrong path. EDIT: I just realized my title doesn't really make sense. What I was originally thinking was "Is there a way I can group by and categorize at the same time without actually consolidating into groups?"
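
    If the data ends up in an engine with window functions (SQL Server, for example; SAS proc sql itself does not have them), the group-by-and-join-back idea collapses into a single expression. A sketch with three stand-in columns for the 2000:

        SELECT  t.ID, t.A, t.B, t.C,   -- ... remaining columns ...
                DENSE_RANK() OVER (ORDER BY t.A, t.B, t.C /* , ... all non-ID columns ... */) AS CategoryID
        FROM    MyWideTable AS t
        ORDER BY t.ID;

    DENSE_RANK numbers each distinct combination once, so identical rows share a CategoryID; the remaining chore is that every non-ID column still has to be listed, which is worth generating programmatically rather than typing.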

    Read the article

  • Data Type Not Consistent In MS Access? (Set new field as "TEXT" but system treats it as "Yes/No" field)

    - by user3522506
    I already have an SQL command that should insert any string into the field, but it doesn't accept just any string, giving me "No value given for one or more required parameters". If my string is "Yes" or "No", it updates successfully, and in MS Access it appears as 0 or -1 even though I set the field as text from the beginning. Could there be some configuration I have made in my MS Access 2007? con = New OleDbConnection(cs) con.Open() Dim cb As String = "Update FS_Expenses set FS_Date=#" & dtpDate2.Text & "#,SupplierID='" & txtSupplierID.Text & "', TestField=" & Label1.Text & " where ID=" & txtID2.Text & "" cmd = New OleDbCommand(cb) cmd.Connection = con cmd.ExecuteReader() MessageBox.Show("Successfully updated!", "Record", MessageBoxButtons.OK, MessageBoxIcon.Information) con.Close() TestField is already a TEXT data type. With Label1.Text = "StringTest" I get the error; however, with Label1.Text = "Yes" the SQL executes successfully. Therefore the field must not have been saved as TEXT.
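
    The field type is probably fine; the concatenated SQL is the likelier culprit, because TestField=" & Label1.Text & " drops the quotes around the value. A sketch of what Access actually receives in each case (the date, supplier and ID values are placeholders):

        -- Label1.Text = "StringTest": the bare word is read as an unknown field/parameter name,
        -- hence "No value given for one or more required parameters"
        UPDATE FS_Expenses SET FS_Date = #01/01/2014#, SupplierID = 'S1', TestField = StringTest WHERE ID = 1;

        -- Label1.Text = "Yes": Access parses the bare Yes as a Boolean literal, which is why 0/-1 shows up
        UPDATE FS_Expenses SET FS_Date = #01/01/2014#, SupplierID = 'S1', TestField = Yes WHERE ID = 1;

        -- quoting the literal (or, better, using an OleDb parameter) keeps it a plain string
        UPDATE FS_Expenses SET FS_Date = #01/01/2014#, SupplierID = 'S1', TestField = 'StringTest' WHERE ID = 1;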

    Read the article

  • TFS API Add Favorites programmatically

    - by Tarun Arora
    01 – What are we trying to achieve? In this blog post I’ll be showing you how to add work item queries as favorites, it is also possible to use the same technique to add build definition as favorites. Once a shared query or build definition has been added as favorite it will show up on the team web access.  In this blog post I’ll be showing you a work around in the absence of a proper API how you can add queries to team favorites. 02 – Disclaimer There is no official API for adding favorites programmatically. In the work around below I am using the Identity service to store this data in a property bag which is used during display of favorites on the team web site. This uses an internal data structure that could change over time, there is no guarantee about the key names or content of the values. What is shown below is a workaround for a missing API. 03 – Concept There is no direct API support for favorites, but you could work around it using the identity service in TFS.  Favorites are stored in the property bag associated with the TeamFoundationIdentity (either the ‘team’ identity or the users identity depending on if these are ‘team’ or ‘my’ favorites).  The data is stored as json in the property bag of the identity, the key being prefixed by ‘Microsoft.TeamFoundation.Framework.Server.IdentityFavorites’. References - Microsoft.TeamFoundation.WorkItemTracking.Client - using Microsoft.TeamFoundation.Client; - using Microsoft.TeamFoundation.Framework.Client; - using Microsoft.TeamFoundation.Framework.Common; - using Microsoft.TeamFoundation.ProcessConfiguration.Client; - using Microsoft.TeamFoundation.Server; - using Microsoft.TeamFoundation.WorkItemTracking.Client; Services - IIdentityManagementService2 - TfsTeamService - WorkItemStore 04 – Solution Lets start by connecting to TFS programmatically // Create an instance of the services to be used during the program private static TfsTeamProjectCollection _tfs; private static ProjectInfo _selectedTeamProject; private static WorkItemStore _wis; private static TfsTeamService _tts; private static TeamSettingsConfigurationService _teamConfig; private static IIdentityManagementService2 _ids; // Connect to TFS programmatically public static bool ConnectToTfs() { var isSelected = false; var tfsPp = new TeamProjectPicker(TeamProjectPickerMode.SingleProject, false); tfsPp.ShowDialog(); _tfs = tfsPp.SelectedTeamProjectCollection; if (tfsPp.SelectedProjects.Any()) { _selectedTeamProject = tfsPp.SelectedProjects[0]; isSelected = true; } return isSelected; } Lets get all the work item queries from the selected team project static readonly Dictionary<string, string> QueryAndGuid = new Dictionary<string, string>(); // Get all queries and query guid in the selected team project private static void GetQueryGuidList(IEnumerable<QueryItem> query) { foreach (QueryItem subQuery in query) { if (subQuery.GetType() == typeof(QueryFolder)) GetQueryGuidList((QueryFolder)subQuery); else { QueryAndGuid.Add(subQuery.Name, subQuery.Id.ToString()); } } }   Pass the name of a valid Team in your team project and a name of a valid query in your team project. The team details will be extracted using the team name and query GUID will be extracted using the query name. These details will be used to construct the key and value that will be passed to the SetProperty method in the Identity service.           
    Key:   "Microsoft.TeamFoundation.Framework.Server.IdentityFavorites..<TeamProjectURI>.<TeamId>.WorkItemTracking.Queries.<newGuid1>"
    Value: "{"data":"<QueryGuid>","id":"<NewGuid1>","name":"<QueryKey>","type":"Microsoft.TeamFoundation.WorkItemTracking.QueryItem"}"
    // Configure a Work Item Query for the given team private static void ConfigureTeamFavorites(string teamName, string queryName) { _ids = _tfs.GetService<IIdentityManagementService2>(); var g = Guid.NewGuid(); var guid = string.Empty; var teamDetail = _tts.QueryTeams(_selectedTeamProject.Uri).FirstOrDefault(t => t.Name == teamName); foreach (var q in QueryAndGuid.Where(q => q.Key == queryName)) { guid = q.Value; } if(guid == string.Empty) { Console.WriteLine("Query '{0}' - Not found!", queryName); return; } var key = string.Format( "Microsoft.TeamFoundation.Framework.Server.IdentityFavorites..{0}.{1}.WorkItemTracking.Queries{2}", new Uri(_selectedTeamProject.Uri).Segments.LastOrDefault(), teamDetail.Identity.TeamFoundationId, g); var value = string.Format( @"{0}""data"":""{1}"",""id"":""{2}"",""name"":""{3}"",""type"":""Microsoft.TeamFoundation.WorkItemTracking.QueryItem""{4}", "{", guid, g, QueryAndGuid.FirstOrDefault(q => q.Value==guid).Key, "}"); teamDetail.Identity.SetProperty(IdentityPropertyScope.Local, key, value); _ids.UpdateExtendedProperties(teamDetail.Identity); Console.WriteLine("{0}Added Query '{1}' as Favorite", Environment.NewLine, queryName); } If you have any questions or suggestions leave a comment. Enjoy!

    Read the article

  • How do I programmatically determine which port a SQL Server is running on?

    - by Ralph Willgoss
    How do I programatically determine which port a SQL Server is running on?/*===== Param ref for xp_readerrorlog ===1. Value of error log file you want to read: 0 = current, 1 = Archive #1, 2 = Archive #2, etc...2. Log file type: 1 or NULL = error log, 2 = SQL Agent log3. Search string 1: String one you want to search for4. Search string 2: String two you want to search for to further refine the results5. Search from start time6. Search to end time7. Sort order for results: N'asc' = ascending, N'desc' = descendingHow many error logs do I have?SMSStudio -> Management -> SQL Server Logs -> (right click) -> configure = see values*/USE MasterGO--  get log countDECLARE @logcount intDROP TABLE #ResultCREATE TABLE #Result (ArchiveNo int, Date datetime, Size int)INSERT INTO #ResultEXEC xp_enumerrorlogsSET @logcount = (SELECT COUNT(*) FROM #Result)-- search logsDECLARE @counter intSET @counter = 0WHILE @counter <= @logcountBEGIN    EXEC xp_readerrorlog @counter, 1, N'Server is listening on', 'any', NULL, NULL, N'asc'    SET @counter = @counter + 1ENDGO

    Read the article

  • Juneau CTP 4 available: the SQL Server development toolkit brings a new object explorer to Visual Studio

    Juneau CTP 4 available: the SQL Server development toolkit brings a new object explorer to Visual Studio. Microsoft has just released a new version of its development tools for SQL Server (SSDT). Still at the CTP 4 (Community Technology Preview) stage, the SDK for Microsoft's database engine, code-named Juneau, brings several improvements and new features supporting Denali. Juneau includes an object explorer for Visual Studio that lets developers browse the tables and views of the database they are connected to. Named SQL Server Object Explorer (SSOX), this extension works in a simi...

    Read the article

  • Fluent NHibernate/SQL Server 2008 insert query problem

    - by Mark
    Hi all, I'm new to Fluent NHibernate and I'm running into a problem. I have a mapping defined as follows: public PersonMapping() { Id(p => p.Id).GeneratedBy.HiLo("1000"); Map(p => p.FirstName).Not.Nullable().Length(50); Map(p => p.MiddleInitial).Nullable().Length(1); Map(p => p.LastName).Not.Nullable().Length(50); Map(p => p.Suffix).Nullable().Length(3); Map(p => p.SSN).Nullable().Length(11); Map(p => p.BirthDate).Nullable(); Map(p => p.CellPhone).Nullable().Length(12); Map(p => p.HomePhone).Nullable().Length(12); Map(p => p.WorkPhone).Nullable().Length(12); Map(p => p.OtherPhone).Nullable().Length(12); Map(p => p.EmailAddress).Nullable().Length(50); Map(p => p.DriversLicenseNumber).Nullable().Length(50); Component<Address>(p => p.CurrentAddress, m => { m.Map(p => p.Line1, "Line1").Length(50); m.Map(p => p.Line2, "Line2").Length(50); m.Map(p => p.City, "City").Length(50); m.Map(p => p.State, "State").Length(50); m.Map(p => p.Zip, "Zip").Length(2); }); Map(p => p.EyeColor).Nullable().Length(3); Map(p => p.HairColor).Nullable().Length(3); Map(p => p.Gender).Nullable().Length(1); Map(p => p.Height).Nullable(); Map(p => p.Weight).Nullable(); Map(p => p.Race).Nullable().Length(1); Map(p => p.SkinTone).Nullable().Length(3); HasMany(p => p.PriorAddresses).Cascade.All(); } public PreviousAddressMapping() { Table("PriorAddress"); Id(p => p.Id).GeneratedBy.HiLo("1000"); Map(p => p.EndEffectiveDate).Not.Nullable(); Component<Address>(p => p.Address, m => { m.Map(p => p.Line1, "Line1").Length(50); m.Map(p => p.Line2, "Line2").Length(50); m.Map(p => p.City, "City").Length(50); m.Map(p => p.State, "State").Length(50); m.Map(p => p.Zip, "Zip").Length(2); }); } My test is [Test] public void can_correctly_map_Person_with_Addresses() { var myPerson = new Person("Jane", "", "Doe"); var priorAddresses = new[] { new PreviousAddress(ObjectMother.GetAddress1(), DateTime.Parse("05/13/2010")), new PreviousAddress(ObjectMother.GetAddress2(), DateTime.Parse("05/20/2010")) }; new PersistenceSpecification<Person>(Session) .CheckProperty(c => c.FirstName, myPerson.FirstName) .CheckProperty(c => c.LastName, myPerson.LastName) .CheckProperty(c => c.MiddleInitial, myPerson.MiddleInitial) .CheckList(c => c.PriorAddresses, priorAddresses) .VerifyTheMappings(); } GetAddress1() (yeah, horrible name) has Line2 == null The tables seem to be created correctly in sql server 2008, but the test fails with a SQLException "String or binary data would be truncated." When I grab the sql statement in SQL Profiler, I get exec sp_executesql N'INSERT INTO PriorAddress (Line1, Line2, City, State, Zip, EndEffectiveDate, Id) VALUES (@p0, @p1, @p2, @p3, @p4, @p5, @p6)',N'@p0 nvarchar(18),@p1 nvarchar(4000),@p2 nvarchar(10),@p3 nvarchar(2),@p4 nvarchar(5),@p5 datetime,@p6 int',@p0=N'6789 Somewhere Rd.',@p1=NULL,@p2=N'Hot Coffee',@p3=N'MS',@p4=N'09876',@p5='2010-05-13 00:00:00',@p6=1001 Notice the @p1 parameter is being set to nvarchar(4000) and being passed a NULL value. Why is it setting the parameter to nvarchar(4000)? How can I fix it? Thanks!

    Read the article

  • PHP Export Date range

    - by menormedia
    I have a working database export to xls but I need it to export a particular date range based on the 'closed' date. (See code below). For example, I'd like it to export all 'closed' dates for last month and/or this month (Range: Sept 1, 2012 to Sept 30, 2012 or Oct 1, 2012 to Oct 31, 2012) <?PHP //EDIT YOUR MySQL Connection Info: $DB_Server = "localhost"; //your MySQL Server $DB_Username = "root"; //your MySQL User Name $DB_Password = ""; //your MySQL Password $DB_DBName = "ost_helpdesk"; //your MySQL Database Name $DB_TBLName = "ost_ticket"; //your MySQL Table Name //$DB_TBLName, $DB_DBName, may also be commented out & passed to the browser //as parameters in a query string, so that this code may be easily reused for //any MySQL table or any MySQL database on your server //DEFINE SQL QUERY: //edit this to suit your needs $sql = "Select ticketID, name, company, subject, closed from $DB_TBLName ORDER BY closed DESC"; //Optional: print out title to top of Excel or Word file with Timestamp //for when file was generated: //set $Use_Titel = 1 to generate title, 0 not to use title $Use_Title = 1; //define date for title: EDIT this to create the time-format you need $now_date = DATE('m-d-Y'); //define title for .doc or .xls file: EDIT this if you want $title = "MDT Database Dump For Table $DB_TBLName from Database $DB_DBName on $now_date"; /* Leave the connection info below as it is: just edit the above. (Editing of code past this point recommended only for advanced users.) */ //create MySQL connection $Connect = @MYSQL_CONNECT($DB_Server, $DB_Username, $DB_Password) or DIE("Couldn't connect to MySQL:<br>" . MYSQL_ERROR() . "<br>" . MYSQL_ERRNO()); //select database $Db = @MYSQL_SELECT_DB($DB_DBName, $Connect) or DIE("Couldn't select database:<br>" . MYSQL_ERROR(). "<br>" . MYSQL_ERRNO()); //execute query $result = @MYSQL_QUERY($sql,$Connect) or DIE("Couldn't execute query:<br>" . MYSQL_ERROR(). "<br>" . 
MYSQL_ERRNO()); //if this parameter is included ($w=1), file returned will be in word format ('.doc') //if parameter is not included, file returned will be in excel format ('.xls') IF (ISSET($w) && ($w==1)) { $file_type = "msword"; $file_ending = "doc"; }ELSE { $file_type = "vnd.ms-excel"; $file_ending = "xls"; } //header info for browser: determines file type ('.doc' or '.xls') HEADER("Content-Type: application/$file_type"); HEADER("Content-Disposition: attachment; filename=MDT_DB_$now_date.$file_ending"); HEADER("Pragma: no-cache"); HEADER("Expires: 0"); /* Start of Formatting for Word or Excel */ IF (ISSET($w) && ($w==1)) //check for $w again { /* FORMATTING FOR WORD DOCUMENTS ('.doc') */ //create title with timestamp: IF ($Use_Title == 1) { ECHO("$title\n\n"); } //define separator (defines columns in excel & tabs in word) $sep = "\n"; //new line character WHILE($row = MYSQL_FETCH_ROW($result)) { //set_time_limit(60); // HaRa $schema_insert = ""; FOR($j=0; $j<mysql_num_fields($result);$j++) { //define field names $field_name = MYSQL_FIELD_NAME($result,$j); //will show name of fields $schema_insert .= "$field_name:\t"; IF(!ISSET($row[$j])) { $schema_insert .= "NULL".$sep; } ELSEIF ($row[$j] != "") { $schema_insert .= "$row[$j]".$sep; } ELSE { $schema_insert .= "".$sep; } } $schema_insert = STR_REPLACE($sep."$", "", $schema_insert); $schema_insert .= "\t"; PRINT(TRIM($schema_insert)); //end of each mysql row //creates line to separate data from each MySQL table row PRINT "\n----------------------------------------------------\n"; } }ELSE{ /* FORMATTING FOR EXCEL DOCUMENTS ('.xls') */ //create title with timestamp: IF ($Use_Title == 1) { ECHO("$title\n"); } //define separator (defines columns in excel & tabs in word) $sep = "\t"; //tabbed character //start of printing column names as names of MySQL fields FOR ($i = 0; $i < MYSQL_NUM_FIELDS($result); $i++) { ECHO MYSQL_FIELD_NAME($result,$i) . "\t"; } PRINT("\n"); //end of printing column names //start while loop to get data WHILE($row = MYSQL_FETCH_ROW($result)) { //set_time_limit(60); // HaRa $schema_insert = ""; FOR($j=0; $j<mysql_num_fields($result);$j++) { IF(!ISSET($row[$j])) $schema_insert .= "NULL".$sep; ELSEIF ($row[$j] != "") $schema_insert .= "$row[$j]".$sep; ELSE $schema_insert .= "".$sep; } $schema_insert = STR_REPLACE($sep."$", "", $schema_insert); //this corrects output in excel when table fields contain \n or \r //these two characters are now replaced with a space $schema_insert = PREG_REPLACE("/\r\n|\n\r|\n|\r/", " ", $schema_insert); $schema_insert .= "\t"; PRINT(TRIM($schema_insert)); PRINT "\n"; } } ?>
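
    The only change the export really needs is in the $sql string: bound the query by a closed-date range. A sketch with literal dates (in practice they would be computed for "last month" or taken from user input):

        SELECT ticketID, name, company, subject, closed
        FROM   ost_ticket
        WHERE  closed >= '2012-09-01'
          AND  closed <  '2012-10-01'
        ORDER BY closed DESC;

    Using a half-open range (greater-or-equal to the first of the month, strictly less than the first of the next month) keeps any time-of-day portion of closed from losing rows on the last day.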

    Read the article

  • Seeking on a Heap, and Two Useful DMVs

    - by Paul White
    So far in this mini-series on seeks and scans, we have seen that a simple ‘seek’ operation can be much more complex than it first appears.  A seek can contain one or more seek predicates – each of which can either identify at most one row in a unique index (a singleton lookup) or a range of values (a range scan).  When looking at a query plan, we will often need to look at the details of the seek operator in the Properties window to see how many operations it is performing, and what type of operation each one is.  As you saw in the first post in this series, the number of hidden seeking operations can have an appreciable impact on performance. Measuring Seeks and Scans I mentioned in my last post that there is no way to tell from a graphical query plan whether you are seeing a singleton lookup or a range scan.  You can work it out – if you happen to know that the index is defined as unique and the seek predicate is an equality comparison, but there’s no separate property that says ‘singleton lookup’ or ‘range scan’.  This is a shame, and if I had my way, the query plan would show different icons for range scans and singleton lookups – perhaps also indicating whether the operation was one or more of those operations underneath the covers. In light of all that, you might be wondering if there is another way to measure how many seeks of either type are occurring in your system, or for a particular query.  As is often the case, the answer is yes – we can use a couple of dynamic management views (DMVs): sys.dm_db_index_usage_stats and sys.dm_db_index_operational_stats. Index Usage Stats The index usage stats DMV contains counts of index operations from the perspective of the Query Executor (QE) – the SQL Server component that is responsible for executing the query plan.  It has three columns that are of particular interest to us: user_seeks – the number of times an Index Seek operator appears in an executed plan user_scans – the number of times a Table Scan or Index Scan operator appears in an executed plan user_lookups – the number of times an RID or Key Lookup operator appears in an executed plan An operator is counted once per execution (generating an estimated plan does not affect the totals), so an Index Seek that executes 10,000 times in a single plan execution adds 1 to the count of user seeks.  Even less intuitively, an operator is also counted once per execution even if it is not executed at all.  I will show you a demonstration of each of these things later in this post. Index Operational Stats The index operational stats DMV contains counts of index and table operations from the perspective of the Storage Engine (SE).  It contains a wealth of interesting information, but the two columns of interest to us right now are: range_scan_count – the number of range scans (including unrestricted full scans) on a heap or index structure singleton_lookup_count – the number of singleton lookups in a heap or index structure This DMV counts each SE operation, so 10,000 singleton lookups will add 10,000 to the singleton lookup count column, and a table scan that is executed 5 times will add 5 to the range scan count. The Test Rig To explore the behaviour of seeks and scans in detail, we will need to create a test environment.  The scripts presented here are best run on SQL Server 2008 Developer Edition, but the majority of the tests will work just fine on SQL Server 2005.  A couple of tests use partitioning, but these will be skipped if you are not running an Enterprise-equivalent SKU.  
Ok, first up we need a database: USE master; GO IF DB_ID('ScansAndSeeks') IS NOT NULL DROP DATABASE ScansAndSeeks; GO CREATE DATABASE ScansAndSeeks; GO USE ScansAndSeeks; GO ALTER DATABASE ScansAndSeeks SET ALLOW_SNAPSHOT_ISOLATION OFF ; ALTER DATABASE ScansAndSeeks SET AUTO_CLOSE OFF, AUTO_SHRINK OFF, AUTO_CREATE_STATISTICS OFF, AUTO_UPDATE_STATISTICS OFF, PARAMETERIZATION SIMPLE, READ_COMMITTED_SNAPSHOT OFF, RESTRICTED_USER ; Notice that several database options are set in particular ways to ensure we get meaningful and reproducible results from the DMVs.  In particular, the options to auto-create and update statistics are disabled.  There are also three stored procedures, the first of which creates a test table (which may or may not be partitioned).  The table is pretty much the same one we used yesterday: The table has 100 rows, and both the key_col and data columns contain the same values – the integers from 1 to 100 inclusive.  The table is a heap, with a non-clustered primary key on key_col, and a non-clustered non-unique index on the data column.  The only reason I have used a heap here, rather than a clustered table, is so I can demonstrate a seek on a heap later on.  The table has an extra column (not shown because I am too lazy to update the diagram from yesterday) called padding – a CHAR(100) column that just contains 100 spaces in every row.  It’s just there to discourage SQL Server from choosing table scan over an index + RID lookup in one of the tests. The first stored procedure is called ResetTest: CREATE PROCEDURE dbo.ResetTest @Partitioned BIT = 'false' AS BEGIN SET NOCOUNT ON ; IF OBJECT_ID(N'dbo.Example', N'U') IS NOT NULL BEGIN DROP TABLE dbo.Example; END ; -- Test table is a heap -- Non-clustered primary key on 'key_col' CREATE TABLE dbo.Example ( key_col INTEGER NOT NULL, data INTEGER NOT NULL, padding CHAR(100) NOT NULL DEFAULT SPACE(100), CONSTRAINT [PK dbo.Example key_col] PRIMARY KEY NONCLUSTERED (key_col) ) ; IF @Partitioned = 'true' BEGIN -- Enterprise, Trial, or Developer -- required for partitioning tests IF SERVERPROPERTY('EngineEdition') = 3 BEGIN EXECUTE (' DROP TABLE dbo.Example ; IF EXISTS ( SELECT 1 FROM sys.partition_schemes WHERE name = N''PS'' ) DROP PARTITION SCHEME PS ; IF EXISTS ( SELECT 1 FROM sys.partition_functions WHERE name = N''PF'' ) DROP PARTITION FUNCTION PF ; CREATE PARTITION FUNCTION PF (INTEGER) AS RANGE RIGHT FOR VALUES (20, 40, 60, 80, 100) ; CREATE PARTITION SCHEME PS AS PARTITION PF ALL TO ([PRIMARY]) ; CREATE TABLE dbo.Example ( key_col INTEGER NOT NULL, data INTEGER NOT NULL, padding CHAR(100) NOT NULL DEFAULT SPACE(100), CONSTRAINT [PK dbo.Example key_col] PRIMARY KEY NONCLUSTERED (key_col) ) ON PS (key_col); '); END ELSE BEGIN RAISERROR('Invalid SKU for partition test', 16, 1); RETURN; END; END ; -- Non-unique non-clustered index on the 'data' column CREATE NONCLUSTERED INDEX [IX dbo.Example data] ON dbo.Example (data) ; -- Add 100 rows INSERT dbo.Example WITH (TABLOCKX) ( key_col, data ) SELECT key_col = V.number, data = V.number FROM master.dbo.spt_values AS V WHERE V.[type] = N'P' AND V.number BETWEEN 1 AND 100 ; END; GO The second stored procedure, ShowStats, displays information from the Index Usage Stats and Index Operational Stats DMVs: CREATE PROCEDURE dbo.ShowStats @Partitioned BIT = 'false' AS BEGIN -- Index Usage Stats DMV (QE) SELECT index_name = ISNULL(I.name, I.type_desc), scans = IUS.user_scans, seeks = IUS.user_seeks, lookups = IUS.user_lookups FROM sys.dm_db_index_usage_stats AS IUS JOIN sys.indexes AS I ON 
I.object_id = IUS.object_id AND I.index_id = IUS.index_id WHERE IUS.database_id = DB_ID(N'ScansAndSeeks') AND IUS.object_id = OBJECT_ID(N'dbo.Example', N'U') ORDER BY I.index_id ; -- Index Operational Stats DMV (SE) IF @Partitioned = 'true' SELECT index_name = ISNULL(I.name, I.type_desc), partitions = COUNT(IOS.partition_number), range_scans = SUM(IOS.range_scan_count), single_lookups = SUM(IOS.singleton_lookup_count) FROM sys.dm_db_index_operational_stats ( DB_ID(N'ScansAndSeeks'), OBJECT_ID(N'dbo.Example', N'U'), NULL, NULL ) AS IOS JOIN sys.indexes AS I ON I.object_id = IOS.object_id AND I.index_id = IOS.index_id GROUP BY I.index_id, -- Key I.name, I.type_desc ORDER BY I.index_id; ELSE SELECT index_name = ISNULL(I.name, I.type_desc), range_scans = SUM(IOS.range_scan_count), single_lookups = SUM(IOS.singleton_lookup_count) FROM sys.dm_db_index_operational_stats ( DB_ID(N'ScansAndSeeks'), OBJECT_ID(N'dbo.Example', N'U'), NULL, NULL ) AS IOS JOIN sys.indexes AS I ON I.object_id = IOS.object_id AND I.index_id = IOS.index_id GROUP BY I.index_id, -- Key I.name, I.type_desc ORDER BY I.index_id; END; The final stored procedure, RunTest, executes a query written against the example table: CREATE PROCEDURE dbo.RunTest @SQL VARCHAR(8000), @Partitioned BIT = 'false' AS BEGIN -- No execution plan yet SET STATISTICS XML OFF ; -- Reset the test environment EXECUTE dbo.ResetTest @Partitioned ; -- Previous call will throw an error if a partitioned -- test was requested, but SKU does not support it IF @@ERROR = 0 BEGIN -- IO statistics and plan on SET STATISTICS XML, IO ON ; -- Test statement EXECUTE (@SQL) ; -- Plan and IO statistics off SET STATISTICS XML, IO OFF ; EXECUTE dbo.ShowStats @Partitioned; END; END; The Tests The first test is a simple scan of the heap table: EXECUTE dbo.RunTest @SQL = 'SELECT * FROM Example'; The top result set comes from the Index Usage Stats DMV, so it is the Query Executor’s (QE) view.  The lower result is from Index Operational Stats, which shows statistics derived from the actions taken by the Storage Engine (SE).  We see that QE performed 1 scan operation on the heap, and SE performed a single range scan.  Let’s try a single-value equality seek on a unique index next: EXECUTE dbo.RunTest @SQL = 'SELECT key_col FROM Example WHERE key_col = 32'; This time we see a single seek on the non-clustered primary key from QE, and one singleton lookup on the same index by the SE.  Now for a single-value seek on the non-unique non-clustered index: EXECUTE dbo.RunTest @SQL = 'SELECT data FROM Example WHERE data = 32'; QE shows a single seek on the non-clustered non-unique index, but SE shows a single range scan on that index – not the singleton lookup we saw in the previous test.  That makes sense because we know that only a single-value seek into a unique index is a singleton seek.  A single-value seek into a non-unique index might retrieve any number of rows, if you think about it.  The next query is equivalent to the IN list example seen in the first post in this series, but it is written using OR (just for variety, you understand): EXECUTE dbo.RunTest @SQL = 'SELECT data FROM Example WHERE data = 32 OR data = 33'; The plan looks the same, and there’s no difference in the stats recorded by QE, but the SE shows two range scans.  Again, these are range scans because we are looking for two values in the data column, which is covered by a non-unique index.  I’ve added a snippet from the Properties window to show that the query plan does show two seek predicates, not just one.  
Now let’s rewrite the query using BETWEEN: EXECUTE dbo.RunTest @SQL = 'SELECT data FROM Example WHERE data BETWEEN 32 AND 33'; Notice the seek operator only has one predicate now – it’s just a single range scan from 32 to 33 in the index – as the SE output shows.  For the next test, we will look up four values in the key_col column: EXECUTE dbo.RunTest @SQL = 'SELECT key_col FROM Example WHERE key_col IN (2,4,6,8)'; Just a single seek on the PK from the Query Executor, but four singleton lookups reported by the Storage Engine – and four seek predicates in the Properties window.  On to a more complex example: EXECUTE dbo.RunTest @SQL = 'SELECT * FROM Example WITH (INDEX([PK dbo.Example key_col])) WHERE key_col BETWEEN 1 AND 8'; This time we are forcing use of the non-clustered primary key to return eight rows.  The index is not covering for this query, so the query plan includes an RID lookup into the heap to fetch the data and padding columns.  The QE reports a seek on the PK and a lookup on the heap.  The SE reports a single range scan on the PK (to find key_col values between 1 and 8), and eight singleton lookups on the heap.  Remember that a bookmark lookup (RID or Key) is a seek to a single value in a ‘unique index’ – it finds a row in the heap or cluster from a unique RID or clustering key – so that’s why lookups are always singleton lookups, not range scans. Our next example shows what happens when a query plan operator is not executed at all: EXECUTE dbo.RunTest @SQL = 'SELECT key_col FROM Example WHERE key_col = 8 AND @@TRANCOUNT < 0'; The Filter has a start-up predicate which is always false (if your @@TRANCOUNT is less than zero, call CSS immediately).  The index seek is never executed, but QE still records a single seek against the PK because the operator appears once in an executed plan.  The SE output shows no activity at all.  This next example is 2008 and above only, I’m afraid: EXECUTE dbo.RunTest @SQL = 'SELECT * FROM Example WHERE key_col BETWEEN 1 AND 30', @Partitioned = 'true'; This is the first example to use a partitioned table.  QE reports a single seek on the heap (yes – a seek on a heap), and the SE reports two range scans on the heap.  SQL Server knows (from the partitioning definition) that it only needs to look at partitions 1 and 2 to find all the rows where key_col is between 1 and 30 – the engine seeks to find the two partitions, and performs a range scan seek on each partition. The final example for today is another seek on a heap – try to work out the output of the query before running it! EXECUTE dbo.RunTest @SQL = 'SELECT TOP (2) WITH TIES * FROM Example WHERE key_col BETWEEN 1 AND 50 ORDER BY $PARTITION.PF(key_col) DESC', @Partitioned = 'true'; Notice the lack of an explicit Sort operator in the query plan to enforce the ORDER BY clause, and the backward range scan. © 2011 Paul White email: [email protected] twitter: @SQL_Kiwi

    Read the article

  • Odd Profiler Results with EF4

    - by AjarnMark
    I have been doing some testing of using the Microsoft Entity Framework 4 with stored procedures and ran across some really odd results in SQL Server Profiler. The application that is running which uses Entity Framework 4 is a simple Web Application written in C#, and the Entity Data Model is actually contained in a referenced class library of its own.  I’ll write more about my experiences with this later.  For now the question is, why does SQL Profiler think that the stored procedure is running in Master, and not in my application database? While analyzing the effects of using custom helper methods on my EDM classes to call the stored procedure, I decided to run Profiler while I stepped through the code so that I had a clear understanding of exactly when and what calls were made to the SQL Server.  I ran Profiler switching back and forth between the TSQL and TSQL_SP templates.  However, to reduce the amount of results rows I needed to wade through, I set a filter on DatabaseID to be equal to my application’s database.  Each time I ran this, the only thing that I saw was an Audit:Login to the database, but no procedure or T-SQL statements executed, yet I was definitely getting results back to my web page.  I tried other Profiler templates, still filtering on DatabaseID (tangent: I found, at least back in SQL 2000 Profiler, that filtering on DatabaseID was more reliable than filtering on DatabaseName.  Even though I’m now running SQL 2008, that habit sticks with me).  Still no results other than the Login.  Very weird! Finally, I decided to run Profiler with no filtering and discovered that that lines which represent my stored procedure and its T-SQL commands are all marked with DatabaseID = 1, which is Master.  Why in the world would that be?  My procedure is definitely in the application database, and not in Master, and there is nothing funny about the call to the procedure evident in Profiler (i.e. it is not called as MyAppDB.dbo.MyProcName, but rather just dbo.MyProcName).  There must be something funny with the way the Entity Framework is wrapping this call, and I don’t like it…I don’t like it one bit.  My primary PROD server contains 40+ databases on it, and when I need to profile something, I expect to be able to filter based on DatabaseID (for the record, I displayed DatabaseName in my results, too, and it also shows Master). I find the same pattern of everything except the Login showing up as being in Master when I run my version that uses standard LINQ to Entities instead of stored procedures, so that suggests it is not my code, but rather something funny with SQL Server 2008 Profiler or the Entity Framework. If you have any ideas about why this might be so, please comment below.

    Read the article

  • Presenting Designing an SSIS Execution Framework to Steel City SQL 18 Jan 2011!

    - by andyleonard
    I'm honored to present Designing an SSIS Execution Framework (Level 300) to Steel City SQL - the Birmingham Alabama chapter of PASS - on 18 Jan 2011! The meeting starts at 6:00 PM 18 Jan 2011 and will be held at: New Horizons Computer Learning Center 601 Beacon Pkwy. West Suite 106 Birmingham, Alabama, 35209 ( Map for directions ) Abstract In this “demo-tastic” presentation, SSIS trainer, author, and consultant Andy Leonard explains the what, why, and how of an SSIS framework that delivers metadata-driven...(read more)

    Read the article

  • New Book! SQL Server 2012 Integration Services Design Patterns!

    - by andyleonard
    SQL Server 2012 Integration Services Design Patterns has been released! The book is done and available thanks to the hard work and dedication of a great crew: Michelle Ufford ( Blog | @sqlfool ) – co-author Jessica M. Moss ( Blog | @jessicammoss ) – co-author Tim Mitchell ( Blog | @tim_mitchell ) – co-author Matt Masson ( Blog | @mattmasson ) – co-author Donald Farmer ( Blog | @donalddotfarmer ) – foreword David Stein ( Blog | @made2mentor ) – technical editing Mark Powers – editing Jonathan Gennick...(read more)

    Read the article

< Previous Page | 523 524 525 526 527 528 529 530 531 532 533 534  | Next Page >