Search Results

Search found 28052 results on 1123 pages for 't sql tuesday'.


  • What to do if 2 (or more) relationship tables would have the same name?

    - by primehunter326
    So I know the convention for naming M-M relationship tables in SQL: for tables User and Data, the junction table would be called UserData, User_Data, or something similar (from here). What happens, then, if you need multiple relationships between User and Data, each represented in its own table? I have a site I'm working on where I have two primary entities and multiple independent M-M relationships between them. I know I could just use a single relationship table with a field that identifies the relationship type, but I'm not sure whether that is a good solution. Assuming I don't go that route, what naming convention should I follow to work around my original problem?
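
    One common answer (a sketch only; the relationship names and column types here are hypothetical) is to extend the same convention by appending the role of each relationship to the joined table names:

        -- Two independent M-M relationships between User and Data,
        -- each junction table named for the role it represents.
        CREATE TABLE User_Data_Favorites (
            UserID INT NOT NULL REFERENCES Users (ID),
            DataID INT NOT NULL REFERENCES Data (ID),
            PRIMARY KEY (UserID, DataID)
        );

        CREATE TABLE User_Data_Subscriptions (
            UserID INT NOT NULL REFERENCES Users (ID),
            DataID INT NOT NULL REFERENCES Data (ID),
            PRIMARY KEY (UserID, DataID)
        );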

    Read the article

  • Hibernate MySQL group by day with 0 entities

    - by Touhami
    I need a query (in Hibernate or MySQL), or a Java function to post-process the result array, for the following:

        select DAY(affaire.docCreationDate), count(affaire.docfullName)
        from CRMAffaireCode.AffaireClass as affaire
        where affaire.docfullName like 'CRMAffaire.Affaire%'
        and affaire.docCreationDate >= '" + startDate + " 00:00:00'
        and affaire.docCreationDate <= '" + endDate + " 23:59:59'
        GROUP BY DAY(affaire.docCreationDate)

    My table contains these counts per day:

        2012-10-05   3
        2012-10-06   0
        2012-10-07   7
        2012-10-08  13
        2012-10-09   9
        2012-10-10   0
        2012-10-11   0
        2012-10-12   3

    The query returns these values:

        5   3
        7   7
        8  13
        9   9
        12  3

    This way I lose the three lines that have 0 as a value. I need a query that returns this instead:

        5   3
        6   0
        7   7
        8  13
        9   9
        10  0
        11  0
        12  3
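
    GROUP BY alone cannot produce rows for days that have no matches, so a common workaround (a sketch, assuming a helper table of dates exists; the name calendar is hypothetical) is to LEFT JOIN from the days to the data:

        SELECT DAY(c.day) AS d,
               COUNT(a.docfullName) AS cnt  -- COUNT(col) ignores NULLs, so empty days give 0
        FROM calendar c
        LEFT JOIN CRMAffaireCode.AffaireClass a
               ON DATE(a.docCreationDate) = c.day
              AND a.docfullName LIKE 'CRMAffaire.Affaire%'
        WHERE c.day BETWEEN '2012-10-05' AND '2012-10-12'
        GROUP BY DAY(c.day);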

    Read the article

  • Make SQL query more efficient

    - by Webnet
    I currently have this query, which runs two nearly identical subqueries that pull different data. When I make the selected values comma-separated, it throws an SQL error saying the subquery can return only one value. Is there anything else I can do to avoid running multiple subqueries?

        SELECT product_id,
            ( SELECT COUNT(listing_id)
              FROM ebay_archive_product_listing_assoc
              WHERE product_id = product_master.product_id ) as listing_count,
            sku,
            type_id,
            ( SELECT AVG(ebay_archive_listing.current_price), AVG(ebay_archive_listing.buy_it_now_price)
              FROM ebay_archive_listing
              WHERE id IN ( SELECT listing_id
                            FROM ebay_archive_product_listing_assoc
                            WHERE product_id = product_master.product_id )
                AND ebay_archive_listing.start_time >= '.$startTimestamp.'
                AND ebay_archive_listing.start_time <= '.$endTimestamp.'
                AND ebay_archive_listing.current_price > 0 ) as average_bid_price,
            ( SELECT
              FROM ebay_archive_listing
              WHERE id IN ( SELECT listing_id
                            FROM ebay_archive_product_listing_assoc
                            WHERE product_id = product_master.product_id )
                AND ebay_archive_listing.start_time >= '.$startTimestamp.'
                AND ebay_archive_listing.start_time <= '.$endTimestamp.'
                AND ebay_archive_listing.buy_it_now_price > 0 ) as average_buyout_price
        FROM product_master

    I'm aware of the syntax error in the last subquery... I'm selecting 2 separate averages and am wondering if I can do it any simpler way.
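
    One way to avoid repeating the subqueries (a sketch only, untested against the real schema; @startTimestamp and @endTimestamp stand in for the PHP-interpolated values) is a single pass with joins and conditional aggregation:

        SELECT pm.product_id,
               pm.sku,
               pm.type_id,
               COUNT(assoc.listing_id) AS listing_count,
               AVG(CASE WHEN l.current_price    > 0 THEN l.current_price    END) AS average_bid_price,
               AVG(CASE WHEN l.buy_it_now_price > 0 THEN l.buy_it_now_price END) AS average_buyout_price
        FROM product_master pm
        LEFT JOIN ebay_archive_product_listing_assoc assoc
               ON assoc.product_id = pm.product_id
        LEFT JOIN ebay_archive_listing l
               ON l.id = assoc.listing_id
              AND l.start_time BETWEEN @startTimestamp AND @endTimestamp
        GROUP BY pm.product_id, pm.sku, pm.type_id;

    AVG over a CASE with no ELSE ignores the NULLs it produces, so each average only considers rows with a positive price, matching the two original subqueries.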

    Read the article

  • Help with a stored procedure

    - by I__
    I am looking at this site: http://cloudexchange.cloudapp.net/stackoverflow/s/84/rising-stars-top-50-users-ordered-on-rep-per-day

        set nocount on
        DECLARE @endDate date
        SELECT @endDate = max(CreationDate) from Posts
        set nocount off

        SELECT TOP 50
          Id AS [User Link],
          Reputation,
          Days,
          Reputation/Days AS RepPerDays
        FROM
        (
          SELECT *, CONVERT(int, @endDate - CreationDate) as Days
          FROM Users
        ) AS UsersAugmented
        WHERE Reputation > 5000
        ORDER BY RepPerDays DESC

    I am a beginner at SQL, and I have the following questions about this code: Is this MySQL or MS SQL? What does "set nocount off" do? Why is this in brackets: [User Link]? What does "CONVERT(int, @endDate - CreationDate) as Days" do? Thanks!
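
    On the last question, a small illustration (a sketch in T-SQL; the two expressions are roughly equivalent but can differ in rounding, since CONVERT rounds on the time portion while DATEDIFF counts day boundaries):

        DECLARE @endDate datetime = '2011-03-01';
        SELECT CreationDate,
               CONVERT(int, @endDate - CreationDate) AS Days,    -- datetime arithmetic, cast to whole days
               DATEDIFF(day, CreationDate, @endDate) AS DaysAlt  -- the more explicit spelling
        FROM Users;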

    Read the article

  • Expanded securityadmin

    - by user80652
    I'm aware that sysadmin is documented as the server role necessary for creating logins (SQL or Windows-integrated); nevertheless, I'm tasked with finding out if there's any other server role (built-in or otherwise) that can be used. To be specific, I'm looking to set up one or two logins with access to create logins, create [database] users, and assign users to [database] roles. Potentially reset passwords too, but most of the logins are Windows-integrated, so that's not essential. They cannot have access to data at all, nor can these logins have rights to update tables or create/update roles. It seems my only option so far is to give these 2 logins the securityadmin server role and, for the specific databases, configure them with db_securityadmin and db_accessadmin... but this configuration doesn't allow for creating logins.
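
    Instead of a fixed server role, granular permissions may get closer (a sketch; the login name is hypothetical, and whether this satisfies the "no data access" requirement should be verified, since a login that can create logins can potentially escalate):

        USE master;
        GRANT ALTER ANY LOGIN TO [DOMAIN\SecurityOps];  -- create/alter/drop logins

        USE SomeUserDatabase;
        GRANT ALTER ANY USER TO [DOMAIN\SecurityOps];   -- create/alter database users
        GRANT ALTER ANY ROLE TO [DOMAIN\SecurityOps];   -- manage role membership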

    Read the article

  • Approach to selecting the top item matching a criterion

    - by jkelley
    I have a SQL problem that I've come up against routinely and normally just solve with a nested query. I'm hoping someone can suggest a more elegant solution. It often happens that I need to select a result set for a user, conditioned on it being the most recent, or the most sizeable, or whatever. For example: their complete list of pages created, but with only the most recent name applied to each page. It so happens that the database contains many entries for each page, and only the most recent one is wanted. I've been using a nested select like:

        SELECT pg.customName, pg.id
        FROM (
            select id, max(createdAt) as mostRecent
            from pages
            where userId = @UserId
            GROUP BY id
        ) as MostRecentPages
        JOIN pages pg
            ON pg.id = MostRecentPages.id
            AND pg.createdAt = MostRecentPages.mostRecent

    Is there a better syntax to perform this selection?
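
    On engines with window functions (SQL Server 2005 and later, for instance), a common alternative is ROW_NUMBER(); a sketch:

        SELECT customName, id
        FROM (
            SELECT customName, id,
                   ROW_NUMBER() OVER (PARTITION BY id ORDER BY createdAt DESC) AS rn
            FROM pages
            WHERE userId = @UserId
        ) ranked
        WHERE rn = 1;  -- keep only the most recent row per page id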

    Read the article

  • Storing SQL tables for use in Visual Studio

    - by Raven Dreamer
    Greetings. I'm trying to create a Windows Forms application that manipulates data from several tables stored on a SQL Server. 1) What's the best way to store the data locally while the application is running? I had a previous program that only modified one table, and that was set up to use a DataGridView. However, as I don't necessarily want to view all the tables, I am looking for another way to store the data retrieved by the SELECT * FROM ... query. 2) Is it better to load the tables, make changes within the C# application, and then update the modified tables at the end, or simply perform all operations on the database remotely (retrieving the tables each time they are needed)? Thank you.

    Read the article

  • Help with my application please! Can’t open image(s) with error: External component has thrown an exception

    - by Brandon
    I have an application, written in C# I believe, that adds images to a SQL Server 2005 database. It requires .NET 3.5 to be installed on my computer. I installed .NET 3.5 and set up a database. It runs fine, but once it gets to image 100 when running on one computer, it stops and gives me this error: "Can't open image(s) with error: External component has thrown an exception...." When I run the program on my own computer I am able to reach 300 images, but then it stops and gives me the same error once again. Please help!

    Read the article

  • Update multiple values in a single statement

    - by Kluge
    I have a master/detail table pair and want to update some summary values in the master table from the detail table. I know I can update them like this:

        update MasterTbl set TotalX = (select sum(X) from DetailTbl where DetailTbl.MasterID = MasterTbl.ID)
        update MasterTbl set TotalY = (select sum(Y) from DetailTbl where DetailTbl.MasterID = MasterTbl.ID)
        update MasterTbl set TotalZ = (select sum(Z) from DetailTbl where DetailTbl.MasterID = MasterTbl.ID)

    But I'd like to do it in a single statement, something like this:

        update MasterTbl set
            TotalX = sum(DetailTbl.X),
            TotalY = sum(DetailTbl.Y),
            TotalZ = sum(DetailTbl.Z)
        from DetailTbl
        where DetailTbl.MasterID = MasterTbl.ID
        group by MasterID

    but that doesn't work. I've also tried versions that omit the "group by" clause. I'm not sure whether I'm bumping up against the limits of my particular database (Advantage) or the limits of my SQL. Probably the latter. Can anyone help?
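
    One pattern that works where UPDATE ... FROM is supported (T-SQL syntax shown as a sketch; Advantage's dialect may differ) is to aggregate in a derived table first, then join it in:

        UPDATE m
        SET TotalX = d.SumX,
            TotalY = d.SumY,
            TotalZ = d.SumZ
        FROM MasterTbl m
        JOIN (
            SELECT MasterID,
                   SUM(X) AS SumX, SUM(Y) AS SumY, SUM(Z) AS SumZ
            FROM DetailTbl
            GROUP BY MasterID
        ) d ON d.MasterID = m.ID;

    Aggregating before the join sidesteps the rule that aggregate functions cannot appear directly in an UPDATE's SET list.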

    Read the article

  • Getting Results from a Web SQL database

    - by andrew8088
    I'm playing around with the new Web SQL databases. Is there a way to return results from a SELECT statement? Here's my example:

        function getTasks(list) {
            db.transaction(function (tx) {
                list = list || 'inbox';
                tx.executeSql("SELECT * FROM tasklist WHERE list = ?", [list],
                    function (tx, results) {
                        var retObj = [], i, len = results.rows.length;
                        for (i = 0; i < len; i++) {
                            retObj[i] = results.rows.item(i);
                        }
                        return retObj;
                    });
            });
        }

    The getTasks function is returning before the success callback does; is there a way to get the results out of the executeSql method, or do I have to do all the processing within the callback?

    Read the article

  • Getting deadlocks in MySQL

    - by at
    We're getting deadlocks in MySQL, very frustratingly. It isn't a matter of exceeding a lock timeout, as the deadlocks happen instantly when they do happen. Here's the SQL that executes on 2 separate threads (with 2 separate connections from the connection pool) and produces a deadlock:

        UPDATE Sequences
        SET Counter = LAST_INSERT_ID(Counter + 1)
        WHERE Sequence IS NULL

    The Sequences table has 2 columns: Sequence and Counter. The LAST_INSERT_ID lets us retrieve this updated counter value, as per MySQL's recommendation. That works perfectly for us, but we get these deadlocks! Why are we getting them, and how can we avoid them? Thanks so much for any help with this.
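
    One thing worth checking (a guess without seeing SHOW ENGINE INNODB STATUS output): with no index on Sequence, InnoDB has to scan the table and lock every row and gap it examines, which makes two concurrent updates far more likely to deadlock. A sketch of narrowing the locks:

        -- Let InnoDB find the target row via an index instead of a locking scan
        -- (index name is hypothetical; Sequence allows NULL, so not a primary key).
        ALTER TABLE Sequences ADD INDEX idx_sequence (Sequence);

    Even then, InnoDB resolves any deadlock it detects by rolling back one of the transactions, so the application should be prepared to retry.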

    Read the article

  • Select products with users

    - by Ploppe
    I have not worked with SQL for quite a long time, and I need some help with a basic query. I have the three following tables:

        users (id, name)
        products (id, name)
        owners (userid, productid, date)

    One product can be sold by user A to user B and then back to A. Now, I want the list of all products currently owned by each user, with the date of the transaction. Currently my query is this one, but I'm stuck with old data (the first association of a product to a user, not the newest one):

        SELECT users.name, products.name, date
        FROM products
        JOIN owners ON products.id = owners.id
        JOIN users ON owners.id = user.id
        GROUP BY product.id

    Do you have some hints? Thanks
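
    A sketch of the usual "latest row per product" pattern (assuming at most one owners row per product per date, and joining on the ownership columns rather than the mismatched ids in the query above):

        SELECT u.name AS user_name, p.name AS product_name, o.date
        FROM owners o
        JOIN (
            SELECT productid, MAX(date) AS last_date
            FROM owners
            GROUP BY productid
        ) latest ON latest.productid = o.productid
                AND latest.last_date = o.date
        JOIN users u    ON u.id = o.userid
        JOIN products p ON p.id = o.productid;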

    Read the article

  • Copying just the data from one Database to another

    - by monksy
    I'm not sure if this is the site for this question or not [if so, put it in the comments or vote to move it]. How can I copy only the data from one database to another within the same server on SQL Server 2005? The two databases have the same schema but not the same data, and I'm trying to get the data from one database to the other. I am not able to restore from a snapshot [that screws up the security settings on the database]. I'm not able to use the Import Data wizard, because that tries to copy over schema data as well.
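
    Since both databases are on the same instance, plain INSERT ... SELECT with three-part names is one route (a sketch; database, table, and column names are hypothetical, and tables with identity columns need the IDENTITY_INSERT dance shown):

        USE TargetDb;
        SET IDENTITY_INSERT dbo.Customers ON;  -- only needed for identity columns
        INSERT INTO dbo.Customers (Id, Name)
        SELECT Id, Name
        FROM SourceDb.dbo.Customers;
        SET IDENTITY_INSERT dbo.Customers OFF;

    Rows have to be inserted parent-before-child where foreign keys exist, or with constraints temporarily disabled.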

    Read the article

  • Query "where clause" fails when calling a function

    - by guest1
    Hi all, I have a function in Access VBA that takes four parameters. The fourth parameter is a "where clause" that I use in an SQL statement inside the function. The function fails when I include the fourth parameter (the where clause); when I remove it, the function works fine. I am not sure if there is anything wrong with the syntax of the fourth parameter? Please help. Here is the function as called in the query:

        FunctionA('Table1', 'Field1', 0.3,
                  'Field2=#' & [Field2] & '# and Value3="' & [Value3] & '"') AS Duration_Field
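
    When debugging something like this, it can help to print the string the expression actually builds (the values below are hypothetical). Access expects # delimiters around dates and quotes around text, so the clause should come out looking like:

        Field2=#05/01/2010# and Value3="some text"

    If [Field2] formats to something Access cannot parse as a date, or [Value3] itself contains a double quote, the assembled clause is no longer valid SQL, which would make the function fail only when this parameter is supplied.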

    Read the article

  • How to extract the latest row

    - by Bob
    Hi, I have a table like this:

        Table A
        Date    Time    ID    Ref
        110217  91703   A001  A1100056
        110217  91703   A001  A1100057
        110217  91703   A001  A1100058
        110217  91703   A001  A1100059
        110217  132440  A001  A1100057
        110217  132440  A001  A1100058
        110217  132440  A001  A1100060

    I wish to have the latest data only, so the final result using SQL should look like this:

        Date    Time    ID    Ref
        110217  132440  A001  A1100057
        110217  132440  A001  A1100058
        110217  132440  A001  A1100060

    The database updates itself at certain times. The problem is: I do not know the exact time, hence I do not know which records are the latest. Thanks.
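
    A sketch of one way to do it (assuming the table is named TableA and that "latest" means the greatest Time within the greatest Date):

        SELECT *
        FROM TableA
        WHERE [Date] = (SELECT MAX([Date]) FROM TableA)
          AND [Time] = (SELECT MAX([Time]) FROM TableA
                        WHERE [Date] = (SELECT MAX([Date]) FROM TableA));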

    Read the article

  • Select multiple unique rows in MySQL

    - by MartinW
    Hi, I've got a table with the following columns: ID, sysid, x, y, z, timereceived. ID is a unique number for each row. sysid is an ID number for a specific device (there are about 100 of these). x, y, and z are data received from the device (totally random numbers). timereceived is a timestamp for when the data was received. I need an SQL query to show me the last inserted row for device a, device b, device c, and so on. I've been playing around with a lot of different SELECT statements but never got anything that works. I managed to get unique rows by using GROUP BY, but the rest of the information comes out random (or at least it feels very random). Can anyone help me? There could be hundreds of thousands of records in this table.
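
    A sketch (the table name readings is hypothetical): since ID is unique and increasing, the last inserted row per device is the one with the greatest ID for that sysid:

        SELECT t.*
        FROM readings t
        JOIN (
            SELECT sysid, MAX(ID) AS max_id
            FROM readings
            GROUP BY sysid
        ) last ON last.max_id = t.ID;

    With hundreds of thousands of rows, an index on (sysid, ID) keeps the inner grouping cheap.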

    Read the article

  • FastObjects.NET (an OODB from Versant) performance in real scenarios?

    - by Lalit
    FastObjects.NET saves the whole class object (if marked with the Persistent attribute) at once in the file system (using serialization or a similar technology). They promise that it is even faster than the normal SQL DB approach. My team also thought it better and faster to save the whole object at once instead of each field one by one. The definition from their website: "FastObjects .NET 10.0 fully conforms to the Microsoft .NET 2.0 framework. Tightly integrated with Visual Studio 2005, it offers a developer-friendly, object-oriented alternative to a relational database for .NET persistence." I want to hear about your experiences using FastObjects in production scenarios. They make promises about indexing/transactions/clustering/replication.

    Read the article

  • Concurrency handling

    - by Lijo
    Hi, suppose I am about to start a project using ASP.NET and SQL Server 2005. I have to design the concurrency requirements for this application. I am planning to add a TimeStamp column to each table. While updating a table, I will check that the TimeStamp column is the same as it was when selected. Will this approach suffice? Or does it have any shortcomings under some circumstances? Please advise. Thanks, Lijo
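
    A sketch of the optimistic-concurrency pattern this describes, using SQL Server's rowversion type (the engine bumps it automatically on every write; table and column names here are hypothetical):

        UPDATE Orders
        SET Status = @NewStatus
        WHERE OrderID = @OrderID
          AND RowVer = @OriginalRowVer;  -- the rowversion value captured at SELECT time

        IF @@ROWCOUNT = 0
            RAISERROR('Row was modified by another user.', 16, 1);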

    Read the article

  • Make variables entered on the login page usable in multiple pages?

    - by deception1
    I have a login page that captures user input like this:

        MD5calc ss = new DBCon.MD5calc();
        string gs = ss.CalculateMD5Hash(password.Password);
        int unitID = Convert.ToInt32(Unit_ID.Text);
        logBO.UnitID = unitID;
        logBO.UserID = User_name.Text;
        logBO.UserPass = gs;

    How would I make these values accessible from any other page I create? My common sense says that creating a static class would be enough, but will it? If I do create a static class, where would I put it and how would I call it? I actually need those variables for use in my SQL stored procedures.

    Read the article

  • Why does this code do if (sz != sz2) sz = sz2?!

    - by acidzombie24
    For the first time I created LINQ to SQL classes. I decided to look at the generated class and found this. What... why is it doing if (sz != sz2) { sz = sz2; }? I don't understand. Why isn't the setter generated as simply this._Property1 = value?

        private string _Property1;

        [Column(Storage="_Property1", CanBeNull=false)]
        public string Property1
        {
            get
            {
                return this._Property1;
            }
            set
            {
                if ((this._Property1 != value))
                {
                    this._Property1 = value;
                }
            }
        }

    Read the article

  • SQLite: deleting the oldest 25% of records in a database

    - by Steven smethurst
    I am using a SQLite database to store values from a data logger. The data logger will eventually fill up all the available hard drive space on the computer. I'm looking for a way to remove the oldest 25% of the logs from the database once it reaches a certain limit. I'm using the following code:

        $ret = Query('SELECT id AS last FROM data ORDER BY id DESC LIMIT 1;');
        $last_id = $ret[0]['last'];
        $ret = Query('SELECT count(*) AS total FROM data');
        $start_id = $last_id - $ret[0]['total'] * 0.75;
        Query('DELETE FROM data WHERE id < ' . round($start_id, 0));

    A journal file gets created next to the database, and it fills up the remaining space on the drive until the script fails. How can I stop this journal file from being created? And is there any way to combine all three SQL queries into one statement?
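
    A sketch of both ideas (the PRAGMA is standard SQLite; the single-statement DELETE assumes id is an increasing autoincrement, and the subquery-as-LIMIT form should be verified against your SQLite build):

        -- Keep the rollback journal in memory instead of on disk.
        -- journal_mode = OFF is also possible, but an interrupted DELETE
        -- could then corrupt the database.
        PRAGMA journal_mode = MEMORY;

        -- Delete the oldest quarter of the rows in one statement.
        DELETE FROM data
        WHERE id IN (SELECT id FROM data
                     ORDER BY id
                     LIMIT (SELECT COUNT(*) / 4 FROM data));

    Note that deleting rows marks pages as free but does not shrink the file; a periodic VACUUM reclaims the space, at the cost of temporarily rewriting the database.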

    Read the article

  • More CPU cores may not always lead to better performance – MAXDOP and query memory distribution in spotlight

    - by sqlworkshops
    More hardware normally delivers better performance, but there are exceptions where it can hinder performance. Understanding these exceptions and working around them is a major part of SQL Server performance tuning.

    When a memory-allocating query executes in parallel, SQL Server distributes memory to each task that is executing part of the query in parallel. In our example, the sort operator that executes in parallel divides the memory across all tasks, assuming an even distribution of rows. Common memory-allocating queries are those that perform Sort or Hash Match operations such as Hash Join, Hash Aggregation, or Hash Union.

    In reality, how often are column values evenly distributed? Think of an example: are the employees of your company distributed evenly across all Zip codes, or mainly concentrated at headquarters? What happens when you sort a result set by Zip code? Do all products in the catalog sell equally, or are a few products the hot-selling items?

    One of my customers tested the example below on a 24-core server with various MAXDOP settings, with these results:

        MAXDOP 1:  CPU time = 1185 ms, elapsed time = 1188 ms
        MAXDOP 4:  CPU time = 1981 ms, elapsed time = 1568 ms
        MAXDOP 8:  CPU time = 1918 ms, elapsed time = 1619 ms
        MAXDOP 12: CPU time = 2367 ms, elapsed time = 2258 ms
        MAXDOP 16: CPU time = 2540 ms, elapsed time = 2579 ms
        MAXDOP 20: CPU time = 2470 ms, elapsed time = 2534 ms
        MAXDOP 0:  CPU time = 2809 ms, elapsed time = 2721 ms (all 24 cores)

    In the above test, when the data was evenly distributed, the elapsed time of the parallel query was always lower than that of the serial query. Why does the query get slower and slower with more CPU cores / a higher MAXDOP? Maybe you can answer this question after reading the article; let me know: [email protected].

    Well, you get the point; let's see an example. The best way to learn is to practice. To create the tables below and reproduce the behavior, join the mailing list via this link: www.sqlworkshops.com/ml and I will send you the table creation script.

    Let's update the Employees table so 49 out of 50 employees are located in Zip code 2001:

        update Employees set Zip = EmployeeID / 400 + 1 where EmployeeID % 50 = 1
        update Employees set Zip = 2001 where EmployeeID % 50 != 1
        go
        update statistics Employees with fullscan
        go

    Let's create the temporary table #FireDrill with all possible Zip codes:

        drop table #FireDrill
        go
        create table #FireDrill (Zip int primary key)
        insert into #FireDrill select distinct Zip from Employees
        update statistics #FireDrill with fullscan
        go

    Let's execute the query serially with MAXDOP 1:

        --Example provided by www.sqlworkshops.com
        --Execute query with uneven Zip code distribution
        --First serially with MAXDOP 1
        set statistics time on
        go
        declare @EmployeeID int, @EmployeeName varchar(48), @zip int
        select @EmployeeName = e.EmployeeName, @zip = e.Zip
        from Employees e
            inner join #FireDrill fd on (e.Zip = fd.Zip)
        order by e.Zip
        option (maxdop 1)
        go

    The query took 1011 ms to complete. The execution plan shows that 77816 KB of memory was granted while the estimated rows were 799624. No Sort Warnings in SQL Server Profiler.

    Now let's execute the query in parallel with MAXDOP 0.
        --Example provided by www.sqlworkshops.com
        --Execute query with uneven Zip code distribution
        --In parallel with MAXDOP 0
        set statistics time on
        go
        declare @EmployeeID int, @EmployeeName varchar(48), @zip int
        select @EmployeeName = e.EmployeeName, @zip = e.Zip
        from Employees e
            inner join #FireDrill fd on (e.Zip = fd.Zip)
        order by e.Zip
        option (maxdop 0)
        go

    The query took 1912 ms to complete. The execution plan shows that 79360 KB of memory was granted while the estimated rows were 799624. The estimated number of rows is the same in the serial and parallel plans; the parallel plan has slightly more memory granted due to additional overhead. The Sort properties show the rows unevenly distributed over the 4 threads. Sort Warnings in SQL Server Profiler.

    Intermediate summary: the reason for the higher duration with the parallel plan was a sort spill. This is due to the uneven distribution of employees over Zip codes, especially the concentration of 49 out of 50 employees in Zip code 2001. Now let's update the Employees table and distribute employees evenly across all Zip codes:

        update Employees set Zip = EmployeeID / 400 + 1
        go
        update statistics Employees with fullscan
        go

    Let's execute the query serially with MAXDOP 1:

        --Example provided by www.sqlworkshops.com
        --Execute query with even Zip code distribution
        --Serially with MAXDOP 1
        set statistics time on
        go
        declare @EmployeeID int, @EmployeeName varchar(48), @zip int
        select @EmployeeName = e.EmployeeName, @zip = e.Zip
        from Employees e
            inner join #FireDrill fd on (e.Zip = fd.Zip)
        order by e.Zip
        option (maxdop 1)
        go

    The query took 751 ms to complete. The execution plan shows that 77816 KB of memory was granted while the estimated rows were 784707. No Sort Warnings in SQL Server Profiler.

    Now let's execute the query in parallel with MAXDOP 0:

        --Example provided by www.sqlworkshops.com
        --Execute query with even Zip code distribution
        --In parallel with MAXDOP 0
        set statistics time on
        go
        declare @EmployeeID int, @EmployeeName varchar(48), @zip int
        select @EmployeeName = e.EmployeeName, @zip = e.Zip
        from Employees e
            inner join #FireDrill fd on (e.Zip = fd.Zip)
        order by e.Zip
        option (maxdop 0)
        go

    The query took 661 ms to complete. The execution plan shows that 79360 KB of memory was granted while the estimated rows were 784707. The Sort properties show the rows evenly distributed over the 4 threads. No Sort Warnings in SQL Server Profiler.

    Intermediate summary: when employees were distributed unevenly, concentrated in one Zip code, the parallel sort spilled while the serial sort performed well without spilling to tempdb. When the employees were distributed evenly across all Zip codes, neither the parallel nor the serial sort spilled to tempdb. This shows that uneven data distribution may affect the performance of some parallel queries negatively. For a detailed discussion of memory allocation, refer to the webcasts available at www.sqlworkshops.com/webcasts.

    Some of you might conclude from the above execution times that parallel query is not faster even when there is no spill. Below you can see that when we join a limited set of Zip codes, the parallel query will be fastest, since it can use Bitmap Filtering.

    Let's update the Employees table again so 49 out of 50 employees are located in Zip code 2001.
        update Employees set Zip = EmployeeID / 400 + 1 where EmployeeID % 50 = 1
        update Employees set Zip = 2001 where EmployeeID % 50 != 1
        go
        update statistics Employees with fullscan
        go

    Let's create the temporary table #FireDrill with a limited set of Zip codes:

        drop table #FireDrill
        go
        create table #FireDrill (Zip int primary key)
        insert into #FireDrill select distinct Zip
            from Employees where Zip between 1800 and 2001
        update statistics #FireDrill with fullscan
        go

    Let's execute the query serially with MAXDOP 1:

        --Example provided by www.sqlworkshops.com
        --Execute query with uneven Zip code distribution
        --Serially with MAXDOP 1
        set statistics time on
        go
        declare @EmployeeID int, @EmployeeName varchar(48), @zip int
        select @EmployeeName = e.EmployeeName, @zip = e.Zip
        from Employees e
            inner join #FireDrill fd on (e.Zip = fd.Zip)
        order by e.Zip
        option (maxdop 1)
        go

    The query took 989 ms to complete. The execution plan shows that 77816 KB of memory was granted while the estimated rows were 785594. No Sort Warnings in SQL Server Profiler.

    Now let's execute the query in parallel with MAXDOP 0:

        --Example provided by www.sqlworkshops.com
        --Execute query with uneven Zip code distribution
        --In parallel with MAXDOP 0
        set statistics time on
        go
        declare @EmployeeID int, @EmployeeName varchar(48), @zip int
        select @EmployeeName = e.EmployeeName, @zip = e.Zip
        from Employees e
            inner join #FireDrill fd on (e.Zip = fd.Zip)
        order by e.Zip
        option (maxdop 0)
        go

    The query took 1799 ms to complete. The execution plan shows that 79360 KB of memory was granted while the estimated rows were 785594. Sort Warnings in SQL Server Profiler. The estimated number of rows is the same in the serial and parallel plans; the parallel plan has slightly more memory granted due to additional overhead.

    Intermediate summary: the reason for the higher duration with the parallel plan, even with a limited set of Zip codes, was a sort spill. This is due to the uneven distribution of employees over Zip codes, especially the concentration of 49 out of 50 employees in Zip code 2001.

    Now let's update the Employees table and distribute employees evenly across all Zip codes:

        update Employees set Zip = EmployeeID / 400 + 1
        go
        update statistics Employees with fullscan
        go

    Let's execute the query serially with MAXDOP 1:

        --Example provided by www.sqlworkshops.com
        --Execute query with even Zip code distribution
        --Serially with MAXDOP 1
        set statistics time on
        go
        declare @EmployeeID int, @EmployeeName varchar(48), @zip int
        select @EmployeeName = e.EmployeeName, @zip = e.Zip
        from Employees e
            inner join #FireDrill fd on (e.Zip = fd.Zip)
        order by e.Zip
        option (maxdop 1)
        go

    The query took 250 ms to complete. The execution plan shows that 9016 KB of memory was granted while the estimated rows were 79973.8. No Sort Warnings in SQL Server Profiler.

    Now let's execute the query in parallel with MAXDOP 0:

        --Example provided by www.sqlworkshops.com
        --Execute query with even Zip code distribution
        --In parallel with MAXDOP 0
        set statistics time on
        go
        declare @EmployeeID int, @EmployeeName varchar(48), @zip int
        select @EmployeeName = e.EmployeeName, @zip = e.Zip
        from Employees e
            inner join #FireDrill fd on (e.Zip = fd.Zip)
        order by e.Zip
        option (maxdop 0)
        go

    The query took 85 ms to complete. The execution plan shows that 13152 KB of memory was granted while the estimated rows were 784707. No Sort Warnings in SQL Server Profiler.
    Here you see that the parallel query is much faster than the serial query, since SQL Server uses Bitmap Filtering to eliminate rows before the hash join.

    Parallel queries are very good for performance, but in some cases they can hinder performance. If one identifies the reason for these hindrances, then it is possible to get the best out of parallelism. I covered many aspects of monitoring and tuning parallel queries in webcasts (www.sqlworkshops.com/webcasts) and articles (www.sqlworkshops.com/articles). I suggest you watch the webcasts and read the articles to better understand how to identify and tune parallel query performance issues.

    Summary: one has to avoid sort spills to tempdb, and the chances of spills are higher when a query executes in parallel with uneven data distribution. Parallel query brings its own advantages: reduced elapsed time, and reduced work thanks to Bitmap Filtering. So it is important to understand how to avoid spills to tempdb and when to execute a query in parallel.

    I explain these concepts with detailed examples in my webcasts (www.sqlworkshops.com/webcasts); I recommend you watch them. The best way to learn is to practice. To create the above tables and reproduce the behavior, join the mailing list at www.sqlworkshops.com/ml and I will send you the relevant SQL scripts.

    Register for the upcoming 3 Day Level 400 Microsoft SQL Server 2008 and SQL Server 2005 Performance Monitoring & Tuning Hands-on Workshop in London, United Kingdom during March 15-17, 2011: click here to register / Microsoft UK TechNet. These are hands-on workshops with a maximum of 12 participants, not lectures. For consulting engagements, click here.

    Disclaimer and copyright information: This article refers to organizations and products that may be the trademarks or registered trademarks of their various owners. Copyright of this article belongs to R Meyyappan / www.sqlworkshops.com. You may freely use the ideas and concepts discussed in this article with acknowledgement (www.sqlworkshops.com), but you may not claim any of it as your own work. This article is for informational purposes only; you use any of the suggestions given here entirely at your own risk.

    R Meyyappan
    [email protected]
    LinkedIn: http://at.linkedin.com/in/rmeyyappan
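
    A small companion sketch for readers who want to watch the memory grants discussed above while the test query runs (sys.dm_exec_query_memory_grants is a standard DMV; run this from a second session):

        -- One row per query that currently holds or is waiting for a memory grant.
        SELECT session_id,
               requested_memory_kb,
               granted_memory_kb,
               used_memory_kb,
               dop  -- degree of parallelism the grant was sized for
        FROM sys.dm_exec_query_memory_grants;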

    Read the article

  • Application Performance Episode 2: Announcing the Judges!

    - by Michaela Murray
    The story so far… We're writing a new book for ASP.NET developers, and we want you to be a part of it! If you work with ASP.NET applications and have top tips, hard-won lessons, or sage advice for avoiding, finding, and fixing performance problems, we want to hear from you! And if your app uses SQL Server, even better – interaction with the database is critical to application performance, so we're looking for database top tips too.

    There's a Microsoft Surface apiece for the person who comes up with the best tip for SQL Server and the best tip for .NET. Of course, if your suggestion is selected for the book, you'll get full credit, by name, Twitter handle, GitHub repository, or whatever you like. To get involved, just email your nuggets of performance wisdom to [email protected] – there are examples of what we're looking for and full competition details at Application Performance: The Best of the Web.

    Enter the judges… As mentioned in my last blog post, we have a mystery panel of celebrity judges lined up to select the prize-winning performance pointers. We're now ready to reveal their secret identities! Judging your ASP.NET tips will be:

    Jean-Phillippe Gouigoux, MCTS/MCPD Enterprise Architect and MVP Connected System Developer. He's a board member at French software company MGDIS, and teaches algorithms, security, software tests, and ALM at the Université de Bretagne Sud. Jean-Philippe also lectures at IT conferences and writes articles for programming magazines. His book Practical Performance Profiling is published by Simple-Talk.

    Nik Molnar, a New Yorker, ASP Insider, and co-founder of Glimpse, an open source ASP.NET diagnostics and debugging tool. Originally from Florida, Nik specializes in web development, building scalable, client-centric solutions. In his spare time, Nik can be found cooking up a storm in the kitchen, hanging with his wife, speaking at conferences, and working on other open source projects.

    Mitchel Sellers, Microsoft C# and DotNetNuke MVP. Mitchel is an experienced software architect, business leader, public speaker, and educator. He works with companies across the globe, as CEO of IowaComputerGurus Inc. Mitchel writes technical articles for online and print publications and is the author of Professional DotNetNuke Module Programming. He frequently answers questions on StackOverflow and MSDN and is an active participant in the .NET and DotNetNuke communities.

    Clive Tong, Software Engineer at Red Gate. In previous roles, Clive spent a lot of time working with Common LISP and enthusing about functional languages, and he's worked with managed languages since before his first real job (which was a long time ago). Long convinced of the productivity benefits of managed languages, Clive is very interested in getting good runtime performance to keep managed languages practical for real-world development.

    And our trio of SQL Server specialists, ready to select your top suggestion, are (drumroll):

    Rodney Landrum, a SQL Server MVP who writes regularly about Integration Services, Analysis Services, and Reporting Services. He's authored SQL Server Tacklebox and three Reporting Services books, and contributes regularly to SQLServerCentral, SQL Server Magazine, and Simple-Talk. His day job involves overseeing a large SQL Server infrastructure in Orlando.

    Grant Fritchey, Product Evangelist at Red Gate and SQL Server MVP. In an IT career spanning more than 20 years, Grant has written VB, VB.NET, C#, and Java. He's been working with SQL Server since version 6.0. Grant volunteers with the Editorial Committee at PASS and has written books for Apress and Simple-Talk.

    Jonathan Allen, leader and founder of the PASS SQL South West user group. He's been working with SQL Server since 1999 and enjoys performance tuning, development, and using SQL Server for business solutions. He's spoken at SQLBits and SQL in the City, as well as local user groups across the UK. He's also a moderator at ask.sqlservercentral.com.

    Read the article

  • Speaking at Triangle SQL Server User Group 16 Mar 2010!

    - by andyleonard
    I'm excited to present Applied SSIS Design Patterns to the Triangle SQL Server User Group 16 Mar 2010! This is a reprise of my PASS Summit 2009 spotlight session. If you read this blog and make the meeting, introduce yourself! :{> Andy ...(read more)

    Read the article

  • Presenting to the New England SQL Server Users Group 10 Jun 2010!

    - by andyleonard
    I am honored to present Applied SSIS Design Patterns to the New England SQL Server Users Group on 10 Jun 2010! This is a reprise of the spotlight session presented at the PASS Summit 2009. Abstract: "Design Patterns" is more than a trendy buzz phrase; design patterns are a way of breaking down complex development projects into manageable tasks. They lend themselves to several development methodologies and apply to SSIS development. Chances are you're using your own design patterns now! In this spotlight...(read more)

    Read the article
