Search Results

Search found 4632 results on 186 pages for 'kevin major'.

  • GIT Exclude Specific Files when Pushing to Specific Repository

    - by Kevin Sylvestre
    Is it possible to exclude specific files (*.ai, *.psd) when pushing to certain repositories with Git? My need comes from trying to use Git for both version control and deployment to Heroku. If I include my graphic assets in the deploy, the slug size is larger than desired. However, I do need to include all project files in my main GitHub repository. Thanks, Kevin
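
    A commonly suggested approach for exactly this split (assuming the goal is only to keep the assets out of the Heroku slug, not out of version control) is a .slugignore file at the repository root. Heroku strips the matching paths after checkout, so the files still live in every git remote, including GitHub, but never reach the slug. A minimal sketch:

      # .slugignore (repo root) -- Heroku removes these paths from the slug
      # after checkout; they stay tracked in git and still push to GitHub.
      *.ai
      *.psd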

  • MBProgressHUD for Mac? - Cocoa

    - by Kevin
    Hey there everyone. So I'm used to the iPhone API, and I used MBProgressHUD a lot in my iPhone applications, but since I started developing apps for the Mac I noticed that I can't use MBProgressHUD. Is there an alternative that will do the same job as MBProgressHUD? I notice that Apple has it in their OS as well. If anyone could help me find an alternative for MBProgressHUD, I would greatly appreciate it! Thanks, Kevin

  • How can you remove a field from a Word document?

    - by Kevin van Zanten
    Dear reader, I'm working on a project where the user can insert data into a document using fields, document properties and variables. The user also needs to be able to remove the data from the document. So far, I've managed to remove the document property and variable, but I'm not sure how I would go about removing the field (that's already inserted into the document). Please advise. Yours sincerely, Kevin
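
    For illustration, a minimal sketch using the Word interop object model (this assumes C# with a reference to Microsoft.Office.Interop.Word and an already-open Document; the method name is mine, not from the question):

      using Word = Microsoft.Office.Interop.Word;

      static class FieldCleaner
      {
          public static void RemoveAllFields(Word.Document doc)
          {
              // The Fields collection is 1-based; walk it backwards so deleting
              // an entry does not shift the entries we have not visited yet.
              for (int i = doc.Fields.Count; i >= 1; i--)
              {
                  doc.Fields[i].Delete(); // removes the field code and its displayed result
              }
          }
      }

    Field.Delete removes the field entirely; if the intent is to keep the last displayed text while dropping the field code, Field.Unlink is the usual alternative. Note this only covers the main body; fields in headers and footers live in other story ranges.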

  • How to get a reference to a control in the view?

    - by Kevin
    If I have a UIScrollView set up in the view via Interface Builder, how do I get a reference to it in the ViewController implementation? I want to programmatically add labels to the scroll view. For example, in C#, if you have a textbox declared in the UI/form, you can access it by simply using the ID declared for that textbox. It doesn't seem this simple in Objective-C. Thanks, Kevin

  • PHP - Retrieve Data From MySQL Server

    - by Kevin
    Hello, Does anyone know how to retrieve a piece of data and display the result in a PHP file? A similar query to the one I would enter is something like this: SELECT 'email' FROM 'users' WHERE 'username' = 'bob' Thus, the result would be just the email. Thanks, Kevin
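
    As an aside on the sample query: in MySQL, single quotes delimit string literals, so 'email' and 'users' would be treated as strings rather than as a column and a table, and the query would not do what is intended. A corrected form (backticks are MySQL's optional identifier quoting):

      SELECT `email` FROM `users` WHERE `username` = 'bob';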

  • How to add words to an already loaded grammar using System.Speech and SAPI 5.3

    - by Kim Major
    Given the following code,

      Choices choices = new Choices();
      choices.Add(new GrammarBuilder(new SemanticResultValue("product", "<product/>")));
      GrammarBuilder builder = new GrammarBuilder();
      builder.Append(new SemanticResultKey("options", choices.ToGrammarBuilder()));
      Grammar grammar = new Grammar(builder) { Name = Constants.GrammarNameLanguage };
      grammar.Priority = priority;
      _recognition.LoadGrammar(grammar);

    how can I add additional words to the loaded grammar? I know this can be achieved both in native code and using the SpeechLib interop, but I prefer to use the managed library. Update: What I want to achieve is not having to reload an entire grammar because of individual changes. For small grammars I got good results by calling _recognition.RequestRecognizerUpdate() and then unloading the old grammar and loading a rebuilt grammar in the event handler:

      void Recognition_RecognizerUpdateReached(object sender, RecognizerUpdateReachedEventArgs e)

    For large grammars this becomes too expensive.
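
    For reference, a minimal sketch of the swap-at-update approach described above, using System.Speech's SpeechRecognitionEngine (the class shape and the BuildRebuiltGrammar helper are illustrative assumptions, not code from the question):

      using System.Speech.Recognition;

      class GrammarSwapper
      {
          readonly SpeechRecognitionEngine _recognition;

          public GrammarSwapper(SpeechRecognitionEngine recognition)
          {
              _recognition = recognition;
              _recognition.RecognizerUpdateReached += OnRecognizerUpdateReached;
          }

          public void Replace(Grammar oldGrammar)
          {
              // Ask the engine to pause at a safe point, carrying the grammar
              // to replace through the UserToken.
              _recognition.RequestRecognizerUpdate(oldGrammar);
          }

          void OnRecognizerUpdateReached(object sender, RecognizerUpdateReachedEventArgs e)
          {
              // The engine is paused here, so grammars can be swapped safely.
              _recognition.UnloadGrammar((Grammar)e.UserToken);
              _recognition.LoadGrammar(BuildRebuiltGrammar()); // rebuilt to include the new words
          }

          Grammar BuildRebuiltGrammar()
          {
              // Hypothetical rebuild; in the question this is the Choices/GrammarBuilder code above.
              return new Grammar(new GrammarBuilder(new Choices("product", "order")));
          }
      }

    As the question notes, this works well for small grammars; for large ones the rebuild itself is the expensive part, which is exactly the limitation described.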

  • Creating a C++ DLL and then using it in C#

    - by Major
    Ok I'm trying to make a C++ DLL that I can then call and reference in a C# app. I've already made a simple DLL using the numerous guides out there; however, when I try to reference it in the C# app I get the error "Unable to load DLL 'SDES.dll': The specified module could not be found." The code for the program is as follows (bear with me, I'm going to include all the files):

      // These are the DLL files.
      // test.h
      #ifndef TestDLL_H
      #define TestDLL_H
      extern "C"
      {
          // Returns a + b
          __declspec(dllexport) double Add(double a, double b);
          // Returns a - b
          __declspec(dllexport) double Subtract(double a, double b);
          // Returns a * b
          __declspec(dllexport) double Multiply(double a, double b);
          // Returns a / b
          // Throws DivideByZeroException if b is 0
          __declspec(dllexport) double Divide(double a, double b);
      }
      #endif

      // test.cpp
      #include "test.h"
      #include <stdexcept> // header name lost in the original formatting; needed for invalid_argument
      using namespace std;

      extern double __cdecl Add(double a, double b) { return a + b; }
      extern double __cdecl Subtract(double a, double b) { return a - b; }
      extern double __cdecl Multiply(double a, double b) { return a * b; }
      extern double __cdecl Divide(double a, double b)
      {
          if (b == 0) { throw new invalid_argument("b cannot be zero!"); }
          return a / b;
      }

      // C# program
      using System;
      using System.Collections.Generic;
      using System.Linq;
      using System.Text;
      using System.Runtime.InteropServices;

      namespace ConsoleApplication1
      {
          class Program
          {
              [DllImport("SDES.dll")]
              public static extern void SimulateGameDLL(int a, int b);

              static void Main(string[] args)
              {
                  SimulateGameDLL(1, 2); // Error here...
              }
          }
      }

    Anyone have any ideas what I may be missing in my program? Let me know if I missed some code or if you have any questions.
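
    For contrast, a minimal matched pair that does load (a sketch, not the asker's full project). Two things stand out in the snippet above: the C# side imports SimulateGameDLL, which the header never exports, and "The specified module could not be found" typically means SDES.dll, or one of its C++ runtime dependencies, is not sitting next to the executable:

      using System;
      using System.Runtime.InteropServices;

      class NativeMath
      {
          // CallingConvention.Cdecl matches the __cdecl definitions in test.cpp;
          // the entry point must be a name the DLL actually exports ("Add").
          [DllImport("SDES.dll", CallingConvention = CallingConvention.Cdecl)]
          static extern double Add(double a, double b);

          static void Main()
          {
              // Requires SDES.dll (and its dependencies) in the output directory.
              Console.WriteLine(Add(1, 2)); // prints 3
          }
      }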

  • Managed C++ Error C2504

    - by Major
    I'm new to managed C++ and I'm attempting to design a program for a presentation. I am attempting to have a class inherit from an ABC (abstract base class) and I'm getting error C2504 ("base class undefined"). The code in question is as follows:

      ref class Item : Auction // Error C2504 here
      {
          // More code for the class
      };

    Auction is defined in a different .h file. Let me know if there are any other questions or if you need to see more of the code. Thanks
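
    A hedged guess at the usual cause, sketched in C++/CLI (the file name is an assumption): C2504 fires when the base class is not fully defined at the point of inheritance, for example when only a forward declaration is visible. Including the header that actually defines Auction normally resolves it:

      // Item.h (hypothetical)
      #include "Auction.h" // full definition required; a bare "ref class Auction;" forward declaration triggers C2504

      ref class Item : Auction // Auction is now a complete type
      {
          // ...
      };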

  • JSON.parse vs. eval()

    - by Kevin Major
    My Spider Sense warns me that using eval() to parse incoming JSON is a bad idea. I'm just wondering if JSON.parse() - which I assume is a part of JavaScript and not a browser-specific function - is more secure.

  • Are my negative internship experiences representative of the real world? [closed]

    - by attemptAtAnonymity
    I'm curious if my current experiences as an intern are representative of the actual industry. As background, I'm through the better part of two computing majors and a math major at a major university; I've aced every class and adored all of them, so I'd like to think that I'm not terrible at programming. I got an internship with one of the major software companies, and halfway through now I've been shocked at the extraordinarily low quality of code. Comments don't exist, it's all spaghetti code, and everything that could be wrong is even worse. I've done a ton of tutoring/TAing, so I'm very used to reading bad code, but the major industry products I've been seeing trump all of that. I work 10-12 hours a day and never feel like I'm getting anywhere, because it's endless hours of trying to figure out an undocumented API or determine the behavior of some other part of the (completely undocumented) product. I've left work hating the job every day so far, and I desperately want to know if this is what is in store for the rest of my life. Did I draw a short straw on internships (the absurdly large paychecks imply that it's not a low-quality position), or is this what the real world is like?

  • ASP.NET 4 Unleashed in Bookstores!

    - by Stephen Walther
    I’m happy to announce that ASP.NET 4 Unleashed is now in bookstores! The book is over 1,800 pages and it is packed with code samples and tutorials on all the features of ASP.NET 4. Given the size of the book – did I mention that it is over 1,800 pages? – I can safely say that it is the most comprehensive book on ASP.NET. This edition of the book has several new chapters written by Kevin Hoffman and Nate Dudek. Kevin and Nate did a fantastic job of covering the new features of ASP.NET 4, including:
    - The new ASP.NET Chart Control
    - The new ASP.NET QueryExtender Control
    - The new ASP.NET routing framework
    - jQuery
    You can buy the book from your local bookstore or from Amazon.

  • SQL SERVER – Solution – Puzzle – Statistics are not Updated but are Created Once

    - by pinaldave
    Earlier I asked a puzzle about why statistics are not updated. Read the complete details over here: Statistics are not Updated but are Created Once. In the question I demonstrated that, even though statistics should have been updated after lots of inserts into the table, they are not updated. (Read the details in SQL SERVER – When are Statistics Updated – What triggers Statistics to Update.) In this example I created the following situation:
    - Create a table
    - Insert 1,000 records
    - Check the statistics
    - Now insert 10 times more records (10,000)
    - Check the statistics – they will NOT be updated
    - Auto Update Statistics and Auto Create Statistics for the database are TRUE
    Now I have requested two things in the example: 1) Why is this happening? 2) How to fix this issue? I have many answers – here is how I fixed it, which has resolved the issue for me. NOTE: There are multiple answers to this problem and I will do my best to list all. Solution: create a nonclustered index on column City. Here is the working example for the same; let us understand this script – there is added explanation at the end.

      -- Execution Plans Difference
      -- Estimated Execution Plan Vs Actual Execution Plan
      -- Create Sample Database
      CREATE DATABASE SampleDB
      GO
      USE SampleDB
      GO
      -- Create Table
      CREATE TABLE ExecTable (ID INT, FirstName VARCHAR(100), LastName VARCHAR(100), City VARCHAR(100))
      GO
      CREATE NONCLUSTERED INDEX IX_ExecTable1 ON ExecTable (City);
      GO
      -- Insert One Thousand Records
      -- INSERT 1
      INSERT INTO ExecTable (ID, FirstName, LastName, City)
      SELECT TOP 1000 ROW_NUMBER() OVER (ORDER BY a.name) RowID,
             'Bob',
             CASE WHEN ROW_NUMBER() OVER (ORDER BY a.name) % 2 = 1 THEN 'Smith' ELSE 'Brown' END,
             CASE WHEN ROW_NUMBER() OVER (ORDER BY a.name) % 20 = 1 THEN 'New York'
                  WHEN ROW_NUMBER() OVER (ORDER BY a.name) % 20 = 5 THEN 'San Marino'
                  WHEN ROW_NUMBER() OVER (ORDER BY a.name) % 20 = 3 THEN 'Los Angeles'
                  WHEN ROW_NUMBER() OVER (ORDER BY a.name) % 20 = 7 THEN 'La Cinega'
                  WHEN ROW_NUMBER() OVER (ORDER BY a.name) % 20 = 13 THEN 'San Diego'
                  WHEN ROW_NUMBER() OVER (ORDER BY a.name) % 20 = 17 THEN 'Las Vegas'
                  ELSE 'Houston' END
      FROM sys.all_objects a
      CROSS JOIN sys.all_objects b
      GO
      -- Display statistics of the table
      sp_helpstats N'ExecTable', 'ALL'
      GO
      -- Select Statement
      SELECT FirstName, LastName, City FROM ExecTable WHERE City = 'New York'
      GO
      -- Display statistics of the table
      sp_helpstats N'ExecTable', 'ALL'
      GO
      -- Replace your Statistics over here
      DBCC SHOW_STATISTICS('ExecTable', IX_ExecTable1);
      GO
      --------------------------------------------------------------
      -- Round 2
      -- Insert One Thousand Records
      -- INSERT 2
      INSERT INTO ExecTable (ID, FirstName, LastName, City)
      SELECT TOP 1000 ROW_NUMBER() OVER (ORDER BY a.name) RowID,
             'Bob',
             CASE WHEN ROW_NUMBER() OVER (ORDER BY a.name) % 2 = 1 THEN 'Smith' ELSE 'Brown' END,
             CASE WHEN ROW_NUMBER() OVER (ORDER BY a.name) % 20 = 1 THEN 'New York'
                  WHEN ROW_NUMBER() OVER (ORDER BY a.name) % 20 = 5 THEN 'San Marino'
                  WHEN ROW_NUMBER() OVER (ORDER BY a.name) % 20 = 3 THEN 'Los Angeles'
                  WHEN ROW_NUMBER() OVER (ORDER BY a.name) % 20 = 7 THEN 'La Cinega'
                  WHEN ROW_NUMBER() OVER (ORDER BY a.name) % 20 = 13 THEN 'San Diego'
                  WHEN ROW_NUMBER() OVER (ORDER BY a.name) % 20 = 17 THEN 'Las Vegas'
                  ELSE 'Houston' END
      FROM sys.all_objects a
      CROSS JOIN sys.all_objects b
      GO
      -- Select Statement
      SELECT FirstName, LastName, City FROM ExecTable WHERE City = 'New York'
      GO
      -- Display statistics of the table
      sp_helpstats N'ExecTable', 'ALL'
      GO
      -- Replace your Statistics over here
      DBCC SHOW_STATISTICS('ExecTable', IX_ExecTable1);
      GO
      -- Clean up Database
      DROP TABLE ExecTable
      GO
    When I created the nonclustered index on the column City, it also created statistics on the same column, with the same name as the index. When we populate data in the column, the index is updated, which invalidates the execution plan – and this leads to the statistics being updated on the next execution of the SELECT. This behavior does not happen on a heap, or on a column where the statistics were auto-created. If you explicitly update the index, you can often see that the statistics are updated as well. You can see that this is indeed what is happening if you follow the explanation from John Sansom. John Sansom's suggestion: That was fun! Although the column statistics are invalidated by the time the second select statement is executed, the query is not compiled/recompiled; instead the existing query plan is reused. It is the "next" compiled query against the column statistics that will see that they are out of date and will then in turn instantiate the action of updating statistics. You can see this in action by forcing the second statement to recompile:

      SELECT FirstName, LastName, City FROM ExecTable WHERE City = 'New York' OPTION (RECOMPILE)
      GO

    Kevin Cross also has another suggestion: I agree with John. It is reusing the execution plan. Aside from OPTION (RECOMPILE), clearing the execution plan cache before the subsequent tests will also work, i.e., run this before round 2:

      -- Clear execution plan cache before next test
      DBCC FREEPROCCACHE WITH NO_INFOMSGS;

    Nice puzzle! Kevin. As this was a puzzle, John and Kevin both got the correct answer; there was no requirement for the answer to follow best practices. I know John and he is one of the finest DBAs around – his tremendous knowledge has always impressed me. John and Kevin will both agree that clearing the cache using DBCC FREEPROCCACHE, or recompiling each query every time, is for sure not good advice on a production server. It is a correct answer, but not a best practice. By the way, if you have a better solution or a better suggestion, please advise. I am open to changing my answer and publishing further improvements to this solution. On a very separate note, I like to have a clustered index on my primary key, which I have not mentioned here as it is out of the scope of this puzzle. Reference: Pinal Dave (http://blog.SQLAuthority.com)

  • Silverlight Cream for February 26, 2011 -- #1052

    - by Dave Campbell
    In this Issue: Mark Monster, Gill Cleeren, Pencho Popadiyn, Kevin Dockx, Joost van Schaik, Jesse Liberty, John Papa, Jeremy Likness, Arik Poznanski(-2-), Page Brooks, Deborah Kurata, Mike Snow, Alfred Astort, Samuel Jack, XAMLNinja, and Shawn Wildermuth. Above the Fold: Silverlight: "Asynchronous Callbacks with Rx" Jesse Liberty WP7: "Phoney Windows Phone 7 Project Now Available!" Shawn Wildermuth MVVM: "Validating our ViewModel" Mark Monster Shoutouts: Shawn Wildermuth has a video up of his FadingMessage class to show it off: Introducing Phoney's FadingMessage Class From SilverlightCream.com: Validating our ViewModel Mark Monster discusses Validation in his latest post... using INotifyDataErrorInfo and his own implementation of a ViewModel base that supports it and INPC. Getting ready for Microsoft Silverlight Exam 70-506 (Part 7) Gill Cleeren hits part 7 of his series at SilverlightShow on a great walk through Silverlight and getting ready for the exam. This is the final part and concentrates on deploying apps. Windows Phone 7–Creating Custom Keyboard Pencho Popadiyn has a post at SilverlightShow discussing problems with WP7 keyboards in his native Bulgaria, and his solution to the problem... create his own. 360 Degrees Feedback by Kevin Dockx Kevin Dockx produced a white paper for his company about an employee review solution they did in Silverlight. The white paper is available, and SilverlightShow interviewed Kevin to answer questions about the app. Extended Windows Phone 7 page for handling rotation, focused element updates and back key press Looks like Joost van Schaik has a few posts I've missed... and I'm not going to get to them all today! ... this one is about the base class he uses for WP7 apps... a bunch of utilities he uses... definitely worth a look (and a take). Asynchronous Callbacks with Rx Jesse Liberty has his 8th post in the Rx series up and this one's on Asynchronous Callbacks... if you haven't seen this before, you should definitely look into it... cool stuff, Jesse! Silverlight TV 63: Exploring National Instruments' App Using Data and Business Features John Papa has Silverlight TV number 63 up and is talking to Steve Lasker about National Instruments and their Lab View product. Great demo and discussion. Jounce Part 11: Debugging MEF Jeremy Likness's latest (number 11) in his series on his MVVM framework Jounce is out, and he's discussing how to debug MEF, which Jounce handles nicely through the logging he provides... and you can use it externally to Jounce. Get Twitter Trends on Windows Phone 7 Arik Poznanski has a couple of Twitter for WP7 posts up... the first is one for pulling Twitter trends from whatthetrend.com... plus the code to do it. Searching Twitter on Windows Phone 7 In his next post, Arik Poznanski shows how to search Twitter from your WP7... again with code. Tiled Background Control in Silverlight Page Brooks shows how to get a tiled background control in Silverlight... did you know there was one in the JetPack theme? Silverlight Charting: Displaying Data Above the Column Deborah Kurata continues her charting posts with this one displaying the column value above the column. I like this... it has a clean look and all the data is available at a glance. Silverlight: Tasks on the Win7 Mobile Phone Mike Snow has a list of the WP7 tasks available and an example of using them... looks like a pretty good reference! 10 of 10 - Aesthetics and alignment matter Alfred Astort discusses aesthetics and WP7 dev... looks like it's the same as any app development, but if you're not doing it, you should be. Simon Squared – We have Multi-player: Days 4, 5 and (ahem!) 6 Samuel Jack details the completion of his multi-player game for WP7 utilizing Azure, in the hour-by-hour detail he's done the rest... plus a video of the final product! Who ate all the pies!! XAMLNinja has a very good discussion/link set of Charting posts, all leading up to a portrait-only version of charting for WP7 with labels that look great. Phoney Windows Phone 7 Project Now Available! Shawn Wildermuth has a collection of classes he always uses with WP7 dev, and he's sharing them with all of us as a "Phoney" Tools project on CodePlex... and now has a NuGet project also. Stay in the 'Light!

  • Silverlight Cream for April 09, 2010 -- #835

    - by Dave Campbell
    In this Issue: Tim Heuer, smartyP, and Kevin Moore. From SilverlightCream.com: Using XNA libraries in your Silverlight Windows Phone 7 applications Tim Heuer has a post up using XNA on WP7 to hook up sound to a 'normal' Silverlight WP7 app... so there ya go! Example Pivot Control for Windows Phone 7 smartyP acknowledges that he said he was done with the Pivot control for WP7, and yes, he realizes we're most likely going to get one from Microsoft, but just like the rest of us, he just couldn't leave it alone :) Bag of Tricks Update (two years in the making) I found this via Cool view transitions using ZapScroller by Rudi Grobler, and it points at Kevin Moore's Bag of Tricks Update for Silverlight 4 and WPF... just the fact that Robby Ingebretsen is using it means we should all rush to CodePlex and absorb it :) Stay in the 'Light!

  • Dartisans Ep. 7 - Dart news and special guests

    In this episode of Dartisans, we talk to Kevin Moore and Bob Nystrom about the latest language changes as Dart heads to M1. We'll also chat about Dart's package manager and Kevin's proposed package structure. If we're lucky, we'll get a sneak peek at the Seattle Dart Hackathon (on July 14th) and Bob's Dart talk at OSCON. Learn more at www.dartlang.org (video, 42:04, from GoogleDevelopers).

  • Go Big or Go Special

    - by Ajarn Mark Caldwell
    Watching Shark Tank tonight and the first presentation was by Mango Mango Preserves and it highlighted an interesting contrast in business trends today and how to capitalize on opportunities.  <Spoiler Alert> Even though every one of the sharks was raving about the product samples they tried, with two of them going for second and third servings, none of them made a deal to invest in the company.</Spoiler>  In fact, one of the sharks, Kevin O’Leary, kept ripping into the owners with statements to the effect that he thinks they are headed over a financial cliff because he felt their costs were way out of line and would be their downfall if they didn’t take action to radically cut costs. He said that he had previously owned a jams and jellies business and knew the cost ratios that you had to have to make it work.  I don’t doubt he knows exactly what he’s talking about and is 100% accurate…for doing business his way, which I’ll call “Go Big”.  But there’s a whole other way to do business today that would be ideal for these ladies to pursue. As I understand it, based on his level of success in various businesses and the fact that he is even in a position to be investing in other companies, Kevin’s approach is to go mass market (Go Big) and make hundreds of millions of dollars in sales (or something along that scale) while squeezing out every ounce of cost that you can to produce an acceptable margin.  But there is a very different way of making a very successful business these days, which is all about building a passionate and loyal community of customers that are rooting for your success and even actively trying to help you succeed by promoting your product or company (Go Special).  This capitalizes on the power of social media, niche marketing, and The Long Tail.  One of the most prolific writers about capitalizing on this trend is Seth Godin, and I hope that the founders of Mango Mango pick up a couple of his books (probably Purple Cow and Tribes would be good starts) or at least read his blog.  I think the adoration expressed by all of the sharks for the product is the biggest hint that they have a remarkable product and that they are perfect for this type of business approach. Both are completely valid business models, and it may certainly be that the scale at which Kevin O’Leary wants to conduct business where he invests his money is well beyond the long tail, but that doesn’t mean that there is not still a lot of money to be made there.  I wish them the best of luck with their endeavors!

  • Table Variables: an empirical approach.

    - by Phil Factor
    It isn’t entirely a pleasant experience to publish an article only to have it described on Twitter as ‘Horrible’, and to have it criticized on the MVP forum. When this happened to me in the aftermath of publishing my article on Temporary tables recently, I was taken aback, because these critics were experts whose views I respect. What was my crime? It was, I think, to suggest that, despite the obvious quirks, it was best to use Table Variables as a first choice, and to use local Temporary Tables if you hit problems due to these quirks, or if you were doing complex joins using a large number of rows. What are these quirks? Well, table variables have advantages if they are used sensibly, but this requires some awareness by the developer about the potential hazards and how to avoid them. You can be hit by a badly-performing join involving a table variable. Table Variables are a compromise, and this compromise doesn’t always work out well. Explicit indexes aren’t allowed on Table Variables, so one cannot use covering indexes or non-unique indexes. The query optimizer has to make assumptions about the data rather than using column distribution statistics when a table variable is involved in a join, because there aren’t any column-based distribution statistics on a table variable. It assumes a reasonably even distribution of data, and is likely to have little idea of the number of rows in the table variables that are involved in queries. However complex the heuristics that are used might be in determining the best way of executing a SQL query, and they most certainly are, the Query Optimizer is likely to fail occasionally with table variables, under certain circumstances, and produce a Query Execution Plan that is frightful. The experienced developer or DBA will be on the lookout for this sort of problem. In this blog, I’ll be expanding on some of the tests I used when writing my article to illustrate the quirks, and include a subsequent example supplied by Kevin Boles. A simplified example. We’ll start out by illustrating a simple example that shows some of these characteristics. We’ll create two tables filled with random numbers and then see how many matches we get between the two tables. We’ll forget indexes altogether for this example, and use heaps. We’ll try the same Join with two table variables, two table variables with OPTION (RECOMPILE) in the JOIN clause, and with two temporary tables. It is all a bit jerky because of the granularity of the timing that isn’t actually happening at the millisecond level (I used DATETIME). However, you’ll see that the table variable is outperforming the local temporary table up to 10,000 rows. Actually, even without a use of the OPTION (RECOMPILE) hint, it is doing well. What happens when your table size increases? The table variable is, from around 30,000 rows, locked into a very bad execution plan unless you use OPTION (RECOMPILE) to provide the Query Analyser with a decent estimation of the size of the table. However, if it has the OPTION (RECOMPILE), then it is smokin’. Well, up to 120,000 rows, at least. It is performing better than a Temporary table, and in a good linear fashion. What about mixed table joins, where you are joining a temporary table to a table variable? You’d probably expect that the query analyzer would throw up its hands and produce a bad execution plan as if it were a table variable. After all, it knows nothing about the statistics in one of the tables so how could it do any better? 
    Well, it behaves as if it were doing a recompile. And an explicit recompile adds no value at all. (We just go up to 45,000 rows, since we know the bigger picture now.) Now, if you were new to this, you might be tempted to start drawing conclusions. Beware! We’re dealing with a very complex beast: the Query Optimizer. It can come up with surprises. What if we change the query very slightly to insert the results into a Table Variable? We change nothing else and just measure the execution time of the statement as before. Suddenly, the table variable isn’t looking so much better, even taking into account the time involved in doing the table insert. OK, if you haven’t used OPTION (RECOMPILE) then you’re toast. Otherwise, there isn’t much in it between the Table variable and the temporary table. The table variable is faster up to 8,000 rows, and then there is not much in it up to 100,000 rows. Past the 8,000-row mark, we’ve lost the advantage of the table variable’s speed. Any general rule you may be formulating has just gone for a walk. What we can conclude from this experiment is that if you join two table variables, and can’t use constraints, you’re going to need that OPTION (RECOMPILE) hint. Count Dracula and the Horror Join. These tables of integers provide a rather unreal example, so let’s try a rather different example, and get stuck into some implicit indexing, by using constraints. What unusual words are contained in the book ‘Dracula’ by Bram Stoker? Here we get a table of all the common words in the English language (60,387 of them) and put them in a table. We put them in a Table Variable with the word as a primary key, a Table Variable Heap and a Table Variable with a primary key. We then take all the distinct words used in the book ‘Dracula’ (7,558 of them). We then create a table variable and insert into it all those uncommon words that are in ‘Dracula’, i.e. all the words in Dracula that aren’t matched in the list of common words. To do this we use a left outer join, where the right-hand value is null. The results show a huge variation, between the sublime and the gorblimey. If both tables contain a Primary Key on the columns we join on, and both are Table Variables, it took 33 ms. If one table contains a Primary Key, and the other is a heap, and both are Table Variables, it took 46 ms. If both Table Variables use a unique constraint, then the query takes 36 ms. If neither table contains a Primary Key and both are Table Variables, it took 116,383 ms. Yes, nearly two minutes!! If both tables contain a Primary Key, one is a Table Variable and the other is a temporary table, it took 113 ms. If one table contains a Primary Key, and both are Temporary Tables, it took 56 ms. If both tables are temporary tables and both have primary keys, it took 46 ms. Here we see table variables which are joined on their primary key again enjoying a slight performance advantage over temporary tables. Where both tables are table variables and both are heaps, the query suddenly takes nearly two minutes! So what if you have two heaps and you use OPTION (RECOMPILE)? If you take the rogue query and add the hint, then suddenly, the query drops its time down to 76 ms. If you add unique indexes, then you've done even better, down to half that time. Here are the text execution plans. So where have we got to? Without drilling down into the minutiae of the execution plans we can begin to create a hypothesis.
    If you are using table variables, and your tables are relatively small, they are faster than temporary tables, but as the number of rows increases you need to do one of two things: either you need to have a primary key on the column you are using to join on, or else you need to use OPTION (RECOMPILE). If you try to execute a query that is a join, and both tables are table variable heaps, you are asking for trouble – well, slow queries – unless you use the hint once the number of rows has risen past a point (30,000 in our first example, but this varies considerably according to context). Kevin’s Skew In describing the table size, I used the term ‘relatively small’. Kevin Boles produced an interesting case where a single-row table variable produces a very poor execution plan when joined to a very, very skewed table. In the original, pasted into my article as a comment, a table consisted of 100,000 rows in which the key column held a single value (1). To this were added eight rows with sequential numbers up to 9. When this was joined to a single-row Table Variable with a key of 2, it produced a bad plan. This problem is unlikely to occur in real usage, and the Query Optimiser team probably never set up a test for it. Actually, the skew can be slightly less extreme than Kevin made it. The following test showed that once the table had 54 sequential rows, it adopted exactly the same execution plan as the temporary table, and then all was well. Undeniably, real data does occasionally cause problems to the performance of joins in Table Variables due to the extreme skew of the distribution. We've all experienced Perfectly Poisonous Table Variables in real live data. As in Kevin’s example, indexes merely make matters worse, and the OPTION (RECOMPILE) trick does nothing to help. In this case, there is no option but to use a temporary table. However, one has to note that once the slight de-skew had taken place, the plans were identical across a huge range. Conclusions Where you need to hold intermediate results as part of a process, Table Variables offer a good alternative to temporary tables when used wisely. They can perform faster than a temporary table when the number of rows is not great. For some processing with huge tables, they can perform well when only a clustered index is required, and when the nature of the processing makes an index seek very effective. Table Variables are scoped to the batch or procedure and are unlikely to hang about in TempDB when they are no longer required. They require no explicit cleanup. Where the number of rows in the table is moderate, you can even use them in joins as ‘Heaps’, unindexed. Beware, however, since, as the number of rows increases, joins on Table Variable heaps can easily become saddled with very poor execution plans, and this must be cured either by adding constraints (UNIQUE or PRIMARY KEY) or by adding the OPTION (RECOMPILE) hint if this is impossible. Occasionally, the way that the data is distributed prevents the efficient use of Table Variables, and this will require using a temporary table instead. Table Variables require some awareness by the developer about the potential hazards and how to avoid them. If you are not prepared to do any performance monitoring of your code or fine-tuning, and just want to pummel out stuff that ‘just runs’ without considering namby-pamby stuff such as indexes, then stick to Temporary tables.
If you are likely to slosh about large numbers of rows in temporary tables without considering the niceties of processing just what is required and no more, then temporary tables provide a safer and less fragile means-to-an-end for you.
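
    To make the two remedies from the conclusions concrete, here is a minimal sketch (hypothetical single-column tables, not the test rig from the article) of the Dracula-style anti-join with both fixes applied:

      -- Constraint route: the PRIMARY KEY gives the table variable an implicit index.
      DECLARE @common TABLE (word VARCHAR(100) PRIMARY KEY);
      -- Heap route: no index possible, so the hint below is the fallback.
      DECLARE @draft TABLE (word VARCHAR(100));

      SELECT d.word
      FROM @draft d
      LEFT OUTER JOIN @common c ON c.word = d.word
      WHERE c.word IS NULL       -- words in @draft with no match in @common
      OPTION (RECOMPILE);        -- compile with the real row counts in view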

  • Google I/O 2010 - Keynote Day 1

    Video footage from the Day 1 keynote at Google I/O 2010. Speakers: Vic Gundotra, Engineering Vice President, Google; Sundar Pichai, Vice President, Product Management, Google; Charles Pritchard, Founder, MugTug; Jim Lanzone, CEO, Clicker; Mike Shaver, VP Engineering, Mozilla Corporation; Håkon Wium Lie, CTO, Opera Software; Kevin Lynch, CTO, Adobe Systems; Terry McDonell, Editor, Sports Illustrated Group; Lars Rasmussen, Manager, Google Wave; David Glazer, Engineering Director, Google; Paul Maritz, President & CEO, VMware; Ben Alex, Senior Staff Engineer, SpringSource Division of VMware; Bruce Johnson, Engineering Director, Google; Kevin Gibbs, Software Engineer, Google. For all I/O 2010 sessions, please go to code.google.com (video, 02:05:08, from GoogleDevelopers).

  • Azure Table Storage Creation using Nov 2009 CTP

    - by kaleidoscope
    The new SDK introduces a new class:

    CloudTableClient: This new class enables us to create tables and test for the existence of tables. We need not use this class for querying table storage; it's more of an administrative class for dealing with table storage itself.

    Once we have got the account key and the account name from ConfigurationSetting, we can create an instance of the storage credentials and table client classes:

      StorageCredentialsAccountAndKey creds = new StorageCredentialsAccountAndKey(accountName, accountKey);
      CloudTableClient tableStorage = new CloudTableClient(tableBaseUri, creds);
      CustomerContext ctx = new CustomerContext(tableBaseUri, creds);
      // where tableBaseUri is the TableStorageEndpoint obtained from ConfigurationSetting

    Using the table storage class, we can now create a new table (if it doesn't already exist):

      if (tableStorage.CreateTableIfNotExist("Customers"))
      {
          CustomerRow cust = new CustomerRow("AccountsReceivable", "kevin");
          cust.FirstName = "Kevin";
          cust.LastName = "Hoffman";
          ctx.AddObject("Customers", cust);
          ctx.SaveChanges();
      }

    For a complete article on this topic please follow this link: http://dotnetaddict.dotnetdevelopersjournal.com/azure_nov09_tablestorage.htm
    Tinu, O
