Search Results

Search found 3947 results on 158 pages for 'passing'.

Page 108/158 | < Previous Page | 104 105 106 107 108 109 110 111 112 113 114 115  | Next Page >

  • How to do hex8 encoding in C?

    - by Tech163
    I am trying to encode a string in hex8 using C. The function I have right now is: int hex8 (char str) { str = printf("%x", str); if(strlen(str) == 1) { return printf("%s", "0", str); } else { return str; } } In this function, I need to add a 0 in front of the output whenever it is only one hex digit long. I don't know why I'm getting: passing argument 1 of 'strlen' makes pointer from integer without a cast Does anyone know why?
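
    A minimal sketch of one way to get there in C (taking "hex8" to mean two hex digits per byte, an assumption based on the code's length check). printf returns the number of characters printed, not a string, which is what triggers the strlen warning above; formatting with a zero-padded width avoids the post-processing entirely:

        #include <stdio.h>

        /* Format one byte as exactly two hex digits into a caller-supplied
           buffer; "%02x" zero-pads single-digit values automatically. */
        void hex8(unsigned char byte, char out[3])
        {
            snprintf(out, 3, "%02x", byte);   /* e.g. 0x0a -> "0a" */
        }

    Called in a loop over a string's bytes, this produces the hex8 encoding of the whole string.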

    Read the article

  • How to discard const in C++

    - by Vincenzo
    This is what I'm trying to do, but it doesn't compile: #include <string> using namespace std; class A { bool has() const { return get().length(); } string& get() { return s; } private: string s; }; The error I'm getting is: passing ‘const A’ as ‘this’ argument of ‘std::string& A::get()’ discards qualifiers I understand what the problem is, but how can I fix it? I really need has() to be const. Thanks.
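
    One common fix, sketched below: give A a const overload of get(), so const member functions such as has() have a version they are allowed to call. (Casting away const with const_cast also compiles, but the overload is the idiomatic route.)

        #include <string>
        using namespace std;

        class A {
        public:
            bool has() const { return get().length() > 0; }
            string&       get()       { return s; }
            const string& get() const { return s; }  // callable from const members
        private:
            string s;
        };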

    Read the article

  • PHP: reusing database class

    - by citricsquid
    Hi, I built a class that allows me to do: $db->query($query); and it works perfectly. However, if I do: $db->query($query); while($row = $db->fetch_assoc()){ $db->query($anotherquery); echo $db->result(); } it "breaks" the class, because the inner query overwrites the outer result set. I don't want to constantly have to declare a second instance (e.g. $seconddb = new database()); is there a way to get around this? I want to be able to reuse $db within $db, without overwriting the "outside" db. Currently I build an array of data from $db->fetch_assoc(), then do a foreach and make the db call inside that: $db->query('SELECT * FROM table'); while($row = $db->fetch_assoc()){ $arr[] = $row; } foreach($arr as $a){ $db->query(); // query and processing here } Is this the best method, or am I missing the obvious? Should I consider passing a connection link ID with the database connection?
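
    One way out, as a sketch (the class internals here are assumptions, since the original class isn't shown): have query() return its own result object rather than storing the result set on the shared $db instance. Nested queries then cannot clobber the outer loop's result.

        class Database {
            private $conn;
            public function __construct(mysqli $conn) { $this->conn = $conn; }
            public function query($sql) {
                return $this->conn->query($sql); // each call returns its own mysqli_result
            }
        }

        $outer = $db->query('SELECT * FROM table');
        while ($row = $outer->fetch_assoc()) {
            $inner = $db->query($anotherquery);  // independent result set
            $detail = $inner->fetch_assoc();     // does not disturb $outer
        }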

    Read the article

  • Do I really need to return Type::size_type?

    - by dehmann
    I often have classes that are mostly just wrappers around some STL container, like this: class Foo { public: typedef std::vector<whatever> Vec; typedef Vec::size_type size_type; const Vec& GetVec() { return vec_; } size_type size() { return vec_.size(); } private: Vec vec_; }; I am not so sure about returning size_type. Often, some function will call size() and pass that value on to another function, which will use it and maybe pass it on. Now everyone has to include that Foo header, although I'm really just passing some size value around, which should just be unsigned int anyway...? What is the right thing to do here? Is it best practice to really use size_type everywhere?
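
    A pragmatic middle ground, as a sketch: expose std::size_t instead of the container's size_type. For std::vector with the default allocator the two are the same type, and callers only need <cstddef> rather than Foo's chain of typedefs.

        #include <cstddef>
        #include <vector>

        class Foo {
        public:
            // std::size_t keeps callers decoupled from the Vec typedef
            std::size_t size() const { return vec_.size(); }
        private:
            std::vector<int> vec_;  // element type is a placeholder here
        };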

    Read the article

  • debug assertion fail

    - by Kolt
    I have a program that runs correctly if I start it manually. However, if I add a registry key to start it automatically at startup, I get this error: Debug assertion failed (str!=null) fprintf.c line:55 I tried adding Sleep(20000) before anything else happens, but I get the same error. Here's the code: main() { FILE* filetowrite; filetowrite = fopen("textfile.txt", "a+"); writefunction(filetowrite); } int writefunction(FILE* filetowrite) { fprintf(filetowrite, "%s", "\n\n"); ... } I also tried passing the filename as a char* and opening it in writefunction(), but I get the same error.
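
    A likely explanation, with a sketch of the defensive fix: a program launched from the Run registry key starts in a different working directory, so the relative fopen("textfile.txt", ...) fails and returns NULL, and fprintf then trips the str != NULL assertion. Checking the return value and using an absolute path (the one below is hypothetical) avoids both problems.

        #include <stdio.h>
        #include <stdlib.h>

        int main(void)
        {
            /* Absolute path, since the startup working directory is unpredictable */
            FILE *filetowrite = fopen("C:\\myapp\\textfile.txt", "a+");
            if (filetowrite == NULL) {        /* the original code never checked this */
                perror("fopen");
                return EXIT_FAILURE;
            }
            fprintf(filetowrite, "%s", "\n\n");
            fclose(filetowrite);
            return EXIT_SUCCESS;
        }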

    Read the article

  • "Method name expected" error when trying add a handler method to a delegate - C#

    - by zakplayyy
    I keep getting the error "Method name expected" when trying to add a method to a delegate. I have a delegate which is invoked whenever my game ends. The method I'm trying to add to the delegate stops a countdown from flashing (the method is in a static class). I've searched around and I'm still unsure why it's not working. Here is the line causing the error: LivesManager.gameEnded += new LivesManager.EndGame(CountdownManager.DisableFlashTimer(this)); The this passes the current form to the method so it can disable the timer flash on the form. I have added methods from static classes to the same delegate before and it works fine; the only difference is that I'm passing the form as a parameter, and then it doesn't like it. Is there any way to pass the form to the method without the error? Thanks in advance :)
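
    A common fix, sketched under the assumption that the EndGame delegate takes no parameters: a delegate must be constructed from a method name (a method group), not from a method call, so wrap the call in a lambda that captures this.

        // Inside the form class, so "this" is the form to pass along:
        LivesManager.gameEnded += new LivesManager.EndGame(
            () => CountdownManager.DisableFlashTimer(this));

        // Or, equivalently, let the compiler infer the delegate type:
        LivesManager.gameEnded += () => CountdownManager.DisableFlashTimer(this);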

    Read the article

  • C# - Silverlight - Custom control or UserControl ?

    - by cmaduro
    I need a button that is visually completely customizable, but has custom logic to publish events and manage its visual state based on events it has registered for. When I say visually customizable, I mean I should be able to create the button in XAML and set its style by binding to the supplied style, or create an instance of the button and set the style by passing a parameter to an alternate constructor, or by calling a method on the button class to set the style. I do not plan on substituting the control's template; it should be a button. Can anyone point me to some code samples of this?

    Read the article

  • How can I determine if a given git hash exists on a given branch?

    - by pinko
    Background: I use an automated build system which takes a git hash as input, as well as the name of the branch on which that hash exists, and builds it. However, the build system uses the hash alone to check out the code and build it -- it simply stores the branch name, as given, in the build DB metadata. I'm worried about developers accidentally providing the wrong branch name when they kick off a build, causing confusion when people are looking through the build history. So how can I confirm, before passing along the hash and branch name to the build system, that the given hash does in fact come from the given branch?
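
    One way to add that check before kicking off the build, as a shell sketch: git merge-base --is-ancestor exits 0 exactly when the commit is reachable from the branch tip (available in git 1.8+); git branch --contains is an older alternative that lists every branch containing the commit.

        # $HASH and $BRANCH come from the build request (hypothetical variables)
        if git merge-base --is-ancestor "$HASH" "refs/heads/$BRANCH"; then
            echo "$HASH is on $BRANCH, proceeding with build"
        else
            echo "error: $HASH is not on $BRANCH" >&2
            exit 1
        fi

        # Alternative: list the branches that contain the commit
        git branch --contains "$HASH"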

    Read the article

  • C Struct as an argument

    - by Brian
    I'm wondering what the difference is between sample1 and sample2. Why do I sometimes have to pass the struct as an argument, and sometimes I can use it in the function without passing it? And how would it look if a samplex function needed several structs to work with? Would you pass several structs as arguments? struct x { int a; int b; char *c; }; void sample1(struct x **z){ printf(" first member is %d \n", z[0]->a); } void sample2(){ struct x **z; printf(" first member is %d \n", z[0]->a); // seg fault } int main(void) { struct x **z; sample1(z); sample2(); return 0; }
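
    The underlying rule, with a working sketch: a local variable in one function is invisible to every other function, so anything sample2 needs must arrive as a parameter. Note that both originals also dereference uninitialized pointers; the version below allocates the struct first, and for several structs you would simply pass several pointers.

        #include <stdio.h>
        #include <stdlib.h>

        struct x { int a; int b; char *c; };

        void sample1(struct x *z) {
            printf("first member is %d\n", z->a);  /* z points at valid memory */
        }

        int main(void) {
            struct x *z = malloc(sizeof *z);
            if (z == NULL) return 1;
            z->a = 7;
            sample1(z);   /* the callee sees the same object via the pointer */
            free(z);
            return 0;
        }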

    Read the article

  • How can I use Javascript to access sub-elements?

    - by Steve
    I am dynamically creating forms according to a user's actions: an entry box and two "buttons" for each instance, with each instance wrapped in a unique div tag. What I tried to do, without success, is attach a function to the dynamically created "button", with the input variable containing the div of its instance. This is a brief excerpt: var newDivClass = document.getElementById("instance"+1); button1.innerHTML = "<a href=\"#\" onclick=\"buttons("+newDivClass+");\" id=\"button1\"> Button1 </a>"; function buttons(selected) { //I want this to select the first instance //of button1 found within div newDivClass selected.getElementById("button1"); //I also tried //this.getElementById("button1"); //selected.getChildren[0]; } The problem appears to be in passing newDivClass to the actual function.
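
    The diagnosis, with a sketch of a fix: concatenating a DOM element into an HTML string yields the text "[object HTMLDivElement]", not a usable reference, and elements have no getElementById method anyway. Attaching the handler directly and letting a closure capture the div avoids both problems (button1 below is assumed to be the container element from the original code).

        var newDivClass = document.getElementById("instance1");
        var link = document.createElement("a");
        link.href = "#";
        link.id = "button1";
        link.appendChild(document.createTextNode("Button1"));
        link.onclick = function () {
            // The closure keeps a real reference to the div; no string round-trip
            var target = newDivClass.getElementsByTagName("a")[0];
            // ... act on target here ...
            return false; // suppress the default link navigation
        };
        button1.appendChild(link);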

    Read the article

  • Make array from $_POST values

    - by cbarg
    Let's start by saying that I'm passing some number x of variables via POST from a form. Let's name them menu_category_1, menu_category_2, ..., menu_category_x, plus, maybe, menu_category_new (I'm using an if-empty check on this last variable). To make things easier I'm also sending the parameter $key (the count of variables, starting from 0). Now I need to collect them into a new variable $menu_category (an array), which is going to be imploded and then used to update my database. How do I set up that new $menu_category variable as an array containing all the variables named at the beginning? I was thinking of using a for loop but I can't come up with something useful. Thanks!!!
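
    A sketch of the loop described (assuming the fields run from menu_category_0 up to menu_category_$key, with key posted alongside them):

        $menu_category = array();
        for ($i = 0; $i <= (int) $_POST['key']; $i++) {
            if (isset($_POST['menu_category_' . $i])) {
                $menu_category[] = $_POST['menu_category_' . $i];
            }
        }
        if (!empty($_POST['menu_category_new'])) {
            $menu_category[] = $_POST['menu_category_new'];  // the optional extra field
        }
        $imploded = implode(',', $menu_category);  // ready for the database update

    A cleaner long-term option is naming the inputs menu_category[] in the form, which makes $_POST['menu_category'] an array automatically.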

    Read the article

  • Java Web Application

    - by Mark R
    I am interested in creating a simple web application that will take in user input, convert it to an XML file and send the file to a database. Coding-wise I feel I am okay; it is just the general setup and which implementation to use that I am a bit unsure of. At the moment I have a JSP page containing a form; the user fills out the form, and on submit a GET request is sent to a servlet. In the servlet's doGet() method, the servlet instantiates a Java object and passes it the user-inputted data. The Java object then writes that data to an XML file and sends it to the database via REST. All I would like to know is whether this is the standard/optimal way of creating such a web application. Any and all feedback is appreciated. Thanks
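
    That flow is a reasonable servlet/JSP setup; the main convention worth adopting is submitting the form with POST rather than GET, since the request changes server-side state. A compressed sketch (all class and field names are hypothetical):

        public class SubmissionServlet extends javax.servlet.http.HttpServlet {
            @Override
            protected void doPost(javax.servlet.http.HttpServletRequest req,
                                  javax.servlet.http.HttpServletResponse resp)
                    throws java.io.IOException {
                String name = req.getParameter("name");              // field from the JSP form
                XmlSubmission submission = new XmlSubmission(name);  // hypothetical: builds the XML
                submission.sendViaRest();                            // hypothetical: POSTs it onward
                resp.sendRedirect("confirmation.jsp");
            }
        }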

    Read the article

  • Recursive solution to finding patterns

    - by user2997162
    I was solving a problem on recursion: count the total number of consecutive 8's in a number. For example: input: 8801 output: 2 input: 801 output: 0 input: 888 output: 3 input: 88088018 output: 4 I am unable to figure out the logic for passing information to the next recursive call about whether the previous digit was an 8. I do not want the code, but I need help with the logic. In an iterative solution I could have used a flag variable, but in recursion how do I do the work that a flag variable does in an iterative solution? Also, this is not part of any assignment. It just came to my mind because I am trying to practice coding using recursion.
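
    Since the poster asked for logic rather than code, read this C sketch as an illustration of the idea only: the flag becomes an extra parameter, passed down with each call. Here next_is_8 records whether the digit processed just before (the one to the right) was an 8, and an 8 is counted when it touches an 8 on either side.

        int count8(int n, int next_is_8)
        {
            if (n == 0)
                return 0;
            int d = n % 10;                       /* current (rightmost) digit */
            if (d != 8)
                return count8(n / 10, 0);
            int prev_is_8 = ((n / 10) % 10) == 8; /* digit to its left */
            return (next_is_8 || prev_is_8 ? 1 : 0) + count8(n / 10, 1);
        }

        /* count8(88088018, 0) == 4, count8(888, 0) == 3, count8(801, 0) == 0 */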

    Read the article

  • Problem with a large CSV file

    - by moustafa
    I have a very large CSV file: 51,427 lines, to be exact. I need to import the entire file into a MySQL database; however, the script times out due to server settings and a slow connection (and maybe other reasons that I am not aware of). So I am now passing START and LIMIT parameters via the address bar, like this: http://my.server.address/import.php?...000&limit=1000 This reads the entire CSV file into an array, starts at line 10000 of the array, inserts into the database until it reaches line 11000, and then terminates the script. This works very nicely; however, I am not happy having to read the entire 51,427 lines of the CSV file into an array before processing. Is there a way I can read only the required lines into an array? That would speed things up significantly.
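
    One way to read only the needed slice, as a PHP sketch: SplFileObject can seek straight to a line number and parse CSV as it goes, so memory stays proportional to LIMIT rather than to the whole 51,427-line file. The filename below is hypothetical.

        $start = (int) $_GET['start'];
        $limit = (int) $_GET['limit'];

        $file = new SplFileObject('data.csv');      // hypothetical filename
        $file->setFlags(SplFileObject::READ_CSV);
        $file->seek($start);                        // jump straight to line $start

        for ($i = 0; $i < $limit && !$file->eof(); $i++) {
            $row = $file->current();                // one parsed CSV row as an array
            // ... INSERT $row into MySQL here ...
            $file->next();
        }

    If the server permits it, MySQL's LOAD DATA INFILE would import the whole file in one statement and sidestep the PHP timeout entirely.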

    Read the article

  • Going from numpy array to itk Image

    - by tkerwin
    I have a numpy array and want to convert it into an ITK image for further processing. How do I do this without using the PyBuffer extension to WrapITK? I can't use that because I get a bunch of errors when compiling: .../ExternalProjects/PyBuffer/itkPyBuffer.txx: In static member function ‘static PyObject* itk::PyBuffer<TImage>::GetArrayFromImage(TImage*) [with TImage = itk::Image<float, 2u>]’: .../ExternalProjects/PyBuffer/wrap_itkPyBufferPython.cxx:1397: instantiated from here .../ExternalProjects/PyBuffer/itkPyBuffer.txx:64: error: cannot convert ‘int*’ to ‘npy_intp*’ in argument passing I could use an idea about either how to fix the compilation errors or another way to convert my Python objects.
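
    As an alternative route (assuming a build where ITK's NumPy bridge is already wrapped; this helper may postdate the original question): the itk module can convert the array directly, without compiling PyBuffer yourself.

        import itk
        import numpy as np

        arr = np.random.rand(64, 64).astype(np.float32)
        image = itk.GetImageFromArray(arr)   # yields an itk.Image[itk.F, 2]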

    Read the article

  • C++ vector that *doesn't* initialize its members?

    - by Mehrdad
    I'm making a C++ wrapper for a piece of C code that returns a large array, so I've tried to return the data in a vector<unsigned char>. Now the problem is, the data is on the order of megabytes, and vector unnecessarily initializes its storage, which essentially cuts my speed in half. How do I prevent this? Or, if it's not possible, is there some other STL container that would avoid such needless work? Or must I end up making my own container? (Pre-C++11) Note: I'm passing the vector as my output buffer; I'm not copying the data from elsewhere.
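
    The best-known pre-C++11 workaround, sketched with its caveat: wrap the element in a struct whose default constructor deliberately does nothing, so the vector's element initialization becomes a no-op. Copying indeterminate bytes is formally undefined behavior, but this trick is widely relied upon for exactly this buffer-filling pattern.

        #include <vector>

        struct uninit_byte {
            unsigned char value;
            uninit_byte() {}  // intentionally leaves value uninitialized
        };

        int main() {
            std::vector<uninit_byte> buf(8 * 1024 * 1024);  // no zero-fill pass
            // Hand &buf[0].value to the C function as the output buffer.
            return 0;
        }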

    Read the article

  • Silverlight Cream for April 01, 2010 -- #827

    - by Dave Campbell
    In this Issue: Max Paulousky, Hassan, Viktor Larsson, Fons Sonnemans, Jim McCurdy, Scott Marlowe, Mike Taulty, Brad Abrams, Jesse Liberty, Scott Barnes, Christopher Bennage, and John Papa and Ward Bell. Shoutouts: Tim Heuer posted a survey: What tools are the minimum to get started in Silverlight?... have you responded yet? Don't want to miss this discussion: Channel 9 Live at MIX10: Bill Buxton & Erik Meijer - Perspectives on Design Bookmark this... Jesse Liberty has moved his site: Silverlight Geek I stand with Tim Heuer on this: Congratulations to latest 2nd quarter Silverlight MVPs From SilverlightCream.com: Wizards. Prototype of sketching Wizard for WPF - 1 Max Paulousky is creating a SketchFlow WPF wizard in Expression Blend... looks like good Expression Blend and SketchFlow work no matter what the target is. Windows Phone 7 Navigation Hassan has another WP7 video up, and this one is on navigation and passing data from page to page. Silverlight 4 PathListBox Viktor Larsson is blogging about the PathListBox, and definitely had a good time doing so... lots of fun examples. CountDown Clock in Silverlight 4 Fons Sonnemans has reworked his Silverlight 3 FlipClock into this Silverlight 4 CountDown Clock, utilizing the Viewbox control to make it scalable. Generic class for deep clone of Silverlight and CLR objects Jim McCurdy has a Silverlight 3- and 4-tested CloneObject class that he's using for creating a deep copy of an object and all its properties... think drag/drop or undo/redo. Animating the Fill Color of a Silverlight Ellipse Scott Marlowe has a tutorial up that animates a pass/fail indicator with a smooth transition from a red to a green state... all with code. Silverlight 4, Blend 4, MVVM, Binding, DependencyObject Mike Taulty has a great tutorial up on Blend 4 and binding... he's got a somewhat contrived example going, but it certainly looks good to me :) Silverlight 4 + RIA Services - Ready for Business: Authentication and Personalization Next up in Brad Abrams' series is Authentication and Personalization. RIA Services makes this easy to do... let Brad show you! An Annotated Line of Business Application Jesse Liberty is walking through the design and delivery of his HyperVideo project with this mini tutorial. Want to understand the thought process behind the LOB app? Check this out. How to hack Expression Blend Seems like there was just some discussion about some of this today, and here Scott Barnes posts this hack job for Expression Blend... pretty cool actually :) d:DesignInstance in Blend 4 Christopher Bennage has a follow-on post about using d:DesignInstance in Blend 4, and this is a very nice tutorial on the subject. Silverlight TV 19: Hidden Gems from MIX10, UFC's Multi-Touch App John Papa and Ward Bell front and center for Silverlight TV number 19... and check out those threads!

    Read the article

  • Ask the Readers: Do You Prefer Computers, Game Consoles, or Other Devices for Your Gaming Needs?

    - by Asian Angel
    Nearly everyone who has access to a computer will play games on it at some point, but many people also use a separate game platform as well. What we would like to know this week is whether you prefer using a computer, game consoles, or other devices for your gaming needs. Photo of Faith and Kate Connors from Mirror's Edge by Tamahikari Tammas. Video games are a perfect way to relax and have fun at home (or at work if you can sneak in some game time!). The increasing variety of devices available with each passing year is making it easier to find a gaming platform to suit your needs or "darkest gaming desires". For many people their computers are the perfect platform: they can play Flash-based games in their browsers, use the default set of games that come with their system, and install any extras that catch their eyes. The added benefit is that when game time is over they can drop right into their browsing, e-mail, personal projects, or work without having to switch hardware. The convenience of the "all-in-one" platform is certainly appealing! Perhaps you prefer to use your computer for other activities outside of gaming and own one or more separate game consoles. You might have chosen an Xbox, Playstation, or Nintendo, for example. Maybe a hand-held is preferable for its size and portability. Then there are mobile phones and the iPad... With so many options it may feel hard to choose the right platform(s) without a good bit of research regarding display, availability of games for a particular platform, how long before the platform starts to become "obsolete", etc. What we would like to know this week is which gaming platform you prefer. Is there only one that you choose to use, or do you use multiple platforms for gaming? Is there a particular reason, such as convenience, for your choices? You may even be keeping an older platform around just for a certain game (or games) made for it. Are there any recommendations or advice that you would like to share with your fellow readers? Let us know in the comments!

    Read the article

  • SQL SERVER – ColumnStore Index – Batch Mode vs Row Mode

    - by pinaldave
    What do you do when you are in a hurry and hear someone say something you do not agree with or know to be wrong? Well, let me tell you what I do, or what I recently did. I was walking by and heard someone mention, "Columnstore Indexes are really great, as they use Batch Mode, which makes them seriously fast." As I passed by and heard this statement, my first reaction was that a Columnstore Index can use both Batch Mode and Row Mode. I stopped, even though I was in a hurry, and asked the person if he meant that Columnstore Indexes are seriously fast because they use Batch Mode all the time, or whether Batch Mode is just one of the reasons Columnstore Indexes are faster. He responded that Columnstore Indexes can run only in Batch Mode. However, I do not like to confront anybody without hearing their complete story. Honestly, I prefer sharing information and avoid confrontation as much as possible; there are always ways to communicate the same thing positively. So this is what I did: I quickly pulled up my earlier article on the Columnstore Index and copied the script into SQL Server Management Studio. I created two versions of the script: 1) a very large table, 2) a reasonably small table. I ran a query which uses a columnstore index against both versions, and I found the results of my tests very interesting. I saved my tests and sent them to the person who had said that Columnstore Indexes use Batch Mode only. He immediately acknowledged that he was indeed incorrect in saying that Columnstore Indexes use only Batch Mode. What really caught my attention is that he also thanked me for sending him a detailed email instead of just having an argument where he and I were both standing in the corridor with no way to prove any theory. Here are the screenshots of both scenarios: 1) Columnstore Index using Batch Mode 2) Columnstore Index using Row Mode Here is the logic behind when a Columnstore Index uses Batch Mode and when it uses Row Mode. A batch typically represents about 1000 rows of data. Batch mode processing uses algorithms that are optimized for multicore CPUs and increased memory throughput. Batch mode processing spreads metadata access costs and overhead over all the rows in a batch. Batch mode processing operates on compressed data when possible, leading to superior performance. Here is one last point: a Columnstore Index can use Batch Mode or Row Mode, but Batch Mode processing is only available with a Columnstore Index. I hope this statement truly sums up the whole concept. Reference: Pinal Dave (http://blog.sqlauthority.com)
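
    A quick way to reproduce the observation (the table name below is a placeholder, not the one from the referenced article): run an aggregation against a columnstore-indexed table once over the full table and once over a tiny slice, then compare the Actual Execution Mode property of the Columnstore Index Scan operator in the two actual execution plans.

        -- Large scan with aggregation: typically runs in Batch mode
        SELECT ProductID, SUM(OrderQty) AS TotalQty
        FROM dbo.BigFactTable            -- placeholder columnstore-indexed table
        GROUP BY ProductID;

        -- Trivially small request against the same index: often falls back to Row mode
        SELECT TOP (1) ProductID
        FROM dbo.BigFactTable
        WHERE ProductID = 1;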

    Read the article

  • Silverlight Cream for March 29, 2010 -- #824

    - by Dave Campbell
    In this Issue: smartyP(-2-), Al Pascual, Mike Taulty, Shawn Burke(-2-), Vikram Pendse, Tomasz Janczuk, Lee, and Alexey Zakharov. Shoutouts: Jeff Weber announced New Silverlight Game "Snow Spill" by Nick Avery of Liserd Arts Games. John Papa summarized links to all the Silverlight and Windows Phone 7 sessions from MIX10. Tim Heuer has a post up about OData and the MIX10 feed: MIX10: Yet another way to view video content sessions using their OData feed. From SilverlightCream.com: Creating a Windows Phone 7 Metro Style Pivot Application [Part 1] smartyP has a two-part video tutorial up on creating a WP7 pivot navigation app using Expression Blend. He's also looking for feedback. Creating a Windows Phone 7 Metro Style Pivot Application [Part 2] In part 2, smartyP adds gestures to his navigation. He also has some good external links listed. Al Pascual: My First Windows Phone 7 Application Al Pascual extends the MIX10 keynote WP7 sample by adding the ability to send tweets ... with all the code. Silverlight 4 RC and the "silent installation" Mike Taulty discusses and demonstrates installing an OOB app without having to visit a webpage to get it; in other words, pass it around on a USB drive, send it in email, etc. iPhone SDK vs Windows Phone 7 Series SDK Challenge, Part 1: Hello World! Shawn Burke has a 2-part series up comparing iPhone and WP7 development, looking at how easy it is to code and how many lines of code the tools produce. This first post is the classic Hello World. Check out the comments as well. iPhone SDK vs. Windows Phone 7 Series SDK Challenge, Part 2: MoveMe Shawn Burke's part 2 compares the classic iPhone 'MoveMe' app... again, check out all the comments. Silverlight 4 : Indic Support in Silverlight Vikram Pendse demonstrates using the Microsoft Indic Language Input tool. He has some screenshots and discussion about fonts in Silverlight. Comparison of HTTP polling duplex and net.tcp performance in Silverlight 4 RC Tomasz Janczuk is checking out the Silverlight 4 RC and has a comparison up of the performance of the three mechanisms for async data push from the server to the client. Summary rows in Datagrid with multiple groups Lee revisited a post that displayed summary/totals in the group header, extending it to support multiple groups. Silverlight Commands Hacks: Passing EventArgs as CommandParameter to DelegateCommand triggered by EventTrigger Alexey Zakharov suggests a workaround, 'InvokeDelegateCommandAction', to keep Blend from ignoring event args.

    Read the article

  • HPC Server Dynamic Job Scheduling: when jobs spawn jobs

    - by JoshReuben
    HPC Job Types HPC has 3 types of jobs http://technet.microsoft.com/en-us/library/cc972750(v=ws.10).aspx · Task Flow – vanilla sequence · Parametric Sweep – concurrently run multiple instances of the same program, each with a different work unit input · MPI – message passing between master & slave tasks But when you try to go outside the box – job tasks that spawn jobs, blocking the parent task – you run the risk of resource starvation, deadlocks, and recursive, non-converging or exponential blow-up. The solution to this is to write some performance monitoring and job scheduling code. You can do this in 2 ways: manually control scheduling - allocate/de-allocate resources, change job priorities, pause & resume tasks, restrict long-running tasks to specific compute clusters; or semi-automatically - set threshold params for scheduling. How – Control Job Scheduling In order to manage the tasks and resources that are associated with a job, you will need to access the ISchedulerJob interface - http://msdn.microsoft.com/en-us/library/microsoft.hpc.scheduler.ischedulerjob_members(v=vs.85).aspx This really allows you to control how a job is run – you can access & tweak the following features: max/min resource values; whether job resources can grow/shrink, and whether jobs can be pre-empted; whether the job is exclusive per node; the creator process id & the job pool; timestamp of job creation & completion; job priority, hold time & run time limit; re-queue count; job progress; max/min number of cores, nodes, sockets, RAM; dynamic task list – can add/cancel jobs on the fly; job counters. When – poll perf counters Tweaking the job scheduler should be done on the basis of resource utilization according to PerfMon counters – HPC exposes 2 Perf objects: Compute Clusters, Compute Nodes http://technet.microsoft.com/en-us/library/cc720058(v=ws.10).aspx You can monitor running jobs according to dynamic thresholds – use your own discretion: percentage processor time; number of running jobs; number of running tasks; total number of processors; number of processors in use; number of processors idle; number of serial tasks; number of parallel tasks. Design your algorithms correctly Finally, don't assume you have unlimited compute resources in your cluster – design your algorithms with the following factors in mind: · Branching factor - http://en.wikipedia.org/wiki/Branching_factor - dynamically optimize the number of children per node · cutoffs to prevent explosions - http://en.wikipedia.org/wiki/Limit_of_a_sequence - not all functions converge after n attempts; you also need a threshold of good enough, diminishing returns · heuristic shortcuts - http://en.wikipedia.org/wiki/Heuristic - sometimes an exhaustive search is impractical and shortcuts are suitable · Pruning http://en.wikipedia.org/wiki/Pruning_(algorithm) – remove/de-prioritize unnecessary tree branches · avoid local minima/maxima - http://en.wikipedia.org/wiki/Local_minima - sometimes an algorithm can't converge because it gets stuck in a local saddle – try simulated annealing, hill climbing or genetic algorithms to get out of these ruts · watch out for rounding errors – http://en.wikipedia.org/wiki/Round-off_error - multiple iterations in parallel can quickly amplify & blow up your algo! Use an epsilon, avoid floating point errors, truncations, approximations. Happy Coding!
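
    A small sketch of the manual-control option via ISchedulerJob (treat the exact property set as an assumption; see the linked MSDN reference for the full surface): open a running job, cap its resources, and raise its priority.

        using Microsoft.Hpc.Scheduler;
        using Microsoft.Hpc.Scheduler.Properties;

        class JobTuner
        {
            static void Main()
            {
                IScheduler scheduler = new Scheduler();
                scheduler.Connect("headnode");              // hypothetical head node
                ISchedulerJob job = scheduler.OpenJob(42);  // hypothetical job id

                job.MaximumNumberOfCores = 16;              // stop runaway fan-out
                job.Priority = JobPriority.AboveNormal;     // favor the parent job
                job.Commit();                               // push the property changes
            }
        }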

    Read the article

  • Silverlight Cream for December 28, 2010 -- #1017

    - by Dave Campbell
    In this Issue: Davide Zordan, Alex Golesh, Michael S. Scherotter, Andrej Tozon, Alex Knight, Jeff Blankenburg(-2-), Jeremy Likness, and Laurent Bugnion. Above the Fold: Silverlight: "My “What’s new in Silverlight 4 demo” app" Andrej Tozon WP7: "Taking a screenshot from within a Silverlight #WP7 application" Laurent Bugnion Expression Blend: "PathListBox: getting started" Alex Knight Shoutouts: If you haven't seen this SurfCube app demo on YouTube yet... check it out now: SurfCube V1.0 Windows Phone 7 Browser. Want to get a free WP7 class from Shawn Wildermuth? Check this out: Webinar: Writing your first Windows Phone 7 Application. Koen Zwikstra announced the next preview of his great tool: Silverlight Spy Preview 2. From SilverlightCream.com: Using the Multi-Touch Behavior in a Windows Phone 7 Multi-Page application Davide Zordan has a post up responding to questions he receives about multi-touch on WP7 in applications spanning more than one page. Silverlight for Windows Phone 7 Quick Tip: Fix missing icons while using DatePicker/TimePicker controls Alex Golesh discusses the use of the DatePicker control from the WP7 toolkit, where he found an unpleasant surprise associated with the Done/Cancel icons in the ApplicationBar, and has a solution for us. Updated SMF Thumbnail Scrubbing Sample Code Michael S. Scherotter has a post up about an update he's made to Silverlight 4 code that allows thumbnail views of a video while 'scrubbing'... don't know what that is? Read the post :) My “What’s new in Silverlight 4 demo” app Andrej Tozon admits he's a little behind with this post, but as he points out, it might be a good time to review Silverlight 4 features, on the eve of 5. PathListBox: getting started One half of the Knight team, Alex Knight this time, has the first post of a series on the PathListBox up... some real Expression Blend goodness. What I Learned in WP7 – Issue #9 Two more from Jeff Blankenburg today: in number 9, he starts off demonstrating passing data between pages when navigating, and finishes up with some excellent info on submitting apps to the marketplace. What I Learned in WP7 – #Issue 10 Jeff Blankenburg's number 10 elaborates on the query string data he discussed in number 9. Using Sterling in Windows Phone 7 Applications Who better than the author?? Jeremy Likness has an end-to-end WP7/Sterling app up on his blog: begin with downloading Sterling, discuss what's needed to support tombstoning, even custom serialization. Taking a screenshot from within a Silverlight #WP7 application Laurent Bugnion has a post up describing something people have been looking for: getting a screenshot of a WP7 application's page.

    Read the article

  • Dependency injection with n-tier Entity Framework solution

    - by Matthew
    I am currently designing an n-tier solution which uses Entity Framework 5 (.NET 4) as its data access strategy, but I am concerned about how to incorporate dependency injection to make it testable/flexible. My current solution layout is as follows (my solution is called Alcatraz): Alcatraz.WebUI: an ASP.NET WebForms project, the front-end user interface; references projects Alcatraz.Business and Alcatraz.Data.Models. Alcatraz.Business: a class library project containing the business logic; references projects Alcatraz.Data.Access and Alcatraz.Data.Models. Alcatraz.Data.Access: a class library project housing AlcatrazModel.edmx and the AlcatrazEntities DbContext; references project Alcatraz.Data.Models. Alcatraz.Data.Models: a class library project containing POCOs for the Alcatraz model; no references. My vision for how this solution would work is that the web UI instantiates a repository within the business library; this repository has a dependency (through the constructor) on a connection string (not an AlcatrazEntities instance). The web UI knows the database connection string, but not that it is an Entity Framework connection string. In the Business project: public class InmateRepository : IInmateRepository { private string _connectionString; public InmateRepository(string connectionString) { if (connectionString == null) { throw new ArgumentNullException("connectionString"); } EntityConnectionStringBuilder connectionBuilder = new EntityConnectionStringBuilder(); connectionBuilder.Metadata = "res://*/AlcatrazModel.csdl|res://*/AlcatrazModel.ssdl|res://*/AlcatrazModel.msl"; connectionBuilder.Provider = "System.Data.SqlClient"; connectionBuilder.ProviderConnectionString = connectionString; _connectionString = connectionBuilder.ToString(); } public IQueryable<Inmate> GetAllInmates() { AlcatrazEntities ents = new AlcatrazEntities(_connectionString); return ents.Inmates; } } In the Web UI: IInmateRepository inmateRepo = new InmateRepository(@"data source=MATTHEW-PC\SQLEXPRESS;initial catalog=Alcatraz;integrated security=True;"); List<Inmate> deathRowInmates = inmateRepo.GetAllInmates().Where(i => i.OnDeathRow).ToList(); I have a few related questions about this design. 1) Does this design even make sense in terms of Entity Framework's capabilities? I heard that Entity Framework uses the unit-of-work pattern already; am I just adding another layer of abstraction unnecessarily? 2) I don't want my web UI to communicate directly with Entity Framework (or even reference it, for that matter); I want all database access to go through the business layer, as in the future I will have multiple projects using the same business layer (web service, Windows application, etc.) and I want it to be easy to maintain/update by having the business logic in one central area. Is this an appropriate way to achieve that? 3) Should the business layer even contain repositories, or should they be contained within the access layer? If where they are is alright, is a connection string a good dependency to assume? Thanks for taking the time to read!
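
    One refinement worth considering, as a sketch (the factory interface is an invention for illustration, not part of the poster's solution): inject a context factory rather than a raw connection string. The business layer's seams stay free of Entity Framework details, tests can hand in a factory that returns a fake context, and each call gets a fresh, properly disposed DbContext.

        using System.Collections.Generic;
        using System.Linq;

        public interface IAlcatrazContextFactory
        {
            AlcatrazEntities Create();
        }

        public class InmateRepository : IInmateRepository
        {
            private readonly IAlcatrazContextFactory _factory;

            public InmateRepository(IAlcatrazContextFactory factory)
            {
                _factory = factory;
            }

            public List<Inmate> GetDeathRowInmates()
            {
                // Fresh context per unit of work, disposed deterministically
                using (AlcatrazEntities ents = _factory.Create())
                {
                    return ents.Inmates.Where(i => i.OnDeathRow).ToList();
                }
            }
        }

    Returning a materialized List rather than IQueryable matters here: once the context is disposed, deferred queries can no longer execute.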

    Read the article

  • Advanced TSQL Tuning: Why Internals Knowledge Matters

    - by Paul White
    There is much more to query tuning than reducing logical reads and adding covering nonclustered indexes.  Query tuning is not complete as soon as the query returns results quickly in the development or test environments.  In production, your query will compete for memory, CPU, locks, I/O and other resources on the server.  Today’s entry looks at some tuning considerations that are often overlooked, and shows how deep internals knowledge can help you write better TSQL. As always, we’ll need some example data.  In fact, we are going to use three tables today, each of which is structured like this: Each table has 50,000 rows made up of an INTEGER id column and a padding column containing 3,999 characters in every row.  The only difference between the three tables is in the type of the padding column: the first table uses CHAR(3999), the second uses VARCHAR(MAX), and the third uses the deprecated TEXT type.  A script to create a database with the three tables and load the sample data follows: USE master; GO IF DB_ID('SortTest') IS NOT NULL DROP DATABASE SortTest; GO CREATE DATABASE SortTest COLLATE LATIN1_GENERAL_BIN; GO ALTER DATABASE SortTest MODIFY FILE ( NAME = 'SortTest', SIZE = 3GB, MAXSIZE = 3GB ); GO ALTER DATABASE SortTest MODIFY FILE ( NAME = 'SortTest_log', SIZE = 256MB, MAXSIZE = 1GB, FILEGROWTH = 128MB ); GO ALTER DATABASE SortTest SET ALLOW_SNAPSHOT_ISOLATION OFF ; ALTER DATABASE SortTest SET AUTO_CLOSE OFF ; ALTER DATABASE SortTest SET AUTO_CREATE_STATISTICS ON ; ALTER DATABASE SortTest SET AUTO_SHRINK OFF ; ALTER DATABASE SortTest SET AUTO_UPDATE_STATISTICS ON ; ALTER DATABASE SortTest SET AUTO_UPDATE_STATISTICS_ASYNC ON ; ALTER DATABASE SortTest SET PARAMETERIZATION SIMPLE ; ALTER DATABASE SortTest SET READ_COMMITTED_SNAPSHOT OFF ; ALTER DATABASE SortTest SET MULTI_USER ; ALTER DATABASE SortTest SET RECOVERY SIMPLE ; USE SortTest; GO CREATE TABLE dbo.TestCHAR ( id INTEGER IDENTITY (1,1) NOT NULL, padding CHAR(3999) NOT NULL,   CONSTRAINT [PK dbo.TestCHAR (id)] PRIMARY KEY CLUSTERED (id), ) ; CREATE TABLE dbo.TestMAX ( id INTEGER IDENTITY (1,1) NOT NULL, padding VARCHAR(MAX) NOT NULL,   CONSTRAINT [PK dbo.TestMAX (id)] PRIMARY KEY CLUSTERED (id), ) ; CREATE TABLE dbo.TestTEXT ( id INTEGER IDENTITY (1,1) NOT NULL, padding TEXT NOT NULL,   CONSTRAINT [PK dbo.TestTEXT (id)] PRIMARY KEY CLUSTERED (id), ) ; -- ============= -- Load TestCHAR (about 3s) -- ============= INSERT INTO dbo.TestCHAR WITH (TABLOCKX) ( padding ) SELECT padding = REPLICATE(CHAR(65 + (Data.n % 26)), 3999) FROM ( SELECT TOP (50000) n = ROW_NUMBER() OVER (ORDER BY (SELECT 0)) - 1 FROM master.sys.columns C1, master.sys.columns C2, master.sys.columns C3 ORDER BY n ASC ) AS Data ORDER BY Data.n ASC ; -- ============ -- Load TestMAX (about 3s) -- ============ INSERT INTO dbo.TestMAX WITH (TABLOCKX) ( padding ) SELECT CONVERT(VARCHAR(MAX), padding) FROM dbo.TestCHAR ORDER BY id ; -- ============= -- Load TestTEXT (about 5s) -- ============= INSERT INTO dbo.TestTEXT WITH (TABLOCKX) ( padding ) SELECT CONVERT(TEXT, padding) FROM dbo.TestCHAR ORDER BY id ; -- ========== -- Space used -- ========== -- EXECUTE sys.sp_spaceused @objname = 'dbo.TestCHAR'; EXECUTE sys.sp_spaceused @objname = 'dbo.TestMAX'; EXECUTE sys.sp_spaceused @objname = 'dbo.TestTEXT'; ; CHECKPOINT ; That takes around 15 seconds to run, and shows the space allocated to each table in its output: To illustrate the points I want to make today, the example task we are going to set ourselves is to return a random set of 150 rows from each table.  
The basic shape of the test query is the same for each of the three test tables: SELECT TOP (150) T.id, T.padding FROM dbo.Test AS T ORDER BY NEWID() OPTION (MAXDOP 1) ; Test 1 – CHAR(3999) Running the template query shown above using the TestCHAR table as the target, we find that the query takes around 5 seconds to return its results.  This seems slow, considering that the table only has 50,000 rows.  Working on the assumption that generating a GUID for each row is a CPU-intensive operation, we might try enabling parallelism to see if that speeds up the response time.  Running the query again (but without the MAXDOP 1 hint) on a machine with eight logical processors, the query now takes 10 seconds to execute – twice as long as when run serially. Rather than attempting further guesses at the cause of the slowness, let’s go back to serial execution and add some monitoring.  The script below monitors STATISTICS IO output and the amount of tempdb used by the test query.  We will also run a Profiler trace to capture any warnings generated during query execution. DECLARE @read BIGINT, @write BIGINT ; SELECT @read = SUM(num_of_bytes_read), @write = SUM(num_of_bytes_written) FROM tempdb.sys.database_files AS DBF JOIN sys.dm_io_virtual_file_stats(2, NULL) AS FS ON FS.file_id = DBF.file_id WHERE DBF.type_desc = 'ROWS' ; SET STATISTICS IO ON ; SELECT TOP (150) TC.id, TC.padding FROM dbo.TestCHAR AS TC ORDER BY NEWID() OPTION (MAXDOP 1) ; SET STATISTICS IO OFF ; SELECT tempdb_read_MB = (SUM(num_of_bytes_read) - @read) / 1024. / 1024., tempdb_write_MB = (SUM(num_of_bytes_written) - @write) / 1024. / 1024., internal_use_MB = ( SELECT internal_objects_alloc_page_count / 128.0 FROM sys.dm_db_task_space_usage WHERE session_id = @@SPID ) FROM tempdb.sys.database_files AS DBF JOIN sys.dm_io_virtual_file_stats(2, NULL) AS FS ON FS.file_id = DBF.file_id WHERE DBF.type_desc = 'ROWS' ; Let’s take a closer look at the statistics and query plan generated from this: Following the flow of the data from right to left, we see the expected 50,000 rows emerging from the Clustered Index Scan, with a total estimated size of around 191MB.  The Compute Scalar adds a column containing a random GUID (generated from the NEWID() function call) for each row.  With this extra column in place, the size of the data arriving at the Sort operator is estimated to be 192MB. Sort is a blocking operator – it has to examine all of the rows on its input before it can produce its first row of output (the last row received might sort first).  This characteristic means that Sort requires a memory grant – memory allocated for the query’s use by SQL Server just before execution starts.  In this case, the Sort is the only memory-consuming operator in the plan, so it has access to the full 243MB (248,696KB) of memory reserved by SQL Server for this query execution. Notice that the memory grant is significantly larger than the expected size of the data to be sorted.  SQL Server uses a number of techniques to speed up sorting, some of which sacrifice size for comparison speed.  Sorts typically require a very large number of comparisons, so this is usually a very effective optimization.  One of the drawbacks is that it is not possible to exactly predict the sort space needed, as it depends on the data itself.  SQL Server takes an educated guess based on data types, sizes, and the number of rows expected, but the algorithm is not perfect. 
In spite of the large memory grant, the Profiler trace shows a Sort Warning event (indicating that the sort ran out of memory), and the tempdb usage monitor shows that 195MB of tempdb space was used – all of that for system use.  The 195MB represents physical write activity on tempdb, because SQL Server strictly enforces memory grants – a query cannot ‘cheat’ and effectively gain extra memory by spilling to tempdb pages that reside in memory.  Anyway, the key point here is that it takes a while to write 195MB to disk, and this is the main reason that the query takes 5 seconds overall. If you are wondering why using parallelism made the problem worse, consider that eight threads of execution result in eight concurrent partial sorts, each receiving one eighth of the memory grant.  The eight sorts all spilled to tempdb, resulting in inefficiencies as the spilled sorts competed for disk resources.  More importantly, there are specific problems at the point where the eight partial results are combined, but I’ll cover that in a future post. CHAR(3999) Performance Summary: 5 seconds elapsed time 243MB memory grant 195MB tempdb usage 192MB estimated sort set 25,043 logical reads Sort Warning Test 2 – VARCHAR(MAX) We’ll now run exactly the same test (with the additional monitoring) on the table using a VARCHAR(MAX) padding column: DECLARE @read BIGINT, @write BIGINT ; SELECT @read = SUM(num_of_bytes_read), @write = SUM(num_of_bytes_written) FROM tempdb.sys.database_files AS DBF JOIN sys.dm_io_virtual_file_stats(2, NULL) AS FS ON FS.file_id = DBF.file_id WHERE DBF.type_desc = 'ROWS' ; SET STATISTICS IO ON ; SELECT TOP (150) TM.id, TM.padding FROM dbo.TestMAX AS TM ORDER BY NEWID() OPTION (MAXDOP 1) ; SET STATISTICS IO OFF ; SELECT tempdb_read_MB = (SUM(num_of_bytes_read) - @read) / 1024. / 1024., tempdb_write_MB = (SUM(num_of_bytes_written) - @write) / 1024. / 1024., internal_use_MB = ( SELECT internal_objects_alloc_page_count / 128.0 FROM sys.dm_db_task_space_usage WHERE session_id = @@SPID ) FROM tempdb.sys.database_files AS DBF JOIN sys.dm_io_virtual_file_stats(2, NULL) AS FS ON FS.file_id = DBF.file_id WHERE DBF.type_desc = 'ROWS' ; This time the query takes around 8 seconds to complete (3 seconds longer than Test 1).  Notice that the estimated row and data sizes are very slightly larger, and the overall memory grant has also increased very slightly to 245MB.  The most marked difference is in the amount of tempdb space used – this query wrote almost 391MB of sort run data to the physical tempdb file.  Don’t draw any general conclusions about VARCHAR(MAX) versus CHAR from this – I chose the length of the data specifically to expose this edge case.  In most cases, VARCHAR(MAX) performs very similarly to CHAR – I just wanted to make test 2 a bit more exciting. MAX Performance Summary: 8 seconds elapsed time 245MB memory grant 391MB tempdb usage 193MB estimated sort set 25,043 logical reads Sort warning Test 3 – TEXT The same test again, but using the deprecated TEXT data type for the padding column: DECLARE @read BIGINT, @write BIGINT ; SELECT @read = SUM(num_of_bytes_read), @write = SUM(num_of_bytes_written) FROM tempdb.sys.database_files AS DBF JOIN sys.dm_io_virtual_file_stats(2, NULL) AS FS ON FS.file_id = DBF.file_id WHERE DBF.type_desc = 'ROWS' ; SET STATISTICS IO ON ; SELECT TOP (150) TT.id, TT.padding FROM dbo.TestTEXT AS TT ORDER BY NEWID() OPTION (MAXDOP 1, RECOMPILE) ; SET STATISTICS IO OFF ; SELECT tempdb_read_MB = (SUM(num_of_bytes_read) - @read) / 1024. 
/ 1024., tempdb_write_MB = (SUM(num_of_bytes_written) - @write) / 1024. / 1024., internal_use_MB = ( SELECT internal_objects_alloc_page_count / 128.0 FROM sys.dm_db_task_space_usage WHERE session_id = @@SPID ) FROM tempdb.sys.database_files AS DBF JOIN sys.dm_io_virtual_file_stats(2, NULL) AS FS ON FS.file_id = DBF.file_id WHERE DBF.type_desc = 'ROWS' ; This time the query runs in 500ms.  If you look at the metrics we have been checking so far, it’s not hard to understand why: TEXT Performance Summary: 0.5 seconds elapsed time 9MB memory grant 5MB tempdb usage 5MB estimated sort set 207 logical reads 596 LOB logical reads Sort warning SQL Server’s memory grant algorithm still underestimates the memory needed to perform the sorting operation, but the size of the data to sort is so much smaller (5MB versus 193MB previously) that the spilled sort doesn’t matter very much.  Why is the data size so much smaller?  The query still produces the correct results – including the large amount of data held in the padding column – so what magic is being performed here? TEXT versus MAX Storage The answer lies in how columns of the TEXT data type are stored.  By default, TEXT data is stored off-row in separate LOB pages – which explains why this is the first query we have seen that records LOB logical reads in its STATISTICS IO output.  You may recall from my last post that LOB data leaves an in-row pointer to the separate storage structure holding the LOB data. SQL Server can see that the full LOB value is not required by the query plan until results are returned, so instead of passing the full LOB value down the plan from the Clustered Index Scan, it passes the small in-row structure instead.  SQL Server estimates that each row coming from the scan will be 79 bytes long – 11 bytes for row overhead, 4 bytes for the integer id column, and 64 bytes for the LOB pointer (in fact the pointer is rather smaller – usually 16 bytes – but the details of that don’t really matter right now). OK, so this query is much more efficient because it is sorting a very much smaller data set – SQL Server delays retrieving the LOB data itself until after the Sort starts producing its 150 rows.  The question that normally arises at this point is: Why doesn’t SQL Server use the same trick when the padding column is defined as VARCHAR(MAX)? The answer is connected with the fact that if the actual size of the VARCHAR(MAX) data is 8000 bytes or less, it is usually stored in-row in exactly the same way as for a VARCHAR(8000) column – MAX data only moves off-row into LOB storage when it exceeds 8000 bytes.  The default behaviour of the TEXT type is to be stored off-row by default, unless the ‘text in row’ table option is set suitably and there is room on the page.  There is an analogous (but opposite) setting to control the storage of MAX data – the ‘large value types out of row’ table option.  By enabling this option for a table, MAX data will be stored off-row (in a LOB structure) instead of in-row.  SQL Server Books Online has good coverage of both options in the topic In Row Data. The MAXOOR Table The essential difference, then, is that MAX defaults to in-row storage, and TEXT defaults to off-row (LOB) storage.  You might be thinking that we could get the same benefits seen for the TEXT data type by storing the VARCHAR(MAX) values off row – so let’s look at that option now.  
This script creates a fourth table, with the VARCHAR(MAX) data stored off-row in LOB pages: CREATE TABLE dbo.TestMAXOOR ( id INTEGER IDENTITY (1,1) NOT NULL, padding VARCHAR(MAX) NOT NULL,   CONSTRAINT [PK dbo.TestMAXOOR (id)] PRIMARY KEY CLUSTERED (id), ) ; EXECUTE sys.sp_tableoption @TableNamePattern = N'dbo.TestMAXOOR', @OptionName = 'large value types out of row', @OptionValue = 'true' ; SELECT large_value_types_out_of_row FROM sys.tables WHERE [schema_id] = SCHEMA_ID(N'dbo') AND name = N'TestMAXOOR' ; INSERT INTO dbo.TestMAXOOR WITH (TABLOCKX) ( padding ) SELECT SPACE(0) FROM dbo.TestCHAR ORDER BY id ; UPDATE TM WITH (TABLOCK) SET padding.WRITE (TC.padding, NULL, NULL) FROM dbo.TestMAXOOR AS TM JOIN dbo.TestCHAR AS TC ON TC.id = TM.id ; EXECUTE sys.sp_spaceused @objname = 'dbo.TestMAXOOR' ; CHECKPOINT ; Test 4 – MAXOOR We can now re-run our test on the MAXOOR (MAX out of row) table: DECLARE @read BIGINT, @write BIGINT ; SELECT @read = SUM(num_of_bytes_read), @write = SUM(num_of_bytes_written) FROM tempdb.sys.database_files AS DBF JOIN sys.dm_io_virtual_file_stats(2, NULL) AS FS ON FS.file_id = DBF.file_id WHERE DBF.type_desc = 'ROWS' ; SET STATISTICS IO ON ; SELECT TOP (150) MO.id, MO.padding FROM dbo.TestMAXOOR AS MO ORDER BY NEWID() OPTION (MAXDOP 1, RECOMPILE) ; SET STATISTICS IO OFF ; SELECT tempdb_read_MB = (SUM(num_of_bytes_read) - @read) / 1024. / 1024., tempdb_write_MB = (SUM(num_of_bytes_written) - @write) / 1024. / 1024., internal_use_MB = ( SELECT internal_objects_alloc_page_count / 128.0 FROM sys.dm_db_task_space_usage WHERE session_id = @@SPID ) FROM tempdb.sys.database_files AS DBF JOIN sys.dm_io_virtual_file_stats(2, NULL) AS FS ON FS.file_id = DBF.file_id WHERE DBF.type_desc = 'ROWS' ; TEXT Performance Summary: 0.3 seconds elapsed time 245MB memory grant 0MB tempdb usage 193MB estimated sort set 207 logical reads 446 LOB logical reads No sort warning The query runs very quickly – slightly faster than Test 3, and without spilling the sort to tempdb (there is no sort warning in the trace, and the monitoring query shows zero tempdb usage by this query).  SQL Server is passing the in-row pointer structure down the plan and only looking up the LOB value on the output side of the sort. The Hidden Problem There is still a huge problem with this query though – it requires a 245MB memory grant.  No wonder the sort doesn’t spill to tempdb now – 245MB is about 20 times more memory than this query actually requires to sort 50,000 records containing LOB data pointers.  Notice that the estimated row and data sizes in the plan are the same as in test 2 (where the MAX data was stored in-row). The optimizer assumes that MAX data is stored in-row, regardless of the sp_tableoption setting ‘large value types out of row’.  Why?  Because this option is dynamic – changing it does not immediately force all MAX data in the table in-row or off-row, only when data is added or actually changed.  SQL Server does not keep statistics to show how much MAX or TEXT data is currently in-row, and how much is stored in LOB pages.  This is an annoying limitation, and one which I hope will be addressed in a future version of the product. So why should we worry about this?  Excessive memory grants reduce concurrency and may result in queries waiting on the RESOURCE_SEMAPHORE wait type while they wait for memory they do not need.  245MB is an awful lot of memory, especially on 32-bit versions where memory grants cannot use AWE-mapped memory.  
Even on a 64-bit server with plenty of memory, do you really want a single query to consume 0.25GB of memory unnecessarily?  That’s 32,000 8KB pages that might be put to much better use. The Solution The answer is not to use the TEXT data type for the padding column.  That solution happens to have better performance characteristics for this specific query, but it still results in a spilled sort, and it is hard to recommend the use of a data type which is scheduled for removal.  I hope it is clear to you that the fundamental problem here is that SQL Server sorts the whole set arriving at a Sort operator.  Clearly, it is not efficient to sort the whole table in memory just to return 150 rows in a random order. The TEXT example was more efficient because it dramatically reduced the size of the set that needed to be sorted.  We can do the same thing by selecting 150 unique keys from the table at random (sorting by NEWID() for example) and only then retrieving the large padding column values for just the 150 rows we need.  The following script implements that idea for all four tables: SET STATISTICS IO ON ; WITH TestTable AS ( SELECT * FROM dbo.TestCHAR ), TopKeys AS ( SELECT TOP (150) id FROM TestTable ORDER BY NEWID() ) SELECT T1.id, T1.padding FROM TestTable AS T1 WHERE T1.id = ANY (SELECT id FROM TopKeys) OPTION (MAXDOP 1) ; WITH TestTable AS ( SELECT * FROM dbo.TestMAX ), TopKeys AS ( SELECT TOP (150) id FROM TestTable ORDER BY NEWID() ) SELECT T1.id, T1.padding FROM TestTable AS T1 WHERE T1.id IN (SELECT id FROM TopKeys) OPTION (MAXDOP 1) ; WITH TestTable AS ( SELECT * FROM dbo.TestTEXT ), TopKeys AS ( SELECT TOP (150) id FROM TestTable ORDER BY NEWID() ) SELECT T1.id, T1.padding FROM TestTable AS T1 WHERE T1.id IN (SELECT id FROM TopKeys) OPTION (MAXDOP 1) ; WITH TestTable AS ( SELECT * FROM dbo.TestMAXOOR ), TopKeys AS ( SELECT TOP (150) id FROM TestTable ORDER BY NEWID() ) SELECT T1.id, T1.padding FROM TestTable AS T1 WHERE T1.id IN (SELECT id FROM TopKeys) OPTION (MAXDOP 1) ; SET STATISTICS IO OFF ; All four queries now return results in much less than a second, with memory grants between 6 and 12MB, and without spilling to tempdb.  The small remaining inefficiency is in reading the id column values from the clustered primary key index.  As a clustered index, it contains all the in-row data at its leaf.  The CHAR and VARCHAR(MAX) tables store the padding column in-row, so id values are separated by a 3999-character column, plus row overhead.  The TEXT and MAXOOR tables store the padding values off-row, so id values in the clustered index leaf are separated by the much-smaller off-row pointer structure.  This difference is reflected in the number of logical page reads performed by the four queries: Table 'TestCHAR' logical reads 25511 lob logical reads 000 Table 'TestMAX'. logical reads 25511 lob logical reads 000 Table 'TestTEXT' logical reads 00412 lob logical reads 597 Table 'TestMAXOOR' logical reads 00413 lob logical reads 446 We can increase the density of the id values by creating a separate nonclustered index on the id column only.  This is the same key as the clustered index, of course, but the nonclustered index will not include the rest of the in-row column data. 
CREATE UNIQUE NONCLUSTERED INDEX uq1 ON dbo.TestCHAR (id); CREATE UNIQUE NONCLUSTERED INDEX uq1 ON dbo.TestMAX (id); CREATE UNIQUE NONCLUSTERED INDEX uq1 ON dbo.TestTEXT (id); CREATE UNIQUE NONCLUSTERED INDEX uq1 ON dbo.TestMAXOOR (id); The four queries can now use the very dense nonclustered index to quickly scan the id values, sort them by NEWID(), select the 150 ids we want, and then look up the padding data.  The logical reads with the new indexes in place are: Table 'TestCHAR' logical reads 835 lob logical reads 0 Table 'TestMAX' logical reads 835 lob logical reads 0 Table 'TestTEXT' logical reads 686 lob logical reads 597 Table 'TestMAXOOR' logical reads 686 lob logical reads 448 With the new index, all four queries use the same query plan (click to enlarge): Performance Summary: 0.3 seconds elapsed time 6MB memory grant 0MB tempdb usage 1MB sort set 835 logical reads (CHAR, MAX) 686 logical reads (TEXT, MAXOOR) 597 LOB logical reads (TEXT) 448 LOB logical reads (MAXOOR) No sort warning I’ll leave it as an exercise for the reader to work out why trying to eliminate the Key Lookup by adding the padding column to the new nonclustered indexes would be a daft idea Conclusion This post is not about tuning queries that access columns containing big strings.  It isn’t about the internal differences between TEXT and MAX data types either.  It isn’t even about the cool use of UPDATE .WRITE used in the MAXOOR table load.  No, this post is about something else: Many developers might not have tuned our starting example query at all – 5 seconds isn’t that bad, and the original query plan looks reasonable at first glance.  Perhaps the NEWID() function would have been blamed for ‘just being slow’ – who knows.  5 seconds isn’t awful – unless your users expect sub-second responses – but using 250MB of memory and writing 200MB to tempdb certainly is!  If ten sessions ran that query at the same time in production that’s 2.5GB of memory usage and 2GB hitting tempdb.  Of course, not all queries can be rewritten to avoid large memory grants and sort spills using the key-lookup technique in this post, but that’s not the point either. The point of this post is that a basic understanding of execution plans is not enough.  Tuning for logical reads and adding covering indexes is not enough.  If you want to produce high-quality, scalable TSQL that won’t get you paged as soon as it hits production, you need a deep understanding of execution plans, and as much accurate, deep knowledge about SQL Server as you can lay your hands on.  The advanced database developer has a wide range of tools to use in writing queries that perform well in a range of circumstances. By the way, the examples in this post were written for SQL Server 2008.  They will run on 2005 and demonstrate the same principles, but you won’t get the same figures I did because 2005 had a rather nasty bug in the Top N Sort operator.  Fair warning: if you do decide to run the scripts on a 2005 instance (particularly the parallel query) do it before you head out for lunch… This post is dedicated to the people of Christchurch, New Zealand. © 2011 Paul White email: @[email protected] twitter: @SQL_Kiwi

    Read the article

  • SQL SERVER – Puzzle to Win Print Book – Explain Value of PERCENTILE_CONT() Using Simple Example

    - by pinaldave
    For the last several days I have been working on various Denali analytic functions, and it is indeed really fun to refresh concepts I studied in school. Earlier I wrote an article where I explained how we can use PERCENTILE_CONT() to find the median, over here: SQL SERVER – Introduction to PERCENTILE_CONT() – Analytic Functions Introduced in SQL Server 2012. Today I am going to ask a question based on the same blog post. Again, just like last time, the intention of this puzzle is to help you learn a new concept of SQL Server 2012, even if you are on an earlier version of SQL Server. On another note, SQL Server 2012 RC0 has been announced and is available to download: SQL SERVER – 2012 RC0 Various Resources and Downloads. Now let's have fun with the following query: USE AdventureWorks GO SELECT SalesOrderID, OrderQty, ProductID, PERCENTILE_CONT(0.5) WITHIN GROUP (ORDER BY ProductID) OVER (PARTITION BY SalesOrderID) AS MedianCont FROM Sales.SalesOrderDetail WHERE SalesOrderID IN (43670, 43669, 43667, 43663) ORDER BY SalesOrderID DESC GO The above query will give us the following result: The reason we get the median is that we are passing the value 0.5 to the PERCENTILE_CONT() function. Now read the puzzle. Puzzle: Run the following T-SQL code: USE AdventureWorks GO SELECT SalesOrderID, OrderQty, ProductID, PERCENTILE_CONT(0.9) WITHIN GROUP (ORDER BY ProductID) OVER (PARTITION BY SalesOrderID) AS MedianCont FROM Sales.SalesOrderDetail WHERE SalesOrderID IN (43670, 43669, 43667, 43663) ORDER BY SalesOrderID DESC GO Observe the result and you will notice that MedianCont has a different value than before; the reason is that PERCENTILE_CONT is passed the value 0.9. For the first four rows the value is 775.1. Now run the following T-SQL code: USE AdventureWorks GO SELECT SalesOrderID, OrderQty, ProductID, PERCENTILE_CONT(0.1) WITHIN GROUP (ORDER BY ProductID) OVER (PARTITION BY SalesOrderID) AS MedianCont FROM Sales.SalesOrderDetail WHERE SalesOrderID IN (43670, 43669, 43667, 43663) ORDER BY SalesOrderID DESC GO Observe the result and you will notice that MedianCont again has a different value; the reason is that PERCENTILE_CONT is passed the value 0.1. For the first four rows the value is 709.3. In my example I explained how the median is found using this function. You have to explain, using mathematics and in easy words, why the values in the last column are 709.3 and 775.1. Hint: SQL SERVER – Introduction to PERCENTILE_CONT() – Analytic Functions Introduced in SQL Server 2012 Rules: Leave a comment with your detailed answer by Nov 25. Open world-wide (where Amazon ships books). If you blog about the puzzle's solution and you win, you win an additional surprise gift as well. Prizes: Print copy of my new book SQL Server Interview Questions Amazon|Flipkart. If you already have this book, you can opt for any of my other books: SQL Wait Stats [Amazon|Flipkart|Kindle] and SQL Programming [Amazon|Flipkart|Kindle]. Reference: Pinal Dave (http://blog.SQLAuthority.com)
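
    For readers working the puzzle, here is the standard interpolation PERCENTILE_CONT(p) performs, sketched in T-SQL comments. The four ProductID values are inferred from the outputs quoted above (they reproduce 709.3 and 775.1 exactly), so treat them as an educated guess rather than a value straight from AdventureWorks.

        -- For N ordered values v(1)..v(N):
        --   RN = 1 + p * (N - 1)
        --   result = v(FLOOR(RN)) + (RN - FLOOR(RN)) * (v(CEILING(RN)) - v(FLOOR(RN)))
        --
        -- With a 4-row partition whose ordered ProductIDs are 709, 710, 773, 776:
        --   p = 0.1: RN = 1.3, result = 709 + 0.3 * (710 - 709) = 709.3
        --   p = 0.9: RN = 3.7, result = 773 + 0.7 * (776 - 773) = 775.1
        --   p = 0.5: RN = 2.5, result = 710 + 0.5 * (773 - 710) = 741.5 (the median)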

    Read the article
