Search Results

Search found 18191 results on 728 pages for 'single board'.


  • How would I find all sets of N single-digit, non-repeating numbers that add up to a given sum in PHP

    - by TerranRich
    Let's say I want to find all sets of 5 single-digit, non-repeating numbers that add up to 30... I'd end up with [9,8,7,5,1], [9,8,7,4,2], [9,8,6,4,3], [9,8,6,5,2], [9,7,6,5,3], and [8,7,6,5,4]. Each of those sets contains 5 non-repeating digits that add up to 30, the given sum. Any help would be greatly appreciated. Even just a starting point for me to use would be awesome. I came up with one method, which seems like a long way of going about it: get all unique 5-digit numbers (12345, 12346, 12347, etc.), add up the digits, and see if it equals the given sum (e.g. 30). If it does, add it to the list of possible matching sets. I'm doing this for a personal project, which will help me in solving Kakuro puzzles without actually solving the whole thing at once. Yeah, it may be cheating, but it's... it's not THAT bad... :P
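
    A hedged starting point, since this is really "combinations of distinct digits 1-9 summing to a target": a short recursive backtracking function prunes far earlier than enumerating every 5-digit number. The function and parameter names below are my own invention:

        <?php
        // All sets of $count distinct digits (1..$maxDigit) that sum to $target.
        // Digits are tried in descending order so each set is emitted only once.
        function digitSets($target, $count, $maxDigit = 9) {
            if ($count === 0) {
                return $target === 0 ? array(array()) : array();
            }
            $results = array();
            for ($d = min($maxDigit, $target); $d >= 1; $d--) {
                foreach (digitSets($target - $d, $count - 1, $d - 1) as $rest) {
                    $results[] = array_merge(array($d), $rest);
                }
            }
            return $results;
        }

        print_r(digitSets(30, 5)); // [9,8,7,5,1], [9,8,7,4,2], ... [8,7,6,5,4]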

  • How to cope with null results in SQL Tasks that return single rows in SSIS 2005?

    - by JSacksteder
    In a dataflow task, I can slip a rowcount into the processing flow and place the count into a variable. I can later use that variable to conditionally perform some other work if the rowcount was 0. This works well for me, but I have no corresponding strategy for SQL tasks expected to return a single row. In that event, I'm returning those values into variables. If the lookup produces no rows, the SQL task fails when assigning values into those variables. I can branch on that component failing, but there's a side effect of that - if I'm running the job as a SQL Server Agent job step, the step returns DTSER_FAILURE, causing the step to fail. I can tell the SQL agent to disregard the step failure, but then I won't know if I have a legitimate error in that step. This seems harder than it should be. The only strategy I can think of is to run the same query with a count(*) aggregate, test if that returns a number > 0, and if so run the query again without the count. That's ugly because I have the same query in two places that I need to keep in sync. Is there a better way?
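
    One hedged alternative: make the lookup itself always return exactly one row, so the variable assignment can never fail, by wrapping the columns in aggregates - a single copy of the query instead of two. Table, column, and parameter names below are placeholders:

        -- MAX() over an empty set yields NULL instead of failing,
        -- and the COUNT flags whether a real match existed.
        SELECT MAX(t.SomeValue) AS SomeValue,   -- NULL when no row matched
               COUNT(*)         AS MatchCount   -- 0 when no row matched
        FROM   dbo.SomeTable AS t
        WHERE  t.SomeKey = ?;

    Mapping MatchCount to an SSIS variable then lets a precedence constraint branch on "no rows" without the task ever failing.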

  • Fetch Data using predicate. Retrieve single value.

    - by Mr. McPepperNuts
    Here is the fetch:

        XYZAppDelegate *appDelegate = [[UIApplication sharedApplication] delegate];
        NSManagedObjectContext *managedObjectContext = appDelegate.managedObjectContext;

        NSFetchRequest *request = [[NSFetchRequest alloc] init];
        NSPredicate *predicate = [NSPredicate predicateWithFormat:@"name == %@", entryToSearchFor];
        NSEntityDescription *entity = [NSEntityDescription entityForName:@"Entry"
                                                  inManagedObjectContext:managedObjectContext];
        NSSortDescriptor *sortDescriptor = [[NSSortDescriptor alloc] initWithKey:@"name" ascending:NO];
        NSArray *sortDescriptors = [[NSArray alloc] initWithObjects:sortDescriptor, nil];
        [request setSortDescriptors:sortDescriptors];
        [request setEntity:entity];
        [request setPredicate:predicate];

        NSArray *results = [managedObjectContext executeFetchRequest:request error:nil];
        if (results == nil) {
            NSLog(@"No results found");
        } else {
            NSLog(@"entryToSearchFor %@", entryToSearchFor);
            NSLog(@"results %@", [results objectAtIndex:0]);
        }

    I want to retrieve a single value (String) from "Entry." I believe I must be missing something. Can someone point it out? By the way, the NSLog of the results outputs the following:

        results <NSManagedObject: 0x3d2d360> (entity: Entry; id: 0x3d13650 <x-coredata://6EA12ADA-8C0B-477F-801C-B44FE6E6C91C/Entry/p3> ; data: <fault>)
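
    A hedged sketch of the likely missing step: the fetch returns whole NSManagedObject instances, not strings, so the attribute still has to be read off the first result (the attribute name "name" is assumed from the predicate; the data: <fault> in the log is normal and resolves on first access):

        if ([results count] > 0) {
            NSManagedObject *entry = [results objectAtIndex:0];
            NSString *name = [entry valueForKey:@"name"]; // the single string value
            NSLog(@"name %@", name);
        }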

  • What is an efficient strategy for multiple threads posting jobs and waiting for response from a single thread?

    - by jakewins
    In java, what is an efficient solution to the following problem: I have multiple threads (10-20 or so) generating jobs ("Job Creators"), and a single thread capable of performing them ("The worker"). Once a job creator has posted a job, it should wait for the job to finish, yielding no result other than "it's done", before it keeps going. For sending the jobs to the worker thread, I think a ring buffer or similar standard fan-in setup would perhaps be a good approach? But for a Job Creator to find out that her job has been done, I'm not so sure.. The job creators could sleep, and the worker interrupt them when done.. Or each job creator could have an atomic boolean that it checks, and that the worker sets. I dunno, neither of those feel very nice. I'd like to do it with as few (none, if possible) locks as absolutely possible. So to be clear: What I'm looking for is speed, not necessarily simplicity. Does anyone have any suggestions? Links to reading about concurrency strategies would also be very welcome!
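
    One hedged sketch, using java.util.concurrent rather than hand-rolled spinning: pair each job with its own CountDownLatch, push it through a BlockingQueue, and let the creator await its latch (parking, not busy-waiting). All class names here are invented for illustration, and a disruptor-style ring buffer could replace the queue if this proves too slow:

        import java.util.concurrent.BlockingQueue;
        import java.util.concurrent.CountDownLatch;
        import java.util.concurrent.LinkedBlockingQueue;

        class Job {
            final Runnable work;
            final CountDownLatch done = new CountDownLatch(1);
            Job(Runnable work) { this.work = work; }
        }

        class Worker implements Runnable {
            private final BlockingQueue<Job> queue;
            Worker(BlockingQueue<Job> queue) { this.queue = queue; }
            public void run() {
                try {
                    while (true) {
                        Job job = queue.take();    // blocks until a job arrives
                        job.work.run();
                        job.done.countDown();      // wakes exactly the posting creator
                    }
                } catch (InterruptedException e) {
                    Thread.currentThread().interrupt();
                }
            }
        }

        // in each job creator:
        //   Job job = new Job(myWork);
        //   queue.put(job);
        //   job.done.await();   // park here until the worker finishes it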

  • How to change the css class="current" when using page jumping (single page website)?

    - by Bryan
    Morning, I must be asking google all the wrong questions, because I can't find anything similar. I have a standard navigation list, but I'm using page jumping because I wanted a single web page.

        <ul>
          <li><a href="#livestream">Livestream</a></li>
          <li><a href="#media">Media</a></li>
          <li><a href="#crew">Crew</a></li>
          <li><a href="#services">Services</a></li>
          <li><a href="#contact">Contact</a></li>
        </ul>

    But I can't for the life of me figure out how to set the class="current" when using page jumping. I've tried a bit of jquery that appears to be what I'm looking for, but it did nothing. I don't think it'll work for #links. Any ideas?
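
    A hedged jQuery sketch (it assumes the current class lives on the <a> elements; move the calls up to the <li>s with .parent() if not):

        $('ul li a').click(function () {
            $('ul li a').removeClass('current');  // clear the previous highlight
            $(this).addClass('current');          // highlight the link just clicked
        });

    Because the links only jump to #anchors on the same page, no navigation happens to wipe the class out, which is why a click handler is enough here where a per-page approach wouldn't be.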

  • Is it possible for two VS2008 C# class library projects to share a single namespace?

    - by jeah
    I am trying to share a common namespace between two projects in a single solution. The projects are "Blueprint" and "Repositories". Blueprint contains Interfaces for the entire application and serves as a reference for the application structure. In the Blueprint project, I have an interface with the following declaration:

        namespace Application.Repositories {
            public interface IRepository {
                IEntity Get(Guid id);
            }
        }

    In the Repositories project I have the following class:

        namespace Application.Repositories {
            public class STDRepository : IRepository {
                STD Get(Guid id) {
                    return new SkankyExGirlfriendDataContext()
                        .FirstOrDefault<STD>(x => x.DiseaseId == id);
                }
            }
        }

    However, this does not work. The Repositories project has a reference to the Blueprint project. I receive a VS error: "The type or namespace name 'IRepository' could not be found (are you missing a using directive or an assembly reference?)" - Normally, this is easy to fix, but adding a using statement doesn't make sense since they have the same namespace. I tried it anyway and it didn't work. The reference has been added, and without the line of code referencing that interface, both projects compile successfully. I am lost here. I have searched all over and have found nothing, so I am assuming that there is something fundamentally wrong with what I'm doing... but I don't know what it is. So, I would appreciate some explanation or guidance as to how to fix this problem. I hope you guys can help. Note: The reason I want to do it this way and keep the interfaces under the same namespace is because I want a solid project to keep all the interfaces in, in order to have a reference for the full architecture of the application. I have considered workarounds, such as putting all of the interfaces in the Blueprint.Application namespace instead of the Application namespace. However, that would require me to write the using statement on virtually every page in the application... and my fingers get tired. Thanks again guys...

  • Structuring the UI code of a single-page EXTjs Web app using Rails?

    - by Daniel Beardsley
    I’m in the process of creating a large single-page web-app using ext-js for the UI components with Rails on the backend. I’ve come to good solutions for transferring data using the Whorm gem and Rails support of RESTful Resources. What I haven’t come to a conclusion on is how to structure the UI and business logic aspects of the application. I’ve had a look at a few options, including Netzke, but haven’t seen anything that I really think fits my needs. How should a web-application that uses ext-js components, layouts, and controls in the browser and Rails on the server best implement UI component re-use, good organization, and maintainability while maintaining a flexible layout design? Specifically I’m looking for best-practice suggestions for structuring the code that creates and configures UI components (many UI config options will be based on user data):

      - Should EXT classes be extended in static JS for often re-used customizations and then instantiated with various configuration options by generated JS within html partials?
      - Should partials create javascript blocks that instantiate EXT components?
      - Should partials call helpers that return ruby hashes for EXT component config which is then dumped to Json?
      - Something else entirely?

    There are many options and I'd love to hear from people who've been down this road and found some methodology that worked for them.
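
    For what it's worth, a hedged sketch of the first option above, in ExtJS 3 style: the reusable shape lives in a static file via Ext.extend, and a partial instantiates it with whatever config Rails rendered for the current user. The MyApp names and the ext_config_for helper are invented:

        Ext.ns('MyApp');

        // static JS: the re-used customization, defined once and cacheable
        MyApp.UserGrid = Ext.extend(Ext.grid.GridPanel, {
            stripeRows: true,
            initComponent: function () {
                // per-instance config (storeUrl, fields, columns) arrives
                // from the generated JS in the partial below
                this.store = new Ext.data.JsonStore({
                    url: this.storeUrl,
                    fields: this.fields
                });
                MyApp.UserGrid.superclass.initComponent.call(this);
            }
        });

        // generated inside an html partial, config dumped from a Ruby hash:
        //   new MyApp.UserGrid(<%= ext_config_for(@user).to_json %>);

    This keeps the logic static while the partial stays a thin, data-only shim - one reasonable middle ground between the first and third options.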

  • How to create a single HTML file that Excel will open as multiple sheets?

    - by Dmitry Nelepov
    I want to know: is it possible to create a single HTML file which (after changing the extension to .xls) will be parsed into multiple sheets when opened in Excel? A little example - I have a file test.xls with the contents:

        <html>
        <body>
        <table>
          <tr>
            <td>1</td>
            <td>2</td>
            <td>3</td>
            <td>=sum(A1:C1)</td>
          </tr>
        </table>
        </body>
        </html>

    When I open this file in Excel I get one sheet, with the calculated sum of cells A1 to C1 shown in D1 = 6. I wonder whether it's possible to create an HTML file which contains multiple tables that will be parsed as multiple sheets in Excel.
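
    A hedged sketch, from memory of Excel's legacy web-page formats rather than anything authoritative: a flat HTML file seems to always import as one sheet, but the MHTML "single file web page" format (still one file, saved as .mht) carries several HTML parts plus an ExcelWorkbook XML island naming one worksheet per part. The boundary string and part names below are arbitrary, and the surest way to get the exact syntax is to save a two-sheet workbook from Excel as a Single File Web Page and copy its structure:

        MIME-Version: 1.0
        Content-Type: multipart/related; boundary="XLBOUNDARY"

        --XLBOUNDARY
        Content-Location: file:///book.htm
        Content-Type: text/html; charset=utf-8

        <html xmlns:x="urn:schemas-microsoft-com:office:excel">
        <head><xml>
          <x:ExcelWorkbook><x:ExcelWorksheets>
            <x:ExcelWorksheet><x:Name>Sheet1</x:Name>
              <x:WorksheetSource HRef="file:///sheet1.htm"/></x:ExcelWorksheet>
            <x:ExcelWorksheet><x:Name>Sheet2</x:Name>
              <x:WorksheetSource HRef="file:///sheet2.htm"/></x:ExcelWorksheet>
          </x:ExcelWorksheets></x:ExcelWorkbook>
        </xml></head>
        </html>

        --XLBOUNDARY
        Content-Location: file:///sheet1.htm
        Content-Type: text/html; charset=utf-8

        <html><body><table><tr><td>1</td><td>2</td><td>3</td><td>=sum(A1:C1)</td></tr></table></body></html>

        --XLBOUNDARY
        Content-Location: file:///sheet2.htm
        Content-Type: text/html; charset=utf-8

        <html><body><table><tr><td>4</td><td>5</td></tr></table></body></html>

        --XLBOUNDARY--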

  • best practices question: How to save a collection of images and a java object in a single file?

    - by Richard
    Hi all, I am making a java program that has a collection of flash-card like objects. I store the objects in a JTree composed of DefaultMutableTreeNodes. Each node has a user object attached to it which has a few string/native data type parameters. However, I also want each of these objects to have an image (typical formats, jpg, png etc). I would like to be able to store all of this information, including the images and the tree data, to the disk in a single file so the file can be transferred between users and the entire tree, including the images and parameters for each object, can be reconstructed. I had not approached a problem like this before so I was not sure what the best practices were. I found XMLEncoder (http://java.sun.com/j2se/1.4.2/docs/api/java/beans/XMLEncoder.html) to be a very effective way of storing my tree and the native data type information. However I couldn't figure out how to save the image data itself inside of the XML file, and I'm not sure it is possible since the data is binary (so restricted characters would be invalid). My next thought was to associate a hash string instead of an image within each user object, and then gzip together all of the images, with the hash strings as the names, and the XML-encoded tree in the same compressed file. That seemed really contrived though. Does anyone know a good approach for this type of issue? Thanks!
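
    The container idea from the question is a common pattern; here is a hedged sketch of it (method name, entry names, and the byte[] image map are all my own - note a plain gzip stream can't hold multiple named members, but a zip can):

        import java.beans.XMLEncoder;
        import java.io.*;
        import java.util.Map;
        import java.util.zip.ZipEntry;
        import java.util.zip.ZipOutputStream;

        void saveDeck(File file, Object treeModel, Map<String, byte[]> images) throws IOException {
            ZipOutputStream zip = new ZipOutputStream(new FileOutputStream(file));
            try {
                // entry 1: the tree and its bean-style user objects, as XML
                zip.putNextEntry(new ZipEntry("tree.xml"));
                XMLEncoder enc = new XMLEncoder(new FilterOutputStream(zip) {
                    @Override public void close() { /* no-op: keep the zip open */ }
                });
                enc.writeObject(treeModel);
                enc.close();                 // flushes the XML; the override spares the zip
                zip.closeEntry();

                // entries 2..n: one per image, named by the key stored in the user object
                for (Map.Entry<String, byte[]> e : images.entrySet()) {
                    zip.putNextEntry(new ZipEntry("images/" + e.getKey()));
                    zip.write(e.getValue());
                    zip.closeEntry();
                }
            } finally {
                zip.close();
            }
        }

    Reading it back is the mirror image with ZipInputStream and XMLDecoder. One caveat worth hedging: XMLEncoder only round-trips bean-style objects, so the user objects need getters/setters (or persistence delegates).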

  • Subquery with multiple results combined into a single field?

    - by Todd
    Assume I have these tables, from which I need to display search results in a browser:

        Table: Containers
        id | name
        1  | Big Box
        2  | Grocery Bag
        3  | Envelope
        4  | Zip Lock

        Table: Sale
        id | date     | containerid
        1  | 20100101 | 1
        2  | 20100102 | 2
        3  | 20091201 | 3
        4  | 20091115 | 4

        Table: Items
        id | name        | saleid
        1  | Barbie Doll | 1
        2  | Coin        | 3
        3  | Pop-Top     | 4
        4  | Barbie Doll | 2
        5  | Coin        | 4

    I need output that looks like this:

        itemid | itemname    | saleids | saledates         | containerids | containertypes
        1      | Barbie Doll | 1,2     | 20100101,20100102 | 1,2          | Big Box, Grocery Bag
        2      | Coin        | 3,4     | 20091201,20091115 | 3,4          | Envelope, Zip Lock
        3      | Pop-Top     | 4       | 20091115          | 4            | Zip Lock

    The important part is that each item type only gets one record/row in the return on the screen. I accomplished this in the past by returning multiple rows of the same item and using a scripting language to limit the output. However, this makes the ui overly complicated and loopy. So, I'm hoping I can get the database to spit out only as many records as there are rows to display. This example may be a bit extreme because of the 2 joins needed to get to the container from the item (through the sale table). I'd be happy for just an example query that outputs this:

        itemid | itemname    | saleids | saledates
        1      | Barbie Doll | 1,2     | 20100101,20100102
        2      | Coin        | 3,4     | 20091201,20091115
        3      | Pop-Top     | 4       | 20091115

    I can only return a single result in a subquery, so I'm not sure how to do this.
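
    If the database is MySQL (the question doesn't say), GROUP_CONCAT does exactly this aggregation - a hedged sketch against the tables above:

        SELECT MIN(i.id)                          AS itemid,
               i.name                             AS itemname,
               GROUP_CONCAT(s.id   ORDER BY s.id) AS saleids,
               GROUP_CONCAT(s.date ORDER BY s.id) AS saledates
        FROM   Items AS i
        JOIN   Sale  AS s ON s.id = i.saleid
        GROUP  BY i.name
        ORDER  BY itemid;

    Joining on through Containers and adding two more GROUP_CONCAT columns extends it to the full six-column output. On SQL Server the same shape needs the FOR XML PATH('') trick instead, and on Oracle 11gR2+, LISTAGG.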

  • RIA Service/oData ... "Requests that attempt to access a single element using key values from a result set are not supported"

    - by user327911
    I've recently started working up a sample project to play with an oData feed coming from a RIA service. I am able to view the feed and the metadata via any web browser, however, if I try to perform certain query operations on the feed I receive "unsupported" exceptions. Sample oData feed (the Atom markup was stripped in posting; the surviving text of the feed is below):

        ProductSet
        http://localhost:50880/Services/Rebirth-Web-Services-ProductService.svc/OData/ProductSet/
        2010-04-28T14:02:10Z
        http://localhost:50880/Services/Rebirth-Web-Services-ProductService.svc/OData/ProductSet(guid'b0a2b170-c6df-441f-ae2a-74dd19901128')
        2010-04-28T14:02:10Z
        b0a2b170-c6df-441f-ae2a-74dd19901128
        Product 0
        Type 1
        Active

    Sample web.config entry:

    Sample service:

        [EnableClientAccess()]
        public class ProductService : DomainService
        {
            [Query(IsDefault = true)]
            public IQueryable<Product> GetProducts()
            {
                IList<Product> products = new List<Product>();
                for (int i = 0; i < 90; i++)
                {
                    Product product = new Product
                    {
                        Id = Guid.NewGuid(),
                        Name = "Product " + i.ToString(),
                        ProductType = i < 30 ? "Type 1" : ((i > 30 && i < 60) ? "Type 2" : "Type 3"),
                        Status = i % 2 == 0 ? "Active" : "NotActive"
                    };
                    products.Add(product);
                }
                return products.AsQueryable();
            }
        }

    If I provide the url "http://localhost:50880/Services/Rebirth-Web-Services-ProductService.svc/OData/ProductSet(guid'b0a2b170-c6df-441f-ae2a-74dd19901128')" to my web browser I receive the following: "Requests that attempt to access a single element using key values from a result set are not supported." I'm new to RIA and oData. Could this be something as simple as my web browser not supporting this type of querying on the result set, or something else? Thanks ahead! Corey

  • How do I gather data from the same column in multiple worksheets in a single workbook?

    - by infiniteloop91
    Okay, so here is what I want to accomplish. For this example I have a single workbook composed of 4 data sheets plus a totals sheet. Each of the 4 data sheets has a similar name following the same pattern, where the only difference is the date (e.g. 9854978_1009_US.txt, where 1009 is the date that changes while the rest of the sheet name stays the same). In each of those sheets, column F contains a series of numbers that I would like to sum, but I have no idea how many cells in F will actually contain numbers. (However, there will never be additional information below the numbers, so I could in theory just add the entire F column together.) I will also add new sheets to the workbook over time and do not want to have to rewrite the code that gathers data from column F. Essentially, I would like the 'totals' sheet to take every column F from sheets in the workbook named '9854978_????_US.txt', where the question marks change based on the sheet name. How would I go about doing this in pure Excel code?
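
    A hedged, formula-only sketch: Excel formulas can't wildcard sheet names, but a 3D reference sums the same column across every sheet positioned between two bookends, so any new '9854978_????_US.txt' sheet dragged between them is picked up automatically (the bookend names below are just examples built from the question):

        =SUM('9854978_1009_US.txt:9854978_1231_US.txt'!F:F)

    A common variant of the trick is two empty sentinel sheets named First and Last, with =SUM(First:Last!F:F) on the totals sheet; anything filed between the sentinels gets counted. Matching the name pattern itself, rather than the sheet position, would need VBA instead.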

  • PHP Arrays: Pop an array of single-element arrays into one array.

    - by Rob Drimmie
    Using a proprietary framework, I am frequently finding myself in the situation where I get a resultset from the database in the following format:

        array(5) {
          [0] => array(1) { ["id"] => int(241) }
          [1] => array(1) { ["id"] => int(2) }
          [2] => array(1) { ["id"] => int(81) }
          [3] => array(1) { ["id"] => int(560) }
          [4] => array(1) { ["id"] => int(10) }
        }

    I'd much rather have a single array of ids, such as:

        array(5) {
          [0] => int(241)
          [1] => int(2)
          [2] => int(81)
          [3] => int(560)
          [4] => int(10)
        }

    To get there, I frequently find myself writing:

        $justIds = array();
        foreach ($allIds as $id) {
            $justIds[] = $id["id"];
        }

    Is there a more efficient way to do this?
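
    Two hedged one-liners that do the same flattening:

        // PHP 5.5+: purpose-built for exactly this shape
        $justIds = array_column($allIds, 'id');

        // older PHP: map each single-element row to its 'id' value
        $justIds = array_map(function ($row) { return $row['id']; }, $allIds);

    Neither is dramatically faster than the foreach (it's O(n) regardless), but array_column keeps the intent obvious.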

  • How can I have a single helper work on different models passed to it?

    - by Angela
    I am probably going to need to refactor in two steps since I'm still developing the project and learning the use-cases as I go along, since it is to scratch my own itch. I have three models: Letters, Calls, Emails. They have some similarity, but I anticipate they also will have some different attributes, as you can tell from their description. Ideally I could refactor them as Events, with a type of Letters, Calls, or Emails, but didn't know how to extend subclasses. My immediate need is this: I have a helper which checks on the status of whether an email (for example) was sent to a specific contact:

        def show_email_status(contact, email)
          @contact_email = ContactEmail.find(:first,
            :conditions => { :contact_id => contact.id, :email_id => email.id })
          if !@contact_email.nil?
            return @contact_email.status
          end
        end

    I realized that I, of course, want to know the status for whether a call was made to a contact as well, so I wrote:

        def show_call_status(contact, call)
          @contact_call = ContactCall.find(:first,
            :conditions => { :contact_id => contact.id, :call_id => call.id })
          if !@contact_call.nil?
            return @contact_call.status
          end
        end

    I would love to be able to just have a single helper show_status where I can say show_status(contact, call) or show_status(contact, email) and it would know whether to look for the object @contact_call or @contact_email. Yes, it would be easier if it were just @contact_event, but I want to do a small refactoring while I get the program up and running, and this would make the ability to do a history for a given contact much easier. Thanks!
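
    As an interim step before the full Events refactor, a hedged sketch of one helper that derives the join model and foreign key from the class of whatever gets passed in (it assumes the ContactEmail/ContactCall/ContactLetter naming convention holds throughout):

        def show_status(contact, event)
          kind  = event.class.name.underscore               # "email", "call", "letter"
          klass = "Contact#{event.class.name}".constantize  # ContactEmail, ContactCall, ...
          record = klass.find(:first, :conditions => {
            :contact_id => contact.id, "#{kind}_id".to_sym => event.id })
          record.status unless record.nil?
        end

    underscore and constantize are ActiveSupport helpers, so this stays within what Rails already loads.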

  • SQL SERVER – SQLServer Quiz 2011 – Do you know your execution plan – Two questions – One Answer

    - by pinaldave
    My friend Jacob Sebastian has launched SQL Server Quiz 2011. This time, when he asked me to come up with a quiz question, I wanted to come up with something new that makes participants think about it. After careful thought I came up with a question which I would really like to solve myself. Here are the details:

    1) Using a single table, only once, in a single SELECT statement, generate an execution plan which has a JOIN operator. Explain the reason for the same.

    2) Using a single table, only once, in a single SELECT statement, generate an execution plan which has a parallelism operator. Explain the reason for the same.

    Bonus: Create a single query which satisfies both of the above statements.

    To answer this question and win exciting gifts please visit the SQL Server Quiz website. Reference: Pinal Dave (http://blog.SQLAuthority.com)

  • Talend Enterprise Data Integration overperforms on Oracle SPARC T4

    - by Amir Javanshir
    The SPARC T microprocessor, released in 2005 by Sun Microsystems, and now continued at Oracle, has a good track record in parallel execution and multi-threaded performance. However it was less suited for pure single-threaded workloads. The new SPARC T4 processor is now filling that gap by offering a 5x better single-thread performance over previous generations. Following our long-term relationship with Talend, a fast growing ISV positioned by Gartner in the “Visionaries” quadrant of the “Magic Quadrant for Data Integration Tools”, we decided to test some of their integration components with the T4 chip, more precisely on a T4-1 system, in order to verify first hand if this new processor stands up to its promises. Several tests were performed, mainly focused on:

      - Single-thread performance of the new SPARC T4 processor compared to an older SPARC T2+ processor
      - Overall throughput of the SPARC T4-1 server using multiple threads

    The tests consisted of reading large amounts of data --tens of gigabytes--, processing and writing them back to a file or an Oracle 11gR2 database table. They are CPU, memory and IO bound tests. Given the main focus of this project --CPU performance--, bottlenecks were removed as much as possible on the memory and IO sub-systems. When possible, the data to process was put into the ZFS filesystem cache, for instance. Also, two external storage devices were directly attached to the servers under test, each one divided in two ZFS pools for read and write operations.

    Multi-thread: Testing throughput on the Oracle T4-1

    The tests were performed with different numbers of simultaneous threads (1, 2, 4, 8, 12, 16, 32, 48 and 64) and using different storage devices: Flash, Fibre Channel storage, two striped internal disks and one single internal disk. All storage devices used ZFS as filesystem and volume management. Each thread read a dedicated 1GB-large file containing 12.5M lines with the following structure:

        customerID;FirstName;LastName;StreetAddress;City;State;Zip;Cust_Status;Since_DT;Status_DT
        1;Ronald;Reagan;South Highway;Santa Fe;Montana;98756;A;04-06-2006;09-08-2008
        2;Theodore;Roosevelt;Timberlane Drive;Columbus;Louisiana;75677;A;10-05-2009;27-05-2008
        3;Andrew;Madison;S Rustle St;Santa Fe;Arkansas;75677;A;29-04-2005;09-02-2008
        4;Dwight;Adams;South Roosevelt Drive;Baton Rouge;Vermont;75677;A;15-02-2004;26-01-2007
        [...]

    The following graphs present the results of our tests. Unsurprisingly, up to 16 threads all files fit in the ZFS cache a.k.a. L2ARC: once the cache is hot there is no performance difference depending on the underlying storage. From 16 threads upwards however, it is clear that IO becomes a bottleneck; having a good IO subsystem is thus key. Single-disk performance collapses whereas the Sun F5100 and ST6180 arrays allow the T4-1 to scale quite seamlessly. From 32 to 64 threads, the performance is almost constant with just a slow decline. For the database load tests, only the best IO configurations --using external storage devices-- were used, hosting the Oracle table spaces and redo log files. Using the Sun Storage F5100 array allows the T4-1 server to scale up to 48 parallel JVM processes before saturating the CPU. The final result is a staggering 646K lines per second insertion in an Oracle table using 48 parallel threads.

    Single-thread: Testing the single thread performance

    Seven different tests were performed on both servers.
    Given the fact that only one thread, thus one file, was read, no IO bottleneck was involved, all data being served from the ZFS cache.

      1. Read File → Filter → Write File: Read file, filter data, write the filtered data in a new file. The filter is set on the “Status” column: only lines with status set to “A” are selected. This limits each output file to about 500 MB.
      2. Read File → Load Database Table: Read file, insert into a single Oracle table.
      3. Average: Read file, compute the average of a numeric column, write the result in a new file.
      4. Division & Square Root: Read file, perform a division and square root on a numeric column, write the result data in a new file.
      5. Oracle DB Dump: Dump the content of an Oracle table (12.5M rows) into a CSV file.
      6. Transform: Read file, transform, write the result data in a new file. The transformations applied are: set the address column to upper case and add an extra column at the end, which is the concatenation of two columns.
      7. Sort: Read file, sort a numeric and alpha numeric column, write the result data in a new file.

    The following table presents the final results of the tests. Throughput unit is thousand lines per second processed (K lines/second). Improvement is the % of improvement between the T5140 and T4-1.

        Test                     T4-1 (Time s.)  T5140 (Time s.)  Improvement  T4-1 (Throughput)  T5140 (Throughput)
        Read/Filter/Write        125             806              645%         100                16
        Read/Load Database       195             1111             570%         64                 11
        Average                  96              557              580%         130                22
        Division & Square Root   161             1054             655%         78                 12
        Oracle DB Dump           164             945              576%         76                 13
        Transform                159             1124             707%         79                 11
        Sort                     251             1336             532%         50                 9

    The improvement of single-thread performance is quite dramatic: depending on the tests, the T4 is between 5.4 to 7 times faster than the T2+. It seems clear that the SPARC T4 processor has gone a long way filling the gap in single-thread performance, without sacrificing the multi-threaded capability, as it still shows a very impressive scaling on heavy-duty multi-threaded jobs. Finally, as always at Oracle ISV Engineering, we are happy to help our ISV partners test their own applications on our platforms, so don't hesitate to contact us and let's see what the SPARC T4-based systems can do for your application! "As described in this benchmark, Talend Enterprise Data Integration has overperformed on T4. I was generally happy to see that the T4 gave scaling opportunities for many scenarios like complex aggregations. Row by row insertion in Oracle DB is faster with more than 650,000 rows per second without using any bulk Oracle capabilities!" Cedric Carbone, Talend CTO.

  • Django custom SQL returning single row of results when query returns 2?

    - by Alvin
    I have a custom SQL call that is returning different results to the template than I get when I run the same query against the database directly: 1 row vs 2. Query - copied from Django Debug Toolbar:

        SELECT ((Sum(new_recruit_interviews) / Sum(opportunities_offered)) * 100) as avg_recruit,
               ((Sum(inspections) / Sum(presentations)) * 100) as avg_inspect,
               ((Sum(contracts_signed) / Sum(roof_approvals)) * 100) as avg_contracts,
               ((Sum(adjusters) / Sum(contracts_signed)) * 100) as avg_adjusters,
               ((Sum(roof_approvals) / Sum(adjusters)) * 100) as roof_approval_avg,
               ((Sum(roof_turned_in) / Sum(adjusters)) * 100) as roof_jobs_avg,
               Sum(roof_turned_in) as roof_jobs_total,
               ((Sum(siding_approvals) / Sum(adjusters)) * 100) as siding_approval_avg,
               ((Sum(siding_turned_in) / Sum(adjusters)) * 100) as siding_jobs_avg,
               Sum(siding_turned_in) as siding_jobs_total,
               ((Sum(gutter_approvals) / Sum(adjusters)) * 100) as gutter_approval_avg,
               ((Sum(gutter_turned_in) / Sum(adjusters)) * 100) as gutter_jobs_avg,
               Sum(gutter_turned_in) as gutter_jobs_total,
               ((Sum(window_approvals) / Sum(adjusters)) * 100) as window_approval_avg,
               ((Sum(window_turned_in) / Sum(adjusters)) * 100) as window_jobs_avg,
               Sum(window_turned_in) as window_jobs_total,
               (Sum(roof_turned_in) + Sum(siding_turned_in) + Sum(gutter_turned_in) + Sum(window_turned_in)) as total_jobs,
               (((Sum(collections_jobs_new) + Sum(collections_jobs_previous)) /
                 (Sum(roof_turned_in) + Sum(siding_turned_in) + Sum(gutter_turned_in) + Sum(window_turned_in))) * 100) as total_collections,
               sales_report_salesmen.location_id as detail_id,
               business_unit_location.title as title
        FROM sales_report_salesmen
        Inner Join auth_user ON sales_report_salesmen.user_id = auth_user.id
        Inner Join business_unit_location ON sales_report_salesmen.location_id = business_unit_location.id
        GROUP BY location_id

    Results from directly running the above query:

        INSERT INTO `` (`avg_recruit`, `avg_inspect`, `avg_contracts`, `avg_adjusters`, `roof_approval_avg`, `roof_jobs_avg`, `roof_jobs_total`, `siding_approval_avg`, `siding_jobs_avg`, `siding_jobs_total`, `gutter_approval_avg`, `gutter_jobs_avg`, `gutter_jobs_total`, `window_approval_avg`, `window_jobs_avg`, `window_jobs_total`, `total_jobs`, `total_collections`, `detail_id`, `title`)
        VALUES (95.3968, 92.8178, 106.9622, 90.2928, 103.5420, 103.5670, 4152, 100.2494, 106.8845, 4285, 120.1297, 86.2559, 3458, 92.9658, 106.1611, 4256, 16151, 4.281469, 12, 'St Paul, MN');
        VALUES (90.2982, 73.3723, 97.8474, 104.5433, 97.7585, 86.1848, 1884, 109.9268, 109.3321, 2390, 81.0156, 96.4318, 2108, 91.7200, 123.8792, 2708, 9090, 4.531573, 13, 'Denver, CO');

    Results from template:

        {'roof_jobs_total': Decimal('4152'), 'gutter_jobs_total': Decimal('3458'), 'avg_adjusters': Decimal('90.2928'), 'title': u'St Paul, MN', 'window_approval_avg': Decimal('92.9658'), 'total_collections': Decimal('4.281469'), 'gutter_approval_avg': Decimal('120.1297'), 'avg_recruit': Decimal('95.3968'), 'siding_approval_avg': Decimal('100.2494'), 'window_jobs_total': Decimal('4256'), 'detail_id': 12L, 'siding_jobs_avg': Decimal('106.8845'), 'avg_inspect': Decimal('92.8178'), 'roof_approval_avg': Decimal('103.5420'), 'roof_jobs_avg': Decimal('103.5670'), 'total_jobs': Decimal('16151'), 'window_jobs_avg': Decimal('106.1611'), 'avg_contracts': Decimal('106.9622'), 'gutter_jobs_avg': Decimal('86.2559'), 'siding_jobs_total': Decimal('4285')}

    I've tried tweaking it a few ways and running the results through various for loops, but I keep getting the same result: my results are a single row through the Django template while the expected results (through the console) have 2 rows. The row that is coming back is the same as the first row returned through the console query, so I'm thinking it is running correctly, just a matter of passing the results through... For good measure, this is the code I'm using to generate the query (yes, it's a bit ugly, I've been playing with it):

        def sql_grouped(table, fields, group_by=False, where=False):
            from django.db import connection
            query = 'SELECT %s FROM %s' % (fields, table)
            if where:
                query = query + ' WHERE %s' % (where)
            if group_by:
                query = query + ' GROUP BY %s' % (group_by)
            cursor = connection.cursor()
            cursor.execute(query)
            desc = cursor.description
            data = [dict(zip([col[0] for col in desc], row)) for row in cursor.fetchall()]
            return data[0]   # note: only ever returns the first row

    Any feedback is greatly appreciated - I've been tinkering with this since I realized I could skip a few steps by generating my averages directly within the SQL rather than post-processing.
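
    For what it's worth, a hedged reading of the helper above: fetchall() does bring back both rows, but the final data[0] keeps only the first one, which would explain the 1-vs-2 discrepancy exactly. A sketch of the minimal change:

        data = [dict(zip([col[0] for col in desc], row)) for row in cursor.fetchall()]
        return data          # all rows; index in the caller only when one row is expected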

  • Multi-threaded random_r is slower than the single-threaded version.

    - by Nixuz
    The following program is essentially the same as the one described here. When I run and compile the program using two threads (NTHREADS == 2), I get the following run times:

        real 0m14.120s
        user 0m25.570s
        sys  0m0.050s

    When it is run with just one thread (NTHREADS == 1), I get run times significantly better even though it is only using one core:

        real 0m4.705s
        user 0m4.660s
        sys  0m0.010s

    My system is dual core, and I know random_r is thread safe and I am pretty sure it is non-blocking. When the same program is run without random_r and a calculation of cosines and sines is used as a replacement, the dual-threaded version runs in about 1/2 the time as expected.

        #include <pthread.h>
        #include <stdlib.h>
        #include <stdio.h>

        #define NTHREADS 2
        #define PRNG_BUFSZ 8
        #define ITERATIONS 1000000000

        void* thread_run(void* arg)
        {
            int r1, i, totalIterations = ITERATIONS / NTHREADS;
            for (i = 0; i < totalIterations; i++) {
                random_r((struct random_data*)arg, &r1);
            }
            printf("%i\n", r1);
        }

        int main(int argc, char** argv)
        {
            struct random_data* rand_states = (struct random_data*)calloc(NTHREADS, sizeof(struct random_data));
            char* rand_statebufs = (char*)calloc(NTHREADS, PRNG_BUFSZ);
            pthread_t* thread_ids;
            int t = 0;
            thread_ids = (pthread_t*)calloc(NTHREADS, sizeof(pthread_t));
            /* create threads */
            for (t = 0; t < NTHREADS; t++) {
                initstate_r(random(), &rand_statebufs[t], PRNG_BUFSZ, &rand_states[t]);
                pthread_create(&thread_ids[t], NULL, &thread_run, &rand_states[t]);
            }
            for (t = 0; t < NTHREADS; t++) {
                pthread_join(thread_ids[t], NULL);
            }
            free(thread_ids);
            free(rand_states);
            free(rand_statebufs);
        }

    I am confused why, when generating random numbers, the two-threaded version performs much worse than the single-threaded version, considering random_r is meant to be used in multi-threaded applications.
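
    A hedged guess at the cause, for context: the random_data structs (and the 8-byte state buffers) are calloc'd back to back, so the two threads' PRNG states share a cache line, and every random_r write bounces that line between cores - classic false sharing. A sketch of the usual fix, giving each thread's state its own cache line (64-byte lines and GCC assumed):

        /* pad each per-thread state out to its own cache line */
        struct padded_state {
            struct random_data state;
            char buf[PRNG_BUFSZ];
        } __attribute__((aligned(64)));   /* sizeof rounds up to a 64-byte multiple */

        /* struct padded_state* s = calloc(NTHREADS, sizeof(struct padded_state)); */
        /* initstate_r(random(), s[t].buf, PRNG_BUFSZ, &s[t].state);              */
        /* pthread_create(&thread_ids[t], NULL, &thread_run, &s[t].state);        */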

  • Django: testing get query

    - by Brant
    Okay, so I am sick of writing this...

        res = Something.objects.filter(asdf=something)
        if res:
            single = res[0]
        else:
            single = None
        if single:
            # do some stuff

    I would much rather be able to do something like this:

        single = Something.objects.filter(asdf=something)
        if single:
            # do some stuff

    I want to be able to grab a single object without testing the filtered results. In other words, when I know there is either going to be 1 or 0 matching entries, I would like to jump right to that entry, otherwise just get a 'None'. The DoesNotExist error that goes along with .get does not always work so well when trying to compress these queries into a single line. Is there any way to do what I have described?
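
    Two hedged options matching the 1-or-0 expectation:

        # newer Django (1.6+): returns the object or None in one call
        single = Something.objects.filter(asdf=something).first()

        # older Django: slice to one row, then unpack
        res = Something.objects.filter(asdf=something)[:1]
        single = res[0] if res else None

    The slice keeps the generated SQL to a LIMIT 1, so neither form fetches more than one row.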

  • Isn't it better to use a single try catch instead of tons of TryParsing and other error handling sometimes?

    - by Ryan Peschel
    I know people say it's bad to use exceptions for flow control and to only use exceptions for exceptional situations, but sometimes isn't it just cleaner and more elegant to wrap the entire block in a try-catch? For example, let's say I have a dialog window with a TextBox where the user can type input in to be parsed in a key-value sort of manner. This situation is not as contrived as you might think because I've inherited code that has to handle this exact situation (albeit not with farm animals). Consider this wall of code:

        class Animals
        {
            public int catA, catB;
            public float dogA, dogB;
            public int mouseA, mouseB, mouseC;
            public double cow;
        }

        class Program
        {
            static void Main(string[] args)
            {
                string input = "Sets all the farm animals CAT 3 5 DOG 21.3 5.23 MOUSE 1 0 1 COW 12.25";
                string[] splitInput = input.Split(' ');
                string[] animals = { "CAT", "DOG", "MOUSE", "COW", "CHICKEN", "GOOSE", "HEN", "BUNNY" };
                Animals animal = new Animals();
                for (int i = 0; i < splitInput.Length; i++)
                {
                    string token = splitInput[i];
                    if (animals.Contains(token))
                    {
                        switch (token)
                        {
                            case "CAT":
                                animal.catA = int.Parse(splitInput[i + 1]);
                                animal.catB = int.Parse(splitInput[i + 2]);
                                break;
                            case "DOG":
                                animal.dogA = float.Parse(splitInput[i + 1]);
                                animal.dogB = float.Parse(splitInput[i + 2]);
                                break;
                            case "MOUSE":
                                animal.mouseA = int.Parse(splitInput[i + 1]);
                                animal.mouseB = int.Parse(splitInput[i + 2]);
                                animal.mouseC = int.Parse(splitInput[i + 3]);
                                break;
                            case "COW":
                                animal.cow = double.Parse(splitInput[i + 1]);
                                break;
                        }
                    }
                }
            }
        }

    In actuality there are a lot more farm animals and more handling than that. A lot of things can go wrong though. The user could enter in the wrong number of parameters. The user can enter the input in an incorrect format. The user could specify numbers too large or too small for the data type to handle. All these different errors could be handled without exceptions through the use of TryParse - checking how many parameters the user tried to use for a specific animal, checking if the parameter is too large or too small for the data type (because TryParse just returns 0) - but every one should result in the same thing: a MessageBox appearing telling the user that the inputted data is invalid and to fix it. My boss doesn't want different message boxes for different errors. So instead of doing all that, why not just wrap the block in a try-catch and in the catch statement just display that error message box and let the user try again? Maybe this isn't the best example but think of any other scenario where there would otherwise be tons of error handling that could be substituted for a single try-catch. Is that not the better solution?
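
    For reference, a hedged sketch of the single-try-catch shape being argued for (the ParseFarmAnimals and ShowInvalidInput names are invented; MessageBox assumes the WinForms dialog from the question). Catching the three specific parse-failure types, rather than bare Exception, keeps genuine bugs from being swallowed into the same message box:

        try
        {
            ParseFarmAnimals(input, animal);   // the parsing wall above, factored out
        }
        catch (FormatException)          { ShowInvalidInput(); }  // malformed number
        catch (OverflowException)        { ShowInvalidInput(); }  // number too large/small
        catch (IndexOutOfRangeException) { ShowInvalidInput(); }  // too few parameters

        void ShowInvalidInput()
        {
            // one message for every malformed-input case, per the boss's requirement
            MessageBox.Show("The inputted data is invalid. Please fix it and try again.");
        }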

  • S#arp architecture mapping many to many and ado.net data services: "A single resource was expected for the result, but multiple resources were found"

    - by Leg10n
    Hi, I'm developing an application that reads data from a SQL Server database (migrated from a legacy DB) with nHibernate and S#arp architecture through ADO.NET Data Services. I'm trying to map a many-to-many relationship. I have an Error class:

        public class Error
        {
            public virtual int ERROR_ID { get; set; }
            public virtual string ERROR_CODE { get; set; }
            public virtual string DESCRIPTION { get; set; }
            public virtual IList<ErrorGroup> GROUPS { get; protected set; }
        }

    And then I have the error group class:

        public class ErrorGroup
        {
            public virtual int ERROR_GROUP_ID { get; set; }
            public virtual string ERROR_GROUP_NAME { get; set; }
            public virtual string DESCRIPTION { get; set; }
            public virtual IList<Error> ERRORS { get; protected set; }
        }

    And the overrides:

        public class ErrorGroupOverride : IAutoMappingOverride<ErrorGroup>
        {
            public void Override(AutoMapping<ErrorGroup> mapping)
            {
                mapping.Table("ERROR_GROUP");
                mapping.Id(x => x.ERROR_GROUP_ID, "ERROR_GROUP_ID");
                mapping.IgnoreProperty(x => x.Id);
                mapping.HasManyToMany<Error>(x => x.Error)
                       .Table("ERROR_GROUP_LINK")
                       .ParentKeyColumn("ERROR_GROUP_ID")
                       .ChildKeyColumn("ERROR_ID").Inverse().AsBag();
            }
        }

        public class ErrorOverride : IAutoMappingOverride<Error>
        {
            public void Override(AutoMapping<Error> mapping)
            {
                mapping.Table("ERROR");
                mapping.Id(x => x.ERROR_ID, "ERROR_ID");
                mapping.IgnoreProperty(x => x.Id);
                mapping.HasManyToMany<ErrorGroup>(x => x.GROUPS)
                       .Table("ERROR_GROUP_LINK")
                       .ParentKeyColumn("ERROR_ID")
                       .ChildKeyColumn("ERROR_GROUP_ID").AsBag();
            }
        }

    When I view the Data service in the browser like http://localhost:1905/DataService.svc/Errors it shows the list of errors with no problems, and using it like http://localhost:1905/DataService.svc/Errors(123) works too. The problem: when I want to see the Errors in a group, or the groups for an error, like "http://localhost:1905/DataService.svc/Errors(123)?$expand=GROUPS", I get the XML document, but the browser says:

        The XML page cannot be displayed
        Cannot view XML input using XSL style sheet. Please correct the error and then click
        the Refresh button, or try again later.
        --------------------------------------------------------------------------------
        Only one top level element is allowed in an XML document. Error processing resource
        'http://localhost:1905/DataServic...
        <error xmlns="http://schemas.microsoft.com/ado/2007/08/dataservices/metadata">
        -^

    I view the source code, and I get the data. However it comes with an exception:

        <error xmlns="http://schemas.microsoft.com/ado/2007/08/dataservices/metadata">
          <code></code>
          <message xml:lang="en-US">An error occurred while processing this request.</message>
          <innererror xmlns="xmlns">
            <message>A single resource was expected for the result, but multiple resources were found.</message>
            <type>System.InvalidOperationException</type>
            <stacktrace>   at System.Data.Services.Serializers.Serializer.WriteRequest(IEnumerator queryResults, Boolean hasMoved)
           at System.Data.Services.ResponseBodyWriter.Write(Stream stream)</stacktrace>
          </innererror>
        </error>

    Am I missing something??? Where does this error come from?

  • WCF - Define multiple services in a single APP.Config file?

    - by Goober
    Scenario: I have a windows forms application. I want to use two different WCF Services that are in no way connected. HOWEVER, I'm not sure how to go about defining the services in my APP.CONFIG file. From what I have read, it is possible to do what I have done below, but I cannot be sure that the syntax is correct or the tags are all present where necessary, and I needed some clarification. Question: is the below the correct way to set up two services in A SINGLE APP.CONFIG FILE? I.E:

        <configuration>
          <system.serviceModel>
            <services>
              <service>
                <!-- SERVICE ONE -->
                <endpoint></endpoint>
                <binding></binding>
              </service>
              <service>
                <!-- SERVICE TWO -->
                <endpoint></endpoint>
                <binding></binding>
              </service>
            </services>
          </system.serviceModel>
        </configuration>

    CODE:

        <configuration>
          <system.serviceModel>
            <services>
              <!-- SERVICE ONE -->
              <service>
                <endpoint address="" binding="netTcpBinding" bindingConfiguration="tcpServiceEndPoint"
                          contract="ListenerService.IListenerService" name="tcpServiceEndPoint" />
                <binding name="tcpServiceEndPoint" closeTimeout="00:01:00" openTimeout="00:01:00"
                         receiveTimeout="00:10:00" sendTimeout="00:01:00" transactionFlow="false"
                         transferMode="Buffered" transactionProtocol="OleTransactions"
                         hostNameComparisonMode="StrongWildcard" listenBacklog="10"
                         maxBufferPoolSize="524288" maxBufferSize="65536" maxConnections="10"
                         maxReceivedMessageSize="65536">
                  <readerQuotas maxDepth="32" maxStringContentLength="8192" maxArrayLength="16384"
                                maxBytesPerRead="4096" maxNameTableCharCount="16384" />
                  <reliableSession ordered="true" inactivityTimeout="00:05:00" enabled="true" />
                  <security mode="None">
                    <transport clientCredentialType="Windows" protectionLevel="EncryptAndSign" />
                    <message clientCredentialType="Windows" />
                  </security>
                </binding>
              </service>
              <!-- SERVICE TWO -->
              <service>
                <endpoint address="" binding="netTcpBinding" contract="UploadObjects.IResponseService"
                          bindingConfiguration="TransactedBinding" name="UploadObjects.ResponseService" />
                <binding name="TransactedBinding">
                  <security mode="None" />
                </binding>
              </service>
            </services>
          </system.serviceModel>
        </configuration>

    EDIT: What do the BEHAVIOURS represent? How do they relate to the service definitions?
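
    For reference, a hedged sketch of how the standard WCF configuration schema arranges this: each <service> needs a name, endpoints live inside <service>, but <binding> elements belong in a sibling <bindings> section and are tied to endpoints via bindingConfiguration. Behaviors (the EDIT's question) are named bundles of service-wide options, linked the same way via behaviorConfiguration. The service and behavior names below are assumptions:

        <system.serviceModel>
          <services>
            <service name="ListenerService.ListenerService" behaviorConfiguration="SharedBehavior">
              <endpoint address="" binding="netTcpBinding"
                        bindingConfiguration="tcpServiceEndPoint"
                        contract="ListenerService.IListenerService" />
            </service>
            <service name="UploadObjects.ResponseService" behaviorConfiguration="SharedBehavior">
              <endpoint address="" binding="netTcpBinding"
                        bindingConfiguration="TransactedBinding"
                        contract="UploadObjects.IResponseService" />
            </service>
          </services>
          <bindings>
            <netTcpBinding>
              <binding name="tcpServiceEndPoint" />   <!-- timeouts, quotas, security, ... -->
              <binding name="TransactedBinding">
                <security mode="None" />
              </binding>
            </netTcpBinding>
          </bindings>
          <behaviors>
            <serviceBehaviors>
              <behavior name="SharedBehavior">
                <serviceMetadata httpGetEnabled="false" />
              </behavior>
            </serviceBehaviors>
          </behaviors>
        </system.serviceModel>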

  • LLBLGen Pro feature highlights: grouping model elements

    - by FransBouma
    (This post is part of a series of posts about features of the LLBLGen Pro system) When working with an entity model which has more than a few entities, it's often convenient to be able to group entities together if they belong to a semantic sub-model. For example, if your entity model has several entities which are about 'security', it would be practical to group them together under the 'security' moniker. This way, you could easily find them back, yet they can be left inside the complete entity model altogether so their relationships with entities outside the group are kept. In other situations your domain consists of semi-separate entity models which all target tables/views which are located in the same database. It then might be convenient to have a single project to manage the complete target database, yet have the entity models separate of each other and have them result in separate code bases. LLBLGen Pro can do both for you. This blog post will illustrate both situations. The feature is called group usage and is controllable through the project settings. This setting is supported on all supported O/R mapper frameworks.

    Situation one: grouping entities in a single model.

    This situation is common for entity models which are dense, so many relationships exist between all sub-models: you can't split them up easily into separate models (nor do you likely want to), however it's convenient to have them grouped together into groups inside the entity model at the project level. A typical example for this is the AdventureWorks example database for SQL Server. This database, which is a single catalog, has for each sub-group a schema, however most of these schemas are tightly connected with each other: adding all schemas together will give a model with entities which indirectly are related to all other entities. LLBLGen Pro's default setting for group usage is AsVisualGroupingMechanism, which is what this situation is all about: we group the elements for visual purposes; it has no real meaning for the model nor the code generated. Let's reverse engineer AdventureWorks to an entity model. By default, LLBLGen Pro uses the target schema an element being reverse engineered is in as the group it will be in. This is convenient if you already have categorized tables/views in schemas, as is the case in AdventureWorks. Of course this can be switched off, or corrected on the fly. When reverse engineering, we'll walk through a wizard which will guide us with the selection of the elements whose relational model data should be retrieved, which we can later on use to reverse engineer to an entity model. The first step, after specifying which database server to connect to, is to select these elements. Below we can see the AdventureWorks catalog as well as the different schemas it contains. We'll include all of them. After the wizard completes, we have all relational model data nicely in our catalog data, with schemas. So let's reverse engineer entities from the tables in these schemas. We select in the catalog explorer the schemas 'HumanResources', 'Person', 'Production', 'Purchasing' and 'Sales', then right-click one of them and from the context menu, we select Reverse engineer Tables to Entity Definitions.... This will bring up the dialog below. We check all checkboxes in one go by checking the checkbox at the top to mark them all to be added to the project.
    As you can see, LLBLGen Pro has already filled in the group name based on the schema name, as this is the default and we didn't change the setting. If you want, you can select multiple rows at once and set the group name to something else using the controls on the dialog. We're fine with the group names chosen so we'll simply click Add to Project. This gives the following result: (I collapsed the other groups to keep the picture small ;)). As you can see, the entities are now grouped. Just to see how dense this model is, I've expanded the relationships of Employee: as you can see, it has relationships with entities from three other groups than HumanResources. It's not doable to cut up this project into sub-models without duplicating the Employee entity in all those groups, so this model is better suited to be used as a single model resulting in a single code base; however it benefits greatly from having its entities grouped into separate groups at the project level, to make work done on the model easier. Now let's look at another situation, namely where we work with a single database while we want to have multiple models and for each model a separate code base.

    Situation two: grouping entities in separate models within the same project.

    To get rid of the entities to see the second situation in action, simply undo the reverse engineering action in the project. We still have the AdventureWorks relational model data in the catalog. To switch LLBLGen Pro to see each group in the project as a separate project, open the Project Settings, navigate to General and set Group usage to AsSeparateProjects. In the catalog explorer, select Person and Production, right-click them and select again Reverse engineer Tables to Entities.... Again check the checkbox at the top to mark all entities to be added and click Add to Project. We get two groups, as expected, however this time the groups are seen as separate projects. This means that the validation logic inside LLBLGen Pro will see it as an error if there's e.g. a relationship or an inheritance edge linking two groups together, as that would lead to a cyclic reference in the code bases. To see this variant of the grouping feature, seeing the groups as separate projects, in action, we'll generate code from the project with the two groups we just created: select from the main menu: Project -> Generate Source-code... (or press F7 ;)). In the dialog popping up, select the target .NET framework you want to use, the template preset, fill in a destination folder and click Start Generator (normal). This will start the code generator process. As expected the code generator has simply generated two code bases, one for Person and one for Production. The group name is used inside the namespace for the different elements. This allows you to add both code bases to a single solution and use them together in a different project without problems. Below is a snippet from the code file of a generated entity class:

        //...
        using System.Xml.Serialization;
        using AdventureWorks.Person;
        using AdventureWorks.Person.HelperClasses;
        using AdventureWorks.Person.FactoryClasses;
        using AdventureWorks.Person.RelationClasses;
        using SD.LLBLGen.Pro.ORMSupportClasses;

        namespace AdventureWorks.Person.EntityClasses
        {
            //...
            /// <summary>Entity class which represents the entity 'Address'.<br/><br/></summary>
            [Serializable]
            public partial class AddressEntity : CommonEntityBase
            //...
The advantage of this is that you can have two code bases and work with them separately, yet have a single target database and maintain everything in a single location. If you decide to move to a single code base, you can do so with a change of one setting. It's also useful if you want to keep the groups as separate models (and code bases) yet want to add relationships to elements from another group using a copy of the entity: you can simply reverse engineer the target table to a new entity into a different group, effectively making a copy of the entity. As there's a single target database, changes made to that database are reflected in both models which makes maintenance easier than when you'd have a separate project for each group, with its own relational model data. Conclusion LLBLGen Pro offers a flexible way to work with entities in sub-models and control how the sub-models end up in the generated code.

  • How to include multiple XML files in a single XML file for deserialization by XmlSerializer in .NET

    - by harrydev
    Hi, is it possible to use the XmlSerializer in .NET to load an XML file which includes other XML files? And how? This, in order to share XML state easily in two "parent" XML files, e.g. AB and BC in the example below:

        using System;
        using System.IO;
        using System.Xml.Serialization;

        namespace XmlSerializerMultipleFilesTest
        {
            [Serializable]
            public class A { public int Value { get; set; } }

            [Serializable]
            public class B { public double Value { get; set; } }

            [Serializable]
            public class C { public string Value { get; set; } }

            [Serializable]
            public class AB
            {
                public A A { get; set; }
                public B B { get; set; }
            }

            [Serializable]
            public class BC
            {
                public B B { get; set; }
                public C C { get; set; }
            }

            class Program
            {
                public static void Serialize<T>(T data, string filePath)
                {
                    using (var writer = new StreamWriter(filePath))
                    {
                        var xmlSerializer = new XmlSerializer(typeof(T));
                        xmlSerializer.Serialize(writer, data);
                    }
                }

                public static T Deserialize<T>(string filePath)
                {
                    using (var reader = new StreamReader(filePath))
                    {
                        var xmlSerializer = new XmlSerializer(typeof(T));
                        return (T)xmlSerializer.Deserialize(reader);
                    }
                }

                static void Main(string[] args)
                {
                    const string fileNameA = @"A.xml";
                    const string fileNameB = @"B.xml";
                    const string fileNameC = @"C.xml";
                    const string fileNameAB = @"AB.xml";
                    const string fileNameBC = @"BC.xml";

                    var a = new A() { Value = 42 };
                    var b = new B() { Value = Math.PI };
                    var c = new C() { Value = "Something rotten" };

                    Serialize(a, fileNameA);
                    Serialize(b, fileNameB);
                    Serialize(c, fileNameC);

                    // How can AB and BC be deserialized from single
                    // files which include two of the A, B or C files?
                    // Using ideally something like:
                    var ab = Deserialize<AB>(fileNameAB);
                    var bc = Deserialize<BC>(fileNameBC);
                    // That is, so that A, B, C xml file
                    // contents are shared across these two
                }
            }
        }

    Thus, the A, B, C files contain the following. A.xml:

        <?xml version="1.0" encoding="utf-8"?>
        <A xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xmlns:xsd="http://www.w3.org/2001/XMLSchema">
          <Value>42</Value>
        </A>

    B.xml:

        <?xml version="1.0" encoding="utf-8"?>
        <B xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xmlns:xsd="http://www.w3.org/2001/XMLSchema">
          <Value>3.1415926535897931</Value>
        </B>

    C.xml:

        <?xml version="1.0" encoding="utf-8"?>
        <C xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xmlns:xsd="http://www.w3.org/2001/XMLSchema">
          <Value>Something rotten</Value>
        </C>

    And then the "parent" XML files would contain an XML include of some sort (I have not been able to find anything like this). AB.xml:

        <?xml version="1.0" encoding="utf-8"?>
        <AB xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xmlns:xsd="http://www.w3.org/2001/XMLSchema">
          <A include="A.xml"/>
          <B include="B.xml"/>
        </AB>

    BC.xml:

        <?xml version="1.0" encoding="utf-8"?>
        <BC xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xmlns:xsd="http://www.w3.org/2001/XMLSchema">
          <B include="B.xml"/>
          <C include="C.xml"/>
        </BC>

    Of course, I guess this can be solved by implementing IXmlSerializable for AB and BC, but I was hoping there was an easier solution, or a generic solution with which the classes themselves only need the [Serializable] attribute and nothing else. That is, the split into multiple files is XML-only and handled by XmlSerializer itself or a custom generic serializer on top of it. I know this should be somewhat possible with app.config (as in http://stackoverflow.com/questions/480538/use-xml-includes-or-config-references-in-app-config-to-include-other-config-files), but I would prefer a solution based on XmlSerializer. Thanks.
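
    One hedged avenue, since XmlSerializer.Deserialize can take any XmlReader: the W3C XInclude mechanism isn't built into the .NET Framework, but the open-source Mvp.Xml library ships an XIncludingReader that expands xi:include elements on the fly, which would keep the [Serializable] classes untouched (exact API shape from memory, worth verifying against the library):

        using Mvp.Xml.XInclude;

        public static T DeserializeWithIncludes<T>(string filePath)
        {
            using (var reader = new XIncludingReader(filePath))
            {
                var xmlSerializer = new XmlSerializer(typeof(T));
                return (T)xmlSerializer.Deserialize(reader);
            }
        }

    The parent files would then reference the shared parts with <xi:include href="B.xml" xmlns:xi="http://www.w3.org/2001/XInclude"/> instead of the invented include attribute.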

  • So…is it a Seek or a Scan?

    - by Paul White
    You’re probably most familiar with the terms ‘Seek’ and ‘Scan’ from the graphical plans produced by SQL Server Management Studio (SSMS).  The image to the left shows the most common ones, with the three types of scan at the top, followed by four types of seek.  You might look to the SSMS tool-tip descriptions to explain the differences between them: Not hugely helpful are they?  Both mention scans and ranges (nothing about seeks) and the Index Seek description implies that it will not scan the index entirely (which isn’t necessarily true). Recall also yesterday’s post where we saw two Clustered Index Seek operations doing very different things.  The first Seek performed 63 single-row seeking operations; and the second performed a ‘Range Scan’ (more on those later in this post).  I hope you agree that those were two very different operations, and perhaps you are wondering why there aren’t different graphical plan icons for Range Scans and Seeks?  I have often wondered about that, and the first person to mention it after yesterday’s post was Erin Stellato (twitter | blog). Before we go on to make sense of all this, let’s look at another example of how SQL Server confusingly mixes the terms ‘Scan’ and ‘Seek’ in different contexts.  The diagram below shows a very simple heap table with two columns, one of which is the non-clustered Primary Key, and the other has a non-unique non-clustered index defined on it.  The right hand side of the diagram shows a simple query, its associated query plan, and a couple of extracts from the SSMS tool-tip and Properties windows. Notice the ‘scan direction’ entry in the Properties window snippet.  Is this a seek or a scan?  The different references to Scans and Seeks are even more pronounced in the XML plan output that the graphical plan is based on.  This fragment is what lies behind the single Index Seek icon shown above.  You’ll find the same confusing references to Seeks and Scans throughout the product and its documentation.

    Making Sense of Seeks

    Let’s forget all about scans for a moment, and think purely about seeks.  Loosely speaking, a seek is the process of navigating an index B-tree to find a particular index record, most often at the leaf level.  A seek starts at the root and navigates down through the levels of the index to find the point of interest.

    Singleton Lookups

    The simplest sort of seek predicate performs this traversal to find (at most) a single record.  This is the case when we search for a single value using a unique index and an equality predicate.  It should be readily apparent that this type of search will either find one record, or none at all.  This operation is known as a singleton lookup.  Given the example table from before, the following query is an example of a singleton lookup seek: Sadly, there’s nothing in the graphical plan or XML output to show that this is a singleton lookup – you have to infer it from the fact that this is a single-value equality seek on a unique index.  The other common examples of a singleton lookup are bookmark lookups – both the RID and Key Lookup forms are singleton lookups (an RID lookup finds a single record in a heap from the unique row locator, and a Key Lookup does much the same thing on a clustered table).  If you happen to run your query with STATISTICS IO ON, you will notice that ‘Scan Count’ is always zero for a singleton lookup.

    Range Scans

    The other type of seek predicate is a ‘seek plus range scan’, which I will refer to simply as a range scan.
    The seek operation makes an initial descent into the index structure to find the first leaf row that qualifies, and then performs a range scan (either backwards or forwards in the index) until it reaches the end of the scan range. The ability of a range scan to proceed in either direction comes about because index pages at the same level are connected by a doubly-linked list – each page has a pointer to the previous page (in logical key order) as well as a pointer to the following page.  The doubly-linked list is represented by the green and red dotted arrows in the index diagram presented earlier.  One subtle (but important) point is that the notion of a ‘forward’ or ‘backward’ scan applies to the logical key order defined when the index was built.  In the present case, the non-clustered primary key index was created as follows:

        CREATE TABLE dbo.Example
        (
            key_col INTEGER NOT NULL,
            data    INTEGER NOT NULL,
            CONSTRAINT [PK dbo.Example key_col]
                PRIMARY KEY NONCLUSTERED (key_col ASC)
        );

    Notice that the primary key index specifies an ascending sort order for the single key column.  This means that a forward scan of the index will retrieve keys in ascending order, while a backward scan would retrieve keys in descending key order.  If the index had been created instead on key_col DESC, a forward scan would retrieve keys in descending order, and a backward scan would return keys in ascending order. A range scan seek predicate may have a Start condition, an End condition, or both.  Where one is missing, the scan starts (or ends) at one extreme end of the index, depending on the scan direction.  Some examples might help clarify that: the following diagram shows four queries, each of which performs a single seek against a column holding every integer from 1 to 100 inclusive.  The results from each query are shown in the blue columns, and relevant attributes from the Properties window appear on the right: Query 1 specifies that all key_col values less than 5 should be returned in ascending order.  The query plan achieves this by seeking to the start of the index leaf (there is no explicit starting value) and scanning forward until the End condition (key_col < 5) is no longer satisfied (SQL Server knows it can stop looking as soon as it finds a key_col value that isn’t less than 5 because all later index entries are guaranteed to sort higher). Query 2 asks for key_col values greater than 95, in descending order.  SQL Server returns these results by seeking to the end of the index, and scanning backwards (in descending key order) until it comes across a row that isn’t greater than 95.  Sharp-eyed readers may notice that the end-of-scan condition is shown as a Start range value.  This is a bug in the XML show plan which bubbles up to the Properties window – when a backward scan is performed, the roles of the Start and End values are reversed, but the plan does not reflect that.  Oh well. Query 3 looks for key_col values that are greater than or equal to 10, and less than 15, in ascending order.  This time, SQL Server seeks to the first index record that matches the Start condition (key_col >= 10) and then scans forward through the leaf pages until the End condition (key_col < 15) is no longer met. Query 4 performs much the same sort of operation as Query 3, but requests the output in descending order.
    Again, we have to mentally reverse the Start and End conditions because of the bug, but otherwise the process is the same as always: SQL Server finds the highest-sorting record that meets the condition ‘key_col < 25’ and scans backward until ‘key_col >= 20’ is no longer true. One final point to note: seek operations always have the Ordered: True attribute.  This means that the operator always produces rows in a sorted order, either ascending or descending depending on how the index was defined, and whether the scan part of the operation is forward or backward.  You cannot rely on this sort order in your queries of course (you must always specify an ORDER BY clause if order is important) but SQL Server can make use of the sort order internally.  In the four queries above, the query optimizer was able to avoid an explicit Sort operator to honour the ORDER BY clause, for example.

    Multiple Seek Predicates

    As we saw yesterday, a single index seek plan operator can contain one or more seek predicates.  These seek predicates can either be all singleton seeks or all range scans – SQL Server does not mix them.  For example, you might expect the following query to contain two seek predicates, a singleton seek to find the single record in the unique index where key_col = 10, and a range scan to find the key_col values between 15 and 20:

        SELECT key_col
        FROM dbo.Example
        WHERE key_col = 10
        OR key_col BETWEEN 15 AND 20
        ORDER BY key_col ASC;

    In fact, SQL Server transforms the singleton seek (key_col = 10) to the equivalent range scan, Start:[key_col >= 10], End:[key_col <= 10].  This allows both range scans to be evaluated by a single seek operator.  To be clear, this query results in two range scans: one from 10 to 10, and one from 15 to 20.

    Final Thoughts

    That’s it for today – tomorrow we’ll look at monitoring singleton lookups and range scans, and I’ll show you a seek on a heap table. Yes, a seek.  On a heap.  Not an index! If you would like to run the queries in this post for yourself, there’s a script below.  Thanks for reading!
        IF OBJECT_ID(N'dbo.Example', N'U') IS NOT NULL
        BEGIN
            DROP TABLE dbo.Example;
        END;

        -- Test table is a heap
        -- Non-clustered primary key on 'key_col'
        CREATE TABLE dbo.Example
        (
            key_col INTEGER NOT NULL,
            data    INTEGER NOT NULL,
            CONSTRAINT [PK dbo.Example key_col]
                PRIMARY KEY NONCLUSTERED (key_col)
        );

        -- Non-unique non-clustered index on the 'data' column
        CREATE NONCLUSTERED INDEX [IX dbo.Example data]
        ON dbo.Example (data);

        -- Add 100 rows
        INSERT dbo.Example WITH (TABLOCKX)
        (
            key_col,
            data
        )
        SELECT
            key_col = V.number,
            data = V.number
        FROM master.dbo.spt_values AS V
        WHERE V.[type] = N'P'
        AND V.number BETWEEN 1 AND 100;

        -- ================
        -- Singleton lookup
        -- ================

        -- Single value equality seek in a unique index
        -- Scan count = 0 when STATISTICS IO is ON
        -- Check the XML SHOWPLAN
        SELECT E.key_col
        FROM dbo.Example AS E
        WHERE E.key_col = 32;

        -- ===========
        -- Range Scans
        -- ===========

        -- Query 1
        SELECT E.key_col
        FROM dbo.Example AS E
        WHERE E.key_col <= 5
        ORDER BY E.key_col ASC;

        -- Query 2
        SELECT E.key_col
        FROM dbo.Example AS E
        WHERE E.key_col > 95
        ORDER BY E.key_col DESC;

        -- Query 3
        SELECT E.key_col
        FROM dbo.Example AS E
        WHERE E.key_col >= 10
        AND E.key_col < 15
        ORDER BY E.key_col ASC;

        -- Query 4
        SELECT E.key_col
        FROM dbo.Example AS E
        WHERE E.key_col >= 20
        AND E.key_col < 25
        ORDER BY E.key_col DESC;

        -- Final query (singleton + range = 2 range scans)
        SELECT E.key_col
        FROM dbo.Example AS E
        WHERE E.key_col = 10
        OR E.key_col BETWEEN 15 AND 20
        ORDER BY E.key_col ASC;

        -- === TIDY UP ===
        DROP TABLE dbo.Example;

    © 2011 Paul White
    email: [email protected]
    twitter: @SQL_Kiwi
