Search Results



  • merging arrays of hashes

    - by Ben
    I have two arrays of hashes. Array1 => [{attribute_1 => A, attribute_2 => B}, {attribute_1 => A, attribute_2 => B}] Array2 => [{attribute_3 => C, attribute_4 => D}, {attribute_3 => C, attribute_4 => D}] Each hash in the array holds attributes for an object. In the above example there are two objects I'm working with, and each array contributes two attributes per object. How do I merge the two arrays? I am trying to get a single array (there is no way to get a single array from the start because I have to make two different API calls to get these attributes). DesiredArray => [{attribute_1 => A, attribute_2 => B, attribute_3 => C, attribute_4 => D}, {attribute_1 => A, attribute_2 => B, attribute_3 => C, attribute_4 => D}] I've tried a couple of things, including the iteration methods and the merge method, but I've been unable to get the array I need.
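
    One way to do this in Ruby (a minimal sketch, assuming both arrays have the same length and that the same index in each array refers to the same object) is to zip the arrays together and merge each pair of hashes:

        array1 = [{ attribute_1: 'A', attribute_2: 'B' }, { attribute_1: 'A', attribute_2: 'B' }]
        array2 = [{ attribute_3: 'C', attribute_4: 'D' }, { attribute_3: 'C', attribute_4: 'D' }]

        # Pair the hashes up by position, then merge each pair into a single hash.
        desired = array1.zip(array2).map { |first, second| first.merge(second) }
        # => [{attribute_1: 'A', attribute_2: 'B', attribute_3: 'C', attribute_4: 'D'}, ...]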

    Read the article

  • How to convert JavaScript dictionary into Python syntax

    - by Sputnix
    When a JavaScript dictionary is written out from inside a JavaScript-enabled application (such as Adobe) into an external .jsx file (or any other .txt file), the content of the resulting file looks like: ({one:"1", two:"2"}) (Note that the dictionary keys are written as if they were variable names, which they are not.) The next step is to read this .jsx file with Python. I need to find a way to convert ({one:"1", two:"2"}) into Python dictionary syntax such as: {'one':"1", 'two':"2"} It has already been suggested that instead of using JavaScript's built-in dict.toSource() it would make more sense to use JSON, which would write the dictionary content in a syntax similar to Python's. But unfortunately using JSON is not an option for me. I need to find a way to convert ({one:"1", two:"2"}) into {'one':"1", 'two':"2"} using Python alone. Any suggestions on how to achieve it? Once again, the problem is mostly the dictionary key syntax, which to Python looks like variable names instead of string keys: one vs "one"
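
    A minimal Python sketch of one possible conversion, assuming the input stays as simple as the example above (bare identifier keys, quoted string values); anything more complex would need a real JavaScript parser:

        import ast
        import re

        def js_object_to_dict(text):
            # Strip the surrounding parentheses: ({one:"1", two:"2"}) -> {one:"1", two:"2"}
            text = text.strip().lstrip("(").rstrip(")")
            # Quote bare identifiers used as keys: {one:"1"} -> {'one':"1"}
            text = re.sub(r"([{\s,])([A-Za-z_]\w*)\s*:", r"\1'\2':", text)
            return ast.literal_eval(text)

        print(js_object_to_dict('({one:"1", two:"2"})'))  # {'one': '1', 'two': '2'}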

    Read the article

  • Cocos2d: Moving background on update: offset issue

    - by mm24
    Working with Objective-C, iOS and Cocos2d, I am developing a vertical scrolling shooter game for iPhone (retina display models with 640 width x 960 height pixel resolution). My basic algorithm works as follows: I create two instances of an image that is exactly 640 width x 960 height pixels in resolution, which we will call imageA and imageB. I then set the two images with exactly 480.0f of offset from each other, as the screenSize of a CCScene is set by default to 480.0f. At each update method call I move the two images by the same value and make sure that their offset stays at 480.0f. However, when running the game I see a 1-pixel-high line between the two images. This literally bugs me and I would like to fix it. What am I doing wrong? This is a zoom in on the background when the "offset line" is visible. The white line you can see divides the two background images and is not meant to exist, as both images are completely black :): If I change the yPositionOfSecondElement value to 479.0f the two images overlap correctly until the first loop, but as soon as the loop starts the two images start having an offset of -1.0f. Here is the initialization code: -(void) init { //... screenHeight = 480.0f; yPositionOfSecondElement= screenHeight;//I tried subtracting an offset of -1 but eventually the image would go wrong again yPositionOfFirstElement = 0.0f; loopedBackgroundImageInstanceA = [BackgroundLoopedImage loopImageForLevel:levelName]; loopedBackgroundImageInstanceA.anchorPoint = CGPointMake(0.5f, 0.0f); loopedBackgroundImageInstanceA.position = CGPointMake(160.0f, yPositionOfFirstElement); [node addChild:loopedBackgroundImageInstanceA z:zLevelBackground]; //loopedBackgroundImageInstanceA.color= ccRED; loopedBackgroundImageInstanceB = [BackgroundLoopedImage loopImageForLevel:levelName]; loopedBackgroundImageInstanceB.anchorPoint = CGPointMake(0.5f, 0.0f); loopedBackgroundImageInstanceB.position = CGPointMake(160.0f, yPositionOfSecondElement); [node addChild:loopedBackgroundImageInstanceB z:zLevelBackground]; //.... 
} And here is the move code called at each update: -(void) moveBackgroundSprites:(BackgroundLoopedImage*)imageA :(BackgroundLoopedImage*)imageB :(ccTime)delta { isEligibleToMove=false; //This is done to avoid rounding errors float yStep = delta * [GameController sharedGameController].currentBackgroundSpeed; NSString* formattedNumber = [NSString stringWithFormat:@"%.02f", yStep]; yStep = atof([formattedNumber UTF8String]); //First should adjust position of images [self adjustPosition:imageA :imageB]; //The can get the actual image position CGPoint posA = imageA.position; CGPoint posB = imageB.position; //Here could verify if the checksum is equal to the required difference (should be 479.0f) if (![self verifyCheckSum:posA :posB]) { CCLOG(@"does not comply A"); } //At this stage can compute the hypotetical new position CGPoint newPosA = CGPointMake(posA.x, posA.y - yStep); CGPoint newPosB = CGPointMake(posB.x, posB.y - yStep); // Reposition stripes when they're out of bounds if (newPosA.y <= -yPositionOfSecondElement) { newPosA.y = yPositionOfSecondElement; [imageA shuffle]; if (timeElapsed>=endTime && hasReachedEndLevel==FALSE) { hasReachedEndLevel=TRUE; shouldMoveImageEnd=TRUE; } } else if (newPosB.y <= -yPositionOfSecondElement) { newPosB.y = yPositionOfSecondElement; [imageB shuffle]; if (timeElapsed>=endTime && hasReachedEndLevel==FALSE) { hasReachedEndLevel=TRUE; shouldMoveImageEnd=TRUE; } } //Here should verify that the check sum is equal to 479.0f if (![self verifyCheckSum:posA :posB]) { CCLOG(@"does not comply B"); } imageA.position = newPosA; imageB.position = newPosB; //Here could verify that the check sum is equal to 479.0f if (![self verifyCheckSum:posA :posB]) { CCLOG(@"does not comply C"); } isEligibleToMove=true; } -(BOOL) verifyCheckSum:(CGPoint)posA :(CGPoint)posB { BOOL comply = false; float sum = 0.0f; if (posA.y > posB.y) { sum = posA.y - posB.y; } else if (posB.y > posA.y){ sum = posB.y - posA.y; } else{ return false; } if (sum!=yPositionOfSecondElement) { comply= false; } else{ comply=true; } return comply; } And here is what happens on the update: if(shouldMoveImageA && shouldMoveImageB) { if (isEligibleToMove) { [self moveBackgroundSprites:loopedBackgroundImageInstanceA :loopedBackgroundImageInstanceB :delta]; } Forget about shouldMoveImageA and shouldMoveImageB, this is just for when the background reaches the end of level, this works.
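
    One possible fix (a sketch only, not the poster's confirmed solution): when a stripe wraps around, reposition it relative to the other stripe instead of to the fixed constant, so the rounded yStep can never open a gap between the two images:

        // Inside moveBackgroundSprites:, replacing the reset to the constant yPositionOfSecondElement.
        if (newPosA.y <= -yPositionOfSecondElement) {
            newPosA.y = newPosB.y + yPositionOfSecondElement;   // exactly one screen above imageB
            [imageA shuffle];
        } else if (newPosB.y <= -yPositionOfSecondElement) {
            newPosB.y = newPosA.y + yPositionOfSecondElement;   // exactly one screen above imageA
            [imageB shuffle];
        }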

    Read the article

  • PASS: SQLRally Thoughts

    - by Bill Graziano
    The PASS Board recently decided that we wouldn’t put another US-based SQLRally on the calendar until we had a chance to review the program. I wanted to provide some of my thinking around this. Keep in mind that this is the opinion of one Board member. The Board committed to complete two SQLRally events to determine if an event modeled between SQL Saturday and the Summit was viable. We’ve completed the two events and now it’s time to step back and review the program. This is my seventh year on the PASS Board. Over that time people have asked me why PASS does certain things. Many, many times my answer has been “Because that’s the way we did it last year”. And I am tired of giving that answer. We need to take a step back and review the US-based SQLRally before we schedule another one. It would be irresponsible for me as a Board member to commit resources to this without validating that what we’re doing makes sense for the organization and our members. I have no doubt that this was a great event for the attendees. We just need to validate it’s the best use of our resources. Please keep in mind that we haven’t cancelled the event. We’ve just said we need to review it before scheduling another one. My opinion is that some fairly serious changes are needed to the model before we consider it again – IF we do it again. I’ve come to that conclusion after speaking with the Dallas organizers, our HQ team, our Marketing team, other Board members (including one of the Orlando organizers), attendees in Orlando and Dallas and visiting other similar events. I should point out that their views aren’t unanimous on nearly any part of this event -- which is one of the reasons I want to take some time and think about this before continuing. I think it’s helpful to look at the original goals of what we were trying to accomplish. Andy Warren wrote these up in August of 2010. My summary of these goals and some thoughts on each one is below. Many of these thoughts revolve around the growth of SQL Saturdays. In the two years since that document was written these events have grown significantly. The largest SQL Saturdays are now over 500 people which mean they are nearly the same size as our recent SQLRally. Our goals included: Geographic diversity. We wanted an event in an area of the country that was away from any given Summit location. I think that’s still a valid goal. But we also have SQL Saturdays all over the country. What does SQLRally bring to this that SQLSaturday doesn’t? Speaker growth. One of the stated goals was to build a “farm club” for speakers. This gives us a way for speakers to work up to speaking at Summit by speaking in front of larger crowds. What does SQLRally bring to this that the larger SQL Saturdays aren’t providing? Pre-Conference speakers is one obvious answer here. Lower price. On a per-day basis, SQLRally is roughly 1/4th the price of the Summit. We wanted a way for people to experience something Summit-like at a lower price point. The challenge is that we are very budget constrained at that lower price point. International Event Model.  (I need to write more about this but I’m out of time.  I’ll cover it in the next installment.) There are a number of things I really like about SQLRally. I love the smaller conferences. They give me a chance to meet more people than at something the size of Summit. I like the two day format. That gives you two evenings to be at social events with people. Seeing someone a second day is a great way to build a bond with that person. 
That’s more difficult to do at a SQL Saturday. We also need to talk about the financial aspects of the event. Last year generated a small $17,000 profit on revenues of $200,000. Percentage-wise that’s reasonable but on an absolute basis it’s not a huge amount in our budget. We think this year will lose between $30,000 and $50,000 and take roughly 1,000 hours of HQ time. We don’t have detailed financials back yet but that’s our best guess at this point. Part of that was driven by using a convention center instead of a hotel. Until we get detailed financials back we won’t have the full picture around the financial impact. This event also takes time and mindshare from our Marketing team. This may sound like a small thing but please don’t underestimate it. Our original vision for this was something that would take very little time from our Marketing team and just a few mentions in the Connector. It turned out to need more than that. And all those mentions and emails take up space we could use to talk about other events and other programs. Last I wanted to talk about some of the things I’m thinking about. I don’t think it’s as simple as saying if we just fix “X” it all gets better. Is this that much better of an event than SQL Saturdays? What if we gave a few SQL Saturdays some extra resources? When SQL Saturdays were around 250 people that wasn’t as viable. With some of those events over 500 we need to reconsider this. We need to get back to a hotel venue. That will help with cost and networking. Is this the best use of the 1,000 HQ hours that we invested in the event? Is our price-point correct? I’m leaning toward raising our price closer to Summit on a per-day basis. I think this will let us put on a higher quality event and alleviate much of the budget pressure. Should growing speakers be a focus? Having top-line pre-conference speakers helps market the event. It will also have an impact on pricing and overall profit. We should also ask if it actually does grow speakers. How many of these people will eventually register for Summit? Attend chapters? Is SQLRally a driver into PASS or is it something that chapters, etc. drive people to? Should we have one paid day and one free instead of two paid days? This is a very interesting model that is used by SQLBits in the UK. This gives you the two day aspect as well as offering options for paid and free attendees. I’m very intrigued by this. Should we focus on a topic? Buried in the minutes is a discussion of whether PASS should have a Business Analytics conference separate from Summit. This is an interesting question to consider. Would making SQLRally be focused on a particular topic make it more attractive? Would that even be a SQLRally? Can PASS effectively manage the two events? (FYI - Probably not.) Would it help differentiate it from Summit and SQL Saturday? These are all questions that I think should be asked and answered before we do this event again. And we can’t do that if we don’t take time to have the discussion. I wanted to get this published before I take off for a few days of vacation. When I get back I’d like to write more about why the international events are different and talk about where we go from here.

    Read the article

  • How can I create different DLLs in one project?

    - by jaloplo
    I have a question and I don't know if it can be solved. I have one C# project in Visual Studio 2005 and I want to create different DLL names depending on a preprocessor constant. What I have at the moment is the preprocessor constant, two snk files and two assembly GUIDs. I also created two configurations (Debug and Debug Preprocessor) and they compile perfectly using the appropriate snk and GUID. #if PREPROCESSOR_CONSTANT [assembly: AssemblyTitle("MyLibraryConstant")] [assembly: AssemblyProduct("MyLibraryConstant")] #else [assembly: AssemblyTitle("MyLibrary")] [assembly: AssemblyProduct("MyLibrary")] #endif Now I have to put the two assemblies into the GAC. The first assembly is added without problems but the second isn't. What can I do to create two or more different assemblies from one Visual Studio project? Is it possible that I forgot to include a line in "AssemblyInfo.cs" to change the DLL name depending on the preprocessor constant?
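
    The output DLL name comes from the project file rather than from AssemblyInfo.cs, so one option (a sketch; the configuration name matches the question, but the key file name and exact property values are assumptions) is a conditional PropertyGroup in the .csproj:

        <!-- Hypothetical .csproj fragment: per-configuration output name, constants and key file -->
        <PropertyGroup Condition=" '$(Configuration)' == 'Debug Preprocessor' ">
          <AssemblyName>MyLibraryConstant</AssemblyName>
          <DefineConstants>DEBUG;PREPROCESSOR_CONSTANT</DefineConstants>
          <SignAssembly>true</SignAssembly>
          <AssemblyOriginatorKeyFile>MyLibraryConstant.snk</AssemblyOriginatorKeyFile>
        </PropertyGroup>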

    Read the article

  • Can I mix compile time string comparison with MPL templates?

    - by Negative Zero
    I got this compile-time string comparison from another thread using constexpr and C++11 (http://stackoverflow.com/questions/5721813/compile-time-assert-for-string-equality). It works with constant strings like "OK": constexpr bool isequal(char const *one, char const *two) { return (*one && *two) ? (*one == *two && isequal(one + 1, two + 1)) : (!*one && !*two); } I am trying to use it in the following context: static_assert(isequal(boost::mpl::c_str<boost::mpl::string<'ak'>>::value, "ak"), "should not fail"); But it gives me a compilation error saying the static_assert expression is not a constant integral expression. Can I do this?
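
    For comparison, here is a self-contained C++11 sketch in which the same static_assert compiles; the likely culprit in the question is that boost::mpl::c_str's value member is declared const rather than constexpr, so reading it is not allowed inside a constant expression (an assumption worth checking against the Boost version in use):

        // Compile-time string comparison from the question, unchanged.
        constexpr bool isequal(char const *one, char const *two)
        {
            return (*one && *two) ? (*one == *two && isequal(one + 1, two + 1))
                                  : (!*one && !*two);
        }

        constexpr char ak[] = "ak";                      // constexpr storage is usable at compile time
        static_assert(isequal(ak, "ak"), "should not fail");
        static_assert(!isequal("ak", "ka"), "should not fail either");

        int main() {}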

    Read the article

  • How to handle splitting a file under source control?

    - by sharptooth
    I have a .cpp file and .h file containing a class. Class.cpp contains the implementation and Class.h contains the definition. The class is overcomplicated so I want to separate some code and move it into a separate class. So I create NewClass.cpp and NewClass.h and move the code there. How do I handle this when the files are under SVN? I can simply "svn add" the two new files, but then they will appear as new and will have no history. I could instead "svn copy and rename" the two initial files and edit the two old files and the two new files - then the two new files will have common history. Which approach is better from the point of view of version control? Should the new files share history with the old files or should they appear as new?
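
    If keeping the history is the goal, the copy-based workflow might look like this (a sketch of the usual svn commands, run from a working copy; the commit message is illustrative):

        # Copy with history, then trim each file down to its half of the code.
        svn copy Class.cpp NewClass.cpp
        svn copy Class.h NewClass.h
        # ...edit Class.cpp/Class.h and NewClass.cpp/NewClass.h so each keeps only its own code...
        svn commit -m "Split NewClass out of Class; svn copy preserves the shared history"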

    Read the article

  • Why does $('#id') return true if id doesn't exist?

    - by David
    I always wondered why jQuery returns true if I'm trying to find elements by an id selector that doesn't exist in the DOM structure. Like this: <div id="one">one</div> <script> console.log( !!$('#one') ) // prints true console.log( !!$('#two') ) // is also true! (empty jQuery object) console.log( !!document.getElementById('two') ) // prints false </script> I know I can use !!$('#two').length since length === 0 if the object is empty, but it seems logical to me that a selector would return the element if found, otherwise null (like the native document.getElementById does). For example, this logic can't be done in jQuery: var div = $('#two') || $('<div id="two"></div>'); Wouldn't it be more logical if the ID selector returned null if not found? Anyone?
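
    Because jQuery always returns a (possibly empty) jQuery object, the || idiom has to be built on length. A small helper along these lines (just a sketch; the name $orNull is made up here) restores the element-or-null behaviour:

        // Return the jQuery set if it matched anything, otherwise null.
        function $orNull(selector) {
            var $el = $(selector);
            return $el.length ? $el : null;   // empty jQuery objects are still truthy
        }

        var div = $orNull('#two') || $('<div id="two"></div>');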

    Read the article

  • Determining whether a file is a duplicate

    - by Todd R
    Is there a reliable way to determine whether or not two files are the same? For example, two files with the same size and type may or may not be the same binarilly (yeah, I know it's not really a word). I assume that comparing one or two checksums of the files will help, but I wonder: How reliable are checksums at determining whether two files are different; what are the chances of two different files having the same checksum? Would reliability increase by applying additional checksum comparisons? Which checksum algorithm(s) would be the most efficient and/or reliable? Any ideas, suggestions or thoughts are appreciated! P.S. The code for this is being written in Java running on a nix system, but generic or platform agnostic input is most helpful.
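
    Since the question mentions Java, here is a minimal sketch of the usual approach: compare file sizes first (cheap), then fall back to a strong digest such as SHA-256, whose accidental collision probability is negligible for this purpose. It is illustrative only and reads whole files into memory, so large files would need streaming instead:

        import java.io.IOException;
        import java.nio.file.Files;
        import java.nio.file.Path;
        import java.security.MessageDigest;
        import java.security.NoSuchAlgorithmException;
        import java.util.Arrays;

        public class DuplicateCheck {
            static boolean probablySame(Path a, Path b) throws IOException, NoSuchAlgorithmException {
                if (Files.size(a) != Files.size(b)) {
                    return false;                        // different sizes can never be duplicates
                }
                MessageDigest md = MessageDigest.getInstance("SHA-256");
                byte[] hashA = md.digest(Files.readAllBytes(a));
                md.reset();
                byte[] hashB = md.digest(Files.readAllBytes(b));
                return Arrays.equals(hashA, hashB);      // equal digests => almost certainly identical
            }
        }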

    Read the article

  • PHP Classes: parent/child communication

    - by Fffff
    Hi all, I'm having some trouble extending Classes in PHP. Have been Googling for a while. $a = new A(); $a->one(); $a->b1(); // something like this, or... class A { public $test1; public function one() { echo "this is A-one"; $this->two(); $parent->two(); $parent->B->two(); // ...how do i do something like this (prepare it for using in instance $a)? } } class B extends A { public $test2; public function two($test) { echo "this is B-two"; } } I'm ok at procedural PHP.
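
    For reference, a minimal sketch of how this inheritance normally works in PHP: a parent method can reach a method defined in the child only when the object itself is an instance of the child. The class and method names reuse the question's; the unused $test parameter is dropped for brevity:

        <?php
        class A {
            public function one() {
                echo "this is A-one\n";
                $this->two();      // resolves to B::two() because $this is a B below
            }
        }

        class B extends A {
            public function two() {
                echo "this is B-two\n";
            }
        }

        $b = new B();              // instantiate the child, not the parent
        $b->one();                 // prints "this is A-one" then "this is B-two"
        $b->two();                 // prints "this is B-two"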

    Read the article

  • Grammatically correct double-noun identifiers, plural versions

    - by Michal Czardybon
    Consider compounds of two nouns, which in natural English would most often appear in the form "noun of noun", e.g. "direction of light", "output of a filter". When programming we usually write "LightDirection" and "FilterOutput". Now, I have a problem with plural nouns. There are two cases: 1) singular of plural, e.g. "union of (two) sets", "intersection of (two) segments" Which is correct, SetUnion and SegmentIntersection or SetsUnion and SegmentsIntersection? 2) plural of plural There are two subcases: (a) Many elements, each having many related elements, e.g. "outputs of filters" (b) Many elements, each having a single related element, e.g. "directions of vectors" Shall I use FilterOutputs and VectorDirections or FiltersOutputs and VectorsDirections? I suspect the first version (FilterOutputs, VectorDirections) is correct, but I think it may lead to ambiguities, e.g. FilterOutputs - many outputs of a single filter or many outputs of many filters? LineSegmentProjections - projections of many segments or many projections of a single segment?

    Read the article

  • SQL Server JOIN with optional NULL values

    - by Paul McLoughlin
    Imagine that we have two tables as follows: Trades ( TradeRef INT NOT NULL, TradeStatus INT NOT NULL, Broker INT NOT NULL, Country VARCHAR(3) NOT NULL ) CTMBroker ( Broker INT NOT NULL, Country VARCHAR(3) NULL ) (These have been simplified for the purpose of this example.) Now, if we wish to join these two tables on the Broker column and, where a country exists in the CTMBroker table, also on the Country column, we have the following two choices: SELECT T.TradeRef,T.TradeStatus FROM Trades AS T JOIN CTMBroker AS B ON B.Broker=T.Broker AND ISNULL(B.Country, T.Country) = T.Country or SELECT T.TradeRef,T.TradeStatus FROM Trades AS T JOIN CTMBroker AS B ON B.Broker=T.Broker AND (B.COUNTRY=T.Country OR B.Country IS NULL) These are both logically equivalent; however, in this specific circumstance for our database (SQL Server 2008, SP1) two different execution plans are produced for these two queries, with the second version significantly outperforming the first in terms of both time and logical reads. My question really is as follows: as a general rule would (2) be preferred to (1), or does this just happen to be exploiting some particular idiosyncrasy of the optimiser in 2008 SP1 (that could therefore change with future versions of SQL Server)?

    Read the article

  • Multi Monitor Setup Problems

    - by Shamballa
    I have Ubuntu 10.04 LTS - the Lucid Lynx. I have until recently been using a nVida Graphics card (NVIDIA GeForce 9800 GT) with two monitors attached, this all worked fine and dandy. A couple of days ago I bought two new identical LCD monitors for a multi monitor setup and two ATI graphics cards (ATI Sapphire Radeon HD5450). NOTE *All monitors work fine in Windows XP, 2k, Vista and 7 After I had booted into Ubuntu only one display came on, that I kind of expected anyway, then I removed the driver for the nVidia card and downloaded the ATI version which gave me the ATI Catalyst Control Center - in that only two of the displays were showing the third was disabled and showing unknown driver. I enabled the third monitor that stated "Unkown Driver" and had to reboot, upon reboot none of the displays work. I restarted and booted up into recovery mode and from now that is only what I can get into using a failsafe driver. It seems according to the log that a server is already active for Display 0 and I have to remove /tmp/.X0-lock and start again. This is what the log file is saying: Fatal Server Error Server is already active for display 0 if this server is no longer running, remove /tmp/.X0-lock and start again. (WW) xf86 closeconsole: KDSETMODE failed: Bad file descriptor (WW) xf86 closeconsole: VT_GETMODE failed: Bad file descriptor (WW) xf86 closeconsole: VT_GETSTATE failed: Bad file descriptor ddxSigGiveUp: closing log I have tried looking at my xorg.config file but unfortunately I have not really got a clue as to how it "should" be - I have tried regenerating it using this command from a terminal: sudo dpkg-reconfigure -phigh xserver-xorg but that had no effect so I am currently stuck in failsafe driver mode but two monitors are active but are mirroring each other. I hope that this is not to long - looking back I have been going on a bit! but I am just trying to explain as much as I can... I have asked this on Linuxquestions but nobody seems to know either or at least I have not had any responses. Could some kind soul please help explain what I can do from here? I would be eternally grateful. Chris * Update * Removing xorg.conf does nothing other than allowing me to use only two monitors - using the command: sudo aticonfig --initial generates the xorg.conf file below: but does not work either - I just get two monitors... 
Section "ServerLayout" Identifier "aticonfig Layout" Screen 0 "aticonfig-Screen[0]-0" 0 0 EndSection Section "Files" EndSection Section "Module" EndSection Section "Monitor" Identifier "aticonfig-Monitor[0]-0" Option "VendorName" "ATI Proprietary Driver" Option "ModelName" "Generic Autodetecting Monitor" Option "DPMS" "true" EndSection Section "Device" Identifier "aticonfig-Device[0]-0" Driver "fglrx" BusID "PCI:1:0:0" EndSection Section "Screen" Identifier "aticonfig-Screen[0]-0" Device "aticonfig-Device[0]-0" Monitor "aticonfig-Monitor[0]-0" DefaultDepth 24 SubSection "Display" Viewport 0 0 Depth 24 EndSubSection EndSection I have tried using this command from a thread on the Ubuntu Forums with a question similar to mine: sudo aticonfig --initial=dual-head --adapter=all Generated xorg.conf file Section "ServerLayout" Identifier "aticonfig Layout" Screen 0 "aticonfig-Screen[0]-0" 0 0 Screen "aticonfig-Screen[0]-1" RightOf "aticonfig-Screen[0]-0" Screen "aticonfig-Screen[1]-0" RightOf "aticonfig-Screen[0]-1" Screen "aticonfig-Screen[1]-1" RightOf "aticonfig-Screen[1]-0" EndSection Section "Files" EndSection Section "Module" EndSection Section "Monitor" Identifier "aticonfig-Monitor[0]-0" Option "VendorName" "ATI Proprietary Driver" Option "ModelName" "Generic Autodetecting Monitor" Option "DPMS" "true" EndSection Section "Monitor" Identifier "aticonfig-Monitor[0]-1" Option "VendorName" "ATI Proprietary Driver" Option "ModelName" "Generic Autodetecting Monitor" Option "DPMS" "true" EndSection Section "Monitor" Identifier "aticonfig-Monitor[1]-0" Option "VendorName" "ATI Proprietary Driver" Option "ModelName" "Generic Autodetecting Monitor" Option "DPMS" "true" EndSection Section "Monitor" Identifier "aticonfig-Monitor[1]-1" Option "VendorName" "ATI Proprietary Driver" Option "ModelName" "Generic Autodetecting Monitor" Option "DPMS" "true" EndSection Section "Device" Identifier "aticonfig-Device[0]-0" Driver "fglrx" BusID "PCI:1:0:0" EndSection Section "Device" Identifier "aticonfig-Device[0]-1" Driver "fglrx" BusID "PCI:1:0:0" Screen 1 EndSection Section "Device" Identifier "aticonfig-Device[1]-0" Driver "fglrx" BusID "PCI:2:0:0" EndSection Section "Device" Identifier "aticonfig-Device[1]-1" Driver "fglrx" BusID "PCI:2:0:0" Screen 1 EndSection Section "Screen" Identifier "aticonfig-Screen[0]-0" Device "aticonfig-Device[0]-0" Monitor "aticonfig-Monitor[0]-0" DefaultDepth 24 SubSection "Display" Viewport 0 0 Depth 24 EndSubSection EndSection Section "Screen" Identifier "aticonfig-Screen[0]-1" Device "aticonfig-Device[0]-1" Monitor "aticonfig-Monitor[0]-1" DefaultDepth 24 SubSection "Display" Viewport 0 0 Depth 24 EndSubSection EndSection Section "Screen" Identifier "aticonfig-Screen[1]-0" Device "aticonfig-Device[1]-0" Monitor "aticonfig-Monitor[1]-0" DefaultDepth 24 SubSection "Display" Viewport 0 0 Depth 24 EndSubSection EndSection Section "Screen" Identifier "aticonfig-Screen[1]-1" Device "aticonfig-Device[1]-1" Monitor "aticonfig-Monitor[1]-1" DefaultDepth 24 SubSection "Display" Viewport 0 0 Depth 24 EndSubSection EndSection This upon reboot renders ALL monitors blank and I have to go into recovery mode and use a failsafe driver. This is so much harder than I thought it would be, I don't think Ubuntu likes ATI for multi (3) monitors or maybe the other way around. Can anyone help still?

    Read the article

  • Adding more OR searches with CONTAINS Brings Query to Crawl

    - by scolja
    I have a simple query that relies on two full-text indexed tables, but it runs extremely slow when I have the CONTAINS combined with any additional OR search. As seen in the execution plan, the two full text searches crush the performance. If I query with just 1 of the CONTAINS, or neither, the query is sub-second, but the moment you add OR into the mix the query becomes ill-fated. The two tables are nothing special, they're not overly wide (42 cols in one, 21 in the other; maybe 10 cols are FT indexed in each) or even contain very many records (36k recs in the biggest of the two). I was able to solve the performance by splitting the two CONTAINS searches into their own SELECT queries and then UNION the three together. Is this UNION workaround my only hope? Thanks. SELECT a.CollectionID FROM collections a INNER JOIN determinations b ON a.CollectionID = b.CollectionID WHERE a.CollrTeam_Text LIKE '%fa%' OR CONTAINS(a.*, '"*fa*"') OR CONTAINS(b.*, '"*fa*"') Execution Plan (guess I need more reputation before I can post the image):
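
    The UNION workaround the poster describes could look roughly like this (a sketch built from the query above; UNION also removes duplicate CollectionIDs across the three branches, and each branch can use its own index or full-text plan):

        SELECT a.CollectionID
        FROM collections a
        INNER JOIN determinations b ON a.CollectionID = b.CollectionID
        WHERE a.CollrTeam_Text LIKE '%fa%'
        UNION
        SELECT a.CollectionID
        FROM collections a
        INNER JOIN determinations b ON a.CollectionID = b.CollectionID
        WHERE CONTAINS(a.*, '"*fa*"')
        UNION
        SELECT a.CollectionID
        FROM collections a
        INNER JOIN determinations b ON a.CollectionID = b.CollectionID
        WHERE CONTAINS(b.*, '"*fa*"');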

    Read the article

  • Algorithm to Group All the Cycles Together

    - by Ngu Soon Hui
    I have a lot of cycles (indicated by numeric values; for example, 1-2-3-4 corresponds to a cycle with 4 edges: edge 1 is {1,2}, edge 2 is {2,3}, edge 3 is {3,4}, edge 4 is {4,1}, and so on). A cycle is said to be connected to another cycle if they share one and only one edge. For example, let's say I have two cycles 1-2-3-4 and 5-6-7-8; then there are two cycle groups because these two cycles are not connected to each other. If I have two cycles 1-2-3-4 and 3-4-5-6, then I have only one cycle group because these two cycles share the same edge. What is the most efficient way to find all the cycle groups?
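
    One common approach is union-find keyed by edges; a Python sketch is below. It treats "shares an edge" as the connection test, so enforcing the stricter "one and only one shared edge" condition would need an extra per-pair edge count:

        # Group cycles that share an edge, using union-find.
        # A cycle like [1, 2, 3, 4] has edges {1,2}, {2,3}, {3,4}, {4,1}.

        def cycle_edges(cycle):
            return {frozenset((cycle[i], cycle[(i + 1) % len(cycle)])) for i in range(len(cycle))}

        def group_cycles(cycles):
            parent = list(range(len(cycles)))

            def find(x):
                while parent[x] != x:
                    parent[x] = parent[parent[x]]   # path halving
                    x = parent[x]
                return x

            def union(a, b):
                parent[find(a)] = find(b)

            last_cycle_with_edge = {}
            for i, cycle in enumerate(cycles):
                for edge in cycle_edges(cycle):
                    if edge in last_cycle_with_edge:
                        union(i, last_cycle_with_edge[edge])   # shared edge -> same group
                    last_cycle_with_edge[edge] = i

            groups = {}
            for i, cycle in enumerate(cycles):
                groups.setdefault(find(i), []).append(cycle)
            return list(groups.values())

        print(len(group_cycles([[1, 2, 3, 4], [5, 6, 7, 8]])))  # 2 groups
        print(len(group_cycles([[1, 2, 3, 4], [3, 4, 5, 6]])))  # 1 group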

    Read the article

  • Authenticating a user for a single app with multiple domains

    - by hofnarwillie
    I have one asp.net web application, but two different domains point to this web app. For instance: www.one.com and www.two.com both point to the same web app. I have an issue where I need certain pages to be on a specific domain (due to some security requirements from our online payment provider - a third party website). So let's say page1.aspx needs to be called on www.two.com The process is as follows: A user logs into www.one.com The authentication cookie is saved to the browser The user then navigates to page1.aspx and, if on the wrong domain, gets redirected to the correct domain. (this redirection happens on page1.aspx in the page_load event) Then asp.net redirects the user to the login screen, because the authentication cookie is not sent to www.two.com. How can I track the user and keep him/her logged in between the two domains?
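
    One workable pattern (a sketch only, assuming ASP.NET Forms Authentication and that both domains serve the same application with a shared machineKey, so a ticket encrypted on one domain decrypts on the other; the authToken parameter name is made up) is to carry the ticket across in the redirect and re-issue the cookie on the second domain:

        // On www.one.com, just before redirecting to page1.aspx on www.two.com:
        string token = FormsAuthentication.Encrypt(
            new FormsAuthenticationTicket(User.Identity.Name, false, 30));   // 30-minute ticket
        Response.Redirect("https://www.two.com/page1.aspx?authToken=" +
                          HttpUtility.UrlEncode(token));

        // On www.two.com, in page1.aspx Page_Load:
        FormsAuthenticationTicket ticket =
            FormsAuthentication.Decrypt(Request.QueryString["authToken"]);
        if (ticket != null && !ticket.Expired)
        {
            FormsAuthentication.SetAuthCookie(ticket.Name, false);           // user is now logged in here too
        }

    Keeping the token short-lived and only ever sending it over HTTPS matters here, since it grants a login to anyone who holds it.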

    Read the article

  • Combining XSLT transforms

    - by Flynn1179
    Is there a way to combine two XSLT documents into a single XSLT document that does the same as transforming using the original two in sequence? i.e. Combining XSLTA and XSLTB into XSLTC such that XSLTB( XSLTA( xml )) == XSLTC( xml )? There's three reasons I'd like to be able to do this: Simplifies development; some operations need sequential transforms, and although I can generate a combined one by hand, it's a lot more difficult to maintain that two much simpler, separate transforms. Speed; one transform is in most cases hopefully faster than two. I'm currently working on a program that literally just transforms a data file in XML into an XHTML page capable of editing it using one XSLT, and a second XSLT that transforms the XHTML page back into the data file when it's saved. One test I hope to be able to do is to combine the two, and easily confirm that the 'combined' XSLT should leave the data unchanged.
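
    There is no general mechanism for statically merging two arbitrary stylesheets into one, but the third point - checking that a hand-combined XSLTC behaves like XSLTA followed by XSLTB - is easy to automate. A sketch using Python with lxml (an assumed toolchain, with made-up file names):

        from lxml import etree

        xslt_a = etree.XSLT(etree.parse("a.xsl"))
        xslt_b = etree.XSLT(etree.parse("b.xsl"))
        xslt_c = etree.XSLT(etree.parse("c.xsl"))   # the hand-combined stylesheet

        doc = etree.parse("data.xml")
        sequential = xslt_b(xslt_a(doc))            # XSLTB(XSLTA(xml))
        combined = xslt_c(doc)                      # XSLTC(xml)

        print(etree.tostring(sequential) == etree.tostring(combined))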

    Read the article

  • Strange profiler behavior: same functions, different performances

    - by arthurprs
    I was learning to use gprof and then I got weird results for this code: int one(int a, int b) { return a / (b + 1); } int two(int a, int b) { return a / (b + 1); } int main() { for (int i = 1; i < 30000000; i++) { two(i, i * 2); one(i, i * 2); } return 0; } and this is the profiler output: % cumulative self self total time seconds seconds calls ns/call ns/call name 48.39 0.90 0.90 29999999 30.00 30.00 one(int, int) 40.86 1.66 0.76 29999999 25.33 25.33 two(int, int) 10.75 1.86 0.20 main If I call one then two, the result is the inverse: two takes more time than one. Both are the same functions, but whichever is called first always takes less time than the one called second. Why is that? Note: the assembly code is exactly the same and the code is being compiled with no optimizations.

    Read the article

  • How do I switch the table that is queried with linq-to-sql

    - by Ian Ringrose
    We have two tables with the same set of columns; depending on the “type” of object the value is stored in one of the two tables. I wish to use common code to access these two tables. If I was using “raw sql” I could just use String.Format() to change the table name. (Likewise for updates etc) The two separate tables are needed as the data access patterns are very different for the common queries on the two tables and therefore different indexes are needed. “Views” and “instead of triggers” etc to make the tables look like a single table are not liked here. A lot of our customers use low end version of SqlServer so we cannot use partition tables.

    Read the article

  • How to match data between columns to do the comparison

    - by NCC
    I do not really know how to explain this in a clear manner. Please see the attached image. I have a table with 4 different columns, 2 of which are identical to each other (NAME and QTY). The goal is to compare the differences between the QTY values; however, in order to do it I must: 1. sort the data 2. match the data item by item This is not a big deal with a small table, but with 10 thousand rows it takes me a few days to do it. Please help me, I would appreciate it. My logic is: 1. Sort the first two columns (NAME and QTY) 2. For each value of the second two columns (NAME and QTY), check if it matches the first two columns. If true, insert the value. 3. For values that are not matched, insert new rows offset from the rows that are in the first two columns but not in the second two columns

    Read the article

  • Is git revert broken?

    - by sabgenton
    The following pastebin is a repo with one file with one, two, three, four, five typed on each line. Each line was committed separately into git: http://pastebin.ca/raw/2136179 I then tried to delete the line two with the command git revert <commit which creates two> and get: error: could not revert b4e0a66... second hint: after resolving the conflicts, mark the corrected paths hint: with 'git add <paths>' or 'git rm <paths>' hint: and commit the result with 'git commit' There should be no conflict for something this simple, right? Or am I doing it wrong/got the wrong command? The merge details don't seem to make sense either: one <<<<<<< HEAD two three four five ======= >>>>>>> parent of b4e0a66... second Isn't that saying delete everything but one? I was expecting only two to be affected... git 1.7.10
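
    For completeness, one way to finish the revert once the conflict appears (a sketch of the standard commands; file.txt stands in for whatever the file is actually called): resolve the file by hand so it keeps every line except two, then

        git add file.txt          # mark the path as resolved
        git revert --continue     # conclude the revert (git 1.7.8+); on older versions, git commit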

    Read the article

  • SQL SERVER – Server Side Paging in SQL Server 2011 Performance Comparison

    - by pinaldave
    Earlier, I have written about SQL SERVER – Server Side Paging in SQL Server 2011 – A Better Alternative. I got many emails asking for performance analysis of paging. Here is the quick analysis of it. The real challenge of paging is all the unnecessary IO reads from the database. Network traffic was one of the reasons why paging has become a very expensive operation. I have seen many legacy applications where a complete resultset is brought back to the application and paging has been done. As what you have read earlier, SQL Server 2011 offers a better alternative to an age-old solution. This article has been divided into two parts: Test 1: Performance Comparison of the Two Different Pages on SQL Server 2011 Method In this test, we will analyze the performance of the two different pages where one is at the beginning of the table and the other one is at its end. Test 2: Performance Comparison of the Two Different Pages Using CTE (Earlier Solution from SQL Server 2005/2008) and the New Method of SQL Server 2011 We will explore this in the next article. This article will tackle test 1 first. Test 1: Retrieving Page from two different locations of the table. Run the following T-SQL Script and compare the performance. SET STATISTICS IO ON; USE AdventureWorks2008R2 GO DECLARE @RowsPerPage INT = 10, @PageNumber INT = 5 SELECT * FROM Sales.SalesOrderDetail ORDER BY SalesOrderDetailID OFFSET @PageNumber*@RowsPerPage ROWS FETCH NEXT 10 ROWS ONLY GO USE AdventureWorks2008R2 GO DECLARE @RowsPerPage INT = 10, @PageNumber INT = 12100 SELECT * FROM Sales.SalesOrderDetail ORDER BY SalesOrderDetailID OFFSET @PageNumber*@RowsPerPage ROWS FETCH NEXT 10 ROWS ONLY GO You will notice that when we are reading the page from the beginning of the table, the database pages read are much lower than when the page is read from the end of the table. This is very interesting as when the the OFFSET changes, PAGE IO is increased or decreased. In the normal case of the search engine, people usually read it from the first few pages, which means that IO will be increased as we go further in the higher parts of navigation. I am really impressed because using the new method of SQL Server 2011,  PAGE IO will be much lower when the first few pages are searched in the navigation. Test 2: Retrieving Page from two different locations of the table and comparing to earlier versions. In this test, we will compare the queries of the Test 1 with the earlier solution via Common Table Expression (CTE) which we utilized in SQL Server 2005 and SQL Server 2008. 
Test 2 A : Page early in the table -- Test with pages early in table USE AdventureWorks2008R2 GO DECLARE @RowsPerPage INT = 10, @PageNumber INT = 5 ;WITH CTE_SalesOrderDetail AS ( SELECT *, ROW_NUMBER() OVER( ORDER BY SalesOrderDetailID) AS RowNumber FROM Sales.SalesOrderDetail PC) SELECT * FROM CTE_SalesOrderDetail WHERE RowNumber >= @PageNumber*@RowsPerPage+1 AND RowNumber <= (@PageNumber+1)*@RowsPerPage ORDER BY SalesOrderDetailID GO SET STATISTICS IO ON; USE AdventureWorks2008R2 GO DECLARE @RowsPerPage INT = 10, @PageNumber INT = 5 SELECT * FROM Sales.SalesOrderDetail ORDER BY SalesOrderDetailID OFFSET @PageNumber*@RowsPerPage ROWS FETCH NEXT 10 ROWS ONLY GO Test 2 B : Page later in the table -- Test with pages later in table USE AdventureWorks2008R2 GO DECLARE @RowsPerPage INT = 10, @PageNumber INT = 12100 ;WITH CTE_SalesOrderDetail AS ( SELECT *, ROW_NUMBER() OVER( ORDER BY SalesOrderDetailID) AS RowNumber FROM Sales.SalesOrderDetail PC) SELECT * FROM CTE_SalesOrderDetail WHERE RowNumber >= @PageNumber*@RowsPerPage+1 AND RowNumber <= (@PageNumber+1)*@RowsPerPage ORDER BY SalesOrderDetailID GO SET STATISTICS IO ON; USE AdventureWorks2008R2 GO DECLARE @RowsPerPage INT = 10, @PageNumber INT = 12100 SELECT * FROM Sales.SalesOrderDetail ORDER BY SalesOrderDetailID OFFSET @PageNumber*@RowsPerPage ROWS FETCH NEXT 10 ROWS ONLY GO From the resultset, it is very clear that in the earlier case, the pages read in the solution are always much higher than the new technique introduced in SQL Server 2011 even if we don’t retrieve all the data to the screen. If you carefully look at both the comparisons, the PAGE IO is much lesser in the case of the new technique introduced in SQL Server 2011 when we read the page from the beginning of the table and when we read it from the end. I consider this as a big improvement as paging is one of the most used features for the most part of the application. The solution introduced in SQL Server 2011 is very elegant because it also improves the performance of the query and, at large, the database. Reference : Pinal Dave (http://blog.SQLAuthority.com) Filed under: SQL, SQL Authority, SQL Optimization, SQL Performance, SQL Query, SQL Server, SQL Tips and Tricks, T SQL, Technology

    Read the article

  • A way of doing real-world test-driven development (and some thoughts about it)

    - by Thomas Weller
    Lately, I exchanged some arguments with Derick Bailey about some details of the red-green-refactor cycle of the Test-driven development process. In short, the issue revolved around the fact that it’s not enough to have a test red or green, but it’s also important to have it red or green for the right reasons. While for me, it’s sufficient to initially have a NotImplementedException in place, Derick argues that this is not totally correct (see these two posts: Red/Green/Refactor, For The Right Reasons and Red For The Right Reason: Fail By Assertion, Not By Anything Else). And he’s right. But on the other hand, I had no idea how his insights could have any practical consequence for my own individual interpretation of the red-green-refactor cycle (which is not really red-green-refactor, at least not in its pure sense, see the rest of this article). This made me think deeply for some days now. In the end I found out that the ‘right reason’ changes in my understanding depending on what development phase I’m in. To make this clear (at least I hope it becomes clear…) I started to describe my way of working in some detail, and then something strange happened: The scope of the article slightly shifted from focusing ‘only’ on the ‘right reason’ issue to something more general, which you might describe as something like  'Doing real-world TDD in .NET , with massive use of third-party add-ins’. This is because I feel that there is a more general statement about Test-driven development to make:  It’s high time to speak about the ‘How’ of TDD, not always only the ‘Why’. Much has been said about this, and me myself also contributed to that (see here: TDD is not about testing, it's about how we develop software). But always justifying what you do is very unsatisfying in the long run, it is inherently defensive, and it costs time and effort that could be used for better and more important things. And frankly: I’m somewhat sick and tired of repeating time and again that the test-driven way of software development is highly preferable for many reasons - I don’t want to spent my time exclusively on stating the obvious… So, again, let’s say it clearly: TDD is programming, and programming is TDD. Other ways of programming (code-first, sometimes called cowboy-coding) are exceptional and need justification. – I know that there are many people out there who will disagree with this radical statement, and I also know that it’s not a description of the real world but more of a mission statement or something. But nevertheless I’m absolutely sure that in some years this statement will be nothing but a platitude. Side note: Some parts of this post read as if I were paid by Jetbrains (the manufacturer of the ReSharper add-in – R#), but I swear I’m not. Rather I think that Visual Studio is just not production-complete without it, and I wouldn’t even consider to do professional work without having this add-in installed... The three parts of a software component Before I go into some details, I first should describe my understanding of what belongs to a software component (assembly, type, or method) during the production process (i.e. the coding phase). Roughly, I come up with the three parts shown below:   First, we need to have some initial sort of requirement. This can be a multi-page formal document, a vague idea in some programmer’s brain of what might be needed, or anything in between. In either way, there has to be some sort of requirement, be it explicit or not. 
– At the C# micro-level, the best way that I found to formulate that is to define interfaces for just about everything, even for internal classes, and to provide them with exhaustive xml comments. The next step then is to re-formulate these requirements in an executable form. This is specific to the respective programming language. - For C#/.NET, the Gallio framework (which includes MbUnit) in conjunction with the ReSharper add-in for Visual Studio is my toolset of choice. The third part then finally is the production code itself. It’s development is entirely driven by the requirements and their executable formulation. This is the delivery, the two other parts are ‘only’ there to make its production possible, to give it a decent quality and reliability, and to significantly reduce related costs down the maintenance timeline. So while the first two parts are not really relevant for the customer, they are very important for the developer. The customer (or in Scrum terms: the Product Owner) is not interested at all in how  the product is developed, he is only interested in the fact that it is developed as cost-effective as possible, and that it meets his functional and non-functional requirements. The rest is solely a matter of the developer’s craftsmanship, and this is what I want to talk about during the remainder of this article… An example To demonstrate my way of doing real-world TDD, I decided to show the development of a (very) simple Calculator component. The example is deliberately trivial and silly, as examples always are. I am totally aware of the fact that real life is never that simple, but I only want to show some development principles here… The requirement As already said above, I start with writing down some words on the initial requirement, and I normally use interfaces for that, even for internal classes - the typical question “intf or not” doesn’t even come to mind. I need them for my usual workflow and using them automatically produces high componentized and testable code anyway. To think about their usage in every single situation would slow down the production process unnecessarily. So this is what I begin with: namespace Calculator {     /// <summary>     /// Defines a very simple calculator component for demo purposes.     /// </summary>     public interface ICalculator     {         /// <summary>         /// Gets the result of the last successful operation.         /// </summary>         /// <value>The last result.</value>         /// <remarks>         /// Will be <see langword="null" /> before the first successful operation.         /// </remarks>         double? LastResult { get; }       } // interface ICalculator   } // namespace Calculator So, I’m not beginning with a test, but with a sort of code declaration - and still I insist on being 100% test-driven. There are three important things here: Starting this way gives me a method signature, which allows to use IntelliSense and AutoCompletion and thus eliminates the danger of typos - one of the most regular, annoying, time-consuming, and therefore expensive sources of error in the development process. In my understanding, the interface definition as a whole is more of a readable requirement document and technical documentation than anything else. So this is at least as much about documentation than about coding. The documentation must completely describe the behavior of the documented element. I normally use an IoC container or some sort of self-written provider-like model in my architecture. 
In either case, I need my components defined via service interfaces anyway. - I will use the LinFu IoC framework here, for no other reason as that is is very simple to use. The ‘Red’ (pt. 1)   First I create a folder for the project’s third-party libraries and put the LinFu.Core dll there. Then I set up a test project (via a Gallio project template), and add references to the Calculator project and the LinFu dll. Finally I’m ready to write the first test, which will look like the following: namespace Calculator.Test {     [TestFixture]     public class CalculatorTest     {         private readonly ServiceContainer container = new ServiceContainer();           [Test]         public void CalculatorLastResultIsInitiallyNull()         {             ICalculator calculator = container.GetService<ICalculator>();               Assert.IsNull(calculator.LastResult);         }       } // class CalculatorTest   } // namespace Calculator.Test       This is basically the executable formulation of what the interface definition states (part of). Side note: There’s one principle of TDD that is just plain wrong in my eyes: I’m talking about the Red is 'does not compile' thing. How could a compiler error ever be interpreted as a valid test outcome? I never understood that, it just makes no sense to me. (Or, in Derick’s terms: this reason is as wrong as a reason ever could be…) A compiler error tells me: Your code is incorrect, but nothing more.  Instead, the ‘Red’ part of the red-green-refactor cycle has a clearly defined meaning to me: It means that the test works as intended and fails only if its assumptions are not met for some reason. Back to our Calculator. When I execute the above test with R#, the Gallio plugin will give me this output: So this tells me that the test is red for the wrong reason: There’s no implementation that the IoC-container could load, of course. So let’s fix that. With R#, this is very easy: First, create an ICalculator - derived type:        Next, implement the interface members: And finally, move the new class to its own file: So far my ‘work’ was six mouse clicks long, the only thing that’s left to do manually here, is to add the Ioc-specific wiring-declaration and also to make the respective class non-public, which I regularly do to force my components to communicate exclusively via interfaces: This is what my Calculator class looks like as of now: using System; using LinFu.IoC.Configuration;   namespace Calculator {     [Implements(typeof(ICalculator))]     internal class Calculator : ICalculator     {         public double? LastResult         {             get             {                 throw new NotImplementedException();             }         }     } } Back to the test fixture, we have to put our IoC container to work: [TestFixture] public class CalculatorTest {     #region Fields       private readonly ServiceContainer container = new ServiceContainer();       #endregion // Fields       #region Setup/TearDown       [FixtureSetUp]     public void FixtureSetUp()     {        container.LoadFrom(AppDomain.CurrentDomain.BaseDirectory, "Calculator.dll");     }       ... Because I have a R# live template defined for the setup/teardown method skeleton as well, the only manual coding here again is the IoC-specific stuff: two lines, not more… The ‘Red’ (pt. 2) Now, the execution of the above test gives the following result: This time, the test outcome tells me that the method under test is called. 
And this is the point, where Derick and I seem to have somewhat different views on the subject: Of course, the test still is worthless regarding the red/green outcome (or: it’s still red for the wrong reasons, in that it gives a false negative). But as far as I am concerned, I’m not really interested in the test outcome at this point of the red-green-refactor cycle. Rather, I only want to assert that my test actually calls the right method. If that’s the case, I will happily go on to the ‘Green’ part… The ‘Green’ Making the test green is quite trivial. Just make LastResult an automatic property:     [Implements(typeof(ICalculator))]     internal class Calculator : ICalculator     {         public double? LastResult { get; private set; }     }         One more round… Now on to something slightly more demanding (cough…). Let’s state that our Calculator exposes an Add() method:         ...   /// <summary>         /// Adds the specified operands.         /// </summary>         /// <param name="operand1">The operand1.</param>         /// <param name="operand2">The operand2.</param>         /// <returns>The result of the additon.</returns>         /// <exception cref="ArgumentException">         /// Argument <paramref name="operand1"/> is &lt; 0.<br/>         /// -- or --<br/>         /// Argument <paramref name="operand2"/> is &lt; 0.         /// </exception>         double Add(double operand1, double operand2);       } // interface ICalculator A remark: I sometimes hear the complaint that xml comment stuff like the above is hard to read. That’s certainly true, but irrelevant to me, because I read xml code comments with the CR_Documentor tool window. And using that, it looks like this:   Apart from that, I’m heavily using xml code comments (see e.g. here for a detailed guide) because there is the possibility of automating help generation with nightly CI builds (using MS Sandcastle and the Sandcastle Help File Builder), and then publishing the results to some intranet location.  This way, a team always has first class, up-to-date technical documentation at hand about the current codebase. (And, also very important for speeding up things and avoiding typos: You have IntelliSense/AutoCompletion and R# support, and the comments are subject to compiler checking…).     Back to our Calculator again: Two more R# – clicks implement the Add() skeleton:         ...           public double Add(double operand1, double operand2)         {             throw new NotImplementedException();         }       } // class Calculator As we have stated in the interface definition (which actually serves as our requirement document!), the operands are not allowed to be negative. So let’s start implementing that. Here’s the test: [Test] [Row(-0.5, 2)] public void AddThrowsOnNegativeOperands(double operand1, double operand2) {     ICalculator calculator = container.GetService<ICalculator>();       Assert.Throws<ArgumentException>(() => calculator.Add(operand1, operand2)); } As you can see, I’m using a data-driven unit test method here, mainly for these two reasons: Because I know that I will have to do the same test for the second operand in a few seconds, I save myself from implementing another test method for this purpose. Rather, I only will have to add another Row attribute to the existing one. From the test report below, you can see that the argument values are explicitly printed out. 
This can be a valuable documentation feature even when everything is green: One can quickly review what values were tested exactly - the complete Gallio HTML-report (as it will be produced by the Continuous Integration runs) shows these values in a quite clear format (see below for an example). Back to our Calculator development again, this is what the test result tells us at the moment: So we’re red again, because there is not yet an implementation… Next we go on and implement the necessary parameter verification to become green again, and then we do the same thing for the second operand. To make a long story short, here’s the test and the method implementation at the end of the second cycle: // in CalculatorTest:   [Test] [Row(-0.5, 2)] [Row(295, -123)] public void AddThrowsOnNegativeOperands(double operand1, double operand2) {     ICalculator calculator = container.GetService<ICalculator>();       Assert.Throws<ArgumentException>(() => calculator.Add(operand1, operand2)); }   // in Calculator: public double Add(double operand1, double operand2) {     if (operand1 < 0.0)     {         throw new ArgumentException("Value must not be negative.", "operand1");     }     if (operand2 < 0.0)     {         throw new ArgumentException("Value must not be negative.", "operand2");     }     throw new NotImplementedException(); } So far, we have sheltered our method from unwanted input, and now we can safely operate on the parameters without further caring about their validity (this is my interpretation of the Fail Fast principle, which is regarded here in more detail). Now we can think about the method’s successful outcomes. First let’s write another test for that: [Test] [Row(1, 1, 2)] public void TestAdd(double operand1, double operand2, double expectedResult) {     ICalculator calculator = container.GetService<ICalculator>();       double result = calculator.Add(operand1, operand2);       Assert.AreEqual(expectedResult, result); } Again, I’m regularly using row based test methods for these kinds of unit tests. The above shown pattern proved to be extremely helpful for my development work, I call it the Defined-Input/Expected-Output test idiom: You define your input arguments together with the expected method result. There are two major benefits from that way of testing: In the course of refining a method, it’s very likely to come up with additional test cases. In our case, we might add tests for some edge cases like ‘one of the operands is zero’ or ‘the sum of the two operands causes an overflow’, or maybe there’s an external test protocol that has to be fulfilled (e.g. an ISO norm for medical software), and this results in the need of testing against additional values. In all these scenarios we only have to add another Row attribute to the test. Remember that the argument values are written to the test report, so as a side-effect this produces valuable documentation. (This can become especially important if the fulfillment of some sort of external requirements has to be proven). 
So your test method might look something like that in the end: [Test, Description("Arguments: operand1, operand2, expectedResult")] [Row(1, 1, 2)] [Row(0, 999999999, 999999999)] [Row(0, 0, 0)] [Row(0, double.MaxValue, double.MaxValue)] [Row(4, double.MaxValue - 2.5, double.MaxValue)] public void TestAdd(double operand1, double operand2, double expectedResult) {     ICalculator calculator = container.GetService<ICalculator>();       double result = calculator.Add(operand1, operand2);       Assert.AreEqual(expectedResult, result); } And this will produce the following HTML report (with Gallio):   Not bad for the amount of work we invested in it, huh? - There might be scenarios where reports like that can be useful for demonstration purposes during a Scrum sprint review… The last requirement to fulfill is that the LastResult property is expected to store the result of the last operation. I don’t show this here, it’s trivial enough and brings nothing new… And finally: Refactor (for the right reasons) To demonstrate my way of going through the refactoring portion of the red-green-refactor cycle, I added another method to our Calculator component, namely Subtract(). Here’s the code (tests and production): // CalculatorTest.cs:   [Test, Description("Arguments: operand1, operand2, expectedResult")] [Row(1, 1, 0)] [Row(0, 999999999, -999999999)] [Row(0, 0, 0)] [Row(0, double.MaxValue, -double.MaxValue)] [Row(4, double.MaxValue - 2.5, -double.MaxValue)] public void TestSubtract(double operand1, double operand2, double expectedResult) {     ICalculator calculator = container.GetService<ICalculator>();       double result = calculator.Subtract(operand1, operand2);       Assert.AreEqual(expectedResult, result); }   [Test, Description("Arguments: operand1, operand2, expectedResult")] [Row(1, 1, 0)] [Row(0, 999999999, -999999999)] [Row(0, 0, 0)] [Row(0, double.MaxValue, -double.MaxValue)] [Row(4, double.MaxValue - 2.5, -double.MaxValue)] public void TestSubtractGivesExpectedLastResult(double operand1, double operand2, double expectedResult) {     ICalculator calculator = container.GetService<ICalculator>();       calculator.Subtract(operand1, operand2);       Assert.AreEqual(expectedResult, calculator.LastResult); }   ...   // ICalculator.cs: /// <summary> /// Subtracts the specified operands. /// </summary> /// <param name="operand1">The operand1.</param> /// <param name="operand2">The operand2.</param> /// <returns>The result of the subtraction.</returns> /// <exception cref="ArgumentException"> /// Argument <paramref name="operand1"/> is &lt; 0.<br/> /// -- or --<br/> /// Argument <paramref name="operand2"/> is &lt; 0. /// </exception> double Subtract(double operand1, double operand2);   ...   // Calculator.cs:   public double Subtract(double operand1, double operand2) {     if (operand1 < 0.0)     {         throw new ArgumentException("Value must not be negative.", "operand1");     }       if (operand2 < 0.0)     {         throw new ArgumentException("Value must not be negative.", "operand2");     }       return (this.LastResult = operand1 - operand2).Value; }   Obviously, the argument validation stuff that was produced during the red-green part of our cycle duplicates the code from the previous Add() method. So, to avoid code duplication and minimize the number of code lines of the production code, we do an Extract Method refactoring. 
One more time, this is only a matter of a few mouse clicks (and giving the new method a name) with R#. Having done that, our production code finally looks like this:

using System;
using LinFu.IoC.Configuration;

namespace Calculator
{
    [Implements(typeof(ICalculator))]
    internal class Calculator : ICalculator
    {
        #region ICalculator

        public double? LastResult { get; private set; }

        public double Add(double operand1, double operand2)
        {
            ThrowIfOneOperandIsInvalid(operand1, operand2);

            return (this.LastResult = operand1 + operand2).Value;
        }

        public double Subtract(double operand1, double operand2)
        {
            ThrowIfOneOperandIsInvalid(operand1, operand2);

            return (this.LastResult = operand1 - operand2).Value;
        }

        #endregion // ICalculator

        #region Implementation (Helper)

        private static void ThrowIfOneOperandIsInvalid(double operand1, double operand2)
        {
            if (operand1 < 0.0)
            {
                throw new ArgumentException("Value must not be negative.", "operand1");
            }

            if (operand2 < 0.0)
            {
                throw new ArgumentException("Value must not be negative.", "operand2");
            }
        }

        #endregion // Implementation (Helper)

    } // class Calculator

} // namespace Calculator

But is the above worth the effort at all? It's obviously trivial and not very impressive. All our tests were green (for the right reasons), and refactoring the code did not change anything. It's not immediately clear how this refactoring work adds value to the project. Derick puts it like this:

STOP! Hold on a second… before you go any further and before you even think about refactoring what you just wrote to make your test pass, you need to understand something: if you're done with your requirements after making the test green, you are not required to refactor the code. I know… I'm speaking heresy, here. Toss me to the wolves, I've gone over to the dark side! Seriously, though… if your test is passing for the right reasons, and you do not need to write any test or any more code for your class at this point, what value does refactoring add?

Derick immediately answers his own question:

So why should you follow the refactor portion of red/green/refactor? When you have added code that makes the system less readable, less understandable, less expressive of the domain or concern's intentions, less architecturally sound, less DRY, etc, then you should refactor it.

I couldn't state it more precisely. From my personal perspective, I'd add the following: you have to keep in mind that real-world software systems are usually quite large and there are dozens or even hundreds of occasions where micro-refactorings like the above can be applied. It's the sum of them all that counts. And to have a good overall quality of the system (e.g. in terms of the Code Duplication Percentage metric) you have to be pedantic about the individual, seemingly trivial cases. My job regularly requires the reading and understanding of 'foreign' code.
So code quality/readability really makes a HUGE difference for me – sometimes it can even be the difference between project success and failure…

Conclusions

The development process described above emerged over the years, and there were mainly two things that guided its evolution (you might call them eternal principles, personal beliefs, or anything in between):

Test-driven development is the normal, natural way of writing software; code-first is the exception. So 'doing TDD or not' is not a question. And good, stable code can only reliably be produced by doing TDD (yes, I know: many will strongly disagree here again, but I've never seen high-quality code – and high-quality code is code that stood the test of time and causes low maintenance costs – that was produced code-first…).

It's the production code that pays our bills in the end. (Though I have seen customers these days who demand an acceptance test battery as part of the final delivery. Things seem to be going in the right direction…). The test code serves 'only' to make the production code work. But it's solely the number of delivered features that counts at the end of the day - no matter how much test code you wrote or how good it is.

With these two things in mind, I tried to optimize my coding process for coding speed – or, in business terms: productivity – without sacrificing the principles of TDD (more than I'd do either way…). As a result, I consider a ratio of about 3-5/1 for test code vs. production code as normal and desirable. In other words: roughly 60-80% of my code is test code. (This might sound heavy, but that is mainly because software development standards are only beginning to evolve. The entire software development profession is, historically speaking, very young – only at the very beginning – and there are no viable standards yet. If you think of software development as a kind of casting process, where the test code is the mold and the resulting production code is the final product, then the above ratio no longer sounds extraordinary…)

Although the above might look like a lot of unnecessary work at first sight, it's not. With the aid of the mentioned add-ins, doing all of the above is a matter of minutes, sometimes seconds (while writing this post took hours and days…). The most important thing is to have the right tools at hand. Slow developer machines, or the lack of a tool or something like that - to 'save' a few hundred bucks - is just not acceptable and a very bad decision in business terms (though I have seen and heard that quite a few times…). Producing high-quality products requires high-quality tools. This is a platitude that every craftsman knows…

The round-trip described here takes me about five to ten minutes in my real-world development practice. I guess that's about 30% more time compared to developing the 'traditional' (code-first) way. But the product manufactured this way is of much higher quality and massively reduces maintenance costs, which is by far the single biggest cost factor, as I showed in this previous post: It's the maintenance, stupid! (or: Something is rotten in developerland.). In the end, this is a highly cost-effective way of developing software… But on the other hand, there clearly is a trade-off here: coding speed vs. code quality/later maintenance costs. The development method described here might be a perfect fit for the overwhelming majority of software projects, but there certainly are some scenarios where it's not - e.g.
if time-to-market is crucial for a software project. So this is a business decision in the end. It's just that you have to know what you're doing and what consequences it might have…

Some last words

First, I'd like to thank Derick Bailey again. His two aforementioned posts (which I strongly recommend reading) inspired me to think deeply about my own personal way of doing TDD and to clarify my thoughts about it. I wouldn't have done that without this inspiration. I really enjoy that kind of discussion… I agree with him in all respects. But I don't know (yet?) how to bring his insights into the described production process without slowing things down. The method described above has proved to be very "good enough" in my practical experience. But of course, I'm open to suggestions here… My rationale for now is: if the test is initially red during the red-green-refactor cycle, the 'right reason' is that it actually calls the right method, but this method is not yet operational. Later on, when the cycle is finished and the tests become part of the regular, automated Continuous Integration process, 'red' certainly must occur for the 'right reason': in this phase, 'red' MUST mean nothing but an unfulfilled assertion - Fail By Assertion, Not By Anything Else!
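To make that last point a bit more concrete, here is a minimal sketch of what 'red for the right reason' looks like at the start of a cycle. The Multiply() method is purely hypothetical and not part of the Calculator requirements above; the point is that the test compiles, resolves the component from the container, and calls the right method - it fails only because that method is not yet operational:

// Hypothetical sketch, assuming the same MbUnit/Gallio + LinFu container setup as above.
// The test is red for the right reason: it calls the real method, which isn't operational yet.

[Test]
[Row(3, 4, 12)]
public void TestMultiply(double operand1, double operand2, double expectedResult)
{
    ICalculator calculator = container.GetService<ICalculator>();

    double result = calculator.Multiply(operand1, operand2);

    Assert.AreEqual(expectedResult, result);
}

// in Calculator: the method already exists, so the test compiles and reaches it,
// but it fails with NotImplementedException until the real implementation is written.
public double Multiply(double operand1, double operand2)
{
    throw new NotImplementedException();
}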


  • Optional Parameters and Named Arguments in C# 4 (and a cool scenario w/ ASP.NET MVC 2)

    - by ScottGu
[In addition to blogging, I am also now using Twitter for quick updates and to share links. Follow me at: twitter.com/scottgu]

This is the seventeenth in a series of blog posts I'm doing on the upcoming VS 2010 and .NET 4 release. Today's post covers two new language features being added to C# 4.0 – optional parameters and named arguments – as well as a cool way you can take advantage of optional parameters (both in VB and C#) with ASP.NET MVC 2.

Optional Parameters in C# 4.0

C# 4.0 now supports using optional parameters with methods, constructors, and indexers (note: VB has supported optional parameters for a while). Parameters are optional when a default value is specified as part of a declaration. For example, the method below takes two parameters – a "category" string parameter and a "pageIndex" integer parameter. The "pageIndex" parameter has a default value of 0, and as such is an optional parameter. (The original post shows these snippets as screenshots; a rough code sketch of them appears at the end of this post.) When calling the above method we can explicitly pass two parameters to it, or we can omit the second optional parameter – in which case the default value of 0 will be passed. Note that VS 2010's IntelliSense indicates when a parameter is optional, as well as what its default value is, when statement completion is displayed.

Named Arguments and Optional Parameters in C# 4.0

C# 4.0 also now supports the concept of "named arguments". This allows you to explicitly name an argument you are passing to a method – instead of just identifying it by argument position. For example, I could explicitly identify the second argument passed to the GetProductsByCategory method by name (making its usage a little more explicit). Named arguments come in very useful when a method supports multiple optional parameters and you want to specify which arguments you are passing. For example, below we have a method DoSomething that takes two optional parameters; we could use named arguments to call it in any of several ways. Because both parameters are optional, in cases where only one (or zero) of the parameters is specified, the default value for any non-specified argument is passed.

ASP.NET MVC 2 and Optional Parameters

One nice usage scenario where we can now take advantage of the optional parameter support of VB and C# is with ASP.NET MVC 2's input binding support for action methods on Controller classes. For example, consider a scenario where we want to map URLs like "Products/Browse/Beverages" or "Products/Browse/Deserts" to a controller action method. We could do this by writing a URL routing rule that maps the URLs to a method. We could then optionally use a "page" querystring value to indicate whether or not the results displayed by the Browse method should be paged – and if so, which page of the results should be displayed. For example: /Products/Browse/Beverages?page=2.

With ASP.NET MVC 1 you would typically handle this scenario by adding a "page" parameter to the action method and making it a nullable int (which means it will be null if the "page" querystring value is not present). You could then write code to convert the nullable int to an int – and assign it a default value if it was not present in the querystring. With ASP.NET MVC 2 you can now take advantage of the optional parameter support in VB and C# to express this behavior more concisely and clearly. Simply declare the action method parameter as an optional parameter with a default value (the original post shows both the C# and the VB version). If the "page" value is present in the querystring (e.g.
/Products/Browse/Beverages?page=22) then it will be passed to the action method as an integer. If the "page" value is not in the querystring (e.g. /Products/Browse/Beverages) then the default value of 0 will be passed to the action method. This makes the code a little more concise and readable.

Summary

There are a bunch of great new language features coming to both C# and VB with VS 2010. Optional parameters and named arguments are just two of them. I'll blog about more in the weeks and months ahead. If you are looking for a good book that summarizes all the language features in C# (including C# 4.0), and that also provides a nice summary of the core .NET class libraries, you might want to check out the newly released C# 4.0 in a Nutshell book from O'Reilly: it does a very nice job of packing a lot of content into a format that is easy to search and to find samples in.

Hope this helps,

Scott
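As mentioned above, the original post shows its code as screenshots, so here is a rough C# sketch of the examples it discusses. Only the names GetProductsByCategory, DoSomething, Browse and the "page"/"pageIndex" parameters come from the prose; the method bodies and surrounding types are assumptions for illustration:

using System.Web.Mvc;   // ASP.NET MVC 2

public class ProductsController : Controller
{
    // Optional parameter: pageIndex has a default value, so callers may omit it.
    private static string GetProductsByCategory(string category, int pageIndex = 0)
    {
        return string.Format("{0}, page {1}", category, pageIndex);
    }

    // A method with two optional parameters.
    private static void DoSomething(string name = "default", int count = 1)
    {
    }

    // MVC 2 action method: "page" is bound from the ?page= querystring value
    // and falls back to 0 when it is absent.
    public ActionResult Browse(string category, int page = 0)
    {
        // Positional call, omitted optional argument, and a named argument:
        var explicitCall  = GetProductsByCategory("Beverages", 2);
        var defaultedCall = GetProductsByCategory("Beverages");             // pageIndex = 0
        var namedCall     = GetProductsByCategory("Beverages", pageIndex: 3);

        // Named arguments with multiple optional parameters:
        DoSomething();                            // both defaults
        DoSomething(count: 5);                    // name only the second argument
        DoSomething(name: "example", count: 5);   // name both

        return Content(GetProductsByCategory(category, page));
    }
}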


  • Best triple head display setup

    - by dgel
I'm currently running Ubuntu 12.04 with a darn good triple head display setup. I've got a VisionTek 900530 Radeon HD 5450 512MB DDR3 PCI Express video card that has two DVI outputs and one Mini DisplayPort that I have connected to an HDMI adapter. I'm running three identical Asus 1920x1080 monitors that each have a DVI, VGA, and HDMI input. I'm using the xorg-edgers ppa, so I'm using the open source radeon driver version 6.99.99. I tried using the ATI binary fglrx driver, but I wasn't able to get the three monitors working properly - the monitor connected via HDMI / DisplayPort wouldn't run at full resolution.

The setup is almost perfect: Compiz runs fine and is quite snappy. I'm not able to use that great Compiz feature where you can drag a window to the side of a display and it will half maximize. I occasionally experience display corruption weirdness with Unity and need to restart it. When I use a dropdown menu in LibreOffice it often pops the menu down in another window. For example, if I'm using the center monitor and click the Insert menu, the menu pulls down on the monitor to my right, forcing me to chase it. If I chase down the menu and choose Manual Break, the dialog appears over on my left monitor. This absurdity is mildly entertaining but has lost its novelty.

I've decided to build a new system and have spared no expense - latest i7 processor, SSD, etc. I really like the performance of the Nvidia binary drivers, so I put two ZOTAC ZT-40707-10L GeForce GT 440 cards in the system, figuring I'd have four DVI outputs and an awesome triple (or eventually even quad) head setup. Unfortunately it appears that I didn't do sufficient research before my purchase. It seems that Nvidia TwinView only supports two monitors on one card (I guess that's why they call it TwinView...). I messed around with running two X servers, but I really don't want that - being able to drag windows to any monitor is critical. It doesn't sound like Xinerama is an option because, from what I understand, it simply doesn't support Compiz. I've seen a BaseMosaic option that can be used with the Nvidia drivers that appears to support an almost unlimited number of heads - unfortunately my cheap little cards don't support it. I'm also not sure whether you'd still have all the nice maximizing and snapping that TwinView provides, or whether Ubuntu would only see it as one massive display.

I put my trusty old ATI card into my new system and installed 12.10. I'm using the open source radeon drivers again because even in 12.10 I can't get the fglrx binary drivers to do triple head. Unfortunately, even with an unbelievably powerful system the experience is extremely sluggish (much more so than my experience in 12.04). The menu scattering problem appears to be fixed, but I get a lot of nasty Unity display corruption.

So finally, my question is this: What hardware / drivers should I use? I'm willing to buy (almost) any video card(s). I have two PCI-Express 3.0 slots on my motherboard (which has an integrated Intel HD card). I'm willing to use ATI or Nvidia cards and willing to run Ubuntu 12.04.1 or 12.10. I'm not a gamer, but I do want beautiful and snappy Compiz effects. Does anyone out there have the perfect triple head setup in 12.04 or 12.10? What hardware / drivers are you using? I have those two Nvidia cards but will probably be returning them unless someone knows a way to use them together for a triple head setup.
Since I'm having pretty good luck with a single ATI card providing three displays, should I just buy a beefier one in the hope that it will fix the horrible sluggishness I'm experiencing in 12.10?
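For reference, this is roughly what I understand a BaseMosaic setup to look like in xorg.conf on Nvidia cards that support it - only a sketch, since my cheap cards don't support the option and I haven't been able to test it; the identifiers, output names, and layout are guesses that would need adjusting per machine:

Section "Device"
    Identifier  "nvidia0"
    Driver      "nvidia"
    # BaseMosaic lets a single X screen span displays across supported GPUs
    Option      "BaseMosaic" "true"
EndSection

Section "Screen"
    Identifier  "Screen0"
    Device      "nvidia0"
    # Example 3 x 1920x1080 side-by-side layout; DFP-0/1/2 are placeholder output names
    Option      "MetaModes" "DFP-0: 1920x1080 +0+0, DFP-1: 1920x1080 +1920+0, DFP-2: 1920x1080 +3840+0"
EndSection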

