Search Results

Search found 3636 results on 146 pages for 'pub david'.

  • SSH issue when do the Git clone

    - by palani
    Hi, I have my own Ubuntu Linux box, and in it I have /home/username/.ssh/id_rsa.pub. I copied the public key and pasted it into GitHub, but when I try to clone the git repository it fails with the following error: warning: Authentication failed. Disconnected; no more authentication methods available (No further authentication methods available.). I have no idea what is wrong. PuTTY connects to my server and authenticates fine, because I copied my public key to my server's known hosts folder. Can anyone please help with this? It's heartbreaking.

    Read the article

  • PG::Error: ERROR: operator does not exist: integer ~~ unknown

    - by rsvmrk
    I'm making a search function in a Rails project with Postgres as the database. Here's my code:

        def self.search(search)
          if search
            find(:all, :conditions => ["LOWER(name) LIKE LOWER(?) OR LOWER(city) LIKE LOWER(?) OR LOWER(address) LIKE LOWER(?) OR (venue_type) LIKE (?)", "%#{search}%", "%#{search}%", "%#{search}%", "%#{search}%"])
          else
            find(:all)
          end
        end

    But my problem is that "venue_type" is an integer. I've made a case switch for venue_type:

        def venue_type_check
          case self.venue_type
          when 1
            "Pub"
          when 2
            "Nattklubb"
          end
        end

    Now to my question: how can I find something in my query when venue_type is an int?

    Read the article

  • Should I use a config file or database for storing business rules?

    - by foiseworth
    I have recently been reading The Pragmatic Programmer, which states that: Details mess up our pristine code—especially if they change frequently. Every time we have to go in and change the code to accommodate some change in business logic, or in the law, or in management's personal tastes of the day, we run the risk of breaking the system—of introducing a new bug. Hunt, Andrew; Thomas, David (1999-10-20). The Pragmatic Programmer: From Journeyman to Master (Kindle Locations 2651-2653). Pearson Education (USA). Kindle Edition. I am currently programming a web app that has some models with properties that can only take a value from a fixed set, e.g. (not the actual example, as the web app's data is confidential): light-type = sphere / cube / cylinder. The light type can only be the above three values, but according to TPP I should always code as if they could change and place their values in a config file. As there are several instances of this throughout the app, my question is: should I store possible values like these in:

    - a config file: 'light-types' = array(sphere, cube, cylinder), 'other-type' = value, 'etc' = etc-value
    - a single table in a database with one line for each config item
    - a database with a table for each config item (e.g. table: light_types; columns: id, name)
    - some other way?

    Many thanks for any assistance / expertise offered.
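
    To make the config-file option concrete, here is a minimal sketch in Python; the file name, keys and values are hypothetical, and the same idea applies in whatever language the app uses:

        import json

        # Hypothetical config file, e.g. config.json:
        #   {"light-types": ["sphere", "cube", "cylinder"], "other-type": ["value"]}
        with open("config.json") as f:
            allowed_values = json.load(f)

        def is_allowed(property_name, value):
            """True if value is one of the configured options for this property."""
            return value in allowed_values.get(property_name, [])

        print(is_allowed("light-types", "cube"))     # True
        print(is_allowed("light-types", "pyramid"))  # False

    A single lookup table gives the same check via a query instead of a file read; which fits better mostly depends on who needs to edit the values and how often they change.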

    Read the article

  • Concatenate String to Evernote Markup Language (ENML) in python

    - by Adam the Mediocre
    I am looking to add a string containing the user's text input to the note.content of my note. After reading, I have found how to add resources, but I don't want the resource to be an attachment, I want it to be the actual text. Here is some of the code:

        title = self.textEditTitle.text()
        body = self.textEditBody.text()
        auth_token = "secret stuff!"
        client = EvernoteClient(token=auth_token, sandbox=True)
        note_store = client.get_note_store()
        nBody = "<?xml version=\"1.0\" encoding=\"UTF-8\"?>"
        nBody += "<!DOCTYPE en-note SYSTEM \"http://xml.evernote.com/pub/enml2.dtd\">"
        nBody += "<en-note>%s</en-note>" % body
        note = Types.Note()
        note.title = title
        note.content = nBody

    Any advice would be great, as I'm just starting out with this API and it looks like it's full of potential once I figure it out! Here is what I have been mostly reading from: http://dev.evernote.com/documentation/cloud/chapters/ENML.php
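
    One detail worth flagging, independent of the Evernote API itself: since the user's text is inserted straight into XML, it is safer to escape it first. A minimal sketch of that step, using only the Python standard library (the sample input is made up):

        from xml.sax.saxutils import escape

        body = "Milk & eggs <today>"   # example user input with XML-special characters

        # Escape &, < and > so the note body remains valid ENML/XML.
        safe_body = escape(body)

        nBody = '<?xml version="1.0" encoding="UTF-8"?>'
        nBody += '<!DOCTYPE en-note SYSTEM "http://xml.evernote.com/pub/enml2.dtd">'
        nBody += "<en-note>%s</en-note>" % safe_body
        print(nBody)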

    Read the article

  • Oracle Buys BigMachines - Adds Leading Configure, Price and Quote (CPQ) Cloud to the Oracle Cloud to Enable Smarter Selling

    - by Richard Lefebvre
    News Facts Oracle today announced that it has entered into an agreement to acquire BigMachines, a leading cloud-based Configure, Price and Quote (CPQ) solution provider. BigMachines’ CPQ Cloud accelerates the conversion of sales opportunities into revenue by automating the sales order process with guided selling, dynamic pricing, and an easy-to-use workflow approval process, accessible anywhere, on any device. Companies that use sales automation technology often rely on manual, cumbersome and disconnected processes to convert opportunities into orders. This creates errors, adds costs, delays revenue, and degrades the customer experience. BigMachines’ CPQ cloud extends sales automation to include the creation of an optimal quote, which enables sales personnel to easily configure and price complex products, select the best options, promotions and deal terms, and include up sell and renewals, all using automated workflows. In combination with Oracle’s enterprise-grade cloud solutions, including Marketing, Sales, Social, Commerce and Service Clouds, Oracle and BigMachines will create an end-to-end smarter selling cloud solution so sales personnel are more productive, customers are more satisfied, and companies grow revenue faster. More information on this announcement can be found at http://www.oracle.com/bigmachines Supporting Quotes “The fundamental goals of smarter selling are to provide sales teams with the information, access, and insights they need to maximize revenue opportunities and execute on all phases of the sales cycle,” said Thomas Kurian, Executive Vice President, Oracle Development. “By adding BigMachines’ CPQ Cloud to the Oracle Cloud, companies will be able to drive more revenue and increase customer satisfaction with a seamlessly integrated process across marketing and sales, pricing and quoting, and fulfillment and service.” “BigMachines has developed leading CPQ solutions that serve companies of all sizes across multiple industries,” said David Bonnette, BigMachines’ CEO. “Together with Oracle, we expect to provide a complete cloud solution to manage sales processes and deliver exceptional customer experiences.” Supporting Resources About Oracle and BigMachines General Presentation Customer and Partner Letter FAQ

    Read the article

  • Google Maps - Adsense ads not showing in Internet Explorer

    - by Spiros
    I am trying to display AdSense ads on maps, but Internet Explorer gives me a hard time once again. The ads show in all the other browsers I tried (Chrome, Firefox, Safari, Opera), but not Internet Explorer. Has anyone encountered this before? Here is my code for the ad manager:

        var publisherID = 'ca-pub-6630823543717184';
        var adsManagerOptions = {
            maxAdsOnMap: 1,
            style: 'adunit',
            channel: '5611474977'
        };
        adsManager = new GAdsManager(map, publisherID, adsManagerOptions);
        adsManager.enable();

    I am using the xhtml1-strict doctype.

    Read the article

  • error when trying to install nokogiri

    - by sam
    I'm trying to install Nokogiri to use in a Ruby on Rails application to read XML files. I've been following the instructions on their page for Homebrew 0.9. When I try to install libiconv from source as below:

        wget http://ftp.gnu.org/pub/gnu/libiconv/libiconv-1.13.1.tar.gz
        tar xvfz libiconv-1.13.1.tar.gz
        cd libiconv-1.13.1
        ./configure --prefix=/usr/local/Cellar/libiconv/1.13.1
        make
        sudo make install

    I get the following error: `make: *** No rule to make target `install'. Stop.` Any idea why that might be? Sorry if the answer is a really simple one; I'm pretty new to Rails and the terminal, and I've been going round in loops with this for almost a day. Any help is much appreciated!

    Read the article

  • I want to use Google Map API for my Desktop application

    - by Nubkadiya
    I want to use the Google Maps API in my desktop application; the application will always be connected to the internet. While searching for research notes about this implementation, I found an ideal site with the configuration details, but it has some Java files to be downloaded, and when I tried that website, which is swinglabs.org, it's not loading: http://today.java.net/pub/a/today/2007/10/30/building-maps-into-swing-app-with-jxmapviewer.html. Are there any other options for using this API in my desktop application? One more thing: when I tried downloading the Google API, it asks for a URL. You have to provide a URL and only then do you get a key, and the API will only run on that specific URL; otherwise it doesn't work. How does that apply to a desktop application? Any ideas welcome.

    Read the article

  • python filter can't output

    - by Jesse Siu
    I created a filter in Python for a log file with lines like:

        Sat Jun 2 03:32:13 2012 [pid 12461] CONNECT: Client "66.249.68.236"
        Sat Jun 2 03:32:13 2012 [pid 12460] [ftp] OK LOGIN: Client "66.249.68.236", anon password "[email protected]"
        Sat Jun 2 03:32:14 2012 [pid 12462] [ftp] OK DOWNLOAD: Client "66.249.68.236", "/pub/10.5524/100001_101000/100022/readme.txt", 451 bytes, 1.39Kbyte/sec

    The script is:

        import time

        lines = []
        f = open("/opt/CLiMB/Storage1/log/vsftp.log")
        line = f.readline()
        lines = [line for line in f]

        def OnlyRecent(line):
            if time.strptime(line.split("[")[0].strip(), "%a %b %d %H:%M:%S %Y") < time.time() - (60*60*24*2):
                return True
            return False

        print "\n".join(filter(OnlyRecent, lines))
        f.close()

    But when I run this script it keeps running and doesn't show anything until I stop it. Why can't it show the records from the last 2 days?

    Read the article

  • How can I embed a flash movie in a zend view programmatically?

    - by curro
    I have tried to embed a flash movie in a Zend view using the htmlFlash helper. In theory you only have to pass the movie path to the htmlFlash helper in a .phtml view:

        echo $this->htmlFlash('/path/to/myMovie.swf');

    And the framework will generate the HTML code in the page:

        <object data="/path/to/flash.swf" type="application/x-shockwave-flash" classid="clsid:D27CDB6E-AE6D-11cf-96B8-444553540000" codebase="http://download.macromedia.com/pub/shockwave/cabs/flash/swflash.cab">
        </object>

    However, I have done so and the code doesn't appear in the page source. Has anyone had this problem? Thank you

    Read the article

  • server side of adsense

    - by dole
    Hi there. Google asks you to add some JavaScript code to your page and it will generate the links. The script has some IDs which are sent to the server, but I don't know how:

        <script type="text/javascript"><!--
        google_ad_client = "ca-pub-1234567890123456";
        /* snipet_name */
        google_ad_slot = "123456789";
        google_ad_width = 728;
        google_ad_height = 90;
        //--></script>
        <script type="text/javascript" src="http://pagead2.googlesyndication.com/pagead/show_ads.js"></script>

    I'm curious to know how the client and slot IDs are sent to the server. JavaScript is client side, and I want to know how those parameters reach the server in order to query a database and return the links. A link, sample or explanation related to PHP would work great for me. Thanks.

    Read the article

  • Are your redirects HTTP compliant (absolute URI)?

    - by webbiedave
    In the Hypertext Transfer Protocol 1.1 spec, it states for the Location header: The field value consists of a single absolute URI. Location = "Location" ":" absoluteURI An example is: Location: http://www.w3.org/pub/WWW/People.html I have coded many an app that doesn't include the scheme and domain (i.e., /thankyou/) and every browser I've ever tested with redirects correctly. I'm wondering if it's imperative to go back and change all my redirect code or just ignore this part of the spec. Have many of you produced code that doesn't comply and will you go back and change it? Or will you just trust that clients will continue to resolve them well into the future? (going forward I will certainly adhere to this)
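
    For anyone who does want to emit fully absolute URIs, one simple approach is to resolve the relative path against the URL of the current request before writing the Location header. A small illustrative sketch in Python (the URLs are made-up examples):

        from urllib.parse import urljoin

        # Hypothetical values: the URL of the request being handled and the
        # relative redirect target the application produced.
        current_url = "http://www.example.com/checkout/confirm"
        relative_target = "/thankyou/"

        # Resolve the relative path into the absolute URI the spec asks for.
        location = urljoin(current_url, relative_target)
        print(location)  # http://www.example.com/thankyou/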

    Read the article

  • Publish/Subscribe paradigm: Why must message classes not know about their subscribers?

    - by carleeto
    From Wikipedia: "Publish/subscribe (or pub/sub) is a messaging paradigm where senders (publishers) of messages are not programmed to send their messages to specific receivers (subscribers). Rather, published messages are characterized into classes, without knowledge of what (if any) subscribers there may be" I can understand why a sender must not be programmed to send its message to a specific receiver. But why must published messages be classes that do not have knowledge of their subscribers? It would seem that once the messaging system itself is in place, what typically changes as software evolves is the messages sent, the publishers and the receivers. Keeping the messages separate from the subscribers seems to imply that the subscription model might also change. Is this the reason? Also, does this occur in the real world? I realize this may be a basic question, but I'm trying to understand this paradigm and your replies are very much appreciated.
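
    To make the terminology concrete, here is a minimal, illustrative pub/sub sketch in Python (not any particular messaging product): publishers and messages only name a topic, and only the broker knows which subscribers are registered for it.

        from collections import defaultdict

        class Broker:
            """Tiny in-process pub/sub broker: the only party that knows the subscribers."""
            def __init__(self):
                self._subscribers = defaultdict(list)

            def subscribe(self, topic, callback):
                self._subscribers[topic].append(callback)

            def publish(self, topic, message):
                # The publisher never sees this list; it just hands the message to the broker.
                for callback in self._subscribers[topic]:
                    callback(message)

        broker = Broker()
        broker.subscribe("orders", lambda msg: print("billing saw:", msg))
        broker.subscribe("orders", lambda msg: print("shipping saw:", msg))

        # The publisher only knows the topic ("message class"), not who is listening.
        broker.publish("orders", {"id": 42, "total": 9.99})

    Because subscribers register with the broker rather than being referenced by the messages themselves, the subscription model can change without touching the publishers or the message definitions, which is the decoupling the quoted definition describes.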

    Read the article

  • python compare time

    - by Jesse Siu
    I want to use Python to create a filter for a log file and get the records from the most recent 7 days, but I don't know how to compare the times. For example, if the current time is 11/9/2012, I want the records from 04/9/2012 to now. The log file looks like:

        Sat Sep 2 03:32:13 2012 [pid 12461] CONNECT: Client "66.249.68.236"
        Sat Sep 2 03:32:13 2012 [pid 12460] [ftp] OK LOGIN: Client "66.249.68.236", anon password "[email protected]"
        Sat Sep 2 03:32:14 2012 [pid 12462] [ftp] OK DOWNLOAD: Client "66.249.68.236", "/pub/10.5524/100001_101000/100022/readme.txt", 451

    I am using this:

        def OnlyRecent(line):
            print time.strptime(line.split("[")[0].strip(), "%a %b %d %H:%M:%S %Y")
            print time.time()
            if time.strptime(line.split("[")[0].strip(), "%a %b %d %H:%M:%S %Y") < time.time():
                return True
            return False

    But it shows:

        (2012, 9, 2, 3, 32, 13, 5, 246, -1) 1347332968.08
        (2012, 9, 2, 3, 32, 13, 5, 246, -1) 1347332968.08
        (2012, 9, 2, 3, 32, 14, 5, 246, -1) 1347332968.08

    The time formats are different, so the values can't be compared. How can I set up this comparison for 7 days? Thanks
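
    A sketch of one way to make the comparison work, assuming the goal is to keep entries from the last seven days: convert the parsed struct_time to an epoch timestamp with time.mktime so both sides are in seconds, then compare against a cutoff. The sample line is taken from the question; the rest is illustrative:

        import time

        SEVEN_DAYS = 7 * 24 * 60 * 60
        cutoff = time.time() - SEVEN_DAYS  # epoch seconds, 7 days ago

        def only_recent(line):
            # Parse the leading timestamp, then convert the struct_time to epoch
            # seconds so it can be compared with time.time().
            parsed = time.strptime(line.split("[")[0].strip(), "%a %b %d %H:%M:%S %Y")
            return time.mktime(parsed) >= cutoff

        sample = 'Sat Sep 2 03:32:13 2012 [pid 12461] CONNECT: Client "66.249.68.236"'
        print(only_recent(sample))  # False by now; True for lines newer than the cutoff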

    Read the article

  • Why is the camera not following the player? [on hold]

    - by Homer_Simpson
    I use the following code to create Parallax Scrolling: http://www.david-gouveia.com/portfolio/2d-camera-with-parallax-scrolling-in-xna/ Parallax Scrolling is working but I don't know how to focus the camera on the player. If the player moves, then the camera doesn't follow the player. The player leaves the screen when I'm moving it. I use the following code(as described in the tutorial), but it's not working: // Updates my camera to lock on the character _camera.LookAt(player.Playerposition); What can I do so that the player is always in the center of the screen/camera? My player class: public class Player { Texture2D Playertex; public Vector2 Playerposition = new Vector2(400, 240); private Game1 game1; public Player(Game1 game) { game1 = game; } public void Load(ContentManager content) { Playertex = content.Load<Texture2D>("8bitmario"); TouchPanel.EnabledGestures = GestureType.HorizontalDrag; } public void Update(GameTime gameTime) { while (TouchPanel.IsGestureAvailable) { GestureSample gs = TouchPanel.ReadGesture(); switch (gs.GestureType) { case GestureType.HorizontalDrag: Playerposition.X += 3f; break; } } } public void Render(SpriteBatch batch) { batch.Draw(Playertex, new Vector2(Playerposition.X - Playertex.Width / 2, Playerposition.Y - Playertex.Height / 2), Color.White); } } In Game1, I update the player and camera class: protected override void Update(GameTime gameTime) { // Updates my character's position player.Update(gameTime); // Updates my camera to lock on the character _camera.LookAt(player.Playerposition); base.Update(gameTime); } protected override void Draw(GameTime gameTime) { GraphicsDevice.Clear(Color.CornflowerBlue); foreach (Layer layer in _layers) layer.Draw(spriteBatch); spriteBatch.Begin(SpriteSortMode.Deferred, null, null, null, null, null, _camera.GetViewMatrix(new Vector2(0.0f, 0.0f))); player.Render(spriteBatch); spriteBatch.End(); base.Draw(gameTime); }

    Read the article

  • Did anybody use the constream (constrea.h) lib?

    - by user337938
    Two years ago I used conio.h (actually conio2.h for Dev-C++) to create a console form interface. Now I want to build the same thing, but the C++ standard library does not provide the needed functions and I don't want to use the old C conio library. I found some websites that highlight the constream library, but I have no idea how to use it in Visual Studio! I tried just copying the header file into my project, but VS shows several errors. I believe I am doing something wrong. PS: I got this file: ftp://ftp.cs.technion.ac.il/pub/misc/baram/TC31/INCLUDE/CONSTREA.H

    Read the article

  • Where does ASP.NET Web API Fit?

    - by Rick Strahl
    With the pending release of ASP.NET MVC 4 and the new ASP.NET Web API, there has been a lot of discussion of where the new Web API technology fits in the ASP.NET Web stack. There are a lot of choices to build HTTP based applications available now on the stack - we've come a long way from when WebForms and Http Handlers/Modules where the only real options. Today we have WebForms, MVC, ASP.NET Web Pages, ASP.NET AJAX, WCF REST and now Web API as well as the core ASP.NET runtime to choose to build HTTP content with. Web API definitely squarely addresses the 'API' aspect - building consumable services - rather than HTML content, but even to that end there are a lot of choices you have today. So where does Web API fit, and when doesn't it? But before we get into that discussion, let's talk about what a Web API is and why we should care. What's a Web API? HTTP 'APIs' (Microsoft's new terminology for a service I guess)  are becoming increasingly more important with the rise of the many devices in use today. Most mobile devices like phones and tablets run Apps that are using data retrieved from the Web over HTTP. Desktop applications are also moving in this direction with more and more online content and synching moving into even traditional desktop applications. The pending Windows 8 release promises an app like platform for both the desktop and other devices, that also emphasizes consuming data from the Cloud. Likewise many Web browser hosted applications these days are relying on rich client functionality to create and manipulate the browser user interface, using AJAX rather than server generated HTML data to load up the user interface with data. These mobile or rich Web applications use their HTTP connection to return data rather than HTML markup in the form of JSON or XML typically. But an API can also serve other kinds of data, like images or other binary files, or even text data and HTML (although that's less common). A Web API is what feeds rich applications with data. ASP.NET Web API aims to service this particular segment of Web development by providing easy semantics to route and handle incoming requests and an easy to use platform to serve HTTP data in just about any content format you choose to create and serve from the server. But .NET already has various HTTP Platforms The .NET stack already includes a number of technologies that provide the ability to create HTTP service back ends, and it has done so since the very beginnings of the .NET platform. From raw HTTP Handlers and Modules in the core ASP.NET runtime, to high level platforms like ASP.NET MVC, Web Forms, ASP.NET AJAX and the WCF REST engine (which technically is not ASP.NET, but can integrate with it), you've always been able to handle just about any kind of HTTP request and response with ASP.NET. The beauty of the raw ASP.NET platform is that it provides you everything you need to build just about any type of HTTP application you can dream up from low level APIs/custom engines to high level HTML generation engine. ASP.NET as a core platform clearly has stood the test of time 10+ years later and all other frameworks like Web API are built on top of this ASP.NET core. However, although it's possible to create Web APIs / Services using any of the existing out of box .NET technologies, none of them have been a really nice fit for building arbitrary HTTP based APIs. 
Sure, you can use an HttpHandler to create just about anything, but you have to build a lot of plumbing to build something more complex like a comprehensive API that serves a variety of requests, handles multiple output formats and can easily pass data up to the server in a variety of ways. Likewise you can use ASP.NET MVC to handle routing and creating content in various formats fairly easily, but it doesn't provide a great way to automatically negotiate content types and serve various content formats directly (it's possible to do with some plumbing code of your own but not built in). Prior to Web API, Microsoft's main push for HTTP services has been WCF REST, which was always an awkward technology that had a severe personality conflict, not being clear on whether it wanted to be part of WCF or purely a separate technology. In the end it didn't do either WCF compatibility or WCF agnostic pure HTTP operation very well, which made for a very developer-unfriendly environment. Personally I didn't like any of the implementations at the time, so much so that I ended up building my own HTTP service engine (as part of the West Wind Web Toolkit), as have a few other third party tools that provided much better integration and ease of use. With the release of Web API for the first time I feel that I can finally use the tools in the box and not have to worry about creating and maintaining my own toolkit as Web API addresses just about all the features I implemented on my own and much more. ASP.NET Web API provides a better HTTP Experience ASP.NET Web API differentiates itself from the previous Microsoft in-box HTTP service solutions in that it was built from the ground up around the HTTP protocol and its messaging semantics. Unlike WCF REST or ASP.NET AJAX with ASMX, it’s a brand new platform rather than bolted on technology that is supposed to work in the context of an existing framework. The strength of the new ASP.NET Web API is that it combines the best features of the platforms that came before it, to provide a comprehensive and very usable HTTP platform. Because it's based on ASP.NET and borrows a lot of concepts from ASP.NET MVC, Web API should be immediately familiar and comfortable to most ASP.NET developers. Here are some of the features that Web API provides that I like: Strong Support for URL Routing to produce clean URLs using familiar MVC style routing semantics Content Negotiation based on Accept headers for request and response serialization Support for a host of supported output formats including JSON, XML, ATOM Strong default support for REST semantics but they are optional Easily extensible Formatter support to add new input/output types Deep support for more advanced HTTP features via HttpResponseMessage and HttpRequestMessage classes and strongly typed Enums to describe many HTTP operations Convention based design that drives you into doing the right thing for HTTP Services Very extensible, based on MVC like extensibility model of Formatters and Filters Self-hostable in non-Web applications  Testable using testing concepts similar to MVC Web API is meant to handle any kind of HTTP input and produce output and status codes using the full spectrum of HTTP functionality available in a straight forward and flexible manner. Looking at the list above you can see that a lot of functionality is very similar to ASP.NET MVC, so many ASP.NET developers should feel quite comfortable with the concepts of Web API. 
The Routing and core infrastructure of Web API are very similar to how MVC works providing many of the benefits of MVC, but with focus on HTTP access and manipulation in Controller methods rather than HTML generation in MVC. There’s much improved support for content negotiation based on HTTP Accept headers with the framework capable of detecting automatically what content the client is sending and requesting and serving the appropriate data format in return. This seems like such a little and obvious thing, but it's really important. Today's service backends often are used by multiple clients/applications and being able to choose the right data format for what fits best for the client is very important. While previous solutions were able to accomplish this using a variety of mixed features of WCF and ASP.NET, Web API combines all this functionality into a single robust server side HTTP framework that intrinsically understands the HTTP semantics and subtly drives you in the right direction for most operations. And when you need to customize or do something that is not built in, there are lots of hooks and overrides for most behaviors, and even many low level hook points that allow you to plug in custom functionality with relatively little effort. No Brainers for Web API There are a few scenarios that are a slam dunk for Web API. If your primary focus of an application or even a part of an application is some sort of API then Web API makes great sense. HTTP ServicesIf you're building a comprehensive HTTP API that is to be consumed over the Web, Web API is a perfect fit. You can isolate the logic in Web API and build your application as a service breaking out the logic into controllers as needed. Because the primary interface is the service there's no confusion of what should go where (MVC or API). Perfect fit. Primary AJAX BackendsIf you're building rich client Web applications that are relying heavily on AJAX callbacks to serve its data, Web API is also a slam dunk. Again because much if not most of the business logic will probably end up in your Web API service logic, there's no confusion over where logic should go and there's no duplication. In Single Page Applications (SPA), typically there's very little HTML based logic served other than bringing up a shell UI and then filling the data from the server with AJAX which means the business logic required for data retrieval and data acceptance and validation too lives in the Web API. Perfect fit. Generic HTTP EndpointsAnother good fit are generic HTTP endpoints that to serve data or handle 'utility' type functionality in typical Web applications. If you need to implement an image server, or an upload handler in the past I'd implement that as an HTTP handler. With Web API you now have a well defined place where you can implement these types of generic 'services' in a location that can easily add endpoints (via Controller methods) or separated out as more full featured APIs. Granted this could be done with MVC as well, but Web API seems a clearer and more well defined place to store generic application services. This is one thing I used to do a lot of in my own libraries and Web API addresses this nicely. Great fit. Mixed HTML and AJAX Applications: Not a clear Choice  For all the commonality that Web API and MVC share they are fundamentally different platforms that are independent of each other. A lot of people have asked when does it make sense to use MVC vs. 
Web API when you're dealing with typical Web application that creates HTML and also uses AJAX functionality for rich functionality. While it's easy to say that all 'service'/AJAX logic should go into a Web API and all HTML related generation into MVC, that can often result in a lot of code duplication. Also MVC supports JSON and XML result data fairly easily as well so there's some confusion where that 'trigger point' is of when you should switch to Web API vs. just implementing functionality as part of MVC controllers. Ultimately there's a tradeoff between isolation of functionality and duplication. A good rule of thumb I think works is that if a large chunk of the application's functionality serves data Web API is a good choice, but if you have a couple of small AJAX requests to serve data to a grid or autocomplete box it'd be overkill to separate out that logic into a separate Web API controller. Web API does add overhead to your application (it's yet another framework that sits on top of core ASP.NET) so it should be worth it .Keep in mind that MVC can generate HTML and JSON/XML and just about any other content easily and that functionality is not going away, so just because you Web API is there it doesn't mean you have to use it. Web API is not a full replacement for MVC obviously either since there's not the same level of support to feed HTML from Web API controllers (although you can host a RazorEngine easily enough if you really want to go that route) so if you're HTML is part of your API or application in general MVC is still a better choice either alone or in combination with Web API. I suspect (and hope) that in the future Web API's functionality will merge even closer with MVC so that you might even be able to mix functionality of both into single Controllers so that you don't have to make any trade offs, but at the moment that's not the case. Some Issues To think about Web API is similar to MVC but not the Same Although Web API looks a lot like MVC it's not the same and some common functionality of MVC behaves differently in Web API. For example, the way single POST variables are handled is different than MVC and doesn't lend itself particularly well to some AJAX scenarios with POST data. Code Duplication I already touched on this in the Mixed HTML and Web API section, but if you build an MVC application that also exposes a Web API it's quite likely that you end up duplicating a bunch of code and - potentially - infrastructure. You may have to create authentication logic both for an HTML application and for the Web API which might need something different altogether. More often than not though the same logic is used, and there's no easy way to share. If you implement an MVC ActionFilter and you want that same functionality in your Web API you'll end up creating the filter twice. AJAX Data or AJAX HTML On a recent post's comments, David made some really good points regarding the commonality of MVC and Web API's and its place. One comment that caught my eye was a little more generic, regarding data services vs. HTML services. David says: I see a lot of merit in the combination of Knockout.js, client side templates and view models, calling Web API for a responsive UI, but sometimes late at night that still leaves me wondering why I would no longer be using some of the nice tooling and features that have evolved in MVC ;-) You know what - I can totally relate to that. 
    On the last Web based mobile app I worked on, we decided to serve HTML partials to the client via AJAX for many (but not all!) things, rather than sending down raw data to inject into the DOM on the client via templating or direct manipulation. While there are definitely more bytes on the wire, with this, the overhead ended up being actually fairly small if you keep the 'data' requests small and atomic. Performance was often made up by the lack of client side rendering of HTML. Server rendered HTML for AJAX templating gives so much better infrastructure support without having to screw around with 20 mismatched client libraries. Especially with MVC and partials it's pretty easy to break out your HTML logic into very small, atomic chunks, so it's actually easy to create small rendering islands that can be used via composition on the server, or via AJAX calls to small, tight partials that return HTML to the client. Although this is often frowned upon as too 'heavy', it worked really well in terms of developer effort as well as providing surprisingly good performance on devices. There's still plenty of jQuery and AJAX logic happening on the client but it's more manageable in small doses rather than trying to do the entire UI composition with JavaScript and/or 'not-quite-there-yet' template engines that are very difficult to debug. This is not an issue directly related to Web API of course, but something to think about especially for AJAX or SPA style applications. Summary Web API is a great new addition to the ASP.NET platform and it addresses a serious need for consolidation of a lot of half-baked HTTP service API technologies that came before it. Web API feels 'right', and hits the right combination of usability and flexibility at least for me and it's a good fit for true API scenarios. However, just because a new platform is available it doesn't mean that other tools or tech that came before it should be discarded or even upgraded to the new platform. There's nothing wrong with continuing to use MVC controller methods to handle API tasks if that's what your app is running now - there's very little to be gained by upgrading to Web API just because. But going forward Web API clearly is the way to go when building HTTP data interfaces, and it's good to see that Microsoft got this one right - it was sorely needed! Resources ASP.NET Web API AspConf Ask the Experts Session (first 5 minutes) © Rick Strahl, West Wind Technologies, 2005-2012. Posted in Web Api

    Read the article

  • Dynamic Paging and Sorting

    - by Ricardo Peres
    Since .NET 3.5 brought us LINQ and expressions, I became a great fan of these technologies. There are times, however, when strong typing cannot be used - for example, when you are developing an ObjectDataSource and you need to do paging having just a column name, a page index and a page size, so I set out to fix this. Yes, I know about Dynamic LINQ, and even talked on it previously, but there's no need to add this extra assembly. So, without further delay, here's the code, in both generic and non-generic versions: public static IList ApplyPagingAndSorting(IEnumerable enumerable, Type elementType, Int32 pageSize, Int32 pageIndex, params String [] orderByColumns) { MethodInfo asQueryableMethod = typeof(Queryable).GetMethods(BindingFlags.Static | BindingFlags.Public).Where(m => (m.Name == "AsQueryable") && (m.ContainsGenericParameters == false)).Single(); IQueryable query = (enumerable is IQueryable) ? (enumerable as IQueryable) : asQueryableMethod.Invoke(null, new Object [] { enumerable }) as IQueryable; if ((orderByColumns != null) && (orderByColumns.Length > 0)) { PropertyInfo orderByProperty = elementType.GetProperty(orderByColumns [ 0 ]); MemberExpression member = Expression.MakeMemberAccess(Expression.Parameter(elementType, "n"), orderByProperty); LambdaExpression orderBy = Expression.Lambda(member, member.Expression as ParameterExpression); MethodInfo orderByMethod = typeof(Queryable).GetMethods(BindingFlags.Public | BindingFlags.Static).Where(m => m.Name == "OrderBy").ToArray() [ 0 ].MakeGenericMethod(elementType, orderByProperty.PropertyType); query = orderByMethod.Invoke(null, new Object [] { query, orderBy }) as IQueryable; if (orderByColumns.Length > 1) { MethodInfo thenByMethod = typeof(Queryable).GetMethods(BindingFlags.Public | BindingFlags.Static).Where(m => m.Name == "ThenBy").ToArray() [ 0 ].MakeGenericMethod(elementType, orderByProperty.PropertyType); PropertyInfo thenByProperty = null; MemberExpression thenByMember = null; LambdaExpression thenBy = null; for (Int32 i = 1; i 0) { MethodInfo takeMethod = typeof(Queryable).GetMethod("Take", BindingFlags.Public | BindingFlags.Static).MakeGenericMethod(elementType); MethodInfo skipMethod = typeof(Queryable).GetMethod("Skip", BindingFlags.Public | BindingFlags.Static).MakeGenericMethod(elementType); query = skipMethod.Invoke(null, new Object [] { query, pageSize * pageIndex }) as IQueryable; query = takeMethod.Invoke(null, new Object [] { query, pageSize }) as IQueryable; } MethodInfo toListMethod = typeof(Enumerable).GetMethod("ToList", BindingFlags.Static | BindingFlags.Public).MakeGenericMethod(elementType); IList list = toListMethod.Invoke(null, new Object [] { query }) as IList; return (list); } public static List ApplyPagingAndSorting(IEnumerable enumerable, Int32 pageSize, Int32 pageIndex, params String [] orderByColumns) { return (ApplyPagingAndSorting(enumerable, typeof(T), pageSize, pageIndex, orderByColumns) as List); } List list = new List { new DateTime(2010, 1, 1), new DateTime(1999, 1, 12), new DateTime(1900, 10, 10), new DateTime(1900, 2, 20), new DateTime(2012, 5, 5), new DateTime(2012, 1, 20) }; List sortedList = ApplyPagingAndSorting(list, 3, 0, "Year", "Month", "Day");
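
    For comparison only (the post itself is C#/LINQ): the same idea, sorting and paging a sequence given just property names, a page size and a page index, can be sketched in a few lines of Python with operator.attrgetter, which is roughly what the expression-tree plumbing above reproduces at runtime:

        from datetime import datetime
        from operator import attrgetter

        def apply_paging_and_sorting(items, page_size, page_index, *order_by_columns):
            """Sort by the named attributes (multi-key), then slice out one page."""
            if order_by_columns:
                items = sorted(items, key=attrgetter(*order_by_columns))
            start = page_size * page_index
            return items[start:start + page_size]

        dates = [datetime(2010, 1, 1), datetime(1999, 1, 12), datetime(1900, 10, 10),
                 datetime(1900, 2, 20), datetime(2012, 5, 5), datetime(2012, 1, 20)]
        print(apply_paging_and_sorting(dates, 3, 0, "year", "month", "day"))

    The C# version has to go through reflection and expression trees to get the same effect because the column names only arrive as strings at runtime.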

    Read the article

  • Unable to ssh out anywhere - ssh_exchange_identification

    - by Chowlett
    I have a setup where I'm running Ubuntu 11.10 as a VirtualBox guest under a Windows 7 host, behind a restrictive corporate firewall. I have set up NAT from the host port 22 to Ubuntu's port 22; IT inform me that they have opened port 22 outbound for the host machine's IP address. I have run ssh-keygen -t rsa, and am trying to test the setup by connecting to github and another known ssh server. In both cases the connect is refused with ssh_exchange_identification: Connection closed by remote host. Full -vvv log is below. Is this possibly still due to the corporate firewall? If so, what else might I need to request from them? Any other ideas what might be wrong and how to fix it? ~$ ssh -Tvvv [email protected] OpenSSH_5.8p1 Debian-7ubuntu1, OpenSSL 1.0.0e 6 Sep 2011 debug1: Reading configuration data /etc/ssh/ssh_config debug1: Applying options for * debug2: ssh_connect: needpriv 0 debug1: Connecting to github.com [207.97.227.239] port 22. debug1: Connection established. debug3: Incorrect RSA1 identifier debug3: Could not load "/home/chris/.ssh/id_rsa" as a RSA1 public key debug2: key_type_from_name: unknown key type '-----BEGIN' debug3: key_read: missing keytype debug2: key_type_from_name: unknown key type 'Proc-Type:' debug3: key_read: missing keytype debug2: key_type_from_name: unknown key type 'DEK-Info:' debug3: key_read: missing keytype debug3: key_read: missing whitespace debug3: key_read: missing whitespace debug3: key_read: missing whitespace debug3: key_read: missing whitespace debug3: key_read: missing whitespace debug3: key_read: missing whitespace debug3: key_read: missing whitespace debug3: key_read: missing whitespace debug3: key_read: missing whitespace debug3: key_read: missing whitespace debug3: key_read: missing whitespace debug3: key_read: missing whitespace debug3: key_read: missing whitespace debug3: key_read: missing whitespace debug3: key_read: missing whitespace debug3: key_read: missing whitespace debug3: key_read: missing whitespace debug3: key_read: missing whitespace debug3: key_read: missing whitespace debug3: key_read: missing whitespace debug3: key_read: missing whitespace debug3: key_read: missing whitespace debug3: key_read: missing whitespace debug3: key_read: missing whitespace debug3: key_read: missing whitespace debug2: key_type_from_name: unknown key type '-----END' debug3: key_read: missing keytype debug1: identity file /home/chris/.ssh/id_rsa type 1 debug1: Checking blacklist file /usr/share/ssh/blacklist.RSA-2048 debug1: Checking blacklist file /etc/ssh/blacklist.RSA-2048 debug1: identity file /home/chris/.ssh/id_rsa-cert type -1 debug1: identity file /home/chris/.ssh/id_dsa type -1 debug1: identity file /home/chris/.ssh/id_dsa-cert type -1 debug1: identity file /home/chris/.ssh/id_ecdsa type -1 debug1: identity file /home/chris/.ssh/id_ecdsa-cert type -1 ssh_exchange_identification: Connection closed by remote host Edit: Requested diagnostics: ~$ ls -la ~/.ssh total 16 drwx------ 2 chris chris 4096 2012-03-30 13:12 . drwxr-xr-x 29 chris chris 4096 2012-03-30 13:25 .. -rw------- 1 chris chris 1766 2012-03-30 13:12 id_rsa -rw-r--r-- 1 chris chris 409 2012-03-30 13:12 id_rsa.pub

    Read the article

  • Continuous Integration for SQL Server Part II – Integration Testing

    - by Ben Rees
    My previous post, on setting up Continuous Integration for SQL Server databases using GitHub, Bamboo and Red Gate’s tools, covered the first two parts of a simple Database Continuous Delivery process: Putting your database in to a source control system, and, Running a continuous integration process, each time changes are checked in. However there is, of course, a lot more to to Continuous Delivery than that. Specifically, in addition to the above: Putting some actual integration tests in to the CI process (otherwise, they don’t really do much, do they!?), Deploying the database changes with a managed, automated approach, Monitoring what you’ve just put live, to make sure you haven’t broken anything. This post will detail how to set up a very simple pipeline for implementing the first of these (continuous integration testing). NB: A lot of the setup in this post is built on top of the configuration from before, so it might be difficult to implement this post without running through part I first. There’ll then be a third post on automated database deployment followed by a final post dealing with the last item – monitoring changes on the live system. In the previous post, I used a mixture of Red Gate products and other 3rd party software – GitHub and Atlassian Bamboo specifically. This was partly because I believe most people work in an heterogeneous environment, using software from different vendors to suit their purposes and I wanted to show how this could work for this process. For example, you could easily substitute Atlassian’s BitBucket or Stash for GitHub, depending on your needs, or use an alternative CI server such as TeamCity, TFS or Jenkins. However, in this, post, I’ll be mostly using Red Gate products only (other than tSQLt). I would do this, firstly because I work for Red Gate. However, I also think that in the area of Database Delivery processes, nobody else has the offerings to implement this process fully – so I didn’t have any choice!   Background on Continuous Delivery For me, a great source of information on what makes a proper Continuous Delivery process is the Jez Humble and David Farley classic: Continuous Delivery – Reliable Software Releases through Build, Test, and Deployment Automation This book is not of course, primarily about databases, and the process I outline here and in the previous article is a gross simplification of what Jez and David describe (not least because it’s that much harder for databases!). However, a lot of the principles that they describe can be equally applied to database development and, I would argue, should be. As I say however, what I describe here is a very simple version of what would be required for a full production process. A couple of useful resources on handling some of these complexities can be found in the following two references: Refactoring Databases – Evolutionary Database Design, by Scott J Ambler and Pramod J. Sadalage Versioning Databases – Branching and Merging, by Scott Allen In particular, I don’t deal at all with the issues of multiple branches and merging of those branches, an issue made particularly acute by the use of GitHub. The other point worth making is that, in the words of Martin Fowler: Continuous Delivery is about keeping your application in a state where it is always able to deploy into production.   I.e. we are not talking about continuously delivery updates to the production database every time someone checks in an amendment to a stored procedure. 
That is possible (and what Martin calls Continuous Deployment). However, again, that’s more than I describe in this article. And I doubt I need to remind DBAs or Developers to Proceed with Caution!   Integration Testing Back to something practical. The next stage, building on our set up from the previous article, is to add in some integration tests to the process. As I say, the CI process, though interesting, isn’t enormously useful without some sort of test process running. For this we’ll use the tSQLt framework, an open source framework designed specifically for running SQL Server tests. tSQLt is part of Red Gate’s SQL Test found on http://www.red-gate.com/products/sql-development/sql-test/ or can be downloaded separately from www.tsqlt.org - though I’ll provide a step-by-step guide below for setting this up. Getting tSQLt set up via SQL Test Click on the link http://www.red-gate.com/products/sql-development/sql-test/ and click on the blue Download button to download the Red Gate SQL Test product, if not already installed. Follow the install process for SQL Test to install the SQL Server Management Studio (SSMS) plugin on to your machine, if not already installed. Open SSMS. You should now see SQL Test under the Tools menu:   Clicking this link will give you the basic SQL Test dialogue: As yet, though we’ve installed the SQL Test product we haven’t yet installed the tSQLt test framework on to any particular database. To do this, we need to add our RedGateApp database using this dialogue, by clicking on the + Add Database to SQL Test… link, selecting the RedGateApp database and clicking the Add Database link:   In the next screen, SQL Test describes what will be installed on the database for the tSQLt framework. Also in this dialogue, uncheck the “Add SQL Cop tests” option (shown below). SQL Cop is a great set of pre-defined tests that work within the tSQLt framework to check the general health of your SQL Server database. However, we won’t be using them in this particular simple example: Once you’ve clicked on the OK button, the changes described in the dialogue will be made to your database. Some of these are shown in the left-hand-side below: We’ve now installed the framework. However, we haven’t actually created any tests, so this will be the next step. But, before we proceed, we’ve made an update to our database so should, again check this in to source control, adding comments as required:   Also worth a quick check that your build still runs with the new additions!: (And a quick check of the RedGateAppCI database shows that the changes have been made).   Creating and Testing a Unit Test There are, of course, a lot of very interesting unit tests that you could and should set up for a database. The great thing about the tSQLt framework is that you can write these in SQL. The example I’m going to use here is pretty Mickey Mouse – our database table is going to include some email addresses as reference data and I want to check whether these are all in a correct email format. Nothing clever but it illustrates the process and hopefully shows the method by which more interesting tests could be set up. Adding Reference Data to our Database To start, I want to add some reference data to my database, and have this source controlled (as well as the schema). 
First of all I need to add some data in to my solitary table – this can be done a number of ways, but I’ll do this in SSMS for simplicity: I then add some reference data to my table: Currently this reference data just exists in the database. For proper integration testing, this needs to form part of the source-controlled version of the database – and so needs to be added to the Git repository. This can be done via SQL Source Control, though first a Primary Key needs to be added to the table. Right click the table, select Design, then right-click on the first “id” row. Then click on “Set Primary Key”: NB: once this change is made, click Save to save the change to the table. Then, to source control this reference data, right click on the table (dbo.Email) and selecting the following option:   In the next screen, link the data in the Email table, by selecting it from the list and clicking “save and close”: We should at this point re-commit the changes (both the addition of the Primary Key, and the data) to the Git repo. NB: From here on, I won’t show screenshots for the GitHub side of things – it’s the same each time: whenever a change is made in SQL Source Control and committed to your local folder, you then need to sync this in the GitHub Windows client (as this is where the build server, Bamboo is taking it from). An interesting point to note here, when these changes are committed in SQL Source Control (right-click database and select “Commit Changes to Source Control..”): The display gives a warning about possibly needing a migration script for the “Add Primary Key” step of the changes. This isn’t actually necessary in this case, but this mechanism would allow you to create override scripts to replace the default change scripts created by the SQL Compare engine (which runs underneath SQL Source Control). Ignoring this message (!), we add a comment and commit the changes to Git. I then sync these, run a build (or the build gets run automatically), and check that the data is being deployed over to the target RedGateAppCI database:   Creating and Running the Test As I mention, the test I’m going to use here is a very simple one - are the email addresses in my reference table valid? This isn’t of course, a full test of email validation (I expect the email addresses I’ve chosen here aren’t really the those of the Fab Four) – but just a very basic check of format used. I’ve taken the relevant SQL from this Stack Overflow article. In SSMS select “SQL Test” from the Tools menu, then click on + New Test: In the next screen, give your new test a name, and also enter a name in the Test Class box (test classes are schemas that help you keep things organised). Also check that the database in which the test is going to be created is correct – RedGateApp in this example: Click “Create Test”. After closing a couple of subsequent dialogues, you’ll see a dummy script for the test, that needs filling in:   We now need to define the SQL for our test. As mentioned before, tSQLt allows you to write your unit tests in T-SQL, and the code I’m going to use here is as below. 
    This needs to be copied and pasted into the query window, to replace the default given by tSQLt:

        -- Basic email check test
        ALTER PROCEDURE [MyChecks].[test Check Email Addresses]
        AS
        BEGIN
            SET NOCOUNT ON

            Declare @Output VarChar(max)
            Set @Output = ''

            SELECT @Output = @Output + Email + Char(13) + Char(10)
            FROM dbo.Email
            WHERE email NOT LIKE '%_@__%.__%'

            If @Output > ''
                Begin
                    Set @Output = Char(13) + Char(10) + @Output
                    EXEC tSQLt.Fail @Output
                End

        END;

    Once this script is entered, hit execute to add the Stored Procedure to the database. Before committing the test to source control, it's worth just checking that it works! For a positive test, click on "SQL Test" from the Tools menu, then click Run Tests. You should see output like the following: - a green tick to indicate success! But of course, what we also need to do is test that this is actually doing something by showing a failed test. Edit one of the email addresses in your table to an incorrect format: Now, re-run the same SQL Test as before and you'll see the following: Great – we now know that our test is really doing something! You'll also see a useful error message at the bottom of SSMS: (leave the email address as invalid for now, for the next steps). The next stage is to check this new test in to source control again, by right-clicking on the database and checking in the changes with a commit message (and not forgetting to sync in the GitHub client):
21-Jun-2013 11:35:19 21-Jun-2013 11:35:19 "C:\Users\Administrator\bamboo-home\xml-data\build-dir\RGA-RGP-JOB1\sqlCI.proj" (default target) (1) -> 21-Jun-2013 11:35:19 (sqlCI target) -> 21-Jun-2013 11:35:19 EXEC : sqlCI error occurred: RedGate.Deploy.SqlServerDbPackage.Shared.Exceptions.InvalidSqlException: Test Case Summary: 1 test case(s) executed, 0 succeeded, 1 failed, 0 errored. [C:\Users\Administrator\bamboo-home\xml-data\build-dir\RGA-RGP-JOB1\sqlCI.proj] 21-Jun-2013 11:35:19 EXEC : sqlCI error occurred: [MyChecks].[test Check Email Addresses] failed: [C:\Users\Administrator\bamboo-home\xml-data\build-dir\RGA-RGP-JOB1\sqlCI.proj] 21-Jun-2013 11:35:19 EXEC : sqlCI error occurred: ringo.starr@beatles [C:\Users\Administrator\bamboo-home\xml-data\build-dir\RGA-RGP-JOB1\sqlCI.proj] 21-Jun-2013 11:35:19 EXEC : sqlCI error occurred: [C:\Users\Administrator\bamboo-home\xml-data\build-dir\RGA-RGP-JOB1\sqlCI.proj] 21-Jun-2013 11:35:19 EXEC : sqlCI error occurred: +----------------------+ [C:\Users\Administrator\bamboo-home\xml-data\build-dir\RGA-RGP-JOB1\sqlCI.proj] 21-Jun-2013 11:35:19 EXEC : sqlCI error occurred: |Test Execution Summary| [C:\Users\Administrator\bamboo-home\xml-data\build-dir\RGA-RGP-JOB1\sqlCI.proj]   As a final check, we should make sure that, if we now fix this error, the build succeeds. So in SSMS, I’m going to correct the invalid email address, then check this change in to SQL Source Control (with a comment), commit to GitHub, and re-run the build:   This should have fixed the build: It worked! Summary This has been a very quick run through the implementation of CI for databases, including tSQLt tests to test whether your database updates are working. The next post in this series will focus on automated deployment – we’ve tested our database changes, how can we now deploy these to target sites?  

    Read the article

  • Dynamic XAP loading in Task-It - Part 1

    Download Source Code NOTE 1: The source code provided is running against the RC versions of Silverlight 4 and VisualStudio 2010, so you will need to update to those bits to run it. NOTE 2: After downloading the source, be sure to set the .Web project as the StartUp Project, and Default.aspx as the Start Page In my MEF into post, MEF to the rescue in Task-It, I outlined a couple of issues I was facing and explained why I chose MEF (the Managed Extensibility Framework) to solve these issues. Other posts to check out There are a few other resources out there around dynamic XAP loading that you may want to review (by the way, Glenn Block is the main dude when it comes to MEF): Glenn Blocks 3-part series on a dynamically loaded dashboard Glenn and John Papas Silverlight TV video on dynamic xap loading These provide some great info, but didnt exactly cover the scenario I wanted to achieve in Task-Itand that is dynamically loading each of the apps pages the first time the user enters a page. The code In the code I provided for download above, I created a simple solution that shows the technique I used for dynamic XAP loading in Task-It, but without all of the other code that surrounds it. Taking all that other stuff away should make it easier to grasp. Having said that, there is still a fair amount of code involved. I am always looking for ways to make things simpler, and to achieve the desired result with as little code as possible, so if I find a better/simpler way I will blog about it, but for now this technique works for me. When I created this solution I started by creating a new Silverlight Navigation Application called DynamicXAP Loading. I then added the following line to my UriMappings in MainPage.xaml: <uriMapper:UriMapping Uri="/{assemblyName};component/{path}" MappedUri="/{assemblyName};component/{path}"/> In the section of MainPage.xaml that produces the page links in the upper right, I kept the Home link, but added a couple of new ones (page1 and page 2). These are the pages that will be dynamically (lazy) loaded: <StackPanel x:Name="LinksStackPanel" Style="{StaticResource LinksStackPanelStyle}">      <HyperlinkButton Style="{StaticResource LinkStyle}" NavigateUri="/Home" TargetName="ContentFrame" Content="home"/>      <Rectangle Style="{StaticResource DividerStyle}"/>      <HyperlinkButton Style="{StaticResource LinkStyle}" Content="page 1" Command="{Binding NavigateCommand}" CommandParameter="{Binding ModulePage1}"/>      <Rectangle Style="{StaticResource DividerStyle}"/>      <HyperlinkButton Style="{StaticResource LinkStyle}" Content="page 2" Command="{Binding NavigateCommand}" CommandParameter="{Binding ModulePage2}"/>  </StackPanel> In App.xaml.cs I added a bit of MEF code. In Application_Startup I call a method called InitializeContainer, which creates a PackageCatalog (a MEF thing), then I create a CompositionContainer and pass it to the CompositionHost.Initialize method. This is boiler-plate MEF stuff that allows you to do 'composition' and import 'packages'. You're welcome to do a bit more MEF research on what is happening here if you'd like, but for the purpose of this example you can just trust that it works. 
private void Application_Startup(object sender, StartupEventArgs e)
{
    InitializeContainer();
    this.RootVisual = new MainPage();
}

private static void InitializeContainer()
{
    var catalog = new PackageCatalog();
    catalog.AddPackage(Package.Current);
    var container = new CompositionContainer(catalog);
    container.ComposeExportedValue(catalog);
    CompositionHost.Initialize(container);
}

Infrastructure

In the sample code you'll notice that there is a project in the solution called DynamicXAPLoading.Infrastructure. This is simply a Silverlight Class Library project that I created just to move stuff I considered application 'infrastructure' code into a separate place, rather than cluttering the main Silverlight project (DynamicXapLoading). I did this same thing in Task-It, as the amount of this type of code was starting to clutter up the Silverlight project, and it just seemed to make sense to move things like Enums, Constants and the like off to a separate place. In the DynamicXapLoading.Infrastructure project you'll see 3 classes:

Enums - There is only one enum in here called ModuleEnum. We'll use these later.
PageMetadata - We will use this class later to add metadata to a new dynamically loaded project.
ViewModelBase - This is simply a base class for view models that we will use in this, as well as future samples. As mentioned in my MVVM post, I will be using the MVVM pattern throughout my code for reasons detailed in the post.

By the way, the ViewModelExtension class in there allows me to do strongly-typed property changed notification, so rather than OnPropertyChanged("MyProperty"), I can do this.OnPropertyChanged(p => p.MyProperty). It's just a less error-prone approach, because if you don't spell "MyProperty" correctly using the first method, nothing will break, it just won't work.

Adding a new page

We currently have a couple of pages that are being dynamically (lazy) loaded, but now let's add a third page.

1. First, create a new Silverlight Application project. In this example I call it Page3. In the future you may prefer to use a different name, like DynamicXAPLoading.Page3, or even DynamicXAPLoading.Modules.Page3. It can be whatever you want. In my Task-It application I used the latter approach (with 'Modules' in the name). I do think of these applications as 'modules', but Prism uses the same term, so some folks may not like that. Use whichever naming convention you feel is appropriate, but for now Page3 will do. When you change the name to Page3 and click OK, you will be presented with the Add New Project dialog. It is important that you leave the 'Host the Silverlight application in a new or existing Web site in the solution' checkbox checked, and that the .Web project is selected in the dropdown below. This will create the .xap file for this project under ClientBin in the .Web project, which is where we want it.

2. Uncheck the 'Add a test page that references the application' checkbox, and leave everything else as is.

3. Once the project is created, you can delete App.xaml and MainPage.xaml.

4. You will need to add references in your new project to the following:

DynamicXAPLoading.Infrastructure.dll (this is a Project reference)
DynamicNavigation.dll (this is in the Libs directory under the DynamicXAPLoading project)
System.ComponentModel.Composition.dll
System.ComponentModel.Composition.Initialization.dll
System.Windows.Controls.Navigation.dll

If you have installed the latest RC bits you will find the last 3 dll's under the .NET tab in the Add Reference dialog.
They live in the following location, or if you are on a 64-bit machine like me, it will be Program Files (x86).       C:\Program Files\Microsoft SDKs\Silverlight\v4.0\Libraries\Client Now let's create some UI for our new project. 5. First, create a new Silverlight User Control called Page3.dyn.xaml 6. Paste the following code into the xaml: <dyn:DynamicPageShim xmlns="http://schemas.microsoft.com/winfx/2006/xaml/presentation"     xmlns:dyn="clr-namespace:DynamicNavigation;assembly=DynamicNavigation"     xmlns:my="clr-namespace:Page3;assembly=Page3">     <my:Page3Host /> </dyn:DynamicPageShim> This is just a 'shim', part of David Poll's technique for dynamic loading. 7. Expand the icon next to Page3.dyn.xaml and delete the code-behind file (Page3.dyn.xaml.cs). 8. Next we will create a control that will 'host' our page. Create another Silverlight User Control called Page3Host.xaml and paste in the following XAML: <dyn:DynamicPage x:Class="Page3.Page3Host"     xmlns="http://schemas.microsoft.com/winfx/2006/xaml/presentation"     xmlns:x="http://schemas.microsoft.com/winfx/2006/xaml"     xmlns:d="http://schemas.microsoft.com/expression/blend/2008"     xmlns:mc="http://schemas.openxmlformats.org/markup-compatibility/2006"     xmlns:dyn="clr-namespace:DynamicNavigation;assembly=DynamicNavigation"     xmlns:Views="clr-namespace:Page3.Views"      mc:Ignorable="d"     d:DesignHeight="300" d:DesignWidth="400"     Title="Page 3">       <Views:Page3/>   </dyn:DynamicPage> 9. Now paste the following code into the code-behind for this control: using DynamicXAPLoading.Infrastructure;   namespace Page3 {     [PageMetadata(NavigateUri = "/Page3;component/Page3.dyn.xaml", Module = Enums.Page3)]     public partial class Page3Host     {         public Page3Host()         {             InitializeComponent();         }     } } Notice that we are now using that PageMetadata custom attribute class that we created in the Infrastructure project, and setting its two properties. NavigateUri - This tells it that the assembly is called Page3 (with a slash beforehand), and the page we want to load is Page3.dyn.xaml...our 'shim'. That line we added to the UriMapper in MainPage.xaml will use this information to load the page. Module - This goes back to that ModuleEnum class in our Infrastructure project. However, setting the Module to ModuleEnum.Page3 will cause a compilation error, so... 10. Go back to that Enums.cs under the Infrastructure project and add a 3rd entry for Page3: public enum ModuleEnum {     Page1,     Page2,     Page3 } 11. Now right-click on the Page3 project and add a folder called Views. 12. Right-click on the Views folder and create a new Silverlight User Control called Page3.xaml. We won't bother creating a view model for this User Control as I did in the Page 1 and Page 2 projects, just for the sake of simplicity. Feel free to add one if you'd like though, and copy the code from one of those other projects. Right now those view models aren't really doing anything anyway...though they will in my next post. :-) 13. 
Now let's replace the xaml for Page3.xaml with the following: <dyn:DynamicPage x:Class="Page3.Views.Page3"     xmlns="http://schemas.microsoft.com/winfx/2006/xaml/presentation"     xmlns:x="http://schemas.microsoft.com/winfx/2006/xaml"     xmlns:d="http://schemas.microsoft.com/expression/blend/2008"     xmlns:mc="http://schemas.openxmlformats.org/markup-compatibility/2006"     xmlns:dyn="clr-namespace:DynamicNavigation;assembly=DynamicNavigation"     mc:Ignorable="d"     d:DesignHeight="300" d:DesignWidth="400"     Style="{StaticResource PageStyle}">       <Grid x:Name="LayoutRoot">         <ScrollViewer x:Name="PageScrollViewer" Style="{StaticResource PageScrollViewerStyle}">             <StackPanel x:Name="ContentStackPanel">                 <TextBlock x:Name="HeaderText" Style="{StaticResource HeaderTextStyle}" Text="Page 3"/>                 <TextBlock x:Name="ContentText" Style="{StaticResource ContentTextStyle}" Text="Page 3 content"/>             </StackPanel>         </ScrollViewer>     </Grid>   </dyn:DynamicPage> 14. And in the code-behind remove the inheritance from UserControl, so it should look like this: namespace Page3.Views {     public partial class Page3     {         public Page3()         {             InitializeComponent();         }     } } One thing you may have noticed is that the base class for the last two User Controls we created is DynamicPage. Once again, we are using the infrastructure that David Poll created. 15. OK, a few last things. We need a link on our main page so that we can access our new page. In MainPage.xaml let's update our links to look like this: <StackPanel x:Name="LinksStackPanel" Style="{StaticResource LinksStackPanelStyle}">     <HyperlinkButton Style="{StaticResource LinkStyle}" NavigateUri="/Home" TargetName="ContentFrame" Content="home"/>     <Rectangle Style="{StaticResource DividerStyle}"/>     <HyperlinkButton Style="{StaticResource LinkStyle}" Content="page 1" Command="{Binding NavigateCommand}" CommandParameter="{Binding ModulePage1}"/>     <Rectangle Style="{StaticResource DividerStyle}"/>     <HyperlinkButton Style="{StaticResource LinkStyle}" Content="page 2" Command="{Binding NavigateCommand}" CommandParameter="{Binding ModulePage2}"/>     <Rectangle Style="{StaticResource DividerStyle}"/>     <HyperlinkButton Style="{StaticResource LinkStyle}" Content="page 3" Command="{Binding NavigateCommand}" CommandParameter="{Binding ModulePage3}"/> </StackPanel> 16. Next, we need to add the following at the bottom of MainPageViewModel in the ViewModels directory of our DynamicXAPLoading project: public ModuleEnum ModulePage3 {     get { return ModuleEnum.Page3; } } 17. And at last, we need to add a case for our new page to the switch statement in MainPageViewModel: switch (module) {     case ModuleEnum.Page1:         DownloadPackage("Page1.xap");         break;     case ModuleEnum.Page2:         DownloadPackage("Page2.xap");         break;     case ModuleEnum.Page3:         DownloadPackage("Page3.xap");         break;     default:         break; } Now fire up the application and click the page 1, page 2 and page 3 links. What you'll notice is that there is a 2-second delay the first time you hit each page. That is because I added the following line to the Navigate method in MainPageViewModel: Thread.Sleep(2000); // Simulate a 2 second initial loading delay The reason I put this in there is that I wanted to simulate a delay the first time the page loads (as the .xap is being downloaded from the server). 
You'll notice, though, that after the first hit to the page there is no delay...that's because the .xap has already been downloaded. Feel free to comment out this 2-second delay, or remove it if you'd like. I just wanted to show how subsequent hits to the page would be quicker than the initial one. By the way, you may want to display some sort of BusyIndicator while the .xap is loading. I have that in my Task-It application, but for the sake of simplicity I did not include it here. In the future I'll blog about how I show and hide the BusyIndicator using events (I'm currently using the eventing framework in Prism for that, but may move to the one in the MVVM Light Toolkit some time soon).

Whew, that felt like a lot of steps, but it does work quite nicely. As I mentioned earlier, I'll try to find ways to simplify the code (I'd like to get away from having things like hard-coded .xap file names) and will blog about it in the future if I find a better way. In my next post, I'll talk more about what is actually happening with the code that makes this all work.
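One piece the excerpt uses but never shows is the PageMetadata attribute applied in step 9 above; it lives in the DynamicXAPLoading.Infrastructure project. Below is a minimal, hedged sketch of what such a MEF metadata export attribute might look like. The class name, base class and metadata view are assumptions inferred from how the attribute is used, not the author's actual code.

using System;
using System.ComponentModel.Composition;
using DynamicXAPLoading.Infrastructure; // assumption: the namespace that holds ModuleEnum

// Sketch only: a MEF metadata attribute whose properties are surfaced as export
// metadata, so a page can be located by NavigateUri/Module without instantiating it.
[MetadataAttribute]
[AttributeUsage(AttributeTargets.Class, AllowMultiple = false)]
public class PageMetadataAttribute : ExportAttribute
{
    // Relative URI of the 'shim' page inside the downloaded XAP,
    // e.g. "/Page3;component/Page3.dyn.xaml".
    public string NavigateUri { get; set; }

    // Which module this page belongs to (ModuleEnum lives in the Infrastructure project).
    public ModuleEnum Module { get; set; }
}

// On the import side, MEF can project this metadata onto a view interface,
// e.g. via Lazy<T, IPageMetadata>, so pages can be picked without loading them.
public interface IPageMetadata
{
    string NavigateUri { get; }
    ModuleEnum Module { get; }
}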

    Read the article

  • Change Tracking

    - by Ricardo Peres
You may recall my last post on Change Data Capture. This time I am going to talk about another option for tracking changes to tables on SQL Server: Change Tracking. The main differences between the two are:

Change Tracking works with SQL Server 2008 Express
Change Tracking does not require SQL Server Agent to be running
Change Tracking does not keep the old values in case of an UPDATE or DELETE
Change Data Capture uses an asynchronous process, so there is no overhead on each operation
Change Data Capture requires more storage and processing

Here's some code that illustrates its usage:

-- for demonstrative purposes, table Post of database Blog only contains two columns, PostId and Title
-- enable change tracking for database Blog, for 2 days
ALTER DATABASE Blog SET CHANGE_TRACKING = ON (CHANGE_RETENTION = 2 DAYS, AUTO_CLEANUP = ON);
-- enable change tracking for table Post
ALTER TABLE Post ENABLE CHANGE_TRACKING WITH (TRACK_COLUMNS_UPDATED = ON);
-- see current records on table Post
SELECT * FROM Post
SELECT * FROM sys.sysobjects WHERE name = 'Post'
SELECT * FROM sys.sysdatabases WHERE name = 'Blog'
-- confirm that table Post and database Blog are being change tracked
SELECT * FROM sys.change_tracking_tables
SELECT * FROM sys.change_tracking_databases
-- see current version for table Post
SELECT p.PostId, p.Title, c.SYS_CHANGE_VERSION, c.SYS_CHANGE_CONTEXT FROM Post AS p CROSS APPLY CHANGETABLE(VERSION Post, (PostId), (p.PostId)) AS c;
-- update post
UPDATE Post SET Title = 'First Post Title Changed' WHERE Title = 'First Post Title';
-- see current version for table Post
SELECT p.PostId, p.Title, c.SYS_CHANGE_VERSION, c.SYS_CHANGE_CONTEXT FROM Post AS p CROSS APPLY CHANGETABLE(VERSION Post, (PostId), (p.PostId)) AS c;
-- see changes since version 0 (initial)
SELECT p.Title, c.PostId, SYS_CHANGE_VERSION, SYS_CHANGE_OPERATION, SYS_CHANGE_COLUMNS, SYS_CHANGE_CONTEXT FROM CHANGETABLE(CHANGES Post, 0) AS c LEFT OUTER JOIN Post AS p ON p.PostId = c.PostId;
-- is column Title of table Post changed since version 0?
SELECT CHANGE_TRACKING_IS_COLUMN_IN_MASK(COLUMNPROPERTY(OBJECT_ID('Post'), 'Title', 'ColumnId'), (SELECT SYS_CHANGE_COLUMNS FROM CHANGETABLE(CHANGES Post, 0) AS c))
-- get current version
SELECT CHANGE_TRACKING_CURRENT_VERSION()
-- disable change tracking for table Post
ALTER TABLE Post DISABLE CHANGE_TRACKING;
-- disable change tracking for database Blog
ALTER DATABASE Blog SET CHANGE_TRACKING = OFF;

You can read about the differences between the two options here. Choose the one that best suits your needs!
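On the application side, the usual pattern is to remember the version returned by CHANGE_TRACKING_CURRENT_VERSION() after each synchronization and feed it back into CHANGETABLE(CHANGES ...) on the next pass. The following is a minimal ADO.NET sketch of that loop, not part of the original post; the connection string and the way lastSyncVersion is persisted are placeholders.

using System;
using System.Data.SqlClient;

class ChangeTrackingSync
{
    static void Main()
    {
        // Placeholder connection string; adjust to your environment.
        const string connectionString = "Data Source=.;Initial Catalog=Blog;Integrated Security=SSPI";

        long lastSyncVersion = 0; // in a real application this would be persisted between runs

        const string sql = @"
            SELECT p.Title, c.PostId, c.SYS_CHANGE_VERSION, c.SYS_CHANGE_OPERATION
            FROM CHANGETABLE(CHANGES Post, @lastSyncVersion) AS c
            LEFT OUTER JOIN Post AS p ON p.PostId = c.PostId;

            SELECT CHANGE_TRACKING_CURRENT_VERSION();";

        using (var connection = new SqlConnection(connectionString))
        using (var command = new SqlCommand(sql, connection))
        {
            command.Parameters.AddWithValue("@lastSyncVersion", lastSyncVersion);
            connection.Open();

            using (var reader = command.ExecuteReader())
            {
                // First result set: one row per Post changed since lastSyncVersion.
                while (reader.Read())
                {
                    // SYS_CHANGE_OPERATION is I, U or D; Title is NULL for deleted rows.
                    Console.WriteLine("{0} PostId={1} Title={2}",
                        reader["SYS_CHANGE_OPERATION"], reader["PostId"], reader["Title"]);
                }

                // Second result set: the version to store for the next synchronization.
                reader.NextResult();
                reader.Read();
                lastSyncVersion = reader.GetInt64(0);
            }
        }

        Console.WriteLine("Next sync should start from version {0}", lastSyncVersion);
    }
}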

    Read the article

  • Getting Started With nServiceBus on VAN Mar 31

    - by van
Topic: nServiceBus is a mature and powerful open source framework that enables you to design robust, scalable, message-based, service-oriented architectures. The latest improvements in the configuration API enable developers to get started quickly and build a simple working system that uses messaging infrastructure. The goal of this session is to give you a jump start with the framework, introduce basic concepts such as message handlers, Sagas, Pub/Sub and the Generic Host, and also create a working demo application that uses publish/subscribe messaging. The content of the session is aimed at developers who are interested in learning how to get started with nServiceBus in order to design and build distributed systems.

Bio: Bernard Kowalski is currently a Software Developer at Microdesk, one of Autodesk's leading partners in providing a variety of Geospatial and Computer-Aided Design solutions. Bernard has experience developing .NET framework-based applications utilizing Windows Forms, Windows Services, ASP.NET MVC, and Web services. In a recent project, Bernard architected and implemented a distributed system based on SOA principles using an open source implementation of an Enterprise Service Bus. Bernard develops software with Agile patterns and practices using Domain Driven Design combined with TDD (Test Driven Development). He is familiar with all of the following APIs: Autodesk Vault/Product Stream API, AutoCAD ActiveX/VBA/.NET API, AutoCAD Mechanical API, Autodesk Inventor API, Autodesk MapGuide Enterprise. Prior to joining Microdesk, Bernard worked as a researcher and teacher at the University of Science and Technology in Krakow, Poland, where he was awarded a PhD in Computer Methods in Materials Science. He also participated in research projects where he developed applications for analysis of hot compression test results using advanced optimization techniques, and he developed Finite Element Method-based programs for thermal and stress analysis using C++ and FORTRAN. Bernard is a member of the Domain Driven Design and ALT.NET user groups in NYC.

Virtual ALT.NET (VAN) is the online gathering place of the ALT.NET community. Through conversations, presentations, pair programming and dojos, we strive to improve, explore, and challenge the way we create software. Using net conferencing technology such as Skype and LiveMeeting, we hold regular meetings, open to anyone, usually taking the form of a presentation or an Open Space Technology-style conversation. Please see the Calendar (http://www.virtualaltnet.com/Home/Calendar) to find a VAN group that meets at a time convenient to you, and feel welcome to join a meeting. Past sessions can be found on the Recording page. To stay informed about VAN activities, you can subscribe to the Virtual ALT.NET Google Group and follow the Virtual ALT.NET blog.

Times below are Central Standard Time
Start Time: Wed, Mar 31, 2010 8:00 PM UTC/GMT -5 hours
End Time: Wed, Mar 31, 2010 10:00 PM UTC/GMT -5 hours
Attendee URL: http://www.virtualaltnet.com/van
Zach Young
http://www.virtualaltnet.com
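To give a flavour of the "message handlers" and Pub/Sub concepts the session covers, here is a small, hedged sketch of an NServiceBus subscriber roughly as it looked in the 2.x-era API this talk targets. The message and handler names are made up for illustration, and the hosting/configuration details are omitted, so verify against the version you actually use.

using System;
using NServiceBus;

// An event published by some other endpoint (the Pub in Pub/Sub).
public class OrderPlaced : IMessage
{
    public string OrderId { get; set; }
}

// A message handler: the framework discovers this class and invokes Handle
// whenever an OrderPlaced message arrives at this endpoint (the Sub side).
public class OrderPlacedHandler : IHandleMessages<OrderPlaced>
{
    // Injected by the framework so the handler can send or publish further messages.
    public IBus Bus { get; set; }

    public void Handle(OrderPlaced message)
    {
        Console.WriteLine("Order received: " + message.OrderId);
    }
}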

    Read the article

VS 2012 Code Review – Before Check In OR After Check In?

    - by Tarun Arora
“Is Code Review Important and Effective?”

There is a consensus across the industry that code review is an effective and practical way to catch code inconsistencies and possible defects early in the software development life cycle. Among others, some of the advantages of code reviews are:

Bugs are found faster
Forces developers to write readable code (code that can be read without explanation or introduction!)
Optimization methods/tricks/productive programs spread faster
Programmers as specialists "evolve" faster
It's fun

"Code review is systematic examination (often known as peer review) of computer source code. It is intended to find and fix mistakes overlooked in the initial development phase, improving both the overall quality of software and the developers' skills. Reviews are done in various forms such as pair programming, informal walkthroughs, and formal inspections." Wikipedia

Nowhere does the definition mention whether it's better to review code before the code has been committed to version control or after the commit has been performed. No matter which side you favour, Visual Studio 2012 allows you to request a code review both before check in and after check in. Let's weigh the pros and cons of each approach independently.

Code Review Before Check In or Code Review After Check In?

Approach 1 – Code Review before Check in

The developer completes the code and feels the code quality is appropriate for check in to TFS. The developer raises a code review request to have a second pair of eyes validate whether the code abides by the recommended best practices, will not result in any defects due to common coding mistakes, and whether any optimizations can be made to improve the code quality.

Image 1 – code review before check in

Pros
Everything that gets committed to source control is reviewed.
Minimizes the chances of smelly code making its way into the code base.
Decreases the cost of fixing bugs; remember, the earlier you find them, the less painful they are to fix.

Cons
Development code freeze – since the changes aren't in source control yet, further development can only be done offline.
The changes have not been through a CI build, so it's hard to say whether the code abides by all build quality standards.
Inconsistent! Cumbersome to track the actual code review process. Not every change to the code base is worth reviewing, so a lot of effort is invested for very little gain.

Approach 2 – Code Review after Check in

The developer checks in, and random code reviews are performed on the checked-in code.

Image 2 – Code review after check in

Pros
The code has already passed the CI build and run through any code analysis plug-ins you may have running on the build server.
Instruct the developer to ensure ZERO FxCop, StyleCop and static code analysis issues before check in. Code is cleaner and smell free even before the code review.
No offline development; developers can continue to develop against source control.

Cons
Bad code can easily make its way into the code base.
Since the review takes place much later in the cycle, the cost of fixing issues can prove to be much higher.

Approach 3 – Hybrid Approach

The community advocates a more hybrid approach, a blend of tooling and human accountability.

Image 3 – Hybrid Approach

1. Code review high-impact check-ins.
It is not possible to review everything; by setting up code review check-in policies you can end up slowing your team down. Moreover, the code that you are reviewing before check in hasn't even been through a green CI build either.

2. Tooling. Let the tooling work for you. By running static analysis, FxCop, StyleCop and other plug-ins on the build agent, you can identify the real issues that, in my opinion, can't possibly be identified using human reviews. Configure the tooling to report back the top 10 issues every day. Mandate a manual code review for individuals who keep making it onto this list of shame.

3. During Merge. I would prefer eliminating some of the other code issues during the merge from the Main branch to the release branch. In a scrum project this is still easier, because cherry-picking the merges is a possibility and the size of the code being reviewed is still limited.

Let the tooling work for you: if someone breaks the CI build often, put them on a gated check-in build course until you see improvement. If someone appears on the top 10 list of shame generated via the build, then ensure that all their code is reviewed till you see improvement. At the end of the day, the goal is to ensure that the code being delivered is top quality. By enforcing a code review before any check in, you force the developer to work offline or stay put till the review is complete.

What do the experts say?

So I asked a few experts what they thought of "Code Review quality gate before Checking in code?"

Terje Sandstrom | Microsoft ALM MVP

You mean a review quality gate BEFORE checking in code????? That would mean a lot of code staying either local or in shelvesets, and not even been through a CI build, and a green CI build being the main criteria for going further, f.e. to the review state. I would not like code laying around with no checkin’s. Having a requirement that code is checked in small pieces, 4-8 hours work max, and AT LEAST daily checkins, a manual code review comes second down the lane. I would expect review quality gates to happen before merging back to main, or before merging to release. But that would all be on checked-in code. Branching is absolutely one way to ease the pain. Another way we are using is automatic quality builds, running metrics, coverage, static code analysis. Unfortunately it takes some time, would be great to be on CI’s – but…., so it’s done scheduled every night. Based on this we get, among other stuff, top 10 lists of suspicious code, which is then subjected to reviews. If a person seems to be very popular on these top 10 lists, we subject every check in from that person to a review for a period. That normally helps. None of the clients I have can afford to have every checkin reviewed, so we need to find ways around it. I don’t disagree with the nicety of having all the code reviewed, but I find it hard to find those resources in today’s enterprises.

David V. Corbin | Visual Studio ALM Ranger

I tend to agree with both sides. I hate having code that is not checked in, but at the same time hate having “bad” code in the repository. I have found that branching is one approach to solving this dilemma. Code is checked into the private/feature branch before the review, but is not merged over to the “official” branch until after the review. I advocate both, depending on circumstance (especially team dynamics)
- The “pre-checkin” is usually for elements that may impact the project as a whole. Think of it as another “gate” along with passing unit tests.
- The “post-checkin” may very well not be at the changeset level, but correlates to a review at the “user story” level.   Again, this depends on team dynamics in play…. Robert MacLean | Microsoft ALM MVP I do not think there is no right answer for the industry as a whole. In short the question is why do you do reviews? Your question implies risk mitigation, so in low risk areas you can get away with it after check in while in high risk you need to do it before check in. An example is those new to a team or juniors need it much earlier (maybe that is before checkin, maybe that is soon after) than seniors who have shipped twenty sprints on the team. Abhimanyu Singhal | Visual Studio ALM Ranger Depends on per scenario basis. We recommend post check-in reviews when: 1. We don't want to block other checks and processes on manual code reviews. Manual reviews take time, and some pieces may not require manual reviews at all. 2. We need to trace all changes and track history. 3. We have a code promotion strategy/process in place. For risk mitigation, post checkin code can be promoted to Accepted branches. Or can be rejected. Pre Checkin Reviews are used when 1. There is a high risk factor associated 2. Reviewers are generally (most of times) have immediate availability. 3. Team does not have strict tracking needs. Simply speaking, no single process fits all scenarios. You need to select what works best for your team/project. Thomas Schissler | Visual Studio ALM Ranger This is an interesting discussion, I’m right now discussing details about executing code reviews with my teams. I see and understand the aspects you brought in, but there is another side as well, I’d like to point out. 1.) If you do reviews per check in this is not very practical as a hard rule because this will disturb the flow of the team very often or it will lead to reduce the checkin frequency of the devs which I would not accept. 2.) If you do later reviews, for example if you review PBIs, it is not easy to find out which code you should review. Either you review all changesets associate with the PBI, but then you might review code which has been changed with a later checkin and the dev maybe has already fixed the issue. Or you review the diff of the latest changeset of the PBI with the first but then you might also review changes of other PBIs. Jakob Leander | Sr. Director, Avanade In my experience, manual code review: 1. Does not get done and at the very least does not get redone after changes (regardless of intentions at start of project) 2. When a project actually do it, they often do not do it right away = errors pile up 3. Requires a lot of time discussing/defining the standard and for the team to learn it However code review is very important since e.g. even small memory leaks in a high volume web solution have big consequences In the last years I have advocated following approach for code review - Architects up front do “at least one best practice example” of each type of component and tell the team. Copy from this one. This should include error handling, logging, security etc. - Dev lead on project continuously browse code to validate that the best practices are used. Especially that patterns etc. are not broken. You can do this formally after each sprint/iteration if you want. Once this is validated it is unlikely to “go bad” even during later code changes Agree with customer to rely on static code analysis from Visual Studio as the one and only coding standard. 
This has HUUGE benefits - You can easily tweak to reach the level you desire together with customer - It is easy to measure for both developers/management - It is 100% consistent across code base - It gets validated all the time so you never end up getting hammered by a customer review in the end - It is easy to tell the developer that you do not want code back unless it has zero errors = minimize communication You need to track this at least during nightly builds and make sure team sees total # issues. Do not allow #issues it to grow uncontrolled. On the project I run I require code analysis to have run on code before checkin (checkin rule). This means -  You have to have clean compile (or CA wont run) so this is extra benefit = very few broken builds - You can change a few of the rules to compile as errors instead of warnings. I often do this for “missing dispose” issues which you REALLY do not want in your app Tip: Place your custom CA rules files as part of solution. That  way it works when you do branching etc. (path to CA file is relative in VS) Some may argue that CA is not as good as manual inspection. But since manual inspection in reality suffers from the 3 issues in start it is IMO a MUCH better (and much cheaper) approach from helicopter perspective Tirthankar Dutta | Director, Avanade I think code review should be run both before and after check ins. There are some code metrics that are meant to be run on the entire codebase … Also, especially on multi-site projects, one should strive to architect in a way that lets men manage the framework while boys write the repetitive code… scales very well with the need to review less by containment and imposing architectural restrictions to emphasise the design. Bruno Capuano | Microsoft ALM MVP For code reviews (means peer reviews) in distributed team I use http://www.vsanywhere.com/default.aspx  David Jobling | Global Sr. Director, Avanade Peer review is the only way to scale and its a great practice for all in the team to learn to perform and accept. In my experience you soon learn who's code to watch more than others and tune the attention. Mikkel Toudal Kristiansen | Manager, Avanade If you have several branches in your code base, you will need to merge often. This requires manual merging, when a file has been changed in both branches. It offers a good opportunity to actually review to changed code. So my advice is: Merging between branches should be done as often as possible, it should be done by a senior developer, and he/she should perform a full code review of the code being merged. As for detecting architectural smells and code smells creeping into the code base, one really good third party tools exist: Ndepend (http://www.ndepend.com/, for static code analysis of the current state of the code base). You could also consider adding StyleCop to the solution. Jesse Houwing | Visual Studio ALM Ranger I gave a presentation on this subject on the TechDays conference in NL last year. See my presentation and slides here (talk in Dutch, but English presentation): http://blog.jessehouwing.nl/2012/03/did-you-miss-my-techdaysnl-talk-on-code.html  I’d like to add a few more points: - Before/After checking is mostly a trust issue. If you have a team that does diligent peer reviews and regularly talk/sit together or peer review, there’s no need to enforce a before-checkin policy. The peer peer-programming and regular feedback during development can take care of most of the review requirements as long as the team isn’t under stress. 
- Under stress, enforce pre-checkin reviews, it might sound strange, if you’re already under time or budgetary constraints, but it is under such conditions most real issues start to be created or pile up. - Use tools to catch most common errors, Code Analysis/FxCop was already mentioned. HP Fortify, Resharper, Coderush etc can help you there. There are also a lot of 3rd party rules you can add to Code Analysis. I’ve written a few myself (http://fccopcontrib.codeplex.com) and various teams from Microsoft have added their own rules (MSOCAF for SharePoint, WSSF for WCF). For common errors that keep cropping up, see if you can define a rule. It’s much easier. But more importantly make sure you have a good help page explaining *WHY* it's wrong. If you have small feature or developer branches/shelvesets, you might want to review pre-merge. It’s still better to do peer reviews and peer programming, but the most important thing is that bad quality code doesn’t make it into the important branch. So my philosophy: - Use tooling as much as possible. - Make sure the team understands the tooling and the importance of the things it flags. It’s too easy to just click suppress all to ignore the warnings. - Under stress, tighten process, it’s under stress that the problems of late reviews will really surface - Most importantly if you do reviews do them as early as possible, but never later than needed. In other words, pre-checkin/post checking doesn’t really matter, as long as the review is done before the code is released. It’ll just be much more expensive to fix any review outcomes the later you find them. --- I would love to hear what you think!
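Several of the responses above lean on tooling: check-in policies, gated builds and Code Analysis rules. As a rough illustration of the "code review check-in policy" idea, here is a hedged sketch of a custom TFS check-in policy modelled on the classic PolicyBase samples. The member names follow the Microsoft.TeamFoundation.VersionControl.Client API as I recall it and the "mention a review in the comment" convention is invented for the example, so treat this as a starting point to verify against the SDK for your TFS version, not as code from the post.

using System;
using Microsoft.TeamFoundation.VersionControl.Client;

// Sketch of a check-in policy that blocks check-ins whose comment does not
// reference a completed code review (the convention here is an assumption).
[Serializable]
public class CodeReviewPolicy : PolicyBase
{
    public override string Description
    {
        get { return "Requires the check-in comment to reference a code review."; }
    }

    public override string Type
    {
        get { return "Code Review Policy"; }
    }

    public override string TypeDescription
    {
        get { return "Prevents check-ins that do not mention a completed code review."; }
    }

    public override bool Edit(IPolicyEditArgs policyEditArgs)
    {
        return true; // no configuration UI in this sketch
    }

    public override PolicyFailure[] Evaluate()
    {
        string comment = PendingCheckin.PendingChanges.Comment ?? string.Empty;

        if (comment.IndexOf("review", StringComparison.OrdinalIgnoreCase) < 0)
        {
            return new[]
            {
                new PolicyFailure("Please reference the completed code review in the check-in comment.", this)
            };
        }

        return new PolicyFailure[0];
    }
}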

    Read the article

  • Dynamic Filtering

    - by Ricardo Peres
Continuing my previous posts on dynamic LINQ, now it's time for dynamic filtering. For now, I'll focus on string matching. There are four standard operators for string matching, which NHibernate, Entity Framework and LINQ to SQL all recognize:

Equals
Contains
StartsWith
EndsWith

So, if we want to apply filtering by one of these operators on a string property, we can use this code:

public enum MatchType
{
    StartsWith = 0,
    EndsWith = 1,
    Contains = 2,
    Equals = 3
}

public static List<T> Filter<T>(IEnumerable<T> enumerable, String propertyName, String filter, MatchType matchType)
{
    return (Filter(enumerable, typeof(T), propertyName, filter, matchType) as List<T>);
}

public static IList Filter(IEnumerable enumerable, Type elementType, String propertyName, String filter, MatchType matchType)
{
    MethodInfo asQueryableMethod = typeof(Queryable).GetMethods(BindingFlags.Static | BindingFlags.Public).Where(m => (m.Name == "AsQueryable") && (m.ContainsGenericParameters == false)).Single();
    IQueryable query = (enumerable is IQueryable) ? (enumerable as IQueryable) : asQueryableMethod.Invoke(null, new Object[] { enumerable }) as IQueryable;
    MethodInfo whereMethod = typeof(Queryable).GetMethods(BindingFlags.Public | BindingFlags.Static).Where(m => m.Name == "Where").ToArray()[0].MakeGenericMethod(elementType);
    MethodInfo matchMethod = typeof(String).GetMethod
    (
        (matchType == MatchType.StartsWith) ? "StartsWith" : (matchType == MatchType.EndsWith) ? "EndsWith" : (matchType == MatchType.Contains) ? "Contains" : "Equals",
        new Type[] { typeof(String) }
    );
    PropertyInfo displayProperty = elementType.GetProperty(propertyName, BindingFlags.Public | BindingFlags.Instance);
    MemberExpression member = Expression.MakeMemberAccess(Expression.Parameter(elementType, "n"), displayProperty);
    MethodCallExpression call = Expression.Call(member, matchMethod, Expression.Constant(filter));
    LambdaExpression where = Expression.Lambda(call, member.Expression as ParameterExpression);
    query = whereMethod.Invoke(null, new Object[] { query, where }) as IQueryable;
    MethodInfo toListMethod = typeof(Enumerable).GetMethod("ToList", BindingFlags.Static | BindingFlags.Public).MakeGenericMethod(elementType);
    IList list = toListMethod.Invoke(null, new Object[] { query }) as IList;
    return (list);
}

var list = new[] { new { A = "aa" }, new { A = "aabb" }, new { A = "ccaa" }, new { A = "ddaadd" } };
var contains = Filter(list, "A", "aa", MatchType.Contains);
var endsWith = Filter(list, "A", "aa", MatchType.EndsWith);
var startsWith = Filter(list, "A", "aa", MatchType.StartsWith);
var equals = Filter(list, "A", "aa", MatchType.Equals);

Perhaps I'll write some more posts on this subject in the near future.
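A follow-up thought, not from the original post: when the element type is known at compile time, the same idea can be expressed without any reflection Invoke calls by building a typed predicate directly. A small sketch, reusing the MatchType enum above:

using System;
using System.Linq;
using System.Linq.Expressions;

public static class DynamicFilterExtensions
{
    // Builds n => n.<propertyName>.<StartsWith|EndsWith|Contains|Equals>(filter)
    // and passes it to Queryable.Where, so providers such as Entity Framework
    // can still translate it to SQL.
    public static IQueryable<T> Filter<T>(this IQueryable<T> source, String propertyName, String filter, MatchType matchType)
    {
        var parameter = Expression.Parameter(typeof(T), "n");
        var property = Expression.Property(parameter, propertyName);

        var methodName =
            (matchType == MatchType.StartsWith) ? "StartsWith" :
            (matchType == MatchType.EndsWith) ? "EndsWith" :
            (matchType == MatchType.Contains) ? "Contains" : "Equals";

        var method = typeof(String).GetMethod(methodName, new[] { typeof(String) });
        var body = Expression.Call(property, method, Expression.Constant(filter));
        var predicate = Expression.Lambda<Func<T, Boolean>>(body, parameter);

        return source.Where(predicate);
    }
}

// Usage, mirroring the example above:
// var contains = list.AsQueryable().Filter("A", "aa", MatchType.Contains).ToList();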

    Read the article
