
  • Need for J2me source code

    - by tikamchandrakar
    For J2ME, it strikes me as odd that you need an extra "api key" and so on. What I really want is NOT to create an extra Facebook application that needs to be registered on Facebook. I don't want any extra configuration effort that the user of my application has to go through. All my user should need is his familiar Facebook login data. Everything else should be completely transparent to him. So, I thought maybe the login process could be done by creating a request to the REST server via HTTP. I know this would provide me with XML. I hope that this API will somehow automatically transform that XML into an intuitive object model that represents the Facebook user data of the respective user. So, I would expect something like userData = new FacebookData(new FacebookConnection("user_name", "password")). Done. If you get what I mean: no API key, no secret key, just the well-known login data. Practically, the equivalent of Thunderbird Webmail, which allows you to access your MSN Hotmail account via Thunderbird. Thunderbird Webmail automatically converts the HTML obtained from a Hotmail browser login into the data structure usually passed on to a mail client. Hope you get what I mean. I was expecting the equivalent for your API.

    Read the article

  • filtering search results with php

    - by fl3x7
    Hello, I can't really find any useful information on this through Google, so I hope someone here with some knowledge can help. I have a set of results which are pulled from a multi-dimensional array. Currently the array key is the price of a product, whilst the item contains another array holding all the product details: key=>Item(name=>test, foo=>bar). So currently when I list the items I just order by the key, smallest first, and it lists the products smallest price first. However, I want to build on this so that when a user sees the results they can choose other ordering options - list all products by name, by a certain manufacturer, colour, x, y, z etc. - from a drop-down box (or something similar). This is where I need some guidance. I'm just not sure how to go about it, or what best practice is. The only way I can think of is to order all the items by the nested array values, e.g. by the name, manufacturer etc., but how do I do that in PHP? Hope you understand what I'm trying to achieve (if not, just ask). Any help on this with ideas, approaches or examples would be great. Thanks for reading. P.S. I'm using PHP 5.

    Read the article

  • Need access to views on the OnSeekBarChangeListener (which is in an own class)

    - by sandkasten
    First of all, I hope you understand my English because I'm not a native speaker. Okay, I'm new to Android development and am trying the following: For my app I need a SeekBar, so I create a SeekBar via XML and implement an OnSeekBarChangeListener. In the company I work for it's forbidden (because of the style guide) to create something like this: seekbar.setOnSeekBarChangeListener(new OnSeekBarChangeListener() { @Override public void onProgressChanged(SeekBar arg0, int arg1, boolean arg2) { /// Do something } ... }); So I need to create my own class for the OnSeekBarChangeListener. So far, no problem. public class SeekBarChangeListener extends SeekBar implements SeekBar.OnSeekBarChangeListener { public SeekBarChangeListener(Context context) { super(context); } public void onProgressChanged(SeekBar seekBar, int progress, boolean fromUser) { } public void onStartTrackingTouch(SeekBar seekBar) { } public void onStopTrackingTouch(SeekBar seekBar) { /// Do something. The following code won't work: CheckBox RemeberUsername = (CheckBox)findViewById(R.id.RemeberUsername); /// Always gets NULL } } I need a way to get access to some controls. Normally findViewById works fine, but not in this case (which I can totally understand, because how should the listener know about the views?). Any good hints? Or is there no other way than the first code snippet to get to the controls? Hope someone can help me out.

    Read the article

  • Where does ASP.NET Web API Fit?

    - by Rick Strahl
    With the pending release of ASP.NET MVC 4 and the new ASP.NET Web API, there has been a lot of discussion of where the new Web API technology fits in the ASP.NET Web stack. There are a lot of choices for building HTTP based applications available now on the stack - we've come a long way from when WebForms and Http Handlers/Modules were the only real options. Today we have WebForms, MVC, ASP.NET Web Pages, ASP.NET AJAX, WCF REST and now Web API, as well as the core ASP.NET runtime, to choose from to build HTTP content with. Web API squarely addresses the 'API' aspect - building consumable services - rather than HTML content, but even to that end there are a lot of choices you have today. So where does Web API fit, and when doesn't it? But before we get into that discussion, let's talk about what a Web API is and why we should care.

    What's a Web API?

    HTTP 'APIs' (Microsoft's new terminology for a service, I guess) are becoming increasingly important with the rise of the many devices in use today. Most mobile devices like phones and tablets run apps that use data retrieved from the Web over HTTP. Desktop applications are also moving in this direction, with more and more online content and synching moving into even traditional desktop applications. The pending Windows 8 release promises an app-like platform for both the desktop and other devices that also emphasizes consuming data from the Cloud. Likewise, many Web browser hosted applications these days rely on rich client functionality to create and manipulate the browser user interface, using AJAX rather than server generated HTML to load up the user interface with data. These mobile or rich Web applications typically use their HTTP connection to return data in the form of JSON or XML rather than HTML markup. But an API can also serve other kinds of data, like images or other binary files, or even text data and HTML (although that's less common). A Web API is what feeds rich applications with data. ASP.NET Web API aims to service this particular segment of Web development by providing easy semantics to route and handle incoming requests, and an easy to use platform to serve HTTP data in just about any content format you choose to create and serve from the server.

    But .NET already has various HTTP Platforms

    The .NET stack already includes a number of technologies that provide the ability to create HTTP service back ends, and it has done so since the very beginnings of the .NET platform. From raw HTTP Handlers and Modules in the core ASP.NET runtime, to high level platforms like ASP.NET MVC, Web Forms, ASP.NET AJAX and the WCF REST engine (which technically is not ASP.NET, but can integrate with it), you've always been able to handle just about any kind of HTTP request and response with ASP.NET. The beauty of the raw ASP.NET platform is that it provides you everything you need to build just about any type of HTTP application you can dream up, from low level APIs/custom engines to high level HTML generation engines. ASP.NET as a core platform has clearly stood the test of time 10+ years later, and all other frameworks like Web API are built on top of this ASP.NET core. However, although it's possible to create Web APIs / Services using any of the existing out of box .NET technologies, none of them have been a really nice fit for building arbitrary HTTP based APIs.
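    To make the plumbing point concrete, here is a rough sketch (my own illustration, not from the original article) of the kind of hand-rolled code a raw HttpHandler-based JSON endpoint typically needs - verb checks, status codes, content type and serialization are all left to you:

        using System.Net;
        using System.Web;
        using System.Web.Script.Serialization;

        // A minimal hand-rolled JSON endpoint. Everything - verb handling, status
        // codes, content type and serialization - is the handler author's job.
        public class AlbumsHandler : IHttpHandler
        {
            public bool IsReusable
            {
                get { return true; }
            }

            public void ProcessRequest(HttpContext context)
            {
                if (context.Request.HttpMethod != "GET")
                {
                    context.Response.StatusCode = (int)HttpStatusCode.MethodNotAllowed;
                    return;
                }

                var albums = new[]
                {
                    new { Id = 1, Title = "Album One" },
                    new { Id = 2, Title = "Album Two" }
                };

                // No content negotiation here - this handler always answers in JSON.
                context.Response.ContentType = "application/json";
                context.Response.Write(new JavaScriptSerializer().Serialize(albums));
            }
        }

    The handler also still has to be registered in web.config, and anything beyond a single endpoint (routing, input binding, multiple output formats) is more code of the same kind.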
    Sure, you can use an HttpHandler to create just about anything, but you have to build a lot of plumbing to build something more complex like a comprehensive API that serves a variety of requests, handles multiple output formats and can easily pass data up to the server in a variety of ways. Likewise you can use ASP.NET MVC to handle routing and creating content in various formats fairly easily, but it doesn't provide a great way to automatically negotiate content types and serve various content formats directly (it's possible to do with some plumbing code of your own, but it's not built in). Prior to Web API, Microsoft's main push for HTTP services was WCF REST, which was always an awkward technology that had a severe personality conflict, not being clear on whether it wanted to be part of WCF or purely a separate technology. In the end it didn't do either WCF compatibility or WCF agnostic pure HTTP operation very well, which made for a very developer-unfriendly environment. Personally I didn't like any of the implementations at the time, so much so that I ended up building my own HTTP service engine (as part of the West Wind Web Toolkit), as have a few other third parties whose tools provided much better integration and ease of use. With the release of Web API, for the first time I feel that I can finally use the tools in the box and not have to worry about creating and maintaining my own toolkit, as Web API addresses just about all the features I implemented on my own and much more.

    ASP.NET Web API provides a better HTTP Experience

    ASP.NET Web API differentiates itself from the previous Microsoft in-box HTTP service solutions in that it was built from the ground up around the HTTP protocol and its messaging semantics. Unlike WCF REST or ASP.NET AJAX with ASMX, it's a brand new platform rather than a bolted-on technology that is supposed to work in the context of an existing framework. The strength of the new ASP.NET Web API is that it combines the best features of the platforms that came before it to provide a comprehensive and very usable HTTP platform. Because it's based on ASP.NET and borrows a lot of concepts from ASP.NET MVC, Web API should be immediately familiar and comfortable to most ASP.NET developers. Here are some of the features that Web API provides that I like:

    - Strong support for URL routing to produce clean URLs using familiar MVC-style routing semantics
    - Content negotiation based on Accept headers for request and response serialization
    - Support for a host of output formats including JSON, XML and ATOM
    - Strong default support for REST semantics, but they are optional
    - Easily extensible formatter support to add new input/output types
    - Deep support for more advanced HTTP features via the HttpResponseMessage and HttpRequestMessage classes and strongly typed enums to describe many HTTP operations
    - Convention based design that drives you into doing the right thing for HTTP services
    - Very extensible, based on an MVC-like extensibility model of Formatters and Filters
    - Self-hostable in non-Web applications
    - Testable using testing concepts similar to MVC

    Web API is meant to handle any kind of HTTP input and produce output and status codes using the full spectrum of HTTP functionality available in a straightforward and flexible manner. Looking at the list above you can see that a lot of functionality is very similar to ASP.NET MVC, so many ASP.NET developers should feel quite comfortable with the concepts of Web API.
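    To make the comparison concrete, here is a minimal sketch of what such a controller can look like. This is my own example, not code from the article - the AlbumsController name and Album type are made up - but it shows the MVC-style conventions and the content negotiation described above, assuming the default 'api/{controller}/{id}' route is registered via MapHttpRoute:

        using System.Collections.Generic;
        using System.Linq;
        using System.Net;
        using System.Net.Http;
        using System.Web.Http;

        public class Album
        {
            public int Id { get; set; }
            public string Title { get; set; }
        }

        // A plain ApiController: no manual serialization, no explicit content types.
        public class AlbumsController : ApiController
        {
            private static readonly List<Album> Albums = new List<Album>
            {
                new Album { Id = 1, Title = "Album One" },
                new Album { Id = 2, Title = "Album Two" }
            };

            // GET api/albums - serialized as JSON or XML based on the Accept header
            public IEnumerable<Album> Get()
            {
                return Albums;
            }

            // GET api/albums/1 - {id} is bound from the route template
            public Album Get(int id)
            {
                var album = Albums.FirstOrDefault(a => a.Id == id);
                if (album == null)
                    throw new HttpResponseException(HttpStatusCode.NotFound);
                return album;
            }

            // POST api/albums - model binding reads the request body
            public HttpResponseMessage Post(Album album)
            {
                Albums.Add(album);
                return Request.CreateResponse(HttpStatusCode.Created, album);
            }
        }

    The HTTP verb is inferred from the method name prefix (Get/Post), and status codes like 404 and 201 are expressed through HttpResponseException and Request.CreateResponse rather than hand-written response plumbing.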
    The routing and core infrastructure of Web API are very similar to how MVC works, providing many of the benefits of MVC, but with a focus on HTTP access and manipulation in controller methods rather than on MVC's HTML generation. There's much improved support for content negotiation based on HTTP Accept headers, with the framework capable of automatically detecting what content the client is sending and requesting, and serving the appropriate data format in return. This seems like such a little and obvious thing, but it's really important. Today's service backends are often used by multiple clients/applications, and being able to choose the data format that fits the client best is very important. While previous solutions were able to accomplish this using a variety of mixed features of WCF and ASP.NET, Web API combines all this functionality into a single robust server side HTTP framework that intrinsically understands HTTP semantics and subtly drives you in the right direction for most operations. And when you need to customize or do something that is not built in, there are lots of hooks and overrides for most behaviors, and even many low level hook points that allow you to plug in custom functionality with relatively little effort.

    No Brainers for Web API

    There are a few scenarios that are a slam dunk for Web API. If the primary focus of an application, or even a part of an application, is some sort of API then Web API makes great sense.

    HTTP Services: If you're building a comprehensive HTTP API that is to be consumed over the Web, Web API is a perfect fit. You can isolate the logic in Web API and build your application as a service, breaking out the logic into controllers as needed. Because the primary interface is the service, there's no confusion over what should go where (MVC or API). Perfect fit.

    Primary AJAX Backends: If you're building rich client Web applications that rely heavily on AJAX callbacks to serve their data, Web API is also a slam dunk. Again, because much if not most of the business logic will probably end up in your Web API service logic, there's no confusion over where logic should go and there's no duplication. In Single Page Applications (SPAs) there's typically very little HTML based logic served other than bringing up a shell UI and then filling in the data from the server with AJAX, which means the business logic required for data retrieval, data acceptance and validation also lives in the Web API. Perfect fit.

    Generic HTTP Endpoints: Another good fit are generic HTTP endpoints that serve data or handle 'utility' type functionality in typical Web applications. If you need to implement an image server or an upload handler, in the past I'd implement that as an HTTP handler. With Web API you now have a well defined place where you can implement these types of generic 'services' in a location that can easily add endpoints (via controller methods) or be separated out as more full featured APIs. Granted, this could be done with MVC as well, but Web API seems a clearer and more well defined place to put generic application services. This is one thing I used to do a lot of in my own libraries, and Web API addresses this nicely. Great fit.

    Mixed HTML and AJAX Applications: Not a clear Choice

    For all the commonality that Web API and MVC share, they are fundamentally different platforms that are independent of each other. A lot of people have asked when it makes sense to use MVC vs.
    Web API when you're dealing with a typical Web application that creates HTML and also uses AJAX for rich functionality. While it's easy to say that all 'service'/AJAX logic should go into a Web API and all HTML related generation into MVC, that can often result in a lot of code duplication. Also, MVC supports JSON and XML result data fairly easily as well, so there's some confusion over where the 'trigger point' is at which you should switch to Web API vs. just implementing functionality as part of MVC controllers. Ultimately there's a tradeoff between isolation of functionality and duplication. A good rule of thumb I think works is that if a large chunk of the application's functionality serves data, Web API is a good choice, but if you have a couple of small AJAX requests to serve data to a grid or autocomplete box, it'd be overkill to separate that logic out into a separate Web API controller. Web API does add overhead to your application (it's yet another framework that sits on top of core ASP.NET), so it should be worth it. Keep in mind that MVC can generate HTML and JSON/XML and just about any other content easily, and that functionality is not going away, so just because Web API is there it doesn't mean you have to use it. Web API is not a full replacement for MVC either, obviously, since there's not the same level of support for feeding HTML from Web API controllers (although you can host a RazorEngine easily enough if you really want to go that route), so if your HTML is part of your API or application in general, MVC is still a better choice, either alone or in combination with Web API. I suspect (and hope) that in the future Web API's functionality will merge even closer with MVC so that you might even be able to mix functionality of both into single controllers so that you don't have to make any trade offs, but at the moment that's not the case.

    Some Issues To think about

    Web API is similar to MVC but not the Same: Although Web API looks a lot like MVC, it's not the same, and some common functionality of MVC behaves differently in Web API. For example, the way single POST variables are handled is different than in MVC and doesn't lend itself particularly well to some AJAX scenarios with POST data.

    Code Duplication: I already touched on this in the Mixed HTML and Web API section, but if you build an MVC application that also exposes a Web API, it's quite likely that you end up duplicating a bunch of code and - potentially - infrastructure. You may have to create authentication logic both for an HTML application and for the Web API, which might need something different altogether. More often than not though the same logic is used, and there's no easy way to share. If you implement an MVC ActionFilter and you want that same functionality in your Web API, you'll end up creating the filter twice.

    AJAX Data or AJAX HTML: On a recent post's comments, David made some really good points regarding the commonality of MVC and Web API and their place. One comment that caught my eye was a little more generic, regarding data services vs. HTML services. David says: I see a lot of merit in the combination of Knockout.js, client side templates and view models, calling Web API for a responsive UI, but sometimes late at night that still leaves me wondering why I would no longer be using some of the nice tooling and features that have evolved in MVC ;-) You know what - I can totally relate to that.
    On the last Web based mobile app I worked on, we decided to serve HTML partials to the client via AJAX for many (but not all!) things, rather than sending down raw data to inject into the DOM on the client via templating or direct manipulation. While there are definitely more bytes on the wire with this, the overhead ended up being fairly small if you keep the 'data' requests small and atomic. The cost was often made up for by the lack of client side rendering of HTML. Server rendered HTML for AJAX templating gives so much better infrastructure support without having to screw around with 20 mismatched client libraries. Especially with MVC and partials it's pretty easy to break out your HTML logic into very small, atomic chunks, so it's actually easy to create small rendering islands that can be used via composition on the server, or via AJAX calls to small, tight partials that return HTML to the client. Although this is often frowned upon as too 'heavy', it worked really well in terms of developer effort as well as providing surprisingly good performance on devices. There's still plenty of jQuery and AJAX logic happening on the client, but it's more manageable in small doses rather than trying to do the entire UI composition with JavaScript and/or 'not-quite-there-yet' template engines that are very difficult to debug. This is not an issue directly related to Web API of course, but something to think about, especially for AJAX or SPA style applications.

    Summary

    Web API is a great new addition to the ASP.NET platform and it addresses a serious need for consolidation of a lot of half-baked HTTP service API technologies that came before it. Web API feels 'right', and hits the right combination of usability and flexibility, at least for me, and it's a good fit for true API scenarios. However, just because a new platform is available doesn't mean that other tools or tech that came before it should be discarded, or that existing code should be upgraded to the new platform. There's nothing wrong with continuing to use MVC controller methods to handle API tasks if that's what your app is running now - there's very little to be gained by upgrading to Web API just because. But going forward, Web API clearly is the way to go when building HTTP data interfaces, and it's good to see that Microsoft got this one right - it was sorely needed!

    Resources: ASP.NET Web API; AspConf Ask the Experts Session (first 5 minutes)

    © Rick Strahl, West Wind Technologies, 2005-2012. Posted in Web API.

    Read the article

  • Continuous Integration for SQL Server Part II – Integration Testing

    - by Ben Rees
    My previous post, on setting up Continuous Integration for SQL Server databases using GitHub, Bamboo and Red Gate's tools, covered the first two parts of a simple Database Continuous Delivery process: putting your database into a source control system, and running a continuous integration process each time changes are checked in. However there is, of course, a lot more to Continuous Delivery than that. Specifically, in addition to the above:

    - Putting some actual integration tests into the CI process (otherwise, they don't really do much, do they!?),
    - Deploying the database changes with a managed, automated approach,
    - Monitoring what you've just put live, to make sure you haven't broken anything.

    This post will detail how to set up a very simple pipeline for implementing the first of these (continuous integration testing). NB: A lot of the setup in this post is built on top of the configuration from before, so it might be difficult to implement this post without running through part I first. There'll then be a third post on automated database deployment, followed by a final post dealing with the last item - monitoring changes on the live system. In the previous post, I used a mixture of Red Gate products and other 3rd party software - GitHub and Atlassian Bamboo specifically. This was partly because I believe most people work in a heterogeneous environment, using software from different vendors to suit their purposes, and I wanted to show how this could work for this process. For example, you could easily substitute Atlassian's BitBucket or Stash for GitHub, depending on your needs, or use an alternative CI server such as TeamCity, TFS or Jenkins. However, in this post, I'll be mostly using Red Gate products only (other than tSQLt). I do this firstly because I work for Red Gate. However, I also think that in the area of Database Delivery processes, nobody else has the offerings to implement this process fully - so I didn't have any choice!

    Background on Continuous Delivery

    For me, a great source of information on what makes a proper Continuous Delivery process is the Jez Humble and David Farley classic: Continuous Delivery - Reliable Software Releases through Build, Test, and Deployment Automation. This book is not, of course, primarily about databases, and the process I outline here and in the previous article is a gross simplification of what Jez and David describe (not least because it's that much harder for databases!). However, a lot of the principles that they describe can be equally applied to database development and, I would argue, should be. As I say however, what I describe here is a very simple version of what would be required for a full production process. A couple of useful resources on handling some of these complexities can be found in the following two references: Refactoring Databases - Evolutionary Database Design, by Scott J Ambler and Pramod J. Sadalage; and Versioning Databases - Branching and Merging, by Scott Allen. In particular, I don't deal at all with the issues of multiple branches and merging of those branches, an issue made particularly acute by the use of GitHub. The other point worth making is that, in the words of Martin Fowler: Continuous Delivery is about keeping your application in a state where it is always able to deploy into production. I.e. we are not talking about continuously delivering updates to the production database every time someone checks in an amendment to a stored procedure.
That is possible (and what Martin calls Continuous Deployment). However, again, that’s more than I describe in this article. And I doubt I need to remind DBAs or Developers to Proceed with Caution!   Integration Testing Back to something practical. The next stage, building on our set up from the previous article, is to add in some integration tests to the process. As I say, the CI process, though interesting, isn’t enormously useful without some sort of test process running. For this we’ll use the tSQLt framework, an open source framework designed specifically for running SQL Server tests. tSQLt is part of Red Gate’s SQL Test found on http://www.red-gate.com/products/sql-development/sql-test/ or can be downloaded separately from www.tsqlt.org - though I’ll provide a step-by-step guide below for setting this up. Getting tSQLt set up via SQL Test Click on the link http://www.red-gate.com/products/sql-development/sql-test/ and click on the blue Download button to download the Red Gate SQL Test product, if not already installed. Follow the install process for SQL Test to install the SQL Server Management Studio (SSMS) plugin on to your machine, if not already installed. Open SSMS. You should now see SQL Test under the Tools menu:   Clicking this link will give you the basic SQL Test dialogue: As yet, though we’ve installed the SQL Test product we haven’t yet installed the tSQLt test framework on to any particular database. To do this, we need to add our RedGateApp database using this dialogue, by clicking on the + Add Database to SQL Test… link, selecting the RedGateApp database and clicking the Add Database link:   In the next screen, SQL Test describes what will be installed on the database for the tSQLt framework. Also in this dialogue, uncheck the “Add SQL Cop tests” option (shown below). SQL Cop is a great set of pre-defined tests that work within the tSQLt framework to check the general health of your SQL Server database. However, we won’t be using them in this particular simple example: Once you’ve clicked on the OK button, the changes described in the dialogue will be made to your database. Some of these are shown in the left-hand-side below: We’ve now installed the framework. However, we haven’t actually created any tests, so this will be the next step. But, before we proceed, we’ve made an update to our database so should, again check this in to source control, adding comments as required:   Also worth a quick check that your build still runs with the new additions!: (And a quick check of the RedGateAppCI database shows that the changes have been made).   Creating and Testing a Unit Test There are, of course, a lot of very interesting unit tests that you could and should set up for a database. The great thing about the tSQLt framework is that you can write these in SQL. The example I’m going to use here is pretty Mickey Mouse – our database table is going to include some email addresses as reference data and I want to check whether these are all in a correct email format. Nothing clever but it illustrates the process and hopefully shows the method by which more interesting tests could be set up. Adding Reference Data to our Database To start, I want to add some reference data to my database, and have this source controlled (as well as the schema). 
First of all I need to add some data in to my solitary table – this can be done a number of ways, but I’ll do this in SSMS for simplicity: I then add some reference data to my table: Currently this reference data just exists in the database. For proper integration testing, this needs to form part of the source-controlled version of the database – and so needs to be added to the Git repository. This can be done via SQL Source Control, though first a Primary Key needs to be added to the table. Right click the table, select Design, then right-click on the first “id” row. Then click on “Set Primary Key”: NB: once this change is made, click Save to save the change to the table. Then, to source control this reference data, right click on the table (dbo.Email) and selecting the following option:   In the next screen, link the data in the Email table, by selecting it from the list and clicking “save and close”: We should at this point re-commit the changes (both the addition of the Primary Key, and the data) to the Git repo. NB: From here on, I won’t show screenshots for the GitHub side of things – it’s the same each time: whenever a change is made in SQL Source Control and committed to your local folder, you then need to sync this in the GitHub Windows client (as this is where the build server, Bamboo is taking it from). An interesting point to note here, when these changes are committed in SQL Source Control (right-click database and select “Commit Changes to Source Control..”): The display gives a warning about possibly needing a migration script for the “Add Primary Key” step of the changes. This isn’t actually necessary in this case, but this mechanism would allow you to create override scripts to replace the default change scripts created by the SQL Compare engine (which runs underneath SQL Source Control). Ignoring this message (!), we add a comment and commit the changes to Git. I then sync these, run a build (or the build gets run automatically), and check that the data is being deployed over to the target RedGateAppCI database:   Creating and Running the Test As I mention, the test I’m going to use here is a very simple one - are the email addresses in my reference table valid? This isn’t of course, a full test of email validation (I expect the email addresses I’ve chosen here aren’t really the those of the Fab Four) – but just a very basic check of format used. I’ve taken the relevant SQL from this Stack Overflow article. In SSMS select “SQL Test” from the Tools menu, then click on + New Test: In the next screen, give your new test a name, and also enter a name in the Test Class box (test classes are schemas that help you keep things organised). Also check that the database in which the test is going to be created is correct – RedGateApp in this example: Click “Create Test”. After closing a couple of subsequent dialogues, you’ll see a dummy script for the test, that needs filling in:   We now need to define the SQL for our test. As mentioned before, tSQLt allows you to write your unit tests in T-SQL, and the code I’m going to use here is as below. 
    This needs to be copied and pasted into the query window, to replace the default given by tSQLt:

        -- Basic email check test
        ALTER PROCEDURE [MyChecks].[test Check Email Addresses]
        AS
        BEGIN
            SET NOCOUNT ON

            DECLARE @Output VARCHAR(MAX)
            SET @Output = ''

            SELECT @Output = @Output + Email + CHAR(13) + CHAR(10)
            FROM dbo.Email
            WHERE Email NOT LIKE '%_@__%.__%'

            IF @Output > ''
            BEGIN
                SET @Output = CHAR(13) + CHAR(10) + @Output
                EXEC tSQLt.Fail @Output
            END
        END;

    Once this script is entered, hit execute to add the stored procedure to the database. Before committing the test to source control, it's worth just checking that it works! For a positive test, click on "SQL Test" from the Tools menu, then click Run Tests. You should see output like the following: a green tick to indicate success! But of course, what we also need to do is test that this is actually doing something by showing a failed test. Edit one of the email addresses in your table to an incorrect format. Now, re-run the same SQL Test as before and you'll see the following: great - we now know that our test is really doing something! You'll also see a useful error message at the bottom of SSMS. (Leave the email address as invalid for now, for the next steps.) The next stage is to check this new test in to source control again, by right-clicking on the database and checking in the changes with a commit message (and not forgetting to sync in the GitHub client).

    Checking that the Tests are Running as Integration Tests

    After the changes above are made, and after a build has run on Bamboo (manual or automatic), looking at the stored procedures for the RedGateAppCI database, the SPROC for the new test has been moved over to the database. However this is not exactly what we were after. We didn't want to just copy objects from one database to another, but actually run the tests as part of the build/integration test process. I.e. we're continuously checking any changes we make (in this case, to the reference data emails), to ensure we're not breaking a test that we've set up. The behaviour we want to see is that, if we check in static data that is incorrect (as we did in step 9 above) and we have the tSQLt test set up, then our build in Bamboo should fail. However, re-running the build shows the following: sadly, a successful build! To make sure the tSQLt tests are run as part of the integration test, we need to amend a switch in the Red Gate CI config file. First, navigate to the file sqlCI.targets in your working folder. Edit this document, make the following change, save the document, then commit and sync this change in the GitHub client:

        <!-- tSQLt tests -->
        <!-- Optional -->
        <!-- To run tSQLt tests in source control for the database, enter true. -->
        <enableTsqlt>true</enableTsqlt>

    Now, if we re-run the build in Bamboo (NB: I've moved to a new server here, hence the different address and build number): superb, a broken build!! The error message isn't great here, so to get more detailed info, click on the full build log link on this page (below the fold). The interesting part of the log shown is towards the bottom. Pulling out this part:
21-Jun-2013 11:35:19 21-Jun-2013 11:35:19 "C:\Users\Administrator\bamboo-home\xml-data\build-dir\RGA-RGP-JOB1\sqlCI.proj" (default target) (1) -> 21-Jun-2013 11:35:19 (sqlCI target) -> 21-Jun-2013 11:35:19 EXEC : sqlCI error occurred: RedGate.Deploy.SqlServerDbPackage.Shared.Exceptions.InvalidSqlException: Test Case Summary: 1 test case(s) executed, 0 succeeded, 1 failed, 0 errored. [C:\Users\Administrator\bamboo-home\xml-data\build-dir\RGA-RGP-JOB1\sqlCI.proj] 21-Jun-2013 11:35:19 EXEC : sqlCI error occurred: [MyChecks].[test Check Email Addresses] failed: [C:\Users\Administrator\bamboo-home\xml-data\build-dir\RGA-RGP-JOB1\sqlCI.proj] 21-Jun-2013 11:35:19 EXEC : sqlCI error occurred: ringo.starr@beatles [C:\Users\Administrator\bamboo-home\xml-data\build-dir\RGA-RGP-JOB1\sqlCI.proj] 21-Jun-2013 11:35:19 EXEC : sqlCI error occurred: [C:\Users\Administrator\bamboo-home\xml-data\build-dir\RGA-RGP-JOB1\sqlCI.proj] 21-Jun-2013 11:35:19 EXEC : sqlCI error occurred: +----------------------+ [C:\Users\Administrator\bamboo-home\xml-data\build-dir\RGA-RGP-JOB1\sqlCI.proj] 21-Jun-2013 11:35:19 EXEC : sqlCI error occurred: |Test Execution Summary| [C:\Users\Administrator\bamboo-home\xml-data\build-dir\RGA-RGP-JOB1\sqlCI.proj]   As a final check, we should make sure that, if we now fix this error, the build succeeds. So in SSMS, I’m going to correct the invalid email address, then check this change in to SQL Source Control (with a comment), commit to GitHub, and re-run the build:   This should have fixed the build: It worked! Summary This has been a very quick run through the implementation of CI for databases, including tSQLt tests to test whether your database updates are working. The next post in this series will focus on automated deployment – we’ve tested our database changes, how can we now deploy these to target sites?  

    Read the article

  • Dynamic XAP loading in Task-It - Part 1

    Download Source Code

    NOTE 1: The source code provided is running against the RC versions of Silverlight 4 and Visual Studio 2010, so you will need to update to those bits to run it.
    NOTE 2: After downloading the source, be sure to set the .Web project as the StartUp Project, and Default.aspx as the Start Page.

    In my MEF intro post, MEF to the rescue in Task-It, I outlined a couple of issues I was facing and explained why I chose MEF (the Managed Extensibility Framework) to solve these issues.

    Other posts to check out

    There are a few other resources out there around dynamic XAP loading that you may want to review (by the way, Glenn Block is the main dude when it comes to MEF): Glenn Block's 3-part series on a dynamically loaded dashboard, and Glenn and John Papa's Silverlight TV video on dynamic XAP loading. These provide some great info, but didn't exactly cover the scenario I wanted to achieve in Task-It - and that is dynamically loading each of the app's pages the first time the user enters a page.

    The code

    In the code I provided for download above, I created a simple solution that shows the technique I used for dynamic XAP loading in Task-It, but without all of the other code that surrounds it. Taking all that other stuff away should make it easier to grasp. Having said that, there is still a fair amount of code involved. I am always looking for ways to make things simpler, and to achieve the desired result with as little code as possible, so if I find a better/simpler way I will blog about it, but for now this technique works for me. When I created this solution I started by creating a new Silverlight Navigation Application called DynamicXAPLoading. I then added the following line to my UriMappings in MainPage.xaml: <uriMapper:UriMapping Uri="/{assemblyName};component/{path}" MappedUri="/{assemblyName};component/{path}"/> In the section of MainPage.xaml that produces the page links in the upper right, I kept the Home link, but added a couple of new ones (page 1 and page 2). These are the pages that will be dynamically (lazy) loaded: <StackPanel x:Name="LinksStackPanel" Style="{StaticResource LinksStackPanelStyle}"> <HyperlinkButton Style="{StaticResource LinkStyle}" NavigateUri="/Home" TargetName="ContentFrame" Content="home"/> <Rectangle Style="{StaticResource DividerStyle}"/> <HyperlinkButton Style="{StaticResource LinkStyle}" Content="page 1" Command="{Binding NavigateCommand}" CommandParameter="{Binding ModulePage1}"/> <Rectangle Style="{StaticResource DividerStyle}"/> <HyperlinkButton Style="{StaticResource LinkStyle}" Content="page 2" Command="{Binding NavigateCommand}" CommandParameter="{Binding ModulePage2}"/> </StackPanel> In App.xaml.cs I added a bit of MEF code. In Application_Startup I call a method called InitializeContainer, which creates a PackageCatalog (a MEF thing), then I create a CompositionContainer and pass it to the CompositionHost.Initialize method. This is boiler-plate MEF stuff that allows you to do 'composition' and import 'packages'. You're welcome to do a bit more MEF research on what is happening here if you'd like, but for the purpose of this example you can just trust that it works.
:-) private void Application_Startup(object sender, StartupEventArgs e) {     InitializeContainer();     this.RootVisual = new MainPage(); }   private static void InitializeContainer() {     var catalog = new PackageCatalog();     catalog.AddPackage(Package.Current);     var container = new CompositionContainer(catalog);     container.ComposeExportedValue(catalog);     CompositionHost.Initialize(container); } Infrastructure In the sample code you'll notice that there is a project in the solution called DynamicXAPLoading.Infrastructure. This is simply a Silverlight Class Library project that I created just to move stuff I considered application 'infrastructure' code into a separate place, rather than cluttering the main Silverlight project (DynamicXapLoading). I did this same thing in Task-It, as the amount of this type of code was starting to clutter up the Silverlight project, and it just seemed to make sense to move things like Enums, Constants and the like off to a separate place. In the DynamicXapLoading.Infrastructure project you'll see 3 classes: Enums - There is only one enum in here called ModuleEnum. We'll use these later. PageMetadata - We will use this class later to add metadata to a new dynamically loaded project. ViewModelBase - This is simply a base class for view models that we will use in this, as well as future samples. As mentioned in my MVVM post, I will be using the MVVM pattern throughout my code for reasons detailed in the post. By the way, the ViewModelExtension class in there allows me to do strongly-typed property changed notification, so rather than OnPropertyChanged("MyProperty"), I can do this.OnPropertyChanged(p => p.MyProperty). It's just a less error-prown approach, because if you don't spell "MyProperty" correctly using the first method, nothing will break, it just won't work. Adding a new page We currently have a couple of pages that are being dynamically (lazy) loaded, but now let's add a third page. 1. First, create a new Silverlight Application project: In this example I call it Page3. In the future you may prefer to use a different name, like DynamicXAPLoading.Page3, or even DynamicXAPLoading.Modules.Page3. It can be whatever you want. In my Task-It application I used the latter approach (with 'Modules' in the name). I do think of these application as 'modules', but Prism uses the same term, so some folks may not like that. Use whichever naming convention you feel is appropriate, but for now Page3 will do. When you change the name to Page3 and click OK, you will be presented with the Add New Project dialog: It is important that you leave the 'Host the Silverlight application in a new or existing Web site in the solution' checked, and the .Web project will be selected in the dropdown below. This will create the .xap file for this project under ClientBin in the .Web project, which is where we want it. 2. Uncheck the 'Add a test page that references the application' checkbox, and leave everything else as is. 3. Once the project is created, you can delete App.xaml and MainPage.xaml. 4. You will need to add references your new project to the following: DynamicXAPLoading.Infrastructure.dll (this is a Project reference) DynamicNavigation.dll (this is in the Libs directory under the DynamicXAPLoading project) System.ComponentModel.Composition.dll System.ComponentModel.Composition.Initialization.dll System.Windows.Controls.Navigation.dll If you have installed the latest RC bits you will find the last 3 dll's under the .NET tab in the Add Referenced dialog. 
They live in the following location, or if you are on a 64-bit machine like me, it will be Program Files (x86).       C:\Program Files\Microsoft SDKs\Silverlight\v4.0\Libraries\Client Now let's create some UI for our new project. 5. First, create a new Silverlight User Control called Page3.dyn.xaml 6. Paste the following code into the xaml: <dyn:DynamicPageShim xmlns="http://schemas.microsoft.com/winfx/2006/xaml/presentation"     xmlns:dyn="clr-namespace:DynamicNavigation;assembly=DynamicNavigation"     xmlns:my="clr-namespace:Page3;assembly=Page3">     <my:Page3Host /> </dyn:DynamicPageShim> This is just a 'shim', part of David Poll's technique for dynamic loading. 7. Expand the icon next to Page3.dyn.xaml and delete the code-behind file (Page3.dyn.xaml.cs). 8. Next we will create a control that will 'host' our page. Create another Silverlight User Control called Page3Host.xaml and paste in the following XAML: <dyn:DynamicPage x:Class="Page3.Page3Host"     xmlns="http://schemas.microsoft.com/winfx/2006/xaml/presentation"     xmlns:x="http://schemas.microsoft.com/winfx/2006/xaml"     xmlns:d="http://schemas.microsoft.com/expression/blend/2008"     xmlns:mc="http://schemas.openxmlformats.org/markup-compatibility/2006"     xmlns:dyn="clr-namespace:DynamicNavigation;assembly=DynamicNavigation"     xmlns:Views="clr-namespace:Page3.Views"      mc:Ignorable="d"     d:DesignHeight="300" d:DesignWidth="400"     Title="Page 3">       <Views:Page3/>   </dyn:DynamicPage> 9. Now paste the following code into the code-behind for this control: using DynamicXAPLoading.Infrastructure;   namespace Page3 {     [PageMetadata(NavigateUri = "/Page3;component/Page3.dyn.xaml", Module = Enums.Page3)]     public partial class Page3Host     {         public Page3Host()         {             InitializeComponent();         }     } } Notice that we are now using that PageMetadata custom attribute class that we created in the Infrastructure project, and setting its two properties. NavigateUri - This tells it that the assembly is called Page3 (with a slash beforehand), and the page we want to load is Page3.dyn.xaml...our 'shim'. That line we added to the UriMapper in MainPage.xaml will use this information to load the page. Module - This goes back to that ModuleEnum class in our Infrastructure project. However, setting the Module to ModuleEnum.Page3 will cause a compilation error, so... 10. Go back to that Enums.cs under the Infrastructure project and add a 3rd entry for Page3: public enum ModuleEnum {     Page1,     Page2,     Page3 } 11. Now right-click on the Page3 project and add a folder called Views. 12. Right-click on the Views folder and create a new Silverlight User Control called Page3.xaml. We won't bother creating a view model for this User Control as I did in the Page 1 and Page 2 projects, just for the sake of simplicity. Feel free to add one if you'd like though, and copy the code from one of those other projects. Right now those view models aren't really doing anything anyway...though they will in my next post. :-) 13. 
Now let's replace the xaml for Page3.xaml with the following: <dyn:DynamicPage x:Class="Page3.Views.Page3"     xmlns="http://schemas.microsoft.com/winfx/2006/xaml/presentation"     xmlns:x="http://schemas.microsoft.com/winfx/2006/xaml"     xmlns:d="http://schemas.microsoft.com/expression/blend/2008"     xmlns:mc="http://schemas.openxmlformats.org/markup-compatibility/2006"     xmlns:dyn="clr-namespace:DynamicNavigation;assembly=DynamicNavigation"     mc:Ignorable="d"     d:DesignHeight="300" d:DesignWidth="400"     Style="{StaticResource PageStyle}">       <Grid x:Name="LayoutRoot">         <ScrollViewer x:Name="PageScrollViewer" Style="{StaticResource PageScrollViewerStyle}">             <StackPanel x:Name="ContentStackPanel">                 <TextBlock x:Name="HeaderText" Style="{StaticResource HeaderTextStyle}" Text="Page 3"/>                 <TextBlock x:Name="ContentText" Style="{StaticResource ContentTextStyle}" Text="Page 3 content"/>             </StackPanel>         </ScrollViewer>     </Grid>   </dyn:DynamicPage> 14. And in the code-behind remove the inheritance from UserControl, so it should look like this: namespace Page3.Views {     public partial class Page3     {         public Page3()         {             InitializeComponent();         }     } } One thing you may have noticed is that the base class for the last two User Controls we created is DynamicPage. Once again, we are using the infrastructure that David Poll created. 15. OK, a few last things. We need a link on our main page so that we can access our new page. In MainPage.xaml let's update our links to look like this: <StackPanel x:Name="LinksStackPanel" Style="{StaticResource LinksStackPanelStyle}">     <HyperlinkButton Style="{StaticResource LinkStyle}" NavigateUri="/Home" TargetName="ContentFrame" Content="home"/>     <Rectangle Style="{StaticResource DividerStyle}"/>     <HyperlinkButton Style="{StaticResource LinkStyle}" Content="page 1" Command="{Binding NavigateCommand}" CommandParameter="{Binding ModulePage1}"/>     <Rectangle Style="{StaticResource DividerStyle}"/>     <HyperlinkButton Style="{StaticResource LinkStyle}" Content="page 2" Command="{Binding NavigateCommand}" CommandParameter="{Binding ModulePage2}"/>     <Rectangle Style="{StaticResource DividerStyle}"/>     <HyperlinkButton Style="{StaticResource LinkStyle}" Content="page 3" Command="{Binding NavigateCommand}" CommandParameter="{Binding ModulePage3}"/> </StackPanel> 16. Next, we need to add the following at the bottom of MainPageViewModel in the ViewModels directory of our DynamicXAPLoading project: public ModuleEnum ModulePage3 {     get { return ModuleEnum.Page3; } } 17. And at last, we need to add a case for our new page to the switch statement in MainPageViewModel: switch (module) {     case ModuleEnum.Page1:         DownloadPackage("Page1.xap");         break;     case ModuleEnum.Page2:         DownloadPackage("Page2.xap");         break;     case ModuleEnum.Page3:         DownloadPackage("Page3.xap");         break;     default:         break; } Now fire up the application and click the page 1, page 2 and page 3 links. What you'll notice is that there is a 2-second delay the first time you hit each page. That is because I added the following line to the Navigate method in MainPageViewModel: Thread.Sleep(2000); // Simulate a 2 second initial loading delay The reason I put this in there is that I wanted to simulate a delay the first time the page loads (as the .xap is being downloaded from the server). 
    You'll notice, though, that after the first hit to the page there is no delay...that's because the .xap has already been downloaded. Feel free to comment out this 2-second delay, or remove it if you'd like. I just wanted to show how subsequent hits to the page would be quicker than the initial one. By the way, you may want to display some sort of BusyIndicator while the .xap is loading. I have that in my Task-It application, but for the sake of simplicity I did not include it here. In the future I'll blog about how I show and hide the BusyIndicator using events (I'm currently using the eventing framework in Prism for that, but may move to the one in the MVVM Light Toolkit some time soon). Whew, that felt like a lot of steps, but it does work quite nicely. As I mentioned earlier, I'll try to find ways to simplify the code (I'd like to get away from having things like hard-coded .xap file names) and will blog about it in the future if I find a better way. In my next post, I'll talk more about what is actually happening with the code that makes this all work.
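    Until then, here is a rough sketch of the shape the DownloadPackage helper can take. This is my own illustration rather than the actual Task-It code, and the namespace and the exact Package.DownloadPackageAsync signature are my recollection of the Silverlight 4 RC MEF preview bits used above, so they may differ in your version:

        using System;
        using System.ComponentModel.Composition;
        using System.ComponentModel.Composition.Packaging;

        public class MainPageViewModel
        {
            // The PackageCatalog created in App.xaml.cs was exported via
            // ComposeExportedValue, so it can simply be imported here.
            [Import]
            public PackageCatalog Catalog { get; set; }

            private void DownloadPackage(string xapName)
            {
                // Download the .xap on demand and add it to the catalog; MEF then
                // recomposes and the dynamically loaded page becomes available.
                Package.DownloadPackageAsync(
                    new Uri(xapName, UriKind.Relative),
                    (args, package) =>
                    {
                        if (args.Error == null && !args.Cancelled)
                        {
                            Catalog.AddPackage(package);
                        }
                    });
            }
        }

    A BusyIndicator could be shown just before the DownloadPackageAsync call and hidden in the callback, which is roughly where the Prism events mentioned above would come in.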

    Read the article

  • VS 2012 Code Review &ndash; Before Check In OR After Check In?

    - by Tarun Arora
    "Is Code Review Important and Effective?" There is a consensus across the industry that code review is an effective and practical way to collar code inconsistency and possible defects early in the software development life cycle. Among others, some of the advantages of code reviews are:

    - Bugs are found faster
    - Forces developers to write readable code (code that can be read without explanation or introduction!)
    - Optimization methods/tricks/productive programs spread faster
    - Programmers as specialists "evolve" faster
    - It's fun

    "Code review is systematic examination (often known as peer review) of computer source code. It is intended to find and fix mistakes overlooked in the initial development phase, improving both the overall quality of software and the developers' skills. Reviews are done in various forms such as pair programming, informal walkthroughs, and formal inspections." - Wikipedia

    Nowhere does the definition mention whether it's better to review code before the code has been committed to version control or after the commit has been performed. No matter which side you favour, Visual Studio 2012 allows you to request a code review both before check in and after check in. Let's weigh the pros and cons of the approaches independently.

    Code Review Before Check In or Code Review After Check In?

    Approach 1 - Code Review before Check in

    The developer completes the code and feels the code quality is appropriate for check in to TFS. The developer raises a code review request to have a second pair of eyes validate whether the code abides by the recommended best practices, will not result in any defects due to common coding mistakes, and whether any optimizations can be made to improve the code quality.

    Image 1 - code review before check in

    Pros: Everything that gets committed to source control is reviewed. Minimizes the chances of smelly code making its way into the code base. Decreases the cost of fixing bugs - remember, the earlier you find them, the lesser the pain in fixing them.

    Cons: Development code freeze - since the changes aren't in source control yet, further development can only be done off-line. The changes have not been through a CI build, so it's hard to say whether the code abides by all build quality standards. Inconsistent! Cumbersome to track the actual code review process. Not every change to the code base is worth reviewing; a lot of effort is invested for very little gain.

    Approach 2 - Code Review after Check in

    The developer checks in, and random code reviews are performed on the checked in code.

    Image 2 - Code review after check in

    Pros: The code has already passed the CI build and run through any code analysis plug ins you may have running on the build server. Instruct the developer to ensure ZERO FxCop, StyleCop and static code analysis issues before check in. Code is cleaner and smell free even before the code review. No offline development; developers can continue to develop against source control.

    Cons: Bad code can easily make its way into the code base. Since the review takes place much later in the cycle, the cost of fixing issues can prove to be much higher.

    Approach 3 - Hybrid Approach

    The community advocates a more hybrid approach, a blend of tooling and the human accountability quotient.

    Image 3 - Hybrid Approach

    1. Code review high impact check ins.
    It is not possible to review everything, and by setting up code review check in policies you can end up slowing your team down. Moreover, the code that you are reviewing before check in hasn't even been through a green CI build either.

    2. Tooling. Let the tooling work for you. By running static analysis, FxCop, StyleCop and other plug ins on the build agent, you can identify the real issues that in my opinion can't possibly be identified using human reviews. Configure the tooling to report back the top 10 issues every day. Mandate the manual code review of individuals who keep making it to this list of shame more often.

    3. During Merge. I would prefer eliminating some of the other code issues during the merge from the Main branch to the release branch. In a scrum project this is still easier, because cherry picking the merges is a possibility and the size of code being reviewed is still limited.

    Let the tooling work for you: if someone breaks the CI build often, put them on a gated check in build course until you see improvement. If someone appears on the top 10 list of shame generated via the build, then ensure that all their code is reviewed till you see improvement. At the end of the day, the goal is to ensure that the code being delivered is top quality. By enforcing a code review before any check in, you force the developer to work offline or stay put till the review is complete.

    What do the experts say?

    So I asked a few experts what they thought of a "Code Review quality gate before checking in code?"

    Terje Sandstrom | Microsoft ALM MVP

    You mean a review quality gate BEFORE checking in code????? That would mean a lot of code staying either local or in shelvesets, not even having been through a CI build, with a green CI build being the main criteria for going further, f.e. to the review state. I would not like code lying around with no check-ins. Having a requirement that code is checked in small pieces, 4-8 hours work max, and AT LEAST daily checkins, a manual code review comes second down the lane. I would expect review quality gates to happen before merging back to main, or before merging to release. But that would all be on checked-in code. Branching is absolutely one way to ease the pain. Another way we are using is automatic quality builds, running metrics, coverage, static code analysis. Unfortunately it takes some time - it would be great to have this on the CI builds, but... - so it's done scheduled every night. Based on this we get, among other stuff, top 10 lists of suspicious code, which is then subjected to reviews. If a person seems to be very popular on these top 10 lists, we subject every check in from that person to a review for a period. That normally helps. None of the clients I have can afford to have every checkin reviewed, so we need to find ways around it. I don't disagree with the nicety of having all the code reviewed, but I find it hard to find those resources in today's enterprises.

    David V. Corbin | Visual Studio ALM Ranger

    I tend to agree with both sides. I hate having code that is not checked in, but at the same time hate having "bad" code in the repository. I have found that branching is one approach to solving this dilemma. Code is checked into the private/feature branch before the review, but is not merged over to the "official" branch until after the review. I advocate both, depending on circumstance (especially team dynamics):
    - The "pre-checkin" is usually for elements that may impact the project as a whole. Think of it as another "gate" along with passing unit tests.
- The “post-checkin” may very well not be at the changeset level, but correlates to a review at the “user story” level. Again, this depends on the team dynamics in play…
Robert MacLean | Microsoft ALM MVP
I do not think there is a right answer for the industry as a whole. In short, the question is: why do you do reviews? Your question implies risk mitigation, so in low-risk areas you can get away with reviewing after check-in, while in high-risk areas you need to do it before check-in. For example, those new to a team, or juniors, need it much earlier (maybe that is before check-in, maybe that is soon after) than seniors who have shipped twenty sprints on the team.
Abhimanyu Singhal | Visual Studio ALM Ranger
It depends on the scenario. We recommend post-check-in reviews when:
1. We don't want to block other checks and processes on manual code reviews. Manual reviews take time, and some pieces may not require manual reviews at all.
2. We need to trace all changes and track history.
3. We have a code promotion strategy/process in place. For risk mitigation, post-check-in code can be promoted to Accepted branches, or it can be rejected.
Pre-check-in reviews are used when:
1. There is a high risk factor associated.
2. Reviewers generally (most of the time) have immediate availability.
3. The team does not have strict tracking needs.
Simply speaking, no single process fits all scenarios. You need to select what works best for your team/project.
Thomas Schissler | Visual Studio ALM Ranger
This is an interesting discussion; I'm right now discussing the details of executing code reviews with my teams. I see and understand the aspects you brought in, but there is another side I'd like to point out.
1. If you do reviews per check-in, this is not very practical as a hard rule, because it will disturb the flow of the team very often, or it will reduce the check-in frequency of the devs, which I would not accept.
2. If you do later reviews, for example if you review PBIs, it is not easy to find out which code you should review. Either you review all changesets associated with the PBI, but then you might review code which has been changed by a later check-in and the dev may already have fixed the issue; or you review the diff of the latest changeset of the PBI against the first, but then you might also review changes from other PBIs.
Jakob Leander | Sr. Director, Avanade
In my experience, manual code review:
1. Does not get done, and at the very least does not get redone after changes (regardless of intentions at the start of the project).
2. When a project actually does it, they often do not do it right away, so errors pile up.
3. Requires a lot of time discussing/defining the standard and for the team to learn it.
However, code review is very important since, for example, even small memory leaks in a high-volume web solution have big consequences. In the last years I have advocated the following approach to code review:
- Architects up front do “at least one best practice example” of each type of component and tell the team: copy from this one. This should include error handling, logging, security etc.
- The dev lead on the project continuously browses code to validate that the best practices are used, especially that patterns etc. are not broken. You can do this formally after each sprint/iteration if you want. Once this is validated it is unlikely to “go bad” even during later code changes.
- Agree with the customer to rely on static code analysis from Visual Studio as the one and only coding standard.
This has HUGE benefits:
- You can easily tweak it to reach the level you desire together with the customer.
- It is easy to measure for both developers and management.
- It is 100% consistent across the code base.
- It gets validated all the time, so you never end up getting hammered by a customer review at the end.
- It is easy to tell the developer that you do not want code back unless it has zero errors, which minimizes communication.
You need to track this at least during nightly builds and make sure the team sees the total number of issues. Do not allow the number of issues to grow uncontrolled. On the projects I run I require code analysis to have run on the code before check-in (check-in rule). This means:
- You have to have a clean compile (or CA won't run), so as an extra benefit you get very few broken builds.
- You can change a few of the rules to compile as errors instead of warnings. I often do this for “missing dispose” issues, which you REALLY do not want in your app.
Tip: place your custom CA rules files as part of the solution. That way it keeps working when you do branching etc. (the path to the CA file is relative in VS). Some may argue that CA is not as good as manual inspection, but since manual inspection in reality suffers from the three issues listed at the start, it is IMO a MUCH better (and much cheaper) approach from a helicopter perspective.
Tirthankar Dutta | Director, Avanade
I think code review should be run both before and after check-ins. There are some code metrics that are meant to be run on the entire codebase… Also, especially on multi-site projects, one should strive to architect in a way that lets the senior people manage the framework while the junior people write the repetitive code… this scales very well with the need to review less, by containment and by imposing architectural restrictions that emphasise the design.
Bruno Capuano | Microsoft ALM MVP
For code reviews (meaning peer reviews) in a distributed team I use http://www.vsanywhere.com/default.aspx
David Jobling | Global Sr. Director, Avanade
Peer review is the only way to scale, and it's a great practice for everyone in the team to learn to perform and accept. In my experience you soon learn whose code to watch more than others and tune the attention accordingly.
Mikkel Toudal Kristiansen | Manager, Avanade
If you have several branches in your code base, you will need to merge often. This requires manual merging when a file has been changed in both branches, and it offers a good opportunity to actually review the changed code. So my advice is: merging between branches should be done as often as possible, it should be done by a senior developer, and he/she should perform a full code review of the code being merged. As for detecting architectural smells and code smells creeping into the code base, one really good third-party tool exists: NDepend (http://www.ndepend.com/, for static code analysis of the current state of the code base). You could also consider adding StyleCop to the solution.
Jesse Houwing | Visual Studio ALM Ranger
I gave a presentation on this subject at the TechDays conference in NL last year. See my presentation and slides here (talk in Dutch, but English slides): http://blog.jessehouwing.nl/2012/03/did-you-miss-my-techdaysnl-talk-on-code.html
I'd like to add a few more points:
- Before/after check-in is mostly a trust issue. If you have a team that does diligent peer reviews and regularly talks, sits together or pair programs, there's no need to enforce a before-check-in policy. The pair programming and regular feedback during development can take care of most of the review requirements, as long as the team isn't under stress.
- Under stress, enforce pre-check-in reviews. It might sound strange if you're already under time or budgetary constraints, but it is under such conditions that most real issues start to be created or pile up.
- Use tools to catch the most common errors; Code Analysis/FxCop was already mentioned, and HP Fortify, ReSharper, CodeRush etc. can help you there. There are also a lot of third-party rules you can add to Code Analysis. I've written a few myself (http://fccopcontrib.codeplex.com) and various teams at Microsoft have added their own rules (MSOCAF for SharePoint, WSSF for WCF). For common errors that keep cropping up, see if you can define a rule; it's much easier. But more importantly, make sure you have a good help page explaining *WHY* it's wrong.
- If you have small feature or developer branches/shelvesets, you might want to review pre-merge. It's still better to do peer reviews and pair programming, but the most important thing is that bad-quality code doesn't make it into the important branch.
So my philosophy:
- Use tooling as much as possible.
- Make sure the team understands the tooling and the importance of the things it flags. It's too easy to just click "suppress all" and ignore the warnings.
- Under stress, tighten the process; it's under stress that the problems of late reviews will really surface.
- Most importantly, if you do reviews, do them as early as possible, but never later than needed. In other words, pre-check-in/post-check-in doesn't really matter, as long as the review is done before the code is released. It will just be much more expensive to fix any review outcomes the later you find them.
I would love to hear what you think!
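To make the "top 10 list of shame" tooling idea above a little more concrete, here is a minimal sketch in Python. It is only an illustration: it assumes the nightly build exports its code-analysis warnings to a plain-text log named ca_warnings.log with one "developer|rule|file" entry per line, which is an invented format, not something any build server produces out of the box.

    from collections import Counter

    LOG_FILE = "ca_warnings.log"   # assumed nightly export: one "developer|rule|file" entry per line
    TOP_N = 10

    def top_offenders(path, n=TOP_N):
        counts = Counter()
        with open(path, encoding="utf-8") as log:
            for line in log:
                line = line.strip()
                if not line:
                    continue
                developer = line.split("|", 1)[0]   # first field names who checked the code in
                counts[developer] += 1
        return counts.most_common(n)

    if __name__ == "__main__":
        for rank, (developer, issues) in enumerate(top_offenders(LOG_FILE), start=1):
            print(f"{rank:2d}. {developer}: {issues} open code-analysis issues")

Publishing a list like this to the team each morning, as the hybrid approach suggests, keeps the pressure on without blocking anyone's check-ins.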

    Read the article

  • Public free time server

    - by JL.
    I need to get the current datetime from a reliable source, because it's likely that the local system time could be changed. Is it possible to get this from an internet time server, one that has close to 100% uptime, preferably via a webservice method, something that is free and, I have to stress, absolutely reliable? I would hope for an offering from Microsoft, or from the organisation responsible for keeping global time.
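    One common free option, sketched below as an illustration rather than a definitive implementation, is to query a public NTP server directly instead of a web service. The Python snippet sends a minimal SNTP request (RFC 4330) over UDP and converts the returned transmit timestamp to a UTC datetime; the choice of pool.ntp.org and the three-second timeout are assumptions, and NIST's time.nist.gov would work the same way.

    import socket
    import struct
    from datetime import datetime, timezone

    NTP_SERVER = "pool.ntp.org"        # assumed public server; NIST's time.nist.gov also works
    NTP_PORT = 123
    NTP_UNIX_EPOCH_DELTA = 2208988800  # seconds between 1900-01-01 (NTP epoch) and 1970-01-01 (Unix epoch)

    def network_utc_now(server=NTP_SERVER, timeout=3.0):
        # 48-byte SNTP request: LI=0, version=3, mode=3 (client) -> first byte 0x1B
        request = b"\x1b" + 47 * b"\0"
        with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as sock:
            sock.settimeout(timeout)
            sock.sendto(request, (server, NTP_PORT))
            response, _ = sock.recvfrom(48)
        # The transmit timestamp (seconds since 1900) is the 11th 32-bit word of the reply
        transmit_seconds = struct.unpack("!12I", response)[10]
        return datetime.fromtimestamp(transmit_seconds - NTP_UNIX_EPOCH_DELTA, tz=timezone.utc)

    if __name__ == "__main__":
        print("Network time (UTC):", network_utc_now())

    Since no single host is truly 100% reliable, production code would still want to retry against several servers.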

    Read the article

  • What is "The" book for database design?

    - by FarmBoy
    In programming, there is often a canonical book for a particular topic, like the dragon book for compilers, K&R for C, etc. Is there a book on modern database design that simply must be read by anyone who hopes to eventually design databases? I'm not looking for a bunch of recommendations here. The answer I'm looking for is either "Yes, it's [Title, author]." or "No, there are many good books on databases, but no one must-read."

    Read the article

  • How does the virtual machine network work?

    - by Arpit
    I wish to know: if I am running 2 VM instances on the same setup and there is heavy data flow between the VMs, is there any possibility that I get a timeout (let's say I have a timer on the sending end which stops on getting the ack)? My vague question is: how does the network work between VMs? I hope I am clear with the question.

    Read the article

  • Problem virtualizing Ubuntu 10.04 32 bit on VirtualBox 3.1 on Windows Vista 64 bit

    - by Adam Siddhi
    Software & Hardware Setup
Host system: Windows Vista Home Premium SP1 64 bit. Guest: Ubuntu 10.04 (ubuntu-10.04-desktop-i386.iso) 32 bit. VM: VirtualBox 3.1.8. Hardware: Intel Core 2 Duo T6400, 4GB SDRAM.
What Happened
I followed the tutorial called Installing Ubuntu inside Windows using VirtualBox located here: www.psychocats.net/ubuntu/virtualbox At first I downloaded ubuntu-10.04-desktop-amd64.iso because I figured that it would be a perfect fit with my Vista 64 OS. I was wrong, because it turns out that my Intel Core 2 Duo T6400 CPU does not have Intel® Virtualization Technology. So I had to go with ubuntu-10.04-desktop-i386.iso, which is 32 bit. This got me to the point where I could actually create the Ubuntu VM. So I set up the VM in VirtualBox (according to the tutorial I was following) to prepare for the Ubuntu 10.04 virtualization. Please go to my Picasa web album to see the screenshots of my VM settings and the Ubuntu boot process so you can see what I experienced (they appear in the order that I experienced them in): www.picasaweb.google.com/rubysiddhi/ProblemVirtualizingUbuntu100432BitOnVirtualBox31OnWindowsVista64# The first 17 images show the VM settings. The last 8 show my attempt at virtualizing Ubuntu 10.04. You can see it booting up but ultimately failing.
The Specifics
The one error message I got was: (process:210): GLib-WARNING **: getpwuid_r(): failed due to unknown user id (0) It appeared on a black screen that sort of looked like a Windows console screen, but without the c:\ or the ability to type. Then this error message got more complex when tons of text appeared on the screen. Pictures 23 - 25 in the album show this text. I should also mention that I found a post in the Ubuntu forums by zonination, who seemed to have similar problems to mine even though they had a different setup. The main issue I think zonination and I may be having is the fact that we cannot change the color mode to 32 bit while it is booting. I think the 16 bit color mode may be making Ubuntu fail. Not certain though. Well, I hope I explained my problem thoroughly and clearly. Thanks for the tutorial; it got me started, but now I hope to finish this process so I can start developing in Ubuntu. Oh, by the way, if you want to actually see what happened play by play (with some classical music in the background) check out the video I made over here: http://www.youtube.com/watch?v=XMbbm5E_0Xw Thanks! Regards, Adam

    Read the article

  • Home entertainment karaoke system

    - by Mehper C. Palavuzlar
    Here is what I have: 40" Sony Bravia LCD TV, 5+1 speaker system, lots of original Karaoke CDs, and of course, a microphone. To set up a karaoke entertainment system, what kind of hardware do I need? Are there any standalone karaoke players out there? I hope my only option is not having to connect my laptop to TV. I already have karaoke software on my laptop but I wanna step up to a higher level without the help of a computer.

    Read the article

  • How to fix C:\Windows\System32\StikyNot.exe

    - by Sara Kamil
    I have tried all the troubleshooting options that I have found on the net, but I still get the same error message for C:\Windows\System32\StikyNot.exe. I really like this application and none of the other sticky note applications compare to it. I really hope someone can help me with this. Thank you.

    Read the article

  • SSH port forwarding through Windows machine

    - by Leonardo Ramé
    Is it possible to connect to an SSH server that is only accessible from inside a network, using a Windows machine without SSH as a gateway? Let me clarify my question with a sketch: Me (Linux machine) --- WIN (Windows without SSHD) --- LIN (Linux with SSHD). Machine Me is the PC I'm using to connect to LIN through WIN. WIN is accessible from the outside and has an RDP (rdesktop) port open, while LIN is only accessible from inside the network. Hope you understand the question.

    Read the article

  • Where is google talk on android [closed]

    - by José Antonio
    Hi, I hope you can help me with this problem. I just bought a new Motorola DEXT with Android 1.5 and all the apps work well, but I can't find Google Talk. Using the Astro file manager I can see that the Google Talk service and the IM (instant messaging) service are installed in the services folder, but for some reason I just can't use it; it keeps telling me "Main Activity not found for com.gooogle.android.apps.gtalkservices". Thanks!

    Read the article

  • Realtek HD Audio 5.1 on Windows 7

    - by Darth
    I have a problem with the Realtek 5.1 driver for Windows 7 x64. I've installed the newest drivers for Realtek HD Audio, but 5.1 still doesn't work; the only thing that works is the front stereo pair. However, when I click on a single speaker in the sound settings test, every one of them works. Sorry that the image isn't in English, but I hope you can understand the point.

    Read the article

  • How practical is using a Wireless-N router together with a B and G router?

    - by Jian Lin
    I hope to use a Wireless-N router to boost the speed of my wireless Internet, but some devices probably only support B or G, such as the iPhone, iPad and Wii, so if it were simply to replace the existing Wireless-B/G router, those devices wouldn't work. Is it practical to buy a Wireless-N router and just plug the existing wireless router into it? Or, since I am using AT&T's U-verse, which has a central Wireless-B/G router, should I instead plug a new Wireless-N router into that?

    Read the article

  • vim indentation for bullet lists

    - by Oliver
    Hi all, I often write text with a format like this in Vim:
My talking points:
- talking point 1
- talking point 2
.... continue on point 2
Ideally, I would hope Vim can auto-align it for me, such as:
- talking point 1
- talking point 2
  continue on point 2
Is this possible? Thanks, Oliver
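    One way this is often handled, sketched below with option values that are assumptions fitted to this example rather than the only possible setup, is to teach Vim's formatter to recognise "- " bullets so that gq keeps the hanging indent:

    " Recognise '- ' (or '* ') bullets as list items and keep the hanging indent
    setlocal autoindent
    setlocal formatoptions+=n
    setlocal formatlistpat=^\\s*[-*]\\s\\+
    " Then reformat the current paragraph with gqip, or a visual selection with gq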

    Read the article

  • iPhone 3G refuses to transfer purchased apps to iTunes

    - by andynormancx
    My iPhone 3G refuses to transfer purchased apps to iTunes. This is causing me major problems with syncing. Whenever I attempt to transfer apps from the iPhone to iTunes it goes through the motions, but never actually transfers anything. It displays the various apps in the info area at the top of the screen, but the progress bar never advances. In comparison, when I sync other iPhones using the same install of iTunes, the progress bar advances and apps are transferred. The same also happens on clean installs of iTunes on other computers; it seems to be my iPhone that is the common factor. I have tried restoring the phone from a backup, which makes no difference. This started happening months ago and the phone has since been upgraded to 3.0 and 3.1, but the problem still persists. Originally it was just a minor irritation, but I made an attempt to fix it which has made things worse. I deleted all the apps from within iTunes and then did "Transfer purchases" in the hope that it might fix something. It didn't fix anything. Also, I cannot now sync at all. If I do sync, iTunes now does "transferring purchases", fails to transfer anything, and then deletes all the apps (and data) from my iPhone. It also means I can't sync music, podcasts or anything else. I can't sync anything else because I can't temporarily turn off app syncing, since iTunes then warns that the apps on the iPhone will be deleted. I also tried de-authorising and re-authorising. What can I do to get app syncing working again? P.S. I have considered deleting all the apps and reinstalling them one by one, in the hope that it will fix the problem. However, I don't really want to embark on doing that for 55+ apps and re-entering login details etc. for the apps that need them, especially as I might then find out it didn't solve the problem. Update: The latest update to iTunes 9 has improved things in one key aspect. If I let a sync run to completion, iTunes no longer deletes all the apps from my phone. So I can now sync all my other data, even if I still can't sync my apps. Resolved: See my answer to the question for how I finally resolved the problem.

    Read the article

  • HP blade server: How many connections can be made between HP new Gen 8 blades and an interconnect

    - by Dave T
    I am building a virtualized network on an HP C3000 with 460c Gen 8 blades and 2 HP L3 switch interconnects. I was advised to buy a 1Gb 4-port 366M Mezzanine Adapter, which provides 6 Ethernet connections to each blade. I have been told that you can only make 2 connections from each blade to each interconnect, but since I have two interconnects and 6 ports, I hope someone can tell me if I can make 3 connections from each server to each interconnect. I am looking for the actual - thanks, Dave

    Read the article

  • Can I use excel to read barcodes and take me to a specific cell?

    - by Ben
    I work for a community group that holds an annual fundraiser for charity over a weekend. I am an Excel user and want to set it up so that I can assign a barcode on a card to a specific person. My hope is to be able to scan the barcode and have it take me to a specific cell in the spreadsheet so I can update the commitment amount, and to provide as much anonymity for our donors as possible. Can this even be done?
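    Most USB barcode scanners behave like a keyboard, so one common pattern is to reserve a "scan" cell and react to it with a Worksheet_Change macro in the sheet's VBA module. The sketch below is only an illustration and its layout is assumed, not prescribed: barcodes get scanned into cell A1, the assigned barcodes live in column B, and the commitment amounts sit one column to the right.

    Private Sub Worksheet_Change(ByVal Target As Range)
        ' Assumed layout: scan cell = A1, barcodes in column B, commitments in column C
        If Target.Address <> "$A$1" Or IsEmpty(Target.Value) Then Exit Sub

        Dim hit As Range
        Set hit = Me.Range("B:B").Find(What:=Target.Value, LookAt:=xlWhole)

        If hit Is Nothing Then
            MsgBox "Barcode not found: " & Target.Value
        Else
            Application.Goto hit.Offset(0, 1)   ' jump to that donor's commitment cell
        End If
    End Sub

    Printing only the barcode, not the donor's name, on the card keeps the anonymity you are after.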

    Read the article

  • How do I use saz-sudo puppet module to deploy my own sudoers file with hiera?

    - by mr.zog
    I have saz-sudo installed and have created a site_sudo module based (I hope) on it. Here's what I have in my site_sudo/manifest/init.pp file:
class { 'site_sudo': }
sudo::conf { 'web':
  source => 'puppet:///files/etc/sudoers',
}
sudo::conf { 'syseng':
  priority => 10,
  content  => "%sysadm ALL=(ALL) NOPASSWD: ALL",
}
include sudo
No matter what I do, the sudoers file on the target is always overwritten with the sudoers.rhel6 file from the saz-sudo module. I'm using common.yaml too:
classes:
  - site_sudo
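    For comparison, here is a hedged sketch of how such a wrapper class is often laid out with saz-sudo. The class body, file path and the config_file_replace setting are assumptions for illustration; the two things it avoids are declaring class { 'site_sudo': } inside its own init.pp and letting the sudo class replace /etc/sudoers with its bundled template, which is a likely reason the stock sudoers.rhel6 keeps coming back.

    # site_sudo/manifests/init.pp -- a minimal sketch, assuming the saz-sudo module is on the modulepath
    class site_sudo {

      # Keep the existing /etc/sudoers instead of the module's bundled template
      class { 'sudo':
        config_file_replace => false,
      }

      # Ships a snippet from this module's files/ directory into /etc/sudoers.d (assumed path)
      sudo::conf { 'web':
        priority => 10,
        source   => 'puppet:///modules/site_sudo/web',
      }

      sudo::conf { 'syseng':
        priority => 20,
        content  => '%sysadm ALL=(ALL) NOPASSWD: ALL',
      }
    }

    With hiera still listing site_sudo under classes, the node only ever includes the wrapper, and the wrapper decides how sudo itself is configured.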

    Read the article

  • Detach current session and attach to another session, done with one script, can I?

    - by Jimm Chen
    After reading the vague official doc of GNU screen (http://www.gnu.org/software/screen/manual/screen.html) and asking quite a few questions on this site, I still cannot figure out how to accomplish the following task with a shell script. The task takes some words to describe. Assume I'm using PuTTY to telnet into my Linux server.
STEP 1. Launch 2 telnet connections. From putty window 1 (PTWIN1), telnet into a Linux Bash shell, execute screen -RR to launch a screen session, and get session name 21385.pts-4.linux-ic37. From putty window 2 (PTWIN2), do the same as in PTWIN1, but this time I get session name 22041.pts-9.linux-ic37. Now we have two screen sessions running simultaneously. We can check this:
$ screen -ls
There are screens on:
        22041.pts-9.linux-ic37  (Attached)
        21385.pts-4.linux-ic37  (Attached)
2 Sockets in /var/run/uscreens/S-chj2.
STEP 2. Assume that for some reason PTWIN1's TCP connection is lost abnormally (but the server doesn't know that), urgent work is pending on session 21385, and I want to quickly regain control of it. Fortunately, we know the 21385 session is still there, so I want to have PTWIN2 attach to session 21385. Because I hate having to remember the esoteric screen options all the time, I decide to write a script called sttach. I hope that sttach 21385.pts-4.linux-ic37 can let me attach to session 21385 (for PTWIN2). Now, let's say sttach works well and I take control of 21385 on PTWIN2.
STEP 3. Some minutes later, I want to go back to work on session 22041. Here, please allow me to have PTWIN2 remain associated with session 21385. What I would like to do is launch another putty window (PTWIN3), telnet into the server, and execute sttach 22041.pts-9.linux-ic37 in the hope that I can resume session 22041 on PTWIN3. You can see the benefit of sttach: as long as I know the target session name, I can call it to have my PuTTY window switch to that session, regardless of whether the target session is "(Attached)" or "(Detached)", and regardless of whether the running context is inside a screen session or not.
Now the question: how do I write the (Bash) script sttach? I mean, run screen with appropriate options in sttach to accomplish the goal. Waiting for your kind answer. Thank you.
My previous questions regarding GNU screen: GNU screen, how to get current sessionname programmatically / Is it possible to change GNU screen session name after created? / How do I know I'm running inside a linux "screen" or not?
My env: openSUSE Linux 11.3, GNU screen 4.00.03 (FAU) 23-Oct-06
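    For what it's worth, a minimal sketch of such an sttach script follows. It assumes the common case: the script is run from a plain login shell rather than from inside another screen session, and it relies on screen's -d -r combination, which detaches the named session from any stale display (the dead PTWIN1 in STEP 2) and reattaches it to the current terminal. Running it from inside screen would nest one screen in another, so the sketch simply refuses in that case.

    #!/bin/bash
    # sttach -- reattach the named GNU screen session to this terminal,
    # detaching it first from any other (possibly dead) display.
    set -eu

    target="${1:?usage: sttach <session-name>}"

    if [ -n "${STY:-}" ]; then
        # $STY is set when we are already inside a screen session; attaching
        # another session from here would nest screens, so bail out instead.
        echo "sttach: already inside screen session $STY; detach (Ctrl-a d) first" >&2
        exit 1
    fi

    # -d detaches the session wherever it is attached, -r reattaches it here
    exec screen -d -r "$target"

    Note that -d -r pulls the target away from every other display, so STEP 3's wish to leave PTWIN2 on 21385 while PTWIN3 grabs 22041 only works because 22041 is detached by then; showing one session on several displays at once needs multi-display mode (screen -x) instead.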

    Read the article

< Previous Page | 141 142 143 144 145 146 147 148 149 150 151 152  | Next Page >