Search Results

Search found 22 results on 1 page for 'abhimanyu grover'.

Page 1 of 1

  • ifconfig : Command 'ifconfig' is available in '/sbin/ifconfig'

    - by Sahil Grover
    My question is related to another open question. echo $PATH gives me output like:

    /home/sahil/.rvm/gems/ruby-1.9.3-p125/bin:/home/sahil/.rvm/gems/ruby-1.9.3-p125@global/bin:/home/sahil/.rvm/rubies/ruby-1.9.3-p125/bin:/home/sahil/.rvm/bin:/usr/local/bin:/usr/bin:/bin:/usr/games:/home/sahil/.rvm/bin{}:/home/android-sdks/{}:/home/android-sdks/platform-tools/{}:/home/android-sdks/tools/{}:/home/sahil/android-sdks/tools{}:/home/sahil/android-sdks/tools:/home/sahil/android-sdks/platform-tools/

    But running ifconfig gives me output like:

    Command 'ifconfig' is available in '/sbin/ifconfig'
    The command could not be located because '/sbin' is not included in the PATH environment variable.
    This is most likely caused by the lack of administrative privileges associated with your user account.
    ifconfig: command not found

    After running the command suggested in the other question,

    export PATH="/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games"

    ifconfig runs, but other Ruby, Rails and RVM commands stop working. How can I resolve this, and why does it happen?

    Read the article

  • Recommended: git-completion.bash

    - by andy.grover
    If you use git on a daily basis like I do, git-completion.bash is a great way to make your life a little easier. While I guess it does add tab-completion for git commands, the most useful feature for me is the ability to put the current branch into the cmdline prompt. Now that I am comfortable working with multiple git branches and remotes, a little reminder of where I am prevents time-consuming mistakes. git-completion.bash lives in git's git tree:

    git clone git://git.kernel.org/pub/scm/git/git.git

    Copy git/contrib/completion/git-completion.bash to ~/.git-completion.sh, then follow the instructions in the file to set it up and enable showing the branch in $PS1. I also use this alias in my ~/.gitconfig, which is convenient:

    [alias]
            log1 = log --pretty=oneline --abbrev-commit

    Have fun!

    Read the article

  • Is Infiniband going to get squeezed by iWARP and external QPI?

    - by andy.grover
    The Inquirer certainly thinks so.

    However, I'm not so sure it makes sense to compare Infiniband to an as-yet-unannounced optical external QPI. QPI is currently a processor interconnect. CPUs, RAM, and devices connected by it are conceptually part of the same machine -- they run a single OS, for example. They are both "networks" or "fabrics", but they have very different design trade-offs.

    Another widely-used bus in the system is closer to Infiniband than QPI -- PCI Express. Isn't it more likely that PCIe could take on IB? There are already companies with solutions that use external PCI Express for cluster interconnect, but these have not gained significant market share. Why would QPI, a technology whose sweet spot is even further from Infiniband's than PCIe's, be able to challenge Infiniband? It's hard to speculate without much information, but right now it doesn't seem likely to me.

    The other prediction made in the article is that Intel's 10GbE iWARP card could squeeze IB on the low end, due to its greater compatibility and lower cost.

    It's definitely never a good idea to bet against Ethernet when it comes to mass-market, commodity networking. Ethernet will win. 10GbE will win. But there are now two competing ways to implement the low-latency RDMA Verbs interface on top of Ethernet. iWARP is essentially RDMA over TCP/IP over Ethernet. The new alternative is IBoE (Infiniband over Ethernet, aka RoCEE, aka "Rocky"), which encapsulates the IB packet protocol directly in the Ethernet frame. It loses the layer 3 routability of iWARP, but better maintains software compatibility with existing apps that use IB, and is simpler to implement in both software and hardware. iWARP has a substantial head start, but I believe that IBoE silicon will eventually be cheaper, and more likely to be implemented in commodity Ethernet hardware.

    I think IBoE is going to take low-end market share from traditional IB, but I think this is a situation IB hardware vendors have no problem accepting. Commoditized IBoE NICs invite greater use of RDMA features, and when higher performance is needed, customers can upgrade to "real" IB, maintaining IB's justification for higher prices. (IB max interconnect speeds have historically been 2-4x higher than Ethernet's, and I don't see that changing.)

    (ObDisclosure: My current employer now sells IB hardware. I previously also worked at Intel. My opinions are my own, duh.)

    Read the article

  • Display resolutions missing after upgrade to Ubuntu 12.10

    - by Jim Grover
    I recently upgraded my dual-boot HP laptop (dv7-3060us, 17.3" display) from Ubuntu 12.04 to 12.10. After the upgrade, only three resolutions are available (800x600, 1024x768, and 1152x864, all 4:3), none of which are 16:9. Everything worked properly in 12.04. The display adapter is an ATI Mobility Radeon HD 4530. Does anyone have any ideas/suggestions about how to correct this problem? Thanks.

    Read the article

  • Can I use copyrighted images and music in my game? [duplicate]

    - by Aman Grover
    This question already has an answer here: Using modified copyrighted music in non-commercial games.

    I am working on my first ever project, and I have used some copyrighted images (images taken from some other well-recognized games). Is it even possible that I edit the images a little bit and then use them as royalty-free images? If not, can anyone tell me some web links where I can get royalty-free images?

    Read the article

  • Upgrading MySQL Connector/Net

    - by Todd Grover
    I am trying to publish a website with our hosting provider. I am getting an error due to the fact that they only allow medium trust, and the MySQL Connector/Net that I am using requires reflection to work. Unfortunately, reflection is not allowed in medium trust. After some research I found that the newest version of MySQL Connector/Net may solve this problem: Connector/Net 6.6 includes enhancements to partial trust support that allow hosting services to deploy applications without installing the Connector/Net library in the GAC. I am thinking that will solve my problem. So I uninstalled MySQL Connector/Net 6.4.4 and installed MySQL Connector/Net 6.6.4. When I run the application in Visual Studio 2010 I get the error:

    ProviderIncompatibleException was unhandled by user code

    The message is "An error occurred while getting provider information from the database. This can be caused by Entity Framework using an incorrect connection string. Check the inner exceptions for details and ensure that the connection string is correct." The InnerException is "The provider did not return a ProviderManifestToken string."

    Everything works fine when I have Connector/Net 6.4.4 installed: I can access the database and perform read/write/delete actions against it. I have references to the following in the project:

    MySql.Data
    MySql.Data.Entity
    MySql.Web

    My connection string in Web.config:

    <connectionStrings>
        <add name="AESSmartEntities" connectionString="server=ec2-xxx-xx-xxx-xx.compute-1.amazonaws.com; user=root; database=nunya; port=3306; password=xxxxxxx;" providerName="MySql.Data.MySqlClient" />
    </connectionStrings>

    What might I be doing wrong? Do I need any additional setting(s) to work with version 6.6.4 that weren't required in the older version 6.4.4?

    Read the article

  • Parse MIME messages

    - by Abhimanyu
    Hi, for my new project, which has an email module, I need to show all the email information on the web. When I make a call to the server I get base64-encoded MIME data; after applying base64 decoding I get MIME data like the following:

    /*****************Mime data start *******************************/
    From [email protected] Tue Jun 23 12:01:02 2009
    Date: Tue, 23 Jun 2009 12:01:02 +0530
    From: Prashant R Naik <[email protected]>
    To: [email protected]
    Subject: This is a test mail
    Message-ID: <[email protected]>
    Reply-To: Prashant R Naik <[email protected]>
    MIME-Version: 1.0
    Content-Type: multipart/mixed; boundary="ReaqsoxgOBHFXBhH"
    Content-Disposition: inline
    User-Agent: Mutt/1.5.18 (2008-05-17)
    Status: RO
    Content-Length: 1912
    Lines: 52

    --ReaqsoxgOBHFXBhH
    Content-Type: text/plain; charset=us-ascii
    Content-Disposition: inline

    Test mail. Initiated by prashant

    Regards,
    --
    Prashant R Naik
    Principal Technologist | Symbian & Web2.0
    Geodesic Limited | www.geodesic.com
    Tel: +91-80-66551000

    --ReaqsoxgOBHFXBhH
    Content-Type: image/gif
    Content-Disposition: attachment; filename="trash.gif"
    Content-Transfer-Encoding: base64

    R0lGODlhEAAQANUoADJ8wTqU2DmR1TqV2DN9wTSBxTWFyTaGyTJ9wTWGyTaKzjmS1TOAxTuV
    2DaFyTN8wDiN0jiO0jSAxTeKzjqS1DN8wTqR1TWFyjB4vTOBxTmO0TmS1DaKzTeJzTqV1zSA
    xDJ8wDqS1TeKzTF4vDF4vTiO0f///zuX2gAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
    AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAACH5BAEAACgALAAA
    AAAQABAAAAaDQNRpSCwWhcakcsk8mZ5Qpik5pUKvT2W1uDVWp+BiYNAImAZmz/lcDoQEFoFp
    QTFtTPKFQLCAREolJiURJhCCJhqAJRMiIhwmjSYdJgqUjQoODgkJJgecBp0mBgYXBx8ZBQxY
    UAUSDAUACLEPDwgEAAAEIBUEtygkIyMkwMMYw8EjKEEAOw==

    --ReaqsoxgOBHFXBhH
    Content-Type: image/jpeg
    Content-Disposition: attachment; filename="bx.jpg"
    Content-Transfer-Encoding: base64

    /9j/4AAQSkZJRgABAQEASABIAAD/2wBDAAEBAQEBAQEBAQEBAQEBAQEBAQEBAQEBAQEBAQEB
    AQEBAQEBAQEBAQEBAQEBAQEBAQEBAQEBAQEBAQEBAQEBAQH/2wBDAQEBAQEBAQEBAQEBAQEB
    AQEBAQEBAQEBAQEBAQEBAQEBAQEBAQEBAQEBAQEBAQEBAQEBAQEBAQEBAQEBAQEBAQH/wAAR
    CAAUAAoDAREAAhEBAxEB/8QAFQABAQAAAAAAAAAAAAAAAAAAAAn/xAAYEAEAAwEAAAAAAAAA
    AAAAAAAAGWen5//EABQBAQAAAAAAAAAAAAAAAAAAAAD/xAAUEQEAAAAAAAAAAAAAAAAAAAAA
    /9oADAMBAAIRAxEAPwCb4AJHym0Vp3PQJTaK07noJHgA/9k=

    --ReaqsoxgOBHFXBhH
    Content-Type: image/png
    Content-Disposition: attachment; filename="day_bg.png"
    Content-Transfer-Encoding: base64

    iVBORw0KGgoAAAANSUhEUgAAAGQAAAApCAYAAADDJIzmAAAABmJLR0QA/wD/AP+gvaeTAAAA
    CXBIWXMAAAsTAAALEwEAmpwYAAAAB3RJTUUH2AwCCS0kTriU2QAAAB10RVh0Q29tbWVudABD
    cmVhdGVkIHdpdGggVGhlIEdJTVDvZCVuAAAAXElEQVR42u3bQQEAMAgDMZiqiZtP5AwbfeQk
    NO/WvPtLMR0TABEQIAICRECACAgQAREQIAICRECACAgQAREQIAICRECACAgQAREQIAICRECA
    CAgQARGQ7NpPPasFT+0FZPjBRwYAAAAASUVORK5CYII=

    --ReaqsoxgOBHFXBhH--
    /*****************Mime data end *******************************/

    Now the problem is that I have to parse this data and use it in my application. Since this data is not XML, it is difficult to parse (parsing by tags is easy). Can anyone who knows how to parse MIME data help me? I am using Erlang to parse this data. Thank you in advance.
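
    Since the structure above is fixed (headers, a blank line, then parts separated by the boundary), one workable approach is plain string splitting. Below is a minimal sketch in JavaScript rather than the asker's Erlang; the function name parseMime and the LF normalization are illustrative choices, not a library API, and the same splitting logic carries over to Erlang's binary functions:

    // A minimal multipart/MIME splitter (sketch, not RFC-complete).
    // Assumes `raw` holds the decoded message text shown above.
    function parseMime(raw) {
      raw = raw.replace(/\r\n/g, "\n");            // normalize CRLF to LF

      // Top-level headers end at the first blank line.
      var cut = raw.indexOf("\n\n");
      var headers = raw.slice(0, cut);
      var body = raw.slice(cut + 2);

      // Pull the boundary out of the Content-Type header.
      var m = headers.match(/boundary="?([^"\n]+)"?/);
      if (!m) return [{ headers: headers, body: body }]; // not multipart

      var parts = [];
      var chunks = body.split("--" + m[1]).slice(1);     // drop the preamble
      for (var i = 0; i < chunks.length; i++) {
        if (chunks[i].charAt(0) === "-") break;          // "--" closes the message
        var chunk = chunks[i].replace(/^\n/, "");
        var at = chunk.indexOf("\n\n");
        parts.push({
          headers: chunk.slice(0, at),                   // e.g. Content-Type, filename
          body: chunk.slice(at + 2).replace(/\s+$/, "")  // e.g. the base64 payload
        });
      }
      return parts;
    }

    Each returned part then carries its own headers (Content-Type, the attachment filename) and its raw payload, so the base64 attachments can be decoded separately.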

    Read the article

  • How to hide the UIBarButton on the navigation bar

    - by abhimanyu jindal
    Hi, I have created a bar button that displays Done when editing of the text view starts. What I actually need is that when I press the Done button, the editable property of the text view becomes false and the Done button hides. I am done with the first part, but how do I hide the bar button? Please help.

    Read the article

  • VI/VIM file handling

    - by Abhimanyu
    Nowa days I'm working with the Vi editor with the positive approach that you can do most things using it - unlike other editors. I came across one problem: Let's assume I have open a folder with vi <folder name> so it opens the folder in Vi and lists the files in that folder. I select a file and read the content, then I want to go back to the previous view which has filenames listed so it is easy to choose another file. But don't know how to achieve this. I'm hoping some method should be there to achieve this.

    Read the article

  • Clear view of the box model

    - by Abhimanyu
    As far as I know, every element in HTML has padding (left, right, top, bottom) and margin (left, right, top, bottom), which make up its box model, so that we can figure out its actual position with respect to the document. Any ideas about this?
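
    One layer is missing from the question: besides padding and margin, each element also has a border around its content box. A small sketch that reads all four layers and the element's document position (assuming a browser with getComputedStyle, i.e. IE9 or any standards browser; older IE exposes element.currentStyle instead):

    // Read the box-model layers of an arbitrary element (sketch).
    var el = document.getElementsByTagName("div")[0];  // any element works
    var cs = window.getComputedStyle(el);

    // content box, then each layer wrapped around it:
    console.log("content:", cs.width, "x", cs.height);
    console.log("padding:", cs.paddingTop, cs.paddingRight, cs.paddingBottom, cs.paddingLeft);
    console.log("border :", cs.borderTopWidth, cs.borderRightWidth, cs.borderBottomWidth, cs.borderLeftWidth);
    console.log("margin :", cs.marginTop, cs.marginRight, cs.marginBottom, cs.marginLeft);

    // offsetWidth = content + padding + border (the margin lies outside it).
    console.log("offsetWidth:", el.offsetWidth);

    // Actual position with respect to the document:
    var rect = el.getBoundingClientRect();
    console.log("x:", rect.left + window.pageXOffset, "y:", rect.top + window.pageYOffset);

    Since offsetWidth bundles content, padding and border, subtracting the computed values lets you recover any single layer.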

    Read the article

  • Clone a node using the JavaScript DOM

    - by Abhimanyu
    I want to create a clone of the node built by the code below using the JavaScript DOM:

    var summaryDiv = __createElement("div","sDiv","sDiv"+j);
    summaryDiv.onmouseover = function() {this.setAttribute("style","text-decoration:underline;cursor:pointer;");};
    summaryDiv.onmouseout = function() {this.setAttribute("style","text-decoration:none;");};
    if(browser.isIE) {
        summaryDiv.onclick = new Function("__fc.show_tooltip("+j+",'view_month')");
    } else {
        summaryDiv.setAttribute("onclick", "__fc.show_tooltip("+j+",'view_month',event)");
    }
    someobj.appendChild(summaryDiv);

    I am using obj = summaryDiv.cloneNode(true), which creates the node, but the onclick event does not fire in the case of Internet Explorer. Can anybody help me with this?
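
    A hedged sketch of one dependable workaround: browsers disagree about which event handlers survive cloneNode(true) (standards browsers drop property-assigned handlers, and old IE's setAttribute("onclick", ...) stores a plain string that never compiles into a handler), so assign the handler as a property in both branches and re-attach it on the clone explicitly. __createElement, __fc.show_tooltip and someobj are taken from the question's code:

    // Build the handler once; the closure keeps its own copy of j.
    function makeClickHandler(j) {
      return function (ev) {
        __fc.show_tooltip(j, 'view_month', ev || window.event); // window.event covers old IE
      };
    }

    var summaryDiv = __createElement("div", "sDiv", "sDiv" + j);
    summaryDiv.onclick = makeClickHandler(j);
    someobj.appendChild(summaryDiv);

    // cloneNode copies the markup but not property-assigned handlers,
    // so re-attach them on the clone explicitly.
    var obj = summaryDiv.cloneNode(true);
    obj.onclick = makeClickHandler(j);
    someobj.appendChild(obj);

    Routing everything through a small factory function also sidesteps the classic loop-variable capture problem that the original new Function(...) string was working around.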

    Read the article

  • Map entries become vectors when piped thru a macro

    - by Gavin Grover
    In Clojure, a map entry created within a macro is preserved...

    (class (eval `(new clojure.lang.MapEntry :a 7)))
    ;=> clojure.lang.MapEntry

    ...but when piped through from the outside context it collapses to a vector...

    (class (eval `~(new clojure.lang.MapEntry :a 7)))
    ;=> clojure.lang.PersistentVector

    This behavior is defined inside LispReader.syntaxQuote(Object form), under the condition if(form instanceof IPersistentCollection). Does anyone know if this is intended behavior or something that will be fixed?

    Read the article

  • Unable to trigger event in IE during cloning

    - by Abhimanyu
    The following code clones a set of divs together with their events (onclick). It works fine in FF, but in IE it does not fire the events associated with each div.

    <html>
    <head>
    <style type='text/css'>
    .firstdiv {
        border: 1px solid red;
    }
    </style>
    <script language="JavaScript">
    function show_tooltip(idx, condition, ev) {
        alert(idx + "==" + condition + "==" + ev);
    }

    function createCloneNode() {
        var cloneObj = document.getElementById("firstdiv").cloneNode(true);
        document.getElementById("maindiv").appendChild(cloneObj);
    }

    function init() {
        var mainDiv = document.createElement("div");
        mainDiv.id = 'maindiv';
        var firstDiv = document.createElement("div");
        firstDiv.id = 'firstdiv';
        firstDiv.className = 'firstdiv';
        for (var j = 0; j < 4; j++) {
            var summaryDiv = document.createElement("div");
            summaryDiv.id = "sDiv" + j;
            summaryDiv.className = 'summaryDiv';
            summaryDiv.onmouseover = function() {this.setAttribute("style","text-decoration:underline;cursor:pointer;");};
            summaryDiv.onmouseout = function() {this.setAttribute("style","text-decoration:none;");};
            summaryDiv.setAttribute("onclick", "show_tooltip("+j+",'view_month',event)");
            summaryDiv.innerHTML = 'Div' + j;
            firstDiv.appendChild(summaryDiv);
        }
        mainDiv.appendChild(firstDiv);
        var secondDiv = document.createElement("div");
        var linkDiv = document.createElement("div");
        linkDiv.innerHTML = 'create clone of above element';
        linkDiv.onclick = function() { createCloneNode(); };
        secondDiv.appendChild(linkDiv);
        mainDiv.appendChild(secondDiv);
        document.body.appendChild(mainDiv);
    }
    </script>
    </head>
    <body>
    <script language="JavaScript">
    init()
    </script>
    </body>
    </html>

    Can anybody tell me what the problem is in the above code? Please correct me.
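
    A hedged sketch of a likely fix for the testcase above: in old IE, setAttribute("onclick", "...") stores a plain string instead of compiling a handler, so the summary divs never get a working onclick there, and the clone inherits nothing usable either. Assigning the handler as a property through a closure, and re-binding inside createCloneNode, behaves the same way in both browsers:

    // Replaces the setAttribute("onclick", ...) line in init():
    //   summaryDiv.onclick = makeTooltipHandler(j);
    function makeTooltipHandler(j) {
      return function (ev) {
        show_tooltip(j, 'view_month', ev || window.event); // window.event covers old IE
      };
    }

    function createCloneNode() {
      var cloneObj = document.getElementById("firstdiv").cloneNode(true);
      // Property-assigned handlers are not reliably carried over,
      // so re-bind them on every summary div inside the clone.
      var divs = cloneObj.getElementsByTagName("div");
      for (var j = 0; j < divs.length; j++) {
        divs[j].onclick = makeTooltipHandler(j);
      }
      document.getElementById("maindiv").appendChild(cloneObj);
    }

    Note that the clone also duplicates id="firstdiv" (getElementById will keep returning the original), which is why the re-binding walks the clone itself; the onmouseover/onmouseout handlers would need the same re-binding treatment.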

    Read the article

  • Want to run a C program from PHP using the exec() function

    - by Abhimanyu
    Hi, I am trying to run a C executable file using PHP's exec(). When the C program is a simple one, like printing hello, exec('./print.out') works fine. But when I need to pass an argument to my C program, using exec('./arugment.out -n 1234'), it does not work. Can anybody tell me how to pass an argument to a C program using exec()?

    Read the article

  • Socket connection issue + PHP

    - by Abhimanyu
    Hi, I am using PHP socket programming and am able to write data to an open socket, but then I have to wait a long time (or it gets stuck) for the response, or sometimes I get an error like "Maximum execution time of 30 seconds exceeded" at the line where fgets($fp, 128) is placed. I have checked the server; it seems it has sent the response as expected, but I don't see why I am unable to get the response. The following is the code used for the socket connection and reading data:

    function socket_connection() {
        $fp = fsockopen(CLIENT_HOST, CLIENT_PORT, $errno, $errstr);
        fwrite($fp, $packet);
        $msg = fgets($fp, 128);
        fclose($fp);
        return $msg;
    }

    Any idea?

    Read the article

  • VS 2012 Code Review – Before Check In OR After Check In?

    - by Tarun Arora
    “Is Code Review Important and Effective?”

    There is a consensus across the industry that code review is an effective and practical way to catch code inconsistencies and possible defects early in the software development life cycle. Among others, some of the advantages of code reviews are:

    - Bugs are found faster
    - It forces developers to write readable code (code that can be read without explanation or introduction!)
    - Optimization methods/tricks/productive programs spread faster
    - Programmers as specialists "evolve" faster
    - It's fun

    “Code review is systematic examination (often known as peer review) of computer source code. It is intended to find and fix mistakes overlooked in the initial development phase, improving both the overall quality of software and the developers' skills. Reviews are done in various forms such as pair programming, informal walkthroughs, and formal inspections.” – Wikipedia

    Nowhere does the definition mention whether it is better to review code before it has been committed to version control or after the commit has been performed. No matter which side you favour, Visual Studio 2012 allows you to request a code review both before check-in and after check-in. Let's weigh the pros and cons of the two approaches independently.

    Code Review Before Check In or Code Review After Check In?

    Approach 1 – Code review before check-in

    The developer completes the code and feels its quality is appropriate for check-in to TFS. The developer raises a code review request to have a second pair of eyes validate whether the code abides by the recommended best practices, whether it will result in any defects due to common coding mistakes, and whether any optimizations can be made to improve the code quality.

    Image 1 – code review before check in

    Pros:
    - Everything that gets committed to source control is reviewed, which minimizes the chances of smelly code making its way into the code base.
    - Decreases the cost of fixing bugs; remember, the earlier you find them, the lesser the pain in fixing them.

    Cons:
    - Development code freeze – since the changes aren't in source control yet, further development can only be done offline.
    - The changes have not been through a CI build, so it is hard to say whether the code abides by all build quality standards.
    - Inconsistent, and cumbersome to track the actual code review process.
    - Not every change to the code base is worth reviewing; a lot of effort is invested for very little gain.

    Approach 2 – Code review after check-in

    The developer checks in, and random code reviews are performed on the checked-in code.

    Image 2 – Code review after check in

    Pros:
    - The code has already passed the CI build and run through any code analysis plug-ins you may have running on the build server. Instruct the developer to ensure zero FxCop, StyleCop and static code analysis issues before check-in; the code is cleaner and smell-free even before the code review.
    - No offline development; developers can continue to develop against source control.

    Cons:
    - Bad code can easily make its way into the code base.
    - Since the review takes place much later in the cycle, the cost of fixing issues can prove to be much higher.

    Approach 3 – Hybrid approach

    The community advocates a more hybrid approach, a blend of tooling and the human accountability quotient.

    Image 3 – Hybrid Approach

    1. Code review high-impact check-ins. It is not possible to review everything, and by setting up code review check-in policies you can end up slowing your team down. Moreover, the code that you are reviewing before check-in hasn't even been through a green CI build.

    2. Tooling. Let the tooling work for you. By running static analysis, FxCop, StyleCop and other plug-ins on the build agent, you can identify the real issues that, in my opinion, can't possibly be identified by human reviews. Configure the tooling to report back the top 10 issues every day. Mandate a manual code review for the individuals who make it onto this list of shame most often.

    3. During merge. I would prefer eliminating some of the other code issues during the merge from the main branch to the release branch. In a Scrum project this is still easier, because cherry-picking the merges is a possibility and the size of the code being reviewed is still limited.

    Let the tooling work for you. If someone breaks the CI build often, put them on a gated check-in build course until you see improvement. If someone appears on the top-10 list of shame generated via the build, ensure that all their code is reviewed until you see improvement. At the end of the day, the goal is to ensure that the code being delivered is top quality. By enforcing a code review before any check-in, you force the developer to work offline or stay put till the review is complete.

    What do the experts say?

    So I asked a few experts what they thought of a “code review quality gate before checking in code”.

    Terje Sandstrom | Microsoft ALM MVP

    You mean a review quality gate BEFORE checking in code????? That would mean a lot of code staying either local or in shelvesets, not even having been through a CI build, with a green CI build being the main criterion for going further, e.g. to the review state. I would not like code lying around with no check-ins. Given a requirement that code is checked in in small pieces, 4-8 hours of work max, and AT LEAST daily check-ins, a manual code review comes second down the lane. I would expect review quality gates to happen before merging back to main, or before merging to release, but that would all be on checked-in code. Branching is absolutely one way to ease the pain. Another way we are using is automatic quality builds, running metrics, coverage and static code analysis. Unfortunately that takes some time (it would be great to have it on the CIs, but...), so it is run scheduled every night. Based on this we get, among other things, top-10 lists of suspicious code, which is then subjected to reviews. If a person seems to be very popular on these top-10 lists, we subject every check-in from that person to a review for a period. That normally helps. None of the clients I have can afford to have every check-in reviewed, so we need to find ways around it. I don't disagree with the nicety of having all the code reviewed, but I find it hard to find those resources in today's enterprises.

    David V. Corbin | Visual Studio ALM Ranger

    I tend to agree with both sides. I hate having code that is not checked in, but at the same time hate having "bad" code in the repository. I have found that branching is one approach to solving this dilemma. Code is checked into the private/feature branch before the review, but is not merged over to the "official" branch until after the review. I advocate both, depending on circumstance (especially team dynamics):

    - The "pre-checkin" review is usually for elements that may impact the project as a whole. Think of it as another "gate", along with passing unit tests.
    - The "post-checkin" review may very well not be at the changeset level, but correlate to a review at the "user story" level.

    Again, this depends on the team dynamics in play.

    Robert MacLean | Microsoft ALM MVP

    I do not think there is a right answer for the industry as a whole. In short, the question is: why do you do reviews? Your question implies risk mitigation, so in low-risk areas you can get away with review after check-in, while in high-risk areas you need to do it before check-in. For example, those new to a team, or juniors, need it much earlier (maybe before check-in, maybe soon after) than seniors who have shipped twenty sprints on the team.

    Abhimanyu Singhal | Visual Studio ALM Ranger

    It depends on the scenario. We recommend post-check-in reviews when:

    1. We don't want to block other checks and processes on manual code reviews. Manual reviews take time, and some pieces may not require manual reviews at all.
    2. We need to trace all changes and track history.
    3. We have a code promotion strategy/process in place. For risk mitigation, post-check-in code can be promoted to accepted branches, or can be rejected.

    Pre-check-in reviews are used when:

    1. There is a high risk factor associated with the change.
    2. Reviewers generally (most of the time) have immediate availability.
    3. The team does not have strict tracking needs.

    Simply speaking, no single process fits all scenarios. You need to select what works best for your team/project.

    Thomas Schissler | Visual Studio ALM Ranger

    This is an interesting discussion; I'm right now discussing the details of executing code reviews with my teams. I see and understand the aspects you brought in, but there is another side I'd like to point out.

    1. If you review per check-in, this is not very practical as a hard rule, because it will disturb the flow of the team very often, or it will reduce the check-in frequency of the devs, which I would not accept.

    2. If you do later reviews, for example if you review PBIs, it is not easy to find out which code you should review. Either you review all changesets associated with the PBI, but then you might review code which has been changed by a later check-in where the dev has already fixed the issue; or you review the diff of the latest changeset of the PBI against the first, but then you might also review changes belonging to other PBIs.

    Jakob Leander | Sr. Director, Avanade

    In my experience, manual code review:

    1. Does not get done, and at the very least does not get redone after changes (regardless of intentions at the start of the project).
    2. When a project actually does it, they often do not do it right away = errors pile up.
    3. Requires a lot of time discussing/defining the standard, and for the team to learn it.

    However, code review is very important, since even small memory leaks in a high-volume web solution have big consequences. In the last years I have advocated the following approach to code review:

    - Architects up front produce "at least one best-practice example" of each type of component and tell the team to copy from it. This should include error handling, logging, security etc.
    - The dev lead on the project continuously browses the code to validate that the best practices are used, especially that patterns etc. are not broken. You can do this formally after each sprint/iteration if you want. Once this is validated, it is unlikely to "go bad" even during later code changes.
    - Agree with the customer to rely on static code analysis from Visual Studio as the one and only coding standard.

    This has HUUGE benefits:

    - You can easily tweak it to reach the level you desire together with the customer.
    - It is easy to measure for both developers and management.
    - It is 100% consistent across the code base.
    - It gets validated all the time, so you never end up getting hammered by a customer review at the end.
    - It is easy to tell the developer that you do not want code back unless it has zero errors = minimized communication.

    You need to track this at least during nightly builds and make sure the team sees the total number of issues. Do not allow the number of issues to grow uncontrolled. On the projects I run I require code analysis to have been run on code before check-in (a check-in rule). This means:

    - You have to have a clean compile (or CA won't run), so as an extra benefit you get very few broken builds.
    - You can change a few of the rules to compile as errors instead of warnings. I often do this for "missing dispose" issues, which you REALLY do not want in your app.

    Tip: place your custom CA rules files as part of the solution. That way it works when you do branching etc. (the path to the CA file is relative in VS).

    Some may argue that CA is not as good as manual inspection. But since manual inspection in reality suffers from the three issues listed above, it is IMO a MUCH better (and much cheaper) approach from a helicopter perspective.

    Tirthankar Dutta | Director, Avanade

    I think code review should be run both before and after check-ins; there are some code metrics that are meant to be run on the entire codebase. Also, especially on multi-site projects, one should strive to architect in a way that lets the men manage the framework while the boys write the repetitive code. That scales very well with the need to review less, by containment and by imposing architectural restrictions that emphasise the design.

    Bruno Capuano | Microsoft ALM MVP

    For code reviews (meaning peer reviews) in a distributed team I use http://www.vsanywhere.com/default.aspx

    David Jobling | Global Sr. Director, Avanade

    Peer review is the only way to scale, and it's a great practice for everyone in the team to learn to perform and accept. In my experience you soon learn whose code to watch more than others and tune the attention accordingly.

    Mikkel Toudal Kristiansen | Manager, Avanade

    If you have several branches in your code base, you will need to merge often. This requires manual merging when a file has been changed in both branches, which offers a good opportunity to actually review the changed code. So my advice is: merging between branches should be done as often as possible, it should be done by a senior developer, and he/she should perform a full code review of the code being merged. As for detecting architectural smells and code smells creeping into the code base, one really good third-party tool exists: NDepend (http://www.ndepend.com/), for static code analysis of the current state of the code base. You could also consider adding StyleCop to the solution.

    Jesse Houwing | Visual Studio ALM Ranger

    I gave a presentation on this subject at the TechDays conference in NL last year. See my presentation and slides here (talk in Dutch, but the presentation is in English): http://blog.jessehouwing.nl/2012/03/did-you-miss-my-techdaysnl-talk-on-code.html

    I'd like to add a few more points:

    - Before/after check-in is mostly a trust issue. If you have a team that does diligent peer reviews and regularly talks/sits together or peer-programs, there's no need to enforce a before-check-in policy. The peer programming and regular feedback during development can take care of most of the review requirements, as long as the team isn't under stress.
    - Under stress, enforce pre-check-in reviews. It might sound strange, if you're already under time or budgetary constraints, but it is under such conditions that most real issues are created or pile up.
    - Use tools to catch the most common errors; Code Analysis/FxCop was already mentioned, and HP Fortify, ReSharper, CodeRush etc. can help you there. There are also a lot of third-party rules you can add to Code Analysis. I've written a few myself (http://fccopcontrib.codeplex.com) and various teams from Microsoft have added their own rules (MSOCAF for SharePoint, WSSF for WCF). For common errors that keep cropping up, see if you can define a rule; it's much easier. But more importantly, make sure you have a good help page explaining *WHY* it's wrong.
    - If you have small feature or developer branches/shelvesets, you might want to review pre-merge. It's still better to do peer reviews and peer programming, but the most important thing is that bad-quality code doesn't make it into the important branch.

    So my philosophy:

    - Use tooling as much as possible.
    - Make sure the team understands the tooling and the importance of the things it flags. It's too easy to just click "suppress all" and ignore the warnings.
    - Under stress, tighten the process; it's under stress that the problems of late reviews will really surface.
    - Most importantly, if you do reviews, do them as early as possible, but never later than needed. In other words, pre-check-in vs. post-check-in doesn't really matter, as long as the review is done before the code is released. It'll just be much more expensive to fix any review findings the later you discover them.

    I would love to hear what you think!

    Read the article
