Code review: is it subjective or objective (quantifiable)?
- by Ram
I am putting together some guidelines for code reviews. We do not have a formal process yet and are trying to formalize one. Our team is geographically distributed.
We use TFS for source control (we used it for task/bug tracking and project management as well, but migrated those to JIRA) and VS2008 for development.
What do you look for when doing a code review?
These are the things I came up with:
Enforce FxCop rules (we are a Microsoft shop)
Check for performance (any tools?), security (thinking about using OWASP Code Crawler), and thread safety
Adhere to naming conventions
The code should cover edge cases and boundary conditions
Handle exceptions correctly (do not swallow exceptions)
Check whether the functionality is duplicated elsewhere
Method bodies should be small (20-30 lines), and methods should do one thing and one thing only (no side effects; avoid temporal coupling)
Do not pass or return nulls from methods
Avoid dead code
Document public and protected methods/properties/variables
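To make the "do not swallow exceptions" point concrete, here is an illustrative sketch. It is in Java rather than C# (the idea carries over directly), and the method names and the timeout-parsing scenario are made up for the example:

```java
// Illustrative only: contrasts swallowing an exception with handling it.
public class ExceptionExamples {

    // Anti-pattern: the exception is swallowed, so the caller gets a
    // default value and never learns that parsing failed.
    static int parseTimeoutSwallowed(String raw) {
        try {
            return Integer.parseInt(raw);
        } catch (NumberFormatException e) {
            return 0; // failure silently hidden
        }
    }

    // Better: rethrow with context so the failure is visible to the caller.
    static int parseTimeout(String raw) {
        try {
            return Integer.parseInt(raw);
        } catch (NumberFormatException e) {
            throw new IllegalArgumentException("Invalid timeout value: " + raw, e);
        }
    }

    public static void main(String[] args) {
        System.out.println(parseTimeoutSwallowed("oops")); // prints 0
        System.out.println(parseTimeout("30"));            // prints 30
    }
}
```

The first version is the kind of thing a reviewer should flag: the bug is hidden rather than handled.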
What other things do you generally look for?
I am trying to see if we can quantify the review process (so that it would produce identical output when done by different reviewers). Example: saying "the method body should be no longer than 20-30 lines of code" as opposed to saying "the method body should be small".
Or is code review inherently subjective, differing from one reviewer to another?
The objective is to have a marking system (say, -1 point for each FxCop rule violation, -2 points for not following naming conventions, +2 points for refactoring, etc.) so that developers would be more careful when they check in their code. This way, we can identify developers who consistently write good or bad code. The goal is for the reviewer to spend about 30 minutes at most on a review. (I know this is subjective, considering that the changeset/revision might include multiple files or huge changes to the existing architecture, but you get the general idea: the reviewer should not spend days reviewing someone's code.)
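To illustrate the marking idea, here is a minimal sketch of the point tally. The weights (-1 per FxCop violation, -2 per naming violation, +2 per refactoring) come from the proposal above; the class and method names, and the Java language, are hypothetical choices for the example:

```java
// Hypothetical sketch of the proposed check-in scoring scheme.
public class ReviewScore {

    // Point weights taken from the proposal above.
    static final int FXCOP_VIOLATION = -1;
    static final int NAMING_VIOLATION = -2;
    static final int REFACTORING_BONUS = 2;

    // Tally a score for one changeset from violation/refactoring counts.
    static int score(int fxCopViolations, int namingViolations, int refactorings) {
        return fxCopViolations * FXCOP_VIOLATION
             + namingViolations * NAMING_VIOLATION
             + refactorings * REFACTORING_BONUS;
    }

    public static void main(String[] args) {
        // e.g. 3 FxCop violations, 1 naming violation, 2 refactorings
        System.out.println(score(3, 1, 2)); // prints -1
    }
}
```

The point is that once the weights are agreed on, the score for a given changeset is the same no matter who computes it.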
What other objective/quantifiable systems do you follow to identify good or bad code written by developers?
Book reference:
Clean Code: A Handbook of Agile Software Craftsmanship by Robert Martin