Search Results


  • If your unit test code "smells", does it really matter?

    - by Buttons840
    Usually I just throw my unit tests together using copy and paste and all kinds of other bad practices. The unit tests usually end up looking quite ugly; they're full of "code smell," but does this really matter? I always tell myself that as long as the "real" code is "good," that's all that matters. Plus, unit testing usually requires various "smelly hacks" like stubbing functions. How concerned should I be over poorly designed ("smelly") unit tests?
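
    To make the trade-off concrete, here is a small sketch of what "de-smelling" a test often amounts to; it assumes NUnit, and the Order/Item types are invented for illustration:

        using System.Collections.Generic;
        using System.Linq;
        using NUnit.Framework;

        // Minimal production types so the sketch is self-contained.
        public class Item
        {
            public decimal Price;
            public Item(decimal price) { Price = price; }
        }

        public class Order
        {
            private readonly List<Item> _items = new List<Item>();
            public void Add(Item item) { _items.Add(item); }
            public decimal TotalWithTax(decimal rate) { return _items.Sum(i => i.Price) * (1 + rate); }
        }

        [TestFixture]
        public class OrderTests
        {
            // One shared builder instead of arrange blocks copy-pasted into
            // every test; removing that duplication is often all the
            // clean-up a smelly test suite needs.
            private static Order OrderWith(params decimal[] prices)
            {
                var order = new Order();
                foreach (var p in prices) order.Add(new Item(p));
                return order;
            }

            [Test]
            public void TotalWithTax_AddsTaxToItemPrices()
            {
                Assert.AreEqual(11m, OrderWith(10m).TotalWithTax(0.1m));
            }

            [Test]
            public void TotalWithTax_IsZeroForEmptyOrder()
            {
                Assert.AreEqual(0m, OrderWith().TotalWithTax(0.1m));
            }
        }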

    Read the article

  • SQL SERVER – Tricks to Comment T-SQL in SSMS – SQL in Sixty Seconds #019 – Video

    - by pinaldave
    Code commenting is one of the most common tasks developers perform. There are two major reasons why developers comment code: 1) debugging and 2) documenting the code. While debugging T-SQL code I have often seen developers struggling to comment code. They spend (or waste) more time commenting and uncommenting than doing the actual debugging of the procedure. When I see a developer struggling to comment code I feel a little uncomfortable, as commenting should be a very easy task. Today we will see three quick methods to comment T-SQL code in the Query Editor. There are three different methods to comment and uncomment statements in SQL Server Management Studio:
    Method 1: Using Keyboard Shortcuts. Commenting the statement – CTRL+K, CTRL+C. Uncommenting the statement – CTRL+K, CTRL+U.
    Method 2: Using the Toolbar. Use the toolbar buttons. (See Video)
    Method 3: Using the Menu Bar. Commenting the statement – Menu Bar >> Edit >> Advanced >> Click on Comment Selection. Uncommenting the statement – Menu Bar >> Edit >> Advanced >> Click on Uncomment Selection.
    Related: Two Different Ways to Comment Code – Explanation and Example. I encourage you to submit your ideas for SQL in Sixty Seconds. We will try to accommodate as many as we can. If we like your idea we promise to share with you educational material. Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: Database, Pinal Dave, PostADay, SQL, SQL Authority, SQL in Sixty Seconds, SQL Query, SQL Scripts, SQL Server, SQL Server Management Studio, SQL Tips and Tricks, T SQL, Technology, Video

    Read the article

  • How to read Scala code with lots of implicits?

    - by Petr Pudlák
    Consider the following code fragment (adapted from http://stackoverflow.com/a/12265946/1333025):

        // Using scalaz 6
        import scalaz._, Scalaz._

        object Example extends App {
          case class Container(i: Int)

          def compute(s: String): State[Container, Int] = state {
            case Container(i) => (Container(i + 1), s.toInt + i)
          }

          val d = List("1", "2", "3")
          type ContainerState[X] = State[Container, X]
          println( d.traverse[ContainerState, Int](compute) ! Container(0) )
        }

    I understand what it does at a high level. But I wanted to trace what exactly happens during the call to d.traverse at the end. Clearly, List doesn't have traverse, so it must be implicitly converted to another type that does. Even though I spent a considerable amount of time trying to find out, I wasn't very successful. First I found that there is a method in scalaz.Traversable, traverse[F[_], A, B](f: (A) => F[B], t: T[A])(implicit arg0: Applicative[F]): F[T[B]], but clearly this is not it (although it's most likely that "my" traverse is implemented using this one). After a lot of searching, I grepped the scalaz source code and found scalaz.MA's method traverse[F[_], B](f: (A) => F[B])(implicit a: Applicative[F], t: Traverse[M]): F[M[B]], which seems to be very close. Still, I'm missing what List is converted to in my example, and whether it uses MA.traverse or something else. The question is: what procedure should I follow to find out what exactly is called at d.traverse? That even such simple code is so hard to analyze seems to me like a big problem. Am I missing something very simple? How should I proceed when I want to understand code that uses a lot of imported implicits? Is there some way to ask the compiler what implicits it used? Or is there something like Hoogle for Scala, so that I can search for a method just by its name?

    Read the article

  • Tag, comment, rating, etc. database design

    - by Efe Kaptan
    I want to implement modules such as comment, rating, tag, etc. for my entities. What I thought was:
        comments_table - comment_id, comment_text
        entity1 - entity1_id, entity1_text
        entity2 - entity2_id, entity2_text
        entity1_comments - entity1_id, comment_id
        entity2_comments - entity2_id, comment_id
        ...
    Is this approach correct?

    Read the article

  • VS 2012 Code Review – Before Check In OR After Check In?

    - by Tarun Arora
    “Is Code Review Important and Effective?” There is a consensus across the industry that code review is an effective and practical way to catch code inconsistencies and possible defects early in the software development life cycle. Among others, some of the advantages of code reviews are:
    - Bugs are found faster
    - Forces developers to write readable code (code that can be read without explanation or introduction!)
    - Optimization methods/tricks/productive programs spread faster
    - Programmers as specialists "evolve" faster
    - It's fun
    “Code review is systematic examination (often known as peer review) of computer source code. It is intended to find and fix mistakes overlooked in the initial development phase, improving both the overall quality of software and the developers' skills. Reviews are done in various forms such as pair programming, informal walkthroughs, and formal inspections.” – Wikipedia
    Nowhere does the definition mention whether it is better to review code before it has been committed to version control or after the commit has been performed. No matter which side you favour, Visual Studio 2012 allows you to request a code review both before check-in and after check-in. Let's weigh the pros and cons of the two approaches independently.
    Code Review Before Check In or Code Review After Check In?
    Approach 1 – Code review before check-in
    The developer completes the code and feels its quality is appropriate for check-in to TFS. The developer raises a code review request to have a second pair of eyes validate whether the code abides by the recommended best practices, will not result in any defects due to common coding mistakes, and whether any optimizations can be made to improve the code quality.
    Image 1 – code review before check-in
    Pros:
    - Everything that gets committed to source control is reviewed.
    - Minimizes the chances of smelly code making its way into the code base.
    - Decreases the cost of fixing bugs; remember, the earlier you find them, the lesser the pain in fixing them.
    Cons:
    - Development code freeze: since the changes aren't in source control yet, further development can only be done offline.
    - The changes have not been through a CI build, so it is hard to say whether the code abides by all build quality standards.
    - Inconsistent, and cumbersome to track the actual code review process.
    - Not every change to the code base is worth reviewing; a lot of effort is invested for very little gain.
    Approach 2 – Code review after check-in
    The developer checks in, and random code reviews are performed on the checked-in code.
    Image 2 – code review after check-in
    Pros:
    - The code has already passed the CI build and run through any code analysis plug-ins you may have running on the build server. Instruct the developer to ensure ZERO FxCop, StyleCop and static code analysis issues before check-in; the code is cleaner and smell-free even before the code review.
    - No offline development; developers can continue to develop against source control.
    Cons:
    - Bad code can easily make its way into the code base.
    - Since the review takes place much later in the cycle, the cost of fixing issues can prove to be much higher.
    Approach 3 – Hybrid approach
    The community advocates a more hybrid approach, a blend of tooling and the human accountability quotient.
    Image 3 – hybrid approach
    1. Code review high-impact check-ins. It is not possible to review everything; by setting up code review check-in policies you can end up slowing your team down. Moreover, the code that you are reviewing before check-in hasn't even been through a green CI build either.
    2. Tooling. Let the tooling work for you. By running static analysis, FxCop, StyleCop and other plug-ins on the build agent, you can identify the real issues that, in my opinion, can't possibly be identified using human reviews. Configure the tooling to report back the top 10 issues every day. Mandate a manual code review for the individuals who keep making it onto this list of shame most often.
    3. During merge. I would prefer eliminating some of the other code issues during the merge from the main branch to the release branch. In a Scrum project this is still easier, because cherry-picking the merges is a possibility and the size of the code being reviewed is still limited.
    Let the tooling work for you: if someone breaks the CI build often, put them on a gated check-in build course until you see improvement. If someone appears on the top 10 list of shame generated via the build, then ensure that all their code is reviewed until you see improvement. At the end of the day, the goal is to ensure that the code being delivered is top quality. By enforcing a code review before any check-in, you force the developer to work offline or stay put until the review is complete.
    What do the experts say?
    So I asked a few experts what they thought of a "code review quality gate before checking in code":
    Terje Sandstrom | Microsoft ALM MVP
    You mean a review quality gate BEFORE checking in code????? That would mean a lot of code staying either local or in shelvesets, not even having been through a CI build, with a green CI build being the main criterion for going further, e.g. to the review state. I would not like code lying around with no check-ins. Given a requirement that code is checked in in small pieces, 4-8 hours of work max, and AT LEAST daily check-ins, a manual code review comes second down the lane. I would expect review quality gates to happen before merging back to main, or before merging to release, but that would all be on checked-in code. Branching is absolutely one way to ease the pain. Another way we are using is automatic quality builds, running metrics, coverage, and static code analysis. Unfortunately this takes some time (it would be great to have it on CI builds, but...), so it's done scheduled every night. Based on this we get, among other stuff, top 10 lists of suspicious code, which is then subjected to reviews. If a person seems to be very popular on these top 10 lists, we subject every check-in from that person to a review for a period. That normally helps. None of the clients I have can afford to have every check-in reviewed, so we need to find ways around it. I don't disagree with the nicety of having all the code reviewed, but I find it hard to find those resources in today's enterprises.
    David V. Corbin | Visual Studio ALM Ranger
    I tend to agree with both sides. I hate having code that is not checked in, but at the same time hate having "bad" code in the repository. I have found that branching is one approach to solving this dilemma. Code is checked into the private/feature branch before the review, but is not merged over to the "official" branch until after the review. I advocate both, depending on circumstance (especially team dynamics).
    - The "pre-check-in" review is usually for elements that may impact the project as a whole. Think of it as another "gate" along with passing unit tests.
    - The "post-check-in" review may very well not be at the changeset level, but correlate to a review at the "user story" level.
    Again, this depends on the team dynamics in play.
    Robert MacLean | Microsoft ALM MVP
    I do not think there is a right answer for the industry as a whole. In short, the question is: why do you do reviews? Your question implies risk mitigation, so in low-risk areas you can get away with reviewing after check-in, while in high-risk areas you need to do it before check-in. An example: those new to a team, or juniors, need it much earlier (maybe that is before check-in, maybe that is soon after) than seniors who have shipped twenty sprints on the team.
    Abhimanyu Singhal | Visual Studio ALM Ranger
    It depends on the scenario. We recommend post-check-in reviews when:
    1. We don't want to block other checks and processes on manual code reviews. Manual reviews take time, and some pieces may not require manual reviews at all.
    2. We need to trace all changes and track history.
    3. We have a code promotion strategy/process in place. For risk mitigation, post-check-in code can be promoted to Accepted branches, or it can be rejected.
    Pre-check-in reviews are used when:
    1. There is a high risk factor associated.
    2. Reviewers generally (most of the time) have immediate availability.
    3. The team does not have strict tracking needs.
    Simply speaking, no single process fits all scenarios. You need to select what works best for your team/project.
    Thomas Schissler | Visual Studio ALM Ranger
    This is an interesting discussion; I'm right now discussing details about executing code reviews with my teams. I see and understand the aspects you brought in, but there is another side as well that I'd like to point out.
    1. If you review per check-in, this is not very practical as a hard rule, because it will disturb the flow of the team very often, or it will reduce the check-in frequency of the devs, which I would not accept.
    2. If you do later reviews, for example if you review PBIs, it is not easy to find out which code you should review. Either you review all changesets associated with the PBI, but then you might review code which has been changed by a later check-in and the dev may have already fixed the issue; or you review the diff of the latest changeset of the PBI against the first, but then you might also review changes from other PBIs.
    Jakob Leander | Sr. Director, Avanade
    In my experience, manual code review:
    1. Does not get done, and at the very least does not get redone after changes (regardless of intentions at the start of the project).
    2. When a project actually does it, they often do not do it right away = errors pile up.
    3. Requires a lot of time discussing/defining the standard and for the team to learn it.
    However, code review is very important, since e.g. even small memory leaks in a high-volume web solution have big consequences. In the last years I have advocated the following approach to code review:
    - Architects up front do "at least one best practice example" of each type of component and tell the team to copy from this one. This should include error handling, logging, security etc.
    - The dev lead on the project continuously browses the code to validate that the best practices are used, especially that patterns etc. are not broken. You can do this formally after each sprint/iteration if you want. Once this is validated, it is unlikely to "go bad" even during later code changes.
    - Agree with the customer to rely on static code analysis from Visual Studio as the one and only coding standard. This has HUUGE benefits:
    - You can easily tweak it to reach the level you desire together with the customer.
    - It is easy to measure for both developers and management.
    - It is 100% consistent across the code base.
    - It gets validated all the time, so you never end up getting hammered by a customer review at the end.
    - It is easy to tell the developer that you do not want code back unless it has zero errors = minimized communication.
    You need to track this at least during nightly builds and make sure the team sees the total number of issues; do not allow the number of issues to grow uncontrolled. On the projects I run I require code analysis to have run on code before check-in (a check-in rule). This means:
    - You have to have a clean compile (or CA won't run), so this is an extra benefit = very few broken builds.
    - You can change a few of the rules to compile as errors instead of warnings. I often do this for "missing dispose" issues, which you REALLY do not want in your app.
    Tip: place your custom CA rules files as part of the solution. That way it works when you do branching etc. (the path to the CA file is relative in VS). Some may argue that CA is not as good as manual inspection, but since manual inspection in reality suffers from the three issues listed at the start, it is IMO a MUCH better (and much cheaper) approach from a helicopter perspective.
    Tirthankar Dutta | Director, Avanade
    I think code review should be run both before and after check-ins. There are some code metrics that are meant to be run on the entire code base. Also, especially on multi-site projects, one should strive to architect in a way that lets men manage the framework while boys write the repetitive code… it scales very well with the need to review less, by containment and imposing architectural restrictions to emphasise the design.
    Bruno Capuano | Microsoft ALM MVP
    For code reviews (meaning peer reviews) in distributed teams I use http://www.vsanywhere.com/default.aspx
    David Jobling | Global Sr. Director, Avanade
    Peer review is the only way to scale, and it's a great practice for everyone in the team to learn to perform and accept. In my experience you soon learn whose code to watch more than others and tune the attention.
    Mikkel Toudal Kristiansen | Manager, Avanade
    If you have several branches in your code base, you will need to merge often. This requires manual merging when a file has been changed in both branches, and it offers a good opportunity to actually review the changed code. So my advice is: merging between branches should be done as often as possible, it should be done by a senior developer, and he/she should perform a full code review of the code being merged. As for detecting architectural smells and code smells creeping into the code base, one really good third-party tool exists: NDepend (http://www.ndepend.com/), for static code analysis of the current state of the code base. You could also consider adding StyleCop to the solution.
    Jesse Houwing | Visual Studio ALM Ranger
    I gave a presentation on this subject at the TechDays conference in NL last year. See my presentation and slides here (talk in Dutch, but English presentation): http://blog.jessehouwing.nl/2012/03/did-you-miss-my-techdaysnl-talk-on-code.html
    I'd like to add a few more points:
    - Before/after check-in is mostly a trust issue. If you have a team that does diligent peer reviews and regularly talks/sits together or peer reviews, there's no need to enforce a before-check-in policy. The peer programming and regular feedback during development can take care of most of the review requirements, as long as the team isn't under stress.
    - Under stress, enforce pre-check-in reviews. It might sound strange if you're already under time or budgetary constraints, but it is under such conditions that most real issues start to be created or pile up.
    - Use tools to catch the most common errors. Code Analysis/FxCop was already mentioned; HP Fortify, ReSharper, CodeRush etc. can help you there. There are also a lot of 3rd-party rules you can add to Code Analysis. I've written a few myself (http://fccopcontrib.codeplex.com) and various teams from Microsoft have added their own rules (MSOCAF for SharePoint, WSSF for WCF). For common errors that keep cropping up, see if you can define a rule; it's much easier. But more importantly, make sure you have a good help page explaining *WHY* it's wrong.
    - If you have small feature or developer branches/shelvesets, you might want to review pre-merge. It's still better to do peer reviews and peer programming, but the most important thing is that bad-quality code doesn't make it into the important branch.
    So my philosophy:
    - Use tooling as much as possible.
    - Make sure the team understands the tooling and the importance of the things it flags. It's too easy to just click "suppress all" to ignore the warnings.
    - Under stress, tighten the process; it's under stress that the problems of late reviews will really surface.
    - Most importantly, if you do reviews, do them as early as possible, but never later than needed. In other words, pre-check-in/post-check-in doesn't really matter, as long as the review is done before the code is released. It'll just be much more expensive to fix any review outcomes the later you find them.
    I would love to hear what you think!

    Read the article

  • Hyperlinked, externalized source code documentation

    - by Dave Jarvis
    Why do we still embed natural language descriptions of source code (i.e., the reason why a line of code was written) within the source code, rather than as a separate document? Given the expansive real estate afforded to modern development environments (high-resolution monitors, dual monitors, etc.), an IDE could provide semi-lock-step panels wherein source code is visually separated from -- but intrinsically linked to -- its corresponding comments. For example, developers could write source code comments in a hyperlinked markup language (linking to additional software requirements), which would simultaneously prevent documentation from cluttering the source code. What shortcomings would inhibit such a software development mechanism?
    A mock-up to help clarify the question: when the cursor is at a particular line in the source code (shown with a blue background, above), the documentation that corresponds to the line at the cursor is highlighted (i.e., distinguished from the other details). As noted in the question, the documentation would stay in lock-step with the source code as the cursor jumps through the source code. A hot-key could switch between "documentation mode" and "development mode".
    Potential advantages include:
    - More source code and more documentation on the screen(s) at once
    - Ability to edit documentation independently of source code (regardless of language?)
    - Write documentation and source code in parallel without merge conflicts
    - Real-time hyperlinked documentation with superior text formatting
    - Quasi-real-time machine translation into different natural languages
    - Every line of code can be clearly linked to a task, business requirement, etc.
    - Documentation could automatically timestamp when each line of code was written (metrics)
    - Dynamic inclusion of architecture diagrams, images to explain relations, etc.
    - Single-source documentation (e.g., tag code snippets for user manual inclusion)
    Notes:
    - The documentation window can be collapsed
    - Workflow for viewing or comparing source files would not be affected
    How the implementation happens is a detail; the documentation could be kept at the end of the source file, split into two files by convention (filename.c, filename.c.doc), or fully database-driven. By hyperlinked documentation, I mean linking to external sources (such as StackOverflow or Wikipedia), internal documents (i.e., a wiki on a subdomain that could cross-reference business requirements documentation), and other source files (similar to JavaDocs). Related thread: What's with the aversion to documentation in the industry?

    Read the article

  • Design Code Outside of an IDE (C#)?

    - by ryanzec
    Does anyone design code outside of an IDE? I think code design is great and all, but the only place I find myself actually designing code (besides in my head) is in the IDE itself. I generally think about it a little beforehand, but when I go to type it out, it is always in the IDE; no UML or anything like that. Now, I think having UML of your code is really good because you are able to see a lot more of the code on one screen; however, the issue I have is that once I type it in UML, I then have to type the actual code, and that is just a big duplication for me. For those who work with C# and design code outside of Visual Studio (or at least outside Visual Studio's text editor), what tools do you use? Do those tools allow you to convert your design to actual skeleton code? Is it also possible to convert code to the design (when you update the code and need an updated UML diagram or whatnot)?

    Read the article

  • How can a code editor effectively hint at code nesting level - without using indentation?

    - by pgfearo
    I've written an XML text editor that provides two view options for the same XML text: one indented (virtually), the other left-justified. The motivation for the left-justified view is to help users 'see' the whitespace characters they're using for indentation of plain-text or XPath code, without interference from indentation that is an automated side-effect of the XML context. I want to provide visual clues (in the non-editable part of the editor) for the left-justified mode that will help the user, but without getting too elaborate. I tried just using connecting lines, but that seemed too busy. The best I've come up with so far is shown in a mocked-up screenshot of the editor below, but I'm seeking better/simpler alternatives (that don't require too much code). [Edit] Taking the heatmap idea (from @jimp) I get this and 3 alternatives - labelled a, b and c.
    The following section describes the accepted answer as a proposal, bringing together ideas from a number of other answers and comments. As this question is now community wiki, please feel free to update this.
    NestView: the name for this idea, which provides a visual method to improve the readability of nested code without using indentation.
    Contour Lines: the name for the differently shaded lines within the NestView.
    The image above shows the NestView used to help visualise an XML snippet. Though XML is used for this illustration, any other code syntax that uses nesting could have been used.
    An overview:
    - The contour lines are shaded (as in a heatmap) to convey nesting level.
    - The contour lines are angled to show when a nesting level is being either opened or closed.
    - A contour line links the start of a nesting level to the corresponding end.
    - The combined width of contour lines gives a visual impression of nesting level, in addition to the heatmap.
    - The width of the NestView may be manually resizable, but should not change as the code changes. Contour lines can either be compressed or truncated to achieve this.
    - Blank lines are sometimes used in code to break up text into more digestible chunks. Such lines could trigger special behaviour in the NestView: for example, the heatmap could be reset or a background-colour contour line used, or both.
    - One or more contour lines associated with the currently selected code can be highlighted. The contour line associated with the selected code level would be emphasized the most, but other contour lines could also 'light up' to help highlight the containing nested group.
    - Different behaviors (such as code folding or code selection) can be associated with clicking/double-clicking on a contour line. Different parts of a contour line (leading, middle or trailing edge) may have different dynamic behaviors associated.
    - Tooltips can be shown on a mouse hover event over a contour line.
    - The NestView is updated continuously as the code is edited. Where nesting is not well-balanced, assumptions can be made about where the nesting level should end, but the associated temporary contour lines must be highlighted in some way as a warning.
    - Drag and drop behaviors of contour lines can be supported. Behaviour may vary according to the part of the contour line being dragged.
    - Features commonly found in the left margin, such as line numbering and colour highlighting for errors and change state, could overlay the NestView.
    Additional functionality:
    The proposal addresses a range of additional issues - many are outside the scope of the original question, but a useful side-effect.
    - Visually linking the start and end of a nested region: the contour lines connect the start and end of each nested level.
    - Highlighting the context of the currently selected line: as code is selected, the associated nest level in the NestView can be highlighted.
    - Differentiating between code regions at the same nesting level: in the case of XML, different hues could be used for different namespaces. Programming languages (such as C#) support named regions that could be used in a similar way.
    - Dividing areas within a nesting area into different visual blocks: extra lines are often inserted into code to aid readability. Such empty lines could be used to reset the saturation level of the NestView's contour lines.
    - Multi-column code view: code without indentation makes the use of a multi-column view more effective, because word-wrap or horizontal scrolling is less likely to be required. In this view, once code has reached the bottom of one column, it flows into the next one.
    Usage beyond merely providing a visual aid:
    As proposed in the overview, the NestView could provide a range of editing and selection features which would be broadly in line with what is expected from a TreeView control. The key difference is that a typical TreeView node has two parts: an expander and the node icon. A NestView contour line can have as many as three parts: an opener (sloping), a connector (vertical) and a closer (sloping).
    On indentation:
    The NestView presented alongside non-indented code complements, but is unlikely to replace, the conventional indented code view. It's likely that any solution adopting a NestView will provide a method to switch seamlessly between indented and non-indented code views without affecting any of the code text itself, including whitespace characters. One technique for the indented view would be 'virtual formatting', where a dynamic left margin is used in lieu of tab or space characters. The same nesting-level data used to dynamically render the NestView could also be used for the more conventional-looking indented view.
    Printing:
    Indentation will be important for the readability of printed code. Here, the absence of tab/space characters and a dynamic left margin mean that the text can wrap at the right margin and still maintain the integrity of the indented view. Line numbers can be used as visual markers that indicate where code is word-wrapped and also the exact position of indentation.
    Screen real estate: flat vs. indented:
    Addressing the question of whether the NestView uses up valuable screen real estate: contour lines work well with a width the same as the code editor's character width. A NestView width of 12 character widths can therefore accommodate 12 levels of nesting before contour lines are truncated/compressed. If an indented view uses 3 character widths for each nesting level, then space is saved until nesting reaches 4 levels; beyond this nesting level, the flat view has a space-saving advantage that increases with each further level. Note: a minimum indentation of 4 character widths is often recommended for code; however, XML often manages with less. Also, virtual formatting permits less indentation to be used, because there's no risk of alignment issues. A comparison of the two views is shown below. Based on the above, it's probably fair to conclude that the choice of view style will be based on factors other than screen real estate. The one exception is where screen space is at a premium, for example on a netbook/tablet or when multiple code windows are open. In these cases, the resizable NestView would seem to be a clear winner.
    Use cases:
    Examples of real-world situations where NestView may be a useful option:
    - Where screen real estate is at a premium: a. on devices such as tablets, notepads and smartphones; b. when showing code on websites; c. when multiple code windows need to be visible on the desktop simultaneously.
    - Where consistent whitespace indentation of text within code is a priority.
    - For reviewing deeply nested code, for example where sub-languages (e.g. Linq in C# or XPath in XSLT) might cause high levels of nesting.
    Accessibility:
    Resizing and colour options must be provided to aid those with visual impairments, and also to suit environmental conditions and personal preferences.
    Compatibility of edited code with other systems:
    A solution incorporating a NestView option should ideally be capable of stripping leading tab and space characters (identified as only having a formatting role) from imported code. Then, once stripped, the code could be rendered neatly in both the left-justified and indented views without change. For many users relying on systems such as merging and diff tools that are not whitespace-aware, this will be a major concern (if not a complete show-stopper).
    Other work: visualisation of overlapping markup:
    Published research by Wendell Piez, dating from 2004, addresses the issue of the visualisation of overlapping markup, specifically LMNL. This includes SVG graphics with significant similarities to the NestView proposal; as such, they are acknowledged here. The visual differences are clear in the images (below); the key functional distinction is that NestView is intended only for well-nested XML or code, whereas Wendell Piez's graphics are designed to represent overlapped nesting. The graphics above were reproduced - with kind permission - from http://www.piez.org. Sources: Towards Hermeneutic Markup; Half-steps toward LMNL.

    Read the article

  • How should code reviews be carried out?

    - by Graviton
    My previous question has to do with how to advance code reviews among developers. Here I am interested in how a code review session should be carried out, so that both the reviewer and the reviewed feel comfortable with it. I have done some code reviews before and the experience has been very unpleasant. My previous manager would come to us, on an ad hoc basis, and tell us to explain our code to him. Since he wasn't very familiar with the code base, whenever he asked me to explain my code, I'd find myself spending a huge amount of time explaining its most basic structure. As a result, each review lasted much too long, and the process left both of us exhausted. Once I was done explaining my work, he would continue by raising issues with it. Most of the issues he raised were cosmetic in nature (e.g., don't use region for this code block; change the variable name from xxx to yyy even though the latter makes even less sense; and so on). After trying this process for a few rounds, we found the review sessions didn't provide much benefit for either of us, and we stopped. How would you go about making each code review a natural, enjoyable, thought-stimulating, bug-fixing and mutual-learning experience? Also, how frequently do you do your code reviews - as soon as the code is checked in? Do you allocate a fixed time every week for this? What guidelines do you follow during your code reviews?

    Read the article

  • Emphasize Some Comments - but not Dirty the Code

    - by Jon Sandness
    I'm having trouble structuring my comments at the moment. There are major sections of the code that I want to stand out when scrolling through the document. Examples: This is a normal comment: int money = 100; //start out with 100 money - This is a comment to emphasize a certain part of the code: /****** Set up all the money ******/ But I don't like this, because it isn't very clean. Is there a standard way of setting up this type of comment?
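
    One common C# convention worth noting here is the #region directive, which makes a major section fold and show a label in Visual Studio's outline, instead of relying on a banner comment. A minimal sketch, with illustrative names:

        // A banner comment stands out when scrolling, but adds noise:
        /****** Money setup ******/

        // A #region is collapsible and labelled by the editor instead:
        public class GameEconomy
        {
            #region Money setup
            // Members here read normally; when folded, the editor shows
            // a single "Money setup" line for the whole section.
            private int money = 100; // start out with 100 money
            #endregion
        }

    Teams differ on whether regions themselves are clean style, so treat this as one option rather than a standard.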

    Read the article

  • Self Documenting Code Vs. Commented Code

    - by Phill
    I had a search but didn't find what I was looking for; please feel free to link me if this question has already been asked. Earlier this month this post was made: http://net.tutsplus.com/tutorials/php/why-youre-a-bad-php-programmer/ Basically, to sum it up, you're a bad programmer if you don't write comments. My personal opinion is that code should be descriptive and mostly not require comments, unless the code cannot be self-describing. In the example given:

        // Get the extension off the image filename
        $pieces = explode('.', $image_name);
        $extension = array_pop($pieces);

    the author said this code should be given a comment. My personal opinion is that the code should be a descriptive function call:

        $extension = GetFileExtension($image_filename);

    However, in the comments someone actually made just that suggestion: http://net.tutsplus.com/tutorials/php/why-youre-a-bad-php-programmer/comment-page-2/#comment-357130 The author responded by saying the commenter was "one of those people", i.e., a bad programmer. What are everyone else's views on self-describing code vs. commented code?
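
    As a C# aside, the same refactoring can lean on the framework itself: System.IO.Path.GetExtension already names the intent, so neither the comment nor a hand-written helper is needed. A minimal sketch (the file name is illustrative):

        using System;
        using System.IO;

        class Program
        {
            static void Main()
            {
                string imageFileName = "holiday-photo.jpg";

                // Self-documenting: the framework method names the intent,
                // so no "get the extension off the filename" comment is needed.
                string extension = Path.GetExtension(imageFileName); // ".jpg"

                Console.WriteLine(extension);
            }
        }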

    Read the article

  • What are the standard directory layouts for source code?

    - by splattered bits
    I'm in the process of proposing a new standard directory layout that will be used across all the projects in our organization. Projects can have compiled source code, setup scripts, build scripts, third-party libraries, database scripts, resources, web services, web sites, etc. This is partly inspired by discovering Maven's standard layout. Are there any other standard layouts that are generally accepted in the industry?
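
    For reference, the Maven standard layout mentioned above looks roughly like this (a partial sketch; Maven defines further directories for filters, assemblies, site documentation, etc.):

        src/main/java        application/library sources
        src/main/resources   application/library resources
        src/main/webapp      web application sources (for WAR projects)
        src/test/java        test sources
        src/test/resources   test resources
        target/              all build output (never checked in)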

    Read the article

  • Unit-testing code that relies on untestable 3rd party code

    - by DudeOnRock
    Sometimes, especially when working with third-party code, I write unit-test-specific code in my production code. This happens when third-party code uses singletons, relies on constants, accesses the file system or a resource I don't want to access in a test situation, or overuses inheritance. The form my unit-test-specific code takes is usually the following: if accessing or importing a certain resource fails, I assume this is a test case and load a mock object. Is this poor form, and if it is, what is normally done when writing tests for code that uses untestable third-party code?
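
    One commonly suggested alternative to test-detection branches in production code is to hide the untestable dependency behind a small interface and pick the implementation at composition time. A minimal C# sketch; the ThirdPartyLicenseManager singleton is hypothetical, standing in for whatever untestable code is involved:

        // Stand-in for a hypothetical untestable third-party singleton
        // (imagine file I/O and global state behind Check()).
        public sealed class ThirdPartyLicenseManager
        {
            public static readonly ThirdPartyLicenseManager Instance = new ThirdPartyLicenseManager();
            private ThirdPartyLicenseManager() { }
            public bool Check(string feature) { return true; }
        }

        // The seam: production code depends on this interface, never on the singleton.
        public interface ILicenseChecker
        {
            bool IsLicensed(string feature);
        }

        // Production adapter over the third-party singleton.
        public class VendorLicenseChecker : ILicenseChecker
        {
            public bool IsLicensed(string feature)
            {
                return ThirdPartyLicenseManager.Instance.Check(feature);
            }
        }

        // Hand-rolled fake for tests: no file system, no global state.
        public class FakeLicenseChecker : ILicenseChecker
        {
            public bool IsLicensed(string feature) { return true; }
        }

        public class ReportService
        {
            private readonly ILicenseChecker _licenses;

            // Tests pass FakeLicenseChecker; production wiring passes VendorLicenseChecker.
            public ReportService(ILicenseChecker licenses) { _licenses = licenses; }

            public string Export()
            {
                return _licenses.IsLicensed("export") ? "exported" : "not licensed";
            }
        }

    With this shape, the production code carries no "am I under test?" branches at all; only the composition root decides which implementation to use.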

    Read the article

  • Should maven generate jaxb java code or just use java code from source control?

    - by Peter Turner
    We're trying to plan how to put together a build server for our shiny new Java backend. We use a lot of JAXB XSD code generation, and I was getting into a heated argument with whoever cared that the build server should:
    - delete the JAXB-created structures that were checked in,
    - generate the code from the XSDs, and
    - use the code generated from those XSDs.
    Everyone else thought that it made more sense to just use the code they checked in (we check in the code generated from the XSDs because Eclipse pretty much forces you to do this, as far as I can tell). My only stale argument is that in my reading of the Joel test, making the build in one step means generating from the source code, and the source code is not the Java source but the XSDs, because if you're messing around with the generated code you're going to get pinched eventually. So, given that we all agree (you may not agree) we should probably be checking in our generated Java files, should we build with them, or should we regenerate the code from the XSDs?
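
    If the build-time generation argument wins, the usual Maven route is to bind XJC to the build, for example with the jaxb2-maven-plugin. A sketch only; configuration element names vary between plugin versions, so check the version you actually use:

        <build>
          <plugins>
            <plugin>
              <groupId>org.codehaus.mojo</groupId>
              <artifactId>jaxb2-maven-plugin</artifactId>
              <executions>
                <execution>
                  <goals>
                    <goal>xjc</goal>
                  </goals>
                </execution>
              </executions>
              <configuration>
                <!-- Generate from the XSDs kept in source control... -->
                <schemaDirectory>src/main/xsd</schemaDirectory>
                <!-- ...into target/, so generated classes are never checked in. -->
                <outputDirectory>${project.build.directory}/generated-sources/jaxb</outputDirectory>
              </configuration>
            </plugin>
          </plugins>
        </build>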

    Read the article

  • fb:comment does not work on my application

    - by giffary
    Hello! I'm new to Facebook development. I want to add a comments box to my application, and I followed the tutorial in the Facebook wiki, but it doesn't work in my application. My Canvas Callback URL is http://122.155.0.71/~facebook/ and I uploaded xd_receiver.htm to the root directory. I pasted the code FB.init("b6e07896a1d0889d9784dea150802587", "/xd_receiver.htm"); into my application, but the comments box does not show up. When I view the page with Firebug I can see the element, but I don't know about xid or where I can create it. Can you help me with this problem? Thank you.

    Read the article

  • Software for "High-level" source code (C++) Management

    - by Korchkidu
    After a lot of small-to-medium projects, I have a lot of libraries and test programs here and there. Also, I must admit that some of the "best practices" I learnt are not that "good", IMHO. In particular, documenting your code and writing "high-level" documentation is not useful in practice:
    - High-level documentation is not maintained up to date = I prefer to read the source code directly;
    - Browsing generated developer documentation is a pain (IMHO) = I prefer to read the source code directly.
    For that reason, I am looking for a tool that could help me organize all my source code directories in a more "readable" manner. What I need is a tool which:
    - Maintains a UML diagram from C++ source code. I don't need source code generation from UML. USE CASE: I am in this super-tool, I notice a design issue, I change the source code; when I get back, the UML diagram is updated;
    - Maintains easily browsable call graphs;
    - Lists references to methods, variables, etc. For example, when I want to see where/when a method is called;
    - Helps write pseudo-code from C++;
    - Embeds a nice C++ source code browser;
    - Is open source (that would be great);
    - Works at least on Win7.
    The focus of this tool should be browsing source code to understand what's going on; for example, when you have a newcomer and you need him to go through the source code. Do you know any great tool? Thanks in advance. PS: please do not answer doxygen (great tool, however).

    Read the article


  • Correct redirect after posting comment - django comments framework

    - by Sachin
    I am using the Django comments framework to allow users to comment on my site, but there is a problem with the URL to which the user is redirected after posting a comment. If I render my comment form as {% with comment.content_object.get_absolute_url as next %} {% render_comment_form for comment.content_object %} {% endwith %} then the URL redirected to after the comment is posted is <comment.content_object.get_absolute_url>/?c=<comment.id>. For example, if I posted a comment on a post with URL /post/256/a-new-post/, then the URL I am redirected to after posting the comment is /post/256/a-new-post/?c=99, where assume 99 is the id of the comment just posted. Instead, what I want is something like /post/256/a-new-post/#c99. Secondly, when I call comment.get_absolute_url() I do not get the proper URL; instead I get a URL like /comments/cr/58/14/#c99, and this link is broken. How can I get the correct URL as mentioned in the documentation? Am I doing something wrong? Any help is appreciated.

    Read the article

  • Freelancing - Share the source code?

    - by Tec
    I have developed a couple of form-based Windows applications in VB.NET for a client, and they all work well; he paid me through a freelance site. I handed over the executable and the setup to the client and all was well. Now the client wants the source code for the application. Is there a general practice on sharing the source code with the client? Please note: the client never mentioned he needed the source code, and he is asking for it a week after the app was completed and he made the payment. I don't mind sharing the source code, but I am not sure if I should. This probably means the client would not hire me again, and the bigger question is: is the source code really his property? This question may have been asked a few times, but I still cannot draw a conclusion on what is right.
    Update: to answer some of the questions: The source code was not mentioned at all. There was no exclusive contract signed except for the usual agreement of the freelance site. I am not sure if software development comes under work for hire, and is that valid for users outside of the US? The reason for not sharing the source code is that this was a very small project and I got paid for a mere few hours, so if I have the option then I would definitely want to keep the source code to myself, as that leaves open the possibility of the client coming back. The application works flawlessly and the code is solid. Also, the task the client wanted to achieve was very challenging, and I would not like other programmers (competitors) to know how I achieved it. So unless I get confirmation that the source code is purely the property of the client, I would not be willing to share it.

    Read the article

  • Code Trivia #7

    - by João Angelo
    Let's go for another code trivia. It's business as usual: you just need to find what's wrong with the following code:

        static void Main(string[] args)
        {
            using (var file = new FileStream("test", FileMode.Create) { WriteTimeout = 1 })
            {
                file.WriteByte(0);
            }
        }

    TIP: There's something very wrong with this specific code, and there's also another subtle problem that arises due to how the code is structured.

    Read the article

  • Code review recommendations and Code Smells

    - by Michael Freidgeim
    Some time ago Twitter told me that I am similar to Boris Lipschitz. Indeed, he is also a .NET programmer from Russia living in Australia. I've read his list of Code Review points and found them quite comprehensive. A few points were not clear to me, and they forced me into further reading. In particular, the statement "Exception should not be used to return a status or an error code." wasn't fully clear to me, because sometimes we store an exception as an object with all the error details, and I believe that's a valid approach. However, I agree that throwing exceptions should be avoided if you expect to return an error as part of a normal flow. Related link: http://codeutopia.net/blog/2010/03/11/should-a-failed-function-return-a-value-or-throw-an-exception/
    Another point slightly puzzled me: "If Thread.Sleep() is used, can it be replaced with something else, e.g. Timer, AutoResetEvent, etc." I believe there are very rare cases when anyone uses Thread.Sleep in production code; usually it is used in mocks and prototypes.
    I had to look further to clarify "Dependency injection is used instead of Service Location pattern". Even though most articles show some preference for dependency injection, there are also advantages to using Service Location; e.g. see http://geekswithblogs.net/KyleBurns/archive/2012/04/27/dependency-injection-vs.-service-locator.aspx. http://www.cookcomputing.com/blog/archives/000587.html refers to the concluding thoughts of Martin Fowler: the choice between Service Locator and Dependency Injection is less important than the principle of separating service configuration from the use of services within an application.
    The post had a link to Jeff Atwood's excellent article Code Smells, but the statement that "code should not pass a review if it violates any of the code smells" sounds too strict for my environment. In particular, I disagree with the "Dead Code" recommendation "Ruthlessly delete code that isn't being used. That's why we have source control systems!". If there is a chance that unused code will be required in the future, it is convenient to keep it as commented-out or #if/#endif blocks with an appropriate explanation of why it could be required. TFS is a good source control system, but a context search in the source code of the current solution is much easier than finding something in previous versions of the code. I also found a link to the good book "Clean Code: A Handbook of Agile Software Craftsmanship".
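
    A small C# illustration of that exceptions-vs-status distinction, using the .NET Try-pattern for failures that are part of the normal flow:

        using System;

        class Program
        {
            static void Main()
            {
                // Expected failure in the normal flow: prefer a status-returning API...
                if (int.TryParse("not a number", out int value))
                    Console.WriteLine(value);
                else
                    Console.WriteLine("input was not numeric"); // no exception thrown

                // ...and reserve exceptions for genuinely exceptional conditions,
                // where the exception object can still carry full error details.
                try
                {
                    int parsed = int.Parse("not a number"); // throws FormatException
                    Console.WriteLine(parsed);
                }
                catch (FormatException ex)
                {
                    Console.WriteLine(ex.Message);
                }
            }
        }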

    Read the article

  • What is New in ASP.NET 4.0 Code Access Security

    - by Xiaohong
    ASP.NET Code Access Security (CAS) is a feature that helps protect server applications on hosting multiple Web sites, ASP.NET lets you assign a configurable trust level that corresponds to a predefined set of permissions. ASP.NET has predefined ASP.NET Trust Levels and Policy Files that you can assign to applications, you also can assign custom trust level and policy files. Most web hosting companies run ASP.NET applications in Medium Trust to prevent that one website affect or harm another site etc. As .NET Framework's Code Access Security model has evolved, ASP.NET 4.0 Code Access Security also has introduced several changes and improvements. The main change in ASP.NET 4.0 CAS In ASP.NET v4.0 partial trust applications, application domain can have a default partial trust permission set as opposed to being full-trust, the permission set name is defined in the <trust /> new attribute permissionSetName that is used to initialize the application domain . By default, the PermissionSetName attribute value is "ASP.Net" which is the name of the permission set you can find in all predefined partial trust configuration files. <trust level="Something" permissionSetName="ASP.Net" /> This is ASP.NET 4.0 new CAS model. For compatibility ASP.NET 4.0 also support legacy CAS model where application domain still has full trust permission set. You can specify new legacyCasModel attribute on the <trust /> element to indicate whether the legacy CAS model is enabled. By default legacyCasModel is false which means that new 4.0 CAS model is the default. <trust level="Something" legacyCasModel="true|false" /> In .Net FX 4.0 Config directory, there are two set of predefined partial trust config files for each new CAS model and legacy CAS model, trust config files with name legacy.XYZ.config are for legacy CAS model: New CAS model: Legacy CAS model: web_hightrust.config legacy.web_hightrust.config web_mediumtrust.config legacy.web_mediumtrust.config web_lowtrust.config legacy.web_lowtrust.config web_minimaltrust.config legacy.web_minimaltrust.config   The figure below shows in ASP.NET 4.0 new CAS model what permission set to grant to code for partial trust application using predefined partial trust levels and policy files:    There also some benefits that comes with the new CAS model: You can lock down a machine by making all managed code no-execute by default (e.g. setting the MyComputer zone to have no managed execution code permissions), it should still be possible to configure ASP.NET web applications to run as either full-trust or partial trust. UNC share doesn’t require full trust with CASPOL at machine-level CAS policy. Side effect that comes with the new CAS model: processRequestInApplicationTrust attribute is deprecated  in new CAS model since application domain always has partial trust permission set in new CAS model.   In ASP.NET 4.0 legacy CAS model or ASP.NET 2.0 CAS model, even though you assign partial trust level to a application but the application domain still has full trust permission set. The figure below shows in ASP.NET 4.0 legacy CAS model (or ASP.NET 2.0 CAS model) what permission set to grant to code for partial trust application using predefined partial trust levels and policy files:     What $AppDirUrl$, $CodeGen$, $Gac$ represents: $AppDirUrl$ The application's virtual root directory. This allows permissions to be applied to code that is located in the application's bin directory. 
For example, if a virtual directory is mapped to C:\YourWebApp, then $AppDirUrl$ would equate to C:\YourWebApp. $CodeGen$ The directory that contains dynamically generated assemblies (for example, the result of .aspx page compiles). This can be configured on a per application basis and defaults to %windir%\Microsoft.NET\Framework\{version}\Temporary ASP.NET Files. $CodeGen$ allows permissions to be applied to dynamically generated assemblies. $Gac$ Any assembly that is installed in the computer's global assembly cache (GAC). This allows permissions to be granted to strong named assemblies loaded from the GAC by the Web application.   The new customization of CAS Policy in ASP.NET 4.0 new CAS model 1. Define which named permission set in partial trust configuration files By default the permission set that will be assigned at application domain initialization time is the named "ASP.Net" permission set found in all predefined partial trust configuration files. However ASP.NET 4.0 allows you set PermissionSetName attribute to define which named permission set in a partial trust configuration file should be the one used to initialize an application domain. Example: add "ASP.Net_2" named permission set in partial trust configuration file: <PermissionSet class="NamedPermissionSet" version="1" Name="ASP.Net_2"> <IPermission class="FileIOPermission" version="1" Read="$AppDir$" PathDiscovery="$AppDir$" /> <IPermission class="ReflectionPermission" version="1" Flags ="RestrictedMemberAccess" /> <IPermission class="SecurityPermission " version="1" Flags ="Execution, ControlThread, ControlPrincipal, RemotingConfiguration" /></PermissionSet> Then you can use "ASP.Net_2" named permission set for the application domain permission set: <trust level="Something" legacyCasModel="false" permissionSetName="ASP.Net_2" /> 2. Define a custom set of Full Trust Assemblies for an application By using the new fullTrustAssemblies element to configure a set of Full Trust Assemblies for an application, you can modify set of partial trust assemblies to full trust at the machine, site or application level. The configuration definition is shown below: <fullTrustAssemblies> <add assemblyName="MyAssembly" version="1.1.2.3" publicKey="hex_char_representation_of_key_blob" /></fullTrustAssemblies> 3. Define <CodeGroup /> policy in partial trust configuration files ASP.NET 4.0 new CAS model will retain the ability for developers to optionally define <CodeGroup />with membership conditions and assigned permission sets. The specific restriction in ASP.NET 4.0 new CAS model though will be that the results of evaluating custom policies can only result in one of two outcomes: either an assembly is granted full trust, or an assembly is granted the partial trust permission set currently associated with the running application domain. It will not be possible to use custom policies to create additional custom partial trust permission sets. When parsing the partial trust configuration file: Any assemblies that match to code groups associated with "PermissionSet='FullTrust'" will run at full trust. Any assemblies that match to code groups associated with "PermissionSet='Nothing'" will result in a PolicyError being thrown from the CLR. This is acceptable since it provides administrators with a way to do a blanket-deny of managed code followed by selectively defining policy in a <CodeGroup /> that re-adds assemblies that would be allowed to run. 
Any assemblies that match to code groups associated with other permissions sets will be interpreted to mean the assembly should run at the permission set of the appdomain. This means that even though syntactically a developer could define additional "flavors" of partial trust in an ASP.NET partial trust configuration file, those "flavors" will always be ignored. Example: defines full trust in <CodeGroup /> for my strong named assemblies in partial trust config files: <CodeGroup class="FirstMatchCodeGroup" version="1" PermissionSetName="Nothing"> <IMembershipCondition    class="AllMembershipCondition"    version="1" /> <CodeGroup    class="UnionCodeGroup"    version="1"    PermissionSetName="FullTrust"    Name="My_Strong_Name"    Description="This code group grants code signed full trust. "> <IMembershipCondition      class="StrongNameMembershipCondition" version="1"       PublicKeyBlob="hex_char_representation_of_key_blob" /> </CodeGroup> <CodeGroup   class="UnionCodeGroup" version="1" PermissionSetName="ASP.Net">   <IMembershipCondition class="UrlMembershipCondition" version="1" Url="$AppDirUrl$/*" /> </CodeGroup> <CodeGroup class="UnionCodeGroup" version="1" PermissionSetName="ASP.Net">   <IMembershipCondition class="UrlMembershipCondition" version="1" Url="$CodeGen$/*"   /> </CodeGroup></CodeGroup>   4. Customize CAS policy at runtime in ASP.NET 4.0 new CAS model ASP.NET 4.0 new CAS model allows to customize CAS policy at runtime by using custom HostSecurityPolicyResolver that overrides the ASP.NET code access security policy. Example: use custom host security policy resolver to resolve partial trust web application bin folder MyTrustedAssembly.dll to full trust at runtime: You can create a custom host security policy resolver and compile it to assembly MyCustomResolver.dll with strong name enabled and deploy in GAC: public class MyCustomResolver : HostSecurityPolicyResolver{ public override HostSecurityPolicyResults ResolvePolicy(Evidence evidence) { IEnumerator hostEvidence = evidence.GetHostEnumerator(); while (hostEvidence.MoveNext()) { object hostEvidenceObject = hostEvidence.Current; if (hostEvidenceObject is System.Security.Policy.Url) { string assemblyName = hostEvidenceObject.ToString(); if (assemblyName.Contains(“MyTrustedAssembly.dll”) return HostSecurityPolicyResult.FullTrust; } } //default fall-through return HostSecurityPolicyResult.DefaultPolicy; }} Because ASP.NET accesses the custom HostSecurityPolicyResolver during application domain initialization, and a custom policy resolver requires full trust, you also can add a custom policy resolver in <fullTrustAssemblies /> , or deploy in the GAC. You also need configure a custom HostSecurityPolicyResolver instance by adding the HostSecurityPolicyResolverType attribute in the <trust /> element: <trust level="Something" legacyCasModel="false" hostSecurityPolicyResolverType="MyCustomResolver, MyCustomResolver" permissionSetName="ASP.Net" />   Note: If an assembly policy define in <CodeGroup/> and also in hostSecurityPolicyResolverType, hostSecurityPolicyResolverType will win. If an assembly added in <fullTrustAssemblies/> then the assembly has full trust no matter what policy in <CodeGroup/> or in hostSecurityPolicyResolverType.   
Other changes in ASP.NET 4.0 CAS

Use of the new transparency model introduced in .NET Framework 4.0:

- Assemblies dynamically compiled by ASP.NET change behavior: in the new CAS model they are marked security transparent level 2, so the Framework 4.0 transparency rules apply — partial trust code is treated as completely transparent, with stricter enforcement. In the legacy CAS model they are marked security transparent level 1, which uses the Framework 2.0 transparency rules for compatibility.
- Most of the ASP.NET runtime assemblies are also marked security transparent level 2, so their code is SecurityTransparent by default unless the SecurityCritical or SecuritySafeCritical attribute is specified. See "Security Changes in the .NET Framework 4" for more information about these security attributes.

Support for conditional APTCA: if an assembly is marked with the conditional APTCA attribute to allow partially trusted callers, and you want to make the assembly both visible and accessible to partial trust code in your web application, you must add a reference to the assembly in the partialTrustVisibleAssemblies section:

<partialTrustVisibleAssemblies>
  <add assemblyName="MyAssembly" publicKey="hex_char_representation_of_key_blob" />
</partialTrustVisibleAssemblies>

Most of the ASP.NET runtime assemblies are also marked conditional APTCA to prevent use of ASP.NET APIs in partial trust environments such as WinForms or WPF UI controls hosted in Internet Explorer.

Differences between the ASP.NET new CAS model and the legacy CAS model

ASP.NET 4.0 legacy CAS model:

- ASP.NET partial trust appdomains have full trust permission.
- Multiple different permission sets in a single appdomain are allowed in ASP.NET partial trust configuration files.
- Code groups are supported.
- Machine CAS policy is honored.
- The processRequestInApplicationTrust attribute is still honored.

New configuration setting for the legacy model:

<trust level="Something" legacyCasModel="true" />
<partialTrustVisibleAssemblies>
  <add assemblyName="MyAssembly" publicKey="hex_char_representation_of_key_blob" />
</partialTrustVisibleAssemblies>

ASP.NET 4.0 new CAS model:

- ASP.NET now runs in homogeneous application domains. Only full trust, or the appdomain's partial trust grant set, are allowable permission sets. It is no longer possible to define arbitrary permission sets that get assigned to different assemblies. If an application currently depends on fine-tuning the partial trust permission set using the ASP.NET partial trust configuration file, this is no longer possible.
- The processRequestInApplicationTrust attribute is deprecated.
- Dynamically compiled assemblies output by ASP.NET build providers are explicitly marked as transparent.
- ASP.NET partial trust grant sets are independent of any enterprise, machine, or user CAS policy level.
- A simplified model for locking down web servers that only allows trusted managed web applications to run: machine policy that used to grant full trust to managed code based on membership conditions can instead be configured using the new ASP.NET 4.0 full trust assembly configuration section. The full trust assembly configuration section requires explicitly listing each assembly, as opposed to using membership conditions.
Alternatively, the membership condition(s) used in machine policy can instead be re-defined in a <CodeGroup /> within ASP.NET's partial trust configuration file to grant full trust.

New configuration setting for the new model:

<trust level="Something" legacyCasModel="false" permissionSetName="ASP.Net"
       hostSecurityPolicyResolverType=".NET type string" />
<fullTrustAssemblies>
  <add assemblyName="MyAssembly" version="1.0.0.0" publicKey="hex_char_representation_of_key_blob" />
</fullTrustAssemblies>
<partialTrustVisibleAssemblies>
  <add assemblyName="MyAssembly" publicKey="hex_char_representation_of_key_blob" />
</partialTrustVisibleAssemblies>

I hope this post helps you better understand ASP.NET 4.0 CAS.

Xiaohong Tang
ASP.NET QA Team

    Read the article

  • Tips on a tool to measure code quality?

    - by Cristi Diaconescu
    I'm looking for a tool that can provide code quality metrics. For instance, it could report very long functions (spaghetti code), very complex classes (which could contain do-it-all code), and so on. While we're on the (subjective :-) subject of code quality, what other code metrics would you suggest? I'm targeting C#/.NET code, but I'm sure this could extend to most programming languages.
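    For illustration only (not from the original question), a crude first approximation of such a metric — flagging oversized .cs files — takes just a few lines of C#; real tools use cyclomatic complexity, class coupling, and similar measures instead, and the 500-line threshold here is arbitrary:

    using System;
    using System.IO;
    using System.Linq;

    class LongFileFinder
    {
        static void Main(string[] args)
        {
            string root = args.Length > 0 ? args[0] : ".";

            // Count lines per C# file and report the largest offenders.
            var offenders = Directory
                .EnumerateFiles(root, "*.cs", SearchOption.AllDirectories)
                .Select(f => new { File = f, Lines = File.ReadLines(f).Count() })
                .Where(x => x.Lines > 500)               // arbitrary threshold
                .OrderByDescending(x => x.Lines);

            foreach (var x in offenders)
                Console.WriteLine("{0,6} lines  {1}", x.Lines, x.File);
        }
    }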

    Read the article

  • Visual Studio 2013 Static Code Analysis in depth: What? When and How?

    - by Hosam Kamel
    In this post I'll illustrate the following points in detail:

    - What is static code analysis?
    - When to use it
    - Supported platforms
    - Supported Visual Studio versions
    - How to use it: run Code Analysis manually, automatically, while checking in source code to TFS version control (TFSVC), or as part of Team Build
    - Understand the Code Analysis results and learn how to fix them
    - Create your custom rule set
    - Q & A
    - References

    What is static code analysis?

    The Static Code Analysis feature of Visual Studio performs static analysis on code to help developers identify potential design, globalization, interoperability, performance, security, and many other categories of potential problems, according to Microsoft's rules, which mainly target best practices in writing code. A large set of those rules is included with Visual Studio, grouped into different categories targeting specific coding issues such as security, design, interoperability, and globalization.

    Static here means analyzing the source code without executing it. This type of analysis can be performed through automated tools (like the Visual Studio 2013 Code Analysis tool) or manually through code review, which is already supported in Visual Studio 2012 and 2013 (check the "Using Code Review to Improve Quality" video on Channel9). There is also dynamic analysis, which is performed on executing programs using software testing techniques such as code coverage.

    When to use?

    Running the code analysis tool at regular intervals during your development process can enhance the quality of your software; examining your code for a set of common defects and violations is always good programming practice. Code analysis can also find defects in your code that are difficult to discover through testing, giving you a first-level quality gate for your application during the development phase, before you release it to the testing team.

    Supported platforms

    .NET Framework, native (C and C++), and database applications.

    Supported Visual Studio versions

    All editions of Visual Studio 2013 (except Visual Studio Test Professional) — check the feature comparisons. Creating and modifying a custom rule set requires Visual Studio Premium or Ultimate.

    How to use?

    Code Analysis can be run manually at any time from within the Visual Studio IDE, or set up to run automatically as part of a Team Build or a check-in policy for Team Foundation Server.

    Run Code Analysis manually

    To run code analysis manually on a project, on the Analyze menu, click Run Code Analysis on your project, or simply right-click the project name in Solution Explorer and choose Run Code Analysis from the context menu.

    Run Code Analysis automatically

    To run code analysis each time you build a project, select Enable Code Analysis on Build on the project's property page.
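    Behind the scenes, that checkbox sets MSBuild properties in the project file. A hedged sketch of the relevant .csproj fragment (the rule set shown is one of the standard ones and is only an example):

    <PropertyGroup>
      <!-- Run code analysis on every build of this configuration. -->
      <RunCodeAnalysis>true</RunCodeAnalysis>
      <!-- Which rule set to evaluate; swap in your own .ruleset file if needed. -->
      <CodeAnalysisRuleSet>MinimumRecommendedRules.ruleset</CodeAnalysisRuleSet>
    </PropertyGroup>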
    Run Code Analysis while checking in source code to TFS version control (TFSVC)

    Team Foundation Version Control (TFVC) provides a way for organizations to enforce practices that lead to better code and more efficient group development through check-in policies — rules that are set at the team project level and enforced on developer computers before code is allowed to be checked in. (This is available only if you're using Team Foundation Server.)

    Required permissions on Team Foundation Server: you must have the Edit project-level information permission set to Allow; typically your account must be part of Project Administrators or Project Collection Administrators. For more information about Team Foundation permissions, check http://msdn.microsoft.com/en-us/library/ms252587(v=vs.120).aspx

    1. In Team Explorer, right-click the team project name, point to Team Project Settings, and then click Source Control.
    2. In the Source Control dialog box, select the Check-in Policy tab.
    3. Click Add to create a new check-in policy, or double-click the existing Code Analysis item in the Policy Type list to change the policy.
    4. Check or uncheck the policy options based on the configuration you need:
       - Enforce check-in to only contain files that are part of current solution: code analysis can run only on files specified in solution and project configuration files. This policy guarantees that all code that is part of a solution is analyzed.
       - Enforce C/C++ Code Analysis (/analyze): requires that all C or C++ projects be built with the /analyze compiler option to run code analysis before they can be checked in.
       - Enforce Code Analysis for Managed Code: requires that all managed projects run code analysis and build before they can be checked in. (Check the Code Analysis rule set reference on MSDN.)

    What is a rule set? A rule set is a group of code analysis rules — for example, Microsoft.Design is a rule set name and "Do not declare static members on generic types" is one of its code analysis rules.

    Once you have configured the policy, it is enabled for all team members in the project; whenever a team member checks in any source code to TFSVC, the policy section will highlight the Code Analysis policy.

    TFS is a very extensible platform, so you can implement your own custom Code Analysis check-in policy — check http://msdn.microsoft.com/en-us/library/dd492668.aspx for more details — but be aware of compatibility between different TFS versions: http://msdn.microsoft.com/en-us/library/bb907157.aspx

    Run Code Analysis as part of Team Build

    With Team Foundation Build (TFBuild), you can create and manage build processes that automatically compile and test your applications and perform other important functions. Code Analysis can be enabled in the build definition file by selecting the correct value for the build process parameter "Perform Code Analysis". Once configured, kick off your build definition to queue a new build; Code Analysis will run as part of the build workflow and you will see code analysis warnings as part of the build report.

    Understand the Code Analysis results and learn how to fix them

    Now that we have gone through the Code Analysis configuration and the different ways of running it, let's dig into the results, how to understand them, and how to resolve them. The Code Analysis window in Visual Studio shows all the analysis results based on the rule sets you configured in the project properties. Each result item contains:

    1. Check ID — the unique identifier for the rule. CheckId and Category are used for in-source suppression of a warning.
    2. Title — the title of the warning message.
    3. Description — a description of the problem or a suggested fix.
    4. File Name — the file name and the line of code that violates the rule.
    5. Category — the code analysis category for this error.
    6. Warning/Error — depends on how you configured the rule set; the default is the Warning level.
    7. Action:
       - Copy: copies the warning information to the clipboard.
       - Create Work Item: if you're connected to Team Foundation Server, you can create a work item — most likely a Task or Bug — and assign it to a developer to fix a specific code analysis warning.
       - Suppress Message: there are times when you might decide not to fix a code analysis warning. You might decide that resolving it requires too much recoding relative to the probability that the issue will arise in any real-world implementation of your code, or you might believe that the analysis used by the rule is inappropriate for the particular context. You can suppress individual warnings so that they no longer appear in the Code Analysis window. Two options are available (see the sketch after the Q & A section below):
         - In Source inserts a SuppressMessage attribute in the source file above the method that generated the warning. This makes the suppression more discoverable.
         - In Suppression File adds a SuppressMessage attribute to the GlobalSuppressions.cs file of the project. This can make the management of suppressions easier. Note that the SuppressMessage attribute added to GlobalSuppressions.cs still targets the method that generated the warning; it does not suppress the warning globally.

    Visual Studio makes it very easy to fix a code analysis warning: if you are not sure how to fix it, click the Check ID hyperlink and you will be directed to MSDN (online or your local copy, depending on the configuration you chose while installing Visual Studio), where you will find all the information about the warning, including how to fix it.

    Create a custom Code Analysis rule set

    The Microsoft standard rule sets provide groups of rules that are organized by function and depth. For example, the Microsoft Basic Design Guidelines Rules and the Microsoft Extended Design Guidelines Rules contain rules that focus on usability and maintainability issues, with added emphasis on naming rules in the Extended rule set. You can create and modify a custom rule set to meet specific project needs associated with code analysis. To create a custom rule set, you open one or more standard rule sets in the rule set editor. Creating and modifying a custom rule set requires Visual Studio Premium or Ultimate; check How to: Create a Custom Rule Set on MSDN for more details: http://msdn.microsoft.com/en-us/library/dd264974.aspx

    Q & A

    - Visual Studio static code analysis vs. FxCop vs. StyleCop: http://www.excella.com/blog/stylecop-vs-fxcop-difference-between-code-analysis-tools/
    - Code Analysis for SharePoint apps and SPDisposeCheck? This post lists some of the rule sets you can run specifically for SharePoint applications and how to integrate SPDisposeCheck as well.
    - Code Analysis for SQL Server database projects? This post illustrates how to run static code analysis on T-SQL through SSDT.
    - ReSharper 8 vs. Visual Studio 2013? This document lists some of the features that are provided by ReSharper 8 but are missing or not as fully implemented in Visual Studio 2013.
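    As a quick, hedged illustration of the two suppression forms described above, here is a C# sketch; the rule is a real one (CA1024), but the type, member, and justification are only examples:

    // MyType.cs — "In Source" suppression, placed above the offending member:
    using System.Diagnostics.CodeAnalysis;

    namespace MyNamespace
    {
        public class MyType
        {
            [SuppressMessage("Microsoft.Design",                     // Category
                "CA1024:UsePropertiesWhereAppropriate",              // CheckId:RuleName
                Justification = "Computation is expensive; a method signals that.")]
            public int GetValue()
            {
                return 42;  // placeholder body
            }
        }
    }

    // GlobalSuppressions.cs — the "In Suppression File" form of the same
    // suppression, targeting the member via Scope/Target (shown commented
    // out because it lives in a separate file):
    // [assembly: System.Diagnostics.CodeAnalysis.SuppressMessage(
    //     "Microsoft.Design", "CA1024:UsePropertiesWhereAppropriate",
    //     Scope = "member",
    //     Target = "MyNamespace.MyType.#GetValue()",
    //     Justification = "Computation is expensive; a method signals that.")]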
    References

    - A Few Billion Lines of Code Later: Using Static Analysis to Find Bugs in the Real World — http://cacm.acm.org/magazines/2010/2/69354-a-few-billion-lines-of-code-later/fulltext
    - What is New in Code Analysis for Visual Studio 2013 — http://blogs.msdn.com/b/visualstudioalm/archive/2013/07/03/what-is-new-in-code-analysis-for-visual-studio-2013.aspx
    - Analyze the code quality of Windows Store apps using Visual Studio static code analysis — http://msdn.microsoft.com/en-us/library/windows/apps/hh441471.aspx
    - [Hands-on lab] Using Code Analysis with Visual Studio 2012 to Improve Code Quality — http://download.microsoft.com/download/A/9/2/A9253B14-5F23-4BC8-9C7E-F5199DB5F831/Using%20Code%20Analysis%20with%20Visual%20Studio%202012%20to%20Improve%20Code%20Quality.docx

    Originally posted at "Hosam Kamel | Developer & Platform Evangelist" http://blogs.msdn.com/hkamel

    Read the article

  • How can I promote clean coding at my workplace?

    - by Michael
    I work with a lot of legacy Java and RPG code on an internal company application. As you might expect, a lot of the code is written in many different styles and is often difficult to read because of poorly named variables, inconsistent formatting, and contradictory comments (if they're there at all). Also, a good amount of the code is not robust. Many times code is pushed to production quickly by the more experienced programmers, while code by newer programmers is held back by "code reviews" that IMO are unsatisfactory. (They usually take the form of "It works, so it must be OK," rather than a serious critique of the code.) We have a fair number of production issues, which I feel could be lessened by giving more thought to the original design and testing. I have been working for this company for about 4 months and have been complimented on my coding style a couple of times. My manager is also a fan of cleaner coding than is the norm. Is it my place to try to push for better style and better defensive coding, or should I simply code in the best way I can, and hope that my example will help others see how cleaner, more robust code (as well as aggressive refactoring) results in less debugging and change time?

    Read the article

< Previous Page | 1 2 3 4 5 6 7 8 9 10 11 12  | Next Page >