Search Results

  • Microsoft Declares the Future of ASP.NET is Web API

    - by sbwalker
    Sitting on a plane on my way home from Tech Ed 2012 in Orlando, I thought it would be a good time to jot down some key takeaways from this year’s conference. Some of these items I have known since the Microsoft MVP Summit which occurred in Redmond in late February (but due to NDA restrictions I could not share them with the developer community at large) and some of them are a result of insightful conversations with a wide variety of industry insiders and Microsoft employees at the conference.

    First, let’s travel back in time 4 years to the Microsoft MVP Summit in 2008. Microsoft was facing some heat from market newcomer Ruby on Rails and responded with a new web development framework of its own, ASP.NET MVC. At the Summit they estimated that MVC would only be applicable for ~10% of all new web development projects. Based on that prediction I questioned why they were investing such considerable resources in such a relative edge case, but my guess is that they felt it was an important edge case at the time, as some of the more vocal .NET evangelists as well as some very high profile start-ups (i.e. Twitter) had publicly announced their intent to use Rails.

    Microsoft made a lot of noise about MVC. In fact, they focused so much of their messaging and marketing hype around MVC that it appeared that WebForms was essentially dead. Yes, it may have been true that Microsoft continued to invest in WebForms, but from an outside perspective it really appeared that MVC was the only framework getting any real attention. As a result, MVC started to gain market share. An inside source at Microsoft told me that MVC usage has grown at a rate of about 5% per year and now sits at ~30%. Essentially, by focusing so much marketing effort on MVC, Microsoft actually created a larger market demand for it. This is because in the Microsoft ecosystem there is somewhat of a bandwagon mentality amongst developers. If Microsoft spends a lot of time talking about a specific technology, developers get the perception that it must be really important. So rather than choosing the right tool for the job, they often choose the tool with the most marketing hype and then try to sell it to the customer.

    In 2010, I blogged about the fact that MVC did not make any business sense for the DotNetNuke platform. This was because our ecosystem relied on third party extensions which were dependent on the WebForms model. If we migrated the core to MVC it would mean that all of the third party extensions would no longer be compatible, which would be an irresponsible business decision for us to make at the expense of our users and customers. However, this did not stop the debate from continuing to occur in our ecosystem. Clearly some developers had drunk Microsoft’s Kool-Aid about MVC and were of the mindset, to paraphrase an old Scottish saying, “If it’s not MVC, it’s crap”. Now, this is a rather ignorant position to take, as most of the benefits of MVC can be achieved in WebForms with solid architecture and responsible coding practices. Clean separation of concerns, unit testing, and direct control over page output are all possible in the WebForms model – it just requires diligence and discipline.

    So over the past few years some horror stories have begun to bubble to the surface of software development projects focused on ground-up rewrites of web applications for the sole purpose of migrating from WebForms to MVC. These large scale rewrites were typically initiated by engineering teams with only a single argument driving the business decision: that Microsoft was promoting MVC as “the future”. These ill-fated rewrites offered no benefit to end users or customers and in fact resulted in less stable, less scalable and more complicated systems – basically taking one step forward and two full steps back. A case in point is the announcement earlier this week that a popular open source .NET CMS provider has decided to pull the plug on their new MVC product, which has been under active development for more than 18 months, and revert back to WebForms.

    The availability of multiple server-side development models has deeply fragmented the Microsoft developer community. Some folks like to compare it to the age-old VB vs. C# language debate. However, the VB vs. C# language debate was ultimately more of a religious war because at least the two dominant programming languages were compatible with one another and could be used interchangeably. The issue with WebForms vs. MVC is much more challenging. This is because the messaging from Microsoft has positioned the two solutions as being incompatible with one another, and as a result web developers feel like they are forced to choose one path or another. Yes, it is true that it has always been technically possible to use WebForms and MVC in the same project, but the tooling support has always made this feel “dirty”. The fragmentation has also made it difficult to attract newcomers, as the perceived barrier to entry for learning ASP.NET has become higher. As a result many new software developers entering the market are gravitating to environments where the development model seems simpler and more intuitive (i.e. PHP or Ruby).

    At the same time that the Web Platform team was busy promoting ASP.NET MVC, the Microsoft Office team has been promoting SharePoint as a platform for building internal enterprise web applications. SharePoint has great penetration in the enterprise and over time has been enhanced with improved extensibility capabilities for software developers. But, like many other mature enterprise ASP.NET web applications, it is built on the WebForms development model. Similar to DotNetNuke, SharePoint leverages a rich third party ecosystem for both generic web controls and more specialized WebParts – both of which rely on WebForms. So basically this resulted in a situation where the Web Platform group had headed off in one direction and the Office team had gone in another direction, and the end customer was stuck in the middle trying to figure out what to do with their existing investments in Microsoft technology. It really emphasized the perception that the left hand was not speaking to the right hand, as strategically speaking there did not seem to be any high level plan from Microsoft to ensure consistency and continuity across the different product lines.

    The introduction of ASP.NET MVC also made some of the third party control vendors scratch their heads and wonder what the heck Microsoft was thinking. The original value proposition of ASP.NET over Classic ASP was the ability for web developers to emulate the highly productive desktop development model by using abstract components for creating rich, interactive web interfaces. Web control vendors like Telerik, Infragistics, DevExpress, and ComponentArt had all built sizable businesses offering powerful user interface components to WebForms developers. And even after MVC was introduced, these vendors continued to improve their products, offering greater productivity and a superior user experience via AJAX compared to what was possible in MVC. And since many developers were comfortable and satisfied with these third party solutions, the demand remained strong and the third party web control market continued to prosper despite the availability of MVC.

    While all of this was going on in the Microsoft ecosystem, there has also been a fundamental shift in the general software development industry. Driven by the explosion of Internet-enabled devices, the focus has now centered on service-oriented architecture (SOA). Service-oriented architecture is all about defining a public API for your product that any client can consume, whether it’s a native application running on a smart phone or tablet, a web browser taking advantage of HTML5 and JavaScript, or a rich desktop application running on a PC. REST-based services, which utilize the less verbose characteristics of JSON as a transport mechanism, have become the preferred approach over older, more bloated SOAP-based techniques. SOA also has the benefit of producing a cross-platform API, as every major technology stack is able to interact with standard REST-based web services. And for web applications, more and more developers are turning to robust JavaScript libraries like jQuery and Knockout for browser-based client-side development techniques for calling web services and rendering content to end users. In fact, traditional server-side page rendering has largely fallen out of favor, resulting in decreased demand for server-side frameworks like Ruby on Rails, WebForms, and (gasp) MVC.

    In response to these new industry trends, Microsoft did what it always does – it immediately poured some resources into developing a solution which will ensure they remain relevant and competitive in the web space. This work culminated in a new framework which was branded as Web API. It is convention-based and designed to embrace native HTTP standards without copious layers of abstraction. This framework is designed to be the ultimate replacement for both the REST aspects of WCF and ASP.NET MVC Web Services. And since it was developed out of band with a dependency only on ASP.NET 4.0, it means that it can be used immediately in a variety of production scenarios.

    So at Tech Ed 2012 it was made abundantly clear in numerous sessions that Microsoft views Web API as the “Future of ASP.NET”. In fact, one Microsoft PM even went as far as to say that if we look 3-4 years into the future, all ASP.NET web applications will be developed using the Web API approach. This is a fairly bold prediction and clearly telegraphs where Microsoft plans to allocate its resources going forward. Currently Web API is being delivered as part of the MVC4 package, but this is only temporary for the sake of convenience. It also sounds like there are still internal discussions going on in terms of how to brand the various aspects of ASP.NET going forward – perhaps the moniker of “ASP.NET Web Stack”, coined a couple years ago by Scott Hanselman and utilized as part of the open source release of ASP.NET bits on CodePlex a few months back, will eventually stick. Web API is being positioned as the unification of ASP.NET – the glue that is able to pull this fragmented mess back together again. The “One ASP.NET” strategy will promote the use of all frameworks - WebForms, MVC, and Web API - even within the same web project.

    Basically the message is to utilize the appropriate aspects of each framework to solve your business problems. Instead of steering developers to a fork in the road, the plan is to educate them that “hybrid” applications are a great strategy for delivering solutions to customers. In addition, the service-oriented approach coupled with the client-side development promoted by Web API can effectively be used in both WebForms and MVC applications. So this means it is also relevant to application platforms like DotNetNuke and SharePoint, which means that it starts to create a unified development strategy across all ASP.NET product lines once again.

    And so what about MVC? There have actually been rumors floated that MVC has reached a stage of maturity where, similar to WebForms, it will be treated more as a maintenance product line going forward (MVC4 may in fact be the last significant iteration of this framework). This may sound alarming to some folks who have recently adopted MVC, but it really shouldn’t, as both WebForms and MVC will continue to play a vital role in delivering solutions to customers. They will just not be the primary area where Microsoft is spending the majority of its R&D resources. That distinction will obviously go to Web API. And when the question comes up of why not enhance MVC to make it work with Web API, you must take a step back and look at this from a higher level to see that it really makes no sense. MVC is a server-side page compositing framework, whereas Web API promotes client-side page compositing with a heavy focus on web services. Making MVC work well with Web API would require a complete rewrite of MVC, and at the end of the day there would be no upgrade path for existing MVC applications. So it really does not make much business sense.

    So what does this have to do with DotNetNuke? Well, around 8-12 months ago we recognized the software industry trends towards web services and client-side development. We decided to utilize a “hybrid” model which would provide compatibility for existing modules while at the same time providing a bridge for developers who wanted to utilize more modern web techniques. Customers who like the productivity and familiarity of WebForms can continue to build custom modules using the traditional approach. However, in DotNetNuke 6.2 we also introduced a new Services Framework which is actually built on top of MVC2 (we chose to leverage MVC because it had the most intuitive, light-weight REST implementation in the .NET stack). The Services Framework allowed us to build some rich interactive features in DotNetNuke 6.2, including the Messaging and Notification Center and Activity Feed. But based on where we know Microsoft is heading, it makes sense for the next major version of DotNetNuke (which is expected to be released in Q4 2012) to migrate from MVC2 to Web API. This will likely result in some breaking changes in the Services Framework, but we feel it is the best approach for ensuring the platform remains highly modern and relevant. The fact that our development strategy is perfectly aligned with the “One ASP.NET” strategy from Microsoft means that our customers and developer community can be confident in their current and future investments in the DotNetNuke platform.
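
    To make the convention-based Web API model concrete, here is a minimal sketch of the kind of controller described above. The ExamsController and Exam class names are hypothetical, and the routing assumes the default api/{controller}/{id} template registered by the MVC4-era Web API project templates; treat it as an illustration of the approach rather than DotNetNuke's actual Services Framework code.

        using System.Collections.Generic;
        using System.Linq;
        using System.Net;
        using System.Web.Http;

        // Hypothetical model class used only for illustration.
        public class Exam
        {
            public int Id { get; set; }
            public string Name { get; set; }
        }

        // Convention-based Web API controller: HTTP verbs map to method names,
        // and results are serialized as JSON or XML via content negotiation.
        public class ExamsController : ApiController
        {
            private static readonly List<Exam> Exams = new List<Exam>
            {
                new Exam { Id = 1, Name = "Glucose" },
                new Exam { Id = 2, Name = "Hemoglobin" }
            };

            // GET api/exams
            public IEnumerable<Exam> GetAllExams()
            {
                return Exams;
            }

            // GET api/exams/1
            public Exam GetExam(int id)
            {
                var exam = Exams.FirstOrDefault(e => e.Id == id);
                if (exam == null)
                {
                    throw new HttpResponseException(HttpStatusCode.NotFound);
                }
                return exam;
            }
        }

    Any client that can issue an HTTP GET and parse JSON (a jQuery page, a mobile app, another server) can consume this, which is the cross-platform point the post is making.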


  • How to unit test asynchronous APIs?

    - by Ben Clayton
    Hi all. I have installed Google Toolbox for Mac (http://code.google.com/p/google-toolbox-for-mac/) into Xcode and followed the instructions to set up unit testing found here (http://code.google.com/p/google-toolbox-for-mac/wiki/iPhoneUnitTesting). It all works great, and I can test my synchronous methods on all my objects absolutely fine. However, most of the complex APIs I actually want to test return results asynchronously by calling a method on a delegate - for example, a call to a file download and update system will return immediately and then run a -fileDownloadDidComplete: method when the file finishes downloading. How would I test this as a unit test? It seems like I'd want the testDownload function, or at least the test framework, to 'wait' for the -fileDownloadDidComplete: method to run. Any ideas much appreciated!
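
    The question is about Objective-C and the Google Toolbox for Mac, but the underlying pattern is language-agnostic: block the test until the delegate/callback fires or a timeout expires. Purely to illustrate that pattern, here is a small C# sketch using a ManualResetEvent; the FileDownloader class and its DownloadCompleted event are hypothetical stand-ins for the asynchronous API under test.

        using System;
        using System.Threading;

        // Hypothetical asynchronous API: raises DownloadCompleted from a worker thread.
        public class FileDownloader
        {
            public event EventHandler DownloadCompleted;

            public void BeginDownload(string url)
            {
                ThreadPool.QueueUserWorkItem(_ =>
                {
                    Thread.Sleep(500); // simulate network work
                    var handler = DownloadCompleted;
                    if (handler != null) handler(this, EventArgs.Empty);
                });
            }
        }

        public static class FileDownloaderTests
        {
            // The "unit test": wait for the callback, but never longer than the timeout.
            public static void TestDownloadCompletesWithinTenSeconds()
            {
                var downloader = new FileDownloader();
                var completed = new ManualResetEvent(false);

                downloader.DownloadCompleted += (s, e) => completed.Set();
                downloader.BeginDownload("http://example.com/file.bin");

                if (!completed.WaitOne(TimeSpan.FromSeconds(10)))
                {
                    throw new Exception("DownloadCompleted was not raised within 10 seconds.");
                }
            }
        }

    In the Cocoa world the equivalent is usually to spin the run loop until the delegate method has been called or the timeout is hit, then assert on the recorded result.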


  • How to detect browser's protocol handlers?

    - by CJCraft.com
    I have created a custom URL protocol handler (along the lines of http://, mailto://, custom://) and registered a WinForms application to respond to it. This all works great. But I would like to be able to gracefully handle the case where the user doesn't have the custom URL protocol handler installed yet. In order to do this I need to be able to detect the browser's registered protocol handlers, I would assume from JavaScript, but I have been unable to find a way to poll for that information. I am hoping to find a solution to this problem. Thanks for any ideas you might be able to share.


  • Can I use a MIME type declaration to get an HTML file to open in MS Word?

    - by Toph
    Bill James wrote: I was able to render an HTML page with the MIME type set to "application/msword", which caused the browser to spawn Word which imported the html just fine, allowing edits and saving just as if I'd output a real Word doc. That sounds great to me, but I haven't been able to get it to work in any browser (Chrome/FF/Safari/Opera/IE on Win7 running Word 2010 beta). I tried changing the MIME type in the HTTP headers of several pages via Tamper Data to application/msword, and I tried using the http-equiv meta tag <meta http-equiv="Content-type" content="application/msword"> on a local HTML file I tried opening from the browser, but neither appeared to have any effect. I don't really have a clue with regard to HTTP headers and MIME types generally, so - any tips? Many thanks!
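
    For reference, if you control the server side (this sketch assumes an ASP.NET site, which the question does not actually specify), the header-based approach looks something like the following generic handler; the HTML string is a placeholder.

        using System.Web;

        // Returns an HTML fragment with a Word MIME type so the browser hands it to MS Word.
        public class WordExportHandler : IHttpHandler
        {
            public void ProcessRequest(HttpContext context)
            {
                context.Response.ContentType = "application/msword";
                // "inline" asks the browser to open it in place; "attachment" would force a download prompt.
                context.Response.AddHeader("Content-Disposition", "inline; filename=report.doc");
                context.Response.Write("<html><body><h1>Quarterly Report</h1><p>Hello, Word.</p></body></html>");
            }

            public bool IsReusable
            {
                get { return true; }
            }
        }

    Changing the MIME type of a file opened from the local disk (as in the meta-tag experiment above) generally has no effect, because the Content-Type is an HTTP response header, not something the browser reads out of the document itself.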


  • Silverlight Navigation using Mvvm-light(oobe)+MEF?

    - by deliberative assembly
    What is the best approach for navigating between UserControls/Pages (out of browser experience)? I'm fairly new to Silverlight and even newer to the MVVM pattern. How well does the Navigation Framework integrate with the MVVM Light Toolkit? A snippet for general application flow control with the two would be great. The plan was to use the Navigation Framework for general flow, or to use Jeremy Likness's approach to region management (http://csharperimage.jeremylikness.com/search/label/regions) and swap out regions as needed. I've seen a few places mention replacing the visual root, but that sounded like a hack to me. Any advice, snippets, or a nudge in the general direction would be greatly appreciated. Thank you.
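
    One common way to combine the two is to let view models broadcast a navigation request through MVVM Light's Messenger and have the shell view, which owns the navigation Frame, perform the actual navigation. The sketch below assumes a Frame named ContentFrame in MainPage.xaml and a /Views/Details.xaml page; both names are made up for illustration.

        using System;
        using System.Windows.Controls;
        using GalaSoft.MvvmLight;
        using GalaSoft.MvvmLight.Command;
        using GalaSoft.MvvmLight.Messaging;

        // View model: knows nothing about Frames, it only publishes an intent to navigate.
        public class HomeViewModel : ViewModelBase
        {
            public RelayCommand ShowDetailsCommand { get; private set; }

            public HomeViewModel()
            {
                ShowDetailsCommand = new RelayCommand(() =>
                    Messenger.Default.Send(new NotificationMessage("/Views/Details.xaml")));
            }
        }

        // Code-behind of the shell page that hosts the navigation Frame (ContentFrame in XAML).
        public partial class MainPage : UserControl
        {
            public MainPage()
            {
                InitializeComponent();

                // Translate navigation messages into Frame navigation.
                Messenger.Default.Register<NotificationMessage>(this, message =>
                    ContentFrame.Navigate(new Uri(message.Notification, UriKind.Relative)));
            }
        }

    This keeps the view models testable (they only send messages) while the Frame-based Navigation Framework still handles the journal, URI mapping and out-of-browser behaviour.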


  • ADO.NET Entity Framework or ADO.NET

    - by sharru
    I'm starting a new project based on ASP.NET and Windows Server. The application is planned to be pretty big and to serve a large number of clients pulling and updating high-frequency changing data. I have previously created projects with LINQ to SQL or with ADO.NET. My plan for this project is to use VS2010 and the new EF4 framework. It would be great to hear other programmers' opinions about development with Entity Framework: pros and cons from previous experience? Do you think EF4 is ready for production? Should I take the risk or just stick with plain old good ADO.NET?
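
    For a feel of the difference, here is the same read written in both styles. The Quotes table, the QuotesEntities context and the "QuotesDb" connection string are hypothetical; the point is simply LINQ against generated entities versus hand-written SqlCommand code.

        using System;
        using System.Configuration;
        using System.Data.SqlClient;
        using System.Linq;

        public static class QuoteRepository
        {
            // Entity Framework 4: strongly typed LINQ against a generated context.
            public static void PrintRecentQuotesWithEf()
            {
                var since = DateTime.UtcNow.AddMinutes(-5);
                using (var context = new QuotesEntities()) // hypothetical generated context
                {
                    var recent = context.Quotes
                        .Where(q => q.UpdatedOn > since)
                        .OrderByDescending(q => q.UpdatedOn)
                        .ToList();

                    foreach (var quote in recent)
                        Console.WriteLine("{0}: {1}", quote.Symbol, quote.Price);
                }
            }

            // Plain ADO.NET: more code, but full control over the SQL that is executed.
            public static void PrintRecentQuotesWithAdoNet()
            {
                var connectionString = ConfigurationManager.ConnectionStrings["QuotesDb"].ConnectionString;
                using (var connection = new SqlConnection(connectionString))
                using (var command = new SqlCommand(
                    "SELECT Symbol, Price FROM Quotes WHERE UpdatedOn > @since ORDER BY UpdatedOn DESC", connection))
                {
                    command.Parameters.AddWithValue("@since", DateTime.UtcNow.AddMinutes(-5));
                    connection.Open();
                    using (var reader = command.ExecuteReader())
                    {
                        while (reader.Read())
                            Console.WriteLine("{0}: {1}", reader.GetString(0), reader.GetDecimal(1));
                    }
                }
            }
        }

    For a high-frequency, read-heavy workload the hand-tuned ADO.NET path gives the most control over the generated SQL, while EF4 trades some of that control for far less plumbing code; many teams mix the two.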


  • Should functional programming be taught before imperative programming?

    - by Zifre
    It seems to me that functional programming is a great thing. It eliminates state and makes it much easier to automatically make code run in parallel. Many programmers who were first taught imperative programming styles find it very difficult to learn functional programming, because it is so different. I began to wonder if programmers who were taught functional programming first would find it hard to begin imperative programming. It seems like it would not be as hard as the other way around, so I thought it would be a good thing if more programmers were taught functional programming first. So, my question is, should functional programming be taught in school before imperative, and if so, why is it not more common to start with it?


  • Colour Name to RGB/Hex/HSL/HSV etc

    - by Abs
    Hello all, I have come across this great function/command for converting a colour name to RGB. You can do this:

        col2rgb("peachpuff")  // returns hex

    It will return one hex value. I want to extend this using Perl, Python or PHP, but I want to be able to pass in, for example, "yellow" and have the function return all types of yellows - their hex/rgb/?/etc values. I already have a quick solution implemented, which involves mapping colour names to hex values, but now I want to get more precise and use some formulas etc. to determine what's what. However, as usual, I don't have a clue on how to do this! So I appreciate any implementation advice on how to do this. Thanks all
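
    The question asks for Perl, Python or PHP, but to illustrate one way to go beyond a fixed name-to-hex map, here is a C# sketch: it takes a base colour name, then returns every named colour whose hue is within a small tolerance, along with its hex and RGB values. The 15-degree tolerance and the saturation filter are arbitrary choices.

        using System;
        using System.Drawing;
        using System.Linq;

        public static class ColourFamilies
        {
            // Prints named colours whose hue is close to the hue of the given base colour.
            public static void PrintFamily(string baseName, float hueToleranceDegrees = 15f)
            {
                var baseColour = Color.FromName(baseName);
                var baseHue = baseColour.GetHue();

                var family = Enum.GetValues(typeof(KnownColor))
                    .Cast<KnownColor>()
                    .Select(Color.FromKnownColor)
                    .Where(c => !c.IsSystemColor && c.GetSaturation() > 0.2f)
                    .Where(c => HueDistance(c.GetHue(), baseHue) <= hueToleranceDegrees);

                foreach (var colour in family)
                    Console.WriteLine("{0,-20} {1}  rgb({2},{3},{4})",
                        colour.Name, ColorTranslator.ToHtml(colour), colour.R, colour.G, colour.B);
            }

            // Hue is an angle, so measure distance around the 360-degree colour wheel.
            private static float HueDistance(float a, float b)
            {
                var d = Math.Abs(a - b);
                return Math.Min(d, 360f - d);
            }
        }

    Calling ColourFamilies.PrintFamily("yellow") would list Yellow, Gold, Khaki and similar named colours; the same hue/saturation/lightness comparison can be ported directly to Perl, Python or PHP.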


  • Organising asp.net website development process

    - by ZX12R
    Is there a standard practice for organizing the process of developing a simple website? There is no use implementing MVC as there is no database involved. It would be very useful to organize the project and separate:

    - the aspx files and master page content (this can be very useful in implementing simple CMS techniques)
    - user controls
    - scripts
    - styles
    - images

    Is there any industry standard or best practice for this? Thanks in advance :) Update: yes, the way I have listed is convenient, but it would be great if I could separate server code and files (like master, aspx...) from the actual page content. One more reason for not using MVC: I usually outsource the SEO process. Now an MVC application can be greek/latin for my SEO expert. :)


  • Flex: Dynamically create a preview image for a video....

    - by onekidney
    I'm using the VideoDisplay to play flv's, mov's, and mp4's and everything is working great. They are all being loaded via progressive download and are not being streamed. What I'd like to do is grab a single specified frame (like whatever is being shown at the 10 second mark), convert it to a bitmap and use that bitmap as the preview image for the video. I'd like to do this at runtime so I don't have to create a preview image for every video that would be shown. Any ideas on how to do this? I'd rather not fake it by playing the video - seeking to that specific frame and then pausing it - but I may have no other choice?


  • JPA ManyToMany problem...

    - by Parhs
    Hello, I have an Entity Exam:

        @Id
        @GeneratedValue(strategy=GenerationType.TABLE)
        private Long id;
        .....
        ......
        @ManyToMany
        private List<Exam> exams;

        @ManyToMany(mappedBy="id")
        private List<Exam> parentExams;

    The problem is that

        @ManyToMany
        private List<Exam> exams;

    is OK and works great, but this doesn't validate:

        @ManyToMany(mappedBy="id")
        private List<Exam> parentExams;

    Any idea what to do? I just want to get the exams that are parents of the current exam.


  • x264 IDR access unit with a SPS and a PPS

    - by Gcoop
    Hi All, I am trying to encode video in h.264 so that, when split with Apple's HTTP Live Streaming tools media file segmenter, it will pass the media file validator. I am getting two errors on the split MPEG-TS file:

        WARNING: Media segment contains a video track but does not contain any IDR access unit with a SPS and a PPS.
        WARNING: 7 samples (17.073 %) do not have timestamps in track 257 (avc1).

    After hours of research I think the "IDR" warning relates to not having keyframes in the right place in the segmented MPEG-TS file, so in my ffmpeg command I set -keyint_min 1 to ensure keyframes were at every frame, but this didn't work. Although it would be great to get an answer, if anyone can shed any light on what an "IDR access unit with a SPS and a PPS" is, or what the timestamps warning means, I would be very grateful, thanks.


  • Tracking Google Analytics events with server side request automation

    - by Esko
    I'm currently in the process of programming a utility which generates GA tracking pixel (utm.gif) URLs based on given parameters. For those of you who are wondering why I'm doing this on the server side: I need to do this server side since the context in which I'm going to start tracking simply doesn't support JavaScript, and as such ga.js is completely useless to me. I have managed to get it working otherwise quite nicely, but I've hit a snag: I can't track events or custom variables because I have no idea how exactly the utme parameter's value should be structured to form a valid event or var type hit. GA's own documentation on this parameter isn't exactly that great, either. I've tried everything from Googling without finding anything (which I find ironic) to reverse engineering ga.js; unfortunately it's minified and quite unreadable because of that. The "mobile" version of GA didn't help either, since officially GA mobile doesn't support events or vars. To summarize: what is the format of the utme parameter for the page hit types event and custom variable?


  • Hashing words to numbers with respect to definition

    - by thornate
    As part of a larger project, I need to read in text and represent each word as a number. For example, if the program reads in "Every good boy deserves fruit", then I would get a table that converts 'every' to '1742', 'good' to '977513', etc. Now, obviously I can just use a hashing algorithm to get these numbers. However, it would be more useful if words with similar meanings had numerical values close to each other, so that 'good' becomes '6827' and 'great' becomes '6835', etc. As another option, instead of a simple integer representing each word, it would be even better to have a vector made up of multiple numbers, e.g. (lexical_category, tense, classification, specific_word), where lexical_category is noun/verb/adjective/etc, tense is future/past/present, classification defines a wide set of general topics and specific_word is much the same as described in the previous paragraph. Does any such algorithm exist? If not, can you give me any tips on how to get started on developing one myself? I code in C++.
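
    Where the linguistic data comes from (e.g. a WordNet-style lexicon) is a separate problem, but as a small illustration of the vector idea described above, the four components can be packed into a single 64-bit number so that words sharing a category and classification end up numerically close. The field widths below are arbitrary, and since the question mentions C++, treat this C# sketch as pseudocode for the layout.

        public enum LexicalCategory : byte { Noun = 0, Verb = 1, Adjective = 2, Adverb = 3 }
        public enum Tense : byte { None = 0, Past = 1, Present = 2, Future = 3 }

        public static class WordCodec
        {
            // Pack (category, tense, classification, word index) into one 64-bit value.
            // Putting the coarse features in the high bits keeps related words numerically close.
            public static long Encode(LexicalCategory category, Tense tense, ushort classification, uint wordIndex)
            {
                return ((long)category << 56)
                     | ((long)tense << 48)
                     | ((long)classification << 32)
                     | wordIndex;
            }

            // Recover the individual components from a packed value.
            public static void Decode(long code, out LexicalCategory category, out Tense tense,
                                      out ushort classification, out uint wordIndex)
            {
                category = (LexicalCategory)(byte)(code >> 56);
                tense = (Tense)(byte)(code >> 48);
                classification = (ushort)(code >> 32);
                wordIndex = (uint)code;
            }
        }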


  • Freeware Programmer Calculator

    - by AdamC
    There have been lots of times in the past when a good programmer-oriented calculator would've saved me a lot of time. Lately, I've been doing quite a lot of bit manipulation, and having to do build/run to debug my calculations feels really slow. I've looked for something like this in the past, but found nothing that worked very well. The only thing that comes close is this one from AnalogX, but I can't get it to work or really do anything on my Vista box, which is where I'm doing most of my work at the moment. (btw - please send comments about my Vista usage here ;) Anyway, I'm looking for something for simple calculations using a C-like syntax with support for proper precedence, operators, etc. Bonus points for cross-platform. The Python interpreter was a great idea and is totally cross-platform. For Windows only, SpeQ is amazing. Thanks for the suggestions.


  • Best Practise for Writing a POS System

    - by Gary
    Hi, I'm putting together a basic POS system in C# that needs to print to a receipt printer and open a cash drawer. Do I have to use the Microsoft Point of Service SDK? I've been playing around with printing to my Samsung printer using the Windows driver that came with it, and it seems to work great. I assume though that other printers may not come with Windows drivers, and then I would be stuck? Or would I be able to simply use the Generic/Text Only driver to print to any printer that supports it? For the cash drawer I would need to send codes directly to the COM port, which is fine with me if it saves me the hassle of helping clients set up OPOS drivers on their systems. Am I going down the wrong path here? Thanks, Gary
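
    For the cash drawer part, sending the "kick" pulse over a serial port needs nothing beyond System.IO.Ports. The sketch below uses the common ESC/POS drawer-kick sequence (ESC p), but the exact control codes, port name and baud rate vary by drawer and printer, so check the hardware documentation before relying on them.

        using System.IO.Ports;

        public static class CashDrawer
        {
            // ESC p 0 25 250: the ESC/POS "generate pulse" command accepted by many drawers.
            private static readonly byte[] KickSequence = { 0x1B, 0x70, 0x00, 0x19, 0xFA };

            public static void Open(string portName = "COM1")
            {
                using (var port = new SerialPort(portName, 9600, Parity.None, 8, StopBits.One))
                {
                    port.Open();
                    port.Write(KickSequence, 0, KickSequence.Length);
                }
            }
        }

    Drawers that hang off the receipt printer's RJ11 port are kicked by sending the same bytes through the printer instead of a COM port, which is one of the setup details the OPOS/POS for .NET drivers would otherwise abstract away.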


  • VS 2012 Code Review &ndash; Before Check In OR After Check In?

    - by Tarun Arora
    “Is Code Review Important and Effective?”

    There is a consensus across the industry that code review is an effective and practical way to collar code inconsistency and possible defects early in the software development life cycle. Among others, some of the advantages of code reviews are:

    - Bugs are found faster
    - Forces developers to write readable code (code that can be read without explanation or introduction!)
    - Optimization methods/tricks/productive programs spread faster
    - Programmers as specialists "evolve" faster
    - It's fun

    “Code review is systematic examination (often known as peer review) of computer source code. It is intended to find and fix mistakes overlooked in the initial development phase, improving both the overall quality of software and the developers' skills. Reviews are done in various forms such as pair programming, informal walkthroughs, and formal inspections.” – Wikipedia

    Nowhere does the definition mention whether it is better to review code before the code has been committed to version control or after the commit has been performed. No matter which side you favour, Visual Studio 2012 allows you to request a code review both before check in and after check in. Let’s weigh the pros and cons of the approaches independently.

    Code Review Before Check In or Code Review After Check In?

    Approach 1 – Code Review before Check in

    The developer completes the code and feels the code quality is appropriate for check in to TFS. The developer raises a code review request to have a second pair of eyes validate whether the code abides by the recommended best practices, will not result in any defects due to common coding mistakes, and whether any optimizations can be made to improve the code quality.

    Image 1 – code review before check in

    Pros
    - Everything that gets committed to source control is reviewed.
    - Minimizes the chances of smelly code making its way into the code base.
    - Decreases the cost of fixing bugs; remember, the earlier you find them, the lesser the pain in fixing them.

    Cons
    - Development code freeze – since the changes aren’t in source control yet, further development can only be done off-line.
    - The changes have not been through a CI build, so it is hard to say whether the code abides by all build quality standards.
    - Inconsistent! Cumbersome to track the actual code review process.
    - Not every change to the code base is worth reviewing; a lot of effort is invested for very little gain.

    Approach 2 – Code Review after Check in

    The developer checks in, and random code reviews are performed on the checked in code.

    Image 2 – Code review after check in

    Pros
    - The code has already passed the CI build and run through any code analysis plug-ins you may have running on the build server.
    - Instruct the developer to ensure ZERO FxCop, StyleCop and static code analysis issues before check in. Code is cleaner and smell free even before the code review.
    - No offline development; developers can continue to develop against source control.

    Cons
    - Bad code can easily make its way into the code base.
    - Since the review takes place much later in the cycle, the cost of fixing issues can prove to be much higher.

    Approach 3 – Hybrid Approach

    The community advocates a more hybrid approach, a blend of tooling and the human accountability quotient.

    Image 3 – Hybrid Approach

    1. Code review high impact check ins. It is not possible to review everything, and by setting up code review check in policies you can end up slowing your team down. Moreover, the code that you are reviewing before check in hasn't even been through a green CI build either.

    2. Tooling. Let the tooling work for you. By running static analysis, FxCop, StyleCop and other plug-ins on the build agent, you can identify the real issues that in my opinion can't possibly be identified using human reviews. Configure the tooling to report back the top 10 issues every day. Mandate the manual code review of individuals who keep making it to this list of shame most often.

    3. During merge. I would prefer eliminating some of the other code issues during the merge from the Main branch to the release branch. In a scrum project this is still easier because cherry picking the merges is a possibility and the size of code being reviewed is still limited.

    Let the tooling work for you. If someone breaks the CI build often, put them on a gated check in build course until you see improvement. If someone appears on the top 10 list of shame generated via the build, then ensure that all their code is reviewed till you see improvement. At the end of the day, the goal is to ensure that the code being delivered is top quality. By enforcing a code review before any check in, you force the developer to work offline or stay put till the review is complete.

    What do the experts say?

    So I asked a few experts what they thought of a "Code review quality gate before checking in code?"

    Terje Sandstrom | Microsoft ALM MVP
    You mean a review quality gate BEFORE checking in code????? That would mean a lot of code staying either local or in shelvesets, and not even been through a CI build, and a green CI build being the main criteria for going further, f.e. to the review state. I would not like code laying around with no checkins. Having a requirement that code is checked in small pieces, 4-8 hours work max, and AT LEAST daily checkins, a manual code review comes second down the lane. I would expect review quality gates to happen before merging back to main, or before merging to release. But that would all be on checked-in code. Branching is absolutely one way to ease the pain. Another way we are using is automatic quality builds, running metrics, coverage, static code analysis. Unfortunately it takes some time, would be great to be on CI's – but…, so it's done scheduled every night. Based on this we get, among other stuff, top 10 lists of suspicious code, which is then subjected to reviews. If a person seems to be very popular on these top 10 lists, we subject every check in from that person to a review for a period. That normally helps. None of the clients I have can afford to have every checkin reviewed, so we need to find ways around it. I don't disagree with the nicety of having all the code reviewed, but I find it hard to find those resources in today's enterprises.

    David V. Corbin | Visual Studio ALM Ranger
    I tend to agree with both sides. I hate having code that is not checked in, but at the same time hate having "bad" code in the repository. I have found that branching is one approach to solving this dilemma. Code is checked into the private/feature branch before the review, but is not merged over to the "official" branch until after the review. I advocate both, depending on circumstance (especially team dynamics).
    - The "pre-checkin" is usually for elements that may impact the project as a whole. Think of it as another "gate" along with passing unit tests.
    - The "post-checkin" may very well not be at the changeset level, but correlates to a review at the "user story" level.
    Again, this depends on the team dynamics in play.

    Robert MacLean | Microsoft ALM MVP
    I do not think there is a right answer for the industry as a whole. In short the question is why do you do reviews? Your question implies risk mitigation, so in low risk areas you can get away with it after check in while in high risk you need to do it before check in. An example is those new to a team or juniors need it much earlier (maybe that is before checkin, maybe that is soon after) than seniors who have shipped twenty sprints on the team.

    Abhimanyu Singhal | Visual Studio ALM Ranger
    Depends on a per scenario basis. We recommend post check-in reviews when:
    1. We don't want to block other checks and processes on manual code reviews. Manual reviews take time, and some pieces may not require manual reviews at all.
    2. We need to trace all changes and track history.
    3. We have a code promotion strategy/process in place. For risk mitigation, post checkin code can be promoted to Accepted branches, or can be rejected.
    Pre checkin reviews are used when:
    1. There is a high risk factor associated.
    2. Reviewers generally (most of the time) have immediate availability.
    3. The team does not have strict tracking needs.
    Simply speaking, no single process fits all scenarios. You need to select what works best for your team/project.

    Thomas Schissler | Visual Studio ALM Ranger
    This is an interesting discussion; I'm right now discussing details about executing code reviews with my teams. I see and understand the aspects you brought in, but there is another side as well I'd like to point out.
    1.) If you do reviews per check in, this is not very practical as a hard rule because it will disturb the flow of the team very often, or it will lead to reducing the checkin frequency of the devs, which I would not accept.
    2.) If you do later reviews, for example if you review PBIs, it is not easy to find out which code you should review. Either you review all changesets associated with the PBI, but then you might review code which has been changed with a later checkin and the dev maybe has already fixed the issue. Or you review the diff of the latest changeset of the PBI with the first, but then you might also review changes of other PBIs.

    Jakob Leander | Sr. Director, Avanade
    In my experience, manual code review:
    1. Does not get done and at the very least does not get redone after changes (regardless of intentions at start of project)
    2. When a project actually does it, they often do not do it right away = errors pile up
    3. Requires a lot of time discussing/defining the standard and for the team to learn it
    However code review is very important since e.g. even small memory leaks in a high volume web solution have big consequences. In the last years I have advocated the following approach for code review:
    - Architects up front do "at least one best practice example" of each type of component and tell the team. Copy from this one. This should include error handling, logging, security etc.
    - Dev lead on project continuously browses code to validate that the best practices are used, especially that patterns etc. are not broken. You can do this formally after each sprint/iteration if you want. Once this is validated it is unlikely to "go bad" even during later code changes.
    - Agree with customer to rely on static code analysis from Visual Studio as the one and only coding standard. This has HUUGE benefits:
    - You can easily tweak to reach the level you desire together with the customer
    - It is easy to measure for both developers/management
    - It is 100% consistent across the code base
    - It gets validated all the time so you never end up getting hammered by a customer review in the end
    - It is easy to tell the developer that you do not want code back unless it has zero errors = minimize communication
    You need to track this at least during nightly builds and make sure the team sees the total # issues. Do not allow #issues to grow uncontrolled. On the project I run I require code analysis to have run on code before checkin (checkin rule). This means:
    - You have to have a clean compile (or CA won't run), so this is an extra benefit = very few broken builds
    - You can change a few of the rules to compile as errors instead of warnings. I often do this for "missing dispose" issues which you REALLY do not want in your app
    Tip: Place your custom CA rules files as part of the solution. That way it works when you do branching etc. (the path to the CA file is relative in VS). Some may argue that CA is not as good as manual inspection. But since manual inspection in reality suffers from the 3 issues at the start, it is IMO a MUCH better (and much cheaper) approach from a helicopter perspective.

    Tirthankar Dutta | Director, Avanade
    I think code review should be run both before and after check ins. There are some code metrics that are meant to be run on the entire codebase … Also, especially on multi-site projects, one should strive to architect in a way that lets men manage the framework while boys write the repetitive code… scales very well with the need to review less by containment and imposing architectural restrictions to emphasise the design.

    Bruno Capuano | Microsoft ALM MVP
    For code reviews (meaning peer reviews) in a distributed team I use http://www.vsanywhere.com/default.aspx

    David Jobling | Global Sr. Director, Avanade
    Peer review is the only way to scale and it's a great practice for all in the team to learn to perform and accept. In my experience you soon learn whose code to watch more than others and tune the attention.

    Mikkel Toudal Kristiansen | Manager, Avanade
    If you have several branches in your code base, you will need to merge often. This requires manual merging when a file has been changed in both branches. It offers a good opportunity to actually review the changed code. So my advice is: merging between branches should be done as often as possible, it should be done by a senior developer, and he/she should perform a full code review of the code being merged. As for detecting architectural smells and code smells creeping into the code base, one really good third party tool exists: NDepend (http://www.ndepend.com/, for static code analysis of the current state of the code base). You could also consider adding StyleCop to the solution.

    Jesse Houwing | Visual Studio ALM Ranger
    I gave a presentation on this subject at the TechDays conference in NL last year. See my presentation and slides here (talk in Dutch, but English presentation): http://blog.jessehouwing.nl/2012/03/did-you-miss-my-techdaysnl-talk-on-code.html
    I'd like to add a few more points:
    - Before/after checkin is mostly a trust issue. If you have a team that does diligent peer reviews and regularly talks/sits together or peer reviews, there's no need to enforce a before-checkin policy. The peer programming and regular feedback during development can take care of most of the review requirements as long as the team isn't under stress.
    - Under stress, enforce pre-checkin reviews. It might sound strange if you're already under time or budgetary constraints, but it is under such conditions that most real issues start to be created or pile up.
    - Use tools to catch the most common errors; Code Analysis/FxCop was already mentioned. HP Fortify, ReSharper, CodeRush etc. can help you there. There are also a lot of 3rd party rules you can add to Code Analysis. I've written a few myself (http://fccopcontrib.codeplex.com) and various teams from Microsoft have added their own rules (MSOCAF for SharePoint, WSSF for WCF). For common errors that keep cropping up, see if you can define a rule. It's much easier. But more importantly make sure you have a good help page explaining *WHY* it's wrong.
    - If you have small feature or developer branches/shelvesets, you might want to review pre-merge. It's still better to do peer reviews and peer programming, but the most important thing is that bad quality code doesn't make it into the important branch.
    So my philosophy:
    - Use tooling as much as possible.
    - Make sure the team understands the tooling and the importance of the things it flags. It's too easy to just click suppress all to ignore the warnings.
    - Under stress, tighten the process; it's under stress that the problems of late reviews will really surface.
    - Most importantly, if you do reviews, do them as early as possible, but never later than needed. In other words, pre-checkin/post-checkin doesn't really matter, as long as the review is done before the code is released. It'll just be much more expensive to fix any review outcomes the later you find them.

    I would love to hear what you think!


  • Loading a Workflow 4 from xaml file and adding it to workflowdesigner

    - by Jimmy Engtröm
    Hi, I have created a couple of activities and stored them as XAML. Opening them in the WorkflowDesigner works great and I can execute them. Now I would like to create a new activity and add the activities I created to it - basically loading them from the XAML and into the designer as part of another activity/flow. I have tried adding my activities to the toolbox, but they render as DynamicActivity and (understandably) do not work. Any suggestions? Is it even possible? /Jimmy
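
    For the runtime side of this, ActivityXamlServices can load the .xaml file back into an Activity (typically a DynamicActivity), which can then be dropped into a new composite and executed. The file paths below are placeholders; getting the loaded activity to show up nicely in the designer toolbox is a separate problem.

        using System.Activities;
        using System.Activities.Statements;
        using System.Activities.XamlIntegration;

        public static class WorkflowComposer
        {
            public static void RunComposite()
            {
                // Load previously saved activities from XAML (returns DynamicActivity instances).
                Activity first = ActivityXamlServices.Load(@"C:\Workflows\FirstActivity.xaml");
                Activity second = ActivityXamlServices.Load(@"C:\Workflows\SecondActivity.xaml");

                // Compose them into a new parent activity.
                var composite = new Sequence
                {
                    Activities =
                    {
                        first,
                        new WriteLine { Text = "Between the two loaded activities" },
                        second
                    }
                };

                // Execute the composed workflow.
                WorkflowInvoker.Invoke(composite);
            }
        }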


  • Rename "Event" object in jQuery FullCalendar plug-in

    - by Jeff
    GREAT PLUGIN!!! BUT... choice of word "Event" to mean a "calendar entry" was particularly unfortunate This is a wonderfully well-written plug in, and I've really impressed people here at work with what this thing can do. The documentation is astonishingly thorough and clear. Congratulations to Adam! HOWEVER, this plug-in refers to entries in the calendar as "Events" -- this has caused a lot of confusion in my development team's conversations, because when we use the word "Event" we think of things like onmouseover, click, etc. We would really prefer a term like CalendarEvent or CalendarEntry. I am not all that experienced with jQuery yet, so am wondering if there is a simple way to alias one of those terms to this plug-in's Event/Events object? (I know we could recode the plug-in directly, but our code will then break when we download an update.) Thanks!


  • Embedded ZXing - what am I missing?

    - by Brian515
    Hi all, Sorry if this has been answered before, but I am trying to make an application that will include the ability to scan barcodes on Android. I'm looking at using ZXing as the library, however, I want to embed the scanner in my application so that the user doesn't have to have the ZXing barcode scanner installed to use my application. From the description of ZXing, it sounds like this is possible. I've gotten as far as building ZXing, linking it into my project in Eclipse, then creating a new reader instance. However, I'm lost when it comes to starting the barcode reader and implementing the callbacks. IMO, this is when the documentation here gets hazy. If someone could explain how to use ZXing properly, that would be of great help. Cheers!


  • jquery error() calls showing up in firebug profile

    - by Aros
    I am working on an ASP.NET application that makes a lot of jQuery and JavaScript calls, and I am trying to optimize the client-side code as much as possible. (This web application is only designed to run on special hardware that has very low memory and processing power.) The profiler in Firebug is great for figuring out which calls are taking up the most time. I have already optimized a lot of my selectors and it is much faster. However, the profile shows a lot of jQuery error() calls. In the attached image of the Firebug profile window you can see it was called 52 times, accounting for 15.4 of the processing time. Is it normal for jQuery to call its error() like that? My code works flawlessly, and there are no error messages in the Firefox error console. It seems like that is a significant performance hit. Is there any way to get more info on what the errors are? Thanks.


  • Best server-side javascript servers.

    - by fmsf
    Hey, I've been wanting to try out server-side JavaScript for a while, and I'm finding a good number of servers, like:

    - Node.js
    - Rhino
    - SpiderMonkey

    among others. Could anyone with experience of server-side JavaScript tell me which are the best engines, and why? I like Node.js because it's based on Google's V8 engine, and it seems easy to use. But some feedback on what you would choose would be great. Edit: Some benchmarks for Node. I'm thinking of going with this one, but feedback is still welcome. Thanks


  • Ruby GUI (non-complex layouts)

    - by Ruby Novice
    I've done quite a bit of research on Ruby GUI design, and it appears to be the one area where Ruby tends to be behind the curve. I've explored the options of MonkeyBars, wxRuby, FXRuby, Shoes, etc. and just wanted to get some input from the Ruby community. While they're definitely usable, the development on each seems to have fallen off. There is not a great deal of useful documentation or much of a user base that I could find for any of them (minus the FXRuby book). I'm just looking to make a simple GUI, so I don't really want to spend hundreds of hours learning the intricacies of the more complex tools or attempt to use something that is no longer even being developed (Shoes is the type of application I'm looking for, but it's extremely buggy and not being actively developed). Out of all of the options, which would you guys recommend as being the quickest to pick up and that still has some sort of development base? Thanks!


  • Can iTextSharp export to JPEG?

    - by SkippyFire
    I need to be able to export the PDFs that I am creating to JPEG, so that users can have a screenshot/thumbnail of the end product, which is faster than opening the whole PDF. I am running this on an ASP.NET website running in Medium Trust in the Rackspace Mosso Cloud. I have yet to find a library that will either work in Medium Trust or, in the case of ABC PDF (which works great locally), load in Mosso. Maybe Mosso has a custom trust level? I know that iTextSharp works on Mosso, but I haven't been able to figure out how to "screenshot" a single page of a PDF, or export a page to JPEG. Is there anyone out there who has done this before?


  • How to provide a fileDownloadName only if the user requests to save the file in ASP.NET MVC?

    - by davekaro
    I've got a controller action that returns a FileResult like this:

        return this.File("file.pdf", "application/pdf");

    for the URL "/Download/322", where 322 is the id of the file. This works great: if a user clicks on a link to the PDF, it will open in their web browser as long as they have a PDF plugin installed. But what if they right-click the link and choose "Save as..."? The browser pops up with the filename as "322". I'd like to have a better filename at this point, by doing something like this:

        return this.File("file.pdf", "application/pdf", "file.pdf");

    But if I change the controller to return like that, then it will always pop up the download box, since MVC is setting the Content-Disposition header to attachment (so I can't embed the file). In summary, can I somehow detect that the user is trying to download the file vs. the file just being embedded in something on the page?
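
    The browser does not tell the server whether the user picked "Save as...", so the same request cannot be distinguished server side. A common workaround is to expose the choice in the URL and only set the download name (and therefore the attachment disposition) when it is asked for. A rough sketch, with GetPdfPath standing in for whatever maps an id to a file in the real application:

        using System.Web.Mvc;

        public class DownloadController : Controller
        {
            // e.g. GET /Download/322           -> opens inline in the browser's PDF plugin
            //      GET /Download/322?save=true -> prompts to save with a friendly file name
            public ActionResult Index(int id, bool save = false)
            {
                var path = GetPdfPath(id); // hypothetical lookup of the file on disk

                return save
                    ? File(path, "application/pdf", "file.pdf")  // sets Content-Disposition: attachment
                    : File(path, "application/pdf");             // no disposition header forced
            }

            private string GetPdfPath(int id)
            {
                // Placeholder: resolve the id to a physical path however the application does it.
                return Server.MapPath("~/App_Data/" + id + ".pdf");
            }
        }

    The embed/iframe markup then points at the plain URL, while the visible "Download" link appends the save flag.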

