Search Results

Search found 13430 results on 538 pages for 'easy'.

  • Is there a visual web application builder or rapid webapp prototyping framework?

    - by Jesper Mortensen
    Question: Is there such a thing as a self-hosted framework or CMS especially tailored towards the creation of interactive web applications without -- or with an absolute minimum of -- programming? (Substantially less programming than, say, a simple Rails app or a plugin for Wordpress, Joomla etc. would require.) As for desired features I'd settle for whatever is available, but some ideas could be: A user authentication and permissions system. A GUI-driven input form builder. A GUI-driven template / visual site design builder. A simple scripting language (think AppleScript-like simplicity). A highly modular architecture, with high-level business objects (users, forms data, etc.) exposed for easy re-use. If something like the above doesn't exist, then what comes near this? Need: This is for self-hosted rapid prototyping of web applications, and limited user testing of webapp user interface designs in a closed user test. Notes: I know about Ruby on Rails (Rails), Django, Pyramid etc. I'm looking for something much faster to work in, for making prototypes. I know about CMSs in general but find that most of them are tailored towards displaying information to the end users. If there is an exceptionally easy-to-master CMS with easy scripting (let's say much more so than, for example, Wordpress) then I'd be interested.

    Read the article

  • Where to put code documentation?

    - by Patrick
    I am currently using two systems to write code documentation (I am using C++): Documentation about methods and class members is added next to the code, using the Doxygen format. On a server, Doxygen is run on the sources so the output can be seen in a web browser. Overview pages (describing a set of classes, the structure of the application, example code, ...) are added to a Wiki. I personally think that this approach is easy because the documentation about members and classes is really close to the code, while the overview pages are really easy to edit in the Wiki (and it's also easy to add images, tables, ...). A web browser allows you to see both sets of documentation. My co-worker now suggests putting everything in Doxygen, because we can then create one big help file with everything in it (using either Microsoft's HTML Help Workshop or Qt Assistant). My concern is that editing Doxygen-style documentation is much harder (compared to the Wiki), especially when you want to add tables, images, ... (or is there a 'preview' tool for Doxygen that doesn't require you to generate the documentation before you can see the result?) What do big open-source (or closed-source) projects use to write their code documentation? Do they also split this up between Doxygen-style and a Wiki? Or do they use another system? What is the most appropriate way to expose the documentation? Via a Web server/browser, or via a big (several 100MB) help file? Which approach do you take when writing code documentation?

    Read the article

  • What is the best way to "carve" a terrain created from a heightmap?

    - by tigrou
    I have a 3D landscape created from a heightmap. I'd like to "carve" some holes in that terrain. That will allow me to create bridges, caverns and tunnels inside it. That operation will be done in the game editor so it doesn't need to be realtime. In the end, rendering is done using traditional polygons. What would be the best/easiest way to do that? I have already thought about several solutions: Solution 1: 1) Create voxels from the heightmap (very easy). In other words, fill a 3D array like this: voxels[32][32][32] from the heightmap values. 2) Carve holes in the voxels as I want (easy too). 3) Convert voxels to polygons using some iso-surface extraction technique (like marching cubes). 4) Reduce (decimate) the polygons created in 3). This technique seems to be the most promising for giving good results (untested). However the problem with marching cubes is that it tends to produce lots of polygons, so reducing them is mandatory. Implementing 4) also seems non-trivial; I have read several papers on the web and it seems pretty complex. I was also unable to find an example, code snippet or something to start writing an algorithm for triangle mesh decimation. Maybe there is a special (simpler) decimation algorithm for meshes created from marching cubes? Solution 2: 1) Create some triangle mesh from the heightmap (easy). 2) Apply several 3D boolean operations (e.g. subtraction with a sphere) to carve the mesh. 3) Apply some procedure to reduce polygons (optional). Operation 2) seems to be very complex and to be honest I have no idea how to do that. Also, applying many boolean operations seems to be slow and may degrade the triangle mesh every time a boolean operation is applied.
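    A rough illustration of step 1) of Solution 1 is sketched below in C#. This is a hypothetical sketch, not code from the question: the grid size, the heightmap accessor and the assumption that height values are already scaled to voxel units are invented for the example.

        static class TerrainCarving
        {
            // Fill a boolean voxel grid from a heightmap: every voxel at or below
            // the sampled height is solid. Assumes height values are already scaled
            // to the vertical resolution of the grid.
            public static bool[,,] HeightmapToVoxels(float[,] heightmap, int gridHeight)
            {
                int sizeX = heightmap.GetLength(0);
                int sizeZ = heightmap.GetLength(1);
                var voxels = new bool[sizeX, gridHeight, sizeZ];

                for (int x = 0; x < sizeX; x++)
                {
                    for (int z = 0; z < sizeZ; z++)
                    {
                        int columnHeight = (int)heightmap[x, z];
                        for (int y = 0; y < gridHeight && y < columnHeight; y++)
                            voxels[x, y, z] = true;
                    }
                }
                return voxels;
            }
        }

        // Step 2) (carving) is then just clearing voxels, e.g. setting
        // voxels[x, y, z] = false for every cell inside a carving sphere,
        // before handing the grid to marching cubes.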

    Read the article

  • Filtering List Data with a jQuery-searchFilter Plugin

    - by Rick Strahl
    When dealing with list-based data on HTML forms, filtering that data down based on a search text expression is an extremely useful feature. We’re used to search boxes on just about anything these days and HTML forms should be no different. In this post I’ll describe how you can easily filter a list down to just the elements that match text typed into a search box. It’s a pretty simple task and it’s super easy to do, but I get a surprising number of comments from developers I work with who are surprised how easy it is to hook up this sort of behavior, so I thought it was worth a blog post. But Angular does that out of the Box, right? These days it seems everybody is raving about Angular and the rich SPA features it provides. One of the cool features of Angular is the ability to do drop-dead simple filters where you can specify a filter expression as part of a looping construct and automatically have that filter applied so that only items that match the filter show. I think Angular has single-handedly elevated search filters to first-rate, front-row status because it’s so easy. I love using Angular myself, but Angular is not a generic solution to problems like this. For one thing, using Angular requires you to render the list data with Angular – if you have data that is server rendered or static, then Angular doesn’t work. Not all applications are client side rendered SPAs – not by a long shot – nor do all applications need to become SPAs. Long story short, it’s pretty easy to achieve text filtering effects using jQuery (or plain JavaScript for that matter) with just a little bit of work. Let’s take a look at an example. Why Filter? Client side filtering is a very useful tool that can make it drastically easier to sift through data displayed in client side lists. In my applications I like to display scrollable lists that contain a reasonably large amount of data, rather than the classic paging style displays which tend to be painful to use. So I often display 50 or so items per ‘page’ and it’s extremely useful to be able to filter this list down. Here’s an example in my Time Trakker application where I can quickly glance at various common views of my time entries. I can see Recent Entries, Unbilled Entries, Open Entries etc. and filter those down by individual customers and so forth. Each of these lists tends to be a few pages’ worth of scrollable content. The following screen shot shows a filtered view of Recent Entries that match the search keyword of CellPage: As you can see in this animated GIF, the filter is applied as you type, displaying only entries that match the text anywhere inside of the text of each of the list items. This is an immediately useful feature for just about any list display and adds significant value. A few lines of jQuery The good news is that this is trivially simple using jQuery. To get an idea of what this looks like, here’s the relevant page layout showing only the search box and the list layout: <div id="divItemWrapper"> <div class="time-entry"> <div class="time-entry-right"> May 11, 2014 - 7:20pm<br /> <span style='color:steelblue'>0h:40min</span><br /> <a id="btnDeleteButton" href="#" class="hoverbutton" data-id="16825"> <img src="images/remove.gif" /> </a> </div> <div class="punchedoutimg"></div> <b><a href='/TimeTrakkerWeb/punchout/16825'>Project Housekeeping</a></b><br /> <small><i>Sawgrass</i></small> </div> ... 
more items here </div> So we have a search box txtSearchPage and a bunch of DIV elements with a .time-entry CSS class attached that make up the list of items displayed. Hooking up the search filter with jQuery is merely a matter of a few lines of code attached to the .keyup() event handler: <script type="text/javascript"> $("#txtSearchPage").keyup(function() { var search = $(this).val(); $(".time-entry").show(); if (search) $(".time-entry").not(":contains(" + search + ")").hide(); }); </script> The idea here is pretty simple: You capture the keystroke in the search box and capture the search text. Using that search text you first make all items visible and then hide all the items that don’t match. Since DOM changes are applied after a method finishes execution in JavaScript, the show and hide operations are effectively batched up, and so the view changes only to the final list rather than flashing the whole list and then removing items on a slow machine. You get the desired effect of the list showing the items in question. Case Insensitive Filtering But there is one problem with the solution above: The jQuery :contains filter is case sensitive, so your search text has to match expressions explicitly, which is a bit cumbersome when typing. In the screen capture above I actually cheated – I used a custom filter that provides case insensitive contains behavior. jQuery makes it really easy to create custom query filters, and so I created one called containsNoCase. Here’s the implementation of this custom filter: $.expr[":"].containsNoCase = function(el, i, m) { var search = m[3]; if (!search) return false; return new RegExp(search, "i").test($(el).text()); }; This filter can be added anywhere page-level JavaScript runs – in page script or a separately loaded .js file. The filter basically extends jQuery with a : expression. Filters get passed a tokenized array that contains the expression. In this case m[3] contains the search text from inside of the brackets. A filter basically looks at the active element that is passed in and then can return true or false to determine whether the item should be matched. Here I check a regular expression that looks for the search text in the element’s text. 
So the code for the filter now changes to: $(".time-entry").not(":containsNoCase(" + search + ")").hide(); And voila – you now have a case insensitive search. You can play around with another simpler example using this Plunkr: http://plnkr.co/edit/hDprZ3IlC6uzwFJtgHJh?p=preview Wrapping it up in a jQuery Plug-in To make this even easier to use and so that you can more easily remember how to use this search type filter, we can wrap this logic into a small jQuery plug-in: (function($, undefined) { $.expr[":"].containsNoCase = function(el, i, m) { var search = m[3]; if (!search) return false; return new RegExp(search, "i").test($(el).text()); }; $.fn.searchFilter = function(options) { var opt = $.extend({ // target selector targetSelector: "", // number of characters before search is applied charCount: 1 }, options); return this.each(function() { var $el = $(this); $el.keyup(function() { var search = $(this).val(); var $target = $(opt.targetSelector); $target.show(); if (search && search.length >= opt.charCount) $target.not(":containsNoCase(" + search + ")").hide(); }); }); }; })(jQuery); Using the plug-in now becomes a one-liner: $("#txtSearchPagePlugin").searchFilter({ targetSelector: ".time-entry", charCount: 2}) You attach the .searchFilter() plug-in to the text box you are searching and specify a targetSelector that is to be filtered. Optionally you can specify a character count at which the filter kicks in, since it’s kind of useless to filter at a single character typically. Summary This is a very easy solution to a cool user interface feature your users will thank you for. Search filtering is a simple but highly effective user interface feature, and as you’ve seen in this post it’s very simple to create this behavior with just a few lines of jQuery code. While all the cool kids are doing Angular these days, jQuery is still useful in many applications that don’t embrace the ‘everything generated in JavaScript’ paradigm. I hope this jQuery plug-in or just the raw jQuery will be useful to some of you… Resources: Example on Plunker. © Rick Strahl, West Wind Technologies, 2005-2014. Posted in jQuery, HTML5, JavaScript.

    Read the article

  • How insecure is my short password, really?

    - by rika-uehara
    Using systems like TrueCrypt, when I have to define a new password I am often informed that using a short password is insecure and "very easy" to break by brute-force. I always use passwords of 8 characters in length, which are not based on dictionary words and which consist of characters from the set A-Z, a-z, 0-9, i.e. I use a password like sDvE98f1. How easy is it to crack such a password by brute-force? I.e. how fast? I know it heavily depends on the hardware, but maybe someone could give me an estimate of how long it would take to do this on a dual core at 2GHz or whatever, to have a frame of reference for the hardware. To brute-force attack such a password one needs not only to cycle through all combinations but also to try to decrypt with each guessed password, which also takes some time. Also, is there some software to brute-force hack TrueCrypt? Because I want to try to brute-force crack my own password to see how long it takes if it is really that "very easy".
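    For a rough frame of reference, the size of the search space is simple arithmetic: 62 possible characters in each of 8 positions gives 62^8, roughly 2.2 x 10^14 combinations. The C# sketch below is a hypothetical back-of-the-envelope calculation only; the guess rate is an assumed figure, not a benchmark, and real attacks on a TrueCrypt volume are far slower because, as noted above, every guess also has to go through the container's key-derivation and decryption step.

        using System;

        class KeyspaceEstimate
        {
            static void Main()
            {
                // 8 characters drawn from A-Z, a-z, 0-9 (62 symbols).
                double keyspace = Math.Pow(62, 8);     // ~2.18e14 combinations
                double guessesPerSecond = 1e6;         // assumed rate, not a measurement
                double worstCaseYears = keyspace / guessesPerSecond / (3600 * 24 * 365);

                Console.WriteLine($"Keyspace: {keyspace:E2} combinations");
                Console.WriteLine($"Worst case at {guessesPerSecond:N0} guesses/s: {worstCaseYears:N1} years (average: half that)");
            }
        }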

    Read the article

  • Adding complexity by generalising: how far should you go?

    - by marcog
    Reference question: http://stackoverflow.com/questions/4303813/help-with-interview-question The above question asked for a solution to a problem on an NxN matrix. While there was an easy solution, I gave a more general solution to solve the more general problem for an NxM matrix. A handful of people commented that this generalisation was bad because it made the solution more complex. One such comment is voted +8. Putting aside the hard-to-explain voting effects on SO, there are two types of complexity to be considered here: runtime complexity, i.e. how fast the code runs, and code complexity, i.e. how difficult the code is to read and understand. The question of runtime complexity is something that requires a better understanding of the input data today and what it might look like in the future, taking the various growth factors into account where necessary. The question of code complexity is the one I'm interested in here. By generalising the solution, we avoid having to rewrite it in the event that the constraints change. However, at the same time it can often result in complicating the code. In the reference question, the code for NxN is easy to understand for any competent programmer, but the NxM case (unless documented well) could easily confuse someone coming across the code for the first time. So, my question is this: Where should you draw the line between generalising and keeping the code easy to understand?

    Read the article

  • TODO Formatting

    - by charlie.mott
    Article Source: http://geekswithblogs.net/charliemott TODOs should only be used for a short period of time to remind you that something needs to be done. They should then be addressed as soon as possible. In order to know who owns a TODO task and how long it’s been outstanding, my company uses the following standard for TODO formatting: Format: // TODO : Owner Initials – Date Created – Description of task. Sample: // TODO: CM – 2012/01/20 – Move this class to a new location so it can be reused. Using this pattern makes it easy to use the Resharper TODO explorer. The Carrot In order to make it easy for developers to apply this rule, a code snippet can be created in Visual Studio. Even better, I created a Resharper template. This gives the facility to use the current user name and current date macros. This actually makes the formatting look like this. Sample: // TODO: cmott – 2012/01/20 – Move this class to a new location so it can be reused. The Stick How do you enforce such a rule? I tried to create a custom Resharper Highlighting Pattern to perform custom code analysis inspection for deviations from this pattern. However, I did not have any success. The find dialog would not accept // text. If I work it out, I will update this blog post. StyleCop Instead I created a custom StyleCop rule. I followed the approach used with the StyleCop Contrib project. This provides a simple-to-use base class and an easy-to-use unit testing framework. I will upload this TODO format analyzer as a patch to that project.
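    For reference, the format above is easy to check mechanically. The C# sketch below is hypothetical (it is not the actual StyleCop rule or Resharper pattern mentioned in the post, and the class and method names are made up), but it shows one way the convention could be validated:

        using System.Text.RegularExpressions;

        static class TodoFormat
        {
            // Matches: // TODO: <owner> - <yyyy/MM/dd> - <description>
            // Accepts a plain hyphen or the en dash produced by the Resharper template.
            static readonly Regex Pattern = new Regex(
                @"^//\s*TODO\s*:\s*\S+\s*[-–]\s*\d{4}/\d{2}/\d{2}\s*[-–]\s*.+$",
                RegexOptions.IgnoreCase);

            public static bool IsWellFormed(string commentLine)
            {
                return Pattern.IsMatch(commentLine.Trim());
            }
        }

        // TodoFormat.IsWellFormed("// TODO: CM – 2012/01/20 – Move this class.")  -> true
        // TodoFormat.IsWellFormed("// TODO fix this")                             -> false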

    Read the article

  • How to REALLY start thinking in terms of objects?

    - by Mr Grieves
    I work with a team of developers who all have several years of experience with languages such as C# and Java. Most of them are young enough to have been shown OOP as a standard way to develop software in university and are very comfortable with concepts such as inheritance, abstraction, encapsulation and polymorphism. Yet many of them, and I have to include myself, still tend to create classes which are meant to be used in a very functional fashion. The resulting software is often several smaller classes which correctly represent business objects, which get passed through larger classes that only supply ways to modify and use those objects (functions). Large, complex, difficult-to-maintain classes named Manager are usually the result of such behaviour. I can see two theoretical reasons why people might write this type of code: It's easy to start thinking of everything in terms of the database. Deep down, for me, a computer handling a web request feels more like a functional operation than an object oriented operation when you think about Request Handlers, Threads, Processes, CPU Cores and CPU operations... I want source code which is easy to read and easy to modify. I have seen excellent examples of OO code which meet these objectives. How can I start writing code like this? How can I really start thinking in an object-oriented fashion? How can I share such a mentality with my colleagues?

    Read the article

  • Can too much abstraction be bad?

    - by m3th0dman
    As programmers I feel that our goal is to provide good abstractions on the given domain model and business logic. But where should this abstraction stop? How do we make the trade-off between abstraction and all its benefits (flexibility, ease of change, etc.) and ease of understanding the code and all its benefits? I believe I tend to write code that is overly abstracted and I don't know how good that is; I often tend to write it like it is some kind of a micro-framework, which consists of two parts: Micro-modules which are hooked up in the micro-framework: these modules are easy to understand, develop and maintain as single units. This code basically represents the code that actually does the functional stuff, described in requirements. Connecting code; now here, I believe, lies the problem. This code tends to be complicated because it is sometimes very abstracted and is hard to understand at the beginning; this arises from the fact that it is pure abstraction, with the real business logic being performed in the code described in 1; for this reason this code is not expected to change once tested. Is this a good approach to programming? That is, having changing code very fragmented in many modules and very easy to understand, and non-changing code very complex from the abstraction POV? Should all the code be uniformly complex (that is, code 1 more complex and interlinked and code 2 more simple) so that anybody looking through it can understand it in a reasonable amount of time but change is expensive, or is the solution presented above good, where the "changing code" is very easy to understand, debug and change and the "linking code" is kind of difficult? Note: this is not about code readability! Both the code at 1 and at 2 is readable, but the code at 2 comes with more complex abstractions while the code at 1 comes with simple abstractions.

    Read the article

  • How to move mail among Google Apps for Domains users

    - by Paul Roub
    Considering moving the domain used by my extended family for email to Google Apps. One less server for me to manage, better spam filtering, etc. One thing that's been nice about running my own has been the way I manage my kids' incoming email - it comes to me first, and I drop good mail in a symlinked IMAP folder that we share. A little procmail is all it takes, and straight-through exceptions are easy to implement. (FYI, no I'm not advocating censorship, but manually filtering spam and viruses from my 8-year-old's inbox seems like the right thing to do. YMMV) Anyway. I'm wondering if there's an easy way to do something similar in Google Apps - setting up filters to auto-redirect to me looks easy enough (any gotchas there?), but moving things back is not obvious. Yes, I could access both accounts via IMAP and drag mails across, but does anyone have an easier way?

    Read the article

  • PowerShell: access a single value in a table

    - by falkaholic
    This should be a really easy one, but I can't seem to find an easy way. For example, in PowerShell I am using a CSV file, which is then used to look up some configuration data based on an ID. Here is what I have now; it works, but there has got to be a better way. $configList = import-csv "C:\myconfig.csv" $id = "5001" $configList | where-object {$_.id -eq $id} | foreach-object{ $configData = $_.configData} If I use Format-Table etc., I always get the column header, which I would then have to cut off. Again, this has got to be really easy, and this isn't a show stopper. But there has to be a better way to get just the data out of a table without the column header.

    Read the article

  • What makes one language any better than another when both are designed for the same goals? [closed]

    - by Justin808
    I'm in the process of creating a grammar for a scripting language, but as I'm working on it I started to wonder what makes a language good in the first place. I know the goals for my script, but there are always 1000 different ways to go about doing things. Goals: Easy to use and understand (not "my grandma could do it" easy, but "the secretary at the front desk or the VP of marketing could do it" easy). No user defined functions or subroutines. Its use would be in events of objects in a system similar to HyperCard. Conceptually I was thinking of a language like this: set myVariable to 'Hello World' set counter to 0 repeat 5 times with x begin set counter to counter add x end set myVariable to myVariable plus ' ' plus counter popup myVariable set text of label named 'label' to 'new text' set color of label named 'label' to blue The end result would pop up a dialog with the contents Hello World 15; it would also change the text of a label and make it blue. But I could do the same thing 1000 different ways. So what makes one language any better than another when both are designed for the same goals?

    Read the article

  • Reasons why I'm using PHP rather than ASP.NET [closed]

    - by spirytus
    I have a basic idea of how ASP.NET works, but find the whole framework hard to use if you are a newbie. I found compiling, web applications vs. websites and all that stuff you should know to program in ASP.NET a bit tedious, and so personally I go with PHP to create small to medium applications for my clients. There are a couple of reasons for it: PHP is an easy scripting language; top to bottom and you're done. You can still create objects and classes, and if you have an idea of MVC it's fairly easy to create a basic structure yourself so you can keep your presentation layer "relatively" clean. Although I find myself still mixing basic logic into my views, I am trying to stick to booleans and foreach loops. ASP.NET keeps it cleaner as far as I know, and I agree that this is great. There is heaps of free stuff for PHP and lots of help everywhere. Although the choice of IDEs for PHP is very limited, I still don't have to be stuck with Visual Studio. Let's be honest.. you can program in whatever you like, but does anyone use anything other than VS? For the basic applications I create, Visual Studio doesn't even come close to a notepad :) / phpEdit (or similar) combination. It lacks many features I constantly use, although armies of developers are using it and it must be for good reason. Personally I'm not a big fan of VS though. Being on the market for that long should make editing much easier. I know .NET comes with an awesome set of controls, validators etc., which is truly awesome. For me the problem starts if I want my validator to behave in a slightly different way and, let's say, fade in/out error messages. I know it's possible to extend its behavior, plug into the lifecycle and output different JS to the client and so on. I just never see it happen in the places I work, and honestly, I don't even think most of the .NET developers I worked with during the last couple of years would know how to do that. In PHP I have to grab some plugin for jQuery and use it for validation, which is a fairly easy task once you have done it before. Again I'm sure it's easy for .NET gurus, but for a newbie like me it's almost impossible. I found that many ASP.NET programmers are very limited in what they are able to do and basically whack together .NET applications using the same lame set of controls, not even bothering to look into how it works and what if? Now I don't want to anger anyone :) I know there is a huge number of excellent .NET developers who know their stuff and are able to extend controls and do all that magic in no time. I found it a bit annoying though that many of them stick to what is provided without even trying to make it better. Nothing against .NET here, just a thought really :) I remember when ASP.NET came out the idea was that front-end people would not be able to screw anything up and could do their front-end stuff without worrying about what happens behind. Now it's never that easy, and I always tend to get server-side people to fix this and that during development. Things like IDs assigned to controls can very easily make your application break, and if someone is a pure HTML guy using VS it's easy to break something. Those are my thoughts on PHP and .NET and the reasons why for my work I go with PHP. I know that once learned, ASP.NET is awesome technology, and summing it all up PHP doesn't even come close to it. For someone like me however, individually developing small basic applications for clients, PHP seems to work much better. Please let me know your thoughts on the above :)

    Read the article

  • Using FiddlerCore to capture HTTP Requests with .NET

    - by Rick Strahl
    Over the last few weeks I’ve been working on my Web load testing utility West Wind WebSurge. One of the key components of a load testing tool is the ability to capture URLs effectively so that you can play them back later under load. One of the options in WebSurge for capturing URLs is to use its built-in capture tool which acts as an HTTP proxy to capture any HTTP and HTTPS traffic from most Windows HTTP clients, including Web Browsers as well as standalone Windows applications and services. To make this happen, I used Eric Lawrence’s awesome FiddlerCore library, which provides most of the functionality of his desktop Fiddler application, all rolled into an easy to use library that you can plug into your own applications. FiddlerCore makes it almost too easy to capture HTTP content! For WebSurge I needed to capture all HTTP traffic in order to capture the full HTTP request – URL, headers and any content posted by the client. The result of what I ended up creating is this semi-generic capture form: In this post I’m going to demonstrate how easy it is to use FiddlerCore to build this HTTP Capture Form.  If you want to jump right in here are the links to get Telerik’s Fiddler Core and the code for the demo provided here. FiddlerCore Download FiddlerCore on NuGet Show me the Code (WebSurge Integration code from GitHub) Download the WinForms Sample Form West Wind Web Surge (example implementation in live app) Note that FiddlerCore is bound by a license for commercial usage – see license.txt in the FiddlerCore distribution for details. Integrating FiddlerCore FiddlerCore is a library that simply plugs into your application. You can download it from the Telerik site and manually add the assemblies to your project, or you can simply install the NuGet package via:       PM> Install-Package FiddlerCore The library consists of the FiddlerCore.dll as well as a couple of support libraries (CertMaker.dll and BCMakeCert.dll) that are used for installing SSL certificates. I’ll have more on SSL captures and certificate installation later in this post. But first let’s see how easy it is to use FiddlerCore to capture HTTP content by looking at how to build the above capture form. Capturing HTTP Content Once the library is installed it’s super easy to hook up Fiddler functionality. Fiddler includes a number of static class methods on the FiddlerApplication object that can be called to hook up callback events as well as actual start monitoring HTTP URLs. In the following code directly lifted from WebSurge, I configure a few filter options on Form level object, from the user inputs shown on the form by assigning it to a capture options object. In the live application these settings are persisted configuration values, but in the demo they are one time values initialized and set on the form. 
Once these options are set, I hook up the AfterSessionComplete event to capture every URL that passes through the proxy after the request is completed and start up the Proxy service:void Start() { if (tbIgnoreResources.Checked) CaptureConfiguration.IgnoreResources = true; else CaptureConfiguration.IgnoreResources = false; string strProcId = txtProcessId.Text; if (strProcId.Contains('-')) strProcId = strProcId.Substring(strProcId.IndexOf('-') + 1).Trim(); strProcId = strProcId.Trim(); int procId = 0; if (!string.IsNullOrEmpty(strProcId)) { if (!int.TryParse(strProcId, out procId)) procId = 0; } CaptureConfiguration.ProcessId = procId; CaptureConfiguration.CaptureDomain = txtCaptureDomain.Text; FiddlerApplication.AfterSessionComplete += FiddlerApplication_AfterSessionComplete; FiddlerApplication.Startup(8888, true, true, true); } The key lines for FiddlerCore are just the last two lines of code that include the event hookup code as well as the Startup() method call. Here I only hook up to the AfterSessionComplete event but there are a number of other events that hook various stages of the HTTP request cycle you can also hook into. Other events include BeforeRequest, BeforeResponse, RequestHeadersAvailable, ResponseHeadersAvailable and so on. In my case I want to capture the request data and I actually have several options to capture this data. AfterSessionComplete is the last event that fires in the request sequence and it’s the most common choice to capture all request and response data. I could have used several other events, but AfterSessionComplete is one place where you can look both at the request and response data, so this will be the most common place to hook into if you’re capturing content. The implementation of AfterSessionComplete is responsible for capturing all HTTP request headers and it looks something like this:private void FiddlerApplication_AfterSessionComplete(Session sess) { // Ignore HTTPS connect requests if (sess.RequestMethod == "CONNECT") return; if (CaptureConfiguration.ProcessId > 0) { if (sess.LocalProcessID != 0 && sess.LocalProcessID != CaptureConfiguration.ProcessId) return; } if (!string.IsNullOrEmpty(CaptureConfiguration.CaptureDomain)) { if (sess.hostname.ToLower() != CaptureConfiguration.CaptureDomain.Trim().ToLower()) return; } if (CaptureConfiguration.IgnoreResources) { string url = sess.fullUrl.ToLower(); var extensions = CaptureConfiguration.ExtensionFilterExclusions; foreach (var ext in extensions) { if (url.Contains(ext)) return; } var filters = CaptureConfiguration.UrlFilterExclusions; foreach (var urlFilter in filters) { if (url.Contains(urlFilter)) return; } } if (sess == null || sess.oRequest == null || sess.oRequest.headers == null) return; string headers = sess.oRequest.headers.ToString(); var reqBody = sess.GetRequestBodyAsString(); // if you wanted to capture the response //string respHeaders = session.oResponse.headers.ToString(); //var respBody = session.GetResponseBodyAsString(); // replace the HTTP line to inject full URL string firstLine = sess.RequestMethod + " " + sess.fullUrl + " " + sess.oRequest.headers.HTTPVersion; int at = headers.IndexOf("\r\n"); if (at < 0) return; headers = firstLine + "\r\n" + headers.Substring(at + 1); string output = headers + "\r\n" + (!string.IsNullOrEmpty(reqBody) ? 
reqBody + "\r\n" : string.Empty) + Separator + "\r\n\r\n"; BeginInvoke(new Action<string>((text) => { txtCapture.AppendText(text); UpdateButtonStatus(); }), output); } The code starts by filtering out some requests based on the CaptureOptions I set before the capture is started. These options/filters are applied when requests actually come in. This is very useful to help narrow down the requests that are captured for playback based on options the user picked. I find it useful to limit requests to a certain domain for captures, as well as filtering out some request types like static resources – images, css, scripts etc. This is of course optional, but I think it’s a common scenario and WebSurge makes good use of this feature. AfterSessionComplete, like other FiddlerCore events, provides a Session object parameter which contains all the request and response details. There are oRequest and oResponse objects to hold their respective data. In my case I’m interested in the raw request headers and body only; as you can see in the commented code, you can also retrieve the response headers and body. Here the code captures the request headers and body and simply appends the output to the textbox on the screen. Note that the Fiddler events are asynchronous, so in order to display the content in the UI they have to be marshaled back to the UI thread with BeginInvoke, which here simply takes the generated headers and appends them to the existing textbox text on the form. As each request is processed, the headers are captured and appended to the bottom of the textbox, resulting in a Session HTTP capture in the format that Web Surge internally supports, which is basically raw request headers with a customized 1st HTTP header line that includes the full URL rather than a server relative URL. When the capture is done the user can either copy the raw HTTP session to the clipboard, or directly save it to file. This raw capture format is the same format WebSurge and also Fiddler use to import/export request data. While this code is application specific, it demonstrates the kind of logic that you can easily apply to the request capture process, which is one of the reasons why FiddlerCore is so powerful. You get to choose what content you want to look up as part of your own application logic and you can then decide how to capture or use that data as part of your application. The actual captured data in this case is only a string. The user can edit the data by hand or, in the case of WebSurge, save it to disk and automatically open the captured session as a new load test. Stopping the FiddlerCore Proxy Finally to stop capturing requests you simply disconnect the event handler and call the FiddlerApplication.ShutDown() method: void Stop() { FiddlerApplication.AfterSessionComplete -= FiddlerApplication_AfterSessionComplete; if (FiddlerApplication.IsStarted()) FiddlerApplication.Shutdown(); } As you can see, adding HTTP capture functionality to an application is very straightforward. FiddlerCore offers tons of features I’m not even touching on here – I suspect basic captures are the most common scenario, but a lot of different things can be done with FiddlerCore’s simple API interface. Sky’s the limit! The source code for this sample capture form (WinForms) is provided as part of this article. 
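    To recap the moving parts before moving on: the whole capture cycle boils down to hooking one event, calling Startup() and later Shutdown(). The console program below is a stripped-down, hypothetical sketch that uses only the FiddlerCore calls shown above (no filtering and no UI marshaling); the Fiddler namespace import and the lambda handler form are assumptions for the example, not code from WebSurge.

        using System;
        using Fiddler;   // FiddlerCore namespace (assumed import)

        class MinimalCapture
        {
            static void Main()
            {
                // Log the method and full URL of every completed request.
                FiddlerApplication.AfterSessionComplete += sess =>
                {
                    if (sess.RequestMethod != "CONNECT")   // skip HTTPS tunnel setup requests
                        Console.WriteLine(sess.RequestMethod + " " + sess.fullUrl);
                };

                // Start the proxy on port 8888 and register it as the system proxy.
                FiddlerApplication.Startup(8888, true, true, true);

                Console.WriteLine("Capturing... press Enter to stop.");
                Console.ReadLine();

                FiddlerApplication.Shutdown();
            }
        }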
Adding Fiddler Certificates with FiddlerCore One of the sticking points in West Wind WebSurge has been that if you wanted to capture HTTPS/SSL traffic, you needed to have the full version of Fiddler and have HTTPS decryption enabled. Essentially you had to use Fiddler to configure HTTPS decryption and the associated installation of the Fiddler local client certificate that is used for local decryption of incoming SSL traffic. While this works just fine, requiring to have Fiddler installed and then using a separate application to configure the SSL functionality isn’t ideal. Fortunately FiddlerCore actually includes the tools to register the Fiddler Certificate directly using FiddlerCore. Why does Fiddler need a Certificate in the first Place? Fiddler and FiddlerCore are essentially HTTP proxies which means they inject themselves into the HTTP conversation by re-routing HTTP traffic to a special HTTP port (8888 by default for Fiddler) and then forward the HTTP data to the original client. Fiddler injects itself as the system proxy in using the WinInet Windows settings  which are the same settings that Internet Explorer uses and that are configured in the Windows and Internet Explorer Internet Settings dialog. Most HTTP clients running on Windows pick up and apply these system level Proxy settings before establishing new HTTP connections and that’s why most clients automatically work once Fiddler – or FiddlerCore/WebSurge are running. For plain HTTP requests this just works – Fiddler intercepts the HTTP requests on the proxy port and then forwards them to the original port (80 for HTTP and 443 for SSL typically but it could be any port). For SSL however, this is not quite as simple – Fiddler can easily act as an HTTPS/SSL client to capture inbound requests from the server, but when it forwards the request to the client it has to also act as an SSL server and provide a certificate that the client trusts. This won’t be the original certificate from the remote site, but rather a custom local certificate that effectively simulates an SSL connection between the proxy and the client. If there is no custom certificate configured for Fiddler the SSL request fails with a certificate validation error. The key for this to work is that a custom certificate has to be installed that the HTTPS client trusts on the local machine. For a much more detailed description of the process you can check out Eric Lawrence’s blog post on Certificates. If you’re using the desktop version of Fiddler you can install a local certificate into the Windows certificate store. Fiddler proper does this from the Options menu: This operation does several things: It installs the Fiddler Root Certificate It sets trust to this Root Certificate A new client certificate is generated for each HTTPS site monitored Certificate Installation with FiddlerCore You can also provide this same functionality using FiddlerCore which includes a CertMaker class. 
Using CertMaker is straight forward to use and it provides an easy way to create some simple helpers that can install and uninstall a Fiddler Root certificate:public static bool InstallCertificate() { if (!CertMaker.rootCertExists()) { if (!CertMaker.createRootCert()) return false; if (!CertMaker.trustRootCert()) return false; } return true; } public static bool UninstallCertificate() { if (CertMaker.rootCertExists()) { if (!CertMaker.removeFiddlerGeneratedCerts(true)) return false; } return true; } InstallCertificate() works by first checking whether the root certificate is already installed and if it isn’t goes ahead and creates a new one. The process of creating the certificate is a two step process – first the actual certificate is created and then it’s moved into the certificate store to become trusted. I’m not sure why you’d ever split these operations up since a cert created without trust isn’t going to be of much value, but there are two distinct steps. When you trigger the trustRootCert() method, a message box will pop up on the desktop that lets you know that you’re about to trust a local private certificate. This is a security feature to ensure that you really want to trust the Fiddler root since you are essentially installing a man in the middle certificate. It’s quite safe to use this generated root certificate, because it’s been specifically generated for your machine and thus is not usable from external sources, the only way to use this certificate in a trusted way is from the local machine. IOW, unless somebody has physical access to your machine, there’s no useful way to hijack this certificate and use it for nefarious purposes (see Eric’s post for more details). Once the Root certificate has been installed, FiddlerCore/Fiddler create new certificates for each site that is connected to with HTTPS. You can end up with quite a few temporary certificates in your certificate store. To uninstall you can either use Fiddler and simply uncheck the Decrypt HTTPS traffic option followed by the remove Fiddler certificates button, or you can use FiddlerCore’s CertMaker.removeFiddlerGeneratedCerts() which removes the root cert and any of the intermediary certificates Fiddler created. Keep in mind that when you uninstall you uninstall the certificate for both FiddlerCore and Fiddler, so use UninstallCertificate() with care and realize that you might affect the Fiddler application’s operation by doing so as well. When to check for an installed Certificate Note that the check to see if the root certificate exists is pretty fast, while the actual process of installing the certificate is a relatively slow operation that even on a fast machine takes a few seconds. Further the trust operation pops up a message box so you probably don’t want to install the certificate repeatedly. Since the check for the root certificate is fast, you can easily put a call to InstallCertificate() in any capture startup code – in which case the certificate installation only triggers when a certificate is in fact not installed. Personally I like to make certificate installation explicit – just like Fiddler does, so in WebSurge I use a small drop down option on the menu to install or uninstall the SSL certificate:   This code calls the InstallCertificate and UnInstallCertificate functions respectively – the experience with this is similar to what you get in Fiddler with the extra dialog box popping up to prompt confirmation for installation of the root certificate. 
    Once the cert is installed you can then capture SSL requests. There’s a gotcha however… Gotcha: FiddlerCore Certificates don’t stick by Default When I originally tried to use the Fiddler certificate installation I ran into an odd problem. I was able to install the certificate and immediately after installation was able to capture HTTPS requests. Then I would exit the application and come back in and try the same HTTPS capture again and it would fail due to a missing certificate. CertMaker.rootCertExists() would return false after every restart, and if I re-installed the certificate a new certificate would get added to the certificate store, resulting in a bunch of duplicated root certificates with different keys. What the heck? CertMaker and BcMakeCert create non-sticky Certificates It turns out that FiddlerCore by default uses different components from what the full version of Fiddler uses. Fiddler uses a Windows utility called MakeCert.exe to create the Fiddler Root certificate. FiddlerCore however installs the CertMaker.dll and BCMakeCert.dll assemblies, which use a different crypto library (Bouncy Castle) for certificate creation than MakeCert.exe, which uses the Windows Crypto API. The assemblies provide support for non-Windows operation for Fiddler under Mono, as well as support for some non-Windows certificate platforms like iOS and Android for decryption. The bottom line is that the FiddlerCore-provided Bouncy Castle assemblies are not sticky by default, as the certificates created with them are not cached as they are in Fiddler proper. To get certificates to ‘stick’ you have to explicitly cache the certificates in Fiddler’s internal preferences. A cache-aware version of InstallCertificate looks something like this: public static bool InstallCertificate() { if (!CertMaker.rootCertExists()) { if (!CertMaker.createRootCert()) return false; if (!CertMaker.trustRootCert()) return false; App.Configuration.UrlCapture.Cert = FiddlerApplication.Prefs.GetStringPref("fiddler.certmaker.bc.cert", null); App.Configuration.UrlCapture.Key = FiddlerApplication.Prefs.GetStringPref("fiddler.certmaker.bc.key", null); } return true; } public static bool UninstallCertificate() { if (CertMaker.rootCertExists()) { if (!CertMaker.removeFiddlerGeneratedCerts(true)) return false; } App.Configuration.UrlCapture.Cert = null; App.Configuration.UrlCapture.Key = null; return true; } In this code I store the Fiddler cert and private key in an application configuration setting that’s stored with the application settings (App.Configuration.UrlCapture object). These settings automatically persist when WebSurge is shut down. The values are read out of Fiddler’s internal preferences store, which is set after a new certificate has been created. Likewise I clear out the configuration settings when the certificate is uninstalled. In order for these settings to be used you have to also load the configuration settings into the Fiddler preferences *before* a call to rootCertExists() is made. 
    I do this in the capture form’s constructor: public FiddlerCapture(StressTestForm form) { InitializeComponent(); CaptureConfiguration = App.Configuration.UrlCapture; MainForm = form; if (!string.IsNullOrEmpty(App.Configuration.UrlCapture.Cert)) { FiddlerApplication.Prefs.SetStringPref("fiddler.certmaker.bc.key", App.Configuration.UrlCapture.Key); FiddlerApplication.Prefs.SetStringPref("fiddler.certmaker.bc.cert", App.Configuration.UrlCapture.Cert); }} This is kind of a drag to do and not documented anywhere that I could find, so hopefully this will save you some grief if you want to work with the stock certificate logic that installs with FiddlerCore. MakeCert provides sticky Certificates and the same functionality as Fiddler But there’s actually an easier way. If you want to skip the above Fiddler preference configuration code in your application you can choose to distribute MakeCert.exe instead of CertMaker.dll and BCMakeCert.dll. When you use MakeCert.exe, the certificate settings are stored in Windows so they are available without any custom configuration inside of your application. It’s easier to integrate, and as long as you run on Windows and you don’t need to support iOS or Android devices it’s simply easier to deal with. To integrate it into your project, you can remove the reference to CertMaker.dll (and the BCMakeCert.dll assembly) from your project. Instead copy MakeCert.exe into your output folder. To make sure MakeCert.exe gets pushed out, include MakeCert.exe in your project and set the Build Action to None, and Copy to Output Directory to Copy if newer. Note that the CertMaker.dll reference in the project has been removed, as have the CertMaker.dll and BCMakeCert.dll files on disk. Keep in mind that these DLLs are resources of the FiddlerCore NuGet package, so updating the package may end up pushing those files back into your project. Once MakeCert.exe is distributed, FiddlerCore checks for it first before using the assemblies, so as long as MakeCert.exe exists it’ll be used for certificate creation (at least on Windows). Summary FiddlerCore is a pretty sweet tool, and it’s absolutely awesome that we get to plug in most of the functionality of Fiddler right into our own applications. A few years back I tried to build this sort of functionality myself for an app and ended up giving up because it’s a big job to get HTTP right – especially if you need to support SSL. FiddlerCore now provides that functionality as a turnkey solution that can be plugged into your own apps easily. The only downside is FiddlerCore’s documentation for more advanced features like certificate installation, which is pretty sketchy. While for the most part FiddlerCore’s feature set is easy to work with without any documentation, advanced features are often not intuitive to glean by just using Intellisense or the FiddlerCore help file reference (which is not terribly useful). While Eric Lawrence is very responsive on his forum and on Twitter, there simply isn’t much useful documentation on Fiddler/FiddlerCore available online. If you run into trouble the forum is probably the first place to look and then ask a question if you can’t find the answer. The best documentation you can find is Eric’s Fiddler Book, which covers a ton of functionality of Fiddler and FiddlerCore. The book is a great reference to Fiddler’s feature set as well as providing great insights into the HTTP protocol. 
The second half of the book that gets into the innards of HTTP is an excellent read for anybody who wants to know more about some of the more arcane aspects and special behaviors of HTTP – it’s well worth the read. While the book has tons of information in a very readable format, it’s unfortunately not a great reference, as it’s hard to find things in the book, and because it’s not available online you can’t electronically search for the great content in it. But it’s hard to complain about any of this given the obvious effort and love that’s gone into this awesome product for all of these years. A mighty big thanks to Eric Lawrence for having created this useful tool that so many of us use all the time, and also to Telerik for picking up Fiddler/FiddlerCore and providing Eric the resources to support and improve this wonderful tool full time and keeping it free for all. Kudos! Resources: FiddlerCore Download, FiddlerCore NuGet, Fiddler Capture Sample Form, Fiddler Capture Form in West Wind WebSurge (GitHub), Eric Lawrence’s Fiddler Book. © Rick Strahl, West Wind Technologies, 2005-2014. Posted in .NET, HTTP.

    Read the article

  • Tip/Trick: Fix Common SEO Problems Using the URL Rewrite Extension

    - by ScottGu
    Search engine optimization (SEO) is important for any publically facing web-site.  A large % of traffic to sites now comes directly from search engines, and improving your site’s search relevancy will lead to more users visiting your site from search engine queries.  This can directly or indirectly increase the money you make through your site. This blog post covers how you can use the free Microsoft URL Rewrite Extension to fix a bunch of common SEO problems that your site might have.  It takes less than 15 minutes (and no code changes) to apply 4 simple URL Rewrite rules to your site, and in doing so cause search engines to drive more visitors and traffic to your site.  The techniques below work equally well with both ASP.NET Web Forms and ASP.NET MVC based sites.  They also works with all versions of ASP.NET (and even work with non-ASP.NET content). [In addition to blogging, I am also now using Twitter for quick updates and to share links. Follow me at: twitter.com/scottgu] Measuring the SEO of your website with the Microsoft SEO Toolkit A few months ago I blogged about the free SEO Toolkit that we’ve shipped.  This useful tool enables you to automatically crawl/scan your site for SEO correctness, and it then flags any SEO issues it finds.  I highly recommend downloading and using the tool against any public site you work on.  It makes it easy to spot SEO issues you might have in your site, and pinpoint ways to optimize it further. Below is a simple example of a report I ran against one of my sites (www.scottgu.com) prior to applying the URL Rewrite rules I’ll cover later in this blog post:   Search Relevancy and URL Splitting Two of the important things that search engines evaluate when assessing your site’s “search relevancy” are: How many other sites link to your content.  Search engines assume that if a lot of people around the web are linking to your content, then it is likely useful and so weight it higher in relevancy. The uniqueness of the content it finds on your site.  If search engines find that the content is duplicated in multiple places around the Internet (or on multiple URLs on your site) then it is likely to drop the relevancy of the content. One of the things you want to be very careful to avoid when building public facing sites is to not allow different URLs to retrieve the same content within your site.  Doing so will hurt with both of the situations above.  In particular, allowing external sites to link to the same content with multiple URLs will cause your link-count and page-ranking to be split up across those different URLs (and so give you a smaller page rank than what it would otherwise be if it was just one URL).  Not allowing external sites to link to you in different ways sounds easy in theory – but you might wonder what exactly this means in practice and how you avoid it. 4 Really Common SEO Problems Your Sites Might Have Below are 4 really common scenarios that can cause your site to inadvertently expose multiple URLs for the same content.  When this happens external sites linking to yours will end up splitting their page links across multiple URLs - and as a result cause you to have a lower page ranking with search engines than you deserve. SEO Problem #1: Default Document IIS (and other web servers) supports the concept of a “default document”.  This allows you to avoid having to explicitly specify the page you want to serve at either the root of the web-site/application, or within a sub-directory.  
This is convenient – but means that by default this content is available via two different publically exposed URLs (which is bad).  For example: http://scottgu.com/ http://scottgu.com/default.aspx SEO Problem #2: Different URL Casings Web developers often don’t realize URLs are case sensitive to search engines on the web.  This means that search engines will treat the following links as two completely different URLs: http://scottgu.com/Albums.aspx http://scottgu.com/albums.aspx SEO Problem #3: Trailing Slashes Consider the below two URLs – they might look the same at first, but they are subtly different. The trailing slash creates yet another situation that causes search engines to treat the URLs as different and so split search rankings: http://scottgu.com http://scottgu.com/ SEO Problem #4: Canonical Host Names Sometimes sites support scenarios where they support a web-site with both a leading “www” hostname prefix as well as just the hostname itself.  This causes search engines to treat the URLs as different and split search rankling: http://scottgu.com/albums.aspx/ http://www.scottgu.com/albums.aspx/ How to Easily Fix these SEO Problems in 10 minutes (or less) using IIS Rewrite If you haven’t been careful when coding your sites, chances are you are suffering from one (or more) of the above SEO problems.  Addressing these issues will improve your search engine relevancy ranking and drive more traffic to your site. The “good news” is that fixing the above 4 issues is really easy using the URL Rewrite Extension.  This is a completely free Microsoft extension available for IIS 7.x (on Windows Server 2008, Windows Server 2008 R2, Windows 7 and Windows Vista).  The great thing about using the IIS Rewrite extension is that it allows you to fix the above problems *without* having to change any code within your applications.  You can easily install the URL Rewrite Extension in under 3 minutes using the Microsoft Web Platform Installer (a free tool we ship that automates setting up web servers and development machines).  Just click the green “Install Now” button on the URL Rewrite Spotlight page to install it on your Windows Server 2008, Windows 7 or Windows Vista machine: Once installed you’ll find that a new “URL Rewrite” icon is available within the IIS 7 Admin Tool: Double-clicking the icon will open up the URL Rewrite admin panel – which will display the list of URL Rewrite rules configured for a particular application or site: Notice that our rewrite rule list above is currently empty (which is the default when you first install the extension).  We can click the “Add Rule…” link button in the top-right of the panel to add and enable new URL Rewriting logic for our site.  Scenario 1: Handling Default Document Scenarios One of the SEO problems I discussed earlier in this post was the scenario where the “default document” feature of IIS causes you to inadvertently expose two URLs for the same content on your site.  For example: http://scottgu.com/ http://scottgu.com/default.aspx We can fix this by adding a new IIS Rewrite rule that automatically redirects anyone who navigates to the second URL to instead go to the first one.  We will setup the HTTP redirect to be a “permanent redirect” – which will indicate to search engines that they should follow the redirect and use the new URL they are redirected to as the identifier of the content they retrieve.  Let’s look at how we can create such a rule.  We’ll begin by clicking the “Add Rule” link in the screenshot above.  
    This will cause the below dialog to display: We'll select the "Blank Rule" template within the "Inbound rules" section to create a new custom URL Rewriting rule.  This will display an empty pane like below: Don't worry – setting up the above rule is easy.  The following 4 steps explain how to do so: Step 1: Name the Rule Our first step will be to name the rule we are creating.  Naming it with a descriptive name will make it easier to find and understand later.  Let's name this rule our "Default Document URL Rewrite" rule: Step 2: Setup the Regular Expression that Matches this Rule Our second step will be to specify a regular expression filter that will cause this rule to execute when an incoming URL matches the regex pattern.   Don't worry if you aren't good with regular expressions - I suck at them too. The trick is to know someone who is good at them or copy/paste them from a web-site.  Below we are going to specify the following regular expression as our pattern rule: (.*?)/?Default\.aspx$ This pattern will match any URL string that ends with Default.aspx. The "(.*?)" captures any preceding characters (zero or more of them). The "/?" part says to match the slash symbol zero or one time. The "$" symbol at the end ensures that the pattern will only match strings that end with Default.aspx.  Combining all these regex elements allows this rule to work not only for the root of your web site (e.g. http://scottgu.com/default.aspx) but also for any application or subdirectory within the site (e.g. http://scottgu.com/photos/default.aspx).  Because the "ignore case" checkbox is selected it will match both "Default.aspx" as well as "default.aspx" within the URL.   One nice feature built into the rule editor is a "Test pattern" button that you can click to bring up a dialog that allows you to test out a few URLs with the rule you are configuring: Above I've added a "products/default.aspx" URL and clicked the "Test" button.  This gives me immediate feedback on whether the rule will execute for it.  Step 3: Setup a Permanent Redirect Action We'll then setup an action to occur when our regular expression pattern matches the incoming URL: In the dialog above I've changed the "Action Type" drop down to be a "Redirect" action.  The "Redirect Type" will be an HTTP 301 Permanent redirect – which means search engines will follow it. I've also set the "Redirect URL" property to be: {R:1}/ This indicates that we want to redirect the web client requesting the original URL to a new URL that has the originally requested URL path - minus the "Default.aspx" in it.  For example, requests for http://scottgu.com/default.aspx will be redirected to http://scottgu.com/, and requests for http://scottgu.com/photos/default.aspx will be redirected to http://scottgu.com/photos/ The "{R:N}" regex construct, where N >= 0, is called a back-reference and N is the back-reference index. In the case of our pattern "(.*?)/?Default\.aspx$", if the input URL is "products/Default.aspx" then {R:0} will contain "products/Default.aspx" and {R:1} will contain "products".  We are going to use this {R:1}/ value as the URL we redirect users to.  The small snippet below shows how this capture behaves on a few sample URLs.  
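    If you want to double-check what ends up in {R:1} before saving the rule, you can experiment with the pattern yourself.  IIS URL Rewrite evaluates the pattern with its own regular expression support, but for a simple pattern like this one the standard .NET Regex class behaves the same way, so a quick scratch console program (just a sketch of my own, not part of the URL Rewrite tooling) is a handy way to see the captures:

    using System;
    using System.Text.RegularExpressions;

    class RewritePatternTest
    {
        static void Main()
        {
            // Scratch test only: the same pattern used by the rule above.
            // IgnoreCase mirrors the "ignore case" checkbox in the rule editor.
            var pattern = new Regex(@"(.*?)/?Default\.aspx$", RegexOptions.IgnoreCase);

            foreach (var url in new[] { "Default.aspx", "products/Default.aspx", "photos/default.aspx" })
            {
                Match match = pattern.Match(url);

                // Groups[0] corresponds to {R:0}, Groups[1] corresponds to {R:1}
                Console.WriteLine("{0} -> redirect to {1}/", url, match.Groups[1].Value);
            }

            // Prints:
            //   Default.aspx -> redirect to /
            //   products/Default.aspx -> redirect to products/
            //   photos/default.aspx -> redirect to photos/
        }
    }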
Step 4: Apply and Save the Rule Our final step is to click the “Apply” button in the top right hand of the IIS admin tool – which will cause the tool to persist the URL Rewrite rule into our application’s root web.config file (under a <system.webServer/rewrite> configuration section): <configuration>     <system.webServer>         <rewrite>             <rules>                 <rule name="Default Document" stopProcessing="true">                     <match url="(.*?)/?Default\.aspx$" />                     <action type="Redirect" url="{R:1}/" />                 </rule>             </rules>         </rewrite>     </system.webServer> </configuration> Because IIS 7.x and ASP.NET share the same web.config files, you can actually just copy/paste the above code into your web.config files using Visual Studio and skip the need to run the admin tool entirely.  This also makes adding/deploying URL Rewrite rules with your ASP.NET applications really easy. Step 5: Try the Rule Out Now that we’ve saved the rule, let’s try it out on our site.  Try the following two URLs on my site: http://scottgu.com/ http://scottgu.com/default.aspx Notice that the second URL automatically redirects to the first one.  Because it is a permanent redirect, search engines will follow the URL and should update the page ranking of http://scottgu.com to include links to http://scottgu.com/default.aspx as well. Scenario 2: Different URL Casing Another common SEO problem I discussed earlier in this post is that URLs are case sensitive to search engines on the web.  This means that search engines will treat the following links as two completely different URLs: http://scottgu.com/Albums.aspx http://scottgu.com/albums.aspx We can fix this by adding a new IIS Rewrite rule that automatically redirects anyone who navigates to the first URL to instead go to the second (all lower-case) one.  Like before, we will setup the HTTP redirect to be a “permanent redirect” – which will indicate to search engines that they should follow the redirect and use the new URL they are redirected to as the identifier of the content they retrieve. To create such a rule we’ll click the “Add Rule” link in the URL Rewrite admin tool again.  This will cause the “Add Rule” dialog to appear again: Unlike the previous scenario (where we created a “Blank Rule”), with this scenario we can take advantage of a built-in “Enforce lowercase URLs” rule template.  When we click the “ok” button we’ll see the following dialog which asks us if we want to create a rule that enforces the use of lowercase letters in URLs: When we click the “Yes” button we’ll get a pre-written rule that automatically performs a permanent redirect if an incoming URL has upper-case characters in it – and automatically send users to a lower-case version of the URL: We can click the “Apply” button to use this rule “as-is” and have it apply to all incoming URLs to our site.  Because my www.scottgu.com site uses ASP.NET Web Forms, I’m going to make one small change to the rule we generated above – which is to add a condition that will ensure that URLs to ASP.NET’s built-in “WebResource.axd” handler are excluded from our case-sensitivity URL Rewrite logic.  URLs to the WebResource.axd handler will only come from server-controls emitted from my pages – and will never be linked to from external sites.  While my site will continue to function fine if we redirect these URLs to automatically be lower-case – doing so isn’t necessary and will add an extra HTTP redirect to many of my pages.  
    The good news is that adding a condition that prevents my URL Rewriting rule from being applied to certain URLs is easy.  We simply need to expand the "Conditions" section of the form above.  We can then click the "Add" button to add a condition clause.  This will bring up the "Add Condition" dialog: Above I've entered {URL} as the Condition input – and said that this rule should only execute if the URL does not match a regex pattern which contains the string "WebResource.axd".  This will ensure that WebResource.axd URLs to my site will be allowed to execute just fine without having the URL be re-written to be all lower-case. Note: If you have static resources (like references to .jpg, .css, and .js files) within your site that currently use upper-case characters you'll probably want to add additional condition filter clauses so that URLs to them also don't get redirected to be lower-case (just add condition clauses for patterns like .jpg, .gif, .js, etc – I show an example of what this might look like near the end of this post).  Your site will continue to work fine if these URLs get redirected to be lower case (meaning the site won't break) – but it will cause an extra HTTP redirect to happen on your site for URLs that don't need to be redirected for SEO reasons.  So adding a condition clause makes sense. When I click the "ok" button above and apply our lower-case rewriting rule the admin tool will save the following additional rule to our web.config file: <configuration>     <system.webServer>         <rewrite>             <rules>                 <rule name="Default Document" stopProcessing="true">                     <match url="(.*?)/?Default\.aspx$" />                     <action type="Redirect" url="{R:1}/" />                 </rule>                 <rule name="Lower Case URLs" stopProcessing="true">                     <match url="[A-Z]" ignoreCase="false" />                     <conditions logicalGrouping="MatchAll" trackAllCaptures="false">                         <add input="{URL}" pattern="WebResource.axd" negate="true" />                     </conditions>                     <action type="Redirect" url="{ToLower:{URL}}" />                 </rule>             </rules>         </rewrite>     </system.webServer> </configuration> Try the Rule Out Now that we've saved the rule, let's try it out on our site.  Try the following two URLs on my site: http://scottgu.com/Albums.aspx http://scottgu.com/albums.aspx Notice that the first URL (which has a capital "A") automatically does a redirect to a lower-case version of the URL.  Scenario 3: Trailing Slashes Another common SEO problem I discussed earlier in this post is the scenario of trailing slashes within URLs.  The trailing slash creates yet another situation that causes search engines to treat the URLs as different and so split search rankings: http://scottgu.com http://scottgu.com/ We can fix this by adding a new IIS Rewrite rule that automatically redirects anyone who navigates to the first URL (that does not have a trailing slash) to instead go to the second one that does.  Like before, we will setup the HTTP redirect to be a "permanent redirect" – which will indicate to search engines that they should follow the redirect and use the new URL they are redirected to as the identifier of the content they retrieve.  To create such a rule we'll click the "Add Rule" link in the URL Rewrite admin tool again.  This will cause the "Add Rule" dialog to appear again: The URL Rewrite admin tool has a built-in "Append or remove the trailing slash symbol" rule template.  
    When we select it and click the "ok" button we'll see the following dialog which asks us if we want to create a rule that automatically redirects users to a URL with a trailing slash if one isn't present: As with our previous lower-casing rewrite rule, we'll add one additional condition clause that will exclude WebResource.axd URLs from being processed by this rule.  This will avoid an unnecessary redirect from happening for those URLs. When we click the "OK" button we'll get a pre-written rule that automatically performs a permanent redirect if the URL doesn't have a trailing slash – and if the URL does not map to a physical directory or file.  This will save the following additional rule to our web.config file: <configuration>     <system.webServer>         <rewrite>             <rules>                 <rule name="Default Document" stopProcessing="true">                     <match url="(.*?)/?Default\.aspx$" />                     <action type="Redirect" url="{R:1}/" />                 </rule>                 <rule name="Lower Case URLs" stopProcessing="true">                     <match url="[A-Z]" ignoreCase="false" />                     <conditions logicalGrouping="MatchAll" trackAllCaptures="false">                         <add input="{URL}" pattern="WebResource.axd" negate="true" />                     </conditions>                     <action type="Redirect" url="{ToLower:{URL}}" />                 </rule>                 <rule name="Trailing Slash" stopProcessing="true">                     <match url="(.*[^/])$" />                     <conditions logicalGrouping="MatchAll" trackAllCaptures="false">                         <add input="{REQUEST_FILENAME}" matchType="IsDirectory" negate="true" />                         <add input="{REQUEST_FILENAME}" matchType="IsFile" negate="true" />                         <add input="{URL}" pattern="WebResource.axd" negate="true" />                     </conditions>                     <action type="Redirect" url="{R:1}/" />                 </rule>             </rules>         </rewrite>     </system.webServer> </configuration> Try the Rule Out Now that we've saved the rule, let's try it out on our site.  Try the following two URLs on my site: http://scottgu.com http://scottgu.com/ Notice that the first URL (which has no trailing slash) automatically does a redirect to a URL with the trailing slash.  Because it is a permanent redirect, search engines will follow the URL and update the page ranking. Scenario 4: Canonical Host Names The final SEO problem I discussed earlier is the scenario where a site works with both a leading "www" hostname prefix as well as just the hostname itself.  This causes search engines to treat the URLs as different and split the search ranking: http://www.scottgu.com/albums.aspx http://scottgu.com/albums.aspx We can fix this by adding a new IIS Rewrite rule that automatically redirects anyone who navigates to the first URL (that has a www prefix) to instead go to the second URL.  Like before, we will setup the HTTP redirect to be a "permanent redirect" – which will indicate to search engines that they should follow the redirect and use the new URL they are redirected to as the identifier of the content they retrieve.  To create such a rule we'll click the "Add Rule" link in the URL Rewrite admin tool again.  This will cause the "Add Rule" dialog to appear again: The URL Rewrite admin tool has a built-in "Canonical domain name" rule template.  
When we select it and click the “ok” button we’ll see the following dialog which asks us if we want to create a redirect rule that automatically redirects users to a primary host name URL: Above I’m entering the primary URL address I want to expose to the web: scottgu.com.  When we click the “OK” button we’ll get a pre-written rule that automatically performs a permanent redirect if the URL has another leading domain name prefix.  This will save the following additional rule to our web.config file: <configuration>     <system.webServer>         <rewrite>             <rules>                 <rule name="Cannonical Hostname">                     <match url="(.*)" />                     <conditions logicalGrouping="MatchAll" trackAllCaptures="false">                         <add input="{HTTP_HOST}" pattern="^scottgu\.com$" negate="true" />                     </conditions>                     <action type="Redirect" url="http://scottgu.com/{R:1}" />                 </rule>                 <rule name="Default Document" stopProcessing="true">                     <match url="(.*?)/?Default\.aspx$" />                     <action type="Redirect" url="{R:1}/" />                 </rule>                 <rule name="Lower Case URLs" stopProcessing="true">                     <match url="[A-Z]" ignoreCase="false" />                     <conditions logicalGrouping="MatchAll" trackAllCaptures="false">                         <add input="{URL}" pattern="WebResource.axd" negate="true" />                     </conditions>                     <action type="Redirect" url="{ToLower:{URL}}" />                 </rule>                 <rule name="Trailing Slash" stopProcessing="true">                     <match url="(.*[^/])$" />                     <conditions logicalGrouping="MatchAll" trackAllCaptures="false">                         <add input="{REQUEST_FILENAME}" matchType="IsDirectory" negate="true" />                         <add input="{REQUEST_FILENAME}" matchType="IsFile" negate="true" />                         <add input="{URL}" pattern="WebResource.axd" negate="true" />                     </conditions>                     <action type="Redirect" url="{R:1}/" />                 </rule>             </rules>         </rewrite>     </system.webServer> </configuration> Try the Rule Out Now that we’ve saved the rule, let’s try it out on our site.  Try the following two URLs on my site: http://www.scottgu.com/albums.aspx http://scottgu.com/albums.aspx Notice that the first URL (which has the “www” prefix) now automatically does a redirect to the second URL which does not have the www prefix.  Because it is a permanent redirect, search engines will follow the URL and update the page ranking. 4 Simple Rules for Improved SEO The above 4 rules are pretty easy to setup and should take less than 15 minutes to configure on existing sites you already have.  The beauty of using a solution like the URL Rewrite Extension is that you can take advantage of it without having to change code within your web-site – and without having to break any existing links already pointing at your site.  Users who follow existing links will be automatically redirected to the new URLs you wish to publish.  And search engines will start to give your site a higher search relevancy ranking – which will list your site higher in search results and drive more traffic to it. 
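    As an example of the kind of customization mentioned earlier for static resources: if your site references images, style sheets or scripts with upper-case characters in their URLs, you can keep them out of the lower-casing redirect by adding one more condition clause to that rule.  The sketch below is only an illustration – the exact list of extensions is an assumption on my part, so adjust the pattern to whatever file types your site actually serves:

    <rule name="Lower Case URLs" stopProcessing="true">
        <match url="[A-Z]" ignoreCase="false" />
        <conditions logicalGrouping="MatchAll" trackAllCaptures="false">
            <add input="{URL}" pattern="WebResource.axd" negate="true" />
            <!-- Illustrative only: skip common static file types so they are not redirected just for casing -->
            <add input="{URL}" pattern="\.(jpg|jpeg|gif|png|css|js)$" negate="true" />
        </conditions>
        <action type="Redirect" url="{ToLower:{URL}}" />
    </rule>

    Because the conditions use "Match All", the redirect now only fires for URLs that are not WebResource.axd requests and that do not end with one of those static file extensions.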
    Customizing your URL Rewriting rules further is easy to do, either by editing the web.config file directly or, alternatively, by double-clicking the URL Rewrite icon within the IIS 7.x admin tool, which will list all the active rules for your web-site or application: Clicking any of the rules above will open the rules editor back up and allow you to tweak/customize/save them further. Summary Measuring and improving SEO is something every developer building a public-facing web-site needs to think about and focus on.  If you haven't already, download and use the SEO Toolkit to analyze the SEO of your sites today. New URL Routing features in ASP.NET MVC and ASP.NET Web Forms 4 make it much easier to build applications that have more control over the URLs that are published.  Tools like the URL Rewrite Extension that I've talked about in this blog post make it much easier to improve the URLs that are published from sites you already have built today – without requiring you to change a lot of code. The URL Rewrite Extension provides a bunch of additional great capabilities – far beyond just SEO – as well.  I'll be covering these additional capabilities more in future blog posts. Hope this helps, Scott

    Read the article

  • Kanban Tools Review

    - by GeekAgilistMercenary
    The first two sessions on Sunday were Collaboration and Why It Is So Hard and, as a perfect follow-up, a session on Kanban.  In that second session two online SaaS-style tools were mentioned: AgileZen and LeanKit.  I decided right then and there that I would throw together some first impressions and set up some sample projects.  I did this by creating an account in each tool and building the projects. AgileZen Account Creation Setting up the initial account required an e-mail verification, which is understandable.  Within a few seconds it was mailed out and I was logged in. Setting Up the Kanban Board The initial setup of the board was pretty easy.  I maybe clicked around an extra few times, but overall everything I needed to use the tool was immediately available.  The representation of everything was very similar to what one expects from a real Kanban Board too.  This is a HUGE plus, especially if a team is smart and places this tool in a centrally viewable area to allow for visibility. Each of the board items is just like a Post-it, being blue, grey, green, pink, or one of a few other colors.  Dragging them onto each swim lane on the board was flawless, making changes throughout the work super easy and intuitive. The other thing I really liked about AgileZen is that the Kanban Board had the swim lanes set up immediately.  One can change them, but when you know you immediately need a Ready Lane, a Working Lane, and a Complete Lane it is nice to just have them right in front of you in the interface.  In addition, the Backlog is simply a little tab on the left hand side.  This is perfect for the Backlog Queue: out of the way, with the focus on the primary items. Once I got the items onto the board I was easily able to get back to the actual work at hand versus playing around with the tool.  The fact that it was so easy to use, with a fast and simple UX and overall a great layout, put me back to work on the things I needed to do instead of sitting and playing with the tool.  That, in the end, is the key to using these tools. LeanKit Kanban Account Creation Setting up the account got me straight into the online tool.  This I thought was pretty cool. Setting Up the Kanban Board Setting up the Kanban Board within LeanKit was a bit of trouble.  There were multiple UX issues in regard to process and intuitiveness.  LeanKit basically forces one to design the whole board first, making no assumptions about how the board should look.  The swim lanes, in my humble opinion, should be set up immediately with the most common lanes: ready, working, and complete. The other UX hiccup I had a problem with is that as soon as I managed to get the swim lanes into place, I wanted to remove the redundant Backlog Lane.  The Backlog Lane, or Backlog Bucket, should sit somewhere off to the side, but I had accidentally added it as a lane.  Then on top of that I screwed up and added an item inside the lane, which then prevented me from deleting the lane.  I had to go back out of the lane manipulation, remove the item, and then remove the excess lane.  Summary LeanKit wasn't a bad interface, it just wasn't as good as AgileZen.  The AgileZen interface was just better UX design overall.  AgileZen also presents much better user interface graphical design altogether.  It is much closer to what the board would look like if it were a physical Kanban Board.  
    Since one of the HUGE reasons for Kanban is to increase visibility, the fact that the design is so similar to a real Kanban Board is actually a pretty big deal. This is an image (click for larger) that shows the two Kanban Boards side by side.  The one on the left is AgileZen and the one on the right is LeanKit. Original Entry

    Read the article

  • Adobe Photoshop CS5 vs Photoshop CS5 extended

    - by Edward
    Adobe Photoshop has been an industry standard for most web designers & photographers worldwide. Photoshop CS5 has made photo editing much more refined and the composition process easier than ever before.  To study the advantages of Photoshop CS5 Extended over Photoshop CS5 we have written this comparison article, from both a designer's & a photographer's perspective. Hopefully it will help you with your buying/upgrade decision. Photoshop CS5 Photoshop CS5 offers refined features and powerful photography tools. It makes the editing process easy, as fewer steps are involved to remove noise, add grain, create vignettes, correct lens distortions, sharpen, and create HDR images. It has quick image correction and color and tone control for professional purposes. Intelligent image editing and enhancement and extraordinarily advanced compositing have made it a better tool for photographers than earlier versions. It allows users to accelerate their workflow with fast performance on 64-bit Windows® and Mac hardware systems and smoother interactions thanks to more GPU-accelerated features. It also boasts state-of-the-art processing with Adobe Photoshop Camera Raw 6 and helps to maximize creative impact. It provides tremendous precision and freedom. It allows users to easily select intricate image elements, such as hair, and create realistic painting effects. It also allows you to remove any image element and see the space fill in almost magically. It offers easy access to core editing, a streamlined workflow and a flexible working environment, along with creative tools and content. Photoshop CS5 Extended Photoshop CS5 Extended is quite innovative and incorporates 3D elements into 2D artwork directly within the digital imaging application, which gives users an easy on-ramp to 3D image creation. It also provides 3D editing. It has intelligent image editing and enhancement. It offers advanced compositing and an extraordinary painting and drawing toolset. It provides for video and animation design. It helps when working with specialized images for architecture, manufacturing, engineering, science, and medicine. Where CS5 Extended scores over CS5 CS5 Extended has many features that are not included in CS5, and these make it score over CS5: Technology for creating 3D extrusions 3D material library and picker Depth of field for 3D 3D merging and scene composition improvements 3D workflow improvements Customization of 3D features Image-based light sources Shadow catcher for shadow creation Enhanced ray tracer Context-sensitive widgets, which allow easy control of objects, lights and cameras Overlays for materials and mesh boundaries Photoshop CS5 Extended is far better than CS5 as it incorporates all the features of CS5 and adds more advanced ones. It allows 3D creation and editing and has other advanced tools to make it better. Redefining the Image-Editing Experience: A Photographer's Point of View Photoshop CS5 delivers amazing features and creative options so even new users can perform advanced image manipulations and compositions. The breathtaking image intelligence behind Content-Aware Fill magically removes any image detail or object, examines the surroundings and seamlessly fills in the space left behind. Lighting, tone and noise of the surrounding area can be matched. The new Refine Edge makes nearly-impossible image selections possible. Masking was never easier – the toughest types of edges, such as hair and foliage, are easier to fix. 
    To sum up, the following are a few advantages of CS5 Extended over previous versions: 64-bit processing Content-Aware Fill Refine Edge, which "makes nearly-impossible image selections possible" HDR Pro, including ghost artifact removal and HDR toning, which gives the look of HDR with a single exposure New brush options Improved image management with enhanced Adobe Bridge Lens corrections Improved black-and-white conversions Puppet Warp: precisely reposition or warp any image element Adobe Camera Raw 6 Upgrade Buy Online Pricing and Availability Adobe Photoshop CS5 and CS5 Extended are available through Adobe Authorized Resellers & the Adobe Store. The estimated street price for Adobe Photoshop CS5 is US$699 and US$999 for Photoshop CS5 Extended. Upgrade pricing and volume licensing are also available. Related posts: 10 Free Alternatives for Adobe Photoshop Software Web based Alternatives to Photoshop 15 Useful Adobe Illustrator Tutorials For Designers

    Read the article

  • Increasing touch surface (#wp7dev)

    - by Laurent Bugnion
    When you design for Windows Phone 7 (or for any touch device, for that matter, and most especially small screens), you need to be very careful to give enough surface to your users’ fingers. It is easy to miss a touch on such small screens, and that can be horrifyingly frustrating. This is especially true when people are on the move, and trying to hit the control while walking and holding their device in one hand, or when the device is mounted in a car and vibrating with the engine. In my experience, a touch surface should be ideally minimum 60x60 pixels to be easy to activate on the Windows Phone 7 screen (which is, as we know, 800 pixels x 480 pixels). Ideally, I try to make my touch surfaces 80x80 pixels minimum. This causes a few design challenges of course. Using transparent backgrounds However, one thing is helping us tremendously: some surfaces can be made transparent, and yet react to touch. The secret is the following: If you have a panel that has a Null background (i.e. the Background is not set at all), then the empty surface does not react to touch. If however the Background is set to the Transparent color (or any color where the Alpha channel is set to 0), then it will react to touch. Setting a transparent background is easy. For example: <Grid Background="#00000000"> </Grid> or <Grid Background="Transparent"> </Grid> In C#: var grid = new Grid { Background = new SolidColorBrush( Colors.Transparent) }; Using negative margins Having a transparent background reactive to touch is a good start, but in addition, you must make sure that the surface is big enough for my clumsy fingers. One way to achieve that is to increase the transparent, touch-reactive surface, and reposition the element using negative margins. For example, consider the following UI. I changed the transparent background of the HyperlinkButton to Red, in order to visualize the touch surface. In this figure, the Settings HyperlinkButton is 105 pixels x 31 pixels. This is wide enough, but really small in height and easy to miss. To improve this, we can use negative margins, for instance: <HyperlinkButton Content="Settings" HorizontalAlignment="Right" VerticalAlignment="Bottom" Height="60" Margin="0,0,0,-15" /> Notice the usage of negative bottom margin to bring the HyperlinkButton back at the bottom of the main Grid’s first row, where it belongs. And the result is: Notice how the touch surface is much bigger than before. This makes the HyperlinkButton easier to reach, and improves the user experience. With the background set back to normal, the UI looks exactly the same, as it should: In summary: Remember to maximize the touch surface for your controls. Plan your design in consequence by reserving enough room around each control to allow their hit surface to be expanded as shown in this article. Do not cram too many controls in one page. If REALLY needed, use an additional page (or even better: use a Pivot control with multiple pivot items) for the controls that don’t fit on the first one. This should ensure a smoother user experience and improved touch behavior. Happy coding! Laurent   Laurent Bugnion (GalaSoft) Subscribe | Twitter | Facebook | Flickr | LinkedIn

    Read the article

  • The All New Hotmail Looks Very Impressive [Video Tour]

    - by Gopinath
    With loads of new features being introduced into GMail every now and then, Microsoft can't sit back and relax any more. Microsoft realized this and worked hard to introduce really impressive features in the upcoming version of Windows Live Hotmail that was previewed a couple of days ago. Most of the new features announced in the upcoming version focus on the most important need of email users – de-cluttering the mailbox and effectively managing email overload. Here are the highlights of the new features New Features Sweep away clutter – This is the most impressive of the new features. It allows you to manage email overload. If you've subscribed to a newsletter but decided not to allow it into your inbox, you can activate the sweep feature to move all the messages of that newsletter into a folder other than your inbox. This may sound similar to the filters option in GMail, but the workflow is very easy in Hotmail. Quickly find messages – Easy-to-use options are provided to see mails in separate views, like mails from contacts, social networking mail, mails from e-mail subscription services, etc. Now it's easy to prioritize email checking however you wish to. I prefer to check mails from my contacts first, then social networking messages and then the newsletter subscriptions. Improved spam detection – The spam detection rules are tightened for better spam protection, and Hotmail also learns from user actions to effectively catch spam. No more mailbox storage restrictions – With a smart decision by Microsoft, users no longer need to worry about the storage restrictions of their mailbox – large Hotmail attachments can be stored in Windows Live SkyDrive. With Hotmail, we've combined the simplicity of sending photos through email with the power of Windows Live SkyDrive so that you can send up to 200 photos, each up to 50 MB in size, all in a single email. You can send all your vacation photos at once without worrying about attachment limits. Excellent Integration With Office Web Apps - Viewing and editing of Office documents attached to emails are made very easy by integrating Office Web Apps with Hotmail. When you receive a document/presentation/spreadsheet in Hotmail, you can view it, edit it, save it or even send the modified document back to the original sender – all without leaving Hotmail. Inline viewing options for Photos, Videos, Social Network Messages – You can view photos embedded in the mail as slideshows (with the help of Silverlight), YouTube & Hulu videos can be played inline, and you can track shipping notifications. Threaded conversations – emails in Hotmail are grouped just like they are in GMail. Others - enhanced account protection, full-session SSL, multiple email accounts, subfolders, contact management Video Tour Of New Features Here is an impressive video tour of the new Hotmail features. When are these new features coming to Hotmail? The majority of the new features announced today will be rolled out gradually to all users in the coming weeks. But advanced features like Office integration with Hotmail are expected to take a couple of months to reach general availability. Will You Switch back to Hotmail? Will these features lure GMail/Yahoo users to switch back to Hotmail? Maybe not immediately, but these features may keep existing users from leaving Hotmail. I used Hotmail in the pre-GMail era, and now I use my Hotmail id only to sign in to Microsoft websites that require Hotmail authentication. It's been years since I composed a new email in Hotmail. 
    Even though the new features announced by Hotmail are very impressive, I like the way GMail rapidly brings in new features at regular intervals. If Hotmail also keeps innovating with new features at regular intervals, then there is a good chance its old users will return home. Join us on Facebook to read all our stories right inside your Facebook news feed.

    Read the article

  • Growing Talent

    The subtitle of Daniel Coyle's intriguing book The Talent Code is Greatness Isn't Born. It's Grown. Here's How. The Talent Code proceeds to lay out a theory of how expertise can be cultivated through specific practices that encourage the growth of myelin in the brain. Myelin is a material that is produced and wraps around heavily used circuits in the brain, making them more efficient. Coyle uses an analogy that geeks will appreciate. When a circuit in the brain is used a lot (i.e. a specific action is repeated), the myelin insulates that circuit, increasing its bandwidth from telephone-over-copper to high speed broadband. This leads to the funny phenomenon of effortless expertise. Although highly skilled, the best players make it look easy. Coyle provides some biological backing for the long-held theory that it takes 10,000 hours of practice to achieve mastery over a given subject. 10,000 hours or 10 years, as in Teach Yourself Programming in Ten Years and others. However, it is not just that more hours equals more mastery. The other factors that Coyle identifies include deep practice, practice which crucially involves drills that are challenging without being impossible. Another way to put it is that every day you spend doing only tasks you find monotonous and automatic, you are literally stagnating your brain's development! Perhaps Coyle's subtitle needs one more phrase: Greatness Isn't Born. It's Grown. Here's How. And oh yeah, it's not easy. Challenging yourself, continuing to persist in the face of repeated failures, practicing every day is not easy. As consultants, we sell our expertise, so it makes sense that we plan projects so that people can play to their strengths. At the same time, an important part of our culture is constant improvement, challenging yourself to be better. And the balancing contest ensues. I just finished working on a proof of concept (POC) we did for a project we are bidding on. It was completely time boxed, so our team naturally split responsibilities amongst ourselves according to who was better at what. I must have been pretty bad at the other components, as I found myself working on the user interface, not my usual strength. The POC had a website frontend, and one thing I do know is HTML. After starting out in pure ASP.NET WebForms, I got frustrated as time was ticking: I knew what I wanted in HTML, but I couldn't coax the right output out of the ASP.NET controls. I needed two or three elements on the screen that were identical in layout, with different content. With a backup plan of writing the HTML into the response by hand, I decided to challenge myself a bit and see what I could do in an hour or two using the Microsoft-submitted jQuery micro-templating JavaScript library. This risk paid off. I was able to quickly get the user interface up and running, responsive to the JSON data we were working with. I felt energized by the double win of getting the POC ready and learning something new. Opportunities specifically like this POC don't come around often, but the takeaway is that while it won't be easy, there are ways to generate your own opportunities to grow towards greatness. Did you know that DotNetSlackers also publishes .net articles written by top known .net Authors? We already have over 80 articles in several categories including Silverlight. Take a look: here.

    Read the article

  • Introducing the Oracle Linux Playground yum repo

    - by wcoekaer
    We just introduced a new yum repository/channel on http://public-yum.oracle.com called the playground channel. What we started doing is the following: when a new stable mainline kernel is released by Linus or GregKH, we internally build RPMs to test it and do some QA work around it to keep track of what's going on with the latest development kernels. It helps us understand how performance moves up or down, and if there are issues we try to help look into them and of course send that stuff back upstream. Many Linux users out there are interested in trying out the latest features but there are some potential barriers to doing this. (1) In general, you are looking at an upstream development distribution, which means that everything changes, both in userspace (random applications) and kernel. Projects like Fedora are very useful, and for someone that wants to just see how the entire distribution evolves with all the changes, this is a great way to be current. A drawback here, though, is that if you have applications that are not part of the distribution, there's a lot of manual work involved or they might just not work because the changes are too drastic. The introduction of systemd is a good example. (2) When you look at many of our customers that are interested in our database products or applications, the starting point of having a supported/certified userspace/distribution, like Oracle Linux, is a much easier way to get your feet wet in seeing what new/future Linux kernel enhancements could do. This is where the playground channel comes into play. When you install Oracle Linux 6 (which anyone can download and use from http://edelivery.oracle.com/linux), grab the latest public yum repository file http://public-yum.oracle.com/public-yum-ol6.repo, put it in /etc/yum.repos.d and enable the playground repo: [ol6_playground_latest] name=Latest mainline stable kernel for Oracle Linux 6 ($basearch) - Unsupported baseurl=http://public-yum.oracle.com/repo/OracleLinux/OL6/playground/latest/$basearch/ gpgkey=http://public-yum.oracle.com/RPM-GPG-KEY-oracle-ol6 gpgcheck=1 enabled=1 Now, all you need to do is type yum update and you will be downloading the latest stable kernel, which will install cleanly on Oracle Linux 6. Thus you end up with a stable Linux distribution where you can install all your software, and then download the latest stable kernel (at time of writing this is 3.6.7) without having to recompile a kernel, without having to jump through hoops. There is of course a big, very important disclaimer: this is NOT for PRODUCTION use. We want to try and help make it easy for people that are interested to see, from a user perspective, where the Linux kernel is going, and make it easy to install and use it and play around with new features – without having to learn how to compile a kernel and without necessarily having to install a complete new distribution with all the changes top to bottom. So we don't or won't introduce any new userspace changes; this project really is around making it easy to try out the latest upstream Linux kernels in a very easy way on an environment that's stable and that you can keep current, since all the latest errata for Oracle Linux 6 are published on the public yum repo as well. So: one repository location for all your current changes and the upstream kernels. We hope that this will get more users to try out the latest kernel and report their findings. We are always interested in understanding stability and performance characteristics. 
    As new features that could potentially be interesting or useful for various products go into the mainline kernel, we will try to point them out on our blogs and give an example of how something can be used, so you can try it out for yourselves. Anyway, I hope people will find this useful and that it will help increase interest in upstream development beyond reading lkml by some of the more non-kernel-developer types.

    Read the article

  • Using Private Extension Galleries in Visual Studio 2012

    - by Jakob Ehn
    Note: The installer and the complete source code are available over at CodePlex at the following location: http://inmetavsgallery.codeplex.com   Extensions and addins are everywhere in the Visual Studio ALM ecosystem! Microsoft releases new cool features in the form of extensions and the list of 3rd party extensions that plug into Visual Studio just keeps growing. One of the nice things about VSIX extensions is how they are deployed. Microsoft hosts a public Visual Studio Gallery where you can upload extensions and make them available to the rest of the community. Visual Studio checks for updates to the installed extensions when you start Visual Studio, and installing/updating the extensions is fast since it is only a matter of extracting the files within the VSIX package to the local extension folder. But for custom, enterprise-specific extensions, you don't want to publish them online to the whole world, but you still want an easy way to distribute them to your developers and partners. This is where Private Extension Galleries come into play. In Visual Studio 2012, it is now possible to add custom extension galleries that can point to any URL, as long as that URL returns the expected content of course (see below). Registering a new gallery in Visual Studio is easy, but there is very little documentation on how to actually host the gallery. Visual Studio galleries use Atom Feed XML as the protocol for delivering new and updated versions of the extensions. This MSDN page describes how to create a static XML file that returns the information about your extensions. This approach works, but requires manual updates of that file every time you want to deploy an update of the extension. Wouldn't it be nice to have a web service that takes care of this for you, that just lets you drop a new version of your VSIX file and have it automatically detect the new version and produce the correct Atom Feed XML? Well, search no more – this is exactly what the Inmeta Visual Studio Gallery Service does for you :-) Here you can see that in addition to the standard Online galleries there is an Inmeta Gallery that contains two extensions (our WIX templates and our custom TFS Checkin Policies). These can be installed/updated in the same way as extensions from the public Visual Studio Gallery. Installing the Service Download the installer (Inmeta.VSGalleryService.Install.msi) for the service and run it. The installation is straightforward: just select the web site, application pool and (optionally) a virtual directory where you want to install the service.   Note: If you want to run it in the web site root, just leave the application name blank. Press Next and finish the installer. Open web.config in a text editor and locate the <applicationSettings> element. Edit the following setting values: FeedTitle – This is the name that is shown if you browse to the service using a browser. It is not used by Visual Studio. BaseURI – When Visual Studio downloads the extension, it will be given this URI + the name of the extension that you selected. This value should be in the following format: http://SERVER/[VDIR]/gallery/extension/ VSIXAbsolutePath – This is the path where you will deploy your extensions. This can be a local folder or a remote share. You just need to make sure that the application pool identity account has read permissions on this folder. Save web.config to finish the installation. Open a browser and enter the URL of the service. 
    It should show an empty Feed page:   Adding the Private Gallery in Visual Studio 2012 Now you need to add the gallery in Visual Studio. This is very easy and is done as follows: Go to Tools –> Options and select Environment –> Extensions and Updates. Press Add to add a new gallery. Enter a descriptive name, and add the URL that points to the web site/virtual directory where you installed the service in the previous step.   Press OK to save the settings. Deploying an Extension This one is easy: just drop the file in the designated folder! :-)  If it is a new version of an existing extension, the developers will be notified in the same way as for extensions from the public Visual Studio Gallery: I hope that you will find this service useful – please contact me if you have questions or suggestions for improvements!

    Read the article

  • Taking a look at the Mindscape Phone Elements for WP7.

    - by mbcrump
    I recently heard that Mindscape HQ had released their Windows Phone 7 controls and had to take a look at them. 100 FREE LICENSE GIVEAWAY! Before we get to the screenshots, you will be pleased to learn that my usergroup called "Allaboutxaml" has partnered with Mindscape HQ and is giving away 100 licenses. You can check out the site here to get your free controls. But please hurry – after the 100 are gone I will not have any more to give away! A few links to read first: The official blog post from Mindscape HQ detailing the release. They also have the links to download the trial and get started. The Phone Elements official forum! So, let's get started. After you download the controls go ahead and double click the .exe to get started installing them. After everything is installed, you will have the following program group. I'd recommend clicking on the Phone Elements Directory to get started: Let's go over each element: Bin – Just the .DLL that's required to use Mindscape HQ WP7 Controls in your project. Documentation – a .CHM file that will show you how to get your project up and running quickly. Resources – Just a few image files. Samples – This is a full WP7 project that details every control. The thing that I was most interested in was how the controls look and whether they are easy to use. I've always believed that if you're paying for controls, then they should hold your hand through using them. You will be pleased to know that Mindscape made them very easy to use. First, the WP7 project in the "Samples" folder just works. Double click on the solution file and you are in an emulator looking at the controls. Since you have the source code for every control, it's a matter of copying/pasting the code into your project to get it to work. What I did was play with the controls in the emulator until I found one I could use. Then I looked at the Visual Studio solution and found the Page that contained the control. Mindscape makes this very easy to do with their layout: So, the one that I was interested in was the Looping List Box.  Here is a demo of it: I wanted to see how they were populating the numbers 1-100 so I found the code behind and noticed it was just this one line. LoopingListBox1.DataSource = new NumericDataSource() { MinValue = 1, MaxValue = 100 }; In case you are wondering, the NumericDataSource was created by Mindscape and you can view the Declaration to find out more about it:   So, the controls are pretty much that easy to use. Play with the emulator and find the control you want to use. Find the XAML file in the Sample Solution and copy/paste the code. Let's go ahead and take a look at the controls available: They also have a great variety of Charting controls: Overall it's a nice set of WP7 controls. Feel free to leave a comment below on anything you would like to see and I will make sure that Mindscape HQ gets the message. Don't forget: if you are one of the first 100 people reading this article, you will get a free license.  Subscribe to my feed CodeProject

    Read the article

  • Why is jQuery so widely adopted versus other Javascript frameworks?

    - by Andrew Moore
    I manage a group of programmers. I do value my employees' opinions, but lately we've been divided as to which framework to use on web projects. I personally favor MooTools, but some of my team seems to want to migrate to jQuery because it is more widely adopted. That by itself is not enough for me to allow a migration. I have used both jQuery and MooTools. This particular essay tends to reflect how I feel about both frameworks. jQuery is great for DOM manipulation, but seems to be limited to helping you do that. Feature-wise, both jQuery and MooTools allow for easy DOM Selection and Manipulation: // jQuery $('#someContainer div[class~=dialog]') .css('border', '2px solid red') .addClass('critical'); // MooTools $('#someContainer div[class~=dialog]') .setStyle('border', '2px solid red') .addClass('critical'); Both jQuery and MooTools allow for easy AJAX: // jQuery $('#someContainer div[class~=dialog]') .load('/DialogContent.html'); // MooTools (Using shorthand notation, you can also use Request.HTML) $('#someContainer div[class~=dialog]') .load('/DialogContent.html'); Both jQuery and MooTools allow for easy DOM Animation: // jQuery $('#someContainer div[class~=dialog]') .animate({opacity: 1}, 500); // MooTools (Using shorthand notation, you can also use Fx.Tween). $('#someContainer div[class~=dialog]') .set('tween', {duration: 500}) .tween('opacity', 1); jQuery offers the following extras: Large community of supporters Plugin Repository Integration with Microsoft's ASP.NET and VisualStudio Used by Microsoft, Google and others MooTools offers the following extras: Object Oriented Framework with Classic OOP emulation for JS Extended native objects Higher consistency between browsers for native function support Easier code reuse Used by The World Wide Web Consortium, Palm and others. Given that, it seems that MooTools does everything jQuery does and more (some things I cannot do in jQuery I can do in MooTools), but jQuery has a smaller learning curve. So the question is, why did you or your team choose jQuery over another JavaScript framework? Note: While I know and admit jQuery is a great framework, there are other options around and I'm trying to make a decision as to why jQuery should be our choice versus what we use right now (MooTools).

    Read the article

< Previous Page | 29 30 31 32 33 34 35 36 37 38 39 40  | Next Page >