Search Results

Search found 108599 results on 4344 pages for 'one click publish'.


  • One motherboard died. Replaced it, and the new one seems about to fail the same way

    - by CoachNono
    My computer started to freeze randomly. I was fairly sure the hard drive was beginning to fail, but before I could replace it the motherboard died completely. I bought the same model and reinstalled it. Everything worked fine until the computer started to freeze again, showing the same symptoms as before. I don't want this motherboard to burn out too, and I'm really wondering what the cause could be... Could the power supply or the video cards have killed the motherboard? I tested the power supply voltages and they seemed fine, and the machine ran with this configuration for four years. Here are the specs: ASUS P5N-D motherboard (LGA 775, NVIDIA 750i SLI), Intel Q6600, 2x EN9600GT 512 MB, 650 W Corsair PSU

    Read the article

  • Mount drive with two drive letters instead of one.

    - by grub
    Hi everyone, a co-worker of mine absolutely insists that it's possible to mount a drive in Windows Server 2003 with two letters instead of one. He's not talking about mounting a drive into an empty NTFS folder. Example: use ab:\ instead of a:\. I'm pretty sure that's not possible. I work with over 300 Windows servers and have never noticed that kind of feature, and I can't find any Knowledge Base or TechNet article that describes it. Please tell me whether it's possible or not. If it is possible, please point me to the corresponding Microsoft Knowledge Base or TechNet articles. Thank you very much.

    Read the article

  • ftp-client works fine. ftp-tls-client fails on one computer and works on another

    - by ispiro
    Connecting to FTP from a Windows Server 2012 machine works both secured (over TLS) and unsecured. From a Windows 7 machine it succeeds unsecured but fails when secured. (Using explicit TLS and passive mode.) FileZilla reports: 234 AUTH command ok. Expecting TLS Negotiation. Initializing TLS... Connection timed out. I've tried many things but nothing helps. (I'm also trying this programmatically. For details see: http://stackoverflow.com/questions/25393716/ftp-ssl-fails-after-expecting-tls-negotiation ) The fact that it does succeed from one computer shows that the FTP server is fine, and the fact that the Windows 7 computer succeeds without TLS suggests it's not a NAT/firewall problem (besides, it failed even after disabling the firewall etc.). I'm not sure where to start looking. Perhaps a difference between client Windows and Windows Server? EDIT: The FTP server is on a Windows Server 2012.
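    For reference, here is a minimal C# sketch of the kind of explicit-TLS connection being attempted, using the stock FtpWebRequest class. The host name and credentials are placeholders, and the certificate callback that accepts everything is only for diagnosing the handshake, not for production use:

      using System;
      using System.IO;
      using System.Net;

      class FtpTlsProbe
      {
          static void Main()
          {
              // Diagnostic only: accept any server certificate so a certificate
              // problem can be ruled out. Do not ship this.
              ServicePointManager.ServerCertificateValidationCallback =
                  (sender, cert, chain, errors) => true;

              var request = (FtpWebRequest)WebRequest.Create("ftp://ftp.example.com/");
              request.Method = WebRequestMethods.Ftp.ListDirectory;
              request.EnableSsl = true;      // explicit FTPS (AUTH TLS)
              request.UsePassive = true;     // passive mode, as in the question
              request.Credentials = new NetworkCredential("user", "password");
              request.Timeout = 15000;

              using (var response = (FtpWebResponse)request.GetResponse())
              using (var reader = new StreamReader(response.GetResponseStream()))
              {
                  Console.WriteLine(reader.ReadToEnd());
              }
          }
      }

    If this times out on the Windows 7 machine but works from the Server 2012 machine, comparing the TLS/SSL protocol versions enabled on the two clients would be a reasonable next step.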

    Read the article

  • Is it better to combine Apache for file manipulation and uploads with Nginx for static file serving, or to use one of the two alone?

    - by user1032393
    Based on my research, I've read that nginx is best and ideal for serving up static files and images. My application depends heavily on uploading of images and rewriting them, then serving them up. Given that I only have one VPS currently, it has been suggested that I use nginx to serve up the images and website, and reverse proxy to Apache (on the same VPS) to rewrite files with image magick and handle the file uploads. Which would be the best solution, Apache, Nginx, or Apache + Nginx? In terms of best solution, I'm looking at minimal average RAM consumption, while maintaining decent load speed of maybe sub 2 seconds?

    Read the article

  • Why might one hard disk perform slower than another?

    - by Styne666
    I have just bought two WD 3TB Reds (WD30EFRX) for a FreeNAS box, and while doing burn-in testing one seems to be consistently taking about 10% longer than the other. So far I've done a dd read test of the whole device and a long SMART test, and it's currently halfway through a badblocks -wvs run. The second device is lagging behind the first on all of them. I'm running these commands on Debian stable in two Konsole tabs. Is there a reason this could be considered normal behaviour, or is it worth running the tests independently? They're both plugged into the LSI 2308 (IT mode) on a Supermicro X10SL7-F.

    Read the article

  • How to insert inline content from one FlowDocument into another?

    - by Robert Rossney
    I'm building an application that needs to allow a user to insert text from one RichTextBox at the current caret position in another one. I spent a lot of time screwing around with the FlowDocument's object model before running across this technique - source and target are both FlowDocuments: using (MemoryStream ms = new MemoryStream()) { TextRange tr = new TextRange(source.ContentStart, source.ContentEnd); tr.Save(ms, DataFormats.Xaml); ms.Seek(0, SeekOrigin.Begin); tr = new TextRange(target.CaretPosition, target.CaretPosition); tr.Load(ms, DataFormats.Xaml); } This works remarkably well. The only problem I'm having with it now is that it always inserts the source as a new paragraph. It breaks the current run (or whatever) at the caret, inserts the source, and ends the paragraph. That's appropriate if the source actually is a paragraph (or more than one paragraph), but not if it's just (say) a line of text. I think it's likely that the answer to this is going to end up being checking the target to see if it consists entirely of a single block, and if it does, setting the TextRange to point at the beginning and end of the block's content before saving it to the stream. The entire world of the FlowDocument is a roiling sea of dark mysteries to me. I can become an expert at it if I have to (per Dostoevsky: "Man is the animal who can get used to anything."), but if someone has already figured this out and can tell me how to do this it would make my life far easier.
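    Following up on the idea at the end of the question, here is a rough, untested sketch of one way to narrow the range to a single block's inline content before serializing it, so that a one-paragraph source gets inserted inline rather than as a new paragraph (the method and variable names are illustrative):

      using System.IO;
      using System.Windows;
      using System.Windows.Controls;
      using System.Windows.Documents;

      static void InsertAtCaret(FlowDocument source, RichTextBox target)
      {
          TextPointer start = source.ContentStart;
          TextPointer end = source.ContentEnd;

          // If the whole source document is a single paragraph, serialize only the
          // paragraph's content so no paragraph break is introduced at the caret.
          Paragraph onlyParagraph = source.Blocks.FirstBlock as Paragraph;
          if (source.Blocks.Count == 1 && onlyParagraph != null)
          {
              start = onlyParagraph.ContentStart;
              end = onlyParagraph.ContentEnd;
          }

          using (MemoryStream ms = new MemoryStream())
          {
              TextRange tr = new TextRange(start, end);
              tr.Save(ms, DataFormats.Xaml);
              ms.Seek(0, SeekOrigin.Begin);

              tr = new TextRange(target.CaretPosition, target.CaretPosition);
              tr.Load(ms, DataFormats.Xaml);
          }
      }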

    Read the article

  • How can one set up a version control system on a local network, without a server?

    - by Andrew
    Edit: OK, so I've learned that I probably need a distributed version control system. Are there any UI-based ones, and do they allow you to merge with other users on the network? This is kind of a two-part question, so here it goes. I want to start developing a web application at home (with multiple developers). However, I don't have a dedicated server and don't want to pay for one. So first, I don't know which version control system to use for this case; at work we mostly have TFS set up, so I am not too familiar with what's out there. What are the best free CVS/SVN-style tools out there? Second, is it possible to somehow set up version control where there is no dedicated server and the clients each store up to one week of source code history from the last check-in? Also, it would be helpful if it could integrate with Visual Studio, though that isn't essential. Problem: there are five users, and one acts as the server. Server connected: all OK. Server disconnected: no one can share. What I am looking for: no server; users still have versioning based on the version id of the last check-in. Users check all the copies on the network to make sure they aren't outdated based on their last version id; if they are up to date they check in and set the current version id +1, otherwise they merge/get latest first.

    Read the article

  • How to Create a Div Toggle Effect Using jQuery?

    - by ricky roy
    Hi, I want the following behaviour, but with a slight change to it: http://acrisdesign.com/demo/toggle/ Please use the above link as the reference for the example below. It shows two toggle effects, on hover and on click. My requirement: when someone clicks the top of the div, it should expand, and there should be a close button inside the div; hovering over or clicking that button should close the div again. When someone hovers over the link it should expand: the first click expands it, and moving the mouse off the link collapses it. There are three "click here" toggle links; when someone clicks one of them, the demo opens a space between them, but my requirement is for the expanded content to be displayed above the three "click here" links. If you have an example, please let me know. Thanks & regards, Basat

    Read the article

  • C#/.NET library for source code formatting, like the one used by Stack Overflow?

    - by Lasse V. Karlsen
    I am writing a command line tool to convert Markdown text to HTML output, which seems easy enough. However, I am wondering how to get nice syntax coloring for embedded code blocks, like the one used by Stack Overflow. Does anyone know either what library Stack Overflow is using, or whether there's a library out there that I can easily reuse? It would need to have some of the same "intelligence" found in the one Stack Overflow uses, basically doing a best attempt at figuring out the language in use in order to pick the right keywords. What I want is for my own program to handle a block like the following: if (a == 0) return true; if (a == 1) return false; // fall-back Markdown Sharp, the library I'm using, by default outputs the above as a simple pre/code HTML block, with no syntax coloring. I'd like the same type of handling as Stack Overflow's formatting; the above contains blue "return" keywords, for example. Or, hmm, after checking the source of this Stack Overflow page after adding the code example, I notice that it too is formatted as a simple pre/code block. Is it pure JavaScript magic at work here, so perhaps there's no such library? If there's no library that will automagically determine a possible language from the keywords used, is there one that would work if I explicitly told it the language? Since this is "my" markdown command-line tool, I can easily add syntax if I need to.

    Read the article

  • What is better in WPF for UI layout: one Grid, or nested Grids?

    - by Matthijs Wessels
    I am making a UI in WPF. I have a bunch of functional areas and I use a Grid to organize them. The Grid I want is not uniform, in that some functional areas will span multiple cells in the Grid, and I was wondering what the best practice is for solving this. Should I create one Grid and then, for each functional area, set it to span multiple cells, or should I split it up into multiple nested Grids? In this image, the leftmost panel (panels separated by the gray bar) is what I want. The middle panel shows one Grid where the blue lines are overlapped by a functional area. The rightmost panel shows how I could do it with nested Grids. You can see the green Grid has one horizontal split. In the bottom cell is the yellow Grid with a vertical split. Inside the left cell is the red Grid with again a horizontal split. I was just wondering which is best practice, the middle or the right panel.
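    For what it's worth, a small code-behind sketch of the single-Grid approach, where the panel that needs to cover several cells simply sets Grid.RowSpan / Grid.ColumnSpan (element names and brushes are made up for illustration; this would live in a Window's code-behind):

      using System.Windows.Controls;
      using System.Windows.Media;

      static Grid BuildLayout()
      {
          // One 2x2 Grid; the left panel spans both rows, so no nested Grid is needed.
          var grid = new Grid();
          grid.RowDefinitions.Add(new RowDefinition());
          grid.RowDefinitions.Add(new RowDefinition());
          grid.ColumnDefinitions.Add(new ColumnDefinition());
          grid.ColumnDefinitions.Add(new ColumnDefinition());

          var leftPanel = new Border { Background = Brushes.LightBlue };
          Grid.SetRowSpan(leftPanel, 2);     // the functional area covering two cells
          Grid.SetColumn(leftPanel, 0);

          var topRight = new Border { Background = Brushes.LightGreen };
          Grid.SetRow(topRight, 0);
          Grid.SetColumn(topRight, 1);

          var bottomRight = new Border { Background = Brushes.LightYellow };
          Grid.SetRow(bottomRight, 1);
          Grid.SetColumn(bottomRight, 1);

          grid.Children.Add(leftPanel);
          grid.Children.Add(topRight);
          grid.Children.Add(bottomRight);
          return grid;
      }

    The nested-Grid version would replace the span with a second Grid in the right-hand column; both render the same layout, so it largely comes down to which one is easier to read and maintain.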

    Read the article

  • Detecting one point's location compared to two other points.

    - by WizardOfOdds
    Hi all, you can check my profile, this is not homework. I've got an interesting little problem to solve in a very real piece of software and I'm looking for an easy way to solve it. I've got two fixed points on screen (they're fixed, but I don't know their positions beforehand) that are not at the same location. These two fixed points form an imaginary line. Now I've got a third point that is "on one side" of that line (it cannot be on the line). The user can grab the point (the user actually grabs an object, which I track by its center, and that center is the point I'm interested in) and drag it, but it cannot "cross" the imaginary line. What is the easiest way to detect whether the user is crossing the imaginary line? Example: a c / / (c cannot be dragged here) / b Or: c b -------------- a (c cannot be dragged here) So what is an easy way to detect whether c is staying on the correct "side" of the line (I drew segments here, but it can really be thought of as a line)? One way to detect this is to take the destination point d and see if segment (c,d) intersects line (a,b), but isn't there an easier way? Can't I just do some 2D dot-product magic here and have basically a one- or two-liner solving my issue?
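    For what it's worth, the usual one-liner is the sign of a 2D cross product (the perp-dot product) rather than a dot product: compute which side of the line through a and b a point falls on, and only allow drag destinations whose sign matches the starting side. A toolkit-neutral sketch (the tiny point struct is just a stand-in for whatever type the app already uses):

      using System;

      struct Pt { public double X, Y; public Pt(double x, double y) { X = x; Y = y; } }

      static class LineSide
      {
          // > 0: p lies on one side of the line through a and b; < 0: the other side; 0: on the line.
          public static double Side(Pt a, Pt b, Pt p)
          {
              return (b.X - a.X) * (p.Y - a.Y) - (b.Y - a.Y) * (p.X - a.X);
          }

          // Allow the drag only while the destination d stays on the same side as c's starting position.
          public static bool SameSide(Pt a, Pt b, Pt c, Pt d)
          {
              return Math.Sign(Side(a, b, c)) == Math.Sign(Side(a, b, d));
          }
      }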

    Read the article

  • How do I link one listview to another to create a football league table?

    - by Richard Nixon
    Hi, I am creating a football results system in Windows Forms C#. I have one ListView where data is entered. It has 4 columns: 2 with team names linked to combo boxes and 2 with the scores linked to NumericUpDown controls. There are 3 buttons to add results, remove and clear. The code is below: private void addButton_Click(object sender, EventArgs e) { { ListViewItem item = new ListViewItem(comboBox1.SelectedItem.ToString()); item.SubItems.Add(numericUpDown1.Value.ToString()); item.SubItems.Add(numericUpDown2.Value.ToString()); item.SubItems.Add(comboBox2.SelectedItem.ToString()); listView1.Items.Add(item); } } private void clearButton_Click(object sender, EventArgs e) { listView1.Items.Clear(); } private void removeButton_Click(object sender, EventArgs e) { foreach (ListViewItem itemSelected in listView1.SelectedItems) { listView1.Items.Remove(itemSelected); } } I have another ListView that I want to link the first one to. The second one is a usual English football league table, and I want to use maths to add up the games played, the points, etc. Please help. Cheers
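    One possible way to build the league table from the results list. This is a rough sketch intended to live in the same form class; the second ListView (listView2) and its columns (played, points) are assumptions based on the description, and real code would want error handling around the parsing:

      using System.Collections.Generic;
      using System.Linq;
      using System.Windows.Forms;

      private void RebuildTable()
      {
          // team -> [games played, points]
          var stats = new Dictionary<string, int[]>();

          foreach (ListViewItem row in listView1.Items)
          {
              string home = row.SubItems[0].Text;               // home team (item text)
              int homeGoals = int.Parse(row.SubItems[1].Text);
              int awayGoals = int.Parse(row.SubItems[2].Text);
              string away = row.SubItems[3].Text;

              if (!stats.ContainsKey(home)) stats[home] = new int[2];
              if (!stats.ContainsKey(away)) stats[away] = new int[2];

              stats[home][0]++; stats[away][0]++;               // games played
              if (homeGoals > awayGoals) stats[home][1] += 3;   // home win
              else if (homeGoals < awayGoals) stats[away][1] += 3;
              else { stats[home][1] += 1; stats[away][1] += 1; } // draw
          }

          listView2.Items.Clear();                              // hypothetical league-table ListView
          foreach (var team in stats.OrderByDescending(kv => kv.Value[1]))
          {
              var item = new ListViewItem(team.Key);
              item.SubItems.Add(team.Value[0].ToString());      // played
              item.SubItems.Add(team.Value[1].ToString());      // points
              listView2.Items.Add(item);
          }
      }

    Calling RebuildTable() at the end of addButton_Click, removeButton_Click and clearButton_Click would keep the table in sync with the results list.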

    Read the article

  • Two radically different queries against 4 mil records execute in the same time - one uses brute force.

    - by IanC
    I'm using SQL Server 2008. I have a table with over 3 million records, which is related to another table with a million records. I have spent a few days experimenting with different ways of querying these tables, and I have it down to two radically different queries, both of which take 6 s to execute on my laptop. The first query uses a brute-force method of evaluating possibly likely matches, and removes incorrect matches via aggregate summation calculations. The second gets all possibly likely matches, then removes incorrect matches via an EXCEPT query that uses two dedicated indexes to find the low and high mismatches. Logically, one would expect the brute force to be slow and the indexed one to be fast. Not so, and I have experimented heavily with indexes until I got the best speed. Further, the brute-force query doesn't require as many indexes, which means that technically it would yield better overall system performance. Below are the two execution plans. If you can't see them, please let me know and I'll re-post them in landscape orientation or mail them to you. Brute-force query: Index-based exception query: My question is, based on the execution plans, which one looks more efficient? I realize that things may change as my data grows.

    Read the article

  • Rails: Multiple "types" of one model through related models?

    - by neezer
    I have a User model in my app, which I would like to store basic user information, such as email address, first and last name, phone number, etc. I also have many different types of users in my system, including sales agents, clients, guests, etc. I would like to be able to use the same User model as a base for all the others, so that I don't have to include all the fields for all the related roles in one model, and can delegate as necessary (cutting down on duplicate database fields as well as providing easy mobility from changing one user of one type to another). So, what I'd like is this: User -- first name -- last name -- email --> is a "client", so ---- client field 1 ---- client field 2 ---- client field 3 User -- first name -- last name -- email --> is a "sales agent", so ---- sales agent field 1 ---- sales agent field 2 ---- sales agent field 3 and so on... In addition, when a new user signs up, I want that new user to automatically be assigned the role of "client" (I'm talking about database fields here, not authorization, though I hope to eventually include this logic in my user authorization as well). I have a multi-step signup wizard I'm trying to build with wizardly. The first step is easy, since I'm simply calling the fields included in the base User model (such as first_name and email), but the second step is trickier since it should be calling in fields from the associated model (like--per my example above--the model client with fields client_field_1 or client_field_2, as if those fields were part of User). Does that make sense? Let me know if that wasn't clear at all, and I'll try to explain it in a different way. Can anyone help me with this? How would I do this?

    Read the article

  • Web development tool that can comprehend the concept of more than one language in a file at once

    - by thecoshman
    I currently use Notepad++ on Windows or gedit on Ubuntu. Both of them work great with code highlighting, hinting, etc., but both suffer from a huge flaw. I have yet to find a code editor that can handle this concept: <?php // ooh, look I am doing some php ?><a onclick="alert('hay, some javascript in here now!')"> This link is HTML?!</a> <?PHP echo("NOW we have some php as well!"); ?> At the moment, I just have to settle for the one language. I want something that treats that text as HTML by default, but notices when sections are PHP. I want those PHP sections to have their own code hinting and highlighting. Even more, let's say in an 'if else' I exit PHP, write some HTML, then go back into PHP: I want it to work out how the braces ('{' and '}') should match up and let me know if I have missed one. I want the sections of inline JavaScript to be picked up as such. I want all of these languages to get checked for syntax! Damn it, I want a tool that understands more than one language at once!

    Read the article

  • How can I map to a field that is joined in via one of three possible tables

    - by Mongus Pong
    I have this object : public class Ledger { public virtual Person Client { get; set; } // .... } The Ledger table joins to the Person table via one of three possible tables : Bill, Receipt or Payment. So we have the following tables : Ledger LedgerID PK Bill BillID PK, LedgerID, ClientID Receipt ReceiptID PK, LedgerID, ClientID Payment PaymentID PK, LedgerID, ClientID If it was just the one table, I could map this as : Join ( "Bill", x => { x.ManyToOne ( ledger => ledger.Client, mapping => mapping.Column ( "ClientID" ) ); x.Key ( l => l.Column ( "LedgerID" ) ); } ); This won't work for three tables. For a start the Join performs an inner join. Since there will only ever be one of Bill, Receipt or Payment - an inner join on these tables will always return zero rows. Then it would need to know to do a Coalesce on the ClientID of each of these tables to know the ClientID to grab the data from. Is there a way to do this? Or am I asking too much of the mappings here?

    Read the article

  • How to merge two icons together? (overlay one icon on top of another)

    - by demoncodemonkey
    I've got two 16x16 RGB/A .ICO icon files, each loaded into a separate System.Drawing.Icon object. How would you create a new Icon object containing the merge of the two icons (one overlaid on top of the other)? Edit: I probably wasn't too clear, I don't want to blend two images into each other, I want to overlay one icon on top of another. I should add that the icons already contain transparent parts and I do not need any transparent "blending" to make both icons visible. What I need is to overlay the non-transparent pixels of one icon over the top of another icon. The transparent pixels should let the background icon show through. For example, look at the stackoverflow icon. It has some areas that are grey and orange, and some areas that are totally transparent. Imagine you want to overlay the SO icon on top of the Firefox icon. You would see the greys and oranges of the SO icon in full colour, and where the SO icon is transparent, you would see those parts of the Firefox icon.
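    One commonly used approach, sketched below rather than offered as a drop-in answer (the 16x16 size and ARGB format are assumptions taken from the question), is to paint both icons onto an ARGB bitmap in order and then turn the bitmap back into an Icon:

      using System.Drawing;
      using System.Drawing.Imaging;

      static Icon Overlay(Icon background, Icon overlay)
      {
          using (var bmp = new Bitmap(16, 16, PixelFormat.Format32bppArgb))
          {
              using (Graphics g = Graphics.FromImage(bmp))
              {
                  g.DrawIcon(background, 0, 0);
                  g.DrawIcon(overlay, 0, 0);   // transparent pixels let the background show through
              }
              // GetHicon allocates an unmanaged handle; a long-lived app should eventually
              // release it (DestroyIcon) or clone the resulting Icon.
              return Icon.FromHandle(bmp.GetHicon());
          }
      }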

    Read the article

  • Rails nested attributes with a join model, where one of the models being joined is a new record

    - by gzuki
    I'm trying to build a grid, in rails, for entering data. It has rows and columns, and rows and columns are joined by cells. In my view, I need for the grid to be able to handle having 'new' rows and columns on the edge, so that if you type in them and then submit, they are automatically generated, and their shared cells are connected to them correctly. I want to be able to do this without JS. Rails nested attributes fail to handle being mapped to both a new record and a new column, they can only do one or the other. The reason is that they are a nested specifically in one of the two models, and whichever one they aren't nested in will have no id (since it doesn't exist yet), and when pushed through accepts_nested_attributes_for on the top level Grid model, they will only be bound to the new object created for whatever they were nested in. How can I handle this? Do I have to override rails handling of nested attributes? My models look like this, btw: class Grid < ActiveRecord::Base has_many :rows has_many :columns has_many :cells, :through => :rows accepts_nested_attributes_for :rows, :allow_destroy => true, :reject_if => lambda {|a| a[:description].blank? } accepts_nested_attributes_for :columns, :allow_destroy => true, :reject_if => lambda {|a| a[:description].blank? } end class Column < ActiveRecord::Base belongs_to :grid has_many :cells, :dependent => :destroy has_many :rows, :through => :grid end class Row < ActiveRecord::Base belongs_to :grid has_many :cells, :dependent => :destroy has_many :columns, :through => :grid accepts_nested_attributes_for :cells end class Cell < ActiveRecord::Base belongs_to :row belongs_to :column has_one :grid, :through => :row end

    Read the article

  • Separate rows by "Year" into several worksheets according to one column

    - by HACHI
    Hello! This task is driving me mad... please help! Instead of typing the data in manually, I have used VBA to find the year range, put it into one column, and delete all the duplicates. But since Excel could return more than 20 years, it would be tedious to do all the filtering manually. Now I need Excel to separate out the rows that contain a specific year in any one of the three columns and put them into a new sheet. E.g. the years Excel found in the three columns (F:H) are (2001, 2003, 2006, 2010, 2012, 2020, ..., 2033), and they are pasted in column "S" of Sheet 1. How can I tell Excel to create new sheets for those years (sheets 2001, 2003, 2006, ...), search through columns F:H in Sheet 1 to see if ANY of those columns contains that year, and paste the matching rows into the new sheet? To be more specific, in the newly created sheet "2001", every row where any of columns F:H contains "2001" should be pasted, and in the newly created sheet "2033", every row where any of columns F:H contains "2033" should be pasted. Please find a reference workbook here: http://www.speedyshare.com/files/23851477/Book32.xls I have sheets "2002" and "2003" here as example results, but for the real workbook I will need more year sheets (as many as Excel extracted in the previous stage, as shown in column L). I think this task should be quite common (extracting by date), but I couldn't find it by googling. I am very clueless about how to do looping, so please advise and give more details! Thanks

    Read the article

  • Why would one want to use the public constructors on Boolean and similar immutable classes?

    - by Robert J. Walker
    (For the purposes of this question, let us assume that one is intentionally not using auto(un)boxing, either because one is writing pre-Java 1.5 code, or because one feels that autounboxing makes it too easy to create NullPointerExceptions.) Take Boolean, for example. The documentation for the Boolean(boolean) constructor says: Note: It is rarely appropriate to use this constructor. Unless a new instance is required, the static factory valueOf(boolean) is generally a better choice. It is likely to yield significantly better space and time performance. My question is, why would you ever want to get a new instance in the first place? It seems like things would be simpler if constructors like that were private. For example, if they were, you could write this with no danger (even if myBoolean were null): if (myBoolean == Boolean.TRUE) It'd be safe because all true Booleans would be references to Boolean.TRUE and all false Booleans would be references to Boolean.FALSE. But because the constructors are public, someone may have used them, which means that you have to write this instead: if (Boolean.TRUE.equals(myBoolean)) But where it really gets bad is when you want to check two Booleans for equality. Something like this: if (myBooleanA == myBooleanB) ...becomes this: if ( (myBooleanA == null && myBooleanB == null) || (myBooleanA != null && myBooleanA.equals(myBooleanB)) ) I can't think of any reason to have separate instances of these objects which is more compelling than not having to do the nonsense above. What say you?

    Read the article

  • Creating the same event for 2 different elements in jQuery (not for each one, for both!)

    - by danfromisrael
    Hey guys, I'd like to create a toggle event for 2 different TDs in my table row. The event should show/hide the next table row. <table> <tr> <td>1</td> <td>2</td> <td>3</td> <td>4</td> <td>5</td> <td class="clickable1">6</td> <td class="clickable2">7</td> </tr> <tr><td>this row should be toggled between show/hide once one of the clickable TDs were clicked</td></tr> </table> Here's the code I tried to apply, but it attached the toggle state to each of the classes separately: $('.clickable1,.clickable2').toggle(function() { $(this).parent() .next('tr') .show(); }, function() { $(this).parent() .next('tr') .hide(); }); One more thing: I'm applying a CSS hover pseudo-class to each TR. How can I make the two TRs highlight together (like a hover effect on both of them)? Thanks in advance! :-Dan

    Read the article

  • Copy non-null and non-empty fields from one object to another object of the same type in Java

    - by Chinni
    I am using Hibernate, Struts and ExtJS in my project. I have a Customer object with these fields: custId, custName, address, phone. From the UI side I get a customer object containing only custName, and I need to update the existing object (custName is unique, so there is only one object with that customer name, which I fetch from the DB by name). Now I have to save the object with the updated customer name. If I save as follows (the Customer object coming from the UI is cust): Customer cust1 = getCustomerByName(cust.getCustName()); cust.setCustId(cust1.getCustId()); save(cust); then I lose the customer's address and phone number. So, how can I copy only the non-null, non-empty field values from one object to another object of the same type? Can anyone please help; I'm just stuck here and it's stopping me from saving. Thanks in advance!

    Read the article

  • php script gets two ajax requests, only returns one?

    - by Dan.StackOverflow
    I'll start from the beginning. I'm building a wordpress plugin that does double duty, in that it can be inserted in to a post via a shortcode, or added as a sidebar widget. All it does is output some js to make jquery.post requests to a local php file. The local php file makes a request to a webservice for some data. (I had to do it this way instead of directly querying the web service with jquery.ajax because the url contains a license key that would be public if put in the js). Anyway, When I am viewing a page in the wordpress blog that has both the sidebar widget and the plugin output via shortcode only one of the requests work. I mean it works in that it gets a response back from the php script. Once the page is loaded they both work normally when manually told to. Webpage view - send 2 post requests to my php script - both elements should be filed in, but only one is. My php script is just: <?php if(isset($_POST["zip"])) { // build a curl object, execute the request, // and basically just echo what the curl request returns. } ?> Pretty basic. here is some js some people wanted to see: function widget_getActivities( zip ){ jQuery("#widget_active_list").text(""); jQuery.post("http://localhost/wordpress/wp-content/ActiveAjax.php", { zip: zip}, function(text) { jQuery(text).find("asset").each(function(j, aval){ var html = ""; html += "<a href='" + jQuery(aval).find("trackback").text() + "' target='new'> " + jQuery(aval).find("assetName").text() + "</a><b> at </b>"; jQuery("location", aval).each(function(i, val){ html += jQuery("locationName", val).text() + " <b> on </b>"; }); jQuery("date", aval).each(function(){ html += jQuery("startDate", aval).text(); <!--jQuery("#widget_active_list").append("<div id='ActivityEntry'>" + html + " </div>");--> jQuery("#widget_active_list") .append(jQuery("<div>") .addClass("widget_ActivityEntry") .html(html) .bind("mouseenter", function(){ jQuery(this).animate({ fontSize: "20px", lineHeight: "1.2em" }, 50); }) .bind("mouseleave", function(){ jQuery(this).animate({ fontSize: "10px", lineHeight: "1.2em" }, 50); }) ); }); }); }); } Now imagine there is another function identical to this one except everything that is prepended with 'widget_' isn't prepended. These two functions get called separately via: jQuery(document).ready(function(){ w_zip = jQuery("#widget_zip").val(); widget_getActivities( w_zip ); jQuery("#widget_updateZipLink").click(function() { //start function when any update link is clicked widget_c_zip = jQuery("#widget_zip").val(); if (undefined == widget_c_zip || widget_c_zip == "" || widget_c_zip.length != 5) jQuery("#widget_zipError").text("Bad zip code"); else widget_getActivities( widget_c_zip ); }); }) I can see in my apache logs that both requests are being made. I'm guessing it is some sort of race condition but that doesn't make ANY sense. I'm new to all this, any ideas? EDIT: I've come up with a sub-optimal solution. I have my widget detect if the plugin is also being used on the page, and if so it waits for 3 seconds before performing the request. But I have a feeling this same thing is going to happen if multiple clients perform a page request at the same time that triggers one of the requests to my php script, because I believe the problem is in the php script, which is scary.

    Read the article

  • How To Activate Your Free Office 2007 to 2010 Tech Guarantee Upgrade

    - by Matthew Guay
    Have you purchased Office 2007 since March 5th, 2010?  If so, here’s how you can activate and download your free upgrade to Office 2010! Microsoft Office 2010 has just been released, and today you can purchase upgrades from most retail stores or directly from Microsoft via download.  But if you’ve purchased a new copy of Office 2007 or a new computer that came with Office 2007 since March 5th, 2010, then you’re entitled to an absolutely free upgrade to Office 2010.  You’ll need enter information about your Office 2007 and then download the upgrade, so we’ll step you through the process. Getting Started First, if you’ve recently purchased Office 2007 but haven’t installed it, you’ll need to go ahead and install it before you can get your free Office 2010 upgrade.  Install it as normal.   Once Office 2007 is installed, run any of the Office programs.  You’ll be prompted to activate Office.  Make sure you’re connected to the internet, and then click Next to activate. Get your Free Upgrade to Office 2010 Now you’re ready to download your upgrade to Office 2010.  Head to the Office Tech Guarantee site (link below), and click Upgrade now. You’ll need to enter some information about your Office 2007.  Check that you purchased your copy of Office 2007 after March 5th, select your computer manufacturer, and check that you agree to the terms. Now you’re going to need the Product ID number from Office 2007.  To find this, open Word or any other Office 2007 application.  Click the Office Orb, and select Options on the bottom. Select the Resources button on the left, and then click About. Near the bottom of this dialog, you’ll see your Product ID.  This should be a number like: 12345-123-1234567-12345   Go back to the Office Tech Guarantee signup page in your browser, and enter this Product ID.  Select the language of your edition of Office 2007, enter the verification code, and then click Submit. It may take a few moments to validate your Product ID. When it is finished, you’ll be taken to an order page that shows the edition of Office 2010 you’re eligible to receive.  The upgrade download is free, but if you’d like to purchase a backup DVD of Office 2010, you can add it to your order for $13.99.  Otherwise, simply click Continue to accept. Do note that the edition of Office 2010 you receive may be different that the edition of Office 2007 you purchased, as the number of editions has been streamlined in the Office 2010 release.  Here’s a chart you can check to see what edition you’ll receive.  Note that you’ll still be allowed to install Office on the same number of computers; for example, Office 2007 Home and Student allows you to install it on up to 3 computers in the same house, and your Office 2010 upgrade will allow the same. Office 2007 Edition Office 2010 Upgrade You’ll Receive Office 2007 Home and Student Office Home and Student 2010 Office Basic 2007Office Standard 2007 Office Home and Business 2010 Office Small Business 2007Office Professional 2007Office Ultimate 2007 Office Professional 2010 Office Professional 2007 AcademicOffice Ultimate 2007 Academic Office Professional Academic 2010 Sign in with your Windows Live ID, or create a new one if you don’t already have one. Enter your name, select your country, and click Create My Account.  Note that Office will send Office 2010 tips to your email address; if you don’t wish to receive them, you can unsubscribe from the emails later.   Finally, you’re ready to download Office 2010!  
Click the Download Now link to start downloading Office 2010.  Your Product Key will appear directly above the Download link, so you can copy it and then paste it in the installer when your download is finished.  You will additionally receive an email with the download links and product key, so if your download fails you can always restart it from that link. If your edition of Office 2007 included the Office Business Contact Manager, you will be able to download it from the second Download link.  And, of course, even if you didn’t order a backup DVD, you can always burn the installers to a DVD for a backup.

Install Office 2010

Once you’re finished downloading Office 2010, run the installer to get it installed on your computer.  Enter your Product Key from the Tech Guarantee website as above, and click Continue. Accept the license agreement, and then click Upgrade to upgrade to the latest version of Office. The installer will remove all of your Office 2007 applications, and then install their 2010 counterparts.  If you wish to keep some of your Office 2007 applications instead, click Customize and then select to either keep all previous versions or simply keep specific applications. By default, Office 2010 will try to activate online automatically.  If it doesn’t activate during the install, you’ll need to activate it when you first run any of the Office 2010 apps.

Conclusion

The Tech Guarantee makes it easy to get the latest version of Office if you recently purchased Office 2007.  The Tech Guarantee program is open through the end of September, so make sure to grab your upgrade during this time.  Actually, if you find a great deal on Office 2007 from a major retailer between now and then, you could also take advantage of this program to get Office 2010 cheaper. And if you need help getting started with Office 2010, check out our articles that can help you get situated in your new version of Office!

Link: Activate and Download Your free Office 2010 Tech Guarantee Upgrade

    Read the article

  • Tip/Trick: Fix Common SEO Problems Using the URL Rewrite Extension

    - by ScottGu
    Search engine optimization (SEO) is important for any publically facing web-site.  A large % of traffic to sites now comes directly from search engines, and improving your site’s search relevancy will lead to more users visiting your site from search engine queries.  This can directly or indirectly increase the money you make through your site. This blog post covers how you can use the free Microsoft URL Rewrite Extension to fix a bunch of common SEO problems that your site might have.  It takes less than 15 minutes (and no code changes) to apply 4 simple URL Rewrite rules to your site, and in doing so cause search engines to drive more visitors and traffic to your site.  The techniques below work equally well with both ASP.NET Web Forms and ASP.NET MVC based sites.  They also works with all versions of ASP.NET (and even work with non-ASP.NET content). [In addition to blogging, I am also now using Twitter for quick updates and to share links. Follow me at: twitter.com/scottgu] Measuring the SEO of your website with the Microsoft SEO Toolkit A few months ago I blogged about the free SEO Toolkit that we’ve shipped.  This useful tool enables you to automatically crawl/scan your site for SEO correctness, and it then flags any SEO issues it finds.  I highly recommend downloading and using the tool against any public site you work on.  It makes it easy to spot SEO issues you might have in your site, and pinpoint ways to optimize it further. Below is a simple example of a report I ran against one of my sites (www.scottgu.com) prior to applying the URL Rewrite rules I’ll cover later in this blog post:   Search Relevancy and URL Splitting Two of the important things that search engines evaluate when assessing your site’s “search relevancy” are: How many other sites link to your content.  Search engines assume that if a lot of people around the web are linking to your content, then it is likely useful and so weight it higher in relevancy. The uniqueness of the content it finds on your site.  If search engines find that the content is duplicated in multiple places around the Internet (or on multiple URLs on your site) then it is likely to drop the relevancy of the content. One of the things you want to be very careful to avoid when building public facing sites is to not allow different URLs to retrieve the same content within your site.  Doing so will hurt with both of the situations above.  In particular, allowing external sites to link to the same content with multiple URLs will cause your link-count and page-ranking to be split up across those different URLs (and so give you a smaller page rank than what it would otherwise be if it was just one URL).  Not allowing external sites to link to you in different ways sounds easy in theory – but you might wonder what exactly this means in practice and how you avoid it. 4 Really Common SEO Problems Your Sites Might Have Below are 4 really common scenarios that can cause your site to inadvertently expose multiple URLs for the same content.  When this happens external sites linking to yours will end up splitting their page links across multiple URLs - and as a result cause you to have a lower page ranking with search engines than you deserve. SEO Problem #1: Default Document IIS (and other web servers) supports the concept of a “default document”.  This allows you to avoid having to explicitly specify the page you want to serve at either the root of the web-site/application, or within a sub-directory.  
This is convenient – but means that by default this content is available via two different publically exposed URLs (which is bad).  For example: http://scottgu.com/ http://scottgu.com/default.aspx SEO Problem #2: Different URL Casings Web developers often don’t realize URLs are case sensitive to search engines on the web.  This means that search engines will treat the following links as two completely different URLs: http://scottgu.com/Albums.aspx http://scottgu.com/albums.aspx SEO Problem #3: Trailing Slashes Consider the below two URLs – they might look the same at first, but they are subtly different. The trailing slash creates yet another situation that causes search engines to treat the URLs as different and so split search rankings: http://scottgu.com http://scottgu.com/ SEO Problem #4: Canonical Host Names Sometimes sites support scenarios where they support a web-site with both a leading “www” hostname prefix as well as just the hostname itself.  This causes search engines to treat the URLs as different and split search rankling: http://scottgu.com/albums.aspx/ http://www.scottgu.com/albums.aspx/ How to Easily Fix these SEO Problems in 10 minutes (or less) using IIS Rewrite If you haven’t been careful when coding your sites, chances are you are suffering from one (or more) of the above SEO problems.  Addressing these issues will improve your search engine relevancy ranking and drive more traffic to your site. The “good news” is that fixing the above 4 issues is really easy using the URL Rewrite Extension.  This is a completely free Microsoft extension available for IIS 7.x (on Windows Server 2008, Windows Server 2008 R2, Windows 7 and Windows Vista).  The great thing about using the IIS Rewrite extension is that it allows you to fix the above problems *without* having to change any code within your applications.  You can easily install the URL Rewrite Extension in under 3 minutes using the Microsoft Web Platform Installer (a free tool we ship that automates setting up web servers and development machines).  Just click the green “Install Now” button on the URL Rewrite Spotlight page to install it on your Windows Server 2008, Windows 7 or Windows Vista machine: Once installed you’ll find that a new “URL Rewrite” icon is available within the IIS 7 Admin Tool: Double-clicking the icon will open up the URL Rewrite admin panel – which will display the list of URL Rewrite rules configured for a particular application or site: Notice that our rewrite rule list above is currently empty (which is the default when you first install the extension).  We can click the “Add Rule…” link button in the top-right of the panel to add and enable new URL Rewriting logic for our site.  Scenario 1: Handling Default Document Scenarios One of the SEO problems I discussed earlier in this post was the scenario where the “default document” feature of IIS causes you to inadvertently expose two URLs for the same content on your site.  For example: http://scottgu.com/ http://scottgu.com/default.aspx We can fix this by adding a new IIS Rewrite rule that automatically redirects anyone who navigates to the second URL to instead go to the first one.  We will setup the HTTP redirect to be a “permanent redirect” – which will indicate to search engines that they should follow the redirect and use the new URL they are redirected to as the identifier of the content they retrieve.  Let’s look at how we can create such a rule.  We’ll begin by clicking the “Add Rule” link in the screenshot above.  
This will cause the below dialog to display: We’ll select the “Blank Rule” template within the “Inbound rules” section to create a new custom URL Rewriting rule.  This will display an empty pane like below: Don’t worry – setting up the above rule is easy.  The following 4 steps explain how to do so: Step 1: Name the Rule Our first step will be to name the rule we are creating.  Naming it with a descriptive name will make it easier to find and understand later.  Let’s name this rule our “Default Document URL Rewrite” rule: Step 2: Setup the Regular Expression that Matches this Rule Our second step will be to specify a regular expression filter that will cause this rule to execute when an incoming URL matches the regex pattern.   Don’t worry if you aren’t good with regular expressions - I suck at them too. The trick is to know someone who is good at them or copy/paste them from a web-site.  Below we are going to specify the following regular expression as our pattern rule: (.*?)/?Default\.aspx$ This pattern will match any URL string that ends with Default.aspx. The "(.*?)" matches any preceding character zero or more times. The "/?" part says to match the slash symbol zero or one times. The "$" symbol at the end will ensure that the pattern will only match strings that end with Default.aspx.  Combining all these regex elements allows this rule to work not only for the root of your web site (e.g. http://scottgu.com/default.aspx) but also for any application or subdirectory within the site (e.g. http://scottgu.com/photos/default.aspx.  Because the “ignore case” checkbox is selected it will match both “Default.aspx” as well as “default.aspx” within the URL.   One nice feature built-into the rule editor is a “Test pattern” button that you can click to bring up a dialog that allows you to test out a few URLs with the rule you are configuring: Above I've added a “products/default.aspx” URL and clicked the “Test” button.  This will give me immediate feedback on whether the rule will execute for it.  Step 3: Setup a Permanent Redirect Action We’ll then setup an action to occur when our regular expression pattern matches the incoming URL: In the dialog above I’ve changed the “Action Type” drop down to be a “Redirect” action.  The “Redirect Type” will be a HTTP 301 Permanent redirect – which means search engines will follow it. I’ve also set the “Redirect URL” property to be: {R:1}/ This indicates that we want to redirect the web client requesting the original URL to a new URL that has the originally requested URL path - minus the "Default.aspx" in it.  For example, requests for http://scottgu.com/default.aspx will be redirected to http://scottgu.com/, and requests for http://scottgu.com/photos/default.aspx will be redirected to http://scottgu.com/photos/ The "{R:N}" regex construct, where N >= 0, is called a back-reference and N is the back-reference index. In the case of our pattern "(.*?)/?Default\.aspx$", if the input URL is "products/Default.aspx" then {R:0} will contain "products/Default.aspx" and {R:1} will contain "products".  We are going to use this {R:1}/ value to be the URL we redirect users to.  
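As an illustrative aside (not part of the original walkthrough), the capture behaviour of this pattern can be sanity-checked with the .NET Regex class before saving the rule; the first two sample URLs below are the ones discussed above, and the third is just an extra case showing a request the rule leaves alone:

    using System;
    using System.Text.RegularExpressions;

    class PatternCheck
    {
        static void Main()
        {
            var rx = new Regex(@"(.*?)/?Default\.aspx$", RegexOptions.IgnoreCase);

            // {R:1} corresponds to Groups[1] below.
            Console.WriteLine(rx.Match("Default.aspx").Groups[1].Value + "/");           // "/"
            Console.WriteLine(rx.Match("products/Default.aspx").Groups[1].Value + "/");  // "products/"
            Console.WriteLine(rx.IsMatch("products/list.aspx"));                         // False: rule does not fire
        }
    }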
Step 4: Apply and Save the Rule Our final step is to click the “Apply” button in the top right hand of the IIS admin tool – which will cause the tool to persist the URL Rewrite rule into our application’s root web.config file (under a <system.webServer/rewrite> configuration section): <configuration>     <system.webServer>         <rewrite>             <rules>                 <rule name="Default Document" stopProcessing="true">                     <match url="(.*?)/?Default\.aspx$" />                     <action type="Redirect" url="{R:1}/" />                 </rule>             </rules>         </rewrite>     </system.webServer> </configuration> Because IIS 7.x and ASP.NET share the same web.config files, you can actually just copy/paste the above code into your web.config files using Visual Studio and skip the need to run the admin tool entirely.  This also makes adding/deploying URL Rewrite rules with your ASP.NET applications really easy. Step 5: Try the Rule Out Now that we’ve saved the rule, let’s try it out on our site.  Try the following two URLs on my site: http://scottgu.com/ http://scottgu.com/default.aspx Notice that the second URL automatically redirects to the first one.  Because it is a permanent redirect, search engines will follow the URL and should update the page ranking of http://scottgu.com to include links to http://scottgu.com/default.aspx as well. Scenario 2: Different URL Casing Another common SEO problem I discussed earlier in this post is that URLs are case sensitive to search engines on the web.  This means that search engines will treat the following links as two completely different URLs: http://scottgu.com/Albums.aspx http://scottgu.com/albums.aspx We can fix this by adding a new IIS Rewrite rule that automatically redirects anyone who navigates to the first URL to instead go to the second (all lower-case) one.  Like before, we will setup the HTTP redirect to be a “permanent redirect” – which will indicate to search engines that they should follow the redirect and use the new URL they are redirected to as the identifier of the content they retrieve. To create such a rule we’ll click the “Add Rule” link in the URL Rewrite admin tool again.  This will cause the “Add Rule” dialog to appear again: Unlike the previous scenario (where we created a “Blank Rule”), with this scenario we can take advantage of a built-in “Enforce lowercase URLs” rule template.  When we click the “ok” button we’ll see the following dialog which asks us if we want to create a rule that enforces the use of lowercase letters in URLs: When we click the “Yes” button we’ll get a pre-written rule that automatically performs a permanent redirect if an incoming URL has upper-case characters in it – and automatically send users to a lower-case version of the URL: We can click the “Apply” button to use this rule “as-is” and have it apply to all incoming URLs to our site.  Because my www.scottgu.com site uses ASP.NET Web Forms, I’m going to make one small change to the rule we generated above – which is to add a condition that will ensure that URLs to ASP.NET’s built-in “WebResource.axd” handler are excluded from our case-sensitivity URL Rewrite logic.  URLs to the WebResource.axd handler will only come from server-controls emitted from my pages – and will never be linked to from external sites.  While my site will continue to function fine if we redirect these URLs to automatically be lower-case – doing so isn’t necessary and will add an extra HTTP redirect to many of my pages.  
The good news is that adding a condition that prevents my URL Rewriting rule from happening with certain URLs is easy.  We simply need to expand the “Conditions” section of the form above We can then click the “Add” button to add a condition clause.  This will bring up the “Add Condition” dialog: Above I’ve entered {URL} as the Condition input – and said that this rule should only execute if the URL does not match a regex pattern which contains the string “WebResource.axd”.  This will ensure that WebResource.axd URLs to my site will be allowed to execute just fine without having the URL be re-written to be all lower-case. Note: If you have static resources (like references to .jpg, .css, and .js files) within your site that currently use upper-case characters you’ll probably want to add additional condition filter clauses so that URLs to them also don’t get redirected to be lower-case (just add rules for patterns like .jpg, .gif, .js, etc).  Your site will continue to work fine if these URLs get redirected to be lower case (meaning the site won’t break) – but it will cause an extra HTTP redirect to happen on your site for URLs that don’t need to be redirected for SEO reasons.  So setting up a condition clause makes sense to add. When I click the “ok” button above and apply our lower-case rewriting rule the admin tool will save the following additional rule to our web.config file: <configuration>     <system.webServer>         <rewrite>             <rules>                 <rule name="Default Document" stopProcessing="true">                     <match url="(.*?)/?Default\.aspx$" />                     <action type="Redirect" url="{R:1}/" />                 </rule>                 <rule name="Lower Case URLs" stopProcessing="true">                     <match url="[A-Z]" ignoreCase="false" />                     <conditions logicalGrouping="MatchAll" trackAllCaptures="false">                         <add input="{URL}" pattern="WebResource.axd" negate="true" />                     </conditions>                     <action type="Redirect" url="{ToLower:{URL}}" />                 </rule>             </rules>         </rewrite>     </system.webServer> </configuration> Try the Rule Out Now that we’ve saved the rule, let’s try it out on our site.  Try the following two URLs on my site: http://scottgu.com/Albums.aspx http://scottgu.com/albums.aspx Notice that the first URL (which has a capital “A”) automatically does a redirect to a lower-case version of the URL.  Scenario 3: Trailing Slashes Another common SEO problem I discussed earlier in this post is the scenario of trailing slashes within URLs.  The trailing slash creates yet another situation that causes search engines to treat the URLs as different and so split search rankings: http://scottgu.com http://scottgu.com/ We can fix this by adding a new IIS Rewrite rule that automatically redirects anyone who navigates to the first URL (that does not have a trailing slash) to instead go to the second one that does.  Like before, we will setup the HTTP redirect to be a “permanent redirect” – which will indicate to search engines that they should follow the redirect and use the new URL they are redirected to as the identifier of the content they retrieve.  To create such a rule we’ll click the “Add Rule” link in the URL Rewrite admin tool again.  This will cause the “Add Rule” dialog to appear again: The URL Rewrite admin tool has a built-in “Append or remove the trailing slash symbol” rule template.  
When we select it and click the “ok” button we’ll see the following dialog which asks us if we want to create a rule that automatically redirects users to a URL with a trailing slash if one isn’t present: Like within our previous lower-casing rewrite rule we’ll add one additional condition clause that will exclude WebResource.axd URLs from being processed by this rule.  This will avoid an unnecessary redirect for happening for those URLs. When we click the “OK” button we’ll get a pre-written rule that automatically performs a permanent redirect if the URL doesn’t have a trailing slash – and if the URL is not processed by either a directory or a file.  This will save the following additional rule to our web.config file: <configuration>     <system.webServer>         <rewrite>             <rules>                 <rule name="Default Document" stopProcessing="true">                     <match url="(.*?)/?Default\.aspx$" />                     <action type="Redirect" url="{R:1}/" />                 </rule>                 <rule name="Lower Case URLs" stopProcessing="true">                     <match url="[A-Z]" ignoreCase="false" />                     <conditions logicalGrouping="MatchAll" trackAllCaptures="false">                         <add input="{URL}" pattern="WebResource.axd" negate="true" />                     </conditions>                     <action type="Redirect" url="{ToLower:{URL}}" />                 </rule>                 <rule name="Trailing Slash" stopProcessing="true">                     <match url="(.*[^/])$" />                     <conditions logicalGrouping="MatchAll" trackAllCaptures="false">                         <add input="{REQUEST_FILENAME}" matchType="IsDirectory" negate="true" />                         <add input="{REQUEST_FILENAME}" matchType="IsFile" negate="true" />                         <add input="{URL}" pattern="WebResource.axd" negate="true" />                     </conditions>                     <action type="Redirect" url="{R:1}/" />                 </rule>             </rules>         </rewrite>     </system.webServer> </configuration> Try the Rule Out Now that we’ve saved the rule, let’s try it out on our site.  Try the following two URLs on my site: http://scottgu.com http://scottgu.com/ Notice that the first URL (which has no trailing slash) automatically does a redirect to a URL with the trailing slash.  Because it is a permanent redirect, search engines will follow the URL and update the page ranking. Scenario 4: Canonical Host Names The final SEO problem I discussed earlier are scenarios where a site works with both a leading “www” hostname prefix as well as just the hostname itself.  This causes search engines to treat the URLs as different and split search rankling: http://www.scottgu.com/albums.aspx http://scottgu.com/albums.aspx We can fix this by adding a new IIS Rewrite rule that automatically redirects anyone who navigates to the first URL (that has a www prefix) to instead go to the second URL.  Like before, we will setup the HTTP redirect to be a “permanent redirect” – which will indicate to search engines that they should follow the redirect and use the new URL they are redirected to as the identifier of the content they retrieve.  To create such a rule we’ll click the “Add Rule” link in the URL Rewrite admin tool again.  This will cause the “Add Rule” dialog to appear again: The URL Rewrite admin tool has a built-in “Canonical domain name” rule template.  
When we select it and click the “ok” button we’ll see the following dialog which asks us if we want to create a redirect rule that automatically redirects users to a primary host name URL: Above I’m entering the primary URL address I want to expose to the web: scottgu.com.  When we click the “OK” button we’ll get a pre-written rule that automatically performs a permanent redirect if the URL has another leading domain name prefix.  This will save the following additional rule to our web.config file: <configuration>     <system.webServer>         <rewrite>             <rules>                 <rule name="Cannonical Hostname">                     <match url="(.*)" />                     <conditions logicalGrouping="MatchAll" trackAllCaptures="false">                         <add input="{HTTP_HOST}" pattern="^scottgu\.com$" negate="true" />                     </conditions>                     <action type="Redirect" url="http://scottgu.com/{R:1}" />                 </rule>                 <rule name="Default Document" stopProcessing="true">                     <match url="(.*?)/?Default\.aspx$" />                     <action type="Redirect" url="{R:1}/" />                 </rule>                 <rule name="Lower Case URLs" stopProcessing="true">                     <match url="[A-Z]" ignoreCase="false" />                     <conditions logicalGrouping="MatchAll" trackAllCaptures="false">                         <add input="{URL}" pattern="WebResource.axd" negate="true" />                     </conditions>                     <action type="Redirect" url="{ToLower:{URL}}" />                 </rule>                 <rule name="Trailing Slash" stopProcessing="true">                     <match url="(.*[^/])$" />                     <conditions logicalGrouping="MatchAll" trackAllCaptures="false">                         <add input="{REQUEST_FILENAME}" matchType="IsDirectory" negate="true" />                         <add input="{REQUEST_FILENAME}" matchType="IsFile" negate="true" />                         <add input="{URL}" pattern="WebResource.axd" negate="true" />                     </conditions>                     <action type="Redirect" url="{R:1}/" />                 </rule>             </rules>         </rewrite>     </system.webServer> </configuration> Try the Rule Out Now that we’ve saved the rule, let’s try it out on our site.  Try the following two URLs on my site: http://www.scottgu.com/albums.aspx http://scottgu.com/albums.aspx Notice that the first URL (which has the “www” prefix) now automatically does a redirect to the second URL which does not have the www prefix.  Because it is a permanent redirect, search engines will follow the URL and update the page ranking. 4 Simple Rules for Improved SEO The above 4 rules are pretty easy to setup and should take less than 15 minutes to configure on existing sites you already have.  The beauty of using a solution like the URL Rewrite Extension is that you can take advantage of it without having to change code within your web-site – and without having to break any existing links already pointing at your site.  Users who follow existing links will be automatically redirected to the new URLs you wish to publish.  And search engines will start to give your site a higher search relevancy ranking – which will list your site higher in search results and drive more traffic to it. 
Customizing your URL Rewriting rules further is easy to-do either by editing the web.config file directly, or alternatively, just double click the URL Rewrite icon within the IIS 7.x admin tool and it will list all the active rules for your web-site or application: Clicking any of the rules above will open the rules editor back up and allow you to tweak/customize/save them further. Summary Measuring and improving SEO is something every developer building a public-facing web-site needs to think about and focus on.  If you haven’t already, download and use the SEO Toolkit to analyze the SEO of your sites today. New URL Routing features in ASP.NET MVC and ASP.NET Web Forms 4 make it much easier to build applications that have more control over the URLs that are published.  Tools like the URL Rewrite Extension that I’ve talked about in this blog post make it much easier to improve the URLs that are published from sites you already have built today – without requiring you to change a lot of code. The URL Rewrite Extension provides a bunch of additional great capabilities – far beyond just SEO - as well.  I’ll be covering these additional capabilities more in future blog posts. Hope this helps, Scott

    Read the article
