Search Results

Search found 59194 results on 2368 pages for 'depth first search'.

Page 151/2368

  • How to avoid spacing from the first div of each row

    - by sandeep
    I have DIVs that sit side by side, but only four DIVs fit in the first row and the rest wrap to the next row. I want the first div of each row to take no space on its left side, but that isn't happening. Here is my code: http://jsfiddle.net/25pwG/ I know I could do it by manually adding a class to the first DIV of each new row, but I don't want that. I want to do this with as little CSS as possible, and I don't want to change my markup. NOTE: I need compatibility down to IE8. Thanks in advance :)
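
    One common way to get this effect without per-row classes, and one that works back to IE8, is to give every item the same left margin and cancel it with a negative margin on the container. This is only a sketch; the class names below are assumptions, since the jsfiddle markup isn't shown here:

        /* Sketch only: .row and .item are placeholder names for the real containers. */
        .row {
            margin-left: -10px;   /* cancels the gutter for whichever item starts a row */
            overflow: hidden;     /* contains the floats (works in IE8) */
        }
        .row .item {
            float: left;
            width: 100px;
            margin-left: 10px;    /* uniform gutter on every item */
        }

    Because every item carries the same gutter and the container pulls the whole block left by exactly that amount, the first item of each row visually starts flush with the edge, no matter where the rows break.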

    Read the article

  • Why is Dictionary.First() so slow?

    - by Rotsor
    Not a real question, because I already found out the answer, but it's still an interesting thing. I always thought that a hash table is the fastest associative container if you hash properly. However, the following code is terribly slow. It executes only about 1 million iterations and takes more than 2 minutes on a Core 2 CPU. The code does the following: it maintains a collection todo of items it needs to process. At each iteration it takes an item from this collection (it doesn't matter which item), deletes it, processes it if it wasn't already processed (possibly adding more items to process), and repeats until there are no items left. The culprit seems to be the Dictionary.Keys.First() operation. The question is: why is it so slow?

        Stopwatch watch = new Stopwatch();
        watch.Start();
        HashSet<int> processed = new HashSet<int>();
        Dictionary<int, int> todo = new Dictionary<int, int>();
        todo.Add(1, 1);
        int iterations = 0;
        int limit = 500000;
        while (todo.Count > 0)
        {
            iterations++;
            var key = todo.Keys.First();
            var value = todo[key];
            todo.Remove(key);
            if (!processed.Contains(key))
            {
                processed.Add(key);
                // process item here
                if (key < limit)
                {
                    todo[key + 13] = value + 1;
                    todo[key + 7] = value + 1;
                } // doesn't matter much how
            }
        }
        Console.WriteLine("Iterations: {0}; Time: {1}.", iterations, watch.Elapsed);

    This results in: Iterations: 923007; Time: 00:02:09.8414388. Simply changing Dictionary to SortedDictionary yields: Iterations: 499976; Time: 00:00:00.4451514. That is 300 times faster while doing only 2 times fewer iterations. The same happens in Java, using HashMap instead of Dictionary and keySet().iterator().next() instead of Keys.First().
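
    The usual explanation: enumerating a Dictionary's keys scans its internal entry array from index 0, and First() always returns (and the code then removes) the lowest-indexed live entry. Once the todo set stops growing, the freed slots accumulate as an ever longer prefix that every later First() call has to skip, which is roughly O(n^2) over the whole run. If any pending item will do, a queue (or stack) hands one back in O(1). A sketch of that variant, not the original code; note that duplicates are simply enqueued rather than overwritten, so iteration counts differ:

        using System;
        using System.Collections.Generic;
        using System.Diagnostics;

        class TodoQueueSketch
        {
            static void Main()
            {
                var watch = Stopwatch.StartNew();
                var processed = new HashSet<int>();
                // A queue returns "some pending item" in O(1), unlike Dictionary.Keys.First().
                var todo = new Queue<KeyValuePair<int, int>>();
                todo.Enqueue(new KeyValuePair<int, int>(1, 1));

                int iterations = 0;
                int limit = 500000;
                while (todo.Count > 0)
                {
                    iterations++;
                    var item = todo.Dequeue();
                    if (processed.Add(item.Key))   // Add returns false if the key was already processed
                    {
                        if (item.Key < limit)
                        {
                            todo.Enqueue(new KeyValuePair<int, int>(item.Key + 13, item.Value + 1));
                            todo.Enqueue(new KeyValuePair<int, int>(item.Key + 7, item.Value + 1));
                        }
                    }
                }
                Console.WriteLine("Iterations: {0}; Time: {1}.", iterations, watch.Elapsed);
            }
        }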

    Read the article

  • Delete the first line that matches a pattern

    - by gioele
    How can I use sed to delete only the first line that contains a certain pattern? For example, I want to remove the first line matching FAA from this document:

        1. foo bar quuz
        2. foo FAA bar (this should go)
        3. quuz quuz FAA (this should remain)
        4. quuz FAA bar (this should also remain)

    The result should be:

        1. foo bar quuz
        3. quuz quuz FAA (this should remain)
        4. quuz FAA bar (this should also remain)

    A solution for POSIX sed would be greatly appreciated; GNU sed is OK.
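
    One approach, sketched below: GNU sed's 0,/regex/ address range ends at the first line matching the regex, so deleting within that range removes only the first match. The awk version is a portable alternative for systems without GNU sed:

        # GNU sed: the range 0,/FAA/ ends at the first FAA line; delete only that line.
        sed '0,/FAA/{/FAA/d;}' document.txt

        # Portable awk: skip the first match, print everything else unchanged.
        awk '!done && /FAA/ { done = 1; next } { print }' document.txt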

    Read the article

  • About first-, second- and third-class values

    - by forest58
    A first-class value can be passed as an argument, returned from a subroutine, and assigned into a variable. A second-class value can only be passed as an argument. A third-class value can't even be passed as an argument. Why should these things be defined like that? As I understand it, "can be passed as an argument" means it can be pushed onto the runtime stack; "can be assigned into a variable" means it can be moved to a different location in memory; "can be returned from a subroutine" has almost the same meaning as "can be assigned into a variable", since the returned value is always put into a known address. So a first-class value is totally "movable" or "dynamic", a second-class value is half "movable", and a third-class value is just "static", such as labels in C/C++, which can only be addressed by a goto statement; you can't do anything with that address except goto. Does my understanding make any sense, or what do these three kinds of values mean exactly?
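
    A small C illustration of the distinction being described: function values (via pointers) can be passed, returned, and stored, while a label supports none of those operations and can only be the target of goto. This is just a sketch of the idea:

        #include <stdio.h>

        typedef int (*op)(int, int);

        static int add(int a, int b) { return a + b; }

        static op pick(void)                 { return add; }     /* returned from a subroutine */
        static int apply(op f, int a, int b) { return f(a, b); } /* passed as an argument      */

        int main(void)
        {
            op f = pick();                   /* assigned to a variable */
            printf("%d\n", apply(f, 2, 3));

            /* A label, by contrast, cannot be passed, returned, or stored: */
            goto done;                       /* the only thing you can do with it */
        done:
            return 0;
        }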

    Read the article

  • How to implement a search page which shows results on the same page?

    - by Andrew
    I'm using ASP.NET MVC 2 for the first time on a project at work and am feeling like a bit of a noob. I have a page with a customer search control/partial view. The control is a textbox and a button. You enter a customer id into the textbox and hit search. The page then "refreshes" and shows the customer details on the same page. In other words, the customer details appear below the customer search control. This is so that if the customer isn't the right one, the user can search again without hitting back in the browser. Or, perhaps they mistyped the customer id and need to try again. I want the URL to look like this: /Customer/Search/1 Obviously, this follows the default route in the project. Now, if I type the URL above directly into my browser, it works fine. However, when I then use the search control on that page to search for, say, customer 2, the page refreshes with the correct customer details but the URL does not change! It stays as /Customer/Search/1 when I want it to be /Customer/Search/2. How can I get it to change to the correct URL? I am only using the default route in Global.asax. My Search method looks like this:

        <AcceptVerbs(HttpVerbs.Get)> _
        Function Search(ByVal id As String) As ActionResult
            Dim customer As Customer = New CustomerRepository().GetById(id)
            Return View("SearchResult", customer)
        End Function
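
    One common approach, assuming the partial view's form currently posts back to whatever URL the page was loaded from: handle that post and immediately redirect to the canonical route (Post/Redirect/Get), so the address bar always ends up at /Customer/Search/{id} and the existing GET action renders the result. This is only a sketch, not the poster's code:

        <AcceptVerbs(HttpVerbs.Post), ActionName("Search")> _
        Function SearchPost(ByVal id As String) As ActionResult
            ' Redirect so the browser issues a clean GET for /Customer/Search/{id};
            ' the GET Search action above then shows the customer as before.
            Return RedirectToAction("Search", New With {.id = id})
        End Function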

    Read the article

  • iPad search bar bad memory access?

    - by Geoff Baum
    Hello all, So I am trying to implement a search bar in my app and am very close but can't seem to figure out where this memory error is occurring. This is what part of my search method looks like:

        filters = [[NSMutableArray alloc] init];
        NSString *searchText = detailSearch.text;
        NSMutableArray *searchArray = [[NSMutableArray alloc] init]; // Normally holds the object (ex: 70 locations)
        searchArray = self.copyOfFilters; // This is the line that is breaking after ~2-3 letters are entered in the search
        for (NSString *sTemp in searchArray) {
            NSRange titleResultsRange = [sTemp rangeOfString:searchText options:NSCaseInsensitiveSearch];
            if (titleResultsRange.length > 0)
                [filters addObject:sTemp];
        }
        displayedFilters = filters;

    copyOfFilters is a deep copy of the displayed filters that appear when the view first loads, via:

        self.copyOfFilters = [[NSMutableArray alloc] initWithArray:displayedFilters copyItems:YES];

    I have traced through the entry of letters and it is accurate after 2 letters, but once you try and enter a letter after a space in the search bar, it crashes. The value of copyOfFilters = {(int)[$VAR count]} objects. Does anyone know what may be causing this? Thanks!
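
    Hard to be sure without the crash log, but in pre-ARC code this pattern usually points at ownership: filters is re-alloc'd on every keystroke without releasing the previous array, and copyOfFilters has to be held by a retaining property or it can be deallocated underneath the loop. A hedged sketch of the usual fix; the property declaration is an assumption about the poster's class:

        // Assumed declaration -- the property must retain (or copy) the deep copy:
        @property (nonatomic, retain) NSMutableArray *copyOfFilters;

        // When the view loads, build the deep copy without leaking the temporary:
        NSMutableArray *deepCopy = [[NSMutableArray alloc] initWithArray:displayedFilters
                                                                copyItems:YES];
        self.copyOfFilters = deepCopy;   // retained by the property
        [deepCopy release];

        // In the search method, release the previous results before replacing them:
        [filters release];
        filters = [[NSMutableArray alloc] init];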

    Read the article

  • Regular Expression to capture the first <p> of HTML

    - by Program.X
    I have the following regular expression:

        (?:<(?<tag>\w*)>(?<text>.*)</\k<tag>>)

    I want it to grab the text within the first HTML element. For example,

        <p>This should capture</p>This shouldn't

    works, but

        <p>This should capture</p><p>This shouldn't</p>

    doesn't. As you'd expect, it returns:

        This should capture</p><p>This shouldn't

    I'm racking my brains here. How can I just have it select the FIRST inner text? (I'm trying to be tag-agnostic, so <strong>This should match</strong> is equally appropriate, etc.)
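
    The greedy .* is what drags the match past the first closing tag; making the inner capture lazy (.*?) stops it at the first close tag with the same name. A minimal .NET sketch, assuming C# since the pattern uses .NET-style named groups:

        using System;
        using System.Text.RegularExpressions;

        class FirstElementText
        {
            static void Main()
            {
                string html = "<p>This should capture</p><p>This shouldn't</p>";

                // Lazy quantifier: capture as little as possible before the matching close tag.
                var match = Regex.Match(html, @"<(?<tag>\w+)>(?<text>.*?)</\k<tag>>",
                                        RegexOptions.Singleline);

                if (match.Success)
                    Console.WriteLine(match.Groups["text"].Value);  // "This should capture"
            }
        }

    Note this still won't handle nested tags of the same name; regular expressions only get you so far with HTML.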

    Read the article

  • Get child elements from a parent but not first and last

    - by Cleiton
    I would like to know how I could write a jQuery selector that gets all children of a parent element except the first and last child. Example of my current HTML:

        <div id="parent">
          <div>first child (I don't want to get this)</div>
          <div>another child</div>
          <div>another child</div>
          <div>another child</div>
          (...)
          <div>another child</div>
          <div>another child</div>
          <div>last child (I don't want to get this either)</div>
        </div>
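
    A few equivalent ways to express this, sketched under the assumption that the children are always the direct divs shown above:

        // Chained filters:
        var middle = $('#parent').children().not(':first').not(':last');

        // Structural pseudo-classes in the selector itself:
        var middle2 = $('#parent > div:not(:first-child):not(:last-child)');

        // Or simply slice the ends off the matched set:
        var middle3 = $('#parent').children().slice(1, -1);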

    Read the article

  • Split string on first two colons

    - by Mark Miller
    I would like to split a column of strings on the first two colons, but not on any subsequent colons:

        my.data <- read.table(text='
             my.string some.data
           12:34:56:78      -100
           87:65:43:21      -200
           a4:b6:c8888      -300
           11:bb:ccccc      -400
           uu:vv:ww:xx      -500', header = TRUE)

        desired.result <- read.table(text='
           my.string1 my.string2 my.string3 some.data
                   12         34      56:78      -100
                   87         65      43:21      -200
                   a4         b6      c8888      -300
                   11         bb      ccccc      -400
                   uu         vv      ww:xx      -500', header = TRUE)

    I have searched extensively, and the following question is the closest to my current dilemma: Split on first comma in string. Thank you for any suggestions. I prefer to use base R.
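
    One base-R approach, sketched below: capture the first two colon-delimited fields and leave the remainder intact with a single regular expression, then bind the pieces back onto the data frame.

        # Split on the first two colons only: two non-colon fields, then everything else.
        pattern <- "^([^:]+):([^:]+):(.*)$"
        strings <- as.character(my.data$my.string)   # in case the column was read as a factor

        pieces <- do.call(rbind, regmatches(strings, regexec(pattern, strings)))[, -1]

        desired.result <- data.frame(my.string1 = pieces[, 1],
                                     my.string2 = pieces[, 2],
                                     my.string3 = pieces[, 3],
                                     some.data  = my.data$some.data,
                                     stringsAsFactors = FALSE)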

    Read the article

  • jQuery: How to get the first “td” in all rows

    - by bobby
    I have a table, and I want to get the first “td” in all rows. My jQuery is:

        $("table.SimpleTable tr td:first-child").css('background-color','red');

    and my HTML is:

        <table class='SimpleTable' border="1" ID="Table1">
          <tr>
            <td>Left</td>
            <td>Right</td>
          </tr>
          <tr>
            <td>Left</td>
            <td>Right</td>
          </tr>
          <tr>
            <td>Left</td>
            <td>Right</td>
          </tr>
          <tr>
            <td>Left</td>
            <td>
              <table border="1" ID="Table2">
                <tr>
                  <td>AAA</td>
                  <td>AAA</td>
                  <td>AAA</td>
                </tr>
              </table>
            </td>
          </tr>
          <tr>
            <td>Left</td>
            <td>
              <table border="1" ID="Table3">
                <tr>
                  <td>BBB</td>
                  <td>BBB</td>
                  <td>BBB</td>
                </tr>
              </table>
            </td>
          </tr>
        </table>

    The problem is that it also gets the first "td" of the nested table inside the second "td". Please help me!
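
    Using the child combinator (>) keeps the selector from descending into the nested tables; since browsers insert a tbody element into the DOM, it has to appear in the chain. A sketch:

        // Only the direct rows of the outer table, not the rows of Table2/Table3:
        $('table.SimpleTable > tbody > tr > td:first-child')
            .css('background-color', 'red');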

    Read the article

  • iPhone - navigation controller - move to first view from last view without traversing all intermediate views

    - by satyam
    I'm working on an iPhone app which shows 4 buttons in its first view. On tapping a button, it loads a new view with a navigation controller. This navigation controller view allows the user to travel through up to 11 sub views. In the 11th sub view I have a reset button. On tapping the reset button, I have to go back to the navigation controller's first view without traversing all 11 views. Is it possible to achieve this? If yes, how? If not, what could be a solution?
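
    Yes: UINavigationController can pop straight back to its root view controller without walking through the intermediate ones. Something like the following, from the view controller that owns the reset button (the action name here is just an assumption):

        - (IBAction)resetPressed:(id)sender {
            // Pops every pushed controller in one step and lands on the root view.
            [self.navigationController popToRootViewControllerAnimated:YES];
        }

    There is also popToViewController:animated: if the target is some intermediate controller rather than the root.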

    Read the article

  • How to set the default opened jQuery UI tab to one other than the first

    - by user568920
    I have a big web application using jQuery UI Tabs. In a central JS file I have set up all the tabs using $("#tabs").tabs(); But on one page I need a tab other than the first to be selected. If I use $("#tabs").tabs({ selected: add }); (the name of the tab is #add), it doesn't work, probably because the tabs are already set up. Does anyone know how to make a tab other than the first one open by default (after loading the page) if the tabs have already been initialised? I hope you will understand; my English is pretty terrible.
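
    If the tabs are already initialised by the central script, the per-page script should change the option on the existing widget rather than initialise it again. Which call applies depends on the jQuery UI version; both forms below are sketches:

        // jQuery UI 1.8 and earlier: "selected" is the option, "select" the method.
        $('#tabs').tabs('select', '#add');                 // by the panel's href/id
        // or: $('#tabs').tabs('option', 'selected', 2);   // by zero-based index

        // jQuery UI 1.9+: the option is called "active" and takes an index.
        $('#tabs').tabs('option', 'active', 2);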

    Read the article

  • Removing the first value from an array in C# or Java

    - by MrCode
    Hey there, I was working on a program and was wondering whether it is possible to remove the value from the first element of an array. Does anyone have any ideas on how this could be done? Thanks; all input is much appreciated. I have only tried removing from the last element and wasn't sure how I would remove the first. This is how I did the last element:

        try {
            if (isEmpty()) {
                throw new Exception("list is empty");
            }
            size = size - 1;
            return values[size];
        } catch (Exception e) {
            System.out.println(e);
            return -1;
        }
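
    Removing from the front is the same idea plus a shift of the remaining elements one slot to the left. A Java sketch written to match the style of the snippet above (it assumes the same values, size and isEmpty() members):

        public int removeFirst() {
            try {
                if (isEmpty()) {
                    throw new Exception("list is empty");
                }
                int first = values[0];
                // shift the remaining elements one position to the left
                System.arraycopy(values, 1, values, 0, size - 1);
                size = size - 1;
                return first;
            } catch (Exception e) {
                System.out.println(e);
                return -1;
            }
        }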

    Read the article

  • jQuery 1.3 not(:first) problem in IE

    - by sunil-mand99
    Hi, I have 3 div tags of the same class:

        <div class="redc">
          <p>sdfs</p>
        </div>
        <div class="redc">
          <p>sdfs</p>
        </div>
        <div class="redc">
          <p>sdfs</p>
        </div>

    To select every div other than the first, $(".redc:not(:first)") works fine in Mozilla, but not in IE. Please suggest an alternative for IE. Note: jQuery version 1.3.
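
    A few equivalents that behave consistently across browsers in the jQuery 1.3 era (sketches; the .css() call is just a placeholder action):

        // Filter with the method form of :not/:first...
        $('.redc').not(':first').css('border', '1px solid red');

        // ...or drop the first element from the matched set:
        $('.redc').slice(1).css('border', '1px solid red');

        // :gt(0) inside the selector also skips the first match:
        $('.redc:gt(0)').css('border', '1px solid red');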

    Read the article

  • jQuery is only returning first item

    - by Andy
    For some strange reason, whenever I have a selector and expect to get multiple items, jQuery is only returning the first item instead of the entire collection. This is the HTML I have:

        <a id="reply-424880" class="reply" href="#" rel="nofollow">Reply</a>
        <a id="reply-424885" class="reply" href="#" rel="nofollow">Reply</a>

    And the selector:

        $('.reply').unbind('click').click(function(event) { ... });

    I have tried debugging using Firebug, and still get the same results. Using this workaround I can get it to work:

        $('a').each(function (index, element) {
            if ($(element).attr('class') == 'reply') {
                $(this).unbind('click').click(function(event) { ... });
            }
        });

    I would like to use the built-in functionality instead of my workaround. Any idea why only the first element would be returned?
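
    A selector like $('.reply') does return every matching element that exists at the moment it runs, so if only the first link ends up with a handler, one common cause is that the other links are inserted into the DOM after the binding code has already executed. Event delegation covers elements added later; in the jQuery of that era that was live() (on() from 1.7 onwards). A sketch of that guess:

        // One delegated handler that fires for current and future .reply links.
        $('a.reply').live('click', function (event) {
            event.preventDefault();
            // ... handle the reply click ...
        });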

    Read the article

  • Alert Box Running First?

    - by corymathews
    I have some jQuery/JS below. The first thing to run is the alert box at the end.

        $(document).ready(function() {
            $("#id1 img , .msg").stop().animate(
                { width: '300px', height: '300px'},
                { duration: 'slow', easing: 'easeInSine' }).pause(3000);
            $(".msg").animate(
                { width: '50px', height: '50px' },
                { duration: 498, easing: 'easeOutSine' });
            $("#id1 img").animate(
                { width: '50px', height: '50px' },
                { duration: 500, easing: 'easeOutSine' });
            $("#id1 img , .msg").animate(
                { width: '300px', height: '300px'},
                { duration: 'slow', easing: 'easeInSine' }).pause(3000);
            alert('eh?');
        });

    I do have an easing plugin. If I run this, the alert will show, and then the first animate will happen in the background but not be shown; it will just appear at the final size. Shouldn't the alert run at the end of all the animation? Can anyone explain why this is happening?
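
    This is expected: animate() only queues the animation and returns immediately, so the alert at the bottom of the handler runs right away while the animations play out afterwards. Code that should run once an animation has finished belongs in that animation's complete callback (or a queued function). A sketch, keeping the last animation from the snippet above:

        $('#id1 img, .msg').animate(
            { width: '300px', height: '300px' },
            {
                duration: 'slow',
                easing: 'easeInSine',
                complete: function () {
                    // runs once this element's animation has finished
                    alert('eh?');
                }
            }
        );

    Note that the callback fires once per animated element, so with several matched elements it may run more than once.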

    Read the article

  • Get the first parent of an iframe in JavaScript

    - by baaroz
    I have an iframe inside test.aspx. When the user clicks a pay button inside the iframe, the iframe redirects to check.aspx, which contains the same iframe. If the payment succeeds on the first try, then window.parent.location.href == test.aspx. If the payment fails, the iframe redirects to check.aspx again, so now window.parent.location.href == check.aspx. While the payment keeps failing, the iframe keeps redirecting to check.aspx and the parent location keeps changing; so, for example, if the client failed 3 times, inside check.aspx I need to do window.parent.parent.parent.location.href to redirect to test.aspx. When the payment succeeds, I want to redirect test.aspx, but I can't know how many child iframe windows deep I am! I need something like window.parent[0].location.href = success.aspx, so I will be able to redirect the first (outermost) parent window. Thanks for any help. Baaroz
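
    There is no need to count how many levels deep the frame has ended up: window.top always refers to the outermost window, however many times the iframe has re-nested itself. So from inside check.aspx, something like:

        // Send the top-level window (the one showing test.aspx) to the success page,
        // regardless of how many parent hops are in between.
        window.top.location.href = 'success.aspx';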

    Read the article

  • jQuery lightBox plugin, problem with first image

    - by torm
    Hi, first of all sorry for my bad English, but I have a little problem with this plugin: http://leandrovieira.com/projects/jquery/lightbox/ (to be sure we know what I'm talking about). I've got an image gallery with thumbs, and everything works well except this: when I click on the first image, it shows fine, but I can't go to the next image by clicking "next"; it just keeps loading. When I choose the second image, everything is all right and I can skip to the next images, but I also cannot go back to the first one.

    Read the article

  • jQuery .get() only passing first two data parameters in URL

    - by The.Anti.9
    I have a $.get() call to a PHP page that takes 4 GET parameters. For some reason, despite giving the $.get() call all 4, it only passes the first two. When I look at the dev console in Chrome, it shows the URL that gets called, and it only passes action and dbname. Here's the code:

        $.get('util/util.php', { action: 'start', dbname: db, url: starturl, crawldepth: depth }, function(data) {
            if (data == 'true') {
                status = 1;
                $('#0').append(starturl + "<ul></ul>");
                $('#gobutton').hide();
                $('#loading').show("slow");
                while(status == 1) {
                    setTimeout("update()",10000);
                }
            } else {
                show_error("Form data incomplete!");
            }
        });

    and here's the URL that I see in the developer console: http://localhost/pci/util/util.php?action=start&dbname=1hkxorr9ve1kuap2.db
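
    jQuery serialises whatever values the object holds at the moment of the call, so one thing worth ruling out is that starturl or depth doesn't contain what you expect when the request is built. A hypothetical sanity check using the same variable names as the snippet above:

        // Log exactly what will be serialised before $.get builds the query string.
        console.log({ action: 'start', dbname: db, url: starturl, crawldepth: depth });

        $.get('util/util.php',
              { action: 'start', dbname: db, url: starturl, crawldepth: depth },
              function (data) { /* ... */ });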

    Read the article

  • I want to search premier inn by sending a postcode to the site

    - by Mick
    Hi, I want to send a postcode from my site to Premier Inn and get back the hotels in that area. Does anyone know how I can go about it, please? If there is a method for finding a search string for a site, can anybody share it please? http://www.premierinn.com/en/homeQuickSearch!execute.action+ postcode ??? Any help would be great, thanks. Mick

    Read the article

  • Tip/Trick: Fix Common SEO Problems Using the URL Rewrite Extension

    - by ScottGu
    Search engine optimization (SEO) is important for any publically facing web-site.  A large % of traffic to sites now comes directly from search engines, and improving your site’s search relevancy will lead to more users visiting your site from search engine queries.  This can directly or indirectly increase the money you make through your site. This blog post covers how you can use the free Microsoft URL Rewrite Extension to fix a bunch of common SEO problems that your site might have.  It takes less than 15 minutes (and no code changes) to apply 4 simple URL Rewrite rules to your site, and in doing so cause search engines to drive more visitors and traffic to your site.  The techniques below work equally well with both ASP.NET Web Forms and ASP.NET MVC based sites.  They also works with all versions of ASP.NET (and even work with non-ASP.NET content). [In addition to blogging, I am also now using Twitter for quick updates and to share links. Follow me at: twitter.com/scottgu] Measuring the SEO of your website with the Microsoft SEO Toolkit A few months ago I blogged about the free SEO Toolkit that we’ve shipped.  This useful tool enables you to automatically crawl/scan your site for SEO correctness, and it then flags any SEO issues it finds.  I highly recommend downloading and using the tool against any public site you work on.  It makes it easy to spot SEO issues you might have in your site, and pinpoint ways to optimize it further. Below is a simple example of a report I ran against one of my sites (www.scottgu.com) prior to applying the URL Rewrite rules I’ll cover later in this blog post:   Search Relevancy and URL Splitting Two of the important things that search engines evaluate when assessing your site’s “search relevancy” are: How many other sites link to your content.  Search engines assume that if a lot of people around the web are linking to your content, then it is likely useful and so weight it higher in relevancy. The uniqueness of the content it finds on your site.  If search engines find that the content is duplicated in multiple places around the Internet (or on multiple URLs on your site) then it is likely to drop the relevancy of the content. One of the things you want to be very careful to avoid when building public facing sites is to not allow different URLs to retrieve the same content within your site.  Doing so will hurt with both of the situations above.  In particular, allowing external sites to link to the same content with multiple URLs will cause your link-count and page-ranking to be split up across those different URLs (and so give you a smaller page rank than what it would otherwise be if it was just one URL).  Not allowing external sites to link to you in different ways sounds easy in theory – but you might wonder what exactly this means in practice and how you avoid it. 4 Really Common SEO Problems Your Sites Might Have Below are 4 really common scenarios that can cause your site to inadvertently expose multiple URLs for the same content.  When this happens external sites linking to yours will end up splitting their page links across multiple URLs - and as a result cause you to have a lower page ranking with search engines than you deserve. SEO Problem #1: Default Document IIS (and other web servers) supports the concept of a “default document”.  This allows you to avoid having to explicitly specify the page you want to serve at either the root of the web-site/application, or within a sub-directory.  
This is convenient – but means that by default this content is available via two different publically exposed URLs (which is bad).  For example: http://scottgu.com/ http://scottgu.com/default.aspx SEO Problem #2: Different URL Casings Web developers often don’t realize URLs are case sensitive to search engines on the web.  This means that search engines will treat the following links as two completely different URLs: http://scottgu.com/Albums.aspx http://scottgu.com/albums.aspx SEO Problem #3: Trailing Slashes Consider the below two URLs – they might look the same at first, but they are subtly different. The trailing slash creates yet another situation that causes search engines to treat the URLs as different and so split search rankings: http://scottgu.com http://scottgu.com/ SEO Problem #4: Canonical Host Names Sometimes sites support scenarios where they support a web-site with both a leading “www” hostname prefix as well as just the hostname itself.  This causes search engines to treat the URLs as different and split search rankling: http://scottgu.com/albums.aspx/ http://www.scottgu.com/albums.aspx/ How to Easily Fix these SEO Problems in 10 minutes (or less) using IIS Rewrite If you haven’t been careful when coding your sites, chances are you are suffering from one (or more) of the above SEO problems.  Addressing these issues will improve your search engine relevancy ranking and drive more traffic to your site. The “good news” is that fixing the above 4 issues is really easy using the URL Rewrite Extension.  This is a completely free Microsoft extension available for IIS 7.x (on Windows Server 2008, Windows Server 2008 R2, Windows 7 and Windows Vista).  The great thing about using the IIS Rewrite extension is that it allows you to fix the above problems *without* having to change any code within your applications.  You can easily install the URL Rewrite Extension in under 3 minutes using the Microsoft Web Platform Installer (a free tool we ship that automates setting up web servers and development machines).  Just click the green “Install Now” button on the URL Rewrite Spotlight page to install it on your Windows Server 2008, Windows 7 or Windows Vista machine: Once installed you’ll find that a new “URL Rewrite” icon is available within the IIS 7 Admin Tool: Double-clicking the icon will open up the URL Rewrite admin panel – which will display the list of URL Rewrite rules configured for a particular application or site: Notice that our rewrite rule list above is currently empty (which is the default when you first install the extension).  We can click the “Add Rule…” link button in the top-right of the panel to add and enable new URL Rewriting logic for our site.  Scenario 1: Handling Default Document Scenarios One of the SEO problems I discussed earlier in this post was the scenario where the “default document” feature of IIS causes you to inadvertently expose two URLs for the same content on your site.  For example: http://scottgu.com/ http://scottgu.com/default.aspx We can fix this by adding a new IIS Rewrite rule that automatically redirects anyone who navigates to the second URL to instead go to the first one.  We will setup the HTTP redirect to be a “permanent redirect” – which will indicate to search engines that they should follow the redirect and use the new URL they are redirected to as the identifier of the content they retrieve.  Let’s look at how we can create such a rule.  We’ll begin by clicking the “Add Rule” link in the screenshot above.  
This will cause the below dialog to display: We’ll select the “Blank Rule” template within the “Inbound rules” section to create a new custom URL Rewriting rule.  This will display an empty pane like below: Don’t worry – setting up the above rule is easy.  The following 4 steps explain how to do so: Step 1: Name the Rule Our first step will be to name the rule we are creating.  Naming it with a descriptive name will make it easier to find and understand later.  Let’s name this rule our “Default Document URL Rewrite” rule: Step 2: Setup the Regular Expression that Matches this Rule Our second step will be to specify a regular expression filter that will cause this rule to execute when an incoming URL matches the regex pattern.   Don’t worry if you aren’t good with regular expressions - I suck at them too. The trick is to know someone who is good at them or copy/paste them from a web-site.  Below we are going to specify the following regular expression as our pattern rule: (.*?)/?Default\.aspx$ This pattern will match any URL string that ends with Default.aspx. The "(.*?)" matches any preceding character zero or more times. The "/?" part says to match the slash symbol zero or one times. The "$" symbol at the end will ensure that the pattern will only match strings that end with Default.aspx.  Combining all these regex elements allows this rule to work not only for the root of your web site (e.g. http://scottgu.com/default.aspx) but also for any application or subdirectory within the site (e.g. http://scottgu.com/photos/default.aspx.  Because the “ignore case” checkbox is selected it will match both “Default.aspx” as well as “default.aspx” within the URL.   One nice feature built-into the rule editor is a “Test pattern” button that you can click to bring up a dialog that allows you to test out a few URLs with the rule you are configuring: Above I've added a “products/default.aspx” URL and clicked the “Test” button.  This will give me immediate feedback on whether the rule will execute for it.  Step 3: Setup a Permanent Redirect Action We’ll then setup an action to occur when our regular expression pattern matches the incoming URL: In the dialog above I’ve changed the “Action Type” drop down to be a “Redirect” action.  The “Redirect Type” will be a HTTP 301 Permanent redirect – which means search engines will follow it. I’ve also set the “Redirect URL” property to be: {R:1}/ This indicates that we want to redirect the web client requesting the original URL to a new URL that has the originally requested URL path - minus the "Default.aspx" in it.  For example, requests for http://scottgu.com/default.aspx will be redirected to http://scottgu.com/, and requests for http://scottgu.com/photos/default.aspx will be redirected to http://scottgu.com/photos/ The "{R:N}" regex construct, where N >= 0, is called a back-reference and N is the back-reference index. In the case of our pattern "(.*?)/?Default\.aspx$", if the input URL is "products/Default.aspx" then {R:0} will contain "products/Default.aspx" and {R:1} will contain "products".  We are going to use this {R:1}/ value to be the URL we redirect users to.  
Step 4: Apply and Save the Rule Our final step is to click the “Apply” button in the top right hand of the IIS admin tool – which will cause the tool to persist the URL Rewrite rule into our application’s root web.config file (under a <system.webServer/rewrite> configuration section): <configuration>     <system.webServer>         <rewrite>             <rules>                 <rule name="Default Document" stopProcessing="true">                     <match url="(.*?)/?Default\.aspx$" />                     <action type="Redirect" url="{R:1}/" />                 </rule>             </rules>         </rewrite>     </system.webServer> </configuration> Because IIS 7.x and ASP.NET share the same web.config files, you can actually just copy/paste the above code into your web.config files using Visual Studio and skip the need to run the admin tool entirely.  This also makes adding/deploying URL Rewrite rules with your ASP.NET applications really easy. Step 5: Try the Rule Out Now that we’ve saved the rule, let’s try it out on our site.  Try the following two URLs on my site: http://scottgu.com/ http://scottgu.com/default.aspx Notice that the second URL automatically redirects to the first one.  Because it is a permanent redirect, search engines will follow the URL and should update the page ranking of http://scottgu.com to include links to http://scottgu.com/default.aspx as well. Scenario 2: Different URL Casing Another common SEO problem I discussed earlier in this post is that URLs are case sensitive to search engines on the web.  This means that search engines will treat the following links as two completely different URLs: http://scottgu.com/Albums.aspx http://scottgu.com/albums.aspx We can fix this by adding a new IIS Rewrite rule that automatically redirects anyone who navigates to the first URL to instead go to the second (all lower-case) one.  Like before, we will setup the HTTP redirect to be a “permanent redirect” – which will indicate to search engines that they should follow the redirect and use the new URL they are redirected to as the identifier of the content they retrieve. To create such a rule we’ll click the “Add Rule” link in the URL Rewrite admin tool again.  This will cause the “Add Rule” dialog to appear again: Unlike the previous scenario (where we created a “Blank Rule”), with this scenario we can take advantage of a built-in “Enforce lowercase URLs” rule template.  When we click the “ok” button we’ll see the following dialog which asks us if we want to create a rule that enforces the use of lowercase letters in URLs: When we click the “Yes” button we’ll get a pre-written rule that automatically performs a permanent redirect if an incoming URL has upper-case characters in it – and automatically send users to a lower-case version of the URL: We can click the “Apply” button to use this rule “as-is” and have it apply to all incoming URLs to our site.  Because my www.scottgu.com site uses ASP.NET Web Forms, I’m going to make one small change to the rule we generated above – which is to add a condition that will ensure that URLs to ASP.NET’s built-in “WebResource.axd” handler are excluded from our case-sensitivity URL Rewrite logic.  URLs to the WebResource.axd handler will only come from server-controls emitted from my pages – and will never be linked to from external sites.  While my site will continue to function fine if we redirect these URLs to automatically be lower-case – doing so isn’t necessary and will add an extra HTTP redirect to many of my pages.  
The good news is that adding a condition that prevents my URL Rewriting rule from happening with certain URLs is easy.  We simply need to expand the “Conditions” section of the form above We can then click the “Add” button to add a condition clause.  This will bring up the “Add Condition” dialog: Above I’ve entered {URL} as the Condition input – and said that this rule should only execute if the URL does not match a regex pattern which contains the string “WebResource.axd”.  This will ensure that WebResource.axd URLs to my site will be allowed to execute just fine without having the URL be re-written to be all lower-case. Note: If you have static resources (like references to .jpg, .css, and .js files) within your site that currently use upper-case characters you’ll probably want to add additional condition filter clauses so that URLs to them also don’t get redirected to be lower-case (just add rules for patterns like .jpg, .gif, .js, etc).  Your site will continue to work fine if these URLs get redirected to be lower case (meaning the site won’t break) – but it will cause an extra HTTP redirect to happen on your site for URLs that don’t need to be redirected for SEO reasons.  So setting up a condition clause makes sense to add. When I click the “ok” button above and apply our lower-case rewriting rule the admin tool will save the following additional rule to our web.config file: <configuration>     <system.webServer>         <rewrite>             <rules>                 <rule name="Default Document" stopProcessing="true">                     <match url="(.*?)/?Default\.aspx$" />                     <action type="Redirect" url="{R:1}/" />                 </rule>                 <rule name="Lower Case URLs" stopProcessing="true">                     <match url="[A-Z]" ignoreCase="false" />                     <conditions logicalGrouping="MatchAll" trackAllCaptures="false">                         <add input="{URL}" pattern="WebResource.axd" negate="true" />                     </conditions>                     <action type="Redirect" url="{ToLower:{URL}}" />                 </rule>             </rules>         </rewrite>     </system.webServer> </configuration> Try the Rule Out Now that we’ve saved the rule, let’s try it out on our site.  Try the following two URLs on my site: http://scottgu.com/Albums.aspx http://scottgu.com/albums.aspx Notice that the first URL (which has a capital “A”) automatically does a redirect to a lower-case version of the URL.  Scenario 3: Trailing Slashes Another common SEO problem I discussed earlier in this post is the scenario of trailing slashes within URLs.  The trailing slash creates yet another situation that causes search engines to treat the URLs as different and so split search rankings: http://scottgu.com http://scottgu.com/ We can fix this by adding a new IIS Rewrite rule that automatically redirects anyone who navigates to the first URL (that does not have a trailing slash) to instead go to the second one that does.  Like before, we will setup the HTTP redirect to be a “permanent redirect” – which will indicate to search engines that they should follow the redirect and use the new URL they are redirected to as the identifier of the content they retrieve.  To create such a rule we’ll click the “Add Rule” link in the URL Rewrite admin tool again.  This will cause the “Add Rule” dialog to appear again: The URL Rewrite admin tool has a built-in “Append or remove the trailing slash symbol” rule template.  
When we select it and click the “ok” button we’ll see the following dialog which asks us if we want to create a rule that automatically redirects users to a URL with a trailing slash if one isn’t present: Like within our previous lower-casing rewrite rule we’ll add one additional condition clause that will exclude WebResource.axd URLs from being processed by this rule.  This will avoid an unnecessary redirect for happening for those URLs. When we click the “OK” button we’ll get a pre-written rule that automatically performs a permanent redirect if the URL doesn’t have a trailing slash – and if the URL is not processed by either a directory or a file.  This will save the following additional rule to our web.config file: <configuration>     <system.webServer>         <rewrite>             <rules>                 <rule name="Default Document" stopProcessing="true">                     <match url="(.*?)/?Default\.aspx$" />                     <action type="Redirect" url="{R:1}/" />                 </rule>                 <rule name="Lower Case URLs" stopProcessing="true">                     <match url="[A-Z]" ignoreCase="false" />                     <conditions logicalGrouping="MatchAll" trackAllCaptures="false">                         <add input="{URL}" pattern="WebResource.axd" negate="true" />                     </conditions>                     <action type="Redirect" url="{ToLower:{URL}}" />                 </rule>                 <rule name="Trailing Slash" stopProcessing="true">                     <match url="(.*[^/])$" />                     <conditions logicalGrouping="MatchAll" trackAllCaptures="false">                         <add input="{REQUEST_FILENAME}" matchType="IsDirectory" negate="true" />                         <add input="{REQUEST_FILENAME}" matchType="IsFile" negate="true" />                         <add input="{URL}" pattern="WebResource.axd" negate="true" />                     </conditions>                     <action type="Redirect" url="{R:1}/" />                 </rule>             </rules>         </rewrite>     </system.webServer> </configuration> Try the Rule Out Now that we’ve saved the rule, let’s try it out on our site.  Try the following two URLs on my site: http://scottgu.com http://scottgu.com/ Notice that the first URL (which has no trailing slash) automatically does a redirect to a URL with the trailing slash.  Because it is a permanent redirect, search engines will follow the URL and update the page ranking. Scenario 4: Canonical Host Names The final SEO problem I discussed earlier are scenarios where a site works with both a leading “www” hostname prefix as well as just the hostname itself.  This causes search engines to treat the URLs as different and split search rankling: http://www.scottgu.com/albums.aspx http://scottgu.com/albums.aspx We can fix this by adding a new IIS Rewrite rule that automatically redirects anyone who navigates to the first URL (that has a www prefix) to instead go to the second URL.  Like before, we will setup the HTTP redirect to be a “permanent redirect” – which will indicate to search engines that they should follow the redirect and use the new URL they are redirected to as the identifier of the content they retrieve.  To create such a rule we’ll click the “Add Rule” link in the URL Rewrite admin tool again.  This will cause the “Add Rule” dialog to appear again: The URL Rewrite admin tool has a built-in “Canonical domain name” rule template.  
When we select it and click the “ok” button we’ll see the following dialog which asks us if we want to create a redirect rule that automatically redirects users to a primary host name URL: Above I’m entering the primary URL address I want to expose to the web: scottgu.com.  When we click the “OK” button we’ll get a pre-written rule that automatically performs a permanent redirect if the URL has another leading domain name prefix.  This will save the following additional rule to our web.config file: <configuration>     <system.webServer>         <rewrite>             <rules>                 <rule name="Cannonical Hostname">                     <match url="(.*)" />                     <conditions logicalGrouping="MatchAll" trackAllCaptures="false">                         <add input="{HTTP_HOST}" pattern="^scottgu\.com$" negate="true" />                     </conditions>                     <action type="Redirect" url="http://scottgu.com/{R:1}" />                 </rule>                 <rule name="Default Document" stopProcessing="true">                     <match url="(.*?)/?Default\.aspx$" />                     <action type="Redirect" url="{R:1}/" />                 </rule>                 <rule name="Lower Case URLs" stopProcessing="true">                     <match url="[A-Z]" ignoreCase="false" />                     <conditions logicalGrouping="MatchAll" trackAllCaptures="false">                         <add input="{URL}" pattern="WebResource.axd" negate="true" />                     </conditions>                     <action type="Redirect" url="{ToLower:{URL}}" />                 </rule>                 <rule name="Trailing Slash" stopProcessing="true">                     <match url="(.*[^/])$" />                     <conditions logicalGrouping="MatchAll" trackAllCaptures="false">                         <add input="{REQUEST_FILENAME}" matchType="IsDirectory" negate="true" />                         <add input="{REQUEST_FILENAME}" matchType="IsFile" negate="true" />                         <add input="{URL}" pattern="WebResource.axd" negate="true" />                     </conditions>                     <action type="Redirect" url="{R:1}/" />                 </rule>             </rules>         </rewrite>     </system.webServer> </configuration> Try the Rule Out Now that we’ve saved the rule, let’s try it out on our site.  Try the following two URLs on my site: http://www.scottgu.com/albums.aspx http://scottgu.com/albums.aspx Notice that the first URL (which has the “www” prefix) now automatically does a redirect to the second URL which does not have the www prefix.  Because it is a permanent redirect, search engines will follow the URL and update the page ranking. 4 Simple Rules for Improved SEO The above 4 rules are pretty easy to setup and should take less than 15 minutes to configure on existing sites you already have.  The beauty of using a solution like the URL Rewrite Extension is that you can take advantage of it without having to change code within your web-site – and without having to break any existing links already pointing at your site.  Users who follow existing links will be automatically redirected to the new URLs you wish to publish.  And search engines will start to give your site a higher search relevancy ranking – which will list your site higher in search results and drive more traffic to it. 
Customizing your URL Rewriting rules further is easy to-do either by editing the web.config file directly, or alternatively, just double click the URL Rewrite icon within the IIS 7.x admin tool and it will list all the active rules for your web-site or application: Clicking any of the rules above will open the rules editor back up and allow you to tweak/customize/save them further. Summary Measuring and improving SEO is something every developer building a public-facing web-site needs to think about and focus on.  If you haven’t already, download and use the SEO Toolkit to analyze the SEO of your sites today. New URL Routing features in ASP.NET MVC and ASP.NET Web Forms 4 make it much easier to build applications that have more control over the URLs that are published.  Tools like the URL Rewrite Extension that I’ve talked about in this blog post make it much easier to improve the URLs that are published from sites you already have built today – without requiring you to change a lot of code. The URL Rewrite Extension provides a bunch of additional great capabilities – far beyond just SEO - as well.  I’ll be covering these additional capabilities more in future blog posts. Hope this helps, Scott

    Read the article

  • JVM process resident set size "equals" max heap size, not current heap size

    - by Volune
    After a few reading about jvm memory (here, here, here, others I forgot...), I am expecting the resident set size of my java process to be roughly equal to the current heap space capacity. That's not what the numbers are saying, it seems to be roughly equal to the max heap space capacity: Resident set size: # echo 0 $(cat /proc/1/smaps | grep Rss | awk '{print $2}' | sed 's#^#+#') | bc 11507912 # ps -C java -O rss | gawk '{ count ++; sum += $2 }; END {count --; print "Number of processes =",count; print "Memory usage per process =",sum/1024/count, "MB"; print "Total memory usage =", sum/1024, "MB" ;};' Number of processes = 1 Memory usage per process = 11237.8 MB Total memory usage = 11237.8 MB Java heap # jmap -heap 1 Attaching to process ID 1, please wait... Debugger attached successfully. Server compiler detected. JVM version is 24.55-b03 using thread-local object allocation. Garbage-First (G1) GC with 18 thread(s) Heap Configuration: MinHeapFreeRatio = 10 MaxHeapFreeRatio = 20 MaxHeapSize = 10737418240 (10240.0MB) NewSize = 1363144 (1.2999954223632812MB) MaxNewSize = 17592186044415 MB OldSize = 5452592 (5.1999969482421875MB) NewRatio = 2 SurvivorRatio = 8 PermSize = 20971520 (20.0MB) MaxPermSize = 85983232 (82.0MB) G1HeapRegionSize = 2097152 (2.0MB) Heap Usage: G1 Heap: regions = 2560 capacity = 5368709120 (5120.0MB) used = 1672045416 (1594.586769104004MB) free = 3696663704 (3525.413230895996MB) 31.144272834062576% used G1 Young Generation: Eden Space: regions = 627 capacity = 3279945728 (3128.0MB) used = 1314914304 (1254.0MB) free = 1965031424 (1874.0MB) 40.089514066496164% used Survivor Space: regions = 49 capacity = 102760448 (98.0MB) used = 102760448 (98.0MB) free = 0 (0.0MB) 100.0% used G1 Old Generation: regions = 147 capacity = 1986002944 (1894.0MB) used = 252273512 (240.5867691040039MB) free = 1733729432 (1653.413230895996MB) 12.702574926293766% used Perm Generation: capacity = 39845888 (38.0MB) used = 38884120 (37.082786560058594MB) free = 961768 (0.9172134399414062MB) 97.58628042120682% used 14654 interned Strings occupying 2188928 bytes. Are my expectations wrong? What should I expect? I need the heap space to be able to grow during spikes (to avoid very slow Full GC), but I would like to have the resident set size as low as possible the rest of the time, to benefit the other processes running on the server. Is there a better way to achieve that? Linux 3.13.0-32-generic x86_64 java version "1.7.0_55" Running in Docker version 1.1.2 Java is running elasticsearch 1.2.0: /usr/bin/java -Xms5g -Xmx10g -XX:MinHeapFreeRatio=10 -XX:MaxHeapFreeRatio=20 -Xss256k -Djava.awt.headless=true -XX:+UseG1GC -XX:MaxGCPauseMillis=350 -XX:InitiatingHeapOccupancyPercent=45 -XX:+AggressiveOpts -XX:+UseCompressedOops -XX:-OmitStackTraceInFastThrow -XX:+PrintGCDetails -XX:+PrintGCTimeStamps -XX:+PrintClassHistogram -XX:+PrintTenuringDistribution -XX:+PrintGCApplicationStoppedTime -XX:+PrintGCApplicationConcurrentTime -Xloggc:/opt/elasticsearch/logs/gc.log -XX:+HeapDumpOnOutOfMemoryError -XX:HeapDumpPath=/opt elasticsearch/logs/heapdump.hprof -XX:ErrorFile=/opt/elasticsearch/logs/hs_err.log -Des.logger.port=99999 -Des.logger.host=999.999.999.999 -Delasticsearch -Des.foreground=yes -Des.path.home=/opt/elasticsearch -cp :/opt/elasticsearch/lib/elasticsearch-1.2.0.jar:/opt/elasticsearch/lib/*:/opt/elasticsearch/lib/sigar/* org.elasticsearch.bootstrap.Elasticsearch There actually are 5 elasticsearch nodes, each in a different docker container. All have about the same memory usage. 
Some stats about the index: size: 9.71Gi (19.4Gi) docs: 3,925,398 (4,052,694)

    Read the article

  • Text-editing program: multi-search-replace/multi-regex?

    - by rlb.usa
    I have a long and arduous text file, and I need to do lots and lots of the same search-and-replaces on it inside of selections. Is there a text editing program where I can do multiple find/replaces (or regexes) at one time? That is, I want to: (select text) - (do find-replace set A) - (do other stuff) - (repeat), instead of: (select text) - (f&r #1, f&r #2, f&r #3 ...) - (do other stuff) - (repeat). I have TextPad, but its macros won't handle find/replace.
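
    If no editor fits, a stream editor can apply an arbitrary batch of substitutions in one pass; the patterns below are placeholders for the real find/replace set. A sketch:

        # replacements.sed -- the whole find/replace set, one rule per line (placeholder patterns):
        #   s/foo/bar/g
        #   s/colour/color/g

        # Apply every rule in a single pass over the file, or over text extracted as a "selection".
        sed -f replacements.sed selection.txt > selection.fixed.txt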

    Read the article

  • Matching digits in Notepad++ extended search mode

    - by ketchup
    Notepad++'s manual is rather vague about the special character for numeric values used in extended search mode. It says: \d### - Decimal value (between 000 and 255), but literally entering "\d###" doesn't match anything. What I am trying to do is to replace

        if VarA == 12
            VarB = 1

    with

        if VarA == 12
            Var12=1
            VarB=1
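
    In extended mode, \d### inserts the character with that decimal code (for example \d065 is "A"); it is not a digit matcher. For matching digits, switch the Search Mode to "Regular expression", where \d matches a digit and capture groups work in the replacement. A sketch of the replacement above; recent Notepad++ builds accept \r\n in the fields in regex mode, and the exact whitespace is an assumption:

        Search Mode:  Regular expression
        Find what:    if VarA == (\d+)\r\n
        Replace with: if VarA == \1\r\n    Var\1=1\r\n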

    Read the article
