Search Results

Search found 33151 results on 1327 pages for 'www browser'.

Page 214 of 1327

  • Google Search w/ Chrome Incognito w/ Gnome Do

    - by jrc03c
    I've installed Google Chrome as my default browser in Ubuntu, and recently installed Gnome Do and enabled the Google Search plugin. The Google Search from Gnome Do works exactly as expected but for one thing: Chrome (which is typically set to open in "incognito" mode) does not open in "incognito" mode. The shortcuts on my desktop, taskbar, and menus all have the --incognito flag attached (which works just fine), but the browser refuses to open in this mode when launched from Gnome Do. Any suggestions? Also, please note the settings for the Google Search plugin in Gnome Do: It's obvious that Gnome Do just passes the Google Search blindly to the default browser. In other words, there are no configurable settings specifically for Chrome. Any thoughts?
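
    Gnome Do hands the search URL to whatever the desktop considers the preferred browser command, so the --incognito flag on individual shortcuts never comes into play. One commonly suggested workaround (untested here; the path is illustrative) is to point the preferred-browser setting at a small wrapper that always adds the flag:

        #!/bin/sh
        # ~/bin/chrome-incognito -- force incognito for anything that launches "the default browser"
        exec google-chrome --incognito "$@"

    Make it executable (chmod +x ~/bin/chrome-incognito) and select it as a custom command under System > Preferences > Preferred Applications > Web Browser (e.g. "~/bin/chrome-incognito %s"); Gnome Do should then inherit incognito mode for every search it launches.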

    Read the article

  • 503 service unavailable when debugging PHP script in Zend Studio

    - by user25932
    I have a web server with Apache 2.0 installed as part of the Zend Server install pack. When I try to debug my PHP files, Apache serves a blank page with '503 Service Unavailable'. Of course slow server-side code ties up Apache requests for far too long, but I need it to wait until my debugging session ends. When I request the page from a browser, the Zend Debugger module redirects the request and launches Zend Studio to debug my PHP script. I step through the script, and if I finish debugging within 120 seconds I return to the browser normally. When it takes more than 120 seconds, the browser displays '503 Service Unavailable' and I can't get back to the page output. I have even forced 'max_execution_time = 300' and 'max_input_time = 600' in php.ini and 'TimeOut = 500' in httpd.conf. It makes no difference whether the browser is Opera, IE or Firefox. I've spent two days googling with no right answer so far.

    Read the article

  • Send all traffic over VPN connection not working Windows VPN host

    - by Adam Schiavone
    I am trying to get a Mac (10.8) to connect through VPN to a server running Windows Server 2008 R2 and pass all requests from the Mac through the server. The VPN is set up and I can connect and access the server through a web browser, but for all other sites the DNS lookup fails. I have tried adding a DNS server to the VPN host. For example, say the VPN server also hosts a website, example.com. I connect to the VPN with my Mac, point a browser to example.com, and everything works fine. But when I point the browser to google.com it just sits there and eventually comes back with a DNS lookup failed message. However, running dig @myServersIpHere www.google.com on the Mac returns the correct IP addresses. I really don't know what to do from here. How can I route all requests from my Mac through my Windows server via the VPN?
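
    Since dig against the server's own IP resolves fine, the Mac is probably still sending ordinary DNS queries to its local resolver rather than to the one pushed over the VPN. A rough way to check and, if needed, force the DNS server for the VPN service from the Mac side (the service name and addresses below are placeholders, not taken from the question):

        networksetup -listallnetworkservices            # find the exact VPN service name
        scutil --dns | head -40                         # see which resolver handles which domains
        sudo networksetup -setdnsservers "VPN (L2TP)" 192.168.1.10   # example name and IP

    If DNS then resolves but traffic still stalls, the Windows side also needs RRAS routing/NAT enabled so that the VPN clients' non-local traffic can actually leave the server.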

    Read the article

  • mod_proxy failing as forward proxy in simple configuration

    - by Stabledog
    (On Mac OS X 10.6, Apache 2.2.11) Following the oft-repeated googled advice, I've set up mod_proxy on my Mac to act as a forward proxy for http requests. My httpd.conf contains this: <IfModule mod_proxy> ProxyRequests On ProxyVia On <Proxy *> Allow from all </Proxy> (Yes, I realize that's not ideal, but I'm behind a firewall trying to figure out why the thing doesn't work at all) So, when I point my browser's proxy settings to the local server (ip_address:80), here's what happens: I browse to http://www.cnn.com I see via sniffer that this is sent to Apache on the Mac Apache responds with its default home page ("It works!" is all this page says) So... Apache is not doing as expected -- it is not forwarding my browser's request out onto the Internet to cnn. Nothing in the logfile indicates an error or problem, and Apache returns a 200 header to the browser. Clearly there is some very basic configuration step I'm not understanding... but what?
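
    One thing worth checking before anything else: Apache's <IfModule> test matches the module file name (mod_proxy.c) or its identifier (proxy_module), not the bare name "mod_proxy". If the test doesn't match, the whole block is silently skipped, ProxyRequests never turns on, and Apache just serves its default page - which is exactly the symptom described. A sketch of a forward-proxy stanza that avoids that trap (the LoadModule paths are the OS X defaults and may already be present in the stock httpd.conf; tighten the Allow rule before trusting it):

        LoadModule proxy_module         libexec/apache2/mod_proxy.so
        LoadModule proxy_http_module    libexec/apache2/mod_proxy_http.so
        LoadModule proxy_connect_module libexec/apache2/mod_proxy_connect.so

        <IfModule mod_proxy.c>
            ProxyRequests On
            ProxyVia On
            <Proxy *>
                Order deny,allow
                Deny from all
                Allow from 127.0.0.1      # loopback only while testing
            </Proxy>
        </IfModule>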

    Read the article

  • Alternate port numbers for Supermicro IPMI View software

    - by MC9000
    I'm using the IPMI View software to manage a SuperMicro server, but would like to use alternate port numbers within the program itself. In other words: if I use the web browser, it defaults to port 80. I can change that port to, say, 12345 and type the IP address into the browser (like http://xxx.xxx.xxx.xxx:12345), and that works just fine. However, IPMIView assumes port 80 and loads the browser with the bare IP (which naturally won't work, so I have to type in the alternate port manually). I can deal with that. The clincher is that if I use a port other than 623 for management (say 55623), IPMIView will not find it. The same goes for the iKVM port. Is there some place, such as a settings file, to tell IPMIView to use the alternate port numbers? I'm running this from a Windows client.

    Read the article

  • Content-Length and Transfer-Encoding: chunked with nginx and node-http-proxy

    - by rampr
    I have the following setup: node-http-proxy acts as a reverse proxy, forwarding requests to nginx or socket.io as necessary. My problem is this: when I send an HTTP DELETE request from the browser, node-http-proxy adds a "Transfer-Encoding: chunked" header because the request from the browser had no Content-Length (it had no body). Nginx doesn't like the Transfer-Encoding: chunked header and throws a 411 asking for Content-Length. The problem goes away when I send dummy data as part of the DELETE request: there is then a Content-Length, node-http-proxy doesn't add the Transfer-Encoding header, and nginx is happy. I want to understand whether node-http-proxy is misbehaving by adding a Transfer-Encoding: chunked header whenever Content-Length is missing, even when there is no body at all.
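
    If upgrading nginx (1.3.9 and later accepts chunked request bodies) isn't an option, the usual alternative to padding the DELETE with dummy data is to normalise the headers on the proxied request yourself. The sketch below assumes the newer 1.x http-proxy API, which exposes a proxyReq hook; older node-http-proxy releases may need a different interception point, and the target is a placeholder:

        var httpProxy = require('http-proxy');
        var proxy = httpProxy.createProxyServer({ target: 'http://127.0.0.1:8080' }); // placeholder target

        proxy.on('proxyReq', function (proxyReq, req) {
          // Body-less requests (DELETE, GET, ...) carry no Content-Length, which is what
          // makes the proxy fall back to chunked encoding and nginx answer 411.
          if (!req.headers['content-length'] && !req.headers['transfer-encoding']) {
            proxyReq.setHeader('Content-Length', '0');
          }
        });

        require('http').createServer(function (req, res) {
          proxy.web(req, res);
        }).listen(3000);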

    Read the article

  • How to run Firefox jailed without serious performance loss?

    - by Vi
    My Firefox configuration is tricky: Firefox runs under a separate restricted user account which cannot connect to the main X server. Firefox uses Xvfb (a virtual "headless" X server) as its X server. x11vnc runs on that Xvfb, and on the main X server a vncviewer connects to this x11vnc. On a powerful laptop (Acer Extensa 5220) it seems to work more or less well, but on an Acer Aspire One netbook it is sluggish (granted, Firefox is loaded with lots of extensions). How can I optimise this scheme? Requirements: the browser cannot connect to the main X server; the browser should be in a chroot jail (no suid scripts, read-only for many things); the browser should keep its features (extensions like AutoPager, NoScript, WoT, Adblock Plus).
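
    For reference, the pipeline described above boils down to roughly the commands below (the jail user name, display number and resolution are illustrative, and the chroot step is omitted). On a netbook the cheapest wins are usually a 16-bit Xvfb depth and keeping x11vnc bound to localhost:

        sudo -u ffjail Xvfb :1 -screen 0 1024x600x16 &           # headless X server at 16-bit depth
        sudo -u ffjail sh -c 'DISPLAY=:1 firefox' &              # browser on the virtual display
        sudo -u ffjail x11vnc -display :1 -localhost -forever -shared &
        vncviewer localhost:0                                    # from the main X session

    Lower colour depth means less pixel data for x11vnc to compare and ship on every screen update, which is typically where the CPU time goes on an Atom-class machine.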

    Read the article

  • How to forbid programs from accessing a folder

    - by prashanth
    I use Firefox as my default browser; IE and Chrome are my secondary browsers. Apart from these, I need an alternate browser. I tried Opera, but when installing it, Opera automatically imported all my bookmarks and other profile data from the Firefox folder. I had a hard time safely deleting the copies Opera made without harming the original files Firefox uses, and I don't want this to happen again. Here's the situation: I now want to install another browser or reinstall Opera. How do I make the Firefox profile folder accessible only to firefox.exe and me (the administrator)? Can this be done via the Security tab under folder properties?

    Read the article

  • What happens when an HTTP request is terminated prematurely?

    - by Gowtham
    Suppose I enter a URL in my browser and the browser submits the HTTP request. The remote HTTP server accepts the request and starts a long task to serve it. If I terminate the request before it is complete (for example, by pressing Esc or the stop button in Firefox), how is the request closed? Will the browser communicate the abort to the server (I suspect it doesn't)? Presuming it doesn't, what will the server do with the result once the long task completes? Does it send it back anyway, and if so, what happens: does it reach my PC, or get lost on the way? This is just for my curiosity. Thanks for your time :)
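
    In practice the browser doesn't send an explicit abort message; it simply closes (or resets) the TCP connection. The server usually only notices when it next tries to read from or write to that socket, so the long task typically runs to completion and the result dies at the server when the write fails. A minimal, purely illustrative Python sketch that makes this visible:

        # Run this, open http://localhost:8000/ in a browser, and hit Esc/stop within 10 seconds.
        from http.server import BaseHTTPRequestHandler, HTTPServer
        import time

        class SlowHandler(BaseHTTPRequestHandler):
            def do_GET(self):
                time.sleep(10)  # the "long task" keeps running whether or not the client waits
                try:
                    self.send_response(200)
                    self.send_header("Content-Type", "text/plain")
                    self.end_headers()
                    self.wfile.write(b"finished the long task\n")
                except (BrokenPipeError, ConnectionResetError):
                    # A failed write is the first point where the server learns the client went away.
                    print("client aborted; result discarded")

        HTTPServer(("127.0.0.1", 8000), SlowHandler).serve_forever()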

    Read the article

  • IE I-Frame 1px border to the right

    - by Jackie
    Please look at http://www.mymix947.com. In the header I have a 1px border to the right of the banner: you will see a black line dividing the banner and the Listen Live button. The banner is in an iframe and I can't seem to eliminate the line. This seems to happen only with Windows 7 and IE 8.0.7. When my browser is full screen I don't see it, but if I shrink the browser slightly the line is there. Any tips would be great! Thanks!
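
    Without seeing the markup it's hard to be definitive, but a stray 1px line next to an iframe in IE8 usually comes from the frame's default border or from sub-pixel rounding in a fluid layout. Worth trying on the iframe itself (the src, size and colour are placeholders; the attributes are the standard legacy ones IE8 honours):

        <iframe src="banner.html" width="728" height="90"
                frameborder="0" scrolling="no" marginwidth="0" marginheight="0"
                style="border: 0; display: block; background: #000;"></iframe>

    Setting the background to the same colour as the surrounding header also hides the line when it is really a rounding gap rather than a border.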

    Read the article

  • Setting http_proxy for Chromium in shell

    - by iceboal
    In order to set a proxy in the Chromium browser, one needs to go to Settings → Under the Hood → Change Proxy Settings → Network Proxy. It's too complicated. How do I set http_proxy in the shell? I've tried export http_proxy=http://127.0.0.1:8080/ but it doesn't seem to work. Also, if you only want to set the proxy for the Chromium browser -- not your entire network -- the command line seems to be the only way. How can one set the proxy for Chromium from the command line?
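
    On Linux, Chromium generally follows the desktop's proxy settings rather than the http_proxy variable, but it does accept its own command-line switch, which scopes the proxy to just that browser instance:

        chromium-browser --proxy-server="http://127.0.0.1:8080"
        # or reuse the variable you already exported:
        chromium-browser --proxy-server="$http_proxy"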

    Read the article

  • Where and how does Kindle Cloud Reader store downloaded books, on a Windows 7 system?

    - by einpoklum
    I use Firefox and sometimes Chrome, on Windows 7. Amazon's in-browser Kindle Cloud Reader lets you "download" books for local/offline viewing. Where are these stored, given my OS and browser combination? I've searched the Users subdirectory for my user and could not find a relevant (separate) file in there, specifically not in the Firefox and Chrome profile directories. To clarify, the files are obviously not downloaded as-is; they are stored in some potentially obfuscated format, possibly in the browser's local store and possibly elsewhere. The question is: where and how exactly? (This was the first part of this question, but it wasn't answered there since it was not the main focus.)

    Read the article

  • How do uploads to the web work on local networks

    - by Saif Bechan
    Let's say I have two computers hooked up as a home network. They both use the same router, and the router is hooked up to the net. Now let's say I am working on computer A, and I can access files on computer B: computer B has a drive that is mounted on computer A as a network drive. Now I want to upload a file to a website. In a browser on computer A I go to the website and select 'upload file', and in the file browser I go to the network drive and select a file on computer B to upload. What happens in this case? Is the file uploaded directly from computer B to the website, or is the file first transferred to computer A and then to the website?

    Read the article

  • Files appearing empty or only partially transferred on FTP server

    - by james
    Firstly, apologies if this question has been asked and answered before, but I have had a look through related queries and found nothing identical. I have had a website for a few years and have never had any problem uploading files. But today, when I went to transfer a new HTML file onto the server, the file arrived, so I browsed to it to check the page as I always do -- and the browser wouldn't acknowledge it. After repeated attempts to transfer it, it finally seemed to go over, but only a quarter of the file size (4 KB out of 16 KB), so only the top of the page is viewable in my browser. I've tried transferring with a number of FTP clients with no luck. My expertise on this is limited and I can't really think of the next step; the server isn't full, so I'm just stumped. Any ideas? Any and all feedback is greatly appreciated.
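
    Hard to diagnose remotely, but two classic causes of truncated uploads are ASCII transfer mode and a flaky active-mode data connection. Before blaming the server it may be worth forcing binary and passive mode explicitly in a plain command-line client (host and file names are placeholders):

        ftp example.com
        ftp> binary        # byte-for-byte transfer, no line-ending translation
        ftp> passive       # client opens the data connection; friendlier to firewalls/NAT
        ftp> put page.html
        ftp> quit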

    Read the article

  • Stop Foxit PhantomPDF Standard from Opening PDFs in Internet Explorer 11

    - by pk.
    Is there a way to stop Foxit PhantomPDF from opening a linked PDF in the browser plugin? My desired behavior is for it to open the PDF in a Foxit PhantomPDF window. I've looked for an add-in, but there is none. Foxit support recommended the following solution -- select File > Preferences > File Associations > Advanced, uncheck "Include Browser...", select OK, then select "Make Default PDF Viewer" -- but it didn't make a difference. How exactly is Internet Explorer making the decision to open it in the browser plugin instead of a separate PhantomPDF window? How can I change that?

    Read the article

  • Sharepoint asks for NTLM credentials for every unique URL. How do I stop it?

    - by CamronBute
    I'm tasked with troubleshooting a problem we're having with a SP2010 site. The app is external, and there are several clients that must connect. Some clients are receiving a crazy amount of credential requests when trying to log on. It appears to ask for every unique URL (eg. every different picture, link, etc) and it won't stop. Other clients are having no problems. I cannot seem to replicate the issue, either. I'm attempting to replicate by restricting all settings (including cookies) on my own browser, but to no avail. I put the HTTP request under a microscope, and it's asking for NTLM credentials. The client is using IE8, and the browser is running in Protected Mode, but the browser settings cannot be determined... I'm guessing this is a webserver thing, simply because it appears to be an authentication thing. What might the problem be?

    Read the article

  • Request exceeded the limit of 10

    - by Webnet
    My logs are FULL of [Tue Jan 11 10:20:45 2011] [error] [client 99.162.115.123] Request exceeded the limit of 10 internal redirects due to probable configuration error. Use 'LimitInternalRecursion' to increase the limit if necessary. Use 'LogLevel debug' to get a backtrace., referer: https://www.domain.com/vehicles/Chevrolet/Uplander/2006 The problem is when I enable LogLevel debug we get HUGE error logs because all of our traffic is SSL. From what I can tell the file doesn't record these errors anymore, either that or it's so buried in SSL logs that I just can't find them. Here's my .htaccess Options -indexes RewriteEngine On RewriteRule ^battery/([^/]+)$ /browser/product?sku=BATTERY+$1&type=battery RewriteRule ^vehicles/([^/]+)/([^/]+)/([^/]+)/product([0-9]+)$ /browser/index.php?make=$1&model=$2&id=$3&%{QUERY_STRING} [L,NC] RewriteRule ^vehicles/([^/]+)/([^/]+)/([^/]+)/([0-9]+)$ /browser/product.php?make=$1&model=$2&year=$3&id=$4&%{QUERY_STRING} [L,NC] RewriteRule ^vehicles/([^/]+)/([^/]+)/([^/]+)$ /store/product/list.php?make=$1&model=$2&year=$3&%{QUERY_STRING} [L,NC] RewriteRule ^vehicles/([^/]+)/([^/]+)$ /vehicle/make/model/year/list.php?make=$1&model=$2&%{QUERY_STRING} [L,NC] RewriteRule ^vehicles/([^/]+)$ /vehicle/make/model/list.php?make=$1&%{QUERY_STRING} [L,NC]
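
    Whichever rule is looping, a common way to break this kind of recursion without enabling debug logging is to short-circuit requests that already point at a real file, a directory, or an already-rewritten path before the other rules run, and to make sure each rule that should end processing carries [L]. A sketch of the guard, with the prefixes taken from the rules above (adjust to match the real layout):

        Options -indexes
        RewriteEngine On

        # Bail out as soon as the request is a real file/dir or already an internal target
        RewriteCond %{REQUEST_FILENAME} -f [OR]
        RewriteCond %{REQUEST_FILENAME} -d [OR]
        RewriteCond %{REQUEST_URI} ^/(browser|store|vehicle)/
        RewriteRule ^ - [L]

        RewriteRule ^battery/([^/]+)$ /browser/product?sku=BATTERY+$1&type=battery [L,NC]
        # ... the vehicle rules unchanged, each ending in [L,NC] ...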

    Read the article

  • CentOS running inside VMware as WebServer times out on outside connection [migrated]

    - by Tom Hart
    I have a CentOS machine running inside VMware with PHP and Apache set up on it. If I open a browser on the VM and go to either localhost or 192.168.0.3, I get the phpinfo page I made in /var/www/html/index.php. But if I point my browser on the host (Windows 7) at 192.168.0.3, it times out. I can ping the IP address from Windows and get a response; I just can't reach it through the browser. Does anyone have any ideas what I need to do to get this working? This is my first time using a VM and I'm getting lost in the network settings.
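
    Two usual suspects when the guest can reach itself but the host can't: the VM's network mode (NAT hides the guest behind the host, bridged gives it its own 192.168.0.x address) and CentOS's default iptables rules, which block inbound port 80. Assuming the 192.168.0.3 address means bridged networking, opening the port looks roughly like this inside the VM (the "service iptables save" step applies to CentOS 5/6):

        sudo iptables -L INPUT -n --line-numbers        # is there an ACCEPT rule for port 80?
        sudo iptables -I INPUT -p tcp --dport 80 -j ACCEPT
        sudo service iptables save                      # persist across reboots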

    Read the article

  • Python and mechanize login script

    - by Perun
    Hi fellow programmers! I am trying to write a script to login into my universities "food balance" page using python and the mechanize module... This is the page I am trying to log into: http://www.wcu.edu/11407.asp The website has the following form to login: <FORM method=post action=https://itapp.wcu.edu/BanAuthRedirector/Default.aspx><INPUT value=https://cf.wcu.edu/busafrs/catcard/idsearch.cfm type=hidden name=wcuirs_uri> <P><B>WCU ID Number<BR></B><INPUT maxLength=12 size=12 type=password name=id> </P> <P><B>PIN<BR></B><INPUT maxLength=20 type=password name=PIN> </P> <P></P> <P><INPUT value="Request Access" type=submit name=submit> </P></FORM> From this we know that I need to fill in the following fields: 1. name=id 2. name=PIN With the action: action=https://itapp.wcu.edu/BanAuthRedirector/Default.aspx This is the script I have written thus far: #!/usr/bin/python2 -W ignore import mechanize, cookielib from time import sleep url = 'http://www.wcu.edu/11407.asp' myId = '11111111111' myPin = '22222222222' # Browser #br = mechanize.Browser() #br = mechanize.Browser(factory=mechanize.DefaultFactory(i_want_broken_xhtml_support=True)) br = mechanize.Browser(factory=mechanize.RobustFactory()) # Use this because of bad html tags in the html... # Cookie Jar cj = cookielib.LWPCookieJar() br.set_cookiejar(cj) # Browser options br.set_handle_equiv(True) br.set_handle_gzip(True) br.set_handle_redirect(True) br.set_handle_referer(True) br.set_handle_robots(False) # Follows refresh 0 but not hangs on refresh > 0 br.set_handle_refresh(mechanize._http.HTTPRefreshProcessor(), max_time=1) # User-Agent (fake agent to google-chrome linux x86_64) br.addheaders = [('User-agent','Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/535.11 (KHTML, like Gecko) Chrome/17.0.963.56 Safari/535.11'), ('Accept', 'text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8'), ('Accept-Encoding', 'gzip,deflate,sdch'), ('Accept-Language', 'en-US,en;q=0.8'), ('Accept-Charset', 'ISO-8859-1,utf-8;q=0.7,*;q=0.3')] # The site we will navigate into br.open(url) # Go though all the forms (for debugging only) for f in br.forms(): print f # Select the first (index two) form br.select_form(nr=2) # User credentials br.form['id'] = myId br.form['PIN'] = myPin br.form.action = 'https://itapp.wcu.edu/BanAuthRedirector/Default.aspx' # Login br.submit() # Wait 10 seconds sleep(10) # Save to a file f = file('mycatpage.html', 'w') f.write(br.response().read()) f.close() Now the problem... For some odd reason the page I get back (in mycatpage.html) is the login page and not the expected page that displays my "cat cash balance" and "number of block meals" left... Does anyone have any idea why? Keep in mind that everything is correct with the header files and while the id and pass are not really 111111111 and 222222222, the correct values do work with the website (using a browser...) 
Thanks in advance EDIT Another script I tried: from urllib import urlopen, urlencode import urllib2 import httplib url = 'https://itapp.wcu.edu/BanAuthRedirector/Default.aspx' myId = 'xxxxxxxx' myPin = 'xxxxxxxx' data = { 'id':myId, 'PIN':myPin, 'submit':'Request Access', 'wcuirs_uri':'https://cf.wcu.edu/busafrs/catcard/idsearch.cfm' } opener = urllib2.build_opener() opener.addheaders = [('User-agent','Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/535.11 (KHTML, like Gecko) Chrome/17.0.963.56 Safari/535.11'), ('Accept', 'text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8'), ('Accept-Encoding', 'gzip,deflate,sdch'), ('Accept-Language', 'en-US,en;q=0.8'), ('Accept-Charset', 'ISO-8859-1,utf-8;q=0.7,*;q=0.3')] request = urllib2.Request(url, urlencode(data)) open("mycatpage.html", 'w').write(opener.open(request)) This has the same behavior...
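
    Without the real credentials this is guesswork, but one thing both scripts skip is checking where the POST actually lands: banner-style logins often answer with a redirect plus a session cookie and only then serve the balance page. A few standard mechanize/cookielib calls that make that visible, and that then fetch the idsearch.cfm page named by the hidden wcuirs_uri field once the cookie jar holds the session:

        br.submit()
        print br.geturl()               # still the login page, or somewhere new?
        print br.response().info()      # response headers -- look for Set-Cookie / Location
        for c in cj:                    # cookies mechanize is now carrying
            print c.name, c.domain

        # Try the balances page explicitly now that the session cookies are in the jar.
        br.open('https://cf.wcu.edu/busafrs/catcard/idsearch.cfm')
        open('mycatpage.html', 'w').write(br.response().read())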

    Read the article

  • Tip/Trick: Fix Common SEO Problems Using the URL Rewrite Extension

    - by ScottGu
    Search engine optimization (SEO) is important for any publically facing web-site.  A large % of traffic to sites now comes directly from search engines, and improving your site’s search relevancy will lead to more users visiting your site from search engine queries.  This can directly or indirectly increase the money you make through your site. This blog post covers how you can use the free Microsoft URL Rewrite Extension to fix a bunch of common SEO problems that your site might have.  It takes less than 15 minutes (and no code changes) to apply 4 simple URL Rewrite rules to your site, and in doing so cause search engines to drive more visitors and traffic to your site.  The techniques below work equally well with both ASP.NET Web Forms and ASP.NET MVC based sites.  They also works with all versions of ASP.NET (and even work with non-ASP.NET content). [In addition to blogging, I am also now using Twitter for quick updates and to share links. Follow me at: twitter.com/scottgu] Measuring the SEO of your website with the Microsoft SEO Toolkit A few months ago I blogged about the free SEO Toolkit that we’ve shipped.  This useful tool enables you to automatically crawl/scan your site for SEO correctness, and it then flags any SEO issues it finds.  I highly recommend downloading and using the tool against any public site you work on.  It makes it easy to spot SEO issues you might have in your site, and pinpoint ways to optimize it further. Below is a simple example of a report I ran against one of my sites (www.scottgu.com) prior to applying the URL Rewrite rules I’ll cover later in this blog post:   Search Relevancy and URL Splitting Two of the important things that search engines evaluate when assessing your site’s “search relevancy” are: How many other sites link to your content.  Search engines assume that if a lot of people around the web are linking to your content, then it is likely useful and so weight it higher in relevancy. The uniqueness of the content it finds on your site.  If search engines find that the content is duplicated in multiple places around the Internet (or on multiple URLs on your site) then it is likely to drop the relevancy of the content. One of the things you want to be very careful to avoid when building public facing sites is to not allow different URLs to retrieve the same content within your site.  Doing so will hurt with both of the situations above.  In particular, allowing external sites to link to the same content with multiple URLs will cause your link-count and page-ranking to be split up across those different URLs (and so give you a smaller page rank than what it would otherwise be if it was just one URL).  Not allowing external sites to link to you in different ways sounds easy in theory – but you might wonder what exactly this means in practice and how you avoid it. 4 Really Common SEO Problems Your Sites Might Have Below are 4 really common scenarios that can cause your site to inadvertently expose multiple URLs for the same content.  When this happens external sites linking to yours will end up splitting their page links across multiple URLs - and as a result cause you to have a lower page ranking with search engines than you deserve. SEO Problem #1: Default Document IIS (and other web servers) supports the concept of a “default document”.  This allows you to avoid having to explicitly specify the page you want to serve at either the root of the web-site/application, or within a sub-directory.  
This is convenient – but means that by default this content is available via two different publically exposed URLs (which is bad).  For example: http://scottgu.com/ http://scottgu.com/default.aspx SEO Problem #2: Different URL Casings Web developers often don’t realize URLs are case sensitive to search engines on the web.  This means that search engines will treat the following links as two completely different URLs: http://scottgu.com/Albums.aspx http://scottgu.com/albums.aspx SEO Problem #3: Trailing Slashes Consider the below two URLs – they might look the same at first, but they are subtly different. The trailing slash creates yet another situation that causes search engines to treat the URLs as different and so split search rankings: http://scottgu.com http://scottgu.com/ SEO Problem #4: Canonical Host Names Sometimes sites support scenarios where they support a web-site with both a leading “www” hostname prefix as well as just the hostname itself.  This causes search engines to treat the URLs as different and split search rankling: http://scottgu.com/albums.aspx/ http://www.scottgu.com/albums.aspx/ How to Easily Fix these SEO Problems in 10 minutes (or less) using IIS Rewrite If you haven’t been careful when coding your sites, chances are you are suffering from one (or more) of the above SEO problems.  Addressing these issues will improve your search engine relevancy ranking and drive more traffic to your site. The “good news” is that fixing the above 4 issues is really easy using the URL Rewrite Extension.  This is a completely free Microsoft extension available for IIS 7.x (on Windows Server 2008, Windows Server 2008 R2, Windows 7 and Windows Vista).  The great thing about using the IIS Rewrite extension is that it allows you to fix the above problems *without* having to change any code within your applications.  You can easily install the URL Rewrite Extension in under 3 minutes using the Microsoft Web Platform Installer (a free tool we ship that automates setting up web servers and development machines).  Just click the green “Install Now” button on the URL Rewrite Spotlight page to install it on your Windows Server 2008, Windows 7 or Windows Vista machine: Once installed you’ll find that a new “URL Rewrite” icon is available within the IIS 7 Admin Tool: Double-clicking the icon will open up the URL Rewrite admin panel – which will display the list of URL Rewrite rules configured for a particular application or site: Notice that our rewrite rule list above is currently empty (which is the default when you first install the extension).  We can click the “Add Rule…” link button in the top-right of the panel to add and enable new URL Rewriting logic for our site.  Scenario 1: Handling Default Document Scenarios One of the SEO problems I discussed earlier in this post was the scenario where the “default document” feature of IIS causes you to inadvertently expose two URLs for the same content on your site.  For example: http://scottgu.com/ http://scottgu.com/default.aspx We can fix this by adding a new IIS Rewrite rule that automatically redirects anyone who navigates to the second URL to instead go to the first one.  We will setup the HTTP redirect to be a “permanent redirect” – which will indicate to search engines that they should follow the redirect and use the new URL they are redirected to as the identifier of the content they retrieve.  Let’s look at how we can create such a rule.  We’ll begin by clicking the “Add Rule” link in the screenshot above.  
This will cause the below dialog to display: We’ll select the “Blank Rule” template within the “Inbound rules” section to create a new custom URL Rewriting rule.  This will display an empty pane like below: Don’t worry – setting up the above rule is easy.  The following 4 steps explain how to do so: Step 1: Name the Rule Our first step will be to name the rule we are creating.  Naming it with a descriptive name will make it easier to find and understand later.  Let’s name this rule our “Default Document URL Rewrite” rule: Step 2: Setup the Regular Expression that Matches this Rule Our second step will be to specify a regular expression filter that will cause this rule to execute when an incoming URL matches the regex pattern.   Don’t worry if you aren’t good with regular expressions - I suck at them too. The trick is to know someone who is good at them or copy/paste them from a web-site.  Below we are going to specify the following regular expression as our pattern rule: (.*?)/?Default\.aspx$ This pattern will match any URL string that ends with Default.aspx. The "(.*?)" matches any preceding character zero or more times. The "/?" part says to match the slash symbol zero or one times. The "$" symbol at the end will ensure that the pattern will only match strings that end with Default.aspx.  Combining all these regex elements allows this rule to work not only for the root of your web site (e.g. http://scottgu.com/default.aspx) but also for any application or subdirectory within the site (e.g. http://scottgu.com/photos/default.aspx.  Because the “ignore case” checkbox is selected it will match both “Default.aspx” as well as “default.aspx” within the URL.   One nice feature built-into the rule editor is a “Test pattern” button that you can click to bring up a dialog that allows you to test out a few URLs with the rule you are configuring: Above I've added a “products/default.aspx” URL and clicked the “Test” button.  This will give me immediate feedback on whether the rule will execute for it.  Step 3: Setup a Permanent Redirect Action We’ll then setup an action to occur when our regular expression pattern matches the incoming URL: In the dialog above I’ve changed the “Action Type” drop down to be a “Redirect” action.  The “Redirect Type” will be a HTTP 301 Permanent redirect – which means search engines will follow it. I’ve also set the “Redirect URL” property to be: {R:1}/ This indicates that we want to redirect the web client requesting the original URL to a new URL that has the originally requested URL path - minus the "Default.aspx" in it.  For example, requests for http://scottgu.com/default.aspx will be redirected to http://scottgu.com/, and requests for http://scottgu.com/photos/default.aspx will be redirected to http://scottgu.com/photos/ The "{R:N}" regex construct, where N >= 0, is called a back-reference and N is the back-reference index. In the case of our pattern "(.*?)/?Default\.aspx$", if the input URL is "products/Default.aspx" then {R:0} will contain "products/Default.aspx" and {R:1} will contain "products".  We are going to use this {R:1}/ value to be the URL we redirect users to.  
Step 4: Apply and Save the Rule Our final step is to click the “Apply” button in the top right hand of the IIS admin tool – which will cause the tool to persist the URL Rewrite rule into our application’s root web.config file (under a <system.webServer/rewrite> configuration section): <configuration>     <system.webServer>         <rewrite>             <rules>                 <rule name="Default Document" stopProcessing="true">                     <match url="(.*?)/?Default\.aspx$" />                     <action type="Redirect" url="{R:1}/" />                 </rule>             </rules>         </rewrite>     </system.webServer> </configuration> Because IIS 7.x and ASP.NET share the same web.config files, you can actually just copy/paste the above code into your web.config files using Visual Studio and skip the need to run the admin tool entirely.  This also makes adding/deploying URL Rewrite rules with your ASP.NET applications really easy. Step 5: Try the Rule Out Now that we’ve saved the rule, let’s try it out on our site.  Try the following two URLs on my site: http://scottgu.com/ http://scottgu.com/default.aspx Notice that the second URL automatically redirects to the first one.  Because it is a permanent redirect, search engines will follow the URL and should update the page ranking of http://scottgu.com to include links to http://scottgu.com/default.aspx as well. Scenario 2: Different URL Casing Another common SEO problem I discussed earlier in this post is that URLs are case sensitive to search engines on the web.  This means that search engines will treat the following links as two completely different URLs: http://scottgu.com/Albums.aspx http://scottgu.com/albums.aspx We can fix this by adding a new IIS Rewrite rule that automatically redirects anyone who navigates to the first URL to instead go to the second (all lower-case) one.  Like before, we will setup the HTTP redirect to be a “permanent redirect” – which will indicate to search engines that they should follow the redirect and use the new URL they are redirected to as the identifier of the content they retrieve. To create such a rule we’ll click the “Add Rule” link in the URL Rewrite admin tool again.  This will cause the “Add Rule” dialog to appear again: Unlike the previous scenario (where we created a “Blank Rule”), with this scenario we can take advantage of a built-in “Enforce lowercase URLs” rule template.  When we click the “ok” button we’ll see the following dialog which asks us if we want to create a rule that enforces the use of lowercase letters in URLs: When we click the “Yes” button we’ll get a pre-written rule that automatically performs a permanent redirect if an incoming URL has upper-case characters in it – and automatically send users to a lower-case version of the URL: We can click the “Apply” button to use this rule “as-is” and have it apply to all incoming URLs to our site.  Because my www.scottgu.com site uses ASP.NET Web Forms, I’m going to make one small change to the rule we generated above – which is to add a condition that will ensure that URLs to ASP.NET’s built-in “WebResource.axd” handler are excluded from our case-sensitivity URL Rewrite logic.  URLs to the WebResource.axd handler will only come from server-controls emitted from my pages – and will never be linked to from external sites.  While my site will continue to function fine if we redirect these URLs to automatically be lower-case – doing so isn’t necessary and will add an extra HTTP redirect to many of my pages.  
The good news is that adding a condition that prevents my URL Rewriting rule from happening with certain URLs is easy.  We simply need to expand the “Conditions” section of the form above We can then click the “Add” button to add a condition clause.  This will bring up the “Add Condition” dialog: Above I’ve entered {URL} as the Condition input – and said that this rule should only execute if the URL does not match a regex pattern which contains the string “WebResource.axd”.  This will ensure that WebResource.axd URLs to my site will be allowed to execute just fine without having the URL be re-written to be all lower-case. Note: If you have static resources (like references to .jpg, .css, and .js files) within your site that currently use upper-case characters you’ll probably want to add additional condition filter clauses so that URLs to them also don’t get redirected to be lower-case (just add rules for patterns like .jpg, .gif, .js, etc).  Your site will continue to work fine if these URLs get redirected to be lower case (meaning the site won’t break) – but it will cause an extra HTTP redirect to happen on your site for URLs that don’t need to be redirected for SEO reasons.  So setting up a condition clause makes sense to add. When I click the “ok” button above and apply our lower-case rewriting rule the admin tool will save the following additional rule to our web.config file: <configuration>     <system.webServer>         <rewrite>             <rules>                 <rule name="Default Document" stopProcessing="true">                     <match url="(.*?)/?Default\.aspx$" />                     <action type="Redirect" url="{R:1}/" />                 </rule>                 <rule name="Lower Case URLs" stopProcessing="true">                     <match url="[A-Z]" ignoreCase="false" />                     <conditions logicalGrouping="MatchAll" trackAllCaptures="false">                         <add input="{URL}" pattern="WebResource.axd" negate="true" />                     </conditions>                     <action type="Redirect" url="{ToLower:{URL}}" />                 </rule>             </rules>         </rewrite>     </system.webServer> </configuration> Try the Rule Out Now that we’ve saved the rule, let’s try it out on our site.  Try the following two URLs on my site: http://scottgu.com/Albums.aspx http://scottgu.com/albums.aspx Notice that the first URL (which has a capital “A”) automatically does a redirect to a lower-case version of the URL.  Scenario 3: Trailing Slashes Another common SEO problem I discussed earlier in this post is the scenario of trailing slashes within URLs.  The trailing slash creates yet another situation that causes search engines to treat the URLs as different and so split search rankings: http://scottgu.com http://scottgu.com/ We can fix this by adding a new IIS Rewrite rule that automatically redirects anyone who navigates to the first URL (that does not have a trailing slash) to instead go to the second one that does.  Like before, we will setup the HTTP redirect to be a “permanent redirect” – which will indicate to search engines that they should follow the redirect and use the new URL they are redirected to as the identifier of the content they retrieve.  To create such a rule we’ll click the “Add Rule” link in the URL Rewrite admin tool again.  This will cause the “Add Rule” dialog to appear again: The URL Rewrite admin tool has a built-in “Append or remove the trailing slash symbol” rule template.  
When we select it and click the “ok” button we’ll see the following dialog which asks us if we want to create a rule that automatically redirects users to a URL with a trailing slash if one isn’t present: Like within our previous lower-casing rewrite rule we’ll add one additional condition clause that will exclude WebResource.axd URLs from being processed by this rule.  This will avoid an unnecessary redirect for happening for those URLs. When we click the “OK” button we’ll get a pre-written rule that automatically performs a permanent redirect if the URL doesn’t have a trailing slash – and if the URL is not processed by either a directory or a file.  This will save the following additional rule to our web.config file: <configuration>     <system.webServer>         <rewrite>             <rules>                 <rule name="Default Document" stopProcessing="true">                     <match url="(.*?)/?Default\.aspx$" />                     <action type="Redirect" url="{R:1}/" />                 </rule>                 <rule name="Lower Case URLs" stopProcessing="true">                     <match url="[A-Z]" ignoreCase="false" />                     <conditions logicalGrouping="MatchAll" trackAllCaptures="false">                         <add input="{URL}" pattern="WebResource.axd" negate="true" />                     </conditions>                     <action type="Redirect" url="{ToLower:{URL}}" />                 </rule>                 <rule name="Trailing Slash" stopProcessing="true">                     <match url="(.*[^/])$" />                     <conditions logicalGrouping="MatchAll" trackAllCaptures="false">                         <add input="{REQUEST_FILENAME}" matchType="IsDirectory" negate="true" />                         <add input="{REQUEST_FILENAME}" matchType="IsFile" negate="true" />                         <add input="{URL}" pattern="WebResource.axd" negate="true" />                     </conditions>                     <action type="Redirect" url="{R:1}/" />                 </rule>             </rules>         </rewrite>     </system.webServer> </configuration> Try the Rule Out Now that we’ve saved the rule, let’s try it out on our site.  Try the following two URLs on my site: http://scottgu.com http://scottgu.com/ Notice that the first URL (which has no trailing slash) automatically does a redirect to a URL with the trailing slash.  Because it is a permanent redirect, search engines will follow the URL and update the page ranking. Scenario 4: Canonical Host Names The final SEO problem I discussed earlier are scenarios where a site works with both a leading “www” hostname prefix as well as just the hostname itself.  This causes search engines to treat the URLs as different and split search rankling: http://www.scottgu.com/albums.aspx http://scottgu.com/albums.aspx We can fix this by adding a new IIS Rewrite rule that automatically redirects anyone who navigates to the first URL (that has a www prefix) to instead go to the second URL.  Like before, we will setup the HTTP redirect to be a “permanent redirect” – which will indicate to search engines that they should follow the redirect and use the new URL they are redirected to as the identifier of the content they retrieve.  To create such a rule we’ll click the “Add Rule” link in the URL Rewrite admin tool again.  This will cause the “Add Rule” dialog to appear again: The URL Rewrite admin tool has a built-in “Canonical domain name” rule template.  
When we select it and click the “ok” button we’ll see the following dialog which asks us if we want to create a redirect rule that automatically redirects users to a primary host name URL: Above I’m entering the primary URL address I want to expose to the web: scottgu.com.  When we click the “OK” button we’ll get a pre-written rule that automatically performs a permanent redirect if the URL has another leading domain name prefix.  This will save the following additional rule to our web.config file: <configuration>     <system.webServer>         <rewrite>             <rules>                 <rule name="Cannonical Hostname">                     <match url="(.*)" />                     <conditions logicalGrouping="MatchAll" trackAllCaptures="false">                         <add input="{HTTP_HOST}" pattern="^scottgu\.com$" negate="true" />                     </conditions>                     <action type="Redirect" url="http://scottgu.com/{R:1}" />                 </rule>                 <rule name="Default Document" stopProcessing="true">                     <match url="(.*?)/?Default\.aspx$" />                     <action type="Redirect" url="{R:1}/" />                 </rule>                 <rule name="Lower Case URLs" stopProcessing="true">                     <match url="[A-Z]" ignoreCase="false" />                     <conditions logicalGrouping="MatchAll" trackAllCaptures="false">                         <add input="{URL}" pattern="WebResource.axd" negate="true" />                     </conditions>                     <action type="Redirect" url="{ToLower:{URL}}" />                 </rule>                 <rule name="Trailing Slash" stopProcessing="true">                     <match url="(.*[^/])$" />                     <conditions logicalGrouping="MatchAll" trackAllCaptures="false">                         <add input="{REQUEST_FILENAME}" matchType="IsDirectory" negate="true" />                         <add input="{REQUEST_FILENAME}" matchType="IsFile" negate="true" />                         <add input="{URL}" pattern="WebResource.axd" negate="true" />                     </conditions>                     <action type="Redirect" url="{R:1}/" />                 </rule>             </rules>         </rewrite>     </system.webServer> </configuration> Try the Rule Out Now that we’ve saved the rule, let’s try it out on our site.  Try the following two URLs on my site: http://www.scottgu.com/albums.aspx http://scottgu.com/albums.aspx Notice that the first URL (which has the “www” prefix) now automatically does a redirect to the second URL which does not have the www prefix.  Because it is a permanent redirect, search engines will follow the URL and update the page ranking. 4 Simple Rules for Improved SEO The above 4 rules are pretty easy to setup and should take less than 15 minutes to configure on existing sites you already have.  The beauty of using a solution like the URL Rewrite Extension is that you can take advantage of it without having to change code within your web-site – and without having to break any existing links already pointing at your site.  Users who follow existing links will be automatically redirected to the new URLs you wish to publish.  And search engines will start to give your site a higher search relevancy ranking – which will list your site higher in search results and drive more traffic to it. 
Customizing your URL Rewriting rules further is easy to-do either by editing the web.config file directly, or alternatively, just double click the URL Rewrite icon within the IIS 7.x admin tool and it will list all the active rules for your web-site or application: Clicking any of the rules above will open the rules editor back up and allow you to tweak/customize/save them further. Summary Measuring and improving SEO is something every developer building a public-facing web-site needs to think about and focus on.  If you haven’t already, download and use the SEO Toolkit to analyze the SEO of your sites today. New URL Routing features in ASP.NET MVC and ASP.NET Web Forms 4 make it much easier to build applications that have more control over the URLs that are published.  Tools like the URL Rewrite Extension that I’ve talked about in this blog post make it much easier to improve the URLs that are published from sites you already have built today – without requiring you to change a lot of code. The URL Rewrite Extension provides a bunch of additional great capabilities – far beyond just SEO - as well.  I’ll be covering these additional capabilities more in future blog posts. Hope this helps, Scott

    Read the article

  • Resolve a URL from a Partial View (ASP.NET MVC)

    Working on an ASP.NET MVC application, I needed the ability to resolve a URL from a partial view. For example, I have an image I want to display, but I need to resolve the virtual path (say, ~/Content/Images/New.png) into a relative path that the browser can use, such as ../../Content/Images/New.png or /MyAppName/Content/Images/New.png. A standard view derives from the System.Web.UI.Page class, meaning you have access to the ResolveUrl and ResolveClientUrl methods and can call them directly from your view markup. The problem is that this does not work as expected in a partial view. What's a little confusing is that while such code compiles and the page renders when visited through a browser, the call to Page.ResolveClientUrl returns precisely what you pass in -- ~/Content/Images/New.png, in this instance. The browser doesn't know what to do with ~; it presumes it's part of the URL and sends the request to the server with the ~ still in it, which results in a broken image. I did a bit of searching online and found this handy tip from Stephen Walther - Using ResolveUrl in an HTML Helper. In a nutshell, Stephen shows how to create an extension method for the HtmlHelper class that uses the UrlHelper class to resolve a URL; specifically, he adds an Image extension method to HtmlHelper. I incorporated Stephen's code into my codebase and also created a more generic extension method, which I named ResolveUrl: public static MvcHtmlString ResolveUrl(this HtmlHelper htmlHelper, string url) { var urlHelper = new UrlHelper(htmlHelper.ViewContext.RequestContext); return MvcHtmlString.Create(urlHelper.Content(url)); } With this method in place you can resolve a URL in a partial view as shown in the sketch below, or you could use Stephen's Html.Image extension method (although my more generic Html.ResolveUrl method can be used in non-image scenarios where you need a relative URL from a virtual one in a partial view). Thanks for the helpful tip, Stephen! Happy Programming!
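
    The two inline usage snippets did not survive syndication, but from the surrounding text they were almost certainly image tags along these lines -- first the Page.ResolveClientUrl call that misbehaves inside a partial view, then the Html.ResolveUrl extension method defined above (a reconstruction, not the author's verbatim markup):

        <%-- In a partial view this emits "~/Content/Images/New.png" verbatim: broken image --%>
        <img src='<%= Page.ResolveClientUrl("~/Content/Images/New.png") %>' alt="New!" />

        <%-- With the Html.ResolveUrl extension method the path resolves correctly --%>
        <img src='<%= Html.ResolveUrl("~/Content/Images/New.png") %>' alt="New!" />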

    Read the article

  • Beware: Upgrade to ASP.NET MVC 2.0 with care if you use AntiForgeryToken

    - by James Crowley
    If you're thinking of upgrading to MVC 2.0 and you take advantage of the AntiForgeryToken support, then be careful - you can easily kick out all active visitors after the upgrade until they restart their browser. Why's this? For the anti-forgery validation to take place, ASP.NET MVC uses a session cookie called "__RequestVerificationToken_Lw__". This gets checked for and de-serialized on any page where there is an AntiForgeryToken() call. However, the format of this validation cookie has apparently changed between MVC 1.0 and MVC 2.0. What this means is that when you make the switch on your production server to MVC 2.0, suddenly all your visitors' session cookies are invalid, resulting in calls to AntiForgeryToken() throwing exceptions (even on a standard GET request) when de-serializing them: [InvalidCastException: Unable to cast object of type 'System.Web.UI.Triplet' to type 'System.Object[]'.]   System.Web.Mvc.AntiForgeryDataSerializer.Deserialize(String serializedToken) +104 [HttpAntiForgeryException (0x80004005): A required anti-forgery token was not supplied or was invalid.]   System.Web.Mvc.AntiForgeryDataSerializer.Deserialize(String serializedToken) +368   System.Web.Mvc.HtmlHelper.GetAntiForgeryTokenAndSetCookie(String salt, String domain, String path) +209   System.Web.Mvc.HtmlHelper.AntiForgeryToken(String salt, String domain, String path) +16   System.Web.Mvc.HtmlHelper.AntiForgeryToken() +10  <snip> So you've just kicked all your active users out of your site with exceptions until they think to restart their browser (to clear the session cookies). The only workaround for now is to either write some code that wipes this cookie, or disable use of AntiForgeryToken() in your MVC 2.0 site until you're confident all session cookies will have expired. That in itself isn't very straightforward, given how frequently people tend to hibernate/standby their machines - the session cookie will only clear once the browser has been shut down and re-opened. Hope this helps someone out there!
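
    The "write some code that wipes this cookie" option can be as small as a Global.asax error hook: when the stale MVC 1.0-format cookie blows up, expire it and retry the request so the visitor never sees the exception. A rough sketch (the cookie name comes from the post above; everything else is standard ASP.NET API, untested here):

        // Global.asax.cs
        protected void Application_Error(object sender, EventArgs e)
        {
            var ex = Server.GetLastError();
            if (ex is System.Web.Mvc.HttpAntiForgeryException ||
                (ex != null && ex.InnerException is System.Web.Mvc.HttpAntiForgeryException))
            {
                // Expire the MVC 1.0-era token cookie; MVC 2.0 will issue a fresh one.
                var stale = new HttpCookie("__RequestVerificationToken_Lw__", string.Empty)
                {
                    Expires = DateTime.Now.AddYears(-1)
                };
                Response.Cookies.Add(stale);

                Server.ClearError();
                Response.Redirect(Request.RawUrl, false);
            }
        }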

    Read the article

  • How to enable Google Drive offline access

    - by Gopinath
    Google's latest cloud offering, Google Drive, provides 5 GB of free storage to let you store documents, spreadsheets, photos and other files and access them from a variety of devices -- PCs, Macs, smartphones and tablets. You can also set up offline access to Google Drive so that you can reach your files on the move even without an internet connection. To access Google Drive offline you need the Chrome browser; here are the simple steps to set it up. Step 1: Log in to Google Drive and click the gear icon in the upper right of the window. Step 2: Select "Set up Docs offline" from the drop-down menu; the "Set up offline viewing of Google Docs" dialog will appear. Step 3: Authorize Google Chrome to store your Google Drive content by clicking "Allow offline docs", then install the Docs Chrome web app by clicking "Install from Chrome web store". You'll be taken to the Chrome Web Store, where you'll need to click Install on the right-hand side of the browser window. Step 4: Once the app is installed, you'll be taken to a Chrome page with the Google Docs app icon; click the icon to go back to your Documents List. Google Chrome takes a few minutes to prepare Google Drive for offline access by downloading all the files to your local computer. Once that's completed, you can access Google Drive files offline by pointing Chrome at drive.google.com. When offline, your Google Docs stored on Google Drive are available in view-only mode: you can open Documents, Spreadsheets and Presentations and see the content, but you can't edit them.

    Read the article
