Search Results

Search found 18568 results on 743 pages for 'url shortener'.

Page 45/743

  • Are shorter URLs better for SEO?

    - by articlestack
    Many people shorten their URLs, but as I understand it, this adds the overhead of an extra redirect, others cannot guess the target article from the URL, and it is presumably less friendly to "inurl:..." type searches. Should I shorten the URLs of my sites? Is there any advantage to short URLs besides the fact that they take fewer characters in the anchor tags on the page (good for page load)?

    Read the article

  • Force Google to reindex

    - by Matthias
    I changed the structure of my URLs. The pages are indexed by Google under the old structure, http://mypage.com/myfolder/page.aspx, and the new structure is http://mypage.com/page.aspx. Now all the URLs Google knows are wrong. How can I tell Google to reindex the site and that the structure has changed? Internally I redirect in ASP.NET when the URL contains myfolder, but I want Google to update the URLs.
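    One possible approach (a minimal sketch, not the poster's code, assuming ASP.NET 4 or later and that the old and new URLs differ only by the /myfolder/ prefix): answer requests for the old URLs with a permanent 301 redirect rather than an internal rewrite, so crawlers learn the new addresses, then resubmit the sitemap.
      // Global.asax.cs -- sketch only; a 301 tells Google the move is permanent.
      protected void Application_BeginRequest(object sender, EventArgs e)
      {
          string path = Request.Url.AbsolutePath;             // e.g. /myfolder/page.aspx
          const string oldFolder = "/myfolder/";

          if (path.StartsWith(oldFolder, StringComparison.OrdinalIgnoreCase))
          {
              string newPath = "/" + path.Substring(oldFolder.Length);   // -> /page.aspx
              Response.RedirectPermanent(newPath + Request.Url.Query);   // HTTP 301
          }
      }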

    Read the article

  • Fixing HTTP 400-499 error codes with 301 redirects in an .htaccess file

    - by user2131844
    Google previously indexed my website's pages (sitemap.xml) in the format below:
    www.domain.com/2013/04/18/hot?test-gadgets-of-2013-to-include-in-?your-list
    www.domain.com/2013/02/09/rin?gdroid
    I have resubmitted the sitemap, but there are still 404 errors in the Google/Bing engines. Could you please help me write a 301 redirect rule in the .htaccess file so that when someone clicks the URL
    www.domain.com/2013/02/09/rin?gdroid
    they are redirected to
    www.domain.com/rin?gdroid
    How can we write a rule in the .htaccess file to remove the date part (2013/02/09/)?
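    For what it's worth, a rough (untested) sketch of the kind of Apache mod_rewrite rule that strips a /YYYY/MM/DD/ prefix, assuming the rules live in the .htaccess at the site root and that post slugs stay unique without the date:
      RewriteEngine On
      # 301-redirect /2013/02/09/some-slug to /some-slug
      RewriteRule ^[0-9]{4}/[0-9]{2}/[0-9]{2}/(.+)$ /$1 [R=301,L]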

    Read the article

  • Correct redirect after posting comment - django comments framework

    - by Sachin
    I am using the Django comments framework to allow users to comment on my site, but there is a problem with the URL the user is redirected to after posting a comment. I render my comment form as:
      {% with comment.content_object.get_absolute_url as next %}
        {% render_comment_form for comment.content_object %}
      {% endwith %}
    The URL I am redirected to after the comment is posted is <comment.content_object.get_absolute_url>/?c=<comment.id>. For example, if I post a comment on a post with the URL /post/256/a-new-post/, I am redirected to /post/256/a-new-post/?c=99, where 99 is, say, the id of the comment just posted. What I want instead is something like /post/256/a-new-post/#c99. Secondly, when I call comment.get_absolute_url() I do not get the proper URL; instead I get a URL like /comments/cr/58/14/#c99, and this link is broken. How can I get the correct URL as mentioned in the documentation? Am I doing something wrong? Any help is appreciated.

    Read the article

  • How do I change hyperlink colours in LibreOffice Impress?

    - by Marita Moll
    I have a lot of PowerPoint slides that I converted to LibreOffice Impress. The resulting hyperlinks are very faded and very hard to see. I can't seem to find any way to change the colour of hyperlinks as a whole, and any colour change I make to an individual URL link does not hold when the file is converted back to .ppt, which is sometimes necessary. I have tried the Tools > Options > LibreOffice > Appearance route, but it only seems to affect the very first hyperlink in the slide set.

    Read the article

  • Map a subpage of a domain to Google Sites

    - by fred
    I control a subpage of a site, economics.university.edu/summerschool, and I have access to CNAME records, etc. I want to use Google Sites to create the site and then map the URL above to my Google site. In other words, someone on the university site who navigated to the summer school page would be redirected to the Google site transparently. However, I keep getting this error from Google: "The format of the web address is unsupported." Is this possible?

    Read the article

  • What do I gain by filtering URLs through Perl's URI module?

    - by sid_com
    Do I gain something when I transform my $url like this: $url = URI->new( $url )?
      #!/usr/bin/env perl
      use warnings;
      use strict;
      use 5.012;
      use URI;
      use XML::LibXML;

      my $url = 'http://stackoverflow.com/';
      $url = URI->new( $url );
      my $doc = XML::LibXML->load_html( location => $url, recover => 2 );
      my @nodes = $doc->getElementsByTagName( 'a' );
      say scalar @nodes;

    Read the article

  • Canonical redirection meta tag [duplicate]

    - by sankalp
    (Duplicate of: How to use rel='canonical' properly)
    There are two pages on my website with the same content; only the URLs are different: www.websitename.com and www.websitename.com/default.html. Someone suggested that I should add canonical tags to avoid them being considered duplicate content. Where should I add canonical tags, and why?
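    By way of illustration (markup not taken from the original question): the tag goes in the <head> of each duplicate page and points at whichever URL you pick as the preferred one, e.g. the root:
      <!-- in the <head> of both www.websitename.com and /default.html -->
      <link rel="canonical" href="http://www.websitename.com/" />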

    Read the article

  • Can I use asterisks in URLs?

    - by KajMagnus
    Are there any reasons I shouldn't use an asterisk (*) in a URL? Background: with asterisks, I could provide these nice and user-friendly (or what do you think?) URLs:
    example.com/some/folder/search-phrase* means search for pages with names starting with "search-phrase", located in /some/folder/.
    example.com/some/**/*search-phrase* means search for any page with "search-phrase" anywhere in its name.
    example.com/some/folder/* means list all pages in /some/folder/ (rather than showing the /some/folder/index page).

    Read the article

  • Directory structure and script arguments

    - by felwithe
    I'll often see a URL that looks something like this: site.com/articles/may/05/02/2011/article-name.php. Surely all of those subdirectories don't actually exist? It seems like it would be a huge redundancy, even if it were only an identical index file in every directory; to change anything you'd have to change every single one. I guess my question is: is there some more elegant way that sites usually accomplish this?
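    They usually don't. A common pattern is URL rewriting: the web server hands the whole path to a single front-controller script, which parses the date and slug out of it and looks the article up. A rough Apache mod_rewrite sketch (article.php and the slug parameter are illustrative names, not from the question):
      RewriteEngine On
      # if the request does not match a real file or directory...
      RewriteCond %{REQUEST_FILENAME} !-f
      RewriteCond %{REQUEST_FILENAME} !-d
      # ...send /articles/<anything>/article-name.php to one script with the slug as a parameter
      RewriteRule ^articles/.*/([^/]+)\.php$ article.php?slug=$1 [L,QSA]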

    Read the article

  • Use HttpGet with illegal characters in the URL

    - by kaciula
    I am trying to use DefaultHttpClient and HttpGet to make a request to a web service. Unfortunately the web service URL contains illegal characters such as { (e.g. domain.com/service/{username}). It's obvious that the web service naming isn't well designed, but I can't change it. When I call HttpGet(url), I get an error saying that I have an illegal character in the URL (namely { and }). If I encode the URL before that, there is no error, but the request goes to a different URL where there is nothing. The URL, although it has illegal characters, works from the browser, but the HttpGet implementation doesn't let me use it. What should I do, or what should I use instead, to avoid this problem?

    Read the article

  • Firefox logs invalid URL?

    - by thanks for help
    I'm writing an extension for Firefox. Using dom.location to keep track of visited search-results pages, I'm getting this URL: http://www.google.com/search?hl=en&source=hp&q=hi&aq=f&aqi=&oq=&fp=642c18fb4411ca2e . If you click it, the Google search results for "hi" should come up; you'll know that from the title bar, because the rest of the page won't load. This happens with any Google search. Oddly enough, if you cut part of it off, say http://www.google.com/search?hl=en&source=hp&q=hi , it works! But Googling "hi" myself does give me a longish URL: http://www.google.com/#hl=en&source=hp&q=hi&aq=f&aqi=&oq=&fp=db658cc5049dc510 . I know for a fact that the first time that URL was visited, the page loaded; I did it myself. Can anyone make sense of this?
    I just tried my experiment again, this time saving the original URL from the location bar. It turns out dom.location.href is giving a different value. How is this happening?
    Original: http://www.google.com/#hl=en&source=hp&q=hi&aq=f&aqi=&oq=&fp=642c18fb4411ca2e
    dom.location.href: http://www.google.com/search?hl=en&source=hp&q=hi&aq=f&aqi=&oq=&fp=642c18fb4411ca2e
      window.addEventListener("load", function() { myExtension.init(); }, false);

      var myExtension = {
        init: function() {
          var appcontent = document.getElementById("appcontent"); // browser
          if (appcontent)
            appcontent.addEventListener("DOMContentLoaded", myExtension.onPageLoad, true);
          var messagepane = document.getElementById("messagepane"); // mail
          if (messagepane)
            messagepane.addEventListener("load", function () { myExtension.onPageLoad(); }, true);
        },

        onPageLoad: function(aEvent) {
          var doc = aEvent.originalTarget; // doc is the document that triggered the "load" event
          // do something with the loaded page.
          // doc.location is a Location object (see below for a link).
          // You can use it to make your code execute on certain pages only.
          var url = doc.location.href;
          if (url.match(/(?:p|q)(?:=)([^%]*)/)) {
            alert("MATCH " + url);
            resultsPages.push(url);
          } else {
            alert(url);
          }
        }
      };
    This snippet comes directly from Mozilla, with the matching and alerts my own. I apologize for not posting the code earlier.

    Read the article

  • Double encoded url being fully decoded in ASP.NET

    - by Brad R
    I have just come across something quite strange, and yet I haven't found any mention on the interwebs of others having the same problem. If I hit my ASP.NET application with a double-encoded URL, then Request["myQueryParam"] does a double decode of the query value for me. This is not desirable, as I have double-encoded my query string for a good reason. Can others confirm I'm not doing something obviously wrong, and explain why this happens? A solution to prevent it, without doing some nasty query-string parsing, would be great too! As an example, if you hit the URL:
    http://localhost/MyApp?originalUrl=http%3a%2f%2flocalhost%2fAction%2fRedirect%3fUrl%3d%252fsomeUrl%253futm_medium%253dabc%2526utm_source%253dabc%2526utm_campaign%253dabc
    (for reference, %25 is the % symbol) and then look at Request["originalUrl"] (page or controller), the string returned is:
    http://localhost/Action/Redirect?Url=/someUrl?utm_medium=abc&utm_source=abc&utm_campaign=abc
    I would expect:
    http://localhost/Action/Redirect?Url=%2fsomeUrl%3futm_medium%3dabc%26utm_source%3dabc%26utm_campaign%3dabc
    I have also checked in Fiddler, and the URL is being passed to the server correctly (one possible culprit could have been the browser decoding the URL before sending).
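    One workaround (a sketch, not the poster's code, for any page or controller where Request is available): read the value from the raw query string yourself, so it is decoded exactly once and the inner %2f/%3f/%3d escapes survive whatever is performing the extra decode on Request[...]:
      using System;
      using System.Web;
      using System.Web.UI;

      public partial class MyPage : Page   // hypothetical page; any handler with access to Request works
      {
          protected void Page_Load(object sender, EventArgs e)
          {
              // Request.Url.Query is the query string exactly as the client sent it (still encoded).
              string rawQuery = Request.Url.Query.TrimStart('?');

              // ParseQueryString decodes each value exactly once: %3a -> ':', %252f -> '%2f', etc.
              var values = HttpUtility.ParseQueryString(rawQuery);
              string originalUrl = values["originalUrl"];

              // Expected here:
              //   http://localhost/Action/Redirect?Url=%2fsomeUrl%3futm_medium%3dabc&...
          }
      }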

    Read the article

  • URL Encoding - Illegal Character Replacement

    - by ThePower
    Hi, I am doing some URL redirections in a project that I am currently working on. I am new to web development and was wondering what the best practice is for removing illegal path characters, such as ' and ?. I'm hoping I don't have to resort to manually replacing each character with its encoded form. I have tried UrlEncode and HtmlEncode, but UrlEncode doesn't cater for the ? and HtmlEncode doesn't cater for the '. E.g., if I were to use the following:
      Dim name As String = "Dave's gone, why?"
      Dim url As String = String.Format("~/books/{0}/{1}/default.aspx", bookID, name)
      Response.Redirect(url)
    I've tried wrapping url like this:
      Dim encodedUrl As String = Server.UrlEncode(url)
    and
      Dim encodedUrl As String = Server.HtmlEncode(url)
    Thanks in advance. P.S. Happy Christmas
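    One common alternative (a C# sketch; the Slugify helper is an illustrative name, not an existing API): instead of percent-encoding the raw title, build a slug that drops or replaces the awkward characters, so nothing in the path needs escaping at all:
      using System;
      using System.Text.RegularExpressions;

      static class UrlHelper
      {
          // Turns "Dave's gone, why?" into "daves-gone-why".
          public static string Slugify(string text)
          {
              string slug = text.ToLowerInvariant();
              slug = Regex.Replace(slug, @"[^a-z0-9\s-]", "");       // drop ' ? , and other punctuation
              slug = Regex.Replace(slug, @"[\s-]+", "-").Trim('-');  // collapse whitespace into single dashes
              return slug;
          }
      }

      // Usage (the VB.NET equivalent is a one-line call):
      //   var url = String.Format("~/books/{0}/{1}/default.aspx", bookID, UrlHelper.Slugify(name));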

    Read the article

  • Android: read the contents of a URL (content missing in result)

    - by josnidhin
    I have the following code that reads the content of a URL:
      public static String DownloadText(String url) {
          StringBuffer result = new StringBuffer();
          BufferedReader in = null;
          try {
              URL jsonUrl = new URL(url);
              InputStreamReader isr = new InputStreamReader(jsonUrl.openStream());
              in = new BufferedReader(isr);
              String inputLine;
              while ((inputLine = in.readLine()) != null) {
                  result.append(inputLine);
              }
          } catch (Exception ex) {
              result = new StringBuffer("TIMEOUT");
              Log.e(Util.AppName, ex.toString());
          } finally {
              // close the reader (and the underlying stream) whether or not an exception occurred
              if (in != null) {
                  try { in.close(); } catch (IOException ignored) { }
              }
          }
          return result.toString();
      }
    The problem is that I am missing content after 4065 characters in the returned result. Can someone help me solve this problem? Note: the URL I am trying to read returns a JSON response, so everything is on one line; I think that's why some content is missing.

    Read the article

  • Creating PDF documents dynamically using Umbraco and XSL-FO part 2

    - by Vizioz Limited
    Since my last post I have made a few modifications to the PDF generation, the main one being that the files are now dynamically renamed so that they reflect the name of the case study instead of all being called PDF.PDF, which was not a very helpful filename. I just wanted to get something live last week, so I decided that something was better than nothing :)
    The issue with the filenames comes down to the way the PDFs are generated, by using an alternative template in Umbraco: all you need to do is add "/pdf" to the end of a case study URL and it will create a PDF version of the case study. The downside is that your browser will merrily download the file and save it as PDF.PDF, because that is the name of the last part of the URL.
    What you need to do is set the content-disposition header to the name you would like the file to use; Darren Ferguson mentioned this on the "Change the name of the PDF" forum post.
    We have used the same technique for downloading dynamically generated Excel files, so I thought it would be useful to create a small macro that sets both this header and the caching headers, to prevent any caching issues. I think in the past we have experienced every possible issue, including several where IE behaves differently from other browsers when you are using SSL, so the code below should work in all situations!
    The template for the PDF alternative template is very simple:
      <%@ Master Language="C#" MasterPageFile="~/umbraco/masterpages/default.master" AutoEventWireup="true" %>
      <asp:Content ID="Content1" ContentPlaceHolderID="ContentPlaceHolderDefault" runat="server">
          <umbraco:Macro Alias="PDFHeaders" runat="server"></umbraco:Macro>
          <umbraco:Macro xsl="FO-CaseStudy.xslt" Alias="PDFXSLFO" runat="server"></umbraco:Macro>
      </asp:Content>
    The following snippet is the XSLT macro that simply builds the file name and passes it into the helper function:
      <xsl:template match="/">
          <xsl:variable name="fileName">
              <xsl:text>Vizioz_</xsl:text>
              <xsl:value-of select="$currentPage/@nodeName" />
              <xsl:text>_case_study.pdf</xsl:text>
          </xsl:variable>
          <xsl:value-of select="Vizioz.Helper:AddDocumentDownloadHeaders('application/pdf', $fileName)"/>
      </xsl:template>
    And the following code is the helper function that clears the current response and adds all the appropriate headers:
      public static void AddDocumentDownloadHeaders(string contentType, string fileName)
      {
          HttpResponse response = HttpContext.Current.Response;
          HttpRequest request = HttpContext.Current.Request;

          response.Clear();
          response.ClearHeaders();

          if (request.IsSecureConnection & request.Browser.Browser == "IE")
          {
              // Don't use the caching headers if the browser is IE and it's a secure connection
              // see: http://support.microsoft.com/kb/323308
          }
          else
          {
              // force not using the cache
              response.AppendHeader("Cache-Control", "no-cache");
              response.AppendHeader("Cache-Control", "private");
              response.AppendHeader("Cache-Control", "no-store");
              response.AppendHeader("Cache-Control", "must-revalidate");
              response.AppendHeader("Cache-Control", "max-stale=0");
              response.AppendHeader("Cache-Control", "post-check=0");
              response.AppendHeader("Cache-Control", "pre-check=0");
              response.AppendHeader("Pragma", "no-cache");
              response.Cache.SetCacheability(HttpCacheability.NoCache);
              response.Cache.SetNoStore();
              response.Cache.SetExpires(DateTime.UtcNow.AddMinutes(-1));
          }

          response.AppendHeader("Expires", DateTime.Now.AddMinutes(-1).ToLongDateString());
          response.AppendHeader("Keep-Alive", "timeout=3, max=993");
          response.AddHeader("content-disposition", "attachment; filename=\"" + fileName + "\"");
          response.ContentType = contentType;
      }
    I will write another blog post soon with some more details about XSL-FO and how to create the PDFs dynamically. Please do re-tweet if you find this interesting :)

    Read the article

  • Adding slugs to the URLs afterwards

    - by altuure
    We have had a website for the last 5 months, and we did not use slugs for the bottom-level elements, so URLs looked like /apps/webmasters/badges/1100. Would it make sense to add the name to the URL at this point and redirect to the new URLs? I am interested in targeting more search terms and increasing page rank, e.g.:
    /apps/webmasters/badges/1100 - redirected to and served at /apps/webmasters/badges/1100-supporter
    Or should I keep the old URLs as they are and create new URLs with slugs? I would also appreciate some advice on URLs already shared on Facebook or Twitter in those cases. Thanks in advance...

    Read the article

  • SiteGround hosting forwarding

    - by Oleg Videnov
    I would like to ask a (maybe simple) question. I have a website, let's say www.website1.com, and on a different hosting provider (SiteGround) I have www.website2.com. Now, www.website1.com is the old website, and the boss wants the following: if someone clicks on www.website1.com/user/content/1, he or she should be redirected to www.website2.com/user/content/1, but the URL should REMAIN www.website1.com/user/content/1, and the same for all the pages. If anyone has an answer for how to do this, it would be much appreciated. Thanks in advance! Oleg Videnov
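    Keeping www.website1.com in the address bar while serving www.website2.com's pages is reverse proxying rather than redirecting. A rough Apache sketch (this needs mod_proxy and belongs in website1's virtual-host configuration; ProxyPass is not allowed in .htaccess, so on shared hosting it may simply not be possible):
      ProxyPass        /user/content/ http://www.website2.com/user/content/
      ProxyPassReverse /user/content/ http://www.website2.com/user/content/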

    Read the article

  • Want to create dynamic subdomains in CodeIgniter?

    - by Nilay Patel
    On my site I want to add functionality that lets users use their username with the domain. In CodeIgniter, I want to give each user their own URL to log in to the site and do other things, for example www.username.mysite.com/login or www.username.mysite.com/category, so the user can log in with their credentials and add a category. I have two controllers on my site, login and category. How do I do this with routes or .htaccess?

    Read the article

  • Friendly URLs: is there a max length for search engines?

    - by Olivier Pons
    People from Stack Overflow have been working closely with the Google team to help them make the Panda algorithm more efficient, so I guess they've learned a lot from the Google team. Thus they may have designed very clever friendly URLs to maximize page rank. From time to time I've seen very long URLs on Stack Overflow (I can't find where), but after a certain number of characters there were only numbers, as if to say "OK, past this length SEO will ignore the rest, so let's put only numbers." I've done a huge amount of work on my framework to make very friendly URLs, and my website can come up with URLs like:
    http://www.mysite.fr/recherche/region/provence-alpes-cote-d-azur/departement/bouches-du-rhone/categorie-de-metiers/paramedical/
    It's very long, and I'm wondering whether the previous URL will be mixed up with, say, this one:
    http://www.mysite.fr/recherche/region/provence-alpes-cote-d-azur/departement/bouches-du-rhone/categorie-de-metiers/art/

    Read the article

  • How do I point a new domain to start on a page that's not index.html on separate hosting?

    - by Owen Campbell-Moore
    I'm using a service (CMS/host) called Squarespace to host my site, and today I'm registering the domain for it. Basically, how do I make it so that when somebody types www.tedxoxford.com it points at http://www.tedxoxford.com/landing (currently http://tedxoxford.squarespace.com/landing) instead of the default index? Is this possible? Squarespace is quite a restricted CMS, which means that your logos etc. all point to the index, so I don't want people ending up on my landing/splash page every time they want the home page, only the first time they type in the URL. A dirty hack would be to check the referrer and redirect anyone hitting the index to the landing page, but that's a lot of loading overhead I'd rather avoid...

    Read the article

  • How important are SEO-friendly URLs? [closed]

    - by nute
    Possible duplicate: Is a URL with a query string better or worse for SEO than one without one?
    Currently, my URLs look something like http://mydomain.ext/question/5, where question is the controller and 5 is the ID of the object or article retrieved. In theory I could spend some development time and some server resources to produce URLs that contain more information about the page being loaded. However, seeing how websites like YouTube and many others keep simple URLs with just an ID, I am asking: does it matter? Is it worth it?

    Read the article

  • For a blog with posts and categories, what are the best ways to create user-friendly and SEO-friendly URLs?

    - by Jayapal Chandran
    I am creating a module on my website which displays ringtones. It is like creating blog posts and categories: it will have categories (tags) and posts (I am using category and tag interchangeably). I am using the following link structure for this module:
    sitename.com/blog
    sitename.com/blog/category/category-name-slug/ - lists all ringtones of that category/tag
    sitename.com/blog/title/name-slug-of-the-ringtone/ - displays the details and a download link
    On every page, the left column lists the categories/tags. This is how I have formed the URL structure. It will be user friendly, I hope, but will it be SEO friendly? Please hint if I am missing something, or at other ways to improve. Meanwhile I am browsing the net to get more information on linking content (categorizing) and to find the best ways for the user and search engines.

    Read the article
