Search Results

Search found 3028 results on 122 pages for 'urls'.

Page 6/122 | < Previous Page | 2 3 4 5 6 7 8 9 10 11 12 13  | Next Page >

  • Ars Technica .ars URL suffix -- Vanity or SEO Benefit?

    - by yc01
    The technology website Ars Technica has adjusted its URL rewrite rules so that URLs end in .ars. Traditionally, sites have used URL rewriting to eliminate file suffixes like .html, .php and .aspx entirely, on the theory that this makes for better SEO (since more of the URL is relevant to the content). Ars Technica's URLs, though, look like this: http://arstechnica.com/science/news/2011/03/flow-from-the-poles-drive-sunspot-levels.ars So, is Ars Technica adding the .ars suffix purely as a vanity play? Or is it a trick to improve the site's SEO by cleverly inserting the site name into every URL slug? And if it is indeed an effective SEO trick, should other sites follow suit?

    Read the article

  • Which Document Requires a Single URI for Web Resources?

    - by Pietro Speroni
    I know that giving resources short, clear URIs that do not change over time is considered good manners, but I need to create a system that is designed not to have them. To justify this, though, I need to go back and find the document in which it was first explained that there should be a single URI per resource, and that it should not change over time. It is probably a document from T.B.L. or from the W3C. Does anyone know which document that would be? Thanks

    Read the article

  • mod_rewrite for clean URL doesn't work

    - by deathlock
    Basically what I want to do is convert this: http://localhost/jariungu/user_caleg.php?idCaleg2014=3 into this: http://localhost/jariungu/caleg/3 I have managed to make /jariungu/caleg/3 direct to the original URL (as in, if I open that URL, it shows me the appropriate page). The problem is, once the page is opened, the original, ugly URL comes back in the address bar. This is what I have tried; could someone help?
        <IfModule mod_rewrite.c>
        Options +FollowSymlinks
        RewriteEngine On
        RewriteBase /jariungu/
        RewriteRule ^caleg\/([0-9]+)\/([a-zA-Z]+\s*[0-9]*)/?$ caleg.php?idCaleg2014=$1&namaCaleg=$2 [NC,L]
        RewriteRule ^caleg\/([0-9]+)/?$ caleg.php?idCaleg2014=$1 [NC,L]
        </IfModule>
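    A sketch of one possible fix, assuming the real script is user_caleg.php as in the original URL (the rules in the question rewrite to caleg.php, so adjust the filename to whatever actually exists): keep the internal rewrite for the clean form, and add an external 301 that sends any request still arriving in the old query-string form to the clean URL, so the address bar gets updated.
        <IfModule mod_rewrite.c>
            Options +FollowSymlinks
            RewriteEngine On
            RewriteBase /jariungu/

            # External redirect: old ugly URL -> clean URL (updates the address bar).
            # %{THE_REQUEST} is checked because it still contains the query string
            # exactly as the browser sent it; the trailing "?" drops it from the target.
            RewriteCond %{THE_REQUEST} \s/jariungu/user_caleg\.php\?idCaleg2014=([0-9]+)\s [NC]
            RewriteRule ^user_caleg\.php$ caleg/%1? [R=301,L]

            # Internal rewrite: clean URL -> real script (address bar stays clean).
            RewriteRule ^caleg/([0-9]+)/?$ user_caleg.php?idCaleg2014=$1 [NC,L]
        </IfModule>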

    Read the article

  • Are generic keywords in the URL bad for SEO? [closed]

    - by user1661479
    Possible Duplicate: Squeezing all the SEO out of a URL as possible
    I need help with URL structure. Let's say I'm a manufacturer of Wire EDM machines. Is it bad for me to put the keyword wire-edm in my URLs to help raise my SEO ranking? For example:
        mywebsite.com/wire-edm/machine/model-xxxx
        mywebsite.com/wire-edm/customer-service
        mywebsite.com/wire-edm/contact
    Or should I leave it as the following, because the gains are fairly insignificant and it doesn't help users understand my site structure:
        mywebsite.com/machine/model-xxxx
        mywebsite.com/customer-service
        mywebsite.com/contact
    I'd like to hear everyone's thoughts on this, and please provide some sources for whichever method is better.

    Read the article

  • htaccess Redirect / RedirectMatch with URLs that contain Special / Encoded Characters

    - by dSquared
    I'm currently in the process of applying a variety of 301 redirects in an .htaccess file for a website that recently changed its structure. Everything is working as expected, except for URLs that contain special characters; for these I am getting 404 errors. For example, the following directives, which contain a registered trademark symbol (®), bring up 404 pages:
        RedirectMatch 301 ^/directory/link-with®-special-character(/)?$ somelink.com
        RedirectMatch 301 ^/directory/link-with%c2%ae-special-character(/)?$ somelink.com
    I've also tried using Redirect and RewriteRule, and surrounding the URLs with double quotes, and nothing seems to work. Does anyone know what might be happening, or the proper way to handle these types of directives? Any help is greatly appreciated.
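    One hedged workaround, on the assumption that Apache matches RedirectMatch against the already-decoded path and that the ® arrives as the UTF-8 byte pair 0xC2 0xAE: match those bytes with \x escapes so the rule no longer depends on the encoding of the .htaccess file itself (the target URL is a placeholder, as in the question).
        # Match the raw UTF-8 bytes of the registered trademark symbol (assumption: UTF-8 request paths).
        RedirectMatch 301 "^/directory/link-with\xc2\xae-special-character(/)?$" http://somelink.com/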

    Read the article

  • Adding arbitrary search URLs to Firefox search bar

    - by Matthew
    New-ish versions of Firefox (I'm currently on 3.6) have the nifty "search bookmark" feature, which allows you to create searches in the location bar with custom URLs, e.g. en.wikipedia.org/wiki/%s. This is really great, but when trying to manage the engines in the search bar, I was dismayed at the lack of customisability there. It looks like the two search methods are entirely distinct. Is there a way to put custom URLs in my search bar, or do I have to just hope that whatever I want is on the long but finite list of plugins at Mycroft? Thanks UPDATE: I've done a bit more research and posted my own answer.

    Read the article

  • Clean URLs on Hiawatha

    - by Botto
    I am using the Hiawatha web server and running Drupal on a FastCGI PHP backend. The Drupal site uses imagecache, which requires either private files or clean URLs. The issue I am having with clean URLs is that requests for files are being rewritten to index.php as well. My current config is:
        UrlToolkit {
            ToolkitID = drupal
            RequestURI exists Return
            Match (/files/*) Rewrite $1
            Match ^/(.*) Rewrite /index.php?q=$1
        }
    The above does not work. Drupal's Apache setup is:
        <Directory /var/www/example.com>
            RewriteEngine on
            RewriteBase /
            RewriteCond %{REQUEST_FILENAME} !-f
            RewriteCond %{REQUEST_FILENAME} !-d
            RewriteRule ^(.*)$ index.php?q=$1 [L,QSA]
        </Directory>
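    A minimal sketch of what might work instead, assuming Hiawatha's "RequestURI exists Return" short-circuits for anything that already exists on disk: serve existing files directly and send everything else to index.php, mirroring the Apache !-f/!-d conditions. The separate /files match is then unnecessary.
        UrlToolkit {
            ToolkitID = drupal
            # Serve real files and directories as-is.
            RequestURI exists Return
            # Everything else goes to Drupal, which also lets imagecache
            # generate missing derivatives on their first request.
            Match ^/(.*) Rewrite /index.php?q=$1
        }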

    Read the article

  • Dynamically loading Chef recipes from URLs

    - by andy
    I'm deploying a web app on AWS. I intend to use Chef to build AMIs which I'll then put into production. I want Chef to monitor a URL stored in SimpleDB. The URL would point to a tarball in S3. There would be different URLs: one for a config tarball, one for a code tarball. When I update the URL in SimpleDB, I want Chef to spot this and pull in and apply the configs / deploy the code. Is this possible? Has anything like this been done before, or would I need to roll my own code? I think Chef can monitor URLs, but what would be the best way of getting it to load that URL from SimpleDB?
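    A rough sketch of the deploy half in a Chef recipe, assuming the tarball URL has already been read out of SimpleDB elsewhere (for example with the aws-sdk gem) and stored in a node attribute; every name below is hypothetical:
        # node['myapp']['code_url'] is assumed to hold the S3 URL looked up from SimpleDB.
        code_url = node['myapp']['code_url']

        # Download the tarball; re-fetched when the remote content changes.
        remote_file '/tmp/myapp-code.tar.gz' do
          source code_url
          mode '0644'
          notifies :run, 'execute[unpack myapp code]', :immediately
        end

        # Unpack only when a new tarball was actually downloaded.
        execute 'unpack myapp code' do
          command 'tar -xzf /tmp/myapp-code.tar.gz -C /srv/myapp/current'
          action :nothing
        end
    Chef only converges when it runs, so the "monitoring" part comes from running chef-client on a schedule (cron or daemon mode) rather than from Chef watching the URL continuously.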

    Read the article

  • Drupal with clean URLs turned on is putting question marks in the URL

    - by aussiegeek
    I have a Drupal site with clean URLs. The pages load correctly, but then the URL is rewritten with a question mark in it, which I don't want the user to see. My .htaccess is:
        <IfModule mod_rewrite.c>
            RewriteEngine on

            # If your site can be accessed both with and without the 'www.' prefix, you
            # can use one of the following settings to redirect users to your preferred
            # URL, either WITH or WITHOUT the 'www.' prefix. Choose ONLY one option:
            #
            # To redirect all users to access the site WITH the 'www.' prefix,
            # (http://example.com/... will be redirected to http://www.example.com/...)
            # adapt and uncomment the following:
            # RewriteCond %{HTTP_HOST} ^example\.com$ [NC]
            # RewriteRule ^(.*)$ http://www.example.com/$1 [L,R=301]
            #
            # To redirect all users to access the site WITHOUT the 'www.' prefix,
            # (http://www.example.com/... will be redirected to http://example.com/...)
            # uncomment and adapt the following:
            # RewriteCond %{HTTP_HOST} ^www\.example\.com$ [NC]
            # RewriteRule ^(.*)$ http://example.com/$1 [L,R=301]

            # Modify the RewriteBase if you are using Drupal in a subdirectory or in a
            # VirtualDocumentRoot and the rewrite rules are not working properly.
            # For example if your site is at http://example.com/drupal uncomment and
            # modify the following line:
            # RewriteBase /drupal
            #
            # If your site is running in a VirtualDocumentRoot at http://example.com/,
            # uncomment the following line:
            RewriteBase /

            # Rewrite URLs of the form 'x' to the form 'index.php?q=x'.
            RewriteCond %{REQUEST_URI} !(connect|administration)
            RewriteCond %{REQUEST_FILENAME} !-f
            RewriteCond %{REQUEST_FILENAME} !-d
            RewriteCond %{REQUEST_URI} !=/favicon.ico
            RewriteRule ^(.*)$ index.php?q=$1 [L,QSA]
        </IfModule>
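    If the rewrite rules themselves are fine but Drupal keeps emitting ?q= links, the usual culprit is that clean URLs are still switched off inside Drupal rather than in Apache. A hedged sketch of forcing the setting on from settings.php (variable name taken from Drupal 6/7; normally you would simply enable it on the Clean URLs settings page):
        <?php
        // settings.php: override the stored clean_url variable so Drupal
        // generates paths like /node/1 instead of /?q=node/1.
        $conf['clean_url'] = 1;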

    Read the article

  • Adventures in Drupal multisite config with mod_rewrite and clean URLs

    - by moexu
    The university where I work is planning to offer Drupal hosting to staff/faculty who want a Drupal site. We've set up Drupal multisite with clean URLs and it's mostly working, except for some weird redirects. If you have two sites where one is a substring of the other, then you'll randomly be redirected to the other site. I tracked the problem to how mod_rewrite does path matching, so with a config file like this:
        RewriteCond %{REQUEST_URI} ^/drupal
        RewriteCond %{REQUEST_FILENAME} !-f
        RewriteCond %{REQUEST_FILENAME} !-d
        RewriteRule ^(.*)$ /drupal/index.php?q=$1 [last,qsappend]

        RewriteCond %{REQUEST_URI} ^/drupaltest
        RewriteCond %{REQUEST_FILENAME} !-f
        RewriteCond %{REQUEST_FILENAME} !-d
        RewriteRule ^(.*)$ /drupaltest/index.php?q=$1 [last,qsappend]
    /drupaltest will match the /drupal line, and all of the links on the /drupaltest page will be rewritten to point to /drupal. If you put the end-of-string character ($) at the end of each rewrite condition, then it will always match the correct site and the links will always be rewritten correctly. That breaks down as soon as a user logs in, though, because the query string is appended to the URL, so the bare base URL no longer matches. You can also fix the problem by ordering the sites in the config file so that the smallest substring always comes last. I suggested storing all of the sites in a table and then querying, sorting, and rewriting the config file every time a Drupal site is requested, so that we could guarantee the order. The system administrator thought that was kludgy and didn't address the root problem. Disabling clean URLs would also fix the problem, but the users really want them, so I'd prefer to keep them if possible. I think we could also fix it by using an .htaccess file in each site to handle the clean URL rewriting, but that also seems suboptimal, since it would generate a higher load on the server and the server is intended to host the majority of the university's external-facing web content. Is there some magic I can do with mod_rewrite to get it to work? Would another solution be better? Am I doing something the wrong way to begin with?
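    A sketch of a middle ground, assuming the sites really live at top-level paths: anchor each condition so it matches either the bare site path or the path followed by a slash. That keeps /drupaltest out of the /drupal block while still matching every page under /drupal regardless of what follows, logged in or not.
        RewriteCond %{REQUEST_URI} ^/drupal(/|$)
        RewriteCond %{REQUEST_FILENAME} !-f
        RewriteCond %{REQUEST_FILENAME} !-d
        RewriteRule ^(.*)$ /drupal/index.php?q=$1 [last,qsappend]

        RewriteCond %{REQUEST_URI} ^/drupaltest(/|$)
        RewriteCond %{REQUEST_FILENAME} !-f
        RewriteCond %{REQUEST_FILENAME} !-d
        RewriteRule ^(.*)$ /drupaltest/index.php?q=$1 [last,qsappend]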

    Read the article

  • How to remove old indexed URLs from Google?

    - by Gok Demir
    I used CS-Cart for a site. The default installation's SEO handling is not suitable for Turkish accented characters such as "ö", which is automatically mapped to "ae". I modified the PHP code and now it substitutes "ö" with "o", "ş" with "s", and so on. I also changed a category name to a better one. My problem is that Google indexed both the previous versions and the new versions. The previous, wrong URLs give a 404 error. I used the sitemap addon and submitted the sitemap to Google, and the sitemap does not include the wrong URLs. However, these old URLs are still indexed on Google. What can I do to remove them? For example, http://www.google.com.tr/search?q=%C3%A7ak%C4%B1l+ta%C5%9Flar%C4%B1ndan+babil+site:eeski.com&hl=tr&client=firefox-a&hs=Z31&rls=com.ubuntu:en-US:official&filter=0 gives two top results:
        old, wrong URL leading to a 404: http://www.eeski.com/kitap-dergi/bilim-ve-teknik/cakl-talarndan-babil-kulesine-rakamlarn-evrensel-tarihi-ii.html
        correct current URL: http://www.eeski.com/kitap/bilim-ve-teknik/cakil-taslarindan-babil-kulesine-rakamlarin-evrensel-tarihi-ii.html
    How can I make Google index only the current URLs listed in the sitemap?
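    For what it's worth, a sketch of the usual remedy, assuming Apache with mod_alias available: 301-redirect each old transliteration to its new counterpart so Google replaces the old entry on its next crawl (the URL removal tool in Google Webmaster Tools can speed this up for pages that now 404). Using the example pair from the question:
        # Old, wrongly transliterated path -> new canonical URL.
        Redirect 301 /kitap-dergi/bilim-ve-teknik/cakl-talarndan-babil-kulesine-rakamlarn-evrensel-tarihi-ii.html http://www.eeski.com/kitap/bilim-ve-teknik/cakil-taslarindan-babil-kulesine-rakamlarin-evrensel-tarihi-ii.html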

    Read the article

  • Extensionless URLs in IIS 6

    - by Jason Marsell
    My client has asked me to build a personalized URL system so that they can send out really short URLs in postcards to customers, like this:
        www.client.com/JasonSmith03
        www.client.com/TonyAdams
    With these URLs, I need IIS 6 to trap the incoming request and pass that “JasonSmith03” token to my database to determine which landing page to redirect them to. I’d love to use an HttpHandler or HttpModule, but they both look like they require a file extension (.aspx) in the URL. Wildcard mapping will chew up every incoming request, and that’s ridiculous. ISAPI filters are just text routing files, so I can’t employ logic to call the database. According to Scott Guthrie, this would be cake if I had IIS 7, but I don’t. Can this be done using MVC? I’ve been working with MVP for the last few years, so I haven’t done any MVC and routing. I thought I remembered that MVC has the ability to use REST-style extensionless URLs. I’d be more than happy to have these personalized URLs land on a site that’s built in MVC, if it will work. Thank you!

    Read the article

  • [Apache] Creating rewrite rules for multiple URLs in the same folder

    - by DavidYell
    I have been asked by our client to convert a site we created to an SEO-friendly URL format. I've managed to crack a small way into this, but have hit a problem with having similar URLs in the same folder. I am trying to rewrite the following URLs:
        /review/index.php?cid=intercasino
        /review/submit.php?cid=intercasino
        /review/index.php?cid=intercasino&page=2#reviews
    I would like to get them to:
        /review/intercasino
        /submit-review/intercasino
        /review/intercasino/2#reviews
    I've almost got it working using the following rules:
        RewriteRule (submit-review)/(.*)$ review/submit.php?cid=$2 [L]
        RewriteRule (^review)/(.*) review/index.php?cid=$2
    The problem, as you may already see, is that /submit-review rewrites to /review, which in turn gets rewritten to index.php, so my review submission page is lost in place of my index page. I figured that putting [L] there would prevent the second rule being called, but it seems that the two URLs are rewritten in two separate passes. I've also tried [QSE] and [S=1]. I would rather not have to move my files into different folders to get the rewriting to work, as that just seems like bad practice. If anyone could give me some pointers on how to differentiate between these similar URLs, that would be great! Thanks (Ref: http://httpd.apache.org/docs/2.0/mod/mod_rewrite.html)
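    One sketch that usually sorts this out, assuming the rules live in a .htaccess file (per-directory context re-runs the whole rule set on the rewritten URL, which is why [L] alone doesn't help): anchor both patterns and skip requests that already point at a real file, so the second pass leaves review/submit.php alone.
        # Skip the rewrite once the request already maps to a real file
        # (e.g. review/submit.php on the second pass through the rules).
        RewriteCond %{REQUEST_FILENAME} !-f
        RewriteRule ^submit-review/(.*)$ review/submit.php?cid=$1 [L]

        RewriteCond %{REQUEST_FILENAME} !-f
        RewriteRule ^review/(.*)$ review/index.php?cid=$1 [L]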

    Read the article

  • A good machine learning technique to weed out good URLs from bad

    - by git-noob
    Hi, I have an application that needs to discriminate between good and bad HTTP GET requests. For example:
        http://somesite.com?passes=dodgy+parameter     # BAD
        http://anothersite.com?passes=a+good+parameter # GOOD
    My system can make a binary decision about whether or not a URL is good or bad, but ideally I would like it to predict whether or not a previously unseen URL is good or bad:
        http://some-new-site.com?passes=a+really+dodgy+parameter # BAD
    I feel the need for a support vector machine (SVM) ... but I need to learn machine learning first. Some questions:
    1) Is an SVM appropriate for this task?
    2) Can I train it with the raw URLs, without explicitly specifying 'features'?
    3) How many URLs will I need for it to be good at predictions?
    4) What kind of SVM kernel should I use?
    5) After I train it, how do I keep it up to date?
    6) How do I test unseen URLs against the SVM to decide whether they are good or bad?
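    Not an answer to every point, but a minimal sketch of how the SVM route can look with scikit-learn (the library choice and the toy data are assumptions): character n-grams over the raw URL stand in for hand-crafted features, and a linear kernel is the usual starting point for sparse, high-dimensional text.
        from sklearn.feature_extraction.text import TfidfVectorizer
        from sklearn.pipeline import make_pipeline
        from sklearn.svm import LinearSVC

        # Toy labelled data; a real model would likely need at least a few
        # thousand URLs per class to predict well.
        urls = [
            "http://somesite.com?passes=dodgy+parameter",
            "http://anothersite.com?passes=a+good+parameter",
        ]
        labels = ["bad", "good"]

        # Character n-grams (3-5 chars) extracted straight from the raw URL,
        # so no explicit feature specification is needed.
        model = make_pipeline(
            TfidfVectorizer(analyzer="char_wb", ngram_range=(3, 5)),
            LinearSVC(),
        )
        model.fit(urls, labels)

        # Classifying a previously unseen URL:
        print(model.predict(["http://some-new-site.com?passes=a+really+dodgy+parameter"]))
    Keeping it up to date is then mostly a matter of periodically re-fitting on the growing labelled set.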

    Read the article

  • What does using RESTful URLs buy me?

    - by Spike Williams
    I've been reading up on REST, and I'm trying to figure out what the advantages to using it are. Specifically, what is the advantage of REST-style URLs that makes them worth implementing over a more typical GET request with a query string? Why is this URL:
        http://www.parts-depot.com/parts/getPart?id=00345
    considered inferior to this?
        http://www.parts-depot.com/parts/00345
    In the above examples (taken from here) the second URL is indeed more elegant-looking and concise. But it comes at a cost... the first URL is pretty easy to implement in any web language, out of the box. The second requires additional code and/or server configuration to parse out values, as well as additional documentation and time spent explaining the system to junior programmers and justifying it to peers. So, my question is: aside from the pleasure of having URLs that look cool, what advantages do RESTful URLs give me that would make them worth the cost of implementation?

    Read the article

  • Extracting URLs (to array) in Ruby

    - by FearMediocrity
    Good afternoon, I'm learning about using regular expressions in Ruby and have hit a point where I need some assistance. I am trying to extract zero to many URLs from a string. This is the code I'm using:
        sStrings = ["hello world: http://www.google.com",
                    "There is only one url in this string http://yahoo.com . Did you get that?",
                    "The first URL in this string is http://www.bing.com and the second is http://digg.com",
                    "This one is more complicated http://is.gd/12345 http://is.gd/4567?q=1",
                    "This string contains no urls"]
        sStrings.each do |s|
          x = s.scan(/((http|https):\/\/[a-z0-9]+([\-\.]{1}[a-z0-9]+)*\.[a-z]{2,5}(([0-9]{1,5})?\/.[\w-]*)?)/ix)
          x.each do |url|
            puts url
          end
        end
    This is what is returned:
        http://www.google.com
        http
        .google
        nil
        nil
        http://yahoo.com
        http
        nil
        nil
        nil
        http://www.bing.com
        http
        .bing
        nil
        nil
        http://digg.com
        http
        nil
        nil
        nil
        http://is.gd/12345
        http
        nil
        /12345
        nil
        http://is.gd/4567
        http
        nil
        /4567
        nil
    What is the best way to extract only the full URLs and not the individual capture groups? Thanks Jim
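    A sketch of the usual way around this: String#scan returns the capture groups whenever the pattern contains any, so either make every group non-capturing with (?:...) or drop the groups entirely. The simplified pattern below is an assumption and not a complete URL grammar:
        sStrings = ["hello world: http://www.google.com",
                    "This one is more complicated http://is.gd/12345 http://is.gd/4567?q=1",
                    "This string contains no urls"]

        # No capture groups, so scan returns each full match as a plain string.
        URL_PATTERN = %r{https?://[^\s]+}i

        sStrings.each do |s|
          s.scan(URL_PATTERN).each { |url| puts url }
        end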

    Read the article

  • Reject (Hard 404) ASP.NET MVC-style URLs

    - by James D
    Hi, I have an ASP.NET MVC web app that exposes "friendly" URLs:
        http://somesite.com/friendlyurl
    ...which are rewritten (not redirected) to ASP.NET MVC-style URLs under the hood:
        http://somesite.com/Controller/Action
    The user never actually sees any ASP.NET MVC-style URLs. If he requests one, we hard-404 it. ASP.NET MVC is (in this app) an implementation detail, not a fundamental interface. My question: how do you examine an arbitrary incoming URL and determine whether or not that URL matches a defined ASP.NET MVC route? For extra credit: how do you do it from inside an ASP.NET-style IHttpModule, where you're getting invoked upstream of the ASP.NET MVC runtime? Thanks!

    Read the article

  • mod_rewrite to change my URLs

    - by user319859
    Hi, I've been fighting with mod_rewrite for a while. Basically, I have a website that I'm moving to a different namespace/directory. What I'd like to do is change URLs that look like this:
        http://mydomain.com/index.php?a=xxxxxxxxxx
    These URLs will always have "index.php?a=". I have a different/new site that also has an index.php file, so it's important that I do the rewrite only when a= is in the URL. The new URL should be like:
        http://mydomain.com/ns1/index.php?a=xxxxxxxxxxx
    Seems pretty simple, but I can't seem to get mod_rewrite to do it for me. Here's what I have:
        # rewrite old urls to new namespace
        RewriteRule ^/index.php\?a=(.*)$ /gc1/index.php\?x=1&a=$1 [R=301,L]
    See anything wrong?
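    A sketch of the usual fix: a RewriteRule pattern never sees the query string, so the a= check has to go into a RewriteCond against %{QUERY_STRING}; the existing query string is then carried over to the new URL automatically. This assumes the rule sits in the server/virtual-host config (as the leading slash in the original rule suggests); in a .htaccess file, drop the leading slash from the pattern.
        # Redirect old-namespace requests that carry an a= parameter.
        RewriteCond %{QUERY_STRING} (^|&)a= [NC]
        RewriteRule ^/index\.php$ /ns1/index.php [R=301,L]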

    Read the article

  • ASP.NET MVC - how to modify requested URLs?

    - by Marek
    The Baidu spider seems to be adding ¤ to the end of some crawled URLs (it seems to happen with URLs containing a single unicode character as the last character). The Baidu-requested URL looks like this: site.com/abc/ä¤ while site.com/abc/ä is the valid URL, as linked from many places on my site. The internal problem is that a different route is matched for this kind of URL and an unhandled exception occurs. I would not like to lose Baidu because of too many 500 errors on the site. I would like to change the requested URL to a different URL by removing the added character before any ASP.NET MVC processing of the request starts. Can I write a request filter/HTTP module or something similar in ASP.NET MVC to remove the trailing '¤' from the URLs? I would not like to alter my routes to counter-hack this behavior.

    Read the article

  • URL shortening: using inode as short name?

    - by Licky Lindsay
    The site I am working on wants to generate its own shortened URLs rather than rely on a third party like tinyurl or bit.ly. Obviously I could keep a running count of new URLs as they are added to the site and use that to generate the short URLs. But I am trying to avoid that if possible, since it seems like a lot of work just to make this one thing work. As the things that need short URLs are all real physical files on the webserver, my current solution is to use their inode numbers, as those are already generated for me, ready to use, and guaranteed to be unique:
        function short_name($file) {
            $ino = @fileinode($file);
            $s = base_convert($ino, 10, 36);
            return $s;
        }
    This seems to work. The question is, what can I do to make the short URL even shorter? On the system where this is being used, the inodes for newly added files are in a range that makes the function above return a string 7 characters long. Can I safely throw away some (half?) of the bits of the inode? And if so, should it be the high bits or the low bits? I thought of using the crc32 of the filename, but that actually makes my short names longer than using the inode. Would something like this have any risk of collisions? I've been able to get down to single digits by picking the right value of "$referencefile".
        function short_name($file) {
            $ino = @fileinode($file);
            // arbitrarily selected pre-existing file,
            // as all newer files will have higher inodes
            $ino = $ino - @fileinode($referencefile);
            $s = base_convert($ino, 10, 36);
            return $s;
        }

    Read the article

  • Django URL Conf Returns Incorrect "Current URL"

    - by natnit
    I have a Django app that is mostly done, and the URLs work perfectly when I run it with the manage.py runserver command. However, I've recently tried to get it running via lighttpd, and many links have stopped working. For example, http://mysite.com/races/32 should work, but instead throws this error message:
        Page not found (404)
        Request Method: GET
        Request URL: http://mysite.com/races/32
        Using the URLconf defined in racetrack.urls, Django tried these URL patterns, in this order:
            ^admin/
            ^create/$
            ^races/$
            ^races/(?P<race_id>\d+)/$
            ^races/(?P<race_id>\d+)/manage/$
            ^races/(?P<text>\w+)/$
            ^user/(?P<kol_id>\d+)/$
            ^$
            ^login/$
            ^logout/$
        The current URL, 32, didn't match any of these.
    The request URL is accurate, but the last line (which displays the current URL) is giving 32 instead of races/32 as expected. Here is my URLconf:
        from django.conf.urls.defaults import *
        from django.contrib import admin
        admin.autodiscover()

        urlpatterns = patterns('racetrack.races.views',
            (r'^admin/', include(admin.site.urls)),
            (r'^create/$', 'create'),
            (r'^races/$', 'index'),
            (r'^races/(?P<race_id>\d+)/$', 'detail'),
            (r'^races/(?P<race_id>\d+)/manage/$', 'manage'),
            (r'^races/(?P<text>\w+)/$', 'index'),
            (r'^user/(?P<kol_id>\d+)/$', 'user'),
            # temporary for index page replace with welcome page
            (r'^$', 'index'),
        )
        urlpatterns += patterns('django.contrib.auth.views',
            (r'^login/$', 'login', {'template_name': 'races/login.html'}),
            (r'^logout/$', 'logout', {'next_page': '/'}),
        )
    Thank you.
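    For comparison, the lighttpd setup from the Django FastCGI docs of that era routes every path to a single FastCGI handler; if only the last path segment is reaching Django, the rewrite (or a missing FORCE_SCRIPT_NAME = "" in settings.py) is the first thing to check. A hedged sketch, with the socket path and fcgi name as placeholders:
        fastcgi.server = (
            "/mysite.fcgi" => (
                "main" => (
                    # Socket the Django FastCGI process listens on.
                    "socket" => "/path/to/mysite.sock",
                    "check-local" => "disable",
                )
            ),
        )

        url.rewrite-once = (
            # Hand the full request path to the FastCGI handler.
            "^(/.*)$" => "/mysite.fcgi$1",
        )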

    Read the article

  • CakePHP: after PHP GD library installation, 'index.php' is appended to URLs

    - by Jusnit
    I am using CakePHP with an nginx server. In order to enable captcha support, I installed the PHP GD library on the server. After the installation, all the URLs in CakePHP have 'index.php' appended, like www.mydomain.com/index.php instead of www.mydomain.com. The CakePHP HtmlHelper link and image functions are all generating URLs like "/index.php/img/flower.jpg" instead of "/img/flower.jpg". Please help me solve this problem.

    Read the article

  • Nginx+Passenger: 502 Bad Gateway from Nginx when passing urlencoded URLs in GET vars

    - by jimeh
    Here's an example of the URLs that don't work:
        http://domain/do?url=http%3A%2F%2Fwww.linkedin.com%2Fin%2Fperson
        http://domain/do?url=http%3A%2F%2Fwww.linkedin.com%2F
    However, the following URL does work:
        http://domain/do?url=http%3A%2F%2Fwww.linkedin.com
    Also, this only happens with Nginx; using Passenger with Apache it works fine, but we use Nginx on our production machines. Here's the entry in Nginx's error log:
        2009/12/01 09:30:51 [error] 6407#0: *136 upstream prematurely closed connection while reading response header from upstream, client: xxx.xxx.xxx.xxx, server: domain, request: "GET /do?url=http%3A%2F%2Fwww.linkedin.com%2F HTTP/1.1", upstream: "passenger://unix:/tmp/passenger.6335/master/helper_server.sock:", host: "domain"

    Read the article

  • Looking for a tool to expand shortened URLs

    - by Rich Seller
    The interwebs seem to be infested with shortened urls (Twitter I'm looking at you). I'm always reluctant to click these as it is a leap into the unknown. Are there any browser plugins or Greasemonkey scripts that will auto-expand the shortened URL or give me a tooltip with the resolved target? I've seen LongUrl.org, which has an API I could use to roll my own, but I'd like to avoid the effort if this is a solved problem.

    Read the article

< Previous Page | 2 3 4 5 6 7 8 9 10 11 12 13  | Next Page >