Search Results

Search found 8370 results on 335 pages for 'seo friendly urls'.


  • Strange routing

    - by astropanic
    How can I set up my Rails app to respond to urls like these:

        http://mydomain.com/white-halogene-lamp
        http://mydomain.com/children-lamps
        http://mydomain.com/contact-form

    The first should go to my products controller and show the product with that name, the second to my categories controller and show the category with that name, and the third to my sites controller and show the site with that title. All three models (product, category, site) have a to_seo method that produces the part of the url after the slash. I know it's not the RESTful way, but please don't discuss whether the approach is right or wrong; that's not the question. The question is how to accomplish this weird routing. I know we have catch-all routes, but how can I tell Rails to use different controllers for different urls within the catch-all route? Or do you have a better idea? I would like to avoid redirects to other urls.
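    One way to approach this, as a minimal sketch: route every otherwise-unmatched slug to a small dispatcher controller that tries each model in turn. The find_by_slug finders below are assumptions standing in for whatever lookup reverses to_seo; they are not part of the poster's code.

        # config/routes.rb (Rails 2-style; keep this catch-all last so it
        # cannot shadow the rest of the routes)
        map.connect ':slug', :controller => 'dispatch', :action => 'show'

        # app/controllers/dispatch_controller.rb
        class DispatchController < ApplicationController
          def show
            slug = params[:slug]
            # find_by_slug is hypothetical: replace with whatever matches
            # the output of each model's to_seo
            if (product = Product.find_by_slug(slug))
              render :template => 'products/show', :locals => { :product => product }
            elsif (category = Category.find_by_slug(slug))
              render :template => 'categories/show', :locals => { :category => category }
            elsif (site = Site.find_by_slug(slug))
              render :template => 'sites/show', :locals => { :site => site }
            else
              render :text => 'Not Found', :status => 404
            end
          end
        end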

    Read the article

  • Extracting URLs (to array) in Ruby

    - by FearMediocrity
    Good afternoon, I'm learning about regular expressions in Ruby and have hit a point where I need some assistance. I am trying to extract zero or more URLs from a string. This is the code I'm using:

        sStrings = ["hello world: http://www.google.com",
                    "There is only one url in this string http://yahoo.com . Did you get that?",
                    "The first URL in this string is http://www.bing.com and the second is http://digg.com",
                    "This one is more complicated http://is.gd/12345 http://is.gd/4567?q=1",
                    "This string contains no urls"]

        sStrings.each do |s|
          x = s.scan(/((http|https):\/\/[a-z0-9]+([\-\.]{1}[a-z0-9]+)*\.[a-z]{2,5}(([0-9]{1,5})?\/.[\w-]*)?)/ix)
          x.each do |url|
            puts url
          end
        end

    This is what is returned:

        http://www.google.com http .google nil nil
        http://yahoo.com http nil nil nil
        http://www.bing.com http .bing nil nil
        http://digg.com http nil nil nil
        http://is.gd/12345 http nil /12345 nil
        http://is.gd/4567 http nil /4567 nil

    What is the best way to extract only the full URLs and not the individual capture groups? Thanks, Jim
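    An editorial note on the likely cause: when a pattern contains capture groups, String#scan returns the groups rather than the whole match, so one common fix is to make every group non-capturing with (?:...). A sketch reusing the poster's sStrings (the port and path handling are tidied-up assumptions, not a drop-in replacement for the original pattern):

        url_pattern = %r{https?://[a-z0-9]+(?:[\-.][a-z0-9]+)*\.[a-z]{2,5}(?::[0-9]{1,5})?(?:/[\w\-./?%&=]*)?}i

        sStrings.each do |s|
          # with no capture groups, scan yields the full matched strings
          s.scan(url_pattern).each { |url| puts url }
        end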

    Read the article

  • mod_rewrite to change my urls

    - by user319859
    Hi, I've been fighting with mod_rewrite for a while. Basically, I have a website that I'm moving to a different namespace/directory. What I'd like to do is change urls that look like this:

        http://mydomain.com/index.php?a=xxxxxxxxxx

    These urls will always have "index.php?a=". I have a different/new site that also has an index.php file, so it's important that I do a rewrite only when a= is in the URL. The new url should be like:

        http://mydomain.com/ns1/index.php?a=xxxxxxxxxxx

    Seems pretty simple, but I can't seem to get mod_rewrite to do it for me. Here's what I have:

        # rewrite old urls to new namespace
        RewriteRule ^/index.php\?a=(.*)$ /gc1/index.php\?x=1&a=$1 [R=301,L]

    See anything wrong?
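    Two common gotchas here, offered as a sketch rather than a verified fix: RewriteRule only ever matches the path, never the query string, so the a= test has to move into a RewriteCond on %{QUERY_STRING}; and in a per-directory (.htaccess) context the matched path has no leading slash. Something like:

        # .htaccess sketch: test the query string separately, then redirect;
        # QSA re-appends the original query string (including a=...) after x=1
        RewriteEngine On
        RewriteCond %{QUERY_STRING} (^|&)a=
        RewriteRule ^index\.php$ /ns1/index.php?x=1 [QSA,R=301,L]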

    Read the article

  • [SEO] sitemap.xml: what is the precision of the priority field?

    - by Christoph
    Unfortunately the specification does not say anything about precision. The XML schema definition states that it is of type xsd:decimal:

        <xsd:restriction base="xsd:decimal">
          <xsd:minInclusive value="0.0"/>
          <xsd:maxInclusive value="1.0"/>
        </xsd:restriction>

    I have a sitemap generator that uses up to 10 digits after the decimal point, where often only the last few digits differ. These numbers are perfectly valid according to the XSD, but I have found some pages (3, 4) stating that only 0.0, 0.1, 0.2, ..., 1.0 are valid values. How will the search engines react to such a sitemap? Will some just round the value? I know it is unlikely that anyone can answer this definitively unless they work for a search engine, but experiences will also do.
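    For reference, the examples in the sitemaps.org protocol only ever show one digit after the decimal point, so rounding to that granularity is the conservative output. A sketch of a conforming entry (the URL is illustrative):

        <url>
          <loc>http://www.example.com/catalog?item=12</loc>
          <priority>0.8</priority>
        </url>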

    Read the article

  • ASP.NET MVC - how to modify requested URLs?

    - by Marek
    The Baidu spider seems to be adding ¤ to the end of some crawled urls (it seems to happen with urls whose last character is a single unicode character). The Baidu-requested url looks like this: site.com/abc/ä¤ while site.com/abc/ä is the valid url, linked from many places on my site. The internal problem is that a different route is matched for this kind of url and an unhandled exception occurs. I would not like to lose Baidu because of too many 500 errors on the site. I would like to change the requested URL to a different URL by removing the added character before any ASP.NET MVC processing of the request starts. Can I write a request filter/http module or something similar in ASP.NET MVC to remove the trailing '¤' from the urls? I would not like to alter my routes to counter-hack this behavior.
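    One place to do this, as a sketch: Application_BeginRequest in Global.asax runs before MVC routing, so rewriting the path there changes which route gets matched. Context.RewritePath is a standard ASP.NET call; the character test below assumes the '¤' (U+00A4) arrives decoded and may need adjusting to how the bad URLs actually come in.

        // Global.asax.cs sketch: strip a trailing '¤' before routing sees the URL
        protected void Application_BeginRequest(object sender, EventArgs e)
        {
            string path = Context.Request.Path;
            if (path.EndsWith("\u00A4"))
            {
                Context.RewritePath(path.TrimEnd('\u00A4'),
                                    Context.Request.PathInfo,
                                    Context.Request.QueryString.ToString());
            }
        }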

    Read the article

  • SEO - Do Google and other search engines index links within <noscript> tags?

    - by Joe
    I have set up some dropdown menus allowing users to find pages on my website by selecting options across multiple dropdowns, e.g. color of car, then year. This generates a link like: mysite.xyz/blue/2010/ The only problem is, because this link is dynamically assembled with Javascript, I've also had to assemble each possible combination from the dropdowns into a list like:

        <noscript>
          No javascript enabled? Here are all the links:
          <a href='mysite.xyz/blue/2009/'>mysite.xyz/blue/2009/</a>
          <a href='mysite.xyz/blue/2010/'>mysite.xyz/blue/2010/</a>
          <a href='mysite.xyz/red/2009/'>mysite.xyz/red/2009/</a>
          <a href='mysite.xyz/red/2010/'>mysite.xyz/red/2010/</a>
        </noscript>

    My question is, if I put these links in a tag like this, will I be penalized by search engines such as Google? I've already been doing this for some navigational elements that required offsets etc. However, now I would be listing a whole list of links here too. I want to provide them here mostly so that Google can actually index my pages, but also so that those without javascript can still navigate. Your thoughts? Also, even though some links appear to have been indexed, I am not 100% sure of it, which is why I'm asking.

    Read the article

  • best SEO method for date in url structure? [closed]

    - by Haroldo
    I'm working on an events website, so dates are very important search terms, e.g. 'whats on on fri 14th september'. I've seen it done in various ways, for example:

        domain/whats-on/city-hall/14-09-2010/event-name.html
        domain/whats-on/city-hall/2010/09/14/event-name.html

    The first is 'shallower'; the second could be clearer for Google to recognise as a date. Does anyone have experience or input?

    Read the article

  • How to deal with missing items the SEO way?

    - by Brandon Montgomery
    I am working on a public-facing web site which serves up articles for people to read. After some time, articles become stale and we remove them from the site. My question is this: what is the best way to handle the situation when a search engine visits a URL corresponding to a removed article? Should the app respond with a permanent redirect (301 Moved Permanently) to an "article not found" page, or is there a better way to handle this?

    Read the article

  • SEO problem for new dictionary site: Google hasn't indexed content

    - by John
    I loaded about 15,000 pages, letters A & B of a dictionary, and submitted a text sitemap to Google. I'm using Google's search with advertisement as the planned mechanism for navigating my site. Google's webmaster tools accepted the sitemap as good but then did not index the pages. My index page has been indexed by Google, but at this point it does not link to any of the dictionary pages. So to get Google's search to work I need to get all my content indexed. It appears Google will not index from the sitemap alone, so I was thinking of adding pages that spider in links from the main index page. But I don't want to programmatically create a bunch of pages linking to everything without knowing whether this has a chance of working. Eventually I plan on having about 150,000 pages, each page being a word or phrase being defined; I wrote a program that pulls these from a dictionary database. I would like to show the content I have to anyone interested, to demonstrate the value of the dictionary in relation to the dictionary software that I'm completing. Any suggestions for getting the entire site indexed by Google so I can appear in the search results? Thanks

    Read the article

  • URL shortening: using inode as short name?

    - by Licky Lindsay
    The site I am working on wants to generate its own shortened URLs rather than rely on a third party like tinyurl or bit.ly. Obviously I could keep a running count of new URLs as they are added to the site and use that to generate the short URLs. But I am trying to avoid that if possible, since it seems like a lot of work just to make this one thing work. As the things that need short URLs are all real physical files on the webserver, my current solution is to use their inode numbers, as those are already generated for me, ready to use and guaranteed to be unique:

        function short_name($file) {
            $ino = @fileinode($file);
            $s = base_convert($ino, 10, 36);
            return $s;
        }

    This seems to work. The question is, what can I do to make the short URL even shorter? On the system where this is being used, the inodes for newly added files are in a range that makes the function above return a string 7 characters long. Can I safely throw away some (half?) of the bits of the inode? And if so, should it be the high bits or the low bits? I thought of using the crc32 of the filename, but that actually makes my short names longer than using the inode. Would something like this have any risk of collisions? I've been able to get down to single digits by picking the right value of $referencefile:

        function short_name($file, $referencefile) {
            // $referencefile: an arbitrarily selected pre-existing file;
            // all newer files will have higher inodes
            $ino = @fileinode($file) - @fileinode($referencefile);
            $s = base_convert($ino, 10, 36);
            return $s;
        }
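    A caution on the collision question, added as an editorial aside: inode numbers are only unique among files that exist at the same time; once a file is deleted, the filesystem may reuse its inode for a new file, so a previously issued short URL can silently come to point at different content. The running-counter idea mentioned above avoids this. A minimal sketch, assuming the counter comes from an auto-increment database id:

        // sketch: base36-encode a monotonically increasing id instead of an inode
        function short_name_from_id($id) {
            return base_convert($id, 10, 36); // e.g. 46656 -> "1000"
        }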

    Read the article

  • Django URL Conf Returns Incorrect "Current URL"

    - by natnit
    I have a django app that is mostly done, and the URLs work perfectly when I run it with the manage.py runserver command. However, I've recently tried to get it running via lighttpd, and many links have stopped working. For example, http://mysite.com/races/32 should work, but instead throws this error message:

        Page not found (404)
        Request Method: GET
        Request URL: http://mysite.com/races/32

        Using the URLconf defined in racetrack.urls, Django tried these URL patterns, in this order:

        ^admin/
        ^create/$
        ^races/$
        ^races/(?P<race_id>\d+)/$
        ^races/(?P<race_id>\d+)/manage/$
        ^races/(?P<text>\w+)/$
        ^user/(?P<kol_id>\d+)/$
        ^$
        ^login/$
        ^logout/$

        The current URL, 32, didn't match any of these.

    The request URL is accurate, but the last line (which displays the current URL) is giving 32 instead of races/32 as expected. Here is my urlconf:

        from django.conf.urls.defaults import *
        from django.contrib import admin
        admin.autodiscover()

        urlpatterns = patterns('racetrack.races.views',
            (r'^admin/', include(admin.site.urls)),
            (r'^create/$', 'create'),
            (r'^races/$', 'index'),
            (r'^races/(?P<race_id>\d+)/$', 'detail'),
            (r'^races/(?P<race_id>\d+)/manage/$', 'manage'),
            (r'^races/(?P<text>\w+)/$', 'index'),
            (r'^user/(?P<kol_id>\d+)/$', 'user'),
            # temporary for index page, replace with welcome page
            (r'^$', 'index'),
        )

        urlpatterns += patterns('django.contrib.auth.views',
            (r'^login/$', 'login', {'template_name': 'races/login.html'}),
            (r'^logout/$', 'logout', {'next_page': '/'}),
        )

    Thank you.
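    A hedged pointer, since runserver works and lighttpd does not: with lighttpd and FastCGI, Django reconstructs the "current URL" from whatever path the rewrite rule forwards, so a rule that passes along only part of the path can produce exactly this kind of mismatch. The classic configuration from Django's FastCGI deployment docs forwards the entire path; the mysite.fcgi name and socket path are placeholders:

        # lighttpd.conf sketch
        fastcgi.server = (
            "/mysite.fcgi" => (
                "main" => (
                    "socket" => "/tmp/mysite.sock",
                    "check-local" => "disable",
                )
            ),
        )
        url.rewrite-once = (
            "^(/.*)$" => "/mysite.fcgi$1",  # forward the whole path, not a fragment
        )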

    Read the article

  • CakePHP: after PHP GD library installation, urls have 'index.php' appended

    - by Jusnit
    I am using CakePHP with an nginx server. In order to enable captcha support, I installed the PHP GD library on the server. After the installation, every url in the CakePHP site has 'index.php' appended, like www.mydomain.com/index.php instead of www.mydomain.com. The CakePHP HtmlHelper link and image functions all produce urls like "/index.php/img/flower.jpg" instead of "/img/flower.jpg". Please help to solve this problem.
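    For reference, a sketch of the front-controller rewrite commonly used for CakePHP on nginx; if a block like this went missing or stopped matching when the server configuration was touched during the GD install, CakePHP falls back to generating index.php-style urls (paths and names are placeholders):

        # nginx sketch: send anything that is not a real file to CakePHP's
        # front controller so pretty urls keep working
        location / {
            try_files $uri $uri/ /index.php?$args;
        }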

    Read the article

  • Nginx+Passenger: 502 Bad Gateway from Nginx when passing urlencoded URLs in GET vars

    - by jimeh
    Here's an example of the URLs that don't work:

        http://domain/do?url=http%3A%2F%2Fwww.linkedin.com%2Fin%2Fperson
        http://domain/do?url=http%3A%2F%2Fwww.linkedin.com%2F

    However, the following URL does work:

        http://domain/do?url=http%3A%2F%2Fwww.linkedin.com

    Also, this only happens with Nginx; using Passenger with Apache it works fine, but we use Nginx on our production machines. Here's the entry in Nginx's error log:

        2009/12/01 09:30:51 [error] 6407#0: *136 upstream prematurely closed connection while reading response header from upstream, client: xxx.xxx.xxx.xxx, server: domain, request: "GET /do?url=http%3A%2F%2Fwww.linkedin.com%2F HTTP/1.1", upstream: "passenger://unix:/tmp/passenger.6335/master/helper_server.sock:", host: "domain"

    Read the article

  • Looking for a tool to expand shortened urls

    - by Rich Seller
    The interwebs seem to be infested with shortened urls (Twitter, I'm looking at you). I'm always reluctant to click these as it is a leap into the unknown. Are there any browser plugins or Greasemonkey scripts that will auto-expand the shortened URL or give me a tooltip with the resolved target? I've seen LongUrl.org, which has an API I could use to roll my own, but I'd like to avoid the effort if this is a solved problem.

    Read the article

  • Rewritten URLs with parameter length > 255 don't work

    - by philfreo
    I'm using mod_rewrite to rewrite URLs like this: http://example.com/1,2,3,4/foo/ by putting this in .htaccess:

        RewriteRule ^([\d,]+)/foo/$ /foo.php?id=$1 [L,QSA]

    It works fine, except that when "1,2,3,4" grows into a string longer than 255 characters, Apache returns a "403 Forbidden". Is there some Apache setting I should tweak?

    Read the article

  • Service tag urls

    - by Joshua
    I like to keep links to my various laptops' support pages in my wiki. Dell makes it easy if you know the service tag: http://support.dell.com/support/topics/global.aspx/support/my_systems_info/details?ServiceTag={Your Service Tag} Are there any equivalent urls for HP, IBM, etc., where you can just plug in your service tag equivalent and get a page with links to drivers and whatnot?

    Read the article

  • shorter URLs for locally hosted files

    - by Ashwini
    In many developer talks I have seen the presenter use a demo.local URL instead of the conventional localhost/demo for faster access. I've read about editing hosts entries ("How can I create shorter URLs to sites on my computer?"), but my question is: since the localhost IP is the same 127.0.0.1 for every folder inside my var/www or htdocs, how do I make each folder accessible in the shorter format?
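    The missing piece, sketched for Apache (names and paths are placeholders): the hosts file only maps a name to an IP address, so the web server itself has to map each name to a folder via a name-based virtual host.

        # /etc/hosts
        127.0.0.1    demo.local

        # Apache sketch: serve /var/www/demo whenever the request's
        # Host header is demo.local
        <VirtualHost *:80>
            ServerName demo.local
            DocumentRoot /var/www/demo
        </VirtualHost>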

    Read the article

  • EC2 Instance of Wordpress not mapping URLs correctly

    - by Benjamin
    I'm using an AWS EC2 micro instance to run a wordpress blog. I've successfully mapped a subdomain to the Elastic IP for the micro instance. After a few minor changes, the URL I mapped to the Elastic IP (blog.example.com) opens the wordpress home page, but whenever I click on any of the wordpress links the domain changes to the AWS public DNS for that instance (http://ec2-123-45-678-910.compute-1.amazonaws.com/wordpress/). How do I fix the URLs so that they all follow the subdomain I have set up?
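    A common cause, offered as a hedged pointer: WordPress builds its links from the siteurl/home settings saved at install time, which here would still hold the EC2 public DNS name. WP_HOME and WP_SITEURL are real WordPress constants; the domain below is a placeholder for the mapped subdomain.

        // wp-config.php sketch: pin generated links to the mapped subdomain
        define('WP_HOME',    'http://blog.example.com');
        define('WP_SITEURL', 'http://blog.example.com');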

    Read the article

  • Can rel="next" and rel="prev" can be ignored in blog listings

    - by Saahil Sinha
    We have a blog which currently spans 9 pages. Every page has a unique title (page 1, page 2, page 3 and so on), and, as it's a blog, every page has 10 unique listing entries. Can rel="prev" and rel="next" be safely ignored, given that these are listing pages and not the content pages of an article? Everything I have read through Google Search says that rel="next" and rel="prev" apply where content is spread across multiple pages. But, as it's a blog, these are blog listings, and every listing has unique content. This is the blog: http://www.mycarhelpline.com/index.php?option=com_easyblog&view=latest&Itemid=91. Please advise whether, by ignoring rel="next" and rel="prev", we are inviting Google to treat the blog listing pages as duplicates.
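    For reference, a sketch of what the markup looks like when the hints are used: plain <link> elements in each listing page's <head>. The URLs here are illustrative, not the blog's real pagination parameters.

        <!-- on page 2 of the listing -->
        <link rel="prev" href="http://www.mycarhelpline.com/blog?page=1">
        <link rel="next" href="http://www.mycarhelpline.com/blog?page=3">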

    Read the article

  • Can google “see” this custom javascript code which displays links from an external site to mine?

    - by webmasters
    I have some javascript code on my site which displays links from another site. This is what I have in my source before the page loads:

        <script language="JavaScript" type="text/javascript">showLink(1);</script>

    This is what I copied from my source after the page has loaded:

        <script language="JavaScript" type="text/javascript">showLink(1);</script><a rel="nofollow" target="_blank" class="anc" href="http://x5.external_site.net/sc/out.php?s=5483&amp;o=http%3A%2F%2Fwww.bluetooth.com">Bluetooth Devices</a>

    Can google see this link?

    Read the article

  • Interesting links week #24 and #25

    - by erwin21
    Below a list of interesting links that I found this week: Interaction: Design Usability and All About It Frontend: CSS Lint – CSS Cleaning Tool 10 HTML Entity Crimes You Really Shouldn’t Commit Development: OWASP Top 10 for .NET developers part 7: Insecure Cryptographic Storage C#/.NET Fundamentals: Choosing the Right Collection Class Mobile: Tips to Design a Website for Mobile Marketing: 30 (New) Google Ranking Factors You May Over- or Underestimate Other: 5 Little-Known Web Files That Can Enhance Your Website Interested in more interesting links follow me at twitter http://twitter.com/erwingriekspoor

    Read the article
