Search Results

Search found 8370 results on 335 pages for 'seo friendly urls'.

Page 140/335 | < Previous Page | 136 137 138 139 140 141 142 143 144 145 146 147  | Next Page >

  • How to make CodeIgniter accept "query string" URLs?

    - by Peter
    According to CI's docs, CodeIgniter uses a segment-based approach, for example: example.com/my/group If I want to find a specific group (id=5), I can visit example.com/my/group/5 And in the controller, define function group($id='') { ... } Now I want to use the traditional approach, which CI calls "query string" URL. Example: example.com/my/group?id=5 If I go to this URL directly, I get a 404 page not found. So how can I enable this?
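
    A minimal sketch of one common fix, assuming CodeIgniter 2.x config names (check your version's application/config/config.php):

        // application/config/config.php
        $config['uri_protocol']    = 'PATH_INFO'; // let the query string reach PHP
        $config['allow_get_array'] = TRUE;        // CI 2.x: repopulate $_GET

        // controller: read the id from the query string instead of a segment
        function group()
        {
            $id = $this->input->get('id'); // example.com/my/group?id=5
            // look the group up by $id here
        }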

    Read the article

  • What is the best way to generate a sitemap?

    - by Zakaria
    Hi everybody, I need to build a sitemap for my website. The URL will be "www.example.com/mysitemap.html". I know there are tools that automatically generate an XML file containing the reachable URLs, which also improves SEO. So my questions are: How can I build this HTML page from the generated XML? Or am I wrong, and is this kind of HTML page built manually? If not, how do we integrate the XML and convert it for the website? Thank you very much. Regards.
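
    The HTML page is usually generated from the XML, not hand-built. A minimal sketch, assuming a standard sitemaps.org XML file named sitemap.xml:

        <?php
        // mysitemap.php -- render the XML sitemap as a plain HTML list
        $ns  = 'http://www.sitemaps.org/schemas/sitemap/0.9';
        $xml = simplexml_load_file('sitemap.xml');
        echo "<ul>\n";
        foreach ($xml->children($ns)->url as $url) {
            $loc = htmlspecialchars((string) $url->children($ns)->loc);
            echo "<li><a href=\"$loc\">$loc</a></li>\n";
        }
        echo "</ul>\n";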

    Read the article

  • Escaping ampersands in URLs for HttpClient requests

    - by jpatokal
    So I've got some Java code that uses Jakarta HttpClient like this: URI aURI = new URI( "http://host/index.php?title=" + title + "&action=edit" ); GetMethod aRequest = new GetMethod( aURI.getEscapedPathQuery()); The problem is that if title includes any ampersands (&), they're considered parameter delimiters and the request goes screwy... and if I replace them with the URL-escaped equivalent %26, then this gets double-escaped by getEscapedPathQuery() into %2526. I'm currently working around this by basically repairing the damage afterward: URI aURI = new URI( "http://host/index.php?title=" + title.replace("&", "%26") + "&action=edit" ); GetMethod aRequest = new GetMethod( aURI.getEscapedPathQuery().replace("%2526", "%26")); But there has to be a nicer way to do this, right? Note that the title can contain any number of unpredictable UTF-8 chars etc, so escaping everything else is a requirement.
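
    One nicer way, assuming Commons HttpClient 3.x: let the library build and encode the query string from name/value pairs instead of escaping by hand. A sketch (fragment; buildEditRequest is a hypothetical helper):

        import org.apache.commons.httpclient.NameValuePair;
        import org.apache.commons.httpclient.methods.GetMethod;

        GetMethod buildEditRequest(String title) {
            GetMethod aRequest = new GetMethod("http://host/index.php");
            aRequest.setQueryString(new NameValuePair[] {
                new NameValuePair("title", title),   // '&' in title is encoded exactly once
                new NameValuePair("action", "edit")
            });
            return aRequest;
        }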

    Read the article

  • How do I replace outbound link URLs in a PDF document, using PHP

    - by Alex Poole
    I have a PDF document with some external links. I'd like to parse the document, replace the destination of the links, then close (and serve) the PDF document, all using PHP. I know I can do this with PDFlib, but I don't want to incur that cost. I could rewrite the document with FPDF or DomPDF, but some of these PDFs are quite complex, so this would be a major time investment. Surely there must be a way to do this directly to PDF docs, using native PHP? TIA
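
    There is probably no robust pure-PHP way short of a real PDF library, but as a very rough sketch for simple documents: rewrite the /URI entries of the link annotations. This only works when the annotations are not inside compressed object streams, and same-length replacements are safest because the cross-reference table stores byte offsets (the URLs here are hypothetical):

        <?php
        $pdf = file_get_contents('in.pdf');
        $pdf = preg_replace('~/URI\s*\(http://old\.example\.com[^)]*\)~',
                            '/URI (http://new.example.com/page)',
                            $pdf);
        header('Content-Type: application/pdf');
        echo $pdf;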

    Read the article

  • Why is Google Webmaster Tools crawling invalid URLs and showing 500 errors?

    - by Amos Kane
    Google Webmaster Tools is reporting 12k+ 500 errors. Eeek! None of the URLs are valid: they all contain www.youtube.com. First, why is Google crawling these URLs if they don't exist? I supplied a sitemap, and they are of course not in the sitemap. I don't have a robots.txt blocking anything. I've checked for invalid redirects--none, and checked for unclosed tags or something that would throw www.youtube.com into the URL by accident--none. In every 'linked from', the referring URL is also a bad URL, with www.youtube.com in it. Google Tools reports no malware, and I can't check the server logs because the host won't give me access. Really stuck!! Any ideas appreciated!

    Read the article

  • Regular expressions and matching question marks in URLs

    - by James P.
    I'm having trouble finding a regular expression that matches the following String. Korben;http://feeds.feedburner.com/KorbensBlog-UpgradeYourMind?format=xml;1 One problem is escaping the question mark. Java's pattern matcher doesn't seem to accept \? as a valid escape sequence but it also fails to work with the tester at myregexp.com. Here's what I have so far: ([a-zA-Z0-9])+;http://([a-zA-Z0-9./-]+);[0-9]+ Any suggestions?
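
    In a Java string literal the backslash itself must be doubled, so the regex escape \? has to be written "\\?"; alternatively, a ? inside a character class needs no escape at all. A small sketch:

        import java.util.regex.Matcher;
        import java.util.regex.Pattern;

        public class FeedLine {
            public static void main(String[] args) {
                String s = "Korben;http://feeds.feedburner.com/KorbensBlog-UpgradeYourMind?format=xml;1";
                // ? is literal inside the character class, so no escaping needed here
                Pattern p = Pattern.compile("([a-zA-Z0-9]+);(http://[a-zA-Z0-9./?=&-]+);([0-9]+)");
                Matcher m = p.matcher(s);
                if (m.matches()) {
                    System.out.println(m.group(2)); // the URL part
                }
            }
        }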

    Read the article

  • Customizing the look of S3 expiring URLs

    - by Gregoriy
    Creating expiring links for Amazon S3 buckets, could I somehow change the name of AWSAccessKeyId to something else (to customize it, so to speak), so as not to disclose the use of Amazon in my web applications? For now, it looks like so: http://video.mysite.com/T154456.flv?AWSAccessKeyId=1ESOMESPECIALIDJJAKJ6RA82&Expires=1241372284&Signature=ddfr%2BlkoSEPAL%2BGbMwlMzj6q%2BCY%3D Or, so as not to open another question: what are other (tricky?) ways to expire S3 links without the use of a proxy?

    Read the article

  • UIWebView: webViewDidStartLoad/webViewDidFinishLoad delegate methods not called when loading certain URLs

    - by Dia
    I have a basic web browser implemented using a UIWebView. I've noticed that for some pages, none of the UIWebViewDelegate methods are called. An example page in which this happens is: http://www.youtube.com/user/google. Here are the steps to reproduce the issue (make sure you insert NSLog calls in your controller's UIWebViewDelegate methods): Load the above YouTube URL into the UIWebView [notice that here, the UIWebViewDelegate methods do get called when the page loads]. Touch the "Uploads" category on the page. Touch any video in that category [issue: notice that a new page is loaded, but none of the UIWebView delegates are called]. I know that this is not an issue of UIWebView's delegate not being set properly, since the delegate methods do get invoked when loading other links (e.g. if you try clicking on a link that takes you outside of YouTube, you'll notice the delegate methods getting called). My gut feeling initially was that it might be because the page is loaded using AJAX, which may not invoke the delegate methods. But then when I checked Safari, it did not exhibit this problem, so it must be something on my side. I've also noticed that Three20's TTWebController has the exact same issue as I'm having. The problem that arises from this is that without the delegate methods being called, I'm unable to update the UI to enable/disable the back and forward browsing buttons when new requests are loaded. Any idea why this is happening, or how I can work around it to update the UI when a new request is made?

    Read the article

  • Rewrite URLs in CSS files

    - by Don
    Hi, I'm writing a Maven plugin that merges CSS files together. So all the CSS files that match /foo/bar/*.css might get merged to /foo/merged.css. A concern is that in a file such as /foo/bar/baz.css there might be a property such as: background: url("images/pic.jpg") So when the file is merged into /foo/merged.css this will need to be changed to background: url("bar/images/pic.jpg") The recalculated URL obviously depends on 3 factors: original URL original CSS file location merged CSS file location Assuming that the original and merged CSS files are both on the same filesystem, is there a general formula (or Java library) that can be used to calculate the new url given these 3 inputs? Thanks, Don
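
    java.net.URI can do exactly this with resolve() and relativize(). A sketch with the three inputs from above; note that relativize() only produces a relative result when the merged file sits at or above the original directory (it never emits "../" segments):

        import java.net.URI;

        public class CssUrlRebase {
            public static void main(String[] args) {
                URI originalCss = URI.create("/foo/bar/baz.css");
                URI mergedCss   = URI.create("/foo/merged.css");
                URI imageUrl    = URI.create("images/pic.jpg");

                URI absolute  = originalCss.resolve(imageUrl);      // /foo/bar/images/pic.jpg
                URI mergedDir = mergedCss.resolve(".");             // /foo/
                System.out.println(mergedDir.relativize(absolute)); // bar/images/pic.jpg
            }
        }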

    Read the article

  • noindex, follow on list views?

    - by Fabrizio
    On one of our clients' websites we have lots of list views with links to detail views. (Imagine a blog with the posts overview and the single pages.) The detail views don't change, but the list views will change when new items come up. The pages displaying the list view don't contain any other valuable content. So my question is: does it make sense to define meta "noindex, follow" on the list view pages (and of course "index, follow" on the detail views) to prevent search engines from pointing to the list views when the keyword is found in the title or teaser of the list view? By the time the visitor clicks on the list view search result it might have changed and the content may not be visible anymore, whereas if he goes directly to the single view he will definitely find what he was searching for. Related question: the start page also contains mainly a list view. Is it a bad idea to have the start page not indexed? Any SEO gurus here? :) Thanks, Fabrizio.
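
    For reference, the tags in question (the detail-view tag is the default and can be omitted):

        <!-- list-view pages -->
        <meta name="robots" content="noindex, follow">
        <!-- detail-view pages -->
        <meta name="robots" content="index, follow">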

    Read the article

  • Map a domain to an MVC area

    - by Simon_Weaver
    Anybody got any experience in mapping a domain to an MVC area? Here's our situation: Old system (still active but will soon redirect to the new store): www.example.com - our main site where we send traffic; store.example.com - our store site, which is a completely separate site that is indexed in Google. New system: www.example.com - same site as before; www.example.com/store - new store site, built in an ASP.NET MVC area. Because the store is a separate domain, Google gives it a separate entry in the search results. I'd like to keep this benefit in the future, but I'm wondering whether there is a good way to map a domain (store.example.com) to the MVC area, or if it's just going to be more trouble than it's worth. PS. I'm not trying to keep existing indexing - it's a completely separate store, so that's not possible. I just want to redirect to the corresponding page in the new store. I'm just trying not to lose the benefit of two domains for SEO purposes.
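
    One way to handle at least the redirect half, assuming the IIS URL Rewrite module is available: 301 the store subdomain onto the new area, preserving the path, in web.config (goes under system.webServer):

        <rewrite>
          <rules>
            <rule name="store subdomain to /store area" stopProcessing="true">
              <match url="(.*)" />
              <conditions>
                <add input="{HTTP_HOST}" pattern="^store\.example\.com$" />
              </conditions>
              <action type="Redirect" url="http://www.example.com/store/{R:1}"
                      redirectType="Permanent" />
            </rule>
          </rules>
        </rewrite>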

    Read the article

  • Spider a Website and Return URLs Only

    - by Rob Wilkerson
    I'm not quite sure how best to define/articulate this, but I'm looking for a way to pseudo-spider a website. The key is that I don't actually want the content, but rather a simple list of URIs. I can get reasonably close to this idea with Wget using the --spider option, but when piping that output through a grep, I can't seem to find the right magic to make it work: wget --spider --force-html -r -l1 http://somesite.com | grep 'Saving to:' The grep filter seems to have absolutely no effect on the wget output. Have I got something wrong or is there another tool I should try that's more geared towards providing this kind of limited result set? Thanks. UPDATE So I just found out offline that, by default, wget writes to stderr. I missed that in the man pages (in fact, I still haven't found it if it's in there). Once I redirected stderr to stdout, I got closer to what I need: wget --spider --force-html -r -l1 http://somesite.com 2>&1 | grep 'Saving to:' I'd still be interested in other/better means for doing this kind of thing, if any exist.
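
    A variant of the same idea that doesn't depend on the exact log wording: grab anything URL-shaped from the spider log and de-duplicate it:

        wget --spider --force-html -r -l1 http://somesite.com 2>&1 \
          | grep -o 'http://[^ ]*' \
          | sort -u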

    Read the article

  • Bookmarkable URLs after Ajax for Wicket

    - by Wolfgang
    There is this well-known problem that browsers don't put Ajax requests in the request history, which causes problems for bookmarkability, the forward/back buttons, and refresh. There is also a common solution to that problem that appends the hash symbol # and some additional parameters to the URL by using JavaScript's window.location.hash = .... In this question a basic solution to this problem is proposed, for example. My question is whether such a solution has been integrated into Wicket, so that existing Wicket facilities are used and no custom JavaScript has to be added. If not, I'd be interested in how this could be done. Such a solution would have to answer the question of what should be put after the hash. I like the idea that the bookmarkable URL that (in the non-Ajax case) would be in front of the hash could be put behind it. For example, when you are on http://host/catalog and reach a page http://host/product/xyz, the Ajax-triggered URL would be http://host/catalog#/product/xyz. Then it would be easy to write an onload handler that checks for the # and does a redirect to the URL after the hash.

    Read the article

  • Read file:// URLs in IE XMLHttpRequest

    - by Dan Fabulich
    I'm developing a JavaScript application that's meant to be run either from a web server (over http) or from the file system (on a file:// URL). As part of this code, I need to use XMLHttpRequest to load files in the same directory as the page and in subdirectories of the page. This code works fine ("PASS") when executed on a web server, but doesn't work ("FAIL") in Internet Explorer 8 when run off the file system: <html><head> <script> window.onload = function() { var xhr = new XMLHttpRequest(); xhr.open("GET", window.location.href, false); xhr.send(null); if (/TestString/.test(xhr.responseText)) { document.body.innerHTML="<p>PASS</p>"; } } </script> <body><p>FAIL</p></body> Of course, at first it fails because no scripts can run at all on the file system; the user is prompted with a yellow bar, warning that "To help protect your security, Internet Explorer has restricted this webpage from running scripts or ActiveX controls that could access your computer." But even once I click on the bar and "Allow Blocked Content" the page still fails; I get an "Access is Denied" error on the xhr.open call. This puzzles me, because MSDN says that "For development purposes, the file:// protocol is allowed from the Local Machine zone." This local file should be part of the Local Machine Zone, right? How can I get code like this to work? I'm fine with prompting the user with security warnings; I'm not OK with forcing them to turn off security in the control panel. EDIT: I am not, in fact, loading an XML document in my case; I'm loading a plain text file (.txt).
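
    A workaround often suggested for old IE: fall back to the ActiveX XMLHTTP object on file:// URLs, where it is less locked down than the native XMLHttpRequest. A sketch (the user may still see a security prompt):

        function createXHR() {
            if (window.location.protocol === "file:" && window.ActiveXObject) {
                try {
                    return new ActiveXObject("Microsoft.XMLHTTP");
                } catch (e) { /* fall through to the native object */ }
            }
            return new XMLHttpRequest();
        }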

    Read the article

  • Apache mod_rewrite for shorter URLs

    - by Don
    Is it possible to do something like this with mod_rewrite? Current URL: www.example.com/Departments/dynamicPage.php?DeptID=10&DeptName=HR How would I set up a rewrite so that www.example.com/hr redirects to the above (with the arguments)? I know I could create an "hr" folder on the root level and put in an HTML page with a meta refresh, but I hate the extra clutter. I don't think a .htaccess 301 is possible, but please correct me if I'm wrong. I'm looking for an elegant solution that can be added to for future instances.
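
    It is possible, and no redirect page is needed. A minimal .htaccess sketch for the document root (the [QSA] flag keeps any extra query arguments; add one rule per short URL):

        RewriteEngine On
        # transparent rewrite; use [R=301,L] instead for a visible redirect
        RewriteRule ^hr/?$ /Departments/dynamicPage.php?DeptID=10&DeptName=HR [L,QSA]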

    Read the article

  • Case Insensitive URLs with mod_rewrite

    - by Paul Tarjan
    I'd like for any URL that doesn't hit an existing file to do a lookup on the other possible cases and see if those files exist, and if so, 302 to them. If that's not possible, then I'm OK with these compromises: only check the lowercase version, or only check the first path portion. For example, http://example.com/CoOl/PaTH/CaMELcaSE should redirect to http://example.com/cool/path/camelCase (assuming the latter exists). But of course a full solution is much more useful to me and others.
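
    The lowercase-only compromise is a classic RewriteMap recipe; note that RewriteMap must live in the server or vhost config, not .htaccess (for a true case-insensitive lookup against existing files, mod_speling's "CheckSpelling On" is the closer fit):

        RewriteEngine On
        RewriteMap lc int:tolower

        # redirect any path containing an uppercase letter to its lowercase form
        RewriteCond %{REQUEST_URI} [A-Z]
        RewriteRule (.*) ${lc:$1} [R=302,L]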

    Read the article

  • Webpage layout order for my webapp - does it matter if the Sidebar is programmatically displayed before the main content?

    - by Jack W-H
    OK, that's the worst title I could ever possibly think up. But I'm not too sure how to phrase it! What I mean is: is it inefficient for the browser, search engine optimisation, or any other important factors if, programmatically, my float:right sidebar appears in the markup before the main content div, which is set to float:left? To the user, the main content appears on the left, and the sidebar on the right. In the source code it appears like so: <div id="sidebar">This is where my sidebar goes </div> <div id="content">This is where my content goes </div> Will this affect SEO or other factors in my page?

    Read the article

  • Is this a "valid" CSS image replacement technique?

    - by user278457
    I just came up with this; it seems to work in all modern browsers. I just tested it on IE8 (and compatibility mode), Chrome, Safari, and Mozilla. HTML <img id="my_image" alt="my text" src="images/small_transparent.gif" /> CSS #my_image{ background-image:url('images/my_image.png'); width:100px; height:100px;} Pros: image alt text is best practice for accessibility/SEO; no extra HTML markup, and the CSS is pretty minimal too; gets around the CSS on/images off issue where "text-indent" techniques hide text from low-bandwidth users. The biggest disadvantage that I can think of is the CSS off/images on situation, because you'll only send a transparent gif. I'd like to know: who uses images without stylesheets? Some kind of mobile phone or something? I'm making some sites for clients in regional Australia (hundreds of km from the nearest city), where many users will be suffering from dial-up connections, and often outdated browsers too, so the "images off" issue is an important consideration. Are there any other side effects with this technique that I haven't considered?

    Read the article

  • Uppercase and lowercase URLs in PHP

    - by Arjun
    I have created folders in my root, for example: http://www.zipholidays.co.uk/Cuba or http://www.zipholidays.co.uk/Florida When I type http://www.zipholidays.co.uk/cuba (Cuba in lowercase), it shows page not found. I'm using an Apache server. People are linking to pages with lowercase, uppercase, mixed case - whatever. What do I do to make the pages case insensitive?
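
    Apache's mod_speling module does exactly this lookup, assuming the module is enabled; it corrects case per path component (and also tolerates one misspelling). A minimal .htaccess sketch:

        <IfModule mod_speling.c>
            CheckSpelling On
        </IfModule>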

    Read the article

  • Problem using unicode in URLs with cgi.PATH_INFO in ColdFusion

    - by Loftx
    Hi there, My ColdFusion (MX7) site has search functionality which appends the search term to the URL, e.g. http://www.example.com/search.cfm/searchterm. The problem I'm running into is that this is a multilingual site, so the search term may be in another language, e.g. ???????, leading to a search URL such as http://www.example.com/search.cfm/??????? The problem is when I come to retrieve the search term from the URL. I'm using cgi.PATH_INFO to retrieve the path of the search page and the search term, and extracting the search term from this, e.g. /search.cfm/searchterm; however, when unicode characters are used in the search they are converted to question marks, e.g. /search.cfm/??????. These appear to be actual question marks, rather than the browser not being able to format unicode characters, or them being mangled on output. I can't find any information about whether ColdFusion supports unicode in the URL, or how I can go about resolving this and getting hold of the complete URL in some way - does anyone have any ideas? Cheers, Tom

    Read the article

  • How can I explain to a programmer that CSS positioning has many benefits over table based layouts?

    - by Pat
    I have a friend who wishes to work as a freelance web developer, but insists that tables are the way forward for layouts. Several points he maintains in favour of tables: 1) This is what was taught at the beginning of 10 years of programming & computer science degrees. 2) Large companies use tables to achieve 'technical' things. 3) It saves time. I have coded him some examples of CSS exactly matching table-based layouts, and provided many links to articles explaining the SEO and accessibility benefits. From the perspective of a client, I have been explaining to him that I wouldn't hire someone using outdated methods as their main strategy for layout. As he is my friend and I wish him every success, I believe it is important for him to gain the best start when pitching for work. The question again: How can I explain to a programmer that CSS positioning has many benefits over table-based layouts?

    Read the article

  • Scrape zipcode table for different URLs based on county

    - by Dr.Venkman
    I used lxml and ran into a wall, as my new computer won't install lxml and the code doesn't work. I know this is simple - maybe someone can help with a Beautiful Soup script. This is my code:

        from selenium import webdriver

        cities = ['amador']
        states = ['CA']

        for state in states:
            for city in cities:
                browser = webdriver.Firefox()
                link2 = ('http://www.getzips.com/cgi-bin/ziplook.exe?What=3'
                         '&County=' + city + '&State=' + state + '&Submit=Look+It+Up')
                browser.get(link2)
                bcontent = browser.page_source
                zipcode = bcontent[bcontent.find('<td width="15%"'):bcontent.find('<p>')]
                if len(zipcode) > 0:
                    print zipcode
                else:
                    print 'none'
                browser.quit()

    Thanks for the help
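
    A Beautiful Soup sketch that drops both lxml and Selenium, assuming the page is plain HTML; the width="15%" cell and the query parameters are carried over from the code above, so verify them against the live page:

        import urllib2
        from bs4 import BeautifulSoup

        cities = ['amador']
        states = ['CA']

        for state in states:
            for city in cities:
                url = ('http://www.getzips.com/cgi-bin/ziplook.exe?What=3'
                       '&County=' + city + '&State=' + state + '&Submit=Look+It+Up')
                soup = BeautifulSoup(urllib2.urlopen(url).read(), 'html.parser')
                cells = soup.find_all('td', width='15%')
                if cells:
                    for cell in cells:
                        print cell.get_text(strip=True)
                else:
                    print 'none'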

    Read the article
