Search Results

Search found 18982 results on 760 pages for 'url rewriting'.


  • cURL works but PHP cURL fails to reach the Internet

    - by wrk2bike
    Trying to diagnose an issue using PHP's cURL bindings to reach an Internet location from a RedHat Linux server. cURL is installed and working, and <?php var_dump(curl_version()); ?> shows all the correct information in its output. The issue is that I can use PHP to cURL to localhost on the box itself, but not to the Internet (see below). Normally I'd suspect the firewall, but I can cURL from the command line to the Internet without a problem, and the box can also update its own software packages, etc. What am I missing? My test is:

        <?php
        function http_head_curl($url, $timeout = 30) {
            $ch = curl_init();
            curl_setopt($ch, CURLOPT_URL, $url);
            curl_setopt($ch, CURLOPT_TIMEOUT, $timeout);
            curl_setopt($ch, CURLOPT_HEADER, 1);
            curl_setopt($ch, CURLOPT_NOBODY, 1);
            curl_setopt($ch, CURLOPT_FOLLOWLOCATION, 1);
            curl_setopt($ch, CURLOPT_RETURNTRANSFER, 1);
            $res = curl_exec($ch);
            if ($res === false) {
                throw new RuntimeException("cURL exception: " . curl_errno($ch) . ": " . curl_error($ch));
            }
            return trim($res);
        }

        // Succeeds, displaying headers:
        echo(http_head_curl('localhost'));

        // Fails:
        echo(http_head_curl('www.google.com'));
        ?>
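
    An editor's diagnostic sketch (not from the original post): capture cURL's verbose trace to see whether the failure is DNS resolution, a refused connection, or a timeout. The log path is an assumption. On RedHat boxes where the CLI can reach out but PHP under Apache cannot, SELinux's httpd_can_network_connect boolean is a common culprit.

        <?php
        // Diagnostic sketch: log cURL's connection trace to a file.
        $ch = curl_init('http://www.google.com');
        $log = fopen('/tmp/curl_debug.log', 'w');  // assumed path
        curl_setopt($ch, CURLOPT_VERBOSE, 1);      // emit a full trace...
        curl_setopt($ch, CURLOPT_STDERR, $log);    // ...into the log file
        curl_setopt($ch, CURLOPT_NOBODY, 1);
        curl_setopt($ch, CURLOPT_RETURNTRANSFER, 1);
        if (curl_exec($ch) === false) {
            echo 'cURL error ' . curl_errno($ch) . ': ' . curl_error($ch);
        }
        fclose($log);
        // If this fails under Apache while `curl` works from a shell, check
        // SELinux: getsebool httpd_can_network_connect
        ?>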

    Read the article

  • Duplicate content issue after URL-change with 301-redirects

    - by David
    We have the following problem: we changed all URLs on our site from oldURL.html to newURL.html and set up 301 redirects (ca. 600 URLs). Google re-crawled our site and indexed all the new URLs (newURL.html), but didn't crawl the old URLs (oldURL.html) again, as there were no internal links pointing at them anymore after the URL change. This resulted in massive ranking drops, because (i) Google thought oldURL.html had exactly the same content as newURL.html, causing duplicate-content issues, and (ii) Google did not transfer the juice from oldURL to newURL, because the 301 redirect was never noticed. We have now reset all internal links to the old URLs again, which then redirect to the new URLs, in the hope that Google will re-crawl the pages once there are internal links pointing at them. This is happening, but at a really low speed, so it would take multiple months for Google to notice all the redirects. I guess that's because Google thinks: "Aah, I already know oldURL.html, so no need to re-crawl it." Possible solutions we thought of are: (1) submitting as many of the old URLs as possible via Webmaster Tools, to manually trigger a crawl (doing that already); (2) submitting a sitemap with all the old URLs, though we're not sure that's a good idea, because Google does not seem to like 301 redirects in a sitemap. Neither solution is perfect, and we cannot wait three months just to regain our old rankings. What are your ideas? Best, David
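
    For the sitemap option, a minimal sketch (hypothetical domain): listing the retired URLs is reasonable here precisely because the goal is to get Googlebot to revisit them and observe the 301s.

        <?xml version="1.0" encoding="UTF-8"?>
        <urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">
          <!-- One entry per old URL, so the crawler revisits it and
               sees the permanent redirect to the new location. -->
          <url><loc>http://www.example.com/oldURL.html</loc></url>
        </urlset>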

    Read the article

  • URL length and content optimised for SEO

    - by Brendan Vogt
    I have done some reading on what URLs should look like for search engine optimisation, but I am curious to know how mine should look; I need some advice. I have a tutorial website, and my categories are something like: Web Development -> Client Side -> JavaScript. So if I have a tutorial called "What is JavaScript?", is it good to have a URL that looks something like www.MyWebsite.com/web-development/client-side/javascript/what-is-javascript, or would something like www.MyWebsite.com/tutorials/what-is-javascript be more appropriate? Just curious, because I also read that it is wise to have keywords in your URLs. Do I need to add the identifiers of each category in the link as well, something like www.MyWebsite.com/1/web-development/5/client-side/15/javascript/100/what-is-javascript, where 1 is the unique identifier (primary key) of the category "web development", 5 of the category "client side", 15 of the category "javascript", and 100 of the tutorial "what is javascript"? UPDATE: This is not a programming question, so can someone please help migrate this to the correct Q&A site without downvoting my question?

    Read the article

  • Should a link validator report 302 redirects as broken links?

    - by Kevin Vermeer
    A while ago, sparkfun.com changed their URL structure from /commerce/product_info.php?products_id=9266 to /products/9266. This is nice, right? We don't need to know that it is (or was) a PHP page, and "commerce", "product_info", and "products_id" all just tell us that we're looking at some products. The latter form seems like a great improvement. However, the change would have broken existing links, so, nicely, they stuck in 302 redirects. Visit http://www.sparkfun.com/commerce/product_info.php?products_id=9266 and your browser will issue GET /commerce/product_info.php?products_id=9266 HTTP/1.1, to which Sparkfun's servers reply HTTP/1.1 302 Found with Location: http://www.sparkfun.com/products/9266. This 302 redirect is caught by Stack Exchange's link validator as a broken link. It's not broken; it works just fine. Here, try it: http://www.sparkfun.com/commerce/product_info.php?products_id=9266. I understand that a 302 redirect is intended to be a temporary redirect, while a 301 should be used for permanent changes, per RFC 2616. That said, Wikipedia and common practice treat 302 as a general-purpose redirect. Who is in error in this situation? Is this an error in Sparkfun's redirect implementation or in Stack Exchange's URL validator?

    Read the article

  • Canonicals with differing content

    - by Jimbo Jonny
    Interesting conundrum here with canonicals. Let's say I have a site with a "verified" system where other websites can become so-and-so verified. The URL they send people to in order to confirm verification is something like blah.com/verify/company1 or blah.com/verify/company2. But logically, blah.com/verify itself is not verifying anyone in particular, so it redirects to the signup form for getting verified, at blah.com/verify/register. As far as the actual registered companies go, I figure it doesn't make sense to index every individual URL when the only tiny difference is which company name it's saying yay or nay to being verified, so canonicals could come in handy on those pages to condense the indexing. Yet making blah.com/verify the canonical "hub" doesn't work well, because it's a signup form, not a verification page, so technically it has quite different content from the various verification pages themselves. At the same time, it's a bit unfair to choose one company to point all the canonical benefit to and use that as the "hub", yet a bit wasteful to have Google index every individual verification page and spread out all that link juice. Basically, I'm just looking for advice: what's best for this from a search engine standpoint?
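
    For reference, the mechanism under discussion is the canonical link element; a sketch of how it would appear on one of the per-company pages (URLs are the poster's hypothetical ones, and whether /verify is an appropriate target is exactly the open question):

        <!-- On blah.com/verify/company1 -->
        <link rel="canonical" href="http://blah.com/verify" />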

    Read the article

  • Alternative to nofollow: custom 302 url shortener?

    - by Dogweather
    Here's the scenario: lots of blogging platforms make it tedious to insert nofollow into links within the post content, i.e. you need to edit the HTML, format it correctly, etc. I have a client who posts lots of content with links that should be nofollow'ed, and I thought of a novel way to handle this, since the blogging platform they're using makes it hard: I install a URL-shortener web app on the client's domain. The shortener works as normal, except that it redirects via 302 instead of 301. The PageRank will therefore stay at the shortener's domain and not flow on to the target site. Part 2: in order to get the PageRank to collect meaningfully, say on the site's home page, the shortened URLs would be generated like this: /link?12345 instead of /link/12345. Then the path /link would 301 to the home page. This way the id is a param, not a path element, and thus all the incoming shortened links are going to one path, which transfers PageRank to the home page. So that's my idea. I wanted to see if anybody could find problems with it. Thanks!
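
    A sketch of the proposed scheme in Apache terms, assuming an .htaccess at the document root and a hypothetical shortener.php that looks up the id and answers with a 302:

        RewriteEngine On
        # An id is present: hand the request to the shortener script,
        # which resolves the id and issues the 302 to the target.
        RewriteCond %{QUERY_STRING} ^\d+$
        RewriteRule ^link$ /shortener.php [L]
        # No id: permanently redirect bare /link to the home page so all
        # inbound link equity pools on one URL.
        RewriteCond %{QUERY_STRING} ^$
        RewriteRule ^link$ / [R=301,L]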

    Read the article

  • No Obvious Answer - Query-Strings and Javascript

    - by nchaud
    Say I have this main page, /my-site/all-my-bath-soaps, which lists all my products. It has a search-filter text box that uses JavaScript to filter the products shown on that page (the URL doesn't change as you filter). Now, from many other parts of the site, I want to navigate to this products page and see specific products. E.g. <a href="/my-site/all-my-bath-soaps?filter='Nivea-Soap'"> will go to /all-my-bath-soaps and apply JavaScript filtering to show just that product, hiding the DOM nodes for all the other products. The problem is that if the user changes the text in the filter from 'Nivea-Soap' to 'Lynx', the JavaScript will work fine and show the new products, but the URL stays at ?filter='Nivea-Soap'. Is there anything I can do about this? Of course, I don't want to reload the page with a new query string every time they change the search criteria. It'd be great to move the ?filter=... criteria into POST data instead, but I don't know how to do that with a plain link...
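
    One approach, sketched with the HTML5 History API (the #filter element id is an assumption): keep the query string in sync with the filter text without triggering a reload.

        // Keep the address bar in sync with the current filter text.
        document.getElementById('filter').addEventListener('input', function () {
            var url = location.pathname +
                      '?filter=' + encodeURIComponent(this.value);
            history.replaceState(null, '', url);  // no reload, no history spam
        });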

    Read the article

  • Somehow Google considers a properly 301'd URL as 200 and is still indexing the new content in old page?

    - by user2178914
    We redirected all the old URLs to the new ones properly using .htaccess. The problem is that Google somehow is still finding content at the old page (which it shouldn't) and stores it in the cache rather than under the new URL. For example: old page http://www.natures-energies.com/iching.htm; new page http://www.natures-energies.com/index.php?option=com_content&view=article&id=760. If you type the old URL into the browser, it redirects. If you fetch the old URL as Googlebot in Webmaster Tools, the header says 301/permanently redirected. If I try to crawl as any other bot, it still says 301 redirected. Even if you click the old link in Google, it redirects to the new URL. Only in its cache does it show the old URL, and moreover it shows the new content in it! I am stumped as to how Google manages to grab the new content and put it under the old URL instead of the new one. One more interesting thing: if I try a cache lookup for the new page, it shows a cache of the new content with the old URL! Any help would be appreciated; I am at my wits' end. I think I have tried almost everything. Is there anything I'm missing? You can use this search to find the old URLs; maybe you'll see some patterns that I missed: site:www.natures-energies.com inurl:htm -inurl:https|index
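
    A quick sanity check (an editor's sketch, not from the post): fetch the old URL as a plain HTTP client and confirm the status line and Location header, which rules out rewrite rules that are conditional on the user agent.

        # Show only the response headers for the old URL:
        curl -I http://www.natures-energies.com/iching.htm
        # Expected, if the rules apply to all clients:
        #   HTTP/1.1 301 Moved Permanently
        #   Location: http://www.natures-energies.com/index.php?option=com_content&view=article&id=760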

    Read the article

  • What URL scheme would be better for "nested" resources in a RESTful application?

    - by Luke404
    Let's say we want a RESTful web service to manage some logically nested resources, where each instance of resource 'B' is logically contained by an instance of resource 'A'. The first example that comes to mind, working as a sysadmin, is email accounts and their domains: [email protected], [email protected], [email protected], ... What URL scheme would you suggest? At first I'd try /domain/[domainname] and /domain/[domainname]/account/[accountname]. Is that in line with RESTful principles? Or should I go with something like /domain/[domainname] plus /account/[account@domainname], or anything else?

    Read the article

  • COMPLETE list of HTML tag attributes which have a URL value?

    - by system PAUSE
    Besides the following, are there any HTML tag attributes that have a URL as their value?

    href attribute on tags: <link>, <a>, <area>
    src attribute on tags: <img>, <iframe>, <frame>, <embed>, <script>, <input>
    action attribute on tags: <form>
    data attribute on tags: <object>

    Looking for tags in wide usage, including non-standard tags and old browsers as well as HTML 4.01, HTML 5, and XHTML. Yes, this question is kinda lightweight, but I googled around for about 45 minutes and didn't find this data centralized anywhere, so I figure it might help some other developer to have it here. Plus I'm sure I'm missing something. Feel free to repeat/reorganize this list in your answer. Upvoting the most complete answers will probably be most helpful to others.

    Read the article

  • How can I accept a hash mark in a URL via $_GET?

    - by bccarlso
    From what I have been able to understand, hash marks (#) aren't sent to the server, so it doesn't seem likely that I will be able to use raw PHP to parse data like in the URL below: index.php?name=Ben&address=101 S 10th St Suite #301. I'm looking to pre-populate form fields with this $_GET data. How would I do this with JavaScript (or jQuery), and is there a fallback that wouldn't break my form for people not using JavaScript? Currently, if there is a hash (usually in the address field), everything after it is not parsed into or stored in $_GET.
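
    The root cause is that everything after an unencoded # is treated as a fragment and never leaves the browser. A sketch of the fix at link-generation time, assuming PHP builds the URL:

        <?php
        // Percent-encode the value so '#' survives the trip as %23.
        $address = '101 S 10th St Suite #301';
        $url = 'index.php?name=Ben&address=' . urlencode($address);
        echo $url;  // index.php?name=Ben&address=101+S+10th+St+Suite+%23301
        ?>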

    Read the article

  • How to get link_to in Rails output an SEO friendly url?

    - by Jason
    Hi, my link_to tag is: <%= link_to("My test title", {:controller => "search", :action => "for-sale", :id => listing.id, :title => listing.title, :search_term => search_term}) %> and it produces this ugly URL: http://mysite.com/search/for-sale/12345?title=premium+ad+%2B+photo+%5Btest%5D. How can I get link_to to generate http://mysite.com/search/for-sale/listing-title/search-term/12345 instead? I've been trying this a few different ways and cannot find much online; I'd really appreciate any help!
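
    Extra hash keys only become path segments if a route declares them; a Rails 2-era sketch (parameter names mirror the question, everything else is an assumption):

        # config/routes.rb -- declare :title and :search_term as segments:
        map.connect 'search/for-sale/:title/:search_term/:id',
                    :controller => 'search', :action => 'for-sale'

        # In the view, slug the free-text values so they are URL-safe:
        <%= link_to "My test title",
              {:controller => "search", :action => "for-sale",
               :id => listing.id,
               :title => listing.title.parameterize,
               :search_term => search_term.parameterize} %>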

    Read the article

  • URL valid characters. java to validate.

    - by Chez
    A string like 'www.test.com' is good. A string like 'www.888.com' is good. A string like 'stackoverflow.com' is good. A string like 'GOoGle.Com' is good. Why? Because those are valid URLs; it does not necessarily matter whether they have been registered or not. Bad strings are 'goog*d\x' and 'manydots...com'. Why? Because you can't register those URLs. If I have a string in Java which is supposed to be a good URL, what's the best way to validate it? Thanks a lot.
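
    One common approach, sketched below: validate the hostname with a regular expression (stricter than full RFC 3986 parsing, and intentionally rejecting empty labels such as those in 'manydots...com'):

        import java.util.regex.Pattern;

        public class HostnameValidator {
            // One label: alphanumeric, optional inner hyphens, 1-63 chars;
            // whole name: labels joined by single dots, alphabetic TLD.
            private static final Pattern HOSTNAME = Pattern.compile(
                "^(?=.{1,253}$)([a-zA-Z0-9]([a-zA-Z0-9-]{0,61}[a-zA-Z0-9])?\\.)+[a-zA-Z]{2,}$");

            public static boolean isValidHostname(String s) {
                return HOSTNAME.matcher(s).matches();
            }

            public static void main(String[] args) {
                System.out.println(isValidHostname("www.test.com"));    // true
                System.out.println(isValidHostname("GOoGle.Com"));      // true
                System.out.println(isValidHostname("manydots...com"));  // false
                System.out.println(isValidHostname("goog*d\\x"));       // false
            }
        }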

    Read the article

  • Easy way to parse a url in C++ cross platform?

    - by Andrew Bucknell
    I need to parse a URL to get the protocol, host, path, and query in an application I am writing in C++. The application is intended to be cross-platform. I'm surprised I can't find anything that does this in the Boost or POCO libraries. Is it somewhere obvious I'm not looking? Any suggestions on appropriate open-source libs? Or is this something I just have to do myself? It's not super complicated, but it seems such a common task that I am surprised there isn't a common solution.
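
    For what it's worth, POCO does ship a Poco::URI class for exactly this. Failing that, a minimal cross-platform sketch using the parsing regex from RFC 3986, Appendix B (requires only C++11's <regex>):

        #include <iostream>
        #include <regex>
        #include <string>

        int main() {
            // The generic-URI split regex from RFC 3986, Appendix B.
            const std::regex rfc3986(
                R"(^(([^:/?#]+):)?(//([^/?#]*))?([^?#]*)(\?([^#]*))?(#(.*))?)");
            std::smatch m;
            std::string url = "http://example.com/path/to/page?name=ferret#nose";
            if (std::regex_match(url, m, rfc3986)) {
                std::cout << "scheme:    " << m[2] << "\n"   // http
                          << "authority: " << m[4] << "\n"   // example.com
                          << "path:      " << m[5] << "\n"   // /path/to/page
                          << "query:     " << m[7] << "\n";  // name=ferret
            }
        }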

    Read the article

  • Is there a semi-standard way to associate a URL with an IRC user?

    - by DRMacIver
    I'm in the process of doing some identity consolidation, so I'm providing URLs to me at various locations on the internet. I'm quite active on IRC, so this naturally led me to wonder whether there was a way to provide a link to my IRC presence. This led to me finding http://www.w3.org/Addressing/draft-mirashi-url-irc-01.txt, which appears to be a draft of an RFC for associating URLs with IRC, and which suggests that I would be irc://irc.freenode.net/DRMacIver,isnick, which seems a little on the lame side. Further, this RFC draft has very thoroughly expired (February 28, 1997). On the other hand, it seems to be implemented in ChatZilla at least: http://www.mozilla.org/projects/rt-messaging/chatzilla/irc-urls.html. So does anyone know if there's a superseding RFC and/or any other de facto standard for this?

    Read the article

  • Using JavaScript to change the URL used when a page is bookmarked...

    - by user30997
    JavaScript doesn't allow you to update window.location without triggering a reload. While I agree with this policy in principle (it shouldn't be possible to visit my website and have JavaScript change the location bar to read www.yourbankingsite.com), I believe that it should be possible to change www.foo.org/index to www.foo.org/help. The only reason I care about this is bookmarking. I'm working on a photo browser, and when a user is previewing a particular image, I want that image to be the default if they should bookmark that page. For example, if they are viewing foo.org/preview/images0-30 and they click on image #15, that image is expanded to a medium-sized view. If they then bookmark the page, I want the bookmark URL to be foo.org/preview/images0-30/active15. Any thoughts, or is there a security barrier on this one as well? I can certainly understand the same policy being applied here, but one can dream.
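
    A sketch of the closest bookmarkable state available without a reload: the fragment rather than a path segment (function name and id scheme are assumptions; newer browsers additionally allow same-origin path changes via history.pushState):

        // Record the active image in the fragment; changing it never
        // triggers a reload, and it is saved with the bookmark.
        function showImage(n) {
            // ...expand image n to the medium-sized view, then:
            location.hash = 'active' + n;  // foo.org/preview/images0-30#active15
        }
        // On page load, restore a bookmarked state:
        var m = location.hash.match(/^#active(\d+)$/);
        if (m) { showImage(Number(m[1])); }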

    Read the article

  • Can I get a "base URL" in Wordpress within a template file?

    - by alex
    Usually in my PHP apps I have a base URL set up so I can do things like this: <a href="<?php echo BASE_URL; ?>tom/jones">Tom</a>. Then I can move my site from development to production and swap it easily, with the change applying site-wide (it also seems more reliable than <base href="" />). I'm doing up a WordPress theme, and I am wondering: does WordPress have anything like this built in, or do I need to define my own? I can see ABSPATH, but that is the absolute path in the file system, not something relative to the document root.
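
    WordPress does ship such helpers; a short sketch (the path is an assumption):

        <?php
        // home_url() returns the configured site address (WP 3.0+);
        // older themes use get_bloginfo('url') for the same value.
        ?>
        <a href="<?php echo home_url('/tom/jones'); ?>">Tom</a>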

    Read the article

  • WIX: How to register an Application to a URL Protocol?

    - by NOP slider
    In WiX 3.5 you can register file types easily:

        <ProgId Id="MyApp.File" Description="MyApp File" Icon="MyAppEXE" IconIndex="0">
          <Extension Id="ext" ContentType="application/x-myapp-file">
            <Verb Id="open" Command="&amp;Open" TargetFile="MyAppEXE" Argument="&quot;%1&quot;"/>
          </Extension>
        </ProgId>

    What if I want to register a URL protocol, as specified here? Obviously, it has no extension, so where would I put the Verb tag? Or should I use another approach? Thanks.
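
    A hedged sketch of one approach: URL protocols have no file extension, so they are registered as plain registry keys (the layout MSDN describes for registering an application to a URL protocol) rather than via Extension/Verb. The protocol name "myapp" and the MyAppEXE file id are assumptions.

        <Component Id="UrlProtocolMyApp" Guid="PUT-GUID-HERE">
          <RegistryKey Root="HKCR" Key="myapp">
            <RegistryValue Type="string" Value="URL:MyApp Protocol" />
            <RegistryValue Type="string" Name="URL Protocol" Value="" />
            <RegistryKey Key="shell\open\command">
              <RegistryValue Type="string"
                             Value="&quot;[#MyAppEXE]&quot; &quot;%1&quot;" />
            </RegistryKey>
          </RegistryKey>
        </Component>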

    Read the article

  • Should I still use querystrings for page number, etc, when using ASP.NET 4 URL Routing?

    - by Scott
    I am switching from Intelligencia's UrlRewriter to the new Web Forms routing in ASP.NET 4.0. I have it working great for basic pages; however, in my e-commerce site, when browsing category pages, I previously used query strings built into my pager control to control paging. An old URL (with URL rewriting) would be: http://www.mysite.com/Category/Arts-and-Crafts_17. I now have a MapPageRoute defined in global.asax as: routes.MapPageRoute("category-browse", "Category/{name}_{id}", "~/CategoryPage.aspx"); This works great. Now somebody clicks to go to page 2. Previously I would have just tacked on ?page=2 as the query string. How do I handle this using Web Forms routing? I know I can do something like http://www.mysite.com/Category/Arts-and-Crafts_17/page/2, but in addition to page I can have filters, age ranges, gender, etc. Should I keep defining routes that handle these variables, or should I continue using query strings, and can I define a route that allows me to use my query strings as before?
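
    Routes and query strings are not mutually exclusive; a sketch of reading both in the routed page (names mirror the question):

        // In CategoryPage.aspx.cs: the route supplies the stable identity,
        // while the query string carries the volatile filters.
        string id   = (string)Page.RouteData.Values["id"];    // "17"
        string name = (string)Page.RouteData.Values["name"];  // "Arts-and-Crafts"
        string page = Request.QueryString["page"];            // "2", or null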

    Read the article

  • Is it the filename or the whole URL used as a key in browser caches?

    - by Richard Turner
    It's common to want browsers to cache resources (JavaScript, CSS, images, etc.) until there is a new version available, and then to ensure that the browser fetches and caches the new version instead. One solution is to embed a version number in the resource's filename, but will placing the resources to be managed in this way in a directory with a revision number in it do the same thing? Is the whole URL to the file used as a key in the browser's cache, or is it just the filename itself plus some metadata? If my code changes from fetching '/r20/example.js' to '/r21/example.js', can I be sure that revision 20 of example.js was cached, but now revision 21 has been fetched instead and is now cached?

    Read the article

  • Helicon ISAPI Rewrite Proxy 500 Internal Server Error

    - by Rob Stevenson-Leggett
    Hi, I have a website running at www.domain.com. The client now wants the website to appear to be running under www.otherdomain.com/whatson/brand/. Since the website is Umbraco, it won't run under a subfolder, so I wanted to use ISAPI_Rewrite to proxy requests to www.domain.com using the following rule in a .htaccess at www.otherdomain.com/whatson/brand/:

        RewriteRule ^(.*)$ http://www.domain.com/$1 [P,L]

    However, when I apply this I get an ugly 500 Internal Server Error, and there's nothing in the event log. So I turned on ISAPI logging and can see the following:

        111.111.111.111 111.111.111.111 Tue, 12-Jan-2010 13:05:24 GMT [www.otherdomain.com/sid#2045305275][rid#26337200/initial] (2) init rewrite engine with requested uri /whatson/brand/home.aspx

    Then it tests all the other rewrite rules on the server. Then this:

        111.111.111.111 111.111.111.111 Tue, 12-Jan-2010 13:05:24 GMT [www.otherdomain.com/sid#2045305275][rid#26337200/initial] (1) Htaccess process request w:\websites\otherdomain.com\docs2\whatson\brand\.htaccess
        111.111.111.111 111.111.111.111 Tue, 12-Jan-2010 13:05:24 GMT [www.otherdomain.com/sid#2045305275][rid#26337200/initial] (3) applying pattern '^(.*)$' to uri 'home.aspx'
        111.111.111.111 111.111.111.111 Tue, 12-Jan-2010 13:05:24 GMT [www.otherdomain.com/sid#2045305275][rid#26337200/initial] (2) forcing proxy-throughput with http://www.domain.com/home.aspx
        111.111.111.111 111.111.111.111 Tue, 12-Jan-2010 13:05:24 GMT [www.otherdomain.com/sid#2045305275][rid#26337200/initial] (1) go-ahead with proxy request http://www.domain.com/home.aspx [OK]
        111.111.111.111 111.111.111.111 Tue, 12-Jan-2010 13:05:24 GMT [www.otherdomain.com/sid#2045305275][rid#26337200/initial] (2) rewrite 'home.aspx' -> '/whatson/brand/home.aspxx.rwhlp?p=0'
        111.111.111.111 111.111.111.111 Tue, 12-Jan-2010 13:05:24 GMT [www.otherdomain.com/sid#2045305275][rid#26337200/initial] (2) internal redirect with /whatson/brand/home.aspxx.rwhlp?p=0 [INTERNAL REDIRECT]

    So it appears to work according to the logs, but I'm not seeing the page come through. It's worth noting that www.domain.com and www.otherdomain.com are on the same box. LogLevel is 3 and RewriteLogLevel is 3 (I've tried 9 and debug, but there is too much traffic going through the other sites on the box). Any ideas?

    Read the article

  • ajax redirect dillema, how to get redirect URL OR how to set properties for redirect request

    - by Anthony
    First, I'm working in Google Chrome, if that helps. Here is the behavior: I send an XHR request via jQuery to a remote site (this is a Chrome extension, and I've set all of the cross-site settings...):

        $.ajax({
            type: "POST",
            contentType: "text/xml",
            url: some_url,
            data: some_xml,
            username: user,
            password: pass,
            success: function (data, status, xhr) {
                alert(data);
            },
            error: function (xhr, status, error) {
                alert(xhr.status);
            }
        });

    The URL being hit returns a 302 (this is expected), and Chrome follows the redirect (also expected). The new URL returns a prompt for credentials, which are not carried over from the original request, so Chrome shows a login dialog. If I put in the original credentials, I get back a response about an invalid request (it's a valid HTTP request -- 200 -- the remote server just doesn't like one of the headers). When viewing the developer window in Chrome, there are two requests sent: the first is to the original URL with all settings from the ajax request; the second is to the redirect URL, with a method of "GET", nothing from the "POST" body, and no credentials. I am at a loss as to what I can do. I either need to (1) get the redirect URL so I can send a second request (xhr.getResponseHeader("Location") does NOT work), (2) have the redirect request preserve the settings from the original request, or (3) get the final URL that the error came from so I can send another request. Ideally I don't want the user to have to put in their credentials a second time in this dialog box, but I'll take what I can get if I can just get the final URL.
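
    For the "final URL" option, a sketch with a bare XMLHttpRequest (responseURL postdates this question and only exists in newer browsers; variable names mirror the post):

        // After redirects are followed, responseURL holds the URL the
        // browser actually ended up at, even when the status is an error.
        var xhr = new XMLHttpRequest();
        xhr.open('POST', some_url, true, user, pass);
        xhr.setRequestHeader('Content-Type', 'text/xml');
        xhr.onloadend = function () {
            console.log('status:    ' + xhr.status);
            console.log('final URL: ' + xhr.responseURL);  // post-302 location
        };
        xhr.send(some_xml);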

    Read the article

  • Ionics Rewrite Filter setup on IIS 5.1

    - by Neil Aitken
    I'm trying to configure IIRF 2 on IIS 5.1 running on XP Pro, so that I can run the Zend Framework. I've managed to get the filter running on a second website that I set up using one of the IIS admin scripts. When I go to iirfStatus, the problem I see is that the .ini path for the site is pointing to c:\windows\system32\Iirf.ini rather than the site root. If I try creating an IIS application under IIS -> Website Properties -> Home Directory, then iirfStatus stops working entirely. Any ideas how I can set the ini path correctly, or will I only be able to get away with this on a proper server edition of IIS?

    Read the article

  • Using CakePHP with GoDaddy IIS 7 IIS7 and Microsoft URL Rewriter

    - by ricky
    Hi, I'm trying to move a CakePHP app from a Windows/Apache setup to a GoDaddy shared IIS7 setup. It's been easy to migrate except for the Apache mod_rewrite part, which obviously won't work in IIS7. I basically have no URL-rewriting capability, which is crucial for Cake to work. GoDaddy now offers MS URL Rewriter, but they don't offer technical support for it, and I haven't seen any blog post that discusses how to do this in detail. I'd really like to avoid third-party software, especially since GoDaddy provides MS URL Rewriter, which ought to be more than sufficient. The mod_rewrite directives that allow Cake to work on GoDaddy look ridiculously easy (pasted below); can someone help me convert them to a web.config I can use with URL Rewriter? The URL Rewriter manual is really long and complicated, and I'd rather not have to read the whole thing if I don't have to. Here's the contents of the Apache .htaccess file:

        <IfModule mod_rewrite.c>
            RewriteEngine On
            RewriteCond %{REQUEST_FILENAME} !-d
            RewriteCond %{REQUEST_FILENAME} !-f
            RewriteRule ^(.*)$ /index.php?url=$1 [QSA,L]
        </IfModule>

    Here's a link that discusses GoDaddy's limited support for URL Rewriter: http://stackoverflow.com/questions/416727/url-rewriting-under-iis-at-godaddy Many thanks! Rich
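
    Assuming GoDaddy's "MS URL Rewriter" is the standard IIS URL Rewrite Module, a sketch of the equivalent web.config (the rule name is arbitrary):

        <?xml version="1.0" encoding="UTF-8"?>
        <configuration>
          <system.webServer>
            <rewrite>
              <rules>
                <rule name="CakePHP" stopProcessing="true">
                  <match url="^(.*)$" />
                  <conditions>
                    <add input="{REQUEST_FILENAME}" matchType="IsDirectory" negate="true" />
                    <add input="{REQUEST_FILENAME}" matchType="IsFile" negate="true" />
                  </conditions>
                  <!-- appendQueryString="true" plays the role of [QSA] -->
                  <action type="Rewrite" url="index.php?url={R:1}" appendQueryString="true" />
                </rule>
              </rules>
            </rewrite>
          </system.webServer>
        </configuration>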

    Read the article
