Search Results

Search found 19174 results on 767 pages for 'restful url'.

  • Creating Application URL for iPhone

    - by pion
    I am preparing to submit my iPhone app for approval. This is my first time, and one of the requirements is an "Application URL". I have done the following to create it:

    1. Click Foo-Info.plist
    2. Right-click "Information Property List"
    3. Click "Add Row"
    4. Select "URL types"
    5. Expand "URL Types", then expand Item 0
    6. Type "com.mycompany.Foo" in the Value field for the "URL identifier" key

    I am wondering if I have done this correctly. Thanks in advance for your help.
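
    For reference, the resulting entry in the raw Info.plist XML would look something like this (a sketch: "URL types" is backed by the CFBundleURLTypes key, and "URL identifier" by CFBundleURLName):

        <key>CFBundleURLTypes</key>
        <array>
            <dict>
                <key>CFBundleURLName</key>
                <string>com.mycompany.Foo</string>
            </dict>
        </array>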

    Read the article

  • UIWebView comparing current and defined URLs with a loop depending on result

    - by Syleron
    I am trying to compare the current URL in webView with a defined URL, say google.com. So, in theory:

        NSURLRequest *currentRequest = [webView request];
        NSURL *currentURL = [currentRequest URL];

    would give us our current URL, and

        NSString *newurl = @"http://www.google.com";

    would give us the defined URL to compare against. Then:

        while (!currentURL == newurl) {
            // do whatever here, because currentURL does not equal newurl
        }

    This does not seem to work, though. Solutions?
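
    A likely culprit, for what it's worth: currentURL is an NSURL and newurl is an NSString, so == compares object pointers rather than contents, and the ! binds to currentURL before the comparison even happens. A minimal sketch comparing the string forms instead:

        NSString *current = [[[webView request] URL] absoluteString];
        // Note: the loaded URL may carry a trailing slash, e.g. "http://www.google.com/"
        if (![current isEqualToString:@"http://www.google.com"]) {
            // do whatever here, because the current URL does not equal the defined one
        }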

    Read the article

  • URL validation in Ruby on Rails

    - by jpallavi
    The URL field should also accept a URL entered as "www.abc.com". If the user enters a URL like this, it should automatically be prefixed with "http://", so the value saved in the database is "http://www.abc.com". If the user enters the URL as "http://www.xyz.com", the system should not prepend "http://" again. The user should also be able to save a URL with "https://". What is the code for this in Ruby on Rails?
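
    One common way to do this is a before_validation callback that prepends the scheme only when none is present. A sketch, assuming an ActiveRecord model with a url attribute (the Link class name is hypothetical):

        class Link < ActiveRecord::Base
          before_validation :prepend_protocol

          private

          # Add "http://" unless the URL already starts with http:// or https://
          def prepend_protocol
            self.url = "http://#{url}" if url.present? && url !~ %r{\Ahttps?://}
          end
        end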

    Read the article

  • "Share on LinkedIn" widget chokes on encoded spaces in url param

    - by David Droddy
    Does anyone know why I am not able to include my own URL-encoded URL params with URL-encoded spaces? See the URL on my jsBin page, constructed from LinkedIn's example; I have added (%3FnestedParam%3Done%20space) at the end of the "URL" value. Then, if you remove the encoded space (%3FnestedParam%3DoneSpace), it works fine. Try it out: http://jsbin.com/acosa3/3 Thanks!

    Read the article

  • Extracting numbers from a URL using JavaScript?

    - by stormist
    var exampleURL = '/example/url/345234/test/';
    var numbersOnly = [?]

    The /url/ and /test/ portions of the path will always be the same. Note that I need the numbers between /url/ and /test/. In the example URL above, the placeholder word "example" might be numbers too from time to time, but in that case it shouldn't be matched: only the numbers between /url/ and /test/. Thanks!
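
    A sketch of one regex that does this, anchoring on the fixed /url/ and /test/ segments so that digits elsewhere in the path are ignored:

        var exampleURL = '/example/url/345234/test/';
        // Capture only the digits sitting between "/url/" and "/test"
        var match = exampleURL.match(/\/url\/(\d+)\/test/);
        var numbersOnly = match ? match[1] : null;  // "345234"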

    Read the article

  • Get current URL in Python

    - by Alex
    How would I get the current URL with Python? I need to grab the current URL so I can check it for query strings, e.g.:

        requested_url = "URL_HERE"
        url = urlparse(requested_url)
        if url[4]:
            params = dict([part.split('=') for part in url[4].split('&')])

    Also, this is running on Google App Engine.
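
    On App Engine, the request object handed to a webapp handler already carries the full URL. A minimal sketch (the MainPage handler is hypothetical):

        from urlparse import urlparse
        from google.appengine.ext import webapp

        class MainPage(webapp.RequestHandler):
            def get(self):
                requested_url = self.request.url       # full URL of the current request
                url = urlparse(requested_url)
                if url[4]:                             # url[4] is the query string
                    params = dict(part.split('=') for part in url[4].split('&'))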

    Read the article

  • cURL works but PHP cURL fails to internet [migrated]

    - by wrk2bike
    Trying to diagnose an issue using PHP to cURL to an Internet location on a Red Hat Linux server. cURL is installed and working, and

        <?php var_dump(curl_version()); ?>

    shows all the correct information in the output. The issue is that I can use PHP's cURL to reach localhost on the box itself, but not the Internet (see below). Normally I'd suspect the firewall, but I can cURL from the command line to the Internet without a problem, and the box can also update its own software packages, etc. What am I missing? My test is:

        <?php
        function http_head_curl($url, $timeout = 30)
        {
            $ch = curl_init();
            curl_setopt($ch, CURLOPT_URL, $url);
            curl_setopt($ch, CURLOPT_TIMEOUT, $timeout);
            curl_setopt($ch, CURLOPT_HEADER, 1);
            curl_setopt($ch, CURLOPT_NOBODY, 1);
            curl_setopt($ch, CURLOPT_FOLLOWLOCATION, 1);
            curl_setopt($ch, CURLOPT_RETURNTRANSFER, 1);
            $res = curl_exec($ch);
            if ($res === false) {
                throw new RuntimeException("cURL exception: " . curl_errno($ch) . ": " . curl_error($ch));
            }
            return trim($res);
        }

        // Succeeds, displaying headers:
        echo(http_head_curl('localhost'));

        // Fails:
        echo(http_head_curl('www.google.com'));
        ?>

    Read the article

  • Duplicate content issue after URL-change with 301-redirects

    - by David
    We've got the following problem: we changed all URLs on our site from oldURL.html to newURL.html and set up 301 redirects (ca. 600 URLs). Google re-crawled our site and indexed all the new URLs (newURL.html), but didn't crawl the old URLs (oldURL.html) again, as there were no internal links pointing at them anymore after the URL change. This resulted in massive ranking drops, because (i) Google thought oldURL.html had exactly the same content as newURL.html, causing duplicate-content issues, and (ii) Google did not transfer the juice from oldURL to newURL, because the 301 redirect was never noticed. We have now reset all internal links to point at the old URLs again, which then redirect to the new URLs, in the hope that Google will re-crawl the pages once there are internal links pointing at them. This is partially happening, but at a really low speed, so it would take multiple months for Google to notice all the redirects. I guess that's because Google thinks: "Aah, I already know oldURL.html, so no need to re-crawl it." Possible solutions we thought of are:

    1. Submitting as many of the old URLs as possible via Webmaster Tools, to manually trigger a crawl (doing that already).
    2. Submitting a sitemap with all the old URLs (the standard format is sketched below); we're not sure this is a good idea, though, because Google does not seem to like 301 redirects in a sitemap.

    Both solutions are imperfect, and we cannot wait three months just to regain our old rankings. What are your ideas? Best, David
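
    For reference, the sitemap option would just be the standard sitemap protocol listing the old URLs (domain and filename here are placeholders):

        <?xml version="1.0" encoding="UTF-8"?>
        <urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">
          <url>
            <loc>http://www.example.com/oldURL.html</loc>
          </url>
        </urlset>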

    Read the article

  • Apache: Virtual Host and .htaccess for URL Rewriting not working

    - by parth
    I have configured a virtual host on my local machine and everything is working fine. Now I want to use SEO-friendly URLs, and to achieve this I have used the .htaccess file. My virtual host configuration is:

        <VirtualHost *:80>
            DocumentRoot "C:/xampp/htdocs/ypp"
            ServerName ypp.com
            ServerAlias www.ypp.com
            ##ErrorLog "logs/dummy-host2.localhost-error.log"
            ##CustomLog "logs/dummy-host2.localhost-access.log" combined
        </VirtualHost>

    and my .htaccess file has:

        AllowOverride All
        RewriteEngine On
        RewriteBase /ypp/
        RewriteRule ^/browse$ /browse.php
        RewriteRule ^/browse/([a-z]+)$ /browse.php?cat=$1
        RewriteRule ^/browse/([a-z]+)/([a-z]+)$ /browse.php?cat=$1&subcat=$2

    The above .htaccess setting is not working. After that I modified my virtual host settings, and that version works. The new virtual host setting is:

        <VirtualHost *:80>
            RewriteEngine On
            RewriteRule ^/browse$ /browse.php
            RewriteRule ^/browse/([a-z]+)$ /browse.php?cat=$1
            RewriteRule ^/browse/([a-z]+)/([a-z]+)$ /browse.php?cat=$1&subcat=$2
            ServerAdmin [email protected]
            DocumentRoot "C:/xampp/htdocs/ypp"
            ServerName ypp.com
            ServerAlias www.ypp.com
            ##ErrorLog "logs/dummy-host2.localhost-error.log"
            ##CustomLog "logs/dummy-host2.localhost-access.log" combined
            <Directory "C:/xampp/htdocs/ypp">
                AllowOverride All
            </Directory>
        </VirtualHost>

    Please let me know where I am going wrong in the .htaccess file for URL rewriting. I do not want to keep the rules in the virtual host, since for every change I have to restart Apache.
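
    Two things stand out in the configs above, hedged as a best guess: AllowOverride is only valid in the server or virtual-host configuration (inside a <Directory> block), not inside .htaccess itself, and in per-directory (.htaccess) context mod_rewrite matches against the path with the leading slash already stripped, so rules beginning with ^/ never fire there. A sketch of the .htaccess with those two changes (RewriteBase is / because the DocumentRoot is the site root):

        RewriteEngine On
        RewriteBase /
        RewriteRule ^browse$ /browse.php [L]
        RewriteRule ^browse/([a-z]+)$ /browse.php?cat=$1 [L]
        RewriteRule ^browse/([a-z]+)/([a-z]+)$ /browse.php?cat=$1&subcat=$2 [L]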

    Read the article

  • URL length and content optimised for SEO

    - by Brendan Vogt
    I have done some reading on what URLs should look like for search engine optimisation, but I am curious to know how mine should look, and I need some advice. I have a tutorial website, and my categories are something like: Web Development -> Client Side -> JavaScript. So if I have a tutorial called "What is JavaScript?", is it good to have a URL that looks something like:

        www.MyWebsite.com/web-development/client-side/javascript/what-is-javascipt

    Or would something like this be more appropriate:

        www.MyWebsite.com/tutorials/what-is-javascipt

    Just curious, because I also read that it is wise to have keywords in your URLs. Do I need to add the identifiers of each category in the link as well, something like:

        www.MyWebsite.com/1/web-development/5/client-side/15/javascript/100/what-is-javascipt

    where 1 is the unique identifier (primary key) of the category "web development", 5 of the category "client side", 15 of the category "javascript", and 100 of the tutorial "what is javascript"?

    UPDATE: This is not a programming question, so can someone please help migrate it to the correct Q&A site without downvoting my questions?

    Read the article

  • Should a link validator report 302 redirects as broken links?

    - by Kevin Vermeer
    A while ago, sparkfun.com changed their URL structure from

        /commerce/product_info.php?products_id=9266

    to

        /products/9266

    This is nice, right? We don't need to know that it is (or was) a PHP page, and "commerce", "product_info", and "products_id" all told us anyway that we were looking at some products; the latter form seems like a great improvement. However, the change would have broken existing links, so, nicely, they put 302 redirects in place. Visit http://www.sparkfun.com/commerce/product_info.php?products_id=9266 and your browser will issue

        GET /commerce/product_info.php?products_id=9266 HTTP/1.1

    to which Sparkfun's servers reply

        HTTP/1.1 302 Found
        Location: http://www.sparkfun.com/products/9266

    This 302 redirect is caught by Stack Exchange's link validator as a broken link. It's not broken; it works just fine. Here, try it: http://www.sparkfun.com/commerce/product_info.php?products_id=9266 I understand that a 302 redirect is intended to be temporary, while a 301 should be used for permanent changes per RFC 2616. That said, Wikipedia and common practice treat a 302 as an ordinary redirect. Who is in error in this situation? Is this an error in Sparkfun's redirect implementation or in Stack Exchange's URL validator?

    Read the article

  • Canonicals with differing content

    - by Jimbo Jonny
    Interesting conundrum here with canonicals. Let's say I have a site with a "verified" system, where other websites can become "verified". The URL they send people to to confirm verification is something like "blah.com/verify/company1" or "blah.com/verify/company2". But logically "blah.com/verify" by itself is not verifying anyone in particular, so it redirects to the signup form for getting verified, at "blah.com/verify/register". As for the actual registered companies, I figure it doesn't make sense to index every individual URL when the only tiny difference is which company name it says yay or nay to being verified, so canonicals could come in handy on those pages to condense the indexing. Yet making "blah.com/verify" the canonical "hub" doesn't work well, because it's a signup form, not a verification page, so it technically has quite different content from the various verification pages themselves. At the same time, it's a bit unfair to pick one company to point all the canonical benefit to as the "hub", yet a bit wasteful to have Google index every individual verification page and spread out all that link juice. Basically, I'm just looking for advice: what's best here from a search engine standpoint?

    Read the article

  • Alternative to nofollow: custom 302 url shortener?

    - by Dogweather
    Here's the scenario: lots of blogging platforms make it tedious to insert nofollow into links within the post content; i.e., you need to edit the HTML, format it correctly, etc. I have a client who posts lots of content with links that should be nofollowed, and I thought of a novel way to handle this, since the blogging platform they're using makes it hard: I install a URL-shortener web app on the client's domain. The shortener works as normal, except it redirects via 302 instead of 301. The PageRank will therefore stay at the shortener's domain and not flow on to the target site. Part 2: in order to get the PageRank to collect somewhere meaningful, say on the site's home page, the shortened URLs would be generated like this: /link?12345 instead of /link/12345, and the path /link would then 301 to the home page. This way, the id is a query parameter, not a path element, and thus all the incoming shortened links point at one path, which transfers PageRank to the home page. So that's my idea. I wanted to see if anybody could find problems with it. Thanks!
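
    A sketch of the described scheme in PHP, with the lookup table standing in for however the shortener actually stores its targets (all URLs here are hypothetical):

        <?php
        // /link endpoint: "/link?12345" resolves an id, bare "/link" goes home.
        $id = isset($_SERVER['QUERY_STRING']) ? $_SERVER['QUERY_STRING'] : '';
        $targets = array('12345' => 'http://www.example.com/some-target');

        if ($id !== '' && isset($targets[$id])) {
            // 302: temporary, so PageRank stays with the shortener's domain
            header('Location: ' . $targets[$id], true, 302);
        } else {
            // Bare /link: 301 permanently to the home page, transferring PageRank
            header('Location: http://client-domain.example/', true, 301);
        }
        exit;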

    Read the article

  • No Obvious Answer - Query Strings and JavaScript

    - by nchaud
    Say I have this main page, /my-site/all-my-bath-soaps, which lists all my products. It has a search-filter text box that uses JavaScript to filter which products the visitor sees on that page (the URL doesn't change as they filter). Now, from many other parts of the site, I want to navigate to this products page and see specific products. E.g.:

        <a href="/my-site/all-my-bath-soaps?filter='Nivea-Soap'">

    will go to /all-my-bath-soaps and apply JavaScript filtering to show just that product, hiding the DOM nodes for all the other products. The problem: if the user changes the text in the filter from 'Nivea-Soap' to 'Lynx', the JavaScript works fine and shows the new products, but the URL stays at ?filter='Nivea-Soap'. Is there anything I can do about this? Of course, I don't want to reload the page with a new query string every time they change the search criteria. It would be great somehow to move the ?filter=... criteria into POST data instead, but how to do that with a plain link, I don't know...
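
    A sketch of the query-string half, i.e. reading ?filter=... on page load and handing it to the filtering code (applyFilter is hypothetical, standing in for the page's existing filter routine):

        function getFilterParam() {
            var match = window.location.search.match(/[?&]filter=([^&]*)/);
            return match ? decodeURIComponent(match[1]) : '';
        }

        var initial = getFilterParam();
        if (initial) {
            applyFilter(initial);  // hypothetical: the existing client-side filter
        }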

    Read the article

  • Somehow Google considers a properly 301'd URL as 200 and is still indexing the new content under the old URL?

    - by user2178914
    We redirected all the old URLs to new ones properly using .htaccess. The problem is that Google somehow is still finding content at the old page (which it shouldn't) and stores it in its cache under the old URL rather than the new one. For example:

        Old page: http://www.natures-energies.com/iching.htm
        New page: http://www.natures-energies.com/index.php?option=com_content&view=article&id=760

    If you type the old URL into the browser, it redirects. If you fetch the old URL as Googlebot in Webmaster Tools, the header says 301/permanently redirected. If I try to crawl as any other bot, it still says 301 redirected. Even if you click the old link in Google, it redirects to the new URL. Only in its cache does Google show the old URL, and moreover it shows the new content in it! I am stumped as to how Google manages to grab the new content and file it under the old URL instead of the new one. One more interesting thing: if I pull up the cache for the new page, it shows a cache of the new content with the old URL! Any help would be appreciated; I am at my wits' end and think I have tried almost everything. Is there anything I'm missing? You can use this search to find the old URLs; maybe you'll spot some patterns that I missed:

        site:www.natures-energies.com inurl:htm -inurl:https|index

    Read the article

  • COMPLETE list of HTML tag attributes which have a URL value?

    - by system PAUSE
    Besides the following, are there any HTML tag attributes that have a URL as their value?

        href   attribute on: <link>, <a>, <area>
        src    attribute on: <img>, <iframe>, <frame>, <embed>, <script>, <input>
        action attribute on: <form>
        data   attribute on: <object>

    Looking for tags in wide usage, including non-standard tags and old browsers, as well as HTML 4.01, HTML 5, and XHTML. Yes, this question is kinda lightweight, but I googled around for about 45 minutes and didn't find this data centralized anywhere, so I figure it might help some other developer to have it here. Plus I'm sure I'm missing something. Feel free to repeat/reorganize this list in your answer; upvoting the most complete answers will probably be most helpful to others.

    Read the article

  • How can I accept a hash mark in a URL via $_GET?

    - by bccarlso
    From what I have been able to understand, hash marks (#) aren't sent to the server, so it doesn't seem likely that I will be able to use raw PHP to parse data like in the URL below:

        index.php?name=Ben&address=101 S 10th St Suite #301

    I'm looking to pre-populate form fields with this $_GET data. How would I do this with JavaScript (or jQuery), and is there a fallback that wouldn't break my form for people not using JavaScript? Currently, if there is a hash (usually in the address field), everything after it is neither parsed nor stored in $_GET.
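
    A sketch of the client-side fix: everything after an unencoded # is a fragment and never leaves the browser, so encode the value before building the query string; %23 survives the round trip, and PHP decodes it back to '#' in $_GET:

        var address = '101 S 10th St Suite #301';
        var url = 'index.php?name=Ben&address=' + encodeURIComponent(address);
        // -> "index.php?name=Ben&address=101%20S%2010th%20St%20Suite%20%23301"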

    Read the article

  • How to get link_to in Rails output an SEO friendly url?

    - by Jason
    Hi, my link_to tag is:

        <%= link_to("My test title", {:controller => "search", :action => "for-sale", :id => listing.id, :title => listing.title, :search_term => search_term}) %>

    and produces this ugly URL:

        http://mysite.com/search/for-sale/12345?title=premium+ad+%2B+photo+%5Btest%5D

    How can I get link_to to generate:

        http://mysite.com/search/for-sale/listing-title/search-term/12345

    I've been trying this a few different ways and cannot find much online; I'd really appreciate any help!
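
    A sketch of one way this is commonly done, assuming a Rails 2-style named route and String#parameterize for slugging (the route name for_sale is hypothetical):

        # config/routes.rb
        map.for_sale 'search/for-sale/:title/:search_term/:id',
                     :controller => 'search', :action => 'for-sale'

        # in the view
        <%= link_to "My test title",
              for_sale_path(:id          => listing.id,
                            :title       => listing.title.parameterize,
                            :search_term => search_term.parameterize) %>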

    Read the article

  • URL valid characters: validating in Java

    - by Chez
    A string like 'www.test.com' is good. A string like 'www.888.com' is good. A string like 'stackoverflow.com' is good. A string like 'GOoGle.Com' is good. Why? Because those are valid URLs; it does not necessarily matter whether they have been registered or not. Bad strings are 'goog*d\x' and 'manydots...com'. Why? Because you can't register those URLs. If I have a string in Java which is supposed to be a good URL, what's the best way to validate it? Thanks a lot.
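
    Apache Commons Validator ships a UrlValidator class for exactly this. If adding a dependency is off the table, a rough sketch with java.net.URI (hedged: URI accepts more than is strictly registrable, so the host check below is only a heuristic):

        import java.net.URI;
        import java.net.URISyntaxException;

        public class UrlCheck {
            // Prepend a scheme, then let java.net.URI do the syntax check.
            static boolean looksLikeUrl(String s) {
                try {
                    URI uri = new URI("http://" + s);
                    String host = uri.getHost();  // null when the authority is malformed
                    return host != null && host.contains(".");
                } catch (URISyntaxException e) {
                    return false;                 // illegal characters such as '\' land here
                }
            }

            public static void main(String[] args) {
                System.out.println(looksLikeUrl("www.test.com"));  // true
                System.out.println(looksLikeUrl("goog*d\\x"));     // false
            }
        }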

    Read the article

  • Easy way to parse a URL in C++ cross-platform?

    - by Andrew Bucknell
    I need to parse a URL to get the protocol, host, path, and query in an application I am writing in C++. The application is intended to be cross-platform. I'm surprised I can't find anything that does this in the Boost or POCO libraries. Is it somewhere obvious I'm not looking? Any suggestions on appropriate open-source libraries? Or is this something I just have to do myself? It's not super complicated, but it seems like such a common task that I am surprised there isn't a common solution.
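
    For what it's worth, POCO does ship one: Poco::URI decomposes a URL into exactly these parts. A minimal sketch:

        #include <iostream>
        #include <Poco/URI.h>

        int main() {
            Poco::URI uri("http://example.com:8080/path/to/page?a=1&b=2");
            std::cout << uri.getScheme() << "\n"   // "http"
                      << uri.getHost()   << "\n"   // "example.com"
                      << uri.getPath()   << "\n"   // "/path/to/page"
                      << uri.getQuery()  << "\n";  // "a=1&b=2"
            return 0;
        }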

    Read the article

  • Is there a semi-standard way to associate a URL with an IRC user?

    - by DRMacIver
    I'm in the process of doing some identity consolidation, so I'm providing URLs to me at various locations on the internet. I'm quite active on IRC, so this naturally led me to wonder whether there was a way to provide a link to my IRC presence. This led me to http://www.w3.org/Addressing/draft-mirashi-url-irc-01.txt which appears to be a draft of an RFC for associating URLs with IRC, suggesting that I would be

        irc://irc.freenode.net/DRMacIver,isnick

    which seems a little on the lame side. Further, this RFC draft has very thoroughly expired (February 28, 1997). On the other hand, it seems to be implemented in ChatZilla at least: http://www.mozilla.org/projects/rt-messaging/chatzilla/irc-urls.html So does anyone know if there's a superseding RFC and/or any other de facto standard for this?

    Read the article

  • Using JavaScript to change the URL used when a page is bookmarked...

    - by user30997
    JavaScript doesn't allow you to update window.location without triggering a reload. While I agree with this policy in principle (it shouldn't be possible to visit my website and have JavaScript change the location bar to read www.yourbankingsite.com), I believe it should be possible to change www.foo.org/index to www.foo.org/help. The only reason I care about this is bookmarking. I'm working on a photo browser, and when a user is previewing a particular image, I want that image to be the default should they bookmark that page. For example, if they are viewing foo.org/preview/images0-30 and they click on image #15, that image is expanded to a medium-sized view. If they then bookmark the page, I want the bookmark URL to be foo.org/preview/images0-30/active15. Any thoughts, or is there a security barrier on this one as well? I can certainly understand the same policy being applied here, but one can dream.
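
    The classic workaround is the fragment: it is the one part of window.location that scripts can change without a reload, and it is preserved in bookmarks. A sketch (markActive is a hypothetical handler for the click on image #15):

        function markActive(imageIndex) {
            // .../preview/images0-30 becomes .../preview/images0-30#active15
            window.location.hash = 'active' + imageIndex;
        }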

    Read the article

  • Can I get a "base URL" in WordPress within a template file?

    - by alex
    Usually in my PHP apps I have a base URL set up so I can do things like this:

        <a href="<?php echo BASE_URL; ?>tom/jones">Tom</a>

    Then I can move my site from development to production and swap it easily, with the change taking effect site-wide (and it seems more reliable than <base href="" />). I'm doing up a WordPress theme, and I am wondering: does WordPress have anything like this built in, or do I need to define my own? I can see ABSPATH, but that is the absolute path in the file system, not something relative to the document root.
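
    WordPress does have this built in: home_url() returns the site's base URL and accepts an optional path, and older versions expose the same value via get_bloginfo('url'). A sketch inside a template file:

        <a href="<?php echo home_url('/tom/jones'); ?>">Tom</a>

        <!-- or, on older WordPress versions: -->
        <a href="<?php echo get_bloginfo('url'); ?>/tom/jones">Tom</a>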

    Read the article

  • ASP.NET URL Re-writing; Is this possible?

    - by James Evans
    My app is currently written to accept vendor and product information like this:

        http://www.mydomain.com/foo.aspx?v=1&p=100

    Could this be rewritten like this?

        http://www.mydomain.com/1/100/foo

    Since the values in the original query string are database IDs, how would I express newly created IDs as segments of the path in the rewritten version of the URL? My goal is a largely automated solution that would accomplish this. EDIT: the app is written using ASP.NET Web Forms, .NET 4.0, and IIS 7.
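
    One way to get there without hand-written rewriting is ASP.NET 4.0's built-in routing, which works with Web Forms pages. A sketch registered in Global.asax (the route name is hypothetical; foo.aspx, v, and p are the question's own names):

        using System;
        using System.Web.Routing;

        public class Global : System.Web.HttpApplication
        {
            void Application_Start(object sender, EventArgs e)
            {
                // Maps /1/100/foo onto foo.aspx; in the page, read the IDs via
                // Page.RouteData.Values["v"] and Page.RouteData.Values["p"].
                RouteTable.Routes.MapPageRoute("vendorProduct", "{v}/{p}/foo", "~/foo.aspx");
            }
        }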

    Read the article
