Search Results

Search found 3028 results on 122 pages for 'urls'.

Page 15 of 122

  • Mapping Absolute / Relative (Local) Paths to Absolute URLs

    - by Alix Axel
    I need a fast and reliable way to map an absolute or relative local path (say ./images/Mafalda.jpg) to its corresponding absolute URL. So far I've managed to come up with this:

        function Path($path) {
            if (file_exists($path) === true) {
                return rtrim(str_replace('\\', '/', realpath($path)), '/') . (is_dir($path) ? '/' : '');
            }
            return false;
        }

        function URL($path) {
            $path = Path($path);
            if ($path !== false) {
                return str_replace($_SERVER['DOCUMENT_ROOT'], getservbyport($_SERVER['SERVER_PORT'], 'tcp') . '://' . $_SERVER['HTTP_HOST'], $path);
            }
            return false;
        }

        URL('./images/Mafalda.jpg'); // http://domain.com/images/Mafalda.jpg

    It seems to be working as expected, but since this is a critical feature of my app, I want to ask whether anyone can spot a problem I might have missed. Optimizations are also welcome, since I'm going to call this function several times per request.

    Read the article

  • Opening SSL URLs with Python

    - by RadiantHex
    Hi folks, I'm using mechanize to navigate pages and it works pretty well. Unfortunately, a random error comes up - by random I mean it only appears occasionally:

        URLError at /test/
        urlopen error [Errno 1] _ssl.c:1325: error:140943FC:SSL routines:SSL3_READ_BYTES:sslv3 alert bad record mac

    I really need help on this one :) Any ideas?

    Read the article

  • mod_rewrite and pretty URLs

    - by Peeter
    What I'm trying to achieve:

    1) http://localhost/en/script.php?param1=random is mapped to http://localhost/script.php?param1=random&language=English. This has to work always.

    2) http://localhost/en/random/text/here will be mapped to http://localhost/categories.php?term=random/text/here. This has to work if random/text/here is 404.

    What I have at the moment:

        RewriteEngine on
        RewriteCond substr(%{REQUEST_FILENAME} !-f
        RewriteCond %{REQUEST_FILENAME} !-d
        RewriteRule ^en/(.+)$ categories.php?lang=English&terms=$1 [L]
        RewriteCond %{REQUEST_FILENAME} !-f
        RewriteCond %{REQUEST_FILENAME} !-d
        RewriteRule ^ee/(.+)$ categories.php?lang=Estonian&terms=$1 [L]
        RewriteCond %{REQUEST_FILENAME} !-f
        RewriteCond %{REQUEST_FILENAME} !-d
        RewriteRule ^fi/(.+)$ categories.php?lang=Finnish&terms=$1 [L]
        RewriteCond %{REQUEST_FILENAME} !-f
        RewriteCond %{REQUEST_FILENAME} !-d
        RewriteRule ^ru/(.+)$ categories.php?lang=Russian&terms=$1 [L]
        RewriteRule ^en/(.*) $1?lang=English&%{QUERY_STRING} [L]
        RewriteRule ^ee/(.*) $1?lang=Estonian&%{QUERY_STRING} [L]
        RewriteRule ^ru/(.*) $1?lang=Russian&%{QUERY_STRING} [L]
        RewriteRule ^fi/(.*) $1?lang=Finnish&%{QUERY_STRING} [L]

    What I've thought: substr(%{REQUEST_FILENAME},3) would fix my problem (as currently /ee/index.php is literally mapped to /ee/index.php instead of just /index.php). Unfortunately I couldn't find a way to manipulate strings :/

    Read the article

  • Are keywords in URLs good SEO or needlessly redundant?

    - by Blazemonger
    A coworker and I are locked in a debate over the value of SEO keywords in the URL of a page. She wants to change all the filenames of the HTML pages of a fencing company so they look like residential-home-chicago.html, contact-chicago-contractor.html, and so on. She is convinced that because Google highlights keywords in URLs in search results, putting keywords there is more valuable. My position is that these do not improve SEO, since Google doesn't seem to give keywords in the URL any more weight than keywords in the body of the page, and might even give them less weight. In the meantime, they make it harder for me to find the pages I want when it's time to edit them, and the site as a whole looks cheap and spammy. Google's own SEO guide suggests to me that yes, keywords in URLs are useful, but not superior, and that they are more useful for human readability than for search engine rankings. I'm looking for authoritative sources that support either position, not blog articles from SEO optimization companies trying to promote themselves.

    Read the article

  • Issues with Rails, Amazon S3, and protected URLs

    - by Shpigford
    So I followed this little tutorial about protecting downloads of files that are uploaded to Amazon S3 with Paperclip. When I've developed locally, it's worked fine, but since pushing the exact same code to a production server I now get this error from Amazon when I try to access the files:

        <Error>
          <Code>InvalidArgument</Code>
          <Message>Either the Signature query string parameter or the Authorization header should be specified, not both</Message>
          <ArgumentValue>Basic dGVjaHVrdWxlbGU6ZWxlbHVrdWhjZXQ=</ArgumentValue>
          <ArgumentName>Authorization</ArgumentName>
          <RequestId>F6E455857C54F95A</RequestId>
          <HostId>X4QA2pw9wpHtJtJ2T8qxCyINjq4PLHQVF4VrlYjpX7Ayh694BgQprh5p8H7NRCAt</HostId>
        </Error>

    Example URL: http://s3.amazonaws.com/media.example.com/assets/videos/1/original.mov?AWSAccessKeyId=MY_ACCESS_KEY&Expires=1271972624&Signature=7wWH2WYHPO0o9szwPJbimUMqAig%3D

    That URL is generated using AWS::S3::S3Object.url_for from the aws-s3 gem. So... not even sure where to start. The fact that it works fine when the app is running locally but not in production really doesn't make sense. The production server is running Ubuntu 8.04.4 LTS (Hardy).

    Read the article

  • GWT RequestBuilder - Changing URLs

    - by Joe
    Hi! I'm using GWT to dynamically load HTML snippets from a PHP script. I define the snippet I want the PHP script to return in the URL (test.php?snippet=1). In GWT I have a function getSnippet(int snippetId) that uses a RequestBuilder to retrieve the snippet. It works perfectly fine, but it bothers me that I have to create a new RequestBuilder every time getSnippet gets called. I'd rather have one RequestBuilder and just change the URL when getSnippet is called... Is there a way to do this? Thank you!
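
    For what it's worth, GWT's RequestBuilder takes its URL in the constructor and, as far as I know, offers no setter for it, so reusing a single builder with a changing URL isn't really an option; the builder is just a lightweight description of the request, though, so creating one per call is cheap. Below is a minimal sketch of a helper along those lines - the test.php endpoint is taken from the question, while SnippetLoader and SnippetCallback are hypothetical names invented for the example:

    ```java
    import com.google.gwt.http.client.Request;
    import com.google.gwt.http.client.RequestBuilder;
    import com.google.gwt.http.client.RequestCallback;
    import com.google.gwt.http.client.RequestException;
    import com.google.gwt.http.client.Response;

    public class SnippetLoader {

        // Hypothetical endpoint, taken from the question's example URL.
        private static final String BASE_URL = "test.php?snippet=";

        // Hypothetical callback interface, defined only for this sketch.
        public interface SnippetCallback {
            void onSnippet(String html);
            void onFailure(Throwable caught);
        }

        // RequestBuilder has no URL setter, so a small factory-style method like
        // this keeps the boilerplate in one place while still creating one
        // (cheap) builder per request.
        public void getSnippet(int snippetId, final SnippetCallback callback) {
            RequestBuilder builder = new RequestBuilder(RequestBuilder.GET, BASE_URL + snippetId);
            try {
                builder.sendRequest(null, new RequestCallback() {
                    public void onResponseReceived(Request request, Response response) {
                        callback.onSnippet(response.getText());
                    }

                    public void onError(Request request, Throwable exception) {
                        callback.onFailure(exception);
                    }
                });
            } catch (RequestException e) {
                callback.onFailure(e);
            }
        }
    }
    ```

    If the repeated setup that bothers you is headers or timeouts rather than the URL itself, those can be factored into the same helper method.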

    Read the article

  • IE Mixed Content Warning when using https URLs and http:443 URLs?

    - by Campbeln
    I'm getting the good ole' "This page contains both secure and nonsecure items." dialog in IE when connecting to an HTTPS site. No big deal... I've just got something coming in over a non-secure connection, so that should be an easy fix, right? So I go into "View Web Page Privacy Policy..." to see where I've included an HTTP file, and this is what I see:

        https://blah/path/to/file.htm
        https://blah/path/to/file.js
        http://blah:443/path/to/file.css

    Um... ok... so there is an HTTP-only URL being requested, but it is going over port 443 ("https://blah/" is shorthand for "http://blah:443/"), so... what is the deal with this!? IE 7.0.5730.13 can't possibly be THAT stupid, can it? Is there an IIS setting that needs to be tweaked?

    Read the article

  • How to use NSObject to URLs with three20 properly

    - by Frank
    Basically I map my controllers to accept the Address class to be passed into the listing page controller, which is done here:

        [map from:@"tt://listingPage/(initWithResult:)" toViewController:[ListingPageController class]];
        [map from:[Address class] name:@"result" toURL:@"tt://listingPage/(initWithResult:)"];

    This URL is being used in my table item, which is invoked here:

        for (Address *result in [(id<SearchResultsModel>)self.model results]) {
            NSString* url = [result URLValueWithName:@"result"];
            TTTableImageItem* tii = [TTTableMessageItem itemWithTitle:[result addressText]
                                                              caption:[result addressText]
                                                                 text:[result subText]
                                                             imageURL:[result image]
                                                                  URL:url];
            [self.items addObject:tii];
        }

    My app crashes and I am not sure why; it seems to be getting an invalidated view. Any help would be much appreciated.

    Read the article

  • Create signed URLs for CloudFront with Ruby

    - by wiseleyb
    History:

    1. I created a key and pem file on Amazon.
    2. I created a private bucket.
    3. I created a public distribution and used origin id to connect to the private bucket: works.
    4. I created a private distribution and connected it the same as #3 - now I get access denied: expected.

    I'm having a really hard time generating a url that will work. I've been trying to follow the directions described here: http://docs.amazonwebservices.com/AmazonCloudFront/latest/DeveloperGuide/index.html?PrivateContent.html

    This is what I've got so far... doesn't work though - still getting access denied:

        def url_safe(s)
          s.gsub('+','-').gsub('=','_').gsub('/','~').gsub(/\n/,'').gsub(' ','')
        end

        def policy_for_resource(resource, expires = Time.now + 1.hour)
          %({"Statement":[{"Resource":"#{resource}","Condition":{"DateLessThan":{"AWS:EpochTime":#{expires.to_i}}}}]})
        end

        def signature_for_resource(resource, key_id, private_key_file_name, expires = Time.now + 1.hour)
          policy = url_safe(policy_for_resource(resource, expires))
          key = OpenSSL::PKey::RSA.new(File.readlines(private_key_file_name).join(""))
          url_safe(Base64.encode64(key.sign(OpenSSL::Digest::SHA1.new, (policy))))
        end

        def expiring_url_for_private_resource(resource, key_id, private_key_file_name, expires = Time.now + 1.hour)
          sig = signature_for_resource(resource, key_id, private_key_file_name, expires)
          "#{resource}?Expires=#{expires.to_i}&Signature=#{sig}&Key-Pair-Id=#{key_id}"
        end

        resource = "http://d27ss180g8tp83.cloudfront.net/iwantu.jpeg"
        key_id = "APKAIS6OBYQ253QOURZA"
        pk_file = "doc/pk-APKAIS6OBYQ253QOURZA.pem"

        puts expiring_url_for_private_resource(resource, key_id, pk_file)

    Can anyone tell me what I'm doing wrong here?

    Read the article

  • Best SEO practices for mobile URLs: 301, rel=canonical, or something else?

    - by Chris
    I am developing a site with a mobile version and am trying to figure out the appropriate way to manage the URLs for search engines. So far I've considered:

    1. Having a separate mobile site (m.example.com) with rel="canonical" links to the regular site.
    2. Putting both the mobile site and full site on one URL (example.com), and doing user agent sniffing.
    3. Another opinion, from Spencer: "If you have a mobile site at a separate location or URL, you should 301 redirect each and every mobile page to its corresponding page on your main website. Employ user agent detection so that the mobile optimized version is served up if someone's coming in from a hand-held." (http://developer.practicalecommerce.com/articles/1722-Mobile-site-Development-Best-Practices-for-SEO-Usability)

    Both 2 and 3 make it hard for a user who wants to switch to the full site or mobile site manually, but I'm not sure 1 is the best alternative. What's the best way to write URLs for a mobile site?

    Read the article

  • Client Templates and Ajax 4.0 and URLs

    - by RubbleFord
    I'm trying to output:

        <a href="{{ link }}">click me</a>

    The data in question is spotify:track:0ucyXpQG7xL8ipoyU0Ts3A. Once I remove the ":", the link comes through. Any ideas on this one? As you can probably guess, I'm trying to trigger the Spotify protocol handler.

    Read the article

  • How to make URLs clickable inside of a UITableViewCell?

    - by Chris
    I know how to use UIWebView and can invoke a WebView if it is associated with a specific button or UITableViewCell. What I am trying to achieve is to have a UITableViewCell with a chunk of text. That chunk of text might contain a URL. I want to make the URL into a clickable link and have that link open into a WebView. My thought so far has been that I need to detect a link and insert it as a UIButton within the cell... but I don't know if that is the right way to go about this. Any ideas or input would be much appreciated.

    Read the article

  • ASP.NET MVC: Making routes/URLs IIS6 and IIS7-friendly

    - by Seb Nilsson
    I have an ASP.NET MVC application which I want deployable on both IIS 6 and IIS 7, and as we all know, IIS 6 needs the ".mvc" naming in the URL. Will this code work to make sure it works on all IIS versions, without having to make special adjustments in code, global.asax, or config files for the different IIS versions?

        bool usingIntegratedPipeline = HttpRuntime.UsingIntegratedPipeline;

        routes.MapRoute(
            "Default",
            usingIntegratedPipeline ? "{controller}/{action}/{id}" : "{controller}.mvc/{action}/{id}",
            new { controller = "Home", action = "Index", id = "" }
        );

    Update: Forgot to mention - no ISAPI. Hosted website, no control over the IIS server.

    Read the article

  • Clean URLs mod_rewrite & wildcard subdomains

    - by Søren Zet
    I got this URL, http://domain.com/blogs/directory-param, with this rule:

        RewriteBase /blogs/directory/
        RewriteRule ^/blogs/directory-([A-Za-z0-9-]+)$ /blogs/directory/index.php?cat=$1 [L]

    so I get /blogs/directory/index.php?cat=param. Now my problem is the following: I use wildcard subdomains, so every *.domain.com is mapped to domain.com/blogs/ - for example, soeren.domain.com is mapped to domain.com/blogs, and so on. I want a rule for soeren.domain.com/directory-param which points to domain.com/blogs/directory/index.php?cat=param. Do you have any ideas?

    Read the article

  • How to remove duplicate content which is still indexed but not linked to anymore?

    - by David
    A bug in the tool we use to create search-engine-friendly URLs changed our whole URL structure overnight, and we only noticed after Google had already indexed the pages. Now we have a massive duplicate content issue, causing a harsh drop in rankings. Webmaster Tools shows over 1,000 duplicate title tags, so I don't think Google understands what is going on.

    Right URL: abc.com/price/sharp-ah-l13-12000-btu.html
    Wrong URL: abc.com/item/sharp-l-series-ahl13-12000-btu.html (created by mistake)

    After that, we:

    1. Changed all URLs back to the "right URLs".
    2. Set up a 301 redirect for all "wrong URLs" a few days later.

    Still, a massive number of pages is in the index twice. As we no longer link internally to the "wrong URLs", I am not sure if Google will re-crawl them very soon. What can we do to solve this issue and tell Google that all the "wrong URLs" now redirect to the "right URLs"? Best, David

    Read the article

  • Invoking browser on streaming media URLs

    - by Maven
    I have a dirt-simple little function that launches the BlackBerry browser on a streaming media file in order to launch the built-in media player. Everything works fine, but there is this annoying dialog every time from the browser asking me if I want to save or open the file. My answer is always "open", so is there a way I can make that the default and not bring up the dialog each time? The code I'm using to launch the browser:

        // Get the default session
        BrowserSession browserSession = Browser.getDefaultSession();
        // now launch the URL
        browserSession.displayPage(url);

    This is on BlackBerry OS 5.0. Thanks!

    Read the article

  • How to make CodeIgniter accept "query string" URLs?

    - by Peter
    According to CI's docs, CodeIgniter uses a segment-based approach, for example:

        example.com/my/group

    If I want to find a specific group (id=5), I can visit:

        example.com/my/group/5

    and in the controller, define:

        function group($id='') { ... }

    Now I want to use the traditional approach, which CI calls "query string" URLs. Example:

        example.com/my/group?id=5

    If I go to this URL directly, I get a "404 page not found". So how can I enable this?

    Read the article

  • Escaping ampersands in URLs for HttpClient requests

    - by jpatokal
    So I've got some Java code that uses Jakarta HttpClient like this:

        URI aURI = new URI( "http://host/index.php?title=" + title + "&action=edit" );
        GetMethod aRequest = new GetMethod( aURI.getEscapedPathQuery() );

    The problem is that if title includes any ampersands (&), they're considered parameter delimiters and the request goes screwy... and if I replace them with the URL-escaped equivalent %26, then this gets double-escaped by getEscapedPathQuery() into %2526. I'm currently working around this by basically repairing the damage afterward:

        URI aURI = new URI( "http://host/index.php?title=" + title.replace("&", "%26") + "&action=edit" );
        GetMethod aRequest = new GetMethod( aURI.getEscapedPathQuery().replace("%2526", "%26") );

    But there has to be a nicer way to do this, right? Note that the title can contain any number of unpredictable UTF-8 chars etc., so escaping everything else is a requirement.
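
    One common workaround is to percent-encode just the user-supplied part with java.net.URLEncoder before concatenating, and then hand HttpClient the already-encoded string so nothing gets escaped a second time. A minimal sketch under that assumption follows; EditUrlBuilder and buildEditUrl are hypothetical names for the example, not part of HttpClient:

    ```java
    import java.io.UnsupportedEncodingException;
    import java.net.URLEncoder;

    public class EditUrlBuilder {

        // Percent-encode only the title, then splice it into an otherwise static URL.
        // '&' becomes %26, spaces become '+', and multi-byte UTF-8 characters are
        // escaped byte by byte, so nothing in the title can break the query string.
        public static String buildEditUrl(String title) throws UnsupportedEncodingException {
            String encodedTitle = URLEncoder.encode(title, "UTF-8");
            return "http://host/index.php?title=" + encodedTitle + "&action=edit";
        }

        public static void main(String[] args) throws Exception {
            System.out.println(buildEditUrl("Tom & Jerry"));
            // prints: http://host/index.php?title=Tom+%26+Jerry&action=edit
        }
    }
    ```

    If I remember the Commons HttpClient 3.x API correctly, GetMethod(String uri) expects an already-encoded URI, so a string built this way can be passed straight through; setQueryString with NameValuePair objects is another way to sidestep manual escaping. Treat both of those details as assumptions to verify against your HttpClient version.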

    Read the article

  • Regular expressions and matching question marks in URLs

    - by James P.
    I'm having trouble finding a regular expression that matches the following string:

        Korben;http://feeds.feedburner.com/KorbensBlog-UpgradeYourMind?format=xml;1

    One problem is escaping the question mark. Java's pattern matcher doesn't seem to accept \? as a valid escape sequence, but it also fails to work with the tester at myregexp.com. Here's what I have so far:

        ([a-zA-Z0-9])+;http://([a-zA-Z0-9./-]+);[0-9]+

    Any suggestions?
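
    For context, two separate things seem to be going on in that pattern: the character class [a-zA-Z0-9./-] does not allow ? or =, so the ?format=xml part of the URL can never match, and in a Java string literal a literal question mark outside a character class has to be written as \\?. Below is a small sketch of one pattern that does match the sample line; the class and variable names are just scaffolding for the example:

    ```java
    import java.util.regex.Matcher;
    import java.util.regex.Pattern;

    public class FeedLineMatcher {
        public static void main(String[] args) {
            String line = "Korben;http://feeds.feedburner.com/KorbensBlog-UpgradeYourMind?format=xml;1";

            // Inside a character class '?' needs no escaping at all; outside one it
            // would have to be written as \\? in a Java string literal.
            Pattern pattern = Pattern.compile("([a-zA-Z0-9]+);(http://[a-zA-Z0-9./?=&-]+);([0-9]+)");

            Matcher m = pattern.matcher(line);
            if (m.matches()) {
                System.out.println("name = " + m.group(1)); // Korben
                System.out.println("url  = " + m.group(2)); // the feed URL, query string included
                System.out.println("id   = " + m.group(3)); // 1
            }
        }
    }
    ```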

    Read the article

  • Customizing the look of S3 expiring URLs

    - by Gregoriy
    When creating expiring links for Amazon S3 buckets, could I somehow change the name of AWSAccessKeyId to something else (to customize it, so to speak), so as not to disclose the use of Amazon in my web applications? For now, a link looks like this:

        http://video.mysite.com/T154456.flv?AWSAccessKeyId=1ESOMESPECIALIDJJAKJ6RA82&Expires=1241372284&Signature=ddfr%2BlkoSEPAL%2BGbMwlMzj6q%2BCY%3D

    Or, in order not to open another question: what are other (tricky?) ways to expire S3 links without the use of a proxy?

    Read the article

  • Why is Google Webmaster Tools crawling invalid URLs and showing 500 errors?

    - by Amos Kane
    Google Webmaster Tools is reporting 12k+ 500 errors. Eeek! None of the URLs are valid - they all contain www.youtube.com. First, why is Google crawling these URLs if they don't exist? I supplied a sitemap, and they are of course not in the sitemap. I don't have a robots.txt blocking anything. I've checked for invalid redirects - none - and checked for unclosed tags or something that would throw www.youtube.com into the URL by accident - none. In every 'linked from', the referring URL is also a bad URL with www.youtube.com in it. Webmaster Tools reports no malware, and I can't check the server logs because the host won't give me access. Really stuck!! Any ideas appreciated!

    Read the article

  • MODx friendly URLs with nginx, FPM and PHP 5.3 - friendly URLs not working

    - by okdan
    Hi, I'm using PHP 5.3 on nginx 0.8.53 with FPM, running MODx Revolution. I'm trying to get "friendly URLs" to work, but all I get is 404s. In the MODx config, friendly URLs is set to yes and friendly aliases is set to no (so it drops the suffix).

    My config file:

        server {
            listen 80;
            server_name .mydomain.net;
            # index index.php;
            root /home/mylogin/htdocs;

            location / {
                index index.php index.html;
                if (!-e $request_filename) {
                    rewrite ^/(.*)$ /index.php?q=$1 last;
                }
            }

            # serve static files directly
            location ~* ^.+\.(jpg|jpeg|gif|css|png|js|ico)$ {
                root /home/mylogin/htdocs;
                access_log off;
                expires 30d;
                break;
            }
        }

    The FastCGI MODx file:

        fastcgi_connect_timeout 60;
        fastcgi_send_timeout 300;
        fastcgi_buffers 4 32k;
        fastcgi_busy_buffers_size 32k;
        fastcgi_temp_file_write_size 32k;
        fastcgi_pass 127.0.0.1:9000;
        fastcgi_index index.php;
        fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
        fastcgi_ignore_client_abort on;
        fastcgi_intercept_errors on;
        fastcgi_read_timeout 300;
        fastcgi_param QUERY_STRING $query_string;
        fastcgi_param REQUEST_METHOD $request_method;
        fastcgi_param CONTENT_TYPE $content_type;
        fastcgi_param CONTENT_LENGTH $content_length;
        fastcgi_param SCRIPT_NAME $fastcgi_script_name;
        fastcgi_param REQUEST_URI $request_uri;
        fastcgi_param DOCUMENT_URI $document_uri;
        fastcgi_param DOCUMENT_ROOT $document_root;
        fastcgi_param SERVER_PROTOCOL $server_protocol;
        fastcgi_param GATEWAY_INTERFACE CGI/1.1;
        fastcgi_param SERVER_SOFTWARE nginx/$nginx_version;
        fastcgi_param REMOTE_ADDR $remote_addr;
        fastcgi_param REMOTE_PORT $remote_port;
        fastcgi_param SERVER_ADDR $server_addr;
        fastcgi_param SERVER_PORT $server_port;
        fastcgi_param SERVER_NAME $server_name;
        fastcgi_param REDIRECT_STATUS 200;

    Read the article

  • How do I replace outbound link URLs in a PDF document, using PHP

    - by Alex Poole
    I have a PDF document with some external links. I'd like to parse the document, replace the destination of the links, then close (and serve) the PDF document, all using PHP. I know I can do this with PDFlib, but I don't want to incur that cost. I could re-write the document with FPDF or DomPDF, but some of these PDFs are quite complex, so this would be a major time investment. Surely there must be a way to do this directly to PDF docs using native PHP? TIA

    Read the article

  • UIWebView: webViewDidStartLoad/webViewDidFinishLoad delegate methods not called when loading certain URLs

    - by Dia
    I have a basic web browser implemented using a UIWebView. I've noticed that for some pages, none of the UIWebViewDelegate methods are called. An example page on which this happens is http://www.youtube.com/user/google. Here are the steps to reproduce the issue (make sure you insert NSLog calls in your controller's UIWebViewDelegate methods):

    1. Load the above YouTube URL into the UIWebView [notice that here, the UIWebViewDelegate methods do get called when the page loads]
    2. Touch the "Uploads" category on the page
    3. Touch any video in that category [issue: notice that a new page is loaded, but none of the UIWebView delegate methods are called]

    I know that this is not an issue of UIWebView's delegate not being set properly, since the delegate methods do get invoked when loading other links (e.g. if you click on a link that takes you outside of YouTube, you'll notice the delegate methods getting called). My initial gut feeling was that it might be because the page is loaded using AJAX, which may not invoke the delegate methods. But when I checked Safari, it did not exhibit this problem, so it must be something on my side. I've also noticed that Three20's TTWebController has the exact same issue. The problem that arises from this is that without the delegate methods being called, I'm unable to update the UI to enable/disable the back and forward browsing buttons when new requests are loaded. Any idea why this is happening, or how I can work around it to update the UI when a new request is made?

    Read the article
