Search Results

Search found 23346 results on 934 pages for 'clean url'.

  • What are the common maintenance tasks on Ubuntu?

    - by DaNieL
    When I was using Windows, I used to run defrag, CCleaner and Revo Uninstaller once a month to keep the system and the registry clean. I know Ubuntu (and every Linux distro) has a different filesystem structure and doesn't need defragmenting, but I've heard there are some maintenance tasks that help keep the system clean (for example, sudo apt-get clean or sudo apt-get autoremove). Which of these commands/tools do you know and use regularly? Please explain what they do and whether they can compromise system stability.

    Read the article

  • Groovy threads for URLs

    - by Srinath
    I wrote logic for testing URLs using threads. It works well for a small number of URLs but fails with more than 400 URLs to check.

        class URL extends Thread {
            def valid
            def url

            URL(url) {
                this.url = url
            }

            void run() {
                try {
                    def connection = url.toURL().openConnection()
                    connection.setConnectTimeout(10000)
                    if (connection.responseCode == 200) {
                        valid = Boolean.TRUE
                    } else {
                        valid = Boolean.FALSE
                    }
                } catch (Exception e) {
                    valid = Boolean.FALSE
                }
            }
        }

        def threads = []
        urls.each { ur ->
            def reader = new URL(ur)
            reader.start()
            threads.add(reader)
        }
        while (threads.size() > 0) {
            for (int i = 0; i < threads.size(); i++) {
                def tr = threads.get(i)
                if (!tr.isAlive()) {
                    if (tr.valid == true) {
                        threads.remove(i)
                        i--
                    } else {
                        threads.remove(i)
                        i--
                    }
                }
            }
        }

    Could anyone please tell me how to optimize the logic and where I was going wrong? Thanks in advance.
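
    Starting one thread per URL is what tends to fall over around 400 URLs; a fixed-size worker pool checks the same list with a bounded thread count. A minimal sketch of that pattern in Python rather than Groovy (URLs and pool size are placeholders):

        from concurrent.futures import ThreadPoolExecutor
        import urllib.request

        def check(url):
            # Returns True if the URL answers with HTTP 200, False otherwise.
            try:
                with urllib.request.urlopen(url, timeout=10) as resp:
                    return resp.status == 200
            except Exception:
                return False

        urls = ["https://example.com", "https://example.org"]  # placeholder list

        # A fixed-size pool bounds concurrency no matter how many URLs there are.
        with ThreadPoolExecutor(max_workers=20) as pool:
            results = dict(zip(urls, pool.map(check, urls)))

        print(results)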

    Read the article

  • Example code for parsing XML and getting attributes using the GData API

    - by ben
    When I use the GData API in my app for parsing XML, how can I get the attributes and their values? I'd like a piece of example code. Thanks a lot. The XML:

        <playurls>
            <url islive="0" type="3" bit_stream="1">http://vods.netitv.com//dy2/2010/02/08/cf584b76-3579-4b75-a0c8-f7a473d79f8c.mp4</url>
            <url islive="0" type="3" bit_stream="2">http://vods.netitv.com//dy/2010/02/08/965bbc65-8ec0-4c50-98ae-c69a831926cc.mp4</url>
            <url islive="0" type="2" bit_stream="1">http://vods.netitv.com//dy2/2010/02/08/cf584b76-3579-4b75-a0c8-f7a473d79f8c.mp4</url>
            <url islive="0" type="2" bit_stream="2">http://vods.netitv.com//dy/2010/02/08/965bbc65-8ec0-4c50-98ae-c69a831926cc.mp4</url>
        </playurls>
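
    Whatever the parser, the key distinction is that islive/type/bit_stream live in the element's attribute map while the address is the element's text content. Illustrated with Python's ElementTree rather than GData, just to show where each piece lives:

        import xml.etree.ElementTree as ET

        doc = """<playurls>
          <url islive="0" type="3" bit_stream="1">http://example.com/a.mp4</url>
          <url islive="0" type="2" bit_stream="2">http://example.com/b.mp4</url>
        </playurls>"""

        root = ET.fromstring(doc)
        for url in root.findall("url"):
            # Attributes come from the element's attribute map;
            # the address itself is the element's text content.
            print(url.get("islive"), url.get("type"), url.get("bit_stream"), url.text.strip())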

    Read the article

  • net/http.rb:560:in `initialize': getaddrinfo: Name or service not known (SocketError)

    - by Sid
    @@timestamp = nil

        def generate_oauth_url
          @@timestamp = timestamp
          url = CONNECT_URL + REQUEST_TOKEN_PATH + "&oauth_callback=#{OAUTH_CALLBACK}&oauth_consumer_key=#{OAUTH_CONSUMER_KEY}&oauth_nonce=#{NONCE}&oauth_signature_method=#{OAUTH_SIGNATURE_METHOD}&oauth_timestamp=#{@@timestamp}&oauth_version=#{OAUTH_VERSION}"
          puts url
          url
        end

        def sign(url)
          Base64.encode64(HMAC::SHA1.digest((NONCE + url), OAUTH_CONSUMER_SECRET)).strip
        end

        def get_request_token
          url = generate_oauth_url
          signed_url = sign(url)
          request = Net::HTTP.new((CONNECT_URL + REQUEST_TOKEN_PATH), 80)
          puts request.inspect
          headers = { "Authorization" => "Authorization: OAuth oauth_nonce=#{NONCE}, oauth_callback=#{OAUTH_CALLBACK}, oauth_signature_method=#{OAUTH_SIGNATURE_METHOD}, oauth_timestamp=#{@@timestamp}, oauth_consumer_key=#{OAUTH_CONSUMER_KEY}, oauth_signature=#{signed_url}, oauth_version=#{OAUTH_VERSION}" }
          request.post(url, nil, headers)
        end

        def timestamp
          Time.now.to_i
        end

    I am trying to do what OAuth does, in an attempt to understand how to use the Authorization headers. I am trying to connect to the LinkedIn API and I am getting the following error:

        /usr/lib/ruby/1.8/net/http.rb:560:in `initialize': getaddrinfo: Name or service not known (SocketError)

    I would really appreciate it if someone could nudge me in the right direction.
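
    A frequent cause of this getaddrinfo failure is handing Net::HTTP.new a full URL where it expects a bare host name, so the resolver tries to look up the entire string, host plus path. The host/path split, sketched here in Python rather than Ruby (the endpoint is a placeholder):

        from urllib.parse import urlsplit
        import http.client

        url = "https://api.linkedin.com/uas/oauth/requestToken"  # placeholder endpoint

        parts = urlsplit(url)
        # Connect with the host only; the path belongs in the request line,
        # not in the address handed to the resolver.
        conn = http.client.HTTPSConnection(parts.hostname, parts.port or 443)
        conn.request("POST", parts.path)
        print(conn.getresponse().status)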

    Read the article

  • How can I make WWW::Mechanize not fetch pages twice?

    - by planetp
    I have a web scraping application written in OO Perl. There's a single WWW::Mechanize object used in the app. How can I make it not fetch the same URL twice, i.e. make the second get() with the same URL a no-op?

        my $mech = WWW::Mechanize->new();
        my $url = 'http://google.com';
        $mech->get( $url ); # first time, fetch
        $mech->get( $url ); # same url, do nothing
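
    One common answer is to memoize get() by URL, either in a subclass or a caching wrapper. The wrapper idea, sketched in Python rather than Perl (the cache-by-URL pattern carries over directly):

        import urllib.request

        class CachingFetcher:
            def __init__(self):
                self._cache = {}

            def get(self, url):
                # Fetch each URL at most once; repeated calls return the cached body.
                if url not in self._cache:
                    with urllib.request.urlopen(url) as resp:
                        self._cache[url] = resp.read()
                return self._cache[url]

        fetcher = CachingFetcher()
        page1 = fetcher.get("https://example.com")  # network fetch
        page2 = fetcher.get("https://example.com")  # served from cache, no request
        assert page1 is page2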

    Read the article

  • How can I use a class with the same name from another namespace in my class?

    - by Beau Simensen
    I have two classes with the same name in different namespaces, and I want one of these classes to reference the other. The reason is that I am migrating to some newer code and I want to update the old code to simply pass through to the newer code. Here is a super basic example:

        namespace project { namespace legacy {

        class Content {
        public:
            Content(const string& url) : url_(url) {}
            string url() { return url_; }
        private:
            string url_;
        };

        }} // namespace project::legacy

        namespace project { namespace current {

        class Content {
        public:
            Content(const string& url) : url_(url) {}
            string url() { return url_; }
        private:
            string url_;
        };

        }} // namespace project::current

    I expected to be able to do the following in project::legacy::Content, but I am having trouble with some linker issues. Is this an issue with how I'm trying to do this, or do I need to look more closely at my project files to see if I have some sort of weird dependency issue?

        #include "project/current/Content.h"

        namespace project { namespace legacy {

        class Content {
        public:
            Content(const string& url) : actualContent_(url) {}
            string url() { return actualContent_.url(); }
        private:
            project::current::Content actualContent_;
        };

        }} // namespace project::legacy

    The test application compiles fine if I reference an instance of project::current::Content, but if I reference project::current::Content from project::legacy::Content I get:

        undefined reference to `project::current::Content::Content(...)'

    UPDATE: As it turns out, this was a GNU Autotools issue and was unrelated to the actual topic. Thanks to everyone for their help and suggestions!

    Read the article

  • What is the proper way to URL encode Unicode characters?

    - by Josh Gibson
    I know of the non-standard %uxxxx scheme, but that doesn't seem like a wise choice since it has been rejected by the W3C. Some interesting examples. The heart character: if I type this into my browser:

        http://www.google.com/search?q=♥

    then copy and paste it, I see this URL:

        http://www.google.com/search?q=%E2%99%A5

    which makes it seem like Firefox (or Safari) is doing:

        urllib.quote_plus(x.encode("latin-1"))
        '%E2%99%A5'

    which makes sense, except for things that can't be encoded in Latin-1, like the triple-dot character …. If I type the URL

        http://www.google.com/search?q=…

    into my browser then copy and paste, I get

        http://www.google.com/search?q=%E2%80%A6

    back, which seems to be the result of doing

        urllib.quote_plus(x.encode("utf-8"))

    and makes sense, since … can't be encoded with Latin-1. But then it's not clear to me how the browser knows whether to decode with UTF-8 or Latin-1, since this seems to be ambiguous:

        In [67]: u"…".encode('utf-8').decode('latin-1')
        Out[67]: u'\xc3\xa2\xc2\x80\xc2\xa6'

    works, so I don't know how the browser figures out which to use. What's the right thing to be doing with the special characters I need to deal with?
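
    Current practice (RFC 3986) is to percent-encode the UTF-8 bytes of the text, which is what the browser did in both examples; the Latin-1 case only looked plausible because %E2%99%A5 happens to be the UTF-8 encoding of ♥. A quick check in Python 3, where urllib.parse.quote defaults to UTF-8:

        from urllib.parse import quote, unquote

        # Percent-encode the UTF-8 bytes of each character (RFC 3986 practice).
        print(quote("♥"))   # %E2%99%A5
        print(quote("…"))   # %E2%80%A6

        # Decoding assumes UTF-8 as well, so the round trip is unambiguous.
        print(unquote("%E2%99%A5"))  # ♥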

    Read the article

  • PHP cURL and a loop based on a numeric value

    - by danit
    I'm using the Twitter API to collect the number of tweets I've favorited (well, to be accurate, the total pages of favorited tweets). I use this URL:

        http://api.twitter.com/1/users/show/username.xml

    I grab the XML element 'favorites_count'. For this example, let's assume favorites_count=5. The Twitter API uses this URL to get the favorites (must be authenticated):

        http://twitter.com/favorites.xml

    You can only get the last 20 favorites using this URL; however, you can alter the URL to include a 'page' option by appending it, e.g.:

        http://twitter.com/favorites.xml?page=2

    So what I need to do is use cURL (I think) to collect the favorite tweets using the URLs:

        http://twitter.com/favorites.xml?page=1
        http://twitter.com/favorites.xml?page=2
        http://twitter.com/favorites.xml?page=3
        http://twitter.com/favorites.xml?page=4

    and so on: some kind of loop to visit each URL, collect the tweets, and then output the contents. Can anyone help with this? I need to:

        - use cURL to authenticate
        - collect the number of pages of tweets (already scripted this)
        - then use a loop to go through each page URL based on the pages value
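
    Those v1 endpoints are long retired, but the paging pattern itself is simple. A sketch in Python rather than PHP (endpoint and page count are placeholders; real Twitter access now requires OAuth and the current API):

        import urllib.request

        base = "http://twitter.com/favorites.xml"  # retired v1 endpoint, placeholder only
        pages = 5  # would come from favorites_count / 20, rounded up

        tweets = []
        for page in range(1, pages + 1):
            url = f"{base}?page={page}"
            # Each iteration fetches one page of favorites; authentication
            # (e.g. an Authorization header) would be added to the request here.
            with urllib.request.urlopen(url) as resp:
                tweets.append(resp.read())

        print(len(tweets), "pages fetched")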

    Read the article

  • JavaScript redirect to another page

    - by FearUs
    Is there any way to do a redirect other than the following?

        document.location = url
        document.location.href = url
        document.location.replace(url)
        window.location = url
        window.location.href = url
        window.location.replace(url)

    I really want to redirect the user to another page just as if he had clicked on a hyperlink!

    Read the article

  • Can't parse XML effectively using Python

    - by Harshit Sharma
    import urllib
        import xml.etree.ElementTree as ET

        def getWeather(city):
            # create google weather api url
            url = "http://www.google.com/ig/api?weather=" + urllib.quote(city)
            try:
                # open google weather api url
                f = urllib.urlopen(url)
            except:
                # if there was an error opening the url, return
                return "Error opening url"
            # read contents to a string
            s = f.read()
            tree = ET.parse(s)
            current = tree.find("current_condition/condition")
            condition_data = current.get("data")
            weather = condition_data
            if weather == "<?xml version=":
                return "Invalid city"
            # return the weather condition
            return weather

        def main():
            while True:
                city = raw_input("Give me a city: ")
                weather = getWeather(city)
                print(weather)

        if __name__ == "__main__":
            main()

    This gives an error; I actually want to pull values from the tags of Google's weather XML.
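
    The likely error here is that ET.parse() expects a filename or file object, while s already holds the document text; ET.fromstring() is the string-parsing counterpart. A hedged sketch of the fix (the Google weather API is long gone, so the XML below is a stand-in for f.read()):

        import xml.etree.ElementTree as ET

        s = '<xml_api_reply><weather><current_conditions><condition data="Clear"/></current_conditions></weather></xml_api_reply>'

        # fromstring() parses XML already held in a string;
        # parse() would treat the string as a filesystem path.
        root = ET.fromstring(s)
        condition = root.find(".//current_conditions/condition")
        print(condition.get("data"))  # -> Clear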

    Read the article

  • How to find a hidden streaming video/audio link and record it

    - by Stan
    I've been using 'URL Snooper' to find hidden streaming URLs and then feeding the URL to VLC to record the streaming video/audio. But VLC can't read those URLs. I also found that the URL is a floating one that changes every few hours, so the same audio station won't keep the same URL. The streaming audio provider has bunches of audio stations and shuffles the links frequently. Is there any way to record the streaming media in this case? Please advise, thanks.

    Read the article

  • How to associate the activemq-core.xsd URL with the activemq.xsd found in the JAR file?

    - by livia
    Hello. Does somebody know how to associate the activemq-core.xsd URL with the activemq.xsd found in the JAR file (activemq-core-5.2.0.jar)? I dug for a solution on the internet but nothing worked. I'm getting this error:

        Caused by: org.springframework.beans.factory.xml.XmlBeanDefinitionStoreException: Line 19 in XML document from class path resource [jms-context.xml] is invalid; nested exception is org.xml.sax.SAXParseException: cvc-complex-type.2.4.c: The matching wildcard is strict, but no declaration can be found for element 'amq:broker'.
            at org.springframework.beans.factory.xml.XmlBeanDefinitionReader.doLoadBeanDefinitions(XmlBeanDefinitionReader.java:404)
            at org.springframework.beans.factory.xml.XmlBeanDefinitionReader.loadBeanDefinitions(XmlBeanDefinitionReader.java:342)
            at org.springframework.beans.factory.xml.XmlBeanDefinitionReader.loadBeanDefinitions(XmlBeanDefinitionReader.java:310)
            at org.springframework.beans.factory.support.AbstractBeanDefinitionReader.loadBeanDefinitions(AbstractBeanDefinitionReader.java:143)
            at org.springframework.beans.factory.support.AbstractBeanDefinitionReader.loadBeanDefinitions(AbstractBeanDefinitionReader.java:178)
            at org.springframework.beans.factory.support.AbstractBeanDefinitionReader.loadBeanDefinitions(AbstractBeanDefinitionReader.java:149)
            at org.springframework.beans.factory.support.AbstractBeanDefinitionReader.loadBeanDefinitions(AbstractBeanDefinitionReader.java:212)
            at org.springframework.test.context.support.AbstractGenericContextLoader.loadContext(AbstractGenericContextLoader.java:81)
            at org.springframework.test.context.support.AbstractGenericContextLoader.loadContext(AbstractGenericContextLoader.java:42)
            at org.springframework.test.context.TestContext.loadApplicationContext(TestContext.java:173)
            at org.springframework.test.context.TestContext.getApplicationContext(TestContext.java:197)

    This should be happening because we removed the URL from the schemaLocation (http://activemq.apache.org/schema/core classpath:activemq.xsd) and are resolving it locally. But I don't know why the activemq.xsd isn't being found during the build process. I added the JAR file to the classpath; shouldn't it work like that? Thanks in advance for any help.

    Read the article

  • Is it possible to write an IIS URL Rewrite rule that examines the content of an HTTP POST?

    - by JohnRudolfLewis
    I need to split a portion of functionality away from a legacy ISAPI DLL onto another solution (ASP.NET MVC, most likely). IIS 7's URL Rewrite sounded like a perfect candidate for the job, but it turns out I cannot find a way to configure the rules the way I need. I need to write a rule that examines the content of the HTTP POST for a particular value, i.e.:

        <form method="post" action="legacy_isapi.dll">
            <input name="foo" />
        </form>

        if (Request.Form["foo"] == "bar")
            Context.RewritePath("/some_other_url/on_the_same_machine/foo/bar");

    As a proof of concept, I was able to create an IHttpModule that examines the context.Request.Form collection and performs a rewrite when certain parameters are present. I installed this module in my website, and it works. Rather than a custom module, however, I'd prefer to extend the existing URL Rewrite module to support examining the content of the HTTP POST as one of its rules. Is this possible?
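
    For what it's worth, the working IHttpModule approach generalizes to most server stacks: middleware reads the form body, then rewrites the request path internally before the application sees it. A rough sketch of that shape as Python WSGI middleware, not IIS (the target path is invented):

        import io
        from urllib.parse import parse_qs

        def rewrite_on_foo(app):
            # WSGI middleware: reroute POSTs whose body contains foo=bar.
            def middleware(environ, start_response):
                if environ.get("REQUEST_METHOD") == "POST":
                    length = int(environ.get("CONTENT_LENGTH") or 0)
                    body = environ["wsgi.input"].read(length)
                    form = parse_qs(body.decode("utf-8", "replace"))
                    if form.get("foo") == ["bar"]:
                        # Internal rewrite: the wrapped app sees the new path.
                        environ["PATH_INFO"] = "/some_other_url/foo/bar"  # hypothetical target
                    # Restore the body so the app can read it too.
                    environ["wsgi.input"] = io.BytesIO(body)
                return app(environ, start_response)
            return middleware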

    Read the article

  • Using PHP to read a web page with fsockopen(), but fgets is not working

    - by asdasd
    I'm using the code from here: http://www.digiways.com/articles/php/httpredirects/

        public function ReadHttpFile($strUrl, $iHttpRedirectMaxRecursiveCalls = 5)
        {
            // parsing the url getting web server name/IP, path and port.
            $url = parse_url($strUrl);
            // setting path to '/' if not present in $strUrl
            if (isset($url['path']) === false)
                $url['path'] = '/';
            // setting port to default HTTP server port 80
            if (isset($url['port']) === false)
                $url['port'] = 80;
            // connecting to the server; resetting class data
            $this->success = false;
            unset($this->strFile);
            unset($this->aHeaderLines);
            $this->strLocation = $strUrl;
            $fp = fsockopen($url['host'], $url['port'], $errno, $errstr, 30);
            // Return if the socket was not opened; $this->success stays false.
            if (!$fp)
                return;
            $header = 'GET / HTTP/1.1\r\n';
            $header .= 'Host: '.$url['host'].$url['path'];
            if (isset($url['query']))
                $header .= '?'.$url['query'];
            $header .= '\r\n';
            $header .= 'Connection: Close\r\n\r\n';
            // sending the request to the server
            echo "Header is: ".str_replace('\n', '\n', $header);
            $length = strlen($header);
            if ($length != fwrite($fp, $header, $length)) {
                echo 'error writing to header, exiting';
                return;
            }
            // $bHeader is set to true while we receive the HTTP header
            // and after the empty line (end of HTTP header) it's set to false.
            $bHeader = true;
            // continuing until there's no more text to read from the socket
            while (!feof($fp)) {
                echo "in loop";
                // reading a line of text from the socket, not more than 8192 symbols.
                $good = $strLine = fgets($fp, 128);
                if (!$good) {
                    echo 'bad';
                    return;
                }
                // removing trailing \n and \r characters.
                $strLine = ereg_replace('[\r\n]', '', $strLine);
                if ($bHeader == false)
                    $this->strFile .= $strLine.'\n';
                else
                    $this->aHeaderLines[] = trim($strLine);
                if (strlen($strLine) == 0)
                    $bHeader = false;
                echo "read: $strLine";
                return;
            }
            echo "after loop";
            fclose($fp);
        }

    This is all I get:

        Header is: GET / HTTP/1.1\r\n Host: www.google.com/\r\n Connection: Close\r\n\r\n in loopbad

    So it fails at fgets($fp, 128).
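
    The echoed header is the giveaway: in single-quoted PHP strings, \r\n stays as four literal characters, so the server never receives a real CRLF; also, the Host: header should carry only the host, not the path. The same request done by hand in Python with genuine CRLFs (the host is just an example):

        import socket

        host = "www.google.com"  # example host
        # HTTP requires actual CR+LF byte pairs between lines, and the
        # path goes on the request line, not in the Host header.
        request = (
            "GET / HTTP/1.1\r\n"
            f"Host: {host}\r\n"
            "Connection: close\r\n"
            "\r\n"
        )

        with socket.create_connection((host, 80), timeout=30) as sock:
            sock.sendall(request.encode("ascii"))
            reply = b""
            while chunk := sock.recv(4096):
                reply += chunk

        print(reply.split(b"\r\n", 1)[0])  # status line, e.g. b'HTTP/1.1 200 OK'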

    Read the article

  • How to edit an XML file

    - by thebourneid
    I have a movie collection catalogue with local links to folders and files for easy access. Recently I reorganized my entire hard disk space, and I need to update the links; I'm trying to do that automatically with Perl. I can export the data to an XML file and import it again, and I can extract the new filepaths with File::Find, but I'm stuck on two problems: I have no idea how to connect the $title from the new filepath with the corresponding $title from the XML file, and I'm dealing with such files for the first time, so I don't know how to proceed with the replacement process. Here is what I've done till now:

        use strict;
        use warnings;
        use File::Basename;
        use File::Find;
        use File::Spec;
        use XML::Simple;
        use Data::Dumper;

        my $dir_target = 'somepath';
        find(\&a, $dir_target);

        sub a {
            /\.iso$/ or return;
            my $fn = $File::Find::name;
            $fn =~ s/\//\\/g;
            $fn =~ /(.*\\)(.*)/;
            my $path = $1;
            my $filename = $2;
            my $title = (File::Spec->splitdir($fn))[2];
            $title =~ s/(.*?)\s\(\d+\)$/$1/;
            $title =~ s/~/:/;
            $title =~ s/`/?/;
            my $link_local = '<link><description>Folder</description><url>'.$path.'</url><urltype>Movie</urltype></link><link><description>'.$filename.'</description><url>'.$fn.'</url><urltype>Movie</urltype></link>' unless $title eq '';
            my $txt = 'somepath/log.txt';
            my $xml_in = XMLin('somepath/test.xml', ForceArray => 1, KeepRoot => 1);
            my $xml_out = XMLout($xml_in, OutputFile => 'somepath/test_out.xml', KeepRoot => 1);
            open F, ">>", $txt;
            print F $link_local."\n\n";
            close F;
        }

    And here is a snippet of the data I need to edit. If the IMDB and DVD Empire links are found, don't touch them; if local links are found, replace them; otherwise, insert them. I'm willing to complete the code myself but need some directions on how to proceed further. Thanks.

        <title>$title</title>
        .......
        <links>
          <link>
            <description>IMDB</description>
            <url>http://www.imdb.com/title/VARIABLE</url>
            <urltype>URL</urltype>
          </link>
          <link>
            <description>DVD Empire</description>
            <url>http://www.dvdempire.com/VARIABLE</url>
            <urltype>URL</urltype>
          </link>
          <link>
            <description>Folder</description>
            <url>OLD_FOLDERPATH</url>
            <urltype>Movie</urltype>
          </link>
          <link>
            <description>OLD_FILENAME</description>
            <url>OLD_FILENAMEPATH</url>
            <urltype>Movie</urltype>
          </link>
        </links>
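
    One way to structure the matching step: build a map of title to new location during the File::Find pass, then walk each movie's <links> and rewrite only the Movie-type entries. The shape of that walk, sketched in Python's ElementTree rather than XML::Simple (the data is invented; element names follow the snippet above, assuming one element per movie holding a <title> and a <links> block):

        import xml.etree.ElementTree as ET

        # Title -> new location, as the filesystem scan would collect it (invented data).
        titles = {"Blade Runner": {"path": "D:\\Movies\\Blade Runner (1982)\\",
                                   "filename": "Blade Runner.iso"}}

        # Tiny stand-in for the exported catalogue.
        catalogue = ET.fromstring("""
        <catalog><movie>
          <title>Blade Runner</title>
          <links>
            <link><description>IMDB</description><url>http://www.imdb.com/title/VARIABLE</url><urltype>URL</urltype></link>
            <link><description>Folder</description><url>OLD_FOLDERPATH</url><urltype>Movie</urltype></link>
            <link><description>OLD_FILENAME</description><url>OLD_FILENAMEPATH</url><urltype>Movie</urltype></link>
          </links>
        </movie></catalog>""")

        for movie in catalogue.findall("movie"):
            loc = titles.get(movie.findtext("title"))
            if loc is None:
                continue
            for link in movie.find("links").findall("link"):
                # URL-type links (IMDB, DVD Empire) stay untouched; Movie links get rewritten.
                if link.findtext("urltype") == "Movie":
                    if link.findtext("description") == "Folder":
                        link.find("url").text = loc["path"]
                    else:
                        link.find("description").text = loc["filename"]
                        link.find("url").text = loc["path"] + loc["filename"]

        print(ET.tostring(catalogue, encoding="unicode"))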

    Read the article

  • How do I edit an XML file with Perl?

    - by thebourneid
    I have a movie collection catalogue with local links to folders and files for easy access. Recently I reorganized my entire hard disk space, and I need to update the links; I'm trying to do that automatically with Perl. I can export the data to an XML file and import it again, and I can extract the new filepaths with File::Find, but I'm stuck on two problems: I have no idea how to connect the $title from the new filepath with the corresponding $title from the XML file, and I'm dealing with such files for the first time, so I don't know how to proceed with the replacement process. Here is what I've done till now:

        use strict;
        use warnings;
        use File::Basename;
        use File::Find;
        use File::Spec;
        use XML::Simple;
        use Data::Dumper;

        my $dir_target = 'D:/Movies/';
        my %titles_locations = ();

        find(\&file_handler, $dir_target);

        sub file_handler {
            /\.iso$/ or return;
            my $fn = $File::Find::name;
            $fn =~ s/\//\\/g;
            $fn =~ /(.*\\)(.*)/;
            my $path = $1;
            my $filename = $2;
            my $title = (File::Spec->splitdir($fn))[2];
            $title =~ s/(.*?)\s\(\d+\)$/$1/;
            $title =~ s/~/:/;
            $title =~ s/`/?/;
            my $link_local = '<link><description>Folder</description><url>'.$path.'</url><urltype>Movie</urltype></link><link><description>'.$filename.'</description><url>'.$fn.'</url><urltype>Movie</urltype></link>' unless $title eq '';
            $titles_locations{$title} = { 'filename' => $filename, 'path' => $path };
        }

        my $xml_in = XMLin('somepath/test.xml', ForceArray => 1, KeepRoot => 1);

        my $title = { 'key1' => 'title', 'key2' => 'links' };
        foreach my $link (keys %$title) {
        }
        print Data::Dumper->Dump([$title]);

        my $xml_out = XMLout($xml_in, OutputFile => 'somepath/test_out.xml', KeepRoot => 1);

    And here is a snippet of the data I need to edit. If the IMDB and DVD Empire links are found, don't touch them; if local links are found, replace them; otherwise, insert them. I'm willing to complete the code myself but need some directions on how to proceed further. Thanks.

        <title>$title</title>
        .......
        <links>
          <link>
            <description>IMDB</description>
            <url>http://www.imdb.com/title/VARIABLE</url>
            <urltype>URL</urltype>
          </link>
          <link>
            <description>DVD Empire</description>
            <url>http://www.dvdempire.com/VARIABLE</url>
            <urltype>URL</urltype>
          </link>
          <link>
            <description>Folder</description>
            <url>OLD_FOLDERPATH</url>
            <urltype>Movie</urltype>
          </link>
          <link>
            <description>OLD_FILENAME</description>
            <url>OLD_FILENAMEPATH</url>
            <urltype>Movie</urltype>
          </link>
        </links>

    Read the article

  • ASP.NET MVC jQuery autocomplete with the Url.Action helper in a script included in a page

    - by Boob
    I have been building my first ASP.NET MVC web app, and I have been using the jQuery autocomplete widget in a number of places like this:

        <head>
            $("#model").autocomplete({
                source: '<%= Url.Action("Model", "AutoComplete") %>'
            });
        </head>

    The thing is, I have this jQuery code in a number of different places through my web app, so I thought I would create a separate JavaScript file (script.js) where I could put this code and then just include it in the master page. Then I can put all these repeated pieces of code in that script and call them where I need to. So I did this. My code is shown below. In site.js I put this function:

        function doAutoComplete() {
            $("#model").autocomplete({
                source: '<%= Url.Action("Model", "AutoComplete") %>'
            });
        }

    On the page I have:

        <head>
            <script src="../../Scripts/site.js" type="text/javascript"></script>
            doAutoComplete();
        </head>

    But when I do this I get an Invalid Argument exception and the autocomplete doesn't work. What am I doing wrong? Any ideas? Do I need to pass something to the doAutoComplete function?

    Read the article

  • URI scheme is not "file"

    - by Ankur
    I get the exception "URI scheme is not 'file'". The URL I am playing with is below, and it very much is a file:

        http://local.wasp.uwa.edu.au/~pbourke/miscellaneous/domefisheye/ladybug/fish4.jpg

    What I am doing is trying to get the name of a file and then save that file (from another server) onto my computer/server from within a servlet. I have a String called "url"; from thereon, here is my code:

        url = Streams.asString(stream); // gets the URL from a form on a webpage
        System.out.println("This is the URL: " + url);
        URI fileUri = new URI(url);
        File fileFromUri = new File(fileUri);
        onlyFile = fileFromUri.getName();
        URL fileUrl = new URL(url);
        InputStream imageStream = fileUrl.openStream();
        String fileLoc2 = getServletContext().getRealPath("pics/" + onlyFile);
        File newFolder = new File(getServletContext().getRealPath("pics"));
        if (!newFolder.exists()) {
            newFolder.mkdir();
        }
        IOUtils.copy(imageStream, new FileOutputStream("pics/" + onlyFile));

    The line causing the error is this one:

        File fileFromUri = new File(fileUri);

    I have added the rest of the code so you can see what I am trying to do.
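
    new File(URI) only accepts file: URIs; an http: URL names a remote resource, not a local file, so the filename has to be taken from the URL's path component instead. The extraction step, sketched in Python:

        from urllib.parse import urlsplit
        from os.path import basename

        url = "http://local.wasp.uwa.edu.au/~pbourke/miscellaneous/domefisheye/ladybug/fish4.jpg"

        # Take the name from the URL's path component; no filesystem
        # object is involved until the download is actually saved.
        only_file = basename(urlsplit(url).path)
        print(only_file)  # fish4.jpg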

    Read the article

  • Do you know your DNS server?

    - by John Paul Cook
    If you don’t know whether your DNS server is valid, you need to find out before July 9. The FBI found rogue DNS servers and replaced them with clean, safe DNS servers to protect the public. These safe, clean servers will be turned off on July 9, 2012. If your computer was compromised to use the rogue servers, it will stop resolving DNS queries on July 9 when the clean servers are turned off. The FBI has provided full technical details at http://www.fbi.gov/news/stories/2011/november/malware_110911/DNS-changer-malware.pdf

    Read the article

  • PHP - DOM class - numbered entities and encodings problem

    - by user343607
    Hi guys, I'm having some difficult with PHP DOM class. I am making a sitemap script, and I need the output of $doc-saveXML() to be like <?xml version="1.0" encoding="UTF-8"?> <root> <url> <loc>http://www.somesite.com/servi&#xE7;os/redesign</loc> </url> </root> or <?xml version="1.0" encoding="UTF-8"?> <root> <url> <loc>http://www.somesite.com/servi&#231;os/redesign</loc> </url> </root> but I am getting: <?xml version="1.0" encoding="UTF-8"?> <root> <url> <loc>http://www.somesite.com/servi&amp;#xE7;os/redesign</loc> </url> </root> This is the closet I could get, using a replace named to numbered entities function. I was also able to reproduce <?xml version="1.0" ?> <root> <url> <loc>http://www.somesite.com/servi&amp;#xE7;os/redesign</loc> </url> </root> But without the encoding specified. The best solution (the way I think the code should be written) would be: <?php $myArray = array(); // do some stuff to populate the with URL strings $doc = new DOMDocument('1.0', 'UTF-8'); // here we modify some property. Maybe is the answer I am looking for... $urlset = doc->createElement("urlset"); $urlset = $doc->appendChild($urlset); foreach($myArray as $address) { $url = $doc->createElement("url"); $url = $urlset->appendChild($url); $loc = $doc->createElement("loc"); $loc = $url->appendChild($loc); $valueContent = $doc->createTextNode($value); $valueContent = $loc->appendChild($address); } echo $doc->saveXML(); ?> Notes: Server response header contains charset as UTF-8; PHP script is saved in UTF-8; URLs read are UTF-8 strings; Above script contains encoding declaration on DOMDocument constructor, and does not use any convert functions, like htmlentities, urlencode, utf8_encode... I've tried changing the DOMDocument properties DOMDocument::$resolveExternals and DOMDocument::$substituteEntities values. None combinations worked. And yes, I know I can made all process without specifying the character set on DOMDocument constructor, dump string content into a variable and make a very simple string substitution with string replace functions. This works. But I would like to know where I am slipping, how can this be made using native API's and settings, or even if this is possible. Thanks in advance.

    Read the article

  • How do I use .htaccess to redirect to a URL containing HTTP_HOST?

    - by Jon Cram
    Problem: I need to redirect some short convenience URLs to longer actual URLs. The site in question uses a set of subdomains to identify a set of development or live versions. I would like the URL to which certain requests are redirected to include the HTTP_HOST, such that I don't have to create a custom .htaccess file for each host.

    Host-specific example (snipped from the .htaccess file):

        Redirect /terms http://support.dev01.example.com/articles/terms/

    This example works fine for the development version running at dev01.example.com. If I use the same line in the main .htaccess file for the development version running under dev02.example.com, I'd end up being redirected to the wrong place.

    Ideal rule (not sure of the correct syntax):

        Redirect /terms http://support.{HTTP_HOST}/articles/terms/

    This rule does not work and merely serves as an example of what I'd like to achieve. I could then use the exact same rule under many different hosts and get the correct result.

    Answers? Can this be done with mod_alias, or does it require the more complex mod_rewrite? How can this be achieved using mod_alias or mod_rewrite? I'd prefer a mod_alias solution if possible.

    Clarifications: I'm not staying on the same server. I'd like:

        http://example.com/terms/             -> http://support.example.com/articles/terms/
        https://secure.example.com/terms/     -> http://support.example.com/articles/terms/
        http://dev.example.com/terms/         -> http://support.dev.example.com/articles/terms/
        https://secure.dev.example.com/terms/ -> http://support.dev.example.com/articles/terms/

    I'd like to be able to use the same rule in the .htaccess file on both example.com and dev.example.com. In this situation I'd need to be able to refer to the HTTP_HOST as a variable rather than specifying it literally in the URL to which requests are redirected. I'll investigate the HTTP_HOST parameter as suggested, but I was hoping for a working example.

    Read the article

  • Should I be using callbacks or should I override attributes?

    - by ryeguy
    What is the more "rails-like"? If I want to modify a model's property when it's set, should I do this: def url=(url) #remove session id self[:url] = url.split('?s=')[0] end or this? before_save do |record| #remove session id record.url = record.url.split('?s=')[0] end Is there any benefit for doing it one way or the other? If so, why? If not, which one is generally more common?

    Read the article

  • How to combine the index.php RewriteRule with a query rewrite and avoid a 404 Server Error?

    - by Binyamin
    Both RewriteRules work fine, except when used together.

    1. Remove all queries except the ?callback=.* query:

        # /api?callback=foo has no rewrite
        # /whatever?whatever=foo gets a 301 redirect to /whatever
        RewriteCond %{THE_REQUEST} ^[A-Z]{3,9}\ /([^?#\ ]*)\?[^\ ]*\ HTTP/ [NC]
        RewriteCond %{REQUEST_URI}?%{QUERY_STRING} !/api(/.*)?\?callback=.*
        RewriteRule .*$ %{REQUEST_URI}? [R=301,L]

    2. Rewrite index.php queries api and url=$1:

        # /api returns data index.php?api&url=
        # /api/whatever returns data index.php?api&url=whatever
        RewriteRule ^api(?:/([^/]*))?$ index.php?api&url=$1 [QSA,L]
        RewriteRule ^([^.]*)$ index.php?url=$1 [QSA,L]

    Is there any valid combination of these RewriteRules that keeps their functionality? The following combination returns Server Error 404 for /api/?callback=foo:

        # Remove all queries except query "callback"
        RewriteCond %{THE_REQUEST} ^[A-Z]{3,9}\ /([^?#\ ]*)\?[^\ ]*\ HTTP/ [NC]
        RewriteCond %{REQUEST_URI}?%{QUERY_STRING} !/api(/.*)?\?callback=.*
        RewriteRule .*$ %{REQUEST_URI}? [R=301,L]

        # Rewrite index.php queries
        RewriteCond %{REQUEST_URI}?%{QUERY_STRING} !/api(/.*)?\?callback=.*
        # Server Error 404 on /api/?callback=foo and /api/whatever?callback=foo
        RewriteRule ^api(?:/([^/]*))?$ index.php?api&url=$1 [QSA,L]

        RewriteCond %{REQUEST_URI}?%{QUERY_STRING} !/api(/.*)?\?callback=.*
        RewriteRule ^([^.]*)$ index.php?url=$1 [QSA,L]

    Read the article
