Search Results

Search found 21298 results on 852 pages for 'www mechanize'.

  • Squid proxy not serving modified html content

    - by Matthew
    I'm trying to use squid to modify the page content of web page requests. I followed the upside-down-ternet tutorial, which showed instructions for how to flip images on pages. I need to change the actual html of the page. I've been trying to do the same thing as in the tutorial, but instead of editing the image I'm trying to edit the html page. Below is a php script I'm using to try to do it. All jpg images get flipped, but the content on the page does not get edited. The edited index.html files written contain the edited content, but the pages the users receive don't contain the edited content.

      #!/usr/bin/php
      <?php
      $temp = array();
      while ( $input = fgets(STDIN) ) {
          $micro_time = microtime();
          // Split the output (space delimited) from squid into an array.
          $temp = split(' ', $input);

          //Flip jpg images, this works correctly
          if (preg_match("/.*\.jpg/i", $temp[0])) {
              system("/usr/bin/wget -q -O /var/www/cache/$micro_time.jpg ". $temp[0]);
              system("/usr/bin/mogrify -flip /var/www/cache/$micro_time.jpg");
              echo "http://127.0.0.1/cache/$micro_time.jpg\n";
          }
          //Don't edit files that are obviously not html. $temp[0] contains url of file to get
          elseif (preg_match("/(jpg|png|gif|css|js|\(|\))/i", $temp[0], $matches)) {
              echo $input;
          }
          //Otherwise, could be html (e.g. `wget http://www.google.com` downloads index.html)
          else {
              $time = time() . microtime();           //For unique directory names
              $time = preg_replace("/ /", "", $time); //Simplify things by removing the spaces
              mkdir("/var/www/cache/". $time);        //Create unique folder
              system("/usr/bin/wget -q --directory-prefix=\"/var/www/cache/$time/\" ". $temp[0]);
              $filename = system("ls /var/www/cache/$time/"); //Get filename of downloaded file

              //File is html, edit the content (this does not work)
              if (preg_match("/.*\.html/", $filename)) {
                  //Get the html file contents
                  $contentfh = fopen("/var/www/cache/$time/". $filename, 'r');
                  $content = fread($contentfh, filesize("/var/www/cache/$time/". $filename));
                  fclose($contentfh);

                  //Edit the html file contents
                  $content = preg_replace("/<\/body>/i", "<!-- content served by proxy --></body>", $content);

                  //Write the edited file
                  $contentfh = fopen("/var/www/cache/$time/". $filename, 'w');
                  fwrite($contentfh, $content);
                  fclose($contentfh);

                  //Return the edited page
                  echo "http://127.0.0.1/cache/$time/$filename\n";
              }
              //Otherwise file is not html, don't edit
              else {
                  echo $input;
              }
          }
      }
      ?>

  • CSS: series of floated elements without wrapping but rather scrolling horizontally

    - by tybro0103
    I'm working on an album viewer. At the top I want a horizontal container of all the image thumbnails. Right now all the thumbnails are wrapped in a div with float:left. I'm trying to figure out how to keep these thumbnails from wrapping to the next line when there are too many, but rather stay all in one horizontal row and use the scrollbar. Here's my code (I don't want to use tables):

      <style type="text/css">
          div { overflow:hidden; }
          #frame { width:600px; padding:8px; border:1px solid black; }
          #thumbnails_container { height:75px; border:1px solid black; padding:4px; overflow-x:scroll; }
          .thumbnail { border:1px solid black; margin-right:4px; width:100px; height:75px; float:left; }
          .thumbnail img { width:100px; height:75px; }
          #current_image_container img { width:600px; }
      </style>

      <div id="frame">
          <div id="thumbnails_container">
              <div class="thumbnail"><img src="http://www.blueridgexotics.com/images/glry-pixie-bob-kittens.jpg" alt="foo" /></div>
              <div class="thumbnail"><img src="http://www.blueridgexotics.com/images/PB-KitJan08-1.jpg" alt="foo" /></div>
              <div class="thumbnail"><img src="http://www.blueridgexotics.com/images/PB-KitJan08-3.jpg" alt="foo" /></div>
              <div class="thumbnail"><img src="http://www.blueridgexotics.com/images/PB-Jan08.jpg" alt="foo" /></div>
              <div class="thumbnail"><img src="http://www.blueridgexotics.com/images/gallery3.jpg" alt="foo" /></div>
              <div class="thumbnail"><img src="http://www.blueridgexotics.com/images/gallery4.jpg" alt="foo" /></div>
              <div class="thumbnail"><img src="http://www.blueridgexotics.com/Gallery-Pics/kitten3.jpg" alt="foo" /></div>
              <div class="thumbnail"><img src="http://www.blueridgexotics.com/Gallery-Pics/kitten1.jpg" alt="foo" /></div>
          </div>
          <div id="current_image_container">
              <img src="http://www.whitetailrun.com/Pixiebobs/PBkittenpics/shani-kits/Cats0031a.jpg" alt="foo" />
          </div>
      </div>

  • Use a subdirectory as root with htaccess in Apache 1.3

    - by Andrew
    I'm trying to deploy a site generated with Jekyll and would like to keep the site in its own subfolder on my server to keep everything more organized. Essentially, I'd like to use the contents of /jekyll as the root unless a similarly named file exists in the actual web root. So something like /jekyll/sample-page/ would show as http://www.example.com/sample-page/, while something like /other-folder/ would display as http://www.example.com/other-folder. My test server runs Apache 2.2 and the following .htaccess (adapted from http://gist.github.com/97822) works flawlessly:

      RewriteEngine On

      # Map http://www.example.com to /jekyll.
      RewriteRule ^$ /jekyll/ [L]

      # Map http://www.example.com/x to /jekyll/x unless there is a x in the web root.
      RewriteCond %{REQUEST_FILENAME} !-f
      RewriteCond %{REQUEST_FILENAME} !-d
      RewriteCond %{REQUEST_URI} !^/jekyll/
      RewriteRule ^(.*)$ /jekyll/$1

      # Add trailing slash to directories without them so DirectoryIndex works.
      # This does not expose the internal URL.
      RewriteCond %{REQUEST_FILENAME} -d
      RewriteCond %{REQUEST_FILENAME} !/$
      RewriteRule ^(.*)$ $1/

      # Disable auto-adding slashes to directories without them, since this happens
      # after mod_rewrite and exposes the rewritten internal URL, e.g. turning
      # http://www.example.com/about into http://www.example.com/jekyll/about.
      DirectorySlash off

    However, my production server runs Apache 1.3, which doesn't allow DirectorySlash. If I disable it, the server gives a 500 error because of internal redirect overload. If I comment out the last section of RewriteConds and rules:

      RewriteCond %{REQUEST_FILENAME} -d
      RewriteCond %{REQUEST_FILENAME} !/$
      RewriteRule ^(.*)$ $1/

    …everything mostly works: http://www.example.com/sample-page/ displays the correct content. However, if I omit the trailing slash, the URL in the address bar exposes the real internal URL structure: http://www.example.com/jekyll/sample-page/

    What is the best way to account for directory slashes in Apache 1.3, where useful tools like DirectorySlash don't exist? How can I use the /jekyll/ directory as the site root without revealing the actual URL structure?

    Edit: After a ton of research into Apache 1.3, I've found that this problem is essentially a combination of two different issues listed in the Apache 1.3 URL Rewriting Guide. I have a (partially) moved DocumentRoot, which in theory would be taken care of with something like this:

      RewriteRule ^/$ /e/www/ [R]

    I also have the infamous "Trailing Slash Problem," which is solved by setting the RewriteBase (as was suggested in one of the responses below):

      RewriteBase /~quux/
      RewriteRule ^foo$ foo/ [R]

    The problem is combining the two. Moving the document root doesn't (can't?) use RewriteBase, while fixing trailing slashes requires(?) it… Hmm…

  • htaccess redirect from root to subfolder and then mask the url?

    - by KiwiCoder
    Hello all, Two things:

    Firstly - I have version 2 of a website located in a folder named v2, and I want to redirect any traffic that is NOT a child of the v2 folder to www.example.com/v2. The old site located in the root was created in iWeb and has a LOT of subfolders and sub-subfolders. So:

      www.example.com/v2 = New site

      www.example.com/Page.html
      www.example.com/category/Page.html
      www.example.com/category/subcategory/Page.html = All generic examples of what I need to redirect.

    Secondly, and I don't know if this is possible, I want to hide /v2/ in the URL, so that visitors will just see www.example.com/page even though they are actually on www.example.com/v2/page. Links are hardcoded to the v2 folder, like so: <a href="v2/contact.html"

    Any help is MOST appreciated. I've spent hours trying to figure this out, but I'm only just learning about htaccess and regular expressions, and am totally confused. Thanks so much!

  • Is there an API for booking flights and/or cruises?

    - by zeckdude
    I'm creating a website for a travel agent. She wants to include a feature where she can let the user book a flight or cruise (especially the latter) from her website via an API. I would prefer a free API that provides this functionality, but I am willing to look at quality commercial APIs if they offer the services I need. Here are some I have already found (but I am not sure if they do what I need):

    Free
      Cleartrip API - http://www.programmableweb.com/api/cleartrip
      Vianet API - http://www.programmableweb.com/api/vianet

    Commercial
      AgentFactor Travel API - http://www.programmableweb.com/api/agentfactor-travel
      Rezgo API - http://www.programmableweb.com/api/rezgo
      TravelFusion API - http://www.programmableweb.com/api/travelfusion
      TravelPort API - http://www.programmableweb.com/api/travelport

    Does anyone know any other APIs/services (free & commercial) that can help me do what I need? If any of the above APIs does what I need and you recommend it, please tell me which one and why. Thank you.

  • bbcode hyperlink issue (help!!)

    - by Jorm
    I'm having an annoying problem :) I use regexes from this: http://forums.codecharge.com/posts.php?post_id=77123

    If you enter [url]www.bob.com[/url] it leads to http://localhost/test/www.bobsbar.com

    So I added http:// before $1 in the replacement. That fixes it, but then [url]http://www.bob.com[/url] will lead to http://http://www.bobsbar.com

    How would you fix this? I want my users to be able to post links with AND without http://, and I want it to redirect to the site -_- Hope you understand this. Jorm

    Edit:

      function bbcode_format($str) {
          $str = htmlentities($str);

          $find = array(
              '/\[url\](.*?)\[\/url\]/is',               // hyperlink
              '/\[url\](http[s]?:\/\/)(.*?)\[\/url\]/is' // hyperlink http-protocol
          );

          $replace = array(
              '<a href="$1" rel="nofollow" title="$1">$1</a>',
              '<a href="$1$2" rel="nofollow" title="$2">$2 THIS WORKS</a>'
          );

          $str = preg_replace($find, $replace, $str);

          return $str;
      }

    Both www.bob.com and http://www.bob.com use the first replacement.
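
    A minimal sketch (not from the original post, assumes PHP 5.3+ for the closure) of one way to handle both cases with a single pattern: a preg_replace_callback that prepends http:// only when the captured address doesn't already start with a protocol. The function name is made up for illustration.

      <?php
      // Sketch: one [url]...[/url] pattern for both cases; the callback adds
      // "http://" only when the captured address has no protocol yet.
      function bbcode_format_links($str) {
          $str = htmlentities($str);

          return preg_replace_callback('/\[url\](.*?)\[\/url\]/is', function ($m) {
              $target = $m[1];
              if (!preg_match('/^https?:\/\//i', $target)) {
                  $target = 'http://' . $target;   // prepend protocol only if missing
              }
              return '<a href="' . $target . '" rel="nofollow" title="' . $m[1] . '">' . $m[1] . '</a>';
          }, $str);
      }

      // Both forms end up pointing at http://www.bob.com
      echo bbcode_format_links('[url]www.bob.com[/url] and [url]http://www.bob.com[/url]');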

  • Problem getting Java Streams in HP Tandem (Non-Stop)

    - by AndreaG
    Hi. We are porting a simple Java application between Tandem NonStop systems, from G-Series to H-Series. The Java version is 1.5.0_02. When performing basic I/O tasks like getting an output stream from or opening a client socket, we receive exceptions like java.io.IOException: Value out of range or java.net.SocketException: Value out of range ("value out of range" is Tandem native jargon for, well, quite everything I suppose). Has anybody got similar issues, i.e. I/O corruption while, for example, messing with JNI? I suppose there is something wrong with the system, but where might it be? Thank you.

    EDIT: adding snippets as requested.

    Sample snippet (a) - using Runtime.exec() (adapted):

      Properties envVars = new Properties();
      Process p = r.exec("/bin/env");
      envVars.load(p.getInputStream());

    Stack trace (a):

      java.io.IOException: Value out of range (errno:4034)
          at java.io.FileInputStream.readBytes(Native Method)
          at java.io.FileInputStream.read(FileInputStream.java:194)
          at java.lang.UNIXProcess$DeferredCloseInputStream.read(UNIXProcess.java:221)
          at java.io.BufferedInputStream.read1(BufferedInputStream.java:254)
          at java.io.BufferedInputStream.read(BufferedInputStream.java:313)
          at sun.nio.cs.StreamDecoder$CharsetSD.readBytes(StreamDecoder.java:411)
          at sun.nio.cs.StreamDecoder$CharsetSD.implRead(StreamDecoder.java:453)
          at sun.nio.cs.StreamDecoder.read(StreamDecoder.java:183)
          at java.io.InputStreamReader.read(InputStreamReader.java:167)
          at java.io.BufferedReader.fill(BufferedReader.java:136)
          at java.io.BufferedReader.readLine(BufferedReader.java:299)
          at java.io.BufferedReader.readLine(BufferedReader.java:362)
          at util.Environment.getVariables(Environment.java:39)

    The last line fails, and output gets redirected to the console (!).

    Sample snippet (b) - using HttpURLConnection:

      public WorkerThread (HttpURLConnection conn, String requestData, Logger logger) {
          this.conn = conn;
          ...
      }

      public void run () {
          OutputStream out = conn.getOutputStream ();
      }

    Stack trace (b):

      java.net.SocketException: Value out of range (errno:4034)
          at java.net.PlainSocketImpl.socketConnect(Native Method)
          at java.net.PlainSocketImpl.doConnect(PlainSocketImpl.java:333)
          at java.net.PlainSocketImpl.connectToAddress(PlainSocketImpl.java:195)
          at java.net.PlainSocketImpl.connect(PlainSocketImpl.java:182)
          at java.net.Socket.connect(Socket.java:507)
          at sun.net.NetworkClient.doConnect(NetworkClient.java:155)
          at sun.net.www.http.HttpClient.openServer(HttpClient.java:365)
          at sun.net.www.http.HttpClient.openServer(HttpClient.java:477)
          at sun.net.www.protocol.https.HttpsClient.<init>(HttpsClient.java:280)
          at sun.net.www.protocol.https.HttpsClient.New(HttpsClient.java:337)
          at sun.net.www.protocol.https.AbstractDelegateHttpsURLConnection.getNewHttpClient(AbstractDelegateHttpsURLConnection.java:176)
          at sun.net.www.protocol.http.HttpURLConnection.plainConnect(HttpURLConnection.java:736)
          at sun.net.www.protocol.https.AbstractDelegateHttpsURLConnection.connect(AbstractDelegateHttpsURLConnection.java:162)
          at sun.net.www.protocol.http.HttpURLConnection.getOutputStream(HttpURLConnection.java:828)
          at sun.net.www.protocol.https.HttpsURLConnectionImpl.getOutputStream(HttpsURLConnectionImpl.java:230)

    Case (a) can be avoided because it was a workaround for other issues with a previous JRE version (!), but the same behaviour with sockets is really nasty.

  • Controlling VirtualBox internet access?

    - by HandyGandy
    I am finally going through the process of moving my XP into a vbox (host linux). The thing is that I am migrating a virtually clean install. So aside from the occasional antivirus scan, I want to make sure that my XP is not sending malware data (keystroke logs, spam, etc.) out silently (and thus having picked up some virus). To that end I want to limit XP to contacting my LAN and a few internet sites (mainly sites that require proprietary Windows-only software to access, AV sites and Windows Update). I want XP to only access preapproved addresses. If it is trying to contact a non-approved address, I want it somehow logged and access restricted until I allow access. I also don't want to have to decide whether to allow access to a site at my leisure.

    To keep things clear let me give an example: I start my vbox/XP (which I call MYXP) running on my linux box (called MYLINUX, connecting to the net through a Linksys WRT54G) and it connects via samba to my LAN (since my LAN seems to be possessed of every evil thing, its address is 192.168.666.). At the moment my configuration is set so that I allow MYXP to access 192.168.666 and www.MYANTIVIRUS_UPDATES.com and www.MS_UPDATES.com. Then on the VM I start a program which tries to make a connection to www.playmygame.com. www.playmygame.com is on my preapproved list so the connection goes through. Later I check attempted accesses and discover that it also tried to connect to www.mygame_high_scores.com. I figure this is OK so I add www.mygame_high_scores.com to my approved list. Later, I again check addresses and discover that my VM/XP tried to access www.mygame_steals_your_identity.com. I do some checking and discover the address is registered to someone in Kiev, Nigeria. Since this doesn't sound kosher to me, I replace the MYXP VM with one that was backed up before I installed mygame. I remove www.playmygame.com and www.mygame_high_scores.com from my access list for MYXP.

    It should accomplish this with little overhead. When I am not running the VM, ideally it should not have any overhead. Suggestions?

  • .htaccess add hidden php get variable for language selection

    - by Eric Di Bari
    I have a multiple-language website, and I use a PHP GET variable to set the cookie for the language setting. I have multiple subfolders (http://www.site.com/es and http://www.site.com/de) that each have a respective .htaccess file. When accessing these folders, the .htaccess file does this to "silently" redirect the user and add the appropriate PHP variable:

      Options +FollowSymlinks
      RewriteEngine on
      RewriteOptions MaxRedirects=10
      rewriterule ^http://www.site.com/es/$ http://www.site.com/?l=es [P,R=301]
      rewriterule ^(.*)$ http://www.site.com/$1?l=es [P,R=301]

    When someone accesses the root directory, http://www.site.com, I want to add a ?l=en suffix "silently" to the URL. How do I do that? Thanks.
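
    A minimal PHP-side sketch (an alternative assumption, not from the original post): instead of rewriting the root URL, the script that reads the l parameter could simply fall back to en when the parameter is missing. The cookie name and allowed values here are made up for illustration.

      <?php
      // Sketch: default the language to "en" when no ?l= parameter is present,
      // so the root URL needs no extra rewrite rule at all.
      $allowed = array('en', 'es', 'de');
      $lang = (isset($_GET['l']) && in_array($_GET['l'], $allowed, true)) ? $_GET['l'] : 'en';

      setcookie('site_language', $lang, time() + 60 * 60 * 24 * 30, '/'); // 30 days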

  • Create a bitonal image for OCR using Image Magick

    - by Greg
      nice /usr/local/bin/convert -colors 2 -colorspace gray -compress group4 /var/www/html/uploads/pokemon.jpg /var/www/html/uploads/pokemontest.jpg

    This command worked with a really OLD version of Image Magick. With the newest version this method produces a completely black image.

      nice /usr/local/bin/convert -colorspace gray -compress group4 /var/www/html/uploads/pokemon.jpg /var/www/html/uploads/pokemontest.jpg
      nice /usr/local/bin/convert -colors 2 /var/www/html/uploads/pokemontest.jpg /var/www/html/uploads/pokemontestfinal.jpg

    This results in a bitonal gray and black image, but it's really rough. NOT clean at all.

  • Getting XML data from a external page and parsing it with PHP

    - by James P
    I'm trying to create a database of World of Warcraft gems. If I go to this page:

      http://www.wowarmory.com/search.xml?fl[source]=all&fl[type]=gems&fl[subTp]=purple&searchType=items

    and go to View Source in Firefox, I see a tonne of XML data which is exactly what I want. I wrote up this quick script to try and parse some of it:

      <?php
      $gemUrls = array(
          'Blue'      => 'http://www.wowarmory.com/search.xml?fl[source]=all&fl[type]=gems&fl[subTp]=blue&searchType=items',
          'Red'       => 'http://www.wowarmory.com/search.xml?fl[source]=all&fl[type]=gems&fl[subTp]=red&searchType=items',
          'Yellow'    => 'http://www.wowarmory.com/search.xml?fl[source]=all&fl[type]=gems&fl[subTp]=yellow&searchType=items',
          'Meta'      => 'http://www.wowarmory.com/search.xml?fl[source]=all&fl[type]=gems&fl[subTp]=meta&searchType=items',
          'Green'     => 'http://www.wowarmory.com/search.xml?fl[source]=all&fl[type]=gems&fl[subTp]=green&searchType=items',
          'Orange'    => 'http://www.wowarmory.com/search.xml?fl[source]=all&fl[type]=gems&fl[subTp]=orange&searchType=items',
          'Purple'    => 'http://www.wowarmory.com/search.xml?fl[source]=all&fl[type]=gems&fl[subTp]=purple&searchType=items',
          'Prismatic' => 'http://www.wowarmory.com/search.xml?fl[source]=all&fl[type]=gems&fl[subTp]=purple&searchType=items'
      );

      // Get blue gems
      $blueGems = file_get_contents($gemUrls['Blue']);
      $xml = new SimpleXMLElement($blueGems);
      echo $xml->items[0]->item;
      ?>

    But I get a load of errors like this:

      Warning: SimpleXMLElement::__construct() [simplexmlelement.--construct]: Entity: line 20: parser error : xmlParseEntityRef: no name in C:\xampp\htdocs\WoW\index.php on line 19
      Warning: SimpleXMLElement::__construct() [simplexmlelement.--construct]: if(Browser.iphone && Number(getcookie2("mobIntPageVisits")) < 3 && getcookie2( in C:\xampp\htdocs\WoW\index.php on line 19

    I'm not sure what's wrong. I think file_get_contents() is bringing back data that isn't XML, maybe some Javascript files judging by the iPhone parts in the errors. Is there any way to just get back the XML from that page? Without any HTML or anything? Thanks :)
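
    A hedged sketch of one thing to try (an assumption, not a confirmed fix): the armory appears to serve styled XML to "real" browsers and HTML to other clients, so requesting the page with a browser-like User-Agent via a stream context, and collecting libxml errors instead of warnings, makes it easier to see what actually comes back. The element and attribute names iterated at the end are assumptions about the feed's layout.

      <?php
      // Sketch: fetch with a browser-like User-Agent and inspect parse errors.
      $url = 'http://www.wowarmory.com/search.xml?fl[source]=all&fl[type]=gems&fl[subTp]=blue&searchType=items';

      $context = stream_context_create(array(
          'http' => array(
              'header' => "User-Agent: Mozilla/5.0 (Windows NT 5.1; rv:1.9.1) Gecko/20090624 Firefox/3.5\r\n",
          ),
      ));

      $body = file_get_contents($url, false, $context);

      libxml_use_internal_errors(true);          // collect parse errors instead of emitting warnings
      $xml = simplexml_load_string($body);

      if ($xml === false) {
          foreach (libxml_get_errors() as $error) {
              echo trim($error->message), "\n";  // see what the response really was
          }
      } else {
          // Element/attribute names below are assumptions, adjust to the real feed.
          foreach ($xml->xpath('//item') as $item) {
              echo $item['name'], "\n";
          }
      }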

  • While creating an archetype, getting the following error

    - by munna
      D:\Training\workspace\vppsource>mvn archetype:generate -B -DarchetypeGroupId=org.appfuse.archetypes -DarchetypeArtifactId=appfuse-modular-struts-archetype -DarchetypeVersion=2.1.0-M1 -DgroupId=com.vmware -DartifactId=vpp

      [INFO] Scanning for projects...
      [INFO] Searching repository for plugin with prefix: 'archetype'.
      [INFO] ------------------------------------------------------------------------
      [INFO] Building Maven Default Project
      [INFO]    task-segment: [archetype:generate] (aggregator-style)
      [INFO] ------------------------------------------------------------------------
      [INFO] Preparing archetype:generate
      [INFO] No goals needed for project - skipping
      [INFO] [archetype:generate {execution: default-cli}]
      [INFO] Generating project in Batch mode
      [WARNING] Error reading archetype catalog http://repo1.maven.org/maven2
      org.apache.maven.wagon.TransferFailedException: Error transferring file: Connection timed out: connect
          at org.apache.maven.wagon.providers.http.LightweightHttpWagon.fillInputData(LightweightHttpWagon.java:143)
          at org.apache.maven.wagon.StreamWagon.getInputStream(StreamWagon.java:116)
          at org.apache.maven.wagon.StreamWagon.getIfNewer(StreamWagon.java:88)
          at org.apache.maven.wagon.StreamWagon.get(StreamWagon.java:61)
          at org.apache.maven.archetype.source.RemoteCatalogArchetypeDataSource.getArchetypeCatalog(RemoteCatalogArchetypeDataSource.java:97)
          at org.apache.maven.archetype.DefaultArchetypeManager.getRemoteCatalog(DefaultArchetypeManager.java:195)
          at org.apache.maven.archetype.DefaultArchetypeManager.getRemoteCatalog(DefaultArchetypeManager.java:184)
          at org.apache.maven.archetype.ui.DefaultArchetypeSelector.getArchetypesByCatalog(DefaultArchetypeSelector.java:278)
          at org.apache.maven.archetype.ui.DefaultArchetypeSelector.selectArchetype(DefaultArchetypeSelector.java:69)
          at org.apache.maven.archetype.mojos.CreateProjectFromArchetypeMojo.execute(CreateProjectFromArchetypeMojo.java:186)
          at org.apache.maven.plugin.DefaultPluginManager.executeMojo(DefaultPluginManager.java:490)
          at org.apache.maven.lifecycle.DefaultLifecycleExecutor.executeGoals(DefaultLifecycleExecutor.java:694)
          at org.apache.maven.lifecycle.DefaultLifecycleExecutor.executeStandaloneGoal(DefaultLifecycleExecutor.java:569)
          at org.apache.maven.lifecycle.DefaultLifecycleExecutor.executeGoal(DefaultLifecycleExecutor.java:539)
          at org.apache.maven.lifecycle.DefaultLifecycleExecutor.executeGoalAndHandleFailures(DefaultLifecycleExecutor.java:387)
          at org.apache.maven.lifecycle.DefaultLifecycleExecutor.executeTaskSegments(DefaultLifecycleExecutor.java:284)
          at org.apache.maven.lifecycle.DefaultLifecycleExecutor.execute(DefaultLifecycleExecutor.java:180)
          at org.apache.maven.DefaultMaven.doExecute(DefaultMaven.java:328)
          at org.apache.maven.DefaultMaven.execute(DefaultMaven.java:138)
          at org.apache.maven.cli.MavenCli.main(MavenCli.java:362)
          at org.apache.maven.cli.compat.CompatibleMain.main(CompatibleMain.java:60)
          at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
          at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
          at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
          at java.lang.reflect.Method.invoke(Method.java:597)
          at org.codehaus.classworlds.Launcher.launchEnhanced(Launcher.java:315)
          at org.codehaus.classworlds.Launcher.launch(Launcher.java:255)
          at org.codehaus.classworlds.Launcher.mainWithExitCode(Launcher.java:430)
          at org.codehaus.classworlds.Launcher.main(Launcher.java:375)
      Caused by: java.net.ConnectException: Connection timed out: connect
          at java.net.PlainSocketImpl.socketConnect(Native Method)
          at java.net.PlainSocketImpl.doConnect(PlainSocketImpl.java:333)
          at java.net.PlainSocketImpl.connectToAddress(PlainSocketImpl.java:195)
          at java.net.PlainSocketImpl.connect(PlainSocketImpl.java:182)
          at java.net.SocksSocketImpl.connect(SocksSocketImpl.java:366)
          at java.net.Socket.connect(Socket.java:529)
          at java.net.Socket.connect(Socket.java:478)
          at sun.net.NetworkClient.doConnect(NetworkClient.java:163)
          at sun.net.www.http.HttpClient.openServer(HttpClient.java:394)
          at sun.net.www.http.HttpClient.openServer(HttpClient.java:529)
          at sun.net.www.http.HttpClient.<init>(HttpClient.java:233)
          at sun.net.www.http.HttpClient.New(HttpClient.java:306)
          at sun.net.www.http.HttpClient.New(HttpClient.java:323)
          at sun.net.www.protocol.http.HttpURLConnection.getNewHttpClient(HttpURLConnection.java:860)
          at sun.net.www.protocol.http.HttpURLConnection.plainConnect(HttpURLConnection.java:801)
          at sun.net.www.protocol.http.HttpURLConnection.connect(HttpURLConnection.java:726)
          at sun.net.www.protocol.http.HttpURLConnection.getInputStream(HttpURLConnection.java:1049)
          at java.net.HttpURLConnection.getResponseCode(HttpURLConnection.java:373)
          at org.apache.maven.wagon.providers.http.LightweightHttpWagon.fillInputData(LightweightHttpWagon.java:115)
          ... 28 more
      [WARNING] No archetype found in Remote catalog. Defaulting to internal Catalog
      [INFO] ------------------------------------------------------------------------
      [ERROR] BUILD FAILURE
      [INFO] ------------------------------------------------------------------------
      [INFO]
      [INFO] ------------------------------------------------------------------------
      [INFO] For more information, run Maven with the -e switch
      [INFO] ------------------------------------------------------------------------
      [INFO] Total time: 46 seconds
      [INFO] Finished at: Wed Jun 09 16:11:07 IST 2010
      [INFO] Final Memory: 11M/28M
      [INFO] ------------------------------------------------------------------------

  • How can I efficiently group a large list of URLs by their host name in Perl?

    - by jesper
    I have a text file that contains over one million URLs. I have to process this file in order to assign URLs to groups, based on host address:

      {
          'http://www.ex1.com' => ['http://www.ex1.com/...', 'http://www.ex1.com/...', ...],
          'http://www.ex2.com' => ['http://www.ex2.com/...', 'http://www.ex2.com/...', ...]
      }

    My current basic solution takes about 600 MB of RAM to do this (the size of the file is about 300 MB). Could you provide some more efficient ways? My current solution simply reads line by line, extracts the host address by regex and puts the URL into a hash.

    EDIT: Here is my implementation (I've cut off irrelevant things):

      while ($line = <STDIN>) {
          chomp($line);
          $line =~ /(http:\/\/.+?)(\/|$)/i;
          $host = "$1";
          push @{$urls{$host}}, $line;
      }
      store \%urls, 'out.hash';

  • Mod rewrite with 3 parameters?

    - by Axel
    Hello, I tried tons of methods to figure out how to make this mod rewrite work, but I was completely unsuccessful. I want a .htaccess code that rewrites in the following way:

      http://www.mydomain.com/apple/upcoming/2  ->  http://www.mydomain.com/handler.php?topic=apple&orderby=upcoming&page=2

    This is easy to do, but the problem is that not all parameters are required, so the link has different levels of parameters each time, like this:

      http://www.mydomain.com/apple/popular/2   ->  topic=apple&orderby=popular&page=2
      http://www.mydomain.com/apple/2           ->  topic=apple&orderby=&page=2
      http://www.mydomain.com/all/popular/2     ->  topic=all&orderby=popular&page=2
      http://www.mydomain.com/apple/upcoming/   ->  topic=apple&orderby=upcoming&page=

    So briefly, the URL has 3 optional parameters in one static order: (topic) (orderby) (page). Note: the ORDERBY parameter can be "popular" or "upcoming" or nothing. Thanks
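
    A sketch of one workaround (an assumption, not the asker's setup): send every request to handler.php with a single catch-all rule such as RewriteRule ^(.*)$ handler.php [QSA,L], and untangle the optional segments in PHP, relying on the fact that the middle segment can only be "popular" or "upcoming".

      <?php
      // handler.php - sketch: split the requested path into the three optional
      // parameters (topic / orderby / page) after a catch-all rewrite.
      $path     = trim(parse_url($_SERVER['REQUEST_URI'], PHP_URL_PATH), '/');
      $segments = ($path === '') ? array() : explode('/', $path);

      $topic   = isset($segments[0]) ? $segments[0] : 'all';
      $orderby = '';
      $page    = '';

      if (isset($segments[1])) {
          if ($segments[1] === 'popular' || $segments[1] === 'upcoming') {
              $orderby = $segments[1];
              if (isset($segments[2])) {
                  $page = (int) $segments[2];   // e.g. /apple/upcoming/2
              }
          } else {
              $page = (int) $segments[1];       // e.g. /apple/2 (no orderby)
          }
      }

      var_dump($topic, $orderby, $page);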

  • Can I use the CSS :visited pseudo class on 'wildcard' links?

    - by rabidpebble
    Let's say I have a site with multiple links as follows:

      www.example.com/product/1
      www.example.com/product/2
      www.example.com/product/3

    I also append tracking info to links from time to time so that I can see how my site is being used, e.g., if somebody visits the products page from the product browser I would set a ref parameter:

      www.example.com/product/1&ref=pb
      www.example.com/product/2&ref=pb
      www.example.com/product/3&ref=pb

    The problem with this is that if the user visits a link of the first type and then views a link of the second type, the :visited pseudo class doesn't seem to apply, because the browser only seems to match on exact URLs. Is there any way to have "wildcards" apply to links in this sense, so that when the user sees either the first type or the second type of link it is highlighted? Note: I cannot change this "ref" architecture; it is inherited.

  • Routing configuration in cakephp

    - by ShiVik
    Hello all, I am trying to implement routing in CakePHP. I want the URLs to be mapped like this:

      www.example.com/nodes/main   ->  www.example.com/main
      www.example.com/nodes/about  ->  www.example.com/about

    So for this I wrote in my config/routes.php file:

      Router::connect('/:action', array('controller' => 'nodes'));

    Now, I got the thing going, but when I click on the links, the URLs in the browser appear like

      www.example.com/nodes/main
      www.example.com/nodes/about

    Is there some way I can get the URLs to appear the way they are routed? Setting this in .htaccess or httpd.conf would be easy, but I don't have access to that. Regards, Vikram
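
    A sketch of one likely cause (an assumption, since the view code isn't shown): CakePHP's reverse routing only produces the short form when links are built from URL arrays, so hard-coded strings like "/nodes/about" keep showing up as-is. Written against CakePHP 1.x-style helpers:

      <?php
      // Sketch: build links from URL arrays so the Router can apply the
      // '/:action' route from routes.php when it prints the address.
      echo $html->link('Main',  array('controller' => 'nodes', 'action' => 'main'));
      echo $html->link('About', array('controller' => 'nodes', 'action' => 'about'));
      // With Router::connect('/:action', array('controller' => 'nodes')) in
      // place, these should render as /main and /about rather than /nodes/....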

  • Regular expression only for website

    - by Katie
    Hi, I'm new to regular expressions. I need to find just websites in some text, and I'm looking for a regular expression able to find strings like:

      www.my.home, http://my.site.it

    But this regular expression should not find strings like [email protected], or the website if it is already inside an html tag:

      <a href="http://www.my.site.com/"><span style="font-style: normal;">www.mambo-test.org</span></a>

    I tried with this one:

      \b((https?://[^ ])|(www.[^ ]))

    but it also finds the website in the href and between the tags:

      <a href="http://www.my.site.com/"><span style="font-style: normal;">www.mambo-test.org</span></a>

    and I don't know how to exclude this case.
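
    A rough sketch (an approximation, not a bulletproof answer), assuming the matching happens in PHP: a negative lookbehind skips candidates that directly follow quote, '>', '=' or word characters, i.e. URLs that already sit inside an href attribute or between the tags of an existing link.

      <?php
      // Sketch: match bare URLs but skip ones already inside markup.
      $text = 'Visit www.my.home or http://my.site.it, but not '
            . '<a href="http://www.my.site.com/"><span>www.mambo-test.org</span></a>.';

      $pattern = '~(?<!["\'>=/\w])(?:https?://|www\.)[^\s<>"\',]+~i';

      preg_match_all($pattern, $text, $matches);
      print_r($matches[0]);   // expected: www.my.home and http://my.site.it only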

  • JavaScript To Strip Page For URL

    - by Russell C.
    We have a javascript function we use to track page stats internally. However, the URLs it reports many times include the page numbers for search results pages, which we would rather not have reported. The pages that are reported are of the form:

      http://www.test.com/directory1/2
      http://www.test.com/directory1/subdirectory1/15
      http://www.test.com/directory3/1113

    Instead, we'd like the above reported as:

      http://www.test.com/directory1
      http://www.test.com/directory1/subdirectory1
      http://www.test.com/directory3

    Please note that the numbered 'directory' and 'subdirectory' names above are just for example purposes, and that the actual subdirectory names are all different, don't necessarily include numbers at the end of the directory name, and can be many levels deep. Currently our JavaScript function produces these URLs using the code:

      var page = location.hostname+document.location.pathname;

    I believe we need to use the JavaScript replace function in combination with some regex, but I'm at a complete loss as to what that would look like. Any help would be much appreciated! Thanks in advance!

  • Path issue in wordpress blog

    - by vsingh
    We are having some issues with our relative paths in WordPress. Earlier our application lived at http://www.skill-guru.com/skill, so if you typed the blog address as http://www.skill-guru.com/blog it would add a / at the end and open it as http://www.skill-guru.com/blog/.

    Now our application opens as the root of the domain, http://www.skill-guru.com. Our blog opens as http://www.skill-guru.com/blog/ but not as http://www.skill-guru.com/blog. I am not able to understand the reason. Because of this issue, search is also not working.

    Can anyone please help me understand what has changed and how it can be fixed?

  • PHP Script for redirecting to url

    - by Aruna
    Hi, I have 2 URLs, http://www.abc.com and http://www.xyz.com. I am trying to redirect to http://www.xyz.com whenever someone types http://www.abc.com in the browser.

    Also, when the user types something like http://www.abc.com/index.php?option=com_content&view=article&id=46&Itemid=55, i.e. any query next to abc.com, then I am trying to redirect to http://www.xyz.com/index.php?option=com_content&view=article&id=46&Itemid=55.

    How do I do this in PHP? Please help me.
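
    A minimal sketch (assuming a PHP script that runs for every request on www.abc.com, e.g. the site's front controller): forward the visitor to www.xyz.com while carrying over whatever path and query string was requested.

      <?php
      // Sketch: send a permanent redirect to the same path/query on the new domain.
      $target = 'http://www.xyz.com' . $_SERVER['REQUEST_URI'];

      header('HTTP/1.1 301 Moved Permanently');
      header('Location: ' . $target);
      exit;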

  • Trying to switch header image if on home page

    - by novicePrgrmr
    New to PHP, and wondering what is wrong with this code, because it's definitely not switching the header img.

    This is what it used to be:

      <a href="http://www.finegra.in"><img id="logo" alt="fine grain Logo" src="http://www.finegra.in/wp-content/uploads/2012/04/fineGRAINlogoGOOD-WEB.png">

    This is what I changed it to:

      <?php if (is_home()) { ?>
          <a href="http://www.finegra.in"><img id="logo" alt="fine grain Logo" src="http://www.finegra.in/wp-content/uploads/2012/09/finegrain-logo-sept.png"> </a>
      <?php } else { ?>
          <a href="http://www.finegra.in"><img id="logo" alt="fine grain Logo" src="http://www.finegra.in/wp-content/uploads/2012/04/fineGRAINlogoGOOD-WEB.png"> </a>
      <?php } ?>
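
    A hedged sketch of one common culprit (an assumption, since the theme settings aren't shown): in WordPress, is_home() is only true on the blog posts index, while a static front page answers to is_front_page(); checking both covers either Settings > Reading configuration.

      <?php
      // Sketch: pick the logo depending on whether this is the front page or
      // the posts index, then print a single anchor/img.
      if (is_front_page() || is_home()) {
          $logo = 'http://www.finegra.in/wp-content/uploads/2012/09/finegrain-logo-sept.png';
      } else {
          $logo = 'http://www.finegra.in/wp-content/uploads/2012/04/fineGRAINlogoGOOD-WEB.png';
      }
      ?>
      <a href="http://www.finegra.in"><img id="logo" alt="fine grain Logo" src="<?php echo $logo; ?>"></a>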

  • How to find links and modify an Html using BeautifulSoup in Python

    - by systempuntoout
    Starting from an HTML input like this:

      <p>
          <a href="http://www.foo.com">this if foo</a>
          <a href="http://www.bar.com">this if bar</a>
      </p>

    using BeautifulSoup, I would like to change this HTML into:

      <p>
          <a href="http://www.foo.com">this if foo[1]</a>
          <a href="http://www.bar.com">this if bar[2]</a>
      </p>

    saving the parsed links in a dictionary with a result like this:

      links_dict = {"1":"http://www.foo.com","2":"http://www.bar.com"}

    Is it possible to do this using BeautifulSoup? Any valid alternative?

  • Simple URL rewrite, not redirect, with .htaccess

    - by Ron
    I'm pretty inexperienced with .htaccess, so please bear with me. I have a domain, http://www.exampletest.com, which redirects to a folder at a different domain where I have hosting, i.e. http://www.differenturl.com/exampletest.

    Is there an easy mod_rewrite rule where I could have anything at http://www.differenturl.com/exampletest show up as http://www.exampletest.com? An example would be:

      http://www.differenturl.com/exampletest/user.php  ->  http://www.exampletest.com/user.php

    Any assistance is greatly appreciated. I assume this is easy, so sorry to ask such a basic question. Thanks

  • RegEx replace query to pick out wiki syntax

    - by Jeremy Thake
    I've got a string of HTML that I need to grab the "[Title|http://www.test.com]" pattern out of, e.g.:

      "dafasdfasdf, adfasd. [Test|http://www.test.com/] adf ddasfasdf [SDAF|http://www.madee.com/] assg ad"

    I need to replace "[Title|http://www.test.com]" with "Title". What is the best way to approach this? I was getting close with:

      string test = "dafasdfasdf adfasd [Test|http://www.test.com/] adf ddasfasdf [SDAF|http://www.madee.com/] assg ad ";
      string p18 = @"(\[.*?|.*?\])";
      MatchCollection mc18 = Regex.Matches(test, p18, RegexOptions.Singleline | RegexOptions.IgnoreCase);
      foreach (Match m in mc18)
      {
          string value = m.Groups[1].Value;
          string fulltag = value.Substring(value.IndexOf("["), value.Length - value.IndexOf("["));
          Console.WriteLine("text=" + fulltag);
      }

    There must be a cleaner way of getting the two values out, e.g. the "Title" bit and the URL itself. Any suggestions?
