Search Results

Search found 382 results on 16 pages for 'leachianus gecko'.


  • Javascript/jQuery: programmatically follow a link

    - by Dan
    In Javascript code, I would like to programmatically cause the browser to follow a link that's on my page. Simple case:

        <a id="foo" href="mailto:[email protected]">something</a>

        function goToBar() { $('#foo').trigger('follow'); }

    This is hypothetical, as it doesn't actually work. And no, triggering click doesn't do it. I am aware of window.location and window.open, but these differ from native link-following in some ways that matter to me: a) in the presence of a <base /> element, and b) in the case of mailto URLs. The latter in particular is significant: in Firefox at least, calling window.location.href = "mailto:[email protected]" causes the window's unload handlers to fire, whereas simply clicking a mailto link does not, as far as I can tell. I'm looking for a way to trigger the browser's default handling of links from Javascript code. Does such a mechanism exist? Toolkit-specific answers are also welcome (especially for Gecko).

  • Javascript + PHP $_POST array empty

    - by Peterim
    While trying to send a POST request via xmlhttp.open("POST", "url", true) (Javascript) to the server, I get an empty $_POST array. Firebug shows that the data is being sent. Here is the data string from Firebug:

        a=1&q=151a45a150...

    But $_POST['q'] returns nothing. The interesting thing is that file_get_contents('php://input') does have my data (the string above), but PHP somehow doesn't recognize it. Tried both $_POST and $_REQUEST; nothing works. Headers being sent:

        POST /test.php HTTP/1.1
        Host: website.com
        User-Agent: Mozilla/5.0 (Windows; U; Windows NT 6.1; en-US; rv:1.9.2.3) Gecko/20100401 Firefox/3.6.3
        Accept: text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8
        Accept-Language: en-us;q=0.7,en;q=0.3
        Accept-Encoding: gzip,deflate
        Accept-Charset: utf-8;q=0.7,*;q=0.7
        Keep-Alive: 115
        Connection: keep-alive
        Referer: http://website.com/
        Content-Length: 156
        Content-Type: text/plain; charset=UTF-8
        Pragma: no-cache
        Cache-Control: no-cache

    Thank you for any suggestions.
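
    The Content-Type line above is the likely culprit: PHP only populates $_POST when the body arrives as application/x-www-form-urlencoded (or multipart/form-data), and this request was sent as text/plain. The clean fix is to set the proper Content-Type on the XMLHttpRequest before send(); as a server-side fallback, here is a minimal sketch of parsing the raw body yourself:

        <?php
        // Hedged workaround sketch: PHP skipped $_POST because of the
        // text/plain Content-Type, but the urlencoded payload is still
        // available on php://input and can be parsed manually.
        $raw = file_get_contents('php://input');  // e.g. "a=1&q=151a45a150..."
        parse_str($raw, $data);                   // url-decode into an array
        echo $data['q'];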

  • Why does Firebug claim that my stylesheet is calling my xmlrpc?

    - by Rebol Tutorial
    Firebug shows a request which causes a huge delay to http://reboltutorial.com/wp-content/themes/minaflow/none. Details below, but I don't understand why it says it comes from xmlrpc and the stylesheet. Response headers:

        Date Sun, 04 Apr 2010 16:10:02 GMT
        Server Apache
        X-Powered-By PHP/5.2.13
        X-Pingback http://reboltutorial.com/xmlrpc.php
        Expires Wed, 11 Jan 1984 05:00:00 GMT
        Cache-Control no-cache, must-revalidate, max-age=0
        Pragma no-cache
        Set-Cookie wordpress_test_cookie=WP+Cookie+check; path=/; domain=.reboltutorial.com
        Last-Modified Sun, 04 Apr 2010 16:10:03 GMT
        Vary Accept-Encoding
        Content-Encoding gzip
        Keep-Alive timeout=2, max=94
        Connection Keep-Alive
        Transfer-Encoding chunked
        Content-Type text/html; charset=UTF-8

    Request:

        GET /wp-content/themes/minaflow/none HTTP/1.1
        Host: reboltutorial.com
        User-Agent: Mozilla/5.0 (Windows; U; Windows NT 6.1; fr; rv:1.9.2) Gecko/20100115 Firefox/3.6
        Accept: image/png,image/*;q=0.8,*/*;q=0.5
        Accept-Language: fr,fr-fr;q=0.8,en-us;q=0.5,en;q=0.3
        Accept-Encoding: gzip,deflate
        Accept-Charset: ISO-8859-1,utf-8;q=0.7,*;q=0.7
        Keep-Alive: 115
        Connection: keep-alive
        Referer: http://reboltutorial.com/wp-content/themes/minaflow/style.css
        Cookie: _csoot=1267966575980; _csuid=4b6f27395991a2ff; wp-settings-1=editor%3Dhtml%26align%3Dleft%26m0%3Do%26m1%3Do%26m2%3Do%26m3%3Dc%26m4%3Do%26m5%3Dc%26m6%3Do%26m7%3Do%26m8%3Dc%26m9%3Dc%26m10%3Dc%26m11%3Do%26m12%3Dc%26m13%3Dc%26m14%3Dc%26m15%3Dc; wp-settings-time-1=1270384700; subscribe_checkbox_=unchecked; PHPSESSID=o70hjpjf7uj2hb4doe4k0o5co5; wordpress_test_cookie=WP+Cookie+check; xumgeqhxmhohxipF=Erjixxeeskfgnlba; SJECT=CKON; wordpress_=admin%7C1271592539%7C392c555d9051c6fa184074d8441cc472; wordpress_logged_in_=admin%7C1271592539%7C0e7a92bda53cc2f5afc32962237a1037; rcBDvgtspmuEsyzp=rmqjtFbCfheGCjBw; prli_click_15=creatingstandard; prli_visitor=m7928r

  • Debugging Post Request with Chrome Dev Tools

    - by benek
    I am trying to use Chrome Dev Tools for debugging the following Angular post request:

        $http.post("http://picjboss.puma.lan:8880/fluxpousse/api/flow/createOrUpdateHeader", flowHeader)

    After running the statement with right-click / evaluate, I can see the post in the network panel with a pending state. How can I get the result or "commit" the request and easily leave this "pending" state from the dev console? I am not yet very familiar with JS callbacks; some code is expected. Thanks.

    EDIT: I have tried to run this from the console:

        $scope.$apply(function(){$http.post("http://picjboss.puma.lan:8880/fluxpousse/api/flow/createOrUpdateHeader", flowHeader).success(function(data){console.log("error "+data)}).error(function(data){console.log("error "+data)})})

    It returns: undefined

    EDIT: The post I am trying to solve generates an HTTP 400. Here is the result:

        Request URL: http://picjboss.puma.lan:8880/fluxpousse/api/flow/createOrUpdateHeader
        Request Method: POST
        Status Code: 400 Mauvaise Requête

        Request headers:
        Accept: application/json, text/plain, */*
        Accept-Encoding: gzip,deflate,sdch
        Accept-Language: fr-FR,fr;q=0.8,en-US;q=0.6,en;q=0.4
        Connection: keep-alive
        Content-Length: 5354
        Content-Type: application/json;charset=UTF-8
        Cookie: JSESSIONID=285AF523EA18C0D7F9D581CDB2286C56
        Host: picjboss.puma.lan:8880
        Origin: http://picjboss.puma.lan:8880
        Referer: http://picjboss.puma.lan:8880/fluxpousse/
        User-Agent: Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/30.0.1599.101 Safari/537.36
        X-Requested-With: XMLHttpRequest

        Request payload:
        {refHeader:IDSFP, idEntrepot:619, codeEntreprise:null, codeBanniere:null, codeArticle:7,…}
        cessionPrice: 78
        codeArticle: "7"
        codeBanniere: null
        codeDateAppro: null
        codeDateDelivery: null
        codeDatePrepa: null
        codeEntreprise: null
        codeFournisseur: null
        codeUtilisateur: null
        codeUtilisateurLastUpdate: null
        createDate: null
        dateAppro: null
        dateDelivery: null
        datePrepa: null
        hasAssortControl: null
        hasCadenceForce: null
        idEntrepot: 619
        isFreeCost: null
        labelArticle: "Mayonnaise de DIJON"
        labelFournisseur: null
        listDetail: [,…]
        pcbArticle: 12
        pvc: 78
        qte: 78
        refCommande: "ref"
        refHeader: "IDSFP"
        state: "CREATED"
        stockArticle: 1200
        updateDate: null

        Response headers:
        Connection: close
        Content-Length: 996
        Content-Type: text/html;charset=utf-8
        Date: Fri, 08 Nov 2013 15:19:30 GMT
        Server: Apache-Coyote/1.1
        X-Powered-By: Servlet 2.5; JBoss-5.0/JBossWeb-2.1

  • How do I set the proxy and SOCKS in libcurl?

    - by acidzombie24
    I am trying to configure my .NET app to use a proxy. My source is in C#, but I learned cURL via C++. My question is: where do I put the SOCKS IP and port? I looked through the documentation and didn't see it. I believe that is what is causing me these problems. When I run this code it will quite literally time out and not call my header function or writer function. If I comment out the first two curlopt lines (the two proxy lines), my code runs with no problems. In Firefox I set the HTTP proxy and SOCKS host separately; they are different IPs and ports. How do I set the SOCKS part? The below has the dummy proxy set, but I can't figure out the SOCKS part.

        static void Main(string[] args) {
            SeasideResearch.LibCurlNet.Curl.GlobalInit((int)SeasideResearch.LibCurlNet.CURLinitFlag.CURL_GLOBAL_ALL);
            var curl = new Easy();
            {
                curl.SetOpt(CURLoption.CURLOPT_PROXY, "http://127.0.0.1:1234");
                curl.SetOpt(CURLoption.CURLOPT_PROXYTYPE, CURLproxyType.CURLPROXY_SOCKS5);
                curl.SetOpt(CURLoption.CURLOPT_URL, "http://whatismyipaddress.com/ip-lookup");
                curl.SetOpt(CURLoption.CURLOPT_FOLLOWLOCATION, 1);
                curl.SetOpt(CURLoption.CURLOPT_USERAGENT, @"Mozilla/5.0 (Windows; U; Windows NT 6.1; en-US; rv:1.9.2b5) Gecko/20091204 Firefox/3.6b5");
                curl.SetOpt(CURLoption.CURLOPT_HEADERFUNCTION, hf);
                curl.SetOpt(CURLoption.CURLOPT_HEADERDATA, data);
                curl.SetOpt(CURLoption.CURLOPT_WRITEFUNCTION, wf);
                curl.SetOpt(CURLoption.CURLOPT_WRITEDATA, sw);
                curl.SetOpt(CURLoption.CURLOPT_SSL_VERIFYPEER, 0);
                curl.Perform();
                var sz = sw.ToString();
                var myrealip = sz.IndexOf("12.34.56.78") != -1;
            }
            //Console.WriteLine(sz);
            SeasideResearch.LibCurlNet.Curl.GlobalCleanup();
        }
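
    The same two options drive proxy configuration in any libcurl binding: CURLOPT_PROXY takes the proxy's host and port, and CURLOPT_PROXYTYPE selects the protocol. One likely problem in the snippet above is that CURLOPT_PROXY points at the HTTP proxy's address while CURLOPT_PROXYTYPE declares it SOCKS5; the SOCKS host and port from the Firefox settings belong in CURLOPT_PROXY instead. A minimal sketch in PHP's libcurl binding, with a placeholder SOCKS host/port:

        <?php
        // Hedged sketch: 127.0.0.1:1080 is a placeholder -- substitute the
        // SOCKS host/port configured in Firefox, not the HTTP proxy's.
        $ch = curl_init('http://whatismyipaddress.com/ip-lookup');
        curl_setopt($ch, CURLOPT_PROXY, '127.0.0.1:1080');     // SOCKS host:port
        curl_setopt($ch, CURLOPT_PROXYTYPE, CURLPROXY_SOCKS5); // speak SOCKS5 to it
        curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);
        curl_setopt($ch, CURLOPT_CONNECTTIMEOUT, 10);          // fail fast instead of hanging
        $body = curl_exec($ch);
        if ($body === false) {
            echo 'curl error: ' . curl_error($ch);
        }
        curl_close($ch);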

  • Ajax Request using jQuery in Rails

    - by Steve
    Hi... I am sending an Ajax request using jQuery. What happens is that I am getting a "405 Method Not Allowed" error. I am just posting a form, which would get the detail from the form and insert it into the DB. Just the usual stuff. I am using WEBrick, which comes as default with the Rails package. Can somebody please tell me how to fix this? This is the code that triggers the Ajax request:

        $.post($(this).attr("action") + ".js", $(this).serialize(), null, "script");

    Response headers:

        Cache-Control no-cache
        Allow GET, PUT, DELETE
        Content-Type text/html; charset=utf-8
        Content-Length 9502
        Server WEBrick/1.3.1 (Ruby/1.9.1/2009-12-07)
        Date Wed, 02 Jun 2010 20:41:33 GMT
        Connection Keep-Alive

    Request headers:

        Host localhost:3000
        User-Agent Mozilla/5.0 (Macintosh; U; Intel Mac OS X 10.6; en-US; rv:1.9.2.3) Gecko/20100401 Firefox/3.6.3
        Accept application/json, text/javascript, */*
        Accept-Language en-us,en;q=0.5
        Accept-Encoding gzip,deflate
        Accept-Charset ISO-8859-1,utf-8;q=0.7,*;q=0.7
        Keep-Alive 115
        Connection keep-alive
        Content-Type application/x-www-form-urlencoded; charset=UTF-8
        X-Requested-With XMLHttpRequest
        Referer http://localhost:3000/viewspot/3
        Content-Length 141
        Pragma no-cache
        Cache-Control no-cache

  • Image in table cell doesn't scale down in IE8/7

    - by monks1975
    Can anyone help me troubleshoot my website? http://www.andrewstonyer.co.uk/test/ My problem: in IE8/7, if you click on a thumbnail (only 'Pulse' and 'Time Within The Hour' are wired in right now), an overlay appears with detail of that piece. What should happen, and does in Gecko/Webkit, is that the overlay contains a table with a heading, a scaled, centered image, and a nav menu. There is a toggle for text, which pushes up the image cell and makes the image smaller, keeping proportion. I know the overlay looks like ass right now; those are just placeholder colours :) In IE, the image doesn't fit perfectly in the table cell, which means that everything is pushed down outside the window and I can't see the nav menu. IE appears to render the image at its actual pixel size (in the CSS, the img element's height is set to 100%). The text cell is toggled with jQuery; when toggled on in IE, it doesn't appear to 'squeeze' the cell above it, which is what I want to happen. Could any experts help? Regards, Jon

  • Converting Source ASCII Files to JPEGs

    - by CommonsWare
    I publish technical books, in print, PDF, and Kindle/MOBI, with EPUB on the way. The Kindle does not support monospace fonts, which are kinda useful for source code listings. The only way to do monospace fonts is to convert the text (Java source, HTML, XML, etc.) into JPEG images. More specifically, due to pagination issues, a given input ASCII file needs to be split into slices of ~6 lines each, with each slice turned into a JPEG, so listings can span a screen. This is a royal pain. My current mechanism to do that involves:

        - running expand to set a consistent 2-space tab size, which pipes to...
        - a2ps, which pipes to...
        - a small Perl snippet to add a "%%LanguageLevel: 3\n" line, which pipes to...
        - ImageMagick's convert, to take the (E)PS and make a JPEG out of it, with an appropriate background, cropped to 575x148+5+28, etc.

    That used to work 100% of the time. It now works 95% of the time. The rest of the time, I get "convert: geometry does not contain image" errors, which I cannot seem to get rid of, in part because I don't understand what the problem is. Before this process, I used a pretty-print engine (source-highlight) to get HTML out of the source code... but then the only thing I could find to convert the HTML into JPEGs was to automate screen-grabs from an embedded Gecko engine. Reliability stank, which is why I switched to my current mechanism. So, if you were me, and you needed to turn source listings into JPEG images in an automated fashion, how would you do it? Bonus points if it offers some sort of pretty-print process (e.g., bolded keywords)! Or, if you know what typically causes "convert: geometry does not contain image", that might help. My current process is ugly, but if I could get it back to 100% reliability, that'd be just fine for now. Thanks in advance!
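
    For what it's worth, the pipeline above can be driven from a short script so each ~6-line slice is rendered in one shot. Here is a hedged sketch in PHP, shelling out to the same tools; the a2ps and convert flags are illustrative assumptions rather than the author's exact invocation, and the Perl "%%LanguageLevel" tweak is omitted:

        <?php
        // Split a source file into 6-line slices and render each slice to a
        // JPEG via expand | a2ps | ImageMagick convert. Flags are assumptions.
        $lines = file('Listing.java', FILE_IGNORE_NEW_LINES);
        foreach (array_chunk($lines, 6) as $i => $slice) {
            $tmp = tempnam(sys_get_temp_dir(), 'slice');
            file_put_contents($tmp, implode("\n", $slice) . "\n");
            $out = sprintf('slice-%02d.jpg', $i);
            $cmd = 'expand -t 2 ' . escapeshellarg($tmp)    // normalize tabs
                 . ' | a2ps -B --output=-'                  // headerless PostScript to stdout
                 . ' | convert -density 150 ps:- -crop 575x148+5+28 ' . escapeshellarg($out);
            shell_exec($cmd);
            unlink($tmp);
        }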

  • cURL sets a negative cookie expire date

    - by Joe Doe
    I have problems with cookies in cURL. After the problems started, I turned on the verbose option and figured out that cURL sets cookies a negative expire date even if the server sends a positive date. Example:

        * Added cookie _c_sess=""test"" for domain test.com, path /, expire -1630024962
        < Set-Cookie: _c_sess="test"; Domain=test.com; HttpOnly; expires=Mon, 26-Mar-2012 14:52:47 GMT; Max-Age=1332773567; Path=/

    As you can see, both expires and Max-Age are positive, but cURL sets the expire to a negative value. Does somebody have an idea?

    EDIT: Here is the PHP code I use:

        $ch = curl_init();
        curl_setopt($ch, CURLOPT_URL, "http://site.com/");
        curl_setopt($ch, CURLOPT_USERAGENT, 'Mozilla/5.0 (Windows NT 6.1; rv:11.0) Gecko/20100101 Firefox/11.0');
        curl_setopt($ch, CURLOPT_COOKIEJAR, $cookiepath);
        curl_setopt($ch, CURLOPT_COOKIEFILE, $cookiepath);
        curl_setopt($ch, CURLOPT_HEADER, 1);
        curl_setopt($ch, CURLOPT_VERBOSE, 1);
        curl_setopt($ch, CURLOPT_STDERR, $f);
        curl_setopt($ch, CURLOPT_RETURNTRANSFER, 1);
        curl_setopt($ch, CURLOPT_FOLLOWLOCATION, 1);
        curl_setopt($ch, CURLOPT_SSL_VERIFYPEER, 0);
        $data = curl_exec($ch);

    Data from the cookie jar:

        #HttpOnly_.test.com TRUE / FALSE -1630016318 _test_sess "test"
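
    A detail worth noticing in the trace above: the server's Max-Age value, 1332773567, is itself the epoch timestamp of "Mon, 26-Mar-2012 14:52:47 GMT", i.e. the server appears to be sending an absolute time where a seconds-from-now delta belongs. Since libcurl gives Max-Age precedence over expires and computes the expiry as now + Max-Age, the sum overflows a signed 32-bit time_t and wraps negative. A sketch of the suspected arithmetic in PHP:

        <?php
        // Hedged reconstruction of the suspected overflow; $now is
        // back-computed from the "expire -1630024962" line in the log.
        $now     = 1332168767;            // roughly when the log was captured
        $maxAge  = 1332773567;            // Max-Age exactly as the server sent it
        $sum     = $now + $maxAge;        // 2664942334, which exceeds 2^31 - 1
        $wrapped = $sum - 0x100000000;    // signed 32-bit wraparound
        echo $wrapped;                    // -1630024962, matching the log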

  • Error in Firefox Loading Images

    - by Brian
    Hello, details of the app are: ASP.NET project, local web server, hosted in IIS locally, using the latest Firefox, uses forms authentication. I'm getting a logon user name/pwd box when trying to access my local web server. Using the Net panel in Firebug, I see the issue is with an animated GIF, showing up as 401 Unauthorized. I check the details and this is what I see for this URL: http://localhost/<virtual>/Images/loading.gif

    Response headers:

        Server Microsoft-IIS/5.1
        Date Thu, 17 Jun 2010 19:02:58 GMT
        WWW-Authenticate Negotiate NTLM
        Connection close
        Content-Length 4046
        Content-Type text/html

    Request headers:

        Host localhost
        User-Agent Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.9.2.3) Gecko/20100401 Firefox/3.6.3 ( .NET CLR 3.5.30729; .NET4.0E)
        Accept image/png,image/*;q=0.8,*/*;q=0.5
        Accept-Language en-us,en;q=0.5
        Accept-Encoding gzip,deflate
        Accept-Charset ISO-8859-1,utf-8;q=0.7,*;q=0.7
        Keep-Alive 115
        Connection keep-alive
        Referer <correct referer>
        Cookie ASP.NET_SessionId=ux4bt345qjz4p3u1wm1zgmez; .ASPXFormsAuth=33F12B444040827B8ABF154EE4EDE43B6CA532432EB846987B355097E00256DF0955C76A37BC593EAA961747BF1CC1D8949FF63C6F2CA69D77213EB15B4EDFAF57A83D9E1F88AB8D821C3A09C07EA2EE; .ASPROLES=qTFrGteJydYAE3118WGXbhJthTDdjdtuQ06t4bYVrM1BwIfcEHU1HhnEcs7TqSOaV-fIN5MH3uO57oNVWXDvrhkZ8gQuURuUk_K0TpoR-DEFXuF953Gl9aIilKAdV211jutMNQmhkt2rdPE2tEhHs3pz953fADxjAOyZl7K-AqNvMk3yqJshhKHhJIf-ALMhWIYlrrKy0WsYznUwh3WCtPfzEBD5XzmXU8HVMJ2-ArLjBISuegvSmxvK1PuXBPhoMRMi9Ynaw6xi9ypGk-R6uN0ljOMCGkB2-20WUlFuP0xWTfac_zCTDT00pbpnyjtygnM-LShOXTrZ_mhoRuXfKYEYSodNihwD6SRr19Nm-8uZ5BQ-W81svM17S2C0vc0FaxtiuAcN_vHcsN1OEJeCuVfRjeqzo9xWEViupP3Vh6aOcCm6yrftgw5x94piuCJO7tCfXjJAw5RVUWDBBWv5gmid171F0k-_XZ0CSv7Gm2Eai1BRfogAqQ_MV3tyPv7XVEyJXRXqYGlf1JpkfTW8S8On4E05v9gx9RcdnKHZebiOZwbP1_ho9nG7pMwXysbhjxtxwZ-zLx-v11_rhZw_i5m7iNcLtt4BbFU-sb_crzMpCKGywHIc452Zp1E0kx1Rfx-2eUnaiLiCfGed-QqelO88NYTpJHttGKEfhFrDgmaIXZPJRtuZ-GrS6t3Vla-8qDAVb1p6ovPwoVT4z4BhQyFsk542gDx-uQDw6D0B6zo7lXfcOjtolUxDcLbETsNlYsexZaxFpRSbw7M1ldwL_k92P9wLPlv9mw4NtyhXKJesMu7GjquZuoBN3hO00AqJEe1tKFFtfrvbE5ZH7uNu7myNdtlxRPe3WZe7qukbqHo1
        Pragma no-cache
        Cache-Control no-cache

    Any ideas? Thanks.

  • cURL and file_get_contents time out when loading a page

    - by Joseph
    I'm trying to grab the content of this page (http://www.alluc.org/movies/watch-hot-tub-time-machine-2010-online/186214.html) using cURL or file_get_contents, but it doesn't work. It loads when I just open it in the browser, but not otherwise. Here are my settings for cURL:

        curl_setopt($ch1, CURLOPT_INTERFACE, "$use_proxy");
        curl_setopt($ch1, CURLOPT_URL, $url);
        curl_setopt($ch1, CURLOPT_RETURNTRANSFER, 1);
        curl_setopt($ch1, CURLOPT_REFERER, 'http://'.$domain);
        curl_setopt($ch1, CURLOPT_USERAGENT, "Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1.1) Gecko/20061204 Firefox/2.0.0.1");
        curl_setopt($ch1, CURLOPT_FOLLOWLOCATION, TRUE);
        echo curl_setopt($ch1, CURLOPT_HEADER, 1);
        curl_setopt($ch1, CURLOPT_VERBOSE, true);

    It works fine for other sites, just not this one for some reason. Any clue as to how to make it work? Thanks. Here's the info from curl_getinfo($ch1):

        [url] => http://www.alluc.org/movies/watch-hot-tub-time-machine-2010-online/186214.html
        [content_type] =>
        [http_code] => 0
        [header_size] => 0
        [request_size] => 0
        [filetime] => -1
        [ssl_verify_result] => 0
        [redirect_count] => 0
        [total_time] => 0
        [namelookup_time] => 0.002578
        [connect_time] => 0
        [pretransfer_time] => 0
        [size_upload] => 0
        [size_download] => 0
        [speed_download] => 0
        [speed_upload] => 0
        [download_content_length] => -1
        [upload_content_length] => -1
        [starttransfer_time] => 0
        [redirect_time] => 0
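
    The curl_getinfo() dump actually narrows this down: namelookup_time is non-zero but connect_time and http_code are 0, meaning DNS resolved and the TCP connection never completed. Note too that CURLOPT_INTERFACE does not set a proxy; it binds the outgoing network interface/IP, so a bad $use_proxy value would make the connect hang exactly like this. A hedged diagnostic sketch, dropping the binding and bounding the attempt so the failure surfaces as an error instead of a long wait:

        <?php
        // Diagnostic sketch: try without CURLOPT_INTERFACE first, and cap
        // the connect/transfer times so a dead route fails fast and loudly.
        $ch1 = curl_init('http://www.alluc.org/movies/watch-hot-tub-time-machine-2010-online/186214.html');
        // curl_setopt($ch1, CURLOPT_INTERFACE, $use_proxy);  // re-enable once the plain fetch works
        curl_setopt($ch1, CURLOPT_RETURNTRANSFER, 1);
        curl_setopt($ch1, CURLOPT_CONNECTTIMEOUT, 10);  // give up on connect after 10 s
        curl_setopt($ch1, CURLOPT_TIMEOUT, 30);         // cap the whole transfer at 30 s
        $html = curl_exec($ch1);
        if ($html === false) {
            // e.g. (7) couldn't connect, or (28) timed out
            echo curl_errno($ch1) . ': ' . curl_error($ch1);
        }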

  • Python - Problems using mechanize to log into a difficult website

    - by user1781599
    I am trying to log in to betfair.com by using mechanize. I have tried several ways but it always fails. This is the code I have developed so far. Can anyone help me identify what is wrong with it and how I can improve it to log into my betfair account? Thanks.

        import cookielib
        import urllib
        import urllib2
        from BeautifulSoup import BeautifulSoup
        import mechanize
        from mechanize import Browser
        import re

        bf_username_name = "username"
        bf_password_name = "password"
        bf_form_name = "loginForm"
        bf_username = "xxxxx"
        bf_password = "yyyyy"
        urlLogIn = "http://www.betfair.com/"
        accountUrl = "https://myaccount.betfair.com/account/home?rlhm=0&"  # this url I will use to verify if log in has been successful

        br = mechanize.Browser(factory=mechanize.RobustFactory())
        br.addheaders = [("User-Agent", "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_5_8) AppleWebKit/537.1 (KHTML, like Gecko) Chrome/21.0.1180.90 Safari/537.1")]
        br.open(urlLogIn)
        br.select_form(nr=0)
        print br.form
        br.form[bf_username_name] = bf_username
        br.form[bf_password_name] = bf_password
        print br.form  # just to check username and psw have been recorded correctly
        responseSubmit = br.submit()
        response = br.open(accountUrl)

        text_file = open("LogInResponse.html", "w")
        text_file.write(responseSubmit.read())  # this file should show the home page with me logged in, but it shows the home page as if I was not logged in
        text_file.close()

        text_file = open("Account.html", "w")
        text_file.write(response.read())  # this file should show my account page, but it shows a pop up with an error
        text_file.close()

  • PHP Infinite Loop Problem

    - by Ashwin
        function httpGet($url, $followRedirects = true) {
            global $final_url;
            $url_parsed = parse_url($url);
            if (empty($url_parsed['scheme'])) {
                $url_parsed = parse_url('http://' . $url);
            }
            $final_url = $url_parsed;

            $port = $url_parsed["port"];
            if (!$port) {
                $port = 80;
            }
            $rtn['url']['port'] = $port;

            $path = $url_parsed["path"];
            if (empty($path)) {
                $path = "/";
            }
            if (!empty($url_parsed["query"])) {
                $path .= "?" . $url_parsed["query"];
            }
            $rtn['url']['path'] = $path;
            $host = $url_parsed["host"];
            $foundBody = false;

            $out = "GET $path HTTP/1.0\r\n";
            $out .= "Host: $host\r\n";
            $out .= "User-Agent: Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\r\n";
            $out .= "Connection: Close\r\n\r\n";

            if (!$fp = @fsockopen($host, $port, $errno, $errstr, 30)) {
                $rtn['errornumber'] = $errno;
                $rtn['errorstring'] = $errstr;
            }
            fwrite($fp, $out);
            while (!@feof($fp)) {
                $s = @fgets($fp, 128);
                if ($s == "\r\n") {
                    $foundBody = true;
                    continue;
                }
                if ($foundBody) {
                    $body .= $s;
                } else {
                    if (($followRedirects) && (stristr($s, "location:") != false)) {
                        $redirect = preg_replace("/location:/i", "", $s);
                        return httpGet(trim($redirect));
                    }
                    $header .= $s;
                }
            }
            fclose($fp);
            return (trim($body));
        }

    This code sometimes goes into an infinite loop. What's wrong here?
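
    Two things in the code above can loop forever. First, the redirect branch, return httpGet(trim($redirect)), follows Location headers with no hop limit, so two URLs that redirect to each other recurse endlessly, and relative Location values such as "/foo" make it worse since they are re-parsed without a host. Second, when fsockopen() fails the function carries on anyway, and while (!@feof($fp)) with an invalid handle never terminates. If hand-rolling HTTP isn't a requirement, a hedged alternative is to let libcurl enforce the missing cap:

        <?php
        // Minimal sketch of the missing guard using PHP's libcurl binding:
        // CURLOPT_MAXREDIRS bounds how many Location hops are followed,
        // which is exactly what httpGet() above lacks.
        function httpGetCurl($url, $maxRedirects = 5) {
            $ch = curl_init($url);
            curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);
            curl_setopt($ch, CURLOPT_FOLLOWLOCATION, true);
            curl_setopt($ch, CURLOPT_MAXREDIRS, $maxRedirects); // error out past 5 hops
            curl_setopt($ch, CURLOPT_TIMEOUT, 30);              // also bound total time
            $body = curl_exec($ch);
            curl_close($ch);
            return $body === false ? false : trim($body);
        }

    Keeping the socket version instead just means threading a counter through the recursion and bailing out when it reaches zero, plus returning early when fsockopen() fails.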

  • How do I access a value of a nested Perl hash?

    - by st
    I am new to Perl and I have a problem that's very simple, but I cannot find the answer when consulting my Perl book. When printing the result of Dumper($request); I get the following result:

        $VAR1 = bless( {
            '_protocol' => 'HTTP/1.1',
            '_content' => '',
            '_uri' => bless( do{\(my $o = 'http://myawesomeserver.org:8081/counter/')}, 'URI::http' ),
            '_headers' => bless( {
                'user-agent' => 'Mozilla/5.0 (X11; U; Linux i686; en; rv:1.9.0.4) Gecko/20080528 Epiphany/2.22 Firefox/3.0',
                'connection' => 'keep-alive',
                'cache-control' => 'max-age=0',
                'keep-alive' => '300',
                'accept' => 'text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8',
                'accept-language' => 'en-us,en;q=0.5',
                'accept-encoding' => 'gzip,deflate',
                'host' => 'localhost:8081',
                'accept-charset' => 'ISO-8859-1,utf-8;q=0.7,*;q=0.7'
            }, 'HTTP::Headers' ),
            '_method' => 'GET',
            '_handle' => bless( \*Symbol::GEN0, 'FileHandle' )
        }, 'HTTP::Server::Simple::Dispatched::Request' );

    How can I access the values of '_method' ('GET') or of 'host' ('localhost:8081')? I know that's an easy question, but Perl is somewhat cryptic at the beginning.

  • Trying to log in to a site with PHP & cURL

    - by motionman95
    I've never done something like this before... I'm trying to log into swagbucks.com and retrieve some information, but it's not working. Can someone tell me what's wrong with my script?

        <?php
        $pages = array('home' => 'http://swagbucks.com/?cmd=home',
                       'login' => 'http://swagbucks.com/?cmd=sb-login&from=/?cmd=home',
                       'schedule' => 'http://swagbucks.com/?cmd=sb-acct-account&display=2');
        $ch = curl_init();

        // Set options for curl session
        $options = array(CURLOPT_USERAGENT => 'Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.9.2) Gecko/20100115 Firefox/3.6',
                         CURLOPT_HEADER => TRUE,
                         //CURLOPT_RETURNTRANSFER => TRUE,
                         CURLOPT_COOKIEFILE => 'cookie.txt',
                         CURLOPT_COOKIEJAR => 'cookies.txt');

        // Hit home page for session cookie
        $options[CURLOPT_URL] = $pages['home'];
        curl_setopt_array($ch, $options);
        curl_exec($ch);

        // Login
        $options[CURLOPT_URL] = $pages['login'];
        $options[CURLOPT_POST] = TRUE;
        $options[CURLOPT_POSTFIELDS] = '[email protected]&pswd=jblake&persist=on';
        $options[CURLOPT_FOLLOWLOCATION] = FALSE;
        curl_setopt_array($ch, $options);
        curl_exec($ch);

        // Hit schedule page
        $options[CURLOPT_URL] = $pages['schedule'];
        curl_setopt_array($ch, $options);
        $schedule = curl_exec($ch);

        // Output schedule
        echo $schedule;

        // Close curl session
        curl_close($ch);
        ?>

    But it still doesn't log me in. What's wrong?
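
    One concrete issue in the script above: cookies are saved to 'cookies.txt' (CURLOPT_COOKIEJAR) but read back from 'cookie.txt' (CURLOPT_COOKIEFILE). Within a single run the shared handle keeps cookies in memory, but nothing saved by one run is ever replayed by the next. A hedged tweak, pointing both options at one file and url-encoding the credentials; the field names 'email', 'pswd', and 'persist' are carried over from the original script, not verified against the site's form:

        <?php
        // Sketch: one cookie path for both read and write, and a properly
        // url-encoded POST body built with http_build_query().
        $cookiePath = __DIR__ . '/cookies.txt';
        $ch = curl_init('http://swagbucks.com/?cmd=sb-login&from=/?cmd=home');
        curl_setopt_array($ch, array(
            CURLOPT_COOKIEFILE     => $cookiePath,  // read cookies from here...
            CURLOPT_COOKIEJAR      => $cookiePath,  // ...and write them back here
            CURLOPT_RETURNTRANSFER => true,
            CURLOPT_FOLLOWLOCATION => true,         // login endpoints often redirect
            CURLOPT_POST           => true,
            CURLOPT_POSTFIELDS     => http_build_query(array(
                'email'   => 'you@example.com',     // placeholder credentials
                'pswd'    => 'your-password',
                'persist' => 'on',
            )),
        ));
        $response = curl_exec($ch);
        curl_close($ch);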

  • Lighttpd 403 Errors on HTML and PHP pages

    - by Brian
    I installed lighttpd on CentOS 5.5 64-bit. Everything seems fine and running, except that I cannot get past 403 errors on both HTML and PHP pages. I have used chmod and chown, changed ownership in the config file, done everything possible, and have been stuck for 2 days. I appreciate any help, and here's hoping it's a stupid error on my part. Here is the log file with debug options on:

        2011-02-21 11:23:13: (request.c.304) fd: 7 request-len: 408
        GET /index.html HTTP/1.1
        Host: 10.0.1.8
        User-Agent: Mozilla/5.0 (Macintosh; U; Intel Mac OS X 10.6; en-US; rv:1.9.2.13) Gecko/20101203 Firefox/3.6.13
        Accept: text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8
        Accept-Language: en-us,en;q=0.5
        Accept-Encoding: gzip,deflate
        Accept-Charset: ISO-8859-1,utf-8;q=0.7,*;q=0.7
        Keep-Alive: 115
        Connection: keep-alive
        Cache-Control: max-age=0
        2011-02-21 11:23:13: (response.c.241) run condition
        2011-02-21 11:23:13: (response.c.300) -- splitting Request-URI
        2011-02-21 11:23:13: (response.c.301) Request-URI : /index.html
        2011-02-21 11:23:13: (response.c.302) URI-scheme : http
        2011-02-21 11:23:13: (response.c.303) URI-authority: 10.0.1.8
        2011-02-21 11:23:13: (response.c.304) URI-path : /index.html
        2011-02-21 11:23:13: (response.c.305) URI-query :
        2011-02-21 11:23:13: (response.c.349) -- sanatising URI
        2011-02-21 11:23:13: (response.c.350) URI-path : /index.html
        2011-02-21 11:23:13: (response.c.470) -- before doc_root
        2011-02-21 11:23:13: (response.c.471) Doc-Root : /srv/www/lighttpd
        2011-02-21 11:23:13: (response.c.472) Rel-Path : /index.html
        2011-02-21 11:23:13: (response.c.473) Path :
        2011-02-21 11:23:13: (response.c.521) -- after doc_root
        2011-02-21 11:23:13: (response.c.522) Doc-Root : /srv/www/lighttpd
        2011-02-21 11:23:13: (response.c.523) Rel-Path : /index.html
        2011-02-21 11:23:13: (response.c.524) Path : /srv/www/lighttpd/index.html
        2011-02-21 11:23:13: (response.c.541) -- logical -> physical
        2011-02-21 11:23:13: (response.c.542) Doc-Root : /srv/www/lighttpd
        2011-02-21 11:23:13: (response.c.543) Rel-Path : /index.html
        2011-02-21 11:23:13: (response.c.544) Path : /srv/www/lighttpd/index.html
        2011-02-21 11:23:13: (response.c.561) -- handling physical path
        2011-02-21 11:23:13: (response.c.562) Path : /srv/www/lighttpd/index.html
        2011-02-21 11:23:13: (response.c.608) -- access denied
        2011-02-21 11:23:13: (response.c.609) Path : /srv/www/lighttpd/index.html
        2011-02-21 11:23:13: (response.c.128) Response-Header:
        HTTP/1.1 403 Forbidden
        Content-Type: text/html
        Content-Length: 345
        Date: Mon, 21 Feb 2011 16:23:13 GMT
        Server: lighttpd/1.4.28

    Here is the directory listing. I used chown to set ownership to lighttpd:lighttpd:

        [root@localhost lighttpd]# ls -al
        total 40
        drwxrwxrwx 2 lighttpd lighttpd 4096 Feb 21 10:48 .
        drwxrwxrwx 3 lighttpd lighttpd 4096 Feb 21 10:57 ..
        -rwxrwxrwx 1 lighttpd lighttpd   10 Feb 20 08:32 index.html
        -rwxrwxrwx 1 lighttpd lighttpd   20 Feb 21 10:48 index.php
        -rwxrwxrwx 1 lighttpd lighttpd   20 Feb 21 10:39 info.php
        [root@localhost lighttpd]#

    Requested commands:

        [root@localhost lighttpd]# ls -ld / /srv /srv/www
        drwxr-xr-x 22 root     root     4096 Feb 21 04:39 /
        drwxrwxrwx  3 lighttpd lighttpd 4096 Feb 20 07:38 /srv
        drwxrwxrwx  3 lighttpd lighttpd 4096 Feb 21 10:57 /srv/www
        [root@localhost lighttpd]# ps auxZ | grep lighttpd
        root:system_r:httpd_t lighttpd 3842 0.0 0.2 48368 896 ? S 12:24 0:00 /usr/sbin/lighttpd -f /etc/lighttpd/lighttpd.conf
        root:system_r:unconfined_t:SystemLow-SystemHigh root 3845 0.0 0.2 61152 764 pts/0 R+ 12:24 0:00 grep lighttpd

  • Mono through FastCGI on nginx

    - by Stijn
    I'm going through http://www.mono-project.com/FastCGI_Nginx and can't get it to work. The FastCGI server seems to be running. The following is from the error log:

        upstream sent unexpected FastCGI record: 3 while reading response header from upstream, client: 192.168.1.125, server: arch, request: "GET /Default.aspx HTTP/1.1", upstream: "fastcgi://127.0.0.1:9000", host: "arch"

    Command used to start the server (I've tried server2 and server4, using a simple .NET 2.0 or .NET 4.0 project):

        fastcgi-mono-server2 /applications=arch:/:/var/www/test/public/ /socket=tcp:127.0.0.1:9000 /stopable=True

    nginx config:

        server {
            listen 80;
            server_name arch;
            access_log /var/www/test/log/access.log;
            error_log /var/www/test/log/error.log;
            location / {
                root /var/www/test/public;
                index index.html index.htm default.aspx Default.aspx;
                fastcgi_index Default.aspx;
                fastcgi_pass 127.0.0.1:9000;
                fastcgi_param PATH_INFO "";
                fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
            }
        }

    Using xsp4 works fine; I can browse the site. I've enabled FastCGI logging, and this is the output:

        [2012-04-15 23:51:18Z] Debug Accepting an incoming connection.
        [2012-04-15 23:51:18Z] Notice Beginning to receive records on connection.
        [2012-04-15 23:51:18Z] Debug Record received. (Type: BeginRequest, ID: 1, Length: 8)
        [2012-04-15 23:51:18Z] Debug Record received. (Type: Params, ID: 1, Length: 386)
        [2012-04-15 23:51:18Z] Debug Record received. (Type: Params, ID: 1, Length: 0)
        [2012-04-15 23:51:18Z] Debug Read parameter. (PATH_INFO = )
        [2012-04-15 23:51:18Z] Debug Read parameter. (SCRIPT_FILENAME = /var/www/test/public/Home)
        [2012-04-15 23:51:18Z] Debug Read parameter. (HTTP_HOST = arch)
        [2012-04-15 23:51:18Z] Debug Read parameter. (HTTP_USER_AGENT = Mozilla/5.0 (Windows NT 6.1; WOW64; rv:11.0) Gecko/20100101 Firefox/11.0)
        [2012-04-15 23:51:18Z] Debug Read parameter. (HTTP_ACCEPT = text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8)
        [2012-04-15 23:51:18Z] Debug Read parameter. (HTTP_ACCEPT_LANGUAGE = en-gb,en;q=0.5)
        [2012-04-15 23:51:18Z] Debug Read parameter. (HTTP_ACCEPT_ENCODING = gzip, deflate)
        [2012-04-15 23:51:18Z] Debug Read parameter. (HTTP_CONNECTION = keep-alive)
        [2012-04-15 23:51:18Z] Debug Read parameter. (HTTP_COOKIE = ASP.NET_SessionId=2C3D702C9B0F23F69B80820B)
        [2012-04-15 23:51:18Z] Error Failed to process connection. Reason: Argument cannot be null. Parameter name: s
        [2012-04-15 23:51:18Z] Debug Record sent. (Type: EndRequest, ID: 1, Length: 8)
        [2012-04-15 23:51:18Z] Debug The FastCGI connection has been closed.

  • Squid 2.7.STABLE3-4.1 as a transparent proxy on Ubuntu Server 9.04

    - by E3 Group
    Can't get this to work at all! I'm trying to get this Linux box to act as a transparent proxy and, with the help of DHCP, force everyone on the network to gate into the proxy. I have two ethernet connections, both to the same switch, and I'm trying to get 192.168.1.234 to become the default gateway. The actual WAN connection is to a gateway at 192.168.1.1.

        eth0 is 192.168.1.234
        eth1 is 192.168.1.2

    Effectively I'm trying to make eth0 a LAN-only interface and eth1 a WAN interface. I've only set eth0 to have a gateway address in /etc/network/interfaces; I'm not sure whether I should set the gateway for eth1 to point to 192.168.1.234. My squid.conf file has the following directives added at the bottom:

        http_port 3128 transparent
        acl lan src 192.168.1.0/24
        acl lh src 127.0.0.1/255.255.255.0
        http_access allow lan
        http_access allow lh

    I've added the following routing commands:

        iptables -t nat -A PREROUTING -i eth0 -p tcp -m tcp --dport 80 -j DNAT --to-destination 192.168.1.2:3128
        iptables -t nat -A PREROUTING -i eth1 -p tcp -m tcp --dport 80 -j REDIRECT --to-ports 3128

    I set a computer with TCP settings using 192.168.1.234 as the gateway and opened up google.com, but it comes up with a request error. Any ideas why this isn't working? :( Been searching continuously for a solution to no avail.

    EDIT: Managed to get it to route properly to the squid; here's the error I get in the browser:

        ERROR: The requested URL could not be retrieved

        While trying to process the request:
        GET / HTTP/1.1
        Host: www.google.com
        User-Agent: Mozilla/5.0 (Windows; U; Windows NT 5.1; en-GB; rv:1.9.1.2) Gecko/20090729 Firefox/3.5.2
        Accept: text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8
        Accept-Language: en-gb,en;q=0.5
        Accept-Encoding: gzip,deflate
        Accept-Charset: ISO-8859-1,utf-8;q=0.7,*;q=0.7
        Keep-Alive: 300
        Connection: keep-alive
        Cache-Control: max-age=0

        The following error was encountered:
        * Invalid Request
        Some aspect of the HTTP Request is invalid. Possible problems:
        * Missing or unknown request method
        * Missing URL
        * Missing HTTP Identifier (HTTP/1.0)
        * Request is too large
        * Content-Length missing for POST or PUT requests
        * Illegal character in hostname; underscores are not allowed

        Your cache administrator is webmaster.
        Generated Mon, 26 Oct 2009 03:41:15 GMT by mjolnir.lloydharrington.local (squid/2.7.STABLE3)

  • Proxying webmin with nginx

    - by TheLQ
    I am attempting to proxy Webmin behind nginx for various reasons that are outside the scope of this question. However, I've been trying for a while now, can't seem to figure it out, and think I'm to the point where I've exhausted all the permutations of the config file I can think of. What I have now in the relevant nginx config (commented-out options removed; I tried many):

        # Proxy for webmin
        location /admin/quackwall-webmin {
            proxy_pass http://127.0.0.1:10000;  # Also tried ending with /admin/quackwall-webmin
            proxy_set_header Host $host;
        }

    /etc/webmin/config, relevant parts:

        webprefix=/admin/quackwall-webmin
        webprefixnoredir=1
        referer=(nginx domain name)

    Webmin itself is on the standard ports, listening on all addresses temporarily for debugging. SSL has been disabled for right now. So I make a standard request for the login page. However, all the CSS and images are broken, with the standard login page returned for all of the resources. In the Webmin miniserv logs I see:

        127.0.0.1 - - [29/Oct/2012:12:29:00 -0400] "GET /admin/quackwall-webmin/session_login.cgi HTTP/1.0" 401 2453
        127.0.0.1 - - [29/Oct/2012:12:29:01 -0400] "GET /admin/quackwall-webmin/unauthenticated/style.css HTTP/1.0" 401 2453
        127.0.0.1 - - [29/Oct/2012:12:29:01 -0400] "GET /admin/quackwall-webmin/unauthenticated/sorttable.js HTTP/1.0" 401 2453
        127.0.0.1 - - [29/Oct/2012:12:29:01 -0400] "GET /admin/quackwall-webmin/unauthenticated/toggleview.js HTTP/1.0" 401 2453

    So all the URLs are returning 401s. Interestingly, ngrep seems to show that the requests succeeded in the backend communication between nginx and Webmin:

        T 127.0.0.1:58908 -> 127.0.0.1:10000 [AP]
        POST /admin/quackwall-webmin/session_login.cgi HTTP/1.0..Host: (host)..Connection: close..User-Agent: Mozilla/5.0 (Windows NT 6.1; WOW64; rv:16.0) Gecko/20100101 Firefox/16.0..Accept: text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8..Accept-Language: en-US,en;q=0.5..Accept-Encoding: gzip, deflate..Referer: http://(host)/admin/quackwall-webmin/session_login.cgi..Cookie: testing=1..Cache-Control: max-age=0..Content-Type: application/x-www-form-urlencoded..Content-Length: 41....page=%2F&user=(user)&pass=(pass)

        T 127.0.0.1:10000 -> 127.0.0.1:58908 [AP]
        HTTP/1.0 200 Document follows..

    Various other permutations of these config options and others show similar results, with the URL sent to Webmin by nginx being either /admin/quackwall-webmin/session_login.cgi, /admin/quackwall-webmin//session_login.cgi, or just /session_login.cgi. All give 401 Unauthenticated responses, even the requests that somewhat succeed (as in I can actually load the resources of the page). Is changing the webprefix in Webmin even supported? What am I doing wrong? What else can I try?

  • Basic Auth on DirectoryIndex Only

    - by Brad
    I am trying to configure basic auth for my index file, and only my index file. I have configured it like so:

        <Files index.htm>
            Order allow,deny
            Allow from all
            AuthType Basic
            AuthName "Some Auth"
            AuthUserFile "C:/path/to/my/.htpasswd"
            Require valid-user
        </Files>

    When I visit the page, "401 Authorization Required" is returned as expected, but the browser doesn't prompt for the username/password. Some further inspection has revealed that Apache is not sending the WWW-Authenticate header:

        GET http://myhost/ HTTP/1.1
        Host: myhost
        Connection: keep-alive
        User-Agent: Mozilla/5.0 (Windows NT 5.1) AppleWebKit/534.30 (KHTML, like Gecko) Chrome/12.0.742.100 Safari/534.30
        Accept: text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8
        Accept-Encoding: gzip,deflate,sdch
        Accept-Language: en-US,en;q=0.8
        Accept-Charset: ISO-8859-1,utf-8;q=0.7,*;q=0.3

        HTTP/1.1 401 Authorization Required
        Date: Tue, 21 Jun 2011 21:36:48 GMT
        Server: Apache/2.2.16 (Win32)
        Content-Length: 401
        Keep-Alive: timeout=5, max=100
        Connection: Keep-Alive
        Content-Type: text/html; charset=iso-8859-1

        <!DOCTYPE HTML PUBLIC "-//IETF//DTD HTML 2.0//EN">
        <html><head>
        <title>401 Authorization Required</title>
        </head><body>
        <h1>Authorization Required</h1>
        <p>This server could not verify that you are authorized to access the document requested. Either you supplied the wrong credentials (e.g., bad password), or your browser doesn't understand how to supply the credentials required.</p>
        </body></html>

    Why is Apache doing this? How can I configure it to send that header appropriately? It is worth noting that this exact same set of directives works fine if I set them for a whole directory; it is only when I configure them for a directory index that they do not work. This is how I know my .htpasswd and such are fine. I am using Apache 2.2 on Windows. On another note, I found this listed as a bug in Apache 1.3, which leads me to believe that this is actually a configuration problem on my end.

  • Apache Mod_rewrite rule working on one server, but not another

    - by Mason
    I am using mod_jk and mod_rewrite on httpd 2.2.15. I have a rule:

        RewriteCond %{REQUEST_URI} !^/video/play\.xhtml.*
        RewriteRule ^/video/(.*) /video/play.xhtml?vid=$1 [PT]

    I just want to rewrite something like /video/videoidhere to /video/play.xhtml?vid=videoidhere. This works perfectly on my developer machine, but on production I get a 404 (generated by JBoss, not Apache). Here is the tail of access.log and rewrite.log on prod (broken); the rewrite.log is exactly the same on dev (working):

        applying pattern '^/video/(.*)' to uri '/video/46279d4daf5440b2844ec831413dcc3b'
        RewriteCond: input='/video/46279d4daf5440b2844ec831413dcc3b' pattern='!^/video/play\.xhtml.*' => matched
        rewrite '/video/46279d4daf5440b2844ec831413dcc3b' -> '/video/play.xhtml?vid=46279d4daf5440b2844ec831413dcc3b'
        split uri=/video/play.xhtml?vid=46279d4daf5440b2844ec831413dcc3b -> uri=/video/play.xhtml, args=vid=46279d4daf5440b2844ec831413dcc3b
        forcing '/video/play.xhtml' to get passed through to next API URI-to-filename handler
        "GET /video/46279d4daf5440b2844ec831413dcc3b HTTP/1.1" 404 420 "-" "Mozilla/5.0 (X11; U; Linux x86_64; en-US; rv:1.9.2.6) Gecko/20100628 Ubuntu/10.04 (lucid) Firefox/3.6.6"

    I can access http://www.fivi.com/video/play.xhtml?vid=46279d4daf5440b2844ec831413dcc3b but not /video/46279d4daf5440b2844ec831413dcc3b. Both servers are even using the EXACT same httpd.conf and modules. I built Apache with:

        ./configure --prefix /usr/local/apache2.2.15 --enable-alias --enable-rewrite --enable-cache --enable-disk_cache --enable-mem_cache --enable-ssl --enable-deflate

    Thanks, Mason

    UPDATE: mod-jk.conf:

        JkWorkersFile /usr/local/apache2.2.15/conf/workers.properties
        JkLogFile /var/log/mod_jk.log
        JkLogLevel info
        JkLogStampFormat "[%a %b %d %H:%M:%S %Y]"
        JkOptions +ForwardKeySize +ForwardURICompatUnparsed -ForwardDirectories
        JkRequestLogFormat "%w %V %T"
        JkShmFile run/jk.shm
        <Location /jkstatus>
            JkMount status
            Order deny,allow
            Deny from all
            Allow from 127.0.0.1
        </Location>

    workers.properties:

        worker.list=..., online, ...
        worker.node1.port=8009
        worker.node1.host=75.102.10.74
        worker.node1.type=ajp13
        worker.node1.lbfactor=20
        worker.node1.ping_mode=A
        #As of mod_jk 1.2.27
        worker.node2.port=8009
        worker.node2.host=75.102.10.75
        worker.node2.type=ajp13
        worker.node2.lbfactor=10
        worker.node2.ping_mode=A
        #As of mod_jk 1.2.27
        worker.loadbalancer.type=lb
        worker.loadbalancer.balance_workers=node2,node1
        worker.loadbalancer.sticky_session=True
        worker.status.type=status

    httpd.conf:

        ServerName www.fivi.com:80
        Include /usr/local/apache2.2.15/conf/mod-jk.conf
        NameVirtualHost *
        <VirtualHost *>
            ServerName *
            DocumentRoot /usr/local/apache2/htdocs
            JkUnMount /* loadbalancer
            RedirectMatch 301 /(.*) http://www.fivi.com/$1
        </VirtualHost>
        <VirtualHost *>
            ServerName www.fivi.com
            ServerAlias www.fivi.com images.fivi.com
            JkMount /* loadbalancer
            JkMount / loadbalancer
        </VirtualHost>

        [root@fivi conf]# /usr/local/apache2.2.15/bin/httpd -M
        Loaded Modules:
         core_module (static)
         authn_file_module (static)
         authn_default_module (static)
         authz_host_module (static)
         authz_groupfile_module (static)
         authz_user_module (static)
         authz_default_module (static)
         auth_basic_module (static)
         cache_module (static)
         disk_cache_module (static)
         mem_cache_module (static)
         include_module (static)
         filter_module (static)
         deflate_module (static)
         log_config_module (static)
         env_module (static)
         headers_module (static)
         setenvif_module (static)
         version_module (static)
         ssl_module (static)
         mpm_prefork_module (static)
         http_module (static)
         mime_module (static)
         status_module (static)
         autoindex_module (static)
         asis_module (static)
         cgi_module (static)
         negotiation_module (static)
         dir_module (static)
         actions_module (static)
         userdir_module (static)
         alias_module (static)
         rewrite_module (static)
         so_module (static)
         jk_module (shared)
        Syntax OK

  • 401 - Unauthorized On Server 2008 R2 IIS 7.5

    - by mxmissile
    I have a web application deployed to a Server 2008 IIS 7.5 box. From remote it gives this error: "401 - Unauthorized: Access is denied due to invalid credentials." (Remote = desktops on the same LAN.) I have tried several remote clients using different browsers (IE, FF, and Chrome), all with the same result. Hitting the application from the desktop of the server itself works flawlessly. However, I have not tried Firebug on the server desktop; I would assume it's still issuing a 401 status code yet returning the content anyway (see Update #2). The application is using Anonymous Authentication and is written in .NET 4.0 ASP.NET using the MVC framework. Static content works fine, for example: http://server.com/content/image.jpg. Sysinternals procmon returns these two results for each request: FAST IO DISALLOWED and PATH NOT FOUND. I have two other MVC apps running fine on the same server. I have checked the security on the folders and they all match. The app runs fine on a Server 2008 IIS 7.0 box. Nothing shows up in the event log on the server related to this. Pulling my hair out here; any troubleshooting tips?

    UPDATE #1: This just gets more WTF as I dig. If I click on the application in IIS Manager - Error Pages - Edit Feature Settings and select Detailed Errors, the app works remotely. I'm not leaving this on, so the problem is not solved yet; it's just more confusing.

    UPDATE #2: Using Firebug, I see that the status is still 401 Unauthorized, but the response is returning the application's correct HTML.

    UPDATE #3: Playing around with Failed Request Tracing, here is the WARNING request trace that is causing the 401:

        ModuleName          ManagedPipelineHandler
        Notification        128
        HttpStatus          401
        HttpReason          Unauthorized
        HttpSubStatus       0
        ErrorCode           0
        ConfigExceptionInfo
        Notification        EXECUTE_REQUEST_HANDLER
        ErrorCode           The operation completed successfully. (0x0)

    UPDATE #4: The regular IIS log is showing this:

        #Software: Microsoft Internet Information Services 7.5
        #Version: 1.0
        #Date: 2010-07-20 19:17:22
        #Fields: date time s-ip cs-method cs-uri-stem cs-uri-query s-port cs-username c-ip cs(User-Agent) sc-status sc-substatus sc-win32-status time-taken
        2010-07-20 19:17:22 10.10.1.10 GET /Purchasing/Home - 80 - 10.10.1.12 Mozilla/5.0+(Windows;+U;+Windows+NT+6.1;+en-US;+rv:1.9.2.6)+Gecko/20100625+Firefox/3.6.6 401 0 0 4414

  • Tomcat with virtual hosts - 404

    - by Thardas
    I have a CentOS 5.2 server set up with Apache 2.2.3 and Tomcat 5.5.27. The server hosts multiple virtual hosts connected to multiple Tomcats. For instance, we have one Tomcat for development and testing and one Tomcat for production: project.demo.us.com points to the dev Tomcat and project.us.com points to the production Tomcat. Here's the virtual host's configuration:

        <VirtualHost *:80>
            ServerName project.demo.us.com
            CustomLog logs/project.demo.us.com/access_log combined env=!VLOG
            ErrorLog logs/project.demo.us.com/error_log
            DocumentRoot /var/www/vhosts/project.demo.us.com
            <Directory /var/www/vhosts/project.demo.us.com>
                Allow from all
                AllowOverride All
                Options -Indexes FollowSymLinks
            </Directory>
            ########## ########## ##########
            JkMount /project/* online
        </VirtualHost>

    The JkMount line defines that we use the "online" worker, and our workers.properties contains this:

        worker.list=..., online, ...
        worker.online.port=7703
        worker.online.host=localhost
        worker.online.type=ajp13
        worker.online.lbfactor=1

    And Tomcat's conf/server.xml contains:

        <Connector port="7703" enableLookups="false" redirectPort="8443" protocol="AJP/1.3" URIEncoding="UTF-8" maxThreads="80" minSpareThreads="10" maxSpareThreads="15"/>

    I'm not sure what redirectPort is, but I tried to telnet to that port and there's no one answering, so it shouldn't matter. Tomcat's webapps directory contains project.war, and the server automatically deployed it under a project directory which contains index.jsp and hello.html; the latter is for static debugging purposes. Now when I try to access http://project.demo.us.com/project/index.jsp, I get Tomcat's "HTTP Status 404 - The requested resource () is not available." The same thing happens with hello.html, so it's not working with static content either. Apache's access_log contains:

        88.112.152.31 - - [10/Aug/2009:12:15:14 +0300] "GET /demo/index.jsp HTTP/1.1" 404 952 "-" "Mozilla/5.0 (Macintosh; U; Intel Mac OS X 10.5; en-US; rv:1.9.1.2) Gecko/20090729 Firefox/3.5.2"

    I couldn't find any mention of the request in Tomcat's logs. If I shut down this specific Tomcat, I no longer get Tomcat's 404 but Apache's "503 Service Temporarily Unavailable", so I should be configuring the correct Tomcat. Is there something obvious that I'm missing? Is there any place where I could find out what path the Tomcat is using to look for requested files?
