Search Results

Search found 912 results on 37 pages for 'shodan is alive'.

Page 12/37

  • Java Process.waitFor() and IO streams

    - by lynks
    I have the following code: String[] cmd = { "bash", "-c", "~/path/to/script.sh" }; Process p = Runtime.getRuntime().exec(cmd); PipeThread a = new PipeThread(p.getInputStream(), System.out); PipeThread b = new PipeThread(p.getErrorStream(), System.err); p.waitFor(); a.die(); b.die(); The PipeThread class is quite simple, so I will include it in full: public class PipeThread implements Runnable { private BufferedInputStream in; private BufferedOutputStream out; public Thread thread; private boolean die = false; public PipeThread(InputStream i, OutputStream o) { in = new BufferedInputStream(i); out = new BufferedOutputStream(o); thread = new Thread(this); thread.start(); } public void die() { die = true; } public void run() { try { byte[] b = new byte[1024]; while(!die) { int x = in.read(b, 0, 1024); if(x > 0) out.write(b, 0, x); else die(); out.flush(); } } catch(Exception e) { e.printStackTrace(); } try { in.close(); out.close(); } catch(Exception e) { } } } My problem is this: p.waitFor() blocks endlessly, even after the subprocess has terminated. If I do not create the pair of PipeThread instances, then p.waitFor() works perfectly. What is it about the piping of IO streams that is causing p.waitFor() to continue blocking? I'm confused, as I thought the IO streams would be passive, unable to keep a process alive, or to make Java think the process is still alive.
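
    A sketch of one possible fix, assuming the usual suspect: the reader loop gates on a die flag instead of end-of-stream, so the final blocking read() never wakes up and the pipes are never fully drained. Reading until read() returns -1 lets each PipeThread exit on its own once the child closes its end (caveat: if script.sh spawns background children that inherit the pipes, EOF will not arrive until they exit too):

        public void run() {
            try {
                byte[] b = new byte[1024];
                int x;
                // read() returns -1 once the subprocess has exited and
                // closed its end of the pipe -- no die flag needed
                while ((x = in.read(b, 0, 1024)) != -1) {
                    out.write(b, 0, x);
                    out.flush();
                }
            } catch (IOException e) {
                e.printStackTrace();
            } finally {
                // close only the input side; closing out would close System.out
                try { in.close(); } catch (IOException ignored) { }
            }
        }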

    Read the article

  • Rails3 renders a js.erb template with a text/html content-type instead of text/javascript

    - by Yannis
    Hi, I'm building a new app with 3.0.0.beta3. I simply try to render a js.erb template in response to an Ajax request for the following action (in publications_controller.rb): def get_pubmed_data entry = Bio::PubMed.query(params[:pmid]) # searches PubMed and gets the entry @publication = Bio::MEDLINE.new(entry) # creates a Bio::MEDLINE object from the entry text flash[:warning] = "No publication found." if @publication.title.blank? and @publication.authors.blank? and @publication.journal.blank? respond_to do |format| format.js end end Currently, my get_pubmed_data.js.erb template is simply alert('<%= @publication.title %>') The server is responding with the following alert('Evidence for a herpes simplex virus-specific factor controlling the transcription of deoxypyrimidine kinase.') which is perfectly fine, except that nothing happens in the browser, probably because the content-type of the response is 'text/html' instead of 'text/javascript', as shown by the response header partially reproduced here: Status 200 Keep-Alive timeout=5, max=100 Connection Keep-Alive Transfer-Encoding chunked Content-Type text/html; charset=utf-8 Is this a bug or am I missing something? Thanks for your help!
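
    A hedged guess at a fix: if the request reaches Rails with an HTML format (common when the Ajax call does not ask for .js explicitly), format.js can still render with a text/html content type. Forcing the MIME type on the JS response is a minimal workaround:

        respond_to do |format|
          # force the content type so the browser executes the response
          format.js { render :content_type => 'text/javascript' }
        end

    Alternatively, make the client request the JS format explicitly, e.g. by appending .js to the Ajax URL or using dataType: 'script' in jQuery, so Rails negotiates text/javascript on its own.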

    Read the article

  • accessing a value of a nested hash

    - by st
    Hello! I am new to Perl and I have a problem that's very simple, but I cannot find the answer when consulting my Perl book. When printing the result of Dumper($request); I get the following result: $VAR1 = bless( { '_protocol' => 'HTTP/1.1', '_content' => '', '_uri' => bless( do{\(my $o = 'http://myawesomeserver.org:8081/counter/')}, 'URI::http' ), '_headers' => bless( { 'user-agent' => 'Mozilla/5.0 (X11; U; Linux i686; en; rv:1.9.0.4) Gecko/20080528 Epiphany/2.22 Firefox/3.0', 'connection' => 'keep-alive', 'cache-control' => 'max-age=0', 'keep-alive' => '300', 'accept' => 'text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8', 'accept-language' => 'en-us,en;q=0.5', 'accept-encoding' => 'gzip,deflate', 'host' => 'localhost:8081', 'accept-charset' => 'ISO-8859-1,utf-8;q=0.7,*;q=0.7' }, 'HTTP::Headers' ), '_method' => 'GET', '_handle' => bless( \*Symbol::GEN0, 'FileHandle' ) }, 'HTTP::Server::Simple::Dispatched::Request' ); How can I access the values of '_method' ('GET') or of 'host' ('localhost:8081')? I know that's an easy question, but Perl is somewhat cryptic at the beginning. Thank you, St.
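
    For reference, the underscore-prefixed keys are object internals: they can be dereferenced directly, but since the dump shows a blessed HTTP::Request-style object, the accessor methods are the cleaner route. A small sketch (the accessors assume the usual HTTP::Request/HTTP::Headers API):

        # direct dereference of the nested structure
        my $method = $request->{_method};           # 'GET'
        my $host   = $request->{_headers}{host};    # 'localhost:8081'

        # cleaner: use the object's accessors
        my $method2 = $request->method;
        my $host2   = $request->header('Host');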

    Read the article

  • Why are illegal cookies sent by the browser and received by web servers (RFC 2109)?

    - by Artyom
    Hello, According to RFC 2109, a cookie's value can be either an HTTP token or a quoted string, and a token can't include non-ASCII characters. Cookie's RFC 2109: http://tools.ietf.org/html/rfc2109#page-3 HTTP's RFC 2068 token definition: http://tools.ietf.org/html/rfc2068#page-16 However, I have found that the Firefox browser (3.0.6) sends cookies with a UTF-8 string as-is, and the three web servers I tested (apache2, lighttpd, nginx) pass this string as-is to the application. For example, a raw request from the browser: $ nc -l -p 8080 GET /hello HTTP/1.1 Host: localhost:8080 User-Agent: Mozilla/5.0 (X11; U; Linux x86_64; en-US; rv:1.9.0.9) Gecko/2009050519 Firefox/2.0.0.13 (Debian-3.0.6-1) Accept: text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8 Accept-Language: en-us,en;q=0.5 Accept-Encoding: gzip,deflate Accept-Charset: windows-1255,utf-8;q=0.7,*;q=0.7 Keep-Alive: 300 Connection: keep-alive Cookie: wikipp=1234; wikipp_username=?????? Cache-Control: max-age=0 And the raw HTTP_COOKIE CGI variable as passed by apache, nginx and lighttpd: wikipp=1234; wikipp_username=?????? What am I missing? Can somebody explain this to me?
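
    In short: the RFC forbids it, but browsers and servers pass the raw bytes through untouched, so in practice applications are expected to encode non-ASCII cookie values themselves. The asker's stack may differ, but as a PHP illustration of the convention: setcookie() percent-encodes on the way out and $_COOKIE is decoded on the way in, so non-ASCII survives the round trip:

        // PHP encodes the value when sending the Set-Cookie header
        setcookie('wikipp_username', $username, time() + 3600, '/');

        // ...and decodes it back when populating $_COOKIE
        $username = isset($_COOKIE['wikipp_username'])
                  ? $_COOKIE['wikipp_username'] : '';

    Frameworks in other languages usually offer the same pattern (encode before Set-Cookie, decode on read); relying on servers to reject illegal values is not something any of the three tested here do.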

    Read the article

  • X-Forwarded-For causing Undefined index in PHP

    - by bateman_ap
    Hi, I am trying to integrate some third-party tracking code into one of my sites, however it is throwing up some errors, and their support isn't being much use, so I want to try to fix their code myself. Most I have fixed, however this function is giving me problems: private function getXForwardedFor() { $s =& $this; $xff_ips = array(); $headers = $s->getHTTPHeaders(); if ($headers['X-Forwarded-For']) { $xff_ips[] = $headers['X-Forwarded-For']; } if ($_SERVER['REMOTE_ADDR']) { $xff_ips[] = $_SERVER['REMOTE_ADDR']; } return implode(', ', $xff_ips); // will return blank if not on a web server } In my dev environment, where I am showing all errors, I am getting: Notice: Undefined index: X-Forwarded-For in /sites/webs/includes/OmnitureMeasurement.class.php on line 1129 Line 1129 is: if ($headers['X-Forwarded-For']) { If I print out $headers I get: Array ( [Host] => www.domain.com [User-Agent] => Mozilla/5.0 (Windows; U; Windows NT 6.1; en-GB; rv:1.9.2.3) Gecko/20100401 Firefox/3.6.3 [Accept] => text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8 [Accept-Language] => en-gb,en;q=0.5 [Accept-Encoding] => gzip,deflate [Accept-Charset] => ISO-8859-1,utf-8;q=0.7,*;q=0.7 [Keep-Alive] => 115 [Connection] => keep-alive [Referer] => http://www10.toptable.com/ [Cookie] => PHPSESSID=nh9jd1ianmr4jon2rr7lo0g553; __utmb=134653559.30.10.1275901644; __utmc=134653559 [Cache-Control] => max-age=0 ) I can't see X-Forwarded-For in there, which I think is causing the problem. Is there something I should add to the function to take this into account? I am using PHP 5.3 and Apache 2 on Fedora.
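
    The notice fires because the header simply is not there unless the request came through a proxy, and PHP flags the bare array read. A sketch of the vendor function patched with isset() guards (same logic otherwise):

        private function getXForwardedFor() {
            $xff_ips = array();
            $headers = $this->getHTTPHeaders();
            // the header only exists when a proxy added it
            if (isset($headers['X-Forwarded-For'])) {
                $xff_ips[] = $headers['X-Forwarded-For'];
            }
            if (isset($_SERVER['REMOTE_ADDR'])) {
                $xff_ips[] = $_SERVER['REMOTE_ADDR'];
            }
            return implode(', ', $xff_ips); // blank if not on a web server
        }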

    Read the article

  • Bluetooth in Java Mobile: Handling connections that go out of range

    - by Albus Dumbledore
    I am trying to implement a server-client connection over SPP. After initializing the server, I start a thread that first listens for clients and then receives data from them. It looks like this: public final void run() { while (alive) { try { /* * Await client connection */ System.out.println("Awaiting client connection..."); client = server.acceptAndOpen(); /* * Start receiving data */ int read; byte[] buffer = new byte[128]; DataInputStream receive = client.openDataInputStream(); try { while ((read = receive.read(buffer)) > 0) { System.out.println("[Recieved]: " + new String(buffer, 0, read)); if (!alive) { return; } } } finally { System.out.println("Closing connection..."); receive.close(); } } catch (IOException e){ e.printStackTrace(); } } } It's working fine, as I am able to receive messages. What's troubling me is how the thread would eventually die when a device goes out of range. Firstly, the call to receive.read(buffer) blocks, so the thread waits until it receives any data. If the device goes out of range, it will never proceed onward to check whether it has been interrupted in the meantime. Secondly, it would never close the connection, i.e. the server would not accept the device once it comes back in range. Thanks! Any ideas would be highly appreciated! Merry Christmas!
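
    A blocking read() only returns on data, end-of-stream, or when the stream is closed underneath it (at which point it throws an IOException). So a common pattern is a shutdown method, called from another thread, that closes the stream and connection; the reader loop then treats the exception as a disconnect and loops back to acceptAndOpen(). A sketch, assuming receive, client and server are hoisted to instance fields:

        public void shutdown() {
            alive = false;
            try {
                if (receive != null) receive.close(); // unblocks read()
                if (client != null) client.close();
                if (server != null) server.close();   // unblocks acceptAndOpen()
            } catch (IOException e) {
                // already tearing down; safe to ignore
            }
        }

    Detecting that a device silently went out of range still needs a timeout of your own, e.g. record the time of the last received packet and have a watchdog thread call shutdown() (or just close client) when it gets too old, since Bluetooth links can take a long time to report the loss on their own.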

    Read the article

  • Why is Firefox prompting to download a file that is POST'd to?

    - by alex
    This is the most peculiar thing. It is from an old in-house CMS. When I attempt to submit my changes, it prompts to save the file linked in the action attribute of the form. Headers Request POST /~site/edit/articles/article_save.php?id=54 HTTP/1.1 Host: example.com User-Agent: Mozilla/5.0 (Macintosh; U; Intel Mac OS X 10.6; en-US; rv:1.9.2.3) Gecko/20100401 Firefox/3.6.3 Accept: text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8 Accept-Language: en-us,en;q=0.5 Accept-Encoding: gzip,deflate Accept-Charset: ISO-8859-1,utf-8;q=0.7,*;q=0.7 Keep-Alive: 115 Connection: keep-alive Referer: http://example.com Content-Type: multipart/form-data; boundary=---------------------------10102754414578508781458777923 Content-Length: 940 -----------------------------10102754414578508781458777923 Content-Disposition: form-data; name="title" Home Content -----------------------------10102754414578508781458777923 Content-Disposition: form-data; name="catid" 18 -----------------------------10102754414578508781458777923 Content-Disposition: form-data; name="activecheck" 1 -----------------------------10102754414578508781458777923 Content-Disposition: form-data; name="image" -----------------------------10102754414578508781458777923 Content-Disposition: form-data; name="contentWidgToolbarSelectBlock" <p> -----------------------------10102754414578508781458777923 Content-Disposition: form-data; name="content" <p>Edit your article in this text box.</p> -----------------------------10102754414578508781458777923 Content-Disposition: form-data; name="contentWidgEditor" true -----------------------------10102754414578508781458777923-- Response HTTP/0.9 200 OK And then Firefox shows.... I can't determine from the response headers why this is prompting to open/save. It has always worked. All other PHP files on the site work fine. Anyone have a clue? Thanks Update Apparently, it just crashes Safari.
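
    The response line is the clue: "HTTP/0.9 200 OK" means the browser could not parse a valid HTTP/1.x status line, and an HTTP/0.9-style reply carries no headers at all, so there is no Content-Type and Firefox falls back to the open/save prompt. That usually points at the server emitting something malformed before the headers. A hedged first check in article_save.php:

        <?php
        // buffer output so stray bytes from includes can't precede the headers
        ob_start();
        header('Content-Type: text/html; charset=utf-8');
        // ... existing save logic ...
        ob_end_flush();

    If that changes nothing, watch the raw response with netcat or a proxy; a filter such as mod_security truncating or mangling the reply can produce the same HTTP/0.9 symptom.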

    Read the article

  • How to spoof the referrer using curl

    - by golu molu
    I am using the curl code below to spoof the referrer. It works fine, but this error shows on every page - Curl error: $url = somesite.com function doMagic($url) { $curl = curl_init(); $header[0] = "Accept: text/xml,application/xml,application/xhtml+xml,"; $header[0] .= "text/html;q=0.9,text/plain;q=0.8,image/png,*/*;q=0.5"; $header[] = "Cache-Control: max-age=0"; $header[] = "Connection: keep-alive"; $header[] = "Keep-Alive: 300"; $header[] = "Accept-Charset: ISO-8859-1,utf-8;q=0.7,*;q=0.7"; $header[] = "Accept-Language: en-us,en;q=0.5"; $header[] = "Pragma: "; curl_setopt($curl, CURLOPT_URL, $url); curl_setopt($curl, CURLOPT_USERAGENT, "Mozilla/5.0 (Windows NT 6.1; WOW64; rv:7.0.1) Gecko/20100101 Firefox/7.0.12011-10-16 20:23:00"); curl_setopt($curl, CURLOPT_HTTPHEADER, $header); curl_setopt($curl, CURLOPT_REFERER, "http://www.facebook.com"); curl_setopt($curl, CURLOPT_ENCODING, "gzip,deflate"); curl_setopt($curl, CURLOPT_AUTOREFERER, true); curl_setopt($curl, CURLOPT_RETURNTRANSFER, 1); curl_setopt($curl, CURLOPT_TIMEOUT, 30); curl_setopt($curl, CURLOPT_FOLLOWLOCATION,true); $html = curl_exec($curl); echo 'Curl error: '. curl_error($curl); curl_close($curl); return $html; } $text = doMagic($url); print("$text"); What am I doing wrong?
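
    One bug is visible in the snippet itself: curl_error() is echoed unconditionally, so the text "Curl error:" lands on every page even when nothing failed (and the bare $url = somesite.com also needs quotes to be valid PHP). Guarding with curl_errno() fixes it:

        $url = 'http://somesite.com'; // quote the URL

        $html = curl_exec($curl);
        // curl_error() is only meaningful when curl_errno() is non-zero
        if (curl_errno($curl)) {
            echo 'Curl error: ' . curl_error($curl);
        }
        curl_close($curl);
        return $html;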

    Read the article

  • Javascript + PHP $_POST array empty

    - by Peterim
    While trying to send a POST request via xmlhttp.open("POST", "url", true) (JavaScript) to the server, I get an empty $_POST array. Firebug shows that the data is being sent. Here is the data string from Firebug: a=1&q=151a45a150.... But $_POST['q'] returns nothing. The interesting thing is that file_get_contents('php://input') does have my data (the string above), but PHP somehow doesn't recognize it. I tried both $_POST and $_REQUEST; nothing works. Headers being sent: POST /test.php HTTP/1.1 Host: website.com User-Agent: Mozilla/5.0 (Windows; U; Windows NT 6.1; en-US; rv:1.9.2.3) Gecko/20100401 Firefox/3.6.3 Accept: text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8 Accept-Language: en-us;q=0.7,en;q=0.3 Accept-Encoding: gzip,deflate Accept-Charset: utf-8;q=0.7,*;q=0.7 Keep-Alive: 115 Connection: keep-alive Referer: http://website.com/ Content-Length: 156 Content-Type: text/plain; charset=UTF-8 Pragma: no-cache Cache-Control: no-cache Thank you for any suggestions.
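
    The captured headers explain it: the body went out as Content-Type: text/plain, and PHP only populates $_POST for application/x-www-form-urlencoded (or multipart/form-data) bodies, which is also why php://input still holds the raw string. Setting the header before send() should fix it:

        var xmlhttp = new XMLHttpRequest();
        xmlhttp.open("POST", "/test.php", true);
        // without this the browser sends text/plain and PHP skips $_POST
        xmlhttp.setRequestHeader("Content-Type", "application/x-www-form-urlencoded");
        xmlhttp.send("a=1&q=151a45a150");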

    Read the article

  • Struts 2 discards cache headers

    - by Dewfy
    I am seeing strange header-discarding behavior from Struts 2 while setting cache options for my image. I'm trying to get an image from the db cached on the client side. To render the image I use ( http://struts.apache.org/2.x/docs/how-can-we-display-dynamic-or-static-images-that-can-be-provided-as-an-array-of-bytes.html ), where a special result type renders as follows: public void execute(ActionInvocation invocation) throws Exception { ...//some preparation HttpServletResponse response = ServletActionContext.getResponse(); HttpServletRequest request = ServletActionContext.getRequest(); ServletOutputStream os = response.getOutputStream(); try { byte[] imageBytes = action.getImage(); response.setContentType("image/gif"); response.setContentLength(imageBytes.length); //I want cache up to 10 min Date future = new Date(((new Date()).getTime() + 1000 * 10*60l)); response.addDateHeader("Expires", future.getTime()); response.setHeader("Cache-Control", "max-age=" + 10*60 + ""); response.addHeader("cache-Control", "public"); response.setHeader("ETag", request.getRequestURI()); os.write(imageBytes); } catch(Exception e) { response.sendError(HttpServletResponse.SC_NOT_FOUND); } os.flush(); os.close(); } But when the image is embedded in a page it is always reloaded (Firebug shows code 200), and neither Expires nor max-age is present in the header: Host localhost:9090 Accept image/png,image/*;q=0.8,*/*;q=0.5 Accept-Language en-us,en;q=0.5 Accept-Encoding gzip,deflate Accept-Charset ISO-8859-1,utf-8;q=0.7,*;q=0.7 Keep-Alive 300 Connection keep-alive Referer http://localhost:9090/web/result?matchId=1 Cookie JSESSIONID=4156BEED69CAB0B84D950932AB9EA1AC; If-None-Match /web/_srv/teamcolor Cache-Control max-age=0 I have no idea why it disappeared; maybe the problem is in the URL? It is formed with a parameter: http://localhost:9090/web/_srv/teamcolor?loginId=3
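
    Two hedged observations: the header block quoted above is the request, not the response (Expires and max-age would only ever appear in the response), and its If-None-Match line shows the ETag did reach the browser, so Firefox is revalidating rather than ignoring the cache. What is missing is a 304 answer to that revalidation; without one, the full 200 body comes back every time. A sketch for the result type:

        String etag = request.getRequestURI();
        // answer the conditional request instead of re-sending the bytes
        if (etag.equals(request.getHeader("If-None-Match"))) {
            response.setStatus(HttpServletResponse.SC_NOT_MODIFIED);
            return;
        }
        response.setHeader("ETag", etag);
        response.setDateHeader("Expires", System.currentTimeMillis() + 10 * 60 * 1000L);
        response.setHeader("Cache-Control", "public, max-age=" + 10 * 60);

    Note also that the captured request carries Cache-Control: max-age=0, which is what a reload (F5) sends; a reload always revalidates regardless of the Expires value.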

    Read the article

  • Error in Firefox Loading Images

    - by Brian
    Hello, Details of the app are: ASP.NET project, local web server, hosted in IIS locally, using latest FireFox, uses forms authentication. I'm getting a logon user name/pwd box when trying to access my local web server. Using the net panel in firebug, I see the issue is with an animated GIF, showing up as 401 unauthorized. I check the details and this is what I see for this URL: http://localhost/<virtual>/Images/loading.gif I'm getting the following: Response Headers: Server Microsoft-IIS/5.1 Date Thu, 17 Jun 2010 19:02:58 GMT WWW-Authenticate Negotiate NTLM Connection close Content-Length 4046 Content-Type text/html Request headers: Host localhost User-Agent Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.9.2.3) Gecko/20100401 Firefox/3.6.3 ( .NET CLR 3.5.30729; .NET4.0E) Accept image/png,image/*;q=0.8,*/*;q=0.5 Accept-Language en-us,en;q=0.5 Accept-Encoding gzip,deflate Accept-Charset ISO-8859-1,utf-8;q=0.7,*;q=0.7 Keep-Alive 115 Connection keep-alive Referer <correct referer> Cookie ASP.NET_SessionId=ux4bt345qjz4p3u1wm1zgmez; .ASPXFormsAuth=33F12B444040827B8ABF154EE4EDE43B6CA532432EB846987B355097E00256DF0955C76A37BC593EAA961747BF1CC1D8949FF63C6F2CA69D77213EB15B4EDFAF57A83D9E1F88AB8D821C3A09C07EA2EE; .ASPROLES=qTFrGteJydYAE3118WGXbhJthTDdjdtuQ06t4bYVrM1BwIfcEHU1HhnEcs7TqSOaV-fIN5MH3uO57oNVWXDvrhkZ8gQuURuUk_K0TpoR-DEFXuF953Gl9aIilKAdV211jutMNQmhkt2rdPE2tEhHs3pz953fADxjAOyZl7K-AqNvMk3yqJshhKHhJIf-ALMhWIYlrrKy0WsYznUwh3WCtPfzEBD5XzmXU8HVMJ2-ArLjBISuegvSmxvK1PuXBPhoMRMi9Ynaw6xi9ypGk-R6uN0ljOMCGkB2-20WUlFuP0xWTfac_zCTDT00pbpnyjtygnM-LShOXTrZ_mhoRuXfKYEYSodNihwD6SRr19Nm-8uZ5BQ-W81svM17S2C0vc0FaxtiuAcN_vHcsN1OEJeCuVfRjeqzo9xWEViupP3Vh6aOcCm6yrftgw5x94piuCJO7tCfXjJAw5RVUWDBBWv5gmid171F0k-_XZ0CSv7Gm2Eai1BRfogAqQ_MV3tyPv7XVEyJXRXqYGlf1JpkfTW8S8On4E05v9gx9RcdnKHZebiOZwbP1_ho9nG7pMwXysbhjxtxwZ-zLx-v11_rhZw_i5m7iNcLtt4BbFU-sb_crzMpCKGywHIc452Zp1E0kx1Rfx-2eUnaiLiCfGed-QqelO88NYTpJHttGKEfhFrDgmaIXZPJRtuZ-GrS6t3Vla-8qDAVb1p6ovPwoVT4z4BhQyFsk542gDx-uQDw6D0B6zo7lXfcOjtolUxDcLbETsNlYsexZaxFpRSbw7M1ldwL_k92P9wLPlv9mw4NtyhXKJesMu7GjquZuoBN3hO00AqJEe1tKFFtfrvbE5ZH7uNu7myNdtlxRPe3WZe7qukbqHo1 Pragma no-cache Cache-Control no-cache Any ideas? Thanks.
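
    A 401 with "WWW-Authenticate: Negotiate NTLM" on a static GIF points at IIS's own authentication rather than forms auth (forms auth redirects to the login page instead of returning 401). On IIS 5.1 the usual fix is re-enabling anonymous access for the site or the Images folder (Directory Security tab in the IIS manager); static files there are served by IIS directly, so web.config rules may not even be consulted. If the folder is instead being protected by an ASP.NET authorization rule, a hypothetical web.config exception like this sketch opens it up:

        <!-- sketch: allow unauthenticated users under /Images -->
        <location path="Images">
          <system.web>
            <authorization>
              <allow users="?" />
            </authorization>
          </system.web>
        </location>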

    Read the article

  • JavaScript XMLHttpRequest POST method

    - by user535617
    Hey, so I have a question about posting using an XMLHttpRequest. In theory, if I post a username and password to an https domain (which I have yet to get working, unfortunately), would the responseText then change to the next website, or would the text fields become filled in? What normally happens is: you navigate to this page via a browser, enter a username and password, and it uses a POST method when the submit button is clicked, doing some authentication under the hood and returning a different page. I feel like maybe the responseText should even stay exactly the same (which is what happens now), but I don't know, as I have no experience with this kind of thing. this.requests[1].open("POST", "https://" + this.address, true); var query = "target=%2Fcgi-bin%2FStatusConfig.cgi%3FPage%3Dindex&userfile=&username=user&password=pass&log+in=Log+in"; this.requests[1].setRequestHeader("Content-Type", "application/x-www-form-urlencoded"); this.requests[1].setRequestHeader("Content-length", query.length); this.requests[1].setRequestHeader("Keep-Alive", 115); this.requests[1].setRequestHeader("Connection", "keep-alive"); this.requests[1].setRequestHeader("Host", this.address); this.requests[1].send(query); this.requests[1].onreadystatechange = onReadyStateChange1; Then basically onReadyStateChange1 displays the responseText when ready. Any light that could be shed on what SHOULD be happening with the POST and responseText would be very appreciated. As would any advice on getting the new, logged-in page. For further clarification, what I'm trying to do is log in and then return the new page, because the login page displays only log-in information/functionality and the page after logging in has a lot of relevant information. I'm not trying to check the credentials as much as I'm trying to get it (the script) to log in so it can access the next page. Granted, the credentials will have to be valid for that. Thanks all.
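
    Broad strokes: responseText holds whatever document the server returns for that POST, and XMLHttpRequest follows redirects transparently, so after a successful login it is typically the logged-in page's HTML, not a filled-in login form. Two details worth fixing in the snippet: browsers silently drop Host, Keep-Alive, Connection and Content-Length (scripts are not allowed to set them), and the handler should be attached before send() so no state changes are missed. A hedged rework (handlePage is a hypothetical callback):

        var req = this.requests[1];
        req.open("POST", "https://" + this.address, true);
        req.setRequestHeader("Content-Type", "application/x-www-form-urlencoded");
        // attach the handler before send() so early readyState changes are seen
        req.onreadystatechange = function () {
            if (req.readyState === 4) {
                // redirects were followed; this is the final page's HTML
                handlePage(req.responseText);
            }
        };
        req.send(query);

    The usual caveats apply: this only works same-origin, and the session cookie set by the login response is what keeps the later requests authenticated.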

    Read the article

  • How to detect a Socket disconnection?

    - by AngryHacker
    I've implemented a task using the async Sockets pattern in Silverlight 3. I started with Michael Schwarz's implementation and built on top of that. So basically, my Silverlight app establishes a persistent socket connection to a device, and then data flows both ways as necessary between the device and the Silverlight app. One thing I am struggling with is how to detect disconnection. I can think of two approaches: Keep-Alive. I know this can be done at the Sockets level, but I am not sure how to do this in an async model. How would the Socket class let me know there has been a disconnection? Manual keep-alive. Basically, I have the Silverlight app send a dummy packet every 20 seconds or so. If it fails, I'd assume disconnection. However, incredibly, SocketAsyncEventArgs.SocketError always reports success, even if I simply unplug the device that the Silverlight app is connected to. I am not sure whether this is a bug, or perhaps I need to upgrade to SL4. Any ideas, direction or implementation would be appreciated.
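
    On the manual approach: a send on a broken TCP connection often still reports success, because the bytes just sit in buffers until TCP exhausts its retransmissions, which matches the "always Success" observation. A more reliable heartbeat expects a reply within a deadline and declares the link dead when none arrives. A rough Silverlight sketch (SendPing, PongReceived and OnDisconnected are hypothetical names for your own ping/echo plumbing):

        private DateTime lastPong = DateTime.Now;
        private DispatcherTimer heartbeat;

        private void StartHeartbeat()
        {
            heartbeat = new DispatcherTimer { Interval = TimeSpan.FromSeconds(20) };
            heartbeat.Tick += (s, e) =>
            {
                // no echo for two intervals => treat the link as dead
                if (DateTime.Now - lastPong > TimeSpan.FromSeconds(45))
                {
                    OnDisconnected();
                    return;
                }
                SendPing(); // the device is expected to echo this back
            };
            heartbeat.Start();
        }

        // call from the receive path whenever the echo arrives
        private void PongReceived() { lastPong = DateTime.Now; }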

    Read the article

  • Is it theoretically possible to emulate a human brain on a computer?

    - by JoelK
    Our brain consists of billions of neurons which basically work with all the incoming data from our senses and handle our consciousness, emotions and creativity, as well as our hormone system, etc. So I'm completely new to this topic, but doesn't each neuron have a fixed function? E.g.: if a signal of strength x enters, and the last signal was x ms ago, redirect it. From what I've learned in biology about our nervous system, which includes our brain because both consist of simple neurons, it seems to me as if our brain is one big, complicated computer. Maybe so complicated that things such as intelligence and cognition become possible? As the most complicated things about a neuron are pretty much the chemical aspects of generating an electric signal, keeping itself alive, and eventually segmenting itself, it should be pretty easy to emulate one on a computer, right? You won't have to worry about keeping your virtual neuron alive, right? If you can emulate a single neuron on a computer, which shouldn't be too hard, could you theoretically emulate more than 1000 billion of them, recreating intelligence, cognition and maybe even creativity? In my question I'm leaving out the following aspects: Speed of our current (super)computers Actually writing a program for emulating neurons I don't know much about this topic, please tell me if I got anything wrong :) (My secret goal: make a copy of my brain, store it on some 10-million-TB HDD and have someone start it up in the future)

    Read the article

  • Why is Apache + Rails spitting out two status headers for code 500?

    - by Daniel Beardsley
    I have a Rails app that is working fine except for one thing. When I request something that doesn't exist (i.e. /not_a_controller_or_file.txt) and Rails throws a "No Route matches..." exception, the response is this (blank line intentional): HTTP/1.1 200 OK Date: Thu, 02 Oct 2008 10:28:02 GMT Content-Type: text/html Content-Length: 122 Vary: Accept-Encoding Keep-Alive: timeout=15, max=100 Connection: Keep-Alive Status: 500 Internal Server Error Content-Type: text/html <html><body><h1>500 Internal Server Error</h1></body></html> I have the ExceptionLogger plugin in /vendor, though that doesn't seem to be the problem. I haven't added any error handling beyond the custom 500.html in public (though the response doesn't contain that HTML) and I have no idea where this bit of HTML is coming from. So something, somewhere is adding that HTTP/1.1 200 status code too early, or the Status: 500 too late. I suspect it's Apache, because I get the appropriate HTTP/1.1 500 header (at the top) when I use WEBrick. My production stack is as follows: Apache 2 Mongrel (5 instances) RubyOnRails 2.1.1 (happens in both 1.2 and 2.1.1) I forgot to mention, the error is caused by a "no route matches..." exception.
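
    The garbled reply itself (a 200 status line followed by a CGI-style "Status: 500" header inside the body) suggests the error response is being assembled after headers were already emitted, so comparing Apache's proxy output against a direct request to one Mongrel port should show which hop rewrites it. One hedged workaround in the meantime is to keep the routing exception from reaching that 500 path at all, via a catch-all route that renders a proper 404 (Rails 2.x routing assumed; the controller and action names here are made up):

        # config/routes.rb -- keep this as the very last route
        map.connect '*path', :controller => 'errors', :action => 'not_found'

        # app/controllers/errors_controller.rb
        class ErrorsController < ApplicationController
          def not_found
            render :file => "#{RAILS_ROOT}/public/404.html",
                   :status => 404, :layout => false
          end
        end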

    Read the article

  • PHP cache header override

    - by Soyo
    I've been through over 100 answers here, lots to try, NOTHING working?? I have a PHP-based site. I need caching OFF for all .php files EXCEPT A SELECT FEW. So, in .htaccess, I have the following: ExpiresActive On # Eliminate caching for certain dynamic files <FilesMatch "\.(php|cgi|pl)$"> ExpiresDefault A0 Header set Cache-Control "no-cache, no-store, must-revalidate, max-age=0, proxy-revalidate, no-transform" Header set Pragma "no-cache" </FilesMatch> Using Firebug, I see the following: Cache-Control no-cache, no-store, must-revalidate, max-age=0, proxy-revalidate, no-transform Connection Keep-Alive Content-Type text/html Date Sun, 02 Sep 2012 19:22:27 GMT Expires Sun, 02 Sep 2012 19:22:27 GMT Keep-Alive timeout=3, max=100 Pragma no-cache Server Apache Transfer-Encoding chunked X-Powered-By PHP/5.2.17 Hey, looks great! BUT, I have a couple of .php pages I need some very short caching on. I thought the simple answer was adding this to the very top of each PHP page in which I want caching enabled: <?php header("Cache-Control: max-age=360"); ?> Nope. Then I tried various versions of the above. Nope. Then I tried meta http-equiv variations. Nope. Then I tried variations of the .htaccess code along with the above variations, such as limiting it to: # Eliminate caching for certain dynamic files <FilesMatch "\.(php|cgi|pl)$"> Header set Cache-Control "no-cache, max-age=0" </FilesMatch> Nope. It seems nothing I do will allow a single .php file to be cache-enabled with the .htaccess code in place, short of removing the statements from the .htaccess file altogether. Where am I going wrong? What do I have to do to get individual PHP pages to be cacheable while the rest remain off?
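
    A plausible explanation: mod_headers' Header set runs late in the response phase (its default condition is onsuccess), so it overwrites whatever header() emitted from PHP. One way out, sketched here with a hypothetical file name, is to flag the self-caching scripts with SetEnvIf and make the no-cache headers conditional on that flag:

        ExpiresActive On
        # mark scripts that manage their own caching
        SetEnvIf Request_URI "/short_cache\.php$" SELF_CACHED

        <FilesMatch "\.(php|cgi|pl)$">
            ExpiresDefault A0
            Header set Cache-Control "no-cache, no-store, must-revalidate, max-age=0" env=!SELF_CACHED
            Header set Pragma "no-cache" env=!SELF_CACHED
        </FilesMatch>

    The flagged page then keeps its own header("Cache-Control: max-age=360"); note that mod_expires may still stamp an immediate Expires on it, so the page may also want to emit its own header("Expires: ...") to match.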

    Read the article

  • Lighttpd 403 Errors on HTML and PHP pages

    - by Brian
    I installed lighttpd on CentOS 5.5 64-bit. Everything seems fine and running except I cannot get past 403 errors on both HTML and PHP pages. I have used CHMOD and CHOWN, changed ownership in the config file, done everything possible and have been stuck for 2 days. Appreciate any help, and here's hoping to a stupid error on my part. Here is the log file with debug options on: 2011-02-21 11:23:13: (request.c.304) fd: 7 request-len: 408 GET /index.html HTTP/1.1 Host: 10.0.1.8 User-Agent: Mozilla/5.0 (Macintosh; U; Intel Mac OS X 10.6; en-US; rv:1.9.2.13) Gecko/20101203 Firefox/3.6.13 Accept: text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8 Accept-Language: en-us,en;q=0.5 Accept-Encoding: gzip,deflate Accept-Charset: ISO-8859-1,utf-8;q=0.7,*;q=0.7 Keep-Alive: 115 Connection: keep-alive Cache-Control: max-age=0 2011-02-21 11:23:13: (response.c.241) run condition 2011-02-21 11:23:13: (response.c.300) -- splitting Request-URI 2011-02-21 11:23:13: (response.c.301) Request-URI : /index.html 2011-02-21 11:23:13: (response.c.302) URI-scheme : http 2011-02-21 11:23:13: (response.c.303) URI-authority: 10.0.1.8 2011-02-21 11:23:13: (response.c.304) URI-path : /index.html 2011-02-21 11:23:13: (response.c.305) URI-query : 2011-02-21 11:23:13: (response.c.349) -- sanatising URI 2011-02-21 11:23:13: (response.c.350) URI-path : /index.html 2011-02-21 11:23:13: (response.c.470) -- before doc_root 2011-02-21 11:23:13: (response.c.471) Doc-Root : /srv/www/lighttpd 2011-02-21 11:23:13: (response.c.472) Rel-Path : /index.html 2011-02-21 11:23:13: (response.c.473) Path : 2011-02-21 11:23:13: (response.c.521) -- after doc_root 2011-02-21 11:23:13: (response.c.522) Doc-Root : /srv/www/lighttpd 2011-02-21 11:23:13: (response.c.523) Rel-Path : /index.html 2011-02-21 11:23:13: (response.c.524) Path : /srv/www/lighttpd/index.html 2011-02-21 11:23:13: (response.c.541) -- logical -> physical 2011-02-21 11:23:13: (response.c.542) Doc-Root : /srv/www/lighttpd 2011-02-21 11:23:13: (response.c.543) Rel-Path : /index.html 2011-02-21 11:23:13: (response.c.544) Path : /srv/www/lighttpd/index.html 2011-02-21 11:23:13: (response.c.561) -- handling physical path 2011-02-21 11:23:13: (response.c.562) Path : /srv/www/lighttpd/index.html 2011-02-21 11:23:13: (response.c.608) -- access denied 2011-02-21 11:23:13: (response.c.609) Path : /srv/www/lighttpd/index.html 2011-02-21 11:23:13: (response.c.128) Response-Header: HTTP/1.1 403 Forbidden Content-Type: text/html Content-Length: 345 Date: Mon, 21 Feb 2011 16:23:13 GMT Server: lighttpd/1.4.28 Here is the directory listing. I used CHOWN to set to lighttpd:lighttpd [root@localhost lighttpd]# ls -al total 40 drwxrwxrwx 2 lighttpd lighttpd 4096 Feb 21 10:48 . drwxrwxrwx 3 lighttpd lighttpd 4096 Feb 21 10:57 .. -rwxrwxrwx 1 lighttpd lighttpd 10 Feb 20 08:32 index.html -rwxrwxrwx 1 lighttpd lighttpd 20 Feb 21 10:48 index.php -rwxrwxrwx 1 lighttpd lighttpd 20 Feb 21 10:39 info.php [root@localhost lighttpd]# Requested Commands: [root@localhost lighttpd]# ls -ld / /srv /srv/www drwxr-xr-x 22 root root 4096 Feb 21 04:39 / drwxrwxrwx 3 lighttpd lighttpd 4096 Feb 20 07:38 /srv drwxrwxrwx 3 lighttpd lighttpd 4096 Feb 21 10:57 /srv/www [root@localhost lighttpd]# ps auxZ | grep lighttpd root:system_r:httpd_t lighttpd 3842 0.0 0.2 48368 896 ? S 12:24 0:00 /usr/sbin/lighttpd -f /etc/lighttpd/lighttpd.conf root:system_r:unconfined_t:SystemLow-SystemHigh root 3845 0.0 0.2 61152 764 pts/0 R+ 12:24 0:00 grep lighttpd
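
    Given 0777 permissions, matching ownership, and lighttpd itself logging "-- access denied" after resolving the path, the remaining usual suspect on CentOS is SELinux: the ps auxZ output shows the daemon confined as httpd_t, and an unlabeled docroot under /srv would give it EACCES on stat(). A hedged way to confirm and fix:

        # confirm SELinux is enforcing and inspect the docroot's label
        getenforce
        ls -Z /srv/www/lighttpd

        # quick test: if pages serve with SELinux permissive, it was the cause
        setenforce 0

        # proper fix: label the tree so httpd_t may read it
        chcon -R -t httpd_sys_content_t /srv/www/lighttpd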

    Read the article

  • Could not bind socket on HAProxy restart

    - by shreyas
    I'm restarting HAProxy with the following command: haproxy -f /etc/haproxy/haproxy.cfg -p /var/run/haproxy.pid -sf $(cat /var/run/haproxy.pid) but I get the following messages: [ALERT] 183/225022 (9278) : Starting proxy appli1-rewrite: cannot bind socket [ALERT] 183/225022 (9278) : Starting proxy appli2-insert: cannot bind socket [ALERT] 183/225022 (9278) : Starting proxy appli3-relais: cannot bind socket [ALERT] 183/225022 (9278) : Starting proxy appli4-backup: cannot bind socket [ALERT] 183/225022 (9278) : Starting proxy ssl-relay: cannot bind socket [ALERT] 183/225022 (9278) : Starting proxy appli5-backup: cannot bind socket My haproxy.cfg file looks like the following: global log 127.0.0.1 local0 log 127.0.0.1 local1 notice #log loghost local0 info maxconn 4096 #chroot /usr/share/haproxy user haproxy group haproxy daemon #debug #quiet defaults log global mode http option httplog option dontlognull retries 3 option redispatch maxconn 2000 contimeout 5000 clitimeout 50000 srvtimeout 50000 listen appli1-rewrite 0.0.0.0:10001 cookie SERVERID rewrite balance roundrobin server app1_1 192.168.34.23:8080 cookie app1inst1 check inter 2000 rise 2 fall 5 server app1_2 192.168.34.32:8080 cookie app1inst2 check inter 2000 rise 2 fall 5 server app1_3 192.168.34.27:8080 cookie app1inst3 check inter 2000 rise 2 fall 5 server app1_4 192.168.34.42:8080 cookie app1inst4 check inter 2000 rise 2 fall 5 listen appli2-insert 0.0.0.0:10002 option httpchk balance roundrobin cookie SERVERID insert indirect nocache server inst1 192.168.114.56:80 cookie server01 check inter 2000 fall 3 server inst2 192.168.114.56:81 cookie server02 check inter 2000 fall 3 capture cookie vgnvisitor= len 32 option httpclose # disable keep-alive rspidel ^Set-cookie:\ IP= # do not let this cookie tell our internal IP address listen appli3-relais 0.0.0.0:10003 dispatch 192.168.135.17:80 listen appli4-backup 0.0.0.0:10004 option httpchk /index.html option persist balance roundrobin server inst1 192.168.114.56:80 check inter 2000 fall 3 server inst2 192.168.114.56:81 check inter 2000 fall 3 backup listen ssl-relay 0.0.0.0:8443 option ssl-hello-chk balance source server inst1 192.168.110.56:443 check inter 2000 fall 3 server inst2 192.168.110.57:443 check inter 2000 fall 3 server back1 192.168.120.58:443 backup listen appli5-backup 0.0.0.0:10005 option httpchk * balance roundrobin cookie SERVERID insert indirect nocache server inst1 192.168.114.56:80 cookie server01 check inter 2000 fall 3 server inst2 192.168.114.56:81 cookie server02 check inter 2000 fall 3 server inst3 192.168.114.57:80 backup check inter 2000 fall 3 capture cookie ASPSESSION len 32 srvtimeout 20000 option httpclose # disable keep-alive option checkcache # block response if set-cookie & cacheable rspidel ^Set-cookie:\ IP= # do not let this cookie tell our internal IP address #errorloc 502 http://192.168.114.58/error502.html #errorfile 503 /etc/haproxy/errors/503.http errorfile 400 /etc/haproxy/errors/400.http errorfile 403 /etc/haproxy/errors/403.http errorfile 408 /etc/haproxy/errors/408.http errorfile 500 /etc/haproxy/errors/500.http errorfile 502 /etc/haproxy/errors/502.http errorfile 503 /etc/haproxy/errors/503.http errorfile 504 /etc/haproxy/errors/504.http What is wrong with my approach?
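
    "cannot bind socket" means the new process could not take over the frontend ports, i.e. something still owns 10001-10005 and 8443 when it starts. That typically happens when the pid file is stale (so -sf signals nothing and the old instance keeps the sockets) or when another instance or daemon already listens there. A couple of hedged checks:

        # which process currently owns the frontend ports?
        netstat -tlnp | egrep ':10001|:10002|:10003|:10004|:10005|:8443'

        # does the pid file point at a live haproxy?
        cat /var/run/haproxy.pid
        ps -p "$(cat /var/run/haproxy.pid)"

    If the pid file is stale, killing the leftover haproxy processes (or fixing the -p path so it matches what the init script wrote) lets the -sf soft restart work as intended.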

    Read the article

  • Autossh dies after some time

    - by Justin
    My setup: Ubuntu 10.04 on AWS, with autossh creating a tunnel for MySQL. The tunnel is automatically created using Upstart (/etc/init/autossh.conf): respawn console none start on (local-filesystems and net-device-up IFACE=eth0) stop on [!12345] script #user/IP Address redacted exec autossh -M 20000 -o StrictHostKeyChecking=no -L 3306:127.0.0.1:3306 [email protected] end script On boot the tunnel is created and works great. After some random idle time it dies. Any thoughts on how to keep it alive? I don't know what's killing autossh.
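
    A hedged guess at the cause: a NAT or firewall between the hosts silently drops the idle TCP session, and without keepalives neither ssh nor autossh notices for a long time, so nothing respawns. Enabling ssh-level keepalives makes the dead link detectable, and AUTOSSH_GATETIME=0 stops autossh treating an early exit as a startup failure (useful under a supervisor like Upstart). A sketch of the adjusted stanza:

        # /etc/init/autossh.conf (sketch)
        env AUTOSSH_GATETIME=0
        script
        exec autossh -M 20000 -N \
            -o StrictHostKeyChecking=no \
            -o ServerAliveInterval=30 -o ServerAliveCountMax=3 \
            -L 3306:127.0.0.1:3306 [email protected]
        end script

    The -N flag keeps the session command-free, which suits a pure tunnel; drop it if the original setup relied on a remote shell.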

    Read the article

  • Squid 2.7.STABLE3-4.1 as a transparent proxy on Ubuntu Server 9.04

    - by E3 Group
    Can't get this to work at all! I'm trying to get this linux box to act as a transparent proxy and, with the help of DHCP, force everyone on the network to gate into the proxy. I have two ethernet connections, both to the same switch. And I'm trying to get 192.168.1.234 to become the default gateway. The actual WAN connection is to a gateway 192.168.1.1. eth0 is 192.168.1.234 eth1 is 192.168.1.2 Effectively I'm trying to make eth0 a LAN-only interface and eth1 a WAN interface. I've only set eth0 to have a gateway address in /etc/network/interfaces; I'm not sure whether I should set the gateway for eth1 to point to 192.168.1.234. My squid.conf file has the following directives added at the bottom: http_port 3128 transparent acl lan src 192.168.1.0/24 acl lh src 127.0.0.1/255.255.255.0 http_access allow lan http_access allow lh I've added the following routing commands: iptables -t nat -A PREROUTING -i eth0 -p tcp -m tcp --dport 80 -j DNAT --to-destination 192.168.1.2:3128 iptables -t nat -A PREROUTING -i eth1 -p tcp -m tcp --dport 80 -j REDIRECT --to-ports 3128 I set a computer's TCP settings to use 192.168.1.234 as the gateway and opened up google.com, but it comes up with a request error. Any ideas why this isn't working? :( Been searching continuously for a solution to no avail. ----------------------------- EDIT ------------------------------- Managed to get it to route properly to the squid, here's the error I get in the browser: ERROR The requested URL could not be retrieved While trying to process the request: GET / HTTP/1.1 Host: www.google.com User-Agent: Mozilla/5.0 (Windows; U; Windows NT 5.1; en-GB; rv:1.9.1.2) Gecko/20090729 Firefox/3.5.2 Accept: text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8 Accept-Language: en-gb,en;q=0.5 Accept-Encoding: gzip,deflate Accept-Charset: ISO-8859-1,utf-8;q=0.7,*;q=0.7 Keep-Alive: 300 Connection: keep-alive Cache-Control: max-age=0 The following error was encountered: * Invalid Request Some aspect of the HTTP Request is invalid. Possible problems: * Missing or unknown request method * Missing URL * Missing HTTP Identifier (HTTP/1.0) * Request is too large * Content-Length missing for POST or PUT requests * Illegal character in hostname; underscores are not allowed Your cache administrator is webmaster. Generated Mon, 26 Oct 2009 03:41:15 GMT by mjolnir.lloydharrington.local (squid/2.7.STABLE3)
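
    Not a full answer, but two things stand out. Both NICs sit on the same subnet and switch, which makes return routing ambiguous; a transparent setup is much easier to reason about with a single LAN-side interface doing the interception. And the proxy's own traffic must never loop back through the intercept rule. A hedged single-interface variant of the NAT rules:

        # never intercept the proxy host's own traffic
        iptables -t nat -A PREROUTING -i eth0 -s 192.168.1.2 -j ACCEPT
        # push everyone else's web traffic into squid locally
        iptables -t nat -A PREROUTING -i eth0 -p tcp --dport 80 -j REDIRECT --to-ports 3128

    The "Invalid Request" page means Squid received bytes it could not parse for the mode it is in, so also confirm the running instance really loaded http_port 3128 transparent (squid -k parse, then check /var/log/squid/access.log for what actually arrived).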

    Read the article

  • Kill process after some time

    - by yael
    I want to limit the running time of a grep command. For example, if I perform: grep -qsRw -m1 "parameter" /var I want the grep process to stay alive no longer than 30 seconds. How do I do this? And, if it can be done, how do I go back to having no time limit again? Yael
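
    A sketch of two common ways; the first assumes GNU coreutils' timeout(1) is available (coreutils 7.0 or newer):

        # kill grep if it is still running after 30 seconds
        timeout 30 grep -qsRw -m1 "parameter" /var

        # portable fallback: background grep plus a watchdog
        grep -qsRw -m1 "parameter" /var &
        pid=$!
        ( sleep 30 && kill "$pid" 2>/dev/null ) &
        wait "$pid"

    Returning to "no limit" is just a matter of running the plain grep command again without the wrapper.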

    Read the article
