Search Results

Search found 15698 results on 628 pages for 'keep alive'.

  • Autossh dies after time

    - by Justin
    My setup is Ubuntu 10.04 on AWS, with autossh creating a tunnel for MySQL. The tunnel is created automatically by Upstart (/etc/init/autossh.conf):

        respawn
        console none
        start on (local-filesystems and net-device-up IFACE=eth0)
        stop on runlevel [!12345]
        script
            # user/IP address redacted
            exec autossh -M 20000 -o StrictHostKeyChecking=no -L 3306:127.0.0.1:3306 [email protected]
        end script

    On boot the tunnel is created and works great. After some random idle time it dies. Any thoughts on how to keep it alive? I don't know what's killing autossh.
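
    One common remedy (a suggestion on my part, not from the original question) is to add SSH-level keepalives so NATs and firewalls don't drop the idle tunnel; a sketch of the amended exec line:

        # send a keepalive probe every 30s; give up after 3 missed replies,
        # so autossh notices the dead tunnel and respawns it
        exec autossh -M 20000 \
            -o StrictHostKeyChecking=no \
            -o ServerAliveInterval=30 \
            -o ServerAliveCountMax=3 \
            -L 3306:127.0.0.1:3306 [email protected]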

    Read the article

  • Could not bind socket while restarting HAProxy

    - by shreyas
    I'm restarting HAProxy with the following command:

        haproxy -f /etc/haproxy/haproxy.cfg -p /var/run/haproxy.pid -sf $(cat /var/run/haproxy.pid)

    but I get the following messages:

        [ALERT] 183/225022 (9278) : Starting proxy appli1-rewrite: cannot bind socket
        [ALERT] 183/225022 (9278) : Starting proxy appli2-insert: cannot bind socket
        [ALERT] 183/225022 (9278) : Starting proxy appli3-relais: cannot bind socket
        [ALERT] 183/225022 (9278) : Starting proxy appli4-backup: cannot bind socket
        [ALERT] 183/225022 (9278) : Starting proxy ssl-relay: cannot bind socket
        [ALERT] 183/225022 (9278) : Starting proxy appli5-backup: cannot bind socket

    My haproxy.cfg file looks like the following:

        global
            log 127.0.0.1 local0
            log 127.0.0.1 local1 notice
            #log loghost local0 info
            maxconn 4096
            #chroot /usr/share/haproxy
            user haproxy
            group haproxy
            daemon
            #debug
            #quiet

        defaults
            log global
            mode http
            option httplog
            option dontlognull
            retries 3
            option redispatch
            maxconn 2000
            contimeout 5000
            clitimeout 50000
            srvtimeout 50000

        listen appli1-rewrite 0.0.0.0:10001
            cookie SERVERID rewrite
            balance roundrobin
            server app1_1 192.168.34.23:8080 cookie app1inst1 check inter 2000 rise 2 fall 5
            server app1_2 192.168.34.32:8080 cookie app1inst2 check inter 2000 rise 2 fall 5
            server app1_3 192.168.34.27:8080 cookie app1inst3 check inter 2000 rise 2 fall 5
            server app1_4 192.168.34.42:8080 cookie app1inst4 check inter 2000 rise 2 fall 5

        listen appli2-insert 0.0.0.0:10002
            option httpchk
            balance roundrobin
            cookie SERVERID insert indirect nocache
            server inst1 192.168.114.56:80 cookie server01 check inter 2000 fall 3
            server inst2 192.168.114.56:81 cookie server02 check inter 2000 fall 3
            capture cookie vgnvisitor= len 32
            option httpclose                # disable keep-alive
            rspidel ^Set-cookie:\ IP=       # do not let this cookie tell our internal IP address

        listen appli3-relais 0.0.0.0:10003
            dispatch 192.168.135.17:80

        listen appli4-backup 0.0.0.0:10004
            option httpchk /index.html
            option persist
            balance roundrobin
            server inst1 192.168.114.56:80 check inter 2000 fall 3
            server inst2 192.168.114.56:81 check inter 2000 fall 3 backup

        listen ssl-relay 0.0.0.0:8443
            option ssl-hello-chk
            balance source
            server inst1 192.168.110.56:443 check inter 2000 fall 3
            server inst2 192.168.110.57:443 check inter 2000 fall 3
            server back1 192.168.120.58:443 backup

        listen appli5-backup 0.0.0.0:10005
            option httpchk *
            balance roundrobin
            cookie SERVERID insert indirect nocache
            server inst1 192.168.114.56:80 cookie server01 check inter 2000 fall 3
            server inst2 192.168.114.56:81 cookie server02 check inter 2000 fall 3
            server inst3 192.168.114.57:80 backup check inter 2000 fall 3
            capture cookie ASPSESSION len 32
            srvtimeout 20000
            option httpclose                # disable keep-alive
            option checkcache               # block response if set-cookie & cacheable
            rspidel ^Set-cookie:\ IP=       # do not let this cookie tell our internal IP address
            #errorloc 502 http://192.168.114.58/error502.html
            #errorfile 503 /etc/haproxy/errors/503.http
            errorfile 400 /etc/haproxy/errors/400.http
            errorfile 403 /etc/haproxy/errors/403.http
            errorfile 408 /etc/haproxy/errors/408.http
            errorfile 500 /etc/haproxy/errors/500.http
            errorfile 502 /etc/haproxy/errors/502.http
            errorfile 503 /etc/haproxy/errors/503.http
            errorfile 504 /etc/haproxy/errors/504.http

    What is wrong with my approach?
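
    A first diagnostic step (my suggestion, not part of the original question) is to check whether an old HAProxy instance - perhaps one not recorded in the pid file that -sf reads - is still holding the listen ports:

        # see which process is bound to one of the frontend ports
        netstat -lntp | grep ':10001'
        # compare against the pid file and the processes actually running
        cat /var/run/haproxy.pid; pidof haproxy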

    Read the article

  • Squid 2.7.STABLE3-4.1 as a transparent proxy on Ubuntu Server 9.04

    - by E3 Group
    Can't get this to work at all! I'm trying to get this Linux box to act as a transparent proxy and, with the help of DHCP, force everyone on the network to route through the proxy. I have two ethernet connections, both to the same switch, and I'm trying to get 192.168.1.234 to become the default gateway. The actual WAN connection is to a gateway at 192.168.1.1.

        eth0 is 192.168.1.234
        eth1 is 192.168.1.2

    Effectively I'm trying to make eth0 a LAN-only interface and eth1 a WAN interface. I've only set eth0 to have a gateway address in /etc/network/interfaces; I'm not sure whether I should set the gateway for eth1 to point to 192.168.1.234. My squid.conf file has the following directives added at the bottom:

        http_port 3128 transparent
        acl lan src 192.168.1.0/24
        acl lh src 127.0.0.1/255.255.255.0
        http_access allow lan
        http_access allow lh

    I've added the following routing commands:

        iptables -t nat -A PREROUTING -i eth0 -p tcp -m tcp --dport 80 -j DNAT --to-destination 192.168.1.2:3128
        iptables -t nat -A PREROUTING -i eth1 -p tcp -m tcp --dport 80 -j REDIRECT --to-ports 3128

    I set a computer with TCP settings pointing at 192.168.1.234 as the gateway and opened up google.com, but it comes up with a request error. Any ideas why this isn't working? :( Been searching continuously for a solution to no avail.

    EDIT: Managed to get it to route properly to Squid. Here's the error I get in the browser:

        ERROR: The requested URL could not be retrieved

        While trying to process the request:

        GET / HTTP/1.1
        Host: www.google.com
        User-Agent: Mozilla/5.0 (Windows; U; Windows NT 5.1; en-GB; rv:1.9.1.2) Gecko/20090729 Firefox/3.5.2
        Accept: text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8
        Accept-Language: en-gb,en;q=0.5
        Accept-Encoding: gzip,deflate
        Accept-Charset: ISO-8859-1,utf-8;q=0.7,*;q=0.7
        Keep-Alive: 300
        Connection: keep-alive
        Cache-Control: max-age=0

        The following error was encountered:

        * Invalid Request

        Some aspect of the HTTP Request is invalid. Possible problems:

        * Missing or unknown request method
        * Missing URL
        * Missing HTTP Identifier (HTTP/1.0)
        * Request is too large
        * Content-Length missing for POST or PUT requests
        * Illegal character in hostname; underscores are not allowed

        Your cache administrator is webmaster.

        Generated Mon, 26 Oct 2009 03:41:15 GMT by mjolnir.lloydharrington.local (squid/2.7.STABLE3)
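
    One hedged possibility, not confirmed in the thread: for interception on the box that runs Squid itself, the usual rule on the LAN-facing interface is REDIRECT, which keeps the original destination recoverable for the transparent lookup, rather than a DNAT to the box's other address:

        # redirect LAN web traffic arriving on eth0 to the local Squid port
        iptables -t nat -A PREROUTING -i eth0 -p tcp --dport 80 -j REDIRECT --to-ports 3128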

    Read the article

  • Kill process after some time

    - by yael
    I want to limit the running time of a grep command. For example, if I run:

        grep -qsRw -m1 "parameter" /var

    I want to limit the grep process so it stays alive no longer than 30 seconds. How do I do this? And, if it can be done, how do I return to no time limit again? Yael
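
    One common approach (my suggestion, assuming GNU coreutils is available) is the timeout utility, which kills the command when the limit expires:

        # kill grep if it is still running after 30 seconds
        timeout 30s grep -qsRw -m1 "parameter" /var
        # running the plain command again imposes no limit
        grep -qsRw -m1 "parameter" /var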

    Read the article

  • Add bookmarks to Delicious and Google Bookmarks at the same time

    - by BrianH
    I have used delicious.com (or back then, del.icio.us) to store my bookmarks for a long time now, and I love it. I was looking through some of my Google services and realized they have a bookmarking service that integrates with your Google searches (I thought they had a bookmarking service before, but it went away? Maybe not). I like Delicious just fine - I'm not interested in leaving. But I also like how my Google bookmarks are highlighted (and, I'm guessing, brought to the top) in my search results, so I can easily tell if I've bookmarked a site (kind of like the "promote up" feature). I can't even count the number of times I've searched for a site only to find I'd been there months or years ago. If sites I've bookmarked in the past are highlighted in my search results, it makes it easier to pick which search result to go to.

    My question is around bookmarking tools: is there a bookmarklet or Firefox addon that will let me save a bookmark to multiple services at the same time - in this case, Google and Delicious? Or maybe a service to sync my Delicious bookmarks to Google Bookmarks on a regular basis? I have used the Delicious addon since the beginning - it would just be nice to add a bookmark to multiple services with one addon. For that matter, it would be nice to add Evernote into the mix: click one button to save the page to Evernote and bookmark it in Google and Delicious.

    EDIT on 7/30/2009 - Summary: A proposed solution is to use the Delicious addon and the GMarks addon to keep the two services in sync. I was not able to get the two addons to keep everything in sync, so it was also suggested to use the Google Toolbar with the Delicious addon instead. Although I personally have reservations about letting Google know about every single site I visit, I believe this solution will work, so I am accepting it as the answer. I still wish there were a solution that would let you post a bookmark/page to multiple services at the same time (Delicious, Google, Evernote, Digg, Diigo, etc.). Thanks!

    Read the article

  • nginx proxypath https redirect fails without trailing slash

    - by Thermionix
    I'm trying to set up Nginx to forward requests to several backend services using proxy_pass. Links on pages that lack trailing slashes do have https:// in front, but they get redirected to an http request with a trailing slash appended - which ends in connection refused - and I only want these services to be available through https.

    So if a link is to https://example.com/internal/errorlogs, loading https://example.com/internal/errorlogs in a browser gives Error Code 10061: Connection refused (it redirects to http://example.com/internal/errorlogs/). If I manually append the trailing slash, https://example.com/internal/errorlogs/ loads fine.

    I've tried various trailing forward slashes appended to the proxy_pass path and to the location in proxy.conf to no effect, and I have also added server_name_in_redirect off;. This happens with more than one app under nginx, and the same apps work behind an Apache reverse proxy.

    Config files:

    proxy.conf

        location /internal {
            proxy_pass http://localhost:8081/internal;
            include proxy.inc;
        }
        .... more entries ....

    sites-enabled/main

        server {
            listen 443;
            server_name example.com;
            server_name_in_redirect off;
            include proxy.conf;
            ssl on;
        }

    proxy.inc

        proxy_connect_timeout 59s;
        proxy_send_timeout 600;
        proxy_read_timeout 600;
        proxy_buffer_size 64k;
        proxy_buffers 16 32k;
        proxy_pass_header Set-Cookie;
        proxy_redirect off;
        proxy_hide_header Vary;
        proxy_busy_buffers_size 64k;
        proxy_temp_file_write_size 64k;
        proxy_set_header Accept-Encoding '';
        proxy_ignore_headers Cache-Control Expires;
        proxy_set_header Referer $http_referer;
        proxy_set_header Host $host;
        proxy_set_header Cookie $http_cookie;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-Host $host;
        proxy_set_header X-Forwarded-Server $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Ssl on;
        proxy_set_header X-Forwarded-Proto https;

    curl output:

        $ curl -I -k https://example.com/internal/errorlogs/
        HTTP/1.1 200 OK
        Server: nginx/1.0.5
        Date: Thu, 24 Nov 2011 23:32:07 GMT
        Content-Type: text/html;charset=utf-8
        Connection: keep-alive
        Content-Length: 14327

        $ curl -I -k https://example.com/internal/errorlogs
        HTTP/1.1 301 Moved Permanently
        Server: nginx/1.0.5
        Date: Thu, 24 Nov 2011 23:32:11 GMT
        Content-Type: text/html;charset=utf-8
        Connection: keep-alive
        Content-Length: 127
        Location: http://example.com/internal/errorlogs/
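
    Since the 301 comes back with an http:// Location header, one possibility (a sketch, not a confirmed fix for this setup) is to have nginx rewrite redirects issued by the backend instead of passing them through untouched - that is, replacing proxy_redirect off; in proxy.inc with something like:

        # rewrite backend-issued http:// redirects to https:// before they reach the client
        proxy_redirect http://example.com/ https://example.com/;
        proxy_redirect http://localhost:8081/ https://example.com/;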

    Read the article

  • AMD FX8350 CPU - CoolerMaster Silencio 650 Case - New Water Cooling System

    - by fat_mike
    Lately, after 6 months of use, my AMD FX8350 CPU has been showing high temperatures, with loud noise coming from the CPU fan (I set the fan speed up in order to keep the CPU cooler). I decided to replace the stock fan with a water cooling system in order to keep my CPU quiet and cool, and to add one or two more case fans too. Here is my case's airflow diagram: http://www.coolermaster.com/microsite/silencio_650/Airflow.html

    My configuration now is:

    - 2x 120mm front intake (stock with case)
    - 1x 120mm rear exhaust (stock with case)
    - 1x stock CPU fan

    I'm planning to buy a Corsair Hydro Series H100i (www.corsair.com/en-us/hydro-series-h100i-extreme-performance-liquid-cpu-cooler), place the radiator in the front of my case (intake), and add a 120mm bottom intake and/or a 140mm top exhaust fan. My CPU socket lies near the top of the motherboard. Is it good practice to have a water-cooling radiator that takes air in? As you can see, the front of the case is made of aluminum - can fresh air get in? Does the radiator even fit?

    If not, is it wiser to get a Corsair Hydro Series H80i (www.corsair.com/en-us/hydro-series-h80i-high-performance-liquid-cpu-cooler), place the radiator on top of my case (exhaust), keep the front 2x 120mm stock fans, and add one more fan as a bottom intake? If you have any other ideas, let me know. Thank you.

    EDIT: The CPU fan runs at ~3000 rpm and the temperature is around 40-43°C at idle on the power-save plan. The temperature goes over 55°C when running multiple programs and servers on localhost (Tomcat, WAMP); the fan then runs at around 5500 rpm and is loud! I'm running Windows 8.1, and the CPU is not overclocked.

    PS: Due to my reputation I couldn't post all the links that were necessary. I will edit ASAP.

    Read the article

  • Configuring varnish and django (apache/modwsgi)

    - by Hedde
    I am trying to work out why my application keeps hitting the database while I have Varnish set up in front of Apache. I think I am missing some vital configuration; any tips are welcome.

    This is my curl result:

        HTTP/1.1 200 OK
        Server: Apache/2.2.16 (Debian)
        Content-Language: en-us
        Vary: Accept,Accept-Encoding,Accept-Language,Cookie
        Cache-Control: s-maxage=60, no-transform, max-age=60
        Content-Type: application/json; charset=utf-8
        Date: Sat, 15 Sep 2012 08:19:17 GMT
        Connection: keep-alive

    My varnishlog:

        13 BackendClose - apache
        13 BackendOpen b apache 127.0.0.1 47665 127.0.0.1 8000
        13 TxRequest b GET
        13 TxURL b /api/v1/events/?format=json
        13 TxProtocol b HTTP/1.1
        13 TxHeader b User-Agent: curl/7.19.7 (universal-apple-darwin10.0) libcurl/7.19.7 OpenSSL/0.9.8r zlib/1.2.3
        13 TxHeader b Host: foobar.com
        13 TxHeader b Accept: */*
        13 TxHeader b X-Forwarded-For: 92.64.200.145
        13 TxHeader b X-Varnish: 979305817
        13 TxHeader b Accept-Encoding: gzip
        13 RxProtocol b HTTP/1.1
        13 RxStatus b 200
        13 RxResponse b OK
        13 RxHeader b Date: Sat, 15 Sep 2012 08:21:28 GMT
        13 RxHeader b Server: Apache/2.2.16 (Debian)
        13 RxHeader b Content-Language: en-us
        13 RxHeader b Content-Encoding: gzip
        13 RxHeader b Vary: Accept,Accept-Encoding,Accept-Language,Cookie
        13 RxHeader b Cache-Control: s-maxage=60, no-transform, max-age=60
        13 RxHeader b Content-Length: 6399
        13 RxHeader b Content-Type: application/json; charset=utf-8
        13 Fetch_Body b 4(length) cls 0 mklen 1
        13 Length b 6399
        13 BackendReuse b apache
        11 SessionOpen c 92.64.200.145 53236 :80
        11 ReqStart c 92.64.200.145 53236 979305817
        11 RxRequest c HEAD
        11 RxURL c /api/v1/events/?format=json
        11 RxProtocol c HTTP/1.1
        11 RxHeader c User-Agent: curl/7.19.7 (universal-apple-darwin10.0) libcurl/7.19.7 OpenSSL/0.9.8r zlib/1.2.3
        11 RxHeader c Host: foobar.com
        11 RxHeader c Accept: */*
        11 VCL_call c recv lookup
        11 VCL_call c hash
        11 Hash c /api/v1/events/?format=json
        11 Hash c foobar.com
        11 VCL_return c hash
        11 VCL_call c miss fetch
        11 Backend c 13 apache apache
        11 TTL c 979305817 RFC 60 -1 -1 1347697289 0 1347697288 0 60
        11 VCL_call c fetch deliver
        11 ObjProtocol c HTTP/1.1
        11 ObjResponse c OK
        11 ObjHeader c Date: Sat, 15 Sep 2012 08:21:28 GMT
        11 ObjHeader c Server: Apache/2.2.16 (Debian)
        11 ObjHeader c Content-Language: en-us
        11 ObjHeader c Content-Encoding: gzip
        11 ObjHeader c Vary: Accept,Accept-Encoding,Accept-Language,Cookie
        11 ObjHeader c Cache-Control: s-maxage=60, no-transform, max-age=60
        11 ObjHeader c Content-Type: application/json; charset=utf-8
        11 Gzip c u F - 6399 69865 80 80 51128
        11 VCL_call c deliver deliver
        11 TxProtocol c HTTP/1.1
        11 TxStatus c 200
        11 TxResponse c OK
        11 TxHeader c Server: Apache/2.2.16 (Debian)
        11 TxHeader c Content-Language: en-us
        11 TxHeader c Vary: Accept,Accept-Encoding,Accept-Language,Cookie
        11 TxHeader c Cache-Control: s-maxage=60, no-transform, max-age=60
        11 TxHeader c Content-Type: application/json; charset=utf-8
        11 TxHeader c Date: Sat, 15 Sep 2012 08:21:29 GMT
        11 TxHeader c Connection: keep-alive
        11 Length c 0
        11 ReqEnd c 979305817 1347697288.292612076 1347697289.456128597 0.000086784 1.163468122 0.000048399
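
    One likely culprit, judging from the log alone (an assumption, not the thread's confirmed answer): the backend sends Vary: Accept,Accept-Encoding,Accept-Language,Cookie, so any cookie-bearing request gets its own cache variant and rarely hits. A minimal VCL sketch that strips cookies for the API path:

        sub vcl_recv {
            # the API is assumed read-only here, so drop cookies to allow cache hits
            if (req.url ~ "^/api/") {
                unset req.http.Cookie;
            }
        }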

    Read the article

  • Why are snapshots considered temporary backups, not real backups?

    - by Samselvaprabu
    I am using VMware ESXi. In our team we used to provide snapshots for long-term backup. Then we faced issues like memory spillover and the server hanging. I started reading VMware knowledge base articles and elsewhere, and everywhere it was recommended not to keep snapshots for a long time - even VMware advises keeping snapshots for a maximum of three days. But our team kept asking us to have at least two permanent snapshots, held until the VM is deleted (sometimes we may use a VM for a year):

    - One snapshot for the fresh machine state, so when we complete testing an application we can revert to the fresh state and install another application. (If I did not allow that, I may often need to host the VM.)
    - A second snapshot for keeping the VM in some known state - maybe they found an issue and want to keep that state for some time, or they installed prerequisites for the application and want the machine ready for testing.

    Logically, their needs seem fair. But if I allow that, I permit them to hold snapshots for a long time. We are not using our VMs as mail servers or database servers. Why does keeping snapshots for a long time have an adverse effect? Why are snapshots considered temporary backups, not real backups?

    Read the article

  • Recommended partitions to migrate from Windows XP to Windows 7 and Ubuntu

    - by Juanillo
    Hello, I have a system with Windows XP. My hard disk has 189 GB NTFS. I want to change the operating system to Windows 7, but I want to add Ubuntu as well. As the change might take several days (because I don't have much time), I want to install one system (Windows 7 or Ubuntu) while keeping my Windows XP installed in another partition, so if something doesn't work in the brand-new operating system I can still use my Windows XP installation. So I've thought about doing something like this:

    1. Copy the data I want to keep to an external hard disk.
    2. Make enough partitions to install Windows 7, keep data in another partition, and install Ubuntu in a third.
    3. Copy the data I want to keep to the partition I've just created.
    4. Install Ubuntu in the partitions for Ubuntu.
    5. Check that Ubuntu works fine.
    6. If it works OK, install Windows 7 on the partition of Windows XP (Windows XP will be erased).
    7. Reinstall the programs in Windows 7.

    So my question is: how many partitions do you recommend (and the size of each one, and NTFS or FAT32)? The operating system I'm going to use most is Windows 7 (though I love Linux, I use many programs which are Windows-dependent). Do you think I should do anything else or change something in the process to avoid problems? I don't know if making the partitions can harm the data I have on the disk. Thanks.

    Read the article

  • Networking DOS within Windows 7 XP Mode, with a Windows XP/7 Networked Share

    - by theonlylos
    For a while now, one of my clients has been stuck with Corel Paradox 4.0 (it used to be the biggest database system in the DOS days, until Microsoft released Access in the early 90's), so for a while I managed to keep it on life support on Windows XP. Since switching to Windows 7 x64, however, I've had to resort to using XP Mode as the sandbox to keep it up and running. While I am able to run Paradox as usual in XP Mode, I'm having a serious issue: if I try connecting the install to the network share (which is located on the Windows 7 portion of the system), Paradox keeps exiting because it says the serial number is invalid. Now, I know for a fact that this is an issue with the virtual loopback adapter and also with having the VM linked to the physical ethernet adapter - and while I have solved this issue before, most of my fixes have been bandages, since after a few weeks the issue pops up again.

    Long story short: is there a permanent way to link a DOS program to a network share address? For example, when I try using \\tsclient\paradox (the Windows 7 address) I keep getting an error saying I need a valid network address. I've tried mapping that folder to various drive letters such as P:\Paradox, but for some reason that keeps failing over time. For what it's worth, Paradox uses a .SOM file to store the network settings; however, it isn't editable in Notepad but rather is controlled by a wizard in Paradox. But if that extension rings any bells, I'd welcome any insights.
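
    For illustration, one way to make such a mapping survive reboots inside XP Mode is a persistent net use mapping (a sketch using the share path from the question; the drive letter is arbitrary):

        REM map the tsclient share to a fixed drive letter and re-create it at each logon
        net use P: \\tsclient\paradox /persistent:yes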

    Read the article

  • How do you handle data archiving?

    - by 20th Century Boy
    Backups are one thing, but long-term archival is another. For example, you might be required to store emails for 7 years, or keep all project data indefinitely. I used to save archives to tape, but then I've had tapes get destroyed (drives ripping the tape out). So... write to 2 tapes, I hear you say. Is that what others do - have 2 (or more) tapes of the same data for redundancy?

    But then the other issue is that tapes usually cannot be read by different backup software vendors. E.g. if you go from Arcserve to Backup Exec to Commvault over 10 years, you would need to keep all 3 systems so that you could restore old data. Likewise for hardware: old tapes might not be barcoded, might not be compatible with the new library, etc. So do you keep old tape hardware AND old software just in case you might need to restore a 10-year-old file? Or, when you move to a new backup system, do you migrate all archived data to the new system and re-archive it onto new tapes? That could be a huge job. Any thoughts?

    Read the article

  • socket() failed: No buffer space available) while connecting to upstream,

    - by alfish
    On my Ubuntu 10.04 VPS, I get regular 500 errors on an nginx (0.7.??) + FastCGI web server running a Drupal site, and when I trace the nginx error log I see plenty of these:

        socket() failed: No buffer space available) while connecting to upstream ...

    I have tried different combinations of configs but none fixed the problem. Currently I have 3 nginx workers, a keep-alive timeout of 15 seconds, and:

        PHP_FCGI_CHILDREN=5
        PHP_FCGI_MAX_REQUESTS=1000

    I would really appreciate it if you can suggest a solution to this annoying problem.
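
    A couple of diagnostic commands worth running first (my suggestion, not from the original post), since this error usually points to socket or ephemeral-port exhaustion on the VPS:

        # summary of socket usage, including connections stuck in TIME-WAIT
        ss -s
        # the ephemeral port range available for outgoing FastCGI connections
        cat /proc/sys/net/ipv4/ip_local_port_range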

    Read the article

  • User given a login prompt when closing Word documents after viewing them in IE7

    - by Martin Owen
    When using IE7 to view Word documents on our CRM system (an ASP.NET 2.0 application running on Windows Server 2003 and IIS 6, using Windows authentication), I'm finding that a login prompt appears when the user closes the document. The Word document is originally opened by clicking a link in the CRM system. Are there permissions that I can set on the folder containing the Word documents to prevent this prompt? I've already tried only allowing the Read permission for the Users group (I've left Administrators with Full Control). If there's another solution to this that doesn't use permissions, please let me know.

    UPDATE: I ran Fiddler as suggested by JD, and here is the output from the two responses after the request for the document. The first seems to be a DAV response and the second is the authentication request. How do I prevent the DAV response and just return the .doc from the server?

        OPTIONS / HTTP/1.1
        Translate: f
        User-Agent: Microsoft Data Access Internet Publishing Provider Protocol Discovery
        Host: <REMOVED>
        Content-Length: 0
        Connection: Keep-Alive
        Pragma: no-cache
        X-NovINet: v1.2

        HTTP/1.1 200 OK
        Date: Thu, 18 Feb 2010 13:37:36 GMT
        Server: Microsoft-IIS/6.0
        X-Powered-By: ASP.NET
        MS-Author-Via: DAV
        Content-Length: 0
        Accept-Ranges: none
        DASL: <DAV:sql>
        DAV: 1, 2
        Public: OPTIONS, TRACE, GET, HEAD, DELETE, PUT, POST, COPY, MOVE, MKCOL, PROPFIND, PROPPATCH, LOCK, UNLOCK, SEARCH
        Allow: OPTIONS, TRACE, GET, HEAD, COPY, PROPFIND, SEARCH, LOCK, UNLOCK
        Cache-Control: private

        ------------------------------------------------------------------

        OPTIONS /docs/ZONE%20100-105.doc HTTP/1.1
        Translate: f
        User-Agent: Microsoft Data Access Internet Publishing Provider Protocol Discovery
        Host: <REMOVED>
        Content-Length: 0
        Connection: Keep-Alive
        Pragma: no-cache
        X-NovINet: v1.2

        HTTP/1.1 401 Unauthorized
        Content-Length: 83
        Content-Type: text/html
        Server: Microsoft-IIS/6.0
        WWW-Authenticate: Basic realm="<REMOVED>"
        X-Powered-By: ASP.NET
        Date: Thu, 18 Feb 2010 13:37:36 GMT

    UPDATE 2: I found a potential workaround for the problem via this post: http://forums.iis.net/p/1149091/1868317.aspx. I moved all of the documents being requested into a folder outside of the web root, and created a virtual directory for it (also outside of the web root). When I followed a link to one of the documents in IE and then closed the document, I wasn't presented with a login prompt. I should point out that I'm not using FPSE, unlike the person in the forum post. Ideally I don't want to have to put the documents in a separate virtual directory, but this is the simplest solution I've found so far.

    Read the article

  • CentOS centralised logging, syslogd, rsyslog, syslog-ng, logstash sender?

    - by benbradley
    I'm trying to figure out the best way to set up a central place to store and interrogate server logs: syslog, Apache, MySQL etc. I've found a few different options, but I'm not sure what would be best. I'm looking for something that is easy to install and keep updated on many virtual machines. I can add it to a VM template going forward, but I'd also like it to be easy to install, to keep VM complexity down. The options I've found so far are:

    - syslogd
    - syslog-ng
    - rsyslog
    - syslogd/syslog-ng/rsyslog forwarding to logstash/ElasticSearch
    - a logstash agent in each log "client", sending to Redis/logstash/ElasticSearch

    And all sorts of permutations of the above. What's the most resilient and lightweight from the log "client" perspective? I'd like to avoid the situation where log "clients" hang because they are unable to send their logs to the logging server. Also, I would still like to keep local logging and the rotation/retention provided by logrotate in place. Any ideas/suggestions, or reasons for or against any of the above? Or suggestions of a different structure entirely? Cheers, B
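
    For reference, a minimal rsyslog forwarding sketch (an illustration of the rsyslog option above; the central hostname and file path are assumptions, not from the question). It leaves local files and logrotate untouched and ships a copy of everything over TCP:

        # /etc/rsyslog.d/60-forward.conf (hypothetical path)
        # forward all facilities/priorities to the central host; @@ means TCP, @ would mean UDP
        *.* @@loghost.example.com:514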

    Read the article

  • How long are fragmented TCP segments kept by the TCP server?

    - by Justin
    Suppose that a given TCP segment is fragmented into two IP datagrams, and that the first datagram arrives at the TCP server but the second datagram never arrives. After a certain amount of time the TCP server sends a keepalive and determines that the client is alive. What does the TCP server then do with the first datagram? Does it wait for the second datagram to arrive, or does it discard the first datagram?

    Read the article

  • File Sync Solution for Batch Processing (ETL)

    - by KenFar
    I'm looking for a slightly different kind of sync utility - not one designed to keep two directories identical, but rather one intended to keep files flowing from one host to another. The context is a data warehouse that currently has a custom-developed solution that moves 10,000 files a day, some of which are 1+ GB gzipped files, between Linux servers via ssh. Files are produced by the extract process, then moved to the transform server where a transform daemon is waiting to pick them up. The same process happens between transform & load. Once the files are moved they are typically archived on the source for a week, and the downstream process likewise moves them to temp then archive as it consumes them. So, my requirements & desires:

    - It is never used to refresh updated files - only used to deliver new files.
    - Because it's delivering files to downstream processes, it needs to rename the file once done so that a partial file doesn't get picked up.
    - In order to simplify recovery, it should keep a copy of the source files - but rename them or move them to another directory.
    - If the transfer fails (network down, file system full, permissions, file locked, etc.), then it should retry periodically - and never fail in a non-recoverable way, or in a way that sends the file twice or never sends the file.
    - Should be able to copy files to 2+ destinations.
    - Should have a consolidated log so that it's easy to find problems.
    - Should have an optional checksum feature.

    Any recommendations? Can Unison do this well?
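
    Not a full match for the requirement list, but as a point of comparison (a sketch with hypothetical paths, not a recommendation from the thread): rsync already covers the partial-file concern, because it writes each incoming file to a hidden temporary name and only renames it into place once the transfer completes.

        # deliver only new files over ssh; consumers never see a partially written file
        rsync -av --ignore-existing /data/outgoing/ transform:/data/incoming/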

    Read the article

  • Disable "Do you want to change the color scheme to improve performance?" warning

    - by William Lawn Stewart
    Sometimes this dialog box will pop up (its text is reproduced below). Every time it appears I select "Keep the current color scheme, and don't show this message again". Windows then reminds me again - either the next day, after a reboot, or sometimes another 5 minutes later.

        Do you want to change the color scheme to improve performance?

        Windows has detected your computer's performance is slow. This could be because there are not enough resources to run the Windows Aero color scheme. To improve performance, try changing the color scheme to Windows 7 Basic. Any change you make will be in effect until the next time you log on to Windows.

        - Change the color scheme to Windows 7 Basic
        - Keep the current color scheme, but ask me again if my computer continues to perform slowly
        - Keep the current color scheme, and don't show this message again

    Is there some reason why Windows is ignoring/forgetting my attempts to suppress the dialog? I'd love to never ever see it again; it's annoying, and it alt-tabs me out of fullscreen applications. If it matters, I'm running Windows 7 x64 Professional. I believe the dialog appears because I'm forcing Vsync and Triple Buffering for DirectX applications.

    Read the article

  • Service monitoring service, which I can ping instead of getting pinged

    - by Jack Juiceson
    I'm looking for a service that can send me an alert if my program hasn't pinged it (with some HTTP request) in X minutes. It's pretty much like any service monitoring, but instead of the service pinging my server, I want my program to ping the monitoring service. This is because our program can't accept incoming connections, yet we need to monitor that it's alive, and the easiest thing for us would be a service we can ping. Thank you, - Jack
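
    For illustration, the client side of such a dead-man's-switch service is typically just a scheduled heartbeat (the URL below is hypothetical); the service raises an alert when no ping arrives within its configured window:

        # crontab entry: report liveness every 5 minutes
        */5 * * * * curl -fsS https://monitor.example.com/ping/my-program > /dev/null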

    Read the article

  • How Can We Create Blackbox Logs for Nginx?

    - by Alan Gutierrez
    There's an article out there, Profiling LAMP Applications with Apache's Blackbox Logs, that describes how to create a log that records a lot of detailed information missing in the common and combined log formats. This information is supposed to help you resolve performance issues. As the author notes, "While the common log-file format (and the combined format) are great for hit tracking, they aren't suitable for getting hardcore performance data."

    The article describes a "blackbox" log format - like a blackbox flight recorder on an aircraft - that gathers information used to profile server performance that is missing from the hit-tracking log formats: keep-alive status, remote port, child processes, bytes sent, etc.

        LogFormat "%a/%S %X %t \"%r\" %s/%>s %{pid}P/%{tid}P %T/%D %I/%O/%B" blackbox

    I'm trying to recreate as much of the format as I can for Nginx, and would like help filling in the blanks. Here's what the Nginx blackbox format would look like, with the unmapped Apache directives marked by question marks:

        access_log blackbox '$remote_addr/$remote_port X? [$time_local] "$request"'
                            's?/$status $pid/0 T?/D? I?/$bytes_sent/$body_bytes_sent'

    Here's a table of the variables I've been able to map from the Nginx documentation:

        %a      = $remote_addr - The IP address of the remote client.
        %S      = $remote_port - The port of the remote client.
        %X      = ? - Keep-alive status.
        %t      = $time_local - The start time of the request.
        %r      = $request - The first line of the request, containing method, path and protocol.
        %s      = ? - Status before any redirections.
        %>s     = $status - Status after any redirections.
        %{pid}P = $pid - The process id.
        %{tid}P = N/A - The thread id, which is non-applicable to Nginx.
        %T      = ? - The time in seconds to handle the request.
        %D      = $request_time - The time to handle the request (Apache's %D is in microseconds; nginx's $request_time is in seconds with millisecond resolution).
        %I      = ? - The count of bytes received including headers.
        %O      = $bytes_sent - The count of bytes sent including headers.
        %B      = $body_bytes_sent - The count of bytes sent excluding headers, but with a 0 for none instead of '-'.

    I'm looking for help filling in the missing variables, or confirmation that the missing variables are in fact unavailable in Nginx.
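
    As a hedged sketch of how the mapped pieces could be assembled (my illustration, not from the article: $request_length roughly covers %I, and $connection_requests - available only in nginx versions newer than the 1.0.x of this question - hints at keep-alive reuse; still-unmapped fields are logged as '-'):

        log_format blackbox '$remote_addr/$remote_port $connection_requests [$time_local] '
                            '"$request" -/$status $pid/0 $request_time/$request_time '
                            '$request_length/$bytes_sent/$body_bytes_sent';

        access_log /var/log/nginx/blackbox.log blackbox;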

    Read the article
