Search Results

Search found 14127 results on 566 pages for 'plz send me teh codez'.

  • Cisco ASA not forwarding traffic from one interface to another

    - by Antoine Benkemoun
    Hello ServerFault, I need help configuring my Cisco ASA 5510. I have set up four Cisco ASAs interconnected via a big LAN, and each ASA has three or four LANs attached to it. The IP routing part is taken care of by OSPF. My problem is on another level. A computer connected to one of the LANs attached to an ASA has no problem communicating with the outside world, the outside world being anything "after" the ASA. My problem is that I am completely unable to have it communicate with another LAN connected to the same ASA. To rephrase this, I am unable to send traffic from one interface of a given ASA to another interface of the same ASA. My configuration is the following:
        !
        hostname Fuji
        !
        interface Ethernet0/0
         speed 100
         duplex full
         nameif outside
         security-level 0
         ip address 10.0.0.2 255.255.255.0
         no shutdown
        !
        interface Ethernet0/1
         speed 100
         duplex full
         nameif cs4
         no shutdown
         security-level 100
         ip address 10.1.4.1 255.255.255.0
        !
        interface Ethernet0/2
         speed 100
         duplex full
         no shutdown
        !
        interface Ethernet0/2.15
         vlan 15
         nameif cs5
         security-level 100
         ip address 10.1.5.1 255.255.255.0
        !
        interface Ethernet0/2.16
         vlan 16
         nameif cs6
         security-level 100
         ip address 10.1.6.1 255.255.255.0
        !
        interface Management0/0
         speed 100
         duplex full
         nameif management
         security-level 100
         ip address 10.6.0.252 255.255.255.0
        !
        access-list nat_cs4 extended permit ip 10.1.4.0 255.255.255.0 any
        access-list acl_cs4 extended permit ip 10.1.4.0 255.255.255.0 any
        access-list nat_cs5 extended permit ip 10.1.5.0 255.255.255.0 any
        access-list acl_cs5 extended permit ip 10.1.5.0 255.255.255.0 any
        access-list nat_cs6 extended permit ip 10.1.6.0 255.255.255.0 any
        access-list acl_cs6 extended permit ip 10.1.6.0 255.255.255.0 any
        !
        access-list nat_outside extended permit ip any any
        access-list acl_outside extended permit ip any 10.1.4.0 255.255.255.0
        access-list acl_outside extended permit ip any 10.1.5.0 255.255.255.0
        access-list acl_outside extended permit ip any 10.1.6.0 255.255.255.0
        !
        nat (outside) 0 access-list nat_outside
        nat (cs4) 0 access-list nat_cs4
        nat (cs5) 0 access-list nat_cs5
        nat (cs6) 0 access-list nat_cs6
        !
        static (outside,cs4) 0.0.0.0 0.0.0.0 netmask 0.0.0.0
        static (outside,cs5) 0.0.0.0 0.0.0.0 netmask 0.0.0.0
        static (outside,cs6) 0.0.0.0 0.0.0.0 netmask 0.0.0.0
        !
        static (cs4,outside) 10.1.4.0 10.1.4.0 netmask 255.255.255.0
        static (cs4,cs5) 10.1.4.0 10.1.4.0 netmask 255.255.255.0
        static (cs4,cs6) 10.1.4.0 10.1.4.0 netmask 255.255.255.0
        !
        static (cs5,outside) 10.1.5.0 10.1.5.0 netmask 255.255.255.0
        static (cs5,cs4) 10.1.5.0 10.1.5.0 netmask 255.255.255.0
        static (cs5,cs6) 10.1.5.0 10.1.5.0 netmask 255.255.255.0
        !
        static (cs6,outside) 10.1.6.0 10.1.6.0 netmask 255.255.255.0
        static (cs6,cs4) 10.1.6.0 10.1.6.0 netmask 255.255.255.0
        static (cs6,cs5) 10.1.6.0 10.1.6.0 netmask 255.255.255.0
        !
        access-group acl_outside in interface outside
        access-group acl_cs4 in interface cs4
        access-group acl_cs5 in interface cs5
        access-group acl_cs6 in interface cs6
        !
        router ospf 1
         network 10.0.0.0 255.255.255.0 area 1
         network 10.1.4.0 255.255.255.0 area 1
         network 10.1.5.0 255.255.255.0 area 1
         network 10.1.6.0 255.255.255.0 area 1
         log-adj-changes
        !
    There is nothing really complicated in this configuration. It just NATs from one interface to another and that's it. I have tried enabling same-security-traffic permit inter-interface but that doesn't help. I therefore must be missing something a little bit more complicated. Does anyone know why I cannot forward traffic from one interface to another?
    Thank you in advance for your help, Antoine
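
    A quick way to see exactly where the hairpinned traffic dies is to ask the ASA itself. The commands below are a sketch using the interface names and addresses from the config above (the .10 host addresses are made up for the test); packet-tracer names the phase (ACL, NAT, routing) that drops the packet:

        ! cs4/cs5/cs6 all sit at security-level 100, so this is required
        same-security-traffic permit inter-interface
        !
        ! simulate a ping from a host on cs4 to a host on cs5 and show each phase
        packet-tracer input cs4 icmp 10.1.4.10 8 0 10.1.5.10 detailed
        !
        ! optionally watch real drops while testing from a LAN host
        capture hairpin type asp-drop all
        show capture hairpin | include 10.1.5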

    Read the article

  • Blocking 'good' bots in nginx with multiple conditions for certain off-limits URLs where humans can go

    - by Glenn Plas
    After 2 days of searching/trying/failing I decided to post this here, I haven't found any example of someone doing the same nor what I tried seems to be working OK. I'm trying to send a 403 to bots not respecting the robots.txt file (even after downloading it several times). Specifically Googlebot. It will support the following robots.txt definition. User-agent: * Disallow: /*/*/page/ The intent is to allow Google to browse whatever they can find on the site but return a 403 for the following type of request. Googlebot seems to keep on nesting these links eternally adding paging block after block: my_domain.com:80 - 66.x.67.x - - [25/Apr/2012:11:13:54 +0200] "GET /2011/06/ page/3/?/page/2//page/3//page/2//page/3//page/2//page/2//page/4//page/4//pag e/1/&wpmp_switcher=desktop HTTP/1.1" 403 135 "-" "Mozilla/5.0 (compatible; G ooglebot/2.1; +http://www.google.com/bot.html)" It's a wordpress site btw. I don't want those pages to show up, even though after the robots.txt info got through, they stopped for a while only to begin crawling again later. It just never stops .... I do want real people to see this. As you can see, google get a 403 but when I try this myself in a browser I get a 404 back. I want browsers to pass. root@my_domain:# nginx -V nginx version: nginx/1.2.0 I tried different approaches, using a map and plain old nono if's and they both act the same: (under http section) map $http_user_agent $is_bot { default 0; ~crawl|Googlebot|Slurp|spider|bingbot|tracker|click|parser|spider 1; } (under the server section) location ~ /(\d+)/(\d+)/page/ { if ($is_bot) { return 403; # Please respect the robots.txt file ! } } I recently had to polish up my Apache skills for a client where I did about the same thing like this : # Block real Engines , not respecting robots.txt but allowing correct calls to pass # Google RewriteCond %{HTTP_USER_AGENT} ^Mozilla/5\.0\ \(compatible;\ Googlebot/2\.[01];\ \+http://www\.google\.com/bot\.html\)$ [NC,OR] # Bing RewriteCond %{HTTP_USER_AGENT} ^Mozilla/5\.0\ \(compatible;\ bingbot/2\.[01];\ \+http://www\.bing\.com/bingbot\.htm\)$ [NC,OR] # msnbot RewriteCond %{HTTP_USER_AGENT} ^msnbot-media/1\.[01]\ \(\+http://search\.msn\.com/msnbot\.htm\)$ [NC,OR] # Slurp RewriteCond %{HTTP_USER_AGENT} ^Mozilla/5\.0\ \(compatible;\ Yahoo!\ Slurp;\ http://help\.yahoo\.com/help/us/ysearch/slurp\)$ [NC] # block all page searches, the rest may pass RewriteCond %{REQUEST_URI} ^(/[0-9]{4}/[0-9]{2}/page/) [OR] # or with the wpmp_switcher=mobile parameter set RewriteCond %{QUERY_STRING} wpmp_switcher=mobile # ISSUE 403 / SERVE ERRORDOCUMENT RewriteRule .* - [F,L] # End if match This does a bit more than I asked nginx to do but it's about the same principle, I'm having a hard time figuring this out for nginx. So my question would be, why would nginx serve my browser a 404 ? Why isn't it passing, The regex isn't matching for my UA: "Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/536.5 (KHTML, like Gecko) Chrome/19.0.1084.30 Safari/536.5" There are tons of example to block based on UA alone, and that's easy. It also looks like the matchin location is final, e.g. it's not 'falling' through for regular user, I'm pretty certain that this has some correlation with the 404 I get in the browser. As a cherry on top of things, I also want google to disregard the parameter wpmp_switcher=mobile , wpmp_switcher=desktop is fine but I just don't want the same content being crawled multiple times. 
Even though I ended up adding wpmp_switcher=mobile via the google webmaster tools pages (requiring me to sign up ....). that also stopped for a while but today they are back spidering the mobile sections. So in short, I need to find a way for nginx to enforce the robots.txt definitions. Can someone shell out a few minutes of their lives and push me in the right direction please ? I really appreciate ANY response that makes me think harder ;-)
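
    For what it's worth, the browser 404 is usually down to the regex location matching for humans as well and then having nothing configured to serve there, since a matching regex location stops nginx from falling through to the normal WordPress handler. A sketch of one way to keep both behaviours in a single location, assuming the usual /index.php front controller for WordPress on nginx (adjust to however PHP is actually passed in the rest of the server block):

        # http{} level: classify the user agent once
        map $http_user_agent $is_bot {
            default                                          0;
            ~*(googlebot|bingbot|slurp|spider|crawl|parser)  1;
        }

        # server{} level: bots get the 403, humans fall through to WordPress
        location ~ ^/\d{4}/\d{2}/page/ {
            if ($is_bot) {
                return 403;   # please respect robots.txt
            }
            try_files $uri $uri/ /index.php?$args;
        }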

    Read the article

  • How to serve static files for multiple Django projects via nginx to same domain

    - by thanley
    I am trying to setup my nginx conf so that I can serve the relevant files for my multiple Django projects. Ultimately I want each app to be available at www.example.com/app1, www.example.com/app2 etc. They all serve static files from a 'static-files' directory located in their respective project root. The project structure: Home Ubuntu Web www.example.com ref logs app app1 app1 static bower_components templatetags app1_project templates static-files app2 app2 static templates templatetags app2_project static-files app3 tests templates static-files static app3_project app3 venv When I use the conf below, there are no problems for serving the static-files for the app that I designate in the /static/ location. I can also access the different apps found at their locations. However, I cannot figure out how to serve all of the static files for all the apps at the same time. I have looked into using the 'try_files' command for the static location, but cannot figure out how to see if it is working or not. Nginx Conf - Only serving static files for one app: server { listen 80; server_name example.com; server_name www.example.com; access_log /home/ubuntu/web/www.example.com/logs/access.log; error_log /home/ubuntu/web/www.example.com/logs/error.log; root /home/ubuntu/web/www.example.com/; location /static/ { alias /home/ubuntu/web/www.example.com/app/app1/static-files/; } location /media/ { alias /home/ubuntu/web/www.example.com/media/; } location /app1/ { include uwsgi_params; uwsgi_param SCRIPT_NAME /app1; uwsgi_modifier1 30; uwsgi_pass unix:///home/ubuntu/web/www.example.com/app1.sock; } location /app2/ { include uwsgi_params; uwsgi_param SCRIPT_NAME /app2; uwsgi_modifier1 30; uwsgi_pass unix:///home/ubuntu/web/www.example.com/app2.sock; } location /app3/ { include uwsgi_params; uwsgi_param SCRIPT_NAME /app3; uwsgi_modifier1 30; uwsgi_pass unix:///home/ubuntu/web/www.example.com/app3.sock; } # what to serve if upstream is not available or crashes error_page 400 /static/400.html; error_page 403 /static/403.html; error_page 404 /static/404.html; error_page 500 502 503 504 /static/500.html; # Compression gzip on; gzip_http_version 1.0; gzip_comp_level 5; gzip_proxied any; gzip_min_length 1100; gzip_buffers 16 8k; gzip_types text/plain text/css application/x-javascript text/xml application/xml application/xml+rss text/javascript; # Some version of IE 6 don't handle compression well on some mime-types, # so just disable for them gzip_disable "MSIE [1-6].(?!.*SV1)"; # Set a vary header so downstream proxies don't send cached gzipped # content to IE6 gzip_vary on; } Essentially I want to have something like (I know this won't work) location /static/ { alias /home/ubuntu/web/www.example.com/app/app1/static-files/; alias /home/ubuntu/web/www.example.com/app/app2/static-files/; alias /home/ubuntu/web/www.example.com/app/app3/static-files/; } or (where it can serve the static files based on the uri) location /static/ { try_files $uri $uri/ =404; } So basically, if I use try_files like above, is the problem in my project directory structure? Or am I totally off base on this and I need to put each app in a subdomain instead of going this route? Thanks for any suggestions TLDR: I want to go to: www.example.com/APP_NAME_HERE And have nginx serve the static location: /home/ubuntu/web/www.example.com/app/APP_NAME_HERE/static-files/;
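
    One way to avoid stacking one alias per app is to capture the app name out of the URL and build the path from it. This is only a sketch: it assumes each project sets STATIC_URL to '/static/<app-name>/' so the app name actually appears in the request path, and it uses the directory layout described above:

        location ~ ^/static/(?<app>[^/]+)/(?<path>.+)$ {
            alias /home/ubuntu/web/www.example.com/app/$app/static-files/$path;
        }

    With that layout, a request for /static/app1/css/site.css maps straight onto app/app1/static-files/css/site.css, and app2/app3 work the same way without extra locations.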

    Read the article

  • after enabling mod ssl apache stops listening on port 80

    - by zensys
    I have an ubuntu 12.04 server with zend server CE installed. I now wanted to enable https but after the first steps according to the documentation, 'a2enmod ssl' and 'apache service restart', apache does not listen on 443 but neither on 80, according to netstat -tap | grep http(s)! This is what I see in my error log, but I can't make much of it: [Fri May 25 19:52:39 2012] [notice] caught SIGTERM, shutting down [Fri May 25 19:52:41 2012] [warn] Init: Session Cache is not configured [hint: SSLSessionCache] [Fri May 25 19:52:41 2012] [notice] ModSecurity for Apache/2.6.3 (http://www.modsecurity.org/) configured. [Fri May 25 19:52:41 2012] [notice] ModSecurity: APR compiled version="1.4.5"; loaded version="1.4.6" [Fri May 25 19:52:41 2012] [warn] ModSecurity: Loaded APR do not match with compiled! [Fri May 25 19:52:41 2012] [notice] ModSecurity: PCRE compiled version="8.12"; loaded version="8.12 2011-01-15" [Fri May 25 19:52:41 2012] [notice] ModSecurity: LUA compiled version="Lua 5.1" [Fri May 25 19:52:41 2012] [notice] ModSecurity: LIBXML compiled version="2.7.8" [Fri May 25 19:53:11 2012] [notice] ModSecurity for Apache/2.6.3 (http://www.modsecurity.org/) configured. [Fri May 25 19:53:11 2012] [notice] ModSecurity: APR compiled version="1.4.5"; loaded version="1.4.6" [Fri May 25 19:53:11 2012] [warn] ModSecurity: Loaded APR do not match with compiled! [Fri May 25 19:53:11 2012] [notice] ModSecurity: PCRE compiled version="8.12"; loaded version="8.12 2011-01-15" [Fri May 25 19:53:11 2012] [notice] ModSecurity: LUA compiled version="Lua 5.1" [Fri May 25 19:53:11 2012] [notice] ModSecurity: LIBXML compiled version="2.7.8" [Fri May 25 19:53:12 2012] [notice] Apache/2.2.22 (Ubuntu) PHP/5.3.8-ZS5.5.0 configured -- resuming normal operations and here is my httpd.conf: # Name based virtual hosting <virtualhost *:80> ServerName www-redirect KeepAlive Off RewriteEngine On RewriteCond %{HTTP_HOST} ^[^\./]+\.[^\./]+$ RewriteRule ^/(.*)$ http://www.%{HTTP_HOST}/$1 [R=301,L] </virtualhost> Alias /shared/js "/home/web/library/js" Alias /shared/image "/home/web/library/image" <IfModule mod_expires.c> <FilesMatch "\.(jpe?g|png|gif|js|css|doc|rtf|xls|pdf)$"> ExpiresActive On ExpiresDefault "access plus 1 week" </FilesMatch> </IfModule> ErrorLog ${APACHE_LOG_DIR}/error.log LogLevel warn <Directory /> Options FollowSymLinks AllowOverride None Order allow,deny allow from all </Directory> <Location /> RewriteEngine On RewriteCond %{REQUEST_FILENAME} -s [OR] RewriteCond %{REQUEST_FILENAME} -l [OR] RewriteCond %{REQUEST_FILENAME} -d RewriteRule ^.*$ - [NC,L] RewriteRule ^.*$ /index.php [NC,L] </Location> netstat -tap gives: Proto Recv-Q Send-Q Local Address Foreign Address State PID/Program name tcp 0 0 *:mysql *:* LISTEN 765/mysqld tcp 0 0 *:pop3 *:* LISTEN 744/dovecot tcp 0 0 *:imap2 *:* LISTEN 744/dovecot tcp 0 0 *:http *:* LISTEN 19861/apache2 tcp 0 0 *:smtp *:* LISTEN 30365/master tcp 0 0 *:4444 *:* LISTEN 634/sshd tcp 0 0 *:kamanda *:* LISTEN 1167/lighttpd tcp 0 0 *:imaps *:* LISTEN 744/dovecot tcp 0 0 *:amandaidx *:* LISTEN 1167/lighttpd tcp 0 0 localhost.loc:amidxtape *:* LISTEN 19861/apache2 tcp 0 0 *:pop3s *:* LISTEN 744/dovecot tcp 0 384 mail.mysite.:4444 231.214.14.37.dyn:41909 ESTABLISHED 19039/sshd: web [pr tcp 0 0 localhost.localdo:mysql localhost.localdo:48252 ESTABLISHED 765/mysqld tcp 0 0 mail.mysite.:http 231.214.14.37.dyn:54686 TIME_WAIT - tcp 0 0 mail.mysite.:4444 231.214.14.37.dyn:42419 ESTABLISHED 19372/sshd: web [pr tcp 0 0 localhost.localdo:48252 localhost.localdo:mysql ESTABLISHED 
19884/auth tcp 0 0 mail.mysite.:http 231.214.14.37.dyn:54685 TIME_WAIT - tcp6 0 0 [::]:pop3 [::]:* LISTEN 744/dovecot tcp6 0 0 [::]:imap2 [::]:* LISTEN 744/dovecot tcp6 0 0 [::]:smtp [::]:* LISTEN 30365/master tcp6 0 0 [::]:4444 [::]:* LISTEN 634/sshd tcp6 0 0 [::]:imaps [::]:* LISTEN 744/dovecot tcp6 0 0 [::]:pop3s [::]:* LISTEN 744/dovecot Anyone knows what I am doing wrong? Perhaps I should take some additional steps to make apache listen 0n 443 but that it stops listening on 80 altogether I can't understand.
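
    Before changing anything else it may help to let Apache report what it thinks it should be binding and serving; on a stock Ubuntu layout the checks below usually narrow it down (sketch, paths are the Debian/Ubuntu defaults):

        # list every parsed vhost and the ports Apache will listen on
        apache2ctl -S

        # ports.conf should contain "Listen 80" plus an <IfModule mod_ssl.c> block with "Listen 443"
        cat /etc/apache2/ports.conf

        # syntax-check the config (a failed parse can leave Apache not listening at all)
        apache2ctl configtest

        # confirm which sockets the running apache2 workers actually hold
        sudo netstat -tlnp | grep apache2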

    Read the article

  • How to transfer data between two networks efficiently

    - by Tono Nam
    I would like to transfer files between two places over the internet. Right now I have a VPN and I am able to browse, download and transfer files, so my question is not really how to transfer the files; instead, I would like to use the most efficient approach, because the two places constantly share a lot of data. The reason I want to get rid of the VPN is that it is too slow. Having high upload speed is very expensive/impossible in residential places, so I would like to use a different approach.
    I was thinking about using programs such as http://www.dropbox.com . The problem with Dropbox is that the free version comes with only 2 GB of storage. I think the deals they offer are OK and I might be willing to pay to get that increase in speed, but I am concerned with the speed of transferring data: Dropbox will upload the file to their server and then send it from the server to the other location, and I would like it to be even faster.
    Anyway, I was thinking: why not create a program myself? This is the algorithm I was thinking of. Let me know if it sounds too crazy. (Remember, my goal is to transfer files as fast as possible.) Things I will use in this algorithm:
    - Server on the internet called S (has fast download and upload speed; I pay to host a website and some services there and want to take advantage of it)
    - Client A at location 1
    - Client B at location 2
    So let's say at location 1, 20 large files are created and need to be transferred to location 2. Client A compresses the files with the highest compression ratio possible. Client A starts sending data via UDP to client B; because I am using UDP I will include a sequence number on each packet. Server S helps speed things up: for example, every time a packet is lost we can use server S to inform client A that it needs to resend that packet. I think this approach will increase the transfer rate.
    I do not know if it is possible to start sending data while it is still being compressed, or whether it is possible to start decompressing data before the whole file has been received. Maybe it would be faster to start sending the files right away without compressing. If I knew I would always be sending large text files then I would obviously use compression, but I need this as a general algorithm.
    So I guess my question is: could I increase performance by using UDP instead of TCP and by using an extra server to keep track of lost packets? And how should I compress files before sending? Compressing a 1 GB file with the highest compression ratio takes about 1 hour! I would like to take advantage of that time by sending the file as it is being compressed.
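
    On the "send while compressing" question specifically: it works as long as the compressor runs in streaming mode, so the hour spent compressing a 1 GB file overlaps with the transfer instead of preceding it. A minimal sketch in Python using zlib over plain TCP (host, port and file name are placeholders; the UDP-plus-tracking-server idea would sit on top of the same loop, and the receiver would feed each chunk into zlib.decompressobj() the same way):

        import socket
        import zlib

        CHUNK = 64 * 1024  # read/compress/send in 64 KiB pieces

        def send_file_streaming(path, host="203.0.113.10", port=9000):
            # Compress incrementally and push each compressed chunk onto the
            # wire as soon as it is ready, instead of compressing the whole
            # file first and only then starting the transfer.
            comp = zlib.compressobj(6)
            with socket.create_connection((host, port)) as sock, open(path, "rb") as f:
                while True:
                    chunk = f.read(CHUNK)
                    if not chunk:
                        break
                    data = comp.compress(chunk)
                    if data:                      # the compressor may buffer small inputs
                        sock.sendall(data)
                sock.sendall(comp.flush())        # emit whatever is still buffered

        if __name__ == "__main__":
            send_file_streaming("big-file.bin")   # placeholder file name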

    Read the article

  • Unable to connect to OpenVPN server

    - by Incognito
    I'm trying to get a working setup of OpenVPN on my VM and authenticate into it from a client. I'm not sure but it looks to me like it's socket related, as it's not set to LISTEN, and localhost seems wrong. I've never set up VPN before. # netstat -tulpn | grep vpn Proto Recv-Q Send-Q Local Address Foreign Address State PID/Program name udp 0 0 127.0.0.1:1194 0.0.0.0:* 24059/openvpn I don't think this is set up correctly. Here's some detail into what I've done. I have a VPS from MediaTemple: These are my interfaces before starting openvpn: lo Link encap:Local Loopback inet addr:127.0.0.1 Mask:255.0.0.0 inet6 addr: ::1/128 Scope:Host UP LOOPBACK RUNNING MTU:16436 Metric:1 RX packets:39482 errors:0 dropped:0 overruns:0 frame:0 TX packets:39482 errors:0 dropped:0 overruns:0 carrier:0 collisions:0 txqueuelen:0 RX bytes:3237452 (3.2 MB) TX bytes:3237452 (3.2 MB) venet0 Link encap:UNSPEC HWaddr 00-00-00-00-00-00-00-00-00-00-00-00-00-00-00-00 inet addr:127.0.0.1 P-t-P:127.0.0.1 Bcast:0.0.0.0 Mask:255.255.255.255 UP BROADCAST POINTOPOINT RUNNING NOARP MTU:1500 Metric:1 RX packets:4885284 errors:0 dropped:0 overruns:0 frame:0 TX packets:4679884 errors:0 dropped:0 overruns:0 carrier:0 collisions:0 txqueuelen:0 RX bytes:835278537 (835.2 MB) TX bytes:1989289617 (1.9 GB) venet0:0 Link encap:UNSPEC HWaddr 00-00-00-00-00-00-00-00-00-00-00-00-00-00-00-00 inet addr:205.[redacted] P-t-P:205.186.148.82 Bcast:0.0.0.0 Mask:255.255.255.255 UP BROADCAST POINTOPOINT RUNNING NOARP MTU:1500 Metric:1 I've followed this guide on setting up a basic server and getting a .p12 file, however, I was receiving an error that stated /dev/net/tun was missing, so I created it mkdir -p /dev/net mknod /dev/net/tun c 10 200 chmod 600 /dev/net/tun This resolved the error preventing the service from launching, however, I am unable to connect. On the server I've set up the myserver.conf file (as per the tutorial) to indicate local 127.0.0.1 (I've also attempted with the public IP address, perhaps I don't understand what they mean by local IP?). The server launches without error, this is what the log looks like when it starts: Sun Apr 1 17:21:27 2012 OpenVPN 2.1.3 x86_64-pc-linux-gnu [SSL] [LZO2] [EPOLL] [PKCS11] [MH] [PF_INET6] [eurephia] built on Mar 11 2011 Sun Apr 1 17:21:27 2012 IMPORTANT: OpenVPN's default port number is now 1194, based on an official port number assignment by IANA. OpenVPN 2.0-beta16 and earlier used 5000 as the default port. Sun Apr 1 17:21:27 2012 NOTE: the current --script-security setting may allow this configuration to call user-defined scripts Sun Apr 1 17:21:27 2012 /usr/bin/openssl-vulnkey -q -b 1024 -m <modulus omitted> Sun Apr 1 17:21:27 2012 TUN/TAP device tun0 opened Sun Apr 1 17:21:27 2012 /sbin/ifconfig tun0 10.8.0.1 pointopoint 10.8.0.2 mtu 1500 Sun Apr 1 17:21:27 2012 GID set to openvpn Sun Apr 1 17:21:27 2012 UID set to openvpn Sun Apr 1 17:21:27 2012 UDPv4 link local (bound): [AF_INET]127.0.0.1:1194 Sun Apr 1 17:21:27 2012 UDPv4 link remote: [undef] Sun Apr 1 17:21:27 2012 Initialization Sequence Completed This creates a tun0 interface that looks like this: tun0 Link encap:UNSPEC HWaddr 00-00-00-00-00-00-00-00-00-00-00-00-00-00-00-00 inet addr:10.8.0.1 P-t-P:10.8.0.2 Mask:255.255.255.255 UP POINTOPOINT RUNNING NOARP MULTICAST MTU:1500 Metric:1 RX packets:0 errors:0 dropped:0 overruns:0 frame:0 TX packets:0 errors:0 dropped:0 overruns:0 carrier:0 collisions:0 txqueuelen:100 RX bytes:0 (0.0 B) TX bytes:0 (0.0 B) And the netstat command still indicates the state is not set to LISTEN. 
On the client-side I've installed the p12 certs onto two devices (one is an android tablet, the other is an Ubuntu desktop). I don't see port 1194 as open either. Both clients install the cert files and then ask me for the L2TP secret (which was set on the file), but then they oddly ask me for a username and a password, which I don't know where I could possibly get those from. I attempted all of my logins, and some whacky guesses that were frantically pulling at straws. If there's any more information I could provide let me know.
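
    The netstat line already shows the core of it: the daemon is bound to 127.0.0.1:1194, so nothing outside the VM can reach it. That binding comes from the 'local' directive in the server config; a sketch of the relevant lines (the address is a placeholder, use the real venet0:0 address or drop 'local' entirely):

        # /etc/openvpn/myserver.conf -- only the lines relevant to the bind address
        local 203.0.113.82      # placeholder: the public address on venet0:0
        # ...or delete the "local" line so OpenVPN binds to 0.0.0.0
        port 1194
        proto udp
        dev tun

    Separately, the clients asking for an "L2TP secret" and then a username/password suggests they were set up as L2TP/IPsec connections rather than OpenVPN ones, which would fail no matter how the server is bound.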

    Read the article

  • What do these messages in the Qmail maillog indicate?

    - by Griffo
    There seems to be an endless supply of messages in the Qmail maillog for a single address. Can anyone shed some light on why this might be and whether it is a problem? To me it looks like either spam or some sort of unhandled problem. It strikes me as unusual that the 'from=' field is blank. This is on a VPS using Plesk in case that's important. Jun 30 15:10:17 vps-1001108-595 qmail-remote-handlers[23593]: [email protected] Jun 30 15:10:17 vps-1001108-595 qmail-remote-handlers[23586]: from= Jun 30 15:10:17 vps-1001108-595 qmail-remote-handlers[23586]: [email protected] Jun 30 15:10:17 vps-1001108-595 qmail-remote-handlers[23585]: from= Jun 30 15:10:17 vps-1001108-595 qmail-remote-handlers[23585]: [email protected] Jun 30 15:10:17 vps-1001108-595 qmail-remote-handlers[23584]: from= Jun 30 15:10:17 vps-1001108-595 qmail-remote-handlers[23584]: [email protected] Jun 30 15:10:17 vps-1001108-595 qmail-remote-handlers[23583]: from= Jun 30 15:10:17 vps-1001108-595 qmail-remote-handlers[23583]: [email protected] Jun 30 15:10:17 vps-1001108-595 qmail-remote-handlers[23600]: from= Jun 30 15:10:17 vps-1001108-595 qmail-remote-handlers[23600]: [email protected] Jun 30 15:10:17 vps-1001108-595 qmail-remote-handlers[23599]: from= Jun 30 15:10:17 vps-1001108-595 qmail-remote-handlers[23599]: [email protected] EDIT Here's a sample of one of the emails: Received: (qmail 5603 invoked for bounce); 29 Jun 2011 07:46:31 +0100 Date: 29 Jun 2011 07:46:31 +0100 From: [email protected] To: [email protected] Subject: failure notice Hi. This is the qmail-send program at vps-1001108-595.cp.blacknight.com. I'm afraid I wasn't able to deliver your message to the following addresses. This is a permanent error; I've given up. Sorry it didn't work out. <[email protected]>: 200.147.36.13 does not like recipient. Remote host said: 450 4.7.1 Client host rejected: cannot find your hostname, [78.153.208.195] Giving up on 200.147.36.13. I'm not going to try again; this message has been in the queue too long. --- Below this line is a copy of the message. Return-Path: <[email protected]> Received: (qmail 15585 invoked by uid 48); 22 Jun 2011 07:38:26 +0100 Date: 22 Jun 2011 07:38:26 +0100 Message-ID: <[email protected]> To: [email protected] Subject: Cadastre-se e Concorra ? um Carro! MIME-Version: 1.0 Content-type: text/html; charset=iso-8859-1 From: Cielo Fidelidade <[email protected]> <!DOCTYPE HTML> <html> ... <body text removed> <html> If I understand this correctly, this is saying that an email sent by my server, from address [email protected], could not be delivered. However, [email protected] is not a valid email address on my server, so how can email be sent from this address on my server? I have tested whether my server is acting as an open relay, and it isn't. So how else could this be happening? I am getting thousands of these every day. What can I do to prevent it?
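
    The "invoked by uid 48" line in the bounced copy is the useful clue: the spam is being injected locally by whatever runs as uid 48 (commonly the web server user), not relayed from outside, and the blank from= lines plus bounces to invalid senders are the resulting backscatter. Two hedged checks, the PHP settings assuming PHP 5.3 or later:

        # which account is uid 48 on this box?
        getent passwd 48

        ; php.ini -- tag and log every mail() call so the offending script can be found
        mail.add_x_header = On
        mail.log = /var/log/phpmail.log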

    Read the article

  • ssh authentication nfs

    - by user40135
    Hi all I would like to do ssh from machine "ub0" to another machine "ub1" without using passwords. I setup using nfs on "ub0" but still I am asked to insert a password. Here is my scenario: * machine ub0 and ub1 have the same user "mpiu", with same pwd, same userid, and same group id * the 2 servers are sharing a folder that is the HOME directory for "mpiu" * I did a chmod 700 on the .ssh * I created a key using ssh-keygene -t dsa * I did "cat id_dsa.pub authorized_keys". On this last file I tried also chmod 600 and chmod 640 * off course I can guarantee that on machine ub1 the user "shared_user" can see the same fodler that wes mounted with no problem. Below the content of my .ssh folder Code: authorized_keys id_dsa id_dsa.pub known_hosts After all of this calling wathever function "ssh ub1 hostname" I am requested my password. Do you know what I can try? I also UNcommented in the ssh_config file for both machines this line IdentityFile ~/.ssh/id_dsa I also tried ssh -i $HOME/.ssh/id_dsa mpiu@ub1 Below the ssh -vv Code: OpenSSH_5.1p1 Debian-3ubuntu1, OpenSSL 0.9.8g 19 Oct 2007 OpenSSH_5.1p1 Debian-3ubuntu1, OpenSSL 0.9.8g 19 Oct 2007 debug1: Reading configuration data /etc/ssh/ssh_config debug1: Applying options for * debug2: ssh_connect: needpriv 0 debug1: Connecting to ub1 [192.168.2.9] port 22. debug1: Connection established. debug2: key_type_from_name: unknown key type '-----BEGIN' debug2: key_type_from_name: unknown key type '-----END' debug1: identity file /mirror/mpiu/.ssh/id_dsa type 2 debug1: Checking blacklist file /usr/share/ssh/blacklist.DSA-1024 debug1: Checking blacklist file /etc/ssh/blacklist.DSA-1024 debug1: Remote protocol version 2.0, remote software version lshd-2.0.4 lsh - a GNU ssh debug1: no match: lshd-2.0.4 lsh - a GNU ssh debug1: Enabling compatibility mode for protocol 2.0 debug1: Local version string SSH-2.0-OpenSSH_5.1p1 Debian-3ubuntu1 debug2: fd 3 setting O_NONBLOCK debug1: SSH2_MSG_KEXINIT sent debug1: SSH2_MSG_KEXINIT received debug2: kex_parse_kexinit: diffie-hellman-group-exchange-sha256,diffie-hellman-group-exchange-sha1,diffie-hellman-group14-sha1,diffie-hellman-group1-sha1 debug2: kex_parse_kexinit: ssh-rsa,ssh-dss debug2: kex_parse_kexinit: aes128-cbc,3des-cbc,blowfish-cbc,cast128-cbc,arcfour128,arcfour256,arcfour,aes192-cbc,aes256-cbc,[email protected],aes128-ctr,aes192-ctr,aes256-ctr debug2: kex_parse_kexinit: aes128-cbc,3des-cbc,blowfish-cbc,cast128-cbc,arcfour128,arcfour256,arcfour,aes192-cbc,aes256-cbc,[email protected],aes128-ctr,aes192-ctr,aes256-ctr debug2: kex_parse_kexinit: hmac-md5,hmac-sha1,[email protected],hmac-ripemd160,[email protected],hmac-sha1-96,hmac-md5-96 debug2: kex_parse_kexinit: hmac-md5,hmac-sha1,[email protected],hmac-ripemd160,[email protected],hmac-sha1-96,hmac-md5-96 debug2: kex_parse_kexinit: none,[email protected],zlib debug2: kex_parse_kexinit: none,[email protected],zlib debug2: kex_parse_kexinit: debug2: kex_parse_kexinit: debug2: kex_parse_kexinit: first_kex_follows 0 debug2: kex_parse_kexinit: reserved 0 debug2: kex_parse_kexinit: diffie-hellman-group14-sha1,diffie-hellman-group1-sha1 debug2: kex_parse_kexinit: ssh-rsa,spki-sign-rsa debug2: kex_parse_kexinit: aes256-cbc,3des-cbc,blowfish-cbc,arcfour debug2: kex_parse_kexinit: aes256-cbc,3des-cbc,blowfish-cbc,arcfour debug2: kex_parse_kexinit: hmac-sha1,hmac-md5 debug2: kex_parse_kexinit: hmac-sha1,hmac-md5 debug2: kex_parse_kexinit: none,zlib debug2: kex_parse_kexinit: none,zlib debug2: kex_parse_kexinit: debug2: kex_parse_kexinit: debug2: kex_parse_kexinit: 
first_kex_follows 0 debug2: kex_parse_kexinit: reserved 0 debug2: mac_setup: found hmac-md5 debug1: kex: server-client 3des-cbc hmac-md5 none debug2: mac_setup: found hmac-md5 debug1: kex: client-server 3des-cbc hmac-md5 none debug2: dh_gen_key: priv key bits set: 183/384 debug2: bits set: 1028/2048 debug1: sending SSH2_MSG_KEXDH_INIT debug1: expecting SSH2_MSG_KEXDH_REPLY debug1: Host 'ub1' is known and matches the RSA host key. debug1: Found key in /mirror/mpiu/.ssh/known_hosts:1 debug2: bits set: 1039/2048 debug1: ssh_rsa_verify: signature correct debug2: kex_derive_keys debug2: set_newkeys: mode 1 debug1: SSH2_MSG_NEWKEYS sent debug1: expecting SSH2_MSG_NEWKEYS debug2: set_newkeys: mode 0 debug1: SSH2_MSG_NEWKEYS received debug1: SSH2_MSG_SERVICE_REQUEST sent debug2: service_accept: ssh-userauth debug1: SSH2_MSG_SERVICE_ACCEPT received debug2: key: /mirror/mpiu/.ssh/id_dsa (0xb874b098) debug1: Authentications that can continue: password,publickey debug1: Next authentication method: publickey debug1: Offering public key: /mirror/mpiu/.ssh/id_dsa debug2: we sent a publickey packet, wait for reply debug1: Authentications that can continue: password,publickey debug2: we did not send a packet, disable method debug1: Next authentication method: password mpiu@ub1's password: I hangs here!
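
    One detail above stands out: "cat id_dsa.pub authorized_keys" only prints both files, it does not append the key. A minimal sequence covering that plus the permissions sshd insists on for an NFS-mounted home (sketch, run as mpiu on ub0):

        cat ~/.ssh/id_dsa.pub >> ~/.ssh/authorized_keys    # note the >> append
        chmod 755 ~                                        # home must not be group/world writable
        chmod 700 ~/.ssh
        chmod 600 ~/.ssh/authorized_keys ~/.ssh/id_dsa
        ssh -i ~/.ssh/id_dsa mpiu@ub1 hostname

    Also note the debug output says the remote side is lshd (GNU lsh) rather than OpenSSH; if that is really what runs on ub1, it keeps authorized keys in its own format under ~/.lsh, so an OpenSSH-style authorized_keys file may never be consulted there at all.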

    Read the article

  • Courier Maildrop error user unknown. Command output: Invalid user specified

    - by cad
    Hello I have a problem with maildrop. I have read dozens of webs/howto/emails but couldnt solve it. My objective is moving automatically spam messages to a spam folder. My email server is working perfectly. It marks spam in subject and headers using spamassasin. My box has: Ubuntu 9.04 Web: Apache2 + Php5 + MySQL MTA: Postfix 2.5.5 + SpamAssasin + virtual users using mysql IMAP: Courier 0.61.2 + Courier AuthLib WebMail: SquirrelMail I have read that I could use Squirrelmail directly (not a good idea), procmail or maildrop. As I already have maildrop in the box (from courier) I have configured the server to use maildrop (added an entry in transport table for a virtual domain). I found this error in email: This is the mail system at host foo.net I'm sorry to have to inform you that your message could not be delivered to one or more recipients. It's attached below. For further assistance, please send mail to postmaster. If you do so, please include this problem report. You can delete your own text from the attached returned message. The mail system <[email protected]>: user unknown. Command output: Invalid user specified. Final-Recipient: rfc822; [email protected] Action: failed Status: 5.1.1 Diagnostic-Code: x-unix; Invalid user specified. ---------- Forwarded message ---------- From: test <[email protected]> To: [email protected] Date: Sat, 1 May 2010 19:49:57 +0100 Subject: fail fail An this in the logs May 1 18:50:18 foo.net postfix/smtpd[14638]: connect from mail-bw0-f212.google.com[209.85.218.212] May 1 18:50:19 foo.net postfix/smtpd[14638]: 8A9E9DC23F: client=mail-bw0-f212.google.com[209.85.218.212] May 1 18:50:19 foo.net postfix/cleanup[14643]: 8A9E9DC23F: message-id=<[email protected]> May 1 18:50:19 foo.net postfix/qmgr[14628]: 8A9E9DC23F: from=<[email protected]>, size=1858, nrcpt=1 (queue active) May 1 18:50:23 foo.net postfix/pickup[14627]: 1D4B4DC2AA: uid=5002 from=<[email protected]> May 1 18:50:23 foo.net postfix/cleanup[14643]: 1D4B4DC2AA: message-id=<[email protected]> May 1 18:50:23 foo.net postfix/pipe[14644]: 8A9E9DC23F: to=<[email protected]>, relay=spamassassin, delay=3.8, delays=0.55/0.02/0/3.2, dsn=2.0.0, status=sent (delivered via spamassassin service) May 1 18:50:23 foo.net postfix/qmgr[14628]: 8A9E9DC23F: removed May 1 18:50:23 foo.net postfix/qmgr[14628]: 1D4B4DC2AA: from=<[email protected]>, size=2173, nrcpt=1 (queue active) **May 1 18:50:23 foo.netpostfix/pipe[14648]: 1D4B4DC2AA: to=<[email protected]>, relay=maildrop, delay=0.22, delays=0.06/0.01/0/0.15, dsn=5.1.1, status=bounced (user unknown. Command output: Invalid user specified. 
)** May 1 18:50:23 foo.net postfix/cleanup[14643]: 4C2BFDC240: message-id=<[email protected]> May 1 18:50:23 foo.net postfix/qmgr[14628]: 4C2BFDC240: from=<>, size=3822, nrcpt=1 (queue active) May 1 18:50:23 foo.net postfix/bounce[14651]: 1D4B4DC2AA: sender non-delivery notification: 4C2BFDC240 May 1 18:50:23 foo.net postfix/qmgr[14628]: 1D4B4DC2AA: removed May 1 18:50:24 foo.net postfix/smtp[14653]: 4C2BFDC240: to=<[email protected]>, relay=gmail-smtp-in.l.google.com[209.85.211.97]:25, delay=0.91, delays=0.02/0.03/0.12/0.74, dsn=2.0.0, status=sent (250 2.0.0 OK 1272739824 37si5422420ywh.59) May 1 18:50:24 foo.net postfix/qmgr[14628]: 4C2BFDC240: removed My config files: http://lar3d.net/main.cf (/etc/postfix) http://lar3d.net/master.c (/etc/postfix) http://lar3d.net/local.cf (/etc/spamassasin) http://lar3d.net/maildroprc (maildroprc) If I change master.cf line (as suggested here) maildrop unix - n n - - pipe flags=DRhu user=vmail argv=/usr/lib/courier/bin/maildrop -d ${recipient} with maildrop unix - n n - - pipe flags=DRhu user=vmail argv=/usr/lib/courier/bin/maildrop -d vmail ${recipient} I get the email in /home/vmail/MailDir instead of the correct dir (/home/vmail/foo.net/info/.SPAM ) After reading a lot I have some guess but not sure. - Maybe I have to install userdb? - Maybe is something related with mysql, but everything is working ok - If I try with procmail I will face same problem... - What are flags DRhu for? Couldnt find doc about them - In some places I found maildrop line with more parameters flags=DRhu user=vmail argv=/usr/lib/courier/bin/maildrop -d $ ${recipient} ${extension} ${recipient} ${user} ${nexthop} ${sender} I am really lost. Dont know how to continue. If you have any idea or need another config file please let me know. Thanks!!!

    Read the article

  • Nagios notifications definitions

    - by Colin
    I am trying to monitor a web server in such a way that I want to search for a particular string on a page via http. The command is defined in command.cfg as follows # 'check_http-mysite command definition' define command { command_name check_http-mysite command_line /usr/lib/nagios/plugins/check_http -H mysite.example.com -s "Some text" } # 'notify-host-by-sms' command definition define command { command_name notify-host-by-sms command_line /usr/bin/send_sms $CONTACTPAGER$ "Nagios - $NOTIFICATIONTYPE$ :Host$HOSTALIAS$ is $HOSTSTATE$ ($OUTPUT$)" } # 'notify-service-by-sms' command definition define command { command_name notify-service-by-sms command_line /usr/bin/send_sms $CONTACTPAGER$ "Nagios - $NOTIFICATIONTYPE$: $HOSTALIAS$/$SERVICEDESC$ is $SERVICESTATE$ ($OUTPUT$)" } Now if nagios doesn't find "Some text" on the home page mysite.example.com, nagios should notify a contact via sms through the Clickatell http API which I have a script for that that I have tested and found that it works fine. Whenever I change the command definition to search for a string which is not on the page, and restart nagios, I can see on the web interface that the string was not found. What I don't understand is why isn't the notification sent though I have defined the host, hostgroup, contact, contactgroup and service and so forth. What I'm I missing, these are my definitions, In my web access through the cgi I can see that I have notifications have been defined and enabled though I don't get both email and sms notifications during hard status changes. host.cfg define host { use generic-host host_name HAL alias IBM-1 address xxx.xxx.xxx.xxx check_command check_http-mysite } *hostgroups_nagios2.cfg* # my website define hostgroup{ hostgroup_name my-servers alias All My Servers members HAL } *contacts_nagios2.cfg* define contact { contact_name colin alias Colin Y service_notification_period 24x7 host_notification_period 24x7 service_notification_options w,u,c,r,f,s host_notification_options d,u,r,f,s service_notification_commands notify-service-by-email,notify-service-by-sms host_notification_commands notify-host-by-email,notify-host-by-sms email [email protected] pager +254xxxxxxxxx } define contactgroup{ contactgroup_name site_admin alias Site Administrator members colin } *services_nagios2.cfg* # check for particular string in page via http define service { hostgroup_name my-servers service_description STRING CHECK check_command check_http-mysite use generic-service notification_interval 0 ; set > 0 if you want to be renotified contacts colin contact_groups site_admin } Could someone please tell me where I'm going wrong. 
Here are the generic-host and generic-service definitions *generic-service_nagios2.cfg* # generic service template definition define service{ name generic-service ; The 'name' of this service template active_checks_enabled 1 ; Active service checks are enabled passive_checks_enabled 1 ; Passive service checks are enabled/accepted parallelize_check 1 ; Active service checks should be parallelized (disabling this can lead to major performance problems) obsess_over_service 1 ; We should obsess over this service (if necessary) check_freshness 0 ; Default is to NOT check service 'freshness' notifications_enabled 1 ; Service notifications are enabled event_handler_enabled 1 ; Service event handler is enabled flap_detection_enabled 1 ; Flap detection is enabled failure_prediction_enabled 1 ; Failure prediction is enabled process_perf_data 1 ; Process performance data retain_status_information 1 ; Retain status information across program restarts retain_nonstatus_information 1 ; Retain non-status information across program restarts notification_interval 0 ; Only send notifications on status change by default. is_volatile 0 check_period 24x7 normal_check_interval 5 retry_check_interval 1 max_check_attempts 4 notification_period 24x7 notification_options w,u,c,r contact_groups site_admin register 0 ; DONT REGISTER THIS DEFINITION - ITS NOT A REAL SERVICE, JUST A TEMPLATE! } *generic-host_nagios2.cfg* define host{ name generic-host ; The name of this host template notifications_enabled 1 ; Host notifications are enabled event_handler_enabled 1 ; Host event handler is enabled flap_detection_enabled 1 ; Flap detection is enabled failure_prediction_enabled 1 ; Failure prediction is enabled process_perf_data 1 ; Process performance data retain_status_information 1 ; Retain status information across program restarts retain_nonstatus_information 1 ; Retain non-status information across program restarts max_check_attempts 10 notification_interval 0 notification_period 24x7 notification_options d,u,r contact_groups site_admin register 1 ; DONT REGISTER THIS DEFINITION - ITS NOT A REAL HOST, JUST A TEMPLATE! }
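
    Before going deeper into the notification chain it is worth proving two things by hand: that the check ever leaves the OK state, and that the nagios user can actually run the SMS command. A couple of manual tests using the paths from the definitions above:

        # run the plugin exactly as defined; exit code 0=OK, 1=WARNING, 2=CRITICAL
        /usr/lib/nagios/plugins/check_http -H mysite.example.com -s "Some text"
        echo $?

        # confirm the nagios user can send an SMS at all
        sudo -u nagios /usr/bin/send_sms +254xxxxxxxxx "nagios notification test"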

    Read the article

  • smartctl -t long isn't finishing

    - by xenoterracide
    I been running smartctl -t long on a drive for about 2 days now and it seems to be stalled at 10%. short and conveyance both passed. I have to send 1 of 2 drives purchased back I found badblocks with badblocks (none on this drive and I'ts made over a pass already). I'm just wondering if I should be concerned about this. smartctl 5.39.1 2010-01-28 r3054 [x86_64-unknown-linux-gnu] (local build) Copyright (C) 2002-10 by Bruce Allen, http://smartmontools.sourceforge.net === START OF INFORMATION SECTION === Device Model: WDC WD10EARS-00Y5B1 Serial Number: WD-WMAV51582123 Firmware Version: 80.00A80 User Capacity: 1,000,204,886,016 bytes Device is: Not in smartctl database [for details use: -P showall] ATA Version is: 8 ATA Standard is: Exact ATA specification draft version not indicated Local Time is: Mon May 10 22:19:52 2010 EDT SMART support is: Available - device has SMART capability. SMART support is: Enabled === START OF READ SMART DATA SECTION === SMART overall-health self-assessment test result: PASSED General SMART Values: Offline data collection status: (0x82) Offline data collection activity was completed without error. Auto Offline Data Collection: Enabled. Self-test execution status: ( 241) Self-test routine in progress... 10% of test remaining. Total time to complete Offline data collection: (20100) seconds. Offline data collection capabilities: (0x7b) SMART execute Offline immediate. Auto Offline data collection on/off support. Suspend Offline collection upon new command. Offline surface scan supported. Self-test supported. Conveyance Self-test supported. Selective Self-test supported. SMART capabilities: (0x0003) Saves SMART data before entering power-saving mode. Supports SMART auto save timer. Error logging capability: (0x01) Error logging supported. General Purpose Logging supported. Short self-test routine recommended polling time: ( 2) minutes. Extended self-test routine recommended polling time: ( 231) minutes. Conveyance self-test routine recommended polling time: ( 5) minutes. SCT capabilities: (0x3031) SCT Status supported. SCT Feature Control supported. SCT Data Table supported. 
SMART Attributes Data Structure revision number: 16 Vendor Specific SMART Attributes with Thresholds: ID# ATTRIBUTE_NAME FLAG VALUE WORST THRESH TYPE UPDATED WHEN_FAILED RAW_VALUE 1 Raw_Read_Error_Rate 0x002f 200 200 051 Pre-fail Always - 2 3 Spin_Up_Time 0x0027 131 131 021 Pre-fail Always - 6408 4 Start_Stop_Count 0x0032 100 100 000 Old_age Always - 12 5 Reallocated_Sector_Ct 0x0033 200 200 140 Pre-fail Always - 0 7 Seek_Error_Rate 0x002e 200 200 000 Old_age Always - 0 9 Power_On_Hours 0x0032 100 100 000 Old_age Always - 148 10 Spin_Retry_Count 0x0032 100 253 000 Old_age Always - 0 11 Calibration_Retry_Count 0x0032 100 253 000 Old_age Always - 0 12 Power_Cycle_Count 0x0032 100 100 000 Old_age Always - 10 192 Power-Off_Retract_Count 0x0032 200 200 000 Old_age Always - 7 193 Load_Cycle_Count 0x0032 200 200 000 Old_age Always - 174 194 Temperature_Celsius 0x0022 106 102 000 Old_age Always - 41 196 Reallocated_Event_Count 0x0032 200 200 000 Old_age Always - 0 197 Current_Pending_Sector 0x0032 200 200 000 Old_age Always - 0 198 Offline_Uncorrectable 0x0030 200 200 000 Old_age Offline - 0 199 UDMA_CRC_Error_Count 0x0032 200 200 000 Old_age Always - 0 200 Multi_Zone_Error_Rate 0x0008 200 200 000 Old_age Offline - 0 SMART Error Log Version: 1 No Errors Logged SMART Self-test log structure revision number 1 Num Test_Description Status Remaining LifeTime(hours) LBA_of_first_error # 1 Conveyance offline Completed without error 00% 99 - # 2 Extended offline Interrupted (host reset) 10% 30 - # 3 Short offline Completed without error 00% 0 - SMART Selective self-test log data structure revision number 1 SPAN MIN_LBA MAX_LBA CURRENT_TEST_STATUS 1 0 0 Not_testing 2 0 0 Not_testing 3 0 0 Not_testing 4 0 0 Not_testing 5 0 0 Not_testing Selective self-test flags (0x0): After scanning selected spans, do NOT read-scan remainder of disk. If Selective self-test is pending on power-up, resume after 0 minute delay.
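
    The self-test log above already records an extended test "Interrupted (host reset)" at 10%, so the status line may simply be showing that stale, aborted run rather than live progress; the drive's own recommended polling time for the extended test is 231 minutes, so two days is far beyond it either way. The stuck state can be cleared and re-queried with standard smartctl calls (replace /dev/sdX with the actual device):

        smartctl -c /dev/sdX             # self-test execution status (the "10% remaining" field)
        smartctl -l selftest /dev/sdX    # history of completed/aborted self-tests
        smartctl -X /dev/sdX             # abort the wedged self-test
        smartctl -t long /dev/sdX        # start a fresh extended test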

    Read the article

  • Postfix certificate verification failed for smtp.gmail.com

    - by Andi Unpam
    I have problem, my email server using postfix with gmail smtp, i use account google apps, but always ask for SASL authentication failed, I sent an email using php script, after I see the error logs in the wrong password, after I open the URL from the browser and no verification postfixnya captcha and could return, but after 2-3 days later happen like that again. This my config postfix #myorigin = /etc/mailname smtpd_banner = Hostingbitnet Mail Server biff = no append_dot_mydomain = no readme_directory = no myhostname = webmaster.hostingbitnet.com alias_maps = hash:/etc/aliases alias_database = hash:/etc/aliases myorigin = /etc/mailname mydestination = localhost, webmaster.hostingbitnet.com, localhost.localdomain, 103.9.126.163 relayhost = [smtp.googlemail.com]:587 relay_transport = relay relay_destination_concurrency_limit = 1 mynetworks = 127.0.0.0/8, 192.168.0.0/16, 172.16.0.0/16, 10.0.0.0/8, 103.9.126.0/24 mailbox_size_limit = 0 recipient_delimiter = + inet_interfaces = all default_transport = smtp relayhost = [smtp.gmail.com]:587 smtp_sasl_auth_enable = yes smtp_sasl_password_maps = hash:/etc/postfix/google-apps smtp_sasl_security_options = noanonymous smtp_use_tls = yes smtp_sender_dependent_authentication = yes tls_random_source = dev:/dev/urandom default_destination_concurrency_limit = 1 smtp_tls_CAfile = /etc/postfix/tls/root.crt smtp_tls_cert_file = /etc/postfix/tls/cert.pem smtp_tls_key_file = /etc/postfix/tls/privatekey.pem smtp_tls_session_cache_database = btree:$data_directory/smtp_tls_session_cache smtp_tls_security_level = may smtp_tls_loglevel = 1 smtpd_tls_CAfile = /etc/postfix/tls/root.crt smtpd_tls_cert_file = /etc/postfix/tls/cert.pem smtpd_tls_key_file = /etc/postfix/tls/privatekey.pem smtpd_tls_session_cache_database = btree:$data_directory/smtpd_tls_session_cache smtpd_tls_security_level = may smtpd_tls_loglevel = 1 #secure smtpd_recipient_restrictions = permit_mynetworks,permit_sasl_authenticated,check_client_access hash:/var/lib/pop-before-smtp/hosts,reject_unauth_destination Log from mail.log Oct 30 14:51:13 webmaster postfix/smtp[9506]: Untrusted TLS connection established to smtp.gmail.com[74.125.25.109]:587: TLSv1 with cipher RC4-SHA (128/128 bits) Oct 30 14:51:15 webmaster postfix/smtp[9506]: 87E2739400B1: SASL authentication failed; server smtp.gmail.com[74.125.25.109] said: 535-5.7.1 Please log in with your web browser and then try again. Learn more at?535 5.7.1 https://support.google.com/mail/bin/answer.py?answer=78754 ix9sm156630pbc.7 Oct 30 14:51:15 webmaster postfix/smtp[9506]: setting up TLS connection to smtp.gmail.com[74.125.25.108]:587 Oct 30 14:51:15 webmaster postfix/smtp[9506]: certificate verification failed for smtp.gmail.com[74.125.25.108]:587: untrusted issuer /C=US/O=Equifax/OU=Equifax Secure Certificate Authority Oct 30 14:51:16 webmaster postfix/smtp[9506]: Untrusted TLS connection established to smtp.gmail.com[74.125.25.108]:587: TLSv1 with cipher RC4-SHA (128/128 bits) Oct 30 14:51:17 webmaster postfix/smtp[9506]: 87E2739400B1: to=<[email protected]>, relay=smtp.gmail.com[74.125.25.108]:587, delay=972, delays=967/0.03/5.5/0, dsn=4.7.1, status=deferred (SASL authentication failed; server smtp.gmail.com[74.125.25.108] said: 535-5.7.1 Please log in with your web browser and then try again. 
Learn more at?535 5.7.1 https://support.google.com/mail/bin/answer.py?answer=78754 s1sm3850paz.0) Oct 30 14:51:17 webmaster postfix/error[9508]: B3960394009D: to=<[email protected]>, orig_to=<root>, relay=none, delay=29992, delays=29986/5.6/0/0.07, dsn=4.7.1, status=deferred (delivery temporarily suspended: SASL authentication failed; server smtp.gmail.com[74.125.25.108] said: 535-5.7.1 Please log in with your web browser and then try again. Learn more at?535 5.7.1 https://support.google.com/mail/bin/answer.py?answer=78754 s1sm3850paz.0) BTW I made cert follow the link here http://koti.kapsi.fi/ptk/postfix/postfix-tls-cacert.shtml and it worked, but after 2/3 days my email back to problem invalid SASL, and then i'm required to log in use a browser and enter the captcha there but success log in after input captcha, and my email server can send emails from telnet or php script. but it will be back in trouble after 2/3days later. My question is how to make it permanent certificate? Thanks n greeting.
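
    For the "certificate verification failed ... untrusted issuer Equifax" part, pointing Postfix at the full system CA bundle instead of a single hand-built root.crt is usually enough on Ubuntu; a sketch assuming the stock ca-certificates package paths:

        # main.cf -- trust the system CA bundle when talking to smtp.gmail.com
        smtp_tls_CAfile = /etc/ssl/certs/ca-certificates.crt
        # or, equivalently on Debian/Ubuntu:
        # smtp_tls_CApath = /etc/ssl/certs

    Then run postfix reload. The recurring 535 "Please log in with your web browser" failure is a separate issue: Google is flagging the logins as suspicious, and the usual way out is the account's unlock-captcha page or an application-specific password rather than anything in the Postfix TLS settings.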

    Read the article

  • PPTP VPN from Ubuntu cannot connect

    - by Andrea Polci
    I'm trying to configure under Linux (Kubuntu 9.10) a VPN I already use from Windows. I installed the network-manager-pptp package and added the VPN under Network Manager. These are the parameters under "advanced" button: Authentication Methods: PAP, CHAP, MSCHAP, MSCHAP2, EAP (I also tried "MSCHAP, MSCHAP2") Use MPPE Encryption: yes Crypto: Any Use stateful encryption: no Allow BSD compression: yes Allow Deflate compression: yes Allow TCP header compression: yes Send PPP echo packets: no When I try to connnect it doesn't work and this is what I get in the system log: 2010-04-08 13:53:47 pcelena NetworkManager <info> Starting VPN service 'org.freedesktop.NetworkManager.pptp'... 2010-04-08 13:53:47 pcelena NetworkManager <info> VPN service 'org.freedesktop.NetworkManager.pptp' started (org.freedesktop.NetworkManager.pptp), PID 4931 2010-04-08 13:53:47 pcelena NetworkManager <info> VPN service 'org.freedesktop.NetworkManager.pptp' just appeared, activating connections 2010-04-08 13:53:47 pcelena pppd[4932] Plugin /usr/lib/pppd/2.4.5//nm-pptp-pppd-plugin.so loaded. 2010-04-08 13:53:47 pcelena NetworkManager <info> VPN plugin state changed: 3 2010-04-08 13:53:47 pcelena pppd[4932] pppd 2.4.5 started by root, uid 0 2010-04-08 13:53:47 pcelena NetworkManager <info> VPN connection 'MYVPN' (Connect) reply received. 2010-04-08 13:53:47 pcelena NetworkManager SCPlugin-Ifupdown: devices added (path: /sys/devices/virtual/net/ppp0, iface: ppp0) 2010-04-08 13:53:47 pcelena NetworkManager SCPlugin-Ifupdown: device added (path: /sys/devices/virtual/net/ppp0, iface: ppp0): no ifupdown configuration found. 2010-04-08 13:53:47 pcelena pppd[4932] Using interface ppp0 2010-04-08 13:53:47 pcelena pppd[4932] Connect: ppp0 <--> /dev/pts/2 2010-04-08 13:53:47 pcelena pptp[4934] nm-pptp-service-4931 log[main:pptp.c:314]: The synchronous pptp option is NOT activated 2010-04-08 13:53:47 pcelena pptp[4927] nm-pptp-service-4918 log[ctrlp_rep:pptp_ctrl.c:251]: Sent control packet type is 7 'Outgoing-Call-Request' 2010-04-08 13:53:47 pcelena pptp[4927] nm-pptp-service-4918 log[ctrlp_disp:pptp_ctrl.c:858]: Received Outgoing Call Reply. 2010-04-08 13:53:47 pcelena pptp[4927] nm-pptp-service-4918 log[ctrlp_disp:pptp_ctrl.c:897]: Outgoing call established (call ID 1, peer's call ID 14800). 2010-04-08 13:53:48 pcelena pppd[4932] CHAP authentication succeeded 2010-04-08 13:53:48 pcelena pppd[4932] CHAP authentication succeeded 2010-04-08 13:53:48 pcelena pppd[4932] LCP terminated by peer 2010-04-08 13:53:48 pcelena pptp[4927] nm-pptp-service-4918 log[ctrlp_disp:pptp_ctrl.c:929]: Call disconnect notification received (call id 14800) 2010-04-08 13:53:48 pcelena pptp[4927] nm-pptp-service-4918 log[ctrlp_disp:pptp_ctrl.c:788]: Received Stop Control Connection Request. 
2010-04-08 13:53:48 pcelena pptp[4927] nm-pptp-service-4918 log[ctrlp_rep:pptp_ctrl.c:251]: Sent control packet type is 4 'Stop-Control-Connection-Reply' 2010-04-08 13:53:48 pcelena pptp[4927] nm-pptp-service-4918 log[callmgr_main:pptp_callmgr.c:258]: Closing connection (shutdown) 2010-04-08 13:53:48 pcelena pptp[4927] nm-pptp-service-4918 log[ctrlp_rep:pptp_ctrl.c:251]: Sent control packet type is 12 'Call-Clear-Request' 2010-04-08 13:53:48 pcelena pptp[4927] nm-pptp-service-4918 log[callmgr_main:pptp_callmgr.c:258]: Closing connection (shutdown) 2010-04-08 13:53:48 pcelena pptp[4927] nm-pptp-service-4918 log[ctrlp_rep:pptp_ctrl.c:251]: Sent control packet type is 12 'Call-Clear-Request' 2010-04-08 13:53:48 pcelena pptp[4927] nm-pptp-service-4918 log[call_callback:pptp_callmgr.c:79]: Closing connection (call state) 2010-04-08 13:53:48 pcelena pppd[4932] Modem hangup 2010-04-08 13:53:48 pcelena pppd[4932] Connection terminated. 2010-04-08 13:53:48 pcelena NetworkManager <info> VPN plugin failed: 1 2010-04-08 13:53:48 pcelena NetworkManager SCPlugin-Ifupdown: devices removed (path: /sys/devices/virtual/net/ppp0, iface: ppp0) 2010-04-08 13:53:48 pcelena pppd[4932] Exit. 2010-04-08 13:53:48 pcelena NetworkManager <info> VPN plugin failed: 1 2010-04-08 13:53:48 pcelena NetworkManager <info> VPN plugin state changed: 6 2010-04-08 13:53:48 pcelena NetworkManager <info> VPN plugin state change reason: 0 2010-04-08 13:53:48 pcelena NetworkManager <WARN> connection_state_changed(): Could not process the request because no VPN connection was active. 2010-04-08 13:53:48 pcelena NetworkManager <info> Policy set 'Auto eth0' (eth0) as default for routing and DNS. 2010-04-08 13:54:01 pcelena NetworkManager <debug> [1270727641.001390] ensure_killed(): waiting for vpn service pid 4931 to exit 2010-04-08 13:54:01 pcelena NetworkManager <debug> [1270727641.001479] ensure_killed(): vpn service pid 4931 cleaned up The error that sticks out here is "pppd[4932] LCP terminated by peer". Does anyone has suggestion on what can be the problem and how to make it work?
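
    "CHAP authentication succeeded" immediately followed by "LCP terminated by peer" is the usual signature of the MPPE negotiation failing right after authentication. One hedged way to take NetworkManager out of the picture is to test with pppd and the pptp client directly; the server name and account below are placeholders:

        # /etc/ppp/peers/myvpn -- test with: pppd call myvpn (or "pon myvpn" on Debian/Ubuntu)
        pty "pptp vpn.example.com --nolaunchpppd"
        name DOMAIN\\myuser
        remotename PPTP
        require-mppe-128        # force 128-bit MPPE
        refuse-eap
        refuse-pap
        refuse-chap
        refuse-mschap           # leaves MS-CHAPv2, which MPPE requires
        noauth
        # password goes in /etc/ppp/chap-secrets:  DOMAIN\\myuser  PPTP  "secret"  *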

    Read the article

  • Random Slow Response

    - by ARehman
    We have an ASP.NET MVC 1.0 application running on Windows Server 2008 Standard (32-bit), Dual Core Xeon (3.0 GHz), 2 GB RAM. Most of the time the application renders a response in 3-4 seconds, but sometimes users get a very late response, with delays of 40 seconds to more than a minute. It happens in the following way: a user browses a page, sits idle for 5, 10 or 15 minutes, then tries to browse the same page or some other one. There is then a chance that he will see a late response even though the app pool is still up and running. This can happen with any arbitrary page. We have tried the following and made these observations:
    - Moved the application to a stand-alone web server.
    - App pool idle shutdown time is 60 minutes. There are no abrupt shutdowns/restarts.
    - CPU and memory don't spike. No delays in SQL queries.
    - Modified the app pool setting to run in classic mode. It didn't help.
    - Plugged in a custom module to log all requests which took more than 5 seconds to complete. It didn't pick up any request of interest.
    - Enabled 'Failed Request Tracing' to log all requests which take 20 or more seconds to complete. It didn't log anything.
    - Event Viewer, HTTPERR log, W3SVC logs and WAS logs don't indicate anything. HTTPERR only has '_ _ Timer_ConnectionIdle _ _' entries.
    - There is not much traffic to the server. This can happen even if only two users are active.
    Next we captured TCP/IP traffic on both the user and server end with Wireshark; in brief, the slowness looks like this:
    1. The browser sends a request for ~/User/Home/ (GET request) by setting up a receiving end point using port 'wlbs (port-2504)'. I'm not sure if this could be a problem in some way: the browser didn't handshake with the server first and assumed the last connection was still open, whereas I had browsed the same page 4 minutes earlier and performed no activity with the site after that. The HTTPERR log has a '_ _ Timer_ConnectionIdle _ _' entry for my last activity with the server.
    2. The browser (I was using Chrome) waits for any response from the server, doesn't find any, then starts retransmitting the same request on the same end point at increasing intervals, e.g. after 8, 18, 29, 40, 62, and 92 seconds. All of these GET requests were received by the server as well, but the server didn't send any packet back to the client.
    3. The browser saw no response on the end point it set up in point 1, so it opened a new end point 'optiwave-lm (port-2524)', did a handshake with the server and transmitted the same request again. The server received it, processed it, and returned a successful response.
    What happened to the earlier 6-7 requests? Were they passed on to HTTP.SYS or not? Why did Failed Request Tracing log nothing? We haven't found any clue yet. The server had served the same page successfully just 4 minutes earlier. Looking forward to more suggestions/solutions. -- Thanks
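
    Given that HTTPERR is full of Timer_ConnectionIdle entries and the stall always follows an idle period, it may be worth comparing HTTP.sys's own idle-connection timer (which is separate from the app pool idle timeout) against how long Chrome keeps connections open for reuse. A hedged check and example adjustment on Server 2008:

        rem show HTTP.sys timers, including IdleConnectionTimeout
        netsh http show timeout

        rem example only: change the idle timeout (value is in seconds)
        netsh http add timeout timeouttype=idleconnectiontimeout value=300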

    Read the article

  • mysql replication 1x master, 1x slave

    - by clarkk
    I have just setup one master and one slave server, but its not working.. On my website I connect to the slave server and I insert some rows, but they do not appear on the master and vice versa.. What is wrong? This is what I did: Master: -> /etc/mysql/my.cnf [mysqld] log-bin = mysql-master-bin server-id=1 # bind-address = 127.0.0.1 binlog-do-db = test_db Slave: -> /etc/mysql/my.cnf [mysqld] log-bin = mysql-slave-bin server-id=2 # bind-address = 127.0.0.1 replicate-do-db = test_db Slave: terminal 0 > mysql> STOP SLAVE; // and drop tables Master: terminal 1 > mysql> CREATE USER 'repl_slave'@'slave_ip' IDENTIFIED BY 'repl_pass'; mysql> GRANT REPLICATION SLAVE ON *.* TO 'repl_slave'@'slave_ip'; mysql> FLUSH PRIVILEGES; mysql> FLUSH TABLES WITH READ LOCK; -- leave terminal open terminal 2 > shell> mysqldump -u root -pPASSWORD test_db --lock-all-tables > dump.sql mysql> SHOW MASTER STATUS; Slave: terminal 3 > shell> mysql -u root -pPASSWORD test_db < dump.sql terminal 0 > mysql> CHANGE MASTER TO mysql> MASTER_HOST='master_ip', mysql> MASTER_USER='repl_slave', mysql> MASTER_PASSWORD='repl_pass', mysql> MASTER_PORT=3306, mysql> MASTER_LOG_FILE='mysql-master-bin.000003', // terminal 2 > SHOW MASTER STATUS mysql> MASTER_LOG_POS=4, // terminal 2 > SHOW MASTER STATUS mysql> MASTER_CONNECT_RETRY=10; mysql> START SLAVE; mysql> SHOW SLAVE STATUS; Here is the slave status: Array ( [Slave_IO_State] => Waiting for master to send event [Master_Host] => xx.xx.xx.xx [Master_User] => repl_slave [Master_Port] => 3306 [Connect_Retry] => 10 [Master_Log_File] => mysql-master-bin.000003 [Read_Master_Log_Pos] => 106 [Relay_Log_File] => mysqld-relay-bin.000002 [Relay_Log_Pos] => 258 [Relay_Master_Log_File] => mysql-master-bin.000003 [Slave_IO_Running] => Yes [Slave_SQL_Running] => Yes [Replicate_Do_DB] => test_db [Replicate_Ignore_DB] => [Replicate_Do_Table] => [Replicate_Ignore_Table] => [Replicate_Wild_Do_Table] => [Replicate_Wild_Ignore_Table] => [Last_Errno] => 0 [Last_Error] => [Skip_Counter] => 0 [Exec_Master_Log_Pos] => 106 [Relay_Log_Space] => 414 [Until_Condition] => None [Until_Log_File] => [Until_Log_Pos] => 0 [Master_SSL_Allowed] => No [Master_SSL_CA_File] => [Master_SSL_CA_Path] => [Master_SSL_Cert] => [Master_SSL_Cipher] => [Master_SSL_Key] => [Seconds_Behind_Master] => 0 [Master_SSL_Verify_Server_Cert] => No [Last_IO_Errno] => 0 [Last_IO_Error] => [Last_SQL_Errno] => 0 [Last_SQL_Error] => )
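    Two things stand out in the setup above: replication as configured only flows from the master to the slave, so rows inserted on the slave will never appear on the master; and with the default statement-based binary log format, binlog-do-db/replicate-do-db filter on the default database, so writes have to be issued with test_db selected. A minimal sanity check, assuming the grants and coordinates shown above:

        -- on the master, with test_db as the default database
        USE test_db;
        CREATE TABLE IF NOT EXISTS repl_test (id INT AUTO_INCREMENT PRIMARY KEY, note VARCHAR(50));
        INSERT INTO repl_test (note) VALUES ('written on master');

        -- on the slave, a few seconds later
        USE test_db;
        SELECT * FROM repl_test;      -- the row should appear here
        SHOW SLAVE STATUS\G           -- Slave_IO_Running and Slave_SQL_Running should both be Yes

    If the row replicates this way, the link itself is fine and the original test (inserting on the slave and expecting it on the master) was simply testing in the wrong direction; two-way traffic would require a second, mirrored CHANGE MASTER TO setup (master-master replication).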

    Read the article

  • Apache reverse proxy POST 403

    - by qkslvrwolf
    I am trying to get Jira and Stash to talk to each other via a Trusted Application link. The setup, currently, looks like this: Jira - http - Jira Proxy -https- stash proxy -http- stash. Jira and the Jira proxy are on the same machine. The Jira Proxy is showing 403 Forbidden for POST requests from the stash server. It works (or seems to ) for everything else. I contend that since we're seeing 403 forbiddens in the access log for apache, Jira is never seeing the request. Why is apache forbidding posts,and how do I fix it? Note that the IPs for both Stash and the Stash Proxy are in the "trusted host" section. My config: LogLevel info CustomLog "|/usr/sbin/rotatelogs /var/log/apache2/access.log 86400" common ServerSignature off ServerTokens prod Listen 8443 <VirtualHost *:443> ServerName jira.company.com SSLEngine on SSLOptions +StrictRequire SSLCertificateFile /etc/ssl/certs/server.cer SSLCertificateKeyFile /etc/ssl/private/server.key SSLProtocol +SSLv3 +TLSv1 SSLCipherSuite DHE-RSA-AES256-SHA:DHE-DSS-AES256-SHA:AES256-SHA:DHE-RSA-AES128-SHA:DHE-DSS-AES128-SHA:AES128-SHA:EDH-RSA-DES-CBC3-SHA:EDH-DSS-DES-CBC3-SHA:DES-CBC3-SHA # If context path is not "/wiki", then send to /jira. RedirectMatch 301 ^/$ https://jira.company.com/jira RedirectMatch 301 ^/gsd(.*)$ https://jira.company.com/jira$1 ProxyRequests On ProxyPreserveHost On ProxyVia On ProxyPass /jira http://localhost:8080/jira ProxyPassReverse /jira http://localhost:8080/jira <Proxy *> Order deny,allow Allow from all </Proxy> RewriteEngine on RewriteLog "/var/log/apache2/rewrite.log" RewriteLogLevel 2 # Disable TRACE/TRACK requests, per security. RewriteCond %{REQUEST_METHOD} ^(TRACE|TRACK) RewriteRule .* - [F] DocumentRoot /var/www DirectoryIndex index.html <Directory /var/www> Options FollowSymLinks AllowOverride None Order deny,allow Allow from all </Directory> <LocationMatch "/"> Order deny,allow Deny from all allow from x.x.71.8 allow from x.x.8.123 allow from x.x.120.179 allow from x.x.120.73 allow from x.x.120.45 satisfy any SetEnvif Remote_Addr "x.x.71.8" TRUSTED_HOST SetEnvif Remote_Addr "x.x.8.123" TRUSTED_HOST SetEnvif Remote_Addr "x.x.120.179" TRUSTED_HOST SetEnvif Remote_Addr "x.x.120.73" TRUSTED_HOST SetEnvif Remote_Addr "x.x.120.45" TRUSTED_HOST </LocationMatch> <LocationMatch ^> SSLRequireSSL AuthType CompanyNet PubcookieInactiveExpire -1 PubcookieAppID jira.company.com require valid-user RequestHeader set userid %{REMOTE_USER}s </LocationMatch> </VirtualHost> # Port open for SSL, non-pubcookie access. Used to access APIs with Basic Auth. <VirtualHost *:8443> SSLEngine on SSLOptions +StrictRequire SSLCertificateFile /etc/ssl/certs/server.cer SSLCertificateKeyFile /etc/ssl/private/server.key SSLProtocol +SSLv3 +TLSv1 SSLCipherSuite DHE-RSA-AES256-SHA:DHE-DSS-AES256-SHA:AES256-SHA:DHE-RSA-AES128-SHA:DHE-DSS-AES128-SHA:AES128-SHA:EDH-RSA-DES-CBC3-SHA:EDH-DSS-DES-CBC3-SHA:DES-CBC3-SHA ProxyRequests On ProxyPreserveHost On ProxyVia On ProxyPass /jira http://localhost:8080/jira ProxyPassReverse /jira http://localhost:8080/jira <Proxy *> Order deny,allow Allow from all </Proxy> RewriteEngine on RewriteLog "/var/log/apache2/rewrite.log" RewriteLogLevel 2 # Disable TRACE/TRACK requests, per security. 
RewriteCond %{REQUEST_METHOD} ^(TRACE|TRACK) RewriteRule .* - [F] DocumentRoot /var/www DirectoryIndex index.html <Directory /var/www> Options FollowSymLinks AllowOverride None Order deny,allow Allow from all </Directory> </VirtualHost> <VirtualHost jira.company.com:80> ServerName jira.company.com RedirectMatch 301 /(.*)$ https://jira.company.com/$1 RewriteEngine on RewriteCond %{REQUEST_METHOD} ^(TRACE|TRACK) RewriteRule .* - [F] </VirtualHost> <VirtualHost *:80> ServerName go.company.com RedirectMatch 301 /(.*)$ https://jira.company.com/$1 RewriteEngine on RewriteCond %{REQUEST_METHOD} ^(TRACE|TRACK) RewriteRule .* - [F] </VirtualHost>
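    One possible shape for a carve-out, assuming the Stash traffic hits a known path prefix: give the application-link endpoints their own Location block in the jira.company.com vhost, keep the pubcookie requirement, and let the trusted proxy addresses through with Satisfy Any. The path below is a placeholder, not taken from the real setup:

        <Location /jira/rest/applinks/>
            SSLRequireSSL
            Order deny,allow
            Deny from all
            Allow from x.x.120.179 x.x.120.73
            AuthType CompanyNet
            PubcookieAppID jira.company.com
            Require valid-user
            Satisfy Any
        </Location>

    With Satisfy Any, a request from one of the allowed addresses passes even though it carries no pubcookie, while browser traffic still has to authenticate. If the 403s stop, the interaction between the valid-user requirement in the <LocationMatch ^> block and the IP allowances was the problem; if they continue, the rejection is coming from somewhere else (for example the POSTing address not matching any Allow line).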

    Read the article

  • Screen -X exec commands not working until manually attached

    - by James Watt
    I have a batch script that starts a java server application inside of a screen. The command looks like this: cd /dir/ && screen -A -m -d -S javascreen java -Xms640M -Xmx1024M -jar javaserverapp.jar nogui After I run the batch script, it starts the server and puts it inside the correct screen. If I list my screens after, I see something like this: user@gtwy /dir $ screen -list There is a screen on: 16180.javascreen (Detached) 1 Socket in /var/run/screen/S-user. However, I have a second batch script that sends automated commands to this server and runs on a different crontab interval. Because of the way the application works, I send commands to it like this (this command tells it to alert connected users "testing 123"): screen -X exec .\!\! echo say testing 123 I've also tried: screen -R -X exec .\!\! echo say testing 123 screen -S javascreen -X exec .\!\! echo say testing 123 Unfortunately, these commands DO NOT WORK. They don't even give me an error message, they just do nothing. HOWEVER - If I manually attach to the screen first (with the below command) and then detach, now I can run any of the above commands flawlessly. I can demonstrate this with a video, if I wasn't clear enough here. screen -r -d Thanks in advance. Update: here is the important parts of /etc/screenrc. It should be totally vanilla, I've never edited this file. # VARIABLES # =============================================================== # No annoying audible bell, using "visual bell" # vbell on # default: off # vbell_msg " -- Bell,Bell!! -- " # default: "Wuff,Wuff!!" # Automatically detach on hangup. autodetach on # default: on # Don't display the copyright page startup_message off # default: on # Uses nethack-style messages # nethack on # default: off # Affects the copying of text regions crlf off # default: off # Enable/disable multiuser mode. Standard screen operation is singleuser. # In multiuser mode the commands acladd, aclchg, aclgrp and acldel can be used # to enable (and disable) other user accessing this screen session. # Requires suid-root. multiuser off # Change default scrollback value for new windows defscrollback 1000 # default: 100 # Define the time that all windows monitored for silence should # wait before displaying a message. Default 30 seconds. silencewait 15 # default: 30 # bufferfile: The file to use for commands # "readbuf" ('<') and "writebuf" ('>'): bufferfile $HOME/.screen_exchange # # hardcopydir: The directory which contains all hardcopies. # hardcopydir ~/.hardcopy # hardcopydir ~/.screen # # shell: Default process started in screen's windows. # Makes it possible to use a different shell inside screen # than is set as the default login shell. # If begins with a '-' character, the shell will be started as a login shell. # shell zsh # shell bash # shell ksh shell -$SHELL # shellaka '> |tcsh' # shelltitle '$ |bash' # emulate .logout message pow_detach_msg "Screen session of \$LOGNAME \$:cr:\$:nl:ended." # caption always " %w --- %c:%s" # caption always "%3n %t%? @%u%?%? [%h]%?%=%c" # advertise hardstatus support to $TERMCAP # termcapinfo * '' 'hs:ts=\E_:fs=\E\\:ds=\E_\E\\' # set every new windows hardstatus line to somenthing descriptive # defhstatus "screen: ^En (^Et)" # don't kill window after the process died # zombie "^["
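    For comparison, the usual way to push console input into a detached screen session is the stuff command, naming both the session and the window explicitly; a sketch (window 0 is an assumption):

        # send "say testing 123" followed by Enter to window 0 of the javascreen session
        screen -S javascreen -p 0 -X stuff "say testing 123$(printf '\r')"

    Naming the window with -p can matter because a -X command sent to a never-attached session may have no "current" window to act on, which would be consistent with the commands starting to work only after the session has been attached once.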

    Read the article

  • Nginx + PHP - No input file specified for 1 server block. Other server block works fine

    - by F21
    I am running Ubuntu Desktop 12.04 with nginx 1.2.6. PHP is PHP-FPM 5.4.9. This is the relevant part of my nginx.conf: http { include mime.types; default_type application/octet-stream; sendfile on; keepalive_timeout 65; server { server_name testapp.com; root /www/app/www/; index index.php index.html index.htm; location ~ \.php$ { fastcgi_intercept_errors on; fastcgi_pass 127.0.0.1:9000; fastcgi_index index.php; fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name; include fastcgi_params; } } server { listen 80 default_server; root /www index index.html index.php; location ~ \.php$ { fastcgi_intercept_errors on; fastcgi_pass 127.0.0.1:9000; fastcgi_index index.php; fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name; include fastcgi_params; } } } Relevant bits from php-fpm.conf: ; Chroot to this directory at the start. This value must be defined as an ; absolute path. When this value is not set, chroot is not used. ; Note: you can prefix with '$prefix' to chroot to the pool prefix or one ; of its subdirectories. If the pool prefix is not set, the global prefix ; will be used instead. ; Note: chrooting is a great security feature and should be used whenever ; possible. However, all PHP paths will be relative to the chroot ; (error_log, sessions.save_path, ...). ; Default Value: not set ;chroot = ; Chdir to this directory at the start. ; Note: relative path can be used. ; Default Value: current directory or / when chroot chdir = /www In my hosts file, I redirect 2 domains: testapp.com and test.com to 127.0.0.1. My web files are all stored in /www. From the above settings, if I visit test.com/phpinfo.php and test.com/app/www, everything works as expected and I get output from PHP. However, if I visit testapp.com, I get the dreaded No input file specified. error. So, at this point, I pull out the log files and have a look: 2012/12/19 16:00:53 [error] 12183#0: *17 FastCGI sent in stderr: "Unable to open primary script: /www/app/www/index.php (No such file or directory)" while reading response header from upstream, client: 127.0.0.1, server: testapp.com, request: "GET / HTTP/1.1", upstream: "fastcgi://127.0.0.1:9000", host: "testapp.com" This baffles me because I have checked again and again and /www/app/www/index.php definitely exists! This is also validated by the fact that test.com/app/www/index.php works which means the file exists and the permissions are correct. Why is this happening and what are the root causes of things breaking for just the testapp.com v-host? Just an update to my investigation: I have commented out chroot and chdir in php-fpm.conf to narrow down the problem If I remove the location ~ \.php$ block for testapp.com, then nginx will send me a bin file which contains the PHP code. This means that on nginx's side, things are fine. The problem is that something must be mangling the file paths when passing it to PHP-FPM. Having said that, it is quite strange that the default_server v-host works fine because its root is /www, where as things just won't work for the testapp.com v-host because the root is /www/app/www.
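    One way to see exactly which path reaches PHP-FPM for the failing vhost is to let nginx echo the value it would pass instead of proxying; a temporary, debug-only sketch for the testapp.com server block:

        location ~ \.php$ {
            default_type text/plain;
            return 200 "$document_root$fastcgi_script_name";
        }

    If the echoed value is the expected /www/app/www/index.php, nginx is innocent and the mangling happens on the PHP-FPM side (for example the pool's chdir/chroot or the permissions of the worker user); if it is something else, the root or fastcgi_param in that server block is at fault.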

    Read the article

  • TV not detected by Windows/VGA - when there is a WHDI device in the signal chain

    - by ashwalk
    I'm at my wit's end with this one... I had an EVGA GTS 250, and I used to plug its HDMI out into a WHDI sender, which transmitted to its corresponding WHDI receiver 15 ft away, which then connected to a Samsung LN40D LCD TV through another HDMI cable. PC/VGA < [HDMI cable] < WHDI sender < [air] < WHDI receiver < [HDMI cable] < TV. It was perfect, stable, no perceivable latency. I just plugged everything in the first time and it worked instantly. It sent 5.1 audio, and Windows/nVidia Control Center detected the TV by its name. The WHDI device is this one: http://goo.gl/Q8iWI5 Now I bought an EVGA GTX 650, and WHDI doesn't work anymore. Neither Windows nor nVidia Control Center will detect the TV, only the monitor that's connected via DVI. The TV screen shows "TX202913 connected. Check video signal." on top of a black screen. The WHDI device is not the problem itself, just the fact that it doesn't allow a direct connection between PC and TV; I would bet that if I put an AVR in its place I'd have the same issue. The HDMI on the new card works with other monitors, and if I put the older card back, WHDI works normally. I have googled this for 5 months on and off. Once I bumped into a page that showed how to force a display device to always-on through a registry edit. After I restarted Windows, the TV (through WHDI) displayed my extended or duplicated desktop at 1024x768 ONLY, and listed the display as "digital display". I could not change the resolution and it wouldn't play back audio (the option was available in nVidia Control Center's HDMI audio options, but did not work). This proves there is no conflict between the devices, except that software-wise Windows cannot, for the life of it, understand that there's a TV there to send video/audio to. Since this won't do (no audio, poor video), I reverted the registry edit. It's also not an EDID problem within the TV, since it works when connected directly. The last weird bit of this saga is that today I remembered Windows' "Add Device" dialog, gave it a go, and a "Samsung Generic UPNP TV" showed up, which I promptly installed the drivers for, rising to a climax of... ...NOTHING HAPPENING. As far as I can tell, it really didn't change anything other than using up a few KB on my main disk. I should also say that I looked a LOT into handshake problems and nothing applied either. Do any of you have an idea of what may be going on? I can't stand the thought of a US$200 device not working because of the addition of a newer graphics card, when the much older one had no issues. There is absolutely NO REASON for this to happen. There is NO documentation on WHDI online; apparently no one buys this stuff, and for the same reason no one responded to this same plea for help on the NVidia and EVGA forums. Worst case, this can serve as a warning about this setup for people in the future. Thanks in advance.

    Read the article

  • Cannot connect to MySQL Server on RHEL 5.7

    - by Jeffrey Wong
    I have a standard MySQL Server running on Red hat 5.7. I have edited /etc/my.cnf to specify the bind address as my server's public IP address. [mysqld] datadir=/var/lib/mysql socket=/var/lib/mysql/mysql.sock user=mysql # Default to using old password format for compatibility with mysql 3.x # clients (those using the mysqlclient10 compatibility package). old_passwords=1 # Disabling symbolic-links is recommended to prevent assorted security risks ; # to do so, uncomment this line: # symbolic-links=0 [mysqld_safe] log-error=/var/log/mysqld.log pid-file=/var/run/mysqld/mysqld.pid bind-address=171.67.88.25 port=3306 And I have also restarted my firewall sudo /sbin/iptables -A INPUT -i eth0 -p tcp --destination-port 3306 -j ACCEPT /sbin/service iptables save The network administrator has already opened port 3306 for this box. When connecting from a remote computer (running Ubuntu 10.10, server is running RHEL 5.7), I issue mysql -u jeffrey -p --host=171.67.88.25 --port=3306 --socket=/var/lib/mysql/mysql.sock but receive a ERROR 2003 (HY000): Can't connect to MySQL server on '171.67.88.25' (113). I've noticed that the socket file /var/lib/mysql/mysql.sock is blank. Should this be the case? UPDATE The result of netstat -an | grep 3306 tcp 0 0 0.0.0.0:3306 0.0.0.0:* LISTEN Result of sudo netstat -tulpen Active Internet connections (only servers) Proto Recv-Q Send-Q Local Address Foreign Address State User Inode PID/Program name tcp 0 0 127.0.0.1:2208 0.0.0.0:* LISTEN 0 7602 3168/hpiod tcp 0 0 0.0.0.0:3306 0.0.0.0:* LISTEN 27 7827 3298/mysqld tcp 0 0 0.0.0.0:111 0.0.0.0:* LISTEN 0 5110 2802/portmap tcp 0 0 0.0.0.0:8787 0.0.0.0:* LISTEN 0 8431 3326/rserver tcp 0 0 0.0.0.0:915 0.0.0.0:* LISTEN 0 5312 2853/rpc.statd tcp 0 0 0.0.0.0:22 0.0.0.0:* LISTEN 0 7655 3188/sshd tcp 0 0 127.0.0.1:631 0.0.0.0:* LISTEN 0 7688 3199/cupsd tcp 0 0 127.0.0.1:25 0.0.0.0:* LISTEN 0 8025 3362/sendmail: acce tcp 0 0 127.0.0.1:2207 0.0.0.0:* LISTEN 0 7620 3173/python udp 0 0 0.0.0.0:909 0.0.0.0:* 0 5300 2853/rpc.statd udp 0 0 0.0.0.0:912 0.0.0.0:* 0 5309 2853/rpc.statd udp 0 0 0.0.0.0:68 0.0.0.0:* 0 4800 2598/dhclient udp 0 0 0.0.0.0:36177 0.0.0.0:* 70 8314 3476/avahi-daemon: udp 0 0 0.0.0.0:5353 0.0.0.0:* 70 8313 3476/avahi-daemon: udp 0 0 0.0.0.0:111 0.0.0.0:* 0 5109 2802/portmap udp 0 0 0.0.0.0:631 0.0.0.0:* 0 7691 3199/cupsd Result of sudo /sbin/iptables -L -v -n Chain INPUT (policy ACCEPT 0 packets, 0 bytes) pkts bytes target prot opt in out source destination 6373 2110K RH-Firewall-1-INPUT all -- * * 0.0.0.0/0 0.0.0.0/0 Chain FORWARD (policy ACCEPT 0 packets, 0 bytes) pkts bytes target prot opt in out source destination 0 0 RH-Firewall-1-INPUT all -- * * 0.0.0.0/0 0.0.0.0/0 Chain OUTPUT (policy ACCEPT 1241 packets, 932K bytes) pkts bytes target prot opt in out source destination Chain RH-Firewall-1-INPUT (2 references) pkts bytes target prot opt in out source destination 572 861K ACCEPT all -- lo * 0.0.0.0/0 0.0.0.0/0 1 28 ACCEPT icmp -- * * 0.0.0.0/0 0.0.0.0/0 icmp type 255 0 0 ACCEPT esp -- * * 0.0.0.0/0 0.0.0.0/0 0 0 ACCEPT ah -- * * 0.0.0.0/0 0.0.0.0/0 46 6457 ACCEPT udp -- * * 0.0.0.0/0 224.0.0.251 udp dpt:5353 0 0 ACCEPT udp -- * * 0.0.0.0/0 0.0.0.0/0 udp dpt:631 0 0 ACCEPT tcp -- * * 0.0.0.0/0 0.0.0.0/0 tcp dpt:631 782 157K ACCEPT all -- * * 0.0.0.0/0 0.0.0.0/0 state RELATED,ESTABLISHED 2 120 ACCEPT tcp -- * * 0.0.0.0/0 0.0.0.0/0 state NEW tcp dpt:22 0 0 ACCEPT tcp -- * * 0.0.0.0/0 0.0.0.0/0 state NEW tcp dpt:443 0 0 ACCEPT tcp -- * * 0.0.0.0/0 0.0.0.0/0 state NEW tcp dpt:23 0 0 ACCEPT tcp -- * * 0.0.0.0/0 
0.0.0.0/0 state NEW tcp dpt:80 4970 1086K REJECT all -- * * 0.0.0.0/0 0.0.0.0/0 reject-with icmp-host-prohibited Result of nmap -P0 -p3306 171.67.88.25 Host is up (0.027s latency). PORT STATE SERVICE 3306/tcp filtered mysql Nmap done: 1 IP address (1 host up) scanned in 0.09 seconds Solution When everything else fails, go GUI! system-config-securitylevel and add port 3306. All done!
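    For anyone hitting the same wall from the shell: the GUI works because it places the rule inside the RH-Firewall-1-INPUT chain ahead of the final REJECT, whereas the rule appended to INPUT never matches - packets jump straight into RH-Firewall-1-INPUT (the only rule in INPUT above) and are rejected there before INPUT's own rules are reached. A rough CLI equivalent of the GUI fix:

        # insert ahead of the catch-all REJECT in the RH-Firewall-1-INPUT chain
        sudo /sbin/iptables -I RH-Firewall-1-INPUT -p tcp --dport 3306 -m state --state NEW -j ACCEPT
        sudo /sbin/service iptables save

    After that, nmap should report port 3306 as open rather than filtered.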

    Read the article

  • How to save POST&GET headers of a web page with "Wireshark"?

    - by brilliant
    Hello everybody, I've been trying to find Python code that would log in to my mailbox on yahoo.com from "Google App Engine". I was given this code: import urllib, urllib2, cookielib url = "https://login.yahoo.com/config/login?" form_data = {'login' : 'my-login-here', 'passwd' : 'my-password-here'} jar = cookielib.CookieJar() opener = urllib2.build_opener(urllib2.HTTPCookieProcessor(jar)) form_data = urllib.urlencode(form_data) # data returned from this pages contains redirection resp = opener.open(url, form_data) # yahoo redirects to http://my.yahoo.com, so lets go there instead resp = opener.open('http://mail.yahoo.com') print resp.read() The author of this script looked at the HTML of the Yahoo log-in form and came up with it. That log-in form contains two fields, one for the user's Yahoo! ID and another for the password. However, when I tried this code out (substituting my real Yahoo login for 'my-login-here' and my real password for 'my-password-here'), it just returned the log-in form back to me, which means that something didn't work right. Another supporter suggested that I should send an MD5 hash of my password rather than the plain password. He also noted that the log-in form contains a lot of other hidden fields besides the login and password fields (he called them "CSRF protections") that I would also have to deal with: <input type="hidden" name=".tries" value="1"> <input type="hidden" name=".src" value="ym"> <input type="hidden" name=".md5" value=""> <input type="hidden" name=".hash" value=""> <input type="hidden" name=".js" value=""> <input type="hidden" name=".last" value=""> <input type="hidden" name="promo" value=""> <input type="hidden" name=".intl" value="us"> <input type="hidden" name=".bypass" value=""> <input type="hidden" name=".partner" value=""> <input type="hidden" name=".u" value="bd5tdpd5rf2pg"> <input type="hidden" name=".v" value="0"> <input type="hidden" name=".challenge" value="5qUiIPGVFzRZ2BHhvtdGXoehfiOj"> <input type="hidden" name=".yplus" value=""> <input type="hidden" name=".emailCode" value=""> <input type="hidden" name="pkg" value=""> <input type="hidden" name="stepid" value=""> <input type="hidden" name=".ev" value=""> <input type="hidden" name="hasMsgr" value="0"> <input type="hidden" name=".chkP" value="Y"> <input type="hidden" name=".done" value="http://mail.yahoo.com"> He said that I should do the following: simulate a normal login and save the login page that I get; save the POST & GET headers with "Wireshark"; compare the login page with those headers and see what fields I need to include with my request. I really don't know how to carry out the first two of these three steps. I have just downloaded "Wireshark" and have tried capturing some packets there. However, I don't know how to "simulate normal login and save the login page". Also, I don't know how to save POST & GET headers with "Wireshark". Can anyone, please, guide me through these two steps in "Wireshark"? Or at least tell me what I should start with. Thank you.
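    Before reaching for a packet capture, the hidden-field problem can be handled inside the script itself: fetch the login page once, scrape every <input type="hidden"> from the form, and post those values back along with the login and password. A rough Python 2 sketch along those lines; the regex assumes the attribute order shown in the HTML above:

        import re, urllib, urllib2, cookielib

        LOGIN_URL = "https://login.yahoo.com/config/login?"

        jar = cookielib.CookieJar()
        opener = urllib2.build_opener(urllib2.HTTPCookieProcessor(jar))

        # 1. fetch the login page and collect the hidden fields it expects back
        login_page = opener.open(LOGIN_URL).read()
        form_data = dict(re.findall(
            r'<input type="hidden" name="([^"]*)" value="([^"]*)"', login_page))

        # 2. add the visible fields and submit
        form_data['login'] = 'my-login-here'
        form_data['passwd'] = 'my-password-here'
        resp = opener.open(LOGIN_URL, urllib.urlencode(form_data))
        print resp.read()

    As for Wireshark: the login URL is HTTPS, so a raw capture will only show encrypted payloads unless the session keys are available; capturing the headers of a plain-HTTP request is easy (filter on http.request), but for this particular form the scraping approach above avoids needing the capture at all.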

    Read the article

  • synchronization of file locations between two machines

    - by intuited
    Although similar threads have been asked on this site and its siblings before, I've not managed to glean the answer to this persistent question. Any help is much appreciated. The situation: I've got two laptops; both contain a ton of music. Sometimes I move these music files to different locations, or change the metadata in them, or convert them to a different format. I might do any of these things on either machine. I rarely do all of them at once - i.e. it's unlikely that I'll convert a file's format and move it to a different location all in one go. I'd like to be able to synchronize these changes without having to sift through everything that was renamed or moved. I'm familiar with rsync but I find it inadequate: although it can compute checksums, it doesn't have any way to store them, so if a file differs, it can't figure out which side it changed on. This also means that it can't attempt to match a missing file to a new one with the same checksum (i.e. a move). And even if you turn on checksumming, it doesn't use it intelligently: it checksums files even when the sizes differ, so it takes an epoch to do a sync on a large repository; I would like it to only compute the checksum when the size and date alone can't settle the comparison. IIRC it's also not able to use file metadata as a means of file comparison; this is sort of a wishlist item, but it seems doable. I've also looked into rsnapshot, but its requirement to create a full backup is impractical in this situation. I don't need a backup, I just need a record of which file with which hash was where, and when. Unison seems like it might be able to do something vaguely along these lines, but I'm loath to spend hours wading through its details only to discover that it's sadly lacking. Plus, it's fun asking questions on here. What I'd like is a tool that does something along these lines: keeps track of file checksums or of actual renames, possibly using inotify to greatly reduce resource consumption/latency; stores a database containing this info, along with other pertinencies like the file format and metadata, the actual inode, the filename history, etc.; and uses this info to provide more intelligent synchronization with a counterpart on the other side. So for example: if a file has been converted from flac to ogg but kept the same base filename, or the same metadata, it should be able to send the new version over, and the other side should delete the original. Probably it should actually sequester it somewhere in case they or you screwed up, but that's a detail. And then when the transaction is done, the state is logged so that the next time the two interact they can work out their differences. Maybe all this metadata stuff is a fancy pipe dream. I would actually be pretty happy if there was something out there that could just use checksums in an intelligent way. This would be sort of like having the intelligence of something like git, minus the need to duplicate data in an index/backup/etc. (and branching, and checkouts, and all the other great stuff that RCSs do; basically just fast-forward commit pushes are all I want, with maybe the option to roll back). So is there something out there that can do this? If not, can someone suggest a good way to start making it?
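    As a starting point for the checksum-database idea, here is a toy Python sketch (not the inotify-backed tool described above, and the paths are placeholders) that builds a digest-to-paths index for one machine; comparing two such indexes lets a move or rename (same digest, different path) be told apart from a real addition or deletion:

        import hashlib, json, os

        def index_tree(root):
            """Map SHA-1 digest -> list of paths relative to root."""
            index = {}
            for dirpath, _, filenames in os.walk(root):
                for name in filenames:
                    path = os.path.join(dirpath, name)
                    h = hashlib.sha1()
                    with open(path, 'rb') as f:
                        for chunk in iter(lambda: f.read(1 << 20), b''):
                            h.update(chunk)
                    index.setdefault(h.hexdigest(), []).append(os.path.relpath(path, root))
            return index

        if __name__ == '__main__':
            # persist the snapshot; a smarter version would re-hash only files
            # whose size or mtime changed since the last run
            with open('music-index.json', 'w') as out:
                json.dump(index_tree('/path/to/music'), out, indent=2)

    This obviously ignores the metadata and format-conversion parts of the wishlist, but it is enough to answer the cheaper question of "what moved where" before any bytes are transferred.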

    Read the article

  • php-fpm + persistent sockets = 502 bad gateway

    - by leeoniya
    Put on your reading glasses - this will be a long-ish one. First, what I'm doing. I'm building a web-app interface for some particularly slow TCP devices. Opening a socket to them takes 200ms and an fwrite/fread cycle takes another 300ms. To reduce the need for both of these actions on each request, I'm opening a persistent TCP socket, which cuts the response time by the aforementioned 200ms. I was hoping PHP-FPM would share the persistent connections between requests from different clients (and indeed it does!), but there are some issues which I haven't been able to resolve after 2 days of interneting, reading logs and modifying settings. I have somewhat narrowed it down, though. Setup: Ubuntu 13.04 x64 Server (fully updated) on Linode PHP 5.5.0-6~raring+1 (fpm-fcgi) nginx/1.5.2 Relevant config: nginx worker_processes 4; php-fpm/pool.d pm = dynamic pm.max_children = 2 pm.start_servers = 2 pm.min_spare_servers = 2 Let's go from coarse to fine detail of what happens. After a fresh start I have 4x nginx processes and 2x php5-fpm processes waiting to handle requests. Then I send requests every couple of seconds to the script. The first takes a while to open the socket connection and returns the data in about 500ms, the second returns data in 300ms (yay, it's re-using the socket), the third also succeeds in about 300ms, the fourth request = 502 Bad Gateway, same with the 5th. The sixth request once again returns data, except now it took 500ms again. The process repeats for several cycles, after which every 4 requests result in 2x 502 Bad Gateways and 2x 500ms data responses. If I double all the fpm pool values and have 4x php-fpm processes running, the cycle settles into 4x successful 500ms responses followed by 4x Bad Gateway errors. If I don't use persistent sockets, this issue goes away, but then every request is 500ms. What I suspect is happening is that the persistent socket keeps each php-fpm process from idling and ties it up, so the next one gets chosen until none are left; as they error out, maybe they are restarted and become available on the next round-robin loop, but the socket dies with the process. I haven't yet checked the 'slowlog', but the nginx error log shows lots of this: *188 recv() failed (104: Connection reset by peer) while reading response header from upstream, client:... All the suggestions on the internet regarding fixing nginx/php-fpm/502 bad gateway relate to high load or fcgi_pass misconfiguration. This is not the case here. Increasing buffers/sizes, changing timeouts, switching from unix socket to tcp socket for fcgi_pass, upping connection limits on the system... none of this stuff applies here. I've had some other success with setting pm = ondemand rather than dynamic, but as soon as the initial fpm process gets killed off after idling, the persistent socket is gone for all subsequent php-fpm spawns. For the PHP script, I'm using stream_socket_client() with the STREAM_CLIENT_PERSISTENT flag, a while/stream_select() loop to detect socket data, and fread($sock, 4096) to grab the data. I don't call fclose(), obviously. If anyone has some additional questions or advice on how to get a persistent socket without tying up the php-fpm processes beyond the request completion, or maybe some other things to try, I'd appreciate it.
some useful links: Nginx + php-fpm - recv() error Nginx + php-fpm "504 Gateway Time-out" error with almost zero load (on a test-server) Nginx + PHP-FPM "error 104 Connection reset by peer" causes occasional duplicate posts http://www.linuxquestions.org/questions/programming-9/php-pfsockopen-552084/ http://stackoverflow.com/questions/14268018/concurrent-use-of-a-persistent-php-socket http://devzone.zend.com/303/extension-writing-part-i-introduction-to-php-and-zend/#Heading3 http://stackoverflow.com/questions/242316/how-to-keep-a-php-stream-socket-alive http://php.net/manual/en/install.fpm.configuration.php https://www.google.com/search?q=recv%28%29+failed+%28104:+Connection+reset+by+peer%29+while+reading+response+header+from+upstream+%22502%22&ei=mC1XUrm7F4WQyAHbv4H4AQ&start=10&sa=N&biw=1920&bih=953&dpr=1
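    For context, the client code described above would look roughly like this; it is a sketch, not the actual script, and the device address, command and timeouts are placeholders:

        <?php
        // persistent connection to the slow TCP device (placeholder host/port)
        $errno = 0; $errstr = '';
        $sock = stream_socket_client('tcp://192.168.1.50:4001', $errno, $errstr, 5,
                                     STREAM_CLIENT_CONNECT | STREAM_CLIENT_PERSISTENT);
        if ($sock === false) {
            die("connect failed: $errstr ($errno)");
        }

        fwrite($sock, "STATUS\r\n");               // placeholder device command

        $response = '';
        $read = array($sock); $write = null; $except = null;
        while (stream_select($read, $write, $except, 2) > 0) {
            $chunk = fread($sock, 4096);
            if ($chunk === '' || $chunk === false) break;   // peer closed or error
            $response .= $chunk;
            $read = array($sock);                  // stream_select() rewrites the array
        }
        echo $response;
        // no fclose() -- the point is to keep the TCP session open for the next request

    One design note: persistent streams in PHP are owned per worker process, so each php-fpm child ends up with its own connection to the device; a request that lands on a child whose socket has died (or that never opened one) pays the full connect cost or fails, which seems consistent with the alternating 300ms/502 pattern described above.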

    Read the article

  • IP Micro-outages, telephone micro-outages, and CATV micro-outages

    - by Michael Graff
    This is a long and complicated question, mostly because it has been going on for 2.5 years without a solution in sight. It also is only one-third computer related, the other two-thirds are cable TV and cable-phone related. Background I have COX Communications for a cable provider, and we get Internet, digital cable TV, and digital phone service through them. The Internet is a SB5101 right now, and has been a DPC2100 and SB5120 in the past. Same results. The phone service is provided through a telephone interface mounted on the outside of the house (not classic VoIP) and the CATV is through a Scientific Atlanta receiver without DVR. I do have a TiVo connected to the CATV box. Symptoms The CATV shows "blocking" -- sometimes very very short duration where a few blocks appear on the screen. Sometimes it lasts long enough that the video "pauses" for 2-5 seconds, and rarely but not unseen the audio also fails. The CATV decoder box shows no correctable (FEC) or uncorrectable errors. That is, all BER counters are zero for the video stream. The Internet shows "micro-outages" where it appears that sent packets are not making it out, but I continue to receive packets from local modems. That is, pings stop coming back, but I continue to see modems broadcast for DHCP, and sometimes they ask more than once. The cable modem shows no errors during this time, but cable modems lie like you would not believe. It is actually possible to unplug the coax from the modem for 20 seconds and it reports NO ERRORS to the provider's tools. The phone service cuts out for 1-3 seconds, infrequently. When this happens, I hear NOTHING (not even comfort noise) and the remote side hears a "click" as if I were getting a call waiting message. However, there is no call incoming, other than the one I'm currently on of course. Things SEEM to happen more frequently when the temperature outside swings from cold to warm, so fall/spring seems worse than summer/winter. All micro-outages occur between once or twice a day (which I could ignore) to 10 times per hour. All SNR, signal levels, noise levels, etc. show very close to optimal when measured. COX's diagnosis This is a continual pain for me. Over the last 2.5 years, they have opened, "fixed" something, and closed the tickets. They close it without confirming that it is indeed better, and when I reopen they cannot do that, but instead they open a new ticket and send yet another low-level tech out to do the same signal tests and report that all is OK. I've finally gotten a line tech who has a clue and is motivated enough to pursue this with me. We have tried things like switching the local nodes over to UPS and generator power, but this does not trigger the noise. We have tried replacing all cabling, the tap outside my house, the modem, the CATV decoder -- all without resolution. Recently they have decided it is both my computer or switch, my TiVo, and my phone that are all broken and causing this issue. My debugging steps I spent the worse day of my TV-watching life yesterday and part of today. I watched live TV without the TiVo. I witnessed blocking, but it did "feel different." and was actually more severe. Some days it is better, some days it is worse, so perhaps this was just a very bad day. Today, I connected the TiVo to my DVD player, and ran two very long movies through it. I saw no blocking at all during nearly 6 hours of video. Suggestions? Does anyone have any suggestions on what to do next? 
I understand perhaps only the IP side can be addressed here, but it is one of the more limiting debugging options.
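    For the IP side specifically, a timestamped record of exactly when the micro-outages happen (to line up against the CATV blocking and the temperature swings) may be worth collecting from a machine wired directly to the modem. A small sketch; the gateway address is a placeholder:

        import os, subprocess, time

        GATEWAY = "10.0.0.1"   # placeholder -- the cable modem or first COX hop

        devnull = open(os.devnull, "w")

        def ping_once(host):
            # one ICMP echo request, 1 second timeout (Linux iputils ping syntax)
            return subprocess.call(["ping", "-c", "1", "-W", "1", host],
                                   stdout=devnull, stderr=devnull) == 0

        down_since = None
        while True:
            now = time.strftime("%Y-%m-%d %H:%M:%S")
            if not ping_once(GATEWAY):
                if down_since is None:
                    down_since = now                      # outage starts
            elif down_since is not None:
                print("outage from %s to %s" % (down_since, now))
                down_since = None
            time.sleep(1)

    A log like this, kept alongside notes of when the TV blocks or the phone drops, at least turns "it feels worse on warm afternoons" into data the line tech can work with.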

    Read the article

< Previous Page | 533 534 535 536 537 538 539 540 541 542 543 544  | Next Page >