Search Results

Search found 12569 results on 503 pages for 'root plist'.

  • Nginx Subdomain Problem

    - by user292299
    I can't access my subdomain on localhost. My local domain is localhost.dev and it works, but I want automatic subdomains for a PHP script (username.localhost.dev). I tried this:

        server {
            listen 80 default_server;
            listen [::]:80 default_server ipv6only=on;

            access_log /var/www/access.log;
            error_log /var/www/error.log;

            root /var/www;
            index index.php index.html index.htm;

            # Make site accessible from http://localhost/
            server_name localhost.dev *.localhost.dev;

            location / {
                # First attempt to serve request as file, then
                # as directory, then fall back to displaying a 404.
                try_files $uri $uri/ /index.html;
                # Uncomment to enable naxsi on this location
                # include /etc/nginx/naxsi.rules
            }

            location /f2/public/ {
                try_files $uri $uri/ /f2/public/index.php?$args;
            }

            location /doc/ {
                alias /usr/share/doc/;
                autoindex on;
                allow 127.0.0.1;
                allow ::1;
                deny all;
            }

            # Only for nginx-naxsi used with nginx-naxsi-ui : process denied requests
            #location /RequestDenied {
            #    proxy_pass http://127.0.0.1:8080;
            #}

            #error_page 404 /404.html;

            # redirect server error pages to the static page /50x.html
            #
            #error_page 500 502 503 504 /50x.html;
            #location = /50x.html {
            #    root /usr/share/nginx/html;
            #}

            # pass the PHP scripts to FastCGI server listening on 127.0.0.1:9000
            #
            location ~ \.php$ {
                # fastcgi_split_path_info ^(.+\.php)(/.+)$;
                # NOTE: You should have "cgi.fix_pathinfo = 0;" in php.ini
                #
                # With php5-cgi alone:
                # fastcgi_pass 127.0.0.1:9000;
                # With php5-fpm:
                # fastcgi_pass unix:/var/run/php5-fpm.sock;
                # fastcgi_index index.php;
                # include fastcgi_params;
                include /etc/nginx/fastcgi_params;
                try_files $uri =404;
                fastcgi_pass 127.0.0.1:9000;
                fastcgi_index index.php;
                fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
            }

            # deny access to .htaccess files, if Apache's document root
            # concurs with nginx's one
            #
            #location ~ /\.ht {
            #    deny all;
            #}
        }

    It's not working. For testing I changed the server_name line to:

        server_name localhost.dev asd.localhost.dev;

    and I still can't access asd.localhost.dev. I also tried a config with two server{} sections: the stock Ubuntu comment header, then the same server block as above repeated twice (the first with the listen lines and "server_name localhost.dev;", the second without the listen lines and with "server_name asd.localhost.dev;"), followed by the stock commented-out virtual-host and HTTPS server examples:

        # You may add here your
        # server {
        #     ...
        # }
        # statements for each of your virtual hosts to this file
        ...
        server {
            listen 80 default_server;
            listen [::]:80 default_server ipv6only=on;
            access_log /var/www/access.log;
            error_log /var/www/error.log;
            root /var/www;
            index index.php index.html index.htm;
            server_name localhost.dev;
            ...
        }
        ###############################
        server {
            access_log /var/www/access.log;
            error_log /var/www/error.log;
            root /var/www;
            index index.php index.html index.htm;
            server_name asd.localhost.dev;
            ...
        }

    I can't get it to work.
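
    For reference, nginx can match the subdomain with a regex server_name and expose it as a variable; a minimal sketch (the per-user document root is an assumption, adjust to the real layout):

        server {
            listen 80;
            # named capture: the subdomain becomes $user (dots must be escaped)
            server_name ~^(?<user>.+)\.localhost\.dev$;
            root /var/www/$user;    # hypothetical per-user docroot
            index index.php index.html;
        }

    Note also that wildcard names don't resolve by themselves: /etc/hosts has no wildcard support, so each test name (asd.localhost.dev, username.localhost.dev, ...) needs its own hosts entry, or a local resolver such as dnsmasq with address=/localhost.dev/127.0.0.1.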

  • awstats parse of postfix mail log drops all records

    - by accidental admin
    I'm trying to get awstats to parse the postfix mail log, but it drops almost all entries with messages like:

        Corrupted record (date 20091204042837 lower than 20091211065829-20000): 2009-12-04 04:28:37 root root localhost 127.0.0.1 SMTP - 1 17480

    A few more are dropped with an invalid LogFormat:

        Corrupted record line 24 (record format does not match LogFormat parameter): 2009-11-16 04: 28:22 root root localhost 127.0.0.1 SMTP - 14755

    My conf has:

        LogFormat="%time2 %email %email_r %host %host_r %method %url %code %bytesd"

    I believe that matches the log format (and besides, it is the log format I've seen everywhere for awstats mail parsing). It is also the same entry format as all the other entries in the mail log. Whatever is left is dropped too:

        Dropped record (host localhost and 127.0.0.1 not qualified by SkipHosts): 2009-12-07 04:28:36 root root localhost 127.0.0.1 SMTP - 1 17152

    I added SkipHosts="" to the .conf file, but to no avail. I feel like awstats really has some personal quarrel with me today.
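
    For what it's worth, the "date ... lower than ..." errors usually mean awstats has already processed newer records, so older lines look out of order to it. One hedged way out is to reset the history for that config and re-feed the logs in order (all paths and the config name below are assumptions, adjust to the local setup), and to double-check that awstats is actually loading the conf file that was edited:

        # merge and time-sort the mail logs first
        perl /usr/share/awstats/tools/logresolvemerge.pl /var/log/maillog* > /tmp/mail-merged.log
        # remove the history files for this config under DirData (assumed path)
        rm /var/lib/awstats/awstats*.mail.txt
        # rebuild from the merged log
        perl awstats.pl -config=mail -update -LogFile=/tmp/mail-merged.log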

  • mount_afp on linux, user rights

    - by Antonio Sesto
    I need to mount a remote filesystem on a Linux box using the AFP protocol. The box runs an old Debian 4. I downloaded the source code of mount_afp, compiled it and installed it with all the required packages, then created /dev/fuse with the following command (according to the instructions here):

        mknod /dev/fuse c 10 229

    I can mount the remote filesystem as root by executing:

        mount_afp afp://USER:PASSWD@REMOTE_SERVER/FOLDER /mnt/MOUNTPOINT/

    but the same command fails when run as a normal user (of the local machine). After reading here and there, I created a group fuse and added my normal user U to it:

        [prompt] groups U
        U fuse

    Then I changed the group of /dev/fuse, which now has the following rights:

        0 crwxrwx--- 1 root fuse 10, 229 Feb 8 15:33 /dev/fuse

    However, if user U tries to mount the remote filesystem with the same command as above, U gets the following error:

        Incorrect permissions on /dev/fuse, mode of device is 20770, uid/gid is 0/1007. But your effective uid/gid is 1004/1004

    But user U with uid 1004 also has gid 1007 (group fuse). I suspect the problem is related to real vs. effective IDs, but I do not know how to proceed and could not find any clear instructions. Could you please help me?

    There is also another problem. If I mount /mnt/MOUNTPOINT as root and run ls -l /mnt, I get:

        drwxrwxrwx 15 root root 466 Feb 8 16:34 MOUNTPOINT

    If I run ls -l /mnt as normal user U, I get:

        ? ?????????? ? ? ? ? ? MOUNTPOINT

    and in fact when I try to cd /mnt/MOUNTPOINT I get:

        $-> cd /mnt/MOUNTPOINT
        -sh: cd: /mnt/MOUNTPOINT: Not a directory

    Then I unmount /mnt/MOUNTPOINT as root, run ls -l /mnt again as normal user U, and get:

        0 drwxr-xr-x 2 root root 6 Feb 8 15:32 MOUNTPOINT/

    After reading Frank's answer, I killed every shell/process running with the privileges of user U. U still cannot mount the remote filesystem, but the error message has changed; now it is "Login error: Authentication failed". The problem is not related to the remote login/password, since the same command works perfectly when run as root on the local machine.

    Since I cannot get mount_afp to work with normal users, I decided to follow mgorven's suggestion, so I ran:

        mount_afp -o allow_other afp://USER:PASSWD@REMOTE_SERVER/FOLDER /mnt/MOUNTPOINT/

    and

        mount_afp -o user=U afp://USER:PASSWD@REMOTE_SERVER/FOLDER /mnt/MOUNTPOINT/

    The mount succeeds, but user U cannot access the mount point. If U executes ls -l in /mnt:

        U@LOCAL_HOST [/mnt] $-> ls -l
        ls: cannot access MOUNT_POINT: Permission denied
        total 0
        ? ?????????? ? ? ? ? ? MOUNT_POINT

    Is it so hard to get this utility working?
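
    A note that may help (hedged: this is standard FUSE behaviour, and it assumes mount_afp passes its options through to FUSE): by default a FUSE mount is only visible to the user who mounted it, which is why root's mount shows up as "? ??????????" to U, and -o allow_other is refused unless it is enabled globally:

        # /etc/fuse.conf
        user_allow_other

        # then, mounted by root but readable by other users:
        mount_afp -o allow_other afp://USER:PASSWD@REMOTE_SERVER/FOLDER /mnt/MOUNTPOINT/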

  • howto only tunnel specific hosts route through openvpn client on tomato

    - by kcome
    I am a relative newbie in the networking world, although I have done coding and some sysadmin work for a long time, and here I'm only one step from my destination. The whole picture: at home I use one Linksys E3000 as the gateway (if that's the right term), wireless AP, and only routing/switching device. It serves 1 PC and 1 Mac over LAN, and 1 Mac Mini + 1 iPad + 2 smartphones over WiFi. My goal is to run an OpenVPN client on the E3000 (with Tomato firmware), send all WiFi traffic from my iPad and smartphones through it, and leave the other devices on the normal non-VPN route.

    So far I'm able to connect the OpenVPN client on the E3000 to an OpenVPN server and tunnel all my devices' traffic through that connection. What's left is how to selectively route by source IP (at least that's my guess) into the tunnel without bothering the others. I have learned some 'iptables' and 'route' in the past few days, but without much luck, so here comes my question. Here is some info that shows the structure.

    ifconfig -a output (some useless lines stripped); in the web interface C0:C1:C0:1A:E0:28 is WAN, C0:C1:C0:1A:E0:27 is LAN, C0:C1:C0:1A:E0:29 is the 2.4G WiFi AP, and C0:C1:C0:1A:E0:2A is the 5G WiFi AP:

        root@router:/tmp/home/root# ifconfig -a
        br0    Link encap:Ethernet HWaddr C0:C1:C0:1A:E0:27
               inet addr:192.168.1.1 Bcast:192.168.1.255 Mask:255.255.255.0
               UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1
        eth0   Link encap:Ethernet HWaddr C0:C1:C0:1A:E0:27
               UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1
        eth1   Link encap:Ethernet HWaddr C0:C1:C0:1A:E0:29
               UP BROADCAST RUNNING ALLMULTI MULTICAST MTU:1500 Metric:1
        eth2   Link encap:Ethernet HWaddr C0:C1:C0:1A:E0:2A
               UP BROADCAST RUNNING ALLMULTI MULTICAST MTU:1500 Metric:1
        lo     Link encap:Local Loopback
               inet addr:127.0.0.1 Mask:255.0.0.0
               inet6 addr: ::1/128 Scope:Host
        ppp0   Link encap:Point-to-Point Protocol
               inet addr:172.200.1.43 P-t-P:172.200.0.1 Mask:255.255.255.255
               UP POINTOPOINT RUNNING MULTICAST MTU:1480 Metric:1
        vlan1  Link encap:Ethernet HWaddr C0:C1:C0:1A:E0:27
               UP BROADCAST RUNNING ALLMULTI MULTICAST MTU:1500 Metric:1
        vlan2  Link encap:Ethernet HWaddr C0:C1:C0:1A:E0:28
               UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1
        wl0.1  Link encap:Ethernet HWaddr C0:C1:C0:1A:E0:29
               BROADCAST MULTICAST MTU:1500 Metric:1

    brctl show output:

        root@router:/tmp/home/root# brctl show
        bridge name  bridge id          STP enabled  interfaces
        br0          8000.c0c1c01ae027  no           vlan1 eth1 eth2

    Routes before the OpenVPN route-up script:

        root@router:/tmp/home/root# route -n
        Kernel IP routing table
        Destination  Gateway      Genmask          Flags Metric Ref Use Iface
        172.200.0.1  0.0.0.0      255.255.255.255  UH    0      0   0   ppp0
        192.168.1.0  0.0.0.0      255.255.255.0    U     0      0   0   br0
        127.0.0.0    0.0.0.0      255.0.0.0        U     0      0   0   lo
        0.0.0.0      172.200.0.1  0.0.0.0          UG    0      0   0   ppp0

    OpenVPN server push:

        PUSH: Received control message: 'PUSH_REPLY,redirect-gateway,dhcp-option DNS 8.8.8.8,route 172.20.0.1,topology net30,ping 10,ping-restart 120,ifconfig 172.20.0.6 172.20.0.5'

    OpenVPN's stock route-up script:

        Apr 24 14:52:06 router daemon.notice openvpn[1768]: /sbin/ifconfig tun11 172.20.0.6 pointopoint 172.20.0.5 mtu 1500
        Apr 24 14:52:08 router daemon.notice openvpn[1768]: /sbin/route add -net 72.14.177.29 netmask 255.255.255.255 gw 172.200.0.1
        Apr 24 14:52:08 router daemon.notice openvpn[1768]: /sbin/route add -net 0.0.0.0 netmask 128.0.0.0 gw 172.20.0.5
        Apr 24 14:52:08 router daemon.notice openvpn[1768]: /sbin/route add -net 128.0.0.0 netmask 128.0.0.0 gw 172.20.0.5
        Apr 24 14:52:08 router daemon.notice openvpn[1768]: /sbin/route add -net 172.20.0.1 netmask 255.255.255.255 gw 172.20.0.5

    Routes after OpenVPN:

        root@router:/tmp/home/root# route -n
        Kernel IP routing table
        Destination   Gateway      Genmask          Flags Metric Ref Use Iface
        172.20.0.5    0.0.0.0      255.255.255.255  UH    0      0   0   tun11
        72.14.177.29  172.200.0.1  255.255.255.255  UGH   0      0   0   ppp0
        172.200.0.1   0.0.0.0      255.255.255.255  UH    0      0   0   ppp0
        172.20.0.1    172.20.0.5   255.255.255.255  UGH   0      0   0   tun11
        192.168.1.0   0.0.0.0      255.255.255.0    U     0      0   0   br0
        127.0.0.0     0.0.0.0      255.0.0.0        U     0      0   0   lo
        0.0.0.0       172.20.0.5   128.0.0.0        UG    0      0   0   tun11
        128.0.0.0     172.20.0.5   128.0.0.0        UG    0      0   0   tun11
        0.0.0.0       172.200.0.1  0.0.0.0          UG    0      0   0   ppp0

    Something I have noticed and tried: on the web interface of the OpenVPN client there is an option "Create NAT on tunnel". If I check it, the following script exists (probably executed after the OpenVPN connection is established):

        root@router:/tmp/home/root# cat /tmp/etc/openvpn/fw/client1-fw.sh
        #!/bin/sh
        iptables -I INPUT -i tun11 -j ACCEPT
        iptables -I FORWARD -i tun11 -j ACCEPT
        iptables -t nat -I POSTROUTING -s 192.168.1.0/255.255.255.0 -o tun11 -j MASQUERADE

    If I uncheck the option, the last line does not appear. So I guess my issue will be solved by iptables- and NAT-related commands; I just don't have enough knowledge to figure them out. I tried running

        iptables -t nat -I POSTROUTING -s 192.168.1.6 -o tun11 -j MASQUERADE

    manually after OpenVPN connected (192.168.1.6 is the IP address of my iPad). My iPad then reached the internet through the tunnel, but all other devices lost internet access. In case it's needed, here is the NAT part of iptables:

        root@router:/tmp/home/root# iptables -t nat -L -n
        Chain PREROUTING (policy ACCEPT)
        target         prot opt source           destination
        DROP           all  --  0.0.0.0/0        192.168.1.0/24
        WANPREROUTING  all  --  0.0.0.0/0        172.200.1.43
        upnp           all  --  0.0.0.0/0        172.200.1.43

        Chain POSTROUTING (policy ACCEPT)
        target         prot opt source           destination
        MASQUERADE     all  --  0.0.0.0/0        0.0.0.0/0
        SNAT           all  --  192.168.1.0/24   192.168.1.0/24   to:192.168.1.1

        Chain OUTPUT (policy ACCEPT)
        target         prot opt source           destination

        Chain WANPREROUTING (1 references)
        target         prot opt source           destination
        DNAT           icmp --  0.0.0.0/0        0.0.0.0/0        to:192.168.1.1

        Chain upnp (1 references)
        target         prot opt source           destination
        DNAT           udp  --  0.0.0.0/0        0.0.0.0/0        udp dpt:5353 to:192.168.1.3:5353

    Thanks in advance for helping and for reading this much. I hope I gave you every piece of info you need to help :)
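
    For reference, a minimal policy-routing sketch with iproute2 (hedged: it assumes the Tomato build ships the ip utility, that tun11 and peer 172.20.0.5 stay as above, and that the iPad keeps 192.168.1.6 — a static DHCP lease would pin that down). The idea is to stop OpenVPN from replacing the default route and to send only chosen source addresses through a dedicated routing table:

        # in the OpenVPN client config, skip installing the pushed redirect-gateway routes:
        #   route-noexec

        # table 100 = "via VPN"; everything else keeps the ppp0 default route
        ip route add default via 172.20.0.5 dev tun11 table 100
        ip rule add from 192.168.1.6 table 100     # iPad
        ip rule add from 192.168.1.7 table 100     # smartphone (assumed address)
        ip route flush cache

        # NAT only the tunneled clients out of tun11
        iptables -t nat -A POSTROUTING -s 192.168.1.6 -o tun11 -j MASQUERADE
        iptables -t nat -A POSTROUTING -s 192.168.1.7 -o tun11 -j MASQUERADE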

  • Ubuntu 11.10, using wget/curl fails with ssl

    - by Greg Spiers
    Note: see edit 3 for the solution. On a completely new install of Ubuntu I'm getting the following errors when using wget:

        wget https://test.sagepay.com
        --2012-03-27 12:55:12-- https://test.sagepay.com/
        Resolving test.sagepay.com... 195.170.169.8
        Connecting to test.sagepay.com|195.170.169.8|:443... connected.
        ERROR: cannot verify test.sagepay.com's certificate, issued by `/C=US/O=VeriSign, Inc./OU=VeriSign Trust Network/OU=Terms of use at https://www.verisign.com/rpa (c)06/CN=VeriSign Class 3 Extended Validation SSL SGC CA':
          Unable to locally verify the issuer's authority.
        To connect to test.sagepay.com insecurely, use `--no-check-certificate'.

    I've tried installing ca-certificates and configuring the CA certs, and they all appear to be set up in /etc/ssl/certs. The same issue exists for cURL:

        curl https://test.sagepay.com
        curl: (60) SSL certificate problem, verify that the CA cert is OK. Details:
        error:14090086:SSL routines:SSL3_GET_SERVER_CERTIFICATE:certificate verify failed

    Which leads me to believe something is wrong with OpenSSL server-wide. wget and curl both work correctly locally on OS X, and I have confirmed with a few people that they work on their servers, so I suspect it's nothing to do with the server I'm attempting to connect to. Any ideas or suggestions on things to try to narrow it down? Thank you.

    Edit: As requested, verbose output from curl:

        curl -Iv https://test.sagepay.com
        * About to connect() to test.sagepay.com port 443 (#0)
        *   Trying 195.170.169.8... connected
        * Connected to test.sagepay.com (195.170.169.8) port 443 (#0)
        * successfully set certificate verify locations:
        *   CAfile: none
            CApath: /etc/ssl/certs
        * SSLv3, TLS handshake, Client hello (1):
        * SSLv3, TLS handshake, Server hello (2):
        * SSLv3, TLS handshake, CERT (11):
        * SSLv3, TLS alert, Server hello (2):
        * SSL certificate problem, verify that the CA cert is OK. Details:
        error:14090086:SSL routines:SSL3_GET_SERVER_CERTIFICATE:certificate verify failed
        * Closing connection #0
        curl: (60) SSL certificate problem, verify that the CA cert is OK. Details:
        error:14090086:SSL routines:SSL3_GET_SERVER_CERTIFICATE:certificate verify failed
        More details here: http://curl.haxx.se/docs/sslcerts.html

    Edit 2: Using the hash from your comment, I see this:

        ubuntu@srv-tf6sq:/etc/ssl/certs$ ls -al 7651b327.0
        lrwxrwxrwx 1 root root 59 2012-03-27 12:48 7651b327.0 -> Verisign_Class_3_Public_Primary_Certification_Authority.pem
        ubuntu@srv-tf6sq:/etc/ssl/certs$ ls -al Verisign_Class_3_Public_Primary_Certification_Authority.pem
        lrwxrwxrwx 1 root root 94 2012-01-18 07:21 Verisign_Class_3_Public_Primary_Certification_Authority.pem -> /usr/share/ca-certificates/mozilla/Verisign_Class_3_Public_Primary_Certification_Authority.crt
        ubuntu@srv-tf6sq:/etc/ssl/certs$ ls -al /usr/share/ca-certificates/mozilla/Verisign_Class_3_Public_Primary_Certification_Authority.crt
        -rw-r--r-- 1 root root 834 2011-09-28 14:53 /usr/share/ca-certificates/mozilla/Verisign_Class_3_Public_Primary_Certification_Authority.crt
        ubuntu@srv-tf6sq:/etc/ssl/certs$ more /usr/share/ca-certificates/mozilla/Verisign_Class_3_Public_Primary_Certification_Authority.crt
        -----BEGIN CERTIFICATE-----
        MIICPDCCAaUCEDyRMcsf9tAbDpq40ES/Er4wDQYJKoZIhvcNAQEFBQAwXzELMAkG
        A1UEBhMCVVMxFzAVBgNVBAoTDlZlcmlTaWduLCBJbmMuMTcwNQYDVQQLEy5DbGFz
        cyAzIFB1YmxpYyBQcmltYXJ5IENlcnRpZmljYXRpb24gQXV0aG9yaXR5MB4XDTk2
        MDEyOTAwMDAwMFoXDTI4MDgwMjIzNTk1OVowXzELMAkGA1UEBhMCVVMxFzAVBgNV
        BAoTDlZlcmlTaWduLCBJbmMuMTcwNQYDVQQLEy5DbGFzcyAzIFB1YmxpYyBQcmlt
        YXJ5IENlcnRpZmljYXRpb24gQXV0aG9yaXR5MIGfMA0GCSqGSIb3DQEBAQUAA4GN
        ADCBiQKBgQDJXFme8huKARS0EN8EQNvjV69qRUCPhAwL0TPZ2RHP7gJYHyX3KqhE
        BarsAx94f56TuZoAqiN91qyFomNFx3InzPRMxnVx0jnvT0Lwdd8KkMaOIG+YD/is
        I19wKTakyYbnsZogy1Olhec9vn2a/iRFM9x2Fe0PonFkTGUugWhFpwIDAQABMA0G
        CSqGSIb3DQEBBQUAA4GBABByUqkFFBkyCEHwxWsKzH4PIRnN5GfcX6kb5sroc50i
        2JhucwNhkcV8sEVAbkSdjbCxlnRhLQ2pRdKkkirWmnWXbj9T/UWZYB2oK0z5XqcJ
        2HUw19JlYD1n1khVdWk/kfVIC0dpImmClr7JyDiGSnoscxlIaU5rfGW/D/xwzoiQ
        -----END CERTIFICATE-----

    But doing the steps myself, I end up with a different hash:

        strace -o /tmp/foo.out curl -Iv https://test.sagepay.com
        grep ssl /tmp/foo.out
        open("/lib/x86_64-linux-gnu/libssl.so.1.0.0", O_RDONLY) = 3
        stat("/etc/ssl/certs/415660c1.0", {st_mode=S_IFREG|0644, st_size=834, ...}) = 0
        open("/etc/ssl/certs/415660c1.0", O_RDONLY) = 4
        stat("/etc/ssl/certs/415660c1.1", 0x7fff7dab07b0) = -1 ENOENT (No such file or directory)

        readlink -f /etc/ssl/certs/415660c1.0
        /usr/share/ca-certificates/mozilla/Verisign_Class_3_Public_Primary_Certification_Authority.crt

    and more shows that file contains the same certificate as above. Any other ideas? Thank you for the help so far :)

    Edit 3: It turns out that installing the ca-certificates package didn't install the certificate I needed. I found this post about certificates being presented out of order, which seems to be the case with my request to sagepay. The solution ended up being to install another CA certificate from VeriSign. I'm not sure why this fixes the out-of-order issue, but I suspect the ordering was never really the problem and I was in fact missing a certificate all along. The additional certificate is available in that post, but I didn't want to blindly trust it; I looked at the list of CA certificates on cURL's site and it is listed there, so I do trust it. The certificate:

        Verisign Class 3 Public Primary Certification Authority
        =======================================================
        -----BEGIN CERTIFICATE-----
        MIICPDCCAaUCEHC65B0Q2Sk0tjjKewPMur8wDQYJKoZIhvcNAQECBQAwXzELMAkGA1UEBhMCVVMx
        FzAVBgNVBAoTDlZlcmlTaWduLCBJbmMuMTcwNQYDVQQLEy5DbGFzcyAzIFB1YmxpYyBQcmltYXJ5
        IENlcnRpZmljYXRpb24gQXV0aG9yaXR5MB4XDTk2MDEyOTAwMDAwMFoXDTI4MDgwMTIzNTk1OVow
        XzELMAkGA1UEBhMCVVMxFzAVBgNVBAoTDlZlcmlTaWduLCBJbmMuMTcwNQYDVQQLEy5DbGFzcyAz
        IFB1YmxpYyBQcmltYXJ5IENlcnRpZmljYXRpb24gQXV0aG9yaXR5MIGfMA0GCSqGSIb3DQEBAQUA
        A4GNADCBiQKBgQDJXFme8huKARS0EN8EQNvjV69qRUCPhAwL0TPZ2RHP7gJYHyX3KqhEBarsAx94
        f56TuZoAqiN91qyFomNFx3InzPRMxnVx0jnvT0Lwdd8KkMaOIG+YD/isI19wKTakyYbnsZogy1Ol
        hec9vn2a/iRFM9x2Fe0PonFkTGUugWhFpwIDAQABMA0GCSqGSIb3DQEBAgUAA4GBALtMEivPLCYA
        TxQT3ab7/AoRhIzzKBxnki98tsX63/Dolbwdj2wsqFHMc9ikwFPwTtYmwHYBV4GSXiHx0bH/59Ah
        WM1pF+NEHJwZRDmJXNycAA9WjQKZ7aKQRUzkuxCkPfAyAw7xzvjoyVGM5mKf5p/AfbdynMk2Omuf
        Tqj/ZA1k
        -----END CERTIFICATE-----

    I put this in a file at /usr/share/ca-certificates/curl/Verisign_Class_3_Public_Primary_Certification_Authority-from_cURL.crt, then added the following line at the end of /etc/ca-certificates.conf:

        curl/Verisign_Class_3_Public_Primary_Certification_Authority-from_cURL.crt

    After that I ran:

        sudo update-ca-certificates

    Looking in /etc/ssl/certs, I see it correctly linked:

        ls -al | grep cURL
        lrwxrwxrwx 1 root root  69 2012-03-27 16:03 415660c1.0 -> Verisign_Class_3_Public_Primary_Certification_Authority-from_cURL.pem
        lrwxrwxrwx 1 root root  69 2012-03-27 16:03 7651b327.0 -> Verisign_Class_3_Public_Primary_Certification_Authority-from_cURL.pem
        lrwxrwxrwx 1 root root 101 2012-03-27 16:03 Verisign_Class_3_Public_Primary_Certification_Authority-from_cURL.pem -> /usr/share/ca-certificates/curl/Verisign_Class_3_Public_Primary_Certification_Authority-from_cURL.crt

    And everything works!

        curl -I https://test.sagepay.com
        HTTP/1.1 200 OK...
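
    For anyone debugging a similar failure, the chain the server actually presents can be inspected directly with standard openssl tools (a sketch; server-cert.pem is a file you save from the s_client output):

        # dump every certificate the server sends, in the order it sends them
        openssl s_client -connect test.sagepay.com:443 -showcerts </dev/null
        # verify a saved certificate against the system bundle
        openssl verify -CApath /etc/ssl/certs server-cert.pem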

  • Changed Password Won't seem to work for account

    - by erik
    Bit of an odd problem. I've got a server I can SSH into as one of two logins: root or erik. Once logged in as erik, I've tried to switch to the root user:

        # sudo su - root
        Password:

    and entered the password. After several failures I thought I might have forgotten it, so I SSH'd in as root and changed the root password:

        # passwd

    Now, back in the other shell (erik), I attempt to run "sudo su - root" again, and it won't accept the just-changed password. Any ideas?
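
    One thing worth checking first (standard sudo behaviour, not specific to this server): under the default sudoers policy, "sudo su - root" asks for the password of the invoking user, not root's, whereas plain "su -" asks for root's password:

        sudo su - root    # Password: -> erik's password (unless sudoers sets targetpw)
        su - root         # Password: -> root's password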

  • Limit vsftpd upload to a given set of file-names

    - by Chen Levy
    I need to configure an anonymous FTP server with upload. Given this requirement, I'm trying to lock the server down to the bare minimum. One of the restrictions I wish to impose is to allow the upload of only a given set of file names. I tried removing write permission from the upload folder and putting some empty, writable files in it:

        /var/ftp/          [root.root] [drwxr-xr-x]
        |-- upload/        [root.root] [drwxr-xr-x]
        |   |-- upfile1    [ftp.ftp]   [--w-------]
        |   `-- upfile2    [ftp.ftp]   [--w-------]
        `-- download/      [root.root] [drwxr-xr-x]
            `-- ...

    But this approach didn't work: when I tried to upload upfile1, the client tried to delete the file and create a new one in its place, and there are no permissions for that. Is there a way to make this work, or perhaps a different approach, like abusing the deny_file option?
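
    A hedged sketch of the vsftpd.conf side (these are real vsftpd option names; whether a given client will STOR onto the existing files instead of deleting them first is client-dependent):

        anonymous_enable=YES
        write_enable=YES
        anon_upload_enable=YES        # permit STOR for anonymous users
        anon_mkdir_write_enable=NO    # no new directories
        anon_other_write_enable=NO    # no DELE/RNFR/RNTO, so files can only be overwritten in place

    Note that deny_file only expresses a blacklist pattern (e.g. deny_file={*.exe,.*}); it has no allow-list form, so a strict file-name whitelist can't be built from it alone.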

  • Fresh Red Hat Enterprise Linux fails to install httpd using yum

    - by Julian
    I'm trying to install a LAMP stack on a fresh Red Hat server, but yum is misbehaving. Being Linux-illiterate, I'm at a loss.

        $ yum install httpd
        Loaded plugins: security
        Setting up Install Process
        No package httpd available.
        Nothing to do

    My yum config:

        $ cat /etc/yum.conf
        [main]
        cachedir=/var/cache/yum
        keepcache=0
        debuglevel=2
        logfile=/var/log/yum.log
        distroverpkg=redhat-release
        tolerant=1
        exactarch=1
        obsoletes=1
        gpgcheck=1
        plugins=1
        # Note: yum-RHN-plugin doesn't honor this.
        metadata_expire=1h
        # Default.
        # installonly_limit = 3
        # PUT YOUR REPOS HERE OR IN separate files named file.repo
        # in /etc/yum.repos.d

    Other stuff in the yum.repos.d dir:

        $ ls -lah /etc/yum.repos.d/
        total 12K
        drwxr-xr-x  2 root root 4.0K Feb  4 01:15 .
        drwxr-xr-x 59 root root 4.0K Feb  4 01:28 ..
        -rw-r--r--  1 root root  561 Mar 10  2010 rhel-debuginfo.repo

    What could be going on? I thought "out of the box" RHEL 5.5 would be friendlier :)
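
    A likely explanation (hedged, since it depends on how the box was provisioned): RHEL ships with no package repositories enabled until the system is registered with Red Hat Network, which is consistent with only rhel-debuginfo.repo being present. Registering is the supported route:

        rhn_register    # RHEL 5; requires a valid subscription

    An unsupported workaround some admins use is pointing yum at CentOS 5 mirrors (a binary-compatible rebuild). The file below is an illustrative sketch, not an official repo definition:

        # /etc/yum.repos.d/centos-base.repo (hypothetical)
        [base]
        name=CentOS-5 - Base
        baseurl=http://mirror.centos.org/centos/5/os/$basearch/
        gpgcheck=0
        enabled=1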

  • OSX: Python packages fail to install, error message "/usr/local/bin: File Exists"

    - by kylehotchkiss
    I keep trying to install django and other Python packages, and I keep getting the exact same error message:

        Installing django-admin.py script to /usr/local/bin
        error: /usr/local/bin: File exists

    So I looked to make sure my /usr/local folder is okay. At first glance it appears fine, until I try cd-ing into bin: it says it can't, because bin is not a directory. Peculiar, I thought, so then I tried:

        Anchorage:local khotchkiss$ ls -a -l
        total 26168
        drwxr-xr-x   6 root wheel      204 Dec 26 20:18 .
        drwxr-xr-x@ 14 root wheel      476 Feb 24 12:54 ..
        -rwxr-xr-x@  1 root wheel 13395080 Oct 22 23:04 bin
        drwxr-xr-x   8 root wheel      272 Dec 26 20:18 git
        drwxr-xr-x   4 root wheel      136 Dec 18 11:31 include
        drwxr-xr-x  12 root wheel      408 Dec 18 11:31 lib

    I haven't a clue what the 'bin' is, why it's so large, or why it's preventing me from installing Python packages. Any clue?
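
    The ls output shows the problem: /usr/local/bin is a 13 MB regular file (note the leading "-" instead of "d" in the mode), so installers can't create scripts inside it. A hedged cleanup sketch — inspect before deleting, since something put that file there:

        file /usr/local/bin                        # see what kind of file it actually is
        sudo mv /usr/local/bin /usr/local/bin.stray
        sudo mkdir -p /usr/local/bin               # recreate the expected directory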

  • LFD always stops working after ~30 days, until I give /etc/csf/csf.pl -r

    - by gus
    When I run /etc/csf/csf.pl -r, I see lots of lines flushing, and then I begin to get the notification emails again (several per day), for example:

        Time:     Wed Sep 12 08:39:47 2012 +0800
        IP:       221.13.104.162 (CN/China/-)
        Failures: 5 (sshd)
        Interval: 300 seconds
        Blocked:  Permanent Block

        Log entries:
        Sep 12 08:39:25 MyHost sshd[9677]: Failed password for root from 221.13.104.162 port 51106 ssh2
        Sep 12 08:39:28 MyHost sshd[9712]: Failed password for root from 221.13.104.162 port 51690 ssh2
        Sep 12 08:39:32 MyHost sshd[9739]: Failed password for root from 221.13.104.162 port 52128 ssh2
        Sep 12 08:39:36 MyHost sshd[9778]: Failed password for root from 221.13.104.162 port 52670 ssh2
        Sep 12 08:39:40 MyHost sshd[9821]: Failed password for root from 221.13.104.162 port 53155 ssh2

    Then, after about 30 days, the emails stop coming, as if something has filled up and requires flushing again. I don't know much about CSF/LFD, but I would have imagined it works in a FIFO manner, so it should be able to run indefinitely in finite space.

    My /etc/csf/version.txt says 4.83. My cat /proc/version says:

        Linux version 2.6.18-028stab066.8 (root@rhel5-64-build) (gcc version 4.1.2 20070626 (Red Hat 4.1.2-14)) #1 SMP Fri Nov 27 20:19:25 MSK 2009

  • Strange rsync behaviour

    - by Stewie
    So, I start by comparing two directories:

        [root@135759 ]# rsync -av test1/ test2/
        building file list ... done
        sent 128 bytes  received 20 bytes  296.00 bytes/sec
        total size is 6  speedup is 0.04

    They are both in sync. Now let me create a file in test1 and copy it to test2:

        [root@135759 ]# touch test1/hello4.php
        [root@135759 ]# scp test1/hello4.php test2/hello4.php

    I verify that they are the same:

        [root@135759 test2]# md5sum test1/hello4.php
        d41d8cd98f00b204e9800998ecf8427e  hello4.php
        [root@135759 test1]# md5sum test2/hello4.php
        d41d8cd98f00b204e9800998ecf8427e  hello4.php

    Thus, running rsync should list 0 files. Why is the output not right?

        [root@135759 ]# rsync -avn test1/ test2/
        building file list ... done
        hello4.php
        sent 116 bytes  received 24 bytes  280.00 bytes/sec
        total size is 6  speedup is 0.04
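
    The likely cause (standard rsync semantics, noted here as a hedged explanation): rsync's default "quick check" compares size and modification time, not content, and scp without -p stamps the copy with a fresh mtime, so the dry run still flags hello4.php as changed even though the checksums match. Two ways to make the comparison agree:

        scp -p test1/hello4.php test2/hello4.php   # -p preserves modification times
        rsync -avnc test1/ test2/                  # -c compares checksums instead of size+mtime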

  • Who should own the root folder of a drive?

    - by Gaia
    All partitions are NTFS. The system is Windows 7 Pro and does not belong to a domain. I do use shared folders occasionally (both via the Homegroup and old-school sharing). Should I set the owner to be Administrators or SYSTEM for (A) a fixed drive and (B) a removable drive? (C) Is it OK to make every object on the drive inherit the new ownership? I just realized I had some messy settings, because I turned UAC on for the first time in years and am now getting some undesirable prompts. I already have permissions set properly; I am only concerned with ownership.

  • How do I execute a command before kickstart parses ks.cfg?

    - by Crazy Chenz
    How do I execute a command before kickstart parses ks.cfg? My specific problem is that I want to install Red Hat into a tmpfs by telling kickstart:

        part / --fstype ext3 --size 1000 --maxsize 4000 --ondisk loop1

    I've tried doing:

        %pre
        #!/bin/sh
        mkdir /tmp-root
        mount -t tmpfs tmpfs /tmp-root
        dd if=/dev/zero of=/tmp-root/tmp-root.img bs=4096 count=1000000
        losetup /dev/loop1 /tmp-root/tmp-root.img

    but that is not done early enough. Ugh!

    Update: I'm beginning to think it has nothing to do with being done early enough. I believe anaconda and kudzu simply don't consider a loopback device a valid device. I'm not a Python guy, so the idea of hacking up the kickstart code sucks! -Vinnie
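
    One pattern worth knowing here: kickstart resolves %include directives only after %pre has run, so %pre can generate directives dynamically even though ks.cfg itself is parsed first. A sketch (this sidesteps the parse-order problem, though it won't help if anaconda refuses loop devices outright, as the update above suggests):

        %include /tmp/part-include

        %pre
        #!/bin/sh
        mkdir /tmp-root
        mount -t tmpfs tmpfs /tmp-root
        dd if=/dev/zero of=/tmp-root/tmp-root.img bs=4096 count=1000000
        losetup /dev/loop1 /tmp-root/tmp-root.img
        # emit the partition line for the main section to %include
        echo "part / --fstype ext3 --size 1000 --maxsize 4000 --ondisk loop1" > /tmp/part-include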

  • How to create a readonly root linux: Can be mounted as writeable for persistent changes?

    - by Mr Anderson
    I'd like a read-only file system that runs almost entirely in RAM, where the compact flash card or hard drive can be mounted writable to make persistent changes. How do I do this on Linux? I've looked at several tutorials, but none really explain how to create such a system with the option of mounting the storage device and making persistent changes. So far I've looked at this: http://chschneider.eu/linux/thin_client/ I also looked on the old Gentoo wiki, but the article was very specific to Gentoo. I'll be using a Debian-based Linux, but it would be nice if someone could explain how to do this with generic instructions that would work on any distro. Thanks.
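
    A generic sketch of the usual approach (hedged: device names and mount points are placeholders, and older Debian kernels shipped aufs rather than mainline overlayfs, whose option syntax differs): keep the real root read-only and stack a tmpfs on top, so writes land in RAM and vanish on reboot unless copied down.

        mount -o ro /dev/sda1 /ro            # persistent storage, normally read-only
        mount -t tmpfs tmpfs /rw             # RAM-backed writable layer
        mkdir -p /rw/upper /rw/work
        mount -t overlay overlay -o lowerdir=/ro,upperdir=/rw/upper,workdir=/rw/work /merged

        # to persist a change, remount the lower layer read-write and copy the file down:
        mount -o remount,rw /ro
        cp /merged/etc/myconfig /ro/etc/myconfig
        mount -o remount,ro /ro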

  • How to unsquash and mount arch linux live CD

    - by steffen
    I am following this manual to install Arch Linux from within another Linux distro, with the help of an Arch Linux live CD. Here is what I did:

        sudo mount -o loop Downloads/archlinux-2012.11.01-dual.iso arch_iso/
        unsquashfs -d squashfs-root/ arch_iso/arch/x86_64/root-image.fs.sfs

    This results in a directory squashfs-root/ containing one file: root-image.fs. I assume this is not what I want; I want to see something that looks like a Linux root folder. If I follow the next steps, "mount the file system" with

        mount -B /squashfs-root ${livecd_arch}
        mount -t proc /proc ${livecd_arch}/proc

    I get error messages like:

        mount: mount point /home/me/arch_root//proc does not exist

    What am I missing? Thanks!
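
    On that ISO the squashfs wraps a second image: root-image.fs is itself a filesystem image (ext4 on images of that vintage), so it needs a loop mount of its own rather than a bind mount of the directory. A sketch, with mount point names as placeholders:

        mkdir -p arch_root
        sudo mount -o loop squashfs-root/root-image.fs arch_root/   # arch_root/ now has /etc, /usr, ...
        sudo mount -t proc proc arch_root/proc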

  • Slow Memcached: Average 10ms memcached `get`

    - by Chris W.
    We're using New Relic to measure our Python/Django application's performance. New Relic reports that across our system "Memcached" takes an average of 12ms to respond to commands. Drilling down into the top dozen or so web views (by number of requests), I can see that some Memcached gets take up to 30ms; I can't find a single Memcached get that returns in less than 10ms.

    More details on the system architecture: we currently have four application servers, each with a memcached member, and all four members participate in one memcache cluster. We're running on a cloud hosting provider, and all traffic runs across the "internal" network (via "internal" IPs). When I ping from one application server to another, responses take ~0.5ms.

    Isn't 10ms a slow response time for Memcached? As far as I understand, if you think "Memcache is too slow" then "you're doing it wrong". So am I doing it wrong?

    Here's the output of the memcache-top command:

        memcache-top v0.7       (default port: 11211, color: on, refresh: 3 seconds)

        INSTANCE        USAGE   HIT %   CONN    TIME    EVICT/s GETS/s  SETS/s  READ/s  WRITE/s
        cache1:11211    37.1%   62.7%   10      5.3ms   0.0     73      9       3958    84.6K
        cache2:11211    42.4%   60.8%   11      4.4ms   0.0     46      12      3848    62.2K
        cache3:11211    37.5%   66.5%   12      4.2ms   0.0     75      17      6056    170.4K

        AVERAGE:        39.0%   63.3%   11      4.6ms   0.0     64      13      4620    105.7K

        TOTAL:          0.1GB/ 0.4GB    33      13.9ms  0.0     193     38      13.5K   317.2K
        (ctrl-c to quit.)

    And here is the output of the top command on one machine (roughly the same on all cluster machines; as you can see, CPU utilization is very low, because these machines only run memcache):

        top - 21:48:56 up 1 day, 4:56, 1 user, load average: 0.01, 0.06, 0.05
        Tasks: 70 total, 1 running, 69 sleeping, 0 stopped, 0 zombie
        Cpu(s): 0.0%us, 0.0%sy, 0.0%ni, 99.7%id, 0.0%wa, 0.0%hi, 0.0%si, 0.3%st
        Mem: 501392k total, 424940k used, 76452k free, 66416k buffers
        Swap: 499996k total, 13064k used, 486932k free, 181168k cached

        PID   USER    PR  NI   VIRT   RES   SHR  S  %CPU  %MEM  TIME+     COMMAND
        6519  nobody  20  0    384m   74m   880  S  1.0   15.3  18:22.97  memcached
        3     root    20  0    0      0     0    S  0.3   0.0   0:38.03   ksoftirqd/0
        1     root    20  0    24332  1552  776  S  0.0   0.3   0:00.56   init
        2     root    20  0    0      0     0    S  0.0   0.0   0:00.00   kthreadd
        4     root    20  0    0      0     0    S  0.0   0.0   0:00.00   kworker/0:0
        5     root    20  0    0      0     0    S  0.0   0.0   0:00.02   kworker/u:0
        6     root    RT  0    0      0     0    S  0.0   0.0   0:00.00   migration/0
        7     root    RT  0    0      0     0    S  0.0   0.0   0:00.62   watchdog/0
        8     root    0   -20  0      0     0    S  0.0   0.0   0:00.00   cpuset
        9     root    0   -20  0      0     0    S  0.0   0.0   0:00.00   khelper
        ...output truncated...
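
    One thing worth ruling out (a hedged suggestion): time a raw get from an app server outside Django and New Relic, since the 10ms figure may include client-side serialization and instrumentation overhead. A crude shell sketch using nc — note each iteration pays a TCP connection setup that a persistent client wouldn't:

        printf 'set ping 0 0 4\r\npong\r\n' | nc cache1 11211
        time (for i in $(seq 100); do printf 'get ping\r\nquit\r\n' | nc cache1 11211 >/dev/null; done)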

  • How can I repair my USB drive?

    - by yurko
    My USB drive is stuck in a read-only state and I can't repair it. First of all, I tried to erase it using dd:

        root@yurko-laptop:/home/yurko-laptop# ls -l /dev/disk/by-id | grep usb
        lrwxrwxrwx 1 root root  9 ??? 18 23:45 usb-Generic_Flash_Disk_C173828A-0:0 -> ../../sdb
        lrwxrwxrwx 1 root root 10 ??? 18 23:45 usb-Generic_Flash_Disk_C173828A-0:0-part1 -> ../../sdb1
        root@yurko-laptop:/home/yurko-laptop# dd if=/dev/zero of=/dev/sdb
        dd: writing to `/dev/sdb': No space left on device
        8257537+0 records in
        8257536+0 records out
        4227858432 bytes (4.2 GB) copied, 942.633 s, 4.5 MB/s

    After that, I wanted to create a new filesystem using fdisk:

        root@yurko-laptop:/home/yurko-laptop# fdisk /dev/sdb
        You will not be able to write the partition table.
        WARNING: DOS-compatible mode is deprecated. It's strongly recommended to
        switch off the mode (command 'c') and change display units to sectors (command 'u').

        Command (m for help): p

        Disk /dev/sdb: 4227 MB, 4227858432 bytes
        4 heads, 63 sectors/track, 32768 cylinders
        Units = cylinders of 252 * 512 = 129024 bytes
        Sector size (logical/physical): 512 bytes / 512 bytes
        I/O size (minimum/optimal): 512 bytes / 512 bytes
        Disk identifier: 0x00000000

        Device Boot  Start  End    Blocks   Id  System
        /dev/sdb1    18     32768  4126596  b   W95 FAT32

    fdisk showed that the partition still exists and that I can't write the partition table. I tried to delete the existing partition:

        Command (m for help): d
        Selected partition 1
        Command (m for help): w
        Unable to write /dev/sdb

    Why am I not able to write the partition table? Does it mean some hardware failure occurred? And is it possible to repair this USB drive? I've tried hdparm, and it shows the readonly flag is on:

        root@yurko-laptop:/home/yurko-laptop# hdparm /dev/sdb
        /dev/sdb:
        SG_IO: bad/missing sense data, sb[]: f0 00 05 00 00 00 00 0a 00 00 00 00 26 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
        multcount   = 0 (off)
        readonly    = 1 (on)
        readahead   = 256 (on)
        geometry    = 1016/131/62, sectors = 8257536, start = 0
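
    Two hedged things to try before writing the drive off: clear the kernel-side read-only flag and re-check it. If the flag springs back or writes still fail, the flash controller has most likely switched itself to read-only after wearing out, which no host-side command can undo:

        hdparm -r0 /dev/sdb             # attempt to clear the read-only flag
        hdparm /dev/sdb | grep readonly
        blockdev --setrw /dev/sdb       # the equivalent knob via blockdev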

  • Windows errors, how do I find root cause and fix it? Getting several errors

    - by Eric Martin
    My server is having issues and not responding to customers' HTTPS requests. I checked the Event Viewer and found several errors. These two are listed a couple of times:

        WINS encountered a database error. This may or may not be a serious error. WINS will try to recover from it. You can check the database error events under the 'Application Log' category of the Event Viewer for the Exchange Component, ESENT, source to find out more details about database errors. If you continue to see a large number of these errors consistently over time (a span of few hours), you may want to restore the WINS database from a backup. The error number is in the second DWORD of the data section.

    And this one:

        An error occurred while using SSL configuration for socket address 0.0.0.0:444. The error status code is contained within the returned data.

        SQL Server is not ready to accept new client connections. Wait a few minutes before trying again. If you have access to the error log, look for the informational message that indicates that SQL Server is ready before trying to connect again. [CLIENT: xxx.xxx.xxxx.xxx]

    I also found this in the Event Viewer, but the computer has been restarted since this message appeared and I have not seen it again:

        Configuring the Page file for crash dump failed. Make sure there is a page file on the boot partition and that it is large enough to contain all physical memory.

    These are my virtual memory settings: [screenshot]

    I'm not familiar with WINS, so I wasn't sure if that is where to start or how to resolve it. Is the WINS error causing the other problems, or should I be looking somewhere else?

  • Linux server apache httpd processes take i/o wait to close to 100% and lock down server

    - by user3682065
    For about 5 days now, seemingly out of the blue, my Linux server has started locking up from time to time. The pattern is always the same, as far as I can tell from the top and iotop commands around the time it starts: one or more httpd processes (usually one) hang and start using 100% of the CPU, %wa goes close to 100%, and in iotop I see several httpd processes at 99.99% in the IO column.

    I'm also running an SVN server on this machine through Apache, and the one way I've been able to consistently reproduce this is to do an SVN commit of new files, or an SVN update, against the repository on this server (I am the only one using this repository). That always reproduces the scenario, even though until very recently I had no problems at all checking in and out of SVN. Sometimes it also just happens for no detectable reason at all.

    So it seems there is some issue with my Apache that leads its processes to do an enormous amount of I/O on certain triggers, and I was wondering if anyone could help me uncover it.

    EDIT: OK, now it's happening again. This is top:

        [root@server ~]# top
        top - 10:56:54 up 2:59, 5 users, load average: 171.46, 70.35, 27.01
        Tasks: 328 total, 2 running, 326 sleeping, 0 stopped, 0 zombie
        Cpu(s): 1.9%us, 2.0%sy, 0.0%ni, 0.0%id, 96.1%wa, 0.0%hi, 0.0%si, 0.0%st
        Mem: 2021144k total, 1968192k used, 52952k free, 2500k buffers
        Swap: 4194288k total, 2938584k used, 1255704k free, 39008k cached

        PID    USER    PR  NI  VIRT   RES   SHR   S  %CPU  %MEM  TIME+    COMMAND
        10390  apache  20  0   2774m  936m  6200  D  2.0   47.4  1:52.27  httpd
        2149   root    20  0   927m   13m   1040  S  0.7   0.7   1:50.46  namecoind
        11     root    20  0   0      0     0     R  0.3   0.0   0:30.10  events/0
        23     root    20  0   0      0     0     S  0.3   0.0   0:17.88  kblockd/1
        2049   root    20  0   382m   4932  2880  D  0.3   0.2   0:03.67  httpd
        2144   root    20  0   1702m  69m   1164  S  0.3   3.5   5:19.68  bitcoind
        6325   root    20  0   15164  1100  656   R  0.3   0.1   0:11.09  top
        10311  apache  20  0   387m   9496  7320  D  0.3   0.5   0:01.89  httpd
        10313  apache  20  0   391m   10m   7364  D  0.3   0.5   0:02.40  httpd
        10466  apache  20  0   399m   12m   7392  D  0.3   0.7   0:02.41  httpd
        10599  apache  20  0   391m   9324  7340  D  0.3   0.5   0:00.15  httpd
        10628  apache  20  0   384m   7620  4052  D  0.3   0.4   0:00.01  httpd
        10633  apache  20  0   384m   7048  3504  D  0.3   0.3   0:00.01  httpd
        10634  apache  20  0   384m   8012  4048  D  0.3   0.4   0:00.02  httpd
        10638  apache  20  0   400m   22m   9.8m  D  0.3   1.1   0:01.93  httpd
        10640  apache  20  0   385m   8288  4028  D  0.3   0.4   0:00.03  httpd
        10641  apache  20  0   401m   21m   6376  D  0.3   1.1   0:01.45  httpd
        10759  apache  20  0   385m   8816  3480  D  0.3   0.4   0:01.45  httpd
        10773  apache  20  0   384m   8044  3464  D  0.3   0.4   0:00.02  httpd

    This is an iotop snapshot:

        Total DISK READ: 5.93 M/s | Total DISK WRITE: 0.00 B/s
        TID    PRIO  USER      DISK READ    DISK WRITE  SWAPIN   IO>      COMMAND
        10732  be/4  apache    3.76 K/s     0.00 B/s    0.00 %   58.48 %  httpd
        876    be/3  root      0.00 B/s     52.68 K/s   0.00 %   52.98 %  [jbd2/dm-1-8]
        10906  be/4  root      124.17 K/s   0.00 B/s    0.00 %   23.03 %  sh -c [ -x /usr/local/psa/admin/sbin/backupmng ] && /usr/local/psa/admin/sbin/backupmng >/dev/null 2>&1
        2156   be/4  root      206.94 K/s   0.00 B/s    0.00 %   21.15 %  bitcoind
        10904  be/4  mysql     0.00 B/s     0.00 B/s    0.00 %   18.94 %  mysqld --basedir=/usr --datadir=/var/lib/mysql --user=mysql --log-error=/var/log/mysqld.log --pid-file=/var/run/mysqld/mysqld.pid --socket=/var/lib/mysql/mysql.sock
        10773  be/4  apache    7.53 K/s     0.00 B/s    0.00 %   14.77 %  httpd
        10641  be/4  apache    15.05 K/s    0.00 B/s    0.00 %   11.57 %  httpd
        10399  be/4  apache    1057.29 K/s  0.00 B/s    43.16 %  10.56 %  httpd
        10682  be/4  sw-cp-se  158.03 K/s   0.00 B/s    0.00 %   7.45 %   sw-engine-cgi -c /usr/local/psa/admin/conf/php.ini -d auto_prepend_file=auth.php3 -u psaadm
        10774  be/4  apache    3.76 K/s     0.00 B/s    0.00 %   6.53 %   httpd
        10624  be/4  apache    0.00 B/s     0.00 B/s    0.00 %   5.53 %   httpd
        10356  be/4  apache    899.26 K/s   0.00 B/s    35.52 %  4.01 %   httpd
        10795  be/4  apache    0.00 B/s     0.00 B/s    0.00 %   3.93 %   httpd
        10804  be/4  apache    7.53 K/s     0.00 B/s    0.00 %   3.08 %   httpd
        4379   be/4  root      2.89 M/s     0.00 B/s    99.99 %  0.00 %   namecoind
        10619  be/4  apache    462.80 K/s   0.00 B/s    7.80 %   0.00 %   httpd
        10636  be/4  apache    3.76 K/s     0.00 B/s    0.00 %   0.00 %   httpd
        10716  be/4  mysql     105.35 K/s   0.00 B/s    5.92 %   0.00 %   mysqld --basedir=/usr --datadir=/var/lib/mysql --user=mysql --log-error=/var/log/mysqld.log --pid-file=/var/run/mysqld/mysqld.pid --socket=/var/lib/mysql/mysql.sock
        1988   be/4  root      18.81 K/s    0.00 B/s    0.00 %   0.00 %   spamd_full.sock

    I also ran lsof -p for pid 10390, which was at the top of the top output; this is the bottom line, where I can sort of see what request it was serving, and it says CLOSE_WAIT:

        httpd 10390 apache 34u IPv6 315879 0t0 TCP default-domain.com:https->crawl-66-249-65-91.googlebot.com:42907 (CLOSE_WAIT)

    I'm still not sure what exactly is causing this. I killed that process, but %wa and the load average remained high; I also stopped mysqld and other services. It really only goes down once I stop httpd altogether, and even then I can't start it again without finding the remaining hanging httpd processes via "netstat -tulpn" and killing those, or doing "killall -9 httpd", waiting a while for it to cycle through them all, and then doing /etc/init.d/httpd start.
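
    Given the heavy swap use in that top output (2.9 GB of swap used on a 2 GB RAM box) and the 2.7 GB httpd process, swap thrashing is one plausible culprit; a few hedged diagnostics to run during the next hang:

        apachectl fullstatus                    # needs mod_status: shows which URLs the busy workers serve
        strace -p 10390 -e trace=read,write     # what the spinning httpd worker is actually doing
        vmstat 1                                # watch the si/so columns for swap thrashing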

  • how to install mpgtx from source code

    - by Ahmet vardar
    I am new to Linux servers. I have an mpgtx folder in my root directory; how can I install it? The README says:

        ./configure && make

    When I type this I get a permission denied error. Thanks.

    EDIT: Here are the steps I took:

        root@server [/]# cd /mpgtx
        root@server [/mpgtx]# ./configure
        -bash: ./configure: Permission denied
        root@server [/mpgtx]# make
        -----------------------------------------------------------------------------
        Hello ! I'm afraid I'm a dummy Makefile.
        My goal in life is to politely ask you to run the configure script to actual-
        ly generate a real Makefile.
        Would you be kind enough to type "./configure --help" to see the options
        that will suit your needs ?
        Please note that typing "./configure" without option will generate a
        Makefile that will suit most people needs.
        I wish you a good day. Please don't drive to fast.
        -----------------------------------------------------------------------------
        root@server [/mpgtx]# ./configure
        -bash: ./configure: Permission denied
        root@server [/mpgtx]#
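
    "Permission denied" here usually means the configure script lost its execute bit when the archive was unpacked (or the filesystem is mounted noexec). Running it through the shell sidesteps the bit entirely; a sketch:

        cd /mpgtx
        sh ./configure && make && make install
        # or restore the execute bit and run it directly:
        chmod +x configure && ./configure && make && make install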

  • What are these folders in the root of my drive?

    - by Max Schmeling
    The following folders keep showing up:

        C:\Default\
        C:\NativeImages\

    They both contain a ton of folders, each with a bunch of HTML files inside. I'm assuming this has something to do with NGEN, because of the NativeImages folder. How do I get rid of these folders and keep them from coming back, or otherwise move them so they get put somewhere else? Thanks

  • Graphite Running using daemon tools getting defunct

    - by pradeepchhetri
    I am running carbon-cache.py and carbon-aggregator.py under daemontools. When I made some changes to storage-schemas.conf and tried to restart carbon-cache.py, I found that it frequently becomes a zombie:

        root 3367 3366  0 03:23 pts/1 00:00:00 supervise carbon-aggregator
        root 3371 3366  0 03:23 pts/1 00:00:00 supervise carbon-cache
        root 3373 3367  3 03:23 pts/1 00:00:02 /usr/bin/python /usr/bin/carbon-aggregator.py --debug start
        root 3379 3372  0 03:23 pts/1 00:00:00 multilog t /var/log/multilog/carbon-cache
        root 3382 3368  0 03:23 pts/1 00:00:00 multilog t /var/log/multilog/carbon-aggregator
        root 3638 3371 21 03:24 pts/1 00:00:00 [carbon-cache.py] <defunct>

    Can someone tell me what the reason may be?
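
    One hedged explanation: daemontools expects its child to stay in the foreground, so restarting carbon through its own start/stop commands (which fork outside supervise's control) can leave supervise holding a dead child it never respawns. Keeping carbon in the foreground in the run script and restarting via svc avoids that; service paths below are assumptions:

        # /service/carbon-cache/run
        #!/bin/sh
        exec /usr/bin/python /usr/bin/carbon-cache.py --debug start

        svc -t /service/carbon-cache     # restart: send TERM, supervise respawns it
        svstat /service/carbon-cache     # verify it is up with a fresh pid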
