Search Results

Search found 14874 results on 595 pages for 'mysql connector'.


  • How to connect to a Virtualbox guest from the host when network cable unplugged

    - by Greg K
    I'd like to work offline (I'm flying to the US twice this month), to do this I need access to a linux development server. When I work from home I boot a VirtualBox VM and that acts as my dev server for the day (providing Apache, PHP & MySQL to run my server side code). However, I'd like to work with my VM when I'm not connected to a network. I have my Ubuntu VM guest set up with a bridge connection so it can serve HTTP and provide SSH access from inside my local network. I've tried to manually configure my network settings on both Mac OSX (the host) and Ubuntu (the guest) but I can't even ping my own NIC address (127.0.0.1 can, 192.168.21.x I can't) in OS X when I unplug the cable. Manual network settings: $ ifconfig en0 en0: flags=8963<UP,BROADCAST,SMART,RUNNING,PROMISC,SIMPLEX,MULTICAST> mtu 1500 ether 00:xx:xx:xx:xx:xx inet 192.168.21.5 netmask 0xffffff00 broadcast 192.168.21.255 media: autoselect (100baseTX <full-duplex,flow-control>) status: active I can ping localhost fine, as well as my VM (.20) and SSH too. $ ping 192.168.21.5 PING 192.168.21.5 (192.168.21.5): 56 data bytes 64 bytes from 192.168.21.5: icmp_seq=0 ttl=64 time=0.085 ms 64 bytes from 192.168.21.5: icmp_seq=1 ttl=64 time=0.102 ms 64 bytes from 192.168.21.5: icmp_seq=2 ttl=64 time=0.100 ms 64 bytes from 192.168.21.5: icmp_seq=3 ttl=64 time=0.094 ms $ ping 192.168.21.20 PING 192.168.21.20 (192.168.21.20): 56 data bytes 64 bytes from 192.168.21.20: icmp_seq=0 ttl=64 time=0.910 ms 64 bytes from 192.168.21.20: icmp_seq=1 ttl=64 time=1.181 ms 64 bytes from 192.168.21.20: icmp_seq=2 ttl=64 time=1.159 ms 64 bytes from 192.168.21.20: icmp_seq=3 ttl=64 time=1.320 ms Network cable unplugged: $ ifconfig en0 en0: flags=8963<UP,BROADCAST,SMART,RUNNING,PROMISC,SIMPLEX,MULTICAST> mtu 1500 ether 00:xx:xx:xx:xx:xx media: autoselect status: inactive $ ping 192.168.21.5 PING 192.168.21.5 (192.168.21.5): 56 data bytes ping: sendto: No route to host ping: sendto: No route to host Request timeout for icmp_seq 0 ping: sendto: No route to host Request timeout for icmp_seq 1 Does OS X disable the NIC when the network cable is unplugged? Any way to stop it doing this?
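
    One way around this, rather than fighting OS X's link detection, is to give the guest a host-only adapter alongside the bridged one, since host-only networks do not depend on a physical link. A rough sketch using VBoxManage; the VM name "devserver" and the 192.168.56.x addressing are assumptions, not taken from the setup above:

        # on the OS X host: create a host-only interface (typically appears as vboxnet0)
        VBoxManage hostonlyif create
        VBoxManage hostonlyif ipconfig vboxnet0 --ip 192.168.56.1 --netmask 255.255.255.0
        # with the VM powered off, attach it as a second NIC on the guest
        VBoxManage modifyvm "devserver" --nic2 hostonly --hostonlyadapter2 vboxnet0

    Inside the guest, eth1 would then get a static address such as 192.168.56.10, and SSH/HTTP would go to that address whenever the cable is unplugged.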

    Read the article

  • Why can't I use SSL certs imported via Server Admin in a custom Apache install?

    - by morgant
    I've got a couple of Mac OS X 10.6.8 Server web servers that run a custom AMP255 (Apache 2.x, MySQL 5.x, and PHP 5.x) stack installed using MacPorts. We've got a lot of Mac OS X Server servers and generally install SSL certs via Server Admin and they "just work" in the built-in services, however, these web servers have always had SSL certs installed in a non-standard location and used only for Apache. Long story short, we're trying to standardize this part of our administration and install certs via Server Admin, but have run into the following issue: when the certs are installed via Server Admin and referenced in our Apache conf files, Apache then prompts for a password upon trying to start. It does not seem to be any password we know, certainly not the admin or keychain passwords! We've added the _www user to the certusers (mainly just to ensure they have the proper access to the private key in /etc/certificates/). So, with the custom installed certs we have the following files (basically just pasted in from the company we purchase our certs from): -rw-r--r-- 1 root admin 1395 Apr 10 11:22 *.domain.tld.ca -rw-r--r-- 1 root admin 1656 Apr 10 11:21 *.domain.tld.cert -rw-r--r-- 1 root admin 1680 Apr 10 11:22 *.domain.tld.key And the following in the VirtualHost in /opt/local/apache2/conf/extra/httpd-ssl.conf: SSLCertificateFile /path/to/certs/*.domain.tld.cert SSLCertificateKeyFile /path/to/certs/*.domain.tld.key SSLCACertificateFile /path/to/certs/*.domain.tld.ca This setup functions normally. If we use the certs installed via Server Admin, which both Server Admin & Keychain Assistant show as valid, they're installed in /etc/certificates/ as follows: -rw-r--r-- 1 root wheel 1655 Apr 9 13:44 *.domain.tld.SOMELONGHASH.cert.pem -rw-r--r-- 1 root wheel 4266 Apr 9 13:44 *.domain.tld.SOMELONGHASH.chain.pem -rw-r----- 1 root certusers 3406 Apr 9 13:44 *.domain.tld.SOMELONGHASH.concat.pem -rw-r----- 1 root certusers 1751 Apr 9 13:44 *.domain.tld.SOMELONGHASH.key.pem And if we replace the aforementioned lines in our httpd-ssl.conf with the following: SSLCertificateFile /etc/certificates/*.domain.tld.SOMELONGHASH.cert.pem SSLCertificateKeyFile /etc/certificates/*.domain.tld.SOMELONGHASH.key.pem SSLCertificateChainFile /etc/certificates/*.domain.tld.SOMELONGHASH.chain.pem This prompts for the unknown password. I have also tried httpd-ssl.conf configured as follows: SSLCertificateFile /etc/certificates/*.domain.tld.SOMELONGHASH.cert.pem SSLCertificateKeyFile /etc/certificates/*.domain.tld.SOMELONGHASH.key.pem SSLCertificateChainFile /etc/certificates/*.domain.tld.SOMELONGHASH.concat.pem And as: SSLCertificateFile /etc/certificates/*.domain.tld.SOMELONGHASH.cert.pem SSLCertificateKeyFile /etc/certificates/*.domain.tld.SOMELONGHASH.key.pem SSLCACertificateFile /etc/certificates/*.domain.tld.SOMELONGHASH.chain.pem We've verified that the certificate is configured to allow all applications access it (in Keychain Assistant). A diff of the /etc/certificates/*.domain.tld.SOMELONGHASH.key.pem & *.domain.tld.key files shows the former is encrypted and the latter is not, so we're assuming that Server Admin/Keychain Assistant is encrypting them for some reason. I know I can create an unencrypted key file as follows: sudo openssl rsa -in /etc/certificates/*.domain.tld.SOMELONGHASH.key.pem -out /etc/certificates/*.domain.tld.SOMELONGHASH.key.no_password.pem But, I can't do that without entering the password. 
I thought maybe I could export an unencrypted copy of the key from Keychain Admin, but I'm not seeing such an option (not to mention that the .pem options are greyed out in all export options). Any assistance would be greatly appreciated.
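
    If the passphrase on the Server Admin-installed key can be recovered (or the key re-imported with a known one), Apache can be pointed at a passphrase helper instead of prompting interactively at startup. A sketch only; the helper path and the idea of keeping the passphrase in a root-only file are assumptions, not part of the existing setup:

        # /opt/local/apache2/conf/extra/httpd-ssl.conf
        SSLPassPhraseDialog exec:/opt/local/apache2/bin/ssl-passphrase.sh

        # /opt/local/apache2/bin/ssl-passphrase.sh (mode 0700, owned by root)
        #!/bin/sh
        # Apache invokes this with "servername:port algorithm" and reads the passphrase from stdout
        cat /etc/certificates/ssl-passphrase.txt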

    Read the article

  • Dovecot, Postfix, Postfixadmin - can't send/receive mail

    - by Jack
    I am setting up a mail server: Dovecot and Postfix with MySQL support and Postfixadmin. Spend literally all day trying to figure it out, but I'm still unable to neither send nor receive any emails. To my knowledge, I have configured everything correctly, so either there is another problem, or my knowledge isn't good enough. Here is what I get when I use "echo test | mail [email protected]:" Jul 11 00:41:07 server postfix/pickup[17999]: 5B0D32AE1B: uid=0 from= Jul 11 00:41:07 server postfix/cleanup[19444]: 5B0D32AE1B: message-id=<[email protected] Jul 11 00:41:07 server postfix/qmgr[18513]: 5B0D32AE1B: from=, size=329, nrcpt=1 (queue active) Jul 11 00:41:12 server postfix/smtp[19448]: 5B0D32AE1B: to=, relay=none, delay=5.3, delays=0.1/0.01/5.2/0, dsn=4.4.3, status=deferred (Host or domain name not found. Name service error for name=dsa.com type=MX: Host not found, try again) *@mail.asd.com is changed for privacy reasons, same goes for [email protected]. *The bold text is where it, for some reason, prints out dsa.com - even though I haven't found it anywhere in the files which I've edited during the installation, nor my DNS is .com in the first place. Here is what I get when I try to send out an email from Postfix Admin interface: Jul 11 00:49:08 server postfix/smtpd[19479]: connect from localhost[127.0.0.1] Jul 11 00:49:08 server postfix/trivial-rewrite[19484]: warning: do not list domain asd.com in BOTH mydestination and virtual_mailbox_domains Jul 11 00:49:08 server postfix/smtpd[19479]: 4F7892AE1E: client=localhost[127.0.0.1] Jul 11 00:49:08 server postfix/cleanup[19487]: 4F7892AE1E: message-id=<[email protected] Jul 11 00:49:08 server postfix/qmgr[18513]: 4F7892AE1E: from=, size=317, nrcpt=1 (queue active) Jul 11 00:49:08 server postfix/smtpd[19479]: disconnect from localhost[127.0.0.1] Jul 11 00:49:10 server postfix/smtpd[19492]: connect from localhost[127.0.0.1] Jul 11 00:49:10 server postfix/trivial-rewrite[19484]: warning: do not list domain asd.com in BOTH mydestination and virtual_mailbox_domains Jul 11 00:49:10 server postfix/smtpd[19492]: 743AE2AE1F: client=localhost[127.0.0.1] Jul 11 00:49:10 server postfix/cleanup[19487]: 743AE2AE1F: message-id=<[email protected] Jul 11 00:49:10 server postfix/qmgr[18513]: 743AE2AE1F: from=, size=772, nrcpt=1 (queue active) Jul 11 00:49:10 server postfix/smtpd[19492]: disconnect from localhost[127.0.0.1] Jul 11 00:49:10 server amavis[13437]: (13437-11) Passed CLEAN, LOCAL [127.0.0.1] - , Message-ID: <[email protected], mail_id: 86+KQY93ANel, Hits: -0.002, size: 317, queued_as: 743AE2AE1F, 2145 ms Jul 11 00:49:10 server postfix/smtp[19489]: 4F7892AE1E: to=, relay=127.0.0.1[127.0.0.1]:10024, delay=2.3, delays=0.17/0.01/0/2.1, dsn=2.0.0, status=sent (250 2.0.0 from MTA([127.0.0.1]:10025): 250 2.0.0 Ok: queued as 743AE2AE1F) Jul 11 00:49:10 server postfix/qmgr[18513]: 4F7892AE1E: removed I really don't know what might be the problem... If you need to know something, feel free to ask and I'll clarify something.
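
    Going by the warning in the second log ("do not list domain asd.com in BOTH mydestination and virtual_mailbox_domains"), a reasonable first step is to make the domain a member of exactly one Postfix address class. A main.cf sketch, assuming asd.com is meant to be handled as a virtual mailbox domain:

        # /etc/postfix/main.cf (sketch)
        mydestination = localhost, localhost.localdomain    # asd.com removed from here
        virtual_mailbox_domains = asd.com
        # then: postfix reload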

    Read the article

  • Log - Server kernel: INFO: task httpd:000000 blocked for more than 120 seconds

    - by valter
    Almost every day my server is crashing due to high server load, and even restarting apache or mysql can't solve the problem. I need to reboot the server to fix it, or it crashes again due to the high load. The system log records something like this when it crashes: Aug 11 18:33:53 server kernel: INFO: task httpd:20008 blocked for more than 120 seconds. Aug 11 18:33:53 server kernel: "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message. Aug 11 18:33:53 server kernel: httpd D ffffffff801538ac 0 20008 5816 20066 19809 (NOTLB) Aug 11 18:33:53 server kernel: ffff81025a299dc8 0000000000000082 ffff81033b4c0740 ffffffff80009a14 Aug 11 18:33:53 server kernel: ffff8101063f8d80 0000000000000009 ffff8100b758f7e0 ffff8101c57187e0 Aug 11 18:33:53 server kernel: 00009436d4100b6c 000000000001d50f ffff8100b758f9c8 000000083b531588 Aug 11 18:33:53 server kernel: Call Trace: Aug 11 18:33:53 server kernel: [<ffffffff80009a14>] __link_path_walk+0x173/0xfb9 Aug 11 18:33:53 server kernel: [<ffffffff8002cc16>] mntput_no_expire+0x19/0x89 Aug 11 18:33:53 server kernel: [<ffffffff80063c4f>] __mutex_lock_slowpath+0x60/0x9b Aug 11 18:33:53 server kernel: [<ffffffff80023908>] __path_lookup_intent_open+0x56/0x97 Aug 11 18:33:53 server kernel: [<ffffffff80063c99>] .text.lock.mutex+0xf/0x14 Aug 11 18:33:53 server kernel: [<ffffffff8001b21f>] open_namei+0xea/0x712 Aug 11 18:33:54 server kernel: [<ffffffff8002768a>] do_filp_open+0x1c/0x38 Aug 11 18:33:54 server kernel: Firewall: *UDP_IN Blocked* IN=eth1 OUT= MAC=ff:ff:ff:ff:ff:ff:00:30:48:9e:6e:99:08:00 SRC=208.43.135.158 DST=255.255.255.255 LEN=151 TOS=0x00 PREC=0x00 TTL=64 ID=0 DF PROTO=UDP SPT=38354 DPT=6112 LEN=131 Aug 11 18:33:54 server kernel: [<ffffffff8001a061>] do_sys_open+0x44/0xbe Aug 11 18:33:54 server kernel: [<ffffffff8005d28d>] tracesys+0xd5/0xe0 I googled a lot trying to find a solution, but it looks like the solution is just to update the kernel or disk driver, things that I don't know how to do. In this URL http://bugs.centos.org/view.php?id=4515 a lot of people report similar problems, except for the fact that theirs are not related to httpd like mine. According to one member, one solution would be to add "elevator=noop" to /etc/grub.conf like in this example: title CentOS (2.6.18-238.12.1.el5xen) root (hd0,0) kernel /vmlinuz-2.6.18-238.12.1.el5xen ro root=/dev/VolGroup00/LogVol00 elevator=noop initrd /initrd-2.6.18-238.12.1.el5xen.img Would this really solve the problem? My disks are working in RAID. Can this cause problems for my server? Is there any other solution?
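
    Before committing elevator=noop to grub.conf, the scheduler can also be switched at runtime to see whether it makes any difference; a sketch, assuming the RAID volume shows up as /dev/sda (adjust the device name):

        # the scheduler shown in brackets is the active one
        cat /sys/block/sda/queue/scheduler
        # switch to noop for this boot only (reverts on reboot)
        echo noop > /sys/block/sda/queue/scheduler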

    Read the article

  • PHP-FPM Pool, Child Processes and Memory Consumption

    - by Jhilke Dai
    In my PHP-FPM configuration I have 3 Pools, the eg: Config is: ;;;;;;;;;;;;;;;;;;;;;;; ; Pool 1 ; ;;;;;;;;;;;;;;;;;;;;;;; [www1] user = www group = www listen = /tmp/php-fpm1.sock; listen.backlog = -1 listen.owner = www listen.group = www listen.mode = 0666 pm = dynamic pm.max_children = 40 pm.start_servers = 6 pm.min_spare_servers = 6 pm.max_spare_servers = 12 pm.max_requests = 250 slowlog = /var/log/php/$pool.log.slow request_slowlog_timeout = 5s request_terminate_timeout = 120s rlimit_files = 131072 ;;;;;;;;;;;;;;;;;;;;;;; ; Pool 2 ; ;;;;;;;;;;;;;;;;;;;;;;; [www2] user = www group = www listen = /tmp/php-fpm2.sock; listen.backlog = -1 listen.owner = www listen.group = www listen.mode = 0666 pm = dynamic pm.max_children = 40 pm.start_servers = 6 pm.min_spare_servers = 6 pm.max_spare_servers = 12 pm.max_requests = 250 slowlog = /var/log/php/$pool.log.slow request_slowlog_timeout = 5s request_terminate_timeout = 120s rlimit_files = 131072 ;;;;;;;;;;;;;;;;;;;;;;; ; Pool 3 ; ;;;;;;;;;;;;;;;;;;;;;;; [www3] user = www group = www listen = /tmp/php-fpm3.sock; listen.backlog = -1 listen.owner = www listen.group = www listen.mode = 0666 pm = dynamic pm.max_children = 40 pm.start_servers = 6 pm.min_spare_servers = 6 pm.max_spare_servers = 12 pm.max_requests = 250 slowlog = /var/log/php/$pool.log.slow request_slowlog_timeout = 5s request_terminate_timeout = 120s rlimit_files = 131072 I calculated the pm.max_children processes according to some example calculations on the web like 40 x 40 Mb = 1600 Mb. I have separated 4 GB of RAM for PHP, now according to the calculations 40 Child Processes via one socket, and I have total of 3 sockets in my Nginx and FPM configuration. My doubt is about the amount of memory consumption by those child processes. I tried to create high load in the server via httperf hog and siege but I could not calculate the accurate memory usage by all the PHP processes (other processes like MySQL and Nginx were also running). And all the sockets were in use, So, I seek guidance from anyone who have done this before or know how exactly the pm.max_children in PHP Works. Since I have 3 Pools/sockets with 40 child processes does that count to 3 x 40 x 40 Mb of Memory usage ? or it is just like 40 Max. Child processes sharing 3 sockets (and the total memory usage is just 40 x 40 Mb) ?
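
    Rather than relying on the 40 MB-per-child rule of thumb, the resident size of the running children can be measured directly, which also shows how close the worst case really is (each pool maintains its own pm.max_children, so the theoretical ceiling here is 3 x 40 children). A sketch; the process name php-fpm may differ depending on how it was built:

        # count, average and total resident memory of all php-fpm children, in MB
        ps -C php-fpm -o rss= | awk '{ sum += $1; n++ } END { printf "children: %d  avg: %.1f MB  total: %.1f MB\n", n, sum/n/1024, sum/1024 }'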

    Read the article

  • Unable to commit file through svn, server sent truncated HTTP response body

    - by Rocket3G
    I have my own VPS, on which I want to run a simple SVN + chiliproject setup. I have re-installed SVN, CHILI and the OS several times, and it always works for a couple of hours/days and then it just stops working. Well, everything works, except I can't upload any files. Committing directories seems to work just fine, but when I try to commit a file it breaks. I have an error log file, which gives me the following text when I try to commit something x.x.x.x - - [19/Oct/2013:00:01:46 +0200] "OPTIONS /project HTTP/1.1" 200 149 x.x.x.x - - [19/Oct/2013:00:01:46 +0200] "PROPFIND /project HTTP/1.1" 207 346 x.x.x.x - - [19/Oct/2013:00:01:46 +0200] "MKACTIVITY /project/!svn/act/c11d45ac-86b6-184a-ac5a-9a1105d64563 HTTP/1.1" 401 345 x.x.x.x - admin [19/Oct/2013:00:01:46 +0200] "MKACTIVITY /project/!svn/act/c11d45ac-86b6-184a-ac5a-9a1105d64563 HTTP/1.1" 201 262 x.x.x.x - - [19/Oct/2013:00:01:46 +0200] "PROPFIND /project HTTP/1.1" 207 236 x.x.x.x - admin [19/Oct/2013:00:01:46 +0200] "CHECKOUT /project/!svn/vcc/default HTTP/1.1" 201 271 x.x.x.x - admin [19/Oct/2013:00:01:46 +0200] "PROPPATCH /project/!svn/wbl/c11d45ac-86b6-184a-ac5a-9a1105d64563/1 HTTP/1.1" 207 267 x.x.x.x - admin [19/Oct/2013:00:01:46 +0200] "CHECKOUT /project/!svn/ver/1 HTTP/1.1" 201 271 x.x.x.x - - [19/Oct/2013:00:01:46 +0200] "HEAD /project/index.html HTTP/1.1" 404 - x.x.x.x - admin [19/Oct/2013:00:01:46 +0200] "PUT /project/!svn/wrk/c11d45ac-86b6-184a-ac5a-9a1105d64563/index.html HTTP/1.1" 201 269 x.x.x.x - admin [19/Oct/2013:00:02:04 +0200] "DELETE /project/!svn/act/c11d45ac-86b6-184a-ac5a-9a1105d64563 HTTP/1.1" 204 - So it seems that it PUTs the file (test.html) correctly, and somehow somewhere something is wrong (file permissions are alright, when I purposely stated that they are wrong, it gave me errors, which is expected, and they were about the file permissions being incorrect. The odd thing is that files won't get added, but directories are fine. I also have enough storage left on my machine. What I should note, perhaps, is that I use Ubuntu 12.04.3 with ruby 1.9.3, mysql 14.14 and I have it set up that Chiliproject handles the authentication and authorization for the project. It works, because I can commit directories and read it all correctly, though I can't upload files. Help would really be appreciated, as I don't know what on earth is going on with this 'truncated http response body'. I tried to read them with wireshark, but it basically gave me the same information. With regards, Ps. I have no clue what the delay between put and delete is, as it's a file of a mere 500 bytes, so it's uploaded in approximately a second. Pps. I copied this question from StackOverflow to this site, as I didn't know the existence of this site and another user suggested that I'd get more answers here, as it's basically a server fault.

    Read the article

  • centos 6 ps aux hangs up

    - by Guntis
    I have problem with my server. Server is running centos 6 (CloudLinux Server release 6.2). uname -a = 2.6.32-320.4.1.lve1.1.4.el6.x86_64 That is a kvm guest. On host is debian 6. If i run command ps aux, it stuck on random process (shows some processes only), top command is working fine. htop doesn't work too (black screen). top - 12:11:51 up 34 min, 1 user, load average: 4.26, 6.71, 16.15 Tasks: 201 total, 7 running, 192 sleeping, 0 stopped, 2 zombie Cpu(s): 7.9%us, 2.8%sy, 0.0%ni, 87.5%id, 1.6%wa, 0.0%hi, 0.2%si, 0.0%st Mem: 9862044k total, 2359484k used, 7502560k free, 171720k buffers Swap: 10485720k total, 0k used, 10485720k free, 1336872k cached server has one Intel(R) Xeon(R) CPU E5606 @ 2.13GHz, free -m total used free shared buffers cached Mem: 9630 2336 7293 0 170 1324 -/+ buffers/cache: 841 8789 Swap: 10239 0 10239 php -v PHP 5.3.19 (cli) (built: Nov 28 2012 10:03:07) Copyright (c) 1997-2012 The PHP Group Zend Engine v2.3.0, Copyright (c) 1998-2012 Zend Technologies with the ionCube PHP Loader v4.2.2, Copyright (c) 2002-2012, by ionCube Ltd., and with Zend Guard Loader v3.3, Copyright (c) 1998-2010, by Zend Technologies with Suhosin v0.9.33, Copyright (c) 2007-2012, by SektionEins GmbH mysql Server version: 5.1.63-cll php -i disable_functions => apache_child_terminate, apache_setenv, define_syslog_variables, escapeshellarg, escapeshellcmd, eval, exec, fp, fput, ftp_connect, ftp_e xec, ftp_get, ftp_login, ftp_nb_fput, ftp_put, ftp_raw, ftp_rawlist, highlight_file, ini_alter, ini_get_all, ini_restore, inject_code, openlog, passthru, php _uname, phpAds_remoteInfo, phpAds_XmlRpc, phpAds_xmlrpcDecode, phpAds_xmlrpcEncode, popen, posix_getpwuid, posix_kill, posix_mkfifo, posix_setpgid, posix_set sid, posix_setuid, posix_setuid, posix_uname, proc_close, proc_get_status, proc_nice, proc_open, proc_terminate, shell_exec, syslog, system, xmlrpc_entity_de code, xmlrpc_server_create, putenv, show_source,mail => apache_child_terminate, apache_setenv, define_syslog_variables, escapeshellarg, escapeshellcmd, eval, exec, fp, fput, ftp_connect, ftp_exec, ftp_get, ftp_login, ftp_nb_fput, ftp_put, ftp_raw, ftp_rawlist, highlight_file, ini_alter, ini_get_all, ini_restore, inject_code, openlog, passthru, php_uname, phpAds_remoteInfo, phpAds_XmlRpc, phpAds_xmlrpcDecode, phpAds_xmlrpcEncode, popen, posix_getpwuid, posix_kill, pos ix_mkfifo, posix_setpgid, posix_setsid, posix_setuid, posix_setuid, posix_uname, proc_close, proc_get_status, proc_nice, proc_open, proc_terminate, shell_exe c, syslog, system, xmlrpc_entity_decode, xmlrpc_server_create, putenv, show_source,mail ... 
suhosin.executor.disable_eval => Off => Off suhosin.executor.eval.blacklist => include,include_once,require,require_once,curl_init,fpassthru,base64_encode,base64_decode,mail,exec,system,proc_open,leak, syslog,pfsockopen,shell_exec,ini_restore,symlink,stream_socket_server,proc_nice,popen,proc_get_status,dl, pcntl_exec, pcntl_fork, pcntl_signal,pcntl_waitpid, pcntl_wexitstatus, pcntl_wifexited, pcntl_wifsignaled,pcntl_wifstopped, pcntl_wstopsig, pcntl_wtermsig, socket_accept,socket_bind, socket_connect, socket_cr eate, socket_create_listen,socket_create_pair,link,register_shutdown_function,register_tick_function,gzinflate => include,include_once,require,require_once,c url_init,fpassthru,base64_encode,base64_decode,mail,exec,system,proc_open,leak,syslog,pfsockopen,shell_exec,ini_restore,symlink,stream_socket_server,proc_nic e,popen,proc_get_status,dl, pcntl_exec, pcntl_fork, pcntl_signal,pcntl_waitpid, pcntl_wexitstatus, pcntl_wifexited, pcntl_wifsignaled,pcntl_wifstopped, pcntl _wstopsig, pcntl_wtermsig, socket_accept,socket_bind, socket_connect, socket_create, socket_create_listen,socket_create_pair,link,register_shutdown_function, register_tick_function,gzinflate Sometimes i cannot kill httpd process. I run kill -9 PID even several times, and nothing happens. php runs via suphp. I learned somewhere that it can be trojan. I ran strace ps aux and it stops on open("/proc/PID/cmdline", O_RDONLY) If i reboot server, problem is gone but after some time it is back again .. :( Thanks.
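
    Since ps aux blocks while reading /proc/<pid>/cmdline of the stuck task (reading cmdline has to touch the target's address space), one way to keep investigating is to list process states from /proc/<pid>/status only and look for anything in uninterruptible sleep; a sketch:

        # list processes stuck in "disk sleep" without ever opening /proc/<pid>/cmdline
        for d in /proc/[0-9]*; do
            grep -q '^State:.*disk sleep' "$d/status" 2>/dev/null && \
                echo "${d#/proc/}  $(awk '/^Name:/ {print $2}' "$d/status")"
        done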

    Read the article

  • Scoping a home dev server

    - by AbhikRK
    Hi. I’m looking to build a multi-purpose home development server. In this post, I’m looking to outline what I want from such a system, and the ‘why’s of it, to some limited extent, and finally, some rudiments of how I’m looking to go about that. I’m mostly a developer, with just about some sysadmin familiarity. So, please excuse, correct me, and suggest on any ignorance which would come across in the following ;-) It will serve the following goals to start with:- NAS (Looking at using ZFS) Source control repo e.g Git server Database e.g MySQL server Continuous Integration e.g Hudson server Other stuff as and when they come up e.g RabbitMQ etc A development sandbox to play around with new stuff I want to achieve a high uptime for 2-5 as much as possible. They should run as independent services and with minimal maintenance. (e.g TurnKey Linux appliances) I’m thinking of running them as individual Xen DomUs. Then, maybe the NAS can be a Dom0 and 6 can be another DomU. The User for this would be mostly me. I can see 2-4 being sometimes used by 2-3 users, but that would be infrequent. I’m looking for a repeatable setup. Ideally I’d like to automate this setup through Chef or Puppet or something similar. Once everything runs, I want to be able to ssh/screen/tmux into 1-6 from my laptop or any other computer on the LAN/on-the-go. My queries are:- Is putting 1-6, all of them on a single box, a good idea? If so, what kind of hardware should I be looking at, for a low-cost, low-power setup? Although not at present, but in future I might be looking at adding audio/media servers to the mix. Would that impact the answers to 1? I have an old Pentium 3 and 810e motherboard combination. Is there any way I could put it to use? I had a look at the Sheevaplug, and was wondering if I could split off the NAS on its own using that. But ruled it out preliminarily due to its reported heating issues. Is it something i should still consider? Thanks in advance Have posted this question previously on SuperUser but no responses yet. So was wondering if this is a more apt forum for this.

    Read the article

  • Dovecot Virtual Users Not Authenticating

    - by blankabout
    We have a standard Postfix/Dovecot installation working perfectly with real users but cannot work out how to add virtual users, all virtual user login attempts fail with authentication errors. Following are snippets from the configuration files: /etc/postfix/main.cf: virtual_mailbox_domains = virtualexample.com virtual_mailbox_base = /var/spool/vhosts virtual_mailbox_recipients = hash:/etc/postfix/virtual_mailbox_recipients /etc/dovecot/dovecot.conf: !include conf.d/*.conf /etc/dovecot/conf.d/10-auth.conf auth_mechanisms = cram-md5 digest-md5 plain passdb { driver = passwd-file # Path for passwd-file. Also set the default password scheme. args = scheme=cram-md5 /etc/cram-md5.pwd } /etc/cram-md5.pwd [email protected]{MD5}$1$uIMvzy92$9Xt67B/qw4u6txkkxzne80 This is a snippet from the log when a login attempt is made: auth: Debug: Loading modules from directory: /usr/lib64/dovecot/auth auth: Debug: Module loaded: /usr/lib64/dovecot/auth/libauthdb_ldap.so auth: Debug: Module loaded: /usr/lib64/dovecot/auth/libdriver_sqlite.so auth: Debug: Module loaded: /usr/lib64/dovecot/auth/libmech_gssapi.so auth: Debug: passwd-file /etc/cram-md5.pwd: Read 1 users auth: Debug: auth client connected (pid=21990) auth: Debug: client in: AUTH#0111#011CRAM-MD5#011service=imap#011lip=1.1.1.1#011rip=2.2.2.2#011lport=143#011rport=51774 auth: Debug: client out: CONT#0111#011PDI1Njc0NjQ1NzQ3MTY0NTkuMTM0MTIxNzkwN0BncDM+ auth: Debug: client in: CONT auth: Debug: passwd-file([email protected],2.2.2.2): lookup: [email protected] file=/etc/cram-md5.pwd auth: Debug: client out: OK#0111#[email protected] auth: Debug: master in: REQUEST#0111630404609#01121990#0111#011b66b5f46b520a08e1d19d3d249be7073 auth: Debug: passwd([email protected],2.2.2.2): lookup auth: passwd([email protected],2.2.2.2): unknown user auth: Error: userdb([email protected],2.2.2.2): user not found from userdb passwd auth: Debug: master out: NOTFOUND#0111630404609 imap: Error: Authenticated user not found from userdb, auth lookup id=1630404609 (client-pid=21990 client-id=1) imap-login: Internal login failure (pid=21990 id=1) (auth failed, 1 attempts): user=, method=CRAM-MD5, rip=2.2.2.2, lip=1.1.1.1, mpid=21993 auth: Debug: auth client connected (pid=22010) auth: Debug: client in: AUTH#0111#011CRAM-MD5#011service=imap#011lip=1.1.1.1#011rip=2.2.2.2#011lport=143#011rport=51775 auth: Debug: client out: CONT#0111#011PDcxMDkwNDY1NTQzODUzMDkuMTM0MTIxNzkyOEBncDM+ auth: Debug: client in: CONT auth: Debug: passwd-file([email protected],2.2.2.2): lookup: [email protected] file=/etc/cram-md5.pwd auth: Debug: client out: OK#0111#[email protected] auth: Debug: master in: REQUEST#011343539713#01122010#0111#011e47b1345784e2845d59e794afa9a6bbe auth: Debug: passwd([email protected],2.2.2.2): lookup auth: passwd([email protected],2.2.2.2): unknown user auth: Error: userdb([email protected],2.2.2.2): user not found from userdb passwd auth: Debug: master out: NOTFOUND#011343539713 imap: Error: Authenticated user not found from userdb, auth lookup id=343539713 (client-pid=22010 client-id=1) imap-login: Internal login failure (pid=22010 id=1) (auth failed, 1 attempts): user=, method=CRAM-MD5, rip=2.2.2.2, lip=1.1.1.1, mpid=22011 It would appear that the user lookup is not working, even tho' the log suggests that Dovecot is using the /etc/cram-md5.pwd file and the user is configured in that same file. 
There are of course dozens of examples of using virtual users with Dovecot, but all the ones we have found either refer to Dovecot 1.x (we are using 2.x), use only virtual users (we must use real AND virtual users), or want to use a MySQL db, whereas we need to use a text file. Some hints about where we are going wrong would be very much appreciated.
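
    The log shows the CRAM-MD5 passdb lookup succeeding and the passwd userdb lookup failing ("user not found from userdb passwd"), so what appears to be missing is a userdb that knows about the virtual users. A sketch for the conf.d auth includes, assuming virtual mail lives under /var/spool/vhosts/<domain>/<user> and is owned by a vmail user; those names and the uid/gid are assumptions, and the ordering against the existing passwd userdb for real users needs care:

        # e.g. in /etc/dovecot/conf.d/10-auth.conf (sketch) -- keep the existing passdb as-is
        userdb {
          driver = static
          args = uid=vmail gid=vmail home=/var/spool/vhosts/%d/%n
        }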

    Read the article

  • Windows Server 2008 Migration - Did I miss something?

    - by DevNULL
    I'm running into a few complications in my migration process. My main role has been as a Linux / Sun administrator for 15 yrs, so the Windows Server 2008 environment is a bit new to me, but understandable. Here's our situation and reason for migrating... We have a group of developers that develop VERY low-level software in Visual C with some inline assembler. All the workstations were separate from each other, which caused consistency problems with development libraries, versions, etc... Our goal was to throw them all onto a Windows domain where we can control workstation installations, hot fixes (which can cause enormous problems), software versions, etc... All development workstations are running Windows XP x32 (SP3) and x64 (SP2). I'm running into user permission problems and I was wondering whether I missed one, two or a handful of things during my deployment. Here is what I have currently done: Installed and activated Windows Server 2008; Added roles for DNS and Active Directory; Configured DNS with WINS for NetBIOS name usage; Added developers to AD and mapped their shared folders to their profile; Added roles for IIS7 and configured the developers' SVN; Installed MySQL Enterprise Edition for development usage. Not having a firm understanding of Group Policy, I haven't delved deeply into that realm yet. Problems I'm encountering: 1. When I configure any XP workstations to log on to our domain, once a user uses their new AD login, everything goes well, except they have very restrictive permissions. (E.g.: if a user opens any existing file, they don't have write access, except in their documents folder.) Since these guys are working on low system-level events, they need to r/w all files. All I'm looking to restrict is software installations. Am I correct to assume that I can use WSUS to maintain the domain's hot fixes and updates pushed to the workstations? I need to map a centralized shared development drive upon the user's login. This is open to EVERYONE. Right now I have the users' folders mapped upon login through their AD profile. But how do I map a share if I've already defined one within their profile in AD? Any responses would be very much appreciated. Do I have to configure and define a group policy for the domain users? Can I use Volume Mirroring to mirror / sync two drives on two separate servers, or should I just script a rsync or MS Synctool? The drives simply store nightly system images.
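
    For the centralized shared development drive mapped at login, the usual pattern is a logon script assigned through AD (or Group Policy) rather than a second mapping in the profile; a one-line sketch where the server and share names are placeholders:

        rem logon.bat - assigned as the user's logon script in AD (or via Group Policy)
        net use Z: \\devserver\shared /persistent:no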

    Read the article

  • How to get more information from the system crash

    - by viraptor
    I'd like to debug an issue I'm having with a linux (debian stable) server, but I'm running out of ideas of how to confirm any diagnosis. Some background: The servers are running DL160 class with hardware raid between two disks. They're running a lot of services, mostly utilising network interface and CPU. There are 8 cpus and 7 "main" most cpu-hungry processes are bound to one core each via cpu affinity. Other random background scripts are not forced anywhere. The filesystem is writing ~1.5k blocks/s the whole time (goes up above 2k/s in peak times). Normal CPU usage for those servers is ~60% on 7 cores and some minimal usage on the last (whatever's running on shells usually). What actually happens is that the "main" services start using 100% CPU at some point, mainly stuck in kernel time. After a couple of seconds, LA goes over 400 and we lose any way to connect to the box (KVM is on it's way, but not there yet). Sometimes we see a kernel reporting hung task (but not always): [118951.272884] INFO: task zsh:15911 blocked for more than 120 seconds. [118951.272955] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message. [118951.273037] zsh D 0000000000000000 0 15911 1 [118951.273093] ffff8101898c3c48 0000000000000046 0000000000000000 ffffffffa0155e0a [118951.273183] ffff8101a753a080 ffff81021f1c5570 ffff8101a753a308 000000051f0fd740 [118951.273274] 0000000000000246 0000000000000000 00000000ffffffbd 0000000000000001 [118951.273335] Call Trace: [118951.273424] [<ffffffffa0155e0a>] :ext3:__ext3_journal_dirty_metadata+0x1e/0x46 [118951.273510] [<ffffffff804294f6>] schedule_timeout+0x1e/0xad [118951.273563] [<ffffffff8027577c>] __pagevec_free+0x21/0x2e [118951.273613] [<ffffffff80428b0b>] wait_for_common+0xcf/0x13a [118951.273692] [<ffffffff8022c168>] default_wake_function+0x0/0xe .... This would point at raid / disk failure, however sometimes the tasks are hung on kernel's gettsc which would indicate some general weird hardware behaviour. It's also running mysql (almost read-only, 99% cache hit), which seems to spawn a lot more threads during the system problems. During the day it does ~200kq/s (selects) and ~10q/s (writes). The host is never running out of memory or swapping, no oom reports are spotted. We've got many boxes with similar/same hardware and they all seem to behave that way, but I'm not sure which part fails, so it's probably not a good idea to just grab something more powerful and hope the problem goes away. Applications themselves don't really report anything wrong when they're running. I can run anything safely on the same hardware in an isolated environment. What can I do to narrow down the problem? Where else should I look for explanation?
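
    One cheap way to capture more state the next time the load spikes, before the box becomes unreachable, is the magic SysRq interface, which dumps blocked-task backtraces straight into the kernel log; a sketch, assuming sysrq isn't disabled by policy:

        echo 1 > /proc/sys/kernel/sysrq        # make sure SysRq is enabled
        echo w > /proc/sysrq-trigger           # dump tasks in uninterruptible (blocked) state
        echo t > /proc/sysrq-trigger           # full task-state dump (very verbose)
        dmesg | tail -n 200                    # the traces land in the kernel ring buffer / syslog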

    Read the article

  • Postfix flow/hook reference, or high-level overview?

    - by threecheeseopera
    The Postfix MTA consists of several components/services that work together to perform the different stages of delivery and receipt of mail; these include the smtp daemon, the pickup and cleanup processes, the queue manager, the smtp service, pipe/spawn/virtual/rewrite ... and others (including the possibility of custom components). Postfix also provides several types of hooks that allow it to integrate with external software, such as policy servers, filters, bounce handlers, loggers, and authentication mechanisms; these hooks can be connected to different components/stages of the delivery process, and can communicate via (at least) IPC, network, database, several types of flat files, or a predefined protocol (e.g. milter). An old and very limited example of this is shown at this page. My question: Does anyone have access to a resource that describes these hooks, the components/delivery stages that the hook can interact with, and the supported communication methods? Or, more likely, documentation of the various Postfix components and the hooks/methods that they support? For example: Given the requirement "if the recipient primary MX server matches 'shadysmtpd', check the recipient address against a list; if there is a match, terminate the SMTP connection without notice". My software would need to 1) integrate into the proper part of the SMTP process, 2) use some method to perform the address check (TCP map server? regular expressions? mysql?), and 3) implement the required action (connection termination). Additionally, there will probably be several methods to accomplish this, and another requirement would be to find that which best fits (ex: a network server might be faster than a flat-file lookup; or, if a large volume of mail might be affected by this check, it should be performed as early in the mail process as possible). Real-world example: The apolicy policy server (performs checks on addresses according to user-defined rules) is designed as a standalone TCP server that hooks into Postfix inside the smtpd component via the directive 'check_policy_service inet:127.0.0.1:10001' in the 'smtpd_client_restrictions' configuration option. This means that, when Postfix first receives an item of mail to be delivered, it will create a TCP connection to the policy server address:port for the purpose of determining if the client is allowed to send mail from this server (in addition to whatever other restrictions / restriction lookup methods are defined in that option); the proper action will be taken based on the server's response. Notes: 1)The Postfix architecture page describes some of this information in ascii art; what I am hoping for is distilled, condensed, reference material. 2) Please correct me if I am wrong on any level; there is a mountain of material, and I am just one man ;) Thanks!
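
    As a concrete illustration of how the hook point changes what information is available, the same policy service can be attached at different smtpd restriction stages; a main.cf sketch reusing the apolicy address from the example (for the "recipient MX" requirement it would have to sit in the recipient stage, since that is the point at which RCPT TO data exists):

        # /etc/postfix/main.cf (sketch)
        # earliest practical hook: only the client IP / HELO are known at this point
        smtpd_client_restrictions =
            check_policy_service inet:127.0.0.1:10001
        # per-recipient hook: the RCPT TO address (and its domain) is available here
        smtpd_recipient_restrictions =
            permit_mynetworks,
            reject_unauth_destination,
            check_policy_service inet:127.0.0.1:10001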

    Read the article

  • Postfix / Dovecot and Email Retrieval

    - by Eric J.
    I have setup Postfix and Dovecot on an Ubuntu box following the instructions http://www.exratione.com/2012/05/a-mailserver-on-ubuntu-1204-postfix-dovecot-mysql/ I can see that email is being delivered to and accepted by the server, but the email is not available for retrieval via POP3. What could be missing in my configuraton? It seems that email is not being properly handed off to Dovecot. Here are what I believe are the relevant /var/log/mail.log entries for an attempt to send email from another domain (hosted by Gmail) to the domain I have setup: Logged during SMTP connection postfix/smtpd[14689]: connect from mail-vb0-f50.google.com[209.85.212.50] postfix/smtpd[14689]: Anonymous TLS connection established from mail-vb0-f50.google.com[209.85.212.50]: TLSv1 with cipher ECDHE-RSA-RC4-SHA (128/128 bits) postfix/smtpd[14689]: 5782740ACF: client=mail-vb0-f50.google.com[209.85.212.50] postfix/cleanup[14696]: 5782740ACF: message-id=<CAEjmKcjHnTY4yk=3QXoNrD76=04g-s9utPguTFB02Fx53GMPmw@mail.gmail.com> postfix/qmgr[14687]: 5782740ACF: from=<[email protected]>, size=1947, nrcpt=1 (queue active) postfix/smtpd[14702]: connect from mail.destinationdomain.com[127.0.0.1] postfix/smtpd[14702]: 2940A41AA9: client=mail.destinationdomain.com[127.0.0.1] postfix/cleanup[14696]: 2940A41AA9: message-id=<CAEjmKcjHnTY4yk=3QXoNrD76=04g-s9utPguTFB02Fx53GMPmw@mail.gmail.com> postfix/qmgr[14687]: 2940A41AA9: from=<[email protected]>, size=2450, nrcpt=1 (queue active) amavis[21309]: (21309-02) Passed CLEAN, [209.85.212.50] <[email protected]> -> <[email protected]>, Message-ID: <CAEjmKcjHnTY4yk=3QXoNrD76=04g-s9utPguTFB02Fx53GMPmw@mail.gmail.com>, mail_id: W52ZB8FAAA+8, Hits: -0.101, size: 1946, queued_as: 2940A41AA9, [email protected], 784 ms postfix/smtpd[14702]: disconnect from mail.destinationdomain.com[127.0.0.1] postfix/smtp[14698]: 5782740ACF: to=<[email protected]>, relay=127.0.0.1[127.0.0.1]:10024, delay=1.1, delays=0.29/0.01/0/0.79, dsn=2.0.0, status=sent (250 2.0.0 from MTA([127.0.0.1]:10025): 250 2.0.0 Ok: queued as 2940A41AA9) postfix/qmgr[14687]: 5782740ACF: removed dovecot: lda([email protected]): msgid=<CAEjmKcjHnTY4yk=3QXoNrD76=04g-s9utPguTFB02Fx53GMPmw@mail.gmail.com>: saved mail to INBOX postfix/pipe[14703]: 2940A41AA9: to=<[email protected]>, relay=dovecot, delay=0.08, delays=0.02/0.02/0/0.04, dsn=2.0.0, status=sent (delivered via dovecot service) postfix/qmgr[14687]: 2940A41AA9: removed Logged during POP3 retrieval attempts dovecot: pop3-login: Login: user=<[email protected]>, method=PLAIN, rip=209.85.220.135, lip=10.195.83.10, mpid=14706 dovecot: pop3([email protected]): Disconnected: Logged out top=0/0, retr=1/2557, del=1/1, size=2540 postfix/smtpd[14689]: disconnect from mail-vb0-f50.google.com[209.85.212.50] dovecot: pop3-login: Login: user=<[email protected]>, method=PLAIN, rip=209.85.212.31, lip=10.195.83.10, mpid=14708 dovecot: pop3([email protected]): Disconnected: Logged out top=0/0, retr=0/0, del=0/0, size=0

    Read the article

  • isa 2004 - banned site rule cause slow internet

    - by Holian
    Hi Gods, We have Windows Server 2003 with ISA 2004. Our clients use the internet through the proxy. We have two ISA rules: order name action protocols from/listener to condition 1. trafic ALLOW all outbound all networks all networks all users 2. FTP ALLOW FTP Server EXTERNAL/INTERNAL/Local host 10.1.1.1 We have to "ban" a few webpages (like facebook, youtube...etc...), so we made a new rule 0. banned DENY HTTP internal denied pages all users In the denied pages we have the *.facebook.com domain set. After we enable this rule, the entire internet slows down. The banning rule works well, redirecting to an internal site, but the other sites.... If I open a page, it normally takes 3-10 sec to load, but after this rule this time is: 2-4 minutes. In the monitor / logging menu we see a few FAILED CONNECTION ATTEMPT entries like: Log type: Web Proxy (Forward) Status: 304 Not Modified Rule: All local traffic Source: Internal ( 10.1.1.1:0 ) Destination: External ( 172.24.28.22:3128 ) Request: GET http://www.konyvelozona.hu/wp-content/uploads/nyugdijas-holgy-2.jpg Filter information: Req ID: 17270b72 Protocol: http User: anonymous Additional information Client agent: Mozilla/4.0 (compatible; MSIE 8.0; Windows NT 5.1; Trident/4.0; .NET CLR 2.0.50727; .NET CLR 3.0.4506.2152; .NET CLR 3.5.3072... Object source: Verified Cache Processing time: 9047 Cache info: 0x18801002 MIME type: - In the event log we get a few entries like: Description: The Web Proxy filter failed to bind its socket to 10.1.1.1 port 80. This may have been caused by another service that is already using the same port or by a network adapter that is not functional. To resolve this issue, restart the Microsoft Firewall service. The error code specified in the data area of the event properties indicates the cause of the failure. The failure is due to error: 0x8007271d The Web Proxy filter failed to bind its socket to 127.0.0.1 port 80. This may have been caused by another service that is already using the same port or by a network adapter that is not functional. To resolve this issue, restart the Microsoft Firewall service. The error code specified in the data area of the event properties indicates the cause of the failure. The failure is due to error: 0x8007271d If I type: netstat -o -n -a | findstr 0.0:80 then I get: tcp 0.0.0.0:80 0.0.0.0:0 LISTEN 4 udp 0.0.0.0:8031 *.* 2780 udp 0.0.0.0:8082 *.* 2780 Some months ago we installed XAMPP, but now we only use mysql. The Apache service is stopped. In the XAMPP port check menu I see: Service Port Status Apache (http) 80 Process: System Maybe this is the problem? I don't know what I should do now... Thank you folks.
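
    The netstat output already shows PID 4 (the System process) listening on TCP port 80, which usually means a kernel-mode HTTP listener such as http.sys (used by IIS / the WWW Publishing Service) rather than Apache, since the Apache service is stopped. A quick sketch to confirm from cmd.exe; whichever service turns out to own the listener would then need to be stopped or moved off port 80 so the Web Proxy filter can bind it:

        rem identify what owns port 80
        netstat -ano | findstr :80
        tasklist /FI "PID eq 4"
        rem PID 4 = System, i.e. a kernel-mode listener such as http.sys, not a normal user process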

    Read the article

  • Should use EXT4 or XFS to be able to 'sync'/backup to S3?

    - by Rafa
    It's my first message here, so bear with me... (I have already checked quite a few of the "Related Questions" suggested by the editor) Here's the setup, a brand new dedicated server (8GB RAM, some 140+ GB disk, Raid 1 via HW controller, 15000 RPM) it's a production web server (with MySQL in it, too, not just serving web requests); not a personal desktop computer or similar. Ubuntu Server 64bit 10.04 LTS We have an Amazon EC2+EBS setup with the EBS volume formatted as XFS for easily taking snapshots to S3, via AWS' console. We are now migrating to the dedicated server and I want to be able to backup our data to Amazon's S3. The main reason being the possibility of using the latest snapshot from an EC2 instance in case of hardware failure on the dedicated server. There are two approaches I am thinking of: do a "simple" file-based backup with rsync, dumping the database' and other files, and uploading to amazon via S3 API commands, or to an EC2 instance, or something. do a file-system "freeze" (using XFS) with the usual ebs/ec2 snapshot tool to take part of the file system, take a snapshot, and upload it to Amazon. Here's my question (or series of questions): Can I safely use XFS for the whole system as the main and only format on the dedicated server? If not, is it safe to use EXT4? Or should I use something else? would then be possible to make snapshots of the system to upload to Amazon? Is it possible/feasible/practical to do what I want to do, anyway? any recommendations? When searching around for S3/EBS/XFS, anything relevant to my problem is usually focused on taking snapshots of a XFS system that is already an EBS volume. My intention is to do it in a "real"/metal dedicated server. Update: I just saw this on Wikipedia: XFS does not provide direct support for snapshots, as it expects the snapshot process to be implemented by the volume manager. I had always assumed that I could choose 2 ways of doing snapshots: via LVM or via XFS (without LVM). After reading this, I realize these 2 options are more like it: With XFS: 1) do xfs_freeze; 2) copy the frozen files via, eg, rsync; 3) unfreeze xfs With LVM and XFS: 1) do xfs_freeze; 2) make a binary copy of the frozen fs via lvcreate and related commands; 3) unfreeze xfs; 4) somehow backup the LVM snapshot. Thanks a lot in advance, Let me know if I need to clarify something.
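
    For reference, the LVM-plus-XFS variant described in the update looks roughly like the following on a bare-metal server, assuming the data sits on an LVM logical volume /dev/vg0/data mounted at /srv (all of those names, and the snapshot size, are assumptions):

        xfs_freeze -f /srv                                # quiesce the XFS filesystem
        lvcreate -s -L 5G -n data_snap /dev/vg0/data      # block-level point-in-time snapshot
        xfs_freeze -u /srv                                # resume writes immediately
        mount -o ro,nouuid /dev/vg0/data_snap /mnt/snap   # nouuid: XFS refuses duplicate UUIDs otherwise
        # push the snapshot contents to S3 at leisure (s3cmd is one option), then drop it
        umount /mnt/snap && lvremove -f /dev/vg0/data_snap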

    Read the article

  • Monitoring tools that can take high rate and high volume?

    - by Jon Watte
    We're using Cacti with RRDTool to monitor and graph about 100,000 counters spread across about 1,000 Linux-based nodes. However, our current setup generally only gives us 5-minute graphs (with some data being minute-based); we often make changes where seeing feedback in "near real time" would be of value. I'd like approximately a week of 5- or 10-second data, a year of 1-minute data, and 5 years of 10-minute data. I have SSD disks and a dual-hexa-core server to spare. I tried setting up a Graphite/carbon/whisper server, and had about 15 nodes pipe to it, but it only has "average" for the retention function when promoting to older buckets. This is almost useless -- I'd like min, max, average, standard deviation, and perhaps "total sum" and "number of samples" or perhaps "95th percentile" available. The developer claims there's a new back-end "in beta" that allows you to write your own function, but this appears to still only do 1:1 retention (when saving older data, you really want the statistics calculated into many streams from a single input. Also, "in beta" seems a little risky for this installation. If I'm wrong about this assumption, I'd be happy to be shown my error! I've heard Zabbix recommended, but it puts data into MySQL or some other SQL database. 100,000 counters on a 5 second interval means 20,000 tps, and while I have an SSD, I don't have an 8-way RAID-6 with battery backup cache, which I think I'd need for that to work out :-) Again, if that's actually something that's not a problem, I'd be happy to be shown the error of my ways. Also, can Zabbix do the single data stream - promote with statistics thing? Finally, Munin claims to have a new 2.0 coming out "in beta" right now, and it boasts custom retention plans. However, again, it's that "in beta" part -- has anyone used that for real, and at scale? How did it perform, if so? I'm almost thinking about using a graphing front-end (such as Graphite) and rolling my own retention backend with a simple layer on top of mmap() and some stats. That wouldn't be particularly hard, and would probably perform very well, letting the kernel figure out the balance between frequency of flushing to disk and process operations. Any other suggestions I should look into? Note: it has to have shown itself able to sustain the kinds of data loads I'm suggesting above; if you can point at the specific implementation you're referencing, so much the better!
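
    For what it's worth, the retention ladder described above maps fairly directly onto carbon's storage-schemas.conf, and per-statistic streams (sending min/max/avg as separate series from the collector, since whisper keeps only one aggregate per series) can be rolled up sensibly with storage-aggregation.conf. A sketch; the human-readable retention syntax needs a reasonably recent carbon, and the section names and patterns are assumptions:

        # storage-schemas.conf (sketch)
        [everything]
        pattern = .*
        retentions = 10s:7d,1m:1y,10m:5y

        # storage-aggregation.conf (sketch) -- applied when rolling up to coarser archives
        [min]
        pattern = \.min$
        xFilesFactor = 0.1
        aggregationMethod = min

        [max]
        pattern = \.max$
        xFilesFactor = 0.1
        aggregationMethod = max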

    Read the article

  • NGiNX performance degrades over time.

    - by Rylea Stark
    So here's the situation, I run a small cluster, Dedicated box for MySQL, and a dedicated PHP-FPM/NGINX box, Nginx talks to php-fpm via socket, As far as i can tell the problem does not lie in php-fpm, it lies somewhere in my configuration. What happens, is the site loads instant for a few moments after starting and slowly starts to degrade to load times of greater than 2 seconds, eventually taking 12 seconds to complete a load, PHP is configured to close a child after 175 requests, and spawn 20 at start and have a max of 60. Not really sure where the bottle neck is, most of my code is optimized and works flawlessly, but these issues with nginx will most likely force me to switch back over to Apache, And I really dont want to do that, NGINX.conf configuration below. user www-data; worker_processes 4; worker_cpu_affinity 0001 0010 0100 1000; error_log /var/log/nginx/error.log; pid /var/run/nginx.pid; events { worker_connections 512; multi_accept on; use epoll; } http { include /etc/nginx/mime.types; access_log /var/log/nginx/access.log; resolver_timeout 5s; satisfy all; ## Size Limits limit_zone brainbug $binary_remote_addr 5m; client_body_buffer_size 8k; client_header_buffer_size 75M; client_max_body_size 1k; large_client_header_buffers 2 1k; ## Timeouts client_body_timeout 60; client_header_timeout 60; keepalive_timeout 60; send_timeout 60; ## General Options ignore_invalid_headers on; recursive_error_pages on; sendfile on; server_name_in_redirect off; server_tokens off; ## TCP options tcp_nodelay on; #tcp_nopush on; output_buffers 128 512k; gzip on; gzip_http_version 1.0; gzip_comp_level 7; gzip_proxied any; gzip_min_length 0; gzip_buffers 32 32k; gzip_types text/plain text/html text/css application/x-javascript text/xml application/xml application/xml+rss text/javascript image/jpeg image/png image/gif; ## Disable GZIP for MSIE 1-6 gzip_disable "MSIE [1-6].(?!.*SV1)"; ## Set a vary header so downstream proxies don't send cached gzipped content to IE6 gzip_vary on; include /etc/nginx/conf.d/*.conf; include /etc/nginx/sites-enabled/*; }

    Read the article

  • startup Error for Zend Server CE

    - by Jamison
    Hello! I've got a strange startup error for Zend Server CE - it's probably easy to fix, but I don't have much experience with Zend Server! I'm running the latest OSX 10.6.6 and the latest Zend Server CE for Mac. When I run the "start" command from the command line, here is what I get: /usr/local/zend/bin/apachectl start [OK] spawn-fcgi: child spawned successfully: PID: 4206 /usr/local/zend/bin/shell_functions.rc: line 133: 4210 Bus error $WATCHDOG -i $BINARY 1>&3 2>&4 /usr/local/zend/bin/shell_functions.rc: line 133: 4211 Bus error $WATCHDOG -u $WD_UID -g $WD_GID -s $BINARY 1>&3 2>&4 Starting Zend Server GUI [Lighttpd] [FAILED] /usr/local/zend/bin/lighttpdctl.sh: line 46: 4212 Bus error $WATCHDOG -i $BINARY Starting MySQL SUCCESS! /usr/local/zend/bin/shell_functions.rc: line 133: 4304 Bus error $WATCHDOG -i $BINARY 1>&3 2>&4 /usr/local/zend/bin/shell_functions.rc: line 133: 4425 Bus error $WATCHDOG -u $WD_UID -g $WD_GID -s $BINARY 1>&3 2>&4 Starting Java bridge [FAILED] /usr/local/zend/bin/java_bridge.sh: line 39: 4426 Bus error $WATCHDOG -i $BINARY Zend Server started... The challenge is that ZEND SERVER wont open the GUI with this error, and seemingly I can click on Zend Server in the Applications folder and it opens for a second and immediately closes. I've made sure that Web Sharing is turned off to avoid conflicts, and I've run Disk Utility from my recovery disk to make sure there are no file system errors. Here is what the lines that are referenced in the errors have in terms of code: shell_functions.rc: (starting on line 132 - the error message says line 133...): launch() { if [ -z "$DEBUG" ]; then exec 3>/dev/null 4>&3 else exec 3>&1 4>&2 fi $WATCHDOG -i $BINARY 1>&3 2>&4 RET=$? if [ $RET -eq 0 ];then $ECHO_CMD "$BINARY watchdog is up and running.. ${OK_COLOR}[OK]${T_RESET}" return $RET else #$WATCHDOG -u $WD_UID -g $WD_GID -s $BINARY >> "$PREFIX/logs/watchdog_$BINARY.log" 2>&1 $WATCHDOG -u $WD_UID -g $WD_GID -s $BINARY 1>&3 2>&4 report $? "Starting" fi } _kill() { $WATCHDOG -i $BINARY > /dev/null 2>&1 if [ $? -eq 1 ];then $ECHO_CMD "$BINARY is not running" else $WATCHDOG -t $BINARY > /dev/null 2>&1 report $? "Stopping" fi } lighttpdctl.sh: (starting on line 45 - the error message says line 46...): status() { $WATCHDOG -i $BINARY } case "$1" in start) start status ;; stop) stop ;; restart) stop sleep 1 start ;; status) status ;; *) usage exit 1 esac exit $? java_bridge.sh: (starting on line 38 - the error message says line 39...): status() { $WATCHDOG -i $BINARY } Question: "Watchdog" is library in this zend BIN folder - it seems to handle error reporting? all the errors in my start command seem to deal with this Watchdog thing, but I don't know what to do about it... Thanks!

    Read the article

  • PHP hits 100% CPU and eats RAM at the same time Monday to Friday

    - by Daniel Samuels
    We run a learning platform for primary schools here in the UK and it's all been running extremely well. However at around 4PM Monday to Friday we see the same issue arise -- 1-2 PHP threads will spike to 100% CPU and gradually start eating up RAM until the server(s) fall over. 98%+ of our requests are HTTPS, these come into our Layer 7 load balancer which then decrypts the SSL data, adds the X-HTTP-Forwarded-For header and forwards the data onto an application server (we have 2 of those at the moment) on port 80. Our application servers have Varnish on port 80 which takes in the request from the load balancer and passes the request through to Nginx on port 81. Nginx then works out which 'vhost' it needs to use and passes any PHP processing through to PHP-CGI which is listening on a socket (managed through spawn-fcgi). There's an instance of Memcached running too, MySQL runs on a separate server / slave setup. Throughout the day the load will typically go no higher than 0.8 on either of the application servers, however at around 4PM our problem arises. I've managed to run strace on a few of the actual threads when they cause the problem and I always see the same thing: stat("/usr/share/zoneinfo/Europe/London", {st_mode=S_IFREG|0644,st_size=3661, ...}) = 0 stat("/usr/share/zoneinfo/Europe/London", {st_mode=S_IFREG|0644,st_size=3661, ...}) = 0 This is repeated infinitely and never stops until you SEGKILL the process or oomkiller kills it. There are no cron jobs scheduled to run at that time and I don't have any way of seeing exactly what Nginx request is associated with the PHP process which is running. We are running PHP 5.3.14 which we upgraded to from 5.3.8 last week to rule out the older version being the problem. This issue has been going on a few months now and we have no idea what is causing it. We deploy our software very frequently, so it's difficult to track down a specific release which may have started the problem - especially as we do not know the date of the first occurrence of this issue. Varnish is version 3.0.1, Nginx is 1.0.6 (which I understand is about a year old now), our servers are running CentOS release 5.7 (Final) they have Intel i3 540s at 3.07Ghz and 8GB of RAM. There's a discussion on the Debian mailing list about something very similar, you can find that here. Has anyone seen anything like this in the past, does anyone have any ideas or suggestions? Are there a way of linking an Nginx request directly to a PHP thread? Is there a better way of seeing what the PHP process is doing? (I've seen GDB mentioned, though I'll have to recompile PHP) Thanks!
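
    Given that the spinning threads are stat()ing the zoneinfo file for Europe/London over and over, one cheap thing to rule out is timezone resolution happening on every call; pinning it in php.ini (or confirming it is already pinned) is a sketch-level first step rather than a confirmed fix for this particular loop:

        ; php.ini
        date.timezone = "Europe/London"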

    Read the article

  • Seeking past end of file causes Apache hang, and it never restarts.

    - by talkingnews
    I've actually solved my problem with a better script, but I'm still left wondering why Apache2 hung completely - this is an out-of-the-box ISPCONFIG 3.03 install, everything bang up to date, running perfectly. Until... The troublesome but innocent-looking script: $fp = fopen("/var/log/ispconfig/cron.log", "r"); fseek($fp, -5000, SEEK_END); $line_buffer = array(); while (!feof($fp)) { $line = fgets($fp, 1024); $line_buffer[] = $line; $line_buffer = array_slice($line_buffer, -10, 10); } foreach ($line_buffer as $line) { echo $line; } You get the idea, just a script I found on a forum somewhere. I did this for various logs, since it's a nice easy window on what's occurring (in a protected dir, of course!). One day, the logs having grown large and me having sorted all my cron, scripting and mail queue errors, I thought it was time to start afresh. I updated, rebooted, archived and deleted the logs. When I ran my script a couple of hours later, it hung. And hung. 8 minutes I waited. Chrome timed the page out, of course, but the server never came back to life. htop showed /usr/sbin/apache2 -k restart using 100% CPU. It never came back until I did a service apache2 restart. It ran fine; as soon as I hit that logfile again... dead. So, I worked out it was the logfile script, and I worked out that seeking beyond the end of the file wasn't good, and I found a better script http://www.php.net/manual/en/function.fseek.php#90450 But what I'm left wondering is... why didn't something restart or kill the process? How was one hanging page able to bring down the whole server? It's running suphp. I say "out of the box", but I've tweaked mysql and apache to fork and reserve sensible amounts of processes for the 512 MB RAM the VPS has, and it'll handle multiple refreshes of large pages, and it hadn't hung before. Any ideas how I'd avoid this? Google isn't my friend in this instance beyond the reccs. above about number of processes vs RAM available.
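
    For completeness, the original script can be made safe simply by clamping the seek offset to the file size, which is essentially what the replacement script from the php.net comment does; a minimal sketch:

        $path = "/var/log/ispconfig/cron.log";
        $fp = fopen($path, "r");
        $offset = min(5000, filesize($path));   // never seek to before the start of the file
        fseek($fp, -$offset, SEEK_END);
        $line_buffer = array();
        while (!feof($fp)) {
            $line_buffer[] = fgets($fp, 1024);
            $line_buffer = array_slice($line_buffer, -10, 10);
        }
        foreach ($line_buffer as $line) {
            echo $line;
        }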

    Read the article

  • Many unknown processes named "sudo"

    - by joaner
    My server's free memory is getting lower and lower, and many of the processes' COMMAND shows up as "sudo" when I use top and press M (sort by memory). I don't understand why the root user would need to use "sudo". I want to know how these processes are generated. Can I kill them?

        Tasks: 185 total,   1 running, 184 sleeping,   0 stopped,   0 zombie
        Cpu(s):  0.0%us,  0.0%sy,  0.0%ni,100.0%id,  0.0%wa,  0.0%hi,  0.0%si,  0.0%st
        Mem:   3967848k total,  3484196k used,   483652k free,   218532k buffers
        Swap:  4112376k total,        0k used,  4112376k free,  2932864k cached

          PID USER      PR  NI  VIRT  RES  SHR S %CPU %MEM    TIME+  COMMAND
        22219 mysql     20   0  582m  67m 5492 S  0.0  1.7   0:01.75 mysqld
        22337 daemon    20   0  327m  31m 3440 S  0.0  0.8   0:01.58 httpd
        22252 daemon    20   0  321m  26m 3416 S  0.0  0.7   0:01.25 httpd
        22263 daemon    20   0  319m  23m 3396 S  0.0  0.6   0:00.71 httpd
        22253 daemon    20   0  310m  18m 3444 S  0.0  0.5   0:00.69 httpd
        22251 root      20   0 28392  12m 3640 S  0.0  0.3   0:00.09 httpd
         2422 root      20   0  9192 3608 2184 S  0.0  0.1   0:00.32 ssh
        13613 root      20   0 38220 3572 1044 S  0.0  0.1   0:22.31 rsyslogd
         2423 root      20   0 11556 3420 2692 S  0.0  0.1   0:00.11 sshd
        22570 root      20   0 11716 3408 2676 S  0.0  0.1   0:00.08 sshd
         3351 root      20   0 10384 2540 2000 S  0.0  0.1   0:00.06 sudo
        30870 root      20   0 10384 2528 2000 S  0.0  0.1   0:00.06 sudo
        14356 dkim-mil  20   0 49664 2444 1468 S  0.0  0.1   0:03.91 dkim-filter
         2085 root      20   0 10376 2344 1824 S  0.0  0.1   0:00.00 sudo
         7741 root      20   0 10376 2344 1824 S  0.0  0.1   0:00.00 sudo
        29838 root      20   0 10376 2344 1824 S  0.0  0.1   0:00.00 sudo
         2006 root      20   0 10376 2340 1824 S  0.0  0.1   0:00.00 sudo
        29747 root      20   0 10376 2340 1824 S  0.0  0.1   0:00.00 sudo
        30602 root      20   0 10376 2340 1824 S  0.0  0.1   0:00.00 sudo
        30935 root      20   0 10376 2340 1824 S  0.0  0.1   0:00.00 sudo
         2259 root      20   0 10376 2336 1824 S  0.0  0.1   0:00.00 sudo
         2503 root      20   0 10376 2336 1824 S  0.0  0.1   0:00.00 sudo
         2515 root      20   0 10376 2336 1824 S  0.0  0.1   0:00.00 sudo
         7718 root      20   0 10376 2336 1824 S  0.0  0.1   0:00.00 sudo
         7745 root      20   0 10376 2336 1824 S  0.0  0.1   0:00.00 sudo
        29845 root      20   0 10376 2336 1824 S  0.0  0.1   0:00.00 sudo
        30172 root      20   0 10376 2336 1824 S  0.0  0.1   0:00.00 sudo
        30352 root      20   0 10376 2336 1824 S  0.0  0.1   0:00.00 sudo
        30548 root      20   0 10376 2336 1824 S  0.0  0.1   0:00.00 sudo
        30598 root      20   0 10376 2336 1824 S  0.0  0.1   0:00.00 sudo
        30897 root      20   0 10376 2336 1824 S  0.0  0.1   0:00.00 sudo
        30899 root      20   0 10376 2336 1824 S  0.0  0.1   0:00.00 sudo
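    In case it helps anyone answer, this is the sort of inspection I can run against one of those PIDs - 30870 is taken from the top output above, and the commands are just standard ps/proc inspection, nothing server-specific:

        # Show the full command line and the parent PID of the suspicious sudo process
        ps -o pid=,ppid=,args= -p 30870

        # Then see what the parent actually is
        ps -o pid=,args= -p $(ps -o ppid= -p 30870)

        # And where it was started from / which binary it is
        sudo ls -l /proc/30870/cwd /proc/30870/exe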

    Read the article

  • PHP-FPM performing worse than mod_php

    - by lordstyx
    Recently the website I maintain has been growing a lot, and I saw the point coming where I'd want to switch from Apache to nginx, because I kept reading that it performs way better. Now I've done the switch and, I have to say, nginx is keeping up just fine. However, php-fpm is becoming a problem. Where the PHP pages used to take 0.1 seconds to generate, with the same load they now take around 3 seconds! Furthermore, the error.log from nginx is being spammed with errors like:

        upstream timed out (110: Connection timed out) while connecting to upstream, client: ...

    I also tried using unix sockets instead, but those would complain about the following:

        connect() to unix:/tmp/php5-fpm.sock failed (11: Resource temporarily unavailable) while connecting to upstream

    I've fiddled with settings here and there but nothing seems to work. Changing the amount of pm.max_children doesn't seem to help a lot either, but with its current value of 350 it seems to be the lesser of all evils. The server being used has 3 GB RAM (not all of it is free due to a MySQL server also running) along with 2 dual-core processors (4 cores in total). Am I doing something majorly wrong with the settings here, or is the server simply not capable enough?

    EDIT: Here is the nginx server block:

        server {
            listen 80;
            listen [::]:80 default ipv6only=on;
            root /var/www;
            index index.php index.html index.htm;
            server_name localhost;
            location / {
                try_files $uri $uri/ /index.html;
            }
            location /doc/ {
                alias /usr/share/doc/;
                autoindex on;
                allow 127.0.0.1;
                deny all;
            }
            location = /50x.html {
                root /usr/share/nginx/www;
            }
            location ~ \.php$ {
                fastcgi_split_path_info ^(.+\.php)(/.+)$;
                # NOTE: You should have "cgi.fix_pathinfo = 0;" in php.ini
                try_files $uri = 404;
                # With php5-cgi alone:
                fastcgi_pass 127.0.0.1:9000;
                # With php5-fpm:
                #fastcgi_pass unix:/tmp/php5-fpm.sock;
                fastcgi_index index.php;
                include fastcgi_params;
            }
            location ~ /\.ht {
                deny all;
            }
        }

    And the php-fpm pool:

        [www]
        user = www-data
        group = www-data
        listen = 127.0.0.1:9000
        ;listen = /tmp/php5-fpm.sock
        listen.backlog = -1
        pm = dynamic
        pm.max_children = 350
        pm.start_servers = 200
        pm.min_spare_servers = 10
        pm.max_spare_servers = 350
        pm.max_requests = 1536
        rlimit_files = 65536
        rlimit_core = unlimited
        chdir = /
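    In case the numbers matter: this is the rough per-worker memory check I had in mind for sanity-checking pm.max_children against the 3 GB of RAM (the process name "php-fpm" is just my assumption of how the workers show up in ps on this box):

        # Average resident memory per php-fpm worker, plus the worker count and total
        ps -C php-fpm -o rss= | awk '{sum += $1; n++} END {if (n) printf "%d workers, avg %.1f MB each, %.0f MB total\n", n, sum/n/1024, sum/1024}'

        # What's actually left for PHP once MySQL and friends have taken their share
        free -m

    Does it make sense to size pm.max_children and pm.start_servers from figures like these, or is the upstream timeout more likely a backlog/socket problem?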

    Read the article

  • Gentoo box can't cURL or ping after restarting net.eth1

    - by Curlybraces
    Hi all, the following is completely baffling me. We currently have a Gentoo box which acts as our LAMP, DNS and DHCP server. It is assigned a static IP on the network. This server is connected directly to the internet via a BT Business Hub router. The server is also connected to a patch panel/switch port which connects the remaining office (around 10 PCs) to the server. Everything had been plain sailing until the other day, when the server was restarted. For some reason, only partial network access is now available, depending on which ethernet device was last restarted. Restarting net.eth0 allows the office server to cURL, ping, etc. but stops all networked PCs from accessing the internet. Restarting net.eth1 then restores internet access for the network but stops the server from curling, pinging, etc. again. However, even when the server can't ping, curl, etc., I can still make remote SSH and remote MySQL connections from the server's command line to other external servers that we own.

    Here's my route map (the router is 192.168.1.254):

        Kernel IP routing table
        Destination     Gateway         Genmask         Flags Metric Ref    Use Iface
        192.168.1.0     0.0.0.0         255.255.255.0   U     0      0        0 eth0
        192.168.1.0     0.0.0.0         255.255.255.0   U     0      0        0 eth1
        169.254.0.0     0.0.0.0         255.255.0.0     U     0      0        0 eth1
        127.0.0.0       0.0.0.0         255.0.0.0       U     0      0        0 lo
        0.0.0.0         192.168.1.254   0.0.0.0         UG    0      0        0 eth1

    Here's my /etc/conf.d/net:

        iface_eth0="192.168.1.99 broadcast 192.168.1.255 netmask 255.255.255.0"
        iface_eth1="dhcp"

    None of the above has ever been changed, however. Things have just ceased to operate correctly, which makes me think it's a freshly added iptables rule. Here's the iptables filter table:

        Chain INPUT (policy ACCEPT)
        target   prot opt source            destination
        DROP     tcp  --  ##.##.##.##       anywhere      tcp dpt:ssh
        ACCEPT   all  --  anywhere          anywhere      state RELATED,ESTABLISHED
        ACCEPT   all  --  anywhere          anywhere
        ACCEPT   tcp  --  anywhere          anywhere      tcp dpt:2199
        ACCEPT   tcp  --  anywhere          anywhere      tcp dpt:3199
        ACCEPT   tcp  --  ##.###.###.##     anywhere      tcp dpt:http
        ACCEPT   tcp  --  ###.###.##.##     anywhere      tcp dpt:2199
        ACCEPT   tcp  --  ##.###.###.###    anywhere      tcp dpt:http
        ACCEPT   tcp  --  ##.###.##.##      anywhere      tcp dpt:http
        ACCEPT   tcp  --  ##.###.###.###    anywhere      tcp dpt:3128
        ACCEPT   udp  --  ##.###.###.###    anywhere      udp dpt:3128
        ACCEPT   tcp  --  ##.###.###.###    anywhere      tcp dpt:http
        ACCEPT   tcp  --  ##.###.###.###    anywhere      tcp dpt:https

        Chain FORWARD (policy ACCEPT)
        target   prot opt source            destination
        ACCEPT   all  --  anywhere          ##.###.###.##
        DROP     all  --  anywhere          ##.###.###.##
        ACCEPT   all  --  anywhere          anywhere      state NEW,ESTABLISHED

        Chain OUTPUT (policy ACCEPT)
        target   prot opt source            destination
        ACCEPT   udp  --  anywhere          anywhere      udp spt:2199
        ACCEPT   udp  --  anywhere          anywhere      udp spt:4817
        ACCEPT   udp  --  anywhere          anywhere      udp spt:4819
        ACCEPT   udp  --  anywhere          anywhere      udp spt:3199

    Help gratefully appreciated.
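    For what it's worth, these are the checks I plan to run from the box next time it's in the "can't ping" state - 8.8.8.8 is just an arbitrary external address, not one of ours:

        # Which interface and gateway the kernel would actually pick for an outbound packet
        ip route get 8.8.8.8

        # Force a probe out of each NIC in turn, to see which path is really dead
        ping -c 2 -I eth0 192.168.1.254
        ping -c 2 -I eth1 192.168.1.254

        # Watch the packet counters on each iptables rule while testing
        iptables -L -v -n

    Is there anything else worth capturing while it's in that state?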

    Read the article

  • How do you back up 40+ CentOS 5.5 servers?

    - by John Little
    We are embarrassed to ask this question. Apologies for our lack of UNIX expertise. We have inherited 40+ CentOS 5.5 servers and don't know how to back them up. We need low-level, clone-type images so that we could restore the servers from scratch if we had to replace the HDs etc.

    We have used the "dd" command, but we assume this only works if you want to back up one local disk to another, not 40 servers to one server with an external USB HD attached. All 40 servers have a pair of mirrored disks (we don't know if it's HW or SW RAID). Most only have 100MB used. Servers are running Apache, Zend, Tomcat, MySQL etc. Ideally we don't want to have to shut them down to back up (but we could). We assume that standard UNIX commands like tar, cpio, rsync, scp etc. are of no use as they only copy files, not partitions, attributes, groups etc. - i.e. they do not produce a result which can simply be re-imaged onto a new HD to bring the server back from the dead.

    We have a large SAN, a spare Windows box and spare UNIX boxes, but these are only visible to one layer in the network. We have an unused Dell DL2000 monster tape unit, but no software or documentation for it. We have a copy of Symantec Backup Exec, but we have no budget for UNIX client licenses. (The company has negative amounts of money.) We need to be able to initiate the backup remotely, as we can only access the servers in person in an emergency (i.e. to restore).

    Googling returns some applications to do this, e.g. Clonezilla - looks difficult to install and invasive. Mondo only seems to support backup if you are local to the machine. Amanda might be an option, but looks like days/weeks of work to learn and set up? Is there anything built into CentOS, or do we have to go the route of installing, learning and configuring a set of backup software? Any ideas? This must be a pretty standard problem for which googling doesn't give an obvious answer.
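    To make the question concrete, this is the sort of remote pull we were imagining running from the backup box - the hostname, device and paths are made up, and we realise imaging a mounted disk like this may not give a consistent copy:

        # Raw image of a server's first disk, pulled over SSH and compressed on the fly
        ssh root@server01 "dd if=/dev/sda bs=1M | gzip -1" > /backups/server01-sda.img.gz

        # Or a file-level pull that at least preserves ownership, permissions and hard links
        rsync -aH --numeric-ids -e ssh \
            --exclude=/proc --exclude=/sys --exclude=/dev --exclude=/tmp \
            root@server01:/ /backups/server01/

    Is either of these sane at 40-server scale, or is this exactly what tools like Clonezilla or Amanda exist to avoid?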

    Read the article

< Previous Page | 577 578 579 580 581 582 583 584 585 586 587 588  | Next Page >