Search Results

Search found 2301 results on 93 pages for 'schrodingers cat'.


  • PHP crashes with no core file and this message: apc_mmap failed

    - by greg0ire
    Description of the problem: cron PHP processes regularly crash on our production server, which results in mails with the following body:

        PHP Fatal error: PHP Startup: apc_mmap: mmap failed: in Unknown on line 0
        Segmentation fault (core dumped)

    I think the "Segmentation fault (core dumped)" should result in core files being handled by apport and then written to /var/crashes, but the files I can see there have been there since yesterday, although the last crash occurred today:

        -rw-r----- 1 root        whoopsie  1138528 May 22 04:09 _usr_bin_php5.0.crash
        -rw-r----- 1 frontoffice whoopsie  1166373 May 20 18:00 _usr_bin_php5.1005.crash
        -rw-r----- 1 frontoffice whoopsie 81622658 May 22 00:05 _usr_sbin_php5-fpm.1005.crash

    I tried to download the last one anyway and ran gdb /usr/sbin/php5-fpm /tmp/_usr_sbin_php5-fpm.1005.crash, only to be told that the file is not a core file (its format was not recognized). Here is the server's APC configuration:

        cat /etc/php5/cli/conf.d/20-apc.ini
        extension=apc.so
        apc.shm_size=512M
        apc.ttl=3600
        apc.user_ttl=3600
        apc.enable_cli=1

    I'm mostly worried about apc.shm_size: isn't it too high or too low? I understand it has to do with the size of memory segments.

    Questions: What could be the problem? How can I troubleshoot it (how can I get a valid core file)?

    System information:

        free
                     total      used      free    shared   buffers    cached
        Mem:       5081296   4354684    726612         0    374744    959968
        -/+ buffers/cache:   3019972   2061324
        Swap:       522236    516888      5348

        cat /etc/lsb-release
        DISTRIB_ID=Ubuntu
        DISTRIB_RELEASE=12.04
        DISTRIB_CODENAME=precise
        DISTRIB_DESCRIPTION="Ubuntu 12.04.2 LTS"

        php -v
        PHP 5.4.17-1~precise+1 (cli) (built: Jul 17 2013 18:14:06)
        Copyright (c) 1997-2013 The PHP Group
        Zend Engine v2.4.0, Copyright (c) 1998-2013 Zend Technologies

        php -i excerpt:
        Configuration
        apc
        APC Support => enabled
        Version => 3.1.13
        APC Debugging => Disabled
        MMAP Support => Enabled
        MMAP File Mask =>
        Locking type => pthread mutex Locks
        Serialization Support => php
        Revision => $Revision: 327136 $
        Build Date => Nov 20 2012 18:41:36
        Directive => Local Value => Master Value
        apc.cache_by_default => On => On
        apc.canonicalize => On => On
        apc.coredump_unmap => Off => Off
        apc.enable_cli => On => On
        apc.enabled => On => On
        apc.file_md5 => Off => Off
        apc.file_update_protection => 2 => 2
        apc.filters => no value => no value
        apc.gc_ttl => 3600 => 3600
        apc.include_once_override => Off => Off
        apc.lazy_classes => Off => Off
        apc.lazy_functions => Off => Off
        apc.max_file_size => 1M => 1M
        apc.mmap_file_mask => no value => no value
        apc.num_files_hint => 1000 => 1000
        apc.preload_path => no value => no value
        apc.report_autofilter => Off => Off
        apc.rfc1867 => Off => Off
        apc.rfc1867_freq => 0 => 0
        apc.rfc1867_name => APC_UPLOAD_PROGRESS => APC_UPLOAD_PROGRESS
        apc.rfc1867_prefix => upload_ => upload_
        apc.rfc1867_ttl => 3600 => 3600
        apc.serializer => default => default
        apc.shm_segments => 1 => 1
        apc.shm_size => 512M => 512M
        apc.shm_strings_buffer => 4M => 4M
        apc.slam_defense => On => On
        apc.stat => On => On
        apc.stat_ctime => Off => Off
        apc.ttl => 3600 => 3600
        apc.use_request_time => On => On
        apc.user_entries_hint => 4096 => 4096
        apc.user_ttl => 3600 => 3600
        apc.write_lock => On => On

        php -m
        [PHP Modules]
        apc bcmath bz2 calendar Core ctype curl date dba dom ereg exif fileinfo filter ftp gd gettext hash iconv imagick intl json ldap libxml mbstring memcache memcached mhash mysql mysqli openssl pcntl pcre PDO pdo_mysql pdo_pgsql pdo_sqlite pgsql Phar posix Reflection session shmop SimpleXML soap sockets SPL sqlite3 standard sysvmsg sysvsem sysvshm tidy tokenizer wddx xml xmlreader xmlwriter zip zlib
        [Zend Modules]

        ulimit -a
        core file size          (blocks, -c) 0
        data seg size           (kbytes, -d) unlimited
        scheduling priority             (-e) 0
        file size               (blocks, -f) unlimited
        pending signals                 (-i) 39531
        max locked memory       (kbytes, -l) 64
        max memory size         (kbytes, -m) unlimited
        open files                      (-n) 1024
        pipe size            (512 bytes, -p) 8
        POSIX message queues     (bytes, -q) 819200
        real-time priority              (-r) 0
        stack size              (kbytes, -s) 8192
        cpu time               (seconds, -t) unlimited
        max user processes              (-u) 39531
        virtual memory          (kbytes, -v) unlimited
        file locks                      (-x) unlimited
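    The ulimit output above already shows the likely reason there is no core file: "core file size (blocks, -c) 0", so the kernel never writes one. A minimal sketch of enabling cores for a test run, assuming a stock Ubuntu 12.04 layout (the script path and PID are placeholders):

        # allow cores for this shell and anything started from it
        ulimit -c unlimited
        # write cores to a predictable place instead of piping them to apport
        echo '/tmp/core.%e.%p' | sudo tee /proc/sys/kernel/core_pattern
        # re-run the failing cron command from this shell, then inspect the core
        php /path/to/failing/script.php    # hypothetical path
        gdb /usr/bin/php5 /tmp/core.php5.<PID>

    For the php5-fpm crashes, the pool's rlimit_core directive plays the same role as ulimit -c (an assumption about how their FPM pools are configured).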


  • CentOS: revert python version to the original

    - by NP
    Hi all, I installed Python 2.6 on CentOS 5.4 using instructions I found online. However, I realized it was a bad move and I need to revert to 2.4, which was there originally. Can anyone guide me on how to undo what I did? In particular, I am not sure how to undo this step:

        Configure ld to find your shared libs:
        $ cat /etc/ld.so.conf.d/opt-python2.5.conf
        /opt/python2.5/lib
        (hit enter)
        (hit ctrl-d to return to shell)
        $ ldconfig

    I tried removing the alias and the symlink, and even re-aliasing python to /usr/bin/python, but when I try to install an RPM I get this error:

        error: Failed dependencies:
            libpython2.4.so.1.0 is needed by ...

    Thanks in advance.
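    A minimal sketch of undoing the ld.so change quoted above, assuming nothing else was added under /etc/ld.so.conf.d and that the original 2.4 binary still exists:

        # drop the loader config that points at the new python, rebuild the cache
        rm /etc/ld.so.conf.d/opt-python2.5.conf
        ldconfig
        # point the python symlink back at the system 2.4 binary
        ln -sf /usr/bin/python2.4 /usr/bin/python
        python -V    # should report 2.4.x

    If the RPM dependency error persists after that, the python 2.4 package itself may have been damaged and might need reinstalling from the CentOS repositories.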


  • Dumping a Linux console scrollback buffer?

    - by Gerald Combs
    We would like to save the output of a program run on a Linux console which spans many lines. Unfortunately it wasn't logged or run under screen, or any other way that lets us easily capture the output. The best method we've been able to come up with so far is:

        1. Log into the machine via a separate SSH session.
        2. In the console session, page to the top of the buffer.
        3. Repeat until done:
           - In the SSH session, run "cat /dev/vcs >> screendump.txt"
           - In the console session, page down one screen.
        4. Dump the final screen in the SSH session.

    Is there a better way? It seems like if the VC memory were contiguous and you knew where it was, you could use dd to pull the console text directly out of kernel memory and into a file.
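    A sketch of the loop described above, run from the SSH session. It assumes the program ran on the first virtual console, whose visible text is readable via /dev/vcs1 (/dev/vcs is the currently active console), and that someone pages the console down one screen between iterations:

        : > screendump.txt
        while true; do
            cat /dev/vcs1 >> screendump.txt
            # page down one screen at the console, then press Enter here
            read -p "Paged down? Enter for next screen (Ctrl-C to stop) " _
        done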


  • sudo ENV_KEEP not always preserving

    - by mafro
    When I run sudo -s, my environment is preserved. However, when running a simple sudo <command>, it appears not to be. The contents of my sudoers file:

        mafro@ip-10-xx-xx-250:~ > sudo cat /etc/sudoers.d/mafro
        Defaults env_reset
        Defaults env_keep += "HOME"
        mafro ALL=(ALL) NOPASSWD:ALL

    Using sudo -s, the ll alias is available:

        mafro@ip-10-xx-xx-250:~ > sudo -s
        root@ip-10-xx-xx-250:~ > ll
        total 8K
        drwxrwxr-x  2 mafro dev 4.0K Jun 9 23:59 bin
        drwxr-xr-x 20 mafro dev 4.0K Jun 9 23:59 dotfiles

    Using straight sudo, it is not:

        mafro@ip-10-xx-xx-250:~ > sudo ll
        sudo: ll: command not found

    What is happening here?
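    The clue is that ll is a shell alias, not an environment variable: env_keep only carries environment variables across, and plain sudo <command> execs the command directly without starting a shell, so rc-file aliases never exist in that context (sudo -s starts a shell, which is why it works there). A small sketch of the usual workaround, relying on the bash/zsh rule that a trailing blank in an alias makes the next word eligible for alias expansion too:

        # in ~/.bashrc or ~/.zshrc; the trailing space is the whole trick
        alias sudo='sudo '
        sudo ll    # the shell now expands ll before handing it to sudo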


  • pdftk utility and batch file

    - by duhaas
    I can't for the life of me figure out what I'm missing. I have a batch file that, when run from my desk against a mapped drive, works just fine. When I run the same exact batch file on the server itself (Windows 2003, where the mapped drive lives), it doesn't run, which makes me think I have a syntax problem. My desktop, where the batch file works, is Windows 7. I just don't understand what's going on, and my eyes are having a hard time spotting what might be different. Here is the batch file, nothing crazy:

        FOR /D /r %%G in ("*") DO pdftk "%%G\*.pdf" cat output "%%G\Report.pdf"


  • Varnish does not recognize req.hash

    - by Yogesh
    I have Varnish 3.0.2 on Red Hat, and service varnish start fails after I added a vcl_hash section. I ran varnishd and then loaded the VCL using vcl.load:

        vcl.load default default.vcl
        Message from VCC-compiler:
        Unknown variable 'req.hash'
        At: ('input' Line 24 Pos 9)
                set req.hash += req.url;
        --------########------------
        Running VCC-compiler failed, exit 1

        cat default.vcl
        backend default {
            .host = "127.0.0.1";
            .port = "8080";
        }

        sub vcl_recv {
            if (req.url ~ "\.(css|js|jpg|jpeg|png|swf|ico|gif|jsp)$") {
                unset req.http.cookie;
            }
        }

        sub vcl_hash {
            set req.hash += req.url;
            set req.hash += req.http.host;
            if (req.httpCookie == "JSESSIONID") {
                set req.http.X-Varnish-Hashed-On =
                    regsub(req.http.Cookie, "^.*?JSESSIONID=([a-zA-z0-9]{32}\.[a-zA-Z0-9]+)([\s$\n])*.*?$", "\1");
                set req.hash += req.http.X-Varnish-Hashed-On;
            }
            return(hash);
        }

    What could be wrong?
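    That vcl_hash is written in VCL 2.x syntax: in Varnish 3.x, set req.hash += was replaced by the hash_data() function, which is why the compiler reports an unknown variable. A sketch of the same subroutine in 3.0 syntax (note that req.httpCookie in the original also looks like a typo for a test against req.http.Cookie):

        sub vcl_hash {
            hash_data(req.url);
            hash_data(req.http.host);
            if (req.http.Cookie ~ "JSESSIONID") {
                set req.http.X-Varnish-Hashed-On =
                    regsub(req.http.Cookie,
                           "^.*?JSESSIONID=([a-zA-Z0-9]{32}\.[a-zA-Z0-9]+)([\s$\n])*.*?$", "\1");
                hash_data(req.http.X-Varnish-Hashed-On);
            }
            return (hash);
        }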


  • ProFTPD mod_tls is not loaded properly?

    - by develroot
    The server is running CentOS 5 with DirectAdmin. I am trying to get ProFTPD to work over TLS, but proftpd seems to lack mod_tls support, even though it was supposedly compiled with mod_tls:

        # proftpd -l
        Compiled-in modules:
          mod_core.c
          mod_xfer.c
          mod_auth_unix.c
          mod_auth_file.c
          mod_auth.c
          mod_ls.c
          mod_log.c
          mod_site.c
          mod_delay.c
          mod_facts.c
          mod_ident.c
          mod_ratio.c
          mod_readme.c
          mod_cap.c

    As you can see there is no mod_tls.c. However, the DirectAdmin configuration file for proftpd suggests that it was built with TLS support:

        # cat /usr/local/directadmin/custombuild/configure/proftpd/configure.proftpd
        #!/bin/sh
        install_user=ftp \
        install_group=ftp \
        ./configure \
            --prefix=/usr \
            --sysconfdir=/etc \
            --localstatedir=/var/run \
            --mandir=/usr/share/man \
            --without-pam \
            --disable-auth-pam \
            --enable-nls \
            --with-modules=mod_ratio:mod_readme:mod_tls

    And all I get when I try to connect over FTPS using FileZilla is:

        Response: 220 ProFTPD 1.3.3c Server ready.
        Command:  AUTH TLS
        Response: 500 AUTH not understood
        Command:  AUTH SSL
        Response: 500 AUTH not understood

    Am I missing something? Thanks.
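    Since proftpd -l is authoritative, the running binary simply was not built from that configure file. A hedged sketch of rebuilding through DirectAdmin's custombuild so the configure options actually take effect (the ./build target name is an assumption based on custombuild's usual per-package targets):

        cd /usr/local/directadmin/custombuild
        ./build proftpd            # assumed target; rebuilds using configure.proftpd
        proftpd -l | grep -i tls   # mod_tls.c should now appear in the list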


  • chown: changing ownership of `.': Invalid argument

    - by Pierre
    I'm trying to install some new files on our new server while our sysadmin is on holiday. Here is my df:

        # df -h
        Filesystem              Size  Used Avail Use% Mounted on
        /dev/sdb3               273G   11G  248G   5% /
        tmpfs                    48G  260K   48G   1% /dev/shm
        /dev/sdb1               485M  187M  273M  41% /boot
        xxx.xx.xxx.xxx:/commun   63T  2.2T   61T   4% /commun

    As root, I can create a new directory and run chown under /home/lindenb:

        # cd /home/lindenb/
        # mkdir X
        # chown lindenb X

    but I cannot run the same commands under /commun:

        # cd /commun/data/users/lindenb/
        # mkdir X
        # chown lindenb X
        chown: changing ownership of `X': Invalid argument

    Why? How can I fix this?

    Update, mount output:

        /dev/sdb3 on / type ext4 (rw)
        proc on /proc type proc (rw)
        sysfs on /sys type sysfs (rw)
        devpts on /dev/pts type devpts (rw,gid=5,mode=620)
        tmpfs on /dev/shm type tmpfs (rw)
        /dev/sdb1 on /boot type ext4 (rw)
        none on /proc/sys/fs/binfmt_misc type binfmt_misc (rw)
        sunrpc on /var/lib/nfs/rpc_pipefs type rpc_pipefs (rw)
        xxx.xx.xxx.xxx:/commun on /commun type nfs (rw,noatime,noac,hard,intr,vers=4,addr=xxx.xx.xxx.xxx,clientaddr=xxx.xx.xxx.xxx)

    Version:

        $ cat /etc/redhat-release
        CentOS release 6.3 (Final)
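    The mount output is the telling part: /commun is NFS with vers=4. On NFSv4, chown sends a user@domain name rather than a numeric uid, and when the idmapd domains on client and server disagree, the server commonly rejects the ownership change with "Invalid argument". A hedged sketch of what to check (file locations are the usual CentOS 6 ones):

        # the Domain line must match on both client and server
        grep -i '^ *Domain' /etc/idmapd.conf
        # the mapping daemon must be running on the client
        service rpcidmapd status
        # as a quick experiment, an NFSv3 mount bypasses idmapd entirely
        # (a plain remount usually cannot switch versions; unmount first)
        umount /commun && mount -o vers=3 xxx.xx.xxx.xxx:/commun /commun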


  • ^C not working in zsh on Mac OS X

    - by Vitaly Kushner
    Ctrl-C stopped working for me in the terminal when using zsh (on Mac OS X). I didn't notice the exact moment it happened, so I can't be sure what caused it. I haven't updated zsh in a while, though, and didn't touch .zshrc (I keep it in a repo: http://github.com/astrails/dotzsh). If I run bash, ^C works in it. If I run any command, like cat, ^C will work to stop it too. But inside zsh it just doesn't do anything.

        bindkey | grep \\^C
        "^B"-"^C" self-insert

    zsh 4.3.10 (i386-apple-darwin10.4.3), installed through ports (zsh-devel @4.3.10_0+doc+examples+mp_completion+pcre), Mac OS 10.6.6.
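    The bindkey output above shows zle treating ^C as self-insert, i.e. the line editor swallows it as a literal keypress instead of the tty driver delivering SIGINT. A sketch of the usual checks and fixes (which one applies depends on where the setting got clobbered):

        # what does the terminal think the interrupt character is?
        stty -a | grep intr
        # restore it if it shows intr = <undef>
        stty intr '^C'
        # and remove the stray zle binding so ^C is no longer self-inserted
        bindkey -r '^C'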


  • nc or socat: How to read data from remote:/dev/ttyACM0?

    - by AndreasT
    I have a device on a remote computer at /dev/ttyACM0, and I want to read its data on my own computer. I can connect to the remote machine over ssh. Unfortunately I am an nc/socat rookie and no howto covered this. Semantically, I want something like:

        cat remote:/dev/ttyACM0

    The remote system runs a limited Linux and I can't install packages (socat is not available there; nc is). Super cool would be to have a forwarded device, local:/dev/ttySOCK0, pointing to remote:/dev/ttyACM0. Thanks for any help.
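    Since ssh works, the simplest version needs neither nc nor remote socat; and because socat is available locally, it can also create something like the forwarded device described above. A sketch (the pty link path is a placeholder, and the raw/echo settings are assumptions about the device):

        # one-way capture of the remote device's output
        ssh user@remote 'cat /dev/ttyACM0' > ttyACM0.log

        # or expose it as a local pty that other programs can open
        socat PTY,link=/tmp/ttySOCK0,raw,echo=0 EXEC:'ssh user@remote cat /dev/ttyACM0'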


  • Use netcat as a proxy to log traffic

    - by deephacks
    I want to use netcat as a proxy that logs HTTP requests and responses to files, then tail those files to inspect traffic; think wireshark. I tried the following, where fifo is a named pipe, in and out are files, the netcat proxy listens on port 8080, and the server is on port 8081:

        while true; do
            cat fifo | nc -l -p 8080 | tee -a in | nc localhost 8081 | tee -a out > fifo
        done

    Problems:

        - Netcat stops responding after the first request (is the while loop being ignored?).
        - Netcat fails with "localhost [127.0.0.1] 8081 (tproxy): Connection refused" if no server is available on 8081.

    Question: is it possible to connect to 8081 "lazily", only when a request is made? I.e. I do not want to have to have 8081 running when netcat is started.
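    For what it's worth, the second nc in that pipeline dials port 8081 the moment the pipeline starts, which is exactly why the server has to be up front. A socat-based sketch that behaves lazily, since socat's forking listener only connects out once a client has actually been accepted (-v writes a text dump of both directions to stderr):

        socat -v TCP-LISTEN:8080,reuseaddr,fork TCP:localhost:8081 2>> traffic.log
        tail -f traffic.log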


  • An SQLite/STDIN Conundrum, Specific to AIX

    - by mikfreedman
    Hi there! I've been playing around with SQLite at work, specifically trying to get the sqlite3 command line tool to accept stdin instead of a file. Sounds easy enough; on Linux you can execute a command like:

        echo 'test' | sqlite3 test.db '.import /dev/stdin test'

    Unfortunately our machines at work run AIX (5 & 6), and as far as I can tell there is no equivalent to the virtual file /dev/stdin. I managed to hack together an equivalent command that works on AIX using a temporary file:

        echo 'test' | cat - > /tmp/blah ; sqlite3 test.db '.import /tmp/blah test' ; rm /tmp/blah

    Now, does it need to use STDIN? Isn't this temporary file thing enough? Probably, but I was hoping someone with better unix-fu had a more elegant solution. Note: the data I would like to import is only provided via STDOUT, which is what the echo 'test' stands in for.
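    The temp-file hack can at least be made safe against name collisions and leftover files. A sketch of the same idea as a small script (whether this AIX userland ships mktemp is an assumption, hence the comment):

        #!/bin/sh
        # stage whatever is piped in, import it, and clean up on exit
        tmp=$(mktemp /tmp/sqlite_import.XXXXXX) || exit 1   # assumes mktemp exists on this AIX
        trap 'rm -f "$tmp"' EXIT
        cat - > "$tmp"
        sqlite3 test.db ".import $tmp test"

    Usage would be: some_producer | ./import.sh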


  • Add a remote printer over ssh on OS X?

    - by GradGuy
    I have a printer at my office that is connected to a local network; my Linux box at work can see it, but it is not visible to the outside world. I have found two options for printing to it from my MacBook Air:

        1. An ssh tunnel via the CLI: cat file.pdf | ssh user@linuxbox lpr
        2. With Chrome installed on the Linux box, using the Google Cloud Print service on the remote box plus Automator on the Mac, I can add the printer to the Cmd+P dialog.

    I like the first method since it does not require Chrome to be installed, and the second since it allows using Cmd+P inside all applications. Is there a way to combine them by using Automator to run the first command-line script? And what about port forwarding: is it possible to forward the remote CUPS port 631 to a local port and then add the printer normally? What other methods would you recommend?
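    The port-forwarding idea works when the Linux box exposes the printer through CUPS. A sketch (the local port, printer name, and queue name are placeholders):

        # keep a background tunnel from local port 6631 to the office CUPS daemon
        ssh -f -N -L 6631:localhost:631 user@linuxbox
        # register the tunneled queue as a local printer
        lpadmin -p office -E -v ipp://localhost:6631/printers/QUEUE_NAME

    Once added this way, the printer appears in the normal Cmd+P dialog, which combines the advantages of both options above.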


  • Why do I sometimes see weird backspace behaviour in my shell?

    - by Lazer
    I use the bash shell, and sometimes, all of a sudden, my Backspace key stops working (when this happens, Ctrl + Backspace still works fine). I am not sure why this happens, but it also carries over to any vim sessions I start from that shell. To my surprise, getting a fresh shell does not help, and the problem goes away as abruptly as it started. This is what the typed characters look like; each Backspace keypress shows up as ^? on the shell:

        $ cat filem^?namr^?e

    Does anybody have a clue what might be happening? How can I restore the normal behaviour?
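    A literal ^? echo usually means the tty's erase character no longer matches what the terminal sends for Backspace, often after ssh-ing to or from a box with a different terminal setup, which would also explain why it comes and goes. A sketch of the usual fixes:

        # see what erase is currently set to
        stty -a | grep erase
        # tell the tty driver that ^? (DEL) is the erase character
        stty erase '^?'
        # or reset the whole terminal state to sane defaults
        stty sane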


  • Installing ethernet drivers with no install package

    - by Josh
    I recently got my new Sony Vaio laptop and formatted it with Windows 7 Ultimate. I would like to use the Windows Easy Transfer tool over a network connection to transfer some files over from my desktop PC. Before I can do that, I need to install the Ethernet LAN drivers (I'm currently using the built-in WiFi). I downloaded the original LAN driver that came with my Vaio from the Sony website: http://support.vaio.sony.eu/computing/vaio/downloads/preinstalled/index.aspx?l=en_GB&m=VPCEB1Z0E_B [scroll down to the 450KB Ethernet driver]. When I unzip the package, these files are inside:

        yk62x64.cat
        yk62x64.dll
        yk62x64.inf
        yk62x64.sys

    As you can see, there is no installer. Can anyone guide me through how to properly install these drivers? I have thought of using Google, but I'm clueless as to what query to use. Thanks.


  • Exim queue in WHM

    - by Xobb
    Hi fellas, I've got a CentOS server with WHM; the mail server is exim. I need exim to put all messages in the queue rather than sending them directly, so I added the queue_only option to the exim configuration, and messages are now collected in the queue. Afterwards I found out that something is calling exim -q to process the queue every once in a while. I found the following cron job, which I believed was being used to process the queue:

        0 6 * * * /scripts/exim_tidydb > /dev/null 2>&1

    I suspect that script was installed alongside WHM. Naturally I commented it out and expected everything to work, but the queue still gets processed once in a while. Am I missing anything? What else may be processing my exim queue? Here is cat /etc/exim.conf | grep queue:

        queue_only
        deliver_queue_load_max = 3

    Thanks
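    exim_tidydb only tidies exim's hints databases; it never delivers mail, so that cron entry was a red herring. The more likely culprit is a queue-runner daemon: WHM/cPanel typically starts exim with a -q<interval> flag, so the queue is processed by the main daemon itself. A sketch of how to confirm (the exact interval shown is an assumption):

        # a queue runner shows up as the exim daemon carrying a -q interval flag
        ps -ef | grep '[e]xim.*-q'
        # typical output would be something like: /usr/sbin/exim -bd -q60m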


  • How to upgrade libxml on CentOS

    - by Radek Simko
    I have the following version of CentOS:

        $ cat /etc/issue
        CentOS release 5.5 (Final)
        Kernel \r on an \m

    and the following version of libxml:

        $ php -i | grep libxml
        libxml Version => 2.6.26
        libxml libxml2 Version => 2.6.26
        libxslt compiled against libxml Version => 2.6.26

    I need a newer version of libxml (primarily for use in PHP, but that shouldn't matter). I installed a newer version from source:

        wget ftp://xmlsoft.org/libxml2/libxml2-2.7.2.tar.gz
        tar -xvf libxml2-2.7.2.tar.gz
        cd libxml2-2.7.2
        ./configure
        make
        sudo make install

    but I am unable to get PHP to use it; it still reports the old version:

        libxml Version => 2.6.26
        libxml libxml2 Version => 2.6.26
        libxslt compiled against libxml Version => 2.6.26

    What else do I need to do to make the new version work with PHP?
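    The source build above installs into /usr/local by default, but the PHP binary was linked against the system libxml2 in /usr/lib when it was built, so installing a second copy changes nothing for it; PHP has to be rebuilt against the new library. A hedged sketch of the relevant step when compiling PHP from source (the trailing ellipsis stands for whatever configure options the existing build used):

        # when configuring PHP, point it at the freshly installed libxml2
        ./configure --with-libxml-dir=/usr/local ...
        make && sudo make install
        # afterwards, confirm which shared object the binary really loads
        ldd "$(which php)" | grep libxml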


  • Custom kernel with NFS client support

    - by Vaibhav
    I'm trying to build a custom Linux kernel (following an online guide). I have successfully built the kernel and booted into it, and now I want to mount an NFS share, so I enabled NFS client support in menuconfig.

    Update: I'm trying to mount an NFS share from the newly built kernel, having added NFS client support to it. The following command (run on the new kernel) shows:

        # cat /proc/filesystems
        nodev   nfs
        nodev   usbfs
                ext3
                vfat
        ....

    This shows that the kernel supports the NFS filesystem, but the mount command fails to mount the share, which works fine on other machines. Help will be appreciated.
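    Since /proc/filesystems already lists nfs, the client code is compiled in, so the failure is more likely in the surrounding pieces: NFSv3 mounts also need working RPC (a reachable portmap/rpcbind, plus lockd/statd for locking). A hedged sketch of next checks, run from the kernel build tree (the server path is a placeholder):

        # which NFS pieces did this kernel actually get built with?
        grep -E 'CONFIG_NFS_FS|CONFIG_NFS_V3|CONFIG_NFS_V4|CONFIG_LOCKD' .config
        # force a version and ask mount to be verbose to see the real error
        mount -v -t nfs -o nfsvers=3 server:/export /mnt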


  • Hostname problems in CentOS 5.5

    - by spoon16
    I just set up a CentOS 5.5 machine on my local network and modified the hostname by editing the /etc/sysconfig/network file. When I'm logged in locally, the change to the hostname is reflected and seems to work fine, but when I open an SSH session via PuTTY from Windows, this is what I see at the prompt:

        [root@? ~]# cat /etc/sysconfig/network
        NETWORKING=yes
        NETWORKING_IPV6=yes
        HOSTNAME=mini.local
        [root@? ~]# sysctl kernel.hostname
        kernel.hostname = ?
        [root@? ~]# hostname
        ?
        [root@? ~]# hostname -f
        hostname: Unknown server error

    A couple of other symptoms that may help with troubleshooting: I can ping the CentOS box from my Windows machine by IP but not by hostname, and my Netgear router does not display the hostname when I view "Connected Devices", though I do see the MAC address and the proper IP listed. How can I make the hostname propagate properly throughout my network?
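    /etc/sysconfig/network is only read at boot, so editing it does not change the running system, and the ? in the prompt says the kernel's runtime hostname was never actually set in this session; the hostname -f error then follows because the name cannot be resolved. A sketch of setting it by hand (the IP is a placeholder):

        # set the runtime hostname to match the config file
        hostname mini.local
        # make the name resolvable locally so hostname -f works
        echo '192.168.1.50   mini.local mini' >> /etc/hosts
        hostname && hostname -f

    The router's device list is populated from DHCP, a separate mechanism; if the box uses DHCP, setting DHCP_HOSTNAME in the interface's ifcfg file is what sends the name there (the interface name is an assumption about this setup).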


  • Munin "Available entropy" when using address space layout randomization

    - by clawspoon
    Having just configured Munin for statistics logging on my Gentoo server (hardened profile), I noticed that my "available entropy" graph is consistently in the 200-300 range. This seemed way too low, so I checked it manually:

        $ cat /proc/sys/kernel/random/entropy_avail
        3544

    Odd: consistently very low values in Munin, but a practically full pool when checking by hand. After thinking about the problem for a while, I concluded that the cause is probably address space layout randomization, which consumes entropy when running commands/programs. Since Munin runs a whole slew of programs, all the entropy is used up by the time Munin measures how much is left, resulting in the low values. Does anyone have any experience with this? How can it be avoided?
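    If short-lived processes really are draining the pool faster than the kernel refills it, the usual remedy is a userspace entropy feeder such as haveged. A hedged sketch (the Gentoo package atom is an assumption; adjust to the local tree):

        emerge sys-apps/haveged          # assumed package atom
        rc-update add haveged default
        /etc/init.d/haveged start
        # watch the pool while munin-cron runs to confirm the graph recovers
        watch -n1 cat /proc/sys/kernel/random/entropy_avail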


  • Logrotate succeeds, but the original file goes back to its original size

    - by drewrockshard
    Has anyone had issues with logrotate before where a log file gets rotated and then goes back to the same size it originally was? Here are my findings.

    Logrotate configuration:

        /var/log/mylogfile.log {
            rotate 7
            daily
            compress
            olddir /log_archives
            missingok
            notifempty
            copytruncate
        }

    Verbose output of logrotate:

        copying /var/log/mylogfile.log to /log_archives/mylogfile.log.1
        truncating /var/log/mylogfile.log
        compressing log with: /bin/gzip
        removing old log /log_archives/mylogfile.log.8.gz

    Log file after the truncate happens:

        [root@server ~]# ls -lh /var/log/mylogfile.log
        -rw-rw-r-- 1 part1 part1 0 Jan 11 17:32 /var/log/mylogfile.log

    Literally seconds later:

        [root@server ~]# ls -lh /var/log/mylogfile.log
        -rw-rw-r-- 1 part1 part1 3.5G Jan 11 17:32 /var/log/mylogfile.log

    RHEL version:

        [root@server ~]# cat /etc/redhat-release
        Red Hat Enterprise Linux ES release 4 (Nahant Update 4)

    Logrotate version:

        [root@DAA21529WWW370 ~]# rpm -qa | grep logrotate
        logrotate-3.7.1-10.RHEL4

    A few notes: the service can't be restarted on the fly, which is why I'm using copytruncate. Logs are rotating every night, judging by the olddir directory having log files in it from each night.
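    One classic copytruncate artifact worth ruling out first: if the writing process did not open the log with O_APPEND, it keeps its old file offset after the truncate, so the next write lands at the old offset and the file becomes sparse. ls then shows the old apparent size while almost no disk is actually used. A quick check:

        # apparent size (what ls reports) vs. blocks actually allocated
        ls -lh /var/log/mylogfile.log
        du -h  /var/log/mylogfile.log
        # if du is tiny, the "regrowth" is just a seek hole, not real data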


  • VFS: file-max limit 1231582 reached

    - by Rick Koshi
    I'm running a Linux 2.6.36 kernel, and I'm seeing some random errors, like:

        ls: error while loading shared libraries: libpthread.so.0: cannot open shared object file: Error 23

    Yes, my system can't consistently run an 'ls' command. :( I note several errors in my dmesg output:

        # dmesg | tail
        [2808967.543203] EXT4-fs (sda3): re-mounted. Opts: (null)
        [2837776.220605] xv[14450] general protection ip:7f20c20c6ac6 sp:7fff3641b368 error:0 in libpng14.so.14.4.0[7f20c20a9000+29000]
        [4931344.685302] EXT4-fs (md16): re-mounted. Opts: (null)
        [4982666.631444] VFS: file-max limit 1231582 reached
        [4982666.764240] VFS: file-max limit 1231582 reached
        [4982767.360574] VFS: file-max limit 1231582 reached
        [4982901.904628] VFS: file-max limit 1231582 reached
        [4982964.930556] VFS: file-max limit 1231582 reached
        [4982966.352170] VFS: file-max limit 1231582 reached
        [4982966.649195] top[31095]: segfault at 14 ip 00007fd6ace42700 sp 00007fff20746530 error 6 in libproc-3.2.8.so[7fd6ace3b000+e000]

    Obviously, the file-max errors look suspicious, being clustered together and recent:

        # cat /proc/sys/fs/file-max
        1231582
        # cat /proc/sys/fs/file-nr
        1231712 0       1231582

    That also looks a bit odd to me, but the thing is, there's no way I have 1.2 million files open on this system. I'm the only one using it, and it's not visible to anyone outside the local network:

        # lsof | wc
          16046  148253 1882901
        # ps -ef | wc
            574    6104   44260

    I saw some documentation saying:

        file-max & file-nr: The kernel allocates file handles dynamically, but as yet it doesn't free them again. The value in file-max denotes the maximum number of file handles that the Linux kernel will allocate. When you get lots of error messages about running out of file handles, you might want to increase this limit. Historically, the three values in file-nr denoted the number of allocated file handles, the number of allocated but unused file handles, and the maximum number of file handles. Linux 2.6 always reports 0 as the number of free file handles -- this is not an error, it just means that the number of allocated file handles exactly matches the number of used file handles. Attempts to allocate more file descriptors than file-max are reported with printk, look for "VFS: file-max limit reached".

    My first reading of this is that the kernel basically has a built-in file descriptor leak, but I find that very hard to believe. It would imply that any system in active use needs to be rebooted every so often to free up the file descriptors. As I said, I can't believe this would be true, since it's normal for Linux systems to stay up for months (even years) at a time. On the other hand, I also can't believe that my nearly-idle system is holding over a million files open. Does anyone have any ideas, either for fixes or further diagnosis? I could, of course, just reboot the system, but I don't want this to be a recurring problem every few weeks. As a stopgap measure, I've quit Firefox, which was accounting for almost 2000 lines of lsof output (!) even though I only had one window open, and now I can run 'ls' again, but I doubt that will fix the problem for long. (edit: Oops, spoke too soon. By the time I finished typing out this question, the symptom was/is back.) Thanks in advance for any help.

    Update: My system was basically unusable, so I decided I had no option but to reboot. But before I did, I carefully quit one process at a time, checking /proc/sys/fs/file-nr after each termination. I found that, predictably, the number of open files gradually went down as I closed things. Unfortunately, it wasn't a large effect: yes, I was able to clear up 5000-10000 open files, but there were still over 1.2 million left. I shut down just about everything: all interactive shells except the one ssh session I left open to finish closing down, httpd, even nfs service. Basically everything in the process table that wasn't a kernel process, and there was still an appalling number of files apparently left open. After the reboot, /proc/sys/fs/file-nr showed about 2000 files open, which is much more reasonable. Starting up two Xvnc sessions as usual, along with the dozen or so monitoring windows I like to keep open, brought the total up to about 4000 files. I can see nothing wrong with that, of course, but I've obviously failed to identify the root cause. I'm still looking for ideas, since I definitely expect it to happen again.

    Update, the next day: I watched the system carefully and discovered that /proc/sys/fs/file-nr showed a growth of about 900 open files per hour. I shut down the system's only NFS client for the night, and the growth stopped. Mind you, it didn't free up the resources, but it did at least stop consuming more. Is this a known bug with NFS? I'll be bringing the NFS client back online today, and I'll narrow it down further. If anyone is familiar with this behavior, feel free to jump in with "Yeah, NFS4 has this problem, go back to NFS3" or something like that.
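    While hunting the leak, two stopgaps can help: sampling file-nr on an interval to correlate growth with a workload (which is how the ~900/hour NFS figure above was found), and raising the ceiling for temporary headroom. A sketch:

        # sample allocated handles once a minute; leave running during suspect workloads
        while sleep 60; do date '+%T'; cat /proc/sys/fs/file-nr; done | tee file-nr.log
        # temporary headroom until the real cause is fixed
        sysctl -w fs.file-max=2500000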


  • Getting packets from one port to another on a Dell PowerConnect 2824 switch

    - by Arvo Bowen
    I have a Dell PowerConnect 2824 with a Cat 5 cable connected from port 1 to port 23. Port 1 is reserved for VLAN 1 (the only VLAN that can manage the switch), and ports 18-23 belong to VLAN 112. The switch is set up with IP 10.71.3.5/27, and a test machine with IP 10.71.3.30/27 is plugged into port 22. For some reason I cannot ping 10.71.3.5 from the test machine. Note: when I ping the server plugged into port 21 (IP 10.71.3.7/27, also VLAN 112), I get responses just fine; and when I plug the test machine directly into port 1, I can ping 10.71.3.5 just fine.

    Quick recap:

        Switch IP: 10.71.3.5
        Port 1  - dedicated to management - (VLAN 1)
        Port 21 - SERVER (10.71.3.7/27) - (VLAN 112)
        Port 22 - test machine (10.71.3.30/27) - (VLAN 112)
        Port 23 - dedicated to management (to hop over to VLAN 1 from VLAN 112) - (VLAN 112)


  • Running phpmyadmin and suphp

    - by thor
    I have a Debian Lenny web server running apache2 with libapache2-mod-suphp. Unfortunately, suphp makes it impossible to use phpmyadmin: phpmyadmin is installed in /usr/share/phpmyadmin and owned by root, and suphp disables its engine in that directory:

        $ cat /etc/apache2/mods-enabled/suphp.conf
        <IfModule mod_suphp.c>
            AddType application/x-httpd-php .php .php3 .php4 .php5 .phtml
            suPHP_AddHandler application/x-httpd-php

            <Directory />
                suPHP_Engine on
            </Directory>

            # By default, disable suPHP for debian packaged web applications as files
            # are owned by root and cannot be executed by suPHP because of min_uid.
            <Directory /usr/share>
                suPHP_Engine off
            </Directory>
        </IfModule>

    Is there a way to enable the system phpmyadmin (maybe through the standard libapache2-mod-php5) while still using suphp? How?
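    A hedged sketch of the approach the question itself suggests: leave suphp in charge everywhere else, but let mod_php serve the root-owned phpmyadmin tree. This assumes libapache2-mod-php5 is installed and enabled alongside suphp; the two modules can fight over the handler, so treat this as a starting point rather than a drop-in fix:

        # e.g. in /etc/apache2/conf.d/phpmyadmin-modphp (hypothetical file name)
        <Directory /usr/share/phpmyadmin>
            suPHP_Engine off
            # hand .php in this directory to mod_php instead of suphp
            AddHandler application/x-httpd-php .php
        </Directory>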


  • Linux ISO build error

    - by Neil
    I was customizing a Linux install, and when I try to build the ISO back I get this error:

        $ mkisofs -r -o rhel.iso -b isolinux/isolinux.bin -c isolinux/boot.cat ./
        INFO: UTF-8 character encoding detected by locale settings.
              Assuming UTF-8 encoded filenames on source filesystem,
              use -input-charset to override.
        Unknown file type (unallocated) ./.. - ignoring and continuing.
        Using RELEA000.HTM;1 for /RELEASE-NOTES-pt_BR.html (RELEASE-NOTES-U1-pt_BR.html)
        Size of boot image is 20 sectors -> mkisofs: Error - boot image './isolinux/isolinux.bin' has not an allowable size.

    I didn't change isolinux.bin, so why do I get this error?
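    Without further flags, mkisofs treats the El Torito boot image as a floppy-emulation image, which must be exactly a floppy size (1200, 1440, or 2880 KB); isolinux.bin is a no-emulation boot loader, so its 20-sector size is rejected. A sketch of the usual isolinux invocation with the no-emulation options added:

        mkisofs -r -o rhel.iso \
            -b isolinux/isolinux.bin -c isolinux/boot.cat \
            -no-emul-boot -boot-load-size 4 -boot-info-table \
            ./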

