Search Results

Search found 24755 results on 991 pages for 'linux mom'.

  • How to get a list of Dovecot IMAP users

    - by Colt McCormack
    How do you get a list of the users on a Dovecot email server who connect via IMAP (as opposed to POP)? Our server is set up to authenticate via LDAP/PAM. Is there an easy way to get a list of the users who access their mail via IMAP rather than POP? I am about to migrate our server to Google Apps and want to migrate mail for my IMAP users only (a couple hundred out of several hundred total users). POP mail will obviously be migrated separately from the client end. I would much rather migrate only the IMAP users than the whole domain, which would mean migrating a pile of leftover POP mail still sitting on the server even though it has already been read/sorted/deleted in the clients' email programs. Migrating all of that useless leftover POP mail could add weeks of migration time. I suppose parsing some logs to see who has connected on an IMAP port (143, or 993 for IMAPS) would give me such a list, if someone could help me do that. I know I have the raw Dovecot logs, but am hoping for a cleaner solution.
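
    A minimal log-parsing sketch, assuming the default Dovecot log format, in which IMAP logins are recorded as "imap-login: Login: user=<name>" (the log path varies by distribution):

        # Extract the unique usernames that have ever logged in over IMAP
        grep -h 'imap-login: Login:' /var/log/dovecot.log* \
          | sed -n 's/.*user=<\([^>]*\)>.*/\1/p' \
          | sort -u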

  • Merging two partitions on Ubuntu

    - by gthm geeky
    This is how my partitions look in Ubuntu. I would like to merge the two partitions /dev/sda8 and /dev/sda7 because I am unable to use both of them.

        /dev/sda8  111G  2.7G  103G   3%  /
        udev       1.9G   12K  1.9G   1%  /dev
        tmpfs      763M  864K  762M   1%  /run
        none       5.0M     0  5.0M   0%  /run/lock
        none       1.9G  252K  1.9G   1%  /run/shm
        none       100M   72K  100M   1%  /run/user
        /dev/sda7  117G   52M  111G   1%  /home

    Please let me know if there is any way to do it. All these partitions look ugly; I would like to have only one partition, which would hold my home folder.
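
    A hedged outline of one way to do this, assuming /dev/sda7 sits immediately after /dev/sda8 on disk, a recent parted (3.1+ for resizepart), and a full backup first. Run it from a live USB, never on mounted filesystems; partition surgery can destroy data:

        # 1. Copy the contents of /home onto the root partition, then remove
        #    the separate /home entry from /etc/fstab.
        # 2. Delete sda7 and grow sda8 into the freed space.
        sudo parted /dev/sda rm 7
        sudo parted /dev/sda resizepart 8 100%
        # 3. Grow the ext4 filesystem to fill the enlarged partition.
        sudo e2fsck -f /dev/sda8
        sudo resize2fs /dev/sda8

    If sda7 actually precedes sda8 on disk, the partition would have to be moved first (GParted can do this), which is slower and riskier.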

  • Allow PHP to write file without 777

    - by camerongray
    I am setting up a simple website on webspace provided by my university. I do not have database access, so I am storing all the data in a flat file. The issue I am experiencing is related to file permissions. I need PHP to be able to read and write the data file, but I don't really want to set the file to 777, as anybody else on the system could then modify it; they already have read access to everyone's web directories. Does anyone have any ideas on how to accomplish this? Thanks in advance.
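
    One possible approach, assuming the university runs PHP as your own account via suexec/suPHP (common on shared hosting), in which case your own permissions suffice:

        # Keep the data file out of the public tree and private to you
        mkdir -p ~/data && chmod 700 ~/data
        chmod 600 ~/data/site.dat

    If PHP instead runs as a shared web-server user such as www-data, group ownership (chgrp www-data plus chmod 660) works only if your account is allowed to assign that group; otherwise it is a question for the admins.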

  • Apache on Ubuntu very slow on initial calls, very fast afterwards

    - by papakost
    I own an Ubuntu 10 VPS server with Apache 2 hosting a Magento website. The first hit to the site from any client takes about 15-20 seconds, while subsequent hits from the same client take 0-1 seconds. I don't think it has to do with Magento caching, because this also happens when the first call is to a very light page and the next calls are to heavy ones. Does anyone have an idea of what is going wrong here?
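
    One hedged way to narrow this down is to measure where the first request spends its time, for example with curl's timing variables (the URL is a placeholder):

        curl -o /dev/null -s \
          -w 'dns=%{time_namelookup} connect=%{time_connect} ttfb=%{time_starttransfer} total=%{time_total}\n' \
          http://example.com/

    If ttfb dominates, the delay is server-side (PHP startup, cold caches, swapped-out processes); if dns or connect dominates, look at name resolution or the network instead.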

  • Enter response once prompt returns?

    - by mjb
    It's neither a secure idea nor one I'd recommend elsewhere, but I have a situation where it occasionally takes a while for my Ansible ad-hoc command to respond. I'd love to pipe, or pass arguments, or do whatever is needed to push the required text into the prompt, so I can walk away and know it will finish. For example:

        $ ansible all -m shell -a "reboot" --ask-pass
        Password:

        blah blah blah it worked

    I'd love to send an argument or << or something to get the password in. Is that possible?
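
    One workaround sketch: skip --ask-pass entirely and supply the SSH password as a connection variable from the environment (still insecure, as the question itself concedes). This assumes an Ansible version that honours ansible_ssh_pass; newer releases spell it ansible_password:

        # MYPASS must be exported beforehand; no interactive prompt appears
        ansible all -m shell -a "reboot" -e "ansible_ssh_pass=$MYPASS"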

  • Tail and wildcard characters

    - by Mitch
    I want to get the last 10 lines of multiple files. I know they all end with "-access_log". So I tried:

        tail -10 *-access_log

    But this gives me an error, whereas:

        tail -10 file-*

    gives me the output I'd expect. I would think this probably has more to do with bash than with tail. However, commands like:

        cat *-access_log

    work fine. Any suggestions?
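
    One possible explanation to check: POSIX tail is only required to accept a single file, so on some systems the command fails as soon as the glob expands to more than one name (which would also explain why file-* works if it happens to match just one file). A loop sidesteps that, whatever tail you have:

        for f in *-access_log; do
            echo "==> $f <=="
            tail -n 10 "$f"
        done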

  • Why is this happening: "dhcpcd will not work correctly unless run as root"?

    - by user330317
    I have installed Arch Linux and GNOME in VirtualBox. I had no problem connecting to the internet, but now, after installing GNOME and rebooting, there is no internet connection. I have tried following the instructions from the Arch wiki, but I can't figure out the problem. Please help.

        host-63drhd% sudo netctl status enp0s3
        ● netctl@enp0s3.service - Networking for netctl profile enp0s3
           Loaded: loaded (/usr/lib/systemd/system/netctl@.service; static)
           Active: inactive (dead)
             Docs: man:netctl.profile(5)
        host-63drhd% sudo netctl enable enp0s3
        Profile 'enp0s3' does not exist or is not readable
        host-63drhd% sudo dhcpcd
        dhcpcd[1486]: sending commands to master dhcpcd process
        host-63drhd% dhcpcd
        dhcpcd[1543]: control_open: Permission denied
        dhcpcd[1543]: dhcpcd will not work correctly unless run as root
        dhcpcd[1543]: open `/run/dhcpcd.pid': Permission denied
        dhcpcd[1543]: control_start: Permission denied
        dhcpcd[1543]: version 6.3.2 starting
        dhcpcd[1543]: enp0s3: if_init: Permission denied
        dhcpcd[1543]: enp0s8: if_init: Permission denied
        dhcpcd[1543]: no valid interfaces found
        dhcpcd[1543]: no interfaces have a carrier
        dhcpcd[1543]: forked to background, child pid 1544
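
    The permission errors simply say that the second dhcpcd invocation ran as an unprivileged user. A hedged fix sketch, assuming the stock Arch dhcpcd package with its systemd units:

        # Run the client as root once, for a single interface
        sudo dhcpcd enp0s3
        # Or enable it permanently via the per-interface systemd unit
        sudo systemctl enable --now dhcpcd@enp0s3.service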

  • How to back up servers to an SSH host with low traffic, access to versions, and encryption?

    - by leto
    Hello, I have not run backups of my personal stuff for more years than I can remember, until waking up lately and realising, contrary to my prior belief: actually, I care! :) Now I have a central data server at home to which I want to attach external media, where I want to save backups of my most important stuff, like years of self-written scripts, database dumps, you name it. I've tinkered with rsync+ssh over the last two years and also tried tar over ssh, but I don't yet know the simplest and easiest-to-maintain way to do it. Here is my workload:

    - A typical LAMP server (<5 GB data) which I'd like to back up fully, so lots of small files, connected via 10 Mbit
    - My personal stuff (<750 GB data) from a Mac, connected via GE
    - My passwords in an encrypted container (100 MB) from OpenBSD, connected via serial PPP
    - My e-mail from the last ten years (<25 GB) as a Maildir, which I need to keep in readable format
    - Some archives (tar.*) which I need to back up only once and keep in readable format

    (Deleted my ideas, as I'm here for suggestions.) What I need:

    1. Use an ssh tunnel for data transfer
    2. Be quick with lots of small files
    3. Keep revisions
    4. Be sure the data I save is not corrupted
    5. Intelligent resume functions, able to deal with network congestion :)
    6. Compressed and optionally encrypted storage
    7. Easy extraction of data from the backup (filesystem-like usage would be nice)

    How, and with what software, would you back this stuff up? Hints to tools that solve only part of my problem (like encryption) are also greatly appreciated. Greets
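
    One hedged sketch that covers several of these points (ssh transport, revisions, filesystem-like extraction) is rsync with hard-linked snapshots; host and paths below are placeholders:

        #!/bin/sh
        # Each run creates a dated snapshot; unchanged files are hard links
        # into the previous one, so keeping revisions is cheap.
        DATE=$(date +%F)
        rsync -az --delete --link-dest=../latest \
            -e ssh /data/ backup@backuphost:/backups/$DATE/
        ssh backup@backuphost "ln -sfn $DATE /backups/latest"

    This does not provide compressed or encrypted storage by itself; an encrypted filesystem on the external media would have to cover requirement 6.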

  • Keeping folder of files in sync over 3 machines

    - by Wizzard
    Morning. I've got 3 machines that have user content on them which I need to keep in sync. This is a 3-way sync. Currently I run rsync, but we just don't handle deletes. I have looked at something like Gluster, but that seems a little over the top. Is there any other software out there to do a 3-way sync, or a good network file system? This is for web servers, so we don't want a slow, IO-hungry process. 3 servers; user content can be added on one and needs to be moved to the other two.
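
    One pairwise sketch, assuming unison is installed on all three machines (unison records the state of both sides, so deletes propagate, unlike plain rsync); hostnames and paths are placeholders:

        # Run from cron on server1: sync against each peer in turn
        unison /srv/content ssh://server2//srv/content -batch
        unison /srv/content ssh://server3//srv/content -batch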

  • Applications getting killed automatically

    - by nebi
    I am running an httperf client on my machine, and after a few seconds it is getting killed. The command is:

        httperf --hog --client=0/1 --server=39.0.0.2 --port=80 --uri=/50kb --rate=20000 --send-buffer=4096 --recv-buffer=16384 --num-conns=6000000 --num-calls=1

    I have run this test a number of times before and never hit this error; I have been seeing it for the last two days. My Ubuntu version is 10.04 and the httperf version is 0.9.0. dmesg shows:

        [ 2997.180620] Out of memory: kill process 7977 (apache2) score 70532 or a child
        [ 2997.180632] Killed process 7977 (apache2)
        [ 2997.184837] Out of memory: kill process 7971 (rsyslogd) score 8702 or a child
        [ 2997.184844] Killed process 7971 (rsyslogd)
        [ 2997.188823] Out of memory: kill process 7978 (apache2) score 1354 or a child
        [ 2997.188829] Killed process 7978 (apache2)
        [ 2997.192817] Out of memory: kill process 7973 (atd) score 561 or a child
        [ 2997.192822] Killed process 7973 (atd)
        [ 2997.196805] Out of memory: kill process 8102 (httperf) score 471 or a child
        [ 2997.196811] Killed process 8102 (httperf)

    Output of the free command:

                     total       used       free     shared    buffers     cached
        Mem:       3862768     163000    3699768          0       2384      13068
        -/+ buffers/cache:     147548    3715220
        Swap:      3905528          0    3905528

  • Strange filesystem behavior, Ubuntu 9

    - by Fixee
    I have two windows open on the same machine (Ubuntu 9, ia32, server). I'll call these windows W1 and W2.

    W1:

        $ cd ~/test
        $ ls
        sample
        $

    In W2 I run "make" from a parent directory that recreates file test/sample:

        $ make project
        .
        .
        $ cd test
        $ ls
        sample
        $

    Now, returning to W1:

        $ ls
        $ cd ../test
        $ ls
        sample
        $

    In other words, after I build from another window and the file test/sample is replaced, ls shows the file as missing in the 2nd window until I cd ../test back into the directory, whereupon it reappears. I can give more details if required; I'm just wondering if this is a well-known behavior.

  • Automount vfat (W95 FAT32 LBA) on CentOS

    - by cpl
    I'm trying to mount a USB flash drive formatted as W95 FAT32 (LBA), as reported by dmesg, under CentOS. I can easily mount it using the mount command:

        mount /dev/sdx1 /media/mydrive -t vfat

    But it seems that the system (CentOS 6.3) cannot mount it automatically, although it mounts all other filesystem types automatically. I also installed fuse-ntfs, and it correctly mounts NTFS drives. How can I enable automounting for vfat partitions too? Thanks.
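
    A hedged fallback, if desktop automounting cannot be coaxed into handling vfat: a static /etc/fstab entry keyed on the filesystem UUID, so any user can mount the stick at a fixed point (the UUID below is a placeholder; read the real one with blkid):

        # /etc/fstab
        UUID=1234-ABCD  /media/mydrive  vfat  noauto,user,uid=500,gid=500  0 0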

  • MySQL disk IO keeps increasing... is that normal?

    - by trustfundbaby
    So I've been trying to figure out this disk IO problem I have been having with my Linode VPS. Over the last day or two I've just left watch -n1 pidstat -d running in a console window (the screenshot of its output is not reproduced here). Monitoring it over the last few days, I've noticed that my problem lies with the init, searchd, and mysql processes. searchd is Sphinx, and all its indexes are on disk, so disk IO there is inevitable (apparently). What I can't understand is why the disk reads (kB_rd/s) for mysql refuse to stabilize and just keep going up: they started out at 154 yesterday and are up to what the screenshot showed, while disk writes (kB_wr/s) have remained pretty constant the entire time. My VPS has only 768 MB RAM, my MySQL database is about 220 MB, and after running mysqltuner.pl and reading a bit about it, I've been advised to set innodb_buffer_pool_size to 220 MB, but I simply cannot afford to do that; I have it up to 150 MB. My question is twofold: why does the init process have that much disk reading to do, and why is mysql doing so much disk reading?
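
    One hedged check on whether the buffer pool is the bottleneck: compare how many InnoDB page reads were served from memory versus from disk:

        mysql -u root -p -e "SHOW GLOBAL STATUS LIKE 'Innodb_buffer_pool_read%'"

    If Innodb_buffer_pool_reads (reads that had to hit disk) keeps climbing relative to Innodb_buffer_pool_read_requests, the 150 MB pool is too small for the working set, and steadily rising read IO is the expected symptom.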

  • I don't understand why [closed]

    - by gcc
    I don't understand why they voted my question down; I couldn't understand them. They know nothing about it and don't try to solve it, but they vote it down anyway. If you know nothing, don't trash something and don't try to mess it up.

  • Show symbolic links AND their targets in web directory listing (apache)

    - by Erwan Queffélec
    Listing a directory's content with ls -l shows this output:

        total 12
        drwxr-xr-x 3 root root 4096 Dec 11 16:38 2.3
        drwxr-xr-x 5 root root 4096 Dec 11 16:38 2.4
        drwxr-xr-x 2 root root 4096 Dec 11 16:38 archive
        lrwxrwxrwx 1 root root   10 Dec 11 16:38 current -> 2.4/2.4.1/
        lrwxrwxrwx 1 root root   10 Dec 11 16:38 next -> 2.4/2.4.2/
        lrwxrwxrwx 1 root root   10 Dec 11 16:38 previous -> 2.4/2.4.0/

    Notice how it shows the symbolic links and their respective targets. I need to know whether there is a way of getting the same behaviour in Apache directory browsing. If Apache is not capable of it, as I suspect, is there an application (FLOSS) providing that kind of behaviour?
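
    If Apache's mod_autoindex cannot be made to show the targets, one low-tech workaround sketch, assuming shell access and cron, is to publish the ls output itself alongside the listing:

        # Regenerate a plain-text index every 5 minutes (path is a placeholder)
        */5 * * * *  ls -l /var/www/releases > /var/www/releases/00-INDEX.txt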

  • Should we regularly schedule mysqlcheck (or database optimization)?

    - by scatteredbomb
    We run a forum with some 2 million posts, and I've noticed that if left untouched, the overhead in MySQL (as listed in phpMyAdmin) can get quite large (hundreds of megabytes). I'm wondering whether scheduling a regular mysqlcheck to optimize the tables is good practice. Is there any reason not to do it, say, once a week at an off-peak hour? There was a time over the summer when our site was constantly crashing because MySQL was using up all resources. That's when I noticed the huge amount of overhead; I optimized the database and haven't had any stability problems since. I figured that if that helped alleviate the issues, I should just set up a cron job to do it automatically.
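
    A sketch of such a job, assuming credentials are kept in a file readable only by root (paths are hypothetical):

        # /etc/crontab: optimize all tables every Sunday at 04:00, off-peak
        0 4 * * 0  root  mysqlcheck --defaults-extra-file=/root/.my.cnf --optimize --all-databases

    Note that OPTIMIZE TABLE locks each table while it runs, which is why the off-peak window matters.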

  • Kernel compiling with -j2+ parameter ends prematurely with no error message or bzImage output

    - by Minix
    I noticed quite a while ago that compiling a kernel with the parameter -j set to 2 or more doesn't produce a bzImage. Instead, the build ends prematurely without any notice. I have reproduced the same behavior on both my netbook and my home server. As far as I'm aware, the point where the compilation stops is random: compiling twice with the same parameters will probably stop at different files. However, when I run make with no -j parameter, the compilation finishes just fine and outputs a working bzImage. Both machines run Intel Atom CPUs (an N270 on the netbook and a 330 on the server) and I've compiled for these processors; if I recall correctly, I've tried compiling both with the Atom and with the generic x86_64 options. The kernel version I'm building is 2.6.34.1. I've always compiled fine with those options on my Core2Duo and Pentium Dual Core machines. Has anyone experienced this issue? Any ideas why this happens? Is there a fix or workaround?
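
    One way to gather evidence, on the assumption that an error is being emitted but scrolling past in the interleaved parallel output: capture the whole build log and check make's exit status afterwards (bash syntax):

        make -j2 bzImage 2>&1 | tee build.log
        echo "exit status: ${PIPESTATUS[0]}"
        grep -n -i 'error\|killed' build.log | tail

    A randomly moving failure point under parallel load is also a classic symptom of flaky RAM or overheating, so a memtest pass may be worth the time.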

  • Constructor and Destructor of a singleton object called twice

    - by Bikram990
    I'm facing a problem with a singleton object in C++. Here is the explanation.

    Problem info: I have 4 shared libraries (say libA.so, libB.so, libC.so, libD.so) and 2 executable binaries, each using one more shared library (say libE.so) which deals with files. The purpose of libE.so is to write data into a file; if the executable restarts, or the size of the file exceeds a certain limit, the file is zipped and a new file is created with a timestamp in its name. It uses a singleton object and exports a handler class for getting and using that singleton. Compressing happens only in the two cases above. The user/loader executable can specify only the starting name of the file; no other control is provided by the handler class. libA.so, libB.so, libC.so and libD.so have almost the same behavior: each declares an object of the handler, which gets the instance of the singleton in libE.so and uses it for further work. All these libraries are linked into the two executable binaries. If only one of the two executables runs, everything is fine; but if both executables run one after the other, the file of the first-started executable gets compressed.

    Debug info: The constructor and destructor of the singleton object are called twice (for each executable). The singleton object is a static object and is never deleted. The executable is not able to exit/return and gives:

        glibc detected * (exe1 or exe2): double free or corruption (!prev): some_addr *

    Running the binaries under valgrind shows that the above error is due to the destructor of the singleton object. Thanks.
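
    A first check to sketch, assuming the singleton's definition may have been compiled into more than one of these binaries (two definitions mean two constructions, and a double destruction at exit); 'singleton' below is a placeholder for the real class name:

        # List which binaries carry their own copy of the singleton's symbols
        for f in libA.so libB.so libC.so libD.so libE.so exe1 exe2; do
            echo "== $f"
            nm -C -D "$f" 2>/dev/null | grep -i singleton
        done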

  • HAProxy overload protection

    - by user2050516
    Using HAProxy, would it be possible to configure overload protection that limits the rate of requests sent to the backing HTTP server(s), to, e.g., 100 requests per second? If the threshold is exceeded, requests should be answered with a default response. I am interested in requests per second, not connections per second, as a connection can carry many requests. And no, improving the servers is not an option here. If this is possible, a configuration example of how to achieve it would be excellent. Thank you in advance.
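
    A hedged configuration sketch using a stick-table to track the HTTP request rate, assuming HAProxy 1.6 or newer; note this variant counts per client IP, so a true global cap across all clients would need a different table key:

        frontend web
            bind :80
            stick-table type ip size 100k expire 10s store http_req_rate(10s)
            http-request track-sc0 src
            http-request deny if { sc_http_req_rate(0) gt 100 }
            default_backend servers

    Serving a custom default response rather than the stock deny page would be done with an errorfile for the chosen status code.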

  • Why does the /proc file system have this information?

    - by liutaihua
    Running:

        lsof | grep delete

    finds some processes with open file descriptors that the system says have been deleted:

        mingetty 2031 root txt REG 8,2 15256 49021039 /sbin/mingetty (deleted)

    Looking at the /proc filesystem:

        ls -l /proc/[pid]
        lrwxrwxrwx 1 root root 0 9月 17 16:12 exe -> /sbin/mingetty (deleted)

    But actually the executable (/sbin/mingetty) is present as normal at the /sbin/mingetty path. Some sockets are in a similar situation:

        ls -l /proc/[pid]/fd
        82 -> socket:[23716953]

    yet the command:

        netstat -ae | grep [socket id]

    can find it. Why does the OS display this information?

  • Rearrange content of a file

    - by VikJES
    I'd like to rearrange the content of a file on a per-line basis (see below), ideally without using Perl or Python (I'm not allowed to... Don't ask.) The input file contains unordered header lines and lines with backup operation results. The output file should contain the lines ordered as shown below.

    Original file:

        Completed Backups
        Backups with Warnings
        Failed Backups
        Server A backup was completed with warnings
        Server B backup was successful
        Server C backup failed
        Server D backup was completed with warnings

    End result:

        Completed Backups
        Server B backup was successful
        Backups with Warnings
        Server A backup was completed with warnings
        Server D backup was completed with warnings
        Failed Backups
        Server C backup failed
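
    A plain-shell sketch, assuming the three headers are fixed literal strings and every result line contains one of the distinguishing phrases (filenames are placeholders):

        {
            echo "Completed Backups"
            grep "was successful"          backups.txt
            echo "Backups with Warnings"
            grep "completed with warnings" backups.txt
            echo "Failed Backups"
            grep "backup failed"           backups.txt
        } > sorted.txt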

  • Puppet execution of a Python script where the os.system(...) command is not working

    - by philippe
    I am trying to manage Unix users with Puppet. Puppet provides enough tools to create accounts and deploy authorized_keys files, for instance, but not to set up a user's password and tell the user about it. What I have done is a Python script which generates a random password and sends it to the user by email. The problem is that it is not possible to launch the passwd Unix command from Python directly, so I wrote a bash script with the commands:

        echo -ne "$password\n$password\n" | passwd $user
        passwd -e $user

    Launched manually, the script works fine and the created user gets its password sent by email. But when Puppet launches it, only the Python script gets executed, as if the os.system('/bin/bash my_bash_script') call were ignored. No error is displayed, and the user gets its password email, but the passwd commands are not run. Is there any limitation in Puppet preventing what I described? Or how can I otherwise change the user account and its expiration, and send the password by email? I can provide more information, but right now I don't know which parts are relevant. Many thanks!
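
    A more script-friendly alternative to sketch: piping into passwd is fragile when there is no controlling terminal (as under Puppet), whereas chpasswd reads user:password pairs from stdin by design:

        # Must run as root; chage -d 0 forces a password change at next
        # login, mirroring what passwd -e does
        echo "$user:$password" | chpasswd
        chage -d 0 "$user"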

  • Is data=journal on a separate device on ext4 as good as a RAID controller with battery-backed cache for file system consistency?

    - by Jeff Strunk
    It seems to me that data=journal prevents file system inconsistency in the case of power failure. Using it with a dedicated journal device mitigates the performance penalty of writing the data twice. A power outage would still lose the data that is currently being written to the journal, but the file system on disk would always be consistent. If that amount of loss is acceptable, is a RAID controller with battery backed cache really worthwhile?
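
    For reference, a sketch of the setup being described, with hypothetical devices (sdb1 as the dedicated journal device, sda1 as the data filesystem); the journal device must use the same block size as the filesystem:

        # Create the external journal, then an ext4 filesystem that uses it
        mke2fs -O journal_dev /dev/sdb1
        mkfs.ext4 -J device=/dev/sdb1 /dev/sda1
        # Mount with full data journaling
        mount -o data=journal /dev/sda1 /mnt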

  • xen 4.0 squeeze fails to start guests with: launch_vm: SETVCPUCONTEXT failed

    - by mcr
    As Chris Benninger says over at http://www.benninger.ca/?p=58, lots and lots of people have the problem of Squeeze and Xen 4.0 telling them:

        launch_vm: SETVCPUCONTEXT failed (rc=-1)

    but nobody seems to know what the solution is. I don't know either, but at least here a solution might get recorded. In my case, I can start one guest machine; an identical configuration for a second machine fails. Whichever one I start first is the one that runs, and the other gets the error. I've got at least a dozen other systems (at my work) running great with Squeeze and 64-bit Xen, but not this new machine at home.
