Search Results

Search found 26179 results on 1048 pages for 'linux from scratch'.


  • How to get a list of Dovecot IMAP users

    - by Colt McCormack
    How do you get a list of the users on a Dovecot email server who connect via IMAP (as opposed to POP)? Our server is set up to authenticate via LDAP/PAM. Is there an easy way to list the users who access their mail via IMAP rather than POP? I am about to migrate our server to Google Apps and want to migrate mail for my IMAP users only (a couple hundred out of several hundred total users). POP mail will obviously be migrated separately from the client end. I would much rather migrate only the IMAP users than the whole domain, which would mean migrating a pile of POP mail left on the server that has already been read/sorted/deleted in the clients' email programs. Migrating all of that useless leftover POP mail could waste weeks of migration time. Parsing some logs to see who has connected on an IMAP port (143 or 993) would give me such a list, if someone could help me do that. I know I have the raw Dovecot logs, but I'm hoping for a cleaner solution.
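
    A minimal sketch of the kind of log parsing described above (not an official Dovecot tool), assuming the default "imap-login: Login: user=<...>" lines; the log path is an assumption and varies by distribution:

        # Hypothetical: list unique usernames that have logged in over IMAP
        grep 'imap-login: Login' /var/log/maillog \
          | grep -o 'user=<[^>]*>' \
          | sort -u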

    Read the article

  • Apache on Ubuntu very slow on initial calls, very fast afterwards

    - by papakost
    I own an Ubuntu 10 VPS with Apache 2 hosting a Magento website. The first hit to the site from any client takes about 15-20 seconds, while subsequent hits from the same client take 0-1 seconds. I don't think it is Magento caching, because this also happens when the first call is to a very light page and the next calls are to heavy ones. Does anyone have an idea of what is going wrong here?

    Read the article

  • Keeping folder of files in sync over 3 machines

    - by Wizzard
    Morning. I've got 3 machines that have user content on them which I need to keep in sync; this is a 3-way sync. Currently I run rsync, but we just don't handle deletes. I have looked at something like Gluster, but that seems a little over the top. Is there other software out there to do a 3-way sync, or a good network file system? This is for web servers, so we don't want a slow / IO-hungry process. 3 servers; user content could be added to one and needs to be propagated to the other two.

    Read the article

  • Merging two partitions on Ubuntu

    - by gthm geeky
    This is what my partitions look like in Ubuntu. I would like to merge the two partitions /dev/sda8 and /dev/sda7, because I am unable to make good use of both of them.

        /dev/sda8       111G  2.7G  103G   3% /
        udev            1.9G   12K  1.9G   1% /dev
        tmpfs           763M  864K  762M   1% /run
        none            5.0M     0  5.0M   0% /run/lock
        none            1.9G  252K  1.9G   1% /run/shm
        none            100M   72K  100M   1% /run/user
        /dev/sda7       117G   52M  111G   1% /home

    Please let me know if there is any way to do it. All these partitions look ugly; I would like to have only one partition, which would be my home folder.

    Read the article

  • Allow SFTP in iptables

    - by Kevin Orriss
    I have just purchased a VPS from Linode and am going through the setup guide. I have everything running (apache2, PHP, MySQL, etc.), but I am being denied access via SFTP when using FileZilla to upload a file. This is my second time installing the server, as I missed a section out the first time. I was able to connect to my server through SFTP in FileZilla the first time, and the thing I missed out was adding a new user and editing the iptables rules in the firewall. So it would seem that the guide I have been following has blocked SFTP but allowed SSH. Here is the iptables file:

        *filter

        # Allow all loopback (lo0) traffic and drop all traffic to 127/8 that doesn't use lo0
        -A INPUT -i lo -j ACCEPT
        -A INPUT ! -i lo -d 127.0.0.0/8 -j REJECT

        # Accept all established inbound connections
        -A INPUT -m state --state ESTABLISHED,RELATED -j ACCEPT

        # Allow all outbound traffic - you can modify this to only allow certain traffic
        -A OUTPUT -j ACCEPT

        # Allow HTTP and HTTPS connections from anywhere (the normal ports for websites and SSL).
        -A INPUT -p tcp --dport 80 -j ACCEPT
        -A INPUT -p tcp --dport 443 -j ACCEPT

        # Allow SSH connections
        #
        # The --dport number should be the same port number you set in sshd_config
        #
        -A INPUT -p tcp -m state --state NEW --dport 22 -j ACCEPT

        # Allow ping
        -A INPUT -p icmp -m icmp --icmp-type 8 -j ACCEPT

        # Log iptables denied calls
        -A INPUT -m limit --limit 5/min -j LOG --log-prefix "iptables denied: " --log-level 7

        # Reject all other inbound - default deny unless explicitly allowed policy
        -A INPUT -j REJECT
        -A FORWARD -j REJECT

        COMMIT

    All I would like is the line I need to put in there to allow SFTP over port 22. Thank you for reading this.
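
    For what it's worth, a hedged sketch of such a rule: SFTP normally rides inside the SSH session, so it uses the same port 22 that the existing SSH rule already accepts, and an explicit rule would look identical (only the --dport value changes if sshd listens elsewhere):

        # Hypothetical: accept new TCP connections to the SSH/SFTP port
        -A INPUT -p tcp -m state --state NEW --dport 22 -j ACCEPT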

    Read the article

  • Automount vfat (w95 fat32 LBA) on CentOS

    - by cpl
    I'm trying to mount a USB flash drive formatted as w95 fat32 LBA (as reported by dmesg) under CentOS. I can easily mount it using the mount command:

        mount /dev/sdx1 /media/mydrive -t vfat

    But it seems that the system (CentOS 6.3) cannot mount it automatically, even though it mounts all the other filesystem types automatically. I also installed fuse-ntfs and it correctly mounts NTFS drives. How can I enable automounting for vfat partitions too? Thanks

    Read the article

  • Rearrange content of a file

    - by VikJES
    I'd like to rearrange the content of a file on a per-line basis (see below), ideally without using Perl or Python (I'm not allowed to... Don't ask.) The input file contains unordered header lines and lines with backup operation results. The output file should contain the lines ordered as shown below.

    Original file:

        Completed Backups
        Backups with Warnings
        Failed Backups
        Server A backup was completed with warnings
        Server B backup was successful
        Server C backup failed
        Server D backup was completed with warnings

    End result:

        Completed Backups
        Server B backup was successful
        Backups with Warnings
        Server A backup was completed with warnings
        Server D backup was completed with warnings
        Failed Backups
        Server C backup failed
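
    A minimal shell sketch, assuming the headers and result phrases are exactly as shown above (the file names backups.txt and sorted.txt are made up); each grep keeps the original relative order of its matches:

        # Hypothetical: print each header followed by the matching result lines
        {
          echo "Completed Backups";     grep "was successful" backups.txt
          echo "Backups with Warnings"; grep "completed with warnings" backups.txt
          echo "Failed Backups";        grep "backup failed" backups.txt
        } > sorted.txt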

    Read the article

  • How to back up servers to an SSH host with low traffic, access to versions, and encryption?

    - by leto
    Hello, I haven't run backups of my personal stuff for more years than I can remember, until I recently woke up and realised, contrary to my prior belief: actually, I care! :) Now I have a central data server at home where I want to attach external media, to which I want to save backups of my most important stuff, like years of self-written scripts, database dumps, you name it. I've tinkered with rsync+ssh over the last two years, and also tried tar over ssh, but I don't yet know the simplest and most maintainable way to do it. Here's my workload:

    - A typical LAMP server (<5GB data) which I'd like to back up fully, so lots of small files, connected via 10Mbit
    - My personal stuff (<750GB data) from a Mac, connected via GE
    - My passwords in an encrypted container (100MB) from OpenBSD, connected via serial PPP
    - My e-mail from the last ten years (<25GB) as Maildir, which I need to keep in readable format
    - Some archives (tar.*) which I need to back up only once and keep in readable format

    (Deleted my ideas, as I'm here for suggestions.) What I need:

    1. Use an ssh tunnel for data transfer
    2. Be quick with lots of small files
    3. Keep revisions
    4. Be sure the data I save is not corrupted
    5. Intelligent resume functions and the ability to deal with network congestion :)
    6. Compressed and optionally encrypted storage
    7. Be able to extract data from the backup easily (filesystem-like usage would be nice)

    How, and with what software, would you back up this stuff? Hints to tools that can solve only part of my problem (like encryption) are also greatly appreciated. Greets
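
    One possible sketch, under the assumption that rsync over ssh stays in the picture (host names and paths here are invented): hard-linked snapshot directories give cheap revisions while only transferring changed files.

        # Hypothetical: daily snapshot of the LAMP server's web root
        TODAY=$(date +%F)
        rsync -az --delete \
              --link-dest=../latest \
              -e ssh \
              /var/www/ backupuser@homeserver:/backups/lamp/$TODAY/
        ssh backupuser@homeserver "ln -sfn $TODAY /backups/lamp/latest"

    Tools built around the same idea (rsnapshot, rdiff-backup, and friends) package this up with rotation and verification.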

    Read the article

  • Show symbolic links AND their targets in web directory listing (apache)

    - by Erwan Queffélec
    Listing a directory's content with ls -l shows this output:

        total 12
        drwxr-xr-x 3 root root 4096 Dec 11 16:38 2.3
        drwxr-xr-x 5 root root 4096 Dec 11 16:38 2.4
        drwxr-xr-x 2 root root 4096 Dec 11 16:38 archive
        lrwxrwxrwx 1 root root   10 Dec 11 16:38 current -> 2.4/2.4.1/
        lrwxrwxrwx 1 root root   10 Dec 11 16:38 next -> 2.4/2.4.2/
        lrwxrwxrwx 1 root root   10 Dec 11 16:38 previous -> 2.4/2.4.0/

    Notice how it shows the symbolic links and their respective targets. I need to know if there is a way of getting the same behaviour in Apache directory browsing. If Apache is not capable of it, as I suspect, is there an application (FLOSS) providing that kind of behaviour?

    Read the article

  • Why is this happening? "dhcpcd will not work correctly unless run as root"

    - by user330317
    I have installed Arch Linux and GNOME in VirtualBox. I had no problem connecting to the internet, but now, after installing GNOME and rebooting, there is no internet connection. I have tried following the instructions from the Arch Wiki, but I can't figure out the problem. Please help.

        host-63drhd% sudo netctl status enp0s3
        netctl@enp0s3.service - Networking for netctl profile enp0s3
           Loaded: loaded (/usr/lib/systemd/system/netctl@.service; static)
           Active: inactive (dead)
             Docs: man:netctl.profile(5)
        host-63drhd% sudo netctl enable enp0s3
        Profile 'enp0s3' does not exist or is not readable
        host-63drhd% sudo dhcpcd
        dhcpcd[1486]: sending commands to master dhcpcd process
        host-63drhd% dhcpcd
        dhcpcd[1543]: control_open: Permission denied
        dhcpcd[1543]: dhcpcd will not work correctly unless run as root
        dhcpcd[1543]: open `/run/dhcpcd.pid': Permission denied
        dhcpcd[1543]: control_start: Permission denied
        dhcpcd[1543]: version 6.3.2 starting
        dhcpcd[1543]: enp0s3: if_init: Permission denied
        dhcpcd[1543]: enp0s8: if_init: Permission denied
        dhcpcd[1543]: no valid interfaces found
        dhcpcd[1543]: no interfaces have a carrier
        dhcpcd[1543]: forked to background, child pid 1544
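
    A hypothetical sketch of the usual Arch-style approach (assuming the dhcpcd package ships its dhcpcd@.service unit): have systemd run dhcpcd as root on the interface rather than starting it from a user shell.

        # Hypothetical: let systemd manage DHCP on enp0s3 as root
        sudo systemctl enable dhcpcd@enp0s3.service
        sudo systemctl start dhcpcd@enp0s3.service
        systemctl status dhcpcd@enp0s3.service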

    Read the article

  • Constructor and Destructor of a singleton object called twice

    - by Bikram990
    I'm facing a problem with a singleton object in C++. Here is the explanation.

    Problem info: I have 4 shared libraries (say libA.so, libB.so, libC.so, libD.so) and 2 executable binaries, each also using another shared library (say libE.so) which deals with files. The purpose of libE.so is to write data into a file; if the executable restarts or the size of the file exceeds a certain limit, the file is zipped and a new file is created with a timestamp in its name. libE.so uses a singleton object and exports a handler class for getting and using that singleton. Compression only happens in the two cases above. The user/loader executable can only specify the starting name of the file; no other control is provided by the handler class. libA.so, libB.so, libC.so and libD.so all behave almost the same: each has a class that declares an object of the handler, which gets the instance of the singleton in libE.so and uses it for its own purposes. All these libraries are linked into the two executables. If only one of the two executables runs, everything is fine. But if both executables run one after the other, the file of the first-started executable gets compressed.

    Debug info: The constructor and destructor of the singleton object are called twice (once per executable). The singleton is a static object and is never deleted. The executable is not able to exit/return and gives:

        *** glibc detected *** (exe1 or exe2): double free or corruption (!prev): some_addr ***

    Running the binaries under valgrind shows that the above error comes from the destructor of the singleton object. Thanks

    Read the article

  • Strange filesystem behavior, Ubuntu 9

    - by Fixee
    I have two windows open on the same machine (Ubuntu 9, ia32, server). I'll call these windows W1 and W2.

    W1:

        $ cd ~/test
        $ ls
        sample
        $

    In W2 I run "make" from a parent directory, which recreates file test/sample:

        $ make project
        .
        .
        $ cd test
        $ ls
        sample
        $

    Now, returning to W1:

        $ ls
        $ cd ../test
        $ ls
        sample
        $

    In other words, after I build from another window and the file test/sample is replaced, ls shows the file as missing in the second window until I cd ../test back into the directory, whereupon it reappears. I can give more details if required, but I'm just wondering if this is a well-known behavior.

    Read the article

  • Should we regularly schedule mysqlcheck (or database optimization)

    - by scatteredbomb
    We run a forum with some 2 million posts, and I've noticed that if left untouched the overhead in MySQL (as listed in phpMyAdmin) can get quite large (hundreds of megabytes). I'm wondering whether scheduling a regular mysqlcheck to optimize the tables is good practice. Is there any reason not to do it, say, once a week at an off-peak hour? There was a time over the summer when our site was constantly crashing because MySQL was using up all resources. That's when I noticed the huge amount of overhead; I optimized the database and haven't had any stability problems since. I figured that if that helped alleviate the issues, I should just set up a cron job to do it automatically.
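
    A hedged sketch of such a job (the schedule, credentials file, and log path are all assumptions, not recommendations):

        # Hypothetical /etc/cron.d/mysql-optimize entry: Sundays at 04:30
        30 4 * * 0 root mysqlcheck --defaults-extra-file=/root/.my.cnf --optimize --all-databases >> /var/log/mysqlcheck.log 2>&1

    Note that optimizing can lock tables while it runs, which is one more reason to pick an off-peak hour.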

    Read the article

  • Applications getting killed automatically

    - by nebi
    I am running an httperf client on my machine, and after a few seconds it gets killed. The command is:

        httperf --hog --client=0/1 --server=39.0.0.2 --port=80 --uri=/50kb --rate=20000 --send-buffer=4096 --recv-buffer=16384 --num-conns=6000000 --num-calls=1

    I have run this test a number of times and never hit this error before; I have only been seeing it for the last two days. My Ubuntu version is 10.04 and the httperf version is httperf-0.9.0. dmesg shows:

        [ 2997.180620] Out of memory: kill process 7977 (apache2) score 70532 or a child
        [ 2997.180632] Killed process 7977 (apache2)
        [ 2997.184837] Out of memory: kill process 7971 (rsyslogd) score 8702 or a child
        [ 2997.184844] Killed process 7971 (rsyslogd)
        [ 2997.188823] Out of memory: kill process 7978 (apache2) score 1354 or a child
        [ 2997.188829] Killed process 7978 (apache2)
        [ 2997.192817] Out of memory: kill process 7973 (atd) score 561 or a child
        [ 2997.192822] Killed process 7973 (atd)
        [ 2997.196805] Out of memory: kill process 8102 (httperf) score 471 or a child
        [ 2997.196811] Killed process 8102 (httperf)

    Output of the free command:

                     total       used       free     shared    buffers     cached
        Mem:       3862768     163000    3699768          0       2384      13068
        -/+ buffers/cache:      147548    3715220
        Swap:      3905528          0    3905528

    Read the article

  • Why does the /proc filesystem show this information?

    - by liutaihua
    Running lsof | grep deleted finds some processes with open fds for files the system says have been deleted:

        mingetty 2031 root txt REG 8,2 15256 49021039 /sbin/mingetty (deleted)

    Looking at the /proc filesystem:

        ls -l /proc/[pid]
        lrwxrwxrwx 1 root root 0 9? 17 16:12 exe -> /sbin/mingetty (deleted)

    But the executable (/sbin/mingetty) is actually present as normal at the /sbin/mingetty path. Some sockets are in a similar situation:

        ls -l /proc/[pid]/fd
        82 -> socket:[23716953]

    However, the command netstat -ae | grep [socket id] does find it. Why does the OS display this information?

    Read the article

  • Kernel compiling with -j2+ parameter ends prematurely with no error message or output bzImage

    - by Minix
    I noticed quite a while ago that compiling a kernel with the parameter -j set to 1 or more doesn't produce a bzImage; instead, it ends prematurely without any error message. I have reproduced the same behavior on both my netbook and my home server. As far as I can tell, the point where the compilation stops is random: compiling twice with the same parameters will probably stop at different files. However, when I run make with no -j* parameter, the compilation finishes just fine and outputs a working bzImage. Both machines run Intel Atom (N270 on the netbook, 330 on the server) and I've compiled for these processors. If I recall correctly, I've tried compiling both with Atom and with generic x86_64 options. The kernel version I'm building is 2.6.34.1. I've always compiled normally with these options on my Core2Duo and Pentium Dual Core machines. Has anyone experienced this issue? Any idea why this happens? Is there a fix or workaround?

    Read the article

  • Is data=journal on a separate device on Ext4 as good as using a RAID controller with battery backed cache for file system consistency?

    - by Jeff Strunk
    It seems to me that data=journal prevents file system inconsistency in the case of power failure. Using it with a dedicated journal device mitigates the performance penalty of writing the data twice. A power outage would still lose the data that is currently being written to the journal, but the file system on disk would always be consistent. If that amount of loss is acceptable, is a RAID controller with battery backed cache really worthwhile?
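
    For concreteness, a hypothetical sketch of the setup being described (device names are invented, the filesystem must be unmounted, and the journal device's block size has to match the filesystem's):

        # Hypothetical: ext4 with an external journal, mounted data=journal
        mke2fs -O journal_dev -b 4096 /dev/sdb1      # dedicated journal device
        tune2fs -O ^has_journal /dev/sda1            # drop the internal journal
        tune2fs -J device=/dev/sdb1 /dev/sda1        # attach the external one
        mount -o data=journal /dev/sda1 /mnt/data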

    Read the article

  • Permissions issue on Fedora with separate home partition

    - by Tres
    I am running Fedora 12, and I've set up a partition separate from my root partition to keep shared files and home directories. Now I've been having permission issues where it says the user cannot chdir into their home directory (/files/home/*). I originally fixed this by chmodding / to 0755 and the home directories also to 0755. And yes, the user is the owner:group of their home directory. Now get this: I didn't change a thing, rebooted, and everything still worked. Great, right? I boot the server up a day later, and now it's the same old issue again. This is a home server that wasn't on at all at any point between the working state and the non-working state. Also, nothing else was modified. Any ideas? Thanks!

    Read the article

  • mysql disk io keeps increasing ... is that normal?

    - by trustfundbaby
    So I've been trying to figure out this disk IO problem I have been having with my Linode VPS. Over the last day or two I've just left watch -n1 pidstat -d running in a console window, and the output looks like this: Monitoring it over the last few days, I've noticed that my problem lies with the init, searchd, and mysql processes. searchd is Sphinx and all its indexes are on disk, so disk IO there is inevitable (apparently). What I can't understand is why the disk reads (kB_rd/s) for mysql refuse to stabilize and just keep going up. They started out at 154 yesterday and are up to what you see in that screen shot, but disk writes (kB_wr/s) have remained pretty constant the entire time. My VPS only has 768MB RAM, my MySQL database is about 220MB, and after running mysqltuner.pl and reading a bit about it, I've been advised to set my innodb_buffer_pool_size to 220MB, but I simply cannot afford to do that ... I have it up to 150MB. My question is twofold: why does the init process have that much disk reading to do, and why is mysql doing so much disk reading?

    Read the article

  • Gentoo+urxvt+terminus: How do I change font version?

    - by gaidal
    In my Debian installation I can type extended ASCII characters such as åäö by default using the Terminus font; however, in Gentoo I can't get it to work so far. Nothing happens when I hit those keys, like in this thread: Missing glyphs in Terminus font, how to setup a fallback font? But in this case I know Terminus supports those characters in at least some of its versions, since it works in Debian. So what I want is to find out how to see and choose which of the many different Terminus font files is being used. I set the font in the same way on both Debian and Gentoo, using URxvt*font: xft:terminus:size=xx in .Xdefaults. Both systems use en_US.UTF-8 as the default locale.
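
    A possible sketch for inspecting this with the fontconfig tools (assuming the xft: font is resolved through fontconfig, which is how URxvt handles xft fonts):

        # Hypothetical: list installed Terminus files, then see which one the pattern resolves to
        fc-list | grep -i terminus
        fc-match -v 'terminus:size=12' | grep -E 'file|family|pixelsize'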

    Read the article

  • Change ip route metric

    - by notphunny
    I'm constantly switching between the eth0 and wlan0 interfaces on my Arch Linux machine, because I often change OpenWrt firmware images on my second router (which isn't connected anywhere). So I have a problem with my routes when I'm connected to my WLAN and want to connect over Ethernet to that router. Both routers are on 192.168.1.1/24, and after connecting to my Ethernet profile the eth0 route becomes the default one (which is OK for the time being), because of its smaller metric, I guess. So I'm interested: how can I change a route's metric so my applications stay connected to the internet (through wlan)? Maybe the solution is not to use a default gateway on the Ethernet profile, but I still want to know how to change the metric, or the default route if there is more than one.
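
    A hedged iproute2 sketch of changing the metric (the exact metric value is an assumption): give the eth0 default route a higher, i.e. worse, metric so the wlan0 route stays preferred.

        # Hypothetical: demote the eth0 default route
        sudo ip route del default via 192.168.1.1 dev eth0
        sudo ip route add default via 192.168.1.1 dev eth0 metric 300
        ip route show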

    Read the article

  • Java kills sound on Karmic

    - by hasen j
    Every time I run a Java application, one of two things happens: either I lose sound in all other programs (even after quitting the Java app), or, if some other application is already playing sound, the Java app in question has no sound. Usually this can be fixed by running pulseaudio --kill from the command line, but it doesn't always work. Is there a way to fix this problem? This didn't happen before the upgrade to Karmic. Other info: the Java I'm using is Sun's Java.

    Read the article

  • Puppet execution of a Python script where the os.system(...) command is not working

    - by philippe
    I am trying to manage Unix users with Puppet. Puppet provides enough tools to create accounts and provision authorized_keys files, for instance, but not to set a user's password and tell it to the user. What I have done is a Python script which generates a random password and sends it to the user by email. The problem is that it is not possible to run the Unix passwd command directly from Python, so I have written a bash script with the commands:

        echo -ne "$password\n$password\n" | passwd $user
        passwd -e $user

    Launched manually, the script works fine and the created user gets their password by email. But when Puppet launches it, only the Python script gets executed, as if the os.system('/bin/bash my_bash_script') call were ignored. No error is displayed. The user gets their password, but the passwd commands are not run. Is there any limitation in Puppet preventing what I described? Or how else can I set the user's password and expiration and send the password by email? I can provide more information, but right now I don't know which details matter. Many thanks!
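
    A hedged alternative sketch for the password-setting part (not the asker's script): chpasswd reads user:password pairs on stdin, which avoids driving passwd interactively and behaves the same whether run from a shell or from another process.

        # Hypothetical: set the password non-interactively, then expire it
        echo "$user:$password" | chpasswd
        passwd -e "$user"    # force a password change at next login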

    Read the article
