Search Results

Search found 3153 results on 127 pages for 'debian lenny'.


  • How to get a list of Dovecot IMAP users

    - by Colt McCormack
    How do you get a list of users on a Dovecot email server who connect via IMAP (as opposed to POP)? Our server is set up to authenticate via LDAP/PAM. Is there an easy way to get a list of the users who are accessing their mail via IMAP rather than POP? I am about to migrate our server to Google Apps and want to migrate the mail for my IMAP users only (a couple hundred out of several hundred total users). POP mail will be migrated separately from the client end, obviously. I would much rather migrate only the IMAP users than the whole domain, which would mean migrating a bunch of POP mail left on the server that has already been read/sorted/deleted in the clients' email programs. Migrating all of that useless leftover POP mail could waste weeks of migration time. I suppose parsing the logs to see who has connected on an IMAP port (143 or 993) would give me a list, if someone could help me do that. I know I have the raw Dovecot logs, but I'm hoping for a cleaner solution.
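
    One hedged approach, assuming the default Dovecot log format and a Debian-style mail log path (adjust /var/log/mail.log to wherever your syslog sends Dovecot messages): Dovecot tags IMAP logins with "imap-login: Login: user=<name>", so the usernames can be pulled straight out of the log.

        # Extract unique usernames that have logged in over IMAP
        grep 'imap-login: Login' /var/log/mail.log | \
            grep -oP 'user=<\K[^>]+' | sort -u

    Running the same extraction over rotated logs (mail.log.1, and mail.log.*.gz with zgrep) widens the window far enough to catch infrequent IMAP users.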

    Read the article

  • Workaround broken sudo?

    - by perreal
    I managed to break sudo by deleting the libc.so.6 symlink in /lib. I copied the actual file and created a symbolic link with the same name under my home directory, and by setting LD_PRELOAD=/lib/libc-2.11.3.so all binaries linking libc work through the preload, except sudo. For sudo, I have to write (and I don't know why):

        $ /lib/ld-linux-x86-64.so.2 --library-path . /usr/bin/sudo

    but this gives me:

        sudo: must be setuid root

    Checking the permissions:

        $ ls -l /usr/bin/sudo
        -rwsr-xr-x 2 root root 166120

    So the setuid bit actually is set. Question: I need to create a symbolic link named /lib/libc.so.6 through my active SSH connection without using sudo, or make sudo work somehow. I don't have the root password and I can't open new SSH connections anymore. Is there any other way I can get authorization?
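
    For what it's worth, the "must be setuid root" message is expected here: invoking a program through the dynamic loader by hand means the kernel never honours the setuid bit, so sudo starts with the caller's uid and refuses to run. A hedged sketch of what still works without root (the libc filename is the one from the question; adjust to your system):

        # Build a directory where the soname is correct, then run tools through
        # the loader pointed at it -- every non-setuid binary works this way.
        mkdir -p ~/fixlib
        cp /lib/libc-2.11.3.so ~/fixlib/libc.so.6
        /lib/ld-linux-x86-64.so.2 --library-path ~/fixlib /bin/bash

    Recreating the symlink inside /lib itself still needs root, though; without the root password that realistically means a rescue console or the hosting provider.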

    Read the article

  • How to avoid duplicates when copying files that have been renamed at the destination

    - by Benoitt
    I have to get pictures from a folder – with subfolders which are updated automatically – with their extensions. These files have to be copied into a folder where a PHP-based website will edit them (renaming them and creating an XML file) to make them downloadable and integrated into an XML feed. Because of the script's rename function, when I perform the copy again all the files are duplicated, since the script has already renamed the original ones. I've tried a few things with rsync, but I'm looking for something more powerful, because rsync has no external "history" of what was already copied.

        #!/bin/bash
        find '/home/name/picture' -name '*.jpg' | while read FILE ; do
            rsync --backup --backup-dir=incremental --suffix=.old "$FILE" /var/www/media
        done
        wget --spider 'http://myscript.php'
        #exit 0

    PS: As a little addition, I'd like to replace each '.' with a space just after the *.jpg copy. My PHP script has trouble identifying files when the name contains extra dots besides the extension's. I'm thinking about a find command – like the one above – with a sed function? Is that a good idea?
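
    One hedged way to get that external history is to keep a plain list of source paths that have already been shipped, and only copy files that aren't on it (the paths are the ones from the question; the list location is an assumption):

        #!/bin/bash
        # Copy each new .jpg once, recording it so later renames at the
        # destination don't cause re-copies.
        HISTORY=/var/lib/picture-sync/copied.list
        mkdir -p "$(dirname "$HISTORY")" && touch "$HISTORY"

        find /home/name/picture -name '*.jpg' | while read -r FILE; do
            if ! grep -Fxq "$FILE" "$HISTORY"; then
                base=$(basename "$FILE" .jpg)
                # replace dots in the base name with spaces on the way in
                cp "$FILE" "/var/www/media/${base//./ }.jpg" \
                    && echo "$FILE" >> "$HISTORY"
            fi
        done
        wget --spider 'http://myscript.php'

    grep -Fxq does a fixed-string, whole-line match, so one path can't accidentally match a prefix of another.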

    Read the article

  • Should I update the kernel on a Linux machine?

    - by Legate
    As I understand it, updating to a new kernel (with the normal linux-image... package, not by rolling my own) requires a server restart. However, one of our servers (Ubuntu 10.04) is running several extensive screen sessions. Restarting kills those, which is always a major hassle for their owners (mostly because of lost session histories). What should I do? I see several possibilities:
      1. Do nothing, that is, update only non-kernel packages (perhaps using apt pinning?).
      2. Update the kernel, but don't restart. (Is that smart? I seem to remember there can be problems loading kernel modules.)
      3. Update the kernel and restart. Is there perhaps some way to preserve the screen sessions?
    I guess it ultimately boils down to this question: how important is it to update the kernel? I posted this question here instead of askubuntu.com because I think this is not an Ubuntu-specific issue, even though this server is running Ubuntu.
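
    If option 1 is the interim choice, the classic way to hold kernel packages back on a dpkg-based system is a package hold (a minimal sketch; the exact package name depends on the kernel flavour in use):

        # Pin the currently installed kernel image so apt upgrades skip it
        echo "linux-image-$(uname -r) hold" | sudo dpkg --set-selections
        # Verify:
        dpkg --get-selections | grep linux-image

    On option 2: installing the new image is generally safe, but modules for the old, still-running kernel can stop matching it, so loading anything not already in memory (a USB storage driver, iptables modules, etc.) may fail until the reboot.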

    Read the article

  • I lost /dev/md2 on my server

    - by sten
    Hi, my two hard drives apparently fried at the same moment. My hosting company rebooted my server in rescue mode and I am trying to recover my data. They told me to mount /dev/sda2 to recover the data I need, but looking at a similar server I have in the pool, the data I'm looking for should instead be in /dev/md2. I can find /dev/md0 but not /dev/md2 (nor /dev/md1). I've looked in several places on the web and could only find posts explaining how to create a new partition. I just need to recover some of the data, not all of it, and I'd be glad if anyone could help me assemble and mount the /dev/md2 device (or any other trick that would let me recover the data stored there). Thanks in advance, Sten
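
    A hedged sketch of the usual rescue-mode steps for a missing md device (the member partitions /dev/sda2 and /dev/sdb2 are assumptions; mdadm --examine tells you the real ones):

        cat /proc/mdstat                 # which arrays the rescue kernel assembled
        mdadm --examine /dev/sda2        # read the RAID superblock on a member
        mdadm --assemble --run /dev/md2 /dev/sda2 /dev/sdb2
        mkdir -p /mnt/md2 && mount -o ro /dev/md2 /mnt/md2

    Mounting read-only first is deliberate: if the drives really are failing, every write reduces the chance of getting the data back.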

    Read the article

  • Xen virtual machines using too much CPU

    - by ki0
    Hi everyone, here is my question. I have a Xen server with 8 CPUs and 6 virtual machines running, and each virtual hard disk runs on a different physical hard disk. Everything worked fine, but sometimes one virtual machine takes almost all the CPU: Domain-0 at 90% is normal, but the virtual machine shows 500% CPU usage. I have verified that it does not depend on who is working on the VM; it happens even when nobody is using the server. I don't know what is going on. Does anyone have an idea, or has the same thing happened to anyone else?
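
    A couple of hedged diagnostics with the credit scheduler, assuming the classic xm toolstack and a domain named myvm (both assumptions):

        xm top                          # live per-domain CPU usage
        xm vcpu-list                    # how VCPUs map to physical CPUs
        xm sched-credit -d myvm         # current weight/cap for the domain
        xm sched-credit -d myvm -c 200  # cap the domain at 200% (two CPUs' worth)

    A cap at least stops one guest from starving the rest while the real cause (a runaway process inside the guest, or I/O work being billed to it) is tracked down.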

    Read the article

  • Run preseed commands as specific user / switching users

    - by pduersteler
    Besides the usual setup where I create a normal user foo, I want to run a few d-i preseed/late_command commands as that foo user. My initial thought was to simply call those commands with sudo, e.g. d-i preseed/late_command in-target echo "<pwd>" | sudo -Si <command>. This works for some kinds of commands. The problem, however, is that some of the commands load shell scripts which must not be run with sudo. Issuing su -c "<command>" would be an alternative, but su does not offer a way to read the password from stdin. Is it safe to jump around between users using su (and if so, how do I provide the stdin, and does it work at all or just end in "su: must be run from a terminal"), or would this cause issues?
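
    One detail that may make this simpler: late_command runs as root inside the target, and root can su to any user without a password, so no stdin trickery should be needed. A minimal sketch (the script path is an assumption):

        d-i preseed/late_command string \
            in-target su - foo -c '/usr/local/sbin/user-setup.sh'

    The - in su - foo gives the script foo's login environment ($HOME, $PATH and so on), which is usually what scripts that refuse to run under sudo are actually after.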

    Read the article

  • ZFS & Deduplicating FLAC Data

    - by jasongullickson
    I'm experimenting with using ZFS to deduplicate a large library of FLAC files. The purpose is twofold: to reduce storage utilization, and to reduce the bandwidth needed to sync the library with cloud storage. Many of these files are of the same music tracks, but from different physical media. This means that for the most part they are the same, and usually close to the same size, which makes me think they should benefit from block-level deduplication. However, in my testing I'm not seeing good results. When I create a pool and add three of these tracks (identical songs from different source media), zpool list reports a dedup ratio of 1.00x. If I copy all of the files (making exact duplicates of the three), the ratio climbs, so I know dedup is enabled and functioning, but it finds no duplication in the original collection of files. My first thought was that some of the variable header data (metadata tags, etc.) might be misaligning the bulk of the data in these files (the audio frames), but even making the header data consistent across the three files doesn't seem to have any impact on deduplication. I'm considering alternate routes (testing other dedup filesystems as well as some custom code), but since we're already using ZFS and I like the ZFS replication options, I'd prefer to use ZFS dedup for this project; perhaps it's simply not capable of working well with this sort of data. Any feedback on tuning that might improve dedup performance for this dataset, or confirmation that ZFS dedup is not the right tool for this job, is appreciated.
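
    A hedged note on why even consistent headers may not help: ZFS only dedups whole, aligned records, and FLAC compression is bit-exact only for identical input and identical encoder settings, so two rips of the same track usually differ throughout the bitstream, not just in the headers. One cheap way to measure is zdb's dedup simulator (pool and device names are assumptions):

        zpool create -O dedup=on testpool /dev/sdX
        cp track-a.flac track-b.flac /testpool/   # same song, different rips
        zdb -S testpool       # simulates the dedup table, prints expected ratio
        zpool list testpool   # DEDUP column shows the realized ratio

    If zdb -S stays at 1.00x even with byte-identical audio frames behind padded headers, that is fairly strong evidence the block-alignment explanation covers this dataset.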

    Read the article

  • Why doesn't the value in /proc/meminfo seem to map exactly to the system RAM?

    - by Eric Asberry
    The values in /proc/meminfo for MemTotal don't make sense. Eyeballing it as a human, it seems to roughly correspond to the installed RAM, but for displaying the installed RAM from an automated utility it is inexact, and inconsistent. For a system with 1G of RAM, I would expect the MemTotal line to have a value of 1048576 (1024*1024). But instead, I'm seeing 1029392. On another 4G box, I'm seeing 3870172, which is not a multiple of 1024 and not even close to 1029392*4. On an 8G box, I get 8128204, which again seems to have no correlation to the other values, nor is it a multiple of 1024. I'm trying to use this information to report the RAM on a status web page. My workaround is to just round it to the nearest 1G multiple, but I'd like to understand why these values seem inconsistent and don't match my expectations. Can somebody fill me in on what I'm missing here? EDIT: To expand on the accepted answer below, the reference can be found here. Also of interest to me from that page, which explains the inconsistency, is this bit: "meminfo: Provides information about distribution and utilization of memory. This varies by architecture and compile options. ..."
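
    The short version of the usual explanation: MemTotal is total usable RAM, i.e. physical RAM minus reserved regions and the kernel's own code and data, so it is always somewhat below the installed amount and varies with kernel version and architecture. A minimal sketch of the rounding workaround for a status page:

        # Round MemTotal (in kB) to the nearest GiB for display
        awk '/^MemTotal:/ { printf "%.0f GiB\n", $2 / 1048576 }' /proc/meminfo

    Reading the exact installed size instead would mean asking the hardware, e.g. dmidecode --type memory as root, rather than the kernel's accounting.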

    Read the article

  • Question marks showing in ls of directory. IO errors too.

    - by jaymoo
    Has anyone seen this before? I've got a RAID 5 mounted on my server and for whatever reason it started showing this:

        jason@box2:/mnt/raid1/cra$ ls -alh
        ls: cannot access e6eacc985fea729b2d5bc74078632738: Input/output error
        ls: cannot access 257ad35ee0b12a714530c30dccf9210f: Input/output error
        total 0
        drwxr-xr-x 5 root root 123 2009-08-19 16:33 .
        drwxr-xr-x 3 root root  16 2009-08-14 17:15 ..
        ?????????? ? ?    ?      ?    ? 257ad35ee0b12a714530c30dccf9210f
        drwxr-xr-x 3 root root  57 2009-08-19 16:58 9c89a78e93ae6738e01136db9153361b
        ?????????? ? ?    ?      ?    ? e6eacc985fea729b2d5bc74078632738

    The md5 strings are actual directory names, not part of the error. The question marks are odd, and any directory with question marks throws an I/O error when you attempt to use/delete/etc. it. I was unable to umount the drive because it was "busy". Rebooting the server "fixed" it, but it threw some RAID errors on shutdown. I have configured two RAID 5 arrays and both started doing this on random files. Both use the following config:

        mkfs.xfs -l size=128m -d agcount=32
        mount -t xfs -o noatime,logbufs=8

    Nothing too fancy, but part of an optimized config for this box. We're not partitioning the drives, and that was suggested as a possible issue. Could this be the culprit?
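
    Those ?????????? rows are what ls prints when readdir() returns a name but the subsequent stat() fails, which on a healthy filesystem should never happen; it usually points at metadata corruption or a device-level read error underneath. A hedged first-response sketch (device names are assumptions):

        dmesg | grep -iE 'xfs|raid|i/o'  # kernel's view: fs corruption vs. disk errors
        cat /proc/mdstat                 # is either array degraded or resyncing?
        umount /mnt/raid1 || umount -l /mnt/raid1   # lazy unmount if "busy"
        xfs_repair -n /dev/md0           # -n: check only, report without changing

    Running xfs_repair without -n only after taking whatever backup is still possible is the cautious order of operations here.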

    Read the article

  • BackupPC backup has not finished in 12 hours(!)

    - by chronoz
    I installed BackupPC today on a server and set it to do a backup 12 hours ago... it has been backing up ever since, but it seems very, very slow and has not completed yet. It's just backing up a test server with a total disk usage of 1.8GB. What could cause the backup process to be so slow? rsnapshot always worked wonderfully fast, but I want to improve my backup solution. df shows that the usage on the backup disk is actually still increasing.
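
    To see what the job is actually doing rather than guessing, the per-host transfer log is the place to look. A hedged sketch with Debian-style default locations (both the install prefix and the host name here are assumptions):

        # XferLOG is compressed; BackupPC ships its own zcat for it
        /usr/share/backuppc/bin/BackupPC_zcat \
            /var/lib/backuppc/pc/testserver/XferLOG.z | tail -50

    If the log shows steady file-by-file progress, the usual suspects are per-file overhead (checksums, compression, hard-linking into the pool) rather than a hang; if it has stalled on one path, that file or the transport is the problem.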

    Read the article

  • Routing Traffic With OpenVPN

    - by user224277
    A few minutes ago I configured my VPN server, and I can connect to my VPN, but all traffic still goes through my normal home network. My OpenVPN application shows:

        Server IP: **.185.***.*10
        Client IP: 10.8.0.6
        Traffic: 7.3 KB in, 5.6 KB out
        Connected: 10 June 2014 19:21:59

    So everything is connected, but how can I set up Windows 7 so that all traffic goes through the OpenVPN network adapter? Client settings:

        client
        dev tun
        proto udp
        # enter the server's hostname
        # or IP address here, and port number
        remote **.185.***.*10 1194
        resolv-retry infinite
        nobind
        persist-key
        persist-tun
        # Use the full filepaths to your
        # certificates and keys
        ca ca.crt
        cert user1.crt
        key user1.key
        ns-cert-type server
        comp-lzo
        verb 6

    Server settings:

        port 1194
        proto udp
        dev tun
        # the full paths to your server keys and certs
        ca /etc/openvpn/keys/ca.crt
        cert /etc/openvpn/keys/server.crt
        key /etc/openvpn/keys/server.key
        dh /etc/openvpn/keys/dh2048.pem
        cipher BF-CBC
        # Set server mode, and define a virtual pool of IP
        # addresses for clients to use. Use any subnet
        # that does not collide with your existing subnets.
        # In this example, the server can be pinged at 10.8.0.1
        server 10.8.0.0 255.255.255.0
        # Set up route(s) to subnet(s) behind
        # OpenVPN server
        push "dhcp-option DNS 8.8.8.8"
        push "dhcp-option DNS 8.8.4.4"
        ifconfig-pool-persist /etc/openvpn/ipp.txt
        keepalive 10 120
        status openvpn-status.log
        verb 6

    and sysctl:

        net.ipv4.ip_forward=1

    Thank you for your time and help.
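
    What the server config above never does is tell clients to use the tunnel as their default route. The standard directive for that is redirect-gateway, plus NAT on the server so the forwarded traffic can get back out (the eth0 interface name is an assumption):

        # in the server config:
        push "redirect-gateway def1 bypass-dhcp"

        # on the server, masquerade the VPN subnet out the public interface:
        iptables -t nat -A POSTROUTING -s 10.8.0.0/24 -o eth0 -j MASQUERADE

    def1 overrides the default route with two /1 routes instead of replacing it, so the original gateway entry survives and is restored cleanly on disconnect.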

    Read the article

  • In Ubuntu Linux, how do I list packages installed from the “universe” repository?

    - by Nate
    On an Ubuntu 10.04 LTS server, I want to list installed packages and see what repository they come from. It’s easy to list installed packages, but it does not include the name of the repository (such as “main” or “universe”). And this information isn’t in /var/lib/dpkg/status, so dpkg-query doesn’t show it either. I want to get a list of “unsupported” software—that is, software that doesn’t come from the “main” repository, and for which Ubuntu does not guarantee security updates. Note: This is a server. It does not have X, GNOME or KDE installed.
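
    A hedged sketch of one way to approximate this with stock tools: apt-cache policy prints, for each version of a package, the repository line it came from, including the component (lucid/main, lucid/universe, ...). Looping over installed packages and keeping the first repository match is rough but serviceable:

        # List installed packages with the archive component they resolve to.
        # Approximate: assumes the first listed origin matches the installed version.
        dpkg-query -W -f='${Package}\n' | while read -r pkg; do
            comp=$(apt-cache policy "$pkg" | \
                grep -m1 -o '[a-z-]*/\(main\|universe\|multiverse\|restricted\)')
            echo "$pkg ${comp:-unknown}"
        done

    Filtering that output for everything not ending in /main then gives the "unsupported" list.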

    Read the article

  • How can I start new window in the same screen session automatically?

    - by Mato
    I read "How can I start multiple screen sessions automatically?", but I don't understand the accepted reply: screen -dmS "$SESSION_NAME" "$COMMAND" "$ARGUMENTS". In my case I need to automatically create one screen session for one script, and afterwards I need to create a new window in the same session for another script. Manually, I would:
      1. run screen
      2. enter the first command
      3. press CTRL+A, CTRL+C
      4. enter the second command
      5. press CTRL+A, CTRL+D
    How can I do this automatically in a script? A simple example would help me a lot. Thank you for any replies.
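
    A minimal sketch of exactly that sequence, non-interactively (session name and script paths are placeholders): -dmS starts a detached session running the first script, and -X screen asks the running session to open a new window running the second.

        screen -dmS mysession /path/to/first-script.sh
        screen -S mysession -X screen /path/to/second-script.sh

    Attaching later with screen -r mysession should show both windows, matching the CTRL+A CTRL+C flow done by hand.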

    Read the article

  • Availability of big files on multiple servers

    - by Imises
    I have to handle many (1,000-30,000) big files ranging from 200MB up to 2GB. The demand for these files is variable (0-300 downloads per file). This is why a single file must be saved on 2 or more servers. My servers are placed in different datacenters (in France), with different-sized HDDs (750GB to 4TB). Currently I share the files using PHP and ncftpget/ncftpput, but it's very slow. I need a solution for balancing these files across 7+ servers.

    Read the article

  • Haproxy, configure for one host

    - by Michal K.
    I have to use HAProxy on one machine. I want to redirect requests from an IP to the same IP on another port. My configuration (which doesn't work):

        global
            maxconn 4096   # Total Max Connections. This is dependent on ulimit
            daemon
            nbproc 1       # Number of processing cores. Dual Dual-core Opteron is 4 cores for example.

        defaults
            mode http
            clitimeout 600000000
            srvtimeout 600000000
            contimeout 400000000
            log 127.0.0.1 local0
            log 127.0.0.1 local1 notice
            option httpclose   # Disable Keepalive

        listen http_proxy 127.0.0.1:8080
            balance leastconn   # Load Balancing algorithm
            acl acl_apache path_end .avi .jpeg
            #option httpchk
            option forwardfor   # This sets X-Forwarded-For
            ## Define your servers to balance
            server DE2 127.0.0.1:8080 weight 1 maxconn 15 check
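
    One likely culprit worth pointing out: the listen line binds 127.0.0.1:8080 and the backend server line points at the very same 127.0.0.1:8080, so HAProxy would be forwarding requests to itself. A hedged sketch of the intended shape, with the frontend and backend on different ports (both port numbers are assumptions):

        listen http_proxy 127.0.0.1:80
            mode http
            balance leastconn
            option forwardfor
            server local1 127.0.0.1:8080 weight 1 maxconn 15 check

    With that split, traffic arriving on port 80 is handed to the real service listening on 8080 instead of looping.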

    Read the article

  • httpd.conf ruined - can't access my vps anymore

    - by Jazerix
    Okay, so this may be incredibly stupid, but I was configuring my httpd.conf file yesterday, and after a server restart I can no longer access the server. Port 80 is working fine and it displays my web pages, but when I connect via SSH it just says the connection was refused. I cannot access Webmin on port 10000 or get in via FTP either. :/ Do I need to recreate the whole site, or is there a way to get into it? I think I messed up the virtual hosts :)
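
    Worth noting that httpd.conf only governs Apache, so SSH, FTP and Webmin refusing connections points at something else that changed across the restart (a firewall script, or those services not starting). A hedged sketch of what to run from the VPS provider's out-of-band console, since SSH is down:

        netstat -tlnp | grep -E ':(21|22|10000) '   # are sshd/ftpd/webmin listening?
        iptables -L -n                              # did a firewall default to DROP?
        /etc/init.d/ssh status                      # init script name varies by distro

    If sshd simply is not running, starting it from the console restores access without touching the site at all.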

    Read the article

  • Solutions for scheduling cron jobs

    - by Shamsul
    I have set up a list of cron jobs. Some of the scripts take a long time, 1-5 hours, and that is increasing day by day. I do not want to run two of these scripts at the same time; it's not a dependency issue, but my server does not have enough memory to handle two big operations at once. So I need a solution where a scheduled script will not start until the previous script has finished. I have 10-15 cron jobs in the list, and 5 of them must not overlap. Can anyone help me find a solution?
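
    The usual tool for exactly this is flock(1) from util-linux: every job that must not overlap takes the same lock file, so a newly started job waits until the current holder finishes. A minimal sketch of the crontab (job paths and times are placeholders):

        0 1 * * * flock /var/lock/heavy-jobs.lock /usr/local/bin/job1.sh
        0 4 * * * flock /var/lock/heavy-jobs.lock /usr/local/bin/job2.sh
        0 7 * * * flock /var/lock/heavy-jobs.lock /usr/local/bin/job3.sh

    Using flock -n instead makes a job skip its run (rather than queue up) when another one is still going, which avoids a backlog of waiting jobs on very long days.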

    Read the article

  • Running screen without root

    - by digital
    Hi, I recently updated screen on my server, and for some reason, when logged in as a normal user, I can no longer create a screen session. If I run sudo screen it works. It's probably a permissions error somewhere, but I'm not sure where to find it. Any help would be greatly appreciated.
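
    A hedged checklist for the usual suspects after a screen upgrade (exact permissions vary by distro; the values below are the common Debian-style ones):

        ls -l "$(command -v screen)"   # often setgid utmp: -rwxr-sr-x root utmp
        ls -ld /var/run/screen         # socket dir, typically drwxrwxr-x root utmp

        # If the upgrade dropped the setgid bit or the socket directory perms:
        sudo chgrp utmp /usr/bin/screen && sudo chmod 2755 /usr/bin/screen
        sudo chmod 775 /var/run/screen

    Running screen once as the normal user afterwards should recreate the per-user socket directory under /var/run/screen with the right ownership.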

    Read the article

  • File gone or altered after MySQL[HY000][2002] error [on hold]

    - by Psyberion
    I'm working on a rather small project, and today I got an SQLSTATE[HY000] [2002] "Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock'" error. After a bit of googling and a few attempts to restart the mysqld service, I gave up and tried rebooting the computer. This did the trick: MySQL was now running fine. I did, however, get a more serious issue: some files were missing, others were altered, and a few rows in the MySQL database were gone. It's really strange; it's like the whole project has been reset two or three days, and I have no clue why. Some additional details:
      - I save my files after every line of code. I'm religious about this, so I haven't lost the files that way.
      - I was accessing the server via SSH when the error occurred, so I did the programming and the reboot over SSH.
      - The server is a Raspberry Pi, model B, running Raspbian, on which I run Apache2.
      - I was viewing the site and had an active session when I rebooted the system.
      - The pages I lost did work just before this all happened.
      - The MySQL fault occurred when I tried to add a "text NOT NULL" column to a table which had entries.
    The amount of lost work isn't really that much, so I'm not looking for help recovering the files. The reason I'm posting this is that I wonder how this happened, and why.

    Read the article

  • Simplest way to get current time in current timezone using boost::date_time ?

    - by timday
    If I do date +%H-%M-%S on the command line (Debian/Lenny), I get a user-friendly time printed (not UTC, not DST-less; the time a normal person has on their wristwatch). What's the simplest way to obtain the same thing with boost::date_time? If I do this:

        std::ostringstream msg;
        boost::local_time::local_date_time t =
            boost::local_time::local_sec_clock::local_time(
                boost::local_time::time_zone_ptr() );
        boost::local_time::local_time_facet* lf(
            new boost::local_time::local_time_facet("%H-%M-%S") );
        msg.imbue(std::locale(msg.getloc(), lf));
        msg << t;

    then msg.str() is an hour earlier than the time I want to see. I'm not sure whether this is because it's showing UTC or local-timezone time without a DST correction (I'm in the UK). What's the simplest way to modify the above to yield the DST-corrected local timezone time? I have an idea it involves boost::date_time::c_local_adjustor but I can't figure it out from the examples.
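
    A hedged sketch of the c_local_adjustor route (my reading of the Boost docs, not tested against Lenny's Boost): take UTC from the clock, shift it with the adjustor, which delegates to the C library's timezone handling and therefore picks up DST, then format it with a time_facet.

        #include <iostream>
        #include <boost/date_time/posix_time/posix_time.hpp>
        #include <boost/date_time/c_local_time_adjustor.hpp>

        int main() {
            using boost::posix_time::ptime;
            typedef boost::date_time::c_local_adjustor<ptime> local_adj;

            // UTC now, then DST-corrected local time via the C library's tz data
            ptime utc_now = boost::posix_time::second_clock::universal_time();
            ptime local_now = local_adj::utc_to_local(utc_now);

            // Print as HH-MM-SS to match `date +%H-%M-%S`
            boost::posix_time::time_facet* f =
                new boost::posix_time::time_facet("%H-%M-%S");
            std::cout.imbue(std::locale(std::cout.getloc(), f));
            std::cout << local_now << std::endl;
        }

    For wall-clock-only uses, boost::posix_time::second_clock::local_time() may already give the same answer in one call, since it also goes through the C library.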

    Read the article
