Search Results

Search found 33182 results on 1328 pages for 'linux port'.


  • Dovecot/Postfix-mysql e-mail Aliases are not correctly forwarded

    - by jo_fryli
    I recently set up Dovecot/Postfix-mysql on my Debian Squeeze server and I have a bit of a problem. Whenever I send an email to an alias ([email protected] forwarded to [email protected], for example), Postfix (or Dovecot, I'm not quite sure) delivers the message into a local mailbox rather than forwarding it to the real address. I have tested all the MySQL queries and they all behave the way I intend them to. The log shows:

      foobar dovecot: deliver([email protected]): msgid=<000001385b464c9a-e40af11e-3bf4-49f6-903d-1d2369f6bfb6-000000@barfoo: saved mail to INBOX

    My master.cf and main.cf are posted as well. Keep in mind that normal email sending and receiving works just fine! I have set up my MySQL tables with Postfixadmin. Thanks for your help!
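
    For anyone hitting the same thing: with a Postfixadmin schema, alias expansion normally happens through virtual_alias_maps before delivery is handed to Dovecot. A minimal sketch of the lookup map, assuming Postfixadmin's default "alias" table; the path, credentials and test address below are placeholders, not the poster's actual setup:

      # /etc/postfix/mysql-virtual-alias-maps.cf
      user     = postfix
      password = secret
      hosts    = 127.0.0.1
      dbname   = postfix
      query    = SELECT goto FROM alias WHERE address = '%s' AND active = '1'

      # main.cf
      virtual_alias_maps = mysql:/etc/postfix/mysql-virtual-alias-maps.cf

    You can check what Postfix actually resolves with:

      postmap -q alias@example.com mysql:/etc/postfix/mysql-virtual-alias-maps.cf

    If that lookup comes back empty or wrong, the problem is in the map or main.cf rather than in Dovecot.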

    Read the article

  • Appropriate Network switch for small server cluster

    - by Chris Dutrow
    I need to build a small business server cluster for the purpose of crunching data. It will not host a web site that needs to be available 24/7, but it does need to support servers that host Redis, a Cassandra database cluster, and a Python web server. The operating system will most likely be CentOS 6.4. Servers in the cluster should be able to communicate very fast with each other, especially the Redis server; this will probably require the use of internal IP addresses. We will also need multi-datacenter replication to synchronize the Cassandra cluster with the one we currently have hosted in the cloud. I was looking into network switches and I'm unsure of the appropriate specifications to look for. Does the switch need to be "managed", or can it be "unmanaged"? Does it need to support IPv6 or just IPv4? Do we need an enterprise-level Cisco switch, or can we go with something like a $200 D-Link managed (or unmanaged) small business switch? Thanks so much!

    Read the article

  • kill a hung mount process

    - by John P
    I have a virtual machine drive that ran out of space, so I shut down the VM and extended the volume using lvextend. After resizing the partition (ext3), I ran e2fsck on it, and it found and corrected errors. Unfortunately, when I ran e2fsck one more time, there were more errors that had to be fixed. I went through 3 rounds of e2fsck before I decided to try mounting it to clean up some space manually. I tried mounting it, but the mount process hung. I tried to "kill -9" the mount process, but that did not kill it. I killed the parent process, but that did not kill it either. Any ideas on how to kill a rogue mount process? Some evidence:

      ps -l 13292
      F S   UID   PID  PPID  C PRI  NI ADDR    SZ WCHAN  TTY        TIME CMD
      4 R     0 13292     1 99  85   0    - 17964 -      ?         11:27 mount /dev/mapper/xen7-123p3 /tmp/p3/

      lsof -p 13292
      COMMAND   PID USER   FD   TYPE DEVICE SIZE/OFF     NODE NAME
      mount   13292 root  cwd    DIR    9,2     4096 25264129 /root
      mount   13292 root  rtd    DIR    9,2     4096        2 /
      mount   13292 root  txt    REG    9,2    61656  2916434 /bin/mount
      mount   13292 root  mem    REG    9,2   144776 31457282 /lib64/ld-2.5.so
      mount   13292 root  mem    REG    9,2  1718232 31457284 /lib64/libc-2.5.so
      mount   13292 root  mem    REG    9,2    23360 31457291 /lib64/libdl-2.5.so
      mount   13292 root  mem    REG    9,2    43808 31457783 /lib64/libblkid.so.1.0
      mount   13292 root  mem    REG    9,2   247496 31457331 /lib64/libsepol.so.1
      mount   13292 root  mem    REG    9,2    95464 31457337 /lib64/libselinux.so.1
      mount   13292 root  mem    REG    9,2   154640 31457491 /lib64/libdevmapper.so.1.02
      mount   13292 root  mem    REG    9,2    17936 31457472 /lib64/libuuid.so.1.2
      mount   13292 root  mem    REG    9,2 56438208 12684878 /usr/lib/locale/locale-archive
      mount   13292 root    0u   CHR 136,11      0t0       13 /dev/pts/11 (deleted)
      mount   13292 root    1u   CHR 136,11      0t0       13 /dev/pts/11 (deleted)
      mount   13292 root    2u   CHR 136,11      0t0       13 /dev/pts/11 (deleted)

      umount -f /tmp/p3/
      umount2: Invalid argument
      umount: /tmp/p3/: not mounted
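
    Not from the original post, but a few diagnostics that usually tell you whether such a process can be killed at all (they assume sysrq is enabled and a kernel recent enough to expose per-task stacks):

      cat /proc/13292/wchan            # kernel function the task is currently in
      cat /proc/13292/stack            # full kernel backtrace, if the kernel exposes it
      echo w > /proc/sysrq-trigger     # dump all blocked (D-state) tasks to the kernel log
      dmesg | tail -n 40

    A task stuck inside the kernel never gets to act on SIGKILL; in that case the realistic options are fixing whatever the kernel is waiting on (here, the damaged device-mapper volume) or rebooting.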

    Read the article

  • How to force mdadm to stop RAID5 array?

    - by lucek
    I have a /dev/md127 RAID5 array that consisted of four drives. I managed to hot-remove them from the array, and currently /dev/md127 does not have any drives:

      cat /proc/mdstat
      Personalities : [linear] [multipath] [raid0] [raid1] [raid6] [raid5] [raid4] [raid10]
      md0 : active raid1 sdd1[0] sda1[1]
            304052032 blocks super 1.2 [2/2] [UU]
      md1 : active raid0 sda5[1] sdd5[0]
            16770048 blocks super 1.2 512k chunks
      md127 : active raid5
            super 1.2 level 5, 512k chunk, algorithm 2 [4/0] [____]
      unused devices: <none>

    and

      mdadm --detail /dev/md127
      /dev/md127:
              Version : 1.2
        Creation Time : Thu Sep  6 10:39:57 2012
           Raid Level : raid5
           Array Size : 8790402048 (8383.18 GiB 9001.37 GB)
        Used Dev Size : 2930134016 (2794.39 GiB 3000.46 GB)
         Raid Devices : 4
        Total Devices : 0
          Persistence : Superblock is persistent
          Update Time : Fri Sep  7 17:19:47 2012
                State : clean, FAILED
       Active Devices : 0
      Working Devices : 0
       Failed Devices : 0
        Spare Devices : 0
               Layout : left-symmetric
           Chunk Size : 512K

          Number   Major   Minor   RaidDevice State
             0       0        0        0      removed
             1       0        0        1      removed
             2       0        0        2      removed
             3       0        0        3      removed

    I've tried to stop the array, but:

      mdadm --stop /dev/md127
      mdadm: Cannot get exclusive access to /dev/md127: Perhaps a running process, mounted filesystem or active volume group?

    I made sure it's unmounted (umount -l /dev/md127) and confirmed that it indeed is not mounted:

      umount /dev/md127
      umount: /dev/md127: not mounted

    I've tried to zero the superblock of each drive and I get (for each drive):

      mdadm --zero-superblock /dev/sde1
      mdadm: Unrecognised md component device - /dev/sde1

    Here's the output of lsof | grep md127:

      md127_rai 276 root  cwd  DIR  9,0  4096  2  /
      md127_rai 276 root  rtd  DIR  9,0  4096  2  /
      md127_rai 276 root  txt  unknown            /proc/276/exe

    What else can I do? LVM is not even installed, so it can't be a factor.
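
    Not part of the original question, but a rough checklist of what can still hold an md device open; the sysfs paths assume a reasonably current kernel, and dmsetup only applies if device-mapper tools happen to be installed:

      ls /sys/block/md127/holders/       # anything stacked on top of the array (dm, another md)
      cat /sys/block/md127/md/array_state
      fuser -vm /dev/md127               # processes that still have the device open
      swapon -s                          # swap on the array would also pin it
      dmsetup ls                         # device-mapper maps built on md127, if any
      mdadm --stop /dev/md127            # retry once nothing is left holding it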

    Read the article

  • Default IPv6 route on debian squeeze does not come up after boot

    - by Georg Bretschneider
    I have a problem with my default IPv6 route not coming up after boot on a Debian Squeeze system. This is my config (/etc/network/interfaces):

      # Loopback device:
      auto lo
      iface lo inet loopback
      iface lo inet6 loopback

      # device: br0
      auto br0
      iface br0 inet static
          bridge_ports eth0
          bridge_fd 0
          address 88.198.62.xx
          broadcast 88.198.62.63
          netmask 255.255.255.224
          gateway 88.198.62.33
          up route add -net 88.198.62.32 netmask 255.255.255.224 gw 88.198.62.33 br0

      iface br0 inet6 static
          address 2a01:4f8:131:10x::2
          netmask 64
          gateway 2a01:4f8:131:100::1
          up route -A inet6 add 2a01:4f8:131:100::1/59 dev br0

    IPv4 comes up fine, but I have to run the route command manually after boot to make IPv6 work; otherwise I can't even reach my gateway. This is the output of ip -6 route show after boot:

      2a01:4f8:131:10x::/64 dev br0  proto kernel  metric 256  mtu 1500 advmss 1440 hoplimit 4294967295
      unreachable fe80::/64 dev lo  proto kernel  metric 256  error -101 mtu 16436 advmss 16376 hoplimit 4294967295
      fe80::/64 dev br0  proto kernel  metric 256  mtu 1500 advmss 1440 hoplimit 4294967295
      fe80::/64 dev eth0  proto kernel  metric 256  mtu 1500 advmss 1440 hoplimit 4294967295

    I already tried

      up ip -6 route add 2a01:4f8:131:100::1 dev br0
      up ip -6 route add default via 2a01:4f8:131:100::1 dev br0

    in /etc/network/interfaces, but with the same results. If I execute those commands manually from my shell, everything starts working nicely. And yes, I tried post-up instead of up, too. The only other change I made was to activate IP forwarding for IPv6, because I want to run some LXC containers on that system.
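
    A hedged sketch of the stanza that is often used when the IPv6 gateway lives outside the configured /64, as it does here: drop the gateway keyword so ifupdown doesn't try, and fail, to install an off-link default route itself, and add the on-link host route first. Whether this behaves differently on this particular bridge setup than the variants already tried is an assumption; the addresses are the ones from the question.

      iface br0 inet6 static
          address 2a01:4f8:131:10x::2
          netmask 64
          # no "gateway" line: the router is not inside the /64, so add it explicitly
          post-up ip -6 route add 2a01:4f8:131:100::1 dev br0
          post-up ip -6 route add default via 2a01:4f8:131:100::1 dev br0
          pre-down ip -6 route del default via 2a01:4f8:131:100::1 dev br0

    Checking /var/log/syslog for the error ifupdown logs when the route command runs at boot usually narrows the remaining cases down.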

    Read the article

  • Mail server Backup script

    - by Paul Stevens
    Hello, I'm looking for the best way to accomplish a full backup of the "vmail" accounts on our mail server (CentOS 5.1 with iRedMail). I also need to split the resulting tar or bzip2-compressed backup into 4 GB parts and burn them to DVD-RW on the same server. The idea is to have this procedure run overnight, once a week. Our mail server holds about 45 GB of mail. I will appreciate any advice or help on this topic. Thanks.
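
    Not from the original post, but a rough sketch of the kind of weekly cron job this usually ends up being; the mail path, output directory and DVD device are assumptions:

      #!/bin/sh
      # weekly vmail backup -- sketch only; run from cron, e.g. 0 2 * * 0
      DAY=$(date +%Y%m%d)
      OUT=/backup/vmail-$DAY.tar.gz
      tar czf "$OUT" /var/vmail
      # split into pieces that fit on a 4.7 GB single-layer DVD
      split -b 4096m "$OUT" "$OUT".part-
      # burn one part per disc (growisofs ships in the dvd+rw-tools package)
      growisofs -Z /dev/dvd -R -J "$OUT".part-aa

    Even compressed, 45 GB of mail will span several discs, which is worth keeping in mind for the overnight window.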

    Read the article

  • VPN on a ubuntu server limited to certain ips

    - by Hultner
    I have a server running Ubuntu Server 9.10 and I need access to it and to other parts of my network sometimes when I'm not at home. There are two places I need to access the VPN from: one has a static IP and the other has a dynamic one, but with DynDNS set up so I can always get the current IP if I want to. Now, when it comes to servers people call me kind of paranoid, but security is always my number one priority and I never like to allow access to the server from outside the network. Therefore there are two things this VPN has to have: one, it shouldn't be accessible from any IP other than these two, and two, it has to use a very secure key so it will be virtually impossible to brute-force, even from those IPs. I have no experience whatsoever in setting up VPNs; I have used SSH tunneling but never an actual VPN. So what would be the best, most stable, safest and most performance-efficient way to set this up on an Ubuntu server? Is it possible, or should I just set up some kind of SSH tunnel instead? Thanks in advance for any answers.
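
    Not an answer from the thread, just a sketch of how the source restriction is usually layered on top of OpenVPN; the addresses are placeholders standing in for the two locations:

      # allow OpenVPN (default 1194/udp) only from the two trusted sources
      iptables -A INPUT -p udp --dport 1194 -s 203.0.113.10 -j ACCEPT    # the static IP
      iptables -A INPUT -p udp --dport 1194 -s 198.51.100.20 -j ACCEPT   # current DynDNS address
      iptables -A INPUT -p udp --dport 1194 -j DROP

    Because the DynDNS address changes, the second rule needs refreshing from cron whenever the hostname resolves to something new. OpenVPN's tls-auth shared key adds a further layer on top of the certificates, since packets without the correct HMAC are dropped before any handshake starts.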

    Read the article

  • Can't remove/delete symlink

    - by user477519
    I tried to create a symlink and it threw this error:

      ln: accessing `.test': Permission denied

    Now I can't unlink or delete the symlink. I tried Googling for help but could not find a solution. Please find the results of the following commands:

      stat .test
        File: `.test'
      stat: cannot read symbolic link `.test': Permission denied
        Size: 26         Blocks: 0          IO Block: 16384   symbolic link
      Device: 1fh/31d    Inode: 312075453   Links: 1
      Access: (0777/lrwxrwxrwx)  Uid: (11160/ chatt)   Gid: (11307/ pgr)
      Access: 2012-11-12 11:36:51.167327500 +0000
      Modify: 2012-11-12 11:36:51.163331700 +0000
      Change: 2012-11-12 11:36:51.163331700 +0000
       Birth: -

      chattr -i .test
      chattr: Permission denied while trying to stat .test

      lsattr .test
      lsattr: Operation not supported While reading flags on .test

    Any help would be appreciated. Thanks
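
    Not from the original post: the 16 KB IO block and the unusual device number suggest a network or FUSE filesystem, where "Permission denied" can come from the server side rather than from the local mode bits. A few things that sometimes help when rm on the name fails; the inode number is the one from the stat output above:

      df -T .                                        # which filesystem the directory actually lives on
      unlink .test
      find . -maxdepth 1 -inum 312075453 -exec rm -- {} \;

    If the directory sits on NFS or similar, the fix usually has to happen on the server, or as the exporting user, since root on the client may be squashed.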

    Read the article

  • Why won't apache load a symlinked file from conf.d?

    - by kdt
    I have an Apache configuration file which works fine when it's placed directly in /etc/httpd/conf.d/foo.conf. However, when I move the same file somewhere else (for example, to /tmp/foo.conf) and then create a symlink with ln -s /tmp/foo.conf /etc/httpd/conf.d, Apache fails on startup with:

      httpd: could not open document config file /etc/httpd/conf.d/foo.conf

    I've tried making the file and the symlink mode 777, and tried changing them to be owned by the apache user. It seems like Apache is refusing to load the file purely on the basis of it being a symlink, but I'm sure I've used symlinks successfully on other machines. Is there something I'm missing? Does Apache have an option for refusing to load config files if they're symlinks? The operating system is CentOS 4.4, Apache version 2.0.52.
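
    Not from the thread: on CentOS the usual culprit for exactly this symptom is SELinux labeling rather than Apache itself, since files created under /tmp carry a context httpd isn't allowed to read through the symlink. A quick check, assuming the standard targeted policy:

      getenforce                                   # "Enforcing" means SELinux is active
      ls -Z /tmp/foo.conf /etc/httpd/conf.d/       # compare the security contexts
      chcon -t httpd_config_t /tmp/foo.conf        # relabel the target so httpd may read it
      # or, strictly as a one-off test: setenforce 0, restart httpd, then setenforce 1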

    Read the article

  • Creating multiple SFTP users for one account

    - by Tom Marthenal
    I'm in the process of migrating an aging shared-hosting system to more modern technologies. Right now, plain old insecure FTP is the only way for customers to access their files. I plan on replacing this with SFTP, but I need a way to create multiple SFTP users that correspond to one UNIX account. A customer has one account on the machine (e.g. customer) with a home directory like /home/customer/. Our clients are used to being able to create an arbitrary number of FTP accounts for their domains (to give out to different people), and we need the same capability with SFTP. My first thought is to use SSH keys and just add each new "user" to authorized_keys, but this is confusing for our customers, many of whom are not technically inclined and would prefer to stick with passwords. SSH is not an issue; only SFTP is available. How can we create multiple SFTP accounts (customer, customer_developer1, customer_developer2, etc.) that all function as equivalents and don't interfere with file permissions (ideally, all files should retain customer as their owner)? My initial thought was some kind of PAM module, but I don't have a clear idea of how to accomplish this within our constraints. We are open to using an alternative SSH daemon if OpenSSH isn't suitable for our situation; again, it needs to support only SFTP and not SSH. Currently our SSH configuration has this appended to it in order to jail the users in their own directories:

      # all customers have group 'customer'
      Match group customer
          ChrootDirectory /home/%u       # jail in home directories
          AllowTcpForwarding no
          X11Forwarding no
          ForceCommand internal-sftp     # force SFTP
          PasswordAuthentication yes     # for non-customer accounts we use keys instead

    Our servers are running Ubuntu 12.04 LTS.
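
    Not from the thread: one approach that keeps every file owned by the customer is to create extra login names that share the customer's UID. A sketch, with customer_developer1 as a made-up example account:

      # extra login names that share the customer's UID, so files keep the same owner
      CUST_UID=$(id -u customer)
      useradd -o -u "$CUST_UID" -g customer -d /home/customer -M \
              -s /usr/sbin/nologin customer_developer1
      passwd customer_developer1

    With aliases like this, the ChrootDirectory /home/%u line would need to become ChrootDirectory %h (or a fixed path), since %u expands to the alias name rather than to customer. Whether duplicate UIDs play nicely with the hosting platform's quota and accounting is worth checking before rolling it out.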

    Read the article

  • cp command force

    - by user121196
    There is already an xxx directory in /home/yyy and I'm trying to overwrite it with

      cp -fr ../xxx /home/yyy/

    but it still prompts me to overwrite the individual files. How do I fix that?
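
    Not from the original post: the interactive prompt usually comes from a shell alias that turns cp into cp -i for root, not from cp itself. A quick check and two ways around it:

      type cp                        # shows something like: cp is aliased to `cp -i'
      \cp -fr ../xxx /home/yyy/      # a leading backslash bypasses the alias
      /bin/cp -fr ../xxx /home/yyy/  # or call the binary directly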

    Read the article

  • How can I get Gnome-Do to open in multiple X Screens?

    - by btelles
    Hi, I LOVE Gnome-Do (the Ubuntu equivalent of Quicksilver). The only thing is that I have several monitors which are all completely separate X screens (i.e. I can't move windows between them), and Gnome-Do will only open on ONE of those monitors. If I go to monitor/screen #2 and press Super+Space, the Gnome-Do window appears on the first monitor. Is it possible to get a separate instance of Gnome-Do on each screen? P.S. Using profiles may be a workaround: I've managed to get multiple instances of Firefox by using "firefox -P my_first_screen". Is anything like that available in Gnome-Do?
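
    Not from the thread, and untested: with separate X screens each screen has its own DISPLAY suffix, so the usual first experiment is simply pointing a second copy of the launcher at the other screen. Whether Gnome-Do tolerates a second instance in the same session (it normally enforces a single one) is an open assumption here.

      # screens on one display are :0.0, :0.1, ... -- try starting Do on the second screen
      DISPLAY=:0.1 gnome-do &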

    Read the article

  • memcached install issues with lib event on server

    - by albert N
    I've installed libevent on my server in the directory root/data/ and I'm about to install memcached with:

      ./configure –with-lib-event=/data/; make; make install

    However, after running a bit I get this error saying I'm pointing at the wrong directory for libevent:

      checking for libevent directory... configure: error: libevent is required.  You can get it from http://www.monkey.org/~provos/libevent/
            If it's already installed, specify its path using --with-libevent=/dir/
      make: *** No targets specified and no makefile found.  Stop.
      make: *** No rule to make target `install'.  Stop.

    Any suggestions? I am not experienced with the CLI, so anything helps. Thanks!
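
    Not from the thread: two things stand out in the command as quoted. The option name is --with-libevent (no hyphen inside "libevent", and two plain ASCII dashes), and it wants the prefix libevent was installed into, i.e. the directory containing its lib/ and include/. A sketch, with the prefix as an assumption about where libevent actually ended up:

      ./configure --with-libevent=/root/data
      make && make install
      # if libevent's lib/ is outside the default linker path, register it
      # (on distributions that use /etc/ld.so.conf.d):
      echo "/root/data/lib" > /etc/ld.so.conf.d/libevent.conf
      ldconfig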

    Read the article

  • iptables ACCEPT policy

    - by kamae
    In Red Hat EL 6, the iptables INPUT policy is ACCEPT, but the INPUT chain has a REJECT entry at the end. /etc/sysconfig/iptables is as below:

      *filter
      :INPUT ACCEPT [0:0]
      :FORWARD ACCEPT [0:0]
      :OUTPUT ACCEPT [0:0]
      -A INPUT -m state --state ESTABLISHED,RELATED -j ACCEPT
      -A INPUT -p icmp -j ACCEPT
      -A INPUT -i lo -j ACCEPT
      -A INPUT -m state --state NEW -m tcp -p tcp --dport 22 -j ACCEPT
      -A INPUT -j REJECT --reject-with icmp-host-prohibited
      -A FORWARD -j REJECT --reject-with icmp-host-prohibited
      COMMIT

    Do you know why the policy is ACCEPT and not DROP? I think a DROP policy is safer than ACCEPT, in case a mistake is made in the chain. As it stands, the policy is never applied to any packet:

      # iptables -L -v
      Chain INPUT (policy ACCEPT 0 packets, 0 bytes)
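
    Not from the thread: the trailing REJECT rule already catches every packet the earlier rules didn't accept, so the chain policy is never consulted; the difference only matters if that rule is ever flushed or removed. Anyone who prefers belt-and-braces can flip the policy as well, for example:

      iptables -P INPUT DROP      # packets that fall off the end are now silently dropped
      service iptables save       # persist it into /etc/sysconfig/iptables on RHEL 6

    Worth doing from a console rather than over SSH, since flushing the chain while a DROP policy is in place cuts off remote sessions; that risk is the flip side of the extra safety.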

    Read the article

  • Want to add a new domain to my postfix, but mails not delivered (mail loops back to myself)

    - by user74850
    Hi everybody, my relay host is working, but I was asked to add a new domain (like external.example.com). I added the new FQDN to relay_domains in main.cf:

      relay_domains = $mydestination, example.com, example2.com, externe.example.com

    and in transport:

      example.com             smtp:[192.168.1.22]
      example2.com            smtp:[192.168.1.22]
      external.example.com    smtp:[192.168.1.22]

    Mail sent to users in the example.com and example2.com domains is delivered fine, but for mail to external.example.com I get this message:

      Mar 17 16:47:52 relayhost postfix/smtp[11948]: 754E1876A01: to=<[email protected]>, relay=relayhost2.example.com[80.81.82.83]:25, delay=19555, delays=19525/0.01/30/0, dsn=4.4.6, status=deferred (mail for external.example.com loops back to myself)

    I read another Q/A about the same issue right here, but it's not helping me. Can you?
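
    Not from the thread, but two things are worth ruling out. First, the relay_domains line above spells the domain "externe.example.com" while the transport map says "external.example.com"; if that is not just a transcription slip, Postfix never treats the new domain as a relay domain and falls back to MX routing, which is exactly what produces a "loops back to myself" when the MX points at this relay. Second, transport changes only take effect once the map is rebuilt and Postfix is reloaded. A quick check, assuming the usual /etc/postfix/transport location:

      postconf relay_domains transport_maps          # confirm what Postfix actually sees
      postmap /etc/postfix/transport                 # rebuild the hash map
      postmap -q external.example.com hash:/etc/postfix/transport
      postfix reload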

    Read the article

  • Cracking WEP with Aircrack and Kismet

    - by Jenny
    Just a minor question, but I notice that when aircrack lists networks, it does not show the encryption type of each network. That seems fair enough, as you can use Kismet; however, on my machine, when I end Kismet and its server, the monitor interface is not removed and I cannot remove it manually, which interferes with aircrack. So, is Kismet needed to view the encryption types of networks, and if so, how do you use it peacefully in unison with aircrack?

    Read the article

  • How do I make zeitgeist work in Arch?

    - by wleoncio
    I've been trying to set up Zeitgeist on my GNOME Shell system for a couple of days, but I have yet to get it to work. I've done everything I could think of, i.e. installing zeitgeist from [extra] as well as libqzeitgeist. I've also installed all the GNOME extensions created by Seif (https://extensions.gnome.org/accounts/profile/seif), since they're the reason I'm installing the package in the first place. I've tried running "zeitgeist-daemon --replace" and then "gnome-shell --replace", but nothing seems to work. According to Der Harm's wiki (https://wiki.archlinux.org/index.php/User:Der_harm#Gnome_Zeitgeist), the Zeitgeist daemon doesn't need to be explicitly started, but even if it did, I don't know how to do it (since it's not in /etc/rc.d, I bet adding "zeitgeist" to my rc.conf wouldn't do any good either). I can't believe there isn't a very simple setup here; please help me see what I'm missing!
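
    Not from the thread: Zeitgeist is normally activated on demand over the session D-Bus rather than as an rc.conf service, so one quick check is whether the engine is actually registered on the bus. Treat the exact incantation as a sketch:

      dbus-send --session --print-reply --dest=org.freedesktop.DBus \
          /org/freedesktop/DBus org.freedesktop.DBus.ListNames | grep -i zeitgeist
      zeitgeist-daemon --replace &      # then re-run the check above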

    Read the article

  • Is there software that will help me convert my PST files into a searchable web archive?

    - by chronoz
    I have used POP3 for many, many years and always used PST files for backup purposes. I'd like to be able to create a searchable mail archive of these 12 GB worth of e-mail. I used Horde + Qmail for a while for searching e-mail, but it was truly horrible and extremely slow when searching even a few tens of thousands of e-mails, let alone more than a million. I would prefer a free solution that provides fast searching through historical e-mails, preferably hosted on a server, so I don't have to worry about backing up any more crucial data on my desktop.
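
    Not from the thread: whatever archive ends up hosting the mail, the PST files first have to be converted into an open format, and readpst from the libpst package is the usual tool for that step. The filenames below are placeholders:

      readpst -r -o ./mail-archive backup.pst    # writes an mbox per Outlook folder under ./mail-archive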

    Read the article

  • Problems installing icinga-web

    - by Kungurov
    I'm using Ubuntu 10.04 LTS (64-bit, Server) with Apache 2.2.14. Following the instructions from the official Icinga page (http://docs.icinga.org/latest/en/index.html) I installed icinga-web-1.7.1 on my machine and configured a few hosts for test purposes. The Classic interface runs as expected, but the new web interface does not show any data. When I try

      ps aux | grep ido2db | grep -v grep

    I get:

      icinga   27425  0.0  0.0  41464   600 ?  Ss  Jul27  0:00 /usr/local/icinga/bin/ido2db -c /usr/local/icinga/etc/ido2db.cfg

    which might indicate a problem with idomod/ido2db, because according to the docs there should be at least two processes in that output. Any ideas how to fix that?
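
    Not from the thread: a single ido2db process usually just means no broker connection has been opened yet, so the first things to confirm are that icinga.cfg actually loads idomod and that the socket settings on both sides match. The paths assume the /usr/local/icinga source-install prefix visible above:

      grep broker_module /usr/local/icinga/etc/icinga.cfg
      grep -E "^(output_type|output)" /usr/local/icinga/etc/idomod.cfg
      grep -E "^(socket_type|socket_name)" /usr/local/icinga/etc/ido2db.cfg
      grep -i idomod /usr/local/icinga/var/icinga.log | tail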

    Read the article

  • overriding default scheduler for blkio requests in cgroups

    - by Aamir Mushtaq
    I am trying to optimize a set of servers that have to reside on a single machine, i.e. I can have multiple application servers, a DB server and of course a Samba server in the same instance. I have been looking into the various optimization options available to me. In that quest I did my tuning of the network stack; for the CPU, memory and blkio tweaks I am using cgroups. The problem I am facing is that, given the nature of the applications I am running, the CFQ scheduler used by the blkio subsystem is not optimal; I would rather use the deadline scheduler, because it serves my purpose better. My question is whether it is possible to change the scheduler for blkio to deadline in the kernel compilation itself, and whether that will be reflected in my usage of cgroup hierarchies. When the cgconfig service runs, a new fs is mounted, and I don't want it to revert to the CFQ scheduler. I also welcome any suggestions that would give me more control over my resources.
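
    Not from the thread: the I/O scheduler is chosen per block device rather than per cgroup, so no recompilation (and no cgconfig change) is needed to move to deadline; it can be switched at runtime or made the boot default. A sketch, with sda standing in for whichever disk matters:

      cat /sys/block/sda/queue/scheduler        # e.g. "noop deadline [cfq]"
      echo deadline > /sys/block/sda/queue/scheduler
      # or make it the default for every disk by booting with: elevator=deadline

    One caveat: the cgroup proportional-weight controller (blkio.weight) is implemented inside CFQ, so under deadline only the blkio.throttle.* knobs keep working.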

    Read the article

  • Should I quit using Ifconfig?

    - by Zhen
    On the servers with InfiniBand cards, when I use the ifconfig command I get the warning:

      Ifconfig uses the ioctl access method to get the full address information, which limits hardware addresses to 8 bytes.
      Because Infiniband address has 20 bytes, only the first 8 bytes are displayed correctly.
      Ifconfig is obsolete! For replacement check ip.

    Should I quit using ifconfig? Is it deprecated in favor of the ip command, or will it be updated in the near future?
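
    Not from the thread: for reference, the iproute2 replacements the warning points to cover the same day-to-day uses and print full-length hardware addresses. The interface name ib0 below is an assumption:

      ip link show              # link state and full-length hardware addresses
      ip addr show dev ib0      # addresses on a single interface
      ip -s link show ib0       # per-interface statistics (ifconfig's RX/TX counters)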

    Read the article
