Search Results

Search found 27819 results on 1113 pages for 'linux intel'.

  • Postfix count relayed messages per user

    - by Martino Dino
    I would like to know if it's possible to count the outgoing (relayed) messages on a per-user basis in Postfix. I'm managing a small commercial SMTP relay and decided it would be nice to have a detailed daily report on how much mail a single user has sent (and eventually to enforce some limits), possibly in real time. I've looked almost everywhere and started to think that writing my own milter would be the way to go... Are you aware of anything that already exists for Postfix that can count and report relayed mail for authenticated users (a script, milter or whatever)?
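
    Nothing ships with Postfix for this, but a daily report can be scraped from the mail log as a stopgap. A minimal sketch, assuming the stock syslog format where smtpd logs one sasl_username= line per accepted message (log path and exact format vary by distro):

        awk '/postfix\/smtpd/ && /sasl_username=/ {
            sub(/.*sasl_username=/, "")
            count[$1]++
        }
        END { for (u in count) print count[u], u }' /var/log/mail.log | sort -rn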

  • send outgoing email via postfix from mail client

    - by Ey Jay
    I have installed Postfix on my Ubuntu server, which is hosted on DigitalOcean. What I want to do is use this SMTP server to send mail from my email client; I don't need to receive, I just need to send. I can telnet example.com 25 successfully and I received the test email in my inbox, but when I tried the same details in an email client (smtp: example.com:25, user: smtp1user, password: smtp1userpassword) I get an error that says "Server doesn't respond. Try changing the port." I don't know how to proceed.
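
    One thing worth ruling out (a sketch, not a confirmed fix): many ISPs and office networks block outbound port 25, which produces exactly this "server doesn't respond" symptom even though telnet works from another host. The usual workaround is enabling the submission port (587) in /etc/postfix/master.cf, assuming SASL auth is already configured, then pointing the client at example.com:587:

        submission inet n       -       -       -       -       smtpd
          -o smtpd_sasl_auth_enable=yes
          -o smtpd_tls_security_level=may
          -o smtpd_recipient_restrictions=permit_sasl_authenticated,reject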

  • "Possible SYN flooding" in log despite low number of SYN_RECV connections

    - by al4
    Recently we had an Apache server which was responding very slowly due to SYN flooding. The workaround for this was to enable tcp_syncookies (net.ipv4.tcp_syncookies=1 in /etc/sysctl.conf). I posted a question about this here if you want more background.

    After enabling syncookies we started seeing the following message in /var/log/messages approximately every 60 seconds:

        [84440.731929] possible SYN flooding on port 80. Sending cookies.

    Vinko Vrsalovic informed me that this means the SYN backlog is getting full, so I raised tcp_max_syn_backlog to 4096. At some point I also lowered tcp_synack_retries to 3 (down from the default of 5) by issuing sysctl -w net.ipv4.tcp_synack_retries=3. After doing this, the frequency seemed to drop, with the interval of the messages varying between roughly 60 and 180 seconds. Next I issued sysctl -w net.ipv4.tcp_max_syn_backlog=65536, but I am still getting the message in the log.

    Throughout all this I've been watching the number of connections in the SYN_RECV state (by running watch --interval=5 'netstat -tuna | grep "SYN_RECV" | wc -l'), and it never goes higher than about 240, much lower than the size of the backlog. Yet I have a Red Hat server which hovers around 512 (the limit on that server is the default of 1024).

    Are there any other TCP settings which would limit the size of the backlog, or am I barking up the wrong tree? Should the number of SYN_RECV connections in netstat -tuna correlate to the size of the backlog?

    Update

    As best I can tell I'm dealing with legitimate connections here; netstat -tuna | wc -l hovers around 5000. I've been researching this today and found this post from a last.fm employee, which has been rather useful. I've also discovered that tcp_max_syn_backlog has no effect when syncookies are enabled (as per this link). So as a next step I set the following in sysctl.conf:

        net.ipv4.tcp_syn_retries = 3          # default=5
        net.ipv4.tcp_synack_retries = 3       # default=5
        net.ipv4.tcp_max_syn_backlog = 65536  # default=1024
        net.core.wmem_max = 8388608           # default=124928
        net.core.rmem_max = 8388608           # default=131071
        net.core.somaxconn = 512              # default=128
        net.core.optmem_max = 81920           # default=20480

    I then set up my response-time test, ran sysctl -p and disabled syncookies with sysctl -w net.ipv4.tcp_syncookies=0. After doing this the number of connections in the SYN_RECV state still remained around 220-250, but connections were starting to delay again. Once I noticed these delays I re-enabled syncookies and the delays stopped.

    I believe what I was seeing was still an improvement over the initial state, but some requests were still delayed, which is much worse than having syncookies enabled. So it looks like I'm stuck with them enabled until we can get some more servers online to cope with the load. Even then, I'm not sure I see a valid reason to disable them again, as they're only sent (apparently) when the server's buffers get full. But the SYN backlog doesn't appear to be full with only ~250 connections in the SYN_RECV state! Is it possible that the SYN flooding message is a red herring and it's something other than the syn_backlog that's filling up?

    If anyone has any other tuning options I haven't tried yet I'd be more than happy to try them out, but I'm starting to wonder if the syn_backlog setting isn't being applied properly for some reason.
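
    One way to tell whether the SYN backlog itself is actually overflowing, rather than inferring it from SYN_RECV counts (a sketch; exact counter wording varies by kernel, and the ss state filter needs a reasonably recent iproute2):

        # SYN_RECV sockets for port 80 (what netstat was counting, minus a header line)
        ss -n state syn-recv sport = :80 | wc -l

        # cumulative kernel counters that only increment on real overflows/drops
        netstat -s | egrep -i 'listen|SYNs'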

  • SSL error: error:0B080074:x509 certificate routines:X509_check_private_key:key values mismatch

    - by Tiffany Walker
    ERROR:

        SSL error: error:0B080074:x509 certificate routines:X509_check_private_key:key values mismatch

    STEPS:

        openssl genrsa -out SITE.TLD.key 2048
        openssl req -new -key SITE.TLD.key -out SITE.TLD.csr

    (send CSR to SSL site to sign), add CERT to SITE.TLD.crt, add CA to SITE.TLD.ca, chained them:

        cat SITE.TLD.crt SITE.TLD.ca > chained.cert

    Any idea what I am doing wrong? I am using LiteSpeed HTTPd.
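
    That error means the certificate being loaded was not issued from the private key it is paired with. A quick check with standard openssl, using the filenames above:

        openssl rsa  -noout -modulus -in SITE.TLD.key | openssl md5
        openssl x509 -noout -modulus -in SITE.TLD.crt | openssl md5

    If the two digests differ, the CA signed a different CSR/key than the one on disk. It is also worth confirming that chained.cert starts with the server certificate rather than the CA certificate.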

  • How to restore backed-up email files in qmail

    - by Maysam
    I have a problem restoring some old backed-up mail files on a mail server that uses qmail. The problem is that when I copy a new email file to the /cur directory, the number of emails shown next to the inbox increases, but when I click on the inbox I don't see the newly copied email; I can only see the old emails. I also deleted the maildirsize and courierimapuiddb files and they were automatically recreated, but it didn't help and I still cannot see the email in my inbox. Is there something I am missing? How can I restore the backed-up email files? Please note that when I copy the email files into the /.sent-mail/cur directory, they are all displayed in my sent box, but that doesn't happen for inbox files in /cur.
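
    A hedged sketch of one restore approach, with hypothetical backup paths: courier-imap only notices messages whose filenames are unique and carry a valid maildir info suffix, so renaming each file on copy sometimes helps (after which the maildirsize/courierimapuiddb deletion would be done once more):

        cd ~/Maildir/cur
        i=0
        for f in /backup/mail/cur/*; do
            i=$((i + 1))
            cp "$f" "$(date +%s).restore$i.$(hostname):2,S"
        done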

  • Grub hangs at "Starting up ..." when USB flash card reader is plugged in (on Ubuntu Hardy)

    - by Laurence Gonsalves
    I have a PC with Ubuntu Hardy installed. The machine boots fine unless my USB flash card reader (one of those N-in-1 readers by MediaGear) is plugged in at startup. If the reader is plugged in, the boot process proceeds as normal until it gets to the screen that says "Starting up ...". At that point it just hangs forever. To work around this I currently leave the reader unplugged when booting, and then plug it back in after I see that Ubuntu is actually starting. This is annoying though, especially when I reboot the machine (typically for updates), forget to unplug the reader, and walk away only to come back hours later to find the machine hung. My guess is that the presence of the reader is confusing Grub about where to find the kernel. The weird thing is that Grub is on the same drive as the kernel I want it to boot so clearly the drive is still readable even when the flash card reader is plugged in. Is there some way I can tell Grub to never go looking on the flash card reader?
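
    One thing that may be worth trying (a sketch, not a verified fix): GRUB legacy on Hardy identifies disks by BIOS enumeration order, which a multi-slot USB reader can shuffle. Pinning the kernel's root filesystem by UUID in /boot/grub/menu.lst removes at least one moving part (the UUID and kernel version below are placeholders):

        title  Ubuntu 8.04
        root   (hd0,0)
        kernel /boot/vmlinuz-2.6.24-29-generic root=UUID=0a1b2c3d-placeholder ro quiet
        initrd /boot/initrd.img-2.6.24-29-generic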

  • Migrating away from LVM

    - by Kye
    I have an Ubuntu home media server with 4.5TB split across a few hard drives (1x3TB, 2x1TB), and I'm using LVM2 to manage the volumes. I have recently added a 60GB SSD to the server, and I wish to use it to house the root partition (which is currently inside the LVM group). I don't want to simply add it to the LVM volume group, because (afaik) there's no way to ensure that the SSD will be used for the root filesystem; if I just throw it at the VG, it may be used to house my media, which would defeat the purpose of having the SSD in the first place. I feel that my only solution is to somehow remove my root partition from the LVM setup and copy it across to the SSD. My boot partition is, of course, not part of the LVM group. My disk setup is as follows:

        60GB SSD: empty
        1TB HDD:  /boot, LVM space
        1TB HDD:  LVM space
        3TB HDD:  LVM space

    I have a few logical volumes: my root (/), a 'media' volume for my media collection, a backup volume for my network backups, etc. Does anyone have any advice on how to go about this? My end goal is to have the 60GB SSD used for the boot and root partitions, with everything else on the 3TB/1TB/1TB hard drives.
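
    A sketch of the copy-based route, run from a live CD/USB (all device and volume-group names below are placeholders for this setup):

        mkfs.ext4 /dev/sdX1                  # new root partition on the SSD
        mkdir -p /mnt/new /mnt/old
        mount /dev/sdX1 /mnt/new
        mount /dev/vgmedia/root /mnt/old     # the current LVM root LV
        rsync -aAXH /mnt/old/ /mnt/new/
        # then point / at the SSD by UUID in /mnt/new/etc/fstab, chroot in,
        # re-run update-grub and grub-install, and finally lvremove the old LV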

  • SSH works in putty but not terminal

    - by Ryan Naddy
    When I try to ssh to this in a terminal:

        ssh [email protected]

    I get the following error:

        Connection closed by 69.163.227.82

    When I use PuTTY, I am able to connect to the server. Why is this happening, and how can I get this to work in a terminal?

        ssh -v [email protected]
        OpenSSH_6.0p1 (CentrifyDC build 5.1.0-472) (CentrifyDC build 5.1.0-472), OpenSSL 0.9.8w 23 Apr 2012
        debug1: Reading configuration data /etc/centrifydc/ssh/ssh_config
        debug1: /etc/centrifydc/ssh/ssh_config line 52: Applying options for *
        debug1: Connecting to sub.domain.com [69.163.227.82] port 22.
        debug1: Connection established.
        debug1: identity file /home/ryannaddy/.ssh/id_rsa type -1
        debug1: identity file /home/ryannaddy/.ssh/id_rsa-cert type -1
        debug1: identity file /home/ryannaddy/.ssh/id_dsa type -1
        debug1: identity file /home/ryannaddy/.ssh/id_dsa-cert type -1
        debug1: identity file /home/ryannaddy/.ssh/id_ecdsa type -1
        debug1: identity file /home/ryannaddy/.ssh/id_ecdsa-cert type -1
        debug1: Remote protocol version 2.0, remote software version OpenSSH_5.1p1 Debian-5
        debug1: match: OpenSSH_5.1p1 Debian-5 pat OpenSSH_5*
        debug1: Enabling compatibility mode for protocol 2.0
        debug1: Local version string SSH-2.0-OpenSSH_6.0
        debug1: Miscellaneous failure
        Cannot resolve network address for KDC in requested realm
        debug1: Miscellaneous failure
        Cannot resolve network address for KDC in requested realm
        debug1: SSH2_MSG_KEXINIT sent
        debug1: SSH2_MSG_KEXINIT received
        debug1: kex: server->client aes128-ctr hmac-md5 none
        debug1: kex: client->server aes128-ctr hmac-md5 none
        debug1: SSH2_MSG_KEX_DH_GEX_REQUEST(1024<1024<8192) sent
        debug1: expecting SSH2_MSG_KEX_DH_GEX_GROUP
        Connection closed by 69.163.227.82
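
    The trace dies right after the Kerberos/KDC failures and the group-exchange request, and the client is a Centrify build, so it may be worth ruling those out one at a time (a sketch; option support depends on the build):

        ssh -o GSSAPIAuthentication=no \
            -o KexAlgorithms=diffie-hellman-group14-sha1 \
            -v [email protected]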

  • Achieving/maxing out 1Gbps connectivity on an nginx server

    - by Ansell Cruz
    This page (http://www.remsys.com/nginx-on-1gbps) claims that it can max out a 1Gbps line using a JBOD setup and no RAID. Currently I'm on a 1Gbps dedicated port and on RAID 10 (4x2TB, the disks that come with 100tb.com servers), and I'm only peaking at 500-550Mbps; wa% shows something around 20% all the time. I'm looking into maxing out this 1Gbps port because I have an unmetered service from them. Do you guys think the setup on the page I referenced would be better than my current RAID setup? Do you guys have any other suggestions on how to max out the performance of this server? Thanks in advance.
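
    With wa% at 20% the bottleneck looks like disk reads rather than the NIC, so before rebuilding the array it may be worth checking nginx's large-file I/O settings. A hedged nginx.conf sketch (directive availability depends on the nginx version and build flags):

        sendfile        on;
        tcp_nopush      on;
        output_buffers  2 512k;
        # on builds with AIO support, for files too large for the page cache:
        # aio       on;
        # directio  4m;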

  • Give root password for maintenance

    - by Jevgeni Smirnov
    After entering shutdown now in a terminal, everything runs normally and then:

        All processes ended within 2 seconds...done
        INIT: Going single user
        INIT: Sending processes the TERM signal
        INIT: Sending processes the KILL signal
        Give root password for maintenance (or ...

    If I press Ctrl+D, it shows me the Debian login screen. Shutdown through the GUI works properly.

    UPDATE 1: It seems some process hangs; moreover, I've managed to power off the server after several retries. Recently I've installed only ntp and ntpdate, nothing more. I suppose one of them might be conflicting with iptables.
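
    Two notes on this (both standard sysvinit behaviour). First, a bare shutdown now switches to runlevel 1, i.e. single-user mode, which is exactly what "INIT: Going single user" and the maintenance prompt indicate; powering off needs -h. Second, Debian's init scripts can be made to name themselves as they run, which helps identify a script that hangs:

        # power off instead of dropping to single-user mode
        shutdown -h now

        # /etc/default/rcS -- print each init script as it runs during boot/shutdown
        VERBOSE=yes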

  • How can I configure Cyrus IMAP to submit a default realm to SASL?

    - by piwi
    I have configured Postfix to work with SASL using plain text, where the former automatically submits a default realm to the latter when requesting authentication. Assuming the domain name is example.com and the user is foo, here is how I configured it on my Debian system so far. In the Postfix configuration file /etc/postfix/main.cf:

        smtpd_sasl_auth_enable = yes
        smtpd_sasl_local_domain = $mydomain

    The SMTP configuration file /etc/postfix/smtpd.conf contains:

        pwcheck_method: saslauthd
        mech_list: PLAIN

    The SASL daemon is configured with the sasldb mechanism in /etc/default/saslauthd:

        MECHANISMS="sasldb"

    The SASL database file contains a single user, shown by sasldblistusers2:

        [email protected]: userPassword

    The authentication works well without having to provide a realm, as Postfix does that for me. However, I cannot find out how to tell the Cyrus IMAP daemon to do the same. I created a user cyrus in my SASL database, which uses the realm of the host domain name, not example.com, for administrative purposes. I used this account to create a mailbox through cyradm for the user foo:

        cm user.foo

    IMAP is configured in /etc/imapd.conf this way:

        allowplaintext: yes
        sasl_minimum_layer: 0
        sasl_pwcheck_method: saslauthd
        sasl_mech_list: PLAIN
        servername: mail.example.com

    If I enable cross-realm authentication (loginrealms: example.com), authenticating with imtest works with these options:

        imtest -m login -a [email protected] localhost

    However, I would like to be able to authenticate without having to specify the realm, like this:

        imtest -m login -a foo localhost

    I thought that using virtdomains (set either to userid or on) together with defaultdomain: example.com would do just that, but I cannot get it to work. I always end up with this error:

        cyrus/imap[11012]: badlogin: localhost [127.0.0.1] plaintext foo SASL(-13): authentication failure: checkpass failed

    What I understand is that cyrus-imapd never tries to submit the realm when trying to authenticate the user foo. My question: how can I tell cyrus-imapd to send the domain name as the realm automatically? Thanks for your insights!
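
    One avenue worth trying (a sketch, not verified against this setup): saslauthd itself can append a realm to bare login names when started with -r, so a login of foo could be checked as foo@example.com without Cyrus having to send the realm:

        # /etc/default/saslauthd -- sketch; -r combines realm and login name
        MECHANISMS="sasldb"
        OPTIONS="-c -r -m /var/run/saslauthd"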

  • What happens to running processes when I lose a remote connection to a *nix box?

    - by David Marble
    I occasionally lose my remote SSH connection to my VPS. I use screen for long-running processes, but I'm wondering what happens to the processes I had running outside of a screen session when I lose the connection to the box. When I re-establish a connection, what has happened to the bash and sshd processes that were running when I lost it? Today I lost the connection repeatedly and noticed many more bash and sshd processes than usual. If there are processes hanging around, do I need to kill them? How could I determine which processes were abandoned by my previous sessions? Thanks for any replies!
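
    Processes whose controlling session goes away are re-parented to init, so leftovers from dropped sessions can be spotted by their parent PID (a sketch; column positions assume a stock procps ps):

        # your processes now parented by init (PPID 1) -- likely orphans
        ps -o pid,ppid,stat,tty,cmd -u "$USER" | awk 'NR == 1 || $2 == 1'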

  • Is there a server distro with the capability of syncing live data to multiple machines?

    - by Adam Hart
    Scenario: I have a main server that is used for page building and storing master data, and it is accessed by a few clients on site. The company also has multiple branches, each with its own server that users connect to locally, but all branches need to work with the same data and have it synchronized across all servers in real (or close to real) time. Is there a way, or a specific server OS, that can sync live data across all of these servers? The servers would also need to be able to:

        Configure AFP, FTP, CIFS, SMB
        Continue to host their web server and database server in a Microsoft environment, but move the file server off to commodity hardware

    Just wondering if this is even possible.

  • openSSL tutorial not fully working - Can sign but cannot restore original file

    - by djechelon
    I'm writing, and testing, a little tutorial for my groupmates involved in an openSSL homework. We have a bunch of PDF files, I'm the CA and each one should send me a signed PDF for me to verify. I've told them to do the following (and tried to do it myself):

    Request and obtain a certificate (I'll skip this part).

    Create a MIME message with the PDF file in it:

        makemime -c "text/pdf" -a "Content-Disposition: attachment; filename=Elaborato.pdf" Elaborato.pdf > Elaborato.pdf.msg

    Sign with openSSL:

        openssl smime -sign -in Elaborato.pdf.msg -out Elaborato.pdf.p7m -certfile ca.pem -certfile nomegruppo.crt -inkey nomegruppo.key -signer nomegruppo.crt

    Verify with:

        openssl smime -verify -in Elaborato.pdf.p7m -out Elaborato-verified.msg -CAfile ca.pem -signer nomegruppo.crt

    Extract the attachment with:

        munpack Elaborato-verified.msg

    View with Acrobat Reader.

    The problem is that even if I get a file that (from its binary content) resembles a PDF file, my current Ubuntu PDF viewer doesn't read it. The XXXElaborato.pdf extracted by munpack is a little bit smaller than the original. What's the problem with this procedure? In theory, they should send me the signed S/MIME message and I should be able to read the PDF within it. Why can't I restore the original content of the PDF file?
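
    A likely culprit (hedged, not verified): the PDF is binary, but text/pdf with the default transfer encoding leaves it exposed to line-ending canonicalization during the S/MIME round trip, which both corrupts the file and makes it slightly smaller. Base64-encoding the attachment should make the round trip lossless, assuming this version of makemime supports -e:

        makemime -c "application/pdf" -e base64 \
            -a "Content-Disposition: attachment; filename=Elaborato.pdf" \
            Elaborato.pdf > Elaborato.pdf.msg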

  • Grant HTTP access based on unix user group

    - by Sander Marechal
    Is it possible to grant network access or HTTP access based on a user's group? At my company we want to set up an internal composer server using Satis to manage packages for the projects we write (e.g. on repository.mycompany.com), with the packages themselves in our SVN server (svn.mycompany.com). We have several webservers with many different users on them. Some users should be able to reach the composer and SVN server. Some should not. Users that should be able to reach these servers all belong to the same group. How can I set up Apache on the Composer and SVN server to only grant access to those users in that group? Alternatively, can I set up the webservers in such a way that only users from that group are able to make a connection to our Composer and SVN servers? The best thing we have come up with so far is using SSL client certificates. We simply place a client certificate on all servers which can be used to access Composer and SVN. Only the right usergroup will have read access to the certificate. A bit clunky but it may work. But I'm looking for something better.
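
    A hedged sketch of one way to do this on the Apache side, assuming the third-party mod_authnz_external module with its pwauth and unixgroup helpers is available (module availability, helper paths and the group name devteam are all assumptions here):

        DefineExternalAuth  pwauth    pipe /usr/sbin/pwauth
        DefineExternalGroup unixgroup pipe /usr/bin/unixgroup
        <Location /packages>
            AuthType Basic
            AuthName "Composer packages"
            AuthBasicProvider external
            AuthExternal pwauth
            GroupExternal unixgroup
            Require external-group devteam
        </Location>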

  • *nix OS that is easy to update to latest software

    - by rjstelling
    I need to configure a server (*nix) that runs our (bespoke) CMS and applications. In the past I have defaulted to using CentOS 5, but I find it outdated and difficult to upgrade to the software versions we require. For example, we need PHP 5.3, but CentOS 5 ships 5.2. Updating is possible but breaks something else (normally MySQL support in PHP). Eventually it gets to a situation where I can't upgrade because of missing dependencies and incompatible versions:

        Error: Missing Dependency: httpd = 2.2.3-43.el5.centos.3 is needed by package httpd-devel-2.2.3-43.el5.centos.3.i386 (updates)

    Is there a better alternative OS for hassle-free updates? I need:

        Apache 2.2.17 (the development version, for apxs)
        MySQL 5.5.8
        PHP 5.3.5
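
    For the PHP part specifically, CentOS 5.6 and later ship parallel php53 packages in the base repository, which sidesteps the third-party-repo dependency tangle (a sketch; the exact package set depends on what is installed):

        yum remove php php-cli php-common
        yum install php53 php53-cli php53-devel php53-mysql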

  • Xinetd, vncserver memory requirements

    - by JP19
    Hi, I am installing the following on a low-memory system: vnc4server, xinetd, xterm, openbox, obconf. I will only occasionally be logging into the VNC sessions for some admin work. My questions are:

    1) Does xinetd take memory/CPU even when vncserver is not running? If so, can I "run" xinetd on demand (and how)? And if not, any idea how much memory it will take when vncserver is not running?

    2) Does vncserver take substantial memory when no clients are connected?

    3) Do openbox/obconf take memory when vncserver is running but no clients are connected?

    4) Do openbox/obconf take memory when no vncserver is running?

    thanks JP
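
    Rather than estimating, the resident sizes can be read directly once everything is installed (a sketch; the VNC server process is typically named Xvnc or Xvnc4 depending on the package):

        # resident and virtual sizes, in KB, of the daemons in question
        ps -o rss,vsz,cmd -C xinetd,Xvnc4,openbox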

  • Sendmail Tuning For Batch Mail Jobs

    - by Kyle Brandt
    I have webservers that send out emails to a sendmail relay server as a batch job. The emails need to be accepted by the relay sendmail server as fast as possible; however, they do not need to go out (be relayed) very quickly. I am seeing occasional timeouts from the webservers trying to connect to the relay server. The load is currently about 30 emails a second for a couple of minutes.

    There are quite a few tuning options in the sendmail tuning guide. What I am focusing on now is the Delivery Mode:

    "There are a number of delivery modes that sendmail can operate in, set by the DeliveryMode (d) configuration option. These modes specify how quickly mail will be delivered. Legal modes are:

        i   deliver interactively (synchronously)
        b   deliver in background (asynchronously)
        q   queue only (don't deliver)
        d   defer delivery attempts (don't deliver)

    There are tradeoffs. Mode i gives the sender the quickest feedback, but may slow down some mailers and is hardly ever necessary. Mode b delivers promptly but can cause large numbers of processes if you have a mailer that takes a long time to deliver a message. Mode q minimizes the load on your machine, but means that delivery may be delayed for up to the queue interval. Mode d is identical to mode q except that it also prevents lookups in maps including the -D flag from working during the initial queue phase; it is intended for ``dial on demand'' sites where DNS lookups might cost real money. Some simple error messages (e.g., host unknown during the SMTP protocol) will be delayed using this mode. Mode b is the usual default. If you run in mode q (queue only), d (defer), or b (deliver in background) sendmail will not expand aliases and follow .forward files upon initial receipt of the mail. This speeds up the response to RCPT commands. Mode i should not be used by the SMTP server."

    I currently have the CentOS default modes:

        sendmail.cf: DeliveryMode=background
        submit.cf:   DeliveryMode=i

    Is sendmail.cf/mc for outgoing email from the relay (to the intertubes) and submit.cf/mc for incoming email (from my webservers)? Would it make sense to change the outgoing delivery mode to queue? If I did, how would the outbound email flow behave? If this is the right thing to do, can anyone show me example mc configurations for this change? If it isn't, what recommendations are there for these constraints?
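
    A sketch of what queue-only mode would look like on the relay's mc side (rebuild sendmail.cf and restart afterwards). The SMTP session then returns as soon as the message is safely queued, and outbound delivery happens in batches at each queue-runner interval rather than immediately:

        dnl in sendmail.mc -- a sketch, not a tested configuration
        define(`confDELIVERY_MODE', `q')dnl

        dnl on CentOS the queue-runner interval comes from /etc/sysconfig/sendmail,
        dnl e.g. QUEUE=5m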

  • How to efficiently find out how many files a process has open

    - by Victor Lin
    I know I can use lsof -p xxxx | wc -l to count the open files of a process, and it works, but it is just too inefficient. I have some server processes with so many open socket files that the wc -l method never returns. So, what is an efficient way to find out how many files a process has open? Thanks.
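
    On Linux the count is available straight from /proc and is near-instant no matter how many sockets are open (12345 below is a placeholder PID):

        ls /proc/12345/fd | wc -l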

  • File system concepts (df command)

    - by mkab
    I'm finding it difficult to understand some things about the df command. Suppose I type df and I have the following output:

        Filesystem   1k-blocks    Used         Avail    Capacity    Mounted on
        /dev/da0s1   some number  some number  number   percentage  /win
        /dev/da0s2   some number  some number  number   percentage  /win/home
        /dev/da0s3a  some number  some number  number   percentage  /
        devfs        some number  some number  number   percentage  /dev
        /dev/da0s3g  some number  some number  number   percentage  /local
        /dev/da0s3h  some number  some number  -number  102%        /reste
        /dev/da0s3d  some number  some number  number   percentage  /tmp
        /dev/da1s3f  some number  some number  number   percentage  /usr
        /dev/da1s3e  some number  some number  number   percentage  /var
        /dev/da1s1a  some number  some number  number   percentage  /public

    Are the answers to the following questions correct?

        How many physical drives do I have? Ans: 2, da0s1 and da1s1.
        How many physical partitions on each disk? Ans: 8 for da0s1 and 1 for da1s1.
        How many BSD partitions on each physical partition? Ans: Impossible to determine; we have to use -T to determine the type.
        How is it possible for the file system /dev/da0s3h to be filled to 102%? And where is this overflowed data written? Ans: I have no idea for this one.

    Thanks.

  • how do you install php-devel

    - by user962449
    I keep getting dependency issues when I try to run yum install php-devel:

        yum install --skip-broken php-devel
        ....
        --> Finished Dependency Resolution
        php-5.1.6-32.el5.i386 from base has depsolving problems
        --> Missing Dependency: php-common = 5.1.6-32.el5 is needed by package php-5.1.6-32.el5.i386 (base)
        php-cli-5.1.6-32.el5.i386 from base has depsolving problems
        --> Missing Dependency: php-common = 5.1.6-32.el5 is needed by package php-cli-5.1.6-32.el5.i386 (base)
        --> Running transaction check
        ---> Package php.i386 0:5.1.6-32.el5 set to be updated
        --> Processing Dependency: php = 5.1.6-32.el5 for package: php-devel
        ---> Package php-cli.i386 0:5.1.6-32.el5 set to be updated
        --> Finished Dependency Resolution
        php-devel-5.1.6-32.el5.i386 from base has depsolving problems
        --> Missing Dependency: php = 5.1.6-32.el5 is needed by package php-devel-5.1.6-32.el5.i386 (base)
        Packages skipped because of dependency problems:
            autoconf-2.59-12.noarch from base
            automake-1.9.6-2.3.el5.noarch from base
            imake-1.0.2-3.i386 from base
            php-5.1.6-32.el5.i386 from base
            php-cli-5.1.6-32.el5.i386 from base
            php-devel-5.1.6-32.el5.i386 from base

    Any ideas?
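
    All the failures point at php-common not matching 5.1.6-32.el5, which usually means the installed copy came from a different repository or was updated past base. A quick check with standard rpm/yum:

        rpm -q php-common                # exact installed version
        yum list installed 'php*'        # and which repo each package came from

    If php-common turns out to be newer or from another repo, install php-devel from that same repo or downgrade to the matching base versions.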

  • Screen refresh rate in Ubuntu

    - by user24224
    Hello all, I am having problems with the refresh rate of the screen. In the monitor's refresh-mode options I have only one choice: 60Hz. I have an LG 24" monitor and an ATI Radeon 3870, and I have already installed the ATI driver via the Ubuntu download centre. Any idea how I solve this one? Thanks.
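
    If the driver only ever offers 60Hz, a mode can sometimes be added by hand with cvt and xrandr (a sketch: the modeline numbers below are illustrative cvt output, and the output name DVI-0 must be replaced with whatever xrandr -q reports):

        cvt 1920 1200 75
        xrandr --newmode "1920x1200_75" 245.25 1920 2064 2264 2608 1200 1203 1209 1255 -hsync +vsync
        xrandr --addmode DVI-0 "1920x1200_75"
        xrandr --output DVI-0 --mode "1920x1200_75"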

  • Using NOPASSWD for specific commands in sudoers file, PASSWD for all others

    - by jberryman
    I would like to configure sudo such that users can run some specific commands without entering a password (for convenience) and can run all other commands by entering a password. This is what I have, but it does not work; a password is always required:

        Defaults        env_reset
        Defaults        timestamp_timeout = 1

        root    ALL=(ALL:ALL) ALL

        # Allow members of group sudo to execute any command
        %sudo   ALL=(ALL:ALL) NOPASSWD: /usr/sbin/pm-suspend, /usr/bin/apt-get, PASSWD: ALL

        #includedir /etc/sudoers.d

    Note that this is a Debian system, which uses the add-users-to-the-"sudo"-group method. Thanks.
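
    sudo applies the last matching entry, so the trailing PASSWD: ALL in the rule above overrides the NOPASSWD list. Splitting it into two rules, the general one first and the exceptions last, should behave as intended (a sketch of the same policy):

        %sudo   ALL=(ALL:ALL) ALL
        %sudo   ALL=(ALL:ALL) NOPASSWD: /usr/sbin/pm-suspend, /usr/bin/apt-get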

  • Linux authentication via ADS -- allowing only specific groups in PAM

    - by Kenaniah
    I'm taking the samba / winbind / PAM route to authenticate users on our Linux servers against our Active Directory domain. Everything works, but I want to limit which AD groups are allowed to authenticate. Winbind / PAM currently allows any enabled user account in the Active Directory, and pam_winbind.so doesn't seem to heed the require_membership_of=MYDOMAIN\\mygroup parameter; it doesn't matter if I set it in /etc/pam.d/system-auth or /etc/security/pam_winbind.conf. How can I force winbind to honor the require_membership_of setting? Using CentOS 5.5 with up-to-date packages.

    Update: it turns out that PAM always allows root to pass through auth, by virtue of the fact that it's root. So as long as the account exists, root will pass auth; any other account is subjected to the auth constraints.

    Update 2: require_membership_of seems to be working, except when the requesting user has the root uid. In that case the login succeeds regardless of the require_membership_of setting; this is not an issue for any other account. How can I configure PAM to force the require_membership_of check even when the current user is root? The current PAM config is below:

        auth        sufficient    pam_winbind.so
        auth        sufficient    pam_unix.so nullok try_first_pass
        auth        requisite     pam_succeed_if.so uid >= 500 quiet
        auth        required      pam_deny.so

        account     sufficient    pam_winbind.so
        account     sufficient    pam_localuser.so
        account     required      pam_unix.so broken_shadow

        password    ..... (excluded for brevity)

        session     required      pam_winbind.so
        session     required      pam_mkhomedir.so skel=/etc/skel umask=0077
        session     required      pam_limits.so
        session     required      pam_unix.so

    require_membership_of is currently set in the /etc/security/pam_winbind.conf file, and is working (except for the root case outlined above).
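
    A hedged alternative is to enforce the group check inside PAM itself, ahead of the sufficient modules, so it applies regardless of which module ends up authenticating (mygroup is a placeholder and must be resolvable through winbind/nsswitch):

        auth        requisite     pam_succeed_if.so user ingroup mygroup quiet
        auth        sufficient    pam_winbind.so
        auth        sufficient    pam_unix.so nullok try_first_pass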
