Search Results

Search found 17187 results on 688 pages for 'give me chicken'.


  • Avoiding DNS timeouts when a dns server fails

    - by user65124
    Hi there. We have a small datacenter with about a hundred hosts pointing to 3 internal DNS servers (BIND 9). Our problem comes when one of the internal DNS servers becomes unavailable: at that point all the clients that point to that server start performing very slowly. The problem seems to be that the stock Linux resolver doesn't really have the concept of "failing over" to a different DNS server. You can adjust the timeout and number of retries it uses (and set rotate so it will work through the list), but no matter what settings one uses, our services perform much more slowly if a primary DNS server becomes unavailable. At the moment this is one of the largest sources of service disruptions for us.

    My ideal answer would be something like "RTFM: tweak /etc/resolv.conf like this...", but if that's an option I haven't seen it. I was wondering how other folks handled this issue? I can see 3 possible types of solutions:

    - Use linux-ha/Pacemaker and failover IPs (so the DNS VIPs are "always" available). Alas, we don't have a good fencing infrastructure, and without fencing Pacemaker doesn't work very well (in my experience Pacemaker lowers availability without fencing).
    - Run a local DNS server on each node, and have resolv.conf point to localhost. This would work, but it would give us a lot more services to monitor and manage.
    - Run a local cache on each node. Folks seem to consider nscd "broken", but dnrd seems to have the right feature set: it marks DNS servers as up or down, and won't use 'down' servers.

    Anycast seems to work only at the IP routing level, and depends on route updates for server failure. Multicast seemed like it would be a perfect answer, but BIND does not support broadcast or multicast, and the docs I could find suggest that multicast DNS is aimed more at service discovery and auto-configuration than at regular DNS resolving. Am I missing an obvious solution?
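
    As a rough, hedged illustration of the resolver tuning mentioned above (the nameserver addresses are placeholders), a resolv.conf along these lines caps each query at one second per server and rotates through the list; it softens the pain but still adds latency whenever the first server tried happens to be the dead one:

        options timeout:1 attempts:2 rotate
        nameserver 10.0.0.11
        nameserver 10.0.0.12
        nameserver 10.0.0.13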

    Read the article

  • Coldfusion on VPS, how much JVM heap memory?

    - by Steven Filipowicz
    Recently I got a VPS server and I'm running ColdFusion. The website was running fine until it got more and more traffic and I started to encounter 'OutOfMemory' exceptions. I thought I could simply raise the memory of the VPS server, but this didn't help. After doing some Google searches I found a setting in the CF Admin to set the JVM heap memory. It was on the default: Max Heap Size 512MB, and Min Heap Size was empty. After playing around a bit I have now set it to Min 50MB and Max 200MB; the good thing is that I'm not getting the 'OutOfMemory' exceptions anymore. So far so good!

    But with about 50 active visitors on the website, the website starts to get slow. The CPU usage is only about 8% (Windows Task Manager), and the Task Manager also shows only about 30% of the 3GB RAM in use. So I'm thinking that my values could be tweaked to use more of the RAM. Honestly I don't understand these JVM heap settings, so I have no clue what a good setting is for me.

    I found a CF script that displays the memory usage; the details are:

        Heap Memory Usage - Committed          194 MB
        Heap Memory Usage - Initial            50.0 MB
        Heap Memory Usage - Max                194 MB
        Heap Memory Usage - Used               163 MB
        JVM - Free Memory                      31.2 MB
        JVM - Max Memory                       194 MB
        JVM - Total Memory                     194 MB
        JVM - Used Memory                      163 MB
        Memory Pool - Code Cache - Used        13.0 MB
        Memory Pool - PS Eden Space - Used     6.75 MB
        Memory Pool - PS Old Gen - Used        155 MB
        Memory Pool - PS Perm Gen - Used       64.2 MB
        Memory Pool - PS Survivor Space - Used 1.07 MB
        Non-Heap Memory Usage - Committed      77.4 MB
        Non-Heap Memory Usage - Initial        18.3 MB
        Non-Heap Memory Usage - Max            240 MB
        Non-Heap Memory Usage - Used           77.2 MB
        Free Allocated Memory: 30mb
        Total Memory Allocated: 194mb
        Max Memory Available to JVM: 194mb
        % of Free Allocated Memory: 16%
        % of Available Memory Allocated: 100%

    My JVM arguments are:

        -server -Dsun.io.useCanonCaches=false -XX:MaxPermSize=192m -XX:+UseParallelGC
        -Dcoldfusion.rootDir={application.home}/../ -Dcoldfusion.libPath={application.home}/../lib

    Can I give the JVM more memory? If so, what settings should I use? Thanks very much!!
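
    For reference, and only as a hedged sketch (the 512m/1024m figures are illustrative and have to fit inside the VPS's 3GB alongside the OS and the non-heap usage shown above), the CF Administrator's Min/Max Heap Size fields translate into -Xms/-Xmx in jvm.config, so a roomier configuration might look like:

        java.args=-server -Xms512m -Xmx1024m -XX:MaxPermSize=192m -XX:+UseParallelGC -Dsun.io.useCanonCaches=false -Dcoldfusion.rootDir={application.home}/../ -Dcoldfusion.libPath={application.home}/../lib

    The stats above show the old generation (PS Old Gen) nearly full at 155 MB of a 194 MB heap, which is consistent with the slowdown being garbage-collection pressure rather than CPU.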

    Read the article

  • Where can I ask questions that aren't IT questions?

    - by Adam Davis
    Thank you for your confidence in our abilities. But have you read the FAQ?

    We get a lot of interesting technical questions on here, but Server Fault is meant to be first and foremost a resource for system administrators and IT professionals. In other words, Server Fault isn't meant to cater to every computer problem, just those that only exist in or can best be solved in the IT and sysadmin domain. Yes, someone here might be able to help you, but you'll find that other forums more focused on your topic can give you a much better answer than a bunch of IT professionals. It's likely that your question will be downvoted, closed, and in some cases marked "offensive." It's not that we hate you, it's just that we like to keep our corn pops separate from our cocoa puffs.

    In that vein, here are a bunch of other forums where you can get help. (This list contains the top two forums for each category as voted for below.)

    - Programming: Stackoverflow
    - Consumer-level computer hardware: Ars Technica OpenForum, Tom's Hardware Forums
    - Computer software: Anandtech Forum
    - Web design/hosting/CMS: SitePoint Forums, Web Hosting Talk
    - Math, science, engineering: Physics Forums, PlanetMath
    - Other/general: Ask Metafilter, Google Search, Mahalo Answers

    Check out the answers below for even more suggestions! What forums can people go to to ask the questions that are off topic here? Please list only ONE forum per answer so votes can bring the best forums to the top. Return to FAQ Index

    Read the article

  • Authenticating Windows 7 against MIT Kerberos 5

    - by tommed
    Hi there, I've been wracking my brains trying to get Windows 7 authenticating against an MIT Kerberos 5 realm (which is running on an Arch Linux server).

    I've done the following on the server (aka dc1):

    - Installed and configured an NTP time server
    - Installed and configured DHCP and DNS (set up for the domain tnet.loc)
    - Installed Kerberos from source
    - Set up the database
    - Configured the keytab
    - Set up the ACL file with: *@TNET.LOC *
    - Added a policy for my user and my machine:

        addpol users
        addpol admin
        addpol hosts
        ank -policy users [email protected]
        ank -policy admin tom/[email protected]
        ank -policy hosts host/wdesk3.tnet.loc -pw MYPASSWORDHERE

    I then did the following on the Windows 7 client (aka wdesk3):

    - Made sure the IP address was supplied by my DHCP server and that dc1.tnet.loc pings OK
    - Set the Internet time server to my Linux server (aka dc1.tnet.loc)
    - Used ksetup to configure the realm:

        ksetup /SetRealm TNET.LOC
        ksetup /AddKdc dc1.tnet.loc
        ksetip /SetComputerPassword MYPASSWORDHERE
        ksetip /MapUser * *

    - After some googling I found that DES encryption is disabled by Windows 7 by default, so I turned on the policy to support DES encryption over Kerberos
    - Then I rebooted the Windows client

    However, after doing all that I still cannot log in from my Windows client. :( Looking at the logs on the server, the request looks fine and everything works great; I think the issue is that the response from the KDC is not recognized by the Windows client, and a generic login error appears: "Login Failure: User name or password is invalid". The log file for the server looks like this (I tail'ed this so I know it's happening when the Windows machine attempts the login): Screen-shot: http://dl.dropbox.com/u/577250/email/login_attempt.png

    If I supply an invalid realm in the login window I get a completely different error message, so I don't think it's a connection problem from the client to the server? But I can't find any error logs on the Windows machine? (Anyone know where these are?)

    If I try:

        runas /netonly /user:[email protected] cmd.exe

    everything works (although I don't get anything appearing in the server logs, so I'm wondering if it's not touching the server for this??), but if I run:

        runas /user:[email protected] cmd.exe

    I get the same authentication error.

    Any Kerberos gurus out there who can give me some ideas as to what to try next? Pretty please?
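
    Not from the post, but as a hedged first check on the client side: Windows 7 ships klist.exe, so an elevated command prompt on wdesk3 can show whether a TGT from TNET.LOC is ever obtained (and klist purge clears the cache before retrying a logon test), which helps separate "the KDC never issued a ticket" from "Windows rejected the reply":

        klist
        klist tgt
        klist purge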

    Read the article

  • Can't get my OpenVpn client to connect

    - by Larry
    Hi Guys, I am trying to set up a test VPN between my home desktop and my laptop. So far I have got the server on the desktop to connect fine, but I can not get my laptop to finish the connection. I have tried several different configurations and they all give me the same result. Obviously it has nothing to do with my client configuration, but possibly something on my laptop? Here is the message I get in the log when it stops, then times out and restarts:

        Mon Oct 18 20:10:55 2010 UDPv4 link local: [undef]
        Mon Oct 18 20:10:55 2010 UDPv4 link remote: 74.190.29.236:1194
        Mon Oct 18 20:11:55 2010 TLS Error: TLS key negotiation failed to occur within 60 seconds (check your network connectivity)
        Mon Oct 18 20:11:55 2010 TLS Error: TLS handshake failed

    Here are my configurations.

    server.ovpn:

        port 1194
        proto udp
        dev tun
        ca ca.crt
        cert server.crt
        key server.key
        dh dh1024.pem
        server 10.8.0.1 255.255.255.252
        ifconfig-pool-persist ipp.txt
        push "route 10.0.0.1 255.255.255.0"
        push "dhcp-option WINS 10.0.0.5"
        push "dhcp-option DNS 10.0.0.5"
        push "dhcp-option DOMAIN acme.com.local"
        keepalive 10 120
        comp-lzo
        max-clients 1
        persist-key
        persist-tun
        status openvpn-status.log
        verb 3

    LArry.ovpn:

        client
        proto udp
        dev tun
        remote doublel.hopto.org 1194
        resolv-retry infinite
        nobind
        persist-key
        persist-tun
        ca ca.crt
        cert client1.crt
        key client1.key
        comp-lzo
        verb 3
        dev tun
        local 206.162.148.9
        remote 134.28.54.2
        ifconfig 192.168.99.1 192.168.99.2
        route 10.0.0.0 255.0.0.0 192.168.99.2

    I just need a simple VPN for one user. Am I headed down the right path? Thanks, Larry
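
    A hedged aside, not from the post: that "TLS key negotiation failed to occur within 60 seconds" message almost always means the client's UDP packets never reach the OpenVPN process, so before touching the configs it is worth confirming the server is listening and that port 1194/udp is actually forwarded through the home router. Assuming the desktop is a Linux box (on Windows, netstat -an and a packet capture would be the equivalent):

        # is OpenVPN listening on udp/1194?
        sudo netstat -ulnp | grep 1194
        # do the laptop's handshake packets ever arrive? (watch while the client retries)
        sudo tcpdump -ni any udp port 1194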

    Read the article

  • "No more threads can be created in the system" in Network and Sharing Center

    - by Zell Faze
    A while back I noticed on one of our laboratory computers (Windows 7, very little extra software installed) that the network connection icon in the system tray would claim that it had no network connection, even though it did. This issue would go away after the computer was rebooted, but would surface again the next time I looked at the computer (a few days later). Upon opening the Network and Sharing Center I am shown an actual error message, but not one that seems to give me a lot of information about what the problem is. In the place of the usual information about network adapters and whether you are connected to the Internet it simply says: "No more threads can be created in the system." The Event Viewer shows hundreds of events from different services also with the same message. "Volume Shadow Copy Service error: Unexpected error calling routine CoCreateInstance. hr = 0x800700a4, No more threads can be created in the system."; "The WinHTTP Web Proxy Auto-Discovery Service service failed to start due to the following error: A thread could not be created for the service."; "The IP Helper service terminated with the following error: No more threads can be created in the system." As far as I can tell, this message seems to mean that there is some sort of resource leak in Windows where something is creating a large number of threads and those threads are not being killed off? I've tried restarting WMI and several services related to networking, without avail. Can anyone provide more information on what "No more threads can be created in the system" might mean and what I might be able to do to fix the issue? Currently the only solution appears to be restarting.
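
    Not from the original post, but a hedged way to see which process is responsible: that error often means some process or service has leaked threads until a per-session resource (such as desktop heap) ran out, and per-process thread counts can be dumped from a command prompt and re-run over time to see which one keeps growing:

        wmic process get Name,ProcessId,ThreadCount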

    Read the article

  • Remote RIB iLO on Proliant via RIBCL

    - by Wudang
    I'm trying to automate a process for our ops team. The process requires that some Windows servers running on blades are shut down, left down for a few hours, then restarted when some other processes complete. This is currently done by an op logging on to each blade's iLO web interface to stop and start them. I've been trying to automate this with HP's cpqlocfg program, with partial success. I can issue the GET_POWER, GET_USER_INFO, etc. commands, but SET_HOST_POWER fails in a specific way: using the cpqlocfg GET_EVENTLOG command I can see the XML login event and the power command being issued from the iLO interface, but then nothing happens. Some hints from googling suggest ACPI isn't configured properly, but I can't find anything on how to verify this. Am I even using the right command? There are also a few other options like PRESS_PWR_BUTTON etc. The problem is I have nowhere to test this; all I can do at the moment is give a script to ops and ask them to try it at 4am on a Sunday when they run the procedure. The shutdown is trivial since I can use the Windows "shutdown" command; it's the power-on that I need help with. Anyone done this? I'd tag this "rib ribcl ilo" but lack the rep points, sorry.
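
    Purely as a hedged sketch (worth checking against HP's RIBCL scripting guide for the iLO firmware in use), the power-on payload cpqlocfg sends usually looks like the XML below; SET_HOST_POWER behaves like a graceful soft power request, whereas PRESS_PWR_BTN / HOLD_PWR_BTN emulate a short or held button press, which is sometimes what's needed when the OS/ACPI side ignores the soft request:

        <RIBCL VERSION="2.0">
          <LOGIN USER_LOGIN="adminname" PASSWORD="password">
            <SERVER_INFO MODE="write">
              <SET_HOST_POWER HOST_POWER="Yes"/>
            </SERVER_INFO>
          </LOGIN>
        </RIBCL>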

    Read the article

  • User cannot access a system DSN on Windows Server 2008

    - by Ra Osolage
    We run our SQL Server services using a low privileged domain account. That account is NOT a local admin on the OS. Only access I give the user account is assigned during install of SQL: full control over its mount points and then everything else is granted by the SQL Server 2005/2008 installer. I need to create a linked server in SQL Server 2008 to an ODBC data source. So I remoted into the computer using my domain account, which is part of a group that DOES have local admin privs to the OS. I created a system DSN and configured it to connect to another SQL Server. The DSN works perfectly when I test it. However, when I try to create the linked server, I get an error. It appears to me that the DSN is invisible to the domain account that SQL Server is running as. It seems that this problem is only happening to me on Windows 2008 servers. Does anybody know whether there's anything that you need to do after creating a DSN to make it visible for other users to access?
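
    Not from the post, but one hedged thing worth ruling out on Windows Server 2008 x64: there are two separate DSN stores, and a System DSN created in the 32-bit ODBC Administrator is invisible to a 64-bit SQL Server service (and vice versa), which looks exactly like "works when I test it, missing when the service looks for it":

        C:\Windows\System32\odbcad32.exe   -> 64-bit DSNs (HKLM\SOFTWARE\ODBC\ODBC.INI)
        C:\Windows\SysWOW64\odbcad32.exe   -> 32-bit DSNs (HKLM\SOFTWARE\Wow6432Node\ODBC\ODBC.INI)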

    Read the article

  • Incoming traceroute blocked by ufw

    - by Tobias Timpe
    One of my Proxmox VMs, running Ubuntu 13.04, won't accept incoming traceroutes while ufw is enabled. What command do I give ufw to allow incoming traceroute(6)s? The following shows up in the syslog with ufw enabled:

        50:15:15:aa:ae:8d:7d:e4:7a:97:08:00 SRC=79.236.233.97 DST=78.46.101.252 LEN=52 TOS=0x00 PREC=0x00 TTL=1 ID=33400 PROTO=UDP SPT=63757 DPT=33466 LEN=32
        Nov 4 16:20:36 web kernel: [8078158.260409] [UFW BLOCK] IN=eth0 OUT= MAC=00:50:56:15:15:aa:ae:8d:7d:e4:7a:97:08:00 SRC=79.236.233.97 DST=78.46.101.252 LEN=52 TOS=0x00 PREC=0x00 TTL=1 ID=33401 PROTO=UDP SPT=63757 DPT=33467 LEN=32
        Nov 4 16:20:41 web kernel: [8078163.262626] [UFW BLOCK] IN=eth0 OUT= MAC=00:50:56:15:15:aa:ae:8d:7d:e4:7a:97:08:00 SRC=79.236.233.97 DST=78.46.101.252 LEN=52 TOS=0x00 PREC=0x00 TTL=2 ID=33402 PROTO=UDP SPT=63757 DPT=33468 LEN=32
        Nov 4 16:20:46 web kernel: [8078168.262927] [UFW BLOCK] IN=eth0 OUT= MAC=00:50:56:15:15:aa:ae:8d:7d:e4:7a:97:08:00 SRC=79.236.233.97 DST=78.46.101.252 LEN=52 TOS=0x00 PREC=0x00 TTL=2 ID=33403 PROTO=UDP SPT=63757 DPT=33469 LEN=32
        Nov 4 16:20:51 web kernel: [8078173.260521] [UFW BLOCK] IN=eth0 OUT= MAC=00:50:56:15:15:aa:ae:8d:7d:e4:7a:97:08:00 SRC=79.236.233.97 DST=78.46.101.252 LEN=52 TOS=0x00 PREC=0x00 TTL=2 ID=33404 PROTO=UDP SPT=63757 DPT=33470 LEN=32

    And the traceroute itself just ends in stars after the Proxmox host machine. Thanks, Tobias Timpe
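
    A hedged sketch of the kind of rule that would let those probes through: the blocked packets above are classic UDP traceroute probes (destination ports climbing from 33434, low TTLs), so opening that range inbound is usually enough; whether the VM should answer traceroutes from anywhere is a separate judgment call:

        sudo ufw allow proto udp from any to any port 33434:33534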

    Read the article

  • Adding an user to samba

    - by JustMaximumPower
    I'm trying to set up some Samba shares in my home network on an Ubuntu 12.04 machine. Everything works fine for my user account (max), but I can not add any new user: every time I try to add a new user, they can not use the shares. It's likely that the error is something very basic to the concept of Samba, but please don't just tell me to read the docs; I've been trying that for about 2 weeks now.

    I've set up the server with my user max, who can mount transfer and the share max. Then I added the user simon with sudo adduser --no-create-home --disabled-login --shell /bin/false simon, because the user should not be able to ssh into the machine. I did a sudo smbpasswd -a simon and set a (Samba) password for simon, and added a share for simon. I also added simon to transferusers to give him access to the share transfer. But simon can't connect to transfer or simons.

    Output of testparm:

        Load smb config files from /etc/samba/smb.conf
        rlimit_max: increasing rlimit_max (1024) to minimum Windows limit (16384)
        Processing section "[printers]"
        Processing section "[print$]"
        Processing section "[max]"
        Processing section "[simons]"
        Processing section "[transfer]"
        Loaded services file OK.
        Server role: ROLE_STANDALONE
        Press enter to see a dump of your service definitions

        [global]
            server string = %h server (Samba, Ubuntu)
            map to guest = Bad User
            obey pam restrictions = Yes
            pam password change = Yes
            passwd program = /usr/bin/passwd %u
            passwd chat = *Enter\snew\s*\spassword:* %n\n *Retype\snew\s*\spassword:* %n\n *password\supdated\ssuccessfully* .
            unix password sync = Yes
            syslog = 0
            log file = /var/log/samba/log.%m
            max log size = 1000
            dns proxy = No
            usershare allow guests = Yes
            panic action = /usr/share/samba/panic-action %d
            idmap config * : backend = tdb

        [printers]
            comment = All Printers
            path = /var/spool/samba
            create mask = 0700
            printable = Yes
            print ok = Yes
            browseable = No

        [print$]
            comment = Printer Drivers
            path = /var/lib/samba/printers

        [max]
            comment = Privater share von Max
            path = /media/Main/max
            read only = No
            create mask = 0700

        [simons]
            comment = Privater share von Simon
            path = /media/Main/simon
            read only = No
            create mask = 0700

        [transfer]
            comment = Transferlaufwerk
            path = /media/Main/transfer
            read only = No
            create mask = 0755

    The files in /media/Main:

        drwxrwxr-x 17 max   max             4096 Oct  4 19:13 max/
        drwx------  5 simon max             4096 Aug  4 15:18 simon/
        drwxrwxr-x  7 max   transferusers 258048 Oct  1 22:55 transfer/
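
    Not from the post, but a hedged way to narrow this down is to test the shares locally as simon, which separates "Samba rejects the login" from "the login works but filesystem permissions block the path":

        smbclient -L localhost -U simon                  # list shares; prompts for the smbpasswd password
        smbclient //localhost/transfer -U simon -c 'ls'  # try to actually open the share
        id simon                                         # confirm simon really is in the transferusers group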

    Read the article

  • RoboCopy fails with "the specified network name is no longer available"

    - by Justin Scott
    We have a scheduled task that runs robocopy periodically to mirror a rather large folder structure from one server to another (thousands of folders, 100,000+ files, 50+ GB in size). There is a share on the receiving server where the mirror gets stored. We're running the task from the origin server connecting out to the share on the receiving end. Both servers run Windows Server 2003 and are connected to the same network switch (100Mbps). The process will sometimes complete all the way through without error. More often than not, however, at some point during the process (seems random as to where), robocopy will fail with the error The specified network name is no longer available. It will wait 30 seconds and try the file again and eventually give up after a number of retries. Process will repeat at the next schedule interval and may complete... or not. When this occurs I am not able to access the share at all on the destination server from anywhere on the network for up to 30 minutes. There is nothing else on the network using this share. My question is what does this error mean specifically? Why is the share "dropping off" and becoming inaccessible? Is there a way to prevent it and get the file mirroring to be more stable?
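
    Not an answer to the underlying share drop, but as a hedged aside, robocopy's own retry behaviour can be tuned so one hiccup doesn't stall the job for long; the defaults are a million retries with a 30-second wait, so flags like the ones below (values illustrative, paths hypothetical) keep a single failed file from blocking the run:

        robocopy \\origin\data \\receiver\mirror /MIR /R:5 /W:15 /LOG+:C:\logs\mirror.log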

    Read the article

  • Why am I getting 403 Forbidden after enabling HTTPS for Apache on Mac OS X?

    - by Daryl Spitzer
    I enabled HTTPS on the Apache server built in to Mac OS X 10.6 (on my MacBook Pro) by uncommenting:

        Include /private/etc/apache2/extra/httpd-ssl.conf

    ...in /etc/apache2/httpd.conf, and modifying /etc/apache2/extra/httpd-ssl.conf to include:

        DocumentRoot "/Users/dspitzer/foo/bar"
        ServerName dot.com:443
        ServerAdmin [email protected]
        ...
        SSLCertificateFile "/private/etc/apache2/siab_cert.pem"
        SSLCertificateKeyFile "/private/etc/apache2/siab_key.pem"

    Then I restart Apache (with sudo apachectl restart) and go to https://localhost/ in Safari, where I get:

        <!DOCTYPE HTML PUBLIC "-//IETF//DTD HTML 2.0//EN">
        <html><head>
        <title>403 Forbidden</title>
        </head><body>
        <h1>Forbidden</h1>
        <p>You don't have permission to access / on this server.</p>
        </body></html>

    I've tried changing 443 in /etc/apache2/extra/httpd-ssl.conf to 8443 and going to https://localhost:8443/, and I get the same error. I read http://serverfault.com/questions/88037/why-am-i-getting-this-403-forbidden-error and confirmed that execute permission is given for all parent directories of the vhost dir: /Users/dspitzer/foo/bar. Is there a log file somewhere that might give me a clue?
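
    Two hedged pointers, neither confirmed by the post: on 10.6, Apache writes to /var/log/apache2/error_log, which should name the exact file or directory it refused; and a 403 from an SSL vhost whose DocumentRoot lives under a home directory is often just a missing <Directory> grant, since the global default denies everything outside the standard docroot. A sketch of the kind of block that would go in httpd-ssl.conf (Apache 2.2 syntax):

        <Directory "/Users/dspitzer/foo/bar">
            Options Indexes FollowSymLinks
            AllowOverride None
            Order allow,deny
            Allow from all
        </Directory>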

    Read the article

  • How can I forward an application with X11 in grayscale

    - by ??????? ???????????
    I am trying to run a graphical application at home and display it on a laptop which is located about six routing hops away. The problem is that the connection is so slow (or rather there is so much GOOEY being transferred) that the mouse is unresponsive and it takes a "long time" to redraw the window even at a resolution of 800x600 pixels. The connection speeds are 10MBit up at home and about 1MBit down on the laptop, which I think should be sufficient for looking at some GUI in (almost) real time. Since this traffic is sent over a secure shell, I have enabled Compression with the highest CompressionLevel, along with Ciphers set to blowfish-cbc. This has substantially improved the responsiveness of the application, making it nearly usable. However, my goal is to improve the performance even further by sacrificing colors and even frame rate. The application to be displayed is a QEMU SDL window with a graphically-oriented OS in it. This is not strictly relevant, but perhaps there are options to tweak the SDL output which I am not aware of. A possible workaround would be to run the application in a "hidden" X server and enable TigerVNC on that X server. This would automatically give me the benefits of an optimized VNC viewport, but the goal is to do without (reduce complexity). The question I'm asking is: what are my options for reducing the data rate generated on the server, in order to make the graphical application more usable on the client? As mentioned, colors are not important and I could probably work with 5-16 fps. Both machines are running Gentoo, with the software in question being:

        workstation: X.Org X Server 1.10.4, OpenSSH_5.8p1-hpn13v10, OpenSSL 1.0.0e, QEMU emulator version 0.15.1 (qemu-kvm-0.15.1)
        laptop:      X.Org X Server 1.12.2, OpenSSH_5.8p1-hpn13v10lpk, OpenSSL 1.0.0j
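
    For reference, a hedged sketch of the SSH side of what the post describes (X forwarding plus compression plus a cheap cipher); this is roughly where the tuning options end on the transport, so further savings have to come from the X or VNC layer. The hostname is a placeholder:

        ssh -X -C -c blowfish-cbc user@home-workstation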

    Read the article

  • Can I subnet a subnet?

    - by Portman
    Apologies in advance for the botched terminology. I have read the Server Fault subnet wiki, but this is more of an ISP question.

    I currently have a /27 block of public IPs. I give my router the first address in this pool and then use 1-to-1 NAT for all the servers behind the firewall, so that they each get their own public IP. The router/firewall is currently using (actual addresses removed to protect the guilty):

        IP Address:  XXX.XXX.XXX.164
        Subnet mask: 255.255.255.224
        Gateway:     XXX.XXX.XXX.161

    What I would like to do is break my subnet out into two separate /28 subnets, and do this in a way that is transparent to the ISP (i.e., they see me as continuing to operate a single /27). Currently, my topology looks like:

        ISP
         |
        [Router/Firewall]
         |
        [Managed Ethernet Switch]
           /         |         \
        [Server1] [Server2] [Server3] (etc)

    Instead, I would like it to look like:

        ISP
         |
        [Switch]
          /        \
        [Router1]   [Router2]
         |     |     |     |
        [S1]  [S2]  [S3]  [S4] (etc)

    As you can see, this would partition me into two separate networks. I'm struggling with what the correct IP settings would be on Router1 and Router2. Here's what I have right now:

                     Router1            Router2
        IP Address:  XXX.XXX.XXX.164    XXX.XXX.XXX.180
        Subnet mask: 255.255.255.240    255.255.255.240
        Gateway:     XXX.XXX.XXX.161    XXX.XXX.XXX.161

    Note that normally you would expect Router2 to have a gateway of .177, but I'm trying to get them both to use the gateway originally given to me by the ISP. Is subnetting like this in fact possible, or am I completely botching the most basic concepts?
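
    To make the arithmetic concrete (a hedged illustration using the post's masked addresses): a /27 starting at .160 covers .160-.191, and splitting it yields the two /28s below, which is why a gateway of .161 can only ever sit inside the first half:

        XXX.XXX.XXX.160/28  ->  usable hosts .161-.174, broadcast .175
        XXX.XXX.XXX.176/28  ->  usable hosts .177-.190, broadcast .191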

    Read the article

  • Samba: share home directories when home directories are symbolic links

    - by Owen
    I have set up a new Ubuntu 9.10 system for five users. In the system is a large LVM volume where all the data is to be kept. The main system disk is not for this purpose, so I attempted to move the home directories using:

        usermod -d /var/data/username -m

    ...and started creating my shares for these new home locations. But then I thought: hey, Samba has built-in home directory sharing! So I enabled that, and it didn't work. The shares were not published to the network. Only the share for user 'owen' was published; his folder hadn't been moved.

    So I thought: maybe Samba home sharing only works for default home locations, so how about I move the home directories back to where they were and then make them symlinks:

        root@boxenmkiv:/home# ls -l
        total 4
        lrwxrwxrwx 1 brett brett   25 2010-04-03 08:48 brett -> /var/data/brett/
        lrwxrwxrwx 1 carly carly   23 2010-04-03 08:48 carly -> /var/data/carly/
        lrwxrwxrwx 1 dave  dave    21 2010-04-03 08:48 dave -> /var/data/dave/
        lrwxrwxrwx 1 kate  kate    23 2010-04-03 08:47 kate -> /var/data/kate/
        drwxr-xr-x 4 owen  owen  4096 2010-04-03 08:44 owen

    Like so. Still no go. The only user's share which is published to the network is 'owen', who as you can see above has not had his home directory moved. I have also added the following to my smb.conf:

        [global]
        follow symlinks = yes
        wide symlinks = yes
        unix extensions = no

    With no luck. Am I going about this the entirely wrong way? Should I just give up and manually create shares for the users? Thanks in advance.
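
    Not from the post, but a hedged note on that last snippet: Samba's canonical parameter is wide links rather than wide symlinks (an unrecognised parameter is ignored, and testparm will flag it), and in later Samba releases wide links only takes effect when unix extensions = no, as already set here. The usual shape of a symlink-friendly setup with the built-in home sharing is something like:

        [global]
            unix extensions = no
            follow symlinks = yes
            wide links = yes

        [homes]
            browseable = no
            read only = no
            valid users = %S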

    Read the article

  • Dovecot install: what does this error mean?

    - by jamie
    I have Postfix and Dovecot installed on CentOS 6 (Linode), along with MySQL. The table and user are already set up, and Postfix installed fine, but Dovecot gives me this error in the mail log:

        Warning: Killed with signal 15 (by pid=9415 uid=0 code=kill)

    The next few lines say this:

        Apr 7 16:13:35 dovecot: master: Dovecot v2.0.9 starting up (core dumps disabled)
        Apr 7 16:13:35 dovecot: config: Warning: NOTE: You can get a new clean config file with: doveconf -n > dovecot-new.conf
        Apr 7 16:13:35 dovecot: config: Warning: Obsolete setting in /etc/dovecot/dovecot.conf:1: protocols=pop3s is no longer supported. to disable non-ssl pop3, use service pop3-login { inet_listener pop3 { p$
        Apr 7 16:13:35 dovecot: config: Warning: Obsolete setting in /etc/dovecot/dovecot.conf:5: ssl_cert_file has been replaced by ssl_cert = <file
        Apr 7 16:13:35 dovecot: config: Warning: Obsolete setting in /etc/dovecot/dovecot.conf:6: ssl_key_file has been replaced by ssl_key = <file
        Apr 7 16:13:35 dovecot: config: Warning: Obsolete setting in /etc/dovecot/dovecot.conf:8: namespace private {} has been replaced by namespace { type=private }
        Apr 7 16:13:35 dovecot: config: Warning: Obsolete setting in /etc/dovecot/dovecot.conf:24: add auth_ prefix to all settings inside auth {} and remove the auth {} section completely
        Apr 7 16:13:35 dovecot: config: Warning: Obsolete setting in /etc/dovecot/dovecot.conf:25: auth_user has been replaced by service auth { user }

    I am following directions for the install on CentOS 5, with changes to the dovecot.conf file from different sources specific to CentOS 6. So the dovecot.conf file might not be correct, but there is no good source I have found yet for making Dovecot install correctly. Can anyone tell me what the error above means? The terminal does not give any message as to start OK or FAIL: when I issue the service dovecot start command, it says "Starting Dovecot Imap:" and nothing more.
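
    Two hedged notes, not from the post: the "Killed with signal 15" line just records that Dovecot was stopped with SIGTERM (a normal stop or restart), not a crash; the warnings that follow are about pre-2.0 syntax in dovecot.conf. A sketch of the 2.x equivalents of the flagged lines (the certificate paths are illustrative), with doveconf -n being the quickest way to get a cleaned-up configuration to start from, as the log itself suggests:

        protocols = pop3
        service pop3-login {
          inet_listener pop3 {
            port = 0                 # disables the non-SSL POP3 listener, leaving pop3s only
          }
        }
        ssl_cert = </etc/pki/dovecot/certs/dovecot.pem
        ssl_key  = </etc/pki/dovecot/private/dovecot.pem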

    Read the article

  • Squid configuration for proxy server

    - by Ian Rob
    I have a server with 10 IPs that I want to give access to some friends via authentication, but I'm stuck on Squid's config file. Let's say I have these IPs available on my server:

        212.77.23.10
        212.77.1.10
        68.44.82.112

    And I want to allocate each one of them to a different user, like so:

        212.77.23.10 goes to user manilodisan  using password 123456
        212.77.1.10  goes to user manilodisan1 using password 123456
        68.44.82.112 goes to user manilodisan2 using password 123456

    I managed to add the passwords and authentication works OK, but how do I restrict one user to one of the available IPs? I have a basic setup from different bits I found over the internet, but nothing seems to work. Here's my squid.conf (all comments are removed to make it lighter):

        acl ip1 myip 212.77.23.10
        acl ip2 myip 212.77.1.10
        tcp_outgoing_address 212.77.23.10 ip1
        tcp_outgoing_address 212.77.1.10 ip2
        http_port 8888
        visible_hostname weezie
        auth_param basic program /usr/lib/squid/ncsa_auth /etc/squid/squid-passwd
        acl ncsa_users proxy_auth REQUIRED
        http_access allow ncsa_users
        acl all src 0.0.0.0/0.0.0.0
        acl manager proto cache_object
        acl localhost src 127.0.0.1/255.255.255.255
        acl to_localhost dst 127.0.0.0/8
        acl SSL_ports port 443         # https
        acl SSL_ports port 563         # snews
        acl SSL_ports port 873         # rsync
        acl Safe_ports port 80         # http
        acl Safe_ports port 21         # ftp
        acl Safe_ports port 443        # https
        acl Safe_ports port 70         # gopher
        acl Safe_ports port 210        # wais
        acl Safe_ports port 1025-65535 # unregistered ports
        acl Safe_ports port 280        # http-mgmt
        acl Safe_ports port 488        # gss-http
        acl Safe_ports port 591        # filemaker
        acl Safe_ports port 777        # multiling http
        acl Safe_ports port 631        # cups
        acl Safe_ports port 873        # rsync
        acl Safe_ports port 901        # SWAT
        acl purge method PURGE
        acl CONNECT method CONNECT
        http_access allow manager localhost
        http_access deny manager
        http_access allow purge localhost
        http_access deny purge
        http_access deny !Safe_ports
        http_access deny CONNECT !SSL_ports
        http_access allow localhost
        http_access deny all
        icp_access allow all
        hierarchy_stoplist cgi-bin ?
        access_log /var/log/squid/access.log squid
        acl QUERY urlpath_regex cgi-bin \?
        cache deny QUERY
        refresh_pattern ^ftp: 1440 20% 10080
        refresh_pattern ^gopher: 1440 0% 1440
        refresh_pattern . 0 20% 4320
        acl apache rep_header Server ^Apache
        broken_vary_encoding allow apache
        extension_methods REPORT MERGE MKACTIVITY CHECKOUT
        hosts_file /etc/hosts
        forwarded_for off
        coredump_dir /var/spool/squid
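
    Not from the post, but a hedged sketch of the usual way to tie an authenticated user to an outgoing address: define one proxy_auth ACL per username and reference it from tcp_outgoing_address (the myip ACL type matches the local address the client connected to on the proxy, which isn't what's wanted here). In newer Squid releases tcp_outgoing_address only performs "fast" ACL checks, so this relies on the credentials already having been validated by the http_access rules:

        acl user1 proxy_auth manilodisan
        acl user2 proxy_auth manilodisan1
        acl user3 proxy_auth manilodisan2

        tcp_outgoing_address 212.77.23.10 user1
        tcp_outgoing_address 212.77.1.10  user2
        tcp_outgoing_address 68.44.82.112 user3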

    Read the article

  • Intel Wireless 4965AGN not achieving N throughput when connected to an Airport Express N network

    - by BenA
    I have an Intel Wireless WiFi Link 4965AGN adaptor in my laptop (HP Pavillion dv2000 series) which is connecting to a 5Ghz-only 802.11n network provided by an Apple Airport Express. The network is using WPA2 encryption. My desktop is also connected the Airport, via a Linksys WUSB600N USB adaptor. Both are running with the latest drivers, and the Airport is running the latest firmware. The Airport is also configured to use wide channels. The problem I have is that I never get throughput above 4MB/s when transferring files between the two machines. Even a pessimistic calculation shows a 270Mbps network as being capable of transfer rates at well above 10MB/s. I'm pretty sure I've isolated the issue to being the Intel adaptor, as wiring the desktop to the AP, and using the Linksys adaptor on the laptop immediately yielded speeds limited by the 100MB/s ethernet connection. I know that 802.11n is still a draft standard, and so mixing kit from different manufacturers can easily lead to poor results, but I was just wondering if anybody else out there has had success with this Intel adaptor on an N network? Or even better, connecting it to an Airport Express? Can anybody give me any advice on how to troubleshoot this issue? I should also mention that the Airport Express doesn't allow you to manually specify channels when running in N mode, and that I've been able to rule out interference from other Wireless LANs by scanning. There aren't any other 5GHz networks in my area. All ideas welcome! Update: A while later, I've just updated to the most recent drivers for both the Intel chip in the laptop, and the USB adaptor. Unfortunately this hasn't improved things :(. If anybody has any advice it would be be gratefully received.

    Read the article

  • Install GRUB bootloader without installing Linux.

    - by Kavitesh Singh
    I have Windows 7 installed on the system and I want to create a separate WinPE bootable partition which the system can fall back to when things go wrong. Now, Win7 does give this option, and I might also edit the BCD store to make changes to the Win7 boot menu, or could use EasyBCD. I don't want to use these options, as I need to customize hiding/unhiding of partitions at boot time, etc. I searched and found that GRUB might be the tool I am looking for. I want to use the GRUB loader without any version of Linux installed on the system. Can someone guide me on how I can install GRUB to the hard disk MBR and configure the boot menu? Searching the internet, I mostly came across commands that look for GRUB on the hard disk because of an existing Linux installation and then try to repair it. In my case there is no Linux at all. I have an Ubuntu 9.10 bootable CD and an OpenSUSE 11.2 live CD and installation disc. Can I use them to install GRUB on my system?
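
    Purely as a hedged sketch (device names are placeholders, and this assumes the GRUB 2 that ships on the Ubuntu 9.10 CD): from the live CD, GRUB can be installed to the MBR with its files kept on a small dedicated partition, with no Linux installation required:

        sudo mount /dev/sda3 /mnt                        # small partition that will hold /boot/grub
        sudo grub-install --root-directory=/mnt /dev/sda
        # then create /mnt/boot/grub/grub.cfg with chainloader entries for
        # the Windows 7 and WinPE partitions (partition numbers are placeholders):
        #   menuentry "Windows 7" { set root=(hd0,1); chainloader +1 }
        #   menuentry "WinPE"     { set root=(hd0,3); chainloader +1 }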

    Read the article

  • Nagios returns "No output returned from plugin" running process

    - by user56291
    I have a Nagios server and a bunch of Nagios clients that I currently monitor. All the clients are set up with the following NRPE configuration. The check_users, check_load, etc. metrics are successfully displayed on the Nagios interface, but check_nginx and check_server_proxy are displayed as "Unknown" (No output returned from plugin). As far as I understand, Nagios simply runs the ps command and looks for either the argument string or the name of the command to verify whether the service is running. Also, with the -c flag one can give Nagios a threshold to determine the output (i.e.: -c 1 returns 'OK' if it finds at least 1 process).

    nrpe_local.cfg:

        ######################################
        # Do any local nrpe configuration here
        ######################################
        allowed_hosts =127.0.0.1,10.0.2.181
        command[check_users]=/usr/lib/nagios/plugins/check_users -w 5 -c 10
        command[check_load]=/usr/lib/nagios/plugins/check_load -w 15,10,5 -c 30,25,20
        command[check_all_disks]=/usr/lib/nagios/plugins/check_disk -w 20% -c 10%
        command[check_zombie_procs]=/usr/lib/nagios/plugins/check_procs -w 5 -c 10 -s Z
        command[check_total_procs]=/usr/lib/nagios/plugins/check_procs -w 150 -c 200
        command[check_swap]=/usr/lib/nagios/plugins/check_swap -w 50% -c 25%
        command[check_server_proxy]=/usr/lib/nagios/plugins/check_procs -c 1 -a "api-v1/server.js"
        command[check_nginx]=/usr/lib/nagios/plugins/check_procs -c 1:30 -C nginx

    nagios_server.cfg:

        ...
        define host{
                use                     generic-host        ; Name of host template to use
                host_name               plum
                alias                   plum
                address                 10.0.2.88
                check_command           check-host-alive-by-ssh
        }
        ...
        #Check api-proxy-server
        define service{
                use                     generic-service
                host_name               plum
                service_description     check api proxy service
                check_command           check_nrpe!check_server_proxy
        }

        define service {
                use                     generic-service     ; Name of service template to use
                host_name               plum
                service_description     CHECK_NGINX
                check_period            24x7
                max_check_attempts      3
                normal_check_interval   5
                retry_check_interval    3
                check_command           check_nrpe!check_nginx
                notifications_enabled   1
        }

    Also, when I run the command on the Nagios client:

        /usr/lib/nagios/plugins/check_procs -c 1 -a "api-v1/server.js"

    I get the desired output:

        PROCS OK: 1 process with args 'api-v1/server.js'

    I would really appreciate any pointers that might help me solve why the NRPE command does not return the desired output on the Nagios server panel.
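
    Not from the post, but a hedged way to split the problem in half is to run the same checks through NRPE from the Nagios server itself; if these also come back empty, the issue is on the NRPE/client side (command definitions not reloaded, plugin path, argument handling) rather than in the Nagios service definitions:

        /usr/lib/nagios/plugins/check_nrpe -H 10.0.2.88 -c check_server_proxy
        /usr/lib/nagios/plugins/check_nrpe -H 10.0.2.88 -c check_nginx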

    Read the article

  • Linking Linux MIT Kerberos with a Windows 2003 Active Directory

    - by Beerdude26
    Greetings, I was wondering how one might link a Linux MIT Kerberos with a Windows 2003 Active Directory to achieve the following: A user, [email protected], attempts to log in at an Apache website, which runs on the same server as the Linux MIT Kerberos. The Apache module first asks the local Linux MIT Kerberos if he knows a user by that name or realm. The MIT Kerberos finds out it isn't responsible for that realm, and forwards the request to the Windows 2003 Active Directory. The Windows 2003 Active Directory replies positively and gives this information to the Linux MIT Kerberos, which in turn tells this to the Apache module, which grants the user access to its files. Here is an image of the situation: http://img179.imageshack.us/img179/5092/linux2k3.png (I'm not allowed to embed images just yet.) The documentation I have read concerning this issue often differ from this problem: Some discuss linking up a MIT Kerberos with an Active Directory to gain access to resources on the Active Directory server; While another uses the link to authenticate Windows users to the MIT Kerberos through the Windows 2003 Active Directory. (My problem is the other way around.) So what my question boils down to, is this: Is it possible to have a Linux MIT Kerberos server pass through requests for a Active Directory realm, and then have it receive the reply and give it to the requesting service? (Although it's not a problem if the requesting service and the Windows 2003 Active Directory communicate directly.) Suggestions and constructive criticism are greatly appreciated. :)
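
    Not from the post, but as a hedged sketch of how this is normally wired up: the MIT KDC doesn't forward requests itself; the Kerberos libraries on the Apache host are told which KDC serves which realm, and a cross-realm trust (matching krbtgt principals created in both directions) is what lets tickets from the AD realm be honoured. A krb5.conf fragment along these lines, with realm and host names as placeholders:

        [realms]
            LINUX.EXAMPLE = {
                kdc = kdc.linux.example
            }
            AD.EXAMPLE = {
                kdc = dc1.ad.example
                admin_server = dc1.ad.example
            }

        [domain_realm]
            .linux.example = LINUX.EXAMPLE
            .ad.example = AD.EXAMPLE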

    Read the article

  • installing SVN - CentOS - cannot find -lexpat

    - by furnace
    Hey guys, I'm trying to install SVN on CentOS 5. Unfortunately a simple yum install isn't going to work (AFAIK) because I'm using the DirectAdmin control panel. When it comes to running 'make' I get this error:

        /usr/bin/ld: cannot find -lexpat

    I'm new to installing things without yum (!) so am a bit lost. Do you have any advice on how to get past this hurdle? Just to give a little more context to the error:

        /apache -I/usr/include/apache -I/etc/svn-install/subversion-1.6.2/sqlite-amalgamation -o subversion/svn/util.o -c subversion/svn/util.c
        cd subversion/svn && /bin/sh /etc/svn-install/subversion-1.6.2/libtool --tag=CC --silent --mode=link gcc -g -O2 -g -O2 -pthread -rpath /usr/lib -o svn add-cmd.o blame-cmd.o cat-cmd.o changelist-cmd.o checkout-cmd.o cleanup-cmd.o commit-cmd.o conflict-callbacks.o copy-cmd.o delete-cmd.o diff-cmd.o export-cmd.o help-cmd.o import-cmd.o info-cmd.o list-cmd.o lock-cmd.o log-cmd.o main.o merge-cmd.o mergeinfo-cmd.o mkdir-cmd.o move-cmd.o notify.o propdel-cmd.o propedit-cmd.o propget-cmd.o proplist-cmd.o props.o propset-cmd.o resolve-cmd.o resolved-cmd.o revert-cmd.o status-cmd.o status.o switch-cmd.o tree-conflicts.o unlock-cmd.o update-cmd.o util.o ../../subversion/libsvn_client/libsvn_client-1.la ../../subversion/libsvn_wc/libsvn_wc-1.la ../../subversion/libsvn_ra/libsvn_ra-1.la ../../subversion/libsvn_delta/libsvn_delta-1.la ../../subversion/libsvn_diff/libsvn_diff-1.la ../../subversion/libsvn_subr/libsvn_subr-1.la /etc/httpd/lib/libaprutil-1.la -lexpat /etc/httpd/lib/libapr-1.la -luuid -lrt -lcrypt -lpthread -ldl
        /usr/bin/ld: cannot find -lexpat
        collect2: ld returned 1 exit status
        make: *** [subversion/svn/svn] Error 1

    Thanks!
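
    Not from the post, but the usual hedged reading of "cannot find -lexpat" is that the expat development package (which provides the libexpat.so symlink the linker wants) isn't installed; using yum for that one library doesn't conflict with building Subversion itself by hand for DirectAdmin:

        yum install expat-devel
        # then re-run make in the subversion-1.6.2 tree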

    Read the article

  • Is anyone using KVM in production?

    - by Andy Shellam
    I've been trying to set up a pair of servers utilising KVM on Ubuntu 9.10 to host 8 virtual machines between them and ended up with various issues from the VMs freezing, to not powering on. I had one virtual server set up and running and was setting up a second, when any operation involving OpenSSL would cause the VM to lock up in a weird way - all network traffic would cease, it wouldn't process logins on the console, but it wasn't taking any CPU time off the host. The first virtual server was identical and worked perfectly. Another VM I tried to setup had installed Ubuntu fine then refused to reboot, throwing kernel exceptions to do with XFS. I've now installed Citrix XenServer 5.5 on both hosts, and am now setting up my third VM with absolutely no issues. I also had the same experience when I tried VMware, but I preferred Xen as it appears to give more features on the free license. My question is am I just unlucky with KVM, or is KVM as unstable as it appears? Are you using, or planning on using, KVM in production, and how successful have you been?

    Read the article

  • Samba Server Make Multiple User Permissions Profiles

    - by Scriptonaut
    I have a Samba file server running, and I was wondering how I could make multiple user accounts that have different permissions. For example, at the moment I have a user, smbusr, but when I ssh to the share, I can read, write, execute, and even navigate out of the samba directory and do stuff on the actual computer. This is bad because I want to be able to give out my IP so friends/family can use the server, but I don't want them to be able to do just anything. I want to lock the user in the samba share directory(and all the sub directories). Eventually I would like several profiles such as (smbusr_R, smbusr_RW, smbguest_R, smbguest_RW). I also have a second question related to this, is SSH the best method to connect from other unix machines? What about VPN? Or simply mounting like this: mount -t ext3 -o user=username //ipaddr/share /mnt/mountpoint Is that mounting command above the same thing as a vpn? This is really confusing me. Thanks for the help guys, let me know if you need to see any files, or need anymore information.
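
    Not from the post, but a hedged sketch of the usual pattern: keep share-only accounts out of SSH entirely (no login shell), let smb.conf rather than SSH control what each account may read or write, and mount from other Unix machines with cifs rather than ext3 (ext3 is an on-disk filesystem type, not a network protocol). The account name below is just the post's own naming idea:

        # a share-only account: no home, no login shell, Samba password only
        sudo useradd -M -s /usr/sbin/nologin smbguest_r
        sudo smbpasswd -a smbguest_r

        # mounting the share from another Linux box over SMB/CIFS
        sudo mount -t cifs //ipaddr/share /mnt/mountpoint -o user=smbguest_r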

    Read the article
