Search Results


  • Dual DC Time Service

    - by poconnor
    I believe I'm having an issue with my domain controllers and the time service. On my backup DC, I keep seeing a warning stating "The time service has stopped advertising as a time source because the local clock is not synchronized." Does this mean that my backup DC believes it's a time server? My PDC should be the time server, and I have gone through setting up the PDC as the time server.

    I was not around for the original setup of the time service with the old PDC and backup DC, but I believe the old PDC was the time server, so I set up the new PDC as the new time server when I decommissioned the old one. Is it possible that the backup DC was set up as the time server and still thinks it's supposed to be giving out time to everyone?

    The registry Type value on the PDC is NTP; on the backup DC it is NT5DS. Results of w32tm /monitor:

        Getting AD DC list for default domain...
        Analyzing:
        delayoffset from DC1.local..com Stratum: 4
        delayoffset from DC1.local..com Stratum: 3
        Warning: Reverse name resolution is best effort. It may not be correct since
        RefID field in time packets differs across NTP implementations and may not be
        using IP addresses.
        DC2.local..com [192.168.1.8:123]:
            ICMP: 1ms  NTP: -0.6349491s
            RefID: DC1.local..com [192.168.1.9]
        DC1.local..com *** PDC *** [192.168.1.9:123]:
            ICMP: 0ms  NTP: +0.0000000s
            RefID: wwwco1test12.microsoft.com [65.55.21.20]
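
    If the backup DC was once manually configured as an authoritative source, resetting both machines to their intended roles usually clears the warning. A minimal sketch with the stock w32tm tooling (the peer list below is a placeholder; use your actual upstream source):

        :: on the PDC emulator: sync from an external source and advertise as reliable
        w32tm /config /manualpeerlist:"pool.ntp.org,0x8" /syncfromflags:manual /reliable:yes /update
        :: on the backup DC: fall back to the normal domain hierarchy (NT5DS)
        w32tm /config /syncfromflags:domhier /update
        net stop w32time && net start w32time
        w32tm /resync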

  • Rkhunter reports file properties have changed

    - by CountMurphy
    I am running a fully updated LTS copy of Ubuntu Server. Today I ran rkhunter (as I do from time to time). This is the output I got:

        Warning: The file properties have changed:
        [15:52:25] File: /bin/ps
        [15:52:25] Current hash: f22991ec93ae966c856d367f42fc3d8a484bd827
        [15:52:25] Stored hash : 1892268bf195ac118076b1b0f53e7a637eb6fbb3
        [15:52:25] Current inode: 142902    Stored inode: 130894
        [15:52:25] Current file modification time: 1324307913 (19-Dec-2011 07:18:33)
        [15:52:25] Stored file modification time : 1260992081 (16-Dec-2009 11:34:41)

        Warning: The file properties have changed:
        [15:52:33] File: /usr/bin/ldd
        [15:52:33] Current hash: f1e2ca5aa3a28994e2cebb64c993a72b7d97b28c
        [15:52:33] Stored hash : 295d9cedb121a5e431a39a6d201ecd7ce5640497
        [15:52:33] Current inode: 2236210    Stored inode: 2234359
        [15:52:33] Current size: 5280    Stored size: 5279
        [15:52:33] Current file modification time: 1331165514 (07-Mar-2012 16:11:54)
        [15:52:33] Stored file modification time : 1295653965 (21-Jan-2011 15:52:45)

        Warning: The file properties have changed:
        [15:52:37] File: /usr/bin/pgrep
        [15:52:37] Current hash: 3eada9a96760f3e2c9111cfe32901d1432813c1d
        [15:52:37] Stored hash : ce265d0db9964b173fe5036f703a9b8d66e55df3
        [15:52:37] Current inode: 2229646    Stored inode: 2224867
        [15:52:37] Current file modification time: 1324307913 (19-Dec-2011 07:18:33)
        [15:52:37] Stored file modification time : 1260992081 (16-Dec-2009 11:34:41)

        Warning: The file properties have changed:
        [15:52:41] File: /usr/bin/top
        [15:52:41] Current hash: 6be13737d8b0950cea2f1ae3a46d4af713dbe971
        [15:52:41] Stored hash : c7b495ecef3982eeb6f08a511861b1a1ae8775e6
        [15:52:41] Current inode: 2229629    Stored inode: 2224862
        [15:52:41] Current file modification time: 1324307913 (19-Dec-2011 07:18:33)
        [15:52:41] Stored file modification time : 1260992081 (16-Dec-2009 11:34:41)

        Warning: The file properties have changed:
        [15:52:53] File: /usr/sbin/cron
        [15:52:53] Current hash: e783ca973f970aa8a4bf5edc670e690b33914c3d
        [15:52:53] Stored hash : 4718257a8060736b9058aed025c992f02a74a5a7
        [15:52:53] Current inode: 2224719    Stored inode: 2228839
        [15:52:54] Current file modification time: 1330965568 (05-Mar-2012 08:39:28)

    There were also a few others I left out. Has my server been rooted? I am running fail2ban and do monitor failed SSH logins; nothing has come up. Could someone compare these hashes to their copy of Ubuntu Server (LTS)? Please tell me these are false positives.

    Edit: Is there something else like rkhunter I can run for a second scan?
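
    On a Debian-based system the binaries can be checked against the package database before assuming a rootkit; if the modification dates line up with routine package updates, the warnings are usually benign and rkhunter just needs its baseline refreshed. A sketch:

        sudo apt-get install debsums
        sudo debsums -c                           # list files whose checksum differs from the packaged copy
        dpkg -S /bin/ps                           # identify the owning package (procps)
        sudo apt-get install --reinstall procps   # restore pristine binaries if still in doubt
        sudo rkhunter --propupd                   # re-baseline rkhunter after verifying the files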

  • Fresh install CentOS 6.4 64b with directadmin slowly consumes all memory and crashes

    - by Coen Ponsen
    Dear Server Fault community, this is my first question on Server Fault. I'm new to server (mis)configuration, so please forgive me for asking something stupid :)

    I'm running DirectAdmin on a CentOS 6.4 64-bit virtual machine with 4 GB memory and over 10000Gh. I migrated my websites because my former VPS couldn't keep up anymore. Only half of the websites from that 1 GB machine have been migrated yet, so the migration is still in progress, and already my server crashes every day. The server performance up until that moment is perfect, and the DirectAdmin log files show nothing out of the ordinary. Yesterday only the MySQL server crashed, but it has crashed the entire machine before. The memory usage in DirectAdmin seems to be normal:

        directadmin (pid 3923 22158 22159 22160 22161 22162)   8.75 MB
        dovecot     (pid 3851)                                 47.8 MB
        exim        (pid 1350)                                 1.29 MB
        httpd       (pid 21525 21528 21529 21530 21531 21532
                     21546 21571 21742 21743 21744)            490.4 MB
        mysqld      (pid 1299)                                 287.8 MB
        named       (pid 3807)                                 16.3 MB
        proftpd     (pid 1481)                                 1.91 MB
        sshd        (pid 1173 21494)                           5.16 MB

    Restarting services immediately frees up memory, but the usage slowly increases over time (about 24 hours to a crash). The commands

        sync
        echo 3 > /proc/sys/vm/drop_caches

    free all memory correctly. I could just create a cronjob, but that seems the wrong way around to me. I can't seem to pinpoint the cause. Any advice, references or tips are highly appreciated! Greetings, Coen

    Edit: free -m after drop_caches:

                     total       used       free     shared    buffers     cached
        Mem:          3830        735       3095          0          0         21
        -/+ buffers/cache:        712       3117
        Swap:          991          0        991

    I'll post another one this evening.
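
    Note that in the free -m output above only 712 MB is really in use once buffers and cache are subtracted; the kernel reclaims cache on demand, so a cron'd drop_caches should not be needed. To find what actually grows before a crash, a sketch:

        # watch resident memory per process over time
        watch -n 60 'ps aux --sort=-rss | head -15'
        # after a crash, check whether the kernel's OOM killer fired
        grep -i -e 'out of memory' -e oom /var/log/messages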

  • Computer won't start up. Stuck on Lenovo splash screen. Help Diagnose

    - by Ace Legend
    I have a Lenovo 21" IdeaCentre (I'm not sure exactly which model). Honestly, the computer works off and on. I have had problems with it not being able to shut down, which I fixed, and the fan seems to be constantly running, among a few other problems. Anyway, nobody was using it when all of a sudden it switched to a blue screen. I was in the kitchen, but when I got over to the computer I read the message. It said something about bad drivers, but that is all I saw before it restarted. However, when it got to the Lenovo splash screen, nothing happened. I waited there for over 10 minutes, but still nothing. I tried to turn off the computer, but the only way to do it was to pull out the power cable. I then removed all USB devices and tried again. Still nothing. It also won't respond to keyboard input when I try to use Enter to interrupt normal startup. My guess is that some piece of hardware inside the machine is damaged, but I have no idea which part. Does anybody have any idea what could be wrong with it? Thanks.

  • Httpd and LDAP Authentication not working for sub-pages

    - by DavisTasar
    I just recently installed a Nagios implementation, and I'm trying to get LDAP authentication working for httpd on Red Hat. Here is nagios.conf for Apache (sanitized, of course):

        ScriptAlias /nagios/cgi-bin "/usr/local/nagios/sbin"
        <Directory "/usr/local/nagios/sbin">
            #SSLRequireSSL
            Options ExecCGI
            AllowOverride none
            AuthType Basic
            AuthName "LDAP Authentication"
            AuthLDAPURL "ldap://my.domain.controller:389/OU=Users,DC=my,DC=domain,DC=controller?sAMAccountName?sub?(objectClass=user)" NONE
            AuthzLDAPAuthoritative off
            AuthLDAPBindDN "CN=NagiosAdmin,DC=my,DC=domain,DC=controller"
            AuthLDAPBindPassword "myPassword"
            require valid-user
        </Directory>

        Alias /nagios "/usr/local/nagios/share"
        <Directory /usr/local/nagios/share>
            #SSLRequireSSL
            Options None
            AllowOverride none
            AuthBasicProvider ldap
            AuthType Basic
            AuthName "LDAP Authentication"
            AuthzLDAPAuthoritative off
            AuthLDAPURL "ldap://my.domain.controller:389/OU=Users,DC=my,DC=domain,DC=controller?sAMAccountName?sub?(objectClass=user)" NONE
            AuthLDAPBindDN "CN=NagiosAdmin,DC=my,DC=domain,DC=controller"
            AuthLDAPBindPassword "myPassword"
            require valid-user
        </Directory>

    Now, the initial authentication works: when you first hit the page you can log in just fine. However, when you go anywhere else, it prompts for authentication again, fails (re-prompting), and logs this error:

        [Mon Oct 21 14:46:23 2013] [error] [client 172.28.9.30] access to /nagios/cgi-bin/statusmap.cgi
        failed, reason: verification of user id '<myuseraccount>' not configured,
        referer: http://<nagiosserver>/nagios/side.php

    I'm almost certain it's a simple flag or option, but I just can't find it, and I don't have a lot of experience working with Apache. Any assistance would be greatly appreciated.
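
    Comparing the two blocks above: the /usr/local/nagios/sbin directory (where the failing CGIs live) has no AuthBasicProvider ldap line, while the share directory does. That mismatch is the textbook cause of "verification of user id ... not configured"; without a provider, mod_auth_basic has nothing to verify credentials against. A sketch of the fix:

        <Directory "/usr/local/nagios/sbin">
            AuthBasicProvider ldap
            # ...rest of the block unchanged...
        </Directory>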

  • Windows 2008 Server can't connect to FTP

    - by stivlo
    I have Windows Server 2008 R2, and I am trying to install FTP services. My problem is that I can't connect from outside; FileZilla complains with:

        Error: Connection timed out
        Error: Could not connect to server

    Here is what I did. With the Server Manager, I installed the roles FTP Server, FTP Service and FTP Extensibility. In IIS 7.5 I chose Add FTP Site, enabled Basic Authentication, and allowed a user to connect with Read and Write. In FTP Firewall Support at the server level, I set the Data Channel Port Range to 49100-49250 and set the external IP address to the one I see from outside. Under FTP IPv4 Address and Domain Restrictions > Edit Feature Settings, access for unspecified clients is set to Allow, so I clicked OK without changing the defaults. In FTP SSL Policy I set Require SSL connections, with a self-signed certificate.

    I can connect with FileZilla from the host itself, but not remotely, as described above. I've enabled pfirewall.log, but apparently nothing gets logged. The server is in Amazon EC2, and in the security group's inbound firewall rules I allow connections to port 21 and ports 49100-49250 from everywhere. What else should I be checking to solve the problem?
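
    Because the SSL policy encrypts the control channel, the Windows Firewall's stateful FTP helper cannot read the PASV replies and open data ports dynamically, so the passive range and port 21 need explicit rules on the Windows side as well as in the security group. A sketch, assuming the stock netsh tooling:

        :: stateful FTP inspection cannot see an encrypted control channel; disable it and open ports explicitly
        netsh advfirewall set global StatefulFtp disable
        netsh advfirewall firewall add rule name="FTP control" dir=in action=allow protocol=TCP localport=21
        netsh advfirewall firewall add rule name="FTP passive data" dir=in action=allow protocol=TCP localport=49100-49250

    Restarting the Microsoft FTP Service after changing the firewall support settings is also worth doing, so the external IP and port range are re-read.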

  • FreeBSD: problem with Postfix after updating LDAP

    - by Olexandr
    On the server I installed openldap-server; the OpenLDAP client had already been installed on this machine. The openldap-client version (2.4.16) was older than the new openldap-server (2.4.21), so the client was updated too. The OpenLDAP client works with Postfix on this server, and after all the updates Postfix can't start anymore. The error on postfix stop|start is:

        /libexec/ld-elf.so.1: Shared object "libldap-2.4.so.6" not found, required by "postfix"

    The library directory contains libldap-2.4.so.7, but libldap-2.4.so.6 has been removed from the server. When I deinstall the current version of openldap-client, the system reports

        ===> Deinstalling for net/openldap24-client

    O.K., but when I run "make install" it says:

        ===> Installing for openldap-sasl-client-2.4.23
        ===> openldap-sasl-client-2.4.23 depends on shared library: sasl2.2 - found
        ===> Generating temporary packing list
        ===> Checking if net/openldap24-client already installed
        ===> An older version of net/openldap24-client is already installed (openldap-client-2.4.21)
        You may wish to ``make deinstall'' and install this port again by ``make reinstall''
        to upgrade it properly.
        If you really wish to overwrite the old port of net/openldap24-client without deleting
        it first, set the variable "FORCE_PKG_REGISTER" in your environment or the
        "make install" command line.
        *** Error code 1
        Stop in /usr/ports/net/openldap24-client.
        *** Error code 1
        Stop in /usr/ports/net/openldap24-client.

    Updating the ports doesn't help, and Postfix still reports the same missing libldap-2.4.so.6 error.
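
    The underlying problem is that Postfix was built against the old libldap and was never rebuilt after the upgrade. A possible sequence with the classic pkg_* ports tooling of that era (a sketch; the package name is taken from the output above):

        # clear the stale registration that blocks the install
        pkg_delete -f openldap-client-2.4.21
        cd /usr/ports/net/openldap24-client && make reinstall clean
        # rebuild postfix so it links against the new libldap-2.4.so.7
        cd /usr/ports/mail/postfix && make deinstall reinstall clean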

  • Connecting to SVN server from a computer outside of my LAN

    - by Tom Auger
    I've got a Fedora server running Subversion and svnserve on port 3690. My repo is at /var/svn/project_name. I have my router forwarding port 3690 to the local server (as well as ports 80, 21, 22 and a few others). When I connect locally to svn://192.168.0.2/project_name it works great. When I connect from an external server to svn://my.static.ip/project_name I get a timeout connecting to the host. However, http://my.static.ip works fine, so port forwarding is working (at least for port 80). I don't want to run WebDAV or SVN over HTTP/S; I'd like it to work using svnserve, as documented in the SVN book. What have I misconfigured?

    Edit: Here is the last part of my iptables dump. I'm not an expert, but it looks OK to me:

        ACCEPT  tcp  -- anywhere anywhere state NEW tcp dpt:svn
        ACCEPT  udp  -- anywhere anywhere state NEW udp dpt:svn
        ACCEPT  tcp  -- anywhere anywhere state NEW tcp dpts:6680:6699
        ACCEPT  udp  -- anywhere anywhere state NEW udp dpts:6680:6699
        REJECT  all  -- anywhere anywhere reject-with icmp-host-prohibited

    Edit 2: Results from sudo netstat -tulpn:

        tcp    0    0 0.0.0.0:3690    0.0.0.0:*    LISTEN    1455/svnserve
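
    Since svnserve is listening on 0.0.0.0:3690 and iptables accepts dpt:svn, the server side looks right; the remaining suspects are the router's forward rule itself and where the test is run from. A quick way to isolate it (a sketch):

        # from a genuinely external host: does the TCP handshake complete?
        nc -vz my.static.ip 3690
        # note: running this same test from inside the LAN against the external IP
        # often fails on consumer routers that don't support NAT hairpinning,
        # even when the forward works fine from outside.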

  • Adjust iptables

    - by madunix
    cat /etc/sysconfig/iptables:

        # Firewall configuration written by system-config-securitylevel
        # Manual customization of this file is not recommended.
        *filter
        :INPUT ACCEPT [0:0]
        :FORWARD ACCEPT [0:0]
        :OUTPUT ACCEPT [0:0]
        :RH-Firewall-1-INPUT - [0:0]
        -A INPUT -j RH-Firewall-1-INPUT
        -A FORWARD -j RH-Firewall-1-INPUT
        -A RH-Firewall-1-INPUT -i lo -j ACCEPT
        -A RH-Firewall-1-INPUT -p icmp --icmp-type any -j ACCEPT
        -A RH-Firewall-1-INPUT -p 50 -j ACCEPT
        -A RH-Firewall-1-INPUT -p 51 -j ACCEPT
        -A RH-Firewall-1-INPUT -p udp --dport 5353 -d X.0.0.Y -j ACCEPT
        -A RH-Firewall-1-INPUT -p udp -m udp --dport 631 -j ACCEPT
        -A RH-Firewall-1-INPUT -m state --state ESTABLISHED,RELATED -j ACCEPT
        -A RH-Firewall-1-INPUT -m state --state NEW -m tcp -p tcp --dport 443 -j ACCEPT
        -A RH-Firewall-1-INPUT -p tcp -m tcp -s X.Y.Z.W --dport 3306 -j ACCEPT
        -A RH-Firewall-1-INPUT -m state --state NEW -m tcp -p tcp -s M.M.M.M --dport 3306 -j ACCEPT
        -A RH-Firewall-1-INPUT -m state --state NEW -m tcp -p tcp --dport 22 -j ACCEPT
        -A RH-Firewall-1-INPUT -m state --state NEW -m tcp -p tcp --dport 80 -j ACCEPT
        -A RH-Firewall-1-INPUT -m state --state NEW -m tcp -p tcp --dport 21 -j ACCEPT
        -A RH-Firewall-1-INPUT -j REJECT --reject-with icmp-host-prohibited
        COMMIT

    I have the above iptables configuration on my Linux web server (Apache/MySQL), and I want the following (see the sketch after this list):

        1. Block any traffic from multiple IPs to my web server (IP1: 1.2.3.4.5, IP2: 6.7.8.9, etc.).
        2. Limit any one host to 20 connections to port 80, which should not affect
           non-malicious users but would render slowloris unusable from one host.
        3. Limit access to MySQL port 3306 on my server to the range A.B.C.D/255.255.255.240.
        4. Block all ICMP traffic.
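
    A possible set of rules covering the four points (a sketch; new rules must land before the chain's final REJECT, hence -I with explicit positions, and 255.255.255.240 is a /28):

        # 1. drop the unwanted hosts first
        iptables -I RH-Firewall-1-INPUT 1 -s 1.2.3.4 -j DROP
        iptables -I RH-Firewall-1-INPUT 2 -s 6.7.8.9 -j DROP
        # 2. cap any single source at 20 concurrent connections to port 80
        iptables -I RH-Firewall-1-INPUT 3 -p tcp --syn --dport 80 -m connlimit \
            --connlimit-above 20 -j REJECT --reject-with tcp-reset
        # 3. allow MySQL only from the /28 (then remove the two per-host 3306 rules)
        iptables -I RH-Firewall-1-INPUT 4 -p tcp --dport 3306 -s A.B.C.D/28 -j ACCEPT
        # 4. drop ICMP: delete the blanket accept, then drop the rest
        iptables -D RH-Firewall-1-INPUT -p icmp --icmp-type any -j ACCEPT
        iptables -I RH-Firewall-1-INPUT 5 -p icmp -j DROP

    Once the rules behave as expected, persist them with service iptables save.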

  • ubuntu's average load never below "0.00 0.01 0.05"

    - by Karma Fusebox
    I have several Ubuntu 12.04 VMs running on an Ubuntu 12.04 KVM host. The virtual machines that are totally idle, with no services except syslog and the other "small" standard stuff of a fresh installation, show a constant load of "0.00 0.01 0.05" in top/htop as the 1/5/15-minute averages. When "real" applications are running, the load averages behave perfectly normally, but they never fall below the values above. While this doesn't affect performance at all and could easily be ignored, it screws up the monitoring graphs in a very annoying way (graph omitted; in it, load15 briefly behaves nicely, below 0.05, in the right half of the picture).

    Unfortunately I don't know which diagnostic outputs might be helpful, so here's some default stuff:

        # top
        top - 16:31:01 up 1:05, 1 user, load average: 0.00, 0.01, 0.05
        Tasks:  62 total,   1 running,  61 sleeping,   0 stopped,   0 zombie
        Cpu(s): 0.2%us, 0.2%sy, 0.0%ni, 99.2%id, 0.5%wa, 0.0%hi, 0.0%si, 0.0%st
        Mem:   1019464k total,    73452k used,   946012k free,     6140k buffers
        Swap:        0k total,        0k used,        0k free,    22504k cached

        # free -m
                     total       used       free     shared    buffers     cached
        Mem:           995         72        923          0          6         21
        -/+ buffers/cache:          43        951
        Swap:            0          0          0

        # iostat -x /dev/vda
        Linux 3.2.0-32-virtual (vm3)   11/15/2012   _x86_64_   (2 CPU)
        avg-cpu:  %user   %nice %system %iowait  %steal   %idle
                   0.25    0.00    0.65    0.20    0.24   98.66
        Device: rrqm/s wrqm/s  r/s  w/s rkB/s wkB/s avgrq-sz avgqu-sz await r_await w_await svctm %util
        vda       0.14   0.12 0.51 0.22  6.74  1.46    22.50     0.02 23.26   20.64   29.30  7.63  0.56

    Need something else? Has anyone ever seen this behavior? Might this be a bug in kvm/ubuntu/kernel 3.x in the end? Thanks a lot!

  • LSI1068E hidden drives after failed raid volume creation

    - by silk
    We are using the LSI 1068E RAID chipset with SAS drives. We had added new drives to the system and tried to create a new RAID volume with lsiutil; unfortunately the creation failed. The problem is that now we do not have the new RAID volume, and the disks have 'disappeared' and are not available as targets for RAID:

        - lsiutil option 8 (scan for devices) does not display these disks at all
        - lsiutil option 16 (display attached devices) does list them as targets
        - lsiutil option 21+30 (create raid) does not list these disks

    Just after inserting them into the enclosure these disks appeared in the system, as expected. During the RAID creation the kernel logged the following (for both of them, again as expected):

        Mar  4 08:40:02 kilo kernel: [57555.687946] mptbase: ioc0: RAID STATUS CHANGE for PhysDisk 2 id=0
        Mar  4 08:40:02 kilo kernel: [57555.687978] mptbase: ioc0: PhysDisk has been created
        Mar  4 08:40:02 kilo kernel: [57555.695438] scsi target0:0:2: mptsas: ioc0: RAID Hidding: fw_channel=0, fw_id=0, physdsk 2, sas_addr 0x5000c50008ebe5fd

    Unfortunately they did not come back even though the volume was not created. The same situation shows in the controller's BIOS after a reboot. Taking the disks out and inserting them in different slots did not help either. Has someone seen a similar problem, and knows how to 'get back' our disks?

  • Change to different user, or let different user execute a command

    - by WG-
    I have a problem. There is a server which I can access over ssh with an account, let's say WG. Now there is a folder with the following permissions:

        drwxr-s---+ 855 vvz www-data 20K Aug 21 17:56 pictures

    I want to copy this folder using rsync, but since I am the user WG and not www-data, I cannot run rsync on it. So I want www-data to execute an rsync command for me; however, I do not possess sudo powers. My friend tells me that I am actually able to execute the rsync command as www-data, but he will not tell me how. I asked him for some clues, and he told me it had something to do with a reverse shell (which I figured out to mean that you connect by ssh to your server and then connect back to your own server, or something like that). I also asked if it was by design or actually a flaw in the system; he tells me it is both. Furthermore, I think it has something to do with the group permissions: if I can act with the group's permissions, then I can also read the files. Anybody have a clue?
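
    One thing worth checking before anything exotic: the trailing + on the mode string means an ACL is set on the directory, so WG may already have read access through the ACL rather than through the group. A sketch (hostname and path are placeholders):

        getfacl pictures          # look for a user:WG entry, or a group you belong to
        id                        # list the groups your account is in
        # if the ACL (or a group you are in) grants r-x, plain rsync as WG already works:
        rsync -av [email protected]:/path/to/pictures/ ./pictures/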

  • What may be wrong with String::ToIdentifier::EN tests?

    - by wk01
    I'm trying to install the Perl module String::ToIdentifier::EN (as a dependency of DBIx::Class::Schema::Loader), but it fails its tests. I googled these errors but got no picture of where the problem is:

        Building and testing String-ToIdentifier-EN-0.07
        cp lib/String/ToIdentifier/EN.pm blib/lib/String/ToIdentifier/EN.pm
        cp lib/String/ToIdentifier/EN/Unicode.pm blib/lib/String/ToIdentifier/EN/Unicode.pm
        Manifying blib/man3/String::ToIdentifier::EN.3pm
        Manifying blib/man3/String::ToIdentifier::EN::Unicode.3pm
        PERL_DL_NONLAZY=1 /usr/bin/perl "-MExtUtils::Command::MM" "-e" "test_harness(0, 'inc', 'blib/lib', 'blib/arch')" t/00_basic.t t/10_ascii.t t/20_capitalization.t
        Byte order is not compatible at ../../lib/Storable.pm (autosplit into ../../lib/auto/Storable/_retrieve.al) line 380, at /home/wanradt/perl5/lib/perl5/Lingua/EN/Tagger.pm line 167
        # Looks like you planned 25 tests but ran 4.
        # Looks like your test exited with 25 just after 4.
        t/00_basic.t ........... Dubious, test returned 25 (wstat 6400, 0x1900)
        Failed 21/25 subtests
        Byte order is not compatible at ../../lib/Storable.pm (autosplit into ../../lib/auto/Storable/_retrieve.al) line 380, at /home/wanradt/perl5/lib/perl5/Lingua/EN/Tagger.pm line 167
        # Looks like you planned 768 tests but ran 512.
        # Looks like your test exited with 25 just after 512.
        t/10_ascii.t ........... Dubious, test returned 25 (wstat 6400, 0x1900)
        Failed 256/768 subtests
        t/20_capitalization.t .. ok

        Test Summary Report
        -------------------
        t/00_basic.t  (Wstat: 6400 Tests: 4 Failed: 0)
          Non-zero exit status: 25
          Parse errors: Bad plan. You planned 25 tests but ran 4.
        t/10_ascii.t  (Wstat: 6400 Tests: 512 Failed: 0)
          Non-zero exit status: 25
          Parse errors: Bad plan. You planned 768 tests but ran 512.
        Files=3, Tests=528, 1 wallclock secs ( 0.07 usr 0.02 sys + 0.42 cusr 0.04 csys = 0.55 CPU)
        Result: FAIL
        Failed 2/3 test programs. 0/528 subtests failed.
        make: *** [test_dynamic] Error 255
        -> FAIL Installing String::ToIdentifier::EN failed. See /home/wanradt/.cpanm/build.log for details.

    "Byte order is not compatible at..." seems to be the key, but to where?
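
    The trace points at Lingua::EN::Tagger rather than String::ToIdentifier::EN itself: Tagger loads its lexicon with Storable, and "Byte order is not compatible" means that serialized data was written by a Perl with a different architecture or byte order (typical after copying a perl5 tree between machines or switching between 32- and 64-bit perls). Forcing a rebuild of the offending module on this machine is the usual cure; a sketch:

        cpanm --reinstall Lingua::EN::Tagger
        # then retry the original install
        cpanm String::ToIdentifier::EN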

  • DNS caching server config problem

    - by Alex
    I have a BIND caching-only DNS server set up and working. I am bringing up a new AD domain controller that will also be a DNS server for that AD, but I don't want it responding to any DNS queries except those that are AD-related. So my goal is to leave the caching server as the primary DNS server for stations on the network and have it forward requests for the AD domain to the domain controller. My understanding is that I just need a forward zone for that domain pointing to the domain controller; however, it does not seem to be working, which leaves me to think that my caching server is not forwarding properly.

    For example, the AD uses the naming convention hostname.mydomain.local. If I do an nslookup and specify the domain controller's IP address as the server, I can query addresses that exist in DNS on that server, such as dc1.mydomain.local. However, the same query to my caching server times out (I get a response from the caching server if I query mydomain.local itself, but none of the objects in that domain). Any suggestions? Here is my named.conf file:

        options {
            directory "/var/named";
            listen-on { 192.168.0.14; 127.0.0.1; };
            forwarders { ; ; };
            forward first;
        };

        zone "." in {
            type hint;
            file "db.cache";
        };

        zone "0.0.127.in-addr.arpa" in {
            type master;
            file "db.127.0.0";
        };

        // forward zone for mydomain.local
        zone "mydomain.local" {
            type forward;
            forwarders { 192.168.1.21; };
        };
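
    One thing worth trying: pin the zone to forward-only so BIND never falls back to the root servers for the private .local name (a sketch; the zone is copied from the config above):

        zone "mydomain.local" {
            type forward;
            forward only;                  // never fall back to the roots for this zone
            forwarders { 192.168.1.21; };
        };

    After reloading, flush the cache so previously cached negative answers for mydomain.local don't mask the fix:

        rndc reload && rndc flush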

  • Dovecot starting and running, but not listening on any port

    - by Dženis Macanovic
    Among other things, I'm in charge of a Debian GNU/Linux (wheezy) domU for the mail services of the company I work for. Yesterday one HDD used by this particular server died. After installing Debian again, Dovecot decided to no longer listen on any ports (checked with netstat -l), while other services (like Postfix and MySQL) work without problems. dovecot -n:

        # 2.1.7: /etc/dovecot/dovecot.conf
        # OS: Linux 3.2.0-3-amd64 x86_64 Debian wheezy/sid ext3
        auth_mechanisms = plain login
        disable_plaintext_auth = no
        first_valid_uid = 150
        last_valid_uid = 150
        mail_gid = mail
        mail_location = maildir:/var/vmail/%d/%n
        mail_uid = vmail
        namespace inbox {
          inbox = yes
          location =
          prefix =
        }
        passdb {
          args = /etc/dovecot/dovecot-sql.conf.ext
          driver = sql
        }
        plugin {
          sieve = ~/.dovecot.sieve
          sieve_dir = ~/sieve
        }
        service auth {
          unix_listener /var/spool/postfix/private/auth {
            group = postfix
            mode = 0660
            user = postfix
          }
          unix_listener auth-userdb {
            group = mail
            mode = 0666
            user = vmail
          }
        }
        service imap-login {
          inet_listener imaps {
            port = 993
            ssl = yes
          }
        }
        service pop3-login {
          inet_listener pop3s {
            port = 995
            ssl = yes
          }
        }
        ssl_cert = </etc/ssl/private/mail.crt
        ssl_key = </etc/ssl/private/mail.key
        userdb {
          args = /etc/dovecot/dovecot-sql.conf.ext
          driver = sql
        }
        protocol imap {
          mail_max_userip_connections = 25
        }

    UID 150 is vmail (I double-checked the file permissions). I didn't install Dovecot from source, but via apt from the official Debian US mirror. There are no messages concerning Dovecot in /var/log/syslog except for:

        Oct 21 06:36:29 server dovecot: master: Dovecot v2.1.7 starting up (core dumps disabled)

    Any ideas?
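
    One thing that stands out: dovecot -n shows no protocols line. On Debian the listeners live in separate packages, so a reinstall that only pulled in dovecot-core leaves the master process running with nothing to serve, which matches the "starting up" log line followed by silence. A sketch of the check and fix (package names as in wheezy):

        dpkg -l 'dovecot*'                              # is anything besides dovecot-core installed?
        apt-get install dovecot-imapd dovecot-pop3d     # pulls in the imap/pop3 listeners
        doveconf protocols                              # should now report: protocols = imap pop3
        netstat -l | grep -E 'imap|pop3'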

  • Customer site is out of IP addresses, they want to go from /24 to /12 netmask... Bad idea?

    - by ewwhite
    One of my client sites called to ask me to change the subnet masks of the Linux servers I manage there while they re-IP/change the netmask of their network based on a 10.0.0.x scheme.

        "Can you change the server netmasks from 255.255.255.0 to 255.240.0.0?"
        You mean, 255.255.240.0?
        "No, 255.240.0.0."
        Are you sure you need that many IP addresses?
        "Yeah, we never want to run out of IP addresses."

    A quick check against the Subnet Cheat Sheet shows:

        - a 255.255.255.0 netmask (a /24) provides 256 addresses. It's easy to see how an
          organization could exhaust that.
        - a 255.240.0.0 netmask (a /12) provides 1,048,576 addresses.

    This is a small site with fewer than 200 users; I doubt they'd ever allocate more than 400 IP addresses. I suggested something that provides fewer hosts, like a /22 or /21 (1024 and 2048 addresses, respectively), but was unable to give a specific reason against using the /12 subnet. Is there anything this customer should be concerned about? Are there any specific reasons they shouldn't use such an incredibly large mask in their environment?
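
    For scale, the arithmetic for the suggested /22, using the common ipcalc utility (output trimmed and approximate; any 10.x base works the same way):

        $ ipcalc 10.0.0.0/22
        Network:   10.0.0.0/22
        Netmask:   255.255.252.0 = 22
        HostMin:   10.0.0.1
        HostMax:   10.0.3.254
        Hosts/Net: 1022

    That is more than double the expected 400 allocations while leaving the rest of 10/8 free for future sites, VPNs or acquisitions; a /12 burns a sixteenth of the entire private 10/8 space on one 200-user LAN.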

  • Basics about file/folder permissions on Win 7

    - by Altar
    Hi. Under Windows XP I never touched the permissions of a file or folder; I was happy with the way it worked. But recently I installed Windows 7 on a drive that previously hosted Windows XP. Now some programs do not have 'read' and/or 'write' access to their own folders, and I am not talking about system folders like 'Program Files' but normal folders like 'C:\my data\my own folder\program folder'.

    I see that folders created under Windows XP have some user groups that do not exist on 'normal' folders (folders created by me recently under Windows 7). For example, a Win XP folder has:

        Creator owner
        System
        Account unknown (S-1-5-21 blablabla...)
        Admins
        Users

    Win 7 folders have:

        Authenticated users
        System
        Admins
        Users

    How should I proceed? Should I give the "Users" account the right to write to the XP folders? Should I make the old XP folders carry the same groups as the normal Win 7 ones by adding the "Authenticated users" account to them? Should I delete the "Account unknown" account from my system, and if so, how? Many thanks.
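
    The "Account unknown" entry is an orphaned SID left over from the old XP install's user database. icacls can inspect and clean this up; a sketch (the SID below is a placeholder for the one your folder actually shows):

        :: see the current ACL
        icacls "C:\my data\my own folder"
        :: drop the orphaned SID recursively (numeric SIDs are prefixed with *)
        icacls "C:\my data\my own folder" /remove *S-1-5-21-XXXXXXXXXX /t
        :: grant the standard Users group modify rights, inherited by subfolders and files
        icacls "C:\my data\my own folder" /grant Users:(OI)(CI)M /t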

  • IPTables configuration help

    - by Sam
    I'm after some help with setting up iptables. Mostly the configuration is working, but regardless of what I try, I cannot allow localhost to access the local Apache only (i.e. localhost should reach localhost:80 and nothing else). Here is my script:

        #!/bin/bash
        # Allow root to access external web and ftp
        iptables -t filter -A OUTPUT -p tcp --dport 21 --match owner --uid-owner 0 -j ACCEPT
        iptables -t filter -A OUTPUT -p tcp --dport 80 --match owner --uid-owner 0 -j ACCEPT
        # Allow DNS queries
        iptables -A OUTPUT -p udp --dport 53 -j ACCEPT
        iptables -A OUTPUT -p tcp --dport 53 -j ACCEPT
        # Allow in- and outbound SSH to/from any server
        iptables -A INPUT -p tcp -s 0/0 --dport 22 -j ACCEPT
        iptables -A OUTPUT -p tcp -d 0/0 --sport 22 -j ACCEPT
        # Accept ICMP requests
        iptables -A INPUT -p icmp -s 0/0 -j ACCEPT
        iptables -A OUTPUT -p icmp -d 0/0 -j ACCEPT
        # Accept connections from any local machines, but disallow localhost access to networked machines
        iptables -A INPUT -s 10.0.1.0/24 -j ACCEPT
        iptables -A OUTPUT -d 10.0.1.0/24 -j DROP
        # Drop ALL other traffic
        iptables -A OUTPUT -p tcp -d 0/0 -j DROP
        iptables -A OUTPUT -p udp -d 0/0 -j DROP

    I have tried many permutations and I'm obviously missing something. I placed the rules above the in/outbound SSH ones, so it's not rule precedence. If someone could give me the heads-up on allowing only the local machine to access the local web server, that'd be great. Cheers guys.
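
    The script never exempts the loopback interface, so the final OUTPUT DROP rules swallow traffic to 127.0.0.1, which is why localhost can't reach its own Apache. A sketch of the missing piece (it must run before the DROP rules):

        # accept everything on the loopback interface, both directions
        iptables -A INPUT  -i lo -j ACCEPT
        iptables -A OUTPUT -o lo -j ACCEPT

    If loopback really must be restricted to port 80 only, both the request (dport 80) and the reply (sport 80) directions need matching rules instead of the blanket accept above.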

  • Strange issue in header location redirect

    - by hd01
    I have three websites (example1.com, example2.com, example3.com) hosted on one server. There is a page test.php on example1.com containing just this code:

        <?php
        header('Location: http://example2.com/a.php');
        ?>

    When I browse test.php it goes to http://example1.com/a.php; it doesn't treat the URL as another domain, it tries to find the page on itself. But when I put http://google.com there instead of http://example2.com/a.php, it works correctly. I'm really confused. What is the problem? Should I set some configuration on the server? (I am the administrator of the hosting server.) P.S. The server is behind a Pound server.

    Here's the Firebug Net output for example1.com/test.php:

        Response headers:
        HTTP/1.1 302 Found
        Date: Tue, 09 Oct 2012 09:03:34 GMT
        Server: Apache/2.2.16 (Debian)
        Location: http://example1.com/a.php
        Vary: Accept-Encoding
        Content-Encoding: gzip
        Content-Length: 21
        Keep-Alive: timeout=15, max=100
        Connection: Keep-Alive
        Content-Type: text/html; charset=utf-8

        Request headers:
        Accept: text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8
        Accept-Encoding: gzip, deflate
        Accept-Language: en-us,en;q=0.5
        Connection: keep-alive
        Cookie: mycookie
        Host: example1.com
        User-Agent: Mozilla/5.0 (X11; Linux i686; rv:14.0) Gecko/20100101 Firefox/14.0.1
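
    The response headers show that the Location has already been rewritten to example1.com by the time it reaches the browser, which points at Pound: by default (RewriteLocation 1) Pound rewrites Location headers that reference its own listener or a backend, and since example2.com resolves to the same backend it gets "corrected" to the requesting host. The redirect to google.com survives because it matches neither. If that is the cause here, disabling the behavior in pound.cfg should fix it (a sketch):

        # in the relevant ListenHTTP section of pound.cfg
        RewriteLocation 0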

  • Routing WIFI and LAN for specific traffic

    - by jakebird451
    I have two network devices aboard my MacBook Pro:

        - Wi-Fi (en1): used for general traffic; gets an IP of 192.168.19.x via DHCP
        - LAN (en0): used for specific traffic; static IP 192.168.2.10, connected only
          to a switch (no router) for a direct connection

    There are four IP addresses I need to reach on the LAN:

        192.168.2.1
        192.168.2.21
        192.168.2.20
        192.168.2.30

    The rest of the traffic needs to go over Wi-Fi. I have tried setting up a routing table for the specific IP addresses, but I only managed to mess up my network. I do not venture out into the world of networking too often, but this was the latest command I tried:

        sudo route add -host 192.168.2.30 -interface en0

    This command killed my ability to use ping: it told me that ping could not allocate memory (is that even possible?). It also killed my Wi-Fi access. Logging out and back in fixed the issue. I would not mind making the solution permanent, but a temporary routing setup is fine too.
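
    Since 192.168.2.10/24 is configured on en0, the kernel should already have a connected route for 192.168.2.0/24 on that interface, so explicit host routes may be unnecessary; the usual culprit is service order (the LAN service sitting above Wi-Fi steals the default route). If host routes are still wanted, a sketch (-n skips name lookups, which may be what hung ping):

        for ip in 192.168.2.1 192.168.2.20 192.168.2.21 192.168.2.30; do
            sudo route -n add -host "$ip" -interface en0
        done

    Also drag Wi-Fi above the LAN service in System Preferences > Network > Set Service Order, so the default route stays on en1.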

  • mod_rewrite changes case even if not matching RewriteCond?

    - by kirdie
    I have a really strange problem with my MediaWiki which I want to have articles of the form mywiki.org/MyArticle. Now I got most of it to work using the following code but it mysteriously cannot display the logo anymore. RewriteEngine On # don't rewrite valid requests to files and directories RewriteCond %{DOCUMENT_ROOT}%{REQUEST_URI} !-f RewriteCond %{DOCUMENT_ROOT}%{REQUEST_URI} !-d # mywiki.org/MyArticle gets rewritten to mywiki.org/index.php/MyArticle RewriteRule ^/(.*)$ /index.php/$1 [L,QSA] Now when I type in mywiki.org/img/logo.jpg in my browser the adress changes to http://wiki.geoknow.eu/Img/logo.jpg (capital I) and I get to the empty article page but the image is definitely there (in my document root under the img folder): /var/www/mywiki.org$ ls img logo.jpg So far so bad. But now it gets really crazy: When I add RewriteCond %{REQUEST_URI} !^/.*\.jpg my adress still gets rewritten and my access log says - - [05/Dec/2012:16:30:21 +0100] "GET /Img/geoknow_logo.jpg HTTP/1.1" 404 509 "-" "Mozilla/5.0 (X11; Linux x86_64; rv:17.0) Gecko/17.0 Firefox/17.0" Where does that capital I in Img come from? The rule is not even executed because at least one condition is definitely not met now and I also don't have any to lowercase-transformation defined anywhere. What is happening there and how can I repair this? P.S.: Now all of the sudden the problem went away (the image is displayed as it should and there is no capital replacement anymore. What can cause this and why does it spontanously appear and disappear?

  • HPET missing from available clocksources on CentOS

    - by squareone
    I am having trouble using HPET on a physical machine. It is not available, even though I have enabled it in the BIOS, forced it in GRUB, and triple-checked that my kernel is compiled with HPET support.

        Motherboard:    Supermicro X9DRW
        Processor:      2x Intel(R) Xeon(R) CPU E5-2640
        SAS controller: LSI Logic / Symbios Logic SAS2004 PCI-Express Fusion-MPT SAS-2 [Spitfire] (rev 03)
        Distro:         CentOS 6.3
        Kernel:         3.4.21-rt32 #2 SMP PREEMPT RT x86_64 GNU/Linux
        GRUB options:   hpet=force clocksource=hpet

    .config entries:

        CONFIG_HPET_TIMER=y
        CONFIG_HPET_EMULATE_RTC=y
        CONFIG_HPET=y

    dmesg | grep hpet:

        Command line: ro root=/dev/mapper/vg_xxxx-lv_root rd_NO_LUKS rd_LVM_LV=vg_xxxx/lv_root KEYBOARDTYPE=pc KEYTABLE=us rd_NO_MD SYSFONT=latarcyrheb-sun16 crashkernel=auto rd_LVM_LV=vg_xxxx/lv_swap rd_NO_DM LANG=en_US.UTF-8 rhgb quiet panic=5 hpet=force clocksource=hpet
        Kernel command line: ro root=/dev/mapper/vg_xxxx-lv_root rd_NO_LUKS rd_LVM_LV=vg_xxxx/lv_root KEYBOARDTYPE=pc KEYTABLE=us rd_NO_MD SYSFONT=latarcyrheb-sun16 crashkernel=auto rd_LVM_LV=vg_xxxx/lv_swap rd_NO_DM LANG=en_US.UTF-8 rhgb quiet panic=5 hpet=force clocksource=hpet

    Clocksources:

        $ cat /sys/devices/system/clocksource/clocksource0/current_clocksource
        tsc
        $ cat /sys/devices/system/clocksource/clocksource0/available_clocksource
        tsc jiffies

    What is even more confusing is that I have about a dozen other machines that use the same kernel .config and can use HPET fine. I fear it is a hardware issue, but would appreciate any advice or help on getting HPET available. Thanks in advance!
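
    Note that the dmesg grep above only matched the kernel command line, which suggests the kernel never registered an HPET device at all. A couple of checks to separate firmware from kernel (a sketch):

        dmesg | grep -i -e hpet -e clocksource   # a working box logs something like "hpet0: at MMIO 0xfed00000"
        sudo acpidump | grep -i HPET             # does the firmware expose an ACPI HPET table at all?

    If the ACPI table is missing, the BIOS option isn't taking effect (a BIOS update or a reset to defaults is worth a try); if the table is there but no hpet0 line appears, the kernel side deserves another look.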

  • How would I write a terminal command to download a folder with wget from a Media Temple (gs) server?

    - by racl101
    I'm trying to download a folder with wget from the Terminal (I'm on a Mac, if that matters) because my FTP client sucks and keeps timing out; it doesn't stay connected for long. So I was wondering if I could use wget to connect over the FTP protocol and download the directory in question. I have searched around the internet and attempted to write the command, but it keeps failing. Assume the following:

        ftp username: [email protected]
        ftp host:     ftp.s12345.gridserver.com
        ftp password: somepassword

    I have tried to write the command in the following ways:

        wget -r ftp://[email protected]:[email protected]/path/to/desired/folder/
        wget -r ftp://serveradmin:[email protected]/path/to/desired/folder/

    The first way gives this error:

        Bad port number.

    The second way gets a little further, but ends with:

        Resolving s12345.gridserver.com... 71.46.226.79
        Connecting to s12345.gridserver.com|71.46.226.79|:21... connected.
        Logging in as serveradmin ... Login incorrect.

    What could I be doing wrong?
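
    The "@" inside the username is the likely problem: in a URL, everything after the first "@" is parsed as the host, and the ":" in what follows then reads as a bad port. Either percent-encode the "@" as %40 or pass the credentials as options; a sketch:

        # percent-encode the @ inside the userinfo part
        wget -r 'ftp://serveradmin%40mydomain.com:[email protected]/path/to/desired/folder/'

        # or avoid URL-encoding entirely with wget's FTP credential options
        wget -r --ftp-user='[email protected]' --ftp-password='somepassword' \
            'ftp://ftp.s12345.gridserver.com/path/to/desired/folder/'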

  • nVidia performance with newer X and newer driver abysmal with Compiz

    - by Nakedible
    I recently upgraded Debian to Xorg 2.9.4 and installed nvidia-glx from experimental, version 260.19.21. This was somewhat of an uphill battle, as the dependencies for the experimental nvidia-glx package are still somewhat broken; I got it to work without forcing the installation of any packages and without modifying them. However, after the upgrade Compiz performance has been abysmal. I am using the desktop wall plugin, and switching viewports is really slow: each switch takes a few seconds. In addition, every effect Compiz performs, such as zoom animations for icons when launching applications, takes seconds. The viewport switching speed changes relative to the number of windows on that virtual screen: empty screens switch at almost normal speed, a single browser window works almost decently, but just four rxvt terminals slow the switches down to a crawl.

    My Compiz configuration should be pretty basic. Xorg is likewise configured without anything special; the only "custom" configuration is forcing the driver name to "nvidia". I've fiddled with nvidia-settings and compizconfig, trying different VSync settings, but none of that helped. My graphics card is an NVIDIA NVS 3100M (GT218) at PCI:1:0:0 (GPU-0), a laptop GPU from the GeForce 200 series, so raw graphics performance should not be a problem.

    EDIT: In the end nothing really worked, and I got really annoyed with the state of Compiz and its support in Debian. Many nVidia driver revisions have passed and I am using GNOME 3 now, so I am accepting the best answers to this question even though the issue was not resolved.

  • Cant get squid proxy to work

    - by danielgratz
    I need a Squid proxy on my CentOS server, but I just can't get it to work. I did yum install squid. Here is my squid.conf (with all comments removed):

        acl all src 0.0.0.0/0.0.0.0
        acl manager proto cache_object
        acl localhost src 127.0.0.1/255.255.255.255
        acl to_localhost dst 127.0.0.0/8
        acl SSL_ports port 443
        acl Safe_ports port 80
        acl Safe_ports port 21
        acl Safe_ports port 443
        acl Safe_ports port 70
        acl Safe_ports port 210
        acl Safe_ports port 1025-65535
        acl Safe_ports port 280
        acl Safe_ports port 488
        acl Safe_ports port 591
        acl Safe_ports port 777
        acl CONNECT method CONNECT
        acl our_networks src 192.168.1.0/24 192.168.2.0/24
        http_access allow our_networks
        http_access allow manager localhost
        http_access deny manager
        http_access deny !Safe_ports
        http_access deny CONNECT !SSL_ports
        http_access allow localhost
        http_access deny all
        icp_access allow all
        http_port 3128
        hierarchy_stoplist cgi-bin ?
        access_log /var/log/squid/access.log squid
        acl QUERY urlpath_regex cgi-bin \?
        cache deny QUERY
        refresh_pattern ^ftp:    1440  20%  10080
        refresh_pattern ^gopher: 1440   0%   1440
        refresh_pattern .           0  20%   4320
        acl apache rep_header Server ^Apache
        broken_vary_encoding allow apache
        coredump_dir /var/spool/squid

    Then I just put my server's public IP and port 3128 into my web browser's proxy settings, but it isn't working: I can't visit any website. Please help. Thanks.
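
    If the browser is not sitting inside 192.168.1.0/24 or 192.168.2.0/24, it matches none of the allow rules and falls through to "http_access deny all", which is the most likely failure here. A sketch (the address below is a placeholder for the client's real public IP):

        acl myclients src 203.0.113.7/32
        http_access allow myclients
        # this must appear above "http_access deny all"

    It's also worth confirming the server accepts the connection at all: Squid must be running, and port 3128 must be open in iptables or any firewall in front of the host:

        service squid status
        iptables -L -n | grep 3128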
