Search Results

Search found 10527 results on 422 pages for 'ccnet config'.


  • Apache crashes a few seconds after the start.

    - by Nacho
    Hi, i've got a problem with apache. When i try to start it (/etc/init.d/apache2 start) it dies after a few seconds. It shows up on "ps aux" consuming a lot of memory and then dies. I don't know what could be causing apache to consume this amount of memory: USER PID %CPU %MEM VSZ RSS TTY STAT START TIME COMMAND root 13379 1.0 0.3 14376 3908 ? Ss 22:31 0:00 /usr/sbin/apache2 -k start www-data 13383 0.0 0.4 197316 4196 ? Sl 22:31 0:00 /usr/sbin/apache2 -k start www-data 13390 0.0 0.3 172728 4172 ? Sl 22:31 0:00 /usr/sbin/apache2 -k start www-data 13396 0.0 0.3 156336 4160 ? Sl 22:31 0:00 /usr/sbin/apache2 -k start www-data 13400 0.0 0.3 148140 4156 ? Sl 22:31 0:00 /usr/sbin/apache2 -k start www-data 13403 0.0 0.3 131748 4148 ? Sl 22:31 0:00 /usr/sbin/apache2 -k start Here is a htop screenshot: http://i.imgur.com/N4Chh.png It happened suddenly, no change had been made to server config, so i don't know whats causing it. The error log of my virtual servers shows this: [Sun Jan 30 22:19:50 2011] [alert] (11)Resource temporarily unavailable: mod_wsgi (pid=9685): Couldn't create worker thread 11 in daemon process 'fb.ebookmetafinder.com'. [Sun Jan 30 22:19:55 2011] [alert] (11)Resource temporarily unavailable: mod_wsgi (pid=9685): Couldn't create worker thread 19 in daemon process 'fb.ebookmetafinder.com'. [Sun Jan 30 22:29:40 2011] [alert] (11)Resource temporarily unavailable: mod_wsgi (pid=12009): Couldn't create worker thread 18 in daemon process 'fb.ebookmetafinder.com'. [Sun Jan 30 22:31:06 2011] [alert] (11)Resource temporarily unavailable: mod_wsgi (pid=13396): Couldn't create worker thread 15 in daemon process 'fb.ebookmetafinder.com'. [Sun Jan 30 22:35:02 2011] [alert] (11)Resource temporarily unavailable: mod_wsgi (pid=14009): Couldn't create worker thread 16 in daemon process 'fb.ebookmetafinder.com'. [Sun Jan 30 22:35:07 2011] [alert] (11)Resource temporarily unavailable: mod_wsgi (pid=14009): Couldn't create worker thread 17 in daemon process 'fb.ebookmetafinder.com'. I'm on a ubuntu server vps and i use mod_wsgi with django. Thanks.
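
    The "Resource temporarily unavailable ... Couldn't create worker thread" alerts usually mean the mod_wsgi daemon process hit a process/thread or memory limit rather than an Apache bug. As a hedged starting point (the daemon group name is taken from the log above; the limits and counts shown are only illustrative, not a known-good config):

        # check the limits Apache's user is running under
        ulimit -u    # max user processes/threads
        ulimit -v    # max virtual memory (kB)

        # hypothetical mod_wsgi daemon definition with an explicit, smaller thread count
        # WSGIDaemonProcess fb.ebookmetafinder.com processes=2 threads=5 maximum-requests=500

    If the limits look sane, comparing per-process memory before and after a crash can show whether a single WSGI application is ballooning.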

    Read the article

  • Configured Samba to join our domain, but logon fails from Windows machine

    - by jasonh
    I've configured a Fedora 11 installation to join our domain. It seems to join successfully (though it reports a DNS update failure) but when I try to access \\fedoraserver.test.mycompany.com I'm prompted for a password. So I enter adminuser and the password and that fails, so I try test.mycompany.com\adminuser and that too fails. What am I missing? EDIT (Update 9/1/09): I can now connect to the machine and see the shares on it (see my response to djhowell's answer) but when I try to connect, I get an error saying The network path was not found. I checked the log entry on the Fedora computer for the computer I'm connecting from (/var/log/samba/log.ComputerX) and it reads: [2009/09/01 12:02:46, 1] libads/cldap.c:recv_cldap_netlogon(157) no reply received to cldap netlogon [2009/09/01 12:02:46, 1] libads/ldap.c:ads_find_dc(417) ads_find_dc: failed to find a valid DC on our site (Default-First-Site-Name), trying to find another DC Config files as of 9/1/09: smb.conf: [global] Workgroup = TEST realm = TEST.MYCOMPANY.COM password server = DC.TEST.MYCOMPANY.COM security = DOMAIN server string = Test Samba Server log file = /var/log/samba/log.%m max log size = 50 idmap uid = 15000-20000 idmap gid = 15000-20000 windbind use default domain = yes cups options = raw client use spnego = no server signing = auto client signing = auto [share] comment = Test Share path = /mnt/storage1 valid users = adminuser admin users = adminuser read list = adminuser write list = adminuser read only = No I also set the krb5.conf file to look like this: [logging] default = FILE:/var/log/krb5libs.log kdc = FILE:/var/log/krb5kdc.log admin_server = FILE:/var/log/kadmind.log [libdefaults] default_realm = test.mycompany.com dns_lookup_realm = false dns_lookup_kdc = false ticket_lifetime = 24h forwardable = yes [realms] TEST.MYCOMPANY.COM = { kdc = dc.test.mycompany.com admin_server = dc.test.mycompany.com default_domain = test.mycompany.com } [domain_realm] dc.test.mycompany.com = test.mycompany.com .dc.test.mycompany.com = test.mycompany.com [appdefaults] pam = { debug = false ticket_lifetime = 36000 renew_lifetime = 36000 forwardable = true krb4_convert = false } I realize that there might be an issue with EXAMPLE.COM in there, however if I change it to TEST.MYCOMPANY.COM then it fails to join the domain with a preauthentication failure. As of 9/1/09, this is no longer the case.
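
    A few standard diagnostics can narrow down whether the join, winbind, or DC lookup is the failing piece; the names below all come from the question itself, so nothing here changes the config:

        net ads testjoin                          # is the machine account still valid?
        wbinfo -t                                 # check the trust secret with the DC
        wbinfo -u | head                          # can winbind enumerate domain users?
        smbclient -L //fedoraserver -U adminuser  # list shares as the domain user

    The cldap/ads_find_dc errors in the log suggest DC location (DNS/SRV records for the site) is worth checking before touching smb.conf further.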

    Read the article

  • L2TP over IPSec VPN with OpenSwan and XL2TPD can't connect, timeout on CentOS 6

    - by Disco
    I'm setting up L2TP over IPSec on my CentOS 6.3 fresh install. I have iptables flushed, permit all. Whenever I try to connect, I get a 'no reply from vpn' and nothing more. Here's my ipsec.conf file (server is 1.2.3.4): config setup nat_traversal=yes virtual_private=%v4:10.0.0.0/8,%v4:192.168.0.0/16,%v4:172.16.0.0/12 oe=off protostack=netkey conn L2TP-PSK-NAT rightsubnet=vhost:%priv also=L2TP-PSK-noNAT conn L2TP-PSK-noNAT authby=secret pfs=no auto=add keyingtries=3 rekey=no ikelifetime=8h keylife=1h type=transport left=1.2.3.4 leftprotoport=17/1701 right=%any rightprotoport=17/%any My /etc/ipsec.secrets 1.2.3.4 %any: PSK "password" My sysctl.conf (appended lines) net.ipv4.ip_forward = 1 net.ipv4.conf.default.rp_filter = 0 net.ipv4.conf.all.send_redirects = 0 net.ipv4.conf.default.send_redirects = 0 net.ipv4.conf.all.log_martians = 0 net.ipv4.conf.default.log_martians = 0 net.ipv4.conf.default.accept_source_route = 0 net.ipv4.conf.all.accept_redirects = 0 net.ipv4.conf.default.accept_redirects = 0 net.ipv4.icmp_ignore_bogus_error_responses = 1 Here's what 'ipsec verify' gives: # ipsec verify Checking your system to see if IPsec got installed and started correctly: Version check and ipsec on-path [OK] Linux Openswan U2.6.32/K2.6.32-279.19.1.el6.x86_64 (netkey) Checking for IPsec support in kernel [OK] SAref kernel support [N/A] NETKEY: Testing for disabled ICMP send_redirects [OK] NETKEY detected, testing for disabled ICMP accept_redirects [OK] Checking that pluto is running [OK] Pluto listening for IKE on udp 500 [OK] Pluto listening for NAT-T on udp 4500 [OK] Checking for 'ip' command [OK] Checking /bin/sh is not /bin/dash [WARNING] Checking for 'iptables' command [OK] Opportunistic Encryption Support [DISABLED] And I see xl2tpd is listening on 1701/udp: udp 0 0 1.2.3.4:1701 0.0.0.0:* 2096/xl2tpd
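
    Since the client only reports "no reply", it helps to confirm which leg fails: IKE (UDP 500/4500, handled by Openswan/pluto) or L2TP (UDP 1701, handled by xl2tpd). A hedged set of checks; the interface name is an assumption:

        ipsec auto --status | grep L2TP-PSK        # are both conns loaded?
        tcpdump -ni eth0 'udp port 500 or udp port 4500 or udp port 1701'
        tail -f /var/log/secure /var/log/messages  # pluto and xl2tpd typically log here on CentOS

    If IKE completes but nothing ever arrives on 1701, the problem is usually NAT traversal or the client, not the xl2tpd config.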

    Read the article

  • Linux authentication via ADS -- allowing only specific groups in PAM

    - by Kenaniah
    I'm taking the samba / winbind / PAM route to authenticate users on our Linux servers from our Active Directory domain. Everything works, but I want to limit which AD groups are allowed to authenticate. Winbind / PAM currently allows any enabled user account in the Active Directory, and pam_winbind.so doesn't seem to heed the require_membership_of=MYDOMAIN\\mygroup parameter. It doesn't matter if I set it in the /etc/pam.d/system-auth or /etc/security/pam_winbind.conf files. How can I force winbind to honor the require_membership_of setting? Using CentOS 5.5 with up-to-date packages. Update: turns out that PAM always allows root to pass through auth, by virtue of the fact that it's root. So as long as the account exists, root will pass auth. Any other account is subjected to the auth constraints. Update 2: require_membership_of seems to be working, except for when the requesting user has the root uid. In that case, the login succeeds regardless of the require_membership_of setting. This is not an issue for any other account. How can I configure PAM to force the require_membership_of check even when the current user is root? Current PAM config is below: auth sufficient pam_winbind.so auth sufficient pam_unix.so nullok try_first_pass auth requisite pam_succeed_if.so uid >= 500 quiet auth required pam_deny.so account sufficient pam_winbind.so account sufficient pam_localuser.so account required pam_unix.so broken_shadow password ..... (excluded for brevity) session required pam_winbind.so session required pam_mkhomedir.so skel=/etc/skel umask=0077 session required pam_limits.so session required pam_unix.so require_membership_of is currently set in the /etc/security/pam_winbind.conf file, and is working (except for the root case outlined above).
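
    One variant worth trying before fighting the root special case: pam_winbind is often more reliable when require_membership_of is given a SID instead of a DOMAIN\group string. A sketch (the group name is a placeholder):

        # resolve the group to a SID first
        wbinfo -n 'MYDOMAIN\mygroup'
        # then in /etc/security/pam_winbind.conf:
        #   require_membership_of = S-1-5-21-xxxxxxxxx-xxxxxxxxx-xxxxxxxxx-1234

    This does not change the root-bypass behaviour described above, but it rules out group-name resolution as a factor.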

    Read the article

  • Installing xampp on system that already have mysql

    - by Charith
    I'm rather new to PHP and XAMPP. I have a computer that has MySQL Server and MySQL Workbench installed, as I was working with Java and NetBeans. Now I want to use my computer for developing PHP and other web stuff too. I installed XAMPP successfully. But when I'm trying to access phpMyAdmin, it gives me an error saying the MySQL server rejected its connection. Actually I tried stopping my current MySQL service and installing it again. However, XAMPP has its own MySQL server in its installation path too. I tried configuring config.inc.php to use my existing installation of MySQL, which is on a separate path, but I failed. Can anyone please instruct me how to configure XAMPP to use my existing MySQL server for everything and ignore the one installed with it? I don't want two MySQL services running on my system and clashing in the future. I'll be glad if anyone can explain to me what is best to use when you're developing Java, PHP, C and all the stuff on the same machine. P.S.: I have been given a password for my existing MySQL server (user = root), as we usually do when installing MySQL alone.
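
    Pointing XAMPP's phpMyAdmin at the already-installed MySQL is mostly a matter of editing config.inc.php inside the XAMPP phpMyAdmin folder and simply not starting XAMPP's bundled MySQL from the control panel. A sketch with illustrative values (the exact path and port depend on the local install):

        /* xampp/phpMyAdmin/config.inc.php -- illustrative values only */
        $cfg['Servers'][$i]['host']      = '127.0.0.1';  // the existing MySQL service
        $cfg['Servers'][$i]['port']      = '3306';       // its port
        $cfg['Servers'][$i]['auth_type'] = 'cookie';     // prompt for the root password
        $cfg['Servers'][$i]['AllowNoPassword'] = false;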

    Read the article

  • SQL Server 2008: Getting Login failed for user "Domain\User". Failed to open the explicitly specified database [CLIENT: IP.ADD.RR.ESS]

    - by GodEater
    This is a very similar issue to " SQL Server 2008 login problem with ASP.NET application: Failed to open the explicitly specified database " which unfortunately seems to have gone unsolved. My issue here is subtly different. Firstly the account failing login is not 'NT AUTHORITY\NETWORK SERVICE' - it's an actual domain account. Secondly, there are two machines involved - I gathered from the first question it was a single machine running both the IIS and SQL instances. The application which is trying to connect to the database is an ASP.NET one running on another server (if that makes any different, I'm not sure it does.) The ConnectionString being used in the web.config for the application is : data source=MySQLServer;initial catalog=MyDatabase;integrated security=sspi; And the Application Pool is set to NetworkService for Identity. So - in the web app, I get the following error : Cannot open database "MyDatabase" requested by the login. The login failed. Login failed for user 'MyDomain\WebServerMachineName$' In the SQL Server logs I see : Login failed for user 'MyDomain\WebServerMachineName$'. Reason: Failed to open the explicitly specified database. [CLIENT: Web.Server.IP.Address] Running this bit of SQL against the database in question : USE [MyDatabase] GO SELECT SDP.name AS [User Name], SDP.type_desc AS [User Type], UPPER(SDPS.name) AS [Database Role] FROM sys.database_principals SDP INNER JOIN sys.database_role_members SDRM ON SDP.principal_id=SDRM.member_principal_id INNER JOIN sys.database_principals SDPS ON SDRM.role_principal_id = SDPS.principal_id Gets me this result : MyDomain\WebServerMachineName$ WINDOWS_USER DB_DDLADMIN MyDomain\WebServerMachineName$ WINDOWS_USER DB_DATAREADER MyDomain\WebServerMachineName$ WINDOWS_USER DB_DATAWRITER Which appears to me to indicate I've got the permissions right. Anyone have any idea why it's not working, or how I can narrow the issue down some more?
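
    Since the database-level roles look correct, one common cause is that the server-level login for the machine account is missing or disabled, or the database user is no longer mapped to it. A hedged T-SQL sketch (object names are the ones from the question):

        -- does a server-level login exist for the machine account, and is it enabled?
        SELECT name, is_disabled FROM sys.server_principals
        WHERE name = 'MyDomain\WebServerMachineName$';

        -- if it is missing, recreate the mapping (sketch only):
        -- CREATE LOGIN [MyDomain\WebServerMachineName$] FROM WINDOWS;
        -- USE MyDatabase;
        -- CREATE USER [MyDomain\WebServerMachineName$] FOR LOGIN [MyDomain\WebServerMachineName$];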

    Read the article

  • Server unreachable without www

    - by deamon
    My server is unreachable without the "www." prefix, even when trying it with ping. The DNS entry looks like this: $TTL 86400 @ IN SOA ns1.first-ns.de. postmaster.robot.first-ns.de. ( 2011010600 ; serial 14400 ; refresh 1800 ; retry 604800 ; expire 86400 ) ; minimum @ IN NS robotns3.second-ns.com. @ IN NS robotns2.second-ns.de. @ IN NS ns1.first-ns.de. @ IN A 1.2.3.4 localhost IN A 127.0.0.1 mail IN A 1.2.3.4 www IN A 1.2.3.4 ftp IN CNAME www imap IN CNAME www loopback IN CNAME localhost pop IN CNAME www relay IN CNAME www smtp IN CNAME www @ A DNS record of the same type for another domain on the same server is working with and without "www". And the VirtualHost config looks like this: <VirtualHost *:80> ServerName somewhere.com ServerAlias www.somewhere.com ServerSignature Off ... </VirtualHost> Any idea what could be wrong?

    Read the article

  • Hyper-v on 2012R2 startup gen1 vm causes the host to freeze up

    - by sputnik
    I've searched a lot to resolve the following issue, but nothing helped me. My problem is, that starting up a first-gen vm locks up the whole host. Only a hard reset helps. Second-gen vm starts and runs perfectly. The freezes happened on 3 different vms. FreeBSD, Ubuntu, Windows Server 2008R2, while Windows 8.1 on second gen config works perfectly. Im using this pc mainly as a workstation. No eventlog errors nor dumps are generated. My system: Windows Server 2012R2 FX-8350, non OC ASRock 870 Extreme R2 (Crappy board imho) 32GB DDR3 1866@1600 (My motherboard, against the "support" for 1866ram won't work with full speed) 120GB SSD 4.5TB Storage space device I dont think that its due to my system, because vmware workstation was running without problems. Did I forget to configure something? Any help is appreciated. P.S: Even deactivating C1E, C6, C&Q didnt work. P.P.S: With no virtual network adapter set, the system still locks up. Creating a first gen vm without any hdds and network and launching works. Attaching a boot dvd causes the host to freeze. The host freezes as the gen1 vm begins to boot, doesn't matter if from dvd or hdd

    Read the article

  • Why is lighttpd and fastcgi keeping sending me the *.scgi file instead of the website content?

    - by e-satis
    I have the following config: server.modules = ( "mod_compress", "mod_access", "mod_alias", "mod_rewrite", "mod_redirect", "mod_secdownload", "mod_h264_streaming", "mod_flv_streaming", "mod_accesslog", "mod_auth", "mod_status", "mod_expire", "mod_fastcgi" ) [...] fastcgi.server = ( ".php" => (( "bin-path" => "/usr/bin/php-cgi", "socket" => "/var/tmp/lighttpd/php-fastcgi.socket" + var.PID, "max-procs" => 1, "kill-signal" => 9, "idle-timeout" => 10, "bin-environment" => ( "PHP_FCGI_CHILDREN" => "200", "PHP_FCGI_MAX_REQUESTS" => "1000" ), "/pyapps/essai/blondes.fcgi" => ( "main" => ( "socket" => "/var/tmp/lighttpd/django-fastcgi.socket", ), ), "bin-copy-environment" => ( "PATH", "SHELL", "USER" ), "broken-scriptfilename" => "enable" ))) [...] $HTTP["host"] =~ "(^|www\.)cam\.com(\:[0-9]*)?$" { server.document-root = "/home/cam/web/" accesslog.filename = "/home/cam/log/access.log" server.errorlog = "/home/cam/log/error.log" server.follow-symlink = "enable" # files to check for if .../ is requested server.indexfiles = ( "index.php", "index.html", "index.htm", "index.rb") url.rewrite = ( "^(/blondes/.*)$" => "/pyapps/essai/blondes.fcgi$1" ) } I have the following dir tree: /home/tv/web/ `-- pyapps `-- essai `-- __init__.py `-- blondes.fcgi `-- blondes.pid `-- django-fcgi.py `-- manage.py `-- manage.pyo `-- plop `-- settings.py `-- urls.py No error when restarting lighttpd. Then I run: ./manage.py runfcgi method=prefork socket=/var/tmp/lighttpd/django-fastcgi.socket daemonize=false pidfile=blondes.pid No errors either. I then go to http://cam.com/blondes/. It offers me to download an empty file. I checked permissions but everything is set to the same user and group, and they work for the PHP site. The file /var/tmp/lighttpd/django-fastcgi.socket exists. When I reload the page, I get no output in the error logs, nor from the manage.py runfcgi command. I probably missed something obvious, but what?
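
    When lighttpd offers the .fcgi file as a download, the rewritten path never matched a fastcgi.server entry, so the file was served statically -- and in the config above the "/pyapps/essai/blondes.fcgi" mapping appears to be nested inside the ".php" block instead of being its own entry. A hedged sketch of it as a separate top-level mapping, with check-local disabled so lighttpd does not insist on finding the file under the document root:

        fastcgi.server += ( "/pyapps/essai/blondes.fcgi" =>
            (( "socket"      => "/var/tmp/lighttpd/django-fastcgi.socket",
               "check-local" => "disable"
            ))
        )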

    Read the article

  • How can I both pipe and display output in Windows' command line?

    - by Bob
    I have a process I need to run within a batch file. This process produces some output. I need to both display this output to the screen and send (pipe) it to another program. The bash method uses tee: echo 'ee' | tee /dev/tty | foo Is there an equivalent for Windows? I am happy to use PowerShell if necessary. There are tee ports for Windows, but there does not appear to be an equivalent for /dev/tty, which complicates matters. The specific use-case here: I have a program (launch4j) that I need to run, displaying output to the user. At the same time, I need to be able to detect success or failure in the script. Unfortunately, this program does not set an exit code, and I cannot force it to do so. My current workaround involves piping to find, to search the output (launch4j config.xml | find "Successfully created") - however, that swallows the output I need to display. Therefore, I need some way to both display to the screen and send the output to a command - and this command should be able to set ERRORLEVEL (it cannot run asynchronously).
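
    In PowerShell this can be done without a tee port: Tee-Object can copy the pipeline into a variable while the output still reaches the console, and the wrapper script can then set its own exit code. A sketch, assuming launch4j's "Successfully created" message (taken from the workaround above) is a reliable success marker:

        # tee-and-check.ps1 -- show launch4j output and still detect success
        & launch4j config.xml 2>&1 | Tee-Object -Variable out
        if ($out -match 'Successfully created') { exit 0 } else { exit 1 }

    A batch file calling "powershell -File tee-and-check.ps1" then sees the usual ERRORLEVEL.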

    Read the article

  • gitweb- fatal: not a git repository

    - by Robert Mason
    So I have set up a simple server running debian stable (squeeze), and have configured git. Using gitolite, I have all functionality (at least the basic clone/push/pull/commit) working. Installation of gitweb went without any issues. However, when I access gitweb, I get a gitweb screen without any repos listed. # tail -n 1 /var/log/apache2/error.log [DATE] [error] [client IP_ADDRESS] fatal: Not a git repository: '/var/lib/gitolite/repositories/testrepo.git' # cd /var/lib/gitolite/repositories/testrepo.git # ls branches config HEAD hooks info objects refs Here is what I see in /var/lib/gitolite/projects.list: testrepo.git And in /etc/gitweb.conf: # path to git projects (<project>.git) $projectroot = "/var/lib/gitolite/repositories"; # directory to use for temp files $git_temp = "/tmp"; # target of the home link on top of all pages #$home_link = $my_uri || "/"; # html text to include at home page $home_text = "indextext.html"; # file with project list; by default, simply scan the projectroot dir. $projects_list = "/var/lib/gitolite/projects.list"; # stylesheet to use $stylesheet = "gitweb.css"; # javascript code for gitweb $javascript = "gitweb.js"; # logo to use $logo = "git-logo.png"; # the 'favicon' $favicon = "git-favicon.png"; What is missing?
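
    With gitolite, the most common cause of this exact error is that the web server user cannot descend into /var/lib/gitolite/repositories, since gitolite keeps the tree private by default. Two hedged permission probes (the Apache user name may differ):

        sudo -u www-data ls /var/lib/gitolite/repositories
        sudo -u www-data git --git-dir=/var/lib/gitolite/repositories/testrepo.git rev-parse --git-dir

    If those fail, the usual options are adding www-data to the gitolite group or relaxing the repository umask in gitolite's rc file (the setting name varies by gitolite version), then fixing permissions on the existing repos.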

    Read the article

  • ZFS with L2ARC (SSD) slower for random seeks than without L2ARC

    - by Florian Kruse
    I am currently testing ZFS (Opensolaris 2009.06) in an older fileserver to evaluate its use for our needs. Our current setup is as follows: Dual core (2,4 GHz) with 4 GB RAM 3x SATA controller with 11 HDDs (250 GB) and one SSD (OCZ Vertex 2 100 GB) We want to evaluate the use of a L2ARC, so the current ZPOOL is: $ zpool status pool: tank state: ONLINE scrub: none requested config: NAME STATE READ WRITE CKSUM afstank ONLINE 0 0 0 raidz1 ONLINE 0 0 0 c11t0d0 ONLINE 0 0 0 c11t1d0 ONLINE 0 0 0 c11t2d0 ONLINE 0 0 0 c11t3d0 ONLINE 0 0 0 raidz1 ONLINE 0 0 0 c13t0d0 ONLINE 0 0 0 c13t1d0 ONLINE 0 0 0 c13t2d0 ONLINE 0 0 0 c13t3d0 ONLINE 0 0 0 cache c14t3d0 ONLINE 0 0 0 where c14t3d0 is the SSD (of course). We run IO tests with bonnie++ 1.03d, size is set to 200 GB (-s 200g) so that the test sample will never be completely in ARC/L2ARC. The results without SSD are (average values over several runs which show no differences) write_chr write_blk rewrite read_chr read_blk random seeks 101.998 kB/s 214.258 kB/s 96.673 kB/s 77.702 kB/s 254.695 kB/s 900 /s With SSD it becomes interesting. My assumption was that the results should be in worst case at least the same. While write/read/rewrite rates are not different, the random seek rate differs significantly between individual bonnie++ runs (between 188 /s and 1333 /s so far), average is 548 +- 200 /s, so below the value w/o SSD. So, my questions are mainly: Why do the random seek rates differ so much? If the seeks are really random, they should not differ much (my assumption). So, even if the SSD is impairing the performance it should be the same in each bonnie++ run. Why is the random seek performance worse in most of the bonnie++ runs? I would assume that some part of the bonnie++ data is in the L2ARC and random seeks on this data performs better while random seeks on other data just performs similarly like before.
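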

    Read the article

  • Can I recover a zpool after it's been exported, given that devices have not been reallocated?

    - by cali-spc
    I had a zpool we'll call 'testpool'. testpool had 3 devices included in it, and a single zfs called 'test'. I needed to move 'test' to a new, smaller pool. I wanted to name the new pool the same name 'testpool'. Basically did the following. zfs send testpool@backup > /tmp/test-dump zpool export -f testpool zpool create -f testpool newdevice zfs receive -F testpool < /tmp/test-dump Unfortunately I found out that the testpool@backup snapshot was the wrong snapshot. Too old. I have yet to reallocate the three devices that were in the OLD testpool. (None of these 3 devices are 'newdevice', they are a separate 3.) Is there any way I can recover data in those devices? I'm thinking since I named the new, smaller pool the same as the old zpool, I'm pretty much SOL. But if not, that would be nice to know. Edit: More info I did a 'zpool import' and got this. bash-3.00# zpool import pool: testpool id: 14781458723915654709 state: ONLINE action: The pool can be imported using its name or numeric identifier. config: testpool ONLINE c5t8d0 ONLINE c5t9d0 ONLINE c5t10d0 ONLINE So I'm guessing I just need the syntax to import this zpool using its numeric identifier, while giving it a new name. S.
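
    Since the old pool was exported rather than destroyed and its three disks are untouched, it should be importable under a different name by giving zpool import the numeric identifier followed by the new name to use. A sketch using the id from the output above:

        # import the old pool under a fresh name, then check its datasets
        zpool import 14781458723915654709 testpool_old
        zfs list -r testpool_old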

    Read the article

  • Can't get rsync over sftp to work

    - by Patrik
    I'm trying to set up a backup system from an Ubuntu server to a Synology NAS (DS413j) using rsync and sftp. I have created a user for this that we can call ubuntu-backup. I have a directory in ubuntu-backup home directory called www where the backup will be saved. I have enabled Network Backup in DSM The user ubuntu-backup has full access to it's home directory Here is my rsync config file on the Synology NAS: #motd file = /etc/rsyncd.motd #log file = /var/log/rsyncd.log pid file = /var/run/rsyncd.pid lock file = /var/run/rsync.lock use chroot = no [NetBackup] path = /var/services/NetBackup comment = Network Backup Share uid = root gid = root read only = no list = yes charset = utf-8 auth users = root secrets file = /etc/rsyncd.secrets [ubuntu-backup] path = /volume1/homes/ubuntu-backup/www comment = Ubuntu Backup uid = ubuntu-backup gid = users read only = false auth users = ubuntu-backup secrets file = /etc/rsyncd.secrets The permissions on /volume1/homes/ubuntu-backup/www is ubuntu-backup:users 777 Here is the command i'm running. rsync -aHvhiPb /var/www/ [email protected]:./ The result: sending incremental file list ERROR: module is read only rsync error: syntax or usage error (code 1) at main.c(1034) [Receiver=3.0.9] rsync: connection unexpectedly closed (9 bytes received so far) [sender] rsync error: error in rsync protocol data stream (code 12) at io.c(605) [sender=3.0.9] If I'm running this: rsync -aHvhiPb /var/www/ [email protected] It looks like its sending files. No errors. But I cant find anything on the NAS.
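
    One detail that matters here: with a single colon ("host:path") stock rsync runs over ssh and the rsyncd.conf modules are not what is being addressed, while the double-colon daemon syntax targets the [ubuntu-backup] module directly. A hedged example of the daemon form (nas-address is a placeholder for the Synology's IP, and its rsync daemon must actually be listening on port 873 for this to apply):

        # daemon syntax: host::module/path, so rsyncd.conf's [ubuntu-backup] section applies
        rsync -aHvhiPb /var/www/ ubuntu-backup@nas-address::ubuntu-backup/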

    Read the article

  • How to prevent remote hosts from delivering mail to Postfix with spoofed From header?

    - by Hongli Lai
    I have a host, let's call it foo.com, on which I'm running Postfix on Debian. Postfix is currently configured to do these things: All mail with @foo.com as recipient is handled by this Postfix server. It forwards all such mail to my Gmail account. The firewall thus allows port 25. All mail with another domain as recipient is rejected. SPF records have been set up for the foo.com domain, saying that foo.com is the sole origin of all mail from @foo.com. Applications running on foo.com can connect to localhost:25 to deliver mail, with [email protected] as sender. However I recently noticed that some spammers are able to send spam to me while passing the SPF checks. Upon further inspection, it looks like they connect to my Postfix server and then say HELO bar.com MAIL FROM:<[email protected]> <---- this! RCPT TO:<[email protected]> DATA From: "Buy Viagra" <[email protected]> <--- and this! ... How do I prevent this? I only want applications running on localhost to be able to say MAIL FROM:<[email protected]>. Here's my current config (main.cf): https://gist.github.com/1283647
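
    The standard fix is a sender-address restriction: reject any MAIL FROM in foo.com unless the client is local (or authenticated). A sketch; the map file name is illustrative:

        # /etc/postfix/main.cf
        smtpd_sender_restrictions =
            permit_mynetworks,
            check_sender_access hash:/etc/postfix/sender_access

        # /etc/postfix/sender_access
        foo.com    REJECT sender address spoofing

        # then: postmap /etc/postfix/sender_access && postfix reload

    With that in place, remote clients can still deliver to @foo.com recipients but can no longer claim a foo.com envelope sender.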

    Read the article

  • how to debug when xinetd says : got signal 17 (child exited)

    - by Faizan Shaik
    I am trying to start the services vsftpd and sshd using xinetd. My config files are as follows. /etc/xinetd.conf defaults { instances = 60 log_type = FILE /var/log/xinetdlog log_on_success = HOST PID log_on_failure = HOST cps = 25 30 only_from = localhost } includedir /etc/xinetd.d /etc/xinetd.d/ftp service ftp { disable = no server = /usr/sbin/vsftpd server_args = -l user = root socket_type = stream protocol = tcp wait = no instances = 4 flags = REUSE nice = 10 log_on_success += DURATION HOST USERID only_from = 127.0.0.1 10.0.0.0/24 } /etc/xinetd.d/ssh service ssh { disable = no log_on_failure += USERID server = /usr/sbin/sshd user = root socket_type = stream protocol = tcp wait = no instances = 20 flags = REUSE only_from = 127.0.0.1 10.0.0.0/24 } Even though I've included the only_from attribute, the vsftpd server as well as the ssh server refuse connections from localhost, while the vsftpd and ssh servers work fine individually when I check with "service vsftpd start" and "service ssh start". When I debugged using "xinetd -d" through the terminal I got this output: 13/10/20@00:06:08: DEBUG: 3592 {cnf_start_services} Started service: ftp 13/10/20@00:06:08: DEBUG: 3592 {cnf_start_services} Started service: ssh 13/10/20@00:06:08: DEBUG: 3592 {cnf_start_services} mask_max = 8, services_started = 2 13/10/20@00:06:08: NOTICE: 3592 {main} xinetd Version 2.3.14 started with libwrap loadavg options compiled in. 13/10/20@00:06:08: NOTICE: 3592 {main} Started working: 2 available services 13/10/20@00:06:08: DEBUG: 3592 {main_loop} active_services = 2 13/10/20@00:06:16: DEBUG: 3592 {main_loop} select returned 1 13/10/20@00:06:16: DEBUG: 3592 {server_start} Starting service ftp 13/10/20@00:06:16: DEBUG: 3592 {main_loop} active_services = 2 13/10/20@00:06:16: DEBUG: 3607 {exec_server} duping 9 13/10/20@00:06:16: DEBUG: 3592 {main_loop} active_services = 2 13/10/20@00:06:16: DEBUG: 3592 {main_loop} select returned 1 13/10/20@00:06:16: DEBUG: 3592 {check_pipe} Got signal 17 (Child exited) 13/10/20@00:06:16: DEBUG: 3592 {child_exit} waitpid returned = 3607 13/10/20@00:06:16: DEBUG: 3592 {server_end} ftp server 3607 exited 13/10/20@00:06:16: DEBUG: 3592 {svc_postmortem} Checking log size of ftp service 13/10/20@00:06:16: INFO: 3592 {conn_free} freeing connection 13/10/20@00:06:16: DEBUG: 3592 {child_exit} waitpid returned = -1 Both services are getting started but neither of them is working. After banging on this for 3-4 hours I still don't have any clue about this error. Any help would be appreciated. Thanks!
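
    Two details frequently cause exactly this "child starts, then exits immediately" pattern with these particular services: vsftpd must be told not to open its own listening socket when xinetd owns it, and sshd needs its inetd flag. A hedged sketch of both changes (config paths may differ by distribution):

        # /etc/vsftpd.conf -- let xinetd own the socket
        listen=NO

        # /etc/xinetd.d/ssh -- run sshd in inetd mode
        server      = /usr/sbin/sshd
        server_args = -i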

    Read the article

  • .htaccess redirect to error page if port is not 80

    - by Momo
    I'm running a portable server from a USB stick. The thing is I also have WAMP installed on my local machine, and Apache somehow gets started on Windows startup, because of some random reason which I don't recall now and it can't be changed. I want to prepare my portable server for situations like this, so closing httpd.exe from the process list and starting my portable server is not an option. Anyway, because of the already active httpd.exe my portable server's WordPress site can only be accessed through localhost:81 - this is a problem as the WP site is very dependent on the URL and I don't want to include the URL with the port in the WP database. Here is what I want to do through .htaccess: on any path except for the error.php file, check if the port is not 80; if not port 80, redirect to /error.php?code=port. Is it possible for this to have priority over WP's redirection or URL handling? In error.php I provide info on how to manually close httpd.exe and such so my family and friends can access the portable site. It's sort of like a gallery and calendar application for events and other such stuff... Please help? I can't figure it out at all. I know others may not have Apache already running, but I want to prepare for such a situation. Something like the following, but the following doesn't work. # BEGIN WordPress <IfModule mod_rewrite.c> <If "%{SERVER_PORT} = 80"> RewriteEngine On RewriteBase / RewriteRule ^index\.php$ - [L] RewriteCond %{REQUEST_FILENAME} !-f RewriteCond %{REQUEST_FILENAME} !-d RewriteRule . /index.php [L] </If> <Else> RewriteEngine On RewriteRule ^(error.php)($|/) - [L] RewriteRule ^(.*)$ /error.php?code=port [L] </Else> </IfModule> # END WordPress By the way, the portable server Server2Go automatically generates vhosts based on the hostname set in its config file and changes ports if the port (e.g. 80) is already in use.
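
    One likely reason the snippet above is silently ignored: <If>/<Else> blocks only exist in Apache 2.4, and portable WAMP-style stacks of that era usually ship 2.2. The same test can be written as RewriteCond rules placed before the WordPress block; a sketch:

        # send any request not on port 80 (except error.php itself) to the notice page
        RewriteEngine On
        RewriteCond %{SERVER_PORT} !^80$
        RewriteCond %{REQUEST_URI} !^/error\.php
        RewriteRule ^(.*)$ /error.php?code=port [L]

    Because this appears before WordPress's own rules and ends with [L], it takes priority over WP's URL handling.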

    Read the article

  • apache using mod_auth_kerb always asks for the password twice

    - by DrStalker
    (Debian Squeeze) I'm trying to set apache up to use Kerberos authentication to allow AD users to log in. It is working, but prompts the user twice for a username and password, with the first time being ignored (no matter what is put it in.) Only the second prompt includes the AuthName string from the config (i.e.: the first windows is a generic username/password one, the second includes the title "Kerberos Login") I'm not worried about integrated windows authentication working at this stage, I just want users to be able to login with their AD account so we don't need to set up a second repository of user accounts. How do I fix this to eliminate that first useless prompt? The directives in the apache2.conf file: <Directory /var/www/kerberos> AuthType Kerberos AuthName "Kerberos Login" KrbMethodNegotiate On KrbMethodK5Passwd On KrbAuthRealms ONEVUE.COM.AU.LOCAL Krb5KeyTab /etc/krb5.keytab KrbServiceName HTTP/[email protected] require valid-user </Directory> krb5.conf: [libdefaults] default_realm = ONEVUE.COM.AU.LOCAL [realms] ONEVUE.COM.AU.LOCAL = { kdc = SYD01PWDC01.ONEVUE.COM.AU.LOCAL master_kdc = SYD01PWDC01.ONEVUE.COM.AU.LOCAL admin_server = SYD01PWDC01.ONEVUE.COM.AU.LOCAL default_domain = ONEVUE.COM.AU.LOCAL } [login] krb4_convert = true krb4_get_tickets = false The access log when accessing the secured directory (note the two seperate 401's) 192.168.10.115 - - [24/Aug/2012:15:52:01 +1000] "GET /kerberos/ HTTP/1.1" 401 710 "-" "Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/537.1 (KHTML, like Gecko) Chrome/21.0.1180.83 Safari/537.1" 192.168.10.115 - - [24/Aug/2012:15:52:06 +1000] "GET /kerberos/ HTTP/1.1" 401 680 "-" "Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/537.1 (KHTML, like Gecko) Chrome/21.0.1180.83 Safari/537.1" 192.168.10.115 - [email protected] [24/Aug/2012:15:52:10 +1000] "GET /kerberos/ HTTP/1.1" 200 375 "-" "Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/537.1 (KHTML, like Gecko) Chrome/21.0.1180.83 Safari/537.1" And one line in error.log [Fri Aug 24 15:52:06 2012] [error] [client 192.168.0.115] gss_accept_sec_context(2) failed: An unsupported mechanism was requested (, Unknown error)

    Read the article

  • keepalived issues on xen domU

    - by David Cournapeau
    Hi, I cannot manage to run keepalived correctly on xen domU. I am following this link for configuration, and it works great on some local VM (running with KVM). If I set up the exact same configuration, but on xen domU, it does not work: both servers do not see each other and decide to be master (10.120.100.99 being the virtual IP) $ sudo ip addr sh eth0 # host1 2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000 link/ether 00:16:3e:78:f5:31 brd ff:ff:ff:ff:ff:ff inet 10.120.100.104/24 brd 10.120.100.255 scope global eth0 inet 10.120.100.99/32 scope global eth0 inet6 fe80::216:3eff:fe78:f531/64 scope link valid_lft forever preferred_lft forever $ sudo ip addr sh eth0 # host2 2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000 link/ether 00:16:3e:51:36:20 brd ff:ff:ff:ff:ff:ff inet 10.120.100.105/24 brd 10.120.100.255 scope global eth0 inet 10.120.100.99/32 scope global eth0 inet6 fe80::216:3eff:fe51:3620/64 scope link valid_lft forever preferred_lft forever Is there a way I could debug this - it seems some people are able to use keepalived on xen following some mailing list, but without much info on their config.
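
    VRRP advertisements go to multicast 224.0.0.18 (IP protocol 112), and Xen bridges frequently drop or fail to forward that traffic, which makes both nodes elect themselves master exactly as described; running tcpdump -ni eth0 'ip proto 112' on each domU shows quickly whether the peer's advertisements ever arrive. Newer keepalived releases can avoid multicast entirely with unicast peering; a sketch with the addresses from the question (option availability depends on the keepalived version):

        vrrp_instance VI_1 {
            ...
            unicast_src_ip 10.120.100.104    # this node
            unicast_peer {
                10.120.100.105               # the other node
            }
        }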

    Read the article

  • Replacing compiz/metacity with openbox reduces workspaces to 1

    - by Brian
    I like to use the GNOME desktop, but I prefer to replace its window manager with openbox, with 4 workspaces. However, when I run openbox --replace, the number of workspaces available drops to 1. If I go into obconf, workspaces is still configured to be 4 (~/.config/openbox/rc.xml shows the same). I can get the workspaces to reappear by changing the value in obconf to anything else, and then back to 4. I have just been dealing with this problem since Ubuntu 9.04 (now up to 10.10) since I don't reboot very often. But it's really annoying to have to reset my workspaces whenever I do have to reboot. Changing the value in rc.xml and running openbox --reconfigure does not seem to have any effect. So what is obconf doing that I'm not (sends a dbus message perhaps [EDIT: watching with dbus-monitor I see no messages when changing the workspaces value in obconf])? I was hoping there would be a cleaner way to change the window manager than just running openbox --replace at login. So my questions are: Is there a better way to specify an alternate window manager (i.e. a way that doesn't cause the workspaces to break)? If not, how can I automatically set the number of workspaces back to 4? Update: I finally got around to trying what I commented on MrShunz's answer (adding WINDOW_MANAGER=/usr/bin/openbox to ~/.gnomerc). But the effect is the same as openbox --replace.

    Read the article

  • Is it a good idea to run Redmine using Webrick through Nginx?

    - by Rohit
    The task here is to get Redmine setup for a small (<20) team. There may be a few users who would access the setup as business clients. I am familiar with setting up PHP for Apache, and recently, Nginx. I am not familiar with Ruby, Ruby-On-Rails, etc. I prefer to use the OS's (Ubuntu Linux LTS) package manager to install the different components as it takes care of dependencies and updates. I have setup Nginx with PHP-FPM successfully and am struggling with Redmine. As suggested here, I got Redmine running on port 3000. # /etc/init/redmine.conf # Redmine description "Redmine" start on runlevel [2345] stop on runlevel [!2345] expect daemon exec ruby /usr/share/redmine/script/server webrick -e production -b 0.0.0.0 -d And using the Nginx config on this page, I used Nginx to proxy requests to Webrick. server { listen 80; server_name myredmine.example.com; location / { proxy_pass http://127.0.0.1:3000; } } This works well locally. I wanted some opinions before trying this out on the live box (a 256 MB VPS). Further, should I use something like monit to monitor webrick for failure?

    Read the article

  • port forwarding with socks over proxy

    - by Oz123
    I am trying to browse a wiki that runs on a server inside one domain from another domain. The wiki is accessible only on the LAN, but I need to browse it from another LAN to which I connect with an SSH tunnel ... Here is my setup and the steps I did so far: ~.ssh/confing on wikihost: Host gateway User kisteuser Port 443 Hostname gateway.companydomain.com ProxyCommand /home/myuser/bin/ssh-https-tunnel %h %p # ssh-https-tunnel: # http://ttcplinux.sourceforge.net/tools/stunnel Protocol 2 IdentityFile ~/.ssh/key_dsa LocalForward 11069 localhost:11069 Host server1 User kisteuser Hostname localhost Port 11069 LocalForward 8022 server1:22 LocalForward 17001 server1:7100 LocalForward 8080 www-proxy:3128 RemoteForward 11069 localhost:22 from wikihost myuser@wikihost: ssh -XC -t gateway.companydomain.com ssh -L11069:localhost:22 server1 on another terminal: ssh gateway.companydomain.com Now, on my companydomain I would like to start firefox and browse the wiki on wikihost. I did: [email protected] ~ $ ssh gateway Have a lot of fun... kisteuser@gateway ~ $ ssh -D 8383 localhost user@localhost's password: user@wikiserver:~> My .ssh/config on that side looks like that: host server1 localforward 11069 localhost:11069 host localhost user myuser port 11069 host wikiserver forwardagent yes user myuser port 11069 hostname localhost Now, I started firefox on the server called gateway, and edited the proxy settings to use SOCKSv5, specifying that the proxy should be gateway and use the port 8383... kisteuser@gateway ~ $ LANG=C firefox -P --no-remote And, now I get the following error popping in the Terminal of wikiserver: myuser@wikiserver:~> channel 3: open failed: connect failed: Connection refused channel 3: open failed: connect failed: Connection refused channel 3: open failed: connect failed: Connection refused Confused? Me too ... Please help me understand how to properly build the tunnels and browse the wiki over SOCKS protocol. update: I managed to browse the wiki on wikiserver with the following changes: host wikiserver forwardagent yes user myuser port 11069 hostname localhost localforward 8339 localhost:8443 Now when I ssh gateway I launch Firefox and go to localhost:8339 and I hit the start page of the wiki, which is served on Port 8443. Now I ask myself is SOCKS really needed? Can someone elaborate on that ?

    Read the article

  • ospfd over an OpenVPN link - strange error in logs

    - by Alex
    I am trying to set up Quagga ospfd on two hosts connected by an OpenVPN link. These hosts have VPN IPs 10.31.0.1 and 10.31.0.13. ospfd config is pretty simple: hostname bizon password xxxxxxxxx enable password xxxxxxxxx ! log file /var/log/quagga/ospfd.log ! interface lo ! interface tun0 ip ospf network point-to-point ip ospf mtu-ignore ip ospf cost 10 interface tun1 ip ospf network point-to-point ip ospf mtu-ignore ip ospf cost 10 interface tun2 ip ospf network point-to-point ip ospf mtu-ignore ip ospf cost 10 ! router ospf ospf router-id 10.31.0.1 network 10.31.0.0/16 area 0.0.0.0 network 10.119.2.0/24 area 0.0.0.0 redistribute connected area 0.0.0.0 range 10.0.0.0/8 ! line vty ! debug ospf event debug ospf packet all I am getting the following error in the ospfd.log (the log is from 10.31.0.13): 2012/10/05 01:25:28 OSPF: ip_v 4 2012/10/05 01:25:28 OSPF: ip_hl 5 2012/10/05 01:25:28 OSPF: ip_tos 192 2012/10/05 01:25:28 OSPF: ip_len 64 2012/10/05 01:25:28 OSPF: ip_id 64666 2012/10/05 01:25:28 OSPF: ip_off 0 2012/10/05 01:25:28 OSPF: ip_ttl 1 2012/10/05 01:25:28 OSPF: ip_p 89 2012/10/05 01:25:28 OSPF: ip_sum 0xe5d1 2012/10/05 01:25:28 OSPF: ip_src 10.31.0.1 2012/10/05 01:25:28 OSPF: ip_dst 224.0.0.5 2012/10/05 01:25:28 OSPF: Packet from [10.31.0.1] received on link tun1 but no ospf_interface I'm not sure what to do next. I have set up ospfd over OpenVPN several times but I used Debian and I am on CentOS 6 now. Quagga version is 0.99.15. Should I try to get more recent version?
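
    The "received on link tun1 but no ospf_interface" warning generally means ospfd has no active interface structure for tun1 -- often because the tunnel came up after ospfd started, or because its address is not covered by a network statement at that moment. A couple of hedged checks via vtysh:

        vtysh -c 'show ip ospf interface tun1'   # is OSPF actually enabled on the tunnel?
        vtysh -c 'show ip ospf neighbor'
        # restarting ospfd after all tunnels are up (or re-applying the network
        # statement) is a quick way to confirm an ordering problem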

    Read the article

  • Nginx + PHP-FPM Timeouts, almost zero load consumption?

    - by javipas
    I've got a server running on a Linode with Ubuntu 10.04 LTS, Nginx 0.7.65, MySQL 5.1.41 and PHP 5.3.2 with PHP-FPM. There is a WordPress blog on it, updated to WordPress 3.2.1 recently. I have made no changes to the server (except updating WordPress) and while it was running fine, a couple of days ago I started having downtimes. I tried to solve the problem, and checking the error_log I saw many timeouts and messages that seemed to be related to timeouts. The server is currently logging this kind of errors: 2011/07/14 10:37:35 [warn] 2539#0: *104 an upstream response is buffered to a temporary file /var/lib/nginx/fastcgi/2/00/0000000002 while reading upstream, client: 217.12.16.51, server: www.mydomain.com, request: "GET /page/2/ HTTP/1.0", upstream: "fastcgi://127.0.0.1:9000", host: "www.mydomain.com", referrer: "http://www.mydomain.com/" 2011/07/14 10:40:24 [error] 2539#0: *231 upstream timed out (110: Connection timed out) while reading response header from upstream, client: 46.24.245.181, server: www.mydomain.com, request: "GET / HTTP/1.1", upstream: "fastcgi://127.0.0.1:9000", host: "www.mydomain.com", referrer: "http://www.google.es/search?sourceid=chrome&ie=UTF-8&q=mydomain" and even saw this previous serverfault discussion with a possible solution: to edit /etc/php/etc/php-fpm.conf and change request_terminate_timeout=30s instead of ;request_terminate_timeout= 0 The server worked for some hours, and then broke again. I edited the file again to leave it as it was, and restarted again php-fpm (service php-fpm restart) but no luck: the server worked for a few minutes and back to the problem over and over. The strange thing is, although the services are running, htop shows there is no CPU load (see image) and I really don't know how to solve the problem. The config files are on pastebin The php-fpm.conf file is here The /etc/nginx/nginx.conf is here The /etc/nginx/sites-available/www.mydomain.com is here Please help :(
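
    Timeouts with near-zero CPU usually mean the FPM pool is exhausted -- every child is busy or stuck waiting (often on MySQL) -- rather than nginx misbehaving. The pool knobs worth checking are sketched below; the numbers are only a starting point for a small VPS, not a recommendation:

        ; php-fpm pool section (sketch)
        pm = dynamic
        pm.max_children = 8
        pm.start_servers = 2
        pm.min_spare_servers = 2
        pm.max_spare_servers = 4
        pm.max_requests = 500
        request_terminate_timeout = 30s
        ; and on the nginx side, fastcgi_read_timeout in the PHP location block

    Watching "ps aux | grep php-fpm" and the MySQL process list during a stall shows whether the children are all tied up.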

    Read the article

  • Debian/Ubuntu - No network connection

    - by leviathanus
    I have a very weird situation on my Ubuntu 12.04 LTS Server. I can not access (ping) my gateway, although I believe my config is ok - I attach the outputs. Any hints where to look? (I changed the beginning of the IP to something different, just obfuscation) ping 5.9.10.129 PING 5.9.10.129 (5.9.10.129) 56(84) bytes of data. From 5.9.10.129 (5.9.10.129) icmp_seq=2 Destination Host Unreachable From 5.9.10.129 (5.9.10.129) icmp_seq=3 Destination Host Unreachable From 5.9.10.129 (5.9.10.129) icmp_seq=4 Destination Host Unreachable uname -r 3.2.0-29-generic ifconfig eth0 eth0 Link encap:Ethernet HWaddr 3c:97:0e:0e:54:d7 inet addr:5.9.10.142 Bcast:5.9.10.159 Mask:255.255.255.224 inet6 addr: fe80::8e70:5aff:feda:c4ac/64 Scope:Link UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1 RX packets:1216 errors:0 dropped:0 overruns:0 frame:0 TX packets:490 errors:0 dropped:0 overruns:0 carrier:0 collisions:0 txqueuelen:1000 RX bytes:107470 (107.4 KB) TX bytes:34344 (34.3 KB) Interrupt:17 Memory:d2500000-d2520000 ip route default via 5.9.10.129 dev eth0 metric 100 5.9.10.128/27 via 5.9.10.129 dev eth0 5.9.10.128/27 dev eth0 proto kernel scope link src 5.9.10.142 route -n Kernel IP routing table Destination Gateway Genmask Flags Metric Ref Use Iface 0.0.0.0 5.9.10.129 0.0.0.0 UG 1000 0 0 eth0 5.9.10.128 5.9.10.129 255.255.255.224 UG 0 0 0 eth0 5.9.10.128 0.0.0.0 255.255.255.224 U 0 0 0 eth0 iptables -L Chain INPUT (policy ACCEPT) target prot opt source destination Chain FORWARD (policy ACCEPT) target prot opt source destination Chain OUTPUT (policy ACCEPT) target prot opt source destination UPD: Eric, this is how routing information looks on a working server: 0.0.0.0 78.47.198.49 0.0.0.0 UG 100 0 0 eth0 78.47.198.48 78.47.198.49 255.255.255.240 UG 0 0 0 eth0 78.47.198.48 0.0.0.0 255.255.255.240 U 0 0 0 eth0 As I understand it, Hetzner tries to ensure security by this, so I can not take over an IP by changing my MAC. But this is another server, which has another netmask (255.255.255.240) UPD2: BatchyX, on the working server: 78.47.198.49 dev eth0 src 78.47.198.60 cache on the broken: 5.9.10.129 dev eth0 src 5.9.10.142 cache
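
    "Destination Host Unreachable" reported by the host's own address means ARP resolution of the gateway is failing before routing even matters. Two quick checks (Hetzner also binds each IP to an expected MAC, so a MAC/IP mismatch shows up the same way):

        ip neigh show 5.9.10.129    # FAILED or INCOMPLETE means no ARP reply
        arping -I eth0 5.9.10.129   # (iputils arping) does the gateway answer ARP at all?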

    Read the article
