Search Results

Search found 20816 results on 833 pages for 'vsphere client'.


  • apache with php fastcgi keeps going down

    - by Josh Nankin
    I have an apache2 server configured with MPM worker and PHP FastCGI. Lately the Apache logs have been telling me that MaxClients is being reached frequently, even though it's already pretty high. My server is now constantly going down, and I see a bunch of lines like this in the log:

        [Sun Mar 06 04:25:40 2011] [error] [client 50.16.83.115] FastCGI: comm with (dynamic) server "/var/local/fcgi/php-cgi-wrapper.fcgi" aborted: (first read) idle timeout (20 sec)
        [Sun Mar 06 04:25:40 2011] [error] [client 50.16.83.115] FastCGI: incomplete headers (0 bytes) received from server "/var/local/fcgi/php-cgi-wrapper.fcgi"

    I can see that my php-cgi processes are pretty large (about 70 MB on average). Here's my Apache configuration for MPM worker:

        KeepAlive ON
        KeepAliveTimeout 2
        <IfModule mpm_worker_module>
            StartServers          5
            MinSpareThreads      10
            MaxSpareThreads      10
            ThreadLimit          64
            ThreadsPerChild      10
            MaxClients           20
            MaxRequestsPerChild 2000
        </IfModule>

    Here's my FastCGI Apache configuration:

        <IfModule mod_fastcgi.c>
            # One shared PHP-managed fastcgi for all sites
            Alias /fcgi /var/local/fcgi
            # IMPORTANT: without this we get more than one instance
            # of our wrapper, which itself spawns 20 PHP processes, so
            # that would be Bad (tm)
            FastCgiConfig -idle-timeout 20 -maxClassProcesses 1
            <Directory /var/local/fcgi>
                # Use the + so we don't clobber other options that
                # may be needed. You might want FollowSymLinks here
                Options +ExecCGI
            </Directory>
            AddType application/x-httpd-php5 .php
            AddHandler fastcgi-script .fcgi
            Action application/x-httpd-php5 /fcgi/php-cgi-wrapper.fcgi
        </IfModule>

    Here's my FastCGI wrapper:

        #!/bin/sh
        PHPRC="/etc/php5/apache2"
        export PHPRC
        PHP_FCGI_CHILDREN=8
        export PHP_FCGI_CHILDREN
        exec /usr/bin/php-cgi

    Any help would be very very much appreciated!
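
    A small way to watch the php-cgi pool against the PHP_FCGI_CHILDREN=8 and MaxClients 20 values above; a sketch only, assuming GNU ps and a process name of php-cgi:

        # Refresh every 5 seconds: per-process RSS/age, then a child count.
        watch -n 5 'ps -C php-cgi -o pid,rss,etime,args --sort=-rss; echo; ps -C php-cgi --no-headers | wc -l'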

    Read the article

  • Web-based file search on the LAN?

    - by Magnetic_dud
    I would like to search files on my LAN easily (over 500k files on SMB shares; it would take ages any other way). I just need a quick search on file names; I don't care about content indexing at all, as most of my files are in a proprietary format and the file name is descriptive enough. But date range filters are a must for me. I just need a quick search like voidtools' Everything can do, but in a network way.

    The files are on a WHS box (lol, "Videos" and "Music" share names are not appropriate for a company, but a license for that Win2003-based OS is cheaper than an XP Home one!)

    I tried:

    - LanSearch Pro: it is not good for me, as I need a quick index
    - Network Search Engine: it would be perfect, but does not offer a date range filter
    - Microsoft Search Server 2008 Express, but it is horrible! First, it does NOT index filenames, and then, my Core2Duo is not powerful enough to run it smoothly
    - Google Desktop with a proxy on localhost to make it run on the LAN, but I don't like the hacked result
    - The preinstalled Windows Search 4.0, but it totally sucks at choosing the relevance of data - uninstalled
    - Docco... what's that?

    I am considering trying:

    - IBM OmniFind
    - DocFetcher (can it work as a client? I have not investigated yet)
    - Strigi (it looks like it can work as a client, right?)

    Any ideas/suggestions?
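
    For comparison, a crude no-index fallback over a mounted share looks like this; a sketch only, assuming a Linux box with the share mounted, and with a placeholder mount point, pattern and dates (it will be slow across 500k files):

        # Filename-only match with a date range, straight over the mount.
        find /mnt/whs -iname '*report*' -newermt '2010-01-01' ! -newermt '2010-04-01'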

    Read the article

  • How to set up a VPN Incoming connection with Windows to tunnel Internet traffic?

    - by Mehrdad
    I want to set up a VPN on a remote server to route all my Internet traffic, for privacy reasons. I can set up an incoming connection and connect to it successfully. The problem is, I can only see the remote computer and no other Web sites will open. I want the remote server to act like a NAT. How can I do that?

    Note that I don't want to split Internet traffic. I actually want to send all the traffic to the remote server, but I need to make it relay the traffic. For the record, my remote server is Windows Web Server 2008, which does not have the Routing and Remote Access service.

    Clarification: I'm mostly interested in the server configuration. I don't have any problems configuring the client. By the way, Windows Web Server 2008 seems to have the same VPN features built into client OSes (like Vista), and specifically, it doesn't include the RRAS console in MMC. I'm also open to suggestions regarding third-party PPTP/L2TP daemons, if they are free.

    Read the article

  • Windows XP laptop doesn't appear in WSUS All computers list

    - by George
    I have this one laptop that doesn't appear in the WSUS All Computers list. We have about 23-25 PCs/laptops/servers in the network; all but one are listed in WSUS. This is what I have done so far:

    1) Changed the update settings on the local PC: go to the Windows XP client and start a new Microsoft Management Console (MMC). At Start, Run, type MMC. Use Ctrl+M to add a new snap-in. Click Add, and then add the Group Policy Object Editor for the Local Computer. Click Close, and then click OK. Expand the Local Computer Policy. Under Computer Configuration, go to Administrative Templates, Windows Components, Windows Update. In the right-hand pane, double-click "Specify intranet Microsoft update service location". Configure the settings to reflect my WSUS server. Click OK and then close the MMC without saving it. Then executed wuauclt.exe /detectnow

    2) Edited the registry key to be pushed to the PCs using GPO:

        [HKEY_LOCAL_MACHINE\SOFTWARE\Policies\Microsoft\Windows\WindowsUpdate]
        "WUServer"=http://wsusserver
        "TargetGroupEnabled"=dword:00000001
        "TargetGroup"="WINXP"
        "WUStatusServer"=http://wsuswerver

    3) Executed wuauclt /resetauthorization /detectnow

    4) Synchronised and refreshed the group

    I am running out of ideas here. The client is running Windows XP Pro, the WSUS version is 3.0, and it is running on Windows 2008 R2 64-bit. Please help! Thanks!

    EDIT 13.IX.2012 @ 15.40: I should have also mentioned that we do have a Windows Update GPO for the workstations group and that laptop is a part of that group.

    Read the article

  • Problem with PXE boot

    - by user70523
    I followed this link for PXE boot: http://www.howtoforge.com/setting-up-a-pxe-install-server-on-ubuntu-9.10-p3. I was able to ping the client from the server, and when I booted up the client it got its IP address from the server. But later I got this error:

        PXELinux 3.82 2009-06-09
        . . . [other information]
        !PXE Entry point found (we hope) at 9D3B:0109 via plan A
        UNDI code segment at 9D3B len 16C2
        UNDI data segment at 933B len A000
        Getting cached packet 01 02 03
        . . . [other information]
        TFTP prefix:
        Trying to load: pxelinux.cfg/ec5db4c0-74fe-d511-b9e7-3d9235afe5a1
        Trying to load: pxelinux.cfg/01-00-17-31-b6-5e-a8
        Trying to load: pxelinux.cfg/0A64491E
        Trying to load: pxelinux.cfg/0A64491
        Trying to load: pxelinux.cfg/0A6449
        Trying to load: pxelinux.cfg/0A644
        Trying to load: pxelinux.cfg/0A64
        Trying to load: pxelinux.cfg/0A6
        Trying to load: pxelinux.cfg/0A
        Trying to load: pxelinux.cfg/0
        Trying to load: pxelinux.cfg/default
        Unable to locate configuration file
        Boot failed: press a key to retry or wait for reset

    I have put all the files mentioned in the link in tftpboot. Can anyone explain what the problem could be? Thanks in advance.
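
    Since everything up to the config-file lookup succeeds, one quick check is whether the TFTP server will actually hand out pxelinux.cfg/default; a sketch, assuming the tftp-hpa client and a placeholder server address:

        # Fetch the fallback config by hand from any box on the subnet.
        tftp 192.168.0.1 -c get pxelinux.cfg/default
        ls -l default && head default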

    Read the article

  • How to get gigabit network speeds on Windows XP?

    - by JB
    We've just installed gigabit switches at work, and things on the Linux side are going well. Our Linux boxes, which use an Intel Corporation 82566DM-2 Gigabit NIC (according to lspci), consistently get over 900 Mbits/sec:

        iperf -c ipserver
        ------------------------------------------------------------
        Client connecting to ipserver, TCP port 5001
        TCP window size: 16.0 KByte (default)
        ------------------------------------------------------------
        [  3] local 192.168.40.9 port 39823 connected with 192.168.1.115 port 5001
        [ ID] Interval       Transfer     Bandwidth
        [  3]  0.0-10.0 sec  1.08 GBytes  929 Mbits/sec

    We have a bunch of Windows XP 64-bit machines that use Broadcom NetXtreme 57xx cards. I spent around a day trying to get equivalent speeds on them, but couldn't get above 200 Mbits/sec. I noticed the Windows iperf tests said that the TCP window size was 8 KB by default (as opposed to 16 KB on Linux), so I modified my test to reflect that. Still no love. I went to Broadcom's site, downloaded the latest drivers for the card and installed them. Still no love. However, finally, I tried a 64 KB window size with the new drivers, and finally an improvement!

        $ iperf -c ipserver -w64k
        ------------------------------------------------------------
        Client connecting to ipserver, TCP port 5001
        TCP window size: 64.0 KByte
        ------------------------------------------------------------
        [  3] local 192.168.40.214 port 1848 connected with 192.168.1.115 port 5001
        [ ID] Interval       Transfer     Bandwidth
        [  3]  0.0-10.0 sec   933 MBytes   782 Mbits/sec

    Much better, but still not really taking advantage of the full capabilities of the network. If the Linux box can reach 950 Mbits/sec consistently, this box should be able to as well. Also, if you're wondering about the medium, this is over the same cable; I'm switching back and forth. Any suggestions or ideas would be really welcome. Thanks!
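
    For completeness, a small sweep in the same spirit as the runs above shows where a given box tops out; a sketch only (run from a unix client, or with the equivalent flags on the Windows build), and the window sizes and stream count are arbitrary:

        # Try a few window sizes, then a parallel-stream run (iperf 2 syntax).
        for w in 64k 128k 256k; do
            echo "window $w:"
            iperf -c ipserver -w "$w" -t 10 | tail -n 1
        done
        iperf -c ipserver -w 256k -P 4 -t 10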

    Read the article

  • can't install mysql-devel on centos 6.5

    - by Latheesan Kanes
    I need mysql-devel package to be installed on my CentOS 6.5 running Percona 5.5 (already installed & running). When I try to install the devel package like this:

        yum --enablerepo=remi install mysql-devel

    I get the following error:

        Error: Package: mysql-devel-5.5.37-1.el6.remi.i686 (remi)
               Requires: real-mysql-libs(x86-32) = 5.5.37-1.el6.remi
               Available: mysql-libs-5.5.36-1.el6.remi.i686 (remi)
                   real-mysql-libs(x86-32) = 5.5.36-1.el6.remi
               Available: mysql-libs-5.5.37-1.el6.remi.i686 (remi)
                   real-mysql-libs(x86-32) = 5.5.37-1.el6.remi
        Error: Package: mysql-5.5.37-1.el6.remi.i686 (remi)
               Requires: real-mysql-libs(x86-32) = 5.5.37-1.el6.remi
               Available: mysql-libs-5.5.36-1.el6.remi.i686 (remi)
                   real-mysql-libs(x86-32) = 5.5.36-1.el6.remi
               Available: mysql-libs-5.5.37-1.el6.remi.i686 (remi)
                   real-mysql-libs(x86-32) = 5.5.37-1.el6.remi
        Error: mysql conflicts with Percona-Server-client-55-5.5.37-rel35.0.el6.i686

    Here's what's currently installed on my server:

        [root@server1 ~]# yum list installed | grep mysql
        php-mysqlnd.i686              5.4.29-1.el6.remi       @remi
        [root@server1 ~]# yum list installed | grep percona
        Percona-Server-client-55.i686 5.5.37-rel35.0.el6      @percona
        Percona-Server-server-55.i686 5.5.37-rel35.0.el6      @percona
        Percona-Server-shared-55.i686 5.5.37-rel35.0.el6      @percona
        [root@server1 ~]#

    Any ideas how to fix this dependency error?

    Read the article

  • Bluehost Emails Getting Blocked

    - by colithium
    A site for my client has the run-of-the-mill "website with users" email pattern: create an account, get an activation email; get an email when a subscription is expiring, etc. The site is hosted on Bluehost and currently it uses PHP's mail() function. There isn't much configuration that is allowed (as far as I know).

    The trouble is, about a third of these emails disappear into the void. They aren't in spam or junk folders, there's no bounce message, they just cease to exist. I've read about Bluehost email troubles but I can't figure out what my options are for fixing it. These aren't marketing emails, i.e. they have user-specific information contained within them. I suppose if a solution offers a good templating system that would be fine. What are my options?

    Excerpt of headers when delivered to a Gmail address:

        Received-SPF: neutral (google.com: 00.000.000.000 is neither permitted nor denied by best guess record for domain of domain@box###.bluehost.com) client-ip=00.000.000.000;
        DomainKey-Status: good
        Authentication-Results: mx.google.com; spf=neutral (google.com: 00.000.000.000 is neither permitted nor denied by best guess record for domain of domain@box###.bluehost.com) smtp.mail=domain@box###.bluehost.com; domainkeys=pass [email protected]

    Read the article

  • Windows 7 - Windows XP - sharing - why isn't it working?

    - by durumdara
    Hi! This seems to be a "hardware" rather than a "software"/"programming" question, but I need to use this share in my programs, so it is close to programming.

    We had an XP-based wireless network. The server is XP Professional, the clients are XP Home (notebooks). This was working well with folder sharing (with user rights, not simple sharing). Then we replaced one of the notebooks with a Win7 x64 notebook. At first it could reach the server, and the other client too. Later I went to other sites and connected to other servers and networks. When I returned to this network, I found that I could no longer connect to this server: I see none of its resources, and when I try to double-click on this computer I get a login window where I can type anything, but I can never log in.

    The interesting part:

    - The other XP Home machine can see the server and can log in as guest or with another user.
    - The server can see the XP Home notebook.
    - The Win7 machine can see the notebook's shared folders, and the XP Home machine can see the Win7 shared folders.
    - The server can see the Win7 folders, BUT the Win7 machine cannot see the server's folders, and cannot see its resources either.

    The Win7 machine is in a "work network" group, and the group name is not MSHOME. I tried everything on the server: removing the MS client, restoring it with simple sharing, setting a guest password, etc., but I have lost the ability to access this server from Win7.

    Does anyone have any idea what I need to check, or what I need to set, to access these resources and use them in my programs? Thanks for every info, link: dd

    Read the article

  • Apache LDAP auth: denied all time

    - by Dmytro
    Here is my config (httpd 2.4):

        <AuthnProviderAlias ldap zzzldap>
            LDAPReferrals Off
            AuthLDAPURL "ldaps://ldap.zzz.com:636/o=zzz.com?uid?sub?(objectClass=*)"
            AuthLDAPBindDN "uid=zzz,ou=Applications,o=zzz.com"
            AuthLDAPBindPassword "zzz"
        </AuthnProviderAlias>

        <Location /svn>
            DAV svn
            SVNParentPath /DATA/svn
            AuthType Basic
            AuthName "Subversion repositories"
            SSLRequireSSL
            AuthBasicProvider zzzldap
            <RequireAll>
                Require valid-user
                Require ldap-attribute employeeNumber=12345
                Require ldap-group cn=yyy,ou=Groups,o=zzz.com
            </RequireAll>
        </Location>

    Require valid-user works. But ldap-attribute, ldap-filter and ldap-group do not work: they are denied in the logs every time. I have spent a lot of time on this but can't understand what's going on. This is an example from my logs:

        [Tue Sep 25 16:42:26.772006 2012] [authz_core:debug] [pid 23087:tid 139684003014400] mod_authz_core.c(802): [client 1.1.1.1:52624] AH01626: authorization result of Require valid-user : granted
        [Tue Sep 25 16:42:26.772014 2012] [authz_core:debug] [pid 23087:tid 139684003014400] mod_authz_core.c(802): [client 1.1.1.1:52624] AH01626: authorization result of Require ldap-attribute employeeNumber=12345: denied

    I checked all the info with ldapsearch: there is a valid username, employee ID and so on.
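
    Since ldapsearch was already used to verify the data, a query that mirrors the AuthLDAPURL and bind DN above can confirm what the module should be seeing; the test uid and the requested attributes are placeholders, and -W prompts for the bind password:

        ldapsearch -H ldaps://ldap.zzz.com:636 \
          -D "uid=zzz,ou=Applications,o=zzz.com" -W \
          -b "o=zzz.com" "(uid=testuser)" employeeNumber memberOf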

    Read the article

  • Mac and L2TP VPN no problems, xp, vista and 7 no go :s

    - by The_cobra666
    Hi all, I've got a weird problem and I'm out of options. The situation: when connecting from my Mac to the VPN server (Windows Server 2003 R2) with L2TP and a pre-shared key, everything works like it should. However, when I connect from a Windows PC, nothing happens: it spits out error 809 and sometimes 789. Now I know that my ports are OK, since the Mac can connect without any problems. It's the same for XP, Vista SP2 and 7: none can connect.

    If I connect to the VPN server directly (to the internal IP instead of the WAN address from the router), it connects without a problem. Connecting using PPTP works... now if only L2TP would work. Thank you very much, Windows!

    I have checked the counters on my Linux router with iptables -L -nv and they do not increase when connecting: not on ACCEPT and not on DROP. They only increase when connecting from the Mac. I've found the guide from Microsoft to enable AssumeUDPEncapsulationContextOnSendRule in the registry. I have set it to "2" on the server and client. Still no go. After that registry key it started giving me error 789 instead of 809. The IPSEC services are running on the client and server.

    Is there anyone that can please help me with this? I've been working on this for 2 days and I'm out of options. Thanks!

    Read the article

  • who has files open on a linux server

    - by Robert
    I have the fairly common task of finding who has files open on our Linux (Ubuntu) file server in our Windows environment. We use Samba on the network and I use PuTTY from my workstation to establish a shell window to run bash scripts. I have been using something like this to find what files are open (it returns a list of process IDs with each open file):

        Robert:$ sudo lsof | grep "/srv/office/some/folder"

    Then, I follow up with something like this to show who owns the process (it returns the name of the machine on the network, using the IPv4 protocol, that owns the process):

        Robert:$ sudo lsof -p 27295 | grep "IPv4"

    Now I know the Windows client who has a file open and can take action from there. As you can tell, this is not difficult, but it is time consuming. I would prefer to have a Windows application I can run that would just give me what I want. So, I have been thinking about creating some process I can run on Linux that listens on a port and then returns a clean list of all open files with the IP address of the host that has each file open, plus a small Windows client application that can send the request on the port. It seems like this should be a very common need, but I can not find anything like this that has been done before. Any suggestions?
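
    The two manual steps above can be rolled into one pass; a minimal sketch, assuming GNU userland and the same (placeholder) share path:

        #!/bin/bash
        # For every PID holding something open under the share, print the PID
        # and its IPv4 peer (the SMB client), mirroring the two lsof calls above.
        SHARE="/srv/office/some/folder"
        for pid in $(sudo lsof +D "$SHARE" 2>/dev/null | awk 'NR>1 {print $2}' | sort -u); do
            echo "== PID $pid =="
            sudo lsof -a -p "$pid" -i4 2>/dev/null
        done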

    Read the article

  • Running multiple sites on a LAMP with secure isolation

    - by David C.
    Hi everybody, I have been administering a few LAMP servers with 2-5 sites on each of them. These are basically owned by the same user/client, so there are no security issues apart from attacks through vulnerable daemons or scripts.

    I am building my own server and would like to start hosting multiple sites. My first concern is... ISOLATION. How can I avoid a c99 script defacing all the virtual hosts? Also, should I prevent that c99 script from being able to read/write the other sites' directories? (It is easy to "cat" a config.php from another site and then get into the MySQL database.)

    My server is a VPS with 512 MB, burstable to 1 GB. Among the free hosting managers, is there a small one which works for my VPS (and which is maybe compatible with the security approach I would like to have)? Currently I am not planning to host over 10 sites, but I would not accept that a client/hacker could navigate into unwanted directories or, worse, run malicious scripts. FTP management would be fine; I don't want to complicate things with SSH isolation.

    What is the best practice in this case? Basically, what do hosting companies do to sleep well? :) Thanks very much! David
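
    A quick audit along the lines of the config.php worry above; a sketch only, assuming one unix user per site and docroots under /var/www (both placeholders):

        # List site config files readable by "other" users, i.e. by a script
        # running as a different site's account or as the web server.
        for d in /var/www/*/; do
            echo "== $d =="
            find "$d" -name 'config.php' -perm -o+r 2>/dev/null
        done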

    Read the article

  • LDAP authentication issue with Kerio Connect

    - by djk
    Hi, we have Kerio Connect (mail server) running on a Windows Server 2003 server on a domain. In the webmail client, users are able to change their domain password. This functionality used to work fine until a user tried to change their password a few days ago; since then, every password they try results in the webmail client claiming the password is "invalid". I spoke to Kerio about this and they claim that this error is returned by the domain controller, which supports my initial investigations. The error that the DC logs when an attempt is made to change the password is this:

        80090308: LdapErr: DSID-0C090334, comment: AcceptSecurityContext error, data 52e, vece

    The "data 52e" part indicates that this is an "invalid credentials" error. I don't see how this can be, as I've tried (in the Kerio Connect configuration) various accounts that have privileges to modify accounts, including my own, and I am a domain admin. I have run dcdiag (all tests) on the DC and it came back passing every single one of them. I've searched high and low for an answer to this and came up empty. Does anyone have any idea why this may have suddenly started happening? Thanks!

    Edit: I should mention that the passwords we are changing to do comply with the complexity policy.

    Read the article

  • SSH traffic over openvpn connection freezes when I cat a file

    - by user42055
    I have an OpenVPN (version 2.1_rc15 at both ends) connection set up between two Gentoo boxes using shared keys. It works fine for the most part: I use MySQL, HTTP, FTP and scp over the VPN with no problems. But when I ssh from the client to the server over the VPN, weird things happen. I can log in and execute some commands, but if I try to run an ncurses application like top, or try to cat a file, the connection stalls and I have to sever the ssh session.

    I can, for example, execute "echo blah; echo .; echo blah" and it will output the three lines of text over the ssh session fine. But if I execute "cat /etc/motd", the session freezes the moment I press Enter.

    I compiled OpenVPN 2.1.1 on my Mac and copied over my config directory from my Gentoo client. The Mac connected and ssh sessions worked fine without freezing. I then compiled it on my older Gentoo box (2.6.26 kernel), which I am retiring due to a dying hard drive, and ssh over it also works perfectly. Why does it fail on my brand new Gentoo box? I've tried compiling three different kernels in case it was that, but other than that there should be no difference between my older and my newer Gentoo boxes that I can think of. Any suggestions on what's wrong?
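
    The pattern of short output working while top or cat stalls is often an MTU/fragmentation symptom; that diagnosis is an assumption here, but it is cheap to probe from the client (the tunnel peer address 10.8.0.1 is a placeholder):

        # Send don't-fragment pings of decreasing size across the tunnel; the
        # largest size that gets replies hints at the usable path MTU.
        for s in 1450 1400 1350 1300 1250; do
            echo "payload $s:"
            ping -M do -c 2 -s "$s" 10.8.0.1
        done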

    Read the article

  • memcache fast-cgi php apache 2.2 windows 7 creating problems

    - by Ahmad
    Hi, I am trying to run memcache and FastCGI with Apache 2.2 + PHP on a Windows 7 machine. If I don't use memcache, everything works fine; the moment I disable extension=php_memcache.dll in php.ini, everything returns to normal. Once I start Apache, the Apache logs say:

        [Wed Jan 12 18:19:23 2011] [notice] Apache/2.2.17 (Win32) mod_fcgid/2.3.6 configured -- resuming normal operations
        [Wed Jan 12 18:19:23 2011] [notice] Server built: Oct 18 2010 01:58:12
        [Wed Jan 12 18:19:23 2011] [notice] Parent: Created child process 412
        [Wed Jan 12 18:19:23 2011] [notice] Child 412: Child process is running
        [Wed Jan 12 18:19:23 2011] [notice] Child 412: Acquired the start mutex.
        [Wed Jan 12 18:19:23 2011] [notice] Child 412: Starting 64 worker threads.
        [Wed Jan 12 18:19:23 2011] [notice] Child 412: Starting thread to listen on port 80.

    And after accessing the page (the page just has echo phpinfo()), I get this error in error.log:

        [Wed Jan 12 18:20:54 2011] [warn] [client 127.0.0.1] (OS 109)The pipe has been ended. : mod_fcgid: get overlap result error
        [Wed Jan 12 18:20:54 2011] [error] [client 127.0.0.1] Premature end of script headers: index.php

    I have php_memcache.dll in my ext directory and httpd.conf is like this:

        LoadModule fcgid_module modules/mod_fcgid.so
        FcgidInitialEnv PHPRC "c:/php"
        FcgidInitialEnv PATH "c:/php;C:/WINDOWS/system32;C:/WINDOWS;C:/WINDOWS/System32/Wbem;"
        FcgidInitialEnv SystemRoot "C:/Windows"
        FcgidInitialEnv SystemDrive "C:"
        FcgidInitialEnv TEMP "C:/WINDOWS/Temp"
        FcgidInitialEnv TMP "C:/WINDOWS/Temp"
        FcgidInitialEnv windir "C:/WINDOWS"
        FcgidIOTimeout 64
        FcgidConnectTimeout 32
        FcgidMaxRequestsPerProcess 500
        <Files ~ "\.php$>"
            AddHandler fcgid-script .php
            FcgidWrapper "c:/php/php-cgi.exe" .php
        </Files>

    So the problem has to be related to memcache, because if I disable it, FastCGI seems to be working fine. Any possible reasons for this? The memcache service is running; I can check it through Control Panel, Services.

    Read the article

  • Postfix / Dovecot email setup not storing email

    - by Nick Duffell
    I'm trying to set up Postfix / Dovecot on my Debian server to use it as a mail server. I set everything up according to a tutorial on the net, and it all seemed OK. I can send emails from it, so SMTP is not a problem; however, I cannot receive emails.

    Looking at the files in /home/nick/mail/ I can see that if I send an email to myself (from the server, to itself) the emails are there, but are put straight into the Deleted Messages folder. I don't know why this is. When I send an email from another mail account (not on this server), the emails are nowhere to be found.

    Also, looking at the log file /var/log/mail.log, all seems to be OK; I get the following when I receive an email, which looks OK to me:

        Nov  7 22:47:22 nickduffell postfix/local[17825]: 05B1173581A6: to=, relay=local, delay=0.37, delays=0.31/0.02/0/0.03, dsn=2.0.0, status=sent (delivered to mailbox)

    Any ideas? Thanks

    EDIT: I should also add that although the emails I send myself are in the Deleted Messages folder, and in my mail client I can see that "Trash" has 3 items, I cannot download them in my mail client...
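
    To follow one delivery end to end, in the same spirit as the log line above, something like this works; a sketch, where the address is a placeholder and the sendmail path is Postfix's usual compatibility binary:

        # Hand a test message to the local Postfix, then watch where it lands.
        printf 'Subject: delivery test\n\ntest body\n' | /usr/sbin/sendmail nick@localhost
        tail -f /var/log/mail.log | grep --line-buffered -E 'status=|dovecot'
        # Interrupt the tail, then inspect the mailbox layout on disk:
        ls -lR /home/nick/mail/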

    Read the article

  • Windows errors, how do I find root cause and fix it? Getting several errors

    - by Eric Martin
    My server is having issues and not responding to customers' HTTPS requests. I checked the Event Viewer and found several errors. These two are listed a couple of times:

        WINS encountered a database error. This may or may not be a serious error. WINS will try to recover from it. You can check the database error events under 'Application Log' category of the Event Viewer for the Exchange Component, ESENT, source to find out more details about database errors. If you continue to see a large number of these errors consistently over time (a span of few hours), you may want to restore the WINS database from a backup. The error number is in the second DWORD of the data section.

    And this one:

        An error occured while using SSL configuration for socket address 0.0.0.0:444. The error status code is contained within the returned data.

        SQL Server is not ready to accept new client connections. Wait a few minutes before trying again. If you have access to the error log, look for the informational message that indicates that SQL Server is ready before trying to connect again. [CLIENT: xxx.xxx.xxxx.xxx]

    I also found this in the Event Viewer, but the computer has been restarted since this message and I have not seen it again:

        Configuring the Page file for crash dump failed. Make sure there is a page file on the boot partition and that is large enough to contain all physical memory.

    These are my virtual memory settings: (screenshot not reproduced here)

    I'm not familiar with WINS, so I wasn't sure if that is where I should start or how to resolve it. Is the WINS error causing the other problems, or should I be looking somewhere else?

    Read the article

  • Why won't dhclient use the static IP I'm telling it to request?

    - by mike
    Here's my /etc/dhcp3/dhclient.conf:

        request subnet-mask, broadcast-address, time-offset, routers,
                domain-name, domain-name-servers, domain-search, host-name,
                netbios-name-servers, netbios-scope, interface-mtu;
        timeout 60;
        reject 192.168.1.27;

        alias {
          interface "eth0";
          fixed-address 192.168.1.222;
        }

        lease {
          interface "eth0";
          fixed-address 192.168.1.222;
          option subnet-mask 255.255.255.0;
          option broadcast-address 255.255.255.255;
          option routers 192.168.1.254;
          option domain-name-servers 192.168.1.254;
        }

    When I run "dhclient eth0", I get this:

        There is already a pid file /var/run/dhclient.pid with pid 6511
        killed old client process, removed PID file
        Internet Systems Consortium DHCP Client V3.1.1
        Copyright 2004-2008 Internet Systems Consortium.
        All rights reserved.
        For info, please visit http://www.isc.org/sw/dhcp/

        wmaster0: unknown hardware address type 801
        wmaster0: unknown hardware address type 801
        Listening on LPF/eth0/00:1c:25:97:82:20
        Sending on   LPF/eth0/00:1c:25:97:82:20
        Sending on   Socket/fallback
        DHCPREQUEST of 192.168.1.27 on eth0 to 255.255.255.255 port 67
        DHCPACK of 192.168.1.27 from 192.168.1.254
        bound to 192.168.1.27 -- renewal in 1468 seconds.

    I used strace to make sure that dhclient really is reading that conf file. Why isn't it paying attention to my "reject 192.168.1.27" and "fixed-address 192.168.1.222" lines?
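
    One way to rule out a stale daemon or a cached lease steering the result is to run the client in the foreground with the config and lease files given explicitly; a sketch, reusing the config path from above and a throwaway lease file:

        # Release the current lease, then run in the foreground (-d) with an
        # explicit config file (-cf) and lease file (-lf) so nothing cached wins.
        sudo dhclient -r eth0
        sudo dhclient -d -cf /etc/dhcp3/dhclient.conf -lf /tmp/test.leases eth0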

    Read the article

  • Upgrade an Ubuntu 8.04 installation with VMware Server 1.0.8 and lots of guest OSes to Something Els

    - by Glyph
    I have an Ubuntu 8.04 (Hardy Heron) host machine which is running a whole slew of virtual machines in VMWare Server 1.0.8. Among other guest OSes, there is every release version of Ubuntu since 6.06, OpenSolaris 2009.06, and Windows XP. Right now I access these VMs from a variety of client OSes as well: Linux and Windows via the VMWare Server console, and MacOS via X-forwarding the host machine's server console.

    I'd like to upgrade the host to Ubuntu 10.04 (Lucid Lynx), but from what I can tell, getting VMWare Server 1.x to work on a more recent version of Linux is a real pain. While VMware Server 2.x is a bit easier, it's still not packaged as Debian packages, so installing security updates is a big chore. As long as I'm upgrading anyway, I'd like to move to a virtualization solution that will allow me to automate applying updates. The options that I'm aware of right now are KVM (managed via virt-manager) and VirtualBox (as managed by its own tools or via its own libvirt bindings), but I'm open to other suggestions.

    For each option, I'd like to know:

    - how do I convert my guest images to the new format?
    - am I going to have to re-activate my Windows guests (alternatively, "If the virtual hardware is different by default, can I avoid re-activation by changing some virtualization configuration to provide me with more similar virtual hardware")?
    - what are the management options like for each client OS (mac, linux, windows)?

    Thanks.
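
    On the disk-image conversion point, the route usually taken for KVM is qemu-img; a sketch only, with placeholder file names (split or multi-extent VMDKs from VMware Server 1.x should be converted via their descriptor file):

        # Convert a VMware disk to qcow2 for use with KVM/libvirt, then verify it.
        qemu-img convert -O qcow2 hardy-guest.vmdk hardy-guest.qcow2
        qemu-img info hardy-guest.qcow2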

    Read the article

  • What command should be used to connect via SSH through squid proxy?

    - by Raul Cardoso
    I have set up a Squid HTTP proxy (on CentOS) and intended to use it also for SSH connections. I managed to configure PuTTY (on a Windows client) to use this proxy when connecting over SSH, and confirmed on the target host that the SSH connection was coming from the proxy server IP instead of the Windows client IP. I used:

        targethost:22    for ssh
        proxyserv:3128   for the proxy (along with proxy credentials)

    I'm now having problems connecting to the target host using Ubuntu and the same proxy server. I have tried the following:

        me@mycomp:~$ connect-proxy -H test@proxyserv:3128 targethost 22
        Enter proxy authentication password for test@proxyserv:
        SSH-2.0-OpenSSH_6.2p2 Ubuntu-6

    It hangs on the last line, expecting some input. All attempts resulted in a "Protocol mismatch." error. PuTTY successfully connects to the HTTP proxy and sends credentials, showing me the SSH login right away.

    How do I do (with commands in Ubuntu) the same thing PuTTY does? Is there any other way than the connect-proxy command to do this?

    Edit: Also tried the following, with the same result ("Protocol mismatch"):

        me@mycomp:~$ connect-proxy -H test@proxyserv:3128 targethost 22 ssh -l myshel_login

    Thanks in advance.

    Edit: Solution details (thanks to NickW for pointing the right way). I installed corkscrew and added this to ssh_config:

        Host targethost
            ProxyCommand corkscrew proxyserv 3128 %h %p /etc/ssh/proxypass

    created the proxypass file containing:

        login:password

    restarted ssh, and used a simple ssh command:

        ssh mylogin@targethost

    (the ssh password was asked for as usual)
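
    The same corkscrew setup also works as a one-off, without touching ssh_config; a sketch reusing the proxy host, port and auth file from the solution above:

        ssh -o 'ProxyCommand corkscrew proxyserv 3128 %h %p /etc/ssh/proxypass' mylogin@targethost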

    Read the article

  • Balancing internal services using a Cisco CSS 11501

    - by Ladadadada
    First, the background to the problem: I have a Cisco CSS 11501 that I am using to load balance a few web servers. These web servers have two network interfaces, one internal and one external, and we are sending the requests to the internal interface. We have the CSS configured to do NAT because our webservers need to see the client's IP address. Because the TCP packets hit the webservers with a source address on the Internet, the webserver tries to send the packet back to the client over the external interface and not through the load balancer. In order to stop these requests being sent back out to the Internet via the external interface, we added a routing rule on these boxes so that all traffic with a source address on the Internet will use the load balancer as its gateway. This part works fine.

    What I would also like to do is use the CSS as a load balancer for internal services such as our MySQL slaves. When I do this, I run into a similar problem: the TCP connection goes from the web server to the load balancer and then from the load balancer to the MySQL slave, but the CSS spoofs a source address of the original webserver. The MySQL slave then tries to send the response directly to the webserver via the internal network and not via the load balancer.

    The ideal solution would be to tell the CSS not to do source address spoofing on the internal network and only do it for requests originating on the Internet. Is this possible? Failing that, is there a way of directing the load-balanced traffic back through the load balancer while keeping the other traffic (say SSH) purely on the internal network? Is there another way of using the CSS 11501 to load balance internal services?

    Read the article

  • Robocopy silently missing files

    - by John Hunt
    I'm using Robocopy to sync data from our server's hard disk to an external disk as a backup. It's a pretty simple solution but pretty much the best/easiest one we could come up with; we use two external disks and rotate them offsite. Anyway, here's the script (with the comments taken out) that I'm using to do it. It works very well, it's quick and almost 100% complete, however it's acting pretty strangely with a few files (note the company name has been changed in paths to protect the innocent):

        @ECHO OFF
        set DATESTAMP=%DATE:~10,4%/%DATE:~4,2%/%DATE:~7,2% %TIME:~0,2%:%TIME:~3,2%:%TIME:~6,2%
        SET prefix="E:\backup_log-"
        SET source_dir="M:\Company Names Data\Working Folder\_ADMIN_BACKUP_FILES\COMPA AANY Business Folder_Backup_040407\COMPANY_sales order register\BACKUP CLIENT FOLDERS & CURRENT JOBS pre 270404\CLIENT SALES ORDER REGISTER"
        SET dest_dir="E:\dest"
        SET log_fname=%prefix%%date:~-4,4%%date:~-10,2%%date:~-7,2%.log
        SET what_to_copy=/COPY:DAT /MIR
        SET options=/R:0 /W:0 /LOG+:%log_fname% /NFL /NDL

        ROBOCOPY %source_dir% %dest_dir% %what_to_copy% %options%

        set DATESTAMP=%DATE:~10,4%/%DATE:~4,2%/%DATE:~7,2% %TIME:~0,2%:%TIME:~3,2%:%TIME:~6,2%
        cscript msg.vbs "Backup completed at %DATESTAMP% - Logs can be found on the E: drive."
        :END

    Normally the source would just be M:\Company name data\ but I altered the script a bit to test the problem. The following files in the source are not copied to the dest:

        Someclient\SONICP~1.DOC
        Someclient\SONICP~2.DOC
        Someclient\SONICP~3.DOC

    However, files in the same directory named:

        TIMESH~1.XLS
        TIMESH~2.XLS

    are copied. I'm able to open the files that aren't copied with no trouble at all, and they certainly weren't open when I ran Robocopy, so it's not a locking issue. Robocopy is running as administrator, so it's not a permissions issue. There's no trace that these files were even attempted, as there are no errors being output in the log or in my command prompt.

    Does anyone have any suggestions as to what this might be? Busted hard disk?

    Cheers, John.

    Read the article

  • In spite of correct DNS, Exchange sending to wrong destination server for single outbound domain

    - by beporter
    My company uses an SBS 2003 server and makes use of Exchange to host our own email. We also have a linux server hosting domains for some of our clients. In order for us to send to those clients, we had internal DNS set up to shadow the client domains to provide "correct" MX records inside our network. For example, public DNS for a domain abc.com might point to 1.2.3.4, but internally we have MX records set up to route mail for abc.com to 172.16.0.4, which is the linux email server. This setup was entirely functional; this is just back story.

    We've recently moved one of our client domains from our internal linux server to an external email provider. When we did that, we naturally deleted our internal shadow DNS records so our Exchange server would fetch correct (public) DNS records and route mail out to the new external host. This has NOT had any effect on Exchange though. Even after rebooting the Exchange server and completely flushing the DNS cache (nslookups on the Exchange machine itself correctly resolve to the new external address), Exchange still attempts to deliver messages for the domain to our internal server! Exchange correctly routes to all other internal and external domains when sending email.

    Somehow Exchange is trying to deliver to a machine that by all accounts it has no business trying to use for just this one domain. Is there a DNS cache that Exchange uses internally? Is there a way to flush that internal cache? What else could I be missing?

    Read the article

  • Gwibber doesn't launch post-upgrade

    - by Arcath
    I updated my PC from Ubuntu 9.10 to 10.04, and one of the things I was looking forward to was the "Me menu" and Gwibber. For some reason Gwibber doesn't launch at all. When I try to launch it from a terminal I get:

        [21:02:20][arcath@Highgate ~]$ gwibber
        ** (gwibber:8182): WARNING **: Trying to register gtype 'WnckWindowState' as enum when in fact it is of type 'GFlags'
        ** (gwibber:8182): WARNING **: Trying to register gtype 'WnckWindowActions' as enum when in fact it is of type 'GFlags'
        ** (gwibber:8182): WARNING **: Trying to register gtype 'WnckWindowMoveResizeMask' as enum when in fact it is of type 'GFlags'
        Traceback (most recent call last):
          File "/usr/bin/gwibber", line 50, in <module>
            from gwibber import client
          File "/usr/lib/python2.6/dist-packages/gwibber/client.py", line 3, in <module>
            import gtk, gobject, gwui, util, resources, actions, json, gconf
          File "/usr/lib/python2.6/dist-packages/gwibber/gwui.py", line 2, in <module>
            import os, json, urlparse, resources, util
          File "/usr/lib/python2.6/dist-packages/gwibber/util.py", line 2, in <module>
            from microblog.util.couch import RecordMonitor
          File "/usr/lib/python2.6/dist-packages/gwibber/microblog/util/couch.py", line 10, in <module>
            OAUTH_DATA = desktopcouch.local_files.get_oauth_tokens()
          File "/usr/lib/python2.6/dist-packages/desktopcouch/local_files.py", line 323, in get_oauth_tokens
            oauth_token_secrets = cf.items_in_section("oauth_token_secrets")[0]
          File "/usr/lib/python2.6/dist-packages/desktopcouch/local_files.py", line 189, in items_in_section
            raise ValueError("Section %r not present." % (section_name,))
        ValueError: Section 'oauth_token_secrets' not present.

    I can't work out what's wrong with it. Can anyone point me in the right direction?
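
    The traceback bottoms out in desktopcouch not finding an oauth_token_secrets section in its local configuration; a quick check of whether that section exists at all (the ini and cache paths below are desktopcouch's usual locations, which is an assumption here):

        # Look for the section the traceback says is missing.
        grep -A4 '\[oauth_token_secrets\]' ~/.local/share/desktop-couch/desktop-couchdb.ini
        # See whether desktopcouch has created anything at all for this user.
        ls ~/.local/share/desktop-couch/ ~/.cache/desktop-couch/ 2>/dev/null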

    Read the article
