Search Results

  • tcp msl timeout

    - by iamrohitbanga
    The following is given in the book TCP/IP Illustrated by Stevens:

        Quiet Time Concept: The 2MSL wait provides protection against delayed segments from an earlier
        incarnation of a connection being interpreted as part of a new connection that uses the same
        local and foreign IP addresses and port numbers. But this works only if a host with connections
        in the 2MSL wait does not crash. What if a host with ports in the 2MSL wait crashes, reboots
        within MSL seconds, and immediately establishes new connections using the same local and foreign
        IP addresses and port numbers corresponding to the local ports that were in the 2MSL wait before
        the crash? In this scenario, delayed segments from the connections that existed before the crash
        can be misinterpreted as belonging to the new connections created after the reboot. This can
        happen regardless of how the initial sequence number is chosen after the reboot. To protect
        against this scenario, RFC 793 states that TCP should not create any connections for MSL seconds
        after rebooting. This is called the quiet time. Few implementations abide by this, since most
        hosts take longer than MSL seconds to reboot after a crash.

    Do operating systems now wait for 2MSL seconds after a reboot before initiating a TCP connection? Boot times are also shorter these days. Ports and sequence numbers are randomized, but is this wait implemented in Linux?
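
    For context, the related TIME_WAIT (2MSL) state can be inspected on a modern Linux box; a quick sketch (paths are standard Linux, values vary by distribution):

        # list sockets currently in TIME_WAIT (the 2MSL state)
        ss -tan state time-wait

        # Linux hard-codes TCP_TIMEWAIT_LEN to 60 s in the kernel source;
        # this sysctl governs the FIN-WAIT-2 timeout, often confused with it
        cat /proc/sys/net/ipv4/tcp_fin_timeout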

  • Setup git repository on gentoo server using gitosis & ssh

    - by ikso
    I installed git and gitosis as described in this guide. Server: Gentoo. Client: Mac OS X. Here are the steps I took:

    1) Install git:

        emerge dev-util/git

    2) Install gitosis:

        cd ~/src
        git clone git://eagain.net/gitosis.git
        cd gitosis
        python setup.py install

    3) Add the git user:

        adduser --system --shell /bin/sh --comment 'git version control' --no-user-group --home-dir /home/git git

    In /etc/shadow now: git:!:14665::::::

    4) On the local computer (Mac OS X; local login is ipx, server login is expert):

        ssh-keygen -t dsa

    This produced two files: ~/.ssh/id_dsa.pub and ~/.ssh/id_dsa.

    5) Copied id_dsa.pub onto the server as ~/.ssh/id_dsa.pub, appended its content to ~/.ssh/authorized_keys, then initialized gitosis:

        cp ~/.ssh/id_dsa.pub /tmp/id_dsa.pub
        sudo -H -u git gitosis-init < /tmp/id_rsa.pub
        sudo chmod 755 /home/git/repositories/gitosis-admin.git/hooks/post-update

    6) Added two params to /etc/ssh/sshd_config:

        RSAAuthentication yes
        PubkeyAuthentication yes

    Full sshd_config:

        Protocol 2
        RSAAuthentication yes
        PubkeyAuthentication yes
        PasswordAuthentication no
        UsePAM yes
        PrintMotd no
        PrintLastLog no
        Subsystem sftp /usr/lib64/misc/sftp-server

    7) Local settings in file ~/.ssh/config:

        Host myserver.com.ua
          User expert
          Port 22
          IdentityFile ~/.ssh/id_dsa

    8) Tested:

        ssh [email protected]

    Done!

    9) The next step is where I have the problem:

        git clone [email protected]:gitosis-admin.git
        cd gitosis-admin

    SSH asked for a password for user git. Why should ssh allow me to log in as user git? The git user doesn't have a password, and the ssh key I created is for the user expert. How is this supposed to work? Do I have to add some params to sshd_config?
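
    For reference, gitosis grants access based on entries in gitosis.conf inside the gitosis-admin repository, where member names must match the public key filename (minus .pub) that was fed to gitosis-init; a minimal sketch with hypothetical names:

        [gitosis]

        [group gitosis-admin]
        # must match the key filename, e.g. expert@mac.pub
        members = expert@mac
        writable = gitosis-admin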

  • virtualisation with kvm: export services from guest to the host

    - by ascobol
    Hello, I would like to export some services from the guest OS to the host OS via kvm, and along the way learn some things about networking. I have tried the following commands on the host (Kubuntu 10.04):

        $ sudo tunctl -u ascobol
        Set 'tap0' persistent and owned by uid 2401
        $ sudo ifconfig tap0 192.168.2.1 netmask 255.255.255.0 broadcast 192.168.2.255

    The ifconfig command returns:

        $ /sbin/ifconfig tap0
        Link encap:Ethernet  HWaddr 3e:4e:e3:cc:bc:92
        inet addr:192.168.2.1  Bcast:192.168.2.255  Mask:255.255.255.0
        inet6 addr: fe80::3c4e:e3ff:fecc:bc92/64 Scope:Link
        UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
        RX packets:0 errors:0 dropped:0 overruns:0 frame:0
        TX packets:0 errors:0 dropped:17 overruns:0 carrier:0
        collisions:0 txqueuelen:500
        RX bytes:0 (0.0 B)  TX bytes:0 (0.0 B)

        $ route -n
        Kernel IP routing table
        Destination     Gateway         Genmask         Flags Metric Ref    Use Iface
        192.168.2.0     0.0.0.0         255.255.255.0   U     0      0        0 tap0

    Then I run the virtual machine (Ubuntu Server 10.04):

        $ sudo kvm -hda ubuntuserver104.qcow2 -net nic -net tap,name=tap0,script=no

    (I'm using sudo because without it kvm fails with the following message:)

        warning: could not configure /dev/net/tun: no virtual network emulation

    With sudo the virtual machine boots; I just get this message:

        pci_add_option_rom: failed to find romfile "pxe-rtl8139.bin"

    In the virtual machine:

        $ ifconfig eth0 192.168.2.2 netmask 255.255.255.0 broadcast 192.168.2.255

    Now if I run:

        $ ssh 192.168.2.2

    I just get "No route to host". What is wrong with this setup? Thanks!
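
    One detail worth checking (an assumption on my part, not confirmed in the post): in this qemu/kvm syntax, name= only labels the VLAN client, while ifname= is what binds the guest NIC to a pre-created host tap device; a sketch:

        # bind the guest NIC to the persistent tap0 created with tunctl
        sudo kvm -hda ubuntuserver104.qcow2 \
            -net nic -net tap,ifname=tap0,script=no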

  • multiple ssh aliases is selecting wrong user when forwarding

    - by Chris Beck
    I'm following the dual-identity procedure for Bitbucket: I have two Bitbucket accounts, ccmcbeck and chrisbeck. The former is personal, the latter is work. On my local Mac, I have this in my ~/.ssh/config:

        Host *.work.com
          User chris
          ForwardAgent yes
          IdentityFile ~/.ssh/work_dsa

        Host bitbucket-personal
          HostName bitbucket.org
          User ccmcbeck
          ForwardAgent no
          IdentityFile ~/.ssh/bitbucket_ccmcbeck_rsa

        Host bitbucket-work
          HostName bitbucket.org
          User chrisbeck
          ForwardAgent no
          IdentityFile ~/.ssh/bitbucket_chrisbeck_rsa

    On my local Mac, ssh -T works as expected:

        $ ssh -T git@bitbucket-personal
        logged in as ccmcbeck.
        $ ssh -T git@bitbucket-work
        logged in as chrisbeck.

    On my local Mac, the ssh version is OpenSSH_6.2p2, OSSLShim 0.9.8r 8 Dec 2011.

    When I ssh foo.work.com to my Linux box, I get:

        $ ssh-add -l
        1024 ... /Users/chris/.ssh/work_dsa (DSA)
        2048 ... /Users/chris/.ssh/bitbucket_ccmcbeck_rsa (RSA)
        2048 ... /Users/chris/.ssh/bitbucket_chrisbeck_rsa (RSA)

    On foo.work.com, I also have this in my ~/.ssh/config:

        Host bitbucket-personal
          HostName bitbucket.org
          User ccmcbeck
          ForwardAgent no
          IdentityFile ~/.ssh/bitbucket_ccmcbeck_rsa

        Host bitbucket-work
          HostName bitbucket.org
          User chrisbeck
          ForwardAgent no
          IdentityFile ~/.ssh/bitbucket_chrisbeck_rsa

    However, on foo.work.com, ssh -T references the wrong User for git@bitbucket-work:

        $ ssh -T git@bitbucket-personal
        logged in as ccmcbeck.
        $ ssh -T git@bitbucket-work
        logged in as ccmcbeck.

    On foo.work.com, the ssh version is OpenSSH_4.3p2, OpenSSL 0.9.8e-fips-rhel5 01 Jul 2008.

    Why is my configuration causing foo.work.com to reference the wrong User?
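
    One mechanism worth noting (my assumption, not from the poster): with a forwarded agent, ssh offers the agent's keys in order before the configured IdentityFile, and Bitbucket identifies you by whichever key it accepts first. IdentitiesOnly restricts each host alias to its configured key; a sketch:

        Host bitbucket-work
          HostName bitbucket.org
          User chrisbeck
          ForwardAgent no
          # only offer the key named below, not every key in the agent
          IdentitiesOnly yes
          IdentityFile ~/.ssh/bitbucket_chrisbeck_rsa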

  • Windows Service with Logon a user

    - by David.Chu.ca
    I have a service running on a Windows XP box and on a Windows Server 2003 box, set to start automatically. If I set the service to log on as a local user, will the service stop running when that user logs off Windows? In other words, do I have to keep the user logged in to keep the service running? I understand that the service can be set to run as Local System, which needs no one to be logged in as long as the box is powered on. However, the service I need to run has to run as a local user with the required permissions. The problem I have right now is that the box may reboot itself. If that happens, no one is logged in until someone logs in as the required user. I want to confirm whether the required user has to be logged on first. If that's the case, any suggestion for dealing with the unexpected-reboot issue?
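
    For reference, a service's logon account is independent of interactive sessions (a service started this way runs whether or not anyone is logged on) and can be set from the command line; a sketch with a hypothetical service and account name:

        rem configure the service to run under a local account; note the
        rem required space after each = sign (MyService and svcuser are examples)
        sc config MyService obj= ".\svcuser" password= "S3cret!"
        sc config MyService start= auto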

  • Cannot access domain from windows 2003 client

    - by Peuge
    Hey all, first off, I am a novice at AD and DNS, so please bear with me. This is my current situation: I have one server which is a DC and DNS server (win2k3), machine1. I have another machine which is trying to join this domain, machine2, also a win2k3 server. This is what I have done so far: I set up DNS on the DC, and its TCP/IP DNS setting points to itself. On machine2 I set the DNS to point to the DC. DNS has been set up with a forward lookup zone with the same name as the domain (accdirect.com). I can ping machine1 from machine2 by its FQDN and IP. I have set up forwarders on the DC for our ISP's DNS and can browse the internet on both machines. In the DNS MMC on the DC I can see a host (A) record has been created for machine2. The problem is I still cannot join the domain. When I try to join via My Computer -> Properties, it brings up the username/password box, and after I click OK it says it cannot find domain accdirect.com. If I run this from machine2:

        dcdiag /s:accdirect.com /u:accdirect.com\admin /p:

    then I get the following:

        Performing initial setup:
        ** Warning: could not confirm the identity of this server in the directory
        versus the names returned by DNS servers. If there are problems accessing
        this directory server then you may need to check that this server is
        correctly registered with DNS
        [accdirect.com] Directory Binding Error 1722:
        Win32 Error 1722
        This may limit some of the tests that can be performed.
        Done gathering initial info.

    On the DC, all dcdiag and netdiag results pass. If anyone could help me I would really appreciate it! Sorry if any of my terminology is a bit off, I have only been doing this for two days. Thanks, Peuge
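
    One common check (my suggestion, not in the original post): joining a domain relies on the SRV locator records the DC registers under its zone, which can be queried directly from machine2; a sketch:

        rem from machine2, verify the DC's locator records exist in DNS
        nslookup -type=SRV _ldap._tcp.dc._msdcs.accdirect.com
        nslookup -type=SRV _kerberos._tcp.accdirect.com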

  • Gnome 3 gdm fails to start after preupgrade from fedora 14 to 15

    - by digital illusion
    I'm not able to boot Fedora 15 in runlevel 5. After all services start, when the login screen should appear, gdm just shows a mouse waiting cursor and keeps restarting itself. From /var/log/gdm/:0-greeter.log:

        Gtk-Message: Failed to load module "pk-gtk-module"
        /usr/bin/gnome-session: symbol lookup error: /usr/lib/gtk-3.0/modules/libatk-bridge.so: undefined symbol: atk_plug_get_type
        /usr/libexec/gnome-settings-daemon: symbol lookup error: /usr/lib/gtk-3.0/modules/libatk-bridge.so: undefined symbol: atk_plug_get_type

    Where should atk_plug_get_type be defined?

    Edit: here is a better description of the error:

        (system-config-network-gui:2643): Gnome-WARNING **: Accessibility: failed to find module 'libgail-gnome' which is needed to make this application accessible
        /usr/bin/python: symbol lookup error: /usr/lib/gtk-2.0/modules/libatk-bridge.so: undefined symbol: atk_plug_get_type

    Why are there still references to gtk2? Did preupgrade fail? Attaching the upgrade log... it seems gdm was not added, but it is present in the users and groups list.

        May 26 11:25:52 sysimage sendmail[1076]: alias database /etc/aliases rebuilt by root
        May 26 11:25:52 sysimage sendmail[1076]: /etc/aliases: 77 aliases, longest 23 bytes, 795 bytes total
        May 26 11:46:09 sysimage useradd[1793]: failed adding user 'dbus', data deleted
        May 26 11:53:37 sysimage systemd-machine-id-setup[2443]: Initializing machine ID from D-Bus machine ID.
        May 26 11:55:28 sysimage useradd[2835]: failed adding user 'apache', data deleted
        May 26 11:55:38 sysimage useradd[2842]: failed adding user 'haldaemon', data deleted
        May 26 11:55:43 sysimage useradd[2848]: failed adding user 'smolt', data deleted
        May 26 11:57:32 sysimage sendmail[3032]: alias database /etc/aliases rebuilt by root
        May 26 11:57:32 sysimage sendmail[3032]: /etc/aliases: 77 aliases, longest 23 bytes, 795 bytes total
        May 26 11:57:46 sysimage groupadd[3066]: group added to /etc/group: name=cgred, GID=482
        May 26 11:57:47 sysimage groupadd[3066]: group added to /etc/gshadow: name=cgred
        May 26 11:57:47 sysimage groupadd[3066]: new group: name=cgred, GID=482
        May 26 11:58:42 sysimage useradd[3086]: failed adding user 'ntp', data deleted
        May 26 12:00:13 sysimage dbus: avc: received policyload notice (seqno=2)
        May 26 12:15:08 sysimage useradd[4950]: failed adding user 'gdm', data deleted
        May 26 12:24:39 sysimage dbus: avc: received policyload notice (seqno=3)
        May 26 12:25:24 sysimage useradd[5522]: failed adding user 'mysql', data deleted
        May 26 12:25:37 sysimage useradd[5533]: failed adding user 'rpcuser', data deleted
        May 26 12:26:31 sysimage useradd[5592]: failed adding user 'tcpdump', data deleted

    Any suggestions before I revert the installation to F14?
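
    For what it's worth, atk_plug_get_type lives in libatk itself (it arrived with ATK's AtkPlug/AtkSocket support), so a stale or mismatched libatk would explain the lookup failure; a quick check sketch (library paths are assumptions for a 32-bit Fedora 15):

        # confirm whether the installed ATK exports the missing symbol
        nm -D /usr/lib/libatk-1.0.so.0 | grep atk_plug_get_type

        # and check which package owns the stale gtk2 bridge module
        rpm -qf /usr/lib/gtk-2.0/modules/libatk-bridge.so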

  • Booting Debian5 (Lenny) on 2.6.16 Kernel

    - by bk
    Due to a proprietary kernel module that I don't have the source to, and which is very picky about what kernel versions it will load into (even with modprobe -f), I find myself needing to run a 2.6.16.xx kernel on my Debian 5 machine. The machine boots fine with the stock 2.6.26-2 kernel, and I have successfully built and booted 2.6.26- and 2.6.31-based kernels by making a .deb and then doing dpkg -i. However, when I take the same approach with 2.6.16, the kernel hangs at boot. I'm testing this in a VMware image, so I don't think it's an issue of newer hardware not being supported by the older kernel. For a working kernel, at boot I get:

        Uncompressing Linux.. OK booting the kernel
        Loading, please wait...
        mdadm: No devices listed in the conf file were found
        kinit name_to_dev_t /dev/hda5 (dev5,3) ...

    With 2.6.16.60, I never get the kinit message. It hangs after the mdadm line. There are no mdadm arrays on this machine, so I doubt it's an issue inside the mdadm stuff, which is supposed to just error out as it does in the 2.6.26 case above; for some reason I'm getting stuck getting into kinit. I've been banging my head against this wall, so I'm very open to suggestions on how to troubleshoot this.
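
    For reference, the usual Debian flow of that era for producing a kernel .deb was kernel-package; a sketch (revision string is an example, and the package may be named kernel-image-... on older kernel-package versions):

        # build a kernel .deb with an initramfs, then install it
        cd /usr/src/linux-2.6.16.60
        make oldconfig
        fakeroot make-kpkg --initrd --revision=custom.1 kernel_image
        dpkg -i ../linux-image-2.6.16.60_custom.1_i386.deb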

  • Apache2 slow serving static while healthy

    - by user45339
    My Apache status looks like:

        201 requests/sec - 98.8 kB/second - 504 B/request
        85 requests currently being processed, 345 idle workers

        _____CCW_C_____C__C__C_R____C_WC_________C__C____CW__C__CCC_____
        __C____W______C___C___CW__C_C______C__W_C__C_____CCC____C______R
        CC_C_______C___C____C______________C______C__C________________C_
        ___________________C______________________C_______C___C_____C___
        CC____C__C___R_____C_C_CC__________C___C___________R____C_C_C___
        ______C______W_W__W___C____________________C__WCC__R__R_C_______
        R__RC________________________C___R____W__C____..................
        ....................................................

    Server load averages 2 on a 4-core machine. IO utilization is 10-15% and rarely jumps over 70%. The machine has almost 4 GB free and uses no swap. The site on the machine is a PHP site. All the PHP code is optimized and mostly fast when accessed; however, sometimes requests get stuck. Stuck meaning no response for at least 10 seconds. We debugged the PHP code, but it is quite optimal and fast. We spent a lot of time on it until we decided to test requesting a static test.html page containing only:

        <html><body>test</body></html>

    This static resource also gets 'stuck' in the same manner the PHP pages get 'stuck'. How is this possible given the health of the system? I tested the network, but when PHP shows 'slowness' in the site monitoring, the html test file also takes far longer than 10 seconds to load using:

        time lynx -dump http://127.0.0.1/test.html

    We are quite desperate to solve this problem, but we cannot seem to tackle it.
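
    One way to separate Apache queuing from network effects (my suggestion): hammer the static file locally with concurrency and watch the latency distribution; a sketch:

        # 500 requests, 50 concurrent, against the local static file;
        # the percentile table at the end shows whether a few requests stall
        ab -n 500 -c 50 http://127.0.0.1/test.html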

  • MySQL not releasing temp file descriptors

    - by Wakaru44
    Since a few days ago, we've been experiencing some serious problems with our MySQL installation: MySQL keeps opening temporary files (normal behaviour), but these files are never released. The consequence is that, eventually, the disk space is exhausted and we have to restart the service and clean up /tmp manually. Using lsof, we see something like this:

        mysqld  16866  mysql   5u  REG  8,3  0   692  /tmp/ibyWJylQ (deleted)
        mysqld  16866  mysql   6u  REG  8,3  0   707  /tmp/ibf5adsT (deleted)
        mysqld  16866  mysql   7u  REG  8,3  0   728  /tmp/ibGjPRyW (deleted)
        mysqld  16866  mysql   8u  REG  8,3  0  5678  /tmp/ibMQDLMZ (deleted)
        mysqld  16866  mysql  13u  REG  8,3  0  5679  /tmp/ibQAnM42 (deleted)

    Maybe it's not related, but when we shut down the server, the files are finally freed, and we can see the following warnings in the MySQL log:

        121029  7:44:27 [Warning] /usr/local/mysql/bin/mysqld: Forcing close of thread 1333  user: 'xxx'
        121029  7:44:27 [Warning] /usr/local/mysql/bin/mysqld: Forcing close of thread 1156  user: 'yyy'
        121029  7:44:27 [Warning] /usr/local/mysql/bin/mysqld: Forcing close of thread 1151  user: 'zzz'

    where 'xxx', 'yyy' and 'zzz' are distinct MySQL users (and the only three users with active connections to the database). We have a few theories:

    - There is a problem in the OS that keeps file handles open. Could it be possible that the OS "delete" operation blocks the threads until shutdown? This may explain the warning at shutdown and the fact that the files are finally deleted when the process dies.
    - Until now, data sets were so small that the temp files were relatively small and there was enough time to release the file handles without exhausting disk space.

    We are using MySQL 5.5 on RHEL 6.2 with the default kernel.
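
    A quick way to see how much space those unlinked descriptors are pinning (my suggestion; on Linux, an open-but-deleted file keeps its blocks until the last fd closes). Column positions assume stock lsof output, where field 7 is SIZE/OFF:

        # list mysqld's deleted-but-open temp files and sum their sizes
        lsof -p "$(pidof mysqld)" | awk '/deleted/ {sum += $7; print} END {print "total bytes:", sum}'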

  • nxclient crashes when trying to open a terminal from a remote client through "ssh -Y"

    - by user167328
    I support around 150 Linux machines. I have two virtual machines on an ESXi server which I access via NoMachine NX client v3 from a Windows 7 box. These machines run CentOS 5 with KDE and Lubuntu 12.04.1, and they are the admin GUIs from which I support the 150 machines. The Linux machines I manage are RedHat 4/5, CentOS 5 and Ubuntu 10 and 12. Normally I contact the machines via ssh -Y. Today I did an ssh -Y to a remote machine running Ubuntu 12.10 and ssh 6.0p1. Then I tried to open an lxterminal on the remote machine, which should display on my KDE desktop. This immediately and reproducibly crashed my nxclient session. I tried again from my Lubuntu system, with the same effect. I have not observed the phenomenon from other machines yet. The message log on my KDE host shows:

        Unexpected termination of nxagent because of signal: 11
        Logger::log nxnode 3920

    Googling for this revealed no usable answer. Does anybody have a clue what is going on here, or can give a hint on how to solve the issue?

    Add-on: I asked the user at the remote machine to export his DISPLAY to my host and open an lxterminal. This worked without problems, i.e. the nxclient did not crash. Then the user tried to send me xeyes, and this also killed the nxclient with the same error message in the message log as above. This makes me suspect that the problem is not solely connected to ssh but maybe to some library stuff.

  • Some HTTPS connections via NAT fail, but work on firewall itself.

    - by hnxn
    Hi, I am having trouble establishing some HTTPS connections from internal machines, even though these same connections work if initiated on the firewall itself. The firewall machine is running Ubuntu 10.04.1 and Shorewall 4.4.6. The internet connection is Bell PPPoE DSL (in Canada). I have tried various MTU settings; it doesn't seem to make any difference. Other protocols (HTTP, FTP, etc.) generally work. The problem seems to be limited to certain sites; this one never works from an internal machine, but always works from the firewall itself.

    From an internal machine:

        $ wget https://images.fedex.com/images/ascend/shared/headers/nxgen/corp_logo.gif
        --2011-01-13 20:51:31--  https://images.fedex.com/images/ascend/shared/headers/nxgen/corp_logo.gif
        Resolving images.fedex.com... 184.24.96.69
        Connecting to images.fedex.com|184.24.96.69|:443... connected.
        ^C

    From the firewall:

        $ wget https://images.fedex.com/images/ascend/shared/headers/nxgen/corp_logo.gif
        --2011-01-13 20:58:28--  https://images.fedex.com/images/ascend/shared/headers/nxgen/corp_logo.gif
        Resolving images.fedex.com... 184.24.96.69
        Connecting to images.fedex.com|184.24.96.69|:443... connected.
        HTTP request sent, awaiting response... 200 OK
        Length: 840 [image/gif]
        Saving to: `corp_logo.gif'

        2011-01-13 20:58:28 (149 MB/s) - `corp_logo.gif' saved [840/840]

    This URL always works from both an internal machine and the firewall: https://encrypted.google.com/images/logos/ssl_logo_lg.gif

    Any troubleshooting tips would be greatly appreciated!
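
    Given PPPoE's reduced MTU, one classic cause of exactly this hang-after-connect pattern is path-MTU blackholing for forwarded traffic, which MSS clamping works around; a sketch (Shorewall also exposes this as CLAMPMSS=Yes in shorewall.conf):

        # clamp TCP MSS on forwarded SYNs to the path MTU (raw iptables form)
        iptables -t mangle -A FORWARD -p tcp --tcp-flags SYN,RST SYN \
            -j TCPMSS --clamp-mss-to-pmtu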

  • administrator user unable to login, suspicious user accounts "sky$", "admin$"

    - by mks
    I have Windows 2008 R2 Standard (64-bit) running in a virtual machine. Suddenly, from yesterday onwards, I am not able to log in as administrator. Nobody changed the password. I am unable to log in both at the console and via remote desktop. Whenever I log in as Administrator I get this error: "The user name or password is incorrect". Nothing has changed on the machine, and I have logged in successfully in the past, both through the console and via remote desktop, several times on the same machine. One strange behaviour I noticed: I see some additional user accounts if I try to log in as another user. The suspicious accounts are:

        sky$
        admin$
        SUPPORT_388945a0

    Were they created by some malware/virus? Or are they hidden Windows accounts? Microsoft's site says of SUPPORT_388945a0:

        The Support_388945a0 account enables Help and Support Service interoperability with signed
        scripts. This account is primarily used to control access to signed scripts that are accessible
        from within Help and Support Services. Administrators can use this account to delegate the
        ability for an ordinary user, who does not have administrative access over a computer, to run
        signed scripts from links embedded within Help and Support Services. These scripts can be
        programmed to use the Support_388945a0 account credentials instead of the user's credentials to
        perform specific administrative operations on the local computer that otherwise would not be
        supported by the ordinary user's account. When the delegated user clicks on a link in Help and
        Support Services, the script executes under the security context of the Support_388945a0
        account. This account has limited access to the computer and is disabled by default.

    However, I am not sure where "admin$" and "sky$" came from. Has anyone had a similar experience?
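
    If you can get a session under any other account, enumerating the local accounts with their SIDs can help tell a built-in account from a planted one (my suggestion, not from the post):

        rem list local accounts and their SIDs; RIDs 500/501 are built-in,
        rem high RIDs on unfamiliar names are worth investigating
        wmic useraccount get name,sid,disabled

        rem check details and last logon of a suspicious account
        net user sky$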

  • Two DHCP servers on the same network

    - by CesarGon
    We are setting up a routing link between the Windows Server 2008 networks of two different buildings in my organisation. Each network uses a different IP addressing scheme (one uses public addresses, the other private), but the goal is to have a single Windows Server domain spanning the gap between the buildings. The link is a 100-Mbps point-to-point line. I have always understood that you should not have more than one DHCP server on a network. However, we are planning to put a domain controller in each building, and each domain controller will be a DNS server and a DHCP server as well. The intention is that a machine booting up in building A gets its IP address from the DHCP server closer to it, in building A, while a machine booting up in building B gets an address from the DHCP server in building B. Since the two buildings will be linked and the network will be a single one, will this work? How can I avoid a machine booting up in building A getting an address from the DHCP server in building B (or vice versa)? Thanks.
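
    For reference, one conventional arrangement for two DHCP servers on one network is the split-scope (80/20) pattern, where each server serves the full subnet but excludes the other's pool; a sketch with hypothetical addresses:

        rem building-A DHCP server: define the scope, exclude building B's pool
        netsh dhcp server add scope 10.0.0.0 255.255.255.0 BldgA
        netsh dhcp server scope 10.0.0.0 add excluderange 10.0.0.150 10.0.0.250

        rem building-B server mirrors this with the complementary exclusion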


  • How do I configure custom URL handlers on OS X?

    - by cwd
    I've been reading a lot online about custom URL handlers / custom protocol handlers, such as:

    - Launching External Applications using Custom Protocols under OSX
    - OS X URL handler to open links to local files

    I get that you can tell the system that a particular program is able to handle a certain scheme/protocol with the Info.plist file:

        <key>CFBundleURLTypes</key>
        <array>
          <dict>
            <key>CFBundleURLName</key>
            <string>Local File</string>
            <key>CFBundleURLSchemes</key>
            <array>
              <string>local</string>
            </array>
          </dict>
        </array>
        <key>NSUIElement</key>
        <true/>

    But if there are multiple applications capable of handling the same URL scheme, such as mailto:, how do you specify which one you want the system to use? There were some references to utilities like the More Internet preference pane, which no longer seems to be available from the author's site. I did find it online by Googling, but it seems a bit shaky - like it was written for an older OS X, perhaps Tiger. I haven't been able to find information on how to set the URL handler for protocols and custom protocols. I'm assuming there is a plist file somewhere that I can edit - or maybe there is a newer, better utility that works well with Mountain Lion?
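
    One option (an assumption on my part that it covers Mountain Lion; verify the exact syntax against its man page): the command-line tool duti sets default handlers through LaunchServices, including URL schemes; a sketch using Chrome's bundle ID as the example handler:

        # make Chrome the default mailto: handler (bundle ID is an example)
        duti -s com.google.Chrome mailto all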

  • pptpd-logwtmp.so config wrong ip address

    - by rgc
    I have set up a PPTP VPN on Ubuntu 10.04 and configured the local and remote IP addresses as follows:

        local IP address 10.0.30.100
        remote IP address 192.168.13.100

    After I connect to the VPN server from Windows, I can only send packets; I cannot receive any packets. I also checked the pptp log on the Ubuntu server and found this strange thing:

        rcvd [IPCP ConfAck id=0x1 <compress VJ 0f 01> <addr 10.0.30.100>]
        rcvd [IPCP ConfReq id=0x2 <compress VJ 0f 01> <addr 192.168.13.100> <ms-dns1 202.119.32.6> <ms-dns2 202.119.32.12>]
        sent [IPCP ConfAck id=0x2 <compress VJ 0f 01> <addr 192.168.13.100> <ms-dns1 202.119.32.6> <ms-dns2 202.119.32.12>]
        Cannot determine ethernet address for proxy ARP
        local IP address 10.0.30.100
        remote IP address 192.168.13.100
        pptpd-logwtmp.so ip-up ppp0 172.25.146.174

    I did not set 172.25.146.174; it should be 192.168.13.100. Can anyone tell me what I should do to fix this problem?
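
    For reference, the addresses handed out come from /etc/pptpd.conf, with DNS set in /etc/ppp/pptpd-options; a minimal sketch matching the values above:

        # /etc/pptpd.conf
        localip 10.0.30.100
        remoteip 192.168.13.100-110

        # /etc/ppp/pptpd-options
        ms-dns 202.119.32.6
        ms-dns 202.119.32.12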

  • Dos/ Flood Lag even though Port not Saturated

    - by Asad Moeen
    My game servers had been under UDP floods, which made them generate output to the attacker and gave them huge lag. Thanks to friends at Server Fault, after different kinds of testing I was able to successfully block the attack. My question is actually something else, but it is important to know how the game servers reacted to the attack and whether the machine stayed stable: 300 kb/s input would cause a game server to generate 2 mb/s output. So as the input rate kept increasing, the output rate would climb so high that the game server could no longer control it, and hence it would lag heavily until the attack stopped. Usually the game server starts to lag when it sends out more than 5 mb/s; below that is controllable. Theoretically, I was able to get 60 mb/s of output from my game server by inputting 10 mb/s. That's just the way the game server works if not protected. Now on some of my machines, only the game server under attack lagged; although that server was generating 60 mb/s of output, the rest of the game servers on other ports of the same machine would run fine without lag. But there was another machine, also on a 100 Mbps network port, where even 1 mbps of input (and zero output, because the attack is blocked), even on an unused port, would give a constant yellow line (on the lag-o-meter) to all clients on all game servers, indicating lag; that line is blue under normal conditions. It would remain the same even at 50 mbps or 900 mbps input. I tried contacting the host about it, because I believe it's the way their network is bridged, but they can't help me with it. Has anyone else seen such issues? If 900 mbps of input does not saturate the port, how can 1 mbps of input lag the servers, when the port is not saturated and enough bandwidth is available?

  • Chrome browser completely messing with network?

    - by kiasecto
    I have a bizarre problem with Google Chrome on an Intel Core i5 running Windows 7 32-bit. Whenever Chrome is installed, access to other computers in the homegroup becomes really slow, such as opening shares, and resolving Windows names becomes really slow. Something goes haywire with the local network: pinging local machines, which usually gives 0 ms pings, I get random timeouts and random successes. Whenever I try to load a local address inside Chrome (including localhost, 192.168.0.1, etc.), it always says something in the status bar about resolving the proxy, times out after about 5 seconds, then seems to work fine. If I go to settings inside Chrome, it just brings up the Internet Explorer connection settings, where I have not set any proxy settings. Once I uninstall Chrome, all these problems go away: network shares and name resolution work instantly, and pings to any machine never have a problem. localhost and other network IP addresses work fine in all other browsers. Has anyone heard of this problem before and know what it might be? I even tried re-installing Windows 7, and the problem came straight back when Chrome was loaded on again.

  • Installing Ruby 1.9.3 OSX 10.7.4 breaks after altering PATH

    - by R V
    I was having trouble installing Ruby 1.9.3-p194 (coming from Ruby 1.8.7) on my Mac OS X 10.7.4. I was trying to fix my Homebrew after running "brew doctor", which gave the message:

        /usr/bin occurs before /usr/local/bin
        This means that system-provided programs will be used instead of
        those provided by Homebrew. The following tools exist at both paths:

            c++-4.2  cpp-4.2  erb  g++-4.2  gcc-4.2  gcov-4.2  gem
            i686-apple-darwin11-cpp-4.2.1  i686-apple-darwin11-g++-4.2.1
            i686-apple-darwin11-gcc-4.2.1  irb  rake  rdoc  ri  ruby  testrb

    I fixed it by entering the following, which I found in another Stack Overflow answer:

        export PATH="/usr/local/bin:/usr/local/sbin:~/bin$PATH"

    Lo and behold! When I type that, Ruby updates to 1.9.3-p194, and Ruby files seem to compile and run just fine. However, afterward, my navigation around the terminal is messed up severely. For instance, I can't do the command "open example_file.html" and have the file pop up in Chrome; instead I get the error:

        -bash: open: command not found

    Also, when I change directory I get an error; inputting "cd desktop" yields the output:

        -bash: dirname: command not found

    but the directory does then change... strange. When I exit a terminal window, all this resets: I'm back to Ruby 1.8.7, I have to use the PATH command again to get to 1.9.3, and command-line navigation breaks again. Any guidance on how to remedy this so I can use 1.9.3-p194 and also have normal terminal navigation would be greatly appreciated.
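
    Worth noting (my observation, not in the original post): the quoted export line has no colon between ~/bin and $PATH, so the old search path, including /usr/bin, gets glued onto ~/bin as one nonexistent directory, which is exactly why open and dirname vanish. A corrected sketch, placed in ~/.bash_profile to make it persist across terminal windows:

        # note the ':' before $PATH; without it the old search path is lost
        export PATH="/usr/local/bin:/usr/local/sbin:$HOME/bin:$PATH"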

  • Exchange 2010 Hub cannot deliver to Exchange 2007 Hub - "451 5.7.3 Cannot achieve Exchange Server authentication"

    - by Graeme Donaldson
    We have an existing Exchange 2007 server in Site A (exch07). I've installed an Exchange 2010 server in Site B (exch10). Both servers have the CAS, Mailbox and Hub roles. Messages sent via SMTP on exch10 which are destined for mailboxes on exch07 are queued, with the "Last Error" reported in Queue Viewer as:

        451 4.4.0 Primary target IP address responded with: "451 5.7.3 Cannot achieve Exchange Server
        authentication." Attempted failover to alternate host, but that did not succeed. Either there
        are no alternate hosts, or delivery failed to all alternate hosts.

    I've found that some people have resolved this by creating new receive connectors which are scoped specifically to apply to connections from the remote hub(s), but I have had no luck doing this. Specifically, I created new receive connectors on both servers with the following settings:

        Remote IP = IP(s) of the remote server
        Authentication = "Transport Layer Security (TLS)" and "Exchange Server authentication"
        Permission Groups = "Exchange servers" and "Legacy Exchange Servers"

    This made no difference; I see the same error message. What am I missing?

    Update: We noticed that the Application log had this error message from MSExchangeTransportService:

        Microsoft Exchange could not find a certificate that contains the domain name exch07.domain.local
        in the personal store on the local computer. Therefore, it is unable to support the STARTTLS SMTP
        verb for the connector exch10 with a FQDN parameter of exch07.domain.local. If the connector's
        FQDN is not specified, the computer's FQDN is used. Verify the connector configuration and the
        installed certificates to make sure that there is a certificate with a domain name for that
        FQDN. If this certificate exists, run Enable-ExchangeCertificate -Services SMTP to make sure
        that the Microsoft Exchange Transport service has access to the certificate key.

    It turns out that the default self-signed certificate was no longer enabled for the SMTP service for some reason. After enabling the self-signed certificate for SMTP, we no longer get the error in the event logs, but delivery is still failing with the same error message.

    Update 2: I put a mailbox on exch10 and attempted to deliver a message via SMTP on exch07, and I get the same error.
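
    For reference, the certificate check and re-enable steps from the Exchange Management Shell look like this (the thumbprint below is a placeholder):

        # inspect which certificates exist and which services they serve
        Get-ExchangeCertificate | Format-List Thumbprint,Services,CertificateDomains

        # re-enable a certificate for SMTP (thumbprint is a placeholder)
        Enable-ExchangeCertificate -Thumbprint AB12CD34... -Services SMTP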

  • TCP: Address already in use exception - possible causes for client port? NO PORT EXHAUSTION

    - by TomTom
    Hello, a stupid problem: I get these from a client connecting to a server. Sadly, the setup is complicated, making debugging complex, and we have run out of options.

    The environment: a client/server system, both running on the same machine. The client is actually a service doing some database manipulation at specific times. The connection goes from C# through OleDb to an EasySoft JDBC driver, to a custom-written JDBC server that hosts logic in C++. Yeah, complex, but the third-party supplier decided to expose the extension mechanisms for their server through a JDBC interface. Not a lot can be done here ;)

    The symptom: at (ir)regular intervals we get an "Address already in use: connect" reported by the JDBC driver. They seem to come from one particular service we run. Now, I did read all the stuff about port exhaustion. This is why we have a little tool running now that counts ports and their states every minute. Last time this happened, we had an astonishing 370 ports in use, with the count rising to about 900 AFTER the error. We already patched the registry (it is a Windows machine) to allow more than the standard 5000 client ports, but even then, we are far, far from that limit to start with. Which is why I am asking here: anyone have an idea what ELSE could cause this?

    It is a Windows 2003 Server machine, 64-bit. The only other thing I can see that may cause it (but this functionality is supposedly disabled) is Symantec Endpoint Protection, which is installed on the server; being capable of acting as a firewall, it could possibly intercept network traffic. I don't want to open a can of worms by pointing to Symantec prematurely (if pointing to Symantec can ever be seen as such). So, anyone have an idea what else may be the cause? Thanks.
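
    For completeness, the two registry values usually involved in client-port exhaustion on Windows 2003, plus a quick way to count wait-state ports (my suggestion):

        rem both live under HKLM\SYSTEM\CurrentControlSet\Services\Tcpip\Parameters
        reg query HKLM\SYSTEM\CurrentControlSet\Services\Tcpip\Parameters /v MaxUserPort
        reg query HKLM\SYSTEM\CurrentControlSet\Services\Tcpip\Parameters /v TcpTimedWaitDelay

        rem count sockets stuck in TIME_WAIT right now
        netstat -an | find /c "TIME_WAIT"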

  • Can't install MailParse on cpanel server

    - by Tom
    Hi, I've got a Linux VPS running CentOS 5.5 (cPanel/WHM). I installed MailParse via the Module Installers section in WHM, and it did install. The end of the setup log:

        running: make INSTALL_ROOT="/root/tmp/pear-build-root/install-mailparse-2.1.5" install
        Installing shared extensions: /root/tmp/pear-build-root/install-mailparse-2.1.5/usr/lib/php/extensions/no-debug-non-zts-20090626/
        running: find "/root/tmp/pear-build-root/install-mailparse-2.1.5" | xargs ls -dils
        508718   4 drwxr-xr-x 3 root root   4096 Feb  6 21:08 /root/tmp/pear-build-root/install-mailparse-2.1.5
        508745   4 drwxr-xr-x 3 root root   4096 Feb  6 21:08 /root/tmp/pear-build-root/install-mailparse-2.1.5/usr
        508746   4 drwxr-xr-x 3 root root   4096 Feb  6 21:08 /root/tmp/pear-build-root/install-mailparse-2.1.5/usr/lib
        508747   4 drwxr-xr-x 3 root root   4096 Feb  6 21:08 /root/tmp/pear-build-root/install-mailparse-2.1.5/usr/lib/php
        508748   4 drwxr-xr-x 3 root root   4096 Feb  6 21:08 /root/tmp/pear-build-root/install-mailparse-2.1.5/usr/lib/php/extensions
        508749   4 drwxr-xr-x 2 root root   4096 Feb  6 21:08 /root/tmp/pear-build-root/install-mailparse-2.1.5/usr/lib/php/extensions/no-debug-non-zts-20090626
        508744 196 -rwxr-xr-x 1 root root 193502 Feb  6 21:08 /root/tmp/pear-build-root/install-mailparse-2.1.5/usr/lib/php/extensions/no-debug-non-zts-20090626/mailparse.so
        Build process completed successfully
        Installing '/usr/lib/php/extensions/no-debug-non-zts-20090626/mailparse.so'
        install ok: channel://pecl.php.net/mailparse-2.1.5
        Extension mailparse enabled in php.ini
        The mailparse.so object is not in /usr/local/lib/php/extensions/no-debug-non-zts-20090626

    Now, when I try to use mailparse functions in PHP, I get the following error:

        PHP Warning: PHP Startup: Unable to load dynamic library '/usr/local/lib/php/extensions/no-debug-non-zts-20090626/mailparse.so' - /usr/local/lib/php/extensions/no-debug-non-zts-20090626/mailparse.so: cannot open shared object file: No such file or directory in Unknown on line 0

    What should I do?
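
    Note the path mismatch visible in the log itself: the build installed mailparse.so under /usr/lib/php/extensions/..., while PHP is trying to load it from /usr/local/lib/php/extensions/.... A sketch of the usual check and fix (verify extension_dir on your own build first):

        # confirm where PHP actually looks for extensions
        php -i | grep extension_dir

        # copy the built module into that directory
        cp /usr/lib/php/extensions/no-debug-non-zts-20090626/mailparse.so \
           /usr/local/lib/php/extensions/no-debug-non-zts-20090626/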

  • check_snmp warning & critical thresholds with negative values

    - by Oesor
    I'm querying some signal level values measured in dBm, and the SNMP host on the remote device reports them as negative values, e.g. -90 dBm. However, check_snmp seems incapable of dealing with negative numbers as part of its threshold values. If I specify the values as part of a collection of OIDs, it accepts the syntax but converts the SNMP value to positive, thus always generating a WARNING/CRITICAL result:

        root@ops-00:/usr/local/nagios/libexec# ./check_snmp -H 192.168.1.100 -o DEVICE-MIB::AverageReceiveSNR.0,DEVICE-MIB::CurrentNoiseFloor.0 -w 10:,~:-85 -c 15:,~:-80 -vvvv
        /usr/bin/snmpget -t 1 -r 5 -m ALL -v 1 [authpriv] 192.168.1.100:161 DEVICE-MIB::AverageReceiveSNR.0 DEVICE-MIB::CurrentNoiseFloor.0
        DEVICE-MIB::AverageReceiveSNR.0 = INTEGER: 25
        DEVICE-MIB::CurrentNoiseFloor.0 = INTEGER: -97
        Processing line 1
          oidname: DEVICE-MIB::AverageReceiveSNR.0
          response: = INTEGER: 25
        Processing line 2
          oidname: DEVICE-MIB::CurrentNoiseFloor.0
          response: = INTEGER: -97
        SNMP CRITICAL - 25 *97* | DEVICE-MIB::AverageReceiveSNR.0=25 DEVICE-MIB::CurrentNoiseFloor.0=97

    If I run it with a single OID, it gives me an error that the format is incorrect:

        root@ops-00:/usr/local/nagios/libexec# ./check_snmp -H 192.168.1.100 -o DEVICE-MIB::CurrentNoiseFloor.0 -w ~:-85 -c ~:-80 -vvvv
        Range format incorrect

    And if I run it with no thresholds defined, it works properly and returns the right value. This makes the graphs correct; however, it will never generate a notification when out of range:

        root@ops-00:/usr/local/nagios/libexec# ./check_snmp -H 192.168.1.100 -o DEVICE-MIB::CurrentNoiseFloor.0 -vvvv
        /usr/bin/snmpget -t 1 -r 5 -m ALL -v 1 [authpriv] 192.168.1.100:161 DEVICE-MIB::CurrentNoiseFloor.0
        DEVICE-MIB::CurrentNoiseFloor.0 = INTEGER: -97
        Processing line 1
          oidname: DEVICE-MIB::CurrentNoiseFloor.0
          response: = INTEGER: -97
        SNMP OK - -97 | DEVICE-MIB::CurrentNoiseFloor.0=-97

    What am I doing wrong here? How would I, for example, generate a CRITICAL when the noise floor is -80 dBm or higher, a WARNING when it's between -85 and -80 dBm, and an OK when it's -85 dBm or lower? Do I have to write my own SNMP plugins when dealing with negative values?
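
    For reference, the Nagios plugin development guidelines define threshold ranges as sketched below; whether a given check_snmp build parses negative bounds correctly varies by version, so treat this as the documented grammar rather than a guaranteed fix:

        # Nagios threshold range grammar: [@]start:end
        #   alert if the value lies OUTSIDE start..end (with a leading @: inside)
        #   "~" means negative infinity; start and end may be negative per the spec
        #   e.g. -w ~:-85 = warn when value > -85, -c ~:-80 = critical when value > -80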

  • Debian, Apache2, CGI: paths issue

    - by Bubnoff
    I have a Perl form email script in the server's cgi-bin directory (/usr/lib/cgi-bin). From /etc/apache2/sites-enabled/000-default:

        ScriptAlias /cgi-bin/ /usr/lib/cgi-bin/
        <Directory "/usr/lib/cgi-bin">
            AllowOverride None
            Options +ExecCGI -MultiViews +SymLinksIfOwnerMatch
            AddHandler cgi-script cgi pl
            Order allow,deny
            Allow from all
        </Directory>

    The issue is with paths. The HTML calls the script here:

        <form name="Request" method="post" action="http://server-test.local/cgi-bin/formprocessorpro.pl" onsubmit="return checkWholeForm49874(this)">

    The directory with the templates and configs is passed here:

        <input type="hidden" name="base_path" value="../contact" />

    The path to this form is: http://server-test.local/formstest/contact.htm

    No matter what variation I try for base_path, I get an error from the formprocessor script that it can't find the directory:

        An error occurred when opening the Form Configuration File (../contact/form.cfg): No such file or directory.

    I need to move this script from an old server, configured by a previous sysadmin, to a new server. Since cgi-bin is automatically linked to /usr/lib/cgi-bin, and linked such that the script resides at:

        http://server-test.local/cgi-bin/formprocessorpro.pl

    I would imagine that, given that the templates are in the webroot in a directory called contact, the correct path would be:

        ../contact

    Any ideas? It's been a while since I've messed with CGI. Bubnoff
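
    Worth keeping in mind (my reasoning, not from the post): Apache runs a CGI script with its working directory set to the script's own directory, so ../contact here resolves to /usr/lib/contact, not to anything under the webroot. An absolute path sidesteps that; the docroot below is an assumption:

        <!-- /var/www is the default Debian docroot; adjust to the actual webroot -->
        <input type="hidden" name="base_path" value="/var/www/contact" />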
