Search Results

Search found 947 results on 38 pages for 'nic'.

Page 9/38 | < Previous Page | 5 6 7 8 9 10 11 12 13 14 15 16  | Next Page >

  • How do I apply multiple subnets to a server with one NIC?

    - by Cosban
    I am trying to route multiple IPs through one physical NIC on my dedicated server for use with Proxmox KVM VMs. The dedicated server is currently running Debian (4.4.5-8) with 3 IP addresses available for use, shown here as 176.xxx.xxx.196 (main), 176.xxx.xxx.198 (on the same subnet as main) and 5.xxx.xxx.166 (different subnet). I am currently trying to route the third IP address on the dedicated server for use with a VPS that I have set up using Proxmox v2.x, but am having a really, really hard time doing so. Virtual interfaces binding the additional IP addresses work as expected, ruling out external routing problems. The provider has given the following information for the IP addresses on the main subnet:

        gateway:   176.xxx.xxx.193
        netmask:   255.255.255.224
        broadcast: 176.xxx.xxx.223

    As well as the following information for the IP address on the second subnet:

        gateway:   5.xxx.xxx.161
        netmask:   255.255.255.248
        broadcast: 5.xxx.xxx.167

    Everything I've tried with /etc/network/interfaces has either not worked or has rendered the network completely useless. This is the current state of the file, which has the secondary IP address on the main subnet working, as well as IPv6, but not the second subnet:

        # Native IPv6 interface
        iface eth0 inet6 manual

        # Bridged IPv4 interface (176.xxx.xxx.193/27)
        auto vmbr0
        iface vmbr0 inet static
            address 176.xxx.xxx.196
            netmask 255.255.255.224
            gateway 176.xxx.xxx.193
            broadcast 176.xxx.xxx.223
            bridge_ports eth0
            bridge_stp off
            bridge_fd 0
            bridge_maxwait 0
            post-up ip addr add 176.xxx.xxx.198/27 dev vmbr0

        auto vmbr1
        iface vmbr1 inet static
            address 5.xxx.xxx.166
            netmask 255.255.255.248
            gateway 5.xxx.xxx.161
            broadcast 5.xxx.xxx.167
            bridge_ports eth0
            bridge_stp off
            bridge_fd 0
            bridge_maxwait 0
            post-up ip addr add 5.xxx.xxx.166/27 dev vmbr1

        # Bridged IPv6 interface (range: xxxx:xxxx:xxxx:xxxx:xxxx:xxxx::/64)
        iface vmbr0 inet6 static
            address xxxx:xxxx:xxxx:xxxx:xxxx:xxxx
            netmask 64
            up ip -6 route add xxxx:xxxx:xxxx:xxxx:xxxx:xxxx dev vmbr0
            down ip -6 route del xxxx:xxxx:xxxx:xxxx:xxxx:xxxx dev vmbr0
            up ip -6 route add default via xxxx:xxxx:xxxx:xxxx:xxxx:xxxx dev vmbr0
            down ip -6 route del default via xxxx:xxxx:xxxx:xxxx:xxxx:xxxx dev vmbr0
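    Two details stand out in the excerpt above (my observations, not taken from the question or its answers): a physical port such as eth0 can only be enslaved to one Linux bridge at a time, so listing bridge_ports eth0 under both vmbr0 and vmbr1 cannot work as written, and the /27 in the last post-up looks inconsistent with the provider's 255.255.255.248 (/29) netmask. A minimal, untested sketch of keeping everything on a single bridge instead, reusing the addresses from the question:

        auto vmbr0
        iface vmbr0 inet static
            address 176.xxx.xxx.196
            netmask 255.255.255.224
            gateway 176.xxx.xxx.193
            bridge_ports eth0
            bridge_stp off
            bridge_fd 0
            # second address on the main subnet
            post-up ip addr add 176.xxx.xxx.198/27 dev vmbr0
            # address from the second subnet, using its /29 netmask
            post-up ip addr add 5.xxx.xxx.166/29 dev vmbr0

    Whether the 5.xxx.xxx.166 address then belongs on the host or inside the Proxmox guest depends on how the provider routes that subnet, so treat this purely as a starting point.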

    Read the article

  • Best NIC config when virtual servers need iSCSI storage?

    - by icky2000
    I have a Windows 2008 server running Hyper-V. There are 6 NICs on the server, configured like this:

        NIC01 & NIC02: teamed administrative interface (RDP, mgmt, etc)
        NIC03: connected to iSCSI VLAN #1
        NIC04: connected to iSCSI VLAN #2
        NIC05: dedicated to one virtual switch for VMs
        NIC06: dedicated to another virtual switch for VMs

    The iSCSI NICs are obviously used for the storage that hosts the VMs. I put half the VMs on the host on the switch assigned to NIC05 and the other half on the switch assigned to NIC06. We have multiple production networks that the VMs could appear on, so the switch ports that NIC05 & NIC06 are connected to are trunked, and we then tag the NIC on the VM for the appropriate VLAN. There is no clustering on this host. Now I wish to assign some iSCSI storage directly to a VM. As I see it, I have 2 options:

        1. Add the iSCSI VLANs to the trunked ports (NIC05 and NIC06), add two NICs to the VM that needs iSCSI storage, and tag them for the iSCSI VLANs.
        2. Create two additional virtual switches on the host, assign one to NIC03 and one to NIC04, add two NICs to the VM that needs iSCSI storage, and let them share that path to the SAN with the host.

    I'm wondering how much overhead the VLAN tagging in Hyper-V has, and I haven't seen any discussion about that. I'm also a bit concerned that something funky on the iSCSI-connected VM could saturate the iSCSI NICs or cause some other problem that could threaten storage access for the entire host, which would be bad. Any thoughts or suggestions? How do you configure your hosts when VMs connect directly to iSCSI?

    Read the article

  • Receiving Multicast Messages on a Multihomed Windows PC

    - by Basti
    I'm developing a diagnostic tool on a PC with several network interfaces, based on multicast/UDP. The user can select a NIC, and the application creates sockets, binds them to this NIC and adds them to the specific multicast group. Sending multicast messages works fine. However, receiving messages only succeeds if I bind the sockets to a specific NIC of my PC. It almost looks as if there is a 'default' NIC for receiving multicast messages in Windows, which is always the first NIC returned by the GetAdapterInfo function. I monitored the network with Wireshark and discovered that the "IGMP Join Group" message isn't sent from the NIC I bound the socket to, but by this 'default' NIC. If I disable this NIC (or remove the network cable), the next NIC in the list returned by GetAdapterInfo is used for receiving multicast messages. I was able to change this 'default' NIC by adding an additional entry to the routing table of my PC, but I don't think this is a good solution to the problem. The problem also occurs with the code appended below. The join-group message isn't sent via 192.168.1.52 but via a different NIC.

        // socket_tst.cpp : Defines the entry point for the console application.
        //
        #include <tchar.h>
        #include <winsock2.h>
        #include <ws2ipdef.h>
        #include <IpHlpApi.h>
        #include <IpTypes.h>
        #include <stdio.h>

        int _tmain(int argc, _TCHAR* argv[])
        {
            WSADATA m_wsaData;
            SOCKET m_socket;
            sockaddr_in m_sockAdr;
            UINT16 m_port = 319;
            u_long m_interfaceAdr = inet_addr("192.168.1.52");
            u_long m_multicastAdr = inet_addr("224.0.0.107");

            int returnValue = WSAStartup(MAKEWORD(2,2), &m_wsaData);
            if (returnValue != S_OK) {
                return returnValue;
            }

            // Create sockets
            if (INVALID_SOCKET == (m_socket = socket(AF_INET, SOCK_DGRAM, IPPROTO_UDP)) ) {
                return WSAGetLastError();
            }

            int doreuseaddress = TRUE;
            if (setsockopt(m_socket, SOL_SOCKET, SO_REUSEADDR, (char*) &doreuseaddress, sizeof(doreuseaddress)) == SOCKET_ERROR) {
                return WSAGetLastError();
            }

            // Configure socket addresses
            memset(&m_sockAdr, 0, sizeof(m_sockAdr));
            m_sockAdr.sin_family = AF_INET;
            m_sockAdr.sin_port = htons(m_port);
            m_sockAdr.sin_addr.s_addr = m_interfaceAdr;

            // bind sockets
            if ( bind( m_socket, (SOCKADDR*) &m_sockAdr, sizeof(m_sockAdr) ) == SOCKET_ERROR ) {
                return WSAGetLastError();
            }

            // join multicast
            struct ip_mreq_source imr;
            memset(&imr, 0, sizeof(imr));
            imr.imr_multiaddr.s_addr  = m_multicastAdr;   // address of multicast group
            imr.imr_sourceaddr.s_addr = 0;                // source address (not used)
            imr.imr_interface.s_addr  = m_interfaceAdr;   // interface address

            /* first join multicast group, then register selected interface as
             * multicast sending interface */
            if( setsockopt( m_socket, IPPROTO_IP, IP_ADD_MEMBERSHIP, (char*) &imr, sizeof(imr)) == SOCKET_ERROR) {
                return SOCKET_ERROR;
            } else {
                if( setsockopt(m_socket, IPPROTO_IP, IP_MULTICAST_IF, (CHAR*)&imr.imr_interface.s_addr, sizeof(&imr.imr_interface.s_addr)) == SOCKET_ERROR ) {
                    return SOCKET_ERROR;
                }
            }

            printf("receiving msgs...\n");
            while(1) {
                // get input buffer from socket
                int sock_return = SOCKET_ERROR;
                sockaddr_in socketAddress;
                char buffer[1500];
                int addressLength = sizeof(socketAddress);

                sock_return = recvfrom(m_socket, (char*) &buffer, 1500, 0, (SOCKADDR*)&socketAddress, &addressLength );
                if( sock_return == SOCKET_ERROR) {
                    int wsa_error = WSAGetLastError();
                    return wsa_error;
                } else {
                    printf("got message!\n");
                }
            }
            return 0;
        }

    Thanks for your help!

    Read the article

  • How do I configure OpenVPN for accessing the internet with one NIC?

    - by Lekensteyn
    I've been trying to get OpenVPN to work for three days. After reading many questions, the HOWTO, the FAQ and even parts of a guide to Linux networking, I cannot get my client an Internet connection through the tunnel. I'm trying to set up an OpenVPN server on a VPS, which will be used for:

        - secure access to the Internet
        - bypassing port restrictions (directadmin/2222 for example)
        - an IPv6 connection, if possible (my client only has IPv4 connectivity, while the VPS has both IPv4 and native IPv6 connectivity)

    I can connect to my server and access the machine (HTTP), but Internet connectivity fails completely. I'm using ping 8.8.8.8 to test whether my connection works or not. Using tcpdump and iptables -t nat -A POSTROUTING -j LOG, I can confirm that the packets reach my server. If I ping 8.8.8.8 on the VPS, I get an echo-reply from 8.8.8.8 as expected. When pinging from the client, I do not get an echo-reply. The VPS has only one NIC, eth0. It runs on Xen. Summary: I want to have a secure connection between my laptop and the Internet using OpenVPN. If that works, I want to have IPv6 connectivity as well.

    Network setup and software:

        Home laptop (eth0: 192.168.2.10) (tap0: 10.8.0.2)
          (running Kubuntu 10.10; OpenVPN 2.1.0-3ubuntu1)
            |
           wifi
            |
        router/gateway (gateway 192.168.2.1)
            |
        INTERNET
            |
        VPS (eth0: 1.2.3.4) (gateway, tap0: 10.8.0.1)
          (running Debian 6; OpenVPN 2.1.3-2)

    The wifi and my home router should not cause problems, since all traffic goes encrypted over UDP port 1194. I've turned IP forwarding on:

        # echo 1 > /proc/sys/net/ipv4/ip_forward

    iptables has been configured to allow forwarding traffic as well:

        iptables -F FORWARD
        iptables -A FORWARD -m state --state RELATED,ESTABLISHED -j ACCEPT
        iptables -A FORWARD -s 10.8.0.0/24 -j ACCEPT
        iptables -A FORWARD -j DROP

    I've tried each of these rules separately without luck (flushing the chains before executing):

        iptables -t nat -A POSTROUTING -s 10.8.0.0/24 -o eth0 -j SNAT --to 1.2.3.4
        iptables -t nat -A POSTROUTING -s 10.8.0.0/24 -o eth0 -j MASQUERADE

    route -n before (server):

        1.2.3.4         0.0.0.0         255.255.255.0   U   0 0 0 eth0
        0.0.0.0         1.2.3.4         0.0.0.0         UG  0 0 0 eth0

    route -n after (server):

        1.2.3.4         0.0.0.0         255.255.255.0   U   0 0 0 eth0
        10.8.0.0        0.0.0.0         255.255.255.0   U   0 0 0 tap0
        0.0.0.0         1.2.3.4         0.0.0.0         UG  0 0 0 eth0

    route -n before (client):

        192.168.2.0     0.0.0.0         255.255.255.0   U   2    0 0 wlan0
        169.254.0.0     0.0.0.0         255.255.0.0     U   1000 0 0 wlan0
        0.0.0.0         192.168.2.1     0.0.0.0         UG  0    0 0 wlan0

    route -n after (client):

        1.2.3.4         192.168.2.1     255.255.255.255 UGH 0    0 0 wlan0
        10.8.0.0        0.0.0.0         255.255.255.0   U   0    0 0 tap0
        192.168.2.0     0.0.0.0         255.255.255.0   U   2    0 0 wlan0
        169.254.0.0     0.0.0.0         255.255.0.0     U   1000 0 0 wlan0
        0.0.0.0         10.8.0.1        128.0.0.0       UG  0    0 0 tap0
        128.0.0.0       10.8.0.1        128.0.0.0       UG  0    0 0 tap0
        0.0.0.0         192.168.2.1     0.0.0.0         UG  0    0 0 wlan0

    SERVER config:

        proto udp
        dev tap
        ca ca.crt
        cert server.crt
        key server.key
        dh dh1024.pem
        server 10.8.0.0 255.255.255.0
        push "redirect-gateway def1"
        ifconfig-pool-persist ipp.txt
        keepalive 10 120
        tls-auth ta.key 0
        comp-lzo
        user nobody
        group nobody
        persist-key
        persist-tun
        log-append openvpn-log
        verb 3
        mute 10

    CLIENT config:

        dev tap
        proto udp
        remote 1.2.3.4 1194
        resolv-retry infinite
        nobind
        persist-key
        persist-tun
        ca ca.crt
        cert client.crt
        key client.key
        ns-cert-type server
        tls-auth ta.key 1
        comp-lzo
        verb 3
        mute 20

    traceroute 8.8.8.8 works as expected (similar output without OpenVPN activated):

        1  10.8.0.1 (10.8.0.1)  24.276 ms  26.891 ms  29.454 ms
        2  gw03.sbp.directvps.nl (178.21.112.1)  31.161 ms  31.890 ms  34.458 ms
        3  ge0-v0652.cr0.nik-ams.nl.as8312.net (195.210.57.105)  35.353 ms  36.874 ms  38.403 ms
        4  ge0-v3900.cr0.nik-ams.nl.as8312.net (195.210.57.53)  41.311 ms  41.561 ms  43.006 ms
        5  * * *
        6  209.85.248.88 (209.85.248.88)  147.061 ms  36.931 ms  28.063 ms
        7  216.239.49.36 (216.239.49.36)  31.109 ms  33.292 ms  216.239.49.28 (216.239.49.28)  64.723 ms
        8  209.85.255.130 (209.85.255.130)  49.350 ms  209.85.255.126 (209.85.255.126)  49.619 ms  209.85.255.122 (209.85.255.122)  52.416 ms
        9  google-public-dns-a.google.com (8.8.8.8)  41.266 ms  44.054 ms  44.730 ms

    If you have any suggestions, please comment or answer. Thanks in advance.
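    For comparison, a minimal NAT rule set for this kind of single-NIC OpenVPN gateway (my own sketch, not taken from the question or its answers; it reuses eth0, tap0 and 10.8.0.0/24 from the configs above):

        echo 1 > /proc/sys/net/ipv4/ip_forward
        # rewrite VPN clients' source address as their traffic leaves the public interface
        iptables -t nat -A POSTROUTING -s 10.8.0.0/24 -o eth0 -j MASQUERADE
        # allow the tunnel's traffic to be forwarded in both directions
        iptables -A FORWARD -i tap0 -s 10.8.0.0/24 -j ACCEPT
        iptables -A FORWARD -o tap0 -m state --state RELATED,ESTABLISHED -j ACCEPT

    The main difference from the rules already tried is matching the tunnel interface explicitly in the FORWARD chain; treat it as a sketch, not a confirmed fix.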

    Read the article

  • How to use iSCSI inside HyperV VM?

    - by William
    I have 2 Dell R710 servers (intended for a Hyper-V cluster) and an MD3000i SAN, set up as follows on Server1/Server2:

        NIC 1: connected to company LAN
        NIC 2: crossover to the other server's NIC 2
        NIC 3: crossover to iSCSI port of SAN controller 1
        NIC 4: crossover to iSCSI port of SAN controller 2

    I have both servers set up as diskless servers with iSCSI boot from the SAN without problems. But how can I access iSCSI from within the VMs so that I can set up clustering between them? I can ping from the host to the SAN, but I found that NIC3/4 cannot be used for a virtual network in Hyper-V. What am I doing wrong?

    Read the article

  • Prioritize One Network Share Over Another And/Or Cap Network Share Traffic? (Windows Server 2008 R2 Enterprise)

    - by FullTimeCoderPartTimeSysAdmin
    One of my fileservers is a Hyper-V VM running Windows Server 2008 R2 Enterprise. The NIC in the VM maps to a 1 Gb NIC on the host that is dedicated just to this VM. I have two shares on the file server. One is very important and used by a few users. The other is less important but used by many users. The issue I'm having: when a ton of users are accessing the unimportant share, it can choke out requests to the important share. What I'd like to do: I'd like to somehow prioritize requests for files on the more important share, or even dedicate a portion of the NIC's bandwidth just to requests for files on that share. Is there any way to do that? Alternately, can I add another NIC and specify that all traffic to one share goes over one NIC and traffic to the other share goes over the other?

    Read the article

  • iSCSI - what's faster?

    - by Unplugme71
    I have a DroboPro that is currently connected to a GS748TS switch. Also connected to the switch are a server and a few workstations. Which method would give better performance?

        1. Add a NIC to the server and connect the DroboPro directly to the new NIC via iSCSI.
        2. Add a NIC to the server and create a dedicated VLAN for the new NIC and the Drobo.
        3. Add a NIC to the server and attach it to a separate switch that the DroboPro also connects to - effectively a private network, similar to a VLAN.

    The DroboPro has a single ethernet connection. The server has a single ethernet connection (currently). The workstations each have a single ethernet connection.

    Read the article

  • Windows Firewall failing after 9-12 hours?

    - by routeNpingme
    I have 2 VM servers with the exact same NIC configuration: Server 2003 R2, one NIC connected to a private (hardware firewall) network in a 10.x private address space, and one NIC connected straight to the public internet. Windows Firewall is enabled for the public Internet NIC only. Now, what doesn't make sense: this generally fails after 9-12 hours. It's not exact, but once or twice a day traffic will just stop on the Internet NIC. There are no event log entries when it happens, and restarting the Windows Firewall service, as well as stopping or restarting IPSec Services (just for fun), has no effect. Once the server is rebooted, everything is fine again for another half day. Any suggestions?

    Read the article

  • How to prevent Ubuntu from combining networks on 2 NIC server?

    - by SolarPower
    I've got an Ubuntu Server 10.10 machine with 2 network interfaces, each cabled to a switch on a completely different network with its own router. One network is 10.1.10.X with a separate gateway/router - the server has an IP of 10.1.10.50 and the gateway IP is 10.1.10.1. The other interface is on 10.2.10.X, with IP 10.2.10.50 and gateway 10.2.10.1. All my Mac machines are on the 10.2.10.X network, and all servers are on 10.1.10.X. The ONLY connection between the two is this machine. From a Mac in my office, I CANNOT ping any computer on the 10.1.10.X network except the Ubuntu machine I'm talking about. However, under the Shared section in Finder, I can see every server on the other network listed. That makes me believe that somehow this Ubuntu machine is letting certain requests span both networks, which is a security problem. Hope this is enough info.
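    The usual first thing to check in this situation (my suggestion, not taken from the question) is whether the box is forwarding between its interfaces, and to block cross-network forwarding explicitly; the interface names below are examples:

        # 1 means the kernel will route between the two NICs
        sysctl net.ipv4.ip_forward
        # turn it off now (and set it in /etc/sysctl.conf to persist across reboots)
        sysctl -w net.ipv4.ip_forward=0
        # or keep forwarding on but refuse to pass traffic between the two networks
        iptables -A FORWARD -i eth0 -o eth1 -j DROP
        iptables -A FORWARD -i eth1 -o eth0 -j DROP

    If discovery entries still show up in Finder afterwards, a service such as avahi reflecting multicast DNS between the interfaces would be another thing to look at.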

    Read the article

  • Multiple IP addresses to one NIC, but advanced IP settings window only shows one?

    - by tridium
    The ipconfig of my Windows Server 2003 server shows that the IP addresses 10.0.0.3, 10.0.0.11, and 10.0.0.12 are assigned to it. However, when I look in the Advanced TCP/IP Settings window for that connection, I only see the IP address of 10.0.0.3 listed there. In the Support tab for the connection, it shows that it's connected through 10.0.0.12, and in the Support Details window, it shows all the previously mentioned IP addresses. Where are these phantom IP addresses being stored and how can I free them up so I don't have any IP conflicts?
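    One common source of such extra addresses on Windows Server 2003 (an assumption on my part, not something stated in the question) is addresses added from the command line, or by clustering/NLB components, rather than through the GUI; netsh can list and remove them. The connection name below is an example:

        netsh interface ip show address "Local Area Connection"
        netsh interface ip delete address "Local Area Connection" addr=10.0.0.11
        netsh interface ip delete address "Local Area Connection" addr=10.0.0.12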

    Read the article

  • Can Xen be configured to dedicate only one port of a dual-port NIC to a domU?

    - by jamieb
    I'm using CentOS 5.4 on my dom0 with a stock Xen kernel. I'm attempting to use the pciback module to hide some of the Ethernet ports from the host and reserve them for a domU I intend to use for a firewall (process described here). However, when I launch the domU, I get the following error message:

        Using config file "/etc/xen/firewall".
        Error: pci: improper device assignment specified: pci: 0000:01:04.0 must be
        co-assigned to the same guest with 0000:01:06.0, but it is not owned by pciback.

    lspci gives me the following output:

        00:00.0 Host bridge: Intel Corporation 82945G/GZ/P/PL Memory Controller Hub (rev 02)
        00:02.0 VGA compatible controller: Intel Corporation 82945G/GZ Integrated Graphics Controller (rev 02)
        00:1d.0 USB Controller: Intel Corporation 82801G (ICH7 Family) USB UHCI Controller #1 (rev 01)
        00:1d.1 USB Controller: Intel Corporation 82801G (ICH7 Family) USB UHCI Controller #2 (rev 01)
        00:1d.2 USB Controller: Intel Corporation 82801G (ICH7 Family) USB UHCI Controller #3 (rev 01)
        00:1d.3 USB Controller: Intel Corporation 82801G (ICH7 Family) USB UHCI Controller #4 (rev 01)
        00:1d.7 USB Controller: Intel Corporation 82801G (ICH7 Family) USB2 EHCI Controller (rev 01)
        00:1e.0 PCI bridge: Intel Corporation 82801 PCI Bridge (rev e1)
        00:1f.0 ISA bridge: Intel Corporation 82801GB/GR (ICH7 Family) LPC Interface Bridge (rev 01)
        00:1f.2 IDE interface: Intel Corporation 82801GB/GR/GH (ICH7 Family) SATA IDE Controller (rev 01)
        00:1f.3 SMBus: Intel Corporation 82801G (ICH7 Family) SMBus Controller (rev 01)
        01:04.0 Ethernet controller: Realtek Semiconductor Co., Ltd. RTL-8139/8139C/8139C+ (rev 10)
        01:06.0 Ethernet controller: Realtek Semiconductor Co., Ltd. RTL-8139/8139C/8139C+ (rev 10)
        01:07.0 Ethernet controller: Realtek Semiconductor Co., Ltd. RTL-8139/8139C/8139C+ (rev 10)

    From the sound of the error message, it seems like I also need to dedicate eth0 (PCI ID 01:04.0) to the domU. Am I correct? If not, what am I doing wrong? Thanks!
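    The co-assignment message is about all the functions that sit behind the same PCI bridge: they generally have to be owned by pciback (and handed to the guest) together. A rough sketch of hiding all three Realtek ports on a CentOS 5.x Xen dom0, with syntax from memory rather than from the article, so double-check it against your kernel:

        # /etc/modprobe.conf variant, if pciback is built as a module
        options pciback hide=(0000:01:04.0)(0000:01:06.0)(0000:01:07.0)

        # or bind them at runtime through sysfs
        for dev in 0000:01:04.0 0000:01:06.0 0000:01:07.0; do
            echo -n "$dev" > /sys/bus/pci/devices/$dev/driver/unbind 2>/dev/null  # ok if no driver was bound
            echo -n "$dev" > /sys/bus/pci/drivers/pciback/new_slot
            echo -n "$dev" > /sys/bus/pci/drivers/pciback/bind
        done

    Note that this takes all three ports behind that bridge away from dom0, including the one currently serving as eth0.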

    Read the article

  • OpenSolaris livecd, NForce NIC driver, and NTFS USB mounting. Oh My!

    - by Jake Wharton
    I'm attempting to install OpenSolaris 2009.06 on my server. Before I do, I would like to test that everything works, and I am running into problems. It has an Abit AN-M2 motherboard with an NForce chipset. The driver config utility says that I need a third-party driver and links me to http://homepage2.nifty.com/mrym3/taiyodo/eng/. Scrolling to the bottom, I have downloaded both tgzs just in case. Now the fun part: the only way to get these onto the computer is via a USB drive, since I can't access the network, and the install CD is in the drive (otherwise I'd just burn them to DVD). Since my USB key is NTFS formatted, I cannot mount it, as the install CD seems to be lacking NTFS drivers, which would require more downloaded packages. What should I do? The server will simply be a dumb NAS, and I know that there exist other OpenSolaris-based flavors such as Nexenta, but from what I read the stock install is likely the best. If this is not the case and pursuing a different flavor is required or better, I will also accept that as an answer (but please don't jump straight to it).
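    One low-tech workaround (my suggestion, not from the article): reformat the USB stick as FAT32 on another machine, since the live environment can mount pcfs out of the box. Device names below are placeholders:

        # on another Linux box - this erases the stick
        mkfs.vfat -F 32 /dev/sdX1
        # copy the driver tarballs onto it, then on the OpenSolaris live CD:
        rmformat                                   # lists removable media and their device paths
        mount -F pcfs /dev/dsk/c1t0d0p0:c /mnt     # adjust to the path rmformat reports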

    Read the article

  • How can I detect if a NIC is UP in UNIX?

    - by Rich
    I am currently writing a bash script (for Nagios), and I would like to be able to detect if specific network cards are up or not. My best guess is to do something like this:

        ifconfig eth0 | grep UP | wc -l

    or:

        ethtool eth0 | grep "Link detected: yes" | wc -l

    Are either/both of those reliable ways of testing if the network card is up, or is there a better option? Perhaps there is a flag on ethtool which will do precisely what I want? Thanks in advance for any suggestions/pointers! Rich
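    A small alternative sketch that avoids parsing ifconfig output (my own suggestion, not from the thread; it assumes a Linux kernel recent enough to expose operstate under sysfs, and uses Nagios-style exit codes):

        #!/bin/bash
        # usage: check_nic.sh eth0
        nic="$1"
        if [ "$(cat /sys/class/net/"$nic"/operstate 2>/dev/null)" = "up" ]; then
            echo "OK - $nic is up"
            exit 0
        else
            echo "CRITICAL - $nic is down or missing"
            exit 2
        fi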

    Read the article

  • Using iptables to block ALL outgoing traffic from one NIC?

    - by edanfalls
    Hi, I must be pretty bad at Googling, as this seems like a very basic question but I can't seem to find the answer anywhere... and man iptables is a very long read! I have two NICs - eth0 and eth1 - on a Linux box, and I want to block ALL outbound traffic (TCP and UDP across all ports) from one of the NICs, so that no traffic makes its way back up to the router. What is the command for this? I have only seen examples with specific ports. Thanks in advance.
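    A minimal sketch of what such a rule set could look like (my example, not from the thread; it assumes eth1 is the interface to silence, and blocks both locally generated and forwarded traffic):

        # drop everything the box itself tries to send out of eth1
        iptables -A OUTPUT -o eth1 -j DROP
        # drop anything it would forward out of eth1 on behalf of other hosts
        iptables -A FORWARD -o eth1 -j DROP

    Since -o matches the outgoing interface, no port or protocol match is needed.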

    Read the article

  • How to make sure your server NIC performance is at its best on Windows?

    - by Bobb
    I realised that I have been following some obscure paper on setting up NICs on Windows for too long. It might be outdated with the new hardware released in the past couple of years and with W2008R2. I read a bit about offloading and RSS settings on Windows, and I realised that it is very much circumstantial. No one can really say "enable that and disable this", etc. So what I really want, for my next server, is to try to set up a testing environment and measure how my particular application behaves with different settings. The target is primarily TCP latency. Please note I am talking about latency inside the box. Are there precision tools for Windows to measure latency (down to microseconds)? P.S. I know this is not an easy question. Windows time drift is an awful problem for any precision test, but still, I am sure I am not the first person to need this... Please share your experience.

    Read the article

  • Where to get the network NIC's firmware rtl_nic/rtl8105e-1.fw?

    - by Kyrol
    I'm installing the Debian testing version (wheezy) on my Asus X53Sc with an Intel Centrino Wireless-N 100, and I'm having a problem with my wifi connection. When I try to connect to the internet over the wireless connection, an error occurs: Possible missing firmware /lib/firmware/rtl_nic/rtl8105e-1.fw for module r8169. I successfully installed the iwlwifi-100-5.ucode, but now I have this error. Any ideas or suggestions to resolve the problem?
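    On Debian the rtl_nic/*.fw files are shipped in the firmware-realtek package from the non-free archive area (general Debian knowledge, not something stated in the article), so one likely fix is:

        # add "contrib non-free" to the existing deb lines in /etc/apt/sources.list, then:
        apt-get update
        apt-get install firmware-realtek

    Note that the warning is about the wired r8169 NIC, so it may be a separate issue from the wireless connection itself.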

    Read the article

  • How fast can a Windows 2008 CIFS client write to SAMBA server using 10Gb NIC?

    - by one_bsd_guy
    We are experiencing a performance problem with the Windows 2008 CIFS client. We have a FreeNAS server that delivers 1.3GB/s on ZFS writes, and a 10Gb network connecting the NAS server and the CIFS clients. Using two Linux CIFS clients, we can get around 1.2GB/s, but Windows 2008 clients can only give us 400MB/s. Is that the best a Windows 2008 client can deliver, or do we have a poorly configured Windows client? Much appreciated.

    Read the article

  • unexplainable packet drops with 5 ethernet NICs and low traffic on Ubuntu

    - by jon
    I'm stuck on a problem where my machine started to drop packets, with no sign of ANY system load or high interrupt usage, after an upgrade to Ubuntu 12.04. My server is a network-monitoring sensor running Ubuntu LTS 12.04; it passively collects packets from 5 interfaces, doing network-intrusion-detection type work. Before the upgrade I managed to collect 200+GB of packets a day while writing them to disk, with around 0% packet loss depending on the day, with the help of CPU affinity and NIC IRQ-to-CPU bindings. Now I lose a great deal of packets with none of my applications running, and at a very low PPS rate that a modern workstation NIC would have no trouble with.

    Specs:

        x64 Xeon, 4 cores, 3.2 GHz
        16 GB RAM
        NICs: 5 Intel Pro NICs using the e1000 driver (NAPI) [1]

    eth0 and eth1 are integrated NICs (on the motherboard). There are 2 other PCI-X network cards, each with 2 Ethernet ports. 3 of the interfaces are running at Gigabit Ethernet; the others are not, because they're attached to hubs. Specs: [2] http://support.dell.com/support/edocs/systems/pe2850/en/ug/t1390aa.htm

        # uptime
         17:36:00 up 1:43, 2 users, load average: 0.00, 0.01, 0.05
        # uname -a
        Linux nms 3.2.0-29-generic #46-Ubuntu SMP Fri Jul 27 17:03:23 UTC 2012 x86_64 x86_64 x86_64 GNU/Linux

    I also have the CPU governor set to performance mode and irqbalance off. The problem still occurs with them on.

        # lspci -t -vv
        -[0000:00]-+-00.0 Intel Corporation E7520 Memory Controller Hub
                   +-02.0-[01-03]--+-00.0-[02]----0e.0 Dell PowerEdge Expandable RAID controller 4
                   |               \-00.2-[03]--
                   +-04.0-[04]--
                   +-05.0-[05-07]--+-00.0-[06]----07.0 Intel Corporation 82541GI Gigabit Ethernet Controller
                   |               \-00.2-[07]----08.0 Intel Corporation 82541GI Gigabit Ethernet Controller
                   +-06.0-[08-0a]--+-00.0-[09]--+-04.0 Intel Corporation 82546EB Gigabit Ethernet Controller (Copper)
                   |               |            \-04.1 Intel Corporation 82546EB Gigabit Ethernet Controller (Copper)
                   |               \-00.2-[0a]--+-02.0 Digium, Inc. Wildcard TE210P/TE212P dual-span T1/E1/J1 card 3.3V
                   |                            +-03.0 Intel Corporation 82546EB Gigabit Ethernet Controller (Copper)
                   |                            \-03.1 Intel Corporation 82546EB Gigabit Ethernet Controller (Copper)
                   +-1d.0 Intel Corporation 82801EB/ER (ICH5/ICH5R) USB UHCI Controller #1
                   +-1d.1 Intel Corporation 82801EB/ER (ICH5/ICH5R) USB UHCI Controller #2
                   +-1d.2 Intel Corporation 82801EB/ER (ICH5/ICH5R) USB UHCI Controller #3
                   +-1d.7 Intel Corporation 82801EB/ER (ICH5/ICH5R) USB2 EHCI Controller
                   +-1e.0-[0b]----0d.0 Advanced Micro Devices [AMD] nee ATI RV100 QY [Radeon 7000/VE]
                   +-1f.0 Intel Corporation 82801EB/ER (ICH5/ICH5R) LPC Interface Bridge
                   \-1f.1 Intel Corporation 82801EB/ER (ICH5/ICH5R) IDE Controller

    I believe neither the NIC nor the NIC driver is dropping the packets, because ethtool reports 0 under rx_missed_errors and rx_no_buffer_count for each interface. On the old system, if it couldn't keep up, this is where the drops would be. I drop packets on multiple interfaces just about every second, usually in small increments of 2-4. I tried all these sysctl values; I'm currently using the uncommented ones.

        # cat /etc/sysctl.conf
        # high
        net.core.netdev_max_backlog = 3000000
        net.core.rmem_max = 16000000
        net.core.rmem_default = 8000000
        # defaults
        #net.core.netdev_max_backlog = 1000
        #net.core.rmem_max = 131071
        #net.core.rmem_default = 163480
        # moderate
        #net.core.netdev_max_backlog = 10000
        #net.core.rmem_max = 33554432
        #net.core.rmem_default = 33554432

    Here's an example of an interface stats report with ethtool.
    They are all the same, nothing is out of the ordinary (I think), so I'm only going to show one:

        # ethtool -S eth2
        NIC statistics:
             rx_packets: 7498
             tx_packets: 0
             rx_bytes: 2722585
             tx_bytes: 0
             rx_broadcast: 327
             tx_broadcast: 0
             rx_multicast: 1504
             tx_multicast: 0
             rx_errors: 0
             tx_errors: 0
             tx_dropped: 0
             multicast: 1504
             collisions: 0
             rx_length_errors: 0
             rx_over_errors: 0
             rx_crc_errors: 0
             rx_frame_errors: 0
             rx_no_buffer_count: 0
             rx_missed_errors: 0
             tx_aborted_errors: 0
             tx_carrier_errors: 0
             tx_fifo_errors: 0
             tx_heartbeat_errors: 0
             tx_window_errors: 0
             tx_abort_late_coll: 0
             tx_deferred_ok: 0
             tx_single_coll_ok: 0
             tx_multi_coll_ok: 0
             tx_timeout_count: 0
             tx_restart_queue: 0
             rx_long_length_errors: 0
             rx_short_length_errors: 0
             rx_align_errors: 0
             tx_tcp_seg_good: 0
             tx_tcp_seg_failed: 0
             rx_flow_control_xon: 0
             rx_flow_control_xoff: 0
             tx_flow_control_xon: 0
             tx_flow_control_xoff: 0
             rx_long_byte_count: 2722585
             rx_csum_offload_good: 0
             rx_csum_offload_errors: 0
             alloc_rx_buff_failed: 0
             tx_smbus: 0
             rx_smbus: 0
             dropped_smbus: 01

        # ifconfig
        eth0      Link encap:Ethernet  HWaddr 00:11:43:e0:e2:8c
                  UP BROADCAST RUNNING NOARP PROMISC ALLMULTI MULTICAST  MTU:1500  Metric:1
                  RX packets:373348 errors:16 dropped:95 overruns:0 frame:16
                  TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
                  collisions:0 txqueuelen:0
                  RX bytes:356830572 (356.8 MB)  TX bytes:0 (0.0 B)

        eth1      Link encap:Ethernet  HWaddr 00:11:43:e0:e2:8d
                  UP BROADCAST RUNNING NOARP PROMISC ALLMULTI MULTICAST  MTU:1500  Metric:1
                  RX packets:13616 errors:0 dropped:0 overruns:0 frame:0
                  TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
                  collisions:0 txqueuelen:0
                  RX bytes:8690528 (8.6 MB)  TX bytes:0 (0.0 B)

        eth2      Link encap:Ethernet  HWaddr 00:04:23:e1:77:6a
                  UP BROADCAST RUNNING NOARP PROMISC ALLMULTI MULTICAST  MTU:1500  Metric:1
                  RX packets:7750 errors:0 dropped:471 overruns:0 frame:0
                  TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
                  collisions:0 txqueuelen:0
                  RX bytes:2780935 (2.7 MB)  TX bytes:0 (0.0 B)

        eth3      Link encap:Ethernet  HWaddr 00:04:23:e1:77:6b
                  UP BROADCAST RUNNING NOARP PROMISC ALLMULTI MULTICAST  MTU:1500  Metric:1
                  RX packets:5112 errors:0 dropped:206 overruns:0 frame:0
                  TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
                  collisions:0 txqueuelen:0
                  RX bytes:639472 (639.4 KB)  TX bytes:0 (0.0 B)

        eth4      Link encap:Ethernet  HWaddr 00:04:23:b6:35:6c
                  UP BROADCAST RUNNING NOARP PROMISC ALLMULTI MULTICAST  MTU:1500  Metric:1
                  RX packets:961467 errors:0 dropped:935 overruns:0 frame:0
                  TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
                  collisions:0 txqueuelen:0
                  RX bytes:958561305 (958.5 MB)  TX bytes:0 (0.0 B)

        eth5      Link encap:Ethernet  HWaddr 00:04:23:b6:35:6d
                  inet addr:192.168.1.6  Bcast:192.168.1.255  Mask:255.255.255.0
                  UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
                  RX packets:4264 errors:0 dropped:16 overruns:0 frame:0
                  TX packets:699 errors:0 dropped:0 overruns:0 carrier:0
                  collisions:0 txqueuelen:1000
                  RX bytes:572228 (572.2 KB)  TX bytes:124456 (124.4 KB)

    I tried the defaults, then started to play around with settings. I wasn't using any flow control, and I increased the RxDescriptor count to 4096 before the upgrade as well, without any problems.

        # cat /etc/modprobe.d/e1000.conf
        options e1000 XsumRX=0,0,0,0,0 RxDescriptors=4096,4096,4096,4096,4096 FlowControl=0,0,0,0,0 debug=16

    Here's my network configuration file. I turned off checksumming and various offloading mechanisms, along with setting CPU affinity so that heavy-use interfaces get an entire CPU and light-use interfaces share a CPU. I used these settings prior to the upgrade without problems.
        # cat /etc/network/interfaces
        # The loopback network interface
        auto lo
        iface lo inet loopback

        # The primary network interface
        auto eth0
        iface eth0 inet manual
            pre-up /sbin/ethtool -G eth0 rx 4096 tx 0
            pre-up /sbin/ethtool -K eth0 gro off gso off rx off
            pre-up /sbin/ethtool -A eth0 rx off autoneg off
            up ifconfig eth0 0.0.0.0 -arp promisc mtu 1500 allmulti txqueuelen 0 up
            post-up echo "4" > /proc/irq/48/smp_affinity
            down ifconfig eth0 down
            post-down /sbin/ethtool -G eth0 rx 256 tx 256
            post-down /sbin/ethtool -K eth0 gro on gso on rx on
            post-down /sbin/ethtool -A eth0 rx on autoneg on

        auto eth1
        iface eth1 inet manual
            pre-up /sbin/ethtool -G eth1 rx 4096 tx 0
            pre-up /sbin/ethtool -K eth1 gro off gso off rx off
            pre-up /sbin/ethtool -A eth1 rx off autoneg off
            up ifconfig eth1 0.0.0.0 -arp promisc mtu 1500 allmulti txqueuelen 0 up
            post-up echo "4" > /proc/irq/49/smp_affinity
            down ifconfig eth1 down
            post-down /sbin/ethtool -G eth1 rx 256 tx 256
            post-down /sbin/ethtool -K eth1 gro on gso on rx on
            post-down /sbin/ethtool -A eth1 rx on autoneg on

        auto eth2
        iface eth2 inet manual
            pre-up /sbin/ethtool -G eth2 rx 4096 tx 0
            pre-up /sbin/ethtool -K eth2 gro off gso off rx off
            pre-up /sbin/ethtool -A eth2 rx off autoneg off
            up ifconfig eth2 0.0.0.0 -arp promisc mtu 1500 allmulti txqueuelen 0 up
            post-up echo "1" > /proc/irq/82/smp_affinity
            down ifconfig eth2 down
            post-down /sbin/ethtool -G eth2 rx 256 tx 256
            post-down /sbin/ethtool -K eth2 gro on gso on rx on
            post-down /sbin/ethtool -A eth2 rx on autoneg on

        auto eth3
        iface eth3 inet manual
            pre-up /sbin/ethtool -G eth3 rx 4096 tx 0
            pre-up /sbin/ethtool -K eth3 gro off gso off rx off
            pre-up /sbin/ethtool -A eth3 rx off autoneg off
            up ifconfig eth3 0.0.0.0 -arp promisc mtu 1500 allmulti txqueuelen 0 up
            post-up echo "2" > /proc/irq/83/smp_affinity
            down ifconfig eth3 down
            post-down /sbin/ethtool -G eth3 rx 256 tx 256
            post-down /sbin/ethtool -K eth3 gro on gso on rx on
            post-down /sbin/ethtool -A eth3 rx on autoneg on

        auto eth4
        iface eth4 inet manual
            pre-up /sbin/ethtool -G eth4 rx 4096 tx 0
            pre-up /sbin/ethtool -K eth4 gro off gso off rx off
            pre-up /sbin/ethtool -A eth4 rx off autoneg off
            up ifconfig eth4 0.0.0.0 -arp promisc mtu 1500 allmulti txqueuelen 0 up
            post-up echo "4" > /proc/irq/77/smp_affinity
            down ifconfig eth4 down
            post-down /sbin/ethtool -G eth4 rx 256 tx 256
            post-down /sbin/ethtool -K eth4 gro on gso on rx on
            post-down /sbin/ethtool -A eth4 rx on autoneg on

        auto eth5
        iface eth5 inet static
            pre-up /etc/fw.conf
            address 192.168.1.1
            netmask 255.255.255.0
            broadcast 192.168.1.255
            gateway 192.168.1.1
            dns-nameservers 192.168.1.2 192.168.1.3
            up ifconfig eth5 up
            post-up echo "8" > /proc/irq/77/smp_affinity
            down ifconfig eth5 down

    Here are a few examples of the packet drops. I ran one right after another, probably totaling 3 or 4 seconds. You can see the increases in the drops between the 1st and the 3rd. This was a non-busy time, with very little traffic.

        # awk '{ print $1,$5 }' /proc/net/dev
        Inter-| face drop
        eth3: 225
        lo: 0
        eth2: 505
        eth1: 0
        eth5: 17
        eth0: 105
        eth4: 1034

        # awk '{ print $1,$5 }' /proc/net/dev
        Inter-| face drop
        eth3: 225
        lo: 0
        eth2: 507
        eth1: 0
        eth5: 17
        eth0: 105
        eth4: 1034

        # awk '{ print $1,$5 }' /proc/net/dev
        Inter-| face drop
        eth3: 227
        lo: 0
        eth2: 512
        eth1: 0
        eth5: 17
        eth0: 105
        eth4: 1039

    I tried the pci=noacpi options. With and without, it's the same.
    This is what my interrupt stats looked like before the upgrade. Afterwards, with ACPI on PCI, it showed multiple NICs bound to one interrupt and shared with other devices such as USB drives, which I didn't like, so I think I'm going to keep ACPI off, as it's easier to designate sole-purpose interrupts. Is there any advantage to using the default, i.e. ACPI with PCI?

        # cat /etc/default/grub | grep CMD_LINE
        GRUB_CMDLINE_LINUX_DEFAULT="ipv6.disable=1 noacpi pci=noacpi"
        GRUB_CMDLINE_LINUX=""

        # cat /proc/interrupts
                   CPU0       CPU1       CPU2       CPU3
          0:         45          0          0         16   IO-APIC-edge      timer
          1:          1          0          0       7936   IO-APIC-edge      i8042
          2:          0          0          0          0   XT-PIC-XT-PIC     cascade
          6:          0          0          0          3   IO-APIC-edge      floppy
          8:          0          0          0          1   IO-APIC-edge      rtc0
          9:          0          0          0          0   IO-APIC-edge      acpi
         12:          0          0          0       1809   IO-APIC-edge      i8042
         14:          1          0          0       4498   IO-APIC-edge      ata_piix
         15:          0          0          0          0   IO-APIC-edge      ata_piix
         16:          0          0          0          0   IO-APIC-fasteoi   uhci_hcd:usb2
         18:          0          0          0       1350   IO-APIC-fasteoi   uhci_hcd:usb4, radeon
         19:          0          0          0          0   IO-APIC-fasteoi   uhci_hcd:usb3
         23:          0          0          0       4099   IO-APIC-fasteoi   ehci_hcd:usb1
         38:          0          0          0      61963   IO-APIC-fasteoi   megaraid
         48:          0          0    1002319          4   IO-APIC-fasteoi   eth0
         49:          0          0      38772          3   IO-APIC-fasteoi   eth1
         77:          0          0     130076     432159   IO-APIC-fasteoi   eth4
         78:          0          0          0      23917   IO-APIC-fasteoi   eth5
         82:    1329033          0          0          4   IO-APIC-fasteoi   eth2
         83:          0    4886525          0          6   IO-APIC-fasteoi   eth3
        NMI:          5          6          4          5   Non-maskable interrupts
        LOC:      61409      57076      64257     114764   Local timer interrupts
        SPU:          0          0          0          0   Spurious interrupts
        IWI:          0          0          0          0   IRQ work interrupts
        RES:      17956      25333      13436      14789   Rescheduling interrupts
        CAL:      22436        607        539        478   Function call interrupts
        TLB:       1525       1458       4600       4151   TLB shootdowns
        TRM:          0          0          0          0   Thermal event interrupts
        THR:          0          0          0          0   Threshold APIC interrupts
        MCE:          0          0          0          0   Machine check exceptions
        MCP:         16         16         16         16   Machine check polls
        ERR:          0
        MIS:          0

    Here's sample output of vmstat, showing the system. It's a barebones system right now.

        root@nms:~# vmstat -S m 1
        procs -----------memory---------- ---swap-- -----io---- -system-- ----cpu----
         r  b   swpd   free   buff  cache   si   so    bi    bo   in   cs us sy id wa
         0  0      0  14992    192   1029    0    0    56     2  419   29  1  0 99  0
         0  0      0  14992    192   1029    0    0     0     0  922   27  0  0 100 0
         0  0      0  14991    192   1029    0    0     0    36  763   50  0  0 100 0
         0  0      0  14991    192   1029    0    0     0     0  646   35  0  0 100 0
         0  0      0  14991    192   1029    0    0     0     0  722   54  0  0 100 0
         0  0      0  14991    192   1029    0    0     0     0  793   27  0  0 100 0
        ^C

    Here's the dmesg output. I can't figure out why my PCI-X slots are negotiated as PCI. The network cards are all PCI-X, with the exception of the integrated NICs that came with the server. In the output below it looks as if eth3 and eth2 negotiated at PCI-X speeds rather than PCI:66MHz. Wouldn't they all drop to PCI:66MHz? If the integrated NICs are PCI, as labeled below (eth0, eth1), then wouldn't all devices on the bus drop down to that slower bus speed? If not, I still don't know why only one of my NICs (each has two ethernet ports) is labeled as PCI-X in the output below. Does that mean it is running at PCI-X speeds, or is it just showing that it's capable?

        # dmesg | grep e1000
        [ 3678.349337] e1000: Intel(R) PRO/1000 Network Driver - version 7.3.21-k8-NAPI
        [ 3678.349342] e1000: Copyright (c) 1999-2006 Intel Corporation.
        [ 3678.349394] e1000 0000:06:07.0: PCI->APIC IRQ transform: INT A -> IRQ 48
        [ 3678.409725] e1000 0000:06:07.0: Receive Descriptors set to 4096
        [ 3678.409730] e1000 0000:06:07.0: Checksum Offload Disabled
        [ 3678.409734] e1000 0000:06:07.0: Flow Control Disabled
        [ 3678.586409] e1000 0000:06:07.0: eth0: (PCI:66MHz:32-bit) 00:11:43:e0:e2:8c
        [ 3678.586419] e1000 0000:06:07.0: eth0: Intel(R) PRO/1000 Network Connection
        [ 3678.586642] e1000 0000:07:08.0: PCI->APIC IRQ transform: INT A -> IRQ 49
        [ 3678.649854] e1000 0000:07:08.0: Receive Descriptors set to 4096
        [ 3678.649859] e1000 0000:07:08.0: Checksum Offload Disabled
        [ 3678.649863] e1000 0000:07:08.0: Flow Control Disabled
        [ 3678.826436] e1000 0000:07:08.0: eth1: (PCI:66MHz:32-bit) 00:11:43:e0:e2:8d
        [ 3678.826444] e1000 0000:07:08.0: eth1: Intel(R) PRO/1000 Network Connection
        [ 3678.826627] e1000 0000:09:04.0: PCI->APIC IRQ transform: INT A -> IRQ 82
        [ 3679.093266] e1000 0000:09:04.0: Receive Descriptors set to 4096
        [ 3679.093271] e1000 0000:09:04.0: Checksum Offload Disabled
        [ 3679.093275] e1000 0000:09:04.0: Flow Control Disabled
        [ 3679.130239] e1000 0000:09:04.0: eth2: (PCI-X:133MHz:64-bit) 00:04:23:e1:77:6a
        [ 3679.130246] e1000 0000:09:04.0: eth2: Intel(R) PRO/1000 Network Connection
        [ 3679.130449] e1000 0000:09:04.1: PCI->APIC IRQ transform: INT B -> IRQ 83
        [ 3679.397312] e1000 0000:09:04.1: Receive Descriptors set to 4096
        [ 3679.397318] e1000 0000:09:04.1: Checksum Offload Disabled
        [ 3679.397321] e1000 0000:09:04.1: Flow Control Disabled
        [ 3679.434350] e1000 0000:09:04.1: eth3: (PCI-X:133MHz:64-bit) 00:04:23:e1:77:6b
        [ 3679.434360] e1000 0000:09:04.1: eth3: Intel(R) PRO/1000 Network Connection
        [ 3679.434553] e1000 0000:0a:03.0: PCI->APIC IRQ transform: INT A -> IRQ 77
        [ 3679.704072] e1000 0000:0a:03.0: Receive Descriptors set to 4096
        [ 3679.704077] e1000 0000:0a:03.0: Checksum Offload Disabled
        [ 3679.704081] e1000 0000:0a:03.0: Flow Control Disabled
        [ 3679.738364] e1000 0000:0a:03.0: eth4: (PCI:33MHz:64-bit) 00:04:23:b6:35:6c
        [ 3679.738371] e1000 0000:0a:03.0: eth4: Intel(R) PRO/1000 Network Connection
        [ 3679.738538] e1000 0000:0a:03.1: PCI->APIC IRQ transform: INT B -> IRQ 78
        [ 3680.046060] e1000 0000:0a:03.1: eth5: (PCI:33MHz:64-bit) 00:04:23:b6:35:6d
        [ 3680.046067] e1000 0000:0a:03.1: eth5: Intel(R) PRO/1000 Network Connection
        [ 3682.132415] e1000: eth0 NIC Link is Up 100 Mbps Half Duplex, Flow Control: None
        [ 3682.224423] e1000: eth1 NIC Link is Up 100 Mbps Half Duplex, Flow Control: None
        [ 3682.316385] e1000: eth2 NIC Link is Up 100 Mbps Half Duplex, Flow Control: None
        [ 3682.408391] e1000: eth3 NIC Link is Up 1000 Mbps Full Duplex, Flow Control: None
        [ 3682.500396] e1000: eth4 NIC Link is Up 1000 Mbps Full Duplex, Flow Control: None
        [ 3682.708401] e1000: eth5 NIC Link is Up 1000 Mbps Full Duplex, Flow Control: RX

    At first I thought it was the NIC drivers, but I'm not so sure. I really have no idea where else to look at the moment. Any help is greatly appreciated, as I'm struggling with this. If you need more information, just ask. Thanks!

    [1] http://www.cs.fsu.edu/~baker/devices/lxr/http/source/linux/Documentation/networking/e1000.txt?v=2.6.11.8
    [2] http://support.dell.com/support/edocs/systems/pe2850/en/ug/t1390aa.htm
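    As a footnote to the excerpt above (my own suggestion, not from the thread): the "drop" column of /proc/net/dev counts drops seen by the network core (for example a full input backlog or an unhandled protocol), not just driver ring overruns, so comparing it with the per-CPU softnet counters can show whether the backlog is the bottleneck:

        # one line per CPU; the 2nd column (hex) counts packets dropped because the
        # per-CPU backlog (net.core.netdev_max_backlog) was full
        awk '{ print "cpu" NR-1, "dropped(hex)=" $2 }' /proc/net/softnet_stat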

    Read the article

  • Solaris 11 Update 1 - Link Aggregation

    - by Wesley Faria
    Solaris 11.1: At the beginning of this month, at Oracle's worldwide event Oracle Open World, the new release of Solaris 11 was launched. It arrives full of new features, roughly 300 new capabilities across networking, security, administration and more. Today I am going to talk about a very interesting networking feature, Link Aggregation. Solaris has supported Link Aggregation since Solaris 10 Update 1, but Solaris 11 Update 1 brings significant improvements. Link Aggregation, as the name says, is the aggregation of more than one physical network interface into a single logical interface. Some of the things Link Aggregation gives you:

        · increased bandwidth;
        · better resilience, through failover and failback;
        · simpler network administration.

    Solaris 11.1 supports two types of Link Aggregation, trunk aggregation and datalink multipathing (DLMP) aggregation. Both work by distributing network packets across the interfaces of the aggregation, making better use of the network. Let's look at each of them a little more closely.

    Trunk Aggregation

    Trunk aggregation aims to increase bandwidth, whether for applications with heavy network traffic or for consolidation. For example, suppose we have a server that was bought to host several virtual machines, each with its own demand, and this server has two network cards. We can create an aggregation of those two cards so that Solaris 11.1 sees them as if they were one, doubling the available bandwidth; see the figure below. The figure shows an aggregation with two physical cards, NIC 1 and NIC 2, connected to the same switch, and two virtual interfaces, VNIC A and VNIC B. For this to work, however, the switch must support LACP (Link Aggregation Control Protocol). LACP performs the aggregation at the switch layer; without it, the packets leaving the server could not be reassembled when they reach the switch. Another form of trunk aggregation is point-to-point, where instead of using a switch the two servers are connected directly. In that case the aggregation on one server talks directly to the aggregation on the other, providing protection against failures as well as greater bandwidth. Let's see how to configure trunk aggregation:

        1 - Check which links are available:                    # dladm show-link
        2 - Check the interfaces:                               # ipadm show-if
        3 - Delete the addressing of the existing interfaces:   # ipadm delete-ip <interface>
        4 - Create the trunk aggregation:                       # dladm create-aggr -L active -l <interface> -l <interface> aggr0
        5 - List the aggregation that was created:              # dladm show-aggr

    Data Link Multipathing Aggregation

    As we saw above, trunk aggregation is implemented against a single switch that supports LACP, so the switch is a single point of failure. To solve this on Solaris 10 we used IPMP (IP Multipathing), which combines two aggregations in a single link - in other words, another layer of virtualization. With Solaris 11 Update 1 this is no longer necessary: you can have an aggregation of physical interfaces each connected to a different switch; see the figure below. Here we have an aggregation called aggr containing four physical interfaces, with NIC 1 and NIC 2 connected to one switch and NIC 3 and NIC 4 connected to another. In addition, four virtual interfaces were created - vnic A, vnic B, vnic C and vnic D - which can be assigned to different applications/zones. This gives high availability at every layer, since failures of switches, links or physical network interfaces can all be tolerated. To configure it, follow the same steps as the trunk aggregation configuration up to step 3, then do the following:

        4 - Create the DLMP aggregation:                        # dladm create-aggr -m haonly -l <interface> -l <interface> aggr0
        5 - List the aggregation that was created:              # dladm show-aggr

    Once configured, whether in trunk aggregation mode or in datalink multipathing mode, you can switch from one mode to the other and add or remove physical or virtual interfaces. Well, that was what I had to show about the new Link Aggregation functionality in Solaris 11 Update 1; I hope you enjoyed it, until the next new feature.

    Read the article

  • Count of memory copies in *nix systems between packet at NIC and user application?

    - by Michael_73
    Hi there, This is just a general question relating to some high-performance computing I've been wondering about. A certain low-latency messaging vendor speaks in its supporting documentation about using raw sockets to transfer the data directly from the network device to the user application and in so doing it speaks about reducing the messaging latency even further than it does anyway (in other admittedly carefully thought-out design decisions). My question is therefore to those that grok the networking stacks on Unix or Unix-like systems. How much difference are they likely to be able to realise using this method? Feel free to answer in terms of memory copies, numbers of whales rescued or areas the size of Wales ;) Their messaging is UDP-based, as I understand it, so there's no problem with establishing TCP connections etc. Any other points of interest on this topic would be gratefully thought about! Best wishes, Mike

    Read the article

  • Tamilnadu HSC (+2) exam results – List of Websites

    - by samsudeen
    Tamilnadu State Board HSC (+2) exam results are expected to be released around the middle of this month (probably on May 19). School students can get their marks at the same time from their respective schools. The results are usually published on websites or can be obtained from mobile phone service providers through SMS. But it is almost certain that most of the sites will not be accessible for at least a couple of hours at the time of the result announcement. Below are some of the quality websites (including mirror sites that point directly to the results page) that publish the result links:

        http://www.tnresults.nic.in/
        http://www.squarebrothers.com/
        http://results.sify.com/
        http://indiaresults.com/
        http://www.dge1.tn.nic.in/
        http://www.dge2.tn.nic.in/
        http://www.dge3.tn.nic.in/
        http://www.tngdc.in/
        http://www.collegesintamilnadu.com/
        http://www.classontheweb.com/
        http://www.schools9.com/
        http://www.chennaivision.com/
        http://www.mygaruda.com/
        http://www.tnagar.com/
        http://www.indiacollegefinder.com/
        http://www.chennaionline.com/
        http://www.nakkeeran.com/
        http://www.getyourscore.in/
        http://www.examresults.net/
        http://results.webdunia.com/
        http://www.jayanews.in/
        http://www.findchennai.com/

    Join us on Facebook to read all our stories right inside your Facebook news feed.

    Read the article

  • Bind DHCP Server to Network Bridge

    - by Luke
    My wireless router died, so I decided to route everything through my server. So I installed a second NIC and a wireless card to be my new network: 1 NIC to the Modem, 1 NIC to the switch, and the Wireless to... Well, wireless. Anyways, I got far enough to get DHCP to work on just ONE adapter when I used Internet Connection Sharing (I couldn't get RRAS set up for the life of me), then I decided to try bridging the wireless and second NIC. Now, the DHCP server won't bind to the bridge, but I can enter manual IP's in my clients and it'll connect to the Internet. I also tried changing my wireless adapter's IP to 192.168.0.2, and to 192.168.1.1 to try to set up a separate scope, but to no avail. Running Windows Server 2003

    Read the article

  • No compatible network card

    - by sbintcliffe
    Motherboard: Asus K8N-E Deluxe with onboard NIC (nVidia nForce). Secondary NIC: I've tried using a standard NIC (Device Manager displays this as a D-Link DFE-538TX 10/100, but under manufacturer in the General tab of its properties Windows states Realtek Semiconductor Corp.). I have downloaded ESXi 4.0.0 build 208167 and burned it to disc. I've booted from it, the .TGZ modules load from the yellow and grey screen, the progress bar reaches about 60%, and a second later the screen changes and I have the following information on screen: "No compatible network adapter found. Please consult the HCG." I've checked the HCG and found that my motherboard is listed. I also get the same message with the secondary NIC. Any ideas please?

    Read the article

  • Packet drop measured by ethtool, tcpdump and ifconfig

    - by Rayne
    Hi all, I have a question regarding packet drops. I am running a test to determine when packet drops occur. I'm using a Spirent TestCenter through a switch (necessary to aggregate Ethernet traffic from 5 ports to one optical link) to a server using a Myricom card. While running my test, if the input rate is below a certain value, ethtool does not report any drop (except dropped_multicast_filtered which is incrementing at a very slow rate). However, tcpdump reports X number of packets "dropped by kernel". Then if I increase the input rate, ethtool reports drops but "ifconfig eth2" does not. In fact, ifconfig doesn't seem to report any packet drops at all. Do they all measure packet drops at different "levels", i.e. ethtool at the NIC level, tcpdump at the kernel level etc? And am I right to say that in the journey of an incoming packet, the NIC level is the "so-called" first level, then the kernel, then the user application? So any packet drop is likely to happen first at the NIC, then the kernel, then the user application? So if there is no packet drop at the NIC, but packet drop at the kernel, then the bottleneck is not at the NIC? Thank you. Regards, Rayne
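    A rough way to watch all three layers side by side (my own sketch, not from the thread; the interface name is just an example):

        # driver/NIC level: ring-buffer and missed-frame counters
        ethtool -S eth2 | grep -E 'rx_no_buffer_count|rx_missed_errors'
        # kernel level: the "drop" column of /proc/net/dev, which is what ifconfig reports
        awk '/eth2:/ { print "kernel rx drops:", $5 }' /proc/net/dev
        # capture level: on exit tcpdump prints "dropped by kernel", i.e. packets the
        # capture buffer could not hold even though the stack accepted them
        tcpdump -i eth2 -w /dev/null

    So, broadly, yes: the counters sit at different points on the path from the NIC through the kernel to the capturing application, and a packet can be accepted at one level and still be dropped at a later one.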

    Read the article
