Search Results

Search found 14 results on 1 page for '10gbethernet'.

  • 8Gb Fibre Channel HBA vs. 10Gb SFP+ Converged HBA

    - by Hossein Aarabi
    I am putting a Dell server together, more specifically an R720, and I have to select the correct Host Bus Adapter. The HBA on this server will connect to a storage device. I am confused between these two: the QLogic 2562, Dual Port 8Gb Optical Fibre Channel HBA (price $2,045), and the QLogic 8262, Dual Port 10Gb SFP+ Converged Network Adapter (price $1,618). I thought that since the QLogic 2562 is a Fibre Channel adapter and is more expensive, it would be faster in terms of IOPS. But it is 8Gb, as opposed to the 10Gb of the SFP+ adapter. My questions: Which one is better (IOPS performance, etc.)? Why should I choose one over the other?
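
    A back-of-the-envelope line-rate comparison may help frame the raw-bandwidth side of the choice (IOPS will depend far more on the storage array and its disks than on the link). The figures below are just the standard physical-layer encoding overheads for 8G FC and 10GbE, not measurements of either adapter:

      # Usable payload rate after line encoding -- a rough sketch, not a benchmark
      # 8G Fibre Channel: 8.5 GBaud with 8b/10b encoding
      echo "8G FC usable: $(echo '8.5 * 8 / 10' | bc -l) Gbit/s"         # ~6.8 Gbit/s (~800 MB/s)
      # 10GbE CNA (FCoE/iSCSI): 10.3125 GBaud with 64b/66b encoding
      echo "10GbE usable: $(echo '10.3125 * 64 / 66' | bc -l) Gbit/s"    # ~10 Gbit/s (~1200 MB/s)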

    Read the article

  • Ubuntu 12.04 Server - eth0 1Gbps NIC eth1 10Gbps NIC - all traffic using eth0?

    - by James
    Ubuntu Server 12.04.1 x64. Primary role is an NFS fileserver for Mac OS X clients. Hardware:

      Eth0: 00:19.0 Ethernet controller: Intel Corporation 82579V Gigabit Network Connection (rev 04)
      Eth1: 07:00.0 Ethernet controller: MYRICOM Inc. Myri-10G Dual-Protocol NIC

    Config: ifconfig

      eth0  Link encap:Ethernet  HWaddr <MACADDRESS>
            inet addr:192.168.0.150  Bcast:192.168.0.255  Mask:255.255.255.0
            UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
            RX packets:460042020 errors:0 dropped:148 overruns:0 frame:0
            TX packets:231906707 errors:0 dropped:0 overruns:0 carrier:0
            collisions:0 txqueuelen:1000
            RX bytes:581431978417 (581.4 GB)  TX bytes:259057368617 (259.0 GB)
            Interrupt:20 Memory:f7d00000-f7d20000

      eth1  Link encap:Ethernet  HWaddr <MACADDRESS>
            inet addr:192.168.0.100  Bcast:192.168.0.255  Mask:255.255.255.0
            UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
            RX packets:6832208 errors:0 dropped:2 overruns:0 frame:0
            TX packets:376 errors:0 dropped:0 overruns:0 carrier:0
            collisions:0 txqueuelen:1000
            RX bytes:513826442 (513.8 MB)  TX bytes:33688 (33.6 KB)
            Interrupt:59

      lo    Link encap:Local Loopback
            inet addr:127.0.0.1  Mask:255.0.0.0
            UP LOOPBACK RUNNING  MTU:16436  Metric:1
            RX packets:507 errors:0 dropped:0 overruns:0 frame:0
            TX packets:507 errors:0 dropped:0 overruns:0 carrier:0
            collisions:0 txqueuelen:0
            RX bytes:45057 (45.0 KB)  TX bytes:45057 (45.0 KB)

    nano /etc/network/interfaces

      # The loopback network interface
      auto lo
      iface lo inet loopback

      # The primary network interface
      auto eth0
      iface eth0 inet static
          address 192.168.0.150
          netmask 255.255.255.0
          network 192.168.0.0
          broadcast 192.168.0.255
          gateway 192.168.0.1
          dns-nameservers 192.168.0.1 8.8.8.8

      # Second network interface
      auto eth1
      iface eth1 inet static
          address 192.168.0.100
          netmask 255.255.255.0
          network 192.168.0.0
          broadcast 192.168.0.255
          gateway 192.168.0.1
          dns-nameservers 192.168.0.1 8.8.8.8

    Currently, on the OS X clients, I am using nfs://192.168.0.100/Volumes/Storage to mount the NFS share. My problem: why would all the data (and I have checked using various monitoring tools: bmon, iftop, glances, etc.) be going over the slower connection? Also, after configuring /etc/network/interfaces with the above setup, I always get an error message at boot, something about waiting for network configuration. Are these two things connected?
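
    Since both interfaces sit in the same 192.168.0.0/24 subnet, one quick check is to ask the kernel which interface it actually routes through for a given client. A minimal sketch, run on the Ubuntu server (the client address 192.168.0.50 is only a placeholder):

      # Which source address/interface does the kernel pick for this client?
      ip route get 192.168.0.50

      # Routes covering the subnet: with two NICs in the same /24, the route via
      # eth0 is typically preferred for outbound traffic
      ip route show to 192.168.0.0/24

      # See which local address the NFS clients are really connected to (TCP port 2049)
      ss -tn | grep ':2049'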

    Read the article

  • Optical multicast

    - by Randomblue
    I have a 10G SFP+ optical cable carrying market updates from a stock exchange. This cable goes into a switch, which then multicasts every packet to a couple of computers. The problem with using a switch for multicast is that there is latency overhead, even with a pass-through switch (~200ns). Are there "optical" solutions (I'm thinking of a beam splitter of some sort) which would allow close-to-zero-latency 10G multicast?
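
    For scale, a passive optical tap or splitter removes the switch's forwarding delay but not the physics: serialization at the sender and propagation in the fibre remain, and every split leg loses optical power. A rough sketch of those floor values (frame size and cable length are illustrative assumptions):

      # Serialization of a 64-byte frame at 10 Gbit/s: 512 bits / 10 Gbit/s
      echo "serialization: $(echo '64 * 8 / 10' | bc -l) ns"   # ~51.2 ns
      # Propagation in fibre at roughly 5 ns per metre, assuming a 10 m run
      echo "propagation  : $(echo '10 * 5' | bc) ns"           # ~50 ns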

    Read the article

  • Linux Experts Riddle: Network output of 10MB/s on 10Gb/s NIC

    - by user150324
    I have two CentOS 6 servers and I am trying to transfer files between them. The source server has a 10Gb/s NIC and the destination server has a 1Gb/s NIC. Regardless of the command or protocol used, the transfer speed is ~1 megabyte per second. The goal is at least a couple of dozen MB per second. I have tried rsync (also with various ciphers), scp, wget, aftp, and nc. Here are some test results with iperf:

      [root@serv ~]# iperf -c XXX.XXX.XXX.XXX -i 1
      ------------------------------------------------------------
      Client connecting to XXX.XXX.XXX.XXX, TCP port 5001
      TCP window size: 64.0 KByte (default)
      ------------------------------------------------------------
      [  3] local XXX.XXX.XXX.XXX port 33180 connected with XXX.XXX.XXX.XXX port 5001
      [ ID] Interval       Transfer     Bandwidth
      [  3]  0.0- 1.0 sec  1.30 MBytes  10.9 Mbits/sec
      [  3]  1.0- 2.0 sec  1.28 MBytes  10.7 Mbits/sec
      [  3]  2.0- 3.0 sec  1.34 MBytes  11.3 Mbits/sec
      [  3]  3.0- 4.0 sec  1.53 MBytes  12.8 Mbits/sec
      [  3]  4.0- 5.0 sec  1.65 MBytes  13.8 Mbits/sec
      [  3]  5.0- 6.0 sec  1.79 MBytes  15.0 Mbits/sec
      [  3]  6.0- 7.0 sec  1.95 MBytes  16.3 Mbits/sec
      [  3]  7.0- 8.0 sec  1.98 MBytes  16.6 Mbits/sec
      [  3]  8.0- 9.0 sec  1.91 MBytes  16.0 Mbits/sec
      [  3]  9.0-10.0 sec  2.05 MBytes  17.2 Mbits/sec
      [  3]  0.0-10.0 sec  1.68 MBytes  14.0 Mbits/sec

    I guess the HD is not the bottleneck here.
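
    Before digging deeper, it may be worth ruling out the usual suspects on both hosts; a sketch of first checks (interface names are placeholders for whatever the servers actually use):

      # Did the links really negotiate 10Gb/1Gb full duplex? Any errors or drops?
      ethtool eth0 | egrep 'Speed|Duplex'
      ip -s link show eth0

      # Rule out a small TCP window: larger window plus parallel streams
      iperf -c XXX.XXX.XXX.XXX -i 1 -w 1M -P 4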

    Read the article

  • Basic multicast network performance problems

    - by davedavedave
    I've been using mpong from 29West's mtools package to get some basic idea of multicast latency across various Cisco switches: a 1Gb 2960G, a 10Gb 4900M and a 10Gb Nexus N5548P. The 1Gb switch is just for comparison. I have the following results for ~400 runs of mpong on each switch (sending 65536 "ping"-like messages to a receiver, which then sends them back, all over multicast). Numbers are latencies measured in microseconds.

      Switch          Average      StdDev     Min        Max
      2960 (1Gb)      109.68463    0.092816   109.4328   109.9464
      4900M (10Gb)    705.52359    1.607976   703.7693   722.1514
      NX 5548 (10Gb)   58.563774   0.328242    57.77603   59.32207

    The result for the 4900M is very surprising. I've tried unicast ping and I see the 4900 has ~10us higher latency than the N5548P (average 73us vs 64us). Iperf (with no attempt to tune it) shows both 10Gb switches give me 9.4Gbps line speed. The two machines are connected to the same switch and we're not doing any multicast routing. The OS is RHEL 6. The 10Gb NICs are HP 10GbE PCI-E G2 dual-port NICs (I believe they are rebranded Mellanox cards). The 4900 switch is used in a project with tight access control, so I'm waiting for approval before I can access it and check the config. The other two I have full access to configure. I've looked at the Cisco document [1] detailing the differences between NX-OS and IOS w.r.t. multicast, so I've got some ideas to try out, but this isn't an area where I have much expertise. Does anyone have any idea what I should be looking at once I get access to the switch?

    [1] http://docwiki.cisco.com/wiki/Cisco_NX-OS/IOS_Multicast_Comparison
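
    Before switch access comes through, it may also be worth ruling out host-side adaptive interrupt coalescing, which can add hundreds of microseconds of latency on an otherwise idle link. A sketch of what to look at on the RHEL 6 receivers (interface name is a placeholder; supported parameters vary by driver, and settings should be reverted after testing):

      # Current interrupt-coalescing settings on the 10GbE interface
      ethtool -c eth2

      # For a latency test only: disable adaptive coalescing and minimise rx delay
      # (this trades extra CPU load and interrupts for lower latency)
      ethtool -C eth2 adaptive-rx off rx-usecs 0 rx-frames 1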

    Read the article

  • Please demystify the 10Gb Ethernet interfaces and cables

    - by maruti
    This really is a Dell question, but I'm tempted to ask the experts at Server Fault. I have chosen a Dell PowerConnect 8024 10GbE switch. Per the spec sheet it has 10GBASE-T ports: "24x 10GBASE-T (10Gb/1Gb/100Mb) with 4x Combo Ports of SFP+ (10Gb/1Gb) or 10GBASE-T". The HBA on my storage server has 10G CX4 copper ports. Dell does not sell any cables, which adds to my confusion. From the picture, the Dell 8024 seems to have RJ-45 type ports on the front panel. My question: do I need an RJ-45-to-CX4 cable or a CX4-to-CX4 cable?

    Read the article

  • Very low throughput on 10GbE network

    - by aix
    I have two Linux machines, each equipped with a Solarflare SFN5122F 10GbE NIC. The two NICs are connected together with an SFP+ Direct Attach cable. I am using netperf to measure TCP throughput between the two machines. On one box, I run:

      netserver

    and on the other:

      netperf -t TCP_STREAM -H 192.168.x.x -- -m 32768

    I get:

      MIGRATED TCP STREAM TEST from 0.0.0.0 (0.0.0.0) port 0 AF_INET to 192.168.x.x (192.168.x.x) port 0 AF_INET
      Recv   Send    Send
      Socket Socket  Message  Elapsed
      Size   Size    Size     Time     Throughput
      bytes  bytes   bytes    secs.    10^6bits/sec

       87380  16384   32768   10.02    1321.34

    The measured throughput is 1.3Gb/s. This is 7.5x below the theoretical maximum, and only 30% faster than 1GbE. What steps can I take to troubleshoot this?
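
    A few host-side settings commonly account for exactly this kind of gap on back-to-back 10GbE links. A sketch of what to check and retry (interface name is a placeholder; the sysctl values are illustrative, not tuned recommendations):

      # Confirm 10G negotiation and see whether TSO/GRO/LRO offloads are enabled
      ethtool eth2 | grep Speed
      ethtool -k eth2 | egrep 'tcp-segmentation|generic-receive|large-receive'

      # Allow TCP to use larger windows than the conservative defaults
      sysctl -w net.core.rmem_max=16777216 net.core.wmem_max=16777216
      sysctl -w net.ipv4.tcp_rmem='4096 87380 16777216'
      sysctl -w net.ipv4.tcp_wmem='4096 65536 16777216'

      # Re-run netperf asking for large socket buffers on both ends
      netperf -t TCP_STREAM -H 192.168.x.x -- -m 65536 -s 4194304 -S 4194304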

    Read the article

  • Cisco Nexus 5000 Vs. UCS 6100

    - by radius
    Hello, I'm a bit lost when I look at the Nexus 5000 and the UCS 6100. The description of the Nexus 5000 is quite clear and I see what it does, but the description of the UCS 6100 is a bit unclear to me. Could someone tell me what the difference would be between a Nexus 5000 with all ports at 10G and a UCS 6100 with all ports at 10G? Thanks.

    Read the article

  • 10GbE SFP+ crossover cable required? Is there such a thing?

    - by dc-patos
    To preface, this is my first experience with 10GbE networking and I have encountered an issue which my research does not seem to document a solution for... I have two servers (an older DL580 G5 and a DL380 G5), each with an HP NC522SFP 10GbE dual SFP+ port adapter. I have purchased copper "passive" direct-attach cables (which look like twinax), and they seem to work well when I connect them to the SFP+ ports on my Dell 5524 switch. However, if I directly connect the two servers with the same cable, the link doesn't come up. I am running WS2012 Standard on each server. My intention is to use one of these servers as a home-brew SAN, and I would like to enable multiple 10GbE paths for iSCSI traffic. My question(s): Can I connect the two adapters to each other, as I would with other, less speedy generations of Ethernet? If I can, do I require a crossover cable, or some other type of SFP+ cable, to do this? My 10GbE SFP+ switch ports are at a premium, but server-to-server connections are doable in small numbers for me, and I would really like the multiple paths this would give me. Is there a simple solution?

    Read the article

  • 10 GigE interfaces limit single-connection throughput to 1 Gb on a ProCurve 4208vl

    - by wazoox
    The setup is as follows: 3 Linux servers with Intel CX4 10 GigE controllers and an X-Serve with a Myricom 10 GigE CX4 controller are connected to a ProCurve 4208vl switch, with a myriad of other machines connected through good ol' 1000BASE-T. The interfaces are actually set up as 10 Gig, according to both the switch monitoring interface and the servers (ethtool, etc.). However, a single connection between two 10 GigE equipped machines through the switch is limited to exactly 1Gb. If I connect two of the 10 GigE machines directly with a CX4 cable, netperf reports the link bandwidth as 9000 Mb/s, and NFS achieves about 550 MB/s transfers. But when I'm using the switch, the connection tops out at 950 Mb/s through netperf and 110 MB/s with NFS. When I open several connections from 3 of the machines to the 4th, I get 350 MB/s of NFS transfer speed. So each individual 10 GigE port can actually reach much more than 1 Gb, but individual connections are strictly limited to 1 Gb. Conclusion: the 10 GigE connection through the switch behaves exactly like a trunk of 10 x 1 Gb connections. That doesn't make any sense to me, unless HP planned these ports only for cascading switches or strictly for many-clients-to-single-server connections. Unfortunately this is NOT the envisioned setup; we need big throughput from machine to machine. Is this a not-so-known (or carefully hidden...) limitation of this type of switch? Should I suggest seppuku to the HP representative? Does anyone have any idea how to enable proper behaviour? I upgraded at a hefty price from bonded 1Gb links to 10 GigE and see exactly ZERO gain! That's absolutely unacceptable.
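
    One quick host-side test to confirm whether the cap through the switch is per flow or per port (addresses and interface names are placeholders):

      # Single stream through the switch, then several parallel streams
      iperf -c 192.168.1.20 -t 30          # ~950 Mbit/s would match the observed cap
      iperf -c 192.168.1.20 -t 30 -P 4     # an aggregate well above 1 Gbit/s points at a per-flow limit

      # Double-check the host end: negotiated speed, pause frames, drops
      ethtool eth2 | grep Speed
      ethtool -S eth2 | egrep -i 'pause|drop|discard'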

    Read the article

  • 10GE network: is it still prohibitively expensive? Any options?

    - by BarsMonster
    Hi! I am building a home cluster where I am going to have about 16 nodes which can live with 1G ports, but I really want 10GE on the file server and the central node. It's all local, so there's no need for cables longer than 3-5m. And of course I want to spend as little money as possible (not going to spend more than the whole cluster costs) :-) What are my options?

    1) The legacy solution is to take some 24-48 port 1GE switch and connect the file/central nodes via 4-8 aggregated links. This will work, I guess, and the cost is very acceptable, but I am not sure if it's OK to use that many aggregated links. And of course it would be hard to double the bandwidth when needed... :-D

    2) A switch with several 10GE uplink 'ports'. As far as I can see, they all require modules which cost about $1,000, so I would need 4 10G modules and 2 10GE cards... Smells like way more than $5,000+...

    3) Connect the file and central nodes via 2 10G cards directly, and put 4 quad-port 1GE NICs in the fileserver. I save on 2 10G modules and a switch; the fileserver will have to do packet routing, but it's still going to have a lot of CPU left :-)

    4) Any other options? InfiniBand?

    5) Do Myrinet adapters work fine? I guess there are no cheaper options?

    6) Hmm... Scrap the fileserver, put it all on the central node and provide a dedicated 1GE port for each of the nodes... This is sad...
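
    For option 1, a minimal sketch of what the aggregated-link side could look like on a Debian/Ubuntu fileserver (interface names and addresses are made up; 802.3ad needs LACP configured on the switch, and the usual caveat applies that any single TCP flow still rides one 1GE link, so aggregation helps many concurrent clients rather than one fast stream):

      # /etc/network/interfaces fragment (requires the ifenslave package)
      auto bond0
      iface bond0 inet static
          address 192.168.10.1
          netmask 255.255.255.0
          bond-slaves eth1 eth2 eth3 eth4
          bond-mode 802.3ad              # or balance-alb if the switch can't do LACP
          bond-miimon 100
          bond-xmit-hash-policy layer3+4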

    Read the article

  • iSCSI using uplink ports on switch

    - by Gregory Thomson
    Do the uplink ports on switches typically work okay as iSCSI ports? We're adding a 10Gb iSCSI SAN and want to get a combo switch (48x 1Gb plus 4x 10Gb SFP+ uplink ports), using the 10Gb ports for the 10Gb iSCSI SAN while the 1Gb ports serve a 1Gb iSCSI SAN. We were told the uplinks don't provide the buffering needed for iSCSI. Is this specific to the switches used, and do some provide that needed buffering on their uplink ports while others do not?

    Read the article

  • hybrid cable for QSFP to CX4 conversion

    - by John-ZFS
    Here is a hybrid cable for QSFP to CX4. Will this fit SFP+ ports? I'm deeply confused by the standards and stuck in a situation with the wrong hardware selection! I personally have not seen the ports/hardware, hence the obviously stupid question. Thanks for stopping by and bearing with me. http://www.cablesondemand.com/pcategory/72/category/QSFP+-+CX4/URvars/Catalog/Library/InfoManage/QSFP_TO_CX4_COPPER_CABLES.htm

    Read the article
