Search Results

Search found 3061 results on 123 pages for 'interfaces'.

Page 10 of 123

  • 10 GigE interfaces limit single-connection throughput to 1 Gb on a ProCurve 4208vl

    - by wazoox
    The setup is as follows: three Linux servers with Intel CX4 10 GigE controllers and an Xserve with a Myricom 10 GigE CX4 controller are connected to a ProCurve 4208vl switch, alongside a myriad of other machines connected through good ol' 1000BASE-T. The interfaces are actually set up as 10 Gig, according to both the switch monitoring interface and the servers (ethtool, etc.). However, a single connection between two 10 GigE equipped machines through the switch is limited to exactly 1 Gb.

    If I connect two of the 10 GigE machines directly with a CX4 cable, netperf reports the link bandwidth as 9000 Mb/s and NFS achieves about 550 MB/s transfers. But when I go through the switch, the connection tops out at 950 Mb/s in netperf and 110 MB/s with NFS. When I open several connections from three of the machines to the fourth, I get 350 MB/s of NFS transfer speed. So each individual 10 GigE port can actually reach much more than 1 Gb, but each individual connection is strictly limited to 1 Gb.

    Conclusion: the 10 GigE connection through the switch behaves exactly like a trunk of ten 1 Gb links. That doesn't make any sense to me, unless HP intended these ports only for cascading switches or strictly for many-clients-to-one-server traffic. Unfortunately that is NOT the envisioned setup; we need big throughput from machine to machine. Is this a not-so-well-known (or carefully hidden...) limitation of this type of switch? Should I suggest seppuku to the HP representative? Does anyone have any idea how to get the proper behaviour? I upgraded, at a hefty price, from bonded 1 Gb links to 10 GigE and see exactly ZERO gain! That's absolutely unacceptable.
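
    One way to narrow this down (untested sketch; the interface name and the peer address 192.168.0.12 are illustrative) is to confirm the negotiated speed on each server and compare a single TCP stream against several parallel streams through the switch:

        # confirm the NIC really negotiated 10 Gb/s full duplex
        ethtool eth2 | grep -E 'Speed|Duplex'
        # single-stream TCP throughput to a peer running netserver
        netperf -H 192.168.0.12 -t TCP_STREAM -l 30
        # several parallel streams, to see whether the limit is per-connection or per-port
        for i in 1 2 3 4; do netperf -H 192.168.0.12 -t TCP_STREAM -l 30 & done; wait

    If parallel streams scale well past 1 Gb while a single stream stays pinned at ~950 Mb/s, that points at per-flow limiting on the switch rather than at the hosts.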

  • OpenVZ with bridged interfaces and VLAN

    - by Deimosfr
    Hi, I've got a problem with OpenVZ and a bridged VLAN. Here is my configuration, in short: WAN -> OpenBSD router/firewall -> Debian OpenVZ host with two bridges; br0 carries VE101 and, via VLAN br0.110, VE103, while br1 carries VE102.

    I can't make the VLAN work on br0 (br0.110) and I would like to understand why. I don't have any switch, so no problem with an unmanageable switch. I've configured a VLAN interface on OpenBSD in /etc/hostname.vlan110:

        inet 192.168.110.254 255.255.255.0 NONE vlan 110 vlandev sis1

    and it seems to be working fine. I've also adapted my PF configuration to work with the VLAN, but I don't see any incoming traffic. On my Debian Lenny, here is my interfaces configuration:

        # The loopback network interface
        auto lo
        iface lo inet loopback

        # br0
        auto br0
        iface br0 inet static
            address 192.168.100.1
            netmask 255.255.255.0
            gateway 192.168.100.254
            network 192.168.100.0
            broadcast 192.168.100.255
            bridge_ports eth0
            bridge_fd 9
            bridge_hello 2
            bridge_maxage 12
            bridge_stp off

        # VLAN 110
        auto br0.110
        iface br0.110 inet static
            address 192.168.110.1
            netmask 255.255.255.0
            network 192.168.110.0
            gateway 192.168.110.254
            broadcast 192.168.110.255
            pre-up vconfig add br0 110
            post-down vconfig rem br0.110

    It looks OK, but when I start my VE, here is the message:

        ...
        Configure veth devices: veth103.0
        Adding interface veth103.0 to bridge br0.110 on CT0 for VE103
        can't add veth103.0 to bridge br0.110: Operation not supported
        VE start in progress...

    So I've got one error here. I've followed this documentation http://wiki.openvz.org/VLAN but it doesn't work. I've certainly missed something, but I don't know what. Could someone help me please? Thanks
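
    One direction that might explain the error (untested sketch; names follow the question): br0.110 is a VLAN sub-interface on top of a bridge, not a bridge itself, so veth devices cannot be enslaved to it. An alternative layout tags the VLAN on the physical NIC and bridges that instead:

        # load 802.1Q support and create the VLAN on the physical interface
        modprobe 8021q
        vconfig add eth0 110
        # build a separate bridge for VLAN 110 and attach the tagged interface to it
        brctl addbr br110
        brctl addif br110 eth0.110
        ifconfig br110 192.168.110.1 netmask 255.255.255.0 up
        # then point the VE's veth (e.g. veth103.0) at br110 instead of br0.110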

  • How to route broadcast packets from machine with two network interfaces on same subnet

    - by Syam
    I run RHEL 5 and have two NICs on one machine connected to the same subnet:

        eth0 192.168.100.10
        eth1 192.168.100.11

    My application needs to receive and transmit UDP packets (both unicast and broadcast) via these interfaces. I've found the way to handle the ARP problem, and I've added rules and routes to handle the routing problem:

        ip rule add from 192.168.100.10 lookup 10
        ip route add table 10 default src 192.168.100.10 dev eth0

    (and similarly, table 11 for eth1). The problem is that only unicast packets get routed properly; broadcast packets always go out through eth0. I tried removing the entries for 192.168.100.0 and 192.168.100.255 from table 255 and adding them to my tables, but then I see ARP requests going out for packets to 192.168.100.255 (obviously, no node responds and nobody gets any data). Due to several techno-political issues I'm stuck with this configuration and can't change subnets or try something different. I've tried SO_BINDTODEVICE and it works, but I'd prefer a solution that doesn't need my application to run as root. Is there a way to get this working? Any help is highly appreciated.
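
    One possible direction (untested sketch, reusing the addresses from the question): give each per-interface table its own link-scope and broadcast routes, so that 192.168.100.255 is treated as a broadcast on whichever NIC the rule selects instead of being ARPed for:

        # per-interface tables: subnet, broadcast and default routes for eth0 ...
        ip route add 192.168.100.0/24 dev eth0 scope link src 192.168.100.10 table 10
        ip route add broadcast 192.168.100.255 dev eth0 scope link src 192.168.100.10 table 10
        ip route add default dev eth0 src 192.168.100.10 table 10
        # ... and for eth1
        ip route add 192.168.100.0/24 dev eth1 scope link src 192.168.100.11 table 11
        ip route add broadcast 192.168.100.255 dev eth1 scope link src 192.168.100.11 table 11
        ip route add default dev eth1 src 192.168.100.11 table 11
        # select the table by the source address the socket binds to
        ip rule add from 192.168.100.10 lookup 10
        ip rule add from 192.168.100.11 lookup 11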

  • Adding Multiple Interfaces to EC2 Ubuntu 12.04

    - by nocode
    I have an m1.medium Ubuntu 12.04 instance with two ENIs. I have a VPC set up with a private and a public subnet:

        Private: 10.50.1.0/24
        Public:  10.50.101.0/24

    I launched the instance on the private subnet. I configured a NAT instance and route all servers in the private subnet through it for internet access. The route table on the private subnet points towards the NAT instance and the route table on the public subnet points to the internet gateway.

    I am trying to add a public interface to the machine so that I can put it behind an ELB. When I added the second ENI, configured a static IP in /etc/network/interfaces and restarted the networking service, I could no longer reach the private subnet from the public subnet:

        Works:          Private -> Private, Private -> Public
        Does not work:  Public  -> Private

    For Public -> Private, I ran a tcpdump on the private machine and can see the request coming in. My guess is that the reply is trying to go out over the new public interface instead of the private one. Here's my route table:

        default       10.50.1.1   0.0.0.0         UG  100 0 0 eth0
        10.50.1.0     *           255.255.255.0   U   0   0 0 eth0
        10.50.101.0   *           255.255.255.0   U   0   0 0 eth1

    My networking knowledge is limited and I believe I have to add some routes, but I'm unsure of the exact commands and syntax.
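
    A sketch of the usual fix for this kind of asymmetric routing (untested; the gateway 10.50.101.1 and the table name are assumptions based on the subnets above): send replies out of eth1 only for traffic sourced from eth1's address, via a second routing table:

        # add a named routing table for the public ENI (the number/name is arbitrary)
        echo "2 eth1rt" >> /etc/iproute2/rt_tables
        # default route for that table via the public subnet's gateway
        ip route add default via 10.50.101.1 dev eth1 table eth1rt
        # use that table for anything sourced from eth1's address (narrow to the ENI's IP if known)
        ip rule add from 10.50.101.0/24 lookup eth1rt
        ip route flush cache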

  • Watchguard Firebox "split" fibre optic line into 2 interfaces

    - by fRAiLtY-
    We have a requirement on our Watchguard Firebox XTM505 to be able to split our incoming external interface, in this case a dedicated fibre optic leased line, 100/100. We use the line in our office of approx 30 machines, but we also re-sell it to an external company who use it to provide wireless internet solutions to the public.

    The current infrastructure is as follows: the leased line comes into a Juniper SRX210 managed by the ISP, then one cable out into an unmanaged Netgear switch, from which one cable goes into our firewall and office network and one cable goes to our external provider's core router, managed by them.

    We have been informed that having the unmanaged switch in that position poses a security risk, and that a good option would be to get our Watchguard firewall to perform the split: separating our office onto a trusted interface, and "passing through" the external line to their managed router. It is alleged that the Watchguard is capable of doing this and also of rate limiting the interfaces, i.e. 20 Mbps for the trusted interface and 80 Mbps for the "pass-through"; however Watchguard technical support don't seem to understand what we're trying to achieve. Can anyone advise whether this is possible on a Watchguard device, and how? Or is there perhaps a better way of achieving this, maybe with a managed switch instead of the unmanaged one? Cheers

  • What is the fall-off of sub-second throughput on Ethernet network interfaces?

    - by Kyle Brandt
    On a network interface, speeds are given in terms of data over time, in particular bits per second. However, in the uber-fast world of computing, a second is kind of a really long time. So, for example, given a linear fall-off, a 1 Gbit per second interface would do 500 Mbit per half second, 250 Mbit per quarter second, etc. I imagine that at certain units of time this is no longer linear; perhaps this is set by Ethernet frequencies, system clock speeds, interrupt timers, etc. I am sure this varies depending on the system, but does anyone have more information or whitepapers on this?

    One of the main reasons I am curious is to understand output drops on interfaces. Even if the speed per second is much lower than the interface can handle, perhaps there are spikes that cause drops for only small numbers of milliseconds. Perhaps various coalescing would hide this effect, or perhaps increase it on the receiving interface? Do queues make a difference here?

    Example: if this is linear down to the millisecond, we would have 1 Mbit/ms, and if Wireshark isn't distorting what I see, should I see drops when I have a spike beyond 1 Mbit within a single millisecond?
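
    A rough back-of-the-envelope, plus a way to look for the drops (sketch only; the interface name and commands assume a Linux host): 1 Gbit/s is about 1 Mbit, or roughly 125 KB, per millisecond, so a 2 Mbit burst arriving inside one millisecond needs on the order of 125 KB of queue at the bottleneck to avoid a tail drop.

        # interface-level drop counters reported by the driver (counter names vary per NIC)
        ethtool -S eth0 | grep -i drop
        # queueing discipline statistics: backlog, overlimits and drops
        tc -s qdisc show dev eth0
        # interrupt coalescing settings, which smear packet delivery across time at the receiver
        ethtool -c eth0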

  • Prevent Ninject from calling Initialize multiple times when binding to several interfaces

    - by Ahe
    Hi. We have a concrete singleton service which implements Ninject.IInitializable and two interfaces. The problem is that the service's Initialize method is called twice, when only one call is desired. We are using .NET 3.5 and Ninject 2.0.0.0. Is there a pattern in Ninject to prevent this from happening? Neither of the interfaces implements Ninject.IInitializable. The service class is:

        public class ConcreteService : IService1, IService2, Ninject.IInitializable
        {
            public void Initialize()
            {
                // This is called twice!
            }
        }

    And the module looks like this:

        public class ServiceModule : NinjectModule
        {
            public override void Load()
            {
                this.Singleton<IService1, IService2, ConcreteService>();
            }
        }

    where Singleton is an extension method defined like this:

        public static void Singleton<K, T>(this NinjectModule module) where T : K
        {
            module.Bind<K>().To<T>().InSingletonScope();
        }

        public static void Singleton<K, L, T>(this NinjectModule module) where T : K, L
        {
            Singleton<K, T>(module);
            module.Bind<L>().ToMethod(n => n.Kernel.Get<T>());
        }

    Of course we could add a bool initialized member to ConcreteService and initialize only when it is false, but that seems quite a bit of a hack, and it would require repeating the same logic in every service that implements two or more interfaces.

    Thanks for all the answers! I learned something from all of them (I am having a hard time deciding which one to mark correct). We ended up creating an IActivable interface and extending the Ninject kernel (it also nicely removed code-level dependencies on Ninject, although attributes still remain).

  • Splitting EJBs and interfaces into separate module -- deployment fails

    - by Hank
    I'm having trouble following this guide to "extract" my interfaces and entities from my EAR so they can be used from another web application. I use NetBeans 6.8 and GlassFish 3.0.1.

        "Java Class Library" project
            contains all the entities and interfaces
        "Java EE Application" project
            class library added to the project, is packaged into the EAR
            contains EJB implementations, MDBs, Test
        "Java Web Application" project
            class library added to the project, is packaged into the WAR
            contains the REST interface

    When I build and deploy the web application, all goes well. When I build the Java EE application, I can see the jar file (interfaces, entities) being included. But when I try to deploy the EAR, GlassFish refuses it with a java.lang.NoClassDefFoundError:

        [#|2010-03-28T18:25:59.875+0200|WARNING|glassfishv3.0|javax.enterprise.system.tools.deployment.org.glassfish.deployment.common|_ThreadID=28;_ThreadName=Thread-1;|Error in annotation processing: java.lang.NoClassDefFoundError: mvs/core/StoreServiceLocal|#]
        [#|2010-03-28T18:25:59.876+0200|SEVERE|glassfishv3.0|javax.enterprise.system.core.com.sun.enterprise.v3.server|_ThreadID=28;_ThreadName=Thread-1;|Exception while deploying the app java.lang.IllegalArgumentException: Invalid ejb jar [CoreServer]: it contains zero ejb. Note: 1. A valid ejb jar requires at least one session, entity (1.x/2.x style), or message-driven bean. 2. EJB3+ entity beans (@Entity) are POJOs and please package them as library jar. 3. If the jar file contains valid EJBs which are annotated with EJB component level annotations (@Stateless, @Stateful, @MessageDriven, @Singleton), please check server.log to see whether the annotations were processed properly.

    'mvs/core/StoreServiceLocal' is an interface which is defined in the library jar file. What am I doing wrong?

  • Too many Tunnel Adapter Interfaces

    - by Tomas Lycken
    If I open a command prompt on my machine and type ipconfig /all, I see lots of entries like:

        Tunnel adapter Local Area Connection* 9:

           Media State . . . . . . . . . . . : Media disconnected
           Connection-specific DNS Suffix  . :
           Description . . . . . . . . . . . : Microsoft 6to4 Adapter #5
           Physical Address. . . . . . . . . : 00-00-00-00-00-00-00-E0
           DHCP Enabled. . . . . . . . . . . : No
           Autoconfiguration Enabled . . . . : Yes

    In fact, there are so many that my "real" adapters are pushed out of view and can't be seen anymore. Is there any flag I can use on ipconfig to hide all virtual interfaces? Or is there some other way around this problem? Since they always say "Media disconnected" I suppose disabling them could be an option, but if possible I'd rather not turn any functionality off; I just want to control the output I get from ipconfig. Also, I know these are related to IPv6. However, most of what I find on Google merely states what they are and that they're harmless, nothing about hiding or removing them.
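
    If disabling does turn out to be acceptable after all, a possible sketch (run in an elevated prompt; note this switches off the IPv6 transition technologies that create these adapters):

        :: turn off the automatic tunnel interfaces (6to4, Teredo, ISATAP)
        netsh interface 6to4 set state disabled
        netsh interface teredo set state disabled
        netsh interface isatap set state disabled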

  • Linux bonded Interfaces hanging periodically

    - by David
    I have several hosts that are showing problems with connectivity. When working from the command line, for example, typing freezes for a second or so, then recovers, then it does it again. The most egregious example host would freeze (input) for 15-30 seconds, then recover and go out again 5 seconds later. Switching cables didn't do anything, but removing one of the physical cables caused everything to clear up instantly (which is why I think this is a network problem). Looking at the network I couldn't see any traffic that would explain this.

    These Ethernet interfaces (Gigabit, Dell) were working normally previously, but since we moved the systems and put them on a new set of switches, this has been a problem on multiple theoretically identically-configured hosts. The original switches were an HP ProCurve 1810-24G and an HP ProCurve 1800-24G connected with LLDP; the new switches are both Cisco SG 200-26, which I understand are rebranded Linksys switches.

    Is this caused by a problem with the switches? Is it the switch configuration? Are the Cisco switches incapable of handling this? I don't see where the bonding configuration is located; I searched the usual /etc/sysconfig/network/devices but there's nothing in there about options (like MII polling) and nothing about the method of balancing the two links. Searching scripts, I can't find anything in /etc/init.d/network either. The hosts are almost all Red Hat Enterprise Linux 5.x systems (5.6, 5.7) but some are Ubuntu Server 10.04.3 Lucid Lynx. I need help with both if it comes to that.

    UPDATE: We're also seeing some problems with servers on the original switches. The HP switches and the Cisco switches are also interconnected (temporarily); there is a cable run from one switch to the other. Pings to any of these hosts show about one ICMP packet out of every 5-6 getting dropped (timed out). Could there be an interaction between the two switches? Oh, and the hosts are using bonding with balance-rr as the mode.
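
    A possible place to start on the hosts (untested sketch; file names follow RHEL 5 conventions and the bond name is illustrative):

        # current bonding mode, MII status and per-slave link state
        cat /proc/net/bonding/bond0
        # on RHEL 5 the options usually live in the ifcfg file or modprobe.conf, e.g.
        #   /etc/sysconfig/network-scripts/ifcfg-bond0:  BONDING_OPTS="mode=balance-rr miimon=100"
        #   /etc/modprobe.conf:                          options bond0 mode=balance-rr miimon=100
        grep -r bond /etc/sysconfig/network-scripts /etc/modprobe.conf 2>/dev/null

    balance-rr with the two slaves split across two interconnected switches is known to reorder frames and can look a lot like this; mode=active-backup (or 802.3ad on a single LACP-capable switch) may be worth testing.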

  • Routing between 2 different subnets on 2 different interfaces in SonicOS

    - by Chris1499
    I'm having a bit of a problem allowing traffic between two of my subnets. Here's the structure I've built: the X0 interface has our Windows server on it and handles DHCP/DNS, etc.; X1 has the WAN connection; the SonicWall is handling DHCP on X2; the X3 interface is connected to a different VLAN on the 48-port switch, and the SonicWall is handling DHCP on that network as well.

    So here's what I want to do. The network on X2 is for our guest wireless; I don't want it to be able to access any of the other networks, just the internet, so I have all of that blocked in the firewall. No issues there. The X3 network is going to be for programmable controllers, and needs to be able to access the X0 network where our computers are. This is where my problem is: I'm not able to get between the 192.168.2.xxx and the 192.168.1.xxx networks on interfaces X0 and X3 respectively.

    I have these rules set up in the firewall. The LAN Primary Subnet is 192.168.2.0 on X0, so if I'm not mistaken, this should allow traffic between the two through the firewall. Now this is where I'm a little confused: do I need to use NAT to get the traffic from X0 to go to X3 (and vice versa), or a static route, or both? Currently I have both, though I doubt they're done correctly (also in screenshot). I've tried to ping between the two without luck. Any advice, or if you see what's wrong with my setup, is much appreciated. If you need more information, let me know. Thanks all!

    EDIT: I found that I need neither NAT nor a static route; the setting in the firewall is enough. I can now ping from the 192.168.1.xxx network, however I can't access the server on the 192.168.2.xxx network. When I try to access it I get: "An error occurred while reconnecting Z: to server, Microsoft Windows Network: The local device name is already in use. This connection has not been restored." What am I missing?

  • Check packet vlan tag using Tap virtual interface

    - by ankit
    Hi all, I am trying to learn how to implement virtual interfaces using the tap driver. So far my understanding is that using the tap driver I can create a virtual interface and then have a userspace program attach to this interface to analyse the data coming into the device. Now, what if I attach a Cisco switch to my LAN interface using a trunk link, forward all the packets coming into the LAN interface to the virtual tap interface, and then, in my program attached to that interface, do some coding to analyse the VLAN tag in each packet and only allow certain VLANs to be forwarded to the WAN interface? Does this sound plausible, or is there a flaw in my basic understanding? Thanks for the help! ankit
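
    A minimal sketch of the plumbing (untested; interface names are illustrative), assuming the userspace program opens /dev/net/tun and attaches to the tap device:

        # create a tap device and bring it up
        ip tuntap add dev tap0 mode tap
        ip link set tap0 up
        # getting frames from the trunk-facing LAN interface into tap0 is up to your own
        # code or a bridge; once frames arrive on the tap fd, each one is a full Ethernet
        # frame, so an 802.1Q tag shows up as TPID 0x8100 at bytes 12-13 and the 12-bit
        # VLAN ID in the low bits of bytes 14-15, which the program can check before
        # forwarding to the WAN side.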

  • Problems bringing up a second virtual network interface

    - by tubaguy50035
    I'm having issues adding a second IP address to one interface. Below is my /etc/network/interfaces:

        # The loopback network interface
        auto lo
        iface lo inet loopback

        # eth0 is our main IP address
        auto eth0
        iface eth0 inet static
            address 198.58.103.*
            netmask 255.255.255.0
            gateway 198.58.103.1

        # eth0:0 is our private address
        auto eth0:0
        iface eth0:0 inet static
            address 192.168.129.134
            netmask 255.255.128.0

        # eth0:1 is for www.site.com
        auto eth0:1
        iface eth0:1 inet static
            address 198.58.104.*
            netmask 255.255.255.0
            gateway 198.58.104.1

    When I run /etc/init.d/networking restart, I get a failure bringing up eth0:1:

        RTNETLINK answers: File exists
        Failed to bring up eth0:1.

    Any reason this would be? I didn't have any problems when I first set up eth0 and eth0:0.
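
    A likely culprit (sketch only, untested; the table number 104 is arbitrary): the second gateway line asks the kernel to install a second default route, which it refuses with "File exists". One option is to drop that gateway and, if replies from this address really must leave via 198.58.104.1, add a source-based policy route instead:

        # eth0:1 is for www.site.com
        auto eth0:1
        iface eth0:1 inet static
            address 198.58.104.*
            netmask 255.255.255.0
            # no second `gateway` line; route replies from this address via its own gateway
            post-up ip route add default via 198.58.104.1 dev eth0 table 104 || true
            post-up ip rule add from 198.58.104.0/24 lookup 104 || true
            pre-down ip rule del from 198.58.104.0/24 lookup 104 || true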

  • How to set up an OpenVPN server with two gateways to the internet

    - by fourat
    I have an OpenVPN server behind two WAN interfaces, eth1 and eth2, where eth1 is the default gateway and eth2 is the interface OpenVPN binds to. The problem is that my OpenVPN server replies to the client via the default gateway (through eth1), and the TCP negotiation is lost before any tunnel is established. Here's what's happening:

        wan client -----> eth2 ----> openvpn -----> eth1 ----> lost, never delivered back to the client

    Is there a way to tell OpenVPN to stick to eth2 and use it for all its traffic?
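
    OpenVPN itself only binds to an address; getting the replies back out through eth2 is a routing job. A possible sketch (untested; <eth2-address> and <eth2-gateway> are placeholders for the real values):

        # a dedicated routing table for traffic sourced from the OpenVPN-facing address
        echo "100 vpn" >> /etc/iproute2/rt_tables
        ip route add default via <eth2-gateway> dev eth2 table vpn
        ip rule add from <eth2-address> lookup vpn
        ip route flush cache
        # in the server config, keep OpenVPN bound explicitly to that address:
        #   local <eth2-address>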

  • Why use I-prefix for interfaces in Java?

    - by Lars Andren
    Is there some reason why people use the I-prefix for interfaces in Java? It seems to be a C# convention spilling over. For C# it makes sense, as the answers to this question explain. However, for Java a class declaration clearly states which class is extended and which interfaces are implemented:

        public class Crow extends Animal implements Bird

    I don't think Joshua Bloch suggested this in Effective Java, and I think he usually makes a lot of sense. I get the "I-verbing" argument as presented in an answer to the question above, but is there some other use for this convention in Java?

  • Abstract class over Interfaces in ADO.Net Environment

    - by Amit Ranjan
    I am developing a web app but am not satisfied with the architecture I am following, which is the plain old conventional 3-tier architecture. What I want is to follow some design pattern or architecture that will help me decouple my code. I know about the MVC and MVP architectures for web apps, but I need something different from that. I want to use OOP concepts (abstract classes, interfaces, polymorphism, etc.) in my app, but not MVC or MVP; I can't say exactly why. I haven't built any ADO.NET application using abstract classes or interfaces before, so I need your help. Thanks

  • Separation of interfaces and implementation

    - by bonefisher
    From an assembly (or module) perspective, what do you think of separating an interface (assembly 1) from its implementation (assembly 2)? This way we can use an IoC container to develop a more decoupled design. Say we have an assembly 'A', which contains interfaces only. Then we have an assembly 'B', which references 'A' and implements those interfaces; it depends only on 'A'. In assembly 'C' we can then use the IoC container to create objects of the 'A' types, using dependency injection of objects from 'B'. This way 'B' and 'C' are completely unaware of (not dependent on) each other.
