Search Results

Search found 1706 results on 69 pages for 'packet shaping'.


  • How do I run Cisco Packet Tracer 6.0.1?

    - by user186296
    I have a problem running Packet Tracer 6.0.1 on Ubuntu 13.04. I downloaded a file from Cisco Netspace - this file had no extension. I added the extension tar.gz to it and was able to unpack it to a location containing other folders and files; among them was an install file. This was a .bin file, so I used sudo chmod +x on it, ran ./filename, and the installation launched; I had to accept the licence and so on. After a successful installation I tried to launch Packet Tracer and nothing happened. Please help me: how can I correctly install and run Packet Tracer? Note that I'm new to using Linux.
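    For reference, the installation steps described above look roughly like the following shell session. This is only a hedged sketch: the archive and installer file names are assumptions, since the actual download has no extension and the installer name may differ.

      # rename the extensionless download and unpack it (file names are assumed)
      mv PacketTracer601 PacketTracer601.tar.gz
      mkdir -p ~/pt601 && tar -xzf PacketTracer601.tar.gz -C ~/pt601
      cd ~/pt601
      # make the installer executable and run it, accepting the licence when prompted
      sudo chmod +x install
      sudo ./install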

    Read the article

  • What are the attack vectors for passwords sent over http?

    - by KevinM
    I am trying to convince a customer to pay for SSL for a web site that requires login. I want to make sure I correctly understand the major scenarios in which someone can see the passwords that are being sent. My understanding is that anyone at any of the hops along the way can use a packet analyzer to view what is being sent. This seems to require that any hacker (or their malware/botnet) be on the same subnet as one of the hops the packet takes to arrive at its destination. Is that right? Assuming some flavor of this subnet requirement holds true, do I need to worry about all the hops or just the first one? The first one I obviously need to worry about if the user is on a public WiFi network, since anyone could be listening in. Should I be worried about what's going on in the subnets that packets travel across beyond this? I don't know a ton about network traffic, but I would assume it's flowing through data centers of major carriers and there aren't a lot of juicy attack vectors there, but please correct me if I am wrong. Are there other vectors to be worried about besides someone listening with a packet analyzer? I am a networking and security noob, so please feel free to set me straight if I am using the wrong terminology in any of this.

    Read the article

  • Malformed packet error during MySQL LOAD DATA LOCAL INFILE

    - by dnagirl
    I am trying to load a file into a MySQL (v5.1.38) InnoDB table using PHP's mysqli::query and a LOAD DATA LOCAL INFILE query. The query returns a 'Malformed packet' error, code 2027. Any ideas what is wrong? Here is the target table:

      CREATE TABLE `zbroom`.`lee_datareceive` (
        `a` varchar(45) NOT NULL,
        `b` varchar(45) NOT NULL,
        `c` varchar(45) NOT NULL
      ) ENGINE=InnoDB DEFAULT CHARSET=latin1;

    Here is the query:

      LOAD DATA LOCAL INFILE '/path/to/file.txt' INTO TABLE lee_datareceive FIELDS TERMINATED BY '\t';

    Here is the file data (values are tab separated):

      t1 t2 t3
      a  b  c
      d  e  f
      g  h  i
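    One way to narrow the problem down - a hedged sketch, not a confirmed fix - is to run the same statement from the mysql command-line client with the local-infile option enabled, which takes PHP/mysqli out of the picture; the host and user below are placeholders.

      # check whether the server allows LOAD DATA LOCAL at all
      mysql -h db.example.com -u someuser -p -e "SHOW GLOBAL VARIABLES LIKE 'local_infile';"
      # try the same load from the CLI client with LOCAL INFILE explicitly enabled
      mysql --local-infile=1 -h db.example.com -u someuser -p zbroom \
        -e "LOAD DATA LOCAL INFILE '/path/to/file.txt' INTO TABLE lee_datareceive FIELDS TERMINATED BY '\t';"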

    Read the article

  • RST packet sent by the server

    - by intoTHEwild
    I am developing a client in Flash and using HTTP requests/responses to communicate with the server. For a while the session works fine and then the connection is terminated by the server. I did a Wireshark sniff at the server and the last message it sends is an RST packet. Also, it happens only when I'm using IE and the server and client are in different domains. This does not happen in Firefox. I have been struggling to find a solution, until I found this thread. It's a bit old but I hope I can get some help. I am not sure if this bit of info is important, but I am connecting to the server via a gateway. Any clues or suggestions on where I should look to locate the problem?
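    As a hedged sketch (interface name and client address are placeholders), a server-side capture like the following can show exactly when the reset is emitted and what precedes it, which complements the Wireshark view:

      # capture the client's whole session to a file for later analysis in Wireshark
      tcpdump -i eth0 -s 0 -w ie-session.pcap host 203.0.113.10
      # or watch only RST segments live
      tcpdump -i eth0 'host 203.0.113.10 and tcp[tcpflags] & tcp-rst != 0'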

    Read the article

  • sending a packet to multiple client at a time from udp socket

    - by mawia
    Hi all, I am trying to write a UDP server that sends a copy of a file to multiple clients. Now suppose that, somehow, I know the addresses of those clients statically (for the sake of simplicity), and I want to send the packet to these addresses. How exactly do I need to fill the sockaddr structure to hold the address of each client? I am using an array of sockaddr structures (to hold the client addresses) and trying to send to each one of them in turn. The problem now is filling each individual sockaddr structure with a client address. I was trying something like this:

      sa[1].sin_family = AF_INET;
      sa[1].sin_addr.s_addr = htonl(INADDR_ANY); // shouldn't I replace this INADDR_ANY with the client IP?
      sa[1].sin_port = htons(50002);

    Correct me if this is not the correct way. Any help in this regard will be highly appreciated. Thanks in advance, Mawia

    Read the article

  • Network Data Packet connectivity intent

    - by Rakesh
    I am writing an Android application which can enable and disable the Network Data packet connection. I am also using one broadcast receiver to check the Network Data packet connection. I have registered broadcast receiver and provided required permission in Manifest file. But when I run this application it changes the connection state and after that it crashes. But when I don't include this broadcast receiver it works fine. I am not able to see any kind of log which can provide some clue. Here is my code for broadcast receiver. <?xml version="1.0" encoding="utf-8"?> <manifest xmlns:android="http://schemas.android.com/apk/res/android" package="com.rakesh.simplewidget" android:versionCode="1" android:versionName="1.0" > <uses-sdk android:minSdkVersion="10" /> <!-- Permissions --> <uses-permission android:name="android.permission.CHANGE_NETWORK_STATE" /> <uses-permission android:name="android.permission.MODIFY_PHONE_STATE" /> <uses-permission android:name="android.permission.ACCESS_NETWORK_STATE" /> <application android:icon="@drawable/ic_launcher" android:label="@string/app_name" > <activity android:name=".SimpleWidgetExampleActivity" android:label="@string/app_name" > <intent-filter> <action android:name="android.intent.action.MAIN" /> <category android:name="android.intent.category.LAUNCHER" /> </intent-filter> </activity> <!-- <receiver android:name=".ExampleAppWidgetProvider" android:label="Widget ErrorBuster" > <intent-filter> <action android:name="android.appwidget.action.APPWIDGET_UPDATE" /> </intent-filter> <meta-data android:name="android.appwidget.provider" android:resource="@xml/widget1_info" /> </receiver> --> <receiver android:name=".ConnectivityReceiver" > <intent-filter> <action android:name="android.net.conn.CONNECTIVITY_CHANGE" /> </intent-filter> </receiver> </application> </manifest> My Broadcast receiver class is as following. import android.content.BroadcastReceiver; import android.content.Context; import android.content.Intent; import android.net.ConnectivityManager; import android.net.NetworkInfo; import android.util.Log; public class ConnectivityReceiver extends BroadcastReceiver { @Override public void onReceive(Context context, Intent intent) { NetworkInfo info = (NetworkInfo)intent.getParcelableExtra(ConnectivityManager.EXTRA_NETWORK_INFO); if(info.getType() == ConnectivityManager.TYPE_MOBILE){ if(info.isConnectedOrConnecting()){ Log.e("RK","Mobile data is connected"); }else{ Log.e("RK","Mobile data is disconnected"); } } } } my Main activity file. package com.rakesh.simplewidget; import java.lang.reflect.Field; import java.lang.reflect.Method; import android.app.Activity; import android.content.Context; import android.content.Intent; import android.graphics.Color; import android.net.ConnectivityManager; import android.os.Bundle; import android.telephony.TelephonyManager; import android.util.Log; import android.view.View; import android.widget.Button; import android.widget.Toast; public class SimpleWidgetExampleActivity extends Activity { private Button btNetworkSetting; /** Called when the activity is first created. 
*/ @Override public void onCreate(Bundle savedInstanceState) { super.onCreate(savedInstanceState); setContentView(R.layout.main); btNetworkSetting = (Button)findViewById(R.id.btNetworkSetting); if(checkConnectivityState(getApplicationContext())){ btNetworkSetting.setBackgroundColor(Color.GREEN); }else{ btNetworkSetting.setBackgroundColor(Color.GRAY); } } public void openNetworkSetting(View view){ Method dataConnSwitchmethod; Class telephonyManagerClass; Object ITelephonyStub; Class ITelephonyClass; Context context = view.getContext(); boolean enabled = !checkConnectivityState(context); final ConnectivityManager conman = (ConnectivityManager) context.getSystemService(Context.CONNECTIVITY_SERVICE); try{ final Class conmanClass = Class.forName(conman.getClass().getName()); final Field iConnectivityManagerField = conmanClass.getDeclaredField("mService"); iConnectivityManagerField.setAccessible(true); final Object iConnectivityManager = iConnectivityManagerField.get(conman); final Class iConnectivityManagerClass = Class.forName(iConnectivityManager.getClass().getName()); final Method setMobileDataEnabledMethod = iConnectivityManagerClass.getDeclaredMethod("setMobileDataEnabled", Boolean.TYPE); setMobileDataEnabledMethod.setAccessible(true); setMobileDataEnabledMethod.invoke(iConnectivityManager, enabled); if(enabled){ Toast.makeText(view.getContext(), "Enabled Network Data", Toast.LENGTH_LONG).show(); view.setBackgroundColor(Color.GREEN); } else{ Toast.makeText(view.getContext(), "Disabled Network Data", Toast.LENGTH_LONG).show(); view.setBackgroundColor(Color.LTGRAY); } }catch(Exception e){ Log.e("Error", "some error"); Toast.makeText(view.getContext(), "It didn't work", Toast.LENGTH_LONG).show(); } } private boolean checkConnectivityState(Context context){ final TelephonyManager telephonyManager = (TelephonyManager) context .getSystemService(Context.TELEPHONY_SERVICE); ConnectivityManager af ; return telephonyManager.getDataState() == TelephonyManager.DATA_CONNECTED; } } Log file: java.lang.RuntimeException: Unable to instantiate receiver com.rakesh.simplewidget.ConnectivityReceiver: java.lang.ClassNotFoundException: com.rakesh.simplewidget.ConnectivityReceiver in loader dalvik.system.PathClassLoader[/data/app/com.rakesh.simplewidget-2.apk] E/AndroidRuntime(26094): at android.app.ActivityThread.handleReceiver(ActivityThread.java:1777) E/AndroidRuntime(26094): at android.app.ActivityThread.access$2400(ActivityThread.java:117) E/AndroidRuntime(26094): at android.app.ActivityThread$H.handleMessage(ActivityThread.java:985) E/AndroidRuntime(26094): at android.os.Handler.dispatchMessage(Handler.java:99) E/AndroidRuntime(26094): at android.os.Looper.loop(Looper.java:130) E/AndroidRuntime(26094): at android.app.ActivityThread.main(ActivityThread.java:3691) E/AndroidRuntime(26094): at java.lang.reflect.Method.invokeNative(Native Method) E/AndroidRuntime(26094): at java.lang.reflect.Method.invoke(Method.java:507) E/AndroidRuntime(26094): at com.android.internal.os.ZygoteInit$MethodAndArgsCaller.run(ZygoteInit.java:907) E/AndroidRuntime(26094): at com.android.internal.os.ZygoteInit.main(ZygoteInit.java:665) E/AndroidRuntime(26094): at dalvik.system.NativeStart.main(Native Method) It seems Android is not able to recognize file Broadcast Receiver class. Any idea why I am getting this error? PS: Some information about Android environment and platform. - Android API 10. 
- Running on a Samsung Galaxy II, which has Android 2.3.6. Edit: my broadcast receiver file ConnectivityReceiver.java was in the default package, so it was not being recognized by Android; Android was looking for the class in the current package, i.e. com.rakesh.simplewidget. I moved ConnectivityReceiver.java into the com.rakesh.simplewidget package and the problem was solved.

    Read the article

  • Any program to help me check whether an ethernet channel can support full-length VLAN packet?

    - by Jimm Chen
    Sometimes I face a situation where I need to quickly and explicitly know whether a full-length VLAN packet can traverse between two RJ45 ports. Yes, I mean an 802.1Q Ethernet frame with EtherType 81 00 (diagram below). What I can do now is: get two Windows PCs; on each PC, install an Intel Gigabit NIC and the Intel-specific driver to create a virtual NIC with VLAN ID=3 assigned. Then connect the two PCs to each of the two RJ45 ports. Finally, execute ping to generate a full-length Ethernet packet:

      ping -f -l 1472 <dest-IP>

    This way, I can be sure that the sent packet is a maximum-size IP packet of 1500 bytes (20 bytes of IP header, 8 bytes of ICMP header and 1472 bytes of ICMP data). If the ping gets a reply, I know that the Ethernet channel supports full-length VLAN packets. From my experiments, some home switches or broadband routers (e.g. Linksys WRT54G) do not support full-length VLAN packet switching, so only ping -f -l 1468 succeeds. You see, I have to use an expensive Intel NIC to carry out that test, which is quite inconvenient. Most laptops today are not equipped with an Intel NIC, and even when one is, Intel limits the models on which the VLAN driver can be installed. So, my question is: is there a small program that lets me send a full-length VLAN packet without installing a dedicated VLAN driver? Or, better, a program with a stock feature that does this very job for my situation. Windows programs preferred, Linux solutions welcome. The simpler the program, the better. Thank you.
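    On Linux this can also be done without any vendor driver, since iproute2 can create 802.1Q sub-interfaces directly. A minimal sketch (interface names, VLAN ID and addresses are assumptions) for two Linux machines connected through the device under test:

      # create a VLAN 3 sub-interface on each test machine and give it an address
      ip link add link eth0 name eth0.3 type vlan id 3
      ip addr add 192.168.3.1/24 dev eth0.3     # use 192.168.3.2/24 on the other machine
      ip link set eth0.3 up
      # send a full-length frame with Don't Fragment set: 1472 data + 8 ICMP + 20 IP = 1500 bytes
      ping -M do -s 1472 192.168.3.2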

    Read the article

  • Mac Mini Wake on Lan

    - by ILMV
    My Mac Mini has a setting in the Energy Saver category called Wake for Ethernet network access. Now, the way I read this option is that any network access to the Mac whilst it is asleep will wake it up, but it doesn't. I have read that I have to send it a magic packet to wake it up, but what I really want is to be able to simply attempt to access the Mini over the network and have it wake up on demand, without sending a magic packet. Can this be done? If it helps, I am using a Netgear router.

    Read the article

  • Networking for RTS games with lockstep using UDP

    - by user782220
    Apparently, from what I can gather, Starcraft 2 moved to UDP in a patch. Now, obviously with FPS games there is no dispute that UDP is the only way to go. But with RTS games, what benefits does UDP give over TCP, given that the network model is lockstep? I suppose another way to phrase this is: what features of TCP make TCP inferior compared to UDP with resend, etc. implemented, in the context of an RTS lockstep networking model?

    Read the article

  • Sharing Bandwidth and Prioritizing Realtime Traffic via HTB, Which Scenario Works Better?

    - by Mecki
    I would like to add some kind of traffic management to our Internet line. After reading a lot of documentation, I think HFSC is too complicated for me (I don't understand all the curves stuff, I'm afraid I will never get it right), CBQ is not recommended, and basically HTB is the way to go for most people.

    Our internal network has three "segments" and I'd like to share bandwidth more or less equally between those (at least in the beginning). Further, I must prioritize traffic according to at least three kinds of traffic (realtime traffic, standard traffic, and bulk traffic). The bandwidth sharing is not as important as the fact that realtime traffic should always be treated as premium traffic whenever possible, but of course no other traffic class may starve either. The question is, what makes more sense and also guarantees better realtime throughput:

    Creating one class per segment, each having the same rate (priority doesn't matter for classes that are not leaves, according to the HTB developer), where each of these classes has three sub-classes (leaves) for the 3 priority levels (with different priorities and different rates).

    Having one class per priority level on top, each having a different rate (again, priority won't matter), and each having 3 sub-classes, one per segment, whereas all 3 in the realtime class have the highest prio, those in the bulk class the lowest prio, and so on.

    I'll try to make this clearer with the following ASCII art image:

      Case 1:

      root --+--> Segment A
             |        +--> High Prio
             |        +--> Normal Prio
             |        +--> Low Prio
             |
             +--> Segment B
             |        +--> High Prio
             |        +--> Normal Prio
             |        +--> Low Prio
             |
             +--> Segment C
                      +--> High Prio
                      +--> Normal Prio
                      +--> Low Prio

      Case 2:

      root --+--> High Prio
             |        +--> Segment A
             |        +--> Segment B
             |        +--> Segment C
             |
             +--> Normal Prio
             |        +--> Segment A
             |        +--> Segment B
             |        +--> Segment C
             |
             +--> Low Prio
                      +--> Segment A
                      +--> Segment B
                      +--> Segment C

    Case 1 seems like the way most people would do it, but unless I am misreading the HTB implementation details, Case 2 may offer better prioritizing.

    The HTB manual says that if a class has hit its rate, it may borrow from its parent, and when borrowing, classes with higher priority always get bandwidth offered first. However, it also says that classes having bandwidth available on a lower tree level are always preferred to those on a higher tree level, regardless of priority.

    Let's assume the following situation: Segment C is not sending any traffic. Segment A is only sending realtime traffic, as fast as it can (enough to saturate the link alone), and Segment B is only sending bulk traffic, as fast as it can (again, enough to saturate the full link alone). What will happen?

    Case 1: Segment A-High Prio and Segment B-Low Prio both have packets to send; since A-High Prio has the higher priority, it will always be scheduled first, till it hits its rate. Now it tries to borrow from Segment A, but since Segment A is on a higher level and Segment B-Low Prio has not yet hit its rate, that class is now served first, till it also hits its rate and wants to borrow from Segment B. Once both have hit their rates, both are on the same level again and now Segment A-High Prio is going to win again, until it hits the rate of Segment A. Now it tries to borrow from root (which has plenty of bandwidth spare, as Segment C is not using any of its guaranteed traffic), but again, it has to wait for Segment B-Low Prio to also reach the root level. Once that happens, priority is taken into account again and this time Segment A-High Prio will get all the bandwidth left over from Segment C.

    Case 2: High Prio-Segment A and Low Prio-Segment B both have packets to send; again High Prio-Segment A is going to win as it has the higher priority. Once it hits its rate, it tries to borrow from High Prio, which has bandwidth spare, but being on a higher level, it has to wait for Low Prio-Segment B to also hit its rate. Once both have hit their rates and both have to borrow, High Prio-Segment A will win again until it hits the rate of the High Prio class. Once that happens, it tries to borrow from root, which again has plenty of bandwidth left (all bandwidth of Normal Prio is unused at the moment), but it has to wait again until Low Prio-Segment B hits the rate limit of the Low Prio class and also tries to borrow from root. Finally both classes try to borrow from root, priority is taken into account, and High Prio-Segment A gets all the bandwidth root has left over.

    Both cases seem sub-optimal, as either way realtime traffic sometimes has to wait for bulk traffic, even though there is plenty of bandwidth left it could borrow. However, in Case 2 it seems like the realtime traffic has to wait less than in Case 1, since it only has to wait till the bulk traffic rate is hit, which is most likely less than the rate of a whole segment (and in Case 1 that is the rate it has to wait for). Or am I totally wrong here?

    I thought about even simpler setups, using a priority qdisc. But priority queues have the big problem that they cause starvation if they are not somehow limited. Starvation is not acceptable. Of course one can put a TBF (Token Bucket Filter) into each priority class to limit the rate and thus avoid starvation, but when doing so, a single priority class cannot saturate the link on its own any longer; even if all other priority classes are empty, the TBF will prevent that from happening. And this is also sub-optimal, since why wouldn't a class get 100% of the line's bandwidth if no other class needs any of it at the moment?

    Any comments or ideas regarding this setup? It seems so hard to do using standard tc qdiscs. As a programmer, it would be such an easy task if I could simply write my own scheduler (which I'm not allowed to do).
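    As a concrete illustration of Case 1 only (a hedged sketch: the interface name, rates and class IDs are assumptions, and this is not a recommendation of either layout), the hierarchy could be expressed in tc roughly like this, with the same pattern repeated for Segments B and C:

      # root HTB qdisc on the Internet-facing interface, whole line as class 1:1
      tc qdisc add dev eth0 root handle 1: htb default 13
      tc class add dev eth0 parent 1: classid 1:1 htb rate 6mbit ceil 6mbit
      # Segment A (inner class, priority ignored here) and its three priority leaves
      tc class add dev eth0 parent 1:1 classid 1:10 htb rate 2mbit ceil 6mbit
      tc class add dev eth0 parent 1:10 classid 1:11 htb rate 500kbit ceil 6mbit prio 0   # realtime
      tc class add dev eth0 parent 1:10 classid 1:12 htb rate 1mbit ceil 6mbit prio 1     # standard
      tc class add dev eth0 parent 1:10 classid 1:13 htb rate 500kbit ceil 6mbit prio 2   # bulk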

    Read the article

  • Limiting bandwidth on internal interface on Linux gateway

    - by Jack Scott
    I am responsible for a Linux-based (it runs Debian) branch office router that takes a single high-speed Internet connection (eth2) and turns it into about 20 internal networks, each with a separate subnet (192.168.1.0/24 to 192.168.20.0/24) and a separate VLAN (eth0.101 to eth0.120). I am trying to restrict bandwidth on one of the internal subnets that is consistently chewing up more bandwidth than it should. What is the best way to do this? My first try at this was with wondershaper, which I heard about on SuperUser here. Unfortunately, this is useful for exactly the opposite situation to the one I have... it's useful on the client side, not on the Internet side. My second attempt was using the script found at http://www.topwebhosts.org/tools/traffic-control.php, which I modified so the active part is:

      tc qdisc add dev eth0.113 root handle 13: htb default 100
      tc class add dev eth0.113 parent 13: classid 13:1 htb rate 3mbps
      tc class add dev eth0.113 parent 13: classid 13:2 htb rate 3mbps
      tc filter add dev eth0.113 protocol ip parent 13:0 prio 1 u32 match ip dst 192.168.13.0/24 flowid 13:1
      tc filter add dev eth0.113 protocol ip parent 13:0 prio 1 u32 match ip src 192.168.13.0/24 flowid 13:2

    What I want this to do is restrict the bandwidth on VLAN 113 (subnet 192.168.13.0/24) to 3mbit up and 3mbit down. Unfortunately, it seems to have no effect at all! I'm very inexperienced with the tc command, so any help getting this working would be appreciated.
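    One alternative layout worth sketching (interface names and the 3mbit figure come from the question; everything else is an assumption, not a verified fix): traffic downloaded by 192.168.13.0/24 leaves the router through eth0.113, and traffic it uploads leaves through eth2, so each direction can be shaped on the egress side of the corresponding interface.

      # download direction: shape what the router sends out to VLAN 113
      tc qdisc add dev eth0.113 root handle 1: htb default 10
      tc class add dev eth0.113 parent 1: classid 1:10 htb rate 3mbit ceil 3mbit
      # upload direction: shape only VLAN 113's traffic leaving towards the Internet
      tc qdisc add dev eth2 root handle 2: htb
      tc class add dev eth2 parent 2: classid 2:1 htb rate 3mbit ceil 3mbit
      tc filter add dev eth2 protocol ip parent 2: prio 1 u32 match ip src 192.168.13.0/24 flowid 2:1
      # traffic on eth2 that matches no filter stays unclassified and is not shaped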

    Read the article

  • Does anyone really understand how HFSC scheduling in Linux/BSD works?

    - by Mecki
    I read the original SIGCOMM '97 PostScript paper about HFSC. It is very technical, but I understand the basic concept: instead of giving a linear service curve (as with pretty much every other scheduling algorithm), you can specify a convex or concave service curve, and thus it is possible to decouple bandwidth and delay. However, even though this paper mentions the two kinds of scheduling algorithms being used (real-time and link-share), it always only mentions ONE curve per scheduling class (the decoupling is done by specifying this curve; only one curve is needed for that).

    Now HFSC has been implemented for BSD (OpenBSD, FreeBSD, etc.) using the ALTQ scheduling framework, and it has been implemented on Linux using the TC scheduling framework (part of iproute2). Both implementations added two additional service curves that were NOT in the original paper: a real-time service curve and an upper-limit service curve. Again, please note that the original paper mentions two scheduling algorithms (real-time and link-share), but in that paper both work with one single service curve. There never were two independent service curves for either one, as you currently find in BSD and Linux. Even worse, some versions of ALTQ seem to add an additional queue priority to HFSC (there is no such thing as priority in the original paper either). I found several BSD HowTos mentioning this priority setting (even though the man page of the latest ALTQ release knows no such parameter for HFSC, so officially it does not even exist). This all makes HFSC scheduling even more complex than the algorithm described in the original paper, and there are tons of tutorials on the Internet that often contradict each other, one claiming the opposite of the other. This is probably the main reason why nobody really seems to understand how HFSC scheduling really works. Before I can ask my questions, we need a sample setup of some kind. I'll use a very simple one, as seen in the image below. Here are some questions I cannot answer because the tutorials contradict each other:

    What do I need a real-time curve for at all? Assuming A1, A2, B1, B2 are all 128 kbit/s link-share (no real-time curve for either one), then each of those will get 128 kbit/s if the root has 512 kbit/s to distribute (and A and B are both 256 kbit/s, of course), right? Why would I additionally give A1 and B1 a real-time curve with 128 kbit/s? What would this be good for? To give those two a higher priority? According to the original paper I can give them a higher priority by using a curve; that's what HFSC is all about, after all. By giving both classes a curve of [256kbit/s 20ms 128kbit/s], both automatically have twice the priority of A2 and B2 (while still only getting 128 kbit/s on average).

    Does the real-time bandwidth count towards the link-share bandwidth? E.g. if A1 and B1 both only have 64kbit/s real-time and 64kbit/s link-share bandwidth, does that mean once they are served 64kbit/s via real-time, their link-share requirement is satisfied as well (they might get excess bandwidth, but let's ignore that for a second), or does that mean they get another 64 kbit/s via link-share? So does each class have a bandwidth "requirement" of real-time plus link-share? Or does a class only have a higher requirement than the real-time curve if the link-share curve is higher than the real-time curve (current link-share requirement equals specified link-share requirement minus real-time bandwidth already provided to this class)?

    Is the upper-limit curve applied to real-time as well, only to link-share, or maybe to both? Some tutorials say one way, some say the other. Some even claim upper-limit is the maximum for real-time bandwidth + link-share bandwidth. What is the truth?

    Assuming A2 and B2 are both 128 kbit/s, does it make any difference whether A1 and B1 are 128 kbit/s link-share only, or 64 kbit/s real-time and 128 kbit/s link-share, and if so, what difference?

    If I use the separate real-time curve to increase the priorities of classes, why would I need "curves" at all? Why isn't real-time a flat value and link-share also a flat value? Why are both curves? The need for curves is clear in the original paper, because there is only one attribute of that kind per class. But now, having three attributes (real-time, link-share, and upper-limit), why do I still need curves on each one? Why would I want the curves' shapes (not their average bandwidth, but their slopes) to be different for real-time and link-share traffic?

    According to the little documentation available, real-time curve values are totally ignored for inner classes (classes A and B); they are only applied to leaf classes (A1, A2, B1, B2). If that is true, why does the ALTQ HFSC sample configuration (search for 3.3 Sample configuration) set real-time curves on inner classes and claim that those set the guaranteed rate of those inner classes? Isn't that completely pointless? (Note: pshare sets the link-share curve in ALTQ and grate the real-time curve; you can see this in the paragraph above the sample configuration.)

    Some tutorials say the sum of all real-time curves may not be higher than 80% of the line speed, others say it must not be higher than 70% of the line speed. Which one is right, or are they maybe both wrong?

    One tutorial said you should forget all the theory. No matter how things really work (schedulers and bandwidth distribution), imagine the three curves according to the following "simplified mind model": real-time is the guaranteed bandwidth that this class will always get. Link-share is the bandwidth that this class wants in order to become fully satisfied, but satisfaction cannot be guaranteed. In case there is excess bandwidth, the class might even get offered more bandwidth than necessary to become satisfied, but it may never use more than upper-limit says. For all this to work, the sum of all real-time bandwidths may not be above xx% of the line speed (see the question above, the percentage varies). Question: is this more or less accurate, or a total misunderstanding of HFSC?

    And if the assumption above is really accurate, where is prioritization in that model? E.g. every class might have a real-time bandwidth (guaranteed), a link-share bandwidth (not guaranteed) and maybe an upper-limit, but still some classes have higher priority needs than other classes. In that case I must still prioritize somehow, even among the real-time traffic of those classes. Would I prioritize by the slope of the curves? And if so, which curve? The real-time curve? The link-share curve? The upper-limit curve? All of them? Would I give all of them the same slope or each a different one, and how would I find out the right slope?

    I still haven't lost hope that there exists at least a handful of people in this world who really understand HFSC and are able to answer all these questions accurately. And doing so without contradicting each other in the answers would be really nice ;-)
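    For reference, here is a sketch of the sample setup only (assumed interface name, a 512 kbit/s root, and class IDs invented here; this is not an answer to the questions above), expressed in Linux tc syntax:

      # root hfsc qdisc and the tree from the sample setup
      tc qdisc add dev eth0 root handle 1: hfsc default 12
      tc class add dev eth0 parent 1: classid 1:1 hfsc ls m2 512kbit ul m2 512kbit
      # inner classes A and B, 256 kbit/s link-share each
      tc class add dev eth0 parent 1:1 classid 1:10 hfsc ls m2 256kbit
      tc class add dev eth0 parent 1:1 classid 1:20 hfsc ls m2 256kbit
      # leaves: A1 and B1 with the two-piece real-time curve discussed above, A2 and B2 link-share only
      tc class add dev eth0 parent 1:10 classid 1:11 hfsc rt m1 256kbit d 20ms m2 128kbit ls m2 128kbit
      tc class add dev eth0 parent 1:10 classid 1:12 hfsc ls m2 128kbit
      tc class add dev eth0 parent 1:20 classid 1:21 hfsc rt m1 256kbit d 20ms m2 128kbit ls m2 128kbit
      tc class add dev eth0 parent 1:20 classid 1:22 hfsc ls m2 128kbit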

    Read the article

  • Limiting interface bandwidth with tc under Linux

    - by Matt
    I have a Linux router which has a 10GbE interface on the outside and bonded Gigabit Ethernet interfaces on the inside. We currently have a budget for 2Gbit/s. If we exceed that rate by more than a 5% average for a month, then we'll be charged for the whole 10Gbit/s capacity. Quite a step up in dollar terms. So, I want to limit this to 2Gbit/s on the 10GbE interface. The TBF qdisc might be ideal, but this comment is of concern: "On all platforms except for Alpha, it is able to shape up to 1mbit/s of normal traffic with ideal minimal burstiness, sending out data exactly at the configured rates." Should I be using TBF or some other qdisc to apply this rate to the interface, and how would I do it? I don't understand the example given here: Traffic Control HOWTO, in particular "Example 9. Creating a 256kbit/s TBF":

      tc qdisc add dev eth0 handle 1:0 root dsmark indices 1 default_index 0
      tc qdisc add dev eth0 handle 2:0 parent 1:0 tbf burst 20480 limit 20480 mtu 1514 rate 32000bps

    How is the 256kbit/s rate calculated? In this example, 32000bps = 32k bytes per second, since tc uses bps to mean bytes per second (and 32,000 bytes/s x 8 = 256 kbit/s). I guess burst and limit come into play, but how would you go about choosing sensible numbers to reach the desired rate? This is not a mistake; I tested this and it gave a rate close to 256K, but not exactly that.
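    As a hedged sketch of the straightforward approach (the interface name, burst and latency figures are assumptions that would need testing at 10GbE speeds, not verified values):

      # cap egress on the 10GbE interface at 2 Gbit/s with a token bucket filter
      tc qdisc add dev eth4 root tbf rate 2gbit burst 2mb latency 50ms
      # verify the configured rate and watch for drops/overlimits
      tc -s qdisc show dev eth4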

    Read the article

  • Rate limiting an internet connection per user

    - by Alister
    I've got a friend who has a "rent-by-room" property and includes internet access as part of this. However some tenants are somewhat hogging the internet (i.e. constantly downloading). I was wondering if anyone knows of a fairly easy way of rate limiting each connection to make the system more equitable. A preferred solution would be a cheap piece of hardware or some sort of Linux "appliance". I would rather not have to get an iptables headache if this is avoidable.

    Read the article

  • How much traffic a linux-based shaper would be able to chew

    - by facha
    Hi, everyone. I have a Linux-based traffic shaper (iptables + a tc HTB policy). It works in bridge mode and shapes traffic based on IPs and ports (there are about 100 rules in the "mangle" chain of iptables). Right now its throughput is about 100 Mb/s (I don't remember the pps; there are about 800 users in the network). I was just wondering when I will hit the limit. How much traffic could a Linux-based shaper possibly get through it? If you have one under heavy load, could you please write what machine you use and what load it carries. Or if you have any other info about the subject, please write as well. Thanks in advance.

    Read the article

  • How to prioritize openvpn traffic?

    - by aditsu
    I have an openvpn server with one network interface. VPN traffic is extremely slow. I tried to do traffic control with this configuration (currently):

      qdisc del dev eth0 root
      qdisc add dev eth0 root handle 1: htb default 12
      class add dev eth0 parent 1: classid 1:1 htb rate 900mbit
      #vpn
      class add dev eth0 parent 1:1 classid 1:10 htb rate 1500kbit ceil 3000kbit prio 1
      #local net
      class add dev eth0 parent 1:1 classid 1:11 htb rate 10mbit ceil 900mbit prio 2
      #other
      class add dev eth0 parent 1:1 classid 1:12 htb rate 500kbit ceil 1000kbit prio 2
      filter add dev eth0 protocol ip parent 1:0 prio 1 u32 match ip sport 1194 0xffff flowid 1:10
      filter add dev eth0 protocol ip parent 1:0 prio 2 u32 match ip dst 192.168.10.0/24 flowid 1:11
      qdisc add dev eth0 parent 1:10 handle 10: sfq perturb 10
      qdisc add dev eth0 parent 1:11 handle 11: sfq perturb 10
      qdisc add dev eth0 parent 1:12 handle 12: sfq perturb 10

    But it's still extremely slow. I have an IMAPS connection that keeps transferring data continuously (I successfully limited its rate), but with openvpn I can't seem to get more than about 100kbit/s. The internet connection speed is about 3mbit/s (symmetric). What could be the problem? Does the sport filter work for UDP?
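    One alternative worth sketching (an assumption-laden sketch, not a confirmed fix; it simply sidesteps the question of whether the u32 sport match catches UDP) is to let iptables mark the OpenVPN packets and have tc classify on the mark:

      # mark outgoing OpenVPN packets (UDP source port 1194) in the mangle table
      iptables -t mangle -A POSTROUTING -o eth0 -p udp --sport 1194 -j MARK --set-mark 10
      # classify marked packets into the vpn class with an fw filter instead of u32
      tc filter add dev eth0 parent 1:0 protocol ip prio 1 handle 10 fw flowid 1:10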

    Read the article

  • maximum number of connections Squid

    - by Isaac
    I have a Squid proxy server that controls all internet traffic for my network. I need a way to stop users from downloading big files (say 50MB) on my network. I banned some well-known ports (e.g. torrent), but some downloads are still possible over the HTTP port. Obviously I cannot ban port 80! A simple solution is limiting the maximum number of simultaneous connections for each IP (e.g. 3 connections). It's possible in Squid with this config:

      acl ACCOUNTSDEPT 192.168.5.0/24
      acl limitusercon maxconn 3
      http_access deny ACCOUNTSDEPT limitusercon

    But this solution has a really bad impact on web browsing, because any smart browser fetches different parts of a website over several simultaneous connections to speed up browsing. If we cap the number of connections, the browser will fail to get some parts, and the website will be shown only partially: some parts/images/frames will be missing. So, can we limit the maximum number of persistent connections instead? I think this policy would work: limit connections that stay alive for more than 10 seconds, but leave the number of simultaneous connections for every IP unlimited. But how can we implement this policy in Squid? With which config? UPDATE: artifex and Tom Newton suggested using a bandwidth-limiting approach to fight downloaders. But bandwidth-limiting in Squid has a shortcoming: it's static and cannot change dynamically, so a person gets the same limited bandwidth no matter how many people are using the internet (maybe nobody!). Also, this solution cannot stop people from downloading; they can still download, just at a lower speed. But if we find a way to terminate persistent connections (or any connection that has been alive for more than a specific time), downloading big files will become almost impossible (well, there is always some way!).

    Read the article

  • Assignment to LAN queues (qWAN1 & qWAN2) from dual-WAN in pfSense

    - by Kurian
    I need help creating floating rules in pfSense for the following: I have two upper-limited download queues, qWAN1 and qWAN2, on my LAN interface. Each has its own qACK1, qDefault1, etc. How do I assign all unclassified traffic from WAN1 to LAN->qWAN1->qDefault1 and from WAN2 to LAN->qWAN2->qDefault2? qLink is the one and only default queue on the LAN, so that I get wire speed to the pfSense host from the LAN.

    Read the article

  • Simulating a low-bandwidth, high-latency network connection on Linux

    - by Justin L.
    I'd like to simulate a high-latency, low-bandwidth network connection on my Linux machine. Limiting bandwidth has been discussed before, e.g. here, but I can't find any posts which address limiting both bandwidth and latency. I can get either high latency or low bandwidth using tc. But I haven't been able to combine these into a single connection. In particular, the example rate control script here doesn't work for me:

      # tc qdisc add dev lo root handle 1:0 netem delay 100ms
      # tc qdisc add dev lo parent 1:1 handle 10: tbf rate 256kbit buffer 1600 limit 3000
      RTNETLINK answers: Operation not supported

    How can I create a low-bandwidth, high-latency connection, using tc or any other readily-available tool?
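    One combination that is commonly used is to put the netem delay underneath a rate-limited class, for example a single-class HTB. A minimal sketch, with the interface name and figures as assumptions (shown for eth0 rather than lo):

      # 256 kbit/s ceiling with 100 ms of added latency on everything leaving eth0
      tc qdisc add dev eth0 root handle 1: htb default 10
      tc class add dev eth0 parent 1: classid 1:10 htb rate 256kbit
      tc qdisc add dev eth0 parent 1:10 handle 10: netem delay 100ms
      # undo with: tc qdisc del dev eth0 root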

    Read the article

  • How can I limit the upload/download bandwidth on my CentOS server?

    - by Dan Nestor
    How can I limit the upload and download bandwidth on my CentOS server? This is a box with a single interface, eth0. Ideally, I would like a command-line solution (I've been trying to use tc), something that I could easily switch on and off in a script. So far I've been trying to do something like:

      tc filter add dev eth0 protocol ip prio 50 u32 police rate 100kbit burst 10240 drop

    but I'm obviously missing a lot of knowledge and information. Can somebody help with a quick one-liner? Many thanks, Dan
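    A hedged sketch of one common pattern (the rates, burst sizes and the eth0 name are assumptions to adjust): shape what the box sends with tbf, and police what it receives with an ingress filter. Both are plain tc commands, so they are easy to wrap in a script's start/stop cases.

      # upload: cap egress at 1 Mbit/s
      tc qdisc add dev eth0 root tbf rate 1mbit burst 32kb latency 50ms
      # download: police ingress at 1 Mbit/s, dropping whatever exceeds the rate
      tc qdisc add dev eth0 handle ffff: ingress
      tc filter add dev eth0 parent ffff: protocol ip u32 match u32 0 0 police rate 1mbit burst 32k drop flowid :1
      # switch off again
      tc qdisc del dev eth0 root
      tc qdisc del dev eth0 ingress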

    Read the article

  • Generate a limited amount of random network traffic between 2 hosts

    - by Andrew S
    I'm trying to find a utility that will allow me to generate a constant flow of random network traffic at a specified rate between 2 hosts. The utility needs to run on Windows and OSX. I've tried iperf but it seems to be more oriented toward short-term testing/statistics and it really taxes the CPU even at slower rates. I want something that will generate traffic for a few weeks at say 10Mbps while I use other tools to monitor the impact of that level of traffic on the network.

    Read the article

  • Wireless traffic stops when downloading large files at high speed: packets lost (Linksys WRT120N router)

    - by Torious
    The problem. Note: first I'd like to understand WHY this is happening. Of course, a solution would be nice too. :) When downloading a large file over HTTP at high speed, my wireless traffic basically stops: I can't open webpages and the download itself pauses. It pauses pretty much immediately after starting; sometimes at 800 KB, sometimes at a few MB. After some time, the download (and other traffic) resumes, but the problem keeps recurring during the same download. The problem does not occur when using a wired connection through the same router (Linksys WRT120N). Also note that the connection is not dropped when this happens; it's just that the traffic stops and I can't browse to web pages, etc. (SYN packets are sent but nothing is received, etc.). Inspection with Wireshark shows that the following happens:

      - The server sends data packets which are acknowledged by the client.
      - The server sends a packet, but the SEQ indicates some packets were lost (6 packets in one occurrence).
      - The server sends a few more packets and the client acknowledges these using "selective acknowledgement".
      - The server stops sending data for a while (since the lost packets were not acknowledged, or the router stops forwarding them?).
      - Eventually, the server does a "retransmission" and traffic resumes as normal.

    This all seems like normal behavior to me when packet loss occurs. It's the consistent packet loss throughout a large, high-speed download that puzzles me. What might cause this? My own idea is the following: my internet is pretty fast (100 mbps), so when starting a large-file download, the router buffers the incoming data (since wireless introduces some slight delay / lower speed, in part due to other networks), but the buffer overflows and the router drops packets to regulate traffic (and because it has no choice). But how could that happen? Doesn't the TCP window size limit the amount of data that can go unacknowledged? So how can the router's buffer overflow if there can only be something like 64 KB waiting to be acknowledged? Note: I've disabled TCP window scaling and dynamic window sizing through netsh options in an attempt to fix this, but it doesn't seem to matter. Also, Wireshark shows a pattern of the server sending 2 packets (of 1514 bytes) and the client sending an ACK, so does that rule out a possible buffer overflow? And a few more subsequent packets are received... I'm at a loss here. Thanks for any insights. Things that are (probably) NOT the cause / that I have experimented with:

      - The browser
      - Various TCP options in Windows 7 (netsh etc.)
      - Router settings such as MTU, beacon interval, UPnP, ...

    Read the article

  • C# sends SQL data 4 times less from one box than from another

    - by Bobb
    W2003, .NET 3.5, SQL 2008. I have prod and UAT app servers deployed in 2 different data centres. I have a C# app which reads a text file, parses the text and sends the data to SQL in bulk. The SQL Server is in the US and the app servers are in London (but in different places). All POPs have dedicated network connections; there is no public internet involved. When the app runs on the UAT server, I can see in Perfmon that the Send bytes/sec is 4x higher than from the production server. My estimate is that one server outputs at 1 MB/s and the other at a 250 KB/s rate. My immediate suspicion is that there is a router in one of the DCs which shapes traffic or applies a QoS limitation on traffic from London to the US. However, support, the Windows team and the networking team all say that there are no differences in either the networking config of the 2 DCs or the NIC config of the 2 app servers... How do I find out why the networking bottleneck is 4 times tighter in one place than in the other? And what can I do about it?

    Read the article
