Search Results

Search found 11 results on 1 page for 'flamewires'.

Page 1/1

  • Link aggregation with FreeBSD 8 and a Cisco 3550, what am I doing wrong?

    - by Flamewires
    Hey, I am trying to set up link aggregation with LACP (well, anything that provides increased bandwidth and failover with my setup will work). I'm running FreeBSD 8.0 on 3 machines. M1 is running two 10/100 Ethernet cards set up for link aggregation using lagg. For reference:

        ifconfig em0 up
        ifconfig tx0 up
        ifconfig create lagg0
        ifconfig lagg0 laggproto lacp laggport tx0 laggport em0 192.168.1.16 netmask 255.255.255.0

    I plugged them into ports 1 and 2 of a Cisco 3550, then ran:

        configure terminal
        interface range Fa0/1 - 2
        switchport mode access
        switchport access vlan 1
        channel-group 1 mode active

    (Everything is in VLAN 1.) Now I'm able to connect the other computers to other ports on the switch, and failover works great; I can unplug cables in the middle of a transfer and the traffic gets rerouted. However, I'm not noticing any speed increase. My test setup: for load balancing I tried dst and src on the switch, and neither seemed to give me a speed increase. I am SCPing two 500 MB files from the lagg computer to other computers (one file to each), which are also running 10/100 full-duplex cards. I get transfer speeds of about 11.2-11.4 Mbps to a single host, and about half that (5.9-6.2 Mbps) when transferring to both at the same time. From what I understood, with destination load balancing the switch was supposed to balance traffic headed for one computer over one port and traffic headed for the other computer over the other port:

        With destination-MAC address forwarding, when packets are forwarded to an EtherChannel, the packets are distributed across the ports in the channel based on the destination host MAC address of the incoming packet. Therefore, packets to the same destination are forwarded over the same port, and packets to a different destination are sent on a different port in the channel. For the 3550 series switch, when source-MAC address forwarding is used, load distribution based on the source and destination IP address is also enabled for routed IP traffic. All routed IP traffic chooses a port based on the source and destination IP address. Packets between two IP hosts always use the same port in the channel, and traffic between any other pair of hosts can use a different port in the channel. (Link)

    What am I doing wrong, and what would I need to do to see a speed increase beyond what I could do with just a single card?

    Read the article

  • LACP, Cisco 3550, 3560, help with configuration

    - by Flamewires
    Hey all, this is a repost of a question I asked on the Cisco forums but never got a useful reply to. I'm trying to convert the FreeBSD servers at work to dual-gig lagg links from regular gigabit links. Our production servers are on a 3560; I have a small test environment on a 3550. I have achieved failover, but am having trouble achieving the speed increase. All servers are running gig Intel (em) cards. The config for the servers is:

        #!/bin/sh
        # bring up both interfaces
        ifconfig em0 up media 1000baseTX mediaopt full-duplex
        ifconfig em1 up media 1000baseTX mediaopt full-duplex
        # create the lagg interface
        ifconfig lagg0 create
        # set lagg0's protocol to lacp, add both cards to the interface,
        # and assign it em1's ip/netmask
        ifconfig lagg0 laggproto lacp laggport em0 laggport em1 ***.***.***.*** netmask 255.255.255.0

    The switches are configured as follows:

        # clear out old junk
        no int Po1
        default int range GigabitEthernet 0/15 - 16
        # config ports
        interface range GigabitEthernet 0/15 - 16
        description lagg-test
        switchport
        duplex full
        speed 1000
        switchport access vlan 192
        spanning-tree portfast
        channel-group 1 mode active
        channel-protocol lacp
        **** switchport trunk encapsulation dot1q ****
        no shutdown
        exit
        interface Port-channel 1
        description lagginterface
        switchport access vlan 192
        exit
        port-channel load-balance src-mac
        end

    (Obviously change the 1000s to 100s and GigabitEthernet to FastEthernet for the 3550's config, as that switch has 100 Mbit ports.) With this config on the 3550, I get failover and 92 Mbit/s on both links simultaneously, connecting to 2 hosts (tested with iperf). Success. However, this only works with the "switchport trunk encapsulation dot1q" line. First, I do not understand why I need this; I thought it was only for connecting switches. Is there some other setting this turns on that is actually responsible for the speed increase? Second, this config does not work on the 3560. I get failover, but not the speed increase: speeds drop from 1 Gbit/s to 500 Mbit/s when I make 2 simultaneous connections to the server, with or without the encapsulation line. I should mention that both switches are using source-MAC load balancing. In my test I am using iperf: the lagg box is set up as the server (iperf -s), and the client computers are clients (iperf -c server-ip-address), so the source MAC (and IP) are different for both connections. Any ideas/corrections/questions would be helpful, as the gig switches are what I actually need the lagg links on. Ask if you need more information.

    Read the article

  • BitTorrent surveillance/monitoring

    - by Flamewires
    Is there any tool to sniff BitTorrent traffic and reassemble data about the torrent? I'm looking for file names, peers, tracker addresses, local IP, etc. This is purely for academic interest, in which all parties would be willing participants, so please don't upvote responses that talk merely about the legal issues of using this kind of approach on a production network. I am also assuming that the torrent connections are unencrypted. Thanks.

    Read the article

  • SAS disk performance drops a while after reboot.

    - by Flamewires
    So we have some workstations with identical hardware. The Fedora 14 box has a couple of weeks of uptime and still gets good performance:

        hdparm -tT /dev/sda
        /dev/sda:
         Timing cached reads:   21766 MB in 2.00 seconds = 10902.12 MB/sec
         Timing buffered disk reads:  586 MB in 3.00 seconds = 195.20 MB/sec

    The CentOS 5.5 boxes, however, seem to be okay after a reboot:

        /dev/sda:
         Timing cached reads:   34636 MB in 2.00 seconds = 17354.64 MB/sec
         Timing buffered disk reads:  498 MB in 3.01 seconds = 165.62 MB/sec

    but some time later (unsure exactly when; tested at approximately 1 day of uptime) they drop to this:

        /dev/sda:
         Timing cached reads:   2132 MB in 2.00 seconds = 1064.96 MB/sec
         Timing buffered disk reads:  160 MB in 3.01 seconds = 53.16 MB/sec

    This is with very low load. I believe they all have the same BIOS settings. Any ideas what could cause this on CentOS? Ask for more info. It might also be worth noting that passing the --direct flag causes the slow boxes to perform similarly to the non-slow ones for buffered disk reads.

    Read the article

  • Encrypted partitions with redundancy on Ubuntu Server

    - by Flamewires
    Hey, I have to make a file system with an encrypted partition on Ubuntu Server, something like:

        Unencrypted:
        /     - 10 GB
        /home - 10 GB
        /var  -  5 GB
        --------------
        Encrypted:
        /opt  - 50 GB

    This much I can figure out in the setup: just partition as normal and set up /opt as an encrypted volume with dm-crypt. However, I'm not sure how to mirror the entire drive so that if either disk failed I could still boot, and how that mirroring will affect the encrypted partition. Any help would be appreciated.

    Read the article

  • How can I extract just the elements I want from a Perl array?

    - by Flamewires
    Hey I'm wondering how I can get this code to work. Basically I want to keep the lines of $filename as long as they contain the $user in the path:

        open STDERR, ">/dev/null";
        $filename = `find -H /home | grep $file`;
        @filenames = split(/\n/, $filename);
        for $i (@filenames) {
            if ($i =~ m/$user/) {
                # keep results
            } else {
                delete $i;  # does not work.
            }
        }
        $filename = join("\n", @filenames);
        close STDERR;

    I know you can delete like delete $array[index], but I don't have an index with this kind of loop that I know of.

    Read the article

  • Deleting an element from an array in Perl

    - by Flamewires
    Hey, I'm wondering how I can get this code to work; basically I want to keep the lines of $filename as long as they contain the $user in the path. Sorry, Perl noob.

        open STDERR, ">/dev/null";
        $filename = `find -H /home | grep $file`;
        @filenames = split(/\n/, $filename);
        for $i (@filenames) {
            if ($i =~ m/$user/) {
                # keep results
            } else {
                delete $i;  # does not work.
            }
        }
        $filename = join("\n", @filenames);
        close STDERR;

    I know you can delete like delete $array[index], but I don't have an index with this kind of loop that I know of.

    Read the article

  • Initializing a vector of a custom class in C++

    - by Flamewires
    Hey, basically I'm trying to store a "solution" and create a vector of these. The problem I'm having is with initialization. Here's my class for reference:

        class Solution {
        private:
            // boost::thread m_Thread;
            int itt_found;
            int dim;
            pfn_fitness f;
            double value;
            std::vector<double> x;
        public:
            Solution(size_t size, int funcNo)
                : itt_found(0), x(size, 0.0), value(0.0), dim(30), f(Eval_Functions[funcNo])
            {
                for (int i = 1; i < (int) size; i++) {
                    x[i] = ((double)rand()/((double)RAND_MAX))*maxs[funcNo];
                }
            }
            Solution()
                : itt_found(0), x(31, 0.0), value(0.0), dim(30), f(Eval_Functions[1])
            {
                for (int i = 1; i < 31; i++) {
                    x[i] = ((double)rand()/((double)RAND_MAX))*maxs[1];
                }
            }
            Solution operator= (Solution S) {
                x = S.GetX();
                itt_found = S.GetIttFound();
                dim = S.GetDim();
                f = S.GetFunc();
                value = S.GetValue();
                return *this;
            }
            void start() { value = f(dim, x); }
            /* plus additional getter/setter methods */
        };

    Solution S(30, 1) or Solution(2, 5) works and initializes everything, but I need X of these Solution objects. std::vector<Solution> Parents(X) will create X Solutions with the default constructor, and I want to construct them using the (int, int) constructor. Is there any easy (one-liner?) way to do this? Or would I have to do something like:

        size_t numparents = 10;
        vector<Solution> Parents;
        Parents.reserve(numparents);
        for (int i = 0; i < (int)numparents; i++) {
            Solution S(31, 0);
            Parents.push_back(S);
        }
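
    For reference, a minimal compilable sketch of the usual one-liner, the vector "fill" constructor. The stripped-down Solution below is only a stand-in for the real class above, just enough to show the construction:

        #include <cstdlib>
        #include <vector>

        // Stand-in for the Solution class in the question, just enough to compile.
        struct Solution {
            std::vector<double> x;
            Solution(std::size_t size, int /*funcNo*/) : x(size) {
                for (std::size_t i = 0; i < size; ++i)
                    x[i] = (double)std::rand() / RAND_MAX;
            }
        };

        int main() {
            // One-liner: the fill constructor builds 10 elements, each one
            // copy-constructed from the single Solution(31, 0) prototype, so all
            // 10 copies share that prototype's random x values.
            std::vector<Solution> Parents(10, Solution(31, 0));

            // If each element should run the (int, int) constructor (and rand())
            // independently, the reserve + push_back loop from the question is
            // still the straightforward way.
            return Parents.size() == 10 ? 0 : 1;
        }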

    Read the article

  • Boost threading/mutexes, why does this work?

    - by Flamewires
    Code:

        #include <iostream>
        #include "stdafx.h"
        #include <boost/thread.hpp>
        #include <boost/thread/mutex.hpp>

        using namespace std;

        boost::mutex mut;
        double results[10];

        void doubler(int x) {
            //boost::mutex::scoped_lock lck(mut);
            results[x] = x*2;
        }

        int _tmain(int argc, _TCHAR* argv[]) {
            boost::thread_group thds;
            for (int x = 10; x > 0; x--) {
                boost::thread *Thread = new boost::thread(&doubler, x);
                thds.add_thread(Thread);
            }
            thds.join_all();
            for (int x = 0; x < 10; x++) {
                cout << results[x] << endl;
            }
            return 0;
        }

    Output:

        0
        2
        4
        6
        8
        10
        12
        14
        16
        18
        Press any key to continue . . .

    So... my question is: why does this work (as far as I can tell; I ran it about 20 times), producing the above output, even with the locking commented out? I thought the general idea was that each thread would calculate 2*x, copy the result to CPU register(s), store the calculation in the correct part of the array, and copy the results back to main (shared) memory. I would think that under all but perfect conditions this would result in some part of the results array having 0 values. Is it only copying the required double of the array to a CPU register? Or is it just too short a calculation to get preempted before it writes the result back to RAM? Thanks.
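
    For contrast, a minimal sketch (using the same Boost API as above) of a case where the scoped_lock is genuinely needed: here every thread does a read-modify-write on the same variable, rather than writing its own array slot, so unsynchronized increments can be lost.

        #include <iostream>
        #include <boost/thread.hpp>
        #include <boost/thread/mutex.hpp>

        boost::mutex mut;
        long total = 0;   // a single location shared by every thread

        void add_many(int n) {
            for (int i = 0; i < n; ++i) {
                boost::mutex::scoped_lock lck(mut);  // comment this out and the final
                ++total;                             // total will usually come up short
            }
        }

        int main() {
            boost::thread_group thds;
            for (int t = 0; t < 4; ++t)
                thds.add_thread(new boost::thread(&add_many, 100000));
            thds.join_all();
            std::cout << total << std::endl;         // prints 400000 with the lock held
            return 0;
        }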

    Read the article

  • Help C++ifying this C-style code

    - by Flamewires
    Hey, I'm used to developing in C and I would like to use C++ in a project. Can anyone give me an example of how I would translate this C-style code into C++ code? I know it should compile with a C++ compiler, but I'm talking about using C++ techniques (i.e. classes, RAII).

        #include <stdlib.h>

        typedef struct Solution Solution;
        struct Solution {
            double x[30];
            int itt_found;
            double value;
        };

        Solution *NewSolution() {
            Solution *S = (Solution *)malloc(sizeof(Solution));
            for (int i = 0; i < 30; i++) {
                S->x[i] = 0;
            }
            S->itt_found = -1;
            return S;
        }

        void FreeSolution(Solution *S) {
            if (S != NULL)
                free(S);
        }

        int main() {
            Solution *S = NewSolution();
            S->value = eval(S->x);  // eval is another function that returns a double
            S->itt_found = 0;
            FreeSolution(S);
            return EXIT_SUCCESS;
        }

    Ideally I would like to be able to do something like this in main, but I'm not sure exactly how to create the class; I've read a lot of stuff, but incorporating it all together correctly seems a little hard at the moment.

        Solution S(30);           // constructor that takes the size of the double array as an argument
        S.eval();                 // a method that would run eval on S.x[] and store the result in S.value
        cout << S.value << endl;

    Ask if you need more info, thanks.
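
    For reference, one way the class could look, as a minimal sketch. The eval() free function here is a hypothetical stand-in (the question doesn't show the real one), and std::vector replaces the raw array so the storage is cleaned up automatically (RAII) instead of needing NewSolution/FreeSolution:

        #include <iostream>
        #include <vector>

        // Hypothetical stand-in for the eval() mentioned in the question.
        double eval(const std::vector<double>& x) {
            double sum = 0.0;
            for (std::size_t i = 0; i < x.size(); ++i)
                sum += x[i];
            return sum;
        }

        class Solution {
        public:
            // Constructor takes the size of the array; members start zeroed.
            explicit Solution(std::size_t size) : x(size, 0.0), itt_found(-1), value(0.0) {}

            // Runs eval on x and stores the result in value.
            void eval() {
                value = ::eval(x);
                itt_found = 0;
            }

            std::vector<double> x;   // public to match the S.value / S.x usage above
            int itt_found;
            double value;
        };

        int main() {
            Solution S(30);
            S.eval();
            std::cout << S.value << std::endl;
            return 0;
        }   // S's vector frees itself here; no FreeSolution needed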

    Read the article

  • Initializing a C++ vector to random values... fast

    - by Flamewires
    Hey, I'd like to make this as fast as possible because it gets called a LOT in a program I'm writing, so is there any faster way to initialize a C++ vector to random values than:

        double range; // set to the range of a particular function I want to evaluate
        std::vector<double> x(30, 0.0);
        for (int i = 0; i < x.size(); i++) {
            x.at(i) = (rand()/(double)RAND_MAX)*range;
        }

    EDIT: Fixed x's initializer.
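
    For reference, a minimal C++11 sketch of the usual suggestions (the function name and seeding are illustrative, not from the question): skip the bounds-checked at(), reuse one engine instead of calling rand() per element, and fill the vector with reserve plus std::generate_n so the elements are not zero-filled first.

        #include <algorithm>
        #include <iterator>
        #include <random>
        #include <vector>

        std::vector<double> random_vector(std::size_t n, double range) {
            static std::mt19937 engine{std::random_device{}()};   // seeded once, reused across calls
            std::uniform_real_distribution<double> dist(0.0, range);

            std::vector<double> x;
            x.reserve(n);                                         // one allocation, no zero-fill pass
            std::generate_n(std::back_inserter(x), n,
                            [&] { return dist(engine); });
            return x;
        }

        int main() {
            std::vector<double> x = random_vector(30, 5.0);
            return x.size() == 30 ? 0 : 1;
        }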

    Read the article
