Search Results

Search found 8532 results on 342 pages for 'packet examples'.


  • Is dependency injection only for service type objects and singletons? (and NOT for gui?)

    - by sensui
    I'm currently experimenting with Google's Guice inversion-of-control container. I previously had singletons for just about every service (database, Active Directory) my application used. Now I have refactored the code so that all dependencies are given as constructor parameters. So far, so good. The hardest part is the graphical user interface. I face this problem: I have a table (JTable) of products wrapped in a ProductFrame, and I pass the dependencies as parameters (EditProductDialog):

        @Inject
        public ProductFrame(EditProductDialog editProductDialog) {
            // ...
        }

        // ...

        @Inject
        public EditProductDialog(DBProductController productController, Product product) {
            // ...
        }

    The problem is that Guice can't know what Product I have selected in the table, so it can't know what to inject into the EditProductDialog. Dependency injection is pretty viral (if I modify one class to use dependency injection I also need to modify all the other classes it interacts with), so my question is: should I instantiate EditProductDialog directly? But then I would have to pass the DBProductController to the EditProductDialog manually, and I would also need to pass it to the ProductFrame, and all of this boils down to not using dependency injection at all. Or is my design flawed, and is that why I can't really adapt the project to dependency injection? Give me some examples of how you have used dependency injection with a graphical user interface. All the examples found on the Internet are really simple ones where you use some services (mostly databases) with dependency injection.
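
    One common way out of this kind of "runtime value meets injected service" problem is to inject a factory that holds the service dependencies and takes the selected Product as a plain argument (Guice calls this assisted injection). Below is a minimal, language-agnostic sketch of the pattern in Python; the class names mirror the question and are hypothetical, not working Guice code:

        # Sketch of "inject a factory, pass runtime data as an argument".
        # Names (DBProductController, EditProductDialog, ...) are hypothetical.

        class DBProductController:
            """Stands in for the injected service dependency."""
            def save(self, product):
                print("saving %r" % (product,))

        class EditProductDialog:
            def __init__(self, controller, product):
                self.controller = controller      # injected service
                self.product = product            # runtime value chosen in the table

            def show(self):
                print("editing %r" % (self.product,))

        class EditProductDialogFactory:
            """The only thing the container needs to inject into the frame."""
            def __init__(self, controller):
                self.controller = controller

            def create(self, product):
                # Runtime data flows through the factory, not through the container.
                return EditProductDialog(self.controller, product)

        class ProductFrame:
            def __init__(self, dialog_factory):
                self.dialog_factory = dialog_factory

            def on_row_selected(self, product):
                self.dialog_factory.create(product).show()

        # Wiring that a DI container would normally do:
        frame = ProductFrame(EditProductDialogFactory(DBProductController()))
        frame.on_row_selected("Widget #42")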

    Read the article

  • Can an application affect TCP retransmits

    - by sipwiz
    I'm troubleshooting some communications issues, and in the network traces I occasionally come across TCP sequence errors. One example I've got is:

        1. Server to Client: Seq=3174, Len=50
        2. Client to Server: Ack=3224
        3. Server to Client: Seq=3224, Len=50
        4. Client to Server: Ack=3224
        5. Server to Client: Seq=3274, Len=10
        6. Client to Server: Ack=3224, SLE=3274, SRE=3284

    Packets 4 and 5 are recorded in the trace (which is from a router between the client and server) at almost exactly the same time, so they most likely crossed in transit. The TCP session has got out of sync, with the client missing the last two transmissions from the server. Those two packets should have been retransmitted, but they weren't; the next entry in the trace is an RST packet from the client 24 seconds after packet 6. My question is: what could be responsible for the failure to retransmit the server data from packets 3 and 5? I would assume that the retransmit happens at the operating-system level, but is there any way the application could influence it and stop it being sent? A thread blocking or being put to sleep, or something like that?
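
    Retransmission itself is owned by the kernel, but one concrete, reproducible way an application can influence sender behaviour is simply to stop reading: the receiver's kernel buffer fills, the advertised TCP window shrinks to zero, and the sender stalls apart from window probes. The sketch below (Python, loopback, arbitrary port, not a diagnosis of the trace above) demonstrates the effect; watch it with Wireshark/tcpdump:

        # Run "python demo.py server" in one terminal and "python demo.py" in another.
        import socket, sys, time

        HOST, PORT = "127.0.0.1", 9099

        def server():
            srv = socket.create_server((HOST, PORT))
            conn, _ = srv.accept()
            chunk = b"x" * 65536
            while True:
                conn.sendall(chunk)     # eventually blocks once the peer's window closes

        def client():
            sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
            sock.setsockopt(socket.SOL_SOCKET, socket.SO_RCVBUF, 8192)  # small buffer so the window closes quickly
            sock.connect((HOST, PORT))
            sock.recv(4096)             # read once...
            time.sleep(60)              # ...then act like a stuck/blocked thread: no more reads

        if __name__ == "__main__":
            server() if sys.argv[1:] == ["server"] else client()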

    Read the article

  • Android: Help with tabs view

    - by James
    So I'm trying to build a tabs view for an Android app, and for some reason I get a force close every time I try to run it on the emulator. When I run the examples, everything shows fine, so I went as far as to just about copy most of the layout from the examples (a mix of Tabs2.java and Tabs3.java), but for some reason it still won't run. Any ideas? Here is my code (List1.class is a copy from the examples, for testing purposes). It all compiles fine, but gets a force close the second it starts:

        package com.jvavrik.gcm;

        import android.app.TabActivity;
        import android.content.Intent;
        import android.os.Bundle;
        import android.view.View;
        import android.widget.TabHost;
        import android.widget.TextView;

        public class GCM extends TabActivity {
            /** Called when the activity is first created. */
            @Override
            public void onCreate(Bundle savedInstanceState) {
                super.onCreate(savedInstanceState);
                final TabHost tabHost = getTabHost();

                tabHost.addTab(tabHost.newTabSpec("tab1")
                        .setIndicator("g", getResources().getDrawable(R.drawable.star_big_on))
                        .setContent(new Intent(this, List1.class)));
                tabHost.addTab(tabHost.newTabSpec("tab2")
                        .setIndicator("C")
                        .setContent(new Intent(this, List1.class)));
                tabHost.addTab(tabHost.newTabSpec("tab3")
                        .setIndicator("S")
                        .setContent(new Intent(this, List1.class)));
                tabHost.addTab(tabHost.newTabSpec("tab4")
                        .setIndicator("A")
                        .setContent(new Intent(this, List1.class)));
            }
        }

    Read the article

  • node.js UDP data loss at high packet rates

    - by koleto
    I am observing significant data loss on a UDP connection with node.js 0.6.18 and 0.8.0. It appears at high packet rates, about 1200 packets per second with frames close to the 1500-byte limit. Each data package has an incrementing number, so it is easy to track the number of lost packages.

        var dgram = require("dgram");

        var server = dgram.createSocket("udp4");
        server.on("message", function (message, rinfo) {
            //~ processData(message);
            //~ writeData(message, null, 5000);
        }).bind(10001);

    In the receiving callback I tested two cases. First I saved 5000 packages to a file; the result was no dropped packages. After I included a data-processing routine, I got about a 50% drop rate. What I expected was that the process-data routine should be completely asynchronous and should not introduce dead time into the system, since it is a simple parser that processes the binary data in the package and emits events to a further processing routine. It seems that the parsing routine introduces dead time in which the event handler is unable to handle each packet. At low package rates (< 1200 packages/sec) there is no data loss observed! Is this a bug, or am I doing something wrong?
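
    Not node.js, but the same structural idea sketched in Python as an analogue: keep the datagram callback trivial and hand the CPU-bound parsing to a worker pool, so the receive path keeps draining the kernel buffer. The parse_packet function is a hypothetical stand-in for the question's processData():

        import asyncio
        from concurrent.futures import ProcessPoolExecutor

        def parse_packet(data: bytes) -> int:
            # pretend this is the expensive binary parser
            return int.from_bytes(data[:4], "big")

        class Receiver(asyncio.DatagramProtocol):
            def __init__(self, loop, pool):
                self.loop, self.pool = loop, pool

            def datagram_received(self, data, addr):
                # Do almost nothing here; schedule the heavy work off the event loop.
                self.loop.run_in_executor(self.pool, parse_packet, data)

        async def main():
            loop = asyncio.get_running_loop()
            with ProcessPoolExecutor() as pool:
                await loop.create_datagram_endpoint(
                    lambda: Receiver(loop, pool), local_addr=("0.0.0.0", 10001))
                await asyncio.sleep(3600)   # keep receiving for an hour

        if __name__ == "__main__":
            asyncio.run(main())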

    Read the article

  • Decompress a GZipped response from the server (Socket)

    - by Lith
    Umm, ok, after sending some data to the server, noting this particular part: "Accept-Encoding: gzip,deflate\r\n", I am getting the following response:

        HTTP/1.1 200 OK
        Server: nginx
        Date: Fri, 09 Apr 2010 23:25:27 GMT
        Content-Type: text/html; charset=UTF-8
        Transfer-Encoding: chunked
        Connection: keep-alive
        X-Powered-By: PHP/5.2.8
        Expires: Mon, 26 Jul 1997 05:00:00 GMT
        Last-Modified: Fri, 09 Apr 2010 23:25:27 GMT
        Cache-Control: no-store, no-cache, must-revalidate
        Cache-Control: post-check=0, pre-check=0
        Pragma: no-cache
        Content-Encoding: gzip
        Vary: Accept-Encoding

        7aa
        (unreadable gzip-compressed binary data)

    But how do I decompress it? Note that I am using the Socket class to do all the work. I know how to decompress it, but the problem here lies in the fact that I cannot separate the packet from the gzipped data. Pseudocode (or whatever) of how I do it:

        Socket sends packet;
        Socket reads response from server, stores into a ByteArray;
        Create MemoryStream, use ByteArray;
        Create GZipStream, use MemoryStream;

    Now the problem occurs; I am getting the following error:

        System.IO.InvalidDataException
        The magic number in GZip header is not correct. Make sure you are passing in a GZip stream.

    I hope the explanation is clear enough.
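
    The usual cause of that "magic number" error is feeding the status line, headers, or the hex chunk-size lines (the "7aa" above) into the gzip decompressor; the gzip stream only begins once the HTTP framing is stripped. The asker's stack is .NET, but the order of operations is the same everywhere; a sketch in Python, assuming the whole response has already been read into one bytes buffer:

        import gzip

        def dechunk(body: bytes) -> bytes:
            out, pos = b"", 0
            while True:
                eol = body.index(b"\r\n", pos)
                size = int(body[pos:eol].split(b";")[0], 16)   # chunk size is hex, may carry extensions
                if size == 0:
                    return out
                start = eol + 2
                out += body[start:start + size]
                pos = start + size + 2                         # skip the CRLF after the chunk data

        def decode_response(raw: bytes) -> bytes:
            headers, _, body = raw.partition(b"\r\n\r\n")      # 1) headers end at the first blank line
            if b"chunked" in headers.lower():
                body = dechunk(body)                           # 2) remove the chunked framing
            if b"content-encoding: gzip" in headers.lower():
                body = gzip.decompress(body)                   # 3) only now is it a valid gzip stream
            return body

        # usage: html = decode_response(raw_bytes_from_socket)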

    Read the article

  • How do I get require_login()-like functionality using the new PHP Client Library for Facebook?

    - by cc
    Howdy. I've been tasked with making a Facebook game, but I'm new to Facebook development, so I'm just getting started. Apologies in advance if this is a no-brainer to people. I'm having trouble following all the examples I see on sites, and I keep running into missing pages in the Facebook documentation when I try to read up. I think it's because there's a new version of the PHP Client Library for Facebook, and everything I'm finding refers to the old client. For instance, I see this code in a lot of examples:

        require 'facebook.php';
        $facebook = new Facebook(array(
            'appId'  => '(id)',
            'secret' => '(secret)'
        ));
        $facebook_account = $facebook->require_login();

    ...but there's no require_login() in the client library provided in the facebook.php file. From what I can tell, it looks like Facebook has very recently rolled out some new system for development, but I don't see any sample code anywhere that deals with it. The new library comes with an "example.php" file, but it appears to be only for adding "Log in with Facebook" functionality to other sites (which I'm assuming is what they mean by "Facebook Connect" sites), not for just running apps in a Canvas page on Facebook itself. Specifically, what I need to do is let users visit an application page within Facebook, have it bring up the dialog box allowing them to authorize the app, have it show up in their "games" page, and then have it pass me the relevant info about the user so I can start creating the game. But I can't seem to find any tutorials or examples that show how to do this using the new library. Seems like this should be pretty straightforward, but I'm running into roadblocks. Or am I missing something about the PHP client library? Should require_login() be working for me, with something broken in my implementation, such as having the wrong client library? I downloaded from GitHub yesterday, so I'm pretty sure I have the most recent version of the code, but perhaps I'm downloading the wrong "facebook.php" file...?

    Read the article

  • gevent urllib is slow

    - by djay
    I've created a set of demos of a TCP server, however my gevent examples are noticeably slower. I'm sure it must be how I compiled gevent, but I can't work out the problem. I'm on OS X Leopard using fink-compiled Python 2.6 and 2.7. I've tried both the stable gevent and gevent 1.0b1 and it acts the same. The echo takes 5 seconds to respond, where the other examples take < 1 sec. If I remove the urllib call then the problem goes away. I put all the code in https://github.com/djay/geventechodemo

    To run the examples I'm using zc.buildout, so to build:

        $ python2.7 bootstrap.py
        $ bin/buildout

    To run the gevent example:

        $ bin/py geventecho3.py &
        [1] 80790
        waiting for connection...
        $ telnet localhost 8080
        Trying 127.0.0.1...
        ...connected from: ('127.0.0.1', 56588)
        Connected to localhost.
        Escape character is '^]'.
        hello
        echo: avast

    This takes 3-4 seconds to respond on my system. However the threaded example ($ bin/py threadecho2.py) and the twisted example ($ bin/py twistedecho2.py) take less than 1 second. Any idea what I'm doing wrong?
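
    For what it's worth, the classic cause of this symptom (hedged, since the repo's code isn't shown here) is calling blocking stdlib networking such as urllib from inside a gevent handler without monkey-patching: the blocking DNS lookup and HTTP fetch stall the whole hub, so every client waits. A minimal Python 2 sketch, with the URL and port as placeholders:

        # Patch the stdlib *before* importing the blocking modules so socket and
        # DNS calls become cooperative under gevent.
        from gevent import monkey
        monkey.patch_all()

        from gevent.server import StreamServer
        import urllib2

        def handle(sock, addr):
            line = sock.makefile().readline()
            body = urllib2.urlopen("http://example.com/").read()   # cooperative once patched
            sock.sendall("echo (%d bytes fetched): %s" % (len(body), line))

        if __name__ == "__main__":
            StreamServer(("127.0.0.1", 8080), handle).serve_forever()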

    Read the article

  • S.redirectTo always leads to a blank screen

    - by Jaime Ocampo
    I am now playing a little bit with Lift (2.8.0), and all the features in LiftRules work as intended. But I haven't been able to use S.redirectTo at all. I always end up with a blank screen, no matter what. No error messages at all! As an example, I have the following form:

        ...
        <lift:logIn.logInForm form="post">
          <p><login:name /></p>
          <p><login:password /></p>
          <p><login:submit /></p>
        </lift:logIn.logInForm>
        ...

    And the code is:

        object LogIn extends helper.LogHelper {
          ...
          def logInForm(in: NodeSeq): NodeSeq = {
            var name = ""
            var password = ""

            def login() = {
              logger.info("name: " + name)
              logger.info("password: " + password)
              if (name == "test1") S.redirectTo("/example")
              if (name == "test2") S.redirectTo("/example.html")
              if (name == "test3") S.redirectTo("example.html")
              S.redirectTo("/")
            }

            bind("login", in,
              "name" -> SHtml.text(name, name = _),
              "password" -> SHtml.password(password, password = _),
              "submit" -> SHtml.submit("Login", login))
          }
        }

    The method login is invoked; I can check that in the log output. But as I said, no matter which name I enter, I always end up with a blank screen, although 'examples.html' is available when accessed directly in the browser. How should I invoke S.redirectTo in order to navigate to 'examples.html'? Also, why don't I receive an error message (I am logging at debug level)? I think all the configuration in Boot is correct, since all the LiftRules examples (statelessRewrite, dispatch, viewDispatch, snippets) work fine.

    Read the article

  • error catching exception while System.out.print

    - by user1702633
    I have two classes: one that implements a double lookup(int i), and one where I use that lookup(int i) to solve a question, or in this case to print the lookup values. This case is for an array. So I read the exception documentation/Google/textbook and came up with the following code:

        public double lookup(int i) throws Exception {
            if (i > numItems)
                throw new Exception("out of bounds");
            return items[i];
        }

    and take it over to my class and try to print my set, where set is the name of the object type I define in the class above:

        public void print() {
            for (int i = 0; i < set.size() - 1; i++) {
                System.out.print(set.lookup(i) + ",");
            }
            System.out.print(set.lookup(set.size()));
        }

    I'm using two print()s to avoid the last "," in the output, but am getting an "unhandled exception Exception" (my exception's name was Exception). I think I have to catch my exception in my print() but cannot find the correct formatting online. Do I have to write catch exception Exception? Because that gives me a syntax error saying invalid type on catch. Sources like http://docs.oracle.com/javase/tutorial/essential/exceptions/ are of little help to me; I can't seem to grasp what the text is telling me. I'm also having trouble finding sources with multiple examples where I can actually understand the coding in the examples. So could anybody give me a source/example for the above catch phrase, and perhaps a decent source of examples for new Java programmers? My book is horrendous and I cannot seem to find an understandable example for the above catch phrase online.

    Read the article

  • no default constructor exists for class

    - by MixedCoder
    #include "Includes.h" enum BlowfishAlgorithm { ECB, CBC, CFB64, OFB64, }; class Blowfish { public: struct bf_key_st { unsigned long P[18]; unsigned long S[1024]; }; Blowfish(BlowfishAlgorithm algorithm); void Dispose(); void SetKey(unsigned char data[]); unsigned char Encrypt(unsigned char buffer[]); unsigned char Decrypt(unsigned char buffer[]); char EncryptIV(); char DecryptIV(); private: BlowfishAlgorithm _algorithm; unsigned char _encryptIv[200]; unsigned char _decryptIv[200]; int _encryptNum; int _decryptNum; }; class GameCryptography { public: Blowfish _blowfish; GameCryptography(unsigned char key[]); void Decrypt(unsigned char packet[]); void Encrypt(unsigned char packet[]); Blowfish Blowfish; void SetKey(unsigned char k[]); void SetIvs(unsigned char i1[],unsigned char i2[]); }; GameCryptography::GameCryptography(unsigned char key[]) { } Error:IntelliSense: no default constructor exists for class "Blowfish" ???!

    Read the article

  • when to use Hibernate vs. Simple ResultSets for small application

    - by luke
    I just started working on upgrading a small component in a distributed Java application. The main application is a rather complicated applet/servlet combo running on JBoss, and it extensively uses Hibernate for its data access. The component I am working on, however, is a very straightforward data-importing service. Basically the workflow is:

        1. Listen for a network event
        2. Parse the data packet, extract a set of identifiers
        3. Map the identifier set to a primary key in our database
        4. Parse the rest of the packet and insert items in a related table using the foreign key found in step 3
        5. Repeat

    The previous version of this component used a Hibernate-based DAL that is no longer usable for a variety of reasons (in particular, it is EOL), so I am in charge of replacing the data access layer for this component. So on the one hand I think I should use Hibernate because that's what the rest of the application does, but on the other I think I should just use regular java.sql.* classes because my requirements are really straightforward and aren't expected to change any time soon. So my question is (and I understand it is subjective): at what point do you think that the added complexity of using an ORM tool (in terms of configuration, dependencies...) is worth it?

    UPDATE: due to the way the data access layer for the main application was written (weird dependencies), I cannot easily use it; I would have to implement it myself.
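
    To make the trade-off concrete, here is roughly how little code steps 3-4 of the workflow need on the plain-SQL side, sketched in Python with sqlite3 as an analogue (a plain-JDBC version would be structurally identical); the table and column names are made up:

        import sqlite3

        def import_packet(conn, identifiers, items):
            """Map an identifier set to its primary key, then insert the related rows."""
            row = conn.execute(
                "SELECT id FROM devices WHERE site = ? AND serial = ?",
                identifiers).fetchone()
            if row is None:
                raise LookupError("unknown identifier set: %r" % (identifiers,))
            device_id = row[0]
            conn.executemany(
                "INSERT INTO readings (device_id, name, value) VALUES (?, ?, ?)",
                [(device_id, name, value) for name, value in items])
            conn.commit()

        # usage sketch:
        # conn = sqlite3.connect("import.db")
        # import_packet(conn, ("site-a", "SN1234"), [("temp", 21.5), ("rpm", 900)])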

    Read the article

  • How to read/write from erlang to a named pipe ?

    - by cstar
    I need my Erlang application to read and write through a named pipe. Opening the named pipe as a file will fail with eisdir. I wrote the following module, but it is fragile and feels wrong in many ways. Also it fails on reading after a while. Is there a way to make it more ... elegant?

        -module(port_forwarder).
        -export([start/2, forwarder/2]).
        -include("logger.hrl").

        start(From, To) ->
            spawn(fun() -> forwarder(From, To) end).

        forwarder(FromFifo, ToFifo) ->
            To = open_port({spawn, "/bin/cat > " ++ ToFifo},
                           [binary, out, eof, {packet, 4}]),
            From = open_port({spawn, "/bin/cat " ++ FromFifo},
                             [binary, in, eof, {packet, 4}]),
            forwarder(From, To, nil).

        forwarder(From, To, Pid) ->
            receive
                {Manager, {command, Bin}} ->
                    ?ERROR("Sending : ~p", [Bin]),
                    To ! {self(), {command, Bin}},
                    forwarder(From, To, Manager);
                {From, {data, Data}} ->
                    Pid ! {self(), {data, Data}},
                    forwarder(From, To, Pid);
                E ->
                    ?ERROR("Quitting, first message not understood : ~p", [E])
            end.

    As you may have noticed, it mimics the port format in what it accepts or returns. I want it to replace C code that reads the other ends of the pipes and is launched from the debugger.

    Read the article

  • Getting the eventargs of registered events

    - by Bjorn Vdkerckhove
    I'm new to maps and OpenLayers, but I'm playing around with OpenLayers because I'll need map functionality in my next project. The map is an image (because it's a drawn map of a medieval town). I found out how I can register events, and it's working. But the problem is that the "eventargs" is not working as in the examples I found. In one of the examples they get the x and y values after the user pans, like this:

        map.events.register('moveend', map, function (e) {
            alert(e.xy);
        });

    If I try this in Visual Studio, e doesn't have an 'xy' property. What am I missing? This is the code I have right now:

        <script type="text/javascript">
            var map, layer;
            function init() {
                var windowHeight = $(window).height();
                var windowWidth = $(window).width();
                var mapdiv = $('#map');
                mapdiv.css({ width: windowWidth + 'px', height: windowHeight + 'px' });

                map = new OpenLayers.Map('map', { maxResolution: 1000 });
                layer = new OpenLayers.Layer.Image(
                    'Globe ESA',
                    '[url]',
                    new OpenLayers.Bounds(-180.0, -12333.5, 21755.5, 90.0),
                    new OpenLayers.Size(windowWidth, windowHeight),
                    { numZoomLevels: 100 });
                map.addLayer(layer);

                nav = new OpenLayers.Control.Navigation();
                map.addControl(nav);

                // events test
                map.events.register('moveend', map, function (e) {
                    alert(e.xy);
                });

                map.zoomToMaxExtent();
            }
        </script>

    In the OpenLayers examples they don't use the eventargs, but there must be a way to get the zoom level, or the x and y after panning, right? Thank you!

    Read the article

  • Need help getting buttons to work...

    - by Mike Droid
    I am trying to get my first button to update a displayed number in my view when clicked. This view will have several buttons and "outputs" displayed. After reading examples and questions here, I finally put something together that runs, but my first button is still not working:

        public void onCreate(Bundle savedInstanceState) {
            super.onCreate(savedInstanceState);
            setContentView(R.layout.ship_layout);
            mSwitcher = (TextSwitcher) findViewById(R.id.eng_val);
        }

        private TextSwitcher mSwitcher;

        // Will be connected with the buttons via XML
        void onClick(View v) {
            switch (v.getId()) {
                case R.id.engplus:
                    engcounter++;
                    updateCounter();
                    break;
                case R.id.engneg:
                    engcounter--;
                    updateCounter();
                    break;
            }
        }

        private void updateCounter() {
            mSwitcher.setText(String.valueOf(engcounter));
        }

    The XML for this button is:

        <TextSwitcher android:id="@+id/eng_val"
            android:visibility="visible"
            android:paddingTop="9px"
            android:paddingLeft="50px"
            android:layout_width="wrap_content"
            android:layout_height="wrap_content"
            android:layout_below="@+id/build"
            android:layout_toRightOf="@+id/engeq"
            android:textColor="#DD00ff00"
            android:textSize="24sp"/>

    This is within a RelativeLayout that otherwise appears OK. When I had set the view to have a TextView with the number set as a string, the number displayed, but I could not figure out how to update the text with a numerical field. That may be my real problem. I have gone through many examples generally referenced from the dev site (UI, common tasks, various samples), and I am still not seeing the connection here... Again, this is simply an attempt at getting variables to respond to buttons and update in the view. So, a few questions for anyone who can help:

        1. Is there an easier way of doing this (i.e. sending a numerical value to the view)?
        2. Why isn't my TextSwitcher displaying the number?
        3. Should I be using a TextSwitcher here?
        4. Any examples of this you can point me to?

    Read the article

  • Using LINQ, need help splitting a byte array on data received from Silverlight sockets

    - by gcadmes
    The message packets received contain multiple messages delimited by a header = 0xFD and a footer = 0xFE.

        // sample message packet with three
        // different size messages
        List<byte> receiveBuffer = new List<byte>();
        receiveBuffer.AddRange(new byte[] { 0xFD, 1, 2, 0xFE,
                                            0xFD, 1, 2, 3, 4, 5, 6, 7, 8, 0xFE,
                                            0xFD, 33, 65, 25, 44, 0xFE });

        // note: this sample code is without synchronization,
        // statements, error handling...etc.
        while (receiveBuffer.Count > 0)
        {
            var bytesInRange = receiveBuffer.TakeWhile(n => n != 0xFE);

            foreach (var n in bytesInRange)
                Console.WriteLine(n);

            // process message..
            // 1) remove bytes read from receive buffer
            // 2) construct message object...
            // 3) etc...
            receiveBuffer.RemoveRange(0, bytesInRange.Count());
        }

    As you can see, (including header/footer) the first message in this message packet contains 4 bytes, the second message contains 10 bytes, and the third message contains 6 bytes. In the while loop, I was expecting the TakeWhile to add the bytes that did not equal the footer part of the message. Note: since I am removing the bytes after reading them, the header can always be expected to be at position 0. I searched examples for splitting byte arrays, but none demonstrated splitting on arrays of unknown and fluctuating sizes. Any help will be greatly appreciated. Thanks much!
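
    Not C#/LINQ, but the framing logic itself is language-independent; here is a sketch in Python of splitting a receive buffer into 0xFD...0xFE frames (the delimiter values come from the question), keeping any trailing partial frame for the next read:

        HEADER, FOOTER = 0xFD, 0xFE

        def split_frames(buffer: bytearray):
            """Yield complete frames (header and footer included) and trim the buffer."""
            while True:
                try:
                    start = buffer.index(HEADER)
                    end = buffer.index(FOOTER, start + 1)
                except ValueError:
                    break                      # no complete frame left in the buffer
                yield bytes(buffer[start:end + 1])
                del buffer[:end + 1]           # consume up to and including the footer

        # usage with the sample data from the question:
        buf = bytearray([0xFD, 1, 2, 0xFE,
                         0xFD, 1, 2, 3, 4, 5, 6, 7, 8, 0xFE,
                         0xFD, 33, 65, 25, 44, 0xFE])
        for frame in split_frames(buf):
            print(list(frame))                 # [253, 1, 2, 254], then the 10-byte frame, ...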

    Read the article

  • How to allow local LAN access while connected to Cisco VPN?

    - by Ian Boyd
    How can I maintain local LAN access while connected to Cisco VPN? When connecting using Cisco VPN, the server has the ability to instruct the client to prevent local LAN access. Assuming this server-side option cannot be turned off, how can I allow local LAN access while connected with a Cisco VPN client? I used to think it was simply a matter of routes being added that capture LAN traffic with a higher metric, for example:

        Network Destination    Netmask        Gateway         Interface        Metric
        10.0.0.0               255.255.0.0    10.0.0.3        10.0.0.3         20     <-- Local LAN
        10.0.0.0               255.255.0.0    192.168.199.1   192.168.199.12   1      <-- VPN Link

    And trying to delete the 10.0.x.x -> 192.168.199.12 route doesn't have any effect:

        >route delete 10.0.0.0
        >route delete 10.0.0.0 mask 255.255.0.0
        >route delete 10.0.0.0 mask 255.255.0.0 192.168.199.1
        >route delete 10.0.0.0 mask 255.255.0.0 192.168.199.1 if 192.168.199.12
        >route delete 10.0.0.0 mask 255.255.0.0 192.168.199.1 if 0x3

    And while it still might simply be a routing issue, attempts to add or delete routes fail. At what level is the Cisco VPN client driver doing what in the networking stack that overrides a local administrator's ability to administer their machine? The Cisco VPN client cannot be employing magic. It's still software running on my computer. What mechanism is it using to interfere with my machine's network? What happens when an IP/ICMP packet arrives on the network? Where in the networking stack is the packet getting eaten?

    See also:

        - No internet connection with Cisco VPN
        - Cisco VPN Client interrupts connectivity to my LDAP server
        - Cisco VPN stops Windows 7 Browsing
        - How can I prohibit the creation of a route in Windows XP upon connection to Cisco VPN?
        - Rerouting local LAN and Internet traffic when in VPN
        - VPN Client "Allow local LAN Access"
        - Allow Local LAN Access for VPN Clients on the VPN 3000 Concentrator Configuration Example
        - LAN access gone when I connect to VPN
        - Windows XP Documentation: Route

    Edit: Things I've not yet tried:

        >route delete 10.0.*

    Update: Since Cisco has abandoned their old client in favor of AnyConnect (an HTTP SSL based VPN), this question, unsolved, can be left as a relic of history. Going forward, we can try to solve the same problem with their new client.

    Read the article

  • LDAP Structure: dc=example,dc=com vs o=Example

    - by PAS
    I am relatively new to LDAP, and have seen two types of examples of how to set up your structure. One method has the base being dc=example,dc=com, while other examples have the base being o=Example. Continuing along, you can have a group looking like:

        dn: cn=team,ou=Group,dc=example,dc=com
        cn: team
        objectClass: posixGroup
        memberUid: user1
        memberUid: user2

    ... or, using the "o" style:

        dn: cn=team,o=Example
        objectClass: posixGroup
        memberUid: user1
        memberUid: user2

    My questions are:

        1. Are there any best practices that dictate using one method over the other?
        2. Is it just a matter of preference which style you use?
        3. Are there any advantages to using one over the other?
        4. Is one method the old style, and one the new-and-improved version?

    So far, I have gone with the dc=example,dc=com style. Any advice the community could give on the matter would be greatly appreciated.
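
    For what it's worth, from the client's point of view the choice mostly shows up as the base DN string you search under; swap the BASE constant below and nothing else in the code changes. A small sketch using the Python ldap3 library (server address, credentials, and base DN are all hypothetical):

        from ldap3 import Server, Connection, SUBTREE

        BASE = "dc=example,dc=com"          # or: BASE = "o=Example"

        conn = Connection(Server("ldap://ldap.example.com"),
                          user="cn=admin," + BASE, password="secret", auto_bind=True)
        conn.search(search_base=BASE,
                    search_filter="(cn=team)",
                    search_scope=SUBTREE,
                    attributes=["memberUid"])
        print(conn.entries)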

    Read the article

  • Can ISC dhcpd operate as a proxy dhcp server for PXE boot?

    - by Matt
    I have an existing LAN with a DHCP server already dishing out IP addresses. For various reasons I cannot replace that server, so it will still need to dish out IP addresses. I've been experimenting with dnsmasq in proxy mode to provide PXE boot filenames. Now I have dnsmasq chainloading iPXE OK, but I found that the problem with dnsmasq is that in proxy mode it won't send DHCP options down, so I can't seem to send option 17 to boot an iSCSI SAN. I read somewhere that it's not enabled in the source code. Oh well, so I thought perhaps I should try ISC dhcpd (the default version 4 with Ubuntu). But I can't find any configuration examples that work as a proxy. Does ISC dhcpd even work in proxy mode? Examples on the web imply patching the source. What other options do I have?

    Read the article

  • How can I filter packets from a port monitor?

    - by engineerchuan
    I have some data going from point A to point B. I have a SPAN monitor set up to a monitoring device C. To recreate some real-world scenarios, I want to filter out all traffic of a certain type (H.323 VoIP signaling packets) so that C sees a subset of the information that is flowing from A to B. What would the easiest way to do this be? I assume I would need a computer with two NICs and some software to examine each packet and chuck out the H.323 VoIP packets? Thanks!
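
    One quick-and-dirty way to do exactly what the last sentence describes, sketched with scapy under stated assumptions: the interface names are placeholders, H.225 call signaling is assumed to be on its default TCP port 1720, and this ignores the RTP media and dynamic H.245 ports. It is a lab sketch, not a production bridge:

        from scapy.all import sniff, sendp, TCP

        IN_IF, OUT_IF = "eth0", "eth1"     # NIC receiving the copied traffic, NIC feeding device C
        H225_PORT = 1720                   # default H.323/H.225 call-signaling port

        def forward(pkt):
            if pkt.haslayer(TCP) and H225_PORT in (pkt[TCP].sport, pkt[TCP].dport):
                return                          # drop the signaling traffic
            sendp(pkt, iface=OUT_IF, verbose=0) # pass everything else through to C

        sniff(iface=IN_IF, prn=forward, store=0)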

    Read the article

  • Watchguard Firebox: public IP addresses behind the firewall with as many usable IP addresses as possible

    - by martinezpt
    Our ISP assigned us 16 public IP addresses that we want to assign to hosts behind a Watchguard Firebox X750e. The IP addresses are x.x.x.176/28, of which x.x.x.177 is the gateway. The hosts will be running software that needs to be directly assigned the public IP address, so 1:1 NAT is not an option. I found this document that gives examples of how to assign public IP addresses to hosts behind the firewall, using an optional interface: http://www.watchguard.com/help/configuration-examples/public_IP_behind_XTM_configuration_example_(en-US).pdf However, I can't implement scenario 1, as it won't allow me to use the same subnet on both interfaces. As for scenario 2, splitting the address range into two subnets will decrease the usable hosts on the optional interface to 5 (8 - network - broadcast - optional interface IP). I'm convinced that there must be a better way to address this problem and maximize the number of usable IP addresses, but I'm not very familiar with this specific firewall. Are there any suggestions on how to keep the hosts behind the firewall with public IP addresses while maximizing the usable IP addresses? Thanks

    Read the article

  • checksum in raw sockets and pcap

    - by hero
    I am using the pcap library to sniff some packets, change their TCP data, and then inject my packet back onto the network. My questions are: if I change the TCP data, should I recalculate the length field in the TCP header? Should I also change the checksum? I read on a page about creating raw sockets that if you set the TCP checksum to 0, the kernel will automatically calculate it and fill it in; is this true for Windows machines also?
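
    Whatever the answer on kernel checksum offload for raw sockets, when a frame is re-injected via pcap nothing recalculates anything for you: changing the payload size means updating the IP total-length field, the IP header checksum, and the TCP checksum. The TCP checksum is the standard Internet one's-complement sum (RFC 1071) over a pseudo-header plus the TCP header and payload; a sketch of the arithmetic in Python:

        import struct, socket

        def ones_complement_sum(data: bytes) -> int:
            if len(data) % 2:
                data += b"\x00"                      # pad to a 16-bit boundary
            total = sum(struct.unpack("!%dH" % (len(data) // 2), data))
            while total > 0xFFFF:
                total = (total & 0xFFFF) + (total >> 16)   # fold the carries back in
            return (~total) & 0xFFFF

        def tcp_checksum(src_ip: str, dst_ip: str, tcp_segment: bytes) -> int:
            # pseudo-header: source IP, destination IP, zero, protocol, TCP length
            pseudo = struct.pack("!4s4sBBH",
                                 socket.inet_aton(src_ip),
                                 socket.inet_aton(dst_ip),
                                 0, socket.IPPROTO_TCP, len(tcp_segment))
            return ones_complement_sum(pseudo + tcp_segment)

        # usage: zero the checksum field (bytes 16-17 of the TCP header) first, then
        # segment = header_with_zero_csum + new_payload
        # csum = tcp_checksum("10.0.0.1", "10.0.0.2", segment)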

    Read the article

  • Determining the health of a Cisco switch port?

    - by ewwhite
    I've been chasing a packet-loss and network stability issue for a handful of end-users on an internal network for the past few days... These issues surfaced recently; however, the location was struck by lightning six weeks ago. I was seeing 5-10% packet loss between a stack of four Cisco 2960s and several PCs and phones on the other side of a 77-meter run. The PCs were run inline with the phones over a trunked link. We were seeing dropped calls and interruptions in client-server applications and Microsoft Exchange connectivity. I tried the usual troubleshooting steps remotely, having a local technician do the following during breaks in user and production activity:

        - change cables between the wall jack and device
        - change patch cables between the patch panel and switch port(s)
        - try different switch ports within the 2960 stack
        - change end-user devices with known-good equipment (new phones, different PCs)
        - clear switch port interface counters and monitor incrementing errors closely (Pastebin output of sh int)
        - pore over the device logs and Observium RRD graphs; no link up/down issues from the switch side
        - change power strips on the end-user side
        - test cable runs from the Cisco 2960 using test cable-diagnostics tdr int Gi4/0/9 (clean)
        - test cable runs with a Tripp-Lite cable tester (clean)
        - run diagnostics on the switch stack members (clean)

    In the end, it took three changes of switch ports to find a stable solution. The only logical conclusion is that a few Cisco 2960 switch ports are bad or flaky... Not dead, but not consistent in behavior either. I'm not used to seeing individual ports die in this manner. What else can I test or check to determine if these devices are bad? Is it common for single ports to have problems, rather than a contiguous bank of ports?

    BTW - show cable-diagnostics tdr int Gi4/0/14 is very cool...

        Interface Speed Local pair Pair length        Remote pair Pair status
        --------- ----- ---------- ------------------ ----------- --------------------
        Gi4/0/14  1000M Pair A     79 +/- 0 meters    Pair B      Normal
                        Pair B     75 +/- 0 meters    Pair A      Normal
                        Pair C     77 +/- 0 meters    Pair D      Normal
                        Pair D     79 +/- 0 meters    Pair C      Normal

    Read the article

  • IP failover with 2 nodes on different subnet: cannot ping virtual IP from second node?

    - by quanta
    I'm going to set up redundant failover:

        - Redmine: another instance was installed on the second server without problem
        - MySQL (running on the same machine as Redmine) was configured as master-master replication

    Because they are in different subnets (192.168.3.x and 192.168.6.x), it seems that VIPArip is the only choice.

    /etc/ha.d/ha.cf on node1:

        logfacility none
        debug 1
        debugfile /var/log/ha-debug
        logfile /var/log/ha-log
        autojoin none
        warntime 3
        deadtime 6
        initdead 60
        udpport 694
        ucast eth1 node2.ip
        keepalive 1
        node node1
        node node2
        crm respawn

    /etc/ha.d/ha.cf on node2:

        logfacility none
        debug 1
        debugfile /var/log/ha-debug
        logfile /var/log/ha-log
        autojoin none
        warntime 3
        deadtime 6
        initdead 60
        udpport 694
        ucast eth0 node1.ip
        keepalive 1
        node node1
        node node2
        crm respawn

    crm configure show:

        node $id="6c27077e-d718-4c82-b307-7dccaa027a72" node1
        node $id="740d0726-e91d-40ed-9dc0-2368214a1f56" node2
        primitive VIPArip ocf:heartbeat:VIPArip \
            params ip="192.168.6.8" nic="lo:0" \
            op start interval="0" timeout="20s" \
            op monitor interval="5s" timeout="20s" depth="0" \
            op stop interval="0" timeout="20s" \
            meta is-managed="true"
        property $id="cib-bootstrap-options" \
            stonith-enabled="false" \
            dc-version="1.0.12-unknown" \
            cluster-infrastructure="Heartbeat" \
            last-lrm-refresh="1338870303"

    crm_mon -1:

        ============
        Last updated: Tue Jun 5 18:36:42 2012
        Stack: Heartbeat
        Current DC: node2 (740d0726-e91d-40ed-9dc0-2368214a1f56) - partition with quorum
        Version: 1.0.12-unknown
        2 Nodes configured, unknown expected votes
        1 Resources configured.
        ============

        Online: [ node1 node2 ]

        VIPArip (ocf::heartbeat:VIPArip): Started node1

    ip addr show lo:

        1: lo: <LOOPBACK,UP,LOWER_UP> mtu 16436 qdisc noqueue
            link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
            inet 127.0.0.1/8 scope host lo
            inet 192.168.6.8/32 scope global lo
            inet6 ::1/128 scope host
               valid_lft forever preferred_lft forever

    I can ping 192.168.6.8 from node1 (192.168.3.x):

        # ping -c 4 192.168.6.8
        PING 192.168.6.8 (192.168.6.8) 56(84) bytes of data.
        64 bytes from 192.168.6.8: icmp_seq=1 ttl=64 time=0.062 ms
        64 bytes from 192.168.6.8: icmp_seq=2 ttl=64 time=0.046 ms
        64 bytes from 192.168.6.8: icmp_seq=3 ttl=64 time=0.059 ms
        64 bytes from 192.168.6.8: icmp_seq=4 ttl=64 time=0.071 ms

        --- 192.168.6.8 ping statistics ---
        4 packets transmitted, 4 received, 0% packet loss, time 3000ms
        rtt min/avg/max/mdev = 0.046/0.059/0.071/0.011 ms

    but I cannot ping the virtual IP from node2 (192.168.6.x) or from outside. Did I miss something?

    PS: you probably want to set IP2UTIL=/sbin/ip in the /usr/lib/ocf/resource.d/heartbeat/VIPArip resource agent script if you get something like this:

        Jun 5 11:08:10 node1 lrmd: [19832]: info: RA output: (VIPArip:stop:stderr) 2012/06/05_11:08:10 ERROR: Invalid OCF_RESKEY_ip [192.168.6.8]

    http://www.clusterlabs.org/wiki/Debugging_Resource_Failures

    Reply to @DukeLion: Which router receives RIP updates? When I start the VIPArip resource, ripd is run with the configuration file below (on node1).

    /var/run/resource-agents/VIPArip-ripd.conf:

        hostname ripd
        password zebra
        debug rip events
        debug rip packet
        debug rip zebra
        log file /var/log/quagga/quagga.log
        router rip
        !nic_tag
        no passive-interface lo:0
        network lo:0
        distribute-list private out lo:0
        distribute-list private in lo:0
        !metric_tag
        redistribute connected metric 3
        !ip_tag
        access-list private permit 192.168.6.8/32
        access-list private deny any

    Read the article
