Search Results

Search found 1582 results on 64 pages for 'packet sniffers'.

Page 55/64 | < Previous Page | 51 52 53 54 55 56 57 58 59 60 61 62  | Next Page >

  • Setting up ehcache replication - what multicast settings do I need?

    - by Darren Greaves
    I am trying to set up ehcache replication as documented here: http://ehcache.sourceforge.net/EhcacheUserGuide.html#id.s22.2
    This is on a Windows machine but will ultimately run on Solaris in production. The instructions say to set up a provider as follows:

        <cacheManagerPeerProviderFactory
            class="net.sf.ehcache.distribution.RMICacheManagerPeerProviderFactory"
            properties="peerDiscovery=automatic, multicastGroupAddress=230.0.0.1,
                        multicastGroupPort=4446, timeToLive=32"/>

    And a listener like this:

        <cacheManagerPeerListenerFactory
            class="net.sf.ehcache.distribution.RMICacheManagerPeerListenerFactory"
            properties="hostName=localhost, port=40001, socketTimeoutMillis=2000"/>

    My questions are: Are the multicast IP address and port arbitrary (I know the address has to live within a specific range, but do they have to be specific numbers)? Do they need to be set up in some way by our system administrator (I am on an office network)? I want to test it locally, so I am running two separate Tomcat instances with the above config. What do I need to change in each one? I know both listeners can't listen on the same port - but what about the provider? Also, are the listener ports arbitrary too? I've tried setting it up as above, but in my testing the caches don't appear to be replicated - a value added in one Tomcat's cache is not present in the other cache. Is there anything I can do to debug this situation (other than packet sniffing)? Thanks in advance for any help - I've been tearing my hair out over this one!

    Read the article

  • How do I find out what process ID and thread ID / name has a file open

    - by peter
    Hi all, I am using C# in an application and am having some problems with a file becoming locked. The piece of code does this:

        while (true)
        {
            // Read a packet from a socket (with data in it to add to the file)
            // Open the file
            // Write data to it
            // Close the file
        }

    But in the process the file becomes locked. I don't really understand how; we are definitely catching and reporting exceptions, so I don't see how the file doesn't get closed every time. My best guess is that something else is opening the file, but I want to prove it. Can someone please provide a piece of code to check whether the file is open and, if so, report what process ID and thread ID have the file open? For example, if I had this code:

        StreamWriter streamWriter1 = new StreamWriter(@"c:\logs\test.txt");
        streamWriter1.WriteLine("Test");
        // code to check for locks??
        StreamWriter streamWriter2 = new StreamWriter(@"c:\logs\test.txt");
        streamWriter1.Close();
        streamWriter2.Close();

    That will throw an exception because the file is locked when we try to open it the second time. So, where the comment is, what could I put in there to report that the current app (process ID) and the current thread (thread ID) have the file locked? Thanks.
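    A minimal diagnostic sketch (an illustration under assumptions, not the asker's code): plain .NET can only confirm *that* something still holds the file open, not *which* process or thread owns the handle; finding the owning PID generally needs something outside the framework. Wrapping each writer in a using block also guarantees the handle is released even when an exception is thrown.

        // requires: using System.IO;
        // Sketch: probe whether the file can be opened exclusively.
        // This only tells you that it is locked, not who locked it.
        static bool IsFileLocked(string path)
        {
            try
            {
                using (var fs = new FileStream(path, FileMode.Open,
                                               FileAccess.ReadWrite, FileShare.None))
                {
                    return false;   // exclusive access succeeded, so no one else has it open
                }
            }
            catch (IOException)
            {
                return true;        // sharing violation (or similar) - some handle is still open
            }
        }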

    Read the article

  • Malware - Technical analysis

    - by nullptr
    Note: Please do not mod down or close. I'm not a stupid PC user asking to fix my PC problem. I am intrigued and am having a deep technical look at what's going on. I have come across a Windows XP machine that is sending unwanted P2P traffic. I have done a 'netstat -b' command and explorer.exe is sending out the traffic. When I kill this process the traffic stops and obviously Windows Explorer dies. Here is the header of the stream from the Wireshark dump (x.x.x.x is the machine's IP):

        GNUTELLA CONNECT/0.6
        Listen-IP: x.x.x.x:8059
        Remote-IP: 76.164.224.103
        User-Agent: LimeWire/5.3.6
        X-Requeries: false
        X-Ultrapeer: True
        X-Degree: 32
        X-Query-Routing: 0.1
        X-Ultrapeer-Query-Routing: 0.1
        X-Max-TTL: 3
        X-Dynamic-Querying: 0.1
        X-Locale-Pref: en
        GGEP: 0.5
        Bye-Packet: 0.1

        GNUTELLA/0.6 200 OK
        Pong-Caching: 0.1
        X-Ultrapeer-Needed: false
        Accept-Encoding: deflate
        X-Requeries: false
        X-Locale-Pref: en
        X-Guess: 0.1
        X-Max-TTL: 3
        Vendor-Message: 0.2
        X-Ultrapeer-Query-Routing: 0.1
        X-Query-Routing: 0.1
        Listen-IP: 76.164.224.103:15649
        X-Ext-Probes: 0.1
        Remote-IP: x.x.x.x
        GGEP: 0.5
        X-Dynamic-Querying: 0.1
        X-Degree: 32
        User-Agent: LimeWire/4.18.7
        X-Ultrapeer: True
        X-Try-Ultrapeers: 121.54.32.36:3279,173.19.233.80:3714,65.182.97.15:5807,115.147.231.81:9751,72.134.30.181:15810,71.59.97.180:24295,74.76.84.250:25497,96.234.62.221:32344,69.44.246.38:42254,98.199.75.23:51230

        GNUTELLA/0.6 200 OK

    So it seems that the malware has hooked into explorer.exe and hidden itself quite well, as a Norton scan doesn't pick anything up. I have looked in Windows Firewall and it shouldn't be letting this traffic through. I have had a look at the messages explorer.exe is sending in Spy++ and the only related ones I can see are socket connections etc... My question is: what can I do to look into this deeper? What does malware achieve by sending P2P traffic? I know the easiest way to fix the problem is to reinstall Windows, but I want to get to the bottom of it first, just out of interest.

    Read the article

  • Sending JPEGs over a TCP socket... sometimes incomplete

    - by Guy
    (VB.NET) Hi, I've been working on a project for months now (VB 2008 Express). There is one final problem which I can't solve. I need to send images to a client from a 'server' (listener). The code below works most of the time, but sometimes the image is incomplete. I believe this might be something to do with the TCP packet sizes varying, maybe limited by how busy it is out there on the net. I have seen examples of code that split the image into chunks and send them out, but I can't get them to work, maybe because I'm using a different VB version. The pictures to be sent are small, 20 KB max. Any working code examples would be wonderful. I have been experimenting and failing with this final hurdle for weeks. Thanks in anticipation.

    Client:

        Sub GetPic()
            '------- Connect to Server
            ClientSocket = New Socket(AddressFamily.InterNetwork, SocketType.Stream, _
                ProtocolType.Tcp)
            ClientSocket.Connect(Epoint)
            '------- Send Picture Request
            Dim Bytes() As Byte = System.Text.ASCIIEncoding.ASCII.GetBytes("Send Picture")
            ClientSocket.Send(Bytes, Bytes.Length, SocketFlags.None)
            '------- Receive Response
            Dim RecvBuffer(20000) As Byte
            Dim Numbytes As Integer
            Numbytes = ClientSocket.Receive(RecvBuffer)
            Dim Darray(Numbytes) As Byte
            Buffer.BlockCopy(RecvBuffer, 0, Darray, 0, Numbytes)
            '------- Close Connection
            ClientSocket.Shutdown(SocketShutdown.Both)
            ClientSocket.Close()
            '-------
            Dim MStrm = New MemoryStream(Darray)
            Picture = Image.FromStream(MStrm)
        End Sub

    Listener:

        'Threaded from a listener
        Sub ClientThread(ByVal Client As TcpClient)
            Dim MStrm As New MemoryStream
            Dim Rbuffer(1024) As Byte
            Dim Tbyte As Byte()
            Dim NStrm As NetworkStream = Client.GetStream()
            Dim I As Integer = NStrm.Read(Rbuffer, 0, Rbuffer.Length)
            Dim Incoming As String = System.Text.Encoding.ASCII.GetString(Rbuffer, 0, I)
            If Incoming = "Send Picture" Then
                Picture.Save(MStrm, Picture.RawFormat)
                Tbyte = MStrm.ToArray
                NStrm.Write(Tbyte, 0, Tbyte.Length)
            End If
            Client.Close()
        End Sub
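    A single Receive call is never guaranteed to return the whole picture; TCP delivers a byte stream, not whole messages. A common fix is to frame each image with a length prefix and loop on Receive until that many bytes have arrived. A hedged sketch of the receiving half in C# (the same Socket calls exist in VB.NET; the names here are illustrative, not the asker's code):

        // requires: using System.IO; using System.Net.Sockets;
        // Sketch: read an exact number of bytes, looping until the buffer is full.
        static byte[] ReceiveExactly(Socket socket, int count)
        {
            byte[] buffer = new byte[count];
            int received = 0;
            while (received < count)
            {
                int n = socket.Receive(buffer, received, count - received, SocketFlags.None);
                if (n == 0)
                    throw new IOException("Connection closed before the full image arrived");
                received += n;
            }
            return buffer;
        }

        // Usage: the sender writes the image length as 4 bytes, then the image bytes.
        // int length = BitConverter.ToInt32(ReceiveExactly(clientSocket, 4), 0);
        // byte[] imageBytes = ReceiveExactly(clientSocket, length);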

    Read the article

  • libav/ffmpeg: avcodec_decode_video2() returns -1 when separating demultiplexing and decoding

    - by unbekannt
    I'm using libav (from a C++ program on Linux and Windows) to decode video streams from a file, which works fine (decoding various formats like H264 and MPEG2) using avformat_open_input(), av_read_frame() and avcodec_decode_video2(). Now I have to separate demultiplexing and decoding. One class will call avformat_open_input() and av_read_frame() and then pass the AVPackets into a queue that is read by another class. There I use avcodec_alloc_context3() to get the AVCodecContext needed for avcodec_decode_video2(). I've tested that with an MPEG2 video stream and it works. Problems arise if I try to decode an H264 stream: avcodec_decode_video2() always returns -1 and outputs "no frame". I understand that additional data (SPS/PPS) is needed to decode this stream, so I've tried to replicate the original AVCodecContext from the demultiplexer in the decoder, but it won't work: copying the content of the extradata field (and setting all other values that differ from the default ones in the decoder) still returns -1, and using the same context (i.e. passing along the pointer) results in a crash. I also tried to set CODEC_FLAG2_CHUNKS; avcodec_decode_video2() then always returns packet.size - 3 (??) and frameFinished is never set to 1. In my opinion I have a general problem here that will arise whenever settings from the original CodecContext are needed to decode the AVPackets. I'd be grateful for any hints on how to solve this problem!

    Read the article

  • Passive and active sockets

    - by davsan
    Quoting from this socket tutorial: Sockets come in two primary flavors. An active socket is connected to a remote active socket via an open data connection... A passive socket is not connected, but rather awaits an incoming connection, which will spawn a new active socket once a connection is established ... Each port can have a single passive socket bound to it, awaiting incoming connections, and multiple active sockets, each corresponding to an open connection on the port. It's as if the factory worker is waiting for new messages to arrive (he represents the passive socket), and when one message arrives from a new sender, he initiates a correspondence (a connection) with them by delegating someone else (an active socket) to actually read the packet and respond back to the sender if necessary. This permits the factory worker to be free to receive new packets. ...

    Then the tutorial explains that, after a connection is established, the active socket continues receiving data until there are no remaining bytes, and then closes the connection. What I didn't understand is this: suppose there's an incoming connection to the port, and the sender wants to send some small amount of data every 20 minutes. If the active socket closes the connection when there are no remaining bytes, does the sender have to reconnect to the port every time it wants to send data? How do we persist a once-established connection for a longer time? Can you tell me what I'm missing here? My second question is: who determines the limit of concurrently working active sockets?
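    For illustration, a minimal C# sketch of the passive/active split (assumed names and port, not from the tutorial): the TcpListener is the passive socket bound to the port, each Accept produces a new connected (active) socket, and nothing forces that socket to close between messages - the connection stays up until one side closes it, so a sender can pause between sends without reconnecting.

        using System;
        using System.Net;
        using System.Net.Sockets;

        class PassiveActiveSketch
        {
            static void Main()
            {
                var listener = new TcpListener(IPAddress.Any, 5000); // passive socket on port 5000
                listener.Start();
                while (true)
                {
                    TcpClient client = listener.AcceptTcpClient();   // spawns an active (connected) socket
                    NetworkStream stream = client.GetStream();
                    var buffer = new byte[4096];
                    int n;
                    // Keep reading on the same connection; Read blocks between messages,
                    // even if the peer waits 20 minutes, as long as neither side closes it.
                    while ((n = stream.Read(buffer, 0, buffer.Length)) > 0)
                    {
                        Console.WriteLine("got {0} bytes", n);
                    }
                    client.Close();  // only reached once the peer closes its end
                }
            }
        }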

    Read the article

  • Why won't MSMQ send a space character?

    - by cyclotis04
    I'm exploring MSMQ services, and I wrote a simple console client-server application that sends each of the client's keystrokes to the server. Whenever I hit a control character (DEL, ESC, INS, etc.) the server understandably throws an error. However, whenever I type a space character, the server receives the packet but doesn't throw an error and doesn't display the space.

    Server:

        namespace QIM {
            class Program {
                const string QUEUE = @".\Private$\qim";
                static MessageQueue _mq;
                static readonly object _mqLock = new object();
                static XmlSerializer xs;

                static void Main(string[] args) {
                    lock (_mqLock) {
                        if (!MessageQueue.Exists(QUEUE))
                            _mq = MessageQueue.Create(QUEUE);
                        else
                            _mq = new MessageQueue(QUEUE);
                    }
                    xs = new XmlSerializer(typeof(string));
                    _mq.BeginReceive(new TimeSpan(0, 1, 0), new object(), OnReceive);
                    while (Console.ReadKey().Key != ConsoleKey.Escape) { }
                }

                static void OnReceive(IAsyncResult result) {
                    Message msg;
                    lock (_mqLock) {
                        try {
                            msg = _mq.EndReceive(result);
                            Console.Write(".");
                            Console.Write(xs.Deserialize(msg.BodyStream));
                        } catch (Exception ex) {
                            Console.Write(ex);
                        }
                    }
                    _mq.BeginReceive(new TimeSpan(0, 1, 0), new object(), OnReceive);
                }
            }
        }

    Client:

        namespace QIM_Client {
            class Program {
                const string QUEUE = @".\Private$\qim";
                static MessageQueue _mq;

                static void Main(string[] args) {
                    if (!MessageQueue.Exists(QUEUE))
                        _mq = MessageQueue.Create(QUEUE);
                    else
                        _mq = new MessageQueue(QUEUE);

                    ConsoleKeyInfo key = new ConsoleKeyInfo();
                    while (key.Key != ConsoleKey.Escape) {
                        key = Console.ReadKey();
                        _mq.Send(key.KeyChar.ToString());
                    }
                }
            }
        }

    Client input:  Testing, Testing...
    Server output: .T.e.s.t.i.n.g.,..T.e.s.t.i.n.g......

    You'll notice that the space character sends a message, but the character isn't displayed.
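    A hedged way to narrow this down (an illustrative sketch, not part of the original code): dump the raw XML body on the server before deserializing it, so you can see whether the space actually survived the trip or was lost on the way through the XML serialization. Note that reading the stream consumes it, so skip Deserialize on that pass.

        // Sketch: inspect the raw message body instead of deserializing it.
        msg = _mq.EndReceive(result);
        using (var reader = new StreamReader(msg.BodyStream))
        {
            string rawXml = reader.ReadToEnd();
            Console.WriteLine("[" + rawXml + "]");   // brackets make a lone space visible
        }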

    Read the article

  • Java: Clearing up the confusion on what causes a connection reset

    - by Zombies
    There seems to be some confusion, as well as contradicting statements, in various SO answers: http://stackoverflow.com/questions/585599/whats-causing-my-java-net-socketexception-connection-reset . You can see there that the accepted answer states that the connection was closed by the other side. But this is not true: closing a connection doesn't cause a connection reset; it is caused by "an underlying TCP/IP error." What I want to know is what a SocketException: Connection reset really means besides "underlying TCP/IP error." What really causes it? I doubt it has anything to do with the connection being closed, since closing a connection isn't an exception-worthy event, and reading from a closed connection is, but that isn't an "underlying TCP/IP error." My hypothesis is that Connection reset is caused by a server's failure to acknowledge an ACK packet (either wholly or just improperly, as per TCP/IP), and that a SocketTimeoutException is generated only when no data is generated to be read (since this is thrown during a read after a certain duration, and read is waiting for data but is not concerned with ACK packets). In other words, read() throws SocketTimeoutException if it didn't read any bytes of actual data (data layer) in its allotted time.

    Read the article

  • Full complete MySQL database replication? Ideas? What do people do?

    - by mauriciopastrana
    Currently I have two Linux servers running MySQL, one sitting on a rack right next to me under a 10 Mbit/s upload pipe (main server) and another a couple of miles away on a 3 Mbit/s upload pipe (mirror). I want to be able to replicate data on both servers continuously, but have run into several roadblocks. One of them: under MySQL master/slave configurations, every now and then some statements drop (!), meaning some people logging on to the mirror URL don't see data that I know is on the main server, and vice versa. Let's say this happens to a meaningful block of data once a month, so I can live with it and assume it's a "lost packet" issue (i.e., god knows, but we'll compensate).

    The other most important (and annoying) recurring issue is that when, for some reason, we do a major upload or update (or reboot) on one end and have to sever the link, LOAD DATA FROM MASTER doesn't work and I have to manually dump on one end and upload on the other - quite a task nowadays, moving some 0.5 TB worth of data.

    Is there software for this? I know MySQL (the "corporation") offers this as a VERY expensive service (full database replication). I am just wondering what people out there do. The way it's structured, we run an automatic failover where if one server is not up, the main URL just resolves to the other server.

    Read the article

  • Symbian: clear buffer of RSocket object

    - by Heinz
    Hi, I have to come back once again to sockets in Symbian. Code to set up a connection to a remote server looks as follows:

        TInetAddr serverAddr;
        TUint iPort = 111;
        TRequestStatus iStatus;
        TSockXfrLength len;
        TInt res = iSocketSrv.Connect();
        res = iSocket.Open(iSocketSrv, KAfInet, KSockStream, KProtocolInetTcp);
        res = iSocket.SetOpt(KSoTcpSendWinSize, KSolInetTcp, 0x10000);
        serverAddr.SetPort(iPort);
        serverAddr.SetAddress(INET_ADDR(11,11,179,154));
        iSocket.Connect(serverAddr, iStatus);
        User::WaitForRequest(iStatus);

    Over the iSocket I receive packets of variable size. On very few occurrences it happens that such a packet is corrupted. What I would like to do then is clear all the data that is currently in the iSocket buffer and ready to be read. I have not seen any method of RSocket that allows me to clear the content of the buffer. Does anyone know how to do that? If possible, I would like to avoid using RecvOneOrMore() or a similar recv function to clear the buffer. Thanks

    Read the article

  • UDP sockets in ad hoc network (Ubuntu 9.10)

    - by Ekhiotz
    Hi! I am using BSD sockets in Ubuntu 9.10 to send UDP packets in broadcast with the following code:

        sock_fd = socket(PF_INET, SOCK_DGRAM, IPPROTO_UDP);
        //sock_fd = socket(AF_INET, SOCK_DGRAM, 0);
        receiver_addr.sin_family = PF_INET;
        //does not send with broadcast in ad hoc
        receiver_addr.sin_addr.s_addr = htonl(INADDR_BROADCAST);
        inet_aton("169.254.255.255", &receiver_addr.sin_addr);
        receiver_addr.sin_port = htons(port);
        int broadcast = 1;
        // this call is what allows broadcast packets to be sent:
        if (setsockopt(sock_fd, SOL_SOCKET, SO_BROADCAST, &broadcast, sizeof broadcast) == -1) {
            perror("setsockopt (SO_BROADCAST)");
            exit(1);
        }
        ret = sendto(sock_fd, packet, size, 0, (struct sockaddr*)&receiver_addr, sizeof(receiver_addr));

    Note that this is not all the code; it is only to give an idea. The program sends all the data with INADDR_BROADCAST if I am connected to an infrastructure wireless network. However, if my laptop is connected to an ad-hoc network, it is able to receive all the data, but not to send it. I have solved the problem by using the 169.254.255.255 broadcast address, but I would like to know what is going on. Thank you in advance!

    Read the article

  • Cannot open database "DataDir" requested by the login. The login failed

    - by Turkan Caglar
    Hi, I have a single-user software package that uses a SQL Server 2005 database. My computer crashed while my software was running. I saved a copy of my database after I recovered the computer. However, my software now gives me the following message (I don't know anything about SQL). I guess my database holds the last login information: since the software was not shut down properly, it still thinks that I am logged in and doesn't allow me to log in again. How can I solve the problem? Can you help me? Or do you know any source where I can get help?

        Cannot open database "DataDir" requested by the login. The login failed
        ComputerName = GURELS
        User ID=sa;Initial Catalog=DataDir;Data Source=GURELS\CSS;Application Name=ChefTec
        User ID=sa;Initial Catalog=CTDir;Data Source=GURELS\CSS;Use Procedure for Prepare=1;Auto Translate=True;Packet Size=4096;Application Name=ChefTec;Workstation ID=GURELS;Use Encryption for Data=False;Tag with column collation when possible=False

    Thank you very much
    Turkan
    e-mail: [email protected]
    phone: 617 259 6644

    Read the article

  • Linux network stack : adding protocols with an LKM and dev_add_pack

    - by agent0range
    Hello, I have recently been trying to familiarize myself with the Linux networking stack and device drivers (I have both similarly named O'Reilly books) with the eventual goal of offloading UDP. I have already implemented UDP on the NIC, but now the hard part... Rather than ask for assistance on this larger goal, I was hoping someone could clarify for me a particular snippet I found that is part of an LKM which registers a new protocol (OTP) that acts as a filter between the device driver and the network stack: http://www.phrack.org/archives/55/p55_0x0c_Building%20Into%20The%20Linux%20Network%20Layer_by_lifeline%20&%20kossak.txt (Note: this Phrack article contains three different modules; the code for the OTP is at the bottom of the page.) In the init function of his example he has:

        otp_proto.type = htons(ETH_P_ALL);
        otp_proto.func = otp_func;
        dev_add_pack(&otp_proto);

    which (if I understand correctly) should register otp_proto as a packet sniffer and put it into the ptype_all data structure. My question is about dev_add_pack. Is it the case that the protocol being registered as a filter will always be placed at this layer, between L2 and the device driver? Or, for instance, could I make such filtering occur between the application and transport layers (to analyze socket parameters) using the same process? I apologize if this is confusing - I am having some trouble wrapping my head around the bigger picture when it comes to modules altering kernel stack functionality. Thanks

    Read the article

  • Boost::Asio - Remove the "null" character at the end of TCP packets

    - by shump
    I'm trying to make a simple MSN client, mostly for fun but also for educational purposes. I started with some TCP packet sending and receiving using Boost Asio, as I want cross-platform support. I have managed to send a "VER" command and receive its response. However, after I send the following "CVR" command, Asio raises an "End of file" error. After some further research I found, by packet sniffing, that my TCP packets to the messenger server got an extra "null" character (ASCII code 00) at the end of the message. This means that my VER command gets an extra character at the end, which I don't think the messenger server likes, and it therefore shuts down the connection when I try to read the CVR response. This is how my packet looks when sniffing it (its payload):

        (Hex:)  56 45 52 20 31 20 4d 53 4e 50 31 35 20 43 56 52 30 0a 0a 00
        (Char:) VER 1 MSNP15 CVR 0...

    and this is how Adium (a chat client for OS X)'s packet looks:

        (Hex:)  56 45 52 20 31 20 4d 53 4e 50 31 35 20 43 56 52 30 0d 0a
        (Char:) VER 1 MSNP15 CVR 0..

    So my question is whether there is any way to remove the null character at the end of each packet, or whether I've misunderstood something and used Asio in the wrong way. My write function (slightly edited) looks like this:

        int sendVERMessage()
        {
            boost::system::error_code ignored_error;
            char sendBuf[] = "VER 1 MSNP15 CVR0\r\n";
            boost::asio::write(socket, boost::asio::buffer(sendBuf),
                               boost::asio::transfer_all(), ignored_error);
            if (ignored_error)
            {
                cout << "Failed to send to host!" << endl;
                return 1;
            }
            cout << "VER message sent!" << endl;
            return 0;
        }

    And here's the main documentation on the MSN protocol I'm using. Hope I've been clear enough.

    Read the article

  • GUI Agent accepts statuses from Daemon and shows them using a progress indicator

    - by Pavel
    Hi to all! My application is a GUI agent which communicates with a daemon through a Unix domain socket, wrapped in a CFSocket... So there is a main loop and an added CFRunLoop source. The daemon sends statuses and the agent shows them with a progress indicator. When there is any data on the socket, the callback function starts to work, and at that point I have to immediately show a new window with a progress indicator and increase the counter.

        //this function initiates the run loop for the listening socket
        - (int) AcceptDaemonConnection:(ConnectionRef)conn
        {
            int err = 0;
            conn->fSockCF = CFSocketCreateWithNative(NULL, (CFSocketNativeHandle) conn->fSockFD,
                                                     kCFSocketAcceptCallBack, ConnectionGotData, NULL);
            if (conn->fSockCF == NULL)
                err = EINVAL;
            if (err == 0) {
                conn->fRunLoopSource = CFSocketCreateRunLoopSource(NULL, conn->fSockCF, 0);
                if (conn->fRunLoopSource == NULL)
                    err = EINVAL;
                else
                    CFRunLoopAddSource(CFRunLoopGetCurrent(), conn->fRunLoopSource, kCFRunLoopDefaultMode);
                CFRelease(conn->fRunLoopSource);
            }
            return err;
        }

        // callback function
        void ConnectionGotData(CFSocketRef s, CFSocketCallBackType type, CFDataRef address,
                               const void * data, void * info)
        {
            #pragma unused(s)
            #pragma unused(address)
            #pragma unused(info)
            assert(type == kCFSocketAcceptCallBack);
            assert( (int *) data != NULL );
            assert( (*(int *) data) != -1 );
            TStatusUpdate status;
            int nativeSocket = *(int *) data;
            status = [agg AcceptPacket:nativeSocket]; // [stWindow InitNewWindow] inside
            [agg SendUpdateStatus:status.percent];
        }

    The AcceptPacket function receives a packet from the socket and tries to show a new window with a progress indicator. The corresponding function is called, but nothing happens... I think I have to get the main application loop to do the work by interrupting the CFSocket loop... Or send a notification? No idea...

    Read the article

  • Publishing WCF .NET 3.5 to IIS 5.1

    - by Adam
    I've been developing a WCF web service using .NET 3.5 with IIS 7 and it works perfectly on my local computer. I tried publishing it to a server running IIS 5.1 and, even though I can view the WSDL in my browser, the client application doesn't seem to be connecting to it correctly. I launched a packet-sniffing app (Charles Proxy) and the response for the first message comes back to the client empty (0 bytes). Every message after the first one times out. The WCF service is part of a larger application that uses ASP.NET 3.5. That application has been working fine on IIS 5.1 for a while now, so I think it's something specific to WCF. I also tried throwing an exception in the SVC file to see if it made it that far, and the exception never got thrown, so I have a feeling it's something lower-level that's not working. Any thoughts? Is there anything I need to install on the IIS 5.1 server? If so, how am I still able to view the WSDL in my browser?

    Read the article

  • How do I use Spreadsheet::WriteExcel to create a chart from numeric log data?

    - by yaohung
    I used csv2xls.pl to convert a text log into .xls format, and then I create a chart as follows:

        my $chart3 = $workbook->add_chart( type => 'line', embedded => 1 );

        # Configure the series.
        $chart3->add_series(
            categories => '=Sheet1!$B$2:$B$64',
            values     => '=Sheet1!$C$2:$C$64',
            name       => 'Test data series 1',
        );

        # Add some labels.
        $chart3->set_title( name => 'Bridge Rate Analysis' );
        $chart3->set_x_axis( name => 'Packet Size' );
        $chart3->set_y_axis( name => 'BVI Rate' );

        # Insert the chart into the main worksheet.
        $worksheet->insert_chart( 'G2', $chart3 );

    I can see the chart in the .xls file. However, all the data are in text format, not numeric, so the chart looks wrong. How do I convert text into numbers before applying this create-chart function? Also, how do I sort the .xls file before creating the chart?

    Read the article

  • DNS protocol message example

    - by virtual-lab
    Hello there, I am trying to figure out how to send DNS messages from an application socket adapter to a DNSBL. I spent the last two days understanding the basics, including experimenting with Wireshark to catch an example of the messages exchanged. Now I would like to query the DNS without using the dig or host commands (I'm using Ubuntu); how can I perform this action at a low level, without the help of these tools, by wrapping the request in the proper DNS message format myself? How should the message be posted: hex or string? Thanks in advance for any help. Regards, Alessandro Ilardo

    Comment added: I am investigating JDev and Oracle SOA. The platform provides a Socket Adapter which simply applies a transformation (XSLT) and sends the message straight to the socket. How the payload parameters (e.g. the host I'm looking up) are wrapped within the message is left to the developer. So basically I have an idea of how the whole DNS message is structured, but rather than put everything into JDev straight away, I'd like to run some tests on my own just to make sure I've got a valid message format. So, I am not using any specific language (I don't even understand why they moved my question from Server Fault) and I don't want to use any tool which would hide part of the message, such as the header. I know they work well, btw. I guess this stuff has something to do with packet injection. Someone suggested I use telnet, but I've only used it for SMTP or HTTP; I haven't got a clue how it would work for DNS requests. Does it make more sense now?
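    For illustration only (C# is used just as a concrete language here, and the resolver address and names are assumptions): a DNS query is a small binary packet, not text - a 12-byte header, the queried name encoded as length-prefixed labels, then QTYPE and QCLASS - usually sent over UDP port 53. So the answer to "hex or string?" is raw bytes. For a DNSBL, the queried name would typically be the reversed IP under the list's zone rather than the hostname used below.

        using System;
        using System.Collections.Generic;
        using System.Net;
        using System.Net.Sockets;

        class DnsQuerySketch
        {
            static byte[] BuildQuery(string name)
            {
                var msg = new List<byte>();
                // Header: ID=0x1234, flags=0x0100 (recursion desired), QDCOUNT=1, other counts 0
                msg.AddRange(new byte[] { 0x12, 0x34, 0x01, 0x00, 0x00, 0x01,
                                          0x00, 0x00, 0x00, 0x00, 0x00, 0x00 });
                foreach (var label in name.Split('.'))        // QNAME: length-prefixed labels
                {
                    msg.Add((byte)label.Length);
                    foreach (char c in label) msg.Add((byte)c);
                }
                msg.Add(0x00);                                    // end of QNAME
                msg.AddRange(new byte[] { 0x00, 0x01, 0x00, 0x01 }); // QTYPE=A, QCLASS=IN
                return msg.ToArray();
            }

            static void Main()
            {
                byte[] query = BuildQuery("example.org");         // placeholder name
                using (var udp = new UdpClient("8.8.8.8", 53))    // placeholder resolver
                {
                    udp.Send(query, query.Length);
                    var remote = new IPEndPoint(IPAddress.Any, 0);
                    byte[] answer = udp.Receive(ref remote);
                    Console.WriteLine("Got {0} bytes back", answer.Length);
                }
            }
        }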

    Read the article

  • Is this a good starting point for iptables in Linux?

    - by sbrattla
    Hi, I'm new to iptables, and I've been trying to put together a firewall whose purpose is to protect a web server. The rules below are the ones I've put together so far, and I would like to hear whether they make sense - and whether I've left out anything essential. In addition to port 80, I also need to have port 3306 (MySQL) and 22 (SSH) open for external connections. Any feedback is highly appreciated!

        #!/bin/sh
        # Clear all existing rules.
        iptables -F
        # ACCEPT connections for the loopback network connection, 127.0.0.1.
        iptables -A INPUT -i lo -j ACCEPT
        # ALLOW established traffic
        iptables -A INPUT -m state --state ESTABLISHED,RELATED -j ACCEPT
        # DROP packets that are NEW but do not have the SYN bit set.
        iptables -A INPUT -p tcp ! --syn -m state --state NEW -j DROP
        # DROP fragmented packets, as there is no way to tell the source and destination ports of such a packet.
        iptables -A INPUT -f -j DROP
        # DROP packets with all tcp flags set (XMAS packets).
        iptables -A INPUT -p tcp --tcp-flags ALL ALL -j DROP
        # DROP packets with no tcp flags set (NULL packets).
        iptables -A INPUT -p tcp --tcp-flags ALL NONE -j DROP
        # ALLOW ssh traffic (and protect against DoS attacks)
        iptables -A INPUT -p tcp --dport ssh -m limit --limit 1/s -j ACCEPT
        # ALLOW http traffic (and protect against DoS attacks)
        iptables -A INPUT -p tcp --dport http -m limit --limit 5/s -j ACCEPT
        # ALLOW mysql traffic (and protect against DoS attacks)
        iptables -A INPUT -p tcp --dport mysql -m limit --limit 25/s -j ACCEPT
        # DROP any other traffic.
        iptables -A INPUT -j DROP

    Read the article

  • TCP Socket.Connect is generating false positives

    - by Mark
    I'm experiencing really weird behavior with the Socket.Connect method in C#. I am attempting a TCP Socket.Connect to a valid IP but closed port, and the method continues as if I had successfully connected. When I packet-sniffed what was going on, I saw that the app was receiving RST packets from the remote machine. Yet from the tracing that is in place it is clear that the Connect method is not throwing an exception. Any ideas what might be causing this? The code that is running is basically this:

        IPEndPoint iep = new IPEndPoint(System.Net.IPAddress.Parse(m_ipAddress), m_port);
        Socket tcpSocket = new Socket(AddressFamily.InterNetwork, SocketType.Stream, ProtocolType.Tcp);
        tcpSocket.Connect(iep);

    To add to the mystery... when running this code in a standalone console application, the result is as expected - the Connect method throws an exception. However, when running it in the Windows service deployment we have, the Connect method does not throw an exception.

    Edit in response to Mystere Man's answer: How would the exception be swallowed? I have a Trace.WriteLine right above the .Connect method and a Trace.WriteLine right under it (not shown in the code sample for readability). I know that both traces are running. I also have a try/catch around the whole thing which also does a Trace.WriteLine, and I don't see that in the log files anywhere. I have also enabled the internal socket tracing as you suggested. I don't see any exceptions; I see what appear to be successful connections. I am trying to identify differences between the Windows service app and the diagnostic console app I made. I am running out of ideas though. End edit. Thanks
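    One way to probe this from inside the service (a diagnostic sketch with assumed names, not the asker's production code): log what Connect reports and then poll the socket. A socket that polls as readable while Available is 0 has been closed or reset by the peer, which would expose a "false positive" connect in the trace output.

        // requires: using System.Diagnostics; using System.Net; using System.Net.Sockets;
        try
        {
            tcpSocket.Connect(iep);
            Trace.WriteLine("Connect returned, Connected=" + tcpSocket.Connected);

            bool readable = tcpSocket.Poll(1000 /* microseconds */, SelectMode.SelectRead);
            if (readable && tcpSocket.Available == 0)
                Trace.WriteLine("Peer closed/reset the connection right after connect");
        }
        catch (SocketException ex)
        {
            Trace.WriteLine("Connect failed: " + ex.SocketErrorCode);
        }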

    Read the article

  • Using awk to return only certain chunks of data

    - by Koriar
    I'm not 100% certain how to phrase my question simply, so I apologize if this has been answered somewhere and I was just unable to find it. What I have are debug logs with authentication packets in them, along with a bunch of other output. I need to search through about 2 million lines of logs to find every packet that contains a certain MAC address. The packets look something like this (slightly censored):

        -----------------[ header ]-----------------
        Event: Authd-Response (1900)
        Sequence: -54
        Timestamp: 1969-12-31 19:30:00 (0)
        ---------------[ attributes ]---------------
        Auth-Result = Auth-Accept
        Service-Profile-SID = 53
        Service-Profile-SID = 49
        RADIUS-Access-Accept-Attr/WiMAX-Capability = 0x(numbers)
        Session-Timeout = 3600
        Service-Profile-SID = 4
        Service-Profile-SID = 29
        Chargeable-User-Identity = "(Numbers)"
        User-Password = "(the MAC address I'm looking for)"
        --------------------------------------------

    However, there are about 10 different possible types with different possible lengths. They all start with the header line and end with the all-dashes line. I've had success using awk to get the blocks themselves using this:

        awk '/-----------------\[ header \]-----------------/,/--------------------------------------------/' filename.txt

    But I was hoping to be able to use it to return only the packets which contain the MAC address that I need. I've been trying to figure this out for a few days now and I'm pretty stuck. I could try and write a bash script, but I could swear that I've used awk to do something like this before...

    Read the article

  • Cannot connect to Github?

    - by user2973438
    So I tried to push some updates onto my repo on GitHub via the terminal on Mac OS X 10.8.4 and it doesn't work. I've been getting the same error many times:

        Lillys-MacBook-Air:Yuewei Lilly$ git push origin master
        error: Failed connect to github.com:443; Operation timed out while accessing https://github.com/lillybeans/Yuewei.git/info/refs?service=git-receive-pack
        fatal: HTTP request failed

    Some background: I've pushed many projects onto GitHub before using the terminal (when I was in Canada). I am currently in Shanghai, China - could it be the GFW? But when I was in Beijing, I was still able to push projects onto GitHub. When I do ping github.com:

        Lillys-MacBook-Air:Yuewei Lilly$ ping github.com
        PING github.com (192.30.252.131): 56 data bytes
        Request timeout for icmp_seq 0
        Request timeout for icmp_seq 1
        ping: sendto: No route to host
        Request timeout for icmp_seq 2
        ping: sendto: Host is down
        Request timeout for icmp_seq 3
        ping: sendto: Host is down
        Request timeout for icmp_seq 4
        ping: sendto: Host is down
        Request timeout for icmp_seq 5
        ping: sendto: Host is down
        Request timeout for icmp_seq 6
        ping: sendto: Host is down
        Request timeout for icmp_seq 7
        ^C
        --- github.com ping statistics ---
        9 packets transmitted, 0 packets received, 100.0% packet loss
        Lillys-MacBook-Air:Yuewei Lilly$

    I have ShadowSocks (a proxy) turned on. Without it I can't access github.com via the browser; with it, I can. Also, when I do "git remote -v" I see both my pull and push remote repos correctly listed. Thank you in advance!

    Read the article

  • Creating Binary Block from struct

    - by MOnsDaR
    I hope the title describes the problem; I'll change it if anyone has a better idea. I'm storing information in a struct like this:

        struct AnyStruct
        {
            AnyStruct() :
                testInt(20),
                testDouble(100.01),
                testBool1(true),
                testBool2(false),
                testBool3(true),
                testChar('x')
            {}

            int testInt;
            double testDouble;
            bool testBool1;
            bool testBool2;
            bool testBool3;
            char testChar;

            std::vector<char> getBinaryBlock()
            {
                //how to build that?
            }
        };

    The struct should be sent via network in a binary byte buffer with the following structure:

        Bit 00- 31: testInt
        Bit 32- 61: testDouble most significant portion
        Bit 62- 93: testDouble least significant portion
        Bit 94:     testBool1
        Bit 95:     testBool2
        Bit 96:     testBool3
        Bit 97-104: testChar

    According to this definition the resulting std::vector should have a size of 13 bytes (char == byte). My question now is how I can form such a packet out of the different datatypes I've got. I've already read through a lot of pages and found datatypes like std::bitset or boost::dynamic_bitset, but neither seems to solve my problem. I think it is easy to see that the above code is just an example; the original standard is far more complex and contains more different datatypes. Solving the above example should solve my problems with the complex structures too, I think. One last point: the problem should be solved just by using standard, portable language features of C++ like STL or Boost (

    Read the article

  • What's the cleanest way to do byte-level manipulation?

    - by Jurily
    I have the following C struct from the source code of a server, and many similar:

        // preprocessing magic: 1-byte alignment
        typedef struct AUTH_LOGON_CHALLENGE_C
        {
            // 4 byte header
            uint8  cmd;
            uint8  error;
            uint16 size;

            // 30 bytes
            uint8  gamename[4];
            uint8  version1;
            uint8  version2;
            uint8  version3;
            uint16 build;
            uint8  platform[4];
            uint8  os[4];
            uint8  country[4];
            uint32 timezone_bias;
            uint32 ip;
            uint8  I_len;

            // I_len bytes
            uint8  I[1];
        } sAuthLogonChallenge_C;

        // usage (the actual code that will read my packets):
        sAuthLogonChallenge_C *ch = (sAuthLogonChallenge_C*)&buf[0]; // where buf is a raw byte array

    These are TCP packets, and I need to implement something that emits and reads them in C#. What's the cleanest way to do this? My current approach involves

        [StructLayout(LayoutKind.Sequential, Pack = 1)]
        unsafe struct foo { ... }

    and a lot of fixed statements to read and write it, but it feels really clunky, and since the packet itself is not fixed length, I don't feel comfortable using it. Also, it's a lot of work. However, it does describe the data structure nicely, and the protocol may change over time, so this may be ideal for maintenance. What are my options? Would it be easier to just write it in C++ and use some .NET magic to use that?
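    One alternative worth sketching (an illustration with assumed field names, not this project's actual serializer) is hand-rolled serialization with BinaryWriter/BinaryReader over a MemoryStream: no unsafe code, variable-length fields such as I[I_len] are easy, and BinaryWriter emits little-endian integers, which matches an x86 packed struct (swap bytes yourself if the protocol is big-endian). Reading is the mirror image with BinaryReader, consuming fields in the same declaration order.

        // requires: using System.IO;
        // Sketch: emit the first few fields of the challenge packet by hand.
        static byte[] BuildChallenge(byte cmd, byte error, ushort size,
                                     byte[] gamename, byte[] identity)
        {
            using (var ms = new MemoryStream())
            using (var w = new BinaryWriter(ms))
            {
                w.Write(cmd);                    // uint8  cmd
                w.Write(error);                  // uint8  error
                w.Write(size);                   // uint16 size (little-endian)
                w.Write(gamename, 0, 4);         // uint8  gamename[4]
                // ... remaining fixed fields in declaration order ...
                w.Write((byte)identity.Length);  // uint8  I_len
                w.Write(identity);               // uint8  I[I_len] - the variable-length tail
                return ms.ToArray();
            }
        }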

    Read the article

  • Android: sending XML as a Document object, POST method

    - by juro
    I am new at programming and I need some help with this, please =/ The web service is already written, but not by me, so all I have to do is send XML as a Document object via the POST method to the web service. My code:

        public class send extends application {
            @Override
            public void onCreate(Bundle icicle) {
                super.onCreate(icicle);
                HttpClient httpclient = new DefaultHttpClient();
                HttpPost httppost = new HttpPost("http://app.local/test/");
                try {
                    DocumentBuilderFactory documentBuilderFactory = DocumentBuilderFactory.newInstance();
                    DocumentBuilder documentBuilder = documentBuilderFactory.newDocumentBuilder();
                    Document document = documentBuilder.newDocument();
                    Element rootElement = document.createElement("packet");
                    rootElement.setAttribute("version", "1.2");
                    document.appendChild(rootElement);
                    Element em = document.createElement("imei");
                    em.appendChild(document.createTextNode("000000000000000"));
                    rootElement.appendChild(em);
                    em = document.createElement("username");
                    em.appendChild(document.createTextNode("5555"));
                    rootElement.appendChild(em);
                    HttpResponse response = httpclient.execute(httppost);
                } catch (Exception e) {
                    System.out.println("XML Pasing Excpetion = " + e);
                }
            }
        }

    Read the article
