Search Results

Search found 3489 results on 140 pages for 'tcp'.

Page 13/140 | < Previous Page | 9 10 11 12 13 14 15 16 17 18 19 20  | Next Page >

  • Do TCP connections work differently within the same subnet?

    - by Dean
    I've encountered some network behaviour that confuses me while trying to get Java RMI working. I use netcat to connect to a local machine: [my_machine]$ nc -w 1 192.168.0.100 60000 && echo success success I try to do the same to my server: [my_machine]$ nc -w 1 my-servers-ip 60000 && echo success This doesn't work, unless I explicitly listen on the server socket: [amazon_ec2]$ nc -l 60000 [my_machine]$ nc -w 1 my-servers-ip 60000 && echo success success For the version that fails, the SYN packet receives an RST, ACK in response. I'm not too knowledgeable about this stuff; at this point I only have wild theories such as the one in the question. Any ideas? Potentially useful details: Local Machine (192.168.0.100) - Macbook Remote Machine (Amazon EC2) - Amazon Linux AMI 2012.03 Security Group Settings: 22 (SSH) 0.0.0.0/0 1099 0.0.0.0/0 49152-65535 0.0.0.0/0 "iptables -L" shows no rules set
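
    For illustration, the same check can be scripted: a connection attempt answered with an RST (nothing listening on that port) fails differently from one whose SYN is silently dropped by a firewall or security group. A minimal Python sketch (the second address below is made up):

      import socket

      def probe(host, port, timeout=1.0):
          # "refused" means the SYN was answered with RST (no listener on that port);
          # "filtered" means nothing came back before the timeout (e.g. dropped by a firewall).
          try:
              socket.create_connection((host, port), timeout=timeout).close()
              return "listening"
          except ConnectionRefusedError:
              return "refused"
          except socket.timeout:
              return "filtered"

      for target in ("192.168.0.100", "203.0.113.10"):   # second address is hypothetical
          print(target, probe(target, 60000))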

    Read the article

  • Why do I see different TCP behaviour between IIS and FTP server applications on Windows 2003?

    - by rupello
    I am comparing Wireshark traces of a 10 MB file download from the FileZilla FTP server and from IIS (using HTTP) on the same Windows 2003 server. The FTP download performs faster, and the trace shows the server behaving as expected, sending more data to the client with every ACK received. The HTTP server trace shows a more bursty pattern. The timing of the send bursts is sometimes unrelated to any ACKs received from the client (circled in red in the trace). Anyone have a suggestion as to why the IIS traffic behaves like this? Update: We have tried modifying the http.sys registry settings (setting MaxBytesPerSend to 256k and MaxBufferedSendBytes to 64k as recommended). Changing MaxBytesPerSend does seem to improve performance by increasing the amount of in-flight data, but we still see the same bursty pattern.

    Read the article

  • Server Firewall preventing sending of email [migrated]

    - by Jo Fitzgerald
    The firewall on my VPS appears to be preventing my site from sending email. It was working fine until the end of last month. My hosting provider (Webfusion) has been next to useless. I am able to send email if I open INPUT ports 32768-65535, but not if these ports are closed. Why would this be? I have the following rules in my firewall: # sudo iptables -L Chain INPUT (policy DROP) target prot opt source destination VZ_INPUT all -- anywhere anywhere Chain FORWARD (policy DROP) target prot opt source destination VZ_FORWARD all -- anywhere anywhere Chain OUTPUT (policy DROP) target prot opt source destination VZ_OUTPUT all -- anywhere anywhere Chain VZ_FORWARD (1 references) target prot opt source destination Chain VZ_INPUT (1 references) target prot opt source destination ACCEPT tcp -- anywhere anywhere tcp dpt:www ACCEPT tcp -- anywhere anywhere tcp dpt:https ACCEPT tcp -- anywhere anywhere tcp dpt:smtp ACCEPT tcp -- anywhere anywhere tcp dpt:ssmtp ACCEPT tcp -- anywhere anywhere tcp dpt:pop3 ACCEPT tcp -- anywhere anywhere tcp dpt:domain ACCEPT udp -- anywhere anywhere udp dpt:domain ACCEPT tcp -- anywhere anywhere tcp dpts:32768:65535 ACCEPT udp -- anywhere anywhere udp dpts:32768:65535 ACCEPT tcp -- localhost.localdomain localhost.localdomain ACCEPT udp -- localhost.localdomain localhost.localdomain Chain VZ_OUTPUT (1 references) target prot opt source destination ACCEPT tcp -- anywhere anywhere ACCEPT udp -- anywhere anywhere The VPS is running Plesk 10.4.4 (please ask if you require further technical information to help me)

    Read the article

  • How to implement a bidirectional "mailbox service" over tcp?

    - by igorgatis
    The idea is to allow two peer processes to exchange messages (packets) over TCP as asynchronously as possible. The way I'd like it to work is for each process to have an outbox and an inbox. The send operation is just a push on the outbox. The receive operation is just a pop on the inbox. The underlying protocol would take care of the communication details. Is there a way to implement such a mechanism using a single TCP connection? How would that be implemented using BSD sockets and modern OO socket APIs (like the Java or C# socket API)?
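
    For illustration, a minimal Python sketch of such a mailbox over a single TCP connection, assuming a made-up wire format of 4-byte length-prefixed frames; two background threads decouple the queues from the socket:

      import socket, struct, threading, queue

      class Mailbox:
          # One TCP connection, two queues: send() pushes on the outbox, receive()
          # pops from the inbox, and two background threads move frames over the wire.
          def __init__(self, sock: socket.socket):
              self.sock = sock
              self.outbox = queue.Queue()
              self.inbox = queue.Queue()
              threading.Thread(target=self._sender, daemon=True).start()
              threading.Thread(target=self._receiver, daemon=True).start()

          def send(self, payload: bytes):
              self.outbox.put(payload)           # "push on the outbox"

          def receive(self) -> bytes:
              return self.inbox.get()            # "pop from the inbox" (blocks until a message arrives)

          def _sender(self):
              while True:
                  msg = self.outbox.get()
                  self.sock.sendall(struct.pack("!I", len(msg)) + msg)   # 4-byte length prefix

          def _recv_exact(self, n: int) -> bytes:
              buf = b""
              while len(buf) < n:
                  chunk = self.sock.recv(n - len(buf))
                  if not chunk:
                      raise ConnectionError("peer closed the connection")
                  buf += chunk
              return buf

          def _receiver(self):
              while True:
                  (length,) = struct.unpack("!I", self._recv_exact(4))
                  self.inbox.put(self._recv_exact(length))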

    Read the article

  • Using TCP Acks to measure latency to a server?

    - by Ted Graham
    I am trying to measure latency to a server that I don't control. This is in a colocated environment, so the latency is on the order of 500 us (.5 ms). I understand that Cisco gear frequently deprioritizes ICMP traffic, making ping times unreliable. Is there a way for me to tell if this is the case on the gear I am traversing? Can I use TCP acknowledgements to determine the minimum latency to the remote server? To do this, I would somehow need to force the remote server to send a TCP ack immediately on receiving my data.
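
    One approach that needs no cooperation from the remote application is to time the TCP three-way handshake itself: connect() completes when the SYN-ACK comes back, and that reply is generated by the remote kernel rather than the application. A rough Python sketch (address and port are hypothetical):

      import socket, time

      def handshake_rtt(host, port, samples=20):
          # connect() returns once the SYN-ACK arrives, so the elapsed time is roughly
          # one round trip to the remote TCP stack. Take the minimum over several samples.
          best = float("inf")
          for _ in range(samples):
              s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
              s.settimeout(2.0)
              start = time.perf_counter()
              try:
                  s.connect((host, port))
                  best = min(best, time.perf_counter() - start)
              except OSError:
                  pass                     # refused or timed-out sample, ignore it
              finally:
                  s.close()
          return best

      print("min handshake RTT: %.3f ms" % (handshake_rtt("198.51.100.7", 80) * 1000))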

    Read the article

  • How to retain one million simultaneous TCP connections?

    - by cow
    I am to design a server that needs to serve millions of clients that are simultaneously connected with the server via TCP. The data traffic between the server and the clients may be sparse, so bandwidth issues can be ignored. One important requirement is that whenever the server needs to send data to any client it can use the existing TCP connection instead of opening a new connection toward the client (because the client may be behind a firewall). Does anybody know how to do it, and what hardware/software is needed (at the least cost)? Thanks in advance for any suggestions.
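
    Holding that many mostly idle connections is largely an event-multiplexing and OS-tuning problem (file-descriptor limits, per-socket buffer memory, listen backlog) rather than a bandwidth one. As a minimal sketch of the application side only, an event-driven server that parks every accepted connection and can later push data back over it (Python asyncio; the port and table layout are made up):

      import asyncio

      clients = {}   # peer address -> StreamWriter, so the server can push data later

      async def handle(reader, writer):
          # Park the connection: traffic is sparse, so the per-connection cost is
          # one socket, small buffers and a table entry.
          peer = writer.get_extra_info("peername")
          clients[peer] = writer
          try:
              while await reader.read(4096):    # returns b"" when the client disconnects
                  pass
          finally:
              clients.pop(peer, None)
              writer.close()

      async def push(peer, payload: bytes):
          # Reuse the client's existing connection (it may be behind a firewall/NAT).
          writer = clients.get(peer)
          if writer is not None:
              writer.write(payload)
              await writer.drain()

      async def main():
          server = await asyncio.start_server(handle, "0.0.0.0", 60000)
          async with server:
              await server.serve_forever()

      asyncio.run(main())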

    Read the article

  • When will a TCP network packet be fragmented at the application layer?

    - by zooropa
    When will a TCP packet be fragmented at the application layer? When a TCP packet is sent from an application, will the recipient at the application layer ever receive the packet in two or more packets? If so, what conditions cause the packet to be divided? It seems like a packet won't be fragmented until it reaches the Ethernet (at the network layer) limit of 1500 bytes. But, that fragmentation will be transparent to the recipient at the application layer since the network layer will reassemble the fragments before sending the packet up to the next layer, right?
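
    As background: what TCP hands the application is a byte stream rather than packets, so a single send may be delivered across several reads (and several sends may be coalesced into one), regardless of how the segments were fragmented underneath. The usual remedy is application-level framing. A small Python sketch, assuming a made-up 4-byte length prefix:

      import socket, struct

      def recv_exact(sock: socket.socket, n: int) -> bytes:
          # Loop until exactly n bytes have been read: one peer-side send() may be
          # delivered to us in several pieces, or merged with the next one.
          chunks = []
          while n:
              chunk = sock.recv(n)
              if not chunk:
                  raise ConnectionError("peer closed mid-message")
              chunks.append(chunk)
              n -= len(chunk)
          return b"".join(chunks)

      def recv_message(sock: socket.socket) -> bytes:
          # Application-level framing: a 4-byte length prefix restores the message
          # boundaries that the TCP byte stream does not preserve.
          (length,) = struct.unpack("!I", recv_exact(sock, 4))
          return recv_exact(sock, length)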

    Read the article

  • Sequential WSASend() calls - can I rely on TCP to put them on the wire in the posting order?

    - by Poni
    On Windows I/O completion ports, say I do this: void function() { WSASend("1111"); // A WSASend("2222"); // B WSASend("3333"); // C } If I got a "write-complete" that says 3 bytes of WSASend() A were sent, is it possible that right after that I'll get a "write-complete" that tells me that some or all of B & C were sent, or will TCP hold them until I re-issue a WSASend() call with the rest of A's data? Or will TCP complete it automatically?
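
    Whatever the completion semantics, TCP itself delivers the bytes of A, B and C in the order they were queued on the socket; only the chunk boundaries seen by the receiver can differ. A small Python sketch of that stream-ordering guarantee (it does not model IOCP completions; the port number is made up):

      import socket, threading, time

      def receiver(port=50123):
          srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
          srv.bind(("127.0.0.1", port))
          srv.listen(1)
          conn, _ = srv.accept()
          data = b""
          while True:
              chunk = conn.recv(4096)            # boundaries here are arbitrary...
              if not chunk:
                  break
              data += chunk
          print(data)                            # ...but the result is always b"111122223333"
          conn.close()
          srv.close()

      threading.Thread(target=receiver, daemon=True).start()
      time.sleep(0.2)                            # give the listener a moment to start
      s = socket.create_connection(("127.0.0.1", 50123))
      for piece in (b"1111", b"2222", b"3333"):  # three sequential sends, like A, B, C
          s.sendall(piece)
      s.close()
      time.sleep(1)                              # let the receiver print before the process exits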

    Read the article

  • How do I make a TCP connection between 2 servers if both can start the connection?

    - by DeeD
    I have a defined number of servers that can locally process data in their own way. But after some time I want to synchronize some states that are common on each server. My idea was to establish a TCP connection from each server to the other servers, like a mesh network. My problem is in what order to make the connections, since there is no "master" server here; each server is responsible for creating its own connections to the others. My idea was to make each server connect, and if the server being connected to already has a connection to the connecting server, just drop the new connection. But how do I handle the fact that 2 servers are trying to connect at the same time? Because then I get 2 TCP connections instead of 1. Any ideas?
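
    One common way to avoid the duplicate-connection race entirely is a deterministic tie-break that both sides can compute, e.g. only the node with the smaller identifier dials out and the other side only accepts. A tiny Python sketch (node names are made up):

      def should_initiate(my_id: str, peer_id: str) -> bool:
          # Both nodes evaluate the same rule, so for any pair exactly one of them
          # dials out and the other only accepts - simultaneous connects between
          # the same pair can no longer produce two surviving connections.
          return my_id < peer_id

      # Hypothetical usage on node "server-b" with peers "server-a" and "server-c":
      peers = {"server-a": "10.0.0.1", "server-c": "10.0.0.3"}
      to_dial = [p for p in peers if should_initiate("server-b", p)]
      print(to_dial)   # ['server-c']; "server-a" will dial us instead, we just accept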

    Read the article

  • Iptables --gw parameter

    - by schoen
    I want to copy TCP traffic. I want to use these commands: "iptables -A PREROUTING -t mangle -p tcp --dport 7 -j ROUTE --gw 1.2.3.4 --tee iptables -A POSTROUTING -t mangle -p tcp --sport 7 -j ROUTE --gw 1.2.3.4 --tee" as stated here http://stackoverflow.com/questions/7247668/duplicate-tcp-traffic-with-a-proxy, but iptables keeps telling me "iptables v1.4.8: unknown option '--gw'". What can I do to fix this? With kind regards

    Read the article

  • Network Access: I can't access 192.168.1.101 from 192.168.1.102.

    - by takpar
    Hi, I'm running Ubuntu 10.04 on my PC with IP 192.168.1.101. every thing work fine, e.g. my web server is running and I can see http://localhost/ or http://192.168.1.101 properly. But the problem is that I cannot see my PC from my laptop at 192.168.1.102 e.g. at my laptop http://192.168.1.101 gives Connection timed out in browser. or trying to telnet on any port leads to: telnet: Unable to connect to remote host: Connection timed out laptop is running a fresh install of Ubuntu as well and there is no setup for firewall stuff in both computers. PS: Both computers can ping each other well. The router is a cicso linksys wireless ADSL modem. Currently, I can connect to FTP server on the Windows running on 192.168.1.102 from 192.168.1.101 without problem. Theses are commands ran on my PC, 192.168.1.101: ifconfig: adp@adp-desktop:~$ ifconfig eth0 Link encap:Ethernet HWaddr 00:26:18:e1:8e:cf inet addr:192.168.1.101 Bcast:192.168.1.255 Mask:255.255.255.0 inet6 addr: fe70::226:18ff:fee1:8ecf/64 Scope:Link UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1 RX packets:1831935 errors:0 dropped:0 overruns:0 frame:0 TX packets:1493786 errors:0 dropped:0 overruns:0 carrier:0 collisions:0 txqueuelen:1000 RX bytes:1996855925 (1.9 GB) TX bytes:215288238 (215.2 MB) Interrupt:27 Base address:0xa000 lo Link encap:Local Loopback inet addr:127.0.0.1 Mask:255.0.0.0 inet6 addr: ::1/128 Scope:Host UP LOOPBACK RUNNING MTU:16436 Metric:1 RX packets:951742 errors:0 dropped:0 overruns:0 frame:0 TX packets:951742 errors:0 dropped:0 overruns:0 carrier:0 collisions:0 txqueuelen:0 RX bytes:494351095 (494.3 MB) TX bytes:494351095 (494.3 MB) vmnet1 Link encap:Ethernet HWaddr 00:50:46:c0:00:01 inet addr:192.168.91.1 Bcast:192.168.91.255 Mask:255.255.255.0 inet6 addr: fe70::250:56ff:fec0:1/64 Scope:Link UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1 RX packets:0 errors:0 dropped:0 overruns:0 frame:0 TX packets:50 errors:0 dropped:0 overruns:0 carrier:0 collisions:0 txqueuelen:1000 RX bytes:0 (0.0 B) TX bytes:0 (0.0 B) vmnet8 Link encap:Ethernet HWaddr 00:50:46:c0:00:08 inet addr:192.168.156.1 Bcast:192.168.156.255 Mask:255.255.255.0 inet6 addr: fe70::250:56ff:fec0:8/64 Scope:Link UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1 RX packets:0 errors:0 dropped:0 overruns:0 frame:0 TX packets:51 errors:0 dropped:0 overruns:0 carrier:0 collisions:0 txqueuelen:1000 RX bytes:0 (0.0 B) TX bytes:0 (0.0 B) port 80 is set to 0.0.0.0 well: adp@adp-desktop:~$ netstat -ln | grep 'LISTEN ' tcp 0 0 127.0.0.1:52815 0.0.0.0:* LISTEN tcp 0 0 0.0.0.0:4559 0.0.0.0:* LISTEN tcp 0 0 0.0.0.0:80 0.0.0.0:* LISTEN tcp 0 0 0.0.0.0:4369 0.0.0.0:* LISTEN tcp 0 0 127.0.0.1:7634 0.0.0.0:* LISTEN tcp 0 0 0.0.0.0:21 0.0.0.0:* LISTEN tcp 0 0 0.0.0.0:5269 0.0.0.0:* LISTEN tcp 0 0 127.0.0.1:631 0.0.0.0:* LISTEN tcp 0 0 0.0.0.0:25 0.0.0.0:* LISTEN tcp 0 0 0.0.0.0:5280 0.0.0.0:* LISTEN tcp 0 0 127.0.1.1:7777 0.0.0.0:* LISTEN tcp 0 0 0.0.0.0:33601 0.0.0.0:* LISTEN tcp 0 0 0.0.0.0:5222 0.0.0.0:* LISTEN tcp 0 0 127.0.0.1:3306 0.0.0.0:* LISTEN tcp6 0 0 :::139 :::* LISTEN tcp6 0 0 ::1:631 :::* LISTEN tcp6 0 0 :::445 :::* LISTEN /etc/hosts.deny is empty: adp@adp-desktop:~$ cat /etc/hosts.deny # /etc/hosts.deny: list of hosts that are _not_ allowed to access the system. # See the manual pages hosts_access(5) and hosts_options(5). # # Example: ALL: some.host.name, .some.domain # ALL EXCEPT in.fingerd: other.host.name, .other.domain # # If you're going to protect the portmapper use the name "portmap" for the # daemon name. 
Remember that you can only use the keyword "ALL" and IP # addresses (NOT host or domain names) for the portmapper, as well as for # rpc.mountd (the NFS mount daemon). See portmap(8) and rpc.mountd(8) # for further information. # # The PARANOID wildcard matches any host whose name does not match its # address. # # You may wish to enable this to ensure any programs that don't # validate looked up hostnames still leave understandable logs. In past # versions of Debian this has been the default. # ALL: PARANOID netstat -l: adp@adp-desktop:~$ netstat -l Active Internet connections (only servers) Proto Recv-Q Send-Q Local Address Foreign Address State tcp 0 0 localhost:52815 *:* LISTEN tcp 0 0 *:hylafax *:* LISTEN tcp 0 0 *:www *:* LISTEN tcp 0 0 *:4369 *:* LISTEN tcp 0 0 localhost:7634 *:* LISTEN tcp 0 0 *:ftp *:* LISTEN tcp 0 0 *:xmpp-server *:* LISTEN tcp 0 0 localhost:ipp *:* LISTEN tcp 0 0 *:smtp *:* LISTEN tcp 0 0 *:5280 *:* LISTEN tcp 0 0 adp-desktop:7777 *:* LISTEN tcp 0 0 *:33601 *:* LISTEN tcp 0 0 *:xmpp-client *:* LISTEN tcp 0 0 localhost:mysql *:* LISTEN tcp6 0 0 [::]:netbios-ssn [::]:* LISTEN tcp6 0 0 localhost:ipp [::]:* LISTEN tcp6 0 0 [::]:microsoft-ds [::]:* LISTEN udp 0 0 *:bootpc *:* udp 0 0 *:mdns *:* udp 0 0 *:47467 *:* udp 0 0 192.168.1.10:netbios-ns *:* udp 0 0 192.168.91.1:netbios-ns *:* udp 0 0 192.168.156.:netbios-ns *:* udp 0 0 *:netbios-ns *:* udp 0 0 192.168.1.1:netbios-dgm *:* udp 0 0 192.168.91.:netbios-dgm *:* udp 0 0 192.168.156:netbios-dgm *:* udp 0 0 *:netbios-dgm *:* raw 0 0 *:icmp *:* 7 netstat -rn: adp@adp-desktop:~$ netstat -rn Kernel IP routing table Destination Gateway Genmask Flags MSS Window irtt Iface 192.168.1.0 0.0.0.0 255.255.255.0 U 0 0 0 eth0 192.168.91.0 0.0.0.0 255.255.255.0 U 0 0 0 vmnet1 192.168.156.0 0.0.0.0 255.255.255.0 U 0 0 0 vmnet8 169.254.0.0 0.0.0.0 255.255.0.0 U 0 0 0 eth0 0.0.0.0 192.168.1.1 0.0.0.0 UG 0 0 0 eth0 commands on the laptop, 192.168.1.102: ifconfig: root@fakeuser-laptop:~# ifconfig eth0 Link encap:Ethernet HWaddr 00:1c:33:a2:31:15 UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1 RX packets:0 errors:0 dropped:0 overruns:0 frame:0 TX packets:0 errors:0 dropped:0 overruns:0 carrier:0 collisions:0 txqueuelen:1000 RX bytes:0 (0.0 B) TX bytes:0 (0.0 B) Interrupt:21 eth1 Link encap:Ethernet HWaddr 00:2d:d9:3e:1f:6c inet addr:192.168.1.102 Bcast:192.168.1.255 Mask:255.255.255.0 inet6 addr: fe70::21d:d9ff:fe3e:1f6c/64 Scope:Link UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1 RX packets:5681 errors:0 dropped:0 overruns:0 frame:10313 TX packets:6717 errors:6 dropped:0 overruns:0 carrier:0 collisions:0 txqueuelen:1000 RX bytes:4055251 (4.0 MB) TX bytes:779308 (779.3 KB) Interrupt:18 lo Link encap:Local Loopback inet addr:127.0.0.1 Mask:255.0.0.0 inet6 addr: ::1/128 Scope:Host UP LOOPBACK RUNNING MTU:16436 Metric:1 RX packets:206 errors:0 dropped:0 overruns:0 frame:0 TX packets:206 errors:0 dropped:0 overruns:0 carrier:0 collisions:0 txqueuelen:0 RX bytes:15172 (15.1 KB) TX bytes:15172 (15.1 KB) netstat -rn: root@fakeuser-laptop:~# netstat -rn Kernel IP routing table Destination Gateway Genmask Flags MSS Window irtt Iface 192.168.1.0 0.0.0.0 255.255.255.0 U 0 0 0 eth1 169.254.0.0 0.0.0.0 255.255.0.0 U 0 0 0 eth1 0.0.0.0 192.168.1.1 0.0.0.0 UG 0 0 0 eth1

    Read the article

  • Book about TCP, HTTP, named pipes, shared memory, WCF and other inter-process communication protocols

    - by Samuel
    Recently, I had to create a program to send messages between two WinForms executables. I used a tool with simple built-in functionality to avoid having to figure out all the ins and outs of the vast number of protocols that exist. But now I'm ready to learn more about the internal differences between each of these protocols. I googled a couple of them, but it would be greatly appreciated to have a good reference book that gives me a clear idea of how each protocol works and what the pros and cons are in various contexts. Here is a list of nice protocols that I found: Shared memory TCP Named Pipes File Mapping Mailslots MSMQ (Microsoft Message Queuing) WCF I know that all of these protocols are not specific to a language; it would be nice if examples could be in .NET. Thank you very much.

    Read the article

  • Exploring TCP throughput with DTrace (2)

    - by user12820842
    Last time, I described how we can use the overlap in distributions of unacknowledged byte counts and send window to determine whether the peer's receive window may be too small, limiting throughput. Let's combine that comparison with a comparison of congestion window and slow start threshold, all on a per-port/per-client basis. This will help us Identify whether the congestion window or the receive window are limiting factors on throughput by comparing the distributions of congestion window and send window values to the distribution of outstanding (unacked) bytes. This will allow us to get a visual sense for how often we are thwarted in our attempts to fill the pipe due to congestion control versus the peer not being able to receive any more data. Identify whether slow start or congestion avoidance predominate by comparing the overlap in the congestion window and slow start distributions. If the slow start threshold distribution overlaps with the congestion window, we know that we have switched between slow start and congestion avoidance, possibly multiple times. Identify whether the peer's receive window is too small by comparing the distribution of outstanding unacked bytes with the send window distribution (i.e. the peer's receive window). I discussed this here. # dtrace -s tcp_window.d dtrace: script 'tcp_window.d' matched 10 probes ^C cwnd 80 10.175.96.92 value ------------- Distribution ------------- count 1024 | 0 2048 | 4 4096 | 6 8192 | 18 16384 | 36 32768 |@ 79 65536 |@ 155 131072 |@ 199 262144 |@@@ 400 524288 |@@@@@@ 798 1048576 |@@@@@@@@@@@@@@@@@@@@@@@@@@@@ 3848 2097152 | 0 ssthresh 80 10.175.96.92 value ------------- Distribution ------------- count 268435456 | 0 536870912 |@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@ 5543 1073741824 | 0 unacked 80 10.175.96.92 value ------------- Distribution ------------- count -1 | 0 0 | 1 1 | 0 2 | 0 4 | 0 8 | 0 16 | 0 32 | 0 64 | 0 128 | 0 256 | 3 512 | 0 1024 | 0 2048 | 4 4096 | 9 8192 | 21 16384 | 36 32768 |@ 78 65536 |@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@ 5391 131072 | 0 swnd 80 10.175.96.92 value ------------- Distribution ------------- count 32768 | 0 65536 |@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@ 5543 131072 | 0 Here we are observing a large file transfer via http on the webserver. Comparing these distributions, we can observe: That slow start congestion control is in operation. The distribution of congestion window values lies below the range of slow start threshold values (which are in the 536870912+ range), so the connection is in slow start mode. Both the unacked byte count and the send window values peak in the 65536-131071 range, but the send window value distribution is narrower. This tells us that the peer TCP's receive window is not closing. The congestion window distribution peaks in the 1048576 - 2097152 range while the receive window distribution is confined to the 65536-131071 range. Since the cwnd distribution ranges as low as 2048-4095, we can see that for some of the time we have been observing the connection, congestion control has been a limiting factor on transfer, but for the majority of the time the receive window of the peer would more likely have been the limiting factor. However, we know the window has never closed as the distribution of swnd values stays within the 65536-131071 range. So all in all we have a connection that has been mildly constrained by congestion control, but for the bulk of the time we have been observing it neither congestion or peer receive window have limited throughput. 
    Here's the script: #!/usr/sbin/dtrace -s tcp:::send / (args[4]->tcp_flags & (TH_SYN|TH_RST|TH_FIN)) == 0 / { @cwnd["cwnd", args[4]->tcp_sport, args[2]->ip_daddr] = quantize(args[3]->tcps_cwnd); @ssthresh["ssthresh", args[4]->tcp_sport, args[2]->ip_daddr] = quantize(args[3]->tcps_cwnd_ssthresh); @unacked["unacked", args[4]->tcp_sport, args[2]->ip_daddr] = quantize(args[3]->tcps_snxt - args[3]->tcps_suna); @swnd["swnd", args[4]->tcp_sport, args[2]->ip_daddr] = quantize((args[4]->tcp_window)*(1 << args[3]->tcps_snd_ws)); } One surprise here is that slow start is still in operation - one would assume that for a large file transfer, acknowledgements would push the congestion window up past the slow start threshold over time. The slow start threshold is in fact still close to its initial (very high) value, so that would suggest we have not experienced any congestion (the slow start threshold is adjusted when congestion occurs). Also, the above measurements were taken early in the connection lifetime, so the congestion window did not get a chance to get bumped up to the level of the slow start threshold. A good strategy when examining these sorts of measurements for a given service (such as a webserver) would be to start by examining the distributions above aggregated by port number only to get an overall feel for service performance, i.e. is congestion control or peer receive window size an issue, or are we unconstrained to fill the pipe? From there, the overlap of distributions will tell us whether to drill down into specific clients. For example, if the send window distribution has multiple peaks, we may want to examine if particular clients show issues with their receive window.

    Read the article

  • Latency in TCP/IP-over-Ethernet networks

    - by aix
    What resources (books, web pages, etc.) would you recommend that: explain the causes of latency in TCP/IP-over-Ethernet networks; mention tools for looking out for things that cause latency (e.g. certain entries in netstat -s); suggest ways to tweak the Linux TCP stack to reduce TCP latency (Nagle, socket buffers etc). The closest I am aware of is this document, but it's rather brief. Alternatively, you're welcome to answer the above questions directly. Edit: To be clear, the question isn't just about "abnormal" latency, but about latency in general. Additionally, it is specifically about TCP/IP-over-Ethernet and not about other protocols (even if they have better latency characteristics).
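
    As a concrete illustration of the per-socket knobs mentioned above (Nagle and socket buffer sizing), a hedged Python sketch; the address, port and buffer sizes are arbitrary examples, and system-wide sysctls are a separate topic:

      import socket

      # Two per-socket settings commonly involved in TCP latency tuning: TCP_NODELAY
      # turns off Nagle's algorithm so small writes are not held back waiting for ACKs,
      # and SO_SNDBUF/SO_RCVBUF size the socket buffers explicitly.
      s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
      s.setsockopt(socket.IPPROTO_TCP, socket.TCP_NODELAY, 1)
      s.setsockopt(socket.SOL_SOCKET, socket.SO_SNDBUF, 256 * 1024)
      s.setsockopt(socket.SOL_SOCKET, socket.SO_RCVBUF, 256 * 1024)
      s.connect(("192.0.2.10", 9000))        # hypothetical latency-sensitive peer
      s.sendall(b"small latency-sensitive request\n")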

    Read the article

  • Latency in TCP/IP-over-Ethernet networks

    - by aix
    What resources (books, Web pages etc) exist out there that: explain the causes of latency in TCP/IP-over-Ethernet networks; mention tools for looking out for things that cause latency (e.g. certain entries in netstat -s); suggest ways to tweak the Linux TCP stack to reduce TCP latency (Nagle, socket buffers etc). The closest I am aware of is this document, but it's rather brief. Alternatively, you're welcome to answer the above questions directly.

    Read the article

  • Windows Server 2008: Limit UDP/TCP packets per IP or ban

    - by WBAR
    How can I limit UDP/TCP packets per IP sent to my host (or better, per port) per second or minute? It would be nice to ban that IP for 12/24 hours or even forever. I have Windows Server 2008 and I'm very poor at Windows administration but quite good with Linux. EDIT: My basic problem is that they are sending a lot of rubbish UDP and TCP packets: TCP packets without SYN, fragmented UDP packets, so my servers stop responding. So I need to cut off users (IPs) sending more than X packets per second. I need a solution which provides me, somehow configurably: X packets of a certain type (UDP, TCP or both - let's say a parameter named Z) are allowed to be received by an IP on port Y, otherwise the packet should be DROPPED. My virtual hosts are hosted by VirtualBox and I'm able to forward all incoming packets of a certain type and port to the specific virtual host, but I need to DROP them before VirtualBox receives them.

    Read the article

  • Boost::Asio - Remove the "null" character at the end of TCP packets.

    - by shump
    I'm trying to make a simple MSN client, mostly for fun but also for educational purposes. I started to try some TCP packet sending and receiving using Boost Asio as I want cross-platform support. I have managed to send a "VER" command and receive its response. However, after I send the following "CVR" command, Asio throws an "End of file" error. After some further research I found by packet sniffing that my TCP packets to the messenger server get an extra "null" character (ASCII code: 00) at the end of the message. This means that my VER command gets an extra character at the end, which I don't think the messenger server likes, and it therefore shuts down the connection when I try to read the CVR response. This is how my packet's payload looks when sniffing it: (Hex:) 56 45 52 20 31 20 4d 53 4e 50 31 35 20 43 56 52 30 0a 0a 00 (Char:) VER 1 MSNP15 CVR 0... and this is how Adium (a chat client for OS X)'s packet looks: (Hex:) 56 45 52 20 31 20 4d 53 4e 50 31 35 20 43 56 52 30 0d 0a (Char:) VER 1 MSNP15 CVR 0.. So my question is whether there is any way to remove the null character at the end of each packet, or if I've misunderstood something and used Asio in the wrong way. My write function (slightly edited) looks like this: int sendVERMessage() { boost::system::error_code ignored_error; char sendBuf[] = "VER 1 MSNP15 CVR0\r\n"; boost::asio::write(socket, boost::asio::buffer(sendBuf), boost::asio::transfer_all(), ignored_error); if(ignored_error) { cout << "Failed to send to host!" << endl; return 1; } cout << "VER message sent!" << endl; return 0; } And here's the main documentation on the MSN protocol I'm using. Hope I've been clear enough.

    Read the article

  • iptables blocking ssh communication

    - by Michal Sapsa
    I'm using this script for iptables: #!/bin/sh echo "1" > /proc/sys/net/ipv4/ip_forward iptables -F iptables -X iptables -F -t nat iptables -X -t nat iptables -F -t filter iptables -X -t filter iptables -t filter -P FORWARD DROP iptables -t filter -A FORWARD -s 192.168.0.0/255.255.0.0 -d 0/0 -j ACCEPT iptables -t filter -A FORWARD -s 0/0 -d 192.168.0.0/255.255.0.0 -j ACCEPT iptables -t nat -A POSTROUTING -s 10.8.0.1/255.255.255.0 -j MASQUERADE iptables -A FORWARD -s 10.8.0.1/255.255.255.0 -j ACCEPT iptables -t nat -A POSTROUTING -s 192.168.0.0/24 -d 0/0 -j MASQUERADE iptables -I FORWARD -p tcp --tcp-flags SYN,RST SYN -j TCPMSS --clamp-mss-to-pmtu iptables -t nat -A PREROUTING -i eth1 -p udp --dport 16161 -j DNAT --to 192.168.0.251:16161 iptables -t nat -A PREROUTING -i eth1 -p udp --sport 16161 -j DNAT --to 192.168.0.251:16161 #openvpn iptables -I INPUT -p tcp --dport 1194 -j ACCEPT iptables -I INPUT -p udp --dport 1194 -j ACCEPT I end up with some iptables rules that should work but don't work - probably because of me. # Generated by iptables-save v1.4.12 on Mon May 26 13:15:43 2014 *raw :PREROUTING ACCEPT [1657523:1357257330] :OUTPUT ACCEPT [36804:34834370] -A PREROUTING -p icmp -j TRACE -A PREROUTING -p tcp -j TRACE -A OUTPUT -p icmp -j TRACE -A OUTPUT -p tcp -j TRACE COMMIT # Completed on Mon May 26 13:15:43 2014 # Generated by iptables-save v1.4.12 on Mon May 26 13:15:43 2014 *nat :PREROUTING ACCEPT [5033:345623] :INPUT ACCEPT [154:34662] :OUTPUT ACCEPT [6:1968] :POSTROUTING ACCEPT [2:120] -A PREROUTING -i eth0 -p tcp -m tcp --dport 16161 -j DNAT --to-destination 192.168.0.251:22 -A PREROUTING -i eth1 -p tcp -m tcp --dport 16161 -j DNAT --to-destination 192.168.0.251:22 -A POSTROUTING -s 10.8.0.0/24 -j MASQUERADE -A POSTROUTING -s 192.168.0.0/24 -j MASQUERADE COMMIT # Completed on Mon May 26 13:15:44 2014 # Generated by iptables-save v1.4.12 on Mon May 26 13:15:44 2014 *filter :INPUT ACCEPT [548:69692] :FORWARD DROP [8:384] :OUTPUT ACCEPT [2120:1097479] -A INPUT -p udp -m udp --dport 1194 -j ACCEPT -A INPUT -p tcp -m tcp --dport 1194 -j ACCEPT -A FORWARD -p tcp -m tcp --tcp-flags SYN,RST SYN -j TCPMSS --clamp-mss-to-pmtu -A FORWARD -s 192.168.0.0/16 -j ACCEPT -A FORWARD -d 192.168.0.0/16 -j ACCEPT -A FORWARD -s 10.8.0.0/24 -j ACCEPT -A FORWARD -i eth0 -o eth1 -p tcp -m tcp --dport 22 -j ACCEPT -A FORWARD -i eth1 -o eth0 -p tcp -m tcp --dport 22 -j ACCEPT COMMIT TRACE at PREROUTEING AND OUTPUT are only for debuging this thing. When I ssh at public ip with port 16161 I don't get any message, only TimeOut so it looks like I don't get communication back to remote server. ETH0 is the world, ETH1 is LAN Any IPTABLES Masters willing to give a hand ? iptables -vL Chain INPUT (policy ACCEPT 20548 packets, 3198K bytes) pkts bytes target prot opt in out source destination 38822 7014K ACCEPT udp -- any any anywhere anywhere udp dpt:openvpn 0 0 ACCEPT tcp -- any any anywhere anywhere tcp dpt:openvpn Chain FORWARD (policy DROP 1129 packets, 64390 bytes) pkts bytes target prot opt in out source destination 214K 11M TCPMSS tcp -- any any anywhere anywhere tcpflags: SYN,RST/SYN TCPMSS clamp to PMTU 4565K 1090M ACCEPT all -- any any 192.168.0.0/16 anywhere 5916K 7315M ACCEPT all -- any any anywhere 192.168.0.0/16 0 0 ACCEPT all -- any any 10.8.0.0/24 anywhere 0 0 ACCEPT tcp -- any any anywhere 192.168.0.251 tcp dpt:16161 Chain OUTPUT (policy ACCEPT 59462 packets, 19M bytes) pkts bytes target prot opt in out source destination

    Read the article

  • Bind9 as a caching resolver fails with mismatch ID on localhost but not external IP

    - by argibbs
    I'm running Ubuntu 12.04 LTS on a machine on my private network. I have bind9 installed (v9.8.1-P1) via aptitude, so it appears to have put all the bits in the right places and the service starts automatically. I plan on adding some zones later, but first I'm just trying to get it working as a caching resolver. I installed bind, configured it, and starting using it. Initially I thought it was working ok, but then I found some sites weren't being resolved. I've pinned it down to being linked to the size of the result and bind failing-over to TCP mode. So: I'm trying to find out why bind is failing when I query for domain info and the result is 512 bytes (causing a truncation and retry on TCP). Specifically it fails with ID mismatches if I point dig at localhost, but works when I query the machine's own IP (192.168.0.2). This appears to be backwards to the problem that most people have when using bind (fails on external ip, works on localhost). If I do dig @localhost google.com (which has a response of <512 bytes) then it works; I get no warnings, and plenty of output. $ dig @localhost google.com ; <<>> DiG 9.8.1-P1 <<>> @localhost google.com [snip lots of output] ;; Query time: 39 msec ;; SERVER: 127.0.0.1#53(127.0.0.1) ;; WHEN: Thu Oct 17 23:08:34 2013 ;; MSG SIZE rcvd: 495 If I do dig @localhost play.google.com (which has a larger response) then I get back something like: $ dig @localhost play.google.com ;; Truncated, retrying in TCP mode. ;; ERROR: ID mismatch: expected ID 3696, got 27130 This seems to be standard, documented behaviour - when the UDP response is large (here 'large' == 512 bytes) it falls back to TCP. The ID mismatch is not expected though. If I do dig @192.168.0.2 play.google.com then I still get the warning about using TCP mode, but it otherwise works $ dig @192.168.0.2 play.google.com ;; Truncated, retrying in TCP mode. ; <<>> DiG 9.8.1-P1 <<>> @192.168.0.2 play.google.com [snip most of the output] ;; Query time: 5 msec ;; SERVER: 192.168.0.2#53(192.168.0.2) ;; WHEN: Thu Oct 17 23:05:55 2013 ;; MSG SIZE rcvd: 521 At the moment I've not set up any zones in my local instance, so it's just acting as a caching resolver. My options config is pretty much unchanged from standard, I've got the following set: options { directory "/var/cache/bind"; allow-query { 192.168/16; 127.0.0.1; }; forwarders { 8.8.8.8; 8.8.4.4; }; dnssec-validation auto; edns-udp-size 4096 ; allow-transfer { any; }; auth-nxdomain no; # conform to RFC1035 listen-on-v6 { any; }; }; And my /etc/resolv.conf is just nameserver 127.0.0.1 search .local The problem definitely seems linked to the failover to TCP mode: if I do dig +bufsize=4096 @localhost play.google.com then it works; no warning about failover to TCP, no ID mismatch, and a standard looking result. To be honest, if there was a way to force bind to use a much larger UDP buffer, that'd probably be good enough for me, but all I've been able to find mention of is max-udp-size 4096 and that doesn't change the behaviour in any way. I've also tried setting edns-udp-size 512 in case the problem is some weird EDNS issue with my router (which seems unlikely since the +bufsize=4096 flag works fine). I've also tried dig +trace @localhost play.google.com; this works. No truncation/TCP warning, and a full result. I've also tried changing the servers used in the forwarder (e.g. to OpenDNS), but that makes no difference. 
There's one last data point: if I repetitively do dig @localhost play.google.com I don't always get an ID mismatch, but sometimes a REFUSED error. I'm much more likely to get a REFUSED error if I dig the non-localhost IP (192.168.0.2) first: $ dig @localhost play.google.com ;; Truncated, retrying in TCP mode. ; <<>> DiG 9.8.1-P1 <<>> @localhost play.google.com ; (1 server found) ;; global options: +cmd ;; Got answer: ;; ->>HEADER<<- opcode: QUERY, status: REFUSED, id: 35104 ;; flags: qr rd ra; QUERY: 1, ANSWER: 0, AUTHORITY: 0, ADDITIONAL: 0 ;; QUESTION SECTION: ;play.google.com. IN A ;; Query time: 4 msec ;; SERVER: 127.0.0.1#53(127.0.0.1) ;; WHEN: Thu Oct 17 23:20:13 2013 ;; MSG SIZE rcvd: 33 Any insights or things to try would be much appreciated.

    Read the article

  • RabbitMQ message broker unable to open unused port 61613

    - by mjn
    On a Windows Vista system, RabbitMQ fails to open port 61613, which is not used (as netstat and TCPView show). The server log indicates that it is possible to bind port 5672, but the next lines show the problem with port 61613. I have cleared all firewall settings and rebooted. Several times in the past this helped to solve the problem. But as the problem frequently reappears, I would like to know if there is something I am missing to solve its root cause. =INFO REPORT==== 29-Jun-2013::12:09:16 === started TCP Listener on [::]:5672 =INFO REPORT==== 29-Jun-2013::12:09:16 === started TCP Listener on 0.0.0.0:5672 =INFO REPORT==== 29-Jun-2013::12:09:16 === rabbit_stomp: default user 'guest' enabled =INFO REPORT==== 29-Jun-2013::12:09:16 === started STOMP TCP Listener on [::]:61613 =ERROR REPORT==== 29-Jun-2013::12:09:16 === failed to start STOMP TCP Listener on 0.0.0.0:61613 - eacces (permission denied) =INFO REPORT==== 29-Jun-2013::12:09:16 === stopped STOMP TCP Listener on [::]:61613

    Read the article

  • OpenBSD has open ports in default installation

    - by celil
    I have been considering replacing Ubuntu with OpenBSD to improve the security on my local server. I need to have ssh access to it, and I also need it to serve static web content - so the only ports I need open are 22 and 80. However, when I scanned my server for open ports after installing OpenBSD 4.8 and enabling ssh and http in /etc/rc.conf (httpd_flags="" sshd_flags=""), I discovered that it had several other open ports: Port Scan has started… Port Scanning host: 192.168.56.102 Open TCP Port: 13 daytime Open TCP Port: 22 ssh Open TCP Port: 37 time Open TCP Port: 80 http Open TCP Port: 113 ident ssh (22) and http (80) should be open as I enabled httpd and sshd, but why are the other ports open, and should I worry about them creating additional security vulnerabilities? Should they be open in a default installation?

    Read the article

  • Reconstruct a file from a TCP stream

    - by Abhishek Chanda
    I have a client and a server and a third box which sees all packets from the server to the client (but not the other way around). Now when the client requests a file from the server (over HTTP), the third box sees the response. I am trying to reconstruct the file there. I am using libpcap to capture TCP segments and trying to reconstruct the file there. Here is what I did: listen for packets on an interface, group all packets which have the same ACK number, sort the group based on SEQ number, extract the data from each packet, combine it and write it to disk. The problem is, the file thus generated is not exactly the same as the original file. Does everything sound correct here? Some more details: I am using C++; the packet data is being stored as std::vector<char>; I did change the byte order while reading the ACK number and SEQ number from the packet using ntohl; I am not sure if I need to change the byte order for the data as well. I tried to reverse the data from each packet before combining them; even that did not work. Is there something I am missing?
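
    One detail that commonly breaks this kind of reassembly is ordering by ACK number instead of by the segments' own sequence numbers, and not discarding retransmitted or overlapping bytes. A simplified Python sketch of the reassembly step (it assumes the capture is complete and ignores sequence-number wraparound):

      def reassemble(segments):
          # segments: iterable of (seq, payload) pairs taken from the capture.
          # Order by sequence number (not by ACK) and drop retransmitted or
          # overlapping bytes instead of appending them twice.
          stream = bytearray()
          expected = None
          for seq, payload in sorted(segments, key=lambda s: s[0]):
              if expected is None:
                  expected = seq                       # first captured byte of the stream
              if seq + len(payload) <= expected:
                  continue                             # pure retransmission, already have it
              if seq < expected:
                  payload = payload[expected - seq:]   # trim the overlapping prefix
                  seq = expected
              if seq > expected:
                  raise ValueError("missing segment at seq %d" % expected)
              stream += payload
              expected += len(payload)
          return bytes(stream)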

    Read the article

  • What happens with TCP packets between 2 Socket.BeginReceive calls?

    - by Rodrigo
    Hi, I have a doubt about socket programming. I am developing a TCP packet sniffer. I am using Socket.BeginAccept and Socket.BeginReceive to capture every packet, but when a packet is received I have to process something; it is a fast operation, but it would take some milliseconds, and then I call BeginReceive again. My question is, what would happen if some packets are sent while I am processing and haven't called BeginReceive? Are they lost? Are they buffered internally? Is there a limit? Thanks in advance.
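
    Between receive calls nothing is lost: bytes that arrive while the application is busy sit in the kernel's per-socket receive buffer (bounded by that buffer and TCP flow control) and are handed over on the next receive. A small Python sketch demonstrating the effect (the port is made up; it does not model the .NET async API itself):

      import socket, threading, time

      def server():
          srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
          srv.bind(("127.0.0.1", 50007))
          srv.listen(1)
          conn, _ = srv.accept()
          time.sleep(2)                  # "processing": no receive call is outstanding
          data = conn.recv(65536)        # bytes sent in the meantime were buffered by the kernel
          print("received after the pause:", data)
          conn.close()
          srv.close()

      threading.Thread(target=server, daemon=True).start()
      time.sleep(0.2)
      client = socket.create_connection(("127.0.0.1", 50007))
      client.sendall(b"sent while the server was busy")
      client.close()
      time.sleep(2.5)                    # keep the process alive until the server prints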

    Read the article

< Previous Page | 9 10 11 12 13 14 15 16 17 18 19 20  | Next Page >