Search Results

Search found 3538 results on 142 pages for 'tcp hijacking'.

Page 7/142 | < Previous Page | 3 4 5 6 7 8 9 10 11 12 13 14  | Next Page >

  • Detect TCP connection close when playing Flash video

    - by JoJo
    On the Flash client side, how do I detect when the server purposely closes the TCP connection to its video stream? I'll need to take action when this occurs - maybe attempt to restart the video or display an error message. Currently, the connection closing and the connection being slow look the same to me: the NetStream object dispatches a NetStream.Play.Stop event in both cases. When the connection is slow, it usually recovers by itself within seconds. I want to take action only when the connection is closed, not when it is slow. Here's what my general setup looks like - it's the basic NetConnection-NetStream-Video setup:

        this.vidConnection = new NetConnection();
        this.vidConnection.addEventListener(AsyncErrorEvent.ASYNC_ERROR, this.connectionAsyncError);
        this.vidConnection.addEventListener(IOErrorEvent.IO_ERROR, this.connectionIoError);
        this.vidConnection.addEventListener(NetStatusEvent.NET_STATUS, this.connectionNetStatus);
        this.vidConnection.connect(null);
        this.vidStream = new NetStream(this.vidConnection);
        this.vidStream.addEventListener(AsyncErrorEvent.ASYNC_ERROR, this.streamAsyncError);
        this.vidStream.addEventListener(IOErrorEvent.IO_ERROR, this.streamIoError);
        this.vidStream.addEventListener(NetStatusEvent.NET_STATUS, this.streamNetStatus);
        this.vid.attachNetStream(this.vidStream);

    None of the error events fire when the server closes the TCP connection or when the connection freezes up; only the NetStream.Play.Stop event fires. Here's a trace of what happens from initially playing the video to the TCP connection closing:

        connection net status = NetConnection.Connect.Success
        playStream(http://192.168.0.44/flv/4d29104a9aefa)
        NetStream.Play.Start
        NetStream.Buffer.Flush
        NetStream.Buffer.Full
        NetStream.Buffer.Empty
        checkDimensions 0 0
        onMetaData
        NetStream.Buffer.Full
        NetStream.Buffer.Flush
        checkDimensions 960 544
        NetStream.Buffer.Empty
        NetStream.Buffer.Flush
        NetStream.Play.Stop
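    Flash's NetStream API hides the socket, but the distinction the asker wants is visible at the plain TCP level: an orderly close yields a zero-byte read, while a slow link yields a timeout. A minimal Python sketch of that socket-level distinction (the address echoes the trace above and is a placeholder; this illustrates the semantics, not a Flash fix):

        import socket

        sock = socket.create_connection(("192.168.0.44", 80), timeout=5.0)
        sock.settimeout(5.0)   # anything slower than this counts as "slow", not "closed"
        try:
            data = sock.recv(4096)
            if not data:
                print("server closed the stream (orderly FIN) - restart or show an error")
            else:
                print("received", len(data), "bytes")
        except socket.timeout:
            print("link is slow - wait it out instead of restarting")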

    Read the article

  • Long-held TCP sessions in an ASMX client

    - by John
    I have an ASP.NET application which talks to a third-party SOAP web service. My application uses an ASMX client proxy (i.e. System.Web.Services.Protocols.SoapHttpClientProtocol). The third-party service uses WCF, although I don't expect that makes much difference. I should note that we're using .NET 3.5 SP1. We haven't customised the proxy or done anything unusual - we're just making standard web service requests and getting back the results. We have encapsulated the proxy reference within a using block so it will get disposed after the response is received.

    We've been told that our application is behaving strangely in its use of TCP sessions. Instead of opening a new TCP session for each request from a new proxy instance (which is what I would have expected it to do), it's apparently keeping several connections alive and re-using them. This is causing some issues at the third-party end, as they are expecting us to be using multiple sessions. Is this a known behaviour for the SoapHttpClientProtocol client proxy? If so, is there any way we can override it so that each request results in a new TCP session? Thanks, John
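    The observed reuse is standard HTTP/1.1 keep-alive: the ASMX proxy sits on .NET's pooled HttpWebRequest connections, so separate proxy instances can still share TCP sessions. The usual way to force one session per request is reported to be overriding GetWebRequest in the generated proxy and setting KeepAlive = false on the underlying HttpWebRequest (worth verifying against the .NET documentation). The wire-level behaviour itself is easy to demonstrate with a short Python sketch (host is a placeholder):

        import http.client

        # Default HTTP/1.1 keep-alive: one TCP session carries many requests.
        conn = http.client.HTTPConnection("example.com")
        conn.request("GET", "/")
        conn.getresponse().read()
        conn.request("GET", "/")          # reuses the same TCP session
        conn.getresponse().read()
        conn.close()

        # Forcing one TCP session per request: ask the server to close after replying.
        for _ in range(2):
            conn = http.client.HTTPConnection("example.com")
            conn.request("GET", "/", headers={"Connection": "close"})
            conn.getresponse().read()
            conn.close()                  # a fresh three-way handshake next iteration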

    Read the article

  • Detecting TCP dropout over an unreliable network

    - by yx
    I am doing some experimentation over an unreliable radio network (home-brewed) using very rudimentary Java socket programming to transfer messages back and forth between the end nodes. The setup is as follows:

        Node A --- Relay Node --- Node B

    One problem I constantly run into is that the connection somehow drops out and neither Node A nor Node B knows that the link is dead, yet each continues to transmit data. The TCP connection does not time out either. I have added a heartbeat message that causes a timeout after a while, but I would still like to know the underlying cause of why TCP does not time out. Here are the options I enable when setting up a socket:

        channel.socket().setKeepAlive(false);
        channel.socket().setTrafficClass(0x08); // for max throughput

    This behavior is strange because it is totally different from what I see on a wired network. On a wired network, I can simulate a disconnection by pulling out the Ethernet cord; once I plug the cord back in, the connection is re-established and messages begin to flow once more. On the radio network, the connection is never re-established: once it silently dies, the messages never resume. Is there some other Java implementation detail or socket setting I can use? And why am I seeing this behavior in the first place? Yes, before anyone says anything, I know TCP is not the preferred choice over an unreliable network, but in this case I wanted to ensure no packet loss.
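    Part of the underlying cause: an idle TCP connection generates no traffic at all, so with keep-alive disabled (as in the snippet above) there is literally nothing to time out; a connection only dies when a send goes unacknowledged long enough to exhaust the kernel's retransmission attempts, which can take many minutes. The kernel-level alternative to a hand-rolled heartbeat is SO_KEEPALIVE with tightened timers - setKeepAlive(true) on the Java side, though the per-probe knobs below are Linux socket options. A Python sketch with illustrative values:

        import socket

        sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
        sock.setsockopt(socket.SOL_SOCKET, socket.SO_KEEPALIVE, 1)
        # Linux-specific knobs (illustrative values; the defaults wait two hours):
        sock.setsockopt(socket.IPPROTO_TCP, socket.TCP_KEEPIDLE, 30)    # idle seconds before first probe
        sock.setsockopt(socket.IPPROTO_TCP, socket.TCP_KEEPINTVL, 10)   # seconds between probes
        sock.setsockopt(socket.IPPROTO_TCP, socket.TCP_KEEPCNT, 3)      # failed probes before reset
        sock.connect(("192.0.2.10", 5000))   # placeholder relay address
        # After roughly 60 s of true silence the kernel now fails the connection,
        # and blocked reads/writes raise OSError (ETIMEDOUT).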

    Read the article

  • WCF net.tcp windows service - call duration and calls outstanding increases over time

    - by Brook
    I have a Windows service which uses the ServiceHost class to host a WCF service using the net.tcp binding. I have done some tweaking to the config to throttle sessions as well as the number of connections, but it seems that every once in a while my "Calls Outstanding" and "Call Duration" counters shoot up and stay up in perfmon. It seems to me I have a leak somewhere, but my code is fairly minimal; I'm relying on ServiceHost to handle the details. Here's how I start my service:

        ServiceHost host = new ServiceHost(type);
        host.Faulted += new EventHandler(Faulted);
        host.Open();

    My Faulted event just does the following (more or less; logging etc. removed):

        if (host.State == CommunicationState.Faulted)
        {
            host.Abort();
        }
        else
        {
            host.Close();
        }
        host = new ServiceHost(type);
        host.Faulted += new EventHandler(Faulted);
        host.Open();

    Here are some snippets from my app.config to show some of the things I've tried:

        <runtime>
          <gcConcurrent enabled="true" />
          <generatePublisherEvidence enabled="false" />
        </runtime>
        .........
        <behaviors>
          <serviceBehaviors>
            <behavior name="Throttled">
              <serviceThrottling maxConcurrentCalls="300"
                                 maxConcurrentSessions="300"
                                 maxConcurrentInstances="300" />
        ..........
        <services>
          <service name="MyService" behaviorConfiguration="Throttled">
            <endpoint address="net.tcp://localhost:49001/MyService"
                      binding="netTcpBinding"
                      bindingConfiguration="Tcp"
                      contract="IMyService">
            </endpoint>
          </service>
        </services>
        ..........
        <netTcpBinding>
          <binding name="Tcp" openTimeout="00:00:10" closeTimeout="00:00:10"
                   portSharingEnabled="true" receiveTimeout="00:05:00" sendTimeout="00:05:00"
                   hostNameComparisonMode="WeakWildcard" listenBacklog="1000"
                   maxConnections="1000">
            <reliableSession enabled="false"/>
            <security mode="None"/>
          </binding>
        </netTcpBinding>
        ..........
        <!-- for my diagnostics -->
        <diagnostics performanceCounters="ServiceOnly" wmiProviderEnabled="true" />

    There's obviously some resource getting tied up, but I thought I had covered everything with my config. I'm only getting about ~150 clients, so I don't think I'm coming up against my "300" limit. "Calls Per Second" stays constant at anywhere from 2-5. The service will run for hours and hours with 0-2 calls outstanding and very low call duration, and then eventually the counters shoot up to 30 calls outstanding and 20 s call duration. Any tips on what might be causing my "calls outstanding" and "call duration" to spike? Where am I leaking? Can anyone point me in the right direction?

    Read the article

  • How do I prevent TCP connection freezes over an OpenVPN network?

    - by Jason R
    New details added at the end of this question; it's possible that I'm zeroing in on the cause. I have a UDP OpenVPN-based VPN set up in tap mode (I need tap because I need the VPN to pass multicast packets, which doesn't seem to be possible with tun networks) with a handful of clients across the Internet. I've been experiencing frequent TCP connection freezes over the VPN. That is, I will establish a TCP connection (e.g. an SSH connection, but other protocols have similar issues), and at some point during the session traffic will cease being transmitted over that TCP session. This seems to be related to points at which large data transfers occur, such as when I execute an ls command in an SSH session or cat a long log file. Some Google searches turn up a number of answers like this previous one on Server Fault, indicating that the likely culprit is an MTU issue: that during periods of high traffic, the VPN is trying to send packets that get dropped somewhere in the pipes between the VPN endpoints. The above-linked answer suggests using the following OpenVPN configuration settings to mitigate the problem:

        fragment 1400
        mssfix

    This should limit the MTU used on the VPN to 1400 bytes and fix the TCP maximum segment size to prevent the generation of any packets larger than that. This seems to mitigate the problem a bit, but I still frequently see the freezes. I've tried a number of sizes as arguments to the fragment directive (1200, 1000, 576), all with similar results. I can't think of any strange network topology between the two ends that could trigger such a problem: the VPN server is running on a pfSense machine connected directly to the Internet, and my client is also connected directly to the Internet at another location.

    One other strange piece of the puzzle: running the tracepath utility seems to band-aid the problem. A sample run looks like:

        [~]$ tracepath -n 192.168.100.91
         1:  192.168.100.90    0.039ms pmtu 1500
         1:  192.168.100.91   40.823ms reached
         1:  192.168.100.91   19.846ms reached
             Resume: pmtu 1500 hops 1 back 64

    The above run is between two clients on the VPN: I initiated the trace from 192.168.100.90 to the destination 192.168.100.91. Both clients were configured with fragment 1200; mssfix; in an attempt to limit the MTU used on the link, so the result is strange: tracepath detected a path MTU of 1500 bytes between the two clients, where I would have expected something smaller due to the fragmentation settings in the OpenVPN configuration. Even stranger, however: if I have a TCP connection in the stalled state (e.g. an SSH session with a directory listing that froze in the middle), then executing the tracepath command shown above causes the connection to start up again! I can't figure out any reasonable explanation for why this would be the case, but I feel like this might be pointing toward a solution that ultimately eradicates the problem. Does anyone have any recommendations for other things to try?

    Edit: I've come back and looked at this a bit further, and have found only more confounding information. I set the OpenVPN connection to fragment at 1400 bytes, as shown above. Then I connected to the VPN from across the Internet and used Wireshark to look at the UDP packets sent to the VPN server while the stall occurred. None were greater than the specified 1400 bytes, so the fragmentation seems to be functioning properly. To verify that even a 1400-byte MTU would be sufficient, I pinged the VPN server using the following (Linux) command:

        ping <host> -s 1450 -M do

    This (I believe) sends a 1450-byte packet with fragmentation disabled (I at least verified that it didn't work when set to an obviously too-large value like 1600 bytes). These pings work just fine; I get replies back from the host with no issue. So maybe this isn't an MTU issue at all; I'm just confused as to what else it might be!

    Edit 2: The rabbit hole just keeps getting deeper: I've now isolated the problem a bit more. It seems to be related to the exact OS that the VPN client uses. I have successfully duplicated the problem on at least three Ubuntu machines (versions 12.04 through 13.04); I can reliably duplicate an SSH connection freeze within a minute or so just by cat-ing a large log file. However, if I do the same test using a CentOS 6 machine as the client, I don't see the problem! I tested using the exact same OpenVPN client version as on the Ubuntu machines and can cat log files for hours without seeing the connection freeze. This seems to provide some insight as to the ultimate cause, but I'm just not sure what that insight is. I have examined the traffic over the VPN using Wireshark. I'm not a TCP expert, so I'm not sure what to make of the gory details, but the gist is that at some point a UDP packet gets dropped due to the limited bandwidth of the Internet link, causing TCP retransmissions inside the VPN tunnel. On the CentOS client, these retransmissions occur properly and things move on happily. At some point with the Ubuntu clients, though, the remote end starts retransmitting the same TCP segment over and over (with the delay increasing between retransmissions). The client sends what looks like a valid TCP ACK to each retransmission, but the remote end still continues to transmit the same TCP segment periodically. This extends ad infinitum and the connection stalls. My question here would be: does anyone have any recommendations for how to troubleshoot and/or determine the root cause of this TCP issue? It's as if the remote end isn't accepting the ACK messages sent by the VPN client. One common difference between the CentOS node and the various Ubuntu releases is that Ubuntu has a much more recent kernel (from 3.2 in Ubuntu 12.04 to 3.8 in 13.04). A pointer to some new kernel bug, maybe? I'm assuming that if that were so, I wouldn't be the only one experiencing the problem; this doesn't seem like a particularly exotic setup.
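    For what it's worth, the ping -s 1450 -M do test can be reproduced programmatically, which makes it easy to script across several hosts and payload sizes. A rough Python equivalent, assuming Linux (the three numeric constants come from <linux/in.h> and are hardcoded because the socket module does not export them on all versions; the address and sizes are illustrative):

        import socket

        IP_MTU_DISCOVER = 10   # Linux socket option (<linux/in.h>)
        IP_PMTUDISC_DO  = 2    # always set the DF bit; oversized sends fail with EMSGSIZE
        IP_MTU          = 14   # read back the kernel's cached path-MTU estimate

        s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
        s.setsockopt(socket.IPPROTO_IP, IP_MTU_DISCOVER, IP_PMTUDISC_DO)
        s.connect(("192.168.100.91", 9))   # discard port; only the IP path matters

        for size in (1200, 1400, 1450, 1500):
            try:
                s.send(b"x" * size)
                # Note: UDP gives no reply, so a probe dropped in transit just
                # vanishes; the kernel only learns via ICMP frag-needed feedback.
                print(size, "bytes: sent without local fragmentation")
            except OSError as exc:
                print(size, "bytes: refused -", exc)   # EMSGSIZE: PMTU is smaller

        print("kernel PMTU estimate:", s.getsockopt(socket.IPPROTO_IP, IP_MTU))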

    Read the article

  • Blocking an IP in Webmin

    - by Dan J
    I've been checking my /var/log/secure log recently and have seen the same bot trying to brute-force its way onto my CentOS server running Webmin. I created a chain + rule in Networking -> Linux Firewall ("Drop if source is 113.106.88.146"), but I'm still seeing the attempted logins in the log:

        Jun  6 10:52:18 CentOS5 sshd[9711]: pam_unix(sshd:auth): check pass; user unknown
        Jun  6 10:52:18 CentOS5 sshd[9711]: pam_succeed_if(sshd:auth): error retrieving information about user larry
        Jun  6 10:52:19 CentOS5 sshd[9711]: Failed password for invalid user larry from 113.106.88.146 port 49328 ssh2

    Here is the contents of /etc/sysconfig/iptables:

        # Generated by webmin
        *filter
        :banned-ips - [0:0]
        -A INPUT -p udp -m udp --dport ftp-data -j ACCEPT
        -A INPUT -p udp -m udp --dport ftp -j ACCEPT
        -A INPUT -p udp -m udp --dport domain -j ACCEPT
        -A INPUT -p tcp -m tcp --dport 20000 -j ACCEPT
        -A INPUT -p tcp -m tcp --dport 10000 -j ACCEPT
        -A INPUT -p tcp -m tcp --dport https -j ACCEPT
        -A INPUT -p tcp -m tcp --dport http -j ACCEPT
        -A INPUT -p tcp -m tcp --dport imaps -j ACCEPT
        -A INPUT -p tcp -m tcp --dport imap -j ACCEPT
        -A INPUT -p tcp -m tcp --dport pop3s -j ACCEPT
        -A INPUT -p tcp -m tcp --dport pop3 -j ACCEPT
        -A INPUT -p tcp -m tcp --dport ftp-data -j ACCEPT
        -A INPUT -p tcp -m tcp --dport ftp -j ACCEPT
        -A INPUT -p tcp -m tcp --dport domain -j ACCEPT
        -A INPUT -p tcp -m tcp --dport smtp -j ACCEPT
        -A INPUT -p tcp -m tcp --dport ssh -j ACCEPT
        -A banned-ips -s 113.106.88.146 -j DROP
        COMMIT
        # Completed
        # Generated by webmin
        *mangle
        :FORWARD ACCEPT [0:0]
        :INPUT ACCEPT [0:0]
        :OUTPUT ACCEPT [0:0]
        :PREROUTING ACCEPT [0:0]
        :POSTROUTING ACCEPT [0:0]
        COMMIT
        # Completed
        # Generated by webmin
        *nat
        :OUTPUT ACCEPT [0:0]
        :PREROUTING ACCEPT [0:0]
        :POSTROUTING ACCEPT [0:0]
        COMMIT
        # Completed
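    Judging from that listing, the likely cause is visible: the banned-ips chain contains the DROP rule, but no rule in INPUT ever jumps to it, so SSH traffic matches the --dport ssh ACCEPT rule and the ban is never consulted. A probable fix (worth confirming against how Webmin orders its rules) is to reference the custom chain at the top of INPUT:

        # Jump to the banned-ips chain before any ACCEPT rule can match
        -A INPUT -j banned-ips
        -A INPUT -p udp -m udp --dport ftp-data -j ACCEPT
        ...

    With the jump in place, packets from 113.106.88.146 hit the DROP before any ACCEPT can match.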

    Read the article

  • what to disable on Windows server? (by list of opened ports)

    - by javapowered
    I'm using an HP DL360p Gen8 for HFT trading. I want to disable any network services I don't need, because I also want to try disabling the Windows Firewall to test whether this improves performance. Could someone suggest what is currently turned on and can likely be turned off, given the port list below? I only need RDP (I also drag & drop files via RDP).

        Proto  Local Address     Foreign Address  State
        TCP    0.0.0.0:135       Term:0           LISTENING
        TCP    0.0.0.0:445       Term:0           LISTENING
        TCP    0.0.0.0:2301      Term:0           LISTENING
        TCP    0.0.0.0:2381      Term:0           LISTENING
        TCP    0.0.0.0:3389      Term:0           LISTENING
        TCP    0.0.0.0:47001     Term:0           LISTENING
        TCP    0.0.0.0:49152     Term:0           LISTENING
        TCP    0.0.0.0:49153     Term:0           LISTENING
        TCP    0.0.0.0:49154     Term:0           LISTENING
        TCP    0.0.0.0:49156     Term:0           LISTENING
        TCP    0.0.0.0:49157     Term:0           LISTENING
        TCP    HIDEN:139         Term:0           LISTENING
        TCP    HIDEN:3389        HIDEN:63373      ESTABLISHED
        TCP    HIDEN:139         Term:0           LISTENING
        TCP    HIDEN:139         Term:0           LISTENING
        TCP    [::]:135          Term:0           LISTENING
        TCP    [::]:445          Term:0           LISTENING
        TCP    [::]:2301         Term:0           LISTENING
        TCP    [::]:2381         Term:0           LISTENING
        TCP    [::]:3389         Term:0           LISTENING
        TCP    [::]:47001        Term:0           LISTENING
        TCP    [::]:49152        Term:0           LISTENING
        TCP    [::]:49153        Term:0           LISTENING
        TCP    [::]:49154        Term:0           LISTENING
        TCP    [::]:49156        Term:0           LISTENING
        TCP    [::]:49157        Term:0           LISTENING
        UDP    0.0.0.0:68        *:*
        UDP    0.0.0.0:123       *:*
        UDP    0.0.0.0:161       *:*
        UDP    0.0.0.0:500       *:*
        UDP    0.0.0.0:4500      *:*
        UDP    0.0.0.0:5355      *:*
        UDP    HIDEN:137         *:*
        UDP    HIDEN:138         *:*
        UDP    HIDEN:137         *:*
        UDP    HIDEN:138         *:*
        UDP    HIDEN:137         *:*
        UDP    HIDEN:138         *:*
        UDP    [::]:123          *:*
        UDP    [::]:161          *:*
        UDP    [::]:500          *:*
        UDP    [::]:4500         *:*
        UDP    [::]:5355         *:*

    Read the article

  • Injecting raw TCP packets with Python

    - by Evgeniy Arbatov
    Hello! What would be a suitable way to inject a raw TCP packet with Python? For example, I have a payload consisting of hexadecimal numbers and I want to send that sequence of hexadecimal numbers to a network daemon, so that if I choose to send 'abcdef', I see 'abcdef' on the wire too - not '6162636566' as in the case of:

        new = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
        new.connect(('127.0.0.1', 9999))
        new.send('abcdef')

    Can I use Python's SOCK_RAW for this purpose? If so, can you give me an example of sending raw TCP packets with SOCK_RAW (since I did not get it working myself)? Thanks! Evgeniy
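    A sketch of the SOCK_RAW approach on Linux (requires root). With IPPROTO_TCP the kernel still prepends the IP header, so the bytes handed to sendto() must be a hand-built TCP header plus payload, including the checksum computed over the pseudo-header. Addresses and ports below are placeholders, and without sequence numbers belonging to a real session the receiving stack will simply discard or reset this segment - the sketch shows the mechanics only. (Worth noting, too: send('abcdef') on an ordinary socket does put exactly the bytes 61 62 63 64 65 66 on the wire; that hex sequence is 'abcdef' in ASCII.)

        import socket
        import struct

        def tcp_checksum(src_ip: str, dst_ip: str, segment: bytes) -> int:
            # Internet checksum over the TCP pseudo-header plus the segment.
            pseudo = (socket.inet_aton(src_ip) + socket.inet_aton(dst_ip)
                      + struct.pack("!BBH", 0, socket.IPPROTO_TCP, len(segment)))
            data = pseudo + segment
            if len(data) % 2:
                data += b"\x00"
            total = sum(struct.unpack("!%dH" % (len(data) // 2), data))
            total = (total >> 16) + (total & 0xFFFF)
            total += total >> 16
            return ~total & 0xFFFF

        src, dst = "192.0.2.1", "192.0.2.2"      # placeholder addresses
        sport, dport = 54321, 9999               # placeholder ports
        payload = b"abcdef"

        # 20-byte TCP header: ports, seq, ack, data offset, flags (PSH|ACK), window, cksum, urg.
        fields = (sport, dport, 0, 0, 5 << 4, 0x18, 8192, 0, 0)
        header = struct.pack("!HHLLBBHHH", *fields)
        cksum = tcp_checksum(src, dst, header + payload)
        header = struct.pack("!HHLLBBHHH", *fields[:7], cksum, 0)

        # With IPPROTO_TCP the kernel builds the IP header; we supply TCP header + data.
        raw = socket.socket(socket.AF_INET, socket.SOCK_RAW, socket.IPPROTO_TCP)
        raw.sendto(header + payload, (dst, 0))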

    Read the article

  • Java TCP keep-alive for a master server

    - by asmo
    Context: a master server (Java, TCP) monitors a list of hosted games (a different machine for the master server and for each hosted game server). Any user can host a game on his PC, and hosted games can last weeks or months.

    Need: knowing when hosted game servers are closed or no longer reachable.

    Restriction 1: I can't rely on a hosted server's "gone offline" update message, since those messages may never arrive (power down, Internet link cut, etc.).

    Restriction 2: I'm not sure about TCP's built-in keep-alive, since it would mean a 24/7 open socket with each hosted server (correct me if I'm wrong).

    Any thoughts?
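    One common pattern here (a sketch, not from the post) is an application-level heartbeat plus a reaper: each hosted server sends a tiny ping on its long-lived connection, and the master declares it dead after a few missed intervals. Thousands of mostly idle open sockets are cheap; it is silence, not the open socket, that signals trouble. A Python sketch with illustrative names and intervals (the accept loop that spawns handle_game_server threads is not shown):

        import socket
        import threading
        import time

        PING_INTERVAL = 30.0                 # illustrative
        DEAD_AFTER = 3 * PING_INTERVAL       # three missed pings => unreachable

        last_seen: dict[socket.socket, float] = {}
        lock = threading.Lock()

        def handle_game_server(conn: socket.socket) -> None:
            # One thread per hosted game; any received byte counts as a heartbeat.
            with lock:
                last_seen[conn] = time.monotonic()
            try:
                while True:
                    data = conn.recv(64)
                    if not data:             # orderly close, or reaped below
                        break
                    with lock:
                        last_seen[conn] = time.monotonic()
            except OSError:
                pass
            finally:
                with lock:
                    last_seen.pop(conn, None)
                conn.close()

        def reaper() -> None:
            # Declares silent servers dead even when no FIN ever arrives
            # (power cut, Internet link severed, and so on).
            while True:
                time.sleep(PING_INTERVAL)
                cutoff = time.monotonic() - DEAD_AFTER
                with lock:
                    dead = [c for c, seen in last_seen.items() if seen < cutoff]
                for conn in dead:
                    try:
                        conn.shutdown(socket.SHUT_RDWR)   # unblocks the recv() above
                    except OSError:
                        pass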

    Read the article

  • Windows TCP connection - Still alive after process terminate

    - by Kartlee
    I run a license server on Linux and a process on Windows that checks out tokens from it. The Windows process makes a TCP socket connection to the server to communicate, and once the process is closed, the tokens are checked back in to the server. But I sometimes see the connection show as ESTABLISHED in netstat output even after the Windows process has terminated. This happens when the process has been running for a long time before terminating; it takes 2-3 hours for the connection to disappear from the netstat output:

        TCP    BABDT350:4505    180.190.40.34:51847    ESTABLISHED    2832    [app.EXE]

    Can you tell me whether this is a network stack configuration issue in Windows? Is it possible for the connection to go away as soon as the process terminates?

    Read the article

  • adding SSL to microchip Generic TCP server application

    - by Surjya Narayana Padhi
    Has anybody upgraded the code of the generic TCP server application provided by Microchip to SSL? I added a new listener port to the existing server socket, but it is still not in the TCPPutIsReady state. When I tried to connect through the SSH client Tera Term, it asked for a username and password - but is the client even required to provide a username and password? I'm a bit new to SSL, so please let me know the steps to connect to an SSL server using Tera Term. Another doubt: can I use a TCP server socket without using an HTTP, FTP, or Telnet session?
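    One point of confusion worth untangling: the username/password prompt comes from SSH, which is a different protocol from SSL/TLS despite the similar name, so Tera Term's SSH mode is not a valid test of an SSL-enabled TCP server. A TLS client that just completes the handshake is a better probe. A minimal Python sketch (the device address is a placeholder; verification is disabled because embedded boards typically ship self-signed certificates - do not do this outside a test rig):

        import socket
        import ssl

        ctx = ssl.create_default_context()
        ctx.check_hostname = False
        ctx.verify_mode = ssl.CERT_NONE   # test rig only: the device cert is self-signed

        with socket.create_connection(("192.168.1.100", 443), timeout=5) as tcp:
            with ctx.wrap_socket(tcp) as tls:
                # If this prints, the TLS handshake itself works; no username or
                # password is involved at this layer.
                print("negotiated:", tls.version(), tls.cipher())
                tls.sendall(b"hello")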

    Read the article

  • non blocking tcp connect with epoll

    - by doccarcass
    My Linux application performs a non-blocking TCP connect syscall and then uses epoll_wait to detect completion of the three-way handshake. Sometimes epoll_wait returns with both POLLOUT and POLLERR revents set for the same socket descriptor. I would like to understand what's going on at the TCP level. I'm not able to reproduce it on demand. My guess is that between two calls to epoll_wait inside my event loop we had a SYN+ACK/ACK/FIN sequence, but again, I'm not able to reproduce it. Any clue? Regards, Seb
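    The standard way to find out what happened is to read SO_ERROR when the error event fires: for a non-blocking connect, EPOLLOUT alone means the handshake completed, while EPOLLOUT together with EPOLLERR means the connect failed and SO_ERROR holds the reason (ECONNREFUSED after a RST, ETIMEDOUT, an ICMP error, or a connection that was established and immediately torn down). A minimal Python sketch of the same pattern (Linux; the address is a placeholder):

        import errno
        import select
        import socket

        sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
        sock.setblocking(False)
        rc = sock.connect_ex(("192.0.2.10", 80))   # returns immediately
        assert rc in (0, errno.EINPROGRESS)

        ep = select.epoll()
        ep.register(sock.fileno(), select.EPOLLOUT)

        for fd, events in ep.poll(timeout=5.0):
            if events & select.EPOLLERR:
                # Connect failed (or succeeded and was torn down): SO_ERROR says why.
                reason = sock.getsockopt(socket.SOL_SOCKET, socket.SO_ERROR)
                print("connect failed:", errno.errorcode.get(reason, reason))
            elif events & select.EPOLLOUT:
                print("three-way handshake completed")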

    Read the article

  • Bind Data after TCP receive.

    - by gtas
    I have had this strange problem all day now; I don't know if you have handled something similar. I used two different serializers, so now I know the problem isn't there. I'm sending some data over TCP sockets: serialize - send - deserialize. Everything works OK; I can get my objects, search through them, and use their properties. But if, for example, I receive a BusinessObject[] and convert it into a List<BusinessObject>, then bind the list with Control.DataSource = businessObjectList; - BOOM! NotSupportedException. I tried it with 3 different controls; same behaviour. My head is empty of ideas right now! The send side runs on the desktop Framework and the receive side on the Compact Framework, but I don't think this has anything to do with it. I wish for an explanation of this!

    Read the article

  • Max tcp/ip connections on Windows Server 2008

    - by zendar
    I have a .NET service that listens on a single port over TCP. Clients connect and then transmit data for some time (from a few minutes to several hours). Is there any limit on the number of connections on Windows Server 2008? I have not hit any; right now there are up to 50 users. The plan is to have thousands of users, so I'd like to know if there will be problems in the future. Edit: as Cloud answered, it seems that there are some limits in some editions of Windows Server 2008. Is there any reference on those limits? I tried Google, but it returns articles on the limit on half-open TCP connections.

    Read the article

  • Increasing TCP/IP Window size

    - by Lior
    I am trying to send messages over TCP/IP between two servers. I want to send a message that is 30 KB, and I want it to arrive as a whole - I don't want the TCP protocol to break it into segments. I am using two Windows Server 2008 R2 machines; the client and the server are coded in C#. I tried using:

        tcpclnt.SendBufferSize = 100000;
        tcpclnt.Client.DontFragment = true;

    and the same at the server. I also tried configuring the window size of the server (by editing the registry).
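    Two facts worth stating plainly: TCP offers a byte stream, not messages, so a 30 KB send will cross the wire as multiple segments (the MSS on Ethernet is around 1460 bytes) regardless of buffer settings, and the window size only controls how much unacknowledged data can be in flight, not segmentation. What "receive it as a whole" actually requires is framing at the application layer. A sketch of length-prefixed framing, in Python for brevity (the same loop translates directly to C# NetworkStream code):

        import socket
        import struct

        def send_msg(sock: socket.socket, payload: bytes) -> None:
            # A 4-byte length prefix lets the receiver reassemble the whole
            # message, however TCP chooses to segment it in transit.
            sock.sendall(struct.pack("!I", len(payload)) + payload)

        def recv_exact(sock: socket.socket, n: int) -> bytes:
            buf = b""
            while len(buf) < n:
                chunk = sock.recv(n - len(buf))
                if not chunk:
                    raise ConnectionError("peer closed mid-message")
                buf += chunk
            return buf

        def recv_msg(sock: socket.socket) -> bytes:
            (length,) = struct.unpack("!I", recv_exact(sock, 4))
            return recv_exact(sock, length)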

    Read the article

  • Giving Users an Option Between UDP & TCP?

    - by cam
    After studying the TCP/UDP difference all week, I just can't decide which to use. I have to send a large amount of constant sensor data while at the same time sending important data that can't be lost. This made a perfect split for me to use both, but then I read a paper (http://www.isoc.org/INET97/proceedings/F3/F3_1.HTM) that says using both causes packet/performance loss in the other. Is there any issue with letting the user choose which protocol to use (if I program both server-side) instead of choosing myself? Are there any disadvantages to this? The only other solution I came up with is to use UDP and, if there seems to be too great a loss of packets, switch to TCP (client-side).

    Read the article

  • How to retain one million simultaneous TCP connections?

    - by cow
    I am to design a server that needs to serve millions of clients that are simultaneously connected with the server via TCP. The data traffic between the server and the clients will be sparse, so bandwidth issues can be ignored. One important requirement is that whenever the server needs to send data to any client it should use the existing TCP connection instead of opening a new connection toward the client (because the client may be behind a firewall). Does anybody know how to do this, and what hardware/software is needed (at the least cost)? Thanks in advance for any suggestions.
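    On the software side, the standard shape of the answer is an event-driven server (epoll/kqueue under the hood), where an idle connection costs only kernel state and a little memory, plus enough file-descriptor headroom (ulimit -n) and, beyond a few hundred thousand sockets per box, horizontal sharding across machines. A toy sketch of that shape in Python's asyncio (port and buffer size are illustrative):

        import asyncio

        clients: set[asyncio.StreamWriter] = set()

        async def handle(reader: asyncio.StreamReader, writer: asyncio.StreamWriter) -> None:
            clients.add(writer)
            try:
                while await reader.read(4096):   # traffic is sparse; mostly parked here
                    pass
            finally:
                clients.discard(writer)
                writer.close()

        async def push(writer: asyncio.StreamWriter, data: bytes) -> None:
            # Server-initiated sends reuse the established connection, which is
            # exactly what lets them traverse the client's firewall/NAT.
            writer.write(data)
            await writer.drain()

        async def main() -> None:
            server = await asyncio.start_server(handle, "0.0.0.0", 9000)
            async with server:
                await server.serve_forever()

        asyncio.run(main())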

    Read the article

  • Reading TCP Sequence Number Before Sending a Packet

    - by Sadeq Dousti
    I'm writing a C/C++ client-server program under Linux. Assume a message m is to be sent from the client to the server. Is it possible for the client to read the TCP sequence number of the packet which will carry m, before sending m? In fact, I'd like to append this sequence number to m, and send the resulting packet. (Well, things are more complicated, but let's keep it that simple. In fact, I'd like to apply authentication info to this sequence number, and then append it to m.) Moreover, is it possible for the server to read the TCP sequence number of the packet carrying m?

    Read the article

  • Java Web Server with Jetty - TCP Connections Taking Long

    - by daysleeper
    I have an application with fairly high traffic (20K req/min) running on the JVM with a Jetty servlet container on Ubuntu. Below is my Jetty configuration: 10 20 2000 2 When I analyze the network traffic, I notice that it sometimes takes a long time to establish TCP connections on the port Jetty is running on. The long connections vary between 3.0 s and 9.0 s. The port is configured to accept the MAX number of TCP connections. Do you know what might be causing the delay in accepting connections? Thanks

    Read the article

  • Session hijacking prevention...how far will my script get me? additional prevention procedures?

    - by Yusaf Khaliq
    When the user logs in, the current session variables are set:

        $_SESSION['user']['timeout'] = time();
        $_SESSION['user']['ip'] = $_SERVER['REMOTE_ADDR'];
        $_SESSION['user']['agent'] = $_SERVER['HTTP_USER_AGENT'];

    In my common.php page (required on ALL PHP pages) I have used the script below, which resets a 15-minute timer each time the user is active. It also checks the IP address and the user agent; if they do not match those recorded when the user first logged in (when the session was first set), the session is unset. The session is likewise unset after 15 minutes of inactivity. Is what I have done a good method for preventing session hijacking? Is it secure, and is it enough? If not, what more can be done?

        if(!empty($_SESSION['user'])){
            if ($_SESSION['user']['timeout'] + 15 * 60 < time()) {
                unset($_SESSION['user']);
            } else {
                $_SESSION['user']['timeout'] = time();
                if($_SESSION['user']['ip'] != $_SERVER['REMOTE_ADDR']){
                    unset($_SESSION['user']);
                }
                if($_SESSION['user']['agent'] != $_SERVER['HTTP_USER_AGENT']){
                    unset($_SESSION['user']);
                }
            }
        }

    Read the article

  • mexTcpBinding in WCF - IMetadataExchange errors

    - by David
    I'm wanting to get a WCF-over-TCP service working. I was having some problems modifying my own project, so I thought I'd start with the "base" WCF template included in VS2008. Here is the initial WCF App.config; when I run the service, the WCF Test Client can work with it fine:

        <?xml version="1.0" encoding="utf-8" ?>
        <configuration>
          <system.web>
            <compilation debug="true" />
          </system.web>
          <system.serviceModel>
            <services>
              <service name="WcfTcpTest.Service1" behaviorConfiguration="WcfTcpTest.Service1Behavior">
                <host>
                  <baseAddresses>
                    <add baseAddress="http://localhost:8731/Design_Time_Addresses/WcfTcpTest/Service1/" />
                  </baseAddresses>
                </host>
                <endpoint address="" binding="wsHttpBinding" contract="WcfTcpTest.IService1">
                  <identity>
                    <dns value="localhost"/>
                  </identity>
                </endpoint>
                <endpoint address="mex" binding="mexHttpBinding" contract="IMetadataExchange"/>
              </service>
            </services>
            <behaviors>
              <serviceBehaviors>
                <behavior name="WcfTcpTest.Service1Behavior">
                  <serviceMetadata httpGetEnabled="True"/>
                  <serviceDebug includeExceptionDetailInFaults="True" />
                </behavior>
              </serviceBehaviors>
            </behaviors>
          </system.serviceModel>
        </configuration>

    This works perfectly; no issues at all. I figured changing it from HTTP to TCP would be trivial: change the bindings to their TCP equivalents and remove the serviceMetadata element with its httpGetEnabled attribute:

        <?xml version="1.0" encoding="utf-8" ?>
        <configuration>
          <system.web>
            <compilation debug="true" />
          </system.web>
          <system.serviceModel>
            <services>
              <service name="WcfTcpTest.Service1" behaviorConfiguration="WcfTcpTest.Service1Behavior">
                <host>
                  <baseAddresses>
                    <add baseAddress="net.tcp://localhost:1337/Service1/" />
                  </baseAddresses>
                </host>
                <endpoint address="" binding="netTcpBinding" contract="WcfTcpTest.IService1">
                  <identity>
                    <dns value="localhost"/>
                  </identity>
                </endpoint>
                <endpoint address="mex" binding="mexTcpBinding" contract="IMetadataExchange"/>
              </service>
            </services>
            <behaviors>
              <serviceBehaviors>
                <behavior name="WcfTcpTest.Service1Behavior">
                  <serviceDebug includeExceptionDetailInFaults="True" />
                </behavior>
              </serviceBehaviors>
            </behaviors>
          </system.serviceModel>
        </configuration>

    But when I run this I get this error in the WCF Service Host:

        System.InvalidOperationException: The contract name 'IMetadataExchange' could not be found in the list of contracts implemented by the service Service1. Add a ServiceMetadataBehavior to the configuration file or to the ServiceHost directly to enable support for this contract.

    I get the feeling that you can't send metadata using TCP, but if that's the case, why is there a mexTcpBinding option?

    Read the article

  • How to correctly relay TCP traffic between sockets?

    - by flukes1
    I'm trying to write some Python code that will establish an invisible relay between two TCP sockets. My current technique is to set up two threads, each one reading and subsequently writing 1 KB of data at a time in a particular direction (i.e. one thread for A to B, one thread for B to A). This works for some applications and protocols, but it isn't foolproof: sometimes particular applications behave differently when running through this Python-based relay. Some even crash. I think this is because when I finish performing a read on socket A, the program running there considers its data to have already arrived at B, when in fact I - the devious man in the middle - have yet to send it to B. In a situation where B isn't ready to receive the data (whereby send() blocks for a while), we are now in a state where A believes it has successfully sent data to B, yet I am still holding the data, waiting for the send() call to execute. I think this is the cause of the difference in behaviour that I've found in some applications while using my current relaying code. Have I missed something, or does that sound correct? If so, my real question is: is there a way around this problem? Is it possible to read from socket A only when we know that B is ready to receive data? Or is there another technique I can use to establish a truly 'invisible' two-way relay between [already open and established] TCP sockets?
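    For what it's worth, a sender never actually knows its data "arrived at B" - a successful send() only means the bytes entered the local socket buffer - but a relay can still distort flow control. The usual cure is to let backpressure propagate: if the write toward B blocks, stop reading from A, so A's sender eventually fills its own buffers and slows down just as it would on a direct connection. With one blocking thread per direction this happens naturally, and forwarding half-closes with shutdown() keeps EOF semantics intact. A minimal sketch (assumes two already-connected sockets a and b):

        import socket
        import threading

        def pump(src: socket.socket, dst: socket.socket) -> None:
            # Blocking sendall(): if dst's receive window is full, this thread
            # stalls, which stops the recv() on src, so backpressure propagates
            # through the relay instead of data piling up "in the middle".
            try:
                while True:
                    data = src.recv(4096)
                    if not data:
                        break
                    dst.sendall(data)
            finally:
                try:
                    dst.shutdown(socket.SHUT_WR)   # forward the half-close (EOF)
                except OSError:
                    pass

        def relay(a: socket.socket, b: socket.socket) -> None:
            t1 = threading.Thread(target=pump, args=(a, b))
            t2 = threading.Thread(target=pump, args=(b, a))
            t1.start(); t2.start()
            t1.join(); t2.join()
            a.close(); b.close()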

    Read the article

  • Adding web reference on client when using Net.TCP

    - by Marko
    I am trying to use Net.TCP in my WCF service, which is self-hosted. When I try to add this service reference as a web reference in my client, I am not able to access the classes and methods of the service. Does anyone have an idea how to achieve this - how can I add a web reference in this case? My service has one method (GetNumber) that returns an int.

    WebService:

        public class WebService : IWebService
        {
            public int GetNumber(int num)
            {
                return num + 1;
            }
        }

    Service contract code:

        [ServiceContract]
        public interface IWebService
        {
            [OperationContract]
            int GetNumber(int num);
        }

    WCF service code:

        ServiceHost host = new ServiceHost(typeof(WebService));
        host.AddServiceEndpoint(typeof(IWebService), new NetTcpBinding(),
            new Uri("net.tcp://" + Dns.GetHostName() + ":1255/WebService"));
        NetTcpBinding binding = new NetTcpBinding();
        binding.TransferMode = TransferMode.Streamed;
        binding.ReceiveTimeout = TimeSpan.MaxValue;
        binding.MaxReceivedMessageSize = long.MaxValue;
        Console.WriteLine("{0}", Dns.GetHostName().ToString());
        Console.WriteLine("Opening Web Service...");
        host.Open();
        Console.WriteLine("Web Service is running on port {0}", 1255);
        Console.WriteLine("Press <ENTER> to EXIT");
        Console.ReadLine();

    This works fine. The only problem is how to add a reference to this service in my client application. I just want to send a number and receive an answer. Can anyone help me?

    Read the article

< Previous Page | 3 4 5 6 7 8 9 10 11 12 13 14  | Next Page >