Search Results

Search found 44476 results on 1780 pages for 'wcf test client'.


  • Why is my WCF Rest Service on IIS7 Authenticating TWICE!?!?

    - by TheAggie
    Ok, if someone could shed some light on this for me, I would greatly appreciate it. So here we go. I had a REST service running fine the other day, but after I accidentally overwrote the web.config all hell broke loose. I've spent the past day and a half trying to sort things out, but I can't seem to figure out what is missing or misplaced. I've designed this service around WCF Rest Contrib (http://wcfrestcontrib.codeplex.com)'s authentication process. Now, I can get this working fine on my localhost with the current web.config (minus the endpoint entry), but once I upload it to discountasp and select "Basic Authentication" in the IIS7 Manager, it appears that I'm getting authenticated twice! Once using my discountasp.net user/pass, and then the next time using the application user/pass. Unfortunately I only provide one set of credentials and don't want to hard-code my discountasp account info into the app. Like I said before, this worked fine a few days ago. Anyway, here is my web.config as it is now:

        <?xml version="1.0"?>
        <configuration>
          <connectionStrings>
            <add name="SQL2008_ConnectionString" connectionString="Data Source=sql2k8xx.discountasp.net;Initial Catalog=SQL2008_xx;Persist Security Info=True;User ID=SQL2008_xx_user;Password=myPass" providerName="System.Data.SqlClient" />
          </connectionStrings>
          <system.web>
            <httpRuntime maxRequestLength="204800" executionTimeout="3600"/>
            <compilation debug="true">
              <assemblies>
                <add assembly="System.Core, Version=3.5.0.0, Culture=neutral, PublicKeyToken=B77A5C561934E089"/>
                <add assembly="System.Web.Extensions, Version=3.5.0.0, Culture=neutral, PublicKeyToken=31BF3856AD364E35"/>
              </assemblies>
            </compilation>
            <httpModules>
              <add name="ServiceAnonymityModule" type="WcfRestContrib.Web.ServiceAnonymityModule, WcfRestContrib"/>
            </httpModules>
          </system.web>
          <system.codedom>
            <compilers>
              <compiler language="c#;cs;csharp" extension=".cs" warningLevel="4" type="Microsoft.CSharp.CSharpCodeProvider, System, Version=2.0.0.0, Culture=neutral, PublicKeyToken=b77a5c561934e089">
                <providerOption name="CompilerVersion" value="v3.5"/>
                <providerOption name="WarnAsError" value="false"/>
              </compiler>
            </compilers>
          </system.codedom>
          <system.webServer>
            <validation validateIntegratedModeConfiguration="false"/>
            <modules>
              <remove name="ServiceAnonymityModule"/>
              <add name="ServiceAnonymityModule" type="WcfRestContrib.Web.ServiceAnonymityModule, WcfRestContrib"/>
            </modules>
            <handlers>
              <remove name="WebServiceHandlerFactory-Integrated"/>
            </handlers>
          </system.webServer>
          <system.diagnostics>
            <trace autoflush="true" />
          </system.diagnostics>
          <system.serviceModel>
            <serviceHostingEnvironment aspNetCompatibilityEnabled="false">
              <baseAddressPrefixFilters>
                <add prefix="http://www.mydomain.com/myServiceBaseAddress"/>
              </baseAddressPrefixFilters>
            </serviceHostingEnvironment>
            <extensions>
              <behaviorExtensions>
                <add name="webAuthentication" type="WcfRestContrib.ServiceModel.Configuration.WebAuthentication.ConfigurationBehaviorElement, WcfRestContrib, Version=1.0.5.0, Culture=neutral, PublicKeyToken=89183999a8dc93b5"/>
                <add name="errorHandler" type="WcfRestContrib.ServiceModel.Configuration.ErrorHandler.BehaviorElement, WcfRestContrib, Version=1.0.5.0, Culture=neutral, PublicKeyToken=89183999a8dc93b5"/>
                <add name="webFormatter" type="WcfRestContrib.ServiceModel.Configuration.WebDispatchFormatter.ConfigurationBehaviorElement, WcfRestContrib, Version=1.0.5.0, Culture=neutral, PublicKeyToken=89183999a8dc93b5"/>
                <add name="webErrorHandler" type="WcfRestContrib.ServiceModel.Configuration.WebErrorHandler.ConfigurationBehaviorElement, WcfRestContrib, Version=1.0.5.0, Culture=neutral, PublicKeyToken=89183999a8dc93b5"/>
              </behaviorExtensions>
            </extensions>
            <bindings>
              <customBinding>
                <binding name="HttpStreamedRest">
                  <httpTransport maxReceivedMessageSize="209715200" manualAddressing="true" />
                </binding>
                <binding name="HttpsStreamedRest">
                  <httpsTransport maxReceivedMessageSize="209715200" manualAddressing="true" />
                </binding>
              </customBinding>
            </bindings>
            <behaviors>
              <serviceBehaviors>
                <behavior name="Rest">
                  <webAuthentication requireSecureTransport="false" authenticationHandlerType="WcfRestContrib.ServiceModel.Dispatcher.WebBasicAuthenticationHandler, WcfRestContrib" usernamePasswordValidatorType="MyLibrary.Runtime.SecurityValidator, MyLibrary" source="MyRESTServiceRealm"/>
                  <webFormatter>
                    <formatters defaultMimeType="application/xml">
                      <formatter mimeTypes="application/xml,text/xml" type="WcfRestContrib.ServiceModel.Dispatcher.Formatters.PoxDataContract, WcfRestContrib"/>
                      <formatter mimeTypes="application/json" type="WcfRestContrib.ServiceModel.Dispatcher.Formatters.DataContractJson, WcfRestContrib"/>
                      <formatter mimeTypes="application/x-www-form-urlencoded" type="WcfRestContrib.ServiceModel.Dispatcher.Formatters.FormUrlEncoded, WcfRestContrib"/>
                    </formatters>
                  </webFormatter>
                  <errorHandler errorHandlerType="WcfRestContrib.ServiceModel.Web.WebErrorHandler, WcfRestContrib"/>
                  <webErrorHandler returnRawException="true" logHandlerType="MyLibrary.Runtime.LogHandler, MyLibrary" unhandledErrorMessage="An error has occurred processing your request. Please contact technical support for further assistance."/>
                </behavior>
              </serviceBehaviors>
            </behaviors>
          </system.serviceModel>
        </configuration>

    So, whenever I upload this and change the IIS setting to Basic Authentication, it looks like IIS is trying to use the default handler for authentication: if I try to enter my web app user/pass, I get an error screen which has the following detailed information about the module/handler:

        Detailed Error Information
        Module: IIS Web Core
        Notification: AuthenticateRequest
        Handler: svc-ISAPI-2.0
        Error Code: 0x80070005
        Requested URL: http://www.mydomain.com:80/MyService.../MyService.svc
        Physical Path: E:\web\xxxxxx\htdocs\MyServiceBaseAddress\MyService.svc
        Logon Method: Not yet determined
        Logon User: Not yet determined

    Now for the fun stuff: I tried providing my discountasp.net account username/password for kicks, and sure enough it responded properly for any [OperationContract] which doesn't have [OperationAuthentication] defined (which is only one or two of the operations I have). I thought this was strange, so I looked at Fiddler and saw something interesting. Whenever I request an operation with [OperationAuthentication] defined and provide my discountasp.net username/pass, I get two different "WWW-Authenticate" headers back in Fiddler:

        WWW-Authenticate: Basic realm="MyRESTServiceRealm"
        WWW-Authenticate: Basic realm="www.mydomain.com"

    On the other hand, if I try to access the same operations with only my application's user/pass, I only get the site's header:

        WWW-Authenticate: Basic realm="www.mydomain.com"

    My hypothesis is that for some reason I'm having to pass through the default Basic Authentication layer set by IIS before I can get to the application's custom Basic Authentication layer. After verifying this by creating an identical user/pass for my service to the one I use for my discountasp.net account, I was able to successfully pass both layers of authentication without any issues... so I think I can conclude that this is indeed the issue. Now how do I disable the default one? Do I need to do this in the IIS Manager, or in the web.config? Anyway, I have absolutely no idea how this is possible or what I need to do to resolve the issue, but I know that something is seriously out of whack. Any suggestions would be greatly appreciated! Thanks.
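
    A likely direction for disabling the extra IIS layer, as a sketch only: in IIS7 integrated mode the authentication features can be set from web.config, assuming the host has delegated (unlocked) these sections - shared hosts like discountasp often expose the same switch through their control panel instead.

        <!-- Hypothetical web.config fragment: turn off IIS Basic Authentication and let
             the WcfRestContrib handler issue the challenge. Requires the authentication
             sections to be unlocked in applicationHost.config. -->
        <system.webServer>
          <security>
            <authentication>
              <basicAuthentication enabled="false" />
              <anonymousAuthentication enabled="true" />
            </authentication>
          </security>
        </system.webServer>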

    Read the article

  • Using Node.js as an accelerator for WCF REST services

    - by Elton Stoneman
    Node.js is a server-side JavaScript platform "for easily building fast, scalable network applications". It's built on Google's V8 JavaScript engine and uses an (almost) entirely async event-driven processing model, running in a single thread. If you're new to Node and your reaction is "why would I want to run JavaScript on the server side?", this is the headline answer: in 150 lines of JavaScript you can build a Node.js app which works as an accelerator for WCF REST services*. It can double your messages-per-second throughput, halve your CPU workload and use one-fifth of the memory footprint, compared to the WCF services direct.

    Well, it can if: 1) your WCF services are first-class HTTP citizens, honouring client cache ETag headers in request and response; 2) your services do a reasonable amount of work to build a response; 3) your data is read more often than it's written.

    In one of my projects I have a set of REST services in WCF which deal with data that only gets updated weekly, but which can be read hundreds of times an hour. The services issue ETags and will return a 304 if the client sends a request with the current ETag, which means in the most common scenario the client uses its local cached copy. But when the weekly update happens, all the client caches are invalidated and they all need the same new data. Then the service will get hundreds of requests with old ETags, and they go through the full service stack to build the same response for each, taking up threads and processing time. Part of that processing means going off to a database on a separate cloud, which introduces more latency and downtime potential.

    We can use ASP.NET output caching with WCF to solve the repeated processing problem, but the server will still be thread-bound on incoming requests, and getting the current ETags reliably needs a database call per request. The accelerator solves that by running as a proxy - all client calls come into the proxy, and the proxy routes calls to the underlying REST service. We could use Node as a straight passthrough proxy and expect some benefit, as the server would be less thread-bound, but we would still have one WCF and one database call per proxy call. But add some smart caching logic to the proxy, and share ETags between Node and WCF (so the proxy doesn't even need to call the service to get the current ETag), and the underlying service will only be invoked when data has changed, and then only once - all subsequent client requests will be served from the proxy cache.

    I've built this as a sample up on GitHub: NodeWcfAccelerator on sixeyed.codegallery (architecture diagram in the original post).

    The code is very simple. The Node proxy runs on port 8010 and all client requests target the proxy. If the client request has an ETag header then the proxy looks up the ETag in the tag cache to see if it is current - the sample uses memcached to share ETags between .NET and Node. If the ETag from the client matches the current server tag, the proxy sends a 304 response with an empty body to the client, telling it to use its own cached version of the data. If the ETag from the client is stale, the proxy looks for a local cached version of the response, checking for a file named after the current ETag. If that file exists, its contents are returned to the client as the body in a 200 response, which includes the current ETag in the header. If the proxy does not have a local cached file for the service response, it calls the service, and writes the WCF response to the local cache file, and to the body of a 200 response for the client. So the WCF service is only troubled if both client and proxy have stale (or no) caches.

    The only (vaguely) clever bit in the sample is using the ETag cache, so the proxy can serve cached requests without any communication with the underlying service, which it does completely generically, so the proxy has no notion of what it is serving or what the services it proxies are doing. The relative path from the URL is used as the lookup key, so there's no shared key-generation logic between .NET and Node, and when WCF stores a tag it also stores the "read" URL against the ETag so it can be used for a reverse lookup, e.g.:

        Key                                                Value
        /WcfSampleService/PersonService.svc/rest/fetch/3   "28cd4796-76b8-451b-adfd-75cb50a50fa6"
        "28cd4796-76b8-451b-adfd-75cb50a50fa6"             /WcfSampleService/PersonService.svc/rest/fetch/3

    In Node we read the cache using the incoming URL path as the key and we know that "28cd4796-76b8-451b-adfd-75cb50a50fa6" is the current ETag; we look for a local cached response in /caches/28cd4796-76b8-451b-adfd-75cb50a50fa6.body (and the corresponding .header file which contains the original service response headers, so the proxy response is exactly the same as the underlying service). When the data is updated, we need to invalidate the ETag cache - which is why we need the reverse lookup in the cache. In the WCF update service, we don't need to know the URL of the related read service - we fetch the entity from the database, do a reverse lookup on the tag cache using the old ETag to get the read URL, update the new ETag against the URL, store the new reverse lookup and delete the old one.

    Running Apache Bench against the two endpoints gives the headline performance comparison. Making 1000 requests with concurrency of 100, and not sending any ETag headers in the requests, with the Node proxy I get 102 requests handled per second, average response time of 975 milliseconds with 90% of responses served within 850 milliseconds; going direct to WCF with the same parameters, I get 53 requests handled per second, mean response time of 1853 milliseconds, with 90% of responses served within 3260 milliseconds. Informally monitoring server usage during the tests, Node maxed at 20% CPU and 20Mb memory; IIS maxed at 60% CPU and 100Mb memory.

    Note that the sample WCF service does a database read and sleeps for 250 milliseconds to simulate a moderate processing load, so this is *not* a baseline Node-vs-WCF comparison, but for similar scenarios where the service call is expensive but applicable to numerous clients for a long timespan, the performance boost from the accelerator is considerable.

    * - actually, the accelerator will work nicely for any HTTP request, where the URL (path + querystring) uniquely identifies a resource. In the sample, there is an assumption that the ETag is a GUID wrapped in double-quotes (e.g. "28cd4796-76b8-451b-adfd-75cb50a50fa6") - which is the default for WCF services. I use that assumption to name the cache files uniquely, but it is a trivial change to adapt to other ETag formats.
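
    The tag cache with its reverse lookup reduces to a few lines on the .NET side. The following is illustrative only - the GitHub sample is the authority - and it assumes the Enyim memcached client is the library in play:

        // Sketch of the shared tag cache (assumption: Enyim.Caching memcached client).
        using Enyim.Caching;
        using Enyim.Caching.Memcached;

        public class TagCache
        {
            private readonly MemcachedClient _cache = new MemcachedClient();

            // Store the current ETag against the read URL, plus the reverse lookup.
            public void StoreTag(string readUrl, string etag)
            {
                _cache.Store(StoreMode.Set, readUrl, etag);   // /rest/fetch/3 -> "28cd..."
                _cache.Store(StoreMode.Set, etag, readUrl);   // "28cd..." -> /rest/fetch/3
            }

            // On update: reverse-lookup the read URL, register the new tag, drop the old one.
            public void ReplaceTag(string oldEtag, string newEtag)
            {
                var readUrl = _cache.Get<string>(oldEtag);
                if (readUrl == null) return;
                _cache.Store(StoreMode.Set, readUrl, newEtag);
                _cache.Store(StoreMode.Set, newEtag, readUrl);
                _cache.Remove(oldEtag);                       // stale reverse entry
            }
        }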

    Read the article

  • SharePoint 2010 Custom WCF Service - Windows and FBA Authentication

    - by e-rock
    I have SharePoint 2010 configured for Claims Based Authentication with both Windows and Forms Based Authentication (FBA) for external users. I also need to develop custom WCF Services. The issue is that I want Windows credentials passed into the WCF Service(s); however, I cannot seem to get the Windows credentials passed into the services. My custom WCF service appears to be using Anonymous authentication (which has to be enabled in IIS in order to display the FBA login screen). The example I have tried to follow is found at http://msdn.microsoft.com/en-us/library/ff521581.aspx. The WCF service gets deployed to _vti_bin (ISAPI folder).

    Here is the code for the .svc file:

        <%@ ServiceHost Language="C#" Debug="true"
            Service="MyCompany.CustomerPortal.SharePoint.UI.ISAPI.MyCompany.Services.LibraryManagers.LibraryUploader, $SharePoint.Project.AssemblyFullName$"
            Factory="Microsoft.SharePoint.Client.Services.MultipleBaseAddressBasicHttpBindingServiceHostFactory, Microsoft.SharePoint.Client.ServerRuntime, Version=14.0.0.0, Culture=neutral, PublicKeyToken=71e9bce111e9429c"
            CodeBehind="LibraryUploader.svc.cs" %>

    Here is the code behind for the .svc file:

        [ServiceContract]
        public interface ILibraryUploader
        {
            [OperationContract]
            string SiteName();
        }

        [BasicHttpBindingServiceMetadataExchangeEndpoint]
        [AspNetCompatibilityRequirements(RequirementsMode = AspNetCompatibilityRequirementsMode.Required)]
        public class LibraryUploader : ILibraryUploader
        {
            //just try to return site title right now…
            public string SiteName()
            {
                WindowsIdentity identity = ServiceSecurityContext.Current.WindowsIdentity;
                ClaimsIdentity claimsIdentity = new ClaimsIdentity(identity);
                return SPContext.Current.Web.Title;
            }
        }

    The WCF test client I have just to test it out (WPF app) uses the following code to call the WCF service:

        private void Button1Click(object sender, RoutedEventArgs e)
        {
            BasicHttpBinding binding = new BasicHttpBinding();
            binding.Security.Mode = BasicHttpSecurityMode.TransportCredentialOnly;
            binding.Security.Transport.ClientCredentialType = HttpClientCredentialType.Ntlm;

            EndpointAddress endpoint = new EndpointAddress(
                "http://dev.portal.data-image.local/_vti_bin/MyCompany.Services/LibraryManagers/LibraryUploader.svc");

            LibraryUploaderClient libraryUploader = new LibraryUploaderClient(binding, endpoint);
            libraryUploader.ClientCredentials.Windows.AllowedImpersonationLevel =
                System.Security.Principal.TokenImpersonationLevel.Impersonation;

            MessageBox.Show(libraryUploader.SiteName());
        }

    I am somewhat inexperienced with IIS security settings/configurations when it comes to Claims and trying to use both Windows and FBA. I am also inexperienced when it comes to WCF configurations for security. I usually develop internal biz apps and let Visual Studio decide what to use because security is rarely a concern.
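
    One low-risk way to see which identity actually arrives at the service is a throwaway diagnostic operation - a sketch, not part of the original contract:

        // Hypothetical diagnostic: if IsAnonymous is true, IIS let the call through on
        // the anonymous account and no Windows token was negotiated at all.
        public string WhoAmI()
        {
            var ctx = ServiceSecurityContext.Current;
            if (ctx == null || ctx.IsAnonymous)
                return "Anonymous - no Windows credentials were passed";
            return string.Format("Windows identity: {0}, auth type: {1}",
                ctx.WindowsIdentity.Name, ctx.WindowsIdentity.AuthenticationType);
        }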

    Read the article

  • WCF Duplex Interaction with Web Server

    - by Mark Struzinski
    Here is my scenario, and it is causing us a considerable amount of grief at the moment: we have a vendor web service which provides base-level telephony functionality. This service has a SOAP API, which we are leveraging to build up a custom UI that is integrated into our in-house web apps. The API functions on 2 levels. You make standard client calls into the service to initiate actions, such as Login, Place Call, Hang Up, etc. On a different thread, the service sends events back to the client to alert the user of things that are occurring on the system (agent successfully logged in, call was disconnected, etc.).

    I implemented a WCF service to sit between the web server and the vendor service. This WCF service operates in duplex mode, establishing a 2-way connection with the web server. The web server makes outbound calls to the WCF service, which routes them to the vendor's web service. Events are received back at the WCF service, which passes them on to the web server via a callback channel on the WCF client. As events are received on the web server, they are placed into a hash table with the user's name as the key and a .NET queue as the value to hold the events. Each event is enqueued to the agent who owns it.

    On a 2-second interval, the web page polls the web server via an AJAX request to get new events for the logged-in user. It hits the hash table for the user key, dequeues any events that are present, and serializes them back up to the web page. From there, they are processed in order and appropriate messages are displayed to the user.

    This implementation performs well in a single-user scenario. The second I put more than 1 user on the system, I start getting frequent timeouts with the following CommunicationException:

        A connection attempt failed because the connected party did not properly respond after a period of time, or established connection failed because connected host has failed to respond

    We are running Windows Server 2008 R2 on both servers. Both the web app and the WCF service are running on .NET 3.5. The WCF service is running under the net.tcp protocol in duplex mode. The web app is ASP.NET MVC 2. Has anyone dealt with anything like this scenario? Is there a more efficient way (or a widely accepted pattern) to implement this?
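
    For reference, the per-user queueing described above reduces to something like the following sketch (names are illustrative, not from the original code; a lock guards the shared dictionary since .NET 3.5 has no ConcurrentDictionary):

        using System.Collections.Generic;

        public static class EventBuffer
        {
            private static readonly Dictionary<string, Queue<string>> _events =
                new Dictionary<string, Queue<string>>();
            private static readonly object _sync = new object();

            // Called from the WCF callback thread as events arrive.
            public static void Enqueue(string user, string evt)
            {
                lock (_sync)
                {
                    Queue<string> q;
                    if (!_events.TryGetValue(user, out q))
                        _events[user] = q = new Queue<string>();
                    q.Enqueue(evt);
                }
            }

            // Called by the 2-second AJAX poll: drain and return the user's events in order.
            public static List<string> Drain(string user)
            {
                lock (_sync)
                {
                    var result = new List<string>();
                    Queue<string> q;
                    if (_events.TryGetValue(user, out q))
                        while (q.Count > 0) result.Add(q.Dequeue());
                    return result;
                }
            }
        }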

    Read the article

  • Node.js Lockstep Multiplayer Architecture

    - by Wakaka
    Background

    I'm using the lockstep model for a multiplayer Node.js/Socket.IO game in a client-server architecture. User input (mouse or keypress) is parsed into commands like 'attack' and 'move' on the client, which are sent to the server and scheduled to be executed on a certain tick. This is in contrast to sending state data to clients, which I don't wish to use due to bandwidth issues. Each tick, the server will send the list of commands on that tick (possibly empty) to each client. The server and all clients will then process the commands and simulate that tick in exactly the same way. With Node.js this is actually quite simple due to the possibility of code sharing between server and client. I'll just put the deterministic simulator in the /shared folder which can be run by both server and client. The server simulation is required so that there is an authoritative version of the simulation which clients cannot alter.

    Problem

    Now, the game has many entity classes, like Unit, Item, Tree etc. Entities are created in the simulator. However, each class has some methods that are shared and some that are client-specific. For instance, the Unit class has an addHp method which is shared. It also has methods like getSprite (gets the image of the entity), isVisible (checks if the unit can be seen by the client), onDeathInClient (does a bunch of stuff when it dies only on the client, like adding announcements) and isMyUnit (quick function to check if the client owns the unit). Up till now, I have been piling all the client functions into the shared Unit class, and adding a this.game.isServer() check when necessary. For instance, when the unit dies, it will call if (!this.game.isServer()) { this.onDeathInClient(); }. This approach has worked pretty fine so far, in terms of functionality. But as the codebase grew bigger, this style of coding seems a little strange. Firstly, the client code is clearly not shared, and yet is placed under the /shared folder. Secondly, client-specific variables for each entity are also instantiated on the server entity (like unit.sprite) and can run into problems when the server cannot instantiate the variable (it doesn't have an Image class like on browsers). So my question is, is there a better way to organize the client code, or is this a common way of doing things for lockstep multiplayer games? I can think of a possible workaround, but it does have its own problems.

    Possible workaround (with problems)

    I could use Javascript mixins that are only added when in a browser. Thus, in the /shared/unit.js file in the /shared folder, I would have this code at the end:

        if (typeof exports !== 'undefined') module.exports = Unit;
        else mixin(Unit, LocalUnit);

    Then I would have /client/localunit.js store an object LocalUnit of client-side methods for Unit. Now, I already have a publish-subscribe system in place for events in the simulator. To remove the this.game.isServer() checks, I could publish entity-specific events whenever I want the client to do something. For instance, I would do this.publish('Death') in /shared/unit.js and do this.subscribe('Death', this.onDeathInClient) in /client/localunit.js. But this would make the simulator's event listener lists on the server and the client different. Now if I want to clear all subscribed events only from the shared simulator, I can't. Of course, it is possible to create two event subscription systems - one client-specific and one shared - but now the publish() method would have to do if (!this.game.isServer()) { this.publishOnClient(event); }. All in all, the workaround off the top of my head seems pretty complicated for something as simple as separating the client and shared code. Thus, I wonder if there is an established and simpler method for better code organization, hopefully specific to Node.js games.

    Read the article

  • Do you charge a client for email and chat communication as a freelancer? [closed]

    - by skyork
    For a project that is billed by hours, should a freelancer charge the client for the amount of time he/she spends on email/chat correspondence? For example, the client sends an email to the freelancer, outlining the requirements. Should the freelancer charge the client for the time during which he/she reads the email and writes a reply? The same goes for chat conversations for clarifying the requirements. In particular, if the freelancer's English is not very good, so that he/she spends extra time on understanding what the client wants and explaining him/herself (e.g. copying and pasting into Google Translate), should such time be charged to the client too?

    Read the article

  • How to configure custom binding to consume this WS secure Webservice using WCF?

    - by Soeteman
    Hello all, I'm trying to configure a WCF client to be able to consume a webservice that returns the following response message:

        <env:Envelope xmlns:env="http://schemas.xmlsoap.org/soap/envelope/" xmlns:xsd="http://www.w3.org/2001/XMLSchema" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xmlns:ns0="http://myservice.wsdl">
          <env:Header>
            <wsse:Security xmlns:wsse="http://docs.oasis-open.org/wss/2004/01/oasis-200401-wss-wssecurity-secext-1.0.xsd" xmlns="http://docs.oasis-open.org/wss/2004/01/oasis-200401-wss-wssecurity-secext-1.0.xsd" xmlns:env="http://schemas.xmlsoap.org/soap/envelope/" env:mustUnderstand="1" />
          </env:Header>
          <env:Body>
            <ns0:StatusResponse>
              <result> ... </result>
            </ns0:StatusResponse>
          </env:Body>
        </env:Envelope>

    To do this, I've constructed a custom binding (which doesn't work). I keep getting a "Security header is empty" message. My binding:

        <customBinding>
          <binding name="myCustomBindingForVestaServices">
            <security authenticationMode="UserNameOverTransport" messageSecurityVersion="WSSecurity11WSTrustFebruary2005WSSecureConversationFebruary2005WSSecurityPolicy11" securityHeaderLayout="Strict" includeTimestamp="false" requireDerivedKeys="true">
            </security>
            <textMessageEncoding messageVersion="Soap11" />
            <httpsTransport authenticationScheme="Negotiate" requireClientCertificate="false" realm=""/>
          </binding>
        </customBinding>

    My request seems to be using the same SOAP and WS-Security versions as the response, but uses different namespace prefixes ("o" instead of "wsse"). Could this be the reason why I keep getting the "Security header is empty" message?

    Request message:

        <s:Envelope xmlns:s="http://schemas.xmlsoap.org/soap/envelope/" xmlns:u="http://docs.oasis-open.org/wss/2004/01/oasis-200401-wss-wssecurity-utility-1.0.xsd">
          <s:Header>
            <o:Security s:mustUnderstand="1" xmlns:o="http://docs.oasis-open.org/wss/2004/01/oasis-200401-wss-wssecurity-secext-1.0.xsd">
              <o:UsernameToken u:Id="uuid-d3b70d1f-0ebb-4a79-85e6-34f0d6aa3d0f-1">
                <o:Username>user</o:Username>
                <o:Password>pass</o:Password>
              </o:UsernameToken>
            </o:Security>
          </s:Header>
          <s:Body>
            <getPrdStatus xmlns="http://myservice.wsdl">
              <request xmlns="" xmlns:a="http://myservice.wsdl" xmlns:i="http://www.w3.org/2001/XMLSchema-instance"> ... </request>
            </getPrdStatus>
          </s:Body>
        </s:Envelope>

    How do I need to configure my WCF client binding to be able to consume this webservice? Any help greatly appreciated! Sander
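
    For what it's worth, the "o" versus "wsse" prefix difference is a red herring: both prefixes are bound to the same namespace URI, so a compliant stack treats the headers identically. The empty wsse:Security element the service returns (with mustUnderstand="1") is the more likely culprit. For experimenting with settings, here is the same binding built in code - a sketch matching the config above, nothing more:

        // Sketch only: programmatic equivalent of the custom binding above (.NET 3.5).
        using System.ServiceModel;
        using System.ServiceModel.Channels;

        public static class VestaBindingFactory
        {
            public static Binding Create()
            {
                var security = SecurityBindingElement.CreateUserNameOverTransportBindingElement();
                security.MessageSecurityVersion = MessageSecurityVersion
                    .WSSecurity11WSTrustFebruary2005WSSecureConversationFebruary2005WSSecurityPolicy11;
                security.IncludeTimestamp = false;

                var encoding = new TextMessageEncodingBindingElement { MessageVersion = MessageVersion.Soap11 };
                var transport = new HttpsTransportBindingElement();

                return new CustomBinding(security, encoding, transport);
            }
        }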

    Read the article

  • Hosting WCF over Internet

    - by karthik
    I am pretty new to exposing WCF services hosted on IIS over the internet. I will be deploying a WCF service on IIS (6 or 7) and would like to expose it over the internet. It will be hosted in a corporate network behind a firewall, and I want the service to be accessible over the internet (it should be able to pass through the firewall). I did some research on this, and these are some of the pointers I got:

    1. I could use wsHttpBinding or netTcpBinding (the client is intended to be a .NET client). Which of the bindings is preferable?
    2. To get past the corporate firewall I came across the DMZ server concept - what is its purpose, and do I really need to use it?
    3. I will be passing some files between the client and server, and the client needs to know the progress of the processing on the server and the end result.

    I know this is a very broad question to ask, but could anyone give me pointers on where to start and what approach to take for this problem? Any help will be appreciated. Thanks Karthik
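
    A starting point for pointer 1: an HTTP-based binding on port 80/443 is the easiest to get through firewalls, since net.tcp usually needs extra ports opened. A minimal sketch, assuming an HTTPS certificate is already bound in IIS (binding name is illustrative):

        <bindings>
          <wsHttpBinding>
            <!-- Transport (SSL) security on 443 typically passes corporate firewalls -->
            <binding name="InternetBinding">
              <security mode="Transport">
                <transport clientCredentialType="None" />
              </security>
            </binding>
          </wsHttpBinding>
        </bindings>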

    Read the article

  • WCF GZip Compression Request/Response Processing

    - by IanT8
    How do I get a WCF client to process server responses which have been GZipped or Deflated by IIS? On IIS, I've followed the instructions here on how to make IIS 6 gzip all responses (where the request contained "Accept-Encoding: gzip, deflate") emitted by .svc WCF services. On the client, I've followed the instructions here and here on how to inject this header into the web request: "Accept-Encoding: gzip, deflate". Fiddler2 shows the response is binary and not plain old XML. The client crashes with an exception which basically says there's no XML header, which of course is true. In my IClientMessageInspector, the app crashes before AfterReceiveReply is called. Some further notes: (1) I can't change the WCF service or client as they are supplied by a 3rd party. I can however attach behaviors and/or message inspectors via configuration if this is the right direction to take. (2) I don't want to compress/uncompress just the SOAP body, but the entire message. Any ideas/solutions?

    * SOLVED *

    It was not possible to write a WCF extension to achieve these goals. Instead I followed this CodeProject article which advocates a helper class:

        public class CompressibleHttpRequestCreator : IWebRequestCreate
        {
            public CompressibleHttpRequestCreator() { }

            WebRequest IWebRequestCreate.Create(Uri uri)
            {
                HttpWebRequest httpWebRequest = Activator.CreateInstance(typeof(HttpWebRequest),
                    BindingFlags.CreateInstance | BindingFlags.Public | BindingFlags.NonPublic | BindingFlags.Instance,
                    null, new object[] { uri, null }, null) as HttpWebRequest;

                if (httpWebRequest == null)
                {
                    return null;
                }

                httpWebRequest.AutomaticDecompression = DecompressionMethods.GZip | DecompressionMethods.Deflate;
                return httpWebRequest;
            }
        }

    and also, an addition to the application configuration file:

        <configuration>
          <system.net>
            <webRequestModules>
              <remove prefix="http:"/>
              <add prefix="http:" type="Pajocomo.Net.CompressibleHttpRequestCreator, Pajocomo" />
            </webRequestModules>
          </system.net>
        </configuration>

    What seems to be happening is that WCF eventually asks some factory or other deep down in System.Net to provide an HttpWebRequest instance, and we provide the helper that will be asked to create the required instance. In the WCF client configuration file, a simple basicHttpBinding is all that is required, without the need for any custom extensions. When the application runs, the client HTTP request contains the header "Accept-Encoding: gzip, deflate", the server returns a gzipped web response, and the client transparently decompresses the HTTP response before handing it over to WCF.

    When I tried to apply this technique to Web Services I found that it did NOT work. Although the helper class was executed in the same way as when used by the WCF client, the HTTP request did not contain the "Accept-Encoding: ..." header. To make this work for Web Services, I had to edit the Web Proxy class, and add this method:

        protected override System.Net.WebRequest GetWebRequest(Uri uri)
        {
            System.Net.HttpWebRequest rq = (System.Net.HttpWebRequest)base.GetWebRequest(uri);
            rq.AutomaticDecompression = DecompressionMethods.GZip | DecompressionMethods.Deflate;
            return rq;
        }

    Note that it did not matter whether the CompressibleHttpRequestCreator and <webRequestModules> block from the application config file were present or not. For web services, only overriding GetWebRequest in the Web Service Proxy worked.

    Read the article

  • Self-hosted WCF server and SSL

    - by jitm
    Hello, I have a self-hosted WCF server (not IIS), and certificates were generated (on Windows XP) using command lines like:

        makecert.exe -sr CurrentUser -ss My -a sha1 -n CN=SecureClient -sky exchange -pe
        makecert.exe -sr CurrentUser -ss My -a sha1 -n CN=SecureServer -sky exchange -pe

    These certificates were added to the server code like this:

        serviceCred.ServiceCertificate.SetCertificate(StoreLocation.LocalMachine, StoreName.My, X509FindType.FindBySubjectName, "SecureServer");
        serviceCred.ClientCertificate.SetCertificate(StoreLocation.LocalMachine, StoreName.My, X509FindType.FindBySubjectName, "SecureClient");

    After all the previous operations I created a simple client to check the SSL connection to the server. Client configuration:

        <?xml version="1.0" encoding="utf-8" ?>
        <configuration>
          <system.serviceModel>
            <bindings>
              <basicHttpBinding>
                <binding name="BasicHttpBinding_IAdminContract" closeTimeout="00:01:00" openTimeout="00:01:00" receiveTimeout="00:10:00" sendTimeout="00:01:00" allowCookies="false" bypassProxyOnLocal="false" hostNameComparisonMode="StrongWildcard" maxBufferSize="65536" maxBufferPoolSize="524288" maxReceivedMessageSize="65536" messageEncoding="Text" textEncoding="utf-8" transferMode="Buffered" useDefaultWebProxy="true">
                  <readerQuotas maxDepth="32" maxStringContentLength="8192" maxArrayLength="16384" maxBytesPerRead="4096" maxNameTableCharCount="16384" />
                  <security mode="TransportCredentialOnly">
                    <transport clientCredentialType="Basic"/>
                  </security>
                </binding>
              </basicHttpBinding>
            </bindings>
            <client>
              <endpoint address="https://myhost:8002/Admin" binding="basicHttpBinding" bindingConfiguration="BasicHttpBinding_IAdminContract" contract="Admin.IAdminContract" name="BasicHttpBinding_IAdminContract" />
            </client>
          </system.serviceModel>
        </configuration>

    Code:

        Admin.AdminContractClient client = new AdminContractClient("BasicHttpBinding_IAdminContract");
        client.ClientCredentials.UserName.UserName = "user";
        client.ClientCredentials.UserName.Password = "pass";
        var result = client.ExecuteMethod();

    During execution I receive the following error:

        The provided URI scheme 'https' is invalid; expected 'http'.
        Parameter name: via

    Question: how do I enable SSL for the self-hosted server, and where should I set up certificates for the client and server? Thanks.
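
    A hedged pointer, assuming the service really is meant to be reached over HTTPS: TransportCredentialOnly sends credentials over plain HTTP, and that mismatch is exactly what produces the "expected 'http'" error against an https address, so both ends of the binding need Transport security instead:

        <!-- sketch: replaces the security element above on both client and server -->
        <security mode="Transport">
          <transport clientCredentialType="Basic"/>
        </security>

    For self-hosting there is no IIS to terminate SSL, so the server certificate must also be bound to the listening port by hand, for example (Vista and later; on XP/2003 the httpcfg tool plays the same role; the appid can be any GUID identifying your application):

        netsh http add sslcert ipport=0.0.0.0:8002 certhash=<thumbprint of SecureServer> appid={12345678-1234-1234-1234-123456789012}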

    Read the article

  • Test Driven Development (TDD) in Visual Studio 2010 - Microsoft Mondays

    - by Hosam Kamel
    On November 14th I will be presenting at Microsoft Mondays a session about Test Driven Development (TDD) in Visual Studio 2010. Microsoft Mondays is a program consisting of a series of webcasts showcasing various Microsoft products and technologies. Each Monday we discuss a particular topic pertaining to development, infrastructure, Office tools, ERP, client/server operating systems, etc. The webcast will be broadcast via Lync and can be viewed from a web client. The idea behind the "Microsoft Mondays" program is to help you become more proficient in the products and technologies that you use and help you utilize their full potential.

    Test Driven Development in Visual Studio 2010
    Level – 300 (Intermediate – Advanced)

    Test Driven Development (TDD), also frequently referred to as Test Driven Design, is a development methodology where developers create software by first writing a unit test, then writing the actual system code to make the unit test pass. The unit test can be viewed as a small specification of how the system should behave; writing it first helps the developer to focus on writing only enough code to make the test pass, thereby helping ensure a tight, lightweight system which is specifically focused on meeting the documented requirements. TDD follows a cadence of "Red, Green, Refactor." Red refers to the visual display of a failing test – the test you write first will not pass because you have not yet written any code for it. Green refers to the step of writing just enough code in your system to make your unit test pass – your test runner's UI will now show that test passing with a green icon. Refactor refers to the step of refactoring your code so it is tighter, cleaner, and more flexible. This cycle is repeated constantly throughout a TDD developer's workday.

    Date: November 14, 2011
    Time: 10:00 a.m. – 11:00 a.m. (GMT+3)
    http://www.eventbrite.com/event/2437620990/efbnen?ebtv=F

    See you there!
    Hosam Kamel
    Originally posted at
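
    To make the cadence concrete, here is a minimal red-green illustration in C# using MSTest (the unit test framework behind Visual Studio 2010's test tooling); the Calculator class is a hypothetical example, not from the session material:

        // Illustrative only: one red-green cycle.
        using Microsoft.VisualStudio.TestTools.UnitTesting;

        [TestClass]
        public class CalculatorTests
        {
            [TestMethod]
            public void Add_TwoNumbers_ReturnsSum()
            {
                // Red: written before Calculator exists, so this fails first.
                var calc = new Calculator();
                Assert.AreEqual(5, calc.Add(2, 3));
            }
        }

        public class Calculator
        {
            // Green: just enough code to make the test pass.
            // Refactor: tighten and clean up once the test runner shows green.
            public int Add(int a, int b) { return a + b; }
        }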

    Read the article

  • Load and Web Performance Testing using Visual Studio Ultimate 2010 - Part 2

    - by Tarun Arora
    Welcome back. In part 1 of Load and Web Performance Testing using Visual Studio 2010 I talked about why performance testing the application is important, the test tools available in Visual Studio Ultimate 2010 and various test rig topologies. In this blog post I'll get into the details of web performance and load tests, as well as why it's important to follow a goal-based pattern while performance testing your application.

    Tools => Options => Test Tools

    Have you visited the treasures of the Visual Studio menu bar Tools => Options => Test Tools lately? The options to enable/disable prompts on creating, editing, deleting or running manual/automated tests can be controlled from here. The default test project language and the default test types created on new test project creation can be selected/unselected from here. Ever wondered how you can change the default limit of 25 test results? This can again be changed from here. If you record a lot of web tests and wish for the web test recorder to start with "that" URL populated, well, this again can be specified from here. If you haven't so far, I would urge you to spend 2 minutes in the test tools options.

    Test Menu => Ready Steady Test Action!

    The test tools are under the Test menu in Visual Studio. Apart from being able to create a new test and test list, you can also load an existing vsmdi file. You can also manage your test controllers from here. A solution can have one or more test settings files, but there can only be one active test settings file at any time; again, this selection can be done from here. You can open the various test windows from under the Windows option in the Test menu. If you open the Test View window you will see that you have the option to group the tests by work items, project, test type, etc. You can set these properties by right-clicking a test in the test list and choosing Properties from the context menu.

    So, what is a vsmdi file?

    vsmdi stands for Visual Studio Test Metadata File. Placed under Solution Items, this file keeps track of the list of unit tests in your solution. If you open the vsmdi file as an XML file you will see a series of Test Link elements nested within the Test List tags, along with the Run Configuration tag. When you run tests in Visual Studio, the IDE looks at the vsmdi file to see what tests need to be run. You also have the option of using the vsmdi file in your team builds to specify which tests need to run as part of the build. Refer here for a walkthrough from a fellow blogger on how to use the vsmdi file in team builds.

    Web Performance Test – The Truth!

    In Visual Studio 2010, "Web Tests" have been renamed to "Web Performance Tests". Apart from the rename, there have been several improvements to this test type in Visual Studio 2010. I am very active on the MSDN Visual Studio And Load Testing forum, and a frequent question from many users is "Do Web Tests support pages that run JavaScript?" I will start with a little bit of background before answering this question. Web Performance Tests operate at the HTTP layer, but why? To enable you to generate high loads with a relatively low amount of hardware, web performance tests are driven at the protocol layer rather than instantiating a browser. The most common source of confusion is that users do not realize Web Performance Tests work at the HTTP layer, and the tool adds to that misconception. After all, you record in IE; when running a web test you can select which browser to use; and the result viewer shows the results in a browser window. So that means the tests run through the browser, right? NO! The web test engine works at the HTTP layer, and does not instantiate a browser. What does that mean? In the diagram (in the original post), you can see there are no browsers running when the engine is sending and receiving requests.

    Does that mean I can't test pages that use JavaScript? The best example of JavaScript generating HTTP traffic is AJAX calls. The most common examples of browser plugins are Silverlight or Flash. The web test recorder will record HTTP traffic from AJAX calls and from most (but not all) browser plugins. This means you will still be able to performance test pages that use JavaScript or a plugin and play back the results, but the playback engine will not show the JavaScript or plugin results in the 'browser control'. If you want to test the page behaviour as a result of the JavaScript or plugin, consider using Coded UI Tests. A page may look like it failed in the result viewer when in fact it succeeded: looking closely at the response, and the subsequent requests, shows the operation succeeded; the reason the browser control displays an error message is that JavaScript has been disabled in this control.

    So, to reiterate, the web performance test recorder:
    - Sends and receives data at the HTTP layer.
    - Does NOT run a browser.
    - Does NOT run JavaScript.
    - Does NOT host ActiveX controls or plugins.

    There is a great series of blog posts from Ed Glas; I would highly recommend his blog to anyone performing load/performance testing through Visual Studio.

    Demo – Web Performance Test

    [Demo] - Visual Studio Ultimate 2010: Test Settings and Configuration
    [Demo] - Visual Studio Ultimate 2010: Web Performance Test

    In this short video I try and answer the following questions: Why is performance testing important? How does Visual Studio help you performance test your applications? How do I record a web performance test? How do I make a web performance test data driven, transaction driven, loop driven, convert it to code, and add validations? What are the best practices for recording web performance tests?

    I have a web performance test, what next?

    Creating the web performance test was the first step towards load testing your application. Now that we have the base test, we can test the page behaviour when N users access the page. Have you ever had the head of business call you and mention that the marketing team has done a fantastic job and is expecting increased traffic on the web site - can the website survive the weekend with that additional load? This is the perfect opportunity to capacity test your application to see how your website holds up under various levels of load; you can work the results backwards to see how much hardware you may need to scale up your application to survive the weekend. Apart from that, it is always a good idea to have some benchmarks around how the application performs under light loads for a short duration, under heavy load for a long duration, and to soak test the application - run a constant load for a week or two to record the effects of constant load over really long durations. This is a great way of identifying how your application handles the default IIS application pool recycle, which by default is configured to occur once every 29 hours. These benchmarks will act as the perfect yardstick to measure performance gains when you start making improvements.

    BUT there are some best practices! => Goal Based Load Testing Approach

    Since the subject is vast and there are a lot of things to measure and analyse, it is very easy to get distracted from the real goal! You can optimize your application once you know where the pain points are. There is no point performing a load test of 5000 users if your intranet application will only have 100 simultaneous users; it is important to keep focussed on the real goals of the project. So the idea is to have a user story around your load testing scenarios and test realistically. It is recommended that you follow the outline below. It is an iterative process: refine your objectives, identify the key scenarios, determine the expected workload and the key metrics you want to report, record the web performance tests, simulate load and analyse the results.

    Is your application already deployed in production? This is great! You can analyse the IIS logs to understand user behaviour... But what are IIS logs? The IIS logs allow you to record events for each application and web site on the web server. You can create separate logs for each of your applications and web sites. Logging information in IIS goes beyond the scope of the event logging or performance monitoring features provided by Windows. The IIS logs can include information such as who has visited your site, what the visitor viewed, and when the information was last viewed. You can also use the IIS logs to identify any attempts to gain unauthorized access to your web server.

    How do you configure IIS logs? For those ninjas who already have IIS logs configured (by the way, it's on by default) and need a way to analyse them, there is the Windows IIS utility Log Parser. Log Parser is a very powerful tool that provides a generic SQL-like language on top of many types of data like IIS logs, Event Viewer entries, XML files, CSV files, the file system and others; it allows you to export the result of the queries to many output formats such as CSV, XML, SQL Server, charts and others; and it works well with IIS 5, 6, 7 and 7.5. See the frequently used Log Parser queries.

    Demo – Load Test

    [Demo] - Visual Studio Ultimate 2010: Load Testing

    In this short video I try and answer the following questions: What are the types of performance testing? How do you perform goal-driven load testing, analyse a test run result and generate a report?

    Recap

    A quick recap of what we have covered so far (summary slide in the original post). Thank you for taking the time out and reading this blog post; in part III of this blog series I'll be getting into the details of test result analysis, test result drill-through, test report generation, test run comparison, and the ASP.NET Profiler. If you enjoyed the post, remember to subscribe to http://feeds.feedburner.com/TarunArora. Questions/feedback/suggestions, etc. - please leave a comment. See you in Part III.
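
    To give a flavour of what Log Parser can pull out of IIS logs - a sketch, assuming the default W3C log format and IIS6-style file names (adjust the FROM clause to your log location):

        logparser "SELECT TOP 10 cs-uri-stem, COUNT(*) AS Hits FROM ex*.log GROUP BY cs-uri-stem ORDER BY COUNT(*) DESC"

    This lists the ten most requested URLs - a quick way to pick realistic key scenarios for the load test.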

    Read the article

  • Setting Up IRM Test Content

    - by martin.abrahams
    A feature of the 11g IRM Server that sometimes gets overlooked is the ability to set up some test content that any IRM user can access to verify that their IRM Desktop can reach the server, authenticate successfully, and render protected content successfully. Such test content is useful for new users, and in troubleshooting scenarios. Here's how to set up some test content... In the management console, go to IRM - Administration - Test Content, as shown. The console will display a list of test content - initially an empty list. Use the Add option to specify the URL of a document or image, and define one or more labels for the test content in whichever languages your users favour. Note that you do not need to seal the image or document in order to use it as test content. Nor do you need to set up any rights for the test content. The IRM Server will handle the sealing and rights assignment automatically such that all authenticated users are authorised to view the test content. Repeat this process for as many different types of content as you would like to offer for test purposes - perhaps a Word document, a PDF document, and an image. To keep things simple the first time I did this, I used the URL of one of the images in the IRM Server's UI - so there was no problem with the IRM Server being able to reach that image. Whatever content you want to use, the IRM Server needs to be able to reach it at the URL you specify. Using Test Content Open a browser and browse to the URL that the IRM Desktop normally uses to access the IRM Server, for example: http://irm11g.oracle.com/irm_desktop If you are not sure, you can find this URL in the Servers tab of the IRM Options dialog. Go to the Test tab, and you will see your test content listed. By opening one of the items, you can verify that your IRM Desktop is healthy and that you can authenticate to the IRM Server.

    Read the article

  • WCF REST based services authentication schemes

    - by FlySwat
    I have a simple authentication scheme for a set of semi-public REST APIs we are building:

        /-----------------------\
        | Client POST's ID/Pass |
        | to an Auth Service    |
        \-----------------------/

        [Client] ------------POST----------------------> [Service/Authenticate]

                                                 /-------------------------------\
        [Client] <---------Session Cookie------- | Service checks credentials    |
                                                 | and generates a session token |
                                                 | in a cookie.                  |
                                                 \-------------------------------/

        [Client] -----------GET /w Cookie -------------> [Service/Something]

                 /----------------------------------\
                 | Client must pass session cookie  |
                 | with each API request            |
                 | or will get a 401.               |
                 \----------------------------------/

    This works well, because the client never needs to do anything except receive a cookie and then pass it along. For browser applications, this happens automatically in the browser; for non-browser applications, it is pretty trivial to save the cookie and send it with each request. However, I have not figured out a good approach for doing the initial handshake from browser applications. For example, if this is all happening using an AJAX technique, what prevents the user from being able to access the ID/Pass the client is using to handshake with the service? It seems like this is the only stumbling block to this approach, and I'm stumped.
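
    For the non-browser case, the handshake-and-replay is only a few lines; here is a sketch (URLs, form field names and credentials are all illustrative). Note it does not solve the browser problem: anything an AJAX page can read, the user running the page can read too - HTTPS only protects the credentials on the wire.

        // Minimal sketch of the cookie handshake from a .NET client.
        using System;
        using System.IO;
        using System.Net;

        class ApiClient
        {
            static void Main()
            {
                var cookies = new CookieContainer();

                // Step 1: POST ID/pass; the session cookie is captured in the container.
                var auth = (HttpWebRequest)WebRequest.Create("https://api.example.com/Service/Authenticate");
                auth.Method = "POST";
                auth.ContentType = "application/x-www-form-urlencoded";
                auth.CookieContainer = cookies;
                using (var body = new StreamWriter(auth.GetRequestStream()))
                    body.Write("id=user&pass=secret");
                auth.GetResponse().Close();

                // Step 2: every API call reuses the container; a 401 means re-authenticate.
                var call = (HttpWebRequest)WebRequest.Create("https://api.example.com/Service/Something");
                call.CookieContainer = cookies;
                using (var reader = new StreamReader(call.GetResponse().GetResponseStream()))
                    Console.WriteLine(reader.ReadToEnd());
            }
        }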

    Read the article

  • WCF Business logic handling

    - by Raj
    I have a WCF service that supports about 10 contracts. We have been supporting a client with all the business rules specific to that client; now we have another client who will be using the exact same contracts (so we cannot change them), and they will be calling the service exactly the same way the previous client did. The only way we can differentiate between the two clients is by one of the input parameters. Based on this input parameter we have to use slightly different business logic - the logic for both clients will be the same 50% of the time, and the remainder will have different logic (across the Business/DAL layers). I don't want to use if/else statements in each contract implementation to differentiate and reroute the logic - also, what if another client comes in? Is there a clean way of handling a situation like this? I am using framework 3.5. Like I said, I cannot change any of the contracts (service/data contracts) or the current service calling infrastructure for the new client. Thanks
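
    One common pattern for this - a sketch only, with illustrative names standing in for the real data contracts - is to keep the contract implementations thin and resolve a per-client strategy from the distinguishing input parameter, so shared logic lives in one place and a third client means adding one class:

        // Hypothetical request/response types standing in for the real data contracts.
        public class OrderRequest { public string ClientId; public decimal Amount; }
        public class OrderResult { public decimal Total; }

        public interface IClientLogic { OrderResult Process(OrderRequest r); }

        // The ~50% shared logic lives in a base class; clients override only what differs.
        public abstract class ClientLogicBase : IClientLogic
        {
            public OrderResult Process(OrderRequest r)
            {
                var result = new OrderResult { Total = r.Amount };  // shared portion
                Adjust(result);                                     // client-specific portion
                return result;
            }
            protected abstract void Adjust(OrderResult result);
        }

        public class ClientALogic : ClientLogicBase { protected override void Adjust(OrderResult r) { } }
        public class ClientBLogic : ClientLogicBase { protected override void Adjust(OrderResult r) { r.Total *= 0.9m; } }

        public static class ClientLogicFactory
        {
            // Called at the top of each operation, keyed on the distinguishing parameter.
            public static IClientLogic For(string clientId)
            {
                return clientId == "ClientB" ? (IClientLogic)new ClientBLogic() : new ClientALogic();
            }
        }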

    Read the article

  • deadlock when using WCF Duplex Polling with Silverlight

    - by Kobi Hari
    Hi all. I have followed Tomek Janczuk's demonstration on Silverlight TV to create a chat program that uses a WCF duplex polling web service. The client subscribes to the server, and the server then initiates notifications to all connected clients to publish events. The idea is simple: on the client, there is a button that allows the client to connect, a text box where the client can write a message and publish it, and a bigger text box that presents all the notifications received from the server. I connected 3 clients (in different browsers - IE, Firefox and Chrome) and it all works nicely. They send messages and receive them smoothly. The problem starts when I close one of the browsers. As soon as one client is out, the other clients get stuck - they stop getting notifications. I am guessing that the loop in the server that goes through all the clients and sends them the notifications is stuck on the client that is now missing. I tried catching the exception and removing the dead channel from the clients list (see code) but it still does not help. Any ideas?

    The server code is as follows:

        using System;
        using System.Linq;
        using System.Runtime.Serialization;
        using System.ServiceModel;
        using System.ServiceModel.Activation;
        using System.Collections.Generic;
        using System.Runtime.Remoting.Channels;

        namespace ChatDemo.Web
        {
            [ServiceContract]
            public interface IChatNotification
            {
                // this will be used as a callback method, therefore it must be one way
                [OperationContract(IsOneWay=true)]
                void Notify(string message);

                [OperationContract(IsOneWay = true)]
                void Subscribed();
            }

            // define this as a callback contract - to allow push
            [ServiceContract(Namespace="", CallbackContract=typeof(IChatNotification))]
            [AspNetCompatibilityRequirements(RequirementsMode = AspNetCompatibilityRequirementsMode.Allowed)]
            [ServiceBehavior(InstanceContextMode=InstanceContextMode.Single)]
            public class ChatService
            {
                SynchronizedCollection<IChatNotification> clients = new SynchronizedCollection<IChatNotification>();

                [OperationContract(IsOneWay=true)]
                public void Subscribe()
                {
                    IChatNotification cli = OperationContext.Current.GetCallbackChannel<IChatNotification>();
                    this.clients.Add(cli);
                    // inform the client it is now subscribed
                    cli.Subscribed();
                    Publish("New Client Connected: " + cli.GetHashCode());
                }

                [OperationContract(IsOneWay = true)]
                public void Publish(string message)
                {
                    SynchronizedCollection<IChatNotification> toRemove = new SynchronizedCollection<IChatNotification>();
                    foreach (IChatNotification channel in this.clients)
                    {
                        try
                        {
                            channel.Notify(message);
                        }
                        catch
                        {
                            toRemove.Add(channel);
                        }
                    }
                    // now remove all the dead channels
                    foreach (IChatNotification chnl in toRemove)
                    {
                        this.clients.Remove(chnl);
                    }
                }
            }
        }

    The client code is as follows:

        void client_NotifyReceived(object sender, ChatServiceProxy.NotifyReceivedEventArgs e)
        {
            this.Messages.Text += string.Format("{0}\n\n", e.Error != null ? e.Error.ToString() : e.message);
        }

        private void MyMessage_KeyDown(object sender, KeyEventArgs e)
        {
            if (e.Key == Key.Enter)
            {
                this.client.PublishAsync(this.MyMessage.Text);
                this.MyMessage.Text = "";
            }
        }

        private void Button_Click(object sender, RoutedEventArgs e)
        {
            this.client = new ChatServiceProxy.ChatServiceClient(
                new PollingDuplexHttpBinding { DuplexMode = PollingDuplexMode.MultipleMessagesPerPoll },
                new EndpointAddress("../ChatService.svc"));

            // listen for server events
            this.client.NotifyReceived += new EventHandler<ChatServiceProxy.NotifyReceivedEventArgs>(client_NotifyReceived);
            this.client.SubscribedReceived += new EventHandler<System.ComponentModel.AsyncCompletedEventArgs>(client_SubscribedReceived);

            // subscribe for the server events
            this.client.SubscribeAsync();
        }

        void client_SubscribedReceived(object sender, System.ComponentModel.AsyncCompletedEventArgs e)
        {
            try
            {
                Messages.Text += "Connected!\n\n";
                gsConnect.Color = Colors.Green;
            }
            catch
            {
                Messages.Text += "Failed to Connect!\n\n";
            }
        }

    And the web config is as follows:

        <system.serviceModel>
          <extensions>
            <bindingExtensions>
              <add name="pollingDuplex" type="System.ServiceModel.Configuration.PollingDuplexHttpBindingCollectionElement, System.ServiceModel.PollingDuplex, Version=4.0.0.0, Culture=neutral, PublicKeyToken=31bf3856ad364e35"/>
            </bindingExtensions>
          </extensions>
          <behaviors>
            <serviceBehaviors>
              <behavior name="">
                <serviceMetadata httpGetEnabled="true"/>
                <serviceDebug includeExceptionDetailInFaults="false"/>
              </behavior>
            </serviceBehaviors>
          </behaviors>
          <bindings>
            <pollingDuplex>
              <binding name="myPollingDuplex" duplexMode="MultipleMessagesPerPoll"/>
            </pollingDuplex>
          </bindings>
          <serviceHostingEnvironment aspNetCompatibilityEnabled="true" multipleSiteBindingsEnabled="true"/>
          <services>
            <service name="ChatDemo.Web.ChatService">
              <endpoint address="" binding="pollingDuplex" bindingConfiguration="myPollingDuplex" contract="ChatDemo.Web.ChatService"/>
              <endpoint address="mex" binding="mexHttpBinding" contract="IMetadataExchange"/>
            </service>
          </services>
        </system.serviceModel>

    Read the article

  • How to HIDE "client denied by server configuration:" error in log

    - by Keith
    I want to block access to my web server by default as a precaution, but I keep getting the following errors showing up in my error log:

        [Wed Jun 27 23:30:54 2012] [error] [client 86.77.20.107] client denied by server configuration: /home/www/default/Edu.jar
        [Wed Jun 27 23:32:40 2012] [error] [client 86.77.20.107] client denied by server configuration: /home/www/default/REST.jar
        [Wed Jun 27 23:35:39 2012] [error] [client 86.77.20.107] client denied by server configuration: /home/www/default/Set.jar
        [Thu Jun 28 01:01:17 2012] [error] [client 58.218.199.227] client denied by server configuration: /home/www/default/proxyheader.php
        [Thu Jun 28 02:34:57 2012] [error] [client 58.218.199.227] client denied by server configuration: /home/www/default/proxy.php
        [Thu Jun 28 05:41:33 2012] [error] [client 58.218.199.227] client denied by server configuration: /home/www/default/proxyheader.php
        [Thu Jun 28 06:55:10 2012] [error] [client 180.76.6.20] client denied by server configuration: /home/www/default/
        [Thu Jun 28 07:31:26 2012] [error] [client 86.77.20.107] client denied by server configuration: /home/www/default/Edu.jar
        [Thu Jun 28 07:32:25 2012] [error] [client 86.77.20.107] client denied by server configuration: /home/www/default/REST.jar
        [Thu Jun 28 07:36:10 2012] [error] [client 86.77.20.107] client denied by server configuration: /home/www/default/Set.jar

    I don't really want these errors to show up, but whatever I do, I can't get rid of them. Does anyone know how I can achieve this? Here is a copy of my configuration:

        <VirtualHost *:80>
            DocumentRoot /home/www/default
            <Directory />
                AllowOverride None
                Order Deny,Allow
                Deny from all
            </Directory>
            #ErrorLog /var/log/apache2/error.log
            #LogLevel warn
            CustomLog /var/log/apache2/access.log combined
        </VirtualHost>
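
    A hedged note on the usual answer: these denials are logged at error level, and Apache 2.2 (the Order/Deny syntax above) has no per-module log levels, so the blunt instrument is raising the vhost's threshold:

        # Sketch: add inside the <VirtualHost> block; crit suppresses [error]-level
        # "client denied" lines, but also hides other error-level messages - a trade-off.
        LogLevel crit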

    Read the article

  • Configuring a Linux console email client to check attachments

    - by Christopher
    I need to configure an IMAP4-capable (console-based) email client to:
    - check and edit the name of an attachment (does it contain umlauts? then change the character ä to ae)
    - delete emails that don't fit certain requirements (not PDF, DOC, ..., not from domain xyz.com)
    Whether the client can do everything by itself or can just trigger a script on incoming mail doesn't matter. Does anyone have an idea which mail client would be suitable for such a task?
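
    If triggering a script is acceptable, the filtering and renaming logic itself is small. A minimal sketch, assuming the open-source MailKit library and placeholder host/credentials (none of these names come from the question):

        using System;
        using System.Linq;
        using MailKit;
        using MailKit.Net.Imap;
        using MailKit.Search;
        using MimeKit;

        class MailFilter
        {
            static void Main()
            {
                using (var client = new ImapClient())
                {
                    // Placeholder server and credentials -- substitute your own.
                    client.Connect("imap.example.com", 993, true);
                    client.Authenticate("user", "password");

                    var inbox = client.Inbox;
                    inbox.Open(FolderAccess.ReadWrite);

                    foreach (var uid in inbox.Search(SearchQuery.All))
                    {
                        var message = inbox.GetMessage(uid);

                        // Delete mail from an unwanted domain.
                        if (message.From.Mailboxes.Any(m => m.Address.EndsWith("@xyz.com")))
                        {
                            inbox.AddFlags(uid, MessageFlags.Deleted, true);
                            continue;
                        }

                        // Transliterate umlauts in attachment names. IMAP cannot
                        // edit a stored message in place, so the renamed copy is
                        // appended and the original is marked deleted.
                        bool renamed = false;
                        foreach (var part in message.Attachments.OfType<MimePart>())
                        {
                            if (part.FileName != null && part.FileName.Contains("ä"))
                            {
                                part.FileName = part.FileName.Replace("ä", "ae");
                                renamed = true;
                            }
                        }
                        if (renamed)
                        {
                            inbox.Append(message);
                            inbox.AddFlags(uid, MessageFlags.Deleted, true);
                        }
                    }

                    inbox.Expunge();
                    client.Disconnect(true);
                }
            }
        }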

    Read the article

  • Free tools for analyzing client-server communication issues

    - by roberto
    Hi, we have some issues with a client-server based application and we would like to better understand the client-server communication without going back to the software company that sold the application, or at least to perform our own analysis in parallel. Can you suggest a simple, foolproof application that we can easily get and install to analyze client-server traffic? Many thanks!

    Read the article

  • Web Service Client in JBOSS 5.1 with JDK6

    - by dcp
    This is a continuation of the question here: http://stackoverflow.com/questions/2435286/jboss-does-app-have-to-be-compiled-under-same-jdk-as-jboss-is-running-under It's different enough, though, that it required a new question. I am trying to use JDK6 to run JBOSS 5.1, and I downloaded the JDK6 version of JBOSS 5.1. This works fine and my EAR application deploys fine. However, when I want to run a web service client with code like this:

        public static void main(String[] args) throws Exception {
            System.out.println("creating the web service client...");
            TestClient client = new TestClient("http://localhost:8080/tc_test_project-tc_test_project/TestBean?wsdl");
            Test service = client.getTestPort();
            System.out.println("calling service.retrieveAll() using the service client");
            List<TestEntity> list = service.retrieveAll();
            System.out.println("the number of elements in list retrieved using the client is " + list.size());
        }

    I get the following exception:

        javax.xml.ws.WebServiceException: java.lang.UnsupportedOperationException: setProperty must be overridden by all subclasses of SOAPMessage
            at org.jboss.ws.core.jaxws.client.ClientImpl.handleRemoteException(ClientImpl.java:396)
            at org.jboss.ws.core.jaxws.client.ClientImpl.invoke(ClientImpl.java:302)
            at org.jboss.ws.core.jaxws.client.ClientProxy.invoke(ClientProxy.java:170)
            at org.jboss.ws.core.jaxws.client.ClientProxy.invoke(ClientProxy.java:150)

    Now, here's the really interesting part. If I change the JDK that the code above is running under from JDK6 to JDK5, the exception goes away! It's really strange. The only way I found to make the code above run under JDK6 was to take the JBOSS_HOME/lib/endorsed folder and copy it to JDK6_HOME/lib. This seems like it shouldn't be necessary, but it is. Is there any other way to make this work other than using the workaround I just described?
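
    One alternative worth sketching (an assumption on my part, not something the original poster verified): instead of copying JARs into the JDK, point the client JVM at JBoss's endorsed directory with the standard java.endorsed.dirs system property, so the JBossWS SAAJ classes override the JDK6 built-ins for that run only. The jar and class names below are placeholders:

        java -Djava.endorsed.dirs="$JBOSS_HOME/lib/endorsed" -cp myclient.jar com.example.TestClientMain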

    Read the article

  • WordPress content authoring tutorial for a non-techie client

    - by metal-gear-solid
    I made a website for a client in WordPress, and the client will add his own content. The client doesn't know how to handle WordPress or XHTML/CSS, but he knows MS Word 2007, and he is in a remote location. Are there any easy-to-understand articles or video tutorials I can give the client so he can understand the WordPress admin and add content, images, and video using the editor? And how can I disable unneeded things in the WordPress admin for the client?

    Read the article

  • The server rejected the session-establishment request: WCF hosted on IIS

    - by Dave Hanna
    Background: I'm working on a project where we have about a dozen distinct WCF services implemented in an IIS application, communicating over net.tcp on the default port (808), using the Microsoft Net.Tcp Port Sharing Service. I recently added a self-test method to the base class of each of these services so that I could remotely hit the service and get back a status string verifying that it was in operation. We implement this app in a ladder of environments - Development, QA, UAT, and finally Production.

    My problem: My test program, which instantiates a connection to each service in turn and invokes the self-test method, works fine on all the environments below Production. We recently moved the app to Production, and I'm getting a weird error that I can't explain: on the first of the services that I hit, I get back an exception - "The server at [URL] rejected the session-establishment request". All the other services respond fine. I initially thought there was something wrong with the particular service that was failing, but I tried rearranging the list of services into a different order, and it SEEMS to always be the first service that I hit that fails. (I say SEEMS because I think that once, in the early iterations of testing, I saw it happen on the second service hit. But I haven't been able to reproduce that.)

    I've looked at application startup delays, and that doesn't seem to be the problem, because I can come back and run the test again as soon as it finishes - a delay of only a minute or two - and get the same error. Also, in the lower-level environments there is a startup delay of probably 30 seconds to a minute, but the result still comes back as expected. I've tried accessing the services over HTTP from IIS Manager, and I get intermittent failures on all the services - a particular service will return a yellow screen of death on one invocation, then come up with the expected link to the WSDL seconds later.

    I'm completely at a loss to explain this behavior, or how to resolve it. I've googled the error message and not found anything helpful. It may be a configuration issue - the production servers are newly provisioned VMs, and we may not have the config exactly right (whereas all the lower-level environments have been running this and other similar apps for some time) - but I have no idea what to look for. I've looked at the properties of the app pool that the app is running on and compared it to the lower-level environments without finding any differences. If somebody can point me in the right direction, you would have my undying gratitude.
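
    For reference, a minimal sketch of the kind of self-test sweep described above, since the original harness isn't shown (the ISelfTest contract, binding settings, and addresses are all assumptions):

        using System;
        using System.ServiceModel;

        // Hypothetical contract: each service exposes a status-string self-test.
        [ServiceContract]
        public interface ISelfTest
        {
            [OperationContract]
            string SelfTest();
        }

        class SelfTestSweep
        {
            static void Main()
            {
                // Placeholder addresses; the real app has about a dozen services
                // sharing port 808 via the Net.Tcp Port Sharing Service.
                string[] urls =
                {
                    "net.tcp://prodserver/App/ServiceOne.svc",
                    "net.tcp://prodserver/App/ServiceTwo.svc"
                };

                var binding = new NetTcpBinding { PortSharingEnabled = true };

                foreach (var url in urls)
                {
                    var factory = new ChannelFactory<ISelfTest>(binding, new EndpointAddress(url));
                    var proxy = factory.CreateChannel();
                    try
                    {
                        Console.WriteLine("{0}: {1}", url, proxy.SelfTest());
                        ((IClientChannel)proxy).Close();
                    }
                    catch (CommunicationException ex)
                    {
                        // "The server ... rejected the session-establishment request"
                        // surfaces here as a CommunicationException.
                        Console.WriteLine("{0}: FAILED - {1}", url, ex.Message);
                        ((IClientChannel)proxy).Abort();
                    }
                    factory.Close();
                }
            }
        }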

    Read the article

  • Unable to access the WCF service over VPN!

    - by kurozakura
    Here's the scenario: I'm on network A, and I use a VPN client to connect to network B in order to reach a web service that is only accessible from network B. Even though I'm connected to network B, I'm unable to access the web service link. Do I need to configure any settings? Oddly, the reverse direction works: if you are originally on network B and connect to network A using the VPN client, you can still access the web service link. It's only the other way around that isn't working.

    Read the article

  • Communication from Server to Client + Client LAN

    - by Filipe YaBa Polido
    I'm having some trouble with a network setup. I've tried OpenVPN, SocialVPN, and Hamachi, and it is still not working. This is my setup:

        Server A:         NIC 1 with public internet address, NIC 2 to LAN
        Client B PC:      NIC 1 (192.168.10.2) connects to router 192.168.10.1
        Client B Device:  192.168.10.3 (configured via software; I can't do much here)

    Problem: Server A must connect to Client B's device. (I can install any software needed on Client B's PC.) However, I can't change the router to some model with VPN support like a Draytek or Cisco. OpenVPN fails at bridging: PC B can ping Server A, but Server A can't ping Device B, only PC B. What else can I do?
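
    One avenue worth trying, sketched under heavy assumptions (that both Server A and Client B's PC run Linux, that the VPN assigns PC B the address 10.8.0.2, and that PC B's LAN interface is eth0 - none of this is stated in the question): route the device's subnet through PC B and let PC B forward and masquerade the traffic, so Device B can reply without needing a route back through the tunnel.

        # On Client B's PC: forward packets between the VPN and the LAN
        sysctl -w net.ipv4.ip_forward=1
        # Masquerade LAN-bound traffic so Device B replies to PC B
        iptables -t nat -A POSTROUTING -o eth0 -d 192.168.10.0/24 -j MASQUERADE

        # On Server A: send traffic for the client LAN through PC B's VPN address
        ip route add 192.168.10.0/24 via 10.8.0.2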

    Read the article

< Previous Page | 13 14 15 16 17 18 19 20 21 22 23 24  | Next Page >