Search Results

Search found 1221 results on 49 pages for 'contract'.


  • WCF Service error received when using TCP: "The message could not be dispatched..."

    - by StM
    I am new to creating WCF services. I have created a WCF web service in VS2008 that is running on IIS 7. When I use http the service works perfectly. When I configure the service for TCP and run I get the following error message. There was a communication problem. The message could not be dispatched because the service at the endpoint address 'net:tcp://elec:9090/CoordinateIdTool_Tcp/IdToolService.svc is unavailable for the protocol of the address. I have searched a lot of forums, including this one, for a resolution but nothing has worked. Everything appears to be set up correctly on IIS 7. WAS has been set up to run. The default web site has a net.tcp binding and the application has net.tcp under the enabled protocols. I am including what I think is the important part of the web.config from the host project and also the app.config from the client project I am using to test the service. Hopefully someone can spot my error. Thanks in advance for any help or recommendations that anyone can provide. Web.Config <bindings> <wsHttpBinding> <binding name="wsHttpBindingNoMsgs"> <security mode="None" /> </binding> </wsHttpBinding> </bindings> <services> <service behaviorConfiguration="CogIDServiceHost.ServiceBehavior" name="CogIDServiceLibrary.CogIdService"> <endpoint address="" binding="wsHttpBinding" bindingConfiguration="wsHttpBindingNoMsgs" contract="CogIDServiceLibrary.CogIdTool"> <identity> <dns value="localhost" /> </identity> </endpoint> <endpoint address="mex" binding="mexHttpBinding" bindingConfiguration="" contract="IMetadataExchange" /> <endpoint name="CoordinateIdService_TCP" address="net.tcp://elec:9090/CoordinateIdTool_Tcp/IdToolService.svc" binding="netTcpBinding" bindingConfiguration="" contract="CogIDServiceLibrary.CogIdTool"> <identity> <dns value="localhost" /> </identity> </endpoint> </service> </services> <behaviors> <serviceBehaviors> <behavior name="CogIDServiceHost.ServiceBehavior"> <serviceMetadata httpGetEnabled="true" /> <serviceDebug includeExceptionDetailInFaults="false" /> </behavior> </serviceBehaviors> </behaviors> App.Config <system.serviceModel> <diagnostics performanceCounters="Off"> <messageLogging logEntireMessage="true" logMalformedMessages="false" logMessagesAtServiceLevel="false" logMessagesAtTransportLevel="false" /> </diagnostics> <behaviors /> <bindings> <wsHttpBinding> <binding name="WSHttpBinding_CogIdTool" closeTimeout="00:01:00" openTimeout="00:01:00" receiveTimeout="00:10:00" sendTimeout="00:01:00" bypassProxyOnLocal="false" transactionFlow="false" hostNameComparisonMode="StrongWildcard" maxBufferPoolSize="524288" maxReceivedMessageSize="65536" messageEncoding="Text" textEncoding="utf-8" useDefaultWebProxy="true" allowCookies="false"> <readerQuotas maxDepth="32" maxStringContentLength="8192" maxArrayLength="16384" maxBytesPerRead="4096" maxNameTableCharCount="16384" /> <reliableSession ordered="true" inactivityTimeout="00:10:00" enabled="false" /> <security mode="None"> <transport clientCredentialType="Windows" proxyCredentialType="None" realm="" /> <message clientCredentialType="Windows" negotiateServiceCredential="true" establishSecurityContext="true" /> </security> </binding> <binding name="wsHttpBindingNoMsg"> <security mode="None"> <transport clientCredentialType="Windows" /> <message clientCredentialType="Windows" /> </security> </binding> </wsHttpBinding> </bindings> <client> <endpoint address="http://sdet/CogId_WCF/IdToolService.svc" binding="wsHttpBinding" bindingConfiguration="wsHttpBindingNoMsg" 
contract="CogIdServiceReference.CogIdTool" name="IISHostWsHttpBinding"> <identity> <dns value="localhost" /> </identity> </endpoint> <endpoint address="http://localhost:1890/IdToolService.svc" binding="wsHttpBinding" bindingConfiguration="WSHttpBinding_CogIdTool" contract="CogIdServiceReference.CogIdTool" name="WSHttpBinding_CogIdTool"> <identity> <dns value="localhost" /> </identity> </endpoint> <endpoint address="http://elec/CoordinateIdTool/IdToolService.svc" binding="wsHttpBinding" bindingConfiguration="wsHttpBindingNoMsg" contract="CogIdServiceReference.CogIdTool" name="IIS7HostWsHttpBinding_Elec"> <identity> <dns value="localhost" /> </identity> </endpoint> <endpoint address="net.tcp://elec:9090/CoordinateIdTool_Tcp/IdToolService.svc" binding="netTcpBinding" bindingConfiguration="" contract="CogIdServiceReference.CogIdTool" name="IIS7HostTcpBinding_Elec" > <identity> <dns value="localhost"/> </identity> </endpoint> </client> </system.serviceModel>

    Read the article

  • Howto WCF Service HTTPS Binding and Endpoint Configuration in IIS with Load Balancer?

    - by Mike G
    We have a WCF service that is being hosted on a set of 12 machines. There is a load balancer that is a gateway to these machines. Now the site is setup as SSL; as in a user accesses it through using an URL with https. I know this much, the URL that addresses the site is https, but none of the servers has a https binding or is setup to require SSL. This leads me to believe that the load balancer handles the https and the connection from the balancer to the servers are unencrypted (this takes place behind the firewall so no biggie there). The problem we're having is that when a Silverlight client tries to access a WCF service it is getting a "Not Found" error. I've set up a test site along with our developer machines and have made sure that the bindings and endpoints in the web.config work with the client. It seems to be the case in the production environment that we get this error. Is there anything wrong with the following web.config? Should we be setting up how https is handled in a different manner? We're at a loss on this currently since I've tried every programmatic solution with endpoints and bindings. None of the solutions I have found deal with a load balancer in the manner we're dealing. Web.config service model info: <system.serviceModel> <behaviors> <serviceBehaviors> <behavior name="TradePMR.OMS.Framework.Services.CRM.CRMServiceBehavior"> <serviceMetadata httpsGetEnabled="true" /> <serviceDebug includeExceptionDetailInFaults="false" /> </behavior> <behavior name="TradePMR.OMS.Framework.Services.AccountAggregation.AccountAggregationBehavior"> <serviceMetadata httpsGetEnabled="true" /> <serviceDebug includeExceptionDetailInFaults="false" /> </behavior> </serviceBehaviors> </behaviors> <bindings> <customBinding> <binding name="SecureCRMCustomBinding"> <binaryMessageEncoding /> <httpsTransport /> </binding> <binding name="SecureAACustomBinding"> <binaryMessageEncoding /> <httpsTransport /> </binding> </customBinding> <mexHttpsBinding> <binding name="SecureMex" /> </mexHttpsBinding> </bindings> <serviceHostingEnvironment aspNetCompatibilityEnabled="true" /> <!--Defines the services to be used in the application--> <services> <service behaviorConfiguration="TradePMR.OMS.Framework.Services.CRM.CRMServiceBehavior" name="TradePMR.OMS.Framework.Services.CRM.CRMService"> <endpoint address="" binding="customBinding" bindingConfiguration="SecureCRMCustomBinding" contract="TradePMR.OMS.Framework.Services.CRM.CRMService" name="SecureCRMEndpoint" /> <!--This is required in order to be able to use the "Update Service Reference" in the Silverlight application--> <endpoint address="mex" binding="mexHttpBinding" contract="IMetadataExchange" /> </service> <service behaviorConfiguration="TradePMR.OMS.Framework.Services.AccountAggregation.AccountAggregationBehavior" name="TradePMR.OMS.Framework.Services.AccountAggregation.AccountAggregation"> <endpoint address="" binding="customBinding" bindingConfiguration="SecureAACustomBinding" contract="TradePMR.OMS.Framework.Services.AccountAggregation.AccountAggregation" name="SecureAAEndpoint" /> <!--This is required in order to be able to use the "Update Service Reference" in the Silverlight application--> <endpoint address="mex" binding="mexHttpBinding" contract="IMetadataExchange" /> </service> </services> </system.serviceModel> </configuration> The ServiceReferences.ClientConfig looks like this: <configuration> <system.serviceModel> <bindings> <customBinding> <binding name="StandardAAEndpoint"> <binaryMessageEncoding /> <httpTransport 
maxReceivedMessageSize="2147483647" maxBufferSize="2147483647" /> </binding> <binding name="SecureAAEndpoint"> <binaryMessageEncoding /> <httpsTransport maxReceivedMessageSize="2147483647" maxBufferSize="2147483647" /> </binding> <binding name="StandardCRMEndpoint"> <binaryMessageEncoding /> <httpTransport maxReceivedMessageSize="2147483647" maxBufferSize="2147483647" /> </binding> <binding name="SecureCRMEndpoint"> <binaryMessageEncoding /> <httpsTransport maxReceivedMessageSize="2147483647" maxBufferSize="2147483647" /> </binding> </customBinding> </bindings> <client> <endpoint address="https://Service2.svc" binding="customBinding" bindingConfiguration="SecureAAEndpoint" contract="AccountAggregationService.AccountAggregation" name="SecureAAEndpoint" /> <endpoint address="https://Service1.svc" binding="customBinding" bindingConfiguration="SecureCRMEndpoint" contract="CRMService.CRMService" name="SecureCRMEndpoint" /> </client> </system.serviceModel> </configuration> (The addresses are of no consequence since those are dynamically built so that they will point to a dev's machine or to the production server)

    Read the article

  • mexTcpBinding in WCF - IMetadataExchange errors

    - by David
    I'm wanting to get a WCF-over-TCP service working. I was having some problems with modifying my own project, so I thought I'd start with the "base" WCF template included in VS2008. Here is the initial WCF App.config and when I run the service the WCF Test Client can work with it fine: <?xml version="1.0" encoding="utf-8" ?> <configuration> <system.web> <compilation debug="true" /> </system.web> <system.serviceModel> <services> <service name="WcfTcpTest.Service1" behaviorConfiguration="WcfTcpTest.Service1Behavior"> <host> <baseAddresses> <add baseAddress="http://localhost:8731/Design_Time_Addresses/WcfTcpTest/Service1/" /> </baseAddresses> </host> <endpoint address="" binding="wsHttpBinding" contract="WcfTcpTest.IService1"> <identity> <dns value="localhost"/> </identity> </endpoint> <endpoint address="mex" binding="mexHttpBinding" contract="IMetadataExchange"/> </service> </services> <behaviors> <serviceBehaviors> <behavior name="WcfTcpTest.Service1Behavior"> <serviceMetadata httpGetEnabled="True"/> <serviceDebug includeExceptionDetailInFaults="True" /> </behavior> </serviceBehaviors> </behaviors> </system.serviceModel> </configuration> This works perfectly, no issues at all. I figured changing it from HTTP to TCP would be trivial: change the bindings to their TCP equivalents and remove the httpGetEnabled serviceMetadata element: <?xml version="1.0" encoding="utf-8" ?> <configuration> <system.web> <compilation debug="true" /> </system.web> <system.serviceModel> <services> <service name="WcfTcpTest.Service1" behaviorConfiguration="WcfTcpTest.Service1Behavior"> <host> <baseAddresses> <add baseAddress="net.tcp://localhost:1337/Service1/" /> </baseAddresses> </host> <endpoint address="" binding="netTcpBinding" contract="WcfTcpTest.IService1"> <identity> <dns value="localhost"/> </identity> </endpoint> <endpoint address="mex" binding="mexTcpBinding" contract="IMetadataExchange"/> </service> </services> <behaviors> <serviceBehaviors> <behavior name="WcfTcpTest.Service1Behavior"> <serviceDebug includeExceptionDetailInFaults="True" /> </behavior> </serviceBehaviors> </behaviors> </system.serviceModel> </configuration> But when I run this I get this error in the WCF Service Host: System.InvalidOperationException: The contract name 'IMetadataExchange' could not be found in the list of contracts implemented by the service Service1. Add a ServiceMetadataBehavior to the configuration file or to the ServiceHost directly to enable support for this contract. I get the feeling that you can't send metadata using TCP, but that's the case why is there a mexTcpBinding option?
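    The exception message itself points at the usual fix: metadata over net.tcp still requires a ServiceMetadataBehavior, so the behavior section needs an empty <serviceMetadata /> element even though httpGetEnabled is gone; mexTcpBinding then works. A self-hosted sketch of the same idea, using a trimmed-down stand-in for the template's Service1/IService1:

        using System;
        using System.ServiceModel;
        using System.ServiceModel.Description;

        [ServiceContract]
        public interface IService1
        {
            [OperationContract]
            string GetData(int value);
        }

        public class Service1 : IService1
        {
            public string GetData(int value) { return "You entered: " + value; }
        }

        class TcpMexHost
        {
            static void Main()
            {
                ServiceHost host = new ServiceHost(typeof(Service1),
                    new Uri("net.tcp://localhost:1337/Service1/"));
                host.AddServiceEndpoint(typeof(IService1), new NetTcpBinding(), "");

                // Without this behavior the IMetadataExchange endpoint throws the
                // "contract name 'IMetadataExchange' could not be found" error.
                host.Description.Behaviors.Add(new ServiceMetadataBehavior());
                host.AddServiceEndpoint(ServiceMetadataBehavior.MexContractName,
                    MetadataExchangeBindings.CreateMexTcpBinding(), "mex");

                host.Open();
                Console.WriteLine("Service with net.tcp mex running. Press Enter to stop.");
                Console.ReadLine();
                host.Close();
            }
        }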

    Read the article

  • WCF and Firewalls

    - by Amitd
    Hi guys, As a part of learning WCF, I was trying to use a simple WCF client-server code . http://weblogs.asp.net/ralfw/archive/2007/04/14/a-truely-simple-example-to-get-started-with-wcf.aspx but I'm facing strange issues.I was trying out the following. Client(My) IP address is : 192.168.2.5 (internal behind firewall) Server IP address is : 192.168.50.30 port : 9050 (internal behind firewall) Servers LIVE/External IP (on internet ) : 121.225.xx.xx (accessible from internet) When I specify the above I.P address of server(192.168.50.30), the client connects successfully and can call servers methods. Now suppose if I want to give my friend (outside network/on internet) the client with server's live I.P, i get an ENDPOINTNOTFOUND exceptions. Surprisingly if I run the above client specifying LIVE IP(121.225.xx.xx) of server i also get the same exception. I tried to debug the problem but haven't found anything. Is it a problem with the company firewall not forwarding my request? or is it a problem with the server or client . Is something needed to be added to the server/client to overcome the same problem? Or are there any settings on the firewall that need to be changed like port forwarding? (our network admin has configured the port to be accessible from the internet.) is it a authentication issue? Code is available at . http://www.ralfw.de/weblog/wcfsimple.txt http://weblogs.asp.net/ralfw/archive/2007/04/14/a-truely-simple-example-to-get-started-with-wcf.aspx i have just separated the client and server part in separate assemblies.rest is same. using System; using System.Collections.Generic; using System.Text; using System.ServiceModel; namespace WCFSimple.Contract { [ServiceContract] public interface IService { [OperationContract] string Ping(string name); } } namespace WCFSimple.Server { [ServiceBehavior(InstanceContextMode = InstanceContextMode.PerCall)] class ServiceImplementation : WCFSimple.Contract.IService { #region IService Members public string Ping(string name) { Console.WriteLine("SERVER - Processing Ping('{0}')", name); return "Hello, " + name; } #endregion } public class Program { private static System.Threading.AutoResetEvent stopFlag = new System.Threading.AutoResetEvent(false); public static void Main() { ServiceHost svh = new ServiceHost(typeof(ServiceImplementation)); svh.AddServiceEndpoint( typeof(WCFSimple.Contract.IService), new NetTcpBinding(), "net.tcp://localhost:8000"); svh.Open(); Console.WriteLine("SERVER - Running..."); stopFlag.WaitOne(); Console.WriteLine("SERVER - Shutting down..."); svh.Close(); Console.WriteLine("SERVER - Shut down!"); } public static void Stop() { stopFlag.Set(); } } } namespace WCFSimple { class Program { static void Main(string[] args) { Console.WriteLine("WCF Simple Demo"); // start server System.Threading.Thread thServer = new System.Threading.Thread(WCFSimple.Server.Program.Main); thServer.IsBackground = true; thServer.Start(); System.Threading.Thread.Sleep(1000); // wait for server to start up // run client ChannelFactory<WCFSimple.Contract.IService> scf; scf = new ChannelFactory<WCFSimple.Contract.IService>( new NetTcpBinding(), "net.tcp://localhost:8000"); WCFSimple.Contract.IService s; s = scf.CreateChannel(); while (true) { Console.Write("CLIENT - Name: "); string name = Console.ReadLine(); if (name == "") break; string response = s.Ping(name); Console.WriteLine("CLIENT - Response from service: " + response); } (s as ICommunicationObject).Close(); // shutdown server WCFSimple.Server.Program.Stop(); thServer.Join(); } } } Any help?
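    This cannot be diagnosed remotely, but two variables are worth separating from the sample as written: it runs client and server in one process against net.tcp://localhost:8000, and the default NetTcpBinding negotiates Windows credentials, which callers arriving from the internet outside the domain often cannot complete. A split sketch that removes both variables; the addresses are placeholders standing in for the internal server address and the forwarded public address:

        using System;
        using System.ServiceModel;

        [ServiceContract]
        public interface IService
        {
            [OperationContract]
            string Ping(string name);
        }

        public class ServiceImplementation : IService
        {
            public string Ping(string name) { return "Hello, " + name; }
        }

        class FirewallTest
        {
            static void Main(string[] args)
            {
                // SecurityMode.None rules out the Windows-credential negotiation that the
                // default NetTcpBinding attempts.
                NetTcpBinding binding = new NetTcpBinding(SecurityMode.None);

                if (args.Length > 0 && args[0] == "server")
                {
                    ServiceHost host = new ServiceHost(typeof(ServiceImplementation));
                    host.AddServiceEndpoint(typeof(IService), binding, "net.tcp://192.168.50.30:9050");
                    host.Open();
                    Console.WriteLine("SERVER - listening on 9050, press Enter to stop.");
                    Console.ReadLine();
                    host.Close();
                }
                else
                {
                    // The external client dials the public, port-forwarded address
                    // (203.0.113.10 is a documentation placeholder for the live IP).
                    ChannelFactory<IService> factory = new ChannelFactory<IService>(
                        binding, "net.tcp://203.0.113.10:9050");
                    IService proxy = factory.CreateChannel();
                    Console.WriteLine(proxy.Ping("test"));
                    ((ICommunicationObject)proxy).Close();
                    factory.Close();
                }
            }
        }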

    Read the article

  • Debug.Assert replacement for Phone and Store apps

    - by Daniel Moth
    I don’t know about you, but all my code is, and always has been, littered with Debug.Assert statements. I think it all started way back in my (short-lived, but impactful to me) Eiffel days, when I was applying Design by Contract. Anyway, I can’t live without Debug.Assert. Imagine my dismay when I upgraded my Windows Phone 7.x app (Translator By Moth) to Windows Phone 8 and discovered that my Debug.Assert statements would no longer display anything on the target or break in the debugger! Luckily, the solution was simple and in this post I share it with you – feel free to tweak it to meet your needs.
    Steps to use:
    1. Add a new code file to your project, delete all its contents, and paste in the code from MyDebug.cs.
    2. Perform a global search in your solution replacing Debug.Assert with MyDebug.Assert.
    3. Build the solution and test.
    Now, I do not know why this functionality was broken, but I do know that it exhibits the same broken characteristics for Windows Store apps. There, a simple workaround is to use Contract.Assert, which does display a message and offers an option to break in the debugger (although it doesn’t output the message to the Output window). Because I plan on sharing code between Phone and Windows 8 projects, I prefer to have the conditional compilation centralized, so I added the Contract.Assert workaround directly in the MyDebug class, so that you can use this class for both platforms – enjoy and enhance! Comments about this post by Daniel Moth are welcome at the original blog.
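    The post points at the downloadable MyDebug.cs rather than inlining it, so purely as an illustration of the approach described (this is not Daniel Moth's actual file, and the NETFX_CORE/DEBUG symbols and fallback behavior are assumptions), a wrapper of roughly this shape centralizes the conditional compilation:

        using System.Diagnostics;

        public static class MyDebug
        {
            [Conditional("DEBUG")]
            public static void Assert(bool condition, string message = "Assertion failed")
            {
        #if NETFX_CORE
                // Windows Store apps: Contract.Assert shows a dialog and offers to break.
                System.Diagnostics.Contracts.Contract.Assert(condition, message);
        #else
                if (!condition)
                {
                    Debug.WriteLine(message);
                    if (Debugger.IsAttached)
                    {
                        Debugger.Break();
                    }
                }
        #endif
            }
        }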

    Read the article

  • A DirectoryCatalog class for Silverlight MEF (Managed Extensibility Framework)

    - by Dixin
    In the MEF (Managed Extension Framework) for .NET, there are useful ComposablePartCatalog implementations in System.ComponentModel.Composition.dll, like: System.ComponentModel.Composition.Hosting.AggregateCatalog System.ComponentModel.Composition.Hosting.AssemblyCatalog System.ComponentModel.Composition.Hosting.DirectoryCatalog System.ComponentModel.Composition.Hosting.TypeCatalog While in Silverlight, there is a extra System.ComponentModel.Composition.Hosting.DeploymentCatalog. As a wrapper of AssemblyCatalog, it can load all assemblies in a XAP file in the web server side. Unfortunately, in silverlight there is no DirectoryCatalog to load a folder. Background There are scenarios that Silverlight application may need to load all XAP files in a folder in the web server side, for example: If the Silverlight application is extensible and supports plug-ins, there would be a /ClinetBin/Plugins/ folder in the web server, and each pluin would be an individual XAP file in the folder. In this scenario, after the application is loaded and started up, it would like to load all XAP files in /ClinetBin/Plugins/ folder. If the aplication supports themes, there would be a /ClinetBin/Themes/ folder, and each theme would be an individual XAP file too. The application would qalso need to load all XAP files in /ClinetBin/Themes/. It is useful if we have a DirectoryCatalog: DirectoryCatalog catalog = new DirectoryCatalog("/Plugins"); catalog.DownloadCompleted += (sender, e) => { }; catalog.DownloadAsync(); Obviously, the implementation of DirectoryCatalog is easy. It is just a collection of DeploymentCatalog class. Retrieve file list from a directory Of course, to retrieve file list from a web folder, the folder’s “Directory Browsing” feature must be enabled: So when the folder is requested, it responses a list of its files and folders: This is nothing but a simple HTML page: <html> <head> <title>localhost - /Folder/</title> </head> <body> <h1>localhost - /Folder/</h1> <hr> <pre> <a href="/">[To Parent Directory]</a><br> <br> 1/3/2011 7:22 PM 185 <a href="/Folder/File.txt">File.txt</a><br> 1/3/2011 7:22 PM &lt;dir&gt; <a href="/Folder/Folder/">Folder</a><br> </pre> <hr> </body> </html> For the ASP.NET Deployment Server of Visual Studio, directory browsing is enabled by default: The HTML <Body> is almost the same: <body bgcolor="white"> <h2><i>Directory Listing -- /ClientBin/</i></h2> <hr width="100%" size="1" color="silver"> <pre> <a href="/">[To Parent Directory]</a> Thursday, January 27, 2011 11:51 PM 282,538 <a href="Test.xap">Test.xap</a> Tuesday, January 04, 2011 02:06 AM &lt;dir&gt; <a href="TestFolder/">TestFolder</a> </pre> <hr width="100%" size="1" color="silver"> <b>Version Information:</b>&nbsp;ASP.NET Development Server 10.0.0.0 </body> The only difference is, IIS’s links start with slash, but here the links do not. Here one way to get the file list is read the href attributes of the links: [Pure] private IEnumerable<Uri> GetFilesFromDirectory(string html) { Contract.Requires(html != null); Contract.Ensures(Contract.Result<IEnumerable<Uri>>() != null); return new Regex( "<a href=\"(?<uriRelative>[^\"]*)\">[^<]*</a>", RegexOptions.IgnoreCase | RegexOptions.CultureInvariant) .Matches(html) .OfType<Match>() .Where(match => match.Success) .Select(match => match.Groups["uriRelative"].Value) .Where(uriRelative => uriRelative.EndsWith(".xap", StringComparison.Ordinal)) .Select(uriRelative => { Uri baseUri = this.Uri.IsAbsoluteUri ? 
this.Uri : new Uri(Application.Current.Host.Source, this.Uri); uriRelative = uriRelative.StartsWith("/", StringComparison.Ordinal) ? uriRelative : (baseUri.LocalPath.EndsWith("/", StringComparison.Ordinal) ? baseUri.LocalPath + uriRelative : baseUri.LocalPath + "/" + uriRelative); return new Uri(baseUri, uriRelative); }); } Please notice the folders’ links end with a slash. They are filtered by the second Where() query. The above method can find files’ URIs from the specified IIS folder, or ASP.NET Deployment Server folder while debugging. To support other formats of file list, a constructor is needed to pass into a customized method: /// <summary> /// Initializes a new instance of the <see cref="T:System.ComponentModel.Composition.Hosting.DirectoryCatalog" /> class with <see cref="T:System.ComponentModel.Composition.Primitives.ComposablePartDefinition" /> objects based on all the XAP files in the specified directory URI. /// </summary> /// <param name="uri"> /// URI to the directory to scan for XAPs to add to the catalog. /// The URI must be absolute, or relative to <see cref="P:System.Windows.Interop.SilverlightHost.Source" />. /// </param> /// <param name="getFilesFromDirectory"> /// The method to find files' URIs in the specified directory. /// </param> public DirectoryCatalog(Uri uri, Func<string, IEnumerable<Uri>> getFilesFromDirectory) { Contract.Requires(uri != null); this._uri = uri; this._getFilesFromDirectory = getFilesFromDirectory ?? this.GetFilesFromDirectory; this._webClient = new Lazy<WebClient>(() => new WebClient()); // Initializes other members. } When the getFilesFromDirectory parameter is null, the above GetFilesFromDirectory() method will be used as default. Download the directory’s XAP file list Now a public method can be created to start the downloading: /// <summary> /// Begins downloading the XAP files in the directory. /// </summary> public void DownloadAsync() { this.ThrowIfDisposed(); if (Interlocked.CompareExchange(ref this._state, State.DownloadStarted, State.Created) == 0) { this._webClient.Value.OpenReadCompleted += this.HandleOpenReadCompleted; this._webClient.Value.OpenReadAsync(this.Uri, this); } else { this.MutateStateOrThrow(State.DownloadCompleted, State.Initialized); this.OnDownloadCompleted(new AsyncCompletedEventArgs(null, false, this)); } } Here the HandleOpenReadCompleted() method is invoked when the file list HTML is downloaded. Download all XAP files After retrieving all files’ URIs, the next thing becomes even easier. 
HandleOpenReadCompleted() just uses built in DeploymentCatalog to download the XAPs, and aggregate them into one AggregateCatalog: private void HandleOpenReadCompleted(object sender, OpenReadCompletedEventArgs e) { Exception error = e.Error; bool cancelled = e.Cancelled; if (Interlocked.CompareExchange(ref this._state, State.DownloadCompleted, State.DownloadStarted) != State.DownloadStarted) { cancelled = true; } if (error == null && !cancelled) { try { using (StreamReader reader = new StreamReader(e.Result)) { string html = reader.ReadToEnd(); IEnumerable<Uri> uris = this._getFilesFromDirectory(html); Contract.Assume(uris != null); IEnumerable<DeploymentCatalog> deploymentCatalogs = uris.Select(uri => new DeploymentCatalog(uri)); deploymentCatalogs.ForEach( deploymentCatalog => { this._aggregateCatalog.Catalogs.Add(deploymentCatalog); deploymentCatalog.DownloadCompleted += this.HandleDownloadCompleted; }); deploymentCatalogs.ForEach(deploymentCatalog => deploymentCatalog.DownloadAsync()); } } catch (Exception exception) { error = new InvalidOperationException(Resources.InvalidOperationException_ErrorReadingDirectory, exception); } } // Exception handling. } In HandleDownloadCompleted(), if all XAPs are downloaded without exception, OnDownloadCompleted() callback method will be invoked. private void HandleDownloadCompleted(object sender, AsyncCompletedEventArgs e) { if (Interlocked.Increment(ref this._downloaded) == this._aggregateCatalog.Catalogs.Count) { this.OnDownloadCompleted(e); } } Exception handling Whether this DirectoryCatelog can work only if the directory browsing feature is enabled. It is important to inform caller when directory cannot be browsed for XAP downloading. private void HandleOpenReadCompleted(object sender, OpenReadCompletedEventArgs e) { Exception error = e.Error; bool cancelled = e.Cancelled; if (Interlocked.CompareExchange(ref this._state, State.DownloadCompleted, State.DownloadStarted) != State.DownloadStarted) { cancelled = true; } if (error == null && !cancelled) { try { // No exception thrown when browsing directory. Downloads the listed XAPs. } catch (Exception exception) { error = new InvalidOperationException(Resources.InvalidOperationException_ErrorReadingDirectory, exception); } } WebException webException = error as WebException; if (webException != null) { HttpWebResponse webResponse = webException.Response as HttpWebResponse; if (webResponse != null) { // Internally, WebClient uses WebRequest.Create() to create the WebRequest object. Here does the same thing. WebRequest request = WebRequest.Create(Application.Current.Host.Source); Contract.Assume(request != null); if (request.CreatorInstance == WebRequestCreator.ClientHttp && // Silverlight is in client HTTP handling, all HTTP status codes are supported. webResponse.StatusCode == HttpStatusCode.Forbidden) { // When directory browsing is disabled, the HTTP status code is 403 (forbidden). error = new InvalidOperationException( Resources.InvalidOperationException_ErrorListingDirectory_ClientHttp, webException); } else if (request.CreatorInstance == WebRequestCreator.BrowserHttp && // Silverlight is in browser HTTP handling, only 200 and 404 are supported. webResponse.StatusCode == HttpStatusCode.NotFound) { // When directory browsing is disabled, the HTTP status code is 404 (not found). 
error = new InvalidOperationException( Resources.InvalidOperationException_ErrorListingDirectory_BrowserHttp, webException); } } } this.OnDownloadCompleted(new AsyncCompletedEventArgs(error, cancelled, this)); } Please notice that a Silverlight 3+ application can work either in client HTTP handling or in browser HTTP handling. One difference is: in browser HTTP handling, only HTTP status codes 200 (OK) and 404 (anything not OK, including 500, 403, etc.) are supported; in client HTTP handling, all HTTP status codes are supported. So in the above code, exceptions in the two modes are handled differently. Conclusion Here is what the complete DirectoryCatalog looks like. Please click here to download the source code; a simple unit test is included. This is a rough implementation, and, for convenience, some of the design and coding simply follow the built-in AggregateCatalog class and Deployment class. Please feel free to modify the code, and please kindly tell me if any issue is found.
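    As a rough usage illustration: consuming the finished catalog from a Silverlight application might look like the sketch below, using the Uri/getFilesFromDirectory constructor and the DownloadCompleted event shown above. The folder path and the error handling are assumptions, not code from the download.

        using System;

        public static class PluginLoader
        {
            public static void LoadPlugins()
            {
                // Constructor from the post: a Uri plus an optional file-list parser (null = default).
                DirectoryCatalog catalog = new DirectoryCatalog(
                    new Uri("/ClientBin/Plugins/", UriKind.Relative), null);

                catalog.DownloadCompleted += (sender, e) =>
                {
                    if (e.Error != null)
                    {
                        // Typically means directory browsing is disabled on the folder (403/404).
                        return;
                    }
                    // All XAPs in the folder are now part of the catalog and can satisfy imports.
                };

                catalog.DownloadAsync();
            }
        }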

    Read the article

  • Scheduled Deprecation of Legacy Obligation Features

    - by Wes Curtis
    The Obligation object in ETPM includes some functionality and tables that, to our knowledge, are not being used by customers or implementers at this time. Removing this logic and the related tables should improve performance and simplify the logic executed during Obligation maintenance processing. The Release Notes included with ETPM v2.3.1 announced that the product plans to deprecate the functionality on Obligation for Contract Terms, Contract Quantities, Tax Exemptions, Terms & Conditions and Obligation Type Start Options. Our plan is to remove this functionality in the next release of ETPM. We have already confirmed with most project teams that these features are not being used, so the deprecation should have no impact on existing designs or processes. If you think your project may be impacted by this deprecation, please review any Business Object that has been created for the Obligation maintenance object to make sure that no elements are being defined for any of the following child tables:
    - CI_SA_CONTERM
    - CI_SA_CONT_QTY
    - CI_TOU_CONT_VAL
    - CI_SA_TC
    As part of this deprecation, the following administrative tables are being removed along with their related metadata:
    - Contract Quantity Type
    - Tax Exempt Type
    - Terms and Conditions
    Please contact me or the Oracle Tax Product Management team if your implementation has actually used these objects in its designs. We can discuss options to mitigate the impacts of this planned deprecation. We will continue to announce planned deprecations in the Release Notes for each release and will contact project teams ahead of time to confirm that these deprecations will have little to no impact on our customers.

    Read the article

  • Is unit testing development or testing?

    - by Rubio
    I had a discussion with a testing manager about the role of unit and integration testing. She requested that developers report what they have unit and integration tested and how. My perspective is that unit and integration testing are part of the development process, not the testing process. Beyond semantics what I mean is that unit and integration tests should not be included in the testing reports and systems testers should not be concerned about them. My reasoning is based on two things. Unit and integration tests are planned and performed against an interface and a contract, always. Regardless of whether you use formalized contracts you still test what e.g. a method is supposed to do, i.e. a contract. In integration testing you test the interface between two distinct modules. The interface and the contract determine when the test passes. But you always test a limited part of the whole system. Systems testing on the other hand is planned and performed against the system specifications. The spec determines when the test passes. I don't see any value in communicating the breadth and depth of unit and integration tests to the (systems) tester. Suppose I write a report that lists what kind of unit tests are performed on a particular business layer class. What is he/she supposed to take away from that? Judging what should and shouldn't be tested from that is a false conclusion because the system may still not function the way the specs require even though all unit and integration tests pass. This might seem like useless academic discussion but if you work in a strictly formal environment as I do, it's actually important in determining how we do things. Anyway, am I totally wrong? (Sorry for the long post.)

    Read the article

  • Solved: Operation is not valid due to the current state of the object

    - by ChrisD
    We use public static methods decorated with [WebMethod] to support our Ajax postbacks. Recently, I received an error from a UI developer stating he was receiving the following error when attempting his postback:
        {   "Message": "Operation is not valid due to the current state of the object.",   "StackTrace": "   at System.Web.Script.Serialization.ObjectConverter.ConvertDictionaryToObject(IDictionary`2 dictionary, Type type, JavaScriptSerializer serializer, Boolean throwOnError, Object& convertedObject)\r\n   at System.Web.Script.Serialization.ObjectConverter.ConvertObjectToTypeInternal(Object o, Type type, JavaScriptSerializer serializer, Boolean throwOnError, Object& convertedObject)\r\n   at System.Web.Script.Serialization.ObjectConverter.ConvertObjectToTypeMain(Object o, Type type, JavaScriptSerializer serializer, Boolean throwOnError, Object& convertedObject)\r\n   at System.Web.Script.Serialization.JavaScriptObjectDeserializer.DeserializeInternal(Int32 depth)\r\n   at System.Web.Script.Serialization.JavaScriptObjectDeserializer.DeserializeDictionary(Int32 depth)\r\n   at System.Web.Script.Serialization.JavaScriptObjectDeserializer.DeserializeInternal(Int32 depth)\r\n   at System.Web.Script.Serialization.JavaScriptObjectDeserializer.DeserializeDictionary(Int32 depth)\r\n   at System.Web.Script.Serialization.JavaScriptObjectDeserializer.DeserializeInternal(Int32 depth)\r\n   at System.Web.Script.Serialization.JavaScriptObjectDeserializer.BasicDeserialize(String input, Int32 depthLimit, JavaScriptSerializer serializer)\r\n   at System.Web.Script.Serialization.JavaScriptSerializer.Deserialize(JavaScriptSerializer serializer, String input, Type type, Int32 depthLimit)\r\n   at System.Web.Script.Serialization.JavaScriptSerializer.Deserialize[T](String input)\r\n   at System.Web.Script.Services.RestHandler.GetRawParamsFromPostRequest(HttpContext context, JavaScriptSerializer serializer)\r\n   at System.Web.Script.Services.RestHandler.GetRawParams(WebServiceMethodData methodData, HttpContext context)\r\n   at System.Web.Script.Services.RestHandler.ExecuteWebServiceCall(HttpContext context, WebServiceMethodData methodData)",   "ExceptionType": "System.InvalidOperationException" }
    Googling this error brought me little support. All the results talked about increasing the aspnet:MaxJsonDeserializerMembers value to handle larger payloads. Since 1) I’m not using the ASP.NET AJAX model and 2) the payload is very small, this clearly was not the cause of my issue. Here’s the payload the UI developer was sending to the endpoint:
        {   "FundingSource": {     "__type": "XX.YY.Engine.Contract.Funding.EvidenceBasedFundingSource,  XX.YY.Engine.Contract",     "MeansType": 13,     "FundingMethodName": "LegalTender",   },   "AddToProfile": false,   "ProfileNickName": "",   "FundingAmount": 0 }
    By tweaking the JSON I found the culprit. Apparently the default JavaScriptSerializer doesn’t like the assembly name in the __type value. Removing the assembly portion of the type name resolved my issue.
        { "FundingSource": { "__type": "XX.YY.Engine.Contract.Funding.EvidenceBasedFundingSource", "MeansType": 13, "FundingMethodName": "LegalTender", }, "AddToProfile": false, "ProfileNickName": "", "FundingAmount": 0 }

    Read the article

  • Project Showcase: SaaS Web Apps Hits a Home Run with New SCMS Database

    - by Webgui
    We love seeing projects from start to finish, and we’re happy to share the latest example with you.
    Who: SaaS Web Apps – they use Software as a Service to create web applications that look and feel like desktop applications.
    What: SaaS Web Apps needed to build a Sports Contract Management System (SCMS) for one of its customers, Premier Stinson Sports.
    Why: The SCMS database is used for collecting, analyzing and recording college coach and athletic directors’ employment and contract data.
    The Challenge: Premier Stinson Sports works with a number of partners, each with its own needs and unique requirements. For example, USA Today uses the system to provide cutting-edge news analysis, while The National Sports Law Institute of Marquette University Law School uses it for the latest sports contract data and student analysis. In addition, the system needed to be secure due to the sensitivity of the data; it was essential that the user security and permissions be easily configurable. As always, performance was a key factor, especially with the intense reporting and analytical capabilities for this project. Because of this, most of the processing had to be done on a dedicated server, but the project called for the richness and responsiveness of a desktop application.
    The Solution: To execute the project, SaaS Web Apps used ASP.NET-based Visual WebGui from Gizmox, combined with SQL Server 2008 and SQL Reporting Services. This combination resulted in a quick deployment for SaaS Web Apps’ customers.
    The Result: The completed project gave each partner the scalability and availability of a web application with the performance and security of a desktop application. As an example, USA Today pulls data from this database to give readers the latest sports stats – Salary analysis of 2010 Football Bowl Subdivision Coaches. And here’s a screenshot of the database itself. Great work, SaaS Web Apps!

    Read the article

  • How are the conceptual pairs Abstract/Concrete, Generic/Specific, and Complex/Simple related to one another in software architecture?

    - by tjb1982
    (= 2 (+ 1 1)) take the above. The requirement of the '=' predicate is that its arguments be comparable. Any two structures are comparable in this case, and so the contract/requirement is pretty generic. The '+' predicate requires that its arguments be numbers. That's more specific. (socket domain type protocol) the arguments here are much more specific (even though the arguments are still just numbers and the function itself returns a file descriptor, which is itself an int), but the arguments are more abstract, and the implementation is built up from other functions whose abstractions are less abstract, which are themselves built from less and less abstract abstractions. To the point where the requirements are something like move from one location to another, observe whether the switch at that location is on or off, turn the switch on or off, or leave it the same, etc. But are functions also less and less complex the less abstract they are? And is there a relationship between the number and range of arguments of a function and the complexity of its implementation, as you go from more abstract to less abstract, and vice versa? (= 2 (+ 1 1) 2r10) the '=' predicate is more generic than the '+' predicate, and thus could be more complex in its implementation. The '+' predicate's contract is less generic, and so could be less complex in its implementation. Is this even a little correct? What about the 'socket' function? Each of those arguments is a number of some kind. What they represent, though, is something more elaborate. It also returns a number (just like the others do), which is also a representation of something conceptually much more elaborate than a number. To boil it down, I'm asking if there is a relationship between the following dimensions, and why: Abstract/Concrete Complex/Simple Generic/Specific And more specifically, do different configurations of these dimensions have a specific, measurable impact on the number and range of the arguments (i.e., the contract) of a function?

    Read the article

  • Ajax call to WCF service returns 12031 ERROR_INTERNET_CONNECTION_RESET on new endpoint

    - by cyrix86
    I have extended a WCF service with new functionality in the form of a second service contract. The service.cs now implements both contracts. I have added another endpoint to expose the new contract operations. Here is my web.config relating to the service <system.serviceModel> <bindings> <basicHttpBinding> <binding name="BasicHttpBinding" /> </basicHttpBinding> <webHttpBinding> <binding name="XmlHttpBinding"/> </webHttpBinding> </bindings> <services> <service name="MyNamespace.MyService" behaviorConfiguration="MyServiceBehavior"> <!-- Service Endpoints --> <endpoint address="xmlHttp1" behaviorConfiguration="XmlHttpBehavior" binding="webHttpBinding" bindingConfiguration="XmlHttpBinding" contract="MyNamespace.IContract1" /> <endpoint address="xmlHttp2" binding="webHttpBinding" behaviorConfiguration="XmlHttpBehavior" bindingConfiguration="XmlHttpBinding" contract="MyNamespace.IContract2" /> <endpoint address="" binding="basicHttpBinding" bindingConfiguration="BasicHttpBinding" contract="MyNamespace.IContract1" /> </service> </services> <behaviors> <serviceBehaviors> <behavior name="MyServiceBehavior"> <!-- To avoid disclosing metadata information, set the value below to false and remove the metadata endpoint above before deployment --> <serviceMetadata httpGetEnabled="true"/> <!-- To receive exception details in faults for debugging purposes, set the value below to true. Set to false before deployment to avoid disclosing exception information --> <serviceDebug includeExceptionDetailInFaults="false"/> </behavior> </serviceBehaviors> <endpointBehaviors> <behavior name="XmlHttpBehavior"> <webHttp/> </behavior> </endpointBehaviors> </behaviors> </system.serviceModel> From Javascript, calling 'http://server/wcfServiceApp/MyService.svc/xmlHttp1/Method1' still works fine. Calling 'http://server/wcfServiceApp/MyService.svc/xmlHttp2/Method2' returns the 12031 error. There must be something simple I'm not doing, any help is appreciated.
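    No claim that this is the missing piece here, but a 12031 reset that only affects the new endpoint frequently means the call is failing on the server before any response is written. With the webHttp behavior, each operation's HTTP method, body style and message format come from its WebGet/WebInvoke attributes (POST with wrapped parameters being the default), so a mismatch between what the JavaScript sends and what the new operation expects will kill the call. For reference, one possible shape of the second contract; every attribute value below is an assumption, not a confirmed fix:

        using System.ServiceModel;
        using System.ServiceModel.Web;

        namespace MyNamespace
        {
            [ServiceContract]
            public interface IContract2
            {
                [OperationContract]
                [WebInvoke(Method = "POST",
                           BodyStyle = WebMessageBodyStyle.WrappedRequest,
                           RequestFormat = WebMessageFormat.Json,
                           ResponseFormat = WebMessageFormat.Json)]
                string Method2(string input);
            }
        }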

    Read the article

  • Error message: "Two different contracts have the same ConfigurationName" when downloading wsdl from

    - by rwwilden
    I get the following error message when I try to use svcutil to generate a client proxy for a xamlx file that is hosted by AppFabric beta 2: Two different contracts have the same ConfigurationName I understand the message, however, I cannot find its cause or how to fix it. I'm following the 'Introduction to Workflow Services' lab from the VS2010RC training kit. The web application has two services: SubmitApplication.xamlx and EducationScreening.xamlx. I'm not sure why but both of them have four endpoints. If I take a look via the AppFabric Dashboard in IIS Mgmt Studio: basicHttpBinding (Contract: *) (Type: Application(Default)) netNamedPipeBinding (Contract: System.ServiceModel.Activities.IWorkflowInstanceManagement) (Type: System (workflowControlEndpoint)) netNamedPipeBinding (Contract: *) (Type: Application (Default)) serviceMetadataHttpGetBinding (Contract: serviceMetadataHttpGetContract) (Type: System (serviceMetadataEndpoint)) When taking a look at the SubmitApplication.xamlx in a browser, I see the following stacktrace: [InvalidOperationException: Two different contracts have the same ConfigurationName.] System.ServiceModel.Activities.WorkflowServiceHost.CreateDescription(IDictionary`2& implementedContracts) +361 System.ServiceModel.ServiceHostBase.InitializeDescription(UriSchemeKeyedCollection baseAddresses) +174 System.ServiceModel.Activities.WorkflowServiceHost.InitializeDescription(WorkflowService serviceDefinition, UriSchemeKeyedCollection baseAddresses) +82 System.ServiceModel.Activities.WorkflowServiceHost.InitializeFromConstructor(WorkflowService serviceDefinition, Uri[] baseAddresses) +206 System.ServiceModel.Activities.Activation.WorkflowServiceHostFactory.CreateWorkflowServiceHost(WorkflowService service, Uri[] baseAddresses) +43 System.ServiceModel.Activities.Activation.WorkflowServiceHostFactory.CreateServiceHost(String constructorString, Uri[] baseAddresses) +974 System.ServiceModel.HostingManager.CreateService(String normalizedVirtualPath) +1423 System.ServiceModel.HostingManager.ActivateService(String normalizedVirtualPath) +50 System.ServiceModel.HostingManager.EnsureServiceAvailable(String normalizedVirtualPath) +1132 [ServiceActivationException: The service '/HRApplicationServices/SubmitApplication.xamlx' cannot be activated due to an exception during compilation. The exception message is: Two different contracts have the same ConfigurationName..] System.Runtime.AsyncResult.End(IAsyncResult result) +889824 System.ServiceModel.Activation.HostedHttpRequestAsyncResult.End(IAsyncResult result) +179150 System.Web.AsyncEventExecutionStep.OnAsyncEventCompletion(IAsyncResult ar) +107 Can anyone tell me what I'm doing wrong? I haven't configured any of the bindings myself. The BasicHttpBinding is what you get by default in .NET 4 when hosting a service inside a web application. The other bindings are configured by AppFabric. I can't find their configuration anywhere. Kind regards, Ronald Wildenberg

    Read the article

  • Multiple Base Addresses and Multiple Endpoints in WCF

    - by mnhab
    I'm using two bindings TCP and HTTP. I want to give mex data on both bindings. What I want is that the mexHttpBinding only exposes the HTTP services while the mexTcpBinding exposes TCP services only. Or is this possible that I access stats service only from HTTP binding and the eventLogging service from TCP? For Example: For TCP I should only have net.tcp://localhost:9001/ABC/mex net.tcp://localhost:9001/ABC/eventLogging For HTTP http://localhost:9002/ABC/stats http://localhost:9002/ABC/mex When I connect with any of the base address (using the WCF Test Client) I'm able to access all the services? Like when I connect with net.tcp://localhost:9001/ABC I'm able to use the services which are offered on the HTTP binding. Why is that so? <system.serviceModel> <services> <service behaviorConfiguration="ABCServiceBehavior" name="ABC.Data.DataServiceWCF"> <endpoint address="eventLogging" binding="netTcpBinding" contract="ABC.Campaign.IEventLoggingService" /> <endpoint address="mex" binding="mexTcpBinding" bindingConfiguration="" contract="IMetadataExchange" /> <endpoint address="stats" binding="basicHttpBinding" contract="ABC.Data.IStatsService" /> <endpoint address="mex" binding="mexHttpBinding" contract="IMetadataExchange" /> <host> <baseAddresses> <add baseAddress="net.tcp://localhost:9001/ABC" /> <add baseAddress="http://localhost:9002/ABC" /> </baseAddresses> </host> </service> </services> <behaviors> <serviceBehaviors> <behavior name="ABCServiceBehavior"> <serviceMetadata httpGetEnabled="false" /> <serviceDebug includeExceptionDetailInFaults="true" /> </behavior> </serviceBehaviors> </behaviors> </system.serviceModel>

    Read the article

  • ORM Persistence by Reachability violates Aggregate Root Boundaries?

    - by Johannes Rudolph
    Most common ORMs implement persistence by reachability, either as the default object-graph change-tracking mechanism or as an optional one. Persistence by reachability means the ORM will check the aggregate root's object graph and determine whether any objects are reachable (also indirectly) that are not stored inside its identity map (Linq2Sql) or don't have their identity column set (NHibernate). In NHibernate this corresponds to cascade="save-update"; for Linq2Sql it is the only supported mechanism. They both, however, only implement it for the "add" side of things; objects removed from the aggregate root's graph must be marked for deletion explicitly. In a DDD context one would use a Repository per Aggregate Root. Objects inside an Aggregate Root may only hold references to other Aggregate Roots. Due to persistence by reachability it is possible that this other root will be inserted into the database even though its corresponding repository wasn't invoked at all! Consider the following two Aggregate Roots: Contract and Order. Request is part of the Contract Aggregate. The object graph looks like Contract->Request->Order. Each time a Contractor makes a request, a corresponding order is created. As this involves two different Aggregate Roots, this operation is encapsulated by a Service.
        //Unit Of Work begins
        Request r = ...;
        Contract c = ContractRepository.FindSingleByKey(1);
        Order o = OrderForRequest(r); // creates a new order aggregate
        r.Order = o; // associates the aggregates
        c.Request.Add(r);
        ContractRepository.SaveOrUpdate(c); // OrderAggregate is reachable and will be inserted
    Since this operation happens in a Service, I could still invoke the OrderRepository manually, however I wouldn't be forced to! Persistence by reachability is a very useful feature inside Aggregate Roots, however I see no way to enforce my Aggregate Boundaries.
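    For the NHibernate side there is at least a blunt instrument, sketched here with Fluent NHibernate as an assumed mapping style (the entity classes are stripped to the bare minimum): map the cross-aggregate reference with no save-update cascade, so the Order root only reaches the database through its own repository.

        using FluentNHibernate.Mapping;

        public class Order
        {
            public virtual int Id { get; protected set; }
        }

        public class Request
        {
            public virtual int Id { get; protected set; }
            public virtual Order Order { get; set; }
        }

        public class RequestMap : ClassMap<Request>
        {
            public RequestMap()
            {
                Id(x => x.Id);
                // No save-update cascade across the aggregate boundary: saving the
                // Contract/Request graph will not implicitly insert a transient Order.
                References(x => x.Order).Cascade.None();
            }
        }

    With no cascade configured, NHibernate throws at flush time if a transient Order is still reachable from the Contract graph, which at least surfaces the boundary violation instead of silently persisting the other aggregate.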

    Read the article

  • Run WCF ServiceHost with multiple contracts

    - by Sam
    Running a ServiceHost with a single contract is working fine like this: servicehost = new ServiceHost(typeof(MyService1)); servicehost.AddServiceEndpoint(typeof(IMyService1), new NetTcpBinding(), "net.tcp://127.0.0.1:800/MyApp/MyService1"); servicehost.Open(); Now I'd like to add a second (3rd, 4th, ...) contract. My first guess would be to just add more endpoints like this: servicehost = new ServiceHost(typeof(MyService1)); servicehost.AddServiceEndpoint(typeof(IMyService1), new NetTcpBinding(), "net.tcp://127.0.0.1:800/MyApp/MyService1"); servicehost.AddServiceEndpoint(typeof(IMyService2), new NetTcpBinding(), "net.tcp://127.0.0.1:800/MyApp/MyService2"); servicehost.Open(); But of course this does not work, since in the creation of ServiceHost I can either pass MyService1 as parameter or MyService2 - so I can add a lot of endpoints to my service, but all have to use the same contract, since I only can provide one implementation? I got the feeling I'm missing the point, here. Sure there must be some way to provide an implementation for every endpoint-contract I add, or not?
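    The usual resolution, sketched below with invented contracts rather than taken from an answer: a single service class may implement any number of contracts, and the one ServiceHost then takes one AddServiceEndpoint call per contract; genuinely separate implementation classes need separate ServiceHost instances.

        using System;
        using System.ServiceModel;

        [ServiceContract]
        public interface IMyService1
        {
            [OperationContract]
            string Hello1();
        }

        [ServiceContract]
        public interface IMyService2
        {
            [OperationContract]
            string Hello2();
        }

        // One implementation class satisfies both contracts...
        public class CombinedService : IMyService1, IMyService2
        {
            public string Hello1() { return "from contract 1"; }
            public string Hello2() { return "from contract 2"; }
        }

        class Host
        {
            static void Main()
            {
                // ...so the single ServiceHost can expose one endpoint per contract.
                ServiceHost servicehost = new ServiceHost(typeof(CombinedService));
                servicehost.AddServiceEndpoint(typeof(IMyService1), new NetTcpBinding(),
                    "net.tcp://127.0.0.1:800/MyApp/MyService1");
                servicehost.AddServiceEndpoint(typeof(IMyService2), new NetTcpBinding(),
                    "net.tcp://127.0.0.1:800/MyApp/MyService2");
                servicehost.Open();
                Console.WriteLine("Both endpoints open. Press Enter to stop.");
                Console.ReadLine();
                servicehost.Close();
            }
        }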

    Read the article

  • When am I ready for contracting work?

    - by Kirk Broadhurst
    What skills are required to work on a temp / contracting basis, and how does a developer know when they are ready to work in these circumstances? I have some colleagues who are suggesting contracting work is 'the way to go'; the pay is significantly better. It appears that when a permanent position pays (X times $10k) per year, the corresponding contract position pays almost $X per hour - which is close to twice as much. I look forward to doing this type of work as an experienced expert, but worry that by doing so right now I'd be turning away learning and development opportunities. My assumptions about contract work are the following: less / zero money invested on training and development less concern for job satisfaction and learning ("they won't be here in 12 months") possibly less concern about the overall quality of the project ("we won't be here in 12 months") There are similar questions on stack overflow at the moment but what I've really looking for is: What does the developer give up by moving to contract work? Is it preferable to have structured learning in a permanent position rather than seat-of-the-pants learning in a contract position? What skill leve should the contractor have before making the move? Is there still the same kind of growth in contracting that there is in permanent positions? Rightly or wrongly, I see this as a choice between $ (contracting) and learning / development (permanent). Is this fair?

    Read the article

  • WCF - separating service contracts and partial deriving?

    - by dwhittenburg
    So, I've separated my WCF service contracts into discrete contracts for re-use. I used to have IOneServiceContract that contained 3 functions: Function1, Function2, Function3. I've separated this service contract into two discrete service contracts: IServiceContract1 and IServiceContract2. IServiceContract1 contains Function1 and IServiceContract2 contains Function2 and Function3. This will allow me to re-use the discrete IServiceContract1 and/or IServiceContract2 to build a new service contract that represents the contract for the public service. Knowing this...and hopefully I haven't messed up the description so that you can't follow the rest... I have two services, IService1 and IService2. IService1 implements IServiceContract1 and IServiceContract2. This works perfectly, as IService1 needs to implement all of the functions: Function1, Function2, Function3. IService2, however, doesn't need to implement all of the functions of IServiceContract2, only Function1. Is there a way for IService2 to partially implement the contract? I know that sounds ridiculous. Is the correct way to handle this situation to try and logically separate IServiceContract2 so that IService2 only has to implement the pieces that it needs? Thanks
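    The short answer to "partially implement the contract" is no: a class either implements the whole interface or it doesn't. The logical separation suggested at the end is the usual route, sketched here with invented contract names, since WCF supports composing contracts through interface inheritance:

        using System.ServiceModel;

        [ServiceContract]
        public interface IFunction2Contract
        {
            [OperationContract]
            void Function2();
        }

        [ServiceContract]
        public interface IFunction3Contract
        {
            [OperationContract]
            void Function3();
        }

        // The public-facing contract is recomposed from the smaller pieces.
        [ServiceContract]
        public interface IServiceContract2 : IFunction2Contract, IFunction3Contract
        {
        }

        // A service that can only honor part of the old contract implements just that piece
        // and exposes an endpoint for it, instead of "partially implementing" IServiceContract2.
        public class Service2 : IFunction2Contract
        {
            public void Function2() { /* ... */ }
        }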

    Read the article

  • how to parametrize an import in a View?

    - by Gianluca Colucci
    Hello everybody, I am looking for some help and I hope that some good soul out there will be able to give me a hint :) In the app I am building, I have a View. When an instance of this View gets created, it "imports" (MEF) the corresponding ViewModel. Here is some code:
        public partial class ContractEditorView : Window { public ContractEditorView () { InitializeComponent(); CompositionInitializer.SatisfyImports(this); } [Import(ViewModelTypes.ContractEditorViewModel)] public object ViewModel { set { DataContext = value; } } }
    and here is the export for the ViewModel:
        [PartCreationPolicy(CreationPolicy.NonShared)] [Export(ViewModelTypes.ContractEditorViewModel)] public class ContractEditorViewModel: ViewModelBase { public ContractEditorViewModel() { _contract = new Models.Contract(); } }
    Now, this works if I want to open a new window to create a new contract... But it does not if I want to use the same window to edit an existing contract... In other words, I would like to add a second constructor both in the View and in the ViewModel: the original constructor would accept no parameters and therefore create a new contract; to the new constructor - the one I'd like to add - I'd like to pass an ID so that I can load a given entity... What I don't understand is how to pass this ID to the ViewModel import... Thanks in advance, Cheers, Gianluca.
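    One common workaround rather than a definitive answer: keep the import parameterless so MEF can satisfy it, and hand the existing entity (or its ID) to the ViewModel after composition. The LoadContract method below is invented for illustration; the attributes and base class are the ones from the question.

        using System.ComponentModel.Composition;

        [PartCreationPolicy(CreationPolicy.NonShared)]
        [Export(ViewModelTypes.ContractEditorViewModel)]
        public class ContractEditorViewModel : ViewModelBase
        {
            private Models.Contract _contract;

            // MEF keeps using the parameterless constructor; "new contract" stays the default state.
            public ContractEditorViewModel()
            {
                _contract = new Models.Contract();
            }

            // Called by the View (or whatever opens it) when an existing contract should be edited.
            public void LoadContract(Models.Contract existingContract)
            {
                _contract = existingContract;
            }
        }

    The View can then expose a matching EditContract method that fetches the entity and forwards it to the imported ViewModel, so the import itself stays parameterless.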

    Read the article

  • flush XML tags having certain value

    - by azathoth
    Hello all I have a webservice that returns an XML like this: <?xml version="1.0" encoding="UTF-8"?> <contractInformation> <contract> <idContract type="int">87191827</idContract> <region type="varchar">null</region> <dueDate type="varchar">null</dueDate> <contactName type="varchar">John Smith</contactName> </contract> </contractInformation> I want to empty every tag that contains the string null, so it would look like this: <?xml version="1.0" encoding="UTF-8"?> <contractInformation> <contract> <idContract type="int">87191827</idContract> <region type="varchar"/> <dueDate type="varchar"/> <contactName type="varchar">John Smith</contactName> </contract> </contractInformation> How do I accomplish that by using XSLT?

    Read the article

  • Consume WCF Service InProcess using Agatha and WCF

    - by REA_ANDREW
    I have been looking into this lately for a specific reason.  Some integration tests I want to write I want to control the types of instances which are used inside the service layer but I want that control from the test class instance.  One of the problems with just referencing the service is that a lot of the time this will by default be done inside a different process.  I am using StructureMap as my DI of choice and one of the tools which I am using inline with RhinoMocks is StructureMap.AutoMocking.  With StructureMap the main entry point is the ObjectFactory.  This will be process specific so if I decide that the I want a certain instance of a type to be used inside the ServiceLayer I cannot configure the ObjectFactory from my test class as that will only apply to the process which it belongs to. This is were I started thinking about two things: Running a WCF in process Being able to share mocked instances across processes A colleague in work pointed me to a project which is for the latter but I thought that it would be a better solution if I could run the WCF Service in process.  One of the projects which I use when I think about WCF Services is AGATHA, and the one which I have to used to try and get my head around doing this. Another asset I have is a book called Programming WCF Services by Juval Lowy and if you have not heard of it or read it I would definately recommend it.  One of the many topics that is inside this book is the type of configuration you need to communicate with a service in the same process, and it turns out to be quite simple from a config point of view. <system.serviceModel> <services> <service name="Agatha.ServiceLayer.WCF.WcfRequestProcessor"> <endpoint address ="net.pipe://localhost/MyPipe" binding="netNamedPipeBinding" contract="Agatha.Common.WCF.IWcfRequestProcessor"/> </service> </services> <client> <endpoint name="MyEndpoint" address="net.pipe://localhost/MyPipe" binding="netNamedPipeBinding" contract="Agatha.Common.WCF.IWcfRequestProcessor"/> </client> </system.serviceModel>   You can see here that I am referencing the Agatha object and contract here, but also that my binding and the address is something called Named Pipes.  THis is sort of the “Magic” which makes it happen in the same process. Next I need to open the service prior to calling the methods on a proxy which I also need.  My initial attempt at the proxy did not use any Agatha specific coding and one of the pains I found was that you obviously need to give your proxy the known types which the serializer can be aware of.  So we need to add to the known types of the proxy programmatically.  I came across the following blog post which showed me how easy it was http://bloggingabout.net/blogs/vagif/archive/2009/05/18/how-to-programmatically-define-known-types-in-wcf.aspx. First Pass So with this in mind, and inside a console app this was my first pass at consuming a service in process.  First here is the proxy which I made making use of the Agatha IWcfRequestProcessor contract. 
public class InProcProxy : ClientBase<Agatha.Common.WCF.IWcfRequestProcessor>, Agatha.Common.WCF.IWcfRequestProcessor { public InProcProxy() { } public InProcProxy(string configurationName) : base(configurationName) { } public Agatha.Common.Response[] Process(params Agatha.Common.Request[] requests) { return Channel.Process(requests); } public void ProcessOneWayRequests(params Agatha.Common.OneWayRequest[] requests) { Channel.ProcessOneWayRequests(requests); } } So with the proxy in place I could use it after opening the service; here is the code I use inside the console app to make the request. static void Main(string[] args) { ComponentRegistration.Register(); ServiceHost serviceHost = new ServiceHost(typeof(Agatha.ServiceLayer.WCF.WcfRequestProcessor)); serviceHost.Open(); Console.WriteLine("Service is running...."); using (var proxy = new InProcProxy()) { foreach (var operation in proxy.Endpoint.Contract.Operations) { foreach (var t in KnownTypeProvider.GetKnownTypes(null)) { operation.KnownTypes.Add(t); } } var request = new GetProductsRequest(); var responses = proxy.Process(new[] { request }); var response = (GetProductsResponse)responses[0]; Console.WriteLine("{0} Products have been retrieved", response.Products.Count); } serviceHost.Close(); Console.WriteLine("Finished"); Console.ReadLine(); } Here I used Agatha's KnownTypeProvider to easily get all the types I need for the service/proxy and add them to the proxy.  My request handler for this was just a test one which always returns two products. public class GetProductsHandler : RequestHandler<GetProductsRequest,GetProductsResponse> { public override Agatha.Common.Response Handle(GetProductsRequest request) { return new GetProductsResponse { Products = new List<ProductDto> { new ProductDto{}, new ProductDto{} } }; } } Second Pass: After doing this I started reading up on some more resources, including more by Davy Brion and others on Agatha.  It turns out that the work I did above to create a class derived from ClientBase implementing Agatha.Common.WCF.IWcfRequestProcessor was not necessary, thanks to a nice class which is present inside the Agatha code base, RequestProcessorProxy, which takes care of this for you! :-) So, discarding the proxy class I made and changing my code to use RequestProcessorProxy, I am now left with the following: static void Main(string[] args) { ComponentRegistration.Register(); ServiceHost serviceHost = new ServiceHost(typeof(Agatha.ServiceLayer.WCF.WcfRequestProcessor)); serviceHost.Open(); Console.WriteLine("Service is running...."); using (var proxy = new RequestProcessorProxy()) { var request = new GetProductsRequest(); var responses = proxy.Process(new[] { request }); var response = (GetProductsResponse)responses[0]; Console.WriteLine("{0} Products have been retrieved", response.Products.Count); } serviceHost.Close(); Console.WriteLine("Finished"); Console.ReadLine(); }   Cheers for now, Andy References Agatha WCF InProcess Without WCF StructureMap.AutoMocking Cross Process Mocking Agatha Programming WCF Services by Juval Lowy
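As a closing usage note on the approach above: the same in-process, named-pipe host can be wrapped in a test fixture so each integration test opens the Agatha service, talks to it through RequestProcessorProxy, and keeps control of the process-wide container configuration from the test class. The sketch below is an assumption-laden illustration rather than part of the original post: it assumes NUnit, the same app.config and ComponentRegistration.Register() call shown above, and the demo GetProductsRequest/GetProductsResponse types.

using System.ServiceModel;
using NUnit.Framework;
// plus the same Agatha and request/response usings as the console sample above

[TestFixture]
public class InProcessAgathaTests
{
    private ServiceHost _serviceHost;

    [SetUp]
    public void OpenInProcessHost()
    {
        // Same registration and named-pipe endpoints as the console sample
        ComponentRegistration.Register();
        _serviceHost = new ServiceHost(typeof(Agatha.ServiceLayer.WCF.WcfRequestProcessor));
        _serviceHost.Open();
    }

    [TearDown]
    public void CloseInProcessHost()
    {
        _serviceHost.Close();
    }

    [Test]
    public void GetProducts_returns_the_two_stubbed_products()
    {
        // RequestProcessorProxy picks up the net.pipe client endpoint from config
        using (var proxy = new RequestProcessorProxy())
        {
            var responses = proxy.Process(new[] { new GetProductsRequest() });
            var response = (GetProductsResponse)responses[0];

            Assert.AreEqual(2, response.Products.Count);
        }
    }
}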

    Read the article

  • Contracting as a Software Developer in the UK

    - by Frez
    Having had some 15 years' experience of working as a software contractor, I am often asked by developers who work as permanent employees (permies) about the pros and cons of working as a software consultant through my own limited company, and whether the move would be a good one for them. Whilst it is possible to contract using other financial vehicles such as umbrella companies, this article will only consider limited companies as that is what I have experience of using. Contracting or consultancy requires a different mind-set from being a permanent member of staff, and not all developers are capable of this shift in attitude. Whilst you can look forward to an increase in the money you take home, there are real risks and expenses you would not normally be exposed to as a permie. So let us have a look at the pros and cons. Pros: More money: There is no doubt that whilst you are working on contracts you will earn significantly more than you would as a permanent employee. Furthermore, working through a limited company is more tax efficient. Less politics: You really have no need to involve yourself in office politics. When the end of the day comes you can go home and not think or worry about the power struggles within the company you are contracted to. Your career progression is not tied to the company. Expenses from gross income: All your expenses of trading as a business will come out of your company's gross income, i.e. before tax. This covers travelling expenses (provided you have not been at the same client/location for more than two years), internet subscriptions, professional subscriptions, software, hardware, accountancy services and so on. Cons: Work is more transient: Contracts typically range from a couple of weeks to a year, although they will most likely start at 3 months. However, most contracts are extended, either because the project you have been brought in to help with takes longer to deliver than expected, the client decides they can use you on other aspects of the project, or the client decides they would like to use you on other projects. The temporary nature of the work means that you will have down-time between contracts while you secure new opportunities, during which time your company will have no income. You may need to attend several interviews before securing a new contract. Accountancy expenses: Your company is a separate entity and there are accountancy requirements which, unless you like paperwork, mean your company will need to appoint an accountant to prepare your company's accounts. It may also be worth purchasing some accountancy software; talk to your accountant about this as they may prefer you to use a particular software package so they can integrate it with their systems. VAT: You will need to register your company for VAT.
This is tax neutral for you, as the VAT you charge your clients is passed on to the government less any VAT you reclaim on expenses, but it is additional paperwork to undertake each quarter. It is worth checking out the Flat Rate VAT Scheme that is available, particularly after the initial expenses of setting up your company are over. No training: Clients take you on based on your skills, not to train you, since they would lose that investment at the end of the contract, so understand that it is unlikely you will receive any training funded by a client. However, learning new skills during a contract is possible, and you may choose to accept a contract on a lower rate if this is guaranteed, as it will help secure future contracts. No financial extras: You will have no free pension, life, accident, sickness or medical insurance unless you choose to purchase them yourself. A financial advisor can give you all the necessary advice in this area, and it is worth taking seriously. A year after I started as a consultant I contracted a serious illness that kept me off work for over two months; my client was very understanding, and it could have been much worse, so it is worth considering what your options might be in the case of illness, death and retirement. Agencies: Whilst it is possible to work directly for end clients, there are pros and cons of working through an agency. The main advantage is cash flow: you invoice the agency and they typically pay you within a week, whereas working directly for a client could have you waiting up to three months to be paid. The downside of working for agencies, especially in the current difficult times, is that they may go out of business and you then have difficulty getting the money you are owed. Tax investigation: It is possible that the Inland Revenue may decide to investigate your company for compliance with tax law. Insurance is available to cover you for this. My personal recommendation would be to join the PCG, as this insurance is included as a benefit of membership. Professional indemnity: Some agencies require that you are covered by professional indemnity insurance; this is a cost you would not incur as a permie. Travel: Unless you live in an area that has an abundance of opportunities, such as central London, it is likely that you will be travelling further, longer and with more expense than if you were permanently employed at a local company. This not only affects you monetarily, but also your quality of life and your ability to keep fit and healthy. Obtaining finance: If you want to secure a mortgage on a property it can be more difficult or expensive, especially if you do not have three years of audited accounts to show a mortgage lender. Caveat: This post is my personal opinion and should not be used as a definitive guide or recommendation to contracting and whether it is suitable for you as an individual, i.e. I accept no responsibility if you decide to take up contracting based on this post and you fare badly for whatever reason.

    Read the article

  • jQuery Templates with ASP.NET MVC

    - by hajan
    In my three previous blogs, I've shown how to use templates in your ASPX website: Introduction to jQuery Templates; jQuery Templates - tmpl(), template() and tmplItem(); jQuery Templates - {Supported Tags}. Now, I will show one real-world example which you may use in your daily work of developing applications with ASP.NET MVC and jQuery. In the following example I will use the Pubs database and retrieve values from the authors table. To access the data, I'm using Entity Framework. Let's go through each step of the scenario: 1. Create a new ASP.NET MVC Web application. 2. Add a new View inside the Home folder (do not select a master page) and add a Controller for your View. 3. BODY code in the HTML <body>     <div>         <h1>Pubs Authors</h1>         <div id="authorsList"></div>     </div> </body> As you can see, in the body we have only one H1 tag and a div with id authorsList where we will append the data from the database.   4. Now, I've created a Pubs model which is connected to the Pubs database, and I've selected only the authors table in my EDMX model. You can use your own database. 5. Next, let's create a method of JsonResult type which will get the data from the database and serialize it into a JSON string. public JsonResult GetAuthors() {     pubsEntities pubs = new pubsEntities();     var authors = pubs.authors.ToList();     return Json(authors, JsonRequestBehavior.AllowGet); } So, I'm creating an instance of pubsEntities and getting all authors into the authors list, then returning the authors list serialized to JSON using the Json method. The JsonRequestBehavior.AllowGet parameter is used to allow GET requests from the client. By default in ASP.NET MVC 2 GET is not allowed because of a security issue with JSON hijacking.   6. Next, let's create a jQuery AJAX function which will call the GetAuthors method. We will use the $.getJSON jQuery method. <script language="javascript" type="text/javascript">     $(function () {         $.getJSON("GetAuthors", "", function (data) {             $("#authorsTemplate").tmpl(data).appendTo("#authorsList");         });     }); </script>   Once the web page is downloaded, the method will be called. The first parameter of $.getJSON() is the URL string, in our case the method name. The second parameter (an empty string in this example) holds the key/value pairs that will be sent to the server, and the third parameter is the callback function that receives the result returned from the server. Inside the callback function we have code that renders the data with the template with id authorsTemplate and appends it to the element with id authorsList.   7. The jQuery Template <script id="authorsTemplate" type="text/html">     <div id="author">         ${au_lname} ${au_fname}         <div id="address">${address}, ${city}</div>         <div id="contractType">                     {{if contract}}             <font color="green">Has contract with the publishing house</font>         {{else}}             <font color="red">Without contract</font>         {{/if}}         <br />         <em> ${printMessage(state)} </em>         <br />                     </div>     </div> </script> As you can see, I have tags containing fields (au_lname, au_fname, etc.) that correspond to the columns of the table in the EDMX model, which are the same as in the database. One more thing to note here is the printMessage(state) function, which is called inside a ${ expression/function/field } tag.
The printMessage function <script language="javascript" type="text/javascript">     function printMessage(s) {         if (s=="CA") return "The author is from California";         else return "The author is not from California";     } </script> So, if state is "CA" it prints "The author is from California"; otherwise it prints "The author is not from California".   HERE IS THE COMPLETE ASPX CODE <!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd"> <html xmlns="http://www.w3.org/1999/xhtml" > <head runat="server">     <title>Database Example :: jQuery Templates</title>     <style type="text/css">         body           {             font-family:Verdana,Arial,Courier New, Sans-Serif;             color:Black;             padding:2px, 2px, 2px, 2px;             background-color:#FF9640;         }         #author         {             display:block;             float:left;             text-decoration:none;             border:1px solid black;             background-color:White;             padding:20px 20px 20px 20px;             margin-top:2px;             margin-right:2px;             font-family:Verdana;             font-size:12px;             width:200px;             height:70px;}         #address           {             font-style:italic;             color:Blue;             font-size:12px;             font-family:Verdana;         }         .author_hover {background-color:Yellow;}     </style>     <script src="http://ajax.aspnetcdn.com/ajax/jQuery/jquery-1.4.4.min.js" type="text/javascript"></script>     <script src="http://ajax.aspnetcdn.com/ajax/jquery.templates/beta1/jquery.tmpl.js" type="text/javascript"></script>     <script language="javascript" type="text/javascript">         function printMessage(s) {             if (s=="CA") return "The author is from California";             else return "The author is not from California";         }     </script>     <script id="authorsTemplate" type="text/html">         <div id="author">             ${au_lname} ${au_fname}             <div id="address">${address}, ${city}</div>             <div id="contractType">                         {{if contract}}                 <font color="green">Has contract with the publishing house</font>             {{else}}                 <font color="red">Without contract</font>             {{/if}}             <br />             <em> ${printMessage(state)} </em>             <br />                         </div>         </div>     </script>     <script language="javascript" type="text/javascript">         $(function () {             $.getJSON("GetAuthors", "", function (data) {                 $("#authorsTemplate").tmpl(data).appendTo("#authorsList");             });         });     </script> </head>     <body>     <div id="title">Pubs Authors</div>     <div id="authorsList"></div> </body> </html> So, in the complete example you also have the CSS styles I'm using to style the output of the page. A screenshot of the end result is displayed on the original web page. You can download the complete source code, including the examples shown in my previous blog posts about jQuery templates and the PPT presentation from my last session at the local .NET UG meeting, from the following DOWNLOAD LINK. Do let me know your feedback. Regards, Hajan
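As a small addition for completeness (not part of the original post): the controller behind this view only needs the GetAuthors action shown above plus an action that returns the view itself. A minimal sketch, assuming the view is Home/Index and the pubsEntities context from step 4, might look like this:

using System.Linq;
using System.Web.Mvc;

public class HomeController : Controller
{
    // Serves the page containing the #authorsList placeholder and the template
    public ActionResult Index()
    {
        return View();
    }

    // Returns the authors table as JSON for the $.getJSON call in the view
    public JsonResult GetAuthors()
    {
        pubsEntities pubs = new pubsEntities();
        var authors = pubs.authors.ToList();
        // AllowGet is required because the data is requested with an HTTP GET
        return Json(authors, JsonRequestBehavior.AllowGet);
    }
}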

    Read the article

  • Service Discovery in WCF 4.0 &ndash; Part 1

    - by Shaun
    When designing a service oriented architecture (SOA) system, there will be a lot of services with many service contracts, endpoints and behaviors. Besides clients calling services, in a large distributed system a service may invoke other services, in which case it needs to know the endpoints it invokes. This is not a problem in a small system, but when you have more than 10 services it becomes one. For example, in my current product there are around 10 services, such as the user authentication service, UI integration service, location service, license service, device monitor service, event monitor service, schedule job service, accounting service, player management service, etc.   Benefit of Discovery Service Since almost all my services need to invoke at least one other service, it would be difficult to make sure all service endpoints are configured correctly in every service. Furthermore, it would be a nightmare if a service changed its endpoint at runtime. Hence, we need a discovery service to remove this configuration dependency. A discovery service acts as a service dictionary which stores the relationship between the contracts and the endpoints for every service. With a discovery service, when service X wants to invoke service Y, it just needs to ask the discovery service where service Y is; the discovery service returns all proper endpoints of service Y, and service X can then use one of those endpoints to send its request to service Y. And when a service changes its endpoint address, all it needs to do is update its record in the discovery service, and all the others will know the new endpoint. WCF 4.0 Discovery supports both a managed proxy discovery mode and an ad-hoc discovery mode. In ad-hoc mode there is no standalone discovery service. When a client wants to invoke a service, it broadcasts a message (normally over UDP) to the entire network with the service match criteria. All services which have the discovery behavior enabled receive this message, and only the matching services send their endpoints back to the client. The managed proxy discovery service works as I described above. In this post I will only cover the managed proxy mode, where there's a discovery service. For more information about the ad-hoc mode please refer to MSDN.   Service Announcement and Probe The main functionality of a discovery service is to return the proper endpoint addresses to the service that is looking for them. In most cases the consuming service (acting as a client) sends the contract it wants to call to the discovery service, and the discovery service then finds the endpoint and responds. Sometimes contract and endpoint are not enough; the metadata can also contain versioning and extension attributes. In this post I will only cover the case of contract and endpoint. When a client (or sometimes a service which needs to invoke another service) needs to connect to a target service, it first calls the discovery service through the "Probe" operation with the criteria. Basically the criteria contains the contract type name of the target service. The discovery service then searches its endpoint repository by the criteria. The repository might be a database, a distributed cache or a flat XML file. If a match is found, the discovery service grabs the endpoint information (called endpoint discovery metadata in WCF) and sends it back. This is called "Probe".
Finally the client receives the endpoint discovery metadata and uses the endpoint to connect to the target service. Besides the probe, the discovery service is responsible for knowing when a new service becomes available (goes online) as well as when a service stops (goes offline). This feature is named "Announcement". When a service starts or stops, it announces this to the discovery service. So the basic functionality of a discovery service includes: 1) an endpoint which receives the service online message and adds the service endpoint information to the discovery repository; 2) an endpoint which receives the service offline message and removes the service endpoint information from the discovery repository; 3) an endpoint which receives the client probe message and returns the endpoint discovery metadata of the matching services. The WCF 4.0 discovery infrastructure covers all these features in its classes.   Discovery Service in WCF 4.0 WCF 4.0 introduced a new assembly named System.ServiceModel.Discovery which has all the necessary classes and interfaces to build a WS-Discovery compliant discovery service. It supports ad-hoc and managed proxy modes. For the case mentioned in this post, what we need to build is a standalone discovery service, which is the managed proxy discovery mode. To build a managed discovery service in WCF 4.0, just create a new class that inherits from the abstract class System.ServiceModel.Discovery.DiscoveryProxy. This class implements and abstracts the procedures of service announcement and probe, and it exposes 8 abstract methods where we can implement our own endpoint register, unregister and find logic. These 8 methods are asynchronous, which means all calls to the discovery service happen asynchronously, for better service scalability and performance. 1) OnBeginOnlineAnnouncement, OnEndOnlineAnnouncement: invoked when a service sends the online announcement message. We need to add the endpoint information to the repository in this method. 2) OnBeginOfflineAnnouncement, OnEndOfflineAnnouncement: invoked when a service sends the offline announcement message. We need to remove the endpoint information from the repository in this method. 3) OnBeginFind, OnEndFind: invoked when a client sends the probe message to find service endpoint information. We need to look through the repository for the endpoints matching the client's criteria in this method. 4) OnBeginResolve, OnEndResolve: invoked when a client sends the resolve message. Unlike find, resolve returns exactly one service endpoint metadata entry to the client. In our example we will NOT implement this method.   Let's create our own discovery service, inheriting from the base System.ServiceModel.Discovery.DiscoveryProxy. We also need to specify the service behavior on this class. Since the built-in discovery service host class only supports singleton mode, we must set its instance context mode to Single.
1: using System; 2: using System.Collections.Generic; 3: using System.Linq; 4: using System.Text; 5: using System.ServiceModel.Discovery; 6: using System.ServiceModel; 7:  8: namespace Phare.Service 9: { 10: [ServiceBehavior(InstanceContextMode = InstanceContextMode.Single, ConcurrencyMode = ConcurrencyMode.Multiple)] 11: public class ManagedProxyDiscoveryService : DiscoveryProxy 12: { 13: protected override IAsyncResult OnBeginFind(FindRequestContext findRequestContext, AsyncCallback callback, object state) 14: { 15: throw new NotImplementedException(); 16: } 17:  18: protected override IAsyncResult OnBeginOfflineAnnouncement(DiscoveryMessageSequence messageSequence, EndpointDiscoveryMetadata endpointDiscoveryMetadata, AsyncCallback callback, object state) 19: { 20: throw new NotImplementedException(); 21: } 22:  23: protected override IAsyncResult OnBeginOnlineAnnouncement(DiscoveryMessageSequence messageSequence, EndpointDiscoveryMetadata endpointDiscoveryMetadata, AsyncCallback callback, object state) 24: { 25: throw new NotImplementedException(); 26: } 27:  28: protected override IAsyncResult OnBeginResolve(ResolveCriteria resolveCriteria, AsyncCallback callback, object state) 29: { 30: throw new NotImplementedException(); 31: } 32:  33: protected override void OnEndFind(IAsyncResult result) 34: { 35: throw new NotImplementedException(); 36: } 37:  38: protected override void OnEndOfflineAnnouncement(IAsyncResult result) 39: { 40: throw new NotImplementedException(); 41: } 42:  43: protected override void OnEndOnlineAnnouncement(IAsyncResult result) 44: { 45: throw new NotImplementedException(); 46: } 47:  48: protected override EndpointDiscoveryMetadata OnEndResolve(IAsyncResult result) 49: { 50: throw new NotImplementedException(); 51: } 52: } 53: } Then let’s implement the online, offline and find methods one by one. WCF discovery service gives us full flexibility to implement the endpoint add, remove and find logic. For the demo purpose we will use an internal dictionary to store the services’ endpoint metadata. In the next post we will see how to serialize and store these information in database. Define a concurrent dictionary inside the service class since our it will be used in the multiple threads scenario. 1: [ServiceBehavior(InstanceContextMode = InstanceContextMode.Single, ConcurrencyMode = ConcurrencyMode.Multiple)] 2: public class ManagedProxyDiscoveryService : DiscoveryProxy 3: { 4: private ConcurrentDictionary<EndpointAddress, EndpointDiscoveryMetadata> _services; 5:  6: public ManagedProxyDiscoveryService() 7: { 8: _services = new ConcurrentDictionary<EndpointAddress, EndpointDiscoveryMetadata>(); 9: } 10: } Then we can simply implement the logic of service online and offline. 
1: protected override IAsyncResult OnBeginOnlineAnnouncement(DiscoveryMessageSequence messageSequence, EndpointDiscoveryMetadata endpointDiscoveryMetadata, AsyncCallback callback, object state) 2: { 3: _services.AddOrUpdate(endpointDiscoveryMetadata.Address, endpointDiscoveryMetadata, (key, value) => endpointDiscoveryMetadata); 4: return new OnOnlineAnnouncementAsyncResult(callback, state); 5: } 6:  7: protected override void OnEndOnlineAnnouncement(IAsyncResult result) 8: { 9: OnOnlineAnnouncementAsyncResult.End(result); 10: } 11:  12: protected override IAsyncResult OnBeginOfflineAnnouncement(DiscoveryMessageSequence messageSequence, EndpointDiscoveryMetadata endpointDiscoveryMetadata, AsyncCallback callback, object state) 13: { 14: EndpointDiscoveryMetadata endpoint = null; 15: _services.TryRemove(endpointDiscoveryMetadata.Address, out endpoint); 16: return new OnOfflineAnnouncementAsyncResult(callback, state); 17: } 18:  19: protected override void OnEndOfflineAnnouncement(IAsyncResult result) 20: { 21: OnOfflineAnnouncementAsyncResult.End(result); 22: } Regarding the find method, the FindRequestContext.Criteria parameter has a method named IsMatch, which we can use to evaluate which service metadata satisfies the criteria. So the implementation of the find method would be like this. 1: protected override IAsyncResult OnBeginFind(FindRequestContext findRequestContext, AsyncCallback callback, object state) 2: { 3: _services.Where(s => findRequestContext.Criteria.IsMatch(s.Value)) 4: .Select(s => s.Value) 5: .All(meta => 6: { 7: findRequestContext.AddMatchingEndpoint(meta); 8: return true; 9: }); 10: return new OnFindAsyncResult(callback, state); 11: } 12:  13: protected override void OnEndFind(IAsyncResult result) 14: { 15: OnFindAsyncResult.End(result); 16: } As you can see, we check all endpoint metadata in the repository by invoking the IsMatch method, then add all matching endpoint metadata to the find request context. Finally, since all these methods are asynchronous we need some AsyncResult classes as well. Below are the base class and the inherited classes used in the previous methods.
1: using System; 2: using System.Collections.Generic; 3: using System.Linq; 4: using System.Text; 5: using System.Threading; 6:  7: namespace Phare.Service 8: { 9: abstract internal class AsyncResult : IAsyncResult 10: { 11: AsyncCallback callback; 12: bool completedSynchronously; 13: bool endCalled; 14: Exception exception; 15: bool isCompleted; 16: ManualResetEvent manualResetEvent; 17: object state; 18: object thisLock; 19:  20: protected AsyncResult(AsyncCallback callback, object state) 21: { 22: this.callback = callback; 23: this.state = state; 24: this.thisLock = new object(); 25: } 26:  27: public object AsyncState 28: { 29: get 30: { 31: return state; 32: } 33: } 34:  35: public WaitHandle AsyncWaitHandle 36: { 37: get 38: { 39: if (manualResetEvent != null) 40: { 41: return manualResetEvent; 42: } 43: lock (ThisLock) 44: { 45: if (manualResetEvent == null) 46: { 47: manualResetEvent = new ManualResetEvent(isCompleted); 48: } 49: } 50: return manualResetEvent; 51: } 52: } 53:  54: public bool CompletedSynchronously 55: { 56: get 57: { 58: return completedSynchronously; 59: } 60: } 61:  62: public bool IsCompleted 63: { 64: get 65: { 66: return isCompleted; 67: } 68: } 69:  70: object ThisLock 71: { 72: get 73: { 74: return this.thisLock; 75: } 76: } 77:  78: protected static TAsyncResult End<TAsyncResult>(IAsyncResult result) 79: where TAsyncResult : AsyncResult 80: { 81: if (result == null) 82: { 83: throw new ArgumentNullException("result"); 84: } 85:  86: TAsyncResult asyncResult = result as TAsyncResult; 87:  88: if (asyncResult == null) 89: { 90: throw new ArgumentException("Invalid async result.", "result"); 91: } 92:  93: if (asyncResult.endCalled) 94: { 95: throw new InvalidOperationException("Async object already ended."); 96: } 97:  98: asyncResult.endCalled = true; 99:  100: if (!asyncResult.isCompleted) 101: { 102: asyncResult.AsyncWaitHandle.WaitOne(); 103: } 104:  105: if (asyncResult.manualResetEvent != null) 106: { 107: asyncResult.manualResetEvent.Close(); 108: } 109:  110: if (asyncResult.exception != null) 111: { 112: throw asyncResult.exception; 113: } 114:  115: return asyncResult; 116: } 117:  118: protected void Complete(bool completedSynchronously) 119: { 120: if (isCompleted) 121: { 122: throw new InvalidOperationException("This async result is already completed."); 123: } 124:  125: this.completedSynchronously = completedSynchronously; 126:  127: if (completedSynchronously) 128: { 129: this.isCompleted = true; 130: } 131: else 132: { 133: lock (ThisLock) 134: { 135: this.isCompleted = true; 136: if (this.manualResetEvent != null) 137: { 138: this.manualResetEvent.Set(); 139: } 140: } 141: } 142:  143: if (callback != null) 144: { 145: callback(this); 146: } 147: } 148:  149: protected void Complete(bool completedSynchronously, Exception exception) 150: { 151: this.exception = exception; 152: Complete(completedSynchronously); 153: } 154: } 155: } 1: using System; 2: using System.Collections.Generic; 3: using System.Linq; 4: using System.Text; 5: using System.ServiceModel.Discovery; 6: using Phare.Service; 7:  8: namespace Phare.Service 9: { 10: internal sealed class OnOnlineAnnouncementAsyncResult : AsyncResult 11: { 12: public OnOnlineAnnouncementAsyncResult(AsyncCallback callback, object state) 13: : base(callback, state) 14: { 15: this.Complete(true); 16: } 17:  18: public static void End(IAsyncResult result) 19: { 20: AsyncResult.End<OnOnlineAnnouncementAsyncResult>(result); 21: } 22:  23: } 24:  25: sealed class OnOfflineAnnouncementAsyncResult : 
AsyncResult 26: { 27: public OnOfflineAnnouncementAsyncResult(AsyncCallback callback, object state) 28: : base(callback, state) 29: { 30: this.Complete(true); 31: } 32:  33: public static void End(IAsyncResult result) 34: { 35: AsyncResult.End<OnOfflineAnnouncementAsyncResult>(result); 36: } 37: } 38:  39: sealed class OnFindAsyncResult : AsyncResult 40: { 41: public OnFindAsyncResult(AsyncCallback callback, object state) 42: : base(callback, state) 43: { 44: this.Complete(true); 45: } 46:  47: public static void End(IAsyncResult result) 48: { 49: AsyncResult.End<OnFindAsyncResult>(result); 50: } 51: } 52:  53: sealed class OnResolveAsyncResult : AsyncResult 54: { 55: EndpointDiscoveryMetadata matchingEndpoint; 56:  57: public OnResolveAsyncResult(EndpointDiscoveryMetadata matchingEndpoint, AsyncCallback callback, object state) 58: : base(callback, state) 59: { 60: this.matchingEndpoint = matchingEndpoint; 61: this.Complete(true); 62: } 63:  64: public static EndpointDiscoveryMetadata End(IAsyncResult result) 65: { 66: OnResolveAsyncResult thisPtr = AsyncResult.End<OnResolveAsyncResult>(result); 67: return thisPtr.matchingEndpoint; 68: } 69: } 70: } Now we have finished the discovery service. The next step is to host it. The discovery service is a standard WCF service. So we can use ServiceHost on a console application, windows service, or in IIS as usual. The following code is how to host the discovery service we had just created in a console application. 1: static void Main(string[] args) 2: { 3: using (var host = new ServiceHost(new ManagedProxyDiscoveryService())) 4: { 5: host.Opened += (sender, e) => 6: { 7: host.Description.Endpoints.All((ep) => 8: { 9: Console.WriteLine(ep.ListenUri); 10: return true; 11: }); 12: }; 13:  14: try 15: { 16: // retrieve the announcement, probe endpoint and binding from configuration 17: var announcementEndpointAddress = new EndpointAddress(ConfigurationManager.AppSettings["announcementEndpointAddress"]); 18: var probeEndpointAddress = new EndpointAddress(ConfigurationManager.AppSettings["probeEndpointAddress"]); 19: var binding = Activator.CreateInstance(Type.GetType(ConfigurationManager.AppSettings["bindingType"], true, true)) as Binding; 20: var announcementEndpoint = new AnnouncementEndpoint(binding, announcementEndpointAddress); 21: var probeEndpoint = new DiscoveryEndpoint(binding, probeEndpointAddress); 22: probeEndpoint.IsSystemEndpoint = false; 23: // append the service endpoint for announcement and probe 24: host.AddServiceEndpoint(announcementEndpoint); 25: host.AddServiceEndpoint(probeEndpoint); 26:  27: host.Open(); 28:  29: Console.WriteLine("Press any key to exit."); 30: Console.ReadKey(); 31: } 32: catch (Exception ex) 33: { 34: Console.WriteLine(ex.ToString()); 35: } 36: } 37:  38: Console.WriteLine("Done."); 39: Console.ReadKey(); 40: } What we need to notice is that, the discovery service needs two endpoints for announcement and probe. In this example I just retrieve them from the configuration file. I also specified the binding of these two endpoints in configuration file as well. 
1: <?xml version="1.0"?> 2: <configuration> 3: <startup> 4: <supportedRuntime version="v4.0" sku=".NETFramework,Version=v4.0"/> 5: </startup> 6: <appSettings> 7: <add key="announcementEndpointAddress" value="net.tcp://localhost:10010/announcement"/> 8: <add key="probeEndpointAddress" value="net.tcp://localhost:10011/probe"/> 9: <add key="bindingType" value="System.ServiceModel.NetTcpBinding, System.ServiceModel, Version=4.0.0.0, Culture=neutral, PublicKeyToken=b77a5c561934e089"/> 10: </appSettings> 11: </configuration> And this is the console screen when I ran my discovery service. As you can see, there are two endpoints listening for the announcement messages and the probe messages.   Discoverable Service and Client Next, let's create a WCF service that is discoverable, which means it can be found by the discovery service. To do so, we need to let the service send the online announcement message to the discovery service, as well as the offline message before it shuts down. Just create a simple service which converts the incoming string to upper case. The service contract and implementation would be like this. 1: [ServiceContract] 2: public interface IStringService 3: { 4: [OperationContract] 5: string ToUpper(string content); 6: } 1: public class StringService : IStringService 2: { 3: public string ToUpper(string content) 4: { 5: return content.ToUpper(); 6: } 7: } Then host this service in the console application. In order to make the discovery service easy to test, the service address changes each time it's started. 1: static void Main(string[] args) 2: { 3: var baseAddress = new Uri(string.Format("net.tcp://localhost:11001/stringservice/{0}/", Guid.NewGuid().ToString())); 4:  5: using (var host = new ServiceHost(typeof(StringService), baseAddress)) 6: { 7: host.Opened += (sender, e) => 8: { 9: Console.WriteLine("Service opened at {0}", host.Description.Endpoints.First().ListenUri); 10: }; 11:  12: host.AddServiceEndpoint(typeof(IStringService), new NetTcpBinding(), string.Empty); 13:  14: host.Open(); 15:  16: Console.WriteLine("Press any key to exit."); 17: Console.ReadKey(); 18: } 19: } Currently this service is NOT discoverable. We need to add a special service behavior so that it sends the online and offline messages to the discovery service announcement endpoint when the host is opened and closed. WCF 4.0 introduced a service behavior named ServiceDiscoveryBehavior. When we specify the announcement endpoint address and append the behavior to the service behaviors, this service becomes discoverable. 1: var announcementAddress = new EndpointAddress(ConfigurationManager.AppSettings["announcementEndpointAddress"]); 2: var announcementBinding = Activator.CreateInstance(Type.GetType(ConfigurationManager.AppSettings["bindingType"], true, true)) as Binding; 3: var announcementEndpoint = new AnnouncementEndpoint(announcementBinding, announcementAddress); 4: var discoveryBehavior = new ServiceDiscoveryBehavior(); 5: discoveryBehavior.AnnouncementEndpoints.Add(announcementEndpoint); 6: host.Description.Behaviors.Add(discoveryBehavior); The ServiceDiscoveryBehavior utilizes the service extension and channel dispatcher to implement the online and offline announcement logic. In short, it hooks into the channel open and close procedures and sends the online and offline messages to the announcement endpoint.   On the client side, when we have the discovery service, a client can invoke a service without knowing its endpoint.
The WCF discovery assembly provides a class named DiscoveryClient, which can be used to find the proper service endpoint by passing the criteria. In the code below I initialize the DiscoveryClient with the discovery service probe endpoint address, then create the find criteria by specifying the service contract I want to use and invoke the Find method. This sends the probe message to the discovery service, which sends the matching endpoints back to me. The discovery service returns all endpoints that match the find criteria, which means the result of the find method might contain more than one endpoint. In this example I just return the first match. In the next post I will show how to extend our discovery service to make it work like a service load balancer. 1: static EndpointAddress FindServiceEndpoint() 2: { 3: var probeEndpointAddress = new EndpointAddress(ConfigurationManager.AppSettings["probeEndpointAddress"]); 4: var probeBinding = Activator.CreateInstance(Type.GetType(ConfigurationManager.AppSettings["bindingType"], true, true)) as Binding; 5: var discoveryEndpoint = new DiscoveryEndpoint(probeBinding, probeEndpointAddress); 6:  7: EndpointAddress address = null; 8: FindResponse result = null; 9: using (var discoveryClient = new DiscoveryClient(discoveryEndpoint)) 10: { 11: result = discoveryClient.Find(new FindCriteria(typeof(IStringService))); 12: } 13:  14: if (result != null && result.Endpoints.Any()) 15: { 16: var endpointMetadata = result.Endpoints.First(); 17: address = endpointMetadata.Address; 18: } 19: return address; 20: } Once we have probed the discovery service we receive the endpoint. So in the client code we can create the channel factory from the endpoint and binding, and invoke the service. When creating the client-side channel factory we need to make sure that the client-side binding is the same as the one on the service side. The WCF discovery service can be used to find the endpoint for a service contract, but the binding is NOT included, because the binding is not part of the WS-Discovery specification. In the next post I will demonstrate how to add the binding information into the discovery service. At that point the client won't need to create the binding itself; instead it will use the binding received from the discovery service. 1: static void Main(string[] args) 2: { 3: Console.WriteLine("Say something..."); 4: var content = Console.ReadLine(); 5: while (!string.IsNullOrWhiteSpace(content)) 6: { 7: Console.WriteLine("Finding the service endpoint..."); 8: var address = FindServiceEndpoint(); 9: if (address == null) 10: { 11: Console.WriteLine("There is no endpoint matches the criteria."); 12: } 13: else 14: { 15: Console.WriteLine("Found the endpoint {0}", address.Uri); 16:  17: var factory = new ChannelFactory<IStringService>(new NetTcpBinding(), address); 18: factory.Opened += (sender, e) => 19: { 20: Console.WriteLine("Connecting to {0}.", factory.Endpoint.ListenUri); 21: }; 22: var proxy = factory.CreateChannel(); 23: using (proxy as IDisposable) 24: { 25: Console.WriteLine("ToUpper: {0} => {1}", content, proxy.ToUpper(content)); 26: } 27: } 28:  29: Console.WriteLine("Say something..."); 30: content = Console.ReadLine(); 31: } 32: } Similarly, the discovery service probe endpoint and binding were defined in the configuration file.
1: <?xml version="1.0"?> 2: <configuration> 3: <startup> 4: <supportedRuntime version="v4.0" sku=".NETFramework,Version=v4.0"/> 5: </startup> 6: <appSettings> 7: <add key="announcementEndpointAddress" value="net.tcp://localhost:10010/announcement"/> 8: <add key="probeEndpointAddress" value="net.tcp://localhost:10011/probe"/> 9: <add key="bindingType" value="System.ServiceModel.NetTcpBinding, System.ServiceModel, Version=4.0.0.0, Culture=neutral, PublicKeyToken=b77a5c561934e089"/> 10: </appSettings> 11: </configuration> OK, now let's have a test. First start the discovery service, and then start our discoverable service. When it starts it announces itself to the discovery service and registers its endpoint in the repository, which is the local dictionary. Then start the client and type something. As you can see, the client asks the discovery service for the endpoint and then establishes the connection to the discoverable service. More interestingly, do NOT close the client console; instead terminate the discoverable service by pressing the enter key. This makes the service send the offline message to the discovery service. Then start the discoverable service again. Since we made it use a different address each time it starts, it should now be hosted on another address. If we enter something in the client we can see that it asks the discovery service, retrieves the new endpoint, and connects to the service.   Summary In this post I discussed the benefit of using a discovery service and the procedures of service announcement and probe. I also demonstrated how to leverage the WCF Discovery feature in WCF 4.0 to build a simple managed discovery service. For test purposes, in this example I used an in-memory dictionary as the endpoint discovery metadata repository, and when finding I just return the first matched endpoint. I also hard-coded the bindings between the discoverable service and the client. In the next post I will show you how to solve the problems mentioned above, as well as some additional features for production usage. You can download the code here.   Hope this helps, Shaun All documents and related graphics and code are provided "AS IS" without warranty of any kind. Copyright © Shaun Ziyan Xu. This work is licensed under the Creative Commons License.

    Read the article

  • Cannot connect to one of my WCF services, not even with telnet

    - by Ecyrb
    I have six wcf services that I'm hosting in a windows service. Everything works great on my machine (Windows 7) but when I try it in production (Windows Server 2003) I cannot connect to one of my six services, ReportsService. I figured I must have a typo, but everything looks right. I've even rewritten that section of the config file just to be sure. I've turned on WCF tracing, but it never shows the call to my service; nothing helpful in there. I tried connecting to the port (9005) with telnet, but it failed. I can connect to all other services (ports 9001-4 and 9006) just fine. I thought that maybe there was a problem with port 9005, so I changed it to 9007 and still couldn't connect. I had one of my working services host on 9005 and it actually worked fine. So I'm pretty sure there's nothing wrong with the port or any firewall settings. Whatever port I tell ReportsService to use fails. Now I'm out of ideas. It seems like it's not hosting that one service, but I cannot get any information about why or what's wrong. Any ideas on what I could try to get that information? Or what might be wrong? The unhandled System.ServiceModel.EndpointNotFoundException I get when running my client is: Could not connect to net.tcp://localhost:9005/ReportsService. The connection attempt lasted for a time span of 00:00:01.0937430. TCP error code 10061: No connection could be made because the target machine actively refused it 172.0.0.1:9005. . My host's config file contains: <!-- Snipped other services to simplify for you. --> <endpoint binding="netTcpBinding" bindingConfiguration="customTcpBinding" contract="ServiceContracts.IReportsService" /> <endpoint binding="netTcpBinding" bindingConfiguration="customTcpBinding" contract="ServiceContracts.IUpdateData" /> IReportService is the one I'm having trouble with. I get a proxy to IReportsService with the following code, where Server is the name of the hosting machine: return new ChannelFactory<IReportsService>("").CreateChannel(new EndpointAddress(string.Format("net.tcp://{0}:9005/ReportsService", Server))); My client config file contains: <system.serviceModel> <bindings> <netTcpBinding> <binding name="customTcpBinding" maxReceivedMessageSize="2147483647"> <readerQuotas maxNameTableCharCount="2147483647" maxStringContentLength="2147483647"/> <security mode="None"/> </binding> </netTcpBinding> </bindings> <behaviors> <serviceBehaviors> <behavior name="ServiceBehavior"> <serviceMetadata httpGetEnabled="True"/> <serviceDebug includeExceptionDetailInFaults="True" /> <serviceThrottling maxConcurrentCalls="30" maxConcurrentInstances="30" maxConcurrentSessions="1000" /> </behavior> </serviceBehaviors> </behaviors> <services> <!-- Snipped other services to simplify for you. 
--> <service behaviorConfiguration="ServiceBehavior" name="WcfService.ReportsService"> <endpoint address="ReportsService" binding="netTcpBinding" bindingConfiguration="customTcpBinding" contract="ServiceContracts.IReportsService" /> <endpoint address="mex" binding="mexHttpBinding" contract="IMetadataExchange" /> <host> <baseAddresses> <add baseAddress="net.tcp://localhost:9005" /> </baseAddresses> </host> </service> <service behaviorConfiguration="ServiceBehavior" name="WcfService.UpdateData"> <endpoint address="UpdateData" binding="netTcpBinding" bindingConfiguration="customTcpBinding" contract="ServiceContracts.IUpdateData" /> <endpoint address="mex" binding="mexHttpBinding" contract="IMetadataExchange" /> <host> <baseAddresses> <add baseAddress="net.tcp://localhost:9006" /> </baseAddresses> </host> </service> </services> </system.serviceModel> I've tried to keep things simple with the code snippets above, but if you would like to see more just ask and I'd be happy to provide anything that'll help.
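One way to get more diagnostic information out of a host that silently refuses to listen is to host only the suspect service in a throwaway console application, using the same config, and watch what Open() throws and which addresses the host actually ends up listening on. The sketch below is only a troubleshooting suggestion (it assumes the WcfService.ReportsService type and its config section are available to the console project); common culprits for exactly one service never opening its port are a mismatch between the config <service name="..."> attribute and the type's full name, or an exception thrown during Open() that gets swallowed inside the Windows service's OnStart.

using System;
using System.ServiceModel;

class ReportsServiceDiagnostics
{
    static void Main()
    {
        // Host only the problem service so nothing else can mask the failure
        var host = new ServiceHost(typeof(WcfService.ReportsService));
        try
        {
            host.Open();

            // If Open() succeeded, list the addresses the host is really listening on
            foreach (var endpoint in host.Description.Endpoints)
            {
                Console.WriteLine("Endpoint: {0} ({1})", endpoint.ListenUri, endpoint.Binding.Name);
            }
            Console.WriteLine("Host state: {0}. Press Enter to stop.", host.State);
            Console.ReadLine();
            host.Close();
        }
        catch (Exception ex)
        {
            // In a Windows service this exception is easy to lose; here it is visible
            Console.WriteLine(ex);
            host.Abort();
        }
    }
}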

    Read the article

< Previous Page | 6 7 8 9 10 11 12 13 14 15 16 17  | Next Page >