Search Results

Search found 5769 results on 231 pages for 'wcf routing'.


  • WCF: How can I send data while gracefully closing a connection?

    - by mafutrct
    I've got a WCF service that offers a Login method. A client is required to call this method (it is the only operation marked IsInitiating=true). This method should return a string that describes the success of the call in any case. If the login fails, the connection should be closed. The issue is with the timing of the close: I'd like to send the return value, then immediately close the connection.

        string Login(string name, string pass)
        {
            if (name != pass)
            {
                OperationContext.Current.Channel.Close();
                return "fail";
            }
            else
            {
                return "yay";
            }
        }

    MSDN states that calling Close on the channel causes an ICommunicationObject to gracefully transition from the Opened state to the Closed state, and that the Close method allows any unfinished work to be completed before returning (for example, finishing sending any buffered messages). This did not work for me (or my understanding is wrong): the close is executed immediately - WCF does not wait for the Login method to finish executing and return a string, but closes the connection earlier. Therefore I assume that calling Close does not wait for the running method to finish. Now, how can I still return a value, then close?
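
    One approach worth trying is to defer the close until WCF has finished sending the reply, by hooking the OperationCompleted event on the current OperationContext. A minimal sketch, assuming the same Login signature as above:

        string Login(string name, string pass)
        {
            if (name != pass)
            {
                // Capture the channel now; OperationContext.Current may no longer
                // be available by the time the event fires.
                var channel = OperationContext.Current.Channel;

                // OperationCompleted fires after the reply has been sent, so the
                // "fail" string reaches the client before the channel is closed.
                OperationContext.Current.OperationCompleted += (s, e) => channel.Close();

                return "fail";
            }
            return "yay";
        }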

    Read the article

  • WCF identity when moving from dev to prod. environment

    - by Anders Abel
    I have a web service developed with WCF. In the development environment the endpoint has the following identity section under the endpoint configuration:

        <identity>
          <dns value="myservice.devdomain.local" />
        </identity>

    myservice.devdomain.local is the DNS name used to reach the development version of the service. The binding used is:

        <basicHttpBinding>
          <binding name="myBinding">
            <security mode="TransportCredentialOnly">
              <transport clientCredentialType="Windows"/>
            </security>
          </binding>
        </basicHttpBinding>

    I am about to put this into production. The binding will be the same, but the address will be a new production address, myservice.proddomain.local. I have planned to change the dns value in the configuration to myservice.proddomain.local in the production environment. However, the MSDN article on WCF Identity makes me worried about the impact on the clients when I change the identity. There are two clients using this service - one .NET and one Java. Both of those have been developed against the dev instance of the service. The idea is to just reconfigure the endpoint used by the clients, without reloading the WSDL. But if the identity is somehow part of the WSDL and the identity changes when deploying to prod, that might not work. Will the new identity in the prod version cause issues for the clients that were developed using the dev WSDL? Do the Java and the .NET clients handle this differently?
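
    If it turns out that the generated .NET client does pin the dev identity, one hedged fallback (the URL path and proxy name below are illustrative) is to override the address and identity at runtime instead of regenerating the proxy from the production WSDL:

        var binding = new BasicHttpBinding();
        binding.Security.Mode = BasicHttpSecurityMode.TransportCredentialOnly;
        binding.Security.Transport.ClientCredentialType = HttpClientCredentialType.Windows;

        var address = new EndpointAddress(
            new Uri("http://myservice.proddomain.local/MyService.svc"),          // production URL; the path is a placeholder
            EndpointIdentity.CreateDnsIdentity("myservice.proddomain.local"));   // identity expected from the prod host

        var client = new MyServiceClient(binding, address);                      // MyServiceClient stands in for the generated proxy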

    Read the article

  • Need some help/advice on WCF Per-Call Service and NServiceBus interop.

    - by Alexey
    I have a WCF per-call service which provides data for clients and at the same time is integrated with NServiceBus. All stateful objects are stored in a UnityContainer which is integrated into a custom service host. NServiceBus is configured in the service host and uses the same container as the service instances. Every client has its own instance context (described by Juval Lowy in his book in the chapter about durable services). If I need to send a request over the bus, I just use some kind of dispatcher and wait for the response using Thread.Sleep(). Since the services are per-call this is OK, as far as I know. But I am a bit confused about messages from the bus that the service must handle and provide to clients. For some data, like stock quotes, I just update some kind of stateful object and then, when clients invoke GetQuotesData(), provide data from this object. But there are numerous service messages such as "new quote added" and so on. At the moment my idea is to implement something like a "postman daemon" =)) and store this type of message in the instance context. The client would then invoke GetMail(), receive those messages and parse them. The problem is that NServiceBus messages are interface-based and I can't pass them over WCF, so I need to convert them to types derived from some abstract class. I don't know the best way to handle this situation and would be very grateful for any advice. Thanks in advance.
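
    One way to approach the last point - purely a sketch, with made-up names (QuoteAddedDto, IQuoteAdded, IClientMailbox) - is to handle the interface-based bus message and copy it into a plain [DataContract] class that WCF can serialize and GetMail() can return:

        [DataContract]
        public class QuoteAddedDto                        // WCF-friendly copy of the bus message (name is made up)
        {
            [DataMember] public string Symbol { get; set; }
            [DataMember] public decimal Price { get; set; }
        }

        public class QuoteAddedHandler : IHandleMessages<IQuoteAdded>   // IQuoteAdded is a placeholder NServiceBus message
        {
            private readonly IClientMailbox _mailbox;     // placeholder for the per-client "postman" store in the instance context

            public QuoteAddedHandler(IClientMailbox mailbox)
            {
                _mailbox = mailbox;                       // resolved from the same Unity container the host uses
            }

            public void Handle(IQuoteAdded message)
            {
                // Copy the interface-based message into a serializable DTO before it crosses the WCF boundary.
                _mailbox.Enqueue(new QuoteAddedDto { Symbol = message.Symbol, Price = message.Price });
            }
        }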

    Read the article

  • Idiomatic default sort using WCF RIA, Entity Framework 4, Silverlight 4?

    - by Duncan Bayne
    I've got two Silverlight 4.0 ComboBoxes; the second displays the children of the entity selected in the first:

        <ComboBox Name="cmbThings"
                  ItemsSource="{Binding Path=Things,Mode=TwoWay}"
                  DisplayMemberPath="Name"
                  SelectionChanged="CmbThingsSelectionChanged" />
        <ComboBox Name="cmbChildThings"
                  ItemsSource="{Binding Path=SelectedThing.ChildThings,Mode=TwoWay}"
                  DisplayMemberPath="Name" />

    The code behind the view provides a (simple, hacky) way to databind those ComboBoxes, by loading Entity Framework 4.0 entities through a WCF RIA service:

        public EntitySet<Thing> Things { get; private set; }
        public Thing SelectedThing { get; private set; }

        protected override void OnNavigatedTo(NavigationEventArgs e)
        {
            var context = new SortingDomainContext();
            context.Load(context.GetThingsQuery());
            context.Load(context.GetChildThingsQuery());
            Things = context.Things;
            DataContext = this;
        }

        private void CmbThingsSelectionChanged(object sender, SelectionChangedEventArgs e)
        {
            SelectedThing = (Thing)cmbThings.SelectedItem;
            if (PropertyChanged != null)
            {
                PropertyChanged.Invoke(this, new PropertyChangedEventArgs("SelectedThing"));
            }
        }

        public event PropertyChangedEventHandler PropertyChanged;

    What I'd like to do is have both combo boxes sort their contents alphabetically, and I'd like to specify that behaviour in the XAML if at all possible. Could someone please tell me what is the idiomatic way of doing this with the SL4 / EF4 / WCF RIA technology stack?
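
    Not necessarily the idiomatic XAML answer the question asks for, but one hedged fallback is to wrap each source in a CollectionViewSource in code-behind and add a SortDescription on Name, roughly like this (requires System.Windows.Data and System.ComponentModel):

        // Somewhere after the Load calls in OnNavigatedTo.
        var thingsView = new CollectionViewSource { Source = context.Things };
        thingsView.SortDescriptions.Add(new SortDescription("Name", ListSortDirection.Ascending));
        cmbThings.ItemsSource = thingsView.View;    // bind the sorted view rather than the raw EntitySet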

    Read the article

  • How can I get at the raw bytes of the request in WCF?

    - by Gregory Higley
    For logging purposes, I want to get at the raw request sent to my RESTful web service implemented in WCF. I have already implemented IDispatchMessageInspector. In my implementation of AfterReceiveRequest, I want to spit out the raw bytes of the message even (and especially) if the content of the message is invalid. This is for debugging purposes. My service works perfectly already, but it is often helpful when working through problems with clients who are trying to call the service to know what it was they sent, i.e., the raw bytes. For example, let's say that instead of sending a well-formed XML document, they post the string "your mama" to my service endpoint. I want to see that that's what they did. Unfortunately, using MessageBuffer::CreateBufferedCopy() won't work unless the contents of the message are already well-formed XML. Here's (roughly) what I already have in my implementation of AfterReceiveRequest:

        // The immediately following line raises an exception if the message
        // does not contain valid XML. This is uncool because I want
        // the raw bytes regardless of whether they are valid or not.
        using (MessageBuffer buffer = request.CreateBufferedCopy(Int32.MaxValue))
        {
            using (MemoryStream stream = new MemoryStream())
            using (StreamReader reader = new StreamReader(stream))
            {
                buffer.WriteMessage(stream);
                stream.Position = 0;
                Trace.TraceInformation(reader.ReadToEnd());
            }
            request = buffer.CreateMessage();
        }

    My guess here is that I need to get at the raw request before it becomes a Message. This will most likely have to be done at a lower level in the WCF stack than an IDispatchMessageInspector. Anyone know how to do this?
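
    One lower-level option - a sketch only, and it assumes the service is hosted in IIS with ASP.NET compatibility so the request flows through the ASP.NET pipeline - is an IHttpModule that logs the body before WCF ever tries to parse it:

        public class RawRequestLoggingModule : IHttpModule
        {
            public void Init(HttpApplication app)
            {
                app.BeginRequest += (sender, e) =>
                {
                    var request = ((HttpApplication)sender).Request;
                    var bytes = new byte[request.InputStream.Length];
                    request.InputStream.Read(bytes, 0, bytes.Length);
                    request.InputStream.Position = 0;                        // rewind so WCF still sees the body
                    Trace.TraceInformation(Encoding.UTF8.GetString(bytes));  // assumes UTF-8 text; dump hex for truly raw bytes
                };
            }

            public void Dispose() { }
        }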

    Read the article

  • Strategy for WCF server with .Net clients and Android clients?

    - by D.H.
    I am using WCF to write a server that should be able to communicate with .Net clients, Android clients and possibly other types of clients. The main type of client is a desktop application that will be written in .Net. This client will usually be on the same intranet as the server. It will make an initial call to the server to get the current state of the system and will then receive updates from the server whenever a value changes. These updates are frequent, perhaps once a second. The Android clients will connect over the Internet. This client is also interested in updates, but it is not as critical as for the desktop client so a (less frequent) polling scenario might be acceptable. All clients will have to login to use the services, and when connecting over the Internet the connection should be secure. I am familiar with WCF but I am not sure what bindings are most appropriate for the scenario and what security solution to use. Also, I have not used Android, but I would like to make it as simple as possible for the person implementing the Android client to consume my services. So, what is my strategy?
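
    One possible shape, sketched here with made-up contract, implementation and address names, is a single service that exposes a duplex-friendly intranet endpoint for the .NET desktop clients and a plain HTTPS endpoint that the Android clients can poll with any HTTP stack:

        var host = new ServiceHost(typeof(MonitoringService));    // MonitoringService and the contracts below are placeholders

        // Intranet .NET desktop clients: a duplex-capable binding for frequent server-initiated updates.
        host.AddServiceEndpoint(typeof(IMonitoringDuplex),
            new NetTcpBinding(SecurityMode.Transport),
            "net.tcp://server.intranet.local:9000/monitor");

        // Internet/Android clients: a plain HTTPS endpoint they can poll and authenticate against.
        host.AddServiceEndpoint(typeof(IMonitoringPolling),
            new BasicHttpBinding(BasicHttpSecurityMode.Transport),
            "https://server.example.com/monitor");

        host.Open();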

    Read the article

  • Why do WCF clients depend on the app.config file?

    - by routeNpingme
    Like a lot of things, I'm sure there's a good reason for this, so please help me understand... Why, by default, do WCF services store settings in app.config? This has been so frustrating trying to work with multiple Silverlight class libraries. These class libraries are supposed to be completely independent from each other, and this dependency on the app.config seems to cause the following headaches:

        - Single Responsibility Principle - I should be able to add a reference to a class library and go. If that class library uses a service reference, this idea is shot before I even start coding against it.
        - Muddy Configuration - To get other libraries to work, I have to copy and paste the service configurations into the "main" application configs. If an endpoint changes in any way, I can't just worry about a new version of that class DLL - I have to worry about anything that uses it, too.
        - Complex Alternatives - Programmatically creating the endpoint isn't pretty. Period.

    There has to be a better way. Why doesn't WCF at least separate the service configurations into a ServiceName.config or something that gets copied to an output directory? What am I missing? How do you deal with this?
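
    For reference, the programmatic alternative mentioned in the list above looks roughly like this (a sketch with a placeholder contract IMyService and a made-up address; in Silverlight the generated contract uses the asynchronous Begin/End shape, but the idea is the same). It avoids app.config entirely, at the cost of supplying the endpoint details yourself:

        var binding = new BasicHttpBinding();
        var address = new EndpointAddress("http://example.invalid/MyService.svc");   // placeholder address

        var factory = new ChannelFactory<IMyService>(binding, address);              // IMyService is a placeholder contract
        IMyService proxy = factory.CreateChannel();

        // ... call the service through 'proxy' ...

        ((IClientChannel)proxy).Close();
        factory.Close();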

    Read the article

  • How can I intercept an exception occurred during serialization in WCF?

    - by bonomo
    I have a legit data object with all data contract / data member attributes. For some reason the WCF service crashes after the operation has completed and the result is passed as a return value. I believe it has something to do with WCF not being able to serialize that result properly. The test client doesn't say anything specific:

        The underlying connection was closed: The connection was closed unexpectedly.

        Server stack trace:
        at System.ServiceModel.Channels.HttpChannelUtilities.ProcessGetResponseWebException(WebException webException, HttpWebRequest request, HttpAbortReason abortReason)
        at System.ServiceModel.Channels.HttpChannelFactory.HttpRequestChannel.HttpChannelRequest.WaitForReply(TimeSpan timeout)
        at System.ServiceModel.Channels.RequestChannel.Request(Message message, TimeSpan timeout)
        at System.ServiceModel.Dispatcher.RequestChannelBinder.Request(Message message, TimeSpan timeout)
        at System.ServiceModel.Channels.ServiceChannel.Call(String action, Boolean oneway, ProxyOperationRuntime operation, Object[] ins, Object[] outs, TimeSpan timeout)
        at System.ServiceModel.Channels.ServiceChannelProxy.InvokeService(IMethodCallMessage methodCall, ProxyOperationRuntime operation)
        at System.ServiceModel.Channels.ServiceChannelProxy.Invoke(IMessage message)

        Exception rethrown at [0]:
        at System.Runtime.Remoting.Proxies.RealProxy.HandleReturnMessage(IMessage reqMsg, IMessage retMsg)
        at System.Runtime.Remoting.Proxies.RealProxy.PrivateInvoke(MessageData& msgData, Int32 type)
        at IFacade.PickSecurities(String pattern, Int32 atMost)
        at FacadeClient.PickSecurities(String pattern, Int32 atMost)

        Inner Exception:
        The underlying connection was closed: The connection was closed unexpectedly.
        at System.Net.HttpWebRequest.GetResponse()
        at System.ServiceModel.Channels.HttpChannelFactory.HttpRequestChannel.HttpChannelRequest.WaitForReply(TimeSpan timeout)

    I am in control of creating the instance of the service using a customized service host factory. I know I can set up trace listeners and check the logs, but that is a lot of hassle, so I would rather handle it explicitly on the server at the time it happens. How can I intercept that exception programmatically and return an appropriate fault message?
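
    One common hook for this kind of server-side interception - sketched here, not specific to this particular serialization failure - is an IErrorHandler, which sees unhandled exceptions on the dispatcher and can log them and substitute a fault:

        public class LoggingErrorHandler : IErrorHandler
        {
            public bool HandleError(Exception error)
            {
                Trace.TraceError(error.ToString());   // log the real reason the channel died
                return true;                          // true = handled; don't abort the session
            }

            public void ProvideFault(Exception error, MessageVersion version, ref Message fault)
            {
                var fe = new FaultException("The service could not produce a response for this request.");
                fault = Message.CreateMessage(version, fe.CreateMessageFault(), fe.Action);
            }
        }

    It would typically be registered from an IServiceBehavior (for example in the custom service host factory mentioned above) by adding it to each ChannelDispatcher.ErrorHandlers collection. One caveat: if the exception is thrown while WCF is already writing the reply, the substituted fault may not make it onto the wire, but HandleError still gives you the exception to log.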

    Read the article

  • wcf http 504: Working on a mystery

    - by James Fleming
    Ok, so you're here because you've been trying to solve the mystery of why you're getting a 504 error. If you've made it to this lonely corner of the Internet, then the advice you're getting from other bloggers isn't the answer you are after. It wasn't the answer I needed either, so once I did solve my problem, I thought I'd share the answer with you.

    For starters, if by some miracle you landed here first, you may not already know that the 504 error is NOT coming from IIS or Cassini; that response code is coming from Fiddler.

        HTTP/1.1 504 Fiddler - Receive Failure
        Content-Type: text/html
        Connection: close
        Timestamp: 09:43:05.193
        ReadResponse() failed: The server did not return a response for this request.

    The takeaway here is that Fiddler won't help you with the diagnosis, and any further digging in that direction is a red herring.

    Assuming you've dug around a bit, you may have arrived at posts which suggest you're getting the error because you're trying to push too much data over the wire and have an urgent need to employ an anti-pattern due to a special case: http://delphimike.blogspot.com/2010/01/error-504-in-wcfnet-35.html. Or perhaps you're experiencing wonky behavior using the WCF-CustomIsolated adapter in a Windows Server 2008 64-bit environment, in which case the rather fly MVP Dwight Goins' advice is what you need: http://dgoins.wordpress.com/2009/12/18/64bit-wcf-custom-isolated-%E2%80%93-rest-%E2%80%93-%E2%80%9C504%E2%80%9D-response/

    For me, none of that was helpful. I could perform a GET on a single record (http://localhost:8783/Criterion/Skip(0)/Take(1)), but I couldn't get more than one record in my collection, as in http://localhost:8783/Criterion/Skip(0)/Take(2). I didn't have a big payload or a large number of objects, as you can see from the size of one record:

        A-1B f5abd850-ec52-401a-8bac-bcea22c74138 .biological/legal mother
        This item refers to the supervisor's evaluation of the caseworker's ability to involve the biological/legal mother in the permanency planning process.
        75d8ecb7-91df-475f-aa17-26367aeb8b21 false true Admin account 2010-01-06T17:58:24.88 1.20 764a2333-f445-4793-b54d-1c3084116daa

    So while I was able to retrieve one record without a hitch (thus the record above), I wasn't able to return multiple records. I confirmed I could get each record individually (Skip(1)/Take(1)), so it stood to reason the problem wasn't with the data at all, and I suspected a serialization error.

    The first step to resolving this was to enable WCF tracing. Instructions on how to set it up are here: http://msdn.microsoft.com/en-us/library/ms733025.aspx. The tracing log led me to the solution:

        The use of type 'Application.Survey.Model.Criterion' as a get-only collection is not supported with NetDataContractSerializer. Consider marking the type with the CollectionDataContractAttribute attribute or the SerializableAttribute attribute or adding a setter to the property.

    So I was wrong (but close!). The problem was a deserializing issue in trying to recreate my read-only collection (http://msdn.microsoft.com/en-us/library/aa347850.aspx#Y1455). Looking at my underlying model, I saw I did have a read-only collection. Adding a setter was all it took.
        public virtual ICollection<string> GoverningResponses
        {
            get
            {
                if (!string.IsNullOrEmpty(GoverningResponse))
                {
                    return GoverningResponse.Split(';');
                }
                else
                    return null;
            }
        }

    Hope this helps. If it does, post a comment.
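
    Incidentally, the setter the fix refers to isn't shown above; filled in as a guess (assuming the collection round-trips through the ';'-delimited GoverningResponse backing string implied by the getter), it might look like this:

        public virtual ICollection<string> GoverningResponses
        {
            get
            {
                return string.IsNullOrEmpty(GoverningResponse)
                    ? null
                    : GoverningResponse.Split(';');
            }
            set
            {
                // Assumed inverse of the getter: store the collection back into the
                // ';'-delimited string so NetDataContractSerializer has somewhere to put it.
                GoverningResponse = value == null ? null : string.Join(";", value.ToArray());
            }
        }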

    Read the article

  • Ubuntu Server Cannot Route to the Internet

    - by ejes
    I've been having this problem for weeks now, and I can't seem to figure it out. My server can route on the local network and serves it well, however it cannot access the internet. It can't be the router, because everything else on this LAN can route through it. I've even switched the ethernet port. Any help would be appreciated. I've tried all the usual places; anyway, here are the configs:

        root@uhs:~# uname -a
        Linux uhs 3.0.0-16-generic-pae #28-Ubuntu SMP Fri Jan 27 19:24:01 UTC 2012 i686 i686 i386 GNU/Linux

        root@uhs:~# cat /etc/network/interfaces
        # This file describes the network interfaces available on your system
        # and how to activate them. For more information, see interfaces(5).

        # The loopback network interface
        auto lo
        iface lo inet loopback

        # The primary network interface
        # auto eth1
        # iface eth1 inet dhcp
        auto eth0
        iface eth0 inet static
            address 192.168.0.3
            netmask 255.255.255.0
            broadcast 192.168.0.255
            gateway 192.168.0.1

        root@uhs:~# ping -c 4 192.168.0.1
        PING 192.168.0.1 (192.168.0.1) 56(84) bytes of data.
        64 bytes from 192.168.0.1: icmp_req=1 ttl=64 time=0.334 ms
        64 bytes from 192.168.0.1: icmp_req=2 ttl=64 time=0.339 ms
        64 bytes from 192.168.0.1: icmp_req=3 ttl=64 time=0.324 ms
        64 bytes from 192.168.0.1: icmp_req=4 ttl=64 time=0.339 ms

        --- 192.168.0.1 ping statistics ---
        4 packets transmitted, 4 received, 0% packet loss, time 2997ms
        rtt min/avg/max/mdev = 0.324/0.334/0.339/0.006 ms

        root@uhs:~# ping -c 4 209.85.145.103
        PING 209.85.145.103 (209.85.145.103) 56(84) bytes of data.

        --- 209.85.145.103 ping statistics ---
        4 packets transmitted, 0 received, 100% packet loss, time 3023ms

        root@uhs:~# ifconfig
        eth0      Link encap:Ethernet  HWaddr 00:0c:6e:a0:92:6e
                  inet addr:192.168.0.3  Bcast:192.168.0.255  Mask:255.255.255.0
                  inet6 addr: fe80::20c:6eff:fea0:926e/64 Scope:Link
                  UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
                  RX packets:13131114 errors:0 dropped:0 overruns:0 frame:0
                  TX packets:10540297 errors:0 dropped:0 overruns:5 carrier:0
                  collisions:0 txqueuelen:1000
                  RX bytes:3077922794 (3.0 GB)  TX bytes:3827489734 (3.8 GB)
                  Interrupt:10 Base address:0xa000

        lo        Link encap:Local Loopback
                  inet addr:127.0.0.1  Mask:255.0.0.0
                  inet6 addr: ::1/128 Scope:Host
                  UP LOOPBACK RUNNING  MTU:16436  Metric:1
                  RX packets:7721 errors:0 dropped:0 overruns:0 frame:0
                  TX packets:7721 errors:0 dropped:0 overruns:0 carrier:0
                  collisions:0 txqueuelen:0
                  RX bytes:551950 (551.9 KB)  TX bytes:551950 (551.9 KB)

        root@uhs:~# route -n
        Kernel IP routing table
        Destination     Gateway         Genmask         Flags Metric Ref    Use Iface
        0.0.0.0         192.168.0.1     0.0.0.0         UG    100    0        0 eth0
        192.168.0.0     0.0.0.0         255.255.255.0   U     0      0        0 eth0

        root@uhs:~# # PRETEND Traceroute
        root@uhs:~# for i in {1..30}; do ping -t $i -c 1 209.85.145.103; done | grep "Time to live exceeded"
        root@uhs:~#

    Read the article

  • Operation times out trying to SSH outside LAN i.e. from internet to LAN no connection is established

    - by Pelle L
    I run Ubuntu 12.04 and have had no success connecting with SSH from the Internet. The router is a TL-MR3420 which is set up to forward requests to one of the NICs on the Ubuntu machine (which has three NICs in total). I can SSH from a client on the local network/LAN, so the forwarding mechanism in the router seems to work. If I stop the SSH service on the Ubuntu machine and instead start one on a Windows machine, it works like a charm. I do not use the standard port 22, but that shouldn't be an issue as far as I understand, since it works on the same port on the Windows machine. Since my public IP isn't static I use a dynDNS service, but as said earlier the same setup works from the Windows machine.

    The router is located at 192.168.0.1. The Ubuntu NICs have the following IPs: eth2 192.168.0.100, eth1 192.168.0.101, eth0 192.168.0.102, and I have forwarded the "outside" requests to 192.168.0.100.

    As for the firewall settings on the Ubuntu machine, I have disabled ufw, and the command ufw status gives status: inactive. I don't know if this is relevant information, but the command iptables --list gives:

        Chain INPUT (policy ACCEPT)
        target     prot opt source               destination

        Chain FORWARD (policy ACCEPT)
        target     prot opt source               destination

        Chain OUTPUT (policy ACCEPT)
        target     prot opt source               destination

    I have tried to capture traffic with the help of Wireshark (a tool I'm not too used to using) and it seems a few (3?) requests actually reach the NIC, but... nothing happens. The syslog does not show any entries during these attempts. Perhaps it could be some routing issue, but I have reached my level of competence and am stuck... all help and support to get this sorted out is much appreciated. I'm new to Linux, so please do not assume I have a configuration that is correct - but as I wrote earlier, if the client that initiates SSH is on the LAN it all works.

    PS: I have also tried to get VPN (PPP) working from the Internet with no success - once again VPN works on the Windows machine... so my best guess is that this is related to how the Ubuntu machine handles (IP) traffic and not to the TL-MR3420 router or other network issues.

    Read the article

  • Posting data from multiple servers routing through one server to client server

    - by Swaroop Kundeti
    I have 5 web servers behind a load balancer and we have a client server at the other end. The client has whitelisted the public IPs of my 5 web servers so that my web servers can post a file to the client server. The problem is that the number of web servers is going to increase, and I cannot always ask the client to whitelist each new web server IP. So I would like to set up my infrastructure so that my web servers post data to the client server routed through a single server. For example, assume that web-1 is the main server and the remaining 4 web servers post data to the client server routed through web-1. I was told that this can be achieved with IP tunneling, but I have no idea how to do that. Any kind of help would be great.

    Read the article

  • Routing table with two NIC adapters in libvirt/KVM

    - by lzap
    I created a virtual NAT network (the 192.168.100.0/24 network) in my libvirt setup and a new guest with two interfaces - one on this network, and one bridged (the 10.34.1.0/24 network) to the local LAN. The reason for this is that I need my own virtual network for my DHCP/TFTP/DNS testing and still want to access my guest externally from my LAN. On both networks I have working DHCP, both giving out IP addresses. When I set up NAT port forwarding (e.g. for ssh), I can connect to eth0 (the virtual network) and everything is fine. But when I try to access eth1 via the bridged interface, I get no response.

    I guess I have a problem with my routing table - outgoing packets are routed to the virtual NAT network (which has access to the machine I am connecting from - I can ping it). But I am not sure if this setup is correct. I think I need to add something to my routing table.

        # ifconfig
        eth0      Link encap:Ethernet  HWaddr 52:54:00:B4:A7:5F
                  inet addr:192.168.100.14  Bcast:192.168.100.255  Mask:255.255.255.0
                  inet6 addr: fe80::5054:ff:feb4:a75f/64 Scope:Link
                  UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
                  RX packets:16468 errors:0 dropped:27 overruns:0 frame:0
                  TX packets:6081 errors:0 dropped:0 overruns:0 carrier:0
                  collisions:0 txqueuelen:1000
                  RX bytes:22066140 (21.0 MiB)  TX bytes:483249 (471.9 KiB)
                  Interrupt:11 Base address:0x2000

        eth1      Link encap:Ethernet  HWaddr 52:54:00:DE:16:21
                  inet addr:10.34.1.111  Bcast:10.34.1.255  Mask:255.255.255.0
                  inet6 addr: fe80::5054:ff:fede:1621/64 Scope:Link
                  UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
                  RX packets:34 errors:0 dropped:0 overruns:0 frame:0
                  TX packets:189 errors:0 dropped:0 overruns:0 carrier:0
                  collisions:0 txqueuelen:1000
                  RX bytes:4911 (4.7 KiB)  TX bytes:9

        # route -n
        Kernel IP routing table
        Destination     Gateway         Genmask         Flags Metric Ref    Use Iface
        192.168.100.0   0.0.0.0         255.255.255.0   U     0      0        0 eth0
        10.34.1.0       0.0.0.0         255.255.255.0   U     0      0        0 eth1
        169.254.0.0     0.0.0.0         255.255.0.0     U     1002   0        0 eth0
        169.254.0.0     0.0.0.0         255.255.0.0     U     1003   0        0 eth1
        0.0.0.0         192.168.100.1   0.0.0.0         UG    0      0        0 eth0

    The network I am trying to connect from is different from the network the hypervisor is connected to: 10.36.0.0. But it is accessible from that network. So I tried to add a new route rule:

        route add -net 10.36.0.0 netmask 255.255.0.0 dev eth1

    And it is not working. I thought setting the correct interface would be sufficient. What is needed to get my packets coming through?

    Read the article

  • how're routing tables populated?

    - by Robbie Mckennie
    I've been reading "TCP/IP Illustrated" and I started reading about IP forwarding: all about how you can receive a datagram and work out where to send it next based on the destination IP and your routing table. But what confuses me is how (in a home network setting) the table itself is populated. Is there a lower layer protocol at work here? Does it come along with DHCP? Or is it simply based on the IP address and netmask of each interface? I do know (from other books) that in the early days of Ethernet one had to set up routing tables by hand, but I know I didn't do that.

    Read the article

  • Routing / binding 128 to one server

    - by Andrew
    I have an Ubuntu server with 128 IPs (static external IPs, 86.xx.xx.16), and I want to crawl pages through different IPs. The gateway is xx.xxx.xxx.1, the main IP is xx.xxx.xxx.16, and the other 128 IPs are xx.xxx.xxx.129/255. I tried this configuration in /etc/network/interfaces but it doesn't work. It works if I remove the gateway for the aliases eth0:0 and eth0:1, so I think this is a routing problem.

        auto lo
        iface lo inet loopback

        auto eth0
        auto eth0:0
        auto eth0:1

        iface eth0 inet static
            address xx.xxx.xxx.16
            netmask 255.255.255.128
            gateway xx.xxx.xxx.1

        iface eth0:0 inet static
            address xx.xxx.xxx.129
            netmask 255.255.255.128
            gateway xx.xxx.xxx.1

        iface eth0:1 inet static
            address xx.xxx.xxx.130
            netmask 255.255.255.128
            gateway xx.xxx.xxx.1

    Also, please tell me how to "reset" every change that I made to networking and routing. Thank you.

    Read the article

  • Save password in WCF adapter binding file

    - by Edmund Zhao
    The binding file for a WCF adapter doesn't save the password, whether it is generated by the "Add Generated Items..." wizard in Visual Studio or by "Export Bindings..." in the administration console. This is by design, for security reasons, but it is very annoying, especially when you import bindings which contain multiple WCF send ports. The way to avoid retyping the password every time after an import is to edit the binding file before the import. Here is what needs to be done.

    1. Find the following string:

        &lt;Password vt="1" /&gt;

    "&lt;" means "<", "&gt;" means ">", and "vt" means "Variable Type"; variable type 1 is "NULL", so the above string translates to "<Password/>".

    2. Replace it with:

        &lt;Password vt="8"&gt;MyPassword&lt;/Password&gt;

    Variable type 8 is "string", so the above string translates to "<Password>MyPassword</Password>".

    The binding file uses a lot of character entity references for XML character encoding purposes. For a list of the special character entity references, you can check from here.

    ...Edmund Zhao
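
    If you have many binding files to touch, the manual edit described above could also be scripted. A rough sketch (the file name is a placeholder, and the element lookup may need adjusting for namespaces in your binding file):

        var doc = XDocument.Load("MySendPorts.BindingInfo.xml");   // placeholder file name

        foreach (var pwd in doc.Descendants("Password"))
        {
            pwd.SetAttributeValue("vt", "8");   // 8 = string
            pwd.Value = "MyPassword";           // the password to inject before importing
        }

        doc.Save("MySendPorts.BindingInfo.xml");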

    Read the article

  • WCF - Automatically create ServiceHost for multiple services

    - by Rajesh Pillai
    Welcome back readers! This blog post is about a small tip that may make working with the WCF ServiceHost a bit easier if you have lots of services and you need to host them quickly for testing. Recently I encountered a situation where we had to create multiple service hosts quickly for testing. Here is the code snippet, which is pretty self-explanatory. You can put this code in your service host, which in this case is a console application.

        class Program
        {
            static void Main(string[] args)
            {
                // Stores all hosts
                List<ServiceHost> hosts = new List<ServiceHost>();
                try
                {
                    // Get the services element from the serviceModel element in the config file
                    var section = ConfigurationManager.GetSection("system.serviceModel/services") as ServicesSection;
                    if (section != null)
                    {
                        foreach (ServiceElement element in section.Services)
                        {
                            // NOTE: If the assembly is in another namespace, provide a fully
                            // qualified name here in the form <typename, namespace>,
                            // e.g. Business.Services.CustomerService, Business.Services
                            var serviceType = Type.GetType(element.Name); // Get the typeName
                            var host = new ServiceHost(serviceType);
                            hosts.Add(host); // Add to the host collection
                            host.Open();     // Open the host
                        }
                    }
                    Console.ReadLine();
                }
                catch (Exception e)
                {
                    Console.WriteLine(e.Message);
                    Console.ReadLine();
                }
                finally
                {
                    foreach (ServiceHost host in hosts)
                    {
                        if (host.State == CommunicationState.Opened)
                        {
                            host.Close();
                        }
                        else
                        {
                            host.Abort();
                        }
                    }
                }
            }
        }

    I hope you find this useful. You can make this a Windows service if required.

    Read the article

  • Tellago is still hiring….

    - by gsusx
    Tellago's SOA practice is rapidly growing and we are still hiring. We are looking for Connected Systems (WCF, BizTalk, WF) experts who are passionate about building game-changing solutions with the latest Microsoft technologies. You will be working alongside technology gurus like DonXml, Pablo Cibraro or Dwight Goins. If you are interested and not afraid of working with a bunch of crazy people ;) please drop me a line at jesus dot rodriguez at tellago dot com. Hope to hear from...(read more)

    Read the article

  • Syncing Data with a Server using Silverlight and HTTP Polling Duplex

    - by dwahlin
    Many applications have the need to stay in-sync with data provided by a service. Although web applications typically rely on standard polling techniques to check if data has changed, Silverlight provides several interesting options for keeping an application in-sync that rely on server “push” technologies. A few years back I wrote several blog posts covering different “push” technologies available in Silverlight that rely on sockets or HTTP Polling Duplex. We recently had a project that looked like it could benefit from pushing data from a server to one or more clients so I thought I’d revisit the subject and provide some updates to the original code posted. If you’ve worked with AJAX before in Web applications then you know that until browsers fully support web sockets or other duplex (bi-directional communication) technologies that it’s difficult to keep applications in-sync with a server without relying on polling. The problem with polling is that you have to check for changes on the server on a timed-basis which can often be wasteful and take up unnecessary resources. With server “push” technologies, data can be pushed from the server to the client as it changes. Once the data is received, the client can update the user interface as appropriate. Using “push” technologies allows the client to listen for changes from the data but stay 100% focused on client activities as opposed to worrying about polling and asking the server if anything has changed. Silverlight provides several options for pushing data from a server to a client including sockets, TCP bindings and HTTP Polling Duplex.  Each has its own strengths and weaknesses as far as performance and setup work with HTTP Polling Duplex arguably being the easiest to setup and get going.  In this article I’ll demonstrate how HTTP Polling Duplex can be used in Silverlight 4 applications to push data and show how you can create a WCF server that provides an HTTP Polling Duplex binding that a Silverlight client can consume.   What is HTTP Polling Duplex? Technologies that allow data to be pushed from a server to a client rely on duplex functionality. Duplex (or bi-directional) communication allows data to be passed in both directions.  A client can call a service and the server can call the client. HTTP Polling Duplex (as its name implies) allows a server to communicate with a client without forcing the client to constantly poll the server. It has the benefit of being able to run on port 80 making setup a breeze compared to the other options which require specific ports to be used and cross-domain policy files to be exposed on port 943 (as with sockets and TCP bindings). Having said that, if you’re looking for the best speed possible then sockets and TCP bindings are the way to go. But, they’re not the only game in town when it comes to duplex communication. The first time I heard about HTTP Polling Duplex (initially available in Silverlight 2) I wasn’t exactly sure how it was any better than standard polling used in AJAX applications. I read the Silverlight SDK, looked at various resources and generally found the following definition unhelpful as far as understanding the actual benefits that HTTP Polling Duplex provided: "The Silverlight client periodically polls the service on the network layer, and checks for any new messages that the service wants to send on the callback channel. The service queues all messages sent on the client callback channel and delivers them to the client when the client polls the service." 
Although the previous definition explained the overall process, it sounded as if standard polling was used. Fortunately, Microsoft’s Scott Guthrie provided me with a more clear definition several years back that explains the benefits provided by HTTP Polling Duplex quite well (used with his permission): "The [HTTP Polling Duplex] duplex support does use polling in the background to implement notifications – although the way it does it is different than manual polling. It initiates a network request, and then the request is effectively “put to sleep” waiting for the server to respond (it doesn’t come back immediately). The server then keeps the connection open but not active until it has something to send back (or the connection times out after 90 seconds – at which point the duplex client will connect again and wait). This way you are avoiding hitting the server repeatedly – but still get an immediate response when there is data to send." After hearing Scott’s definition the light bulb went on and it all made sense. A client makes a request to a server to check for changes, but instead of the request returning immediately, it parks itself on the server and waits for data. It’s kind of like waiting to pick up a pizza at the store. Instead of calling the store over and over to check the status, you sit in the store and wait until the pizza (the request data) is ready. Once it’s ready you take it back home (to the client). This technique provides a lot of efficiency gains over standard polling techniques even though it does use some polling of its own as a request is initially made from a client to a server. So how do you implement HTTP Polling Duplex in your Silverlight applications? Let’s take a look at the process by starting with the server. Creating an HTTP Polling Duplex WCF Service Creating a WCF service that exposes an HTTP Polling Duplex binding is straightforward as far as coding goes. Add some one way operations into an interface, create a client callback interface and you’re ready to go. The most challenging part comes into play when configuring the service to properly support the necessary binding and that’s more of a cut and paste operation once you know the configuration code to use. To create an HTTP Polling Duplex service you’ll need to expose server-side and client-side interfaces and reference the System.ServiceModel.PollingDuplex assembly (located at C:\Program Files (x86)\Microsoft SDKs\Silverlight\v4.0\Libraries\Server on my machine) in the server project. For the demo application I upgraded a basketball simulation service to support the latest polling duplex assemblies. The service simulates a simple basketball game using a Game class and pushes information about the game such as score, fouls, shots and more to the client as the game changes over time. Before jumping too far into the game push service, it’s important to discuss two interfaces used by the service to communicate in a bi-directional manner. The first is called IGameStreamService and defines the methods/operations that the client can call on the server (see Listing 1). The second is IGameStreamClient which defines the callback methods that a server can use to communicate with a client (see Listing 2).   [ServiceContract(Namespace = "Silverlight", CallbackContract = typeof(IGameStreamClient))] public interface IGameStreamService { [OperationContract(IsOneWay = true)] void GetTeamData(); } Listing 1. The IGameStreamService interface defines server operations that can be called on the server.   
[ServiceContract] public interface IGameStreamClient { [OperationContract(IsOneWay = true)] void ReceiveTeamData(List<Team> teamData); [OperationContract(IsOneWay = true, AsyncPattern=true)] IAsyncResult BeginReceiveGameData(GameData gameData, AsyncCallback callback, object state); void EndReceiveGameData(IAsyncResult result); } Listing 2. The IGameStreamClient interfaces defines client operations that a server can call.   The IGameStreamService interface is decorated with the standard ServiceContract attribute but also contains a value for the CallbackContract property.  This property is used to define the interface that the client will expose (IGameStreamClient in this example) and use to receive data pushed from the service. Notice that each OperationContract attribute in both interfaces sets the IsOneWay property to true. This means that the operation can be called and passed data as appropriate, however, no data will be passed back. Instead, data will be pushed back to the client as it’s available.  Looking through the IGameStreamService interface you can see that the client can request team data whereas the IGameStreamClient interface allows team and game data to be received by the client. One interesting point about the IGameStreamClient interface is the inclusion of the AsyncPattern property on the BeginReceiveGameData operation. I initially created this operation as a standard one way operation and it worked most of the time. However, as I disconnected clients and reconnected new ones game data wasn’t being passed properly. After researching the problem more I realized that because the service could take up to 7 seconds to return game data, things were getting hung up. By setting the AsyncPattern property to true on the BeginReceivedGameData operation and providing a corresponding EndReceiveGameData operation I was able to get around this problem and get everything running properly. I’ll provide more details on the implementation of these two methods later in this post. Once the interfaces were created I moved on to the game service class. The first order of business was to create a class that implemented the IGameStreamService interface. Since the service can be used by multiple clients wanting game data I added the ServiceBehavior attribute to the class definition so that I could set its InstanceContextMode to InstanceContextMode.Single (in effect creating a Singleton service object). Listing 3 shows the game service class as well as its fields and constructor.   [ServiceBehavior(ConcurrencyMode = ConcurrencyMode.Multiple, InstanceContextMode = InstanceContextMode.Single)] public class GameStreamService : IGameStreamService { object _Key = new object(); Game _Game = null; Timer _Timer = null; Random _Random = null; Dictionary<string, IGameStreamClient> _ClientCallbacks = new Dictionary<string, IGameStreamClient>(); static AsyncCallback _ReceiveGameDataCompleted = new AsyncCallback(ReceiveGameDataCompleted); public GameStreamService() { _Game = new Game(); _Timer = new Timer { Enabled = false, Interval = 2000, AutoReset = true }; _Timer.Elapsed += new ElapsedEventHandler(_Timer_Elapsed); _Timer.Start(); _Random = new Random(); }} Listing 3. The GameStreamService implements the IGameStreamService interface which defines a callback contract that allows the service class to push data back to the client. 
By implementing the IGameStreamService interface, GameStreamService must supply a GetTeamData() method which is responsible for supplying information about the teams that are playing as well as individual players.  GetTeamData() also acts as a client subscription method that tracks clients wanting to receive game data.  Listing 4 shows the GetTeamData() method. public void GetTeamData() { //Get client callback channel var context = OperationContext.Current; var sessionID = context.SessionId; var currClient = context.GetCallbackChannel<IGameStreamClient>(); context.Channel.Faulted += Disconnect; context.Channel.Closed += Disconnect; IGameStreamClient client; if (!_ClientCallbacks.TryGetValue(sessionID, out client)) { lock (_Key) { _ClientCallbacks[sessionID] = currClient; } } currClient.ReceiveTeamData(_Game.GetTeamData()); //Start timer which when fired sends updated score information to client if (!_Timer.Enabled) { _Timer.Enabled = true; } } Listing 4. The GetTeamData() method subscribes a given client to the game service and returns. The key the line of code in the GetTeamData() method is the call to GetCallbackChannel<IGameStreamClient>().  This method is responsible for accessing the calling client’s callback channel. The callback channel is defined by the IGameStreamClient interface shown earlier in Listing 2 and used by the server to communicate with the client. Before passing team data back to the client, GetTeamData() grabs the client’s session ID and checks if it already exists in the _ClientCallbacks dictionary object used to track clients wanting callbacks from the server. If the client doesn’t exist it adds it into the collection. It then pushes team data from the Game class back to the client by calling ReceiveTeamData().  Since the service simulates a basketball game, a timer is then started if it’s not already enabled which is then used to randomly send data to the client. When the timer fires, game data is pushed down to the client. Listing 5 shows the _Timer_Elapsed() method that is called when the timer fires as well as the SendGameData() method used to send data to the client. void _Timer_Elapsed(object sender, ElapsedEventArgs e) { int interval = _Random.Next(3000, 7000); lock (_Key) { _Timer.Interval = interval; _Timer.Enabled = false; } SendGameData(_Game.GetGameData()); } private void SendGameData(GameData gameData) { var cbs = _ClientCallbacks.Where(cb => ((IContextChannel)cb.Value).State == CommunicationState.Opened); for (int i = 0; i < cbs.Count(); i++) { var cb = cbs.ElementAt(i).Value; try { cb.BeginReceiveGameData(gameData, _ReceiveGameDataCompleted, cb); } catch (TimeoutException texp) { //Log timeout error } catch (CommunicationException cexp) { //Log communication error } } lock (_Key) _Timer.Enabled = true; } private static void ReceiveGameDataCompleted(IAsyncResult result) { try { ((IGameStreamClient)(result.AsyncState)).EndReceiveGameData(result); } catch (CommunicationException) { // empty } catch (TimeoutException) { // empty } } LIsting 5. _Timer_Elapsed is used to simulate time in a basketball game. When _Timer_Elapsed() fires the SendGameData() method is called which iterates through the clients wanting to be notified of changes. As each client is identified, their respective BeginReceiveGameData() method is called which ultimately pushes game data down to the client. Recall that this method was defined in the client callback interface named IGameStreamClient shown earlier in Listing 2. 
Notice that BeginReceiveGameData() accepts _ReceiveGameDataCompleted as its second parameter (an AsyncCallback delegate defined in the service class) and passes the client callback as the third parameter. The initial version of the sample application had a standard ReceiveGameData() method in the client callback interface. However, sometimes the client callbacks would work properly and sometimes they wouldn’t which was a little baffling at first glance. After some investigation I realized that I needed to implement an asynchronous pattern for client callbacks to work properly since 3 – 7 second delays are occurring as a result of the timer. Once I added the BeginReceiveGameData() and ReceiveGameDataCompleted() methods everything worked properly since each call was handled in an asynchronous manner. The final task that had to be completed to get the server working properly with HTTP Polling Duplex was adding configuration code into web.config. In the interest of brevity I won’t post all of the code here since the sample application includes everything you need. However, Listing 6 shows the key configuration code to handle creating a custom binding named pollingDuplexBinding and associate it with the service’s endpoint.   <bindings> <customBinding> <binding name="pollingDuplexBinding"> <binaryMessageEncoding /> <pollingDuplex maxPendingSessions="2147483647" maxPendingMessagesPerSession="2147483647" inactivityTimeout="02:00:00" serverPollTimeout="00:05:00"/> <httpTransport /> </binding> </customBinding> </bindings> <services> <service name="GameService.GameStreamService" behaviorConfiguration="GameStreamServiceBehavior"> <endpoint address="" binding="customBinding" bindingConfiguration="pollingDuplexBinding" contract="GameService.IGameStreamService"/> <endpoint address="mex" binding="mexHttpBinding" contract="IMetadataExchange" /> </service> </services>   Listing 6. Configuring an HTTP Polling Duplex binding in web.config and associating an endpoint with it. Calling the Service and Receiving “Pushed” Data Calling the service and handling data that is pushed from the server is a simple and straightforward process in Silverlight. Since the service is configured with a MEX endpoint and exposes a WSDL file, you can right-click on the Silverlight project and select the standard Add Service Reference item. After the web service proxy is created you may notice that the ServiceReferences.ClientConfig file only contains an empty configuration element instead of the normal configuration elements created when creating a standard WCF proxy. You can certainly update the file if you want to read from it at runtime but for the sample application I fed the service URI directly to the service proxy as shown next: var address = new EndpointAddress("http://localhost.:5661/GameStreamService.svc"); var binding = new PollingDuplexHttpBinding(); _Proxy = new GameStreamServiceClient(binding, address); _Proxy.ReceiveTeamDataReceived += _Proxy_ReceiveTeamDataReceived; _Proxy.ReceiveGameDataReceived += _Proxy_ReceiveGameDataReceived; _Proxy.GetTeamDataAsync(); This code creates the proxy and passes the endpoint address and binding to use to its constructor. It then wires the different receive events to callback methods and calls GetTeamDataAsync().  Calling GetTeamDataAsync() causes the server to store the client in the server-side dictionary collection mentioned earlier so that it can receive data that is pushed.  
As the server-side timer fires and game data is pushed to the client, the user interface is updated as shown in Listing 7. Listing 8 shows the _Proxy_ReceiveGameDataReceived() method responsible for handling the data and calling UpdateGameData() to process it.   Listing 7. The Silverlight interface. Game data is pushed from the server to the client using HTTP Polling Duplex. void _Proxy_ReceiveGameDataReceived(object sender, ReceiveGameDataReceivedEventArgs e) { UpdateGameData(e.gameData); } private void UpdateGameData(GameData gameData) { //Update Score this.tbTeam1Score.Text = gameData.Team1Score.ToString(); this.tbTeam2Score.Text = gameData.Team2Score.ToString(); //Update ball visibility if (gameData.Action != ActionsEnum.Foul) { if (tbTeam1.Text == gameData.TeamOnOffense) { AnimateBall(this.BB1, this.BB2); } else //Team 2 { AnimateBall(this.BB2, this.BB1); } } if (this.lbActions.Items.Count > 9) this.lbActions.Items.Clear(); this.lbActions.Items.Add(gameData.LastAction); if (this.lbActions.Visibility == Visibility.Collapsed) this.lbActions.Visibility = Visibility.Visible; } private void AnimateBall(Image onBall, Image offBall) { this.FadeIn.Stop(); Storyboard.SetTarget(this.FadeInAnimation, onBall); Storyboard.SetTarget(this.FadeOutAnimation, offBall); this.FadeIn.Begin(); } Listing 8. As the server pushes game data, the client’s _Proxy_ReceiveGameDataReceived() method is called to process the data. In a real-life application I’d go with a ViewModel class to handle retrieving team data, setup data bindings and handle data that is pushed from the server. However, for the sample application I wanted to focus on HTTP Polling Duplex and keep things as simple as possible.   Summary Silverlight supports three options when duplex communication is required in an application including TCP bindins, sockets and HTTP Polling Duplex. In this post you’ve seen how HTTP Polling Duplex interfaces can be created and implemented on the server as well as how they can be consumed by a Silverlight client. HTTP Polling Duplex provides a nice way to “push” data from a server while still allowing the data to flow over port 80 or another port of your choice.   Sample Application Download

    Read the article

  • Routes for IIS Classic and Integrated Mode

    - by imran_ku07
         Introduction:             ASP.NET MVC Routing feature makes it very easy to provide clean URLs. You just need to configure routes in global.asax file to create an application with clean URLs. In most cases you define routes works in IIS 6, IIS 7 (or IIS 7.5) Classic and Integrated mode. But in some cases your routes may only works in IIS 7 Integrated mode, like in the case of using extension less URLs in IIS 6 without a wildcard extension map. So in this article I will show you how to create different routes which works in IIS 6 and IIS 7 Classic and Integrated mode.       Description:             Let's say that you need to create an application which must work both in Classic and Integrated mode. Also you have no control to setup a wildcard extension map in IIS. So you need to create two routes. One with extension less URL for Integrated mode and one with a URL with an extension for Classic Mode.   routes.MapRoute( "DefaultClassic", // Route name "{controller}.aspx/{action}/{id}", // URL with parameters new { controller = "Home", action = "Index", id = UrlParameter.Optional } // Parameter defaults ); routes.MapRoute( "DefaultIntegrated", // Route name "{controller}/{action}/{id}", // URL with parameters new { controller = "Home", action = "Index", id = UrlParameter.Optional } // Parameter defaults );               Now you have set up two routes, one for Integrated mode and one for Classic mode. Now you only need to ensure that Integrated mode route should only match if the application is running in Integrated mode and Classic mode route should only match if the application is running in Classic mode. For making this work you need to create two custom constraint for Integrated and Classic mode. So replace the above routes with these routes,     routes.MapRoute( "DefaultClassic", // Route name "{controller}.aspx/{action}/{id}", // URL with parameters new { controller = "Home", action = "Index", id = UrlParameter.Optional }, // Parameter defaults new { mode = new ClassicModeConstraint() }// Constraints ); routes.MapRoute( "DefaultIntegrated", // Route name "{controller}/{action}/{id}", // URL with parameters new { controller = "Home", action = "Index", id = UrlParameter.Optional }, // Parameter defaults new { mode = new IntegratedModeConstraint() }// Constraints );            The first route which is for Classic mode adds a ClassicModeConstraint and second route which is for Integrated mode adds a IntegratedModeConstraint. Next you need to add the implementation of these constraint classes.     public class ClassicModeConstraint : IRouteConstraint { public bool Match(HttpContextBase httpContext, Route route, string parameterName, RouteValueDictionary values, RouteDirection routeDirection) { return !HttpRuntime.UsingIntegratedPipeline; } } public class IntegratedModeConstraint : IRouteConstraint { public bool Match(HttpContextBase httpContext, Route route, string parameterName, RouteValueDictionary values, RouteDirection routeDirection) { return HttpRuntime.UsingIntegratedPipeline; } }             HttpRuntime.UsingIntegratedPipeline returns true if the application is running on Integrated mode; otherwise, it returns false. So routes for Integrated mode only matched when the application is running on Integrated mode and routes for Classic mode only matched when the application is not running on Integrated mode.       
Summary: When developing applications, sometimes developers are not sure whether the application will be hosted on IIS 6, or on IIS 7 (or IIS 7.5) in Integrated or Classic mode. So it's a good idea to create separate routes for both Classic and Integrated mode, so that your application uses extensionless URLs where possible and URLs with an extension where that is not possible. In this article I showed you how to create separate routes for IIS Integrated and Classic mode. Hope you will enjoy this article too.

    Read the article

  • Tellago keeps hiring

    - by gsusx
    Tellago keeps growing and hiring very aggressively. We recently received the American Business Award for the best company in the United States, under 100 people, in the computer services industry (more details about that in a future post :) ). We are currently looking for architects to join our SOA and SharePoint practices. If you are a brilliant developer or architect with expertise in technologies such as WCF, WF or BizTalk Server, you are passionate about technologies and crazy enough to...(read more)

    Read the article

  • Telephone.com

    - by jrice
    Check out Telephone.com, our new website built on .NET Framework 3.5. You can now add your own twist to telephone.com and personalize your messaging style by writing your own SMS applications to implement any feature you would like to add to your messaging experience, using our WCF REST API. Regards

    Read the article

  • Learning Issued Token in Federated Service

    - by Lijo
    I would like to learn about federated WCF services. I have the following on my system:

    • Windows XP
    • Visual Studio 2010 Express
    • SQL Server 2008 Express

    Is it possible to create a federated service sample with this infrastructure? Is there any article for that?

    UPDATE
    Federation: http://msdn.microsoft.com/en-us/library/ms730908.aspx
    Federation Sample: http://msdn.microsoft.com/en-us/library/aa355045.aspx

    Read the article

  • Basic WCF Unit Testing

    - by Brian
    Coming from someone who loves the KISS method, I was surprised to find that I was making something entirely too complicated. I know, shocker right? Now I'm no unit testing ninja, and not really a WCF ninja either, but I had a desire to test service calls without a) going to a database, or b) making sure that the entire WCF infrastructure was tip top. Who does? It's not the environment I want to test, just the logic I've written, to ensure there aren't any side effects. So, for the K.I.S.S. method: assuming that you're using a WCF service library (you are using service libraries, correct?), it's really as easy as referencing the service library, then building out some stubs for bunking up data.

    The service contract

    We'll use a very basic service contract, just for getting and updating an entity. I've used the default "CompositeType" that is in the template, handy only for examples like this. I've added an Id property and overridden ToString and Equals.

        [ServiceContract]
        public interface IMyService
        {
            [OperationContract]
            CompositeType GetCompositeType(int id);

            [OperationContract]
            CompositeType SaveCompositeType(CompositeType item);

            [OperationContract]
            CompositeTypeCollection GetAllCompositeTypes();
        }

    The implementation

    When I implement the service, I want to be able to send known data into it so I don't have to fuss around with database access or the like. To do this, I first have to create an interface for my data access:

        public interface IMyServiceDataManager
        {
            CompositeType GetCompositeType(int id);
            CompositeType SaveCompositeType(CompositeType item);
            CompositeTypeCollection GetAllCompositeTypes();
        }

    For the purposes of this we can ignore our implementation of the IMyServiceDataManager interface inside of the service. Pretend it uses LINQ to Entities to map its data, or maybe it goes old school and uses EntLib to talk to SQL. Maybe it talks to a tape spool on a mainframe on the third floor. It really doesn't matter. That's the point. So here's what our service looks like in its most basic form:

        public CompositeType GetCompositeType(int id)
        {
            //sanity checks
            if (id == 0) throw new ArgumentException("id cannot be zero.");
            return _dataManager.GetCompositeType(id);
        }

        public CompositeType SaveCompositeType(CompositeType item)
        {
            return _dataManager.SaveCompositeType(item);
        }

        public CompositeTypeCollection GetAllCompositeTypes()
        {
            return _dataManager.GetAllCompositeTypes();
        }

    But what about the data manager? The constructor takes care of that. I don't want to expose any testing ability in release (or the ability for someone to swap out my data manager), so this is what we get:

        IMyServiceDataManager _dataManager;

        public MyService()
        {
            _dataManager = new MyServiceDataManager();
        }

        #if DEBUG
        public MyService(IMyServiceDataManager dataManager)
        {
            _dataManager = dataManager;
        }
        #endif

    The Stub

    Now it's time for the rubber to meet the road... Like most guys that ever talk about unit testing, here's a sample that is painting in *very* broad strokes. The important part, however, is that within the test project I've created a bunk (unit testing purists would say stub, I believe) object that implements my IMyServiceDataManager so that I can deal with known data.
    Here it is:

        internal class FakeMyServiceDataManager : IMyServiceDataManager
        {
            internal FakeMyServiceDataManager()
            {
                Collection = new CompositeTypeCollection();
                Collection.AddRange(new CompositeTypeCollection
                {
                    new CompositeType { Id = 1, BoolValue = true, StringValue = "foo 1", },
                    new CompositeType { Id = 2, BoolValue = false, StringValue = "foo 2", },
                    new CompositeType { Id = 3, BoolValue = true, StringValue = "foo 3", },
                });
            }

            CompositeTypeCollection Collection { get; set; }

            #region IMyServiceDataManager Members

            public CompositeType GetCompositeType(int id)
            {
                if (id <= 0) return null;
                return Collection.SingleOrDefault(m => m.Id == id);
            }

            public CompositeType SaveCompositeType(CompositeType item)
            {
                var existing = Collection.SingleOrDefault(m => m.Id == item.Id);
                if (null != existing)
                {
                    Collection.Remove(existing);
                }

                if (item.Id == 0)
                {
                    item.Id = Collection.Count > 0 ? Collection.Max(m => m.Id) + 1 : 1;
                }

                Collection.Add(item);
                return item;
            }

            public CompositeTypeCollection GetAllCompositeTypes()
            {
                return Collection;
            }

            #endregion
        }

    It's tough to see in this example why any of this is necessary, but in a real-world application you would/should/could be applying much more logic within your service implementation. This all serves to ensure that, between refactorings and the like, it doesn't send sparking cogs all about or let the blue smoke out. Here's a simple test that brings it all home (remember, broad strokes):

        [TestMethod]
        public void MyService_GetCompositeType_ExpectedValues()
        {
            FakeMyServiceDataManager fake = new FakeMyServiceDataManager();
            MyService service = new MyService(fake);

            CompositeType expected = fake.GetCompositeType(1);
            CompositeType actual = service.GetCompositeType(1);

            Assert.AreEqual<CompositeType>(expected, actual,
                "Objects are not equal. Expected: {0}; Actual: {1};", expected, actual);
        }

    Summary

    That's really all there is to it. You could use software x or framework y to do the exact same thing, but in my case I just didn't really feel like it. This speaks volumes to my not-yet-ninja unit testing prowess.
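    If you do reach for a framework, the same arrangement can be expressed with a mocking library. Here is a rough sketch assuming Moq (not what the post above uses; the Setup call and values are purely illustrative, and it needs a reference to Moq plus "using Moq;"):

        // Rough Moq equivalent of the hand-rolled stub above (assumed, not part of the original post).
        [TestMethod]
        public void MyService_GetCompositeType_ExpectedValues_WithMoq()
        {
            var expected = new CompositeType { Id = 1, BoolValue = true, StringValue = "foo 1" };

            // Set up the data manager so the service receives known data; no database involved.
            var dataManager = new Mock<IMyServiceDataManager>();
            dataManager.Setup(m => m.GetCompositeType(1)).Returns(expected);

            MyService service = new MyService(dataManager.Object);
            CompositeType actual = service.GetCompositeType(1);

            Assert.AreEqual<CompositeType>(expected, actual);
        }

    Either way, the design choice is the same: the DEBUG-only constructor is the seam that lets test doubles in while keeping the release build locked down.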

    Read the article

  • Silverlight Reporting Application Part 3.5 - Prism Background and WCF RIA [Series Intermission]

    Taking a step back before I dive into the details and full-on coding fun, I wanted to once again respond to a comment on my last post to clear up some things regarding how I'm setting up my project and some of the choices I've made. Aka, thanks Ben. :)

    Prism Project Setup

    For starters, I'm not the ideal use case for a Prism application. In most cases where you've got a one-man team, Prism can be overkill, as it is more intended for large, geographically dispersed teams or for applications with a larger scale than my Recruiting application, where you'll greatly benefit from modularity, delayed loading of xaps, etc. What Prism offers, though, is a manner for handling UI, commands, and events with the idea that, through a modular approach in which no parts really need to know about one another, I can update this application bit by bit as hiring needs change or requirements differ between offices, without having to worry that changing something in the Jobs module will break something in, say, the Scheduling module. All that being said, here's a look at how our project breakdown for Recruit (MVVM/Prism implementation) looks. This could be a little misleading though, as each of those modules is actually another project in the overall Recruit solution. As far as what the projects actually are, that looks a bit like this:

        Recruiting Solution
            Recruit (the shell) - Main Silverlight Application
            .Web - Default .Web application to host the Silverlight app
            Infrastructure - Silverlight Class Library Project
            Modules - Silverlight Class Library Projects

    Infrastructure & Modules

    The Infrastructure project is probably something you'll see to some degree in any composite application. In this application, it is going to contain custom commands (you'll see the joy of these in a post or two down the road), events, helper classes, and any custom classes I need to share between different modules. Think of this as a handy little crossroads between any parts of your application. Modules, on the other hand, are the bread and butter of this application. Besides the shell, which holds the UI skeleton, and the infrastructure, which holds all those shared goodies, the modules are self-contained bundles of functionality that handle different concerns. In my scenario, I need a way to look up and edit Jobs and Applicants and to schedule interviews, a Notification module to handle telling the user when different things are happening (i.e., loading from the database), and a Menu module to control interaction and moving between different views. All modules are going to follow the same pattern: the module class will implement IModule and handle initialization and loading the correct view into the correct region, whereas the Views and ViewModels folders will contain paired Silverlight user controls and ViewModel class backings (a minimal sketch of what such a module class can look like follows at the end of this post).

    WCF RIA Services

    Since we've got all the projects in a single solution, we did not have to go the route of creating a WCF RIA Services Class Library. Every module has its WCF RIA link back to the main .Web project, so the single Linq-2-SQL (yes, I said Linq-2-SQL, but I'll soon be switching to OpenAccess due to the new visual designer) context I'm using there works nicely with the scope of my project. If I were going for completely separating this project out and doing different, dynamically loaded elements, I'd probably go for the separate class library. Hope that clears that up.
    In the future, though, I will be using that in a project that I've got in the "when I've got enough time to work on this" pipeline, so we'll get into that eventually, and hopefully when WCF RIA is in full release!

    Why Not use Silverlight Navigation/Business Template?

    The short answer: I'm a creature of habit, and having used Silverlight for a few years now, I'm used to doing lots of things manually. :) Plus, starting with a blank slate of a project, I'm able to set things up exactly as I want them to be. In this case, rather than the navigation frame we would see in one of the templates, the MainRegion/ContentControl is working as our main navigation window. In many cases I will use the Silverlight navigation template to start things off; however, in this case I did not need those features, so I opted out of using it. Next time, when I actually hit post #4, we're going to get into the modules and start getting functionality into this application. Next week is also release week for the Q1 2010 release, so be sure to check out our annual Webinar Week (I might be biased, but Wednesday is my favorite out of the group).
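    To make the module pattern described above a bit more concrete, here is a minimal sketch of what one of those module classes could look like. The module, view, and region names are made up for illustration, the exact namespaces vary between Prism releases, and this is not code from the Recruit application itself:

        using System.Windows.Controls;
        using Microsoft.Practices.Composite.Modularity;            // IModule (Prism v2 / CAL; later versions use different namespaces)
        using Microsoft.Practices.Composite.Regions;               // IRegionManager
        using Microsoft.Practices.Composite.Presentation.Regions;  // RegisterViewWithRegion extension method

        // Placeholder view; in the real project this would be a Silverlight user control
        // paired with a ViewModel, per the Views/ViewModels folder convention above.
        public class JobsView : UserControl { }

        // Hypothetical Jobs module: it knows how to initialize itself and which region its view belongs in.
        public class JobsModule : IModule
        {
            private readonly IRegionManager _regionManager;

            public JobsModule(IRegionManager regionManager)
            {
                _regionManager = regionManager;
            }

            public void Initialize()
            {
                // The module, not the shell, decides what gets loaded where.
                // "MainRegion" is an assumed region name defined in the shell's XAML.
                _regionManager.RegisterViewWithRegion("MainRegion", typeof(JobsView));
            }
        }

    The point of the sketch is the shape of the thing: each module stays ignorant of the others and only talks to shared pieces (regions, events, commands) through the Infrastructure project.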

    Read the article

< Previous Page | 71 72 73 74 75 76 77 78 79 80 81 82  | Next Page >