Search Results

Search found 62087 results on 2484 pages for 'net framework extended'.


  • Invariant code contracts – using class-wide contracts

    - by DigiMortal
    It is possible to define invariant code contracts for classes. Invariant contracts should always hold true whatever member of the class is called. In this posting I will show you how to use invariant code contracts so you understand how they work and how they should be tested. This is the randomizer class I am using to demonstrate code contracts. I added one method for invariant code contracts. Currently there is one contract that makes sure the random number generator is not null. public class Randomizer { private IRandomGenerator _generator; private Randomizer() { } public Randomizer(IRandomGenerator generator) { _generator = generator; } public int GetRandomFromRangeContracted(int min, int max) { Contract.Requires<ArgumentOutOfRangeException>( min < max, "Min must be less than max" ); Contract.Ensures( Contract.Result<int>() >= min && Contract.Result<int>() <= max, "Return value is out of range" ); return _generator.Next(min, max); } [ContractInvariantMethod] private void ObjectInvariant() { Contract.Invariant(_generator != null); } } Invariant code contracts are defined in methods that have the ContractInvariantMethod attribute. Some notes: It is a good idea to define invariant methods as private. Don’t call invariant methods from your code, because the code contracts system does not allow it. Invariant methods are defined only as a place where you can keep invariant contracts. Invariant methods are called only when a call to some class member is made! The last note means that having an invariant method and creating a Randomizer object with null as the argument does not automatically throw an exception. We have to call at least one method from the Randomizer class. Here is the test for the generator. You can find more about testing contracted code in my posting Code Contracts: Unit testing contracted code, which also explains why the exception handling in the test looks the way it does. [TestMethod] [ExpectedException(typeof(Exception))] public void Should_fail_if_generator_is_null() { try { var randomizer = new Randomizer(null); randomizer.GetRandomFromRangeContracted(1, 4); } catch (Exception ex) { throw new Exception(ex.Message, ex); } } Try out this code – with unit tests or with a test application – to see that invariant contracts are checked as soon as you call some member of the Randomizer class.
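    To make the class above easier to try out, here is a minimal usage sketch (the FixedGenerator stub and the RandomizerDemo class are my own additions, not part of the original posting) showing a hand-rolled IRandomGenerator being plugged into Randomizer:

      using System;

      // A trivial IRandomGenerator stub that returns a fixed value so the
      // contracted method can be exercised with predictable output.
      public class FixedGenerator : IRandomGenerator
      {
          private readonly int _value;

          public FixedGenerator(int value)
          {
              _value = value;
          }

          public int Next(int min, int max)
          {
              return _value;
          }
      }

      public static class RandomizerDemo
      {
          public static void Main()
          {
              // With runtime contract checking enabled, the pre-condition,
              // post-condition and class invariant are all evaluated around this call.
              var randomizer = new Randomizer(new FixedGenerator(3));
              Console.WriteLine(randomizer.GetRandomFromRangeContracted(1, 10)); // prints 3
          }
      }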

    Read the article

  • LINQ: Single vs. First

    - by Paulo Morgado
    I’ve witnessed and been involved in several discussions around the correctness or usefulness of the Single method in the LINQ API. The most common argument is that you are querying for the first element of the result set and an exception will be thrown if there’s more than one element. The First method should be used instead, because it doesn’t throw if the result set has more than one item. Although the documentation for Single states that it returns a single, specific element of a sequence of values, it actually returns THE single, specific element of a sequence of ONE value. Once you use the Single method in your code you are asserting that your query will produce a scalar result instead of a result set of arbitrary length. On the other hand, the documentation for First states that it returns the first element of a sequence of arbitrary length. Imagine you want to catch a taxi. You go to the taxi line and catch the FIRST one, no matter how many are there. On the other hand, if you go to the parking lot to get your car, you want the SINGLE specific car that’s yours. If your “query” “returns” more than one car, it’s an exception. Either because it “returned” more than just your car, or because you happen to have more than one car in that parking lot. In either case, you can only drive one car at a time and you’ll need to refine your “query”.
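    To make the distinction concrete, here is a small hedged illustration (the Car class and the sample data are made up for the example): First takes the head of a sequence of arbitrary length, while Single asserts that the query yields exactly one element and throws otherwise.

      using System;
      using System.Collections.Generic;
      using System.Linq;

      public class Car
      {
          public string Owner { get; set; }
          public string Plate { get; set; }
      }

      public static class SingleVsFirstDemo
      {
          public static void Main()
          {
              var parkingLot = new List<Car>
              {
                  new Car { Owner = "me",       Plate = "AAA-111" },
                  new Car { Owner = "somebody", Plate = "BBB-222" },
                  new Car { Owner = "me",       Plate = "CCC-333" }
              };

              // First: "give me the first match, however many there are".
              var firstCar = parkingLot.First(c => c.Owner == "me"); // AAA-111

              // Single: "there must be exactly one match"; here it throws
              // InvalidOperationException because two cars match.
              try
              {
                  var myCar = parkingLot.Single(c => c.Owner == "me");
              }
              catch (InvalidOperationException)
              {
                  Console.WriteLine("Single() failed: the query needs to be refined.");
              }
          }
      }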

    Read the article

  • Don’t miss a thing when you’re going to the Mix, DevConnections, Tech-Ed or PDC conferences.

    - by albertpascual
    Besides all the sessions and courses found in the agenda, there are events happening around the conference that are easy to miss. Those events are published and indexed in this iPhone & iPad app so you can find the parties and external events around the conference that you would otherwise miss. Download it for free here if you are going to Mix, DevConnections, TechEd or PDC this year. http://itunes.apple.com/us/app/eventmeetup/id421597442?mt=8&ls=1 Cheers Al

    Read the article

  • Is recursion really bad?

    - by dotneteer
    After my previous post about stack space, it appears from the feedback that there is a perception that recursion is bad and that we should avoid deep recursion. After writing a compiler, I know that the modern computer and compiler are complex enough that one cannot automatically assume hand-crafted code will out-perform the compiler's optimizations. The only way to know is to build a prototype and find out. So why might recursive code not perform as well? Compilers place frames on a stack. In addition to arguments and local variables, compilers also need to place frame and program pointers on the frame, resulting in overhead. So why might hand-crafted code not perform as well? The stack used by a compiler is a simple data structure that can grow and shrink cleanly. To replace recursion with our own stack, that stack has to be allocated on the heap, which is far more complicated to manage. There can also be overhead if the compiler needs to mark objects for garbage collection, and the compiler has to worry about memory fragmentation. Then there is additional complexity: CPUs have registers and multiple levels of cache. Register access is a few times faster than in-CPU cache access and a few tens of times faster than on-board memory access, so it is up to the OS and compiler to maximize the use of registers and the in-CPU cache. For my particular problem, I ran an experiment and rewrote my C# recursive code with a loop-and-stack approach. Here are the outcomes of the two approaches: lines of code for the algorithm, 17 (recursive) vs. 46 (loop and stack); speed, baseline vs. 3% faster; readability, clean vs. far more complex. So in the end I was able to achieve 3% better performance, with the other drawbacks. My message is to never assume your sophisticated approach will automatically work out better than a simpler approach on a modern computer and compiler. Gauge carefully before committing to the more complex approach.
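    To make the comparison concrete, here is a minimal hypothetical C# sketch of the two approaches, a recursive tree sum versus the same algorithm rewritten with an explicit Stack<T> (the Node type and the algorithm are illustrative, not the code from the original experiment):

      using System.Collections.Generic;

      public class Node
      {
          public int Value;
          public Node Left;
          public Node Right;
      }

      public static class RecursionDemo
      {
          // Recursive version: short and readable; each call adds a stack frame.
          public static long SumRecursive(Node node)
          {
              if (node == null) return 0;
              return node.Value + SumRecursive(node.Left) + SumRecursive(node.Right);
          }

          // Loop-and-stack version: no deep call stack, but the explicit Stack<T>
          // lives on the heap and the code is noticeably more involved.
          public static long SumWithStack(Node root)
          {
              long sum = 0;
              var stack = new Stack<Node>();
              if (root != null) stack.Push(root);

              while (stack.Count > 0)
              {
                  var node = stack.Pop();
                  sum += node.Value;
                  if (node.Left != null) stack.Push(node.Left);
                  if (node.Right != null) stack.Push(node.Right);
              }
              return sum;
          }
      }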

    Read the article

  • Windows Azure PowerShell for Node.js

    - by shiju
    The Windows Azure PowerShell for Node.js is a command-line tool that allows Node developers to build and deploy Node.js apps in Windows Azure using Windows PowerShell cmdlets. Using Windows Azure PowerShell for Node.js, you can develop, test, deploy and manage Node-based hosted services in Windows Azure. To get PowerShell for Node.js, click All Programs, Windows Azure SDK Node.js and run Windows Azure PowerShell for Node.js as Administrator. The following are a few PowerShell cmdlets that let you work with Node.js apps in Windows Azure. Create New Hosted Service New-AzureService <HostedServiceName> The cmdlet below will create a Windows Azure hosted service named NodeOnAzure in the folder C:\nodejs and will also create the ServiceConfiguration.Cloud.cscfg, ServiceConfiguration.Local.cscfg, ServiceDefinition.csdef and deploymentSettings.json files for the hosted service. PS C:\nodejs> New-AzureService NodeOnAzure The below picture shows the files after creating the hosted service. Create Web Role Add-AzureNodeWebRole <RoleName> The following cmdlet will create a web role named MyNodeApp along with a web.config file. PS C:\nodejs\NodeOnAzure> Add-AzureNodeWebRole MyNodeApp The below picture shows the files after creating the web role app. Install Node Module npm install <NodeModule> The following command will install the Express Node module into your web role app. PS C:\nodejs\NodeOnAzure\MyNodeApp> npm install Express Run Windows Azure Apps Locally in the Emulator Start-AzureEmulator -launch The following cmdlet will create a local package and run the Windows Azure app locally in the emulator. PS C:\nodejs\NodeOnAzure\MyNodeApp> Start-AzureEmulator -launch Stop Windows Azure Emulator Stop-AzureEmulator The following cmdlet will stop your Windows Azure app in the emulator. PS C:\nodejs\NodeOnAzure\MyNodeApp> Stop-AzureEmulator Download Windows Azure Publishing Settings Get-AzurePublishSettings The following cmdlet will redirect you to the Windows Azure portal, where you can download the Windows Azure publish settings. PS C:\nodejs\NodeOnAzure\MyNodeApp> Get-AzurePublishSettings Import Windows Azure Publishing Settings Import-AzurePublishSettings <Location of .publishSettings file> The following cmdlet will import the publish settings file from the location c:\nodejs. PS C:\nodejs\NodeOnAzure\MyNodeApp> Import-AzurePublishSettings c:\nodejs\shijuvar.publishSettings Publish Apps to Windows Azure Publish-AzureService -name <Name> -location <Location of Data centre> The following cmdlet will publish the app to Windows Azure with the name "NodeOnAzure" in the location Southeast Asia. Please keep in mind that the service name should be unique. PS C:\nodejs\NodeOnAzure\MyNodeApp> Publish-AzureService -name NodeonAzure -location "Southeast Asia" -launch Stop Windows Azure Service Stop-AzureService The following cmdlet will stop the service which you have deployed previously. PS C:\nodejs\NodeOnAzure\MyNodeApp> Stop-AzureService Remove Windows Azure Service Remove-AzureService The following cmdlet will remove your service from Windows Azure.
PS C:\nodejs\NodeOnAzure\MyNodeApp> Remove-AzureService Quick Summary for PowerShell cmdlets Create a new Hosted Service New-AzureService <HostedServiceName> Create a Web Role Add-AzureNodeWebRole <RoleName> Install Node Module npm install <NodeModule> Run Windows Azure Apps Locally in the Emulator Start-AzureEmulator -launch Stop Windows Azure Emulator Stop-AzureEmulator Download Windows Azure Publishing Settings Get-AzurePublishSettings Import Windows Azure Publishing Settings Import-AzurePublishSettings <Location of .publishSettings file> Publish Apps to Windows Azure Publish-AzureService -name <Name> -location <Location of Data centre> Stop Windows Azure Service Stop-AzureService Remove Windows Azure Service Remove-AzureService

    Read the article

  • Mercurial Conversion from Team Foundation Server

    - by mhawley
    I’m using Twitter. Follow me @matthawley. One of my many (almost) daily tasks when working on the CodePlex platform, ever since we released Mercurial as a supported version control system, has been converting projects from Team Foundation Server (TFS) to Mercurial. I'm happy to say that of all the conversions I have done since mid-January, the success rate for migrating full source history is about 95%. To get to this point, I have had to learn and refine several techniques utilizing a few different tools… (read more)

    Read the article

  • Remove page flicker in IE8

    - by webbes
    In this pet project of mine I have two large background images. Unfortunately this means that IE will display a nasty page flicker on each request. Implementing AJAX just to get rid of it would be overkill, so I solved this in a different way....(read more)

    Read the article

  • NHibernate Tools

    - by Ricardo Peres
    Felice Pollano is the author of two great new tools for working with NHibernate: NH Workbench, an IDE for writing HQL queries against a model, and db2hbm, which generates .hbm.xml files from a database (currently only SQL Server, more to come). I suggest you give them a try and give Felice your feedback!

    Read the article

  • NHibernate Pitfalls: Fetch and Paging

    - by Ricardo Peres
    This is part of a series of posts about NHibernate Pitfalls. See the entire collection here. NHibernate allows you to force the loading of additional references (many to one, one to one) or collections (one to many, many to many) in a query. You must know, however, that this is incompatible with paging. It’s easy to see why. Let’s say you want to get 5 products starting at the fifth; you can issue the following LINQ query: 1: session.Query<Product>().Take(5).Skip(5).ToList(); This will produce the following SQL in SQL Server: 1: SELECT 2: TOP (@p0) product1_4_, 3: name4_, 4: price4_ 5: FROM 6: (select 7: product0_.product_id as product1_4_, 8: product0_.name as name4_, 9: product0_.price as price4_, 10: ROW_NUMBER() OVER( 11: ORDER BY 12: CURRENT_TIMESTAMP) as __hibernate_sort_row 13: from 14: product product0_) as query 15: WHERE 16: query.__hibernate_sort_row > @p1 17: ORDER BY If, however, you also wanted to bring along the associated order details, you might be tempted to try this: 1: session.Query<Product>().Fetch(x => x.OrderDetails).Take(5).Skip(5).ToList(); Which, in turn, will produce this SQL: 1: SELECT 2: TOP (@p0) product1_4_0_, 3: order1_3_1_, 4: name4_0_, 5: price4_0_, 6: order2_3_1_, 7: product3_3_1_, 8: quantity3_1_, 9: product3_0__, 10: order1_0__ 11: FROM 12: (select 13: product0_.product_id as product1_4_0_, 14: orderdetai1_.order_detail_id as order1_3_1_, 15: product0_.name as name4_0_, 16: product0_.price as price4_0_, 17: orderdetai1_.order_id as order2_3_1_, 18: orderdetai1_.product_id as product3_3_1_, 19: orderdetai1_.quantity as quantity3_1_, 20: orderdetai1_.product_id as product3_0__, 21: orderdetai1_.order_detail_id as order1_0__, 22: ROW_NUMBER() OVER( 23: ORDER BY 24: CURRENT_TIMESTAMP) as __hibernate_sort_row 25: from 26: product product0_ 27: left outer join 28: order_detail orderdetai1_ 29: on product0_.product_id=orderdetai1_.product_id 30: ) as query 31: WHERE 32: query.__hibernate_sort_row > @p1 33: ORDER BY 34: query.__hibernate_sort_row; However, because of the JOIN, what happens is that, if your products have more than one order detail, you will get several records – one per order detail – per product, which means that pagination will be broken. There is a workaround, which forces you to write your LINQ query in another way: 1: session.Query<OrderDetail>().Where(x => session.Query<Product>().Select(y => y.ProductId).Take(5).Skip(5).Contains(x.Product.ProductId)).Select(x => x.Product).ToList() Or, using HQL: 1: session.CreateQuery("select od.Product from OrderDetail od where od.Product.ProductId in (select p.ProductId from Product p skip 5 take 5)").List<Product>(); The generated SQL will then be: 1: select 2: product1_.product_id as product1_4_, 3: product1_.name as name4_, 4: product1_.price as price4_ 5: from 6: order_detail orderdetai0_ 7: left outer join 8: product product1_ 9: on orderdetai0_.product_id=product1_.product_id 10: where 11: orderdetai0_.product_id in ( 12: SELECT 13: TOP (@p0) product_id 14: FROM 15: (select 16: product2_.product_id, 17: ROW_NUMBER() OVER( 18: ORDER BY 19: CURRENT_TIMESTAMP) as __hibernate_sort_row 20: from 21: product product2_) as query 22: WHERE 23: query.__hibernate_sort_row > @p1 24: ORDER BY 25: query.__hibernate_sort_row); Which will get you what you want: for 5 products, all of their order details.
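    As a hedged alternative (my own sketch, not from the original post): another common way around the problem is to page over the product ids first and then eagerly fetch the collections for just that page in a second query, using the FetchMany extension from NHibernate.Linq. Class and property names follow the post's examples; depending on the NHibernate version the second query may still return duplicate root entities, which can be de-duplicated in memory.

      using System.Linq;
      using NHibernate;
      using NHibernate.Linq;

      public static class PagedFetchExample
      {
          public static void LoadPage(ISession session)
          {
              // First query: page over product ids only, so the TOP/ROW_NUMBER paging stays correct.
              var pageIds = session.Query<Product>()
                  .OrderBy(p => p.ProductId)
                  .Skip(5)
                  .Take(5)
                  .Select(p => p.ProductId)
                  .ToList();

              // Second query: load those products and eagerly fetch their order details.
              var products = session.Query<Product>()
                  .Where(p => pageIds.Contains(p.ProductId))
                  .FetchMany(p => p.OrderDetails)
                  .ToList();
          }
      }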

    Read the article

  • LLBLGen Pro feature highlights: model views

    - by FransBouma
    (This post is part of a series of posts about features of the LLBLGen Pro system) To be able to work with large(r) models, it's key that you can view subsets of these models so you can have a better, more focused look at them, for example because you want to display how a subset of entities relate to one another in a different way than a flat list of entities. LLBLGen Pro offers this in the form of Model Views. Model Views are views on parts of the entity model of a project, and the subsets are displayed in a graphical way. Additionally, one can add documentation to a Model View. As Model Views display parts of the model in a graphical way, they're easier to explain to people who aren't familiar with entity models, e.g. the stakeholders you're interviewing for your project. The documentation can then be used to communicate specifics of the elements on the model view to the developers who have to write the actual code. Below I've included an example. It's a model view on a subset of the entities of AdventureWorks. It displays several entities, their relationships (both relational and inheritance relationships) and also some specifics gathered from the interview with the stakeholder. As the information is inside the actual project the developer will work with, it doesn't have to be converted to/from e.g. Word documents or other intermediate formats; it's the same project. This makes sure there are fewer errors and misunderstandings. (Of course you can hide the docked documentation pane or dock it to another corner.) The Model View can contain entities which are placed in different groups. This makes it ideal for grouping entities together for close examination even though they're stored in different groups. The Model View is a first-class citizen of the code generator. This means you can write templates which consume Model Views and generate code accordingly. E.g. you can write a template which generates a service per Model View and exposes the entities in the Model View as a single entity graph, fetched through a method. (This template isn't included in the LLBLGen Pro package, but it's easy to write it up yourself with the built-in template editor.) Viewing an entity model in different ways is key to fully understanding the entity model, and Model Views help with that.

    Read the article

  • Hosting WCF service in Windows Service

    - by DigiMortal
    When building Windows services we often need a way to communicate with them. The natural way to communicate with a service is to send signals to it, but that is a very limited form of communication; usually we need more powerful communication mechanisms. In this posting I will show you how to use a service-hosted WCF web service to communicate with a Windows service. Create Windows service Suppose you have a Windows service created and the service class is named MyWindowsService. This is a new service and all we have is the default code that Visual Studio generates. Create WCF service Add a reference to the System.ServiceModel assembly to the Windows service project and add a new interface called IMyService. This interface defines our service contract. [ServiceContract] public interface IMyService { [OperationContract] string SayHello(int value); } We keep this service simple so it is easy for you to follow the code. Now let’s add the service implementation: [ServiceBehavior(InstanceContextMode=InstanceContextMode.Single)] public class MyService : IMyService { public string SayHello(int value) { return string.Format("Hello, : {0}", value); } } With the ServiceBehavior attribute we say that we need only one instance of the WCF service to serve all requests. Usually this is more than enough for us. Hosting WCF service in Windows Service Now it’s time to host our WCF service and make it available in the Windows service. Here is the code in my Windows service: public partial class MyWindowsService : ServiceBase { private ServiceHost _host; private MyService _server; public MyWindowsService() { InitializeComponent(); } protected override void OnStart(string[] args) { _server = new MyService(); _host = new ServiceHost(_server); _host.Open(); } protected override void OnStop() { _host.Close(); } } Our Windows service now hosts our WCF service. The WCF service becomes available when the Windows service starts and is taken down when the Windows service stops. Configuring WCF service To make the WCF service usable we need to configure it. Add an app.config file to your Windows service project and paste the following XML there: <system.serviceModel>   <serviceHostingEnvironment aspNetCompatibilityEnabled="true" />   <services>     <service name="MyWindowsService.MyService" behaviorConfiguration="def">       <host>         <baseAddresses>           <add baseAddress="http://localhost:8732/MyService/"/>         </baseAddresses>       </host>       <endpoint address="" binding="wsHttpBinding" contract="MyWindowsService.IMyService">       </endpoint>       <endpoint address="mex" binding="mexHttpBinding" contract="IMetadataExchange"/>     </service>   </services>   <behaviors>     <serviceBehaviors>       <behavior name="def">         <serviceMetadata httpGetEnabled="True"/>         <serviceDebug includeExceptionDetailInFaults="True"/>       </behavior>     </serviceBehaviors>   </behaviors> </system.serviceModel> Now you are ready to test your service. Install the Windows service and start it. Open your browser and go to the following address: http://localhost:8732/MyService/ You should see your WCF service page now. Conclusion WCF is not only for web applications. You can also use WCF as a self-hosted service. Windows services that lack good communication possibilities can be rescued with a self-hosted WCF service, as it is the best way to talk to the service. We can also reverse the perspective and say that a Windows service is a good host for our WCF service.
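    To round out the picture, here is a minimal hedged sketch of a console client calling the service above through a ChannelFactory. It assumes the IMyService contract is shared with the client (for example via a referenced assembly) and that the address matches the app.config above; in a real project you would more likely use a generated proxy or configuration instead of hard-coded values.

      using System;
      using System.ServiceModel;

      // The shared contract; in practice this would live in a common assembly
      // referenced by both the Windows service and the client.
      [ServiceContract]
      public interface IMyService
      {
          [OperationContract]
          string SayHello(int value);
      }

      public static class MyServiceClient
      {
          public static void Main()
          {
              var factory = new ChannelFactory<IMyService>(
                  new WSHttpBinding(),
                  new EndpointAddress("http://localhost:8732/MyService/"));

              IMyService proxy = factory.CreateChannel();
              Console.WriteLine(proxy.SayHello(42)); // "Hello, : 42"

              ((IClientChannel)proxy).Close();
              factory.Close();
          }
      }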

    Read the article

  • Code Contracts: Unit testing contracted code

    - by DigiMortal
    Code contracts and unit tests are not replacements for each other. They both have different purpose and different nature. It does not matter if you are using code contracts or not – you still have to write tests for your code. In this posting I will show you how to unit test code with contracts. In my previous posting about code contracts I showed how to avoid ContractExceptions that are defined in code contracts runtime and that are not accessible for us in design time. This was one step further to make my randomizer testable. In this posting I will complete the mission. Problems with current code This is my current code. public class Randomizer {     public static int GetRandomFromRangeContracted(int min, int max)     {         Contract.Requires<ArgumentOutOfRangeException>(             min < max,             "Min must be less than max"         );           Contract.Ensures(             Contract.Result<int>() >= min &&             Contract.Result<int>() <= max,             "Return value is out of range"         );           var rnd = new Random();         return rnd.Next(min, max);     } } As you can see this code has some problems: randomizer class is static and cannot be instantiated. We cannot move this class between components if we need to, GetRandomFromRangeContracted() is not fully testable because we cannot currently affect random number generator output and therefore we cannot test post-contract. Now let’s solve these problems. Making randomizer testable As a first thing I made Randomizer to be class that must be instantiated. This is simple thing to do. Now let’s solve the problem with Random class. To make Randomizer testable I define IRandomGenerator interface and RandomGenerator class. The public constructor of Randomizer accepts IRandomGenerator as argument. public interface IRandomGenerator {     int Next(int min, int max); }   public class RandomGenerator : IRandomGenerator {     private Random _random = new Random();       public int Next(int min, int max)     {         return _random.Next(min, max);     } } And here is our Randomizer after total make-over. public class Randomizer {     private IRandomGenerator _generator;       private Randomizer()     {         _generator = new RandomGenerator();     }       public Randomizer(IRandomGenerator generator)     {         _generator = generator;     }       public int GetRandomFromRangeContracted(int min, int max)     {         Contract.Requires<ArgumentOutOfRangeException>(             min < max,             "Min must be less than max"         );           Contract.Ensures(             Contract.Result<int>() >= min &&             Contract.Result<int>() <= max,             "Return value is out of range"         );           return _generator.Next(min, max);     } } It seems to be inconvenient to instantiate Randomizer now but you can always use DI/IoC containers and break compiled dependencies between the components of your system. Writing tests for randomizer IRandomGenerator solved problem with testing post-condition. Now it is time to write tests for Randomizer class. Writing tests for contracted code is not easy. The main problem is still ContractException that we are not able to access. Still it is the main exception we get as soon as contracts fail. Although pre-conditions are able to throw exceptions with type we want we cannot do much when post-conditions will fail. We have to use Contract.ContractFailed event and this event is called for every contract failure. 
This way we find ourselves in situation where supporting well input interface makes it impossible to support output interface well and vice versa. ContractFailed is nasty hack and it works pretty weird way. Although documentation sais that ContractFailed is good choice for testing contracts it is still pretty painful. As a last chance I got tests working almost normally when I wrapped them up. Can you remember similar solution from the times of Visual Studio 2008 unit tests? Cannot understand how Microsoft was able to mess up testing again. [TestClass] public class RandomizerTest {     private Mock<IRandomGenerator> _randomMock;     private Randomizer _randomizer;     private string _lastContractError;       public TestContext TestContext { get; set; }       public RandomizerTest()     {         Contract.ContractFailed += (sender, e) =>         {             e.SetHandled();             e.SetUnwind();               throw new Exception(e.FailureKind + ": " + e.Message);         };     }       [TestInitialize()]     public void RandomizerTestInitialize()     {         _randomMock = new Mock<IRandomGenerator>();         _randomizer = new Randomizer(_randomMock.Object);         _lastContractError = string.Empty;     }       #region InputInterfaceTests     [TestMethod]     [ExpectedException(typeof(Exception))]     public void GetRandomFromRangeContracted_should_throw_exception_when_min_is_not_less_than_max()     {         try         {             _randomizer.GetRandomFromRangeContracted(100, 10);         }         catch (Exception ex)         {             throw new Exception(string.Empty, ex);         }     }       [TestMethod]     [ExpectedException(typeof(Exception))]     public void GetRandomFromRangeContracted_should_throw_exception_when_min_is_equal_to_max()     {         try         {             _randomizer.GetRandomFromRangeContracted(10, 10);         }         catch (Exception ex)         {             throw new Exception(string.Empty, ex);         }     }       [TestMethod]     public void GetRandomFromRangeContracted_should_work_when_min_is_less_than_max()     {         int minValue = 10;         int maxValue = 100;         int returnValue = 50;           _randomMock.Setup(r => r.Next(minValue, maxValue))             .Returns(returnValue)             .Verifiable();           var result = _randomizer.GetRandomFromRangeContracted(minValue, maxValue);           _randomMock.Verify();         Assert.AreEqual<int>(returnValue, result);     }     #endregion       #region OutputInterfaceTests     [TestMethod]     [ExpectedException(typeof(Exception))]     public void GetRandomFromRangeContracted_should_throw_exception_when_return_value_is_less_than_min()     {         int minValue = 10;         int maxValue = 100;         int returnValue = 7;           _randomMock.Setup(r => r.Next(10, 100))             .Returns(returnValue)             .Verifiable();           try         {             _randomizer.GetRandomFromRangeContracted(minValue, maxValue);         }         catch (Exception ex)         {             throw new Exception(string.Empty, ex);         }           _randomMock.Verify();     }       [TestMethod]     [ExpectedException(typeof(Exception))]     public void GetRandomFromRangeContracted_should_throw_exception_when_return_value_is_more_than_max()     {         int minValue = 10;         int maxValue = 100;         int returnValue = 102;           _randomMock.Setup(r => r.Next(10, 100))             .Returns(returnValue)             .Verifiable();           try         {            
 _randomizer.GetRandomFromRangeContracted(minValue, maxValue);         }         catch (Exception ex)         {             throw new Exception(string.Empty, ex);         }           _randomMock.Verify();     }     #endregion        } Although these tests are pretty awful and contain hacks, we are at least able to make sure that our code works as expected. Here is the test list after running these tests. Conclusion Code contracts are very new in the Visual Studio world and, like any young technology, they have some problems – like all other new bits and bytes in the world. As you saw, making our contracted code testable is easy only as long as pre-conditions are concerned. When we start dealing with post-conditions we end up with hacked tests. I hope that future versions of code contracts will solve the error-handling issues so that testing contracted code becomes easier than it is right now.
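    A small hedged refinement of the pattern above (my own sketch): with MSTest the ContractFailed handler can be registered once per test assembly in an AssemblyInitialize method instead of in every test class constructor. The ContractTestSetup class name is mine; the event and the SetHandled/SetUnwind calls are the ones used in the tests above.

      using System;
      using System.Diagnostics.Contracts;
      using Microsoft.VisualStudio.TestTools.UnitTesting;

      [TestClass]
      public class ContractTestSetup
      {
          // Runs once for the whole test assembly, so individual test classes
          // no longer need to wire the handler in their constructors.
          [AssemblyInitialize]
          public static void RegisterContractFailureHandler(TestContext context)
          {
              Contract.ContractFailed += (sender, e) =>
              {
                  e.SetHandled();
                  e.SetUnwind();

                  // Surface the failure as a plain exception that [ExpectedException] can see.
                  throw new Exception(e.FailureKind + ": " + e.Message);
              };
          }
      }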

    Read the article

  • Google Storage for Developers…

    - by joelvarty
    I noticed this today and it seems to be a service that will compete with Amazon S3 and Microsoft’s Azure Blob storage. It’s only open to US developers for now, but I have one burning question: can we transfer directly from Google Storage to another Google service (like YouTube, Docs, etc.) without incurring any transfer charges? The even bigger question is whether all of the APIs will be updated to include this new service and to better amalgamate the existing app services with this one; since storage is so central to everything, it seems to beg the question. via Daring Fireball more later - joel

    Read the article

  • Globalization, Localization And Why My Application Stopped Launching

    - by Paulo Morgado
    When I was localizing a Windows Phone application I was developing, I set the argument on the constructor of the AssemblyCultureAttribute to the neutral culture (en-US in this particular case) for my application. As it was late at night (or early in the morning) I went to sleep and, the next day, the application wasn’t launching although it compiled just fine. I’ll have to confess that it took me a couple of nights to figure out what I had done to my application. Have you figured out what I did wrong? The documentation for the AssemblyCultureAttribute states that: The attribute is used by compilers to distinguish between a main assembly and a satellite assembly. A main assembly contains code and the neutral culture's resources. A satellite assembly contains only resources for a particular culture, as in [assembly:AssemblyCultureAttribute("de")]. Putting this attribute on an assembly and using something other than the empty string ("") for the culture name will make this assembly look like a satellite assembly, rather than a main assembly that contains executable code. Labeling a traditional code library with this attribute will break it, because no other code will be able to find the library's entry points at runtime. So, what I did was mark what had been the main assembly as a satellite assembly for the en-US culture, which made it impossible to find its entry point. To set the neutral culture for the assembly resources I should have used (and eventually did use) the NeutralResourcesLanguageAttribute. According to its documentation: The NeutralResourcesLanguageAttribute attribute informs the ResourceManager of the application's default culture, and also informs the ResourceManager that the default culture's resources are found in the main application assembly. When looking up resources in the same culture as the default culture, the ResourceManager automatically uses the resources located in the main assembly instead of searching for a satellite assembly. This improves lookup performance for the first resource you load, and can reduce your working set.
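    A minimal sketch of the two AssemblyInfo.cs variants described above; the culture value is illustrative, but the attributes themselves (AssemblyCulture and NeutralResourcesLanguage) are the ones discussed in the post.

      using System.Reflection;
      using System.Resources;

      // Wrong for a main (executable) assembly: this marks it as an en-US
      // satellite assembly, so its entry point can no longer be found.
      // [assembly: AssemblyCulture("en-US")]

      // Correct: a main assembly keeps an empty culture string...
      [assembly: AssemblyCulture("")]

      // ...and declares the language of its embedded (neutral) resources here,
      // so the ResourceManager skips the satellite lookup for en-US.
      [assembly: NeutralResourcesLanguage("en-US")]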

    Read the article

  • Regular Expressions. Remember it, write it, test it.

    - by outcoldman
    I should say that I’m a fan of regular expressions. Whenever I see a problem I can solve with a regex, I feel a burning desire to do it and to write a new test for the new regex. Previously I had installed SharpDevelop Studio just for the good regular expression tool in it (why doesn’t VS have one?). But now I’m a little wiser, and for each regex I write a separate test. I find it difficult to remember the syntax of regular expressions (I don’t write them very often); I always forget which character is responsible for the beginning of the line, etc. So I use small and easy external articles like this “Regular expressions - An introduction”. Now I want to show you a few small samples of regular expressions and how to test them. Read more... (redirect to http://outcoldman.ru)
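    In the spirit of the post, here is a tiny hedged example (my own, not from the linked article) of pairing a regular expression with a test, so the syntax only has to be re-run, not remembered:

      using System.Text.RegularExpressions;
      using Microsoft.VisualStudio.TestTools.UnitTesting;

      [TestClass]
      public class ZipCodeRegexTests
      {
          // ^ anchors the start of the string, $ the end; \d{5} means exactly five digits.
          private static readonly Regex UsZipCode = new Regex(@"^\d{5}(-\d{4})?$");

          [TestMethod]
          public void Matches_five_digit_and_zip_plus_four_forms()
          {
              Assert.IsTrue(UsZipCode.IsMatch("98052"));
              Assert.IsTrue(UsZipCode.IsMatch("98052-6399"));
          }

          [TestMethod]
          public void Rejects_malformed_values()
          {
              Assert.IsFalse(UsZipCode.IsMatch("9805"));
              Assert.IsFalse(UsZipCode.IsMatch("98052-"));
              Assert.IsFalse(UsZipCode.IsMatch("abc12"));
          }
      }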

    Read the article

  • Web Development Trends: Mobile First, Data-Oriented Development, and Single Page Applications

    - by dwahlin
    I recently had the opportunity to give a keynote talk at an Intel conference about key trends in the world of Web development that I feel teams should be taking into account with projects. It was a lot of fun and I had the opportunity to talk with a lot of different people about projects they’re working on. There are a million things that could be covered for this type of talk (HTML5 anyone?) but I only had 60 minutes and couldn’t possibly cover them all so I decided to focus on 3 key areas: mobile, data-oriented development, and SPAs. The talk was geared toward introducing people (many who weren’t Web developers) to topics such as mobile first development (demos showed a few tools to help here), responsive design techniques, data binding techniques that can simplify code, and Single Page Application (SPA) benefits. Links to code demos shown during the presentation can be found at the end of the slide deck. Web Development Trends - What's New in the World of Web Development by Dan Wahlin

    Read the article

  • Using Amazon S3/Cloudfront and Encoding.com to deliver web video – step by step for iPhone/iPod/iPad

    - by joelvarty
    The Amazon AWS newsletter for May 2010 had a great link to this article by encoding.com on how you can use their service to encode your video for multi-format, multi-bandwidth streaming to many devices, including iPhone, iPad, and Flash with H.264. It looks like it doesn’t actually take advantage of CloudFront streaming, but merely splits your encoded files into the available chunks and includes all of the M3U8 files that point to the different bitrates and such. This looks like a pretty sweet service in general, especially since they seem to have an API as well, so it may be very useful to those of you out there looking to host video. more later – joel

    Read the article

  • Getting WCF Bindings and Behaviors from any config source

    - by cibrax
    The need to load WCF bindings or behaviors from different sources, such as files on disk or databases, is a common requirement when dealing with configuration either on the client side or the service side. The traditional way to accomplish this in WCF is to load everything from the standard configuration section (the serviceModel section) or to create all the bindings and behaviors by hand in code. However, there is a middle-ground solution that comes in handy when more flexibility is needed. This solution involves getting the configuration from any place, and using that configuration to automatically configure any existing binding or behavior instance created in code.  In order to configure a binding instance (System.ServiceModel.Channels.Binding) that you later inject into any endpoint on the client channel or the service host, you first need to get a binding configuration section from a configuration file (you can generate a temp file on the fly if you are using any other source for storing the configuration).  private BindingsSection GetBindingsSection(string path) { System.Configuration.Configuration config = System.Configuration.ConfigurationManager.OpenMappedExeConfiguration( new System.Configuration.ExeConfigurationFileMap() { ExeConfigFilename = path }, System.Configuration.ConfigurationUserLevel.None); var serviceModel = ServiceModelSectionGroup.GetSectionGroup(config); return serviceModel.Bindings; }   The BindingsSection contains a list of all the configured bindings in the serviceModel configuration section, so you can iterate through all the configured bindings to get the one you need (you don’t need a complete serviceModel section; a section with only the bindings works).  public Binding ResolveBinding(string name) { BindingsSection section = GetBindingsSection(path); foreach (var bindingCollection in section.BindingCollections) { if (bindingCollection.ConfiguredBindings.Count > 0 && bindingCollection.ConfiguredBindings[0].Name == name) { var bindingElement = bindingCollection.ConfiguredBindings[0]; var binding = (Binding)Activator.CreateInstance(bindingCollection.BindingType); binding.Name = bindingElement.Name; bindingElement.ApplyConfiguration(binding); return binding; } } return null; }   The code above does just that, and also instantiates and configures the Binding object (System.ServiceModel.Channels.Binding) you are looking for. As you can see, the binding configuration element contains a method “ApplyConfiguration” that receives the binding instance that needs to be configured. A similar thing can be done, for instance, with the endpoint behaviors: you first get the BehaviorsSection and then the behavior you want to use.
private BehaviorsSection GetBehaviorsSection(string path) { System.Configuration.Configuration config = System.Configuration.ConfigurationManager.OpenMappedExeConfiguration( new System.Configuration.ExeConfigurationFileMap() { ExeConfigFilename = path }, System.Configuration.ConfigurationUserLevel.None); var serviceModel = ServiceModelSectionGroup.GetSectionGroup(config); return serviceModel.Behaviors; }public List<IEndpointBehavior> ResolveEndpointBehavior(string name) { BehaviorsSection section = GetBehaviorsSection(path); List<IEndpointBehavior> endpointBehaviors = new List<IEndpointBehavior>(); if (section.EndpointBehaviors.Count > 0 && section.EndpointBehaviors[0].Name == name) { var behaviorCollectionElement = section.EndpointBehaviors[0]; foreach (BehaviorExtensionElement behaviorExtension in behaviorCollectionElement) { object extension = behaviorExtension.GetType().InvokeMember("CreateBehavior", BindingFlags.InvokeMethod | BindingFlags.NonPublic | BindingFlags.Instance, null, behaviorExtension, null); endpointBehaviors.Add((IEndpointBehavior)extension); } return endpointBehaviors; } return null; }   In this case, the code for creating the behavior instance is more tricky. First of all, a behavior in the configuration section actually represents a set of “IEndpoint” behaviors, and the behavior element you get from the configuration does not have any public method to configure an existing behavior instance. This last one only contains a protected method “CreateBehavior” that you can use for that purpose. Once you get this code implemented, a client channel can be easily configured as follows  var binding = resolver.ResolveBinding("MyBinding"); var behaviors = resolver.ResolveEndpointBehavior("MyBehavior"); SampleServiceClient client = new SampleServiceClient(binding, new EndpointAddress(new Uri("http://localhost:13749/SampleService.svc"), new DnsEndpointIdentity("localhost"))); foreach (var behavior in behaviors) { if(client.Endpoint.Behaviors.Contains(behavior.GetType())) { client.Endpoint.Behaviors.Remove(behavior.GetType()); } client.Endpoint.Behaviors.Add(behavior); }   The code above assumes that a configuration file (in any place) with a binding “MyBinding” and a behavior “MyBehavior” exists. That file can look like this,  <system.serviceModel> <bindings> <basicHttpBinding> <binding name="MyBinding"> <security mode="Transport"></security> </binding> </basicHttpBinding> </bindings> <behaviors> <endpointBehaviors> <behavior name="MyBehavior"> <clientCredentials> <windows/> </clientCredentials> </behavior> </endpointBehaviors> </behaviors> </system.serviceModel>   The same thing can be done of course in the service host if you want to manually configure the bindings and behaviors.  
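    The last sentence above mentions doing the same thing on the service side; a brief hedged sketch of applying a resolved binding to a self-hosted ServiceHost could look like the following (the SampleService contract, implementation and addresses are placeholders, not part of the original post):

      using System;
      using System.ServiceModel;
      using System.ServiceModel.Channels;

      [ServiceContract]
      public interface ISampleService
      {
          [OperationContract]
          string Ping();
      }

      public class SampleService : ISampleService
      {
          public string Ping() { return "pong"; }
      }

      public static class HostConfigurationExample
      {
          // resolvedBinding is the Binding returned by ResolveBinding("MyBinding") above.
          public static void Run(Binding resolvedBinding)
          {
              var host = new ServiceHost(typeof(SampleService),
                  new Uri("http://localhost:13749/"));

              // Attach an endpoint using the binding resolved from the external config source.
              host.AddServiceEndpoint(typeof(ISampleService), resolvedBinding, "SampleService.svc");

              host.Open();
              Console.WriteLine("Host running. Press Enter to stop.");
              Console.ReadLine();
              host.Close();
          }
      }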

    Read the article

  • WCF RIA Services DomainContext Abstraction Strategies–Say That 10 Times!

    - by dwahlin
    The DomainContext available with WCF RIA Services provides a lot of functionality that can help track object state and handle making calls from a Silverlight client to a DomainService. One of the questions I get quite often in our Silverlight training classes (and see often in various forums and other areas) is how the DomainContext can be abstracted out of ViewModel classes when using the MVVM pattern in Silverlight applications. It’s not something that’s super obvious at first especially if you don’t work with delegates a lot, but it can definitely be done. There are various techniques and strategies that can be used but I thought I’d share some of the core techniques I find useful. To start, let’s assume you have the following ViewModel class (this is from my Silverlight Firestarter talk available to watch online here if you’re interested in getting started with WCF RIA Services): public class AdminViewModel : ViewModelBase { BookClubContext _Context = new BookClubContext(); public AdminViewModel() { if (!DesignerProperties.IsInDesignTool) { LoadBooks(); } } private void LoadBooks() { _Context.Load(_Context.GetBooksQuery(), LoadBooksCallback, null); } private void LoadBooksCallback(LoadOperation<Book> books) { Books = new ObservableCollection<Book>(books.Entities); } } Notice that BookClubContext is being used directly in the ViewModel class. There’s nothing wrong with that of course, but if other ViewModel objects need to load books then code would be duplicated across classes. Plus, the ViewModel has direct knowledge of how to load data and I like to make it more loosely-coupled. To do this I create what I call a “Service Agent” class. This class is responsible for getting data from the DomainService and returning it to a ViewModel. It only knows how to get and return data but doesn’t know how data should be stored and isn’t used with data binding operations. An example of a simple ServiceAgent class is shown next. Notice that I’m using the Action<T> delegate to handle callbacks from the ServiceAgent to the ViewModel object. Because LoadBooks accepts an Action<ObservableCollection<Book>>, the callback method in the ViewModel must accept ObservableCollection<Book> as a parameter. The callback is initiated by calling the Invoke method exposed by Action<T>: public class ServiceAgent { BookClubContext _Context = new BookClubContext(); public void LoadBooks(Action<ObservableCollection<Book>> callback) { _Context.Load(_Context.GetBooksQuery(), LoadBooksCallback, callback); } public void LoadBooksCallback(LoadOperation<Book> lo) { //Check for errors of course...keeping this brief var books = new ObservableCollection<Book>(lo.Entities); var action = (Action<ObservableCollection<Book>>)lo.UserState; action.Invoke(books); } } This can be simplified by taking advantage of lambda expressions. Notice that in the following code I don’t have a separate callback method and don’t have to worry about passing any user state or casting any user state (the user state is the 3rd parameter in the _Context.Load method call shown above). 
public class ServiceAgent { BookClubContext _Context = new BookClubContext(); public void LoadBooks(Action<ObservableCollection<Book>> callback) { _Context.Load(_Context.GetBooksQuery(), (lo) => { var books = new ObservableCollection<Book>(lo.Entities); callback.Invoke(books); }, null); } } A ViewModel class can then call into the ServiceAgent to retrieve books yet never know anything about the DomainContext object or even know how data is loaded behind the scenes: public class AdminViewModel : ViewModelBase { ServiceAgent _ServiceAgent = new ServiceAgent(); public AdminViewModel() { if (!DesignerProperties.IsInDesignTool) { LoadBooks(); } } private void LoadBooks() { _ServiceAgent.LoadBooks(LoadBooksCallback); } private void LoadBooksCallback(ObservableCollection<Book> books) { Books = books } } You could also handle the LoadBooksCallback method using a lambda if you wanted to minimize code just like I did earlier with the LoadBooks method in the ServiceAgent class.  If you’re into Dependency Injection (DI), you could create an interface for the ServiceAgent type, reference it in the ViewModel and then inject in the object to use at runtime. There are certainly other techniques and strategies that can be used, but the code shown here provides an introductory look at the topic that should help get you started abstracting the DomainContext out of your ViewModel classes when using WCF RIA Services in Silverlight applications.
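    Following the Dependency Injection suggestion at the end of the post, here is a brief hedged sketch (the IServiceAgent interface and constructor shape are my own; Book and ViewModelBase come from the post’s code) of what an interface-based ServiceAgent and a constructor-injected ViewModel could look like:

      using System;
      using System.Collections.ObjectModel;

      // The abstraction the ViewModel depends on; the concrete ServiceAgent
      // from the post would implement this interface.
      public interface IServiceAgent
      {
          void LoadBooks(Action<ObservableCollection<Book>> callback);
      }

      public class AdminViewModel : ViewModelBase
      {
          private readonly IServiceAgent _serviceAgent;

          // The agent is injected (by an IoC container or a factory), so the
          // ViewModel never touches the DomainContext and is easy to fake in tests.
          public AdminViewModel(IServiceAgent serviceAgent)
          {
              _serviceAgent = serviceAgent;
              _serviceAgent.LoadBooks(books => Books = books);
          }

          public ObservableCollection<Book> Books { get; set; }
      }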

    Read the article

  • Syncing Data with a Server using Silverlight and HTTP Polling Duplex

    - by dwahlin
    Many applications have the need to stay in-sync with data provided by a service. Although web applications typically rely on standard polling techniques to check if data has changed, Silverlight provides several interesting options for keeping an application in-sync that rely on server “push” technologies. A few years back I wrote several blog posts covering different “push” technologies available in Silverlight that rely on sockets or HTTP Polling Duplex. We recently had a project that looked like it could benefit from pushing data from a server to one or more clients so I thought I’d revisit the subject and provide some updates to the original code posted. If you’ve worked with AJAX before in Web applications then you know that until browsers fully support web sockets or other duplex (bi-directional communication) technologies that it’s difficult to keep applications in-sync with a server without relying on polling. The problem with polling is that you have to check for changes on the server on a timed-basis which can often be wasteful and take up unnecessary resources. With server “push” technologies, data can be pushed from the server to the client as it changes. Once the data is received, the client can update the user interface as appropriate. Using “push” technologies allows the client to listen for changes from the data but stay 100% focused on client activities as opposed to worrying about polling and asking the server if anything has changed. Silverlight provides several options for pushing data from a server to a client including sockets, TCP bindings and HTTP Polling Duplex.  Each has its own strengths and weaknesses as far as performance and setup work with HTTP Polling Duplex arguably being the easiest to setup and get going.  In this article I’ll demonstrate how HTTP Polling Duplex can be used in Silverlight 4 applications to push data and show how you can create a WCF server that provides an HTTP Polling Duplex binding that a Silverlight client can consume.   What is HTTP Polling Duplex? Technologies that allow data to be pushed from a server to a client rely on duplex functionality. Duplex (or bi-directional) communication allows data to be passed in both directions.  A client can call a service and the server can call the client. HTTP Polling Duplex (as its name implies) allows a server to communicate with a client without forcing the client to constantly poll the server. It has the benefit of being able to run on port 80 making setup a breeze compared to the other options which require specific ports to be used and cross-domain policy files to be exposed on port 943 (as with sockets and TCP bindings). Having said that, if you’re looking for the best speed possible then sockets and TCP bindings are the way to go. But, they’re not the only game in town when it comes to duplex communication. The first time I heard about HTTP Polling Duplex (initially available in Silverlight 2) I wasn’t exactly sure how it was any better than standard polling used in AJAX applications. I read the Silverlight SDK, looked at various resources and generally found the following definition unhelpful as far as understanding the actual benefits that HTTP Polling Duplex provided: "The Silverlight client periodically polls the service on the network layer, and checks for any new messages that the service wants to send on the callback channel. The service queues all messages sent on the client callback channel and delivers them to the client when the client polls the service." 
Although the previous definition explained the overall process, it sounded as if standard polling was used. Fortunately, Microsoft’s Scott Guthrie provided me with a more clear definition several years back that explains the benefits provided by HTTP Polling Duplex quite well (used with his permission): "The [HTTP Polling Duplex] duplex support does use polling in the background to implement notifications – although the way it does it is different than manual polling. It initiates a network request, and then the request is effectively “put to sleep” waiting for the server to respond (it doesn’t come back immediately). The server then keeps the connection open but not active until it has something to send back (or the connection times out after 90 seconds – at which point the duplex client will connect again and wait). This way you are avoiding hitting the server repeatedly – but still get an immediate response when there is data to send." After hearing Scott’s definition the light bulb went on and it all made sense. A client makes a request to a server to check for changes, but instead of the request returning immediately, it parks itself on the server and waits for data. It’s kind of like waiting to pick up a pizza at the store. Instead of calling the store over and over to check the status, you sit in the store and wait until the pizza (the request data) is ready. Once it’s ready you take it back home (to the client). This technique provides a lot of efficiency gains over standard polling techniques even though it does use some polling of its own as a request is initially made from a client to a server. So how do you implement HTTP Polling Duplex in your Silverlight applications? Let’s take a look at the process by starting with the server. Creating an HTTP Polling Duplex WCF Service Creating a WCF service that exposes an HTTP Polling Duplex binding is straightforward as far as coding goes. Add some one way operations into an interface, create a client callback interface and you’re ready to go. The most challenging part comes into play when configuring the service to properly support the necessary binding and that’s more of a cut and paste operation once you know the configuration code to use. To create an HTTP Polling Duplex service you’ll need to expose server-side and client-side interfaces and reference the System.ServiceModel.PollingDuplex assembly (located at C:\Program Files (x86)\Microsoft SDKs\Silverlight\v4.0\Libraries\Server on my machine) in the server project. For the demo application I upgraded a basketball simulation service to support the latest polling duplex assemblies. The service simulates a simple basketball game using a Game class and pushes information about the game such as score, fouls, shots and more to the client as the game changes over time. Before jumping too far into the game push service, it’s important to discuss two interfaces used by the service to communicate in a bi-directional manner. The first is called IGameStreamService and defines the methods/operations that the client can call on the server (see Listing 1). The second is IGameStreamClient which defines the callback methods that a server can use to communicate with a client (see Listing 2).   [ServiceContract(Namespace = "Silverlight", CallbackContract = typeof(IGameStreamClient))] public interface IGameStreamService { [OperationContract(IsOneWay = true)] void GetTeamData(); } Listing 1. The IGameStreamService interface defines server operations that can be called on the server.   
[ServiceContract] public interface IGameStreamClient { [OperationContract(IsOneWay = true)] void ReceiveTeamData(List<Team> teamData); [OperationContract(IsOneWay = true, AsyncPattern=true)] IAsyncResult BeginReceiveGameData(GameData gameData, AsyncCallback callback, object state); void EndReceiveGameData(IAsyncResult result); } Listing 2. The IGameStreamClient interfaces defines client operations that a server can call.   The IGameStreamService interface is decorated with the standard ServiceContract attribute but also contains a value for the CallbackContract property.  This property is used to define the interface that the client will expose (IGameStreamClient in this example) and use to receive data pushed from the service. Notice that each OperationContract attribute in both interfaces sets the IsOneWay property to true. This means that the operation can be called and passed data as appropriate, however, no data will be passed back. Instead, data will be pushed back to the client as it’s available.  Looking through the IGameStreamService interface you can see that the client can request team data whereas the IGameStreamClient interface allows team and game data to be received by the client. One interesting point about the IGameStreamClient interface is the inclusion of the AsyncPattern property on the BeginReceiveGameData operation. I initially created this operation as a standard one way operation and it worked most of the time. However, as I disconnected clients and reconnected new ones game data wasn’t being passed properly. After researching the problem more I realized that because the service could take up to 7 seconds to return game data, things were getting hung up. By setting the AsyncPattern property to true on the BeginReceivedGameData operation and providing a corresponding EndReceiveGameData operation I was able to get around this problem and get everything running properly. I’ll provide more details on the implementation of these two methods later in this post. Once the interfaces were created I moved on to the game service class. The first order of business was to create a class that implemented the IGameStreamService interface. Since the service can be used by multiple clients wanting game data I added the ServiceBehavior attribute to the class definition so that I could set its InstanceContextMode to InstanceContextMode.Single (in effect creating a Singleton service object). Listing 3 shows the game service class as well as its fields and constructor.   [ServiceBehavior(ConcurrencyMode = ConcurrencyMode.Multiple, InstanceContextMode = InstanceContextMode.Single)] public class GameStreamService : IGameStreamService { object _Key = new object(); Game _Game = null; Timer _Timer = null; Random _Random = null; Dictionary<string, IGameStreamClient> _ClientCallbacks = new Dictionary<string, IGameStreamClient>(); static AsyncCallback _ReceiveGameDataCompleted = new AsyncCallback(ReceiveGameDataCompleted); public GameStreamService() { _Game = new Game(); _Timer = new Timer { Enabled = false, Interval = 2000, AutoReset = true }; _Timer.Elapsed += new ElapsedEventHandler(_Timer_Elapsed); _Timer.Start(); _Random = new Random(); }} Listing 3. The GameStreamService implements the IGameStreamService interface which defines a callback contract that allows the service class to push data back to the client. 
By implementing the IGameStreamService interface, GameStreamService must supply a GetTeamData() method which is responsible for supplying information about the teams that are playing as well as individual players.  GetTeamData() also acts as a client subscription method that tracks clients wanting to receive game data.  Listing 4 shows the GetTeamData() method. public void GetTeamData() { //Get client callback channel var context = OperationContext.Current; var sessionID = context.SessionId; var currClient = context.GetCallbackChannel<IGameStreamClient>(); context.Channel.Faulted += Disconnect; context.Channel.Closed += Disconnect; IGameStreamClient client; if (!_ClientCallbacks.TryGetValue(sessionID, out client)) { lock (_Key) { _ClientCallbacks[sessionID] = currClient; } } currClient.ReceiveTeamData(_Game.GetTeamData()); //Start timer which when fired sends updated score information to client if (!_Timer.Enabled) { _Timer.Enabled = true; } } Listing 4. The GetTeamData() method subscribes a given client to the game service and returns. The key the line of code in the GetTeamData() method is the call to GetCallbackChannel<IGameStreamClient>().  This method is responsible for accessing the calling client’s callback channel. The callback channel is defined by the IGameStreamClient interface shown earlier in Listing 2 and used by the server to communicate with the client. Before passing team data back to the client, GetTeamData() grabs the client’s session ID and checks if it already exists in the _ClientCallbacks dictionary object used to track clients wanting callbacks from the server. If the client doesn’t exist it adds it into the collection. It then pushes team data from the Game class back to the client by calling ReceiveTeamData().  Since the service simulates a basketball game, a timer is then started if it’s not already enabled which is then used to randomly send data to the client. When the timer fires, game data is pushed down to the client. Listing 5 shows the _Timer_Elapsed() method that is called when the timer fires as well as the SendGameData() method used to send data to the client. void _Timer_Elapsed(object sender, ElapsedEventArgs e) { int interval = _Random.Next(3000, 7000); lock (_Key) { _Timer.Interval = interval; _Timer.Enabled = false; } SendGameData(_Game.GetGameData()); } private void SendGameData(GameData gameData) { var cbs = _ClientCallbacks.Where(cb => ((IContextChannel)cb.Value).State == CommunicationState.Opened); for (int i = 0; i < cbs.Count(); i++) { var cb = cbs.ElementAt(i).Value; try { cb.BeginReceiveGameData(gameData, _ReceiveGameDataCompleted, cb); } catch (TimeoutException texp) { //Log timeout error } catch (CommunicationException cexp) { //Log communication error } } lock (_Key) _Timer.Enabled = true; } private static void ReceiveGameDataCompleted(IAsyncResult result) { try { ((IGameStreamClient)(result.AsyncState)).EndReceiveGameData(result); } catch (CommunicationException) { // empty } catch (TimeoutException) { // empty } } LIsting 5. _Timer_Elapsed is used to simulate time in a basketball game. When _Timer_Elapsed() fires the SendGameData() method is called which iterates through the clients wanting to be notified of changes. As each client is identified, their respective BeginReceiveGameData() method is called which ultimately pushes game data down to the client. Recall that this method was defined in the client callback interface named IGameStreamClient shown earlier in Listing 2. 
Notice that BeginReceiveGameData() accepts _ReceiveGameDataCompleted as its second parameter (an AsyncCallback delegate defined in the service class) and passes the client callback as the third parameter. The initial version of the sample application had a standard ReceiveGameData() method in the client callback interface. However, sometimes the client callbacks would work properly and sometimes they wouldn't, which was a little baffling at first glance. After some investigation I realized that I needed to implement an asynchronous pattern for the client callbacks to work properly, since delays of 3 to 7 seconds occur as a result of the timer. Once I added the BeginReceiveGameData() and ReceiveGameDataCompleted() methods everything worked properly since each call was handled in an asynchronous manner.

The final task that had to be completed to get the server working properly with HTTP Polling Duplex was adding configuration code to web.config. In the interest of brevity I won't post all of the code here since the sample application includes everything you need. However, Listing 6 shows the key configuration code that creates a custom binding named pollingDuplexBinding and associates it with the service's endpoint.

<bindings>
  <customBinding>
    <binding name="pollingDuplexBinding">
      <binaryMessageEncoding />
      <pollingDuplex maxPendingSessions="2147483647"
                     maxPendingMessagesPerSession="2147483647"
                     inactivityTimeout="02:00:00"
                     serverPollTimeout="00:05:00"/>
      <httpTransport />
    </binding>
  </customBinding>
</bindings>
<services>
  <service name="GameService.GameStreamService"
           behaviorConfiguration="GameStreamServiceBehavior">
    <endpoint address=""
              binding="customBinding"
              bindingConfiguration="pollingDuplexBinding"
              contract="GameService.IGameStreamService"/>
    <endpoint address="mex" binding="mexHttpBinding" contract="IMetadataExchange" />
  </service>
</services>

Listing 6. Configuring an HTTP Polling Duplex binding in web.config and associating an endpoint with it.

Calling the Service and Receiving "Pushed" Data

Calling the service and handling data that is pushed from the server is a simple and straightforward process in Silverlight. Since the service is configured with a MEX endpoint and exposes a WSDL file, you can right-click on the Silverlight project and select the standard Add Service Reference item. After the web service proxy is created you may notice that the ServiceReferences.ClientConfig file contains only an empty configuration element instead of the configuration elements normally generated for a standard WCF proxy. You can certainly update the file if you want to read from it at runtime, but for the sample application I fed the service URI directly to the service proxy as shown next:

var address = new EndpointAddress("http://localhost.:5661/GameStreamService.svc");
var binding = new PollingDuplexHttpBinding();
_Proxy = new GameStreamServiceClient(binding, address);
_Proxy.ReceiveTeamDataReceived += _Proxy_ReceiveTeamDataReceived;
_Proxy.ReceiveGameDataReceived += _Proxy_ReceiveGameDataReceived;
_Proxy.GetTeamDataAsync();

This code creates the proxy and passes the endpoint address and the binding to use to its constructor. It then wires the different receive events to callback methods and calls GetTeamDataAsync(). Calling GetTeamDataAsync() causes the server to store the client in the server-side dictionary collection mentioned earlier so that it can receive data that is pushed.
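The _Proxy_ReceiveTeamDataReceived() handler wired above isn't shown in this post. A sketch of what it might look like follows; it assumes the generated event args expose the pushed list as e.teamData (mirroring e.gameData in Listing 8) and that Team has a Name property. Both are assumptions, and the handler in the sample download may differ.

//Hypothetical handler for team data pushed from the server after GetTeamDataAsync() is called.
//The teamData property name and the Team.Name property are assumptions.
void _Proxy_ReceiveTeamDataReceived(object sender, ReceiveTeamDataReceivedEventArgs e)
{
    var teams = e.teamData;
    if (teams == null || teams.Count < 2)
        return;

    //Show the two team names in the scoreboard TextBlocks
    this.tbTeam1.Text = teams[0].Name;
    this.tbTeam2.Text = teams[1].Name;
}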
As the server-side timer fires and game data is pushed to the client, the user interface is updated as shown in Listing 7. Listing 8 shows the _Proxy_ReceiveGameDataReceived() method responsible for handling the data and calling UpdateGameData() to process it.

Listing 7. The Silverlight interface. Game data is pushed from the server to the client using HTTP Polling Duplex.

void _Proxy_ReceiveGameDataReceived(object sender, ReceiveGameDataReceivedEventArgs e)
{
    UpdateGameData(e.gameData);
}

private void UpdateGameData(GameData gameData)
{
    //Update Score
    this.tbTeam1Score.Text = gameData.Team1Score.ToString();
    this.tbTeam2Score.Text = gameData.Team2Score.ToString();

    //Update ball visibility
    if (gameData.Action != ActionsEnum.Foul)
    {
        if (tbTeam1.Text == gameData.TeamOnOffense)
        {
            AnimateBall(this.BB1, this.BB2);
        }
        else //Team 2
        {
            AnimateBall(this.BB2, this.BB1);
        }
    }

    if (this.lbActions.Items.Count > 9)
        this.lbActions.Items.Clear();

    this.lbActions.Items.Add(gameData.LastAction);

    if (this.lbActions.Visibility == Visibility.Collapsed)
        this.lbActions.Visibility = Visibility.Visible;
}

private void AnimateBall(Image onBall, Image offBall)
{
    this.FadeIn.Stop();
    Storyboard.SetTarget(this.FadeInAnimation, onBall);
    Storyboard.SetTarget(this.FadeOutAnimation, offBall);
    this.FadeIn.Begin();
}

Listing 8. As the server pushes game data, the client's _Proxy_ReceiveGameDataReceived() method is called to process the data.

In a real-life application I'd go with a ViewModel class to handle retrieving team data, set up data bindings, and handle data that is pushed from the server. However, for the sample application I wanted to focus on HTTP Polling Duplex and keep things as simple as possible.

Summary

Silverlight supports three options when duplex communication is required in an application: TCP bindings, sockets, and HTTP Polling Duplex. In this post you've seen how HTTP Polling Duplex interfaces can be created and implemented on the server as well as how they can be consumed by a Silverlight client. HTTP Polling Duplex provides a nice way to "push" data from a server while still allowing the data to flow over port 80 or another port of your choice.

Sample Application Download

    Read the article

  • What is ADO?

    - by Aamir Hasan
    What is ADO?

    - ADO is a Microsoft technology
    - ADO stands for ActiveX Data Objects
    - ADO is a Microsoft ActiveX component
    - ADO is automatically installed with Microsoft IIS
    - ADO is a programming interface to access data in a database

    Accessing a Database from an ASP Page

    The common way to access a database from inside an ASP page is to:

    1. Create an ADO connection to a database
    2. Open the database connection
    3. Create an ADO recordset
    4. Open the recordset
    5. Extract the data you need from the recordset
    6. Close the recordset
    7. Close the connection

    Example

    <%
    set conn=Server.CreateObject("ADODB.Connection")
    conn.Provider="Microsoft.Jet.OLEDB.4.0"
    conn.Open(Server.Mappath("/db/northwind.mdb"))
    set rs = Server.CreateObject("ADODB.recordset")
    rs.Open "Select * from Customers", conn
    do until rs.EOF
        for each x in rs.Fields
           Response.Write(x.name)
           Response.Write(" = ")
           Response.Write(x.value & "<br />")
        next
        Response.Write("<br />")
        rs.MoveNext
    loop
    rs.close
    conn.close
    %>

    Read the article
