
  • MySQL 5.5 brings in new ways to authenticate users

    - by Georgi Kodinov
    Ever wanted to use your server's OS for authenticating MySQL users? Or the corporate LDAP repository? Options like these are plentiful nowadays, and providing hard-coded support for protocol X or service Y is not the best possible idea. MySQL 5.5 has taken a step in the right direction by providing an infrastructure that lets the server understand different authentication protocols through a set of simple plugins (one for the client and one for the server). So now you can easily extend MySQL to search for and authenticate users in your favorite user directory. In fact, the API supplied is so versatile that we took the opportunity to re-design the current "native" authentication mechanism into a built-in, always-on plugin!

    OK, let me give you an example. Imagine we have a bunch of users defined in your OS, e.g. a user joro with his respective password, and a MySQL instance running on the same computer. It would not be unexpected to need to let joro access and/or modify MySQL data. The first step is to define him as a MySQL user, and there's a problem right there: MySQL's CREATE USER joro@localhost IDENTIFIED BY 'joros_password' statement needs a password, and this password is in no way related to the password that joro has set up in the OS. What's worse: if joro changes his OS password, this will in no way be reflected in MySQL, so he'll need to change his MySQL password in a separate step. Not very convenient, especially when you have a lot of users. This is a laborious setup for joro's DBA as well: he'll have to disable joro's access in both MySQL and the OS should he decide that joro's out of the "nice" list.

    Now MySQL 5.5 comes to the rescue. Imagine that the smart DBA has created a MySQL server plugin that checks whether the name of the user logging in is a valid and enabled OS account and whether the password supplied to the mysql client matches the OS, and has called this plugin 'auth_os'. Now all that's left to do is to define joro as a MySQL user that will be authenticated externally. This is done by the following command: CREATE USER 'joro'@'localhost' IDENTIFIED WITH 'auth_os'; Now joro can log in to MySQL using his current OS password. Note: joro is still a valid MySQL user, so you can grant privileges to him just like you would for all other users. What's better: you can have users that authenticate using different mechanisms in the same server, so you can e.g. safely experiment with external authentication for selected users while keeping your current user base operational.

    What happens under the hood when joro logs in? The server will see from the user definition that it needs to use a non-default authentication method and will ask the client to "switch" to the appropriate client-side plugin (if, of course, the client is not already using it). If the client can't do this (e.g. because it's an old client or doesn't have the necessary plugin available), the server will reject the login. Otherwise the server lets the server-side plugin decide (while possibly talking to the client-side plugin and the OS user directory) whether this is a valid login. If it is, the login process continues as usual; if it's not, the login gets rejected. There's a lot more that MySQL 5.5 can do for you than just the simple case above.
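    To recap the story in statement form, here is a minimal sketch. The 'auth_os' plugin is the hypothetical plugin from the example above; CREATE USER ... IDENTIFIED BY/WITH and GRANT are the actual MySQL 5.5 statements:

        -- Old-style account: MySQL stores and checks the password itself,
        -- independently of joro's OS password.
        CREATE USER 'joro'@'localhost' IDENTIFIED BY 'joros_password';

        -- Plugin-based account (an alternative to the above): the hypothetical
        -- 'auth_os' server plugin validates the login against the OS instead.
        CREATE USER 'joro'@'localhost' IDENTIFIED WITH 'auth_os';

        -- Either way joro is an ordinary MySQL user, so privileges work as usual.
        GRANT SELECT, INSERT, UPDATE ON mydb.* TO 'joro'@'localhost';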
    Stay tuned for more advanced use cases like mapping groups of external users to a single MySQL user (so you won't have to have a 1-to-1 mapping between your external user directory and your MySQL user repository) or ways to control the process as a DBA; a sketch of the group-mapping idea follows after the list below. Or you can simply skip ahead and read the relevant topics from MySQL's excellent online documentation. Or take a look at the example plugins in plugin/auth. Or take a look at the test suite in mysql-test/t/plugin_auth.test.

    Changelog entry: http://dev.mysql.com/doc/refman/5.5/en/news-5-5-7.html

    Primary new sections:
    - Pluggable authentication
    - Proxy users
    - Client plugin C API functions

    Revised sections:
    - New PROXY privilege
    - New proxies_priv grant table
    - Passwords might be external
    - New external_user and proxy_user system variables
    - New --default-auth and --plugin-dir mysql options
    - New MYSQL_DEFAULT_AUTH and MYSQL_PLUGIN_DIR options for mysql_options()
    - CREATE USER has IDENTIFIED WITH clause to specify auth plugin
    - GRANT has PROXY privilege, IDENTIFIED WITH clause to specify auth plugin
    - The data structure for writing client plugins
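    And here is that group-mapping sketch. The 'auth_ldap_groups' plugin name is made up for illustration; CREATE USER ... IDENTIFIED WITH and GRANT PROXY are the real 5.5 statements (see the proxy users documentation for how a plugin names the proxied user):

        -- One externally-authenticated "gateway" account for all developers.
        CREATE USER 'external_dev'@'%' IDENTIFIED WITH 'auth_ldap_groups';

        -- The single MySQL account whose privileges the whole group shares.
        CREATE USER 'developer'@'%' IDENTIFIED BY 'not_used_for_login';
        GRANT ALL ON devdb.* TO 'developer'@'%';

        -- Let logins that authenticate as external_dev act as developer.
        GRANT PROXY ON 'developer'@'%' TO 'external_dev'@'%';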

  • Red Gate Coder interviews: Alex Davies

    - by Michael Williamson
    Alex Davies has been a software engineer at Red Gate since graduating from university, and is currently busy working on .NET Demon. We talked about tackling parallel programming with his actors framework, a scientific approach to debugging, and how JavaScript is going to affect the programming languages we use in years to come. So, if we start at the start, how did you get started in programming? When I was seven or eight, I was given a BBC Micro for Christmas. I had asked for a Game Boy, but my dad thought it would be better to give me a proper computer. For a year or so, I only played games on it, but then I found the user guide for writing programs in it. I gradually started doing more stuff on it and found it fun. I liked creating. As I went into senior school I continued to write stuff on there, trying to write games that weren’t very good. I got a real computer when I was fourteen and found ways to write BASIC on it. Visual Basic to start with, and then something more interesting than that. How did you learn to program? Was there someone helping you out? Absolutely not! I learnt out of a book, or by experimenting. I remember the first time I found a loop, I was like “Oh my God! I don’t have to write out the same line over and over and over again any more. It’s amazing!” When did you think this might be something that you actually wanted to do as a career? For a long time, I thought it wasn’t something that you would do as a career, because it was too much fun to be a career. I thought I’d do chemistry at university and some kind of career based on chemical engineering. And then I went to a careers fair at school when I was seventeen or eighteen, and it just didn’t interest me whatsoever. I thought “I could be a programmer, and there’s loads of money there, and I’m good at it, and it’s fun”, but also that I shouldn’t spoil my hobby. Now I don’t really program in my spare time any more, which is a bit of a shame, but I program all the rest of the time, so I can live with it. Do you think you learnt much about programming at university? Yes, definitely! I went into university knowing how to make computers do anything I wanted them to do. However, I didn’t have the language to talk about algorithms, so the algorithms course in my first year was massively important. Learning other language paradigms like functional programming was really good for breadth of understanding. Functional programming influences normal programming through design rather than actually using it all the time. I draw inspiration from it to write imperative programs which I think is actually becoming really fashionable now, but I’ve been doing it for ages. I did it first! There were also some courses on really odd programming languages, a bit of Prolog, a little bit of C. Having a little bit of each of those is something that I would have never done on my own, so it was important. And then there are knowledge-based courses which are about not programming itself but things that have been programmed like TCP. Those are really important for examples for how to approach things. Did you do any internships while you were at university? Yeah, I spent both of my summers at the same company. I thought I could code well before I went there. Looking back at the crap that I produced, it was only surpassed in its crappiness by all of the other code already in that company. I’m so much better at writing nice code now than I used to be back then. Was there just not a culture of looking after your code? 
    There was, they just didn't hire people for their abilities in that area. They hired people for raw IQ. The first indicator of it going wrong was that they didn't have any computer scientists, which is a bit odd in a programming company. But even beyond that they didn't have people who learnt architecture from anyone else. Most of them had started straight out of university, so never really had experience or mentors to learn from. There wasn't the experience to draw from to teach each other. In the second half of my second internship, I was being given tasks like looking at new technologies and teaching people stuff. Interns shouldn't be teaching people how to do their jobs! All interns are going to have little nuggets of things that you don't know about, but they shouldn't consistently be the ones who know the most. It's not a good environment to learn in. I was going to ask how you found working with people who were more experienced than you… When I reached Red Gate, I found some people who were more experienced programmers than me, and that was difficult. I've been coding since I was tiny. At university there were people who were cleverer than me, but there weren't very many who were more experienced programmers than me. During my internship, I didn't find anyone who I classed as being a noticeably more experienced programmer than me. So, it was a shock to the system to have valid criticisms rather than just formatting criticisms. However, Red Gate's not so big on the actual code review, at least it wasn't when I started. We did an entire product release and then somebody looked over all of the UI of that product, which I'd written, and said what they didn't like. By that point, it was way too late and I'd disagree with them. Do you think the lack of code reviews was a bad thing? I think if there's going to be any oversight of new people, then it should be continuous rather than chunky. For me I don't mind too much, I could go out and get oversight if I wanted it, and in those situations I felt comfortable without it. If I was managing the new person, then maybe I'd be keener on oversight, and then the right way to do it is continuously and in very, very small chunks. Have you had any significant projects you've worked on outside of a job? When I was a teenager I wrote all sorts of stuff. I used to write games, and I once derived how to do isometric projections myself. I didn't know what the word was so I couldn't Google for it, so I worked it out myself. It was horrifically complicated. But it sort of tailed off when I started at university, and is now basically zero. If I do side-projects now, they tend to be work-related side projects like my actors framework, NAct, which I started in a down tools week. Could you explain a little more about NAct? It is a little C# framework for writing parallel code more easily. Parallel programming is difficult when you need to write to shared data. Sometimes parallel programming is easy because you don't need to write to shared data. When you do need to access shared data, you could just have your threads pile in and do their work, but then you would screw up the data because the threads would trample on each other's toes. You could lock, but locks are really dangerous if you're using more than one of them. You get interactions like deadlocks, and that's just nasty. Actors instead allows you to say this piece of data belongs to this thread of execution, and nobody else can read it.
    If you want to read it, then ask that thread of execution for a piece of it by sending a message, and it will send the data back by a message. And that avoids deadlocks as long as you follow some obvious rules about not making your actors sit around waiting for other actors to do something. There are lots of ways to write actors, NAct allows you to do it as if it was method calls on other objects, which means you get all the strong type-safety that C# programmers like. Do you think that this is suitable for the majority of parallel programming, or do you think it's only suitable for specific cases? It's suitable for most difficult parallel programming. If you've just got a hundred web requests which are all independent of each other, then I wouldn't bother because it's easier to just spin them up in separate threads and they can proceed independently of each other. But where you've got difficult parallel programming, where you've got multiple threads accessing multiple bits of data in multiple ways at different times, then actors is at least as good as all other ways, and is, I reckon, easier to think about. When you're using actors, you presumably still have to write your code in a different way from how you would otherwise write single-threaded code. You can't use actors with any methods that have return types, because you're not allowed to call into another actor and wait for it. If you want to get a piece of data out of another actor, then you've got to use tasks so that you can use "async" and "await" to wait for it asynchronously. But other than that, you can still stick things in classes so it's not too different really. Rather than having thousands of objects with mutable state, you can use component-orientated design, where there are only a few mutable classes which each have a small number of instances. Then there can be thousands of immutable objects. If you tend to do that anyway, then actors isn't much of a jump. If I've already built my system without any parallelism, how hard is it to add actors to exploit all eight cores on my desktop? Usually pretty easy. If you can identify even one boundary where things look like messages and you have components where some objects live on one side and these other objects live on the other side, then you can have a granddaddy object on one side be an actor and it will parallelise as it goes across that boundary. Not too difficult. If we do get 1000-core desktop PCs, do you think actors will scale up? It's hard. There are always on the order of twenty to fifty actors in my whole program because I tend to write each component as actors, and I tend to have one instance of each component. So this won't scale to a thousand cores. What you can do is write data structures out of actors. I use dictionaries all over the place, and if you need a dictionary that is going to be accessed concurrently, then you could build one of those out of actors in no time. You can use queuing to marshal requests between different slices of the dictionary which are living on different threads. So it's like a distributed hash table but all of the chunks of it are on the same machine. That means that each of these thousand processors has cached one small piece of the dictionary. I reckon it wouldn't be too big a leap to start doing proper parallelism. Do you think it helps if actors get baked into the language, similarly to Erlang? Erlang is excellent in that it has thread-local garbage collection.
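    To make the pattern concrete, here is a minimal C# sketch of the message-passing idea described above. The names and the hand-rolled mailbox are hypothetical illustrations, not NAct's actual API:

        using System;
        using System.Collections.Concurrent;
        using System.Threading;
        using System.Threading.Tasks;

        // One actor owns _count; every access happens on the actor's single
        // thread, so no locks are needed on the shared state.
        class CounterActor
        {
            private readonly BlockingCollection<Action> _mailbox =
                new BlockingCollection<Action>();
            private int _count; // touched only by the mailbox thread

            public CounterActor()
            {
                var worker = new Thread(() =>
                {
                    // Process messages one at a time, in arrival order.
                    foreach (var message in _mailbox.GetConsumingEnumerable())
                        message();
                });
                worker.IsBackground = true;
                worker.Start();
            }

            // Fire-and-forget message: no return value, so the caller never blocks.
            public void Increment()
            {
                _mailbox.Add(() => _count++);
            }

            // Reads come back as a Task, matching the point above: you can't
            // call into an actor and wait synchronously for an answer.
            public Task<int> GetCountAsync()
            {
                var tcs = new TaskCompletionSource<int>();
                _mailbox.Add(() => tcs.SetResult(_count));
                return tcs.Task;
            }
        }

    A caller would write int n = await counter.GetCountAsync(); rather than blocking, which is exactly the "async"/"await" constraint just mentioned.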
    C# doesn't have thread-local garbage collection, so there's a limit to how well C# actors can possibly scale because there's a single garbage collected heap shared between all of them. When you do a global garbage collection, you've got to stop all of the actors, which is seriously expensive, whereas in Erlang garbage collections happen per-actor, so they're insanely cheap. However, Erlang deviated from all the sensible language design that people have used recently and has just come up with crazy stuff. You can definitely retrofit thread-local garbage collection to .NET, and then it's quite well-suited to support actors, even if it's not baked into the language. Speaking of language design, do you have a favourite programming language? I'll choose a language which I've never written before. I like the idea of Scala. It sounds like C#, only with some of the niggles gone. I enjoy writing static types. It means you don't have to write tests so much. When you say it doesn't have some of the niggles? C# doesn't allow the use of a property as a method group. It doesn't have Scala case classes, or sum types, where you can do a switch statement and the compiler checks that you've checked all the cases, which is really useful in functional-style programming. Pattern-matching, in other words. That's actually the major niggle. C# is pretty good, and I'm quite happy with C#. And what about going even further with the type system to remove the need for tests, to something like Haskell? Or is that a step too far? I'm quite a pragmatist, I don't think I could deal with trying to write big systems in languages with too few other users, especially when learning how to structure things. I just don't know anyone who can teach me, and the Internet won't teach me. That's the main reason I wouldn't use it. If I turned up at a company that writes big systems in Haskell, I would have no objection to that, but I wouldn't instigate it. What about things in C#? For instance, there's contracts in C#, so you can try to statically verify a bit more about your code. Do you think that's useful, or just not worthwhile? I've not really tried it. My hunch is that it needs to be built into the language and be quite mathematical for it to work in real life, and that doesn't seem to have ended up true for C# contracts. I don't think anyone who's tried them thinks they're any good. I might be wrong. On a slightly different note, how do you like to debug code? I think I'm quite an odd debugger. I use guesswork extremely rarely, especially if something seems quite difficult to debug. I've been bitten spending hours and hours on guesswork and not being scientific about debugging in the past, so now I'm scientific to a fault. What I want is to see the bug happening in the debugger, to step through the bug happening. To watch the program going from a valid state to an invalid state. When there's a bug and I can't work out why it's happening, I try to find some piece of evidence which places the bug in one section of the code. From that experiment, I binary chop on the possible causes of the bug. I suppose that means binary chopping on places in the code, or binary chopping on a stage through a processing cycle. Basically, I'm very stupid about how I debug. I won't make any guesses, I won't use any intuition, I will only identify the experiment that's going to binary chop most effectively and repeat rather than trying to guess anything. I suppose it's quite top-down. Is most of the time then spent in the debugger?
    Absolutely, if at all possible I will never debug using print statements or logs. I don't really hold much stock in outputting logs. If there's any bug which can be reproduced locally, I'd rather do it in the debugger than by outputting logs. And with SmartAssembly error reporting, there's not a lot that can't be either observed in an error report and just fixed, or reproduced locally. And in those other situations, maybe I'll use logs. But I hate using logs. You stare at the log, trying to guess what's going on, and that's exactly what I don't like doing. You have to just look at it and see whether it looks right or wrong. We've covered how you get to grips with bugs. How do you get to grips with an entire codebase? I watch it in the debugger. I find little bugs and then try to fix them, and mostly do it by watching them in the debugger and gradually getting an understanding of how the code works using my process of binary chopping. I have to do a lot of reading and watching code to choose where my slicing-in-half experiment is going to be. The last time I did it was SmartAssembly. The old code was a complete mess, but at least it did things top to bottom. There wasn't too much of those big abstractions where flow of control goes all over the place, into a base class and back again. Code's really hard to understand when that happens. So I like to choose a little bug and try to fix it, and choose a bigger bug and try to fix it. Definitely learn by doing. I want to always have an aim so that I get a little achievement after every few hours of debugging. Once I've learnt the codebase I might be able to fix all the bugs in an hour, but I'd rather be using them as an aim while I'm learning the codebase. If I was a maintainer of a codebase, what should I do to make it as easy as possible for you to understand? Keep distinct concepts in different places. And name your stuff so that it's obvious which concepts live there. You shouldn't have some variable that gets set miles up the top of somewhere, and then is read miles down to choose some later behaviour. I'm talking from a very much SmartAssembly point of view because the old SmartAssembly codebase had tons and tons of these things, where it would read some property of the code and then deal with it later. Just thousands of variables in scope. Loads of things to think about. If you can keep concepts separate, then it aids me in my process of fixing bugs one at a time, because each bug is going to more or less be understandable in the one place where it is. And what about tests? Do you think they help at all? I've never had the opportunity to learn a codebase which has had tests, so I don't know what it's like! What about when you're actually developing? How useful do you find tests in finding bugs or regressions? Finding regressions, absolutely. Running bits of code that would be quite hard to run otherwise, definitely. It doesn't happen very often that a test finds a bug in the first place. I don't really buy nebulous promises like tests being a good way to think about the spec of the code. My thinking goes something like “This code works at the moment, great, ship it! Ah, there's a way that this code doesn't work.
Okay, write a test, demonstrate that it doesn’t work, fix it, use the test to demonstrate that it’s now fixed, and keep the test for future regressions.” The most valuable tests are for bugs that have actually happened at some point, because bugs that have actually happened at some point, despite the fact that you think you’ve fixed them, are way more likely to appear again than new bugs are. Does that mean that when you write your code the first time, there are no tests? Often. The chance of there being a bug in a new feature is relatively unaffected by whether I’ve written a test for that new feature because I’m not good enough at writing tests to think of bugs that I would have written into the code. So not writing regression tests for all of your code hasn’t affected you too badly? There are different kinds of features. Some of them just always work, and are just not flaky, they just continue working whatever you throw at them. Maybe because the type-checker is particularly effective around them. Writing tests for those features which just tend to always work is a waste of time. And because it’s a waste of time I’ll tend to wait until a feature has demonstrated its flakiness by having bugs in it before I start trying to test it. You can get a feel for whether it’s going to be flaky code as you’re writing it. I try to write it to make it not flaky, but there are some things that are just inherently flaky. And very occasionally, I’ll think “this is going to be flaky” as I’m writing, and then maybe do a test, but not most of the time. How do you think your programming style has changed over time? I’ve got clearer about what the right way of doing things is. I used to flip-flop a lot between different ideas. Five years ago I came up with some really good ideas and some really terrible ideas. All of them seemed great when I thought of them, but they were quite diverse ideas, whereas now I have a smaller set of reliable ideas that are actually good for structuring code. So my code is probably more similar to itself than it used to be back in the day, when I was trying stuff out. I’ve got more disciplined about encapsulation, I think. There are operational things like I use actors more now than I used to, and that forces me to use immutability more than I used to. The first code that I wrote in Red Gate was the memory profiler UI, and that was an actor, I just didn’t know the name of it at the time. I don’t really use object-orientation. By object-orientation, I mean having n objects of the same type which are mutable. I want a constant number of objects that are mutable, and they should be different types. I stick stuff in dictionaries and then have one thing that owns the dictionary and puts stuff in and out of it. That’s definitely a pattern that I’ve seen recently. I think maybe I’m doing functional programming. Possibly. It’s plausible. If you had to summarise the essence of programming in a pithy sentence, how would you do it? Programming is the form of art that, without losing any of the beauty of architecture or fine art, allows you to produce things that people love and you make money from. So you think it’s an art rather than a science? It’s a little bit of engineering, a smidgeon of maths, but it’s not science. Like architecture, programming is on that boundary between art and engineering. If you want to do it really nicely, it’s mostly art. 
    You can get away with doing architecture and programming entirely by having a good engineering mind, but you're not going to produce anything nice. You're not going to have joy doing it if you're an engineering mind. Architects who are just engineering minds are not going to enjoy their job. I suppose engineering is the foundation on which you build the art. Exactly. How do you think programming is going to change over the next ten years? There will be an unfortunate shift towards dynamically-typed languages, because of JavaScript. JavaScript has an unfair advantage. JavaScript's unfair advantage will cause more people to be exposed to dynamically-typed languages, which means other dynamically-typed languages crop up and the best features go into dynamically-typed languages. Then people conflate the good features with the fact that it's dynamically-typed, and more investment goes into dynamically-typed languages. They end up better, so people use them. What about the idea of compiling other languages, possibly statically-typed, to JavaScript? It's a reasonable idea. I would like to do it, but I don't think enough people in the world are going to do it to make it pick up. The hordes of beginners are the lifeblood of a language community. They are what makes for good tools and vibrant community websites. And any particular thing which is the same as JavaScript only with extra stuff added to it, although it might be technically great, is not going to have the hordes of beginners. JavaScript is always going to be the quickest and easiest way for a beginner to start programming in the browser. And dynamically-typed languages are great for beginners. Compilers are pretty scary and beginners don't write big code. And having your errors come up in the same place, whether they're statically checkable errors or not, is quite nice for a beginner. If someone asked me to teach them some programming, I'd teach them JavaScript. If dynamically-typed languages are great for beginners, when do you think the benefits of static typing start to kick in? The value of having a statically typed program is in the tools that rely on the static types to produce a smooth IDE experience rather than actually telling me my compile errors. And only once you're an experienced enough programmer that having a really smooth IDE experience makes a blind bit of difference does static typing make a blind bit of difference. So it's not really about size of codebase. If I go and write a tiny program, I'm still going to get value out of writing it in C# using ReSharper because I'm experienced enough with C# and ReSharper to be able to write code five times faster if I have that help. Any other visions of the future? Nobody's going to use actors. Because everyone's going to be running on single-core VMs connected over network-ready protocols like JSON over HTTP. So, parallelism within one operating system is going to die. But until then, you should use actors. More Red Gate Coder interviews

  • “Being Agile” Means No Documentation, Right?

    - by jesschadwick
    Ask most software professionals what Agile is and they’ll probably start talking about flexibility and delivering what the customer wants.  Some may even mention the word “iterations”.  But inevitably, they’ll say at some point that it means less or even no documentation.  After all, doesn’t creating, updating, and circulating painstakingly comprehensive documentation that everyone and their mother have officially signed off on go against the very core of Agile?  Of course it does!  But really, they’re missing the point! Read The Agile Manifesto. (No, seriously - read it now. It’s short. I’ll wait.)  It’s essentially a list of values.  More specifically, it’s a right-side/left-side weighted list of values:  “Value this over that”. Many people seem to get the impression that this is really a “good vs. bad” list and that those values on the right side are evil and should essentially be tossed on the floor.  This leads to the conclusion that in order to be Agile we must throw away our fancy expensive tools, document as little as possible, and scoff at the idea of a project plan.  This conclusion is quite convenient because it essentially means “less work, more productivity!” (particularly in regards to the documentation and project planning).  I couldn’t disagree with this conclusion more. My interpretation of the Manifesto targets “over” as the operative word.  It’s not just a list of right vs. wrong or good vs. bad.  It’s a list of priorities.  In other words, none of the concepts on the list should be removed from your development lifecycle – they are all important… just not equally important.  This is not a unique interpretation, in fact it says so right at the end of the manifesto! So, the next time your team sits down to tackle that big new project, don’t make the first order of business to outlaw all meetings, documentation, and project plans.  Instead, collaborate with both your team and the business members involved (you do have business members sitting in the room, directly involved in the project planning, right?) and determine the bare minimum that will allow all of you to work and communicate in the best way possible.  This often means that you can pick and choose which parts of the Agile methodologies and process work for your particular project and end up with an amalgamation of Waterfall, Agile, XP, SCRUM and whatever other methodologies the members of your team have been exposed to (my favorite is “SCRUMerfall”). The biggest implication of this is that there is no one way to implement Agile.  There is no checklist with which you can tick off boxes and confidently conclude that, “Yep, we’re Agile™!”  In fact, depending on your business and the members of your team, moving to Agile full-bore may actually be ill-advised.  Such a drastic change just ends up taking everyone out of their comfort zone which they inevitably fall back into by the end of the project.  This often results in frustration to the point that Agile is abandoned altogether because “we just need to ship something!”  Needless to say, this is far more devastating to a project. Instead, I offer this approach: keep it simple and take it slow.  If your business members or customers are only involved at the beginning phases and nowhere to be seen until the project is delivered, invite them to your daily meetings; encourage them to keep up to speed on what’s going on on a daily basis and provide feedback.  
If your current process is heavy on the documentation, try to reduce it as opposed to eliminating it outright.  If you need a “TPS Change Request” signed in triplicate with a 5-day “cooling off period” before a change is implemented, try a simple bug tracking system!  Tighten the feedback loop! Finally, at the end of every “iteration” (whatever that means to you, as long as it’s relatively frequent), take as much time as you can spare (even if it’s an hour or so) and perform some kind of retrospective.  Learn from your mistakes.  Figure out what’s working for you and what’s not, then fix it.  Before you know it you’ve got a handful of iterations and/or projects under your belt and you sit down with your team to realize that, “Hey, this is working - we’re pretty Agile!”  After all, Agile is a Zen journey.  It’s a destination that you aim for, not force, and even if you never reach true “enlightenment” that doesn’t mean your team can’t be exponentially better off from merely taking the journey.

  • What's New in ASP.NET 4.0 Part Two: WebForms and Visual Studio Enhancements

    - by Rick Strahl
    In the last installment I talked about the core changes in the ASP.NET runtime that I've been taking advantage of. In this column, I'll cover the changes to the Web Forms engine and some of the cool improvements in Visual Studio that make Web and general development easier.

    WebForms

    The WebForms engine is the area that has received the most significant changes in ASP.NET 4.0. Probably the most widely anticipated features are related to managing page client ids and ViewState on WebForm pages.

    Take Control of Your ClientIDs

    Unique ClientID generation in ASP.NET has been one of the most complained about "features" in ASP.NET. Although there's a very good technical reason for these unique generated ids - they guarantee unique ids for each and every server control on a page - these unique and generated ids often get in the way of client-side JavaScript development and CSS styling, as it's often inconvenient and fragile to work with the long, generated ClientIDs. In ASP.NET 4.0 you can now specify an explicit client id mode on each control or each naming container parent control to control how client ids are generated. By default, ASP.NET generates mangled client ids for any control contained in a naming container (like a Master Page, or a User Control for example). The keys to ClientID management in ASP.NET 4.0 are the new ClientIDMode and ClientIDRowSuffix properties. ClientIDMode supports four different ClientID generation settings shown below. For the following examples, imagine that you have a Textbox control named txtName inside of a master page control container on a WebForms page.

        <%@Page Language="C#" MasterPageFile="~/Site.Master"
            CodeBehind="WebForm2.aspx.cs"
            Inherits="WebApplication1.WebForm2" %>
        <asp:Content ID="content" ContentPlaceHolderID="content"
                     runat="server"
                     ClientIDMode="Static">
            <asp:TextBox runat="server" ID="txtName" />
        </asp:Content>

    The four available ClientIDMode values are:

    AutoID
    This is the existing behavior in ASP.NET 1.x-3.x where full naming container munging takes place.

        <input name="ctl00$content$txtName" type="text"
               id="ctl00_content_txtName" />

    This should be familiar to any ASP.NET developer and results in fairly unpredictable client ids that can easily change if the containership hierarchy changes. For example, removing the master page changes the name in this case, so if you were to move a block of script code that works against the control to a non-Master page, the script code immediately breaks.

    Static
    This option is the most deterministic setting; it forces the control's ClientID to use its ID value directly. No naming container naming at all is applied and you end up with clean client ids:

        <input name="ctl00$content$txtName"
               type="text" id="txtName" />

    Note that the name property which is used for postback variables to the server is still munged, but the ClientID property is displayed simply as the ID value that you have assigned to the control. This option is what most of us want to use, but you have to be clear on that because it can potentially cause conflicts with other controls on the page. If there are several instances of the same naming container (several instances of the same user control for example) there can easily be a client id naming conflict.
    Note that if you assign Static to a data-bound control, like a list child control in templates, you do not get unique ids either, so for list controls where you rely on unique ids for child controls, you'll probably want to use Predictable rather than Static. I'll write more on this a little later when I discuss ClientIDRowSuffix.

    Predictable
    The previous two values are pretty self-explanatory. Predictable, however, requires some explanation. To me at least it's not in the least bit predictable. MSDN defines this value as follows: This algorithm is used for controls that are in data-bound controls. The ClientID value is generated by concatenating the ClientID value of the parent naming container with the ID value of the control. If the control is a data-bound control that generates multiple rows, the value of the data field specified in the ClientIDRowSuffix property is added at the end. For the GridView control, multiple data fields can be specified. If the ClientIDRowSuffix property is blank, a sequential number is added at the end instead of a data-field value. Each segment is separated by an underscore character (_). The key that makes this value a bit confusing is that it relies on the parent NamingContainer's ClientID to build its own ClientID value. This effectively means that the value is not predictable at all but rather very tightly coupled to the parent naming container's ClientIDMode setting. For my simple textbox example, if the ClientIDMode property of the parent naming container (Page in this case) is set to "Predictable" you'll get this:

        <input name="ctl00$content$txtName" type="text"
               id="content_txtName" />

    which gives an id that is based on walking up to the currently active naming container (the MasterPage content container) and starting the id formatting from there downward. Think of this as a semi-unique name that's guaranteed unique only for the naming container. If, on the other hand, the Page is set to "AutoID" you get the following with Predictable on txtName:

        <input name="ctl00$content$txtName" type="text"
               id="ctl00_content_txtName" />

    The latter is effectively the same as if you specified AutoID because it inherits the AutoID naming from the Page and Content Master Page control of the page. But again - predictable behavior always depends on the parent naming container and how it generates its id, so the id may not always be exactly the same as the AutoID generated value, because somewhere in the NamingContainer chain the ClientIDMode setting may be set to a different value. For example, if you had another naming container in the middle that was set to Static, you'd end up effectively with an id that starts with that NamingContainer's id rather than the whole ctl00_content munging. The most common use for Predictable is likely to be for data-bound controls, which results in each data bound item getting a unique ClientID. Unfortunately, even here the behavior can be very unpredictable depending on which data-bound control you use - I found significant differences in how template controls in a GridView behave from those that are used in a ListView control. For example, GridView creates clean child ClientIDs, while ListView still has a naming container in the ClientID, presumably because of the template container on which you can't set ClientIDMode. Predictable is useful, but only if all naming containers down the chain use this setting. Otherwise you're right back to the munged ids that are pretty unpredictable.
    Another property, ClientIDRowSuffix, can be used in combination with ClientIDMode of Predictable to force a suffix onto list client controls. For example:

        <asp:GridView runat="server" ID="gvItems"
                      AutoGenerateColumns="false"
                      ClientIDMode="Static"
                      ClientIDRowSuffix="Id">
            <Columns>
            <asp:TemplateField>
                <ItemTemplate>
                    <asp:Label runat="server" id="txtName"
                               Text='<%# Eval("Name") %>'
                               ClientIDMode="Predictable"/>
                </ItemTemplate>
            </asp:TemplateField>
            <asp:TemplateField>
                <ItemTemplate>
                    <asp:Label runat="server" id="txtId"
                               Text='<%# Eval("Id") %>'
                               ClientIDMode="Predictable" />
                </ItemTemplate>
            </asp:TemplateField>
            </Columns>
        </asp:GridView>

    generates client Ids inside of a column in the master page described earlier:

        <td>
            <span id="txtName_0">Rick</span>
        </td>

    where the value after the underscore is the ClientIDRowSuffix field - in this case the "Id" of the item data bound to the control. Note that all of the child controls require ClientIDMode="Predictable" in order for the ClientIDRowSuffix to be applied, and the parent GridView control needs to be set to Static, either explicitly or via naming container inheritance, to give these simple names. It's a bummer that ClientIDRowSuffix doesn't work with Static to produce this automatically. Another real problem is that other controls process the ClientIDMode differently. For example, a ListView control processes the Predictable ClientIDMode differently and produces the following with the Static ListView and Predictable child controls:

        <span id="ctrl0_txtName_0">Rick</span>

    I couldn't even figure out a way using ClientIDMode to get a simple ID that also uses a suffix, short of falling back to manually generated ids using <%= %> expressions instead. Given the inconsistencies inside of list controls, using <%= %> ids for the ListView might not be a bad idea anyway.

    Inherit
    The final setting is Inherit, which is the default for all controls except Page. This means that controls by default inherit the parent naming container's ClientIDMode setting. For more detailed information on ClientID behavior and different scenarios you can check out a blog post of mine on this subject: http://www.west-wind.com/weblog/posts/54760.aspx.

    ClientID Enhancements Summary

    The ClientIDMode property is a welcome addition to ASP.NET 4.0. To me this is probably the most useful WebForms feature, as it allows me to generate clean IDs simply by setting ClientIDMode="Static" on either the page or inside of web.config (in the Pages section), which applies the setting down to the entire page. That is my 95% scenario. For the few cases when it matters (for list controls and inside of multi-use user controls or custom server controls), I can use Predictable or even AutoID to force controls to unique names. For application-level page development, this is easy to accomplish and provides maximum usability for working with client script code against page controls.

    ViewStateMode

    Another area of large criticism for WebForms is ViewState. ViewState is used internally by ASP.NET to persist page-level changes to non-postback properties on controls as pages post back to the server.
    It's a useful mechanism that works great for the overall mechanics of WebForms, but it can also cause all sorts of overhead for page operation, as ViewState can very quickly get out of control and consume huge amounts of bandwidth in your page content. ViewState can also wreak havoc with client-side scripting applications that modify control properties that are tracked by ViewState, which can produce very unpredictable results on a Postback after client-side updates. Over the years in my own development, I've often turned off ViewState on pages to reduce overhead. Yes, you lose some functionality, but you can easily implement most of the common functionality in non-ViewState workarounds. Relying less on heavy ViewState controls and sticking with simpler controls or raw HTML constructs gets around most ViewState problems. In ASP.NET 3.x and prior, it wasn't easy to control ViewState - you could turn it on or off, and if you turned it off at the page or web.config level, you couldn't turn it back on for specific controls. In short, it was an all or nothing approach. With ASP.NET 4.0, the new ViewStateMode property gives you more control. It allows you to disable ViewState globally either on the page or web.config level and then turn it back on for specific controls that might need it. ViewStateMode only works when EnableViewState="true" on the page or web.config level (which is the default). You can then use ViewStateMode of Disabled, Enabled or Inherit to control the ViewState settings on the page. If you're shooting for minimal ViewState usage, the ideal situation is to set ViewStateMode to Disabled on the Page or web.config level and only turn it back on for particular controls:

        <%@Page Language="C#"
            CodeBehind="WebForm2.aspx.cs"
            Inherits="Westwind.WebStore.WebForm2"
            ClientIDMode="Static"
            ViewStateMode="Disabled"
            EnableViewState="true" %>

        <!-- this control has viewstate -->
        <asp:TextBox runat="server" ID="txtName" ViewStateMode="Enabled" />

        <!-- this control has no viewstate - it inherits from its parent container -->
        <asp:TextBox runat="server" ID="txtAddress" />

    Note that the EnableViewState="true" at the Page level isn't required since it's the default, but it's important that the value is true. ViewStateMode has no effect if EnableViewState="false" at the page level. The main benefit of ViewStateMode is that it allows you to more easily turn off ViewState for most of the page and enable only a few key controls that might need it. For me personally, this is a perfect combination, as most of my WebForm apps can get away without any ViewState at all. But some controls - especially third party controls - often don't work well without ViewState enabled, and now it's much easier to selectively enable controls rather than the old way, which required you to pretty much turn off ViewState for all controls that you didn't want ViewState on.

    Inline HTML Encoding

    HTML encoding is an important feature to prevent cross-site scripting attacks in data entered by users on your site. In order to make it easier to create HTML encoded content, ASP.NET 4.0 introduces a new Expression syntax using <%: %> to encode string values.
    The encoding expression syntax looks like this:

        <%: "<script type='text/javascript'>" +
            "alert('Really?');</script>" %>

    which produces properly encoded HTML:

        &lt;script type=&#39;text/javascript&#39;&gt;alert(&#39;Really?&#39;);&lt;/script&gt;

    Effectively this is a shortcut to:

        <%= HttpUtility.HtmlEncode(
            "<script type='text/javascript'>" +
            "alert('Really?');</script>") %>

    Of course the <%: %> syntax can also evaluate expressions just like <%= %>, so the more common scenario applies this expression syntax against data your application is displaying. Here's an example displaying some data model values:

        <%: Model.Address.Street %>

    This snippet shows displaying data from your application's data store or, more importantly, from data entered by users. Anything that makes it easier and less verbose to HtmlEncode text is a welcome addition to avoid potential cross-site scripting attacks. Although I listed Inline HTML Encoding here under WebForms, anything that uses the WebForms rendering engine, including ASP.NET MVC, benefits from this feature.

    ScriptManager Enhancements

    The ASP.NET ScriptManager control in the past has introduced some nice ways to take programmatic and markup control over script loading, but there were a number of shortcomings in this control. The ASP.NET 4.0 ScriptManager has a number of improvements that make it easier to control script loading and addresses a few of the shortcomings that have often kept me from using the control in favor of manual script loading. The first is the AjaxFrameworkMode property, which finally lets you suppress loading the ASP.NET AJAX runtime. Disabled doesn't load any ASP.NET AJAX libraries, but there's also an Explicit mode that lets you pick and choose the library pieces individually and reduce the footprint of the ASP.NET AJAX script included if you are using the library. There's also a new EnableCdn property: if a script's WebResource attribute has the new CdnPath property set to a non-null/empty, CDN-supplied URL and EnableCdn is enabled on the ScriptManager, that script will be served from the specified CdnPath.

        [assembly: WebResource(
           "Westwind.Web.Resources.ww.jquery.js",
           "application/x-javascript",
           CdnPath = "http://mysite.com/scripts/ww.jquery.min.js")]

    Cool, but a little too static for my taste, since this value can't be changed at runtime to point at a debug script as needed, for example. Assembly names for loading scripts from resources can now be simple names rather than fully qualified assembly names, which makes it less verbose to reference scripts from assemblies loaded from your bin folder or the assembly reference area in web.config:

        <asp:ScriptManager runat="server" id="Id"
                EnableCdn="true"
                AjaxFrameworkMode="disabled">
            <Scripts>
                <asp:ScriptReference
                    Name="Westwind.Web.Resources.ww.jquery.js"
                    Assembly="Westwind.Web" />
            </Scripts>
        </asp:ScriptManager>

    The ScriptManager in 4.0 also supports script combining via the CompositeScript tag, which allows you to very easily combine scripts into a single script resource served via ASP.NET. Even nicer: You can specify the URL that the combined script is served with.
    Check out the following script manager markup that combines several static file scripts and a script resource into a single ASP.NET-served resource from a static URL (allscripts.js):

        <asp:ScriptManager runat="server" id="Id"
                EnableCdn="true"
                AjaxFrameworkMode="disabled">
            <CompositeScript
                Path="~/scripts/allscripts.js">
                <Scripts>
                    <asp:ScriptReference
                        Path="~/scripts/jquery.js" />
                    <asp:ScriptReference
                        Path="~/scripts/ww.jquery.js" />
                    <asp:ScriptReference
                        Name="Westwind.Web.Resources.editors.js"
                        Assembly="Westwind.Web" />
                </Scripts>
            </CompositeScript>
        </asp:ScriptManager>

    When you render this into HTML, you'll see a single script reference in the page:

        <script src="scripts/allscripts.debug.js"
                type="text/javascript"></script>

    All you need to do to make this work is ensure that allscripts.js and allscripts.debug.js exist in the scripts folder of your application - they can be empty, but the files have to be there. This is pretty cool, but you want to be real careful that you use unique URLs for each combination of scripts you combine, or else browser and server caching will easily screw you up royally. The script manager also allows you to override native ASP.NET AJAX scripts now, as any script references defined in the Scripts section of the ScriptManager trump internal references. So if you want custom behavior or you want to fix a possible bug in the core libraries that normally are loaded from resources, you can now do this simply by referencing the script resource name in the Name property and pointing at System.Web for the assembly. Not a common scenario, but when you need it, it can come in real handy. Still, there are a number of shortcomings in this control. For one, the ScriptManager and ClientScript APIs still have no common entry point, so control developers are still faced with having to check and support both APIs to load scripts so that controls can work on pages that do or don't have a ScriptManager on the page. The CdnUrl is static and compiled in, which is very restrictive. And finally, there's still no control over where scripts get loaded on the page - ScriptManager still injects scripts into the middle of the HTML markup rather than in the header or optionally the footer. This, in turn, means there is little control over script loading order, which can be problematic for control developers.

    MetaDescription, MetaKeywords Page Properties

    There are also a number of additional Page properties that correspond to some of the other features discussed in this column: ClientIDMode, ClientTarget and ViewStateMode. Another minor but useful feature is that you can now directly access the MetaDescription and MetaKeywords properties on the Page object to set the corresponding meta tags programmatically. Updating these values programmatically previously required either <%= %> expressions in the page markup or dynamic insertion of literal controls into the page.
    You can now just set these properties programmatically on the Page object in any Control-derived class on the page or the Page itself:

        Page.MetaKeywords = "ASP.NET,4.0,New Features";
        Page.MetaDescription = "This article discusses the new features in ASP.NET 4.0";

    Note that there's no corresponding ASP.NET tag for the HTML Meta element, so the only way to specify these values in markup and access them is via the @Page tag:

        <%@Page Language="C#"
            CodeBehind="WebForm2.aspx.cs"
            Inherits="Westwind.WebStore.WebForm2"
            ClientIDMode="Static"
            MetaDescription="Article that discusses what's new in ASP.NET 4.0"
            MetaKeywords="ASP.NET,4.0,New Features" %>

    Nothing earth shattering, but quite convenient.

    Visual Studio 2010 Enhancements for Web Development

    For Web development there are also a host of editor enhancements in Visual Studio 2010. Some of these are not Web specific, but they are useful for Web developers in general.

    Text Editors

    Throughout Visual Studio 2010, the text editors have all been updated to a new core engine based on WPF, which provides some interesting new features for various code editors, including the nice ability to zoom in and out with Ctrl-MouseWheel to quickly change the size of text. There are many more API options to control the editor, and although Visual Studio 2010 doesn't yet use many of these features, we can look forward to enhancements in add-ins and future editor updates from the various language teams that take advantage of the visual richness that WPF provides to editing. On the negative side, I've noticed that occasionally the code editor, and especially the HTML and JavaScript editors, will lose the ability to use various navigation keys like the arrow, back and delete keys, which requires closing and reopening the documents at times. This issue seems to be well documented, so I suspect it will be addressed soon with a hotfix or within the first service pack. Overall though, the code editors work very well, especially given that they were re-written completely using WPF, which was one of my big worries when I first heard about the complete redesign of the editors.

    Multi-Targeting

    Visual Studio now targets all versions of the .NET framework from 2.0 forward. You can use Visual Studio 2010 to work on your ASP.NET 2.0, 3.0 and 3.5 applications, which is a nice way to get your feet wet with the new development environment without having to make changes to existing applications. It's nice to have one tool to work in for all the different versions.

    Multi-Monitor Support

    One cool feature of Visual Studio 2010 is the ability to easily drag windows out of the Visual Studio environment and out onto the desktop, including onto another monitor. Since Web development often involves working with a host of designers at the same time - visual designer, HTML markup window, code behind and JavaScript editor - it's really nice to be able to have a little more screen real estate to work on each of these editors. Microsoft made a welcome change in the environment.

    IntelliSense Snippets for HTML and JavaScript Editors

    The HTML and JavaScript editors now finally support IntelliSense snippets to create macro-based template expansions, which have been in the core C# and Visual Basic code editors since Visual Studio 2005. Snippets allow you to create short XML-based template definitions that can act as static macros or real templates that can have replaceable values that can be embedded into the expanded text.
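    For illustration, here is a sketch of what such a snippet definition might look like in the standard Visual Studio .snippet XML format; the title, shortcut and contents are made up for this example:

        <?xml version="1.0" encoding="utf-8"?>
        <CodeSnippets xmlns="http://schemas.microsoft.com/VisualStudio/2005/CodeSnippet">
          <CodeSnippet Format="1.0.0">
            <Header>
              <!-- 'divc' is a made-up shortcut: type it in the editor and expand it -->
              <Title>div with class</Title>
              <Shortcut>divc</Shortcut>
              <Description>Expands to a div with a replaceable class name</Description>
            </Header>
            <Snippet>
              <Declarations>
                <!-- $classname$ below becomes a replaceable field on expansion -->
                <Literal>
                  <ID>classname</ID>
                  <Default>myClass</Default>
                </Literal>
              </Declarations>
              <Code Language="html">
                <![CDATA[<div class="$classname$">$end$</div>]]>
              </Code>
            </Snippet>
          </CodeSnippet>
        </CodeSnippets>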
    The XML syntax for these snippets is straightforward, and it's pretty easy to create custom snippets manually. You can easily create snippets using XML and store them in your custom snippets folder (C:\Users\rstrahl\Documents\Visual Studio 2010\Code Snippets\Visual Web Developer\My HTML Snippets and My JScript Snippets), but it helps to use one of the third-party tools that exist to simplify the process for you. I use SnippetEditor, by Bill McCarthy, which makes short work of creating snippets interactively (http://snippeteditor.codeplex.com/). Note: You may have to manually add the Visual Studio 2010 user-specific snippet folders to this tool to see existing ones you've created. Code snippets are some of the biggest time savers, and HTML editing more than anything deals with lots of repetitive tasks that lend themselves to text expansion. Visual Studio 2010 includes a slew of built-in snippets (that you can also customize!) and you can create your own very easily. If you haven't done so already, I encourage you to spend a little time examining your coding patterns, find the repetitive code that you write, and convert it into snippets. I've been using CodeRush for this for years, but now you can do much of the basic expansion natively for HTML and JavaScript snippets.

    jQuery Integration Is Now Native

    jQuery is a popular JavaScript library, and Microsoft has recently stated that it will become the primary client-side scripting technology to drive higher level script functionality in the various ASP.NET Web projects that Microsoft provides. In Visual Studio 2010, the default full project template includes jQuery as part of a new project, including the support files that provide IntelliSense (-vsdoc files). IntelliSense support for jQuery is now also baked into Visual Studio 2010, so unlike Visual Studio 2008, which required a separate download, no further installs are required for a rich IntelliSense experience with jQuery.

    Summary

    ASP.NET 4.0 brings many useful improvements to the platform, but thankfully most of the changes are incremental changes that don't compromise backwards compatibility, and they allow developers to ease into the new features one feature at a time. None of the changes in ASP.NET 4.0 or Visual Studio 2010 are monumental or game changers. The bigger features are language and .NET Framework changes that are also optional. This ASP.NET and tools release feels more like fine tuning and getting some long-standing kinks worked out of the platform. It shows that the ASP.NET team is dedicated to paying attention to community feedback and responding with changes to the platform and development environment based on this feedback. If you haven't gotten your feet wet with ASP.NET 4.0 and Visual Studio 2010, there's no reason not to give it a shot now - the ASP.NET 4.0 platform is solid, and Visual Studio 2010 works very well for a brand new release. Check it out.

    © Rick Strahl, West Wind Technologies, 2005-2010. Posted in ASP.NET

    Read the article

  • On Life and TechEd&hellip;

    - by MOSSLover
I haven't been writing here, I know. I am very sorry, but I am just too busy trying to make my personal life just as good as my SharePoint life. So I was at TechEd again helping with the hands-on labs, but I had a very long couple of weeks, and I will have a long week again. I started out going to my friend Randy Walker's wedding and I ended up at my grandmother's condo in Fort Lauderdale. It was a very trying week. I had to drive 5 1/2 hours to the wedding from St. Louis and back. So Randy is an awesome person. He is a great guy and he was the first person ever to nominate me for MVP. I knew it probably wasn't going anywhere, but it was the thought that counted. I met Randy in 2008 at Tulsa Techfest. We had fun jamming out to Rock Band. I knew he was good people back then. He has let me vent and I have let him vent over countless personal problems. He has always been a great friend. So it was a no-brainer when I decided to go to his wedding; no matter how much driving or stress or lack of sleep, it was worth it. I am incredibly happy for him to finally find a diamond amongst a lot of coal. To take part in his celebration was so awesome. I thank him again for letting me participate in this ceremony.

Now after Randy's wedding I drove 5 1/2 hours, landed in St. Louis, fed a cat an asthma pill hidden in wet cat food, and slept for 4 hours. I immediately saw my best friend, who dropped me off at the airport, and proceeded to TechEd. I slept 1 hour on each flight, then ended up working a 3-hour shift at TechEd. The rest of the week was a haze of connecting with people and sleeping very little. I got to see my friend Tasha and her husband Casey plus a billion other people. It was a great week, and then I got a call from my grandmother. It turns out her husband had been admitted to the hospital.

My grandparents on my dad's side have been divorced since the 60s, which means I never got to see them together. I always felt like they never clicked. When I was a kid we would always spend half our time in Chicago at grandma's and half at grandpa's house. We would hang out with my grandpa's wife Bobbi and my grandma's husband Leo. My cousins always called Leo Pappa, and my brother and I would use Leo. My cousins lived in Chicago up until my cousin Gavi was born, then they moved to Philadelphia. I remember complaining to my dad that we never visited anywhere cool, just Chicago and Kansas City. I also remember Leo teaching me and my brother, Sam, how to climb a tree and play tennis against the back wall of the apartment. My grandfather's house was kind of stuffy and boring, but we always enjoyed my grandmother's. She had games and Disney movies and toys. Leo always made it a ton of fun to visit. I'm not sure a lot of people knew this fact, but when I was 16 years old I saw my grandfather on my mother's side slowly die of congestive heart failure in a nursing home. When I was 18 I saw my grandmother on my mother's side slowly die of an infection over a 30-day period. I hate hospitals and I hate nursing homes, but sometimes you have to suck in your gut and be strong. I tried to do that this weekend and I hope I did an ok job. I really hope things get better, but if they don't, I really appreciate everything he has given me in life. I am a terrible tennis player and I haven't climbed a tree in ages, but they both encouraged me when encouragement from my parents was lacking.
It was Father’s Day today and I have to at least pay tribute to Leo Morris, because he has been a good person to me all my life.  Technorati Tags: TechEd,Grandparents,Father's Day

    Read the article

  • Parallelism in .NET – Part 8, PLINQ’s ForAll Method

    - by Reed
Parallel LINQ extends LINQ to Objects, and is typically very similar.  However, as I previously discussed, there are some differences.  Although the standard way to handle simple data parallelism is via Parallel.ForEach, it's possible to do the same thing via PLINQ. PLINQ adds a new method unavailable in standard LINQ which provides new functionality…

LINQ is designed to provide a much simpler way of handling querying, including filtering, ordering, grouping, and many other benefits.  Reading the description in LINQ to Objects on MSDN, it becomes clear that the thinking behind LINQ deals with retrieval of data.  LINQ works by adding a functional programming style on top of .NET, allowing us to express filters in terms of predicate functions, for example.

PLINQ is, generally, very similar.  Typically, when using PLINQ, we write declarative statements to filter a dataset or perform an aggregation.  However, PLINQ adds one new method, which serves a very different purpose: ForAll.

The ForAll method is defined on ParallelEnumerable, and will work upon any ParallelQuery<T>.  Unlike the sequence operators in LINQ and PLINQ, ForAll is intended to cause side effects.  It does not filter a collection, but rather invokes an action on each element of the collection.

At first glance, this seems like a bad idea.  For example, Eric Lippert clearly explained two philosophical objections to providing an IEnumerable<T>.ForEach extension method, one of which still applies when parallelized.  The sole purpose of this method is to cause side effects, and as such, I agree that the ForAll method "violates the functional programming principles that all the other sequence operators are based upon", in exactly the same manner an IEnumerable<T>.ForEach extension method would violate these principles.  Eric Lippert's second reason for disliking a ForEach extension method does not necessarily apply to ForAll - replacing ForAll with a call to Parallel.ForEach has the same closure semantics, so there is no loss there.

Although ForAll may have philosophical issues, there is a pragmatic reason to include this method.  Without ForAll, we would take a fairly serious performance hit in many situations.  Often, we need to perform some filtering or grouping, then perform an action using the results of our filter.  Using a standard foreach statement to perform our action would avoid this philosophical issue:

// Filter our collection
var filteredItems = collection.AsParallel().Where( i => i.SomePredicate() );

// Now perform an action
foreach (var item in filteredItems)
{
    // These will now run serially
    item.DoSomething();
}

This would cause a loss in performance, since we lose any parallelism in place, and all of our actions would run serially.
We could easily use a Parallel.ForEach instead, which adds parallelism to the actions:

// Filter our collection
var filteredItems = collection.AsParallel().Where( i => i.SomePredicate() );

// Now perform an action once the filter completes
Parallel.ForEach(filteredItems, item =>
{
    // These will now run in parallel
    item.DoSomething();
});

This is a noticeable improvement, since both our filtering and our actions run parallelized.  However, there is still a large bottleneck in place here.  The problem lies with my comment "perform an action once the filter completes".  Here, we're parallelizing the filter, then collecting all of the results, blocking until the filter completes.  Once the filtering of every element is completed, we then repartition the results of the filter, reschedule into multiple threads, and perform the action on each element.  By moving this into two separate statements, we potentially double our parallelization overhead, since we're forcing the work to be partitioned and scheduled twice as many times.

This is where the pragmatism comes into play.  By violating our functional principles, we gain the ability to avoid the overhead and cost of rescheduling the work:

// Perform an action on the results of our filter
collection
    .AsParallel()
    .Where( i => i.SomePredicate() )
    .ForAll( i => i.DoSomething() );

The ability to avoid the scheduling overhead is a compelling reason to use ForAll.  This really goes back to one of the key points I discussed in data parallelism: Partition your problem in a way to place the most work possible into each task.  Here, this means leaving the statement attached to the expression, even though it causes side effects and is not standard usage for LINQ.

This leads to my one guideline for using ForAll: The ForAll extension method should only be used to process the results of a parallel query, as returned by a PLINQ expression. Any other usage scenario should use Parallel.ForEach, instead.
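To make the guideline concrete, here is a minimal, self-contained sketch of the pattern above; the integer range and the even-number predicate are hypothetical stand-ins for the post's collection, SomePredicate() and DoSomething():

using System;
using System.Linq;

class ForAllDemo
{
    static void Main()
    {
        // Hypothetical data set standing in for "collection"
        var collection = Enumerable.Range(1, 1000).ToList();

        // Filter and act in a single PLINQ expression, avoiding the
        // repartitioning overhead of a separate Parallel.ForEach pass
        collection
            .AsParallel()
            .Where(i => i % 2 == 0)             // stands in for SomePredicate()
            .ForAll(i => Console.WriteLine(i)); // stands in for DoSomething()
    }
}

Note that the output order is not deterministic here; ForAll trades ordering guarantees for the ability to act on results as each partition produces them.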

    Read the article

  • Posting from ASP.NET WebForms page to another URL

    - by hajan
A few days ago I had a case where I needed to make a FORM POST from my ASP.NET WebForms page to an external site URL. More specifically, I was working on implementing a simple payment system (like Amazon, PayPal, MoneyBookers). The operator asks you to make a FORM POST request to a given URL on their website, sending parameters together with the post which are computed at my application level (access keys, secret keys, signature, return-URL… etc.). So, since we are not allowed to nest another form inside the <form runat="server"> … </form>, which is required because other controls in my ASPX code work on the server side, I thought to inject the HTML and create a FORM with method="POST". After making some proof of concept and testing some scenarios, I've concluded that I can do this very fast in two ways:

1. Using jQuery to create a form on the fly with the needed parameters and call submit()
2. Using HttpContext.Current.Response.Write to write the form on the server side (code-behind) and embed JavaScript code that will do the post

Both ways seemed fine.

1. Using jQuery to create the FORM html code and submit it.

Let's say we have a 'PAY NOW' button in our ASPX code:

<asp:Button ID="btnPayNow" runat="server" Text="Pay Now" />

Now, if we want to make this button submit a FORM using the POST method to another website, the jQuery way would be as follows:

<script src="http://ajax.aspnetcdn.com/ajax/jquery/jquery-1.5.1.js" type="text/javascript"></script>
<script type="text/javascript">
    $(function () {
        $("#btnPayNow").click(function (event) {
            event.preventDefault();
            //construct htmlForm string
            var htmlForm = "<form id='myform' method='POST' action='http://www.microsoft.com'>" +
                "<input type='hidden' id='name' value='hajan' />" +
            "</form>";
            //Submit the form
            $(htmlForm).appendTo("body").submit();
        });
    });
</script>

Yes, as you see, the code fires on the btnPayNow click. It suppresses the default button behavior, then creates the htmlForm string. After that, using jQuery, we append the form to the body and submit it. Inside the form, you can see I have set the http://www.microsoft.com URL, so after clicking the button you should be automatically redirected to the Microsoft website (just for the test; for the payment, of course, I'm using the operator's URL).

2. Using HttpContext.Current.Response.Write to write the form on the server side (code-behind) and embed JavaScript code that will do the post

The C# code-behind should be something like this:

public void btnPayNow_Click(object sender, EventArgs e)
{
    string Url = "http://www.microsoft.com";
    string formId = "myForm1";
    StringBuilder htmlForm = new StringBuilder();
    htmlForm.AppendLine("<html>");
    htmlForm.AppendLine(String.Format("<body onload='document.forms[\"{0}\"].submit()'>", formId));
    htmlForm.AppendLine(String.Format("<form id='{0}' method='POST' action='{1}'>", formId, Url));
    htmlForm.AppendLine("<input type='hidden' id='name' value='hajan' />");
    htmlForm.AppendLine("</form>");
    htmlForm.AppendLine("</body>");
    htmlForm.AppendLine("</html>");
    HttpContext.Current.Response.Clear();
    HttpContext.Current.Response.Write(htmlForm.ToString());
    HttpContext.Current.Response.End();
}

So, with this code we create the htmlForm string using the StringBuilder class and then just write the HTML to the page using HttpContext.Current.Response.Write.
The interesting part here is that we submit the form using JavaScript code: document.forms["myForm1"].submit(). This code runs on the body load event, which means that once the body is loaded the form is automatically submitted.

Note: In order to test both solutions, create two applications on your web server and post the form from the first to the second website, then get the values in the second website using Request.Form["input-field-id"].

I hope this post was useful for you. Regards, Hajan
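As a rough sketch of that note, the code-behind of the receiving page in the second application could read the posted value like this (the page and class names are hypothetical; also note that Request.Form is keyed by the input's name attribute, so the hidden field should carry a name in addition to an id):

using System;
using System.Web.UI;

namespace SecondWebsite
{
    // Hypothetical receiving page in the second test application
    public partial class ReceivePost : Page
    {
        protected void Page_Load(object sender, EventArgs e)
        {
            // Reads the value posted by <input type='hidden' name='name' ... />
            string postedName = Request.Form["name"];
            Response.Write("Received: " + postedName);
        }
    }
}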

    Read the article

  • CQRS - Benefits

    - by Dylan Smith
Thanks to all the comments and feedback from the last post, I think I have a better understanding now of the benefits of CQRS (separate from the benefits of Event Sourcing). I'm going to try to sum it up here, and point out some areas where I could still use some advice:

CQRS Benefits

It sounds like the primary benefit of CQRS as an architecture is that it allows you to create a simpler domain model by sucking out everything related to queries. I can definitely see the benefit of this; in general the domain logic related to commands is the high-value behavior in the software, but the logic required to service the queries would add a lot of low-value "noise" to the domain model that would dilute the high-value (command) behavior - sorting, paging, filtering, pre-fetch paths, etc. Also, the most appropriate domain structure for implementing commands might not be the most optimal for implementing queries. To paraphrase Greg, this usually results in a domain model that is mediocre at both, piss-poor at one, or more likely piss-poor at both commands and queries.

Not only will you be able to simplify your domain model by pulling out all the query logic, but at least a handful of commands in most systems will probably be "pass-through" type commands with little to no logic that just generate events. If these can be implemented directly in the command handler and never touch the domain model, this allows you to slim down the domain model even more.

Also, if you were to do event sourcing without CQRS, you would no longer have a database containing the current state (only the domain model would), which makes it difficult (or impossible) to support the ad-hoc querying and/or reporting that is common in most business software.

Of course CQRS provides some great scalability benefits; not only scalability, but I have to assume it provides extremely low latency for most operations, especially if you have an asynchronous event bus.

I know Greg says that you get a 3x scaling (Commands, Queries, Client) of your ability to perform parallel development, but IMHO it seems like it only provides 1.5x scaling, since even without CQRS you're going to have your client loosely coupled to your domain - which is still a great benefit to be able to realize.

Questions / Concerns

If all the queries against an aggregate get pulled out to the Query layer, what if the only commands for that aggregate can be handled in a "pass-through" manner, with the command handler directly generating events? Is it possible to have an aggregate that isn't modeled in the domain model? Are there any issues or downsides to this?

I know in the feedback from my previous posts it was suggested that having one domain model handling both commands and queries requires implementing a lot of traversals between objects that wouldn't be necessary if it were only servicing commands. My question is: do you include traversals in your domain model based on the needs of the code, or based on the conceptual domain model? If none of my commands require a Customer.Orders traversal, but the conceptual domain includes the concept of a set of orders belonging to a customer - should I model that in my domain model or not?

I like the idea of using the Query side of the architecture as a place to put junior devs, where the risk of them screwing something up has minimal impact. But I'm not sold on the idea that you can actually outsource it.
Like I said in one of my comments on my previous post, the code to handle a query and generate DTOs is going to be dead simple, but the code to process events and apply them to the tables on the query side is going to require a significant amount of domain knowledge - knowing which events to listen for to update each of the de-normalized tables (and what changes need to be made when each event is processed). I don't know about everybody else, but having Indian/Russian/whatever outsourced developers do anything that requires significant domain knowledge has never been successful in my experience. And if you need to spec out for each new query which events to listen to and what to do with each one, well, that's probably going to be just as much work to document as it would be to just implement it.

Greg made the point in a comment that doing an aggregate query like "Total Sales By Customer" is going to be inefficient if you use event sourcing but not CQRS. I don't understand why that would be the case. I imagine in that case you'd simply have a method/property on the Customer object that calculated total sales for that customer by enumerating over the Orders collection. Then the application services layer would generate DTOs off of the Customers collection that included, say, the CustomerID, CustomerName, TotalSales, or whatever the case may be. As long as you use a snapshotting implementation, I don't see why that would be any more inefficient in a DDD+Event Sourcing implementation than in a typical DDD implementation.

Like I mentioned in my last post, I still have some questions about query logic that haven't been answered yet, but before I start asking those I want to make sure I have a strong grasp on what benefits CQRS provides.  My main concern with the query logic was that I know I could just toss it all into the query side, but I was concerned that I would be losing the benefits of using CQRS in the first place if I did that.  I want to elaborate more on this with some example situations in an upcoming post.
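For what it's worth, here is a minimal C# sketch of the "pass-through" command idea raised above: a command handler that validates and emits an event directly, with no domain model in between. All type names are invented for illustration:

using System;

// Hypothetical messages, invented for illustration
public class RenameCustomerCommand
{
    public Guid CustomerId { get; set; }
    public string NewName { get; set; }
}

public class CustomerRenamedEvent
{
    public Guid CustomerId { get; set; }
    public string NewName { get; set; }
}

public interface IEventBus
{
    void Publish(object evt);
}

// A "pass-through" command handler: no aggregate is loaded;
// the handler validates the command and publishes the event itself.
public class RenameCustomerHandler
{
    private readonly IEventBus _bus;

    public RenameCustomerHandler(IEventBus bus)
    {
        _bus = bus;
    }

    public void Handle(RenameCustomerCommand command)
    {
        if (string.IsNullOrEmpty(command.NewName))
            throw new ArgumentException("New name is required.");

        _bus.Publish(new CustomerRenamedEvent
        {
            CustomerId = command.CustomerId,
            NewName = command.NewName
        });
    }
}

Whether the aggregate behind such a command still needs to exist in the domain model is exactly the open question posed above.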

    Read the article

  • Red Gate does Byte Night 2012

    - by red(at)work
On the 5th of October 2012, a team of nine plucky Red Gaters braved the howling wind and the driving rain to sleep outside. No tents or mattresses were allowed – all we took for protection were sleeping bags, groundsheets, plastic sacks and Colin's enormous fishing umbrella (a godsend in umbrella-y disguise). Why would we do such a thing? For Byte Night, an annual tech sector sleepout in support of Action for Children, who tackle the causes as well as the consequences of youth homelessness. Byte Night encourages technology professionals to do for one night a year what thousands of young people have to do every night – sleep rough.

We signed up for Byte Night in the warm, heady midst of the British summer, thinking it couldn't possibly be all that bad. Even on the night itself – before the rain began to fall, sat in the comfort and warmth of a company canteen, drinking wine and eating chilli and preparing to win the pub quiz – we were excited and optimistic about the night that lay ahead of us. All of that changed as soon as we stepped out into one of the worst rainstorms of the year. Brian, the team's birthday boy, describes it best:

Picture the scene: it's 3 am on a Friday. I'm lying outside, fully clothed in a sleeping bag, wearing a raincoat, trussed up inside a large plastic pocket, on a ground sheet beneath a giant umbrella, wedged so tightly between two of my colleagues that I can't move my arms. I'm wide awake, staring up at the grey sky beyond the edge of the umbrella; a limp, flickering white glow hints at a moon somewhere behind the drifting clouds. I haven't slept since we first moved outside at 11 pm. Outside. Did I mention we were outside? I'm hung over. I need the loo. But there is no way on earth that I'm getting out of this sleeping bag. It's cold. It's raining. Not just raining, but chucking it down. It's been doing this non-stop since 10pm. The rain sounds like a hyperactive drummer on the fishing umbrella, and the noise is loud and relentless. Puddles of water are forming all over the groundsheet, and, despite being ensconced inside the plastic pouch, I am wet. The fishing umbrella is protecting me from the worst of the driving rain, but not all of me is under it, and it is no match for five hours of rain. Everything is wet. My left side has become horribly damp. My trainers, which I placed next to my sleeping bag, are now completely soaked through. Mmm. That'll be fun in the morning. My head is next to Colin's head on one side, and a multi-pack of McCoy's cheddar and onion crisps on the other. Don't ask about the tub of hummus. That's somewhere down by my ankles, abandoned to the night. Jess, who is lying next to me, rolls over onto her side. A mini waterfall cascades from her rain-pouch onto my face. Bah. I continue to stare into the heavens, willing the dawn to hurry up. Something lands on my face. It's a mosquito. Great. Midnight, when this still seemed like fun – when we opened some champagne and my colleagues presented me with a caterpillar birthday cake, when everyone was drunk and jolly and full of stoic resolve – feels like a long time ago. Did I mention that today is my birthday? The remains of the caterpillar cake endure the same fate as the hummus, left out in the rain like a metaphor for sadness. It's getting colder. I can see my breath. Silence has descended on the group, apart from the rustle of plastic. And the rain, obviously. Someone snores, and I envy whoever it is the sweet escape of sleep.
I try to wriggle a bit further down inside my sleeping bag, but it doesn't want to be wriggled into. Only 3 hours till dawn. 180 minutes. I begin to count them off, one at a time.

All nine of us got to go home in the morning, but thousands of children across the UK don't have that luxury. If you'd like to sponsor the Red Gate Byte Night team, our JustGiving page can be found here.

(Photo: Chris, before the outside bit actually happened.) More photos from Byte Night Cambridge 2012 can be found here.

    Read the article

  • Part 1 - 12c Database and WLS - Overview

    - by Steve Felts
The download of Oracle 12c database became available on June 25, 2013.  There are some big new features in 12c database, and WebLogic Server will take advantage of them.  Immediately, we will support using the 12c database and drivers with WLS 10.3.6 and 12.1.1.  When the next version of WLS ships, additional functionality will be supported (those rows in the table below with all "No" values will get a "Yes").

The following table maps the Oracle 12c Database features supported with various combinations of currently available WLS releases, 11g and 12c drivers, and 11g and 12c databases. All columns assume WebLogic Server 10.3.6/12.1.1:

Feature | 11g drivers + 11gR2 DB | 11g drivers + 12c DB | 12c drivers + 11gR2 DB | 12c drivers + 12c DB
JDBC replay | No | No | No | Yes (Active GridLink only in 10.3.6, add generic in 12.1.1)
Multi Tenant Database | No | Yes (except set container) | No | Yes (except set container)
Dynamic switching between Tenants | No | No | No | No
Database Resident Connection Pooling (DRCP) | No | No | No | No
Oracle Notification Service (ONS) auto configuration | No | No | No | No
Global Database Services (GDS) | No | Yes (Active GridLink only) | No | Yes (Active GridLink only)
JDBC 4.1 (using ojdbc7.jar files & JDK 7) | No | No | Yes | Yes

The My Oracle Support (MOS) document covering this is "WebLogic Server 12.1.1 and 10.3.6 Support for Oracle 12c Database [ID 1564509.1]" at the link https://support.oracle.com/epmos/faces/DocumentDisplay?id=1564509.1. The following documents are also key references:

12c Oracle Database Developer Guide: http://docs.oracle.com/cd/E16655_01/appdev.121/e17620/toc.htm
12c Oracle Database Administrator's Guide: http://docs.oracle.com/cd/E16655_01/server.121/e17636/toc.htm

I plan to write some related blog articles, not to duplicate existing product documentation, but to introduce the features, provide some examples, and tie together some information to make it easier to understand.

How do you get started with 12c?  The easiest way is to point your data source at a 12c database.  The only change on the WLS side is to update the URL in your data source (assuming that you are not just upgrading your database).  You can continue to use the 11.2.0.3 driver jar files that shipped with WLS 10.3.6 or 12.1.1.  You shouldn't see any changes in your application.  You can take advantage of enhancements on the database side that don't affect the mid-tier.  On the WLS side, you can take advantage of using Global Data Services or connecting to a tenant in a multi-tenant database transparently.

If you want to use the 12c client jar files, it's a bit of work because they aren't shipped with WLS and you can't just drop in ojdbc6.jar as in the old days.  You need to use a matched set of jar files, and they need to come before existing jar files in the CLASSPATH.  The MOS article is written from the standpoint that you need to get the jar files directly - download almost 1G and install over a 600M footprint to get 15 jar files.  Assuming that you have the database installed and you can get access to the installation (or ask the DBA), you need to copy the 15 jar files to each machine with a WLS installation and get them in your CLASSPATH.  You can play with setting the PRE_CLASSPATH, but the more practical approach may be to just update WL_HOME/common/bin/commEnv.sh directly.
There's a change in the transaction completion behavior (read the MOS), so if you think you might run into that, you will want to set -Doracle.jdbc.autoCommitSpecCompliant=false.  Also, if you are running with Active GridLink, you must set -Doracle.ucp.PreWLS1212Compatible=true (how's that for telling you that this is fixed in WLS 12.1.2).  Once you get the configuration out of the way, you can start using the new ojdbc7.jar in place of the ojdbc6.jar to get the new JDBC 4.1 APIs.  You can also start using Application Continuity.  This feature is also known as JDBC Replay because, when a connection fails, you get a new one with all JDBC operations up to the failure point automatically replayed.  As you might expect, there are some limitations, but it's an interesting feature.  Obviously I'm going to focus on the 12c database features that we can leverage in WLS data sources.  You will need to read other sources or the product documentation to get all of the new features.

    Read the article

  • Microsoft TechEd 2010 - Day 2 @ Bangalore

    - by sathya
Microsoft TechEd 2010 - Day 2 @ Bangalore

Today is day 2 @ Microsoft TechEd 2010. We had a lot of technical sessions as usual; there were many tracks going on side by side, and I was attending the Web Simplified track, which comprised the following sessions:

Developing a Scalable Media Application using ASP.NET MVC - This was kind of advanced stuff. Anyway, I couldn't understand much because this was not my piece of cake and I haven't worked on this before.

ASP.NET MVC Unplugged - This was really great, because this session covered the basics of MVC, showing what Model, View and Controller are and how they work, and the speaker went into the details of the same.

Building RESTful Applications with the Open Data Protocol - There were some concepts explained about this from the basics, on how to build RESTful services, and it went on to some advanced configurations of the same.

Developing Scalable Web Applications with AppFabric Caching - This session showed the integration of AppFabric with .NET Web Applications. Instead of using InProc sessions, we can use AppFabric as a substitute for caching and out-of-proc session storage without writing code, with a little bit of configuration, which brings high scalability and performance to our applications. (But unfortunately there were no demos for this session.)

Deep Dive: WCF RIA Services - This session was also an interactive one; the speaker presented from the basics of WCF, took a Book Store application as a sample, and explained all the detailed concepts on linking with RIA Services.

Apart from these sessions, in between there happened some small events in the breaks, like discussions about technology and innovations, music, jokes, mimicry, etc. And on doing all these things, the developers were given some kool gifts / goodies like USBs, T-Shirts, etc.

And today I got a chance to do the following certification: (70-562) Microsoft Certified Technology Specialist in .NET 3.5 Web Applications. Since I already have an MCTS in .NET 2.0, I wanted to do an MCPD, and for doing the same I was required to update my MCTS to the .NET 3.5 framework. I did the same, cleared it, and now am an MCTS in .NET 3.5 Web Apps.

And on doing this I got a T-Shirt, and they gave something called Learning $, worth 30$. And in various stalls, for attending each quiz or some game or some referrals, we got some Learning $ which we could redeem later based on our total Learning $. I got 105$, which I was able to redeem, and got a Microsoft Learning backpack, 1 free Microsoft certification offer, a laptop light, and e-learning content activated.

And after all these sessions and small events, we had something called Demo Extravaganza, like I mentioned yesterday. This was a great fun-filled event with lots of goodies for the attendees. There were lucky draws which enabled 2 attendees to get netbooks (sponsored by Intel) and 1 attendee to get an Xbox (sponsored by Citrix). After choosing the raffle in the lucky draw, they kept it on a device called Microsoft Surface, which is a kind of big touch-screen device; on putting the raffle on it, it detected the code of the attendee and said intelligently how many sessions that person had attended, and if he had attended more than 5 he got a netbook. This was coded by a guy called Imran.
Apart from that, they showed demos on:

Research by 2 Tamil Nadu students from Krishna Arts and Science College, who took 1200 photographs of their college from different angles, put them up in Bing Maps using Silverlight, and linked them with Photosynth, which showed a 3D view of their college based on the photos they uploaded.

Research by Microsoft on panoramic HD views of images. One young guy from Microsoft Research showed a demo of this on Srivilliputhur Andal Temple in Tamil Nadu and its history, with a panoramic view of the temple and the nearby places, narration of the historical information on the same, embedded videos, and high-definition images which we can zoom to a very detailed level.

Some demo of a business app with Silverlight, Business Intelligence (BI) and maps integrated. It showed the sales of a particular product across locations.

Some kool demos by 2 geeks who used robots to show their development talents: 2 robots fought with each other, and 2 robots danced in sync to the A.R. Rahman song Humma Humma...

A dream home project by Raman. He is currently using the same in his home too; robots are controlling his home. They showed a video on this. Here is the list of activities that the robot does for him:

When he reads a book, the robot automatically scans it and shows the image of that person on the screen (TV or computer) in front of him. It shows a Wikipedia page about that person. It says that person is not on LinkedIn and asks if you want to add him.

If he sees IPL match news in the book and smiles, it understands he is interested in it and opens a related website showing the current game and the scorecard.

It cooks for him.

It cleans the room for him whenever he leaves the house.

When he is doing something, if an intruder comes inside his house, his computer automatically switches his screen to show the video of the person coming inside.

When he wakes up, it automatically opens up the system, loads his mails with the news by the side, etc.

Some demos on Microsoft Pivot. This was in Live Labs but is now available at getpivot.com. It's a pivoting of pictorial data based on categories and filters on the searches that we do.

And finally, on filling up some feedback forms, we got T-Shirts and Microsoft Visual Studio 2010 Training Kit CDs.

What's more on TechEd??? Stay tuned!!! Will update you soon on the other happenings!!

PS: I typed a lot of content for more than an hour, but I pressed a backspace and it went to the previous page; all my content was lost and I was not able to retrieve it, so I typed everything again.

    Read the article

  • Azure Mobile Services: available modules

    - by svdoever
Azure Mobile Services has documented a set of objects available in your Azure Mobile Services server-side scripts at their documentation page Mobile Services server script reference. Although the documented list is a nice list of objects for the common things you want to do, it will be sooner rather than later that you will look for more functionality to be included in your script, especially with the newly provided feature that you can now create your own custom APIs. If you use Git, it is now possible to add any NPM module (Node Package Manager module, say the NuGet of the Node world), but why include a module if it is already available out of the box? And you can only use Git with Azure Mobile Services if you are an administrator on your Azure Mobile Service, not if you are a co-administrator (this will be solved in the future).

Until now I did some trial-and-error experimentation to test whether a certain module was available. This is easiest to do as follows:

Create a custom API, for example named experiment. In this API use the following code:

exports.get = function (request, response) {
    var module = "nonexistingmodule";
    var m = require(module);
    response.send(200, "Module '" + module + "' found.");
};

You can now test your service with the following request in your browser: https://yourservice.azure-mobile.net/api/experiment

If you get the result {"code":500,"error":"Error: Internal Server Error"}, you know that the module does not exist. In your logs you will find the following error:

Error in script '/api/experiment.json'. Error: Cannot find module 'nonexistingmodule' [external code] at C:\DWASFiles\Sites\yourservice\VirtualDirectory0\site\wwwroot\App_Data\config\scripts\api\experiment.js:3:13 [external code]

If you require an existing (undocumented) module, like the OAuth module in the following code, you will get success as a result:

exports.get = function (request, response) {
    var module = "oauth";
    var m = require(module);
    response.send(200, "Module '" + module + "' found.");
};

If we look at the standard Node.js documentation, we see an extensive list of modules that can be used from your code. If we look at the list of files available in the Azure Mobile Services platform, as documented in the blog post Azure Mobile Services: what files does it consist of?, we see a folder node_modules with many more modules that are used to build the Azure Mobile Services functionality on, but that can also be utilized from your server-side Node script code:

apn - An interface to the Apple Push Notification service for Node.js.
dpush - Send push notifications to Android devices using GCM.
mpns - A Node.js interface to the Microsoft Push Notification Service (MPNS) for Windows Phone.
wns - Send push notifications to Windows 8 devices using WNS.
pusher - Node library for the Pusher server API (see also: http://pusher.com/)
azure - Windows Azure Client Library for node.
express - Sinatra inspired web development framework.
oauth - Library for interacting with OAuth 1.0, 1.0A, 2 and Echo. Provides simplified client access and allows for construction of more complex apis and OAuth providers.
request - Simplified HTTP request client.
sax - An evented streaming XML parser in JavaScript
sendgrid - A NodeJS implementation of the SendGrid Api.
sqlserver - In the node repository known as msnodesql - Microsoft Driver for Node.js for SQL Server.
tripwire - Break out from scripts blocking the node.js event loop.
underscore - JavaScript's functional programming helper library.
underscore.string - String manipulation extensions for the Underscore.js JavaScript library.
xml2js - Simple XML to JavaScript object converter.
xmlbuilder - An XML builder for node.js.

As stated before, many of these modules are used to provide the functionality of the Azure Mobile Services platform and, in general, should not be used directly. On the other hand, I needed OAuth badly to authenticate to the new v1.1 services of Twitter, and I was very happy that a require('oauth') and a few lines of code did the job. Based on the above modules, and a lot of code in the other JavaScript files in the Azure Mobile Services platform, a set of global objects is provided that can be used from your server-side Node.js script code. In future blog posts I will go into more detail on how this code is built up, all starting at the Node.js Express entry point app.js.

    Read the article

  • Real-world SignalR example, ditching ghetto long polling

    - by Jeff
One of the highlights of BUILD last week was the announcement that SignalR, a framework for real-time client-to-server (or cloud, if you will) communication, would be a real supported thing now, with the weight of Microsoft behind it. Love the open source flava! If you aren't familiar with SignalR, watch this BUILD session with PM Damian Edwards and dev David Fowler. Go ahead, I'll wait. You'll be in a happy place within the first ten minutes. If you skip to the end, you'll see that they plan to ship this as a real first version by the end of the year. Insert slow clap here.

Writing a few lines of code to move around a box from one browser to the next is a way cool demo, but how about something real-world? When learning new things, I find it difficult to be abstract, and I like real stuff. So I thought about what was in my tool box and decided to port my crappy long-polling "there are new posts" feature of POP Forums to use SignalR.

A few versions back, I added a feature where a button would light up while you were pecking out a reply if someone else made a post in the interim. It kind of saves you from that awkward moment where someone else posts some snark before you. While I was proud of the feature, I hated the implementation. When you clicked the reply button, it started polling an MVC URL asking if the last post you had matched the last one on the server, and it did it every second and a half until you either replied or the server told you there was a new post, at which point it would display that button. The code was not glam:

// in the reply setup
PopForums.replyInterval = setInterval("PopForums.pollForNewPosts(" + topicID + ")", 1500);

// called from the reply setup and the handler that fetches more posts
PopForums.pollForNewPosts = function (topicID) {
    $.ajax({
        url: PopForums.areaPath + "/Forum/IsLastPostInTopic/" + topicID,
        type: "GET",
        dataType: "text",
        data: "lastPostID=" + PopForums.currentTopicState.lastVisiblePost,
        success: function (result) {
            var lastPostLoaded = result.toLowerCase() == "true";
            if (lastPostLoaded) {
                $("#MorePostsBeforeReplyButton").css("visibility", "hidden");
            } else {
                $("#MorePostsBeforeReplyButton").css("visibility", "visible");
                clearInterval(PopForums.replyInterval);
            }
        },
        error: function () { }
    });
};

What's going on here is the creation of an interval timer to keep calling the server and bugging it about new posts, and setting the visibility of a button appropriately. It looks like this if you're monitoring requests in FireBug: Gross.

The SignalR approach was to call a message broker when a reply was made, and have that broker call back to the listening clients, via a SignalR hub, to let them know about the new post. It seemed weird at first, but the server-side hub's only method is to add the caller to a group, so new post notifications only go to callers viewing the topic where a new post was made. Beyond that, it's important to remember that the hub is also the means of calling methods at the client end.

Starting at the server side, here's the hub:

using Microsoft.AspNet.SignalR.Hubs;

namespace PopForums.Messaging
{
    public class Topics : Hub
    {
        public void ListenTo(int topicID)
        {
            Groups.Add(Context.ConnectionId, topicID.ToString());
        }
    }
}

Have I mentioned how awesomely not complicated this is? The hub acts as the channel between the server and the client, and you'll see how JavaScript calls the above method in a moment.
Next, the broker class and its associated interface:

using Microsoft.AspNet.SignalR;
using Topic = PopForums.Models.Topic;

namespace PopForums.Messaging
{
    public interface IBroker
    {
        void NotifyNewPosts(Topic topic, int lastPostID);
    }

    public class Broker : IBroker
    {
        public void NotifyNewPosts(Topic topic, int lastPostID)
        {
            var context = GlobalHost.ConnectionManager.GetHubContext<Topics>();
            context.Clients.Group(topic.TopicID.ToString()).notifyNewPosts(lastPostID);
        }
    }
}

The NotifyNewPosts method uses the static GlobalHost.ConnectionManager.GetHubContext<Topics>() method to get a reference to the hub, and then makes a call to clients in the group matched by the topic ID. It's calling the notifyNewPosts method on the client. The TopicService class, which handles the reply data from the MVC controller, has an instance of the broker new'd up by dependency injection, so it took literally one line of code in the reply action method to get things moving:

_broker.NotifyNewPosts(topic, post.PostID);

The JavaScript side of things wasn't much harder. When you click the reply button (or quote button), the reply window opens up and fires up a connection to the hub:

var hub = $.connection.topics;
hub.client.notifyNewPosts = function (lastPostID) {
    PopForums.setReplyMorePosts(lastPostID);
};
$.connection.hub.start().done(function () {
    hub.server.listenTo(topicID);
});

The important part to look at here is the creation of the notifyNewPosts function. That's the method that is called from the server in the Broker class above. Conversely, once the connection is done, the script calls the listenTo method on the server, letting it know that this particular connection is listening for new posts on this specific topic ID.

This whole experiment enables a lot of ideas that would make the forum more Facebook-like, letting you know when stuff is going on around you.
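To show where that one line lives, here is a rough sketch of the service method described above; the class is simplified, and the Post type and method signature are assumptions, not the actual POP Forums code:

using PopForums.Messaging;
using Topic = PopForums.Models.Topic;
using Post = PopForums.Models.Post;

namespace PopForums.Services
{
    // Simplified sketch; the real TopicService has more dependencies and logic
    public class TopicService
    {
        private readonly IBroker _broker;

        // IBroker arrives via dependency injection, as noted in the post
        public TopicService(IBroker broker)
        {
            _broker = broker;
        }

        public void PostReply(Topic topic, Post post)
        {
            // ... persist the reply, update counters, etc. ...

            // The one line that notifies every client listening
            // on this topic's SignalR group
            _broker.NotifyNewPosts(topic, post.PostID);
        }
    }
}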

    Read the article

  • BizTalk &ndash; Routing failure on Delivery Notifications (BizTalk 2006 R2 to 2013)

    - by S.E.R.
Originally posted on: http://geekswithblogs.net/SERivas/archive/2013/11/11/biztalk-routing-failure-on-delivery-notifications.aspx

This is a detailed explanation of something I posted a few months ago on Stack Overflow, concerning a weird behavior (a bug, really…) of the delivery notifications in BizTalk.

Reminder: what are delivery notifications

Mechanism

BizTalk has the ability to automatically publish positive acknowledgments (ACKs) when it has succeeded transmitting a message, or negative acknowledgments (NACKs) in case of a transmission failure. Orchestrations can use delivery notifications to subscribe to those ACKs and NACKs in order to know if a message sent on a one-way send port has been successfully transmitted. Delivery notifications can be "activated" in two ways:

The most common and easy way is to set the Delivery Notification property of a logical send port (in the orchestration designer) to Transmitted.

Another way is to set the BTS.AckRequired context property of the message to be sent to true.

NOTE: fundamentally, those methods are strictly equivalent, since setting the Delivery Notification property to Transmitted on the send port only tells BizTalk that the BTS.AckRequired context property has to be set to true on the outgoing message.

Related context properties

ACKs and NACKs have a common set of promoted context properties, which are:

Property | Description
AckType | Equals ACK when successful or NACK otherwise
AckID | MessageID of the message concerned by the acknowledgment
AckOwnerID | InstanceID of the instance associated with the acknowledgment
AckSendPortID | ID of the send port
AckSendPortName | Name of the send port
AckOutboundTransportLocation | URI of the send port
AckReceivePortID | ID of the port the message came from
AckReceivePortName | Name of the port the message came from
AckInboundTransportLocation | URI of the port the message came from

Detailed behavior

The way delivery notifications are handled by BizTalk is peculiar compared to the standard behavior of the Message Box: if no active subscription exists for the acknowledgment, it is simply discarded. The direct consequence of this is that there can be no routing failure for an acknowledgment, and an acknowledgment cannot be suspended. Moreover, when a message is sent to a send port where Delivery Notification = Transmitted, a correlation set is initialized and a correlation token is attached to the message (context property: CorrelationToken). This correlation token will also be attached to the acknowledgment. So when the acknowledgment is issued, it is automatically routed to the source orchestration. Finally, when a NACK is received by the source orchestration, a DeliveryFailureException is thrown, which can be caught in a Catch section.

Context of the problem

Consider this scenario: In an orchestration, delivery notifications are activated on a one-way send port. In case of a transmission failure, the messaging instance is suspended and the orchestration catches an exception (DeliveryFailureException). When the exception is caught, the orchestration does some logging and then terminates (thanks to a Terminate shape). So that leaves only the suspended messaging instance, waiting to be resumed.

Symptoms

Once the problem that caused the transmission failure is solved, the messaging instance is resumed. Considering what was said in the reminder, we would expect the instance to complete, leaving no active or suspended instance.
Nevertheless, the result is that the messaging instance is once more suspended, this time because of a routing failure. The routing failure report shows the acknowledgment context properties attached to the suspended message.

Explanation

Those properties clearly indicate that the message being suspended is an acknowledgment (an ACK in this case), which was published in the Message Box and was suspended because no subscribers were found. This makes sense, since the source orchestration was terminated before we resumed the messaging instance. So its subscription to the acknowledgments was no longer active when the ACK was published, which explains the routing failure. But this behavior is in direct contradiction with what was said earlier: an acknowledgment must be discarded when no subscriber is found, and therefore should not be suspended.

Cause

It is indeed an outright bug, which appeared with SP1 of BizTalk 2006 R2 and has never been corrected since: not in the next 4 CUs, not in BizTalk 2009, not in 2010, and not even in 2013 - though I haven't tested CU1 and CU2 for this last edition, I bet there is nothing to be expected from those CUs (on this particular point).

Side effects

This bug can have pretty nasty side effects, because this behavior can be propagated to other ports due to routing mechanisms. For instance: you have configured the ESB Toolkit and have activated "Enable routing failure for failed messages". The result will be that the ESB Exception SQL send port will also try to publish ACKs or NACKs concerning its own messaging instances. In itself, this is already messy, but remember that those acknowledgments will also have the source correlation token attached to them… See how far it goes? Well, actually there is more: in SQL send ports, transactions will be rolled back because of the routing failure (I guess it also happens with other adapters - like Oracle - but I haven't tested them). Again, think of what happens when the send port is the ESB Exception send port: your BizTalk box is going mad, but you have no idea, since no exception can be written to the exception database! All of this can be tricky to diagnose, I can tell you that…

Solution

There is no real solution, only a work-around, but it won't solve all of the problems and side effects. The idea is to create an orchestration which subscribes to all acknowledgments. That is to say:

- The message type of the incoming message will be XmlDocument
- The BTS.AckType property exists
- The logical receive port will use direct binding

By doing so, all acknowledgments will be consumed by an instance of this orchestration, thus avoiding the routing failure. Here is an example of what this orchestration could look like (screenshot in the original post). In order not to pollute the HAT and the DTA database (after all, this orchestration is only meant to be a palliative for some faulty internal BizTalk mechanism, so there should be no trace of its execution), all tracking must be deactivated.
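As a rough illustration of the second activation method from the reminder section, the XLANG/s expression (C#-like orchestration expression syntax) in a Message Assignment shape could look like this; a sketch only, where msgIn and msgOut are hypothetical messages:

// XLANG/s, inside a Message Assignment shape (construct block)
msgOut = msgIn;
// Ask BizTalk to publish an ACK/NACK for this transmission
msgOut(BTS.AckRequired) = true;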

    Read the article

  • Unexpected advantage of Engineered Systems

    - by user12244672
It's not surprising that Engineered Systems accelerate the debugging and resolution of customer issues. But what has surprised me is just how much faster issue resolution is with Engineered Systems such as SPARC SuperCluster. These are powerful, complex systems used by customers wanting extreme database performance, app performance, and cost-saving server consolidation. A SPARC SuperCluster consists of 2 or 4 powerful T4-4 compute nodes, 3 or 6 extreme performance Exadata Storage Cells, a ZFS Storage Appliance 7320 for general purpose storage, and ultra fast Infiniband switches.  Each with its own firmware. It runs Solaris 11, Solaris 10, 11gR2, LDoms virtualization, and Zones virtualization on the T4-4 compute nodes, a modified version of Solaris 11 in the ZFS Storage Appliance, a modified and highly tuned version of Oracle Linux running Exadata software on the Storage Cells, another Linux derivative in the Infiniband switches, etc. It has an Infiniband data network between the components, a 10Gb data network to the outside world, and a 1Gb management network. And customers can run whatever middleware and apps they want on it, clustered in whatever way they want. In one word, powerful.  In another, complex. The system is highly Engineered.  But it's designed to run general purpose applications. That is, the physical components, configuration, cabling, virtualization technologies, switches, firmware, Operating System versions, network protocols, tunables, etc. are all preset for optimum performance and robustness. That improves the customer experience, as what the customer runs leverages our technical know-how and best practices and is what we've tested intensely within Oracle. It should also make debugging easier by fixing a large number of variables which would otherwise be in play if a customer or Systems Integrator had assembled such a complex system themselves from the constituent components.  For example, there are myriad network protocols which could be used with Infiniband.  Myriad ways the components could be interconnected, myriad tunable settings, etc. But what has really surprised me - and I've been working in this area for 15 years now - is just how much easier and faster Engineered Systems have made debugging and issue resolution. All those error opportunities for sub-optimal cabling, unusual network protocols, sub-optimal deployment of virtualization technologies, issues with 3rd party storage, issues with 3rd party multi-pathing products, etc., are simply taken out of the equation. All those error opportunities for making an issue unique to a particular set-up, the "why aren't we seeing this on any other system?" type questions, the doubts, just go away when we or a customer discover an issue on an Engineered System. It enables a really honed response, getting to the root cause much, much faster than would otherwise be the case. Here are a couple of examples from the last month, one found in-house by my team, one found by a customer: Example 1: We found a node eviction issue running 11gR2 with Solaris 11 SRU 12 under extreme load on what we call our ExaLego test system (mimics an Exadata / SuperCluster 11gR2 Exadata Storage Cell set-up).  We quickly established that an enhancement in SRU12 enabled an 11gR2 process to query Infiniband's Subnet Manager, replacing a fallback mechanism it had used previously.  Under abnormally heavy load, the query could return results which were misinterpreted, resulting in node eviction.
In several daily joint debugging sessions between the Solaris, Infiniband, and 11gR2 teams, the issue was fully root caused, evaluated, and a fix agreed upon.  That fix went back into all Solaris releases the following Monday.  From initial issue discovery to the fix being put back into all Solaris releases was just 10 days. Example 2: A customer reported sporadic performance degradation.  The reasons were unclear and the information sparse.  The SPARC SuperCluster Engineered Systems support teams which comprises both SPARC/Solaris and Database/Exadata experts worked to root cause the issue.  A number of contributing factors were discovered, including tunable parameters.  An intense collaborative investigation between the engineering teams identified the root cause to a CPU bound networking thread which was being starved of CPU cycles under extreme load.  Workarounds were identified.  Modifications have been put back into 11gR2 to alleviate the issue and a development project already underway within Solaris has been sped up to provide the final resolution on the Solaris side.  The fixed SPARC SuperCluster configuration greatly aided issue reproduction and dramatically sped up root cause analysis, allowing the correct workarounds and fixes to be identified, prioritized, and implemented.  The customer is now extremely happy with performance and robustness.  Since the configuration is common to other customers, the lessons learned are being proactively rolled out to other customers and incorporated into the installation procedures for future customers.  This effectively acts as a turbo-boost to performance and reliability for all SPARC SuperCluster customers.  If this had occurred in a "home grown" system of this complexity, I expect it would have taken at least 6 months to get to the bottom of the issue.  But because it was an Engineered System, known, understood, and qualified by both the Solaris and Database teams, we were able to collaborate closely to identify cause and effect and expedite a solution for the customer.  That is a key advantage of Engineered Systems which should not be underestimated.  Indeed, the initial issue mitigation on the Database side followed by final fix on the Solaris side, highlights the high degree of collaboration and excellent teamwork between the Oracle engineering teams.  It's a compelling advantage of the integrated Oracle Red Stack in general and Engineered Systems in particular.

    Read the article

  • Vendors: Partners or Salespeople?

    - by BuckWoody
    I got a great e-mail from a friend that asked about how he could foster a better relationship with his vendors. So many times when you work with a vendor it’s more of a used-car sales experience than a partnership – but you can actually make your vendor more of a partner, as long as you both set some ground-rules at the start. Sit down with your vendor, and have a heart-to-heart talk with them, explain that they won’t win every time, but that you’re willing to work with them in an honest way on both sides. Here’s the advice I sent him verbatim. I hope this post generates lots of comments from both customers and vendors. I don’t expect that you’ve had a great experience with your Microsoft reps, but I happen to work with some of the best sales teams in the business, and our clients tell us that all the time. “The key to this relationship is to keep the audience really small. Ideally there should be one person from your side that is responsible for the relationship, and one from the vendor’s side. Each responsible person should have the authority to make decisions, and to bring in other folks as needed for a given topic, project or decision.   For Microsoft, this is called an “Account Manager” – they aren’t technical, they aren’t sales. They “own” a relationship with a company. They learn what the company does, who does it, and how. They are responsible to understand what the challenges in your company are. While they don’t know the bits and bytes of everything we sell, they know what each thing does, and who to talk to about it. I get a call from an Account Manager every week that has pre-digested an issue at an organization and says to me: “I need you to set up an architectural meeting with their technical staff to get a better read on how we can help with problem X.” I do that and then report back to the Account Manager what we learned.  All through this process there’s the atmosphere of a “team”, not a “sales opportunity” per se. I’ve even recommended that the firm use a rival product, and I’ve never gotten push-back on that decision from my Account Managers.   But that brings up an interesting point. Someone pays an Account Manager and pays me. They expect something in return. At some point, you have to buy something. Not every time, not every situation – sometimes it’s just helping you with what you already bought from us. But the point is that you can’t expect lots of love and never spend any money. That’s the way business works.   Finally, don’t view the vendor as someone with their hand in your pocket – somebody that’s just trying to sell you something and doesn’t care if they ever see you again – unless they deserve it. There are plenty of “love them and leave them” companies out there, and you may have even had this experience with us, but that isn’t the case in the firms I work with. In fact, my customers get a questionnaire that asks them that exact question. “How many times have you seen your account team? Did you like your interaction with them? Can they do better?” My raises, performance reviews and general standing in my group are based on the answers the company gives.  Ask your vendor if they measure their sales and support teams this way – if not, seek another vendor to partner with.   Partnering with someone is a big deal. It involves time and effort on your part, and on the vendor’s part. If either of you isn’t pulling your weight, it just won’t work. 
You have every right to expect them to treat you as a partner, and they have the same right to expect that from your side.”

    Read the article

  • jQuery Templates, Data Link

    - by Renso
jQuery Templates, Data Link, and Globalization

I am sure you must have read Scott Guthrie’s blog post about jQuery support and officially supporting jQuery’s templating, data linking and globalization; if not, here it is: jQuery Templating. Since we are an open source shop and use jQuery and jQuery plugins extensively, to say the least, we decided to look into the templating a bit and see what data linking is all about. For those not familiar with those terms, here is the summary; there is plenty of material out there on what it is, but this is what, in my experience, it means:

jQuery Templating: A templating engine that allows you to specify a client-side template in which you indicate which properties/tags you want dynamically updated. You in a sense specify which parts of the html are dynamic, and since it is pluggable you are able to use tools like jQuery data linking and others to let it sync up your template with data. What makes it more powerful is that you can easily work with rows of data, adding and removing rows. Once the template has been generated, which you do dynamically on a client-side event, you then append/inject the resulting template somewhere in your DOM. For example, you would get a JSON object from the database and map it to your template, which populates the template with your data in the indicated places, and then, say, append it to a row in a table. I have not found it that useful for, let’s say, a single record of data, since you could easily just get a partial view from the server via an html-type ajax call. It really shines when you dynamically add/remove rows from a list in the DOM. I have not found an alternative that matches the functionality of the jQuery template, and it helps of course that Microsoft officially supports it. In future versions of the jQuery plug-in it may even ship as part of the standard jQuery library and with future versions of Visual Studio.

jQuery Data Linking: In short, I was fascinated initially by how, with one line of code, I can sync up my JSON object with my form elements. That’s where my enthusiasm stopped. It was one line to let it deal with syncing up your form with your JSON object, but it is not bidirectional as they state, and I tried all the workarounds they suggested and none of them work. The problem is that when you update your JSON object it DOES NOT sync it up with your form. In one example, accounts are edited client side by selecting the account from a list by clicking on its row; this fetches the entire account JSON object via an ajax json-type call and then refreshes the form with the account’s details from the new JSON object. What is the use of syncing up my JSON with the form if I still have to programmatically sync up my new JSON object with each DOM property?! So you may ask: “what is the alternative”?
Good question, and the same one I was pondering: maybe I can just use it for keeping my form in sync with my JSON object so I can post that JSON object back to the server and update my database. That’s when I discovered Knockout:

Knockout

It addresses the issues mentioned above and also supports event handling through the observer pattern. Not wanting to go into detail here – Steve Sanderson, the creator of Knockout, has already done a terrific job of that; thanks Steve for a great plug-in! Best of all, it integrates perfectly with the jQuery Templating engine as well. I have not found an alternative to this plugin that supports the same depth and width of functionality, and would recommend it to anyone. The only drawback is the embedded html attributes (data-bind="") that you have to add to the HTML, in my opinion tying your behavior to your HTML, where I like to separate behavior from HTML as well as CSS, so the HTML is purely there to define content, not styling or behavior. But there are plusses to this as well, and also a nifty workaround that I will briefly mention here with an example. Instead of data binding an html tag with Knockout event handling like so:

<%=Html.TextBox("PrepayDiscount", String.Empty, new { @class = "number" })%>

Do:

<%=Html.DataBoundTextBox("PrepayDiscount", String.Empty, new { @class = "number" })%>

The html extension above then takes care of the internals, and you could later swap Knockout for something else inside the extension if you wanted to, keeping the HTML plugin-agnostic. Here is what the extension looks like; you can easily build a whole library to support all kinds of data binding options from this (a sketch of one such companion helper follows below):

public static class HtmlExtensions
{
    public static MvcHtmlString DataBoundTextBox(this HtmlHelper helper, string name, object value, object htmlAttributes)
    {
        var dic = new RouteValueDictionary(htmlAttributes);
        dic.Add("data-bind", String.Format("value: {0}", name));
        return helper.TextBox(name, value, dic);
    }
}

Hope this helps in making a decision when and where to consider jQuery templating, data linking, and Knockout.
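As an illustration of how that extension class can grow into the "whole library" mentioned above, here is a minimal sketch of one possible companion helper for drop-down lists. The helper name is an assumption for illustration, not part of the original post:

using System;
using System.Collections.Generic;
using System.Web.Mvc;
using System.Web.Mvc.Html;
using System.Web.Routing;

public static class HtmlDataBindExtensions
{
    // Hypothetical companion to DataBoundTextBox: emits the same Knockout
    // "value" binding on a drop-down list, keeping the data-bind wiring
    // in one place so the markup stays plugin-agnostic.
    public static MvcHtmlString DataBoundDropDownList(this HtmlHelper helper,
        string name, IEnumerable<SelectListItem> items, object htmlAttributes)
    {
        var dic = new RouteValueDictionary(htmlAttributes);
        dic.Add("data-bind", String.Format("value: {0}", name));
        return helper.DropDownList(name, items, dic);
    }
}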

    Read the article

  • ASP.NET Multi-Select Radio Buttons

    - by Ajarn Mark Caldwell
“HERESY!” you say, “Radio buttons are for single-select items!  If you want multi-select, use checkboxes!”  Well, I would agree, and that is why I consider this a significant bug that ASP.NET developers need to be aware of.  Here’s the situation. If you use ASP:RadioButton controls on your WebForm, then you know that in order to get them to behave properly – that is, to define a group in which only one of them can be selected by the user – you use the GroupName attribute and set the same value on each one.  For example:

<asp:RadioButton runat="server" ID="rdo1" GroupName="GroupName" Checked="true" />
<asp:RadioButton runat="server" ID="rdo2" GroupName="GroupName" />

With this configuration, the controls will render to the browser as HTML Input / Type=radio tags, and when the user selects one, the browser will automatically deselect the other one so that only one can be selected (checked) at any time. BUT, if you use server-side code to manipulate the Checked attribute of these controls, it is possible to set them both to believe that they are checked.

rdo2.Checked = true; // Does NOT change the Checked attribute of rdo1 to be false.

As long as you remain in server-side code, the system will believe that both radio buttons are checked (you can verify this in the debugger).  Therefore, if you later have code that looks like this

if (rdo1.Checked)
{
    DoSomething1();
}
else
{
    DoSomethingElse();
}

then it will always evaluate the condition to be true and take the first action.  The good news is that if you return to the client with multiple radio buttons checked, the browser tries to clean that up for you and make only one of them really checked.  It turns out that the last one on the screen wins, so in this case you will in fact end up with rdo2 as checked, and if you then make a trip to the server to run the code above, it will appear to be working properly.  However, if your page initializes with rdo2 checked and in code you set rdo1 to checked also, then when you go back to the client, rdo2 will remain checked, again because it is the last one, and the last one checked “wins”. And this gets even uglier if you ever set these radio buttons to be disabled.  In that case, although the client browser renders the radio buttons as though only one of them is checked, the system actually retains the value of both of them as checked, and your next trip to the server will really frustrate you, because the browser showed rdo2 as checked but your DoSomething1() routine keeps getting executed. The following is sample code you can put into any WebForm to test this yourself. 
<body>
<form id="form1" runat="server">
    <h1>Radio Button Test</h1>
    <hr />
    <asp:Button runat="server" ID="cmdBlankPostback" Text="Blank Postback" />
    <asp:Button runat="server" ID="cmdEnable" Text="Enable All" OnClick="cmdEnable_Click" />
    <asp:Button runat="server" ID="cmdDisable" Text="Disable All" OnClick="cmdDisable_Click" />
    <asp:Button runat="server" ID="cmdTest" Text="Test" OnClick="cmdTest_Click" />
    <br /><br /><br />
    <asp:RadioButton ID="rdoG1R1" GroupName="Group1" runat="server" Text="Group 1 Radio 1" Checked="true" /><br />
    <asp:RadioButton ID="rdoG1R2" GroupName="Group1" runat="server" Text="Group 1 Radio 2" /><br />
    <asp:RadioButton ID="rdoG1R3" GroupName="Group1" runat="server" Text="Group 1 Radio 3" /><br />
    <hr />
    <asp:RadioButton ID="rdoG2R1" GroupName="Group2" runat="server" Text="Group 2 Radio 1" /><br />
    <asp:RadioButton ID="rdoG2R2" GroupName="Group2" runat="server" Text="Group 2 Radio 2" Checked="true" /><br />
</form>
</body>

protected void Page_Load(object sender, EventArgs e)
{
}

protected void cmdEnable_Click(object sender, EventArgs e)
{
    rdoG1R1.Enabled = true;
    rdoG1R2.Enabled = true;
    rdoG1R3.Enabled = true;
    rdoG2R1.Enabled = true;
    rdoG2R2.Enabled = true;
}

protected void cmdDisable_Click(object sender, EventArgs e)
{
    rdoG1R1.Enabled = false;
    rdoG1R2.Enabled = false;
    rdoG1R3.Enabled = false;
    rdoG2R1.Enabled = false;
    rdoG2R2.Enabled = false;
}

protected void cmdTest_Click(object sender, EventArgs e)
{
    rdoG1R2.Checked = true;
    rdoG2R1.Checked = true;
}

protected void Page_PreRender(object sender, EventArgs e)
{
}

After you copy the markup and code-behind into the appropriate files, I recommend you set a breakpoint on Page_Load as well as cmdTest_Click, and add each of the radio button controls to the Watch list so that you can walk through the code and see exactly what is happening.  Use the Blank Postback button to cause a postback to the server so you can inspect things without making any changes. The moral of the story is: if you do server-side manipulation of the Checked status of RadioButton controls, then you need to set ALL of the controls in a group whenever you want to change one.
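Following that moral, one way to make the discipline hard to forget is to funnel all server-side changes through a helper that explicitly sets every control in the group. The helper below is a hypothetical sketch, not part of the original sample:

// Hypothetical helper: sets every RadioButton in a group, so server-side
// state can never hold two checked buttons at once.
private void CheckExclusive(RadioButton selected, params RadioButton[] group)
{
    foreach (RadioButton rdo in group)
    {
        rdo.Checked = (rdo == selected);
    }
}

// Example usage in cmdTest_Click:
//   CheckExclusive(rdoG1R2, rdoG1R1, rdoG1R2, rdoG1R3);
//   CheckExclusive(rdoG2R1, rdoG2R1, rdoG2R2);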

    Read the article

  • Why bother writing a Windows 8 app?

    - by Dennis Vroegop
So you want to know more about development for Windows 8. Great! There are lots of reasons you should be excited about this. Since I don’t know why YOU are interested in this, I’ll make a list of reasons people can choose from. (As a side note: whenever I talk about Win8 development I am referring to the Metro Style / WinRT side of things. Apps for the ‘classic’ desktop side of Win8 on Intel are business as usual…) So… Why would you care about making an app for Windows 8?

1. It’s cool. Let’s not beat around the bush: if you like development as a hobby then you’ll love to work on this new platform. You can create apps in a relatively short time (short compared to writing a new CRM system, that is) and that makes it great for a hobby product.

2. You’ll stand out. Hey, we all need an ego boost every now and then. We all need to feel special. So if you can manage to be one of the first to have your app in the Store then you’re likely to be noticed. Just close your eyes for a moment and imagine you’re standing in a bar. It’s crowded, and then you casually say “Oh yeah, I just had my app certified and it’s in the Win8 store now”. People will stop talking, will offer you drinks, and beautiful women / gorgeous men / furry creatures from Alpha Centauri (whatever your preferences are) will propose. Or maybe not. Anyway….

3. Make some cash! IDC predicts there will be about 350,000,000 Windows 8 licenses sold in the next year. Think about that number. 350,000,000. And they all have access to the Store. Where your app will be. With one little click they can select it, download it, and somehow magically $1.00 or $2.00 from their bank account is transferred to yours. Now, I am not saying that all of those people will download and buy your app, but what if only 1% of them did? Remember: there aren’t that many apps available yet…..

4. Learn. Creating new small apps is a great way to learn new stuff. Yes, you could read about it (on this blog for instance) but the only way to learn something is to do it. So be prepared for the future and learn something new by doing it. Write an app! Now!

5. The biggie (for me at least): it’s fun. Even if you remove the points above it’s still fun to write for these devices and this platform.

Now some of you will say: “But why not write a great app for iOS or Android?” I think this is a valid question. Of course the novelty of the platform wears off, and points 2 and 3 from the above list will not be as relevant as they are today. But still, points 1, 4 and 5 remain. And don’t forget: if you already work on the Microsoft platform it’s not that hard to learn this new Win8 stuff. If you have done some XAML development (be it WPF or Silverlight) you are almost there in becoming a good Win8 developer. So you’ll be more productive much sooner than when you have to learn Objective-C or Java. Even if you’re a HTML / JavaScript developer (I say developer here, not designer) you’ll be up to speed on Win8 development pretty soon. Yes, you, that funky web developer who lives and breathes HTML5, CSS3 and JavaScript / Node.js / jQuery: you too can be a Win8 developer. A first class Win8 developer! So… download the stuff you need from http://dev.windows.com, install Windows 8 and Visual Studio 2012, and by the time you’re ready I’ll be working on the next article: how to do all this? Happy coding!

    Read the article

  • Windows 8 for productivity?

    - by Charles Young
    At long last I’ve started using Windows 8.  I boot from a VHD on which I have installed Office, Visio, Visual Studio, SQL Server, etc.  For a week, now, I’ve been happily writing code and documents and using Visio and PowerPoint.  I am, very much, a ‘productivity’ user rather than a content consumer.   I spend my days flitting between countless windows and browser tabs displayed across dual monitors.  I need to access a lot of different functionality and information in as fluid a fashion as possible. With that in mind, and like so many others, I was worried about Windows 8.  The Metro interface is primarily about content consumption on touch-enabled screens, and not really geared for people like me sitting in front of an 8-core non-touch laptop and an additional Samsung monitor.  I still use a mouse, not my finger.  And I create more than I consume. Clearly, Windows 8 won’t be viable for people like me unless Metro keeps out of my hair when using productivity and development tools.  With this in mind, I had long expected Microsoft to provide some mechanism for switching Metro off.  There was a registry hack in last year’s Developer Preview, but this capability has been removed.   That’s brave.  So, how have things worked out so far? Well, I am really quite surprised.  When I played with the Developer Preview last year, it was clear that Metro was unfinished and didn’t play well enough with the desktop.  Obviously I expected things to improve, but the context switching from desktop to full-screen seemed a heavy burden to place on users.  That sense of abrupt change hasn’t entirely gone away (how could it), but after a few days, I can’t say that I find it burdensome or irritating.   I’ve got used very quickly to ‘gesturing’ with my mouse at the bottom or top right corners of the screen to move between applications, using the Windows key to toggle the Start screen and generally finding my way around.   I am surprised at how effective the Start screen is, given the rather basic grouping features it provides.  Of course, I had to take control of it and sort things the way I want.  If anything, though, the Start screen provides a better navigation and application launcher tool than the old Start menu. What I didn’t expect was the way that Metro enhances the productivity story.  As I write this, I’ve got my desktop open with a maximised Word window.  However, the desktop extends only across about 85% of the width of my screen.  On the left hand side, I have a column that displays the new Metro email client.  This is currently showing me a list of emails for my main work account.  I can flip easily between different accounts and read my email within that same column.  As I work on documents, I want to be able to monitor my inbox with a quick glance. The desktop, of course, has its own snap feature.  I could run the desktop full screen and bring up Outlook and Word side by side.  However, this doesn’t begin to approach the convenience of snapping the Metro email client.  Consider that when I snap a window on the desktop, it initially takes up 50% of the screen.  Outlook doesn’t really know anything about snap, and doesn’t adjust to make effective use of the limited screen estate.  Even at 50% screen width, it is difficult to use, so forget about trying to use it in a Metro fashion. In any case, I am left with the prospect of having to manually adjust everything to view my email effectively alongside Word.  Worse, there is nothing stopping another window from overlapping and obscuring my email.  
It becomes a struggle to keep sight of email as it arrives.  Of course, there is always ‘toast’ to notify me when things arrive, but if Outlook is obscured, this just feels intrusive. The beauty of the Metro snap feature is that my email reader now exists outside of my desktop.   The Metro app has been crafted to work well in the fixed width column as well as in full-screen.  It cannot be obscured by overlapping windows.  I still get notifications if I wish.  More importantly, it is clear that careful attention has been given to how things work when moving between applications when ‘snapped’.  If I decide, say to flick over to the Metro newsreader to catch up with current affairs, my desktop, rather than my email client, obligingly makes way for the reader.  With a simple gesture and click, or alternatively by pressing Windows-Tab, my desktop reappears. Another pleasant surprise is the way Windows 8 handles dual monitors.  It’s not just the fact that both screens now display the desktop task bar.  It’s that I can so easily move between Metro and the desktop on either screen.  I can only have Metro on one screen at a time which makes entire sense given the ‘full-screen’ nature of Metro apps.  Using dual monitors feels smoother and easier than previous versions of Windows. Overall then, I’m enjoying the Windows 8 improvements.  Strangely, for all the hype (“Windows reimagined”, etc.), my perception as a ‘productivity’ user is more one of evolution than revolution.  It all feels very familiar, but just better.

    Read the article

  • Jolicloud is a Nifty New OS for Your Netbook

    - by Matthew Guay
Want to breathe new life into your netbook?  Here’s a quick look at Jolicloud, a unique new Linux-based OS that lets you use your netbook in a whole new way. Netbooks have been an interesting category of computers.  When they were first released, most netbooks came with a stripped-down Linux-based operating system designed to let you easily access the internet first and foremost.  Consumers wanted more from their netbooks, so full OSes such as Windows XP and Ubuntu became the standard on netbooks.  Microsoft worked hard to get Windows 7 working great on netbooks, and today most netbooks run Windows 7 great.  But the Linux community hasn’t stood still either, and Jolicloud is proof of that.  Jolicloud is a unique OS designed to bring the best of both webapps and standard programs to your netbook.  Keep reading to see if this is the perfect netbook OS for you.

Getting Started

Installing Jolicloud on your netbook is easy thanks to the Jolicloud Express installer for Windows.  Since many netbooks run Windows by default, this makes it easy to install Jolicloud.  Plus, your Windows install is left untouched, so you can still easily access all your Windows files and programs. Download and run the roughly 700Mb installer (link below) just like a normal installer in Windows. This will first extract the needed files. Click Get started to install Jolicloud on your netbook. Enter a username, password, and nickname for your computer.  Please note that the username must be all lowercase, and the nickname should not contain spaces or special characters.  Now you can review the default installation settings.  By default it will take up 39Gb and install on your C:\ drive in English.  If you wish to change this, click Change. We chose to install it on the D: drive on this netbook, as its harddrive was already partitioned into two parts.  Click Save when your settings are all correct, and then click Next in the previous window. Jolicloud will prepare for the installation.  This took about 5 minutes in our test.  Click Next when this is finished. Click Restart now to install and run Jolicloud. When your netbook reboots, it will initialize the Jolicloud setup. It will then automatically finish the installation.  Just sit back and wait; there’s nothing for you to do right now.  The installation took about 20 minutes in our test. Jolicloud will automatically reboot when the setup is finished. Once it’s rebooted, you’re ready to go!  Enter the username, then the password, that you chose earlier when you were installing Jolicloud from Windows. Welcome to your Jolicloud desktop!

Hardware Support

We installed Jolicloud on a Samsung N150 netbook with an Atom N450 processor, 1Gb RAM, 250Gb harddrive, and WiFi b/g/n with Bluetooth.  Amazingly, once Jolicloud was installed, everything was ready to use.  No drivers to install, no settings to hassle with; it was all installed and set up perfectly.  Power settings worked great, and closing the netbook put it to sleep just like in Windows. WiFi drivers have typically been difficult to find and install on Linux, but Jolicloud had our netbook’s wifi working immediately.  To get online, simply click the Wireless icon on the top right, and select the wireless network you want to connect to. Jolicloud will let you know when it is signed on. Wired LAN networking was also seamless; simply connect your cable and you’re ready to go.  The webcam and touchpad also worked perfectly out of the box.  
The only thing missing was multitouch; this touchpad has two-finger scroll, pinch zoom, and other nice multitouch features in Windows, but in Jolicloud it only functioned as a standard touchpad.  It did have tap-to-click activated by default, as well as right-side scrolling, which is nice. Jolicloud also supported our video card without any extra work.  The native resolution was already selected, and the only problem we had with the screen was that there was no apparent way to change the brightness.  This is not a major problem, but the option would be nice to have.  The Samsung N150 has Intel GMA3150 integrated graphics, and Jolicloud promises 1080p HD video on it.  It did play back 720p H.264 video flawlessly without installing anything extra, but it stuttered on full 1080p HD (which is exactly the same as this netbook’s video playback in Windows 7 – 720p works great, but it stutters on 1080p).  We would be excited to see full HD on this netbook, but 720p is definitely fine for most stuff.  Jolicloud supports a wide range of netbooks, and based on our experience we would expect it to work as well on any supported hardware.  Check out the list of supported netbooks to see if your netbook is supported; if not, it still may work but you may have to install special drivers. Jolicloud’s performance was very similar to Windows 7 on our netbook.  It boots in about 30 seconds, and apps load fairly quickly.  In general, we couldn’t tell much difference in performance between Jolicloud and Windows 7, though this isn’t a problem since Windows 7 runs great on the current generation of netbooks.

Using Jolicloud

Ready to start putting Jolicloud to use?  Your fresh Jolicloud install can run several built-in apps, such as Firefox, a calculator, and the chat client Pidgin.  It also has a media player and file viewer installed, so you can play MP3s or MPG videos, or read PDF ebooks, without installing anything extra.  It also has Flash player installed so you can watch videos online easily. You can also directly access all of your files from the right side of your home screen.  You can even access your Windows files; in our test, the 116.9 GB Media was C: from Windows.  Select it to browse and open any file you had saved in Windows. You may need to enter your password to access it. Once you’ve authenticated, you’ll see all of your Windows files and folders.  Your User files (Documents, Music, Videos, etc.) will be in the Users folder. And, you can easily add files from removable media such as USB flash drives and memory cards.  Jolicloud recognized a flash drive we tested with no trouble at all.

Add new apps

But the best part about Jolicloud is that it makes it very easy to install new apps.  Click the Get Started button on your homescreen. You’ll first need to create an account.  You can then use this same account on another netbook if you wish, and your settings will automatically be synced between the two. You can either sign up using your Facebook account, …or you can sign up the traditional way with your email address, name, and password.  If you sign up this way, you will need to confirm your email address before your account will be finished. Now, choose your netbook model from the list, and enter a name for your computer. And that’s it!  You’ll now see the Jolicloud dashboard, which will show you updates and notifications from friends who also use Jolicloud. Click the App directory to find new apps for your netbook.  
Here you will find a variety of webapps, such as Gmail, along with native applications, such as Skype, that you can install on your netbook.  Simply click the Install button on the right to add the app to your netbook. You will be prompted to enter your system password, and then the app will install without any further input.  Once an app is installed, a check mark will appear beside its name.  You can remove it by clicking the Remove button, and it will uninstall seamlessly. Webapps, such as Gmail, actually run in a Chrome-powered window that lets the webapp run full screen.  This gives the webapps a native feel, but actually they’re just running the same as they would in a standard web browser.

The Jolicloud Interface

Most apps run maximized, and there is no way to run them smaller.  This in general works well, since with small screens most apps need to run full-screen anyhow. Smaller apps, such as a calculator or the Pidgin chat client, run in a window just like they do on other operating systems. You can switch to another app that’s running by selecting its icon on the top left, or you can go back to the home screen.  If you’re finished with a program, simply click the red X button on the top right of its window.  Or, you can switch between programs using standard keyboard shortcuts such as Alt+Tab. The default page on the home screen is the favorites page, and all of your other programs are organized in their own sections on the left-hand side.  But if you want to add one of these to your favorites page, simply right-click on it and select Add to Favorites. When you’re done for the day, you can simply close your netbook to put it to sleep.  Or, if you want to shut down, just press the Quit button on the bottom right of the home screen and then select Shut Down.

Booting Jolicloud

When you install Jolicloud, it will set itself as the default operating system.  Now, when you boot your netbook, it will show you a list of installed operating systems.  You can select either Windows or Jolicloud, but if you don’t make a selection it will boot into Jolicloud after waiting 10 seconds. If you’d prefer to boot into Windows by default, you can easily change this.  First, boot your netbook into Windows.  Open the start menu, right-click on the Computer button, and select Properties.  Click the “Advanced system settings” link on the left side. Click the Settings button in the Startup and Recovery section. Now, select Windows as the default operating system, and click Ok.  Your netbook will now boot into Windows by default, but will give you 10 seconds to choose to boot into Jolicloud when you start your computer. Or, if you decide you don’t want Jolicloud, you can easily uninstall it from within Windows. Please note that this will also remove any files you may have saved in Jolicloud, so be sure to copy them to your Windows drive before uninstalling. To uninstall Jolicloud from within Windows, open Control Panel, and select Uninstall a Program. Scroll down to select Jolicloud, and click Uninstall/Change. Click Yes to confirm that you want to uninstall Jolicloud. After a few moments, it will let you know that Jolicloud has been uninstalled.  Your netbook is now back to the same state it was in before you installed Jolicloud, with only Windows installed.

Closing

Whether you want to replace the current OS on your netbook or would simply like to try out a fresh new Linux version, Jolicloud is a great option for you.  
We were very impressed by its solid hardware support and the ease of installing new apps in Jolicloud.  Rather than simply giving us a standard OS, Jolicloud offers a unique way to use your netbook with native programs and webapps.  And whether you’re an IT pro or a new computer user, Jolicloud is easy enough for anyone to use.  Give it a try, and let us know what your favorite netbook OS is!

Link: Download Jolicloud for your netbook

    Read the article

  • Announcing Entity Framework Code-First (CTP5 release)

    - by ScottGu
This week the data team released the CTP5 build of the new Entity Framework Code-First library.  EF Code-First enables a pretty sweet code-centric development workflow for working with data.  It enables you to:

- Develop without ever having to open a designer or define an XML mapping file
- Define model objects by simply writing “plain old classes” with no base classes required
- Use a “convention over configuration” approach that enables database persistence without explicitly configuring anything
- Optionally override the convention-based persistence and use a fluent code API to fully customize the persistence mapping

I’m a big fan of the EF Code-First approach, and wrote several blog posts about it this summer: Code-First Development with Entity Framework 4 (July 16th), EF Code-First: Custom Database Schema Mapping (July 23rd), and Using EF Code-First with an Existing Database (August 3rd). Today’s new CTP5 release delivers several nice improvements over the CTP4 build, and will be the last preview build of Code First before its final release.  We will ship the final EF Code First release in the first quarter of next year (Q1 of 2011).  It works with all .NET application types (including both ASP.NET Web Forms and ASP.NET MVC projects).

Installing EF Code First

You can install and use EF Code First CTP5 in one of two ways:

Approach 1) By downloading and running a setup program.  Once installed you can reference the EntityFramework.dll assembly it provides within your projects.

Approach 2) By using the NuGet Package Manager within Visual Studio to download and install EF Code First within a project.  To do this, simply bring up the NuGet Package Manager Console within Visual Studio (View->Other Windows->Package Manager Console) and type “Install-Package EFCodeFirst”.  This will cause NuGet to download the EF Code First package, add it to your current project, and automatically add a reference to the EntityFramework.dll assembly.

NuGet enables you to have EF Code First set up and ready to use within seconds.  When the final release of EF Code First ships you’ll also be able to just type “Update-Package EFCodeFirst” to update your existing projects to use the final release.

EF Code First Assembly and Namespace

The CTP5 release of EF Code First has an updated assembly name, and a new .NET namespace:

Assembly Name: EntityFramework.dll
Namespace: System.Data.Entity

These names match what we plan to use for the final release of the library.

Nice New CTP5 Improvements

The new CTP5 release of EF Code First contains a bunch of nice improvements and refinements. Some of the highlights include:

- Better support for Existing Databases
- Built-in Model-Level Validation and DataAnnotation Support
- Fluent API Improvements
- Pluggable Conventions Support
- New Change Tracking API
- Improved Concurrency Conflict Resolution
- Raw SQL Query/Command Support

The rest of this blog post contains some more details about a few of the above changes.

Better Support for Existing Databases

EF Code First makes it really easy to create model layers that work against existing databases.  CTP5 includes some refinements that further streamline the developer workflow for this scenario. 
Below are the steps to use EF Code First to create a model layer for the Northwind sample database:

Step 1: Create Model Classes and a DbContext class

EF Code First enables you to use “POCO” – Plain Old CLR Objects – to represent entities within a database.  This means that you do not need to derive model classes from a base class, nor implement any interfaces or data persistence attributes on them.  This enables the model classes to be kept clean, easily testable, and “persistence ignorant”.  The Product and Category classes (see the sketch after Step 3 below) are examples of POCO model classes. EF Code First enables you to easily connect your POCO model classes to a database by creating a “DbContext” class that exposes public properties mapping to the tables within a database.  The Northwind class illustrates how this can be done.  It maps our Product and Category classes to the “Products” and “Categories” tables within the database.  The properties within the Product and Category classes in turn map to the columns within the Products and Categories tables – and each instance of a Product/Category object maps to a row within the tables. This is all of the code required to create our model and data access layer!  Previous CTPs of EF Code First required an additional step to work against existing databases (a call to Database.Initializer<Northwind>(null) to tell EF Code First not to create the database) – this step is no longer required with the CTP5 release.

Step 2: Configure the Database Connection String

We’ve written all of the code we need to define our model layer.  Our last step before we use it will be to set up a connection-string that connects it with our database.  To do this we’ll add a “Northwind” connection-string to our web.config file (or App.config for client apps) like so:

<connectionStrings>
    <add name="Northwind"
         connectionString="data source=.\SQLEXPRESS;Integrated Security=SSPI;AttachDBFilename=|DataDirectory|\northwind.mdf;User Instance=true"
         providerName="System.Data.SqlClient" />
</connectionStrings>

EF “code first” uses a convention where DbContext classes by default look for a connection-string that has the same name as the context class.  Because our DbContext class is called “Northwind” it by default looks for a “Northwind” connection-string to use.  Above, our Northwind connection-string is configured to use a local SQL Express database (stored within the \App_Data directory of our project).  You can alternatively point it at a remote SQL Server.

Step 3: Using our Northwind Model Layer

We can now easily query and update our database using the strongly-typed model layer we just built with EF Code First. The code example below demonstrates how to use LINQ to query for products within a specific product category – the query returns a sequence of strongly-typed Product objects that match the search criteria – and how we can retrieve a specific Product object, update two of its properties, and then save the changes back to the database. EF Code First handles all of the change-tracking and data persistence work for us, and allows us to focus on our application and business logic as opposed to having to worry about data access plumbing. 
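Here's a minimal sketch of the model layer and usage described in Steps 1–3.  The property lists are trimmed-down assumptions rather than the full Northwind schema, and the query/update values are illustrative:

using System.Collections.Generic;
using System.Data.Entity;
using System.Linq;

// Step 1: POCO model classes plus a DbContext
public class Product
{
    public int ProductID { get; set; }
    public string ProductName { get; set; }
    public decimal? UnitPrice { get; set; }
    public int CategoryID { get; set; }
    public virtual Category Category { get; set; }
}

public class Category
{
    public int CategoryID { get; set; }
    public string CategoryName { get; set; }
    public virtual ICollection<Product> Products { get; set; }
}

public class Northwind : DbContext
{
    public DbSet<Product> Products { get; set; }
    public DbSet<Category> Categories { get; set; }
}

// Step 3: querying and updating through the model layer
class ModelLayerDemo
{
    static void Run()
    {
        using (var db = new Northwind())
        {
            // LINQ query returning strongly-typed Product objects
            var beverages = (from p in db.Products
                             where p.Category.CategoryName == "Beverages"
                             select p).ToList();

            // Retrieve a product, change two properties, save the changes
            var product = db.Products.Find(1);
            product.ProductName = "Chai Tea";
            product.UnitPrice = 19.95m;
            db.SaveChanges();
        }
    }
}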
Built-in Model Validation

EF Code First allows you to use any validation approach you want when implementing business rules with your model layer, which enables a great deal of flexibility and power. Starting with this week’s CTP5 release, EF Code First also includes built-in support for both the DataAnnotation and IValidatableObject validation support built into .NET 4.  This enables you to easily implement validation rules on your models, and have those rules automatically enforced by EF Code First whenever you save your model layer.  It provides a very convenient “out of the box” way to enable validation within your applications.

Applying DataAnnotations to our Northwind Model

The code example below demonstrates how we could add some declarative validation rules to two of the properties of our “Product” model, using the [Required] and [Range] attributes.  These validation attributes live within the System.ComponentModel.DataAnnotations namespace that is built into .NET 4, and can be used independently of EF.  The error messages specified on them can either be explicitly defined (like below) – or retrieved from resource files (which makes localizing applications easy).

Validation Enforcement on SaveChanges()

EF Code-First (starting with CTP5) now automatically applies and enforces DataAnnotation rules when a model object is updated or saved.  You do not need to write any code to enforce this – this support is now enabled by default.  This means that code which violates our rules will automatically throw an exception when we call the “SaveChanges()” method on our Northwind DbContext. The DbEntityValidationException that is raised when the SaveChanges() method is invoked contains an “EntityValidationErrors” property that you can use to retrieve the list of all validation errors that occurred when the model was trying to save.  This enables you to easily guide the user on how to fix them.  Note that EF Code-First will abort the entire transaction of changes if a validation rule is violated – ensuring that our database is always kept in a valid, consistent state. EF Code First’s validation enforcement works both for the built-in .NET DataAnnotation attributes (like Required, Range, RegularExpression, StringLength, etc), and for any custom validation rule you create by sub-classing the System.ComponentModel.DataAnnotations.ValidationAttribute base class.

UI Validation Support

A lot of our UI frameworks in .NET also provide support for DataAnnotation-based validation rules. For example, ASP.NET MVC, ASP.NET Dynamic Data, and Silverlight (via WCF RIA Services) all provide support for displaying client-side validation UI that honors the DataAnnotation rules applied to model objects. With the default “Add-View” scaffold template within an ASP.NET MVC 3 application, appropriate validation error messages are displayed if appropriate values are not provided. ASP.NET MVC 3 supports both client-side and server-side enforcement of these validation rules.  The error messages displayed are automatically picked up from the declarative validation attributes – eliminating the need for you to write any custom code to display them. 
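The following is a minimal sketch of the annotated model and the SaveChanges() enforcement just described; the specific bounds, messages, and values are illustrative assumptions:

using System;
using System.ComponentModel.DataAnnotations;
using System.Data.Entity.Validation;

public class Product
{
    public int ProductID { get; set; }

    [Required(ErrorMessage = "Please enter a product name")]
    public string ProductName { get; set; }

    [Range(0, 10000, ErrorMessage = "Price must be between 0 and 10000")]
    public decimal? UnitPrice { get; set; }
}

class ValidationDemo
{
    static void UpdateWithValidation()
    {
        using (var db = new Northwind())
        {
            var product = db.Products.Find(1);
            product.ProductName = null;   // violates [Required]
            product.UnitPrice = -5m;      // violates [Range]

            try
            {
                db.SaveChanges();         // validation rules are enforced here
            }
            catch (DbEntityValidationException ex)
            {
                // Walk the per-entity validation errors to guide the user
                foreach (var entityErrors in ex.EntityValidationErrors)
                {
                    foreach (var error in entityErrors.ValidationErrors)
                    {
                        Console.WriteLine("{0}: {1}",
                            error.PropertyName, error.ErrorMessage);
                    }
                }
            }
        }
    }
}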
Keeping things DRY

The “DRY Principle” stands for “Don’t Repeat Yourself”, and is a best practice that recommends you avoid duplicating logic/configuration/code in multiple places across your application, instead specifying it only once and having it apply everywhere. EF Code First CTP5 now enables you to apply declarative DataAnnotation validations on your model classes (and specify them only once) and then have the validation logic be enforced (and corresponding error messages displayed) across all application scenarios – including within controllers, views, client-side scripts, and for any custom code that updates and manipulates model classes. This makes it much easier to build good applications with clean code, and to build applications that can rapidly iterate and evolve.

Other EF Code First Improvements New to CTP5

EF Code First CTP5 includes a bunch of other improvements as well.  Below are a few short descriptions of some of them:

Fluent API Improvements: EF Code First allows you to override an “OnModelCreating()” method on the DbContext class to further refine/override the schema mapping rules used to map model classes to the underlying database schema.  CTP5 includes some refinements to the ModelBuilder class that is passed to this method which can make defining mapping rules cleaner and more concise.  The ADO.NET Team blogged some samples of how to do this here.

Pluggable Conventions Support: EF Code First CTP5 provides new support that allows you to override the “default conventions” that EF Code First honors, and optionally replace them with your own set of conventions.

New Change Tracking API: EF Code First CTP5 exposes a new set of change tracking information that enables you to access Original, Current & Stored values, and State (e.g. Added, Unchanged, Modified, Deleted).  This support is useful in a variety of scenarios.

Improved Concurrency Conflict Resolution: EF Code First CTP5 provides better exception messages that allow access to the affected object instance and the ability to resolve conflicts using current, original and database values.

Raw SQL Query/Command Support: EF Code First CTP5 now allows raw SQL queries and commands (including SPROCs) to be executed via the SqlQuery and SqlCommand methods exposed off of the DbContext.Database property.  The results of these method calls can be materialized into object instances that can be optionally change-tracked by the DbContext.  This is useful for a variety of advanced scenarios.

Full Data Annotations Support: EF Code First CTP5 now supports all standard DataAnnotations within .NET, and can use them both to perform validation and to automatically create the appropriate database schema when EF Code First is used in a database creation scenario.

Summary

EF Code First provides an elegant and powerful way to work with data.  I really like it because it is extremely clean and supports best practices, while also enabling solutions to be implemented very, very rapidly.  The code-only approach of the library means that model layers end up being flexible and easy to customize. This week’s CTP5 release further refines EF Code First and helps ensure that it will be really sweet when it ships early next year.  I recommend using NuGet to install it and give it a try today.  I think you’ll be pleasantly surprised by how awesome it is. Hope this helps, Scott

    Read the article

  • Minecraft Style Chunk building problem

    - by David Torrey
    I'm having some problems with speed in my chunk engine. I timed it out, and in its current state it takes a total ~5 seconds per chunk to fill each face's list. I have a check to see if each face of a block is visible and if it is not visible, it skips it and moves on. I'm using a dictionary (unordered map) because it makes sense memorywise to just not have an entry if there is no block. I've tracked my problem down to testing if there is an entry, and accessing an entry if it does exist. If I remove the tests to see if there is an entry in the dictionary for an adjacent block, or if the block type itself is seethrough, it runs within about 2-4 milliseconds. so here's my question: Is there a faster way to check for an entry in a dictionary than .ContainsKey()? As an aside, I tried TryGetValue() and it doesn't really help with the speed that much. If I remove the ContainsKey() and keep the test where it does the IsSeeThrough for each block, it halves the time, but it's still about 2-3 seconds. It only drops to 2-4ms if I remove BOTH checks. Here is my code: using System; using System.Collections.Generic; using System.Linq; using System.Text; using System.Threading.Tasks; using System.Runtime.InteropServices; using OpenTK; using OpenTK.Graphics.OpenGL; using System.Drawing; namespace Anabelle_Lee { public enum BlockEnum { air = 0, dirt = 1, } [StructLayout(LayoutKind.Sequential,Pack=1)] public struct Coordinates<T1> { public T1 x; public T1 y; public T1 z; public override string ToString() { return "(" + x + "," + y + "," + z + ") : " + typeof(T1); } } public struct Sides<T1> { public T1 left; public T1 right; public T1 top; public T1 bottom; public T1 front; public T1 back; } public class Block { public int blockType; public bool SeeThrough() { switch (blockType) { case 0: return true; } return false ; } public override string ToString() { return ((BlockEnum)(blockType)).ToString(); } } class Chunk { private Dictionary<Coordinates<byte>, Block> mChunkData; //stores the block data private Sides<List<Coordinates<byte>>> mVBOVertexBuffer; private Sides<int> mVBOHandle; //private bool mIsChanged; private const byte mCHUNKSIZE = 16; public Chunk() { } public void InitializeChunk() { //create VBO references #if DEBUG Console.WriteLine ("Initializing Chunk"); #endif mChunkData = new Dictionary<Coordinates<byte> , Block>(); //mIsChanged = true; GL.GenBuffers(1, out mVBOHandle.left); GL.GenBuffers(1, out mVBOHandle.right); GL.GenBuffers(1, out mVBOHandle.top); GL.GenBuffers(1, out mVBOHandle.bottom); GL.GenBuffers(1, out mVBOHandle.front); GL.GenBuffers(1, out mVBOHandle.back); //make new list of vertexes for each face mVBOVertexBuffer.top = new List<Coordinates<byte>>(); mVBOVertexBuffer.bottom = new List<Coordinates<byte>>(); mVBOVertexBuffer.left = new List<Coordinates<byte>>(); mVBOVertexBuffer.right = new List<Coordinates<byte>>(); mVBOVertexBuffer.front = new List<Coordinates<byte>>(); mVBOVertexBuffer.back = new List<Coordinates<byte>>(); #if DEBUG Console.WriteLine("Chunk Initialized"); #endif } public void GenerateChunk() { #if DEBUG Console.WriteLine("Generating Chunk"); #endif for (byte i = 0; i < mCHUNKSIZE; i++) { for (byte j = 0; j < mCHUNKSIZE; j++) { for (byte k = 0; k < mCHUNKSIZE; k++) { Random blockLoc = new Random(); Coordinates<byte> randChunk = new Coordinates<byte> { x = i, y = j, z = k }; mChunkData.Add(randChunk, new Block()); mChunkData[randChunk].blockType = blockLoc.Next(0, 1); } } } #if DEBUG Console.WriteLine("Chunk Generated"); #endif } public void DeleteChunk() { 
//delete VBO references #if DEBUG Console.WriteLine("Deleting Chunk"); #endif GL.DeleteBuffers(1, ref mVBOHandle.left); GL.DeleteBuffers(1, ref mVBOHandle.right); GL.DeleteBuffers(1, ref mVBOHandle.top); GL.DeleteBuffers(1, ref mVBOHandle.bottom); GL.DeleteBuffers(1, ref mVBOHandle.front); GL.DeleteBuffers(1, ref mVBOHandle.back); //clear all vertex buffers ClearPolyLists(); #if DEBUG Console.WriteLine("Chunk Deleted"); #endif } public void UpdateChunk() { #if DEBUG Console.WriteLine("Updating Chunk"); #endif ClearPolyLists(); //prepare buffers //for every entry in mChunkData map foreach(KeyValuePair<Coordinates<byte>,Block> feBlockData in mChunkData) { Coordinates<byte> checkBlock = new Coordinates<byte> { x = feBlockData.Key.x, y = feBlockData.Key.y, z = feBlockData.Key.z }; //check for polygonson the left side of the cube if (checkBlock.x > 0) { //check to see if there is a key for current x - 1. if not, add the vector if (!IsVisible(checkBlock.x - 1, checkBlock.y, checkBlock.z)) { //add polygon AddPoly(checkBlock.x, checkBlock.y, checkBlock.z, mVBOHandle.left); } } else { //polygon is far left and should be added AddPoly(checkBlock.x, checkBlock.y, checkBlock.z, mVBOHandle.left); } //check for polygons on the right side of the cube if (checkBlock.x < mCHUNKSIZE - 1) { if (!IsVisible(checkBlock.x + 1, checkBlock.y, checkBlock.z)) { //add poly AddPoly(checkBlock.x, checkBlock.y, checkBlock.z, mVBOHandle.right); } } else { //poly for right add AddPoly(checkBlock.x, checkBlock.y, checkBlock.z, mVBOHandle.right); } if (checkBlock.y > 0) { //check to see if there is a key for current x - 1. if not, add the vector if (!IsVisible(checkBlock.x, checkBlock.y - 1, checkBlock.z)) { //add polygon AddPoly(checkBlock.x, checkBlock.y, checkBlock.z, mVBOHandle.bottom); } } else { //polygon is far left and should be added AddPoly(checkBlock.x, checkBlock.y, checkBlock.z, mVBOHandle.bottom); } //check for polygons on the right side of the cube if (checkBlock.y < mCHUNKSIZE - 1) { if (!IsVisible(checkBlock.x, checkBlock.y + 1, checkBlock.z)) { //add poly AddPoly(checkBlock.x, checkBlock.y, checkBlock.z, mVBOHandle.top); } } else { //poly for right add AddPoly(checkBlock.x, checkBlock.y, checkBlock.z, mVBOHandle.top); } if (checkBlock.z > 0) { //check to see if there is a key for current x - 1. 
                //if not, add the vector
                if (!IsVisible(checkBlock.x, checkBlock.y, checkBlock.z - 1))
                {
                    //add polygon
                    AddPoly(checkBlock.x, checkBlock.y, checkBlock.z, mVBOHandle.back);
                }
            }
            else
            {
                //polygon is far left and should be added
                AddPoly(checkBlock.x, checkBlock.y, checkBlock.z, mVBOHandle.back);
            }

            //check for polygons on the right side of the cube
            if (checkBlock.z < mCHUNKSIZE - 1)
            {
                if (!IsVisible(checkBlock.x, checkBlock.y, checkBlock.z + 1))
                {
                    //add poly
                    AddPoly(checkBlock.x, checkBlock.y, checkBlock.z, mVBOHandle.front);
                }
            }
            else
            {
                //polygon is far right and should be added
                AddPoly(checkBlock.x, checkBlock.y, checkBlock.z, mVBOHandle.front);
            }
        }

        BuildBuffers();

#if DEBUG
        Console.WriteLine("Chunk Updated");
#endif
    }

    public void RenderChunk()
    {
    }

    public void LoadChunk()
    {
#if DEBUG
        Console.WriteLine("Loading Chunk");
#endif
        //TODO: read chunk data from storage
#if DEBUG
        Console.WriteLine("Chunk Loaded");
#endif
    }

    public void SaveChunk()
    {
#if DEBUG
        Console.WriteLine("Saving Chunk");
#endif
        //TODO: write chunk data to storage
#if DEBUG
        Console.WriteLine("Chunk Saved");
#endif
    }

    private bool IsVisible(int pX, int pY, int pZ)
    {
        Block testBlock;
        Coordinates<byte> checkBlock = new Coordinates<byte>
        {
            x = Convert.ToByte(pX),
            y = Convert.ToByte(pY),
            z = Convert.ToByte(pZ)
        };

        if (mChunkData.TryGetValue(checkBlock, out testBlock)) //if data exists
        {
            if (!testBlock.SeeThrough()) //if the existing block is not see-through
            {
                //a solid block occupies this position
                return true;
            }
        }

        //empty or see-through: no solid block occupies this position
        return false;
    }

    private void AddPoly(byte pX, byte pY, byte pZ, int BufferSide)
    {
        //queue the six vertices (two triangles) of the requested face;
        //the lists are uploaded to the VBOs later, in BuildBuffers
        if (BufferSide == mVBOHandle.front)
        {
            //front face
            mVBOVertexBuffer.front.Add(new Coordinates<byte> { x = (byte)(pX),     y = (byte)(pY + 1), z = (byte)(pZ + 1) });
            mVBOVertexBuffer.front.Add(new Coordinates<byte> { x = (byte)(pX),     y = (byte)(pY),     z = (byte)(pZ + 1) });
            mVBOVertexBuffer.front.Add(new Coordinates<byte> { x = (byte)(pX + 1), y = (byte)(pY),     z = (byte)(pZ + 1) });
            mVBOVertexBuffer.front.Add(new Coordinates<byte> { x = (byte)(pX + 1), y = (byte)(pY),     z = (byte)(pZ + 1) });
            mVBOVertexBuffer.front.Add(new Coordinates<byte> { x = (byte)(pX + 1), y = (byte)(pY + 1), z = (byte)(pZ + 1) });
            mVBOVertexBuffer.front.Add(new Coordinates<byte> { x = (byte)(pX),     y = (byte)(pY + 1), z = (byte)(pZ + 1) });
        }
        else if (BufferSide == mVBOHandle.back)
        {
            //back face
            mVBOVertexBuffer.back.Add(new Coordinates<byte> { x = (byte)(pX + 1), y = (byte)(pY + 1), z = (byte)(pZ) });
            mVBOVertexBuffer.back.Add(new Coordinates<byte> { x = (byte)(pX + 1), y = (byte)(pY),     z = (byte)(pZ) });
            mVBOVertexBuffer.back.Add(new Coordinates<byte> { x = (byte)(pX),     y = (byte)(pY),     z = (byte)(pZ) });
            mVBOVertexBuffer.back.Add(new Coordinates<byte> { x = (byte)(pX),     y = (byte)(pY),     z = (byte)(pZ) });
            mVBOVertexBuffer.back.Add(new Coordinates<byte> { x = (byte)(pX),     y = (byte)(pY + 1), z = (byte)(pZ) });
            mVBOVertexBuffer.back.Add(new Coordinates<byte> { x = (byte)(pX + 1), y = (byte)(pY + 1), z = (byte)(pZ) });
        }
        else if (BufferSide == mVBOHandle.left)
        {
            //left face
            mVBOVertexBuffer.left.Add(new Coordinates<byte> { x = (byte)(pX), y = (byte)(pY + 1), z = (byte)(pZ) });
            mVBOVertexBuffer.left.Add(new Coordinates<byte> { x = (byte)(pX), y = (byte)(pY),     z = (byte)(pZ) });
            mVBOVertexBuffer.left.Add(new Coordinates<byte> { x = (byte)(pX), y = (byte)(pY),     z = (byte)(pZ + 1) });
            mVBOVertexBuffer.left.Add(new Coordinates<byte> { x = (byte)(pX), y = (byte)(pY),     z = (byte)(pZ + 1) });
            mVBOVertexBuffer.left.Add(new Coordinates<byte> { x = (byte)(pX), y = (byte)(pY + 1), z = (byte)(pZ + 1) });
            mVBOVertexBuffer.left.Add(new Coordinates<byte> { x = (byte)(pX), y = (byte)(pY + 1), z = (byte)(pZ) });
        }
        else if (BufferSide == mVBOHandle.right)
        {
            //right face
            mVBOVertexBuffer.right.Add(new Coordinates<byte> { x = (byte)(pX + 1), y = (byte)(pY + 1), z = (byte)(pZ + 1) });
            mVBOVertexBuffer.right.Add(new Coordinates<byte> { x = (byte)(pX + 1), y = (byte)(pY),     z = (byte)(pZ + 1) });
            mVBOVertexBuffer.right.Add(new Coordinates<byte> { x = (byte)(pX + 1), y = (byte)(pY),     z = (byte)(pZ) });
            mVBOVertexBuffer.right.Add(new Coordinates<byte> { x = (byte)(pX + 1), y = (byte)(pY),     z = (byte)(pZ) });
            mVBOVertexBuffer.right.Add(new Coordinates<byte> { x = (byte)(pX + 1), y = (byte)(pY + 1), z = (byte)(pZ) });
            mVBOVertexBuffer.right.Add(new Coordinates<byte> { x = (byte)(pX + 1), y = (byte)(pY + 1), z = (byte)(pZ + 1) });
        }
        else if (BufferSide == mVBOHandle.top)
        {
            //top face
            mVBOVertexBuffer.top.Add(new Coordinates<byte> { x = (byte)(pX),     y = (byte)(pY + 1), z = (byte)(pZ) });
            mVBOVertexBuffer.top.Add(new Coordinates<byte> { x = (byte)(pX),     y = (byte)(pY + 1), z = (byte)(pZ + 1) });
            mVBOVertexBuffer.top.Add(new Coordinates<byte> { x = (byte)(pX + 1), y = (byte)(pY + 1), z = (byte)(pZ + 1) });
            mVBOVertexBuffer.top.Add(new Coordinates<byte> { x = (byte)(pX + 1), y = (byte)(pY + 1), z = (byte)(pZ + 1) });
            mVBOVertexBuffer.top.Add(new Coordinates<byte> { x = (byte)(pX + 1), y = (byte)(pY + 1), z = (byte)(pZ) });
            mVBOVertexBuffer.top.Add(new Coordinates<byte> { x = (byte)(pX),     y = (byte)(pY + 1), z = (byte)(pZ) });
        }
        else if (BufferSide == mVBOHandle.bottom)
        {
            //bottom face
            mVBOVertexBuffer.bottom.Add(new Coordinates<byte> { x = (byte)(pX),     y = (byte)(pY), z = (byte)(pZ + 1) });
            mVBOVertexBuffer.bottom.Add(new Coordinates<byte> { x = (byte)(pX),     y = (byte)(pY), z = (byte)(pZ) });
            mVBOVertexBuffer.bottom.Add(new Coordinates<byte> { x = (byte)(pX + 1), y = (byte)(pY), z = (byte)(pZ) });
            mVBOVertexBuffer.bottom.Add(new Coordinates<byte> { x = (byte)(pX + 1), y = (byte)(pY), z = (byte)(pZ) });
            mVBOVertexBuffer.bottom.Add(new Coordinates<byte> { x = (byte)(pX + 1), y = (byte)(pY), z = (byte)(pZ + 1) });
            mVBOVertexBuffer.bottom.Add(new Coordinates<byte> { x = (byte)(pX),     y = (byte)(pY), z = (byte)(pZ + 1) });
        }
    }

    private void BuildBuffers()
    {
#if DEBUG
        Console.WriteLine("Building Chunk Buffers");
#endif
        GL.BindBuffer(BufferTarget.ArrayBuffer, mVBOHandle.front);
        GL.BufferData(BufferTarget.ArrayBuffer,
                      (IntPtr)(Marshal.SizeOf(new Coordinates<byte>()) * mVBOVertexBuffer.front.Count),
                      mVBOVertexBuffer.front.ToArray(), BufferUsageHint.StaticDraw);

        GL.BindBuffer(BufferTarget.ArrayBuffer, mVBOHandle.back);
        GL.BufferData(BufferTarget.ArrayBuffer,
                      (IntPtr)(Marshal.SizeOf(new Coordinates<byte>()) * mVBOVertexBuffer.back.Count),
                      mVBOVertexBuffer.back.ToArray(), BufferUsageHint.StaticDraw);

        GL.BindBuffer(BufferTarget.ArrayBuffer, mVBOHandle.left);
        GL.BufferData(BufferTarget.ArrayBuffer,
                      (IntPtr)(Marshal.SizeOf(new Coordinates<byte>()) * mVBOVertexBuffer.left.Count),
                      mVBOVertexBuffer.left.ToArray(), BufferUsageHint.StaticDraw);

        GL.BindBuffer(BufferTarget.ArrayBuffer, mVBOHandle.right);
        GL.BufferData(BufferTarget.ArrayBuffer,
                      (IntPtr)(Marshal.SizeOf(new Coordinates<byte>()) * mVBOVertexBuffer.right.Count),
                      mVBOVertexBuffer.right.ToArray(), BufferUsageHint.StaticDraw);

        GL.BindBuffer(BufferTarget.ArrayBuffer, mVBOHandle.top);
        GL.BufferData(BufferTarget.ArrayBuffer,
                      (IntPtr)(Marshal.SizeOf(new Coordinates<byte>()) * mVBOVertexBuffer.top.Count),
                      mVBOVertexBuffer.top.ToArray(), BufferUsageHint.StaticDraw);

        GL.BindBuffer(BufferTarget.ArrayBuffer, mVBOHandle.bottom);
        GL.BufferData(BufferTarget.ArrayBuffer,
                      (IntPtr)(Marshal.SizeOf(new Coordinates<byte>()) * mVBOVertexBuffer.bottom.Count),
                      mVBOVertexBuffer.bottom.ToArray(), BufferUsageHint.StaticDraw);

        GL.BindBuffer(BufferTarget.ArrayBuffer, 0);
#if DEBUG
        Console.WriteLine("Chunk Buffers Built");
#endif
    }

    private void ClearPolyLists()
    {
#if DEBUG
        Console.WriteLine("Clearing Polygon Lists");
#endif
        mVBOVertexBuffer.top.Clear();
        mVBOVertexBuffer.bottom.Clear();
        mVBOVertexBuffer.left.Clear();
        mVBOVertexBuffer.right.Clear();
        mVBOVertexBuffer.front.Clear();
        mVBOVertexBuffer.back.Clear();
#if DEBUG
        Console.WriteLine("Polygon Lists Cleared");
#endif
    }
}//END CLASS
}//END NAMESPACE
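RenderChunk is left as an empty stub above. Purely as an illustration (none of this is part of the original code), here is a minimal sketch of how the six per-face VBOs might be drawn. It assumes OpenTK's legacy fixed-function client-state API (GL.EnableClientState with ArrayCap, GL.VertexPointer, GL.DrawArrays with BeginMode), introduces a hypothetical DrawFace helper, and reuses the same Marshal.SizeOf stride that BuildBuffers uses for the upload:

public void RenderChunk()
{
    //a minimal sketch, assuming the fixed-function pipeline and that the
    //mVBOVertexBuffer lists still hold the counts uploaded by BuildBuffers
    GL.EnableClientState(ArrayCap.VertexArray);

    DrawFace(mVBOHandle.front,  mVBOVertexBuffer.front.Count);
    DrawFace(mVBOHandle.back,   mVBOVertexBuffer.back.Count);
    DrawFace(mVBOHandle.left,   mVBOVertexBuffer.left.Count);
    DrawFace(mVBOHandle.right,  mVBOVertexBuffer.right.Count);
    DrawFace(mVBOHandle.top,    mVBOVertexBuffer.top.Count);
    DrawFace(mVBOHandle.bottom, mVBOVertexBuffer.bottom.Count);

    GL.BindBuffer(BufferTarget.ArrayBuffer, 0);
    GL.DisableClientState(ArrayCap.VertexArray);
}

//hypothetical helper: draws one face buffer as a triangle list
private void DrawFace(int vboHandle, int vertexCount)
{
    if (vertexCount == 0) return;

    GL.BindBuffer(BufferTarget.ArrayBuffer, vboHandle);
    //each vertex is three unsigned bytes; the stride matches the uploaded struct size
    GL.VertexPointer(3, VertexPointerType.UnsignedByte,
                     Marshal.SizeOf(new Coordinates<byte>()), IntPtr.Zero);
    GL.DrawArrays(BeginMode.Triangles, 0, vertexCount);
}

One side benefit of keeping a separate buffer per face direction, as this class does, is that a renderer along these lines can later skip entire buffers that face away from the camera (for example, the top buffer when the camera is below the chunk).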


  • Asynchronous Streaming in ASP.NET WebApi

    - by andresv
    Hi everyone, if you use the cool MVC4 WebApi you might find yourself in a common situation where you need to return a rather large amount of data (most probably from a database) and you want to accomplish two things:

    1. Use streaming, so the client fetches the data as needed; this translates directly into on-demand fetching on the server side (from our database, for example) without consuming large amounts of memory.
    2. Leverage the new MVC4 WebApi and the .NET 4.5 async/await asynchronous execution model to free ASP.NET thread-pool threads (if possible).

    Now, #1 and #2 are not directly related to each other, and we could implement our code fulfilling one or the other, or both. The main point about #1 is that we want our method to return a stream to the caller immediately, with that client-side stream backed by a server-side stream that gets written (and its related database fetches performed) only when needed. For that we need some form of "state machine" that keeps running on the server and "knows" what to fetch into the output stream next when the client asks for more content. This technique is generally called a "continuation" and is nothing new in .NET; in fact, using the IEnumerable<> interface and the "yield return" keyword does exactly that, so our first impulse might be to write our WebApi method more or less like this:

        public IEnumerable<Metadata> Get([FromUri] int accountId)
        {
            // Execute the command and get a reader
            using (var reader = GetMetadataListReader(accountId))
            {
                // Yield one mapped record at a time as the reader advances
                while (reader.Read())
                {
                    yield return MapRecord(reader);
                }
            }
        }

    While the above method works, it unfortunately doesn't accomplish our objective of returning immediately to the caller, because the MVC WebApi infrastructure doesn't yet recognize our intentions: when it finds an IEnumerable return value, it enumerates it fully before returning its values to the client. To prove my point, I can code a test method that calls this method, for example:

        [TestMethod]
        public void StreamedDownload()
        {
            var baseUrl = @"http://localhost:57771/api/metadata/1";
            var client = new HttpClient();
            var sw = Stopwatch.StartNew();
            var stream = client.GetStreamAsync(baseUrl).Result;
            sw.Stop();
            Debug.WriteLine("Elapsed time Call: {0}ms", sw.ElapsedMilliseconds);
        }

    I would expect the line "var stream = client.GetStreamAsync(baseUrl).Result" to return immediately, without the server first fetching all the data from the database reader, but that is not what happens. To make the behavior more evident, you can insert a wait (like Thread.Sleep(1000);) inside the "while" loop, and you will see that the client call (GetStreamAsync) does not return control until n seconds have passed (n being the number of records the reader fetches). Ok, we know this doesn't work, and the question would be: is there a way to do it? Fortunately, YES! And it is not very difficult, although a little more convoluted than our simple IEnumerable return value.
    Maybe in the future this scenario will be automatically detected and supported in MVC/WebApi. The solution to our needs is a very handy class named PushStreamContent; our method signature then needs to change to accommodate it, returning an HttpResponseMessage instead of the previously used IEnumerable<>. The final code will be something like this:

        public HttpResponseMessage Get([FromUri] int accountId)
        {
            HttpResponseMessage response = Request.CreateResponse();

            // Create push content with a delegate that will get called when it is time
            // to write out the response.
            response.Content = new PushStreamContent(
                async (outputStream, httpContent, transportContext) =>
                {
                    try
                    {
                        // Execute the command and get a reader
                        using (var reader = GetMetadataListReader(accountId))
                        {
                            // Read rows asynchronously, serialize each record and
                            // write it asynchronously to the output stream
                            while (await reader.ReadAsync())
                            {
                                var rec = MapRecord(reader);
                                var str = await JsonConvert.SerializeObjectAsync(rec);
                                var buffer = UTF8Encoding.UTF8.GetBytes(str);

                                // Write out data to output stream
                                await outputStream.WriteAsync(buffer, 0, buffer.Length);
                            }
                        }
                    }
                    catch (HttpException ex)
                    {
                        if (ex.ErrorCode == -2147023667) // The remote host closed the connection.
                        {
                            return;
                        }
                    }
                    finally
                    {
                        // Close output stream as we are done
                        outputStream.Close();
                    }
                });

            return response;
        }

    As an extra bonus, all the classes involved already support the async/await asynchronous execution model, so taking advantage of it was very easy. Please note that the PushStreamContent class receives a lambda (specifically an Action) in its constructor, and we decorated our anonymous method with the async keyword (not a very well known technique but quite handy) so we can await the I/O-intensive calls we execute: reading from the database reader, serializing our entity and finally writing to the output stream. If we execute the test again, we will immediately notice that the line returns right away, and the rest of the server code executes only as the client reads through the obtained stream. We therefore get low memory usage and far greater scalability for our beloved application when serving big chunks of data. Enjoy! Andrés.
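    For completeness, here is a minimal client-side sketch of consuming such a response incrementally. Everything in it is illustrative rather than part of the original post: ConsumeMetadataStreamAsync is a hypothetical method name, and it assumes the server emits one serialized record per line (the server sketch above writes records back-to-back, so you would append a newline after each WriteAsync for this framing to work); Metadata and JsonConvert are reused from the post.

        // using System.IO; using System.Net.Http; using Newtonsoft.Json;
        public async Task ConsumeMetadataStreamAsync()
        {
            var client = new HttpClient();

            // ResponseHeadersRead hands us the stream as soon as the headers arrive,
            // instead of buffering the whole response body first.
            using (var response = await client.GetAsync(
                "http://localhost:57771/api/metadata/1",
                HttpCompletionOption.ResponseHeadersRead))
            using (var stream = await response.Content.ReadAsStreamAsync())
            using (var reader = new StreamReader(stream))
            {
                string line;
                while ((line = await reader.ReadLineAsync()) != null)
                {
                    // Each line is assumed to hold one serialized Metadata record;
                    // process it as it arrives rather than after the full download.
                    var metadata = JsonConvert.DeserializeObject<Metadata>(line);
                    // ... handle the record here ...
                }
            }
        }

    The newline delimiter is just one convenient framing choice; any delimiter the client can split on (or a proper streaming JSON array parser) works equally well.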


  • PASS: Bylaw Changes

    - by Bill Graziano
    While you’re reading this, a post should be going up on the PASS blog on the plans to change our bylaws. You should be able to find our old bylaws, our proposed bylaws and a red-lined version of the changes. We plan to listen to feedback until March 31st. At that point we’ll decide whether to vote on these changes or take other action.

    The executive summary is that we’re adding a restriction to prevent more than two people from the same company serving on the Board, and eliminating the Board’s Officer Appointment Committee so that Officers are directly elected by the Board. This second change better matches how officer elections have been conducted in the past.

    The Gritty Details

    Our scope was to change the bylaws to match how PASS actually works and to tackle a limited set of issues. Changing the bylaws is hard. We’ve been working on these changes since the March board meeting last year. At that meeting we met and talked through the issues we wanted to address. In years past the Board has tried to come up with language and then discussed and negotiated to get to the result. In March, we gave HQ guidance on what we wanted and asked them to come up with a starting point. Hannes worked on building us an initial set of changes that we could work our way through. Discussing changes like this over email wasn’t very productive. We do a much better job on this at the in-person Board meetings. Unfortunately there are only 2 or 3 of those a year.

    In August we met in Nashville and spent time discussing the changes. That was also the day after we released the slate for the 2010 election, and the discussion around that colored what we talked about in terms of these changes. We talked very briefly at the Summit and again reviewed and revised the changes at the Board meeting in January. This is the result of those changes and discussions.

    We made numerous small changes to clean up language and make wording more clear. We also made two big changes.

    Director Employment Restrictions

    The first is that only two people from the same company can serve on the Board at the same time. The actual language in section VI.3 reads:

        A maximum of two (2) Directors who are employed by, or who are joint owners or partners in, the same for-profit venture, company, organization, or other legal entity, may concurrently serve on the PASS Board of Directors at any time. The definition of “employed” is at the sole discretion of the Board.

    And what a mess this turns out to be in practice. Our membership is a hodgepodge of interlocking relationships. Let’s say three Board members get together and start a blog service for SQL Server bloggers. It’s technically for-profit. Let’s assume it makes $8 in the first year. Does that trigger this clause? (Technically yes.) We had a horrible time trying to write language that covered everything. All the sample bylaws we found were just as vague as this.

    That led to the third clause in this section. The first sentence reads:

        The Board of Directors reserves the right, strictly on a case-by-case basis, to overrule the requirements of Section VI.3 by majority decision for any single Director’s conflict of employment.

    We needed some way to handle the trivial issues and exercise some judgment. It seems like a public vote is the best way. This discloses the relationship and gets each Board member on record on the issue. In practice I think this clause will rarely be used.
    I think this entire section will only be invoked for actual employment issues and not for small side projects. In either case we have the mechanisms in place to handle it in a public, transparent way.

    That’s the first and third clauses. The second clause says that if your situation changes and you fall afoul of this restriction, you need to notify the Board. The clause further states that if a new job means a Board member violates the “two-per-company” rule, the Board may request their resignation. The Board can also allow the person to continue serving with a majority vote. I think this will also take some judgment. Consider a person switching jobs in a way that leads to three people from the same company. I’m very likely to ask someone to resign if all three are two weeks into a two-year term. I’m unlikely to ask anyone to resign if one is two weeks away from ending their term. In either case, the decision will be a public vote that we can be held accountable for.

    One concern that was raised was whether this would affect someone choosing to accept a job. I think that’s a choice for them to make. PASS is clearly stating its intent that only two directors from any one organization should serve at any time. Once these bylaws are approved, this policy should not come as a surprise to any potential or current Board members considering a job change.

    This clause isn’t perfect. The biggest hole is business relationships that aren’t defined above. Let’s say that two employees from company “X” serve on the Board. What happens if I accept a full-time consulting contract with that company? Let’s assume I’m working directly for one of the two existing Board members. That doesn’t violate section VI.3. But I think it’s clearly the kind of relationship we’d like to prevent. Unfortunately that was even harder to write than what we have now. I fully expect that in the next revision of the bylaws we’ll address this. It just didn’t make it into this one.

    Officer Elections

    The officer election process received a slightly different rewrite. Our goal was to codify in the bylaws the actual process we use to elect the officers. The officers are the President, Executive Vice-President (EVP) and Vice-President of Marketing. The Immediate Past President (IPP) is also an officer but isn’t elected. The IPP serves in that role for two years after completing their term as President. We do that for continuity’s sake. Some organizations have a President-elect that serves for one or two years. The group that founded PASS chose to have an IPP.

    When I started on the Board, the Nominating Committee (NomCom) selected the slate for the at-large directors and the slate for the officers. There was always one candidate for each officer position. It wasn’t really an election so much as the NomCom deciding who the next person would be for each officer position. Behind the scenes the Board worked to select the best people for the role.

    In June 2009 that process was changed to bring it in line with what actually happens. An Officer Appointment Committee was created as a subset of the Board. That committee would take time to interview the candidates and present a slate to the Board for approval. The majority vote of the Board would determine the officers for the next two years. In practice the Board itself interviewed the candidates and conducted the elections. That means it was time to change the bylaws again.

    Sections VII.2 and VII.3 spell out the process used to select the officers.
    We use the phrase “Officer Appointment” to separate it from the Director election, but the end result is that the Board elects the officers. Section VII.3 starts:

        Officers shall be appointed bi-annually by a majority of all the voting members of the Board of Directors.

    Everything else revolves around that sentence. We use the word appoint, but they truly are elected. There are details in the bylaws for term limits, minimum requirements for President (one prior term as an officer), tie breakers and filling vacancies.

    In practice we will have an election for President, then an election for EVP and then an election for VP Marketing. That means losing candidates will be able to fall down the ladder and run for the next open position. Another point to note is that officers aren’t at-large directors. That means if a current sitting officer loses all three elections, they are off the Board. Having Board member votes public will help with the transparency of this approach.

    This process has a number of positives and negatives. The biggest concern I expect to hear is that our members don’t directly choose the officers. I’m going to try and list all the positives and negatives of this approach.

    Many non-profits value continuity and are slower to change than a business. On the plus side this promotes that. On the negative side this promotes that. If we change too slowly, the members complain that we aren’t responsive. If we change too quickly, we make mistakes and fail at various things. We’ve been criticized for both of those lately, so I’m not entirely sure where to draw the line. My rough assumption to this point is that we’re going too slow on governance and too quickly on becoming “more than a Summit.”

    This approach creates competition in the officer elections. If you are an at-large director there is no consequence to losing an election. If you are an officer, the only way to stay on the Board is to win an officer election or an at-large election. If you are an officer and lose an election, you can always run for the next office down. This makes it very easy for multiple people to contest an election.

    There is value in a person moving through the officer positions up to the Presidency. Having the Board select the officers promotes this. The downside is that it takes a LOT of time to get to the Presidency. We’ve had good people struggle with burnout. We’ve had lots of discussion around this. The process as we’ve described it here makes it possible for someone to move quickly through the ranks but doesn’t prevent people from working their way up through each role.

    We talked long and hard about having the officers elected by the members. We had a self-imposed deadline to complete these changes prior to elections this summer. The other challenge was that our original goal was to make the bylaws reflect our actual process rather than create a new one. I believe we accomplished this goal. We ran out of time to consider this option in the detail it needs.

    Having member elections for officers needs a number of problems solved. We would need a way for candidates to fall through the election. This is what promotes competition. Without this, few people would risk an election and we’d be back to one candidate per slot. We need to do this without having multiple elections. We may be able to copy what other organizations are doing, but I was surprised at how little I could find on other organizations.
    We also need a way for people that lose an officer election to win an at-large election. Otherwise we’ll have very little competition for officers.

    This brings me to an area where I think we as a Board haven’t done a good job. We haven’t built a strong process to tell you who is doing a good job and who isn’t. This is a double-edged sword. I don’t want to highlight Board members that are failing. That’s not a good way to get people to volunteer and run for the Board. But I also need a way to let the members make an informed choice about who is doing a good job and would make a good officer. Encouraging Board members to blog, publishing minutes and making votes public helps in that regard but isn’t the final answer. I don’t know what the final answer is yet. I do know that the Board members themselves are uniquely positioned to know which other Board members are doing good work. They know who speaks up in meetings, who works to build consensus, who has good ideas and who works with the members.

    What I Could Do Better

    I’ve learned a lot writing this about how we communicated with our members. The next time we revise the bylaws I’d do a few things differently. The biggest change would be to provide better documentation. The March 2009 minutes provide a very detailed look into what changes we wanted to make to the bylaws. Looking back, I’m a little surprised at how closely they matched our final changes and covered the various arguments. If you just read those, you’d get 90% of what we eventually changed. Nearly everything else was just details around implementation. I’d also consider publishing a scope document defining exactly what we were doing and why. I think it really helped that we had a limited, defined goal in mind. I don’t think we did a good job communicating that goal outside the meeting minutes, though. That said, I wish I’d blogged more after the August and January meetings. I think it would have helped more people know that this change was coming and be ready for it.

    Conclusion

    These changes address two big concerns that the Board had. First, they prevent a single organization from dominating the Board. Second, they codify and clearly spell out how officers are elected. This is the process that was previously followed, but it was somewhat murky. These changes bring clarity and clearly explain the process the Board will follow.

    We’re going to listen to feedback until March 31st. At that time we’ll decide whether to approve these changes. I’m also assuming that we’ll start another round of changes in the next year or two. Are there other issues in the bylaws that we should tackle in the future?

