Search Results

Search found 15233 results on 610 pages for 'ssis design patterns'.


  • Consolidating coding styles: Funcs, private methods, single-method classes

    - by jdoig
    Hi all, we currently have three devs with somewhat conflicting styles and I'm looking for a way to bring peace to the kingdom... The coders: Foo 1 likes to use Funcs and Actions inside public methods. He uses Actions to alias lengthy method calls and Funcs to perform simple tasks that can be expressed in one or two lines and will be used frequently throughout the code. Pros: the main body of his code is succinct and very readable, often with only one or two public methods per class and rarely any private methods. Cons: the start of his methods contains blocks of lambda-rich code that other developers don't enjoy reading and, on occasion, higher-order functions that other devs REALLY don't like reading. Foo 2 likes to create a private method for (almost) everything the public method has to do. Pros: public methods remain small and readable (to all developers). Cons: private methods are numerous, with private methods that call into other private methods, which call into... etc., making the code hard to navigate. Foo 3 likes to create a public class with a single public method for every non-trivial task that needs performing, then dependency-inject them into other objects. Pros: easily testable, easy to understand (one object, one responsibility). Cons: the project gets littered with classes, and opening multiple class files to understand what the code does makes navigation awkward. It would be great to take the best of all these techniques... Foo 1 has really nice, readable (almost DSL-like) code, for the most part, except for all the Action and Func lambda shenanigans bulked together at the start of a method. Foo 3 has highly testable and extensible code that just feels a bit "belt-and-braces" for some solutions and has some code-navigation niggles (constantly hitting F12 in VS and opening five other .cs files to find out what a single method does). And Foo 2... well, I'm not sure I like anything about the one huge .cs file with 2 public methods and 12 private ones, except for the fact that it's easier for juniors to dig into. I admit I've grossly oversimplified these coding styles, but if anyone knows of any patterns, practices or diplomatic manoeuvres that can help unite our three developers (without just telling any of them to "stop it!") that would be great. From a feasibility standpoint: Foo 1's style meets the most resistance because some developers find lambdas and/or Funcs hard to read. Foo 2's style meets less resistance as it's just so easy to fall into. Foo 3's style requires the most forward thinking and is difficult to enforce when time is short. Any ideas on coding styles or conventions that can make this work? (A sketch contrasting the three styles follows below.)
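
    A minimal, hypothetical C# sketch of the same small task written in each of the three styles can make the trade-offs concrete in a review. All class and method names below are invented for illustration and are not taken from the poster's codebase:

        using System;
        using System.Collections.Generic;
        using System.Linq;

        public class LineItem
        {
            public decimal Price { get; set; }
            public int Quantity { get; set; }
        }

        // Foo 1: local Funcs keep the public method body short, at the cost of a lambda block up front.
        public class OrderTotalerWithFuncs
        {
            public decimal Total(IEnumerable<LineItem> items, decimal taxRate)
            {
                Func<LineItem, decimal> lineTotal = i => i.Price * i.Quantity;
                Func<decimal, decimal> withTax = t => t * (1 + taxRate);
                return withTax(items.Sum(lineTotal));
            }
        }

        // Foo 2: every step becomes a private method on the same class.
        public class OrderTotalerWithPrivateMethods
        {
            public decimal Total(IEnumerable<LineItem> items, decimal taxRate)
            {
                return ApplyTax(SumLines(items), taxRate);
            }

            private decimal SumLines(IEnumerable<LineItem> items) { return items.Sum(LineTotal); }
            private decimal LineTotal(LineItem item) { return item.Price * item.Quantity; }
            private decimal ApplyTax(decimal total, decimal taxRate) { return total * (1 + taxRate); }
        }

        // Foo 3: each responsibility is its own injectable, independently testable class.
        public class TaxCalculator
        {
            public decimal Apply(decimal total, decimal taxRate) { return total * (1 + taxRate); }
        }

        public class OrderTotalerWithInjection
        {
            private readonly TaxCalculator _tax;
            public OrderTotalerWithInjection(TaxCalculator tax) { _tax = tax; }

            public decimal Total(IEnumerable<LineItem> items, decimal taxRate)
            {
                return _tax.Apply(items.Sum(i => i.Price * i.Quantity), taxRate);
            }
        }

    One possible compromise a team could agree on is a size rule: inline Funcs for one-liners, a private method once a step deserves a name and a unit test, and a separate injected class only when the step has dependencies of its own.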

    Read the article

  • jQuery.extend() not giving deep copy of object formed by constructor

    - by two7s_clash
    I'm trying to use this to clone a complicated Object. The object in question has a property that is an array of other Objects, and each of these have properties of different types, mostly primitives, but a couple further Objects and Arrays. For example, an ellipsed version of what I am trying to clone: var asset = new Assets(); function Assets() { this.values = []; this.sectionObj = Section; this.names = getNames; this.titles = getTitles; this.properties = getProperties; ... this.add = addAsset; function AssetObj(assetValues) { this.name = ""; this.title = ""; this.interface = ""; ... this.protected = false; this.standaloneProtected = true; ... this.chaptersFree = []; this.chaptersUnavailable = []; ... this.mediaOptions = { videoWidth: "", videoHeight: "", downloadMedia: true, downloadMediaExt: "zip" ... } this.chaptersAvailable = []; if (typeof assetValues == "undefined") { return; } for (var name in assetValues) { if (typeof assetValues[name] == "undefined") { this[name] = ""; } else { this[name] = assetValues[name]; } } ... function Asset() { return new AssetObj(); } ... function getProperties() { var propertiesArray = new Array(); for (var property in this.values[0]) { propertiesArray.push(property); } return propertiesArray; } ... function addAsset(assetValues) { var newValues; newValues = new AssetObj(assetValues); this.values.push(newValues); } } When I do var copiedAssets = $.extend(true, {}, assets); copiedAssets.values == [], while assets.values == [Object { name="section_intro", more...}, Object { name="select_textbook", more...}, Object { name="quiz", more...}, 11 more...] When I do var copiedAssets = $.extend( {}, assets); all copiedAssets.values.[X].properties are just pointers to the value in assets. What I want is a true deep copy all the way down. What am I missing? Do I need to write a custom extend function? If so, any recommended patterns?

    Read the article

  • The best way to separate admin functionality from a public site?

    - by AndrewO
    I'm working on a site that's grown both in terms of user base and functionality to the point where it's becoming evident that some of the admin tasks should be separate from the public website, and I was wondering what the best way to do this would be. For example, the site has a large social component to it and a public sales interface. But at the same time, there are back-office tasks, bulk upload processing, dashboards (with long-running queries), and customer relations tools in the admin section that I would like not to be affected by spikes in public traffic (or to affect the public-facing response time). The site is running on a fairly standard Rails/MySQL/Linux stack, but I think this is more of an architecture problem than an implementation one: mainly, how does one keep the data and business logic in sync between these different applications? Some strategies I'm evaluating: 1) Create a slave database of the public-facing database on another machine. Extract all of the model and library code so that it can be shared between the applications. Create new controllers and views for the admin interfaces. I have limited experience with replication and am not even sure it's supposed to be used this way (most of the time I've seen it, it's been for scaling out the read capabilities of the same application rather than serving multiple different ones). I'm also worried about the potential for latency issues if the slave is not on the same network. 2) Create new, more task/department-specific applications and use message-oriented middleware to integrate them. I read Enterprise Integration Patterns a while back and it seemed to advocate this for distributed systems. (Alternatively, in some cases the basic Rails-style RESTful API functionality might suffice.) But I have nightmares about data synchronization issues and the massive re-architecting this would entail. 3) Some mixture of the two. For example, the only public information necessary for some of the back-office tasks is a read-only completion time or status. Would it make sense to have that on a completely separate system and send the data to the public site? Meanwhile, the user/group admin functionality would run on a separate system sharing the database? The downside is that this seems to keep many of the concerns I have with the first two, especially the re-architecting. I'm sure the answers are going to be highly dependent on a site's specific needs, but I'd love to hear success (or failure) stories.

    Read the article

  • I have made two template classes; could anyone tell me if they are useful?

    - by soul
    Recently i made two template classes,according to the book "Modern C++ design". I think these classes are useful but no one in my company agree with me,so could any one tell me if these things are useful? The first one is a parameter wrapper,it can package function paramters to a single dynamic object.It looks like TypeList in "Modern C++ design". You can use it like this: some place of your code: int i = 7; bool b = true; double d = 3.3; CParam *p1 = CreateParam(b,i); CParam *p2 = CreateParam(i,b,d); other place of your code: int i = 0; bool b = false; double d = 0.0; GetParam(p1,b,i); GetParam(p2,i,b,d); The second one is a generic callback wrapper,it has some special point compare to other wrappers: 1.This template class has a dynamic base class,which let you use a single type object represent all wrapper objects. 2.It can wrap the callback together with it's parameters,you can excute the callback sometimes later with the parameters. You can use it like this: somewhere of your code: void Test1(int i) { } void Test2(bool b,int i) { } CallbackFunc * p1 = CreateCallback(Test1,3); CallbackFunc * p2 = CreateCallback(Test2,false,99); otherwhere of your code: p1->Excute(); p2->Excute(); Here is a part of the codes: parameter wrapper: class NullType; struct CParam { virtual ~CParam(){} }; template<class T1,class T2> struct CParam2 : public CParam { CParam2(T1 &t1,T2 &t2):v1(t1),v2(t2){} CParam2(){} T1 v1; T2 v2; }; template<class T1> struct CParam2<T1,NullType> : public CParam { CParam2(T1 &t1):v1(t1){} CParam2(){} T1 v1; }; template<class T1> CParam * CreateParam(T1 t1) { return (new CParam2<T1,NullType>(t1)); } template<class T1,class T2> CParam * CreateParam(T1 t1,T2 t2) { return (new CParam2<T1,T2>(t1,t2)); } template<class T1,class T2,class T3> CParam * CreateParam(T1 t1,T2 t2,T3 t3) { CParam2<T2,T3> t(t2,t3); return new CParam2<T1,CParam2<T2,T3> >(t1,t); } template<class T1> void GetParam(CParam *p,T1 &t1) { PARAM1(T1)* p2 = dynamic_cast<CParam2<T1,NullType>*>(p); t1 = p2->v1; } callback wrapper: #define PARAM1(T1) CParam2<T1,NullType> #define PARAM2(T1,T2) CParam2<T1,T2> #define PARAM3(T1,T2,T3) CParam2<T1,CParam2<T2,T3> > class CallbackFunc { public: virtual ~CallbackFunc(){} virtual void Excute(void){} }; template<class T> class CallbackFunc2 : public CallbackFunc { public: CallbackFunc2():m_b(false){} CallbackFunc2(T &t):m_t(t),m_b(true){} T m_t; bool m_b; }; template<class M,class T> class StaticCallbackFunc : public CallbackFunc2<T> { public: StaticCallbackFunc(M m):m_m(m){} StaticCallbackFunc(M m,T t):CallbackFunc2<T>(t),m_m(m){} virtual void Excute(void){assert(CallbackFunc2<T>::m_b);CallMethod(CallbackFunc2<T>::m_t);} private: template<class T1> void CallMethod(PARAM1(T1) &t){m_m(t.v1);} template<class T1,class T2> void CallMethod(PARAM2(T1,T2) &t){m_m(t.v1,t.v2);} template<class T1,class T2,class T3> void CallMethod(PARAM3(T1,T2,T3) &t){m_m(t.v1,t.v2.v1,t.v2.v2);} private: M m_m; }; template<class M> class StaticCallbackFunc<M,void> : public CallbackFunc { public: StaticCallbackFunc(M method):m_m(method){} virtual void Excute(void){m_m();} private: M m_m; }; template<class C,class M,class T> class MemberCallbackFunc : public CallbackFunc2<T> { public: MemberCallbackFunc(C *pC,M m):m_pC(pC),m_m(m){} MemberCallbackFunc(C *pC,M m,T t):CallbackFunc2<T>(t),m_pC(pC),m_m(m){} virtual void Excute(void){assert(CallbackFunc2<T>::m_b);CallMethod(CallbackFunc2<T>::m_t);} template<class T1> void CallMethod(PARAM1(T1) &t){(m_pC->*m_m)(t.v1);} template<class T1,class T2> void 
CallMethod(PARAM2(T1,T2) &t){(m_pC->*m_m)(t.v1,t.v2);} template<class T1,class T2,class T3> void CallMethod(PARAM3(T1,T2,T3) &t){(m_pC->*m_m)(t.v1,t.v2.v1,t.v2.v2);} private: C *m_pC; M m_m; }; template<class T1> CallbackFunc *CreateCallback(CallbackFunc *p,T1 t1) { CParam2<T1,NullType> t(t1); return new StaticCallbackFunc<CallbackFunc *,CParam2<T1,NullType> >(p,t); } template<class C,class T1> CallbackFunc *CreateCallback(C *pC,void(C::*pF)(T1),T1 t1) { CParam2<T1,NullType>t(t1); return new MemberCallbackFunc<C,void(C::*)(T1),CParam2<T1,NullType> >(pC,pF,t); } template<class T1> CParam2<T1,NullType> CreateCallbackParam(T1 t1) { return CParam2<T1,NullType>(t1); } template<class T1> void ExcuteCallback(CallbackFunc *p,T1 t1) { CallbackFunc2<CParam2<T1,NullType> > *p2 = dynamic_cast<CallbackFunc2<CParam2<T1,NullType> > *>(p); p2->m_t.v1 = t1; p2->m_b = true; p->Excute(); }

    Read the article

  • Myself throwing NullReferenceException... needs help

    - by Amit Ranjan
    I know it might be a weird question (and a weird title too), but I need your help. I am a .NET dev, working on the platform for the last 1.5 years. I am a bit confused about the term we usually use, "a good programmer". I don't know what the qualities of a good programmer are. Is it the guy who writes bug-free code? Or who can develop applications entirely on his own? Or... lots of possible points. I don't know. But as far as I am concerned, I know I am not a good programmer; I am still in the learning phase and need to learn a lot in the coming days. So please help me with these two problems of mine. My first problem is proper error handling, which is one of the most debated aspects of programming. We all know we use try { } catch { } finally { } in our code to manage exceptions. But even if I use try { } catch (Exception ex) { throw ex; } finally { }, different people have different views. I still don't know the right way to handle errors. I can write code and use try-catch, but I still feel I lack something. When I look at the code generated by .NET framework tools, even they use throw ex or throw new Exception("this is my exception"). I am just wondering what the best way to achieve this is. They all mean the same thing, so why do we avoid one of them? If it has real drawbacks then it should be obsolete. Anyway, I still don't have an answer to "how do I handle errors efficiently?". I generally follow try { } catch (Exception ex) { throw ex; }, and I usually get stuck in debates with leads about why I follow this and not that. My second problem is converting entire code blocks into modules using design patterns or OOP concepts. How do you decide what architecture or pattern will be best for an upcoming application based on how it works, its flow, etc.? I need to know what you can see that I can't. I know I don't have that much experience, but I can say from my own experience that experience doesn't come from degrees, certificates or successes; instead it comes from the failures you faced and the situations you got stuck in. Please help me out.
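
    On the error-handling point specifically, a lot of those team debates come down to the difference between throw; and throw ex;. A small C# sketch (method names invented for illustration) of why the bare rethrow is usually preferred, and when wrapping the exception makes sense instead:

        using System;

        public static class ErrorHandlingSketch
        {
            public static void RethrowPreservingStackTrace()
            {
                try
                {
                    DoWork();
                }
                catch (Exception)
                {
                    // "throw;" rethrows the original exception with its stack trace intact.
                    // "throw ex;" would rethrow the same object but reset the stack trace to
                    // this line, hiding where the failure actually happened.
                    throw;
                }
            }

            public static void WrapWithContext()
            {
                try
                {
                    DoWork();
                }
                catch (Exception ex)
                {
                    // When you genuinely add information, wrap the original as InnerException
                    // instead of throwing a bare new Exception that loses the cause.
                    throw new InvalidOperationException("Import step failed", ex);
                }
            }

            private static void DoWork()
            {
                throw new ApplicationException("something went wrong");
            }
        }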

    Read the article

  • Precise explanation of JavaScript <-> DOM circular reference issue

    - by Joey Adams
    One of the touted advantages of jQuery.data versus raw expando properties (arbitrary attributes you can assign to DOM nodes) is that jQuery.data is "safe from circular references and therefore free from memory leaks". An article from Google titled "Optimizing JavaScript code" goes into more detail: The most common memory leaks for web applications involve circular references between the JavaScript script engine and the browsers' C++ objects' implementing the DOM (e.g. between the JavaScript script engine and Internet Explorer's COM infrastructure, or between the JavaScript engine and Firefox XPCOM infrastructure). It lists two examples of circular reference patterns: DOM element → event handler → closure scope → DOM DOM element → via expando → intermediary object → DOM element However, if a reference cycle between a DOM node and a JavaScript object produces a memory leak, doesn't this mean that any non-trivial event handler (e.g. onclick) will produce such a leak? I don't see how it's even possible for an event handler to avoid a reference cycle, because the way I see it: The DOM element references the event handler. The event handler references the DOM (either directly or indirectly). In any case, it's almost impossible to avoid referencing window in any interesting event handler, short of writing a setInterval loop that reads actions from a global queue. Can someone provide a precise explanation of the JavaScript ↔ DOM circular reference problem? Things I'd like clarified: What browsers are effected? A comment in the jQuery source specifically mentions IE6-7, but the Google article suggests Firefox is also affected. Are expando properties and event handlers somehow different concerning memory leaks? Or are both of these code snippets susceptible to the same kind of memory leak? // Create an expando that references to its own element. var elem = document.getElementById('foo'); elem.myself = elem; // Create an event handler that references its own element. var elem = document.getElementById('foo'); elem.onclick = function() { elem.style.display = 'none'; }; If a page leaks memory due to a circular reference, does the leak persist until the entire browser application is closed, or is the memory freed when the window/tab is closed?

    Read the article

  • match "//" comments with regex but not inside a quote

    - by Wireless102
    I need to match and replace some comments. for example: $test = "the url is http://www.google.com";// comment "<-- that quote needs to be matched I want to match the comments outside of the quotes, and replace any "'s in the comments with &quot;'s. I have tried a number of patterns and different ways of running them but with no luck. The regex will be run with javascript to match php "//" comments UPDATE: I took the regex from borkweb below and modified it. used a function from http://ejohn.org/blog/search-and-dont-replace/ and came up with this: <!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 4.01 Transitional//EN"> <html> <head> <title></title> <meta http-equiv="Content-Type" content="text/html; charset=UTF-8"> <script type="text/javascript"> function t_replace(data){ var q = {}, ret = ""; data.replace(/(?:((["'\/]*(("[^"]*")|('[^']*'))?[\s]*)?[\/\/|#][^"|^']*))/g, function(value){ q[key] = value; }); for ( var key in q ){ ret = q[key]; } var text = data.split(ret); var out = ret + text[1]; out = out.replace(/"/g,"&quot;"); out = out.replace(/'/g,"&apos;"); return text[0] + out; } </script> </head> <body> <script type="text/javascript"> document.write(t_replace("$test = \"the url is http://www.google.com\";// c'o\"mment \"\"\"<-- that quote needs to be matched")+"<br>"); document.write(t_replace("$test = 'the url is http://www.google.com';# c'o\"mment \"\"\"<-- that quote needs to be matched")); </script> </body> </html> it handles all the line comments outside of single or double quotes. Is there anyway I could optimize this function?

    Read the article

  • Why won't my AJAX controls work? (and ajax for .net 4 not working?)

    - by Nicklamort
    I'm totally new to ajax. I'm using VS2005. I just downloaded .NET framework 4 and so then I downloaded ajaxcontroltoolkit.binary.net4 via [http://ajaxcontroltoolkit.codeplex.com/releases/view/43475] (as opposed to ajaxcontroltoolkit.binary.net35 for .NET 3.5), but when I try to load the ajaxcontroltoolkit.dll into my toolbox (as said in the tutorials), I get the following error msg: "'C:......\ajaxcontroltoolkit.dll' is not a microsoft .NET module." First question: Why is this happening? So I tried downloading the "Recommended" ajaxcontroltoolkit.binary.net35, and it accepted the .dll file and loaded all my controls. So, I started a new website and tried to check out a combobox, and it displays, but IE is giving the follow error msg: 'Sys.Extended.UI.PositioningMode.BottomLeft' is null or not an object.' 2nd question: Why is this happening? LOL Thank you. <%@ Page Language="C#" AutoEventWireup="true" CodeFile="Default.aspx.cs" Inherits="_Default" %> <%@ Register Assembly="System.Web.Extensions, Version=3.5.0.0, Culture=neutral, PublicKeyToken=31bf3856ad364e35" Namespace="System.Web.UI" TagPrefix="asp" %> <%@ Register Assembly="AjaxControlToolkit" Namespace="AjaxControlToolkit" TagPrefix="ajx" %> <!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd"> <html xmlns="http://www.w3.org/1999/xhtml" > <head runat="server"> <title>Untitled Page</title> </head> <body> <form id="form1" runat="server"> <div> <asp:ScriptManager runat="server"> </asp:ScriptManager> <ajx:ComboBox ID="ComboBox1" runat="server"> </ajx:ComboBox> </div> </form> </body> </html> Here is my web.config: <?xml version="1.0"?> <configuration> <configSections> <sectionGroup name="system.web.extensions" type="System.Web.Configuration.SystemWebExtensionsSectionGroup, System.Web.Extensions, Version=3.5.0.0, Culture=neutral, PublicKeyToken=31bf3856ad364e35"> <sectionGroup name="scripting" type="System.Web.Configuration.ScriptingSectionGroup, System.Web.Extensions, Version=3.5.0.0, Culture=neutral, PublicKeyToken=31bf3856ad364e35"> <section name="scriptResourceHandler" type="System.Web.Configuration.ScriptingScriptResourceHandlerSection, System.Web.Extensions, Version=3.5.0.0, Culture=neutral, PublicKeyToken=31bf3856ad364e35" requirePermission="false" allowDefinition="MachineToApplication"/> <sectionGroup name="webServices" type="System.Web.Configuration.ScriptingWebServicesSectionGroup, System.Web.Extensions, Version=3.5.0.0, Culture=neutral, PublicKeyToken=31bf3856ad364e35"> <section name="jsonSerialization" type="System.Web.Configuration.ScriptingJsonSerializationSection, System.Web.Extensions, Version=3.5.0.0, Culture=neutral, PublicKeyToken=31bf3856ad364e35" requirePermission="false" allowDefinition="Everywhere"/> <section name="profileService" type="System.Web.Configuration.ScriptingProfileServiceSection, System.Web.Extensions, Version=3.5.0.0, Culture=neutral, PublicKeyToken=31bf3856ad364e35" requirePermission="false" allowDefinition="MachineToApplication"/> <section name="authenticationService" type="System.Web.Configuration.ScriptingAuthenticationServiceSection, System.Web.Extensions, Version=3.5.0.0, Culture=neutral, PublicKeyToken=31bf3856ad364e35" requirePermission="false" allowDefinition="MachineToApplication"/> </sectionGroup> </sectionGroup> </sectionGroup> </configSections> <appSettings/> <connectionStrings/> <system.web> <pages> <controls> <add tagPrefix="ajaxToolkit" namespace="AjaxControlToolkit" assembly="AjaxControlToolkit"/> <add tagPrefix="asp" 
namespace="System.Web.UI" assembly="System.Web.Extensions, Version=3.5.0.0, Culture=neutral, PublicKeyToken=31bf3856ad364e35"/> <add tagPrefix="asp" namespace="System.Web.UI.WebControls" assembly="System.Web.Extensions, Version=3.5.0.0, Culture=neutral, PublicKeyToken=31BF3856AD364E35"/> </controls> </pages> <compilation debug="true"> <assemblies> <add assembly="System.Design, Version=2.0.0.0, Culture=neutral, PublicKeyToken=B03F5F7F11D50A3A"/> <add assembly="System.Windows.Forms, Version=2.0.0.0, Culture=neutral, PublicKeyToken=B77A5C561934E089"/> <add assembly="System.Data.DataSetExtensions, Version=3.5.0.0, Culture=neutral, PublicKeyToken=B77A5C561934E089"/> <add assembly="System.Xml.Linq, Version=3.5.0.0, Culture=neutral, PublicKeyToken=B77A5C561934E089"/> <add assembly="System.Web.Extensions.Design, Version=3.5.0.0, Culture=neutral, PublicKeyToken=31BF3856AD364E35"/> <add assembly="System.Web.Extensions, Version=3.5.0.0, Culture=neutral, PublicKeyToken=31BF3856AD364E35"/> <add assembly="System.Core, Version=3.5.0.0, Culture=neutral, PublicKeyToken=B77A5C561934E089"/> <add assembly="System.Data.Linq, Version=3.5.0.0, Culture=neutral, PublicKeyToken=B77A5C561934E089"/> </assemblies> </compilation> <httpHandlers> <remove verb="*" path="*.asmx"/> <add verb="*" path="*.asmx" validate="false" type="System.Web.Script.Services.ScriptHandlerFactory, System.Web.Extensions, Version=3.5.0.0, Culture=neutral, PublicKeyToken=31BF3856AD364E35"/> <add verb="*" path="*_AppService.axd" validate="false" type="System.Web.Script.Services.ScriptHandlerFactory, System.Web.Extensions, Version=3.5.0.0, Culture=neutral, PublicKeyToken=31BF3856AD364E35"/> <add verb="GET,HEAD" path="ScriptResource.axd" validate="false" type="System.Web.Handlers.ScriptResourceHandler, System.Web.Extensions, Version=3.5.0.0, Culture=neutral, PublicKeyToken=31BF3856AD364E35"/> </httpHandlers> <httpModules> <add name="ScriptModule" type="System.Web.Handlers.ScriptModule, System.Web.Extensions, Version=3.5.0.0, Culture=neutral, PublicKeyToken=31BF3856AD364E35"/> </httpModules> <authentication mode="Windows"/> </system.web> <system.webServer> <validation validateIntegratedModeConfiguration="false"/> <modules> <remove name="ScriptModule"/> <add name="ScriptModule" preCondition="managedHandler" type="System.Web.Handlers.ScriptModule, System.Web.Extensions, Version=3.5.0.0, Culture=neutral, PublicKeyToken=31BF3856AD364E35"/> </modules> <handlers> <remove name="WebServiceHandlerFactory-Integrated"/> <remove name="ScriptHandlerFactory"/> <remove name="ScriptHandlerFactoryAppServices"/> <remove name="ScriptResource"/> <add name="ScriptHandlerFactory" verb="*" path="*.asmx" preCondition="integratedMode" type="System.Web.Script.Services.ScriptHandlerFactory, System.Web.Extensions, Version=3.5.0.0, Culture=neutral, PublicKeyToken=31BF3856AD364E35"/> <add name="ScriptHandlerFactoryAppServices" verb="*" path="*_AppService.axd" preCondition="integratedMode" type="System.Web.Script.Services.ScriptHandlerFactory, System.Web.Extensions, Version=3.5.0.0, Culture=neutral, PublicKeyToken=31BF3856AD364E35"/> <add name="ScriptResource" verb="GET,HEAD" path="ScriptResource.axd" preCondition="integratedMode" type="System.Web.Handlers.ScriptResourceHandler, System.Web.Extensions, Version=3.5.0.0, Culture=neutral, PublicKeyToken=31BF3856AD364E35"/> </handlers> </system.webServer> </configuration>

    Read the article

  • What makes great software?

    - by VirtuosiMedia
    From the perspective of an end user, what makes a software great rather than just good or functional? What are some fundamental principles that can shift the way a software is used and perceived? What are some of the little finishing touches that help put an application over the top? I'm in the later stages of developing a web app and I'm looking for ideas or concepts that I may have missed. If you have specific examples of software or apps that you absolutely love, please share the reasons or features that make it special. Keep in mind that I'm looking for examples that directly affect the end user, but not necessarily just UI suggestions. Here are some of the principles and little touches I'm trying to use: Keep the UI as simple as possible. Remove absolutely everything that isn't necessary. Use progressive disclosure when more information can be needed sometimes but isn't needed all the time. Provide inline help and useful error messages. Verbs on buttons wherever possible. Make anything that's clickable obvious. Fast, responsive UI. Accessibility (this is a work in progress). Reusable UI patterns. Once a user learns a skill, they will be able to use it in multiple places. Intelligent default settings. Auto-focusing forms when filling out the form is the primary action to be taken on the page. Clear metaphors (like tabs) and headings indicating location within the app. Automating repetitive tasks (with the ability to disable the automation). Use standardized or accepted metaphors for icons (like an "x" for delete). Larger text sizes for improved readability. High contrast so that each section is distinct. Making sure that it's obvious on every page what the user is supposed to do by establishing a clear information hierarchy and drawing the eye to the call to action. Most deletions can be undone. Discoverability - Make it easy to learn how to do new tasks. Group similar elements together.

    Read the article

  • Problem updating through LINQtoSQL in MVC application using StructureMap, Repository Pattern and UoW

    - by matt
    I have an ASP MVC application using LINQ to SQL for data access. I am trying to use the Repository and Unit of Work patterns, with a service layer consuming the repositories and unit of work. I am experiencing a problem when attempting to perform updates on a particular repository. My application architecture is as follows: My service class: public class MyService { private IRepositoryA _RepositoryA; private IRepositoryB _RepositoryB; private IUnitOfWork _unitOfWork; public MyService(IRepositoryA ARepositoryA, IRepositoryB ARepositoryB, IUnitOfWork AUnitOfWork) { _unitOfWork = AUnitOfWork; _RepositoryA = ARepositoryA; _RepositoryB = ARepositoryB; } public PerformActionOnObject(Guid AID) { MyObject obj = _RepositoryA.GetRecords() .WithID(AID); obj.SomeProperty = "Changed to new value"; _RepositoryA.UpdateRecord(obj); _unitOfWork.Save(); } } Repository interface: public interface IRepositoryA { IQueryable<MyObject> GetRecords(); UpdateRecord(MyObject obj); } Repository LINQtoSQL implementation: public class LINQtoSQLRepositoryA : IRepositoryA { private MyDataContext _DBContext; public LINQtoSQLRepositoryA(IUnitOfWork AUnitOfWork) { _DBConext = AUnitOfWork as MyDataContext; } public IQueryable<MyObject> GetRecords() { return from records in _DBContext.MyTable select new MyObject { ID = records.ID, SomeProperty = records.SomeProperty } } public bool UpdateRecord(MyObject AObj) { MyTableRecord record = (from u in _DB.MyTable where u.ID == AObj.ID select u).SingleOrDefault(); if (record == null) { return false; } record.SomeProperty = AObj.SomePropery; return true; } } Unit of work interface: public interface IUnitOfWork { void Save(); } Unit of work implemented in data context extension. public partial class MyDataContext : DataContext, IUnitOfWork { public void Save() { SubmitChanges(); } } StructureMap registry: public class DataServiceRegistry : Registry { public DataServiceRegistry() { // Unit of work For<IUnitOfWork>() .HttpContextScoped() .TheDefault.Is.ConstructedBy(() => new MyDataContext()); // RepositoryA For<IRepositoryA>() .Singleton() .Use<LINQtoSQLRepositoryA>(); // RepositoryB For<IRepositoryB>() .Singleton() .Use<LINQtoSQLRepositoryB>(); } } My problem is that when I call PerformActionOnObject on my service object, the update never fires any SQL. I think this is because the datacontext in the UnitofWork object is different to the one in RepositoryA where the data is changed. So when the service calls Save() on it's IUnitOfWork, the underlying datacontext does not have any updated data so no update SQL is fired. Is there something I've done wrong in the StrutureMap registry setup? Or is there a more fundamental problem with the design? Many thanks.
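
    One likely culprit in the registry above is a lifecycle mismatch: the repositories are registered as Singleton while the IUnitOfWork (the DataContext) is HttpContext scoped, so after the first request the repositories keep working against a different DataContext than the one the service calls Save() on. Below is a sketch of the registry with the repository lifetimes aligned to the unit of work, reusing the same StructureMap calls shown in the question; treat it as a direction to verify rather than a confirmed fix:

        public class DataServiceRegistry : Registry
        {
            public DataServiceRegistry()
            {
                // Unit of work: one DataContext per HTTP request (unchanged from the question).
                For<IUnitOfWork>()
                    .HttpContextScoped()
                    .TheDefault.Is.ConstructedBy(() => new MyDataContext());

                // Repositories scoped to the request as well, instead of Singleton, so the
                // repository and the service resolve the very same DataContext/unit of work.
                For<IRepositoryA>()
                    .HttpContextScoped()
                    .Use<LINQtoSQLRepositoryA>();

                For<IRepositoryB>()
                    .HttpContextScoped()
                    .Use<LINQtoSQLRepositoryB>();
            }
        }

    With both registrations request scoped, the LINQtoSQLRepositoryA constructor receives the same MyDataContext instance that the service's Save() flushes, so the pending update becomes part of the submitted change set.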

    Read the article

  • How do I use Ruby metaprogramming to refactor this common code?

    - by James Wenton
    I inherited a project with a lot of badly-written Rake tasks that I need to clean up a bit. Because the Rakefiles are enormous and often prone to bizarre nonsensical dependencies, I'm simplifying and isolating things a bit by refactoring everything to classes. Specifically, that pattern is the following: namespace :foobar do desc "Frozz the foobar." task :frozzify do unless Rake.application.lookup('_frozzify') require 'tasks/foobar' Foobar.new.frozzify end Rake.application['_frozzify'].invoke end # Above pattern repeats many times. end # Several namespaces, each with tasks that follow this pattern. In tasks/foobar.rb, I have something that looks like this: class Foobar def frozzify() # The real work happens here. end # ... Other tasks also in the :foobar namespace. end For me, this is great, because it allows me to separate the task dependencies from each other and to move them to another location entirely, and I've been able to drastically simplify things and isolate the dependencies. The Rakefile doesn't hit a require until you actually try to run a task. Previously this was causing serious issues because you couldn't even list the tasks without it blowing up. My problem is that I'm repeating this idiom very frequently. Notice the following patterns: For every namespace :xyz_abc, there is a corresponding class in tasks/... in the file tasks/[namespace].rb, with a class name that looks like XyzAbc. For every task in a particular namespace, there is an identically named method in the associated namespace class. For example, if namespace :foo_bar has a task :apples, you would expect to see def apples() ... inside the FooBar class, which itself is in tasks/foo_bar.rb. Every task :t defines a "meta-task" _t (that is, the task name prefixed with an underscore) which is used to do the actual work. I still want to be able to specify a desc-description for the tasks I define, and that will be different for each task. And, of course, I have a small number of tasks that don't follow the above pattern at all, so I'll be specifying those manually in my Rakefile. I'm sure that this can be refactored in some way so that I don't have to keep repeating the same idiom over and over, but I lack the experience to see how it could be done. Can someone give me an assist?

    Read the article

  • Poor performance / speed of regex with lookahead

    - by Hugo Zaragoza
    I have been observing extremely slow execution times with expressions with several lookaheads. I suppose that this is due to underlying data structures, but it seems pretty extreme and I wonder if I do something wrong or if there are known work-arounds. The problem is determining if a set of words are present in a string, in any order. For example we want to find out if two terms "term1" AND "term2" are somewhere in a string. I do this with the expresion: (?=.*\bterm1\b)(?=.*\bterm2\b) But what I observe is that this is an order of magnitude slower than checking first just \bterm1\b and just then \bterm2\b This seems to indicate that I should use an array of patterns instead of a single pattern with lookaheads... is this right? it seems wrong... Here is an example test code and resulting times: public static void speedLookAhead() { Matcher m, m1, m2; boolean find; int its = 1000000; // create long non-matching string char[] str = new char[2000]; for (int i = 0; i < str.length; i++) { str[i] = 'x'; } String test = str.toString(); // First method: use one expression with lookaheads m = Pattern.compile("(?=.*\\bterm1\\b)(?=.*\\bterm2\\b)").matcher(test); long time = System.currentTimeMillis(); ; for (int i = 0; i < its; i++) { m.reset(test); find = m.find(); } time = System.currentTimeMillis() - time; System.out.println(time); // Second method: use two expressions and AND the results m1 = Pattern.compile("\\bterm1\\b").matcher(test); m2 = Pattern.compile("\\bterm2\\b").matcher(test); time = System.currentTimeMillis(); ; for (int i = 0; i < its; i++) { m1.reset(test); m2.reset(test); find = m1.find() && m2.find(); } time = System.currentTimeMillis() - time; System.out.println(time); } This outputs in my computer: 1754 150

    Read the article

  • How should I organize my C# classes? [closed]

    - by oscar.fimbres
    I'm creating an email generator system. I'm creating some classes and trying to do things right. So far I have created 5 classes; look at the class diagram. I'm going to explain each one. Person: not a big deal, it just has two constructors, Person(fname, lname1, lname2) and Person(token, fname, lname1, lname2). Note that the Email property stays without a value. StringGenerator: this is a static class with a single public function, Generate. The function receives a Person and returns a list of patterns for the email. MySql: it contains everything necessary to connect to a database. Database: this class inherits from the MySql class and has the functions specific to this database. It gets all the records from a table (function GetPeople) and returns a List; each person in the list contains all data except Email. It can also add records (a List, but each entry must contain an available email). An email is available when it doesn't belong to another person; for that reason I have a method named ExistsEmail. Container: this is the class that is causing me problems. It's like a temporary container. It is supposed to hold a people list from GetPeople (in the Database class), and for each person it adds it must generate a list of possible names (StringGenerator.Generate), then select one from the list and check whether it already exists in the database or in the same container. As I said, this is temporary; it may happen that none of the possible emails is available, so the user can modify one or enter a custom available email and update the list in this container. When all the people's emails are available, it sends the list to be added to the database; it must have a Flush method to insert all the people into the database. I'm trying to design the classes correctly. I need a little help to improve or edit the classes, because I want to separate the logic from the visuals, and to learn from you. I hope you've been able to understand me. If you have any question or doubt, please let me know. I attached the solution here to make it easier to understand: http://www.megaupload.com/?d=D94FH8GZ
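
    One structural suggestion that is easy to sketch: let Container depend on an interface rather than on the concrete Database class (and prefer composing a MySql connection over inheriting from it), so the availability logic can be unit tested without a live database. A rough C# sketch; the interface and most member names are invented, only Person, GetPeople, ExistsEmail and Flush follow the description above:

        using System.Collections.Generic;

        public class Person
        {
            public string FirstName { get; set; }
            public string Email { get; set; }
        }

        // Abstraction the Container works against; Database would implement it using MySql.
        public interface IPersonRepository
        {
            List<Person> GetPeople();
            bool ExistsEmail(string email);
            void AddPeople(IEnumerable<Person> people);
        }

        public class Container
        {
            private readonly IPersonRepository _repository;
            private readonly List<Person> _pending = new List<Person>();

            public Container(IPersonRepository repository)
            {
                _repository = repository;
            }

            // An email is available when neither the database nor the pending list already uses it.
            public bool IsEmailAvailable(string email)
            {
                return !_repository.ExistsEmail(email)
                    && !_pending.Exists(p => p.Email == email);
            }

            public void Add(Person person)
            {
                _pending.Add(person);
            }

            // Insert every pending person (all of whom now have available emails) in one go.
            public void Flush()
            {
                _repository.AddPeople(_pending);
                _pending.Clear();
            }
        }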

    Read the article

  • New to MVC with ActionScript 3.0. Appreciate your help.

    - by Combustion007
    Hello Everyone, I am new to design patterns and am trying to learn the MVC implementation. I have written four classes: DataModel DataView DataController Main Main serves as the application facade. The main FLA is called HelloWorld. I have a symbol called "Box" in HelloWorld. I just would like to add an instance of the "Box" on the stage. Eventually I would like to place a button and when the button is clicked, the Box instance color will be changed. I would appreciate any help, please help me figure out what am I doing wrong. Here are my Classes: DATAMODEL CLASS: package { import flash.events.*; public class DataModel extends EventDispatcher { public static const UPDATE:String = "modelUpdate"; private var _color:Number = (Math.round(Math.random()* 0xffffff)); public function DataModel() { trace("DATA MODEL INIT"); } public function get color():Number { return _color; } public function set color(p:Number):void { _color = p; notifyObserver(); } public function notifyObserver():void { dispatchEvent(new Event(DataModel.UPDATE)); } } } //DATACONTROLLER CLASS: package { import flash.events.; import flash.display.; import flash.errors.*; public class DataController { private var _model:DataModel; public function DataController(m:DataModel) { trace("DATACONTROLLER INIT"); _model = m; } } } DATAVIEW CLASS: package { import flash.events.; import flash.display.; import flash.errors.*; public class DataView extends Sprite { private var _model:DataModel; private var _controller:DataController; private var b:Box; public function DataView(m:DataModel, c:DataController) { _model = m; _controller = c; b = new Box(); b.x = b.y = 100; addChild(b); } } } And Finally THE FACADE: package { import flash.display.; import flash.events.; import flash.text.; import flash.errors.; public class Main extends Sprite { private var _model:DataModel; private var _controller:DataController; private var _view:DataView; public function Main(m:DataModel, c:DataController, v:DataView) { _model = m; _controller = c; _view = v; addChild(v); } } } When I test the movie: I get this error: ArgumentError: Error #1063: Argument count mismatch on Main(). Expected 3, got 0. Thanks alot.

    Read the article

  • Preg_replace regex, newlines, connection resets

    - by bob_the_destroyer
    I have mixed html, custom code, and regular text I need to examine and change frequently on several, long wiki pages. I'm working with a proprietary wiki-like application and have no control over how the application functions or validates user input. The layout of pages that users add must follow a very specific standard layout and always include very specific text in only certain places - a standard which frequently changes. If users add pages that are so far out of the standard, they will be deleted. The fact that all this is obviously a complete waste of time when alternative platforms to do exactly what's needed here exist is already understood. I've built a PHP based API to automate this post-validation and frequent restandardization process for me. I've been able set up regex patterns to handle all this mixed text, and they all work fine for handling single lines. The problem I have is this: Poorly formed regex against long text with line breaks can lead to unexpected results, such as connection resets. I have no access to server-side logs to troubleshoot. How do I overcome this? This is just one example of what I currently have: {column} and {section} tags I'm searching for below can have any number of attributes, and wrap any text. {section} may or may not exist and may or may not be one or more lines under {column}, but it has to be wrapped inside {column}. {column} itself may or may not exist, and if it doesn't, I don't care. I want to grab the inner section contents and wrap it in an html div tag. I can't recall the exact pattern I'm using offhand at the moment, but it's close enough... $pattern = "/\{column:id=summary([|]?([a-zA-Z0-9-_ ]+[:][a-zA-Z0-9-_ ]+[ ]?))\}(.*)({section([|]([a-zA-Z0-9-_ ]+[:][a-zA-Z0-9-_ ]+[ ]?))\}(.*)\{section\}(.*))?{column\}/s"; $replacement = "{html}<div id='summary'$7</div{html}"; $text = preg_replace($pattern, $replacement, $subject); Handling the {column} and {section} attributes and passing only valid HTML parameters to the new html div or a subtext of it is itself a challenge, but my main focus above right now is getting that (.*) value within {section} above without causing a connection reset. Any pointers?

    Read the article

  • How to get latest files in C#

    - by Newbie
    I have some files with same name but with different dates. Basically we are finding the files with most recent dates The file patterns are <FileNames><YYYYMMDD><FileExtension> e.g. test_20100506.xls indicates <FileNames> = test_ <YYYYMMDD> = 20100506 <FileExtension> = .xls Now in the source folder, the files are standardization_20100503.xls, standardization_20100504.xls, standardization_20100305.xls, replacement_20100505.xls As can be seen that, standardization_.xls are 3 in numbers but replacement_.xls is only 1. The output will be a list of file names whose content will be standardization_20100504.xls and replacement_20100505.xls Because among all the standardization_.xls is the most recent one and replacement_.xls is also the same. I have tried with my own logic but somehow failed. My idea is as under private static void GetLatestFiles(ref List<FileInfo> validFiles) { List<FileInfo> validFilesTemp = new List<FileInfo>(); for (int i = 0; i < validFiles.Count; i++) { for (int j = i+1; j < validFiles.Count; j++) { string getFileTextX = ExtractText(validFiles[i].Name); string getFileTextY = ExtractText(validFiles[j].Name); if (getFileTextX == getFileTextY) { int getFileDatesX = Convert.ToInt32(ExtractNumbers(validFiles[i].Name)); int getFileDatesY = Convert.ToInt32(ExtractNumbers(validFiles[j].Name)); if (getFileDatesX > getFileDatesY) { validFilesTemp.Add(validFiles[i]); } else { validFilesTemp.Add(validFiles[j]); } } } } validFiles.Clear(); validFiles = validFilesTemp; } The ExtractNumbers is: public static string ExtractNumbers(string expr) { return string.Join(null, System.Text.RegularExpressions.Regex.Split(expr, "[^\\d]")); } and the ExtractText is public static string ExtractText(string expr) { return string.Join(null, System.Text.RegularExpressions.Regex.Split(expr, "[\\d]")); } I am using c#3.0 and framework 3.5 Help needed .. it's very urgent Thanks .
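
    Since this targets C# 3.0, a LINQ GroupBy can replace the nested loops entirely: parse each file name into its text prefix and date, group by the prefix (plus extension), and keep the newest file in each group. A sketch under the FileNames + YYYYMMDD + extension assumption stated in the question; the class, method, and group names below are invented for illustration:

        using System.Collections.Generic;
        using System.IO;
        using System.Linq;
        using System.Text.RegularExpressions;

        public static class LatestFileFinder
        {
            // Matches e.g. "standardization_20100504.xls": a text prefix, an 8-digit date, an extension.
            private static readonly Regex FilePattern =
                new Regex(@"^(?<name>.+?)(?<date>\d{8})(?<ext>\.\w+)$");

            public static List<FileInfo> GetLatestFiles(IEnumerable<FileInfo> files)
            {
                return files
                    .Select(f => new { File = f, Match = FilePattern.Match(f.Name) })
                    .Where(x => x.Match.Success)
                    .GroupBy(x => x.Match.Groups["name"].Value + x.Match.Groups["ext"].Value)
                    .Select(g => g.OrderByDescending(x => int.Parse(x.Match.Groups["date"].Value))
                                  .First().File)
                    .ToList();
            }
        }

    The returned list can then be assigned back to validFiles, which avoids the ref parameter and the pairwise comparison loops in the original GetLatestFiles.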

    Read the article

  • Intelligent "Subtraction" of one text logfile from another

    - by Vi
    Example: Application generates large text log file A with many different messages. It generates similarly large log file B when does not function correctly. I want to see what messages in file B are essentially new, i.e. to filter-out everything from A. Trivial prototype is: Sort | uniq both files Join files sort | uniq -c grep -v "^2" This produces symmetric difference and inconvenient. How to do it better? (including non-symmetric difference and preserving of messages order in B) Program should first analyse A and learn which messages are common, then analyse B showing with messages needs attention. Ideally it should automatically disregard things like timestamps, line numbers or other volatile things. Example. A: 0:00:00.234 Received buffer 0x324234 0:00:00.237 Processeed buffer 0x324234 0:00:00.238 Send buffer 0x324255 0:00:03.334 Received buffer 0x324255 0:00:03.337 Processeed buffer 0x324255 0:00:03.339 Send buffer 0x324255 0:00:05.171 Received buffer 0x32421A 0:00:05.173 Processeed buffer 0x32421A 0:00:05.178 Send buffer 0x32421A B: 0:00:00.134 Received buffer 0x324111 0:00:00.137 Processeed buffer 0x324111 0:00:00.138 Send buffer 0x324111 0:00:03.334 Received buffer 0x324222 0:00:03.337 Processeed buffer 0x324222 0:00:03.338 Error processing buffer 0x324222 0:00:03.339 Send buffer 0x3242222 0:00:05.271 Received buffer 0x3242FA 0:00:05.273 Processeed buffer 0x3242FA 0:00:05.278 Send buffer 0x3242FA 0:00:07.280 Send buffer 0x3242FA failed Result: 0:00:03.338 Error processing buffer 0x324222 0:00:07.280 Send buffer 0x3242FA failed One of ways of solving it can be something like that: Split each line to logical units: 0:00:00.134 Received buffer 0x324111,0:00:00.134,Received,buffer,0x324111,324111,Received buffer, \d:\d\d:\d\d\.\d\d\d, \d+:\d+:\d+.\d+, 0x[0-9A-F]{6}, ... It should find individual words, simple patterns in numbers, common layouts (e.g. "some date than text than number than text than end_of_line"), also handle combinations of above. As it is not easy task, user assistance (adding regexes with explicit "disregard that","make the main factor","don't split to parts","consider as date/number","take care of order/quantity of such messages" rules) should be supported (but not required) for it. Find recurring units and "categorize" lines, filter out too volatile things like timestamps, addresses or line numbers. Analyse the second file, find things that has new logical units (one-time or recurring), or anything that will "amaze" the system which has got used to the first file. Example of doing some bit of this manually: $ cat A | head -n 1 0:00:00.234 Received buffer 0x324234 $ cat A | egrep -v "Received buffer" | head -n 1 0:00:00.237 Processeed buffer 0x324234 $ cat A | egrep -v "Received buffer|Processeed buffer" | head -n 1 0:00:00.238 Send buffer 0x324255 $ cat A | egrep -v "Received buffer|Processeed buffer|Send buffer" | head -n 1 $ cat B | egrep -v "Received buffer|Processeed buffer|Send buffer" 0:00:03.338 Error processing buffer 0x324222 0:00:07.280 Send buffer 0x3242FA failed This is a boring thing (there are a lot of message types); also I can accidentally include some too broad pattern. Also it can't handle complicated things like interrelation between messages. I know that it is AI-related. May be there are already developed tools?

    Read the article

  • unattended-upgrades does not reboot

    - by Cheiron
    I am running Debian 7 stable with unattended-upgrades (every morning at 6 AM) to make sure I am always fully updated. I have the following config: $ cat /etc/apt/apt.conf.d/50unattended-upgrades // Automatically upgrade packages from these origin patterns Unattended-Upgrade::Origins-Pattern { // Archive or Suite based matching: // Note that this will silently match a different release after // migration to the specified archive (e.g. testing becomes the // new stable). "o=Debian,a=stable"; "o=Debian,a=stable-updates"; // "o=Debian,a=proposed-updates"; "origin=Debian,archive=stable,label=Debian-Security"; }; // List of packages to not update Unattended-Upgrade::Package-Blacklist { // "vim"; // "libc6"; // "libc6-dev"; // "libc6-i686"; }; // This option allows you to control if on a unclean dpkg exit // unattended-upgrades will automatically run // dpkg --force-confold --configure -a // The default is true, to ensure updates keep getting installed //Unattended-Upgrade::AutoFixInterruptedDpkg "false"; // Split the upgrade into the smallest possible chunks so that // they can be interrupted with SIGUSR1. This makes the upgrade // a bit slower but it has the benefit that shutdown while a upgrade // is running is possible (with a small delay) //Unattended-Upgrade::MinimalSteps "true"; // Install all unattended-upgrades when the machine is shuting down // instead of doing it in the background while the machine is running // This will (obviously) make shutdown slower //Unattended-Upgrade::InstallOnShutdown "true"; // Send email to this address for problems or packages upgrades // If empty or unset then no email is sent, make sure that you // have a working mail setup on your system. A package that provides // 'mailx' must be installed. E.g. "[email protected]" Unattended-Upgrade::Mail "root"; // Set this value to "true" to get emails only on errors. Default // is to always send a mail if Unattended-Upgrade::Mail is set Unattended-Upgrade::MailOnlyOnError "true"; // Do automatic removal of new unused dependencies after the upgrade // (equivalent to apt-get autoremove) //Unattended-Upgrade::Remove-Unused-Dependencies "false"; // Automatically reboot *WITHOUT CONFIRMATION* if a // the file /var/run/reboot-required is found after the upgrade Unattended-Upgrade::Automatic-Reboot "true"; // Use apt bandwidth limit feature, this example limits the download // speed to 70kb/sec //Acquire::http::Dl-Limit "70"; As you can see Automatic-Reboot is true and thus the server should automaticly reboot. Last time I checked the server was online for over 100 days, which means that the update from Debian 7.1 to Debian 7.2 has happened while the server was up (and indeed, all updates were installed), but this involves kernel updates, which means that the server should reboot. It did not. The server was running very slow, so I rebooted which fixed that. I did some research and found out that unattended-upgrades responds to the reboot-required file in /var/run/. I touched this file and waited one week, the file still exists and the server did not reboot. So I think that unattended-uppgrades ignores the auto-reboot part. So, am I doing somthing wrong here? Why did the server not restart? The upgrade part works perfect by the way, its just the reboot part that does not seem to work as it should.

    Read the article

  • A free standing ASP.NET Pager Web Control

    - by Rick Strahl
    Paging in ASP.NET has been relatively easy with stock controls supporting basic paging functionality. However, recently I built an MVC application and one of the things I ran into was that I HAD TO build manual paging support into a few of my pages. Dealing with list controls and rendering markup is easy enough, but doing paging is a little more involved. I ended up with a small but flexible component that can be dropped anywhere. As it turns out the task of creating a semi-generic Pager control for MVC was fairly easily. Now I’m back to working in Web Forms and thought to myself that the way I created the pager in MVC actually would also work in ASP.NET – in fact quite a bit easier since the whole thing can be conveniently wrapped up into an easily reusable control. A standalone pager would provider easier reuse in various pages and a more consistent pager display regardless of what kind of 'control’ the pager is associated with. Why a Pager Control? At first blush it might sound silly to create a new pager control – after all Web Forms has pretty decent paging support, doesn’t it? Well, sort of. Yes the GridView control has automatic paging built in and the ListView control has the related DataPager control. The built in ASP.NET paging has several issues though: Postback and JavaScript requirements If you look at paging links in ASP.NET they are always postback links with javascript:__doPostback() calls that go back to the server. While that works fine and actually has some benefit like the fact that paging saves changes to the page and post them back, it’s not very SEO friendly. Basically if you use javascript based navigation nosearch engine will follow the paging links which effectively cuts off list content on the first page. The DataPager control does support GET based links via the QueryStringParameter property, but the control is effectively tied to the ListView control (which is the only control that implements IPageableItemContainer). DataSource Controls required for Efficient Data Paging Retrieval The only way you can get paging to work efficiently where only the few records you display on the page are queried for and retrieved from the database you have to use a DataSource control - only the Linq and Entity DataSource controls  support this natively. While you can retrieve this data yourself manually, there’s no way to just assign the page number and render the pager based on this custom subset. Other than that default paging requires a full resultset for ASP.NET to filter the data and display only a subset which can be very resource intensive and wasteful if you’re dealing with largish resultsets (although I’m a firm believer in returning actually usable sets :-}). If you use your own business layer that doesn’t fit an ObjectDataSource you’re SOL. That’s a real shame too because with LINQ based querying it’s real easy to retrieve a subset of data that is just the data you want to display but the native Pager functionality doesn’t support just setting properties to display just the subset AFAIK. DataPager is not Free Standing The DataPager control is the closest thing to a decent Pager implementation that ASP.NET has, but alas it’s not a free standing component – it works off a related control and the only one that it effectively supports from the stock ASP.NET controls is the ListView control. This means you can’t use the same data pager formatting for a grid and a list view or vice versa and you’re always tied to the control. 
Paging Events In order to handle paging you have to deal with paging events. The events fire at specific time instances in the page pipeline and because of this you often have to handle data binding in a way to work around the paging events or else end up double binding your data sources based on paging. Yuk. Styling The GridView pager is a royal pain to beat into submission for styled rendering. The DataPager control has many more options and template layout and it renders somewhat cleaner, but it too is not exactly easy to get a decent display for. Not a Generic Solution The problem with the ASP.NET controls too is that it’s not generic. GridView, DataGrid use their own internal paging, ListView can use a DataPager and if you want to manually create data layout – well you’re on your own. IOW, depending on what you use you likely have very different looking Paging experiences. So, I figured I’ve struggled with this once too many and finally sat down and built a Pager control. The Pager Control My goal was to create a totally free standing control that has no dependencies on other controls and certainly no requirements for using DataSource controls. The idea is that you should be able to use this pager control without any sort of data requirements at all – you should just be able to set properties and be able to display a pager. The Pager control I ended up with has the following features: Completely free standing Pager control – no control or data dependencies Complete manual control – Pager can render without any data dependency Easy to use: Only need to set PageSize, ActivePage and TotalItems Supports optional filtering of IQueryable for efficient queries and Pager rendering Supports optional full set filtering of IEnumerable<T> and DataTable Page links are plain HTTP GET href Links Control automatically picks up Page links on the URL and assigns them (automatic page detection no page index changing events to hookup) Full CSS Styling support On the downside there’s no templating support for the control so the layout of the pager is relatively fixed. All elements however are stylable and there are options to control the text, and layout options such as whether to display first and last pages and the previous/next buttons and so on. To give you an idea what the pager looks like, here are two differently styled examples (all via CSS):   The markup for these two pagers looks like this: <ww:Pager runat="server" id="ItemPager" PageSize="5" PageLinkCssClass="gridpagerbutton" SelectedPageCssClass="gridpagerbutton-selected" PagesTextCssClass="gridpagertext" CssClass="gridpager" RenderContainerDiv="true" ContainerDivCssClass="gridpagercontainer" MaxPagesToDisplay="6" PagesText="Item Pages:" NextText="next" PreviousText="previous" /> <ww:Pager runat="server" id="ItemPager2" PageSize="5" RenderContainerDiv="true" MaxPagesToDisplay="6" /> The latter example uses default style settings so it there’s not much to set. The first example on the other hand explicitly assigns custom styles and overrides a few of the formatting options. Styling The styling is based on a number of CSS classes of which the the main pager, pagerbutton and pagerbutton-selected classes are the important ones. Other styles like pagerbutton-next/prev/first/last are based on the pagerbutton style. 
The default styling shown for the red outlined pager looks like this: .pagercontainer { margin: 20px 0; background: whitesmoke; padding: 5px; } .pager { float: right; font-size: 10pt; text-align: left; } .pagerbutton,.pagerbutton-selected,.pagertext { display: block; float: left; text-align: center; border: solid 2px maroon; min-width: 18px; margin-left: 3px; text-decoration: none; padding: 4px; } .pagerbutton-selected { font-size: 130%; font-weight: bold; color: maroon; border-width: 0px; background: khaki; } .pagerbutton-first { margin-right: 12px; } .pagerbutton-last,.pagerbutton-prev { margin-left: 12px; } .pagertext { border: none; margin-left: 30px; font-weight: bold; } .pagerbutton a { text-decoration: none; } .pagerbutton:hover { background-color: maroon; color: cornsilk; } .pagerbutton-prev { background-image: url(images/prev.png); background-position: 2px center; background-repeat: no-repeat; width: 35px; padding-left: 20px; } .pagerbutton-next { background-image: url(images/next.png); background-position: 40px center; background-repeat: no-repeat; width: 35px; padding-right: 20px; margin-right: 0px; } Yup that’s a lot of styling settings although not all of them are required. The key ones are pagerbutton, pager and pager selection. The others (which are implicitly created by the control based on the pagerbutton style) are for custom markup of the ‘special’ buttons. In my apps I tend to have two kinds of pages: Those that are associated with typical ‘grid’ displays that display purely tabular data and those that have a more looser list like layout. The two pagers shown above represent these two views and the pager and gridpager styles in my standard style sheet reflect these two styles. Configuring the Pager with Code Finally lets look at what it takes to hook up the pager. As mentioned in the highlights the Pager control is completely independent of other controls so if you just want to display a pager on its own it’s as simple as dropping the control and assigning the PageSize, ActivePage and either TotalPages or TotalItems. So for this markup: <ww:Pager runat="server" id="ItemPagerManual" PageSize="5" MaxPagesToDisplay="6" /> I can use code as simple as: ItemPagerManual.PageSize = 3; ItemPagerManual.ActivePage = 4;ItemPagerManual.TotalItems = 20; Note that ActivePage is not required - it will automatically use any Page=x query string value and assign it, although you can override it as I did above. TotalItems can be any value that you retrieve from a result set or manually assign as I did above. A more realistic scenario based on a LINQ to SQL IQueryable result is even easier. In this example, I have a UserControl that contains a ListView control that renders IQueryable data. I use a User Control here because there are different views the user can choose from with each view being a different user control. This incidentally also highlights one of the nice features of the pager: Because the pager is independent of the control I can put the pager on the host page instead of into each of the user controls. IOW, there’s only one Pager control, but there are potentially many user controls/listviews that hold the actual display data. The following code demonstrates how to use the Pager with an IQueryable that loads only the records it displays: protected voidPage_Load(objectsender, EventArgs e) {     Category = Request.Params["Category"] ?? 
string.Empty;     IQueryable<wws_Item> ItemList = ItemRepository.GetItemsByCategory(Category);     // Update the page and filter the list down     ItemList = ItemPager.FilterIQueryable<wws_Item>(ItemList); // Render user control with a list view Control ulItemList = LoadControl("~/usercontrols/" + App.Configuration.ItemListType + ".ascx"); ((IInventoryItemListControl)ulItemList).InventoryItemList = ItemList; phItemList.Controls.Add(ulItemList); // placeholder } The code uses a business object to retrieve Items by category as an IQueryable which means that the result is only an expression tree that hasn’t execute SQL yet and can be further filtered. I then pass this IQueryable to the FilterIQueryable() helper method of the control which does two main things: Filters the IQueryable to retrieve only the data displayed on the active page Sets the Totaltems property and calculates TotalPages on the Pager and that’s it! When the Pager renders it uses those values, plus the PageSize and ActivePage properties to render the Pager. In addition to IQueryable there are also filter methods for IEnumerable<T> and DataTable, but these versions just filter the data by removing rows/items from the entire already retrieved data. Output Generated and Paging Links The output generated creates pager links as plain href links. Here’s what the output looks like: <div id="ItemPager" class="pagercontainer"> <div class="pager"> <span class="pagertext">Pages: </span><a href="http://localhost/WestWindWebStore/itemlist.aspx?Page=1" class="pagerbutton" />1</a> <a href="http://localhost/WestWindWebStore/itemlist.aspx?Page=2" class="pagerbutton" />2</a> <a href="http://localhost/WestWindWebStore/itemlist.aspx?Page=3" class="pagerbutton" />3</a> <span class="pagerbutton-selected">4</span> <a href="http://localhost/WestWindWebStore/itemlist.aspx?Page=5" class="pagerbutton" />5</a> <a href="http://localhost/WestWindWebStore/itemlist.aspx?Page=6" class="pagerbutton" />6</a> <a href="http://localhost/WestWindWebStore/itemlist.aspx?Page=20" class="pagerbutton pagerbutton-last" />20</a>&nbsp;<a href="http://localhost/WestWindWebStore/itemlist.aspx?Page=3" class="pagerbutton pagerbutton-prev" />Prev</a>&nbsp;<a href="http://localhost/WestWindWebStore/itemlist.aspx?Page=5" class="pagerbutton pagerbutton-next" />Next</a></div> <br clear="all" /> </div> </div> The links point back to the current page and simply append a Page= page link into the page. When the page gets reloaded with the new page number the pager automatically detects the page number and automatically assigns the ActivePage property which results in the appropriate page to be displayed. The code shown in the previous section is all that’s needed to handle paging. Note that HTTP GET based paging is different than the Postback paging ASP.NET uses by default. Postback paging preserves modified page content when clicking on pager buttons, but this control will simply load a new page – no page preservation at this time. The advantage of not using Postback paging is that the URLs generated are plain HTML links that a search engine can follow where __doPostback() links are not. Pager with a Grid The pager also works in combination with grid controls so it’s easy to bypass the grid control’s paging features if desired. In the following example I use a gridView control and binds it to a DataTable result which is also filterable by the Pager control. 
The very basic plain vanilla ASP.NET grid markup looks like this: <div style="width: 600px; margin: 0 auto;padding: 20px; "> <asp:DataGrid runat="server" AutoGenerateColumns="True" ID="gdItems" CssClass="blackborder" style="width: 600px;"> <AlternatingItemStyle CssClass="gridalternate" /> <HeaderStyle CssClass="gridheader" /> </asp:DataGrid> <ww:Pager runat="server" ID="Pager" CssClass="gridpager" ContainerDivCssClass="gridpagercontainer" PageLinkCssClass="gridpagerbutton" SelectedPageCssClass="gridpagerbutton-selected" PageSize="8" RenderContainerDiv="true" MaxPagesToDisplay="6" /> </div> and looks like this when rendered: using custom set of CSS styles. The code behind for this code is also very simple: protected void Page_Load(object sender, EventArgs e) { string category = Request.Params["category"] ?? ""; busItem itemRep = WebStoreFactory.GetItem(); var items = itemRep.GetItemsByCategory(category) .Select(itm => new {Sku = itm.Sku, Description = itm.Description}); // run query into a DataTable for demonstration DataTable dt = itemRep.Converter.ToDataTable(items,"TItems"); // Remove all items not on the current page dt = Pager.FilterDataTable(dt,0); // bind and display gdItems.DataSource = dt; gdItems.DataBind(); } A little contrived I suppose since the list could already be bound from the list of elements, but this is to demonstrate that you can also bind against a DataTable if your business layer returns those. Unfortunately there’s no way to filter a DataReader as it’s a one way forward only reader and the reader is required by the DataSource to perform the bindings.  However, you can still use a DataReader as long as your business logic filters the data prior to rendering and provides a total item count (most likely as a second query). Control Creation The control itself is a pretty brute force ASP.NET control. Nothing clever about this other than some basic rendering logic and some simple calculations and update routines to determine which buttons need to be shown. You can take a look at the full code from the West Wind Web Toolkit’s Repository (note there are a few dependencies). 
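A quick aside on the DataReader note above: since a reader can’t be filtered after the fact, one common approach is to page in SQL (ROW_NUMBER works on SQL Server 2005 and later) and run a separate COUNT query to feed the pager. This is only a rough sketch – the Items table, its columns and the connectionString/category variables are made up for illustration and are not part of the Pager control:

string pagedSql = @"
    SELECT Sku, Description
    FROM (SELECT Sku, Description,
                 ROW_NUMBER() OVER (ORDER BY Sku) AS RowNum
          FROM Items
          WHERE Category = @Category) AS Paged
    WHERE RowNum BETWEEN @First AND @Last";

using (var conn = new SqlConnection(connectionString))
{
    conn.Open();

    // second query: the total count drives TotalItems/TotalPages on the Pager
    using (var countCmd = new SqlCommand(
        "SELECT COUNT(*) FROM Items WHERE Category = @Category", conn))
    {
        countCmd.Parameters.AddWithValue("@Category", category);
        Pager.TotalItems = (int)countCmd.ExecuteScalar();
    }

    using (var cmd = new SqlCommand(pagedSql, conn))
    {
        cmd.Parameters.AddWithValue("@Category", category);
        cmd.Parameters.AddWithValue("@First", (Pager.ActivePage - 1) * Pager.PageSize + 1);
        cmd.Parameters.AddWithValue("@Last", Pager.ActivePage * Pager.PageSize);

        using (SqlDataReader reader = cmd.ExecuteReader())
        {
            gdItems.DataSource = reader;  // the reader is already filtered, so no further paging is needed
            gdItems.DataBind();
        }
    }
}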
To give you an idea how the control works here is the Render() method: /// <summary> /// overridden to handle custom pager rendering for runtime and design time /// </summary> /// <param name="writer"></param> protected override void Render(HtmlTextWriter writer) { base.Render(writer); if (TotalPages == 0 && TotalItems > 0) TotalPages = CalculateTotalPagesFromTotalItems(); if (DesignMode) TotalPages = 10; // don't render pager if there's only one page if (TotalPages < 2) return; if (RenderContainerDiv) { if (!string.IsNullOrEmpty(ContainerDivCssClass)) writer.AddAttribute("class", ContainerDivCssClass); writer.RenderBeginTag("div"); } // main pager wrapper writer.WriteBeginTag("div"); writer.AddAttribute("id", this.ClientID); if (!string.IsNullOrEmpty(CssClass)) writer.WriteAttribute("class", this.CssClass); writer.Write(HtmlTextWriter.TagRightChar + "\r\n"); // Pages Text writer.WriteBeginTag("span"); if (!string.IsNullOrEmpty(PagesTextCssClass)) writer.WriteAttribute("class", PagesTextCssClass); writer.Write(HtmlTextWriter.TagRightChar); writer.Write(this.PagesText); writer.WriteEndTag("span"); // if the base url is empty use the current URL FixupBaseUrl(); // set _startPage and _endPage ConfigurePagesToRender(); // write out first page link if (ShowFirstAndLastPageLinks && _startPage != 1) { writer.WriteBeginTag("a"); string pageUrl = StringUtils.SetUrlEncodedKey(BaseUrl, QueryStringPageField, (1).ToString()); writer.WriteAttribute("href", pageUrl); if (!string.IsNullOrEmpty(PageLinkCssClass)) writer.WriteAttribute("class", PageLinkCssClass + " " + PageLinkCssClass + "-first"); writer.Write(HtmlTextWriter.SelfClosingTagEnd); writer.Write("1"); writer.WriteEndTag("a"); writer.Write("&nbsp;"); } // write out all the page links for (int i = _startPage; i < _endPage + 1; i++) { if (i == ActivePage) { writer.WriteBeginTag("span"); if (!string.IsNullOrEmpty(SelectedPageCssClass)) writer.WriteAttribute("class", SelectedPageCssClass); writer.Write(HtmlTextWriter.TagRightChar); writer.Write(i.ToString()); writer.WriteEndTag("span"); } else { writer.WriteBeginTag("a"); string pageUrl = StringUtils.SetUrlEncodedKey(BaseUrl, QueryStringPageField, i.ToString()).TrimEnd('&'); writer.WriteAttribute("href", pageUrl); if (!string.IsNullOrEmpty(PageLinkCssClass)) writer.WriteAttribute("class", PageLinkCssClass); writer.Write(HtmlTextWriter.SelfClosingTagEnd); writer.Write(i.ToString()); writer.WriteEndTag("a"); } writer.Write("\r\n"); } // write out last page link if (ShowFirstAndLastPageLinks && _endPage < TotalPages) { writer.WriteBeginTag("a"); string pageUrl = StringUtils.SetUrlEncodedKey(BaseUrl, QueryStringPageField, TotalPages.ToString()); writer.WriteAttribute("href", pageUrl); if (!string.IsNullOrEmpty(PageLinkCssClass)) writer.WriteAttribute("class", PageLinkCssClass + " " + PageLinkCssClass + "-last"); writer.Write(HtmlTextWriter.SelfClosingTagEnd); writer.Write(TotalPages.ToString()); writer.WriteEndTag("a"); } // Previous link if (ShowPreviousNextLinks && !string.IsNullOrEmpty(PreviousText) && ActivePage > 1) { writer.Write("&nbsp;"); writer.WriteBeginTag("a"); string pageUrl = StringUtils.SetUrlEncodedKey(BaseUrl, QueryStringPageField, (ActivePage - 1).ToString()); writer.WriteAttribute("href", pageUrl); if (!string.IsNullOrEmpty(PageLinkCssClass)) writer.WriteAttribute("class", PageLinkCssClass + " " + PageLinkCssClass + "-prev"); writer.Write(HtmlTextWriter.SelfClosingTagEnd); writer.Write(PreviousText); writer.WriteEndTag("a"); } // Next link if (ShowPreviousNextLinks && 
!string.IsNullOrEmpty(NextText) && ActivePage < TotalPages) { writer.Write("&nbsp;"); writer.WriteBeginTag("a"); string pageUrl = StringUtils.SetUrlEncodedKey(BaseUrl, QueryStringPageField, (ActivePage + 1).ToString()); writer.WriteAttribute("href", pageUrl); if (!string.IsNullOrEmpty(PageLinkCssClass)) writer.WriteAttribute("class", PageLinkCssClass + " " + PageLinkCssClass + "-next"); writer.Write(HtmlTextWriter.SelfClosingTagEnd); writer.Write(NextText); writer.WriteEndTag("a"); } writer.WriteEndTag("div"); if (RenderContainerDiv) { if (RenderContainerDivBreak) writer.Write("<br clear=\"all\" />\r\n"); writer.WriteEndTag("div"); } } As I said pretty much brute force rendering based on the control’s property settings of which there are quite a few: You can also see the pager in the designer above. unfortunately the VS designer (both 2010 and 2008) fails to render the float: left CSS styles properly and starts wrapping after margins are applied in the special buttons. Not a big deal since VS does at least respect the spacing (the floated elements overlay). Then again I’m not using the designer anyway :-}. Filtering Data What makes the Pager easy to use is the filter methods built into the control. While this functionality is clearly not the most politically correct design choice as it violates separation of concerns, it’s very useful for typical pager operation. While I actually have filter methods that do something similar in my business layer, having it exposed on the control makes the control a lot more useful for typical databinding scenarios. Of course these methods are optional – if you have a business layer that can provide filtered page queries for you can use that instead and assign the TotalItems property manually. There are three filter method types available for IQueryable, IEnumerable and for DataTable which tend to be the most common use cases in my apps old and new. The IQueryable version is pretty simple as it can simply rely on on .Skip() and .Take() with LINQ: /// <summary> /// <summary> /// Queries the database for the ActivePage applied manually /// or from the Request["page"] variable. This routine /// figures out and sets TotalPages, ActivePage and /// returns a filtered subset IQueryable that contains /// only the items from the ActivePage. /// </summary> /// <param name="query"></param> /// <param name="activePage"> /// The page you want to display. Sets the ActivePage property when passed. /// Pass 0 or smaller to use ActivePage setting. /// </param> /// <returns></returns> public IQueryable<T> FilterIQueryable<T>(IQueryable<T> query, int activePage) where T : class, new() { ActivePage = activePage < 1 ? ActivePage : activePage; if (ActivePage < 1) ActivePage = 1; TotalItems = query.Count(); if (TotalItems <= PageSize) { ActivePage = 1; TotalPages = 1; return query; } int skip = ActivePage - 1; if (skip > 0) query = query.Skip(skip * PageSize); _TotalPages = CalculateTotalPagesFromTotalItems(); return query.Take(PageSize); } The IEnumerable<T> version simply  converts the IEnumerable to an IQuerable and calls back into this method for filtering. The DataTable version requires a little more work to manually parse and filter records (I didn’t want to add the Linq DataSetExtensions assembly just for this): /// <summary> /// Filters a data table for an ActivePage. /// /// Note: Modifies the data set permanently by remove DataRows /// </summary> /// <param name="dt">Full result DataTable</param> /// <param name="activePage">Page to display. 
0 to use ActivePage property </param> /// <returns></returns> public DataTable FilterDataTable(DataTable dt, int activePage) { ActivePage = activePage < 1 ? ActivePage : activePage; if (ActivePage < 1) ActivePage = 1; TotalItems = dt.Rows.Count; if (TotalItems <= PageSize) { ActivePage = 1; TotalPages = 1; return dt; } int skip = ActivePage - 1; if (skip > 0) { for (int i = 0; i < skip * PageSize; i++ ) dt.Rows.RemoveAt(0); } while(dt.Rows.Count > PageSize) dt.Rows.RemoveAt(PageSize); return dt; } Using the Pager Control The pager as it is is a first cut I built a couple of weeks ago and since then have been tweaking a little as part of an internal project I’m working on. I’ve replaced a bunch of pagers on various older pages with this pager without any issues and have what now feels like a more consistent user interface where paging looks and feels the same across different controls. As a bonus I’m only loading the data from the database that I need to display a single page. With the preset class tags applied too adding a pager is now as easy as dropping the control and adding the style sheet for styling to be consistent – no fuss, no muss. Schweet. Hopefully some of you may find this as useful as I have or at least as a baseline to build ontop of… Resources The Pager is part of the West Wind Web & Ajax Toolkit Pager.cs Source Code (some toolkit dependencies) Westwind.css base stylesheet with .pager and .gridpager styles Pager Example Page © Rick Strahl, West Wind Technologies, 2005-2010Posted in ASP.NET  

    Read the article

  • From Binary to Data Structures

    - by Cédric Menzi
Table of Contents Introduction PE file format and COFF header COFF file header BaseCoffReader Byte4ByteCoffReader UnsafeCoffReader ManagedCoffReader Conclusion History This article is also available on CodeProject Introduction Sometimes, you want to parse well-formed binary data and bring it into your objects to do some dirty stuff with it. In the Windows world most data structures are stored in a special binary format. Either we call a WinApi function or we want to read from special files like images, spool files, executables or maybe the previously announced Outlook Personal Folders File. Most specifications for these files can be found on the MSDN Library: Open Specification In my example, we are going to get the COFF (Common Object File Format) file header from a PE (Portable Executable). The exact specification can be found here: PECOFF PE file format and COFF header Before we start we need to know how this file is formatted. The following figure shows an overview of the Microsoft PE executable format. Source: Microsoft Our goal is to get the PE header. As we can see, the image starts with an MS-DOS 2.0 header which is not important for us. From the documentation we can read "...After the MS DOS stub, at the file offset specified at offset 0x3c, is a 4-byte...". With this information we know our reader has to jump to location 0x3c and read the offset to the signature. The signature is always 4 bytes and ensures that the image is a PE file. The signature is: PE\0\0. To prove this we first seek to offset 0x3c and then check whether the file contains the signature. To do so we need to declare some constants, because we do not want magic numbers.   private const int PeSignatureOffsetLocation = 0x3c; private const int PeSignatureSize = 4; private const string PeSignatureContent = "PE";   Then we need a method for moving the reader to the correct location to read the offset of the signature. With this method we always move the underlying Stream of the BinaryReader to the start location of the PE signature.   private void SeekToPeSignature(BinaryReader br) { // seek to the offset for the PE signature br.BaseStream.Seek(PeSignatureOffsetLocation, SeekOrigin.Begin); // read the offset int offsetToPeSig = br.ReadInt32(); // seek to the start of the PE signature br.BaseStream.Seek(offsetToPeSig, SeekOrigin.Begin); }   Now we can check whether it is a valid PE image by reading the next 4 bytes and testing whether they contain the content PE.   private bool IsValidPeSignature(BinaryReader br) { // read 4 bytes to get the PE signature byte[] peSigBytes = br.ReadBytes(PeSignatureSize); // convert it to a string and trim \0 at the end of the content string peContent = Encoding.Default.GetString(peSigBytes).TrimEnd('\0'); // check if PE is in the content return peContent.Equals(PeSignatureContent); }   With this basic functionality we have a good base reader class to try the different methods of parsing the COFF file header. 
COFF file header The COFF header has the following structure: Offset Size Field 0 2 Machine 2 2 NumberOfSections 4 4 TimeDateStamp 8 4 PointerToSymbolTable 12 4 NumberOfSymbols 16 2 SizeOfOptionalHeader 18 2 Characteristics If we translate this table to code, we get something like this:   [StructLayout(LayoutKind.Sequential, CharSet = CharSet.Unicode)] public struct CoffHeader { public MachineType Machine; public ushort NumberOfSections; public uint TimeDateStamp; public uint PointerToSymbolTable; public uint NumberOfSymbols; public ushort SizeOfOptionalHeader; public Characteristic Characteristics; } BaseCoffReader All readers do the same thing, so we go to the patterns library in our head and see that Strategy pattern or Template method pattern is sticked out in the bookshelf. I have decided to take the template method pattern in this case, because the Parse() should handle the IO for all implementations and the concrete parsing should done in its derived classes.   public CoffHeader Parse() { using (var br = new BinaryReader(File.Open(_fileName, FileMode.Open, FileAccess.Read, FileShare.Read))) { SeekToPeSignature(br); if (!IsValidPeSignature(br)) { throw new BadImageFormatException(); } return ParseInternal(br); } } protected abstract CoffHeader ParseInternal(BinaryReader br);   First we open the BinaryReader, seek to the PE signature then we check if it contains a valid PE signature and rest is done by the derived implementations. Byte4ByteCoffReader The first solution is using the BinaryReader. It is the general way to get the data. We only need to know which order, which data-type and its size. If we read byte for byte we could comment out the first line in the CoffHeader structure, because we have control about the order of the member assignment.   protected override CoffHeader ParseInternal(BinaryReader br) { CoffHeader coff = new CoffHeader(); coff.Machine = (MachineType)br.ReadInt16(); coff.NumberOfSections = (ushort)br.ReadInt16(); coff.TimeDateStamp = br.ReadUInt32(); coff.PointerToSymbolTable = br.ReadUInt32(); coff.NumberOfSymbols = br.ReadUInt32(); coff.SizeOfOptionalHeader = (ushort)br.ReadInt16(); coff.Characteristics = (Characteristic)br.ReadInt16(); return coff; }   If the structure is as short as the COFF header here and the specification will never changed, there is probably no reason to change the strategy. But if a data-type will be changed, a new member will be added or ordering of member will be changed the maintenance costs of this method are very high. UnsafeCoffReader Another way to bring the data into this structure is using a "magically" unsafe trick. As above, we know the layout and order of the data structure. Now, we need the StructLayout attribute, because we have to ensure that the .NET Runtime allocates the structure in the same order as it is specified in the source code. We also need to enable "Allow unsafe code (/unsafe)" in the project's build properties. Then we need to add the following constructor to the CoffHeader structure.   [StructLayout(LayoutKind.Sequential, CharSet = CharSet.Unicode)] public struct CoffHeader { public CoffHeader(byte[] data) { unsafe { fixed (byte* packet = &data[0]) { this = *(CoffHeader*)packet; } } } }   The "magic" trick is in the statement: this = *(CoffHeader*)packet;. What happens here? We have a fixed size of data somewhere in the memory and because a struct in C# is a value-type, the assignment operator = copies the whole data of the structure and not only the reference. 
To fill the structure with data, we need to pass the data as bytes into the CoffHeader structure. This can be achieved by reading the exact size of the structure from the PE file.   protected override CoffHeader ParseInternal(BinaryReader br) { return new CoffHeader(br.ReadBytes(Marshal.SizeOf(typeof(CoffHeader)))); }   This solution is the fastest way to parse the data and bring it into the structure, but it is unsafe and it could introduce some security and stability risks. ManagedCoffReader In this solution we are using the same structure-assignment approach as above, but we need to replace the unsafe part in the constructor with the following managed part:   [StructLayout(LayoutKind.Sequential, CharSet = CharSet.Unicode)] public struct CoffHeader { public CoffHeader(byte[] data) { IntPtr coffPtr = IntPtr.Zero; try { int size = Marshal.SizeOf(typeof(CoffHeader)); coffPtr = Marshal.AllocHGlobal(size); Marshal.Copy(data, 0, coffPtr, size); this = (CoffHeader)Marshal.PtrToStructure(coffPtr, typeof(CoffHeader)); } finally { Marshal.FreeHGlobal(coffPtr); } } }     Conclusion We saw that we can parse well-formed binary data into our data structures using different approaches. The first is probably the clearest way, because we know each member, its size and its ordering, and we have control over reading the data for each member. But if a member is added or the structure changes for some reason, we need to change the reader. The other two solutions use the structure-assignment approach. In the unsafe implementation we need to compile the project with the /unsafe option. We increase the performance, but we take on some security risks.
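The CoffHeader structure above references MachineType and Characteristic enums that are not shown in this excerpt. Here is a minimal sketch of what they might look like (the member names are my own; the numeric values come from the PE/COFF specification), together with a hypothetical usage example that assumes each reader takes the file path in its constructor, as the _fileName field used in BaseCoffReader.Parse() suggests:

using System;

// Assumed enum definitions; the values are taken from the PE/COFF spec.
public enum MachineType : ushort
{
    Unknown = 0x0,
    I386 = 0x14c,    // IMAGE_FILE_MACHINE_I386
    Arm = 0x1c0,     // IMAGE_FILE_MACHINE_ARM
    Amd64 = 0x8664   // IMAGE_FILE_MACHINE_AMD64
}

[Flags]
public enum Characteristic : ushort
{
    RelocsStripped = 0x0001,     // IMAGE_FILE_RELOCS_STRIPPED
    ExecutableImage = 0x0002,    // IMAGE_FILE_EXECUTABLE_IMAGE
    LargeAddressAware = 0x0020,  // IMAGE_FILE_LARGE_ADDRESS_AWARE
    Dll = 0x2000                 // IMAGE_FILE_DLL
}

// Hypothetical usage of one of the readers described above.
class Program
{
    static void Main()
    {
        var reader = new ManagedCoffReader(@"C:\Windows\notepad.exe");
        CoffHeader header = reader.Parse();
        Console.WriteLine("Machine: {0}, Sections: {1}, Characteristics: {2}",
            header.Machine, header.NumberOfSections, header.Characteristics);
    }
}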

    Read the article

  • SQL Server 2005: Improving performance for thousands or Insert requests. logout-login time= 120ms.

    - by Rad
Can somebody shed some light on how SQL Server 2005 deals with many requests issued by a client using ADO.NET 2.0? Below is the shortened output of SQL Trace. I can see that connection pooling is working (I believe there is only one connection being pooled). What is not clear to me is why we have so many sp_reset_connection calls, i.e. a series of Audit Login, SQL:BatchStarting, RPC:Starting and Audit Logout for each iteration of the for loop below. I can see that there is constant switching between the tempdb and master databases, which leads me to conclude that we lose the context when the next connection is created by fetching it from the pool based on the ConnectionString argument. I can see that every 15ms I get 100-200 logins/logouts per second (reported at the same time by Profiler). Then after 15ms I again have a series of 100-200 logins/logouts per second. I need clarification on how this might affect much more complex insert queries in a production environment. I use Enterprise Library 2006, the code is compiled with VS 2005 and it is a console application that parses a flat file with tens of thousands of rows, grouping parent-child rows. It runs on an application server and runs 2 stored procedures on a remote SQL Server 2005: it inserts a parent record, retrieves the Identity value and, using it, calls the second stored procedure 1, 2 or more times (sometimes several thousand) to insert child records. The child table has close to 10 million records with 5-10 indexes, some of them being covering non-clustered indexes. There is a pretty complex Insert trigger that copies the inserted detail record to an archive table. All in all I only get 7 inserts per second, which means it can take 2-4 hours for 50 thousand records. When I run Profiler on the test server (which is almost equivalent to the production server) I can see that there is about 120ms between the Audit Logout and Audit Login trace entries, which gives me the chance to insert only about 8 records per second. So my question is whether there is some way to improve the inserting of records, since the company loads 100 thousand records, does daily planning and has an SLA to fulfill client requests coming in as flat file orders, and some big files of 10 thousand records have to be processed (imported) quickly. The 4 hours to import 60 thousand records should be reduced to 30 minutes. I was thinking of using the BatchSize of a DataAdapter to send multiple stored procedure calls, SQL bulk inserts to batch multiple inserts from a DataReader or DataTable, or SSIS fast load. But I don't know how to properly analyze re-indexing and statistics population, and maybe this has to take some time to finish. What is worse is that the company uses the biggest table for reporting and other online processing, so the indexes cannot be dropped. I manage transactions manually by setting a field to a value and doing a transactional update, changing that value to a new value that other applications use to get committed rows. Please advise how to approach this problem. For now I am trying to have staging tables with minimal logging in a separate database and no indexes, and I will try to do batched (massive) parent-child inserts. I believe the production DB has the simple recovery model, but it could be full recovery. If the DB user that is being used by my .NET console application has the bulkadmin role, does that mean its bulk inserts are minimally logged? I understand that when a table has a clustered index and many non-clustered indexes, inserts are still logged for each row. Connection pooling is working, but with many logins/logouts. Why? 
for (int i = 1; i <= 10000; i++){ using (SqlConnection conn = new SqlConnection("server=(local);database=master;integrated security=sspi;")) {conn.Open(); using (SqlCommand cmd = conn.CreateCommand()){ cmd.CommandText = "use tempdb"; cmd.ExecuteNonQuery();}}} SQL Server Profiler trace: Audit Login master 2010-01-13 23:18:45.337 1 - Nonpooled SQL:BatchStarting use tempdb master 2010-01-13 23:18:45.337 RPC:Starting exec sp_reset_conn tempdb 2010-01-13 23:18:45.337 Audit Logout tempdb 2010-01-13 23:18:45.337 2 - Pooled Audit Login -- network protocol master 2010-01-13 23:18:45.383 2 - Pooled SQL:BatchStarting use tempdb master 2010-01-13 23:18:45.383 RPC:Starting exec sp_reset_conn tempdb 2010-01-13 23:18:45.383 Audit Logout tempdb 2010-01-13 23:18:45.383 2 - Pooled Audit Login -- network protocol master 2010-01-13 23:18:45.383 2 - Pooled SQL:BatchStarting use tempdb master 2010-01-13 23:18:45.383 RPC:Starting exec sp_reset_conn tempdb 2010-01-13 23:18:45.383 Audit Logout tempdb 2010-01-13 23:18:45.383 2 - Pooled
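To make the bulk-insert idea concrete, this is roughly what I had in mind for the SqlBulkCopy path into a staging table (the table name and the DataTable built from the flat file are placeholders, not my real schema):

using System.Data;
using System.Data.SqlClient;

static void BulkLoadDetails(string connectionString, DataTable detailTable)
{
    using (var conn = new SqlConnection(connectionString))
    {
        conn.Open();
        using (var bulk = new SqlBulkCopy(conn))
        {
            // hypothetical staging table with no indexes or triggers
            bulk.DestinationTableName = "dbo.StagingOrderDetails";
            bulk.BatchSize = 5000;            // send rows in batches instead of one RPC per insert
            bulk.BulkCopyTimeout = 600;       // the 30 second default is too short for big files
            bulk.WriteToServer(detailTable);  // detailTable holds the parsed flat file rows
        }
    }
}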

    Read the article

  • How Mary Meeker’s Latest Findings May Make You Re-Imagine Commerce

    - by Brenna Johnson-Oracle
Today, Mary Meeker released her highly anticipated annual “Internet Trends” presentation for 2014. All 164 slides are jam-packed with pretty much everything you need to know about the state of the Internet. And as luck would have it, Oracle is staying ahead of these trends (but we’ll talk about that later). There were a few surprises, some stats to solidify what you likely already know, and Meeker’s novel observations about where we are all going. What interested me the most is not only how people are engaging in their personal lives, but how they engage with brands. As you could probably predict, Internet usage growth is slowing while tablet user and mobile data traffic growth continue their meteoric rise around the globe, with tremendous growth in underpenetrated markets like China, India, Brazil and Indonesia. Now hold those “Internet is dead” comments. Keep in mind there’s still plenty of room to grow, and a multiscreen model is Meeker’s vision for our future. Despite 1.5x YOY growth for mobile traffic, mobile still only makes up about 23% of all traffic today. With tablet shipments easily outpacing figures for PCs even at their height (in 2007), mobile will only continue on its path, but won’t be everything to everyone. Mobile won’t replace every touchpoint; it’s just created our shorter attention spans and demand for simpler, more personal experiences. As Meeker points out, TVs, tablets, PCs, and smartphones are used for different activities at present, but lines will blur (for example, 84% of smartphone owners use their device while watching TV). Day-to-day activities are being re-imagined through simple, beautiful user experiences. It seems like every day I discover a new way a brand/site/app made the most mundane or mounting task enjoyable and frictionless – and I’m not alone. Meeker points out that the evolution of how we do everything, from how we communicate, get information, use money, meet someone, get places, order a meal, and consume media, is all done through new user interfaces that make day-to-day tasks simpler. This movement has caused just about everyone’s patience for a poor UX to take a nosedive. And it’s not just the digital user experience; technology is making a lot of people’s offline lives easier, and less expensive. Today 47% of online shopping utilizes free shipping – nearly half. And Meeker predicts same day local delivery will be the “next big thing” (and you can take a guess on who will own that). Content, Community and Commerce create the “Internet Trifecta.” Meeker pointed out that when content, communities and commerce occur in a single experience it’s embraced by consumers, which translates to big dollars for brands. The magic happens when consumers can get inspired, research, and buy in a single experience. 
As the buying cycle has changed and touchpoints (Web, mobile, social, store) are no longer tied to “roles” or steps in the customer journey, brands must make all experiences (content and commerce) available in a single, adaptable experience. (We at Oracle Commerce have a lot to say on this topic – stay tuned!) And in what Meeker calls the “biggest re-imagination of all:” consumers enabled with smartphones and sensors are creating troves of findable and sharable data, which she says is in the early stages, by growing rapidly. She notes that transparency and patterns of consumers with this hardware (FYI - there are up to 10 sensors embedded in smartphones now) has created a Big Data treasure chest to be mined to improve business and the life of the consumer. The opportunities are endless. So what does it all mean for a company doing business online? Start thinking about how you can: Re-imagine your experience. Not your online experience and your mobile experience and your social experience – your overall experience. When consumers can research, buy, and advocate from anywhere (and their attention spans are at an all-time low) channels don’t exist. Enable simple and beautiful interactions informed by all of the online and offline data you leverage across your enterprise. Ethically leverage the endless supply of data (user generated content, clicks, purchases, in-store behavior, social activity) to make experiences more beautiful, more accurate, and more personalized (not to mention, more lucrative for you). Re-imagine content and commerce. Content and commerce must co-exist in a single destination where shoppers can get inspired, explore, research, share, and purchase in a collective experience. Think of how you can deliver an experience where all types of experiences (brand stories and commerce) adapt to every customer need. (Look for more on this topic coming soon). Re-imagine your reach. Look to Meeker’s findings to see how the global appetite for digital experiences is growing, but under-served in many places (i.e.: India, Mexico, Indonesia, Brazil, Philippines, etc.). Growing your online business to a new geography doesn’t have to mean starting from scratch or having an entirely new team manage the new endeavor. Expand using what you’ve already built in a multisite framework, with global language support. And of course, make sure it’s optimized for mobile! Re-imagine the possible. After every Meeker report, I’m always left with the thought “we are just at the beginning.” Everyday there is more data, more possibilities, more online consumers, and more opportunities to use new latest technology to get closer to your customers and be more successful. There’s a lot going on in our Product Development and Product Innovations groups to automate innovation for our customers, so that they can continue to stay ahead of these trends, without disrupting their business. Check out a recent interview with our Innovations Team on some of these new possibilities. Staying on track despite the seemingly endless possibilities out there is the hard part. Prioritizing where you will focus based on your unique brand promise, customer and goals is what you do best. To learn how Oracle Commerce can help your business achieve your goals check out oracle.com/commerce. Check out Meeker’s entire report here.

    Read the article

  • CodePlex Daily Summary for Tuesday, February 08, 2011

    CodePlex Daily Summary for Tuesday, February 08, 2011Popular ReleasesyoutubeFisher: youtubeFisher 3.0 [beta]: What's new: Supports YouTube's new layout Complete internal refactoringNearforums - ASP.NET MVC forum engine: Nearforums v5.0: Version 5.0 of the ASP.NET MVC Forum Engine, containing the following improvements: .NET 4.0 as target framework using ASP.NET MVC 3. All views migrated to Razor for cleaner markup. Alternate template (Layout file) for mobile devices 4 Bug Fixes since Version 4.1 Visit the project Roadmap for more details.fuv: 1.0 release, codename Chopper Joe: features: search/replace :o to open file :s to save file :q to quitASP.NET MVC Project Awesome, jQuery Ajax helpers (controls): 1.7: A rich set of helpers (controls) that you can use to build highly responsive and interactive Ajax-enabled Web applications. These helpers include Autocomplete, AjaxDropdown, Lookup, Confirm Dialog, Popup Form, Popup and Pager html generation optimized new features for the lookup (add additional search data ) live demo went aeroEnhSim: EnhSim 2.3.6 BETA: 2.3.6 BETAThis release supports WoW patch 4.06 at level 85 To use this release, you must have the Microsoft Visual C++ 2010 Redistributable Package installed. This can be downloaded from http://www.microsoft.com/downloads/en/details.aspx?FamilyID=A7B7A05E-6DE6-4D3A-A423-37BF0912DB84 To use the GUI you must have the .NET 4.0 Framework installed. This can be downloaded from http://www.microsoft.com/downloads/en/details.aspx?FamilyID=9cfb2d51-5ff4-4491-b0e5-b386f32c0992 Changes since 2.3.0 ...TestApi - a library of Test APIs: TestApi v0.6: TestApi v0.6 comes with the following changes: TestApi code development has been moved to Codeplex: Moved TestApi soluton to VS 2010; Moved all source code to Codeplex. All development work is done there now. Fault Injection API: Integrated the unmanaged FaultInjectionEngine.dll COM component in the build; Cleaned up FaultInjectionEngine.dll to build at warning level 4; Implemented “FaultScope” which allows for in-process fault injection; Added automation scripts & sample program; ...AutoLoL: AutoLoL v1.5.5: AutoChat now allows up to 6 items. Items with nr. 7-0 will be removed! News page url's are now opened in the default browser Added a context menu to the system tray icon (thanks to Alex Banagos) AutoChat now allows configuring the Chat Keys and the Modifier Key The recent files list now supports compact and full mode Fix: Swapped mouse buttons are now properly detected Fix: Sometimes the Play button was pressed while still greyed out Champion: Karma Note: You can also run the u...mojoPortal: 2.3.6.2: see release notes on mojoportal.com http://www.mojoportal.com/mojoportal-2362-released.aspx Note that we have separate deployment packages for .NET 3.5 and .NET 4.0 The deployment package downloads on this page are pre-compiled and ready for production deployment, they contain no C# source code. To download the source code see the Source Code Tab I recommend getting the latest source code using TortoiseHG, you can get the source code corresponding to this release here.Rawr: Rawr 4.0.19 Beta: Rawr is now web-based. The link to use Rawr4 is: http://elitistjerks.com/rawr.phpThis is the Cataclysm Beta Release. More details can be found at the following link http://rawr.codeplex.com/Thread/View.aspx?ThreadId=237262 As of the 4.0.16 release, you can now also begin using the new Downloadable WPF version of Rawr!This is a pre-alpha release of the WPF version, there are likely to be a lot of issues. 
If you have a problem, please follow the Posting Guidelines and put it into the Issue Trac...IronRuby: 1.1.2: IronRuby 1.1.2 is a servicing release that keeps on improving compatibility with Ruby 1.9.2 and includes IronRuby integration to Visual Studio 2010. We decided to drop 1.8.6 compatibility mode in all post-1.0 releases. We recommend using IronRuby 1.0 if you need 1.8.6 compatibility. In this release we fixed several major issues: - problems that blocked Gem installation in certain cases - regex syntax: the parser was replaced with a new one that is much more compatible with Ruby 1.9.2 - cras...Pyxis 2: Production Release: Pyxis 2.0.0.13 - Full Production Release This release of Pyxis 2 offers you a wide range of features: Launch Applications in their own threads & domains Render alpha-blended icons on the desktop Support for SD & USB drives Online App Store Dynamic & Static IP support Menus & Modals Over a dozen GUI controls File selection dialogs Folder selection dialog Application, Bootloader, and Firmware Updating Update Release Notes Much More!Microsoft All-In-One Code Framework: Sample Browser v2 (CTP Release): Sample Browser v2 (CTP Release) http://i3.codeplex.com/Project/Download/FileDownload.aspx?ProjectName=1code&DownloadId=205917MVVM Light Toolkit: MVVM Light Toolkit V3 SP1 (4): There was a small issue with the previous release that caused errors when installing the templates in VS10 Express. This release corrects the error. Only use this if you encountered issues when installing the previous release. No changes in the binaries.Finestra Virtual Desktops: 1.0: Finally the version 1.0 release! Sorry for the long delay since the last release, but I think that you'll find this release to be really smooth, really stable, and a really great enhancement to Windows. New features include: Windows 7 taskbar integration Major performance and usability improvements Redesigned look and feel New name: Finestra Better automatic updating Much faster full-screen switcher Fixes Windows 7 hotkey collisions by default Updated installerFacebook C# SDK: 5.0.2 (BETA): PLEASE TAKE A FEW MINUTES TO GIVE US SOME FEEDBACK: Facebook C# SDK Survey This is third BETA release of the version 5 branch of the Facebook C# SDK. Remember this is a BETA build. Some things may change or not work exactly as planned. We are absolutely looking for feedback on this release to help us improve the final 5.X.X release. This release contains some breaking changes. Particularly with authentication. After spending time reviewing the trouble areas that people are having using th...ASP.NET MVC SiteMap provider: MvcSiteMapProvider 3.0.0 for MVC3: Using NuGet?MvcSiteMapProvider is also listed in the NuGet feed. Learn more... Like the project? Consider a donation!Donate via PayPal via PayPal. ChangelogTargeting ASP.NET MVC 3 and .NET 4.0 Additional UpdatePriority options for generating XML sitemaps Allow to specify target on SiteMapTitleAttribute One action with multiple routes and breadcrumbs Medium Trust optimizations Create SiteMapTitleAttribute for setting parent title IntelliSense for your sitemap with MvcSiteMapSchem...patterns & practices SharePoint Guidance: SharePoint Guidance 2010 Hands On Lab: SharePoint Guidance 2010 Hands On Lab consists of six labs: one for logging, one for service location, and four for application setting manager. Each lab takes about 20 minutes to walk through. Each lab consists of a PDF document. 
You can go through the steps in the doc to create solution and then build/deploy the solution and run the lab. For those of you who wants to save the time, we included the final solution so you can just build/deploy the solution and run the lab.Value Injecter - object(s) to -> object mapper: 2.3: it lets you define your own convention-based matching algorithms (ValueInjections) in order to match up (inject) source values to destination values. inject from multiple sources in one InjectFrom added ConventionInjectionMobile Device Detection and Redirection: 0.1.11.11: Improvements to Beta Release The following changes have been made in version 0.1.11.11: BlackBerry Version 6 devices (such as the 9800 Torch) are now correctly identified with a dedicated handler. Android powered devices are now correctly identified. Minor change to Provider.cs to improve performance and optimise data sent to 51Degrees.mobi if the option is enabled. GC.collect is no longer called at any point. All garbage collection now happens automatically IMPORTANT CHANGES This rele...TweetSharp: TweetSharp v2.0.0.0 - Preview 10: Documentation for this release may be found at http://tweetsharp.codeplex.com/wikipage?title=UserGuide&referringTitle=Documentation. Note: This code is currently preview quality. Preview 9 ChangesAdded support for trends Added support for Silverlight 4 Elevated WP7 fixes Third Party Library VersionsHammock v1.1.7: http://hammock.codeplex.com Json.NET 4.0 Release 1: http://json.codeplex.comNew Projectsbisolu_sendus: bisolu smsBlack & Scholes: Black & Scholes OptionsCosLabs: CosLabs makes it easy to see the full capabilities of the Cosmos operating system toolkit. CosLabs has a number of experiments to find new and unique uses for Cosmos. It's developed in C#.DeployToAzure: DeployToAzure allows automating deployment of Windows Azure project and making it a part of TFS 2010 build process without using PowerShell and Azure Management CmdLets. Digital: Educational purposesDiscogsNet: DiscogsNet is a .NET library to query the Discogs.com API and parse the Discogs XML data dump files. It's developed in C# and usable from any .NET language. The API of the library is focused on ease of use and intuitiveness.DJME - The jQuery extensions for ASP.NET MVC: DJME - The jQuery extensions for ASP.NET MVC is a lightweight framework which helps you build rich user interfaces for ASP.NET MVC while enjoying great developer productivity. DokuDB: Visio AddIn to document different versions of a databaseDtsConfig Explorer: This project is a windows froms application that helps explore a SSIS dtsconfig file with an easy wayEveryDNS Service: Windows service to automatically send your current IP address to everydns.net for dynamic domains. Automatically monitors and sends IP address changes without having to be logged in. The project is built in C# targeting the .NET 4.0 framework.EvoSim - Evolution Simulation: EvoSim is a pet project to learn aspects of MVVM, WPF/Silverlight, and Parallel Processing features of .NET 4. The goal is to simulate a world filled with creatures that move, eat, reproduce, and die according to Darwinian evolutionary principles. 
It is tile and turn based.EZStorage: A storage library for XNA based on EasyStorage but abstracting in an XML based file list for each folder, allowing file listings on all folders and details like file creation dates which are no longer possible in XNA 4.0ForeverBell's Snake: A snake game written in VB.NET.Glauca Browser: This is a browser with a webkit core inside.Grauers SharePoint 2010 Custom MasterPage Feature: This is a SharePoint 2010 Custom MastePage feature. Custom Master and custom CSS Blog post about the project: http://www.grauers.net/archive/2011/02/07/build-a-sharepoint-2010-custom-mastepage-with-visual-studio-2010.aspxHamming Code Sample: HamCode demonstrate how the hamming code work, result of coded and decoded data. Shows the benefits of hamming method/codeHydroDesktop Mercurial Test: This is a trial version of the HydroDesktop project (hydrodesktop.codeplex.com) running on Mercurial source code repository. We first want to test whether the data update/download is happening fine before switching the official HydroDesktop website to the new system.LateBindingApi: Creates .Net Proxy Components from COM Type Libraries.Middlesex County College Library Wireless Auto Login: The Middlesex County College Library Wireless Auto Login automatically logs a computer in to the Middlesex County College (Edison, NJ) Library's WifiMobility: Mobility is a small program that reads from a text file. The text file includes everything about the program, right down to that cute little bunny on your desktop! Try Mobility today!OpenMFC: OpenMFC is opensource version of MFC for using with C/C++ compiler without MFC.Orchard Content Sharing: This Orchard module adds content sharing functionality via integration with AddThis.com sharing service.Orchard Wunder Weather Widget: This project is used to maintain the source code for the Wunder Weather widget in the Orchard gallery.Professional Audio Recorder: Professional Audio Recorder is a Audio Recorder for Windows Phone 7Python Node Info for Umbraco: Designed to assist Umbraco macro authors who are writing their macros in Python. Provides a helper macro that, when inserted into a page, will enable the display of an info panel showing the page node properties as represented in Python.Softina.Graphs: Softina.Graphs is a graph managment and algorithm utility. It includes a set of libraries to work with graph processing. It is written using .net framework with c# language and MS Visual Studio 2008. Client applications is created using devexpress components.spangesharp: Application to help people learn c#Visual Studio Private Extension Gallery: Add a new tab in the Visual Studio extension manager to manage private extensions.WCF Home Framework: Home Framework is a service-oriented framework able to facilitate deploy and management of wcf services in a mid-size scenario project.Windows Phone 7 Video Player: This project contains all the source of an application for Windows Phone 7 to consume an RSS Media exposed by our Smooth Streaming Video Player plugin for WordPress (http://smooth.codeplex.com).WOL Shopping List: This is a shoppingList version of WOL's NearMeWP7RSSReader: Updated RSSReader project for Windows Phone 7 using the RTM tools with the October 2010 update.Xen (XNA Extended) Framework: The Xen Framework is a set of libraries that provides and extends XNA 4.0 functionality to make game development easier and faster while producing clean, maintainable code. Xen frees you up to spend more time building your game.

    Read the article

  • Silverlight for Windows Embedded tutorial (step 4)

    - by Valter Minute
I’m back with my Silverlight for Windows Embedded tutorial. Sorry for the long delay between step 3 and step 4; the MVP summit and some work-related issues prevented me from working on the tutorial during the last few weeks. In our first, second and third tutorial steps we implemented some very simple applications, just to understand the basic structure of a Silverlight for Windows Embedded application, learn how to handle events and how to operate on images. In this fourth step our sample application will be slightly more complicated, to introduce two new topics: list boxes and custom controls. We will also learn how to create controls at runtime. I chose to explain those topics together and provide a sample a bit more complicated than usual, just to start giving a feeling of how a “real” Silverlight for Windows Embedded application is organized. As usual we can start using Expression Blend to define our main page. In this case we will have a listbox and a textblock. Here’s the XAML code: <UserControl xmlns="http://schemas.microsoft.com/winfx/2006/xaml/presentation" xmlns:x="http://schemas.microsoft.com/winfx/2006/xaml" x:Class="ListDemo.Page" Width="640" Height="480" x:Name="ListPage" xmlns:ListDemo="clr-namespace:ListDemo">   <Grid x:Name="LayoutRoot" Background="White"> <ListBox Margin="19,57,19,66" x:Name="FileList" SelectionChanged="Filelist_SelectionChanged"/> <TextBlock Height="35" Margin="19,8,19,0" VerticalAlignment="Top" TextWrapping="Wrap" x:Name="CurrentDir" Text="TextBlock" FontSize="20"/> </Grid> </UserControl> In our listbox we will load a list of directories, starting from the filesystem root (there are no drives in Windows CE, the filesystem has a single root named “\”). When the user clicks on an item inside the list, the corresponding directory path will be displayed in the TextBlock object and the subdirectories of the selected branch will be shown inside the list. As you can see we declared an event handler for the SelectionChanged event of our listbox. We also used a different font size for the TextBlock, to make it more readable. XAML and Expression Blend allow you to customize your UI pretty heavily; experiment with the tools and discover how you can completely change the look of your application without changing a single line of code! Inside our ListBox we want to display each directory with a nice icon and its name, just like you are used to seeing them in the Windows 7 file explorer, for example. To get this we will define a user control. This is a custom object that will behave like “regular” Silverlight for Windows Embedded objects inside our application. 
First of all we have to define the look of our custom control, named DirectoryItem, using XAML: <UserControl xmlns="http://schemas.microsoft.com/winfx/2006/xaml/presentation" xmlns:x="http://schemas.microsoft.com/winfx/2006/xaml" xmlns:d="http://schemas.microsoft.com/expression/blend/2008" xmlns:mc="http://schemas.openxmlformats.org/markup-compatibility/2006" mc:Ignorable="d" x:Class="ListDemo.DirectoryItem" Width="500" Height="80">   <StackPanel x:Name="LayoutRoot" Orientation="Horizontal"> <Canvas Width="31.6667" Height="45.9583" Margin="10,10,10,10" RenderTransformOrigin="0.5,0.5"> <Canvas.RenderTransform> <TransformGroup> <ScaleTransform/> <SkewTransform/> <RotateTransform Angle="-31.27"/> <TranslateTransform/> </TransformGroup> </Canvas.RenderTransform> <Rectangle Width="31.6667" Height="45.8414" Canvas.Left="0" Canvas.Top="0.116943" Stretch="Fill"> <Rectangle.Fill> <LinearGradientBrush StartPoint="0.142631,0.75344" EndPoint="1.01886,0.75344"> <LinearGradientBrush.RelativeTransform> <TransformGroup> <SkewTransform CenterX="0.142631" CenterY="0.75344" AngleX="19.3128" AngleY="0"/> <RotateTransform CenterX="0.142631" CenterY="0.75344" Angle="-35.3436"/> </TransformGroup> </LinearGradientBrush.RelativeTransform> <LinearGradientBrush.GradientStops> <GradientStop Color="#FF7B6802" Offset="0"/> <GradientStop Color="#FFF3D42C" Offset="1"/> </LinearGradientBrush.GradientStops> </LinearGradientBrush> </Rectangle.Fill> </Rectangle> <Rectangle Width="29.8441" Height="43.1517" Canvas.Left="0.569519" Canvas.Top="1.05249" Stretch="Fill"> <Rectangle.Fill> <LinearGradientBrush StartPoint="0.142632,0.753441" EndPoint="1.01886,0.753441"> <LinearGradientBrush.RelativeTransform> <TransformGroup> <SkewTransform CenterX="0.142632" CenterY="0.753441" AngleX="19.3127" AngleY="0"/> <RotateTransform CenterX="0.142632" CenterY="0.753441" Angle="-35.3437"/> </TransformGroup> </LinearGradientBrush.RelativeTransform> <LinearGradientBrush.GradientStops> <GradientStop Color="#FFCDCDCD" Offset="0.0833333"/> <GradientStop Color="#FFFFFFFF" Offset="1"/> </LinearGradientBrush.GradientStops> </LinearGradientBrush> </Rectangle.Fill> </Rectangle> <Rectangle Width="29.8441" Height="43.1517" Canvas.Left="0.455627" Canvas.Top="2.28036" Stretch="Fill"> <Rectangle.Fill> <LinearGradientBrush StartPoint="0.142631,0.75344" EndPoint="1.01886,0.75344"> <LinearGradientBrush.RelativeTransform> <TransformGroup> <SkewTransform CenterX="0.142631" CenterY="0.75344" AngleX="19.3128" AngleY="0"/> <RotateTransform CenterX="0.142631" CenterY="0.75344" Angle="-35.3436"/> </TransformGroup> </LinearGradientBrush.RelativeTransform> <LinearGradientBrush.GradientStops> <GradientStop Color="#FFCDCDCD" Offset="0.0833333"/> <GradientStop Color="#FFFFFFFF" Offset="1"/> </LinearGradientBrush.GradientStops> </LinearGradientBrush> </Rectangle.Fill> </Rectangle> <Rectangle Width="29.8441" Height="43.1517" Canvas.Left="0.455627" Canvas.Top="1.34485" Stretch="Fill"> <Rectangle.Fill> <LinearGradientBrush StartPoint="0.142631,0.75344" EndPoint="1.01886,0.75344"> <LinearGradientBrush.RelativeTransform> <TransformGroup> <SkewTransform CenterX="0.142631" CenterY="0.75344" AngleX="19.3128" AngleY="0"/> <RotateTransform CenterX="0.142631" CenterY="0.75344" Angle="-35.3436"/> </TransformGroup> </LinearGradientBrush.RelativeTransform> <LinearGradientBrush.GradientStops> <GradientStop Color="#FFCDCDCD" Offset="0.0833333"/> <GradientStop Color="#FFFFFFFF" Offset="1"/> </LinearGradientBrush.GradientStops> </LinearGradientBrush> </Rectangle.Fill> </Rectangle> <Rectangle 
Width="26.4269" Height="45.8414" Canvas.Left="0.227798" Canvas.Top="0" Stretch="Fill"> <Rectangle.Fill> <LinearGradientBrush StartPoint="0.142631,0.75344" EndPoint="1.01886,0.75344"> <LinearGradientBrush.RelativeTransform> <TransformGroup> <SkewTransform CenterX="0.142631" CenterY="0.75344" AngleX="19.3127" AngleY="0"/> <RotateTransform CenterX="0.142631" CenterY="0.75344" Angle="-35.3436"/> </TransformGroup> </LinearGradientBrush.RelativeTransform> <LinearGradientBrush.GradientStops> <GradientStop Color="#FF7B6802" Offset="0"/> <GradientStop Color="#FFF3D42C" Offset="1"/> </LinearGradientBrush.GradientStops> </LinearGradientBrush> </Rectangle.Fill> </Rectangle> <Rectangle Width="1.25301" Height="45.8414" Canvas.Left="1.70862" Canvas.Top="0.116943" Stretch="Fill" Fill="#FFEBFF07"/> </Canvas> <TextBlock Height="80" x:Name="Name" Width="448" TextWrapping="Wrap" VerticalAlignment="Center" FontSize="24" Text="Directory"/> </StackPanel> </UserControl>

As you can see, this XAML contains many graphic elements. Those elements are used to design the folder icon. The original drawing has been designed in Expression Design and then exported as XAML. In Silverlight for Windows Embedded you can use vector images. This means that your images will look good even when scaled or rotated. In our DirectoryItem custom control we have a TextBlock named Name, that will be used to display….(suspense)…. the directory name (I'm too lazy to invent fancy names for controls, and using "boring" intuitive names will make code more readable, I hope!).

Now that we have some XAML code, we may execute XAML2CPP to generate part of the application code for us. We should then add the XAML2CPP-generated resource and include files to our project and add a reference to the XAML runtime library to our sources file (you can follow the instructions of the first tutorial step to do that). To generate the code used in this tutorial you need XAML2CPP ver 1.0.1.0, which is downloadable here: http://geekswithblogs.net/WindowsEmbeddedCookbook/archive/2010/03/08/xaml2cpp-1.0.1.0.aspx

We can now create our usual simple Win32 application inside Platform Builder, using the same steps described in the first chapter of this tutorial (http://geekswithblogs.net/WindowsEmbeddedCookbook/archive/2009/10/01/silverlight-for-embedded-tutorial.aspx). We can declare a class for our main page, deriving it from the template that XAML2CPP generated for us:

class ListPage : public TListPage<ListPage>
{
...
}

We will see the ListPage class code in a short time, but first we will look at the code of our DirectoryItem user control. This object will be used to populate our list, one item for each directory. Declaring a user control is a bit more complicated (but also in this case XAML2CPP will write most of the "boilerplate" code for us). To interact with a user control you should declare an interface. An interface defines the functions of a user control that can be called inside the application code. Our custom control is currently quite simple and we just need some member functions to store and retrieve a full pathname inside our control. The control will display just the last part of the path. An interface is declared as a C++ class that has only abstract virtual members. It should also have a UUID associated with it. UUID means Universally Unique IDentifier and it's a 128-bit number that will identify our interface without the need of specifying its fully qualified name.
UUIDs are used to identify COM interfaces and, as we discovered in chapter one, Silverlight for Windows Embedded is based on COM or, at least, provides a COM-like Application Programming Interface (API). Here's the declaration of the DirectoryItem interface:

class __declspec(novtable,uuid("{D38C66E5-2725-4111-B422-D75B32AA8702}")) IDirectoryItem : public IXRCustomUserControl
{
public:

    virtual HRESULT SetFullPath(BSTR fullpath) = 0;
    virtual HRESULT GetFullPath(BSTR* retval) = 0;
};

The interface is derived from IXRCustomUserControl; this will allow us to add our object to a XAML tree. It declares the two functions needed to set and get the full path, but does not implement them. The implementation will be done inside the control class. The interface only defines the functions of our control class that are accessible from the outside. It's a sort of "contract" between our control and the applications that will use it. We must support what's inside the contract, and the application code should know nothing else about our own control. To reference our interface we will use the UUID; to make the code more readable we can declare a #define in this way:

#define IID_IDirectoryItem __uuidof(IDirectoryItem)

Silverlight for Windows Embedded objects (like COM objects) use a reference counting mechanism to handle object destruction. Every time you store a pointer to an object you should call its AddRef function and every time you no longer need that pointer you should call Release. The object keeps an internal counter, incremented for each AddRef and decremented on Release. When the counter reaches 0, the object is destroyed. Managing reference counting in our code can be quite complicated and, since we are lazy (I am, at least!), we will use a great feature of Silverlight for Windows Embedded: smart pointers. A smart pointer can be connected to a Silverlight for Windows Embedded object and manages its reference counting. To declare a smart pointer we must use the XRPtr template:

typedef XRPtr<IDirectoryItem> IDirectoryItemPtr;

Now that we have defined our interface, it's time to implement our user control class. XAML2CPP has implemented a class for us, and we only have to derive our class from it, defining the main class and interface of our new custom control:

class DirectoryItem : public DirectoryItemUserControlRegister<DirectoryItem,IDirectoryItem>
{
...
}

XAML2CPP has generated some code for us to support the user control; we don't have to mind too much about that code, since it will be generated (or written by hand, if you like) always in the same way, for every user control. But knowing how this works "under the hood" is still useful to understand the architecture of Silverlight for Windows Embedded. Our base class declaration is a bit more complex than the one we used for a simple page in the previous chapters:

template <class A,class B> class DirectoryItemUserControlRegister : public XRCustomUserControlImpl<A,B>,public TDirectoryItem<A,XAML2CPPUserControl>
{
...
}

This class derives from the XAML2CPP generated template class, like the ListPage class, but it uses XAML2CPPUserControl for the implementation of some features. This class shares the same ancestor as XAML2CPPPage (the base class for "regular" XAML pages), XAML2CPPBase; it implements binding of member variables and event handlers but, instead of loading and creating its own XAML tree, it attaches to an existing one. The XAML tree (and UI) of our custom control is created and loaded by the XRCustomUserControlImpl class.
This class is part of the Silverlight for Windows Embedded framework and implements most of the functions needed to build up a custom control in Silverlight (the guys that developed Silverlight for Windows Embedded seem to care about lazy programmers!). We just have to initialize it, providing our class (DirectoryItem) and interface (IDirectoryItem). Our user control class also has a static member:

protected:

    static HINSTANCE hInstance;

This is used to store the HINSTANCE of the module that contains our user control class. I don't like this implementation, but I can't find a better one, so if somebody has good ideas about how to handle the HINSTANCE object, I'll be happy to hear suggestions!

It also implements two static methods required by XRCustomUserControlImpl. The first one is used to load the XAML UI of our custom control:

static HRESULT GetXamlSource(XRXamlSource* pXamlSource)
{
    pXamlSource->SetResource(hInstance,TEXT("XAML"),IDR_XAML_DirectoryItem);
    return S_OK;
}

It initializes an XRXamlSource object, connecting it to the XAML resource that XAML2CPP has included in our resource script. The other method is used to register our custom control, allowing Silverlight for Windows Embedded to create it when it loads some XAML or when an application creates a new control at runtime (more about this later):

static HRESULT Register()
{
    return XRCustomUserControlImpl<A,B>::Register(__uuidof(B), L"DirectoryItem", L"clr-namespace:DirectoryItemNamespace");
}

To register our control we should provide its interface UUID, the name of the corresponding element in the XAML tree and its namespace (namespaces compatible with Silverlight must use the "clr-namespace" prefix). We may also register additional properties for our objects, allowing them to be loaded and saved inside XAML. In this case we have no persistent properties and the Register method will just register our control. An additional static method is implemented to allow easy registration of our custom control inside our application's WinMain function:

static HRESULT RegisterUserControl(HINSTANCE hInstance)
{
    DirectoryItemUserControlRegister::hInstance=hInstance;
    return DirectoryItemUserControlRegister<A,B>::Register();
}

Now our control is registered and we will be able to create it using the Silverlight for Windows Embedded runtime functions. But we need to bind our members and event handlers to have them available as we are used to for other XAML2CPP generated objects. To bind events and members we need to implement the OnLoaded function:

virtual HRESULT OnLoaded(__in IXRDependencyObject* pRoot)
{
    HRESULT retcode;
    IXRApplicationPtr app;
    if (FAILED(retcode=GetXRApplicationInstance(&app)))
        return retcode;
    return ((A*)this)->Init(pRoot,hInstance,app);
}

This function calls the XAML2CPPUserControl::Init member, which connects the "root" member with the XAML subtree that has been created for our control and then calls BindObjects and BindEvents to bind members and events to our code.
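Before we move on to the application code, here is a small aside on the XRPtr smart pointers mentioned earlier. This is my own minimal sketch, not part of the original sample; it assumes that the caller owns the reference returned by GetXRApplicationInstance, and the function names are purely illustrative.

// Manual reference counting: every exit path must remember to call Release.
HRESULT UseApplicationRaw()
{
    IXRApplication* app = NULL;
    HRESULT retcode = GetXRApplicationInstance(&app);
    if (FAILED(retcode))
        return retcode;
    // ... use app ...
    app->Release();   // easy to forget, especially with early returns
    return S_OK;
}

// With a smart pointer the Release call happens automatically when the
// XRPtr goes out of scope, whatever exit path is taken.
HRESULT UseApplicationSmart()
{
    IXRApplicationPtr app;   // XRPtr<IXRApplication>
    HRESULT retcode = GetXRApplicationInstance(&app);
    if (FAILED(retcode))
        return retcode;
    // ... use app ...
    return S_OK;
}

The same convenience applies to IDirectoryItemPtr and the other smart pointer typedefs used in the rest of this sample.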
Now we can go back to our application code (the code that you'll have to actually write) to see the contents of our DirectoryItem class:

class DirectoryItem : public DirectoryItemUserControlRegister<DirectoryItem,IDirectoryItem>
{
protected:

    WCHAR fullpath[_MAX_PATH+1];

public:

    DirectoryItem()
    {
        *fullpath=0;
    }

    virtual HRESULT SetFullPath(BSTR fullpath)
    {
        wcscpy_s(this->fullpath,fullpath);

        WCHAR* p=fullpath;

        for(WCHAR*q=wcsstr(p,L"\\");q;p=q+1,q=wcsstr(p,L"\\"))
            ;

        Name->SetText(p);
        return S_OK;
    }

    virtual HRESULT GetFullPath(BSTR* retval)
    {
        *retval=SysAllocString(fullpath);
        return S_OK;
    }
};

It's pretty easy and contains a fullpath member (used to store the path of the directory connected with the user control) and the implementation of the two interface members that can be used to set and retrieve the path. The SetFullPath member parses the full path and displays just the last branch directory name inside the "Name" TextBlock object. As you can see, implementing a user control in Silverlight for Windows Embedded is not too complex, and using XAML also for the UI of the control allows us to re-use the same mechanisms that we learnt and used in the previous steps of our tutorial.

Now let's see how the main page is managed by the ListPage class.

class ListPage : public TListPage<ListPage>
{
protected:

    // current path
    TCHAR curpath[_MAX_PATH+1];

It has a member named "curpath" that is used to store the current directory. It's initialized inside the constructor:

ListPage()
{
    *curpath=0;
}

And its value is displayed inside the "CurrentDir" TextBlock by the initialization function:

virtual HRESULT Init(HINSTANCE hInstance,IXRApplication* app)
{
    HRESULT retcode;

    if (FAILED(retcode=TListPage<ListPage>::Init(hInstance,app)))
        return retcode;

    CurrentDir->SetText(L"\\");
    return S_OK;
}

The FillFileList function is used to enumerate subdirectories of the current dir and add entries for each one inside the list box that fills most of the client area of our main page:

HRESULT FillFileList()
{
    HRESULT retcode;
    IXRItemCollectionPtr items;
    IXRApplicationPtr app;

    if (FAILED(retcode=GetXRApplicationInstance(&app)))
        return retcode;

    // retrieves the items contained in the listbox
    if (FAILED(retcode=FileList->GetItems(&items)))
        return retcode;

    // clears the list
    if (FAILED(retcode=items->Clear()))
        return retcode;

    // enumerates files and directories in the current path
    WCHAR filemask[_MAX_PATH+1];

    wcscpy_s(filemask,curpath);
    wcscat_s(filemask,L"\\*.*");

    WIN32_FIND_DATA finddata;
    HANDLE findhandle;

    findhandle=FindFirstFile(filemask,&finddata);

    // the directory is empty?
    if (findhandle==INVALID_HANDLE_VALUE)
        return S_OK;

    do
    {
        if (finddata.dwFileAttributes & FILE_ATTRIBUTE_DIRECTORY)
        {
            IXRListBoxItemPtr listboxitem;

            // add a new item to the listbox
            if (FAILED(retcode=app->CreateObject(IID_IXRListBoxItem,&listboxitem)))
            {
                FindClose(findhandle);
                return retcode;
            }

            if (FAILED(retcode=items->Add(listboxitem,NULL)))
            {
                FindClose(findhandle);
                return retcode;
            }

            IDirectoryItemPtr directoryitem;

            if (FAILED(retcode=app->CreateObject(IID_IDirectoryItem,&directoryitem)))
            {
                FindClose(findhandle);
                return retcode;
            }

            WCHAR fullpath[_MAX_PATH+1];

            wcscpy_s(fullpath,curpath);
            wcscat_s(fullpath,L"\\");
            wcscat_s(fullpath,finddata.cFileName);

            if (FAILED(retcode=directoryitem->SetFullPath(fullpath)))
            {
                FindClose(findhandle);
                return retcode;
            }

            XAML2CPPXRValue value((IXRDependencyObject*)directoryitem);

            if (FAILED(retcode=listboxitem->SetContent(&value)))
            {
                FindClose(findhandle);
                return retcode;
            }
        }
    } while (FindNextFile(findhandle,&finddata));

    FindClose(findhandle);
    return S_OK;
}

This function retrieves a pointer to the collection of the items contained in the directory listbox. The IXRItemCollection interface is used by listboxes and comboboxes and allows you to clear the list (using Clear(), as our function does at the beginning) and change its contents by adding and removing elements. The function uses FindFirstFile/FindNextFile to enumerate all the objects inside our current directory and creates an IXRListBoxItem object for each subdirectory. You can insert any kind of control inside a list box (you don't need an IXRListBoxItem), but using one will allow you to handle the selected state of an item, highlighting it inside the list. The function creates a list box item using the CreateObject function of XRApplication. The same function is then used to create an instance of our custom control; CreateObject returns a pointer to the control's IDirectoryItem interface, and we can use it to store the directory's full path inside the object and to set the object as the content of the IXRListBoxItem that was added to the listbox.

The listbox generates an event (SelectionChanged) each time the user clicks on one of the items contained in the listbox. We implement an event handler for that event and use it to change our current directory and repopulate the listbox. The full path of the current directory will be displayed in the TextBlock:

HRESULT Filelist_SelectionChanged(IXRDependencyObject* source,XRSelectionChangedEventArgs* args)
{
    HRESULT retcode;

    IXRListBoxItemPtr listboxitem;

    if (!args->pAddedItem)
        return S_OK;

    if (FAILED(retcode=args->pAddedItem->QueryInterface(IID_IXRListBoxItem,(void**)&listboxitem)))
        return retcode;

    XRValue content;
    if (FAILED(retcode=listboxitem->GetContent(&content)))
        return retcode;

    if (content.vType!=VTYPE_OBJECT)
        return E_FAIL;

    IDirectoryItemPtr directoryitem;

    if (FAILED(retcode=content.pObjectVal->QueryInterface(IID_IDirectoryItem,(void**)&directoryitem)))
        return retcode;

    content.pObjectVal->Release();
    content.pObjectVal=NULL;

    BSTR fullpath=NULL;

    if (FAILED(retcode=directoryitem->GetFullPath(&fullpath)))
        return retcode;

    CurrentDir->SetText(fullpath);

    wcscpy_s(curpath,fullpath);
    FillFileList();
    SysFreeString(fullpath);

    return S_OK;
}
};

The function uses the pAddedItem member of the XRSelectionChangedEventArgs object to retrieve the currently selected item, converts it to an IXRListBoxItem interface using QueryInterface, and then retrieves its content (the IDirectoryItem object).
Using the GetFullPath method we can get the full path of our selected directory and assign it to the curpath member. A call to FillFileList will update the listbox contents, displaying the list of subdirectories of the selected folder.

To build our sample we just need to add code to our WinMain function:

int WINAPI WinMain(HINSTANCE hInstance, HINSTANCE hPrevInstance, LPTSTR lpCmdLine, int nCmdShow)
{
    if (!XamlRuntimeInitialize())
        return -1;

    HRESULT retcode;

    IXRApplicationPtr app;
    if (FAILED(retcode=GetXRApplicationInstance(&app)))
        return -1;

    if (FAILED(retcode=DirectoryItem::RegisterUserControl(hInstance)))
        return retcode;

    ListPage page;

    if (FAILED(page.Init(hInstance,app)))
        return -1;

    page.FillFileList();

    UINT exitcode;

    if (FAILED(page.GetVisualHost()->StartDialog(&exitcode)))
        return -1;

    return 0;
}

This code is very similar to the WinMain functions of our previous samples. The main differences are that we register our custom control (you should do that as soon as you have initialized the XAML runtime) and call FillFileList after the initialization of our ListPage object to load the contents of the root folder of our device inside the listbox.

As usual you can download the full sample source code from here: http://cid-9b7b0aefe3514dc5.skydrive.live.com/self.aspx/.Public/ListBoxTest.zip
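One last aside from me (this helper is not part of the downloadable sample, just a sketch): the GetContent/XRValue handling used in Filelist_SelectionChanged can be wrapped in a small utility, assuming the same VTYPE_OBJECT convention shown above.

// Hedged sketch: extract a typed interface from a listbox item's content,
// following the same XRValue handling used in Filelist_SelectionChanged.
template <class T>
HRESULT GetItemContent(IXRListBoxItem* listboxitem, REFIID iid, T** retval)
{
    if (!listboxitem || !retval)
        return E_INVALIDARG;

    XRValue content;
    HRESULT retcode = listboxitem->GetContent(&content);
    if (FAILED(retcode))
        return retcode;

    if (content.vType != VTYPE_OBJECT || !content.pObjectVal)
        return E_FAIL;

    // QueryInterface adds its own reference; release the one held by the XRValue.
    retcode = content.pObjectVal->QueryInterface(iid, (void**)retval);
    content.pObjectVal->Release();
    content.pObjectVal = NULL;
    return retcode;
}

With a helper like this the event handler could obtain its IDirectoryItem pointer with a single call (for example GetItemContent(listboxitem, IID_IDirectoryItem, &directoryitem)). This is just one possible refactoring, not the way the sample is written.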

    Read the article

  • Learning AngularJS by Example – The Customer Manager Application

    - by dwahlin
I'm always tinkering around with different ideas and toward the beginning of 2013 decided to build a sample application using AngularJS that I call Customer Manager. It's not exactly the most creative name or concept, but I wanted to build something that highlighted a lot of the different features offered by AngularJS and how they could be used together to build a full-featured app. One of the goals of the application was to ensure that it was approachable by people new to Angular since I've never found overly complex applications great for learning new concepts. The application initially started out small and was used in my AngularJS in 60-ish Minutes video on YouTube but has gradually had more and more features added to it and will continue to be enhanced over time. It'll be used in a new "end-to-end" training course my company is working on for AngularJS as well as in some video courses that will be coming out. Here's a quick look at what the application home page looks like:

In this post I'm going to provide an overview about how the application is organized, back-end options that are available, and some of the features it demonstrates. I've already written about some of the features so if you're interested check out the following posts:

Building an AngularJS Modal Service
Building a Custom AngularJS Unique Value Directive
Using an AngularJS Factory to Interact with a RESTful Service

Application Structure

The structure of the application is shown to the right. The homepage is index.html and is located at the root of the application folder. It defines where application views will be loaded using the ng-view directive and includes script references to AngularJS, AngularJS routing and animation scripts, plus a few others located in the Scripts folder and to custom application scripts located in the app folder. The app folder contains all of the key scripts used in the application. There are several techniques that can be used for organizing script files but after experimenting with several of them I decided that I prefer things in folders such as controllers, views, services, etc. Doing that helps me find things a lot faster and allows me to categorize files (such as controllers) by functionality. My recommendation is to go with whatever works best for you. Anyone who says, "You're doing it wrong!" should be ignored. Contrary to what some people think, there is no "one right way" to organize scripts and other files. As long as the scripts make it down to the client properly (you'll likely minify and concatenate them anyway to reduce bandwidth and minimize HTTP calls), the way you organize them is completely up to you. Here's what I ended up doing for this application:

Animation code for some custom animations is located in the animations folder. In addition to AngularJS animations (which are defined using CSS in Content/animations.css), it also animates the initial customer data load using a 3rd party script called GreenSock.
Controllers are located in the controllers folder. Some of the controllers are placed in subfolders based upon their functionality while others are placed at the root of the controllers folder since they're more generic.
The directives folder contains the custom directives created for the application.
The filters folder contains the custom filters created for the application that filter city/state and product information.
The partials folder contains partial views. This includes things like modal dialogs used in the application.
The services folder contains AngularJS factories and services used for various purposes in the application. Most of the scripts in this folder provide data functionality.
The views folder contains the different views used in the application. Like the controllers folder, the views are organized into subfolders based on their functionality.

Back-End Services

The Customer Manager application (grab it from Github) provides two different options on the back-end including ASP.NET Web API and Node.js. The ASP.NET Web API back-end uses Entity Framework for data access and stores data in SQL Server (LocalDb). The other option on the back-end is Node.js, Express, and MongoDB.

Using the ASP.NET Web API Back-End

To run the application using the ASP.NET Web API/SQL Server back-end, open the .sln file at the root of the project in Visual Studio 2012 or higher (the free Express 2013 for Web version is fine). Press F5 and a browser will automatically launch and display the application.

Using the Node.js Back-End

To run the application using the Node.js/MongoDB back-end follow these steps:

1. In the CustomerManager directory execute 'npm install' to install Express, MongoDB and Mongoose (package.json).
2. Load sample data into MongoDB by performing the following steps:
   Execute 'mongod' to start the MongoDB daemon.
   Navigate to the CustomerManager directory (the one that has initMongoCustData.js in it) then execute 'mongo' to start the MongoDB shell.
   Enter the following in the mongo shell to load the seed files that handle seeding the database with initial data:
   use custmgr
   load("initMongoCustData.js")
   load("initMongoSettingsData.js")
   load("initMongoStateData.js")
3. Start the Node/Express server by navigating to the CustomerManager/server directory and executing 'node app.js'.
4. View the application at http://localhost:3000 in your browser.

Key Features

The Customer Manager application certainly doesn't cover every feature provided by AngularJS (as mentioned the intent was to keep it as simple as possible) but does provide insight into several key areas:

Using factories and services as re-useable data services (see the app/services folder)
Creating custom directives (see the app/directives folder)
Custom paging (see app/views/customers/customers.html and app/controllers/customers/customersController.js)
Custom filters (see app/filters)
Showing custom modal dialogs with a re-useable service (see app/services/modalService.js)
Making Ajax calls using a factory (see app/services/customersService.js)
Using Breeze to retrieve and work with data (see app/services/customersBreezeService.js). Switch the application to use the Breeze factory by opening app/services.config.js and changing the useBreeze property to true.
Intercepting HTTP requests to display a custom overlay during Ajax calls (see app/directives/wcOverlay.js)
Custom animations using the GreenSock library (see app/animations/listAnimations.js)
Creating custom AngularJS animations using CSS (see Content/animations.css)
JavaScript patterns for defining controllers, services/factories, directives, filters, and more (see any JavaScript file in the app folder)
Card View and List View display of data (see app/views/customers/customers.html and app/controllers/customers/customersController.js)
Using AngularJS validation functionality (see app/views/customerEdit.html, app/controllers/customerEditController.js, and app/directives/wcUnique.js)
More…

Conclusion

I'll be enhancing the application even more over time and welcome contributions as well.
Tony Quinn contributed the initial Node.js/MongoDB code which is very cool to have as a back-end option. Access the standard application here and a version that has custom routing in it here. Additional information about the custom routing can be found in this post.

    Read the article
