Search Results

Search found 14131 results on 566 pages for 'note'.


  • What Problems Are Better Solved By SOAP Over REST?

    SOAP and REST have been battling for web service supremacy for years. In my personal opinion this debate should never have existed. Yes, both can be used to create an interactive web service, but each was developed independently to solve two different yet similar problems. Based on my research and experience, I would have to say that REST should be the preferred web service methodology and SOAP should only be used in specific situations. Note that I did not say I was against SOAP; in fact I like to use SOAP when it is needed.

    Criteria for using SOAP:
    - Does the service need a guaranteed level of reliability and security?
    - Have the provider and consumer of the service agreed on a standardized data exchange format?
    - Does the service need data context and state management?

    If you answer yes to any of these questions, then you may want to consider SOAP as the format for the web service.

    Another way to look at the relationship between REST and SOAP is to look at the medical field. A general doctor or your family health care provider can acceptably treat most conditions, from a common cold to a broken bone. A general doctor aligns more with REST in my opinion, because for most service requirements REST fulfills a project's needs. But what happens if you need a more advanced examination? You would go to a specialist. A specialist already has experience dealing with the specific issues you are experiencing, giving them the context to decide how best to treat you going forward. SOAP acts more like a specialist doctor, given that it understands the context of an issue and can treat it based on the state of other patients it has already treated.

    An example of where I would use SOAP over REST in real life would be a single sign-on application. In this case I need to validate a username and password for authentication and authorization of a web page request. This service would need to maintain state while it authenticated a user and while it validated access to a web page on a subsequent request. This service must process every request for access and not allow caching, to ensure that every request is processed and only the appropriate users are allowed to view selected web pages.

    References: Rozlog, M. (2010). REST and SOAP: When Should I Use Each (or Both)? Retrieved November 20, 2011, from InfoQ: http://www.infoq.com/articles/rest-soap-when-to-use-each

    Read the article

  • SOA Suite 11g Releases

    - by antony.reynolds
    A few years ago Mars renamed one of the most popular chocolate bars in England from Marathon to Snickers. Even today some people are still confused by the name change and refer to them as Marathons. Well, last week we released SOA Suite 11.1.1.3 and BPM Suite 11.1.1.3 as well as OSB 11.1.1.3. It seems that some people are a little confused by the naming and how to install these new versions, probably the same Brits who call a Snickers a Marathon :-). It seems that calling all the revisions 11g Release 1 has caused confusion. To help these people I have created a little diagram to show how you can get the latest version onto your machine. The dotted lines indicate dependencies.

    Note that SOA Suite 11.1.1.3 and BPM 11.1.1.3 are provided as a patch that is applied to SOA Suite 11.1.1.2. For a new install there is no need to run the 11.1.1.2 RCU; you can run the 11.1.1.3 RCU directly.

    All SOA & BPM Suite 11g installations are built on a WebLogic Server base. The WebLogic 11g Release 1 version is 10.3 with an additional number indicating the revision. Similarly, the 11g Release 1 SOA Suite, Service Bus and BPM Suite have a version 11.1.1 with an additional number indicating the revision. The final revision number should match the final revision in the WebLogic Server version. The products are also sometimes identified by a Patch Set number, indicating whether this is the 11gR1 product with the first or second patch set. The table below shows the different revisions with their aliases.

    Product          | Version  | Base WebLogic | Alias
    SOA Suite 11gR1  | 11.1.1.1 | 10.3.1        | Release 1 or R1
    SOA Suite 11gR1  | 11.1.1.2 | 10.3.2        | Patch Set 1 or PS1
    SOA Suite 11gR1  | 11.1.1.3 | 10.3.3        | Patch Set 2 or PS2
    BPM Suite 11gR1  | 11.1.1.3 | 10.3.3        | Release 1 or R1
    OSB 11gR1        | 11.1.1.3 | 10.3.3        | Release 1 or R1

    Hope this helps some people. If you find it useful you could always send me a Marathon bar, sorry, Snickers!

    Read the article

  • How to use SharePoint modal dialog box to display Custom Page Part3

    - by ybbest
    In the second part of the series, I showed you how to display and close a custom page in a SharePoint modal dialog using JavaScript and display a message after the modal dialog is closed. In this post, I'd like to show you how to use SPLongOperation with the modal dialog box. You can download the source code here.

    1. First, modify the element file as follows:

        <Elements xmlns="http://schemas.microsoft.com/sharepoint/">
          <CustomAction Id="ReportConcern"
                        RegistrationType="ContentType"
                        RegistrationId="0x010100866B1423D33DDA4CA1A4639B54DD4642"
                        Location="EditControlBlock"
                        Sequence="107"
                        Title="Display Custom Page"
                        Description="To Display Custom Page in a modal dialog box on this item">
            <UrlAction Url="javascript:
              function emitStatus(messageToDisplay) {
                statusId = SP.UI.Status.addStatus(messageToDisplay.message + ' ' + messageToDisplay.location);
                SP.UI.Status.setStatusPriColor(statusId, 'Green');
              }
              function portalModalDialogClosedCallback(result, value) {
                if (value !== null) {
                  emitStatus(value);
                }
              }
              var options = {
                url: '{SiteUrl}' + '/_layouts/YBBEST/TitleRename.aspx?List={ListId}&amp;ID={ItemId}',
                title: 'Rename title',
                allowMaximize: false,
                showClose: true,
                width: 500,
                height: 300,
                dialogReturnValueCallback: portalModalDialogClosedCallback
              };
              SP.UI.ModalDialog.showModalDialog(options);" />
          </CustomAction>
        </Elements>

    2. In your code-behind, you can implement a close-dialog function as below. This will close your modal dialog box once the button is clicked and display a status bar. Note that, because the dialog page runs inside an iframe, it needs to close itself through window.frameElement.commonModalDialogClose rather than calling SP.UI.ModalDialog.commonModalDialogClose on the parent page.

        protected void SubmitClicked(object sender, EventArgs e)
        {
            // Process stuff
            string message = "You clicked the Submit button";
            string newLocation = "http://www.google.com";
            string information = string.Format("{{'message':'{0}','location':'{1}' }}", message, newLocation);
            var longOperation = new SPLongOperation(Page);
            longOperation.LeadingHTML = "Processing the application";
            longOperation.TrailingHTML = "Please wait while the application is being processed.";
            longOperation.Begin();
            Thread.Sleep(5 * 1000);
            var closeDialogScript = GetCloseDialogScriptForLongProcess(information);
            longOperation.EndScript(closeDialogScript);
        }

        protected static string GetCloseDialogScriptForLongProcess(string message)
        {
            var scriptBuilder = new StringBuilder();
            scriptBuilder.Append("window.frameElement.commonModalDialogClose(1,").Append(message).Append(");");
            return scriptBuilder.ToString();
        }

    References: How to: Display a Page as a Modal Dialog Box

    Read the article

  • Music Notation Editor - Refactoring view creation logic elsewhere

    - by Cyril Silverman
    Let me preface by saying that knowing some elementary music theory and music notation may be helpful in grasping the problem at hand. I'm currently building a Music Notation and Tablature Editor (in JavaScript), and I've come to a point where the core parts of the program are more or less there. All functionality I plan to add at this point will really build off the foundation that I've created. As a result, I want to refactor to really solidify my code. I'm using an API called VexFlow to render notation. Basically I pass the parts of the editor's state to VexFlow to build the graphical representation of the score. Here is a rough and stripped-down UML diagram showing the outline of my program: in essence, a Part has many Measures, which have many Notes, which have many NoteItems (yes, this is semantically weird, as a chord is represented as a Note with multiple NoteItems, individual pitches or fret positions). All of the relationships are bi-directional.

    There are a few problems with my design, because my Measure class contains the majority of the entire application's view logic. The class holds the data about all VexFlow objects (the graphical representation of the score). It contains the graphical Staff object and the graphical notes. (Shouldn't these be placed somewhere else in the program?) While VexFlowFactory deals with the actual creation (and some processing) of most of the VexFlow objects, Measure still "directs" the creation of all the objects and the order they are supposed to be created in, for both the VexFlowStaff and the VexFlowNotes.

    I'm not looking for a specific answer, as you'd need a much deeper understanding of my code. Just a general direction to go in. Here's a thought I had: create MeasureView/NoteView/PartView classes that contain the basic VexFlow objects for each class, in addition to any extraneous logic for their creation. But where would these views be contained? Do I create a ScoreView that is a parallel graphical representation of everything, so that ScoreView.render() would cascade down into each PartView, call render on it, and cascade down into each MeasureView, and so on? (A rough sketch of that idea follows below.) Again, I just have no idea what direction to go in. The more I think about it, the more ways to go seem to pop into my head. I tried to be as concise and simplistic as possible while still getting my problem across. Please feel free to ask me any questions if anything is unclear. It's quite a struggle trying to dumb down a complicated problem to its core parts.
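    For what it's worth, here is a minimal sketch of that cascading view hierarchy. ScoreView, PartView and MeasureView are hypothetical names, the sketch assumes the model objects expose their children as arrays (score.parts, part.measures), and the actual VexFlow drawing calls are left as a comment since they would come from the VexFlowFactory:

        // Each view wraps a model object and owns only the graphical objects for it.
        function MeasureView(measure) {
          this.measure = measure;   // model object: notes / note items only
          this.vexStaff = null;     // graphical VexFlow objects, built at render time
          this.vexNotes = [];
        }
        MeasureView.prototype.render = function (context) {
          // build this.vexStaff and this.vexNotes via the VexFlowFactory, then draw them on context
        };

        function PartView(part) {
          this.part = part;
          this.measureViews = part.measures.map(function (m) { return new MeasureView(m); });
        }
        PartView.prototype.render = function (context) {
          this.measureViews.forEach(function (mv) { mv.render(context); });
        };

        function ScoreView(score) {
          this.partViews = score.parts.map(function (p) { return new PartView(p); });
        }
        ScoreView.prototype.render = function (context) {
          this.partViews.forEach(function (pv) { pv.render(context); });
        };

    With a shape like this, Measure goes back to being pure data, and re-rendering the whole score is a single ScoreView.render(context) call that cascades down the hierarchy.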

    Read the article

  • Recording Topics manually and automatically

    - by maria.cozzolino(at)oracle.com
    When you are recording UPK topics, the default mode for recording is manual recording, where you tell the system when to record each screen shot. This mode allows you to take the exact screen shot you need. However, it does get a bit tedious when you are recording long topics, especially if you forget to take a few screen shots. In UPK 3.5, a new version of recording was introduced: Automatic Recording. It was designed to simplify the recording process by automatically capturing screen shots as you perform your transaction. If you haven't experimented with Automatic Recording, I'd recommend you give it a try; it might make your recording life easier. If you are recording with sound, you can also narrate your topic while recording it.

    To turn on Automatic Recording:

    1. In Tools/Options, there are two recorder tabs. The first tab, under content defaults, includes settings that you may want to share between developers, like whether keyboard shortcuts are automatically captured.
    2. The second tab is the one that contains the personal preferences, like the screen shot capture key and whether to record automatically or manually. On this tab, choose the option for Automatic Recording.
    3. Save the settings. Note that this setting will NOT impact content defaults; it applies to your user only.

    When you launch the recorder, you will notice a slightly different message with guidance on how to start and stop automatic recording. Once you start recording, the recorder window is hidden until the end of the recording session to allow you to capture your transaction. In the task tray, there is a series of icons that let you know that you are capturing content. You can pause the recording, as well as set and view your sound levels if you are using sound. A camera appears during each screen capture to help you know when the system is capturing a screen shot, and a context indicator appears to show the recognition.

    With automatic recording, you can let the system capture the necessary screen shots. It may provide a more natural recording experience, and is probably easier for the untrained developer. On the other hand, you have a bit more control with manual recording over which screen shot appears, but it also means you have to remember to capture the screen shot. :) We'd be interested in hearing which type of recording you do, and any rationale on why you made that choice. Please comment and let us know.

    --Maria Cozzolino, Manager of UPK Software Requirements and UI Design

    Read the article

  • Microsoft Introduces WebMatrix

    - by Rick Strahl
    originally published in CoDe Magazine Editorial Microsoft recently released the first CTP of a new development environment called WebMatrix, which along with some of its supporting technologies are squarely aimed at making the Microsoft Web Platform more approachable for first-time developers and hobbyists. But in the process, it also provides some updated technologies that can make life easier for existing .NET developers. Let’s face it: ASP.NET development isn’t exactly trivial unless you already have a fair bit of familiarity with sophisticated development practices. Stick a non-developer in front of Visual Studio .NET or even the Visual Web Developer Express edition and it’s not likely that the person in front of the screen will be very productive or feel inspired. Yet other technologies like PHP and even classic ASP did provide the ability for non-developers and hobbyists to become reasonably proficient in creating basic web content quickly and efficiently. WebMatrix appears to be Microsoft’s attempt to bring back some of that simplicity with a number of technologies and tools. The key is to provide a friendly and fully self-contained development environment that provides all the tools needed to build an application in one place, as well as tools that allow publishing of content and databases easily to the web server. WebMatrix is made up of several components and technologies: IIS Developer Express IIS Developer Express is a new, self-contained development web server that is fully compatible with IIS 7.5 and based on the same codebase that IIS 7.5 uses. This new development server replaces the much less compatible Cassini web server that’s been used in Visual Studio and the Express editions. IIS Express addresses a few shortcomings of the Cassini server such as the inability to serve custom ISAPI extensions (i.e., things like PHP or ASP classic for example), as well as not supporting advanced authentication. IIS Developer Express provides most of the IIS 7.5 feature set providing much better compatibility between development and live deployment scenarios. SQL Server Compact 4.0 Database access is a key component for most web-driven applications, but on the Microsoft stack this has mostly meant you have to use SQL Server or SQL Server Express. SQL Server Compact is not new-it’s been around for a few years, but it’s been severely hobbled in the past by terrible tool support and the inability to support more than a single connection in Microsoft’s attempt to avoid losing SQL Server licensing. The new release of SQL Server Compact 4.0 supports multiple connections and you can run it in ASP.NET web applications simply by installing an assembly into the bin folder of the web application. In effect, you don’t have to install a special system configuration to run SQL Compact as it is a drop-in database engine: Copy the small assembly into your BIN folder (or from the GAC if installed fully), create a connection string against a local file-based database file, and then start firing SQL requests. Additionally WebMatrix includes nice tools to edit the database tables and files, along with tools to easily upsize (and hopefully downsize in the future) to full SQL Server. This is a big win, pending compatibility and performance limits. In my simple testing the data engine performed well enough for small data sets. This is not only useful for web applications, but also for desktop applications for which a fully installed SQL engine like SQL Server would be overkill. 
Having a local data store in those applications that can potentially be accessed by multiple users is a welcome feature. ASP.NET Razor View Engine What? Yet another native ASP.NET view engine? We already have Web Forms and various different flavors of using that view engine with Web Forms and MVC. Do we really need another? Microsoft thinks so, and Razor is an implementation of a lightweight, script-only view engine. Unlike the Web Forms view engine, Razor works only with inline code, snippets, and markup; therefore, it is more in line with current thinking of what a view engine should represent. There’s no support for a “page model” or any of the other Web Forms features of the full-page framework, but just a lightweight scripting engine that works with plain markup plus embedded expressions and code. The markup syntax for Razor is geared for minimal typing, plus some progressive detection of where a script block/expression starts and ends. This results in a much leaner syntax than the typical ASP.NET Web Forms alligator (<% %>) tags. Razor uses the @ sign plus standard C# (or Visual Basic) block syntax to delineate code snippets and expressions. Here’s a very simple example of what Razor markup looks like along with some comment annotations: <!DOCTYPE html> <html>     <head>         <title></title>     </head>     <body>     <h1>Razor Test</h1>          <!-- simple expressions -->     @DateTime.Now     <hr />     <!-- method expressions -->     @DateTime.Now.ToString("T")          <!-- code blocks -->     @{         List<string> names = new List<string>();         names.Add("Rick");         names.Add("Markus");         names.Add("Claudio");         names.Add("Kevin");     }          <!-- structured block statements -->     <ul>     @foreach(string name in names){             <li>@name</li>     }     </ul>           <!-- Conditional code -->        @if(true) {                        <!-- Literal Text embedding in code -->        <text>         true        </text>;    }    else    {        <!-- Literal Text embedding in code -->       <text>       false       </text>;    }    </body> </html> Like the Web Forms view engine, Razor parses pages into code, and then executes that run-time compiled code. Effectively a “page” becomes a code file with markup becoming literal text written into the Response stream, code snippets becoming raw code, and expressions being written out with Response.Write(). The code generated from Razor doesn’t look much different from similar Web Forms code that only uses script tags; so although the syntax may look different, the operational model is fairly similar to the Web Forms engine minus the overhead of the large Page object model. However, there are differences: -Razor pages are based on a new base class, Microsoft.WebPages.WebPage, which is hosted in the Microsoft.WebPages assembly that houses all the Razor engine parsing and processing logic. Browsing through the assembly (in the generated ASP.NET Temporary Files folder or GAC) will give you a good idea of the functionality that Razor provides. If you look closely, a lot of the feature set matches ASP.NET MVC’s view implementation as well as many of the helper classes found in MVC. It’s not hard to guess the motivation for this sort of view engine: For beginning developers the simple markup syntax is easier to work with, although you obviously still need to have some understanding of the .NET Framework in order to create dynamic content. 
The syntax is easier to read and grok and much shorter to type than ASP.NET alligator tags (<% %>) and also easier to understand aesthetically what’s happening in the markup code. Razor also is a better fit for Microsoft’s vision of ASP.NET MVC: It’s a new view engine without the baggage of Web Forms attached to it. The engine is more lightweight since it doesn’t carry all the features and object model of Web Forms with it and it can be instantiated directly outside of the HTTP environment, which has been rather tricky to do for the Web Forms view engine. Having a standalone script parser is a huge win for other applications as well – it makes it much easier to create script or meta driven output generators for many types of applications from code/screen generators, to simple form letters to data merging applications with user customizability. For me personally this is very useful side effect and who knows maybe Microsoft will actually standardize they’re scripting engines (die T4 die!) on this engine. Razor also better fits the “view-based” approach where the view is supposed to be mostly a visual representation that doesn’t hold much, if any, code. While you can still use code, the code you do write has to be self-contained. Overall I wouldn’t be surprised if Razor will become the new standard view engine for MVC in the future – and in fact there have been announcements recently that Razor will become the default script engine in ASP.NET MVC 3.0. Razor can also be used in existing Web Forms and MVC applications, although that’s not working currently unless you manually configure the script mappings and add the appropriate assemblies. It’s possible to do it, but it’s probably better to wait until Microsoft releases official support for Razor scripts in Visual Studio. Once that happens, you can simply drop .cshtml and .vbhtml pages into an existing ASP.NET project and they will work side by side with classic ASP.NET pages. WebMatrix Development Environment To tie all of these three technologies together, Microsoft is shipping WebMatrix with an integrated development environment. An integrated gallery manager makes it easy to download and load existing projects, and then extend them with custom functionality. It seems to be a prominent goal to provide community-oriented content that can act as a starting point, be it via a custom templates or a complete standard application. The IDE includes a project manager that works with a single project and provides an integrated IDE/editor for editing the .cshtml and .vbhtml pages. A run button allows you to quickly run pages in the project manager in a variety of browsers. There’s no debugging support for code at this time. Note that Razor pages don’t require explicit compilation, so making a change, saving, and then refreshing your page in the browser is all that’s needed to see changes while testing an application locally. It’s essentially using the auto-compiling Web Project that was introduced with .NET 2.0. All code is compiled during run time into dynamically created assemblies in the ASP.NET temp folder. WebMatrix also has PHP Editing support with syntax highlighting. You can load various PHP-based applications from the WebMatrix Web Gallery directly into the IDE. Most of the Web Gallery applications are ready to install and run without further configuration, with Wizards taking you through installation of tools, dependencies, and configuration of the database as needed. 
WebMatrix leverages the Web Platform installer to pull the pieces down from websites in a tight integration of tools that worked nicely for the four or five applications I tried this out on. Click a couple of check boxes and fill in a few simple configuration options and you end up with a running application that’s ready to be customized. Nice! You can easily deploy completed applications via WebDeploy (to an IIS server) or FTP directly from within the development environment. The deploy tool also can handle automatically uploading and installing the database and all related assemblies required, making deployment a simple one-click install step. Simplified Database Access The IDE contains a database editor that can edit SQL Compact and SQL Server databases. There is also a Database helper class that facilitates database access by providing easy-to-use, high-level query execution and iteration methods: @{       var db = Database.OpenFile("FirstApp.sdf");     string sql = "select * from customers where Id > @0"; } <ul> @foreach(var row in db.Query(sql,1)){         <li>@row.FirstName @row.LastName</li> } </ul> The query function takes a SQL statement plus any number of positional (@0,@1 etc.) SQL parameters by simple values. The result is returned as a collection of rows which in turn have a row object with dynamic properties for each of the columns giving easy (though untyped) access to each of the fields. Likewise Execute and ExecuteNonQuery allow execution of more complex queries using similar parameter passing schemes. Note these queries use string-based queries rather than LINQ or Entity Framework’s strongly typed LINQ queries. While this may seem like a step back, it’s also in line with the expectations of non .NET script developers who are quite used to writing and using SQL strings in code rather than using OR/M frameworks. The only question is why was something not included from the beginning in .NET and Microsoft made developers build custom implementations of these basic building blocks. The implementation looks a lot like a DataTable-style data access mechanism, but to be fair, this is a common approach in scripting languages. This type of syntax that uses simple, static, data object methods to perform simple data tasks with one line of code are common in scripting languages and are a good match for folks working in PHP/Python, etc. Seems like Microsoft has taken great advantage of .NET 4.0’s dynamic typing to provide this sort of interface for row iteration where each row has properties for each field. FWIW, all the examples demonstrate using local SQL Compact files - I was unable to get a SQL Server connection string to work with the Database class (the connection string wasn’t accepted). However, since the code in the page is still plain old .NET, you can easily use standard ADO.NET code or even LINQ or Entity Framework models that are created outside of WebMatrix in separate assemblies as required. The good the bad the obnoxious - It’s still .NET The beauty (or curse depending on how you look at it :)) of Razor and the compilation model is that, behind it all, it’s still .NET. Although the syntax may look foreign, it’s still all .NET behind the scenes. You can easily access existing tools, helpers, and utilities simply by adding them to the project as references or to the bin folder. Razor automatically recognizes any assembly reference from assemblies in the bin folder. 
In the default configuration, Microsoft provides a host of helper functions in a Microsoft.WebPages assembly (check it out in the ASP.NET temp folder for your application), which includes a host of HTML Helpers. If you’ve used ASP.NET MVC before, a lot of the helpers should look familiar. Documentation at the moment is sketchy-there’s a very rough API reference you can check out here: http://www.asp.net/webmatrix/tutorials/asp-net-web-pages-api-reference Who needs WebMatrix? Uhm… good Question Clearly Microsoft is trying hard to create an environment with WebMatrix that is easy to use for newbie developers. The goal seems to be simplicity in providing a minimal development environment and an easy-to-use script engine/language that makes it easy to get started with. There’s also some focus on community features that can be used as starting points, such as Web Gallery applications and templates. The community features in particular are very nice and something that would be nice to eventually see in Visual Studio as well. The question is whether this is too little too late. Developers who have been clamoring for a simpler development environment on the .NET stack have mostly left for other simpler platforms like PHP or Python which are catering to the down and dirty developer. Microsoft will be hard pressed to win those folks-and other hardcore PHP developers-back. Regardless of how much you dress up a script engine fronted by the .NET Framework, it’s still the .NET Framework and all the complexity that drives it. While .NET is a fine solution in its breadth and features once you get a basic handle on the core features, the bar of entry to being productive with the .NET Framework is still pretty high. The MVC style helpers Microsoft provides are a good step in the right direction, but I suspect it’s not enough to shield new developers from having to delve much deeper into the Framework to get even basic applications built. Razor and its helpers is trying to make .NET more accessible but the reality is that in order to do useful stuff that goes beyond the handful of simple helpers you still are going to have to write some C# or VB or other .NET code. If the target is a hobby/amateur/non-programmer the learning curve isn’t made any easier by WebMatrix it’s just been shifted a tad bit further along in your development endeavor when you run out of canned components that are supplied either by Microsoft or the community. The database helpers are interesting and actually I’ve heard a lot of discussion from various developers who’ve been resisting .NET for a really long time perking up at the prospect of easier data access in .NET than the ridiculous amount of code it takes to do even simple data access with raw ADO.NET. It seems sad that such a simple concept and implementation should trigger this sort of response (especially since it’s practically trivial to create helpers like these or pick them up from countless libraries available), but there it is. It also shows that there are plenty of developers out there who are more interested in ‘getting stuff done’ easily than necessarily following the latest and greatest practices which are overkill for many development scenarios. Sometimes it seems that all of .NET is focused on the big life changing issues of development, rather than the bread and butter scenarios that many developers are interested in to get their work accomplished. 
And that in the end may be WebMatrix’s main raison d'être: To bring some focus back at Microsoft that simpler and more high level solutions are actually needed to appeal to the non-high end developers as well as providing the necessary tools for the high end developers who want to follow the latest and greatest trends. The current version of WebMatrix hits many sweet spots, but it also feels like it has a long way to go before it really can be a tool that a beginning developer or an accomplished developer can feel comfortable with. Although there are some really good ideas in the environment (like the gallery for downloading apps and components) which would be a great addition for Visual Studio as well, the rest of the development environment just feels like crippleware with required functionality missing especially debugging and Intellisense, but also general editor support. It’s not clear whether these are because the product is still in an early alpha release or whether it’s simply designed that way to be a really limited development environment. While simple can be good, nobody wants to feel left out when it comes to necessary tool support and WebMatrix just has that left out feeling to it. If anything WebMatrix’s technology pieces (which are really independent of the WebMatrix product) are what are interesting to developers in general. The compact IIS implementation is a nice improvement for development scenarios and SQL Compact 4.0 seems to address a lot of concerns that people have had and have complained about for some time with previous SQL Compact implementations. By far the most interesting and useful technology though seems to be the Razor view engine for its light weight implementation and it’s decoupling from the ASP.NET/HTTP pipeline to provide a standalone scripting/view engine that is pluggable. The first winner of this is going to be ASP.NET MVC which can now have a cleaner view model that isn’t inconsistent due to the baggage of non-implemented WebForms features that don’t work in MVC. But I expect that Razor will end up in many other applications as a scripting and code generation engine eventually. Visual Studio integration for Razor is currently missing, but is promised for a later release. The ASP.NET MVC team has already mentioned that Razor will eventually become the default MVC view engine, which will guarantee continued growth and development of this tool along those lines. And the Razor engine and support tools actually inherit many of the features that MVC pioneered, so there’s some synergy flowing both ways between Razor and MVC. As an existing ASP.NET developer who’s already familiar with Visual Studio and ASP.NET development, the WebMatrix IDE doesn’t give you anything that you want. The tools provided are minimal and provide nothing that you can’t get in Visual Studio today, except the minimal Razor syntax highlighting, so there’s little need to take a step back. With Visual Studio integration coming later there’s little reason to look at WebMatrix for tooling. It’s good to see that Microsoft is giving some thought about the ease of use of .NET as a platform For so many years, we’ve been piling on more and more new features without trying to take a step back and see how complicated the development/configuration/deployment process has become. Sometimes it’s good to take a step - or several steps - back and take another look and realize just how far we’ve come. 
WebMatrix is one of those reminders and one that likely will result in some positive changes on the platform as a whole. © Rick Strahl, West Wind Technologies, 2005-2010. Posted in ASP.NET, IIS7

    Read the article

  • OTN Latinoamérica Tour 2012

    - by Dana Singleterry
    Better late than never. Sorry for the delay in getting this content up for all of you, and thanks again for your attendance. A number of excellent questions came out of the sessions I delivered, and herein I'm providing you with the content, in PDF format, for those sessions. I'm also providing pointers to Forms to ADF integration/migration as well as some details around OAF as used in E-Business Suite and ADF. Here are the sessions delivered by location. Click on any of the links to download the session content in PDF format.

    Montevideo, Uruguay:
    - Is Oracle ADF Simpler than Oracle Forms?
    - Understanding the Fusion Development Platform
    - Building Web Data Dashboards Without Coding

    Buenos Aires, Argentina:
    - Is Oracle ADF Simpler than Oracle Forms?
    - Developing Cross Device Mobile Applications

    Sao Paulo, Brazil:
    - Understanding the Fusion Development Platform
    - Is Oracle ADF Simpler than Oracle Forms?

    A brief note on Forms integration and migration: Does your organization have an Oracle Forms application that you'd like to migrate to ADF? Or perhaps you're an Oracle Forms developer and want to modernize your application development skills? If so, you've come to the right place! This section will strive to answer common questions that arise as you move from Forms to ADF. Our Oracle Forms Statement of Direction points out that Oracle is committed to the long-term support of Oracle Forms and Reports. However, many customers feel they are outgrowing their Forms applications. Users are demanding more sophisticated and interactive user interfaces. Executives are requiring SOA-enabled applications that integrate with peripheral services. Development leads are encouraging a more modern approach to application development, including adherence to design patterns like MVC. So even as Oracle still supports Forms, the list of reasons to move off of it is becoming more compelling, and is only gaining further momentum from the fact that Oracle's own Fusion Applications are using ADF. Developers and organizations looking to align with both the technology stack and look-and-feel of Fusion Applications are choosing ADF, and thus reaping the benefits of years of best practices in enterprise application development that are baked into the ADF framework. So, if you decide to migrate off of Forms for any of these reasons, ADF is the way to go. Grant Ronald has published a video of our position on the subject, along with an ODTUG article explaining our direction. These materials explain that there are other migration tools/frameworks/paths, but the best choice is usually to follow Gartner's recommendation that if you are going to migrate off of Oracle Forms, ADF is the least risky and least costly migration path. Please visit the Oracle Forms page here.

    For details around OAF as used in E-Business Suite (EBS) and when to use ADF with EBS, you can review the following blogs from Shay Shmeltzer: To ADF or to OAF? or Can I use ADF with Oracle E-Business Suite?

    Read the article

  • SQL SERVER – Creating All New Database with Full Recovery Model

    - by pinaldave
    Sometimes, complex problems have very simple solutions. Let us see the following email which I received recently.

    "Hi Pinal, In our system, when we create new databases, by default they are all created with the Simple Recovery Model. We have to manually change the recovery model after we create the database. We used the following simple T-SQL code: CREATE DATABASE dbname. We are very frustrated with this situation. We want all our databases to have the Full Recovery Model option by default. We are considering the following methods; please suggest the most efficient one among them. 1) Creating a Policy; when it is violated, the database recovery model can be fixed 2) Triggers at the server level 3) An automated job which goes through all the databases and checks their recovery model; if the DBA has not changed the model, then the job will list the databases and change their recovery model. Also, we have a situation where we need a database in the Simple Recovery Model as well; how do we white-list them? Please suggest the best method."

    Indeed, an interesting email! The answer to their question, i.e., which is the best method to fit their needs (white list, default, etc.)? It will be NONE of the above. Here is the solution in one line, and also the easiest way: just go to your model database. Path in SSMS >> Databases >> System Databases >> model >> Right Click >> Properties >> Options >> Recovery Model: select Full from the dropdown. (The equivalent T-SQL statement is shown below.) Every newly created database takes its base template from the model database. If you create a custom SP in the model database, when you create a new database, it will automatically exist in that database. Any database that was already created before making changes in the model database will not be affected at all.

    Creating a Policy is also a good method, and I will blog about this in a separate blog post, but looking at the current specifications of the reader, I think the model database should be modified to have the Full Recovery option. While writing this blog post, I remembered another of my blog posts where the model database log file was growing drastically even though there were no transactions: SQL SERVER – Log File Growing for Model Database – model Database Log File Grew Too Big.

    NOTE: Please do not touch the model database unnecessarily. It is a strict "No." If you want to create an object that you need in all the databases, then instead of creating it in the model database, I suggest that you create a new database called maintenance and create the object there.

    Reference: Pinal Dave (http://blog.SQLAuthority.com) Filed under: PostADay, Readers Question, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, T SQL, Technology
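    For anyone who prefers to script the change rather than click through SSMS, the same setting can be applied with a single T-SQL statement (the equivalent of the dialog steps above, run once on the instance where the new databases will be created):

        -- Set the model database to Full recovery; every new database inherits this setting
        USE [master];
        ALTER DATABASE [model] SET RECOVERY FULL;

        -- Verify the change
        SELECT name, recovery_model_desc FROM sys.databases WHERE name = 'model';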

    Read the article

  • How to design a scalable notification system?

    - by Trent
    I need to write a notification system manager. Here are my requirements:

    - I need to be able to send a notification on different platforms, which may be totally different (for example, I need to be able to send either an SMS or an e-mail).
    - Sometimes the notification may be the same for all recipients on a given platform, but sometimes it may be one notification per recipient (or several) per platform.
    - Each notification can contain a platform-specific payload (for example, an MMS can contain a sound or an image).
    - The system needs to be scalable; I need to be able to send a very large number of notifications without crashing either the application or the server.
    - It is a two-step process: first a customer may type a message and choose a platform to send to, and the notification(s) should be created to be processed either in real time or later. Then the system needs to send the notification to the platform provider.

    For now, I have ended up with some thoughts, but I don't know how scalable the design will be or if it is a good design. I've thought of the following objects (in a pseudo language), starting with a generic Notification object:

        class Notification {
            String $message;
            Payload $payload;
            Collection<Recipient> $recipients;
        }

    The problem with this object is: what if I have 1,000,000 recipients? Even if the Recipient object is very small, it will take too much memory. I could also create one Notification per recipient, but some platform providers require me to send in batches, meaning I need to define one Notification with several Recipients. Each created notification could be stored in persistent storage like a DB or Redis. Would it be a good idea to aggregate these later to make sure it is scalable?

    In the second step, I need to process the notifications. But how could I route a notification to the right platform provider? Should I use an object like MMSNotification extending an abstract Notification? Or something like Notification.setType('MMS')? To allow processing a lot of notifications at the same time, I think a messaging queue system like RabbitMQ may be the right tool. Is it? It would allow me to queue a lot of notifications and have several workers pop notifications and process them. But what if I need to batch the recipients as seen above? Then I imagine a NotificationProcessor object to which I could add NotificationHandlers; each NotificationHandler would be in charge of connecting to the platform provider and performing the notification. I can also use an EventManager to allow pluggable behavior. (A rough sketch of that processor/handler shape is given below.)

    Any feedback or ideas? Thanks for giving your time. Note: I'm used to working in PHP and it is likely the language of my choice.
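    Purely to illustrate the processor/handler shape described above (the names are hypothetical, the platform calls are stubbed out, and C# is used only for illustration since the same structure translates directly to PHP or any other language), a minimal sketch might look like this:

        using System.Collections.Generic;
        using System.Linq;

        public record Recipient(string Address);
        public record Notification(string Platform, string Message, IReadOnlyList<Recipient> Recipients);

        public interface INotificationHandler
        {
            string Platform { get; }        // e.g. "SMS", "EMAIL", "MMS"
            int MaxBatchSize { get; }       // batch size limit imposed by the provider
            void Send(Notification notification, IReadOnlyList<Recipient> batch);
        }

        public class NotificationProcessor
        {
            private readonly Dictionary<string, INotificationHandler> handlers = new();

            public void Register(INotificationHandler handler) => handlers[handler.Platform] = handler;

            // Called by a queue worker (for example a RabbitMQ consumer) for each dequeued notification.
            public void Process(Notification notification)
            {
                var handler = handlers[notification.Platform];

                // Chunk splits the recipient list into provider-sized batches (one Send call per batch).
                foreach (var batch in notification.Recipients.Chunk(handler.MaxBatchSize))
                    handler.Send(notification, batch);
            }
        }

    The idea is that the queue only ever carries a notification identifier plus its platform, the recipients are streamed or paged out of the persistent store when the worker processes it, and batching stays a concern of the individual handler rather than of the queue.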

    Read the article

  • How to Create a Task From an Email Message in Outlook 2013

    - by Lori Kaufman
    If you need to do something related to an email message you received, you can easily create a task from the message in Outlook. A task can be created that contains all the content of the message without requiring you to re-enter the information. Creating a task in Outlook from an email message is different from flagging the message. As it says on Microsoft’s site: “When you flag an email message, the message appears in the To-Do List in Tasks and on the Tasks peek. However, if you delete the message, it also disappears from the To-Do List in Tasks and on the Tasks peek. Flagging a message doesn’t create a separate task.” Using the method described below to create a task from an email message, the task is separate from the message. The original message can be deleted or changed and the related task will not be affected. In Outlook, make sure the Mail section is active. If not, click Mail on the Navigation Bar at the bottom of the Outlook window. Then, click on the message you want to add to a task and drag it to Tasks on the Navigation Bar. A new Task window displays containing the email message and allowing you to enter the subject of the task, the Start and Due dates, Status, Priority, among other settings. When you have specified the settings for the task, click Save & Close in the Actions section of the Task tab. When the Task window closes, the Mail section is still active. If you move your mouse over Tasks on the Navigation Bar, a snippet from the new task displays in a popup window (the Task peek). Click Tasks to go to the Tasks section of Outlook. The To-Do List displays with your newly-added task listed in the middle pane. The right pane displays the details of the task and the contents of the message included in the task (as pictured at the beginning of this article). Click on Tasks to see a complete listing of all your tasks, including the one you just added from your email message. Note that attachments in an email message added to a new task are not copied to the task. You can also create new tasks by dragging contacts, calendar items, and notes to Tasks on the Navigation Bar.     

    Read the article

  • Mark Zuckerberg tops the list of 50 Highest Rated CEOs. 3 Indian CEOs feature in the list.

    - by Gopinath
    Mark Zuckerberg, the CEO of Facebook, is rated as the best CEO according to a report released by the popular employee reviews website Glassdoor.com. 50,000 employee reviews submitted to Glassdoor in the past year were considered for preparing the rating list, and Zuckerberg topped the list with 99 percent approval on the question "Do you approve of the way your CEO is leading the company?". Wow! That's amazing support for Zuckerberg from his employees, even though the stock market and shareholders are not with him. Coincidentally, Facebook was also rated the best company to work for by Glassdoor in a recent survey.

    Here is the list of the top 10 CEOs:

    1. Mark Zuckerberg, Facebook; 99.3% approval
    2. Bill McDermott & Jim Hagemann Snabe, SAP; 99% approval
    3. Dominic Barton, McKinsey & Company; 97% approval
    4. Jim Turley, Ernst & Young; 96% approval
    5. John E. Schlifske, Northwestern Mutual; 96% approval
    6. Frank D'Souza, Cognizant Technology Solutions; 96% approval
    7. Joe Tucci, EMC; 96% approval
    8. Paul E. Jacobs, QUALCOMM; 95% approval
    9. Richard K. Davis, U.S. Bank; 95% approval
    10. Pierre Nanterme, Accenture; 95% approval

    3 Indian CEOs in the top 50 list – TCS, Wipro & MindTree

    The list featured three Indian CEOs, and all three lead software IT services organizations in India, creating thousands of IT jobs. Natarajan Chandrasekaran, the CEO of TCS, is at 25th position; Krishnakumar Natarajan, the CEO of MindTree, is at 28th position; and Wipro's T.K. Kurien is at 44th position. Glad to see Indian CEOs joining the global ranks.

    Tech heavyweights Google, Apple, Amazon & Microsoft aren't in the top 10

    Another thing to note from this report is that the CEOs of technology heavyweights Google, Apple, Amazon and Microsoft are not in the top 10 list; looks like their employees are not really happy with their bosses. At least not as happy as their peers at Facebook. Google's CEO Larry Page is at 11th position, Jeff Bezos of Amazon at 16th position, and Tim Cook of Apple is at 18th position. Well, the Microsoft CEO is not even in the list of top 50!! You can read the complete list of ratings at Glassdoor.com's blog.

    Photo Credit: Andrew Feinberg

    Read the article

  • Equal Gifts Algorithm Problem

    - by 7Aces
    Problem Link - http://opc.iarcs.org.in/index.php/problems/EQGIFTS

    It is Lavanya's birthday and several families have been invited for the birthday party. As is customary, all of them have brought gifts for Lavanya as well as her brother Nikhil. Since their friends are all of the erudite kind, everyone has brought a pair of books. Unfortunately, the gift givers did not clearly indicate which book in the pair is for Lavanya and which one is for Nikhil. Now it is up to their father to divide up these books between them. He has decided that from each of these pairs, one book will go to Lavanya and one to Nikhil. Moreover, since Nikhil is quite a keen observer of the value of gifts, the books have to be divided in such a manner that the total value of the books for Lavanya is as close as possible to the total value of the books for Nikhil. Since Lavanya and Nikhil are kids, no book that has been gifted will have a value higher than 300 Rupees...

    For the problem, I couldn't think of anything except recursion. The code I wrote is given below. But the problem is that the code is time-inefficient and gives TLE (Time Limit Exceeded) for 9 out of 10 test cases! What would be a better approach to the problem?

    Code -

        #include<cstdio>
        #include<climits>
        #include<algorithm>
        using namespace std;

        int n, g[150][2];

        int diff(int a, int b, int f)
        {
            ++f;
            if (f == n)
            {
                if (a > b)
                {
                    return a - b;
                }
                else
                {
                    return b - a;
                }
            }
            return min(diff(a + g[f][0], b + g[f][1], f), diff(a + g[f][1], b + g[f][0], f));
        }

        int main()
        {
            int i;
            scanf("%d", &n);
            for (i = 0; i < n; ++i)
            {
                scanf("%d%d", &g[i][0], &g[i][1]);
            }
            printf("%d", diff(g[0][0], g[0][1], 0));
        }

    Note - It is just a practice question, & is not part of a competition.

    Read the article

  • How to Assign a Default Signature in Outlook 2013

    - by Lori Kaufman
    If you sign most of your emails the same way, you can easily specify a default signature to automatically insert into new email messages and replies and forwards. This can be done directly in the Signature editor in Outlook 2013. We recently showed you how to create a new signature. You can also create multiple signatures for each email account and define a different default signature for each account. When you change your sending account when composing a new email message, the signature would change automatically as well. NOTE: To have a signature added automatically to new email messages and replies and forwards, you must have a default signature assigned in each email account. If you don’t want a signature in every account, you can create a signature with just a space, a full stop, dashes, or other generic characters. To assign a default signature, open Outlook and click the File tab. Click Options in the menu list on the left side of the Account Information screen. On the Outlook Options dialog box, click Mail in the list of options on the left side of the dialog box. On the Mail screen, click Signatures in the Compose messages section. To change the default signature for an email account, select the account from the E-mail account drop-down list on the top, right side of the dialog box under Choose default signature. Then, select the signature you want to use by default for New messages and for Replies/forwards from the other two drop-down lists. Click OK to accept your changes and close the dialog box. Click OK on the Outlook Options dialog box to close it. You can also access the Signatures and Stationery dialog box from the Message window for new emails and drafts. Click New Email on the Home tab or double-click an email in the Drafts folder to access the Message window. Click Signature in the Include section of the New Mail Message window and select Signatures from the drop-down menu. In the next few days, we will be covering how to use the features of the signature editor next, and then how to insert and change signatures manually, backup and restore your signatures, and modify a signature for use in plain text emails.     

    Read the article

  • Mounting a Microsoft Azure CloudDrive in a VMRole

    - by SeanBarlow
    Mounting a drive in a VMRole is a little more complicated than in a web or worker role. The web and worker roles offer OnStart and OnStop events, which you can use to mount or unmount your drives. The VMRole does not have these same events, so you have to provide another way for the drives to be mounted or unmounted. The problem I have run into is: what if you have multiple drives and you only want to mount certain drives? How do you let your user mount the drive? I am not going to go into details on what kind of GUI to present to the user; I have done this in a simple WPF application as well as a console application.

    We are going to need to get the storage account details. One thing to note: when you are mounting cloud drives you cannot use HTTPS and have to use HTTP. We force the use of HTTP by passing false when we create the CloudStorageAccount.

        StorageCredentialsAccountAndKey credentials = new StorageCredentialsAccountAndKey("AccountName", "AccountKey");
        CloudStorageAccount storageAccount = new CloudStorageAccount(credentials, false);

    Next we need to get a reference to the container.

        var blobClient = storageAccount.CreateCloudBlobClient();
        var container = blobClient.GetContainerReference("ContainerName");

    Now we need to get a list of the drives in the container.

        var drives = container.ListBlobs();

    Now that we have a list of the drives in the container, we can let the user choose which drive they want to mount. I am just selecting the first drive in the list for the example and getting the Uri of the drive.

        var driveUri = drives.First().Uri;

    Now that we have the Uri, we need to get the reference to the drive.

        var drive = new CloudDrive(driveUri, storageAccount.Credentials);

    Now all that is left is to mount the drive.

        var driveLetter = drive.Mount(0, DriveMountOptions.None);

    To unmount the drive, all you have to do is call Unmount on the drive.

        drive.Unmount();

    You do need to make sure you unmount the drives when you are done with them. I have run into issues with the drives being locked until the VMRole is rebooted. I have also managed to have a drive be permanently locked, and I was forced to delete it and upload it again. I have been unable to reproduce the permanent lock, but I am still trying. The CloudDrive class provides a handy method to retrieve all the mounted drives in the role.

        foreach (var drive in CloudDrive.GetMountedDrives())
        {
            var mountedDrive = Account.CreateCloudDrive(drive.Value.PathAndQuery);
            mountedDrive.Unmount();
        }
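    Putting the fragments above together, a minimal end-to-end sketch might look like the following. The helper method, its name, and the account/container parameters are hypothetical; it uses only the calls already shown above and assumes the same assembly references and namespaces (plus System.Linq for First()), with error handling omitted:

        // Hypothetical helper: mounts the first drive found in a container and returns its drive letter.
        public static string MountFirstDrive(string accountName, string accountKey, string containerName)
        {
            var credentials = new StorageCredentialsAccountAndKey(accountName, accountKey);
            var storageAccount = new CloudStorageAccount(credentials, false);   // false = HTTP, required for cloud drives

            var blobClient = storageAccount.CreateCloudBlobClient();
            var container = blobClient.GetContainerReference(containerName);

            var driveUri = container.ListBlobs().First().Uri;                   // pick the first drive blob in the container
            var drive = new CloudDrive(driveUri, storageAccount.Credentials);

            return drive.Mount(0, DriveMountOptions.None);                      // returns the drive letter
        }

    The caller is then responsible for calling Unmount on the returned drive when it is done, for the locking reasons described above.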

    Read the article

  • How to Quickly Add Multiple IP Addresses to Windows Servers

    - by Sysadmin Geek
    If you have ever added multiple IP addresses to a single Windows server, you know that going through the graphical interface is an incredible pain: each IP must be added manually, each in a new dialog box. Here's a simple solution. Needless to say, the manual approach can be incredibly monotonous and time consuming if you are adding more than a few IP addresses. Thankfully, there is a much easier way which allows you to add an entire subnet (or more) in seconds.

    Adding an IP Address from the Command Line

    Windows includes the "netsh" command which allows you to configure just about any aspect of your network connections. If you view the accepted parameters using "netsh /?" you will be presented with a list of commands, each of which has its own list of commands (and so on). For the purpose of adding IP addresses, we are interested in this string of parameters: netsh interface ipv4 add address

    Note: For Windows Server 2003/XP and earlier, "ipv4" should be replaced with just "ip" in the netsh command.

    If you view the help information, you can see the full list of accepted parameters, but for the most part what you will be interested in is something like this:

        netsh interface ipv4 add address "Local Area Connection" 192.168.1.2 255.255.255.0

    The above command adds the IP address 192.168.1.2 (with subnet mask 255.255.255.0) to the connection titled "Local Area Connection".

    Adding Multiple IP Addresses at Once

    When we combine the netsh command with the FOR /L loop, we can quickly add multiple IP addresses. The syntax for the FOR /L loop looks like this:

        FOR /L %variable IN (start,step,end) DO command

    So we could easily add every IP address from an entire subnet using this command:

        FOR /L %A IN (0,1,255) DO netsh interface ipv4 add address "Local Area Connection" 192.168.1.%A 255.255.255.0

    This command takes about 20 seconds to run, where adding the same number of IP addresses manually would take significantly longer.

    A Quick Demonstration

    Here is the initial configuration on our network adapter:

        ipconfig /all

    Now run netsh from within a FOR /L loop to add IPs 192.168.1.10-20 to this adapter:

        FOR /L %A IN (10,1,20) DO netsh interface ipv4 add address "Local Area Connection" 192.168.1.%A 255.255.255.0

    After the above command is run, viewing the IP configuration of the adapter shows the newly added addresses.

    Read the article

  • Extending the ADF Controller exception handler

    - by frank.nimphius
    The Oracle ADF Controller provides a declarative option for developers to define a view activity, method activity or router activity to handle exceptions in bounded or unbounded task flows. Exception handling, however, is for exceptions only and does not handle all types of Throwable. Furthermore, exceptions that occur during the JSF RENDER RESPONSE phase are not looked at either, as it is considered too late in the cycle. For developers who want to try to handle unhandled exceptions in ADF Controller themselves, it is possible to extend the default exception handling while still leveraging the declarative configuration.

    To add your own exception handler:

    · Create a Java class that extends ExceptionHandler
    · Create a text file with the name "oracle.adf.view.rich.context.Exceptionhandler" (without the quotes) and store it in .adf\META-INF\services (you need to create the "services" folder)
    · In the file, add the absolute name of your custom exception handler class (package name and class name without the ".class" extension); an example of this one-line file is shown further below

    For any exception you don't handle in your custom exception handler, just re-throw it for the default handler to give it a try …

        import oracle.adf.view.rich.context.ExceptionHandler;

        public class MyCustomExceptionHandler extends ExceptionHandler {
            public MyCustomExceptionHandler() {
                super();
            }

            public void handleException(FacesContext facesContext,
                                        Throwable throwable, PhaseId phaseId)
                                        throws Throwable {
                String error_message;
                error_message = throwable.getMessage();
                // check the error message and handle it if you can
                if ( … ) {
                    // handle the exception
                    …
                }
                else {
                    // delegate to the default ADFc exception handler
                    throw throwable;
                }
            }
        }

    Note however that it is recommended to first try and handle exceptions with the ADF Controller default exception handling mechanism. In the past, I've seen attempts on OTN to handle regular application use cases with custom exception handlers where there was no need to override the exception handler. So don't go for this solution too quickly and always think of alternative solutions. Sometimes a try-catch-finally block does it better than sophisticated web exception handling.
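    For reference, the service registration file mentioned above contains nothing but the fully qualified class name. Assuming the handler class shown here lives in a package named, say, sample.view (a hypothetical name), the single line in .adf\META-INF\services\oracle.adf.view.rich.context.Exceptionhandler would be:

        sample.view.MyCustomExceptionHandler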

    Read the article

  • How do you play or record audio (to .WAV) on Linux in C++? [closed]

    - by Jacky Alcine
    Hello, I've been looking for a way to play and record audio on a Linux (preferably Ubuntu) system. I'm currently working on a front-end to a voice recognition toolkit that'll automate a few steps required to adapt a voice model for PocketSphinx and Julius. Suggestions of alternative means of audio input/output are welcome, as well as a fix to the bug shown below. Here is the current code I've used so far to play a .WAV file: void Engine::sayText ( const string OutputText ) { string audioUri = "temp.wav"; string requestUri = this->getRequestUri( OPENMARY_PROCESS , OutputText.c_str( ) ); int error , audioStream; pa_simple *pulseConnection; pa_sample_spec simpleSpecs; simpleSpecs.format = PA_SAMPLE_S16LE; simpleSpecs.rate = 44100; simpleSpecs.channels = 2; eprintf( E_MESSAGE , "Generating audio for '%s' from '%s'..." , OutputText.c_str( ) , requestUri.c_str( ) ); FILE* audio = this->getHttpFile( requestUri , audioUri ); fclose(audio); eprintf( E_MESSAGE , "Generated audio."); if ( ( audioStream = open( audioUri.c_str( ) , O_RDONLY ) ) < 0 ) { fprintf( stderr , __FILE__": open() failed: %s\n" , strerror( errno ) ); goto finish; } if ( dup2( audioStream , STDIN_FILENO ) < 0 ) { fprintf( stderr , __FILE__": dup2() failed: %s\n" , strerror( errno ) ); goto finish; } close( audioStream ); pulseConnection = pa_simple_new( NULL , "AudioPush" , PA_STREAM_PLAYBACK , NULL , "openMary C++" , &simpleSpecs , NULL , NULL , &error ); for (int i = 0;;i++ ) { const int bufferSize = 1024; uint8_t audioBuffer[bufferSize]; ssize_t r; eprintf( E_MESSAGE , "Buffering %d..",i); /* Read some data ... */ if ( ( r = read( STDIN_FILENO , audioBuffer , sizeof (audioBuffer ) ) ) <= 0 ) { if ( r == 0 ) /* EOF */ break; eprintf( E_ERROR , __FILE__": read() failed: %s\n" , strerror( errno ) ); if ( pulseConnection ) pa_simple_free( pulseConnection ); } /* ... and play it */ if ( pa_simple_write( pulseConnection , audioBuffer , ( size_t ) r , &error ) < 0 ) { fprintf( stderr , __FILE__": pa_simple_write() failed: %s\n" , pa_strerror( error ) ); if ( pulseConnection ) pa_simple_free( pulseConnection ); } usleep(2); } /* Make sure that every single sample was played */ if ( pa_simple_drain( pulseConnection , &error ) < 0 ) { fprintf( stderr , __FILE__": pa_simple_drain() failed: %s\n" , pa_strerror( error ) ); if ( pulseConnection ) pa_simple_free( pulseConnection ); } } NOTE: If you want the rest of the code to this file, you can download it here directly from Launchpad.

    Read the article

  • Dynamically loading Assemblies to reduce Runtime Depencies

    - by Rick Strahl
    I've been working on a request to the West Wind Application Configuration library to add JSON support. The config library is a very easy to use code-first approach to configuration: You create a class that holds the configuration data that inherits from a base configuration class, and then assign a persistence provider at runtime that determines where and how the configuration data is store. Currently the library supports .NET Configuration stores (web.config/app.config), XML files, SQL records and string storage.About once a week somebody asks me about JSON support and I've deflected this question for the longest time because frankly I think that JSON as a configuration store doesn't really buy a heck of a lot over XML. Both formats require the user to perform some fixup of the plain configuration data - in XML into XML tags, with JSON using JSON delimiters for properties and property formatting rules. Sure JSON is a little less verbose and maybe a little easier to read if you have hierarchical data, but overall the differences are pretty minor in my opinion. And yet - the requests keep rolling in.Hard Link Issues in a Component LibraryAnother reason I've been hesitant is that I really didn't want to pull in a dependency on an external JSON library - in this case JSON.NET - into the core library. If you're not using JSON.NET elsewhere I don't want a user to have to require a hard dependency on JSON.NET unless they want to use the JSON feature. JSON.NET is also sensitive to versions and doesn't play nice with multiple versions when hard linked. For example, when you have a reference to V4.4 in your project but the host application has a reference to version 4.5 you can run into assembly load problems. NuGet's Update-Package can solve some of this *if* you can recompile, but that's not ideal for a component that's supposed to be just plug and play. This is no criticism of JSON.NET - this really applies to any dependency that might change.  So hard linking the DLL can be problematic for a number reasons, but the primary reason is to not force loading of JSON.NET unless you actually need it when you use the JSON configuration features of the library.Enter Dynamic LoadingSo rather than adding an assembly reference to the project, I decided that it would be better to dynamically load the DLL at runtime and then use dynamic typing to access various classes. This allows me to run without a hard assembly reference and allows more flexibility with version number differences now and in the future.But there are also a couple of downsides:No assembly reference means only dynamic access - no compiler type checking or IntellisenseRequirement for the host application to have reference to JSON.NET or else get runtime errorsThe former is minor, but the latter can be problematic. Runtime errors are always painful, but in this case I'm willing to live with this. If you want to use JSON configuration settings JSON.NET needs to be loaded in the project. If this is a Web project, it'll likely be there already.So there are a few things that are needed to make this work:Dynamically create an instance and optionally attempt to load an Assembly (if not loaded)Load types into dynamic variablesUse Reflection for a few tasks like statics/enumsThe dynamic keyword in C# makes the formerly most difficult Reflection part - method calls and property assignments - fairly painless. But as cool as dynamic is it doesn't handle all aspects of Reflection. 
Specifically it doesn't deal with object activation, truly dynamic (string based) member activation or accessing of non instance members, so there's still a little bit of work left to do with Reflection.Dynamic Object InstantiationThe first step in getting the process rolling is to instantiate the type you need to work with. This might be a two step process - loading the instance from a string value, since we don't have a hard type reference and potentially having to load the assembly. Although the host project might have a reference to JSON.NET, that instance might have not been loaded yet since it hasn't been accessed yet. In ASP.NET this won't be a problem, since ASP.NET preloads all referenced assemblies on AppDomain startup, but in other executable project, assemblies are just in time loaded only when they are accessed.Instantiating a type is a two step process: Finding the type reference and then activating it. Here's the generic code out of my ReflectionUtils library I use for this:/// <summary> /// Creates an instance of a type based on a string. Assumes that the type's /// </summary> /// <param name="typeName">Common name of the type</param> /// <param name="args">Any constructor parameters</param> /// <returns></returns> public static object CreateInstanceFromString(string typeName, params object[] args) { object instance = null; Type type = null; try { type = GetTypeFromName(typeName); if (type == null) return null; instance = Activator.CreateInstance(type, args); } catch { return null; } return instance; } /// <summary> /// Helper routine that looks up a type name and tries to retrieve the /// full type reference in the actively executing assemblies. /// </summary> /// <param name="typeName"></param> /// <returns></returns> public static Type GetTypeFromName(string typeName) { Type type = null; // Let default name binding find it type = Type.GetType(typeName, false); if (type != null) return type; // look through assembly list var assemblies = AppDomain.CurrentDomain.GetAssemblies(); // try to find manually foreach (Assembly asm in assemblies) { type = asm.GetType(typeName, false); if (type != null) break; } return type; } To use this for loading JSON.NET I have a small factory function that instantiates JSON.NET and sets a bunch of configuration settings on the generated object. The startup code also looks for failure and tries loading up the assembly when it fails since that's the main reason the load would fail. Finally it also caches the loaded instance for reuse (according to James the JSON.NET instance is thread safe and quite a bit faster when cached). 
Here's what the factory function looks like in JsonSerializationUtils:/// <summary> /// Dynamically creates an instance of JSON.NET /// </summary> /// <param name="throwExceptions">If true throws exceptions otherwise returns null</param> /// <returns>Dynamic JsonSerializer instance</returns> public static dynamic CreateJsonNet(bool throwExceptions = true) { if (JsonNet != null) return JsonNet; lock (SyncLock) { if (JsonNet != null) return JsonNet; // Try to create instance dynamic json = ReflectionUtils.CreateInstanceFromString("Newtonsoft.Json.JsonSerializer"); if (json == null) { try { var ass = AppDomain.CurrentDomain.Load("Newtonsoft.Json"); json = ReflectionUtils.CreateInstanceFromString("Newtonsoft.Json.JsonSerializer"); } catch (Exception ex) { if (throwExceptions) throw; return null; } } if (json == null) return null; json.ReferenceLoopHandling = (dynamic) ReflectionUtils.GetStaticProperty("Newtonsoft.Json.ReferenceLoopHandling", "Ignore"); // Enums as strings in JSON dynamic enumConverter = ReflectionUtils.CreateInstanceFromString("Newtonsoft.Json.Converters.StringEnumConverter"); json.Converters.Add(enumConverter); JsonNet = json; } return JsonNet; }This code's purpose is to return a fully configured JsonSerializer instance. As you can see the code tries to create an instance and when it fails tries to load the assembly, and then re-tries loading.Once the instance is loaded some configuration occurs on it. Specifically I set the ReferenceLoopHandling option to not blow up immediately when circular references are encountered. There are a host of other small config setting that might be useful to set, but the default seem to be good enough in recent versions. Note that I'm setting ReferenceLoopHandling which requires an Enum value to be set. There's no real easy way (short of using the cardinal numeric value) to set a property or pass parameters from static values or enums. This means I still need to use Reflection to make this work. I'm using the same ReflectionUtils class I previously used to handle this for me. The function looks up the type and then uses Type.InvokeMember() to read the static property.Another feature I need is have Enum values serialized as strings rather than numeric values which is the default. To do this I can use the StringEnumConverter to convert enums to strings by adding it to the Converters collection.As you can see there's still a bit of Reflection to be done even in C# 4+ with dynamic, but with a few helpers this process is relatively painless.Doing the actual JSON ConversionFinally I need to actually do my JSON conversions. For the Utility class I need serialization that works for both strings and files so I created four methods that handle these tasks two each for serialization and deserialization for string and file.Here's what the File Serialization looks like:/// <summary> /// Serializes an object instance to a JSON file. 
/// </summary> /// <param name="value">the value to serialize</param> /// <param name="fileName">Full path to the file to write out with JSON.</param> /// <param name="throwExceptions">Determines whether exceptions are thrown or false is returned</param> /// <param name="formatJsonOutput">if true pretty-formats the JSON with line breaks</param> /// <returns>true or false</returns> public static bool SerializeToFile(object value, string fileName, bool throwExceptions = false, bool formatJsonOutput = false) { dynamic writer = null; FileStream fs = null; try { Type type = value.GetType(); var json = CreateJsonNet(throwExceptions); if (json == null) return false; fs = new FileStream(fileName, FileMode.Create); var sw = new StreamWriter(fs, Encoding.UTF8); writer = Activator.CreateInstance(JsonTextWriterType, sw); if (formatJsonOutput) writer.Formatting = (dynamic)Enum.Parse(FormattingType, "Indented"); writer.QuoteChar = '"'; json.Serialize(writer, value); } catch (Exception ex) { Debug.WriteLine("JsonSerializer Serialize error: " + ex.Message); if (throwExceptions) throw; return false; } finally { if (writer != null) writer.Close(); if (fs != null) fs.Close(); } return true; }You can see more of the dynamic invocation in this code. First I grab the dynamic JsonSerializer instance using the CreateJsonNet() method shown earlier which returns a dynamic. I then create a JsonTextWriter and configure a couple of enum settings on it, and then call Serialize() on the serializer instance with the JsonTextWriter that writes the output to disk. Although this code is dynamic it's still fairly short and readable.For full circle operation here's the DeserializeFromFile() version:/// <summary> /// Deserializes an object from file and returns a reference. /// </summary> /// <param name="fileName">name of the file to serialize to</param> /// <param name="objectType">The Type of the object. Use typeof(yourobject class)</param> /// <param name="binarySerialization">determines whether we use Xml or Binary serialization</param> /// <param name="throwExceptions">determines whether failure will throw rather than return null on failure</param> /// <returns>Instance of the deserialized object or null. Must be cast to your object type</returns> public static object DeserializeFromFile(string fileName, Type objectType, bool throwExceptions = false) { dynamic json = CreateJsonNet(throwExceptions); if (json == null) return null; object result = null; dynamic reader = null; FileStream fs = null; try { fs = new FileStream(fileName, FileMode.Open, FileAccess.Read); var sr = new StreamReader(fs, Encoding.UTF8); reader = Activator.CreateInstance(JsonTextReaderType, sr); result = json.Deserialize(reader, objectType); reader.Close(); } catch (Exception ex) { Debug.WriteLine("JsonNetSerialization Deserialization Error: " + ex.Message); if (throwExceptions) throw; return null; } finally { if (reader != null) reader.Close(); if (fs != null) fs.Close(); } return result; }This code is a little more compact since there are no prettifying options to set. 
Here JsonTextReader is created dynamically and it receives the output from the Deserialize() operation on the serializer.You can take a look at the full JsonSerializationUtils.cs file on GitHub to see the rest of the operations, but the string operations are very similar - the code is fairly repetitive.These generic serialization utilities isolate the dynamic serialization logic that has to deal with the dynamic nature of JSON.NET, and any code that uses these functions is none the wiser that JSON.NET is dynamically loaded.Using the JsonSerializationUtils WrapperThe final consumer of the SerializationUtils wrapper is an actual ConfigurationProvider, that is responsible for handling reading and writing JSON values to and from files. The provider is simple a small wrapper around the SerializationUtils component and there's very little code to make this work now:The whole provider looks like this:/// <summary> /// Reads and Writes configuration settings in .NET config files and /// sections. Allows reading and writing to default or external files /// and specification of the configuration section that settings are /// applied to. /// </summary> public class JsonFileConfigurationProvider<TAppConfiguration> : ConfigurationProviderBase<TAppConfiguration> where TAppConfiguration: AppConfiguration, new() { /// <summary> /// Optional - the Configuration file where configuration settings are /// stored in. If not specified uses the default Configuration Manager /// and its default store. /// </summary> public string JsonConfigurationFile { get { return _JsonConfigurationFile; } set { _JsonConfigurationFile = value; } } private string _JsonConfigurationFile = string.Empty; public override bool Read(AppConfiguration config) { var newConfig = JsonSerializationUtils.DeserializeFromFile(JsonConfigurationFile, typeof(TAppConfiguration)) as TAppConfiguration; if (newConfig == null) { if(Write(config)) return true; return false; } DecryptFields(newConfig); DataUtils.CopyObjectData(newConfig, config, "Provider,ErrorMessage"); return true; } /// <summary> /// Return /// </summary> /// <typeparam name="TAppConfig"></typeparam> /// <returns></returns> public override TAppConfig Read<TAppConfig>() { var result = JsonSerializationUtils.DeserializeFromFile(JsonConfigurationFile, typeof(TAppConfig)) as TAppConfig; if (result != null) DecryptFields(result); return result; } /// <summary> /// Write configuration to XmlConfigurationFile location /// </summary> /// <param name="config"></param> /// <returns></returns> public override bool Write(AppConfiguration config) { EncryptFields(config); bool result = JsonSerializationUtils.SerializeToFile(config, JsonConfigurationFile,false,true); // Have to decrypt again to make sure the properties are readable afterwards DecryptFields(config); return result; } }This incidentally demonstrates how easy it is to create a new provider for the West Wind Application Configuration component. Simply implementing 3 methods will do in most cases.Note this code doesn't have any dynamic dependencies - all that's abstracted away in the JsonSerializationUtils(). From here on, serializing JSON is just a matter of calling the static methods on the SerializationUtils class.Already, there are several other places in some other tools where I use JSON serialization this is coming in very handy. With a couple of lines of code I was able to add JSON.NET support to an older AJAX library that I use replacing quite a bit of code that was previously in use. 
And for any other manual JSON operations (in a couple of apps I use JSON serialization for 'blob' like document storage) this is also going to be handy.
Performance?
Some of you might be thinking that using dynamic and Reflection can't be good for performance. And you'd be right… In performing some informal testing it looks like the performance of the native code is nearly twice as fast as the dynamic code. Most of the slowness is attributable to type lookups. To test I created a native class that uses an actual reference to JSON.NET, and performance was consistently around 85-90% faster with the referenced code. That being said though - I serialized 10,000 objects in 80ms vs. 45ms, so this is hardly slouchy. For the configuration component speed is not that important because both read and write operations typically happen once on first access and then only every once in a while. But for other operations - say a serializer trying to handle AJAX requests on a Web Server - one would be well served to create a hard dependency.
Dynamic Loading - Worth it?
On occasion dynamic loading makes sense. But there's a price to be paid in added code complexity and a performance hit. For some operations that are not pivotal to a component or application and are only used under certain circumstances, dynamic loading can be beneficial to avoid having to ship extra files and weighing down distributions. These days, when you create new projects in Visual Studio with 30 assemblies before you even add your own code, trying to keep file counts under control seems a good idea. It's not the kind of thing you do on a regular basis, but when needed it can be a useful tool. Hopefully some of you find this information useful…
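As a rough usage sketch (the config instance, the MyAppConfiguration class and the file name below are hypothetical), calling code never references JSON.NET directly and simply uses the static wrapper methods described above:
// serialize to disk, with indented ("pretty") formatting
bool ok = JsonSerializationUtils.SerializeToFile(config, "appconfig.json", false, true);
// read it back - the result comes back as object and is cast to the expected type
var loaded = JsonSerializationUtils.DeserializeFromFile("appconfig.json", typeof(MyAppConfiguration)) as MyAppConfiguration;
If JSON.NET cannot be loaded, these methods return false or null (or rethrow, when the throwExceptions parameter is true), matching the behavior of the code shown above.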

    Read the article

  • Follow the How-To Geek Writers on Twitter

    - by The Geek
    Ever wonder what the How-To Geek writers are up to? If you’re a Twitter user, you can connect with us directly. We’ve also setup a new @howtogeeknews account if you just want to keep up with the latest articles. So if you want just the latest articles… click the image below and then click the Follow button. Otherwise, if you’d like to connect with the rest of us that actually use Twitter, you can follow each of us separately through the links below. Note: Let’s try to stick to discussion, and leave the tech support questions for our forum. the How-To Geek (that’s me!) - @howtogeek Matthew Guay – @maguay Trevor Bekolay – @TrevorBekolay Asian Angel – @asian_angel Andrew Gehman – @andrewgehman Some of the HTG writers are not currently using Twitter… but I’m gonna list their accounts just in case you wanted to follow them. Mark Virtue – @markvirtue Mysticgeek – @mysticgeek (He’s far too productive to waste time on Twitter!) Enjoy the conversation!

    Read the article

  • Using the ASP.NET Cache to cache data in a Model or Business Object layer, without a dependency on System.Web in the layer - Part One.

    - by Rhames
    ASP.NET applications can make use of the System.Web.Caching.Cache object to cache data and prevent repeated expensive calls to a database or other store. However, ideally an application should make use of caching at the point where data is retrieved from the database, which typically is inside a Business Objects or Model layer. One of the key features of using a UI pattern such as Model-View-Presenter (MVP) or Model-View-Controller (MVC) is that the Model and Presenter (or Controller) layers are developed without any knowledge of the UI layer. Introducing a dependency on System.Web into the Model layer would break this independence of the Model from the View. This article gives a solution to this problem, using dependency injection to inject the caching implementation into the Model layer at runtime. This allows caching to be used within the Model layer, without any knowledge of the actual caching mechanism that will be used. Create a sample application to use the caching solution Create a test SQL Server database This solution uses a SQL Server database with the same Sales data used in my previous post on calculating running totals. The advantage of using this data is that it gives nice slow queries that will exaggerate the effect of using caching! To create the data, first create a new SQL database called CacheSample. Next run the following script to create the Sale table and populate it: USE CacheSample GO   CREATE TABLE Sale(DayCount smallint, Sales money) CREATE CLUSTERED INDEX ndx_DayCount ON Sale(DayCount) go INSERT Sale VALUES (1,120) INSERT Sale VALUES (2,60) INSERT Sale VALUES (3,125) INSERT Sale VALUES (4,40)   DECLARE @DayCount smallint, @Sales money SET @DayCount = 5 SET @Sales = 10   WHILE @DayCount < 5000  BEGIN  INSERT Sale VALUES (@DayCount,@Sales)  SET @DayCount = @DayCount + 1  SET @Sales = @Sales + 15  END Next create a stored procedure to calculate the running total, and return a specified number of rows from the Sale table, using the following script: USE [CacheSample] GO   SET ANSI_NULLS ON GO   SET QUOTED_IDENTIFIER ON GO   -- ============================================= -- Author:        Robin -- Create date: -- Description:   -- ============================================= CREATE PROCEDURE [dbo].[spGetRunningTotals]       -- Add the parameters for the stored procedure here       @HighestDayCount smallint = null AS BEGIN       -- SET NOCOUNT ON added to prevent extra result sets from       -- interfering with SELECT statements.       
SET NOCOUNT ON;         IF @HighestDayCount IS NULL             SELECT @HighestDayCount = MAX(DayCount) FROM dbo.Sale                   DECLARE @SaleTbl TABLE (DayCount smallint, Sales money, RunningTotal money)         DECLARE @DayCount smallint,                   @Sales money,                   @RunningTotal money         SET @RunningTotal = 0       SET @DayCount = 0         DECLARE rt_cursor CURSOR       FOR       SELECT DayCount, Sales       FROM Sale       ORDER BY DayCount         OPEN rt_cursor         FETCH NEXT FROM rt_cursor INTO @DayCount,@Sales         WHILE @@FETCH_STATUS = 0 AND @DayCount <= @HighestDayCount        BEGIN        SET @RunningTotal = @RunningTotal + @Sales        INSERT @SaleTbl VALUES (@DayCount,@Sales,@RunningTotal)        FETCH NEXT FROM rt_cursor INTO @DayCount,@Sales        END         CLOSE rt_cursor       DEALLOCATE rt_cursor         SELECT DayCount, Sales, RunningTotal       FROM @SaleTbl   END   GO   Create the Sample ASP.NET application In Visual Studio create a new solution and add a class library project called CacheSample.BusinessObjects and an ASP.NET web application called CacheSample.UI. The CacheSample.BusinessObjects project will contain a single class to represent a Sale data item, with all the code to retrieve the sales from the database included in it for simplicity (normally I would at least have a separate Repository or other object that is responsible for retrieving data, and probably a data access layer as well, but for this sample I want to keep it simple). The C# code for the Sale class is shown below: using System; using System.Collections.Generic; using System.Data; using System.Data.SqlClient;   namespace CacheSample.BusinessObjects {     public class Sale     {         public Int16 DayCount { get; set; }         public decimal Sales { get; set; }         public decimal RunningTotal { get; set; }           public static IEnumerable<Sale> GetSales(int? 
highestDayCount)         {             List<Sale> sales = new List<Sale>();               SqlParameter highestDayCountParameter = new SqlParameter("@HighestDayCount", SqlDbType.SmallInt);             if (highestDayCount.HasValue)                 highestDayCountParameter.Value = highestDayCount;             else                 highestDayCountParameter.Value = DBNull.Value;               string connectionStr = System.Configuration.ConfigurationManager .ConnectionStrings["CacheSample"].ConnectionString;               using(SqlConnection sqlConn = new SqlConnection(connectionStr))             using (SqlCommand sqlCmd = sqlConn.CreateCommand())             {                 sqlCmd.CommandText = "spGetRunningTotals";                 sqlCmd.CommandType = CommandType.StoredProcedure;                 sqlCmd.Parameters.Add(highestDayCountParameter);                   sqlConn.Open();                   using (SqlDataReader dr = sqlCmd.ExecuteReader())                 {                     while (dr.Read())                     {                         Sale newSale = new Sale();                         newSale.DayCount = dr.GetInt16(0);                         newSale.Sales = dr.GetDecimal(1);                         newSale.RunningTotal = dr.GetDecimal(2);                           sales.Add(newSale);                     }                 }             }               return sales;         }     } }   The static GetSale() method makes a call to the spGetRunningTotals stored procedure and then reads each row from the returned SqlDataReader into an instance of the Sale class, it then returns a List of the Sale objects, as IEnnumerable<Sale>. A reference to System.Configuration needs to be added to the CacheSample.BusinessObjects project so that the connection string can be read from the web.config file. In the CacheSample.UI ASP.NET project, create a single web page called ShowSales.aspx, and make this the default start up page. This page will contain a single button to call the GetSales() method and a label to display the results. 
The html mark up and the C# code behind are shown below: ShowSales.aspx <%@ Page Language="C#" AutoEventWireup="true" CodeBehind="ShowSales.aspx.cs" Inherits="CacheSample.UI.ShowSales" %>   <!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd">   <html xmlns="http://www.w3.org/1999/xhtml"> <head runat="server">     <title>Cache Sample - Show All Sales</title> </head> <body>     <form id="form1" runat="server">     <div>         <asp:Button ID="btnTest1" runat="server" onclick="btnTest1_Click"             Text="Get All Sales" />         &nbsp;&nbsp;&nbsp;         <asp:Label ID="lblResults" runat="server"></asp:Label>         </div>     </form> </body> </html>   ShowSales.aspx.cs using System; using System.Collections.Generic; using System.Linq; using System.Web; using System.Web.UI; using System.Web.UI.WebControls;   using CacheSample.BusinessObjects;   namespace CacheSample.UI {     public partial class ShowSales : System.Web.UI.Page     {         protected void Page_Load(object sender, EventArgs e)         {         }           protected void btnTest1_Click(object sender, EventArgs e)         {             System.Diagnostics.Stopwatch stopWatch = new System.Diagnostics.Stopwatch();             stopWatch.Start();               var sales = Sale.GetSales(null);               var lastSales = sales.Last();               stopWatch.Stop();               lblResults.Text = string.Format( "Count of Sales: {0}, Last DayCount: {1}, Total Sales: {2}. Query took {3} ms", sales.Count(), lastSales.DayCount, lastSales.RunningTotal, stopWatch.ElapsedMilliseconds);         }       } }   Finally we need to add a connection string to the CacheSample SQL Server database, called CacheSample, to the web.config file: <?xmlversion="1.0"?>   <configuration>    <connectionStrings>     <addname="CacheSample"          connectionString="data source=.\SQLEXPRESS;Integrated Security=SSPI;Initial Catalog=CacheSample"          providerName="System.Data.SqlClient" />  </connectionStrings>    <system.web>     <compilationdebug="true"targetFramework="4.0" />  </system.web>   </configuration>   Run the application and click the button a few times to see how long each call to the database takes. On my system, each query takes about 450ms. Next I shall look at a solution to use the ASP.NET caching to cache the data returned by the query, so that subsequent requests to the GetSales() method are much faster. Adding Data Caching Support I am going to create my caching support in a separate project called CacheSample.Caching, so the next step is to add a class library to the solution. We shall be using the application configuration to define the implementation of our caching system, so we need a reference to System.Configuration adding to the project. ICacheProvider<T> Interface The first step in adding caching to our application is to define an interface, called ICacheProvider, in the CacheSample.Caching project, with methods to retrieve any data from the cache or to retrieve the data from the data source if it is not present in the cache. Dependency Injection will then be used to inject an implementation of this interface at runtime, allowing the users of the interface (i.e. the CacheSample.BusinessObjects project) to be completely unaware of how the caching is actually implemented. 
As data of any type maybe retrieved from the data source, it makes sense to use generics in the interface, with a generic type parameter defining the data type associated with a particular instance of the cache interface implementation. The C# code for the ICacheProvider interface is shown below: using System; using System.Collections.Generic;   namespace CacheSample.Caching {     public interface ICacheProvider     {     }       public interface ICacheProvider<T> : ICacheProvider     {         T Fetch(string key, Func<T> retrieveData, DateTime? absoluteExpiry, TimeSpan? relativeExpiry);           IEnumerable<T> Fetch(string key, Func<IEnumerable<T>> retrieveData, DateTime? absoluteExpiry, TimeSpan? relativeExpiry);     } }   The empty non-generic interface will be used as a type in a Dictionary generic collection later to store instances of the ICacheProvider<T> implementation for reuse, I prefer to use a base interface when doing this, as I think the alternative of using object makes for less clear code. The ICacheProvider<T> interface defines two overloaded Fetch methods, the difference between these is that one will return a single instance of the type T and the other will return an IEnumerable<T>, providing support for easy caching of collections of data items. Both methods will take a key parameter, which will uniquely identify the cached data, a delegate of type Func<T> or Func<IEnumerable<T>> which will provide the code to retrieve the data from the store if it is not present in the cache, and absolute or relative expiry policies to define when a cached item should expire. Note that at present there is no support for cache dependencies, but I shall be showing a method of adding this in part two of this article. CacheProviderFactory Class We need a mechanism of creating instances of our ICacheProvider<T> interface, using Dependency Injection to get the implementation of the interface. To do this we shall create a CacheProviderFactory static class in the CacheSample.Caching project. This factory will provide a generic static method called GetCacheProvider<T>(), which shall return instances of ICacheProvider<T>. We can then call this factory method with the relevant data type (for example the Sale class in the CacheSample.BusinessObject project) to get a instance of ICacheProvider for that type (e.g. call CacheProviderFactory.GetCacheProvider<Sale>() to get the ICacheProvider<Sale> implementation). The C# code for the CacheProviderFactory is shown below: using System; using System.Collections.Generic;   using CacheSample.Caching.Configuration;   namespace CacheSample.Caching {     public static class CacheProviderFactory     {         private static Dictionary<Type, ICacheProvider> cacheProviders = new Dictionary<Type, ICacheProvider>();         private static object syncRoot = new object();           ///<summary>         /// Factory method to create or retrieve an implementation of the  /// ICacheProvider interface for type <typeparamref name="T"/>.         
///</summary>         ///<typeparam name="T">  /// The type that this cache provider instance will work with  ///</typeparam>         ///<returns>An instance of the implementation of ICacheProvider for type  ///<typeparamref name="T"/>, as specified by the application  /// configuration</returns>         public static ICacheProvider<T> GetCacheProvider<T>()         {             ICacheProvider<T> cacheProvider = null;             // Get the Type reference for the type parameter T             Type typeOfT = typeof(T);               // Lock the access to the cacheProviders dictionary             // so multiple threads can work with it             lock (syncRoot)             {                 // First check if an instance of the ICacheProvider implementation  // already exists in the cacheProviders dictionary for the type T                 if (cacheProviders.ContainsKey(typeOfT))                     cacheProvider = (ICacheProvider<T>)cacheProviders[typeOfT];                 else                 {                     // There is not already an instance of the ICacheProvider in       // cacheProviders for the type T                     // so we need to create one                       // Get the Type reference for the application's implementation of       // ICacheProvider from the configuration                     Type cacheProviderType = Type.GetType(CacheProviderConfigurationSection.Current. CacheProviderType);                     if (cacheProviderType != null)                     {                         // Now get a Type reference for the Cache Provider with the                         // type T generic parameter                         Type typeOfCacheProviderTypeForT = cacheProviderType.MakeGenericType(new Type[] { typeOfT });                         if (typeOfCacheProviderTypeForT != null)                         {                             // Create the instance of the Cache Provider and add it to // the cacheProviders dictionary for future use                             cacheProvider = (ICacheProvider<T>)Activator. CreateInstance(typeOfCacheProviderTypeForT);                             cacheProviders.Add(typeOfT, cacheProvider);                         }                     }                 }             }               return cacheProvider;                 }     } }   As this code uses Activator.CreateInstance() to create instances of the ICacheProvider<T> implementation, which is a slow process, the factory class maintains a Dictionary of the previously created instances so that a cache provider needs to be created only once for each type. The type of the implementation of ICacheProvider<T> is read from a custom configuration section in the application configuration file, via the CacheProviderConfigurationSection class, which is described below. CacheProviderConfigurationSection Class The implementation of ICacheProvider<T> will be specified in a custom configuration section in the application’s configuration. To handle this create a folder in the CacheSample.Caching project called Configuration, and add a class called CacheProviderConfigurationSection to this folder. This class will extend the System.Configuration.ConfigurationSection class, and will contain a single string property called CacheProviderType. 
The C# code for this class is shown below: using System; using System.Configuration;   namespace CacheSample.Caching.Configuration {     internal class CacheProviderConfigurationSection : ConfigurationSection     {         public static CacheProviderConfigurationSection Current         {             get             {                 return (CacheProviderConfigurationSection) ConfigurationManager.GetSection("cacheProvider");             }         }           [ConfigurationProperty("type", IsRequired=true)]         public string CacheProviderType         {             get             {                 return (string)this["type"];             }         }     } }   Adding Data Caching to the Sales Class We now have enough code in place to add caching to the GetSales() method in the CacheSample.BusinessObjects.Sale class, even though we do not yet have an implementation of the ICacheProvider<T> interface. We need to add a reference to the CacheSample.Caching project to CacheSample.BusinessObjects so that we can use the ICacheProvider<T> interface within the GetSales() method. Once the reference is added, we can first create a unique string key based on the method name and the parameter value, so that the same cache key is used for repeated calls to the method with the same parameter values. Then we get an instance of the cache provider for the Sales type, using the CacheProviderFactory, and pass the existing code to retrieve the data from the database as the retrievalMethod delegate in a call to the Cache Provider Fetch() method. The C# code for the modified GetSales() method is shown below: public static IEnumerable<Sale> GetSales(int? highestDayCount) {     string cacheKey = string.Format("CacheSample.BusinessObjects.GetSalesWithCache({0})", highestDayCount);       return CacheSample.Caching.CacheProviderFactory. GetCacheProvider<Sale>().Fetch(cacheKey,         delegate()         {             List<Sale> sales = new List<Sale>();               SqlParameter highestDayCountParameter = new SqlParameter("@HighestDayCount", SqlDbType.SmallInt);             if (highestDayCount.HasValue)                 highestDayCountParameter.Value = highestDayCount;             else                 highestDayCountParameter.Value = DBNull.Value;               string connectionStr = System.Configuration.ConfigurationManager. ConnectionStrings["CacheSample"].ConnectionString;               using (SqlConnection sqlConn = new SqlConnection(connectionStr))             using (SqlCommand sqlCmd = sqlConn.CreateCommand())             {                 sqlCmd.CommandText = "spGetRunningTotals";                 sqlCmd.CommandType = CommandType.StoredProcedure;                 sqlCmd.Parameters.Add(highestDayCountParameter);                   sqlConn.Open();                   using (SqlDataReader dr = sqlCmd.ExecuteReader())                 {                     while (dr.Read())                     {                         Sale newSale = new Sale();                         newSale.DayCount = dr.GetInt16(0);                         newSale.Sales = dr.GetDecimal(1);                         newSale.RunningTotal = dr.GetDecimal(2);                           sales.Add(newSale);                     }                 }             }               return sales;         },         null,         new TimeSpan(0, 10, 0)); }     This example passes the code to retrieve the Sales data from the database to the Cache Provider as an anonymous method, however it could also be written as a lambda. 
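For illustration, a lambda version might look like the sketch below; the LoadSalesFromDatabase() helper is hypothetical and would simply hold the ADO.NET code from the delegate above:
return CacheSample.Caching.CacheProviderFactory.GetCacheProvider<Sale>().Fetch(
    cacheKey,
    () => LoadSalesFromDatabase(highestDayCount), // hypothetical private helper containing the SqlConnection/SqlCommand code shown above
    null,
    new TimeSpan(0, 10, 0));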
The main advantage of using an anonymous function (method or lambda) is that the code inside the anonymous function can access the parameters passed to the GetSales() method. Finally the absolute expiry is set to null, and the relative expiry set to 10 minutes, to indicate that the cache entry should be removed 10 minutes after the last request for the data. As the ICacheProvider<T> has a Fetch() method that returns IEnumerable<T>, we can simply return the results of the Fetch() method to the caller of the GetSales() method. This should be all that is needed for the GetSales() method to now retrieve data from a cache after the first time the data has be retrieved from the database. Implementing a ASP.NET Cache Provider The final step is to actually implement the ICacheProvider<T> interface, and add the implementation details to the web.config file for the dependency injection. The cache provider implementation needs to have access to System.Web. Therefore it could be placed in the CacheSample.UI project, or in its own project that has a reference to System.Web. Implementing the Cache Provider in a separate project is my favoured approach. Create a new project inside the solution called CacheSample.CacheProvider, and add references to System.Web and CacheSample.Caching to this project. Add a class to the project called AspNetCacheProvider. Make the class a generic class by adding the generic parameter <T> and indicate that the class implements ICacheProvider<T>. The C# code for the AspNetCacheProvider class is shown below: using System; using System.Collections.Generic; using System.Linq; using System.Web; using System.Web.Caching;   using CacheSample.Caching;   namespace CacheSample.CacheProvider {     public class AspNetCacheProvider<T> : ICacheProvider<T>     {         #region ICacheProvider<T> Members           public T Fetch(string key, Func<T> retrieveData, DateTime? absoluteExpiry, TimeSpan? relativeExpiry)         {             return FetchAndCache<T>(key, retrieveData, absoluteExpiry, relativeExpiry);         }           public IEnumerable<T> Fetch(string key, Func<IEnumerable<T>> retrieveData, DateTime? absoluteExpiry, TimeSpan? relativeExpiry)         {             return FetchAndCache<IEnumerable<T>>(key, retrieveData, absoluteExpiry, relativeExpiry);         }           #endregion           #region Helper Methods           private U FetchAndCache<U>(string key, Func<U> retrieveData, DateTime? absoluteExpiry, TimeSpan? 
relativeExpiry)         {             U value;             if (!TryGetValue<U>(key, out value))             {                 value = retrieveData();                 if (!absoluteExpiry.HasValue)                     absoluteExpiry = Cache.NoAbsoluteExpiration;                   if (!relativeExpiry.HasValue)                     relativeExpiry = Cache.NoSlidingExpiration;                   HttpContext.Current.Cache.Insert(key, value, null, absoluteExpiry.Value, relativeExpiry.Value);             }             return value;         }           private bool TryGetValue<U>(string key, out U value)         {             object cachedValue = HttpContext.Current.Cache.Get(key);             if (cachedValue == null)             {                 value = default(U);                 return false;             }             else             {                 try                 {                     value = (U)cachedValue;                     return true;                 }                 catch                 {                     value = default(U);                     return false;                 }             }         }           #endregion       } }   The two interface Fetch() methods call a private method called FetchAndCache(). This method first checks for a element in the HttpContext.Current.Cache with the specified cache key, and if so tries to cast this to the specified type (either T or IEnumerable<T>). If the cached element is found, the FetchAndCache() method simply returns it. If it is not found in the cache, the method calls the retrievalMethod delegate to get the data from the data source, and then adds this to the HttpContext.Current.Cache. The final step is to add the AspNetCacheProvider class to the relevant custom configuration section in the CacheSample.UI.Web.Config file. To do this there needs to be a <configSections> element added as the first element in <configuration>. This will match a custom section called <cacheProvider> with the CacheProviderConfigurationSection. Then we add a <cacheProvider> element, with a type property set to the fully qualified assembly name of the AspNetCacheProvider class, as shown below: <?xmlversion="1.0"?>   <configuration>  <configSections>     <sectionname="cacheProvider" type="CacheSample.Base.Configuration.CacheProviderConfigurationSection, CacheSample.Base" />  </configSections>    <connectionStrings>     <addname="CacheSample"          connectionString="data source=.\SQLEXPRESS;Integrated Security=SSPI;Initial Catalog=CacheSample"          providerName="System.Data.SqlClient" />  </connectionStrings>    <cacheProvidertype="CacheSample.CacheProvider.AspNetCacheProvider`1, CacheSample.CacheProvider, Version=1.0.0.0, Culture=neutral, PublicKeyToken=null">  </cacheProvider>    <system.web>     <compilationdebug="true"targetFramework="4.0" />  </system.web>   </configuration>   One point to note is that the fully qualified assembly name of the AspNetCacheProvider class includes the notation `1 after the class name, which indicates that it is a generic class with a single generic type parameter. The CacheSample.UI project needs to have references added to CacheSample.Caching and CacheSample.CacheProvider so that the actual application is aware of the relevant cache provider implementation. Conclusion After implementing this solution, you should have a working cache provider mechanism, that will allow the middle and data access layers to implement caching support when retrieving data, without any knowledge of the actually caching implementation. 
If the UI is not ASP.NET based, if for example it is Winforms or WPF, the implementation of ICacheProvider<T> would be written around whatever technology is available. It could even be a standalone caching system that takes full responsibility for adding and removing items from a global store. The next part of this article will show how this caching mechanism may be extended to provide support for cache dependencies, such as the System.Web.Caching.SqlCacheDependency. Another possible extension would be to cache the cache provider implementations instead of storing them in a static Dictionary in the CacheProviderFactory. This would prevent a build up of seldom used cache providers in the application memory, as they could be removed from the cache if not used often enough, although in reality there are probably unlikely to be vast numbers of cache provider implementation instances, as most applications do not have a massive number of business object or model types.
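As a rough sketch of what such an alternative could look like (an illustration only, not part of the article's code), a WinForms or WPF application targeting .NET 4 could back the same ICacheProvider<T> interface with System.Runtime.Caching.MemoryCache instead of the ASP.NET cache:
using System;
using System.Collections.Generic;
using System.Runtime.Caching;
using CacheSample.Caching;

namespace CacheSample.CacheProvider
{
    public class MemoryCacheProvider<T> : ICacheProvider<T>
    {
        public T Fetch(string key, Func<T> retrieveData, DateTime? absoluteExpiry, TimeSpan? relativeExpiry)
        {
            return FetchAndCache<T>(key, retrieveData, absoluteExpiry, relativeExpiry);
        }

        public IEnumerable<T> Fetch(string key, Func<IEnumerable<T>> retrieveData, DateTime? absoluteExpiry, TimeSpan? relativeExpiry)
        {
            return FetchAndCache<IEnumerable<T>>(key, retrieveData, absoluteExpiry, relativeExpiry);
        }

        private U FetchAndCache<U>(string key, Func<U> retrieveData, DateTime? absoluteExpiry, TimeSpan? relativeExpiry)
        {
            // MemoryCache stores entries as object; a miss returns null
            object cached = MemoryCache.Default.Get(key);
            if (cached != null)
                return (U)cached;

            U value = retrieveData();

            // MemoryCache does not accept absolute and sliding expiration together,
            // so only one of the two policies is applied here
            var policy = new CacheItemPolicy();
            if (absoluteExpiry.HasValue)
                policy.AbsoluteExpiration = new DateTimeOffset(absoluteExpiry.Value);
            else if (relativeExpiry.HasValue)
                policy.SlidingExpiration = relativeExpiry.Value;

            MemoryCache.Default.Set(key, value, policy);
            return value;
        }
    }
}
Pointing the cacheProvider configuration section at this class instead of the AspNetCacheProvider would then be the only change the rest of the application needs.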

    Read the article

  • Silverlight Cream for June 15, 2010 - 2 -- #883

    - by Dave Campbell
    In this Issue: Vibor Cipan, Chris Klug, Pete Brown, Kirupa, and Xianzhong Zhu. Shoutouts (thought I gave up on them, didn't you?): Jesse Liberty has the companion video to his WP7 OData post up: New Video: Master/Detail in WinPhone 7 with oData Michael Scherotter who made the first Ball Watch SL1 app back in the day, has a Virtual Event: Creating an Entry for the BALL Watch Silverlight Contest... sounds like the thing to do if you want in on this :) Even if you don't speak Portuguese, you can check this out: MSN Brazil Uses Silverlight to Showcase the 2010 FIFA World Cup South Africa Erik Mork and crew have their latest up: This Week in Silverlight – Teched and Quizes Michael Klucher has a post up to give you some relief if you're having Trouble Installing the Windows Phone Developer Tools Portuguese above and now French... Jeremy Alles has a post up about [WP7] Windows Phone 7 challenge for french readers ! Just a note, not that it makes any difference, but Adam Kinney turned @SilverlightNews over to me today. I am the only one that has ever posted on it, but still having it all to myself feels special :) From SilverlightCream.com: Silverlight 4 tutorial: HOW TO use PathListBox and Sample Data Crank up that new version of Blend and follow along with Vibor Cipan's PathListBox tutorial ... oh, and sample data too. Cool INotifyPropertyChanged implementation Chris Klug shows off some INotifyPropertyChange goodness he is not implementing, and credits a blog by Manuel Felicio for some inspiration. Check out that post as well... I've tagged his blog... I needed *another* one :) Silverlight Tip: Using LINQ to Select the Largest Available Webcam Resolution With no Silverlight Tip of the Day today, Pete Brown stepped up with this tip for finding the largest available webcam resolution using LINQ ... and read the comment from Rene as well. Creating a Master-Detail UI in Blend Kirupa has a very nice Master/Detail UI post up with backrounder info and the code for the project. There's a running example in the post for you to get an idea what you're learning. Get started with Farseer Physics 2.1.3 in Silverlight 3 Xianzhong Zhu has a Silverlight 3 tutorial up for Farseer Physics 2.1.3 ... might track for Silverlight 4, but hey, WP7 is kinda/sort Silverlight 3, right? ... lots of code and external links. Stay in the 'Light! Twitter SilverlightNews | Twitter WynApse | WynApse.com | Tagged Posts | SilverlightCream Join me @ SilverlightCream | Phoenix Silverlight User Group Technorati Tags: Silverlight    Silverlight 3    Silverlight 4    Windows Phone MIX10

    Read the article

  • E160 ubuntu 12.04 can't detect the modem

    - by Matt
    i've got problem with e160 on ubuntu 12.04. I'cant configure network manager and connect because NM can't see the e160. I;ve tried lot of solutions with no result. ateusz@mateusz-Aspire-5738:~$ sudo usb_modeswitch -v 0x12d1 -p 0x1003 -H [sudo] password for mateusz: aLooking for default devices ... found matching product ID adding device Found device in default mode, class or configuration (1) Accessing device 002 on bus 001 ... Getting the current device configuration ... OK, got current device configuration (1) Using first interface: 0x00 Using endpoints 0x01 (out) and 0x82 (in) Not a storage device, skipping SCSI inquiry USB description data (for identification) ------------------------- Manufacturer: HUAWEI Technology Product: HUAWEI Mobile Serial No.: not provided ------------------------- Sending Huawei control message ... OK, Huawei control message sent - Run lsusb to note any changes. Bye. Dmesg [ 521.480062] usb 1-4: reset high-speed USB device number 4 using ehci_hcd [ 521.617792] option 1-4:1.1: GSM modem (1-port) converter detected [ 521.617945] usb 1-4: GSM modem (1-port) converter now attached to ttyUSB0 [ 521.618062] option 1-4:1.0: GSM modem (1-port) converter detected [ 521.618232] usb 1-4: GSM modem (1-port) converter now attached to ttyUSB1 [ 530.840276] option: option_instat_callback: error -108 [ 530.840455] option1 ttyUSB1: GSM modem (1-port) converter now disconnected from ttyUSB1 [ 530.840484] option 1-4:1.0: device disconnected [ 537.680378] option1 ttyUSB0: GSM modem (1-port) converter now disconnected from ttyUSB0 [ 537.680398] option 1-4:1.1: device disconnected [ 537.792088] usb 1-4: reset high-speed USB device number 4 using ehci_hcd [ 537.929549] option 1-4:1.1: GSM modem (1-port) converter detected [ 537.929702] usb 1-4: GSM modem (1-port) converter now attached to ttyUSB0 [ 537.929818] option 1-4:1.0: GSM modem (1-port) converter detected [ 537.929993] usb 1-4: GSM modem (1-port) converter now attached to ttyUSB1 [ 547.224294] option: option_instat_callback: error -108 [ 547.224470] option1 ttyUSB1: GSM modem (1-port) converter now disconnected from ttyUSB1 [ 547.224511] option 1-4:1.0: device disconnected [ 556.988066] tty_ldisc_hangup: waiting (usb-storage) for ttyUSB0 took too long, but we keep waiting... [ 558.990663] option1 ttyUSB0: GSM modem (1-port) converter now disconnected from ttyUSB0 [ 558.990698] option 1-4:1.1: device disconnected [ 559.100068] usb 1-4: reset high-speed USB device number 4 using ehci_hcd [ 559.241293] option 1-4:1.1: GSM modem (1-port) converter detected [ 559.241446] usb 1-4: GSM modem (1-port) converter now attached to ttyUSB0 [ 559.241565] option 1-4:1.0: GSM modem (1-port) converter detected [ 559.241739] usb 1-4: GSM modem (1-port) converter now attached to ttyUSB1 [ 568.728283] option: option_instat_callback: error -108 [ 568.728466] option1 ttyUSB1: GSM modem (1-port) converter now disconnected from ttyUSB1 [ 568.728496] option 1-4:1.0: device disconnected lsusb Bus 001 Device 001: ID 1d6b:0002 Linux Foundation 2.0 root hub Bus 002 Device 001: ID 1d6b:0002 Linux Foundation 2.0 root hub Bus 003 Device 001: ID 1d6b:0001 Linux Foundation 1.1 root hub Bus 004 Device 001: ID 1d6b:0001 Linux Foundation 1.1 root hub Bus 005 Device 001: ID 1d6b:0001 Linux Foundation 1.1 root hub Bus 006 Device 001: ID 1d6b:0001 Linux Foundation 1.1 root hub Bus 007 Device 001: ID 1d6b:0001 Linux Foundation 1.1 root hub Bus 008 Device 001: ID 1d6b:0001 Linux Foundation 1.1 root hub Bus 002 Device 003: ID 064e:a103 Suyin Corp. 
Acer/HP Integrated Webcam [CN0314] Bus 005 Device 002: ID 09da:c20a A4 Tech Co., Ltd Bus 001 Device 002: ID 12d1:1003 Huawei Technologies Co., Ltd. E220 HSDPA Modem / E230/E270/E870 HSDPA/HSUPA Modem

    Read the article

  • Attach to Process in Visual Studio

    - by Daniel Moth
    One option for achieving step 1 in the Live Debugging process is attaching to an already running instance of the process that hosts your code, and this is a good place for me to talk about debug engines. You can attach to a process by selecting the "Debug" menu and then the "Attach To Process…" menu in Visual Studio 11 (Ctrl+Alt+P with my keyboard bindings), and you should see something like this screenshot: I am not going to explain this UI, besides being fairly intuitive, there is good documentation on MSDN for the Attach dialog. I do want to focus on the row of controls that starts with the "Attach to:" label and ends with the "Select..." button. Between them is the readonly textbox that indicates the debug engine that will be used for the selected process if you click the "Attach" button. If you haven't encountered that term before, read on MSDN about debug engines. Notice that the "Type" column shows the Code Type(s) that can be detected for the process. Typically each debug engine knows how to debug a specific code type (the two terms tend to be used interchangeably). If you click on a different process in the list with a different code type, the debug engine used will be different. However note that this is the automatic behavior. If you believe you know best, or more typically you want to choose the debug engine for a process using more than one code type, you can do so by clicking the "Select..." button, which should yield a "Select Code Type" dialog like this one: In this dialog you can switch to the debug engine you want to use by checking the box in front of your desired one, then hit "OK", then hit "Attach" to use it. Notice that the dialog suggests that you can select more than one. Not all combinations work (you'll get an error if you select two incompatible debug engines), but some do. Also notice in the list of debug engines one of the new players in Visual Studio 11, the GPU debug engine - I will be covering that on the C++ AMP team blog (and no, it cannot be combined with any others in this release). Comments about this post by Daniel Moth welcome at the original blog.

    Read the article

  • Highlights from recent Yammer video

    - by Eric Jensen
    A few weeks back, Ryan Kennedy of Yammer gave a talk about Berkeley DB Java Edition. You can find it posted here on Alex Popescu's Blog, or go directly to the video post itself. It was full of useful nuggets of information, such as why they chose to use BDB JE, its performance, and some tips & tricks at the end. At over 40 minutes, the video is quite long. Ryan is an entertaining speaker, so I suggest you watch all of it. But if you only have time for the highlights, here are some times you can sync to:
    06:18 - hear the Berkeley DB JE features that caused Yammer to select it, including replication, auto leader election and failover, and configurable durability and consistency guarantees
    23:10 - system performance characteristics
    35:08 - check out the tips and tricks for using Berkeley DB JE
    I know the Berkeley DB development team is very pleased that BDB JE is working out well for Yammer. We definitely encourage others out there to take note of this success, especially if your requirements are similar to Yammer's (which Ryan outlines at the beginning of his talk).

    Read the article

  • Array Multiplication and Division

    - by Narfanator
    I came across a question that (eventually) landed me wondering about array arithmetic. I'm thinking specifically in Ruby, but I think the concepts are language independent. So, addition and subtraction are defined, in Ruby, as such:
    [1,6,8,3,6] + [5,6,7] == [1,6,8,3,6,5,6,7] # All the elements of the first, then all the elements of the second
    [1,6,8,3,6] - [5,6,7] == [1,8,3] # From the first, remove anything found in the second
    and array * scalar is defined:
    [1,2,3] * 2 == [1,2,3,1,2,3]
    But what, conceptually, should the following be? None of these are (as far as I can find) defined:
    Array x Array: [1,2,3] * [1,2,3] #=> ?
    Array / Scalar: [1,2,3,4,5] / 2 #=> ?
    Array % Scalar: [1,2,3,4,5] % 2 #=> ?
    Array / Array: [1,2,3,4,5] / [1,2] #=> ?
    Array % Array: [1,2,3,4,5] % [1,2] #=> ?
    I've found some mathematical descriptions of these operations for set theory, but I couldn't really follow them, and sets don't have duplicates (arrays do).
    Edit: Note, I do not mean vector (matrix) arithmetic, which is completely defined.
    Edit 2: If this is the wrong Stack Exchange, tell me which is the right one and I'll move it.
    Edit 3: Added the mod operators to the list.
    Edit 4: I figure array / scalar is derivable from array * scalar: a * b = c => a = c / b
    [1,2,3] * 3 = [1,2,3]+[1,2,3]+[1,2,3] = [1,2,3,1,2,3,1,2,3] => [1,2,3] = [1,2,3,1,2,3,1,2,3] / 3
    which, given that programmer's division ignores the remainder and has a modulus, suggests:
    [1,2,3,4,5] / 2 = [[1,2], [3,4]]
    [1,2,3,4,5] % 2 = [5]
    Except that these are pretty clearly non-reversible operations (not that modulus ever is), which is non-ideal.
    Edit: I asked a question over on Math that led me to multisets. I think maybe extensible arrays are "multisets", but I'm not sure yet. A possible Ruby sketch of these semantics is shown below.
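
    To make the question more concrete, here is a small Ruby sketch of one possible set of semantics: Array x Array as the Cartesian product of the elements, Array / scalar as slicing into groups while ignoring the remainder (by analogy with integer division), and Array % scalar as the leftover elements. The ArrayArithmetic module and its method names are made up for illustration; this is one interpretation, not an established library API.

    # Sketch only: an assumed helper module, not standard Ruby.
    module ArrayArithmetic
      # "Multiplication" of two arrays as the Cartesian product of their elements.
      def self.times(a, b)
        a.product(b)   # [1,2].product([3,4]) #=> [[1, 3], [1, 4], [2, 3], [2, 4]]
      end

      # "Division" by a scalar: split into groups of size n, dropping any short group,
      # by analogy with integer division ignoring the remainder.
      def self.div(a, n)
        a.each_slice(n).select { |group| group.size == n }
      end

      # "Modulus" by a scalar: the leftover elements that div discards.
      def self.mod(a, n)
        rest = a.size % n
        rest.zero? ? [] : a.last(rest)
      end
    end

    p ArrayArithmetic.times([1, 2, 3], [1, 2, 3])  # nine ordered pairs
    p ArrayArithmetic.div([1, 2, 3, 4, 5], 2)      # [[1, 2], [3, 4]]
    p ArrayArithmetic.mod([1, 2, 3, 4, 5], 2)      # [5]

    Under this reading, Array / Array and Array % Array would still need a separate decision (for example, repeated removal of one array from the other), which is part of what makes the question interesting.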

    Read the article
