  • An Introduction to Meteor

    - by Stephen.Walther
    The goal of this blog post is to give you a brief introduction to Meteor, a framework for building Single Page Apps. In this blog entry, I provide a walkthrough of building a simple Movie database app.

    What is special about Meteor? Meteor has two jaw-dropping features:

    Live HTML – If you make any changes to the HTML, CSS, JavaScript, or data on the server, then every client shows the changes automatically, without a browser refresh. For example, if you change the background color of a page to yellow, then every open browser will show the new yellow background color without a refresh. Or, if you add a new movie to a collection of movies, then every open browser will display the new movie automatically. With Live HTML, users no longer need a refresh button. Changes to an application happen everywhere, automatically, without any effort. The Meteor framework handles all of the messy details of keeping all of the clients in sync with the server for you.

    Latency Compensation – When you modify data on the client, these modifications appear as if they happened on the server without any delay. For example, if you create a new movie then the movie appears instantly. However, that is all an illusion. In the background, Meteor updates the database with the new movie. If, for whatever reason, the movie cannot be added to the database, then Meteor removes the movie from the client automatically. Latency compensation is extremely important for creating a responsive web application. You want the user to be able to make instant modifications in the browser, and you want the framework to handle the details of updating the database without slowing down the user.

    Installing Meteor

    Meteor is licensed under the open-source MIT license and you can start building production apps with the framework right now. Be warned that Meteor is still in the "early preview" stage; it has not reached a 1.0 release. According to the Meteor FAQ, Meteor will reach version 1.0 in "more than a month, less than a year." Don't be scared away by that. You should be aware that, unlike most open source projects, Meteor has financial backing: the project received an $11.2 million round of financing from Andreessen Horowitz. So it would be a good bet that this project will reach the 1.0 mark. And if it doesn't, the framework as it exists right now is still very powerful.

    Meteor runs on top of Node.js. You write Meteor apps in JavaScript which runs both on the client and on the server. You can build Meteor apps on Windows, Mac, or Linux (although the support for Windows is still officially unofficial). If you want to install Meteor on Windows, download the MSI from the following URL: http://win.meteor.com/ If you want to install Meteor on Mac/Linux, run the following curl command from your terminal:

        curl https://install.meteor.com | /bin/sh

    Meteor will install all of its dependencies automatically, including Node.js. However, I recommend that you install Node.js before installing Meteor, from the following address: http://nodejs.org/ If you let Meteor install Node.js, then Meteor won't install NPM, the standard package manager for Node.js. If you install Node.js first and then install Meteor, you get NPM automatically.

    Creating a New Meteor App

    To get a sense of how Meteor works, I am going to walk through the steps required to create a simple Movie database app. Our app will display a list of movies and contain a form for creating a new movie.
    The first thing that we need to do is create our new Meteor app. Open a command prompt/terminal window and execute the following command:

        meteor create MovieApp

    After you execute this command, follow the instructions in its output: execute cd MovieApp to change to your MovieApp directory, and run the meteor command. Executing the meteor command starts Meteor on port 3000. Open up your favorite web browser, navigate to http://localhost:3000, and you should see the default Meteor Hello World page.

    Open up your favorite development environment and open the MovieApp folder which we just created. Viewed in Visual Studio 2012, our MovieApp contains three files named MovieApp.css, MovieApp.html, and MovieApp.js. In other words, it contains a Cascading Style Sheet file, an HTML file, and a JavaScript file.

    Just for fun, let's see how the Live HTML feature works. Open up multiple browsers and point each browser at http://localhost:3000. Now, open the MovieApp.html page, modify the text "Hello World!" to "Hello Cruel World!", and save the change. The text in all of the browsers should update automatically without a browser refresh. Pretty amazing, right?

    Controlling Where JavaScript Executes

    You write a Meteor app using JavaScript. Some of the JavaScript executes on the client (the browser), some of the JavaScript executes on the server, and some of the JavaScript executes in both places. For a super simple app, you can use the Meteor.isServer and Meteor.isClient properties to control where your JavaScript code executes. For example, the following JavaScript contains a section of code which executes on the server and a section of code which executes in the browser:

        if (Meteor.isClient) {
            console.log("Hello Browser!");
        }

        if (Meteor.isServer) {
            console.log("Hello Server!");
        }

        console.log("Hello Browser and Server!");

    When you run the app, the message "Hello Browser!" is written to the browser JavaScript console. The message "Hello Server!" is written to the command/terminal window where you ran Meteor. Finally, the message "Hello Browser and Server!" is executed in both places, so it appears in both the browser console and the terminal window.

    For simple apps, using Meteor.isClient and Meteor.isServer to control where JavaScript executes is fine. For more complex apps, you should create separate folders for your server and client code. Here are the folders which you can use in a Meteor app:

    · client – This folder contains any JavaScript which executes only on the client.
    · server – This folder contains any JavaScript which executes only on the server.
    · common – This folder contains any JavaScript code which executes on both the client and server.
    · lib – This folder contains any JavaScript files which you want to execute before any other JavaScript files.
    · public – This folder contains static application assets such as images.

    For the Movie App, we need the client, server, and common folders. Delete the existing MovieApp.js, MovieApp.html, and MovieApp.css files; we will create new files in the right locations later in this walkthrough.

    Combining HTML, CSS, and JavaScript Files

    Meteor combines all of your JavaScript files, all of your Cascading Style Sheet files, and all of your HTML files automatically. If you want to create one humongous JavaScript file which contains all of the code for your app, then that is your business.
    However, if you want to build a more maintainable application, then you should break your JavaScript into many separate files and let Meteor combine them for you. Meteor also combines all of your HTML files into a single file. HTML files are allowed to have the following top-level elements:

    · <head> – All <head> sections are combined into a single <head> and served with the initial page load.
    · <body> – All <body> sections are combined into a single <body> and served with the initial page load.
    · <template> – All <template> sections are compiled into JavaScript templates.

    Because you are creating a single page app, a Meteor app typically will contain a single HTML file for the <head> and <body> content. However, a Meteor app typically will contain several template files. In other words, all of the interesting stuff happens within the <template> files.

    Displaying a List of Movies

    Let me start building the Movie App by displaying a list of movies. In order to display a list of movies, we need to create the following four files:

    · client\movies.html – Contains the HTML for the <head> and <body> of the page for the Movie app.
    · client\moviesTemplate.html – Contains the HTML template for displaying the list of movies.
    · client\movies.js – Contains the JavaScript for supplying data to the moviesTemplate.
    · server\movies.js – Contains the JavaScript for seeding the database with movies.

    Here's what the client\movies.html file looks like:

        <head>
            <title>My Movie App</title>
        </head>
        <body>
            <h1>Movies</h1>
            {{> moviesTemplate }}
        </body>

    Notice that it contains <head> and <body> top-level elements. The <body> element includes the moviesTemplate with the syntax {{> moviesTemplate }}. The moviesTemplate is defined in the client\moviesTemplate.html file:

        <template name="moviesTemplate">
            <ul>
            {{#each movies}}
                <li>{{title}}</li>
            {{/each}}
            </ul>
        </template>

    By default, Meteor uses the Handlebars templating library. In the moviesTemplate above, Handlebars is used to loop through each of the movies using {{#each}}…{{/each}} and display the title for each movie using {{title}}.

    The client\movies.js JavaScript file is used to bind the moviesTemplate to the Movies collection on the client. Here's what this JavaScript file looks like:

        // Declare client Movies collection
        Movies = new Meteor.Collection("movies");

        // Bind moviesTemplate to Movies collection
        Template.moviesTemplate.movies = function () {
            return Movies.find();
        };

    The Movies collection is a client-side proxy for the server-side Movies database collection. Whenever you want to interact with the collection of Movies stored in the database, you use the Movies collection instead of communicating back to the server. The moviesTemplate is bound to the Movies collection by assigning a function to the Template.moviesTemplate.movies property. The function simply returns all of the movies from the Movies collection.

    The final file which we need is the server-side server\movies.js file:

        // Declare server Movies collection
        Movies = new Meteor.Collection("movies");

        // Seed the movie database with a few movies
        Meteor.startup(function () {
            if (Movies.find().count() == 0) {
                Movies.insert({ title: "Star Wars", director: "Lucas" });
                Movies.insert({ title: "Memento", director: "Nolan" });
                Movies.insert({ title: "King Kong", director: "Jackson" });
            }
        });

    The server\movies.js file does two things. First, it declares the server-side Meteor Movies collection.
    When you declare a server-side Meteor collection, a matching collection is created automatically in the MongoDB database associated with your Meteor app (Meteor uses MongoDB as its database). Second, the server\movies.js file seeds the Movies collection (the MongoDB collection) with three movies. Seeding the database gives us some movies to look at when we open the Movies app in a browser.

    Creating New Movies

    Let me modify the Movies Database App so that we can add new movies to the database of movies. First, I need to create a new template file – named client\movieForm.html – which contains an HTML form for creating a new movie:

        <template name="movieForm">
            <fieldset>
                <legend>Add New Movie</legend>
                <form>
                    <div>
                        <label>Title: <input id="title" /></label>
                    </div>
                    <div>
                        <label>Director: <input id="director" /></label>
                    </div>
                    <div>
                        <input type="submit" value="Add Movie" />
                    </div>
                </form>
            </fieldset>
        </template>

    In order for the new form to show up, I need to modify the client\movies.html file to include the movieForm template. Notice that I added {{> movieForm }} to the client\movies.html file:

        <head>
            <title>My Movie App</title>
        </head>
        <body>
            <h1>Movies</h1>
            {{> moviesTemplate }}
            {{> movieForm }}
        </body>

    After I make these modifications, our Movie app will display the form. The next step is to handle the submit event for the movie form. Below, I've modified the client\movies.js file so that it contains a handler for the submit event raised when you submit the form contained in the movieForm.html template:

        // Declare client Movies collection
        Movies = new Meteor.Collection("movies");

        // Bind moviesTemplate to Movies collection
        Template.moviesTemplate.movies = function () {
            return Movies.find();
        };

        // Handle movieForm events
        Template.movieForm.events = {
            'submit': function (e, tmpl) {
                // Don't postback
                e.preventDefault();

                // Create the new movie
                var newMovie = {
                    title: tmpl.find("#title").value,
                    director: tmpl.find("#director").value
                };

                // Add the movie to the db
                Movies.insert(newMovie);
            }
        };

    The Template.movieForm.events property contains an event map which maps event names to handlers. In this case, I am mapping the form submit event to an anonymous function which handles the event. In the event handler, I am first preventing a postback by calling e.preventDefault(). This is a single page app; no postbacks are allowed! Next, I am grabbing the new movie from the HTML form, taking advantage of the template find() method to retrieve the form field values. Finally, I am calling Movies.insert() to insert the new movie into the Movies collection.

    Here, I am explicitly inserting the new movie into the client-side Movies collection. Meteor inserts the new movie into the server-side Movies collection behind the scenes, and when Meteor inserts the movie into the server-side collection, the new movie is added automatically to the MongoDB database associated with the Movies app. If the server-side insertion fails for whatever reason – for example, your internet connection is lost – then Meteor will remove the movie from the client-side Movies collection automatically. In other words, Meteor takes care of keeping the client Movies collection and the server Movies collection in sync.

    If you open multiple browsers and add movies, then you should notice that all of the movies appear in all of the open browsers automatically. You don't need to refresh individual browsers to update the client-side Movies collection. Meteor keeps everything synchronized between the browsers and the server for you.
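    If you want to react to these synchronized changes in your own client code – say, to log or animate new arrivals – Meteor cursors expose an observe() method that fires callbacks as documents enter or leave the result set. Here is a minimal client-side sketch (the console logging is purely illustrative and not part of the Movie app itself):

        // Client-side sketch: watch the Movies cursor for changes
        if (Meteor.isClient) {
            Movies.find().observe({
                added: function (movie) {
                    console.log("Movie added: " + movie.title);
                },
                removed: function (movie) {
                    console.log("Movie removed: " + movie.title);
                }
            });
        }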
    Removing the Insecure Module

    To make it easier to develop and debug a new Meteor app, by default you can modify the database directly from the client. For example, you can delete all of the data in the database by opening up your browser console window and executing multiple Movies.remove() commands. Obviously, enabling anyone to modify your database from the browser is not a good idea in a production application. Before you make a Meteor app public, you should first run the following command from a command/terminal window:

        meteor remove insecure

    Running meteor remove insecure removes the insecure package from the Movie app. Unfortunately, it also breaks our Movie app: we'll get an "Access denied" error in our browser console whenever we try to insert a new movie. No worries, I'll fix this issue in the next section (an alternative approach using collection allow rules is sketched at the end of this post).

    Creating Meteor Methods

    By taking advantage of Meteor Methods, you can create methods which can be invoked on both the client and the server. By taking advantage of Meteor Methods you can:

    1. Perform form validation on both the client and the server. For example, even if an evil hacker bypasses your client code, you can still prevent the hacker from submitting an invalid value for a form field by enforcing validation on the server.
    2. Simulate database operations on the client but actually perform the operations on the server.

    Let me show you how we can modify our Movie app so it uses Meteor Methods to insert a new movie. First, we need to create a new file named common\methods.js which contains the definition of our Meteor Methods:

        Meteor.methods({
            addMovie: function (newMovie) {
                // Perform form validation
                if (newMovie.title == "") {
                    throw new Meteor.Error(413, "Missing title!");
                }
                if (newMovie.director == "") {
                    throw new Meteor.Error(413, "Missing director!");
                }

                // Insert movie (simulate on client, do it on server)
                return Movies.insert(newMovie);
            }
        });

    The addMovie() method is called from both the client and the server. This method does two things. First, it performs some basic validation: if you don't enter a title or a director, then an error is thrown. Second, the addMovie() method inserts the new movie into the Movies collection. When called on the client, inserting the new movie into the Movies collection just updates the collection. When called on the server, inserting the new movie into the Movies collection causes the database (MongoDB) to be updated with the new movie.

    You must add the methods.js file to the common folder so it will get executed on both the client and the server. We call the addMovie() method from our client code in the client\movies.js file. Here's what the updated file looks like:

        // Declare client Movies collection
        Movies = new Meteor.Collection("movies");

        // Bind moviesTemplate to Movies collection
        Template.moviesTemplate.movies = function () {
            return Movies.find();
        };

        // Handle movieForm events
        Template.movieForm.events = {
            'submit': function (e, tmpl) {
                // Don't postback
                e.preventDefault();

                // Create the new movie
                var newMovie = {
                    title: tmpl.find("#title").value,
                    director: tmpl.find("#director").value
                };

                // Add the movie to the db
                Meteor.call(
                    "addMovie",
                    newMovie,
                    function (err, result) {
                        if (err) {
                            alert("Could not add movie " + err.reason);
                        }
                    }
                );
            }
        };

    The addMovie() method is called – on both the client and the server – by calling the Meteor.call() method. This method accepts the following parameters:

    · The string name of the method to call.
    · The data to pass to the method (you can actually pass multiple parameters for the data if you like).
    · A callback function to invoke after the method completes.

    In the JavaScript code above, the addMovie() method is called with the new movie retrieved from the HTML form. The callback checks for an error; if there is an error, then the error reason is displayed in an alert (please don't use alerts for validation errors in a production app because they are ugly!).

    Summary

    The goal of this blog post was to provide you with a brief walkthrough of a simple Meteor app. I showed you how you can create a simple Movie Database app which enables you to display a list of movies and create new movies. I also explained why it is important to remove the Meteor insecure package from a production app, and I showed you how to use Meteor Methods to insert data into the database instead of doing it directly from the client.

    I'm very impressed with the Meteor framework. Live HTML and Latency Compensation are required features for many real-world Single Page Apps, but implementing these features by hand is not easy. Meteor makes it easy.
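    One footnote to the security discussion above: depending on your Meteor version, collection allow rules offer an alternative to Meteor Methods for re-enabling client writes after removing the insecure package. A minimal server-side sketch (the validation rule shown is hypothetical):

        // Server-side sketch: permit client inserts that pass a validation rule
        Movies.allow({
            insert: function (userId, movie) {
                // Hypothetical rule: require a non-empty title
                return (movie.title != null && movie.title !== "");
            }
        });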

  • Setup Custom Portal & Content Enabled Domain

    - by Stefan Krantz
    Looking back over the past year, we have seen a large increase in deployments where only some parts of the WebCenter Suite infrastructure are used. The most common, from my personal perspective, is a domain topology that includes WebCenter Custom Portal, WebCenter Content, and Oracle HTTP Services. Today it is very common to see installations where the whole suite is installed when the use case only requires the custom portal and a sub-component like WebCenter Content. This post will go into detail on how to minimize deployment time and effort by laying down only the necessary managed servers. By following this proposed method you will minimize the configuration steps, install only the required components and schemas, configure only the necessary components, and reduce the impact of architectural changes through reduced dependencies.

    Assumptions:
    · Oracle 11g Database installed
    · SYS or equivalent access to the database to set up schemas via RCU
    · Running operating system supporting JDK 7 Update 2 (check the support matrix here)
    · Good understanding of WebLogic architecture

    Binaries:
    · Oracle JDK 7 Update 2 (1.7.0_02) (Download)
    · Oracle WebLogic 10.3.6 (Download)
    · Oracle WebCenter Binaries (11.1.1.6) (Download)
    · Oracle WebCenter Content Binaries (11.1.1.6) (Download 1) (Download 2)
    · Oracle HTTP Services (11.1.1.6) (Download)
    · Oracle Repository Creation Utility (11.1.1.6) (Download Linux or Windows)

    Schemas:
    · MDS – Meta Data Services (WebCenter and OWSM)
    · WebCenter (WebCenter Schema)
    · OCS (Oracle WebCenter Content)
    · Activities (WebCenter Activities)
    · OPSS (Policy Store for WebCenter)

    Installation Structure:
    - [Installation Home]/Middleware
        - Oracle_WC1 (WebCenter installation)
        - Oracle_WT1 (Oracle WebTier)
        - Oracle_ECM (WebCenter Content)
        - wlserver_10.3 (WebLogic installation)
    - [Installation Home]/domains
        - webcenter (WebCenter domain)
        - instances (OHS/OPMN instance)
    - [Installation Home]/applications
    - [Installation Home]/JDK1.7.0_02

    Installation and Configuration Steps:

    Install Java and configure Java Home
    · Extract the Java installable (jdk-7u2-linux-x64) to [Installation Home]/JDK1.7.0_02
    · Add JAVA_HOME to the environment settings (JAVA_HOME=[Installation Home]/JDK1.7.0_02)
    · Update PATH in the environment settings (PATH=$JAVA_HOME/bin:$PATH)

    Install WebLogic Server (Middleware Home)
    · Run the installer / execute the jar file (java -jar wls1036_generic.jar)
    · Create the Middleware Home under [Installation Home]/Middleware

    Install WebCenter Portal (Extend Middleware Home)
    · Extract the compressed file (ofm_wc_generic_11.1.1.6.0_disk1_1of1.zip) to a temp folder
    · Execute runInstaller under folder (DISK1/) with the following command (runInstaller -jreLoc $JAVA_HOME)
    · Make sure to install in the following structure ([Installation Home]/Middleware/Oracle_WC1)

    Install WebCenter Content (Extend Middleware Home)
    · Extract the compressed files (ofm_wcc_generic_11.1.1.6.0_disk1_1of2.zip & ofm_wcc_generic_11.1.1.6.0_disk1_2of2.zip) to the same temp folder
    · Execute runInstaller under folder (DISK1/) with the following command (runInstaller -jreLoc $JAVA_HOME)
    · Make sure to install in the following structure ([Installation Home]/Middleware/Oracle_ECM)

    Configure Initial Domain (domain name webcenter)
    · Execute the configuration tool – [Installation Home]/Middleware/wlserver_10.3/common/bin/config
    · Select "Create a New Weblogic Domain"
    · Select the following templates (Basic Weblogic Server Domain, Oracle Enterprise Manager, Oracle WSM Policy Manager, Oracle JRF)
    · Create a new domain named webcenter under ([Installation Home]/domains), with applications under ([Installation Home]/applications)
    · Select Production Mode
    · Finish the configuration wizard
    · Set up the username for the startup scripts – add a new file called boot.properties to ([Installation Home]/domains/webcenter/servers/AdminServer/security) with the following lines:
        username=weblogic
        password=[password in clear text; it will be encrypted during first start]
    · Start AdminServer in the background ([Installation Home]/domains/webcenter/bin/startWeblogic)

    Install and Configure Oracle WebTier (OHS Server)
    · Extract the compressed file (ofm_webtier_linux_11.1.1.6.0_64_disk1_1of1.zip) to a temp folder
    · Execute runInstaller under folder (DISK1/) with the following command (runInstaller)
    · Select the Install & Configure option
    · Deselect Oracle WebCache
    · Auto Configure Ports

    Configure Schemas with RCU (Repository Creation Utility)
    · Extract the compressed file (ofm_rcu_linux_11.1.1.6.0_disk1_1of1.zip) to a temp folder
    · Execute rcu with the following command ([temp]/rcuHome/rcu)
    · Make sure the database meets the RCU requirements, in particular that PROCESSES is 200 or more. Using SQL*Plus and the SYS user you can update this configuration in the database with the following procedure:
        ALTER SYSTEM SET PROCESSES=200 SCOPE=SPFILE
        shutdown immediate
        startup
    · Create and configure the following schemas: MDS – Meta Data Services (WebCenter and OWSM), WebCenter (WebCenter Schema), OCS (Oracle WebCenter Content), Activities (WebCenter Activities), OPSS (Policy Store for WebCenter)
    · Remember the selected schema prefix and password (they will be used later)

    Configure the WebCenter Portal instance (WC_CustomPortal)
    · Execute the following command to start the configuration wizard ([Installation Home]/Middleware/Oracle_WC1/common/bin/config)
    · Select Extend an Existing WebLogic domain
    · Select the existing webcenter domain ([Installation Home]/domains/webcenter)
    · Select "Extend my domain using existing extension template", browse to ([Installation Home]/Middleware/Oracle_WC1/common/templates/applications) and select oracle.wc_custom_portal_template_11.1.1.jar
    · Select to configure (Managed Servers/Clusters/Machines)
    · On the Managed Server screen you can now configure one or more WC_CustomPortal managed servers (name them WC_CustomPortal[n]; skip the numbering if not clustered)
    · In case of two WC_CustomPortal servers, create a cluster (any name) and make sure the managed servers join the new cluster
    · Create a new machine with the same name as the current machine
    · Make sure the AdminServer and the WC_CustomPortal[n] managed servers join the machine
    · Finish the configuration wizard
    · Stop AdminServer ([Installation Home]/domains/webcenter/bin/stopWeblogic)
    · Start AdminServer in the background ([Installation Home]/domains/webcenter/bin/startWeblogic)
    · Start WC_CustomPortal in the foreground ([Installation Home]/domains/webcenter/bin/startManagedServer WC_CustomPortal) – repeat for each WC_CustomPortal instance on the host – and give credentials for the weblogic user on startup
    · Copy the security folder, including the file boot.properties, from ([Installation Home]/domains/webcenter/servers/AdminServer/) to ([Installation Home]/domains/webcenter/servers/WC_CustomPortal/); the result should be ([Installation Home]/domains/webcenter/servers/WC_CustomPortal/security/boot.properties)

    Configure the WebCenter Content instance (UCM_server1)
    · Execute the following command to start the configuration wizard ([Installation Home]/Middleware/Oracle_ECM/common/bin/config)
    · Select Extend an Existing WebLogic domain
    · Select Oracle Universal Content Management – Content Server
    · Select to configure (Managed Servers/Clusters/Machines)
    · On the Managed Server screen create only one managed server instance (UCM_server1 on port 16200; you can select any other available port)
    · Make sure the UCM_server1 managed server joins the machine
    · Finish the configuration wizard
    · Stop AdminServer, then start AdminServer in the background
    · Start UCM_server1 in the foreground ([Installation Home]/domains/webcenter/bin/startManagedServer UCM_server1) and give credentials for the weblogic user on startup
    · Copy the security folder, including the file boot.properties, from ([Installation Home]/domains/webcenter/servers/AdminServer/) to ([Installation Home]/domains/webcenter/servers/UCM_server1/); the result should be ([Installation Home]/domains/webcenter/servers/UCM_server1/security/boot.properties)

    Post-Configure the WebCenter Content instance for WebCenter Portal
    · Open a browser with support for Java applets and navigate to http://host:port/cs
    · WARNING: The page that you are presented with after authentication will only appear once for each instance
    · WARNING: Make sure you set correct storage options; also remember to consider file sharing options if you would like to cluster your Content Server instance over multiple hosts
    · Set an appropriate Auto number prefix
    · Update the Server Socket Port: commonly set to 4444, used for RIDC communication (a requirement for WebCenter Portal)
    · Update the IP Address Filter to include the IPs that are planned to access the server over RIDC; at the minimum add the IP address of the current host (this option can be updated later via EM)
    · Stop UCM_server1 ([Installation Home]/domains/webcenter/bin/stopManagedServer UCM_server1), then start it in the background ([Installation Home]/domains/webcenter/bin/startManagedServer UCM_server1)
    · Navigate again to http://host:port/cs, then to Administration/Admin Server
    · Go to General Configuration, check Enable Accounts, and in Additional Configuration Variables add the following on two lines:
        AllowUpdateForGenwww=1
        CollectionUseCache=1
    · Save the changes and go to Component Manager
    · Click on the link "advanced component manager"
    · Enable the following components: Folders_g, WebCenterConfigure, SiteStudio, SiteStudioExternalApplications, DBSearchContainsOpSupport
    · WARNING: Make sure that the following component is disabled: FrameworkFolders
    · Stop UCM_server1, then start it again in the background
    · Navigate to http://host:port/cs, then to Administration/Site Studio Administration, and update the following (do not forget to save and submit each page): Set Default Project, Set Default WebAssets

    Post-Configure Oracle WebTier (OHS) to include the Content Server and WebCenter Portal application contexts
    · Update the following file – [Installation Home]/domains/instances/instance1/config/OHS/ohs1/mod_wl_ohs.conf
    · For a single node, add lines from the following example: Link
    · For a clustered environment, add lines from the following template (note the clustering in the example only applies to WC_CustomPortal): Link
    · For more information on this: http://docs.oracle.com/cd/E23943_01/core.1111/e12037/contentsvr.htm#WCEDG318

    Optional – Configure JOC
    · Follow the instructions: http://docs.oracle.com/cd/E23943_01/core.1111/e12037/extend_wc.htm#WCEDG264

    Optional (Recommended) – Configure Node Manager
    · Follow the instructions: http://docs.oracle.com/cd/E23943_01/core.1111/e12037/node_manager.htm#WCEDG277

    Optional (Mandatory for clustered environments) – Re-Associate the Policy Store to Database or OID
    · Follow the instructions: http://docs.oracle.com/cd/E23943_01/webcenter.1111/e12405/wcadm_security_credstore.htm#CFHDEDJH

    Optional – Configure Coherence for Content Presenter
    · Follow the instructions in this blog post (the post is for PS4): https://blogs.oracle.com/ATEAM_WEBCENTER/entry/enabling_coherence_for_content_presenter

    Other Recommended Posts
    · Cloning WebCenter Custom Portal – https://blogs.oracle.com/ATEAM_WEBCENTER/entry/cloning_a_webcenter_portal_managed
    · Improving WebCenter Performance through caching – https://blogs.oracle.com/ATEAM_WEBCENTER/entry/improving_webcenter_performance

  • Configure Forms based authentication in SharePoint 2010

    - by sreejukg
    Configuring forms authentication is a straightforward task in SharePoint. Most public-facing websites built on SharePoint require forms-based authentication. Recently, a WCM implementation where I was part of the project team required a registration system: any internet user can register on the site, and the site offers them some membership-specific functionality once they are logged in. Since registration is open to all, I didn't want to store all those users in Active Directory, so I decided to use forms-based authentication for them. This is a typical scenario for forms authentication in a SharePoint implementation.

    To implement forms authentication you require a data store where you are storing the users – technically this can be Active Directory, a SQL Server database, LDAP, etc. Forms authentication will redirect the user to the login page if the request is not authenticated. On the login page, there will be controls that validate the user inputs against the configured data store.

    In this article, I am going to use a SQL Server database with the ASP.Net membership API to configure forms-based authentication in SharePoint 2010. This article assumes that you have the SQL membership database available. I already configured the membership and roles database using the aspnet_regsql command. If you want to know how to configure the membership database using the aspnet_regsql command, read the below blog post:

    http://weblogs.asp.net/sreejukg/archive/2011/06/16/usage-of-aspnet-regsql-exe-in-asp-net-4.aspx

    I have used the database name "aspnetdb_claim". Make sure you have created the database, and make sure your database contains the tables and stored procedures for membership.

    Create a web application with claims-based authentication. This article assumes you have already created a web application using claims-based authentication. If you want to enable forms-based authentication in SharePoint 2010, you must enable claims-based authentication. Read this post for creating a web application using claims-based authentication:

    http://weblogs.asp.net/sreejukg/archive/2011/06/15/create-a-web-application-in-sharepoint-2010-using-claims-based-authentication.aspx

    Make sure you have selected "Enable Forms Based Authentication" and then entered the membership provider and role manager names. To confirm the configuration, navigate to the Central Administration website, go to the Web Applications page, select the web application and click on the Authentication Providers icon; you will see the authentication providers for the current web application. Go to the section "Claims authentication types" and make sure you have enabled forms-based authentication. I have named the membership provider SPFormAuthMembership and the role manager SPFormAuthRoleManager. You can choose your own names as you need.

    Modify the configuration files (web.config) to enable forms authentication. There are three applications that need to be configured to support forms authentication:

    Central Administration – If you want to assign permissions to the web application using the credentials from forms authentication, you need to update the Central Administration configuration. If you do not want to access forms authentication credentials from Central Administration, just skip this step.
    STS service application – The Security Token Service is the service application that issues security tokens when users log in. You need to modify the configuration of the STS application to make sure users are able to log in. To find the STS application, follow these steps: go to IIS Manager; expand the Sites node and you will see SharePoint Web Services; expand SharePoint Web Services and you can see SecurityTokenServiceApplication; right-click SecurityTokenServiceApplication and click Explore, and it will open the corresponding file system location. By default, the path for the STS is C:\Program Files\Common Files\Microsoft Shared\Web Server Extensions\14\WebServices\SecurityToken. You need to modify the configuration file available in the mentioned location.

    The web application that needs to be enabled with forms authentication – You need to modify the configuration of your web application to make sure it identifies users from forms authentication.

    Based on the above, I am going to modify the web configuration. At the end of each step, I have mentioned the expected output. I recommend that you go step by step and, after each step, make sure the configuration changes are working as expected. If you do everything all together and test your application at the end, you may face difficulties in troubleshooting the configuration errors.

    Modifications for the Central Administration web.config

    Open the web.config for Central Administration in a text editor. I always prefer Visual Studio for editing web.config. In most cases, the path of the web.config for the Central Administration website is as follows:

        C:\inetpub\wwwroot\wss\VirtualDirectories\<port number>

    Make sure you keep a backup copy of the web.config before editing it. Let me summarize what we are going to do with the Central Administration web.config. First I am going to add a connection string that points to the forms authentication database I created as mentioned in previous steps. Then I need to add a membership provider and a role manager with the corresponding connection string. Then I need to update the PeoplePickerWildcards section to make sure the users appear in search results.

    By default there is no connection string available in the web.config of Central Administration. Add a connection string just after the configSections element. The below is the connection string I have used all over the article:

        <add name="FormAuthConnString" connectionString="Initial Catalog=yourdatabasename;data source=databaseservername;Integrated Security=SSPI;" />

    Now add the membership provider. In the web.config for Central Administration there will be a <membership> tag; search for it. You will find membership and role manager under the <system.web> element. Under the membership providers section add the below code:

        <add name="SPFormAuthMembership" type="System.Web.Security.SqlMembershipProvider, System.Web, Version=2.0.0.0, Culture=neutral, PublicKeyToken=b03f5f7f11d50a3a" applicationName="FormAuthApplication" connectionStringName="FormAuthConnString" />

    Now you need to add the role manager element to the web.config. Inside the providers element under roleManager, add the below code:
<add name="SPFormAuthRoleManager" type="System.Web.Security.SqlRoleProvider, System.Web, Version=2.0.0.0, Culture=neutral, PublicKeyToken=b03f5f7f11d50a3a" applicationName="FormAuthApplication" connectionStringName="FormAuthConnString" /> After adding, your role manager will look similar to the following. As a last step, you need to update the people picker wildcard element in web.config, so that the users from your membership provider are available for browsing in Central Administration. Search for PeoplePickerWildcards in the web.config, add the following inside the <PeoplePickerWildcards> tag. <add key="SPFormAuthMembership" value="%" /> After adding this element, your web.config will look like After completing these steps, you can browse the users available in the SQL server database from central administration website. Go to the site collection administrator’s page from central administration. Select the site collection you have created for form authentication. Click on the people picker icon, choose Forms Auth and click on the search icon, you will see the users listed from the SQL server database. Once you complete these steps, make sure the users are available for browsing from central administration website. If you are unable to find the users, there must be some errors in the configuration, check windows event logs to find related errors and fix them. Change the web.config for STS application Open the web.config for STS application in text editor. By default, STS web.config does not have system.Web or connectionstrings section. Just after the System.Webserver element, add the following code. <connectionStrings> <add name="FormAuthConnString" connectionString="Initial Catalog=aspnetdb_claim;data source=sp2010_db;Integrated Security=SSPI;" /> </connectionStrings> <system.web> <roleManager enabled="true" cacheRolesInCookie="false" cookieName=".ASPXROLES" cookieTimeout="30" cookiePath="/" cookieRequireSSL="false" cookieSlidingExpiration="true" cookieProtection="All" createPersistentCookie="false" maxCachedResults="25"> <providers> <add name="SPFormAuthRoleManager" type="System.Web.Security.SqlRoleProvider, System.Web, Version=2.0.0.0, Culture=neutral, PublicKeyToken=b03f5f7f11d50a3a" applicationName="FormAuthApplication" connectionStringName="FormAuthConnString" /> </providers> </roleManager> <membership userIsOnlineTimeWindow="15" hashAlgorithmType=""> <providers> <add name="SPFormAuthMembership" type="System.Web.Security.SqlMembershipProvider, System.Web, Version=2.0.0.0, Culture=neutral, PublicKeyToken=b03f5f7f11d50a3a" applicationName="FormAuthApplication" connectionStringName="FormAuthConnString" /> </providers> </membership> </system.web> See the snapshot of the web.config after adding the required elements. After adding this, you should be able to login using the credentials from SQL server. Try assigning a user as primary/secondary administrator for your site collection from Central Administration and login to your site using form authentication. If you made everything correct, you should be able to login. This means you have successfully completed configuration of STS Configuration of Web Application for Form Authentication As a last step, you need to modify the web.config of the form authentication web application. Once you have done this, you should be able to grant permissions to users stored in the membership database. Open the Web.config of the web application you created for form authentication. 
    You can find the web.config for the application under the path:

        C:\inetpub\wwwroot\wss\VirtualDirectories\<port number>

    Basically you need to add the connection string, the membership provider, and the role manager, and update the people picker wildcard configuration. Add the connection string (the same one you added to the web.config in Central Administration). Search for <membership> in the web.config; you will find this inside the system.web element, and there will be other providers already available there. Add your forms authentication membership provider (similar to the one added to the Central Administration web.config) to the providers element under membership. Then search for the <roleManager> element in the web.config and add the new provider name under the providers section of the roleManager element.

    Now you need to configure the PeoplePickerWildcards setting in the web.config. As I specified earlier, this is to make sure you can locate the users by entering a part of their username. Add the following line under the <PeoplePickerWildcards> element in the web.config:

        <add key="SPFormAuthMembership" value="%" />

    Now you have completed all the setup for forms authentication. Navigate to the web application; from Site Actions -> Site Settings, go to People and Groups. Click on New -> Add Users, and it will pop up the people picker dialog. Click on the icon, select Forms Auth, enter a username in the search textbox, and click on the search icon. If it displays the user, it means you are done with the configuration. As you add users to the forms authentication database, those users will be able to access the SharePoint portal as normal.

  • Yet Another ASP.NET MVC CRUD Tutorial

    - by Ricardo Peres
    I know that I have not posted much on MVC, mostly because I don't use it in my daily life, but since I find it so interesting, and since it is gaining such popularity, I will be talking about it much more. This time, it's about the most basic of scenarios: CRUD. Although there are several ASP.NET MVC tutorials out there that cover ordinary CRUD operations, I couldn't find any that would explain how we can also have AJAX, optimistic concurrency control and validation, using Entity Framework Code First, so I set out to write one! I won't go into explaining what MVC, Code First, optimistic concurrency control or AJAX are; I assume you are all familiar with these concepts by now.

    Let's consider a hypothetical use case: products. For simplicity, we only want to be able to either view a single product or edit that product. First, we need our model:

        public class Product
        {
            public Product()
            {
                this.Details = new HashSet<OrderDetail>();
            }

            [Required]
            [StringLength(50)]
            public String Name { get; set; }

            [Key]
            [ScaffoldColumn(false)]
            [DatabaseGenerated(DatabaseGeneratedOption.Identity)]
            public Int32 ProductId { get; set; }

            [Required]
            [Range(1, 100)]
            public Decimal Price { get; set; }

            public virtual ISet<OrderDetail> Details { get; protected set; }

            [Timestamp]
            [ScaffoldColumn(false)]
            public Byte[] RowVersion { get; set; }
        }

    Keep in mind that this is a simple scenario. Let's see what we have:

    · A class Product that maps to a product record on the database;
    · A product has a required (RequiredAttribute) Name property which can contain up to 50 characters (StringLengthAttribute);
    · The product's Price must be a decimal value between 1 and 100 (RangeAttribute);
    · It contains a set of order details, one for each time that it has been ordered, which we will not talk about (Details);
    · The record's primary key (mapped to property ProductId) comes from a SQL Server IDENTITY column generated by the database (KeyAttribute, DatabaseGeneratedAttribute);
    · The table uses a SQL Server ROWVERSION (previously known as TIMESTAMP) column for optimistic concurrency control, mapped to property RowVersion (TimestampAttribute).

    Then we will need a controller for viewing product details, which will be located in folder ~/Controllers under the name ProductController:

        public class ProductController : Controller
        {
            [HttpGet]
            public ViewResult Get(Int32 id = 0)
            {
                if (id != 0)
                {
                    using (ProductContext ctx = new ProductContext())
                    {
                        return (this.View("Single", ctx.Products.Find(id) ?? new Product()));
                    }
                }
                else
                {
                    return (this.View("Single", new Product()));
                }
            }
        }

    If the requested product does not exist, or one was not requested at all, one with default values will be returned. I am using a view named Single to display the product's details; more on that later. As you can see, it delegates the loading of products to an Entity Framework context, which is defined as:

        public class ProductContext : DbContext
        {
            public DbSet<Product> Products { get; set; }
        }

    Like I said before, I'll keep it simple for now; only the aggregate root Product is available.
The controller will use the standard routes defined by the Visual Studio ASP.NET MVC 3 template:

routes.MapRoute(
    "Default",                    // Route name
    "{controller}/{action}/{id}", // URL with parameters
    new { controller = "Home", action = "Index", id = UrlParameter.Optional } // Parameter defaults
);

Next, we need a view for displaying the product details; let's call it Single, and have it located under ~/Views/Product:

<%@ Page Language="C#" Inherits="System.Web.Mvc.ViewPage<Product>" %>
<!DOCTYPE html>
<html>
<head runat="server">
    <title>Product</title>
    <script src="/Scripts/jquery-1.7.2.js" type="text/javascript"></script>
    <script src="/Scripts/jquery-ui-1.8.19.js" type="text/javascript"></script>
    <script src="/Scripts/jquery.unobtrusive-ajax.js" type="text/javascript"></script>
    <script src="/Scripts/jquery.validate.js" type="text/javascript"></script>
    <script src="/Scripts/jquery.validate.unobtrusive.js" type="text/javascript"></script>
    <script type="text/javascript">
        function onFailure(error)
        {
        }

        function onSuccess(ctx)
        {
        }
    </script>
</head>
<body>
    <div>
        <%: this.Html.ValidationSummary(false) %>
        <% using (this.Ajax.BeginForm("Edit", "Product", new AjaxOptions { HttpMethod = FormMethod.Post.ToString(), OnSuccess = "onSuccess", OnFailure = "onFailure" })) { %>
            <%: this.Html.EditorForModel() %>
            <input type="submit" name="submit" value="Submit" />
        <% } %>
    </div>
</body>
</html>

Yes… I am using ASPX syntax… sorry about that!

I implemented an editor template for the Product class, which must be located in the ~/Views/Shared/EditorTemplates folder as file Product.ascx:

<%@ Control Language="C#" Inherits="System.Web.Mvc.ViewUserControl<Product>" %>
<div>
    <%: this.Html.HiddenFor(model => model.ProductId) %>
    <%: this.Html.HiddenFor(model => model.RowVersion) %>
    <fieldset>
        <legend>Product</legend>
        <div class="editor-label">
            <%: this.Html.LabelFor(model => model.Name) %>
        </div>
        <div class="editor-field">
            <%: this.Html.TextBoxFor(model => model.Name) %>
            <%: this.Html.ValidationMessageFor(model => model.Name) %>
        </div>
        <div class="editor-label">
            <%: this.Html.LabelFor(model => model.Price) %>
        </div>
        <div class="editor-field">
            <%: this.Html.TextBoxFor(model => model.Price) %>
            <%: this.Html.ValidationMessageFor(model => model.Price) %>
        </div>
    </fieldset>
</div>

One thing you'll notice is that I am including both the ProductId and the RowVersion properties as hidden fields; they will come in handy later, so that we know which product and version we are editing. The other thing to note is the set of included JavaScript files: jQuery, jQuery UI, and the unobtrusive AJAX and validation scripts. Also, I am not using the Content extension method for translating relative URLs, because that way I would lose JavaScript IntelliSense for the jQuery functions.

OK. At this moment, I want to add support for AJAX and optimistic concurrency control, so I write a controller method like this:

[HttpPost]
[AjaxOnly]
[Authorize]
public JsonResult Edit(Product product)
{
    if (this.TryValidateModel(product) == true)
    {
        using (BlogContext ctx = new BlogContext())
        {
            Boolean success = false;

            ctx.Entry(product).State = (product.ProductId == 0) ? EntityState.Added : EntityState.Modified;

            try
            {
                success = (ctx.SaveChanges() == 1);
            }
            catch (DbUpdateConcurrencyException)
            {
                ctx.Entry(product).Reload();
            }

            return (this.Json(new { Success = success, ProductId = product.ProductId, RowVersion = Convert.ToBase64String(product.RowVersion) }));
        }
    }
    else
    {
        return (this.Json(new { Success = false, ProductId = 0, RowVersion = String.Empty }));
    }
}

So, this method is only valid for HTTP POST requests (HttpPost), coming from AJAX (AjaxOnly, from MVC Futures), and from authenticated users (Authorize). It returns a JSON object, which is what you would normally use for AJAX requests, containing three properties:

- Success: a boolean flag;
- ProductId: the inserted product id, as coming from the database;
- RowVersion: the current value of the ROWVERSION column as a Base-64 string.

If the product is new, it will be inserted into the database and its primary key will be returned in the ProductId property; Success will be set to true. If a DbUpdateConcurrencyException occurs, it means that the value in the RowVersion property does not match the current ROWVERSION column value in the database, so the record must have been modified between the time the page was loaded and the time we attempted to save the product. In this case, the controller just gets the new value from the database and returns it in the JSON object; Success will be false. Otherwise, the record will be updated, and Success, ProductId and RowVersion will all have their values set accordingly.

So let's see how we can react to these situations on the client side. Specifically, we want to deal with these situations:

- The user is not logged in when the update/create request is made, perhaps because the cookie expired;
- The optimistic concurrency check failed;
- All went well.

So, let's change our view:

<%@ Page Language="C#" Inherits="System.Web.Mvc.ViewPage<Product>" %>
<%@ Import Namespace="System.Web.Security" %>
<!DOCTYPE html>
<html>
<head runat="server">
    <title>Product</title>
    <script src="/Scripts/jquery-1.7.2.js" type="text/javascript"></script>
    <script src="/Scripts/jquery-ui-1.8.19.js" type="text/javascript"></script>
    <script src="/Scripts/jquery.unobtrusive-ajax.js" type="text/javascript"></script>
    <script src="/Scripts/jquery.validate.js" type="text/javascript"></script>
    <script src="/Scripts/jquery.validate.unobtrusive.js" type="text/javascript"></script>
    <script type="text/javascript">
        function onFailure(error)
        {
            window.alert('An error occurred: ' + error);
        }

        function onSuccess(ctx)
        {
            if (typeof (ctx.Success) != 'undefined')
            {
                $('input#ProductId').val(ctx.ProductId);
                $('input#RowVersion').val(ctx.RowVersion);

                if (ctx.Success == false)
                {
                    window.alert('An error occurred while updating the entity: it may have been modified by third parties. Please try again.');
                }
                else
                {
                    window.alert('Saved successfully');
                }
            }
            else
            {
                if (window.confirm('Not logged in. Login now?') == true)
                {
                    document.location.href = '<%: FormsAuthentication.LoginUrl %>?ReturnURL=' + document.location.pathname;
                }
            }
        }
    </script>
</head>
<body>
    <div>
        <%: this.Html.ValidationSummary(false) %>
        <% using (this.Ajax.BeginForm("Edit", "Product", new AjaxOptions { HttpMethod = FormMethod.Post.ToString(), OnSuccess = "onSuccess", OnFailure = "onFailure" })) { %>
            <%: this.Html.EditorForModel() %>
            <input type="submit" name="submit" value="Submit" />
        <% } %>
    </div>
</body>
</html>

The implementation of the onSuccess function first checks whether the response contains a Success property. If it does not, the most likely cause is that the request was redirected to the login page (by Forms Authentication) because it wasn't authenticated, so we navigate there as well, keeping a reference to the current page. If the property is present, the function saves the current values of the ProductId and RowVersion properties to their respective hidden fields; these will be sent on each subsequent post and will be used to determine whether the request is adding a new product or updating an existing one.

The only thing missing is the ability to insert a new product after inserting/editing an existing one, which can easily be achieved with this snippet:

<input type="button" value="New" onclick="$('input#ProductId').val('');$('input#RowVersion').val('');" />

And that's it.
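
One thing the listings above never show is the Product entity itself. Below is a minimal sketch of what it could look like, assuming Entity Framework Code First; the class and property names are taken from the listings, while the data annotations and everything else are illustrative assumptions:

using System;
using System.ComponentModel.DataAnnotations;
using System.Data.Entity;

public class Product
{
    [Key]
    public int ProductId { get; set; }

    [Required]
    public string Name { get; set; }

    public decimal Price { get; set; }

    // Assumed mapping: [Timestamp] binds this property to a SQL Server
    // ROWVERSION column, which EF checks on UPDATE and which makes
    // SaveChanges throw DbUpdateConcurrencyException on a mismatch.
    [Timestamp]
    public byte[] RowVersion { get; set; }
}

public class BlogContext : DbContext
{
    public DbSet<Product> Products { get; set; }
}

The [Timestamp] attribute is what makes the optimistic concurrency check work: the Base-64 value round-tripped through the hidden field is compared against the current column value when the posted entity is saved.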


  • Building The Right SharePoint Team For Your Organization

    - by Mark Rackley
    I see the question posted fairly often asking what kind of SharePoint team an organization should have. How many people do I need? What roles do I need to fill? What is best for my organization? Well, just like every other answer in SharePoint, the correct answer is "it depends". Do you ever get sick of hearing that??? I know I do… So, let me give you my thoughts and opinions based upon my experience and what I've seen, and let you come to your own conclusions.

What are the possible SharePoint roles?

I guess the first thing you need to understand are the different roles that exist in SharePoint (and there are LOTS). Remember, SharePoint is a massive beast and you will NOT find one person who can do it all. If you are hoping to find that person you will be sorely disappointed. For the most part this is true in SharePoint 2007 and 2010. However, generally things are improved in 2010 and easier for junior individuals to grasp.

SharePoint Administrator

The absolutely, positively only role that you should not be without, no matter the size of your organization or SharePoint deployment, is a SharePoint administrator. These guys are essential to keeping things running and figuring out what's wrong when things aren't running well. These unsung heroes do more before 10 am than I do all day. The bad thing is, when these guys are awesome, you don't even know they exist because everything is running so smoothly. You should definitely invest some time and money here to make sure you have some competent if not rockstar help. You need an admin who truly loves SharePoint and will go that extra mile when necessary. Let me give you a real world example of what I'm talking about:

We have a rockstar admin… and I'm sure she's sick of my throwing her name around, so she'll just have to live with remaining anonymous in this post… sorry Lori… Anyway! A couple of weeks ago our server team came to us and said, "Hi Lori, I'm finalizing the MOSS servers and doing updates that require a restart; can I restart them?" Seems like a harmless request from your server team, does it not? Sure, go ahead and apply the patches and reboot during our scheduled maintenance window. No problem, right? Sounded fair to me… but no… not to our fearless SharePoint admin: "I need a complete list of patches that will be applied. There is an update out there that will break SharePoint. KB973917 is the patch that has been shown to cause issues."

What? You mean Microsoft released a patch that would actually adversely affect SharePoint? If we did NOT have a rockstar admin, our server team would have applied these patches, and then when some problem occurred in SharePoint we'd have had to go through the fun task of tracking down exactly what caused the issue and resolving it. How much time would that have taken? If you have a junior SharePoint admin, or an admin who's not out there staying on top of what's going on, you could have spent days tracking down something as simple as applying a patch you should not have applied.

I will even go so far as to say the only SharePoint rockstar you NEED in your organization is a SharePoint admin. You can always outsource really complicated development projects or bring in a rockstar contractor every now and then to make sure you aren't way off track in other areas. For your day-to-day sanity and to keep SharePoint running smoothly, you need an awesome admin. Some rockstars in this category are: Ben Curry, Mike Watson, Joel Oleson, Todd Klindt, Shane Young, John Ferringer, Sean McDonough, and of course Lori Gowin.

SharePoint Developer

Another essential role for your SharePoint deployment is a SharePoint developer. Things do start to get a little hazy here, and there are many flavors of "developers". Are you writing custom code? Using SharePoint Designer? What about SharePoint branding? Are all of these considered developers? I would say yes. Are they interchangeable? I'd say no. Development in SharePoint is such a large beast in itself. I would say that it's not so large that you can't know it all well, but it is so large that there are many people who specialize in one particular category. If you are lucky enough to have someone on staff who knows it all well, you had better make sure they are well taken care of, because those guys are ready-made to move over to a consulting role and charge you three times what you are probably paying them. :) Some of the all-around rockstars are Eric Shupps, Andrew Connell (go Razorbacks), Rob Foster, Paul Schaeflein, and Todd Bleeker.

SharePoint Power User / No-Code Solutions Developer

These SharePoint Swiss Army Knives are essential for quick wins in your organization. These people can twist the out-of-the-box functionality to make it do things you would not even imagine. Give these guys SharePoint Designer, jQuery, InfoPath, and a little time, and they will create views, dashboards, and KPIs that will blow your mind away and give your execs the "wow" they are looking for. Not only can they deliver that wow factor, but they can mash up, merge, and really help make your SharePoint application usable and deliver an overall better user experience. Before you hand off a project to your SharePoint custom code developer, let one of these rockstars look at it and show you what they can do (in probably less time). I would say the second most important role you can fill in your organization is one of these guys. Rockstars in this category are Christina Wheeler, Laura Rogers, Jennifer Mason, and Mark Miller.

SharePoint Developer – Custom Code

If you want to really integrate SharePoint into your legacy systems, or really twist it and make it bend to your will, you are going to have to open up Visual Studio and write some custom code. Remember, SharePoint is essentially just a big, huge, ginormous .NET application, so you CAN write code to make it do ANYTHING, but do you really want to spend the time and effort to do so? At some point, with every other form of SharePoint development, you are going to run into SOME limitation (SPD workflows is the big one that comes to mind). If you truly want to knock down all the walls, then custom development is the way to go. PLEASE keep in mind when you are looking for a custom code developer that a .NET developer does NOT equal a SharePoint developer. Just SOME of the things these guys write are:

- Custom workflows
- Custom web parts
- Web service functionality
- Imports of data from legacy systems
- Exports of data to legacy systems
- Custom actions
- Event receivers
- Service applications (2010)

These guys are also the ones generally responsible for packaging everything up into solution packages (you are doing that, right?). Rockstars in this category are Phil Wicklund, Christina Wheeler, Geoff Varosky, and Brian Jackett.

SharePoint Branding

"But it LOOKS like SharePoint!" Somebody call the WAAAAAAAAAAAAHMbulance… Themes, master pages, page layouts, zones, and over 2000 styles in CSS: these guys not only have to be comfortable with all of SharePoint's quirks and pain points when branding, but they have to know it TWICE, for publishing and non-publishing sites. Not only that, but these guys really need to have an eye for graphic design and be able to translate the ramblings of business into something visually stunning. They also have to be comfortable with XSLT and XML, and be able to hand off what they do to your custom developers for them to package as solutions (which you are doing, right?). These rockstars include Heather Waterman, Cathy Dew, and Marcy Kellar.

SharePoint Architect

SharePoint architects are generally SharePoint admins or developers who have moved into more of a BA role. Is that fair to say? These guys really have a grasp and understanding of what SharePoint IS and what it can do. These guys help you structure your farms to meet your needs and help you design your applications the correct way. It's always a good idea to bring in a rockstar SharePoint architect to do a sanity check and make sure you aren't doing anything stupid. Most organizations probably do not have a rockstar architect on staff. These guys are generally brought in for the deployment of a farm, the upgrade of a farm, or large development projects. I personally also find architects very useful for sitting down with the business to translate their needs into what SharePoint can do. A good architect will be able to pick out what can be done out of the box and what has to be custom built, and hand those requirements to the development staff. Architects can generally fill in as an admin or a developer when needed. Some rockstar architects are Rick Taylor, Dan Usher, Bill English, Spence Harbar, Neil Hodgkins, Eric Harlan, and Bjørn Furuknap.

Other Roles / Specialties

On top of all these other roles you also get people who specialize in things like reporting, BDC (BCS in 2010), search, performance, security, project management, etc., etc., etc. Again, most organizations will not have one of these gurus on staff; they'll just pay through the nose for them when they need them. :)

SharePoint End User

Everyone else in your organization who touches SharePoint falls into this category. What they actually DO in SharePoint is determined by your governance and what permissions you give these guys. Hopefully you have these guys on a fairly short leash and are NOT giving them access to tools like SharePoint Designer. Sadly, end users are the ones who truly make your deployment a success by using it, but they are also your biggest enemy in breaking it. :) We love you guys… really!!!

Okay, all that's fine and dandy, but what should MY SharePoint team look like?

It depends! Okay… are you just doing out-of-the-box team sites with no custom development? Then you are probably fine with a great admin team and a great no-code solution development team. How many people do you need? Depends on how busy you can keep them. Sorry, I can't answer the question about numbers without knowing your specific needs. I can just tell you who you MIGHT need and what they will do for you. I'll leave you with what my ideal SharePoint team would look like for a particular scenario:

Farm / Organization Structure

- Dev, QA, and 2 production farms
- 5000 – 10000 users
- Custom development and integration with legacy systems
- Team sites, My Sites, intranet, document libraries and overall company collaboration

Team

- Rockstar SharePoint administrator
- 2-3 junior SharePoint administrators
- SharePoint architect / lead developer
- 2 power user / no-code solution developers
- 2-3 custom code developers
- Branding expert

With a team of that size and skill set, they should be able to keep a substantial SharePoint deployment running smoothly and meet your business needs. This does NOT mean that you would not need to bring in contract help from time to time when you need an uber specialist in one area. Also, this team assumes there will be ongoing development for the life of your SharePoint farm. If you are just going to be doing sporadic custom development, it might make sense to partner with an awesome firm that specializes in that sort of work (I can give you the names of a couple if you are interested). Again though, the size of your team depends on the number of requests you are receiving and how much active deployment you are doing. So, don't bring in a team that looks like this and then yell at me because they are sitting around with nothing to do or are so overwhelmed that nothing is getting done. I do URGE you to take the proper time to assess your needs and determine what team is BEST for your organization. Also, PLEASE PLEASE PLEASE do not skimp on the talent. When it comes to SharePoint, you really do get what you pay for in employees, contractors, and software. SharePoint can become absolutely critical to your business; skimp on hiring a developer and he creates a web part that brings down the farm because he doesn't know what he's doing, or hire an admin who thinks it's fine to stick everything in the same content database and then can't figure out why people are complaining. SharePoint can be an enormous blessing to an organization or its biggest curse. Spend the time and money to do it right, or be prepared to spend even more time and money later to fix it.


  • What Every Developer Should Know About MSI Components

    - by Alois Kraus
    Hopefully nothing. But if you have to do more than simple XCopy deployment, and you need to support updates, upgrades and perhaps side-by-side scenarios, there is no way around MSI. You can create MSI files with a Visual Studio setup project, which is severely limited, or you can use the Windows Installer Toolset. I cannot talk about WIX with my German colleagues because WIX has a very special meaning. It is funny to always use the long name when I talk about deployment possibilities. Alternatively you can buy commercial tools which help you author MSI files, but I am not sure how good they are. Given enough pain with existing solutions you can also learn the MSI APIs and create your own packaging solution. If I were you, I would use either a commercial visual tool when you do easy deployments, or the free Windows Installer Toolset. Once you know the WiX schema you can create well-formed wxs XML files easily with any editor. Then you can "compile" your MSI package from the wxs files. Recently I had the "pleasure" to get my hands dirty with C++ (again) and the MSI technology. Installation is a complex topic, but after several months of digging into arcane MSI issues I can safely say that there should exist an easier way to install and update files than there is today. I am not alone with this statement, as John Robbins (creator of the cool tool Paraffin) states: ".. It's a brittle and scary API in Windows …". To help other people struggling with installation issues, I present you the advice I (and others) found useful, and what will happen if you ignore this advice.

What is an MSI file?

An MSI file is basically a database with tables which reference each other to control how your un/installation should work. The basic idea is that you declare via these tables what you want to install, and MSI controls how to get your stuff onto or off your machine. Your "stuff" usually consists of files, registry keys, shortcuts and environment variables. Therefore the most important tables are the File, Registry, Environment and Shortcut tables, which define what will be un/installed. The key to mastering MSI is that every resource (file, registry key, …) is associated with an MSI component. The actual payload consists of compressed files in the CAB format, which can either be embedded into the MSI file, or reside beside the MSI file or in a subdirectory below it. To examine MSI files you need Orca, a free MSI editor provided by Microsoft. There is also another free editor called Super Orca, which supports diffs between MSIs and does not lock the MSI files. But since Orca comes with a shell extension, I tend to use only Orca, because it is so easy to right-click on an MSI file and open it with this tool.

How Do I Install It?

Double-click it. This works for fresh installations as well as major upgrades. Updates need to be installed via the command line:

msiexec /i <msi> REINSTALL=ALL REINSTALLMODE=vomus

This tells the installer to reinstall all already installed features (new features will NOT be installed). The reinstallmode letters force an overwrite of the old cached package in the %WINDIR%\Installer folder. All files, shortcuts and registry keys are redeployed if they are missing or need to be replaced with a newer version. When things go really wrong and you want to overwrite everything unconditionally, use REINSTALLMODE=vamus.

How To Enable MSI Logs?

You can download an MSI from Microsoft which installs some registry keys to enable full MSI logging. The log files can be found in your %TEMP% folder and are called MSIxxxx.log. Alternatively you can add to your msiexec command line the option

msiexec …. /l*vx <LogFileName>

Personally I find it rather strange that * does not mean full logging. To really get all logs I need to add v and x, which is documented in the msiexec help, but I still find this behavior unintuitive.

What are MSI components?

The whole MSI logic is bound to the concept of MSI components. Nearly every MSI table has a Component column which binds an installable resource to a component. Below are the screenshots of the FeatureComponents and Component tables of an example MSI. The Feature table basically defines the feature hierarchy. To find out what belongs to a feature you need to look at the FeatureComponents table, where for each feature the components are listed which will be installed when that feature is installed. The MSI components are defined in the Component table. This table has as its first column the component name and as its second column the component id, which is a GUID. All resources you want to install belong to an MSI component. Therefore nearly all MSI tables have a Component_ column which contains the component name. If you look, e.g., at the File table, you see that every file belongs to a component, which is true for all other tables which install resources. The Component table is the glue between all other tables which contain the resources you want to install.

So far so easy. Why is MSI then so complex? Most MSI problems arise from the fact that you violated an MSI component rule in one way or the other. When you install a feature, the reference count for all components belonging to this feature will increase by one. If your component is installed by more than one feature, it will get a higher refcount. When you uninstall a feature, its refcount will drop by one. Interesting things happen when the component reference count reaches zero: then all associated resources will be deleted. That looks like a reasonable thing, and it is. What makes it complex are the strange component rules you have to follow. Below are some important component rules from the Tao of the Windows Installer:

Rule 16: Follow Component Rules

Components are a very important part of the Installer technology. They are the means whereby the Installer manages the resources that make up your application. The SDK provides the following guidelines for creating components in your package:

- Never create two components that install a resource under the same name and target location. If a resource must be duplicated in multiple components, change its name or target location in each component. This rule should be applied across applications, products, product versions, and companies.
- Two components must not have the same key path file. This is a consequence of the previous rule. The key path value points to a particular file or folder belonging to the component that the installer uses to detect the component. If two components had the same key path file, the installer would be unable to distinguish which component is installed. Two components however may share a key path folder.
- Do not create a version of a component that is incompatible with all previous versions of the component. This rule should be applied across applications, products, product versions, and companies.
- Do not create components containing resources that will need to be installed into more than one directory on the user's system. The installer installs all of the resources in a component into the same directory. It is not possible to install some resources into subdirectories.
- Do not include more than one COM server per component. If a component contains a COM server, this must be the key path for the component.
- Do not specify more than one file per component as a target for the Start menu or a Desktop shortcut.

… And these rules do not even talk about component ids, update packages and upgrades, which you need to understand as well.

Let's suppose you install two MSIs (MSI1 and MSI2) which have the same ComponentId but different component names. Both install the same file. What will happen when you uninstall MSI2? Hm, the file should stay there. But the component names are different. Yes and yes. But MSI does not use the component name as the key for the refcount. Instead, the ComponentId column of the Component table, which contains a GUID, is used as the identifier under which the refcount is stored. The components Comp1 and Comp2 are identical from the MSI perspective. After the installation of both MSIs, the component with the id {100000….} has a refcount of two. After uninstallation of one MSI there is still a refcount of one, which drops to zero, just as expected, when we uninstall the last MSI. Then the file, which was the same for both MSIs, is deleted. You should remember that MSI keeps a refcount across MSIs for components with the same component id. MSI manages components, not the resources you installed. The resources associated with a component are deleted then, and only then, when the refcount of the component reaches zero.

The dependencies between features, components and resources can be described as relations (m and k are numbers >= 1; n can be 0). Inside an MSI the following relations are valid:

Feature    1 –> n  Components
Component  1 –> m  Features
Component  1 –> k  Resources

These relations express that one feature can install several components, and features can share components between them. Every (meaningful) component will install at least one resource, which means that its name (primary key, to stay in database speak) occurs as a value in the Component column of some other table which installs a resource.

Let's make it clear with an example. With the feature MainFeature we want to install some files, a registry key and a shortcut. We can then create components Comp1..3 which are referenced by the resources defined in the corresponding tables:

Feature      Component  Registry      File       Shortcuts
MainFeature  Comp1      RegistryKey1
MainFeature  Comp2                    File.txt
MainFeature  Comp3                    File2.txt  Shortcut to File2.txt

It is illegal for the same resource to be part of more than one component, since this would break the refcount mechanism. Let's illustrate this:

Feature   ComponentId  Resource   Reference Count
Feature1  {1000-…}     File1.txt  1
Feature2  {2000-….}    File1.txt  1

The installation part works well, but what happens when you uninstall Feature2? Component {2000….} gets a refcount of zero, whereupon MSI deletes all resources belonging to this component. In this case File1.txt will be deleted. But Feature1 still has another component, {1000….}, with a refcount of one, which means that the file was deleted too early. You have just ruined your installation. To fix it, you then need to click on the Repair button under Add/Remove Programs to let MSI reinstall any missing registry keys, files or shortcuts.

The vigilant reader might have noticed that there is more in the Component table. Besides its name and GUID, it also has an installation directory, attributes and a KeyPath. The KeyPath is a reference to a file or registry key which is used to detect if the component is already installed. This becomes important when you repair or uninstall a component. To find out if the component is already installed, MSI checks whether the registry key or file referenced by the KeyPath property exists. If it does not exist, MSI assumes the component was already uninstalled (which can lead to problems during uninstall); if it does exist, MSI assumes the component is already installed and all is fine. Why is this detail so important? Let's put all files into one component. The KeyPath should then be one of the files of your component, to check whether it was installed or not. When your installation becomes corrupt because a file was deleted, you cannot repair it with the Repair button under Add/Remove Programs, because MSI checks the component integrity via the resource referenced by its KeyPath. As long as you did not delete the KeyPath file, MSI thinks all resources of your component are installed and never executes any repair action. You get even more trouble when you try to remove files during an upgrade (you cannot remove files during an update) from your super component which contains all files. The only way out, and therefore best practice, is to assign every resource you want to install its own component. This ensures painless updates and repairs, and you have much less effort removing specific files during an upgrade. In effect you get this best-practice relation:

Feature    1 –> n  Components
Component  1 –> 1  Resource

MSI Component Rules

Rule 1 – One component per resource

Every resource you want to install (file, registry key, value, environment value, shortcut, directory, …) must get its own component, which never changes between versions as long as the install location is the same.

Penalty: If you add more than one resource to a component, you will break the repair capability of MSI, because the KeyPath is used to check whether the component needs repair.

MSI      ComponentId  Files
MSI 1.0  {1000}       File1-5
MSI 2.0  {2000}       File2-5

You want to remove File1 in version 2.0 of your MSI. Since you want to keep the other files, you create a new component and add them there. MSI will delete all files if the component refcount of {1000} drops to zero. The files you want to keep are added to the new component {2000}. OK, that works if your upgrade uninstalls the old MSI first. This will cause the refcount of all previously installed components to reach zero, which means that all files present in version 1.0 are deleted. But there is a faster way to perform your upgrade: first install your new MSI and then remove the old one. If you choose this upgrade path, then you will lose File1-5 after your upgrade, and not only File1 as intended by your new component design.

Rule 2 – Only add, never remove resources from a component

If you followed Rule 1 you will not need Rule 2. You can add more resources to a component in a patch. That is OK. But you can never remove anything from it. There are tricky ways around that, but I do not want to encourage bad component design.

Penalty: Let's assume you have two MSI files which install one file under the same component:

MSI1                  MSI2
{1000} – ComponentId  {1000} – ComponentId
File1.txt             File2.txt

When you install and uninstall both MSIs, you will end up with an installation where either File1 or File2 will be left. Why? It seems that MSI does not store the resources associated with each component in its internal database. Instead, Windows will simply query the MSI that is currently being uninstalled for all resources belonging to this component. Since it will find only one file and not two, it will only uninstall one file. That is the main reason why you can never remove resources from a component!

Rule 3 – Never remove a component from an update MSI

This is the same as if you changed the GUID of a component by accident in your new update package. The resulting update package will not contain all components from the previously installed package.

Penalty: When you remove a component from a feature, MSI will set the feature state during the update to Advertised and, if you enabled MSI logging, log a warning message into its log file:

SELMGR: ComponentId '{2DCEA1BA-3E27-E222-484C-D0D66AEA4F62}' is registered to feature 'xxxxxxx', but is not present in the Component table. Removal of components from a feature is not supported!
MSI (c) (24:44) [07:53:13:436]: SELMGR: Removal of a component from a feature is not supported

Advertised means that MSI treats all components of this feature as not installed. As a consequence, during uninstall nothing will be removed, since it is not installed! This is not only bad because uninstall no longer works, but this feature will also not get the required patches. All other features which have followed the component versioning rules for update packages will be updated, but the one faulty feature will not. This results in very hard-to-find bugs where an update was only partially successful. Things got better with Windows Installer 4.5, but you cannot rely on nobody using an older installer. It is a good idea to add MSIENFORCEUPGRADECOMPONENTRULES=1 to your update msiexec call, which will abort the installation if you violated this rule.
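
If you want to sanity-check these rules across your own packages without opening each one in Orca, you can read the Component table programmatically. The following is a minimal C# sketch, assuming the managed DTF library (Microsoft.Deployment.WindowsInstaller) that ships with the Windows Installer Toolset is referenced; Setup.msi is just a placeholder name:

using System;
using Microsoft.Deployment.WindowsInstaller; // DTF, part of the WiX toolset

class ComponentDump
{
    static void Main(string[] args)
    {
        // Placeholder path - pass the MSI you want to inspect.
        string msiPath = args.Length > 0 ? args[0] : "Setup.msi";

        using (var database = new Database(msiPath, DatabaseOpenMode.ReadOnly))
        using (View view = database.OpenView(
            "SELECT `Component`, `ComponentId`, `KeyPath` FROM `Component`"))
        {
            view.Execute();

            // One row per MSI component: the name, the GUID that Windows
            // Installer ref-counts, and the KeyPath used to detect it.
            Record record;
            while ((record = view.Fetch()) != null)
            {
                using (record)
                {
                    Console.WriteLine("{0}  {1}  KeyPath={2}",
                        record.GetString(1), record.GetString(2), record.GetString(3));
                }
            }
        }
    }
}

Dumping the ComponentId GUIDs this way makes it easy to diff two versions of a package and spot violations of Rule 3 (a component that disappeared) or an accidentally changed GUID.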


  • CodePlex Daily Summary for Saturday, April 24, 2010

    CodePlex Daily Summary for Saturday, April 24, 2010

New Projects

- AutoWorkLoad: An application intended to load hours to an accounting system such as TimeTracker automatically.
- Chemistry Add-in for Word: The Chemistry Add-in for Word makes it easier for students, chemists, and researchers to insert and modify chemical information, such as labels, fo...
- Exceptional Visualizer: A Debugger Visualizer for VS 2008 that allows for effective visual tracing of an Exception stack. Useful for Unity Resolution Exceptions as seen i...
- FTE Owner Requirement: FIM 2010 Activity: Forefront Identity Manager 2010 (FIM) activity designed to ensure that a group object has at least one assigned Full Time Employee owner. This policy...
- Globus CB: Group project 2009-2010.
- Highlighterr for Visual C++ 2010: A simple code syntax highlighter to change the colors of classes, structs, interfaces, macros and typedefs in the Visual C++ 2010 IDE. It is implem...
- HTML to word (.doc): Easy export of your HTML code to Microsoft Word (.doc extension).
- IETT Hat Güzergah Importer: Downloads bus lines and routes from http://www.iett.gov.tr and parses them with RegEx. Saves the resulting data to SQL Server.
- Industrial Dashboard: WCF service that allows executing SQL Server stored procedures straight from javascript code, enabling sending and receiving structured data withou...
- iSafePDF: iSafePDF is PDF protection software. It allows you to encrypt PDF documents, sign them using a certificate and timestamp the signature. All those...
- Kordinat Dönüştürücü: Two-way conversion between UTM and DDD coordinates. Display of coordinates, polygons and license areas on Google Earth. Turkey's map sheets...
- LinkedIn® for Windows Mobile: LinkedIn® for Windows Mobile brings your LinkedIn® account to your Windows Mobile powered phone. See network updates / connections / profile etc.
- Macrosome: An F# project that demonstrates recording and replaying user operations.
- Markov Text Generator: Markov Text Generator.
- MEFedMVVM: Library for building MEF MVVM applications for Silverlight and WPF. By using this library you can easily build MVVM applications. *UNDER Constructi...
- Mercurial to Team Foundation Server Work Item Hook: This is a Mercurial hook that will mark Team Foundation Server work items as resolved with a specific format in the commit description.
- Metaball WPF HLSL: Metaballs in WPF 3 with pixel shaders.
- Project Audiophile: Project Audiophile is a suite of applications and libraries built for .Net and Mono for the purposes of listening to and organising music.
- RSS Application Updater: A library that helps you update your app from your web site's feed. Works very well with Drupal.
- Sherwood Content Management Suite: A project that aims to provide a powerful and flexible tool for aggregating data from different data sources. Add your own plugins to store wanted ...
- Sonic.Net: Sonic.Net is a .Net library designed to facilitate development of rich client applications both in Silverlight and WPF. Sonic.Net makes use of all ...
- StoichiometriCS: Stoichiometric Chemical Equation Solver.
- Vate Game Engine: Vate is a new XNA game engine. For more information about this project, please visit http://blog.aphysoft.com.
- Yahoo OpenID YQL Demo: A demo program showing how to use Yahoo OpenID and the Yahoo Query Language (YQL).

New Releases

- Basic Sprite Sheet Creator: Sprite Tool V1.11: I had a small error when using multiple animations without the one pixel border that I overlooked when rewriting the code. It should be completely ...
- Braintree Client Library: Braintree-2.0.0: Updated IsSuccess() on transaction results to return false on declined transactions. Search results now implement IEnumerable and will automatical...
- BV Commerce 5 Import Export Tools: Version 5.7.0 (for BV Commerce 5.7): Updated version compatible with BV Commerce 5.7. Do not use on earlier versions.
- Chemistry Add-in for Word: Chemistry Add-in for Word Beta 2: This is the source code release of the Chemistry Add-in for Word Beta 2. System requirements: to run this software, you'll need the following: Wind...
- Controlled Vocabulary: 1.0.0.5: System requirements: Outlook 2007 / 2010, .Net Framework 3.5, VSTO 2010 Runtime. Installation: 1. Close Outlook (use Task Manager to ensure no running ...
- DotNetNuke® Store: 02.01.33 RC: What's new in this release? Bugs corrected: fixed a bug related to the encryption cookie. New features: added token pair [IFLOGGED] [/IFLOGGED] us...
- EdiliOS: Beta 0.2.1: Added support for FidoCadJ, Davide Bucci's cross-platform FidoCad editor, with an integrated civil engineering library.
- Event Scavenger: Admin tool Version 3.1.1: Fixed the Admin tool that fails on editing general settings. Only the Admin tool is affected.
- Exceptional Visualizer: Exception Visualizer: A Debugger Visualizer to help with long chains of exceptions where digging through the inner exceptions is hard to do. Specifically, this release ...
- fracback: Binaries: Use at your own risk.
- Free Silverlight & WPF Chart Control - Visifire: Visifire for SL 4 and WPF Charts 3.5.1 Released: This release contains a fix for the following bug: chart threw an exception with a DateTime axis if the IntervalType property was set as 'Minutes' in Ax...
- Free Silverlight & WPF Chart Control - Visifire: Visifire Silverlight and WPF Charts 3.0.8 Released: This release contains a fix for the following bug: chart threw an exception with a DateTime axis if the IntervalType property was set as 'Minutes'...
- GeoUtility Library: GeoUtility Library 3.1.5.0: Please note: this is an open source version. The commercial version offers much more functionality. Help files (English/German) are only available ...
- Highlighterr for Visual C++ 2010: Highlighterr for Visual C++ 2010 Test Release 1.0: To install the extension, download it and then double-click on the Highlighterr.vsix file. This should bring up a dialog saying something about wh...
- Highlighterr for Visual C++ 2010: Highlighterr for Visual C++ 2010 Test Release 1.01: The lack of support for /* */ comments was annoying me, so I added it. To install the extension, download it and then double-click on the Highlig...
- Home Access Plus+: v4.0.1.0: v4.0.1.0 Beta change log: fixed an issue with laptops and the booking system (CSS and code fixes); moved filters to top; added some Javascript to...
- HTML to word (.doc): FullSourceDownload: The full source contains: bin/AppWebdx7wusqu.dll, Default.aspx, htw.ascx, htw.ascx.vb, Web.config.
- Industrial Dashboard: 3.0 Beta: Added example with Dojox.DataGrid. Added example with Ext.js chart.
- iSafePDF: iSafePDF v1.2: This is the first public release. This version supports: PDF signature, timestamped signature, multi-signature, PDF encryption and meta-data modifi...
- linq.js - LINQ for JavaScript: ver 2.0.0.0: All code rewritten from scratch. Enumerators support Dispose. Namespace changed (E, Linq.Enumerable -> Enumerable). Deleted methods ToJSON, ToTable, Trace...
- LogikBug's IoC Container: Third Release: This project is dependent upon Microsoft.Practices.ServiceLocation and must be referenced when referencing LogikBug.Injection. Click here to view d...
- Macrosome: 0.0.1 preview: Key points: only mouse clicks supported; just for preview, not stable. How to use: start from Macrosome.Wpf.exe; clicking the "Record" button will start oper...
- Mercurial to Team Foundation Server Work Item Hook: Version 0.1: This is the first version of the Mercurial to Team Foundation Server hook. It currently supports only adding comments to existing work items.
- MOSSDAL: MOSS Data Access Layer for data from the SharePoint Lists Service: MOSSDAL Silverlight Framework Release 1: This is the first release of MOSSDAL for Silverlight. For the MOSSDAL Framework for .NET Release 1, click here.
- MOSSDAL: MOSS Data Access Layer for data from the SharePoint Lists Service: MOSSDAL Silverlight Sample Release 1: This is the first release of the MOSSDAL framework samples for Silverlight. For the MOSSDAL .NET Release 1 sample, click here.
- MyWSAT - ASP.NET Membership Administration Tool: MyWSAT v3.5.1: MyWSAT 3.5.1 update notes - April 23rd 2010: 1.) Fixed standard profile problem in web.config as well as on all the forms the profiles are used. The...
- NetPE: NetPE v1.0: Initial release of NetPE. Features: view and edit Portable Executables; hex editor; full metadata support; disassemble CIL/x86 code.
- NSIS Autorun: NSIS Autorun 0.1.1: This release includes source code, application binary, and example materials.
- Over Store: OverStore Release 1.17.0.0: Version 1.17.0.0 - AdoNetStorage: AdoNetStorage refactored. Detailed log messages added on each event. Database resource management moved to A...
- RoTwee: RoTwee (11.0.0.3): Fix for "17385 Remove saved degree val from code".
- Silverlight 4.0 Popup Menu: Popup Menu for Silverlight 4.0: This is the first project release. Added drop shadow and fade-in effects. Left click and hover events are also supported.
- Silverlight 4.0 Popup Menu: Popup menu for Silverlight 4.0 Version 0.8: Placed the invoker for the 'Opening' event handler within a dispatcher. This ensures that the visual tree is created before it is accessed.
- sNPCedit: sNPCedit v0.9: Some changes in GUI and behaviour. Added: search function.
- Sonic.Net: Sonic.Net v1.0.0: Targets .Net 3.5 SP1 and Silverlight 3. Includes: sonic.UnityConfiguration.Silverlight, sonic.UnityConfiguration.Wpf.
- Survey - web survey & form engine: Survey™ 1.2.1: Survey™ 1.2.1 release (based on the original Nsurvey 1.9.1 source files). New features & fixes: 1. Final bits of code rewritten to become 100% ...
- SysPad: 4.10.10.2: Release notes: a folder management and scratchpad utility; especially useful in a business network setting that utilizes numerous, commonly used fol...
- Third Hand - Use your voice to control Visual Studio: Update for VS2010: Added support for VS2010, and minor improvements when using the grid.
- TiledLib: TiledLib 1.2: Added an overload of Map.Draw that specifies the area to draw. Added a demo of a camera control for a map.
- XMLPreprocess: 2.0.12: What's new in this release: this release contains a number of enhancements based on feedback given through the discussion forums and issue tracker....
- Xrns2XMod: Xrns2Xmod 0.8: Added real preliminary sound conversion for XM. Some code optimizations. Note: some samples might not be converted due to a FLAC parsing error ...
- Yahoo OpenID YQL Demo: Yahoo YQL .Net Demo: This is a demo program for using YQL with C#.Net.
- Yahoo OpenID YQL Demo: YQL Demo: This is a demo program using Yahoo YQL with C#.Net.
- Yahoo OpenID YQL Demo: YQLDemo using .Net: This is a demo program for using Yahoo YQL with C#.Net.

Most Popular Projects

- Rawr
- WBFS Manager
- AJAX Control Toolkit
- Silverlight Toolkit
- Microsoft SQL Server Product Samples: Database
- patterns & practices – Enterprise Library
- Windows Presentation Foundation (WPF)
- ASP.NET
- Microsoft SQL Server Community & Samples
- PHPExcel

Most Active Projects

- patterns & practices – Enterprise Library
- Rawr
- BlogEngine.NET
- Particle Plot Pivot
- NB_Store - Free DotNetNuke Ecommerce Catalog Module
- DotNetZip Library
- GMap.NET - Great Maps for Windows Forms & Presentation
- turing machine simulator
- Ionics Isapi Rewrite Filter
- patterns & practices: Composite WPF and Silverlight


  • Continuous Integration for SQL Server Part II – Integration Testing

    - by Ben Rees
    My previous post, on setting up Continuous Integration for SQL Server databases using GitHub, Bamboo and Red Gate's tools, covered the first two parts of a simple Database Continuous Delivery process:

1. Putting your database into a source control system, and,
2. Running a continuous integration process, each time changes are checked in.

However there is, of course, a lot more to Continuous Delivery than that. Specifically, in addition to the above:

3. Putting some actual integration tests into the CI process (otherwise, they don't really do much, do they!?),
4. Deploying the database changes with a managed, automated approach,
5. Monitoring what you've just put live, to make sure you haven't broken anything.

This post will detail how to set up a very simple pipeline for implementing the first of these (continuous integration testing). NB: A lot of the setup in this post is built on top of the configuration from before, so it might be difficult to implement this post without running through part I first. There'll then be a third post on automated database deployment, followed by a final post dealing with the last item: monitoring changes on the live system.

In the previous post, I used a mixture of Red Gate products and other 3rd party software, GitHub and Atlassian Bamboo specifically. This was partly because I believe most people work in a heterogeneous environment, using software from different vendors to suit their purposes, and I wanted to show how this could work for this process. For example, you could easily substitute Atlassian's BitBucket or Stash for GitHub, depending on your needs, or use an alternative CI server such as TeamCity, TFS or Jenkins. However, in this post, I'll be mostly using Red Gate products only (other than tSQLt). I would do this, firstly, because I work for Red Gate. However, I also think that in the area of Database Delivery processes, nobody else has the offerings to implement this process fully – so I didn't have any choice!

Background on Continuous Delivery

For me, a great source of information on what makes a proper Continuous Delivery process is the Jez Humble and David Farley classic: Continuous Delivery – Reliable Software Releases through Build, Test, and Deployment Automation. This book is not, of course, primarily about databases, and the process I outline here and in the previous article is a gross simplification of what Jez and David describe (not least because it's that much harder for databases!). However, a lot of the principles that they describe can be equally applied to database development and, I would argue, should be. As I say, however, what I describe here is a very simple version of what would be required for a full production process. A couple of useful resources on handling some of these complexities can be found in the following two references:

- Refactoring Databases – Evolutionary Database Design, by Scott J Ambler and Pramod J. Sadalage
- Versioning Databases – Branching and Merging, by Scott Allen

In particular, I don't deal at all with the issues of multiple branches and merging of those branches, an issue made particularly acute by the use of GitHub. The other point worth making is that, in the words of Martin Fowler:

Continuous Delivery is about keeping your application in a state where it is always able to deploy into production.

I.e. we are not talking about continuously delivering updates to the production database every time someone checks in an amendment to a stored procedure. That is possible (and what Martin calls Continuous Deployment). However, again, that's more than I describe in this article. And I doubt I need to remind DBAs or Developers to Proceed with Caution!

Integration Testing

Back to something practical. The next stage, building on our set-up from the previous article, is to add some integration tests to the process. As I say, the CI process, though interesting, isn't enormously useful without some sort of test process running. For this we'll use the tSQLt framework, an open source framework designed specifically for running SQL Server tests. tSQLt is part of Red Gate's SQL Test, found at http://www.red-gate.com/products/sql-development/sql-test/, or it can be downloaded separately from www.tsqlt.org, though I'll provide a step-by-step guide below for setting this up.

Getting tSQLt set up via SQL Test

1. Click on the link http://www.red-gate.com/products/sql-development/sql-test/ and click on the blue Download button to download the Red Gate SQL Test product, if not already installed.
2. Follow the install process for SQL Test to install the SQL Server Management Studio (SSMS) plugin on to your machine, if not already installed.
3. Open SSMS. You should now see SQL Test under the Tools menu. Clicking this link will give you the basic SQL Test dialogue.
4. As yet, though we've installed the SQL Test product, we haven't installed the tSQLt test framework on any particular database. To do this, we need to add our RedGateApp database using this dialogue, by clicking on the "+ Add Database to SQL Test…" link, selecting the RedGateApp database and clicking the Add Database link.
5. In the next screen, SQL Test describes what will be installed on the database for the tSQLt framework. Also in this dialogue, uncheck the "Add SQL Cop tests" option. SQL Cop is a great set of pre-defined tests that work within the tSQLt framework to check the general health of your SQL Server database. However, we won't be using them in this particular simple example.
6. Once you've clicked on the OK button, the changes described in the dialogue will be made to your database.

We've now installed the framework. However, we haven't actually created any tests, so this will be the next step. But, before we proceed, we've made an update to our database, so we should again check this in to source control, adding comments as required. It is also worth a quick check that your build still runs with the new additions (and a quick check of the RedGateAppCI database shows that the changes have been made).

Creating and Testing a Unit Test

There are, of course, a lot of very interesting unit tests that you could and should set up for a database. The great thing about the tSQLt framework is that you can write these in SQL. The example I'm going to use here is pretty Mickey Mouse: our database table is going to include some email addresses as reference data, and I want to check whether these are all in a correct email format. Nothing clever, but it illustrates the process and hopefully shows the method by which more interesting tests could be set up.

Adding Reference Data to our Database

To start, I want to add some reference data to my database, and have this source controlled (as well as the schema). First of all I need to add some data to my solitary table; this can be done a number of ways, but I'll do it in SSMS for simplicity. I then add some reference data to my table. Currently this reference data just exists in the database. For proper integration testing, it needs to form part of the source-controlled version of the database, and so needs to be added to the Git repository. This can be done via SQL Source Control, though first a primary key needs to be added to the table. Right-click the table, select Design, then right-click on the first "id" row and click on "Set Primary Key". NB: once this change is made, click Save to save the change to the table. Then, to source control this reference data, right-click on the table (dbo.Email) and select the option to link static data. In the next screen, link the data in the Email table by selecting it from the list and clicking "save and close".

We should at this point re-commit the changes (both the addition of the primary key, and the data) to the Git repo. NB: From here on, I won't show screenshots for the GitHub side of things, as it's the same each time: whenever a change is made in SQL Source Control and committed to your local folder, you then need to sync this in the GitHub Windows client (as this is where the build server, Bamboo, is taking it from).

An interesting point to note here: when these changes are committed in SQL Source Control (right-click the database and select "Commit Changes to Source Control.."), the display gives a warning about possibly needing a migration script for the "Add Primary Key" step of the changes. This isn't actually necessary in this case, but this mechanism would allow you to create override scripts to replace the default change scripts created by the SQL Compare engine (which runs underneath SQL Source Control). Ignoring this message (!), we add a comment and commit the changes to Git. I then sync these, run a build (or the build gets run automatically), and check that the data is being deployed over to the target RedGateAppCI database.

Creating and Running the Test

As I mentioned, the test I'm going to use here is a very simple one: are the email addresses in my reference table valid? This isn't, of course, a full test of email validation (I expect the email addresses I've chosen here aren't really those of the Fab Four), but just a very basic check of the format used. I've taken the relevant SQL from this Stack Overflow article.

In SSMS select "SQL Test" from the Tools menu, then click on + New Test. In the next screen, give your new test a name, and also enter a name in the Test Class box (test classes are schemas that help you keep things organised). Also check that the database in which the test is going to be created is correct – RedGateApp in this example. Click "Create Test". After closing a couple of subsequent dialogues, you'll see a dummy script for the test that needs filling in.

We now need to define the SQL for our test. As mentioned before, tSQLt allows you to write your unit tests in T-SQL, and the code I'm going to use here is as below. This needs to be copied and pasted into the query window, to replace the default given by tSQLt:

-- Basic email check test
ALTER PROCEDURE [MyChecks].[test Check Email Addresses]
AS
BEGIN
    SET NOCOUNT ON;

    DECLARE @Output VARCHAR(MAX);
    SET @Output = '';

    SELECT @Output = @Output + Email + CHAR(13) + CHAR(10)
    FROM dbo.Email
    WHERE Email NOT LIKE '%_@__%.__%';

    IF @Output > ''
    BEGIN
        SET @Output = CHAR(13) + CHAR(10) + @Output;
        EXEC tSQLt.Fail @Output;
    END
END;

Once this script is entered, hit Execute to add the stored procedure to the database. Before committing the test to source control, it's worth just checking that it works! For a positive test, click on "SQL Test" from the Tools menu, then click Run Tests. You should see a green tick to indicate success! But of course, what we also need to do is test that this is actually doing something, by showing a failed test. Edit one of the email addresses in your table to an incorrect format, then re-run the same SQL Test as before. Great, we now know that our test is really doing something! You'll also see a useful error message at the bottom of SSMS (leave the email address as invalid for now, for the next steps). The next stage is to check this new test in to source control again, by right-clicking on the database and checking in the changes with a commit message (and not forgetting to sync in the GitHub client).

Checking that the Tests are Running as Integration Tests

After the changes above are made, and after a build has run on Bamboo (manual or automatic), looking at the stored procedures for the RedGateAppCI database shows that the SPROC for the new test has been moved over to that database. However, this is not exactly what we were after. We didn't want to just copy objects from one database to another, but actually run the tests as part of the build/integration test process. I.e. we're continuously checking any changes we make (in this case, to the reference data emails), to ensure we're not breaking a test that we've set up. The behaviour we want to see is that, if we check in static data that is incorrect (as we did a moment ago) and we have the tSQLt test set up, then our build in Bamboo should fail. However, re-running the build shows, sadly, a successful build!

To make sure the tSQLt tests are run as part of the integration test, we need to amend a switch in the Red Gate CI config file. First, navigate to the file sqlCI.targets in your working folder. Edit this document, make the following change, save the document, then commit and sync this change in the GitHub client:

<!-- tSQLt tests -->
<!-- Optional -->
<!-- To run tSQLt tests in source control for the database, enter true. -->
<enableTsqlt>true</enableTsqlt>

Now, if we re-run the build in Bamboo (NB: I've moved to a new server here, hence the different address and build number): superb, a broken build!! The error message isn't great here, so to get more detailed info, click on the full build log link on this page (below the fold). The interesting part of the log is towards the bottom. Pulling out this part:

21-Jun-2013 11:35:19   Build FAILED.
21-Jun-2013 11:35:19
21-Jun-2013 11:35:19   "C:\Users\Administrator\bamboo-home\xml-data\build-dir\RGA-RGP-JOB1\sqlCI.proj" (default target) (1) ->
21-Jun-2013 11:35:19   (sqlCI target) ->
21-Jun-2013 11:35:19   EXEC : sqlCI error occurred: RedGate.Deploy.SqlServerDbPackage.Shared.Exceptions.InvalidSqlException: Test Case Summary: 1 test case(s) executed, 0 succeeded, 1 failed, 0 errored. [C:\Users\Administrator\bamboo-home\xml-data\build-dir\RGA-RGP-JOB1\sqlCI.proj]
21-Jun-2013 11:35:19   EXEC : sqlCI error occurred: [MyChecks].[test Check Email Addresses] failed: [C:\Users\Administrator\bamboo-home\xml-data\build-dir\RGA-RGP-JOB1\sqlCI.proj]
21-Jun-2013 11:35:19   EXEC : sqlCI error occurred: ringo.starr@beatles [C:\Users\Administrator\bamboo-home\xml-data\build-dir\RGA-RGP-JOB1\sqlCI.proj]
21-Jun-2013 11:35:19   EXEC : sqlCI error occurred: [C:\Users\Administrator\bamboo-home\xml-data\build-dir\RGA-RGP-JOB1\sqlCI.proj]
21-Jun-2013 11:35:19   EXEC : sqlCI error occurred: +----------------------+ [C:\Users\Administrator\bamboo-home\xml-data\build-dir\RGA-RGP-JOB1\sqlCI.proj]
21-Jun-2013 11:35:19   EXEC : sqlCI error occurred: |Test Execution Summary| [C:\Users\Administrator\bamboo-home\xml-data\build-dir\RGA-RGP-JOB1\sqlCI.proj]

As a final check, we should make sure that, if we now fix this error, the build succeeds. So in SSMS, I'm going to correct the invalid email address, then check this change in to SQL Source Control (with a comment), commit to GitHub, and re-run the build. This should have fixed the build: it worked!

Summary

This has been a very quick run through the implementation of CI for databases, including tSQLt tests to check whether your database updates are working. The next post in this series will focus on automated deployment: we've tested our database changes; how can we now deploy these to target sites?
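
The Bamboo plugin handles the test run above, but if you ever want to kick off the same tSQLt tests from a plain .NET harness (say, inside an NUnit or MSTest fixture), a minimal sketch follows. The connection string is an assumption to adjust for your environment; tSQLt.RunAll is the framework's own procedure for executing every registered test class:

using System;
using System.Data.SqlClient;

class TsqltRunner
{
    static void Main()
    {
        // Assumed connection details - point this at your CI database.
        const string connectionString =
            @"Server=localhost;Database=RedGateAppCI;Integrated Security=true";

        using (var connection = new SqlConnection(connectionString))
        using (var command = new SqlCommand("EXEC tSQLt.RunAll;", connection))
        {
            connection.Open();
            try
            {
                // tSQLt.RunAll raises an error if any test fails, which
                // surfaces here as a SqlException.
                command.ExecuteNonQuery();
                Console.WriteLine("All tSQLt tests passed.");
            }
            catch (SqlException ex)
            {
                Console.WriteLine("tSQLt reported failures: " + ex.Message);
                Environment.ExitCode = 1;
            }
        }
    }
}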


  • Tips on Migrating from AquaLogic .NET Accelerator to WebCenter WSRP Producer for .NET

    - by user647124
    This year I embarked on a journey to migrate a group of ASP.NET web applications developed to integrate with WebLogic Portal 9.2 via the AquaLogic® Interaction .NET Application Accelerator 1.0 to instead use the Oracle WebCenter WSRP Producer for .NET and integrated with WebLogic Portal 10.3.4. It has been a very winding path and this blog entry is intended to share both the lessons learned and relevant approaches that led to those learnings. Like most journeys of discovery, it was not a direct path, and there are notes to let you know when it is practical to skip a section if you are in a hurry to get from here to there. For the Curious From the perspective of necessity, this section would be better at the end. If it were there, though, it would probably be read by far fewer people, including those that are actually interested in these types of sections. Those in a hurry may skip past and be none the worst for it in dealing with the hands-on bits of performing a migration from .NET Accelerator to WSRP Producer. For others who want to talk about why they did what they did after they did it, or just want to know for themselves, enjoy. A Brief (and edited) History of the WSRP for .NET Technologies (as Relevant to the this Post) Note: This section is for those who are curious about why the migration path is not as simple as many other Oracle technologies. You can skip this section in its entirety and still be just as competent in performing a migration as if you had read it. The currently deployed architecture that was to be migrated and upgraded achieved initial integration between .NET and J2EE over the WSRP protocol through the use of The AquaLogic Interaction .NET Application Accelerator. The .NET Accelerator allowed the applications that were written in ASP.NET and deployed on a Microsoft Internet Information Server (IIS) to interact with a WebLogic Portal application deployed on a WebLogic (J2EE application) Server (both version 9.2, the state of the art at the time of its creation). At the time this architectural decision for the application was made, both the AquaLogic and WebLogic brands were owned by BEA Systems. The AquaLogic brand included products acquired by BEA through the acquisition of Plumtree, whose flagship product was a portal platform available in both J2EE and .NET versions. As part of this dual technology support an adaptor was created to facilitate the use of WSRP as a communication protocol where customers wished to integrate components from both versions of the Plumtree portal. The adapter evolved over several product generations to include a broad array of both standard and proprietary WSRP integration capabilities. Later, BEA Systems was acquired by Oracle. Over the course of several years Oracle has acquired a large number of portal applications and has taken the strategic direction to migrate users of these myriad (and formerly competitive) products to the Oracle WebCenter technology stack. As part of Oracle’s strategic technology roadmap, older portal products are being schedule for end of life, including the portal products that were part of the BEA acquisition. The .NET Accelerator has been modified over a very long period of time with features driven by users of that product and developed under three different vendors (each a direct competitor in the same solution space prior to merger). 
The Oracle WebCenter WSRP Producer for .NET was introduced much more recently, with the key objective of specifically addressing the needs of WebCenter customers developing solutions accessible through both J2EE and .NET platforms utilizing the WSRP specifications. The Oracle Product Development Team also provides these insights on the drivers for developing the WSRP Producer:

*****************************************
Support for ASP.NET AJAX. Controls using the ASP.NET AJAX script manager do not function properly in the Application Accelerator for .NET.
Support two-way SSL in WLP. This was not possible with the proxy/bridge set up in the existing Application Accelerator for .NET.
Allow developers to code portlets (Web Parts) using the .NET Framework rather than a proprietary framework. Developers had to use the Application Accelerator for .NET plug-ins to Visual Studio to manage preferences and profile data. This is now replaced with the .NET Framework Personalization (for preferences) and Profile providers.
The WSRP Producer for .NET was created as a new way of developing .NET portlets. It was never designed to be an upgrade path for the Application Accelerator for .NET. .NET developers would create new .NET portlets with the WSRP Producer for .NET and leave any existing .NET portlets running in the Application Accelerator for .NET.
*****************************************

The advantage of creating a new solution for WSRP is a product that is far easier for Oracle to maintain and support, which in turn improves quality, reliability, and maintainability for their customers. No changes to J2EE applications consuming the WSRP portlets previously rendered by the .NET Accelerator are required to migrate from the AquaLogic WSRP solution. For some customers using the .NET Accelerator, the challenge is adapting their current .NET applications to work with the WSRP Producer (or any other WSRP adapter, as they are proprietary by nature). Part of this adaptation is the need to deploy the .NET applications as children of the WSRP Producer web application, which acts as the root.

Differences between .NET Accelerator and WSRP Producer
Note: This section is for those who are curious about why the migration is not as pluggable as something such as changing security providers in WebLogic Server. You can skip this section in its entirety and still be just as competent in performing a migration as if you had read it.

The basic terminology used to describe the participating applications in a WSRP environment is the same when applied to either the .NET Accelerator or the WSRP Producer: Producer and Consumer. In both cases the .NET application serves as what is referred to in a WSRP environment as the Producer. The difference lies in how the two adapters create the WSRP translation of the .NET application. The .NET Accelerator, as the name implies, is meant to serve as a quick way of adding WSRP capability to a .NET application. As such, at a high level, the .NET Accelerator behaves as a proxy for requests between the .NET application and the WSRP Consumer. A WSRP request is sent from the Consumer to the .NET Accelerator; the .NET Accelerator transforms this request into an ASP.NET request, receives the response, and then transforms the response into a WSRP response. The .NET Accelerator is deployed as a stand-alone application on IIS. The WSRP Producer is deployed as a parent application on IIS, and all ASP.NET modules that will be made available over WSRP are deployed as children of the WSRP Producer application.
In this manner, the WSRP Producer acts more as a request filter than a proxy in the WSRP transactions between Producer and Consumer.

Highly Recommended: Enabling Logging
Note: You can skip this section now, but you will most likely want to come back to it later, so why not just read it now?

Logging is very helpful in tracking down the causes of any anomalies during testing of migrated portlets. To enable WSRP Producer logging, update the Application_Start method in the Global.asax.cs for your .NET application by adding log4net.Config.XmlConfigurator.Configure();

IIS logs will usually (in a standard configuration) be in a subfolder under C:\WINDOWS\system32\LogFiles\W3SVC. WSRP Producer logs will be found at C:\Oracle\Middleware\WSRPProducerForDotNet\wsrpdefault\Logs\WSRPProducer.log. InputTrace.webinfo and OutputTrace.webinfo are located under C:\Oracle\Middleware\WSRPProducerForDotNet\wsrpdefault and can be useful in debugging issues related to markup transformations.
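Since enabling the logging hinges on that one-line change, here is a minimal sketch of what the resulting Global.asax.cs might look like, assuming a standard log4net configuration block already exists in Web.config (the namespace is hypothetical):

using System;
using System.Web;

namespace LegacyWebSite // hypothetical namespace, for illustration only
{
    public class Global : HttpApplication
    {
        protected void Application_Start(object sender, EventArgs e)
        {
            // Reads the log4net configuration (appenders, levels) from Web.config
            // and activates logging, including the WSRP Producer's log output.
            log4net.Config.XmlConfigurator.Configure();
        }
    }
}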
Things You Must Do

Merge Web.Config
Note: If you have been skipping all the sections that you can, now is the time to stop and pay attention.
Because the existing .NET application will become a sub-application of the WSRP Producer, you will want to merge the required settings from the existing Web.Config into the one in the WSRP Producer.

Use the WSRP Producer Master Page
The Master Page installed for the WSRP Producer provides common hidden form fields and JavaScript to facilitate portlet instance management and display configuration when the child page is being rendered over WSRP. You add the Master Page by including MasterPageFile="~/portlets/Resources/MasterPages/WSRP.Master" in the <%@ Page %> directive. You then replace:

<!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 4.0 Transitional//EN" >
<HTML>
<HEAD>

with

<asp:Content ID="ContentHead1" ContentPlaceHolderID="wsrphead" Runat="Server">

and

</HEAD>
<body>
<form id="theForm" method="post" runat="server">

with

</asp:Content>
<asp:Content ID="ContentBody1" ContentPlaceHolderID="Main" Runat="Server">

and finally

</form>
</body>
</HTML>

with

</asp:Content>

In the event you already use Master Pages, adapt your existing Master Pages to be sub-masters. See Nested ASP.NET Master Pages for a detailed reference of how to do this.

It Happened to Me, It Might Happen to You…Or Not

Watch for Use of Session or Request in OnInit
In the event the .NET application being modified has pages developed to assume the user has been authenticated in an earlier page request, there may be direct or indirect references in the OnInit method to Request or Session objects that may not have been created yet. This will vary from application to application, so the recommended approach is to test first. If there is an issue with a page running as a WSRP portlet, check for potential references in the OnInit method (including references by methods called within OnInit) to Session or Request objects. If there are, the simplest solution is to create a new method and then call that method once the necessary objects are fully available. I find doing this at the start of the Page_Load method to be the simplest solution. (A consolidated sketch of this pattern appears at the end of this article.)

Case Sensitivity
HTML markup produced by .NET applications is often inconsistent in its use of case, while the Java-based rewriter rules are case sensitive. This means it is possible to have many variations of SRC= and src= or .JPG and .jpg. The preferred solution is to make these markup instances all lower case in your .NET application. This will allow the default Rewriter rules in wsrp-producer.xml to work as is. If this is not practical, then make duplicates of any rules where an issue is occurring due to upper or mixed case usage in the .NET application markup, and match the case in use with the duplicate rule. For example:

<RewriterRule>
<LookFor>(href=\"([^\"]+)</LookFor>
<ChangeToAbsolute>true</ChangeToAbsolute>
<ApplyTo>.axd,.css</ApplyTo>
<MakeResource>true</MakeResource>
</RewriterRule>

may need to be duplicated as:

<RewriterRule>
<LookFor>(HREF=\"([^\"]+)</LookFor>
<ChangeToAbsolute>true</ChangeToAbsolute>
<ApplyTo>.axd,.css</ApplyTo>
<MakeResource>true</MakeResource>
</RewriterRule>

While it is possible to write a regular expression that will handle mixed case usage, it would be long and strenuous to test and maintain, so the recommendation is to use duplicate rules.

Is it Still Relative?
Some .NET applications base relative paths on a fixed root location. With the introduction of the WSRP Producer, the root has moved up one level. References to ~/ will need to be updated to ~/portlets, and many ../ paths will need another ../ in front.

I Can See You But I Can't Find You
This issue was first discovered while debugging modules with code that referenced the form on a page from the code-behind by name and/or ID. The initial error presented itself as a run-time error that was difficult to interpret over WSRP, but seemed clear when run as straight ASP.NET, as it indicated that the object with the form name did not exist. Since the form name was no longer valid after implementing the WSRP Master Page, the likely fix seemed to be simply updating the references in the code. However, as the WSRP Master Page is external to the code, a compile-time error resulted:

Error 155  The name 'form1' does not exist in the current context  C:\Oracle\Middleware\WSRPProducerForDotNet\wsrpdefault\portlets\legacywebsite\module\Screens\Reporting.aspx.cs  51  52  legacywebsite.module

Much hair-pulling research later, it was discovered that the use of the FindControl method was causing the issue. FindControl doesn't work quite as expected once a Master Page has been introduced, as controls become embedded in other controls, requiring a recursion to find them that is not part of the FindControl method. In code where the page form is referenced by name, there are two steps to the solution. First, the form needs to be referenced in code generically with Page.Form. For example, this:

ToggleControl ctrl = new ToggleControl(frmManualEntry, FunctionLibrary.ParseArrayLst(userObj.Roles));

becomes this:

ToggleControl ctrl = new ToggleControl(Page.Form, FunctionLibrary.ParseArrayLst(userObj.Roles));

Generally, the form ID is referenced in most ASP.NET applications as a path to a control on the form. To reach the control once a Master Page has been added requires an additional method to recurse through the Controls collections within the form and find the control ID. The following method (found at Rick Strahl's Web Log) corrects this very nicely:

public static Control FindControlRecursive(Control Root, string Id)
{
    if (Root.ID == Id)
        return Root;
    foreach (Control Ctl in Root.Controls)
    {
        Control FoundCtl = FindControlRecursive(Ctl, Id);
        if (FoundCtl != null)
            return FoundCtl;
    }
    return null;
}

Where the form name is not referenced, simply using the FindControlRecursive method in place of FindControl will be all that is necessary.
Following the second part of the example referenced earlier, the method called with Page.Form changes its value extraction code block from this:

Label lblErrMsg = (Label)frmRef.FindControl("lblBRMsg");

to this:

Label lblErrMsg = (Label)FunctionLibrary.FindControlRecursive(frmRef, "lblBRMsg");

The Master That Won't Step Aside
In most migrations it is preferable to make as few changes as possible. In one case I ran across an existing Master Page that would not function as a sub-Master Page. While it would probably have been educational to trace down why, the expedient process of updating it to take the place of the WSRP Master Page is the route I took. The changes are highlighted below:

…
<asp:ContentPlaceHolder ID="wsrphead" runat="server"></asp:ContentPlaceHolder>
</head>
<body leftMargin="0" topMargin="0">
<form id="TheForm" runat="server">
<input type="hidden" name="key" id="key" value="" />
<input type="hidden" name="formactionurl" id="formactionurl" value="" />
<input type="hidden" name="handle" id="handle" value="" />
<asp:ScriptManager ID="ScriptManager1" runat="server" EnablePartialRendering="true" >
</asp:ScriptManager>

This approach did not work for all existing Master Pages, but fortunately all of the other existing Master Pages I have run across worked fine as sub-Masters to the WSRP Master Page.

Moving On
In enterprise portals, even after you get everything working, the work is not finished. Next you need to get it where everyone will work with it.

Migration Planning
Provided that the server where IIS is running is adequately sized, it is possible to run both the .NET Accelerator and the WSRP Producer on the same server during the upgrade process. The upgrade can be performed incrementally, i.e., one portlet at a time, if server administration processes support it. Those processes would include the ability to manage a second producer in the consuming portal and to change over individual portlet instances from one provider to the other. If processes or requirements demand that all portlets be cut over at the same time, it needs to be determined whether this cutover should include a new producer, updating all of the portlets in the consumer, or whether the WSRP Producer portlet configuration must maintain the naming conventions used by the .NET Accelerator and simply change the WSRP endpoint configured in the consumer. In some enterprises it may even be necessary to maintain the same WSDL endpoint, at which point the IIS configuration will be where the updates occur. The downside to such a requirement is that it makes rolling back very difficult, should the need arise.

Location, Location, Location
Not everyone wants the web application to have the descriptively obvious wsrpdefault location, or needs to create a second WSRP site on the same server. The instructions below are from the product team and, while targeted towards making a second site, will also work for creating a site with a different name and then removing the old site. You can also change just the name in IIS.

Manually Creating a WSRP Producer Site
(NOTE: all executables used are the same ones used by the installer, and "wsrpdev" will be the name of the new instance.)

1. Copy C:\Oracle\Middleware\WSRPProducerForDotNet\wsrpdefault to C:\Oracle\Middleware\WSRPProducerForDotNet\wsrpdev.
2. Bring up a command window as an administrator.
3. Run C:\Oracle\Middleware\WSRPProducerForDotNet\uninstall_resources\IISAppAccelSiteCreator.exe install WSRPProducers wsrpdev "C:\Oracle\Middleware\WSRPProducerForDotNet\wsrpdev" 8678 2.0.50727
4. Run C:\Oracle\Middleware\WSRPProducerForDotNet\uninstall_resources\PermManage.exe add FileSystem C:\Oracle\Middleware\WSRPProducerForDotNet\wsrpdev "NETWORK SERVICE" 3 1
5. Run C:\Oracle\Middleware\WSRPProducerForDotNet\uninstall_resources\PermManage.exe add FileSystem C:\Oracle\Middleware\WSRPProducerForDotNet\wsrpdev EVERYONE 1 1
6. Open up C:\Oracle\Middleware\WSRPProducerForDotNet\wsdl\1.0\WSRPService.wsdl and replace wsrpdefault with wsrpdev.
7. Open up C:\Oracle\Middleware\WSRPProducerForDotNet\wsdl\2.0\WSRPService.wsdl and replace wsrpdefault with wsrpdev.

Tests:
1. Bring up a browser on the host itself and go to http://localhost:8678/wsrpdev/wsdl/1.0/WSRPService.wsdl and make sure that the URLs in the XML returned include the wsrpdev changes you made in step 6.
2. Bring up a browser on the host itself and see if the default sample comes up: http://localhost:8678/wsrpdev/portlets/ASPNET_AJAX_sample/default.aspx
3. Register the producer in WLP and test the portlet.

Changing the Port Used by WSRP Producer
The pre-configured port for the WSRP Producer is 8678. You can change this port by updating both the IIS configuration and C:\Oracle\Middleware\WSRPProducerForDotNet\[WSRP_APP_NAME]\wsdl\1.0\WSRPService.wsdl.

Do You Need to Migrate?
Oracle Premier Support ended in November of 2010 for AquaLogic Interaction .NET Application Accelerator 1.x, and Extended Support ends in November 2012 (see http://www.oracle.com/us/support/lifetime-support/lifetime-support-software-342730.html for other related dates). This means that integration with products released after November of 2010 is not supported. If having such support is the policy within your enterprise, you do indeed need to migrate. If changes in your enterprise cause your current solution with the .NET Accelerator to no longer function properly, you may need to migrate. Migration is a choice, and if the goals of your enterprise are to take full advantage of newer technologies, then migration is certainly one activity you should be planning for.
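Before closing, here is the consolidated sketch promised earlier. It ties together two of the fixes described above: deferring Session/Request access out of OnInit into Page_Load, and locating controls through the recursive helper once the WSRP Master Page is in place. The page class, session key, and role logic are hypothetical; FindControlRecursive is the method quoted earlier.

using System;
using System.Web.UI;
using System.Web.UI.WebControls;

public static class FunctionLibrary
{
    // The recursive lookup shown earlier (via Rick Strahl's Web Log).
    public static Control FindControlRecursive(Control Root, string Id)
    {
        if (Root.ID == Id)
            return Root;
        foreach (Control Ctl in Root.Controls)
        {
            Control FoundCtl = FindControlRecursive(Ctl, Id);
            if (FoundCtl != null)
                return FoundCtl;
        }
        return null;
    }
}

public partial class Reporting : Page // hypothetical page, for illustration only
{
    protected override void OnInit(EventArgs e)
    {
        base.OnInit(e);
        // Deliberately empty: over WSRP, Session and Request may not exist yet.
    }

    // Wired automatically when AutoEventWireup="true".
    protected void Page_Load(object sender, EventArgs e)
    {
        // Session and Request are fully available at this point in the life cycle.
        string role = (string)(Session["Role"] ?? "guest"); // hypothetical session key

        // With the Master Page in place, controls are nested inside content
        // placeholders, so a plain FindControl on the form returns null; recurse instead.
        Label lblErrMsg = (Label)FunctionLibrary.FindControlRecursive(Page.Form, "lblBRMsg");
        if (lblErrMsg != null)
            lblErrMsg.Visible = (role != "admin"); // hypothetical role check
    }
}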

    Read the article

  • Laissez les bon temps rouler! (Microsoft BI Conference 2010)

    - by smisner
    "Laissez les bons temps rouler" is a Cajun phrase that I heard frequently when I lived in New Orleans in the mid-1990s. It means "Let the good times roll!" and encapsulates a feeling of happy expectation. As I met with many of my peers and new acquaintances at the Microsoft BI Conference last week, this phrase kept running through my mind as people spoke about their plans in their respective businesses, the benefits and opportunities that the recent releases in the BI stack are providing, and their expectations about the future of the BI stack. Notwithstanding some jabs here and there to point out the platform is neither perfect now nor will be anytime soon (along with admissions that the competitors are also not perfect), and notwithstanding several missteps by the event organizers (which I don't care to enumerate), the overarching mood at the conference was positive. It was a refreshing change from the doom and gloom hovering over several conferences that I attended in 2009. Although many people expect economic hardships to continue over the coming year or so, everyone I know in the BI field is busier than ever and expects to stay busy for quite a while. Self-Service BI Self-service was definitely a theme of the BI conference. In the keynote, Ted Kummert opened with a look back to a fairy tale vision of self-service BI that he told in 2008. At that time, the fairy tale future was a time when "every end user was able to use BI technologies within their job in order to move forward more effectively" and transitioned to the present time in which SQL Server 2008 R2, Office 2010, and SharePoint 2010 are available to deliver managed self-service BI. This set of technologies is presumably poised to address the needs of the 80% of users that Kummert said do not use BI today. He proceeded to outline a series of activities that users ought to be able to do themselves--from simple changes to a report like formatting or an addtional data visualization to integration of an additional data source. The keynote then continued with a series of demonstrations of both current and future technology in support of self-service BI. Some highlights that interested me: PowerPivot, of course, is the flagship product for self-service BI in the Microsoft BI stack. In the TechEd keynote, which was open to the BI conference attendees, Amir Netz (twitter) impressed the audience by demonstrating interactivity with a workbook containing 100 million rows. He upped the ante at the BI keynote with his demonstration of a future-state PowerPivot workbook containing over 2 billion records. It's important to note that this volume of data is being processed by a server engine, and not in the PowerPivot client engine. (Yes, I think it's impressive, but none of my clients are typically wrangling with 2 billion records at a time. Maybe they're thinking too small. This ability to work quickly with large data sets has greater implications for BI solutions than for self-service BI, in my opinion.) Amir also demonstrated KPIs for the future PowerPivot, which appeared to be easier to implement than in any other Microsoft product that supports KPIs, apart from simple KPIs in SharePoint. (My initial reaction is that we have one more place to build KPIs. Great. It's confusing enough. I haven't seen how well those KPIs integrate with other BI tools, which will be important for adoption.) One more PowerPivot feature that Amir showed was a graphical display of the lineage for calculations. 
(This is hugely practical, especially if you build up calculations incrementally. You can more easily follow the logic from calculation to calculation. Furthermore, if you need to make a change to one calculation, you can assess the impact on other calculations.) Another product demonstration will be available within the next 30 days--Pivot for Reporting Services. If you haven't seen this technology yet, check it out at www.getpivot.com. (It definitely has a wow factor, but I'm skeptical about its practicality. However, I'm looking forward to trying it out with data that I understand.) Michael Tejedor (twitter) demonstrated a feature that I think is really interesting and not emphasized nearly enough--overshadowed by PowerPivot, no doubt. That feature is the Microsoft Business Intelligence Indexing Connector, which enables search of the content of Excel workbooks and Reporting Services reports. (This capability existed in MOSS 2007, but was more cumbersome to implement. The search results in SharePoint 2010 are not only cooler, but more useful by describing whether the content is found in a table or a chart, for example.)

This may yet be the dawning of the age of self-service BI - a phrase I've heard repeated from time to time over the last decade - but I think BI professionals are likely to stay busy for a long while, and need not start looking for a new line of work. Kummert repeatedly referenced strategic BI solutions in contrast to self-service BI to emphasize that self-service BI is not a replacement for the services that BI professionals provide. After all, self-service BI does not appear magically on user desktops (or whatever device they want to use). A supporting infrastructure is necessary, and it grows in complexity in proportion to the need to simplify BI for users. It's one thing to hear the party line touted by Microsoft employees at the BI keynote, but it's another to hear from the people who are responsible for implementing and supporting it within an organization. Rob Collie (blog | twitter), Kasper de Jonge (blog | twitter), Vidas Matelis (site | twitter), and I were invited to join Andrew Brust (blog | twitter) as he led a Birds of a Feather session at TechEd entitled "PowerPivot: Is It the BI Deal-Changer for Developers and IT Pros?" I would single out the prevailing concern in this session as the issue of control. On one side of this issue were those who were concerned that they would lose control once PowerPivot is implemented. On the other side were those who believed that data should be freely accessible to users in PowerPivot, and even acknowledgment that users would get the data they want even if it meant they would have to manually enter it into a workbook to have it ready for analysis. For another viewpoint on how PowerPivot played out at the conference, see Rob Collie's observations.

Collaborative BI
I have been intrigued by the notion of collaborative BI for a very long time. Before I discovered BI, I was a Lotus Notes developer and later a manager of developers, working in a software company that enabled collaboration in the legal industry. Not only did I help create collaborative systems for our clients, I created a complete project management system from the ground up to collaboratively manage our custom development work. In that case, collaboration involved my team, my client contacts, and me. I was also able to produce my own BI from that system as well, but didn't know that's what I was doing at the time.
Only in recent years has SharePoint begun to catch up with the capabilities that I had with Lotus Notes more than a decade ago. Eventually, I had the opportunity at that job to formally investigate BI as another product offering for our software, and the rest - as they say - is history. I built my first data warehouse with Scott Cameron (who has also ventured into the authoring world by writing Analysis Services 2008 Step by Step and was at the BI Conference last week, where I got to reminisce with him for a bit) and that began a career that I never imagined at the time. Fast forward to 2010, and I'm still lauding the virtues of collaborative BI, if only the tools will catch up to my vision! Thus, I was anxious to see what Donald Farmer (blog | twitter) and Rita Sallam of Gartner had to say on the subject in their session "Collaborative Decision Making." As I suspected, the tools aren't quite there yet, but the vendors are moving in the right direction. One thing I liked about this session was a non-Microsoft perspective of the state of the industry with regard to collaborative BI. In addition, this session included a better demonstration of SharePoint collaborative BI capabilities than appeared in the BI keynote. Check out the video in the link to the session to see the demonstration. One of the use cases that was demonstrated was linking from information to a person, because, as Donald put it, "People don't trust data, they trust people."

The Microsoft BI Stack in General
A question I hear all the time from students when I'm teaching is how to know what tools to use when there is overlap between products in the BI stack. I've never taken the time to codify my thoughts on the subject, but saw that my friend Dan Bulos provided good insight on this topic from a variety of perspectives in his session, "So Many BI Tools, So Little Time." I thought one of his best points was that ideally you should be able to design in your tool of choice, and then deploy to your tool of choice. Unfortunately, the ideal is yet to become real across the platform. The closest we come is with the RDL in Reporting Services, which can be produced from two different tools (Report Builder or Business Intelligence Development Studio's Report Designer), manually, or by a third-party or custom application. I have touted the idea for years (and publicly said so about 5 years ago) that eventually more products would be RDL producers or consumers, but we aren't there yet. Maybe in another 5 years. Another interesting session that covered the BI stack against a backdrop of competitive products was delivered by Andrew Brust. Andrew did a marvelous job of consolidating a lot of information in a way that clearly communicated how various vendors' offerings compared to the Microsoft BI stack. He also made a particularly compelling argument about how the existence of an ecosystem around the Microsoft BI stack provided innovation and opportunities lacking for other vendors. Check out his presentation, "How Does the Microsoft BI Stack...Stack Up?"

Expo Hall
I had planned to spend more time in the Expo Hall to see who was doing new things with the BI stack, but didn't manage to get very far. Each time I set out on an exploratory mission, I got caught up in some fascinating conversations with one or more of my peers. I find interacting with people that I meet at conferences just as important as attending sessions to learn something new. There were a couple of items that really caught my eye, however, that I'll share here.
Pragmatic Works. Whether you develop SSIS packages, build SSAS cubes, or author SSRS reports (or all of the above), you really must take a look at BI Documenter. Brian Knight (twitter) walked me through the key features, and I must say I was impressed. Once you've seen what this product can do, you won't want to document your BI projects any other way. You can download a free single-user database edition, or choose from more feature-rich standard or professional editions.

Microsoft Press ebooks. I also stopped by the O'Reilly Media booth to meet some folks that one of my acquisitions editors at Microsoft Press recommended. In case you haven't heard, Microsoft Press has partnered with O'Reilly Media for distribution and publishing. Apart from my interest in learning more about O'Reilly Media as an author, an advertisement in their booth caught my eye, which I think is a really great move. When you buy Microsoft Press ebooks through the O'Reilly web site, you can receive them in any (or all) of the following formats where possible: PDF, epub, .mobi for Kindle, and .apk for Android. You also have lifetime DRM-free access to the ebooks. As someone who is an avid collector of books, I find myself running out of room for storage. In addition, I travel a lot, and it's hard to lug my reference library with me. Today's e-reader options make the move to digital books a more viable way to grow my library. Having a variety of formats means I am not limited to a single device, and lifetime access means I don't have to worry about keeping track of where I've stored my files. Because the e-books are DRM-free, I can copy and paste when I'm compiling notes, and I can print pages when necessary. That's a winning combination in my mind!

Overall, I was pleased with the BI conference. There were many more sessions that I couldn't attend, either because the room was full when I got there or there were multiple sessions running concurrently that I wanted to see. Fortunately, many of the sessions are accessible for viewing online at http://www.msteched.com/2010/NorthAmerica along with the TechEd sessions. You can spot the BI sessions by the yellow skyline on the title slide of the presentation as shown below.

    Read the article

  • Ancillary Objects: Separate Debug ELF Files For Solaris

    - by Ali Bahrami
We introduced a new ELF object type in Solaris 11 Update 1 called the Ancillary Object. This posting describes them, using material originally written during their development, the PSARC case, and the Solaris Linker and Libraries Manual. ELF objects contain allocable sections, which are mapped into memory at runtime, and non-allocable sections, which are present in the file for use by debuggers and observability tools, but which are not mapped or used at runtime. Typically, all of these sections exist within a single object file. Ancillary objects allow them to instead go into a separate file.

There are different reasons given for wanting such a feature. One can debate whether the added complexity is worth the benefit, and in most cases it is not. However, one important case stands out — customers with very large 32-bit objects who are not ready or able to make the transition to 64-bits. We have customers who build extremely large 32-bit objects. Historically, the debug sections in these objects have used the stabs format, which is limited, but relatively compact. In recent years, the industry has transitioned to the powerful but verbose DWARF standard. In some cases, the size of these debug sections is large enough to push the total object file size past the fundamental 4GB limit for 32-bit ELF object files. The best, and ultimately only, solution to overly large objects is to transition to 64-bits. However, consider environments where:

- Hundreds of users may be executing the code on large shared systems. (32-bit code uses less memory and bus bandwidth, and on SPARC runs just as fast as 64-bit code otherwise.)
- Complex, finely tuned code, where the original authors may no longer be available.
- Critical production code that was expensive to qualify and bring online, and which is otherwise serving its intended purpose without issue.

Users in these risk-averse and/or high-scale categories have good reasons to push 32-bit objects to the limit before moving on. Ancillary objects offer these users a longer runway.

Design
The design of ancillary objects is intended to be simple, both to help human understanding when examining elfdump output, and to lower the bar for debuggers such as dbx to support them. The primary and ancillary objects have the same set of section headers, with the same names, in the same order (i.e. each section has the same index in both files). A single section of type SHT_SUNW_ANCILLARY is added to both objects, containing information that allows a debugger to identify and validate both files relative to each other. Given one of these files, the ancillary section allows you to identify the other. Allocable sections go in the primary object, and non-allocable ones go into the ancillary object. A small set of non-allocable sections, notably the symbol table, are copied into both objects. As noted above, most sections are only written to one of the two objects, but both objects have the same section header array. The section header in the file that does not contain the section data is tagged with the SHF_SUNW_ABSENT section header flag to indicate its placeholder status. Compiler writers and others who produce objects can set the SHF_SUNW_PRIMARY section header flag to mark non-allocable sections that should go to the primary object rather than the ancillary. If you don't request an ancillary object, the Solaris ELF format is unchanged. Users who don't use ancillary objects do not pay for the feature.
This is important, because they exist to serve a small subset of our users, and must not complicate the common case. If you do request an ancillary object, the runtime behavior of the primary object will be the same as that of a normal object. There is no added runtime cost. The primary and ancillary object together represent a single logical object. This is facilitated by the use of a single set of section headers. One can easily imagine a tool that can merge a primary and ancillary object into a single file, or the reverse. (Note that although this is an interesting intellectual exercise, we don't actually supply such a tool because there's little practical benefit above and beyond using ld to create the files.) Among the benefits of this approach are:

- There is no need for per-file symbol tables to reflect the contents of each file. The same symbol table that would be produced for a standard object can be used.
- The section contents are identical in either case — there is no need to alter data to accommodate multiple files.
- It is very easy for a debugger to adapt to these new files, and the processing involved can be encapsulated in input/output routines. Most of the existing debugger implementation applies without modification.
- The limit of a 4GB 32-bit output object is now raised to 4GB of code, and 4GB of debug data. There is also the future possibility (not currently supported) of supporting multiple ancillary objects, each of which could contain up to 4GB of additional debug data. It must be noted, however, that the 32-bit DWARF debug format is itself inherently 32-bit limited, as it uses 32-bit offsets between debug sections, so the ability to employ multiple ancillary object files may not turn out to be useful.

Using Ancillary Objects (From the Solaris Linker and Libraries Guide)
By default, objects contain both allocable and non-allocable sections. Allocable sections are the sections that contain executable code and the data needed by that code at runtime. Non-allocable sections contain supplemental information that is not required to execute an object at runtime. These sections support the operation of debuggers and other observability tools. The non-allocable sections in an object are not loaded into memory at runtime by the operating system, and so, they have no impact on memory use or other aspects of runtime performance no matter their size. For convenience, both allocable and non-allocable sections are normally maintained in the same file. However, there are situations in which it can be useful to separate these sections:

- To reduce the size of objects in order to improve the speed at which they can be copied across wide area networks.
- To support fine-grained debugging of highly optimized code, which requires considerable debug data. In modern systems, the debugging data can easily be larger than the code it describes.
- To stay within the 4 Gbyte size limit of a 32-bit object. In very large 32-bit objects, the debug data can cause this limit to be exceeded and prevent the creation of the object.
- To limit the exposure of internal implementation details.

Traditionally, objects have been stripped of non-allocable sections in order to address these issues. Stripping is effective, but destroys data that might be needed later. The Solaris link-editor can instead write non-allocable sections to an ancillary object. This feature is enabled with the -z ancillary command line option.

$ ld ... 
-z ancillary[=outfile] ...

By default, the ancillary file is given the same name as the primary output object, with a .anc file extension. However, a different name can be provided by supplying an outfile value to the -z ancillary option. When -z ancillary is specified, the link-editor performs the following actions:

- All allocable sections are written to the primary object. In addition, all non-allocable sections containing one or more input sections that have the SHF_SUNW_PRIMARY section header flag set are written to the primary object.
- All remaining non-allocable sections are written to the ancillary object.
- The following non-allocable sections are written to both the primary object and the ancillary object:

.shstrtab          The section name string table.
.symtab            The full non-dynamic symbol table.
.symtab_shndx      The symbol table extended index section associated with .symtab.
.strtab            The non-dynamic string table associated with .symtab.
.SUNW_ancillary    Contains the information required to identify the primary and ancillary objects, and to identify the object being examined.

The primary object and all ancillary objects contain the same array of section headers. Each section has the same section index in every file. Although the primary and ancillary objects all define the same section headers, the data for most sections will be written to a single file as described above. If the data for a section is not present in a given file, the SHF_SUNW_ABSENT section header flag is set, and the sh_size field is 0. This organization makes it possible to acquire a full list of section headers, a complete symbol table, and a complete list of the primary and ancillary objects from either the primary or an ancillary object.

The following example illustrates the underlying implementation of ancillary objects. An ancillary object is created by adding the -z ancillary command line option to an otherwise normal compilation. The file utility shows that the result is an executable named a.out, and an associated ancillary object named a.out.anc.

$ cat hello.c
#include <stdio.h>

int
main(int argc, char **argv)
{
        (void) printf("hello, world\n");
        return (0);
}
$ cc -g -zancillary hello.c
$ file a.out a.out.anc
a.out: ELF 32-bit LSB executable 80386 Version 1 [FPU], dynamically linked, not stripped, ancillary object a.out.anc
a.out.anc: ELF 32-bit LSB ancillary 80386 Version 1, primary object a.out
$ ./a.out
hello, world

The resulting primary object is an ordinary executable that can be executed in the usual manner. It is no different at runtime than an executable built without the use of ancillary objects and then stripped of non-allocable content using the strip or mcs commands. As previously described, the primary object and ancillary objects contain the same section headers. To see how this works, it is helpful to use the elfdump utility to display these section headers and compare them. The following table shows the section header information for a selection of headers from the previous link-edit example.
Index  Section Name     Type            Primary Flags               Ancillary Flags                      Primary Size  Ancillary Size
13     .text            PROGBITS        ALLOC EXECINSTR             ALLOC EXECINSTR SUNW_ABSENT          0x131         0
20     .data            PROGBITS        WRITE ALLOC                 WRITE ALLOC SUNW_ABSENT              0x4c          0
21     .symtab          SYMTAB          0                           0                                    0x450         0x450
22     .strtab          STRTAB          STRINGS                     STRINGS                              0x1ad         0x1ad
24     .debug_info      PROGBITS        SUNW_ABSENT                 0                                    0             0x1a7
28     .shstrtab        STRTAB          STRINGS                     STRINGS                              0x118         0x118
29     .SUNW_ancillary  SUNW_ancillary  0                           0                                    0x30          0x30

The data for most sections is only present in one of the two files, and absent from the other file. The SHF_SUNW_ABSENT section header flag is set when the data is absent. The data for allocable sections needed at runtime is found in the primary object. The data for non-allocable sections used for debugging but not needed at runtime is placed in the ancillary file. A small set of non-allocable sections are fully present in both files. These are the .SUNW_ancillary section used to relate the primary and ancillary objects together, the section name string table .shstrtab, as well as the symbol table .symtab and its associated string table .strtab.

It is possible to strip the symbol table from the primary object. A debugger that encounters an object without a symbol table can use the .SUNW_ancillary section to locate the ancillary object, and access the symbols contained within. The primary object, and all associated ancillary objects, contain a .SUNW_ancillary section that allows all the objects to be identified and related together.

$ elfdump -T SUNW_ancillary a.out a.out.anc

a.out:
Ancillary Section:  .SUNW_ancillary
     index  tag                  value
       [0]  ANC_SUNW_CHECKSUM    0x8724
       [1]  ANC_SUNW_MEMBER      0x1       a.out
       [2]  ANC_SUNW_CHECKSUM    0x8724
       [3]  ANC_SUNW_MEMBER      0x1a3     a.out.anc
       [4]  ANC_SUNW_CHECKSUM    0xfbe2
       [5]  ANC_SUNW_NULL        0

a.out.anc:
Ancillary Section:  .SUNW_ancillary
     index  tag                  value
       [0]  ANC_SUNW_CHECKSUM    0xfbe2
       [1]  ANC_SUNW_MEMBER      0x1       a.out
       [2]  ANC_SUNW_CHECKSUM    0x8724
       [3]  ANC_SUNW_MEMBER      0x1a3     a.out.anc
       [4]  ANC_SUNW_CHECKSUM    0xfbe2
       [5]  ANC_SUNW_NULL        0

The ancillary sections for both objects contain the same number of elements, and are identical except for the first element. Each object, starting with the primary object, is introduced with a MEMBER element that gives the file name, followed by a CHECKSUM that identifies the object. In this example, the primary object is a.out, and has a checksum of 0x8724. The ancillary object is a.out.anc, and has a checksum of 0xfbe2. The first element in a .SUNW_ancillary section, preceding the MEMBER element for the primary object, is always a CHECKSUM element, containing the checksum for the file being examined. The presence of a .SUNW_ancillary section in an object indicates that the object has associated ancillary objects. The names of the primary and all associated ancillary objects can be obtained from the ancillary section of any one of the files. It is possible to determine which file is being examined, from the larger set of files, by comparing the first checksum value to the checksum of each member that follows.

Debugger Access and Use of Ancillary Objects
Debuggers and other observability tools must merge the information found in the primary and ancillary object files in order to build a complete view of the object. This is equivalent to processing the information from a single file. This merging is simplified by the primary object and ancillary objects containing the same section headers, and by the single symbol table. The following steps can be used by a debugger to assemble the information contained in these files.
1. Starting with the primary object, or any of the ancillary objects, locate the .SUNW_ancillary section. The presence of this section identifies the object as part of an ancillary group, and contains information that can be used to obtain a complete list of the files and determine which of those files is the one currently being examined.
2. Create a section header array in memory, using the section header array from the object being examined as an initial template.
3. Open and read each file identified by the .SUNW_ancillary section in turn. For each file, fill in the in-memory section header array with the information for each section that does not have the SHF_SUNW_ABSENT flag set.

The result will be a complete in-memory copy of the section headers with pointers to the data for all sections. Once this information has been acquired, the debugger can proceed as it would in the single file case, to access and control the running program.

Note - The ELF definition of ancillary objects provides for a single primary object, and an arbitrary number of ancillary objects. At this time, the Oracle Solaris link-editor only produces a single ancillary object containing all non-allocable sections. This may change in the future. Debuggers and other observability tools should be written to handle the general case of multiple ancillary objects.

ELF Implementation Details (From the Solaris Linker and Libraries Guide)
To implement ancillary objects, it was necessary to extend the ELF format to add a new object type (ET_SUNW_ANCILLARY), a new section type (SHT_SUNW_ANCILLARY), and 2 new section header flags (SHF_SUNW_ABSENT, SHF_SUNW_PRIMARY). In this section, I will detail these changes, in the form of diffs to the Solaris Linker and Libraries manual.

Part IV ELF Application Binary Interface
Chapter 13: Object File Format

Object File Format
Edit Note: This existing section at the beginning of the chapter describes the ELF header. There's a table of object file types, which now includes the new ET_SUNW_ANCILLARY type.

e_type
Identifies the object file type, as listed in the following table.

Name                 Value   Meaning
ET_NONE              0       No file type
ET_REL               1       Relocatable file
ET_EXEC              2       Executable file
ET_DYN               3       Shared object file
ET_CORE              4       Core file
ET_LOSUNW            0xfefe  Start operating system specific range
ET_SUNW_ANCILLARY    0xfefe  Ancillary object file
ET_HISUNW            0xfefd  End operating system specific range
ET_LOPROC            0xff00  Start processor-specific range
ET_HIPROC            0xffff  End processor-specific range

Sections
Edit Note: This overview section defines the section header structure, and provides a high level description of known sections. It was updated to define the new SHF_SUNW_ABSENT and SHF_SUNW_PRIMARY flags and the new SHT_SUNW_ANCILLARY section.

...
sh_type
Categorizes the section's contents and semantics. Section types and their descriptions are listed in Table 13-5.
sh_flags
Sections support 1-bit flags that describe miscellaneous attributes. Flag definitions are listed in Table 13-8.
...

Table 13-5 ELF Section Types, sh_type

Name                  Value
. . .
SHT_LOSUNW            0x6fffffee
SHT_SUNW_ancillary    0x6fffffee
. . .

...
SHT_LOSUNW - SHT_HISUNW
Values in this inclusive range are reserved for Oracle Solaris OS semantics.
SHT_SUNW_ANCILLARY
Present when a given object is part of a group of ancillary objects. Contains information required to identify all the files that make up the group. See Ancillary Section.
...

Table 13-8 ELF Section Attribute Flags

Name                    Value
. . .
SHF_MASKOS              0x0ff00000
SHF_SUNW_NODISCARD      0x00100000
SHF_SUNW_ABSENT         0x00200000
SHF_SUNW_PRIMARY        0x00400000
SHF_MASKPROC            0xf0000000
. . .

...
SHF_SUNW_ABSENT
Indicates that the data for this section is not present in this file. When ancillary objects are created, the primary object and any ancillary objects will all have the same section header array, to facilitate merging them to form a complete view of the object, and to allow them to use the same symbol tables. Each file contains a subset of the section data. The data for allocable sections is written to the primary object, while the data for non-allocable sections is written to an ancillary file. The SHF_SUNW_ABSENT flag is used to indicate that the data for the section is not present in the object being examined. When the SHF_SUNW_ABSENT flag is set, the sh_size field of the section header must be 0. An application encountering an SHF_SUNW_ABSENT section can choose to ignore the section, or to search for the section data within one of the related ancillary files.

SHF_SUNW_PRIMARY
The default behavior when ancillary objects are created is to write all allocable sections to the primary object and all non-allocable sections to the ancillary objects. The SHF_SUNW_PRIMARY flag overrides this behavior. Any output section containing one or more input sections with the SHF_SUNW_PRIMARY flag set is written to the primary object without regard for its allocable status.
...

Two members in the section header, sh_link and sh_info, hold special information, depending on section type.

Table 13-9 ELF sh_link and sh_info Interpretation

sh_type               sh_link                                                sh_info
. . .
SHT_SUNW_ANCILLARY    The section header index of the associated             0
                      string table.
. . .

Special Sections
Edit Note: This section describes the sections used in Solaris ELF objects, using the types defined in the previous description of section types. It was updated to define the new .SUNW_ancillary (SHT_SUNW_ANCILLARY) section.

Various sections hold program and control information. Sections in the following table are used by the system and have the indicated types and attributes.

Table 13-10 ELF Special Sections

Name               Type                  Attribute
. . .
.SUNW_ancillary    SHT_SUNW_ancillary    None
. . .

...
.SUNW_ancillary
Present when a given object is part of a group of ancillary objects. Contains information required to identify all the files that make up the group. See Ancillary Section for details.
...

Ancillary Section
Edit Note: This new section provides the format reference describing the layout of a .SUNW_ancillary section and the meaning of the various tags. Note that these sections use the same tag/value concept used for dynamic and capabilities sections, and will be familiar to anyone used to working with ELF.

In addition to the primary output object, the Solaris link-editor can produce one or more ancillary objects. Ancillary objects contain non-allocable sections that would normally be written to the primary object. When ancillary objects are produced, the primary object and all of the associated ancillary objects contain a SHT_SUNW_ancillary section, containing information that identifies these related objects. Given any one object from such a group, the ancillary section provides the information needed to identify and interpret the others. This section contains an array of the following structures. See sys/elf.h.
typedef struct {
        Elf32_Word      a_tag;
        union {
                Elf32_Word      a_val;
                Elf32_Addr      a_ptr;
        } a_un;
} Elf32_Ancillary;

typedef struct {
        Elf64_Xword     a_tag;
        union {
                Elf64_Xword     a_val;
                Elf64_Addr      a_ptr;
        } a_un;
} Elf64_Ancillary;

For each object with this type, a_tag controls the interpretation of a_un.

a_val
These objects represent integer values with various interpretations.
a_ptr
These objects represent file offsets or addresses.

The following ancillary tags exist.

Table 13-NEW1 ELF Ancillary Array Tags

Name                 Value   a_un
ANC_SUNW_NULL        0       Ignored
ANC_SUNW_CHECKSUM    1       a_val
ANC_SUNW_MEMBER      2       a_ptr

ANC_SUNW_NULL
Marks the end of the ancillary section.

ANC_SUNW_CHECKSUM
Provides the checksum for a file in the a_val element. When ANC_SUNW_CHECKSUM precedes the first instance of ANC_SUNW_MEMBER, it provides the checksum for the object from which the ancillary section is being read. When it follows an ANC_SUNW_MEMBER tag, it provides the checksum for that member.

ANC_SUNW_MEMBER
Specifies an object name. The a_ptr element contains the string table offset of a null-terminated string that provides the file name.

An ancillary section must always contain an ANC_SUNW_CHECKSUM before the first instance of ANC_SUNW_MEMBER, identifying the current object. Following that, there should be an ANC_SUNW_MEMBER for each object that makes up the complete set of objects. Each ANC_SUNW_MEMBER should be followed by an ANC_SUNW_CHECKSUM for that object. A typical ancillary section will therefore be structured as:

Tag                  Meaning
ANC_SUNW_CHECKSUM    Checksum of this object
ANC_SUNW_MEMBER      Name of object #1
ANC_SUNW_CHECKSUM    Checksum for object #1
. . .
ANC_SUNW_MEMBER      Name of object N
ANC_SUNW_CHECKSUM    Checksum for object N
ANC_SUNW_NULL

An object can therefore identify itself by comparing the initial ANC_SUNW_CHECKSUM to each of the ones that follow, until it finds a match.

Related Work
The GNU developers have also encountered the need/desire to support separate debug information files, and use the solution detailed at http://sourceware.org/gdb/onlinedocs/gdb/Separate-Debug-Files.html. At the current time, the separate debug file is constructed by building the standard object first, and then copying the debug data out of it in a separate post-processing step. Hence, it is limited to a total of 4GB of code and debug data, just as a single object file would be. They are aware of this, and I have seen online comments indicating that they may add direct support for generating these separate files to their link-editor. It is worth noting that the GNU objcopy utility is available on Solaris, and that the Studio dbx debugger is able to use these GNU-style separate debug files even on Solaris. Although this is interesting in terms of giving Linux users a familiar environment on Solaris, the 4GB limit means it is not an answer to the problem of very large 32-bit objects. We have also encountered issues with objcopy not understanding Solaris-specific ELF sections when using this approach. The GNU community also has a current effort to adapt their DWARF debug sections in order to move them to separate files before passing the relocatable objects to the linker. The details of Project Fission can be found at http://gcc.gnu.org/wiki/DebugFission. The goal of this project appears to be to reduce the amount of data seen by the link-editor. The primary effort revolves around moving DWARF data to separate .dwo files so that the link-editor never encounters them.
The details of modifying the DWARF data to be usable in this form are involved — please see the above URL for details.
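To make the debugger-side steps concrete, here is a rough C# sketch that walks the section headers of a 32-bit little-endian ELF file (like the a.out in the example above), reports sections whose data is absent, and dumps the raw tag/value pairs of a .SUNW_ancillary section. The two constants are the values given in this post; the header offsets are standard ELF32. This is an illustration of the file layout under those assumptions, not a supported API, and it does not resolve ANC_SUNW_MEMBER names through the string table.

using System;
using System.IO;

static class AncillaryDump
{
    const uint SHT_SUNW_ANCILLARY = 0x6fffffee; // section type from Table 13-5
    const uint SHF_SUNW_ABSENT    = 0x00200000; // flag from Table 13-8

    static void Main(string[] args)
    {
        using var r = new BinaryReader(File.OpenRead(args[0]));

        // ELF32 header: e_shoff is at byte 32; e_shentsize/e_shnum at 46/48.
        r.BaseStream.Seek(32, SeekOrigin.Begin);
        uint shoff = r.ReadUInt32();
        r.BaseStream.Seek(46, SeekOrigin.Begin);
        ushort shentsize = r.ReadUInt16();
        ushort shnum = r.ReadUInt16();

        for (int i = 0; i < shnum; i++)
        {
            r.BaseStream.Seek(shoff + (long)i * shentsize, SeekOrigin.Begin);
            r.ReadUInt32();               // sh_name (string table offset), unused here
            uint type   = r.ReadUInt32(); // sh_type
            uint flags  = r.ReadUInt32(); // sh_flags
            r.ReadUInt32();               // sh_addr
            uint offset = r.ReadUInt32(); // sh_offset
            uint size   = r.ReadUInt32(); // sh_size (must be 0 when data is absent)

            bool absent = (flags & SHF_SUNW_ABSENT) != 0;
            Console.WriteLine($"[{i,2}] type=0x{type:x8} size=0x{size:x}{(absent ? "  SUNW_ABSENT" : "")}");

            if (type == SHT_SUNW_ANCILLARY && !absent)
                DumpAncillary(r, offset, size);
        }
    }

    // Each Elf32_Ancillary entry is a 4-byte a_tag followed by a 4-byte a_un value.
    static void DumpAncillary(BinaryReader r, uint offset, uint size)
    {
        r.BaseStream.Seek(offset, SeekOrigin.Begin);
        for (uint pos = 0; pos + 8 <= size; pos += 8)
        {
            uint tag = r.ReadUInt32();
            uint val = r.ReadUInt32();
            // 0 = ANC_SUNW_NULL, 1 = ANC_SUNW_CHECKSUM, 2 = ANC_SUNW_MEMBER
            Console.WriteLine($"     anc tag={tag} value=0x{val:x}");
            if (tag == 0) // ANC_SUNW_NULL marks the end of the array
                break;
        }
    }
}

Running this against the primary object or the ancillary object from the example should show the same section header array in both, differing only in which entries carry the SUNW_ABSENT flag, mirroring the elfdump output above.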

    Read the article

  • CodePlex Daily Summary for Tuesday, May 25, 2010

CodePlex Daily Summary for Tuesday, May 25, 2010

New Projects
BibleNames: BibleNames
Bing Search for PHP Developers: This is the Bing SDK for PHP, a toolkit that allows you to easily use Bing's API to fetch search results and use them in your own application. The Bin...
Fading Clock: Fading Clock is a normal clock that has the ability to fade in and out. It's developed in C# as a Windows app in .NET 2.0.
Fuzzy string matching algorithm in C# and LINQ: Fuzzy matching of strings. Find out how similar two strings are, and find the best fuzzy-matching string from a string table. Given a string (strA) and...
HexTile editor: Testing hexagonal tile editor
hgcheck12: hgcheck12
Metaverse Router: Metaverse Router makes it easier for MIIS, ILM, FIM Sync engine administrators to manage multiple provisioning modules, turn on/off provisioning wi...
MyVocabulary: Use MyVocabulary to structure and test the words you want to learn in a foreign language. This is a .NET 3.5 Windows Forms application developed in...
phpxw: Phpxw is a simple PHP framework, named after my own name.
Plop: Social networking wrappers and projects.
PST Data Structure View Tool: PST Data Structure View Tool (PSTViewTool) is a tool supporting the PST file format documentation effort. It allows the user to browse the internal...
PST File Format SDK: PST File Format SDK (pstsdk) is a cross-platform, header-only C++ library for reading PST files.
QWine: QWine is a queue machine application
SharePoint Cross Site Collection Security Trimmed Navigation: This SP2010 project will show security trimmed navigation that works across site collections. The project is written for SP2010, but can be easily ...
SharePoint List Field Manager: The SharePoint List Field Manager allows users to manage the Boolean properties of a list such as Read Only, Hidden, Show in New Form, etc. It sup...
Silverlight Toolbar for DataGrid working with RIA Services or local data: DataGridToolbar contains two controls for the Silverlight DataGrid designed for RIA Services and local data. It can be used to filter or remove a data,...
SilverShader - Silverlight Pixel Shader Demos: SilverShader is an extensible Silverlight application that is used to demonstrate the effect of different pixel shaders. The shaders can be applied...
SNCFT Gadget: This gadget lets you look up train timetables and search for information on the site of the Tunisian national railway company...
Software Transaction Memory: Software Transaction Memory for .NET
StreamInsight Samples: This project contains sample code for StreamInsight, Microsoft's platform for complex event processing. The purpose of the samples is to provide a ...
StyleAnalizer: A CSS parser
Sudoku (Multiplayer in RnD): The Sudoku project was to practice C# by making a desktop application using some algorithm. Before this, I had worked on http://shaktisaran.tech.o...
Tiplican: A small website built for the purpose of learning .NET 4 and MVC 2
TPager: Mercurial pager with color support on Windows
unirca: UNIRCA project
WcfTrace: WcfTrace is an easy way to collect information about WCF service call order, processing time, etc. It's developed in C#.

New Releases
ASP.NET TimePicker Control: 12 24 Hour Bug Fix
ASP.NET TimePicker Control: ASP.NET TimePicker Control: This release fixes a focus bug that manifests itself when switching focus between two different timepicker controls on the same page, and makes chan...
ASP.NET TimePicker Control: ASP.NET TimePicker Control - colon CSS Fix: Fixes ":" separator placement issues being too high and too low in IE and Firefox, respectively.
ASP.NET TimePicker Control: Release fixes 24 Hour Mode Bug
BFBC2 PRoCon: PRoCon 0.5.1.1: Visit http://phogue.net/?p=604 for release notes.
BFBC2 PRoCon: PRoCon 0.5.1.2: Release notes can be found at http://phogue.net/?p=604
BFBC2 PRoCon: PRoCon 0.5.1.4: Ha.. choosing the "stable" option at the moment is a bit of a joke =\ Release notes at http://phogue.net/?p=604
BFBC2 PRoCon: PRoCon 0.5.1.5: BWHAHAHA stable.. ha. Actually this one's looking pretty good now. http://phogue.net/?p=604
Bojinx: Bojinx Core V4.5.14: Issues fixed in this release: fixed an issue that caused referencePropertyName, when used through a property configuration in the context, to not wo...
Bojinx: Bojinx Debugger V0.9B: Output trace and filtering that works with the Bojinx logger.
CassiniDev - Cassini 3.5/4.0 Developers Edition: CassiniDev 3.5.1.5 and 4.0.1.5 beta3: Fixed fairly serious bug #13290 http://cassinidev.codeplex.com/WorkItem/View.aspx?WorkItemId=13290
Content Rendering: Content Rendering API 1.0.0 Revision 46406: Initial release
Deploy Workflow Manager: Deploy Workflow Manager Web Part v2: Recommend you test in your development environment first BEFORE using in production.
dotSpatial: System.Spatial.Projections Zip May 24, 2010: Adds a new spherical projection.
eComic: eComic 2010.0.0.2: Quick release to fix a couple of bugs found in the previous version. Version 2010.0.0.2 fixed a crash error when accessing the "Go To Page" dialog ...
Exchange 2010 RBAC Editor (RBAC GUI) - updated on 5/24/2010: RBAC Editor 0.9.4.1: Some bugs fixed; support for unscopedtoplevel (adding script is not working yet). Please use the email address in the About menu of the tool for any feedb...
Facebook Graph Toolkit: Preview 1 Binaries: The first preview release of the toolkit. Not recommended for use in production, but enough to get started developing your app.
Fading Clock: Clock.zip: Clock.zip is a zip file that contains the application Clock.exe.
hgcheck12: Rel8082: Rel8082
hgcheck12: scsc: scas
Metaverse Router: Metaverse Router v1.0: Initial stable release (v.1.0.0.0) of Metaverse Router. Working with: FIM 2010 Synchronization Engine, ILM 2007, MIIS 2003
MSTestGlider: MSTestGlider 1.5: What MSTestGlider 1.5.zip unzips to: http://i49.tinypic.com/2lcv4eg.png If you compile MSTestGlider from its source you will need to change the ou...
MyVocabulary: Version 1.0: First release
NLog - Advanced .NET Logging: Nightly Build 2010.05.24.001: Changes since the last build: 2010-05-23 20:45:37 Jarek Kowalski fixed whitespace in NLog.wxs; 2010-05-23 12:01:48 Jarek Kowalski BufferingTargetWra...
NoteExpress User Tools (NEUT) - Do it by ourselves!: NoteExpress User Tools 2.0.0: Tested version: NoteExpress 2.5.0.1154. Adjusted the arrangement of the tab pages. Note: 2.0 makes no major changes; only the runtime environment was upgraded from .NET 3.5 to 4.0.
openrs: Beta Release (Revision 1): This is the beta release of the openrs framework. Please make sure you submit issues in the issue tracker tab. As this is beta, extreme, flawless ...
openrs: Revision 2: Revision 2 of the framework. Basic worker example as well as minor improvements on the auth packet.
phpxw: Phpxw 1.0: Phpxw is a simple PHP framework, named after my own name. It supports basic business logic flows and modular functionality, implements a simple template technique, and can also support external template engines. Support...
sELedit: sELedit v1.1a: Fixed: clean file before overwriting. Fixed: list57 will only be created when eLC.Lists.length > 57
sGSHOPedit: sGSHOPedit v1.0a: Fixed: bug with wrong item array re-size when adding items after deleting items. Added: link to project page. pwdatabase.com version is now selec...
SharePoint Cross Site Collection Security Trimmed Navigation: Release 1.0.0.0: If you want just the .wsp and to start using this, you can grab it here. Just stsadm add/deploy to your website, and activate the feature as describ...
Silverlight 4.0 Popup Menu: Context Menu for Silverlight 4.0 v1.24 Beta: Updated the demo and added clipboard cut/copy and paste functionality. Added delay on hover events for both parent and child menus. Parent me...
Silverlight Toolbar for DataGrid working with RIA Services or local data: DataGridToolBar Beta: For Silverlight 4.0
sMAPtool: sMAPtool v0.7d (without Maps): Added: link to project page
sMODfix: sMODfix v1.0a: Added: support for ECM v52 models
sNPCedit: sNPCedit v0.9a: browse source commits for all changes...
SocialScapes: SocialScapes TwitterWidget 1.0.0: The SocialScapes TwitterWidget is a DotNetNuke widget for displaying Twitter searches. This widget will be used to replace the twitter functionali...
SQL Server 2005 and 2008 - Backup, Integrity Check and Index Optimization: 23 May 2010: This is the latest version of my solution for Backup, Integrity Check and Index Optimization in SQL Server 2005, SQL Server 2008 and SQL Server 200...
sqwarea: Sqwarea 0.0.280.0 (alpha): This release brings a lot of improvements. We strongly recommend you to upgrade to this version.
sTASKedit: sTASKedit v0.7c: Minor changes in GUI & behaviour
Sudoku (Multiplayer in RnD): Sudoku (Multiplayer in RnD) 1.0.0.0 source: The Sudoku project was to practice C# by making a desktop application using some algorithm. Idea: The basic idea of the algorithm is from http://www.ac...
Sudoku (Multiplayer in RnD): Sudoku (Multiplayer in RnD) 1.0.0.1 source: Worked on the user interface, would improve it. The Sudoku project was to practice C# by making a desktop application using some algorithm. Idea: The b...
TFS WorkItem Watcher: TFS WorkItem Watcher Version 1.0: This version contains the following new features: added support to autodetect whether to start as a service or to start in console mode. The "-c" ...
TfsPolicyPack: TfsPolicyPack 0.1: This is the first release of the TfsPolicyPack. This release includes the following policies: CustomRegexPathPolicy
thinktecture Starter STS (Community Edition): StarterSTS v1.1 CTP: Added ActAs / identity delegation support.
TPager: TPager-20100524: TPager 2010-05-24 release
Trance Layer: TranceLayer Transformer: Transformer is a Beta version 2, morphing from the "Digger" to the "Transformer" release cycle. It is intended to be used as a demonstration of muscles wh...
TweetSharp: TweetSharp v1.0.0.0: Changes in v1.0.0: Added 100% public code comments. Bug fixes based on feedback from the Release Candidate. Changes to handle Twitter schema additi...
VCC: Latest build, v2.1.30524.0: Automatic drop of latest build
WCF Client Generator: Version 0.9.2.33468: Fixed: Nested enum type names are not handled correctly. Can't close Visual Studio if generated files are open when the code...
Word 2007 Redaction Tool: Version 1.2: A minor update to the Word 2007 Redaction Tool. This version can be installed directly over any existing version. Updates to Version 1.2: fixed bugs:...
xPollinate - Windows Live Writer Cross Post Plugin: 1.0.0.5 for WLW 14.0.8117.416: This version works with WLW 14.0.8117.416. This release includes a fix to enable publishing posts that have been opened directly from a blog, but ...
Yet another developer blog - Examples: jQuery Autocomplete in ASP.NET MVC: This sample application shows how to use the jQuery Autocomplete plugin in ASP.NET MVC. This application is accompanied by the following entry on Yet a...

Most Popular Projects
Rawr
WBFS Manager
AJAX Control Toolkit
Microsoft SQL Server Product Samples: Database
Silverlight Toolkit
Windows Presentation Foundation (WPF)
patterns & practices – Enterprise Library
Microsoft SQL Server Community & Samples
PHPExcel
ASP.NET

Most Active Projects
patterns & practices – Enterprise Library
Rawr
patterns & practices: Windows Azure Security Guidance
SqlServerExtensions
GMap.NET - Great Maps for Windows Forms & Presentation
Mono.Addins
Caliburn: An Application Framework for WPF and Silverlight
BlogEngine.NET
Ionics Isapi Rewrite Filter
SQL Server PowerShell Extensions

    Read the article

  • CodePlex Daily Summary for Friday, February 11, 2011

    CodePlex Daily Summary for Friday, February 11, 2011

    Popular Releases

    - Snoop, the WPF Spy Utility: Snoop 2.6.1: This release is a bug-fixing release. Most importantly, issues have been seen around WPF 4.0 applications not always showing up in the app chooser. Hopefully, they are fixed now. I thought this issue warranted a minor release since more and more people are going WPF 4.0 and I don't want anyone to have any problems. Dan Hanan also contributes again with several usability features. Thanks Dan! Happy Snooping! p.s. By request, I am also attaching a .zip file ... so that people can install it ...
    - RIBA - Rich Internet Business Application for Silverlight: Preview of MVVM Framework Source + Tutorials: This is a first public release of the MVVM Framework which is part of the final RIBA application. The complete RIBA example LOB application has yet to be published. Further documentation on the MVVM part can be found on the blog, http://www.SilverlightBlog.Net, and in the downloadable source (mvvm/doc/). Please post all issues and suggestions in the issue tracker.
    - SharePoint Learning Kit: 1.5: SharePoint Learning Kit 1.5 has the following new functionality: support for SharePoint 2010; E-Learning Actions can be localised; two new document library edit options; automatically adds the Assignment List Web Part to the Web Part Gallery; various bug fixes for the Drop Box. There are 2 downloads for this release: SLK-1.5-2010.zip for SharePoint 2010, and SLK-1.5-2007.zip for SharePoint 2007 (WSS3 & MOSS 2007).
    - Facebook C# SDK: 5.0.3 (BETA): This is the fourth BETA release of the version 5 branch of the Facebook C# SDK. Remember this is a BETA build. Some things may change or not work exactly as planned. We are absolutely looking for feedback on this release to help us improve the final 5.X.X release. For more information about this release see the following blog posts: Facebook C# SDK - Writing your first Facebook Application; Facebook C# SDK v5 Beta Internals; Facebook C# SDK V5.0.0 (BETA) Released. We have spent time trying ...
    - NodeXL: Network Overview, Discovery and Exploration for Excel: NodeXL Excel Template, version 1.0.1.161: The NodeXL Excel template displays a network graph using edge and vertex lists stored in an Excel 2007 or Excel 2010 workbook. What's new: this release adds a new Twitter List network importer, makes some minor feature improvements, and fixes a few bugs. See the Complete NodeXL Release History for details. Installation steps: download the Zip file; unzip it into any folder; use WinZip or a similar program, or just right-click the Zip file...
    - WatchersNET.TagCloud: WatchersNET.TagCloud 01.09.03: What's new: added new skin TagTastic http://www.watchersnet.de/Portals/0/screenshots/dnn/TagCloud-TagTastic-Skin.jpg; added new skin RoundedButton http://www.watchersnet.de/Portals/0/screenshots/dnn/TagCloud-RoundedButton-Skin.jpg. Changes: tag count fixed on Tag Source Referrals; fixed tag count when multiple tag sources are used.
    - ExtremeML: ExtremeML v1.0 Beta 3: VS solution source code updated for compatibility with VS2010 (accommodates VS2010 breaking changes in T4 template support).
    - Finestra Virtual Desktops: 1.1: This release adds a few more performance and graphical enhancements to 1.0. Switching desktops is now about as fast as you can blink. Desktop switching optimizations; new welcome wizard for Vista/7; fixed a few minor bugs; added a few more options to the options dialog (including the ability to disable the taskbar switching).
    - WCF Data Services Toolkit: WCF Data Services Toolkit: The source code and binary releases of the WCF Data Services Toolkit. For simplicity, the source code download doesn't include any of the MSTest files. If you want those, you can pull the code down via Mercurial.
    - youtubeFisher: youtubeFisher 3.0 [beta]: What's new: video capturing improved; supports YouTube's new layout (January 2011); internal refactoring.
    - Nearforums - ASP.NET MVC forum engine: Nearforums v5.0: Version 5.0 of the ASP.NET MVC Forum Engine, containing the following improvements: .NET 4.0 as target framework using ASP.NET MVC 3; all views migrated to Razor for cleaner markup; alternate template (layout file) for mobile devices; 4 bug fixes since version 4.1. Visit the project Roadmap for more details. Webdeploy package sha1 checksum: 28785b7248052465ea0738a7775e8e8744d84c27.
    - fuv: 1.0 release, codename Chopper Joe: Features: search/replace; :o to open file; :s to save file; :q to quit.
    - ASP.NET MVC Project Awesome, jQuery Ajax helpers (controls): 1.7: A rich set of helpers (controls) that you can use to build highly responsive and interactive Ajax-enabled web applications. These helpers include Autocomplete, AjaxDropdown, Lookup, Confirm Dialog, Popup Form, Popup and Pager. HTML generation optimized; new features for the lookup (add additional search data); live demo went aero.
    - EnhSim: EnhSim 2.3.6 BETA: This release supports WoW patch 4.06 at level 85. To use this release, you must have the Microsoft Visual C++ 2010 Redistributable Package installed. This can be downloaded from http://www.microsoft.com/downloads/en/details.aspx?FamilyID=A7B7A05E-6DE6-4D3A-A423-37BF0912DB84 To use the GUI you must have the .NET 4.0 Framework installed. This can be downloaded from http://www.microsoft.com/downloads/en/details.aspx?FamilyID=9cfb2d51-5ff4-4491-b0e5-b386f32c0992 Changes since 2.3.0 ...
    - TestApi - a library of Test APIs: TestApi v0.6: TestApi v0.6 comes with the following changes: TestApi code development has been moved to CodePlex (moved the TestApi solution to VS 2010; moved all source code to CodePlex - all development work is done there now). Fault Injection API: integrated the unmanaged FaultInjectionEngine.dll COM component in the build; cleaned up FaultInjectionEngine.dll to build at warning level 4; implemented "FaultScope", which allows for in-process fault injection; added automation scripts & sample program; ...
    - AutoLoL: AutoLoL v1.5.5: AutoChat now allows up to 6 items. Items with nr. 7-0 will be removed! News page URLs are now opened in the default browser. Added a context menu to the system tray icon (thanks to Alex Banagos). AutoChat now allows configuring the Chat Keys and the Modifier Key. The recent files list now supports compact and full mode. Fix: swapped mouse buttons are now properly detected. Fix: sometimes the Play button was pressed while still greyed out. Champion: Karma. Note: You can also run the u...
    - mojoPortal: 2.3.6.2: See release notes on mojoportal.com: http://www.mojoportal.com/mojoportal-2362-released.aspx Note that we have separate deployment packages for .NET 3.5 and .NET 4.0. The deployment package downloads on this page are pre-compiled and ready for production deployment; they contain no C# source code. To download the source code, see the Source Code tab. I recommend getting the latest source code using TortoiseHG; you can get the source code corresponding to this release here.
    - Rawr: Rawr 4.0.19 Beta: Rawr is now web-based. The link to use Rawr4 is: http://elitistjerks.com/rawr.php This is the Cataclysm Beta Release. More details can be found at the following link: http://rawr.codeplex.com/Thread/View.aspx?ThreadId=237262 As of the 4.0.16 release, you can now also begin using the new downloadable WPF version of Rawr! This is a pre-alpha release of the WPF version; there are likely to be a lot of issues. If you have a problem, please follow the Posting Guidelines and put it into the Issue Trac...
    - IronRuby: 1.1.2: IronRuby 1.1.2 is a servicing release that keeps on improving compatibility with Ruby 1.9.2 and includes IronRuby integration with Visual Studio 2010. We decided to drop 1.8.6 compatibility mode in all post-1.0 releases. We recommend using IronRuby 1.0 if you need 1.8.6 compatibility. In this release we fixed several major issues: problems that blocked gem installation in certain cases; regex syntax (the parser was replaced with a new one that is much more compatible with Ruby 1.9.2); cras...
    - MVVM Light Toolkit: MVVM Light Toolkit V3 SP1 (4): There was a small issue with the previous release that caused errors when installing the templates in VS10 Express. This release corrects the error. Only use this if you encountered issues when installing the previous release. No changes in the binaries.

    New Projects

    - AlchemySearch: A middleware application that allows searching of Open Text's Alchemy application from a command line or from a calling application. C# .NET 4.
    - alltuan: all tuan info
    - asp.net mvc 2.0
    - Basic Text File Generator: This is an investigatory development that creates text files from a Windows Forms app.
    - bisolu_luc: nothing yet
    - Criteria Pack for EPiServer CMS: This is a collection of useful criteria for extending the personalization mechanism, Visitor Groups, in EPiServer CMS 6 R2 (or later).
    - CTRNN.NET: CTRNN.NET is a C# implementation of Continuous Time Recurrent Neural Networks.
    - DXperience Toolkit: Complimentary controls and libraries from DevExpress.
    - Excel Library for Small Basic: This library extends Small Basic to allow you to read or write the contents of a cell in an Excel file from a Small Basic program.
    - HealthVault Eventing Sample: A sample application demonstrating the new HealthVault Eventing feature.
    - Hitcents Blog - Example Code: Contains various source code examples from articles at http://www.hitcents.com/blog. Brought to you by Hitcents.
    - INI Modifier: A proprietary application to amend users' ini files for JBAe automatically from a central control file.
    - issueIT.net - ”I just found the last bug”: issueIT is the best open source web-based issue tracker for .NET. It's easy to customize to suit your organization. This project is not finished yet - it is only at a very early stage.
    - Leage of Legends Masteries Tool: Small, portable and fast tool to help you set your masteries. No install. Masteries saved in the same folder: "lolMasterSet.dat" (text file). How to use: run it, then go to the masteries tab in the League of Legends launcher. You should see the overlay.
    - Memory View controls: Memory View controls are a pair of controls for real-time and static viewing and editing of a memory space. MemoryHexEditor is a hex editor that supports viewing and editing memory, and MemoryHeatMap is a time-based graphical view that shows how memory is read and written to.
    - Moq Contrib: Community contributions to Moq.
    - Operating System Basic: An operating system developed in QuickBasic during my adolescence.
    - PAiRS - A WPF Memory Card Game: PAiRS is an implementation of a card matching game in which you are given an even number of cards face down in a grid, and you try to flip over 2 cards at a time to create a match until all cards are matched. PAiRS is built using C#, .NET 3.5+ and WPF.
    - RIBA - Rich Internet Business Application for Silverlight: RIBA is an example LOB application with many features to show how to build distributed and secure line-of-business applications based on Silverlight 4. You can find the official blog at www.silverlightblog.net.
    - Searcher: Searcher is a Windows Phone project which allows you to quickly search using multiple search engines.
    - SharePoint Term Store Powershell Utilities: An assortment of PowerShell script-based utilities to perform actions on the SharePoint Managed Metadata Term Store. PowerShell script-based utilities have been chosen rather than cmdlets as they are more suitable for environments that have strict installation policies.
    - SmartHotelFoundation: Smart Hotel Foundation
    - Sumer Filemanager: This project provides fully Ajax-based file management.
    - The Open Source PMU: The Open Source PMU Project provides resources that enable you to build your own SynchroPhasor sensor for use with the openPDC project, research, development, or electric grid observation.
    - Trail Blazer: Movie Trailer Fetcher - downloads trailers for movies in a collection and saves them in the movie folder. It is being developed as a companion to Meta Data Fetchers for the Media Browser project. It is being developed as a standalone C# app, but a frontend in MCML is being considered.
    - University Project Tracker: University Project Tracker
    - Video Backup Fusion: The complete and simple-to-use backup solution for video portals.
    - WPF ShellFactory: WPF ShellFactory is an easy-to-use application framework for WPF, using a system derived from M-V-VM for its views. It supports multiple modules, either statically or dynamically loaded (i.e. plugins). It also contains mechanisms for application-wide services.

    Read the article

  • Oracle BI Server Modeling, Part 1 - Designing a Query Factory

    - by bob.ertl(at)oracle.com
      Welcome to Oracle BI Development's BI Foundation blog, focused on helping you get the most value from your Oracle Business Intelligence Enterprise Edition (BI EE) platform deployments.  In my first series of posts, I plan to show developers the concepts and best practices for modeling in the Common Enterprise Information Model (CEIM), the semantic layer of Oracle BI EE.  In this segment, I will lay the groundwork for the modeling concepts.  First, I will cover the big picture of how the BI Server fits into the system, and how the CEIM controls the query processing.

      Oracle BI EE Query Cycle

      The purpose of the Oracle BI Server is to bridge the gap between the presentation services and the data sources.  There are typically a variety of data sources in a variety of technologies: relational, normalized transaction systems; relational star-schema data warehouses and marts; multidimensional analytic cubes and financial applications; flat files, Excel files, XML files, and so on. Business datasets can reside in a single type of source, or, most of the time, are spread across various types of sources.

      Presentation services users are generally business people who need to be able to query that set of sources without any knowledge of technologies, schemas, or how sources are organized in their company. They think of business analysis in terms of measures with specific calculations, hierarchical dimensions for breaking those measures down, and detailed reports of the business transactions themselves.  Most of them create queries without knowing it, by picking a dashboard page and some filters.  Others create their own analysis by selecting metrics and dimensional attributes, and possibly creating additional calculations.

      The BI Server bridges that gap from simple business terms to technical physical queries by exposing just the business-focused measures and dimensional attributes that business people can use in their analyses and dashboards.  After they make their selections and start the analysis, the BI Server plans the best way to query the data sources, writes the optimized sequence of physical queries to those sources, post-processes the results, and presents them to the client as a single result set suitable for tables, pivots and charts.

      The CEIM is a model that controls the processing of the BI Server.  It provides the subject areas that presentation services exposes for business users to select simplified metrics and dimensional attributes for their analysis.  It models the mappings to the physical data access, the calculations and logical transformations, and the data access security rules.  The CEIM consists of metadata stored in the repository, authored by developers using the Administration Tool client.

      Presentation services and other query clients create their queries in BI EE's SQL-92 language, called Logical SQL or LSQL.  The API simply uses ODBC or JDBC to pass the query to the BI Server.  Presentation services writes the LSQL query in terms of the simplified objects presented to the users.  The BI Server creates a query plan, and rewrites the LSQL into fully-detailed SQL or other languages suitable for querying the physical sources.  For example, the first query below is LSQL; it was rewritten into the physical SQL shown after it for an Oracle 11g database.
      Logical SQL:

          SELECT "D0 Time"."T02 Per Name Month" saw_0,
                 "D4 Product"."P01 Product" saw_1,
                 "F2 Units"."2-01 Billed Qty (Sum All)" saw_2
          FROM "Sample Sales"
          ORDER BY saw_0, saw_1

      Physical SQL:

          WITH SAWITH0 AS (
              select T986.Per_Name_Month as c1,
                     T879.Prod_Dsc as c2,
                     sum(T835.Units) as c3,
                     T879.Prod_Key as c4
              from Product T879 /* A05 Product */ ,
                   Time_Mth T986 /* A08 Time Mth */ ,
                   FactsRev T835 /* A11 Revenue (Billed Time Join) */
              where ( T835.Prod_Key = T879.Prod_Key
                  and T835.Bill_Mth = T986.Row_Wid )
              group by T879.Prod_Dsc, T879.Prod_Key, T986.Per_Name_Month
          )
          select SAWITH0.c1 as c1, SAWITH0.c2 as c2, SAWITH0.c3 as c3
          from SAWITH0
          order by c1, c2

      Probably everybody reading this blog can write SQL or MDX.  However, the trick in designing the CEIM is that you are modeling a query-generation factory.  Rather than hand-crafting individual queries, you model behavior and relationships, thus configuring the BI Server machinery to manufacture millions of different queries in response to random user requests.  This mass production requires a different mindset and approach than when you are designing individual SQL statements in tools such as Oracle SQL Developer, Oracle Hyperion Interactive Reporting (formerly Brio), or Oracle BI Publisher.

      The Structure of the Common Enterprise Information Model (CEIM)

      The CEIM has a unique structure specifically for modeling the relationships and behaviors that fill the gap from logical user requests to physical data source queries and back to the result.  The model divides the functionality into three specialized layers, called Presentation, Business Model and Mapping, and Physical, as shown below.

      Presentation services clients can generally only see the presentation layer, and the objects in the presentation layer are normally the only ones used in the LSQL request.  When a request comes into the BI Server from presentation services or another client, the relationships and objects in the model allow the BI Server to select the appropriate data sources, create a query plan, and generate the physical queries.  That's the left-to-right flow in the diagram below.  When the results come back from the data source queries, the right-to-left relationships in the model show how to transform the results and perform any final calculations and functions that could not be pushed down to the databases.

      Business Model

      Think of the business model as the heart of the CEIM you are designing.  This is where you define the analytic behavior seen by the users, and the superset library of metric and dimension objects available to the user community as a whole.  It also provides the baseline business-friendly names and user-readable dictionary.  For these reasons, it is often called the "logical" model--it is a virtual database schema that persists no data, but can be queried as if it is a database. The business model always has a dimensional shape (more on this in future posts), and its simple shape and terminology hide the complexity of the source data models.

      Besides hiding complexity and normalizing terminology, this layer adds most of the analytic value, as well.  This is where you define the rich, dimensional behavior of the metrics and complex business calculations, as well as the conformed dimensions and hierarchies.
      It contributes to the ease of use for business users, since the dimensional metric definitions apply in any context of filters and drill-downs, and the conformed dimensions enable dashboard-wide filters and guided analysis links that bring context along from one page to the next.  The conformed dimensions also provide a key to hiding the complexity of many sources, including federation of different databases, behind the simple business model.

      Note that the expression language in this layer is LSQL, so that any expression can be rewritten into any data source's query language at run time.  This is important for federation, where a given logical object can map to several different physical objects in different databases.  It is also important to portability of the CEIM to different database brands, which is a key requirement for Oracle's BI Applications products.

      Your requirements process with your user community will mostly affect the business model.  This is where you will define most of the things they specifically ask for, such as metric definitions.  For this reason, many of the best-practice methodologies of our consulting partners start with the high-level definition of this layer.

      Physical Model

      The physical model connects the business model that meets your users' requirements to the reality of the data sources you have available. In the query factory analogy, think of the physical layer as the bill of materials for generating physical queries.  Every schema, table, column, join, cube, hierarchy, etc., that will appear in any physical query manufactured at run time must be modeled here at design time.

      Each physical data source will have its own physical model, or "database" object, in the CEIM.  The shape of each physical model matches the shape of its physical source.  In other words, if the source is normalized relational, the physical model will mimic that normalized shape.  If it is a hypercube, the physical model will have a hypercube shape.  If it is a flat file, it will have a denormalized tabular shape.

      To aid in query optimization, the physical layer also tracks the specifics of the database brand and release.  This allows the BI Server to make the most of each physical source's distinct capabilities, writing queries in its syntax, and using its specific functions. This allows the BI Server to push processing work as deep as possible into the physical source, which minimizes data movement and takes full advantage of the database's own optimizer.  For most data sources, native APIs are used to further optimize performance and functionality.

      The value of having a distinct separation between the logical (business) and physical models is encapsulation of the physical characteristics.  This encapsulation is another enabler of packaged BI applications and federation.  It is also key to hiding the complex shapes and relationships in the physical sources from the end users.  Consider a routine drill-down in the business model: physically, it can require a drill-through where the first query is MDX to a multidimensional cube, followed by the drill-down query in SQL to a normalized relational database.  The only difference from the user's point of view is that the 2nd query added a more detailed dimension level column - everything else was the same.

      Mappings

      Within the Business Model and Mapping layer, the mappings provide the binding from each logical column and join in the dimensional business model to each of the objects that can provide its data in the physical layer.
      When there is more than one option for a physical source, rules in the mappings are applied to the query context to determine which of the data sources should be hit, and how to combine their results if more than one is used.  These rules specify aggregate navigation, vertical partitioning (fragmentation), and horizontal partitioning, any of which can be federated across multiple, heterogeneous sources.  These mappings are usually the most sophisticated part of the CEIM.

      Presentation

      You might think of the presentation layer as a set of very simple relational-like views into the business model.  Over ODBC/JDBC, they present a relational catalog consisting of databases, tables and columns.  For business users, presentation services interprets these as subject areas, folders and columns, respectively.  (Note that in 10g, subject areas were called presentation catalogs in the CEIM.  In this blog, I will stick to 11g terminology.)  Generally speaking, presentation services and other clients can query only these objects (there are exceptions for certain clients such as BI Publisher and Essbase Studio).

      The purpose of the presentation layer is to specialize the business model for different categories of users.  Based on a user's role, they will be restricted to specific subject areas, tables and columns for security.  The breakdown of the model into multiple subject areas organizes the content for users, and subjects superfluous to a particular business role can be hidden from that set of users.  Customized names and descriptions can be used to override the business model names for a specific audience.  Variables in the object names can be used for localization.

      For these reasons, you are better off thinking of the tables in the presentation layer as folders than as strict relational tables.  The real semantics of tables and how they function is in the business model, and any grouping of columns can be included in any table in the presentation layer.  In 11g, an LSQL query can also span multiple presentation subject areas, as long as they map to the same business model.

      Other Model Objects

      There are some objects that apply to multiple layers.  These include security-related objects, such as application roles, users, data filters, and query limits (governors).  There are also variables you can use in parameters and expressions, and initialization blocks for loading their initial values on a static or user-session basis.  Finally, there are Multi-User Development (MUD) projects for developers to check out units of work, and objects for the marketing feature used by our packaged customer relationship management (CRM) software.

      The Query Factory

      At this point, you should have a grasp on the query factory concept.  When developing the CEIM model, you are configuring the BI Server to automatically manufacture millions of queries in response to random user requests. You do this by defining the analytic behavior in the business model, mapping that to the physical data sources, and exposing it through the presentation layer's role-based subject areas. While configuring mass production requires a different mindset than when you hand-craft individual SQL or MDX statements, it builds on the modeling and query concepts you already understand.

      The following posts in this series will walk through the CEIM modeling concepts and best practices in detail.
We will initially review dimensional concepts so you can understand the business model, and then present a pattern-based approach to learning the mappings from a variety of physical schema shapes and deployments to the dimensional model.  Along the way, we will also present the dimensional calculation template, and learn how to configure the many additivity patterns.

    Read the article

  • You should NOT be writing jQuery in SharePoint if…

    - by Mark Rackley
    Yes… another one of these posts. What can I say? I'm a pot stirrer.. a rabble rouser *rabble rabble*

    jQuery in SharePoint seems to be a fairly polarizing issue, with one side thinking it is the most awesome thing since Princess Leia as the slave girl in Return of the Jedi, and the other half thinking it is the worst idea since Mannequin 2: On the Move. The correct answer is OF COURSE "it depends". But what are those deciding factors that make jQuery an awesome fit or leave a bad taste in your mouth? Let's see if I can drive the discussion here with some polarizing comments of my own… I know some of you are getting ready to leave your comments even now before reading the rest of the blog, which is great! Iron sharpens iron… These discussions hopefully open us up to understanding the entire process better and to thinking about things in a different way.

    You should not be writing jQuery in SharePoint if you are not a developer…

    Let's start off with my most polarizing and rant-filled portion of the blog post. If you don't know what you are doing, or you don't have a background that helps you understand the implications of what you are writing, then you should not be writing jQuery in SharePoint! I truly believe that one of the biggest reasons for the jQuery haters is all the bad jQuery out there. If you don't know what you are doing you can do some NASTY things! One of the best stories I've heard about this is from my good friend John Ferringer (@ferringer). John tells this story during our Mythbusters session we do together. One of his clients was undergoing a Denial of Service attack and they couldn't figure out what was going on! After much searching they found that some genius jQuery developer wrote some code for an image rotator, but did not take into account what happens when there are no images to load! The code just kept hitting the servers over and over and over again, which prevented anything else from getting done!

    Now, I'm NOT saying that I have not done the same sort of thing in the past or am immune from such mistakes. My point is that if you don't know what you are doing, there are very REAL consequences that can have a major impact on your organization, AND they will be hard to track down. Think how happy your boss will be after you copy and paste some jQuery from a blog without understanding what it does, it brings down the farm, AND it takes them 3 days to track it back to you. :/ Good times will not be had.
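    To make John's story concrete, here's a minimal sketch of that failure pattern. This is NOT his client's actual code - the element ID, timing, and structure are all hypothetical - but it shows how one missing guard for the empty-array case turns a slideshow into a self-inflicted load test:

        // Hypothetical reconstruction of a runaway image rotator.
        var images = [];   // supposed to be filled from a list; the bug bites when it stays empty
        var current = 0;

        function rotateImages() {
            if (images.length === 0) {
                // THE missing guard. Without this early return, the code below sets a
                // bogus src, the image's error handler fires, and the retry logic kicks
                // in immediately - hammering the server in a tight request loop.
                return;
            }
            current = (current + 1) % images.length;
            $("#rotator").attr("src", images[current]);
            setTimeout(rotateImages, 5000);  // rotate again in 5 seconds
        }

        $(document).ready(rotateImages);

    One if statement is the difference between a pretty image rotator and bringing down the farm.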
    Like it or not, JavaScript/jQuery is a programming language. While you .NET people sit on your high horses because your code is compiled and "runs faster" (also debatable), the rest of us will be actually getting work done and delivering solutions while you are trying to figure out why your widget won't deploy. I can pick at that scab because I write .NET code too, and speak from experience. I can do both, and do both well. So, I am not speaking from ignorance here. In JavaScript/jQuery you have variables, loops, conditionals, functions, arrays, events, and built-in methods. If you are not a developer you just aren't going to take advantage of all of that and use it correctly.

    Ahhh.. but there is hope! There are a lot of jQuery resources out there to help you learn, and learn well! There are many experts on the subject that will gladly tell you when you are smoking crack. I just this minute saw a tweet from @cquick with a link to "jQuery Fundamentals". I just glanced through it and this may be a great primer for you aspiring jQuery devs. Take advantage of all the resources and become a developer! Hey, it will look awesome on your resume, right?

    You should not be writing jQuery in SharePoint if it depends too much on client resources for a good user experience

    I've said it once and I'll say it over and over until you understand. jQuery is executed on the client's computer. Got it? If you are looping through hundreds of rows of data, searching through an enormous DOM, or performing many calculations, it is going to take some time! AND if your user happens to be sitting on some old PC somewhere that they picked up at a garage sale, their experience will be that much worse! If you can't give the user a good experience they will not use the site. So, if jQuery is causing the user to have a bad experience, don't use it. I sometimes go as far as to say that you should NOT go to jQuery as a first option for external-facing web sites, because you have ZERO control over what the end user's computer will be. You just can't guarantee an awesome user experience all of the time.

    Ahhh… but you have no choice? (Where have I heard that before?) Well… if you really have no choice, here are some tips to help improve the experience (see the sketch after this list):

    - Avoid screen scraping. This is not 1999 and SharePoint is not an old green screen from a mainframe… so why are you treating it like it is? Screen scraping is time consuming and client intensive. Take advantage of tools like SPServices to do your data retrieval when possible.
    - Fine tune your DOM searches. A lot of time can be eaten up just searching the DOM and ignoring table rows that you don't need. Write better jQuery to only loop through the table rows that you need, or only access the specific elements you need. Take advantage of element IDs to return the one element you are looking for, instead of looping through all the DOM over and over again.
    - Write better jQuery. Remember, this is development. Think about how you can write cleaner, faster jQuery. This directly relates to the previous point of improving your DOM searches, but also applies when using arrays, variables and loops. Do you REALLY need to loop through that array 3 times? How can you knock it down to 2 times or even 1? When you have lots of calculations and data that you are manipulating, every operation adds up. Think about how you can streamline it. Back in the old days before RAM was abundant, cores were plentiful and dinosaurs roamed the earth, us developers had to take performance into account in everything we did. It's a lost art that really needs to be used here.
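    To put the DOM search tip in code, here's a quick sketch. The IDs and class names are hypothetical; the pattern is the point:

        // Bad: visits EVERY cell in the page's DOM and tests each one in script.
        $("td").each(function () {
            if ($(this).hasClass("status")) {
                $(this).css("background-color", "yellow");
            }
        });

        // Better: let the selector engine do the filtering, scoped to one table by ID.
        $("#ordersTable td.status").css("background-color", "yellow");

        // Best for a single element: grab it directly by its ID.
        $("#grandTotal").css("background-color", "yellow");

    Same result on screen, but the scoped versions don't crawl the entire page on that garage sale PC.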
    You should not be writing jQuery in SharePoint if you are sending a lot of data over the wire…

    Developer: "Awesome… you can easily call SharePoint's web services to retrieve and write data using SPServices!"

    Administrator: "Crap! You can easily call SharePoint's web services to retrieve and write data using SPServices!"

    SPServices may indeed be the best thing that happened to SharePoint since the invention of SharePoint Saturdays by Godfather Lotter… BUT you HAVE to use it wisely! (I REFUSE to make the Spiderman reference.) If you do not know what you are doing, your code will bring back EVERY field and EVERY row from a list and push that over the internet with all that lovely XML wrapped around it. That can be a HUGE amount of data and will GREATLY impact performance! Calling several web service methods at the same time can cause the same problem and can negatively impact your SharePoint servers. These problems, thankfully, are not difficult to rectify if you are careful (a code sketch follows this list):

    - Limit list data retrieved. Use CAML to reduce the number of rows returned, and limit the fields returned using ViewFields. You should definitely be doing this regardless. If you aren't, I hope your admin thumps you upside the head.
    - Batch large list updates. You may or may not have noticed that if you try to do large updates (hundreds of rows), the performance is either completely abysmal or it fails over half the time. You can greatly improve performance and avoid timeouts by breaking up your updates into several smaller updates. I don't know if there is a magic number for best performance; it really depends on how much data you are sending back more than the number of rows. However, I have found that 200 rows generally works well. Play around and find the right number for your situation.
    - Delay Web Service calls when possible. One of the cool things about jQuery and SPServices is that you can delay queries to the server until they are actually needed, instead of doing them all at once. This can lead to performance improvements over DataViewWebParts and even .NET code in the right situations. So, don't load the data until it's needed. In some instances you may not need to retrieve the data at all, so why retrieve it ALL the time?
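    Here's what the first and third tips can look like with SPServices. Treat this as a sketch, not drop-in code: the list name, field names and row limit are hypothetical, and it assumes the jQuery and SPServices libraries are already loaded on the page:

        // Don't fetch anything until the user actually asks for it...
        $("#loadOrders").click(function () {
            $().SPServices({
                operation: "GetListItems",
                listName: "Orders",                       // hypothetical list
                // ...and when you do fetch, only ask for the fields you need...
                CAMLViewFields: "<ViewFields>" +
                                "<FieldRef Name='Title' />" +
                                "<FieldRef Name='OrderTotal' />" +
                                "</ViewFields>",
                // ...and only the rows you need.
                CAMLQuery: "<Query><Where>" +
                           "<Eq><FieldRef Name='Status' /><Value Type='Text'>Open</Value></Eq>" +
                           "</Where></Query>",
                CAMLRowLimit: 200,
                completefunc: function (xData, Status) {
                    $(xData.responseXML).SPFilterNode("z:row").each(function () {
                        // work with $(this).attr("ows_Title"), $(this).attr("ows_OrderTotal"), etc.
                    });
                }
            });
        });

    The same chunking idea applies to writes: slice a big array of pending changes into batches of around 200 and send each batch as its own update call, instead of one monster request that times out.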
    You should not be writing jQuery in SharePoint if there is a better solution…

    jQuery is NOT the silver bullet in SharePoint. It is not the answer to every question; it is just another tool in the developer's toolkit. I urge all developers to know what options exist out there and choose the right one! Sometimes it will be jQuery, sometimes it will be .NET, sometimes it will be XSL, and sometimes it will be some other choice… So, when is there a better solution to jQuery?

    - When you can't get away from performance problems. Sometimes jQuery will just give you horrible performance regardless of what you do, because of unavoidable obstacles. In these situations you are going to have to figure out an alternative. Can I do it with a DVWP, or do I have to crack open Visual Studio?
    - When you need to do something that jQuery can't do. There are lots of things you can't do in jQuery, like elevate privileges, event handlers, workflows, or interacting with back-end systems that have no web service interface. It just can't do everything.
    - When it can be done faster and more efficiently another way. Why are you spending time writing jQuery to do what a DataViewWebPart would do in 5 minutes? Or why are you trying to implement complicated logic that would be simple to do in .NET? If your answer is that you don't have the option, okay. BUT if you do have the option, don't reinvent the wheel! Take advantage of the other tools.

    The answer is not always jQuery… sorry… the kool-aid tastes good, but sweet tea is pretty awesome too.

    You should not be using jQuery in SharePoint if you are a moron…

    Let's finish up the blog on a high note… Yes.. it's true, I sometimes type things just to get a reaction… guess this section title might be a good example, but it feels good sometimes just to type the words that a lot of us think… So.. don't be that guy! Another good buddy of mine that works for Microsoft told me, "I loved jQuery in SharePoint…. until I had to support it." He went on to explain that some user was making several web service calls on a page using jQuery and then was calling Microsoft and COMPLAINING because the page took so long to load… DUH! What do you expect to happen when you are pushing that much data over the wire and are making that many web service calls at once!! It's one thing to write that kind of code and accept it's just going to take a while; it's COMPLETELY another issue to do that and then complain when it's not lightning fast! Someone's gene pool needs some chlorine.

    So, I think this is a nice summary of the blog… DON'T be that guy… don't be a moron. How can you stop yourself from being a moron? Ah.. glad you asked, here are some tips:

    - Think. Is jQuery the right solution to my problem? Is there a better approach? What are the implications and pitfalls of using jQuery in this situation?
    - Search. What are others doing? Does someone have a better solution? Is there a third-party library that does the same thing I need?
    - Plan. Write good jQuery. Limit calculations and data sent over the wire, and don't reinvent the wheel when possible.
    - Test. Okay, it works well on your machine. Try it on others, ESPECIALLY if this is for an external site. Test with empty data. Test with hundreds of rows of data. Test as many scenarios as possible. Monitor those server resources to see the impact there as well.
    - Ask the experts. As smart as you are, there are people smarter than you. Even the experts talk to each other to make sure they aren't doing something stupid. And for the MOST part they are pretty nice guys. Marc Anderson and Christophe Humbert are two guys who regularly keep me in line. Make sure you aren't doing something stupid.
    - Repeat. So, when you think you have the best solution possible, repeat the steps above just to be safe.

    Conclusion

    jQuery is an awesome tool and has come in handy on many occasions. I'm even teaching a 1/2 day SharePoint & jQuery workshop at the upcoming SPTechCon in Boston if you want to berate me in person. However, it's only as awesome as the developer behind the keyboard. It IS development, and has its pitfalls. Knowledge and experience are invaluable to giving the user the best experience possible. Let's face it: in the end, no matter our opinions, prejudices, or egos, providing our clients, customers, and users with the best solution possible is what counts. Period… end of sentence…

    Read the article

  • ASP.NET Frameworks and Raw Throughput Performance

    - by Rick Strahl
    A few days ago I had a curious thought: With all these different technologies that the ASP.NET stack has to offer, what's the most efficient technology overall to return data for a server request? When I started this it was mere curiosity rather than a real practical need or result. Different tools are used for different problems, so performance differences are to be expected. But still I was curious to see how the various technologies performed relative to each other just for raw throughput of the request getting to the endpoint and back out to the client, with as little processing in the actual endpoint logic as possible (aka Hello World!).

    I want to clarify that this is merely an informal test for my own curiosity, and I'm sharing the results and process here because I thought it was interesting. It's been a long while since I've done any sort of perf testing on ASP.NET, mainly because I've not had extremely heavy load requirements and because overall ASP.NET performs very well even for fairly high loads, so it's often not that critical to test load performance. This post is not meant to make a point or even come to a conclusion about which tech is better, but just to act as a reference to help understand some of the differences in perf and give a starting point to play around with this yourself. I've included the code for this simple project, so you can play with it and maybe add a few additional tests for different things if you like. Source Code on GitHub

    I looked at this data for these technologies:

    - ASP.NET Web API
    - ASP.NET MVC
    - WebForms
    - ASP.NET WebPages
    - ASMX AJAX Services (couldn't get AJAX/JSON to run on IIS8)
    - WCF REST
    - Raw ASP.NET HttpHandlers

    It's quite a mixed bag, of course, and the technologies target different types of development. What started out as mere curiosity turned into a bit of a head scratcher as the results were sometimes surprising. What I describe here is to satisfy my curiosity more than anything, and I thought it interesting enough to discuss on the blog :-)

    First test: Raw Throughput

    The first thing I did is test raw throughput for the various technologies. This is the least practical test of course, since you're unlikely to ever create the equivalent of a 'Hello World' request in a real life application. The idea here is to measure how much time a 'NOP' request takes to return data to the client. So for this request I created the simplest Hello World request that I could come up with for each tech.

    Http Handler

    The first is the lowest-level approach, which is an HTTP handler.

        public class Handler : IHttpHandler
        {
            public void ProcessRequest(HttpContext context)
            {
                context.Response.ContentType = "text/plain";
                context.Response.Write("Hello World. Time is: " + DateTime.Now.ToString());
            }

            public bool IsReusable
            {
                get { return true; }
            }
        }

    WebForms

    Next I added a couple of ASPX pages - one using CodeBehind and one using only a markup page. The CodeBehind page simply does this in CodeBehind without any markup in the ASPX page:

        public partial class HelloWorld_CodeBehind : System.Web.UI.Page
        {
            protected void Page_Load(object sender, EventArgs e)
            {
                Response.Write("Hello World. Time is: " + DateTime.Now.ToString());
                Response.End();
            }
        }

    while the markup page only contains some static output via an expression:

        <%@ Page Language="C#" AutoEventWireup="false"
            CodeBehind="HelloWorld_Markup.aspx.cs"
            Inherits="AspNetFrameworksPerformance.HelloWorld_Markup" %>
        Hello World. Time is <%= DateTime.Now %>

    ASP.NET WebPages

    WebPages is the freestanding Razor implementation of ASP.NET.
    Here's the simple HelloWorld.cshtml page:

        Hello World @DateTime.Now

    WCF REST

    WCF REST was the token REST implementation for ASP.NET before WebAPI, and the in-between step from ASP.NET AJAX. I'd like to forget that this technology was ever considered for production use, but I'll include it here. Here's an OperationContract class:

        [ServiceContract(Namespace = "")]
        [AspNetCompatibilityRequirements(RequirementsMode = AspNetCompatibilityRequirementsMode.Allowed)]
        public class WcfService
        {
            [OperationContract]
            [WebGet]
            public Stream HelloWorld()
            {
                var data = Encoding.Unicode.GetBytes("Hello World" + DateTime.Now.ToString());
                var ms = new MemoryStream(data);

                // Add your operation implementation here
                return ms;
            }
        }

    WCF REST can return arbitrary results by returning a Stream object and a content type. The code above turns the string result into a stream and returns that back to the client.

    ASP.NET AJAX (ASMX Services)

    I also wanted to test ASP.NET AJAX services because, prior to WebAPI, this is probably still the most widely used AJAX technology for the ASP.NET stack today. Unfortunately I was completely unable to get this running on my Windows 8 machine. Visual Studio 2012 removed adding of ASP.NET AJAX services, and when I tried to manually add the service and configure the script handler references it simply did not work - I always got a SOAP response for GET and POST operations. No matter what I tried I always ended up getting XML results even when explicitly adding the ScriptHandler. So, I didn't test this (but the code is there - you might be able to test this on a Windows 7 box).

    ASP.NET MVC

    Next up is probably the most popular ASP.NET technology at the moment: MVC. Here's the small controller:

        public class MvcPerformanceController : Controller
        {
            public ActionResult Index()
            {
                return View();
            }

            public ActionResult HelloWorldCode()
            {
                return new ContentResult()
                {
                    Content = "Hello World. Time is: " + DateTime.Now.ToString()
                };
            }
        }

    ASP.NET WebAPI

    Next up is WebAPI, which looks kind of similar to MVC. Except here I have to use a StringContent result to return the response:

        public class WebApiPerformanceController : ApiController
        {
            [HttpGet]
            public HttpResponseMessage HelloWorldCode()
            {
                return new HttpResponseMessage()
                {
                    Content = new StringContent("Hello World. Time is: " + DateTime.Now.ToString(),
                                                Encoding.UTF8, "text/plain")
                };
            }
        }

    Testing

    Take a minute to think about each of the technologies… and take a guess which you think is most efficient in raw throughput. The fastest should be pretty obvious, but the others - maybe not so much.

    The testing I did is pretty informal, since it was mainly to satisfy my curiosity - here's how I did this: I used Apache Bench (ab.exe) from a full Apache HTTP installation to run and log the test results of hitting the server. ab.exe is a small executable that lets you hit a URL repeatedly and provides counter information about the number of requests, requests per second, etc. ab.exe and the batch file are located in the \LoadTests folder of the project. An ab.exe command line looks like this:

        ab.exe -n100000 -c20 http://localhost/aspnetperf/api/HelloWorld

    which hits the specified URL 100,000 times with a load factor of 20 concurrent requests, and produces a summary of the requests processed and the requests per second. It's a great way to get a quick and dirty performance summary. Run it a few times to make sure there's not a large amount of variance. You might also want to do an IISRESET to clear the Web Server.
    Just make sure you do a short test run to warm up the server first - otherwise your first run is likely to be skewed downwards. ab.exe also allows you to specify headers and provide POST data and many other things if you want to get a little more fancy. Here all tests are GET requests to keep it simple.

    I ran each test with:

    - 100,000 iterations
    - Load factor of 20 concurrent connections
    - IISReset before starting
    - A short warm-up run for API and MVC to make sure startup cost is mitigated

    Here is the batch file I used for the test:

        IISRESET

        REM make sure you add
        REM C:\Program Files (x86)\Apache Software Foundation\Apache2.2\bin
        REM to your path so ab.exe can be found

        REM Warm up
        ab.exe -n100 -c20 http://localhost/aspnetperf/MvcPerformance/HelloWorldJson
        ab.exe -n100 -c20 http://localhost/aspnetperf/api/HelloWorldJson
        ab.exe -n100 -c20 http://localhost/AspNetPerf/WcfService.svc/HelloWorld

        ab.exe -n100000 -c20 http://localhost/aspnetperf/handler.ashx > handler.txt
        ab.exe -n100000 -c20 http://localhost/aspnetperf/HelloWorld_CodeBehind.aspx > AspxCodeBehind.txt
        ab.exe -n100000 -c20 http://localhost/aspnetperf/HelloWorld_Markup.aspx > AspxMarkup.txt
        ab.exe -n100000 -c20 http://localhost/AspNetPerf/WcfService.svc/HelloWorld > Wcf.txt
        ab.exe -n100000 -c20 http://localhost/aspnetperf/MvcPerformance/HelloWorldCode > Mvc.txt
        ab.exe -n100000 -c20 http://localhost/aspnetperf/api/HelloWorld > WebApi.txt

    I ran each of these tests 3 times and took the average score for Requests/second, with the machine otherwise idle. I did see a bit of variance when running many tests, but the values used here are the medians. Part of this has to do with the fact that I ran the tests on my local machine - results would probably be more consistent running the load test on a separate machine hitting across the network.

    I ran these tests locally on my laptop, which is a Dell XPS with a quad-core Sandy Bridge i7-2720QM @ 2.20GHz and a fast SSD drive, on Windows 8. CPU load during tests ran to about 70% max across all 4 cores (IOW, it wasn't overloading the machine). Ideally you can try running these tests on a separate machine hitting the local machine. If I remember correctly, IIS 7 and 8 on client OSs don't throttle, so the performance here should be representative.

    Results

    Ok, let's cut straight to the chase. Below are the results from the tests…

    It's not surprising that the handler was fastest. But it was a bit surprising to me that the next fastest was WebForms, and especially Web Forms with markup over a CodeBehind page. WebPages also fared fairly well. MVC and WebAPI are a little slower, and the slowest by far is WCF REST (which again I find surprising).

    As mentioned at the start, the raw throughput tests are not overly practical as they don't test scripting performance for the HTML generation engines or serialization performance of the data engines. All it really does is give you an idea of the raw throughput for the technology, from the time of the request to reaching the endpoint and returning minimal text data back to the client, which indicates full round-trip performance. But it's still interesting to see that Web Forms performs better in throughput than either MVC, WebAPI or WebPages. It'd be interesting to try this with a few pages that actually have some parsing logic on them, but that's beyond the scope of this throughput test.

    But what's also amazing about this test is the sheer amount of traffic that a laptop computer is handling.
    Even the slowest tech managed 5700 requests a second, which is one hell of a lot of requests if you extrapolate that out over a 24-hour period. Remember these are not static pages, but dynamic requests that are being served.

    Another test - JSON Data Service Results

    For the second test I used a JSON result from several of the technologies. I didn't bother running WebForms and WebPages through this test since it doesn't make a ton of sense to return data from them (OTOH, returning text from the APIs didn't make a ton of sense either :-)

    In these tests I have a small Person class that gets serialized and then returned to the client. The Person class looks like this:

        public class Person
        {
            public Person()
            {
                Id = 10;
                Name = "Rick";
                Entered = DateTime.Now;
            }

            public int Id { get; set; }
            public string Name { get; set; }
            public DateTime Entered { get; set; }
        }

    Here are the updated handler classes that use Person:

    Handler

        public class Handler : IHttpHandler
        {
            public void ProcessRequest(HttpContext context)
            {
                var action = context.Request.QueryString["action"];
                if (action == "json")
                    JsonRequest(context);
                else
                    TextRequest(context);
            }

            public void TextRequest(HttpContext context)
            {
                context.Response.ContentType = "text/plain";
                context.Response.Write("Hello World. Time is: " + DateTime.Now.ToString());
            }

            public void JsonRequest(HttpContext context)
            {
                var json = JsonConvert.SerializeObject(new Person(), Formatting.None);
                context.Response.ContentType = "application/json";
                context.Response.Write(json);
            }

            public bool IsReusable
            {
                get { return true; }
            }
        }

    This code adds a little logic to check for an action query string and route the request to an optional JSON result method. To generate JSON, I'm using the same JSON.NET serializer (JsonConvert.SerializeObject) used in Web API to create the JSON response.

    WCF REST

        [ServiceContract(Namespace = "")]
        [AspNetCompatibilityRequirements(RequirementsMode = AspNetCompatibilityRequirementsMode.Allowed)]
        public class WcfService
        {
            [OperationContract]
            [WebGet]
            public Stream HelloWorld()
            {
                var data = Encoding.Unicode.GetBytes("Hello World " + DateTime.Now.ToString());
                var ms = new MemoryStream(data);

                // Add your operation implementation here
                return ms;
            }

            [OperationContract]
            [WebGet(ResponseFormat = WebMessageFormat.Json, BodyStyle = WebMessageBodyStyle.WrappedRequest)]
            public Person HelloWorldJson()
            {
                // Add your operation implementation here
                return new Person();
            }
        }

    For WCF REST, all I have to do is add a method with the Person result type.

    ASP.NET MVC

        public class MvcPerformanceController : Controller
        {
            //
            // GET: /MvcPerformance/
            public ActionResult Index()
            {
                return View();
            }

            public ActionResult HelloWorldCode()
            {
                return new ContentResult()
                {
                    Content = "Hello World. Time is: " + DateTime.Now.ToString()
                };
            }

            public JsonResult HelloWorldJson()
            {
                return Json(new Person(), JsonRequestBehavior.AllowGet);
            }
        }

    For MVC, all I have to do for a JSON response is return a JSON result. ASP.NET MVC internally uses JavaScriptSerializer.

    ASP.NET WebAPI
ASP.NET WebAPI

public class WebApiPerformanceController : ApiController
{
    [HttpGet]
    public HttpResponseMessage HelloWorldCode()
    {
        return new HttpResponseMessage()
        {
            Content = new StringContent("Hello World. Time is: " + DateTime.Now.ToString(),
                                        Encoding.UTF8, "text/plain")
        };
    }

    [HttpGet]
    public Person HelloWorldJson()
    {
        return new Person();
    }

    [HttpGet]
    public HttpResponseMessage HelloWorldJson2()
    {
        var response = new HttpResponseMessage(HttpStatusCode.OK);
        response.Content = new ObjectContent<Person>(new Person(),
            GlobalConfiguration.Configuration.Formatters.JsonFormatter);
        return response;
    }
}

Testing and Results

To run these data requests I used the following ab.exe commands:

REM JSON RESPONSES
ab.exe -n100000 -c20 http://localhost/aspnetperf/Handler.ashx?action=json > HandlerJson.txt
ab.exe -n100000 -c20 http://localhost/aspnetperf/MvcPerformance/HelloWorldJson > MvcJson.txt
ab.exe -n100000 -c20 http://localhost/aspnetperf/api/HelloWorldJson > WebApiJson.txt
ab.exe -n100000 -c20 http://localhost/AspNetPerf/WcfService.svc/HelloWorldJson > WcfJson.txt

The results from this test run are interesting in that the WebAPI test improved performance significantly over returning plain string content. Here are the results:

The performance of each technology drops a little bit, except for WebAPI, which is up quite a bit! From this test it appears that WebAPI actually performs significantly better returning a JSON response rather than a plain string response.

Snag with Apache Benchmark and 'Length Failures'

I ran into a little snag with Apache Benchmark, which was reporting failures for my Web API requests when serializing. As the graph shows, performance improved significantly with JSON results, from roughly 5580 to 6530 requests a second, which is a 15% improvement (while all others slowed down by 3-8%). However, I was skeptical at first because the WebAPI test reports showed errors on about 10% of the requests. Check out this report:

Notice the Failed Request count. What the hey? Is WebAPI failing on roughly 10% of requests when sending JSON? Turns out: No it's not! But it took some sleuthing to figure out why it reports these failures. At first I thought that Web API was failing, so to make sure I re-ran the test with Fiddler attached, running the ab.exe test through the proxy by using the -X switch:

ab.exe -n100 -c10 -X localhost:8888 http://localhost/aspnetperf/api/HelloWorldJson

which showed that indeed all requests were returning proper HTTP 200 results with full content. However, ab.exe was reporting the errors. After some closer inspection it turned out that the serialized dates vary in length, which alters the response length of the dynamic output. For example, these two results:

{"Id":10,"Name":"Rick","Entered":"2012-09-04T10:57:24.841926-10:00"}
{"Id":10,"Name":"Rick","Entered":"2012-09-04T10:57:24.8519262-10:00"}

differ by one digit in the fractional seconds, which results in 68 and 69 bytes respectively. The same URL produces responses of different lengths, which is what ab.exe reports as a 'Length' failure. I didn't notice it at first, but the same thing happens when running the ASHX handler with the JSON.NET result, since it uses the same serializer and the milliseconds vary there too. Moral: You can typically ignore Length failures in Apache Benchmark, and when in doubt check the actual output with Fiddler. Note that the other failure values are accurate though.

Another interesting Side Note: Perf drops over Time

As I was running these tests repeatedly I found that performance steadily dropped from a startup peak to a 10-15% lower stable level. IOW, with Web API I'd start out with around 6500 req/sec, and in subsequent runs it kept dropping until it would stabilize somewhere around 5900 req/sec, occasionally jumping lower. This is why I did the IISRESET and warm-up for individual tests. This is a little puzzling. Looking at Process Monitor while the tests are running, memory very quickly levels out, as do handles and threads, on the first test run. On subsequent runs everything stays stable, but the performance keeps going downwards. This applies to all the technologies - Handlers, Web Forms, MVC, Web API - and I'm curious whether others testing this see similar results. Doing an IISRESET then resets everything and performance starts off at peak again…
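Since throughput drifts like this between runs, it helps to be able to spot-check it quickly. The following C# harness is a minimal sketch (it is not the harness behind the numbers above, and it assumes .NET 4.5 for HttpClient and Task.Run). An in-process client like this won't reproduce ab.exe's exact numbers, but it's enough to watch the drift from run to run:

using System;
using System.Diagnostics;
using System.Net;
using System.Net.Http;
using System.Threading.Tasks;

class ThroughputCheck
{
    static void Main()
    {
        // Assumed endpoint - swap in any of the URLs from the batch file above
        const string url = "http://localhost/aspnetperf/api/HelloWorld";
        const int totalRequests = 10000;
        const int concurrency = 20;

        // Without this a console client is capped at 2 connections per host,
        // which would make the concurrency setting meaningless
        ServicePointManager.DefaultConnectionLimit = concurrency;

        using (var client = new HttpClient())
        {
            // Warm up first so startup cost doesn't skew the measurement
            client.GetStringAsync(url).Wait();

            var sw = Stopwatch.StartNew();
            var workers = new Task[concurrency];
            for (int w = 0; w < concurrency; w++)
            {
                workers[w] = Task.Run(async () =>
                {
                    // Each worker fires its share of the requests sequentially
                    for (int i = 0; i < totalRequests / concurrency; i++)
                        await client.GetStringAsync(url);
                });
            }
            Task.WaitAll(workers);
            sw.Stop();

            Console.WriteLine("{0} requests in {1:n1}s = {2:n0} req/sec",
                totalRequests, sw.Elapsed.TotalSeconds,
                totalRequests / sw.Elapsed.TotalSeconds);
        }
    }
}

Note the single shared HttpClient; creating one per request would exhaust sockets long before the run completes.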
Summary

As I stated at the outset, these were informal tests to satisfy my curiosity, not to prove that any technology is better or even faster than another. While there clearly are differences in performance, the differences (other than WCF REST, which was by far the slowest, and the raw handler, which was by far the fastest) are relatively minor, so there is no need to feel that any one technology is a runaway standout in raw performance. Choosing a technology is about more than pure performance; it's also about suitability for the job and ease of implementation. The strengths of each technology will more than make up for any minor performance difference we see in these tests.

However, to me it's important to get an occasional reality check and compare where new technologies are heading. Oftentimes old stuff that's been optimized and designed for a time of less horsepower can utterly blow the doors off newer tech, and simple checks like this let you compare. Luckily we're seeing that much of the new stuff performs well even in v1.0, which is great.

To me it was very interesting to see Web API perform relatively badly with plain string content, which originally led me to think that Web API might not be properly optimized just yet. For those that caught my tweets late last week regarding WebAPI's slow responses: that was with string content, which is in fact considerably slower. Luckily, where it counts, with serialized JSON and XML, WebAPI actually performs better. But I do wonder what would make generic string content slower than serialized code? This stresses another point: Don't take a single test as the final gospel, and don't extrapolate out from a single set of tests. Certainly Twitter can make you feel like a fool when you post something immediately that hasn't been fleshed out a little more <blush>. Egg on my face. As a result I ended up screwing around with this for a few hours today to compare different scenarios. Well worth the time… I hope you found this useful, if not for the results, maybe for the process of quickly testing a few requests for performance and charting out a comparison. Now onwards with more serious stuff…

Resources

Source Code on GitHub
Apache HTTP Server Project (ab.exe is part of the binary distribution)

© Rick Strahl, West Wind Technologies, 2005-2012
Posted in ASP.NET, Web API

    Read the article

  • Log message Request and Response in ASP.NET WebAPI

    - by Fredrik N
Logging both incoming and outgoing messages for services can be useful in many scenarios, such as debugging, tracing, inspection and helping customers with request problems. I have a customer that needs both incoming and outgoing messages to be logged. They use the information to spot strange behaviors and also to help their customers when they call in for help (by looking in the log they can see if a customer sends in data in a wrong or strange way).

Concerns

Most logging in applications is a cross-cutting concern and should not be a core concern for developers. Logging messages like this:

// GET api/values/5
public string Get(int id)
{
    //Cross-cutting concern
    Log(string.Format("Request: GET api/values/{0}", id));

    //Core concern
    var response = DoSomething();

    //Cross-cutting concern
    Log(string.Format("Response: GET api/values/{0}\r\n{1}", id, response));

    return response;
}

will only result in duplicated code and unnecessary concerns for developers to be aware of; if they miss adding the logging code, no logging will take place. Developers should focus on the core concern, not the cross-cutting concerns. By focusing only on the core concern, the above code will look like this:

// GET api/values/5
public string Get(int id)
{
    return DoSomething();
}

The logging should then be placed somewhere else so developers don't need to care about the cross-cutting concern.

Using a Message Handler for logging

There are different places where we could put the cross-cutting concern of logging messages when using WebAPI.
We can for example create a custom ApiController and override the ApiController's ExecutingAsync method, or add an ActionFilter, or use a Message Handler. The disadvantage of a custom ApiController is that we need to make sure we inherit from it; the disadvantage of an ActionFilter is that we need to add the filter to the controllers. Both will modify our ApiControllers. By using a Message Handler we don't need to make any changes to our ApiControllers, so the most suitable place to add our logging is a custom Message Handler. A Message Handler runs before the HttpControllerDispatcher (the part of the WebAPI pipeline that makes sure the right controller is selected and invoked). Note: You can read more about message handlers here; it will give you a good understanding of the WebAPI pipeline. To create a Message Handler we can inherit from the DelegatingHandler class and override the SendAsync method:

public class MessageHandler : DelegatingHandler
{
    protected override async Task<HttpResponseMessage> SendAsync(HttpRequestMessage request, CancellationToken cancellationToken)
    {
        return await base.SendAsync(request, cancellationToken);
    }
}

If we skip the call to base.SendAsync, our ApiController's methods will never be invoked, and neither will any later Message Handlers. Everything placed before base.SendAsync will be executed before the HttpControllerDispatcher (before WebAPI takes a look at the request to decide which controller and method to invoke); everything after base.SendAsync will be executed after our ApiController method has returned a response. So a Message Handler is a perfect place to add cross-cutting concerns such as logging. To get at the content of the messages within a Message Handler we can use the request argument of the SendAsync method. The request argument is of type HttpRequestMessage and has a Content property of type HttpContent. HttpContent has several methods that can be used to read the incoming message, such as ReadAsStreamAsync, ReadAsByteArrayAsync and ReadAsStringAsync. Something to be aware of is what happens when we read from the HttpContent. When we read from the HttpContent, we read from a stream; once it has been read, it can't be read again. So if we read from the stream before base.SendAsync, the following Message Handlers and the HttpControllerDispatcher can't read from it because it has already been consumed, and our ApiController methods will never be invoked. The only way to make sure we can do repeatable reads from the HttpContent is to copy the content into a buffer and then read from that buffer. This can be done by using the HttpContent's LoadIntoBufferAsync method.
If we make a call to the LoadIntoBufferAsync method before base.SendAsync, the incoming stream will be read into a byte array, and subsequent HttpContent read operations will read from that buffer, if it exists, instead of directly from the stream. There is one method on the HttpContent that will internally make a call to LoadIntoBufferAsync for us, and that is ReadAsByteArrayAsync. This is the method we will use to read the incoming and outgoing messages.

public abstract class MessageHandler : DelegatingHandler
{
    protected override async Task<HttpResponseMessage> SendAsync(HttpRequestMessage request, CancellationToken cancellationToken)
    {
        var requestMessage = await request.Content.ReadAsByteArrayAsync();

        var response = await base.SendAsync(request, cancellationToken);

        var responseMessage = await response.Content.ReadAsByteArrayAsync();

        return response;
    }
}

The above code reads the content of the incoming message, then calls SendAsync, and after that reads the content of the response message. The following code adds more logic, such as creating a correlation id to tie the request to the response, and creating the log entries:

public abstract class MessageHandler : DelegatingHandler
{
    protected override async Task<HttpResponseMessage> SendAsync(HttpRequestMessage request, CancellationToken cancellationToken)
    {
        var corrId = string.Format("{0}{1}", DateTime.Now.Ticks, Thread.CurrentThread.ManagedThreadId);
        var requestInfo = string.Format("{0} {1}", request.Method, request.RequestUri);

        var requestMessage = await request.Content.ReadAsByteArrayAsync();
        await IncommingMessageAsync(corrId, requestInfo, requestMessage);

        var response = await base.SendAsync(request, cancellationToken);

        var responseMessage = await response.Content.ReadAsByteArrayAsync();
        await OutgoingMessageAsync(corrId, requestInfo, responseMessage);

        return response;
    }

    protected abstract Task IncommingMessageAsync(string correlationId, string requestInfo, byte[] message);
    protected abstract Task OutgoingMessageAsync(string correlationId, string requestInfo, byte[] message);
}

public class MessageLoggingHandler : MessageHandler
{
    protected override async Task IncommingMessageAsync(string correlationId, string requestInfo, byte[] message)
    {
        await Task.Run(() =>
            Debug.WriteLine(string.Format("{0} - Request: {1}\r\n{2}",
                correlationId, requestInfo, Encoding.UTF8.GetString(message))));
    }

    protected override async Task OutgoingMessageAsync(string correlationId, string requestInfo, byte[] message)
    {
        await Task.Run(() =>
            Debug.WriteLine(string.Format("{0} - Response: {1}\r\n{2}",
                correlationId, requestInfo, Encoding.UTF8.GetString(message))));
    }
}
The code above will show the following in the Visual Studio output window when the "api/values" service (one standard controller added by the default WebAPI template) is requested with an HTTP GET:

6347483479959544375 - Request: GET http://localhost:3208/api/values
6347483479959544375 - Response: GET http://localhost:3208/api/values
["value1","value2"]

Register a Message Handler

To register a Message Handler we can use the Add method of GlobalConfiguration.Configuration.MessageHandlers, for example in Global.asax:

public class WebApiApplication : System.Web.HttpApplication
{
    protected void Application_Start()
    {
        GlobalConfiguration.Configuration.MessageHandlers.Add(new MessageLoggingHandler());
        ...
    }
}

Summary

By using a Message Handler we can easily remove cross-cutting concerns like logging from our controllers. You can also find the source code used in this blog post on ForkCan.com; feel free to make a fork or add comments, for example to make the code better. Feel free to follow me on twitter @fredrikn if you want to know when I write other blog posts.
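As a side note, the same registration works outside of IIS as well. The following is a minimal sketch assuming the separate Web API self-host package (which provides HttpSelfHostConfiguration and HttpSelfHostServer); since MessageHandlers lives on HttpConfiguration rather than on anything IIS-specific, the handler plugs into a self-hosted pipeline in exactly the same way:

using System;
using System.Web.Http;
using System.Web.Http.SelfHost;

class Program
{
    static void Main()
    {
        var config = new HttpSelfHostConfiguration("http://localhost:3208");
        config.Routes.MapHttpRoute("DefaultApi", "api/{controller}/{id}",
            new { id = RouteParameter.Optional });

        // Same handler, same pipeline position as in the Global.asax case
        config.MessageHandlers.Add(new MessageLoggingHandler());

        using (var server = new HttpSelfHostServer(config))
        {
            server.OpenAsync().Wait();
            Console.WriteLine("Listening on http://localhost:3208 - press Enter to quit.");
            Console.ReadLine();
        }
    }
}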

    Read the article

  • Create excel files with GemBox.Spreadsheet .NET component

    - by hajan
Generating Excel files from .NET code is not always an easy task, especially if you need to do some formatting or something very specific that requires extra coding. I've recently tried GemBox.Spreadsheet and I would like to share my experience with you. First of all, you can install the GemBox.Spreadsheet library from the Visual Studio 2010 Extension Manager by searching in the gallery: go to the Online Gallery tab (as in the picture below) and type GemBox in the search box at the top-right of the Extension Manager, so you will get the following result: Click Download on GemBox.Spreadsheet and you will be directed to the product website. Click on the marked link and you will get to the page with the component download link. Once you download it, install the MSI file. Open the installation folder and find the Bin folder. There you have GemBox.Spreadsheet.dll in three folders, one for each supported .NET Framework version. Now, let's move to Visual Studio:

1. Create a sample ASP.NET Web Application and give it a name.
2. Reference the GemBox.Spreadsheet.dll file in your project.

You don't need to search for the dll file on your disk; you can simply find it in the .NET tab of the 'Add Reference' window, where you have all three versions. I chose the version for the 4.0.30319 runtime. Next, I will retrieve data from my Pubs database. I'm using Entity Framework. Here is the code (read the comments in it):

//get data from pubs database, tables: authors, titleauthor, titles
pubsEntities context = new pubsEntities();
var authorTitles = (from a in context.authors
                    join tl in context.titleauthor on a.au_id equals tl.au_id
                    join t in context.titles on tl.title_id equals t.title_id
                    select new AuthorTitles
                    {
                        Name = a.au_fname,
                        Surname = a.au_lname,
                        Title = t.title,
                        Price = t.price,
                        PubDate = t.pubdate
                    }).ToList();

//using GemBox library now
ExcelFile myExcelFile = new ExcelFile();
ExcelWorksheet excWsheet = myExcelFile.Worksheets.Add("Hajan's worksheet");

excWsheet.Cells[0, 0].Value = "Pubs database Authors and Titles";
excWsheet.Cells[0, 0].Style.Borders.SetBorders(MultipleBorders.Bottom, System.Drawing.Color.Red, LineStyle.Thin);
excWsheet.Cells[0, 1].Style.Borders.SetBorders(MultipleBorders.Bottom, System.Drawing.Color.Red, LineStyle.Thin);

int numberOfColumns = 5; //the number of properties in AuthorTitles

//set the width of each column
for (int c = 0; c < numberOfColumns; c++)
{
    excWsheet.Columns[c].Width = 25 * 256;
}

//header row cells
excWsheet.Rows[2].Cells[0].Value = "Name";
excWsheet.Rows[2].Cells[1].Value = "Surname";
excWsheet.Rows[2].Cells[2].Value = "Title";
excWsheet.Rows[2].Cells[3].Value = "Price";
excWsheet.Rows[2].Cells[4].Value = "PubDate";

//bind authorTitles in the excel worksheet
int currentRow = 3;
foreach (AuthorTitles at in authorTitles)
{
    excWsheet.Rows[currentRow].Cells[0].Value = at.Name;
    excWsheet.Rows[currentRow].Cells[1].Value = at.Surname;
    excWsheet.Rows[currentRow].Cells[2].Value = at.Title;
    excWsheet.Rows[currentRow].Cells[3].Value = at.Price;
    excWsheet.Rows[currentRow].Cells[4].Value = at.PubDate;
    currentRow++;
}

//stylizing my excel file look
CellStyle style = new CellStyle(myExcelFile);
style.HorizontalAlignment = HorizontalAlignmentStyle.Left;
style.VerticalAlignment = VerticalAlignmentStyle.Center;
style.Font.Color = System.Drawing.Color.DarkRed;
style.WrapText = true;
style.Borders.SetBorders(MultipleBorders.Top
    | MultipleBorders.Left | MultipleBorders.Right
    | MultipleBorders.Bottom, System.Drawing.Color.Black,
    LineStyle.Thin);

//pay attention here: we apply the created style to the range given by
//(firstRow, firstColumn, lastRow, lastColumn) - in my example:
//firstRow = 3; firstColumn = 0; lastRow = authorTitles.Count + 2; lastColumn = numberOfColumns - 1
excWsheet.Cells.GetSubrangeAbsolute(3, 0, authorTitles.Count + 2, numberOfColumns - 1).Style = style;

//save my excel file
myExcelFile.SaveXls(Server.MapPath(".") + @"/myFile.xls");

The AuthorTitles class:

public class AuthorTitles
{
    public string Name { get; set; }
    public string Surname { get; set; }
    public string Title { get; set; }
    public decimal? Price { get; set; }
    public DateTime PubDate { get; set; }
}

The excel file will be generated in the root of your ASP.NET Web Application. The result is: There is a lot more you can do with this library. A good set of examples is available in the GemBox.Spreadsheet Samples Explorer application, which comes with the installation; you can find it by default in Start –> All Programs –> GemBox Software –> GemBox.Spreadsheet Samples Explorer. Hope this was useful for you. Best Regards, Hajan
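If you would rather offer the generated file as a browser download instead of writing it into the application root, the standard ASP.NET response APIs can handle that. The sketch below is an assumption on my part rather than part of the sample above: the button handler name and the App_Data path are made up, and the only GemBox call assumed is the SaveXls(string) overload already shown:

protected void btnDownload_Click(object sender, EventArgs e) // hypothetical handler
{
    // build myExcelFile exactly as shown above, then:
    string path = Server.MapPath("~/App_Data/myFile.xls"); // hypothetical location
    myExcelFile.SaveXls(path); // same overload used earlier

    // standard ASP.NET way to push a file to the browser as a download
    Response.Clear();
    Response.ContentType = "application/vnd.ms-excel";
    Response.AddHeader("Content-Disposition", "attachment; filename=myFile.xls");
    Response.TransmitFile(path);
    Response.End();
}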

    Read the article

  • CodePlex Daily Summary for Thursday, January 13, 2011

CodePlex Daily Summary for Thursday, January 13, 2011

Popular Releases

MVC Music Store: MVC Music Store v2.0: This is the 2.0 release of the MVC Music Store Tutorial. This tutorial is updated for ASP.NET MVC 3 and Entity Framework Code-First, and contains fixes and improvements based on feedback and common questions from previous releases. The main download, MvcMusicStore-v2.0.zip, contains everything you need to build the sample application, including a detailed tutorial document in PDF format and assets you will need to build the project, including images, a stylesheet, and a pre-populated databas...

Free Silverlight & WPF Chart Control - Visifire: Visifire SL and WPF Charts v3.6.7 GA Released: Hi, Today we are releasing Visifire 3.6.7 GA with the following feature: Inlines property has been implemented in Title. Also, this release contains fixes for the following bugs: In Column and Bar chart, DataPoint's label properties were not working as expected at real-time if marker enabled was set to true. 3D Column and Bar charts were not rendered properly if the AxisMinimum property was set on the x-axis. You can download Visifire v3.6.7 here. Cheers, Team Visifire

Fluent Validation for .NET: 2.0: Changes since 2.0 RC: Fix typo in the name of FallbackAwareResourceAccessorBuilder. Fix issue #7062 - allow validator selectors to work against nullable properties with overridden names. Fix error in German localization. Better support for client-side validation messages in MVC integration. All changes since 1.3: Allow custom MVC ModelValidators to be added to the FVModelValidatorProvider. Support resource provider for custom property validators through the new IResourceAccessorBuilder ...

EnhSim: EnhSim 2.3.2 ALPHA: This release supports WoW patch 4.03a at level 85. To use this release, you must have the Microsoft Visual C++ 2010 Redistributable Package installed. This can be downloaded from http://www.microsoft.com/downloads/en/details.aspx?FamilyID=A7B7A05E-6DE6-4D3A-A423-37BF0912DB84 To use the GUI you must have the .NET 4.0 Framework installed. This can be downloaded from http://www.microsoft.com/downloads/en/details.aspx?FamilyID=9cfb2d51-5ff4-4491-b0e5-b386f32c0992 - Quick update to ...

ASP.NET MVC Project Awesome, jQuery Ajax helpers (controls): 1.6: A rich set of helpers (controls) that you can use to build highly responsive and interactive Ajax-enabled Web applications. These helpers include Autocomplete, AjaxDropdown, Lookup, Confirm Dialog, Popup Form, Popup and Pager. New stuff: paging for the lookup; lookup with multiselect. Changes: the css classes used by the framework were renamed to be more standard; the lookup controller requires an item.ascx (no more ViewData["structure"]), and the LookupList action was renamed to Search; all the...

pwTools: pwTools: Changelog v1.0 base release

??????????: All-In-One Code Framework ??? 2011-01-12: 2011???????All-In-One Code Framework(??) 2011?1??????!! http://i3.codeplex.com/Project/Download/FileDownload.aspx?ProjectName=1code&DownloadId=128165 ?????release?,???????ASP.NET, AJAX, WinForm, Windows Shell????13?Sample Code。???,??????????sample code。 ?????: http://blog.csdn.net/sjb5201/archive/2011/01/13/6135037.aspx ??,??????MSDN????????????。 http://social.msdn.microsoft.com/Forums/zh-CN/codezhchs/threads ?????????????????,??Email ????

patterns & practices – Enterprise Library: Enterprise Library 5.0 - Extensibility Labs: This is a preview release of the Hands-on Labs to help you learn and practice different ways the Enterprise Library can be extended. Learning Map: Custom exception handler (estimated time to complete: 1 hr 15 mins); Custom logging trace listener (1 hr); Custom configuration source (registry-based) (30 mins). System requirements: Enterprise Library 5.0 / Unity 2.0 installed; SQL Express 2008 installed; Visual Studio 2010 Pro (or better) installed. Authors: Chris Tavares, Microsoft Corporation ...

Orchard Project: Orchard 1.0: Orchard Release Notes. Build: 1.0.20. Published: 1/12/2010. How to Install Orchard: To install the Orchard tech preview using Web PI, follow these instructions: http://www.orchardproject.net/docs/Installing-Orchard.ashx Web PI will detect your hardware environment and install the application. --OR-- Alternatively, to install the release manually, download the Orchard.Web.1.0.20.zip file. The zip contents are pre-built and ready-to-run. Simply extract the contents of the Orchard folder from ...

Umbraco CMS: Umbraco 4.6.1: The Umbraco 4.6.1 (codename JUNO) release contains many new features focusing on an improved installation experience, a number of robust developer features, and contains nearly 200 bug fixes since the 4.5.2 release. Getting Started: A great place to start is with our Getting Started Guide: http://umbraco.codeplex.com/Project/Download/FileDownload.aspx?DownloadId=197051 Make sure to check the free foundation videos on how to get started building Umbraco sites. They're ...

Google URL Shortener API for .NET: Google URL Shortener API v1: According to the following specification: http://code.google.com/apis/urlshortener/v1/reference.html

StyleCop for ReSharper: StyleCop for ReSharper 5.1.14986.000: A considerable amount of work has gone into this release. Features: Huge focus on performance around the violation scanning subsystem: caching added to reduce IO operations around reading and merging of settings files; caching added to reduce creation of expensive objects. Users should notice a considerable perf boost and a decrease in memory usage. Bug Fixes: StyleCop's new ObjectBasedEnvironment object does not resolve the StyleCop installation path, thus it does not return the ...

SQL Monitor - tracking sql server activities: SQL Monitor 3.1 beta 1: 1. support alert message template 2. dynamic toolbar commands depending on functionality 3. fixed some bugs 4. refactored part of the code, now more stable and more clean up

Facebook C# SDK: 4.2.1: - Authentication bug fixes - Updated Json.Net to version 4.0.0 - BREAKING CHANGE: Removed cookieSupport config setting, now automatic. This download is also available on NuGet: Facebook, FacebookWeb, FacebookWebMvc

Hawkeye - The .Net Runtime Object Editor: Hawkeye 1.2.5: In case you are running an x86 Windows and you installed Release 1.2.4, you should consider upgrading to this release (1.2.5), as it appears Hawkeye is broken on x86 OS. I apologize for the inconvenience, but it appears Hawkeye 1.2.4 (and probably previous versions) doesn't run on x86 Windows (See issue http://hawkeye.codeplex.com/workitem/7791). This maintenance release fixes this broken behavior. This release comes in two flavors: Hawkeye.125.N2 is the standard .NET 2 build, was compile...

Phalanger - The PHP Language Compiler for the .NET Framework: 2.0 (January 2011): Another release build for daily use; it contains many new features, enhanced compatibility with the latest PHP open-source applications and several issue fixes. To improve the performance of your application using MySQL, please use the Managed MySQL Extension for Phalanger. Changes made within this release include the following: New features available only in Phalanger. Full support of Multi-Script-Assemblies was implemented; you can build your application into several DLLs now. Deploy them separately t...

AutoLoL: AutoLoL v1.5.3: A message will be displayed when there's an update available. Shows a list of recent mastery files in the Editor Tab (requested by quite a few people). Updater: Update information is now scrollable. Added a button to launch AutoLoL after updating is finished. Updated the UI to match that of AutoLoL. Fix: Detects and resolves 'Read Only' state on Version.xml

TweetSharp: TweetSharp v2.0.0.0 - Preview 7: Documentation for this release may be found at http://tweetsharp.codeplex.com/wikipage?title=UserGuide&referringTitle=Documentation. Note: This code is currently preview quality. Preview 7 Changes: Fixes the regression issue in OAuth from Preview 6. Preview 6 Changes: Maintenance release with user reported fixes. Preview 5 Changes: Maintenance release with user reported fixes. Third Party Library Versions: Hammock v1.0.6: http://hammock.codeplex.com Json.NET 3.5 Release 8: http://json.codeplex.com

Extended WPF Toolkit: Extended WPF Toolkit - 1.3.0: What's in the 1.3.0 Release? BusyIndicator, ButtonSpinner, ChildWindow, ColorPicker - Updated (Breaking Changes), DateTimeUpDown - New Control, Magnifier - New Control, MaskedTextBox - New Control, MessageBox, NumericUpDown, RichTextBox, RichTextBoxFormatBar - Updated, .NET 3.5 binaries and Source. Please note: The Extended WPF Toolkit 3.5 is dependent on .NET Framework 3.5 and the WPFToolkit. You must install .NET Framework 3.5 and the WPFToolkit in order to use any features in the To...

Ionics Isapi Rewrite Filter: 2.1 latest stable: V2.1 is stable, and is in maintenance mode. This is v2.1.1.25. It is a bug-fix release. There are no new features. 28629 29172 28722 27626 28074 29164 27659 27900 many documentation updates and fixes, proper x64 build environment. This release includes x64 binaries in zip form, but no x64 MSI file. You'll have to manually install x64 servers, following the instructions in the documentation.

New Projects

4chan: Project for educational purposes

AE.Net.Mail - POP/IMAP Client: C# POP/IMAP client library

Almathy: Community application for data sharing based on the XMPP protocol. Instant messaging, mail, data exchange, shared work. Developed in C#, using Windows Forms.

AMK - Associação Metropolitana de Kendo: Project for the 2011 version of the Associação Metropolitana de Kendo website. The site will include: athlete and event registration; registration management with the CBK and graduation history; training calendar. It will be developed using .NET, MVC2, Entity Framework, SQL Server, jQuery

Azke: New: Azke is a portal developed with ASP.NET MVC and MySQL. Old: Azke is a portal developed with ASP.net and MySQL.

BuildScreen: A standalone Windows application that displays project build statuses from TeamCity. It can be used on a large screen for the whole development team to watch their build statuses as well as on a developer machine.

CardOnline: CardOnline GameLobby Game

CAudioEndpointVolume: The CAudioEndpointVolume implements the IAudioEndpointVolume Interface to control the master volume in Visual Basic 6.0 for Windows Vista and later operating systems.

Cloudy - Online Storage Library: The goal of Cloudy is to be an online storage library to enable access to the most common storage services: DropBox, Skydrive, Google Docs, Azure Blob Storage and Amazon S3. It'll work in .NET 3.5 and up, Silverlight and Windows Phone 7 (WP7).

ContentManager: Content Manager for SAP Kpro exports the data pool of a content-server by PHIOS-Lists (txt-files) into local binary files. Afterwards these files can be imported into a SAP content-repository. The whole list of the PHIOS Keys can also be downloaded and split into practical units.

CSWFKrad: private Framework

Diego: Diego

doomhjx_javalib: this is my lib for the java project.

Eve Planetary Interaction: Eve Planetary Interaction

EventWall: EventWall allows you to show related blogposts and tweets on the big screen. Just configure the hashtag and blog RSS feeds and you're done.

Google URL Shortener API for .NET: Google URL Shortener API for .NET is a wrapper for the Google Project below: http://code.google.com/apis/urlshortener/v1/reference.html With Google URL Shortener API, you may shorten urls and get some analytics information about them. It's developed in C# 4.0 and VS2010.

HtmlDeploy: A project to compile asp's Master page into pure html files. The idea is to use jQuery templating to do all the "if" and "for" when it comes to generating html output.

Inventory Management System: Inventory Management System is a Silverlight-based system. It aims to manage and control the input raw material activities and output of finished goods of a small inventory.

koplamp kapot: demo project for Azure

MSAccess SVN: Access SVN adds to Microsoft Access (MS Access) support for SVN Source control

MultiConvert: console-based utility primarily to convert Solid Edge files into data exchange formats like *.stp and *.dxf. It's using the API of the original software and can't be run alone. The goal of this project is creating routines with desired pre-instructions for batch conversions

NGN Image Getter: Nationalgeographic has really stunning photos! They allow downloading them via the website. This program automates that process.

OOD CRM: Project CRM for Object Oriented Development final exams

Open NOS Client: Open NOS Rest Client

openEHR.NET: openEHR.NET is a C# implementation of openEHR (http://openehr.org) Reference Model (RM) and Archetype Model (AM) specifications (Release 1.0.1). It allows you to build openEHR applications by composing RM objects, validating against AM objects and serialising to/from XML.

OpenGL Flow Designer: OpenGL Flow Designer allows writing pseudo-C code to build an OpenGL pipeline and preview results immediately.

Orchard Translation Manager: An Orchard module that aims at facilitating the creation and management of translations of the Orchard CMS and its extensions.

pwTools: A set of tools for viewing/editing some perfect world client/server files

ServerStatusMobile: Server availability monitoring application for windows mobile 6.5.x running on WVGA (480x800) devices. Supports autorun, vibration & notification when a server becomes unavailable, text log files for each server. .NET Compact Framework 3.5 required for running this application

Simple Silverlight Multiple File Uploader: The project allows you to achieve multiple file uploading in Silverlight without complex, complicated, or third-party applications. In just two core classes, it's very simple to understand, use, and extend.

Snos: Snos project: a SNES linker with multi support

T-4 Templates for ASP.NET Web Form Databound Control Friendly Logical Layers: This open source project includes a set of T-4 templates to enable you to build logical layers (i.e. DAL/BLL) with just a few clicks! The logical layers implemented here are based on Entity Framework 4.0, are ASP.NET Web Form Data Bound control friendly and fully unit testable.

The Media Store: The Media Store

UM: source control for the um project

urBook - Your Address Book Manager: urBook is an Address Book Manager that can merge all your contacts on all web services you use with your phone contacts to have one consistent Address Book for all your contacts and to update back your web services contacts with the one consistent address book

Windows Phone Certificate Installer: Helps install Trusted Root Certificates on Windows Phone 7 to enable SSL requests. Intended to allow developers to use localhost web servers during development without requiring the purchase of an SSL certificate. Especially helpful for ws-trust secured web service development.

    Read the article

  • Metro: Using Templates

    - by Stephen.Walther
The goal of this blog post is to describe how templates work in the WinJS library. In particular, you learn how to use a template to display both a single item and an array of items. You also learn how to load a template from an external file.

Why use Templates?

Imagine that you want to display a list of products in a page. The following code is bad:

var products = [
    { name: "Tesla", price: 80000 },
    { name: "VW Rabbit", price: 200 },
    { name: "BMW", price: 60000 }
];

var productsHTML = "";
for (var i = 0; i < products.length; i++) {
    productsHTML += "<h1>Product Details</h1>"
        + "<div>Product Name: " + products[i].name + "</div>"
        + "<div>Product Price: " + products[i].price + "</div>";
}
document.getElementById("productContainer").innerHTML = productsHTML;

In the code above, an array of products is displayed by creating a for..next loop which loops through each element in the array. A string which represents a list of products is built through concatenation. The code above is a designer's nightmare. You cannot modify the appearance of the list of products without modifying the JavaScript code. A much better approach is to use a template like this:

<div id="productTemplate">
    <h1>Product Details</h1>
    <div>
        Product Name: <span data-win-bind="innerText:name"></span>
    </div>
    <div>
        Product Price: <span data-win-bind="innerText:price"></span>
    </div>
</div>

A template is simply a fragment of HTML that contains placeholders. Instead of displaying a list of products by concatenating together a string, you can render a template for each product.

Creating a Simple Template

Let's start by using a template to render a single product. The following HTML page contains a template and a placeholder for rendering the template:

<!DOCTYPE html>
<html>
<head>
    <meta charset="utf-8">
    <title>Application1</title>

    <!-- WinJS references -->
    <link href="//Microsoft.WinJS.0.6/css/ui-dark.css" rel="stylesheet">
    <script src="//Microsoft.WinJS.0.6/js/base.js"></script>
    <script src="//Microsoft.WinJS.0.6/js/ui.js"></script>

    <!-- Application1 references -->
    <link href="/css/default.css" rel="stylesheet">
    <script src="/js/default.js"></script>
</head>
<body>

    <!-- Product Template -->
    <div id="productTemplate">
        <h1>Product Details</h1>
        <div>
            Product Name: <span data-win-bind="innerText:name"></span>
        </div>
        <div>
            Product Price: <span data-win-bind="innerText:price"></span>
        </div>
    </div>

    <!-- Place where Product Template is Rendered -->
    <div id="productContainer"></div>

</body>
</html>

In the page above, the template is defined in a DIV element with the id productTemplate. The contents of the productTemplate are not displayed when the page is opened in the browser. The contents of a template are automatically hidden when you convert the productTemplate into a template in your JavaScript code. Notice that the template uses data-win-bind attributes to display the product name and price properties. You can use both data-win-bind and data-win-bindsource attributes within a template. To learn more about these attributes, see my earlier blog post on WinJS data binding: http://stephenwalther.com/blog/archive/2012/02/26/windows-web-applications-declarative-data-binding.aspx The page above also includes a DIV element named productContainer. The rendered template is added to this element.
Here's the code for the default.js script which creates and renders the template:

(function () {
    "use strict";

    var app = WinJS.Application;

    app.onactivated = function (eventObject) {
        if (eventObject.detail.kind === Windows.ApplicationModel.Activation.ActivationKind.launch) {
            var product = { name: "Tesla", price: 80000 };
            var productTemplate = new WinJS.Binding.Template(document.getElementById("productTemplate"));
            productTemplate.render(product, document.getElementById("productContainer"));
        }
    };

    app.start();
})();

In the code above, a single product object is created with the following line of code:

var product = { name: "Tesla", price: 80000 };

Next, the productTemplate element from the page is converted into an actual WinJS template with the following line of code:

var productTemplate = new WinJS.Binding.Template(document.getElementById("productTemplate"));

The template is rendered to the templateContainer element with the following line of code:

productTemplate.render(product, document.getElementById("productContainer"));

The result of this work is that the product details are displayed: Notice that you do not need to call WinJS.Binding.processAll(). The Template render() method takes care of the binding for you.

Displaying an Array in a Template

If you want to display an array of products using a template then you simply need to create a for..next loop and iterate through the array, calling the Template render() method for each element.

(function () {
    "use strict";

    var app = WinJS.Application;

    app.onactivated = function (eventObject) {
        if (eventObject.detail.kind === Windows.ApplicationModel.Activation.ActivationKind.launch) {
            var products = [
                { name: "Tesla", price: 80000 },
                { name: "VW Rabbit", price: 200 },
                { name: "BMW", price: 60000 }
            ];

            var productTemplate = new WinJS.Binding.Template(document.getElementById("productTemplate"));
            var productContainer = document.getElementById("productContainer");

            var i, product;
            for (i = 0; i < products.length; i++) {
                product = products[i];
                productTemplate.render(product, productContainer);
            }
        }
    };

    app.start();
})();

After each product in the array is rendered with the template, the result is appended to the productContainer element. No changes need to be made to the HTML page discussed in the previous section to display an array of products instead of a single product. The same product template can be used in both scenarios.

Rendering an HTML TABLE with a Template

When using the WinJS library, you create a template by creating an HTML element in your page. One drawback to this approach of creating templates is that your templates are part of your HTML page. In order for your HTML page to validate, the HTML within your templates must also validate. This means, for example, that you cannot enclose a single HTML table row within a template. The following HTML is invalid because you cannot place a TR element directly within the body of an HTML document:

<!-- Product Template -->
<tr>
    <td data-win-bind="innerText:name"></td>
    <td data-win-bind="innerText:price"></td>
</tr>

This template won't validate because, in a valid HTML5 document, a TR element must appear within a THEAD or TBODY element. Instead, you must create the entire TABLE element in the template.
The following HTML page illustrates how you can create a template which contains a TR element:

<!DOCTYPE html>
<html>
<head>
    <meta charset="utf-8">
    <title>Application1</title>

    <!-- WinJS references -->
    <link href="//Microsoft.WinJS.0.6/css/ui-dark.css" rel="stylesheet">
    <script src="//Microsoft.WinJS.0.6/js/base.js"></script>
    <script src="//Microsoft.WinJS.0.6/js/ui.js"></script>

    <!-- Application1 references -->
    <link href="/css/default.css" rel="stylesheet">
    <script src="/js/default.js"></script>
</head>
<body>

    <!-- Product Template -->
    <div id="productTemplate">
        <table>
            <tbody>
                <tr>
                    <td data-win-bind="innerText:name"></td>
                    <td data-win-bind="innerText:price"></td>
                </tr>
            </tbody>
        </table>
    </div>

    <!-- Place where Product Template is Rendered -->
    <table>
        <thead>
            <tr>
                <th>Name</th><th>Price</th>
            </tr>
        </thead>
        <tbody id="productContainer">
        </tbody>
    </table>

</body>
</html>

In the HTML page above, the product template includes TABLE and TBODY elements:

<!-- Product Template -->
<div id="productTemplate">
    <table>
        <tbody>
            <tr>
                <td data-win-bind="innerText:name"></td>
                <td data-win-bind="innerText:price"></td>
            </tr>
        </tbody>
    </table>
</div>

We discard these elements when we render the template. The only reason that we include the TABLE and TBODY elements in the template is to make the HTML page validate as valid HTML5 markup. Notice that the productContainer (the target of the template) in the page above is a TBODY element. We want to add the rows rendered by the template to the TBODY element in the page. The productTemplate is rendered in the default.js file:

(function () {
    "use strict";

    var app = WinJS.Application;

    app.onactivated = function (eventObject) {
        if (eventObject.detail.kind === Windows.ApplicationModel.Activation.ActivationKind.launch) {
            var products = [
                { name: "Tesla", price: 80000 },
                { name: "VW Rabbit", price: 200 },
                { name: "BMW", price: 60000 }
            ];

            var productTemplate = new WinJS.Binding.Template(document.getElementById("productTemplate"));
            var productContainer = document.getElementById("productContainer");

            var i, product, row;
            for (i = 0; i < products.length; i++) {
                product = products[i];
                productTemplate.render(product).then(function (result) {
                    row = WinJS.Utilities.query("tr", result).get(0);
                    productContainer.appendChild(row);
                });
            }
        }
    };

    app.start();
})();

When the product template is rendered, the TR element is extracted from the rendered template by using the WinJS.Utilities.query() method. Next, only the TR element is added to the productContainer:

productTemplate.render(product).then(function (result) {
    row = WinJS.Utilities.query("tr", result).get(0);
    productContainer.appendChild(row);
});

I discuss the WinJS.Utilities.query() method in depth in a previous blog entry: http://stephenwalther.com/blog/archive/2012/02/23/windows-web-applications-query-selectors.aspx When everything gets rendered, the products are displayed in an HTML table. You can see the actual HTML rendered by looking at the Visual Studio DOM Explorer window.

Loading an External Template

Instead of embedding a template in an HTML page, you can place your template in an external HTML file. It makes sense to create a template in an external file when you need to use the same template in multiple pages. For example, you might need to use the same product template in multiple pages in your application. The following HTML page does not contain a template.
It only contains a container that will act as a target for the rendered template:

<!DOCTYPE html>
<html>
<head>
    <meta charset="utf-8">
    <title>Application1</title>

    <!-- WinJS references -->
    <link href="//Microsoft.WinJS.0.6/css/ui-dark.css" rel="stylesheet">
    <script src="//Microsoft.WinJS.0.6/js/base.js"></script>
    <script src="//Microsoft.WinJS.0.6/js/ui.js"></script>

    <!-- Application1 references -->
    <link href="/css/default.css" rel="stylesheet">
    <script src="/js/default.js"></script>
</head>
<body>

    <!-- Place where Product Template is Rendered -->
    <div id="productContainer"></div>

</body>
</html>

The template is contained in a separate file located at the path /templates/productTemplate.html. Here's the contents of the productTemplate.html file:

<!-- Product Template -->
<div id="productTemplate">
    <h1>Product Details</h1>
    <div>
        Product Name: <span data-win-bind="innerText:name"></span>
    </div>
    <div>
        Product Price: <span data-win-bind="innerText:price"></span>
    </div>
</div>

Notice that the template file only contains the template and not the standard opening and closing HTML elements. It is an HTML fragment. If you prefer, you can include all of the standard opening and closing HTML elements in your external template – these elements get stripped away automatically:

<html>
<head><title>product template</title></head>
<body>
    <!-- Product Template -->
    <div id="productTemplate">
        <h1>Product Details</h1>
        <div>
            Product Name: <span data-win-bind="innerText:name"></span>
        </div>
        <div>
            Product Price: <span data-win-bind="innerText:price"></span>
        </div>
    </div>
</body>
</html>

Either approach – using a fragment or using a full HTML document – works fine. Finally, the following default.js file loads the external template, renders the template for each product, and appends the result to the product container:

(function () {
    "use strict";

    var app = WinJS.Application;

    app.onactivated = function (eventObject) {
        if (eventObject.detail.kind === Windows.ApplicationModel.Activation.ActivationKind.launch) {
            var products = [
                { name: "Tesla", price: 80000 },
                { name: "VW Rabbit", price: 200 },
                { name: "BMW", price: 60000 }
            ];

            var productTemplate = new WinJS.Binding.Template(null, { href: "/templates/productTemplate.html" });
            var productContainer = document.getElementById("productContainer");

            var i, product;
            for (i = 0; i < products.length; i++) {
                product = products[i];
                productTemplate.render(product, productContainer);
            }
        }
    };

    app.start();
})();

The path to the external template is passed to the constructor for the Template class as one of the options:

var productTemplate = new WinJS.Binding.Template(null, { href: "/templates/productTemplate.html" });

When a template is contained in a page then you use the first parameter of the WinJS.Binding.Template constructor to represent the template – instead of null, you pass the element which contains the template. When a template is located in an external file, you pass the href for the file as part of the second parameter for the WinJS.Binding.Template constructor.

Summary

The goal of this blog entry was to describe how you can use WinJS templates to render either a single item or an array of items to a page. We also explored two advanced topics. You learned how to render an HTML table by extracting the TR element from a template. You also learned how to place a template in an external file.

    Read the article

  • CodePlex Daily Summary for Tuesday, March 27, 2012

    CodePlex Daily Summary for Tuesday, March 27, 2012Popular ReleasesHarness: Harness 2.0.2: change to .NET Framework Client Profile bug fix the download dialog auto answer. bug fix setFocus command. add "SendKeys" command. remove "closeAll" command. minor bugs fixed.BugNET Issue Tracker: BugNET 0.9.161: Below is a list of fixes in this release. Bug BGN-2092 - Link in Email "visit your profile" not functional BGN-2083 - Manager of bugnet can not edit project when it is not public BGN-2080 - clicking on a link in the project summary causes error (0.9.152.0) BGN-2070 - Missing Functionality On Feed.aspx BGN-2069 - Calendar View does not work BGN-2068 - Time tracking totals not ok BGN-2067 - Issues List Page Size Bug: Index was out of range. Must be non-negative and less than the si...YAF.NET (aka Yet Another Forum.NET): v1.9.6.1 RTW: v1.9.6.1 FINAL is .NET v4.0 ONLY v1.9.6.1 has: Performance Improvements .NET v4.0 improvements Improved FaceBook Integration More complete change list and discussion here: http://forum.yetanotherforum.net/yaf_postst14201_v1-9-6-1-RTW-Dated--3-26-2012.aspxQuick Performance Monitor: Version 1.8.1: Added option to set main window to be 'Always On Top'. Use context (right-click) menu on graph to toggle.Asp.NET Url Router: v1.0: build for .net 2.0 and .net 4.0SQLinq - use LINQ to generate Ad-Hoc Sql Queries: SQLinq v1.1: Nuget Package:http://nuget.org/packages/sqlinq Install SQLinq via Nuget Change Log:Fixed "SELECT *" bug when no selector is specified Added ".Take(int)" and ".Skip(int)" methods to support paging Added ability to specify "ORDER BY"DbViewSharp: Sql Compact Edition plugins: The SQL CE plugins are new assemblies written to allow DbViewSharp to work with SQL Compact Edition databases. Some features available for Sql Server databases are unavailable because of restrictions in the Compact Edition engine. However there are plans to add different new features as compensation for this. See the Sql CE Plugin page for more details.TileSet Map Editor: Map Creator: can add maps/ layers can use only 1 tileset for now Have Save/Load Logics... added Fill Copy and Paste working towards better code and more optionsBagammon pc player: Baggamon pc player v.1.3: This a source code of a project "tool-game" Bagammon pc player. It has bug. Please do not fix them. Thank you. For your information : "If you want to use it buy it. Send an email."openSourceC.Daylife: Release v1.0a: This is a minor bug fix release with some minor internal refactoring as well. The Documentation page has some code samples that show how to use the library. If you discover any issues with this release, please check the existing Discussions and Issues to see if the issue has already been reported, and if not, create a new discussion with the details of the issue.menu4web: menu4web 0.0.3: menu4web 0.0.3Windawesome: Windawesome v1.4.0 x86: Added a SeparatorWidget. Implemented some xmonad-like functionality for multiple-monitors - see SwapCurrentWorkspaceWith, SwitchToNextMonitor and SwitchToPreviousMonitor. Thanks to mkocubinski for the idea and some of the implementation. Implemented AddBarToWorkspace and RemoveBarFromWorkspace. Small performance improvements. Any issues/recommendations/requests for future versions? This is the 32-bit version of the release. If you use a 32-bit Windows, this is the release you should u...Navigation for ASP.NET Web Forms: Navigation 1.4: Navigation for ASP.NET Web Forms manages movement and data passing between ASPX pages in a unit-testable manner. 
There is no client-side logic, so it works in all browsers, and no server-side cache, so it works with the browser back button. Comprehensive documentation and sample code can be found under the Documentation tab (Make sure to unblock all zip files prior to extraction) New - Added default State NavigationData. Supports strongly typed values and routing defaults New - Added mobi...Afrihost Usage Monitoring Gadget: Afrihost Gadget 1.2.0: This is the stable current download: Changes: Added support for Uncapped accounts. Added support for IS Uncapped Accounts.ArcGIS Editor for OpenStreetMap: ArcGIS Editor for OSM 2.0 Final: This release installs both the ArcGIS Editor for OSM Server Component and/or ArcGIS Editor for OSM Desktop components. The Desktop tools allow you to download data from the OpenStreetMap servers and store it locally in a geodatabase. You can then use the familiar editing environment of ArcGIS Desktop to create, modify, or delete data. Once you are done editing, you can post back the edit changes to OSM to make them available to all OSM users. The Server Component allows you to quickly create...Craig's Utility Library: Craig's Utility Library 3.1: This update adds about 60 new extension methods, a couple of new classes, and a number of fixes including: Additions Added DateSpan class Added GenericDelimited class Random additions Added static thread friendly version of Random.Next called ThreadSafeNext. AOP Manager additions Added Destroy function to AOPManager (clears out all data so system can be recreated. Really only useful for testing...) ORM additions Added PagedCommand and PageCount functions to ObjectBaseClass (same as M...DotSpatial: DotSpatial 1.1: This is a Minor Release. See the changes in the issue tracker. Minimal -- includes DotSpatial core and essential extensions Extended -- includes debugging symbols and additional extensions Just want to run the software? End user (non-programmer) version available branded as MapWindow Want to add your own feature? Develop a plugin, using the template and contribute to the extension feed (you can also write extensions that you distribute in other ways). Components are available as NuGet pa...Microsoft All-In-One Code Framework - a centralized code sample library: C++, .NET Coding Guideline: Microsoft All-In-One Code Framework Coding Guideline This document describes the coding style guideline for native C++ and .NET (C# and VB.NET) programming used by the Microsoft All-In-One Code Framework project team.WebDAV for WHS: Version 1.0.67: - Added: Check whether the Remote Web Access is turned on or not; - Added: Check for Add-In updates;Phalanger - The PHP Language Compiler for the .NET Framework: 3.0 (March 2012) for .NET 4.0: March release of Phalanger 3.0 significantly enhances performance, adds new features and fixes many issues. See following for the list of main improvements: New features: Phalanger Tools installable for Visual Studio 2011 Beta "filter" extension with several most used filters implemented DomDocument HTML parser, loadHTML() method mail() PHP compatible function PHP 5.4 T_CALLABLE token PHP 5.4 "callable" type hint PCRE: UTF32 characters in range support configuration supports <c...New Projects(MVC4) Character Creation: A simple web site to manage your Avalon CharactersAmfSample: Sample projectBismillah Quran Reader for Wp7: Bismillah Quran Reader is an application for reading Quran translations in WP7. Translations can be read in various languages. 
Also recitations can be listened to.BlogEngine Mvc: This is an MVC version of BlogEngine.net. Project Description Our plan is to convert the whole BlogEngine.NET into an MVC application by the end of June 2012. It's developed in C# ASP.NET MVC3.Db7: Db7EF4.3 Code First and Migration Sample: EF4.3 Code First and Migration Sampleemoji for windows phone: This project is a Windows Phone 7 emoji library.Excel Document Merger: Excel Document Merger is a utility for combining multiple Excel workbooks and worksheets into a single workbook.exceladdin: exceladdinfastBinaryJSON: Binary JSON serializer based on fastJSONFontographer: A metro style WPF app to demonstrate the capabilities of the fonts on the user's systemGonte Web Desktop: Another web desktop using the ExtJs javascript frameworkIndoor Cricket Stats: Indoor Cricket StatsKernel32 C# wrapper: Kernel32.dll C# wrapper. Mostly done for threading, pipes, mutexes and other stuff. Not all methods implemented.Live for Desktop: A simple app that lets you browse your Live account from a web browser integrated in the software. Future versions will also include a custom interface and a Metro style look.LogoScriptIDE: IDE for LogoScript, a Logo- and C-like scripting languageMcCloud Service Framework: Monte Carlo Cloud Service Framework (McCloud) provides a generic service implementation of the Monte Carlo method, based on Microsoft Windows Azure, to solve a wide range of scientific and engineering problems.NetView Control for Microsoft Access: A native control for Microsoft Access forms to display and interact with non-hierarchical data.Polygon: Polygon is a UI composition framework for ASP.NET Web Forms. It can be used for third-party plugin extensibility of ASP.NET Web Forms applications. Though it's developed in C#, plugins can also be written in VB.Programmeerproject-LambdaOffice: Architectural project. Takes input from the user and prints it to various file formats such as .docx and .pdfQuick Job Seeker: Final project of computer scienceSchool Education Management: Project Description School Management System helps schools in managing students' data. It is targeted for colleges in the Philippines. It is developed in ASP.NET MVC3 and uses SQL Server as the database. The system is divided into several modules: 1. Registrar Module - used by the Registrar. 2. Scheduling Module - used by Deans for creating the course offerings schedule 3. Cashiering Module - used by the Accounting Department 4. Grading Entry Module - used by teachers for encoding grade...Sharepoint Carousel: Sharepoint Carousel\Slider is a webpart that allows you to have a carousel that contains an image with a link below it.
It is fully customisable, from styles to the actual JavaScript that generates the slider data. It has currently only been tested with SharePoint 2010. This carousel\slider for SharePoint is built on JCarousel http://sorgalla.com/jcarousel/.Sieena Dashboard: Metro UI DashboardSimple Redirect Module for DotNetNuke: This module allows content editors in DotNetNuke to have a simple and easy way to properly redirect incoming URLs that are incorrectly indexed by search engines.software de entrenador 2.0: software de entrenador is a program designed for bodybuilding trainers and for people whose goal is to improve their training and who want to keep a diary-style log of it.Sqlite Loader: Tool to import/export data to an Sqlite database using CSV and XML, with a GUI written in C#TestProject_Mercurial: Test project with MercurialTestProject_TFS: The test project with TFSThinkPHP - an open source PHP development framework: A framework for fast and simple PHP web application development.Twesh Ajax: TweshAjax is a client-side JavaScript library for making asynchronous calls to the server.UserProfilePropertiesSync: This utility allows you to synchronize User Profile properties between different SharePoint 2010 environments.VRacer: Vektor Car racing on windows phone.Workflow Foundation State Machine Service: This is a sample project for a Workflow Foundation 4 state machine exposed as a WCF serviceWPZilla: Bugzilla client for Windows Phone 7.1 and up.XNA GPU Particles Tool: This is a tool to help create particle effects based on the sample shaders provided in the XNA education catalogue. View the changes to the parameters in real time.Y.Music: Y.Music - a music player application built on WPF/C#/.NET, using the NAudio and Fluent Ribbon libraries.

    Read the article

  • Metro Walkthrough: Creating a Task List with a ListView and IndexedDB

    - by Stephen.Walther
    The goal of this blog entry is to describe how you can work with data in a Metro style application written with JavaScript. In particular, we create a super simple Task List application which enables you to create and delete tasks. Here’s a video which demonstrates how the Task List application works: In order to build this application, I had to take advantage of several features of the WinJS library and technologies including: IndexedDB – The Task List application stores data in an IndexedDB database. HTML5 Form Validation – The Task List application uses HTML5 validation to ensure that a required field has a value. ListView Control – The Task List application displays the tasks retrieved from the IndexedDB database in a WinJS ListView control. Creating the IndexedDB Database The Task List application stores all of its data in an IndexedDB database named TasksDB. This database is opened/created with the following code: var db; var req = window.msIndexedDB.open("TasksDB", 1); req.onerror = function () { console.log("Could not open database"); }; req.onupgradeneeded = function (evt) { var newDB = evt.target.result; newDB.createObjectStore("tasks", { keyPath: "id", autoIncrement:true }); }; The msIndexedDB.open() method accepts two parameters: the name of the database to open and the version of the database to open. If a database with a matching version already exists, then calling the msIndexedDB.open() method opens a connection to the existing database. If the database does not exist then the upgradeneeded event is raised. You handle the upgradeneeded event to create a new database. In the code above, the upgradeneeded event handler creates an object store named “tasks” (An object store roughly corresponds to a database table). When you add items to the tasks object store then each item gets an id property with an auto-incremented value automatically. The code above also includes an error event handler. If the IndexedDB database cannot be opened or created, for whatever reason, then an error message is written to the Visual Studio JavaScript Console window. Displaying a List of Tasks The TaskList application retrieves its list of tasks from the tasks object store, which we created above, and displays the list of tasks in a ListView control. Here is how the ListView control is declared: <div id="tasksListView" data-win-control="WinJS.UI.ListView" data-win-options="{ itemDataSource: TaskList.tasks.dataSource, itemTemplate: select('#taskTemplate'), tapBehavior: 'toggleSelect', selectionMode: 'multi', layout: { type: WinJS.UI.ListLayout } }"> </div> The ListView control is bound to the TaskList.tasks.dataSource data source. The TaskList.tasks.dataSource is created with the following code: // Create the data source var tasks = new WinJS.Binding.List(); // Open the database var db; var req = window.msIndexedDB.open("TasksDB", 1); req.onerror = function () { console.log("Could not open database"); }; req.onupgradeneeded = function (evt) { var newDB = evt.target.result; newDB.createObjectStore("tasks", { keyPath: "id", autoIncrement:true }); }; // Load the data source with data from the database req.onsuccess = function () { db = req.result; var tran = db.transaction("tasks"); tran.objectStore("tasks").openCursor().onsuccess = function(event) { var cursor = event.target.result; if (cursor) { tasks.dataSource.insertAtEnd(null, cursor.value); cursor.continue(); }; }; }; // Expose the data source and functions WinJS.Namespace.define("TaskList", { tasks: tasks }); Notice the success event handler. 
This handler is called when a database is successfully opened/created. In the code above, all of the items from the tasks object store are retrieved into a cursor and added to a WinJS.Binding.List object named tasks. Because the ListView control is bound to the WinJS.Binding.List object, copying the tasks from the object store into the WinJS.Binding.List object causes the tasks to appear in the ListView: Adding a New Task You add a new task in the Task List application by entering the title of a new task into an HTML form and clicking the Add button. Here’s the markup for creating the form: <form id="addTaskForm"> <input id="newTaskTitle" title="New Task" required /> <button>Add</button> </form> Notice that the INPUT element includes a required attribute. In a Metro application, you can take advantage of HTML5 Validation to validate form fields. If you don’t enter a value for the newTaskTitle field then the following validation error message is displayed: For a brief introduction to HTML5 validation, see my previous blog entry: http://stephenwalther.com/blog/archive/2012/03/13/html5-form-validation.aspx When you click the Add button, the form is submitted and the form submit event is raised. The following code is executed in the default.js file: // Handle Add Task document.getElementById("addTaskForm").addEventListener("submit", function (evt) { evt.preventDefault(); var newTaskTitle = document.getElementById("newTaskTitle"); TaskList.addTask({ title: newTaskTitle.value }); newTaskTitle.value = ""; }); The code above retrieves the title of the new task and calls the addTask() method in the tasks.js file. Here’s the code for the addTask() method which is responsible for actually adding the new task to the IndexedDB database: // Add a new task function addTask(taskToAdd) { var transaction = db.transaction("tasks", "readwrite"); var addRequest = transaction.objectStore("tasks").add(taskToAdd); addRequest.onsuccess = function (evt) { taskToAdd.id = evt.target.result; tasks.dataSource.insertAtEnd(null, taskToAdd); } } The code above does two things. First, it adds the new task to the tasks object store in the IndexedDB database. Second, it adds the new task to the data source bound to the ListView. The dataSource.insertAtEnd() method is called to add the new task to the data source so the new task will appear in the ListView (with a nice little animation). Deleting Existing Tasks The Task List application enables you to select one or more tasks by clicking or tapping on one or more tasks in the ListView. When you click the Delete button, the selected tasks are removed from both the IndexedDB database and the ListView. For example, in the following screenshot, two tasks are selected. The selected tasks appear with a teal background and a checkmark: When you click the Delete button, the following code in the default.js file is executed: // Handle Delete Tasks document.getElementById("btnDeleteTasks").addEventListener("click", function (evt) { tasksListView.winControl.selection.getItems().then(function(items) { items.forEach(function (item) { TaskList.deleteTask(item); }); }); }); The selected tasks are retrieved with the ListView's selection.getItems() method. In the code above, the deleteTask() method is called for each of the selected tasks.
Here’s the code for the deleteTask() method: // Delete an existing task function deleteTask(listViewItem) { // Database key != ListView key var dbKey = listViewItem.data.id; var listViewKey = listViewItem.key; // Remove item from db and, if success, remove item from ListView var transaction = db.transaction("tasks", "readwrite"); var deleteRequest = transaction.objectStore("tasks").delete(dbKey); deleteRequest.onsuccess = function () { tasks.dataSource.remove(listViewKey); } } This code does two things: it deletes the existing task from the database and removes the existing task from the ListView. In both cases, the right task is removed by using the key associated with the task. However, the task key is different in the case of the database and in the case of the ListView. In the case of the database, the task key is the value of the task id property. In the case of the ListView, on the other hand, the task key is auto-generated by the ListView. When the task is removed from the ListView, an animation is used to collapse the tasks which appear above and below the task which was removed. The Complete Code Above, I did a lot of jumping around between different files in the application and I left out sections of code. For the sake of completeness, I want to include the entire code here: the default.html, default.js, and tasks.js files. Here are the contents of the default.html file. This file contains the UI for the Task List application: <!DOCTYPE html> <html> <head> <meta charset="utf-8"> <title>Task List</title> <!-- WinJS references --> <link href="//Microsoft.WinJS.0.6/css/ui-dark.css" rel="stylesheet"> <script src="//Microsoft.WinJS.0.6/js/base.js"></script> <script src="//Microsoft.WinJS.0.6/js/ui.js"></script> <!-- TaskList references --> <link href="/css/default.css" rel="stylesheet"> <script src="/js/default.js"></script> <script type="text/javascript" src="js/tasks.js"></script> <style type="text/css"> body { font-size: x-large; } form { display: inline; } #appContainer { margin: 20px; width: 600px; } .win-container { padding: 10px; } </style> </head> <body> <div> <!-- Templates --> <div id="taskTemplate" data-win-control="WinJS.Binding.Template"> <div> <span data-win-bind="innerText:title"></span> </div> </div> <h1>Super Task List</h1> <div id="appContainer"> <form id="addTaskForm"> <input id="newTaskTitle" title="New Task" required /> <button>Add</button> </form> <button id="btnDeleteTasks">Delete</button> <div id="tasksListView" data-win-control="WinJS.UI.ListView" data-win-options="{ itemDataSource: TaskList.tasks.dataSource, itemTemplate: select('#taskTemplate'), tapBehavior: 'toggleSelect', selectionMode: 'multi', layout: { type: WinJS.UI.ListLayout } }"> </div> </div> </div> </body> </html> Here is the code for the default.js file.
This code wires up the Add Task form and Delete button: (function () { "use strict"; var app = WinJS.Application; app.onactivated = function (eventObject) { if (eventObject.detail.kind === Windows.ApplicationModel.Activation.ActivationKind.launch) { WinJS.UI.processAll().then(function () { // Get reference to Tasks ListView var tasksListView = document.getElementById("tasksListView"); // Handle Add Task document.getElementById("addTaskForm").addEventListener("submit", function (evt) { evt.preventDefault(); var newTaskTitle = document.getElementById("newTaskTitle"); TaskList.addTask({ title: newTaskTitle.value }); newTaskTitle.value = ""; }); // Handle Delete Tasks document.getElementById("btnDeleteTasks").addEventListener("click", function (evt) { tasksListView.winControl.selection.getItems().then(function(items) { items.forEach(function (item) { TaskList.deleteTask(item); }); }); }); }); } }; app.start(); })(); Finally, here is the tasks.js file. This file contains all of the code for opening, creating, and interacting with IndexedDB: (function () { "use strict"; // Create the data source var tasks = new WinJS.Binding.List(); // Open the database var db; var req = window.msIndexedDB.open("TasksDB", 1); req.onerror = function () { console.log("Could not open database"); }; req.onupgradeneeded = function (evt) { var newDB = evt.target.result; newDB.createObjectStore("tasks", { keyPath: "id", autoIncrement:true }); }; // Load the data source with data from the database req.onsuccess = function () { db = req.result; var tran = db.transaction("tasks"); tran.objectStore("tasks").openCursor().onsuccess = function(event) { var cursor = event.target.result; if (cursor) { tasks.dataSource.insertAtEnd(null, cursor.value); cursor.continue(); }; }; }; // Add a new task function addTask(taskToAdd) { var transaction = db.transaction("tasks", "readwrite"); var addRequest = transaction.objectStore("tasks").add(taskToAdd); addRequest.onsuccess = function (evt) { taskToAdd.id = evt.target.result; tasks.dataSource.insertAtEnd(null, taskToAdd); } } // Delete an existing task function deleteTask(listViewItem) { // Database key != ListView key var dbKey = listViewItem.data.id; var listViewKey = listViewItem.key; // Remove item from db and, if success, remove item from ListView var transaction = db.transaction("tasks", "readwrite"); var deleteRequest = transaction.objectStore("tasks").delete(dbKey); deleteRequest.onsuccess = function () { tasks.dataSource.remove(listViewKey); } } // Expose the data source and functions WinJS.Namespace.define("TaskList", { tasks: tasks, addTask: addTask, deleteTask: deleteTask }); })(); Summary I wrote this blog entry because I wanted to create a walkthrough of building a simple database-driven application. In particular, I wanted to demonstrate how you can use a ListView control with an IndexedDB database to store and retrieve database data.
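    For readers who want to experiment with the same storage pattern outside of a Metro style application, here is a minimal sketch of the open/upgrade/add/cursor flow using the unprefixed indexedDB object available in recent desktop browsers (a Metro application uses window.msIndexedDB, as shown above). This is an illustrative sketch, not part of the walkthrough: the database and store names simply mirror the article, and the sample task title is made up.

        // Open (or create) a TasksDB database with a "tasks" object store.
        var req = indexedDB.open("TasksDB", 1);
        req.onerror = function () {
            console.log("Could not open database");
        };
        req.onupgradeneeded = function (evt) {
            // Runs only when the database is created or upgraded to a new version.
            var newDB = evt.target.result;
            newDB.createObjectStore("tasks", { keyPath: "id", autoIncrement: true });
        };
        req.onsuccess = function () {
            var db = req.result;
            // Add a sample task, then dump every stored task to the console.
            var tx = db.transaction("tasks", "readwrite");
            tx.objectStore("tasks").add({ title: "Try IndexedDB" });
            tx.oncomplete = function () {
                db.transaction("tasks").objectStore("tasks").openCursor().onsuccess = function (evt) {
                    var cursor = evt.target.result;
                    if (cursor) {
                        console.log(cursor.value.id + ": " + cursor.value.title);
                        cursor.continue();
                    }
                };
            };
        };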

    Read the article

  • Skinny controller in ASP.NET MVC 4

    - by thangchung
    The Rails community has always inspired a lot of great ideas, and over time I have really come to love it. One of those ideas is "fat models and skinny controllers". I have spent a lot of time on ASP.NET MVC, and I have made some real mistakes by letting my controllers get fat. A fat controller is really dirty and very hard to maintain, and it seriously violates the SRP and KISS principles. But how can we achieve skinny controllers in ASP.NET MVC? That question became much clearer to me after reading Manning's "ASP.NET MVC 4 in Action". The idea is simple: we can move the work into custom ActionResult classes, implementing the logic and data persistence inside them. I first read about this on Jimmy Bogard's blog two years ago, but at the time I never gave it much consideration. That's enough talk for now. I have just published a sample on ASP.NET MVC 4, implemented on Visual Studio 2012 RC, available here. I used the EF framework for the persistence layer, and I also used two free templates from the internet to build the UI for this sample. In this sample, I try to implement a simple magazine website that manages articles, categories and news. It is not finished yet, but that is no problem, because my goal here is just to show how we can make the controller skinny. And I want to hear more of your ideas. First, I abstract the base ActionResult class like this: public abstract class MyActionResult : ActionResult, IEnsureNotNull { public abstract void EnsureAllInjectInstanceNotNull(); } public abstract class ActionResultBase<TController> : MyActionResult where TController : Controller { protected readonly Expression<Func<TController, ActionResult>> ViewNameExpression; protected readonly IExConfigurationManager ConfigurationManager; protected ActionResultBase(Expression<Func<TController, ActionResult>> expr) : this(DependencyResolver.Current.GetService<IExConfigurationManager>(), expr) { } protected ActionResultBase( IExConfigurationManager configurationManager, Expression<Func<TController, ActionResult>> expr) { Guard.ArgumentNotNull(expr, "ViewNameExpression"); Guard.ArgumentNotNull(configurationManager, "ConfigurationManager"); ViewNameExpression = expr; ConfigurationManager = configurationManager; } protected ViewResult GetViewResult<TViewModel>(TViewModel viewModel) { var m = (MethodCallExpression)ViewNameExpression.Body; if (m.Method.ReturnType != typeof(ActionResult)) { throw new ArgumentException("ControllerAction method '" + m.Method.Name + "' does not return type ActionResult"); } var result = new ViewResult { ViewName = m.Method.Name }; result.ViewData.Model = viewModel; return result; } public override void ExecuteResult(ControllerContext context) { EnsureAllInjectInstanceNotNull(); } } I also have an interface for validating all injected objects. This interface makes sure that all of the objects I inject using the Autofac container are not null.
The implementation of this is as below: public interface IEnsureNotNull { void EnsureAllInjectInstanceNotNull(); } Afterwards, I simply implement the HomePageViewModelActionResult class like this: public class HomePageViewModelActionResult<TController> : ActionResultBase<TController> where TController : Controller { #region variables & ctors private readonly ICategoryRepository _categoryRepository; private readonly IItemRepository _itemRepository; private readonly int _numOfPage; public HomePageViewModelActionResult(Expression<Func<TController, ActionResult>> viewNameExpression) : this(viewNameExpression, DependencyResolver.Current.GetService<ICategoryRepository>(), DependencyResolver.Current.GetService<IItemRepository>()) { } public HomePageViewModelActionResult( Expression<Func<TController, ActionResult>> viewNameExpression, ICategoryRepository categoryRepository, IItemRepository itemRepository) : base(viewNameExpression) { _categoryRepository = categoryRepository; _itemRepository = itemRepository; _numOfPage = ConfigurationManager.GetAppConfigBy("NumOfPage").ToInteger(); } #endregion #region implementation public override void ExecuteResult(ControllerContext context) { base.ExecuteResult(context); var cats = _categoryRepository.GetCategories(); var mainViewModel = new HomePageViewModel(); var headerViewModel = new HeaderViewModel(); var footerViewModel = new FooterViewModel(); var mainPageViewModel = new MainPageViewModel(); headerViewModel.SiteTitle = "Magazine Website"; if (cats != null && cats.Any()) { headerViewModel.Categories = cats.ToList(); footerViewModel.Categories = cats.ToList(); } mainPageViewModel.LeftColumn = BindingDataForMainPageLeftColumnViewModel(); mainPageViewModel.RightColumn = BindingDataForMainPageRightColumnViewModel(); mainViewModel.Header = headerViewModel; mainViewModel.DashBoard = new DashboardViewModel(); mainViewModel.Footer = footerViewModel; mainViewModel.MainPage = mainPageViewModel; GetViewResult(mainViewModel).ExecuteResult(context); } public override void EnsureAllInjectInstanceNotNull() { Guard.ArgumentNotNull(_categoryRepository, "CategoryRepository"); Guard.ArgumentNotNull(_itemRepository, "ItemRepository"); Guard.ArgumentMustMoreThanZero(_numOfPage, "NumOfPage"); } #endregion #region private functions private MainPageRightColumnViewModel BindingDataForMainPageRightColumnViewModel() { var mainPageRightCol = new MainPageRightColumnViewModel(); mainPageRightCol.LatestNews = _itemRepository.GetNewestItem(_numOfPage).ToList(); mainPageRightCol.MostViews = _itemRepository.GetMostViews(_numOfPage).ToList(); return mainPageRightCol; } private MainPageLeftColumnViewModel BindingDataForMainPageLeftColumnViewModel() { var mainPageLeftCol = new MainPageLeftColumnViewModel(); var items = _itemRepository.GetNewestItem(_numOfPage); if (items != null &&
items.Any()) { var firstItem = items.First(); if (firstItem == null) throw new NoNullAllowedException("First Item".ToNotNullErrorMessage()); if (firstItem.ItemContent == null) throw new NoNullAllowedException("First ItemContent".ToNotNullErrorMessage()); mainPageLeftCol.FirstItem = firstItem; if (items.Count() > 1) { mainPageLeftCol.RemainItems = items.Where(x => x.ItemContent != null && x.Id != mainPageLeftCol.FirstItem.Id).ToList(); } } return mainPageLeftCol; } #endregion } As a final step, I go into the HomeController and add a few lines of code like this: [Authorize] public class HomeController : BaseController { [AllowAnonymous] public ActionResult Index() { return new HomePageViewModelActionResult<HomeController>(x => x.Index()); } [AllowAnonymous] public ActionResult Details(int id) { return new DetailsViewModelActionResult<HomeController>(x => x.Details(id), id); } [AllowAnonymous] public ActionResult Category(int id) { return new CategoryViewModelActionResult<HomeController>(x => x.Category(id), id); } } As you can see, the code in the controller is really skinny; all of the logic has moved into the custom ActionResult class. Some people have said that this just moves the code out of the controller and into another class, so it is still hard to maintain; it looks like the complicated code has simply moved from one place to another. But if you look at it and think it through in detail, the distinction matters: code that deals with the HttpContext (or similar concerns) can stay in the controller, while the rest of the logic (such as business requirements and data persistence) is delegated to the custom ActionResult class. Tell me more of your thinking; I am really willing to hear all of it from you. All of the source code can be found here.

    Read the article

  • Metro: Understanding CSS Media Queries

    - by Stephen.Walther
    If you are building a Metro style application then your application needs to look great when used on a wide variety of devices. Your application needs to work on tiny little phones, slates, desktop monitors, and the super high resolution displays of the future. Your application also must support portable devices used with different orientations. If someone tilts their phone from portrait to landscape mode then your application must still be usable. Finally, your Metro style application must look great in different states. For example, your Metro application can be in a “snapped state” when it is shrunk so it can share screen real estate with another application. In this blog post, you learn how to use Cascading Style Sheet media queries to support different devices, different device orientations, and different application states. First, you are provided with an overview of the W3C Media Query recommendation and you learn how to detect standard media features. Next, you learn about the Microsoft extensions to media queries which are supported in Metro style applications. For example, you learn how to use the –ms-view-state feature to detect whether an application is in a “snapped state” or “fill state”. Finally, you learn how to programmatically detect the features of a device and the state of an application. You learn how to use the msMatchMedia() method to execute a media query with JavaScript. Using CSS Media Queries Media queries enable you to apply different styles depending on the features of a device. Media queries are not only supported by Metro style applications, most modern web browsers now support media queries including Google Chrome 4+, Mozilla Firefox 3.5+, Apple Safari 4+, and Microsoft Internet Explorer 9+. Loading Different Style Sheets with Media Queries Imagine, for example, that you want to display different content depending on the horizontal resolution of a device. In that case, you can load different style sheets optimized for different sized devices. Consider the following HTML page: <!DOCTYPE html> <html xmlns="http://www.w3.org/1999/xhtml"> <head> <title>U.S. Robotics and Mechanical Men</title> <link href="main.css" rel="stylesheet" type="text/css" /> <!-- Less than 1100px --> <link href="medium.css" rel="stylesheet" type="text/css" media="(max-width:1100px)" /> <!-- Less than 800px --> <link href="small.css" rel="stylesheet" type="text/css" media="(max-width:800px)" /> </head> <body> <div id="header"> <h1>U.S. Robotics and Mechanical Men</h1> </div> <!-- Advertisement Column --> <div id="leftColumn"> <img src="advertisement1.gif" alt="advertisement" /> <img src="advertisement2.jpg" alt="advertisement" /> </div> <!-- Product Search Form --> <div id="mainContentColumn"> <label>Search Products</label> <input id="search" /><button>Search</button> </div> <!-- Deal of the Day Column --> <div id="rightColumn"> <h1>Deal of the Day!</h1> <p> Buy two cameras and get a third camera for free! Offer is good for today only. </p> </div> </body> </html> The HTML page above contains three columns: a leftColumn, mainContentColumn, and rightColumn. When the page is displayed on a low resolution device, such as a phone, only the mainContentColumn appears: When the page is displayed in a medium resolution device, such as a slate, both the leftColumn and the mainContentColumns are displayed: Finally, when the page is displayed in a high-resolution device, such as a computer monitor, all three columns are displayed: Different content is displayed with the help of media queries. 
The page above contains three style sheet links. Two of the style links include a media attribute: <link href="main.css" rel="stylesheet" type="text/css" /> <!-- Less than 1100px --> <link href="medium.css" rel="stylesheet" type="text/css" media="(max-width:1100px)" /> <!-- Less than 800px --> <link href="small.css" rel="stylesheet" type="text/css" media="(max-width:800px)" /> The main.css style sheet contains default styles for the elements in the page. The medium.css style sheet is applied when the page width is less than 1100px. This style sheet hides the rightColumn and changes the page background color to lime: html { background-color: lime; } #rightColumn { display:none; } Finally, the small.css style sheet is loaded when the page width is less than 800px. This style sheet hides the leftColumn and changes the page background color to red: html { background-color: red; } #leftColumn { display:none; } The different style sheets are applied as you stretch and contract your browser window. You don’t need to refresh the page after changing the size of the page for a media query to be applied: Using the @media Rule You don’t need to divide your styles into separate files to take advantage of media queries. You can group styles by using the @media rule. For example, the following HTML page contains one set of styles which are applied when a device’s orientation is portrait and another set of styles when a device’s orientation is landscape: <!DOCTYPE html> <html> <head> <meta charset="utf-8" /> <title>Application1</title> <style type="text/css"> html { font-family:'Segoe UI Semilight'; font-size: xx-large; } @media screen and (orientation:landscape) { html { background-color: lime; } p.content { width: 50%; margin: auto; } } @media screen and (orientation:portrait) { html { background-color: red; } p.content { width: 90%; margin: auto; } } </style> </head> <body> <p class="content"> Lorem ipsum dolor sit amet, consectetuer adipiscing elit. Maecenas porttitor congue massa. Fusce posuere, magna sed pulvinar ultricies, purus lectus malesuada libero, sit amet commodo magna eros quis urna. </p> </body> </html> When a device has a landscape orientation then the background color is set to the color lime and the text only takes up 50% of the available horizontal space: When the device has a portrait orientation then the background color is red and the text takes up 90% of the available horizontal space: Using Standard CSS Media Features The official list of standard media features is contained in the W3C CSS Media Query recommendation located here: http://www.w3.org/TR/css3-mediaqueries/ Here is the official list of the 13 media features described in the standard: · width – The current width of the viewport · height – The current height of the viewport · device-width – The width of the device · device-height – The height of the device · orientation – The value portrait or landscape · aspect-ratio – The ratio of width to height · device-aspect-ratio – The ratio of device width to device height · color – The number of bits per color supported by the device · color-index – The number of colors in the color lookup table of the device · monochrome – The number of bits in the monochrome frame buffer · resolution – The density of the pixels supported by the device · scan – The values progressive or interlace (used for TVs) · grid – The values 0 or 1 which indicate whether the device supports a grid or a bitmap Many of the media features in the list above support the min- and max- prefix. 
For example, you can test for the min-width using a query like this: (min-width:800px) You can use the logical and operator with media queries when you need to check whether a device supports more than one feature. For example, the following query returns true only when the width of the device is between 800 and 1,200 pixels: (min-width:800px) and (max-width:1200px) Finally, you can use the different media types – all, braille, embossed, handheld, print, projection, screen, speech, tty, tv – with a media query. For example, the following media query only applies to a page when a page is being printed in color: print and (color) If you don’t specify a media type then media type all is assumed. Using Metro Style Media Features Microsoft has extended the standard list of media features which you can include in a media query with two custom media features: · -ms-high-contrast – The values any, black-white, white-black · -ms-view-state – The values full-screen, fill, snapped, device-portrait You can take advantage of the -ms-high-contrast media feature to make your web application more accessible to individuals with disabilities. In high contrast mode, you should make your application easier to use for individuals with vision disabilities. The -ms-view-state media feature enables you to detect the state of an application. For example, when an application is snapped, the application only occupies part of the available screen real estate. The snapped application appears on the left or right side of the screen and the rest of the screen real estate is dominated by the fill application (Metro style applications can only be snapped on devices with a horizontal resolution of greater than 1,366 pixels). Here is a page which contains style rules for an application in both the snapped and fill application states: <!DOCTYPE html> <html> <head> <meta charset="utf-8" /> <title>MyWinWebApp</title> <style type="text/css"> html { font-family:'Segoe UI Semilight'; font-size: xx-large; } @media screen and (-ms-view-state:snapped) { html { background-color: lime; } } @media screen and (-ms-view-state:fill) { html { background-color: red; } } </style> </head> <body> <p class="content"> Lorem ipsum dolor sit amet, consectetuer adipiscing elit. Maecenas porttitor congue massa. Fusce posuere, magna sed pulvinar ultricies, purus lectus malesuada libero, sit amet commodo magna eros quis urna. </p> </body> </html> When the application is snapped, the application appears with a lime background color: When the application state is fill then the background color changes to red: When the application takes up the entire screen real estate – it is not in snapped or fill state – then no special style rules apply and the application appears with a white background color. Querying Media Features with JavaScript You can perform media queries using JavaScript by taking advantage of the window.msMatchMedia() method. This method returns an MSMediaQueryList which has a matches property that indicates success or failure. For example, the following code checks whether the current device is in portrait mode: if (window.msMatchMedia("(orientation:portrait)").matches) { console.log("portrait"); } else { console.log("landscape"); } If the matches property returns true, then the device is in portrait mode and the message "portrait" is written to the Visual Studio JavaScript Console window. Otherwise, the message "landscape" is written to the JavaScript Console window.
You can create an event listener which triggers code whenever the result of a media query changes. For example, the following code writes a message to the JavaScript Console whenever the current device is switched into or out of Portrait mode: window.msMatchMedia("(orientation:portrait)").addListener(function (mql) { if (mql.matches) { console.log("Switched to portrait"); } }); Be aware that the event listener is triggered whenever the result of the media query changes. So the event listener is triggered both when you switch from landscape to portrait and when you switch from portrait to landscape. For this reason, you need to verify that the matches property has the value true before writing the message. Summary The goal of this blog entry was to explain how CSS media queries work in the context of a Metro style application written with JavaScript. First, you were provided with an overview of the W3C CSS Media Query recommendation. You learned about the standard media features which you can query such as width and orientation. Next, we focused on the Microsoft extensions to media queries. You learned how to use -ms-view-state to detect whether a Metro style application is in “snapped” or “fill” state. You also learned how to use the msMatchMedia() method to perform a media query from JavaScript.
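    Putting these pieces together, the following sketch registers a listener for each of the Metro view states described above and logs whichever state currently matches. It assumes it is running inside a Metro style application where window.msMatchMedia is available; the reportViewState helper is an illustrative name of mine, not part of the WinJS API.

        // Log every -ms-view-state query that currently matches.
        function reportViewState() {
            var states = ["full-screen", "fill", "snapped", "device-portrait"];
            states.forEach(function (state) {
                if (window.msMatchMedia("(-ms-view-state:" + state + ")").matches) {
                    console.log("Current view state: " + state);
                }
            });
        }

        // Re-report whenever any view-state query flips in either direction.
        ["full-screen", "fill", "snapped", "device-portrait"].forEach(function (state) {
            window.msMatchMedia("(-ms-view-state:" + state + ")").addListener(reportViewState);
        });

        reportViewState(); // log the initial state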

    Read the article

  • CodePlex Daily Summary for Tuesday, June 28, 2011

    CodePlex Daily Summary for Tuesday, June 28, 2011Popular ReleasesCoding4Fun Tools: Coding4Fun.Phone.Toolkit v1.4.3: Fix for prompts not returning the appbarMosaic Project: Mosaic Alpha build 261: - Fixed crash when pinning applications in x64 OS - Added Hub to video widget. It shows videos from the Video library (only .wmv and .avi). Can be slow if there are too many files. - Fixed some issues with scrolling - Fixed bug with html widgets - Fixed bug in Gmail widget - Added html today widget missed in previous release - Now Mosaic saves running widgets if you restart from optionsEnhSim: EnhSim 2.4.9 BETA: 2.4.9 BETAThis release supports WoW patch 4.2 at level 85 To use this release, you must have the Microsoft Visual C++ 2010 Redistributable Package installed. This can be downloaded from http://www.microsoft.com/downloads/en/details.aspx?FamilyID=A7B7A05E-6DE6-4D3A-A423-37BF0912DB84 To use the GUI you must have the .NET 4.0 Framework installed. This can be downloaded from http://www.microsoft.com/downloads/en/details.aspx?FamilyID=9cfb2d51-5ff4-4491-b0e5-b386f32c0992 - Added in some of th....NET Reflector Add-Ins: Reflector V7 Add-Ins: All the add-ins compiled for Reflector V7TerrariViewer: TerrariViewer v4.1 [4.0 Bug Fixes]: Version 4.1 ChangelogChanged how users will Open Player files (This change makes it much easier) This allowed me to remove the "Current player file" labels that were present Changed file control icons Added submit bug button Various Bug Fixes Fixed crashes related to clicking on buffs before a character is loaded Fixed crashes related to selecting "No Buff" when choosing a new buff Fixed crashes related to clicking on a "Max" button on the buff tab before a character is loaded Cor...AcDown (Anime&Comic Downloader): AcDown v3.0 Beta8: AcDown is a downloader for video sites such as Acfun and Bilibili, written in C#. It runs on 32-bit and 64-bit Windows XP/Vista/7; on Windows XP, the .NET Framework 2.0 (x86 or x64) must be installed first.BlogEngine.NET: BlogEngine.NET 2.5: If you want to set up and start using BlogEngine.NET right away, you should download the Web project. If you want to extend or modify BlogEngine.NET, you should download the source code. If you are upgrading from a previous version of BlogEngine.NET, please take a look at the Upgrading to BlogEngine.NET 2.5 instructions. To get started, be sure to check out our installatio...PHP Manager for IIS: PHP Manager 1.2 for IIS 7: This release contains all the functionality available in 62183 plus the following additions: Command Line Support via PowerShell - now it is possible to manage and script PHP installations on IIS by using Windows PowerShell. More information is available at Managing PHP installations with PHP Manager command line. Detection and alert when using local PHP handler - if a web site or a directory has a local copy of PHP handler mapping then the configuration changes made on upper configuration ...MiniTwitter: 1.71: MiniTwitter 1.71, an update to this Japanese Twitter client; the original Japanese release notes mention OAuth-related changes.
SizeOnDisk: 1.0.10.0: Fix: issue 327: size format error when saving settings Fix: some UI binding troubles (sorting, refresh) Fix: user settings file deletion when corrupted Feature: TreeView virtualization (better speed with many folders) Feature: New file type DataGrid column Feature: In KByte view, show size of files < 1024B and > 0 with 3 decimals Feature: New language: Italian Task: Cleanup for speedRawr: Rawr 4.2.0: This is the downloadable WPF version of Rawr! For the web-based version, see http://elitistjerks.com/rawr.php You can find the version notes at: http://rawr.codeplex.com/wikipage?title=VersionNotes Rawr AddonWe now have a Rawr Official Addon for in-game exporting and importing of character data, hosted on Curse. The Addon does not perform calculations like Rawr; it simply shows your exported Rawr data in WoW tooltips and lets you export your character to Rawr (including bag and bank items) like Char...HD-Trailers.NET Downloader: HD-Trailer.net Downloader 1.86: This version implements a new config flag "ConsiderTheatricalandNumberedTrailersasIdentical" so that, for the purposes of Exclusions, only one Teaser Trailer and one Trailer (named Trailer, Theatrical Trailer, Trailer No. 1, Trailer 1, Trailer No. 2, etc.) will be downloaded. This also includes a bug fix where the .nfo file did not include the -trailer if configured for XBMC.N2 CMS: 2.2: * Web platform installer support available ** Nuget support available What's newDinamico Templates (beta) - an MVC3 & Razor based template pack using the template-first! development paradigm Boilerplate CSS & HTML5 Advanced theming with css compilation (concrete, dark, roadwork, terracotta) Template-first! development style Content, news, listing, slider, image sizes, search, sitemap, globalization, youtube, google map Display Tokens - replaces text tokens with rendered content (usag...Circuit Diagram: Circuit Diagram v0.5 Beta: New in this release: New components: Ammeter (meter) Voltmeter (meter) Undo/redo functionality for placing/moving components Choose resolution when exporting PNG image New logoMicrosoft Ajax Minifier: Microsoft Ajax Minifier 4.23: XML input file can now specify both JS and CSS output files in the same XML file. Don't output octal escape sequences in string literals for strict-mode scripts. Expand expression optimizations to handle more instances of logical-not being smaller. Properly handle comments that look like conditional comments but aren't because no @cc_on statement has been encountered. CSS Important comments should start on a new line, and CSS hacks should not have the ! in them. Various other smaller updates.KinectNUI: Jun 25 Alpha Release: Initial public version. No installer needed, just run the EXE.Terraria World Viewer: Version 1.5: Update June 24th Made compatible with the new tiles found in Terraria 1.0.5Kinect Earth Move: KinectEarthMove sample code: Sample code releasedThis is sample code for the Kinect for Windows SDK beta, which was demonstrated at the Channel 9 Kinect for Windows SDK beta launch event on June 17, 2011. Using color image and skeleton data from the Kinect, a user in front of the Kinect can manipulate the earth between his/her hands.patterns & practices: Project Silk: Project Silk Community Drop 12 - June 22, 2011: Changes from previous drop: Minor code changes. New "Introduction" chapter. New "Modularity" chapter. Updated "Architecture" chapter. Updated "Server-Side Implementation" chapter. Updated "Client Data Management and Caching" chapter.
Guidance Chapters Ready for Review The Word documents for the chapters are included with the source code in addition to the CHM to help you provide feedback. The PDF is provided as a separate download for your convenience. Installation Overview To ins...DotNetNuke® Community Edition: 06.00.00 Beta: Beta 1 (Build 2300) includes many important enhancements to the user experience. The control panel has been updated for easier access to the most important features and additional forms have been adapted to the new pattern. This release also includes many bug fixes that make it more stable than previous CTP releases. Beta ForumsNew ProjectsAI4CAD-3D: AI4CAD-3D is a 3D design and construction quantity take-off program.Beginner 2D game Dev -MokoNa: Learning to make 2D games from the ground up cae2rampage: A top down XNA-based multiplayer game.CamelWiki: Simple wiki software written in Perl using POD as the markup language. CamelWiki can use MySQL, PostgreSQL or SQLite as the database backend. DmPoster: An assistant to help us post danmaku (comments) to the bilibili.tv website. It's developed in C#.EasyDP: EasyDP is an open source Silverlight application to upload and manage one's display pictures in a website. It can be integrated into websites not limited to ASP.NET. It posts images up with generic HTTP form data, thus it can communicate with an HTTP handler written in any server-side programming language.E-Commerce TCP: An e-commerce project for the TCP course at UFRGS, semester 01/2011.Encounter: a host-guest interaction energy calculator: Encounter is a simple program for calculating the interaction energy between two molecules using the output from a GAUSSIAN two-component counterpoise correction calculation. The Counterpoise Correction arises due to the Basis Set Superposition Error in quantum modeling.Excel add-in for regular expressions: Match, replace, and search character strings in Excel using the C++0x <regex> library.FlvBugger: A tool for working with FLV video files.Frontdesk: Frontdesk is a form creator and autoresponder program designed in Microsoft ASP.NET MVC 2. It contains a custom member/security implementation, an admin area, and lots of flexibility in form & autoresponder creation. It uses an integrated WYSIWYG editor & form field drag-and-drop.His2012: his2012ixbShop: An open source e-commerce platformJBot: JBot is a network chatterbot program.JungleSoft_Lux: luxKillstone PaRSS: Killstone PaRSS is a jQuery plugin that parses an RSS feed and appends the items from the feed to a UL or OL on your webpage.Killstone PHP Framework: A simple PHP framework that helps organize your PHP application in a Model / View / Controller pattern.Orchard-PhotoTag-FamilyTree: This is an Orchard Module that allows you to tag a photo. It comes with a widget and a Page type. In addition to tagging photos you can create a family tree/org chart to drill down into. (I originally built this for a family reunion).ResX DSL: ResX DSL is a Domain Specific Language created with Visual Studio DSL Tools.
It helps you define a ResX DSL model with multi-language and multi-typed resources, and uses T4 templates to generate an ordinary .resx file as well as a proxy class for a given resource set.Self-Tracking Entity Generator for WPF and Silverlight: An Entity Framework project item to generate self-tracking entity classes for WPF and Silverlight applications.SharePoint 2010 Print List Ribbon Button: SharePoint 2010 Print List button on the Top Ribbon for Calendar listsStocks Application: This is a technical demo applying the F-Sharp (F#) language in a real-world application. The application is close to a real-world enterprise application (with solutions that are close to optimal, at least as of 2011). The function of this application is to get stock quote data and historical stock prices from Yahoo Finance The Smart Shopping List: The Smart Shopping List makes it easier to keep track of your purchasesUmbraco Flickr API Search - XSLT Extension: An Umbraco XSLT Extension package that enables calling the Flickr API to retrieve photos from a tag(s), user, group, and text search. The underlying engine is built off of the FlickrNET library (http://www.codeplex.com/FlickrNet).WASTLib: WASTLib is the acronym for Web Application Security Testing Library. Its main purpose is to make it easy to create security tests for your web application. WinPanel: WinPanel is an imitation GNOME panel that runs on Windows. It is programmed in Visual Basic 2010 Express.

    Read the article

< Previous Page | 517 518 519 520 521 522 523 524  | Next Page >