Search Results

Search found 21001 results on 841 pages for 'spelling check'.


  • Customers Discuss: Real-World Operational Reporting with Oracle GoldenGate

    - by Irem Radzik
    As businesses leverage business intelligence and analytics for day-to-day decision making, operational reporting solutions become more and more common. While some companies can use their production OLTP system for running operational reports, for many the overhead and performance impact on transaction processing systems is too high. Oracle GoldenGate's real-time data integration capabilities enable companies to create a real-time replica of their OLTP systems that is dedicated to operational reporting. This instance can also be optimized for the reports needed, for example by containing only the tables required from the source. Oracle GoldenGate has certified solutions for many Oracle applications, such as E-Business Suite, PeopleSoft, and JD Edwards, to offload operational reporting to a separate reporting server fed with real-time data from the production system. At Oracle OpenWorld we will hear from a panel of Oracle GoldenGate customers how they deployed GoldenGate for operational reporting. Comcast, Turk Telekom, and Raymond James will share their experiences and the benefits achieved when implementing GoldenGate's solution. If you have performance degradation in your production systems due to reporting or ad-hoc queries, and you will be at OpenWorld, don't miss this informative session: Real-World Operational Reporting with Oracle GoldenGate: Customer Panel -- Tuesday, Oct 2nd, 11:45am, Moscone West 3005. For other data integration sessions at OpenWorld, please check our Focus-On document. If you cannot attend OpenWorld, please check out the related white paper, "Using Oracle GoldenGate to Achieve Operational Reporting for Oracle Applications," to learn more.

    Read the article

  • Where Have All the Ugly Forms Gone? Users and ADF Took Care Of It

    - by ultan o'broin
    Sometimes I hear that our application demos are a bit too "cutesy" and that we never talk about user roles that involve lots of data entry. Some (no names) consider those old clunker forms, with their myriad rows of fields, to be super-productive for data clerks. We do have such roles covered in Oracle Fusion Applications, for sure. But consider what the real issue is here: productivity. Check out how the Oracle Fusion Financials Applications User Experience team went about designing for productivity when receiving and entering invoice data, for example. See how Fusion Financials caters so well for input and control of data? Central to all this is knowing the users and how they work: what tasks they need to perform, and when. Read more about Fusion Financials productivity in the white paper, Get It Done Fast, Get It Done Right: The Oracle Fusion Financials User Experience. Now and then, I see forms that weren't designed for end user activity at all. Instead, they were designed by developers or by the IT department around the database schema. Forms with literally dozens of fields on the same page, sometimes. Forms that give the impression there was only one task involved, when there may have been several. At times, completing one of these huge forms accurately became so tedious that, under pressure, it made more sense for the user to complete it as quickly as possible and then let somebody else check it for accuracy and fill in the gaps from data emailed along in spreadsheet form. Data accuracy is critical in our business. Not good. Not efficient. Not productive. So here are a few basics of forms design for data entry-type user roles. A great place for developers to start exploring what is possible with forms layout is the Rich Client User Experience (RCUX) guidance on Form Layout, using ADF components.

    User-Centered Forms Design Considerations

    The starting point--something you must always keep in mind with your own design--is to design for the end user. Find a representative end user, and keep that user engaged throughout the design, deployment, and test process. Consider these points when user testing those forms:

    - Are there automated or technical solutions to entering the data that avoid manual input in the first place? For example, imports, uploads, OCR, whatever. Some day we will be able to tell Siri to do it, but leave that for now.
    - Design your form to reflect the task involved (i.e., the business process) and not the database schema.
    - On the form, group like fields together, logically.
    - Eliminate duplicate data entry, or prepopulate from previous data entry.
    - Allow users to complete fields in the order they wish (i.e., no interdependency).
    - Allow for tabbing between fields (keyboard is faster than mouse), and know how the browser supports this (see that RCUX guideline).
    - Allow for final validation at the page level, not at field-level entry. Way better for heads-down users. For example, ADF messages allow you to see a list of all validation errors on a page on a final submit or navigation action and to easily navigate to the point of error.
    - Better still, be error tolerant. Allow users to enter data in formats they are comfortable with. Bind any relevant user preference setting to the input format allowed (for example, the locale date format). Explore what data entry conversion can do for you automatically too (see the ADF converter demos; convenience patterns can also be written).
    - Only ask for data input when it's needed. Get rid of, or hide, optional fields.
    - Cut down on the number of mandatory fields, and mark them clearly (use a *).
    - Clearly label the fields in plain language.

    I am sure you have a few more tips on forms design for data entry users. Remember the user, and share your tips in the comments.

    Read the article

  • Oracle GoldenGate 11g Release 2 Launch Webcast Replay Available

    - by Irem Radzik
    For those of you who missed the Oracle GoldenGate 11g Release 2 launch webcasts last week, the replay is now available from the following url: Harnessing the Power of the New Release of Oracle GoldenGate 11g. I would highly recommend watching the webcast to learn about the many new features of the new release and hear the product management team respond to questions from the audience in a nice long Q&A section. In my blog last week I listed the media coverage for this new release. There is also a new article published by ITJungle talking about Oracle GoldenGate's heterogeneity and support for DB2 for iSeries: Oracle Completes DB2/400 Support in Data Replication Tool. As mentioned in last week's blog, we received over 150 questions from the audience, and in this blog I'd like to continue to post some of the frequently asked questions and their answers:

    Question: What are the fundamental differences between classic data capture and integrated data capture? Do both use the redo logs in the source database?
    Answer: Yes, they both use redo logs. Classic capture parses the redo log data directly, whereas Integrated Capture lets the Oracle database parse the redo log record using an internal API.

    Question: Does the GoldenGate version need to match the Oracle Database version?
    Answer: No, they are not directly linked. Oracle GoldenGate 11g Release 2 supports Oracle Database version 10gR2 as well. For Oracle Database version 10gR1 and Oracle Database version 9i you will need GoldenGate 11g Release 1 or lower. And for Oracle Database 8i you need Oracle GoldenGate 10 or earlier versions.

    Question: If I already use Data Guard, do I need GoldenGate?
    Answer: Data Guard is designed as the best disaster recovery solution for Oracle Database. If you would like to implement a bidirectional Active-Active replication solution or need to move data between heterogeneous systems, you will need GoldenGate.

    Question: On compression and GoldenGate, if the source uses compression, is it required that the target also use compression?
    Answer: No, the source and target do not need to have the same compression settings.

    Question: Does GoldenGate support Advanced Security Option on the source database?
    Answer: Yes, it does.

    Question: Can I use GoldenGate to upgrade the Oracle Database to 11g and do an OS migration at the same time?
    Answer: Yes, this is a very common project where GoldenGate can eliminate downtime, give flexibility to test the target as needed, and minimize risks with a fail-back option to the old environment. For more information on database upgrades please check out the following white papers: Best Practices for Migrating/Upgrading Oracle Database Using Oracle GoldenGate 11g and Zero-Downtime Database Upgrades Using Oracle GoldenGate.

    Question: Does GoldenGate create any triggers in the source database, at table level or row level, for real-time data integration?
    Answer: No, GoldenGate does not create triggers.

    Question: Can transformation be done after insert to the destination table, or does it need to be done before?
    Answer: It can happen in the Capture (Extract) process, in the Delivery (Replicat) process, or in the target database.

    For more resources on Oracle GoldenGate 11gR2 please check out our Oracle GoldenGate 11gR2 resource kit as well.

    Read the article

  • Part 6: Extensions vs. Modifications

    - by volker.eckardt(at)oracle.com
    Customizations = Extensions + Modifications. In EBS terminology, a customization can be an extension or a modification. Extension means that you mainly create your own code from scratch. You may utilize existing views, packages, and Java classes, but your code is unique. Modifications are quite different, because here you take existing code and change or enhance certain areas to achieve slightly different behavior. What is important is that it doesn't matter whether you place your code at the same or at another place – it is still a modification. It is also not relevant whether you leave the original code enabled or not! Why? Here is the answer: if the original code piece you have taken as your base gets patched, you need to copy the source again and apply all your changes once more. If you don't do that, you may get different results or write different data compared to the standard – this causes a high risk! Here are some guidelines for how to reduce the risk:

    - Invest a bit longer when searching for objects to select data from. Choose a view rather than a table. In case Oracle development changes the underlying tables, the view will be more stable and is therefore a better choice.
    - Choose public APIs over internal APIs. Same background as before: although the internal structure might change, the public API is more stable.
    - Use personalization and substitution rather than modification. Spend more time to check whether the requirement can be covered with such techniques.
    - Build a project code library, and avoid colleagues creating similar functionality multiple times. Otherwise you have to review lots of similar code to determine the need for correction.
    - Use the technique of "flagged files". Flagged files are a way to mark a standard deployment file. If you run the patch analysis (within Application Manager), the analysis result will list flagged standard files in case they will be patched. If you maintain a cross reference to your own CEMLIs, you can easily determine which CEMLIs have to be reviewed.
    - Implement a code review process. This can be done by utilizing team-internal or external persons. If you implement such a team-internal process, your team members will come up with suggestions for how to improve the code quality by themselves.
    - Review heavy customizations regularly to identify options to reduce complexity; let's say perform this every 6th month. You need not spend days on such a review, but a high-level cross check of whether the customization can be reduced is suggested.
    - De-install customizations which are no longer required. Define a process for this. Add a section into the technical documentation on how to uninstall and what the possible implications are.
    - Maintain a cross reference between CEMLIs, and between CEMLIs, EBS modules, and business processes. Keep this list up to date! Share this list!

    By following these guidelines, you are able to improve product stability. Although we might not be able to avoid modifications completely, we can give much better advice to developers and to our test team. Summary: Extensions and Modifications have to be handled differently during their lifecycle. Modifications imply a much higher risk and should therefore be reviewed more frequently. Good cross references allow you to give clear advice for the testing activities.

    Read the article

  • BizTalk 2009 - The Community ODBC Adapter: Schema Generation

    - by Stuart Brierley
    Having previously detailed the installation of the Community ODBC Adapter for BizTalk 2009, the next thing I will be looking at is the generation of schemas using this ODBC adapter. Within your BizTalk 2009 project, right click the project and select Add Generated Items. In the resultant window choose Add Adapter Metadata and click Add to open the Add Adapter Wizard. Check that the BizTalk Server and Database names are correct, select the ODBC adapter and click Next. You must now set the connection string. To start with choose Set, then New DSN (data source name). You now need to define the Data Source you will be connecting to. On the User DSN tab select Add and then the driver you want to use. In this case I am going to use the MySQL ODBC Driver. A User DSN will only be visible on the current machine with you as a user. *Although I initially set up a User DSN and this was fine for creating schemas with, I later realised that you actually need a System DSN, as the BizTalk host service needs this to be able to connect to the database on a receive or send port. You will then be asked to set up the MySQL ODBC Data Source. In my case this is a local database making use of named pipes, so I had to make sure that I ticked the "Force use of named pipes" check box and removed the "# The Pipe the MySQL Server will use socket=mysql" line from the mysql.ini; with that line in place the connection would fail, as there is no apparent way to specify the pipe name in the ODBC driver configuration. This will then update the User DSN tab with the new Data Source. Make sure that you select it and press OK. Select it again in the Choose Data Source window and press OK. On the ODBC Transport window select Next. You will now be presented with the Schema Information window, where you must supply the namespace, type and root element names for your schema. Next choose the type of statement that you will be using to create your schema - in this case I am using a stored procedure. *I later discovered that this option is fine for MySQL stored procedures without input parameters, but fails for MySQL stored procedures with input parameters. (I will be posting on the way to handle input parameters soon.) Next you will need to specify the name of the stored procedure. In this case I have a simple stored procedure to return all the data held by my TestTable in MySQL: Select * from TestTable; The table itself has three columns: Name, Sex and Married. Selecting Finish should now hopefully create your schemas based on the input and output from your stored procedure. In my case I have: an empty schema for the request (after all, I have no parameters for the stored procedure) and a response schema comprised of a Table Record with Name, Sex and Married children. Next I will be looking at the use of the ODBC adapter with receive ports and send ports.

    Read the article

  • Mobile Deals: the Consumer Wants You in Their Pocket

    - by Mike Stiles
    Mobile deals offer something we talk about a lot in social marketing, relevant content. If a consumer is already predisposed to liking your product and gets a timely deal for it that’s easy and convenient to use, not only do you score on the marketing side, it clearly generates some of that precious ROI that’s being demanded of social. First, a quick gut-check on the public’s adoption of mobile. Nielsen figures have 55.5% of US mobile owners using smartphones. If young people are indeed the future, you can count on the move to mobile exploding exponentially. Teens are the fastest growing segment of smartphone users, and 58% of them have one. But the largest demographic of smartphone users is 25-34 at 74%. That tells you a focus on mobile will yield great results now, and even better results straight ahead. So we can tell both from statistics and from all the faces around you that are buried in their smartphones this is where consumers are. But are they looking at you? Do you have a valid reason why they should? Everybody likes a good deal. BIA/Kelsey says US consumers will spend $3.6 billion this year for daily deals (the Groupons and LivingSocials of the world), up 87% from 2011. The report goes on to say over 26% of small businesses are either "very likely" or "extremely likely" to offer up a deal in the next 6 months. Retail Gazette reports 58% of consumers shop with coupons, a 40% increase in 4 years. When you consider that a deal can be the impetus for a real-world transaction, a first-time visit to a store, an online purchase, entry into a loyalty program, a social referral, a new fan or follower, etc., that 26% figure shows us there’s a lot of opportunity being left on the table by brands. The existing and emerging technologies behind mobile devices make the benefits of offering deals listed above possible. Take how mobile payment systems are being tied into deal delivery and loyalty programs. If it’s really easy to use a coupon or deal, it’ll get used. If it’s complicated, it’ll be passed over as “not worth it.” When you can pay with your mobile via technologies that connects store and user, you get the deal, you get the loyalty credit, you pay, and your receipt is uploaded, all in one easy swipe. Nothing to keep track of, nothing to lose or forget about. And the store “knows” you, so future offers will be based on your tastes. Consider the endgame. A customer who’s a fan of your belt buckle store’s Facebook Page is in one of your physical retail locations. They pull up your app, because they’ve gotten used to a loyalty deal being offered when they go to your store. Voila. A 10% discount active for the next 30 minutes. Maybe the app also surfaces social references to your brand made by friends so they can check out a buckle someone’s raving about. If they aren’t a fan of your Page or don’t have your app, perhaps they’ve opted into location-based deal services so you can still get them that 10% deal while they’re in the store. Or maybe they’ve walked in with a pre-purchased Groupon or Living Social voucher. They pay with one swipe, and you’ve learned about their buying preferences, credited their loyalty account and can encourage them to share a pic of their new buckle on social. Happy customer. Happy belt buckle company. All because the brand was willing to use the tech that’s available to meet consumers where they are, incentivize them, and show them how much they’re valued through rewards.

    Read the article

  • Dallas First Regionals 2012&ndash; For Inspiration and Recognition of Science and Technology

    - by T
    Wow! That is all I have to say after the last 3 days. Three full, fun-filled days in a world that fed the geek, sparked the competitor, inspired the humanitarian, encouraged the inventor, and continuously warmed my heart. As part of the Dallas First Regionals, I was awed by incredible students who teach as much as they learn, inventive and truly caring mentors who make mentoring look easy, and completely passionate and dedicated volunteers who bring meaning to giving all that you have and making events fun and safe for all. If you have any interest in innovation, robotics, or highly motivated students, I can't recommend anything any higher than visiting a First Robotics event. This is my third year with First and I was honored enough to serve the Dallas First Regionals as both a Web Site evaluator and as the East Field Volunteer Coordinator. This was also the first year my daughter volunteered with me. My daughter and I both recognize how different the First program is from other team events. The difference with First is that everyone is a first class citizen. It is a difference we can feel through experiencing and observing interactions between executives, respected engineers, students, and event staff. Even with fierce competition, you still find a lot of cooperation, and we never witnessed any belittling between teams or individuals. First Robotics coined the term "Gracious Professionalism". It's a way of doing things that encourages high-quality work, emphasizes the value of others, and respects individuals and the community.1 I was introduced to this term as the Volunteer Coordinator when I was preparing the volunteer instructional speech. Through the next few days, I discovered it is truly core to everything that First does. One of the ways First accomplishes Gracious Professionalism is by utilizing another term they came up with, which is "CoopertitionTM". At FIRST, CoopertitionTM is displaying unqualified kindness and respect in the face of fierce competition.1 One of the things I never liked about sports was the need to "destroy" the other team. First has really found a way to integrate CoopertitionTM into the rules so teams can be competitive while part of that competition rewards cooperation. Oh, and did I mention it has ROBOTS!!! This year it had basketball-playing, Kinect-connected, remote-controlled robots! There are no words for how exciting these games are. You HAVE to check out this year's game, Rebound Rumble, on YouTube, or you can view live action on Dallas First Video (as of this posting, the recordings haven't been posted yet but should be there soon). There are also some images below. Whatever it is, these students get it and exemplify it, and these mentors ooze with it. I am glad that First supports these events so we can all learn and be inspired by these exceptional students and mentors. I know that no matter how much I give, it will never compare to what I gain volunteering at First. Even if you don't have time to volunteer, you owe it to yourself to go check out one of these events. See what the future holds, be inspired, be encouraged, take some knowledge, leave some knowledge and most of all, have FUN. That is what First is all about and, thanks to First, I get it. First Dallas Regionals 2012 (photo slide show) 1. USFirst http://www.usfirst.org/aboutus/gracious-professionalism

    Read the article

  • Deploying Oracle ADF Essentials Applications to Glassfish

    - by Shay Shmeltzer
    With the new Oracle ADF Essentials offering you can now deploy applications that leverage Oracle ADF on the open source GlassFish 3.1 server. Deployment is documented in the official JDeveloper and ADF documentation (here), but below is a summary of the steps and a video of the steps you'll need to take to get a basic Oracle ADF Essentials application to work on GlassFish. Note - to make starting/stopping GlassFish easier for my demo I used my GlassFish extension that you can get here.

    First we'll install some ADF Runtime libraries on GlassFish:

    - Download and install GlassFish. (Note - if you also have an Oracle DB on the same machine, you'll want to switch GlassFish's HTTP port to something else instead of 8080.)
    - Download the Oracle ADF Essentials packaging - this will get you an adf_essentials.zip file.
    - Copy the adf_essentials.zip to the lib directory of your GlassFish domain - on a default Windows install this would be: C:\glassfish3\glassfish\domains\domain1\lib
    - Go to the above lib directory and issue: unzip -j adf_essentials.zip
      This will extract the ADF libraries to the directory. Now you can start the GlassFish server.

    Now let's configure GlassFish to handle applications of the ADF type:

    - Invoke the admin console of GlassFish (http://localhost:4848) and log into your admin account.
    - Go to Configurations->server-config->JVM Settings and choose the JVM Options tab.
    - Add the following entries: -XX:MaxPermSize=512m (note this entry should already exist, so just make sure it has a big enough value) and -Doracle.mds.cache=simple

    While we are in the admin console, we can also define the JDBC connections that will be used by our application:

    - Go into Resources->JDBC->JDBC Connection Pools and click to create a New one.
    - Give it a name, choose the resource type to be javax.sql.XADataSource, and choose Oracle as the Database Driver vendor. Click Next.
    - Scroll down to the Additional Properties section and start filling in the information for your database. The values for an Oracle XE will be (user=hr, databaseName=XE, Password=hr, ServerName=localhost, DriverType=thin, PortNumber=1521). Click Finish.
    - Click Ping to check your connection works.
    - Now define a new JDBC Resource that will use the pool you just defined. In my example I called the resource jdbc/HRDS. You will need this name to match the name in your Application Module connection configuration.

    Now you can re-start the GlassFish server for the changes to take effect.

    Get an ADF application going (you can use the regular Fusion Application template for this):

    - Go into the project properties of your viewController project, and under the Deployment section click to edit the deployment profile that is defined there. Go to Platform and choose GlassFish 3.1 from the drop down list. Click OK to go back to your project.
    - Go to Application -> Application Properties -> Deployment. Go to Platform and choose GlassFish 3.1 from the drop down list. Click OK to go back to your project. This step will make sure that JDeveloper automatically adds the necessary ADF libraries to the EAR file that is being generated for deployment on GlassFish.
    - Go to Application -> Deploy and deploy either to an EAR file or directly to a GlassFish server connection that you created. Things should just work, but if they don't then look at the server.log in the log directory and check what error is in there.

    Here is a video demo of the various steps. Note - right now the deployment of an ADF application takes about 2 minutes on my machine; we are hoping to be able to improve this timing in the future. People who are more familiar with GlassFish might want to explore using exploded directory deployment and see if they can get it to work.
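    To sanity-check the data source wiring, here is a minimal, hypothetical Java sketch (the class and method names are my own, and it assumes it runs inside GlassFish, for example from a test servlet) that looks up the jdbc/HRDS resource defined above and verifies a connection can be opened:

        import java.sql.Connection;
        import java.sql.SQLException;
        import javax.naming.InitialContext;
        import javax.naming.NamingException;
        import javax.sql.DataSource;

        public class HrdsConnectionCheck {

            // Must match the JDBC resource name created in the admin console and the
            // connection name used by the Application Module configuration.
            private static final String JNDI_NAME = "jdbc/HRDS";

            public static boolean canConnect() {
                try {
                    InitialContext ctx = new InitialContext();
                    DataSource ds = (DataSource) ctx.lookup(JNDI_NAME);
                    try (Connection conn = ds.getConnection()) {
                        return conn.isValid(5); // 5-second validation timeout
                    }
                } catch (NamingException | SQLException e) {
                    e.printStackTrace();
                    return false;
                }
            }
        }

    If the lookup fails with a NamingException, re-check the JDBC resource name in the admin console before digging into server.log.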

    Read the article

  • Passed: Exam 70-480: Programming in HTML5 with JavaScript and CSS3

    First off: mission accomplished, successfully. And it was fun! Using the resources listed in my previous article about Learning Content, I'd like to thank Microsoft Technical Evangelists Jeremy Foster and Michael Palermo for their excellent jump start videos on Channel 9, and the various authors at Pluralsight.

    Local Prometric testing centre

    Back in November I chose a local testing centre which was the easiest to access from my office despite the horrible traffic you might experience here on the island. Actually, it was not the closest one. But due to their website, their awards as a Microsoft Learning Center, and my general curiosity about the premises, I gave FRCI my priority. Boy, how I would regret this decision this morning... The official Prometric exam guide asks any attendee to show up at least 30 minutes prior to the scheduled time of the test. Well, this should have been the easier part, but unfortunately due to heavier traffic than usual I arrived only 20 minutes before time. Not too bad, but more to come. The building called 'le Hub' is nicely renovated and provides the right environment for an IT group of companies like FRCI. I think they currently have 5 independent IT departments over there. Even the handling at the reception was straightforward, welcoming and put me at ease. But then... first shock: "We don't have any exam registration for today." - Hm, that's nice... Here's my mail confirmation from Prometric. First attack successfully handled, and the lady went off again to check their records. Next shock: a couple of minutes later, another guy tries to explain to me that "the staff of the testing centre is already on vacation and the centre is officially closed." - Are you kidding me? Here's the official confirmation by Prometric, and I don't find it funny that I take a day off only to hear this kind of blubbering nonsense. I thought that I'd be on the safe side choosing a company with a good reputation here on the island. Another 40 (!) minutes later, they finally come back to the waiting area with a pre-filled form about the test appointment. And finally, after an hour of waiting, discussing, restarting the testing PC, and lots of talk, I am allowed to sit down and take the exam.

    Exam details

    Well, you know the rules. Signing an NDA doesn't allow me to provide you any details about the questions or topics that have been covered. Please check out the official exam description, and you're on the right way. Sorry, guys... ;-)

    The result

    "Congratulations! You have passed this Microsoft Certification exam." - In general, I have to admit that the parts on HTML5 and CSS3 were the easiest after all, and that I have to get myself a little bit more familiar with certain JavaScript features like class definitions, inheritance and data security. Anyway, exam passed - who cares about the details?

    Next goal

    Of course, the journey to Microsoft Certifications continues and my next goal is to pass exams 70-481 - Essentials of Developing Windows Store Apps using HTML5 and JavaScript and 70-482 - Advanced Windows Store App Development using HTML5 and JavaScript. This would allow me to achieve the certification of MCSD: Windows Store Apps using HTML5. I guess during 2013 I'll be busy with various learning and teaching lessons.

    Read the article

  • Omni-directional light shadow mapping with cubemaps in WebGL

    - by Winged
    First of all I must say, that I have read a lot of posts describing an usage of cubemaps, but I'm still confused about how to use them. My goal is to achieve a simple omni-directional (point) light type shading in my WebGL application. I know that there is a lot more techniques (like using Two-Hemispheres or Camera Space Shadow Mapping) which are way more efficient, but for an educational purpose cubemaps are my primary goal. Till now, I have adapted a simple shadow mapping which works with spotlights (with one exception: I don't know how to cut off the glitchy part beyond the reach of a single shadow map texture): glitchy shadow mapping<<< So for now, this is how I understand the usage of cubemaps in shadow mapping: Setup a framebuffer (in case of cubemaps - 6 framebuffers; 6 instead of 1 because every usage of framebufferTexture2D slows down an execution which is nicely described here <<<) and a texture cubemap. Also in WebGL depth components are not well supported, so I need to render it to RGBA first. this.texture = gl.createTexture(); gl.bindTexture(gl.TEXTURE_CUBE_MAP, this.texture); gl.texParameteri(gl.TEXTURE_CUBE_MAP, gl.TEXTURE_MIN_FILTER, gl.LINEAR); gl.texParameteri(gl.TEXTURE_CUBE_MAP, gl.TEXTURE_MAG_FILTER, gl.LINEAR); for (var face = 0; face < 6; face++) gl.texImage2D(gl.TEXTURE_CUBE_MAP_POSITIVE_X + face, 0, gl.RGBA, this.size, this.size, 0, gl.RGBA, gl.UNSIGNED_BYTE, null); gl.bindTexture(gl.TEXTURE_CUBE_MAP, null); this.framebuffer = []; for (face = 0; face < 6; face++) { this.framebuffer[face] = gl.createFramebuffer(); gl.bindFramebuffer(gl.FRAMEBUFFER, this.framebuffer[face]); gl.framebufferTexture2D(gl.FRAMEBUFFER, gl.COLOR_ATTACHMENT0, gl.TEXTURE_CUBE_MAP_POSITIVE_X + face, this.texture, 0); gl.framebufferRenderbuffer(gl.FRAMEBUFFER, gl.DEPTH_ATTACHMENT, gl.RENDERBUFFER, this.depthbuffer); var e = gl.checkFramebufferStatus(gl.FRAMEBUFFER); // Check for errors if (e !== gl.FRAMEBUFFER_COMPLETE) throw "Cubemap framebuffer object is incomplete: " + e.toString(); } Setup the light and the camera (I'm not sure if should I store all of 6 view matrices and send them to shaders later, or is there a way to do it with just one view matrix). Render the scene 6 times from the light's position, each time in another direction (X, -X, Y, -Y, Z, -Z) for (var face = 0; face < 6; face++) { gl.bindFramebuffer(gl.FRAMEBUFFER, shadow.buffer.framebuffer[face]); gl.viewport(0, 0, shadow.buffer.size, shadow.buffer.size); gl.clear(gl.COLOR_BUFFER_BIT | gl.DEPTH_BUFFER_BIT); camera.lookAt( light.position.add( cubeMapDirections[face] ) ); scene.draw(shadow.program); } In a second pass, calculate the projection a a current vertex using light's projection and view matrix. Now I don't know If should I calculate 6 of them, because of 6 faces of a cubemap. ScaleMatrix pushes the projected vertex into the 0.0 - 1.0 region. vDepthPosition = ScaleMatrix * uPMatrixFromLight * uVMatrixFromLight * vWorldVertex; In a fragment shader calculate the distance between the current vertex and the light position and check if it's deeper then the depth information read from earlier rendered shadow map. I know how to do it with a 2D Texture, but I have no idea how should I use cubemap texture here. I have read that texture lookups into cubemaps are performed by a normal vector instead of a UV coordinate. What vector should I use? Just a normalized vector pointing to the current vertex? 
    For now, my code for this part looks like this (not working yet):

        float shadow = 1.0;
        vec3 depth = vDepthPosition.xyz / vDepthPosition.w;
        depth.z = length(vWorldVertex.xyz - uLightPosition) * linearDepthConstant;
        float shadowDepth = unpack(textureCube(uDepthMapSampler, vWorldVertex.xyz));
        if (depth.z > shadowDepth)
            shadow = 0.5;

    Could you give me some hints or examples (preferably in WebGL code) of how I should build it?

    Read the article

  • SPTI problem with Mode Select

    - by Bob Murphy
    I'm running into a problem in which an attempt to do a "Mode Select" SCSI command using SPTI is returning an error status of 0x02 ("Check Condition"), and hope someone here might have some tips or suggestions. The code in question is intended to work with at a custom SCSI device. I wrote the original support for it using ASPI under WinXP, and am converting it to work with SPTI under 64-bit Windows 7. Here's the problematic code - and what's happening is, sptwb.spt.ScsiStatus is 2, which is a "Check Condition" error. Unfortunately, the device in question doesn't return useful information when you do a "Request Sense" after this problem occurs, so that's no help. void MSSModeSelect(const ModeSelectRequestPacket& inRequest, StatusResponsePacket& outResponse) { IPC_LOG("MSSModeSelect(): PathID=%d, TargetID=%d, LUN=%d", inRequest.m_Device.m_PathId, inRequest.m_Device.m_TargetId, inRequest.m_Device.m_Lun); int adapterIndex = inRequest.m_Device.m_PathId; HANDLE adapterHandle = prvOpenScsiAdapter(inRequest.m_Device.m_PathId); if (adapterHandle == INVALID_HANDLE_VALUE) { outResponse.m_Status = eScsiAdapterErr; return; } SCSI_PASS_THROUGH_WITH_BUFFERS sptwb; memset(&sptwb, 0, sizeof(sptwb)); #define MODESELECT_BUF_SIZE 32 sptwb.spt.Length = sizeof(SCSI_PASS_THROUGH); sptwb.spt.PathId = inRequest.m_Device.m_PathId; sptwb.spt.TargetId = inRequest.m_Device.m_TargetId; sptwb.spt.Lun = inRequest.m_Device.m_Lun; sptwb.spt.CdbLength = CDB6GENERIC_LENGTH; sptwb.spt.SenseInfoLength = 0; sptwb.spt.DataIn = SCSI_IOCTL_DATA_IN; sptwb.spt.DataTransferLength = MODESELECT_BUF_SIZE; sptwb.spt.TimeOutValue = 2; sptwb.spt.DataBufferOffset = offsetof(SCSI_PASS_THROUGH_WITH_BUFFERS,ucDataBuf); sptwb.spt.Cdb[0] = SCSIOP_MODE_SELECT; sptwb.spt.Cdb[4] = MODESELECT_BUF_SIZE; DWORD length = offsetof(SCSI_PASS_THROUGH_WITH_BUFFERS,ucDataBuf) + sptwb.spt.DataTransferLength; memset(sptwb.ucDataBuf, 0, sizeof(sptwb.ucDataBuf)); sptwb.ucDataBuf[2] = 0x10; sptwb.ucDataBuf[16] = (BYTE)0x01; ULONG bytesReturned = 0; BOOL okay = DeviceIoControl(adapterHandle, IOCTL_SCSI_PASS_THROUGH, &sptwb, sizeof(SCSI_PASS_THROUGH), &sptwb, length, &bytesReturned, FALSE); DWORD gle = GetLastError(); IPC_LOG(" DeviceIoControl() %s", okay ? "worked" : "failed"); if (okay) { outResponse.m_Status = (sptwb.spt.ScsiStatus == 0) ? eOk : ePrinterStatusErr; } else { outResponse.m_Status = eScsiPermissionsErr; } CloseHandle(adapterHandle); } A few more remarks, for what it's worth: This is derived from some old ASPI code that does the "Mode Select" flawlessly. This routine opens \\.\SCSI<whatever> at the beginning, via prvOpenScsiAdapter(), and closes the handle at the end. All the other routines for dealing with the device do the same thing, including the routine to do "Reserve Unit". Is this a good idea under SPTI, or should the call to "Reserve Unit" leave the handle open, so this routine and others in the sequence can use the same handle? This uses the IOCTL_SCSI_PASS_THROUGH. Should "Mode Select" use IOCTL_SCSI_PASS_THROUGH_DIRECT instead? Thanks in advance - any help will be greatly appreciated.

    Read the article

  • Documentation and Test Assertions in Databases

    - by Phil Factor
    When I first worked with Sybase/SQL Server, we thought our databases were impressively large but they were, by today’s standards, pathetically small. We had one script to build the whole database. Every script I ever read was richly annotated; it was more like reading a document. Every table had a comment block, and every line would be commented too. At the end of each routine (e.g. procedure) was a quick integration test, or series of test assertions, to check that nothing in the build was broken. We simply ran the build script, stored in the Version Control System, and it pulled everything together in a logical sequence that not only created the database objects but pulled in the static data. This worked fine at the scale we had. The advantage was that one could, by reading the source code, reach a rapid understanding of how the database worked and how one could interface with it. The problem was that it was a system that meant that only one developer at the time could work on the database. It was very easy for a developer to execute accidentally the entire build script rather than the selected section on which he or she was working, thereby cleansing the database of everyone else’s work-in-progress and data. It soon became the fashion to work at the object level, so that programmers could check out individual views, tables, functions, constraints and rules and work on them independently. It was then that I noticed the trend to generate the source for the VCS retrospectively from the development server. Tables were worst affected. You can, of course, add or delete a table’s columns and constraints retrospectively, which means that the existing source no longer represents the current object. If, after your development work, you generate the source from the live table, then you get no block or line comments, and the source script is sprinkled with silly square-brackets and other confetti, thereby rendering it visually indigestible. Routines, too, were affected. In our system, every routine had a directly attached string of unit-tests. A retro-generated routine has no unit-tests or test assertions. Yes, one can still commit our test code to the VCS but it’s a separate module and teams end up running the whole suite of tests for every individual change, rather than just the tests for that routine, which doesn’t scale for database testing. With Extended properties, one can get the best of both worlds, and even use them to put blame, praise or annotations into your VCS. It requires a lot of work, though, particularly the script to generate the table. The problem is that there are no conventional names beyond ‘MS_Description’ for the special use of extended properties. This makes it difficult to do splendid things such ensuring the integrity of the build by running a suite of tests that are actually stored in extended properties within the database and therefore the VCS. We have lost the readability of database source code over the years, and largely jettisoned the use of test assertions as part of the database build. This is not unexpected in view of the increasing complexity of the structure of databases and number of programmers working on them. There must, surely, be a way of getting them back, but I sometimes wonder if I’m one of very few who miss them.

    Read the article

  • Non-blocking I/O using Servlet 3.1: Scalable applications using Java EE 7 (TOTD #188)

    - by arungupta
    Servlet 3.0 allowed asynchronous request processing, but only traditional (blocking) I/O was permitted. This can restrict the scalability of your applications. In a typical application, ServletInputStream is read in a while loop:

        public class TestServlet extends HttpServlet {
            protected void doGet(HttpServletRequest request, HttpServletResponse response)
                    throws IOException, ServletException {
                ServletInputStream input = request.getInputStream();
                byte[] b = new byte[1024];
                int len = -1;
                while ((len = input.read(b)) != -1) {
                    . . .
                }
            }
        }

    If the incoming data is blocking or streamed slower than the server can read it, then the server thread is left waiting for that data. The same can happen if the data is written to ServletOutputStream. This is resolved in Servlet 3.1 (JSR 340, to be released as part of Java EE 7) by adding event listeners - the ReadListener and WriteListener interfaces. These are then registered using ServletInputStream.setReadListener and ServletOutputStream.setWriteListener. The listeners have callback methods that are invoked when content is available to be read or can be written without blocking. The updated doGet in our case will look like:

        AsyncContext context = request.startAsync();
        ServletInputStream input = request.getInputStream();
        input.setReadListener(new MyReadListener(input, context));

    Invoking the setXXXListener methods indicates that non-blocking I/O is used instead of traditional I/O. At most one ReadListener can be registered on ServletInputStream, and similarly at most one WriteListener can be registered on ServletOutputStream. ServletInputStream.isReady and ServletInputStream.isFinished are new methods to check the status of a non-blocking I/O read. ServletOutputStream.canWrite is a new method to check if data can be written without blocking. The MyReadListener implementation looks like:

        @Override
        public void onDataAvailable() {
            try {
                StringBuilder sb = new StringBuilder();
                int len = -1;
                byte b[] = new byte[1024];
                while (input.isReady() && (len = input.read(b)) != -1) {
                    String data = new String(b, 0, len);
                    System.out.println("--> " + data);
                }
            } catch (IOException ex) {
                Logger.getLogger(MyReadListener.class.getName()).log(Level.SEVERE, null, ex);
            }
        }

        @Override
        public void onAllDataRead() {
            System.out.println("onAllDataRead");
            context.complete();
        }

        @Override
        public void onError(Throwable t) {
            t.printStackTrace();
            context.complete();
        }

    This implementation has three callbacks:

    - onDataAvailable is called whenever data can be read without blocking.
    - onAllDataRead is invoked when the data for the current request has been completely read.
    - onError is invoked if there is an error processing the request.

    Notice that context.complete() is called in onAllDataRead and onError to signal the completion of the data read. For now, the first chunk of available data needs to be read in the doGet or service method of the Servlet; the rest of the data can be read in a non-blocking way using the ReadListener after that. This is going to get cleaned up so that all data reads can happen in the ReadListener only. The sample explained above can be downloaded from here and works with GlassFish 4.0 build 64 and onwards. The slides and a complete re-run of the "What's new in Servlet 3.1: An Overview" session at JavaOne are available here. Here are some more references for you: Java EE 7 Specification Status, Servlet Specification Project, JSR Expert Group Discussion Archive, Servlet 3.1 Javadocs.
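    To complement the ReadListener above, here is a minimal, hypothetical WriteListener sketch. The canWrite check is the method named in this post; the onWritePossible/onError callback names mirror the ReadListener pattern and may differ between spec drafts, and the class and field names are my own:

        import java.io.IOException;
        import javax.servlet.AsyncContext;
        import javax.servlet.ServletOutputStream;
        import javax.servlet.WriteListener;

        public class MyWriteListener implements WriteListener {

            private final ServletOutputStream output;
            private final AsyncContext context;
            private final byte[] payload;
            private int offset = 0;

            public MyWriteListener(ServletOutputStream output, AsyncContext context, byte[] payload) {
                this.output = output;
                this.context = context;
                this.payload = payload;
            }

            @Override
            public void onWritePossible() throws IOException {
                // Keep writing while the container can accept data without blocking.
                while (output.canWrite() && offset < payload.length) {
                    int chunk = Math.min(1024, payload.length - offset);
                    output.write(payload, offset, chunk);
                    offset += chunk;
                }
                if (offset >= payload.length) {
                    context.complete(); // all data written, finish the async request
                }
                // Otherwise the container invokes the callback again when ready.
            }

            @Override
            public void onError(Throwable t) {
                t.printStackTrace();
                context.complete();
            }
        }

    It would be registered from doGet in the same way as the read side, for example response.getOutputStream().setWriteListener(new MyWriteListener(...)) after calling request.startAsync().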

    Read the article

  • Bowing to User Experience

    As a consumer of geeky news it is hard to check my Google Reader without running into two or three posts about Apple's iPad and in particular the changes to the developer guidelines which seemingly restrict developers to using Apple's Xcode tool and Objective-C language for iPad apps. One of the alternatives to Objective-C affected is MonoTouch, an option with some appeal to me as it is based on the Mono implementation of C#. Seemingly restricted is the key phrase here; as far as I can tell, no official announcement has been made about its fate. For more details around MonoTouch for iPhone OS, check out Miguel de Icaza's post: http://tirania.org/blog/archive/2010/Apr-28.html. These restrictions have provoked some outrage, as the perception is that Apple is arrogantly restricting developers' freedom to create applications as they choose and perhaps unwittingly shortchanging iPhone/iPad users who won't benefit from these now never-to-be-made great applications. Apple's response has mostly been to say they are concentrating on providing a certain user experience to their customers, and to do this, they insist everyone uses the tools they approve. Which isn't a surprising line of reasoning given Apple already restricts the hardware used and the content of the apps. The vogue term for this approach is curated, as in a benevolent museum director selecting only the finest artifacts for display or a wise gardener arranging the plants in a garden just so. If this is what a curated experience is like, it is hard to argue that consumers are not responding. My iPhone is probably the most satisfying piece of technology I own. Coming from the Razr, it really was a revolution in how the form factor, interface and user experience all tied together. While the curated approach reinvented the smartphone genre, it is easy to forget that this is not a new approach for Apple. MacBooks and Macs are Apple hardware that run Apple software. And they've been successful, but not quite in the same way as the iPhone or iPad (based on early indications). Why not? Well, a curated approach can only be wildly successful if the curator a) makes the right choices and b) offers choices that no one else has. Although its advantages are eroding, the iPhone was different from other phones: a unique, focused, touch-centric experience. The iPad is an attempt to define another category of computing. Macs and MacBooks are great devices, but are not fundamentally a different user experience than a PC: you still have windows, file folders, mouse and keyboard, and similar applications. So the big question for Apple is: can they hold on to their market advantage, continue innovating in user experience, and stay on top? Or are they going to be like Xerox, with the rest of the world saying thank you for the windows metaphor, now let me implement that better? It will be exciting to watch, with Android already a viable competitor and Microsoft readying Windows Phone 7. And to close the loop back to the restrictions on developing for iPhone OS: at this point the main target appears to be Adobe and Adobe Flash. Apple's calculation is that a) they don't need those developers or b) the developers they want will learn Apple's stuff anyway. My guess is that they are correct; that as much as I like the idea of developers having more options, I am not going to buy a competitor's product to spite Apple unless that product is just as usable. For a non-technical consumer, I don't know that this conversation even factors into the buying decision.
If it did, we'd be talking about how Microsoft is trying to retake a slice of market share from the behemoth that is Linux.

    Read the article

  • How can I best implement 'cache until further notice' with memcache in multiple tiers?

    - by ajreal
    The term "client" used here does not refer to the client's browser, but to a client server.

    Before-cache workflow: 1. client makes an HTTP request --> 2. server processes it --> 3. server stores the parsed results into memcache for next use (cached indefinitely) --> 4. server returns results to the client --> 5. client gets the result and stores it in the client's local memcache with a TTL

    After-cache workflow: 1. another client makes an HTTP request --> 2. memcache hit, so the memcache results are returned to the client --> 3. client gets the result and stores it in the client's local memcache with a TTL

    (TTL = time to live)

    It is possible for me to know when the data was updated, and to expire the relevant memcache(s) accordingly. However, the pitfalls of a client-side cache TTL are:

    - Any data update before the TTL expires is not picked up by the client memcache. Conversely, where there is no update, the client memcache still expires after the TTL.
    - The first request (or concurrent requests) after the cache TTL expires gets throttled, as it needs to repeat the "before cache" workflow.
    - In the event that the client requires several HTTP requests on a single web page, this can be very bad for performance.

    The ideal solution would be for the client to cache indefinitely until further notice. Here are three proposals for that "further notice":

    Proposal 1: Make use of HTTP headers (current implementation). 1. client sends an HTTP request with a last-modified-time header 2. server checks whether the last data modified time equals the last cache time and, if so, returns status 304 3. client decides further processing based on the header. Good: saves some parsing for the client; less data transfer. Bad: firing an HTTP request is still slow; the server end still needs to process lots of requests.

    Proposal 2: Consistently issue an HTTP request to check the last modified time of all data groups. 1. client fires an HTTP request 2. server returns the last modified time for all data groups 3. client compares its local last cache time with the result 4. if a data group's last cache time < the server's last modified time, request again for that data group only. Good: only fetch what is not up to date; fewer requests to the server. Bad: every web page requires an HTTP request.

    Proposal 3: Tell the client when new data is available (push). 1. when the server end notices there is a change to a data group 2. it notifies clients of the changes 3. and helps clients fetch the data again 4. the client's local memcache is then reset after the data is parsed. Good: lets the cache act/behave like a true cache. Bad: encourages race conditions.

    My preference is proposal 3, and something like Gearman could be ideal: where there is a change, the Gearman server sends the task to multiple clients (workers). Am I crazy? (I know my first question is a bit crazy.)
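    To make proposal 2 concrete, here is a minimal, hypothetical Java sketch (a plain in-memory map stands in for the client's local memcache, and fetchGroup() is a placeholder for the HTTP fetch); the idea is that an entry stays cached until the server-reported last-modified time for its data group moves forward:

        import java.util.Map;
        import java.util.concurrent.ConcurrentHashMap;

        // Hypothetical sketch of proposal 2: keep entries until the server says the
        // data group changed, instead of relying on a fixed TTL.
        public class GroupValidatedCache {

            private static class Entry {
                final Object value;
                final long lastModified; // server-reported timestamp at cache time
                Entry(Object value, long lastModified) {
                    this.value = value;
                    this.lastModified = lastModified;
                }
            }

            private final Map<String, Entry> local = new ConcurrentHashMap<>();

            public Object get(String group, Map<String, Long> serverLastModified) {
                Long serverTime = serverLastModified.get(group);
                Entry cached = local.get(group);
                if (cached != null && serverTime != null && cached.lastModified >= serverTime) {
                    return cached.value; // still fresh: cached "until further notice"
                }
                Object fresh = fetchGroup(group); // placeholder for the HTTP request
                local.put(group, new Entry(fresh, serverTime == null ? 0L : serverTime));
                return fresh;
            }

            private Object fetchGroup(String group) {
                // Placeholder: issue the HTTP request for this data group and parse it.
                return new Object();
            }
        }

    The serverLastModified map would come from the single "check all data group last modified times" request in step 2, so each page still costs one round trip, but only stale groups are re-fetched.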

    Read the article

  • Pro/con of using Angular directives for complex form validation/ GUI manipulation

    - by tengen
    I am building a new SPA front end to replace an existing enterprise's legacy hodgepodge of systems that are outdated and in need of updating. I am new to angular, and wanted to see if the community could give me some perspective. I'll state my problem, and then ask my question. I have to generate several series of check boxes based on data from a .js include, with data like this: $scope.fieldMappings.investmentObjectiveMap = [ {'id':"CAPITAL PRESERVATION", 'name':"Capital Preservation"}, {'id':"STABLE", 'name':"Moderate"}, {'id':"BALANCED", 'name':"Moderate Growth"}, // etc {'id':"NONE", 'name':"None"} ]; The checkboxes are created using an ng-repeat, like this: <div ng-repeat="investmentObjective in fieldMappings.investmentObjectiveMap"> ... </div> However, I needed the values represented by the checkboxes to map to a different model (not just 2-way-bound to the fieldmappings object). To accomplish this, I created a directive, which accepts a destination array destarray which is eventually mapped to the model. I also know I need to handle some very specific gui controls, such as unchecking "None" if anything else gets checked, or checking "None" if everything else gets unchecked. Also, "None" won't be an option in every group of checkboxes, so the directive needs to be generic enough to accept a validation function that can fiddle with the checked state of the checkbox group's inputs based on what's already clicked, but smart enough not to break if there is no option called "NONE". I started to do that by adding an ng-click which invoked a function in the controller, but in looking around Stack Overflow, I read people saying that its bad to put DOM manipulation code inside your controller - it should go in directives. So do I need another directive? So far: (html): <input my-checkbox-group type="checkbox" fieldobj="investmentObjective" ng-click="validationfunc()" validationfunc="clearOnNone()" destarray="investor.investmentObjective" /> Directive code: .directive("myCheckboxGroup", function () { return { restrict: "A", scope: { destarray: "=", // the source of all the checkbox values fieldobj: "=", // the array the values came from validationfunc: "&" // the function to be called for validation (optional) }, link: function (scope, elem, attrs) { if (scope.destarray.indexOf(scope.fieldobj.id) !== -1) { elem[0].checked = true; } elem.bind('click', function () { var index = scope.destarray.indexOf(scope.fieldobj.id); if (elem[0].checked) { if (index === -1) { scope.destarray.push(scope.fieldobj.id); } } else { if (index !== -1) { scope.destarray.splice(index, 1); } } }); } }; }) .js controller snippet: .controller( 'SuitabilityCtrl', ['$scope', function ( $scope ) { $scope.clearOnNone = function() { // naughty jQuery DOM manipulation code that // looks at checkboxes and checks/unchecks as needed }; The above code is done and works fine, except the naughty jquery code in clearOnNone(), which is why I wrote this question. And here is my question: after ALL this, I think to myself - I could be done already if I just manually handled all this GUI logic and validation junk with jQuery written in my controller. At what point does it become foolish to write these complicated directives that future developers will have to puzzle over more than if I had just written jQuery code that 99% of us would understand with a glance? How do other developers draw the line? I see this all over Stack Overflow. 
For example, this question seems like it could be answered with a dozen lines of straightforward jQuery, yet he has opted to do it the angular way, with a directive and a partial... it seems like a lot of work for a simple problem. Specifically, I suppose I would like to know: how SHOULD I be writing the code that checks whether "None" has been selected (if it exists as an option in this group of checkboxes), and then check/uncheck the other boxes accordingly? A more complex directive? I can't believe I'm the only developer that is having to implement code that is more complex than needed just to satisfy an opinionated framework.

    Read the article

  • Monitoring Windows Azure Service Bus Endpoint with BizTalk 360?

    - by Michael Stephenson
    I'm currently working with a customer who is undergoing an initiative to expose some of their line-of-business applications to external partners and SaaS applications, and as part of this we have been looking at using the Windows Azure Service Bus. For the first part of the project we focused on some synchronous request-response scenarios where an external application would use the Service Bus relay functionality to get data from some internal applications. When we were looking at the operational monitoring side of the solution it was obvious that, although most of the normal server monitoring capabilities would be required for the on-premises components, we would have to look at new approaches to validate that the operation of the service from outside the organization was working as expected. A number of months ago one of my colleagues, Elton Stoneman, wrote about an approach I have introduced with a number of clients in the past where we implement a diagnostics service in each service component we build. This service would allow us to make a call which would flex some of the working parts of the system to prove it was working within any SLA. This approach is discussed in the following article: http://geekswithblogs.net/EltonStoneman/archive/2011/12/12/the-value-of-a-diagnostics-service.aspx In our solution we wanted to take the same approach, but we had to consider that the service clients were external to the service. We also had to consider that, because calls go through the Windows Azure Service Bus, most standard monitoring solutions do not give you an easy way to check this. In a previous article I described how you can use BizTalk 360 to monitor things using a custom extension to the Web Endpoint Manager, and I felt that we could use this approach to provide an excellent way to monitor our Service Bus endpoint. The previous article is available at the following link: http://geekswithblogs.net/michaelstephenson/archive/2012/09/12/150696.aspx   The Monitoring Solution BizTalk 360 currently has an easy way to hook up the endpoint manager to a URL which it will then call; if a successful response is returned it considers the endpoint to be in a healthy state. We would take advantage of this by creating an ASP.NET web page which would be called by BizTalk 360, and behind this page we would implement the functionality to call the diagnostics service on our Service Bus endpoint. The ASP.NET page could include logic to work out how to handle the response from the diagnostics service. For example, if the overall result of the diagnostics service was successful but the call took longer than a certain amount of time, we could return an error and indicate the service is taking too long. The following diagram illustrates the monitoring pattern.   The diagnostics service, which is hosted in the line-of-business application, allows us to ping a simple message through the Azure Service Bus relay to the WCF services in the LOB application, and we then get a response back indicating that the service is working fine. To implement this I used the exact same approach I described in my previous post to create a custom web page which calls the diagnostics service and then returns an HTTP response code that depends on the error condition returned, or a 200 if it was successful. 
One of the limitations of this approach is that the competing consumer pattern for listening to messages from the Service Bus means you cannot guarantee which server will process your diagnostics check message. With BizTalk 360, however, you can simply add multiple endpoint checks so that it accesses the individual on-premises web servers directly to ensure that each server is working fine, and then check that messages can also be processed through the cloud. Conclusion It took me about 15 minutes to get a proof of concept of this up and running which was able to monitor our web services exposed via Windows Azure Service Bus. I was then able to inherit all of the monitoring benefits of BizTalk 360 to provide an enterprise-class monitoring solution for our cloud-enabled API.
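As a rough illustration of the check page described above, here is a minimal sketch of what it could look like as an ASP.NET generic handler. This is not the article's actual code: the IDiagnosticsService contract, the "diagnosticsEndpoint" configuration name and the 5-second threshold are hypothetical placeholders.

using System;
using System.Diagnostics;
using System.ServiceModel;
using System.Web;

// Hypothetical diagnostics contract exposed by the LOB service via the Service Bus relay
[ServiceContract]
public interface IDiagnosticsService
{
    [OperationContract]
    bool RunDiagnostics();
}

// Generic handler that BizTalk 360's Web Endpoint Manager can poll
public class ServiceBusHealthCheck : IHttpHandler
{
    public void ProcessRequest(HttpContext context)
    {
        var stopwatch = Stopwatch.StartNew();
        try
        {
            // "diagnosticsEndpoint" is assumed to be configured in web.config with the
            // relay binding and credentials needed to reach the on-premises service
            var factory = new ChannelFactory<IDiagnosticsService>("diagnosticsEndpoint");
            IDiagnosticsService channel = factory.CreateChannel();
            bool healthy = channel.RunDiagnostics();
            ((IClientChannel)channel).Close();
            factory.Close();
            stopwatch.Stop();

            if (healthy && stopwatch.ElapsedMilliseconds < 5000)
            {
                context.Response.StatusCode = 200;  // BizTalk 360 treats this as healthy
                context.Response.Write("OK in " + stopwatch.ElapsedMilliseconds + " ms");
            }
            else
            {
                context.Response.StatusCode = 500;  // failed, or succeeded but outside the SLA
                context.Response.Write("Diagnostics check failed or exceeded the SLA");
            }
        }
        catch (Exception ex)
        {
            context.Response.StatusCode = 500;
            context.Response.Write("Diagnostics call failed: " + ex.Message);
        }
    }

    public bool IsReusable { get { return false; } }
}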

    Read the article

  • Fun tips with Analytics

    - by user12620172
    If you read this blog, I am assuming you are at least familiar with the Analytic functions in the ZFSSA. They are basically amazing, very powerful and deep. However, you may not be aware of some great, hidden functions inside the Analytic screen. Once you open a metric, the toolbar looks like this: Now, I’m not going over every tool, as we have done that before, and you can hover your mouse over them and they will tell you what they do. But…. Check this out. Open a metric (CPU Percent Utilization works fine), and click on the “Hour” button, which is the 2nd clock icon. That’s easy, you are now looking at the last hour of data. Now, hold down your ‘Shift’ key, and click it again. Now you are looking at 2 hours of data. Hold down Shift and click it again, and you are looking at 3 hours of data. Are you catching on yet? You can do this with not only the ‘Hour’ button, but also with the ‘Minute’, ‘Day’, ‘Week’, and the ‘Month’ buttons. Very cool. It also works with the ‘Show Minimum’ and ‘Show Maximum’ buttons, allowing you to go to the next iteration of either of those. One last button you can Shift-click is the handy ‘Drill’ button. This button usually drills down on one specific aspect of your metric. If you Shift-click it, it will display a “Rainbow Highlight” of the current metric. This works best if this metric has many ‘Range Average’ items in the left-hand window. Give it a shot. Also, one will sometimes click on a certain second of data in the graph, like this:  In this case, I clicked 4:57 and 21 seconds, and the 'Range Average' on the left went away, and was replaced by the time stamp. It seems at this point to some people that you are now stuck, and can not get back to an average for the whole chart. However, you can actually click on the actual time stamp of "4:57:21" right above the chart. Even though your mouse does not change into the typical browser finger that most links look like, you can click it, and it will change your range back to the full metric. Another trick you may like is to save a certain view or look of a group of graphs. Most of you know you can save a worksheet, but did you know you could Sync them, Pause them, and then Save it? This will save the paused state, allowing you to view it forever the way you see it now.  Heatmaps. Heatmaps are cool, and look like this:  Some metrics use them and some don't. If you have one, and wish to zoom it vertically, try this. Open a heatmap metric like my example above (I believe every metric that deals with latency will show as a heatmap). Select one or two of the ranges on the left. Click the "Change Outlier Elimination" button. Click it again and check out what it does.  Enjoy. Perhaps my next blog entry will be the best Analytic metrics to keep your eyes on, and how you can use the Alerts feature to watch them for you. Steve 

    Read the article

  • Modernizr Rocks HTML5

    - by Laila
    HTML5 is a moving target. At the moment, we don't know what will be in future versions. In most circumstances, this really matters to the developer. When you're using Adobe Air, you can be reasonably sure what works, what is there, and what isn't, since you have a version of the browser built in. With Metro, you can assume that you're going to be using at least IE 10. If, however, you are using HTML5 in a web application, then you are going to rely heavily on Feature Detection. Feature Detection is a collection of techniques that tell you, via JavaScript, whether the current browser has a feature natively implemented or not. Feature Detection isn't just there for the esoteric stuff such as geo-location, progress bars, <canvas> support, the new <input> types, Audio, Video, web workers or storage, but is required even for semantic markup, since old browsers make a pig's ear out of rendering this. Feature detection can't rely just on reading the browser version and inferring from that what works. Instead, you must use JavaScript to check that an HTML5 feature is there before using it. The problem with relying on the user-agent is that it takes a lot of historical data to work out what version does what, and, anyway, the user-agent can be, and sometimes is, spoofed. The open-source library Modernizr is just about the most essential JavaScript library for anyone using HTML5, because it provides APIs to test for most of the CSS3 and HTML5 features before you use them, and is intelligent enough to alter semantic markup into 'legacy' markup using shims on page-load for old browsers. It also allows you to check what video codecs are installed for playing video. It also provides media queries and conditional resource-loading (formerly YepNope.js). Generally, Modernizr gives you the choice of what you do about browsers that don't support the feature that you want. Often, the best choice is graceful degradation, but the resource-loading feature allows you to dynamically load JavaScript shims, called 'polyfills', to replace the standard API for missing or defective HTML5 functionality. As the Modernizr site says, 'Yes, not only can you use HTML5 today, but you can use it in the past, too!' The evolutionary progress of HTML5 requires a more defensive style of JavaScript programming where the programmer adopts a mindset of fearing the worst (IE 6) rather than assuming the best, whilst exploiting as many of the new HTML features as possible for the requirements of the site or HTML application. Why would anyone want the distraction of developing their own techniques to do this when Modernizr exists to do this for you? Laila

    Read the article

  • Profiling Startup Of VS2012 &ndash; JustTrace Profiler

    - by Alois Kraus
    JustTrace is made by Telerik, which is mainly known for its collection of UI controls. The current version (2012.3.1127.0) does include a performance and memory profiler, which costs 614€ and is currently on sale with a special offer for 306€. It does include one year of free upgrades. The uneven € numbers are calculated from the 799€ price and the 50% discount. The UI is already in Metro style and simple to use. Multi-process profiling, attach, and method recording filters are not supported. It looks like JustTrace is, like Ants, a Just My Code profiler. For stuff where you do not have the pdbs or you want to dig deeper into the BCL code you will not get far. After getting the profile data you get a plain list with hit count and own time in the All Methods grid. The method list for all methods is also suspiciously short, which is a clear sign that you will not get far during the analysis of foreign code. But at least there is also a memory profiler included. For this I have to choose “Memory Profiler” as the Profiling Type in the first window to check the memory consumption of VS. There are some interesting numbers to see, but I do really miss the thread stack window from YourKit. How am I supposed to get a clue about which places I should look at when much memory is allocated and the CPU consumption is high? The Snapshot summary gives a rough overview which is ok for a first impression. Next is Assemblies. This gives you a list of all loaded assemblies. Not terribly useful. The By Type view gives you exactly what it is supposed to do. You have to keep in mind that this list is filtered by the types you did check in the Assemblies list. The By Type instance list does only show types from assemblies which do not originate from Microsoft. By default mscorlib and System are not checked. That is the reason why my By Type window looked the way it did the first time. The idea behind this feature is to show only your instances, because you are ultimately responsible for the overall memory consumption. I am not sure if I like this feature, because by default it hides too much. I do want to see at least how many strings and arrays are allocated. A simple namespace filter would also do it in my opinion. Now you can examine all string instances and see who in the object graph keeps a reference to them. That is nice, but YourKit has the big plus that you can also look into the string contents. I am also not sure how cycles in the graph are visualized and what will happen if you have thousands of objects referencing you. That's pretty much it about JustTrace. It can help the average developer to pinpoint performance and memory issues by just looking at his own code and instances. Showing them more will not help them because the sheer amount of information will overwhelm them. And you need to have a pretty good understanding of how the GC and the CLR work. When you have a performance issue at a customer machine it is sometimes very helpful to be able to bring a profiler onto the machine (no pdbs, …) and to get a full snapshot of all processes which are involved in the problematic use case. For these more advanced use cases JustTrace is certainly the wrong tool. Next: SpeedTrace

    Read the article

  • A* algorithm very slow

    - by Amaranth
    I have an programming a RTS game (I use XNA with C#). The pathfinding is working fine, except that when it has a lot of node to search in, there is a lag period of one or two seconds, it happens mainly when there is no path to the target destination, since it that situation there is more nodes to explore. I have the same problem when the path is shorter but selected more than 3 units (can't take the same path since the selected units can be in different part of the map). private List<NodeInfo> FindPath(Unit u, NodeInfo start, NodeInfo end) { Map map = GameInfo.GetInstance().GameMap; _nearestToTarget = start; start.MoveCost = 0; Vector2 endPosition = map.getTileByPos(end.X, end.Y).Position; //getTileByPos simply gets the tile in a 2D array with the X and Y indexes start.EstimatedRemainingCost = (int)(endPosition - map.getTileByPos(start.X, start.Y).Position).Length(); start.Parent = null; List<NodeInfo> openedNodes = new List<NodeInfo>(); ; List<NodeInfo> closedNodes = new List<NodeInfo>(); Point[] movements = GetMovements(u.UnitType); openedNodes.Add(start); while (!closedNodes.Contains(end) && openedNodes.Count > 0) { //Loop in nodes to find lowest cost NodeInfo currentNode = FindLowestCostOpenedNode(openedNodes); openedNodes.Remove(currentNode); closedNodes.Add(currentNode); Vector2 previousMouvement; if (currentNode.Parent == null) { previousMouvement = ConvertRotationToDirectionVector(u.Rotation); } else { previousMouvement = map.getTileByPos(currentNode.X, currentNode.Y).Position - map.getTileByPos(currentNode.Parent.X, currentNode.Parent.Y).Position; previousMouvement.Normalize(); } //For each neighbor foreach (Point movement in movements) { Point exploredGridPos = new Point(currentNode.X + movement.X, currentNode.Y + movement.Y); //Checks if valid move and checks if not if closed nodes list if (ValidNavigableNode(u.UnitType, new Point(currentNode.X, currentNode.Y), exploredGridPos) && !closedNodes.Contains(_gridMap[exploredGridPos.Y, exploredGridPos.X])) { NodeInfo exploredNode = _gridMap[exploredGridPos.Y, exploredGridPos.X]; Tile.TileType exploredTerrain = map.getTileByPos(exploredGridPos.X, exploredGridPos.Y).TerrainType; if(openedNodes.Contains(exploredNode)) { int newCost = currentNode.MoveCost + GetMoveCost(previousMouvement, movement, exploredTerrain); if (newCost < exploredNode.MoveCost) { exploredNode.Parent = currentNode; exploredNode.MoveCost = newCost; //Find nearest tile to the target (in case doesn't find path to target) //Only compares the node to the current nearest FindNearest(exploredNode); } } else { exploredNode.Parent = currentNode; exploredNode.MoveCost = currentNode.MoveCost + GetMoveCost(previousMouvement, movement, exploredTerrain); Vector2 exploredNodeWorldPos = map.getTileByPos(exploredGridPos.X, exploredGridPos.Y).Position; exploredNode.EstimatedRemainingCost = (int)(endPosition - exploredNodeWorldPos).Length(); //Find nearest tile to the target (in case doesn't find path to target) //Only compares the node to the current nearest FindNearest(exploredNode); openedNodes.Add(exploredNode); } } } } return closedNodes; } After that, I simply check if the end node is contained in the returned nodes. If so, I add the end node and each parent until I reach the start. If not, I add the nearestToTarget and each parent until I reach the start. I added a condition before calling FindPath so that only one unit can call a find path each frame (60 frame per second), but it makes no difference. 
I thought maybe I could solve this by allowing the pathfinding to run in the background while the game continues to run correctly, even if it takes a few frames (it is currently sequential since it is called in the update() of the unit if there's a target location but no path), but I don't really know how... I also thought about sorting my opened nodes list by cost so I don't have to loop over them, but I don't know if that would have an effect on the performance... Would there be other solutions? P.S. In the code, when I get the Move Cost, I check if the unit has to turn to perform the move, and the terrain type, nothing hard to do.
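One of the ideas floated above (keeping the open list ordered by cost) is normally done with a priority queue rather than a linear scan, which removes the per-iteration FindLowestCostOpenedNode loop. Below is a minimal binary min-heap sketch keyed on F = MoveCost + EstimatedRemainingCost; NodeInfo is the question's own type, everything else is illustrative and untested against the actual project. When a node's cost is lowered you can simply push it again and skip entries already in the closed set when popping.

using System.Collections.Generic;

// Minimal binary min-heap for A* open nodes, ordered by F = MoveCost + EstimatedRemainingCost
public class OpenNodeHeap
{
    private readonly List<NodeInfo> items = new List<NodeInfo>();

    public int Count { get { return items.Count; } }

    private static int F(NodeInfo n) { return n.MoveCost + n.EstimatedRemainingCost; }

    private void Swap(int a, int b)
    {
        NodeInfo tmp = items[a]; items[a] = items[b]; items[b] = tmp;
    }

    // Add a node and restore the heap property by bubbling it up
    public void Push(NodeInfo node)
    {
        items.Add(node);
        int i = items.Count - 1;
        while (i > 0)
        {
            int parent = (i - 1) / 2;
            if (F(items[parent]) <= F(items[i])) break;
            Swap(parent, i);
            i = parent;
        }
    }

    // Remove and return the node with the lowest F cost
    public NodeInfo Pop()
    {
        NodeInfo root = items[0];
        int last = items.Count - 1;
        items[0] = items[last];
        items.RemoveAt(last);
        int i = 0;
        while (true)
        {
            int left = 2 * i + 1, right = 2 * i + 2, smallest = i;
            if (left < items.Count && F(items[left]) < F(items[smallest])) smallest = left;
            if (right < items.Count && F(items[right]) < F(items[smallest])) smallest = right;
            if (smallest == i) break;
            Swap(i, smallest);
            i = smallest;
        }
        return root;
    }
}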

    Read the article

  • Entity System with C++ templates

    - by tommaisey
    I've been getting interested in the Entity/Component style of game programming, and I've come up with a design in C++ which I'd like a critique of. I decided to go with a fairly pure Entity system, where entities are simply an ID number. Components are stored in a series of vectors - one for each Component type. However, I didn't want to have to add boilerplate code for every new Component type I added to the game. Nor did I want to use macros to do this, which frankly scare me. So I've come up with a system based on templates and type hinting. But there are some potential issues I'd like to check before I spend ages writing this (I'm a slow coder!) All Components derive from a Component base class. This base class has a protected constructor, that takes a string parameter. When you write a new derived Component class, you must initialise the base with the name of your new class in a string. When you first instantiate a new DerivedComponent, it adds the string to a static hashmap inside Component mapped to a unique integer id. When you subsequently instantiate more Components of the same type, no action is taken. The result (I think) should be a static hashmap with the name of each class derived from Component that you instantiate at least once, mapped to a unique id, which can by obtained with the static method Component::getTypeId ("DerivedComponent"). Phew. The next important part is TypedComponentList<typename PropertyType>. This is basically just a wrapper to an std::vector<typename PropertyType> with some useful methods. It also contains a hashmap of entity ID numbers to slots in the array so we can find Components by their entity owner. Crucially TypedComponentList<> is derived from the non-template class ComponentList. This allows me to maintain a list of pointers to ComponentList in my main ComponentManager, which actually point to TypedComponentLists with different template parameters (sneaky). The Component manager has template functions such as: template <typename ComponentType> void addProperty (ComponentType& component, int componentTypeId, int entityId) and: template <typename ComponentType> TypedComponentList<ComponentType>* getComponentList (int componentTypeId) which deal with casting from ComponentList to the correct TypedComponentList for you. So to get a list of a particular type of Component you call: TypedComponentList<MyComponent>* list = componentManager.getComponentList<MyComponent> (Component::getTypeId("MyComponent")); Which I'll admit looks pretty ugly. Bad points of the design: If a user of the code writes a new Component class but supplies the wrong string to the base constructor, the whole system will fail. Each time a new Component is instantiated, we must check a hashed string to see if that component type has bee instantiated before. Will probably generate a lot of assembly because of the extensive use of templates. I don't know how well the compiler will be able to minimise this. You could consider the whole system a bit complex - perhaps premature optimisation? But I want to use this code again and again, so I want it to be performant. Good points of the design: Components are stored in typed vectors but they can also be found by using their entity owner id as a hash. This means we can iterate them fast, and minimise cache misses, but also skip straight to the component we need if necessary. We can freely add Components of different types to the system without having to add and manage new Component vectors by hand. What do you think? 
Do the good points outweigh the bad?

    Read the article

  • Getting 2D Platformer entity collision Response Correct (side-to-side + jumping/landing on heads)

    - by jbrennan
    I've been working on a 2D (tile based) 2D platformer for iOS and I've got basic entity collision detection working, but there's just something not right about it and I can't quite figure out how to solve it. There are 2 forms of collision between player entities as I can tell, either the two players (human controlled) are hitting each other side-to-side (i. e. pushing against one another), or one player has jumped on the head of the other player (naturally, if I wanted to expand this to player vs enemy, the effects would be different, but the types of collisions would be identical, just the reaction should be a little different). In my code I believe I've got the side-to-side code working: If two entities press against one another, then they are both moved back on either side of the intersection rectangle so that they are just pushing on each other. I also have the "landed on the other player's head" part working. The real problem is, if the two players are currently pushing up against each other, and one player jumps, then at one point as they're jumping, the height-difference threshold that counts as a "land on head" is passed and then it registers as a jump. As a life-long player of 2D Mario Bros style games, this feels incorrect to me, but I can't quite figure out how to solve it. My code: (it's really Objective-C but I've put it in pseudo C-style code just to be simpler for non ObjC readers) void checkCollisions() { // For each entity in the scene, compare it with all other entities (but not with one it's already compared against) for (int i = 0; i < _allGameObjects.count(); i++) { // GameObject is an Entity GEGameObject *firstGameObject = _allGameObjects.objectAtIndex(i); // Don't check against yourself or any previous entity for (int j = i+1; j < _allGameObjects.count(); j++) { GEGameObject *secondGameObject = _allGameObjects.objectAtIndex(j); // Get the collision bounds for both entities, then see if they intersect // CGRect is a C-struct with an origin Point (x, y) and a Size (w, h) CGRect firstRect = firstGameObject.collisionBounds(); CGRect secondRect = secondGameObject.collisionBounds(); // Collision of any sort if (CGRectIntersectsRect(firstRect, secondRect)) { //////////////////////////////// // // // Check for jumping first (???) // // //////////////////////////////// if (firstRect.origin.y > (secondRect.origin.y + (secondRect.size.height * 0.7))) { // the top entity could be pretty far down/in to the bottom entity.... firstGameObject.didLandOnEntity(secondGameObject); } else if (secondRect.origin.y > (firstRect.origin.y + (firstRect.size.height * 0.7))) { // second entity was actually on top.... 
secondGameObject.didLandOnEntity(firstGameObject); } else if (firstRect.origin.x > secondRect.origin.x && firstRect.origin.x < (secondRect.origin.x + secondRect.size.width)) { // Hit from the RIGHT CGRect intersection = CGRectIntersection(firstRect, secondRect); // The NUDGE just offsets either object back to the left or right // After the nudging, they are exactly pressing against each other with no intersection firstGameObject.nudgeToRightOfIntersection(intersection); secondGameObject.nudgeToLeftOfIntersection(intersection); } else if ((firstRect.origin.x + firstRect.size.width) > secondRect.origin.x) { // hit from the LEFT CGRect intersection = CGRectIntersection(firstRect, secondRect); secondGameObject.nudgeToRightOfIntersection(intersection); firstGameObject.nudgeToLeftOfIntersection(intersection); } } } } } I think my collision detection code is pretty close, but obviously I'm doing something a little wrong. I really think it's to do with the way my jumps are checked (I wanted to make sure that a jump could happen from an angle, instead of only when the falling player is at a right angle to the player below). Can someone please help me here? I haven't been able to find many resources on how to do this properly (and thinking like a game developer is new for me). Thanks in advance!
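For what it is worth, a common alternative to the 0.7-height threshold used above is to classify the contact by the shape of the intersection rectangle (the axis of least penetration): a wide, flat overlap means one body landed on the other, a tall, thin overlap means a side push. The sketch below is only an illustration of that idea in C#, with a hypothetical Rect type rather than the poster's CGRect-based code, and it assumes the question's convention that a larger y means higher up.

using System;

public struct Rect
{
    public float X, Y, Width, Height;   // origin + size, same layout as CGRect

    public static Rect Intersection(Rect a, Rect b)
    {
        float x1 = Math.Max(a.X, b.X);
        float y1 = Math.Max(a.Y, b.Y);
        float x2 = Math.Min(a.X + a.Width, b.X + b.Width);
        float y2 = Math.Min(a.Y + a.Height, b.Y + b.Height);
        return new Rect { X = x1, Y = y1, Width = x2 - x1, Height = y2 - y1 };
    }
}

public enum ContactType { None, FirstOnTopOfSecond, SecondOnTopOfFirst, SidePush }

public static class CollisionClassifier
{
    public static ContactType Classify(Rect first, Rect second)
    {
        Rect overlap = Rect.Intersection(first, second);
        if (overlap.Width <= 0 || overlap.Height <= 0)
            return ContactType.None;                 // no actual intersection

        if (overlap.Width > overlap.Height)
        {
            // Wide, flat overlap: vertical contact; the body whose origin is higher is on top
            return first.Y > second.Y
                ? ContactType.FirstOnTopOfSecond
                : ContactType.SecondOnTopOfFirst;
        }
        return ContactType.SidePush;                 // tall, thin overlap: nudge along X
    }
}

Resolving the jump case then no longer depends on how far one player has sunk into the other, which tends to remove the "side push suddenly becomes a head landing" behaviour described in the question.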

    Read the article

  • Why is 0 false?

    - by Morwenn
    This question may sound dumb, but why does 0 evaluates to false and any other [integer] value to true is most of programming languages? String comparison Since the question seems a little bit too simple, I will explain myself a little bit more: first of all, it may seem evident to any programmer, but why wouldn't there be a programming language - there may actually be, but not any I used - where 0 evaluates to true and all the other [integer] values to false? That one remark may seem random, but I have a few examples where it may have been a good idea. First of all, let's take the example of strings three-way comparison, I will take C's strcmp as example: any programmer trying C as his first language may be tempted to write the following code: if (strcmp(str1, str2)) { // Do something... } Since strcmp returns 0 which evaluates to false when the strings are equal, what the beginning programmer tried to do fails miserably and he generally does not understand why at first. Had 0 evaluated to true instead, this function could have been used in its most simple expression - the one above - when comparing for equality, and the proper checks for -1 and 1 would have been done only when needed. We would have considered the return type as bool (in our minds I mean) most of the time. Moreover, let's introduce a new type, sign, that just takes values -1, 0 and 1. That can be pretty handy. Imagine there is a spaceship operator in C++ and we want it for std::string (well, there already is the compare function, but spaceship operator is more fun). The declaration would currently be the following one: sign operator<=>(const std::string& lhs, const std::string& rhs); Had 0 been evaluated to true, the spaceship operator wouldn't even exist, and we could have declared operator== that way: sign operator==(const std::string& lhs, const std::string& rhs); This operator== would have handled three-way comparison at once, and could still be used to perform the following check while still being able to check which string is lexicographically superior to the other when needed: if (str1 == str2) { // Do something... } Old errors handling We now have exceptions, so this part only applies to the old languages where no such thing exist (C for example). If we look at C's standard library (and POSIX one too), we can see for sure that maaaaany functions return 0 when successful and any integer otherwise. I have sadly seen some people do this kind of things: #define TRUE 0 // ... if (some_function() == TRUE) { // Here, TRUE would mean success... // Do something } If we think about how we think in programming, we often have the following reasoning pattern: Do something Did it work? Yes -> That's ok, one case to handle No -> Why? Many cases to handle If we think about it again, it would have made sense to put the only neutral value, 0, to yes (and that's how C's functions work), while all the other values can be there to solve the many cases of the no. However, in all the programming languages I know (except maybe some experimental esotheric languages), that yes evaluates to false in an if condition, while all the no cases evaluate to true. There are many situations when "it works" represents one case while "it does not work" represents many probable causes. If we think about it that way, having 0 evaluate to true and the rest to false would have made much more sense. 
Conclusion My conclusion is essentially my original question: why did we design languages where 0 is false and the other values are true, taking into account my few examples above and maybe some more I did not think of? Follow-up: It's nice to see there are many answers with many ideas and as many possible reasons for it to be like that. I love how passionate you seem to be about it. I originally asked this question out of boredom, but since you seem so passionate, I decided to go a little further and ask about the rationale behind the Boolean choice for 0 and 1 on Math.SE :)
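The strcmp point above is easy to reproduce in other languages that follow the same three-way convention. The snippet below is just an illustration in C# (used here only for consistency with the other examples on this page): String.Compare also returns 0 for equality, but because C# never implicitly converts an int to a bool, the if (strcmp(...)) trap cannot even compile, which is one way a language can sidestep the whole question.

using System;

class ZeroMeansEqualDemo
{
    static void Main()
    {
        string a = "apple", b = "apple";

        // Three-way comparison, same convention as C's strcmp: negative, zero (equal), positive
        int result = String.Compare(a, b, StringComparison.Ordinal);

        // if (result) { ... }   // does not compile: an int is never treated as a bool in C#
        if (result == 0)
        {
            Console.WriteLine("The strings are equal.");  // the 0-means-equal rule must be spelled out
        }
    }
}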

    Read the article

  • Hosting the Razor Engine for Templating in Non-Web Applications

    - by Rick Strahl
    Microsoft’s new Razor HTML Rendering Engine that is currently shipping with ASP.NET MVC previews can be used outside of ASP.NET. Razor is an alternative view engine that can be used instead of the ASP.NET Page engine that currently works with ASP.NET WebForms and MVC. It provides a simpler and more readable markup syntax and is much more light weight in terms of functionality than the full blown WebForms Page engine, focusing only on features that are more along the lines of a pure view engine (or classic ASP!) with focus on expression and code rendering rather than a complex control/object model. Like the Page engine though, the parser understands .NET code syntax which can be embedded into templates, and behind the scenes the engine compiles markup and script code into an executing piece of .NET code in an assembly. Although it ships as part of the ASP.NET MVC and WebMatrix the Razor Engine itself is not directly dependent on ASP.NET or IIS or HTTP in any way. And although there are some markup and rendering features that are optimized for HTML based output generation, Razor is essentially a free standing template engine. And what’s really nice is that unlike the ASP.NET Runtime, Razor is fairly easy to host inside of your own non-Web applications to provide templating functionality. Templating in non-Web Applications? Yes please! So why might you host a template engine in your non-Web application? Template rendering is useful in many places and I have a number of applications that make heavy use of it. One of my applications – West Wind Html Help Builder - exclusively uses template based rendering to merge user supplied help text content into customizable and executable HTML markup templates that provide HTML output for CHM style HTML Help. This is an older product and it’s not actually using .NET at the moment – and this is one reason I’m looking at Razor for script hosting at the moment. For a few .NET applications though I’ve actually used the ASP.NET Runtime hosting to provide templating and mail merge style functionality and while that works reasonably well it’s a very heavy handed approach. It’s very resource intensive and has potential issues with versioning in various different versions of .NET. The generic implementation I created in the article above requires a lot of fix up to mimic an HTTP request in a non-HTTP environment and there are a lot of little things that have to happen to ensure that the ASP.NET runtime works properly most of it having nothing to do with the templating aspect but just satisfying ASP.NET’s requirements. The Razor Engine on the other hand is fairly light weight and completely decoupled from the ASP.NET runtime and the HTTP processing. Rather it’s a pure template engine whose sole purpose is to render text templates. Hosting this engine in your own applications can be accomplished with a reasonable amount of code (actually just a few lines with the tools I’m about to describe) and without having to fake HTTP requests. It’s also much lighter on resource usage and you can easily attach custom properties to your base template implementation to easily pass context from the parent application into templates all of which was rather complicated with ASP.NET runtime hosting. Installing the Razor Template Engine You can get Razor as part of the MVC 3 (RC and later) or Web Matrix. Both are available as downloadable components from the Web Platform Installer Version 3.0 (!important – V2 doesn’t show these components). 
If you already have that version of the WPI installed just fire it up. You can get the latest version of the Web Platform Installer from here: http://www.microsoft.com/web/gallery/install.aspx Once the platform Installer 3.0 is installed install either MVC 3 or ASP.NET Web Pages. Once installed you’ll find a System.Web.Razor assembly in C:\Program Files\Microsoft ASP.NET\ASP.NET Web Pages\v1.0\Assemblies\System.Web.Razor.dll which you can add as a reference to your project. Creating a Wrapper The basic Razor Hosting API is pretty simple and you can host Razor with a (large-ish) handful of lines of code. I’ll show the basics of it later in this article. However, if you want to customize the rendering and handle assembly and namespace includes for the markup as well as deal with text and file inputs as well as forcing Razor to run in a separate AppDomain so you can unload the code-generated assemblies and deal with assembly caching for re-used templates little more work is required to create something that is more easily reusable. For this reason I created a Razor Hosting wrapper project that combines a bunch of this functionality into an easy to use hosting class, a hosting factory that can load the engine in a separate AppDomain and a couple of hosting containers that provided folder based and string based caching for templates for an easily embeddable and reusable engine with easy to use syntax. If you just want the code and play with the samples and source go grab the latest code from the Subversion Repository at: http://www.west-wind.com:8080/svn/articles/trunk/RazorHosting/ or a snapshot from: http://www.west-wind.com/files/tools/RazorHosting.zip Getting Started Before I get into how hosting with Razor works, let’s take a look at how you can get up and running quickly with the wrapper classes provided. It only takes a few lines of code. The easiest way to use these Razor Hosting Wrappers is to use one of the two HostContainers provided. One is for hosting Razor scripts in a directory and rendering them as relative paths from these script files on disk. The other HostContainer serves razor scripts from string templates… Let’s start with a very simple template that displays some simple expressions, some code blocks and demonstrates rendering some data from contextual data that you pass to the template in the form of a ‘context’. Here’s a simple Razor template: @using System.Reflection Hello @Context.FirstName! Your entry was entered on: @Context.Entered @{ // Code block: Update the host Windows Form passed in through the context Context.WinForm.Text = "Hello World from Razor at " + DateTime.Now.ToString(); } AppDomain Id: @AppDomain.CurrentDomain.FriendlyName Assembly: @Assembly.GetExecutingAssembly().FullName Code based output: @{ // Write output with Response object from code string output = string.Empty; for (int i = 0; i < 10; i++) { output += i.ToString() + " "; } Response.Write(output); } Pretty easy to see what’s going on here. The only unusual thing in this code is the Context object which is an arbitrary object I’m passing from the host to the template by way of the template base class. I’m also displaying the current AppDomain and the executing Assembly name so you can see how compiling and running a template actually loads up new assemblies. Also note that as part of my context I’m passing a reference to the current Windows Form down to the template and changing the title from within the script. 
It’s a silly example, but it demonstrates two-way communication between host and template and back which can be very powerful. The easiest way to quickly render this template is to use the RazorEngine<TTemplateBase> class. The generic parameter specifies a template base class type that is used by Razor internally to generate the class it generates from a template. The default implementation provided in my RazorHosting wrapper is RazorTemplateBase. Here’s a simple one that renders from a string and outputs a string: var engine = new RazorEngine<RazorTemplateBase>(); // we can pass any object as context - here create a custom context var context = new CustomContext() { WinForm = this, FirstName = "Rick", Entered = DateTime.Now.AddDays(-10) }; string output = engine.RenderTemplate(this.txtSource.Text new string[] { "System.Windows.Forms.dll" }, context); if (output == null) this.txtResult.Text = "*** ERROR:\r\n" + engine.ErrorMessage; else this.txtResult.Text = output; Simple enough. This code renders a template from a string input and returns a result back as a string. It  creates a custom context and passes that to the template which can then access the Context’s properties. Note that anything passed as ‘context’ must be serializable (or MarshalByRefObject) – otherwise you get an exception when passing the reference over AppDomain boundaries (discussed later). Passing a context is optional, but is a key feature in being able to share data between the host application and the template. Note that we use the Context object to access FirstName, Entered and even the host Windows Form object which is used in the template to change the Window caption from within the script! In the code above all the work happens in the RenderTemplate method which provide a variety of overloads to read and write to and from strings, files and TextReaders/Writers. Here’s another example that renders from a file input using a TextReader: using (reader = new StreamReader("templates\\simple.csHtml", true)) { result = host.RenderTemplate(reader, new string[] { "System.Windows.Forms.dll" }, this.CustomContext); } RenderTemplate() is fairly high level and it handles loading of the runtime, compiling into an assembly and rendering of the template. If you want more control you can use the lower level methods to control each step of the way which is important for the HostContainers I’ll discuss later. Basically for those scenarios you want to separate out loading of the engine, compiling into an assembly and then rendering the template from the assembly. Why? So we can keep assemblies cached. In the code above a new assembly is created for each template rendered which is inefficient and uses up resources. Depending on the size of your templates and how often you fire them you can chew through memory very quickly. 
This slightly lower-level approach is only a couple of extra steps: // we can pass any object as context - here create a custom context var context = new CustomContext() { WinForm = this, FirstName = "Rick", Entered = DateTime.Now.AddDays(-10) }; var engine = new RazorEngine<RazorTemplateBase>(); string assId = null; using (StringReader reader = new StringReader(this.txtSource.Text)) { assId = engine.ParseAndCompileTemplate(new string[] { "System.Windows.Forms.dll" }, reader); } string output = engine.RenderTemplateFromAssembly(assId, context); if (output == null) this.txtResult.Text = "*** ERROR:\r\n" + engine.ErrorMessage; else this.txtResult.Text = output; The difference here is that you can capture the assembly – or rather an Id to it – and potentially hold on to it to render again later, assuming the template hasn't changed. The HostContainers take advantage of this feature to cache the assemblies based on certain criteria, like a filename and file time stamp or a string hash, which, if unchanged, indicate that an assembly can be reused. Note that ParseAndCompileTemplate returns an assembly Id rather than the assembly itself. This is done so that the assembly always stays in the host's AppDomain and is not passed across AppDomain boundaries, which would cause load failures. We'll talk more about this in a minute, but for now just realize that assembly references are stored in a list and are accessible by this Id to allow locating and re-executing the assembly based on that Id. Reuse of the assembly avoids recompilation overhead and creation of yet another assembly that loads into the current AppDomain. You can play around with several different versions of the above code in the main sample form:   Using Hosting Containers for more Control and Caching The above examples simply render templates into assemblies each and every time they are executed. While this works and is even reasonably fast, it's not terribly efficient. If you render templates more than once it would be nice if you could cache the generated assemblies, for example, to avoid re-compiling and creating a new assembly each time. Additionally it would be nice to load template assemblies into a separate AppDomain, optionally, to be able to unload assemblies and also to protect your host application from scripting attacks with malicious template code. Hosting containers also provide a wrapper around the RazorEngine<T> instance, a factory (which allows creation in separate AppDomains) and an easy way to start and stop the container 'runtime'. The Razor Hosting samples provide two hosting containers: RazorFolderHostContainer and StringHostContainer. The folder host provides a simple runtime environment for a folder structure, similar to the way that the ASP.NET runtime handles a virtual directory as its 'application' root. Templates are loaded from disk with relative paths and the resulting assemblies are cached unless the template on disk is changed. The StringHostContainer similarly checks string hashes – if the same string is passed a second time a cached version of the assembly is used. Here's how HostContainers work. I'll use the FolderHostContainer because it's likely the most common way you'd use templates – from disk-based templates that can be easily edited and maintained on disk.
The first step is to create an instance of it and keep it around somewhere (in the example it’s attached as a property to the Form): RazorFolderHostContainer Host = new RazorFolderHostContainer(); public RazorFolderHostForm() { InitializeComponent(); // The base path for templates - templates are rendered with relative paths // based on this path. Host.TemplatePath = Path.Combine(Environment.CurrentDirectory, TemplateBaseFolder); // Add any assemblies you want reference in your templates Host.ReferencedAssemblies.Add("System.Windows.Forms.dll"); // Start up the host container Host.Start(); } Next anytime you want to render a template you can use simple code like this: private void RenderTemplate(string fileName) { // Pass the template path via the Context var relativePath = Utilities.GetRelativePath(fileName, Host.TemplatePath); if (!Host.RenderTemplate(relativePath, this.Context, Host.RenderingOutputFile)) { MessageBox.Show("Error: " + Host.ErrorMessage); return; } this.webBrowser1.Navigate("file://" + Host.RenderingOutputFile); } You can also render the output to a string instead of to a file: string result = Host.RenderTemplateToString(relativePath,context); Finally if you want to release the engine and shut down the hosting AppDomain you can simply do: Host.Stop(); Stopping the AppDomain and restarting it (ie. calling Stop(); followed by Start()) is also a nice way to release all resources in the AppDomain. The FolderBased domain also supports partial Rendering based on root path based relative paths with the same caching characteristics as the main templates. From within a template you can call out to a partial like this: @RenderPartial(@"partials\PartialRendering.cshtml", Context) where partials\PartialRendering.cshtml is a relative to the template root folder. The folder host example lets you load up templates from disk and display the result in a Web Browser control which demonstrates using Razor HTML output from templates that contain HTML syntax which happens to me my target scenario for Html Help Builder.   The Razor Engine Wrapper Project The project I created to wrap Razor hosting has a fair bit of code and a number of classes associated with it. Most of the components are internally used and as you can see using the final RazorEngine<T> and HostContainer classes is pretty easy. The classes are extensible and I suspect developers will want to build more customized host containers for their applications. Host containers are the key to wrapping up all functionality – Engine, BaseTemplate, AppDomain Hosting, Caching etc in a logical piece that is ready to be plugged into an application. When looking at the code there are a couple of core features provided: Core Razor Engine Hosting This is the core Razor hosting which provides the basics of loading a template, compiling it into an assembly and executing it. This is fairly straightforward, but without a host container that can cache assemblies based on some criteria templates are recompiled and re-created each time which is inefficient (although pretty fast). The base engine wrapper implementation also supports hosting the Razor runtime in a separate AppDomain for security and the ability to unload it on demand. Host Containers The engine hosting itself doesn’t provide any sort of ‘runtime’ service like picking up files from disk, caching assemblies and so forth. So my implementation provides two HostContainers: RazorFolderHostContainer and RazorStringHostContainer. 
The FolderHost works off a base directory and loads templates based on relative paths (sort of like the ASP.NET runtime does off a virtual). The HostContainers also deal with caching of template assemblies – for the folder host the file date is tracked and checked for updates and unless the template is changed a cached assembly is reused. The StringHostContainer similiarily checks string hashes to figure out whether a particular string template was previously compiled and executed. The HostContainers also act as a simple startup environment and a single reference to easily store and reuse in an application. TemplateBase Classes The template base classes are the base classes that from which the Razor engine generates .NET code. A template is parsed into a class with an Execute() method and the class is based on this template type you can specify. RazorEngine<TBaseTemplate> can receive this type and the HostContainers default to specific templates in their base implementations. Template classes are customizable to allow you to create templates that provide application specific features and interaction from the template to your host application. How does the RazorEngine wrapper work? You can browse the source code in the links above or in the repository or download the source, but I’ll highlight some key features here. Here’s part of the RazorEngine implementation that can be used to host the runtime and that demonstrates the key code required to host the Razor runtime. The RazorEngine class is implemented as a generic class to reflect the Template base class type: public class RazorEngine<TBaseTemplateType> : MarshalByRefObject where TBaseTemplateType : RazorTemplateBase The generic type is used to internally provide easier access to the template type and assignments on it as part of the template processing. The class also inherits MarshalByRefObject to allow execution over AppDomain boundaries – something that all the classes discussed here need to do since there is much interaction between the host and the template. The first two key methods deal with creating a template assembly: /// <summary> /// Creates an instance of the RazorHost with various options applied. /// Applies basic namespace imports and the name of the class to generate /// </summary> /// <param name="generatedNamespace"></param> /// <param name="generatedClass"></param> /// <returns></returns> protected RazorTemplateEngine CreateHost(string generatedNamespace, string generatedClass) { Type baseClassType = typeof(TBaseTemplateType); RazorEngineHost host = new RazorEngineHost(new CSharpRazorCodeLanguage()); host.DefaultBaseClass = baseClassType.FullName; host.DefaultClassName = generatedClass; host.DefaultNamespace = generatedNamespace; host.NamespaceImports.Add("System"); host.NamespaceImports.Add("System.Text"); host.NamespaceImports.Add("System.Collections.Generic"); host.NamespaceImports.Add("System.Linq"); host.NamespaceImports.Add("System.IO"); return new RazorTemplateEngine(host); } /// <summary> /// Parses and compiles a markup template into an assembly and returns /// an assembly name. The name is an ID that can be passed to /// ExecuteTemplateByAssembly which picks up a cached instance of the /// loaded assembly. 
/// /// </summary> /// <param name="namespaceOfGeneratedClass">The namespace of the class to generate from the template</param> /// <param name="generatedClassName">The name of the class to generate from the template</param> /// <param name="ReferencedAssemblies">Any referenced assemblies by dll name only. Assemblies must be in execution path of host or in GAC.</param> /// <param name="templateSourceReader">Textreader that loads the template</param> /// <remarks> /// The actual assembly isn't returned here to allow for cross-AppDomain /// operation. If the assembly was returned it would fail for cross-AppDomain /// calls. /// </remarks> /// <returns>An assembly Id. The Assembly is cached in memory and can be used with RenderFromAssembly.</returns> public string ParseAndCompileTemplate( string namespaceOfGeneratedClass, string generatedClassName, string[] ReferencedAssemblies, TextReader templateSourceReader) { RazorTemplateEngine engine = CreateHost(namespaceOfGeneratedClass, generatedClassName); // Generate the template class as CodeDom GeneratorResults razorResults = engine.GenerateCode(templateSourceReader); // Create code from the codeDom and compile CSharpCodeProvider codeProvider = new CSharpCodeProvider(); CodeGeneratorOptions options = new CodeGeneratorOptions(); // Capture Code Generated as a string for error info // and debugging LastGeneratedCode = null; using (StringWriter writer = new StringWriter()) { codeProvider.GenerateCodeFromCompileUnit(razorResults.GeneratedCode, writer, options); LastGeneratedCode = writer.ToString(); } CompilerParameters compilerParameters = new CompilerParameters(ReferencedAssemblies); // Standard Assembly References compilerParameters.ReferencedAssemblies.Add("System.dll"); compilerParameters.ReferencedAssemblies.Add("System.Core.dll"); compilerParameters.ReferencedAssemblies.Add("Microsoft.CSharp.dll"); // dynamic support! // Also add the current assembly so RazorTemplateBase is available compilerParameters.ReferencedAssemblies.Add(Assembly.GetExecutingAssembly().CodeBase.Substring(8)); compilerParameters.GenerateInMemory = Configuration.CompileToMemory; if (!Configuration.CompileToMemory) compilerParameters.OutputAssembly = Path.Combine(Configuration.TempAssemblyPath, "_" + Guid.NewGuid().ToString("n") + ".dll"); CompilerResults compilerResults = codeProvider.CompileAssemblyFromDom(compilerParameters, razorResults.GeneratedCode); if (compilerResults.Errors.Count > 0) { var compileErrors = new StringBuilder(); foreach (System.CodeDom.Compiler.CompilerError compileError in compilerResults.Errors) compileErrors.Append(String.Format(Resources.LineX0TColX1TErrorX2RN, compileError.Line, compileError.Column, compileError.ErrorText)); this.SetError(compileErrors.ToString() + "\r\n" + LastGeneratedCode); return null; } AssemblyCache.Add(compilerResults.CompiledAssembly.FullName, compilerResults.CompiledAssembly); return compilerResults.CompiledAssembly.FullName; } Think of the internal CreateHost() method as setting up the assembly generated from each template. Each template compiles into a separate assembly. It sets up namespaces, and assembly references, the base class used and the name and namespace for the generated class. ParseAndCompileTemplate() then calls the CreateHost() method to receive the template engine generator which effectively generates a CodeDom from the template – the template is turned into .NET code. 
The code generated from our earlier example looks something like this: //------------------------------------------------------------------------------ // <auto-generated> // This code was generated by a tool. // Runtime Version:4.0.30319.1 // // Changes to this file may cause incorrect behavior and will be lost if // the code is regenerated. // </auto-generated> //------------------------------------------------------------------------------ namespace RazorTest { using System; using System.Text; using System.Collections.Generic; using System.Linq; using System.IO; using System.Reflection; public class RazorTemplate : RazorHosting.RazorTemplateBase { #line hidden public RazorTemplate() { } public override void Execute() { WriteLiteral("Hello "); Write(Context.FirstName); WriteLiteral("! Your entry was entered on: "); Write(Context.Entered); WriteLiteral("\r\n\r\n"); // Code block: Update the host Windows Form passed in through the context Context.WinForm.Text = "Hello World from Razor at " + DateTime.Now.ToString(); WriteLiteral("\r\nAppDomain Id:\r\n "); Write(AppDomain.CurrentDomain.FriendlyName); WriteLiteral("\r\n \r\nAssembly:\r\n "); Write(Assembly.GetExecutingAssembly().FullName); WriteLiteral("\r\n\r\nCode based output: \r\n"); // Write output with Response object from code string output = string.Empty; for (int i = 0; i < 10; i++) { output += i.ToString() + " "; } } } } Basically the template’s body is turned into code in an Execute method that is called. Internally the template’s Write method is fired to actually generate the output. Note that the class inherits from RazorTemplateBase which is the generic parameter I used to specify the base class when creating an instance in my RazorEngine host: var engine = new RazorEngine<RazorTemplateBase>(); This template class must be provided and it must implement an Execute() and Write() method. Beyond that you can create any class you chose and attach your own properties. My RazorTemplateBase class implementation is very simple: public class RazorTemplateBase : MarshalByRefObject, IDisposable { /// <summary> /// You can pass in a generic context object /// to use in your template code /// </summary> public dynamic Context { get; set; } /// <summary> /// Class that generates output. Currently ultra simple /// with only Response.Write() implementation. /// </summary> public RazorResponse Response { get; set; } public object HostContainer {get; set; } public object Engine { get; set; } public RazorTemplateBase() { Response = new RazorResponse(); } public virtual void Write(object value) { Response.Write(value); } public virtual void WriteLiteral(object value) { Response.Write(value); } /// <summary> /// Razor Parser implements this method /// </summary> public virtual void Execute() {} public virtual void Dispose() { if (Response != null) { Response.Dispose(); Response = null; } } } Razor fills in the Execute method when it generates its subclass and uses the Write() method to output content. As you can see I use a RazorResponse() class here to generate output. This isn’t necessary really, as you could use a StringBuilder or StringWriter() directly, but I prefer using Response object so I can extend the Response behavior as needed. 
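Since the generated template class derives from whatever type you pass as the generic parameter, you can also subclass RazorTemplateBase to expose application-specific helpers directly to your templates. The snippet below is a hypothetical example of such a subclass, not part of the RazorHosting wrapper itself; the property and method names are made up for illustration.

using System;
using RazorHosting;   // namespace of the wrapper project shown above

// Custom template base class: templates compiled against this type can use
// @AppVersion and @FormatDate(...) without going through the dynamic Context object.
public class HelpBuilderTemplateBase : RazorTemplateBase
{
    public string AppVersion { get; set; }

    public string FormatDate(DateTime date)
    {
        return date.ToString("dd MMM yyyy");
    }
}

Hosting it is then just a matter of using the new type as the generic parameter, for example var engine = new RazorEngine<HelpBuilderTemplateBase>();, and every compiled template picks up the extra members.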
The RazorResponse class is also very simple and merely acts as a wrapper around a TextWriter: public class RazorResponse : IDisposable { /// <summary> /// Internal text writer - default to StringWriter() /// </summary> public TextWriter Writer = new StringWriter(); public virtual void Write(object value) { Writer.Write(value); } public virtual void WriteLine(object value) { Write(value); Write("\r\n"); } public virtual void WriteFormat(string format, params object[] args) { Write(string.Format(format, args)); } public override string ToString() { return Writer.ToString(); } public virtual void Dispose() { Writer.Close(); } public virtual void SetTextWriter(TextWriter writer) { // Close original writer if (Writer != null) Writer.Close(); Writer = writer; } } The Rendering Methods of RazorEngine At this point I’ve talked about the assembly generation logic and the template implementation itself. What’s left is that once you’ve generated the assembly is to execute it. The code to do this is handled in the various RenderXXX methods of the RazorEngine class. Let’s look at the lowest level one of these which is RenderTemplateFromAssembly() and a couple of internal support methods that handle instantiating and invoking of the generated template method: public string RenderTemplateFromAssembly( string assemblyId, string generatedNamespace, string generatedClass, object context, TextWriter outputWriter) { this.SetError(); Assembly generatedAssembly = AssemblyCache[assemblyId]; if (generatedAssembly == null) { this.SetError(Resources.PreviouslyCompiledAssemblyNotFound); return null; } string className = generatedNamespace + "." + generatedClass; Type type; try { type = generatedAssembly.GetType(className); } catch (Exception ex) { this.SetError(Resources.UnableToCreateType + className + ": " + ex.Message); return null; } // Start with empty non-error response (if we use a writer) string result = string.Empty; using(TBaseTemplateType instance = InstantiateTemplateClass(type)) { if (instance == null) return null; if (outputWriter != null) instance.Response.SetTextWriter(outputWriter); if (!InvokeTemplateInstance(instance, context)) return null; // Capture string output if implemented and return // otherwise null is returned if (outputWriter == null) result = instance.Response.ToString(); } return result; } protected virtual TBaseTemplateType InstantiateTemplateClass(Type type) { TBaseTemplateType instance = Activator.CreateInstance(type) as TBaseTemplateType; if (instance == null) { SetError(Resources.CouldnTActivateTypeInstance + type.FullName); return null; } instance.Engine = this; // If a HostContainer was set pass that to the template too instance.HostContainer = this.HostContainer; return instance; } /// <summary> /// Internally executes an instance of the template, /// captures errors on execution and returns true or false /// </summary> /// <param name="instance">An instance of the generated template</param> /// <returns>true or false - check ErrorMessage for errors</returns> protected virtual bool InvokeTemplateInstance(TBaseTemplateType instance, object context) { try { instance.Context = context; instance.Execute(); } catch (Exception ex) { this.SetError(Resources.TemplateExecutionError + ex.Message); return false; } finally { // Must make sure Response is closed instance.Response.Dispose(); } return true; } The RenderTemplateFromAssembly method basically requires the namespace and class to instantate and creates an instance of the class using InstantiateTemplateClass(). 
It then executes the template with InvokeTemplateInstance(). These two methods are broken out because they are re-used by the various other rendering methods, and to allow subclassing so that additional configuration tasks, such as setting properties and passing values to templates at execution time, can be performed. In the default mode, instantiation sets the Engine and HostContainer (discussed later) so the template can call back into the template engine, and the context is set when the template method is invoked. The various RenderXXX methods use similar code, although they create the assemblies first. If you want to cache compiled assemblies, RenderTemplateFromAssembly() is the method to call, and that's exactly what the two HostContainer classes do. More on that in a minute, but before we get into HostContainers let's talk about AppDomain hosting.

Running Templates in their own AppDomain

With the RazorEngine class above, when a template is parsed into an assembly and executed, the assembly is created (in memory or on disk – you can configure that) and cached in the current AppDomain. In .NET, once an assembly has been loaded it can never be unloaded, so if you load lots of templates and at some point want to release them, there's no way to do so. If, however, you load the assemblies into a separate AppDomain, that AppDomain can be unloaded, and the assemblies loaded into it go away with it. The easiest way to host the templates in a separate AppDomain is to run the entire RazorEngine in that separate AppDomain; then all interaction occurs in the other AppDomain and no further changes have to be made. To facilitate this there is a RazorEngineFactory which has methods that can instantiate the RazorHost in a separate AppDomain as well as in the local AppDomain. The factory creates the remote instance and hangs on to it to keep it alive, and also provides methods to shut down the AppDomain and reload the engine. Sounds complicated, but cross-AppDomain invocation is actually fairly easy to implement.

Here's some of the relevant code from the RazorEngineFactory class. Like the RazorEngine, this class is generic and requires a template base type as its generic parameter:

public class RazorEngineFactory<TBaseTemplateType> where TBaseTemplateType : RazorTemplateBase

Here are the key methods of interest:

/// <summary> /// Creates an instance of the RazorHost in a new AppDomain. This /// version creates a static singleton that is cached and you /// can call UnloadRazorHostInAppDomain to unload it. /// </summary> /// <returns></returns> public static RazorEngine<TBaseTemplateType> CreateRazorHostInAppDomain() { if (Current == null) Current = new RazorEngineFactory<TBaseTemplateType>(); return Current.GetRazorHostInAppDomain(); } public static void UnloadRazorHostInAppDomain() { if (Current != null) Current.UnloadHost(); Current = null; } /// <summary> /// Instance method that creates a RazorHost in a new AppDomain. /// This method requires that you keep the Factory around in /// order to keep the AppDomain alive and be able to unload it. /// </summary> /// <returns></returns> public RazorEngine<TBaseTemplateType> GetRazorHostInAppDomain() { LocalAppDomain = CreateAppDomain(null); if (LocalAppDomain == null) return null; /// Create the instance inside of the new AppDomain /// Note: remote domain uses local EXE's AppBasePath!!!
RazorEngine<TBaseTemplateType> host = null; try { Assembly ass = Assembly.GetExecutingAssembly(); string AssemblyPath = ass.Location; host = (RazorEngine<TBaseTemplateType>) LocalAppDomain.CreateInstanceFrom(AssemblyPath, typeof(RazorEngine<TBaseTemplateType>).FullName).Unwrap(); } catch (Exception ex) { ErrorMessage = ex.Message; return null; } return host; } /// <summary> /// Internally creates a new AppDomain in which Razor templates can /// be run. /// </summary> /// <param name="appDomainName"></param> /// <returns></returns> private AppDomain CreateAppDomain(string appDomainName) { if (appDomainName == null) appDomainName = "RazorHost_" + Guid.NewGuid().ToString("n"); AppDomainSetup setup = new AppDomainSetup(); // *** Point at current directory setup.ApplicationBase = AppDomain.CurrentDomain.BaseDirectory; AppDomain localDomain = AppDomain.CreateDomain(appDomainName, null, setup); return localDomain; } /// <summary> /// Allow unloading of the created AppDomain to release resources /// All internal resources in the AppDomain are released including /// in memory compiled Razor assemblies. /// </summary> public void UnloadHost() { if (this.LocalAppDomain != null) { AppDomain.Unload(this.LocalAppDomain); this.LocalAppDomain = null; } } The static CreateRazorHostInAppDomain() is the key method that startup code usually calls. It uses a Current singleton instance to an instance of itself that is created cross AppDomain and is kept alive because it’s static. GetRazorHostInAppDomain actually creates a cross-AppDomain instance which first creates a new AppDomain and then loads the RazorEngine into it. The remote Proxy instance is returned as a result to the method and can be used the same as a local instance. The code to run with a remote AppDomain is simple: private RazorEngine<RazorTemplateBase> CreateHost() { if (this.Host != null) return this.Host; // Use Static Methods - no error message if host doesn't load this.Host = RazorEngineFactory<RazorTemplateBase>.CreateRazorHostInAppDomain(); if (this.Host == null) { MessageBox.Show("Unable to load Razor Template Host", "Razor Hosting", MessageBoxButtons.OK, MessageBoxIcon.Exclamation); } return this.Host; } This code relies on a local reference of the Host which is kept around for the duration of the app (in this case a form reference). To use this you’d simply do: this.Host = CreateHost(); if (host == null) return; string result = host.RenderTemplate( this.txtSource.Text, new string[] { "System.Windows.Forms.dll", "Westwind.Utilities.dll" }, this.CustomContext); if (result == null) { MessageBox.Show(host.ErrorMessage, "Template Execution Error", MessageBoxButtons.OK, MessageBoxIcon.Exclamation); return; } this.txtResult.Text = result; Now all templates run in a remote AppDomain and can be unloaded with simple code like this: RazorEngineFactory<RazorTemplateBase>.UnloadRazorHostInAppDomain(); this.Host = null; One Step further – Providing a caching ‘Runtime’ Once we can load templates in a remote AppDomain we can add some additional functionality like assembly caching based on application specific features. One of my typical scenarios is to render templates out of a scripts folder. So all templates live in a folder and they change infrequently. So a Folder based host that can compile these templates once and then only recompile them if something changes would be ideal. Enter host containers which are basically wrappers around the RazorEngine<t> and RazorEngineFactory<t>. 
They provide additional logic for things like file caching based on changes on disk or string hashes for string based template inputs. The folder host also provides for partial rendering logic through a custom template base implementation. There’s a base implementation in RazorBaseHostContainer, which provides the basics for hosting a RazorEngine, which includes the ability to start and stop the engine, cache assemblies and add references: public abstract class RazorBaseHostContainer<TBaseTemplateType> : MarshalByRefObject where TBaseTemplateType : RazorTemplateBase, new() { public RazorBaseHostContainer() { UseAppDomain = true; GeneratedNamespace = "__RazorHost"; } /// <summary> /// Determines whether the Container hosts Razor /// in a separate AppDomain. Seperate AppDomain /// hosting allows unloading and releasing of /// resources. /// </summary> public bool UseAppDomain { get; set; } /// <summary> /// Base folder location where the AppDomain /// is hosted. By default uses the same folder /// as the host application. /// /// Determines where binary dependencies are /// found for assembly references. /// </summary> public string BaseBinaryFolder { get; set; } /// <summary> /// List of referenced assemblies as string values. /// Must be in GAC or in the current folder of the host app/ /// base BinaryFolder /// </summary> public List<string> ReferencedAssemblies = new List<string>(); /// <summary> /// Name of the generated namespace for template classes /// </summary> public string GeneratedNamespace {get; set; } /// <summary> /// Any error messages /// </summary> public string ErrorMessage { get; set; } /// <summary> /// Cached instance of the Host. Required to keep the /// reference to the host alive for multiple uses. /// </summary> public RazorEngine<TBaseTemplateType> Engine; /// <summary> /// Cached instance of the Host Factory - so we can unload /// the host and its associated AppDomain. /// </summary> protected RazorEngineFactory<TBaseTemplateType> EngineFactory; /// <summary> /// Keep track of each compiled assembly /// and when it was compiled. /// /// Use a hash of the string to identify string /// changes. /// </summary> protected Dictionary<int, CompiledAssemblyItem> LoadedAssemblies = new Dictionary<int, CompiledAssemblyItem>(); /// <summary> /// Call to start the Host running. Follow by a calls to RenderTemplate to /// render individual templates. Call Stop when done. /// </summary> /// <returns>true or false - check ErrorMessage on false </returns> public virtual bool Start() { if (Engine == null) { if (UseAppDomain) Engine = RazorEngineFactory<TBaseTemplateType>.CreateRazorHostInAppDomain(); else Engine = RazorEngineFactory<TBaseTemplateType>.CreateRazorHost(); Engine.Configuration.CompileToMemory = true; Engine.HostContainer = this; if (Engine == null) { this.ErrorMessage = EngineFactory.ErrorMessage; return false; } } return true; } /// <summary> /// Stops the Host and releases the host AppDomain and cached /// assemblies. /// </summary> /// <returns>true or false</returns> public bool Stop() { this.LoadedAssemblies.Clear(); RazorEngineFactory<RazorTemplateBase>.UnloadRazorHostInAppDomain(); this.Engine = null; return true; } … } This base class provides most of the mechanics to host the runtime, but no application specific implementation for rendering. There are rendering functions but they just call the engine directly and provide no caching – there’s no context to decide how to cache and reuse templates. 
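A derived host container supplies that caching context. For string-based templates, for example, a container can key the LoadedAssemblies cache on a hash of the template text so repeated renders of the same string skip recompilation. The following is only a simplified sketch of that idea; the method name is mine and the actual RazorStringHostContainer implementation may differ in its details:

// Hypothetical method on a container derived from RazorBaseHostContainer<TBaseTemplateType>.
// Compiles a string template once, caches the compiled assembly by the string's hash,
// and re-uses the cached assembly on subsequent calls.
public string RenderStringTemplate(string templateText, object context)
{
    int hash = templateText.GetHashCode();

    CompiledAssemblyItem item = null;
    LoadedAssemblies.TryGetValue(hash, out item);

    if (item == null)
    {
        // Not cached yet: compile the template and remember the assembly id
        string assemblyId = Engine.ParseAndCompileTemplate(
            ReferencedAssemblies.ToArray(),
            new StringReader(templateText));
        if (assemblyId == null)
        {
            ErrorMessage = Engine.ErrorMessage;
            return null;
        }

        item = new CompiledAssemblyItem();
        item.AssemblyId = assemblyId;
        item.CompileTimeUtc = DateTime.UtcNow;
        LoadedAssemblies[hash] = item;
    }

    // Render from the cached assembly, capturing output in a StringWriter
    using (StringWriter writer = new StringWriter())
    {
        string result = Engine.RenderTemplateFromAssembly(item.AssemblyId, context, writer);
        if (result == null)
        {
            ErrorMessage = Engine.ErrorMessage;
            return null;
        }
        return writer.ToString();
    }
}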
The key methods of the base class are Start() and Stop(); their main purpose is to start a new AppDomain (optionally) and to shut it down when requested.

The RazorFolderHostContainer – Folder Based Runtime Hosting

Let's look at the more application-specific RazorFolderHostContainer implementation, which is defined like this:

public class RazorFolderHostContainer : RazorBaseHostContainer<RazorTemplateFolderHost>

Note that this implementation uses a customized RazorTemplateFolderHost template base class, which supports partial rendering in the form of a RenderPartial() method that's available to templates. The folder host's features are:

Render templates based on a template base path (a 'virtual' folder if you will)
Cache compiled assemblies based on the relative path and file time stamp
File changes on templates cause templates to be recompiled into new assemblies
Support for partial rendering using base folder relative pathing

As shown in the startup examples earlier, host containers require some startup code with a HostContainer tied to a persistent property (like a Form property):

// The base path for templates - templates are rendered with relative paths // based on this path. HostContainer.TemplatePath = Path.Combine(Environment.CurrentDirectory, TemplateBaseFolder); // Default output rendering disk location HostContainer.RenderingOutputFile = Path.Combine(HostContainer.TemplatePath, "__Preview.htm"); // Add any assemblies you want referenced in your templates HostContainer.ReferencedAssemblies.Add("System.Windows.Forms.dll"); // Start up the host container HostContainer.Start();

Once that's done, you can render templates with the host container:

// Pass the relative template path for the full filename selected with the OpenFile dialog // relativePath is: subdir\file.cshtml or file.cshtml or ..\file.cshtml var relativePath = Utilities.GetRelativePath(fileName, HostContainer.TemplatePath); if (!HostContainer.RenderTemplate(relativePath, Context, HostContainer.RenderingOutputFile)) { MessageBox.Show("Error: " + HostContainer.ErrorMessage); return; } webBrowser1.Navigate("file://" + HostContainer.RenderingOutputFile);

The most critical task of the RazorFolderHostContainer implementation is to retrieve a template from disk, compile and cache it, and then decide whether subsequent requests need to re-compile the template or can simply use the cached version. Internally, GetAssemblyFromFileAndCache() handles this task:

/// <summary> /// Internally checks if a cached assembly exists and if it does uses it /// else creates and compiles one. Returns an assembly Id to be /// used with the LoadedAssembly list. /// </summary> /// <param name="relativePath"></param> /// <param name="context"></param> /// <returns></returns> protected virtual CompiledAssemblyItem GetAssemblyFromFileAndCache(string relativePath) { string fileName = Path.Combine(TemplatePath, relativePath).ToLower(); int fileNameHash = fileName.GetHashCode(); if (!File.Exists(fileName)) { this.SetError(Resources.TemplateFileDoesnTExist + fileName); return null; } CompiledAssemblyItem item = null; this.LoadedAssemblies.TryGetValue(fileNameHash, out item); string assemblyId = null; // Check for cached instance if (item != null) { var fileTime = File.GetLastWriteTimeUtc(fileName); if (fileTime <= item.CompileTimeUtc) assemblyId = item.AssemblyId; } else item = new CompiledAssemblyItem(); // No cached instance - create assembly and cache if (assemblyId == null) { string safeClassName = GetSafeClassName(fileName); StreamReader reader = null; try { reader = new StreamReader(fileName, true); } catch (Exception ex) { this.SetError(Resources.ErrorReadingTemplateFile + fileName); return null; } assemblyId = Engine.ParseAndCompileTemplate(this.ReferencedAssemblies.ToArray(), reader); // need to ensure reader is closed if (reader != null) reader.Close(); if (assemblyId == null) { this.SetError(Engine.ErrorMessage); return null; } item.AssemblyId = assemblyId; item.CompileTimeUtc = DateTime.UtcNow; item.FileName = fileName; item.SafeClassName = safeClassName; this.LoadedAssemblies[fileNameHash] = item; } return item; }

This code uses the LoadedAssemblies dictionary, which is made up of entries that hold a reference to a compiled assembly, the full filename and file timestamp, and an assembly id. LoadedAssemblies (defined on the base class shown earlier) is essentially a cache for compiled assemblies, each identified by a hash id. In the case of files, the hash is the GetHashCode() of the template's full filename. The template is looked up in the cache, and if an entry exists its compile time is compared against the file's last-write time stamp. If the file on disk is newer than the cached entry's compilation date the template is recompiled; otherwise the cached version is used. If no entry exists the template is compiled and added to the cache. All the core compilation work is deferred to the RazorEngine<T> instance's ParseAndCompileTemplate() method.

The three rendering-specific methods are then rather simple implementations, with just a few lines of code dealing with parameter and return value handling:

/// <summary> /// Renders a template to a TextWriter. Useful to write output into a stream or /// the Response object. Used for partial rendering. /// </summary> /// <param name="relativePath">Relative path to the file in the folder structure</param> /// <param name="context">Optional context object or null</param> /// <param name="writer">The textwriter to write output into</param> /// <returns></returns> public bool RenderTemplate(string relativePath, object context, TextWriter writer) { // Set configuration data that is to be passed to the template (any object) Engine.TemplatePerRequestConfigurationData = new RazorFolderHostTemplateConfiguration() { TemplatePath = Path.Combine(this.TemplatePath, relativePath), TemplateRelativePath = relativePath, }; CompiledAssemblyItem item = GetAssemblyFromFileAndCache(relativePath); if (item == null) { writer.Close(); return false; } try { // String result will be empty as output will be rendered into the // Response object's stream output.
However a null result denotes // an error string result = Engine.RenderTemplateFromAssembly(item.AssemblyId, context, writer); if (result == null) { this.SetError(Engine.ErrorMessage); return false; } } catch (Exception ex) { this.SetError(ex.Message); return false; } finally { writer.Close(); } return true; } /// <summary> /// Render a template from a source file on disk to a specified outputfile. /// </summary> /// <param name="relativePath">Relative path off the template root folder. Format: path/filename.cshtml</param> /// <param name="context">Any object that will be available in the template as a dynamic of this.Context</param> /// <param name="outputFile">Optional - output file where output is written to. If not specified the /// RenderingOutputFile property is used instead /// </param> /// <returns>true if rendering succeeds, false on failure - check ErrorMessage</returns> public bool RenderTemplate(string relativePath, object context, string outputFile) { if (outputFile == null) outputFile = RenderingOutputFile; try { using (StreamWriter writer = new StreamWriter(outputFile, false, Engine.Configuration.OutputEncoding, Engine.Configuration.StreamBufferSize)) { return RenderTemplate(relativePath, context, writer); } } catch (Exception ex) { this.SetError(ex.Message); return false; } return true; } /// <summary> /// Renders a template to string. Useful for RenderTemplate /// </summary> /// <param name="relativePath"></param> /// <param name="context"></param> /// <returns></returns> public string RenderTemplateToString(string relativePath, object context) { string result = string.Empty; try { using (StringWriter writer = new StringWriter()) { // String result will be empty as output will be rendered into the // Response object's stream output. However a null result denotes // an error if (!RenderTemplate(relativePath, context, writer)) { this.SetError(Engine.ErrorMessage); return null; } result = writer.ToString(); } } catch (Exception ex) { this.SetError(ex.Message); return null; } return result; } The idea is that you can create custom host container implementations that do exactly what you want fairly easily. Take a look at both the RazorFolderHostContainer and RazorStringHostContainer classes for the basic concepts you can use to create custom implementations. Notice also that you can set the engine’s PerRequestConfigurationData() from the host container: // Set configuration data that is to be passed to the template (any object) Engine.TemplatePerRequestConfigurationData = new RazorFolderHostTemplateConfiguration() { TemplatePath = Path.Combine(this.TemplatePath, relativePath), TemplateRelativePath = relativePath, }; which when set to a non-null value is passed to the Template’s InitializeTemplate() method. This method receives an object parameter which you can cast as needed: public override void InitializeTemplate(object configurationData) { // Pick up configuration data and stuff into Request object RazorFolderHostTemplateConfiguration config = configurationData as RazorFolderHostTemplateConfiguration; this.Request.TemplatePath = config.TemplatePath; this.Request.TemplateRelativePath = config.TemplateRelativePath; } With this data you can then configure any custom properties or objects on your main template class. It’s an easy way to pass data from the HostContainer all the way down into the template. The type you use is of type object so you have to cast it yourself, and it must be serializable since it will likely run in a separate AppDomain. 
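If you create your own configuration type for this purpose, keep in mind that it has to be marked serializable so it can cross the AppDomain boundary. A minimal sketch, assuming a hypothetical configuration class of your own (the type and the extra property are not part of the library):

[Serializable]
public class MyTemplateConfiguration
{
    // Values the folder host configuration also carries
    public string TemplatePath { get; set; }
    public string TemplateRelativePath { get; set; }

    // Any additional application data you want available inside the template
    public string UserName { get; set; }
}

You'd assign an instance of this class to Engine.TemplatePerRequestConfigurationData before rendering and cast the configurationData parameter back to MyTemplateConfiguration in your InitializeTemplate() override.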
This might seem like an ugly way to pass data around. Normally I'd use an event delegate to call back from the engine to the host, but since this is running over AppDomain boundaries events get really tricky, and passing a template instance back up into the host across AppDomain boundaries doesn't work due to serialization issues. So it's easier to pass the data from the host down into the template using this rather clumsy set-and-forward approach. It's ugly, but it's something that can be hidden in the host container implementation as I've done here. It's also not something you have to do in every implementation, so this is somewhat of an edge case, but I know I'll need to pass a bunch of data in some of my applications and this will be the easiest way to do so.

Summing Up

Hosting the Razor runtime is something I got jazzed up about quite a bit, because I have an immediate need for this type of templating/merging/scripting capability in an application I'm working on. I've also been using templating in many apps, and it's always been a pain to deal with. The Razor engine makes this whole experience a lot cleaner and more lightweight, and with these wrappers I can now plug .NET based templating into my code with literally a few lines of code. That's something to cheer about… I hope some of you will find this useful as well…

Resources

The examples and code require that you download the Razor runtimes. Projects are for Visual Studio 2010 running on .NET 4.0.

Platform Installer 3.0 (install WebMatrix or MVC 3 for Razor Runtimes)
Latest Code in Subversion Repository
Download Snapshot of the Code
Documentation (CHM Help File)

© Rick Strahl, West Wind Technologies, 2005-2010
Posted in ASP.NET  .NET
