Search Results

Search found 1507 results on 61 pages for 'on demand'.


  • Why should you choose Oracle WebLogic 12c instead of JBoss EAP 6?

    - by Ricardo Ferreira
In this post, I will cover some technical differences between Oracle WebLogic 12c and JBoss EAP 6, which was released by Red Hat a couple of days ago. The article aims to help you evaluate the key points you should consider when choosing a Java EE application server. In the following sections, I will present some important aspects that customers most often ask us about when they are seriously evaluating middleware infrastructure, especially if JBoss is being considered for some reason. I suggest you keep the following question in mind while reading these points: "Why should I choose JBoss instead of WebLogic?"

1) Multi-Datacenter Deployment and Clustering

- D/R ("Disaster & Recovery") architecture support is embedded in WebLogic Server 12c. JBoss EAP 6, on the other hand, has no direct D/R support; Red Hat relies on third-party tools with higher prices. When you consider a middleware solution to host your business-critical applications, you should care about every architectural aspect of the solution. Fail-over support is only one small aspect of a truly reliable solution; if you do not plan for D/R, your solution will not be reliable. With Red Hat and JBoss EAP 6, this is an extra cost that considerably increases the total cost of ownership of the solution. As we commonly hear from analysts, open source is not so cheap once you start seeing the big picture.

- WebLogic Server 12c supports advanced LAN clustering, detection of dead servers and a common alert framework. JBoss EAP 6, on the other hand, has limited LAN clustering support with no dead-server detection. It does not generate any alerts when servers go down (only if you buy JBoss ON, which is a separate technology that, so far, does not support JBoss EAP 6), and manual intervention is required when servers fail. In most cases, administrators must rely on "kill -9", "tail -f someFile.log" and "ps ax | grep java" to manage failures and clustering anomalies.

- WebLogic Server 12c supports the concept of a Node Manager, a separate process that runs on the physical or virtual servers and extends administration of the cluster to WebLogic managed servers that are often distributed across multiple machines and geographic locations. JBoss EAP 6, on the other hand, has no equivalent technology; whole server instances must be managed individually.

- The WebLogic Server 12c Node Manager supports Coherence to boost performance when managing servers. JBoss EAP 6, on the other hand, has no similar technology; there is no way to coordinate JBoss instances over high-throughput, low-latency protocols such as InfiniBand. The Node Manager also provides another very important feature that JBoss EAP lacks: secure administration. When using the WebLogic Node Manager, all administration tasks are sent to the managed servers through a secure tunnel protected by a certificate, which means the transport layer between the WebLogic administration console and the managed servers is secured by SSL.

- WebLogic Server 12c is now integrated with OTD ("Oracle Traffic Director"), a web server technology derived from the former Sun iPlanet Web Server. This software complements the web server support offered by OHS ("Oracle HTTP Server").
Using OTD, WebLogic instances are load-balanced by very powerful software that knows how to handle SDP ("Sockets Direct Protocol") over InfiniBand, which boosts performance when used with engineered systems such as Oracle Exalogic Elastic Cloud. JBoss EAP 6, on the other hand, only offers support for the Apache Web Server with custom modules created to deal with JBoss clusters, and only across standard TCP/IP networks.

2) Application and Runtime Diagnostics

- WebLogic Server 12c has diagnostics capabilities embedded in the server, called WLDF ("WebLogic Diagnostic Framework"), so there is no need to rely on third-party tools. JBoss EAP 6, on the other hand, has no diagnostics capabilities; its only diagnostics tool is the log generated by the application server, and administrators are encouraged to analyse thousands of log lines to find out what is going on.

- WebLogic Server 12c complements WLDF with JRockit MC ("Mission Control"), which gives administrators and developers complete insight into JVM performance, behavior and possible bottlenecks. WebLogic Server 12c also has an embedded classloader analysis tool, and even a log analyzer that lets administrators and developers view the logs of multiple servers at the same time. JBoss EAP 6, on the other hand, relies on third-party tools to do anything similar; again, only log searching is offered to find out what is going on.

- WebLogic Server 12c offers end-to-end traceability and monitoring through Oracle EM ("Enterprise Manager"), including monitoring of business transactions that flow through web servers, ESBs, application servers and database servers, all of this with deep JVM analysis and diagnostics. JBoss EAP 6, even with JBoss ON ("Operations Network"), which is a separate technology, does not support those features. Red Hat relies on third-party tools to provide direct Oracle database traceability across JVMs. One of those tools is Oracle EM for non-Oracle middleware, which manages JBoss, Tomcat, WebSphere and IIS transparently.

- WebLogic Server 12c, with its JRockit support, offers a tool called JRockit Flight Recorder, which gives developers complete visibility of a period of production monitoring with zero extra overhead. This automatic recording lets you deeply analyse thread latency, memory leaks, thread contention, resource utilization, stack overflow damage and GC ("Garbage Collection") cycles, observing in real time stop-the-world events, generational, reference-count and parallel collections, and mutator thread behavior. JBoss EAP 6 cannot even dream of supporting something similar, not least because Red Hat does not have its own JVM.

3) Application Server Administration

- WebLogic Server 12c offers a complete administration console complemented with scripting and macro-like recording capabilities. A single WebLogic console can manage up to hundreds of WebLogic servers belonging to the same domain. JBoss EAP 6, on the other hand, has a limited console and provides an XML-centric administration. JBoss, after ten years, started the development of a rudimentary centralized administration that still leaves many administration tasks aside, so administrators and developers must touch scripts and XML configuration files for the most advanced, and even simple, administration tasks. This leads applications to error-prone and risky deployments.
Even using JBoss ON, JBoss EAP is not able to offer decent administration features; administrators must be highly skilled in JBoss internal architecture and its management capabilities.

- Oracle EM is available to manage multiple domains, databases, application servers, operating systems and virtualization, with complete end-to-end visibility. JBoss ON does not provide management capabilities across the complete architecture, only basic monitoring. Even deployment must be done outside JBoss ON, which does not integrate well with software other than JBoss. Until now, JBoss ON does not support JBoss EAP 6, so even this minimal support is not available for JBoss EAP 6, leaving customers uncovered and dependent on highly skilled JBoss administrators.

- WebLogic Server 12c has the same administration model whatever topology the customer selects. JBoss EAP 6, on the other hand, differentiates between two operational models, standalone mode and domain mode, which are not consistent with each other; depending on the mode used, the administration skills required are different.

- WebLogic Server 12c has no point-of-failure processes and does not need any specialized server to be defined. The domain model in WebLogic has been available for years (at least ten years or more) and is production proven. JBoss EAP 6, on the other hand, needs special processes to guarantee JBoss integrity, the PC ("Process Controller") and the HC ("Host Controller"). Unlike WebLogic, the domain model in JBoss is quite new (a year old at most) and needs to mature considerably before it does what the WebLogic domain model does.

- WebLogic Server 12c supports a parallel deployment model, which enables several artifacts to be deployed at the same time. JBoss EAP 6, on the other hand, has no similar feature; every deployment is done atomically in the containers. This means that if you have a huge EAR (an EAR of 120 MB, for instance) and deploy it onto JBoss EAP 6, that EAR will take some minutes before it starts accepting requests. The same EAR deployed onto WebLogic Server 12c will cut the deployment time at least in half compared with JBoss.

4) Support and Upgrades

- WebLogic Server 12c has patch management available. JBoss EAP 6, on the other hand, has no patch management; each JBoss EAP instance must be patched manually. To achieve such a feature, you need to buy a separate technology called JBoss ON ("Operations Network") that manages this type of thing. But until now, JBoss ON does not support JBoss EAP 6, so in practice JBoss EAP 6 does not have this feature.

- WebLogic Server 12c supports previous WebLogic domains without any reconfiguration, since its kernel has been robust and mature since its creation in 1995. JBoss EAP 6, on the other hand, has a proven lack of supportability between JBoss AS 4, 5, 6 and 7. Different kernels and messaging engines have been implemented in the JBoss stack over the last five years, revealing their inability to create a well-architected and proven middleware technology.

- WebLogic Server 12c has patch prescription based on the customer's configuration. JBoss EAP 6, on the other hand, has no such capability; people need to create support tickets and have their installations reviewed by Red Hat support engineers to get some patch prescription from them.

- Oracle WebLogic Server, independent of the version, has 8 years of support for new patches and lifetime availability of existing patches beyond that.
JBoss EAP 6, on the other hand, provides patches for a specific application server version for up to 5 years after the release date; JBoss EAP 4 and previous versions had only 4 years. A good question that Red Hat will struggle to answer is: "What happens when you find issues after year 5?"

5) RAC ("Real Application Clusters") Support

- WebLogic Server 12c ships with a specific JDBC driver to leverage Oracle RAC clustering capabilities (Fast Application Notification, Transaction Affinity, Fast Connection Failover, etc.). The Oracle JDBC thin driver is also available. JBoss EAP 6, on the other hand, ships only the standard Oracle JDBC thin driver. Load balancing with Oracle RAC is not supported, and manual intervention is necessary in case of planned or unplanned RAC downtime; in JBoss EAP 6 the situation does not re-establish itself automatically after downtime.

- WebLogic Server 12c has a feature called Active GridLink for Oracle RAC, which provides up to 3X performance on OLTP applications. This seamless integration between WebLogic and the Oracle database adds more value to critical business applications, leveraging investments in Oracle database technology and Oracle middleware. JBoss EAP 6, on the other hand, gains no performance at all, even when administrators implement some kind of connection-pool tuning.

- WebLogic Server 12c also supports transaction and web session affinity to Oracle RAC, which provides additional performance gains. This is particularly interesting if you are creating a reliable solution that is distributed not only within a LAN cluster but across different data centers. JBoss EAP 6, on the other hand, has no such support.

6) Standards and Technology Support

- WebLogic Server 12c has been fully Java EE 6 compatible and production ready since December 2011. JBoss EAP 6, on the other hand, became fully compatible with Java EE 6 only in the community version three months later, and production ready only a few days ago, considering that this article was written in June 2012. Red Hat says they are the masters of innovation and technology proliferation, but compared with Oracle, and even with other proprietary vendors like IBM, they have historically been slow to deliver the newest technologies and standards adherence.

- Oracle is the steward of Java, driving innovation into the platform from commercial and open-source vendors. Red Hat, on the other hand, does not have its own JVM and relies on third-party JVMs to complete its application server offering. 95% of Red Hat customers use Oracle HotSpot as the JVM, which means that without Oracle involvement their support is limited exclusively to the application server layer, and we all know that most problems happen in the JVM layer.

- WebLogic Server 12c natively supports JDK 7, which empowers developers to explore the full productivity of the Java platform when writing code. This feature differentiates WebLogic from other application servers (except GlassFish, which is also managed by Oracle), because JDK 7 introduces remarkable productivity features such as the try-with-resources enhancement, catching multiple exceptions with one try block, Strings in switch statements, JVM improvements in terms of JDBC, I/O, networking, security and concurrency, and of course the most important feature of Java 7: native support for multiple non-Java languages. More features of JDK 7 can be found here.
JBoss EAP 6, on the other hand, does not support JDK 7 officially; the community version only states that "Java SE 7 can be used with JBoss 7", which gives you no guarantee of enterprise support for JDK 7.

- Oracle WebLogic Server 12c supports integration with the Spring framework, allowing Spring applications to use WebLogic's transaction manager and exposing bean interfaces as WebLogic MBeans to take advantage of all of WebLogic's monitoring and administration features. JBoss EAP 6, on the other hand, has no special integration with Spring. In fact, Red Hat offers a suspicious package called "JBoss Web Platform" that in theory supports Spring, but in practice this package offers no special integration; it is just a convenience that lets Red Hat customers get support for both JBoss and Spring through the same customer support channel.

7) Lightweight Development

- Oracle WebLogic Server 12c and Oracle GlassFish are completely integrated and can share applications without any modifications. Starting with the 12c version, WebLogic natively understands GlassFish deployment descriptors and specific configurations, offering a true and reliable migration path from a community Java EE application server to an enterprise middleware product like WebLogic. JBoss EAP 6, on the other hand, has no support for natively reusing an existing (or still in development) application from the JBoss AS community server. JBoss users suffer critical issues at deployment time, including: changing the libraries and dependencies of the application, patching the DTD or XSD deployment descriptors, refactoring application layers due to classloading issues and anomalies, and rebuilding persistence, business and web layers due to issues with "usage of the certified version of a certain dependency" or "frameworks that Red Hat potentially does not recommend", etc. If your culture or enterprise IT directive is to develop Java EE applications on community middleware and, at some point in the future, transition to enterprise (vendor-supported) middleware, Oracle WebLogic plus Oracle GlassFish offers you a more sustainable solution.

- WebLogic Server 12c has a very light ZIP distribution (less than 165 MB). The JBoss EAP 6 ZIP is around 130 MB, but together with JBoss ON you have roughly another 100 MB, resulting in a larger download footprint. This is particularly interesting if you plan to use automated setup of application server instances (for example, to rapidly set up a development or staging environment) using Maven or Hudson.

- WebLogic Server 12c has complete integration with Maven, allowing developers to set up WebLogic domains with a few commands. Tasks such as downloading WebLogic, installation, domain creation and data source deployment are completely integrated. JBoss EAP 6, on the other hand, offers limited integration with those tools.

- WebLogic Server 12c has a startup mode called WLX that turns off the EJB, JMS and JCA containers, leaving only the web container with the Java EE 6 web profile enabled. JBoss EAP 6, on the other hand, has no such feature; you need to manually disable the containers you do not want to use.

- WebLogic Server 12c supports FastSwap, which enables you to change classes without redeployment. This is particularly interesting if you are developing patches for an application that is already deployed and you do not want to redeploy the entire application.
This is the same behavior that most application servers offer for JSP pages, but with WebLogic Server 12c you have the same feature for Java classes in general. JBoss EAP 6, on the other hand, has no such support; even JBoss EAP 5 does not support this to date.

8) JMS and Messaging

- WebLogic Server 12c has had a proven and highly scalable JMS implementation since its initial release in 1995. JBoss EAP 6, on the other hand, has a still-immature technology called HornetQ, which was introduced in JBoss EAP 5, replacing everything implemented in the previous versions. Red Hat loves to introduce new technologies across JBoss versions, playing around with customers and their investments, and when asked why they changed the implementation and caused such a mess, their answer is always: "the previous implementation was inadequate and not aligned with the community strategy, so we are creating a new and improved one". This Red Hat practice leads to uncomfortable investments that in the near future (sometimes less than a year) will be affected in some way.

- WebLogic Server 12c has troubleshooting and monitoring features included in the WebLogic console and WLDF. JBoss EAP 6, on the other hand, has no direct monitoring in the console; activity is reflected only in the logs, and no debug logs are available in case of JMS issues.

- WebLogic Server 12c has extremely good performance and scalability. JBoss EAP 6, on the other hand, has a JMS storage mechanism that relies on an Oracle database or MySQL. This means that if an issue happens in production and Red Hat claims that a performance issue is due to database problems, they will not support you on the performance issue; they will direct you to call Oracle instead.

- WebLogic Server 12c supports enterprise messaging features such as SAF ("Store and Forward"), distributed queues/topics and foreign JMS provider support, which leverage JMS implementations without compromising developer code, making things completely transparent. JBoss EAP 6 does not even dream of supporting such features.

9) Caching and Grid

- Coherence, the leading and most mature data grid technology from Oracle, has been available since early 2000 and was integrated with WebLogic in 2009. Coherence and WebLogic clusters can both be managed from the WebLogic administrative console; even Node Manager supports Coherence. JBoss, on the other hand, discontinued JBoss Cache, which was their caching implementation, just as they did with the messaging implementation (JBossMQ), which was an issue for long-term customers. JBoss EAP 6 ships Infinispan 1.0, which is immature and lacks a proven record of successful cases and reliability.

- WebLogic Server 12c has a feature called ActiveCache, which uses Coherence to replicate HTTP sessions, without any code changes, from both WebLogic and other application servers like JBoss, Tomcat, WebSphere, GlassFish and even Microsoft IIS. JBoss EAP 6, on the other hand, does not have such support, and even if they add it in the future, they will probably support only their own application server.

- Coherence can be used to manage both L1 and L2 cache levels, providing support for Oracle TopLink and other JPA-compliant implementations, even Hibernate. JBoss EAP 6 and Infinispan, on the other hand, support only Hibernate. And most important of all: Infinispan does not have any successful case of L1 or L2 caching support with Hibernate, which leads us to question its viability.
10) Performance

- WebLogic Server 12c is certified with Oracle Exalogic Elastic Cloud and can run unchanged applications on this engineered system. Customers can benefit from Exalogic optimizations at both the kernel and JVM layers to boost performance by up to 10X for web, OLTP, JMS and grid applications. JBoss EAP 6, on the other hand, has no investment in engineered systems: customers do not have the option to deploy on an ultra-fast Java system if their project becomes relevant and performance issues are detected.

- WebLogic Server 12c maintains a performance gain with each new release: starting from WebLogic 5.1, the overall performance gain has been close to 4X, which is close to a 20% gain release by release. JBoss, on the other hand, does not provide SPECjAppServer or SPECjEnterprise performance benchmarks. Their so-called "performance gains" remain hidden in their customers' environments, which leaves us wondering whether they are real, since we will never have access to those environments.

- WebLogic Server 12c has industry performance benchmarks, with submissions across platforms and configurations leading SPECj. Oracle WebLogic leads SPECjAppServer performance in multiple categories, fitting all customer topologies: dual-node, single-node, multi-node and multi-node with RAC. JBoss, again, does not provide any SPECjAppServer performance benchmarks.

- WebLogic Server 12c has a feature called work managers, which allows your application to reach new performance levels based on critical utilization of the CPUs. Work managers prioritize work and allocate threads based on an execution model that takes into account administrator-defined parameters as well as actual run-time performance and throughput. JBoss EAP 6 has no comparable feature and probably never will. Without something like work managers, JBoss EAP 6 forces administrators and especially developers to chase performance gains in an intrusive way, rewriting code and doing performance refactorings.

11) Professional Services Support

- WebLogic Server 12c, like any other technology sold by Oracle, gives customers the possibility of hiring OCS ("Oracle Consulting Services") to manage critical scenarios, assist with the deployment of new applications, and provide highly skilled consultancy on architecture and best practices, with people allocated alongside customer teams. All OCS services are available without restrictions, whether the customer has already bought software from Oracle or is just starting an implementation before any acquisition. JBoss EAP 6, or Red Hat to be more specific, only offers professional services if you buy subscriptions from them. If you are developing a new critical application for your business and need Red Hat's help with a serious issue or an architecture decision, they will probably say: "OK... I can help you, but only after you buy subscriptions from me". Red Hat also does not allow their professional services consultants to manage environments that use community-based software. They will probably force you to first buy a subscription, download their "enterprise" version and then, optionally, hire their consultants.

- Oracle provides you with our university to educate your team in our technologies, including, of course, specialized training on the WebLogic application server. At any time and location, you can hire Oracle to train your team so you get trustworthy knowledge tailored to your specific needs.
Certifications for the products are also available if your technical people want to differentiate themselves as professionals. Red Hat, on the other hand, has a limited pool of resources to train your team in their technologies. Basically, they sell training and certification for RHEL ("Red Hat Enterprise Linux"), but if you demand more specialized training in JBoss middleware, they will probably refer you to some "certified" partner for local training, since they are apparently discontinuing their education center, at least here in Brazil. They have not been able to reproduce the success of their RHEL education in their middleware division, since they need to sell the subscriptions first before giving you specialized training. And again, they only offer specialized training based on their enterprise version (EAP in the case of JBoss), which means the courses will be somewhat outdated. There are reports of developers who took official training from Red Hat this year (2012), and in a certain advanced JBoss course Red Hat supposedly covered JBossMQ as the messaging subsystem, and even the printed material provided was based on JBossMQ, since the training had been created for JBoss EAP 4.3.

12) Encouraging Transparency without Ulterior Motives

- WebLogic Server 12c, like any other software from Oracle, can be downloaded at any time from anywhere; you only need an OTN ("Oracle Technology Network") credential, and you can download any enterprise software as many times as you want. And it is not some kind of "trial" version: it is the official binaries that will run forever in your data center. Oracle does not encourage the usage of "specific versions" of our software; the binaries you buy from Oracle are the same binaries anyone in the world can download and use for testing and personal education. JBoss EAP 6, on the other hand, is not available for download unless you buy a subscription and get access to the Red Hat enterprise repositories. If you need to test, learn or just start creating your application using Red Hat's middleware software, you have to download it from the community website. You are not allowed to download the enterprise version which, according to Red Hat, is more secure, reliable and robust. But none of us wants to start developing software on unsecure, unreliable and non-scalable middleware, right? So what do you do? You are "invited" by Red Hat to buy subscriptions from them to get access to the "cool" version of the software.

- WebLogic Server 12c prices are publicly available on the Oracle website. If you want to know right now how much WebLogic will cost your organization, just click here and get access to our price list; in the case of WebLogic, check the "US Oracle Technology Commercial Price List". Oracle also encourages you to get in touch with a sales representative to discuss discounts that could make the investment in our technology possible, but you are not required to do so, only if you are interested in buying our technology or want to discuss discount scenarios. JBoss EAP 6, on the other hand, does not have its cost publicly available on Red Hat's website or in any other media; at least, it is not easy to find such information. The only link you will likely find on their website is a "Contact a Sales Representative" link. This is not a very good relationship between a customer and a vendor, and it is not an example of transparency, especially when the software is sold as open.
In these situations, customers expect to see software prices publicly available, so they have the chance to decide, based on the software's existing features, whether the cost is fair or not.

Conclusion

Oracle WebLogic is the most mature, secure, reliable and scalable Java EE application server on the market, with a proven record of success around the globe to back that up. Don't miss the chance to discover today how WebLogic could fit your needs and sustain your global IT middleware strategy, whether or not that strategy is based on the Cloud.

    Read the article

  • 256 Windows Azure Worker Roles, Windows Kinect and a 90's Text-Based Ray-Tracer

    - by Alan Smith
For a couple of years I have been demoing a simple render farm hosted in Windows Azure using worker roles and the Azure Storage service. At the start of the presentation I deploy an Azure application that uses 16 worker roles to render a 1,500-frame 3D ray-traced animation. At the end of the presentation, when the animation is complete, I play the animation and delete the Azure deployment. The standing joke with the audience was that it was a "$2 demo", as the compute charge for running the 16 instances for an hour was $1.92; factor in the bandwidth charges and it's a couple of dollars. The point of the demo is that it highlights one of the great benefits of cloud computing: you pay for what you use, and if you need massive compute power for a short period of time, using Windows Azure can work out very cost effective.

The "$2 demo" was great for presenting at user groups and conferences in that it could be deployed to Azure, used to render an animation, and then removed within a one-hour session. I have always had the idea of doing something a bit more impressive with the demo, and scaling it from a "$2 demo" to a "$30 demo". The challenge was to create a visually appealing animation in high-definition format and keep the demo time down to one hour. This article will run through how I achieved this.

Ray Tracing

Ray tracing, a technique for generating high-quality photorealistic images, gained popularity in the 90's with companies like Pixar creating feature-length computer animations, and also with the emergence of shareware text-based ray tracers that could run on a home PC. In order to render a ray-traced image, the ray of light that would pass from the view point must be tracked until it intersects with an object. At the intersection, the color, reflectiveness, transparency and refractive index of the object are used to calculate whether the ray will be reflected or refracted. Each pixel may require thousands of calculations to determine what color it will be in the rendered image.

Pin-Board Toys

Having very little artistic talent and a basic understanding of maths, I decided to focus on an animation that could be modeled fairly easily and would look visually impressive. I've always liked the pin-board desktop toys that became popular in the 80's, and when I was working as a 3D animator back in the 90's I always had the idea of creating a 3D ray-traced animation of a pin-board, but never found the energy to do it. Even if I had had a go at it, the render time to produce an animation that would look respectable on a 486 would have been measured in months.

PolyRay

Back in 1995 I landed my first real job, after spending three years being a beach-ski-climbing-paragliding-bum, and was employed to create 3D ray-traced animations for a CD-ROM that school kids would use to learn physics. I had got into the strange and wonderful world of text-based ray tracing, and was using a shareware ray tracer called PolyRay. PolyRay takes a text file describing a scene as input and, after a few hours of processing on a 486, produces a high-quality ray-traced image. The following is an example of a basic PolyRay scene file.
background Midnight_Blue

static define matte surface { ambient 0.1 diffuse 0.7 }
define matte_white texture { matte { color white } }
define matte_black texture { matte { color dark_slate_gray } }
define position_cylindrical 3
define lookup_sawtooth 1
define light_wood <0.6, 0.24, 0.1>
define median_wood <0.3, 0.12, 0.03>
define dark_wood <0.05, 0.01, 0.005>

define wooden texture {
    noise surface {
        ambient 0.2
        diffuse 0.7
        specular white, 0.5
        microfacet Reitz 10
        position_fn position_cylindrical
        position_scale 1
        lookup_fn lookup_sawtooth
        octaves 1
        turbulence 1
        color_map(
            [0.0, 0.2, light_wood, light_wood]
            [0.2, 0.3, light_wood, median_wood]
            [0.3, 0.4, median_wood, light_wood]
            [0.4, 0.7, light_wood, light_wood]
            [0.7, 0.8, light_wood, median_wood]
            [0.8, 0.9, median_wood, light_wood]
            [0.9, 1.0, light_wood, dark_wood])
    }
}
define glass texture { surface { ambient 0 diffuse 0 specular 0.2 reflection white, 0.1 transmission white, 1, 1.5 } }
define shiny surface { ambient 0.1 diffuse 0.6 specular white, 0.6 microfacet Phong 7 }
define steely_blue texture { shiny { color black } }
define chrome texture { surface { color white ambient 0.0 diffuse 0.2 specular 0.4 microfacet Phong 10 reflection 0.8 } }

viewpoint {
    from <4.000, -1.000, 1.000>
    at <0.000, 0.000, 0.000>
    up <0, 1, 0>
    angle 60
    resolution 640, 480
    aspect 1.6
    image_format 0
}

light <-10, 30, 20>
light <-10, 30, -20>

object { disc <0, -2, 0>, <0, 1, 0>, 30 wooden }

object { sphere <0.000, 0.000, 0.000>, 1.00 chrome }
object { cylinder <0.000, 0.000, 0.000>, <0.000, 0.000, -4.000>, 0.50 chrome }

After setting up the background and defining colors and textures, the viewpoint is specified. The "camera" is located at a point in 3D space, and it looks towards another point. The angle, image resolution, and aspect ratio are specified. Two lights are present in the image at defined coordinates. The three objects in the image are a wooden disc to represent a table top, and a sphere and cylinder that intersect to form a pin that will be used for the pin-board toy in the final animation. When the image is rendered, the following image is produced. The pins are modeled with a chrome surface, so they reflect the environment around them. Note that the scale of the pin shaft is not correct; this will be fixed later.

Modeling the Pin Board

The frame of the pin-board is made up of three boxes and six cylinders; the front box is modeled using a clear, slightly reflective solid with the same refractive index as glass. The other shapes are modeled as metal.

object { box <-5.5, -1.5, 1>, <5.5, 5.5, 1.2> glass }
object { box <-5.5, -1.5, -0.04>, <5.5, 5.5, -0.09> steely_blue }
object { box <-5.5, -1.5, -0.52>, <5.5, 5.5, -0.59> steely_blue }
object { cylinder <-5.2, -1.2, 1.4>, <-5.2, -1.2, -0.74>, 0.2 steely_blue }
object { cylinder <5.2, -1.2, 1.4>, <5.2, -1.2, -0.74>, 0.2 steely_blue }
object { cylinder <-5.2, 5.2, 1.4>, <-5.2, 5.2, -0.74>, 0.2 steely_blue }
object { cylinder <5.2, 5.2, 1.4>, <5.2, 5.2, -0.74>, 0.2 steely_blue }
object { cylinder <0, -1.2, 1.4>, <0, -1.2, -0.74>, 0.2 steely_blue }
object { cylinder <0, 5.2, 1.4>, <0, 5.2, -0.74>, 0.2 steely_blue }

In order to create the matrix of pins that make up the pin board, I used a basic console application with a few nested loops to create two intersecting matrices of pins, which models the layout used in the pin-board toys (a rough sketch of such a generator follows). The resulting image is shown below. The pin board contains 11,481 pins, with the scene file containing 23,709 lines of code.
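The post does not include the generator itself, so the following is only an illustrative sketch of the approach described above: a console application with nested loops that writes one chrome sphere and one cylinder per pin, in the same PolyRay syntax used earlier. The grid dimensions, spacing, radii and file name are assumptions, not values taken from the original project.

// Hypothetical sketch of the pin-matrix generator described above.
// Grid dimensions, spacing, radii and file name are assumptions.
using System.IO;

class PinBoardGenerator
{
    static void Main()
    {
        double spacing = 0.5;          // distance between pin centres (assumed)
        int columns = 100, rows = 57;  // two such grids give roughly the 11,481 pins mentioned above (assumed)

        using (StreamWriter writer = new StreamWriter("pins.pi"))
        {
            // First grid of pins.
            WriteGrid(writer, 0.0, 0.0, columns, rows, spacing);
            // Second grid, offset by half the spacing so the two matrices intersect.
            WriteGrid(writer, spacing / 2, spacing / 2, columns, rows, spacing);
        }
    }

    static void WriteGrid(StreamWriter writer, double offsetX, double offsetY,
                          int columns, int rows, double spacing)
    {
        for (int i = 0; i < columns; i++)
        {
            for (int j = 0; j < rows; j++)
            {
                double x = offsetX + i * spacing;
                double y = offsetY + j * spacing;
                // Each pin is a chrome sphere for the head and a cylinder for the shaft,
                // matching the PolyRay objects shown earlier. Z is set later, per frame.
                writer.WriteLine("object {{ sphere <{0:0.000}, {1:0.000}, 0.000>, 0.20 chrome }}", x, y);
                writer.WriteLine("object {{ cylinder <{0:0.000}, {1:0.000}, 0.000>, <{0:0.000}, {1:0.000}, -4.000>, 0.10 chrome }}", x, y);
            }
        }
    }
}

Emitting the pins as plain text keeps the generator trivial; the per-frame Z coordinates are applied later from the depth images, as described next.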
For the complete animation, 2,000 scene files will be created, which is over 47 million lines of code. Each pin in the pin-board will slide out a specific distance when an object is pressed into the back of the board. This is easily modeled by setting the Z coordinate of the pin to a specific value. In order to set all of the pins in the pin-board to the correct position, a bitmap image can be used; the position of each pin can be set based on the color of the pixel at the corresponding position in the image. When the Windows Azure logo is used to set the Z coordinate of the pins, the following image is generated.

The challenge now was to make a cool animation. The Azure logo is fine, but it is static. Using a normal video to animate the pins would not work; the colors in the video would not correspond to the depth of the objects from the camera. In order to simulate the pin board accurately, a series of frames from a depth camera could be used.

Windows Kinect

The Kinect controllers for the Xbox 360 and Windows feature a depth camera. The Kinect SDK for Windows provides a programming interface for Kinect, giving .NET developers easy access to the Kinect sensors. The Kinect Explorer provided with the Kinect SDK is a great starting point for exploring Kinect from a developer's perspective. Both the Xbox 360 Kinect and the Windows Kinect will work with the Kinect SDK; the Windows Kinect is required for commercial applications, but the Xbox Kinect can be used for hobby projects. The Windows Kinect has the advantage of providing a mode that allows depth capture with objects closer to the camera, which makes for a more accurate depth image for setting the pin positions.

Creating a Depth Field Animation

The depth field animation used to set the positions of the pins in the pin board was created using a modified version of the Kinect Explorer sample application. In order to simulate the pin board accurately, a small section of the depth range from the depth sensor is used. Any part of the object in front of the depth range will result in a white pixel; anything behind the depth range will be black. Within the depth range, the pixels in the image are set to RGB values from 0,0,0 to 255,255,255. A screenshot of the modified Kinect Explorer application is shown below.

The Kinect Explorer sample application was modified to include slider controls that are used to set the depth range that forms the image from the depth stream. This allows the fine-tuning of the depth image that is required for simulating the position of the pins in the pin board. The Kinect Explorer was also modified to record a series of images from the depth camera and save them as a sequence of JPEG files that will be used to animate the pins in the animation; the Start and Stop buttons are used to start and stop the image recording. An example of one of the depth images is shown below. Once a series of 2,000 depth images has been captured, the task of creating the animation can begin.
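The post does not show the modified depth-processing code, but the mapping it describes is simple to express. The following is a minimal sketch assuming a raw depth value in millimetres and the near/far values taken from the slider controls; the class, method and variable names are mine, not the author's or the Kinect SDK's.

using System;

static class DepthImageHelper
{
    // Maps a raw depth sample (millimetres) to a grey level, given the near/far
    // values taken from the slider controls described above.
    public static byte DepthToGrey(int depth, int near, int far)
    {
        if (depth <= near) return 255;   // in front of the range: white (pin fully pressed out)
        if (depth >= far) return 0;      // behind the range: black (pin fully retracted)

        // Linearly scale depths inside the range to 0..255.
        double fraction = (double)(far - depth) / (far - near);
        return (byte)(fraction * 255);
    }

    static void Main()
    {
        // Example: a 1.2 m reading with the range set to 0.8 m - 1.6 m maps to mid-grey.
        Console.WriteLine(DepthToGrey(1200, 800, 1600)); // prints 127
    }
}

Depending on the Kinect SDK image format used, the raw sample may need shifting to discard player-index bits before the millimetre value is passed to a helper like this.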
Rendering a Test Frame

In order to test the creation of frames and get an approximation of the time required to render each frame, a test frame was rendered on-premise using PolyRay. The output of the rendering process is shown below. The test frame contained 23,629 primitive shapes, most of which are the spheres and cylinders used for the 11,800 or so pins in the pin board. The 1280x720 image contains 921,600 pixels, but as anti-aliasing was used the number of rays that were calculated was 4,235,777, with 3,478,754,073 object boundaries checked. The test frame of the pin board with the depth field image applied is shown below.

The tracing time for the test frame was 4 minutes 27 seconds, which means rendering the 2,000 frames of the animation would take over 148 hours, or a little over 6 days. Although this is much faster than an old 486, waiting almost a week to see the results of an animation would make it challenging for animators to create, view, and refine their animations. It would be much better if the animation could be rendered in less than one hour.

Windows Azure Worker Roles

The cost of creating an on-premise render farm to render animations increases in proportion to the number of servers. The table below shows the cost of servers for creating a render farm, assuming a cost of $500 per server.

Number of Servers    Cost
1                    $500
16                   $8,000
256                  $128,000

As well as the cost of the servers, there would be additional costs for networking, racks, etc. Hosting an environment of 256 servers on-premise would require a server room with cooling, and some pretty hefty power cabling.

The Windows Azure compute services provide worker roles, which are ideal for performing processor-intensive compute tasks. With the scalability available in Windows Azure, a job that takes 256 hours to complete could be performed using different numbers of worker roles. The time and cost of using 1, 16 or 256 worker roles is shown below.

Number of Worker Roles    Render Time    Cost
1                         256 hours      $30.72
16                        16 hours       $30.72
256                       1 hour         $30.72

Using worker roles in Windows Azure results in the same cost for the 256-hour job, irrespective of the number of worker roles used. Provided the compute task can be broken down into many small units, and the worker role compute power can be used effectively, it makes sense to scale the application so that the task is completed quickly, making the results available in a timely fashion. The task of rendering 2,000 frames in an animation is one that can easily be broken down into 2,000 individual pieces, which can be performed by a number of worker roles.

Creating a Render Farm in Windows Azure

The architecture of the render farm is shown in the following diagram. The render farm is a hybrid application with the following components:

· On-Premise
  o Windows Kinect – used together with the Kinect Explorer to create a stream of depth images.
  o Animation Creator – this application uses the depth images from the Kinect sensor to create scene description files for PolyRay. These files are then uploaded to the jobs blob container, and job messages are added to the jobs queue (a rough sketch of this step follows below).
  o Process Monitor – this application queries the role instance lifecycle table and displays statistics about the render farm environment and render process.
  o Image Downloader – this application polls the image queue and downloads the rendered animation files once they are complete.
· Windows Azure
  o Azure Storage – queues and blobs are used for the scene description files and completed frames. A table is used to store the statistics about the rendering environment.
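The worker-role side of this flow is shown in the code later in the post; the submission side is not. As a rough sketch of what the Animation Creator's upload step could look like, reusing the JobQueue and CloudRayBlob helper classes that appear in the worker role code (their Add and UploadFile methods are assumed to mirror the Get, Delete and DownloadFile calls shown there, and the folder path, file naming and frame count are illustrative assumptions), it might be little more than a loop over the generated scene files:

// Hypothetical sketch of the Animation Creator's job submission step.
void SubmitRenderJobs()
{
    string sceneFolder = @"C:\CloudRay\Scenes";          // local folder of generated scene files (assumed)
    JobQueue jobQueue = new JobQueue("renderjobs");       // same queue name the worker role polls
    CloudRayBlob sceneBlob = new CloudRayBlob("scenes");  // same container the worker role reads from

    for (int frame = 0; frame < 2000; frame++)
    {
        // Scene files are assumed to be named frame0000.pi ... frame1999.pi.
        string sceneFile = string.Format("frame{0:0000}.pi", frame);

        // Upload the scene description to the scenes blob container...
        sceneBlob.UploadFile(Path.Combine(sceneFolder, sceneFile));

        // ...and add a job message so that any idle worker role instance can pick it up.
        jobQueue.Add(sceneFile);
    }
}

Because each message is an independent unit of work, the queue naturally load-balances the 2,000 frames across however many worker role instances happen to be running.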
The architecture of each worker role is shown below. The worker role is configured to use local storage, which provides file storage on the worker role instance that can be used by the applications to render the image and transform the format of the image. The service definition for the worker role, with the local storage configuration, is shown below.

<?xml version="1.0" encoding="utf-8"?>
<ServiceDefinition name="CloudRay">
  <WorkerRole name="CloudRayWorkerRole" vmsize="Small">
    <Imports>
    </Imports>
    <ConfigurationSettings>
      <Setting name="DataConnectionString" />
    </ConfigurationSettings>
    <LocalResources>
      <LocalStorage name="RayFolder" cleanOnRoleRecycle="true" />
    </LocalResources>
  </WorkerRole>
</ServiceDefinition>

The two executable programs, PolyRay.exe and DTA.exe, are included in the Azure project with Copy Always set as the property. PolyRay will take the scene description file and render it to a Truevision TGA file. As the TGA format has not seen much use since the mid 90's, it is converted to a JPG image using Dave's Targa Animator, another shareware application from the 90's.

Each worker role will use the following process to render the animation frames.

1. The worker process polls the job queue; if a job is available, the scene description file is downloaded from blob storage to local storage.
2. PolyRay.exe is started in a process with the appropriate command line arguments to render the image as a TGA file.
3. DTA.exe is started in a process with the appropriate command line arguments to convert the TGA file to a JPG file.
4. The JPG file is uploaded from local storage to the images blob container.
5. A message is placed on the images queue to indicate a new image is available for download.
6. The job message is deleted from the job queue.
7. The role instance lifecycle table is updated with statistics on the number of frames rendered by the worker role instance, and the CPU time used.

The code for this is shown below.

public override void Run()
{
    // Set environment variables
    string polyRayPath = Path.Combine(Environment.GetEnvironmentVariable("RoleRoot"), PolyRayLocation);
    string dtaPath = Path.Combine(Environment.GetEnvironmentVariable("RoleRoot"), DTALocation);

    LocalResource rayStorage = RoleEnvironment.GetLocalResource("RayFolder");
    string localStorageRootPath = rayStorage.RootPath;

    JobQueue jobQueue = new JobQueue("renderjobs");
    JobQueue downloadQueue = new JobQueue("renderimagedownloadjobs");
    CloudRayBlob sceneBlob = new CloudRayBlob("scenes");
    CloudRayBlob imageBlob = new CloudRayBlob("images");
    RoleLifecycleDataSource roleLifecycleDataSource = new RoleLifecycleDataSource();

    Frames = 0;

    while (true)
    {
        // Get the render job from the queue
        CloudQueueMessage jobMsg = jobQueue.Get();

        if (jobMsg != null)
        {
            // Get the file details
            string sceneFile = jobMsg.AsString;
            string tgaFile = sceneFile.Replace(".pi", ".tga");
            string jpgFile = sceneFile.Replace(".pi", ".jpg");

            string sceneFilePath = Path.Combine(localStorageRootPath, sceneFile);
            string tgaFilePath = Path.Combine(localStorageRootPath, tgaFile);
            string jpgFilePath = Path.Combine(localStorageRootPath, jpgFile);

            // Copy the scene file to local storage
            sceneBlob.DownloadFile(sceneFilePath);

            // Run the ray tracer.
            string polyrayArguments =
                string.Format("\"{0}\" -o \"{1}\" -a 2", sceneFilePath, tgaFilePath);
            Process polyRayProcess = new Process();
            polyRayProcess.StartInfo.FileName =
                Path.Combine(Environment.GetEnvironmentVariable("RoleRoot"), polyRayPath);
            polyRayProcess.StartInfo.Arguments = polyrayArguments;
            polyRayProcess.Start();
            polyRayProcess.WaitForExit();

            // Convert the image
            string dtaArguments =
                string.Format(" {0} /FJ /P{1}", tgaFilePath, Path.GetDirectoryName(jpgFilePath));
            Process dtaProcess = new Process();
            dtaProcess.StartInfo.FileName =
                Path.Combine(Environment.GetEnvironmentVariable("RoleRoot"), dtaPath);
            dtaProcess.StartInfo.Arguments = dtaArguments;
            dtaProcess.Start();
            dtaProcess.WaitForExit();

            // Upload the image to blob storage
            imageBlob.UploadFile(jpgFilePath);

            // Add a download job.
            downloadQueue.Add(jpgFile);

            // Delete the render job message
            jobQueue.Delete(jobMsg);

            Frames++;
        }
        else
        {
            Thread.Sleep(1000);
        }

        // Log the worker role activity.
        roleLifecycleDataSource.Alive
            ("CloudRayWorker", RoleLifecycleDataSource.RoleLifecycleId, Frames);
    }
}

Monitoring Worker Role Instance Lifecycle

In order to get more accurate statistics about the lifecycle of the worker role instances used to render the animation, data was tracked in an Azure storage table. The following class was used to track the worker role lifecycles in Azure storage.

public class RoleLifecycle : TableServiceEntity
{
    public string ServerName { get; set; }
    public string Status { get; set; }
    public DateTime StartTime { get; set; }
    public DateTime EndTime { get; set; }
    public long SecondsRunning { get; set; }
    public DateTime LastActiveTime { get; set; }
    public int Frames { get; set; }
    public string Comment { get; set; }

    public RoleLifecycle()
    {
    }

    public RoleLifecycle(string roleName)
    {
        PartitionKey = roleName;
        RowKey = Utils.GetAscendingRowKey();
        Status = "Started";
        StartTime = DateTime.UtcNow;
        LastActiveTime = StartTime;
        EndTime = StartTime;
        SecondsRunning = 0;
        Frames = 0;
    }
}

A new instance of this class is created and added to the storage table when the role starts. It is then updated each time the worker renders a frame to record the total number of frames rendered and the total processing time. These statistics are used by the monitoring application to determine how effectively the resources in the render farm are being used.

Rendering the Animation

The Azure solution was deployed to Windows Azure with the service configuration set to 16 worker role instances. This allows the application to be tested in the cloud environment and its performance determined. When I demo the application at conferences and user groups I often start with 16 instances, and then scale up the application to the full 256 instances. The configuration to run 16 instances is shown below.
<?xml version="1.0" encoding="utf-8"?>
<ServiceConfiguration serviceName="CloudRay"
    xmlns="http://schemas.microsoft.com/ServiceHosting/2008/10/ServiceConfiguration"
    osFamily="1" osVersion="*">
  <Role name="CloudRayWorkerRole">
    <Instances count="16" />
    <ConfigurationSettings>
      <Setting name="DataConnectionString"
        value="DefaultEndpointsProtocol=https;AccountName=cloudraydata;AccountKey=..." />
    </ConfigurationSettings>
  </Role>
</ServiceConfiguration>

About six minutes after deploying the application, the first worker roles become active and start to render the first frames of the animation. The CloudRay Monitor application displays an icon for each worker role instance, with a number indicating the number of frames that the worker role has rendered. The statistics on the left show the number of active worker roles and statistics about the render process. The render time is the time since the first worker role became active; the CPU time is the total amount of processing time used by all worker role instances to render the frames.

Five minutes after the first worker role became active, the last of the 16 worker roles activated. By this time the first seven worker roles had each rendered one frame of the animation.

With 16 worker roles up and running, it can be seen that one hour and 45 minutes of CPU time has been used to render 32 frames, with a render time of just under 10 minutes. At this rate it would take over 10 hours to render the 2,000 frames of the full animation. In order to complete the animation in under an hour, more processing power will be required.

Scaling the render farm from 16 instances to 256 instances is easy using the new management portal. The slider is set to 256 instances and the configuration saved. We do not need to redeploy the application, and the 16 instances that are up and running will not be affected. Alternatively, the configuration file for the Azure service could be modified to specify 256 instances.

<?xml version="1.0" encoding="utf-8"?>
<ServiceConfiguration serviceName="CloudRay"
    xmlns="http://schemas.microsoft.com/ServiceHosting/2008/10/ServiceConfiguration"
    osFamily="1" osVersion="*">
  <Role name="CloudRayWorkerRole">
    <Instances count="256" />
    <ConfigurationSettings>
      <Setting name="DataConnectionString"
        value="DefaultEndpointsProtocol=https;AccountName=cloudraydata;AccountKey=..." />
    </ConfigurationSettings>
  </Role>
</ServiceConfiguration>

Six minutes after the new configuration was applied, 75 new worker roles had activated and were processing their first frames. Five minutes later the full configuration of 256 worker roles was up and running. We can see that the average rate of frame rendering has increased from 3 to 12 frames per minute, and that over 17 hours of CPU time has been utilized in 23 minutes. In this test the time to provision 140 worker roles was about 11 minutes, which works out at about one every five seconds.

We are now half way through the rendering, with 1,000 frames complete. This has utilized just under three days of CPU time in a little over 35 minutes. The animation is now complete, with 2,000 frames rendered in a little over 52 minutes. The CPU time used by the 256 worker roles is 6 days, 7 hours and 22 minutes, with an average frame rate of 38 frames per minute. The rendering of the last 1,000 frames took 16 minutes 27 seconds, which works out at a rendering rate of 60 frames per minute.
The frame counts on the server instances indicate that the use of a queue to distribute the workload has been very effective in spreading the load across the 256 worker role instances. The 16 instances that were deployed first have rendered between 11 and 13 frames each, whilst the 240 instances that were added when the application was scaled have rendered between 6 and 9 frames each.

Completed Animation

I've uploaded the completed animation to YouTube; a low-resolution preview is shown below. Pin Board Animation Created using Windows Kinect and 256 Windows Azure Worker Roles. The animation can be viewed in 1280x720 resolution at the following link: http://www.youtube.com/watch?v=n5jy6bvSxWc

Effective Use of Resources

According to the CloudRay monitor statistics, the animation took 6 days, 7 hours and 22 minutes of CPU time to render; this works out at 152 hours of compute time, rounded up to the nearest hour. As usage of the worker role instances is billed for the full hour, it may have been possible to render the animation using fewer than 256 worker roles. When deciding on the optimal usage of resources, the time required to provision and start the worker roles must also be considered. In the demo I started with 16 worker roles and then scaled the application to 256 worker roles. It would have been more optimal to start the application with maybe 200 worker roles and utilize the full hour that I was being billed for. This would, however, have prevented showing the ease of scalability of the application.

The new management portal displays the CPU usage across the worker roles in the deployment. The average CPU usage across all instances is 93.27%, with over 99% used when all the instances are up and running. This shows that the worker role resources are being used very effectively.

Grid Computing Scenarios

Although I am using this scenario for a hobby project, there are many scenarios where a large amount of compute power is required for a short period of time. Windows Azure provides a great platform for developing these types of grid computing applications, and can work out very cost effective.

· Windows Azure can provide massive compute power, on demand, in a matter of minutes.
· The use of queues to manage the load balancing of jobs between role instances is a simple and effective solution.
· Using a cloud-computing platform like Windows Azure allows proof-of-concept scenarios to be tested and evaluated on a very low budget.
· No charges for inbound data transfer make the uploading of large data sets to Windows Azure Storage services cost effective. (Transaction charges still apply.)

Tips for using Windows Azure for Grid Computing Scenarios

I found the render farm a fairly simple scenario to implement using Windows Azure. I was impressed by the ease of scalability that Azure provides, and by the short time that the application took to scale from 16 to 256 worker role instances: in this case it was around 13 minutes; in other tests it took between 10 and 20 minutes. The following tips may be useful when implementing a grid computing project in Windows Azure.

· Using an Azure Storage queue to load-balance the units of work across multiple worker roles is simple and very effective. The design I have used in this scenario could easily scale to many thousands of worker role instances.
· Windows Azure accounts are typically limited to 20 cores;
if you need to use more than this, a call to support and a credit card check will be required.
· Be aware of how the billing model works. You will be charged for worker role instances for the full clock hour in which the instance is deployed. Schedule the workload to start just after the clock hour has started.
· Monitor the utilization of the resources you are provisioning; ensure that you are not paying for worker roles that are idle.
· If you are deploying third-party applications to worker roles, you may well run into licensing issues. Purchasing software licenses on a per-processor basis when using hundreds of processors for a short time period would not be cost effective.
· Third-party software may also require installation onto the worker roles, which can be accomplished using start-up tasks. Bear in mind that adding a startup task and a possible reboot will add to the time required for the worker role instance to start and activate. An alternative may be to use a prepared VM and VM roles.
· Consider using the Windows Azure Autoscaling Application Block (WASABi) to autoscale the worker roles in your application. When using a large number of worker roles, the utilization must be carefully monitored; if the scaling algorithms are not optimal, it could get very expensive!

    Read the article

  • JBoss 7.1.1 + Spring 3.1 + JPA 2 (Hibernate 3.6.8) > Cannot parse beans.xml

    - by Mashrur
    Trying to deploy a JSF 2 app with Spring 3.1 + JPA 2 (Hibernte 3.6.8) into JBoss 7.1.1 AS. The app was working fine on tomcat 7. Now, I have already added and changed some configurations. Added jboss-deployment-structure.xml <jboss-deployment-structure xmlns="urn:jboss:deployment-structure:1.1"> <deployment> <exclusions> <module name="org.apache.log4j" /> <module name="javax.faces.api" slot="main"/> <module name="com.sun.jsf-impl" slot="main"/> <module name="org.hibernate"/> <module name="org.javassist" /> <module name="javaee.api" /> <module name="org.hibernate.validator" /> <module name="org.jboss.as.web" /> <module name="org.jboss.logging" /> <module name="javax.persistence.api" /> <module name="org.jboss.interceptor" /> </exclusions> </deployment> </jboss-deployment-structure> 2. Inside web.xml added these lines: <persistence-unit-ref> <persistence-unit-ref-name>persistence/persistenceUnit</persistence-unit-ref-name> <persistence-unit-name>persistenceUnit</persistence-unit-name> </persistence-unit-ref> 3. Inside applicationContext.xml, changed bean definition for entityManagerFactory by adding this property <property name="persistenceXmlLocation" value="classpath*:META-INF/persistence.xml"/> Now, do I still need to do something more than these which is obvious to you? Because, while I am trying to deploy it from eclipse indigo SR2, getting this 14:10:32,046 INFO [org.jboss.modules] JBoss Modules version 1.1.1.GA 14:10:33,054 INFO [org.jboss.msc] JBoss MSC version 1.0.2.GA 14:10:33,200 INFO [org.jboss.as] JBAS015899: JBoss AS 7.1.1.Final "Brontes" starting 14:10:36,375 INFO [org.xnio] XNIO Version 3.0.3.GA 14:10:36,384 INFO [org.jboss.as.server] JBAS015888: Creating http management service using socket-binding (management-http) 14:10:36,432 INFO [org.xnio.nio] XNIO NIO Implementation Version 3.0.3.GA 14:10:36,462 INFO [org.jboss.remoting] JBoss Remoting version 3.2.3.GA 14:10:36,587 INFO [org.jboss.as.logging] JBAS011502: Removing bootstrap log handlers 14:10:36,714 INFO [org.jboss.as.security] (ServerService Thread Pool -- 44) JBAS013101: Activating Security Subsystem 14:10:36,735 INFO [org.jboss.as.naming] (MSC service thread 1-2) JBAS011802: Starting Naming Service 14:10:37,043 INFO [org.jboss.as.mail.extension] (MSC service thread 1-6) JBAS015400: Bound mail session [java:jboss/mail/Default] 14:10:37,208 INFO [org.jboss.as.connector] (MSC service thread 1-7) JBAS010408: Starting JCA Subsystem (JBoss IronJacamar 1.0.9.Final) 14:10:37,295 INFO [org.jboss.as.security] (MSC service thread 1-7) JBAS013100: Current PicketBox version=4.0.7.Final 14:10:37,740 INFO [org.jboss.as.connector.subsystems.datasources] (ServerService Thread Pool -- 27) JBAS010403: Deploying JDBC-compliant driver class org.h2.Driver (version 1.3) 14:10:38,792 INFO [org.jboss.ws.common.management.AbstractServerConfig] (MSC service thread 1-3) JBoss Web Services - Stack CXF Server 4.0.2.GA 14:10:38,833 INFO [org.apache.coyote.http11.Http11Protocol] (MSC service thread 1-2) Starting Coyote HTTP/1.1 on http-localhost-127.0.0.1-8080 14:10:39,534 INFO [org.jboss.as.server.deployment.scanner] (MSC service thread 1-1) JBAS015012: Started FileSystemDeploymentService for directory F:\work\softwares\Application Servers\JBoss\jboss-as-7.1.1.Final\jboss-as-7.1.1.Final\standalone\deployments 14:10:39,618 INFO [org.jboss.as.remoting] (MSC service thread 1-3) JBAS017100: Listening on localhost/127.0.0.1:4447 14:10:39,623 INFO [org.jboss.as.remoting] (MSC service thread 1-2) JBAS017100: Listening on /127.0.0.1:9999 14:10:39,698 
INFO [org.jboss.as.server.deployment.scanner] (DeploymentScanner-threads - 1) JBAS015014: Re-attempting failed deployment treasury.war 14:10:39,706 INFO [org.jboss.as.server.deployment.scanner] (DeploymentScanner-threads - 1) JBAS015003: Found com.misl.treasury.ui.war in deployment directory. To trigger deployment create a file called com.misl.treasury.ui.war.dodeploy 14:10:40,105 INFO [org.jboss.as.connector.subsystems.datasources] (MSC service thread 1-2) JBAS010400: Bound data source [java:jboss/datasources/ExampleDS] 14:10:40,399 INFO [org.jboss.as.server.deployment] (MSC service thread 1-1) JBAS015876: Starting deployment of "com.misl.treasury.ui.war" 14:10:40,405 INFO [org.jboss.as.server.deployment] (MSC service thread 1-7) JBAS015876: Starting deployment of "treasury.war" 14:10:55,283 WARN [org.jboss.as.server.deployment] (MSC service thread 1-7) JBAS015893: Encountered invalid class name 'com.sun.faces.vendor.Tomcat6InjectionProvider:org.apache.catalina.util.DefaultAnnotationProcessor' for service type 'com.sun.faces.spi.injectionprovider' 14:10:55,289 WARN [org.jboss.as.server.deployment] (MSC service thread 1-7) JBAS015893: Encountered invalid class name 'com.sun.faces.vendor.Jetty6InjectionProvider:org.mortbay.jetty.plus.annotation.InjectionCollection' for service type 'com.sun.faces.spi.injectionprovider' 14:10:55,428 INFO [org.jboss.as.jpa] (MSC service thread 1-7) JBAS011401: Read persistence.xml for persistenceUnit 14:10:55,843 WARN [org.jboss.as.server.deployment] (MSC service thread 1-4) JBAS015893: Encountered invalid class name 'com.sun.faces.vendor.Tomcat6InjectionProvider:org.apache.catalina.util.DefaultAnnotationProcessor' for service type 'com.sun.faces.spi.injectionprovider' 14:10:55,849 WARN [org.jboss.as.server.deployment] (MSC service thread 1-4) JBAS015893: Encountered invalid class name 'com.sun.faces.vendor.Jetty6InjectionProvider:org.mortbay.jetty.plus.annotation.InjectionCollection' for service type 'com.sun.faces.spi.injectionprovider' 14:10:56,010 ERROR [org.jboss.msc.service.fail] (MSC service thread 1-2) MSC00001: Failed to start service jboss.deployment.unit."com.misl.treasury.ui.war".POST_MODULE: org.jboss.msc.service.StartException in service jboss.deployment.unit."com.misl.treasury.ui.war".POST_MODULE: Failed to process phase POST_MODULE of deployment "com.misl.treasury.ui.war" at org.jboss.as.server.deployment.DeploymentUnitPhaseService.start(DeploymentUnitPhaseService.java:119) [jboss-as-server-7.1.1.Final.jar:7.1.1.Final] at org.jboss.msc.service.ServiceControllerImpl$StartTask.startService(ServiceControllerImpl.java:1811) [jboss-msc-1.0.2.GA.jar:1.0.2.GA] at org.jboss.msc.service.ServiceControllerImpl$StartTask.run(ServiceControllerImpl.java:1746) [jboss-msc-1.0.2.GA.jar:1.0.2.GA] at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886) [rt.jar:1.6.0_23] at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908) [rt.jar:1.6.0_23] at java.lang.Thread.run(Thread.java:662) [rt.jar:1.6.0_23] Caused by: java.lang.NoClassDefFoundError: javax/servlet/ServletOutputStream at java.lang.Class.getDeclaredConstructors0(Native Method) [rt.jar:1.6.0_23] at java.lang.Class.privateGetDeclaredConstructors(Class.java:2389) [rt.jar:1.6.0_23] at java.lang.Class.getConstructor0(Class.java:2699) [rt.jar:1.6.0_23] at java.lang.Class.getConstructor(Class.java:1657) [rt.jar:1.6.0_23] at org.jboss.as.web.deployment.jsf.JsfManagedBeanProcessor.deploy(JsfManagedBeanProcessor.java:108) at 
org.jboss.as.server.deployment.DeploymentUnitPhaseService.start(DeploymentUnitPhaseService.java:113) [jboss-as-server-7.1.1.Final.jar:7.1.1.Final] ... 5 more Caused by: java.lang.ClassNotFoundException: javax.servlet.ServletOutputStream from [Module "deployment.com.misl.treasury.ui.war:main" from Service Module Loader] at org.jboss.modules.ModuleClassLoader.findClass(ModuleClassLoader.java:190) at org.jboss.modules.ConcurrentClassLoader.performLoadClassUnchecked(ConcurrentClassLoader.java:468) at org.jboss.modules.ConcurrentClassLoader.performLoadClassChecked(ConcurrentClassLoader.java:456) at org.jboss.modules.ConcurrentClassLoader.performLoadClassChecked(ConcurrentClassLoader.java:423) at org.jboss.modules.ConcurrentClassLoader.performLoadClass(ConcurrentClassLoader.java:398) at org.jboss.modules.ConcurrentClassLoader.loadClass(ConcurrentClassLoader.java:120) ... 11 more 14:10:56,628 ERROR [org.jboss.msc.service.fail] (MSC service thread 1-4) MSC00001: Failed to start service jboss.deployment.unit."treasury.war".PARSE: org.jboss.msc.service.StartException in service jboss.deployment.unit."treasury.war".PARSE: Failed to process phase PARSE of deployment "treasury.war" at org.jboss.as.server.deployment.DeploymentUnitPhaseService.start(DeploymentUnitPhaseService.java:119) [jboss-as-server-7.1.1.Final.jar:7.1.1.Final] at org.jboss.msc.service.ServiceControllerImpl$StartTask.startService(ServiceControllerImpl.java:1811) [jboss-msc-1.0.2.GA.jar:1.0.2.GA] at org.jboss.msc.service.ServiceControllerImpl$StartTask.run(ServiceControllerImpl.java:1746) [jboss-msc-1.0.2.GA.jar:1.0.2.GA] at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886) [rt.jar:1.6.0_23] at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908) [rt.jar:1.6.0_23] at java.lang.Thread.run(Thread.java:662) [rt.jar:1.6.0_23] Caused by: org.jboss.as.server.deployment.DeploymentUnitProcessingException: SAXException parsing vfs:/F:/work/softwares/Application Servers/JBoss/jboss-as-7.1.1.Final/jboss-as-7.1.1.Final/bin/content/treasury.war/WEB-INF/beans.xml at org.jboss.as.weld.deployment.BeansXmlParser.parse(BeansXmlParser.java:100) at org.jboss.as.weld.deployment.processors.BeansXmlProcessor.parseBeansXml(BeansXmlProcessor.java:133) at org.jboss.as.weld.deployment.processors.BeansXmlProcessor.deploy(BeansXmlProcessor.java:97) at org.jboss.as.server.deployment.DeploymentUnitPhaseService.start(DeploymentUnitPhaseService.java:113) [jboss-as-server-7.1.1.Final.jar:7.1.1.Final] ... 5 more Caused by: org.xml.sax.SAXParseException: Premature end of file. 
at org.apache.xerces.util.ErrorHandlerWrapper.createSAXParseException(ErrorHandlerWrapper.java:196) at org.apache.xerces.util.ErrorHandlerWrapper.fatalError(ErrorHandlerWrapper.java:175) at org.apache.xerces.impl.XMLErrorReporter.reportError(XMLErrorReporter.java:394) at org.apache.xerces.impl.XMLErrorReporter.reportError(XMLErrorReporter.java:322) at org.apache.xerces.impl.XMLErrorReporter.reportError(XMLErrorReporter.java:281) at org.apache.xerces.impl.XMLScanner.reportFatalError(XMLScanner.java:1459) at org.apache.xerces.impl.XMLDocumentScannerImpl$PrologDispatcher.dispatch(XMLDocumentScannerImpl.java:903) at org.apache.xerces.impl.XMLDocumentFragmentScannerImpl.scanDocument(XMLDocumentFragmentScannerImpl.java:324) at org.apache.xerces.parsers.XML11Configuration.parse(XML11Configuration.java:845) at org.apache.xerces.parsers.XML11Configuration.parse(XML11Configuration.java:768) at org.apache.xerces.parsers.XMLParser.parse(XMLParser.java:108) at org.apache.xerces.parsers.AbstractSAXParser.parse(AbstractSAXParser.java:1196) at org.apache.xerces.jaxp.SAXParserImpl$JAXPSAXParser.parse(SAXParserImpl.java:555) at org.apache.xerces.jaxp.SAXParserImpl.parse(SAXParserImpl.java:289) at org.jboss.as.weld.deployment.BeansXmlParser.parse(BeansXmlParser.java:94) ... 8 more 14:10:56,670 INFO [org.jboss.as] (MSC service thread 1-1) JBAS015951: Admin console listening on http://127.0.0.1:9990 14:10:56,672 ERROR [org.jboss.as] (MSC service thread 1-1) JBAS015875: JBoss AS 7.1.1.Final "Brontes" started (with errors) in 25527ms - Started 143 of 222 services (2 services failed or missing dependencies, 76 services are passive or on-demand) 14:10:56,671 INFO [org.jboss.as.server] (DeploymentScanner-threads - 2) JBAS015871: Deploy of deployment "treasury.war" was rolled back with no failure message 14:10:56,679 INFO [org.jboss.as.server] (DeploymentScanner-threads - 2) JBAS015870: Deploy of deployment "com.misl.treasury.ui.war" was rolled back with failure message {"JBAS014671: Failed services" => {"jboss.deployment.unit.\"com.misl.treasury.ui.war\".POST_MODULE" => "org.jboss.msc.service.StartException in service jboss.deployment.unit.\"com.misl.treasury.ui.war\".POST_MODULE: Failed to process phase POST_MODULE of deployment \"com.misl.treasury.ui.war\""}} 14:10:56,851 INFO [org.jboss.as.server.deployment] (MSC service thread 1-8) JBAS015877: Stopped deployment com.misl.treasury.ui.war in 172ms 14:10:57,068 INFO [org.jboss.as.server.deployment] (MSC service thread 1-7) JBAS015877: Stopped deployment treasury.war in 395ms 14:10:57,070 INFO [org.jboss.as.controller] (DeploymentScanner-threads - 2) JBAS014774: Service status report JBAS014777: Services which failed to start: service jboss.deployment.unit."treasury.war".PARSE: org.jboss.msc.service.StartException in service jboss.deployment.unit."treasury.war".PARSE: Failed to process phase PARSE of deployment "treasury.war" service jboss.deployment.unit."com.misl.treasury.ui.war".POST_MODULE: org.jboss.msc.service.StartException in service jboss.deployment.unit."com.misl.treasury.ui.war".POST_MODULE: Failed to process phase POST_MODULE of deployment "com.misl.treasury.ui.war" 14:10:57,079 ERROR [org.jboss.as.server.deployment.scanner] (DeploymentScanner-threads - 1) {"JBAS014653: Composite operation failed and was rolled back. 
Steps that failed:" => {"Operation step-2" => {"JBAS014671: Failed services" => {"jboss.deployment.unit.\"com.misl.treasury.ui.war\".POST_MODULE" => "org.jboss.msc.service.StartException in service jboss.deployment.unit.\"com.misl.treasury.ui.war\".POST_MODULE: Failed to process phase POST_MODULE of deployment \"com.misl.treasury.ui.war\""}}}} 14:10:57,087 ERROR [org.jboss.as.server.deployment.scanner] (DeploymentScanner-threads - 1) JBAS014654: Composite operation was rolled back And by the way, JBOSS_HOME\modules\javax\api\main folder only contains a module.xml, no jars. Tried to add jboss-servlet-api_3.0_spec-1.0.0.Final.jar file in that directory and updated module.xml too. But, it shows a very long trail of stacktraces :) I have even removed beans.xml. No change. I have never tried JBoss before. Any help would be highly appreciated.
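Two separate failures appear in this log, so a hedged reading of each: the treasury.war PARSE failure is the SAXParseException "Premature end of file" on WEB-INF/beans.xml, which normally means the file is empty or otherwise not well-formed XML; and the com.misl.treasury.ui.war POST_MODULE failure (ClassNotFoundException for javax.servlet.ServletOutputStream) is consistent with javaee.api being listed in the exclusions of the jboss-deployment-structure.xml shown above, which strips the Servlet API from the deployment. For illustration only, a minimal well-formed CDI 1.0 beans.xml looks like this:

    <?xml version="1.0" encoding="UTF-8"?>
    <!-- An element-free but well-formed beans.xml; a zero-byte file can trigger
         the "Premature end of file" parse error seen in the PARSE phase. -->
    <beans xmlns="http://java.sun.com/xml/ns/javaee"
           xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
           xsi:schemaLocation="http://java.sun.com/xml/ns/javaee
                               http://java.sun.com/xml/ns/javaee/beans_1_0.xsd">
    </beans>

Whether the javaee.api and javax.persistence.api exclusions should remain depends on how much of the server-provided stack the application really bundles; that part is a judgement call rather than something the log alone can settle.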

    Read the article

  • How to display data stored in core data in a table view?

    - by Dipanjan Dutta
    Hello All, I have developed a core data model for my application. I need to display the saved data into a table view. For my app I have selected split view controller. I am writing down my codes below. Please help me in this regard and write me the code that needs to be added. This is very important as my continuation in my company depends on this. #import "RootViewController.h" #import "DetailViewController.h" #import "AddViewController.h" #import "EmployeeDetailsAppDelegate.h" /* This template does not ensure user interface consistency during editing operations in the table view. You must implement appropriate methods to provide the user experience you require. */ @interface RootViewController () - (void)configureCell:(UITableViewCell *)cell atIndexPath:(NSIndexPath *)indexPath; @end @implementation RootViewController @synthesize detailViewController, fetchedResultsController, managedObjectContext, results, empName; #pragma mark - #pragma mark View lifecycle - (void)viewDidLoad { results = [[NSMutableDictionary alloc]init]; [results setObject:empName.text forKey:@"EmployeeName"]; [self.tableView reloadData]; [super viewDidLoad]; } /* - (void)viewWillAppear:(BOOL)animated { [super viewWillAppear:animated]; } */ /* - (void)viewDidAppear:(BOOL)animated { [super viewDidAppear:animated]; } */ /* - (void)viewWillDisappear:(BOOL)animated { [super viewWillDisappear:animated]; } */ /* - (void)viewDidDisappear:(BOOL)animated { [super viewDidDisappear:animated]; } */ - (BOOL)shouldAutorotateToInterfaceOrientation:(UIInterfaceOrientation)interfaceOrientation { // Ensure that the view controller supports rotation and that the split view can therefore show in both portrait and landscape. return YES; } - (void)configureCell:(UITableViewCell *)cell atIndexPath:(NSIndexPath *)indexPath { NSManagedObject *managedObject = [self.fetchedResultsController objectAtIndexPath:indexPath]; cell.textLabel.text = [[managedObject valueForKey:@"EmployeeName"] description]; } #pragma mark - #pragma mark Add a new object - (void)insertNewObject:(id)sender { AddViewController *add = [[AddViewController alloc]initWithNibName:@"AddViewController" bundle:nil]; self.modalPresentationStyle = UIModalPresentationFormSheet; add.wantsFullScreenLayout = NO; [self presentModalViewController:add animated:YES]; [add release]; } #pragma mark - #pragma mark Table view data source - (NSInteger)numberOfSectionsInTableView:(UITableView *)tableView { return 1; } - (NSInteger)tableView:(UITableView *)tableView numberOfRowsInSection:(NSInteger)section { return 1; } - (UITableViewCell *)tableView:(UITableView *)tableView cellForRowAtIndexPath:(NSIndexPath *)indexPath { static NSString *CellIdentifier = @"Cell"; UITableViewCell *cell = [tableView dequeueReusableCellWithIdentifier:CellIdentifier]; if (cell == nil) { cell = [[[UITableViewCell alloc] initWithStyle:UITableViewCellStyleDefault reuseIdentifier:CellIdentifier] autorelease]; } // Configure the cell. NSManagedObject *managedObject = [self.fetchedResultsController objectAtIndexPath:indexPath]; cell.textLabel.text = [[managedObject valueForKey:@"EmployeeName"] description]; return cell; } - (void)tableView:(UITableView *)tableView commitEditingStyle:(UITableViewCellEditingStyle)editingStyle forRowAtIndexPath:(NSIndexPath *)indexPath { if (editingStyle == UITableViewCellEditingStyleDelete) { // Delete the managed object. 
NSManagedObject *objectToDelete = [self.fetchedResultsController objectAtIndexPath:indexPath]; if (self.detailViewController.detailItem == objectToDelete) { self.detailViewController.detailItem = nil; } NSManagedObjectContext *context = [self.fetchedResultsController managedObjectContext]; [context deleteObject:objectToDelete]; NSError *error; if (![context save:&error]) { /* Replace this implementation with code to handle the error appropriately. abort() causes the application to generate a crash log and terminate. You should not use this function in a shipping application, although it may be useful during development. If it is not possible to recover from the error, display an alert panel that instructs the user to quit the application by pressing the Home button. */ NSLog(@"Unresolved error %@, %@", error, [error userInfo]); abort(); } } } - (BOOL)tableView:(UITableView *)tableView canMoveRowAtIndexPath:(NSIndexPath *)indexPath { // The table view should not be re-orderable. return NO; } #pragma mark - #pragma mark Table view delegate - (void)tableView:(UITableView *)aTableView didSelectRowAtIndexPath:(NSIndexPath *)indexPath { // Set the detail item in the detail view controller. NSManagedObject *selectedObject = [self.fetchedResultsController objectAtIndexPath:indexPath]; self.detailViewController.detailItem = selectedObject; } #pragma mark - #pragma mark Fetched results controller - (NSFetchedResultsController *)fetchedResultsController { if (fetchedResultsController != nil) { return fetchedResultsController; } /* Set up the fetched results controller. */ // Create the fetch request for the entity. NSFetchRequest *fetchRequest = [[NSFetchRequest alloc] init]; // Edit the entity name as appropriate. NSEntityDescription *entity = [NSEntityDescription entityForName:@"Details" inManagedObjectContext:managedObjectContext]; [fetchRequest setEntity:entity]; // Set the batch size to a suitable number. [fetchRequest setFetchBatchSize:20]; // Edit the sort key as appropriate. NSSortDescriptor *sortDescriptor = [[NSSortDescriptor alloc] initWithKey:@"EmployeeName" ascending:NO]; NSArray *sortDescriptors = [[NSArray alloc] initWithObjects:sortDescriptor, nil]; [fetchRequest setSortDescriptors:sortDescriptors]; // Edit the section name key path and cache name if appropriate. // nil for section name key path means "no sections". 
NSFetchedResultsController *aFetchedResultsController = [[NSFetchedResultsController alloc] initWithFetchRequest:fetchRequest managedObjectContext:managedObjectContext sectionNameKeyPath:nil cacheName:@"Root"]; aFetchedResultsController.delegate = self; self.fetchedResultsController = aFetchedResultsController; [aFetchedResultsController release]; [fetchRequest release]; [sortDescriptor release]; [sortDescriptors release]; return fetchedResultsController; } #pragma mark - #pragma mark Fetched results controller delegate - (void)controllerWillChangeContent:(NSFetchedResultsController *)controller { [self.tableView beginUpdates]; } - (void)controller:(NSFetchedResultsController *)controller didChangeSection:(id <NSFetchedResultsSectionInfo>)sectionInfo atIndex:(NSUInteger)sectionIndex forChangeType:(NSFetchedResultsChangeType)type { switch(type) { case NSFetchedResultsChangeInsert: [self.tableView insertSections:[NSIndexSet indexSetWithIndex:sectionIndex] withRowAnimation:UITableViewRowAnimationFade]; break; case NSFetchedResultsChangeDelete: [self.tableView deleteSections:[NSIndexSet indexSetWithIndex:sectionIndex] withRowAnimation:UITableViewRowAnimationFade]; break; } } - (void)controller:(NSFetchedResultsController *)controller didChangeObject:(id)anObject atIndexPath:(NSIndexPath *)indexPath forChangeType:(NSFetchedResultsChangeType)type newIndexPath:(NSIndexPath *)newIndexPath { UITableView *tableView = self.tableView; switch(type) { case NSFetchedResultsChangeInsert: [tableView insertRowsAtIndexPaths:[NSArray arrayWithObject:newIndexPath] withRowAnimation:UITableViewRowAnimationFade]; break; case NSFetchedResultsChangeDelete: [tableView deleteRowsAtIndexPaths:[NSArray arrayWithObject:indexPath] withRowAnimation:UITableViewRowAnimationFade]; break; case NSFetchedResultsChangeUpdate: [self configureCell:[tableView cellForRowAtIndexPath:indexPath] atIndexPath:indexPath]; break; case NSFetchedResultsChangeMove: [tableView deleteRowsAtIndexPaths:[NSArray arrayWithObject:indexPath] withRowAnimation:UITableViewRowAnimationFade]; [tableView insertRowsAtIndexPaths:[NSArray arrayWithObject:newIndexPath]withRowAnimation:UITableViewRowAnimationFade]; break; } } - (void)controllerDidChangeContent:(NSFetchedResultsController *)controller { [self.tableView endUpdates]; } #pragma mark - #pragma mark Memory management - (void)didReceiveMemoryWarning { // Releases the view if it doesn't have a superview. [super didReceiveMemoryWarning]; // Relinquish ownership any cached data, images, etc. that aren't in use. } - (void)viewDidUnload { // Relinquish ownership of anything that can be recreated in viewDidLoad or on demand. // For example: self.myOutlet = nil; } - (void)dealloc { [detailViewController release]; [fetchedResultsController release]; [managedObjectContext release]; [super dealloc]; } @end // // AddViewController.m // EmployeeDetails // // Created by Dipanjan on 15/02/11. // Copyright 2011 __MyCompanyName__. All rights reserved. // #import "AddViewController.h" #import "EmployeeDetailsAppDelegate.h" #import "RootViewController.h" @implementation AddViewController @synthesize empName; @synthesize empID; @synthesize empDepartment; @synthesize backButton; // The designated initializer. Override if you create the controller programmatically and want to perform customization that is not appropriate for viewDidLoad. 
/* - (id)initWithNibName:(NSString *)nibNameOrNil bundle:(NSBundle *)nibBundleOrNil { self = [super initWithNibName:nibNameOrNil bundle:nibBundleOrNil]; if (self) { // Custom initialization. } return self; } */ /* // Implement viewDidLoad to do additional setup after loading the view, typically from a nib. - (void)viewDidLoad { [super viewDidLoad]; } */ -(void)saveDetails{ EmployeeDetailsAppDelegate *appDelegate = [[UIApplication sharedApplication]delegate]; NSManagedObjectContext *context = [appDelegate managedObjectContext]; NSManagedObject *newDetails; newDetails = [NSEntityDescription insertNewObjectForEntityForName:@"Details" inManagedObjectContext:context]; [newDetails setValue:empID.text forKey:@"EmployeeID"]; [newDetails setValue:empName.text forKey:@"EmployeeName"]; [newDetails setValue:empDepartment.text forKey:@"EmployeeDepartment"]; empID.text = @""; empName.text = @""; empDepartment.text = @""; NSLog(@"%@........----->>>...", newDetails); NSError *error; [context save:&error]; [self dismissModalViewControllerAnimated:YES]; } -(void)findDetails { EmployeeDetailsAppDelegate *appDelegate = [[UIApplication sharedApplication]delegate]; NSManagedObjectContext *context = [appDelegate managedObjectContext]; NSEntityDescription *entityDesc = [NSEntityDescription entityForName:@"Details" inManagedObjectContext:context]; NSFetchRequest *request = [[NSFetchRequest alloc]init]; [request setEntity:entityDesc]; NSPredicate *pred = [NSPredicate predicateWithFormat:@"(EmployeeName = %@)", empName.text]; [request setPredicate:pred]; NSManagedObject *matches = nil; NSError *error; NSArray *objects = [context executeFetchRequest:request error:&error]; if ([objects count] == 0) { } else { matches = [objects objectAtIndex:0]; empID.text = [matches valueForKey:@"EmployeeID"]; empDepartment.text = [matches valueForKey:@"EmployeeDepartment"]; } [request release]; [self dismissModalViewControllerAnimated:YES]; } - (BOOL)shouldAutorotateToInterfaceOrientation:(UIInterfaceOrientation)interfaceOrientation { // Overriden to allow any orientation. return YES; } - (void)didReceiveMemoryWarning { // Releases the view if it doesn't have a superview. [super didReceiveMemoryWarning]; // Release any cached data, images, etc. that aren't in use. } - (void)viewDidUnload { self.empName = nil; self.empID = nil; self.empDepartment = nil; [super viewDidUnload]; // Release any retained subviews of the main view. // e.g. self.myOutlet = nil; } - (void)dealloc { [empID release]; [empName release]; [empDepartment release]; [super dealloc]; } @end Please let me know the answer as soon as possible. Thank you. Regards, Dipanjan
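A hedged observation on the code above: fetchedResultsController is built lazily but performFetch: is never called, and the table view data source hard-codes a single section and a single row, so nothing saved in Core Data can show up. A minimal sketch of the usual NSFetchedResultsController pattern (standard API calls; the Details entity and EmployeeName key are the ones already used in the question) would be:

    // Sketch: perform the fetch when the view loads, then let the fetched
    // results controller's section info drive the table view data source.
    - (void)viewDidLoad {
        [super viewDidLoad];
        NSError *error = nil;
        if (![self.fetchedResultsController performFetch:&error]) {
            NSLog(@"Fetch failed: %@, %@", error, [error userInfo]);
        }
    }

    - (NSInteger)numberOfSectionsInTableView:(UITableView *)tableView {
        return [[self.fetchedResultsController sections] count];
    }

    - (NSInteger)tableView:(UITableView *)tableView numberOfRowsInSection:(NSInteger)section {
        id <NSFetchedResultsSectionInfo> sectionInfo =
            [[self.fetchedResultsController sections] objectAtIndex:section];
        return [sectionInfo numberOfObjects];
    }

With that change the existing configureCell:/cellForRowAtIndexPath: code, which already reads EmployeeName from the managed object, should start displaying the saved rows, and objects inserted from AddViewController should appear through the fetched results controller delegate callbacks that are already implemented.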

    Read the article

  • Multiple errors while adding searching to app

    - by Thijs
    Hi, I'm fairly new at iOS programming, but I managed to make a (in my opinion quite nice) app for the app store. The main function for my next update will be a search option for the app. I followed a tutorial I found on the internet and adapted it to fit my app. I got back quite some errors, most of which I managed to fix. But now I'm completely stuck and don't know what to do next. I know it's a lot to ask, but if anyone could take a look at the code underneath, it would be greatly appreciated. Thanks! // // RootViewController.m // GGZ info // // Created by Thijs Beckers on 29-12-10. // Copyright 2010 __MyCompanyName__. All rights reserved. // #import "RootViewController.h" // Always import the next level view controller header(s) #import "CourseCodes.h" @implementation RootViewController @synthesize dataForCurrentLevel, tableViewData; #pragma mark - #pragma mark View lifecycle // OVERRIDE METHOD - (void)viewDidLoad { [super viewDidLoad]; // Go get the data for the app... // Create a custom string that points to the right location in the app bundle NSString *pathToPlist = [[NSBundle mainBundle] pathForResource:@"SCSCurriculum" ofType:@"plist"]; // Now, place the result into the dictionary property // Note that we must retain it to keep it around dataForCurrentLevel = [[NSDictionary dictionaryWithContentsOfFile:pathToPlist] retain]; // Place the top level keys (the program codes) in an array for the table view // Note that we must retain it to keep it around // NSDictionary has a really useful instance method - allKeys // The allKeys method returns an array with all of the keys found in (this level of) a dictionary tableViewData = [[[dataForCurrentLevel allKeys] sortedArrayUsingSelector:@selector(caseInsensitiveCompare:)] retain]; //Initialize the copy array. copyListOfItems = [[NSMutableArray alloc] init]; // Set the nav bar title self.title = @"GGZ info"; //Add the search bar self.tableView.tableHeaderView = searchBar; searchBar.autocorrectionType = UITextAutocorrectionTypeNo; searching = NO; letUserSelectRow = YES; } /* - (void)viewWillAppear:(BOOL)animated { [super viewWillAppear:animated]; } */ /* - (void)viewDidAppear:(BOOL)animated { [super viewDidAppear:animated]; } */ /* - (void)viewWillDisappear:(BOOL)animated { [super viewWillDisappear:animated]; } */ /* - (void)viewDidDisappear:(BOOL)animated { [super viewDidDisappear:animated]; } */ //RootViewController.m - (void) searchBarTextDidBeginEditing:(UISearchBar *)theSearchBar { searching = YES; letUserSelectRow = NO; self.tableView.scrollEnabled = NO; //Add the done button. self.navigationItem.rightBarButtonItem = [[[UIBarButtonItem alloc] initWithBarButtonSystemItem:UIBarButtonSystemItemDone target:self action:@selector(doneSearching_Clicked:)] autorelease]; } - (NSIndexPath *)tableView :(UITableView *)theTableView willSelectRowAtIndexPath:(NSIndexPath *)indexPath { if(letUserSelectRow) return indexPath; else return nil; } //RootViewController.m - (void)searchBar:(UISearchBar *)theSearchBar textDidChange:(NSString *)searchText { //Remove all objects first. 
[copyListOfItems removeAllObjects]; if([searchText length] &gt; 0) { searching = YES; letUserSelectRow = YES; self.tableView.scrollEnabled = YES; [self searchTableView]; } else { searching = NO; letUserSelectRow = NO; self.tableView.scrollEnabled = NO; } [self.tableView reloadData]; } //RootViewController.m - (void) searchBarSearchButtonClicked:(UISearchBar *)theSearchBar { [self searchTableView]; } - (void) searchTableView { NSString *searchText = searchBar.text; NSMutableArray *searchArray = [[NSMutableArray alloc] init]; for (NSDictionary *dictionary in listOfItems) { NSArray *array = [dictionary objectForKey:@"Countries"]; [searchArray addObjectsFromArray:array]; } for (NSString *sTemp in searchArray) { NSRange titleResultsRange = [sTemp rangeOfString:searchText options:NSCaseInsensitiveSearch]; if (titleResultsRange.length &gt; 0) [copyListOfItems addObject:sTemp]; } [searchArray release]; searchArray = nil; } //RootViewController.m - (void) doneSearching_Clicked:(id)sender { searchBar.text = @""; [searchBar resignFirstResponder]; letUserSelectRow = YES; searching = NO; self.navigationItem.rightBarButtonItem = nil; self.tableView.scrollEnabled = YES; [self.tableView reloadData]; } //RootViewController.m - (NSInteger)numberOfSectionsInTableView:(UITableView *)tableView { if (searching) return 1; else return [listOfItems count]; } // Customize the number of rows in the table view. - (NSInteger)tableView:(UITableView *)tableView numberOfRowsInSection:(NSInteger)section { if (searching) return [copyListOfItems count]; else { //Number of rows it should expect should be based on the section NSDictionary *dictionary = [listOfItems objectAtIndex:section]; NSArray *array = [dictionary objectForKey:@"Countries"]; return [array count]; } } - (NSString *)tableView:(UITableView *)tableView titleForHeaderInSection:(NSInteger)section { if(searching) return @""; if(section == 0) return @"Countries to visit"; else return @"Countries visited"; } // Customize the appearance of table view cells. - (UITableViewCell *)tableView:(UITableView *)tableView cellForRowAtIndexPath:(NSIndexPath *)indexPath { static NSString *CellIdentifier = @"Cell"; UITableViewCell *cell = [tableView dequeueReusableCellWithIdentifier:CellIdentifier]; if (cell == nil) { cell = [[[UITableViewCell alloc] initWithFrame:CGRectZero reuseIdentifier:CellIdentifier] autorelease]; } // Set up the cell... if(searching) cell.text = [copyListOfItems objectAtIndex:indexPath.row]; else { //First get the dictionary object NSDictionary *dictionary = [listOfItems objectAtIndex:indexPath.section]; NSArray *array = [dictionary objectForKey:@"Countries"]; NSString *cellValue = [array objectAtIndex:indexPath.row]; cell.text = cellValue; } return cell; } - (void)tableView:(UITableView *)tableView didSelectRowAtIndexPath:(NSIndexPath *)indexPath { //Get the selected country NSString *selectedCountry = nil; if(searching) selectedCountry = [copyListOfItems objectAtIndex:indexPath.row]; else { NSDictionary *dictionary = [listOfItems objectAtIndex:indexPath.section]; NSArray *array = [dictionary objectForKey:@"Countries"]; selectedCountry = [array objectAtIndex:indexPath.row]; } //Initialize the detail view controller and display it. 
DetailViewController *dvController = [[DetailViewController alloc] initWithNibName:@"DetailView" bundle:[NSBundle mainBundle]]; dvController.selectedCountry = selectedCountry; [self.navigationController pushViewController:dvController animated:YES]; [dvController release]; dvController = nil; } //RootViewController.m - (void) searchBarTextDidBeginEditing:(UISearchBar *)theSearchBar { //Add the overlay view. if(ovController == nil) ovController = [[OverlayViewController alloc] initWithNibName:@"OverlayView" bundle:[NSBundle mainBundle]]; CGFloat yaxis = self.navigationController.navigationBar.frame.size.height; CGFloat width = self.view.frame.size.width; CGFloat height = self.view.frame.size.height; //Parameters x = origion on x-axis, y = origon on y-axis. CGRect frame = CGRectMake(0, yaxis, width, height); ovController.view.frame = frame; ovController.view.backgroundColor = [UIColor grayColor]; ovController.view.alpha = 0.5; ovController.rvController = self; [self.tableView insertSubview:ovController.view aboveSubview:self.parentViewController.view]; searching = YES; letUserSelectRow = NO; self.tableView.scrollEnabled = NO; //Add the done button. self.navigationItem.rightBarButtonItem = [[[UIBarButtonItem alloc] initWithBarButtonSystemItem:UIBarButtonSystemItemDone target:self action:@selector(doneSearching_Clicked:)] autorelease]; } // Override to allow orientations other than the default portrait orientation. - (BOOL)shouldAutorotateToInterfaceOrientation:(UIInterfaceOrientation)interfaceOrientation { // Return YES for supported orientations. return YES; } - (void)didReceiveMemoryWarning { // Releases the view if it doesn't have a superview. [super didReceiveMemoryWarning]; // Relinquish ownership any cached data, images, etc that aren't in use. } - (void)viewDidUnload { // Relinquish ownership of anything that can be recreated in viewDidLoad or on demand. // For example: self.myOutlet = nil; } - (void)dealloc { [dataForCurrentLevel release]; [tableViewData release]; [super dealloc]; } #pragma mark - #pragma mark Table view methods // DATA SOURCE METHOD - (NSInteger)numberOfSectionsInTableView:(UITableView *)tableView { return 1; } // DATA SOURCE METHOD - (NSInteger)tableView:(UITableView *)tableView numberOfRowsInSection:(NSInteger)section { // How many rows should be displayed? return [tableViewData count]; } // DELEGATE METHOD - (UITableViewCell *)tableView:(UITableView *)tableView cellForRowAtIndexPath:(NSIndexPath *)indexPath { // Cell reuse block static NSString *CellIdentifier = @"Cell"; UITableViewCell *cell = [tableView dequeueReusableCellWithIdentifier:CellIdentifier]; if (cell == nil) { cell = [[[UITableViewCell alloc] initWithStyle:UITableViewCellStyleDefault reuseIdentifier:CellIdentifier] autorelease]; } // Configure the cell's contents - we want the program code, and a disclosure indicator cell.textLabel.text = [tableViewData objectAtIndex:indexPath.row]; cell.accessoryType = UITableViewCellAccessoryDisclosureIndicator; return cell; } //RootViewController.m - (void)searchBar:(UISearchBar *)theSearchBar textDidChange:(NSString *)searchText { //Remove all objects first. 
[copyListOfItems removeAllObjects]; if([searchText length] &gt; 0) { [ovController.view removeFromSuperview]; searching = YES; letUserSelectRow = YES; self.tableView.scrollEnabled = YES; [self searchTableView]; } else { [self.tableView insertSubview:ovController.view aboveSubview:self.parentViewController.view]; searching = NO; letUserSelectRow = NO; self.tableView.scrollEnabled = NO; } [self.tableView reloadData]; } //RootViewController.m - (void) doneSearching_Clicked:(id)sender { searchBar.text = @""; [searchBar resignFirstResponder]; letUserSelectRow = YES; searching = NO; self.navigationItem.rightBarButtonItem = nil; self.tableView.scrollEnabled = YES; [ovController.view removeFromSuperview]; [ovController release]; ovController = nil; [self.tableView reloadData]; } // DELEGATE METHOD - (void)tableView:(UITableView *)tableView didSelectRowAtIndexPath:(NSIndexPath *)indexPath { // In any navigation-based application, you follow the same pattern: // 1. Create an instance of the next-level view controller // 2. Configure that instance, with settings and data if necessary // 3. Push it on to the navigation stack // In this situation, the next level view controller is another table view // Therefore, we really don't need a nib file (do you see a CourseCodes.xib? no, there isn't one) // So, a UITableViewController offers an initializer that programmatically creates a view // 1. Create the next level view controller // ======================================== CourseCodes *nextVC = [[CourseCodes alloc] initWithStyle:UITableViewStylePlain]; // 2. Configure it... // ================== // It needs data from the dictionary - the "value" for the current "key" (that was tapped) NSDictionary *nextLevelDictionary = [dataForCurrentLevel objectForKey:[tableViewData objectAtIndex:indexPath.row]]; nextVC.dataForCurrentLevel = nextLevelDictionary; // Set the view title nextVC.title = [tableViewData objectAtIndex:indexPath.row]; // 3. Push it on to the navigation stack // ===================================== [self.navigationController pushViewController:nextVC animated:YES]; // Memory manage it [nextVC release]; } /* // Override to support conditional editing of the table view. - (BOOL)tableView:(UITableView *)tableView canEditRowAtIndexPath:(NSIndexPath *)indexPath { // Return NO if you do not want the specified item to be editable. return YES; } */ /* // Override to support editing the table view. - (void)tableView:(UITableView *)tableView commitEditingStyle:(UITableViewCellEditingStyle)editingStyle forRowAtIndexPath:(NSIndexPath *)indexPath { if (editingStyle == UITableViewCellEditingStyleDelete) { // Delete the row from the data source. [tableView deleteRowsAtIndexPaths:[NSArray arrayWithObject:indexPath] withRowAnimation:UITableViewRowAnimationFade]; } else if (editingStyle == UITableViewCellEditingStyleInsert) { // Create a new instance of the appropriate class, insert it into the array, and add a new row to the table view. } } */ /* // Override to support rearranging the table view. - (void)tableView:(UITableView *)tableView moveRowAtIndexPath:(NSIndexPath *)fromIndexPath toIndexPath:(NSIndexPath *)toIndexPath { } */ /* // Override to support conditional rearranging of the table view. - (BOOL)tableView:(UITableView *)tableView canMoveRowAtIndexPath:(NSIndexPath *)indexPath { // Return NO if you do not want the item to be re-orderable. return YES; } */ @end
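A hedged reading of why this will not compile: several methods are defined twice in the same class (searchBarTextDidBeginEditing:, searchBar:textDidChange:, doneSearching_Clicked:, numberOfSectionsInTableView:, tableView:numberOfRowsInSection:, tableView:cellForRowAtIndexPath: and tableView:didSelectRowAtIndexPath: each appear in two versions), and the tutorial's data model (listOfItems, ovController, DetailViewController) appears to have been pasted in alongside this app's own dataForCurrentLevel/tableViewData model, so some of those symbols are likely never declared here. Keeping one copy of each method and pointing the search at the app's own flat key array would look roughly like this (names taken from the question's own code):

    // Sketch: filter this app's tableViewData (an NSArray of NSString keys)
    // instead of the tutorial's listOfItems country dictionaries.
    - (void)searchTableView {
        NSString *searchText = searchBar.text;
        [copyListOfItems removeAllObjects];
        for (NSString *key in tableViewData) {
            NSRange range = [key rangeOfString:searchText
                                       options:NSCaseInsensitiveSearch];
            if (range.location != NSNotFound) {
                [copyListOfItems addObject:key];
            }
        }
    }

The searching branches of the data source methods can then stay as they are, returning [copyListOfItems count] rows and [copyListOfItems objectAtIndex:indexPath.row] as the cell text.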

    Read the article

  • Zen and the Art of File and Folder Organization

    - by Mark Virtue
    Is your desk a paragon of neatness, or does it look like a paper-bomb has gone off? If you’ve been putting off getting organized because the task is too huge or daunting, or you don’t know where to start, we’ve got 40 tips to get you on the path to zen mastery of your filing system. For all those readers who would like to get their files and folders organized, or, if they’re already organized, better organized—we have compiled a complete guide to getting organized and staying organized, a comprehensive article that will hopefully cover every possible tip you could want. Signs that Your Computer is Poorly Organized If your computer is a mess, you’re probably already aware of it.  But just in case you’re not, here are some tell-tale signs: Your Desktop has over 40 icons on it “My Documents” contains over 300 files and 60 folders, including MP3s and digital photos You use the Windows’ built-in search facility whenever you need to find a file You can’t find programs in the out-of-control list of programs in your Start Menu You save all your Word documents in one folder, all your spreadsheets in a second folder, etc Any given file that you’re looking for may be in any one of four different sets of folders But before we start, here are some quick notes: We’re going to assume you know what files and folders are, and how to create, save, rename, copy and delete them The organization principles described in this article apply equally to all computer systems.  However, the screenshots here will reflect how things look on Windows (usually Windows 7).  We will also mention some useful features of Windows that can help you get organized. Everyone has their own favorite methodology of organizing and filing, and it’s all too easy to get into “My Way is Better than Your Way” arguments.  The reality is that there is no perfect way of getting things organized.  When I wrote this article, I tried to keep a generalist and objective viewpoint.  I consider myself to be unusually well organized (to the point of obsession, truth be told), and I’ve had 25 years experience in collecting and organizing files on computers.  So I’ve got a lot to say on the subject.  But the tips I have described here are only one way of doing it.  Hopefully some of these tips will work for you too, but please don’t read this as any sort of “right” way to do it. At the end of the article we’ll be asking you, the reader, for your own organization tips. Why Bother Organizing At All? For some, the answer to this question is self-evident. And yet, in this era of powerful desktop search software (the search capabilities built into the Windows Vista and Windows 7 Start Menus, and third-party programs like Google Desktop Search), the question does need to be asked, and answered. I have a friend who puts every file he ever creates, receives or downloads into his My Documents folder and doesn’t bother filing them into subfolders at all.  He relies on the search functionality built into his Windows operating system to help him find whatever he’s looking for.  And he always finds it.  He’s a Search Samurai.  For him, filing is a waste of valuable time that could be spent enjoying life! It’s tempting to follow suit.  On the face of it, why would anyone bother to take the time to organize their hard disk when such excellent search software is available?  
Well, if all you ever want to do with the files you own is to locate and open them individually (for listening, editing, etc), then there’s no reason to ever bother doing one scrap of organization.  But consider these common tasks that are not achievable with desktop search software: Find files manually.  Often it’s not convenient, speedy or even possible to utilize your desktop search software to find what you want.  It doesn’t work 100% of the time, or you may not even have it installed.  Sometimes its just plain faster to go straight to the file you want, if you know it’s in a particular sub-folder, rather than trawling through hundreds of search results. Find groups of similar files (e.g. all your “work” files, all the photos of your Europe holiday in 2008, all your music videos, all the MP3s from Dark Side of the Moon, all your letters you wrote to your wife, all your tax returns).  Clever naming of the files will only get you so far.  Sometimes it’s the date the file was created that’s important, other times it’s the file format, and other times it’s the purpose of the file.  How do you name a collection of files so that they’re easy to isolate based on any of the above criteria?  Short answer, you can’t. Move files to a new computer.  It’s time to upgrade your computer.  How do you quickly grab all the files that are important to you?  Or you decide to have two computers now – one for home and one for work.  How do you quickly isolate only the work-related files to move them to the work computer? Synchronize files to other computers.  If you have more than one computer, and you need to mirror some of your files onto the other computer (e.g. your music collection), then you need a way to quickly determine which files are to be synced and which are not.  Surely you don’t want to synchronize everything? Choose which files to back up.  If your backup regime calls for multiple backups, or requires speedy backups, then you’ll need to be able to specify which files are to be backed up, and which are not.  This is not possible if they’re all in the same folder. Finally, if you’re simply someone who takes pleasure in being organized, tidy and ordered (me! me!), then you don’t even need a reason.  Being disorganized is simply unthinkable. Tips on Getting Organized Here we present our 40 best tips on how to get organized.  Or, if you’re already organized, to get better organized. Tip #1.  Choose Your Organization System Carefully The reason that most people are not organized is that it takes time.  And the first thing that takes time is deciding upon a system of organization.  This is always a matter of personal preference, and is not something that a geek on a website can tell you.  You should always choose your own system, based on how your own brain is organized (which makes the assumption that your brain is, in fact, organized). We can’t instruct you, but we can make suggestions: You may want to start off with a system based on the users of the computer.  i.e. “My Files”, “My Wife’s Files”, My Son’s Files”, etc.  Inside “My Files”, you might then break it down into “Personal” and “Business”.  You may then realize that there are overlaps.  For example, everyone may want to share access to the music library, or the photos from the school play.  So you may create another folder called “Family”, for the “common” files. You may decide that the highest-level breakdown of your files is based on the “source” of each file.  In other words, who created the files.  
You could have “Files created by ME (business or personal)”, “Files created by people I know (family, friends, etc)”, and finally “Files created by the rest of the world (MP3 music files, downloaded or ripped movies or TV shows, software installation files, gorgeous desktop wallpaper images you’ve collected, etc).”  This system happens to be the one I use myself.  See below:  Mark is for files created by me; VC is for files created by my company (Virtual Creations); Others is for files created by my friends and family; Data is the rest of the world. Also, Settings is where I store the configuration files and other program data files for my installed software (more on this in tip #34, below). Each folder will present its own particular set of requirements for further sub-organization.  For example, you may decide to organize your music collection into sub-folders based on the artist’s name, while your digital photos might get organized based on the date they were taken.  It can be different for every sub-folder! Another strategy would be based on “currentness”.  Files you have yet to open and look at live in one folder.  Ones that have been looked at but not yet filed live in another place.  Current, active projects live in yet another place.  All other files (your “archive”, if you like) would live in a fourth folder. (And of course, within that last folder you’d need to create a further sub-system based on one of the previous bullet points). Put some thought into this – changing it when it proves incomplete can be a big hassle!  Before you go to the trouble of implementing any system you come up with, examine a wide cross-section of the files you own and see if they will all be able to find a nice logical place to sit within your system. Tip #2.  When You Decide on Your System, Stick to It! There’s nothing more pointless than going to all the trouble of creating a system and filing all your files, and then whenever you create, receive or download a new file, you simply dump it onto your Desktop.  You need to be disciplined – forever!  Every new file you get, spend those extra few seconds to file it where it belongs!  Otherwise, in just a month or two, you’ll be worse off than before – half your files will be organized and half will be disorganized – and you won’t know which is which! Tip #3.  Choose the Root Folder of Your Structure Carefully Every data file (document, photo, music file, etc) that you create, own or is important to you, no matter where it came from, should be found within one single folder, and that one single folder should be located at the root of your C: drive (as a sub-folder of C:\).  In other words, do not base your folder structure in standard folders like “My Documents”.  If you do, then you’re leaving it up to the operating system engineers to decide what folder structure is best for you.  And every operating system has a different system!  In Windows 7 your files are found in C:\Users\YourName, whilst on Windows XP it was C:\Documents and Settings\YourName\My Documents.  In UNIX systems it’s often /home/YourName. These standard default folders tend to fill up with junk files and folders that are not at all important to you.  “My Documents” is the worst offender.  Every second piece of software you install, it seems, likes to create its own folder in the “My Documents” folder.  These folders usually don’t fit within your organizational structure, so don’t use them!  In fact, don’t even use the “My Documents” folder at all.  
Allow it to fill up with junk, and then simply ignore it.  It sounds heretical, but: Don’t ever visit your “My Documents” folder!  Remove your icons/links to “My Documents” and replace them with links to the folders you created and you care about! Create your own file system from scratch!  Probably the best place to put it would be on your D: drive – if you have one.  This way, all your files live on one drive, while all the operating system and software component files live on the C: drive – simply and elegantly separated.  The benefits of that are profound.  Not only are there obvious organizational benefits (see tip #10, below), but when it comes to migrate your data to a new computer, you can (sometimes) simply unplug your D: drive and plug it in as the D: drive of your new computer (this implies that the D: drive is actually a separate physical disk, and not a partition on the same disk as C:).  You also get a slight speed improvement (again, only if your C: and D: drives are on separate physical disks). Warning:  From tip #12, below, you will see that it’s actually a good idea to have exactly the same file system structure – including the drive it’s filed onon all of the computers you own.  So if you decide to use the D: drive as the storage system for your own files, make sure you are able to use the D: drive on all the computers you own.  If you can’t ensure that, then you can still use a clever geeky trick to store your files on the D: drive, but still access them all via the C: drive (see tip #17, below). If you only have one hard disk (C:), then create a dedicated folder that will contain all your files – something like C:\Files.  The name of the folder is not important, but make it a single, brief word. There are several reasons for this: When creating a backup regime, it’s easy to decide what files should be backed up – they’re all in the one folder! If you ever decide to trade in your computer for a new one, you know exactly which files to migrate You will always know where to begin a search for any file If you synchronize files with other computers, it makes your synchronization routines very simple.   It also causes all your shortcuts to continue to work on the other machines (more about this in tip #24, below). Once you’ve decided where your files should go, then put all your files in there – Everything!  Completely disregard the standard, default folders that are created for you by the operating system (“My Music”, “My Pictures”, etc).  In fact, you can actually relocate many of those folders into your own structure (more about that below, in tip #6). The more completely you get all your data files (documents, photos, music, etc) and all your configuration settings into that one folder, then the easier it will be to perform all of the above tasks. Once this has been done, and all your files live in one folder, all the other folders in C:\ can be thought of as “operating system” folders, and therefore of little day-to-day interest for us. Here’s a screenshot of a nicely organized C: drive, where all user files are located within the \Files folder:   Tip #4.  Use Sub-Folders This would be our simplest and most obvious tip.  It almost goes without saying.  Any organizational system you decide upon (see tip #1) will require that you create sub-folders for your files.  Get used to creating folders on a regular basis. Tip #5.  Don’t be Shy About Depth Create as many levels of sub-folders as you need.  Don’t be scared to do so.  
Every time you notice an opportunity to group a set of related files into a sub-folder, do so.  Examples might include:  All the MP3s from one music CD, all the photos from one holiday, or all the documents from one client. It’s perfectly okay to put files into a folder called C:\Files\Me\From Others\Services\WestCo Bank\Statements\2009.  That’s only seven levels deep.  Ten levels is not uncommon.  Of course, it’s possible to take this too far.  If you notice yourself creating a sub-folder to hold only one file, then you’ve probably become a little over-zealous.  On the other hand, if you simply create a structure with only two levels (for example C:\Files\Work) then you really haven’t achieved any level of organization at all (unless you own only six files!).  Your “Work” folder will have become a dumping ground, just like your Desktop was, with most likely hundreds of files in it. Tip #6.  Move the Standard User Folders into Your Own Folder Structure Most operating systems, including Windows, create a set of standard folders for each of its users.  These folders then become the default location for files such as documents, music files, digital photos and downloaded Internet files.  In Windows 7, the full list is shown below: Some of these folders you may never use nor care about (for example, the Favorites folder, if you’re not using Internet Explorer as your browser).  Those ones you can leave where they are.  But you may be using some of the other folders to store files that are important to you.  Even if you’re not using them, Windows will still often treat them as the default storage location for many types of files.  When you go to save a standard file type, it can become annoying to be automatically prompted to save it in a folder that’s not part of your own file structure. But there’s a simple solution:  Move the folders you care about into your own folder structure!  If you do, then the next time you go to save a file of the corresponding type, Windows will prompt you to save it in the new, moved location. Moving the folders is easy.  Simply drag-and-drop them to the new location.  Here’s a screenshot of the default My Music folder being moved to my custom personal folder (Mark): Tip #7.  Name Files and Folders Intelligently This is another one that almost goes without saying, but we’ll say it anyway:  Do not allow files to be created that have meaningless names like Document1.doc, or folders called New Folder (2).  Take that extra 20 seconds and come up with a meaningful name for the file/folder – one that accurately divulges its contents without repeating the entire contents in the name. Tip #8.  Watch Out for Long Filenames Another way to tell if you have not yet created enough depth to your folder hierarchy is that your files often require really long names.  If you need to call a file Johnson Sales Figures March 2009.xls (which might happen to live in the same folder as Abercrombie Budget Report 2008.xls), then you might want to create some sub-folders so that the first file could be simply called March.xls, and living in the Clients\Johnson\Sales Figures\2009 folder. A well-placed file needs only a brief filename! Tip #9.  Use Shortcuts!  Everywhere! This is probably the single most useful and important tip we can offer.  A shortcut allows a file to be in two places at once. Why would you want that?  Well, the file and folder structure of every popular operating system on the market today is hierarchical.  
This means that all objects (files and folders) always live within exactly one parent folder.  It’s a bit like a tree.  A tree has branches (folders) and leaves (files).  Each leaf, and each branch, is supported by exactly one parent branch, all the way back to the root of the tree (which, incidentally, is exactly why C:\ is called the “root folder” of the C: drive). That hard disks are structured this way may seem obvious and even necessary, but it’s only one way of organizing data.  There are others:  Relational databases, for example, organize structured data entirely differently.  The main limitation of hierarchical filing structures is that a file can only ever be in one branch of the tree – in only one folder – at a time.  Why is this a problem?  Well, there are two main reasons why this limitation is a problem for computer users: The “correct” place for a file, according to our organizational rationale, is very often a very inconvenient place for that file to be located.  Just because it’s correctly filed doesn’t mean it’s easy to get to.  Your file may be “correctly” buried six levels deep in your sub-folder structure, but you may need regular and speedy access to this file every day.  You could always move it to a more convenient location, but that would mean that you would need to re-file back to its “correct” location it every time you’d finished working on it.  Most unsatisfactory. A file may simply “belong” in two or more different locations within your file structure.  For example, say you’re an accountant and you have just completed the 2009 tax return for John Smith.  It might make sense to you to call this file 2009 Tax Return.doc and file it under Clients\John Smith.  But it may also be important to you to have the 2009 tax returns from all your clients together in the one place.  So you might also want to call the file John Smith.doc and file it under Tax Returns\2009.  The problem is, in a purely hierarchical filing system, you can’t put it in both places.  Grrrrr! Fortunately, Windows (and most other operating systems) offers a way for you to do exactly that:  It’s called a “shortcut” (also known as an “alias” on Macs and a “symbolic link” on UNIX systems).  Shortcuts allow a file to exist in one place, and an icon that represents the file to be created and put anywhere else you please.  In fact, you can create a dozen such icons and scatter them all over your hard disk.  Double-clicking on one of these icons/shortcuts opens up the original file, just as if you had double-clicked on the original file itself. Consider the following two icons: The one on the left is the actual Word document, while the one on the right is a shortcut that represents the Word document.  Double-clicking on either icon will open the same file.  There are two main visual differences between the icons: The shortcut will have a small arrow in the lower-left-hand corner (on Windows, anyway) The shortcut is allowed to have a name that does not include the file extension (the “.docx” part, in this case) You can delete the shortcut at any time without losing any actual data.  The original is still intact.  All you lose is the ability to get to that data from wherever the shortcut was. So why are shortcuts so great?  Because they allow us to easily overcome the main limitation of hierarchical file systems, and put a file in two (or more) places at the same time.  You will always have files that don’t play nice with your organizational rationale, and can’t be filed in only one place.  
They demand to exist in two places.  Shortcuts allow this!  Furthermore, they allow you to collect your most often-opened files and folders together in one spot for convenient access.  The cool part is that the original files stay where they are, safe forever in their perfectly organized location.

So your collection of most often-opened files can – and should – become a collection of shortcuts!

If you’re still not convinced of the utility of shortcuts, consider the following well-known areas of a typical Windows computer:
The Start Menu (and all the programs that live within it)
The Quick Launch bar (or the Superbar in Windows 7)
The “Favorite folders” area in the top-left corner of the Windows Explorer window (in Windows Vista or Windows 7)
Your Internet Explorer Favorites or Firefox Bookmarks
Each item in each of these areas is a shortcut!  Each of those areas exists for one purpose only:  for convenience – to provide you with a collection of the files and folders you access most often.

It should be easy to see by now that shortcuts are designed for one single purpose:  to make accessing your files more convenient.  Each time you double-click on a shortcut, you are saved the hassle of locating the file (or folder, or program, or drive, or control panel icon) that it represents.

Shortcuts allow us to invent a golden rule of file and folder organization:  “Only ever have one copy of a file – never have two copies of the same file.  Use a shortcut instead” (this rule doesn’t apply to copies created for backup purposes, of course!)  There are also lesser rules, like “don’t move a file into your work area – create a shortcut there instead”, and “any time you find yourself frustrated with how long it takes to locate a file, create a shortcut to it and place that shortcut in a convenient location.”

So how do we create these massively useful shortcuts?  There are two main ways:
“Copy” the original file or folder (click on it and type Ctrl-C, or right-click on it and select Copy), then right-click in an empty area of the destination folder (the place where you want the shortcut to go) and select Paste shortcut.
Right-drag (drag with the right mouse button) the file from the source folder to the destination folder.  When you let go of the mouse button at the destination folder, a menu pops up: select Create shortcuts here.
Note that when shortcuts are created, they are often named something like Shortcut to Budget Detail.doc (Windows XP) or Budget Detail – Shortcut.doc (Windows 7).  If you don’t like those extra words, you can easily rename the shortcuts after they’re created, or you can configure Windows to never insert the extra words in the first place (see our article on how to do this).  And of course, you can create shortcuts to folders too, not just to files!

Bottom line:  Whenever you have a file that you’d like to access from somewhere else (whether it’s convenience you’re after, or because the file simply belongs in two places), create a shortcut to the original file in the new location.

Tip #10.  Separate Application Files from Data Files
Any digital organization guru will drum this rule into you.  Application files are the components of the software you’ve installed (e.g. Microsoft Word, Adobe Photoshop or Internet Explorer).  Data files are the files that you’ve created for yourself using that software (e.g. Word documents, digital photos, emails or playlists).  Software gets installed, uninstalled and upgraded all the time.
Hopefully you always have the original installation media (or downloaded set-up file) kept somewhere safe, and can thus reinstall your software at any time.  This means that the software component files are of little importance, whereas the files you have created with that software are, by definition, important.  It’s a good rule to always separate unimportant files from important files.

So when your software prompts you to save a file you’ve just created, take a moment and check out where it’s suggesting that you save the file.  If it’s suggesting that you save the file into the same folder as the software itself, then definitely don’t follow that suggestion.  File it in your own folder!  In fact, see if you can find the program’s configuration option that determines where files are saved by default (if it has one), and change it.

Tip #11.  Organize Files Based on Purpose, Not on File Type
If you have, for example, a folder called Work\Clients\Johnson, and within that folder you have two sub-folders, Word Documents and Spreadsheets (in other words, you’re separating “.doc” files from “.xls” files), then chances are that you’re not optimally organized.  It makes little sense to organize your files based on the program that created them.  Instead, create your sub-folders based on the purpose of the file.  For example, it would make more sense to create sub-folders called Correspondence and Financials.  It may well be that all the files in a given sub-folder are of the same file type, but this should be more of a coincidence and less of a design feature of your organization system.

Tip #12.  Maintain the Same Folder Structure on All Your Computers
In other words, whatever organizational system you create, apply it to every computer that you can.  There are several benefits to this:
There’s less to remember.  No matter where you are, you always know where to look for your files.
If you copy or synchronize files from one computer to another, then setting up the synchronization job becomes very simple.
Shortcuts can be copied or moved from one computer to another with ease (assuming the original files are also copied/moved).  There’s no need to find the target of the shortcut all over again on the second computer.
Ditto for linked files (e.g. Word documents that link to data in a separate Excel file), playlists, and any files that reference the exact file locations of other files.
This applies even to the drive that your files are stored on.  If your files are stored on C: on one computer, make sure they’re stored on C: on all your computers.  Otherwise all your shortcuts, playlists and linked files will stop working!

Tip #13.  Create an “Inbox” Folder
Create yourself a folder where you store all files that you’re currently working on, or that you haven’t gotten around to filing yet.  You can think of this folder as your “to-do” list.  You can call it “Inbox” (making it the same metaphor as your email system), or “Work”, or “To-Do”, or “Scratch”, or whatever name makes sense to you.  It doesn’t matter what you call it – just make sure you have one!  Once you have finished working on a file, you then move it from the “Inbox” to its correct location within your organizational structure.

You may want to use your Desktop as this “Inbox” folder.  Rightly or wrongly, most people do.  It’s not a bad place to put such files, but be careful:  If you do decide that your Desktop represents your “to-do” list, then make sure that no other files find their way there.
In other words, make sure that your “Inbox”, wherever it is, Desktop or otherwise, is kept free of junk – stray files that don’t belong there. So where should you put this folder, which, almost by definition, lives outside the structure of the rest of your filing system?  Well, first and foremost, it has to be somewhere handy.  This will be one of your most-visited folders, so convenience is key.  Putting it on the Desktop is a great option – especially if you don’t have any other folders on your Desktop:  the folder then becomes supremely easy to find in Windows Explorer: You would then create shortcuts to this folder in convenient spots all over your computer (“Favorite Links”, “Quick Launch”, etc). Tip #14.  Ensure You have Only One “Inbox” Folder Once you’ve created your “Inbox” folder, don’t use any other folder location as your “to-do list”.  Throw every incoming or created file into the Inbox folder as you create/receive it.  This keeps the rest of your computer pristine and free of randomly created or downloaded junk.  The last thing you want to be doing is checking multiple folders to see all your current tasks and projects.  Gather them all together into one folder. Here are some tips to help ensure you only have one Inbox: Set the default “save” location of all your programs to this folder. Set the default “download” location for your browser to this folder. If this folder is not your desktop (recommended) then also see if you can make a point of not putting “to-do” files on your desktop.  This keeps your desktop uncluttered and Zen-like: (the Inbox folder is in the bottom-right corner) Tip #15.  Be Vigilant about Clearing Your “Inbox” Folder This is one of the keys to staying organized.  If you let your “Inbox” overflow (i.e. allow there to be more than, say, 30 files or folders in there), then you’re probably going to start feeling like you’re overwhelmed:  You’re not keeping up with your to-do list.  Once your Inbox gets beyond a certain point (around 30 files, studies have shown), then you’ll simply start to avoid it.  You may continue to put files in there, but you’ll be scared to look at it, fearing the “out of control” feeling that all overworked, chaotic or just plain disorganized people regularly feel. So, here’s what you can do: Visit your Inbox/to-do folder regularly (at least five times per day). Scan the folder regularly for files that you have completed working on and are ready for filing.  File them immediately. Make it a source of pride to keep the number of files in this folder as small as possible.  If you value peace of mind, then make the emptiness of this folder one of your highest (computer) priorities If you know that a particular file has been in the folder for more than, say, six weeks, then admit that you’re not actually going to get around to processing it, and move it to its final resting place. Tip #16.  File Everything Immediately, and Use Shortcuts for Your Active Projects As soon as you create, receive or download a new file, store it away in its “correct” folder immediately.  Then, whenever you need to work on it (possibly straight away), create a shortcut to it in your “Inbox” (“to-do”) folder or your desktop.  That way, all your files are always in their “correct” locations, yet you still have immediate, convenient access to your current, active files.  When you finish working on a file, simply delete the shortcut. Ideally, your “Inbox” folder – and your Desktop – should contain no actual files or folders.  They should simply contain shortcuts. 
Tip #17.  Use Directory Symbolic Links (or Junctions) to Maintain One Unified Folder Structure Using this tip, we can get around a potential hiccup that we can run into when creating our organizational structure – the issue of having more than one drive on our computer (C:, D:, etc).  We might have files we need to store on the D: drive for space reasons, and yet want to base our organized folder structure on the C: drive (or vice-versa). Your chosen organizational structure may dictate that all your files must be accessed from the C: drive (for example, the root folder of all your files may be something like C:\Files).  And yet you may still have a D: drive and wish to take advantage of the hundreds of spare Gigabytes that it offers.  Did you know that it’s actually possible to store your files on the D: drive and yet access them as if they were on the C: drive?  And no, we’re not talking about shortcuts here (although the concept is very similar). By using the shell command mklink, you can essentially take a folder that lives on one drive and create an alias for it on a different drive (you can do lots more than that with mklink – for a full rundown on this programs capabilities, see our dedicated article).  These aliases are called directory symbolic links (and used to be known as junctions).  You can think of them as “virtual” folders.  They function exactly like regular folders, except they’re physically located somewhere else. For example, you may decide that your entire D: drive contains your complete organizational file structure, but that you need to reference all those files as if they were on the C: drive, under C:\Files.  If that was the case you could create C:\Files as a directory symbolic link – a link to D:, as follows: mklink /d c:\files d:\ Or it may be that the only files you wish to store on the D: drive are your movie collection.  You could locate all your movie files in the root of your D: drive, and then link it to C:\Files\Media\Movies, as follows: mklink /d c:\files\media\movies d:\ (Needless to say, you must run these commands from a command prompt – click the Start button, type cmd and press Enter) Tip #18. Customize Your Folder Icons This is not strictly speaking an organizational tip, but having unique icons for each folder does allow you to more quickly visually identify which folder is which, and thus saves you time when you’re finding files.  An example is below (from my folder that contains all files downloaded from the Internet): To learn how to change your folder icons, please refer to our dedicated article on the subject. Tip #19.  Tidy Your Start Menu The Windows Start Menu is usually one of the messiest parts of any Windows computer.  Every program you install seems to adopt a completely different approach to placing icons in this menu.  Some simply put a single program icon.  Others create a folder based on the name of the software.  And others create a folder based on the name of the software manufacturer.  It’s chaos, and can make it hard to find the software you want to run. Thankfully we can avoid this chaos with useful operating system features like Quick Launch, the Superbar or pinned start menu items. Even so, it would make a lot of sense to get into the guts of the Start Menu itself and give it a good once-over.  All you really need to decide is how you’re going to organize your applications.  A structure based on the purpose of the application is an obvious candidate.  
Below is an example of one such structure: In this structure, Utilities means software whose job it is to keep the computer itself running smoothly (configuration tools, backup software, Zip programs, etc).  Applications refers to any productivity software that doesn’t fit under the headings Multimedia, Graphics, Internet, etc. In case you’re not aware, every icon in your Start Menu is a shortcut and can be manipulated like any other shortcut (copied, moved, deleted, etc). With the Windows Start Menu (all version of Windows), Microsoft has decided that there be two parallel folder structures to store your Start Menu shortcuts.  One for you (the logged-in user of the computer) and one for all users of the computer.  Having two parallel structures can often be redundant:  If you are the only user of the computer, then having two parallel structures is totally redundant.  Even if you have several users that regularly log into the computer, most of your installed software will need to be made available to all users, and should thus be moved out of the “just you” version of the Start Menu and into the “all users” area. To take control of your Start Menu, so you can start organizing it, you’ll need to know how to access the actual folders and shortcut files that make up the Start Menu (both versions of it).  To find these folders and files, click the Start button and then right-click on the All Programs text (Windows XP users should right-click on the Start button itself): The Open option refers to the “just you” version of the Start Menu, while the Open All Users option refers to the “all users” version.  Click on the one you want to organize. A Windows Explorer window then opens with your chosen version of the Start Menu selected.  From there it’s easy.  Double-click on the Programs folder and you’ll see all your folders and shortcuts.  Now you can delete/rename/move until it’s just the way you want it. Note:  When you’re reorganizing your Start Menu, you may want to have two Explorer windows open at the same time – one showing the “just you” version and one showing the “all users” version.  You can drag-and-drop between the windows. Tip #20.  Keep Your Start Menu Tidy Once you have a perfectly organized Start Menu, try to be a little vigilant about keeping it that way.  Every time you install a new piece of software, the icons that get created will almost certainly violate your organizational structure. So to keep your Start Menu pristine and organized, make sure you do the following whenever you install a new piece of software: Check whether the software was installed into the “just you” area of the Start Menu, or the “all users” area, and then move it to the correct area. Remove all the unnecessary icons (like the “Read me” icon, the “Help” icon (you can always open the help from within the software itself when it’s running), the “Uninstall” icon, the link(s)to the manufacturer’s website, etc) Rename the main icon(s) of the software to something brief that makes sense to you.  For example, you might like to rename Microsoft Office Word 2010 to simply Word Move the icon(s) into the correct folder based on your Start Menu organizational structure And don’t forget:  when you uninstall a piece of software, the software’s uninstall routine is no longer going to be able to remove the software’s icon from the Start Menu (because you moved and/or renamed it), so you’ll need to remove that icon manually. Tip #21.  
Tidy C:\ The root of your C: drive (C:\) is a common dumping ground for files and folders – both by the users of your computer and by the software that you install on your computer.  It can become a mess. There’s almost no software these days that requires itself to be installed in C:\.  99% of the time it can and should be installed into C:\Program Files.  And as for your own files, well, it’s clear that they can (and almost always should) be stored somewhere else. In an ideal world, your C:\ folder should look like this (on Windows 7): Note that there are some system files and folders in C:\ that are usually and deliberately “hidden” (such as the Windows virtual memory file pagefile.sys, the boot loader file bootmgr, and the System Volume Information folder).  Hiding these files and folders is a good idea, as they need to stay where they are and are almost never needed to be opened or even seen by you, the user.  Hiding them prevents you from accidentally messing with them, and enhances your sense of order and well-being when you look at your C: drive folder. Tip #22.  Tidy Your Desktop The Desktop is probably the most abused part of a Windows computer (from an organization point of view).  It usually serves as a dumping ground for all incoming files, as well as holding icons to oft-used applications, plus some regularly opened files and folders.  It often ends up becoming an uncontrolled mess.  See if you can avoid this.  Here’s why… Application icons (Word, Internet Explorer, etc) are often found on the Desktop, but it’s unlikely that this is the optimum place for them.  The “Quick Launch” bar (or the Superbar in Windows 7) is always visible and so represents a perfect location to put your icons.  You’ll only be able to see the icons on your Desktop when all your programs are minimized.  It might be time to get your application icons off your desktop… You may have decided that the Inbox/To-do folder on your computer (see tip #13, above) should be your Desktop.  If so, then enough said.  Simply be vigilant about clearing it and preventing it from being polluted by junk files (see tip #15, above).  On the other hand, if your Desktop is not acting as your “Inbox” folder, then there’s no reason for it to have any data files or folders on it at all, except perhaps a couple of shortcuts to often-opened files and folders (either ongoing or current projects).  Everything else should be moved to your “Inbox” folder. In an ideal world, it might look like this: Tip #23.  Move Permanent Items on Your Desktop Away from the Top-Left Corner When files/folders are dragged onto your desktop in a Windows Explorer window, or when shortcuts are created on your Desktop from Internet Explorer, those icons are always placed in the top-left corner – or as close as they can get.  If you have other files, folders or shortcuts that you keep on the Desktop permanently, then it’s a good idea to separate these permanent icons from the transient ones, so that you can quickly identify which ones the transients are.  An easy way to do this is to move all your permanent icons to the right-hand side of your Desktop.  That should keep them separated from incoming items. Tip #24.  Synchronize If you have more than one computer, you’ll almost certainly want to share files between them.  If the computers are permanently attached to the same local network, then there’s no need to store multiple copies of any one file or folder – shortcuts will suffice.  
However, if the computers are not always on the same network, then you will at some point need to copy files between them.  For files that need to permanently live on both computers, the ideal way to do this is to synchronize the files, as opposed to simply copying them.  We only have room here to write a brief summary of synchronization, not a full article.  In short, there are several different types of synchronization:
Where the contents of one folder are accessible anywhere, such as with Dropbox
Where the contents of any number of folders are accessible anywhere, such as with Windows Live Mesh
Where any files or folders from anywhere on your computer are synchronized with exactly one other computer, such as with the Windows “Briefcase”, Microsoft SyncToy, or (much more powerful, yet still free) SyncBack from 2BrightSparks.  This only works when both computers are on the same local network, at least temporarily.
A great advantage of synchronization solutions is that once you’ve got it configured the way you want it, the sync process happens automatically, every time.  Click a button (or schedule it to happen automatically) and all your files are automagically put where they’re supposed to be.  If you maintain the same file and folder structure on both computers, then you can also sync files that depend upon the correct location of other files, like shortcuts, playlists and office documents that link to other office documents, and the synchronized files still work on the other computer!

Tip #25.  Hide Files You Never Need to See
If you have your files well organized, you will often be able to tell if a file is out of place just by glancing at the contents of a folder (for example, it should be pretty obvious if you look in a folder that contains all the MP3s from one music CD and see a Word document in there).  This is a good thing – it allows you to determine if there are files out of place with a quick glance.  Yet sometimes there are files in a folder that seem out of place but actually need to be there, such as the “folder art” JPEGs in music folders, and various files in the root of the C: drive.  If such files never need to be opened by you, then a good idea is to simply hide them.  Then, the next time you glance at the folder, you won’t have to remember whether that file was supposed to be there or not, because you won’t see it at all!  To hide a file, simply right-click on it and choose Properties, then tick the Hidden tick-box.

Tip #26.  Keep Every Setup File
These days most software is downloaded from the Internet.  Whenever you download a piece of software, keep it.  You never know when you’ll need to reinstall the software.  Further, keep with it an Internet shortcut that links back to the website where you originally downloaded it, in case you ever need to check for updates.  See tip #33 below for a full description of the excellence of organizing your setup files.

Tip #27.  Try to Minimize the Number of Folders that Contain Both Files and Sub-folders
Some of the folders in your organizational structure will contain only files.  Others will contain only sub-folders.  And you will also have some folders that contain both files and sub-folders.  You will notice slight improvements in how long it takes you to locate a file if you try to avoid this third type of folder.  It’s not always possible, of course – you’ll always have some of these folders, but see if you can avoid it.
One way of doing this is to take all the leftover files that didn’t end up getting stored in a sub-folder and create a special “Miscellaneous” or “Other” folder for them. Tip #28.  Starting a Filename with an Underscore Brings it to the Top of a List Further to the previous tip, if you name that “Miscellaneous” or “Other” folder in such a way that its name begins with an underscore “_”, then it will appear at the top of the list of files/folders. The screenshot below is an example of this.  Each folder in the list contains a set of digital photos.  The folder at the top of the list, _Misc, contains random photos that didn’t deserve their own dedicated folder: Tip #29.  Clean Up those CD-ROMs and (shudder!) Floppy Disks Have you got a pile of CD-ROMs stacked on a shelf of your office?  Old photos, or files you archived off onto CD-ROM (or even worse, floppy disks!) because you didn’t have enough disk space at the time?  In the meantime have you upgraded your computer and now have 500 Gigabytes of space you don’t know what to do with?  If so, isn’t it time you tidied up that stack of disks and filed them into your gorgeous new folder structure? So what are you waiting for?  Bite the bullet, copy them all back onto your computer, file them in their appropriate folders, and then back the whole lot up onto a shiny new 1000Gig external hard drive! Useful Folders to Create This next section suggests some useful folders that you might want to create within your folder structure.  I’ve personally found them to be indispensable. The first three are all about convenience – handy folders to create and then put somewhere that you can always access instantly.  For each one, it’s not so important where the actual folder is located, but it’s very important where you put the shortcut(s) to the folder.  You might want to locate the shortcuts: On your Desktop In your “Quick Launch” area (or pinned to your Windows 7 Superbar) In your Windows Explorer “Favorite Links” area Tip #30.  Create an “Inbox” (“To-Do”) Folder This has already been mentioned in depth (see tip #13), but we wanted to reiterate its importance here.  This folder contains all the recently created, received or downloaded files that you have not yet had a chance to file away properly, and it also may contain files that you have yet to process.  In effect, it becomes a sort of “to-do list”.  It doesn’t have to be called “Inbox” – you can call it whatever you want. Tip #31.  Create a Folder where Your Current Projects are Collected Rather than going hunting for them all the time, or dumping them all on your desktop, create a special folder where you put links (or work folders) for each of the projects you’re currently working on. You can locate this folder in your “Inbox” folder, on your desktop, or anywhere at all – just so long as there’s a way of getting to it quickly, such as putting a link to it in Windows Explorer’s “Favorite Links” area: Tip #32.  Create a Folder for Files and Folders that You Regularly Open You will always have a few files that you open regularly, whether it be a spreadsheet of your current accounts, or a favorite playlist.  These are not necessarily “current projects”, rather they’re simply files that you always find yourself opening.  Typically such files would be located on your desktop (or even better, shortcuts to those files).  Why not collect all such shortcuts together and put them in their own special folder? 
As with the “Current Projects” folder (above), you would want to locate that folder somewhere convenient.  Below is an example of a folder called “Quick links”, with about seven files (shortcuts) in it, that is accessible through the Windows Quick Launch bar.  See tip #37 below for a full explanation of the power of the Quick Launch bar.

Tip #33.  Create a “Set-ups” Folder
A typical computer has dozens of applications installed on it.  For each piece of software, there are often many different pieces of information you need to keep track of, including:
The original installation setup file(s).  This can be anything from a simple 100Kb setup.exe file you downloaded from a website, all the way up to a 4Gig ISO file that you copied from a DVD-ROM that you purchased.
The home page of the software manufacturer (in case you need to look up something on their support pages, their forum or their online help)
The page containing the download link for your actual file (in case you need to re-download it, or download an upgraded version)
The serial number
Your proof-of-purchase documentation
Any other template files, plug-ins, themes, etc that also need to get installed
For each piece of software, it’s a great idea to gather all of these files together and put them in a single folder.  The folder can be the name of the software (plus possibly a very brief description of what it’s for – in case you can’t remember what the software does based on its name).  Then you would gather all of these folders together into one place, and call it something like “Software” or “Setups”.

If you have enough of these folders (I have several hundred, being a geek, collected over 20 years), then you may want to further categorize them.  My own categorization structure is based on “platform” (operating system):  The last seven folders each represent one platform/operating system, while _Operating Systems contains set-up files for installing the operating systems themselves.  _Hardware contains ROMs for hardware I own, such as routers.  Within the Windows folder (above), you can see the beginnings of the vast library of software I’ve compiled over the years.  An example of a typical application folder looks like this:

Tip #34.  Have a “Settings” Folder
We all know that our documents are important.  So are our photos and music files.  We save all of these files into folders, and then locate them afterwards and double-click on them to open them.  But there are many files that are important to us that can’t be saved into folders, and then searched for and double-clicked later on.  These files certainly contain important information that we need, but are often created internally by an application, and saved wherever that application feels is appropriate.

A good example of this is the “PST” file that Outlook creates for us and uses to store all our emails, contacts, appointments and so forth.  Another example would be the collection of Bookmarks that Firefox stores on your behalf.  And yet another example would be the customized settings and configuration files of all our software.  Granted, most Windows programs store their configuration in the Registry, but there are still many programs that use configuration files to store their settings.

Imagine if you lost all of the above files!  And yet, when people are backing up their computers, they typically only back up the files they know about – those that are stored in the “My Documents” folder, etc.
If they had a hard disk failure or their computer was lost or stolen, their backup files would not include some of the most vital files they owned.  Also, when migrating to a new computer, it’s vital to ensure that these files make the journey.

It can be a very useful idea to create yourself a folder to store all your “settings” – files that are important to you but which you never actually search for by name and double-click on to open them.  Otherwise, next time you go to set up a new computer just the way you want it, you’ll need to spend hours recreating the configuration of your previous computer!

So how do we get our important files into this folder?  Well, we have a few options:
Some programs (such as Outlook and its PST files) allow you to place these files wherever you want.  If you delve into the program’s options, you will find a setting somewhere that controls the location of the important settings files (or “personal storage” – PST – when it comes to Outlook).
Some programs do not allow you to change such locations in any easy way, but if you get into the Registry, you can sometimes find a registry key that refers to the location of the file(s).  Simply move the file into your Settings folder and adjust the registry key to refer to the new location.
Some programs stubbornly refuse to allow their settings files to be placed anywhere other than where they stipulate.  When faced with programs like these, you have three choices:  (1) You can ignore those files, (2) You can copy the files into your Settings folder (let’s face it – settings don’t change very often), or (3) you can use synchronization software, such as the Windows Briefcase, to make synchronized copies of all your files in your Settings folder.  All you then have to do is to remember to run your sync software periodically (perhaps just before you run your backup software!).
There are some other things you may decide to locate inside this new “Settings” folder:
Exports of registry keys (from the many applications that store their configurations in the Registry).  This is useful for backup purposes or for migrating to a new computer.
Notes you’ve made about all the specific customizations you have made to a particular piece of software (so that you’ll know how to do it all again on your next computer).
Shortcuts to webpages that detail how to tweak certain aspects of your operating system or applications so they are just the way you like them (such as how to remove the words “Shortcut to” from the beginning of newly created shortcuts).  In other words, you’d want to create shortcuts to half the pages on the How-To Geek website!
Here’s an example of a “Settings” folder:

Windows Features that Help with Organization
This section details some of the features of Microsoft Windows that are a boon to anyone hoping to stay optimally organized.

Tip #35.  Use the “Favorite Links” Area to Access Oft-Used Folders
Once you’ve created your great new filing system, work out which folders you access most regularly, or which serve as great starting points for locating the rest of the files in your folder structure, and then put links to those folders in your “Favorite Links” area of the left-hand side of the Windows Explorer window (simply called “Favorites” in Windows 7).  Some ideas for folders you might want to add there include:
Your “Inbox” folder (or whatever you’ve called it) – most important!
The base of your filing structure (e.g.
C:\Files)
A folder containing shortcuts to often-accessed folders on other computers around the network (shown above as Network Folders)
A folder containing shortcuts to your current projects (unless that folder is in your “Inbox” folder)
Getting folders into this area is very simple – just locate the folder you’re interested in and drag it there!

Tip #36.  Customize the Places Bar in the File/Open and File/Save Boxes
Consider the screenshot below:  The highlighted icons (collectively known as the “Places Bar”) can be customized to refer to any folder location you want, allowing instant access to any part of your organizational structure.  Note:  These File/Open and File/Save boxes have been superseded by new versions that use the Windows Vista/Windows 7 “Favorite Links”, but the older versions (shown above) are still used by a surprisingly large number of applications.

The easiest way to customize these icons is to use the Group Policy Editor, but not everyone has access to this program.  If you do, open it up and navigate to:
User Configuration > Administrative Templates > Windows Components > Windows Explorer > Common Open File Dialog
If you don’t have access to the Group Policy Editor, then you’ll need to get into the Registry.  Navigate to:
HKEY_CURRENT_USER\Software\Microsoft\Windows\CurrentVersion\Policies\comdlg32\Placesbar
It should then be easy to make the desired changes.  Log off and log on again to allow the changes to take effect.

Tip #37.  Use the Quick Launch Bar as an Application and File Launcher
That Quick Launch bar (to the right of the Start button) is a lot more useful than people give it credit for.  Most people simply have half a dozen icons in it, and use it to start just those programs.  But it can actually be used to instantly access just about anything in your filing system.  For complete instructions on how to set this up, visit our dedicated article on this topic.

Tip #38.  Put a Shortcut to Windows Explorer into Your Quick Launch Bar
This is only necessary in Windows Vista and Windows XP.  The Microsoft boffins finally got wise and added it to the Windows 7 Superbar by default.  Windows Explorer – the program used for managing your files and folders – is one of the most useful programs in Windows.  Anyone who considers themselves serious about being organized needs instant access to this program at any time.  A great place to create a shortcut to this program is in the Windows XP and Windows Vista “Quick Launch” bar.  To get it there, locate it in your Start Menu (usually under “Accessories”) and then right-drag it down into your Quick Launch bar (and create a copy).

Tip #39.  Customize the Starting Folder for Your Windows 7 Explorer Superbar Icon
If you’re on Windows 7, your Superbar will include a Windows Explorer icon.  Clicking on the icon will launch Windows Explorer (of course), and will start you off in your “Libraries” folder.  Libraries may be fine as a starting point, but if you have created yourself an “Inbox” folder, then it would probably make more sense to start off in this folder every time you launch Windows Explorer.  To change this default/starting folder location, first right-click the Explorer icon in the Superbar, then right-click the Windows Explorer entry in the menu that appears and select Properties.  Then, in the Target field of the Windows Explorer Properties box that appears, type %windir%\explorer.exe followed by the path of the folder you wish to start in.
For example: %windir%\explorer.exe C:\Files If that folder happened to be on the Desktop (and called, say, “Inbox”), then you would use the following cleverness: %windir%\explorer.exe shell:desktop\Inbox Then click OK and test it out. Tip #40.  Ummmmm…. No, that’s it.  I can’t think of another one.  That’s all of the tips I can come up with.  I only created this one because 40 is such a nice round number… Case Study – An Organized PC To finish off the article, I have included a few screenshots of my (main) computer (running Vista).  The aim here is twofold: To give you a sense of what it looks like when the above, sometimes abstract, tips are applied to a real-life computer, and To offer some ideas about folders and structure that you may want to steal to use on your own PC. Let’s start with the C: drive itself.  Very minimal.  All my files are contained within C:\Files.  I’ll confine the rest of the case study to this folder: That folder contains the following: Mark: My personal files VC: My business (Virtual Creations, Australia) Others contains files created by friends and family Data contains files from the rest of the world (can be thought of as “public” files, usually downloaded from the Net) Settings is described above in tip #34 The Data folder contains the following sub-folders: Audio:  Radio plays, audio books, podcasts, etc Development:  Programmer and developer resources, sample source code, etc (see below) Humour:  Jokes, funnies (those emails that we all receive) Movies:  Downloaded and ripped movies (all legal, of course!), their scripts, DVD covers, etc. Music:  (see below) Setups:  Installation files for software (explained in full in tip #33) System:  (see below) TV:  Downloaded TV shows Writings:  Books, instruction manuals, etc (see below) The Music folder contains the following sub-folders: Album covers:  JPEG scans Guitar tabs:  Text files of guitar sheet music Lists:  e.g. “Top 1000 songs of all time” Lyrics:  Text files MIDI:  Electronic music files MP3 (representing 99% of the Music folder):  MP3s, either ripped from CDs or downloaded, sorted by artist/album name Music Video:  Video clips Sheet Music:  usually PDFs The Data\Writings folder contains the following sub-folders: (all pretty self-explanatory) The Data\Development folder contains the following sub-folders: Again, all pretty self-explanatory (if you’re a geek) The Data\System folder contains the following sub-folders: These are usually themes, plug-ins and other downloadable program-specific resources. The Mark folder contains the following sub-folders: From Others:  Usually letters that other people (friends, family, etc) have written to me For Others:  Letters and other things I have created for other people Green Book:  None of your business Playlists:  M3U files that I have compiled of my favorite songs (plus one M3U playlist file for every album I own) Writing:  Fiction, philosophy and other musings of mine Mark Docs:  Shortcut to C:\Users\Mark Settings:  Shortcut to C:\Files\Settings\Mark The Others folder contains the following sub-folders: The VC (Virtual Creations, my business – I develop websites) folder contains the following sub-folders: And again, all of those are pretty self-explanatory. Conclusion These tips have saved my sanity and helped keep me a productive geek, but what about you? What tips and tricks do you have to keep your files organized?  Please share them with us in the comments.  
Come on, don’t be shy…


  • Hosting the Razor Engine for Templating in Non-Web Applications

    - by Rick Strahl
    Microsoft’s new Razor HTML Rendering Engine that is currently shipping with ASP.NET MVC previews can be used outside of ASP.NET. Razor is an alternative view engine that can be used instead of the ASP.NET Page engine that currently works with ASP.NET WebForms and MVC. It provides a simpler and more readable markup syntax and is much more light weight in terms of functionality than the full blown WebForms Page engine, focusing only on features that are more along the lines of a pure view engine (or classic ASP!) with focus on expression and code rendering rather than a complex control/object model. Like the Page engine though, the parser understands .NET code syntax which can be embedded into templates, and behind the scenes the engine compiles markup and script code into an executing piece of .NET code in an assembly. Although it ships as part of the ASP.NET MVC and WebMatrix the Razor Engine itself is not directly dependent on ASP.NET or IIS or HTTP in any way. And although there are some markup and rendering features that are optimized for HTML based output generation, Razor is essentially a free standing template engine. And what’s really nice is that unlike the ASP.NET Runtime, Razor is fairly easy to host inside of your own non-Web applications to provide templating functionality. Templating in non-Web Applications? Yes please! So why might you host a template engine in your non-Web application? Template rendering is useful in many places and I have a number of applications that make heavy use of it. One of my applications – West Wind Html Help Builder - exclusively uses template based rendering to merge user supplied help text content into customizable and executable HTML markup templates that provide HTML output for CHM style HTML Help. This is an older product and it’s not actually using .NET at the moment – and this is one reason I’m looking at Razor for script hosting at the moment. For a few .NET applications though I’ve actually used the ASP.NET Runtime hosting to provide templating and mail merge style functionality and while that works reasonably well it’s a very heavy handed approach. It’s very resource intensive and has potential issues with versioning in various different versions of .NET. The generic implementation I created in the article above requires a lot of fix up to mimic an HTTP request in a non-HTTP environment and there are a lot of little things that have to happen to ensure that the ASP.NET runtime works properly most of it having nothing to do with the templating aspect but just satisfying ASP.NET’s requirements. The Razor Engine on the other hand is fairly light weight and completely decoupled from the ASP.NET runtime and the HTTP processing. Rather it’s a pure template engine whose sole purpose is to render text templates. Hosting this engine in your own applications can be accomplished with a reasonable amount of code (actually just a few lines with the tools I’m about to describe) and without having to fake HTTP requests. It’s also much lighter on resource usage and you can easily attach custom properties to your base template implementation to easily pass context from the parent application into templates all of which was rather complicated with ASP.NET runtime hosting. Installing the Razor Template Engine You can get Razor as part of the MVC 3 (RC and later) or Web Matrix. Both are available as downloadable components from the Web Platform Installer Version 3.0 (!important – V2 doesn’t show these components). 
If you already have that version of the WPI installed just fire it up. You can get the latest version of the Web Platform Installer from here: http://www.microsoft.com/web/gallery/install.aspx Once the platform Installer 3.0 is installed install either MVC 3 or ASP.NET Web Pages. Once installed you’ll find a System.Web.Razor assembly in C:\Program Files\Microsoft ASP.NET\ASP.NET Web Pages\v1.0\Assemblies\System.Web.Razor.dll which you can add as a reference to your project. Creating a Wrapper The basic Razor Hosting API is pretty simple and you can host Razor with a (large-ish) handful of lines of code. I’ll show the basics of it later in this article. However, if you want to customize the rendering and handle assembly and namespace includes for the markup as well as deal with text and file inputs as well as forcing Razor to run in a separate AppDomain so you can unload the code-generated assemblies and deal with assembly caching for re-used templates little more work is required to create something that is more easily reusable. For this reason I created a Razor Hosting wrapper project that combines a bunch of this functionality into an easy to use hosting class, a hosting factory that can load the engine in a separate AppDomain and a couple of hosting containers that provided folder based and string based caching for templates for an easily embeddable and reusable engine with easy to use syntax. If you just want the code and play with the samples and source go grab the latest code from the Subversion Repository at: http://www.west-wind.com:8080/svn/articles/trunk/RazorHosting/ or a snapshot from: http://www.west-wind.com/files/tools/RazorHosting.zip Getting Started Before I get into how hosting with Razor works, let’s take a look at how you can get up and running quickly with the wrapper classes provided. It only takes a few lines of code. The easiest way to use these Razor Hosting Wrappers is to use one of the two HostContainers provided. One is for hosting Razor scripts in a directory and rendering them as relative paths from these script files on disk. The other HostContainer serves razor scripts from string templates… Let’s start with a very simple template that displays some simple expressions, some code blocks and demonstrates rendering some data from contextual data that you pass to the template in the form of a ‘context’. Here’s a simple Razor template: @using System.Reflection Hello @Context.FirstName! Your entry was entered on: @Context.Entered @{ // Code block: Update the host Windows Form passed in through the context Context.WinForm.Text = "Hello World from Razor at " + DateTime.Now.ToString(); } AppDomain Id: @AppDomain.CurrentDomain.FriendlyName Assembly: @Assembly.GetExecutingAssembly().FullName Code based output: @{ // Write output with Response object from code string output = string.Empty; for (int i = 0; i < 10; i++) { output += i.ToString() + " "; } Response.Write(output); } Pretty easy to see what’s going on here. The only unusual thing in this code is the Context object which is an arbitrary object I’m passing from the host to the template by way of the template base class. I’m also displaying the current AppDomain and the executing Assembly name so you can see how compiling and running a template actually loads up new assemblies. Also note that as part of my context I’m passing a reference to the current Windows Form down to the template and changing the title from within the script. 
It’s a silly example, but it demonstrates two-way communication between host and template and back, which can be very powerful.

The easiest way to quickly render this template is to use the RazorEngine<TTemplateBase> class. The generic parameter specifies the template base class type that Razor uses internally as the base class for the class it generates from a template. The default implementation provided in my RazorHosting wrapper is RazorTemplateBase. Here’s a simple one that renders from a string and outputs a string:

var engine = new RazorEngine<RazorTemplateBase>();

// we can pass any object as context - here create a custom context
var context = new CustomContext()
{
    WinForm = this,
    FirstName = "Rick",
    Entered = DateTime.Now.AddDays(-10)
};

string output = engine.RenderTemplate(this.txtSource.Text,
                    new string[] { "System.Windows.Forms.dll" },
                    context);

if (output == null)
    this.txtResult.Text = "*** ERROR:\r\n" + engine.ErrorMessage;
else
    this.txtResult.Text = output;

Simple enough. This code renders a template from a string input and returns a result back as a string. It creates a custom context and passes that to the template, which can then access the Context’s properties. Note that anything passed as ‘context’ must be serializable (or MarshalByRefObject) – otherwise you get an exception when passing the reference over AppDomain boundaries (discussed later). Passing a context is optional, but it is a key feature in being able to share data between the host application and the template. Note that we use the Context object to access FirstName, Entered and even the host Windows Form object, which is used in the template to change the window caption from within the script!

In the code above all the work happens in the RenderTemplate method, which provides a variety of overloads to read and write to and from strings, files and TextReaders/Writers. Here’s another example that renders from a file input using a TextReader:

using (reader = new StreamReader("templates\\simple.csHtml", true))
{
    result = host.RenderTemplate(reader,
                 new string[] { "System.Windows.Forms.dll" },
                 this.CustomContext);
}

RenderTemplate() is fairly high level and it handles loading of the runtime, compiling into an assembly and rendering of the template. If you want more control you can use the lower level methods to control each step of the way, which is important for the HostContainers I’ll discuss later. Basically for those scenarios you want to separate out loading of the engine, compiling into an assembly and then rendering the template from the assembly. Why? So we can keep assemblies cached. In the code above a new assembly is created for each template rendered, which is inefficient and uses up resources. Depending on the size of your templates and how often you fire them you can chew through memory very quickly.
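The CustomContext class itself isn’t shown in the text above. A minimal sketch of what it might look like – assuming all the wrapper needs is a serializable object exposing the properties the sample template reads (FirstName, Entered, WinForm) – could be something along these lines:

using System;
using System.Windows.Forms;

// Sketch only: a serializable context object whose property names match the ones
// the sample template references (Context.FirstName, Context.Entered, Context.WinForm).
[Serializable]
public class CustomContext
{
    public string FirstName { get; set; }
    public DateTime Entered { get; set; }

    // The host form is passed down so the template can update its caption.
    // Form derives from MarshalByRefObject, so when the context crosses the
    // AppDomain boundary this member is marshaled by reference rather than
    // serialized by value.
    public Form WinForm { get; set; }
}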
A slightly lower-level approach, which lets you hold on to the compiled template assembly, is only a couple of extra steps:

// we can pass any object as context - here create a custom context
var context = new CustomContext()
{
    WinForm = this,
    FirstName = "Rick",
    Entered = DateTime.Now.AddDays(-10)
};

var engine = new RazorEngine<RazorTemplateBase>();

string assId = null;
using (StringReader reader = new StringReader(this.txtSource.Text))
{
    assId = engine.ParseAndCompileTemplate(new string[] { "System.Windows.Forms.dll" }, reader);
}

string output = engine.RenderTemplateFromAssembly(assId, context);

if (output == null)
    this.txtResult.Text = "*** ERROR:\r\n" + engine.ErrorMessage;
else
    this.txtResult.Text = output;

The difference here is that you can capture the assembly – or rather an Id to it – and potentially hold on to it to render again later, assuming the template hasn’t changed. The HostContainers take advantage of this feature to cache the assemblies based on certain criteria, like a filename and file time stamp or a string hash, that, if unchanged, indicate that an assembly can be reused. Note that ParseAndCompileTemplate returns an assembly Id rather than the assembly itself. This is done so that the assembly always stays in the host’s AppDomain and is not passed across AppDomain boundaries, which would cause load failures. We’ll talk more about this in a minute, but for now just realize that assembly references are stored in a list and are accessible by this Id to allow locating and re-executing the assembly based on that Id. Reuse of the assembly avoids recompilation overhead and creation of yet another assembly that loads into the current AppDomain. You can play around with several different versions of the above code in the main sample form.

Using Hosting Containers for more Control and Caching
The above examples simply render templates into assemblies each and every time they are executed. While this works and is even reasonably fast, it’s not terribly efficient. If you render templates more than once it would be nice if you could cache the generated assemblies, for example, to avoid re-compiling and creating a new assembly each time. Additionally it would be nice to optionally load template assemblies into a separate AppDomain, to be able to unload assemblies and also to protect your host application from scripting attacks with malicious template code.

Hosting containers also provide a wrapper around the RazorEngine<T> instance, a factory (which allows creation in separate AppDomains) and an easy way to start and stop the container ‘runtime’. The Razor Hosting samples provide two hosting containers: RazorFolderHostContainer and StringHostContainer. The folder host provides a simple runtime environment for a folder structure, similar to the way the ASP.NET runtime handles a virtual directory as its ‘application’ root. Templates are loaded from disk via relative paths and the resulting assemblies are cached unless the template on disk is changed. The string host also caches templates based on string hashes – if the same string is passed a second time a cached version of the assembly is used.

Here’s how HostContainers work. I’ll use the FolderHostContainer because it’s likely the most common way you’d use templates – from disk based templates that can be easily edited and maintained on disk.
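Before getting to the folder host, here is a rough sketch of the kind of string-hash caching just described. It uses only the engine members shown above (ParseAndCompileTemplate, RenderTemplateFromAssembly and ErrorMessage); the cache dictionary and the helper method itself are hypothetical, not part of the library:

using System;
using System.Collections.Generic;
using System.IO;
using System.Security.Cryptography;
using System.Text;

// Sketch: compile each distinct template string once and cache the resulting
// assembly Id keyed by a hash of the template text.
Dictionary<string, string> assemblyIdCache = new Dictionary<string, string>();

string RenderCached(RazorEngine<RazorTemplateBase> engine, string template, object context)
{
    string hash;
    using (SHA1 sha = SHA1.Create())
        hash = Convert.ToBase64String(sha.ComputeHash(Encoding.UTF8.GetBytes(template)));

    string assId;
    if (!assemblyIdCache.TryGetValue(hash, out assId))
    {
        using (StringReader reader = new StringReader(template))
            assId = engine.ParseAndCompileTemplate(new string[] { "System.Windows.Forms.dll" }, reader);

        if (assId == null)
            throw new InvalidOperationException(engine.ErrorMessage);

        assemblyIdCache[hash] = assId;
    }

    // Re-render from the cached assembly without recompiling
    return engine.RenderTemplateFromAssembly(assId, context);
}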
The first step is to create an instance of the RazorFolderHostContainer and keep it around somewhere (in the example it’s attached as a property to the Form):

RazorFolderHostContainer Host = new RazorFolderHostContainer();

public RazorFolderHostForm()
{
    InitializeComponent();

    // The base path for templates - templates are rendered with relative paths
    // based on this path.
    Host.TemplatePath = Path.Combine(Environment.CurrentDirectory, TemplateBaseFolder);

    // Add any assemblies you want to reference in your templates
    Host.ReferencedAssemblies.Add("System.Windows.Forms.dll");

    // Start up the host container
    Host.Start();
}

Next, anytime you want to render a template you can use simple code like this:

private void RenderTemplate(string fileName)
{
    // Pass the template path via the Context
    var relativePath = Utilities.GetRelativePath(fileName, Host.TemplatePath);

    if (!Host.RenderTemplate(relativePath, this.Context, Host.RenderingOutputFile))
    {
        MessageBox.Show("Error: " + Host.ErrorMessage);
        return;
    }

    this.webBrowser1.Navigate("file://" + Host.RenderingOutputFile);
}

You can also render the output to a string instead of to a file:

string result = Host.RenderTemplateToString(relativePath, context);

Finally, if you want to release the engine and shut down the hosting AppDomain you can simply do:

Host.Stop();

Stopping the AppDomain and restarting it (i.e. calling Stop() followed by Start()) is also a nice way to release all resources in the AppDomain. The folder-based host also supports partial rendering based on root-path-relative paths, with the same caching characteristics as the main templates. From within a template you can call out to a partial like this:

@RenderPartial(@"partials\PartialRendering.cshtml", Context)

where partials\PartialRendering.cshtml is relative to the template root folder. The folder host example lets you load up templates from disk and display the result in a Web Browser control, which demonstrates using Razor HTML output from templates that contain HTML syntax, which happens to be my target scenario for Html Help Builder.

The Razor Engine Wrapper Project
The project I created to wrap Razor hosting has a fair bit of code and a number of classes associated with it. Most of the components are used internally, and as you can see using the final RazorEngine<T> and HostContainer classes is pretty easy. The classes are extensible and I suspect developers will want to build more customized host containers for their applications. Host containers are the key to wrapping up all functionality – Engine, BaseTemplate, AppDomain Hosting, Caching etc – in a logical piece that is ready to be plugged into an application. When looking at the code there are a couple of core features provided:

Core Razor Engine Hosting
This is the core Razor hosting which provides the basics of loading a template, compiling it into an assembly and executing it. This is fairly straightforward, but without a host container that can cache assemblies based on some criteria, templates are recompiled and re-created each time, which is inefficient (although pretty fast). The base engine wrapper implementation also supports hosting the Razor runtime in a separate AppDomain for security and the ability to unload it on demand.

Host Containers
The engine hosting itself doesn’t provide any sort of ‘runtime’ service like picking up files from disk, caching assemblies and so forth. So my implementation provides two HostContainers: RazorFolderHostContainer and RazorStringHostContainer.
The FolderHost works off a base directory and loads templates based on relative paths (sort of like the ASP.NET runtime does off a virtual directory). The HostContainers also deal with caching of template assemblies – for the folder host the file date is tracked and checked for updates, and unless the template is changed a cached assembly is reused. The StringHostContainer similarly checks string hashes to figure out whether a particular string template was previously compiled and executed. The HostContainers also act as a simple startup environment and a single reference to easily store and reuse in an application.
TemplateBase Classes
The template base classes are the classes from which the Razor engine generates .NET code. A template is parsed into a class with an Execute() method, and that class derives from the template base type you specify. RazorEngine<TBaseTemplate> can receive this type and the HostContainers default to specific templates in their base implementations. Template classes are customizable to allow you to create templates that provide application specific features and interaction from the template to your host application.
How does the RazorEngine wrapper work?
You can browse the source code in the links above or in the repository or download the source, but I'll highlight some key features here. Here's part of the RazorEngine implementation that demonstrates the key code required to host the Razor runtime. The RazorEngine class is implemented as a generic class to reflect the template base class type: public class RazorEngine<TBaseTemplateType> : MarshalByRefObject where TBaseTemplateType : RazorTemplateBase The generic type is used internally to provide easier access to the template type and assignments on it as part of the template processing. The class also inherits MarshalByRefObject to allow execution over AppDomain boundaries – something that all the classes discussed here need to do since there is much interaction between the host and the template. The first two key methods deal with creating a template assembly: /// <summary> /// Creates an instance of the RazorHost with various options applied. /// Applies basic namespace imports and the name of the class to generate /// </summary> /// <param name="generatedNamespace"></param> /// <param name="generatedClass"></param> /// <returns></returns> protected RazorTemplateEngine CreateHost(string generatedNamespace, string generatedClass) { Type baseClassType = typeof(TBaseTemplateType); RazorEngineHost host = new RazorEngineHost(new CSharpRazorCodeLanguage()); host.DefaultBaseClass = baseClassType.FullName; host.DefaultClassName = generatedClass; host.DefaultNamespace = generatedNamespace; host.NamespaceImports.Add("System"); host.NamespaceImports.Add("System.Text"); host.NamespaceImports.Add("System.Collections.Generic"); host.NamespaceImports.Add("System.Linq"); host.NamespaceImports.Add("System.IO"); return new RazorTemplateEngine(host); } /// <summary> /// Parses and compiles a markup template into an assembly and returns /// an assembly name. The name is an ID that can be passed to /// ExecuteTemplateByAssembly which picks up a cached instance of the /// loaded assembly.
/// /// </summary> /// <param name="namespaceOfGeneratedClass">The namespace of the class to generate from the template</param> /// <param name="generatedClassName">The name of the class to generate from the template</param> /// <param name="ReferencedAssemblies">Any referenced assemblies by dll name only. Assemblies must be in execution path of host or in GAC.</param> /// <param name="templateSourceReader">Textreader that loads the template</param> /// <remarks> /// The actual assembly isn't returned here to allow for cross-AppDomain /// operation. If the assembly was returned it would fail for cross-AppDomain /// calls. /// </remarks> /// <returns>An assembly Id. The Assembly is cached in memory and can be used with RenderFromAssembly.</returns> public string ParseAndCompileTemplate( string namespaceOfGeneratedClass, string generatedClassName, string[] ReferencedAssemblies, TextReader templateSourceReader) { RazorTemplateEngine engine = CreateHost(namespaceOfGeneratedClass, generatedClassName); // Generate the template class as CodeDom GeneratorResults razorResults = engine.GenerateCode(templateSourceReader); // Create code from the codeDom and compile CSharpCodeProvider codeProvider = new CSharpCodeProvider(); CodeGeneratorOptions options = new CodeGeneratorOptions(); // Capture Code Generated as a string for error info // and debugging LastGeneratedCode = null; using (StringWriter writer = new StringWriter()) { codeProvider.GenerateCodeFromCompileUnit(razorResults.GeneratedCode, writer, options); LastGeneratedCode = writer.ToString(); } CompilerParameters compilerParameters = new CompilerParameters(ReferencedAssemblies); // Standard Assembly References compilerParameters.ReferencedAssemblies.Add("System.dll"); compilerParameters.ReferencedAssemblies.Add("System.Core.dll"); compilerParameters.ReferencedAssemblies.Add("Microsoft.CSharp.dll"); // dynamic support! // Also add the current assembly so RazorTemplateBase is available compilerParameters.ReferencedAssemblies.Add(Assembly.GetExecutingAssembly().CodeBase.Substring(8)); compilerParameters.GenerateInMemory = Configuration.CompileToMemory; if (!Configuration.CompileToMemory) compilerParameters.OutputAssembly = Path.Combine(Configuration.TempAssemblyPath, "_" + Guid.NewGuid().ToString("n") + ".dll"); CompilerResults compilerResults = codeProvider.CompileAssemblyFromDom(compilerParameters, razorResults.GeneratedCode); if (compilerResults.Errors.Count > 0) { var compileErrors = new StringBuilder(); foreach (System.CodeDom.Compiler.CompilerError compileError in compilerResults.Errors) compileErrors.Append(String.Format(Resources.LineX0TColX1TErrorX2RN, compileError.Line, compileError.Column, compileError.ErrorText)); this.SetError(compileErrors.ToString() + "\r\n" + LastGeneratedCode); return null; } AssemblyCache.Add(compilerResults.CompiledAssembly.FullName, compilerResults.CompiledAssembly); return compilerResults.CompiledAssembly.FullName; } Think of the internal CreateHost() method as setting up the assembly generated from each template. Each template compiles into a separate assembly. It sets up namespaces, and assembly references, the base class used and the name and namespace for the generated class. ParseAndCompileTemplate() then calls the CreateHost() method to receive the template engine generator which effectively generates a CodeDom from the template – the template is turned into .NET code. 
The code generated from our earlier example looks something like this: //------------------------------------------------------------------------------ // <auto-generated> // This code was generated by a tool. // Runtime Version:4.0.30319.1 // // Changes to this file may cause incorrect behavior and will be lost if // the code is regenerated. // </auto-generated> //------------------------------------------------------------------------------ namespace RazorTest { using System; using System.Text; using System.Collections.Generic; using System.Linq; using System.IO; using System.Reflection; public class RazorTemplate : RazorHosting.RazorTemplateBase { #line hidden public RazorTemplate() { } public override void Execute() { WriteLiteral("Hello "); Write(Context.FirstName); WriteLiteral("! Your entry was entered on: "); Write(Context.Entered); WriteLiteral("\r\n\r\n"); // Code block: Update the host Windows Form passed in through the context Context.WinForm.Text = "Hello World from Razor at " + DateTime.Now.ToString(); WriteLiteral("\r\nAppDomain Id:\r\n "); Write(AppDomain.CurrentDomain.FriendlyName); WriteLiteral("\r\n \r\nAssembly:\r\n "); Write(Assembly.GetExecutingAssembly().FullName); WriteLiteral("\r\n\r\nCode based output: \r\n"); // Write output with Response object from code string output = string.Empty; for (int i = 0; i < 10; i++) { output += i.ToString() + " "; } } } } Basically the template’s body is turned into code in an Execute method that is called. Internally the template’s Write method is fired to actually generate the output. Note that the class inherits from RazorTemplateBase which is the generic parameter I used to specify the base class when creating an instance in my RazorEngine host: var engine = new RazorEngine<RazorTemplateBase>(); This template class must be provided and it must implement an Execute() and Write() method. Beyond that you can create any class you chose and attach your own properties. My RazorTemplateBase class implementation is very simple: public class RazorTemplateBase : MarshalByRefObject, IDisposable { /// <summary> /// You can pass in a generic context object /// to use in your template code /// </summary> public dynamic Context { get; set; } /// <summary> /// Class that generates output. Currently ultra simple /// with only Response.Write() implementation. /// </summary> public RazorResponse Response { get; set; } public object HostContainer {get; set; } public object Engine { get; set; } public RazorTemplateBase() { Response = new RazorResponse(); } public virtual void Write(object value) { Response.Write(value); } public virtual void WriteLiteral(object value) { Response.Write(value); } /// <summary> /// Razor Parser implements this method /// </summary> public virtual void Execute() {} public virtual void Dispose() { if (Response != null) { Response.Dispose(); Response = null; } } } Razor fills in the Execute method when it generates its subclass and uses the Write() method to output content. As you can see I use a RazorResponse() class here to generate output. This isn’t necessary really, as you could use a StringBuilder or StringWriter() directly, but I prefer using Response object so I can extend the Response behavior as needed. 
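Because the engine is generic on the template base type, you can also derive your own base class to give templates application specific helpers. Here's a hypothetical sketch; the HtmlEncode() helper and ApplicationName property are illustrative assumptions, not part of the RazorHosting library:

// Hypothetical custom template base class. Only Execute(), Write() and WriteLiteral()
// are needed by the generated code (inherited from RazorTemplateBase);
// the members below are purely illustrative.
public class MyAppTemplateBase : RazorHosting.RazorTemplateBase
{
    // App-specific value a host could set on the instance before execution
    public string ApplicationName { get; set; }

    // Helper available inside templates as @HtmlEncode(...)
    public string HtmlEncode(object value)
    {
        if (value == null)
            return string.Empty;
        return System.Net.WebUtility.HtmlEncode(value.ToString());
    }
}

Using it is then just a matter of changing the generic parameter: var engine = new RazorEngine<MyAppTemplateBase>();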
The RazorResponse class is also very simple and merely acts as a wrapper around a TextWriter: public class RazorResponse : IDisposable { /// <summary> /// Internal text writer - default to StringWriter() /// </summary> public TextWriter Writer = new StringWriter(); public virtual void Write(object value) { Writer.Write(value); } public virtual void WriteLine(object value) { Write(value); Write("\r\n"); } public virtual void WriteFormat(string format, params object[] args) { Write(string.Format(format, args)); } public override string ToString() { return Writer.ToString(); } public virtual void Dispose() { Writer.Close(); } public virtual void SetTextWriter(TextWriter writer) { // Close original writer if (Writer != null) Writer.Close(); Writer = writer; } }
The Rendering Methods of RazorEngine
At this point I've talked about the assembly generation logic and the template implementation itself. What's left, once you've generated the assembly, is to execute it. The code to do this is handled in the various RenderXXX methods of the RazorEngine class. Let's look at the lowest-level one of these, RenderTemplateFromAssembly(), and a couple of internal support methods that handle instantiating and invoking the generated template: public string RenderTemplateFromAssembly( string assemblyId, string generatedNamespace, string generatedClass, object context, TextWriter outputWriter) { this.SetError(); Assembly generatedAssembly = AssemblyCache[assemblyId]; if (generatedAssembly == null) { this.SetError(Resources.PreviouslyCompiledAssemblyNotFound); return null; } string className = generatedNamespace + "." + generatedClass; Type type; try { type = generatedAssembly.GetType(className); } catch (Exception ex) { this.SetError(Resources.UnableToCreateType + className + ": " + ex.Message); return null; } // Start with empty non-error response (if we use a writer) string result = string.Empty; using(TBaseTemplateType instance = InstantiateTemplateClass(type)) { if (instance == null) return null; if (outputWriter != null) instance.Response.SetTextWriter(outputWriter); if (!InvokeTemplateInstance(instance, context)) return null; // Capture string output if implemented and return // otherwise null is returned if (outputWriter == null) result = instance.Response.ToString(); } return result; } protected virtual TBaseTemplateType InstantiateTemplateClass(Type type) { TBaseTemplateType instance = Activator.CreateInstance(type) as TBaseTemplateType; if (instance == null) { SetError(Resources.CouldnTActivateTypeInstance + type.FullName); return null; } instance.Engine = this; // If a HostContainer was set pass that to the template too instance.HostContainer = this.HostContainer; return instance; } /// <summary> /// Internally executes an instance of the template, /// captures errors on execution and returns true or false /// </summary> /// <param name="instance">An instance of the generated template</param> /// <returns>true or false - check ErrorMessage for errors</returns> protected virtual bool InvokeTemplateInstance(TBaseTemplateType instance, object context) { try { instance.Context = context; instance.Execute(); } catch (Exception ex) { this.SetError(Resources.TemplateExecutionError + ex.Message); return false; } finally { // Must make sure Response is closed instance.Response.Dispose(); } return true; } The RenderTemplateFromAssembly method basically requires the namespace and class to instantiate, and creates an instance of the class using InstantiateTemplateClass().
It then invokes the template with InvokeTemplateInstance(). These two methods are broken out because they are re-used by various other rendering methods and also to allow subclassing and providing additional configuration tasks to set properties and pass values to templates at execution time. In the default mode instantiation sets the Engine and HostContainer (discussed later) so the template can call back into the template engine, and the context is set when the template method is invoked. The various RenderXXX methods use similar code although they create the assemblies first. If you're after potentially caching assemblies, this is the method to call, and that's exactly what the two HostContainer classes do. More on that in a minute, but before we get into HostContainers let's talk about AppDomain hosting and the like.
Running Templates in their own AppDomain
With the RazorEngine class above, when a template is parsed into an assembly and executed, the assembly is created (in memory or on disk – you can configure that) and cached in the current AppDomain. In .NET, once an assembly has been loaded it can never be unloaded, so if you're loading lots of templates and at some point you want to release them, there's no way to do so. If however you load the assemblies in a separate AppDomain, that new AppDomain can be unloaded, and with it the assemblies loaded into it. In order to host the templates in a separate AppDomain the easiest thing to do is to run the entire RazorEngine in a separate AppDomain. Then all interaction occurs in the other AppDomain and no further changes have to be made. To facilitate this there is a RazorEngineFactory which has methods that can instantiate the RazorHost in a separate AppDomain as well as in the local AppDomain. The factory creates the remote instance and then hangs on to it to keep it alive, as well as providing methods to shut down the AppDomain and reload the engine. Sounds complicated, but cross-AppDomain invocation is actually fairly easy to implement. Here's some of the relevant code from the RazorEngineFactory class. Like the RazorEngine this class is generic and requires a template base type in the generic class name: public class RazorEngineFactory<TBaseTemplateType> where TBaseTemplateType : RazorTemplateBase Here are the key methods of interest: /// <summary> /// Creates an instance of the RazorHost in a new AppDomain. This /// version creates a static singleton that is cached and you /// can call UnloadRazorHostInAppDomain to unload it. /// </summary> /// <returns></returns> public static RazorEngine<TBaseTemplateType> CreateRazorHostInAppDomain() { if (Current == null) Current = new RazorEngineFactory<TBaseTemplateType>(); return Current.GetRazorHostInAppDomain(); } public static void UnloadRazorHostInAppDomain() { if (Current != null) Current.UnloadHost(); Current = null; } /// <summary> /// Instance method that creates a RazorHost in a new AppDomain. /// This method requires that you keep the Factory around in /// order to keep the AppDomain alive and be able to unload it. /// </summary> /// <returns></returns> public RazorEngine<TBaseTemplateType> GetRazorHostInAppDomain() { LocalAppDomain = CreateAppDomain(null); if (LocalAppDomain == null) return null; /// Create the instance inside of the new AppDomain /// Note: remote domain uses local EXE's AppBasePath!!!
RazorEngine<TBaseTemplateType> host = null; try { Assembly ass = Assembly.GetExecutingAssembly(); string AssemblyPath = ass.Location; host = (RazorEngine<TBaseTemplateType>) LocalAppDomain.CreateInstanceFrom(AssemblyPath, typeof(RazorEngine<TBaseTemplateType>).FullName).Unwrap(); } catch (Exception ex) { ErrorMessage = ex.Message; return null; } return host; } /// <summary> /// Internally creates a new AppDomain in which Razor templates can /// be run. /// </summary> /// <param name="appDomainName"></param> /// <returns></returns> private AppDomain CreateAppDomain(string appDomainName) { if (appDomainName == null) appDomainName = "RazorHost_" + Guid.NewGuid().ToString("n"); AppDomainSetup setup = new AppDomainSetup(); // *** Point at current directory setup.ApplicationBase = AppDomain.CurrentDomain.BaseDirectory; AppDomain localDomain = AppDomain.CreateDomain(appDomainName, null, setup); return localDomain; } /// <summary> /// Allow unloading of the created AppDomain to release resources /// All internal resources in the AppDomain are released including /// in memory compiled Razor assemblies. /// </summary> public void UnloadHost() { if (this.LocalAppDomain != null) { AppDomain.Unload(this.LocalAppDomain); this.LocalAppDomain = null; } } The static CreateRazorHostInAppDomain() is the key method that startup code usually calls. It uses the static Current singleton to hold an instance of itself, which is kept alive because it's static. GetRazorHostInAppDomain actually creates the cross-AppDomain instance: it first creates a new AppDomain and then loads the RazorEngine into it. The remote proxy instance is returned as the result of the method and can be used the same as a local instance. The code to run with a remote AppDomain is simple: private RazorEngine<RazorTemplateBase> CreateHost() { if (this.Host != null) return this.Host; // Use Static Methods - no error message if host doesn't load this.Host = RazorEngineFactory<RazorTemplateBase>.CreateRazorHostInAppDomain(); if (this.Host == null) { MessageBox.Show("Unable to load Razor Template Host", "Razor Hosting", MessageBoxButtons.OK, MessageBoxIcon.Exclamation); } return this.Host; } This code relies on a local reference to the Host which is kept around for the duration of the app (in this case a form reference). To use this you'd simply do: this.Host = CreateHost(); if (this.Host == null) return; string result = this.Host.RenderTemplate( this.txtSource.Text, new string[] { "System.Windows.Forms.dll", "Westwind.Utilities.dll" }, this.CustomContext); if (result == null) { MessageBox.Show(this.Host.ErrorMessage, "Template Execution Error", MessageBoxButtons.OK, MessageBoxIcon.Exclamation); return; } this.txtResult.Text = result; Now all templates run in a remote AppDomain and can be unloaded with simple code like this: RazorEngineFactory<RazorTemplateBase>.UnloadRazorHostInAppDomain(); this.Host = null;
One Step further – Providing a caching 'Runtime'
Once we can load templates in a remote AppDomain we can add some additional functionality like assembly caching based on application specific features. One of my typical scenarios is to render templates out of a scripts folder. All templates live in a folder and they change infrequently, so a folder-based host that can compile these templates once and then only recompile them if something changes would be ideal. Enter host containers, which are basically wrappers around the RazorEngine<T> and RazorEngineFactory<T>.
They provide additional logic for things like file caching based on changes on disk or string hashes for string based template inputs. The folder host also provides for partial rendering logic through a custom template base implementation. There’s a base implementation in RazorBaseHostContainer, which provides the basics for hosting a RazorEngine, which includes the ability to start and stop the engine, cache assemblies and add references: public abstract class RazorBaseHostContainer<TBaseTemplateType> : MarshalByRefObject where TBaseTemplateType : RazorTemplateBase, new() { public RazorBaseHostContainer() { UseAppDomain = true; GeneratedNamespace = "__RazorHost"; } /// <summary> /// Determines whether the Container hosts Razor /// in a separate AppDomain. Seperate AppDomain /// hosting allows unloading and releasing of /// resources. /// </summary> public bool UseAppDomain { get; set; } /// <summary> /// Base folder location where the AppDomain /// is hosted. By default uses the same folder /// as the host application. /// /// Determines where binary dependencies are /// found for assembly references. /// </summary> public string BaseBinaryFolder { get; set; } /// <summary> /// List of referenced assemblies as string values. /// Must be in GAC or in the current folder of the host app/ /// base BinaryFolder /// </summary> public List<string> ReferencedAssemblies = new List<string>(); /// <summary> /// Name of the generated namespace for template classes /// </summary> public string GeneratedNamespace {get; set; } /// <summary> /// Any error messages /// </summary> public string ErrorMessage { get; set; } /// <summary> /// Cached instance of the Host. Required to keep the /// reference to the host alive for multiple uses. /// </summary> public RazorEngine<TBaseTemplateType> Engine; /// <summary> /// Cached instance of the Host Factory - so we can unload /// the host and its associated AppDomain. /// </summary> protected RazorEngineFactory<TBaseTemplateType> EngineFactory; /// <summary> /// Keep track of each compiled assembly /// and when it was compiled. /// /// Use a hash of the string to identify string /// changes. /// </summary> protected Dictionary<int, CompiledAssemblyItem> LoadedAssemblies = new Dictionary<int, CompiledAssemblyItem>(); /// <summary> /// Call to start the Host running. Follow by a calls to RenderTemplate to /// render individual templates. Call Stop when done. /// </summary> /// <returns>true or false - check ErrorMessage on false </returns> public virtual bool Start() { if (Engine == null) { if (UseAppDomain) Engine = RazorEngineFactory<TBaseTemplateType>.CreateRazorHostInAppDomain(); else Engine = RazorEngineFactory<TBaseTemplateType>.CreateRazorHost(); Engine.Configuration.CompileToMemory = true; Engine.HostContainer = this; if (Engine == null) { this.ErrorMessage = EngineFactory.ErrorMessage; return false; } } return true; } /// <summary> /// Stops the Host and releases the host AppDomain and cached /// assemblies. /// </summary> /// <returns>true or false</returns> public bool Stop() { this.LoadedAssemblies.Clear(); RazorEngineFactory<RazorTemplateBase>.UnloadRazorHostInAppDomain(); this.Engine = null; return true; } … } This base class provides most of the mechanics to host the runtime, but no application specific implementation for rendering. There are rendering functions but they just call the engine directly and provide no caching – there’s no context to decide how to cache and reuse templates. 
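One piece that isn't listed in this post is the CompiledAssemblyItem class that the LoadedAssemblies dictionary above stores. A plausible minimal shape, inferred from the members used later in GetAssemblyFromFileAndCache() (the real class may carry more information), might look like this:

// Inferred sketch of the cache entry type used by the host containers.
public class CompiledAssemblyItem
{
    // Id returned by ParseAndCompileTemplate() - used to re-execute the cached assembly
    public string AssemblyId { get; set; }

    // When the template was compiled; compared against the file's last write time
    public DateTime CompileTimeUtc { get; set; }

    // Full path of the template file the assembly was generated from
    public string FileName { get; set; }

    // Sanitized class name generated for the template
    public string SafeClassName { get; set; }
}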
The key methods are Start and Stop and their main purpose is to start a new AppDomain (optionally) and shut it down when requested.
The RazorFolderHostContainer – Folder Based Runtime Hosting
Let's look at the more application specific RazorFolderHostContainer implementation, which is defined like this: public class RazorFolderHostContainer : RazorBaseHostContainer<RazorTemplateFolderHost> Note that a customized RazorTemplateFolderHost template class is used for this implementation, which supports partial rendering in the form of a RenderPartial() method that's available to templates. The folder host's features are:
- Render templates based on a Template Base Path (a 'virtual' directory, if you will)
- Cache compiled assemblies based on the relative path and file time stamp
- File changes on templates cause templates to be recompiled into new assemblies
- Support for partial rendering using base folder relative paths
As shown in the startup examples earlier, host containers require some startup code with a HostContainer tied to a persistent property (like a Form property): // The base path for templates - templates are rendered with relative paths // based on this path. HostContainer.TemplatePath = Path.Combine(Environment.CurrentDirectory, TemplateBaseFolder); // Default output rendering disk location HostContainer.RenderingOutputFile = Path.Combine(HostContainer.TemplatePath, "__Preview.htm"); // Add any assemblies you want referenced in your templates HostContainer.ReferencedAssemblies.Add("System.Windows.Forms.dll"); // Start up the host container HostContainer.Start(); Once that's done, you can render templates with the host container: // Pass the template path for the full filename selected with the OpenFile dialog // relativepath is: subdir\file.cshtml or file.cshtml or ..\file.cshtml var relativePath = Utilities.GetRelativePath(fileName, HostContainer.TemplatePath); if (!HostContainer.RenderTemplate(relativePath, Context, HostContainer.RenderingOutputFile)) { MessageBox.Show("Error: " + HostContainer.ErrorMessage); return; } webBrowser1.Navigate("file://" + HostContainer.RenderingOutputFile); The most critical task of the RazorFolderHostContainer implementation is to retrieve a template from disk, compile and cache it, and then deal with deciding whether subsequent requests need to re-compile the template or simply use a cached version. Internally GetAssemblyFromFileAndCache() handles this task: /// <summary> /// Internally checks if a cached assembly exists and if it does uses it /// else creates and compiles one. Returns an assembly Id to be /// used with the LoadedAssembly list.
/// </summary> /// <param name="relativePath"></param> /// <param name="context"></param> /// <returns></returns> protected virtual CompiledAssemblyItem GetAssemblyFromFileAndCache(string relativePath) { string fileName = Path.Combine(TemplatePath, relativePath).ToLower(); int fileNameHash = fileName.GetHashCode(); if (!File.Exists(fileName)) { this.SetError(Resources.TemplateFileDoesnTExist + fileName); return null; } CompiledAssemblyItem item = null; this.LoadedAssemblies.TryGetValue(fileNameHash, out item); string assemblyId = null; // Check for cached instance if (item != null) { var fileTime = File.GetLastWriteTimeUtc(fileName); if (fileTime <= item.CompileTimeUtc) assemblyId = item.AssemblyId; } else item = new CompiledAssemblyItem(); // No cached instance - create assembly and cache if (assemblyId == null) { string safeClassName = GetSafeClassName(fileName); StreamReader reader = null; try { reader = new StreamReader(fileName, true); } catch (Exception ex) { this.SetError(Resources.ErrorReadingTemplateFile + fileName); return null; } assemblyId = Engine.ParseAndCompileTemplate(this.ReferencedAssemblies.ToArray(), reader); // need to ensure reader is closed if (reader != null) reader.Close(); if (assemblyId == null) { this.SetError(Engine.ErrorMessage); return null; } item.AssemblyId = assemblyId; item.CompileTimeUtc = DateTime.UtcNow; item.FileName = fileName; item.SafeClassName = safeClassName; this.LoadedAssemblies[fileNameHash] = item; } return item; } This code uses the LoadedAssemblies dictionary, which is made up of structures that hold an assembly Id, a full filename and a file timestamp. LoadedAssemblies (defined on the base class shown earlier) is essentially a cache for compiled assemblies and they are identified by a hash id. In the case of files the hash is a GetHashCode() from the full filename of the template. The template is checked for in the cache and if found the file time stamp is checked. If the file on disk is newer than the cached compilation date the template is recompiled, otherwise the version in the cache is used. All the core compilation work is deferred to the RazorEngine<T> instance's ParseAndCompileTemplate(). The three rendering-specific methods are then rather simple implementations with just a few lines of code dealing with parameter and return value parsing: /// <summary> /// Renders a template to a TextWriter. Useful to write output into a stream or /// the Response object. Used for partial rendering. /// </summary> /// <param name="relativePath">Relative path to the file in the folder structure</param> /// <param name="context">Optional context object or null</param> /// <param name="writer">The textwriter to write output into</param> /// <returns></returns> public bool RenderTemplate(string relativePath, object context, TextWriter writer) { // Set configuration data that is to be passed to the template (any object) Engine.TemplatePerRequestConfigurationData = new RazorFolderHostTemplateConfiguration() { TemplatePath = Path.Combine(this.TemplatePath, relativePath), TemplateRelativePath = relativePath, }; CompiledAssemblyItem item = GetAssemblyFromFileAndCache(relativePath); if (item == null) { writer.Close(); return false; } try { // String result will be empty as output will be rendered into the // Response object's stream output.
However a null result denotes // an error string result = Engine.RenderTemplateFromAssembly(item.AssemblyId, context, writer); if (result == null) { this.SetError(Engine.ErrorMessage); return false; } } catch (Exception ex) { this.SetError(ex.Message); return false; } finally { writer.Close(); } return true; } /// <summary> /// Render a template from a source file on disk to a specified outputfile. /// </summary> /// <param name="relativePath">Relative path off the template root folder. Format: path/filename.cshtml</param> /// <param name="context">Any object that will be available in the template as a dynamic of this.Context</param> /// <param name="outputFile">Optional - output file where output is written to. If not specified the /// RenderingOutputFile property is used instead /// </param> /// <returns>true if rendering succeeds, false on failure - check ErrorMessage</returns> public bool RenderTemplate(string relativePath, object context, string outputFile) { if (outputFile == null) outputFile = RenderingOutputFile; try { using (StreamWriter writer = new StreamWriter(outputFile, false, Engine.Configuration.OutputEncoding, Engine.Configuration.StreamBufferSize)) { return RenderTemplate(relativePath, context, writer); } } catch (Exception ex) { this.SetError(ex.Message); return false; } return true; } /// <summary> /// Renders a template to string. Useful for RenderTemplate /// </summary> /// <param name="relativePath"></param> /// <param name="context"></param> /// <returns></returns> public string RenderTemplateToString(string relativePath, object context) { string result = string.Empty; try { using (StringWriter writer = new StringWriter()) { // String result will be empty as output will be rendered into the // Response object's stream output. However a null result denotes // an error if (!RenderTemplate(relativePath, context, writer)) { this.SetError(Engine.ErrorMessage); return null; } result = writer.ToString(); } } catch (Exception ex) { this.SetError(ex.Message); return null; } return result; } The idea is that you can create custom host container implementations that do exactly what you want fairly easily. Take a look at both the RazorFolderHostContainer and RazorStringHostContainer classes for the basic concepts you can use to create custom implementations. Notice also that you can set the engine’s PerRequestConfigurationData() from the host container: // Set configuration data that is to be passed to the template (any object) Engine.TemplatePerRequestConfigurationData = new RazorFolderHostTemplateConfiguration() { TemplatePath = Path.Combine(this.TemplatePath, relativePath), TemplateRelativePath = relativePath, }; which when set to a non-null value is passed to the Template’s InitializeTemplate() method. This method receives an object parameter which you can cast as needed: public override void InitializeTemplate(object configurationData) { // Pick up configuration data and stuff into Request object RazorFolderHostTemplateConfiguration config = configurationData as RazorFolderHostTemplateConfiguration; this.Request.TemplatePath = config.TemplatePath; this.Request.TemplateRelativePath = config.TemplateRelativePath; } With this data you can then configure any custom properties or objects on your main template class. It’s an easy way to pass data from the HostContainer all the way down into the template. The type you use is of type object so you have to cast it yourself, and it must be serializable since it will likely run in a separate AppDomain. 
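For completeness, here's a minimal sketch of what such a per-request configuration class might look like. The two properties come from the object initializer shown above; the [Serializable] attribute reflects the requirement that the object can cross into the template's AppDomain, and the actual RazorFolderHostTemplateConfiguration class may contain more:

// Hypothetical minimal shape of the per-request configuration object.
[Serializable]
public class RazorFolderHostTemplateConfiguration
{
    // Full path to the template file on disk
    public string TemplatePath { get; set; }

    // Path relative to the host container's template root folder
    public string TemplateRelativePath { get; set; }
}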
This might seem like an ugly way to pass data around – normally I'd use an event delegate to call back from the engine to the host, but since this is running over AppDomain boundaries events get really tricky, and passing a template instance back up into the host over AppDomain boundaries doesn't work due to serialization issues. So it's easier to pass the data from the host down into the template using this rather clumsy approach of set and forward. It's ugly, but it's something that can be hidden in the host container implementation as I've done here. It's also not something you have to do in every implementation, so this is kind of an edge case, but I know I'll need to pass a bunch of data in some of my applications and this will be the easiest way to do so.
Summing Up
Hosting the Razor runtime is something I got jazzed up about quite a bit because I have an immediate need for this type of templating/merging/scripting capability in an application I'm working on. I've also been using templating in many apps and it's always been a pain to deal with. The Razor engine makes this whole experience a lot cleaner and more lightweight, and with these wrappers I can now plug .NET based templating into my code literally with a few lines of code. That's something to cheer about… I hope some of you will find this useful as well…
Resources
The examples and code require that you download the Razor runtimes. Projects are for Visual Studio 2010 running on .NET 4.0.
- Platform Installer 3.0 (install WebMatrix or MVC 3 for Razor Runtimes)
- Latest Code in Subversion Repository
- Download Snapshot of the Code
- Documentation (CHM Help File)
© Rick Strahl, West Wind Technologies, 2005-2010. Posted in ASP.NET  .NET
