Search Results

Search found 5377 results on 216 pages for 'explicit cast operator'.

  • Can I access type int (*)[] with [][]?

    - by Framester
    Hi, coming from the question "What does (int (*)[])var1 stand for?", I tried to access the result of the cast like a multidimensional array, but I get the following error: "assignment from incompatible pointer type", followed by a segmentation fault. I also tried some other variations, but none of them worked. How can I access the elements in var1 in the function example directly? Thank you!

        #include <stdio.h>
        #include <stdlib.h>

        int i(int n, int m, int var1[n][m]) {
            var1[0][0] = 5;
            return var1[0][0];
        }

        int example() {
            int *var1 = malloc(100);                // works
            int var2;
            var2 = i(10, 10, (int (*)[])var1);
            printf("var2=%i", var2);                // doesn't work I

            int *var3;
            var3 = (int (*)[])var1;                 // "assignment from incompatible pointer type"
            printf("var3[0][0]=%i", var3[0][0]);    // doesn't work II

            int *var4;
            var4 = var1;
            printf("var4[0][0]=%i", var4[0][0]);    // error: subscripted value is neither array nor pointer
                                                    // doesn't work III

            int **var5;
            var5 = var1;
            printf("var5[0][0]=%i", var5[0][0]);    // assignment from incompatible pointer type
            return(1);
        }

        int main() {
            int a;
            a = example();
            return(1);
        }

    Read the article

  • Is this the best way to grab common elements from a Hash of arrays?

    - by Hulihan Applications
    I'm trying to get a common element from a group of arrays in Ruby. Normally you can use the & operator to compare two arrays, which returns the elements present in both. That's all good until you need the common elements of more than two arrays. In my case I want the common elements of an unknown, dynamic number of arrays stored in a hash, so I had to resort to Ruby's eval() method, which executes a string as actual code. Here's the function I wrote:

        def get_common_elements_for_hash_of_arrays(hash)
          # get an array of common elements contained in a hash of arrays, for every array in the hash, e.g.:
          #   ["1","2","3"] & ["2","4","5"] & ["2","5","6"]  # => ["2"]
          #   eval("[\"1\",\"2\",\"3\"] & [\"2\",\"4\",\"5\"] & [\"2\",\"5\",\"6\"]")  # => ["2"]
          eval_string_array = Array.new # an array to store strings of arrays, ie: "[\"2\",\"5\",\"6\"]", which we will join with & to get all common elements
          hash.each do |key, array|
            eval_string_array << array.inspect
          end
          eval_string = eval_string_array.join(" & ") # create eval string delimited with & so we can get common values
          return eval(eval_string)
        end

        example_hash = { :item_0 => ["1","2","3"], :item_1 => ["2","4","5"], :item_2 => ["2","5","6"] }
        puts get_common_elements_for_hash_of_arrays(example_hash) # => 2

    This works, but I'm wondering... eval, really? Is this the best way to do it? Are there any other ways to accomplish this (besides a recursive function, of course)? If anyone has any suggestions, I'm all ears. Otherwise, feel free to use this code if you need to grab a common element from a group or hash of arrays; it can also easily be adapted to search an array of arrays.

    Read the article

  • List of Lists of different types

    - by themarshal
    One of the data structures in my current project requires that I store lists of various types (String, int, float, etc.). I need to be able to dynamically store any number of lists without knowing what types they'll be. I tried storing each list as an object, but I ran into problems trying to cast back into the appropriate type (it kept recognizing everything as a List<String>). For example:

        List<object> myLists = new List<object>();

        public static void Main(string args[])
        {
            // Create some lists...
            // Populate the lists...
            // Add the lists to myLists...

            for (int i = 0; i < myLists.Count; i++)
            {
                Console.WriteLine("{0} elements in list {1}", GetNumElements(i), i);
            }
        }

        public int GetNumElements(int index)
        {
            object o = myLists[index];
            if (o is List<int>)
                return (o as List<int>).Count;
            if (o is List<String>)                 // <-- Always true!?
                return (o as List<String>).Count;  // <-- Returning 0 for non-String Lists
            return -1;
        }

    Am I doing something incorrectly? Is there a better way to store a list of lists of various types, or is there a better way to determine if something is a list of a certain type?
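    At least for the element counts, there is a way to sidestep the per-type checks entirely: every List<T> implements the non-generic System.Collections.ICollection interface, which exposes Count regardless of the element type. A minimal sketch (the wrapper class and sample lists below are only illustrative):

        using System;
        using System.Collections;
        using System.Collections.Generic;

        class ListCountSketch
        {
            static readonly List<object> myLists = new List<object>
            {
                new List<int> { 1, 2, 3 },
                new List<string> { "a", "b" },
                new List<float> { 1.5f }
            };

            static void Main()
            {
                for (int i = 0; i < myLists.Count; i++)
                    Console.WriteLine("{0} elements in list {1}", GetNumElements(i), i);
            }

            // List<int>, List<string>, List<float>, ... all implement the non-generic
            // ICollection, so Count is available without knowing the element type.
            static int GetNumElements(int index)
            {
                var collection = myLists[index] as ICollection;
                return collection != null ? collection.Count : -1;
            }
        }

    For the type test itself, "o is List<int>" is only true for an actual List<int> instance and "o is List<String>" only for an actual List<String>, since they are distinct constructed types; the ICollection route simply avoids enumerating every possible element type.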

    Read the article

  • How to use JOIN using Hibernate's session.createSQLQuery()

    - by javauser71
    Hi all, I have two entities (tables), Employee and Project. An Employee can have multiple Projects; the Project table's CREATOR_ID field refers to the Employee table's ID field, and the Employee entity maintains a list of Projects. Using the EntityManager, the following query works fine:

        entityManager.createQuery("select e from EmployeeDTO e, ProjectDTO p where p.id = ?1 and p.creator.id=e.id");

    But since I have the LAZY association relationship, I get the error "Could not initialize proxy - no Session" if I try to access Project info from the Employee entity. This is expected, so I am using Hibernate's Session to create the query as shown below:

        Session session = HibernateUtil.getSessionFactory().openSession();
        org.hibernate.Query q = session.createSQLQuery(
                "SELECT E FROM EMPLOYEE_TAB E, PROJECT_TAB P WHERE P.ID = " + projectId + " AND P.CREATOR_ID = E.ID")
            .addEntity("EmployeeDTO ", EmployeeDTO.class)
            .addEntity("ProjectDTO", ProjectDTO.class);

    But I get an error like: "Column 'E' is either not in any table in the FROM list or appears within a join specification and is outside the scope of the join specification..." Can anyone suggest what the right JOIN syntax would be for such a case? If I use "SELECT * FROM EMPLOYEE_TAB E, ........", I get another error: "java.lang.ClassCastException: [Ljava.lang.Object; cannot be cast to com.im.server.dto.EmployeeDTO". Thanks in advance.

    Read the article

  • Cannot determine why pointer variable will not address elements in a string in this program?

    - by Smith Will Suffice
    I am attempting to use a pointer variable to access the elements of a string, but my code produces a compiler warning:

        #include <stdio.h>
        #define MAX 29

        char arrayI[250];
        char *ptr;

        int main(void)
        {
            ptr = arrayI;
            puts("Enter string to arrayI: up to 29 chars:\n");
            fgets(arrayI, MAX, stdin);

            printf("\n Now printing array by pointer:\n");
            printf("%s", *ptr);

            ptr = arrayI[1];    // (I set the pointer to the second array char element)
            printf("%c", *ptr); // Here is where I was wanting to use my pointer to
                                // point to individual array elements.
            return 0;
        }

    My compiler crieth: [Warning] assignment makes pointer from integer without a cast [enabled by default]. I do not see where my pointer was ever assigned an integer value. Could someone please explain why my attempt to use a pointer variable is failing? Thanks all!

    Read the article

  • How to convert System.Object that's really an int32[] to a double[] ?

    - by fs_tech
    Hello- I get data from a 3rd party API that just gives me back a System.Object, which I know to be a double[] under the covers. And to deal with that return type, I have found the code below to work wonderfully. However, I also get back some int[] arrays that are also masquerading as System.Object, specifically dates in the form YYYYMMDD (e.g. 20100310). The conversion to float fails, and it just says that the specified cast is not valid. Does anyone out there know how to make this work for integers?

        let oIsNull (obj : System.Object) = obj = null
        let oIsArray (obj : System.Object) = obj.GetType().IsArray

        let o2f (obj : System.Object) =
            let mutable arr = [|Double.NaN|]
            if (oIsNull obj = false) && (oIsArray obj = true) then
                let objArr = obj :?> obj[]
                let u = objArr.GetUpperBound(0)
                let floatArr : float[] = Array.zeroCreate (u + 1)
                for i in 0..u do
                    if objArr.[i] = null then
                        floatArr.[i] <- Double.NaN
                    else
                        let t = objArr.[i].GetType()
                        floatArr.[i] <- objArr.[i] :?> float
                        //else floatArr.[i] <- float objArr.[i]
                arr <- floatArr
            arr
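    The cast fails because of a general .NET unboxing rule rather than anything specific to this API: a boxed Int32 can only be unboxed as an Int32, so downcasting it to a double (:?> float in F#) throws an invalid-cast exception. A small C# illustration of the rule, and of Convert.ToDouble, which accepts either boxed type:

        using System;

        class UnboxSketch
        {
            static void Main()
            {
                object boxedDouble = 3.14;
                object boxedInt = 20100310;             // an int masquerading as System.Object

                double a = (double)boxedDouble;         // fine: unboxing the exact type
                // double b = (double)boxedInt;         // InvalidCastException: a boxed int is not a boxed double
                double c = Convert.ToDouble(boxedInt);  // works for any IConvertible (int, double, ...)

                Console.WriteLine("{0} {1}", a, c);
            }
        }

    In the F# function above the same idea would mean converting each element (for example with System.Convert.ToDouble) instead of downcasting it; treat that as a suggestion rather than a tested fix.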

    Read the article

  • Best Methods for Dynamically Creating New Objects

    - by frankV
    I'm looking for a way to dynamically create new class objects during the runtime of a program. So far what I've read leads me to believe it's not easy and is normally reserved for more advanced program requirements. What I've tried so far is this:

        // create a vector of type class
        vector<class_name> vect;

        // and use push_back (method 1)
        vect.push_back(*new Object);

        // or use for loop and [] operator (method 2)
        vect[i] = *new Object;

    Neither of these throws errors from the compiler, but I'm using ifstream to read data from a file and dynamically create the objects... the file read is taking in some weird data and occasionally reading a memory address, and it's obvious to me it's due to my use/misuse of the code snippet above. The file read code is as follows:

        // in main
        ifstream fileIn;
        fileIn.open( fileName.c_str() );

        // passes to a separate function along w/ vector
        loadObjects(fileIn, vect);

        void loadObjects (ifstream& is, vector<class_name>& Object)
        {
            int data1, data2, data3;
            int count = 0;
            string line;
            if( is.good() ){
                for (int i = 0; i < 4; i++) {
                    is >> data1 >> data2 >> data3;
                    if (data1 == 0) {
                        vect.push_back(*new Object(data2, data3) );
                    }
                }
            }
        }

    Read the article

  • Failing to use Array.Copy() in my WPF App

    - by Steven Wilson
    I am a C++ developer and recently started working on WPF. I am using Array.Copy() in my app and it looks like I am not able to get the desired result. In my C++ app I had done the following:

        static const signed char version[40] = {
            'A', 'U', 'D', 'I', 'E', 'N', 'C', 'E',  // name
             0,   0,   0,   0,   0,   0,   0,   0,   // reserved, firmware size
             0,   0,   0,   0,   0,   0,   0,   0,   // board number
             0,   0,   0,   0,   0,   0,   0,   0,   // variant, version, serial
             0,   0,   0,   0,   0,   0,   0,   0    // date code, reserved
        };

        unsigned char sendBuf[256] = {};
        int memloc = 0;
        sendBuf[memloc++] = 0;
        sendBuf[memloc++] = 0;

        // fill in the audience header
        memcpy(sendBuf+memloc, version, 8); // the first 8 bytes
        memloc += 16;                       // the 8 copied, plus 8 reserved bytes

    I did a similar operation in my WPF (C#) app as follows:

        Byte[] sendBuf = new Byte[256];
        char[] version = {
            'A', 'U', 'D', 'I', 'E', 'N', 'C', 'E',  // name
            '0', '0', '0', '0', '0', '0', '0', '0',  // reserved, firmware size
            '0', '0', '0', '0', '0', '0', '0', '0',  // board number
            '0', '0', '0', '0', '0', '0', '0', '0',  // variant, version, serial
            '0', '0', '0', '0', '0', '0', '0', '0'   // date code, reserved
        };

        // fill in the address to write to -- 0
        sendBuf[memloc++] = 0;
        sendBuf[memloc++] = 0;

        // fill in the audience header
        Array.Copy(sendBuf + memloc, version, 8); // the first 8 bytes
        memloc += 16;

    But it throws an error at Array.Copy(sendBuf + memloc, version, 8): "Operator '+' cannot be applied to operands of type 'byte[]' and 'int'". How can I achieve this???? :) please help :)
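    Array.Copy() takes source and destination offsets as arguments, so there is no pointer arithmetic to translate. A sketch of one way the port could look, assuming the header really is meant to be raw bytes as in the C++ version (where the reserved slots are the number 0, not the character '0'):

        using System;

        class HeaderCopySketch
        {
            // Declared as byte[] so the element type matches sendBuf
            static readonly byte[] version =
            {
                (byte)'A', (byte)'U', (byte)'D', (byte)'I', (byte)'E', (byte)'N', (byte)'C', (byte)'E', // name
                0, 0, 0, 0, 0, 0, 0, 0,  // reserved, firmware size
                0, 0, 0, 0, 0, 0, 0, 0,  // board number
                0, 0, 0, 0, 0, 0, 0, 0,  // variant, version, serial
                0, 0, 0, 0, 0, 0, 0, 0   // date code, reserved
            };

            static void Main()
            {
                byte[] sendBuf = new byte[256];
                int memloc = 0;
                sendBuf[memloc++] = 0;
                sendBuf[memloc++] = 0;

                // Array.Copy(source, sourceIndex, destination, destinationIndex, length)
                // replaces memcpy(sendBuf + memloc, version, 8)
                Array.Copy(version, 0, sendBuf, memloc, 8); // the first 8 bytes
                memloc += 16;                               // the 8 copied, plus 8 reserved bytes

                Console.WriteLine(BitConverter.ToString(sendBuf, 0, memloc));
            }
        }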

    Read the article

  • Hosting the Razor Engine for Templating in Non-Web Applications

    - by Rick Strahl
    Microsoft’s new Razor HTML Rendering Engine that is currently shipping with ASP.NET MVC previews can be used outside of ASP.NET. Razor is an alternative view engine that can be used instead of the ASP.NET Page engine that currently works with ASP.NET WebForms and MVC. It provides a simpler and more readable markup syntax and is much more light weight in terms of functionality than the full blown WebForms Page engine, focusing only on features that are more along the lines of a pure view engine (or classic ASP!) with focus on expression and code rendering rather than a complex control/object model. Like the Page engine though, the parser understands .NET code syntax which can be embedded into templates, and behind the scenes the engine compiles markup and script code into an executing piece of .NET code in an assembly. Although it ships as part of the ASP.NET MVC and WebMatrix the Razor Engine itself is not directly dependent on ASP.NET or IIS or HTTP in any way. And although there are some markup and rendering features that are optimized for HTML based output generation, Razor is essentially a free standing template engine. And what’s really nice is that unlike the ASP.NET Runtime, Razor is fairly easy to host inside of your own non-Web applications to provide templating functionality. Templating in non-Web Applications? Yes please! So why might you host a template engine in your non-Web application? Template rendering is useful in many places and I have a number of applications that make heavy use of it. One of my applications – West Wind Html Help Builder - exclusively uses template based rendering to merge user supplied help text content into customizable and executable HTML markup templates that provide HTML output for CHM style HTML Help. This is an older product and it’s not actually using .NET at the moment – and this is one reason I’m looking at Razor for script hosting at the moment. For a few .NET applications though I’ve actually used the ASP.NET Runtime hosting to provide templating and mail merge style functionality and while that works reasonably well it’s a very heavy handed approach. It’s very resource intensive and has potential issues with versioning in various different versions of .NET. The generic implementation I created in the article above requires a lot of fix up to mimic an HTTP request in a non-HTTP environment and there are a lot of little things that have to happen to ensure that the ASP.NET runtime works properly most of it having nothing to do with the templating aspect but just satisfying ASP.NET’s requirements. The Razor Engine on the other hand is fairly light weight and completely decoupled from the ASP.NET runtime and the HTTP processing. Rather it’s a pure template engine whose sole purpose is to render text templates. Hosting this engine in your own applications can be accomplished with a reasonable amount of code (actually just a few lines with the tools I’m about to describe) and without having to fake HTTP requests. It’s also much lighter on resource usage and you can easily attach custom properties to your base template implementation to easily pass context from the parent application into templates all of which was rather complicated with ASP.NET runtime hosting. Installing the Razor Template Engine You can get Razor as part of the MVC 3 (RC and later) or Web Matrix. Both are available as downloadable components from the Web Platform Installer Version 3.0 (!important – V2 doesn’t show these components). 
If you already have that version of the WPI installed just fire it up. You can get the latest version of the Web Platform Installer from here: http://www.microsoft.com/web/gallery/install.aspx Once the platform Installer 3.0 is installed install either MVC 3 or ASP.NET Web Pages. Once installed you’ll find a System.Web.Razor assembly in C:\Program Files\Microsoft ASP.NET\ASP.NET Web Pages\v1.0\Assemblies\System.Web.Razor.dll which you can add as a reference to your project. Creating a Wrapper The basic Razor Hosting API is pretty simple and you can host Razor with a (large-ish) handful of lines of code. I’ll show the basics of it later in this article. However, if you want to customize the rendering and handle assembly and namespace includes for the markup as well as deal with text and file inputs as well as forcing Razor to run in a separate AppDomain so you can unload the code-generated assemblies and deal with assembly caching for re-used templates little more work is required to create something that is more easily reusable. For this reason I created a Razor Hosting wrapper project that combines a bunch of this functionality into an easy to use hosting class, a hosting factory that can load the engine in a separate AppDomain and a couple of hosting containers that provided folder based and string based caching for templates for an easily embeddable and reusable engine with easy to use syntax. If you just want the code and play with the samples and source go grab the latest code from the Subversion Repository at: http://www.west-wind.com:8080/svn/articles/trunk/RazorHosting/ or a snapshot from: http://www.west-wind.com/files/tools/RazorHosting.zip Getting Started Before I get into how hosting with Razor works, let’s take a look at how you can get up and running quickly with the wrapper classes provided. It only takes a few lines of code. The easiest way to use these Razor Hosting Wrappers is to use one of the two HostContainers provided. One is for hosting Razor scripts in a directory and rendering them as relative paths from these script files on disk. The other HostContainer serves razor scripts from string templates… Let’s start with a very simple template that displays some simple expressions, some code blocks and demonstrates rendering some data from contextual data that you pass to the template in the form of a ‘context’. Here’s a simple Razor template: @using System.Reflection Hello @Context.FirstName! Your entry was entered on: @Context.Entered @{ // Code block: Update the host Windows Form passed in through the context Context.WinForm.Text = "Hello World from Razor at " + DateTime.Now.ToString(); } AppDomain Id: @AppDomain.CurrentDomain.FriendlyName Assembly: @Assembly.GetExecutingAssembly().FullName Code based output: @{ // Write output with Response object from code string output = string.Empty; for (int i = 0; i < 10; i++) { output += i.ToString() + " "; } Response.Write(output); } Pretty easy to see what’s going on here. The only unusual thing in this code is the Context object which is an arbitrary object I’m passing from the host to the template by way of the template base class. I’m also displaying the current AppDomain and the executing Assembly name so you can see how compiling and running a template actually loads up new assemblies. Also note that as part of my context I’m passing a reference to the current Windows Form down to the template and changing the title from within the script. 
It’s a silly example, but it demonstrates two-way communication between host and template and back which can be very powerful. The easiest way to quickly render this template is to use the RazorEngine<TTemplateBase> class. The generic parameter specifies a template base class type that is used by Razor internally to generate the class it generates from a template. The default implementation provided in my RazorHosting wrapper is RazorTemplateBase. Here’s a simple one that renders from a string and outputs a string: var engine = new RazorEngine<RazorTemplateBase>(); // we can pass any object as context - here create a custom context var context = new CustomContext() { WinForm = this, FirstName = "Rick", Entered = DateTime.Now.AddDays(-10) }; string output = engine.RenderTemplate(this.txtSource.Text new string[] { "System.Windows.Forms.dll" }, context); if (output == null) this.txtResult.Text = "*** ERROR:\r\n" + engine.ErrorMessage; else this.txtResult.Text = output; Simple enough. This code renders a template from a string input and returns a result back as a string. It  creates a custom context and passes that to the template which can then access the Context’s properties. Note that anything passed as ‘context’ must be serializable (or MarshalByRefObject) – otherwise you get an exception when passing the reference over AppDomain boundaries (discussed later). Passing a context is optional, but is a key feature in being able to share data between the host application and the template. Note that we use the Context object to access FirstName, Entered and even the host Windows Form object which is used in the template to change the Window caption from within the script! In the code above all the work happens in the RenderTemplate method which provide a variety of overloads to read and write to and from strings, files and TextReaders/Writers. Here’s another example that renders from a file input using a TextReader: using (reader = new StreamReader("templates\\simple.csHtml", true)) { result = host.RenderTemplate(reader, new string[] { "System.Windows.Forms.dll" }, this.CustomContext); } RenderTemplate() is fairly high level and it handles loading of the runtime, compiling into an assembly and rendering of the template. If you want more control you can use the lower level methods to control each step of the way which is important for the HostContainers I’ll discuss later. Basically for those scenarios you want to separate out loading of the engine, compiling into an assembly and then rendering the template from the assembly. Why? So we can keep assemblies cached. In the code above a new assembly is created for each template rendered which is inefficient and uses up resources. Depending on the size of your templates and how often you fire them you can chew through memory very quickly. 
This slightly lower-level approach is only a couple of extra steps: // we can pass any object as context - here create a custom context var context = new CustomContext() { WinForm = this, FirstName = "Rick", Entered = DateTime.Now.AddDays(-10) }; var engine = new RazorEngine<RazorTemplateBase>(); string assId = null; using (StringReader reader = new StringReader(this.txtSource.Text)) { assId = engine.ParseAndCompileTemplate(new string[] { "System.Windows.Forms.dll" }, reader); } string output = engine.RenderTemplateFromAssembly(assId, context); if (output == null) this.txtResult.Text = "*** ERROR:\r\n" + engine.ErrorMessage; else this.txtResult.Text = output; The difference here is that you can capture the assembly – or rather an Id to it – and potentially hold on to it to render again later assuming the template hasn’t changed. The HostContainers take advantage of this feature to cache the assemblies based on certain criteria, like a filename and file time stamp or a string hash, that if unchanged indicate that an assembly can be reused. Note that ParseAndCompileTemplate returns an assembly Id rather than the assembly itself. This is done so that the assembly always stays in the host’s AppDomain and is not passed across AppDomain boundaries which would cause load failures. We’ll talk more about this in a minute but for now just realize that assembly references are stored in a list and are accessible by this ID to allow locating and re-executing of the assembly based on that id. Reuse of the assembly avoids recompilation overhead and creation of yet another assembly that loads into the current AppDomain. You can play around with several different versions of the above code in the main sample form.

Using Hosting Containers for more Control and Caching

The above examples simply render templates into assemblies each and every time they are executed. While this works and is even reasonably fast, it’s not terribly efficient. If you render templates more than once it would be nice if you could cache the generated assemblies, for example to avoid re-compiling and creating a new assembly each time. Additionally it would be nice to optionally load template assemblies into a separate AppDomain, to be able to unload assemblies and also to protect your host application from scripting attacks with malicious template code. Hosting containers also provide a wrapper around the RazorEngine<T> instance, a factory (which allows creation in separate AppDomains) and an easy way to start and stop the container ‘runtime’. The Razor Hosting samples provide two hosting containers: RazorFolderHostContainer and StringHostContainer. The folder host provides a simple runtime environment for a folder structure, similar to the way the ASP.NET runtime handles a virtual directory as its ‘application’ root. Templates are loaded from disk in relative paths and the resulting assemblies are cached unless the template on disk is changed. The string host also caches templates based on string hashes – if the same string is passed a second time a cached version of the assembly is used. Here’s how HostContainers work. I’ll use the FolderHostContainer because it’s likely the most common way you’d use templates – from disk based templates that can be easily edited and maintained on disk.
The first step is to create an instance of it and keep it around somewhere (in the example it’s attached as a property to the Form): RazorFolderHostContainer Host = new RazorFolderHostContainer(); public RazorFolderHostForm() { InitializeComponent(); // The base path for templates - templates are rendered with relative paths // based on this path. Host.TemplatePath = Path.Combine(Environment.CurrentDirectory, TemplateBaseFolder); // Add any assemblies you want reference in your templates Host.ReferencedAssemblies.Add("System.Windows.Forms.dll"); // Start up the host container Host.Start(); } Next anytime you want to render a template you can use simple code like this: private void RenderTemplate(string fileName) { // Pass the template path via the Context var relativePath = Utilities.GetRelativePath(fileName, Host.TemplatePath); if (!Host.RenderTemplate(relativePath, this.Context, Host.RenderingOutputFile)) { MessageBox.Show("Error: " + Host.ErrorMessage); return; } this.webBrowser1.Navigate("file://" + Host.RenderingOutputFile); } You can also render the output to a string instead of to a file: string result = Host.RenderTemplateToString(relativePath,context); Finally if you want to release the engine and shut down the hosting AppDomain you can simply do: Host.Stop(); Stopping the AppDomain and restarting it (ie. calling Stop(); followed by Start()) is also a nice way to release all resources in the AppDomain. The FolderBased domain also supports partial Rendering based on root path based relative paths with the same caching characteristics as the main templates. From within a template you can call out to a partial like this: @RenderPartial(@"partials\PartialRendering.cshtml", Context) where partials\PartialRendering.cshtml is a relative to the template root folder. The folder host example lets you load up templates from disk and display the result in a Web Browser control which demonstrates using Razor HTML output from templates that contain HTML syntax which happens to me my target scenario for Html Help Builder.   The Razor Engine Wrapper Project The project I created to wrap Razor hosting has a fair bit of code and a number of classes associated with it. Most of the components are internally used and as you can see using the final RazorEngine<T> and HostContainer classes is pretty easy. The classes are extensible and I suspect developers will want to build more customized host containers for their applications. Host containers are the key to wrapping up all functionality – Engine, BaseTemplate, AppDomain Hosting, Caching etc in a logical piece that is ready to be plugged into an application. When looking at the code there are a couple of core features provided: Core Razor Engine Hosting This is the core Razor hosting which provides the basics of loading a template, compiling it into an assembly and executing it. This is fairly straightforward, but without a host container that can cache assemblies based on some criteria templates are recompiled and re-created each time which is inefficient (although pretty fast). The base engine wrapper implementation also supports hosting the Razor runtime in a separate AppDomain for security and the ability to unload it on demand. Host Containers The engine hosting itself doesn’t provide any sort of ‘runtime’ service like picking up files from disk, caching assemblies and so forth. So my implementation provides two HostContainers: RazorFolderHostContainer and RazorStringHostContainer. 
The FolderHost works off a base directory and loads templates based on relative paths (sort of like the ASP.NET runtime does off a virtual). The HostContainers also deal with caching of template assemblies – for the folder host the file date is tracked and checked for updates and unless the template is changed a cached assembly is reused. The StringHostContainer similiarily checks string hashes to figure out whether a particular string template was previously compiled and executed. The HostContainers also act as a simple startup environment and a single reference to easily store and reuse in an application. TemplateBase Classes The template base classes are the base classes that from which the Razor engine generates .NET code. A template is parsed into a class with an Execute() method and the class is based on this template type you can specify. RazorEngine<TBaseTemplate> can receive this type and the HostContainers default to specific templates in their base implementations. Template classes are customizable to allow you to create templates that provide application specific features and interaction from the template to your host application. How does the RazorEngine wrapper work? You can browse the source code in the links above or in the repository or download the source, but I’ll highlight some key features here. Here’s part of the RazorEngine implementation that can be used to host the runtime and that demonstrates the key code required to host the Razor runtime. The RazorEngine class is implemented as a generic class to reflect the Template base class type: public class RazorEngine<TBaseTemplateType> : MarshalByRefObject where TBaseTemplateType : RazorTemplateBase The generic type is used to internally provide easier access to the template type and assignments on it as part of the template processing. The class also inherits MarshalByRefObject to allow execution over AppDomain boundaries – something that all the classes discussed here need to do since there is much interaction between the host and the template. The first two key methods deal with creating a template assembly: /// <summary> /// Creates an instance of the RazorHost with various options applied. /// Applies basic namespace imports and the name of the class to generate /// </summary> /// <param name="generatedNamespace"></param> /// <param name="generatedClass"></param> /// <returns></returns> protected RazorTemplateEngine CreateHost(string generatedNamespace, string generatedClass) { Type baseClassType = typeof(TBaseTemplateType); RazorEngineHost host = new RazorEngineHost(new CSharpRazorCodeLanguage()); host.DefaultBaseClass = baseClassType.FullName; host.DefaultClassName = generatedClass; host.DefaultNamespace = generatedNamespace; host.NamespaceImports.Add("System"); host.NamespaceImports.Add("System.Text"); host.NamespaceImports.Add("System.Collections.Generic"); host.NamespaceImports.Add("System.Linq"); host.NamespaceImports.Add("System.IO"); return new RazorTemplateEngine(host); } /// <summary> /// Parses and compiles a markup template into an assembly and returns /// an assembly name. The name is an ID that can be passed to /// ExecuteTemplateByAssembly which picks up a cached instance of the /// loaded assembly. 
/// /// </summary> /// <param name="namespaceOfGeneratedClass">The namespace of the class to generate from the template</param> /// <param name="generatedClassName">The name of the class to generate from the template</param> /// <param name="ReferencedAssemblies">Any referenced assemblies by dll name only. Assemblies must be in execution path of host or in GAC.</param> /// <param name="templateSourceReader">Textreader that loads the template</param> /// <remarks> /// The actual assembly isn't returned here to allow for cross-AppDomain /// operation. If the assembly was returned it would fail for cross-AppDomain /// calls. /// </remarks> /// <returns>An assembly Id. The Assembly is cached in memory and can be used with RenderFromAssembly.</returns> public string ParseAndCompileTemplate( string namespaceOfGeneratedClass, string generatedClassName, string[] ReferencedAssemblies, TextReader templateSourceReader) { RazorTemplateEngine engine = CreateHost(namespaceOfGeneratedClass, generatedClassName); // Generate the template class as CodeDom GeneratorResults razorResults = engine.GenerateCode(templateSourceReader); // Create code from the codeDom and compile CSharpCodeProvider codeProvider = new CSharpCodeProvider(); CodeGeneratorOptions options = new CodeGeneratorOptions(); // Capture Code Generated as a string for error info // and debugging LastGeneratedCode = null; using (StringWriter writer = new StringWriter()) { codeProvider.GenerateCodeFromCompileUnit(razorResults.GeneratedCode, writer, options); LastGeneratedCode = writer.ToString(); } CompilerParameters compilerParameters = new CompilerParameters(ReferencedAssemblies); // Standard Assembly References compilerParameters.ReferencedAssemblies.Add("System.dll"); compilerParameters.ReferencedAssemblies.Add("System.Core.dll"); compilerParameters.ReferencedAssemblies.Add("Microsoft.CSharp.dll"); // dynamic support! // Also add the current assembly so RazorTemplateBase is available compilerParameters.ReferencedAssemblies.Add(Assembly.GetExecutingAssembly().CodeBase.Substring(8)); compilerParameters.GenerateInMemory = Configuration.CompileToMemory; if (!Configuration.CompileToMemory) compilerParameters.OutputAssembly = Path.Combine(Configuration.TempAssemblyPath, "_" + Guid.NewGuid().ToString("n") + ".dll"); CompilerResults compilerResults = codeProvider.CompileAssemblyFromDom(compilerParameters, razorResults.GeneratedCode); if (compilerResults.Errors.Count > 0) { var compileErrors = new StringBuilder(); foreach (System.CodeDom.Compiler.CompilerError compileError in compilerResults.Errors) compileErrors.Append(String.Format(Resources.LineX0TColX1TErrorX2RN, compileError.Line, compileError.Column, compileError.ErrorText)); this.SetError(compileErrors.ToString() + "\r\n" + LastGeneratedCode); return null; } AssemblyCache.Add(compilerResults.CompiledAssembly.FullName, compilerResults.CompiledAssembly); return compilerResults.CompiledAssembly.FullName; } Think of the internal CreateHost() method as setting up the assembly generated from each template. Each template compiles into a separate assembly. It sets up namespaces, and assembly references, the base class used and the name and namespace for the generated class. ParseAndCompileTemplate() then calls the CreateHost() method to receive the template engine generator which effectively generates a CodeDom from the template – the template is turned into .NET code. 
The code generated from our earlier example looks something like this: //------------------------------------------------------------------------------ // <auto-generated> // This code was generated by a tool. // Runtime Version:4.0.30319.1 // // Changes to this file may cause incorrect behavior and will be lost if // the code is regenerated. // </auto-generated> //------------------------------------------------------------------------------ namespace RazorTest { using System; using System.Text; using System.Collections.Generic; using System.Linq; using System.IO; using System.Reflection; public class RazorTemplate : RazorHosting.RazorTemplateBase { #line hidden public RazorTemplate() { } public override void Execute() { WriteLiteral("Hello "); Write(Context.FirstName); WriteLiteral("! Your entry was entered on: "); Write(Context.Entered); WriteLiteral("\r\n\r\n"); // Code block: Update the host Windows Form passed in through the context Context.WinForm.Text = "Hello World from Razor at " + DateTime.Now.ToString(); WriteLiteral("\r\nAppDomain Id:\r\n "); Write(AppDomain.CurrentDomain.FriendlyName); WriteLiteral("\r\n \r\nAssembly:\r\n "); Write(Assembly.GetExecutingAssembly().FullName); WriteLiteral("\r\n\r\nCode based output: \r\n"); // Write output with Response object from code string output = string.Empty; for (int i = 0; i < 10; i++) { output += i.ToString() + " "; } } } } Basically the template’s body is turned into code in an Execute method that is called. Internally the template’s Write method is fired to actually generate the output. Note that the class inherits from RazorTemplateBase which is the generic parameter I used to specify the base class when creating an instance in my RazorEngine host: var engine = new RazorEngine<RazorTemplateBase>(); This template class must be provided and it must implement an Execute() and Write() method. Beyond that you can create any class you chose and attach your own properties. My RazorTemplateBase class implementation is very simple: public class RazorTemplateBase : MarshalByRefObject, IDisposable { /// <summary> /// You can pass in a generic context object /// to use in your template code /// </summary> public dynamic Context { get; set; } /// <summary> /// Class that generates output. Currently ultra simple /// with only Response.Write() implementation. /// </summary> public RazorResponse Response { get; set; } public object HostContainer {get; set; } public object Engine { get; set; } public RazorTemplateBase() { Response = new RazorResponse(); } public virtual void Write(object value) { Response.Write(value); } public virtual void WriteLiteral(object value) { Response.Write(value); } /// <summary> /// Razor Parser implements this method /// </summary> public virtual void Execute() {} public virtual void Dispose() { if (Response != null) { Response.Dispose(); Response = null; } } } Razor fills in the Execute method when it generates its subclass and uses the Write() method to output content. As you can see I use a RazorResponse() class here to generate output. This isn’t necessary really, as you could use a StringBuilder or StringWriter() directly, but I prefer using Response object so I can extend the Response behavior as needed. 
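As a quick aside on what ‘extending the Response behavior’ might look like: the Write() methods on RazorResponse (shown next) are virtual, so a derived response class can post-process anything a template writes. This is only a sketch – the class name is made up, and a custom template base class would have to assign it in place of the default RazorResponse: public class UpperCaseResponse : RazorResponse { // Hypothetical example: transform rendered output before it reaches the underlying TextWriter. // Any cross-cutting concern (logging, encoding, ...) could hook in the same way. public override void Write(object value) { base.Write(value == null ? null : value.ToString().ToUpperInvariant()); } }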
The RazorResponse class is also very simple and merely acts as a wrapper around a TextWriter: public class RazorResponse : IDisposable { /// <summary> /// Internal text writer - default to StringWriter() /// </summary> public TextWriter Writer = new StringWriter(); public virtual void Write(object value) { Writer.Write(value); } public virtual void WriteLine(object value) { Write(value); Write("\r\n"); } public virtual void WriteFormat(string format, params object[] args) { Write(string.Format(format, args)); } public override string ToString() { return Writer.ToString(); } public virtual void Dispose() { Writer.Close(); } public virtual void SetTextWriter(TextWriter writer) { // Close original writer if (Writer != null) Writer.Close(); Writer = writer; } } The Rendering Methods of RazorEngine At this point I’ve talked about the assembly generation logic and the template implementation itself. What’s left is that once you’ve generated the assembly is to execute it. The code to do this is handled in the various RenderXXX methods of the RazorEngine class. Let’s look at the lowest level one of these which is RenderTemplateFromAssembly() and a couple of internal support methods that handle instantiating and invoking of the generated template method: public string RenderTemplateFromAssembly( string assemblyId, string generatedNamespace, string generatedClass, object context, TextWriter outputWriter) { this.SetError(); Assembly generatedAssembly = AssemblyCache[assemblyId]; if (generatedAssembly == null) { this.SetError(Resources.PreviouslyCompiledAssemblyNotFound); return null; } string className = generatedNamespace + "." + generatedClass; Type type; try { type = generatedAssembly.GetType(className); } catch (Exception ex) { this.SetError(Resources.UnableToCreateType + className + ": " + ex.Message); return null; } // Start with empty non-error response (if we use a writer) string result = string.Empty; using(TBaseTemplateType instance = InstantiateTemplateClass(type)) { if (instance == null) return null; if (outputWriter != null) instance.Response.SetTextWriter(outputWriter); if (!InvokeTemplateInstance(instance, context)) return null; // Capture string output if implemented and return // otherwise null is returned if (outputWriter == null) result = instance.Response.ToString(); } return result; } protected virtual TBaseTemplateType InstantiateTemplateClass(Type type) { TBaseTemplateType instance = Activator.CreateInstance(type) as TBaseTemplateType; if (instance == null) { SetError(Resources.CouldnTActivateTypeInstance + type.FullName); return null; } instance.Engine = this; // If a HostContainer was set pass that to the template too instance.HostContainer = this.HostContainer; return instance; } /// <summary> /// Internally executes an instance of the template, /// captures errors on execution and returns true or false /// </summary> /// <param name="instance">An instance of the generated template</param> /// <returns>true or false - check ErrorMessage for errors</returns> protected virtual bool InvokeTemplateInstance(TBaseTemplateType instance, object context) { try { instance.Context = context; instance.Execute(); } catch (Exception ex) { this.SetError(Resources.TemplateExecutionError + ex.Message); return false; } finally { // Must make sure Response is closed instance.Response.Dispose(); } return true; } The RenderTemplateFromAssembly method basically requires the namespace and class to instantate and creates an instance of the class using InstantiateTemplateClass(). 
It then invokes the method with InvokeTemplateInstance(). These two methods are broken out because they are re-used by various other rendering methods and also to allow subclassing and providing additional configuration tasks to set properties and pass values to templates at execution time. In the default mode instantiation sets the Engine and HostContainer (discussed later) so the template can call back into the template engine, and the context is set when the template method is invoked. The various RenderXXX methods use similar code although they create the assemblies first. If you’re after potentially cashing assemblies the method is the one to call and that’s exactly what the two HostContainer classes do. More on that in a minute, but before we get into HostContainers let’s talk about AppDomain hosting and the like. Running Templates in their own AppDomain With the RazorEngine class above, when a template is parsed into an assembly and executed the assembly is created (in memory or on disk – you can configure that) and cached in the current AppDomain. In .NET once an assembly has been loaded it can never be unloaded so if you’re loading lots of templates and at some time you want to release them there’s no way to do so. If however you load the assemblies in a separate AppDomain that new AppDomain can be unloaded and the assemblies loaded in it with it. In order to host the templates in a separate AppDomain the easiest thing to do is to run the entire RazorEngine in a separate AppDomain. Then all interaction occurs in the other AppDomain and no further changes have to be made. To facilitate this there is a RazorEngineFactory which has methods that can instantiate the RazorHost in a separate AppDomain as well as in the local AppDomain. The host creates the remote instance and then hangs on to it to keep it alive as well as providing methods to shut down the AppDomain and reload the engine. Sounds complicated but cross-AppDomain invocation is actually fairly easy to implement. Here’s some of the relevant code from the RazorEngineFactory class. Like the RazorEngine this class is generic and requires a template base type in the generic class name: public class RazorEngineFactory<TBaseTemplateType> where TBaseTemplateType : RazorTemplateBase Here are the key methods of interest: /// <summary> /// Creates an instance of the RazorHost in a new AppDomain. This /// version creates a static singleton that that is cached and you /// can call UnloadRazorHostInAppDomain to unload it. /// </summary> /// <returns></returns> public static RazorEngine<TBaseTemplateType> CreateRazorHostInAppDomain() { if (Current == null) Current = new RazorEngineFactory<TBaseTemplateType>(); return Current.GetRazorHostInAppDomain(); } public static void UnloadRazorHostInAppDomain() { if (Current != null) Current.UnloadHost(); Current = null; } /// <summary> /// Instance method that creates a RazorHost in a new AppDomain. /// This method requires that you keep the Factory around in /// order to keep the AppDomain alive and be able to unload it. /// </summary> /// <returns></returns> public RazorEngine<TBaseTemplateType> GetRazorHostInAppDomain() { LocalAppDomain = CreateAppDomain(null); if (LocalAppDomain == null) return null; /// Create the instance inside of the new AppDomain /// Note: remote domain uses local EXE's AppBasePath!!! 
RazorEngine<TBaseTemplateType> host = null; try { Assembly ass = Assembly.GetExecutingAssembly(); string AssemblyPath = ass.Location; host = (RazorEngine<TBaseTemplateType>) LocalAppDomain.CreateInstanceFrom(AssemblyPath, typeof(RazorEngine<TBaseTemplateType>).FullName).Unwrap(); } catch (Exception ex) { ErrorMessage = ex.Message; return null; } return host; } /// <summary> /// Internally creates a new AppDomain in which Razor templates can /// be run. /// </summary> /// <param name="appDomainName"></param> /// <returns></returns> private AppDomain CreateAppDomain(string appDomainName) { if (appDomainName == null) appDomainName = "RazorHost_" + Guid.NewGuid().ToString("n"); AppDomainSetup setup = new AppDomainSetup(); // *** Point at current directory setup.ApplicationBase = AppDomain.CurrentDomain.BaseDirectory; AppDomain localDomain = AppDomain.CreateDomain(appDomainName, null, setup); return localDomain; } /// <summary> /// Allow unloading of the created AppDomain to release resources /// All internal resources in the AppDomain are released including /// in memory compiled Razor assemblies. /// </summary> public void UnloadHost() { if (this.LocalAppDomain != null) { AppDomain.Unload(this.LocalAppDomain); this.LocalAppDomain = null; } } The static CreateRazorHostInAppDomain() is the key method that startup code usually calls. It uses a Current singleton instance to an instance of itself that is created cross AppDomain and is kept alive because it’s static. GetRazorHostInAppDomain actually creates a cross-AppDomain instance which first creates a new AppDomain and then loads the RazorEngine into it. The remote Proxy instance is returned as a result to the method and can be used the same as a local instance. The code to run with a remote AppDomain is simple: private RazorEngine<RazorTemplateBase> CreateHost() { if (this.Host != null) return this.Host; // Use Static Methods - no error message if host doesn't load this.Host = RazorEngineFactory<RazorTemplateBase>.CreateRazorHostInAppDomain(); if (this.Host == null) { MessageBox.Show("Unable to load Razor Template Host", "Razor Hosting", MessageBoxButtons.OK, MessageBoxIcon.Exclamation); } return this.Host; } This code relies on a local reference of the Host which is kept around for the duration of the app (in this case a form reference). To use this you’d simply do: this.Host = CreateHost(); if (host == null) return; string result = host.RenderTemplate( this.txtSource.Text, new string[] { "System.Windows.Forms.dll", "Westwind.Utilities.dll" }, this.CustomContext); if (result == null) { MessageBox.Show(host.ErrorMessage, "Template Execution Error", MessageBoxButtons.OK, MessageBoxIcon.Exclamation); return; } this.txtResult.Text = result; Now all templates run in a remote AppDomain and can be unloaded with simple code like this: RazorEngineFactory<RazorTemplateBase>.UnloadRazorHostInAppDomain(); this.Host = null; One Step further – Providing a caching ‘Runtime’ Once we can load templates in a remote AppDomain we can add some additional functionality like assembly caching based on application specific features. One of my typical scenarios is to render templates out of a scripts folder. So all templates live in a folder and they change infrequently. So a Folder based host that can compile these templates once and then only recompile them if something changes would be ideal. Enter host containers which are basically wrappers around the RazorEngine<t> and RazorEngineFactory<t>. 
They provide additional logic for things like file caching based on changes on disk or string hashes for string based template inputs. The folder host also provides for partial rendering logic through a custom template base implementation. There’s a base implementation in RazorBaseHostContainer, which provides the basics for hosting a RazorEngine, which includes the ability to start and stop the engine, cache assemblies and add references: public abstract class RazorBaseHostContainer<TBaseTemplateType> : MarshalByRefObject where TBaseTemplateType : RazorTemplateBase, new() { public RazorBaseHostContainer() { UseAppDomain = true; GeneratedNamespace = "__RazorHost"; } /// <summary> /// Determines whether the Container hosts Razor /// in a separate AppDomain. Seperate AppDomain /// hosting allows unloading and releasing of /// resources. /// </summary> public bool UseAppDomain { get; set; } /// <summary> /// Base folder location where the AppDomain /// is hosted. By default uses the same folder /// as the host application. /// /// Determines where binary dependencies are /// found for assembly references. /// </summary> public string BaseBinaryFolder { get; set; } /// <summary> /// List of referenced assemblies as string values. /// Must be in GAC or in the current folder of the host app/ /// base BinaryFolder /// </summary> public List<string> ReferencedAssemblies = new List<string>(); /// <summary> /// Name of the generated namespace for template classes /// </summary> public string GeneratedNamespace {get; set; } /// <summary> /// Any error messages /// </summary> public string ErrorMessage { get; set; } /// <summary> /// Cached instance of the Host. Required to keep the /// reference to the host alive for multiple uses. /// </summary> public RazorEngine<TBaseTemplateType> Engine; /// <summary> /// Cached instance of the Host Factory - so we can unload /// the host and its associated AppDomain. /// </summary> protected RazorEngineFactory<TBaseTemplateType> EngineFactory; /// <summary> /// Keep track of each compiled assembly /// and when it was compiled. /// /// Use a hash of the string to identify string /// changes. /// </summary> protected Dictionary<int, CompiledAssemblyItem> LoadedAssemblies = new Dictionary<int, CompiledAssemblyItem>(); /// <summary> /// Call to start the Host running. Follow by a calls to RenderTemplate to /// render individual templates. Call Stop when done. /// </summary> /// <returns>true or false - check ErrorMessage on false </returns> public virtual bool Start() { if (Engine == null) { if (UseAppDomain) Engine = RazorEngineFactory<TBaseTemplateType>.CreateRazorHostInAppDomain(); else Engine = RazorEngineFactory<TBaseTemplateType>.CreateRazorHost(); Engine.Configuration.CompileToMemory = true; Engine.HostContainer = this; if (Engine == null) { this.ErrorMessage = EngineFactory.ErrorMessage; return false; } } return true; } /// <summary> /// Stops the Host and releases the host AppDomain and cached /// assemblies. /// </summary> /// <returns>true or false</returns> public bool Stop() { this.LoadedAssemblies.Clear(); RazorEngineFactory<RazorTemplateBase>.UnloadRazorHostInAppDomain(); this.Engine = null; return true; } … } This base class provides most of the mechanics to host the runtime, but no application specific implementation for rendering. There are rendering functions but they just call the engine directly and provide no caching – there’s no context to decide how to cache and reuse templates. 
The key methods are Start and Stop and their main purpose is to start a new AppDomain (optionally) and shut it down when requested. The RazorFolderHostContainer – Folder Based Runtime Hosting Let’s look at the more application specific RazorFolderHostContainer implementation which is defined like this: public class RazorFolderHostContainer : RazorBaseHostContainer<RazorTemplateFolderHost> Note that a customized RazorTemplateFolderHost class template is used for this implementation that supports partial rendering in form of a RenderPartial() method that’s available to templates. The folder host’s features are: Render templates based on a Template Base Path (a ‘virtual’ if you will) Cache compiled assemblies based on the relative path and file time stamp File changes on templates cause templates to be recompiled into new assemblies Support for partial rendering using base folder relative pathing As shown in the startup examples earlier host containers require some startup code with a HostContainer tied to a persistent property (like a Form property): // The base path for templates - templates are rendered with relative paths // based on this path. HostContainer.TemplatePath = Path.Combine(Environment.CurrentDirectory, TemplateBaseFolder); // Default output rendering disk location HostContainer.RenderingOutputFile = Path.Combine(HostContainer.TemplatePath, "__Preview.htm"); // Add any assemblies you want reference in your templates HostContainer.ReferencedAssemblies.Add("System.Windows.Forms.dll"); // Start up the host container HostContainer.Start(); Once that’s done, you can render templates with the host container: // Pass the template path for full filename seleted with OpenFile Dialog // relativepath is: subdir\file.cshtml or file.cshtml or ..\file.cshtml var relativePath = Utilities.GetRelativePath(fileName, HostContainer.TemplatePath); if (!HostContainer.RenderTemplate(relativePath, Context, HostContainer.RenderingOutputFile)) { MessageBox.Show("Error: " + HostContainer.ErrorMessage); return; } webBrowser1.Navigate("file://" + HostContainer.RenderingOutputFile); The most critical task of the RazorFolderHostContainer implementation is to retrieve a template from disk, compile and cache it and then deal with deciding whether subsequent requests need to re-compile the template or simply use a cached version. Internally the GetAssemblyFromFileAndCache() handles this task: /// <summary> /// Internally checks if a cached assembly exists and if it does uses it /// else creates and compiles one. Returns an assembly Id to be /// used with the LoadedAssembly list. 
/// </summary> /// <param name="relativePath"></param> /// <param name="context"></param> /// <returns></returns> protected virtual CompiledAssemblyItem GetAssemblyFromFileAndCache(string relativePath) { string fileName = Path.Combine(TemplatePath, relativePath).ToLower(); int fileNameHash = fileName.GetHashCode(); if (!File.Exists(fileName)) { this.SetError(Resources.TemplateFileDoesnTExist + fileName); return null; } CompiledAssemblyItem item = null; this.LoadedAssemblies.TryGetValue(fileNameHash, out item); string assemblyId = null; // Check for cached instance if (item != null) { var fileTime = File.GetLastWriteTimeUtc(fileName); if (fileTime <= item.CompileTimeUtc) assemblyId = item.AssemblyId; } else item = new CompiledAssemblyItem(); // No cached instance - create assembly and cache if (assemblyId == null) { string safeClassName = GetSafeClassName(fileName); StreamReader reader = null; try { reader = new StreamReader(fileName, true); } catch (Exception ex) { this.SetError(Resources.ErrorReadingTemplateFile + fileName); return null; } assemblyId = Engine.ParseAndCompileTemplate(this.ReferencedAssemblies.ToArray(), reader); // need to ensure reader is closed if (reader != null) reader.Close(); if (assemblyId == null) { this.SetError(Engine.ErrorMessage); return null; } item.AssemblyId = assemblyId; item.CompileTimeUtc = DateTime.UtcNow; item.FileName = fileName; item.SafeClassName = safeClassName; this.LoadedAssemblies[fileNameHash] = item; } return item; } This code uses a LoadedAssembly dictionary which is comprised of a structure that holds a reference to a compiled assembly, a full filename and file timestamp and an assembly id. LoadedAssemblies (defined on the base class shown earlier) is essentially a cache for compiled assemblies and they are identified by a hash id. In the case of files the hash is a GetHashCode() from the full filename of the template. The template is checked for in the cache and if not found the file stamp is checked. If that’s newer than the cache’s compilation date the template is recompiled otherwise the version in the cache is used. All the core work defers to a RazorEngine<T> instance to ParseAndCompileTemplate(). The three rendering specific methods then are rather simple implementations with just a few lines of code dealing with parameter and return value parsing: /// <summary> /// Renders a template to a TextWriter. Useful to write output into a stream or /// the Response object. Used for partial rendering. /// </summary> /// <param name="relativePath">Relative path to the file in the folder structure</param> /// <param name="context">Optional context object or null</param> /// <param name="writer">The textwriter to write output into</param> /// <returns></returns> public bool RenderTemplate(string relativePath, object context, TextWriter writer) { // Set configuration data that is to be passed to the template (any object) Engine.TemplatePerRequestConfigurationData = new RazorFolderHostTemplateConfiguration() { TemplatePath = Path.Combine(this.TemplatePath, relativePath), TemplateRelativePath = relativePath, }; CompiledAssemblyItem item = GetAssemblyFromFileAndCache(relativePath); if (item == null) { writer.Close(); return false; } try { // String result will be empty as output will be rendered into the // Response object's stream output. 
However a null result denotes // an error string result = Engine.RenderTemplateFromAssembly(item.AssemblyId, context, writer); if (result == null) { this.SetError(Engine.ErrorMessage); return false; } } catch (Exception ex) { this.SetError(ex.Message); return false; } finally { writer.Close(); } return true; } /// <summary> /// Render a template from a source file on disk to a specified outputfile. /// </summary> /// <param name="relativePath">Relative path off the template root folder. Format: path/filename.cshtml</param> /// <param name="context">Any object that will be available in the template as a dynamic of this.Context</param> /// <param name="outputFile">Optional - output file where output is written to. If not specified the /// RenderingOutputFile property is used instead /// </param> /// <returns>true if rendering succeeds, false on failure - check ErrorMessage</returns> public bool RenderTemplate(string relativePath, object context, string outputFile) { if (outputFile == null) outputFile = RenderingOutputFile; try { using (StreamWriter writer = new StreamWriter(outputFile, false, Engine.Configuration.OutputEncoding, Engine.Configuration.StreamBufferSize)) { return RenderTemplate(relativePath, context, writer); } } catch (Exception ex) { this.SetError(ex.Message); return false; } return true; } /// <summary> /// Renders a template to string. Useful for RenderTemplate /// </summary> /// <param name="relativePath"></param> /// <param name="context"></param> /// <returns></returns> public string RenderTemplateToString(string relativePath, object context) { string result = string.Empty; try { using (StringWriter writer = new StringWriter()) { // String result will be empty as output will be rendered into the // Response object's stream output. However a null result denotes // an error if (!RenderTemplate(relativePath, context, writer)) { this.SetError(Engine.ErrorMessage); return null; } result = writer.ToString(); } } catch (Exception ex) { this.SetError(ex.Message); return null; } return result; } The idea is that you can create custom host container implementations that do exactly what you want fairly easily. Take a look at both the RazorFolderHostContainer and RazorStringHostContainer classes for the basic concepts you can use to create custom implementations. Notice also that you can set the engine’s PerRequestConfigurationData() from the host container: // Set configuration data that is to be passed to the template (any object) Engine.TemplatePerRequestConfigurationData = new RazorFolderHostTemplateConfiguration() { TemplatePath = Path.Combine(this.TemplatePath, relativePath), TemplateRelativePath = relativePath, }; which when set to a non-null value is passed to the Template’s InitializeTemplate() method. This method receives an object parameter which you can cast as needed: public override void InitializeTemplate(object configurationData) { // Pick up configuration data and stuff into Request object RazorFolderHostTemplateConfiguration config = configurationData as RazorFolderHostTemplateConfiguration; this.Request.TemplatePath = config.TemplatePath; this.Request.TemplateRelativePath = config.TemplateRelativePath; } With this data you can then configure any custom properties or objects on your main template class. It’s an easy way to pass data from the HostContainer all the way down into the template. The type you use is of type object so you have to cast it yourself, and it must be serializable since it will likely run in a separate AppDomain. 
This might seem like an ugly way to pass data around – normally I’d use an event delegate to call back from the engine to the host, but since this is running over AppDomain boundaries events get really tricky and passing a template instance back up into the host over AppDomain boundaries doesn’t work due to serialization issues. So it’s easier to pass the data from the host down into the template using this rather clumsy approach of set and forward. It’s ugly, but it’s something that can be hidden in the host container implementation as I’ve done here. It’s also not something you have to do in every implementation so this is kind of an edge case, but I know I’ll need to pass a bunch of data in some of my applications and this will be the easiest way to do so. Summing Up Hosting the Razor runtime is something I got jazzed up about quite a bit because I have an immediate need for this type of templating/merging/scripting capability in an application I’m working on. I’ve also been using templating in many apps and it’s always been a pain to deal with. The Razor engine makes this whole experience a lot cleaner and more light weight and with these wrappers I can now plug .NET based templating into my code literally with a few lines of code. That’s something to cheer about… I hope some of you will find this useful as well… Resources The examples and code require that you download the Razor runtimes. Projects are for Visual Studio 2010 running on .NET 4.0 Platform Installer 3.0 (install WebMatrix or MVC 3 for Razor Runtimes) Latest Code in Subversion Repository Download Snapshot of the Code Documentation (CHM Help File) © Rick Strahl, West Wind Technologies, 2005-2010Posted in ASP.NET  .NET  
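
To tie the pieces above together, here is a minimal end-to-end usage sketch built only from the members described in the post (Start/Stop, TemplatePath, ReferencedAssemblies, RenderTemplateToString, ErrorMessage). The "templates" folder, the template file name and the Customer context class are placeholders invented for this example, and the sketch assumes the article's RazorHosting assembly is referenced:

    using System;
    using System.IO;
    // assumes a reference to the RazorHosting assembly from the article's download

    [Serializable]                    // the context crosses the AppDomain boundary, so keep it serializable
    public class Customer             // placeholder context type for this example
    {
        public string Name { get; set; }
        public string Company { get; set; }
    }

    // ... inside some startup/render routine:
    var host = new RazorFolderHostContainer();
    host.TemplatePath = Path.Combine(Environment.CurrentDirectory, "templates");
    host.ReferencedAssemblies.Add("System.Windows.Forms.dll");
    host.Start();

    var context = new Customer { Name = "Rick", Company = "West Wind" };
    string html = host.RenderTemplateToString("subfolder\\invoice.cshtml", context);
    if (html == null)
        Console.WriteLine("Error: " + host.ErrorMessage);
    else
        Console.WriteLine(html);

    host.Stop();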

    Read the article

  • Stop lazy loading or skip loading a property in NHibernate? Proxy cannot be serialized through WCF

    - by HelloSam
    Consider a parent/child relationship, with classes and a mapping. I am using NHibernate to read the objects from the database and intend to use WCF to send them across the wire. Goal: when reading the parent object, I want to decide selectively, along different execution paths, when to load the child object, because I don't want to read more than I need. Those partially loaded objects must still be able to be sent through WCF; when I say I don't load a property, neither side will access it. Problem: when such a partially loaded object is sent through WCF, the property in question is marked as [DataContract] but cannot be serialized, because it holds a lazy-load proxy instead of a real, known type. What I want to achieve, or solutions I can think of: lazy=false and lazy=true don't work; the former eagerly fetches all the relationships, the latter creates a proxy. I want neither; a null would be best. I don't need lazy loading. I'd like a null for the references I don't want to fetch; a null, not a proxy. That would keep WCF happy and waste no time constructing a lazy-load proxy. Could I have a null proxy factory? -OR- Could I make WCF ignore properties that hold a proxy instead of a real object? I tried the IDataContractSurrogate solution, but only the parent is passed to GetObjectToSerialize; I never observe a proxy being passed through GetObjectToSerialize, leaving no chance to un-proxy it. Edit: After reading the comments and doing more surfing on the Internet... It seems to me that DTOs would shift a major part of the computation to the server side. But for the project I am working on, 50% of the time the client is "smarter" than the server, and the server is more like a data store with validation and verification. Though I agree the server is not exactly dumb: I already have to decide when to fetch the extra references, and DTOs will make this very explicit. Maybe I should just take the pain. I didn't know about http://automapper.codeplex.com/ before; it motivates me a little more to take the pain. On the other hand, I found http://trentacular.com/2009/08/how-to-use-nhibernate-lazy-initializing-proxies-with-web-services-or-wcf/, which seems to work with IDataContractSurrogate.GetObjectToSerialize.
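
    For what it's worth, one way to get the "null instead of a proxy" behaviour on the serialization side is to unwrap or null out proxies just before the object graph is handed to WCF. The helper below is only a rough sketch, not the poster's code, and it assumes the NHibernate proxy types (INHibernateProxy, NHibernateUtil) are visible to the service layer:

        using NHibernate;
        using NHibernate.Proxy;

        // Illustrative helper: turn an NHibernate proxy into either null (when it was
        // never initialized) or the real entity instance, before the object graph is
        // handed to the DataContractSerializer or copied onto a DTO.
        public static class ProxyUnwrapper
        {
            public static object Unwrap(object value)
            {
                var proxy = value as INHibernateProxy;
                if (proxy == null)
                    return value;                       // plain entity or null: nothing to do

                if (!NHibernateUtil.IsInitialized(proxy))
                    return null;                        // "not loaded" becomes null on the wire

                // Initialized proxy: return the underlying implementation instance.
                return proxy.HibernateLazyInitializer.GetImplementation();
            }
        }

    Since the poster reports that GetObjectToSerialize only ever receives the parent, this kind of unwrapping generally has to be applied while flattening the graph or mapping to DTOs rather than from inside the surrogate itself.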

    Read the article

  • Eventlet or gevent or Stackless + Twisted, Pylons, Django and SQL Alchemy

    - by Khorkrak
    We're using Twisted extensively for apps requiring a great deal of asynchronous io. There are some cases where stuff is cpu bound instead and for that we spawn a pool of processes to do the work and have a system for managing these across multiple servers as well - all done in Twisted. Works great. The problem is that it's hard to bring new team members up to speed. Writing asynchronous code in Twisted requires a near vertical learning curve. It's as if humans just don't think that way naturally. We're considering a mixed approach perhaps. Maybe do the xmlrpc server part and process management in Twisted still but the other stuff in code that at least looks synchronous while not being as such. Then again I like explicit over implicit so hmmm. Anyway onto greenlets - how well does that stuff work? So there's Stackless and as you can see from my Gallentean avatar I'm well aware of the tremendous success in it's use for CCP's flagship EVE Online game first hand. What about Eventlet or gevent? Well for now only Eventlet works with Twisted. However gevent claims to be faster since it's not a pure python implementation it instead uses libevent. It also has fewer idiosyncrasies and defects supposedly. The documentation there is minimal in comparison to Eventlet and it's maintained by 1 guy as far as I can tell. This makes me leery but all great projects start this way so... Then there's PyPy - I haven't even finished reading about that one yet - just saw it in this thread: Drawbacks of Stackless. So confusing - I'm wondering what the heck to do - sounds like Eventlet is probably the best bet but is it really stable enough? Anyone out there have any experience with it? Should we go with Stackless instead as it's been around and is proven technology - just like Twisted is as well - and they do work together nicely. But still I hate having to have a separate version of Python to do this. what to do.... This somewhat obnoxious blog entry hit the nail on the head for me though: Asynchronous IO for Grownups We're stuck using MySQL as well - I never knew how great PostgreSQL was until having had to work on a production OLTP system in MySQL instead - but that's another story. But if that monkey patch thing really works then wow. Just wow.

    Read the article

  • Fluid CSS: floating column with max-width and overflow

    - by Ates Goral
    I'm using a fluid layout in the new theme that I'm working on for my blog. I often blog about code and include <pre> blocks within the posts. The float: left column for the content area has a max-width so that the column stops at a certain maximum width and can also be shrunk: +----------+ +------+ | text | | text | | | | | | | | | | | | | | | | | | | | | +----------+ +------+ max shrunk What I want is for the <pre> elements to be wider than the text column so that I can fit 80-character-wrapped code without horizontal scroll bars. But I want the <pre> elements to overflow from the content area, without affecting its fluidity: +----------+ +------+ | text | | text | | | | | +----------+--+ +------+------+ | code | | code | +----------+--+ +------+------+ | | | | +----------+ +------+ max shrunk But, max-width stops being fluid once I insert the overhanging <pre> in there: the width of the column remains at the specified max-width even when I shrink the browser beyond that width. I've reproduced the issue with this bare-minimum scenario: <div style="float: left; max-width: 460px; border: 1px solid red"> <p>Lorem ipsum dolor sit amet, consectetur adipisicing elit</p> <pre style="max-width: 700px; border: 1px solid blue"> function foo() { // Lorem ipsum dolor sit amet, consectetur adipisicing elit } </pre> <p>Lorem ipsum dolor sit amet, consectetur adipisicing elit</p> </div> I noticed that doing either of the following brings back the fluidity: Remove the <pre> (doh...) Remove the float: left The workaround I'm currently using is to insert the <pre> elements into "breaks" in the post column, so that the widths of the post segments and the <pre> segments are managed mutually exclusively: +----------+ +------+ | text | | text | +----------+ +------+ +-------------+ +-------------+ | code | | code | +-------------+ +-------------+ +----------+ +------+ +----------+ +------+ max shrunk But this forces me to insert additional closing and opening <div> elements into the post markup which I'd rather keep semantically pristine. Admittedly, I don't have a full grasp of how the box model works with floats with overflowing content, so I don't understand why the combination of float: left on the container and the <pre> inside it cripple the max-width of the container. I'm observing the same problem on Firefox/Chrome/Safari/Opera. IE6 (the crazy one) seems happy all the time. This also doesn't seem dependent on quirks/standards mode. Update I've done further testing to observe that max-width seems to get ignored when the element has a float: left. I glanced at the W3C box model chapter but couldn't immediately see an explicit mention of this behaviour. Any pointers?

    Read the article

  • delta-dictionary/dictionary with revision awareness in python?

    - by shabbychef
    I am looking to create a dictionary with 'roll-back' capabilities in python. The dictionary would start with a revision number of 0, and the revision would be bumped up only by explicit method call. I do not need to delete keys, only add and update key,value pairs, and then roll back. I will never need to 'roll forward', that is, when rolling the dictionary back, all the newer revisions can be discarded, and I can start re-reving up again. thus I want behaviour like: >>> rr = rev_dictionary() >>> rr.rev 0 >>> rr["a"] = 17 >>> rr[('b',23)] = 'foo' >>> rr["a"] 17 >>> rr.rev 0 >>> rr.roll_rev() >>> rr.rev 1 >>> rr["a"] 17 >>> rr["a"] = 0 >>> rr["a"] 0 >>> rr[('b',23)] 'foo' >>> rr.roll_to(0) >>> rr.rev 0 >>> rr["a"] 17 >>> rr.roll_to(1) Exception ... Just to be clear, the state associated with a revision is the state of the dictionary just prior to the roll_rev() method call. thus if I can alter the value associated with a key several times 'within' a revision, and only have the last one remembered. I would like a fairly memory-efficient implementation of this: the memory usage should be proportional to the deltas. Thus simply having a list of copies of the dictionary will not scale for my problem. One should assume the keys are in the tens of thousands, and the revisions are in the hundreds of thousands. We can assume the values are immutable, but need not be numeric. For the case where the values are e.g. integers, there is a fairly straightforward implementation (have a list of dictionaries of the numerical delta from revision to revision). I am not sure how to turn this into the general form. Maybe bootstrap the integer version and add on an array of values? all help appreciated.
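
    The layered-delta idea hinted at for integers works just as well for arbitrary immutable values: keep one small dictionary of changes per revision and resolve reads by walking backwards through the layers. A rough, language-agnostic sketch of that shape (shown in C# purely for illustration since the question itself is about Python; the class and member names are made up):

        using System;
        using System.Collections.Generic;

        // Illustrative only: a revisioned map that stores one delta dictionary per revision.
        // Memory use is proportional to the number of changed keys, not to the total key count.
        public class RevisionedMap<TKey, TValue>
        {
            private readonly List<Dictionary<TKey, TValue>> _deltas =
                new List<Dictionary<TKey, TValue>> { new Dictionary<TKey, TValue>() };

            public int Rev { get { return _deltas.Count - 1; } }

            public TValue this[TKey key]
            {
                get
                {
                    // Walk from the newest delta back to revision 0.
                    for (int i = _deltas.Count - 1; i >= 0; i--)
                    {
                        TValue value;
                        if (_deltas[i].TryGetValue(key, out value))
                            return value;
                    }
                    throw new KeyNotFoundException();
                }
                set { _deltas[_deltas.Count - 1][key] = value; }   // last write within a revision wins
            }

            public void RollRev()
            {
                _deltas.Add(new Dictionary<TKey, TValue>());
            }

            public void RollTo(int rev)
            {
                if (rev < 0 || rev > Rev)
                    throw new ArgumentOutOfRangeException("rev");
                // Discard all newer revisions; further edits go into the reopened target revision.
                _deltas.RemoveRange(rev + 1, _deltas.Count - (rev + 1));
            }
        }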

    Read the article

  • .Net Custom Configuration Section and Saving Changes within PropertyGrid

    - by Paul
    If I load the My.Settings object (app.config) into a PropertyGrid, I am able to edit the property inside the propertygrid and the change is automatically saved. PropertyGrid1.SelectedObject = My.Settings I want to do the same with a Custom Configuration Section. Following this code example (from here http://www.codeproject.com/KB/vb/SerializePropertyGrid.aspx), he is doing explicit serialization to disk when a "Save" button is pushed. Public Class Form1 'Load AppSettings Dim _appSettings As New AppSettings() Private Sub Button1_Click(ByVal sender As System.Object, ByVal e As System.EventArgs) Handles Button1.Click _appSettings = AppSettings.Load() ' Actually change the form size Me.Size = _appSettings.WindowSize PropertyGrid1.SelectedObject = _appSettings End Sub Private Sub Button2_Click(ByVal sender As System.Object, ByVal e As System.EventArgs) Handles Button2.Click _appSettings.Save() End Sub End Class In my code, my custom section Inherits from ConfigurationSection (see below) Question: Is there something built into ConfigurationSection class that does the autosave? If not, what is the best way to handle this, should it be in the PropertyGrid.PropertyValueChagned? (how does the My.Settings handle this internally?) Here is the example Custom Class that I am trying to get to auto-save and how I load into property grid. Dim config As System.Configuration.Configuration = _ ConfigurationManager.OpenExeConfiguration( _ ConfigurationUserLevel.None) PropertyGrid2.SelectedObject = config.GetSection("CustomSection") Public NotInheritable Class CustomSection Inherits ConfigurationSection ' The collection (property bag) that contains ' the section properties. Private Shared _Properties As ConfigurationPropertyCollection ' The FileName property. Private Shared _FileName As New ConfigurationProperty("fileName", GetType(String), "def.txt", ConfigurationPropertyOptions.IsRequired) ' The MasUsers property. Private Shared _MaxUsers _ As New ConfigurationProperty("maxUsers", _ GetType(Int32), 1000, _ ConfigurationPropertyOptions.None) ' The MaxIdleTime property. Private Shared _MaxIdleTime _ As New ConfigurationProperty("maxIdleTime", _ GetType(TimeSpan), TimeSpan.FromMinutes(5), _ ConfigurationPropertyOptions.IsRequired) ' CustomSection constructor. Public Sub New() _Properties = New ConfigurationPropertyCollection() _Properties.Add(_FileName) _Properties.Add(_MaxUsers) _Properties.Add(_MaxIdleTime) End Sub 'New ' This is a key customization. ' It returns the initialized property bag. Protected Overrides ReadOnly Property Properties() _ As ConfigurationPropertyCollection Get Return _Properties End Get End Property <StringValidator( _ InvalidCharacters:=" ~!@#$%^&*()[]{}/;'""|\", _ MinLength:=1, MaxLength:=60)> _ <EditorAttribute(GetType(System.Windows.Forms.Design.FileNameEditor), GetType(System.Drawing.Design.UITypeEditor))> _ Public Property FileName() As String Get Return CStr(Me("fileName")) End Get Set(ByVal value As String) Me("fileName") = value End Set End Property <LongValidator(MinValue:=1, _ MaxValue:=1000000, ExcludeRange:=False)> _ Public Property MaxUsers() As Int32 Get Return Fix(Me("maxUsers")) End Get Set(ByVal value As Int32) Me("maxUsers") = value End Set End Property <TimeSpanValidator(MinValueString:="0:0:30", _ MaxValueString:="5:00:0", ExcludeRange:=False)> _ Public Property MaxIdleTime() As TimeSpan Get Return CType(Me("maxIdleTime"), TimeSpan) End Get Set(ByVal value As TimeSpan) Me("maxIdleTime") = value End Set End Property End Class 'CustomSection
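
    As far as I know there is nothing built into ConfigurationSection that auto-saves; the My.Settings behaviour presumably comes from the VB application framework saving user settings on shutdown. The usual pattern for a custom section is to call Save on the Configuration instance you opened, for example from the grid's PropertyValueChanged event. A hedged C# sketch of that wiring (propertyGrid1 and the section name are assumptions matching the question; the VB equivalent is the same calls):

        using System.Configuration;
        using System.Windows.Forms;

        // Inside Form_Load, for example: show the section in the grid and persist edits
        // back to the .config file as soon as a property value changes.
        Configuration config =
            ConfigurationManager.OpenExeConfiguration(ConfigurationUserLevel.None);
        ConfigurationSection section = config.GetSection("CustomSection");
        propertyGrid1.SelectedObject = section;

        propertyGrid1.PropertyValueChanged += (s, e) =>
        {
            config.Save(ConfigurationSaveMode.Modified);            // write only the changed values
            ConfigurationManager.RefreshSection("CustomSection");   // so the next GetSection re-reads the file
        };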

    Read the article

  • Synchronized Enumerator in C#

    - by Dan Bryant
    I'm putting together a custom SynchronizedCollection<T> class so that I can have a synchronized Observable collection for my WPF application. The synchronization is provided via a ReaderWriterLockSlim, which, for the most part, has been easy to apply. The case I'm having trouble with is how to provide thread-safe enumeration of the collection. I've created a custom IEnumerator<T> nested class that looks like this: private class SynchronizedEnumerator : IEnumerator<T> { private SynchronizedCollection<T> _collection; private int _currentIndex; internal SynchronizedEnumerator(SynchronizedCollection<T> collection) { _collection = collection; _collection._lock.EnterReadLock(); _currentIndex = -1; } #region IEnumerator<T> Members public T Current { get; private set;} #endregion #region IDisposable Members public void Dispose() { var collection = _collection; if (collection != null) collection._lock.ExitReadLock(); _collection = null; } #endregion #region IEnumerator Members object System.Collections.IEnumerator.Current { get { return Current; } } public bool MoveNext() { var collection = _collection; if (collection == null) throw new ObjectDisposedException("SynchronizedEnumerator"); _currentIndex++; if (_currentIndex >= collection.Count) { Current = default(T); return false; } Current = collection[_currentIndex]; return true; } public void Reset() { if (_collection == null) throw new ObjectDisposedException("SynchronizedEnumerator"); _currentIndex = -1; Current = default(T); } #endregion } My concern, however, is that if the Enumerator is not Disposed, the lock will never be released. In most use cases, this is not a problem, as foreach should properly call Dispose. It could be a problem, however, if a consumer retrieves an explicit Enumerator instance. Is my only option to document the class with a caveat implementer reminding the consumer to call Dispose if using the Enumerator explicitly or is there a way to safely release the lock during finalization? I'm thinking not, since the finalizer doesn't even run on the same thread, but I was curious if there other ways to improve this.
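
    One way to sidestep the undisposed-enumerator problem entirely is to hand out a snapshot: copy the items while holding the read lock, release the lock immediately, and let the caller enumerate the copy. A rough sketch, not the poster's code, with _items assumed as the backing list and the rest of the class omitted:

        using System.Collections.Generic;
        using System.Threading;

        public class SynchronizedCollection<T>
        {
            private readonly List<T> _items = new List<T>();                    // assumed backing store
            private readonly ReaderWriterLockSlim _lock = new ReaderWriterLockSlim();

            // Snapshot enumeration: copy under the read lock, release immediately,
            // enumerate the copy. No lock is held while the caller iterates, so a
            // forgotten Dispose() can no longer leak the lock.
            public IEnumerator<T> GetEnumerator()
            {
                List<T> snapshot;
                _lock.EnterReadLock();
                try
                {
                    snapshot = new List<T>(_items);
                }
                finally
                {
                    _lock.ExitReadLock();
                }
                return snapshot.GetEnumerator();
            }
        }

    The trade-off is that enumeration sees a point-in-time copy rather than live data, which for a collection being mutated from other threads is usually the semantics you want anyway.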

    Read the article

  • Using Artificial Intelligence (AI) to predict Stock Prices

    - by akaphenom
    Given a set of data very similar to the Motley Fool CAPS system, where individual users enter BUY and SELL recommendations on various equities. What I would like to do is show each recommendation and somehow rate it (1-5) according to whether it was a good predictor (i.e. correlation coefficient = 1) of the future stock price (or EPS or whatever), a horrible predictor (i.e. correlation coefficient = -1), or somewhere in between. Each recommendation is tagged to a particular user, so that can be tracked over time. I can also track market direction (bullish/bearish) based off of something like the S&P 500 price. The components I think would make sense in the model are: user direction (long/short), market direction, and sector of the stock. The thought is that some users are better in bull markets than bear markets (and vice versa), some are better at shorts than longs, and then a combination of the above. I can automatically tag the market direction and sector (based off the market at the time and the equity being recommended). The thought is that I could present a series of screens and rank each individual recommendation by displaying the available data: absolute, market and sector outperformance for a specific time period out. I would follow a detailed list for ranking the stocks so that the ranking is as objective as possible. My assumption is that a single user is right no more than 57% of the time - but who knows. I could load the system and say "Let's rank the recommendation as a predictor of stock value 90 days forward", and that would represent a very explicit set of rankings. NOW here is the crux - I want to create some sort of machine learning algorithm that can identify patterns over time, so that as recommendations stream into the application we maintain a ranking of that stock (similar to a correlation coefficient) as to the likelihood that the recommendation (in addition to the past series of recommendations) will affect the price. Now here is the super crux: I have never taken an AI class or read an AI book, never mind anything specific to machine learning. So I am looking for guidance - a sample or description of a similar system I could adapt, a place to look for info, or any general help. Or even a push in the right direction to get started... My hope is to implement this with F# and be able to impress my friends with a new skill set in F#, with an implementation of machine learning and potentially something (application/source) I can include in a tech portfolio or blog space. Thank you for any advice in advance.
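
    Setting the machine learning part aside, the +1/-1 rating described above is just a Pearson correlation between the recommendation direction and the realized forward return, which is straightforward to compute once the data is collected. A small illustrative sketch (C# here rather than F#, and the array inputs are assumptions for this example):

        using System;
        using System.Linq;

        // Pearson correlation between recommendation direction (+1 buy, -1 sell) and the
        // realized 90-day return. +1 = perfect predictor, -1 = perfectly wrong, 0 = no signal.
        public static double Pearson(double[] direction, double[] futureReturn)
        {
            int n = direction.Length;                 // assumes both arrays have the same length
            double meanX = direction.Average();
            double meanY = futureReturn.Average();
            double cov = 0, varX = 0, varY = 0;
            for (int i = 0; i < n; i++)
            {
                double dx = direction[i] - meanX;
                double dy = futureReturn[i] - meanY;
                cov += dx * dy;
                varX += dx * dx;
                varY += dy * dy;
            }
            return cov / Math.Sqrt(varX * varY);      // undefined (division by zero) if either series is constant
        }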

    Read the article

  • VBScript Out of String space

    - by MalsiaPro
    I got the following code to capture information for files on a specified drive, I ran the script againts a 600 GB hard drive on one of our servers and after a while I get the error Out of String space; "Join". Line 34, Char 2 For this code, file script.vbs: Option Explicit Dim objFS, objFld Dim objArgs Dim strFolder, strDestFile, blnRecursiveSearch ''Dim strLines Dim strCsv ''Dim i '' i = 0 ' 'Get the commandline parameters ' Set objArgs = WScript.Arguments ' strFolder = objArgs(0) ' strDestFile = objArgs(1) ' blnRecursiveSearch = objArgs(2) '######################################## 'SPECIFY THE DRIVE YOU WANT TO SCAN BELOW '######################################## strFolder = "C:\" strDestFile = "C:\InformationOutput.csv" blnRecursiveSearch = True 'Create the FileSystemObject Set objFS=CreateObject("Scripting.FileSystemObject") 'Get the directory you are working in Set objFld = objFS.GetFolder(strFolder) 'Open the csv file Set strCsv = objFS.CreateTextFile(strDestFile, True) '' 'Write the csv file '' Set strCsv = objFS.CreateTextFile(strDestFile, True) strCsv.WriteLine "File Path,File Size,Date Created,Date Last Modified,Date Last Accessed" '' strCsv.Write Join(strLines, vbCrLf) 'Now get the file details GetFileDetails objFld, blnRecursiveSearch '' 'Close and cleanup objects '' strCsv.Close '' 'Write the csv file '' Set strCsv = objFS.CreateTextFile(strDestFile, True) '' For i = 0 to UBound(strLines) '' strCsv.WriteLine strLines(i) '' Next 'Close and cleanup objects strCsv.Close Set strCsv = Nothing Set objFld = Nothing Set strFolder = Nothing Set objArgs = Nothing '---------------------------SCAN SPECIFIED LOCATION------------------------------- Private Sub GetFileDetails(fold, blnRecursive) Dim fld, fil dim strLine(4) on error resume next If InStr(fold.Path, "System Volume Information") < 1 Then If blnRecursive Then 'Work through all the folders and subfolders For Each fld In fold.SubFolders GetFileDetails fld, True If err.number <> 0 then LogError err.Description & vbcrlf & "Folder - " & fold.Path err.Clear End If Next End If 'Now work on the files For Each fil in fold.Files strLine(0) = fil.Path strLine(1) = fil.Size strLine(2) = fil.DateCreated strLine(3) = fil.DateLastModified strLine(4) = fil.DateLastAccessed strCsv.WriteLine Join(strLine, ",") if err.number <> 0 then LogError err.Description & vbcrlf & "Folder - " & fold.Path & vbcrlf & "File - " & fil.Name err.Clear End If Next End If end sub Private sub LogError(strError) dim strErr 'Write the csv file Set strErr = objFS.CreateTextFile("C:\test\err.log", false) strErr.WriteLine strError strErr.Close Set strErr = nothing End Sub RunMe.cmd wscript.exe "C:\temp\script\script.vbs" How can I avoid getting this error? The server drives are quite a bit <???? and I would imagine that the CSV file would be at least 40 MB. Edit by Guffa: I commented out some lines in the code, using double ticks ('') so you can see where.

    Read the article

  • Guides for PostgreSQL query tuning?

    - by Joe
    I've found a number of resources that talk about tuning the database server, but I haven't found much on the tuning of the individual queries. For instance, in Oracle, I might try adding hints to ignore indexes or to use sort-merge vs. correlated joins, but I can't find much on tuning Postgres other than using explicit joins and recommendations when bulk loading tables. Do any such guides exist so I can focus on tuning the most run and/or underperforming queries, hopefully without adversely affecting the currently well-performing queries? I'd even be happy to find something that compared how certain types of queries performed relative to other databases, so I had a better clue of what sort of things to avoid. update: I should've mentioned, I took all of the Oracle DBA classes along with their data modeling and SQL tuning classes back in the 8i days ... so I know about 'EXPLAIN', but that's more to tell you what's going wrong with the query, not necessarily how to make it better. (eg, are 'while var=1 or var=2' and 'while var in (1,2)' considered the same when generating an execution plan? What if I'm doing it with 10 permutations? When are multi-column indexes used? Are there ways to get the planner to optimize for fastest start vs. fastest finish? What sort of 'gotchas' might I run into when moving from mySQL, Oracle or some other RDBMS?) I could write any complex query dozens if not hundreds of ways, and I'm hoping to not have to try them all and find which one works best through trial and error. I've already found that 'SELECT count(*)' won't use an index, but 'SELECT count(primary_key)' will ... maybe a 'PostgreSQL for experienced SQL users' sort of document that explained sorts of queries to avoid, and how best to re-write them, or how to get the planner to handle them better. update 2: I found a Comparison of different SQL Implementations which covers PostgreSQL, DB2, MS-SQL, mySQL, Oracle and Informix, and explains if, how, and gotchas on things you might try to do, and his references section linked to Oracle / SQL Server / DB2 / Mckoi /MySQL Database Equivalents (which is what its title suggests) and to the wikibook SQL Dialects Reference which covers whatever people contribute (includes some DB2, SQLite, mySQL, PostgreSQL, Firebird, Vituoso, Oracle, MS-SQL, Ingres, and Linter).

    Read the article

  • HP DAT72x6 autoloader

    - by ericmayo
    Hoping someone here has seen a similar issue and can offer some advice... I have an HP DAT72x6 autoloader tape backup unit, the external kind; here is a link to an owner's manual I found for it: http://www.dectrader.com/docs/set2/emr_na-c00070400-1.pdf I purchased the unit used about 6 months ago. The unit stopped working after 3-4 backups; it's used one day a month to do a monthly backup of another system, so suffice it to say the unit gets very little usage. There is an amber light on the front of the unit called the OAR (Operator Attention Required) light. The manual states to call for service when this light comes on and stays on. I've tried a few things to resolve it but none are working: power cycling and re-securing the SCSI cables at both ends. The unit was used so I didn't pay much ($500), and I don't want to spend a lot to have it fixed; I might as well buy something new if fixing this is going to cost more than $100-$150. I'm curious to see if anyone here has been around these devices, or possibly is an HP repair person, who can give me some things to try. The manual states that a solid amber OAR light indicates a hardware failure. When I power cycle the unit I see one of two scenarios so far: 1) the unit powers up, shows a self test on the LCD, then the LCD changes to show all possible images and the OAR light comes on; 2) the unit powers up, the LCD is completely blank, the green lights go through some sort of process of going on and off, and later the amber OAR light comes on and stays on. If it's a simple misalignment issue I may be able to fix it myself, but not knowing what could cause the OAR light to come on gives me nowhere to even start. Googling around gave no help either. I'm hoping someone here has experience with this and can help or point me in the right direction. Also, I don't have the HP diagnostic tools mentioned in many manuals. The unit is connected to a Linux box. The 3-4 backups I've done with it so far have had no issues; we run Amanda backup. Before this incident the unit was backing up and reading tapes fine. Thanks for any help or suggestions.

    Read the article

  • How can we implement change notification propagation for WPF and SL in the MVVM pattern?

    - by Firoso
    Here's an example scenario targeting MVVM WPF/SL development: the View data-binds to the view model property "Target"; "Target" exposes a field of an object called "data" that exists in the local application model, called "Original"; when "Original" changes, it should raise a notification to the view model and then propagate that change notification to the View. Here are the solutions I've come up with, but I don't like any of them all that much. I'm looking for other ideas; by the time we come up with something rock solid I'm certain Microsoft will have released .NET 5 with WPF/SL extensions for better MVVM development tools. For now the question is, "What have you done to solve this problem and how has it worked out for you?" Option 1. Proposal: Attach a handler to data's PropertyChanged event that watches for the string names of properties it cares about that might have changed, and raises the appropriate notification. Why I don't like it: Changes don't bubble naturally, objects must be explicitly watched, and if data changes to a new source, events must be un-registered/re-registered. Why I kind of like it: I get explicit control over propagation of changes, and I don't have to use any types that belong at a higher level of the application, such as dependency properties. Option 2. Proposal: Attach a handler to data's PropertyChanged event that re-raises the event across all properties using the same property name. Why I don't like it: This is essentially the same as option 1, but less intelligent, and it forces me to never change my property names, as they have to be the same as the property names on data. Why I kind of like it: It's very easy to set up and I don't have to think about it... Then again, if I try to think and change names to things that make sense, I shoot myself in the foot, and then I have to think about it! Option 3. Proposal: Inherit my view model from DependencyObject, and notify binding sources of changes directly. Why I don't like it: I'm not even 100% sure dependency properties/objects can DO this; it was just a thought to look into. Also, I don't personally feel that WPF/SL types like DependencyObject belong at the view model level. Why I kind of like it: IF it has the capability I'm seeking then it's a good answer, minus that pesky layering issue. Option 4. Proposal: Use a consistent agent messaging system based on the Task Parallel Library's Dataflow blocks to propagate everything through linked pipelining. Why I don't like it: I've never tried this, and somehow I think it will be lacking; plus it requires me to think about my code completely differently all the way around. Why I kind of like it: It has the possibility of allowing me to do some VERY fun manipulations with a push-based data model, using ActionBlocks as validation AND setters to then privately change view model properties and explicitly control PropertyChanged notifications.
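
    For what it's worth, option 1 can be made less brittle by concentrating the magic strings in a single model-to-view-model name map that the forwarding handler consults. A rough sketch, with class and property names that are illustrative only:

        using System.Collections.Generic;
        using System.ComponentModel;

        // Sketch of option 1: the view model subscribes to the model ("data"/"Original")
        // and re-raises PropertyChanged under its own property names via a name map.
        public class TargetViewModel : INotifyPropertyChanged
        {
            private readonly INotifyPropertyChanged _data;          // the wrapped model object
            private static readonly Dictionary<string, string> _map =
                new Dictionary<string, string> { { "Target", "Target" } };   // model name -> view model name

            public TargetViewModel(INotifyPropertyChanged data)
            {
                _data = data;
                _data.PropertyChanged += OnModelPropertyChanged;
            }

            public event PropertyChangedEventHandler PropertyChanged;

            private void OnModelPropertyChanged(object sender, PropertyChangedEventArgs e)
            {
                string viewModelName;
                if (_map.TryGetValue(e.PropertyName, out viewModelName) && PropertyChanged != null)
                    PropertyChanged(this, new PropertyChangedEventArgs(viewModelName));
            }
        }

    If the data source can be swapped out, the constructor hookup also needs a matching unsubscribe, which is the same re-registration cost already noted against option 1.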

    Read the article

  • How to optimize Core Data query for full text search

    - by dk
    Can I optimize a Core Data query when searching for matching words in a text? (This question also pertains to the wisdom of custom SQL versus Core Data on an iPhone.) I'm working on a new (iPhone) app that is a handheld reference tool for a scientific database. The main interface is a standard searchable table view and I want as-you-type response as the user types new words. Words matches must be prefixes of words in the text. The text is composed of 100,000s of words. In my prototype I coded SQL directly. I created a separate "words" table containing every word in the text fields of the main entity. I indexed words and performed searches along the lines of SELECT id, * FROM textTable JOIN (SELECT DISTINCT textTableId FROM words WHERE word BETWEEN 'foo' AND 'fooz' ) ON id=textTableId LIMIT 50 This runs very fast. Using an IN would probably work just as well, i.e. SELECT * FROM textTable WHERE id IN (SELECT textTableId FROM words WHERE word BETWEEN 'foo' AND 'fooz' ) LIMIT 50 The LIMIT is crucial and allows me to display results quickly. I notify the user that there are too many to display if the limit is reached. This is kludgy. I've spent the last several days pondering the advantages of moving to Core Data, but I worry about the lack of control in the schema, indexing, and querying for an important query. Theoretically an NSPredicate of textField MATCHES '.*\bfoo.*' would just work, but I'm sure it will be slow. This sort of text search seems so common that I wonder what is the usual attack? Would you create a words entity as I did above and use a predicate of "word BEGINSWITH 'foo'"? Will that work as fast as my prototype? Will Core Data automatically create the right indexes? I can't find any explicit means of advising the persistent store about indexes. I see some nice advantages of Core Data in my iPhone app. The faulting and other memory considerations allow for efficient database retrievals for tableview queries without setting arbitrary limits. The object graph management allows me to easily traverse entities without writing lots of SQL. Migration features will be nice in the future. On the other hand, in a limited resource environment (iPhone) I worry that an automatically generated database will be bloated with metadata, unnecessary inverse relationships, inefficient attribute datatypes, etc. Should I dive in or proceed with caution?

    Read the article

  • .NET: understanding web.config in asp.net

    - by mark smith
    Hi there, does anyone know of a good link explaining how to use web.config? For example, I am using forms authentication, and I notice there is a system.web element that is closed with /system.web, and then further down, under configuration, there are additional location tags. Here is an example: if you notice, there is an authentication mode="Forms" element along with authorization, which I presume is the ROOT, and it is self-contained within a system.web element. Below this there are more location elements, each with their own system.web tags. I have never really understood what I am actually doing. I have tried checking the MSDN documentation but I still don't fully understand it. Can anyone help? If you notice in my example, everything is stored in one web.config; I thought the standard was to create a standard web.config and then create another web.config in the directory I wish to protect? <configuration> <system.web> <compilation debug="true" strict="false" explicit="true" targetFramework="4.0" /> <authentication mode="Forms"> <forms loginUrl="Login.aspx" defaultUrl="Login.aspx" cookieless="UseCookies" timeout="60"/> </authentication> <authorization> <allow users="*"/> </authorization> </system.web> <location path="Forms"> <system.web> <authorization> <deny users="?"/> <allow users="*"/> </authorization> </system.web> </location> <location path="Forms/Seguridad"> <system.web> <authorization> <allow roles="Administrador"/> <deny users="?"/> </authorization> </system.web> </location>

    Read the article

  • Building Python 3.2.3 on redhat 5: missing _posixsubprocess

    - by Oz123
    I am trying to build Python3 on a RHEL 5.7 machine, I successful managed to build Python 3.2.2, with : # Install required build dependencies yum install openssl-devel bzip2-devel expat-devel gdbm-devel readline-devel sqlite-devel # Fetch and extract source. Please refer to http://www.python.org/download/releases # to ensure the latest source is used. wget http://www.python.org/ftp/python/3.2/Python-3.2.tar.bz2 tar -xjf Python-3.2.tar.bz2 cd Python-3.2 # Configure the build with a prefix (install dir) of /opt/python3, compile, and install. ./configure --prefix=/opt/python3 make But I am failing (?) with Python 3.2.3: Failed to build these modules: _posixsubprocess Is this a problem that should bother me ? How do I build it? I found this patch, but it's not included in sources Python 3.2.3 I obtained from the website ... Applying this patch on my sources, didn't solve the problem ... Here is the output from stderr: ~/tmp/Python-3.2.3 $ make > build.log ldd: warning: you do not have execution permission for `/usr/local/lib/libreadline.so' /usr/bin/ld: skipping incompatible /usr/local/lib/libreadline.so when searching for -lreadline /usr/bin/ld: skipping incompatible /usr/local/lib/libreadline.a when searching for -lreadline /home/oznahum/tmp/Python-3.2.3/Modules/_posixsubprocess.c: In function '_close_open_fd_range_safe': /home/oznahum/tmp/Python-3.2.3/Modules/_posixsubprocess.c:205: error: 'O_CLOEXEC' undeclared (first use in this function) /home/oznahum/tmp/Python-3.2.3/Modules/_posixsubprocess.c:205: error: (Each undeclared identifier is reported only once /home/oznahum/tmp/Python-3.2.3/Modules/_posixsubprocess.c:205: error: for each function it appears in.) /usr/bin/ld: skipping incompatible /usr/local/lib/libz.so when searching for -lz /usr/bin/ld: skipping incompatible /usr/local/lib/libz.so when searching for -lz ~/tmp/Python-3.2.3 $ grep posix build.log gcc -pthread -fno-strict-aliasing -DNDEBUG -g -fwrapv -O3 -Wall -Wstrict-prototypes -I. -IInclude -I./Include -DPy_BUILD_CORE -c ./Modules/posixmodule.c -o Modules/posixmodule.o ar rc libpython3.2m.a Modules/_threadmodule.o Modules/signalmodule.o Modules/posixmodule.o Modules/errnomodule.o Modules/pwdmodule.o Modules/_sre.o Modules/_codecsmodule.o Modules/_weakref.o Modules/_functoolsmodule.o Modules/operator.o Modules/_collectionsmodule.o Modules/itertoolsmodule.o Modules/_localemodule.o Modules/_iomodule.o Modules/iobase.o Modules/fileio.o Modules/bytesio.o Modules/bufferedio.o Modules/textio.o Modules/stringio.o Modules/zipimport.o Modules/symtablemodule.o Modules/xxsubtype.o building '_posixsubprocess' extension gcc -pthread -fPIC -fno-strict-aliasing -DNDEBUG -g -fwrapv -O3 -Wall -Wstrict-prototypes -IInclude -I/home/oznahum/localroot/include -I. -I./Include -I/usr/local/include -I/home/oznahum/tmp/Python-3.2.3 -c /home/oznahum/tmp/Python-3.2.3/Modules/_posixsubprocess.c -o build/temp.linux-x86_64-3.2/home/oznahum/tmp/Python-3.2.3/Modules/_posixsubprocess.o _posixsubprocess

    Read the article

  • Challenge: Neater way of currying or partially applying C#4's string.Join

    - by Damian Powell
    Background I recently read that .NET 4's System.String class has a new overload of the Join method. This new overload takes a separator, and an IEnumerable<T> which allows arbitrary collections to be joined into a single string without the need to convert to an intermediate string array. Cool! That means I can now do this: var evenNums = Enumerable.Range(1, 100) .Where(i => i%2 == 0); var list = string.Join(",",evenNums); ...instead of this: var evenNums = Enumerable.Range(1, 100) .Where(i => i%2 == 0) .Select(i => i.ToString()) .ToArray(); var list = string.Join(",", evenNums); ...thus saving on a conversion of every item to a string, and then the allocation of an array. The Problem However, being a fan of the functional style of programming in general, and method chaining in C# in particular, I would prefer to be able to write something like this: var list = Enumerable.Range(1, 100) .Where(i => i%2 == 0) .string.Join(","); This is not legal C# though. The closest I've managed to get is this: var list = Enumerable.Range(1, 100) .Where(i => i%2 == 0) .ApplyTo( Functional.Curry<string, IEnumerable<object>, string> (string.Join)(",") ); ...using the following extension methods: public static class Functional { public static TRslt ApplyTo<TArg, TRslt>(this TArg arg, Func<TArg, TRslt> func) { return func(arg); } public static Func<T1, Func<T2, TResult>> Curry<T1, T2, TResult>(this Func<T1, T2, TResult> func) { Func<Func<T1, T2, TResult>, Func<T1, Func<T2, TResult>>> curried = f => x => y => f(x, y); return curried(func); } } This is quite verbose, requires explicit definition of the parameters and return type of the string.Join overload I want to use, and relies upon C#4's variance features because we are defining one of the arguments as IEnumerable rather than IEnumerable. The Challenge Can you find a neater way of achieving this using the method-chaining style of programming?
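
    For comparison, if the goal is simply left-to-right chaining rather than currying for its own sake, a thin extension method over the new overload gets there with no Func plumbing at all. JoinWith is a made-up name for this sketch:

        using System.Collections.Generic;

        public static class StringJoinExtensions
        {
            // Thin wrapper over .NET 4's string.Join(string, IEnumerable<T>) so the call
            // reads left to right in a method chain.
            public static string JoinWith<T>(this IEnumerable<T> source, string separator)
            {
                return string.Join(separator, source);
            }
        }

        // Usage:
        // var list = Enumerable.Range(1, 100).Where(i => i % 2 == 0).JoinWith(",");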

    Read the article
