Search Results

Search found 3390 results on 136 pages for 'func delegate'.


  • How to Zip one IEnumerable with itself

    - by wageoghe
    I am implementing some math algorithms based on lists of points, like Distance, Area, Centroid, etc., just like in this post: http://stackoverflow.com/questions/2227828/find-the-distance-required-to-navigate-a-list-of-points-using-linq That post describes how to calculate the total distance of a sequence of points (taken in order) by essentially zipping the sequence "with itself", generating the second sequence for Zip by offsetting the start position of the original IEnumerable by 1. So, given the Zip extension in .NET 4.0, assuming Point for the point type and a reasonable Distance formula, you can make calls like this to generate a sequence of distances from one point to the next and then sum the distances:

        var distances = points.Zip(points.Skip(1), Distance);
        double totalDistance = distances.Sum();

    Area and Centroid calculations are similar in that they need to iterate over the sequence, processing each pair of points (points[i] and points[i+1]). I thought of making a generic IEnumerable extension suitable for implementing these (and possibly other) algorithms that operate over sequences, taking two items at a time (points[0] and points[1], points[1] and points[2], ..., points[n-2] and points[n-1]) and applying a function. My generic iterator would have a similar signature to Zip, but it would not receive a second sequence to zip with, as it is really just going to zip with itself. My first try looks like this:

        public static IEnumerable<TResult> ZipMyself<TSequence, TResult>(
            this IEnumerable<TSequence> seq,
            Func<TSequence, TSequence, TResult> resultSelector)
        {
            return seq.Zip(seq.Skip(1), resultSelector);
        }

    With my generic iterator in place, I can write functions like this:

        public static double Length(this IEnumerable<Point> points)
        {
            return points.ZipMyself(Distance).Sum();
        }

    and call it like this:

        double d = points.Length();

    and

        double GreensTheorem(Point p1, Point p2)
        {
            return p1.X * p2.Y - p1.Y * p2.X;
        }

        public static double SignedArea(this IEnumerable<Point> points)
        {
            return points.ZipMyself(GreensTheorem).Sum() / 2.0;
        }

        public static double Area(this IEnumerable<Point> points)
        {
            return Math.Abs(points.SignedArea());
        }

        public static bool IsClockwise(this IEnumerable<Point> points)
        {
            return points.SignedArea() < 0;
        }

    and call them like this:

        double a = points.Area();
        bool isClockwise = points.IsClockwise();

    In this case, is there any reason NOT to implement ZipMyself in terms of Zip and Skip(1)? Is there already something in LINQ that automates this (zipping a list with itself)? Not that it needs to be made much easier ;-) Also, is there a better name for the extension that might reflect that it is a well-known pattern (if, indeed, it is a well-known pattern)? (I had a link here to a StackOverflow question about area calculation, question 2432428, and a link to the Wikipedia article on Centroid; just search Wikipedia for Centroid if interested. I'm just starting out, so I don't have enough rep to post more than one link.)
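    For what it's worth, this pairwise pattern does have a common name: MoreLINQ, for example, ships a Pairwise operator. One caveat of the Zip/Skip(1) formulation is that it enumerates the source sequence twice. A minimal single-pass sketch (an illustrative alternative, not part of the original question):

        using System;
        using System.Collections.Generic;

        public static class EnumerableExtensions
        {
            public static IEnumerable<TResult> Pairwise<TSource, TResult>(
                this IEnumerable<TSource> source,
                Func<TSource, TSource, TResult> resultSelector)
            {
                // Single pass: remember the previous element instead of enumerating twice.
                using (IEnumerator<TSource> e = source.GetEnumerator())
                {
                    if (!e.MoveNext())
                        yield break; // empty sequence -> empty result
                    TSource previous = e.Current;
                    while (e.MoveNext())
                    {
                        yield return resultSelector(previous, e.Current);
                        previous = e.Current;
                    }
                }
            }
        }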

    Read the article

  • What is SharePoint Out of the Box?

    - by Bil Simser
    It’s always fun in the blog-o-sphere, and SharePoint bloggers always keep the pot boiling. Bjorn Furuknap recently posted a blog entry titled "Why Out-of-the-Box Makes No Sense in SharePoint", quickly followed by a rebuttal from Marc Anderson on his blog. Okay, now that we have all the players and the stage, what’s the big deal?

    Bjorn started his post saying that you don’t use "out-of-the-box" (OOTB) SharePoint because it makes no sense. I have to disagree with his premise, because what he calls OOTB is basically installing SharePoint and admiring it, but not using it. In his post he claims that modifying, say, the OOTB contacts list by removing (or I suppose adding) a column now puts you in a situation where you’re no longer using the OOTB functionality. Really?

    Side note. Dear Internet, please stop comparing building software to building houses. Or comparing software architecture to building architecture. Or comparing web sites to making dinner. Are you trying to dumb down something so the general masses understand it? Comparing a technical skill to a construction operation isn’t the way to do this. Last time I checked, most people don’t know how to build houses, and people reading technical SharePoint blogs are generally technical people who understand the terms you use. Putting metaphors around software development to make it easy to understand is detrimental to the goal. </rant>

    Okay, where were we? Right: adding columns to lists means you are no longer using the OOTB functionality. Yeah, I still don’t get it. Another statement Bjorn makes is that using the OOTB functionality kills the flexibility SharePoint has in creating exactly what you want. IMHO this really flies in the absolute face of *where* SharePoint *really* shines. For the past year or so I’ve been leaning more and more towards OOTB solutions over custom development, for the simple reason that it’s expensive to maintain systems and code and assets. SharePoint has enabled me to do this simply by providing the tools where I can give users what they need without cracking open Visual Studio. This might be because my day job is with a regulated company and there’s more scrutiny in spending money on anything new, but frankly that should be the position of any responsible developer, architect, manager, or PM. Do you really want to throw money away because some developer tells you that you need a custom web part, when perhaps with some creative thinking or expectation setting with customers you can meet the need with what you already have?

    The way I read Bjorn’s terminology of "out-of-the-box" is: install the software and tell people to go to a website and admire the OOTB system, but don’t change it! For those that know things like WordPress, DotNetNuke, SubText, Drupal, or any of those content management/blogging systems, it’s akin to installing the software, setting up the "Hello World" blog post or page, then staring at it like it’s useful. "Yes, we are using WordPress!" Then never adding a new post, creating a new category, or adding an About page. Perhaps I’m wrong in my interpretation.

    This leads us to: what is OOTB SharePoint? To many people I’ve talked to over the last few hours on twitter, email, etc., it is *not* just installing the software but actually using it as it was fit for purpose. What’s the purpose of SharePoint then?
    It has many purposes, but using the OOTB templates Microsoft has given you the ability to collaborate on projects, author/share/publish documents, create pages, track items/contacts/tasks/etc. in a multi-user web-based interface, and so on. Microsoft has pretty clear definitions of the different levels of SharePoint we’re talking about, and I think it’s important for everyone to know what they are and what they mean.

    Personalization and Administration

    To me, this is the OOTB experience. You install the product and then are able to do things like create new lists and sites, edit and personalize pages, create new views, etc. Basically, use the platform services available to you with Windows SharePoint Services (or SharePoint Foundation in 2010) to your full advantage. No code, no special tools needed, and very little user training required. Could you take someone who has never done anything in a website or piece of software and unleash them onto a site? Probably not. However, I would argue that anyone who’s configured the Outlook reading layout or applied styles to a Word document probably won’t have too much difficulty in using SharePoint OUT OF THE BOX.

    Customization

    Here’s where things might get a bit murky, but to me this is where you start looking at HTML/ASPX page code through SharePoint Designer, using jQuery scripts and plugging them into Web Part Pages via a Content Editor Web Part, and generally enhancing the site. The JavaScript debate might kick in here, claiming it’s no different than C#, and frankly you can totally screw a site up with jQuery on a CEWP just as easily as you can with a C# delegate control deployed to the server file system. However (again, my blog, my opinion) the customization label comes in when I need to access the server (for example, creating a custom theme) or have some kind of net-new element I add to the system that wasn’t there OOTB. It’s not content (like a new list or site); it’s code, and it does something functional.

    Development

    Here’s where the propeller hats come on and we’re talking algorithms and unit tests and compilers, oh my. Software is deployed to the server, people are writing solutions after some kind of training (perhaps), there might be some specialized tools they use to craft and deploy the solutions, there’s the possibility of exceptions being thrown, etc. There are a lot of definitions here, and just like customization it might get murky (do you let non-developers build solutions using development, i.e. jQuery/C#?).

    In my experience, it’s much more cost effective keeping solutions under the first two umbrellas than leaping into the third one. Arguably you could say that you can’t build useful solutions without *some* kind of code (even just some simple jQuery), but I think you can get a *lot* of value just from the OOTB experience, and I don’t think you’re constraining your users that much.

    I’m not saying Marc or Bjorn are wrong. Like Obi-Wan stated, they’re both correct "from a certain point of view". To me, SharePoint out of the box makes total sense and should not be dismissed. I just don’t agree with the premise Bjorn is basing his statements on, but that’s just my opinion, and his is different, and never the twain shall meet.

    Read the article

  • Launching a WPF Window in a Separate Thread, Part 1

    - by Reed
    Typically, I strongly recommend keeping the user interface within an application’s main thread, and using multiple threads to move the actual “work” into background threads. However, there are rare times when creating a separate, dedicated thread for a Window can be beneficial. This is even acknowledged in the MSDN samples, such as the Multiple Windows, Multiple Threads sample. However, doing this correctly is difficult. Even the referenced MSDN sample has major flaws, and will fail horribly in certain scenarios. To ease this, I wrote a small class that alleviates some of the difficulties involved.

    The MSDN Multiple Windows, Multiple Threads sample shows how to launch a new thread with a WPF Window, and will work in most cases. The sample code (commented and slightly modified) works out to the following:

        // Create a thread
        Thread newWindowThread = new Thread(new ThreadStart(() =>
        {
            // Create and show the Window
            Window1 tempWindow = new Window1();
            tempWindow.Show();
            // Start the Dispatcher Processing
            System.Windows.Threading.Dispatcher.Run();
        }));
        // Set the apartment state
        newWindowThread.SetApartmentState(ApartmentState.STA);
        // Make the thread a background thread
        newWindowThread.IsBackground = true;
        // Start the thread
        newWindowThread.Start();

    This sample creates a thread, marks it as single-threaded apartment state, and starts the Dispatcher on that thread. Those are the minimum requirements to get a Window displaying and handling messages correctly but, unfortunately, this has some serious flaws.

    The first issue: given the code in the sample, the created thread will run continuously until the application shuts down. The problem is that the ThreadStart delegate used ends with running the Dispatcher; however, nothing ever stops the Dispatcher processing. The thread was created as a background thread, which prevents it from keeping the application alive, but the Dispatcher will continue to pump dispatcher frames until the application shuts down. In order to fix this, we need to call Dispatcher.InvokeShutdown after the Window is closed. This requires modifying the above sample to subscribe to the Window’s Closed event and, at that point, shut down the Dispatcher:

        // Create a thread
        Thread newWindowThread = new Thread(new ThreadStart(() =>
        {
            Window1 tempWindow = new Window1();
            // When the window closes, shut down the dispatcher
            tempWindow.Closed += (s, e) =>
                Dispatcher.CurrentDispatcher.BeginInvokeShutdown(DispatcherPriority.Background);
            tempWindow.Show();
            // Start the Dispatcher Processing
            System.Windows.Threading.Dispatcher.Run();
        }));
        // Setup and start thread as before

    This eliminates the first issue. Now, when the Window is closed, the new thread’s Dispatcher will shut itself down, which in turn will cause the thread to complete. The above code will work correctly for most situations.
    However, there is still a potential problem which could arise depending on the content of the Window1 class. This is particularly nasty, as the code could easily work for most windows, but fail on others. The problem is that, at the point where the Window is constructed, there is no active SynchronizationContext. This is unlikely to be a problem in most cases, but it is an absolute requirement if code within the constructor of Window1 relies on a context being in place.

    While this sounds like an edge case, it’s fairly common. For example, if a BackgroundWorker is started within the constructor, or a TaskScheduler is built using TaskScheduler.FromCurrentSynchronizationContext() with the expectation of synchronizing work to the UI thread, an exception will be raised at some point. Both of these classes rely on a proper context being installed to SynchronizationContext.Current, which happens automatically, but not until Dispatcher.Run is called. In the above case, SynchronizationContext.Current will return null during the Window’s construction, which can cause exceptions to occur or unexpected behavior.

    Luckily, this is fairly easy to correct. We need to do three things, in order, prior to creating our Window:

        1. Create and initialize the Dispatcher for the new thread manually
        2. Create a synchronization context for the thread which uses the Dispatcher
        3. Install the synchronization context

    Creating the Dispatcher is quite simple: the Dispatcher.CurrentDispatcher property gets the current thread’s Dispatcher and “creates a new Dispatcher if one is not already associated with the thread.” Once we have the correct Dispatcher, we can create a SynchronizationContext which uses the dispatcher by creating a DispatcherSynchronizationContext. Finally, this synchronization context can be installed as the current thread’s context via SynchronizationContext.SetSynchronizationContext. These three steps can easily be added to the above via a single line of code:

        // Create a thread
        Thread newWindowThread = new Thread(new ThreadStart(() =>
        {
            // Create our context, and install it:
            SynchronizationContext.SetSynchronizationContext(
                new DispatcherSynchronizationContext(
                    Dispatcher.CurrentDispatcher));

            Window1 tempWindow = new Window1();
            // When the window closes, shut down the dispatcher
            tempWindow.Closed += (s, e) =>
                Dispatcher.CurrentDispatcher.BeginInvokeShutdown(DispatcherPriority.Background);
            tempWindow.Show();
            // Start the Dispatcher Processing
            System.Windows.Threading.Dispatcher.Run();
        }));
        // Setup and start thread as before

    This now forces the synchronization context to be in place before the Window is created, and correctly shuts down the Dispatcher when the window closes. However, there are quite a few steps. In my next post, I’ll show how to make this operation more reusable by creating a class with a far simpler API…
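    A rough sketch of what such a wrapper could look like (the class name and API here are hypothetical placeholders, not the actual class from the next post), simply packaging the steps above:

        using System;
        using System.Threading;
        using System.Windows;
        using System.Windows.Threading;

        // Hypothetical helper: bundles the STA thread, synchronization context,
        // window creation, and dispatcher shutdown shown above.
        public static class WindowThreadLauncher
        {
            public static Thread Launch(Func<Window> windowFactory)
            {
                Thread newWindowThread = new Thread(() =>
                {
                    // Install the context before the Window is constructed
                    SynchronizationContext.SetSynchronizationContext(
                        new DispatcherSynchronizationContext(Dispatcher.CurrentDispatcher));

                    Window window = windowFactory();
                    // Shut down the dispatcher (and thread) when the window closes
                    window.Closed += (s, e) =>
                        Dispatcher.CurrentDispatcher.BeginInvokeShutdown(DispatcherPriority.Background);
                    window.Show();

                    Dispatcher.Run();
                });
                newWindowThread.SetApartmentState(ApartmentState.STA);
                newWindowThread.IsBackground = true;
                newWindowThread.Start();
                return newWindowThread;
            }
        }

    Usage would then collapse to a single call, e.g. WindowThreadLauncher.Launch(() => new Window1());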

    Read the article

  • Entity Framework v1 – Tips and Tricks Part 3

    - by Rohit Gupta
    General tips on Entity Framework v1 & LINQ to Entities:

    ToTraceString()

    If you need to know the underlying SQL that the EF generates for a LINQ to Entities query, use the ToTraceString() method of the ObjectQuery class (or use LINQPad). Note that you need to cast the LINQ to Entities query to ObjectQuery before calling ToTraceString(), as follows:

        string efSQL = ((ObjectQuery)from c in ctx.Contact
                       where c.Address.Any(a => a.CountryRegion == "US")
                       select c.ContactID).ToTraceString();

    ================================================================================

    MARS or MultipleActiveResultSets

    When you create an EDM model (EDMX file) from the database using Visual Studio, it generates a connection string with the same name as the name of the EntityContainer in the CSDL. In the connection string so generated, it sets the MultipleActiveResultSets attribute to true by default. So if you are running the following query, it streams multiple readers over the same connection:

        using (BAEntities context = new BAEntities())
        {
            var cons = from con in context.Contacts
                       where con.FirstName == "Jose"
                       select con;
            foreach (var c in cons)
            {
                if (c.AddDate < new System.DateTime(2007, 1, 1))
                {
                    c.Addresses.Load();
                }
            }
        }

    =================================================================================

    Explicitly opening and closing the EntityConnection

    When you call ToList() or foreach on a LINQ to Entities query, the EF automatically closes the connection after all the records from the query have been consumed. Thus if you need to run many LINQ to Entities queries over the same connection, explicitly open and close the connection as follows:

        using (BAEntities context = new BAEntities())
        {
            context.Connection.Open();
            var cons = from con in context.Contacts
                       where con.FirstName == "Jose"
                       select con;
            var conList = cons.ToList();
            var allCustomers = from con in context.Contacts.OfType<Customer>()
                               select con;
            var allcustList = allCustomers.ToList();
            context.Connection.Close();
        }

    ======================================================================

    Dispose of the ObjectContext only if required

    After you retrieve entities using the ObjectContext, if you are not explicitly disposing of the ObjectContext, ensure that your code consumes all the records from the LINQ to Entities query by calling .ToList() or a foreach statement; otherwise the database connection will remain open and will only be closed by the garbage collector when it gets around to disposing of the ObjectContext. Secondly, if you are making updates to the entities retrieved using LINQ to Entities, ensure that you don't inadvertently dispose of the ObjectContext after the entities are retrieved and before calling .SaveChanges(), since you need the SAME ObjectContext to keep track of changes made to the entities (by using ObjectStateEntry objects). So if you do need to explicitly dispose of the ObjectContext, do so only after calling SaveChanges(), and only if you don't need to change-track the retrieved entities any further.

    =======================================================================

    SQL injection attacks under control with EFv1

    LINQ to Entities and LINQ to SQL queries are parameterized before they are sent to the DB, hence they are not vulnerable to SQL injection attacks. Entity SQL may be slightly vulnerable to attacks, since it does not use parameterized queries. However, Entity SQL demands that the query be valid Entity SQL syntax and valid native SQL syntax at the same time.
    So the only way one can do a SQL injection attack is by knowing the SSDL of the EDM model, being able to write correct Entity SQL (note that one cannot append regular SQL, since then the query would not be valid Entity SQL syntax), and appending it to a parameter.

    ======================================================================

    Improving performance

    You can convert the EntitySets and AssociationSets in an EDM model into precompiled views using the edmgen utility. For example, the Customer entity can be converted into a precompiled view using edmgen, and all LINQ to Entities queries against the context.Customer EntitySet will use the precompiled view instead of the EntitySet itself (the same being true for relationships, i.e. the EntityReference and EntityCollection properties of an entity). The advantage is that performance is much better when using precompiled views. The syntax for generating precompiled views for an existing EF project is:

        edmgen /mode:ViewGeneration /inssdl:BAModel.ssdl /incsdl:BAModel.csdl /inmsl:BAModel.msl /p:Chap14.csproj

    Note that this only generates precompiled views for EntitySets and Associations, not for existing LINQ to Entities queries in the project (for those, use CompiledQuery.Compile<>).

    Secondly, if you have a LINQ to Entities query that you need to run multiple times, you should precompile the query using the CompiledQuery.Compile method. The CompiledQuery.Compile<> method accepts a lambda expression as a parameter, which denotes the LINQ to Entities query that you need to precompile. The following is an example of a lambda that we can pass into the CompiledQuery.Compile() method:

        Expression<Func<BAEntities, string, IQueryable<Customer>>> expr =
            (BAEntities ctx1, string loc) =>
                from c in ctx1.Contacts.OfType<Customer>()
                where c.Reservations.Any(r => r.Trip.Destination.DestinationName == loc)
                select c;

    Then we invoke the compiled query as follows:

        var query = CompiledQuery.Compile<BAEntities, string, IQueryable<Customer>>(expr);

        using (BAEntities ctx = new BAEntities())
        {
            var loc = "Malta";
            IQueryable<Customer> custs = query.Invoke(ctx, loc);
            var custlist = custs.ToList();
            foreach (var item in custlist)
            {
                Console.WriteLine(item.FullName);
            }
        }

    Note that if you create an ObjectQuery or an Entity SQL query instead of the LINQ to Entities query, you don't need precompilation. An example of an Entity SQL query:

        string esql = "SELECT VALUE c from Contacts AS c where c is of(BAGA.Customer) and c.LastName = 'Gupta'";
        ObjectQuery<Customer> custs = CreateQuery<Customer>(esql);

    An example of an ObjectQuery built using query builder methods:

        from c in Contacts.OfType<Customer>().Where("it.LastName == 'Gupta'")
        select c

    This is because the query plan is cached and thus the performance improves a bit; however, since the ObjectQuery or Entity SQL query still needs to materialize the results into entities, it takes the same performance hit as LINQ to Entities. Note, though, that not ALL Entity SQL based or query-builder based ObjectQuery plans are cached, so if you are in doubt, always create a compiled LINQ to Entities query and use that instead.

    ============================================================

    GetObjectStateEntry versus GetObjectByKey

    We can get to the entity referenced by an ObjectStateEntry via its Entity property, and there are helper methods on the ObjectStateManager (osm.TryGetObjectStateEntry) to get the ObjectStateEntry for an entity for which we know the EntityKey.
    Similarly, the ObjectContext has helper methods to get an entity, i.e. TryGetObjectByKey(). TryGetObjectByKey() uses the GetObjectStateEntry method under the covers to find the object; however, one important difference between these two methods is that TryGetObjectByKey queries the database if it is unable to find the object in the context, whereas TryGetObjectStateEntry only looks in the context for existing entries. It will not make a trip to the database.

    =============================================================

    POCO objects with EFv1

    To create POCO objects that can be used with EFv1, we need to implement 3 key interfaces:

        IEntityWithKey
        IEntityWithRelationships
        IEntityWithChangeTracker

    Implementing IEntityWithKey is not mandatory, but if you don't, you need to explicitly provide values for the EntityKey in various functions (e.g. the functions needed to implement IEntityWithChangeTracker and IEntityWithRelationships). Implementing IEntityWithKey involves exposing a property named EntityKey which returns an EntityKey object.

    Implementing IEntityWithChangeTracker involves implementing a method named SetChangeTracker, since there can be multiple change trackers (ObjectContexts) existing in memory at the same time:

        public void SetChangeTracker(IEntityChangeTracker changeTracker)
        {
            _changeTracker = changeTracker;
        }

    Additionally, each property in the POCO object needs to notify the change tracker (ObjectContext) that it is updating itself, by calling the EntityMemberChanged and EntityMemberChanging methods on the change tracker. For example:

        public EntityKey EntityKey
        {
            get { return _entityKey; }
            set
            {
                if (_changeTracker != null)
                {
                    _changeTracker.EntityMemberChanging("EntityKey");
                    _entityKey = value;
                    _changeTracker.EntityMemberChanged("EntityKey");
                }
                else
                    _entityKey = value;
            }
        }

        // ===================== Custom property ====================================

        [EdmScalarPropertyAttribute(IsNullable = false)]
        public System.DateTime OrderDate
        {
            get { return _orderDate; }
            set
            {
                if (_changeTracker != null)
                {
                    _changeTracker.EntityMemberChanging("OrderDate");
                    _orderDate = value;
                    _changeTracker.EntityMemberChanged("OrderDate");
                }
                else
                    _orderDate = value;
            }
        }

    Finally, you also need to create the EntityState property as follows:

        public EntityState EntityState
        {
            get { return _changeTracker.EntityState; }
        }

    IEntityWithRelationships involves creating a property that returns a RelationshipManager object:

        public RelationshipManager RelationshipManager
        {
            get
            {
                if (_relManager == null)
                    _relManager = RelationshipManager.Create(this);
                return _relManager;
            }
        }

    ============================================================

    Tip: ProviderManifestToken – change the EDMX file to use SQL 2008 instead of SQL 2005

    To use SQL Server 2008, edit the EDMX file (the raw XML), changing the ProviderManifestToken in the SSDL attributes from "2005" to "2008".

    =============================================================

    With EFv1 we cannot use structs to replace an anonymous type when doing projections in a LINQ to Entities query. While the same is supported with LINQ to SQL, it is not with LINQ to Entities. For example, the following is not supported with LINQ to Entities, since only parameterless constructors and initializers are supported in LINQ to Entities
    (the same works with LINQ to SQL):

        public struct CompanyInfo
        {
            public int ID { get; set; }
            public string Name { get; set; }
        }

        var companies = (from c in dc.Companies
                         where c.CompanyIcon == null
                         select new CompanyInfo { Name = c.CompanyName, ID = c.CompanyId }).ToList();
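    A minimal sketch of the usual workaround (assuming the same hypothetical dc.Companies source as above): project into a reference type instead, since a parameterless constructor plus an object initializer is exactly what LINQ to Entities supports:

        // Workaround sketch (hypothetical type name): a class with a parameterless
        // constructor and settable properties works where the struct does not.
        public class CompanyInfoDto
        {
            public int ID { get; set; }
            public string Name { get; set; }
        }

        var companyDtos = (from c in dc.Companies
                           where c.CompanyIcon == null
                           select new CompanyInfoDto { Name = c.CompanyName, ID = c.CompanyId }).ToList();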

    Read the article

  • Django - no module named app

    - by Koran
    Hi, I have been trying to get an application written in Django working, but it is not working at all. I have been working on it for some time too, and it works on the dev server perfectly. But I am unable to put it in the production environment (Apache). My project name is apstat and the app name is basic. I try to access it as follows:

        http://hostname/apstat

    But it shows the following error:

        MOD_PYTHON ERROR

        ProcessId:      6002
        Interpreter:    'domU-12-31-39-06-DD-F4.compute-1.internal'
        ServerName:     'domU-12-31-39-06-DD-F4.compute-1.internal'
        DocumentRoot:   '/home/ubuntu/server/'
        URI:            '/apstat/'
        Location:       '/apstat'
        Directory:      None
        Filename:       '/home/ubuntu/server/apstat/'
        PathInfo:       ''
        Phase:          'PythonHandler'
        Handler:        'django.core.handlers.modpython'

        Traceback (most recent call last):
          File "/usr/lib/python2.6/dist-packages/mod_python/importer.py", line 1537, in HandlerDispatch
            default=default_handler, arg=req, silent=hlist.silent)
          File "/usr/lib/python2.6/dist-packages/mod_python/importer.py", line 1229, in _process_target
            result = _execute_target(config, req, object, arg)
          File "/usr/lib/python2.6/dist-packages/mod_python/importer.py", line 1128, in _execute_target
            result = object(arg)
          File "/usr/lib/pymodules/python2.6/django/core/handlers/modpython.py", line 228, in handler
            return ModPythonHandler()(req)
          File "/usr/lib/pymodules/python2.6/django/core/handlers/modpython.py", line 201, in __call__
            response = self.get_response(request)
          File "/usr/lib/pymodules/python2.6/django/core/handlers/base.py", line 134, in get_response
            return self.handle_uncaught_exception(request, resolver, exc_info)
          File "/usr/lib/pymodules/python2.6/django/core/handlers/base.py", line 154, in handle_uncaught_exception
            return debug.technical_500_response(request, *exc_info)
          File "/usr/lib/pymodules/python2.6/django/views/debug.py", line 40, in technical_500_response
            html = reporter.get_traceback_html()
          File "/usr/lib/pymodules/python2.6/django/views/debug.py", line 114, in get_traceback_html
            return t.render(c)
          File "/usr/lib/pymodules/python2.6/django/template/__init__.py", line 178, in render
            return self.nodelist.render(context)
          File "/usr/lib/pymodules/python2.6/django/template/__init__.py", line 779, in render
            bits.append(self.render_node(node, context))
          File "/usr/lib/pymodules/python2.6/django/template/debug.py", line 81, in render_node
            raise wrapped
        TemplateSyntaxError: Caught an exception while rendering: No module named basic

        Original Traceback (most recent call last):
          File "/usr/lib/pymodules/python2.6/django/template/debug.py", line 71, in render_node
            result = node.render(context)
          File "/usr/lib/pymodules/python2.6/django/template/debug.py", line 87, in render
            output = force_unicode(self.filter_expression.resolve(context))
          File "/usr/lib/pymodules/python2.6/django/template/__init__.py", line 572, in resolve
            new_obj = func(obj, *arg_vals)
          File "/usr/lib/pymodules/python2.6/django/template/defaultfilters.py", line 687, in date
            return format(value, arg)
          File "/usr/lib/pymodules/python2.6/django/utils/dateformat.py", line 269, in format
            return df.format(format_string)
          File "/usr/lib/pymodules/python2.6/django/utils/dateformat.py", line 30, in format
            pieces.append(force_unicode(getattr(self, piece)()))
          File "/usr/lib/pymodules/python2.6/django/utils/dateformat.py", line 175, in r
            return self.format('D, j M Y H:i:s O')
          File "/usr/lib/pymodules/python2.6/django/utils/dateformat.py", line 30, in format
            pieces.append(force_unicode(getattr(self, piece)()))
          File "/usr/lib/pymodules/python2.6/django/utils/encoding.py", line 71, in force_unicode
            s = unicode(s)
          File "/usr/lib/pymodules/python2.6/django/utils/functional.py", line 201, in __unicode_cast
            return self.__func(*self.__args, **self.__kw)
          File "/usr/lib/pymodules/python2.6/django/utils/translation/__init__.py", line 62, in ugettext
            return real_ugettext(message)
          File "/usr/lib/pymodules/python2.6/django/utils/translation/trans_real.py", line 286, in ugettext
            return do_translate(message, 'ugettext')
          File "/usr/lib/pymodules/python2.6/django/utils/translation/trans_real.py", line 276, in do_translate
            _default = translation(settings.LANGUAGE_CODE)
          File "/usr/lib/pymodules/python2.6/django/utils/translation/trans_real.py", line 194, in translation
            default_translation = _fetch(settings.LANGUAGE_CODE)
          File "/usr/lib/pymodules/python2.6/django/utils/translation/trans_real.py", line 180, in _fetch
            app = import_module(appname)
          File "/usr/lib/pymodules/python2.6/django/utils/importlib.py", line 35, in import_module
            __import__(name)
        ImportError: No module named basic

    My settings.py is as follows:

        INSTALLED_APPS = (
            'django.contrib.auth',
            'django.contrib.contenttypes',
            'django.contrib.sessions',
            'django.contrib.sites',
            'apstat.basic',
            'django.contrib.admin',
        )

    If I remove apstat.basic, it goes through, but that is not a solution. Is it something I am doing in Apache? My Apache settings are:

        <VirtualHost *:80>
            ServerAdmin webmaster@localhost
            DocumentRoot /home/ubuntu/server/

            <Directory />
                Options None
                AllowOverride None
            </Directory>

            <Directory /home/ubuntu/server/apstat>
                AllowOverride None
                Order allow,deny
                allow from all
            </Directory>

            <Location "/apstat">
                SetHandler python-program
                PythonHandler django.core.handlers.modpython
                SetEnv DJANGO_SETTINGS_MODULE apstat.settings
                PythonOption django.root /home/ubuntu/server/
                PythonDebug On
                PythonPath "['/home/ubuntu/server/'] + sys.path"
            </Location>
        </VirtualHost>

    I have now sat for more than a day on this. If someone can help me out, it would be very nice.

    Read the article

  • Understanding LINQ to SQL (11) Performance

    - by Dixin
    [LINQ via C# series]

    LINQ to SQL has a lot of great features, like strong typing, query compilation, deferred execution, a declarative paradigm, etc., which are very productive. Of course, these cannot be free, and one price is performance.

    O/R mapping overhead

    Because LINQ to SQL is based on O/R mapping, one obvious overhead is that data changing usually requires data retrieving:

        private static void UpdateProductUnitPrice(int id, decimal unitPrice)
        {
            using (NorthwindDataContext database = new NorthwindDataContext())
            {
                Product product = database.Products.Single(item => item.ProductID == id); // SELECT...
                product.UnitPrice = unitPrice; // UPDATE...
                database.SubmitChanges();
            }
        }

    Before updating an entity, that entity has to be retrieved by an extra SELECT query. This is slower than a direct data update via ADO.NET:

        private static void UpdateProductUnitPrice(int id, decimal unitPrice)
        {
            using (SqlConnection connection = new SqlConnection(
                "Data Source=localhost;Initial Catalog=Northwind;Integrated Security=True"))
            using (SqlCommand command = new SqlCommand(
                @"UPDATE [dbo].[Products] SET [UnitPrice] = @UnitPrice WHERE [ProductID] = @ProductID",
                connection))
            {
                command.Parameters.Add("@ProductID", SqlDbType.Int).Value = id;
                command.Parameters.Add("@UnitPrice", SqlDbType.Money).Value = unitPrice;
                connection.Open();
                command.Transaction = connection.BeginTransaction();
                command.ExecuteNonQuery(); // UPDATE...
                command.Transaction.Commit();
            }
        }

    The above imperative code specifies the “how to do” details, with better performance. For the same reason, some articles on the Internet insist that, when updating data via LINQ to SQL, the above declarative code should be replaced by:

        private static void UpdateProductUnitPrice(int id, decimal unitPrice)
        {
            using (NorthwindDataContext database = new NorthwindDataContext())
            {
                database.ExecuteCommand(
                    "UPDATE [dbo].[Products] SET [UnitPrice] = {0} WHERE [ProductID] = {1}",
                    unitPrice, id);
            }
        }

    Or just create a stored procedure:

        CREATE PROCEDURE [dbo].[UpdateProductUnitPrice]
        (
            @ProductID INT,
            @UnitPrice MONEY
        )
        AS
        BEGIN
            BEGIN TRANSACTION
            UPDATE [dbo].[Products] SET [UnitPrice] = @UnitPrice WHERE [ProductID] = @ProductID
            COMMIT TRANSACTION
        END

    and map it as a method of NorthwindDataContext (explained in this post):

        private static void UpdateProductUnitPrice(int id, decimal unitPrice)
        {
            using (NorthwindDataContext database = new NorthwindDataContext())
            {
                database.UpdateProductUnitPrice(id, unitPrice);
            }
        }

    As a normal trade-off for O/R mapping, a decision has to be made between performance overhead and programming productivity, according to the case. From a developer’s perspective, if O/R mapping is chosen, I consistently choose the declarative LINQ code, unless this kind of overhead is unacceptable.

    Data retrieving overhead

    After talking about the O/R mapping specific issue, now look into the LINQ to SQL specific issues, for example, performance of the data retrieving process. The previous post has explained that the SQL translating and executing is complex. Actually, the LINQ to SQL pipeline is similar to a compiler pipeline.
    It consists of about 15 steps to translate a C# expression tree to a SQL statement, which can be categorized as:

        Convert: Invoke SqlProvider.BuildQuery() to convert the tree of Expression nodes into a tree of SqlNode nodes;
        Bind: Use the visitor pattern to figure out the meanings of names according to the mapping info, like a property for a column, etc.;
        Flatten: Figure out the hierarchy of the query;
        Rewrite: For SQL Server 2000, if needed;
        Reduce: Remove the unnecessary information from the tree;
        Format: Generate the SQL statement string;
        Parameterize: Figure out the parameters, for example, a reference to a local variable should be a parameter in SQL;
        Materialize: Execute the reader and convert the result back into typed objects.

    So for each data retrieval, even one which looks simple:

        private static Product[] RetrieveProducts(int productId)
        {
            using (NorthwindDataContext database = new NorthwindDataContext())
            {
                return database.Products.Where(product => product.ProductID == productId)
                               .ToArray();
            }
        }

    LINQ to SQL goes through the above steps to translate and execute the query. Fortunately, there is a built-in way to cache the translated query.

    Compiled query

    When such a LINQ to SQL query is executed repeatedly, CompiledQuery can be used to translate the query once and execute it multiple times:

        internal static class CompiledQueries
        {
            private static readonly Func<NorthwindDataContext, int, Product[]> _retrieveProducts =
                CompiledQuery.Compile((NorthwindDataContext database, int productId) =>
                    database.Products.Where(product => product.ProductID == productId).ToArray());

            internal static Product[] RetrieveProducts(
                this NorthwindDataContext database, int productId)
            {
                return _retrieveProducts(database, productId);
            }
        }

    The new version of RetrieveProducts() gets better performance, because only when _retrieveProducts is invoked for the first time does it internally invoke SqlProvider.Compile() to translate the query expression. It also uses a lock to make sure translation happens only once in multi-threading scenarios.

    Static SQL / stored procedures without translating

    Another way to avoid the translating overhead is to use static SQL or stored procedures, just as in the above examples. Because this is a functional programming series, this article does not dive into that. For the details, Scott Guthrie already has some excellent articles:

        LINQ to SQL (Part 6: Retrieving Data Using Stored Procedures)
        LINQ to SQL (Part 7: Updating our Database using Stored Procedures)
        LINQ to SQL (Part 8: Executing Custom SQL Expressions)

    Data changing overhead

    Looking into the data updating process, it also needs a lot of work:

        Begins a transaction
        Processes the changes (ChangeProcessor):
            Walks through the objects to identify the changes
            Determines the order of the changes
            Executes the changes:
                LINQ queries may be needed to execute the changes; like the first example in this article, an object needs to be retrieved before being changed, so the whole data retrieving process above is gone through
                If there is user customization, it will be executed; for example, a table’s INSERT / UPDATE / DELETE can be customized in the O/R designer

    It is important to keep this overhead in mind.
    Bulk deleting / updating

    Another thing to be aware of is bulk deleting:

        private static void DeleteProducts(int categoryId)
        {
            using (NorthwindDataContext database = new NorthwindDataContext())
            {
                database.Products.DeleteAllOnSubmit(
                    database.Products.Where(product => product.CategoryID == categoryId));
                database.SubmitChanges();
            }
        }

    The expected SQL should be something like:

        BEGIN TRANSACTION
        exec sp_executesql N'DELETE FROM [dbo].[Products] AS [t0] WHERE [t0].[CategoryID] = @p0',N'@p0 int',@p0=9
        COMMIT TRANSACTION

    However, as mentioned before, the actual SQL retrieves the entities and then deletes them one by one:

        -- Retrieves the entities to be deleted:
        exec sp_executesql N'SELECT [t0].[ProductID], [t0].[ProductName], [t0].[SupplierID], [t0].[CategoryID], [t0].[QuantityPerUnit], [t0].[UnitPrice], [t0].[UnitsInStock], [t0].[UnitsOnOrder], [t0].[ReorderLevel], [t0].[Discontinued] FROM [dbo].[Products] AS [t0] WHERE [t0].[CategoryID] = @p0',N'@p0 int',@p0=9

        -- Deletes the retrieved entities one by one:
        BEGIN TRANSACTION
        exec sp_executesql N'DELETE FROM [dbo].[Products] WHERE ([ProductID] = @p0) AND ([ProductName] = @p1) AND ([SupplierID] IS NULL) AND ([CategoryID] = @p2) AND ([QuantityPerUnit] IS NULL) AND ([UnitPrice] = @p3) AND ([UnitsInStock] = @p4) AND ([UnitsOnOrder] = @p5) AND ([ReorderLevel] = @p6) AND (NOT ([Discontinued] = 1))',N'@p0 int,@p1 nvarchar(4000),@p2 int,@p3 money,@p4 smallint,@p5 smallint,@p6 smallint',@p0=78,@p1=N'Optimus Prime',@p2=9,@p3=$0.0000,@p4=0,@p5=0,@p6=0
        exec sp_executesql N'DELETE FROM [dbo].[Products] WHERE ([ProductID] = @p0) AND ([ProductName] = @p1) AND ([SupplierID] IS NULL) AND ([CategoryID] = @p2) AND ([QuantityPerUnit] IS NULL) AND ([UnitPrice] = @p3) AND ([UnitsInStock] = @p4) AND ([UnitsOnOrder] = @p5) AND ([ReorderLevel] = @p6) AND (NOT ([Discontinued] = 1))',N'@p0 int,@p1 nvarchar(4000),@p2 int,@p3 money,@p4 smallint,@p5 smallint,@p6 smallint',@p0=79,@p1=N'Bumble Bee',@p2=9,@p3=$0.0000,@p4=0,@p5=0,@p6=0
        -- ...
        COMMIT TRANSACTION

    The same goes for bulk updating. This is really not effective and needs to be kept in mind. There are already some solutions on the Internet, like this one. The idea is to wrap the above SELECT statement into an INNER JOIN:

        exec sp_executesql N'DELETE [dbo].[Products] FROM [dbo].[Products] AS [j0]
        INNER JOIN (
            SELECT [t0].[ProductID], [t0].[ProductName], [t0].[SupplierID], [t0].[CategoryID], [t0].[QuantityPerUnit], [t0].[UnitPrice], [t0].[UnitsInStock], [t0].[UnitsOnOrder], [t0].[ReorderLevel], [t0].[Discontinued]
            FROM [dbo].[Products] AS [t0]
            WHERE [t0].[CategoryID] = @p0) AS [j1]
        ON ([j0].[ProductID] = [j1].[ProductID])', -- The primary key
        N'@p0 int',@p0=9

    Query plan overhead

    The last thing is about the SQL Server query plan. Before .NET 4.0, LINQ to SQL had an issue (not sure if it is a bug). LINQ to SQL internally uses ADO.NET, but it does not set the SqlParameter.Size for a variable-length argument, like an argument of NVARCHAR type, etc. So for two queries with the same SQL but different argument lengths:

        using (NorthwindDataContext database = new NorthwindDataContext())
        {
            database.Products.Where(product => product.ProductName == "A")
                    .Select(product => product.ProductID).ToArray();

            // The same SQL and argument type, different argument length.
            database.Products.Where(product => product.ProductName == "AA")
                    .Select(product => product.ProductID).ToArray();
        }

    Pay attention to the argument length in the translated SQL:

        exec sp_executesql N'SELECT [t0].[ProductID] FROM [dbo].[Products] AS [t0] WHERE [t0].[ProductName] = @p0',N'@p0 nvarchar(1)',@p0=N'A'
        exec sp_executesql N'SELECT [t0].[ProductID] FROM [dbo].[Products] AS [t0] WHERE [t0].[ProductName] = @p0',N'@p0 nvarchar(2)',@p0=N'AA'

    Here is the overhead: the first query’s cached query plan is not reused by the second one:

        SELECT sys.syscacheobjects.cacheobjtype, sys.dm_exec_cached_plans.usecounts, sys.syscacheobjects.[sql]
        FROM sys.syscacheobjects
        INNER JOIN sys.dm_exec_cached_plans
        ON sys.syscacheobjects.bucketid = sys.dm_exec_cached_plans.bucketid;

    They actually use different query plans. Again, pay attention to the argument length in the [sql] column (@p0 nvarchar(2) / @p0 nvarchar(1)).

    Fortunately, in .NET 4.0 this is fixed:

        internal static class SqlTypeSystem
        {
            private abstract class ProviderBase : TypeSystemProvider
            {
                protected int? GetLargestDeclarableSize(SqlType declaredType)
                {
                    SqlDbType sqlDbType = declaredType.SqlDbType;
                    if (sqlDbType <= SqlDbType.Image)
                    {
                        switch (sqlDbType)
                        {
                            case SqlDbType.Binary:
                            case SqlDbType.Image:
                                return 8000;
                        }
                        return null;
                    }
                    if (sqlDbType == SqlDbType.NVarChar)
                    {
                        return 4000; // Max length for NVARCHAR.
                    }
                    if (sqlDbType != SqlDbType.VarChar)
                    {
                        return null;
                    }
                    return 8000;
                }
            }
        }

    In the above example, the translated SQL becomes:

        exec sp_executesql N'SELECT [t0].[ProductID] FROM [dbo].[Products] AS [t0] WHERE [t0].[ProductName] = @p0',N'@p0 nvarchar(4000)',@p0=N'A'
        exec sp_executesql N'SELECT [t0].[ProductID] FROM [dbo].[Products] AS [t0] WHERE [t0].[ProductName] = @p0',N'@p0 nvarchar(4000)',@p0=N'AA'

    So they reuse the same query plan cache: now the [usecounts] column is 2.
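    As an aside, when writing ADO.NET code by hand (like the update example near the top of this post), the same pitfall can be avoided by declaring an explicit parameter size; a minimal sketch, with the connection string assumed:

        // using System.Data; using System.Data.SqlClient;
        using (SqlConnection connection = new SqlConnection(
            "Data Source=localhost;Initial Catalog=Northwind;Integrated Security=True"))
        using (SqlCommand command = new SqlCommand(
            "SELECT [ProductID] FROM [dbo].[Products] WHERE [ProductName] = @ProductName",
            connection))
        {
            // A fixed declared size (NVARCHAR(4000)) keeps the parameterized SQL text
            // identical across calls, so one cached query plan is reused regardless of
            // the actual value length.
            command.Parameters.Add("@ProductName", SqlDbType.NVarChar, 4000).Value = "A";
            connection.Open();
            using (SqlDataReader reader = command.ExecuteReader())
            {
                while (reader.Read())
                {
                    Console.WriteLine(reader.GetInt32(0));
                }
            }
        }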

    Read the article

  • CodePlex Daily Summary for Thursday, May 13, 2010

    CodePlex Daily Summary for Thursday, May 13, 2010

    New Projects

    C# 4.0: VS 2010
    Collaborative Rich Text Edit Prototype Silverlight on Windows Azure: Working like Google Doc but based on Silverlight/Windows Azure. Multiple users can edit one rich text document simultaneously. This is a POC and my...
    Consulting Assigment: A consulting project for the Gestión de Proyectos (project management) course at UCR
    CS 105 MP Splitter: Simple program to help with splitting up MPs for grading.
    DipEngine - graphic engine written on Xen for XNA: DIPEngine makes it easier for developers to use it in your game. It is developed in C# using Xen for XNA
    DotNetAgent: Agent Framework
    Extending C# editor - Outlining, classification: Extensions to the VS C# editor's current feature set. Outlining for switch, for(each), if, etc. statements. Classification types for method parameters and...
    Floe IRC Client: Floe is an IRC client written entirely in C#, using the .NET 4.0 framework with WPF technology. It is inspired largely by both XChat and mIRC, and ...
    Gestalt.Net: Gestalt.Net is a simple to use library for application configuration. It is more flexible, easier to use, and more powerful than the app...
    GIM.OnlineChess: An implementation of online long-term chess
    GoogleTrail: Create Trek Trails using Google Maps and export those trails as Garmin Custom Map compatible maps
    Grassidi: What could Grassidi be?
    HodsAudio: Personal web based music library.
    Hospital Manage System: Free hospital management system
    Html Reader: Html Reader makes it easier for WPF WebBrowser users to access the Document property and retrieve information from HTML pages. You'll no longer ha...
    Intellitouch.BackOffice: This project is a control for a menu.
    Jank: Jank
    KooBoo Image Galery: A module for Kooboo that implements an image gallery admin and view. Developed for Kooboo 2.1.1.0
    MoonyDesk (windows desktop widgets): Windows desktop plugin-based widgets system written on WPF
    MSNSharp Application with Video Conferencing Feature: In the MSN world, users don't communicate with each other directly. I examined P2P open source video/audio conference systems. After researching, I found Vi...
    mysample: study sample site
    NoteFlyBot: An intelligent note program: a 'Post-it' note that accepts natural language commands to upload content to EMC repositories, start workflow...
    OpenGraphNET: OpenGraphNET is a simple parser for the Open Graph protocol created by Facebook. More information on this can be found at opengraphprotocol.org.
    POCO - ADO.NET Entity Framework 4.0: The sample shows how to retrieve data using POCO with EDM 4.0. Database used: Northwind (http://www.microsoft.com/downloads/details.aspx?Fa...
    sqlmx: sqlmx
    Upgrade Log Parser for SharePoint 2010: This is a simple upgrade log parser for SharePoint 2010. It will add a node in the upgrade section of central administration and deploy a pag...

    New Releases

    AMP (Adamo Media Player): AMP (Adamo Media Player): First release, version 0.1.0. Installation instructions: unzip the package, run Setup.exe, follow the setup instructions. File includes .NET Framework ...
    BeanProxy: BeanProxy 2.8: BeanProxy is a C# (.NET 3.5) library housing classes that facilitate unit testing. Any non-static, public interface/class or abstract class can be...
    Begtostudy-Test: Test V1: Download to test
    CBM-Command: 2010-05-13: Release Notes - 2010-05-13. New features: Configuration Manager is complete. Changes: Had to put CBM-Command on a big-time diet. Ran out of room on the...
    CS 105 MP Splitter: CS 105 MP Splitter: Uses SharpZipLib for unzipping (http://www.icsharpcode.net/OpenSource/SharpZipLib/). Uses ILMerge to merge it all into a single executable (http://w...
    DipEngine - graphic engine written on Xen for XNA: DipGUI: Provides simple controls for using it
    Event Scavenger: Collector service update - version 3.2.3: Added additional functionality to check if the host/machine is available on the network before trying to open the event log. Also added code to try to use ...
    ExcelExportLib: ExcelExportLib 1.4.0.0: Solution converted to Visual Studio 2010. Added support for formulas in cells, cell naming, and cell comments...
    F# Project Extender: V0.9.1.0 (VS2008, VS2010): F# project extender for Visual Studio 2008 and Visual Studio 2010. What's new: now supports both Visual Studio 2010 and Visual Studio 2008. Fixed...
    Floe IRC Client: Floe IRC Client 2010-05: Initial release. Note: the .NET Framework v4.0 is required to run this app. It can be downloaded here: http://www.microsoft.com/downloads/details....
    Fluent Assertions: Release 1.2.1: This is a small release with two improvements: you can now use the Enumeration() extension method on a Func<IEnumerable> before calling ShouldThrow<T...
    Gestalt.Net: Gestalt.Net 1.0: Initial release of Gestalt.Net, a simple to use library for application configuration. It is more flexible, easier to use, and more powe...
    GPdotNET - Tree Based Genetic Programming Tool: GPdotNETv0.9: This is the same version as the previous one, but compiled with Visual Studio 2010 and .NET 4.0. You no longer need the CTP of ParallelFX.
    HKGolden Express: HKGoldenExpress (Build 201005122025): New features: (none). Bug fixes: (none). Improvements: HKGolden Express now parses JSON data rather than XML documents from the HKGolden API. This shoul...
    Jobping Url Shortener: Deploy Code 0.3: Contains only the files for running version 0.3 of the code.
    Jobping Url Shortener: Source Code 0.3: Feature added: restriction placed on the domains that the shortener will shorten. Our installation will only shorten www.jobping.com urls. Bug fix...
    KooBoo Image Galery: Beta 1: This is the first release and it is divided into some parts: Module Install (the admin module and a default view); Plugin (returns the gallery as...
    LinkedIn® for Windows Mobile: LinkedIn for Windows Mobile v0.8.5: Changes in release 0.8.5: allow for multiple update types; add search by company; show profiles of all people in an update.
    MDownloader: MDownloader-0.15.13.58780: Fixed Uploading.com dead links detection; fixed links detection at pages and in FilesTube results; fixed RSS channel checking; enabled by def...
    Microsoft - Domain Oriented N-Layered .NET 4.0 App Sample (Microsoft Spain): V0.8 - N-Layer DDD Sample App (VS.2010 RTM compat): Required software (Microsoft base software needed for the development environment): Visual Studio 2010 RTM & .NET 4.0 RTM (final versions), Unity Applic...
    MoonyDesk (windows desktop widgets): MoonyDesk: MoonyDesk alpha release
    MSNSharp Application with Video Conferencing Feature: MSNVideoChat: The MSN Video Chat application's exe and screenshot are attached.
    NazTek.Extension.Clr35: NazTek.Extension.Clr35 Binary: FxCop compliant codebase
    NazTek.Extension.Clr35: NazTek.Extension.Clr35 Signed Binary: Signed binary
    NazTek.Extension.Clr35: NazTek.Extension.Clr35 Signed Source: Signed source
    NazTek.Extension.Clr35: NazTek.Extension.Clr35 Source: FxCop compliant codebase
    patterns & practices Web Client Developer Guidance: Web Client Software Factory 2010 Guide: If the right-side pane of the chm file is not displayed correctly, do the following: 1) Download the WCSF2010Guide.chm file. 2) Start the Windows Explo...
    Powershell Scripts for Admins: BizTalk PowerShell Module: BizTalk PowerShell Module. Available commands: Get-Applications, Start-Application, Stop-Application, Get-HostInstances, Start-HostInstance, Stop-Hos...
    Reusable Library: V1.0.8: A collection of reusable abstractions for enterprise application developers
    Reusable Library Demo: V1.0.6: A demonstration of reusable abstractions for enterprise application developers
    Robot Shootans: Robot Shootans 0.9: This is the second public release of the game. All instructions for play are in the game itself. Known issues: bullet collision is still a little...
    Rx Contrib: V1.2: Bug fix; API changes: public static ReactiveQueue<T> CreateConcurrentQueue(ConcurrentPublicationBehavior concurrentPublicationBeha...
    SqlDiffFramework - A Visual Differencing Engine for Dissimilar Data Sources: SqlDiffFramework 1.0.1.0: Maintenance release. An embedded user control inadvertently assumed US regional and language settings; with non-US settings the application crashed ...
    Transcriber: Transcriber V0.5.0: Pre-release
    Upgrade Log Parser for SharePoint 2010: Upgrade Log Parser for SharePoint 2010: Introduction: I did an upgrade from MOSS 2007 to SharePoint 2010, and SP2010 generated such a large log file that I really didn't feel like scrolling ...
    VCC: Latest build, v2.1.30512.0: Automatic drop of latest build
    Visual Leak Detector for Visual C++ 2008/2010: v2.0a: Renamed vld dll files. Problem with the MSVC 2010 Unicode library fixed.
    VsTortoise - a TortoiseSVN add-in for Microsoft Visual Studio: VsTortoise Build 24 Beta: Build 24 (beta). New: added "Open Modified File..." to Solution Explorer context menus. VsTortoise.SolutionExplorerSelectedItemsOpenModifiedFile com...
    WatchersNET.TagCloud: WatchersNET.TagCloud 01.05.00: What's new: custom tags (tag name and tag URL can be localized); new option to set the start date to grab search referrals from; new option to choose th...
    Wavelet analysis: Wavelet analysis: The first public version of the project.
    XNA Shooter Engine: ModelViewer Alpha 1 Preview: ModelViewer is a utility for previewing HLSL shaders and XNA BasicEffects using the XSE rendering engine. It can also be used with the Microsoft XN...
    Yet another developer blog - Examples: XML and XSLT transformation in ASP.NET MVC example: This sample application shows how to perform XSL transformation of XML files in ASP.NET MVC (using dedicated HtmlHelper extensions or ActionResult)....

    Most Popular Projects

    WBFS Manager
    Rawr
    AJAX Control Toolkit
    Microsoft SQL Server Product Samples: Database
    Silverlight Toolkit
    Windows Presentation Foundation (WPF)
    patterns & practices – Enterprise Library
    Microsoft SQL Server Community & Samples
    PHPExcel
    ASP.NET

    Most Active Projects

    patterns & practices – Enterprise Library
    Mirror Testing System
    Rawr
    BlogEngine.NET
    PHPExcel
    white
    jQuery Library for SharePoint Web Services
    Microsoft Biology Foundation
    Farseer Physics Engine
    Shake - C# Make

    Read the article

  • just can't get a controller to work

    - by Asaf
    I try to get into mysite/user so that application/classes/controller/user.php should be working, now this is my file tree: code of controller/user.php: <?php defined('SYSPATH') OR die('No direct access allowed.'); class Controller_User extends Controller_Default { public $template = 'user'; function action_index() { //$view = View::factory('user'); //$view->render(TRUE); $this->template->message = 'hello, world!'; } } ?> code of controller/default.php: <?php defined('SYSPATH') OR die('No direct access allowed.'); class Controller_default extends Controller_Template { } bootstrap.php: <?php defined('SYSPATH') or die('No direct script access.'); //-- Environment setup -------------------------------------------------------- /** * Set the default time zone. * * @see http://kohanaframework.org/guide/using.configuration * @see http://php.net/timezones */ date_default_timezone_set('America/Chicago'); /** * Set the default locale. * * @see http://kohanaframework.org/guide/using.configuration * @see http://php.net/setlocale */ setlocale(LC_ALL, 'en_US.utf-8'); /** * Enable the Kohana auto-loader. * * @see http://kohanaframework.org/guide/using.autoloading * @see http://php.net/spl_autoload_register */ spl_autoload_register(array('Kohana', 'auto_load')); /** * Enable the Kohana auto-loader for unserialization. * * @see http://php.net/spl_autoload_call * @see http://php.net/manual/var.configuration.php#unserialize-callback-func */ ini_set('unserialize_callback_func', 'spl_autoload_call'); //-- Configuration and initialization ----------------------------------------- /** * Initialize Kohana, setting the default options. * * The following options are available: * * - string base_url path, and optionally domain, of your application NULL * - string index_file name of your index file, usually "index.php" index.php * - string charset internal character set used for input and output utf-8 * - string cache_dir set the internal cache directory APPPATH/cache * - boolean errors enable or disable error handling TRUE * - boolean profile enable or disable internal profiling TRUE * - boolean caching enable or disable internal caching FALSE */ Kohana::init(array( 'base_url' => '/mysite/', 'index_file' => FALSE, )); /** * Attach the file write to logging. Multiple writers are supported. */ Kohana::$log->attach(new Kohana_Log_File(APPPATH.'logs')); /** * Attach a file reader to config. Multiple readers are supported. */ Kohana::$config->attach(new Kohana_Config_File); /** * Enable modules. Modules are referenced by a relative or absolute path. */ Kohana::modules(array( 'auth' => MODPATH.'auth', // Basic authentication 'cache' => MODPATH.'cache', // Caching with multiple backends 'codebench' => MODPATH.'codebench', // Benchmarking tool 'database' => MODPATH.'database', // Database access 'image' => MODPATH.'image', // Image manipulation 'orm' => MODPATH.'orm', // Object Relationship Mapping 'pagination' => MODPATH.'pagination', // Paging of results 'userguide' => MODPATH.'userguide', // User guide and API documentation )); /** * Set the routes. Each route must have a minimum of a name, a URI and a set of * defaults for the URI. */ Route::set('default', '(<controller>(/<action>(/<id>)))') ->defaults(array( 'controller' => 'welcome', 'action' => 'index', )); /** * Execute the main request. A source of the URI can be passed, eg: $_SERVER['PATH_INFO']. * If no source is specified, the URI will be automatically detected. 
    */ echo Request::instance() ->execute() ->send_headers() ->response; ?> .htaccess: RewriteEngine On RewriteBase /mysite/ RewriteRule ^(application|modules|system) - [F,L] RewriteCond %{REQUEST_FILENAME} !-f RewriteCond %{REQUEST_FILENAME} !-d RewriteRule .* index.php/$0 [PT,L] Going to http://localhost/ renders the "hello world" page from welcome.php. Going to http://localhost/mysite/user gives me this: The requested URL /mysite/user was not found on this server.

    Read the article

  • A DirectoryCatalog class for Silverlight MEF (Managed Extensibility Framework)

    - by Dixin
    In the MEF (Managed Extensibility Framework) for .NET, there are useful ComposablePartCatalog implementations in System.ComponentModel.Composition.dll, like: System.ComponentModel.Composition.Hosting.AggregateCatalog System.ComponentModel.Composition.Hosting.AssemblyCatalog System.ComponentModel.Composition.Hosting.DirectoryCatalog System.ComponentModel.Composition.Hosting.TypeCatalog While in Silverlight, there is an extra System.ComponentModel.Composition.Hosting.DeploymentCatalog. As a wrapper of AssemblyCatalog, it can load all assemblies in a XAP file on the web server side. Unfortunately, in Silverlight there is no DirectoryCatalog to load a folder. Background There are scenarios where a Silverlight application may need to load all XAP files in a folder on the web server side, for example: If the Silverlight application is extensible and supports plug-ins, there would be a /ClientBin/Plugins/ folder on the web server, and each plugin would be an individual XAP file in the folder. In this scenario, after the application is loaded and started up, it would like to load all XAP files in the /ClientBin/Plugins/ folder. If the application supports themes, there would be a /ClientBin/Themes/ folder, and each theme would be an individual XAP file too. The application would also need to load all XAP files in /ClientBin/Themes/. It is useful if we have a DirectoryCatalog: DirectoryCatalog catalog = new DirectoryCatalog("/Plugins"); catalog.DownloadCompleted += (sender, e) => { }; catalog.DownloadAsync(); Obviously, the implementation of DirectoryCatalog is easy. It is just a collection of DeploymentCatalog objects. Retrieve file list from a directory Of course, to retrieve the file list from a web folder, the folder’s “Directory Browsing” feature must be enabled: So when the folder is requested, it responds with a list of its files and folders: This is nothing but a simple HTML page: <html> <head> <title>localhost - /Folder/</title> </head> <body> <h1>localhost - /Folder/</h1> <hr> <pre> <a href="/">[To Parent Directory]</a><br> <br> 1/3/2011 7:22 PM 185 <a href="/Folder/File.txt">File.txt</a><br> 1/3/2011 7:22 PM &lt;dir&gt; <a href="/Folder/Folder/">Folder</a><br> </pre> <hr> </body> </html> For the ASP.NET Deployment Server of Visual Studio, directory browsing is enabled by default: The HTML <Body> is almost the same: <body bgcolor="white"> <h2><i>Directory Listing -- /ClientBin/</i></h2> <hr width="100%" size="1" color="silver"> <pre> <a href="/">[To Parent Directory]</a> Thursday, January 27, 2011 11:51 PM 282,538 <a href="Test.xap">Test.xap</a> Tuesday, January 04, 2011 02:06 AM &lt;dir&gt; <a href="TestFolder/">TestFolder</a> </pre> <hr width="100%" size="1" color="silver"> <b>Version Information:</b>&nbsp;ASP.NET Development Server 10.0.0.0 </body> The only difference is, IIS’s links start with a slash, but here the links do not. Here, one way to get the file list is to read the href attributes of the links: [Pure] private IEnumerable<Uri> GetFilesFromDirectory(string html) { Contract.Requires(html != null); Contract.Ensures(Contract.Result<IEnumerable<Uri>>() != null); return new Regex( "<a href=\"(?<uriRelative>[^\"]*)\">[^<]*</a>", RegexOptions.IgnoreCase | RegexOptions.CultureInvariant) .Matches(html) .OfType<Match>() .Where(match => match.Success) .Select(match => match.Groups["uriRelative"].Value) .Where(uriRelative => uriRelative.EndsWith(".xap", StringComparison.Ordinal)) .Select(uriRelative => { Uri baseUri = this.Uri.IsAbsoluteUri ? 
this.Uri : new Uri(Application.Current.Host.Source, this.Uri); uriRelative = uriRelative.StartsWith("/", StringComparison.Ordinal) ? uriRelative : (baseUri.LocalPath.EndsWith("/", StringComparison.Ordinal) ? baseUri.LocalPath + uriRelative : baseUri.LocalPath + "/" + uriRelative); return new Uri(baseUri, uriRelative); }); } Please notice the folders’ links end with a slash. They are filtered by the second Where() query. The above method can find files’ URIs from the specified IIS folder, or ASP.NET Deployment Server folder while debugging. To support other formats of file list, a constructor is needed to pass into a customized method: /// <summary> /// Initializes a new instance of the <see cref="T:System.ComponentModel.Composition.Hosting.DirectoryCatalog" /> class with <see cref="T:System.ComponentModel.Composition.Primitives.ComposablePartDefinition" /> objects based on all the XAP files in the specified directory URI. /// </summary> /// <param name="uri"> /// URI to the directory to scan for XAPs to add to the catalog. /// The URI must be absolute, or relative to <see cref="P:System.Windows.Interop.SilverlightHost.Source" />. /// </param> /// <param name="getFilesFromDirectory"> /// The method to find files' URIs in the specified directory. /// </param> public DirectoryCatalog(Uri uri, Func<string, IEnumerable<Uri>> getFilesFromDirectory) { Contract.Requires(uri != null); this._uri = uri; this._getFilesFromDirectory = getFilesFromDirectory ?? this.GetFilesFromDirectory; this._webClient = new Lazy<WebClient>(() => new WebClient()); // Initializes other members. } When the getFilesFromDirectory parameter is null, the above GetFilesFromDirectory() method will be used as default. Download the directory’s XAP file list Now a public method can be created to start the downloading: /// <summary> /// Begins downloading the XAP files in the directory. /// </summary> public void DownloadAsync() { this.ThrowIfDisposed(); if (Interlocked.CompareExchange(ref this._state, State.DownloadStarted, State.Created) == 0) { this._webClient.Value.OpenReadCompleted += this.HandleOpenReadCompleted; this._webClient.Value.OpenReadAsync(this.Uri, this); } else { this.MutateStateOrThrow(State.DownloadCompleted, State.Initialized); this.OnDownloadCompleted(new AsyncCompletedEventArgs(null, false, this)); } } Here the HandleOpenReadCompleted() method is invoked when the file list HTML is downloaded. Download all XAP files After retrieving all files’ URIs, the next thing becomes even easier. 
    HandleOpenReadCompleted() just uses the built-in DeploymentCatalog to download the XAPs, and aggregates them into one AggregateCatalog: private void HandleOpenReadCompleted(object sender, OpenReadCompletedEventArgs e) { Exception error = e.Error; bool cancelled = e.Cancelled; if (Interlocked.CompareExchange(ref this._state, State.DownloadCompleted, State.DownloadStarted) != State.DownloadStarted) { cancelled = true; } if (error == null && !cancelled) { try { using (StreamReader reader = new StreamReader(e.Result)) { string html = reader.ReadToEnd(); IEnumerable<Uri> uris = this._getFilesFromDirectory(html); Contract.Assume(uris != null); IEnumerable<DeploymentCatalog> deploymentCatalogs = uris.Select(uri => new DeploymentCatalog(uri)); deploymentCatalogs.ForEach( deploymentCatalog => { this._aggregateCatalog.Catalogs.Add(deploymentCatalog); deploymentCatalog.DownloadCompleted += this.HandleDownloadCompleted; }); deploymentCatalogs.ForEach(deploymentCatalog => deploymentCatalog.DownloadAsync()); } } catch (Exception exception) { error = new InvalidOperationException(Resources.InvalidOperationException_ErrorReadingDirectory, exception); } } // Exception handling. } In HandleDownloadCompleted(), if all XAPs are downloaded without exception, the OnDownloadCompleted() callback method will be invoked. private void HandleDownloadCompleted(object sender, AsyncCompletedEventArgs e) { if (Interlocked.Increment(ref this._downloaded) == this._aggregateCatalog.Catalogs.Count) { this.OnDownloadCompleted(e); } } Exception handling This DirectoryCatalog works only if the directory browsing feature is enabled. It is important to inform the caller when the directory cannot be browsed for XAP downloading. private void HandleOpenReadCompleted(object sender, OpenReadCompletedEventArgs e) { Exception error = e.Error; bool cancelled = e.Cancelled; if (Interlocked.CompareExchange(ref this._state, State.DownloadCompleted, State.DownloadStarted) != State.DownloadStarted) { cancelled = true; } if (error == null && !cancelled) { try { // No exception thrown when browsing directory. Downloads the listed XAPs. } catch (Exception exception) { error = new InvalidOperationException(Resources.InvalidOperationException_ErrorReadingDirectory, exception); } } WebException webException = error as WebException; if (webException != null) { HttpWebResponse webResponse = webException.Response as HttpWebResponse; if (webResponse != null) { // Internally, WebClient uses WebRequest.Create() to create the WebRequest object. Here does the same thing. WebRequest request = WebRequest.Create(Application.Current.Host.Source); Contract.Assume(request != null); if (request.CreatorInstance == WebRequestCreator.ClientHttp && // Silverlight is in client HTTP handling, all HTTP status codes are supported. webResponse.StatusCode == HttpStatusCode.Forbidden) { // When directory browsing is disabled, the HTTP status code is 403 (forbidden). error = new InvalidOperationException( Resources.InvalidOperationException_ErrorListingDirectory_ClientHttp, webException); } else if (request.CreatorInstance == WebRequestCreator.BrowserHttp && // Silverlight is in browser HTTP handling, only 200 and 404 are supported. webResponse.StatusCode == HttpStatusCode.NotFound) { // When directory browsing is disabled, the HTTP status code is 404 (not found). 
    error = new InvalidOperationException( Resources.InvalidOperationException_ErrorListingDirectory_BrowserHttp, webException); } } } this.OnDownloadCompleted(new AsyncCompletedEventArgs(error, cancelled, this)); } Please notice that a Silverlight 3+ application can work either in client HTTP handling or in browser HTTP handling. One difference is: in browser HTTP handling, only HTTP status codes 200 and 404 (the latter standing in for everything not OK, including 500, 403, etc.) are supported; in client HTTP handling, all HTTP status codes are supported. So in the above code, exceptions in the 2 modes are handled differently. Conclusion Here is what the whole DirectoryCatalog looks like: Please click here to download the source code; a simple unit test is included. This is a rough implementation. And, for convenience, some design and coding simply follow the built-in AggregateCatalog and DeploymentCatalog classes. Please feel free to modify the code, and please kindly tell me if any issue is found.
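    To close the loop, here is a minimal usage sketch of plugging such a catalog into MEF composition. The IPlugin contract, its Initialize() method and the folder URI are hypothetical placeholders, not part of the class above:

    // Minimal sketch: compose plugin exports from all XAPs in a server folder,
    // using the DirectoryCatalog described in this post (null = default file-list parser).
    // Assumed namespaces: System, System.ComponentModel.Composition.Hosting.
    var catalog = new DirectoryCatalog(new Uri("/ClientBin/Plugins/", UriKind.Relative), null);
    var container = new CompositionContainer(catalog);
    catalog.DownloadCompleted += (sender, e) =>
    {
        if (e.Error == null)
        {
            // All XAPs are downloaded; exports in their assemblies are now composable.
            foreach (IPlugin plugin in container.GetExportedValues<IPlugin>())
            {
                plugin.Initialize(); // hypothetical contract method
            }
        }
    };
    catalog.DownloadAsync();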

    Read the article

  • Accessing SharePoint 2010 Data with REST/OData on Windows Phone 7

    - by Jan Tielens
    Consuming SharePoint 2010 data in Windows Phone 7 applications using the CTP version of the developer tools is quite a challenge. The issue is that the SharePoint 2010 data is not anonymously available; users need to authenticate to be able to access the data. When I first tried to access SharePoint 2010 data from my first Hello-World-type Windows Phone 7 application I thought “Hey, this should be easy!” because Windows Phone 7 development is based on Silverlight and SharePoint 2010 has a Client Object Model for Silverlight. Unfortunately you can’t use the Client Object Model of SharePoint 2010 on the Windows Phone platform; there’s a reference to an assembly that’s not available (System.Windows.Browser). My second thought was “OK, no problem!” because SharePoint 2010 also exposes a REST/OData API to access SharePoint data. Using the REST API in SharePoint 2010 is as easy as making a web request for a URL (in which you specify the data you’d like to retrieve), e.g. http://yoursiteurl/_vti_bin/listdata.svc/Announcements. This is very easy to accomplish in a Silverlight application that’s running in the context of a page in a SharePoint site, because the credentials of the currently logged-on user are automatically picked up and passed to the WCF service. But a Windows Phone application is of course running outside of the SharePoint site’s page, so the application has to build credentials that are passed to SharePoint’s WCF service. This turns out to be a small challenge in Silverlight 3: the WebClient doesn’t support authentication; there is a Credentials property, but when you set it and make the request you get a NotImplementedException. This issue will probably be solved in the very near future, since Silverlight 4 does support authentication, and there’s already a WCF Data Services download that uses this new platform feature of Silverlight 4. So when the Windows Phone platform switches to Silverlight 4, you can just use the WebClient to get the data. Even more, if the OData Client Library for Windows Phone 7 gets updated after that, things should get even easier! By the way: the things I’m writing in this paragraph are just assumptions that make a lot of sense IMHO; I don’t have any info that all of this will happen, but I really hope so. So are SharePoint developers out of the Windows Phone development game until they get this fixed? Well, luckily not: when the HttpWebRequest class is used instead, you can pass credentials! Using the HttpWebRequest class is slightly more complex than using the WebClient class, but the end result is that you have access to your precious SharePoint 2010 data. 
    The following code snippet retrieves all the announcements of an Announcements list in a SharePoint site:

    HttpWebRequest webReq =
        (HttpWebRequest)HttpWebRequest.Create("http://yoursite/_vti_bin/listdata.svc/Announcements");
    webReq.Credentials = new NetworkCredential("username", "password");
    webReq.BeginGetResponse(
        (result) =>
        {
            HttpWebRequest asyncReq = (HttpWebRequest)result.AsyncState;
            XDocument xdoc = XDocument.Load(
                ((HttpWebResponse)asyncReq.EndGetResponse(result)).GetResponseStream());
            XNamespace ns = "http://www.w3.org/2005/Atom";
            var items = from item in xdoc.Root.Elements(ns + "entry")
                        select new { Title = item.Element(ns + "title").Value };
            this.Dispatcher.BeginInvoke(() =>
            {
                foreach (var item in items)
                    MessageBox.Show(item.Title);
            });
        }, webReq);

    When you try this in a Windows Phone 7 application, make sure you add a reference to the System.Xml.Linq assembly, because the code uses LINQ to XML to parse the resulting Atom feed; the Title of every announcement is then displayed in a MessageBox. Check out my previous post if you’d like to see a more polished sample Windows Phone 7 application that displays SharePoint 2010 data. When you plan to use this technique, it’s of course a good idea to encapsulate the code doing the request, so it becomes really easy to get the data that you need. In the following code snippet you can find the GetAtomFeed method that gets the contents of any Atom feed, even if you need to authenticate to get access to the feed.

    delegate void GetAtomFeedCallback(Stream responseStream);

    public MainPage()
    {
        InitializeComponent();

        SupportedOrientations = SupportedPageOrientation.Portrait |
            SupportedPageOrientation.Landscape;

        string url = "http://yoursite/_vti_bin/listdata.svc/Announcements";
        string username = "username";
        string password = "password";
        string domain = "";

        GetAtomFeed(url, username, password, domain, (s) =>
        {
            XNamespace ns = "http://www.w3.org/2005/Atom";
            XDocument xdoc = XDocument.Load(s);

            var items = from item in xdoc.Root.Elements(ns + "entry")
                        select new { Title = item.Element(ns + "title").Value };

            this.Dispatcher.BeginInvoke(() =>
            {
                foreach (var item in items)
                {
                    MessageBox.Show(item.Title);
                }
            });
        });
    }

    private static void GetAtomFeed(string url, string username,
        string password, string domain, GetAtomFeedCallback cb)
    {
        HttpWebRequest webReq = (HttpWebRequest)HttpWebRequest.Create(url);
        webReq.Credentials = new NetworkCredential(username, password, domain);

        webReq.BeginGetResponse(
            (result) =>
            {
                HttpWebRequest asyncReq = (HttpWebRequest)result.AsyncState;
                HttpWebResponse resp = (HttpWebResponse)asyncReq.EndGetResponse(result);
                cb(resp.GetResponseStream());
            }, webReq);
    }
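    As a side note - this is standard OData behavior rather than anything specific to the snippets above - the same listdata.svc URL also accepts query options such as $filter, $orderby and $top, so the server can narrow down the result set before it ever reaches the phone. A hypothetical example (URL-encoded as needed): http://yoursite/_vti_bin/listdata.svc/Announcements?$orderby=Modified desc&$top=5 would return only the five most recently modified announcements.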

    Read the article

  • CURL - HTTPS Weird error

    - by Vincent
    All, I am having trouble requesting info from an HTTPS site using CURL and PHP. I am using Solaris 10. It so happens that sometimes it works and sometimes it doesn't. I am not sure what the cause is. If it doesn't work, this is the entry recorded in the verbose log: * About to connect() to 10.10.101.12 port 443 (#0) * Trying 10.10.101.12... * connected * Connected to 10.10.101.12 (10.10.101.12) port 443 (#0) * error setting certificate verify locations, continuing anyway: * CAfile: /etc/opt/webstack/curl/curlCA CApath: none * error:80089077:lib(128):func(137):reason(119) * Closing connection #0 If it works, this is the entry recorded in the verbose log: * About to connect() to 10.10.101.12 port 443 (#0) * Trying 10.10.101.12... * connected * Connected to 10.10.101.12 (10.10.101.12) port 443 (#0) * error setting certificate verify locations, continuing anyway: * CAfile: /etc/opt/webstack/curl/curlCA CApath: none * SSL connection using DHE-RSA-AES256-SHA * Server certificate: * subject: C=CA, ST=British Columnbia, L=Vancouver, O=google, OU=FDN, CN=g.googlenet.com, [email protected] * start date: 2007-07-24 23:06:32 GMT * expire date: 2027-09-07 23:06:32 GMT * issuer: C=US, ST=California, L=Sunnyvale, O=Google, OU=Certificate Authority, CN=support, [email protected] * SSL certificate verify result: unable to get local issuer certificate (20), continuing anyway. > POST /gportal/gpmgr HTTP/1.1^M Host: 10.10.101.12^M Accept: */*^M Accept-Encoding: gzip,deflate^M Content-Length: 1623^M Content-Type: application/x-www-form-urlencoded^M Expect: 100-continue^M ^M < HTTP/1.1 100 Continue^M < HTTP/1.1 200 OK^M < Date: Wed, 28 Apr 2010 21:56:15 GMT^M < Server: Apache^M < Cache-Control: no-cache^M < Pragma: no-cache^M < Vary: Accept-Encoding^M < Content-Encoding: gzip^M < Content-Length: 1453^M < Content-Type: application/json^M < ^M * Connection #0 to host 10.10.101.12 left intact * Closing connection #0 My CURL options are as follows: $ch = curl_init(); $devnull = fopen('/tmp/curlcookie.txt', 'w'); $fp_err = fopen('/tmp/verbose_file.txt', 'ab+'); fwrite($fp_err, date('Y-m-d H:i:s')."\n\n"); curl_setopt($ch, CURLOPT_STDERR, $devnull); curl_setopt($ch, CURLOPT_POST, 1); curl_setopt($ch, CURLOPT_URL, $desturl); curl_setopt($ch, CURLOPT_RETURNTRANSFER, 1); curl_setopt($ch, CURLOPT_SSL_VERIFYPEER, false); curl_setopt($ch, CURLOPT_SSL_VERIFYHOST, false); curl_setopt($ch, CURLOPT_HEADER, false); curl_setopt($ch, CURLOPT_FOLLOWLOCATION, 0); curl_setopt($ch, CURLOPT_CONNECTTIMEOUT,120); curl_setopt($ch, CURLOPT_AUTOREFERER, true); curl_setopt($ch, CURLOPT_ENCODING, 'gzip,deflate'); curl_setopt($ch, CURLOPT_POSTFIELDS, $postdata); curl_setopt($ch, CURLOPT_VERBOSE,1); curl_setopt($ch, CURLOPT_FAILONERROR, true); curl_setopt($ch, CURLOPT_STDERR, $fp_err); $ret = curl_exec($ch); Does anybody have any idea why it sometimes works but mostly fails? Thanks

    Read the article

  • LLVM JIT segfaults. What am I doing wrong?

    - by bugspy.net
    It is probably something basic because I am just starting to learn LLVM. The following creates a factorial function and tries to JIT and execute it (I know the generated function is correct because I was able to statically compile and execute it). But I get a segmentation fault upon execution of the function (in EE->runFunction(TheF, Args)) #include <iostream> #include "llvm/Module.h" #include "llvm/Function.h" #include "llvm/PassManager.h" #include "llvm/CallingConv.h" #include "llvm/Analysis/Verifier.h" #include "llvm/Assembly/PrintModulePass.h" #include "llvm/Support/IRBuilder.h" #include "llvm/Support/raw_ostream.h" #include "llvm/ExecutionEngine/JIT.h" #include "llvm/ExecutionEngine/GenericValue.h" using namespace llvm; Module* makeLLVMModule() { // Module Construction LLVMContext& ctx = getGlobalContext(); Module* mod = new Module("test", ctx); Constant* c = mod->getOrInsertFunction("fact64", /*ret type*/ IntegerType::get(ctx,64), IntegerType::get(ctx,64), /*varargs terminated with null*/ NULL); Function* fact64 = cast<Function>(c); fact64->setCallingConv(CallingConv::C); /* Arg names */ Function::arg_iterator args = fact64->arg_begin(); Value* x = args++; x->setName("x"); /* Body */ BasicBlock* block = BasicBlock::Create(ctx, "entry", fact64); BasicBlock* xLessThan2Block= BasicBlock::Create(ctx, "xlst2_block", fact64); BasicBlock* elseBlock = BasicBlock::Create(ctx, "else_block", fact64); IRBuilder<> builder(block); Value *One = ConstantInt::get(Type::getInt64Ty(ctx), 1); Value *Two = ConstantInt::get(Type::getInt64Ty(ctx), 2); Value* xLessThan2 = builder.CreateICmpULT(x, Two, "tmp"); //builder.CreateCondBr(xLessThan2, xLessThan2Block, cond_false_2); builder.CreateCondBr(xLessThan2, xLessThan2Block, elseBlock); /* Recursion */ builder.SetInsertPoint(elseBlock); Value* xMinus1 = builder.CreateSub(x, One, "tmp"); std::vector<Value*> args1; args1.push_back(xMinus1); Value* recur_1 = builder.CreateCall(fact64, args1.begin(), args1.end(), "tmp"); Value* retVal = builder.CreateBinOp(Instruction::Mul, x, recur_1, "tmp"); builder.CreateRet(retVal); /* x<2 */ builder.SetInsertPoint(xLessThan2Block); builder.CreateRet(One); return mod; } int main(int argc, char**argv) { long long x; if(argc > 1) x = atol(argv[1]); else x = 4; Module* Mod = makeLLVMModule(); verifyModule(*Mod, PrintMessageAction); PassManager PM; PM.add(createPrintModulePass(&outs())); PM.run(*Mod); // Now we are going to create the JIT ExecutionEngine *EE = EngineBuilder(Mod).create(); // Call the function with argument x: std::vector<GenericValue> Args(1); Args[0].IntVal = APInt(64, x); Function* TheF = cast<Function>(Mod->getFunction("fact64")) ; /* The following CRASHES.. */ GenericValue GV = EE->runFunction(TheF, Args); outs() << "Result: " << GV.IntVal << "\n"; delete Mod; return 0; }

    Read the article

  • Introduction to LinqPad Driver for StreamInsight 2.1

    - by Roman Schindlauer
    We are announcing the availability of the LinqPad driver for StreamInsight 2.1. The purpose of this blog post is to offer a quick introduction to the new features that we added to the StreamInsight LinqPad driver. We’ll show you how to connect to a remote server, how to inspect the entities present on that server, how to compose on top of them and how to manage their lifetime. Installing the driver Info on how to install the driver can be found in an earlier blog post here. Establishing connections As you click on the “Add Connection” link in the left pane you will notice that now it’s possible to build the data context automatically. The new driver appears as an option in the upper list, and if you pick it you will open a connection dialog that lets you connect to a remote StreamInsight server. The connection dialog lets you specify the address of the remote server. You will notice that it’s possible to pick up the binding information from the configuration file of the LinqPad application (which is normally in the same folder as LinqPad.exe and is called LinqPad.exe.config). In order for the context to be generated you need to pick an application from the server. The control is editable, hence you can create a new application if you don’t want to make changes to an existing application. If you choose a new application name you will be prompted for confirmation before this gets created. Once you click OK the connection is created and you can start issuing queries against the remote server. If there’s any connectivity error the connection is marked with a red X and you can see the error message informing you what went wrong (e.g., the remote server could not be reached, etc.). The context for remote servers Let’s take a look at what happens after we are connected successfully. Every LinqPad query runs inside a context – think of it as a class that wraps all the code that you’re writing. If you’re connecting to a live server the context will contain the following: The application object itself. All entities present in this application (sources, sinks, subjects and processes). The picture below shows a snapshot of the left pane of LinqPad after a successful connection. Every entity on the server has a different icon which will allow users to figure out its purpose. You will also notice that some entities have a string in parentheses following the name. It should be interpreted as such: the first name is the name of the property of the context class and the second name is the name of the entity as it exists on the server. Not all valid entity names are valid identifier names, so in cases where we had to make a transformation you see both. Note also that as you hover over the entities you get IntelliSense with their types – more on that later. Remoting is not supported As you play with the entities exposed by the context you will notice that you can’t read and write directly to/from them. If for instance you’re trying to dump the content of an entity you will get an error message telling you that in the current version remoting is not supported. This is because the entity lives on the remote server and dumping its content means reading the events produced by this entity into the local process. ObservableSource.Dump(); will yield the following error: Reading from a remote 'System.Reactive.Linq.IQbservable`1[System.Int32]' is not supported. Use the 'Microsoft.ComplexEventProcessing.Linq.RemoteProvider.Bind' method to read from the source using a remote observer. 
    This basically tells you that you can call the Bind() method to direct the output of this source to a sink that has to be defined on the remote machine as well. You can’t bring the results to the LinqPad window unless you write code specifically for that. Compose queries You may ask – what's the purpose of all that? After all the same information is present in the EventFlowDebugger, why bother with showing it in LinqPad? First of all, what gets exposed in LinqPad is not what you see in the debugger. In LinqPad we have a property on the context class for every entity that lives on the server. Because LinqPad offers IntelliSense we in fact have much more information about the entity, and more importantly we can compose with that entity very easily. For example, let’s say that this code creates an entity: using (var server = Server.Connect(...)) {     var a = server.CreateApplication("WhiteFish");     var src = a         .DefineObservable<int>(() => Observable.Range(0, 3))         .Deploy("ObservableSource"); If later we want to compose with the source we have to fetch it and then we can bind something to     a.GetObservable<int>("ObservableSource").Bind(... This means that we had to know a bunch of things about this: that it’s a source, that it’s an observable, it produces a result with payload Int32 and it’s named “ObservableSource”. Only the second and last bits of information are present in the debugger, by the way. As you type in the query window you see that all the entities are present, you get IntelliSense support for them and it’s much easier to make sense of what’s available. Let’s look at a scenario where composition is plausible. With the new programming model it’s possible to create “cold” sources that are parameterized. There was a way to accomplish that even in the previous version by passing parameters to the adapters, but this time it’s much more elegant because the expression declares what parameters are required. Say that we hover the mouse over the ThrottledSource source – we will see that its type is Func<int, int, IQbservable<int>> - this in effect means that we need to pass two int parameters before we can get a source that produces events, and the type for those events is int – in the particular case of my example I had the source produce a range of integers and the two parameters were the start and end of the range. So we see how a developer can create a source that is not running yet. Then someone else (e.g. an administrator) can pass whatever parameters are appropriate and run the process. Proxy Types Here’s an interesting scenario – what if someone created a source on a server but they forgot to tell you what type they used? Worse yet, they might have used an anonymous type, and even though they can refer to it by name you can’t figure out how to use that type. Let’s walk through an example that shows how you can compose against types you don’t need to have the definition of. This is how we can create a source that returns an anonymous type: Application.DefineObservable(() => Observable.Range(1, 10).Select(i => new { I = i })).Deploy("O1"); Now if we refresh the connection we can see the new source named O1 appear in the list. But what’s more important is that we now have a type to work with. So we can compose a query that refers to the anonymous type. 
    var threshold = new StreamInsightDynamicDriver.TypeProxies.AnonymousType1_0<int>(5); var filter = from i in O1              where i > threshold              select i; filter.Deploy("O2"); You will notice that the anonymous type defined with this statement: new { I = i } can now be manipulated by a client that does not have access to it, because the LinqPad driver has generated another type in its stead, named StreamInsightDynamicDriver.TypeProxies.AnonymousType1_0. This type has all the properties and fields of the type defined on the server, except in this case we can instantiate values and use it to compose more queries. It is worth noting that the same thing works for types that are not anonymous – the test is whether the LinqPad driver can resolve the type or not. If it’s not possible then a new type will be generated that approximates the type that exists on the server. Control metadata In addition to composing processes on top of the existing entities we can do other useful things. We can delete them – nothing new here as we simply access the entities through the Entities collection of the application class. Here is where having their real name in parentheses comes in handy. There’s another way to find out what’s behind a property – dump its expression. The first line in the output tells us the name of the entity used to build this property in the context. Runtime information So let’s create a process to see what happens. We can bind a source to a sink and run the resulting process. If you right-click on the connection you can refresh it and see the process present in the list of entities. Then you can drag the process to the query window and see that you can access the process object in the Processes collection of the application. You can then manipulate the process (delete it, read its diagnostic view etc.). Regards, The StreamInsight Team
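    As a postscript (not part of the original post), here is a minimal sketch of what that bind-and-run step can look like in code. The endpoint address, application name and entity names are assumptions based on the examples above, and the sink is presumed to have been deployed beforehand:

    // Minimal sketch: bind a deployed remote source to a deployed remote sink and run the process.
    // Assumed namespaces: Microsoft.ComplexEventProcessing and Microsoft.ComplexEventProcessing.Linq.
    using (var server = Server.Connect(
        new System.ServiceModel.EndpointAddress("http://localhost/StreamInsight/Default")))
    {
        var application = server.Applications["WhiteFish"];
        var source = application.GetObservable<int>("ObservableSource");
        var sink = application.GetObserver<int>("ObserverSink");

        // Bind() pairs the remote source with the remote sink; Run() starts the process on the server.
        using (IDisposable process = source.Bind(sink).Run("SampleProcess"))
        {
            Console.ReadLine(); // keep the process alive until Enter is pressed
        }
    }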

    Read the article

  • What's new in Solaris 11.1?

    - by Karoly Vegh
    Solaris 11.1 is released. This is the first release update since Solaris 11 11/11; the versioning has been changed from the MM/YY style to 11.1, highlighting that this is Solaris 11 Update 1.  Solaris 11 itself has been great. What's new in Solaris 11.1? Allow me to pick some new features from the What's New PDF that can be found in the official Oracle Solaris 11.1 Documentation. The updates are very numerous, I really can't include them all.  I. New AI (Automated Installer) RBAC profiles have been introduced to enable delegation of installation tasks. II. The interactive installer now supports installing the OS to iSCSI targets. III. ASR (Auto Service Request) and OCM (Oracle Configuration Manager) have been enabled by default to proactively provide support information and create service requests to speed up support processes. This is optional and can be disabled, but it helps a lot in support cases. For further information, see: http://oracle.com/goto/solarisautoreg IV. The new command svcbundle helps you to create SMF manifests without having to struggle with XML editing. (btw, do you know the interactive editprop subcommand in svccfg? The listprop/setprop subcommands are great for scripting and automating, but for an interactive property editing session try, for example, this: svccfg -s svc:/application/pkg/system-repository:default editprop )  V. pfedit: Ever wondered how to delegate editing permissions for certain files? It is well known that "sudo /usr/bin/vi /etc/hosts" is not the right way, for sudo elevates the complete vi process to admin levels, and the user can "break" out of the session as root by simply starting a shell from that vi. Now, the new pfedit command provides a solution exactly to this challenge - an auditable, secure, per-user configurable editing possibility. See the pfedit man page for examples.   VI. rsyslog, the popular logging daemon (filters, SSL, formattable output, SQL collect...) has been included in Solaris 11.1 as an alternative to syslog.  VII. Zones: Solaris Zones - as a major Solaris differentiator - got lots of love in terms of new features: ZOSS - Zones on Shared Storage: Placing your zones on shared storage (FC, iSCSI) has never been this easy - via zonecfg.  Parallel updates - with S11's boot environments updating zones was no problem and meant no downtime anyway, but still, now you can update them in parallel, a much faster update action if you are running a large number of zones. This is like parallel patching in Solaris 10, but with all the IPS/ZFS/S11 goodness.  Per-zone fstype statistics: Running zones on a shared filesystem complicates I/O debugging, since ZFS collects all the random writes and delivers them sequentially to boost performance. Now, via kstat, you can find out which zone's I/O has an impact on the other ones; see the examples in the documentation: http://docs.oracle.com/cd/E26502_01/html/E29024/gmheh.html#scrolltoc Zones got RDSv3 protocol support for InfiniBand, and IPoIB support with Crossbow's anet (automatic vnic creation) feature.  NUMA I/O support for Zones: customers can now determine the NUMA I/O topology of the system from within zones.  VIII. Security got a lot of attention too:  Automated security/audit reporting, with built-in reporting templates, e.g. for PCI (payment card industry) audits.  PAM is now configurable on a per-user basis instead of system-wide, allowing different authentication requirements for different users.  SSH in Solaris 11.1 now supports running in FIPS 140-2 mode, that is, in a U.S. 
    government security accredited fashion.  SHA512/224 and SHA512/256 cryptographic hash functions are implemented in a FIPS-compliant way - and on a T4 implemented in silicon! That is, government-approved cryptography at HW speed.  Generally, Solaris is currently under evaluation to be both FIPS and Common Criteria certified.  IX. Networking, as one of the core strengths of Solaris 11, has been extended with:  Data Center Bridging (DCB) - not only setups where network and storage share the same fabric (FCoE, anyone?) can have Quality-of-Service requirements. DCB enables peers to distinguish traffic based on priorities. Your NICs have to support DCB; see the documentation, and additional information on Wikipedia. DataLink MultiPathing (DLMP) enables link aggregation to span multiple switches, even between those of different vendors. But there are essential differences from the good old bandwidth-aggregating LACP; see the documentation: http://docs.oracle.com/cd/E26502_01/html/E28993/gmdlu.html#scrolltoc VNIC live migration is now supported from one physical NIC to another on the fly.  X. Data management:  FedFS (Federated FileSystem) is new; it relies on Solaris 11's NFS referring mechanism to join separate shares of different NFS servers into a single filesystem namespace. The referring system has been there since S11 11/11; in Solaris 11.1 FedFS uses LDAP - as the one global nameservice to bind them all.  The iSCSI initiator now uses the T4 CPU's HW-implemented CRC32 algorithm, thus improving iSCSI throughput while reducing CPU utilization on a T4. Storage locking improvements are now RAC-aware, speeding up throughput with better locking communication between nodes by up to 20%!  XI. Kernel performance optimizations: The new Virtual Memory subsystem ("VM2") now scales to 100+ TB memory ranges.  The memory predictor monitors large memory page usage and adjusts memory page sizes to applications' needs. OSM, the Optimized Shared Memory, allows Oracle DBs' SGA to be resized online. XII. The Power Aware Dispatcher is now enabled by default, reducing power consumption of idle CPUs. Also, the LDoms' Power Management policies and the poweradm settings in the Solaris 11 OS will cooperate. XIII. x86 boot: upgrade to GRUB2 (the Grand Unified Bootloader). Because GRUB2 differs syntactically in its configuration from GRUB1, one shall not edit the new grub configuration (grub.cfg) by hand but use the new bootadm features to update it. GRUB2 adds UEFI support and also support for disks over 2TB. XIV. Improved viewing of per-CPU statistics in mpstat. This one might seem of less importance at first, but nowadays having better sorting/filtering possibilities on a periodically updated mpstat output of 256+ vCPUs can be a blessing. XV. Support for Solaris Cluster 4.1: The What's New document doesn't actually mention this one, since OSC 4.1 had not been released at the time 11.1 was. But since then it is available, and it requires Solaris 11.1. And it's only a "pkg update" away. ...and I seriously need to stop here. There's a lot I missed - Edge Virtual Bridging, lofi tuning, ZFS sharing and crypto enhancements, USB 3.0, pulseaudio, trusted extensions updates, etc. - but if I mention all those then I effectively copy the What's New document. Which I recommend reading now anyway; it is a great extract of the 300+ new projects and RFE follow-ups in S11.1. And this blog post is a summary of that extract.  For closing words, allow me to come back to Requests For Enhancement, RFEs. Any customer can request features. 
Open up a Support Request, explain that this is an RFE, describe the feature you/your company desires to have in S11 implemented. The more SRs are collected for an RFE, the more chance it's got to get implemented. Feel free to provide feedback about the product, as well as about the Solaris 11.1 Documentation using the "Feedback" button there. Both the Solaris engineers and the documentation writers are eager to hear your input.Feel free to comment about this post too. Except that it's too long ;)  wbr,charlie

    Read the article

  • Knockout with ASP.Net MVC2 - HTML Extension Helpers for input controls

    - by Renso
    Goal: Defining Knockout-style input controls can be tedious and also may be something that you may find obtrusive, mixing your HTML with data bind syntax as well as binding your aspx, ascx files to Knockout. The goal is to make specifying Knockout specific HTML tags easy, seamless really, as well as being able to remove references to Knockout easily. Environment considerations: ASP.Net MVC2 or later Knockoutjs.js How to:     public static class HtmlExtensions     {         public static string DataBoundCheckBox(this HtmlHelper helper, string name, bool isChecked, object htmlAttributes)         {             var builder = new TagBuilder("input");             var dic = new RouteValueDictionary(htmlAttributes) { { "data-bind", String.Format("checked: {0}", name) } };             builder.MergeAttributes(dic);             builder.MergeAttribute("type", @"checkbox");             builder.MergeAttribute("name", name);             builder.MergeAttribute("value", @"true");             if (isChecked)             {                 builder.MergeAttribute("checked", @"checked");             }             return builder.ToString(TagRenderMode.SelfClosing);         }         public static MvcHtmlString DataBoundSelectList(this HtmlHelper helper, string name, IEnumerable<SelectListItem> selectList, String optionLabel)         {             var attrProperties = new StringBuilder();             attrProperties.Append(String.Format("optionsText: '{0}'", name));             if (!String.IsNullOrEmpty(optionLabel)) attrProperties.Append(String.Format(", optionsCaption: '{0}'", optionLabel));             attrProperties.Append(String.Format(", value: {0}", name));             var dic = new RouteValueDictionary { { "data-bind", attrProperties.ToString() } };             return helper.DropDownList(name, selectList, optionLabel, dic);         }         public static MvcHtmlString DataBoundSelectList(this HtmlHelper helper, string name, IEnumerable<SelectListItem> selectList, String optionLabel, object htmlAttributes)         {             var attrProperties = new StringBuilder();             attrProperties.Append(String.Format("optionsText: '{0}'", name));             if (!String.IsNullOrEmpty(optionLabel)) attrProperties.Append(String.Format(", optionsCaption: '{0}'", optionLabel));             attrProperties.Append(String.Format(", value: {0}", name));             var dic = new RouteValueDictionary(htmlAttributes) {{"data-bind", attrProperties}};             return helper.DropDownList(name, selectList, optionLabel, dic);         }         public static String DataBoundSelectList(this HtmlHelper helper, String options, String optionsText, String value)         {             return String.Format("<select data-bind=\"options: {0},optionsText: '{1}',value: {2}\"></select>", options, optionsText, value);         }         public static MvcHtmlString DataBoundTextBox(this HtmlHelper helper, string name, object value, object htmlAttributes)         {             var dic = new RouteValueDictionary(htmlAttributes);             dic.Add("data-bind", String.Format("value: {0}", name));             return helper.TextBox(name, value, dic);         }         public static MvcHtmlString DataBoundTextBox(this HtmlHelper helper, string name, string observable, object value, object htmlAttributes)         {             var dic = new RouteValueDictionary(htmlAttributes);             dic.Add("data-bind", String.Format("value: {0}", observable));             return helper.TextBox(name, value, dic);         }         public static 
MvcHtmlString DataBoundTextArea(this HtmlHelper helper, string name, string value, int rows, int columns, object htmlAttributes)         {             var dic = new RouteValueDictionary(htmlAttributes);             dic.Add("data-bind", String.Format("value: {0}", name));             return helper.TextArea(name, value, rows, columns, dic);         }         public static MvcHtmlString DataBoundTextArea(this HtmlHelper helper, string name, string observable, string value, int rows, int columns, object htmlAttributes)         {             var dic = new RouteValueDictionary(htmlAttributes);             dic.Add("data-bind", String.Format("value: {0}", observable));             return helper.TextArea(name, value, rows, columns, dic);         }         public static string BuildUrlFromExpression<T>(this HtmlHelper helper, Expression<Action<T>> action)         {             var values = CreateRouteValuesFromExpression(action);             var virtualPath = helper.RouteCollection.GetVirtualPath(helper.ViewContext.RequestContext, values);             if (virtualPath != null)             {                 return virtualPath.VirtualPath;             }             return null;         }         public static string ActionLink<T>(this HtmlHelper helper, Expression<Action<T>> action, string linkText)         {             return helper.ActionLink(action, linkText, null);         }         public static string ActionLink<T>(this HtmlHelper helper, Expression<Action<T>> action, string linkText, object htmlAttributes)         {             var values = CreateRouteValuesFromExpression(action);             var controllerName = (string)values["controller"];             var actionName = (string)values["action"];             values.Remove("controller");             values.Remove("action");             return helper.ActionLink(linkText, actionName, controllerName, values, new RouteValueDictionary(htmlAttributes)).ToHtmlString();         }         public static MvcForm Form<T>(this HtmlHelper helper, Expression<Action<T>> action)         {             return helper.Form(action, FormMethod.Post);         }         public static MvcForm Form<T>(this HtmlHelper helper, Expression<Action<T>> action, FormMethod method)         {             var values = CreateRouteValuesFromExpression(action);             string controllerName = (string)values["controller"];             string actionName = (string)values["action"];             values.Remove("controller");             values.Remove("action");             return helper.BeginForm(actionName, controllerName, values, method);         }         public static MvcForm Form<T>(this HtmlHelper helper, Expression<Action<T>> action, FormMethod method, object htmlAttributes)         {             var values = CreateRouteValuesFromExpression(action);             string controllerName = (string)values["controller"];             string actionName = (string)values["action"];             values.Remove("controller");             values.Remove("action");             return helper.BeginForm(actionName, controllerName, values, method, new RouteValueDictionary(htmlAttributes));         }         public static string VertCheckBox(this HtmlHelper helper, string name, bool isChecked)         {             return helper.CustomCheckBox(name, isChecked, null);         }          public static string CustomCheckBox(this HtmlHelper helper, string name, bool isChecked, object htmlAttributes)         {             TagBuilder builder = new TagBuilder("input");             builder.MergeAttributes(new 
RouteValueDictionary(htmlAttributes));             builder.MergeAttribute("type", "checkbox");             builder.MergeAttribute("name", name);             builder.MergeAttribute("value", "true");             if (isChecked)             {                 builder.MergeAttribute("checked", "checked");             }             return builder.ToString(TagRenderMode.SelfClosing);         }         public static string Script(this HtmlHelper helper, string script, object scriptAttributes)         {             var pathForCRMScripts = ScriptsController.GetPathForCRMScripts();             if (ScriptOptimizerConfig.EnableMinimizedFileLoad)             {                 string newPathForCRM = pathForCRMScripts + "Min/";                 ScriptsController.ServerPathMapper = new ServerPathMapper();                 string fullPath = ScriptsController.ServerMapPath(newPathForCRM);                 if (!File.Exists(fullPath + script))                     return null;                 if (!Directory.Exists(fullPath))                     return null;                 pathForCRMScripts = newPathForCRM;             }             var builder = new TagBuilder("script");             builder.MergeAttributes(new RouteValueDictionary(scriptAttributes));             builder.MergeAttribute("type", @"text/javascript");             builder.MergeAttribute("src", String.Format("{0}{1}", pathForCRMScripts.Replace("~", String.Empty), script));             return builder.ToString(TagRenderMode.SelfClosing);         }         private static RouteValueDictionary CreateRouteValuesFromExpression<T>(Expression<Action<T>> action)         {             if (action == null)                 throw new InvalidOperationException("Action must be provided");             var body = action.Body as MethodCallExpression;             if (body == null)             {                 throw new InvalidOperationException("Expression must be a method call");             }             if (body.Object != action.Parameters[0])             {                 throw new InvalidOperationException("Method call must target lambda argument");             }             // This will build up a RouteValueDictionary containing the controller name, action name, and any             // parameters passed as part of the "action" parameter.             string name = body.Method.Name;             string controllerName = typeof(T).Name;             if (controllerName.EndsWith("Controller", StringComparison.OrdinalIgnoreCase))             {                 controllerName = controllerName.Remove(controllerName.Length - 10, 10);             }             var values = BuildParameterValuesFromExpression(body) ?? new RouteValueDictionary();             values.Add("controller", controllerName);             values.Add("action", name);             return values;         }         private static RouteValueDictionary BuildParameterValuesFromExpression(MethodCallExpression call)         {             // Build up a RouteValueDictionary containing parameter names as keys and parameter values             // as values based on the MethodCallExpression passed in.             var values = new RouteValueDictionary();             ParameterInfo[] parameters = call.Method.GetParameters();             // If the passed in method has no parameters, just return an empty dictionary.             
if (parameters.Length == 0)             {                 return values;             }             for (int i = 0; i < parameters.Length; i++)             {                 object parameterValue;                 Expression expression = call.Arguments[i];                 // If the current parameter is a constant, just use its value as the parameter value.                 var constant = expression as ConstantExpression;                 if (constant != null)                 {                     parameterValue = constant.Value;                 }                 else                 {                     // Otherwise, compile and execute the expression and use that as the parameter value.                     var function = Expression.Lambda<Func<object>>(Expression.Convert(expression, typeof(object)),                                                                    new ParameterExpression[0]);                     try                     {                         parameterValue = function.Compile()();                     }                     catch                     {                         parameterValue = null;                     }                 }                 values.Add(parameters[i].Name, parameterValue);             }             return values;         }     }   Some observations: The first two DataBoundSelectList overloaded methods are specifically built to load the data right into the drop down box as part of the HTML response stream rather than letting Knockout's engine populate the options client-side. The third overloaded method does it client-side via the viewmodel. The first two overloads can be used when you have no requirement to add complex JSON objects to your lists. Furthermore, why render and parse the JSON object when you can have it all built and rendered server-side like any other list control?
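To make the textarea helpers above concrete, here is a minimal usage sketch (the field name "notes" and the CSS class are hypothetical, not from the article):

    // assumes a Knockout viewmodel exposing a "notes" observable
    var html = helper.DataBoundTextArea("notes", "first draft", 5, 60, new { @class = "comment-box" });
    // renders roughly:
    // <textarea name="notes" rows="5" cols="60" class="comment-box"
    //           data-bind="value: notes">first draft</textarea>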

    Read the article

  • C#/.NET Little Wonders: Constraining Generics with Where Clause

    - by James Michael Hare
Back when I was primarily a C++ developer, I loved C++ templates.  The power of writing very reusable generic classes brought the art of programming to a brand new level.  Unfortunately, when .NET 1.0 came about, they didn’t have a template equivalent.  With .NET 2.0, however, we finally got generics, which once again let us spread our wings and program more generically in the world of .NET. However, C# generics behave in some ways very differently from their C++ template cousins.  There is a handy clause, however, that helps you navigate these waters to make your generics more powerful. The Problem – C# Assumes Lowest Common Denominator In C++, you can create a template and do nearly anything syntactically possible on the template parameter, and C++ will not check whether the methods/fields/operations invoked are valid until you declare a realization of the type.  Let me illustrate with a C++ example: 1: // compiles fine, C++ makes no assumptions as to T 2: template <typename T> 3: class ReverseComparer 4: { 5: public: 6: int Compare(const T& lhs, const T& rhs) 7: { 8: return rhs.CompareTo(lhs); 9: } 10: }; Notice that we are invoking a method CompareTo() off of template type T.  Because we don’t know at this point what type T is, C++ makes no assumptions and there are no errors. C++ tends to take the path of not checking the template type usage until the method is actually invoked with a specific type, which differs from the behavior of C#: 1: // this will NOT compile! C# assumes lowest common denominator. 2: public class ReverseComparer<T> 3: { 4: public int Compare(T lhs, T rhs) 5: { 6: return lhs.CompareTo(rhs); 7: } 8: } So why does C# give us a compiler error even when we don’t yet know what type T is?  This is because C# took a different path in how they made generics.  Unless you specify otherwise, for the purposes of the code inside the generic method, T is basically treated like an object (notice I didn’t say T is an object). That means that any operations, fields, methods, properties, etc. that you attempt to use on type T must be available at the lowest common denominator type: object.  Now, while object has the broadest applicability, it also has the fewest specifics.  So how do we allow our generic type placeholder to do more than just what object can do? Solution: Constrain the Type With Where Clause So how do we get around this in C#?  The answer is to constrain the generic type placeholder with the where clause.  Basically, the where clause allows you to specify additional constraints on what the actual type used to fill the generic type placeholder must support. You might think that narrowing the scope of a generic means a weaker generic.  In reality, though it limits the number of types that can be used with the generic, it also gives the generic more power to deal with those types.  In effect, these constraints say that if the type meets the given constraint, you can perform the activities that pertain to that constraint with the generic placeholders. Constraining Generic Type to Interface or Superclass One of the handiest where clause constraints is the ability to specify that the generic type must implement a certain interface or inherit from a certain base class. 
For example, you can’t call CompareTo() in our first C# generic without constraints, but if we constrain T to IComparable<T>, we can: 1: public class ReverseComparer<T> 2: where T : IComparable<T> 3: { 4: public int Compare(T lhs, T rhs) 5: { 6: return lhs.CompareTo(rhs); 7: } 8: } Now that we’ve constrained T to an implementation of IComparable<T>, this means that our variables of generic type T may now call any members specified in IComparable<T> as well.  This means that the call to CompareTo() is now legal. If you constrain your type, you will also get compile-time errors if you attempt to use a type that doesn’t meet the constraint.  This is much better than the syntax error you would get within C++ template code itself when you use a type not supported by a C++ template. Constraining Generic Type to Only Reference Types Sometimes you want to assign null to an instance of a generic type, but you can’t do this without constraints, because you have no guarantee that the type used to realize the generic is not a value type, where null is meaningless. Well, we can fix this by specifying the class constraint in the where clause.  By declaring that a generic type must be a class, we are saying that it is a reference type, and this allows us to assign null to instances of that type: 1: public static class ObjectExtensions 2: { 3: public static TOut Maybe<TIn, TOut>(this TIn value, Func<TIn, TOut> accessor) 4: where TOut : class 5: where TIn : class 6: { 7: return (value != null) ? accessor(value) : null; 8: } 9: } In the example above, we want to be able to access a property off of a reference, and if that reference is null, pass the null on down the line.  To do this, both the input type and the output type must be reference types (yes, nullable value types could also be considered applicable at a logical level, but there’s not a direct constraint for those). Constraining Generic Type to Only Value Types Similarly to constraining a generic type to be a reference type, you can also constrain a generic type to be a value type.  To do this you use the struct constraint, which specifies that the generic type must be a value type (primitive, struct, enum, etc). Consider the following method, which will convert anything that is IConvertible (int, double, string, etc) to the value type you specify, or null if the instance is null. 1: public static T? ConvertToNullable<T>(IConvertible value) 2: where T : struct 3: { 4: T? result = null; 5:  6: if (value != null) 7: { 8: result = (T)Convert.ChangeType(value, typeof(T)); 9: } 10:  11: return result; 12: } Because T was constrained to be a value type, we can use T? (System.Nullable<T>) where we could not do this if T was a reference type. Constraining Generic Type to Require Default Constructor You can also constrain a type to require the existence of a default constructor.  Because by default C# doesn’t know what constructors a generic type placeholder does or does not have available, it can’t typically allow you to call one.  That said, if you give it the new() constraint, it will mean that the type used to realize the generic type must have a default (no argument) constructor. Let’s assume you have a generic adapter class that, given some mappings, will adapt an item from type TFrom to type TTo.  
Because it must create a new instance of type TTo in the process, we need to specify that TTo has a default constructor: 1: // Given a set of Action<TFrom,TTo> mappings will map TFrom to TTo 2: public class Adapter<TFrom, TTo> : IEnumerable<Action<TFrom, TTo>> 3: where TTo : class, new() 4: { 5: // The list of translations from TFrom to TTo 6: public List<Action<TFrom, TTo>> Translations { get; private set; } 7:  8: // Construct with empty translation and reverse translation sets. 9: public Adapter() 10: { 11: // did this instead of auto-properties to allow simple use of initializers 12: Translations = new List<Action<TFrom, TTo>>(); 13: } 14:  15: // Add a translator to the collection, useful for initializer list 16: public void Add(Action<TFrom, TTo> translation) 17: { 18: Translations.Add(translation); 19: } 20:  21: // Add a translator that first checks a predicate to determine if the translation 22: // should be performed, then translates if the predicate returns true 23: public void Add(Predicate<TFrom> conditional, Action<TFrom, TTo> translation) 24: { 25: Translations.Add((from, to) => 26: { 27: if (conditional(from)) 28: { 29: translation(from, to); 30: } 31: }); 32: } 33:  34: // Translates an object forward from TFrom object to TTo object. 35: public TTo Adapt(TFrom sourceObject) 36: { 37: var resultObject = new TTo(); 38:  39: // Process each translation 40: Translations.ForEach(t => t(sourceObject, resultObject)); 41:  42: return resultObject; 43: } 44:  45: // Returns an enumerator that iterates through the collection. 46: public IEnumerator<Action<TFrom, TTo>> GetEnumerator() 47: { 48: return Translations.GetEnumerator(); 49: } 50:  51: // Returns an enumerator that iterates through a collection. 52: IEnumerator IEnumerable.GetEnumerator() 53: { 54: return GetEnumerator(); 55: } 56: } Notice, however, that you can’t specify any other constructor; you can only specify that the type has a default (no argument) constructor. Summary The where clause is an excellent tool that gives your .NET generics even more power to perform tasks beyond just the base "object level" behavior.  There are a few things you cannot specify with constraints (currently) though: Cannot specify the generic type must be an enum. Cannot specify the generic type must have a certain property or method without specifying a base class or interface – that is, you can’t say that the generic must have a Start() method. Cannot specify that the generic type allows arithmetic operations. Cannot specify that the generic type requires a specific non-default constructor. In addition, you cannot overload a generic type definition with different, opposing constraints.  For example, you can’t define an Adapter<T> where T : struct and Adapter<T> where T : class.  Hopefully, in the future we will get some of these things to make the where clause even more useful, but until then what we have is extremely valuable in making our generics more user-friendly and more powerful!   Technorati Tags: C#,.NET,Little Wonders,BlackRabbitCoder,where,generics
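As a quick usage sketch of the constrained ReverseComparer above (my example, not the author's): Compare(T, T) happens to match the Comparison<T> delegate signature, so it plugs straight into List<T>.Sort, and the where clause is enforced at compile time:

    var numbers = new List<int> { 3, 1, 2 };
    // Legal: int implements IComparable<int>, so the constraint is satisfied.
    numbers.Sort(new ReverseComparer<int>().Compare);   // numbers is now { 3, 2, 1 }
    // Illegal: object does not implement IComparable<object>, so this fails
    // at compile time rather than at the point of the CompareTo call:
    // var bad = new ReverseComparer<object>();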

    Read the article

  • Linq2SQL vs NHibernate performance (have I gone mad?)

    - by HeavyWave
I have written the following tests to compare the performance of Linq2SQL and NHibernate, and I find the results to be somewhat strange. The mappings are straightforward and identical for both. Both are running against a live DB. I'm not deleting Campaigns in the case of Linq, but that shouldn't affect performance by more than 10 ms. Linq: [Test] public void Test1000ReadsWritesToAgentStateLinqPrecompiled() { Stopwatch sw = new Stopwatch(); Stopwatch swIn = new Stopwatch(); sw.Start(); for (int i = 0; i < 1000; i++) { swIn.Reset(); swIn.Start(); ReadWriteAndDeleteAgentStateWithLinqPrecompiled(); swIn.Stop(); Console.WriteLine("Run ReadWriteAndDeleteAgentState: " + swIn.ElapsedMilliseconds + " ms"); } sw.Stop(); Console.WriteLine("Total Time: " + sw.ElapsedMilliseconds + " ms"); Console.WriteLine("Average time to execute queries: " + sw.ElapsedMilliseconds / 1000 + " ms"); } private static readonly Func<AgentDesktop3DataContext, int, EntityModel.CampaignDetail> GetCampaignById = CompiledQuery.Compile<AgentDesktop3DataContext, int, EntityModel.CampaignDetail>( (ctx, sessionId) => (from cd in ctx.CampaignDetails join a in ctx.AgentCampaigns on cd.CampaignDetailId equals a.CampaignDetailId where a.AgentStateId == sessionId select cd).FirstOrDefault()); private void ReadWriteAndDeleteAgentStateWithLinqPrecompiled() { int id = 0; using (var ctx = new AgentDesktop3DataContext()) { EntityModel.AgentState agentState = new EntityModel.AgentState(); var campaign = new EntityModel.CampaignDetail { CampaignName = "Test" }; var campaignDisposition = new EntityModel.CampaignDisposition { Code = "123" }; campaignDisposition.Description = "abc"; campaign.CampaignDispositions.Add(campaignDisposition); agentState.CallState = 3; campaign.AgentCampaigns.Add(new AgentCampaign { AgentState = agentState }); ctx.CampaignDetails.InsertOnSubmit(campaign); ctx.AgentStates.InsertOnSubmit(agentState); ctx.SubmitChanges(); id = agentState.AgentStateId; } using (var ctx = new AgentDesktop3DataContext()) { var dbAgentState = ctx.GetAgentStateById(id); Assert.IsNotNull(dbAgentState); Assert.AreEqual(dbAgentState.CallState, 3); var campaignDetails = GetCampaignById(ctx, id); Assert.AreEqual(campaignDetails.CampaignDispositions[0].Description, "abc"); } using (var ctx = new AgentDesktop3DataContext()) { ctx.DeleteSessionById(id); } } NHibernate (the loop is the same): private void ReadWriteAndDeleteAgentState() { var id = WriteAgentState().Id; StartNewTransaction(); var dbAgentState = agentStateRepository.Get(id); Assert.IsNotNull(dbAgentState); Assert.AreEqual(dbAgentState.CallState, 3); Assert.AreEqual(dbAgentState.Campaigns[0].Dispositions[0].Description, "abc"); var campaignId = dbAgentState.Campaigns[0].Id; agentStateRepository.Delete(dbAgentState); NHibernateSession.Current.Transaction.Commit(); Cleanup(campaignId); NHibernateSession.Current.BeginTransaction(); } Results: NHibernate: Total Time: 9469 ms Average time to execute 13 queries: 9 ms Linq: Total Time: 127200 ms Average time to execute 13 queries: 127 ms Linq lost by 13.5 times! Even with precompiled queries (both read queries are precompiled). This can't be right. Although I expected NHibernate to be faster, this is just too big a difference, considering the mappings are identical and NHibernate actually executes more queries against the DB.
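One hedged experiment (not a definitive diagnosis): Linq2SQL's DataContext tracks every entity it materializes for change detection, and that bookkeeping is a common source of read overhead. Turning tracking off for the read-only block shows how much of the gap it accounts for — ObjectTrackingEnabled is a standard DataContext property, though a context configured this way can no longer call SubmitChanges:

    using (var ctx = new AgentDesktop3DataContext())
    {
        ctx.ObjectTrackingEnabled = false; // read-only: skip change-tracking bookkeeping
        var dbAgentState = ctx.GetAgentStateById(id);
        var campaignDetails = GetCampaignById(ctx, id);
    }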

    Read the article

  • Introduction to Human Workflow 11g

    - by agiovannetti
Human Workflow is a component of SOA Suite just like BPEL, Mediator, Business Rules, etc. The Human Workflow component allows you to incorporate human intervention in a business process. You can use Human Workflow to create a business process that requires a manager to approve purchase orders greater than $10,000; or a business process that handles article reviews in which a group of reviewers need to vote/approve an article before it gets published. Human Workflow can handle the task assignment and routing as well as the generation of notifications to the participants. There are three common patterns or usages of Human Workflow: 1) Approval Scenarios: manage documents and other transactional data through approval chains. For example: approve expense report, vacation approval, hiring approval, etc. 2) Reviews by multiple users or groups: group collaboration and review of documents or proposals. For example, processing a sales quote which is subject to review by multiple people. 3) Case Management: workflows around work management or case management. For example, processing a service request. This could be routed to various people who all need to modify the task. It may also incorporate ad hoc routing which is unknown at design time. SOA 11g Human Workflow includes the following features: Assignment and routing of tasks to the correct users or groups. Deadlines, escalations, notifications, and other features required for ensuring the timely performance of a task. Presentation of tasks to end users through a variety of mechanisms, including a Worklist application. Organization, filtering, prioritization and other features required for end users to productively perform their tasks. Reports, reassignments, load balancing and other features required by supervisors and business owners to manage the performance of tasks. Human Workflow Architecture The Human Workflow component is divided into 3 modules: the service interface, the task definition and the client interface module. The Service Interface handles the interaction with BPEL and other components. The Client Interface handles the presentation of task data through clients like the Worklist application, portals and notification channels. The task definition module is in charge of managing the lifecycle of a task. Who should get the task assigned? What should happen next with the task? When must the task be completed? Should the task be escalated? And so on. Stages and Participants When you create a Human Task you need to specify how the task is assigned and routed. The first step is to define the stages and participants. A stage is just a logical group. A participant can be a user, a group of users or an application role. The participants indicate the type of assignment and routing that will be performed. Stages can be sequential or in parallel. You can combine them to create any usage you require. See diagram below: Assignment and Routing There are different ways a task can be assigned and routed: Single Approver: task is assigned to a single user, group or role. For example, a vacation request is assigned to a manager. If the manager approves or rejects the request, the employee is notified with the decision. If the task is assigned to a group, then once one of the managers acts on it, the task is completed. Parallel: task is assigned to a set of people that must work in parallel. This is commonly used for voting. For example, a task gets approved once 50% of the participants approve it. You can also set it up to be a unanimous vote. 
Serial: participants must work in sequence. The most common scenario for this is management chain escalation. FYI (For Your Information): task is assigned to participants who can view it, add comments and attachments, but cannot modify or complete the task. Task Actions The following is the list of actions that can be performed on a task: Claim: if a task is assigned to a group or multiple users, then the task must be claimed first to be able to act on it. Escalate: if the participant is not able to complete a task, he/she can escalate it. The task is reassigned to his/her manager (up one level in a hierarchy). Pushback: the task is sent back to the previous assignee. Reassign: if the participant is a manager, he/she can delegate a task to his/her reports. Release: if a task is assigned to a group or multiple users, it can be released if the user who claimed the task cannot complete the task. Any of the other assignees can claim and complete the task. Request Information and Submit Information: use when the participant needs to supply more information or to request more information from the task creator or any of the previous assignees. Suspend and Resume: if a task is not relevant, it can be suspended. A suspension is indefinite. It does not expire until Resume is used to resume working on the task. Withdraw: if the creator of a task does not want to continue with it, for example, he wants to cancel a vacation request, he can withdraw the task. The business process determines what happens next. Renew: if a task is about to expire, the participant can renew it. The task expiration date is extended one week. Notifications Human Workflow provides a mechanism for sending notifications to participants to alert them of changes on a task. Notifications can be sent via email, telephone voice message, instant messaging (IM) or short message service (SMS). Notifications can be sent when the task status changes to any of the following: Assigned/renewed/delegated/reassigned/escalated Completed Error Expired Request Info Resume Suspended Added/Updated comments and/or attachments Updated Outcome Withdraw Other Actions (e.g. acquiring a task) Here is an example of an email notification: Worklist Application Oracle BPM Worklist application is the default user interface included in SOA Suite. It allows users to access and act on tasks that have been assigned to them. For example, from the Worklist application, a loan agent can review loan applications or a manager can approve employee vacation requests. Through the Worklist Application users can: Perform authorized actions on tasks, acquire and check out shared tasks, define personal to-do tasks and define subtasks. Filter the tasks view based on various criteria. Work with standard work queues, such as high priority tasks, tasks due soon and so on. Work queues allow users to create a custom view to group a subset of tasks in the worklist, for example, high priority tasks, tasks due in 24 hours, expense approval tasks and more. Define custom work queues. Gain proxy access to part of another user's tasks. Define custom vacation rules and delegation rules. Enable group owners to define task dispatching rules for shared tasks. Collect a complete workflow history and audit trail. Use digital signatures for tasks. Run reports like Unattended tasks, Tasks productivity, etc. Here is a screenshot of what the Worklist Application looks like. On the right-hand side you can see the tasks that have been assigned to the user and the task's detail. 
References Introduction to SOA Suite 11g Human Workflow Webcast Note 1452937.2 Human Workflow Information Center Using the Human Workflow Service Component 11.1.1.6 Human Workflow Samples Human Workflow APIs Java Docs

    Read the article

  • Algorithm to select groups of similar items in 2d array

    - by mafutrct
There is a 2d array of items (in my case they are called Intersections). A certain item is given as a start. The task is to find all items directly or indirectly connected to this item that satisfy a certain function. So the basic algorithm is like this: Add the start to the result list. Repeat until no modification: Add each item in the array that satisfies the function and touches any item in the result list to the result list. My current implementation looks like this: private IList<Intersection> SelectGroup ( Intersection start, Func<Intersection, Intersection, bool> select) { List<Intersection> result = new List<Intersection> (); Queue<Intersection> source = new Queue<Intersection> (); source.Enqueue (start); while (source.Any ()) { var s = source.Dequeue (); result.Add (s); foreach (var neighbour in Neighbours (s)) { if (select (start, neighbour) && !result.Contains (neighbour) && !source.Contains (neighbour)) { source.Enqueue (neighbour); } } } Debug.Assert (result.Distinct ().Count () == result.Count ()); Debug.Assert (result.All (x => select (x, result.First ()))); return result; } private List<Intersection> Neighbours (IIntersection intersection) { int x = intersection.X; int y = intersection.Y; List<Intersection> list = new List<Intersection> (); if (x > 1) { list.Add (GetIntersection (x - 1, y)); } if (y > 1) { list.Add (GetIntersection (x, y - 1)); } if (x < Size) { list.Add (GetIntersection (x + 1, y)); } if (y < Size) { list.Add (GetIntersection (x, y + 1)); } return list; } (The select function takes a start item and returns true iff the second item satisfies.) This does its job and turned out to be reasonably fast for the usual array sizes (about 20*20). However, I'm interested in further improvements. Any ideas? Example (X satisfies in relation to other Xs, . never satisfies): .... XX.. .XX. X... In this case, there are 2 groups: a central group of 4 items and a group of a single item in the lower left. Selecting the group (for instance by starting item [2, 2]) returns the former, while the latter can be selected using the starting item and sole return value [0, 3]. Example 2: .A.. ..BB A.AA This time there are 4 groups. The 3 A groups are not connected, so they are returned as separate groups. The bigger A and B groups are connected, but A does not relate to B, so they are returned as separate groups.
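One cheap improvement, sketched below under the assumption that Intersection has value-based Equals/GetHashCode (or that reference identity is what you want): both result.Contains and source.Contains are O(n) scans inside the inner loop, so tracking visited items in a HashSet makes each membership test O(1) and keeps the whole selection linear in the number of cells:

    var visited = new HashSet<Intersection> { start };
    var source = new Queue<Intersection> ();
    source.Enqueue (start);
    var result = new List<Intersection> ();
    while (source.Any ())
    {
        var s = source.Dequeue ();
        result.Add (s);
        foreach (var neighbour in Neighbours (s))
        {
            // Add returns false if the item was already seen,
            // replacing both of the original Contains calls.
            if (select (start, neighbour) && visited.Add (neighbour))
            {
                source.Enqueue (neighbour);
            }
        }
    }
    return result;

For 20*20 arrays the difference is small, but it turns the worst case from O(cells²) into O(cells).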

    Read the article

• Why does my performance slow to a crawl when I move methods into a base class?

    - by Juliet
I'm writing different implementations of immutable binary trees in C#, and I wanted my trees to inherit some common methods from a base class. I have lots of binary tree data structures to implement, and I wanted to move some common methods into a base binary tree class. Unfortunately, classes which derive from the base class are abysmally slow. Non-derived classes perform adequately. Here are two nearly identical implementations of an AVL tree to demonstrate: AvlTree: http://pastebin.com/V4WWUAyT DerivedAvlTree: http://pastebin.com/PussQDmN The two trees have the exact same code, but I've moved the DerivedAvlTree.Insert method into the base class. Here's a test app: using System; using System.Collections.Generic; using System.Diagnostics; using System.Linq; using Juliet.Collections.Immutable; namespace ConsoleApplication1 { class Program { const int VALUE_COUNT = 5000; static void Main(string[] args) { var avlTreeTimes = TimeIt(TestAvlTree); var derivedAvlTreeTimes = TimeIt(TestDerivedAvlTree); Console.WriteLine("avlTreeTimes: {0}, derivedAvlTreeTimes: {1}", avlTreeTimes, derivedAvlTreeTimes); } static double TimeIt(Func<int, int> f) { var seeds = new int[] { 314159265, 271828183, 231406926, 141421356, 161803399, 266514414, 15485867, 122949829, 198491329, 42 }; var times = new List<double>(); foreach (int seed in seeds) { var sw = Stopwatch.StartNew(); f(seed); sw.Stop(); times.Add(sw.Elapsed.TotalMilliseconds); } // throwing away top and bottom results times.Sort(); times.RemoveAt(0); times.RemoveAt(times.Count - 1); return times.Average(); } static int TestAvlTree(int seed) { var rnd = new System.Random(seed); var avlTree = AvlTree<double>.Create((x, y) => x.CompareTo(y)); for (int i = 0; i < VALUE_COUNT; i++) { avlTree = avlTree.Insert(rnd.NextDouble()); } return avlTree.Count; } static int TestDerivedAvlTree(int seed) { var rnd = new System.Random(seed); var avlTree2 = DerivedAvlTree<double>.Create((x, y) => x.CompareTo(y)); for (int i = 0; i < VALUE_COUNT; i++) { avlTree2 = avlTree2.Insert(rnd.NextDouble()); } return avlTree2.Count; } } } AvlTree: inserts 5000 items in 121 ms DerivedAvlTree: inserts 5000 items in 2182 ms My profiler indicates that the program spends an inordinate amount of time in BaseBinaryTree.Insert. Anyone who's interested can see the EQATEC log file I've created with the code above (you'll need the EQATEC profiler to make sense of the file). I really want to use a common base class for all of my binary trees, but I can't do that if performance suffers. What causes my DerivedAvlTree to perform so badly, and what can I do to fix it?
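Without the profiler log it's hard to be definitive, but one variable worth isolating is virtual dispatch: when Insert lives in the base class and reaches derived behavior through virtual members, the JIT often can't inline those calls the way it can inside a sealed, self-contained class. A crude, hypothetical micro-benchmark (names are mine, not from the pastebins) to measure just that cost:

    using System;
    using System.Diagnostics;

    abstract class NodeOps
    {
        public abstract int Combine(int a, int b);
    }

    sealed class AddOps : NodeOps
    {
        public override int Combine(int a, int b) { return a + b; }
    }

    static class VirtualCallCost
    {
        static void Main()
        {
            NodeOps ops = new AddOps();
            var sw = Stopwatch.StartNew();
            int acc = 0;
            for (int i = 0; i < 50000000; i++)
            {
                acc = ops.Combine(acc, 1); // virtual call through the base reference
            }
            sw.Stop();
            Console.WriteLine("{0} ms (acc = {1})", sw.ElapsedMilliseconds, acc);
        }
    }

If a direct (non-virtual) version of the same loop is only marginally faster, the dispatch itself isn't the culprit and the extra time is more likely boxing, allocations, or redundant work in the shared Insert.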

    Read the article

  • bubble sort on array of c structures not sorting properly

    - by xmpirate
I have the following program for book records, and I want to sort the records on the name of the book. The code doesn't show any errors, but it's not sorting all the records. #include "stdio.h" #include "string.h" #define SIZE 5 struct books{ //define struct char name[100],author[100]; int year,copies; }; struct books book1[SIZE],book2[SIZE],*pointer; //define struct vars void sort(struct books *,int); //define sort func main() { int i; char c; for(i=0;i<SIZE;i++) //scanning values { gets(book1[i].name); gets(book1[i].author); scanf("%d%d",&book1[i].year,&book1[i].copies); while((c = getchar()) != '\n' && c != EOF); } pointer=book1; sort(pointer,SIZE); //sort call i=0; //printing values while(i<SIZE) { printf("##########################################################################\n"); printf("Book: %s\nAuthor: %s\nYear of Publication: %d\nNo of Copies: %d\n",book1[i].name,book1[i].author,book1[i].year,book1[i].copies); printf("##########################################################################\n"); i++; } } void sort(struct books *pointer,int n) { int i,j,sorted=0; struct books temp; for(i=0;(i<n-1)&&(sorted==0);i++) //bubble sort on the book name { sorted=1; for(j=0;j<n-i-1;j++) { if(strcmp((*pointer).name,(*(pointer+1)).name)>0) { //copy to temp val strcpy(temp.name,(*pointer).name); strcpy(temp.author,(*pointer).author); temp.year=(*pointer).year; temp.copies=(*pointer).copies; //copy next val strcpy((*pointer).name,(*(pointer+1)).name); strcpy((*pointer).author,(*(pointer+1)).author); (*pointer).year=(*(pointer+1)).year; (*pointer).copies=(*(pointer+1)).copies; //copy back temp val strcpy((*(pointer+1)).name,temp.name); strcpy((*(pointer+1)).author,temp.author); (*(pointer+1)).year=temp.year; (*(pointer+1)).copies=temp.copies; sorted=0; } *pointer++; } } } My Input The C Programming Language X Y Z 1934 56 Inferno Dan Brown 1993 453 harry Potter and the soccers stone J K Rowling 2012 150 Ruby On Rails jim aurther nil 2004 130 Learn Python Easy Way gmaps4rails 1967 100 And the output ########################################################################## Book: Inferno Author: Dan Brown Year of Publication: 1993 No of Copies: 453 ########################################################################## ########################################################################## Book: The C Programming Language Author: X Y Z Year of Publication: 1934 No of Copies: 56 ########################################################################## ########################################################################## Book: Ruby On Rails Author: jim aurther nil Year of Publication: 2004 No of Copies: 130 ########################################################################## ########################################################################## Book: Learn Python Easy Way Author: gmaps4rails Year of Publication: 1967 No of Copies: 100 ########################################################################## ########################################################################## Book: Author: Year of Publication: 0 No of Copies: 0 ########################################################################## We can see the above sorting is wrong. What am I doing wrong?
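For what it's worth, the likely culprit is visible in the sort function itself: the comparison always reads *pointer and *(pointer+1) instead of the j-th pair, and *pointer++ keeps advancing pointer without ever resetting it to the start of the array for the next outer pass, so the second pass walks past the end of book1 — which would explain the empty fifth record in the output. Indexing with pointer[j] and pointer[j+1], and swapping with plain struct assignment (temp = pointer[j];) instead of the field-by-field copies, sidesteps both problems.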

    Read the article

  • how to avoid sub-query to gain performance

    - by chun
Hi, I have a reporting query which has 2 long sub-queries: SELECT r1.code_centre, r1.libelle_centre, r1.id_equipe, r1.equipe, r1.id_file_attente, r1.libelle_file_attente,r1.id_date, r1.tranche, r1.id_granularite_de_periode,r1.granularite, r1.ContactsTraites, r1.ContactsenParcage, r1.ContactsenComm, r1.DureeTraitementContacts, r1.DureeComm, r1.DureeParcage, r2.AgentsConnectes, r2.DureeConnexion, r2.DureeTraitementAgents, r2.DureePostTraitement FROM ( SELECT cc.id_centre_contact, cc.code_centre, cc.libelle_centre, a.id_equipe, a.equipe, a.id_file_attente, f.libelle_file_attente, a.id_date, g.tranche, g.id_granularite_de_periode, g.granularite, sum(Nb_Contacts_Traites) as ContactsTraites, sum(Nb_Contacts_en_Parcage) as ContactsenParcage, sum(Nb_Contacts_en_Communication) as ContactsenComm, sum(Duree_Traitement/1000) as DureeTraitementContacts, sum(Duree_Communication / 1000 + Duree_Conference / 1000 + Duree_Com_Interagent / 1000) as DureeComm, sum(Duree_Parcage/1000) as DureeParcage FROM agr_synthese_activite_media_fa_agent a, centre_contact cc, direction_contact dc, granularite_de_periode g, media m, file_attente f WHERE m.id_media = a.id_media AND cc.id_centre_contact = a.id_centre_contact AND a.id_direction_contact = dc.id_direction_contact AND dc.direction_contact ='INCOMING' AND a.id_file_attente = f.id_file_attente AND m.media = 'PHONE' AND ( ( g.valeur_min = date_format(a.id_date,'%d/%m') and g.granularite = 'Jour') or ( g.granularite = 'Heure' and a.id_th_heure = g.id_granularite_de_periode) ) GROUP by cc.id_centre_contact, a.id_equipe, a.id_file_attente, a.id_date, g.tranche, g.id_granularite_de_periode) r1, ( (SELECT cc.id_centre_contact,cc.code_centre, cc.libelle_centre, a.id_equipe, a.equipe, a.id_date, g.tranche, g.id_granularite_de_periode,g.granularite, count(distinct a.id_agent) as AgentsConnectes, sum(Duree_Connexion / 1000) as DureeConnexion, sum(Duree_en_Traitement / 1000) as DureeTraitementAgents, sum(Duree_en_PostTraitement / 1000) as DureePostTraitement FROM activite_agent a, centre_contact cc, granularite_de_periode g WHERE ( g.valeur_min = date_format(a.id_date,'%d/%m') and g.granularite = 'Jour') AND cc.id_centre_contact = a.id_centre_contact GROUP BY cc.id_centre_contact, a.id_equipe, a.id_date, g.tranche, g.id_granularite_de_periode ) UNION (SELECT cc.id_centre_contact,cc.code_centre, cc.libelle_centre, a.id_equipe, a.equipe, a.id_date, g.tranche, g.id_granularite_de_periode,g.granularite, count(distinct a.id_agent) as AgentsConnectes, sum(Duree_Connexion / 1000) as DureeConnexion, sum(Duree_en_Traitement / 1000) as DureeTraitementAgents, sum(Duree_en_PostTraitement / 1000) as DureePostTraitement FROM activite_agent a, centre_contact cc, granularite_de_periode g WHERE ( g.granularite = 'Heure' AND a.id_th_heure = g.id_granularite_de_periode) AND cc.id_centre_contact = a.id_centre_contact GROUP BY cc.id_centre_contact,a.id_equipe, a.id_date, g.tranche, g.id_granularite_de_periode) ) r2 WHERE r1.id_centre_contact = r2.id_centre_contact AND r1.id_equipe = r2.id_equipe AND r1.id_date = r2.id_date AND r1.tranche = r2.tranche AND r1.id_granularite_de_periode = r2.id_granularite_de_periode GROUP BY r1.id_centre_contact , r1.id_equipe, r1.id_file_attente, r1.id_date, r1.tranche, r1.id_granularite_de_periode ORDER BY r1.code_centre, r1.libelle_centre, r1.equipe, r1.libelle_file_attente, r1.id_date, r1.id_granularite_de_periode,r1.tranche the EXPLAIN shows | id | select_type | table | type | possible_keys | key | key_len | ref | rows | Extra | '1', 'PRIMARY', '<derived3>', 
'ALL', NULL, NULL, NULL, NULL, '2520', 'Using temporary; Using filesort' '1', 'PRIMARY', '<derived2>', 'ALL', NULL, NULL, NULL, NULL, '4378', 'Using where; Using join buffer' '3', 'DERIVED', 'a', 'ALL', 'fk_Activite_Agent_centre_contact', NULL, NULL, NULL, '83433', 'Using temporary; Using filesort' '3', 'DERIVED', 'g', 'ref', 'Index_granularite,Index_Valeur_min', 'Index_Valeur_min', '23', 'func', '1', 'Using where' '3', 'DERIVED', 'cc', 'ALL', 'PRIMARY', NULL, NULL, NULL, '6', 'Using where; Using join buffer' '4', 'UNION', 'g', 'ref', 'PRIMARY,Index_granularite', 'Index_granularite', '23', '', '24', 'Using where; Using temporary; Using filesort' '4', 'UNION', 'a', 'ref', 'fk_Activite_Agent_centre_contact,fk_activite_agent_TH_heure', 'fk_activite_agent_TH_heure', '5', 'reporting_acd.g.Id_Granularite_de_periode', '2979', 'Using where' '4', 'UNION', 'cc', 'ALL', 'PRIMARY', NULL, NULL, NULL, '6', 'Using where; Using join buffer' NULL, 'UNION RESULT', '<union3,4>', 'ALL', NULL, NULL, NULL, NULL, NULL, '' '2', 'DERIVED', 'g', 'range', 'PRIMARY,Index_granularite,Index_Valeur_min', 'Index_granularite', '23', NULL, '389', 'Using where; Using temporary; Using filesort' '2', 'DERIVED', 'a', 'ALL', 'fk_agr_synthese_activite_media_fa_agent_centre_contact,fk_agr_synthese_activite_media_fa_agent_direction_contact,fk_agr_synthese_activite_media_fa_agent_file_attente,fk_agr_synthese_activite_media_fa_agent_media,fk_agr_synthese_activite_media_fa_agent_th_heure', NULL, NULL, NULL, '20903', 'Using where; Using join buffer' '2', 'DERIVED', 'cc', 'eq_ref', 'PRIMARY', 'PRIMARY', '4', 'reporting_acd.a.Id_Centre_Contact', '1', '' '2', 'DERIVED', 'f', 'eq_ref', 'PRIMARY', 'PRIMARY', '4', 'reporting_acd.a.Id_File_Attente', '1', '' '2', 'DERIVED', 'dc', 'eq_ref', 'PRIMARY', 'PRIMARY', '4', 'reporting_acd.a.Id_Direction_Contact', '1', 'Using where' '2', 'DERIVED', 'm', 'eq_ref', 'PRIMARY', 'PRIMARY', '4', 'reporting_acd.a.Id_Media', '1', 'Using where' I don't understand it very well, but I think the problem is that it does full scans. I changed all the sub-queries to views (CREATE VIEW AS SELECT sub-query), and the result is the same. Thanks for any advice.
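For what it's worth, the EXPLAIN already hints at the cause: the 'ALL' rows scanning 83433 and 20903 rows line up with the join condition g.valeur_min = date_format(a.id_date,'%d/%m'). Wrapping a column in a function makes that predicate non-sargable, so no index on id_date can be used, and moving the sub-queries into views can't change that — the view still executes the same query. Joining on the raw date (or storing a pre-formatted, indexable date column) is the usual way to let the optimizer use an index here.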

    Read the article

  • Hibernate - strange order of native SQL parameters

    - by Xorty
Hello, I am trying to use MySQL's native MD5 crypto func, so I defined a custom insert in my mapping file. <hibernate-mapping package="tutorial"> <class name="com.xorty.mailclient.client.domain.User" table="user"> <id name="login" type="string" column="login"></id> <property name="password"> <column name="password" /> </property> <sql-insert>INSERT INTO user (login,password) VALUES ( ?, MD5(?) )</sql-insert> </class> </hibernate-mapping> Then I create a User (a pretty simple POJO with just 2 Strings - login and password) and try to persist it. session.beginTransaction(); // we have no such user in here yet User junitUser = (User) session.load(User.class, "junit_user"); assert (null == junitUser); // insert new user junitUser = new User(); junitUser.setLogin("junit_user"); junitUser.setPassword("junitpass"); session.save(junitUser); session.getTransaction().commit(); What actually happens? The user is created, but with the parameter order reversed: it has login "junitpass", and "junit_user" is MD5-encrypted and stored as the password. What did I do wrong? Thanks EDIT: ADDING POJO class package com.xorty.mailclient.client.domain; import java.io.Serializable; /** * POJO class representing user. * @author MisoV * @version 0.1 */ public class User implements Serializable { /** * Generated UID */ private static final long serialVersionUID = -969127095912324468L; private String login; private String password; /** * @return login */ public String getLogin() { return login; } /** * @return password */ public String getPassword() { return password; } /** * @param login the login to set */ public void setLogin(String login) { this.login = login; } /** * @param password the password to set */ public void setPassword(String password) { this.password = password; } /** * @see java.lang.Object#toString() * @return login */ @Override public String toString() { return login; } /** * Creates new User. * @param login User's login. * @param password User's password. */ public User(String login, String password) { setLogin(login); setPassword(password); } /** * Default constructor */ public User() { } /** * @return hashCode * @see java.lang.Object#hashCode() */ @Override public int hashCode() { final int prime = 31; int result = 1; result = prime * result + ((null == login) ? 0 : login.hashCode()); result = prime * result + ((null == password) ? 0 : password.hashCode()); return result; } /** * @param obj Compared object * @return True, if objects are same. Else false. * @see java.lang.Object#equals(java.lang.Object) */ @Override public boolean equals(Object obj) { if (this == obj) { return true; } if (obj == null) { return false; } if (!(obj instanceof User)) { return false; } User other = (User) obj; if (login == null) { if (other.login != null) { return false; } } else if (!login.equals(other.login)) { return false; } if (password == null) { if (other.password != null) { return false; } } else if (!password.equals(other.password)) { return false; } return true; } }
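For what it's worth, this matches Hibernate's documented rule for custom SQL: positional parameters are bound in the order Hibernate expects them — mapped properties first (here, password), with the identifier (login) last — not in the order of your column list. If that's what's happening here, swapping the placeholders should fix it: <sql-insert>INSERT INTO user (password, login) VALUES ( MD5(?), ? )</sql-insert>. Enabling debug logging on org.hibernate.persister.entity shows the exact parameter order Hibernate expects.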

    Read the article

  • Unable to write data to the transport connection: An existing connection was forcibly closed by the remote host

    - by xnoor
I have an update server that sends client updates through TCP port 12000. Sending a single file is successful only the first time; after that I get an error message on the server, "Unable to write data to the transport connection: An existing connection was forcibly closed by the remote host". If I restart the update service on the server, it works again only once. I have a normal multithreaded Windows service. SERVER CODE namespace WSTSAU { public partial class ApplicationUpdater : ServiceBase { private Logger logger = LogManager.GetCurrentClassLogger(); private int _listeningPort; private int _ApplicationReceivingPort; private string _setupFilename; private string _startupPath; public ApplicationUpdater() { InitializeComponent(); } protected override void OnStart(string[] args) { init(); logger.Info("after init"); Thread ListnerThread = new Thread(new ThreadStart(StartListener)); ListnerThread.IsBackground = true; ListnerThread.Start(); logger.Info("after thread start"); } private void init() { _listeningPort = Convert.ToInt16(ConfigurationSettings.AppSettings["ListeningPort"]); _setupFilename = ConfigurationSettings.AppSettings["SetupFilename"]; _startupPath = System.IO.Path.GetDirectoryName(System.Reflection.Assembly.GetExecutingAssembly().GetName().CodeBase).Substring(6); } private void StartListener() { try { logger.Info("Listening Started"); ThreadPool.SetMinThreads(50, 50); TcpListener listener = new TcpListener(_listeningPort); listener.Start(); while (true) { TcpClient c = listener.AcceptTcpClient(); ThreadPool.QueueUserWorkItem(ProcessReceivedMessage, c); } } catch (Exception ex) { logger.Error(ex.Message); } } void ProcessReceivedMessage(object c) { try { TcpClient tcpClient = c as TcpClient; NetworkStream Networkstream = tcpClient.GetStream(); byte[] _data = new byte[1024]; int _bytesRead = 0; _bytesRead = Networkstream.Read(_data, 0, _data.Length); MessageContainer messageContainer = new MessageContainer(); messageContainer = SerializationManager.XmlFormatterByteArrayToObject(_data, messageContainer) as MessageContainer; switch (messageContainer.messageType) { case MessageType.ApplicationUpdateMessage: ApplicationUpdateMessage appUpdateMessage = new ApplicationUpdateMessage(); appUpdateMessage = SerializationManager.XmlFormatterByteArrayToObject(messageContainer.messageContnet, appUpdateMessage) as ApplicationUpdateMessage; Func<ApplicationUpdateMessage, bool> HandleUpdateRequestMethod = HandleUpdateRequest; IAsyncResult cookie = HandleUpdateRequestMethod.BeginInvoke(appUpdateMessage, null, null); bool WorkerThread = HandleUpdateRequestMethod.EndInvoke(cookie); break; } } catch (Exception ex) { logger.Error(ex.Message); } } private bool HandleUpdateRequest(ApplicationUpdateMessage appUpdateMessage) { try { TcpClient tcpClient = new TcpClient(); NetworkStream networkStream; FileStream fileStream = null; tcpClient.Connect(appUpdateMessage.receiverIpAddress, appUpdateMessage.receiverPortNumber); networkStream = tcpClient.GetStream(); fileStream = new FileStream(_startupPath + "\\" + _setupFilename, FileMode.Open, FileAccess.Read); FileInfo fi = new FileInfo(_startupPath + "\\" + _setupFilename); BinaryReader binFile = new BinaryReader(fileStream); FileUpdateMessage fileUpdateMessage = new FileUpdateMessage(); fileUpdateMessage.fileName = fi.Name; fileUpdateMessage.fileSize = fi.Length; MessageContainer messageContainer = new MessageContainer(); messageContainer.messageType = MessageType.FileProperties; messageContainer.messageContnet = 
SerializationManager.XmlFormatterObjectToByteArray(fileUpdateMessage); byte[] messageByte = SerializationManager.XmlFormatterObjectToByteArray(messageContainer); networkStream.Write(messageByte, 0, messageByte.Length); int bytesSize = 0; byte[] downBuffer = new byte[2048]; while ((bytesSize = fileStream.Read(downBuffer, 0, downBuffer.Length)) > 0) { networkStream.Write(downBuffer, 0, bytesSize); } fileStream.Close(); tcpClient.Close(); networkStream.Close(); return true; } catch (Exception ex) { logger.Info(ex.Message); return false; } finally { } } protected override void OnStop() { } } I should note that my Windows service (server) is multithreaded. I hope someone can help with this.
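One hedged observation rather than a definitive fix: in HandleUpdateRequest the TcpClient is closed before its NetworkStream, and none of the Close calls run at all if Write throws mid-transfer, which can leave half-open sockets behind for the next attempt. Wrapping the resources in using blocks disposes them innermost-first even on failure — a simplified sketch reusing the original names (the FileProperties header send is omitted for brevity):

    private bool HandleUpdateRequest(ApplicationUpdateMessage appUpdateMessage)
    {
        try
        {
            using (var tcpClient = new TcpClient(appUpdateMessage.receiverIpAddress, appUpdateMessage.receiverPortNumber))
            using (var networkStream = tcpClient.GetStream())
            using (var fileStream = new FileStream(_startupPath + "\\" + _setupFilename, FileMode.Open, FileAccess.Read))
            {
                var downBuffer = new byte[2048];
                int bytesSize;
                while ((bytesSize = fileStream.Read(downBuffer, 0, downBuffer.Length)) > 0)
                {
                    networkStream.Write(downBuffer, 0, bytesSize);
                }
            } // the stream closes before the client, and both close even if Write throws
            return true;
        }
        catch (Exception ex)
        {
            logger.Info(ex.Message);
            return false;
        }
    }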

    Read the article

< Previous Page | 125 126 127 128 129 130 131 132 133 134 135 136  | Next Page >