Search Results

Search found 5718 results on 229 pages for 'apply'.


  • Inside the DLR – Invoking methods

    - by Simon Cooper
    So, we’ve looked at how a dynamic call is represented in a compiled assembly, and how the dynamic lookup is performed at runtime. The last piece of the puzzle is how the resolved method gets invoked, and that is the subject of this post.

    Invoking methods

    As discussed in my previous posts, doing a full lookup and bind at runtime each and every single time the callsite gets invoked would be far too slow to be usable. The results obtained from the callsite binder must be cached, along with a series of conditions to determine whether the cached result can be reused. So, firstly, how are the conditions represented? These conditions can be anything; they are determined entirely by the semantics of the language the binder is representing. The binder has to be able to return arbitrary code that is then executed to determine whether the conditions apply or not. Fortunately, .NET 4 has a neat way of representing arbitrary code that can be easily combined with other code – expression trees. All the callsite binder has to return is an expression (called a ‘restriction’) that evaluates to a boolean, returning true when the restriction passes (indicating the corresponding method invocation can be used) and false when it doesn’t. If the bind result is also represented in an expression tree, these can be combined easily like so:

        if ([restriction is true]) { [invoke cached method] }

    Take my example from my previous post:

        public class ClassA {
            public static void TestDynamic() {
                CallDynamic(new ClassA(), 10);
                CallDynamic(new ClassA(), "foo");
            }
            public static void CallDynamic(dynamic d, object o) {
                d.Method(o);
            }
            public void Method(int i) {}
            public void Method(string s) {}
        }

    When the Method(int) method is first bound, along with an expression representing the result of the bind lookup, the C# binder will return the restrictions under which that bind can be reused. In this case, it can be reused if the types of the parameters are the same:

        if (thisArg.GetType() == typeof(ClassA) && arg1.GetType() == typeof(int)) {
            thisClassA.Method(i);
        }

    Caching callsite results

    So, now, it’s up to the callsite to link these expressions returned from the binder together in such a way that it can determine which one from the many it has cached it should use. This caching logic is all located in the System.Dynamic.UpdateDelegates class. It’ll help if you’ve got this type open in a decompiler to have a look yourself. For each callsite, there are 3 layers of caching involved:

    1. The last method invoked on the callsite.
    2. All methods that have ever been invoked on the callsite.
    3. All methods that have ever been invoked on any callsite of the same type.

    We’ll cover each of these layers in order.

    Level 1 cache: the last method called on the callsite

    When a CallSite<T> object is first instantiated, the Target delegate field (containing the delegate that is called when the callsite is invoked) is set to one of the UpdateAndExecute generic methods in UpdateDelegates, corresponding to the number of parameters to the callsite, and the existence of any return value. These methods contain most of the caching, invoke, and binding logic for the callsite. The first time this method is invoked, the UpdateAndExecute method finds there aren’t any entries in the caches to reuse, and invokes the binder to resolve a new method.
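    To make the ‘restriction plus cached result’ pattern concrete, here is a minimal standalone sketch. This is my own illustration, not the DLR’s internal code; the class and variable names are invented. It builds the guarded invocation shown above as an expression tree, compiles it, and exercises both the passing and failing cases:

        using System;
        using System.Linq.Expressions;

        public class ClassA
        {
            public void Method(int i) { Console.WriteLine("Method(int): " + i); }
            public void Method(string s) { Console.WriteLine("Method(string): " + s); }
        }

        static class RestrictionSketch
        {
            static void Main()
            {
                // The two arguments, typed as object as they are at a dynamic callsite.
                ParameterExpression thisArg = Expression.Parameter(typeof(object), "thisArg");
                ParameterExpression arg1 = Expression.Parameter(typeof(object), "arg1");

                // Restriction: thisArg.GetType() == typeof(ClassA) && arg1.GetType() == typeof(int)
                Expression restriction = Expression.AndAlso(
                    Expression.Equal(
                        Expression.Call(thisArg, typeof(object).GetMethod("GetType")),
                        Expression.Constant(typeof(ClassA), typeof(Type))),
                    Expression.Equal(
                        Expression.Call(arg1, typeof(object).GetMethod("GetType")),
                        Expression.Constant(typeof(int), typeof(Type))));

                // Cached bind result: ((ClassA)thisArg).Method((int)arg1)
                Expression invocation = Expression.Call(
                    Expression.Convert(thisArg, typeof(ClassA)),
                    typeof(ClassA).GetMethod("Method", new[] { typeof(int) }),
                    Expression.Convert(arg1, typeof(int)));

                // Stitch the two together: if ([restriction]) { [invoke cached method] }
                var guarded = Expression.Lambda<Action<object, object>>(
                    Expression.IfThen(restriction, invocation), thisArg, arg1);
                Action<object, object> compiled = guarded.Compile();

                compiled(new ClassA(), 10);    // restriction passes: Method(int) runs
                compiled(new ClassA(), "foo"); // restriction fails: falls through silently
            }
        }

    In the real callsite the binder produces equivalent trees, and a compiled delegate of this shape is what ends up in the Target field described next.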
    Once the callsite has the result from the binder, along with any restrictions, it stitches some extra expressions in, and replaces the Target field in the callsite with a compiled expression tree similar to this (in this example I’m assuming there’s no return value):

        if ([restriction is true]) {
            [invoke cached method]
            return;
        }
        if (callSite._match) {
            _match = false;
            return;
        }
        else {
            UpdateAndExecute(callSite, arg0, arg1, ...);
        }

    Woah. What’s going on here? Well, this resulting expression tree is actually the first level of caching. The Target field in the callsite, which contains the delegate to call when the callsite is invoked, is set to the above code compiled from the expression tree into IL, and then into native code by the JIT. This code checks whether the restrictions of the last method that was invoked on the callsite (the ‘primary’ method) match, and if so, executes that method straight away. This means that, the next time the callsite is invoked, the first code that executes is the restriction check, executing as native code! This makes this restriction check on the primary cached delegate very fast.

    But what if the restrictions don’t match? In that case, the second part of the stitched expression tree is executed. What this section should be doing is calling back into the UpdateAndExecute method again to resolve a new method. But it’s slightly more complicated than that. To understand why, we need to understand the second and third level caches.

    Level 2 cache: all methods that have ever been invoked on the callsite

    When a binder has returned the result of a lookup, as well as updating the Target field with a compiled expression tree, stitched together as above, the callsite puts the same compiled expression tree in an internal list of delegates, called the rules list. This list acts as the level 2 cache.

    Why use the same delegate? Stitching together expression trees is an expensive operation. You don’t want to do it every time the callsite is invoked. Ideally, you would create one expression tree from the binder’s result, compile it, and then use the resulting delegate everywhere in the callsite. But, if the same delegate is used to invoke the callsite in the first place, and in the caches, that means each delegate needs two modes of operation. An ‘invoke’ mode, for when the delegate is set as the value of the Target field, and a ‘match’ mode, used when UpdateAndExecute is searching for a method in the callsite’s cache. Only in the invoke mode would the delegate call back into UpdateAndExecute. In match mode, it would simply return without doing anything. This mode is controlled by the _match field in CallSite<T>.

    The first time the callsite is invoked, _match is false, and so the Target delegate is called in invoke mode. Then, if the initial restriction check fails, the Target delegate calls back into UpdateAndExecute. This method sets _match to true, then calls all the cached delegates in the rules list in match mode to try and find one that passes its restrictions, and invokes it. However, there needs to be some way for each cached delegate to inform UpdateAndExecute whether it passed its restrictions or not. To do this, as you can see above, it simply re-uses _match, and sets it to false if it did not pass the restrictions.
    This allows the code within each UpdateAndExecute method to check for cache matches like so:

        foreach (T cachedDelegate in Rules) {
            callSite._match = true;
            cachedDelegate(); // sets _match to false if restrictions do not pass
            if (callSite._match) {
                // passed restrictions, and the cached method was invoked
                // set this delegate as the primary target to invoke next time
                callSite.Target = cachedDelegate;
                return;
            }
            // no luck, try the next one...
        }

    Level 3 cache: all methods that have ever been invoked on any callsite with the same signature

    The reason for this cache should be clear – if a method has been invoked through a callsite in one place, then it is likely to be invoked on other callsites in the codebase with the same signature. Rather than living in the callsite, the ‘global’ cache for callsite delegates lives in the CallSiteBinder class, in the Cache field. This is a dictionary, typed on the callsite delegate signature, providing a RuleCache<T> instance for each delegate signature. This is accessed in the same way as the level 2 callsite cache, by the UpdateAndExecute methods. When a method is matched in the global cache, it is copied into the callsite and Target cache before being executed.

    Putting it all together

    So, how does this all fit together? (I’ve omitted some implementation & performance details.) That, in essence, is how the DLR performs its dynamic calls nearly as fast as statically compiled IL code. Extensive use of expression trees, compiled to IL and then into native code. Multiple levels of caching, the first of which executes immediately when the dynamic callsite is invoked. And a clever re-use of compiled expression trees that can be used in completely different contexts without being recompiled. All in all, a very fast and very clever reflection caching mechanism.
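    A rough way to observe these caches from the outside is a micro-benchmark along these lines (my own sketch; absolute timings will vary by machine and runtime):

        using System;
        using System.Diagnostics;

        class CallSiteCacheDemo
        {
            public void Method(int i) { }
            public void Method(string s) { }

            static void Main()
            {
                dynamic d = new CallSiteCacheDemo();
                Stopwatch sw = Stopwatch.StartNew();

                d.Method(1); // first call: full runtime bind, caches populated
                Console.WriteLine("first call: " + sw.Elapsed);

                sw.Restart();
                for (int i = 0; i < 1000000; i++)
                {
                    d.Method(i); // level 1 cache: compiled restriction check, then direct call
                }
                Console.WriteLine("1M cached calls: " + sw.Elapsed);

                sw.Restart();
                d.Method("foo"); // restriction fails: rebind, a second rule is cached
                Console.WriteLine("rebind: " + sw.Elapsed);
            }
        }

    The first call pays for the full lookup and bind; the loop then runs against the level 1 cache, and the final call with a string fails the restriction check and triggers a rebind.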


  • XNA shield effect with a primitive sphere problem

    - by Sparky41
    I'm having an issue with a shield effect I'm trying to develop. I want to do a shield effect that surrounds part of a model like this: http://i.imgur.com/jPvrf.png I currently got this: http://i.imgur.com/Jdin7.png (The red lines are a simple texture, a black background with a red cross in it, for testing purposes: http://i.imgur.com/ODtzk.png where the smaller cross in the middle shows the contact point.) This sphere is drawn via a primitive (DrawIndexedPrimitives). This is how I calculate the pieces of the sphere, using a class I've called Sphere (this class is based on the code here: http://xbox.create.msdn.com/en-US/education/catalog/sample/primitives_3d):

        public class Sphere
        {
            // During the process of constructing a primitive model, vertex
            // and index data is stored on the CPU in these managed lists.
            List<VertexPositionNormal> vertices = new List<VertexPositionNormal>();
            List<ushort> indices = new List<ushort>();

            // Once all the geometry has been specified, the InitializePrimitive
            // method copies the vertex and index data into these buffers, which
            // store it on the GPU ready for efficient rendering.
            VertexBuffer vertexBuffer;
            IndexBuffer indexBuffer;
            BasicEffect basicEffect;

            public Vector3 position = Vector3.Zero;
            public Matrix RotationMatrix = Matrix.Identity;
            public Texture2D texture;

            /// <summary>
            /// Constructs a new sphere primitive,
            /// with the specified size and tessellation level.
            /// </summary>
            public Sphere(float diameter, int tessellation, Texture2D text, float up, float down, float portstar, float frontback)
            {
                texture = text;
                if (tessellation < 3)
                    throw new ArgumentOutOfRangeException("tessellation");

                int verticalSegments = tessellation;
                int horizontalSegments = tessellation * 2;
                float radius = diameter / 2;

                // Start with a single vertex at the bottom of the sphere.
                AddVertex(Vector3.Down * ((radius / up) + 1), Vector3.Down, Vector2.Zero); // bottom position5

                // Create rings of vertices at progressively higher latitudes.
                for (int i = 0; i < verticalSegments - 1; i++)
                {
                    float latitude = ((i + 1) * MathHelper.Pi / verticalSegments) - MathHelper.PiOver2;
                    float dy = (float)Math.Sin(latitude / up); // (up)5
                    float dxz = (float)Math.Cos(latitude);

                    // Create a single ring of vertices at this latitude.
                    for (int j = 0; j < horizontalSegments; j++)
                    {
                        float longitude = j * MathHelper.TwoPi / horizontalSegments;
                        float dx = (float)(Math.Cos(longitude) * dxz) / portstar; // port and starboard (right)2
                        float dz = (float)(Math.Sin(longitude) * dxz) * frontback; // front and back1.4
                        Vector3 normal = new Vector3(dx, dy, dz);
                        AddVertex(normal * radius, normal, new Vector2(j, i));
                    }
                }

                // Finish with a single vertex at the top of the sphere.
                AddVertex(Vector3.Up * ((radius / down) + 1), Vector3.Up, Vector2.One); // top position5

                // Create a fan connecting the bottom vertex to the bottom latitude ring.
                for (int i = 0; i < horizontalSegments; i++)
                {
                    AddIndex(0);
                    AddIndex(1 + (i + 1) % horizontalSegments);
                    AddIndex(1 + i);
                }

                // Fill the sphere body with triangles joining each pair of latitude rings.
                for (int i = 0; i < verticalSegments - 2; i++)
                {
                    for (int j = 0; j < horizontalSegments; j++)
                    {
                        int nextI = i + 1;
                        int nextJ = (j + 1) % horizontalSegments;

                        AddIndex(1 + i * horizontalSegments + j);
                        AddIndex(1 + i * horizontalSegments + nextJ);
                        AddIndex(1 + nextI * horizontalSegments + j);

                        AddIndex(1 + i * horizontalSegments + nextJ);
                        AddIndex(1 + nextI * horizontalSegments + nextJ);
                        AddIndex(1 + nextI * horizontalSegments + j);
                    }
                }

                // Create a fan connecting the top vertex to the top latitude ring.
                for (int i = 0; i < horizontalSegments; i++)
                {
                    AddIndex(CurrentVertex - 1);
                    AddIndex(CurrentVertex - 2 - (i + 1) % horizontalSegments);
                    AddIndex(CurrentVertex - 2 - i);
                }

                //InitializePrimitive(graphicsDevice);
            }

            /// <summary>
            /// Adds a new vertex to the primitive model. This should only be called
            /// during the initialization process, before InitializePrimitive.
            /// </summary>
            protected void AddVertex(Vector3 position, Vector3 normal, Vector2 texturecoordinate)
            {
                vertices.Add(new VertexPositionNormal(position, normal, texturecoordinate));
            }

            /// <summary>
            /// Adds a new index to the primitive model. This should only be called
            /// during the initialization process, before InitializePrimitive.
            /// </summary>
            protected void AddIndex(int index)
            {
                if (index > ushort.MaxValue)
                    throw new ArgumentOutOfRangeException("index");
                indices.Add((ushort)index);
            }

            /// <summary>
            /// Queries the index of the current vertex. This starts at
            /// zero, and increments every time AddVertex is called.
            /// </summary>
            protected int CurrentVertex
            {
                get { return vertices.Count; }
            }

            public void InitializePrimitive(GraphicsDevice graphicsDevice)
            {
                // Create a vertex declaration, describing the format of our vertex data.
                // Create a vertex buffer, and copy our vertex data into it.
                vertexBuffer = new VertexBuffer(graphicsDevice, typeof(VertexPositionNormal), vertices.Count, BufferUsage.None);
                vertexBuffer.SetData(vertices.ToArray());

                // Create an index buffer, and copy our index data into it.
                indexBuffer = new IndexBuffer(graphicsDevice, typeof(ushort), indices.Count, BufferUsage.None);
                indexBuffer.SetData(indices.ToArray());

                // Create a BasicEffect, which will be used to render the primitive.
                basicEffect = new BasicEffect(graphicsDevice);
                //basicEffect.EnableDefaultLighting();
            }

            /// <summary>
            /// Draws the primitive model, using the specified effect. Unlike the other
            /// Draw overload where you just specify the world/view/projection matrices
            /// and color, this method does not set any renderstates, so you must make
            /// sure all states are set to sensible values before you call it.
            /// </summary>
            public void Draw(Effect effect)
            {
                GraphicsDevice graphicsDevice = effect.GraphicsDevice;

                // Set our vertex declaration, vertex buffer, and index buffer.
                graphicsDevice.SetVertexBuffer(vertexBuffer);
                graphicsDevice.Indices = indexBuffer;
                graphicsDevice.BlendState = BlendState.Additive;

                foreach (EffectPass effectPass in effect.CurrentTechnique.Passes)
                {
                    effectPass.Apply();
                    int primitiveCount = indices.Count / 3;
                    graphicsDevice.DrawIndexedPrimitives(PrimitiveType.TriangleList, 0, 0, vertices.Count, 0, primitiveCount);
                }
                graphicsDevice.BlendState = BlendState.Opaque;
            }

            /// <summary>
            /// Draws the primitive model, using a BasicEffect shader with default
            /// lighting. Unlike the other Draw overload where you specify a custom
            /// effect, this method sets important renderstates to sensible values
            /// for 3D model rendering, so you do not need to set these states before
            /// you call it.
            /// </summary>
            public void Draw(Camera camera, Color color)
            {
                // Set BasicEffect parameters.
                basicEffect.World = GetWorld();
                basicEffect.View = camera.view;
                basicEffect.Projection = camera.projection;
                basicEffect.DiffuseColor = color.ToVector3();
                basicEffect.TextureEnabled = true;
                basicEffect.Texture = texture;

                GraphicsDevice device = basicEffect.GraphicsDevice;
                device.DepthStencilState = DepthStencilState.Default;

                if (color.A < 255)
                {
                    // Set renderstates for alpha blended rendering.
                    device.BlendState = BlendState.AlphaBlend;
                }
                else
                {
                    // Set renderstates for opaque rendering.
                    device.BlendState = BlendState.Opaque;
                }

                // Draw the model, using BasicEffect.
                Draw(basicEffect);
            }

            public virtual Matrix GetWorld()
            {
                return /*world */ Matrix.CreateScale(1f) * RotationMatrix * Matrix.CreateTranslation(position);
            }
        }

        public struct VertexPositionNormal : IVertexType
        {
            public Vector3 Position;
            public Vector3 Normal;
            public Vector2 TextureCoordinate;

            /// <summary>
            /// Constructor.
            /// </summary>
            public VertexPositionNormal(Vector3 position, Vector3 normal, Vector2 textCoor)
            {
                Position = position;
                Normal = normal;
                TextureCoordinate = textCoor;
            }

            /// <summary>
            /// A VertexDeclaration object, which contains information about the vertex
            /// elements contained within this struct.
            /// </summary>
            public static readonly VertexDeclaration VertexDeclaration = new VertexDeclaration
            (
                new VertexElement(0, VertexElementFormat.Vector3, VertexElementUsage.Position, 0),
                new VertexElement(12, VertexElementFormat.Vector3, VertexElementUsage.Normal, 0),
                new VertexElement(24, VertexElementFormat.Vector2, VertexElementUsage.TextureCoordinate, 0)
            );

            VertexDeclaration IVertexType.VertexDeclaration
            {
                get { return VertexPositionNormal.VertexDeclaration; }
            }
        }

    A simple call to the class initialises it, and the Draw method is called in the master draw method in the GameComponent. My current thoughts on this are:

    • The direction of the weapon hitting the ship is used to get the middle position for the texture.
    • Wrap a texture around the drawn sphere based on this point of contact.

    Problem is, I'm not sure how to do this. Can anyone help? Or if you have a better idea, please tell me; I'm open to opinions. :-) Thanks.
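    One possible way to realise the approach above (a rough, untested sketch of my own, and it assumes the shield is roughly spherical): keep the cross texture fixed at one pole of the sphere and rotate the whole primitive so that pole faces the point of impact, rather than recomputing texture coordinates per hit. The helper below is hypothetical and builds the RotationMatrix from a hit direction, assuming the texture's centre sits at the sphere's 'up' pole:

        using Microsoft.Xna.Framework;

        public static class ShieldOrientation
        {
            // Orient the shield so its textured pole faces the impact.
            // 'hitDirection' is the vector from the sphere's centre to the
            // contact point; it does not need to be normalized on entry.
            public static Matrix OrientToImpact(Vector3 hitDirection)
            {
                Vector3 from = Vector3.Up; // where the cross is painted
                Vector3 to = Vector3.Normalize(hitDirection);

                float dot = Vector3.Dot(from, to);
                if (dot > 0.9999f)
                    return Matrix.Identity; // already aligned
                if (dot < -0.9999f)
                    return Matrix.CreateRotationX(MathHelper.Pi); // opposite pole

                // Rotate about the axis perpendicular to both directions.
                Vector3 axis = Vector3.Normalize(Vector3.Cross(from, to));
                float angle = (float)System.Math.Acos(MathHelper.Clamp(dot, -1f, 1f));
                return Matrix.CreateFromAxisAngle(axis, angle);
            }
        }

    On impact you would then do something like shield.RotationMatrix = ShieldOrientation.OrientToImpact(contactPoint - shield.position); before drawing. Note that because your sphere is squashed by the up/down/portstar/frontback factors, rotating the mesh also rotates the deformation, so for a strongly non-spherical shield you would instead want to adjust the texture coordinates (or pass the hit direction to a custom shader) rather than rotate the geometry.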


  • CodePlex Daily Summary for Thursday, July 31, 2014

    CodePlex Daily Summary for Thursday, July 31, 2014

    Popular Releases

    DbEntry.Net (Leafing Framework): DbEntry.Net 4.2: DbEntry.Net is a lightweight Object Relational Mapping (ORM) database access component for .Net 4.0+. It has a clear and easy programming interface for ORM and SQL directly, and supports Access, Sql Server, MySql, SQLite, Firebird, PostgreSQL and Oracle. It also provides a Ruby On Rails style MVC framework, an Asp.Net DataSource and a simple IoC. DbEntry.Net.v4.2.Setup.zip includes the setup package. DbEntry.Net.v4.2.Src.zip includes source files and unit tests. DbEntry.Net.v4.2.Samples.zip ...
    Recaptcha for .NET: Recaptcha for .NET v1.5: What's New: minor bug fixes; support for legacy .NET Framework 4.0 and ASP.NET MVC 4; support for .NET Framework 4.5.1.
    Azure Storage Explorer: Azure Storage Explorer 6 Preview 1: Welcome to Azure Storage Explorer 6 Preview 1. This is the first release of the latest Azure Storage Explorer, code-named Phoenix. What's New? Here are some important things to know about version 6: Open source: now being run as a full open source project, full source code on CodePlex, collaboration encouraged! Updated code base: brand-new code base (WPF/C#/.NET 4.5), Visual Studio 2013 solution (previously VS2010), uses the Task Parallel Library (TPL) for asynchronous background operat...
    Wsus Package Publisher: release v1.3.1407.29: Updated WPP to recognize the very latest console version. Some files were missing from the latest release of WPP, which led to a crash when trying to make a custom update. Added a workaround to avoid clipboard modification when double-clicking on a label when creating a custom update. Added the ability to publish detectoids. (This feature is still in a BETA phase. Packages relying on these detectoids to determine which computers need to be updated may apply to all computers.)
    VG-Ripper & PG-Ripper: PG-Ripper 1.4.32: Changes: NEW: Added support for 'ImgMega.com' links. NEW: Added support for 'ImgCandy.net' links. NEW: Added support for 'ImgPit.com' links. NEW: Added support for 'Img.yt' links. FIXED: 'Radikal.ru' links. FIXED: 'ImageTeam.org' links. FIXED: 'ImgSee.com' links. FIXED: 'Img.yt' links.
    Dynamics AX Development tools: Version 1.0.0: Alpha release of the package. I am just combining great community tools into a package. I have not gone through testing every function of the tools. However, feel free to point out if you found any bugs!
    Asp.Net MVC-4, Entity Framework and JQGrid Demo with Todo List WebApplication: Asp.Net MVC-4, Entity Framework and JQGrid Demo: a demo with a simple Todo List web application. Overview: TodoList is a simple web application to create, store and modify Todo tasks to be maintained by the users, which comprises the following fields for the user (Task Name, Task Description, Severity, Target Date, Task Status). The TodoList web application is created using the MVC-4 architecture, code-first Entity Framework (ORM) and JqGrid for displaying the data.
    Waterfox: Waterfox 31.0 Portable: New features in Waterfox 31.0: added support for Unicode 7.0; experimental support for WebCL. New features in Firefox 31.0: added the search field to the new tab page; support of the Prefer:Safe http header for parental control; mozilla::pkix as default certificate verifier; block malware from downloaded files; audio/video .ogg and .pdf files handled by Firefox if no application specified. Changed: removal of the CAPS infrastructure for specifying site-sp...
    SuperSocket, an extensible socket server framework: SuperSocket 1.6.3: The changes below are included in this release: fixed an exception when collecting a server's status after it has been stopped; fixed a bug that could cause an exception when sending data after the connection had already dropped; fixed the log4net missing issue for a QuickStart project; fixed a warning in a QuickStart project.
    Ynote Classic: Ynote Classic 2.8.5 Beta: Several changes: multiple carets and multiple selections; improved startup time; improved syntax highlighting; search improvements; shell command; improved stability.
    Head First C#: The Quest: The Quest: stable release of "The Quest" game (Lab 2 from Head First C#, 3rd edition).
    TCP/IP Adapter BizTalk 2013: TCPIP Adapter for BizTalk 2013 - Version 1.0: This is an updated version of the BizTalk 2010 TCP/IP adapter. References have been updated and a new setup (ISLE) has been used. Follow the installation instructions below to install the adapter. Still want to automate step 4, but haven't got to it yet. Installation instructions: Make sure to close the BizTalk Administration Console, just to make sure everything gets refreshed after installation. Download and install the setup. The setup makes sure the adapter will be regis...
    gicon: gicon HTTP service v1.0.0: core: 1.0.0.26592; service: 1.0.0.27190. Github.io: http://rynnwang.github.io/gicon/ Codeplex: http://gicon.codeplex.com .NET required: 2.0 or above. Installation notes: 1. You need to run the installutil command to install ifunction.GuidIconHttpService.exe as a Windows Service. 2. You need to update the ifunction.GuidIconHttpService.exe.config file in the folder to set correct HTTP URI prefixes. In the release zip, by default, it is set as http://localhost:20000...
    TEBookConverter: 1.2: Fixed: could not start conversion in some cases. Fixed: progress shown during conversion was truncated. Fixed: stopping conversion didn't reset the program title.
    newmail_OutlookCOM: newmail_OutlookCOM: First stable release.
    SharePoint - Data View Web Part Documenter: Data View Web Part Documenter 1.0: .NET Framework 4 Client Profile required for this to run on your PC. Please see the documentation for running the tool.
    LOL zmena skinu klienta: LOL jednoduchý menic skinu klienta: A simple little program that saves time when changing the client; no need to click through directories. :)
    SSRS Plus+: Latest - SSRS Plus Application and Source Code: This release contains the Beta version with significant changes in comparison to the old release, which I am keeping for legacy reasons. At some point a changeover will be made once it has been confirmed that my branch is stable and Mr Bagwan is in agreement also.
    CS-Script Source: Release v3.8.4: CSScript.Evaluator is migrated to Mono v3.3.0. Added aggregating //css_ignore_ns from the imported scripts. cs-script.7z - CS-Script Suite (binaries, documentation, samples). cs-script.ExtensionPack.7z - CS-Script Extension Pack (additional binaries and samples). cs-scriptDocs.7z - CS-Script Documentation.
    DotSpatial: DotSpatial 1.7: DotSpatial.Full includes all DotSpatial libraries, extensions and the DemoMap application; DotSpatial.Core includes only the DotSpatial core libraries. The entire list of changes is in the issue tracker. Main changes: improved common stability, optimized memory and speed when loading and rendering shapefiles, fixed some memory leaks in rasters and shape layers. Simplified plugin infrastructure. Now there are predefined implementations for all required components (IStatusControl, IDockManager, IHead...

    New Projects

    BrilliantORM
    Crafter
    Dotnetnuke xblog // Blog, Article, Events, Documents: xBlog is a blog program based on DNN; it has powerful functions and a unique design style, in addition to the article management function which common blog modules have.
    In Plain Sight: Embedding plain text into an image so one can hide the text "In Plain Sight". A cursory view into the world of Steganography.
    JS Koans Visual Studio Friendly Version: A version of JS koans loaded into Visual Studio.
    My Troop First: just adding to the site.
    OFX Parser: A library developed in C# that parses OFX files and generates an instance of a class representing the file. A usage example is available, as well as ...
    Orchard Custom Code module: This module adds a very simple tool to add custom code in front-end pages' head and foot.
    SPB Export to KeePass Import Conversion: A simple script to convert the .xml file produced by konste's "SPB Wallet Export" (https://spbwalletexport.codeplex.com/) into an import format accepted by KeePass 2.
    The unTroublemaker: The unTroublemaker is a troubleshooter program that helps your project quickly find and fix missing dependency issues using simple XML-based specifications.


  • Taming Hopping Windows

    - by Roman Schindlauer
    At first glance, hopping windows seem fairly innocuous and obvious. They organize events into windows with a simple periodic definition: the windows have some duration d (e.g. a window covers 5 second time intervals), an interval or period p (e.g. a new window starts every 2 seconds) and an alignment a (e.g. one of those windows starts at 12:00 PM on March 15, 2012 UTC).

        var wins = xs
            .HoppingWindow(TimeSpan.FromSeconds(5),
                           TimeSpan.FromSeconds(2),
                           new DateTime(2012, 3, 15, 12, 0, 0, DateTimeKind.Utc));

    Logically, there is a window with start time a + np and end time a + np + d for every integer n. That’s a lot of windows. So why doesn’t the following query (always) blow up?

        var query = wins.Select(win => win.Count());

    A few users have asked why StreamInsight doesn’t produce output for empty windows. Primarily it’s because there is an infinite number of empty windows! (Actually, StreamInsight uses DateTimeOffset.MaxValue to approximate “the end of time” and DateTimeOffset.MinValue to approximate “the beginning of time”, so the number of windows is lower in practice.) That was the good news. Now the bad news. Events also have duration. Consider the following simple input:

        var xs = this.Application
            .DefineEnumerable(() => new[]
                { EdgeEvent.CreateStart(DateTimeOffset.UtcNow, 0) })
            .ToStreamable(AdvanceTimeSettings.IncreasingStartTime);

    Because the event has no explicit end edge, it lasts until the end of time. So there are lots of non-empty windows if we apply a hopping window to that single event! For this reason, we need to be careful with hopping window queries in StreamInsight. Or we can switch to a custom implementation of hopping windows that doesn’t suffer from this shortcoming. The alternate window implementation produces output only when the input changes.

    We start by breaking up the timeline into non-overlapping intervals assigned to each window. In figure 1, six hopping windows (“Windows”) are assigned to six intervals (“Assignments”) in the timeline. Next we take input events (“Events”) and alter their lifetimes (“Altered Events”) so that they cover the intervals of the windows they intersect. In figure 1, you can see that the first event e1 intersects windows w1 and w2, so it is adjusted to cover assignments a1 and a2. Finally, we can use snapshot windows (“Snapshots”) to produce output for the hopping windows. Notice however that instead of having six windows generating output, we have only four. The first and second snapshots correspond to the first and second hopping windows. The remaining snapshots however cover two hopping windows each! While in this example we saved only two events, the savings can be more significant when the ratio of event duration to window duration is higher.

    Figure 1: Timeline

    The implementation of this strategy is straightforward. We need to set the start times of events to the start time of the interval assigned to the earliest window including the start time. Similarly, we need to modify the end times of events to the end time of the interval assigned to the latest window including the end time. The following snap-to-boundary function that rounds a timestamp value t down to the nearest value t' <= t such that t' is a + np for some integer n will be useful. For convenience, we will represent both DateTime and TimeSpan values using long ticks:

        static long SnapToBoundary(long t, long a, long p)
        {
            return t - ((t - a) % p) - (t > a ? 0L : p);
        }

    How do we find the earliest window including the start time for an event? It’s the window following the last window that does not include the start time, assuming that there are no gaps in the windows (i.e. interval <= duration), a limitation of this solution. To find the end time of that antecedent window, we need to know the alignment of window ends:

        long e = a + (d % p);

    Using the window end alignment, we are finally ready to describe the start time selector:

        static long AdjustStartTime(long t, long e, long p)
        {
            return SnapToBoundary(t, e, p) + p;
        }

    To find the latest window including the end time for an event, we look for the last window start time (non-inclusive):

        public static long AdjustEndTime(long t, long a, long d, long p)
        {
            return SnapToBoundary(t - 1, a, p) + p + d;
        }

    Bringing it together, we can define the translation from events to ‘altered events’ as in Figure 1:

        public static IQStreamable<T> SnapToWindowIntervals<T>(IQStreamable<T> source, TimeSpan duration, TimeSpan interval, DateTime alignment)
        {
            if (source == null) throw new ArgumentNullException("source");

            // reason about DateTime and TimeSpan in ticks
            long d = Math.Min(DateTime.MaxValue.Ticks, duration.Ticks);
            long p = Math.Min(DateTime.MaxValue.Ticks, Math.Abs(interval.Ticks));

            // set alignment to earliest possible window
            var a = alignment.ToUniversalTime().Ticks % p;

            // verify constraints of this solution
            if (d <= 0L) { throw new ArgumentOutOfRangeException("duration"); }
            if (p == 0L || p > d) { throw new ArgumentOutOfRangeException("interval"); }

            // find the alignment of window ends
            long e = a + (d % p);

            return source.AlterEventLifetime(
                evt => ToDateTime(AdjustStartTime(evt.StartTime.ToUniversalTime().Ticks, e, p)),
                evt => ToDateTime(AdjustEndTime(evt.EndTime.ToUniversalTime().Ticks, a, d, p)) -
                    ToDateTime(AdjustStartTime(evt.StartTime.ToUniversalTime().Ticks, e, p)));
        }

        public static DateTime ToDateTime(long ticks)
        {
            // just snap to min or max value rather than under/overflowing
            return ticks < DateTime.MinValue.Ticks
                ? new DateTime(DateTime.MinValue.Ticks, DateTimeKind.Utc)
                : ticks > DateTime.MaxValue.Ticks
                ? new DateTime(DateTime.MaxValue.Ticks, DateTimeKind.Utc)
                : new DateTime(ticks, DateTimeKind.Utc);
        }

    Finally, we can describe our custom hopping window operator:

        public static IQWindowedStreamable<T> HoppingWindow2<T>(
            IQStreamable<T> source,
            TimeSpan duration,
            TimeSpan interval,
            DateTime alignment)
        {
            if (source == null) { throw new ArgumentNullException("source"); }
            return SnapToWindowIntervals(source, duration, interval, alignment).SnapshotWindow();
        }

    By switching from HoppingWindow to HoppingWindow2 in the following example, the query returns quickly rather than gobbling resources and ultimately failing!

        public void Main()
        {
            var start = new DateTimeOffset(new DateTime(2012, 6, 28), TimeSpan.Zero);
            var duration = TimeSpan.FromSeconds(5);
            var interval = TimeSpan.FromSeconds(2);
            var alignment = new DateTime(2012, 3, 15, 12, 0, 0, DateTimeKind.Utc);

            var events = this.Application.DefineEnumerable(() => new[]
            {
                EdgeEvent.CreateStart(start.AddSeconds(0), "e0"),
                EdgeEvent.CreateStart(start.AddSeconds(1), "e1"),
                EdgeEvent.CreateEnd(start.AddSeconds(1), start.AddSeconds(2), "e1"),
                EdgeEvent.CreateStart(start.AddSeconds(3), "e2"),
                EdgeEvent.CreateStart(start.AddSeconds(9), "e3"),
                EdgeEvent.CreateEnd(start.AddSeconds(3), start.AddSeconds(10), "e2"),
                EdgeEvent.CreateEnd(start.AddSeconds(9), start.AddSeconds(10), "e3"),
            }).ToStreamable(AdvanceTimeSettings.IncreasingStartTime);

            var adjustedEvents = SnapToWindowIntervals(events, duration, interval, alignment);
            var query = from win in HoppingWindow2(events, duration, interval, alignment)
                        select win.Count();

            DisplayResults(adjustedEvents, "Adjusted Events");
            DisplayResults(query, "Query");
        }

    As you can see, instead of producing a massive number of windows for the open start edge e0, a single window is emitted from 12:00:15 AM until the end of time:

        Adjusted Events
        StartTime                EndTime                  Payload
        6/28/2012 12:00:01 AM    12/31/9999 11:59:59 PM   e0
        6/28/2012 12:00:03 AM    6/28/2012 12:00:07 AM    e1
        6/28/2012 12:00:05 AM    6/28/2012 12:00:15 AM    e2
        6/28/2012 12:00:11 AM    6/28/2012 12:00:15 AM    e3

        Query
        StartTime                EndTime                  Payload
        6/28/2012 12:00:01 AM    6/28/2012 12:00:03 AM    1
        6/28/2012 12:00:03 AM    6/28/2012 12:00:05 AM    2
        6/28/2012 12:00:05 AM    6/28/2012 12:00:07 AM    3
        6/28/2012 12:00:07 AM    6/28/2012 12:00:11 AM    2
        6/28/2012 12:00:11 AM    6/28/2012 12:00:15 AM    3
        6/28/2012 12:00:15 AM    12/31/9999 11:59:59 PM   1

    Regards, The StreamInsight Team
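    As a quick sanity check of the snap arithmetic, the helper functions above can be exercised on their own. The harness below is my own addition; it uses the same duration, interval and alignment as the example, and per the Adjusted Events table it should print 12:00:05 AM and 12:00:15 AM for event e2:

        using System;

        class SnapCheck
        {
            static long SnapToBoundary(long t, long a, long p)
            {
                return t - ((t - a) % p) - (t > a ? 0L : p);
            }

            static long AdjustStartTime(long t, long e, long p)
            {
                return SnapToBoundary(t, e, p) + p;
            }

            static long AdjustEndTime(long t, long a, long d, long p)
            {
                return SnapToBoundary(t - 1, a, p) + p + d;
            }

            static void Main()
            {
                long d = TimeSpan.FromSeconds(5).Ticks;                                   // duration
                long p = TimeSpan.FromSeconds(2).Ticks;                                   // interval
                long a = new DateTime(2012, 3, 15, 12, 0, 0, DateTimeKind.Utc).Ticks % p; // alignment
                long e = a + (d % p);                                                     // window-end alignment

                var start = new DateTime(2012, 6, 28, 0, 0, 0, DateTimeKind.Utc);

                // Event e2 from the example: starts at +3s, ends at +10s.
                long adjustedStart = AdjustStartTime(start.AddSeconds(3).Ticks, e, p);
                long adjustedEnd = AdjustEndTime(start.AddSeconds(10).Ticks, a, d, p);

                // Expected per the "Adjusted Events" table: 12:00:05 AM and 12:00:15 AM.
                Console.WriteLine(new DateTime(adjustedStart, DateTimeKind.Utc));
                Console.WriteLine(new DateTime(adjustedEnd, DateTimeKind.Utc));
            }
        }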


  • CodePlex Daily Summary for Sunday, May 18, 2014

    CodePlex Daily Summary for Sunday, May 18, 2014

    Popular Releases

    ClosedXML - The easy way to OpenXML: ClosedXML 0.70.0: A lot of fixes. See history.
    TBox - tool to make developer's life easier.: TBox 1.29: Bug fixing. Added LocalizationTool plugin.
    YAXLib: Yet Another XML Serialization Library for the .NET Framework: YAXLib 2.13: Fixed a bug and added unit tests related to serializing path-like aliases with one letter (e.g., './B'). Thanks go to CodeProject user B.O.B. for reporting this bug. Added `Bin/*.dll.mdb` to `.gitignore`. Fixed the issue with indexer properties: indexers must not be serialized/deserialized. YAXLib will ignore delegate (callback/function pointer) properties, so that the exception upon serializing them is prevented. Significant improvement in cyclic object reference detection. Self Referr...
    SFDL.NET: SFDL.NET (2.2.9.2): Changelog: new icon; Xup.in CnL plugin bugfix.
    SEToolbox: SEToolbox 01.030.008 Release 1: Fixed cube editor failing to apply color to cubes. Added, to the cube editor, a replace cube dialog and a Build Percent dialog. Corrected for hidden asteroid ore, allowing rare ore to show when importing an asteroid or converting a 3D model to an asteroid (there still appear to be limitations on rare ore in small asteroids). Allowed ore selection in Asteroid file import (can copy/import and convert an existing asteroid to another ore). Added progress bars to common long-running operations. Fixed ...
    Better Robocopy GUI: Command Line GUI for Robocopy: Better Robocopy GUI has become the primary plugin in Command Line GUI, built on .NET 4.
    Mini SQL Query: Mini SQL Query (1.0.71.456): Minor fixes and template corrections.
    Visual Studio Settings Switcher: Settings Switcher 1.1: Settings Switcher is compatible with Visual Studio 2012 and Visual Studio 2013. Express editions are not supported. New: full support for Visual Studio 2013; Solution Settings Files (see documentation for details); bug fixes and general usability improvements. There are two ways to install Settings Switcher: DOWNLOAD FROM CODEPLEX: download the installer (.vsix) from the link above, close all instances of Visual Studio, and double-click the .vsix file to install Settings Switcher. DOWNLOA...
    SharePoint Online Automation Cmdlets: Apps, Solutions and Permissions: Solutions can now be activated/deactivated/updated :-) See documentation for examples. Added Add-SPOApp and Install-SPOApp for uploading and activating apps on non-developer site collections. Adding groups and permission levels has also been included. Install instructions: install the SharePoint Online Client components, then download and run the MSI file from the downloads section.
    TFS Planning and Disaster Recovery Avoidance Guide: v1.4.BETA - TFS, DR and Azure IaaS Planning Guides: Welcome to the TFS Planning and DR Avoidance Guidance. What is new? A new crisper, more compact style, which is easier to consume on multiple devices without sacrificing any content. Also included are the new TFS on Azure IaaS guide and supplementary guides. Note: the capacity planning workbook and posters are included in the Everything Zip package. Quality bar: the detail documentation has been reviewed by Visual Studio ALM Rangers and has been through an independent technical review ...
    CRM Web API Lead Capture Example: CRM Web API Lead Capture Example: Sample Visual Studio 2013 project that provides an example of how you could use Web API and Microsoft Azure (could be deployed anywhere) to capture simple HTML form data into Dynamics CRM without directly having to integrate a .NET component.
    MB Tools: MDT Monitor Tool v1.4: This tool is used to connect to an MDT 2013 Monitor Webservice. The purpose is to provide an alternative to the MDT Deployment Workbench. New in v1.4: fixed bug where Dart Remote Viewer didn't work; option to show client local time instead of UTC (edit config.xml to enable/disable). New in v1.2: fixed Dart Remote Viewer not connecting to full IP (Issue: 1222). New in v1.1: added timers for autorefresh of webservice info; added some better error checking and cleaned up the code a bit ...
    WinAudit: WinAudit Freeware v3.0: WinAudit.exe v3.0 MD5: 88750CCF49FF7418199B2645755830FA. Known issues: 1. Report creation can be very slow when right-to-left (Hebrew) characters are present. 2. Emsisoft Anti-Malware may stop and/or quarantine WinAudit. This happens when WinAudit attempts to obtain a list of running programmes. You will need to set an exception rule in Emsisoft to allow WinAudit to run.
    TerraMap (Terraria World Map Viewer): TerraMap 1.0.4: Added support for the new Terraria v1.2.4 update: new items, walls, and tiles. Fixed Issue 35206: Highlight/Find doesn't work for Demon Altars. Fixed finding Demon Hearts/Shadow Orbs. Added ability to find Enchanted Swords (in the stone) and Water Bolt books. Fixed installer not uninstalling older versions. The setup file will make sure .NET 4 is installed, install TerraMap, create desktop and start menu shortcuts, add a .wld file association, and launch TerraMap. If you prefer the zip ...
    Amqp.Net Lite: 0.1: This is the Alpha-quality release of the AMQP.Net Lite library.
    TSS.MSR: TSS.MSR v1.1: MSR's TPM2.0 access libraries and sample applications.
    WPF Localization Extension: v2.2.1: Issue #9277, Issue #9292, Issue #9311, Issue #9312, Issue #9313, Issue #9314.
    Hime Parser Generator: Hime Parser Generator v1.0.0: This releases the stable version of the Hime parser generator. This release contains many bugfixes and performance enhancements. It also provides a clean API for the manipulation and debugging of context-free grammars.
    CtrlAltStudio Viewer: CtrlAltStudio Viewer 1.2.1.41167 Release: This release of the CtrlAltStudio Viewer includes the following significant features: Oculus Rift support; stereoscopic 3D display support; variable walking / flying speed; Xbox 360 Controller support; Kinect for Windows support. Based on the Firestorm viewer 4.6.5 codebase. For more details, see the release notes linked to below. Release notes: http://ctrlaltstudio.com/viewer/release-notes/1-2-1-41167-release Support info: http://ctrlaltstudio.com/viewer/support Privacy policy: http:/...
    ExtJS based ASP.NET Controls: FineUI v4.0.6: FineUI is an ExtJS-based ASP.NET control library: no JavaScript, no CSS, no UpdatePanel, no ViewState, no WebServices. Supported browsers: IE 8.0+, Chrome, Firefox, Opera, Safari. License: Apache License v2.0 (ExtJS itself is GPL v3, http://www.sencha.com/license). Links: http://fineui.com/ http://fineui.com/bbs/ http://fineui.com/demo/ http://fineui.com/doc/ http://fineui.codeplex.com/ http://fineui.com/bbs/forum.ph...

    New Projects

    Cheburashka: Static Code Analysis Rule-set for Visual Studio SSDT projects.
    Free Workflow: Free Workflow Project aims to use Microsoft Workflow Manager as a hosting environment for Business Process Workflow; it has its own database to manage users' tasks.
    Jenkins Tray: Jenkins tray real-time notifier.
    MoneyManagement
    MoonSharp: An interpreter for a very close cousin of the Lua language, written in C# for the .NET, Mono, Xamarin and Unity3D platforms.
    OITPMS_MVC: Online Issue Tracking and Project Management System. This project was created for tracking bugs and issues that come up while creating a project or in ongoing projects.
    Spiral Chrome: Spiral Chrome, the quick, simple, and user friendly method of phishing.
    Throttling Suite for Web API: The Throttling Suite provides throttling control capabilities to .NET Web API applications. It is a highly customizable product, yet simple to use.
    TP2 .NET: Practical assignment 2 for the .NET class.


  • Changes to the LINQ-to-StreamInsight Dialect

    - by Roman Schindlauer
    In previous versions of StreamInsight (1.0 through 2.0), CepStream<> represents temporal streams of many varieties:

    • Streams with ‘open’ inputs (e.g., those defined and composed over CepStream<T>.Create(string streamName))
    • Streams with ‘partially bound’ inputs (e.g., those defined and composed over CepStream<T>.Create(Type adapterFactory, …))
    • Streams with fully bound inputs (e.g., those defined and composed over To*Stream – sequences or DQC)
    • The stream may be embedded (where Server.Create is used)
    • The stream may be remote (where Server.Connect is used)

    When adding support for new programming primitives in StreamInsight 2.1, we faced a choice: add a fourth variety (use CepStream<> to represent streams that are bound to the new programming model constructs), or introduce a separate type that represents temporal streams in the new user model. We opted for the latter. Introducing a new type has the effect of reducing the number of (confusing) runtime failures due to inappropriate uses of CepStream<> instances in the incorrect context. The new types are:

    • IStreamable<>, which logically represents a temporal stream.
    • IQStreamable<> : IStreamable<>, which represents a queryable temporal stream. Its relationship to IStreamable<> is analogous to the relationship of IQueryable<> to IEnumerable<>. The developer can compose temporal queries over remote stream sources using this type.

    The syntax of temporal queries composed over IQStreamable<> is mostly consistent with the syntax of our existing CepStream<>-based LINQ provider. However, we have taken the opportunity to refine certain aspects of the language surface. Differences are outlined below. Because 2.1 introduces new types to represent temporal queries, the changes outlined in this post do not impact existing StreamInsight applications using the existing types!

    SelectMany

    StreamInsight does not support the SelectMany operator in its usual form (which is analogous to SQL’s “CROSS APPLY” operator):

        static IEnumerable<R> SelectMany<T, R>(this IEnumerable<T> source, Func<T, IEnumerable<R>> collectionSelector)

    It instead uses SelectMany as a convenient syntactic representation of an inner join. The parameter to the selector function is thus unavailable. Because the parameter isn’t supported, its type in StreamInsight 1.0 – 2.0 wasn’t carefully scrutinized. Unfortunately, the type chosen for the parameter is nonsensical to LINQ programmers:

        static CepStream<R> SelectMany<T, R>(this CepStream<T> source, Expression<Func<CepStream<T>, CepStream<R>>> streamSelector)

    Using Unit as the type for the parameter accurately reflects StreamInsight’s capabilities:

        static IQStreamable<R> SelectMany<T, R>(this IQStreamable<T> source, Expression<Func<Unit, IQStreamable<R>>> streamSelector)

    For queries that succeed – that is, queries that do not reference the stream selector parameter – there is no difference between the code written for the two overloads:

        from x in xs
        from y in ys
        select f(x, y)

    Top-K

    The Take operator used in StreamInsight causes confusion for LINQ programmers because it is applied to the (unbounded) stream rather than the (bounded) window, suggesting that the query as a whole will return k rows:

        (from win in xs.SnapshotWindow()
         from x in win
         orderby x.A
         select x.B).Take(k)

    The use of SelectMany is also unfortunate in this context because it implies the availability of the window parameter within the remainder of the comprehension. The following compiles but fails at runtime:

        (from win in xs.SnapshotWindow()
         from x in win
         orderby x.A
         select win).Take(k)

    The Take operator in 2.1 is applied to the window rather than the stream.

    Before:

        (from win in xs.SnapshotWindow()
         from x in win
         orderby x.A
         select x.B).Take(k)

    After:

        from win in xs.SnapshotWindow()
        from b in
            (from x in win
             orderby x.A
             select x.B).Take(k)
        select b

    Multicast

    We are introducing an explicit multicast operator in order to preserve expression identity, which is important given the semantics about moving code to and from StreamInsight. This also better matches existing LINQ dialects, such as Reactive. This pattern enables expressing multicasting in two ways.

    Implicit:

        var ys = from x in xs
                 where x.A > 1
                 select x;
        var zs = from y1 in ys
                 from y2 in ys.ShiftEventTime(_ => TimeSpan.FromSeconds(1))
                 select y1 + y2;

    Explicit:

        var ys = from x in xs
                 where x.A > 1
                 select x;
        var zs = ys.Multicast(ys1 =>
            from y1 in ys1
            from y2 in ys1.ShiftEventTime(_ => TimeSpan.FromSeconds(1))
            select y1 + y2);

    Notice the product translates an expression using implicit multicast into an expression using the explicit multicast operator. The user does not see this translation.

    Default window policies

    Only default window policies are supported in the new surface. Other policies can be simulated by using AlterEventLifetime.

    Before:

        xs.SnapshotWindow(
            WindowInputPolicy.ClipToWindow,
            SnapshotWindowInputPolicy.Clip)

    After:

        xs.SnapshotWindow()

    Before:

        xs.TumblingWindow(
            TimeSpan.FromSeconds(1),
            HoppingWindowOutputPolicy.PointAlignToWindowEnd)

    After:

        xs.TumblingWindow(
            TimeSpan.FromSeconds(1))

    Before:

        xs.TumblingWindow(
            TimeSpan.FromSeconds(1),
            HoppingWindowOutputPolicy.ClipToWindowEnd)

    After: not supported …

    LeftAntiJoin

    Representation of LASJ as a correlated sub-query in the LINQ surface is problematic as the StreamInsight engine does not support correlated sub-queries (see discussion of SelectMany). The current syntax requires the introduction of an otherwise unsupported ‘IsEmpty()’ operator. As a result, the pattern is not discoverable and implies capabilities not present in the server. The direct representation of LASJ is used instead.

    Before:

        from x in xs
        where
            (from y in ys
             where x.A > y.B
             select y).IsEmpty()
        select x

    After:

        xs.LeftAntiJoin(ys, (x, y) => x.A > y.B)

    Before:

        from x in xs
        where
            (from y in ys
             where x.A == y.B
             select y).IsEmpty()
        select x

    After:

        xs.LeftAntiJoin(ys, x => x.A, y => y.B)

    ApplyWithUnion

    The ApplyWithUnion methods have been deprecated since their signatures are redundant given the standard SelectMany overloads.

    Before:

        xs.GroupBy(x => x.A).ApplyWithUnion(gs =>
            from win in gs.SnapshotWindow()
            select win.Count())

    After:

        xs.GroupBy(x => x.A).SelectMany(gs =>
            from win in gs.SnapshotWindow()
            select win.Count())

    Before:

        xs.GroupBy(x => x.A).ApplyWithUnion(gs =>
            from win in gs.SnapshotWindow()
            select win.Count(), r => new { r.Key, Count = r.Payload })

    After:

        from x in xs
        group x by x.A into gs
        from win in gs.SnapshotWindow()
        select new { gs.Key, Count = win.Count() }

    Alternate UDO syntax

    The representation of UDOs in the StreamInsight LINQ dialect confuses cardinalities. Based on the semantics of user-defined operators in StreamInsight, one would expect to construct queries in the following form:

        from win in xs.SnapshotWindow()
        from y in MyUdo(win)
        select y

    Instead, the UDO proxy method is referenced within a projection, and the (many) results returned by the user code are automatically flattened into a stream:

        from win in xs.SnapshotWindow()
        select MyUdo(win)

    The “many-or-one” confusion is exemplified by the following example that compiles but fails at runtime:

        from win in xs.SnapshotWindow()
        select MyUdo(win) + win.Count()

    The above query must fail because the UDO is in fact returning many values per window while the count aggregate is returning one.

    Original syntax:

        from win in xs.SnapshotWindow()
        select win.UdoProxy(1)

    New alternate syntax:

        from win in xs.SnapshotWindow()
        from y in win.UserDefinedOperator(() => new Udo(1))
        select y

    -or-

        from win in xs.SnapshotWindow()
        from y in win.UdoMacro(1)
        select y

    Notice that this formulation also sidesteps the dynamic type pitfalls of the existing “proxy method” approach to UDOs, in which the type of the UDO implementation (TInput, TOutput) and the type of its constructor arguments (TConfig) need to align in a precise and non-obvious way with the argument and return types for the corresponding proxy method.

    UDSO syntax

    UDSO currently leverages the DataContractSerializer to clone initial state for logical instances of the user operator. Initial state will instead be described by an expression in the new LINQ surface.

    Before:

        xs.Scan(new Udso())

    After:

        xs.Scan(() => new Udso())

    Name changes

    • ShiftEventTime => AlterEventStartTime: the alter event lifetime overload taking a new start time value has been renamed.
    • CountByStartTimeWindow => CountWindow
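    Pulling a few of these changes together, a query in the 2.1 dialect might look like the sketch below. This is my own illustration based solely on the signatures quoted in this post; the Payload type, the stream parameters, and the namespace imports are assumptions:

        using System;
        using System.Linq;
        using Microsoft.ComplexEventProcessing.Linq; // assumed namespace for IQStreamable<>

        static class DialectSketch
        {
            public class Payload { public int A; public int B; }

            // xs and ys are assumed to be queryable temporal streams obtained elsewhere.
            public static void Compose(IQStreamable<Payload> xs, IQStreamable<Payload> ys)
            {
                // Top-K: Take is applied to the (bounded) window, not the stream.
                var topK =
                    from win in xs.SnapshotWindow()   // default window policy only
                    from b in (from x in win
                               orderby x.A
                               select x.B).Take(5)
                    select b;

                // Left anti-semijoin: the direct operator replaces the IsEmpty() pattern.
                var unmatched = xs.LeftAntiJoin(ys, (x, y) => x.A > y.B);

                // Explicit multicast preserves expression identity for self-correlation.
                // (ShiftEventTime is renamed AlterEventStartTime in 2.1.)
                var paired = xs.Multicast(xs1 =>
                    from y1 in xs1
                    from y2 in xs1.ShiftEventTime(_ => TimeSpan.FromSeconds(1))
                    select y1.A + y2.A);
            }
        }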


  • How to Implement Project Type "Copy", "Move", "Rename", and "Delete"

    - by Geertjan
    You've followed the NetBeans Project Type Tutorial and now you'd like to let the user copy, move, rename, and delete the projects conforming to your project type. When they right-click a project, they should see the relevant menu items and those menu items should provide dialogs for user interaction, followed by event handling code to deal with the current operation. Right now, at the end of the tutorial, the "Copy" and "Delete" menu items are present but disabled, while the "Move" and "Rename" menu items are absent: The NetBeans Project API provides a built-in mechanism out of the box that you can leverage for project-level "Copy", "Move", "Rename", and "Delete" actions. All the functionality is there for you to use, while all that you need to do is a bit of enablement and configuration, which is described below. To get started, read the following from the NetBeans Project API: http://bits.netbeans.org/dev/javadoc/org-netbeans-modules-projectapi/org/netbeans/spi/project/ActionProvider.html http://bits.netbeans.org/dev/javadoc/org-netbeans-modules-projectapi/org/netbeans/spi/project/CopyOperationImplementation.html http://bits.netbeans.org/dev/javadoc/org-netbeans-modules-projectapi/org/netbeans/spi/project/MoveOrRenameOperationImplementation.html http://bits.netbeans.org/dev/javadoc/org-netbeans-modules-projectapi/org/netbeans/spi/project/DeleteOperationImplementation.html Now, let's do some work. For each of the menu items we're interested in, we need to do the following: Provide enablement and invocation handling in an ActionProvider implementation. Provide appropriate OperationImplementation classes. Add the new classes to the Project Lookup. Make the Actions visible on the Project Node. Run the application and verify the Actions work as you'd like. Here we go: Create an ActionProvider. Here you specify the Actions that should be supported, the conditions under which they should be enabled, and what should happen when they're invoked, using lots of default code that lets you reuse the functionality provided by the NetBeans Project API: class CustomerActionProvider implements ActionProvider { @Override public String[] getSupportedActions() { return new String[]{ ActionProvider.COMMAND_RENAME, ActionProvider.COMMAND_MOVE, ActionProvider.COMMAND_COPY, ActionProvider.COMMAND_DELETE }; } @Override public void invokeAction(String string, Lookup lkp) throws IllegalArgumentException { if (string.equalsIgnoreCase(ActionProvider.COMMAND_RENAME)) { DefaultProjectOperations.performDefaultRenameOperation( CustomerProject.this, ""); } if (string.equalsIgnoreCase(ActionProvider.COMMAND_MOVE)) { DefaultProjectOperations.performDefaultMoveOperation( CustomerProject.this); } if (string.equalsIgnoreCase(ActionProvider.COMMAND_COPY)) { DefaultProjectOperations.performDefaultCopyOperation( CustomerProject.this); } if (string.equalsIgnoreCase(ActionProvider.COMMAND_DELETE)) { DefaultProjectOperations.performDefaultDeleteOperation( CustomerProject.this); } } @Override public boolean isActionEnabled(String command, Lookup lookup) throws IllegalArgumentException { if ((command.equals(ActionProvider.COMMAND_RENAME))) { return true; } else if ((command.equals(ActionProvider.COMMAND_MOVE))) { return true; } else if ((command.equals(ActionProvider.COMMAND_COPY))) { return true; } else if ((command.equals(ActionProvider.COMMAND_DELETE))) { return true; } return false; } } Importantly, to round off this step, add "new CustomerActionProvider()" to the "getLookup" method of the project. 
If you were to run the application right now, all the Actions we're interested in would be enabled (if they are visible, as described in step 4 below) but when you invoke any of them you'd get an error message because each of the DefaultProjectOperations above looks in the Lookup of the Project for the presence of an implementation of a class for handling the operation. That's what we're going to do in the next step. Provide Implementations of Project Operations. For each of our operations, the NetBeans Project API lets you implement classes to handle the operation. The dialogs for interacting with the project are provided by the NetBeans project system, but what happens with the folders and files during the operation can be influenced via the operations. Below are the simplest possible implementations, i.e., here we assume we want nothing special to happen. Each of the below needs to be in the Lookup of the Project in order for the operation invocation to succeed. private final class CustomerProjectMoveOrRenameOperation implements MoveOrRenameOperationImplementation { @Override public List<FileObject> getMetadataFiles() { return new ArrayList<FileObject>(); } @Override public List<FileObject> getDataFiles() { return new ArrayList<FileObject>(); } @Override public void notifyRenaming() throws IOException { } @Override public void notifyRenamed(String nueName) throws IOException { } @Override public void notifyMoving() throws IOException { } @Override public void notifyMoved(Project original, File originalPath, String nueName) throws IOException { } } private final class CustomerProjectCopyOperation implements CopyOperationImplementation { @Override public List<FileObject> getMetadataFiles() { return new ArrayList<FileObject>(); } @Override public List<FileObject> getDataFiles() { return new ArrayList<FileObject>(); } @Override public void notifyCopying() throws IOException { } @Override public void notifyCopied(Project prjct, File file, String string) throws IOException { } } private final class CustomerProjectDeleteOperation implements DeleteOperationImplementation { @Override public List<FileObject> getMetadataFiles() { return new ArrayList<FileObject>(); } @Override public List<FileObject> getDataFiles() { return new ArrayList<FileObject>(); } @Override public void notifyDeleting() throws IOException { } @Override public void notifyDeleted() throws IOException { } } Also make sure to put the above methods into the Project Lookup. Check the Lookup of the Project. The "getLookup()" method of the project should now include the classes you created above, as shown in bold below: @Override public Lookup getLookup() { if (lkp == null) { lkp = Lookups.fixed(new Object[]{ this, new Info(), new CustomerProjectLogicalView(this), new CustomerCustomizerProvider(this), new CustomerActionProvider(), new CustomerProjectMoveOrRenameOperation(), new CustomerProjectCopyOperation(), new CustomerProjectDeleteOperation(), new ReportsSubprojectProvider(this), }); } return lkp; } Make Actions Visible on the Project Node. The NetBeans Project API gives you a number of CommonProjectActions, including for the actions we're dealing with. 
Make sure the following items are in the "getActions" method of the project node:

@Override
public Action[] getActions(boolean arg0) {
    return new Action[]{
        CommonProjectActions.newFileAction(),
        CommonProjectActions.copyProjectAction(),
        CommonProjectActions.moveProjectAction(),
        CommonProjectActions.renameProjectAction(),
        CommonProjectActions.deleteProjectAction(),
        CommonProjectActions.customizeProjectAction(),
        CommonProjectActions.closeProjectAction()
    };
}

Run the Application. When you run the application, you should see the new menu items on the project node. Let's now try out the various actions:

Copy. When you invoke the Copy action, a dialog appears in which you provide a new project name and location; the copy is performed when the Copy button is clicked. The message shown in red in that dialog might not be relevant to your project type. When you right-click the application and choose Branding, you can find the string in the Resource Bundles tab. However, note that the message will be shown in red no matter what the text is, hence you can really only put something like a warning message there. If you have no text at all, it will also look odd. If the project has subprojects, the copy operation will not automatically copy the subprojects. Take a look here and here for similar, more complex scenarios.

Move. When you invoke the Move action, a dialog is shown for choosing the new location.

Rename. The Rename Project dialog is shown when you invoke the Rename action. I tried it and both the display name and the folder on disk are changed.

Delete. When you invoke the Delete action, a confirmation dialog appears. In the default scenario its checkbox is not checkable, and when the dialog is confirmed, the project is simply closed, i.e., the node hierarchy is removed from the application. However, if you truly want to let the user delete the project on disk, pass the Project to the DeleteOperationImplementation and then add the children of the Project you want to delete to the getDataFiles method:

private final class CustomerProjectDeleteOperation implements DeleteOperationImplementation {
    private final CustomerProject project;
    private CustomerProjectDeleteOperation(CustomerProject project) {
        this.project = project;
    }
    @Override
    public List<FileObject> getDataFiles() {
        List<FileObject> files = new ArrayList<FileObject>();
        FileObject[] projectChildren = project.getProjectDirectory().getChildren();
        for (FileObject fileObject : projectChildren) {
            addFile(project.getProjectDirectory(), fileObject.getNameExt(), files);
        }
        return files;
    }
    private void addFile(FileObject projectDirectory, String fileName, List<FileObject> result) {
        FileObject file = projectDirectory.getFileObject(fileName);
        if (file != null) {
            result.add(file);
        }
    }
    @Override
    public List<FileObject> getMetadataFiles() {
        return new ArrayList<FileObject>();
    }
    @Override
    public void notifyDeleting() throws IOException {
    }
    @Override
    public void notifyDeleted() throws IOException {
    }
}

Now the user will be able to check the checkbox, causing the method above to be called in the DeleteOperationImplementation (the matching Lookup registration is sketched below). Hope this answers some questions or at least gets the discussion started. Before asking questions about this topic, please take the steps above and only then attempt to apply them to your own scenario.
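Note that because CustomerProjectDeleteOperation now takes the project as a constructor argument, its registration in the project's "getLookup" method changes accordingly. A minimal sketch, assuming the "lkp" field from the earlier listing, with the other entries unchanged:

@Override
public Lookup getLookup() {
    if (lkp == null) {
        lkp = Lookups.fixed(new Object[]{
            this,
            // ...other entries as before...
            // pass the project so getDataFiles() can list its children
            new CustomerProjectDeleteOperation(this),
        });
    }
    return lkp;
}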
Useful implementations to look at:

http://kickjava.com/src/org/netbeans/modules/j2ee/clientproject/AppClientProjectOperations.java.htm
https://kenai.com/projects/nbandroid/sources/mercurial/content/project/src/org/netbeans/modules/android/project/AndroidProjectOperations.java

    Read the article

  • CodePlex Daily Summary for Thursday, May 22, 2014

CodePlex Daily Summary for Thursday, May 22, 2014

Popular Releases

TerraMap (Terraria World Map Viewer): TerraMap 1.0.5: Added support for the new Terraria v1.2.4 update (new items, walls, and tiles). Added the ability to select multiple highlighted block types. Added a dynamic, interactive highlight opacity slider, making it easier to find highlighted tiles with dark colors. Added ability to find Enchanted Swords (in the stone) and Water Bolt books. Fixed Issue 35206: Highlight/Find doesn't work for Demon Altars. Fixed finding Demon Hearts/Shadow Orbs. Fixed installer not uninstalling older versions. T...

MDX Parser, Builder, DOM and OLAP visual controls with Writeback for Silverlight: Ranet.UILibrary.Olap-2.5.434.0: Issues hot fixed: 102.127666 - incorrectly formed EXCEPT () command in an MDX query for the exclusion of several elements among the subordinates; 102.132877 - incorrectly generated MDX query when using VisualTotals (the HIERARCHIZE function should be inside); 102.132887 - if in the MemberChoice you select an item with children and then delete one of the children, the parent is missing from the result; build the test project (...\Users\Public\Documents\Ranet.UILibrary.Olap-2.5 Samples\UILibrary.Olap\Cs\Ranet...

R.NET: R.NET 1.5.13: R.NET 1.5.13 is a beta release towards R.NET 1.6. You are encouraged to use it now and give feedback. See the documentation for setup and usage instructions. Main changes for R.NET 1.5.13: changed the nuget packaging to distribute via nuget.org at R.NET Community and R.NET FSharp Utility; without entering into details, this was necessary to facilitate the distribution of the packages. You are strongly encouraged to use nuget to manage the dependency of your work on R.NET, rather than the bin...

Adaptive Access Layers: AAL 2.0: Major rework with breaking changes. Much more flexible registration of implementation strategy and support for methods, properties and events.

Google Analytics SDK for Windows 8 and Windows Phone: Google Analytics SDK 1.2.08: Recommended for Xaml/C# developers: download the package through NuGet. Recommended for JS and C++ developers: download the new native vsix (Visual Studio SDK) package above. New features and fixes: see the full list of changes since the last public release. Shout outs: DacianMujdar for the pull request to add support for campaigns; aclassen for the pull request to add support for resolved phone models in the WP7 & 8 Silverlight versions; Alan Mendelevich for the great open source library PhoneNam...

DbSharp: DbSharpApplication (Binary files): A zip file that includes DbSharpApplication.exe. Initial release.

Multiwfn: Multiwfn 3.3.3

WebExtras: v1.4.0-Beta-1: Enh: adding support for the jQuery UI framework. Enh: adding support for the jqPlot charting library. Dropping dependency on the MoreLinq library. Note: the Html.LabelForV2(...) extension method has now been deprecated; you should use the Html.RequiredFieldLabelFor(...) extension method instead. This extension method will be removed in future versions.

MISAO: Ver. 5.4: Fix bugs (Nicovideo viewer add-in). Add Masakari option (Nicovideo viewer add-in).

QuickMon: Version 3.11: This release adds some major changes to the core monitoring engine. 1. Polling overrides: each collector entry can specify a minimum time updating is allowed for it and dependent collector entries. 2. Polling frequency sliding: in addition to polling overrides, a collector entry can specify a 'sliding' polling frequency if the state remains the same; this means the frequency slows down, reducing the overhead of polling a stagnant resource. 3. The monitor pack has an overriding frequency. If used...

Mini SQL Query: Mini SQL Query (1.0.72.457): Apologies for the previous update! FK issue fixed, and also a template data cache issue.

WordMat: WordMat v. 1.06: Check WordMat.blogspot.com for a complete description of new features.

Wsus Package Publisher: Release v1.3.1405.17: Add Russian translation (thanks to VSharmanov). Fix a bug that made WPP crash if the user clicks "Connect/Reload" while the Report Tab is loading. Enhance the way WPP stores the password for remote computer commands.

MoreTerra (Terraria World Viewer): More Terra 1.12.9: Compatibility: updated to account for new format 1.2.4.1. Issues: all items have not been added, and some colors for new tiles may be off; I wanted to get this out so people have a usable program.

LINQ to Twitter: LINQ to Twitter v3.0.3: Supports .NET 4.5x, Windows Phone 8.x, Windows 8.x, Windows Azure, Xamarin.Android, and Xamarin.iOS. New features include Status/Lookup, Mute APIs, and bug fixes. 100% Twitter API v1.1 coverage, Async, Portable Class Library (PCL).

CS-Script for Notepad++ (C# intellisense and code execution): Release v1.0.26.0: Added access to the Release Notes during 'Check for Updates...'. Debug panels: added support for generic type members; members are grouped into 'Raw View' and 'Non-Public members' categories; implemented a dedicated (array-like) view for Lists and Dictionaries. http://download-codeplex.sec.s-msft.com/Download?ProjectName=csscriptnpp&DownloadId=846498

ClosedXML - The easy way to OpenXML: ClosedXML 0.70.0: A lot of fixes. See history.

SFDL.NET: SFDL.NET (2.2.9.2): Changelog: new icon; Xup.in CnL plugin bugfix.

SEToolbox: SEToolbox 01.030.008 Release 1: Fixed cube editor failing to apply color to cubes. Added to cube editor: replace cube dialog and Build Percent dialog. Corrected for hidden asteroid ore, allowing rare ore to show when importing an asteroid or converting a 3D model to an asteroid (there still appear to be limitations on rare ore in small asteroids). Allowed ore selection on asteroid file import (can copy/import and convert an existing asteroid to another ore). Added progress bars to common long-running operations. Fixed ...

New Projects

canopyazure: canopyazure

CI&T ULS Log Viewer: A tool to help SharePoint developers analyze the log files.

EmissorCTE: A DLL that will send, receive, and cancel the CT-e, and will also generate the DACTE.

F5 BIG-IP Local Traffic Manager: A .NET wrapper for the F5 iControl service-enabled management API.

Hydrodesktop Excel-Addin: HydroDesktop Excel

Page Manifest Extractor: Extract a SharePoint page manifest/XML.

Project Euler Solutions By multiple1902: Project Euler solutions in F#. By Weisi Dai (multiple1902) <weisi@x-research.com>

VerySimpleBackup: Simple command line tool to back up any directory (using Volume Shadow Copy), archiving it (ZIP) with an optional password, and copying it to FTP (cycles supported).

Waf Music Manager: The Waf Music Manager is a simple and fast application that makes it fun to manage the local music collection.

    Read the article

  • Stumbling Through: Visual Studio 2010 (Part IV)

So finally we get to the fun part: the fruits of all of our middle-tier/back-end labors of generating classes to interface with an XML data source, which the previous posts were about, can now be presented quickly and easily to an end user. I think. We'll see. We'll be using a WPF window to display all of our various MFL information that we've collected in the two XML files, and we'll provide a means of adding, updating and deleting each of these entities using as little code as possible. Additionally, I would like to dig into the performance of this solution, as well as its flexibility if we were to modify the underlying XML schema. So first things first, let's create a WPF project and include our XML data in a data folder within. On the main window, we'll drag out the following controls:

- A combo box to contain all of the teams
- A list box to show the players of the selected team, along with add/delete player buttons
- A text box tied to the selected player's name, with a save button to save any changes made to the player name
- A combo box of all the available positions, tied to the currently selected player's position
- A data grid tied to the statistics of the currently selected player, with add/delete statistic buttons

This monstrosity of a form and its associated project will look like this (don't forget to reference the DataFoundation project from the Presentation project). To get to the visual data binding, as we learned in a previous post, you have to first make sure the project containing your bindable classes is compiled. Do so, and then open the Data Sources pane to add a reference to the Teams and Positions classes in the DataFoundation project. Why only Team and Position? Well, we will get to Players from Teams, and Statistics from Players, so no need to make an interface for them, as we'll see in a second. As for Positions, we'll need a way to bind the dropdown to ALL positions; they don't appear underneath any of the other classes, so we need to reference it directly. After adding these guys, expand every node in your Data Sources pane and see how the Team node allows you to drill into Players and then Statistics. This is why there was no need to bring in a reference to those classes for the UI we are designing. Now for the seriously hard work of binding all of our controls to the correct data sources. Drag the following items from the Data Sources pane to the specified control on the window design canvas:

- Team.Name > Teams combo box
- Team.Players.Name > Players list box
- Team.Players.Name > Player name text box
- Team.Players.Statistics > Statistics data grid
- Position.Name > Positions combo box

That is it! Really? Well, no, not really; there is one caveat here, in that the Positions combo box is not bound to the selected player's position. To do so, we will apply a binding to the position combo box's SelectedValue to point to the current player's PositionId value. That should do the trick; now all we need to worry about is loading the actual data. Sadly, it appears as if we will need to drop to code in order to invoke our IO methods to load all teams and positions. At least Visual Studio kindly created the stubs for us to do so; ultimately the code should look like this. Note the weirdness with the InitializeDataFiles call; that is my current means of telling an IO where to load the data for each of the entities. I haven't thought of a more intuitive way than that yet, but do note that all data is loaded from Teams.xml besides positions, which are loaded from Lookups.xml.

I think that may be all we need to do to at least load all of the data; let's run it and see. Yay! All of our glorious data is being displayed! Er, wait, what's up with the position dropdown? Why is it red? Let's select the RB and see if everything updates. Crap, the position didn't update to reflect the selected player, but everything else did. Where did we go wrong in binding the position to the selected player? Thinking about it a bit and comparing it to how traditional data binding works, I realize that we never set the value member (or some similar property) to tell the control to join the Id of the source (positions) to the position Id of the player. I don't see a similar property to that on the combo box control, but I do see a property named SelectedValuePath that might be it, so I set it to Id and run the app again. Hey, all right! No red box around the positions combo box. Unfortunately, selecting the RB does not update the dropdown to point to Runningback. Hmmm. Now what could it be? Maybe the problem is that we are loading teams before we are loading positions, so when it binds position Id, all of the positions aren't loaded yet. I went to the code behind and switched things so position loads first, and no dice. Same result when I run. Why? WHY? Ok, ok, calm down, take a deep breath. Get something with caffeine or sugar (preferably both) and think rationally. Ok, gigantic chocolate chip cookie and a Mountain Dew chaser have never let me down in the past, so don't fail me now! Aha! Of course! I didn't even have to finish the Mountain Dew and I think I've got it: Data Context. By default, when setting the selected value binding for the dropdown, the data context was list_team. I don't even know what the heck list_team is; we want it to be bound to our team players view source resource instead. Running it now and selecting the various players: done and done. Everything read and bound, thank you caffeine and sugar! Oh, and thank you Visual Studio 2010. Let's wire up some of those buttons now. There has got to be a better way to do this, but it works for now. What the add player button does is add a new player object to the currently selected team. Unfortunately, I couldn't get the new object to automatically show up in the players list (something about not using an observable collection; gotta look into this), so I just save the change immediately and reload the screen. Terrible, but it works. Let's go after something easier: the save button. By default, as we type in new text for the player's name, it is showing up in the list box as updated. Cool! Why couldn't my add new player logic do that? Anyway, the save button should be as simple as invoking MFL.IO.Save for the selected player, like this:

MFL.IO.Save((MFL.Player)lbTeamPlayers.SelectedItem, true);

Surprisingly, that worked on the first try. Let's see if we get as lucky with the Delete player button:

MFL.IO.Delete((MFL.Player)lbTeamPlayers.SelectedItem);
Refresh();

Note the use of the Refresh method again; I can't seem to figure out why updates to the underlying data source are immediately reflected, but adds and deletes are not. That is a problem for another day, and again my hunch is that I should be binding to something more complex than IEnumerable (like an observable collection). Now that an example of the basic CRUD methods is wired up, I want to quickly investigate the performance of this beast.

I'm going to make a special button to add 30 teams, each with 50 players and 10 seasons' worth of stats. If my math is right, that will end up with 15,000 rows of data, a pretty hefty amount for an XML file. The save of all this new data took a little over a minute, but that is acceptable because we wouldn't typically be saving batches of 15k records, and the resulting XML file size is a little over a megabyte. Not huge, but big enough to see some read performance numbers. Or so I thought: it reads this file and renders the first team in under a second. That is unbelievable, but we are lazy loading and the file really wasn't that big. I will increase it to 50 teams with 100 players and 20 seasons each: 100,000 rows. It took a year and a half to save all of that data, and resulted in an 8 megabyte file. Seriously, if you are loading XML files this large, get a freaking database! Despite this, it STILL takes under a second to load and render the first team, which is interesting mostly because I thought that it was loading that entire 8 MB XML file behind the scenes. I have to say that I am quite impressed with the performance of the LINQ to XML approach, particularly since I took no effort to optimize any of this code and was fairly new to the concept from the start. There might be some merit to this little project after all. Look out SQL Server and Oracle, use XML files instead! Next up, I am going to completely pull the rug out from under the UI and change a number of entities in our model. How well will the code be regenerated? How much effort will be required to tie things back together in the UI?
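The hunch above about the observable collection is right for typical WPF scenarios: adds and deletes show up automatically only when the bound collection implements INotifyCollectionChanged, which ObservableCollection<T> does and plain IEnumerable/List<T> do not. A minimal sketch, with a hypothetical Player class standing in for the generated MFL types:

using System.Collections.ObjectModel;

public class Player
{
    public string Name { get; set; }
}

public class TeamViewModel
{
    private readonly ObservableCollection<Player> players =
        new ObservableCollection<Player>();

    // Bind the players ListBox's ItemsSource to this property; because
    // ObservableCollection raises CollectionChanged, Add and Remove
    // update the UI without any manual Refresh() call.
    public ObservableCollection<Player> Players
    {
        get { return players; }
    }

    public void AddPlayer(string name)
    {
        players.Add(new Player { Name = name }); // appears in the list immediately
    }
}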

    Read the article

  • HTML Tidy in NetBeans IDE

    - by Geertjan
First step in integrating HTML Tidy (via its JTidy implementation) into NetBeans IDE: The reason why I started doing this is because I want to integrate this into the pluggable analyzer functionality of NetBeans IDE that I recently blogged about, i.e., where the FindBugs functionality is found. So a logical first step is to get it working in an Action class, after which I can port it into the analyzer infrastructure:

import java.awt.event.ActionEvent;
import java.awt.event.ActionListener;
import java.io.IOException;
import java.io.PrintWriter;
import java.io.StringWriter;
import org.openide.awt.ActionID;
import org.openide.awt.ActionReference;
import org.openide.awt.ActionReferences;
import org.openide.awt.ActionRegistration;
import org.openide.cookies.EditorCookie;
import org.openide.cookies.LineCookie;
import org.openide.loaders.DataObject;
import org.openide.text.Line;
import org.openide.text.Line.ShowOpenType;
import org.openide.util.Exceptions;
import org.openide.util.NbBundle.Messages;
import org.openide.windows.IOProvider;
import org.openide.windows.InputOutput;
import org.openide.windows.OutputEvent;
import org.openide.windows.OutputListener;
import org.openide.windows.OutputWriter;
import org.w3c.tidy.Tidy;

@ActionID(
        category = "Tools",
        id = "org.jtidy.TidyAction")
@ActionRegistration(
        displayName = "#CTL_TidyAction")
@ActionReferences({
    @ActionReference(path = "Loaders/text/html/Actions", position = 150),
    @ActionReference(path = "Editors/text/html/Popup", position = 750)
})
@Messages("CTL_TidyAction=Run HTML Tidy")
public final class TidyAction implements ActionListener {

    private final DataObject context;
    private final OutputWriter writer;
    private EditorCookie ec = null;

    public TidyAction(DataObject context) {
        this.context = context;
        ec = context.getLookup().lookup(org.openide.cookies.EditorCookie.class);
        InputOutput io = IOProvider.getDefault().getIO("HTML Tidy", false);
        io.select();
        writer = io.getOut();
    }

    @Override
    public void actionPerformed(ActionEvent ev) {
        Tidy tidy = new Tidy();
        try {
            writer.reset();
            StringWriter stringWriter = new StringWriter();
            PrintWriter errorWriter = new PrintWriter(stringWriter);
            tidy.setErrout(errorWriter);
            tidy.parse(context.getPrimaryFile().getInputStream(), System.out);
            String[] split = stringWriter.toString().split("\n");
            for (final String string : split) {
                final int end = string.indexOf(" c");
                if (string.startsWith("line")) {
                    writer.println(string, new OutputListener() {
                        @Override
                        public void outputLineAction(OutputEvent oe) {
                            LineCookie lc = context.getLookup().lookup(LineCookie.class);
                            int lineNumber = Integer.parseInt(string.substring(0, end).replace("line ", ""));
                            Line line = lc.getLineSet().getOriginal(lineNumber - 1);
                            line.show(ShowOpenType.OPEN, Line.ShowVisibilityType.FOCUS);
                        }
                        @Override
                        public void outputLineSelected(OutputEvent oe) {}
                        @Override
                        public void outputLineCleared(OutputEvent oe) {}
                    });
                }
            }
        } catch (IOException ex) {
            Exceptions.printStackTrace(ex);
        }
    }
}

The string parsing above is ugly but gets the job done for now (a slightly tidier regex-based take is sketched below). A problem integrating this into the pluggable analyzer functionality is the limitation of its scope. The analyzer lets you select one or more projects, or individual files, but not a folder. So it doesn't work on folders in the Favorites window, for example, which is where I'd like to apply HTML Tidy, across multiple folders via the analyzer functionality. That's a bit of a bummer that I'm hoping to get around somehow.
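Since the diagnostics Tidy emits follow the pattern "line L column C - message" (which is what the indexOf(" c") trick above relies on), a regular expression can pull the line number out more robustly. A minimal sketch, not from the original post:

import java.util.regex.Matcher;
import java.util.regex.Pattern;

// Matches Tidy diagnostics such as "line 12 column 5 - Warning: ..."
private static final Pattern DIAGNOSTIC =
        Pattern.compile("^line (\\d+) column (\\d+) - (.*)$");

static int parseLineNumber(String diagnostic) {
    Matcher m = DIAGNOSTIC.matcher(diagnostic);
    // Return the 1-based line number, or -1 for non-diagnostic output.
    return m.matches() ? Integer.parseInt(m.group(1)) : -1;
}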

    Read the article

  • .NET HTML Sanitation for rich HTML Input

    - by Rick Strahl
Recently I was working on updating a legacy application to MVC 4 that included free-form text input. When I set up the new site, my initial approach was to not allow any rich HTML input, only simple text formatting that would respect a few simple HTML commands for bold, lists etc. and automatically handle line break processing for new lines and paragraphs. This is typical for what I do with most multi-line text input in my apps, and it works very well with very little development effort involved. Then the client sprung another note: Oh by the way, we have a bunch of customers (real estate agents) who need to post complete HTML documents. Oh uh! There goes the simple theory. After some discussion and pleading on my part (<snicker>) to try and avoid this type of raw HTML input because of potential XSS issues, the client decided to go ahead and allow raw HTML input anyway. There have been lots of discussions on this subject on StackOverflow (and here and here), but after reading through some of the solutions I didn't really find anything that would work even closely for what I needed. Specifically, we need to be able to allow just about any HTML markup, with the exception of script code. Remote CSS and images need to be loaded, links need to work, and so on. While the 'legit' HTML posted by these agents is basic in nature, it does span most of the full gamut of HTML (4). Most of the XSS prevention/sanitizer solutions I found were way too aggressive and rendered the posted output unusable, mostly because they tend to strip any externally loaded content. In short, I needed a custom solution. I thought the best solution to this would be to use an HTML parser - in this case the Html Agility Pack - and then to run through all the HTML markup provided and remove any of the blacklisted tags and a number of attributes that are prone to JavaScript injection. There's much discussion on whether to use blacklists vs. whitelists in the discussions mentioned above, but I found that whitelists can make sense in simple scenarios where you might allow manual HTML input, but when you need to allow a larger array of HTML functionality, a blacklist is probably easier to manage, as the vast majority of elements and attributes could be allowed. Also, whitelisting gets a bit more complex with HTML5 and the proliferation of new HTML tags, and most new tags generally don't affect XSS issues directly. Pure whitelisting based on elements and attributes also doesn't capture many edge cases (see some of the XSS cheat sheets listed below), so even with a whitelist, custom logic is still required to handle many of those edge cases.

The Microsoft Web Protection Library (AntiXSS)

My first thought was to check out the Microsoft AntiXSS library. Microsoft has an HTML encoding and sanitation library in the Microsoft Web Protection Library (formerly AntiXSS Library) on CodePlex, which provides stricter functions for whitelist encoding and sanitation. Initially I thought the Sanitation class and its static members would do the trick for me, but I found that this library is way too restrictive for my needs. Specifically, the Sanitation class strips out images and links, which rendered the full HTML from our real estate clients completely useless. I didn't spend much time with it, but apparently I'm not alone in feeling this library is not really useful without some way to configure its operation.

To give you an example of what didn't work for me with the library, here's a small and simple HTML fragment that includes script, img and anchor tags. I would expect the script to be stripped and everything else to be left intact. Here's the original HTML:

var value = "<b>Here</b> <script>alert('hello')</script> we go. Visit the " +
            "<a href='http://west-wind.com'>West Wind</a> site. " +
            "<img src='http://west-wind.com/images/new.gif' /> ";

and the code to sanitize it with the AntiXSS Sanitizer class:

@Html.Raw(Microsoft.Security.Application.Sanitizer.GetSafeHtmlFragment(value))

This produced a not-so-useful sanitized string:

Here we go. Visit the <a>West Wind</a> site.

While it removed the <script> tag (good), it also removed the href from the link and the image tag altogether (bad). In some situations this might be useful, but for most tasks I doubt this is the desired behavior. While links can contain javascript: references and images can 'broadcast' information to a server, without configuration to tell the library what to restrict, this becomes useless to me. I couldn't find any way to customize the whitelist, nor is there code available in this 'open source' library on CodePlex.

Using Html Agility Pack for HTML Parsing

The WPL library wasn't going to cut it. After doing a bit of research I decided the best approach for a custom solution would be to use an HTML parser and inspect the HTML fragment/document I'm trying to import. I've used the Html Agility Pack before for a number of apps where I needed an HTML parser without requiring an instance of a full browser like the Internet Explorer Application object, which is inadequate in Web apps. In case you haven't checked out the Html Agility Pack before, it's a powerful HTML parser library that you can use from your .NET code. It provides a simple, parsable HTML DOM model over full HTML documents or HTML fragments that lets you walk through each of the elements in your document. If you've used the HTML or XML DOM in a browser before, you'll feel right at home with the Agility Pack.

Blacklist-based HTML Parsing to strip XSS Code

For my purposes of HTML sanitation, the process involved is to walk the HTML document one element at a time and then check each element and attribute against a blacklist. There's quite a bit of argument over what's better: a whitelist of allowed items or a blacklist of denied items. While whitelists tend to be more secure, they also require a lot more configuration. In the case of HTML5, a whitelist could be very extensive. For what I need, I only want to ensure that no JavaScript is executed, so a blacklist includes the obvious <script> tag plus any tag that allows loading of external content, including <iframe>, <object>, <embed> and <link> etc. <form> is also excluded to avoid posting content to a different location. I also disallow <head> and <meta> tags in particular for my case, since I'm only allowing posting of HTML fragments. There is also some internal logic to exclude some attributes or attributes that include references to JavaScript or CSS expressions. The default tag blacklist reflects my use case, but is customizable and can be added to.
Here's my HtmlSanitizer implementation:

using System.Collections.Generic;
using System.IO;
using System.Xml;
using HtmlAgilityPack;

namespace Westwind.Web.Utilities
{
    public class HtmlSanitizer
    {
        public HashSet<string> BlackList = new HashSet<string>()
        {
            { "script" },
            { "iframe" },
            { "form" },
            { "object" },
            { "embed" },
            { "link" },
            { "head" },
            { "meta" }
        };

        /// <summary>
        /// Cleans up an HTML string and removes HTML tags in blacklist
        /// </summary>
        public static string SanitizeHtml(string html, params string[] blackList)
        {
            var sanitizer = new HtmlSanitizer();
            if (blackList != null && blackList.Length > 0)
            {
                sanitizer.BlackList.Clear();
                foreach (string item in blackList)
                    sanitizer.BlackList.Add(item);
            }
            return sanitizer.Sanitize(html);
        }

        /// <summary>
        /// Cleans up an HTML string by removing elements
        /// on the blacklist and all elements that start
        /// with onXXX.
        /// </summary>
        public string Sanitize(string html)
        {
            var doc = new HtmlDocument();
            doc.LoadHtml(html);
            SanitizeHtmlNode(doc.DocumentNode);
            //return doc.DocumentNode.WriteTo();

            string output = null;

            // Use an XmlTextWriter to create self-closing tags
            using (StringWriter sw = new StringWriter())
            {
                XmlWriter writer = new XmlTextWriter(sw);
                doc.DocumentNode.WriteTo(writer);
                output = sw.ToString();

                // strip off XML doc header
                if (!string.IsNullOrEmpty(output))
                {
                    int at = output.IndexOf("?>");
                    output = output.Substring(at + 2);
                }
                writer.Close();
            }
            doc = null;

            return output;
        }

        private void SanitizeHtmlNode(HtmlNode node)
        {
            if (node.NodeType == HtmlNodeType.Element)
            {
                // check for blacklist items and remove
                if (BlackList.Contains(node.Name))
                {
                    node.Remove();
                    return;
                }

                // remove CSS Expressions and embedded script links
                if (node.Name == "style")
                {
                    if (string.IsNullOrEmpty(node.InnerText))
                    {
                        if (node.InnerHtml.Contains("expression") || node.InnerHtml.Contains("javascript:"))
                            node.ParentNode.RemoveChild(node);
                    }
                }

                // remove script attributes
                if (node.HasAttributes)
                {
                    for (int i = node.Attributes.Count - 1; i >= 0; i--)
                    {
                        HtmlAttribute currentAttribute = node.Attributes[i];

                        var attr = currentAttribute.Name.ToLower();
                        var val = currentAttribute.Value.ToLower();

                        // remove event handlers
                        if (attr.StartsWith("on"))
                            node.Attributes.Remove(currentAttribute);

                        // remove script links
                        else if (
                            //(attr == "href" || attr == "src" || attr == "dynsrc" || attr == "lowsrc") &&
                            val != null &&
                            val.Contains("javascript:"))
                            node.Attributes.Remove(currentAttribute);

                        // Remove CSS Expressions
                        else if (attr == "style" &&
                                 val != null &&
                                 (val.Contains("expression") || val.Contains("javascript:") || val.Contains("vbscript:")))
                            node.Attributes.Remove(currentAttribute);
                    }
                }
            }

            // Look through child nodes recursively
            if (node.HasChildNodes)
            {
                for (int i = node.ChildNodes.Count - 1; i >= 0; i--)
                {
                    SanitizeHtmlNode(node.ChildNodes[i]);
                }
            }
        }
    }
}

Please note: Use this as a starting point only for your own parsing and review the code for your specific use case! If your needs are less lenient than mine were, you can make this much stricter by not allowing src and href attributes or CSS links if your HTML doesn't allow it. You can also check links for external URLs and disallow those - lots of options. The code is simple enough to make it easy to extend to fit your use cases more specifically. It's also quite easy to make this code work using a WhiteList approach if you want to go that route.
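For completeness, calling the sanitizer via the static helper above is a one-liner; a minimal usage sketch, with a made-up input string:

// Hypothetical input: the default blacklist strips the script tag,
// and the attribute scan removes the onclick handler.
string html = "<div onclick=\"alert('xss')\"><script>alert('xss');</script>" +
              "<b>Hello</b> <a href='http://west-wind.com'>West Wind</a></div>";

string safe = Westwind.Web.Utilities.HtmlSanitizer.SanitizeHtml(html);

// safe should now be roughly:
// <div><b>Hello</b> <a href="http://west-wind.com">West Wind</a></div>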
The code above is semi-generic for allowing full-featured HTML fragments that only disallow script-related content. The Sanitize method walks through each node of the document and then recursively drills into all of its children until the entire document has been traversed. Note that the code here uses an XmlTextWriter to write output; this is done to preserve XHTML-style self-closing tags, which are otherwise left as non-self-closing tags. The sanitizer code scans for blacklist elements and removes those elements not allowed. Note that the blacklist is configurable, either in the instance class as a property or in the static method via the string parameter list. Additionally, the code goes through each element's attributes and looks for a host of rules gleaned from some of the XSS cheat sheets listed at the end of the post. Clearly there are a lot more XSS vulnerabilities, but a lot of them apply to ancient browsers (IE6 and versions of Netscape); many of these glaring holes (like CSS expressions - WTF, IE?) have been removed in modern browsers.

What a Pain

To be honest, this is NOT a piece of code that I wanted to write. I think building anything related to XSS is better left to people who have far more knowledge of the topic than I do. Unfortunately, I was unable to find a tool that worked even closely for me, or even provided a working base. For the project I was working on I had no choice, and I'm sharing the code here merely as a baseline to start with and potentially expand on for specific needs. It's sad that the Microsoft Web Protection Library is currently such a train wreck; this is really something that should come from Microsoft as the systems vendor, or possibly a third party that provides security tools. Luckily for my application, we are dealing with authenticated and validated users, so the user base is fairly well known and relatively small; this is not a wide-open Internet application that's directly public facing. As I mentioned earlier in the post, if I had my way I would simply not allow this type of raw HTML input in the first place, and instead rely on a more controlled HTML input mechanism like Markdown or even a good HTML edit control that can provide some limits on what types of input are allowed. Alas, in this case I was overridden and we had to go forward and allow *any* raw HTML posted. Sometimes I really feel sad that it's come this far; how many good applications and tools have been thwarted by fear of XSS (or worse) attacks? So many things that could be done *if* we had a more secure browser experience and didn't have to deal with every little script twerp trying to hack into Web pages and obscure browser bugs.
So much time wasted building secure apps, so much time wasted by others trying to hack apps… We're a funny species: no other species manages to waste as much time, effort and resources as we humans do :-)

Resources

Code on GitHub
Html Agility Pack
XSS Cheat Sheet
XSS Prevention Cheat Sheet
Microsoft Web Protection Library (AntiXss)
StackOverflow Links:
http://stackoverflow.com/questions/341872/html-sanitizer-for-net
http://blog.stackoverflow.com/2008/06/safe-html-and-xss/
http://code.google.com/p/subsonicforums/source/browse/trunk/SubSonic.Forums.Data/HtmlScrubber.cs?r=61

© Rick Strahl, West Wind Technologies, 2005-2012. Posted in Security, HTML, ASP.NET, JavaScript.

    Read the article

  • UPDATE FOR BI PUBLISHER ENTERPRISE 10.1.3.4.1 MARCH 2010

    - by Tim Dexter
    Latest roll up patch for 10.1.3.4.1 is now out in the wild. Yep, there are bug fixes but the guys have implemented some great enhancements. I'll be covering some of them over the coming weeks, from collapsing bookmarks in your PDFs to better MS AD support to 'true' Excel templates, yes you read that correctly! Patch is available from Oracle's support site. Just search for patch 9546699. Here's the contents and readme, apologies for the big list but at least you can search against it for a particular fix. This patch contains backports of following bugs for BI Publisher Enterprise 10.1.3.4.0 and 10.1.3.4.1. 6193342 - REG:SAMPLE DATA FILE FOR PDF FORM MAPPING IS NOT VALIDATED 6261875 - ERRONEOUS PRECISION VALIDATION ON ONLINE ANALYZER 6439437 - NULL POINTER EXCEPTION WHEN PROCESSING TABLE OF CONTENT 6460974 - BACS EFT PAYMENT INSTRUCTION OUTPUT FILE IS EMPTY 6939721 - BIP: REPORT BUSTING DELIVERY KEY VALUES CANNOT CONTAIN SEVERAL SPECIAL CHARACTER 6996069 - USING XML DB FOR BI REPOSITORY FAILS WITH RESOURCENOTFOUNDEXCEPTION 7207434 - TIMEZONE:SHOULD NOT DO TIMEZONE CONVERSION AGAINST CANONICAL DATE YYYY-MM-DD 7371531 - SUPPORT FOR CSV OUTPUT FOR STRUCTURED XML AND NON SQL DATA SOURCES 7596148 - ER: LDAP FOR MS AD TO SEARCH FROM AD ROOT 7646139 - WEBSERVICES ERROR 7829516 - BIP STANDALONE FAILS TO BURST USING XSL-FO TEMPLATES 8219848 - PDF TEMPLATE REPORT NOT PERFORMING PAGE BREAK 8232116 - PARAMETER VALUE IS PASSED AS NULL,IF IT CONTAINS 'AND' WITHIN THE STRING 8250690 - NOT ABLE TO UPLOAD TEMPLATE VIA BIP API 8288459 - ER: QUERY BUILDER OPTION TO NOT INCLUDE TABLENAME. PREFIX IN SQL 8289600 - REPORT TITLE AND DESCRIPTION CAN'T SUPPORT MULTIPLE LANGUAGES 8327080 - CAN NOT CONFIGURE ORACLE EBUSINESS SUITE SECURITY MODEL WITH ORACLE RAC 8332164 - AN XDO PROPERTY TO ENABLE DEBUG LOGGING 8333289 - WEB SERVICE JOBS FAIL AFTER BIP STARTED UP 8340239 - HTTP NOTIFY IS MISSING IN SCHEDULEREPORTREQUEST 8360933 - UNABLE TO USE LOGGED IN BI USER AS THE WSSECURITY USERNAME IN A VARIABLE FORMAT 8400744 - ADMINISTRATOR USER DOES NOT HAVE FULL ADMINISTRATOR RIGHTS 8402436 - CRASH CAUSED BY UNDETERMINED ATTRKEY ERROR IN MULTI-THREADED 8403779 - IMPOSSIBLE TO CONFIGURE PARAMETER FOR A REPORT 8412259 - PDF, RTF OUTPUT NOT HANDLING THE TABLE BORDER AND CONTENT OVERFLOWS TO NEXT PAGE 8483919 - DYNAMIC DATASOURCE WEBSERVICE SHOULD WORK WITH SERVERSIDE CONNECTIONS 8444382 - ID ATTRIBUTE IN TITLE-PAGE DOES NOT WORK WITH SELECTACTION PROPERTY 8446681 - UI LANGUAGE IS NOT REFLECTED AT THE FIRST LOG IN 8449884 - PUBLICREPORTSERVICE FAILS ON EMAIL DELIVERY USING BIP 10.1.3.4.0D+ - NPE 8454858 - DB: XMLP_ADMIN CAN SEE ALL THE FOLDERS BUT ONLY HAS VIEW PERMISSIONS 8458818 - PDFBOOKBINDER FAILS WITH OUTOFMEMORY ERROR WHEN TRYING TO BIND > 1500 PDFS 8463992 - INCORRECT IMPLEMENTATION OF XLIFF SPECIFICATION 8468777 - BI PUBLISHER QUERY BUILDER NOT LOADING SCHEMA OBJECTS 8477310 - QUERY BUILDER NOT WORK WITH SSL ON STANDALONE OC4J 8506701 - POSITIVE PAY FILE WITH OPTIONS NOT CREATING FILE CHECKS OVER 2500 8506761 - PERFORMANCE: PDFBOOKBINDER CLASS TAKES 4 HOURS TO BIND 4000 PAGES 8535604 - NPE WHEN CLICKING "ANALYZER FOR EXCEL" BUTTON IN ALL_* REPORTS 8536246 - REMOVE-PDF-FIELDS DOES NOT WORK WITH CHECKBOXES WITH OPT ARRAY 8541792 - NULLPOINTER EXCEPTION WHILE USING SFTP PROTOCOL 8554443 - LOGGING TIME STAMP IN 10G: THE HOUR PART IS WRONG 8558007 - UNABLE TO LOGIN BIP WITH UNPRIVILEGED USER WHEN XDB IS USED FOR REPORSITORY STOR 8565758 - NEED TO CONNECT IMPERSONATION TO DATA SOURCE WITH PL/SQL FUNCTION 8567235 - 
EFTPROCESSOR AND XDO DEBUG ENABLED CAUSES ORG.XML.SAX.SAXPARSEEXCEPTION 8572216 - EFTPROCESSOR NOT THREAD SAFE - CAUSING CORRUPTED REPORTS TO BE GENERATED 8575776 - LANDSCAPE REPORT ORIENTAION NOT SELECTED WHEN REPORT IS PRINTED WITH PS 8588330 - XLIFF GENERATING WITH WRONG MAXWIDTH ATTRIBUTE IN SOME TRANS-UNITS 8584446 - EFTGENERATOR DOES NOT USE XSLT SCALABILITY - JAVA.LANG.OUTOFMEMORY EXCEPTION 8594954 - ENG: BIP NOTIFY MESSAGE BECOMES ENGLISH 8599646 - ER:EXTRA SPACE ADDED BELOW IMAGE IN A TABLE CELL OF TEMPLATE IN FIREFOX 8605110 - PDFSIGNATURE API ENCOUNTERS JAVA.LANG.NULLPOINTEREXCEPTION ON PDF WITH WATERMARK 8660915 - BURSTING WITH DATA TEMPLATE NOT WORKING WITH OPTION: VALUE=FALSE 8660920 - ER: EXTRACT XHTML DATA USING XDODTEXE IN XHTML FORMAT 8667150 - PROBLEM WITH 3RD APPLICATION ABOUT PDF GENERATED WITH BI PUBLISHER 8683547 - "CLICK VIEW REPORT BUTTON TO GENERATE THE REPORT" MESSAGE IS DISPLAYED 8713080 - SEARCH" PARAMETER IS NOT SHOWING NON ENGLISH DATA IN INTERNET EXPLORER 8724778 - EXCEL ANALYZER PARAMETERS DO NOT WORK WITH EXCEL 2007 8725450 - UIX 2.3.6.6 UPTAKE FOR 10.1.3.4.1 8728807 - DYNAMIC JDBC DATA SOURCE WITH PRE-PROCESS FUNCTION BASED ON EXISTING DATA SOURCE 8759558 - XDO TEMPLATE SHOWS CURRENCY IN WRONG FORMAT FOR DUNNING 8792894 - EFTPROCESSOR DOES NOT SUPPORT XSL TEMPLATE AS INPUTSTREAM 8793550 - BIP GENERATES CSV REPORTS OUTPUT FORMAT WITH EXTENTION .OUT NOT .CSV IN EMAIL 8819869 - PERIOD CLOSE VALUE SUMMARY REPORT (XML) RUNNING INTO WARNING 8825732 - MY FOLDERS LINK BROKEN WITH USER NAME THAT INCLUDES A SLASH (/) OBIEE SECURITY 8831948 - TRYING TO GENERATE A SCATTER PLOT USING THE CHART WIZARD 8842299 - SEEDED QUERY ALWAYS RETURNS RESULTS BASED ON FIRST COLUMN 8858027 - NODE.GETTEXTCONTEXT() NOT AVAILABLE IN 10G UNDER OC4J 8859957 - REPORT TITLE ALIGNMENT GOES BAD FOR REPORTS WITH XLIFF FILE ATTACHED 8860957 - ER: IMPROVE PERFORMANCE OF ANSWERS PARAMETERS 8891537 - GETREPORTPARAMETERS WEB SERVICE API ISSUES WITH OAAM REPORTS 8891558 - GETTING SQLEXCEPTION IN GENERATEREPORT WEB SERVICE API ON OAAM REPORTS 8927796 - ER: DYANAMIC DATA SOURCE SUPPORT BY DATA SOURCE NAME 8969898 - BI PUBLISHER WEB SERVICE GETREPORTPARAMETERS DOES NOT TRANSLATE PARAMETER LABEL 8998967 - MULTIPLE XSL PREDICATES ELEMENT[A='A'] [B='B'] CAUSES XML-22019 ERROR 9012511 - SCALABLE MODE IS NOT WORKING IN XMLPUBLISHER 10.1.3.4 9016976 - ER: PRINT XSL-T AND FOPROCESSING TIMING INFORMATION 9018580 - WEB SERVICE CALL FAILS WHEN REPORT INCLUDES SEARCH TYPE 9018657 - JOB FAILS WHEN LOV QUERY CONTAINS BIND VARIABLES :XDO_USER_UI_LOCALE 9021224 - PERFORMANCE ISSUE TO VIEW DASHBOARD PAGE WITH BIP REPORT LINKS 9022440 - ER: SUPPORT "COMB OF N CHARACTERS" FEATURE PDF FORM TEXT FIELDS 9026236 - XPATH DOES NOT WORK CORRECTLY IN 10.1.3.4.1 9051652 - FILE EXTENSION OF CSV OUTPUT IS TXT WHEN IT IS EXPORTED FROM REPORT VIEWER 9053770 - WHEN SENDING CSV REPORT OUTPUT BY EMAIL SOMETIMES IT IS SENT WITHOUT EXTENSION 9066483 - PDFBOOKBINDER LEAVE SOME TEMPORARY FILES AFTER MERGING TITLE PAGE OR TOC 9102420 - USE RELATIVE PATHS IN HYPERLINKS 9127185 - CHECKBOX NOT WORK ON SUB TEMPLATE 9149679 - BASE URL IS NOT PASSED CORRECTLY 9149691 - PROVIDE A WAY TO DISABLE THE ABILITY TO CREATE SCHEDULED REPORT JOB "PUBLIC" 9167822 - NOTIFICATION URL BREAKS ON FOLDER NAMES WITH SPACES 9167913 - CHARTS ARE MISSING IN PDF OUTPUTS WHEN THE DEFAULT OUTPUT FORMAT IS NOT A PDF 9217965 - REPORT HISTORY TAKES LONG TIME TO RENDER THE PAGE 9236674 - BI PUBLISHER PARAMETERS DO NOT CASCADE REFRESH AFTER SECOND PARAMETER 9283933 - OPTION 
TO COLLAPSE PDF OUTPUT BOOKMARKS BY DEFAULT 9287245 - SAVE COMPLETED SCHEDULED REPORTS IN ITS REPORT NAME AND NOT IN A GENERIC NAME 9348862 - ADD FEATURE TO DISABLE THE XSLT1.0-COMPATIBILITY IN RTF TEMPLATE 9355897 - ER: NEED A SAFE DIVIDE FUNCTION 9364169 - UIX 2.3.6.6 PATCH UPTAKE FOR 10.1.3.4.1 9365153 - LEADING WHITESPACE CHARACTERS IN A FIELD TRIMMED WHEN RUN VIEW OR EXPORT TO .CSV 9389039 - LONG TEXT IS NOT WRAPPED PROPERLY IN THE AUTOSHAPE ON RTF TEMPLATE 9475697 - ENH: SUB-TEMPLATE:DYNAMIC VARIABLE WITH PARAMETER VALUE IN CALL-TEMPLATE CLAUSE 9484549 - CHANGE DEFAULT FOR "XSLT1.0-COMPATIBILITY" TO FALSE FOR 10G 9508499 - UNABLE TO READ EXCEL FILE IF MORE THAN 1800 ROWS GENERATED 9546078 - EMAIL DELIVERY INFORMATION SHOULD NOT BE SAVED AND AUTO-FED IN JOB SUBMISSION 9546101 - EXCEPTION OCCURS WHEN SFTP/FTP REMOTE FILENAME DOSE NOT CONTAIN A SLASH '/' 9546117 - SFTP REPORT DELIVERY FAILS WITH NO CLASS DEF FOUND EXCEPTION ON WEBLOGIC 9.2 Following bugs are included in 10.1.3.4.1 and they are only applied to 10.1.3.4.0. 4612604 - FROM EDGE ATTRIBUTE OF HEADER AND FOOTER IS NOT PRESERVED 6621006 - PARAMNAMEVALUE ELEMENT DEFINITION SHOULD HAVE PARAMETER TYPE 6811967 - DATE PARAMETER NOT HANDLING DATE OFFSET WHEN PASSED UPPERCASE Z FOR OFFSET 6864451 - WHEN BIP REPORTS TIMEOUT, THE PROCESS TO LOG BACK IN IS NOT USER FRIENDLY 6869887 - FUSION CURRENCY BRD:4.1.4/4.1.6 OVERRIDINDG MASK /W XSLT._XDOCURMASKS /W SYMBOL 6959078 - "TEXT FIELD CONTAINS COMMA-SEPARATED VALUES" DOESN'T WORK IN CASE OF STRING 6994647 - GETTING ERROR MESSAGE SAYING JOB FAILED EVEN THOUGH WORKS OK IN BI PUBLISHER 7133143 - ENABLE USER TO ENTER 'TODAY' AS VALUE TO DATE PARAMETER IN SCHEDULE REPORT UI 7165117 - QA_BIP_FUNC:-CLOSED LIFE TIME REPORT ERROR MESSAGE IN CMD 7167068 - LEADER-LENGTH OR RULE-THICKNESS PROPRTY IS TOO LARGE 7219517 - NEED EXTENSION FUNCTIONS TO URL ENCODE TEXT STRING. 7269228 - TEMPLATEHELPER PRODUCES A GARBLED OUTPUT WHEN INVOKED BY MULTIPLE THREADS 7276813 - GETREPORTPARAMETERSRETURN ELEMENT SHOULD HAVE DEFAULT VALUE 7279046 - SCHDEULER:UNABLE TO DELETE A JOB USING API 7280336 - ER: BI PUBLISHER - SITEMINDER SUPPORT - GENERIC NON-ORACLE SSO SUPPORT 7281468 - MODIFY SQL SERVER PROPERTIES TO USE HYP DATA DIRECT IN JDBCDEFAULTS.XML 7281495 - PLEASE ADD SUPPORTED DBS TO JDBCDEFAULT.XML AND LIST EACH DB VERSION SEPARATELY 7282456 - FUSION CURRENCY BRD 4.1.9.2: CURRENCY AMOUTS SHOULD NOT BE WRAPPED. 
MINUS SIGN 7282507 - FUSION CURRENCY BRD4.1.2.5:DISPLAY CURRENCY AND LOCALE DERIVED CURRENCY SYMBOL 7284780 - FUSION CURRENCY BRD 4.1.12.4 CORRECTLY ALIGN NEGATIVE CURRENCY AMOUNTS 7306874 - OPP ERROR - JAVA.LANG.OUTOFMEMORYERROR: ZIP002:OUTOFMEMORYERROR, MEM_ERROR 7309596 - SIEBELCRM: BIP ENHANCEMENT REQUEST FOR SIEBEL PARAMETERIZATION 7337173 - UI LOCALE IS ALWAYS REWRITTEN TO EN WHEN MOVE FROM DASHBOARD 7338349 - REG:ANALYZER REPORT WITH AVERAGE FUNCTION FAIL TO RUN FOR NON INTERACTIVE FORMAT 7343757 - OUTPUT FORMAT OF TEMPLATES IS NOT SAVING 7345989 - SET XDK REPLACEILLEGALCHARS AND ENHANCE XSLTWRAPPER WARNING 7354775 - UNEXPECTED BEHAVIOR OF LAYOUT TEMPLATE PARAMETER OF RUNREPORT WEBSERVICES API 7354798 - SEQUENCE ORDER OF PARAMETERS FOR THE RUNREPORT WEBSERVICES API 7358973 - PARALLEL SFTP DELIVERY FAILS DUE TO SSHEXCEPTION: CORRUPT MAC ON INPUT 7370110 - REGN:FAIL WHEN USE JNDI TO XMLDB REPORT REPOSITORY 7375859 - NEW WEBSERVICE REQUIRED FOR RUNREPORT 7375892 - REQUIRE NEW WEBSERVICE TO CHECK IF REPORTFOLDER EXISTS 7377686 - TEXT-ALIGN NOT APPLIED IN PDF IN HEBREW LOCALE 7413722 - RUNREPORT API DOES NOT PASS BACK ANY GENERATED EXCEPTIONS TO SCHEDULEREPORT 7435420 - FUSION CURRENCY: SUPPORT MICROSOFT(JAVA) FORMAT MASK WITH CURRENCY 7441486 - ER: ADD PARAMETER FOR SFTP TO BURSTING QUERY 7458169 - SSO WITH OID LDAP COULD NOT FETCH OID ROLES 7461161 - EMAIL DELIVERY FAILS - DELIVERYEXCEPTION: 0 BYTE AVAILABLE IN THE GIVEN INPU 7580715 - INCORRECT FORMATTING OF DATES IN TIMEZONE GMT+13 7582694 - INVALID MAXWIDTH VALUE CAUSES NLS FAILURES 7583693 - JAVA.LANG.NULLPOINTEREXCEPTION RAISED WHEN GENERATING HRMS BENEFITS PDF REPORT 7587998 - NEWLY CREATED USERS IN OID CANT ACCESS REPORTS UNTILL BI PUBLISHER IS RESTARTED 7588317 - TABLE OF CONTENT ALWAYS IN THE SAME FONT 7590084 - REMOVING THE BIP ENTERPRISE BANNER BUT KEEPING THE REPORTS & SCHEDULES TAB 7590112 - SOMEONE NOT PRIVILEGED ACCESS BIP DIRECTLY SHOULD GET A CUSTOM PAGE 7590125 - AUTOMATING CREATION OF USERS AND ROLES 7597902 - TIMEZONE SUPPORT IN RUNREPORT WEBSERVICE API 7599031 - XML PUBLISHER SUM(CURRENT-GROUP()) FAILS 7609178 - ISSUE WITH TAGS EXTRACTED FROM RTF TEMPLATE 7613024 - HEADER/FOOTER SETTINGS OF RTF TEMPLATE ARE NOT RETAINING IN THE RTF OUTPUT 7623988 - ADD XSLT FUNCTION TO PRINT XDO PROPERTIES 7625975 - RETRIEVING PARAMETER LOV FROM RTF TEMPLATE 7629445 - SPELL OUT A NUMBER INTO WORDS 7641827 - ANALYTICS FROZED AFTER PAGE TAB WHICH INCLUDES [BI PUBLISHER REPORT] WERE CLICKE 7645504 - BIP REPORT FROZED AFTER THE SAME DASHBOARD BIP REPORTS WERE CLICKED SIMULTANEOUS 7649561 - RECEIVE 'TO MANY OPEN FILE HANDLES' ERROR CAUSING BI TO CRASH 7654155 - BIP REMOVES THE FIRST FILE SEPARATOR WHEN RE-ENTER REPOSITORY LOCATION IN ADMIN 7656834 - NEED AN OPTION TO NOT APPEND SCHEMA NAME IN GENERATED QUERY 7660292 - ER: XDOPARSER UPGRADE TO XDK 11G 7687862 - BIP DATA EXTRACTING ENHANCEMENT FOR SIEBEL BIP INTEGRATION 7694875 - ADMINISTRATOR IS SUPER USER WHETHER CONFIGURED MANDATORY_USER_ROLE OR NOT 7697592 - BI PUBLISHER STRINGINDEXOUTOFBOUNDSEXCEPTION WHEN PRINTING LABEL FROM SIM 7702372 - ARABIC/ENGLISH NUMBER/DATE PROBLEM, TOTAL PAGE NUMBER NOT RENDERED IN ENGLISH 7707987 - OUTOFMEMORY BURSTING A BI PUBLISHER REPORT BI SERVER DATA SOURCE 7712026 - ER: CHANGE CHART OUTPUT FORMAT TO PNG IN HTML OUTPUT 7833732 - THE 'SEARCH' PARAMETER TYPE CANNOT BE USED IN IE6 UNDER WINDOWS 8214839 - ER: INCREASE COLUMN SIZE IN SCHEDULER TABLE XMLP_SCHED_JOB 8218271 - ISSUES WHILE CONVERTING EXCEL TO XML 8218452 - BI PUBLISHER STANDALONE : GRAPHICS 
WITHOUT COLORS IF MORE THAN 33 PAGES 8250980 - USER WITH XMLP_ADMIN RESPONSIBILITY IS NOT ABLE TO EDIT REPORT IN BIP 8262410 - IMPOSSIBLE TO PRINT PDF CREATED BY BI PUBLISHER VIA 3RD PARTY PDF APPLICATION 8274369 - QA: CANNOT DELETE EMAIL SERVER UNDER DELIVERY CONFIGURATION 8284173 - FO:VISIBILITY="HIDDEN" DOESN'T WORK WITH FO:PAGE-NUMBER-CITATION 8288421 - THE VALUE OF VIEW BY GO BACK TO MY HISTORY IN SCHEDULES TAB 8299212 - REG: THE SPECIFICAL BI USER DIDN'T GET THE CORRECT REPORT HISTORY 8301767 - ORA-01795 ERROR OCCURED AFTER ACCESSING DASHBOARD PAGE WHICH INCLUDES BIP 8304944 - ADD SIEBEL SECURITY MODEL IN BI PUBLISHER 10.1.3.4.1 8312814 - QA:HOT:OBI SERVER JDBC DRIVER BIJDBC14.JAR IN XMLPSERVER.WAR IS INCORRECT 8323679 - BI PUBLISHER SENDS HTML REPORT TO OUTLOOK CLIENT AS ATTACHMENT NOT INLINE 8370794 - HISTORY OF COMPLETED SCHEDULER JOBS STILL SHOW ONE AS RUNNING ON CLUSTER ENV 8390970 - OUT OF MEMORY EXCEPTION RAISED, WHILE SAVING THE DATA 8393681 - CHECKBOX IS SHOWING UP AS CHECKED WHEN DATA IS NOT CHECKED VALUE 8725450 - UIX 2.3.6.6 UPTAKE FOR 10.1.3.4.1

UIX fixes:
6866363 - SUPPORT FOR JAVA DATE FORMAT AS PER JDK 1.4 AND ABOVE
6829124 - DATE PARAMETER NOT HANDLING DATE OFFSET AS PER JAVA STANDARDS

INSTALLATION FOR ENTERPRISE

Upgrade from 10.1.3.4.0d (patch 8284524, 8398280) and 10.1.3.4.1 does not require steps 8 and 9.

1 - Make a backup copy of the xmlp-server-config.xml file located in the <application installation>/WEB-INF/ directory, where your application server unpacked the BI Publisher war or ear file. Example: in an Oracle AS/OC4J 10.1.3 deployment, the location is <ORACLE_HOME>/j2ee/home/applications/xmlpserver/xmlpserver/WEB-INF/xmlp-server-config.xml
2 - Back up all the directories under the BI Publisher repository (for example: {Oracle_Home}/xmlp/XMLP).
3 - If you are using Scheduling, back up your existing BI Publisher Scheduler schema.
4 - Shut down BI Publisher.
5 - Undeploy the BI Publisher application ("xmlpserver") from your J2EE application server. See your application server documentation for instructions on how to undeploy an application.
6 - Deploy the 10.1.3.4 xmlpserver.ear or xmlpserver.war to your application server. See the "Manually Installing BI Publisher to Your J2EE Application Server" section of the BI Publisher Installation Guide for guidelines for your application server type.
7 - Copy the saved backup copy of the xmlp-server-config.xml file from step 1 to the newly created BI Publisher <application installation>/WEB-INF/ directory, where your application server unpacked the BI Publisher war or ear file. Example: in an Oracle AS/OC4J 10.1.3 deployment, the location is <ORACLE_HOME>/j2ee/home/applications/xmlpserver/xmlpserver/WEB-INF/xmlp-server-config.xml
8 - Copy ssodefaults.xml to <Existing Repository>\XMLP\Admin\Security, and replace [host]:[port] with your server's information. Default values for other properties can be updated depending on your configuration.
9 - Copy database-config.xml to <Existing Repository>\XMLP\Admin\Scheduler.
10 - Restart the xmlpserver application or the application server.

IBM WEBSPHERE 6.1 DEPLOYMENT NOTE

When users fail to log on to BI Publisher with "HTTP 500 Internal Server Error" on WebSphere 6.1, you must change the Class Loader configuration to avoid the error (bug 7506253 - XMLPSERVER WON'T START AFTER DEPLOYMENT TO WEBSPHERE 6.1).

SystemErr.log:
java.lang.VerifyError: class loading constraint violated (class: oracle/xml/parser/v2/XMLNode method: xdkSetQxName(Loracle/xml/util/QxName;)V) at pc: 0 ....

Class Loader Configuration Steps:
1 - Log in to the WebSphere Admin console. Click Enterprise Applications under the Applications menu.
2 - Click the xmlpserver application name in the list.
3 - Select "Class loading and update detection".
4 - Update the class loader configuration as follows in Class Loader -> General Properties:
    * Polling interval for updated files: [0] Seconds
    * Class loader order: [x] Classes loaded with application class loader first
    * WAR class loader policy: [x] Single class loader for application
5 - Apply this change and save the new configuration.
6 - Restart the xmlpserver application.

Please refer to the WebSphere 6.1 documentation for more details:
http://publib.boulder.ibm.com/infocenter/wasinfo/v6r1/index.jsp?topic=/com.ibm.websphere.base.doc/info/aes/ae/trun_classload_entapp.html

Oracle WebLogic Server 11g R1 (10.3.1) Deployment NOTE

If you are deploying BI Publisher to WebLogic Server 10.3.1, you must add the following setting at startup for the domain that contains the BI Publisher server, in the /weblogic_home/user_projects/domains/base_domain/bin/startWebLogic.sh script:

-Dtoplink.xml.platform=oracle.toplink.platform.xml.jaxp.JAXPPlatform

This setting is required to enable BI Publisher to find the TopLink JAR files to create the Scheduler tables.
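In a stock WebLogic domain script, a flag like that is typically appended via the JAVA_OPTIONS variable. A minimal sketch; the exact structure of your startWebLogic.sh may differ, so treat the placement as an assumption to verify against your own domain:

# In /weblogic_home/user_projects/domains/base_domain/bin/startWebLogic.sh,
# before the line that launches the server, append the TopLink setting:
JAVA_OPTIONS="${JAVA_OPTIONS} -Dtoplink.xml.platform=oracle.toplink.platform.xml.jaxp.JAXPPlatform"
export JAVA_OPTIONS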

    Read the article

  • Quick guide to Oracle IRM 11g: Server configuration

    - by Simon Thorpe
Quick guide to Oracle IRM 11g index

Welcome to the second article in this quick guide to Oracle IRM 11g. Hopefully you've just finished the first article, which takes you through deploying the software onto a Linux server. This article walks you through the configuration of this new service; it contains a subset of information from the official documentation and is focused on installing the server on Oracle Enterprise Linux. If you are planning to deploy on a non-Linux platform, you will need to reference the documentation for platform-specific information.

Contents
- Introduction
- Create IRM WebLogic Domain
- Starting the Admin Server and initial configuration

Introduction
In the previous article the database was prepared, the WebLogic Application Server installed, and the files required for an IRM server installed. But we don't actually have a configured system yet. We now need to create a WebLogic Domain in which the IRM server will run, then configure some of the settings and cryptography so that we can create a context and be ready to seal some content and test that it all works. This article doesn't cover the configuration of SSL communication from client to server; this is quite a big topic and a separate article has been dedicated to it. In these articles I also use the hostname irm.company.internal to reference the IRM server, and later on use the hostname irm.company.com in reference to the public-facing service.

Create IRM WebLogic Domain
The first step is creating the WebLogic domain. In a console, switch to the newly created IRM installation folder as shown below and run the domain configuration wizard.

[oracle@irm /]$ cd /oracle/middleware/Oracle_IRM/common/bin
[oracle@irm bin]$ ./config.sh

The first thing the wizard asks is whether you wish to create a new domain or extend an existing one. This guide is creating a standalone system, so you should select to create a new domain. The next step is to choose which technologies from the Oracle ECM Suite you wish this domain to host. You are only interested in selecting the option "Oracle Information Rights Management". When you select this check box you will notice that it also selects "Oracle Enterprise Manager" and "Oracle JRF", as these are dependencies of the IRM server. You then need to specify where you wish to place the domain files. I usually just change the domain name from base_domain to irm_domain and leave the others with their defaults. The domain will initially have a single user, and by default this user is called "weblogic". I usually change this account name to "sysadmin" or "administrator", but in this guide let's just accept the default. With respect to the next dialog, again for eval or dev purposes, leave the server startup mode as development. The JDK should also be automatically detected. We now need to provide details of the database. This guide is using the Oracle 11gR2 database, and the settings I used can be seen in the image to the right. There is a lot of configuration that can now be done for the admin server, any managed servers, and where the deployments reside. In this guide I am leaving all of these at their defaults, so do not check any of the boxes. However, later on this blog I will detail how you can go back and set up things such as automated startup of an IRM server, which requires changes to these default settings. But for now, let's leave it all alone and just click next. Now we are ready to install.
Note that from this dialog you can scroll the left window and see that there are going to be two servers created from the defaults: the AdminServer, which is where you modify settings for the WebLogic Server and which also hosts the Oracle Enterprise Manager for IRM, allowing you to monitor IRM service performance and make service-related settings (which we do shortly below); and IRM_server1, which hosts the actual IRM services themselves. So go right ahead and hit create; the process is pretty quick, usually under 10 minutes. When the domain creation ends, it will give you the URL to the admin server. It's worth noting this down; the URL is usually:

http://irm.company.internal:7001

Starting the Admin Server and initial configuration
The first thing to do is start the WebLogic Admin server and review the initial IRM server settings. In this guide we are going to run the Admin server and IRM server in console windows; in another article I will discuss running these as background services. So for now, start a console and run the Admin server by doing the following:

cd /oracle/middleware/user_projects/domains/irm_domain/
./startWebLogic.sh

Wait for the server to start; you are looking for the following line to be reported in the console window:

<BEA-000360><Server started in RUNNING mode>

The first step is configuring the IRM service via Enterprise Manager. Now that the Admin server is running you can point a browser at http://irm.company.internal:7001/em. Log in with the username and password you supplied when you created the domain. In Enterprise Manager the IRM service administrator is able to make server-wide configuration changes. However, finding where to access the pages with these settings can be a bit of a challenge. After logging in, on the left you'll see a tree containing elements of the Enterprise Manager farm, Farm_irm_domain. Open up Content Management, then Information Rights Management, and finally select the IRM node. Then, on the right, select the IRM menu item and navigate to the Administration section. There are now four options; for now, we are just going to look at General Settings. The image on the right proves that a picture is worth a thousand words (or 113 in this case).

The General Settings page allows you to set the cryptographic algorithms used for protecting sealed content. Unless you have a burning need to increase the key lengths, or you need to comply with a regulation or government mandate, AES192 is a good start. You can change this later on without worry. The most important setting we need to make here is the Server URL. In this blog article I go over why this URL is so important: basically, every single piece of content you protect with Oracle IRM is going to have this URL embedded in it, so if it's wrong or unresolvable, then nobody can open the secured documents. Note that in our environment we have yet to do any SSL configuration of the service. If you intend to build a server without SSL, then use http as the protocol instead of https. But I would recommend using SSL; setting this up is described in the next article. I would also probably up the device count from 1 to 3. This means that any user can retrieve rights to access content on 3 computers at any one time. The default of 1 doesn't really make sense in development, evaluation, or even production environments, and my experience is that 3 is a better number. The next step is to create the keystore for the IRM server.
When a classification (called a context) is created, Oracle IRM generates a unique set of symmetric keys which are used to secure the content itself. These keys are then encrypted with a set of "wrapper" asymmetric cryptography keys which are stored externally to the server, either in a Java Key Store or an HSM. These keys need to be generated, and the following shows my commands and the resulting output. I have greyed out the responses from the commands so you can see the input more easily.

[oracle@irmsrv ~]$ cd /oracle/middleware/wlserver_10.3/server/bin/
[oracle@irmsrv bin]$ ./setWLSEnv.sh
CLASSPATH=/oracle/middleware/patch_wls1033/profiles/default/sys_manifest_classpath/weblogic_patch.jar:/oracle/middleware/patch_ocp353/profiles/default/sys_manifest_classpath/weblogic_patch.jar:/usr/java/jdk1.6.0_18/lib/tools.jar:/oracle/middleware/wlserver_10.3/server/lib/weblogic_sp.jar:/oracle/middleware/wlserver_10.3/server/lib/weblogic.jar:/oracle/middleware/modules/features/weblogic.server.modules_10.3.3.0.jar:/oracle/middleware/wlserver_10.3/server/lib/webservices.jar:/oracle/middleware/modules/org.apache.ant_1.7.1/lib/ant-all.jar:/oracle/middleware/modules/net.sf.antcontrib_1.1.0.0_1-0b2/lib/ant-contrib.jar:
PATH=/oracle/middleware/wlserver_10.3/server/bin:/oracle/middleware/modules/org.apache.ant_1.7.1/bin:/usr/java/jdk1.6.0_18/jre/bin:/usr/java/jdk1.6.0_18/bin:/usr/kerberos/bin:/usr/local/bin:/bin:/usr/bin:/home/oracle/bin
Your environment has been set.
[oracle@irmsrv bin]$ cd /oracle/middleware/user_projects/domains/irm_domain/config/fmwconfig/
[oracle@irmsrv fmwconfig]$ keytool -genkeypair -alias oracle.irm.wrap -keyalg RSA -keysize 2048 -keystore irm.jks
Enter keystore password:
Re-enter new password:
What is your first and last name? [Unknown]: Simon Thorpe
What is the name of your organizational unit? [Unknown]: Oracle
What is the name of your organization? [Unknown]: Oracle
What is the name of your City or Locality? [Unknown]: San Francisco
What is the name of your State or Province? [Unknown]: CA
What is the two-letter country code for this unit? [Unknown]: US
Is CN=Simon Thorpe, OU=Oracle, O=Oracle, L=San Francisco, ST=CA, C=US correct? [no]: yes
Enter key password for (RETURN if same as keystore password):

At this point we now have an irm.jks in the directory /oracle/middleware/user_projects/domains/irm_domain/config/fmwconfig. The reason we store it here is that this folder would be backed up as part of a domain backup. As with any cryptographic technology, DO NOT LOSE THESE KEYS OR THIS KEY STORE. Once you've sealed content against a context, the content keys are wrapped with these keys; lose them, and you can't get access to any secured content. Pretty important.

Now we've got the keys created, we need to go back to the IRM Enterprise Manager and set the location of the key store. Going back to the General Settings page in Enterprise Manager, scroll down to Keystore Settings. Leave the type as JKS but change the location to:

/oracle/Middleware/user_projects/domains/irm_domain/config/fmwconfig/irm.jks

and hit Apply. With regard to the key store, the final step is to tell the server the password for the Java Key Store so that it can be opened and the keys accessed. Once more, fire up a console window and run these commands (again, I've greyed out the clutter so the commands are easier to see). You will see dummy passed into the commands; this is because the command asks for a username, but in this instance we don't use one, hence the value dummy is passed and isn't used.
[oracle@irmsrv fmwconfig]$ cd /oracle/middleware/Oracle_IRM/common/bin/
[oracle@irmsrv bin]$ ./wlst.sh
... lots of settings fly by...
Welcome to WebLogic Server Administration Scripting Shell
Type help() for help on available commands
wls:/offline> connect('weblogic','password','t3://irmsrv.us.oracle.com:7001')
Connecting to t3://irmsrv.us.oracle.com:7001 with userid weblogic ...
Successfully connected to Admin Server 'AdminServer' that belongs to domain 'irm_domain'.
Warning: An insecure protocol was used to connect to the server. To ensure on-the-wire security, the SSL port or Admin port should be used instead.
wls:/irm_domain/serverConfig> createCred("IRM","keystore:irm.jks","dummy","password")
Location changed to domainRuntime tree. This is a read-only tree with DomainMBean as the root. For more help, use help(domainRuntime)
wls:/irm_domain/serverConfig> createCred("IRM","key:irm.jks:oracle.irm.wrap","dummy","password")
Already in Domain Runtime Tree
wls:/irm_domain/serverConfig>

At last we are now ready to fire up the IRM server itself. The domain creation created a managed server called IRM_server1, and we need to start it. Use the following commands in a new console window:

cd /oracle/middleware/user_projects/domains/irm_domain/bin/
./startManagedWebLogic.sh IRM_server1

This starts up the server in the console; unlike the Admin server, you need to provide the username and password for the service to start. Enter your weblogic username and password when prompted. You can change this behavior by putting the password into a boot.properties file; read more about this in the WebLogic Server documentation. Once running, wait until you see the line:

<Notice><WebLogicServer><BEA-000360><Server started in RUNNING mode>

At this point we can now log in to the Oracle IRM Management Website at the URL:

http://irm.company.internal:1600/irm_rights/

The server is just configured for HTTP at the moment, no SSL involved; we just want to ensure we can get a working system up and running. You should now see a login page like the image on the right, and you can log in using your weblogic username and password. The next article in this guide goes over adding SSL and testing your server by actually adding a few users, sealing some content, and opening this content as a user.

    Read the article

  • Use IIS Application Initialization for keeping ASP.NET Apps alive

    - by Rick Strahl
I've been working quite a bit with Windows Services in recent months, and well, it turns out that Windows Services are quite a bear to debug, deploy, update and maintain. The process of getting services set up, debugged and updated is a major chore that has to be extensively documented and/or automated specifically. On most projects, when a service is built, people end up scrambling for the right 'process' to use for administration. Web app deployment and maintenance, on the other hand, are common and well understood today, as we are constantly dealing with Web apps. There's plenty of infrastructure and tooling built into Web tools like Visual Studio to facilitate the process. By comparison, Windows Services, or anything self-hosted for that matter, seem convoluted.

In fact, in a recent blog post I mentioned that on a recent project I'd been using self-hosting for SignalR inside of a Windows Service, because the application is in fact a 'service' that also needs to send out lots of messages via SignalR. But the reality is that it could just as well be an IIS application with a service component that runs in the background. Either way you look at it, it's either a Windows Service with a built-in Web server, or an IIS application running a service application, neither of which follows the standard service or Web app template. Personally, I much prefer Web applications. Running inside of IIS I get all the benefits of the IIS platform, including service lifetime management (crash and restart), controlled shutdowns, the whole security infrastructure including easy certificate support, hot-swapping of code, and the ability to publish directly to IIS from within Visual Studio with ease. Because of these benefits we set out to move from the self-hosted service into an ASP.NET Web app instead.

The Missing Link for ASP.NET as a Service: Auto-Loading
I've had moments in the past where I wanted to run a 'service-like' application in ASP.NET, because when you think about it, it's so much easier to control a Web application remotely. Services are locked into start/stop operations, but if you host inside of a Web app you can write your own ticket and control it from anywhere. In fact, nearly 10 years ago I built a background scheduling application that ran inside of ASP.NET, and it worked great; it's still running, doing its job today. The tricky part for running an app as a service inside of IIS, then and now, is how to get IIS and ASP.NET launched so your 'service' stays alive even after an Application Pool reset. 7 years ago I faked it by using a web monitor (my own West Wind Web Monitor app) I was running anyway to monitor my various web sites for uptime, and having the monitor ping my 'service' every 20 seconds to effectively keep ASP.NET alive or fire it back up after a reload. I used a simple scheduler class that also includes some logic for 'self-reloading'. Hacky for sure, but it worked reliably.

Luckily, today it's much easier and more integrated to get IIS to launch ASP.NET as soon as an Application Pool is started, by using the Application Initialization Module. The Application Initialization Module basically allows you to turn on preloading on the Application Pool and the Site/IIS App, which essentially fires a request through the IIS pipeline as soon as the Application Pool has been launched. This means that your ASP.NET app becomes active immediately: Application_Start is fired, making sure your app stays up and running at all times.
All the other features like Application Pool recycling and auto-shutdown after idle time still work, but IIS will then always immediately re-launch the application.

Getting started with Application Initialization
As of IIS 8, Application Initialization is part of the IIS feature set. For IIS 7 and 7.5 there's a separate download available via the Web Platform Installer. In IIS 8, Application Initialization is an optional install component in Windows or the Windows Server Role Manager, so make sure you explicitly select it.

IIS Configuration for Application Initialization
Initialization needs to be applied on the Application Pool as well as at the IIS Application level.

Start with the Application Pool: here you need to set both Start Automatically, which is always set, and the StartMode, which should be set to AlwaysRunning. Both have to be set - the Start Automatically flag is set to true by default and controls the starting of the application pool itself, while the Always Running flag is required in order to launch the application. Without the latter flag set, the site settings have no effect.

Now at the Site/Application level you can specify whether the site should preload: set the Preload Enabled flag to true. At this point ASP.NET apps should auto-load. This is all that's needed to preload the site if all you want is to get your site launched automatically.

If you want a little more control over the load process, you can add a few more settings to your web.config file that allow you to show a static page while the app is starting up. This can be useful if startup is really slow, so rather than displaying a blank screen while the user is twiddling their thumbs, you can display a static HTML page instead:

<system.webServer>
  <applicationInitialization remapManagedRequestsTo="Startup.htm" skipManagedModules="true">
    <add initializationPage="ping.ashx" />
  </applicationInitialization>
</system.webServer>

This allows you to specify a page to execute in a dry run. IIS basically fakes a request and pushes it directly into the IIS pipeline without hitting the network. You specify a page and IIS will fake a request to that page, in this case ping.ashx, which just returns a simple OK string - i.e. a fast pipeline request. This request is run immediately after Application Pool restart, and while this request is running and your app is warming up, IIS can display an alternate static page - Startup.htm above. So instead of showing users an empty loading page when clicking a link on your site, you can optionally show some sort of static status page that says "we'll be right back". I'm not sure if that's such a brilliant idea since this can be pretty disruptive in some cases. Personally, I think I prefer letting people wait, but at least get the response they were supposed to get back rather than a random page. But it's there if you need it.

Note that the web.config stuff is optional. If you don't provide it, IIS hits the default site link (/) and even if there's no matching request at the end of that request, it'll still fire the request through the IIS pipeline.
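The post doesn't show the implementation of ping.ashx itself, so here is a minimal sketch of what such a warm-up handler might look like (the handler file and class names are assumptions; any cheap endpoint that exercises the managed pipeline will do):

<%@ WebHandler Language="C#" Class="PingHandler" %>

using System.Web;

public class PingHandler : IHttpHandler
{
    public void ProcessRequest(HttpContext context)
    {
        // trivial, fast response - just enough to spin up the managed pipeline
        context.Response.ContentType = "text/plain";
        context.Response.Write("OK");
    }

    public bool IsReusable
    {
        get { return true; }
    }
}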
Ideally though, you want to make sure that an ASP.NET endpoint is hit, either with your default page or by specifying the initializationPage, to ensure ASP.NET actually gets hit, since it's possible for IIS to fire unmanaged requests only for static pages (depending on how your pipeline is configured).

What about AppDomain Restarts?
In addition to full worker process recycles at the IIS level, ASP.NET also has to deal with AppDomain shutdowns, which can occur for a variety of reasons:
- Files are updated in the BIN folder
- Web Deploy to your site
- web.config is changed
- Hard application crash
These operations don't cause the worker process to restart, but they do cause ASP.NET to unload the current AppDomain and start up a new one. Because the features above only apply to Application Pool restarts, AppDomain restarts could also cause your 'ASP.NET service' to stop processing in the background. In order to keep the app running on AppDomain recycles, you can resort to a simple ping in the Application_End event:

protected void Application_End()
{
    var client = new WebClient();
    var url = App.AdminConfiguration.MonitorHostUrl + "ping.aspx";
    client.DownloadString(url);
    Trace.WriteLine("Application Shut Down Ping: " + url);
}

which fires an ASP.NET URL on the current site at the very end of the pipeline shutdown, which in turn ensures that the site immediately starts back up.

Manual Configuration in ApplicationHost.config
The above UI corresponds to the following ApplicationHost.config settings. If you're using IIS 7, there's no UI for these flags, so you'll have to manually edit them. When you install the Application Initialization component into IIS, it should auto-configure the module into ApplicationHost.config. Unfortunately for me, with Mr. Murphy in his best form, the module registration did not occur and I had to manually add it:

<globalModules>
  <add name="ApplicationInitializationModule" image="%windir%\System32\inetsrv\warmup.dll" />
</globalModules>

Most likely you won't ever need to add this, but if things are not working it's worth checking whether the module is actually registered. Next you need to configure the ApplicationPool and the Web site. The following are the two relevant entries in ApplicationHost.config:

<system.applicationHost>
  <applicationPools>
    <add name="West Wind West Wind Web Connection" autoStart="true" startMode="AlwaysRunning" managedRuntimeVersion="v4.0" managedPipelineMode="Integrated">
      <processModel identityType="LocalSystem" setProfileEnvironment="true" />
    </add>
  </applicationPools>
  <sites>
    <site name="Default Web Site" id="1">
      <application path="/MPress.Workflow.WebQueueMessageManager" applicationPool="West Wind West Wind Web Connection" preloadEnabled="true">
        <virtualDirectory path="/" physicalPath="C:\Clients\…" />
      </application>
    </site>
  </sites>
</system.applicationHost>

On the Application Pool, make sure to set the autoStart and startMode flags to true and AlwaysRunning respectively. On the site, make sure to set the preloadEnabled flag to true. And that's all you should need. You can still set the web.config settings described above as well.

ASP.NET as a Service?
In the particular application I'm working on currently, we have a queue manager that runs as a standalone service that polls a database queue, picks out jobs, and processes them on several threads. The service can spin up any number of threads and keep these threads alive in the background while IIS is running, doing its own thing. These threads are newly created threads, so they sit completely outside of the IIS thread pool.
In order for this service to work, all it needs is a long-running reference that keeps it alive for the lifetime of the application. In this particular app there are two components that run in the background on their own threads: a scheduler that runs various scheduled tasks and handles things like picking up emails to send out outside of IIS's scope, and the QueueManager. Here's what this looks like in global.asax:

public class Global : System.Web.HttpApplication
{
    private static ApplicationScheduler scheduler;
    private static ServiceLauncher launcher;

    protected void Application_Start(object sender, EventArgs e)
    {
        // Pings the service and ensures it stays alive
        scheduler = new ApplicationScheduler() { CheckFrequency = 600000 };
        scheduler.Start();

        launcher = new ServiceLauncher();
        launcher.Start();

        // register so shutdown is controlled
        HostingEnvironment.RegisterObject(launcher);
    }
}

By keeping these objects around as static instances that are set only once on startup, they survive the lifetime of the application. The code in these classes is essentially unchanged from the Windows Service code, except that I could remove the various overrides required for the Windows Service interface (OnStart, OnStop, OnResume, etc.). Otherwise the behavior and operation is very similar. In this application ASP.NET serves two purposes: it acts as the host for SignalR and provides the administration interface, which allows remote management of the 'service'. I can start and stop the service remotely by shutting down the ApplicationScheduler very easily. I can also very easily feed stats from the queue out directly via a couple of Web requests or (as we do now) through the SignalR service.

Registering a Background Object with ASP.NET
Notice also the use of HostingEnvironment.RegisterObject(). This function registers an object with ASP.NET to let it know that it's a background task that should be notified if the AppDomain shuts down. RegisterObject() requires an interface with a Stop() method that's fired and allows your code to respond to a shutdown request. Here's what the IRegisteredObject::Stop() method looks like on the launcher:

public void Stop(bool immediate = false)
{
    LogManager.Current.LogInfo("QueueManager Controller Stopped.");
    Controller.StopProcessing();
    Controller.Dispose();
    Thread.Sleep(1500); // give background threads some time
    HostingEnvironment.UnregisterObject(this);
}

Implementing IRegisteredObject should help with reliability on AppDomain shutdowns. Thanks to Justin Van Patten for pointing this out to me on Twitter. RegisterObject() is not required, but I would highly recommend implementing it on whatever object controls your background processing to allow clean shutdowns when the AppDomain shuts down.
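To tie the registration and shutdown pieces together, here is a condensed, self-contained sketch (not from the original post; the class name and timer interval are illustrative) of a background worker that registers itself with the hosting environment:

using System.Threading;
using System.Web.Hosting;

public class BackgroundWorkHost : IRegisteredObject
{
    private readonly Timer _timer;

    public BackgroundWorkHost()
    {
        // let ASP.NET know about this background object
        HostingEnvironment.RegisterObject(this);

        // do a unit of background work every 60 seconds
        _timer = new Timer(state => DoWork(), null, 0, 60000);
    }

    private void DoWork()
    {
        // poll a queue, send mail, etc.
    }

    // called by ASP.NET when the AppDomain is shutting down
    public void Stop(bool immediate)
    {
        _timer.Dispose();
        HostingEnvironment.UnregisterObject(this);
    }
}

As with the launcher above, you'd instantiate this once from Application_Start and keep it in a static field so it survives for the lifetime of the application.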
Testing it out
I'm still in the testing phase with this particular service to see if there are any side effects, but so far it doesn't look like it. With about 50 lines of code I was able to replace the Windows Service startup with Web startup - everything else just worked as is. An honorable mention goes to SignalR 2.0's OWIN hosting, because with the new OWIN-based hosting no code changes at all were required; merely a couple of configuration file settings and an assembly directive were needed to point at the SignalR startup class. Sweet! It also seems like SignalR is noticeably faster running inside of IIS compared to self-host. Startup feels faster because of the preload.

Starting and Stopping the 'Service'
Because the application is running as a Web application, it's easy to have a Web interface for starting and stopping the services running inside of it. For our queue manager, the SignalR service and front monitoring app have a play and a stop button for toggling the queue. If you want more administrative control and have it work more like a Windows Service, you can also stop the application pool explicitly from the command line, which would be equivalent to stopping and restarting a service. To start and stop from the command line you can use the IIS appCmd tool. To stop:

> %windir%\system32\inetsrv\appcmd stop apppool /apppool.name:"Weblog"

and to start:

> %windir%\system32\inetsrv\appcmd start apppool /apppool.name:"Weblog"

Note that when you explicitly force the AppPool to stop running, either in the UI (on the ApplicationPools page use Start/Stop) or via command line tools, the application pool will not auto-restart immediately. You have to manually start it back up.
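As an aside not covered in the original post: if you'd rather control the pool from managed code than shell out to appcmd, the Microsoft.Web.Administration API offers the same start/stop control. A sketch (the pool name "Weblog" mirrors the appcmd examples above; running this requires administrative rights):

using Microsoft.Web.Administration;

class AppPoolToggle
{
    static void Main()
    {
        using (var serverManager = new ServerManager())
        {
            var pool = serverManager.ApplicationPools["Weblog"];
            if (pool.State == ObjectState.Started)
                pool.Stop();   // equivalent to: appcmd stop apppool /apppool.name:"Weblog"
            else if (pool.State == ObjectState.Stopped)
                pool.Start();  // equivalent to: appcmd start apppool /apppool.name:"Weblog"
        }
    }
}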
What's not to like?
There are certainly a lot of benefits to running a background service in IIS, but… ASP.NET applications do have more overhead in terms of memory footprint, and startup time is a little slower, but generally for server applications this is not a big deal. If the application is stable, the service should fire up and stay running indefinitely. A lot of times this kind of service interface can simply be attached to an existing Web application, or, if scalability requires, be offloaded to its own Web server.

Easier to work with
But the ultimate benefit here is that it's much easier to work with a Web app as opposed to a service. While developing I can simply turn off the auto-launch features and launch the service on demand through IIS, simply by hitting a page on the site. If I want to shut down, an IISRESET -stop will shut down the service easily enough. I can then attach a debugger anywhere I want, and this works like any other ASP.NET application. Yes, you end up on a background thread for debugging, but Visual Studio handles that just fine, and if you stay on a single thread this is no different than debugging any other code.

Summary
Using ASP.NET to run background service operations is probably not a super common scenario, but it probably should be something that is considered carefully when building services. Many applications have service-like features, and with the auto-start functionality of the Application Initialization module, it's easy to build this functionality into ASP.NET. Especially when combined with the notification features of SignalR, it becomes very, very easy to create rich services that can also communicate their status easily to the outside world. Whether it's existing applications that need some background processing for scheduling-related tasks, or whether you just create a separate site altogether just to host your service, it's easy to do and you can leverage the same tool chain you're already using for other Web projects. If you have lots of service projects it's worth considering… give it some thought…

© Rick Strahl, West Wind Technologies, 2005-2013
Posted in ASP.NET  SignalR  IIS

    Read the article

  • CodePlex Daily Summary for Thursday, March 01, 2012

CodePlex Daily Summary for Thursday, March 01, 2012

Popular Releases

Metodología General Ajustada - MGA: 01.09.08: John's changes: Changes in the MDI: enabled the Print menu and icon. Temporarily disabled the Help menu and the Import and Export options of the Projects menu. Integration with Crystal Report code. Try-Catch validation when generating the reports, customization of the forms' styles and buttons, and validation of the report type selection. Creation of an installer with ALL the changes and creation of the folders associated with the RPTs.
WatchersNET CKEditor™ Provider for DotNetNuke®: CKEditor Provider 1.14.01: What's New: Added new plugin "Ventrian News Articles Link Selector" to select an Article Link from the News Article Module (this plugin is not visible by default in your toolbar; you need to manually add 'newsarticleslinks' to your toolbarset) http://www.watchersnet.de/Portals/0/screenshots/dnn/CKEditorNewsArticlesLinks.png File-Browser: Added paging to the files list. You can define the page size in the options (default value: 20) http://www.watchersnet.de/Portals/0/screenshots/dnn/CKEdito...
MyRouter (Virtual WiFi Router): MyRouter 1.0 (Beta): A friendlier user interface. A logger file to catch exceptions so you may send it to us to improve and fix any bugs that may occur. A feedback form, because we always love hearing what you guys think of MyRouter. A check-for-update menu item for you to stay up to date with the latest changes. A Facebook fan page so you may spread the word and share MyRouter with friends and family. And many other exciting features we're sure you're going to love!
WPF Sound Visualization Library: WPF SVL 0.3 (Source, Binaries, Examples, Help): Version 0.3 of WPFSVL. This includes three new controls: an equalizer, a digital clock, and a time editor.
Thai Flood Watch: Thai Flood Watch - Source: Non-commercial use only. ** This project is supported by the Department of Computer Science, Khon Kaen University, Thailand.
ZXing.Net: ZXing.Net 0.4.0.0: Sync with rev. 2196 of the Java version. Important fix for RGBLuminanceSource generating barcode bitmaps. Windows Phone demo client (only tested with the emulator, because I don't have a Windows Phone). Barcode generation support for the Windows Forms demo client. Webcam support for the Windows Forms demo client.
Orchard Project: Orchard 1.4: Please read our release notes for Orchard 1.4: http://docs.orchardproject.net/Documentation/Orchard-1-4-Release-Notes
.NET Assembly Information: Assembly Information 2.1.0.1: - Fixed the issue in which AnyCPU binaries were shown as 32bit - Added support to show the errors in case some dlls failed to load.
FluentData - Micro ORM with a fluent API that makes it simple to query a database: FluentData version 1.2: New features: - QueryValues method - Added support for automapping to enumerations (both int and string are supported). Fixed 2 reported issues.
NetSqlAzMan - .NET SQL Authorization Manager: 3.6.0.15: 3.6.0.15 28-Feb-2012 • Fix: The communication object, System.ServiceModel.Channels.ServiceChannel, cannot be used for communication because it is in the Faulted state. Work Item 10435: http://netsqlazman.codeplex.com/workitem/10435 • Fix: Made StorageCache thread safe. Thanks to tangrl. • Fix: Members property of SqlAzManApplicationGroup is not functioning. Thanks to tangrl. Work Item 10267: http://netsqlazman.codeplex.com/workitem/10267 • Fix: Indexer are making database calls.
Thanks to t...
SCCM Client Actions Tool: Client Actions Tool v1.1: SCCM Client Actions Tool v1.1 is the latest version. It comes with the following changes since the last version: Added a stop button to stop the ongoing process. Added the action "Query update status". Added the option "saveOnlineComputers" in config.ini to enable saving the list of online computers from the last session. Default value for "LatestClientVersion" set to SP2 R3 (4.00.6487.2157). Wuauserv service manual startup mode is considered healthy on Windows 7. Errors are now suppressed in checkReleases...
Kinect PowerPoint Control: Kinect PowerPoint Control v1.1: Updated for Kinect SDK 1.0.
SharpCompress - a fully native C# library for RAR, 7Zip, Zip, Tar, GZip, BZip2: SharpCompress 0.8: API updates: SOLID Extract Method for Archives (7Zip and RAR). The ExtractAllEntries method on Archive classes will extract archives as a streaming file; this can offer better 7Zip extraction performance if any of the entries are solid. The IsSolid method on 7Zip archives will return true if any are solid. Removed: IExtractionListener was removed in favor of events; unit tests show an example. Bug fixes: PPMd passes tests plus other fixes (thanks Pavel). Zip used to always write a Post Descri...
Social Network Importer for NodeXL: SocialNetImporter (v.1.3): This new version includes: - Download new networks for Facebook fan pages. - New options for downloading more posts. - Bug fixes. To use the new graph data provider, do the following: Unzip the Zip file into the "PlugIns" folder that can be found in the NodeXL installation folder (i.e. "C:\Program Files\Social Media Research Foundation\NodeXL Excel Template\PlugIns"). Open the NodeXL template and you can access the new importer from the "Import" menu.
ASP.NET REST Services Framework: Release 1.1 - Standard version: Beginning from v1.1 the REST-services Framework is compatible with the ASP.NET Routing model as well as with the CRUD (Create, Read, Update, and Delete) principle. These two are often important when building REST API functionality within your application. It also includes the ability to apply Filters to a class to target all WebRest methods, as well as some performance enhancements. The new version includes the Metadata Explorer, providing the ability to explore the existing services, which becomes essential as the number ...
SQL Live Monitor: SQL Live Monitor 1.31: A quick fix to make this version work with SQL 2012. Version 2 already has 2012 working, but I am still developing the UI in version 2, so this is just an interim fix to allow users to monitor SQL 2012.
Content Slider Module for DotNetNuke: 01.02.00: This release has the following updates and new features: Feature: One-Click Enabling of Pager Setting. Feature: Cache Sliders for Performance. Feature: Configurable Cache Setting. Enhancement: Transitions can be Selected. Bug: Secure Folder Images not Viewable. Bug: Sliders Disappear on Postback. Bug: Remote Images Cause Error. Bug: Deleted Images Cause Error. System Requirements: DotNetNuke v06.00.00 or newer, .Net Framework v3.5 SP1 or newer, SQL Server 2005 or newer.
Image Resizer for Windows: Image Resizer 3 Preview 3: Here is yet another iteration toward what will eventually become Image Resizer 3. This release is stable. However, I'm calling it a preview since there are still many features I'd like to add before calling it complete. Updated on February 28 to fix an issue with installing on multi-user machines. As usual, here is my progress report.
Done: Preview 3 Fix: 3206 3076 3077 5688 Fix: 7420 Fix: 7527 Fix: 7576 7612 Preview 2 6308 6309 Fix: 7339 Fix: 7357 Preview 1 UI...
Finestra Virtual Desktops: 2.5.4500: This is a bug fix release for version 2.5. It fixes several things and adds a couple of minor features. See the 2.5 release notes for more information on the major new features in that version. Important - If Finestra crashes on startup for you, you must install the Visual C++ 2010 runtime from http://www.microsoft.com/download/en/details.aspx?id=5555. Fixes a bug with window animations not refreshing the screen on XP and with DWM off. Fixes a bug with crashing on XP due to a bug in t...
Media Companion: MC 3.432b Release: General: Now remembers window location. Catching a few more exceptions when an image is blank. TV: A couple of UI tweaks. Movies: Fixed the actor name displaying HTML. Fixed crash when using Save files as "movie.nfo", "movie.tbn", & "fanart.jpg". New CSV template for the HTML output function. Added <createdate> tag for HTML output. A couple of UI tweaks. Known Issues: Multiepisodes are not handled correctly in MC. The created nfo is valid, but they are not displayed in MC correctly & saving the...

New Projects

abac: abac cn website
AION Launcher: simple aion launcher...just edit the background image of your choosing inside the code and other things such as the links for the buttons and the IP address and port of the server
AXTFSTool: Dynamics AX tool that connects to your project's TFS and lists the objects your colleagues have changed. Written in C#, still under development and improvements. Useful for team leaders, deployment managers, etc.
cookieTopo: Topo map viewer
CrmFetchKit.js: Simple library that allows the execution of fetchxml queries via JavaScript for Dynamics CRM 2011 (using the new WCF endpoints). Like the CrmRestKit, this framework uses the promise/A capacities of jQuery. The code and the idea for this framework are based on the CrmServiceToolkit (http://crmtoolkit.codeplex.com/) developed by Daniel Cai.
cy univerX engine: ????????
DNSAPI.NET: A common API for managing DNS servers on Windows. This project is based on the work I started back in 2002 when I needed to create a web front-end for Windows' DNS server using the .Net framework. The plan is to expand on the project and include support for the BIND server on Windows too.
ego.net: ego.net
fdTFS: Team Foundation Server Source Control Plugin for FlashDevelop
GeoWPS: GeoWPS is an implementation of the OGC WPS. It will be developed in C#.
IThink: A new project.
King Garden: Boy King's .net practical projects.
King Garret: Boy King's .net learning projects.
LottoCheck: Follow Lotto
Not-Terraria: This is a Terraria-like game, but NOT Terraria
Password Protector: Password Protector
SharePoint 2010 BlobCache Manager: Manage your web application's blobcache settings directly in the central administration.
SharePoint 2010 SilverLight Multiple File Uploader: SharePoint 2010 SilverLight Multiple File Uploader for Documents Libraries with MetaData.
Sharepoint Tool Collection: I want to integrate various utilities of SharePoint in one place, for easier work by users and developers. Ex-1: a utility which takes some params and a csv file and uploads 100s of items to a SharePoint list easily. Ex-2: a utility to upload documents to a library. Etc.
SQLCLR Cmd Exec Framework Example: For users of MS SQL Server, xp_cmdshell is a utility that we usually want to have disabled. However there are still cases where calling a command line is needed.
This project provides a framework/example to make command line calls. It is not meant as an xp_cmdshell replacement but as a workaround.
Symmetric Designs Python 3.2: Symmetric Designs for Python 3.2 helps graphical artists design and develop their own designs freestyle. It uses the pygame module for Python 3.2. It can also be analysed in order to get a grasp of graphics programming in Python.
Terminsoft open CLR libraries: Terminsoft open CLR libraries. The first is Terminsoft.Intervals, intended for modeling sets of intervals whose elements have a defined comparison operation. The second is Terminsoft.Syntax, intended for text parsing and transformation and built upon regular expressions.
Thai Flood Watch: Thai Flood Watch provides useful, up-to-date information and visual access to the major canals in Bangkok, Thailand, using data from the Department of Drainage and Sewerage. Easily monitor river and canal flow information in the Bangkok area, right from your hand.
TheNerd: Sample video game source code. Using Sunburn.
Unity.WebAPI: A library that allows simple integration of Microsoft's Unity IoC container with ASP.NET's WebAPI. This project includes a bespoke DependencyResolver that creates a child container per HTTP request and disposes of all registered IDisposable instances at the end of the request.
Wholemy.RemoteTouch: The project is a remote touch-sensitive keyboard with a customizable interface which allows you to supplement control of another computer, regardless of wires. For example, if you have a not-so-fast Tablet PC as a client and a fast desktop computer as the server, connected over the network.
WindowPlace: WindowPlace makes it possible to save Window positions and sizes to a profile. Switching between profiles will effortlessly move and resize your windows. Helps improve productivity - especially for multi-monitor systems. Developed in C# using WPF and a few Windows API calls in the background.
WP Error Manager (Devv.Core.WPErrorManager): Library to log, handle and report errors on Windows Phone 7 apps. Fully customizable and extremely easy to implement. Works with any WP7 app. Tested with the emulator, Nokia Lumia 800 and Samsung Focus Flash.
WPMatic: Windows Phone7 app to manage Homematic (eQ-3) devices. The app is like the Homematic Central Configuration Unit (CCU), in German.
www.Nabaza.com Freeware and Ebooks: www.Nabaza.com Freeware and Ebooks by William R. Nabaza
Zap: Zap is a lightweight .NET communication framework. It is designed for programs running in a local area network. Zap provides a code generation tool that enables users to call remote methods and add/remove event listeners on remote objects, while hiding the lower-level details.

    Read the article

  • Inheritance Mapping Strategies with Entity Framework Code First CTP5: Part 3 – Table per Concrete Type (TPC) and Choosing Strategy Guidelines

    - by mortezam
This is the third (and last) post in a series that explains different approaches to map an inheritance hierarchy with EF Code First. I've described these strategies in previous posts:
Part 1 – Table per Hierarchy (TPH)
Part 2 – Table per Type (TPT)
In today's blog post I am going to discuss Table per Concrete Type (TPC), which completes the inheritance mapping strategies supported by EF Code First. At the end of this post I will provide some guidelines for choosing an inheritance strategy, mainly based on what we've learned in this series.

TPC and Entity Framework in the Past
Table per Concrete type is perhaps the simplest approach suggested, yet using TPC with EF is one of those concepts that has not been covered very well so far, and I've seen in some resources that it was even discouraged. The reason for that is just because the Entity Data Model Designer in VS2010 doesn't support TPC (even though the EF runtime does). That basically means that if you are following EF's Database-First or Model-First approaches, then configuring TPC requires manually writing XML in the EDMX file, which is not considered to be a fun practice. Well, no more. You'll see that with Code First, creating TPC is perfectly possible with the fluent API just like the other strategies, and you don't need to avoid TPC due to the lack of designer support as you would probably do in other EF approaches.

Table per Concrete Type (TPC)
In Table per Concrete type (aka Table per Concrete class) we use exactly one table for each (nonabstract) class. All properties of a class, including inherited properties, can be mapped to columns of this table, as shown in the following figure. As you can see, the SQL schema is not aware of the inheritance; effectively, we've mapped two unrelated tables to a more expressive class structure. If the base class were concrete, then an additional table would be needed to hold instances of that class. I have to emphasize that there is no relationship between the database tables, except for the fact that they share some similar columns.

TPC Implementation in Code First
Just like the TPT implementation, we need to specify a separate table for each of the subclasses. We also need to tell Code First that we want all of the inherited properties to be mapped as part of this table. In CTP5, there is a new helper method on the EntityMappingConfiguration class called MapInheritedProperties that does exactly this for us.
Here is the complete object model as well as the fluent API to create a TPC mapping:

public abstract class BillingDetail
{
    public int BillingDetailId { get; set; }
    public string Owner { get; set; }
    public string Number { get; set; }
}

public class BankAccount : BillingDetail
{
    public string BankName { get; set; }
    public string Swift { get; set; }
}

public class CreditCard : BillingDetail
{
    public int CardType { get; set; }
    public string ExpiryMonth { get; set; }
    public string ExpiryYear { get; set; }
}

public class InheritanceMappingContext : DbContext
{
    public DbSet<BillingDetail> BillingDetails { get; set; }

    protected override void OnModelCreating(ModelBuilder modelBuilder)
    {
        modelBuilder.Entity<BankAccount>().Map(m =>
        {
            m.MapInheritedProperties();
            m.ToTable("BankAccounts");
        });
        modelBuilder.Entity<CreditCard>().Map(m =>
        {
            m.MapInheritedProperties();
            m.ToTable("CreditCards");
        });
    }
}

The Importance of the EntityMappingConfiguration Class
As a side note, it's worth mentioning that the EntityMappingConfiguration class turns out to be a key type for inheritance mapping in Code First. Here is a snapshot of this class:

namespace System.Data.Entity.ModelConfiguration.Configuration.Mapping
{
    public class EntityMappingConfiguration<TEntityType> where TEntityType : class
    {
        public ValueConditionConfiguration Requires(string discriminator);
        public void ToTable(string tableName);
        public void MapInheritedProperties();
    }
}

As you have seen so far, we used its Requires method to customize TPH. We also used its ToTable method to create a TPT, and now we are using its MapInheritedProperties along with the ToTable method to create our TPC mapping.

TPC Configuration is Not Done Yet!
We are not quite done with our TPC configuration, and there is more to this story, even though the fluent API we saw perfectly created a TPC mapping for us in the database. To see why, let's start working with our object model. For example, the following code creates two new objects of the BankAccount and CreditCard types and tries to add them to the database:

using (var context = new InheritanceMappingContext())
{
    BankAccount bankAccount = new BankAccount();
    CreditCard creditCard = new CreditCard() { CardType = 1 };

    context.BillingDetails.Add(bankAccount);
    context.BillingDetails.Add(creditCard);
    context.SaveChanges();
}

Running this code throws an InvalidOperationException with this message: "The changes to the database were committed successfully, but an error occurred while updating the object context. The ObjectContext might be in an inconsistent state. Inner exception message: AcceptChanges cannot continue because the object's key values conflict with another object in the ObjectStateManager. Make sure that the key values are unique before calling AcceptChanges."

The reason we got this exception is that DbContext.SaveChanges() internally invokes the SaveChanges method of its internal ObjectContext. ObjectContext's SaveChanges method, in turn, by default calls AcceptAllChanges after it has performed the database modifications. AcceptAllChanges merely iterates over all entries in the ObjectStateManager and invokes AcceptChanges on each of them.
Since the entities are in the Added state, AcceptChanges replaces their temporary EntityKey with a regular EntityKey based on the primary key values (i.e. BillingDetailId) that come back from the database, and that's where the problem occurs: both entities have been assigned the same value for their primary key by the database (i.e. BillingDetailId = 1 on both), and the ObjectStateManager cannot track objects of the same type (i.e. BillingDetail) with the same EntityKey value, hence it throws. If you take a closer look at TPC's SQL schema above, you'll see why the database generated the same values for the primary keys: the BillingDetailId column in both the BankAccounts and CreditCards tables has been marked as identity.

How to Solve the Identity Problem in TPC
As you saw, using SQL Server's int identity columns doesn't work very well together with TPC, since there will be duplicate entity keys when inserting into the subclass tables if they all have the same identity seed. Therefore, to solve this, either a spread seed (where each table has its own initial seed value) will be needed, or a mechanism other than SQL Server's int identity should be used. Some other RDBMSes have mechanisms allowing a sequence (identity) to be shared by multiple tables, and something similar can be achieved with GUID keys in SQL Server (a short sketch of the GUID alternative follows the object-model example below). While using GUID keys or int identity keys with different starting seeds will solve the problem, yet another solution is to completely switch off identity on the primary key property. As a result, we need to take on the responsibility of providing unique keys when inserting records into the database. We will go with this solution since it works regardless of which database engine is used.

Switching Off Identity in Code First
We can switch off identity simply by placing the DatabaseGenerated attribute on the primary key property and passing DatabaseGenerationOption.None to its constructor. The DatabaseGenerated attribute is a new data annotation which has been added to the System.ComponentModel.DataAnnotations namespace in CTP5:

public abstract class BillingDetail
{
    [DatabaseGenerated(DatabaseGenerationOption.None)]
    public int BillingDetailId { get; set; }
    public string Owner { get; set; }
    public string Number { get; set; }
}

As always, we can achieve the same result by using the fluent API, if you prefer that:

modelBuilder.Entity<BillingDetail>()
            .Property(p => p.BillingDetailId)
            .HasDatabaseGenerationOption(DatabaseGenerationOption.None);

Working With the Object Model
Our TPC mapping is ready and we can try adding new records to the database. But, like I said, now we need to take care of providing unique keys when creating new objects:

using (var context = new InheritanceMappingContext())
{
    BankAccount bankAccount = new BankAccount()
    {
        BillingDetailId = 1
    };
    CreditCard creditCard = new CreditCard()
    {
        BillingDetailId = 2,
        CardType = 1
    };

    context.BillingDetails.Add(bankAccount);
    context.BillingDetails.Add(creditCard);
    context.SaveChanges();
}
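As an aside, here is a quick sketch of the GUID-key alternative mentioned earlier. It is not from the original article; it simply swaps the int key for a client-generated Guid (which Code First conventions do not treat as store-generated), so the identity seed clash never arises:

public abstract class BillingDetail
{
    public Guid BillingDetailId { get; set; }   // no identity column involved
    public string Owner { get; set; }
    public string Number { get; set; }
}

// keys are generated client-side, so they are unique across both tables:
var bankAccount = new BankAccount { BillingDetailId = Guid.NewGuid() };
var creditCard = new CreditCard { BillingDetailId = Guid.NewGuid(), CardType = 1 };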
Polymorphic Associations with TPC are Problematic
The main problem with this approach is that it doesn't support polymorphic associations very well. After all, in the database, associations are represented as foreign key relationships, and in TPC the subclasses are all mapped to different tables, so a polymorphic association to their base class (the abstract BillingDetail in our example) cannot be represented as a simple foreign key relationship. For example, consider the domain model we introduced here, where User has a polymorphic association with BillingDetail. This would be problematic in our TPC schema, because if User has a many-to-one relationship with BillingDetail, the Users table would need a single foreign key column which would have to refer to both concrete subclass tables. This isn't possible with regular foreign key constraints.

Schema Evolution with TPC is Complex
A further conceptual problem with this mapping strategy is that several different columns, of different tables, share exactly the same semantics. This makes schema evolution more complex. For example, a change to a base class property results in changes to multiple columns. It also makes it much more difficult to implement database integrity constraints that apply to all subclasses.

Generated SQL
Let's examine the SQL output for polymorphic queries in the TPC mapping. For example, consider this polymorphic query for all BillingDetails and the resulting SQL statements being executed in the database:

var query = from b in context.BillingDetails select b;

Just like the SQL query generated by TPT mapping, the CASE statements that you see at the beginning of the query are merely there to ensure that columns irrelevant for a particular row have NULL values in the returned flattened table (e.g. BankName for a row that represents a CreditCard type).

TPC's SQL Queries are Union Based
As you can see in the above screenshot, the first SELECT uses a FROM-clause subquery (selected with a red rectangle) to retrieve all instances of BillingDetails from all concrete class tables. The tables are combined with a UNION operator, and a literal (in this case, 0 and 1) is inserted into the intermediate result (look at the lines highlighted in yellow). EF reads this to instantiate the correct class given the data from a particular row. A union requires that the queries being combined project over the same columns; hence, EF has to pad and fill up nonexistent columns with NULL. This query will really perform well, since here we can let the database optimizer find the best execution plan to combine rows from several tables. There are also no joins involved, so it performs better than the SQL queries generated by TPT, where a join is required between the base and subclass tables.

Choosing Strategy Guidelines
Before we get into this discussion, I want to emphasize that no single "best strategy for all scenarios" exists. As you saw, each of the approaches has its own advantages and drawbacks. Here are some rules of thumb to identify the best strategy in a particular scenario: If you don't require polymorphic associations or queries, lean toward TPC—in other words, if you never or rarely query for BillingDetails and you have no class that has an association to the BillingDetail base class. I recommend TPC (only) for the top level of your class hierarchy, where polymorphism isn't usually required, and when modification of the base class in the future is unlikely. If you do require polymorphic associations or queries, and subclasses declare relatively few properties (particularly if the main difference between subclasses is in their behavior), lean toward TPH.
Your goal is to minimize the number of nullable columns and to convince yourself (and your DBA) that a denormalized schema won't create problems in the long run. If you do require polymorphic associations or queries, and subclasses declare many properties (subclasses differ mainly by the data they hold), lean toward TPT. Or, depending on the width and depth of your inheritance hierarchy and the possible cost of joins versus unions, use TPC. By default, choose TPH only for simple problems. For more complex cases (or when you're overruled by a data modeler insisting on the importance of nullability constraints and normalization), you should consider the TPT strategy. But at that point, ask yourself whether it may not be better to remodel inheritance as delegation in the object model (delegation is a way of making composition as powerful for reuse as inheritance). Complex inheritance is often best avoided for all sorts of reasons unrelated to persistence or ORM. EF acts as a buffer between the domain and relational models, but that doesn't mean you can ignore persistence concerns when designing your classes.

Summary
In this series, we focused on one of the main structural aspects of the object/relational paradigm mismatch, inheritance, and discussed how EF solves this problem as an ORM solution. We learned about the three well-known inheritance mapping strategies and their implementations in EF Code First. Hopefully it gives you better insight into the mapping of inheritance hierarchies as well as choosing the best strategy for your particular scenario. Happy New Year and Happy Code-Firsting!

References
ADO.NET team blog
Java Persistence with Hibernate book

    Read the article

  • Simple MSBuild Configuration: Updating Assemblies With A Version Number

    - by srkirkland
When distributing a library you often run up against versioning problems, one facet of which is simply determining which version of that library your client is running.  Of course, each project in your solution has an AssemblyInfo.cs file which provides, among other things, the ability to set the assembly name and version number.  Unfortunately, setting the assembly version here would require not only changing the version manually for each build (depending on your schedule), but keeping it in sync across all projects.  There are many ways to solve this versioning problem, and in this blog post I'm going to try to explain what I think is the easiest and most flexible solution.  I will walk you through using MSBuild to create a simple build script, and I'll even show how to (optionally) integrate with a TeamCity build server.  All of the code from this post can be found at https://github.com/srkirkland/BuildVersion.

Create CommonAssemblyInfo.cs

The first step is to create a common location for the repeated assembly info that is spread across all of your projects.  Create a new solution-level file (I usually create a Build/ folder in the solution root, but anywhere reachable by all your projects will do) called CommonAssemblyInfo.cs.  In here you can put any information common to all your assemblies, including the version number.  An example CommonAssemblyInfo.cs is as follows:

using System.Reflection;
using System.Resources;
using System.Runtime.InteropServices;

[assembly: AssemblyCompany("University of California, Davis")]
[assembly: AssemblyProduct("BuildVersionTest")]
[assembly: AssemblyCopyright("Scott Kirkland & UC Regents")]
[assembly: AssemblyConfiguration("")]
[assembly: AssemblyTrademark("")]

[assembly: ComVisible(false)]

[assembly: AssemblyVersion("1.2.3.4")] //Will be replaced

[assembly: NeutralResourcesLanguage("en-US")]

Cleanup AssemblyInfo.cs & Link CommonAssemblyInfo.cs

For each of your projects, you'll want to clean up your assembly info to contain only information that is unique to that assembly – everything else will go in the CommonAssemblyInfo.cs file.  For most of my projects, that just means setting the AssemblyTitle, though you may feel AssemblyDescription is warranted.
An example AssemblyInfo.cs file is as follows:

using System.Reflection;

[assembly: AssemblyTitle("BuildVersionTest")]

Next, you need to "link" the CommonAssemblyInfo.cs file into your projects right beside your newly lean AssemblyInfo.cs file.  To do this, right click on your project and choose Add | Existing Item from the context menu.  Navigate to your CommonAssemblyInfo.cs file, but instead of clicking Add, click the little down-arrow next to Add and choose "Add as Link."  You should see a little link graphic on the file's icon.

We've actually reduced complexity a lot already, because if you build now, all of your assemblies will have the same common info, including the product name and our static (fake) assembly version.  Let's take this one step further and introduce a build script.

Create an MSBuild file

What we want from the build script (for now) is basically just to have the common assembly version number changed via a parameter (eventually to be passed in by the build server) and then for the project to build.  Also, we'd like the flexibility to define which build configuration to use (debug, release, etc.).

In order to find/replace the version number, we are going to use a regular expression to find and replace the text within your CommonAssemblyInfo.cs file.  There are many other ways to do this using community build task add-ins, but since we want to keep it simple let's just define the regular expression task manually in a new file, Build.tasks (this example is taken from the NuGet build.tasks file).
<?xml version="1.0" encoding="utf-8"?>
<Project ToolsVersion="4.0" DefaultTargets="Go" xmlns="http://schemas.microsoft.com/developer/msbuild/2003">
  <UsingTask TaskName="RegexTransform" TaskFactory="CodeTaskFactory" AssemblyFile="$(MSBuildToolsPath)\Microsoft.Build.Tasks.v4.0.dll">
    <ParameterGroup>
      <Items ParameterType="Microsoft.Build.Framework.ITaskItem[]" />
    </ParameterGroup>
    <Task>
      <Using Namespace="System.IO" />
      <Using Namespace="System.Text.RegularExpressions" />
      <Using Namespace="Microsoft.Build.Framework" />
      <Code Type="Fragment" Language="cs">
        <![CDATA[
        foreach(ITaskItem item in Items) {
          string fileName = item.GetMetadata("FullPath");
          string find = item.GetMetadata("Find");
          string replaceWith = item.GetMetadata("ReplaceWith");
          if(!File.Exists(fileName)) {
            Log.LogError(null, null, null, null, 0, 0, 0, 0, String.Format("Could not find version file: {0}", fileName), new object[0]);
          }
          string content = File.ReadAllText(fileName);
          File.WriteAllText(fileName, Regex.Replace(content, find, replaceWith));
        }
        ]]>
      </Code>
    </Task>
  </UsingTask>
</Project>

If you glance at the code, you'll see it's really just doing a Regex.Replace() on a given file, which is exactly what we need. Now we are ready to write our build file, called (by convention) Build.proj.

<?xml version="1.0" encoding="utf-8"?>
<Project ToolsVersion="4.0" DefaultTargets="Go" xmlns="http://schemas.microsoft.com/developer/msbuild/2003">
  <Import Project="$(MSBuildProjectDirectory)\Build.tasks" />
  <PropertyGroup>
    <Configuration Condition="'$(Configuration)' == ''">Debug</Configuration>
    <SolutionRoot>$(MSBuildProjectDirectory)</SolutionRoot>
  </PropertyGroup>

  <ItemGroup>
    <RegexTransform Include="$(SolutionRoot)\CommonAssemblyInfo.cs">
      <Find>(?&lt;major&gt;\d+)\.(?&lt;minor&gt;\d+)\.\d+\.(?&lt;revision&gt;\d+)</Find>
      <ReplaceWith>$(BUILD_NUMBER)</ReplaceWith>
    </RegexTransform>
  </ItemGroup>

  <Target Name="Go" DependsOnTargets="UpdateAssemblyVersion; Build">
  </Target>

  <Target Name="UpdateAssemblyVersion" Condition="'$(BUILD_NUMBER)' != ''">
    <RegexTransform Items="@(RegexTransform)" />
  </Target>

  <Target Name="Build">
    <MSBuild Projects="$(SolutionRoot)\BuildVersionTest.sln" Targets="Build" />
  </Target>

</Project>

Reviewing this MSBuild file, we see that by default the "Go" target will be called, which in turn depends on "UpdateAssemblyVersion" and then "Build."  We go ahead and import the Build.tasks file and then set up some handy properties for setting the build configuration and solution root (in this case, my build files are in the solution root, but we might want to create a Build/ directory later).  The rest of the file flows logically: we set up the RegexTransform to match version numbers of the form <major>.<minor>.<build>.<revision> (1.2.3.4 in our example) and replace them with a $(BUILD_NUMBER) parameter which will be supplied externally.
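Before wiring the transform into a build, it can help to see the same find/replace as a standalone C# program. This is just a sketch – the file name and build number below are placeholders for whatever your build server supplies:

using System;
using System.IO;
using System.Text.RegularExpressions;

class VersionStamper
{
    static void Main()
    {
        // Placeholders - point these at your own file and CI-supplied version.
        string fileName = "CommonAssemblyInfo.cs";
        string buildNumber = "2.0.1.453";

        // Same pattern as the <Find> element in Build.proj: captures
        // major/minor/revision and matches any third (build) component.
        string find = @"(?<major>\d+)\.(?<minor>\d+)\.\d+\.(?<revision>\d+)";

        string content = File.ReadAllText(fileName);
        File.WriteAllText(fileName, Regex.Replace(content, find, buildNumber));

        Console.WriteLine("Stamped {0} with {1}", fileName, buildNumber);
    }
}

Running this against the example CommonAssemblyInfo.cs turns [assembly: AssemblyVersion("1.2.3.4")] into [assembly: AssemblyVersion("2.0.1.453")], which is exactly what the RegexTransform task does inside the build.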
The first target, "UpdateAssemblyVersion," just runs the RegexTransform, and the second target, "Build," just runs the default MSBuild on our solution.

Testing the MSBuild file locally

Now we have a build file which can replace assembly version numbers and build, so let's set up a quick batch file to be able to build locally.  To do this you simply create a file called Build.cmd and have it call MSBuild on your Build.proj file.  I've added a bit more flexibility so you can specify the build configuration and version number, which makes your Build.cmd look as follows:

set config=%1
if "%config%" == "" (
  set config=debug
)

set version=%2
if "%version%" == "" (
  set version=2.3.4.5
)

%WINDIR%\Microsoft.NET\Framework\v4.0.30319\msbuild Build.proj /p:Configuration="%config%" /p:build_number="%version%"

Now if you run the Build.cmd file, you will get a default debug build using the version 2.3.4.5.  Let's run it in a command window with the parameters set for a release build, version 2.0.1.453.  Excellent!  We can now run one simple command and govern the build configuration and version number of our entire solution.  Each DLL produced will have the same version number, making it very simple and accurate to determine which version of a library you are running.

Configure the build server (TeamCity)

Of course you are not really going to want to run a build command manually every time, and typing in incrementing version numbers will also not be ideal.  A good solution is to have a computer (or set of computers) act as a build server and build your code for you, providing you a consistent environment, excellent reporting, and much more.  One of the most popular build servers is JetBrains' TeamCity, and this last section will show you the few configuration parameters to use when setting up a build using the MSBuild file created earlier.  If you are using a different build server, the same principles should apply.

First, when setting up the project you want to specify the "Build Number Format," often given in the form <major>.<minor>.<revision>.<build>.  In this case you will set major/minor manually, and optionally revision (or you can use your VCS revision number with %build.vcs.number%), and then build using the {0} wildcard.  Thus your build number format might look like this: 2.0.1.{0}.  During each build, this value will be created and passed into the $(BUILD_NUMBER) variable of our Build.proj file, which then uses it to decorate your assemblies with the proper version.

After setting up the build number, you must choose MSBuild as the Build Runner, then provide a path to your build file (Build.proj).  After specifying your MSBuild version (equivalent to your .NET Framework version), you have the option to specify targets (the default being "Go") and additional MSBuild parameters.
The one parameter that is often useful is manually setting the configuration property (/p:Configuration="Release") if you want something other than the default (which is Debug in our example).  Your resulting configuration will look something like this: [Screenshots: General Settings and Build Runner Settings]

Now every time your build is run, a newly incremented build version number will be generated and passed to MSBuild, which will then version your assemblies and build your solution.

A Quick Review

Our goal was to version our output assemblies in an automated way, and we accomplished it by performing a few quick steps:
- Move the common assembly information, including version, into a linked CommonAssemblyInfo.cs file
- Create a simple MSBuild script to replace the common assembly version number and build your solution
- Direct your build server to use the created MSBuild script

That's really all there is to it.  You can find all of the code from this post at https://github.com/srkirkland/BuildVersion.  Enjoy!
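As a final sanity check after a build, you can read the stamped version back at runtime with the standard reflection API. A minimal sketch:

using System;
using System.Reflection;

class VersionCheck
{
    static void Main()
    {
        // Reads the AssemblyVersion that the build stamped into CommonAssemblyInfo.cs.
        Version version = Assembly.GetExecutingAssembly().GetName().Version;
        Console.WriteLine("Running version {0}", version);
    }
}

Dropping something like this into a diagnostics page or an about box makes it trivial for a client to tell you exactly which build they are running.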

    Read the article

  • Built-in GZip/Deflate Compression on IIS 7.x

    - by Rick Strahl
IIS 7 improves internal compression functionality dramatically, making it much easier than in previous versions to take advantage of the compression that's built into the Web server. IIS 7 also supports dynamic compression, which allows automatic compression of content created in your own applications (ASP.NET or otherwise!). The scheme is based on content-type sniffing, so it works with any kind of Web application framework.

While static compression on IIS 7 is super easy to set up and is turned on by default for most text content (text/*, which includes HTML and CSS, as well as JavaScript, Atom, XAML, and XML), setting up dynamic compression is a bit more involved, mostly because the various default compression settings are set in multiple places down the IIS –> ASP.NET hierarchy. Let's take a look at the two approaches available:

Static Compression: Compresses static content from the hard disk. IIS can cache this content by compressing the file once, storing the compressed file on disk, and serving that compressed copy whenever the static content is requested and it hasn't changed. The overhead for this is minimal and it should be aggressively enabled.

Dynamic Compression: Works against application-generated output from applications like your ASP.NET apps. Unlike static content, dynamic content must be compressed every time the page that produces it regenerates its content. As such, dynamic compression has a much bigger runtime impact than static compression.

How Compression is configured

Compression in IIS 7.x is configured with two .config file elements in the <system.webServer> section. The elements can be set anywhere in the IIS/ASP.NET configuration pipeline, all the way from ApplicationHost.config down to the local web.config file. The following is from the default settings in ApplicationHost.config (in the %windir%\System32\inetsrv\config folder) on IIS 7.5, with a couple of small adjustments (added JSON output and enabled dynamic compression):

<?xml version="1.0" encoding="UTF-8"?>
<configuration>
  <system.webServer>
    <httpCompression directory="%SystemDrive%\inetpub\temp\IIS Temporary Compressed Files">
      <scheme name="gzip" dll="%Windir%\system32\inetsrv\gzip.dll" staticCompressionLevel="9" />
      <dynamicTypes>
        <add mimeType="text/*" enabled="true" />
        <add mimeType="message/*" enabled="true" />
        <add mimeType="application/x-javascript" enabled="true" />
        <add mimeType="application/json" enabled="true" />
        <add mimeType="*/*" enabled="false" />
      </dynamicTypes>
      <staticTypes>
        <add mimeType="text/*" enabled="true" />
        <add mimeType="message/*" enabled="true" />
        <add mimeType="application/x-javascript" enabled="true" />
        <add mimeType="application/atom+xml" enabled="true" />
        <add mimeType="application/xaml+xml" enabled="true" />
        <add mimeType="*/*" enabled="false" />
      </staticTypes>
    </httpCompression>
    <urlCompression doStaticCompression="true" doDynamicCompression="true" />
  </system.webServer>
</configuration>

You can find documentation on the httpCompression and urlCompression keys here, respectively:
http://msdn.microsoft.com/en-us/library/ms690689%28v=vs.90%29.aspx
http://msdn.microsoft.com/en-us/library/aa347437%28v=vs.90%29.aspx

The httpCompression Element – What and How to compress

Basically, httpCompression configures what types to compress and how to compress them. It specifies the DLL that handles gzip encoding and the types of documents that are to be compressed. Types are set up based on mime types, which are matched against the Content-Type headers returned in HTTP responses.
For example, I added the application/json mime type to my dynamic compression types above to allow that content to be compressed as well, since I have quite a bit of AJAX content that gets sent to the client.

The urlCompression Element – Enables and Disables Compression

The urlCompression element is a quick way to turn compression on and off. By default, static compression is enabled server-wide and dynamic compression is disabled server-wide. This might be a bit confusing, because the httpCompression element also has a doDynamicCompression attribute which is set to true by default, but the urlCompression attribute of the same name actually overrides it. The urlCompression element only has three attributes: doStaticCompression, doDynamicCompression and dynamicCompressionBeforeCache. The do*Compression attributes are the final determining factor for whether compression is enabled, so it's a good idea to be explicit! The default is doDynamicCompression="false", but doStaticCompression="true"!

Static Compression is enabled by Default, Dynamic Compression is not

Because static compression is very efficient in IIS 7, it's enabled by default server-wide and there probably is no reason to ever change that setting. Dynamic compression, however, since it's more resource intensive, is turned off by default. If you want to enable dynamic compression there are a few quirks you have to deal with, namely that enabling it in ApplicationHost.config doesn't work. Setting:

<urlCompression doDynamicCompression="true" />

in ApplicationHost.config appears to have no effect, and I had to move this element into my local web.config to make dynamic compression work. This is actually a smart choice, because you're not likely to want dynamic compression in every application on a server. Rather, dynamic compression should be applied selectively where it makes sense. However, nowhere is it documented that the setting in ApplicationHost.config doesn't work (more likely, it is overridden and disabled somewhere lower in the configuration hierarchy). So: remember to set doDynamicCompression="true" in web.config!!!

How Static Compression works

Static compression works against static content loaded from files on disk. Because this content is static and not bound to change frequently – such as .js, .css and static HTML content – it's fairly easy for IIS to compress and then cache the compressed content. The way this works is that IIS compresses the files into a special folder on the server's hard disk and then reads the content from this location whenever already-compressed content is requested and the underlying file resource has not changed. The semantics of serving an already compressed file are very efficient – IIS still checks for file changes, but otherwise just serves the already compressed file from the compression folder. The compression folder is located at:

%windir%\inetpub\temp\IIS Temporary Compressed Files\ApplicationPool\

If you look into the subfolders you'll find the compressed files. These files are pre-compressed, and IIS serves them directly to the client until the underlying files are changed. As I mentioned before – static compression is on by default and there's very little reason to turn that functionality off, as it is efficient and just works out of the box. The one tweak you might want to make is to set the compression level to maximum. Since IIS compresses each file only very infrequently, it makes sense to apply maximum compression.
You can do this with the staticCompressionLevel setting on the scheme element:

<scheme name="gzip" dll="%Windir%\system32\inetsrv\gzip.dll" staticCompressionLevel="9" />

Other than that, the default settings are probably just fine.

Dynamic Compression – not so fast!

By default dynamic compression is disabled, and that's actually quite sensible – you should use dynamic compression very carefully and think about what content you want to compress. In most applications it wouldn't make sense to compress *all* generated content, as that would add a significant amount of overhead. Scott Forsyth has a great post that details some of the performance numbers and how much impact dynamic compression has. Depending on how busy your server is, you can play around with compression and see what impact it has on your server's performance.

There are also a few settings you can tweak to minimize the overhead of dynamic compression. Specifically, the httpCompression key has a couple of CPU-related attributes that can help minimize the impact of dynamic compression on a busy server:

dynamicCompressionDisableCpuUsage
dynamicCompressionEnableCpuUsage

By default these are set to 90 and 50, which means that when CPU usage hits 90%, compression will be disabled until CPU utilization drops back down to 50%. Again, this is actually quite sensible, as it uses CPU power for compression when it's available and backs off when the threshold has been hit. It's a good way to put some of that extra CPU power on your big servers to use when utilization is low. Again, these settings are something you likely have to play with. I would probably set the upper limit a little lower than 90%, maybe around 70%, to make this a feature that kicks in only if there's lots of power to spare. I'm not really sure how accurate the CPU readings that IIS uses are, as CPU usage on Web servers can spike drastically even during low loads. Don't trust these settings blindly – do some load testing or monitor your server in a live environment to see what values make sense for your environment.

Finally, for dynamic compression I tend to add one mime type for JSON data, since a lot of my applications send large chunks of JSON data over the wire. You can do that with the application/json content type:

<add mimeType="application/json" enabled="true" />

What about Deflate Compression?

The default compression is GZip. The documentation hints that you can use a different compression scheme and mentions Deflate compression. And sure enough, you can change the compression settings to:

<scheme name="deflate" dll="%Windir%\system32\inetsrv\gzip.dll" staticCompressionLevel="9" />

to get deflate-style compression. The deflate algorithm produces slightly more compact output, so I tend to prefer it over GZip, but more HTTP clients (other than browsers) support GZip than Deflate, so be careful with this option if you build Web APIs. I also had some issues with the above value actually being applied right away: changing the scheme in ApplicationHost.config didn't show up on the site immediately, and it required a full IISReset before I saw the switch over to deflate-compressed content. Content was slightly more compressed with deflate – I'm not sure whether that's worth the slightly less common compression type, but the option at least is available.

IIS 7 finally makes GZip Easy

In summary, IIS 7 finally makes GZip easy, even if the configuration settings are a bit obtuse and the documentation is seriously lacking.
But once you know the basic settings I've described here, and the fact that you can override all of this in your local web.config, it's pretty straightforward to configure GZip support and tweak it exactly to your needs. Static compression is a total no-brainer, as it adds very little overhead compared to direct static file serving and provides solid compression. Dynamic compression is a little more tricky, as it does add some overhead to servers, so it probably will require some tweaking to get the right balance of CPU load versus compression ratios. Looking at large sites like Amazon, Yahoo, NewEgg, etc. – they all use GZip compression.
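If you want to verify from the client side which encoding your server actually returns, a small console program like the following sketch works (the URL is a placeholder; point it at your own IIS site). Note that because we set the Accept-Encoding header manually, the response body would arrive compressed – here we only inspect the header:

using System;
using System.Net;

class CompressionCheck
{
    static void Main()
    {
        // Placeholder URL - replace with a page on your own server.
        var request = (HttpWebRequest)WebRequest.Create("http://localhost/");
        request.Headers[HttpRequestHeader.AcceptEncoding] = "gzip,deflate";

        using (var response = (HttpWebResponse)request.GetResponse())
        {
            // "gzip" or "deflate" here means IIS compressed the response;
            // an empty value means compression did not kick in for this content type.
            Console.WriteLine("Content-Encoding: {0}", response.ContentEncoding);
        }
    }
}

Running this against a static .css file and then against a dynamic ASP.NET page is a quick way to confirm that both the staticTypes and dynamicTypes settings are doing what you expect.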

    Read the article

  • CodePlex Daily Summary for Tuesday, January 11, 2011

Popular Releases:

- ArcGIS Editor for OpenStreetMap: ArcGIS Editor for OpenStreetMap 1.1 beta2 — This is the beta2 release for the ArcGIS Editor for OpenStreetMap version 1.1. Changes from version 1.0: multi-part geometries are now supported; homogeneous relations (consisting of only lines or only polygons) are converted into the appropriate multi-part geometry; mixed relations and super relations are maintained and tracked in a stand-alone relation table. The underlying editing logic has changed: as opposed to tracking the editing changes upon "Save edit" or "Stop edit", the changes a...
- VSSpeedster - Parallel Builds for VS: VSSpeedster 1.2 (beta) — Improved parallel builds; cancel a running parallel build using Ctrl+Break.
- ASP.NET Comet Ajax Library (Reverse Ajax - Server Push): Multiple server ASP.NET Reverse Ajax — This sample project demonstrates how easy it is to scale your web applications via PokeIn.
- Hawkeye - The .Net Runtime Object Editor: Hawkeye 1.2.5 — If you are running x86 Windows and installed Release 1.2.4, you should consider upgrading to this release (1.2.5), as Hawkeye appears to be broken on x86 OS. I apologize for the inconvenience, but it appears Hawkeye 1.2.4 (and probably previous versions) doesn't run on x86 Windows (see issue http://hawkeye.codeplex.com/workitem/7791). This maintenance release fixes this broken behavior. This release comes in two flavors: Hawkeye.125.N2 is the standard .NET 2 build, was compile...
- Phalanger - The PHP Language Compiler for the .NET Framework: 2.0 (January 2011) — Another release build for daily use; it contains many new features, enhanced compatibility with the latest PHP open-source applications, and several issue fixes. To improve the performance of your application using MySQL, please use the Managed MySQL Extension for Phalanger. Changes made within this release include the following: new features available only in Phalanger; full support of Multi-Script-Assemblies was implemented; you can build your application into several DLLs now; deploy them separately t...
- EnhSim: EnhSim 2.3.0 — This release supports WoW patch 4.03a at level 85. To use this release, you must have the Microsoft Visual C++ 2010 Redistributable Package installed (http://www.microsoft.com/downloads/en/details.aspx?FamilyID=A7B7A05E-6DE6-4D3A-A423-37BF0912DB84). To use the GUI you must have the .NET 4.0 Framework installed (http://www.microsoft.com/downloads/en/details.aspx?FamilyID=9cfb2d51-5ff4-4491-b0e5-b386f32c0992). Changed how flame shoc...
- AutoLoL: AutoLoL v1.5.3 — A message will be displayed when there's an update available. Shows a list of recent mastery files in the Editor tab (requested by quite a few people). Updater: update information is now scrollable; added a button to launch AutoLoL after updating is finished; updated the UI to match that of AutoLoL. Fix: detects and resolves 'Read Only' state on Version.xml.
- Extended WPF Toolkit: Extended WPF Toolkit - 1.3.0 — What's in the 1.3.0 release? BusyIndicator, ButtonSpinner, ChildWindow, ColorPicker (updated, breaking changes), DateTimeUpDown (new control), Magnifier (new control), MaskedTextBox (new control), MessageBox, NumericUpDown, RichTextBox, RichTextBoxFormatBar (updated), .NET 3.5 binaries and source. Please note: the Extended WPF Toolkit 3.5 is dependent on .NET Framework 3.5 and the WPFToolkit. You must install .NET Framework 3.5 and the WPFToolkit in order to use any features in the To...
- sNPCedit: sNPCedit v0.9d — Added an elementclient coordinate catcher to catch coordinates: select a target in game (i.e. your char, an NPC or a monster), then click the button and the coordinates and direction will be transferred to the selected row in the table. Corrected labels from Rot to Direction (because it is a vector).
- Free Silverlight & WPF Chart Control - Visifire: Visifire SL and WPF Charts v3.6.7 beta Released — Today Visifire is released along with one new feature: the Inlines property has been implemented in Title, so you can now customize the text content in Title. Please check the Visifire documentation for more information. This release contains fixes for the following bugs: styles for chart elements were not working as expected; bar chart was not drawn properly if the AxisMinimum property was set to a value above the zero base line; in a DateTime axis, AxisLabels were no...
- Ionics Isapi Rewrite Filter: 2.1 latest stable — V2.1 is stable and is in maintenance mode. This is v2.1.1.25, a bug-fix release; there are no new features. Work items: 28629 29172 28722 27626 28074 29164 27659 27900. Many documentation updates and fixes; proper x64 build environment. This release includes x64 binaries in zip form, but no x64 MSI file; you'll have to manually install x64 servers, following the instructions in the documentation.
- StyleCop for ReSharper: StyleCop for ReSharper 5.1.14980.000 — A considerable amount of work has gone into this release, with a huge focus on performance around the violation scanning subsystem: caching added to reduce IO operations around reading and merging of settings files; caching added to reduce creation of expensive objects. Users should notice a considerable perf boost and a decrease in memory usage. Bug fixes: StyleCop's new ObjectBasedEnvironment object does not resolve the StyleCop installation path, thus it does not return the correct path ...
- VivoSocial: VivoSocial 7.4.1 — New release with bug fixes and updates for performance.
- UltimateJB: Ultimate JB 2.03 PL3 KAKAROTO + HERMES + Spoof 3.5 — Here is a release many have been eagerly waiting for: the PL3 KAKAROTO version integrates his latest changes and now supports firmware 2.43!!! Summary: UltimateJB203PSXXXDEFAULTKAKAROTO => no spoof, but available for the following PS3 firmwares: 3.41_kiosk 3.41 3.40 3.30 3.21 3.15 3.10 3.01 2.76 2.70 2.60 2.53 2.43; UltimateJB203PS341_HERMES => no spoof, but hermes version 4b; UltimateJB203PS341HERMESSPOOF35X => hermes 4b + spoof of firmwares 3.50 and 3.55 ...
- .NET Extensions - Extension Methods Library for C# and VB.NET: Release 2011.03 — Added lots of new extensions and new projects for MVC and Entity Framework: object.FindTypeByRecursion, Int32.InRange, String.RemoveAllSpecialCharacters, String.IsEmptyOrWhiteSpace, String.IsNotEmptyOrWhiteSpace, String.IfEmptyOrWhiteSpace, String.ToUpperFirstLetter, String.GetBytes, String.ToTitleCase, String.ToPlural, DateTime.GetDaysInYear, DateTime.GetPeriodOfDay, IEnumberable.RemoveAll, IEnumberable.Distinct, ICollection.RemoveAll, IList.Join, IList.Match, IList.Cast, Array.IsNullOrEmpty, Array.W...
- EFMVC - ASP.NET MVC 3 and EF Code First: EFMVC 0.5 - ASP.NET MVC 3 and EF Code First — Demo web app using ASP.NET MVC 3, Razor and EF Code First.
- VidCoder: 0.8.0 — Added x64 version. Made the audio output preview more detailed and accurate. If the chosen encoder or mixdown is incompatible with the source, the fallback that will be used is displayed. Added "Auto" to the audio mixdown choices. Reworked non-anamorphic size calculation to work better with non-standard pixel aspect ratios and cropping. Reworked Custom anamorphic to be more intuitive and allow display width to be set automatically (thanks, Statick). Allowing higher bitrates for 6-ch...
- .NET Voice Recorder: Auto-Tune Release — This is the source code and binaries to accompany the article on the Coding 4 Fun website. It is the Auto Tuner release of the .NET Voice Recorder application.
- BloodSim: BloodSim - 1.3.2.0 — The simulation log is now automatically disabled and hidden when running 10 or more iterations. Hit and Expertise are now entered by rating, and include an option for a racial Expertise bonus. Added an option for the boss to use a periodic magic ability (Dragon Breath). Added an option for the boss to periodically Enrage, gaining a damage/attack-speed buff.
- ASP.NET MVC CMS (Using CommonLibrary.NET): CommonLibrary.NET CMS 0.9.5 Alpha — A simple yet powerful CMS system in ASP.NET MVC 2 using C# 4.0. ActiveRecord-based components for blogs, widgets, pages, parts, events, feedback, blogrolls, and links. Includes several widgets (tag cloud, archives, recent, user cloud, links, twitter, blog roll and more). Built using the http://commonlibrarynet.codeplex.com framework (uses TDD, DDD, models/entities, code generation). Can run with in-memory repositories or a SQL Server database. See the Documentation tab for ins...

New Projects:

- .NET Event Spy — Full information available here: http://martincarolan.blogspot.com/2011/01/secret-project.html. A simple development/debugging tool that hooks into and monitors events raised on any .NET object.
- 3DTweet — 3DTweet is an effort to make tweets appear in an aesthetic manner to users of Windows Phone. It is developed using VS2010 Express.
- Agile .NET with SCRUM and XP — Source code for the book Apress Professional Agile .NET Development with SCRUM and XP.
- Beskid Niski Agroturystyka — Travel Poland: tourism in the Low Beskids. Agritourism in the village of Losie on the Klimkowka reservoir.
- Coding better — A better coding lab for new .NET features.
- FBApp — A simple Facebook app I was busy with over the holidays as an experiment to try out the Facebook API. It is currently not complete, but I wanted to get some criticism on it for my first web app. It is developed using WPF and C#.
- Freemium Helper for WebMatrix — Provides an easy way to apply the Freemium model to your WebMatrix site. Using different user groups (or roles), it allows you to easily enable or disable features on your pages depending on the stock-keeping unit the user has paid for.
- Haversine Distance Calculation — A very small project that implements the Haversine formula, which calculates the great circle distance between two points on the earth's surface. The points are latitude/longitude coordinates in decimal degrees. The formula is implemented client side with JavaScript and server side with C# (a generic sketch of the formula follows this list).
- Hexa.Core — Hexa.Core is our implementation of the Domain Driven Design architecture, also providing a set of helper classes for ASP.NET and WCF development.
- Minecraft NBT reader — A simple Minecraft NBT reader.
- MobSoft — MobSoft is a Silverlight-based news application designed to test the new functionality in Silverlight 4.
- netduino Helpers — The 'netduino Helpers' is a C# driver set for common hardware components, featuring convenient wrappers around complex .NET Micro Framework features such as analog joysticks, a real-time clock, an 8*8 LED matrix, a shift register, a runtime assembly & resource loader, bitmaps, etc.
- NewsGator Social Connectors for SharePoint 2010 — This project contains social connectors for the NewsGator SharePoint platform and supports sending messages to Twitter and LinkedIn just by putting tags in the text: #li for sending to LinkedIn, #tw for sending to Twitter.
- Non Profit Contact Relationship Management — Non-profit contact relationship management software intended to help those in the non-profit arena manage donors, sponsors, and prospects.
- OpenAGE — OpenAGE, short for Open Advance Game Engine, is aimed at developing a new advanced game engine strictly for the PC and Xbox 360 gaming systems, using XNA 4.0 and Visual Studio 2010.
- OpenAutoPoster — OpenAutoPoster automates some of the boring everyday tasks of aggregating, linking and posting that haunt content creators.
- Phefer WoodTurning Sketcher — Draw out your own turnings before you hit the wood. Import images and trace around them; print them out with the length and width measurements.
- Simple Script Interpreter - A simple GPLEX/GPPG (Lex/Yacc) Primer — Simple Script is a simple implementation of an interpreter language built with GPlex and Gppg (Lex/Yacc). It's developed in C#.
- SP2010Tutorials — Code for learning SharePoint 2010.
- The Social Developer — A social developer tool for programmers to create and share projects using the .NET Framework and other technologies, and to integrate it into a socialistic approach of sharing the workload and the resources needed to develop high-level applications.
- Traffic-sign Classification — Traffic sign shape classification and localization.
- unnamedyet — Experimental! For systems researchers. Goal: to develop a praxis such that, with a finite and discrete set of descriptive terms, it is possible to self-demonstrate and execute any given proposition.
- VSSpeedster - Parallel Builds for VS — Improve the performance of your Visual Studio: parallel builds integrated into Visual Studio.
- Webservice Xslt Transformer WebPart for SharePoint 2010 — The Dynamic Webservice Xslt Transformer WebPart makes it much easier for SharePoint developers and administrators to call a webservice and transform the results directly to HTML by providing their own custom XSLT. The properties can be set on the webpart by using the UI.
- WilWaNet.HASH — An ASP.NET MVC web site designed for tracking nutrition for the purposes of losing weight. Tracks calories, fat calories, fat grams and saturated fat along with daily weight and exercise. Includes daily Basic Metabolic Rate calculation and graphing functions.
- WP7 Try it 01 — The first try in WP7.
- WPF TryIt 01 — A WPF application for managing household registration records.
- WX Alerter CAP/XML — An NWS alerter using the CAP 1.1 alerting protocol. The goal of this project is to consume weather alerts from the NWS site. The user will select the city or SAME code/zone to watch. As alerts trigger, notices will display and info will fill the Alert tab.
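Since the Haversine Distance Calculation project above centers on a single formula, here is what a generic server-side C# implementation of it typically looks like (a sketch, not the project's actual code):

using System;

static class Haversine
{
    // Great-circle distance in kilometers between two points given as
    // latitude/longitude in decimal degrees.
    public static double Distance(double lat1, double lon1, double lat2, double lon2)
    {
        const double EarthRadiusKm = 6371.0;

        double dLat = ToRadians(lat2 - lat1);
        double dLon = ToRadians(lon2 - lon1);

        // a = sin^2(dLat/2) + cos(lat1) * cos(lat2) * sin^2(dLon/2)
        double a = Math.Sin(dLat / 2) * Math.Sin(dLat / 2) +
                   Math.Cos(ToRadians(lat1)) * Math.Cos(ToRadians(lat2)) *
                   Math.Sin(dLon / 2) * Math.Sin(dLon / 2);

        double c = 2 * Math.Atan2(Math.Sqrt(a), Math.Sqrt(1 - a));
        return EarthRadiusKm * c;
    }

    static double ToRadians(double degrees)
    {
        return degrees * Math.PI / 180.0;
    }

    static void Main()
    {
        // Example: distance between two arbitrary coordinate pairs.
        Console.WriteLine(Distance(52.2297, 21.0122, 41.8919, 12.5113));
    }
}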

    Read the article

  • Ops Center 12c - Provisioning Solaris Using a Card-Based NIC

    - by scottdickson
    It's been a long time since last I added something here, but having some conversations this last week, I got inspired to update things. I've been spending a lot of time with Ops Center for managing and installing systems these days.  So, I suspect a number of my upcoming posts will be in that area. Today, I want to look at how to provision Solaris using Ops Center when your network is not connected to one of the built-in NICs.  We'll talk about how this can work for both Solaris 10 and Solaris 11, since they are pretty similar.  In both cases, WANboot is a key piece of the story. Here's what I want to do:  I have a Sun Fire T2000 server with a Quad-GbE nxge card installed.  The only network is connected to port 2 on that card rather than the built-in network interfaces.  I want to install Solaris on it across the network, either Solaris 10 or Solaris 11.  I have met with a lot of customers lately who have a similar architecture.  Usually, they have T4-4 servers with the network connected via 10GbE connections. Add to this mix the fact that I use Ops Center to manage the systems in my lab, so I really would like to add this to Ops Center.  If possible, I would like this to be completely hands free.  I can't quite do that yet. Close, but not quite. WANBoot or Old-Style NetBoot? When a system is installed from the network, it needs some help getting the process rolling.  It has to figure out what its network configuration (IP address, gateway, etc.) ought to be.  It needs to figure out what server is going to help it boot and install, and it needs the instructions for the installation.  There are two different ways to bootstrap an installation of Solaris on SPARC across the network.   The old way uses a broadcast of RARP or more recently DHCP to obtain the IP configuration and the rest of the information needed.  The second is to explicitly configure this information in the OBP and use WANBoot for installation WANBoot has a number of benefits over broadcast-based installation: it is not restricted to a single subnet; it does not require special DHCP configuration or DHCP helpers; it uses standard HTTP and HTTPS protocols which traverse firewalls much more easily than NFS-based package installation.  But, WANBoot is not available on really old hardware and WANBoot requires the use o Flash Archives in Solaris 10.  Still, for many people, this is a great approach. As it turns out, WANBoot is necessary if you plan to install using a NIC on a card rather than a built-in NIC. Identifying Which Network Interface to Use One of the trickiest aspects to this process, and the one that actually requires manual intervention to set up, is identifying how the OBP and Solaris refer to the NIC that we want to use to boot.  The OBP already has device aliases configured for the built-in NICs called net, net0, net1, net2, net3.  The device alias net typically points to net0 so that when you issue the command  "boot net -v install", it uses net0 for the boot.  Our task is to figure out the network instance for the NIC we want to use.  We will need to get to the OBP console of the system we want to install in order to figure out what the network should be called.  I will presume you know how to get to the ok prompt.  Once there, we have to see what networks the OBP sees and identify which one is associated with our NIC using the OBP command show-nets. SunOS Release 5.11 Version 11.0 64-bit Copyright (c) 1983, 2011, Oracle and/or its affiliates. All rights reserved. 
{4} ok banner
Sun Fire T200, No Keyboard
Copyright (c) 1998, 2010, Oracle and/or its affiliates. All rights reserved.
OpenBoot 4.30.4.b, 32640 MB memory available, Serial #69057548.
Ethernet address 0:14:4f:1d:bc:c, Host ID: 841dbc0c.
{4} ok show-nets
a) /pci@7c0/pci@0/pci@2/network@0,1
b) /pci@7c0/pci@0/pci@2/network@0
c) /pci@780/pci@0/pci@8/network@0,3
d) /pci@780/pci@0/pci@8/network@0,2
e) /pci@780/pci@0/pci@8/network@0,1
f) /pci@780/pci@0/pci@8/network@0
g) /pci@780/pci@0/pci@1/network@0,1
h) /pci@780/pci@0/pci@1/network@0
q) NO SELECTION
Enter Selection, q to quit: d
/pci@780/pci@0/pci@8/network@0,2 has been selected.
Type ^Y ( Control-Y ) to insert it in the command line.
e.g. ok nvalias mydev ^Y
for creating devalias mydev for /pci@780/pci@0/pci@8/network@0,2
{4} ok devalias
...
net3 /pci@7c0/pci@0/pci@2/network@0,1
net2 /pci@7c0/pci@0/pci@2/network@0
net1 /pci@780/pci@0/pci@1/network@0,1
net0 /pci@780/pci@0/pci@1/network@0
net  /pci@780/pci@0/pci@1/network@0
...

By looking at the devalias and the show-nets output, we can see that our Quad-GbE card must be the device nodes starting with /pci@780/pci@0/pci@8/network@0.  The cable for our network is plugged into the 3rd slot, so the device address for our network must be /pci@780/pci@0/pci@8/network@0,2.

With that, we can create a device alias for our network interface.  Naming the device alias may take a little bit of trial and error, especially in Solaris 11, where the device alias seems to matter more with the new virtualized network stack. So far in my testing, since this is the "next" network interface to be used, I have found success in naming it net4, even though it's a NIC in the middle of a card that might, by rights, be called net6 (assuming the 0th interface on the card is the next interface identified by Solaris and this is the 3rd interface on the card).  So, we will call it net4.  We need to assign a device alias to it:

{4} ok nvalias net4 /pci@780/pci@0/pci@8/network@0,2
{4} ok devalias
net4 /pci@780/pci@0/pci@8/network@0,2
...

We also may need the MAC for this particular interface, so let's get it, too.  To do this, we go to the device and interrogate its properties:

{4} ok cd /pci@780/pci@0/pci@8/network@0,2
{4} ok .properties
assigned-addresses 82060210 00000000 03000000 00000000 01000000
                   82060218 00000000 00320000 00000000 00008000
                   82060220 00000000 00328000 00000000 00008000
                   82060230 00000000 00600000 00000000 00100000
local-mac-address  00 21 28 20 42 92
phy-type           mif
...

From this, we can see that the MAC for this interface is 00:21:28:20:42:92.  We will need this later. This is all we need to do at the OBP.  Now, we can configure Ops Center to use this interface.

Network Boot in Solaris 10

Solaris 10 turns out to be a little simpler than Solaris 11 for this sort of network boot, since WANBoot in Solaris 10 fetches a specified Flash Archive. In order to install the system using Ops Center, it is necessary to create an OS Provisioning profile and its corresponding plan.  I am going to presume that you already know how to do this within Ops Center 12c, and I will just cover the differences between a regular profile and a profile that can use an alternate interface.

Create an OS Provisioning profile for Solaris 10 as usual.  However, when you specify the network resources for the primary network, click on the name of the NIC, probably GB_0, and rename it to GB_N/netN, where N is the instance number you used previously in creating the device alias.  This is where the trial and error may come into play.
You may need to try a few instance numbers before you, the OBP, and Solaris all agree on the instance number.  Mark this as the boot network. For Solaris 10, you ought to be able to then apply the OS Provisioning profile to the server, and it should install using that interface.  And if you put your cards in the same slots and plug the networks into the same NICs, this profile is reusable across multiple servers.

Why This Works

If you watch the console as Solaris boots during the OSP process, Ops Center is going to look for the device alias netN.  Since WANBoot requires a device alias called just net, Ops Center takes the value of your netN device alias and assigns that device to the net alias.  That means that boot net will automatically use this device.  Very cool!  Here's a trace from the console as Ops Center provisions a server:

Sun Fire T200, No Keyboard
Copyright (c) 1998, 2010, Oracle and/or its affiliates. All rights reserved.
OpenBoot 4.30.4.b, 32640 MB memory available, Serial #69057548.
Ethernet address 0:14:4f:1d:bc:c, Host ID: 841dbc0c.
auto-boot? = false
{0} ok
{0} ok printenv network-boot-arguments
network-boot-arguments = host-ip=10.140.204.234,router-ip=10.140.204.1,subnet-mask=255.255.254.0,hostname=atl-sewr-52,client-id=0100144F1DBC0C,file=http://10.140.204.22:5555/cgi-bin/wanboot-cgi
{0} ok
{0} ok devalias net
net /pci@780/pci@0/pci@1/network@0
{0} ok devalias net4
net4 /pci@780/pci@0/pci@8/network@0,2
{0} ok devalias net /pci@780/pci@0/pci@8/network@0,2
{0} ok setenv network-boot-arguments host-ip=10.140.204.234,router-ip=10.140.204.1,subnet-mask=255.255.254.0,hostname=atl-sewr-52,client-id=0100144F1DBC0C,file=http://10.140.204.22:8004/cgi-bin/wanboot-cgi
network-boot-arguments = host-ip=10.140.204.234,router-ip=10.140.204.1,subnet-mask=255.255.254.0,hostname=atl-sewr-52,client-id=0100144F1DBC0C,file=http://10.140.204.22:8004/cgi-bin/wanboot-cgi
{0} ok
{0} ok boot net - install
Boot device: /pci@780/pci@0/pci@8/network@0,2  File and args: - install
/pci@780/pci@0/pci@8/network@0,2: 1000 Mbps link up
<time unavailable> wanboot info: WAN boot messages->console
<time unavailable> wanboot info: configuring /pci@780/pci@0/pci@8/network@0,2

See what happened?  Ops Center looked for the network device alias called net4 that we specified in the profile, took the value from it, and made it the net device alias for the boot.  Pretty cool!

WANBoot and Solaris 11

Solaris 11 requires an additional step, since the Automated Installer in Solaris 11 uses the MAC address of the network interface to figure out which manifest to use for system installation.  In order to make sure this is available, we have to take an extra step to associate the MAC of the NIC on the card with the host.  So, in addition to creating the device alias like we did above, we also have to declare to Ops Center that the host has this new MAC.

Declaring the NIC

Start out by discovering the hardware as usual.  Once you have discovered it, take a look under the Connectivity tab to see what networks it has discovered.  In the case of this system, it shows the 4 built-in networks, but not the networks on the additional cards.  These are not directly visible to the system controller.  In order to add the additional network interface to the hardware asset, it is necessary to declare it.  We will declare that we have a server with this additional NIC, but we will also specify the existing GB_0 network so that Ops Center can associate the right resources together.
The GB_0 entry acts as sort of a key to tie our new declaration to the already-discovered system.  Go to the Assets tab, select All Assets, and then in the Actions tab, select Add Asset.  Rather than going through a discovery this time, we will manually declare a new asset. When we declare it, we will give the hostname, IP address, and system model that match those that have already been discovered.  Then, we will declare both GB_0 with its existing MAC and the new GB_4 with its MAC.  Remember that we collected the MAC for GB_4 when we created its device alias. After you declare the asset, you will see the new NIC in the Connectivity tab for the asset.  You will notice that only the NICs you listed when you declared it are seen now.  If you want Ops Center to see all of the existing NICs as well as the additional one, declare them as well: add the other GB_1, GB_2, GB_3 links and their MACs just as you did GB_0 and GB_4.

Installing the OS

Once you have declared the asset, you can create an OS Provisioning profile for Solaris 11 in the same way that you did for Solaris 10.  The only difference from any other provisioning profile you might have created already is the network to use for installation.  Again, use GB_N/netN, where N is the interface number you used for your device alias and in your declaration.  And away you go.  When the system boots from the network, the Automated Installer (AI) is able to see which system manifest to use, based on the new MAC that was associated, and the system gets installed.

{0} ok
{0} ok printenv network-boot-arguments
network-boot-arguments = host-ip=10.140.204.234,router-ip=10.140.204.1,subnet-mask=255.255.254.0,hostname=atl-sewr-52,client-id=01002128204292,file=http://10.140.204.22:5555/cgi-bin/wanboot-cgi
{0} ok
{0} ok devalias net
net /pci@780/pci@0/pci@1/network@0
{0} ok devalias net4
net4 /pci@780/pci@0/pci@8/network@0,2
{0} ok devalias net /pci@780/pci@0/pci@8/network@0,2
{0} ok setenv network-boot-arguments host-ip=10.140.204.234,router-ip=10.140.204.1,subnet-mask=255.255.254.0,hostname=atl-sewr-52,client-id=01002128204292,file=http://10.140.204.22:5555/cgi-bin/wanboot-cgi
network-boot-arguments = host-ip=10.140.204.234,router-ip=10.140.204.1,subnet-mask=255.255.254.0,hostname=atl-sewr-52,client-id=01002128204292,file=http://10.140.204.22:5555/cgi-bin/wanboot-cgi
{0} ok
{0} ok boot net - install
Boot device: /pci@780/pci@0/pci@8/network@0,2  File and args: - install
/pci@780/pci@0/pci@8/network@0,2: 1000 Mbps link up
<time unavailable> wanboot info: WAN boot messages->console
<time unavailable> wanboot info: configuring /pci@780/pci@0/pci@8/network@0,2
...
SunOS Release 5.11 Version 11.0 64-bit
Copyright (c) 1983, 2011, Oracle and/or its affiliates. All rights reserved.
Remounting root read/write
Probing for device nodes ...
Preparing network image for use
Downloading solaris.zlib
--2012-02-17 15:10:17--  http://10.140.204.22:5555/var/js/AI/sparc//solaris.zlib
Connecting to 10.140.204.22:5555... connected.
HTTP request sent, awaiting response... 200 OK
Length: 126752256 (121M) [text/plain]
Saving to: `/tmp/solaris.zlib'
100%[======================================>] 126,752,256 28.6M/s   in 4.4s
2012-02-17 15:10:21 (27.3 MB/s) - `/tmp/solaris.zlib' saved [126752256/126752256]

Conclusion

So, why go to all of this trouble?  More and more, I find that customers are wiring their data centers to use only higher-speed networks – 10GbE only to the hosts.
Some customers are moving aggressively toward consolidated networks, combining storage and network traffic on CNA NICs.  All of this means that network-based provisioning cannot rely exclusively on the built-in network interfaces, so it's important to be able to provision a system using interfaces other than the built-in ones.  It turns out that this is pretty straightforward for both Solaris 10 and Solaris 11 and fits into the Ops Center deployment process quite nicely. Hopefully, you will be able to use this as you build out your own private cloud solutions with Ops Center.
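For quick reference, here is the OBP-side procedure condensed into its essential commands (a recap of the steps above; the device path is the one from this example and will differ on your hardware):

{4} ok show-nets                                      \ list network devices; note the path for your NIC
{4} ok nvalias net4 /pci@780/pci@0/pci@8/network@0,2  \ create a persistent alias for it
{4} ok cd /pci@780/pci@0/pci@8/network@0,2
{4} ok .properties                                    \ record local-mac-address (needed for Solaris 11 AI)

Everything else – swapping the net alias and setting network-boot-arguments – is handled by Ops Center at provisioning time, as the console traces above show.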

    Read the article

  • Building The Right SharePoint Team For Your Organization

    - by Mark Rackley
I see the question posted fairly often asking what kind of SharePoint team an organization should have. How many people do I need? What roles do I need to fill? What is best for my organization? Well, just like every other answer in SharePoint, the correct answer is "it depends". Do you ever get sick of hearing that??? I know I do… So, let me give you my thoughts and opinions based upon my experience and what I've seen, and let you come to your own conclusions.

What are the possible SharePoint roles?

I guess the first thing you need to understand is the different roles that exist in SharePoint (and there are LOTS). Remember, SharePoint is a massive beast and you will NOT find one person who can do it all. If you are hoping to find that person you will be sorely disappointed. For the most part this is true in both SharePoint 2007 and 2010; however, things are generally improved in 2010 and easier for junior individuals to grasp.

SharePoint Administrator

The absolutely positively only role that you should not be without, no matter the size of your organization or SharePoint deployment, is a SharePoint administrator. These guys are essential to keeping things running and figuring out what's wrong when things aren't running well. These unsung heroes do more before 10 am than I do all day. The bad thing is, when these guys are awesome, you don't even know they exist because everything is running so smoothly. You should definitely invest some time and money here to make sure you have some competent if not rockstar help. You need an admin who truly loves SharePoint and will go that extra mile when necessary. Let me give you a real world example of what I'm talking about:

We have a rockstar admin… and I'm sure she's sick of my throwing her name around, so she'll just have to live with remaining anonymous in this post… sorry Lori… Anyway! A couple of weeks ago our server teams came to us and said: "Hi Lori, I'm finalizing the MOSS servers and doing updates that require a restart; can I restart them?" Seems like a harmless request from your server team, does it not? Sure, go ahead and apply the patches and reboot during our scheduled maintenance window. No problem, right? Sounded fair to me… but no… not to our fearless SharePoint admin: "I need a complete list of patches that will be applied. There is an update out there that will break SharePoint… KB973917 is the patch that has been shown to cause issues."

What? You mean Microsoft released a patch that would actually adversely affect SharePoint? If we did NOT have a rockstar admin, our server team would have applied these patches, and then when some problem occurred in SharePoint we'd have had to go through the fun task of tracking down exactly what caused the issue and resolving it. How much time would that have taken? If you have a junior SharePoint admin, or an admin who's not out there staying on top of what's going on, you could spend days tracking down something as simple as a patch you should not have applied. I will even go as far as to say the only SharePoint rockstar you NEED in your organization is a SharePoint admin. You can always outsource really complicated development projects or bring in a rockstar contractor every now and then to make sure you aren't way off track in other areas. For your day-to-day sanity and to keep SharePoint running smoothly, you need an awesome admin. Some rockstars in this category are: Ben Curry, Mike Watson, Joel Oleson, Todd Klindt, Shane Young, John Ferringer, Sean McDonough, and of course Lori Gowin.
SharePoint Developer Another essential role for your SharePoint deployment is a SharePoint developer. Things do start to get a little hazy here, and there are many flavors of “developers”. Are you writing custom code? Using SharePoint Designer? What about SharePoint branding? Are all of these considered developers? I would say yes. Are they interchangeable? I’d say no. Development in SharePoint is such a large beast in itself. I would say that it’s not so large that you can’t know it all well, but it is so large that there are many people who specialize in one particular category. If you are lucky enough to have someone on staff who knows it all well, you’d better make sure they are well taken care of, because those guys are ready-made to move over to a consulting role and charge you 3 times what you are probably paying them. :) Some of the all-around rockstars are Eric Shupps, Andrew Connell (go Razorbacks), Rob Foster, Paul Schaeflein, and Todd Bleeker. SharePoint Power User/No-Code Solutions Developer These SharePoint Swiss Army Knives are essential for quick wins in your organization. These people can twist the out-of-the-box functionality to make it do things you would not even imagine. Give these guys SharePoint Designer, jQuery, InfoPath, and a little time, and they will create views, dashboards, and KPIs that will blow your mind and give your execs the “wow” they are looking for. Not only can they deliver that wow factor, but they can mash up, merge, and really help make your SharePoint application usable and deliver an overall better user experience. Before you hand off a project to your SharePoint custom code developer, let one of these rockstars look at it and show you what they can do (probably in less time). I would say the second most important role you can fill in your organization is one of these guys. Rockstars in this category are Christina Wheeler, Laura Rogers, Jennifer Mason, and Mark Miller. SharePoint Developer – Custom Code If you want to really integrate SharePoint into your legacy systems, or really twist it and make it bend to your will, you are going to have to open up Visual Studio and write some custom code.  Remember, SharePoint is essentially just a big, huge, ginormous .NET application, so you CAN write code to make it do ANYTHING, but do you really want to spend the time and effort to do so? At some point with every other form of SharePoint development you are going to run into SOME limitation (SPD workflows is the big one that comes to mind). If you truly want to knock down all the walls then custom development is the way to go. PLEASE keep in mind when you are looking for a custom code developer that a .NET developer does NOT equal a SharePoint developer. Just SOME of the things these guys write are: custom workflows, custom web parts, web service functionality, imports of data from legacy systems, exports of data to legacy systems, custom actions, event receivers, and service applications (2010). These guys are also the ones generally responsible for packaging everything up into solution packages (you are doing that, right?). Rockstars in this category are Phil Wicklund, Christina Wheeler, Geoff Varosky, and Brian Jackett. SharePoint Branding “But it LOOKS like SharePoint!” Somebody call the WAAAAAAAAAAAAHMbulance… Themes, master pages, page layouts, zones, and over 2000 styles in CSS… these guys not only have to be comfortable with all of SharePoint’s quirks and pain points when branding, but they have to know it TWICE for publishing and non-publishing sites.
Not only that, but these guys really need to have an eye for graphic design and be able to translate the ramblings of business into something visually stunning. They also have to be comfortable with XSLT and XML, and be able to hand off what they do to your custom developers for them to package as solutions (which you are doing, right?). These rockstars include Heather Waterman, Cathy Dew, and Marcy Kellar. SharePoint Architect SharePoint Architects are generally SharePoint Admins or Developers who have moved into more of a BA role? Is that fair to say? These guys really have a grasp and understanding of what SharePoint IS and what it can do. These guys help you structure your farms to meet your needs and help you design your applications the correct way. It’s always a good idea to bring in a rockstar SharePoint Architect to do a sanity check and make sure you aren’t doing anything stupid.  Most organizations probably do not have a rockstar architect on staff. These guys are generally brought in at the deployment of a farm, the upgrade of a farm, or for large development projects. I personally also find architects very useful for sitting down with the business to translate their needs into what SharePoint can do. A good architect will be able to pick out what can be done out-of-the-box and what has to be custom built, and hand those requirements to the development staff. Architects can generally fill in as an admin or a developer when needed. Some rockstar architects are Rick Taylor, Dan Usher, Bill English, Spence Harbar, Neil Hodgkins, Eric Harlan, and Bjørn Furuknap. Other Roles / Specialties On top of all these other roles you also get people who specialize in things like Reporting, BDC (BCS in 2010), Search, Performance, Security, Project Management, etc... etc... etc... Again, most organizations will not have one of these gurus on staff; they’ll just pay through the nose for them when they need them. :) SharePoint End User Everyone else in your organization that touches SharePoint falls into this category. What they actually DO in SharePoint is determined by your governance and what permissions you give these guys. Hopefully you have these guys on a fairly short leash and are NOT giving them access to tools like SharePoint Designer. Sadly, end users are the ones who truly make your deployment a success by using it, but they are also your biggest enemy in breaking it.  :)  We love you guys… really!!! Okay, all that’s fine and dandy, but what should MY SharePoint team look like? It depends! Okay… Are you just doing out-of-the-box team sites with no custom development? Then you are probably fine with a great admin team and a great no-code solution development team. How many people do you need? Depends on how busy you can keep them. Sorry, I can’t answer the question about numbers without knowing your specific needs. I can just tell you who you MIGHT need and what they will do for you. I’ll leave you with what my ideal SharePoint team would look like for a particular scenario: Farm / Organization Structure: Dev, QA, and 2 Production Farms;
5000 – 10000 users; custom development and integration with legacy systems; Team Sites, My Sites, an intranet, document libraries, and overall company collaboration. Team: a rockstar SharePoint Administrator; 2-3 junior SharePoint Administrators; a SharePoint Architect / Lead Developer; 2 Power User / No-Code Solution Developers; 2-3 Custom Code Developers; and a Branding expert. With a team of that size and skill set, they should be able to keep a substantial SharePoint deployment running smoothly and meet your business needs. This does NOT mean that you would not need to bring in contract help from time to time when you need an uber specialist in one area. Also, this team assumes there will be ongoing development for the life of your SharePoint farm. If you are just going to be doing sporadic custom development, it might make sense to partner with an awesome firm that specializes in that sort of work (I can give you the names of a couple if you are interested).  Again though, the size of your team depends on the number of requests you are receiving and how much active deployment you are doing. So, don’t bring in a team that looks like this and then yell at me because they are sitting around with nothing to do or are so overwhelmed that nothing is getting done. I do URGE you to take the proper time to assess your needs and determine what team is BEST for your organization. Also, PLEASE PLEASE PLEASE do not skimp on the talent. When it comes to SharePoint you really do get what you pay for in employees, contractors, and software.  SharePoint can become absolutely critical to your business, and if you skimp on hiring a developer, he may create a web part that brings down the farm because he doesn’t know what he’s doing; or you hire an admin who thinks it’s fine to stick everything in the same Content Database and then can’t figure out why people are complaining. SharePoint can be an enormous blessing to an organization or its biggest curse. Spend the time and money to do it right, or be prepared to spend even more time and money later to fix it.

    Read the article

  • Announcing the release of the Windows Azure SDK 2.1 for .NET

    - by ScottGu
    Today we released the v2.1 update of the Windows Azure SDK for .NET.  This is a major refresh of the Windows Azure SDK and it includes some great new features and enhancements. These new capabilities include:
Visual Studio 2013 Preview Support: The Windows Azure SDK now supports using the new VS 2013 Preview
Visual Studio 2013 VM Image: Windows Azure now has a built-in VM image that you can use to host and develop with VS 2013 in the cloud
Visual Studio Server Explorer Enhancements: Redesigned with improved filtering and auto-loading of subscription resources
Virtual Machines: Start and stop VMs with suspend billing directly from within Visual Studio
Cloud Services: New Emulator Express option with reduced footprint and Run as Normal User support
Service Bus: New high availability options, Notification Hub support, improved VS tooling
PowerShell Automation: Lots of new PowerShell commands for automating Web Sites, Cloud Services, VMs and more
All of these SDK enhancements are now available to start using immediately and you can download the SDK from the Windows Azure .NET Developer Center.  Visual Studio’s Team Foundation Service (http://tfs.visualstudio.com/) has also been updated to support today’s SDK 2.1 release, and the SDK 2.1 features can now be used with it (including with automated builds + tests). Below are more details on the new features and capabilities released today: Visual Studio 2013 Preview Support Today’s Windows Azure SDK 2.1 release adds support for the recent Visual Studio 2013 Preview. The 2.1 SDK also works with Visual Studio 2010 and Visual Studio 2012, and works side by side with the previous Windows Azure SDK 1.8 and 2.0 releases. To install the Windows Azure SDK 2.1 on your local computer, choose the “install the sdk” link from the Windows Azure .NET Developer Center. Then, choose which version of Visual Studio you want to use it with.  Clicking the third link will install the SDK with the latest VS 2013 Preview. If you don’t already have the Visual Studio 2013 Preview installed on your machine, this will also install Visual Studio Express 2013 Preview for Web. Visual Studio 2013 VM Image Hosted in the Cloud One of the requests we’ve heard from several customers has been for the ability to host Visual Studio within the cloud (avoiding the need to install anything locally on your computer). With today’s SDK update we’ve added a new VM image to the Windows Azure VM Gallery that has Visual Studio Ultimate 2013 Preview, SharePoint 2013, SQL Server 2012 Express and the Windows Azure 2.1 SDK already installed on it.  This provides a really easy way to create a development environment in the cloud with the latest tools. With the recent shutdown and suspend billing feature we shipped on Windows Azure last month, you can spin up the image only when you want to do active development, and then shut down the virtual machine and not have to worry about usage charges while the virtual machine is not in use. You can create your own VS image in the cloud by using the New->Compute->Virtual Machine->From Gallery menu within the Windows Azure Management Portal, and then selecting the “Visual Studio Ultimate 2013 Preview” template. Visual Studio Server Explorer: Improved Filtering/Management of Subscription Resources With the Windows Azure SDK 2.1 release you’ll notice significant improvements in the Visual Studio Server Explorer. The explorer has been redesigned so that all Windows Azure services are now contained under a single Windows Azure node.
From the top level node you can now manage your Windows Azure credentials, import a subscription file, or filter Server Explorer to only show services from particular subscriptions or regions. Note: The Web Sites and Mobile Services nodes will appear outside the Windows Azure node until the final release of VS 2013. If you have installed the ASP.NET and Web Tools Preview Refresh, though, the Web Sites node will appear inside the Windows Azure node even with the VS 2013 Preview. Once your subscription information is added, Windows Azure services from all your subscriptions are automatically enumerated in the Server Explorer. You no longer need to manually add services to Server Explorer individually. This provides a convenient way of viewing all of your cloud services, storage accounts, service bus namespaces, virtual machines, and web sites from one location. Subscription and Region Filtering Support Using the Windows Azure node in Server Explorer, you can also now filter your Windows Azure services in the Server Explorer by the subscription or region they are in.  If you have multiple subscriptions but need to focus your attention on just a few subscriptions for some period of time, this is a handy way to hide the services from other subscriptions until they become relevant. You can do the same sort of filtering by region. To enable this, just select “Filter Services” from the context menu on the Windows Azure node, then choose the subscriptions and/or regions you want to filter by. In the below example, I’ve decided to show services from my pay-as-you-go subscription within the East US region. Visual Studio will then automatically filter the items that show up in the Server Explorer appropriately. With storage accounts and service bus namespaces, you sometimes need to work with services outside your subscription. To accommodate that scenario, those services allow you to attach an external account (from the context menu). You’ll notice that external accounts have a slightly different icon in Server Explorer to indicate they are from outside your subscription. Other Improvements We’ve also improved the Server Explorer by adding additional properties and actions to the services exposed. You now have access to most of the properties on a cloud service, deployment slot, role or role instance as well as the properties on storage accounts, virtual machines and web sites. Just select the object of interest in Server Explorer and view the properties in the property pane. We also now have full support for creating/deleting/updating storage tables, blobs and queues directly within Server Explorer.  Simply right-click on the appropriate storage account node and you can create them directly within Visual Studio. Virtual Machines: Start/Stop within Visual Studio Virtual Machines now have context menu actions that allow you to start, shut down, restart and delete a Virtual Machine directly within the Visual Studio Server Explorer. The shutdown action enables you to shut down the virtual machine and suspend billing when the VM is not in use, and easily restart it when you need it. This is especially useful in Dev/Test scenarios where you can start a VM – such as a SQL Server – during your development session and then shut it down / suspend billing when you are not developing (and no longer be billed for it). You can also now directly remote desktop into VMs using the “Connect using Remote Desktop” context menu command in VS Server Explorer.
Cloud Services: Emulator Express with Run as Normal User Support You can now launch Visual Studio and run your cloud services locally as a normal user (without having to elevate to an administrator account) using a new Emulator Express option included as a preview feature with this SDK release.  Emulator Express is a version of the Windows Azure Compute Emulator that runs in a restricted mode – one instance per role – and it doesn’t require administrative permissions and uses 40% fewer resources than the full Windows Azure Emulator. Emulator Express supports both web and worker roles. To run your application locally using the Emulator Express option, simply change the following settings in the Windows Azure project. On the shortcut menu for the Windows Azure project, choose Properties, and then choose the Web tab. Check the setting for IIS (Internet Information Services). Make sure that the option is set to IIS Express, not the full version of IIS; Emulator Express is not compatible with full IIS. Then, on the Web tab, choose the option for Emulator Express. Service Bus: Notification Hubs With the Windows Azure SDK 2.1 release we are adding support for Windows Azure Notification Hubs as part of our official Windows Azure SDK, inside of Microsoft.ServiceBus.dll (previously the Notification Hub functionality was in a preview assembly). You are now able to create, update and delete Notification Hubs programmatically, manage your device registrations, and send push notifications to all your mobile clients across all platforms (Windows Store, Windows Phone 8, iOS, and Android). Learn more about Notification Hubs on MSDN here, or watch the Notification Hubs //BUILD/ presentation here. Service Bus: Paired Namespaces One of the new features included with today’s Windows Azure SDK 2.1 release is support for Service Bus “Paired Namespaces”.  Paired Namespaces enable you to better handle situations where a Service Bus service namespace becomes unavailable (for example, due to connectivity issues or an outage) and you are unable to send or receive messages to the namespace hosting the queue, topic, or subscription. Previously, to handle this scenario you had to manually set up separate namespaces that could act as a backup, then implement manual failover and retry logic which was sometimes tricky to get right. Service Bus now supports Paired Namespaces, which enable you to connect two namespaces together. When you activate the secondary namespace, messages are stored in the secondary queue for delivery to the primary queue at a later time. If the primary container (namespace) becomes unavailable for some reason, automatic failover enables the messages in the secondary queue. For detailed information about paired namespaces and high availability, see the new topic Asynchronous Messaging Patterns and High Availability. Service Bus: Tooling Improvements In this release, the Windows Azure Tools for Visual Studio contain several enhancements and changes to the management of Service Bus messaging entities using Visual Studio’s Server Explorer. The most noticeable change is that the Service Bus node is now integrated into the Windows Azure node, and supports integrated subscription management. Additionally, there has been a change to the code generated by the Windows Azure Worker Role with Service Bus Queue project template. This code now uses an event-driven “message pump” programming model based on the QueueClient.OnMessage method.
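For those who haven’t seen the message-pump model yet, here is a minimal sketch of the pattern (a simplified illustration rather than the template’s exact generated code; the connection string and queue name are placeholders):

using System;
using Microsoft.ServiceBus.Messaging;

class MessagePumpSketch
{
    static void Main()
    {
        // Placeholder values – substitute your own namespace connection string and queue name.
        const string connectionString = "Endpoint=sb://your-namespace.servicebus.windows.net/;...";
        var client = QueueClient.CreateFromConnectionString(connectionString, "myqueue");

        // OnMessage starts the message pump; the callback runs for each received message.
        // In the default PeekLock mode with AutoComplete, the message is completed
        // automatically when the callback returns without throwing.
        client.OnMessage(message =>
        {
            Console.WriteLine("Received message: " + message.MessageId);
        });

        Console.WriteLine("Listening. Press Enter to exit.");
        Console.ReadLine();
        client.Close();
    }
}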
PowerShell: Tons of New Automation Commands Since my last blog post on the previous Windows Azure SDK 2.0 release, we’ve updated Windows Azure PowerShell (which is a separate download) five times. You can find the full change log here. We’ve added new cmdlets in the following areas: China instance and Windows Azure Pack support, Environment Configuration, VMs, Cloud Services, Web Sites, Storage, SQL Azure, and Service Bus. China Instance and Windows Azure Pack We now support the following cmdlets for the China instance and Windows Azure Pack, respectively – China Instance: Web Sites, Service Bus, Storage, Cloud Service, VMs, Network; Windows Azure Pack: Web Sites, Service Bus. We will have full cmdlet support for these two Windows Azure environments in PowerShell in the near future. Virtual Machines: Stop/Start Virtual Machines Similar to the Start/Stop VM capability in VS Server Explorer, you can now stop your VM and suspend billing. If you want to keep the original behavior of keeping your stopped VM provisioned, you can pass in the -StayProvisioned switch parameter. Virtual Machines: VM endpoint ACLs We’ve added and updated a bunch of cmdlets for you to configure fine-grained network ACLs on your VM endpoints. You can use the following cmdlets to create ACL configs and apply them to a VM endpoint: New-AzureAclConfig, Get-AzureAclConfig, Set-AzureAclConfig, Remove-AzureAclConfig, Add-AzureEndpoint -ACL, and Set-AzureEndpoint -ACL. The following example shows how to add an ACL rule to an existing endpoint of a VM. Other improvements for Virtual Machine management include: an added -NoWinRMEndpoint parameter to New-AzureQuickVM and Add-AzureProvisioningConfig to disable Windows Remote Management; an added -DirectServerReturn parameter to Add-AzureEndpoint and Set-AzureEndpoint to enable/disable direct server return; and a new Set-AzureLoadBalancedEndpoint cmdlet to modify load balanced endpoints. Cloud Services: Remote Desktop and Diagnostics Remote Desktop and Diagnostics are popular debugging options for Cloud Services. We’ve introduced cmdlets to help you configure these two Cloud Service extensions from Windows Azure PowerShell. Windows Azure Cloud Services Remote Desktop extension: New-AzureServiceRemoteDesktopExtensionConfig, Get-AzureServiceRemoteDesktopExtension, Set-AzureServiceRemoteDesktopExtension, and Remove-AzureServiceRemoteDesktopExtension. Windows Azure Cloud Services Diagnostics extension: New-AzureServiceDiagnosticsExtensionConfig, Get-AzureServiceDiagnosticsExtension, Set-AzureServiceDiagnosticsExtension, and Remove-AzureServiceDiagnosticsExtension. The following example shows how to enable Remote Desktop for a Cloud Service. Web Sites: Diagnostics With our last SDK update, we introduced the Get-AzureWebsiteLog -Tail cmdlet to get the streaming logs of your Web Sites. Recently, we’ve also added cmdlets to configure Web Site application diagnostics: Enable-AzureWebsiteApplicationDiagnostic and Disable-AzureWebsiteApplicationDiagnostic. The following 2 examples show how to enable application diagnostics to the file system and to a Windows Azure Storage Table. SQL Database Previously, you had to know the SQL Database server admin username and password if you wanted to manage the databases on that SQL Database server. Recently, we’ve made the experience much easier by not requiring the admin credentials if the database server is in your subscription. So you can simply specify the -ServerName parameter to tell Windows Azure PowerShell which server you want to use for the following cmdlets.
Get-AzureSqlDatabase, New-AzureSqlDatabase, Remove-AzureSqlDatabase, and Set-AzureSqlDatabase. We’ve also added an -AllowAllAzureServices parameter to New-AzureSqlDatabaseServerFirewallRule so that you can easily add a firewall rule to whitelist all Windows Azure IP addresses. Besides the above experience improvements, we’ve also added cmdlets to get the database server quota and set the database service objective. Check out the following cmdlets for details: Get-AzureSqlDatabaseServerQuota, Get-AzureSqlDatabaseServiceObjective, and Set-AzureSqlDatabase -ServiceObjective. Storage and Service Bus Other new cmdlets include – Storage: CRUD cmdlets for Azure Tables and Queues; Service Bus: cmdlets for managing authorization rules on your Service Bus Namespace, Queue, Topic, Relay and NotificationHub. Summary Today’s release includes a bunch of great features that enable you to build even better cloud solutions.  All of the above features/enhancements are shipped and available to use immediately as part of the 2.1 release of the Windows Azure SDK for .NET. If you don’t already have a Windows Azure account, you can sign up for a free trial and start using all of the above features today.  Then visit the Windows Azure Developer Center to learn more about how to build apps with it. Hope this helps, Scott P.S. In addition to blogging, I am also now using Twitter for quick updates and to share links. Follow me at: twitter.com/scottgu

    Read the article

  • Windows Azure Evolution – Caching (Preview)

    - by Shaun
    Caching is a popular topic when we are building a high performance and highly scalable system, not only on top of the cloud platform but in the on-premise environment as well. In March 2011 the Windows Azure AppFabric Caching was launched into production. It provides an in-memory, distributed caching service over the cloud. And now, in this June 2012 update, the cache team announced a brand new caching solution on Windows Azure, which is called Windows Azure Caching (Preview). And the original Windows Azure AppFabric Caching was renamed Windows Azure Shared Caching.   What’s Caching (Preview) If you have been using the Shared Caching you should know that it is constructed from a bunch of cache servers. When you want to use it, you first create a cache account from the developer portal and specify the size you want, which determines how much memory you can use to store the data you want cached. Then you can add, get and remove items through your code via the cache URL. The Shared Caching is a multi-tenancy system which hosts all cached items across all users, so you don’t know which server your data is located on. This caching mode works well and covers most cases. But it has some problems. The first one is performance. Since the Shared Caching is a multi-tenancy system, all cache operations have to go through the Shared Caching gateway and then be routed to the server which has the data you are looking for. Even though there are some caches in the Shared Caching system, it still takes time to get from your cloud services to the cache service. Secondly, the Shared Caching service works as a black box to the developer. The only thing we know is our cache endpoint, and that’s all. Some may be satisfied since they don’t want to care about anything underlying. But if you need to know more and want more control, that’s impossible in the Shared Caching. The last problem would be the price and cost-efficiency. You pay the bill based on how much cache you requested per month. But when we host a web role or worker role, it seldom consumes all of the memory and CPU in the virtual machine (service instance). If using Shared Caching we have to pay for the cache service while wasting some of our memory and CPU locally. Because of the issues above, Microsoft has offered us a new caching mode, which is the Caching (Preview). Instead of having a separate cache service, the Caching (Preview) leverages the memory and CPU in our cloud services (web roles and worker roles) as the cache clusters. Hence the Caching (Preview) runs on the virtual machines which host our cloud applications, or near them. Without any gateway and routing, and since it is located in the same data center and same racks, it provides much higher performance than the Shared Caching. The Caching (Preview) works side-by-side with our application, initialized and run as a Windows service on the virtual machines, invoked by the startup tasks from our roles, so we get more information about it and more control over it. And since the Caching (Preview) utilizes the memory and CPU from our existing cloud services, it’s free; what we need to pay is the original computing price, and the resources on each machine can be used more efficiently.   Enable Caching (Preview) It’s very simple to enable the Caching (Preview) in a cloud service. Let’s create a new Windows Azure cloud project in Visual Studio and add an ASP.NET Web Role. Then open the role settings and select the Caching page.
This is where we enable and configure the Caching (Preview) on a role. To enable the Caching (Preview) just tick the “Enable Caching (Preview Release)” check box. Then we need to specify which mode of caching cluster we want to use. There are two kinds of caching mode, co-located and dedicated. The co-located mode means we use the memory in the instances that run our cloud services (web role or worker role). When using this mode we must specify what percentage of the memory will be used as the cache. The default value is 30%, so make sure it will not affect the role’s business execution. The dedicated mode will use all the memory in the virtual machine as the cache. In fact it will reserve some for the operating system, Azure hosting, etc., but it will try to use as much of the available memory as it can for the cache. As you can see, the Caching (Preview) is defined per role, which means all instances of a role will apply the same settings and act as a single cache pool, and you can consume it by specifying the name of the role, which I will demonstrate later. And in a Windows Azure project we can have more than one role with the Caching (Preview) enabled. Then we will have more caches. For example, let’s say I have a web role and a worker role. For the web role I specified 30% co-located caching and for the worker role I specified dedicated caching. If I have 3 instances of my web role and 2 instances of my worker role, then I will have two caches. As the figure above shows, cache 1 is contributed by the three web role instances while cache 2 is contributed by the 2 worker role instances. Then we can add items into cache 1 and retrieve them from web role code and worker role code. But the items stored in cache 1 cannot be retrieved from cache 2 since they are isolated. Back in Visual Studio we specify a 30% co-located cache and use the local storage emulator to store the cache cluster runtime status. Then at the bottom we can specify the named caches. For now we just use the default one. Now we have enabled the Caching (Preview) in our web role settings. Next, let’s have a look at how to consume our cache.   Consume Caching (Preview) The Caching (Preview) can only be consumed by the roles in the same cloud service. As I mentioned earlier, a cache contributed by a web role can be connected to from a worker role if they are in the same cloud service. But you cannot consume a Caching (Preview) from other cloud services. This is different from the Shared Caching. The Shared Caching is open to all services that have the connection URL and authentication token. To consume the Caching (Preview) we need to add some references to our project as well as some configuration in the Web.config. NuGet makes our life easy. Right-click on our web role project and select “Manage NuGet packages”, then search for the package named “WindowsAzure.Caching”. In the package list install the “Windows Azure Caching Preview” package. It will download all necessary references from the NuGet repository and update our Web.config as well. Open the Web.config of our web role and find the “dataCacheClients” node. Under this node we can specify the cache clients we are going to use. Each cache client will use the role name to identify and find the cache. Since we only have this web role with the Caching (Preview) enabled, I pasted the current role name into the configuration. Then, in the default page I will add some code to show how to use the cache.
I will have a textbox on the page where the user can input his or her name, then press a button to generate the email address for him/her. And in the backend code I will check if this name has been added to the cache. If yes, I will return the email back immediately. Otherwise, I will sleep the thread for 2 seconds to simulate the latency, then add it into the cache and return back to the page.

protected void btnGenerate_Click(object sender, EventArgs e)
{
    // check if a name is specified
    var name = txtName.Text;
    if (string.IsNullOrWhiteSpace(name))
    {
        lblResult.Text = "Error. Please specify name.";
        return;
    }

    bool cached;
    var sw = new Stopwatch();
    sw.Start();

    // create the cache factory and cache
    var factory = new DataCacheFactory();
    var cache = factory.GetDefaultCache();

    // check if the name specified is in the cache
    var email = cache.Get(name) as string;
    if (email != null)
    {
        cached = true;
    }
    else
    {
        cached = false;
        // simulate the latency
        Thread.Sleep(2000);
        email = string.Format("{0}@igt.com", name);
        // add to cache
        cache.Add(name, email);
    }

    sw.Stop();
    lblResult.Text = string.Format(
        "Cached = {0}. Duration: {1}s. {2} => {3}",
        cached, sw.Elapsed.TotalSeconds.ToString("0.00"), name, email);
}

The Caching (Preview) can be used on the local emulator, so we just press F5. The first time I enter my name it takes about 2 seconds to get the email back to me since it was not in the cache. But if we re-enter the name it comes back at once from the cache. Since the Caching (Preview) is distributed across all instances of the role, we can scale it out by scaling out our web role. Just use 2 instances and tweak some code to show the current instance ID in the page, and have another try. Then we can see the cache can be retrieved even though it was added by another instance.   Consume Caching (Preview) Across Roles As I mentioned, the Caching (Preview) can be consumed by all other roles within the same cloud service. For example, let’s add another web role to our cloud solution and add the same code to its default page. In its Web.config we point the cache client at the role where caching was enabled, by specifying that role’s name. Then we start the solution locally and go to web role 1, specify the name and let it generate the email for us. Since there’s no cache entry for this name it will take about 2 seconds, but it will save the email into the cache. And then we go to web role 2 and specify the same name. Then you can see it retrieves the email saved by web role 1 and returns it back very quickly. Finally we can upload our application to Windows Azure and test again. Make sure you have changed the cache cluster status storage account to the real Azure account.   More Awesome Features As an in-memory distributed caching solution, the Caching (Preview) has some fancy features I would like to highlight here. The first one is the high availability support. This is the first time I have heard of a distributed cache supporting high availability. In the distributed cache world, if a cache cluster fails, the data it stored will be lost. This behavior was introduced by Memcached and is followed by almost all distributed cache products. But Caching (Preview) provides high availability, which means you can specify whether the named cache will be backed up automatically. If yes, then the data belonging to this named cache will be replicated on another instance of this role.
Then if one of the instances fails, the data can be retrieved from its backup instance. To enable the backup just open the Caching page in Visual Studio. In the named cache you want to enable backup on, change the Backup Copies value from 0 to 1. The only valid values for Backup Copies are 0 and 1: “0” means no backup and no high availability, while “1” means high availability is enabled, with the data backed up onto another instance. But when using the high availability feature there are some things we need to keep in mind. Firstly, high availability does NOT mean the data in the cache will never be lost under any kind of failure. For example, if we have a role with cache enabled that has 10 instances, and 9 of them fail, then most of the cached data will be lost, since the primary and backup instances may have failed together. But normally this will not happen, since MS guarantees that it will use an instance in a different fault domain for the backup cache. Another thing is that enabling the backup means you store two copies of your data. For example, if you think 100MB of memory is enough for your cache, you need at least 200MB if you enable backup. Besides the high availability, the Caching (Preview) supports more of the features introduced in Windows Server AppFabric Caching than the Windows Azure Shared Caching does. It supports local cache with notifications, and it also supports both absolute and sliding-window expiration types. And the Caching (Preview) supports the Memcached protocol as well. This means if you have an application based on Memcached, you can use Caching (Preview) without any code changes; what you need to do is change the configuration of how you connect to the cache. Similar to the Windows Azure Shared Caching, MS also offers out-of-the-box ASP.NET session state and output cache providers on top of the Caching (Preview).   Summary Caching is a very important component when we are building a cloud-based application. In the June 2012 update MS provides a new cache solution named Caching (Preview). Different from the existing Windows Azure Shared Caching, Caching (Preview) runs the cache cluster within the role instances we have deployed to the cloud. It gives more control, more performance and more cost-effectiveness. So now we have two caching solutions in Windows Azure, the Shared Caching and the Caching (Preview). If you need a central cache service which can be used by many cloud services and web sites, then you have to use the Shared Caching. But if you only need a fast, near distributed cache, then you’d better use Caching (Preview).   Hope this helps, Shaun All documents and related graphics, codes are provided "AS IS" without warranty of any kind. Copyright © Shaun Ziyan Xu. This work is licensed under the Creative Commons License.

    Read the article

  • Sorting Algorithms

    - by MarkPearl
    General Every time I go back to university I find myself wading through sorting algorithms and their implementation in C++. Up to now I haven’t really appreciated their true value. However, as I discovered this last week with Dictionaries in C# – having a knowledge of some basic programming principles can greatly improve the performance of a system and make one think twice about how to tackle a problem. I’m going to cover briefly in this post the following: Selection Sort, Insertion Sort, Shellsort, Quicksort, Mergesort, and Heapsort (not complete). Selection Sort Array-based selection sort is a simple approach to sorting an unsorted array. Simply put, it repeats two basic steps to achieve a sorted collection. It starts with a collection of data and repeatedly parses it, each time sorting out one element and reducing the size of the next iteration of parsed data by one. So the first iteration would go something like this: go through the entire array of data and find the lowest value, then place that value at the front of the array. The second iteration would go something like this: go through the array from position two (position one has already been sorted with the smallest value) and find the next lowest value in the array, then place that value at the second position in the array. This process would be repeated until the entire array had been sorted. A positive about selection sort is that it does not make many item movements. In fact, in a worst case scenario every item is only moved once. Selection sort is however a comparison-intensive sort. If you had 10 items in a collection, just to parse the collection you would make 9+8+7+6+5+4+3+2+1=45 comparisons, regardless of how sorted the collection was to start with. If you think about it, if you applied selection sort to a collection that was already sorted, you would still perform relatively the same number of iterations as if it was not sorted at all. Many of the following algorithms try to reduce the number of comparisons if the list is already sorted – leaving one with a best case and worst case scenario for comparisons. Likewise, different approaches have different levels of item movement. One may give priority to one approach over another based on what is more expensive: a comparison or an item move. Insertion Sort Insertion sort tries to reduce the number of key comparisons it performs compared to selection sort by not “doing anything” if things are sorted. Assume you had a collection of numbers in the following order: 10 18 25 30 23 17 45 35. There are 8 elements in the list. If we were to start at the front of the list, 10 18 25 & 30 are already sorted. Element 5 (23) however is smaller than element 4 (30) and so needs to be repositioned. We do this by copying the value at element 5 to a temporary holder, and then begin shifting the elements before it up one. So… Element 5 would be copied to a temporary holder: 10 18 25 30 23 17 45 35 – T 23. Element 4 would shift to element 5: 10 18 25 30 30 17 45 35 – T 23. Element 3 would shift to element 4: 10 18 25 25 30 17 45 35 – T 23. Element 2 (18) is smaller than the temporary holder, so we put the temporary holder value into element 3: 10 18 23 25 30 17 45 35 – T 23.   We now have a sorted list up to element 5. And so we would repeat the same process by moving element 6 to a temporary value and then shifting everything up by one from element 2 to element 5.
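A rough array-based sketch of that insertion pass in C# (my own simplified illustration of the idea described above) would look something like this:

// Array-based insertion sort: hold the current value in a temporary holder,
// shift the larger sorted elements up one position, then drop the held value
// into the gap that opens up.
static void InsertionSort(int[] data)
{
    for (int i = 1; i < data.Length; i++)
    {
        int temp = data[i];          // the "temporary holder"
        int j = i - 1;
        while (j >= 0 && data[j] > temp)
        {
            data[j + 1] = data[j];   // shift a sorted element up by one
            j--;
        }
        data[j + 1] = temp;          // insert the held value into the gap
    }
}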
As you can see, one major setback for this technique is the shifting of values up by one – this is because up to now we have been considering the collection to be an array. If however the collection was a linked list, we would not need to shift values up, but merely remove the link from the unsorted value and “reinsert” it in a sorted position, which would reduce the number of transactions performed on the collection. So… insertion sort seems to perform better than selection sort – however, its implementation is slightly more complicated. This is typical with most sorting algorithms – generally, greater performance leads to greater complexity. Also, insertion sort performs better if a collection of data is already sorted. If for instance you were handed a sorted collection of size n, then only n-1 comparisons would need to be performed to verify that it is sorted. It’s important to note that insertion sort (array based) performs a number of item moves – every time an item is “out of place”, several items before it get shifted up. Shellsort – Diminishing Increment Sort So up to now we have covered Selection Sort & Insertion Sort. Selection Sort makes many comparisons and Insertion Sort (with an array) has the potential of making many item movements. Shellsort is an approach that takes the normal insertion sort and tries to reduce the number of item movements. In Shellsort, elements in a collection are viewed as sub-collections of a particular size. Each sub-collection is sorted so that the elements that are far apart move closer to their final position. Suppose we had a collection of 15 elements: 10 20 15 45 36 48 7 60 18 50 2 19 43 30 55. First we may view the collection as 7 sub-collections and sort each sublist, let’s say at intervals of 7: 10 60 55 – 20 18 – 15 50 – 45 2 – 36 19 – 48 43 – 7 30 becomes 10 55 60 – 18 20 – 15 50 – 2 45 – 19 36 – 43 48 – 7 30 (sorted). We then sort each sublist at a smaller interval – let’s say 4: 10 55 60 18 – 20 15 50 2 – 45 19 36 43 – 48 7 30 becomes 10 18 55 60 – 2 15 20 50 – 19 36 43 45 – 7 30 48 (sorted). We then sort elements at a distance of 1 (i.e. we apply a normal insertion sort): 10 18 55 60 2 15 20 50 19 36 43 45 7 30 48 becomes 2 7 10 15 18 19 20 30 36 43 45 48 50 55 60 (sorted). The important thing with Shellsort is deciding on the increment sequence of each sub-collection. From what I can tell, there isn’t any definitive method, and depending on the order of your elements, different increment sequences may perform better than others. There are however certain increment sequences that you may want to avoid. An even-based increment sequence (e.g. 2 4 8 16 32 …) should typically be avoided because it does not allow even elements to be compared with odd elements until the final sort phase – which in a way would negate many of the benefits of using sub-collections. The number of comparisons and item movements Shellsort performs is hard to determine exactly, however it is considered to be considerably better than the normal insertion sort. Quicksort Quicksort uses a divide and conquer approach to sort a collection of items. The collection is divided into two sub-collections, and the two sub-collections are sorted and combined into one list in such a way that the combined list is sorted. The algorithm in general pseudo code is: divide the collection into two sub-collections; quicksort the lower sub-collection; quicksort the upper sub-collection; combine the lower & upper sub-collections together. As hinted at above, quicksort uses recursion in its implementation.
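To make the recursion concrete, here is a rough C# sketch (my own simplified illustration; it uses the last element as the pivot, which, as discussed next, is not necessarily the ideal choice):

// Recursive quicksort: partition around a pivot so smaller items end up
// in the lower sub-collection and larger items in the upper one, then
// quicksort each sub-collection.
static void QuickSort(int[] data, int low, int high)
{
    if (low >= high) return;                 // base case: zero or one element

    int pivot = data[high];                  // simple pivot choice: the last element
    int i = low - 1;
    for (int j = low; j < high; j++)
    {
        if (data[j] < pivot)
        {
            i++;
            int swap = data[i];              // move the smaller item into the lower partition
            data[i] = data[j];
            data[j] = swap;
        }
    }
    int temp = data[i + 1];                  // place the pivot between the two partitions
    data[i + 1] = data[high];
    data[high] = temp;

    QuickSort(data, low, i);                 // quicksort the lower sub-collection
    QuickSort(data, i + 2, high);            // quicksort the upper sub-collection
}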
The real trick with quicksort is to get the lower and upper sub-collections to be of equal size. The size of a sub-collection is determined by what value the pivot is. Once a pivot is determined, one would partition into sub-collections and then repeat the process on each sub-collection until you reach the base case. With quicksort, the work is done when dividing the sub-collections into lower & upper collections. The actual combining of the lower & upper sub-collections at the end is relatively simple since every element in the lower sub-collection is smaller than the smallest element in the upper sub-collection. Mergesort With quicksort, the average-case complexity was O(n log2 n), however the worst-case complexity was still O(n^2). Mergesort improves on quicksort by always having a complexity of O(n log2 n), regardless of the best or worst case. So how does it do this? Mergesort makes use of the divide and conquer approach to partition a collection into two sub-collections. It then sorts each sub-collection and combines the sorted sub-collections into one sorted collection. The general algorithm for mergesort is as follows: divide the collection into two sub-collections; mergesort the first sub-collection; mergesort the second sub-collection; merge the first sub-collection and the second sub-collection. As you can see, it still pretty much looks like quicksort – so let’s see where it differs… Firstly, mergesort differs from quicksort in how it partitions the sub-collections. Instead of having a pivot, mergesort partitions each sub-collection based on size, so that the first and second sub-collections are of relatively the same size. This dividing keeps getting repeated until the sub-collections are the size of a single element. If a sub-collection is one element in size – it is now sorted! So the trick is how we put all these sub-collections back together so that they maintain their sorted order. Sorted sub-collections are merged into a sorted collection by comparing the elements of the sub-collections and then adjusting the sorted collection. Let’s have a look at a few examples… Assume 2 sub-collections with 1 element each: 10 & 20. Compare the first element of the first sub-collection with the first element of the second sub-collection. Take the smallest of the two and place it as the first element in the sorted collection. In this scenario 10 is smaller than 20, so 10 is taken from sub-collection 1, leaving that sub-collection empty, which means by default the next smallest element is in sub-collection 2 (20). So the sorted collection would be 10 20. Let’s assume 2 sub-collections with 2 elements each: 10 20 & 15 19. So… again we would: compare 10 with 15 – 10 is the winner, so we add it to our sorted collection (10), leaving us with 20 & 15 19; compare 20 with 15 – 15 is the winner, so we add it to our sorted collection (10 15), leaving us with 20 & 19; compare 20 with 19 – 19 is the winner, so we add it to our sorted collection (10 15 19), leaving us with 20 & _; 20 is by default the winner, so our sorted collection is 10 15 19 20. Make sense? Heapsort (still needs to be completed) So by now I am tired of sorting algorithms and trying to remember why they were so important. I think every year I go through this stuff I wonder to myself why are we made to learn about selection sort and insertion sort if they are so bad – why didn’t we just skip to Mergesort & Quicksort.
I guess the only explanation I have for this is that sometimes you learn things so that you can implement them in future – and other times you learn things so that you know it isn’t the best way of implementing things and that you don’t need to implement it in future. Anyhow… luckily this is going to be the last one of my sorts for today. The first step in heapsort is to convert a collection of data into a heap. After the data is converted into a heap, sorting begins… So what is the definition of a heap? If we have to convert a collection of data into a heap, how do we know when it is a heap and when it is not? The definition of a heap is as follows: A heap is a list in which each element contains a key, such that the key in the element at position k in the list is at least as large as the key in the element at position 2k + 1 (if it exists) and 2k + 2 (if it exists). Does that make sense? At first glance I’m thinking what the heck??? But then after re-reading my notes I see that we are doing something different – up to now we have really looked at data as an array or sequential collection of data that we need to sort – a heap represents data in a slightly different way. Although the data is stored in a sequential collection, for a sequential collection of data to be a valid heap it must be “semi sorted”. Let me try and explain a bit further with an example (note that, to match the definition, positions are counted from 0)… Example 1 of Potential Heap Data Assume we had a collection of numbers as follows, shown as value[position]: 1[0] 2[1] 3[2] 4[3] 5[4] 6[5]. For this to be a valid heap, the element with the value of 1 at position [0] needs to be greater than or equal to the elements at position [1] (2k + 1) and position [2] (2k + 2). Is 1 >= 2 and 1 >= 3? No, it is not. So in the above example, the collection of numbers is not a valid heap. Example 2 of Potential Heap Data Let’s look at another collection of numbers as follows: 6[0] 5[1] 4[2] 3[3] 2[4] 1[5]. Is this a valid heap? Well… the element with the value 6 at position [0] must be greater than or equal to the elements at position [1] and position [2]. Is 6 >= 5 and 6 >= 4? Yes it is. Let’s look at the element with the value 5 at position [1]. It must be greater than or equal to the values at [3] & [4]. Is 5 >= 3 and 5 >= 2? Yes it is. If you continued to examine this second collection of data you would find that it is a valid heap based on the definition of a heap.
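To make the definition concrete, here is a small C# sketch (my own illustration, using the same 0-based positions as the definition) that checks whether a collection is a valid heap:

// Returns true if the list satisfies the heap property: the key at
// position k is at least as large as the keys at positions 2k + 1
// and 2k + 2, whenever those positions exist.
static bool IsHeap(int[] data)
{
    for (int k = 0; k < data.Length; k++)
    {
        int left = 2 * k + 1;
        int right = 2 * k + 2;
        if (left < data.Length && data[k] < data[left]) return false;
        if (right < data.Length && data[k] < data[right]) return false;
    }
    return true;
}

For example, IsHeap(new[] { 1, 2, 3, 4, 5, 6 }) returns false, while IsHeap(new[] { 6, 5, 4, 3, 2, 1 }) returns true – matching the two walkthroughs above.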

    Read the article

  • Stop Spinning Your Wheels… Sage Advice for Aspiring Developers

    - by Mark Rackley
    So… lately I’ve been tasked with helping bring some non-developers over the hump and become full-fledged, all-around SharePoint developers. Well, only time will tell if I’m successful or a complete failure. Good thing about failures though, you know what NOT to do next time! Anyway, I’ve been writing some sort of code since I was about 10 years old; so I sometimes take for granted the effort some people have to go through to learn a new technology. I guess if I had to say I was an “expert” in one thing it would be learning (and getting “stuff” done) in new technologies. Maybe that’s why I’ve embraced SharePoint and the SharePoint community. SharePoint is the first technology I haven’t been able to master or get everything done without help from other people. I KNOW I’ll never know it all and I learn something new every day.  It keeps it interesting, it keeps me motivated, and keeps me involved. So, what some people may consider a downside of SharePoint, I definitely consider a plus. Crap.. I’m rambling. Where was I? Oh yeah… me trying to be helpful. Like I said, I am able to quickly and effectively pick up new languages, technology, etc. and put them to good use. Am I just brilliant? Well, my mom thinks so.. but maybe not. Maybe I’ve just been doing it for a long time…. 25 years in some form or fashion… wow I’m old… Anyway, what I lack in depth I make up for in breadth and in being the “go-to” guy wherever I work when someone needs to “get stuff done”.  Let’s see if I can take some of that experience and put it to practical use to help new people get up to speed faster, learn things more effectively, and become that go-to guy. First off…  make sure you… Know The Basics I don’t have the time to teach new developers the basics, but you gotta know them. I’ve only been “taught” two languages.. Fortran 77 and C… everything else I’ve picked up from “doing”. I HAD to know the basics though, and all new developers need to understand the very basics of development.  97.23% of all languages will have the following: variables, functions, arrays, if statements, and for loops / while loops. If you think about it, most development is “if this, do this… or while this, do this…”.  “This” may be some method unique to your language or something you develop, but the basics are the basics. YES there are MANY other development topics you need to understand, but you shouldn’t be scratching your head trying to figure out what a ”for loop” is… (Also learn about classes and hashtables as quickly as possible). Once you have the basics down it makes it much easier to… Learn By Doing This may just apply to me and my warped brain.  I don’t learn a new technology by reading or hearing someone speak about it. I learn by doing. It does me no good to try and learn all of the intricacies of a new language or technology inside-and-out before getting my hands dirty. Just show me how to do one thing… let me get that working… then show me how to do the next thing.. let me get that working… Now, let’s see what I can figure out on my own. Okay.. now it starts to make sense. I see how the language works, I can step through the code, and before you know it.. I’m productive in a new technology. Be careful here though…. make sure you… Don’t Reinvent The Wheel People have been writing code for what… 50+ years now? So, why are you trying to tackle ANYTHING without first Googling it with Bing to see what others have done first? When I was first learning C# (I had come from a Java background) I had to call a web service.  Sure! No problem!
I’d done this many times in Java. So, I proceeded to write an HTTP handler, called the web service, and it worked like a charm!!!  Probably about 2.3 seconds after I got it working completely someone says to me “Why didn’t you just add a Web Reference?” Really? You can do that?  oops… I just wasted a lot of time. Before undertaking the development of any sort of utility method in a new language, make sure it’s not already handled for you… Okay… you are starting to write some code and are curious about the possibilities? Well… don’t just sit there… Try It And See What Happens This is actually one of my biggest pet peeves. “So… ‘x++’ works in C#, but does it also work in JavaScript?”   Really? Did you just ask me that? In the time it took you to type that email, press the send button, and for me to receive the email, get around to reading it, and reply with “yes”, you could have tested it 47 times and known the answer! Just TRY it! See what happens! You aren’t doing brain surgery. You aren’t going to kill anyone, and you BETTER not be developing in production. So, you are not going to crash any production systems!! Seriously! Get off your butt and just try it yourself. The extra added benefit is that if it doesn’t work, the absolute best way to learn is to… Learn From Your Failures I don’t know about you… but if I screw up and something doesn’t work, I learn A LOT more debugging my problem than if everything magically worked. It’s okay that you aren’t perfect! Not everyone can be me? In the same vein… don’t ask someone else to debug your problem until you have made a valiant attempt to do so yourself. There’s nothing quite like stepping through code line by line to see what it’s REALLY doing… and you’ll never feel more stupid sometimes than when you realize WHY it’s not working.. but you realize... you learn... and you remember. There is nothing wrong with failure as long as you learn from it. As you start writing more and more and more code make sure that you ALWAYS… Develop for Production You will soon learn that the “prototype” you wrote last week to show as a “proof of concept” is going to go directly into production no matter how much you beg and plead and try to explain it’s not ready to go into production… it’s going to go straight there.. and it’s like herpes.. it doesn’t go away and there’s no fixing it once it’s in there.  So, why not write ALL your code like it will be put in production? It MIGHT take a little longer, but in the long run it will be easier to maintain, get help on, and you won’t be embarrassed that it’s sitting on a production server for everyone to use and see. So, now that you are getting comfortable and writing code for production it is important to remember the… KISS Principle… Learn It… Love It… Keep It Simple Stupid Seriously.. don’t try to show how smart you are by writing the most complicated code in history. Break your problem up into discrete steps and write each step. If it turns out you have some redundancy, you can always go back and tweak your code later.  How bad is it when you write code that LOOKS cocky? I’ve seen it before… some of the most abstract and complicated classes when a class wasn’t even needed! Or the most elaborate unreadable code jammed into one really long line when it could have been written in three lines, performed just as well, and been SOOO much easier to maintain. Keep it clear and simple.. baby steps people.
This will help you learn the technology, debug problems, AND it will help others help you find your problems if they don’t have to decipher the Dead Sea Scrolls just to figure out what you are trying to do…. Really.. don’t be that guy… try to curb your ego and… Keep an Open Mind No matter how smart you are… how fast you type… or how much you get paid, don’t let your ego get in the way. There is probably a better way to do everything you’ve ever done. Don’t become so cocky that you can’t believe someone knows more than you. There are a lot of brilliant, helpful people out there willing to show you tricks if you just give them a chance. A very super-awesome developer once told me “So what if you’ve been writing code for 10 years or more! Does your code look basically the same? Are you not growing as a developer?” Those 10 years become pretty meaningless if you just “know” that you are right and have not picked up new tips, tricks, methods, and patterns along the way. Learn from others and find out what’s new in development land (you know you don’t have to specifically use pointers anymore??). Along those same lines… If it’s not working, first assume you are doing something wrong. You have no idea how much it annoys people who are trying to help you when you first assume that the help they are trying to give you is wrong. Just MAYBE… you… the person learning… is making some small mistake? Maybe you didn’t describe your problem correctly? Maybe you are using the wrong terminology? “I did exactly what you said and it didn’t work.”  Oh really? Are you SURE about that? “Your solution doesn’t work.”  Well… I’m pretty sure it works, I’ve used it 200 times… What are you doing differently? First try some humility and appreciation.. they will go much further, especially when it turns out YOU are the one that is wrong. When all else fails…. Try Professional Training Some people just don’t have the mindset to go and figure stuff out. It’s a gift and not everyone has it. If everyone could do it I wouldn’t have a job and there wouldn’t be professional training available.  So, if you’ve tried everything else and no light bulbs are coming on, contact the experts who specialize in training. Be careful though, there is bad training out there. Want to know the names of some good places? Just shoot me a message and I’ll let you know. I’m boycotting endorsing Andrew Connell until I get that free course, dangit!! So… that’s it.. that’s all I got right now. Maybe you thought all of this is common sense, maybe you think I’m smoking crack. If so, don’t just sit there, there’s a comments section for a reason. Finally, what about you? What tips do you have to help those aspiring to learn the dark arts??

    Read the article
