Search Results

Search found 834 results on 34 pages for 'looping'.


  • Passing a parameter so that it cannot be changed – C#

    - by nmarun
    I came across a requirement of not allowing a caller to change the value of a property passed as a parameter to a method. In C++, as far as I can recall (it’s been over 10 yrs, so I had to refresh my memory), you can mark a function parameter as ‘const’, and this ensures that the parameter cannot be changed inside the scope of the function. There’s no such direct way of doing this in C#, but that does not mean it cannot be done!! Ok, so this ‘not-so-direct’ technique depends on the type of the parameter – a simple property or a collection.

    Parameter as a simple property: This is quite easy (and you might have guessed it already). Bulent Ozkir clearly explains how this can be done here.

    Parameter as a collection property: Obviously the above does not work if the parameter is a collection of some type. Let’s dig in. Suppose I need to create a collection of type KeyTitle as defined below.

        public class KeyTitle
        {
            public int Key { get; set; }
            public string Title { get; set; }
        }

    My class is declared as below:

        public class Class1
        {
            public Class1()
            {
                MyKeyTitleList = new List<KeyTitle>();
            }

            public List<KeyTitle> MyKeyTitleList { get; set; }

            public ReadOnlyCollection<KeyTitle> ReadonlyKeyTitleCollection
            {
                // .AsReadOnly() creates a ReadOnlyCollection<> wrapper
                get { return MyKeyTitleList.AsReadOnly(); }
            }
        }

    See the .AsReadOnly() method used in the second property? As MSDN describes it: “Returns a read-only IList<T> wrapper for the current collection.” Knowing this, I can implement my code as:

        public static void Main()
        {
            Class1 class1 = new Class1();
            class1.MyKeyTitleList.Add(new KeyTitle { Key = 1, Title = "abc" });
            class1.MyKeyTitleList.Add(new KeyTitle { Key = 2, Title = "def" });
            class1.MyKeyTitleList.Add(new KeyTitle { Key = 3, Title = "ghi" });
            class1.MyKeyTitleList.Add(new KeyTitle { Key = 4, Title = "jkl" });

            TryToModifyCollection(class1.MyKeyTitleList.AsReadOnly());

            Console.ReadLine();
        }

        private static void TryToModifyCollection(ReadOnlyCollection<KeyTitle> readOnlyCollection)
        {
            // can only read
            for (int i = 0; i < readOnlyCollection.Count; i++)
            {
                Console.WriteLine("{0} - {1}", readOnlyCollection[i].Key, readOnlyCollection[i].Title);
            }
            // Add() - not allowed
            // even the indexer does not have a setter
        }

    The output is as expected. The image below shows two things. In the first line, I’ve tried to set an element in my read-only collection through its indexer; it shows that ReadOnlyCollection<> does not have a setter on the indexer. The second line shows that there’s no ‘Add()’ method for this type of collection. The capture below shows there’s no ‘Remove()’ method either, thereby eliminating all ways of modifying the collection. Mission accomplished… right?

    Now, even if you have a collection of a different type, all you need to do is to project (‘cast’, used loosely) it into a List<> and then call .AsReadOnly() to get a ReadOnlyCollection of your custom collection type. As an example, if you have an IDictionary<int, string>, you can create a List<T> of this type with a wrapper class (KeyTitle in our case):

        public IDictionary<int, string> MyDictionary { get; set; }

        public ReadOnlyCollection<KeyTitle> ReadonlyDictionary
        {
            get
            {
                return (from item in MyDictionary
                        select new KeyTitle
                        {
                            Key = item.Key,
                            Title = item.Value,
                        }).ToList().AsReadOnly();
            }
        }

    Cool huh?
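    Stepping back to the simple-property case mentioned at the top: the linked explanation isn't reproduced in this excerpt, so here is a minimal sketch of one common way to achieve it – hand the method an interface that exposes only a getter. All names here are hypothetical, not taken from the linked article:

        using System;

        public interface IReadOnlySettings
        {
            string Name { get; } // getter only
        }

        public class Settings : IReadOnlySettings
        {
            public string Name { get; set; }
        }

        public static class Consumer
        {
            // The method sees only the read-only view of the object.
            public static void Use(IReadOnlySettings settings)
            {
                Console.WriteLine(settings.Name);
                // settings.Name = "x"; // would not compile - no setter on the interface
            }
        }

    (Of course, the callee could still cast back to Settings; like the collection technique above, this guards against accidental modification rather than malicious code.)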
    Just one thing you need to know about the .AsReadOnly() method: the only way to modify your ReadOnlyCollection<> is to modify the original collection. So doing:

        public static void Main()
        {
            Class1 class1 = new Class1();
            class1.MyKeyTitleList.Add(new KeyTitle { Key = 1, Title = "abc" });
            class1.MyKeyTitleList.Add(new KeyTitle { Key = 2, Title = "def" });
            class1.MyKeyTitleList.Add(new KeyTitle { Key = 3, Title = "ghi" });
            class1.MyKeyTitleList.Add(new KeyTitle { Key = 4, Title = "jkl" });
            TryToModifyCollection(class1.MyKeyTitleList.AsReadOnly());

            Console.WriteLine();

            class1.MyKeyTitleList.Add(new KeyTitle { Key = 5, Title = "mno" });
            class1.MyKeyTitleList[2] = new KeyTitle { Key = 3, Title = "GHI" };
            TryToModifyCollection(class1.MyKeyTitleList.AsReadOnly());

            Console.ReadLine();
        }

    gives me output in which the third element’s Title is now upper-case and a fifth element also gets displayed, even though we’re still reading through a ReadOnlyCollection<KeyTitle> – the wrapper reflects whatever happens to the underlying list. Verdict: now you know a way to implement ‘Method(const param1)’ in your code!
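    One caveat worth a quick sketch (reusing the KeyTitle type defined in the post above): ReadOnlyCollection<T> only freezes the structure of the collection. When the elements are mutable reference types, a method receiving the read-only wrapper can still change the elements' properties, so the 'const parameter' guarantee is shallow:

        using System;
        using System.Collections.Generic;
        using System.Collections.ObjectModel;

        public static class ShallownessDemo
        {
            public static void Main()
            {
                var list = new List<KeyTitle> { new KeyTitle { Key = 1, Title = "abc" } };
                ReadOnlyCollection<KeyTitle> view = list.AsReadOnly();

                // Structural changes are blocked (no Add/Remove, no indexer setter):
                // view[0] = new KeyTitle(); // does not compile

                // ...but changes *inside* an element are not:
                view[0].Title = "CHANGED"; // compiles and runs

                Console.WriteLine(list[0].Title); // prints "CHANGED"
            }
        }

    If you need deep immutability, the element type itself has to be immutable (for example, get-only properties set via the constructor).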

    Read the article

  • Parallelism in .NET – Part 9, Configuration in PLINQ and TPL

    - by Reed
    Parallel LINQ and the Task Parallel Library contain many options for configuration.  Although the default configuration options are often ideal, there are times when customizing the behavior is desirable.  Both frameworks provide full configuration support.

    When working with Data Parallelism, there is one primary configuration option we often need to control – the number of threads we want the system to use when parallelizing our routine.  By default, PLINQ and the TPL both use the ThreadPool to schedule tasks.  Given the major improvements in the ThreadPool in CLR 4, this default behavior is often ideal.  However, there are times when the default behavior is not appropriate.  For example, if you are working on multiple threads simultaneously, and want to schedule parallel operations from within both threads, you might want to consider restricting each parallel operation to a subset of the processing cores of the system.  Not doing this might over-parallelize your routine, which leads to inefficiencies from having too many context switches.

    In the Task Parallel Library, configuration is handled via the ParallelOptions class.  All of the methods of the Parallel class have an overload which accepts a ParallelOptions argument.  We configure the Parallel class by setting the ParallelOptions.MaxDegreeOfParallelism property.  For example, let’s revisit one of the simple data parallel examples from Part 2:

        Parallel.For(0, pixelData.GetUpperBound(0), row =>
        {
            for (int col = 0; col < pixelData.GetUpperBound(1); ++col)
            {
                pixelData[row, col] = AdjustContrast(pixelData[row, col], minPixel, maxPixel);
            }
        });

    Here, we’re looping through an image and calling a method on each pixel in the image.  If this was being done on a separate thread, and we knew another thread within our system was going to be doing a similar operation, we would likely want to restrict this to using half of the cores on the system.  This could be accomplished easily by doing:

        var options = new ParallelOptions();
        options.MaxDegreeOfParallelism = Math.Max(Environment.ProcessorCount / 2, 1);

        Parallel.For(0, pixelData.GetUpperBound(0), options, row =>
        {
            for (int col = 0; col < pixelData.GetUpperBound(1); ++col)
            {
                pixelData[row, col] = AdjustContrast(pixelData[row, col], minPixel, maxPixel);
            }
        });

    Now, we’re restricting this routine to using no more than half the cores in our system.  Note that I included a check to prevent a single-core system from supplying zero; without this check, we’d potentially cause an exception.  I also did not hard-code a specific value for the MaxDegreeOfParallelism property.  One of our goals when parallelizing a routine is allowing it to scale on better hardware.  Specifying a hard-coded value would contradict that goal.
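    To see the whole ParallelOptions pattern in one compilable piece, here is a minimal sketch.  The pixel buffer and the AdjustContrast method below are stand-ins for the post's image code, not the actual routine:

        using System;
        using System.Threading.Tasks;

        class MaxDopDemo
        {
            // Hypothetical stand-in for the post's contrast routine.
            static byte AdjustContrast(byte pixel, byte min, byte max)
            {
                int stretched = (pixel - min) * 255 / Math.Max(max - min, 1);
                return (byte)Math.Min(Math.Max(stretched, 0), 255);
            }

            static void Main()
            {
                var pixelData = new byte[480, 640];
                byte minPixel = 10, maxPixel = 200;

                var options = new ParallelOptions
                {
                    // Never let integer division supply zero on a single-core box.
                    MaxDegreeOfParallelism = Math.Max(Environment.ProcessorCount / 2, 1)
                };

                Parallel.For(0, pixelData.GetUpperBound(0), options, row =>
                {
                    for (int col = 0; col < pixelData.GetUpperBound(1); ++col)
                    {
                        pixelData[row, col] = AdjustContrast(pixelData[row, col], minPixel, maxPixel);
                    }
                });

                Console.WriteLine("Done.");
            }
        }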
    Parallel LINQ also supports configuration, and in fact has quite a few more options for configuring the system.  The main configuration option we most often need is the same as our TPL option: we need to supply the maximum number of processing threads.  In PLINQ, this is done via an extension method on ParallelQuery<T>: ParallelEnumerable.WithDegreeOfParallelism.  Let’s revisit our declarative data parallelism sample from Part 6:

        double min = collection.AsParallel().Min(item => item.PerformComputation());

    Here, we’re performing a computation on each element in the collection, and saving the minimum value of this operation.  If we wanted to restrict this to a limited number of threads, we would add our new extension method:

        int maxThreads = Math.Max(Environment.ProcessorCount / 2, 1);
        double min = collection
                        .AsParallel()
                        .WithDegreeOfParallelism(maxThreads)
                        .Min(item => item.PerformComputation());

    This automatically restricts the PLINQ query to half of the threads on the system.

    PLINQ provides some additional configuration options.  By default, PLINQ will occasionally revert to processing a query sequentially.  This occurs because many queries, if parallelized, actually cause an overall slowdown compared to their serial equivalents.  By analyzing the “shape” of the query, PLINQ often decides to run a query serially instead of in parallel.  This can occur for (taken from MSDN):

    - Queries that contain a Select, indexed Where, indexed SelectMany, or ElementAt clause after an ordering or filtering operator that has removed or rearranged original indices.
    - Queries that contain a Take, TakeWhile, Skip, or SkipWhile operator where indices in the source sequence are not in the original order.
    - Queries that contain Zip or SequenceEqual, unless one of the data sources has an originally ordered index and the other data source is indexable (i.e. an array or IList(T)).
    - Queries that contain Concat, unless it is applied to indexable data sources.
    - Queries that contain Reverse, unless applied to an indexable data source.

    If a query matches one of these shapes, PLINQ will run it on a single thread by default.  However, none of these rules look at the specific work being done in the delegates, only at the “shape” of the query.  There are cases where running in parallel may still be beneficial, even if the shape is one where it typically parallelizes poorly.  In these cases, you can override the default behavior by using the WithExecutionMode extension method.  This would be done like so:

        var reversed = collection
                        .AsParallel()
                        .WithExecutionMode(ParallelExecutionMode.ForceParallelism)
                        .Select(i => i.PerformComputation())
                        .Reverse();

    Here, the default behavior would be to not parallelize the query unless collection implements IList<T>.  We can force this to run in parallel by adding the WithExecutionMode extension method in the method chain.
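    As a compact, runnable sketch of the two PLINQ knobs discussed so far – WithDegreeOfParallelism and WithExecutionMode – with a hypothetical PerformComputation standing in for the per-item work:

        using System;
        using System.Linq;

        class PlinqConfigDemo
        {
            // Hypothetical per-item work.
            static double PerformComputation(int item)
            {
                return Math.Sqrt(item) * 2.0;
            }

            static void Main()
            {
                int[] collection = Enumerable.Range(1, 1000000).ToArray();
                int maxThreads = Math.Max(Environment.ProcessorCount / 2, 1);

                // Limit the degree of parallelism...
                double min = collection
                    .AsParallel()
                    .WithDegreeOfParallelism(maxThreads)
                    .Min(item => PerformComputation(item));

                // ...and force parallel execution for a "shape" (Reverse) that
                // PLINQ would run sequentially if the source were not indexable.
                double[] reversed = collection
                    .AsParallel()
                    .WithExecutionMode(ParallelExecutionMode.ForceParallelism)
                    .Select(i => PerformComputation(i))
                    .Reverse()
                    .ToArray();

                Console.WriteLine("min = {0}, count = {1}", min, reversed.Length);
            }
        }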
    Finally, PLINQ has the ability to configure how results are returned.  When a query is filtering or selecting an input collection, the results will need to be streamed back into a single IEnumerable<T> result.  For example, the method above returns a new, reversed collection.  In this case, the processing of the collection will be done in parallel, but the results need to be streamed back to the caller serially, so they can be enumerated on a single thread.

    This streaming introduces overhead.  IEnumerable<T> isn’t designed with thread safety in mind, so the system needs to handle merging the parallel processes back into a single stream, which introduces synchronization issues.

    There are two extremes of how this could be accomplished, and both have disadvantages.  The system could watch each thread, and whenever a thread produces a result, take that result and send it back to the caller.  This would mean that the calling thread has access to the data as soon as the data is available, which is the benefit of this approach.  However, it also means that every item introduces synchronization overhead, since each item needs to be merged individually.

    On the other extreme, the system could wait until all of the results from all of the threads are ready, then push all of the results back to the calling thread in one shot.  The advantage here is that the least amount of synchronization is added to the system, which means the query will, on the whole, run the fastest.  However, the calling thread has to wait for all elements to be processed, so this could introduce a long delay between when a parallel query begins and when results are returned.

    The default behavior in PLINQ is actually between these two extremes.  By default, PLINQ maintains an internal buffer, and chooses an optimal buffer size to maintain.  Query results are accumulated into the buffer, then returned in the IEnumerable<T> result in chunks.  This provides reasonably fast access to the results, as well as good overall throughput, in most scenarios.

    However, if we know the nature of our algorithm, we may decide we would prefer one of the other extremes.  This can be done by using the WithMergeOptions extension method.  For example, if we know that our PerformComputation() routine is very slow, but also variable in runtime, we may want to retrieve results as they become available, with no buffering.  This can be done by changing our above routine to:

        var reversed = collection
                        .AsParallel()
                        .WithExecutionMode(ParallelExecutionMode.ForceParallelism)
                        .WithMergeOptions(ParallelMergeOptions.NotBuffered)
                        .Select(i => i.PerformComputation())
                        .Reverse();

    On the other hand, if we are already on a background thread and we want to allow the system to maximize its speed, we might want to allow the system to fully buffer the results:

        var reversed = collection
                        .AsParallel()
                        .WithExecutionMode(ParallelExecutionMode.ForceParallelism)
                        .WithMergeOptions(ParallelMergeOptions.FullyBuffered)
                        .Select(i => i.PerformComputation())
                        .Reverse();

    Notice, also, that you can specify multiple configuration options in a parallel query.  By chaining these extension methods together, we generate a query that will always run in parallel, and will always complete before making the results available in our IEnumerable<T>.
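    To watch the merge options make a visible difference, here is a small sketch; the Thread.Sleep is just an artificial stand-in for a slow, variable PerformComputation:

        using System;
        using System.Diagnostics;
        using System.Linq;
        using System.Threading;

        class MergeOptionsDemo
        {
            static int PerformComputation(int i)
            {
                Thread.Sleep(10); // artificially slow work
                return i;
            }

            static void Main()
            {
                int[] collection = Enumerable.Range(0, 100).ToArray();
                Stopwatch timer = Stopwatch.StartNew();

                var query = collection
                    .AsParallel()
                    .WithMergeOptions(ParallelMergeOptions.NotBuffered)
                    .Select(i => PerformComputation(i));

                // With NotBuffered, the first items arrive almost immediately;
                // switch to FullyBuffered and nothing arrives until the query finishes.
                foreach (int item in query)
                {
                    Console.WriteLine("{0,5} ms: {1}", timer.ElapsedMilliseconds, item);
                }
            }
        }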

    Read the article

  • Parallelism in .NET – Part 2, Simple Imperative Data Parallelism

    - by Reed
    In my discussion of Decomposition of the problem space, I mentioned that Data Decomposition is often the simplest abstraction to use when trying to parallelize a routine.  If a problem can be decomposed based on the data, we will often want to use what MSDN refers to as Data Parallelism as our strategy for implementing our routine.  The Task Parallel Library in .NET 4 makes implementing Data Parallelism, for most cases, very simple.

    Data Parallelism is the main technique we use to parallelize a routine which can be decomposed based on data.  Data Parallelism refers to taking a single collection of data, and having a single operation be performed concurrently on elements in the collection.  One side note here: Data Parallelism is also sometimes referred to as the Loop Parallelism Pattern or Loop-level Parallelism.  In general, for this series, I will try to use the terminology used in the MSDN documentation for the Task Parallel Library.  This should make it easier to investigate these topics in more detail.

    Once we’ve determined we have a problem that, potentially, can be decomposed based on data, implementation using Data Parallelism in the TPL is quite simple.  Let’s take our example from the Data Decomposition discussion – a simple contrast stretching filter.  Here, we have a collection of data (pixels), and we need to run a simple operation on each pixel.  Once we know the minimum and maximum values, we most likely would have some simple code like the following:

        for (int row = 0; row < pixelData.GetUpperBound(0); ++row)
        {
            for (int col = 0; col < pixelData.GetUpperBound(1); ++col)
            {
                pixelData[row, col] = AdjustContrast(pixelData[row, col], minPixel, maxPixel);
            }
        }

    This simple routine loops through a two-dimensional array of pixelData, and calls the AdjustContrast routine on each pixel.

    As I mentioned, when you’re decomposing a problem space, most iteration statements are potentially candidates for data decomposition.  Here, we’re using two for loops – one looping through rows in the image, and a second, nested loop iterating through the columns.  We then perform one independent operation on each element based on those loop positions.  This is a prime candidate – we have no shared data, and no dependencies on anything but the pixel we want to change.  Since we’re using a for loop, we can easily parallelize this using the Parallel.For method in the TPL:

        Parallel.For(0, pixelData.GetUpperBound(0), row =>
        {
            for (int col = 0; col < pixelData.GetUpperBound(1); ++col)
            {
                pixelData[row, col] = AdjustContrast(pixelData[row, col], minPixel, maxPixel);
            }
        });

    Here, by simply changing our first for loop to a call to Parallel.For, we can parallelize this portion of our routine.
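    Two side notes on the snippet above.  AdjustContrast itself is defined in the earlier Data Decomposition post and not shown in this excerpt; a plausible stand-in for a linear contrast stretch would be the sketch below.  Also note that GetUpperBound(0) returns the highest valid index, so these loops skip the last row and column; GetLength(0) and GetLength(1) would cover the whole image.

        using System;

        static class ContrastStretch
        {
            // Linearly remaps a pixel so that [minPixel, maxPixel] stretches to [0, 255].
            // Illustrative only - the real routine lives in the earlier post.
            public static byte AdjustContrast(byte pixel, byte minPixel, byte maxPixel)
            {
                int range = Math.Max(maxPixel - minPixel, 1);        // avoid divide-by-zero
                int stretched = (pixel - minPixel) * 255 / range;
                return (byte)Math.Min(Math.Max(stretched, 0), 255);  // clamp to byte range
            }
        }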
    Parallel.For works, as do many methods in the TPL, by creating a delegate and using it as an argument to a method.  In this case, our for loop iteration block becomes a delegate created via a lambda expression.  This lets you write code that, superficially, looks similar to the familiar for loop, but functions quite differently at runtime.

    We could easily do this to our second for loop as well, but that may not be a good idea.  There is a balance to be struck when writing parallel code.  We want to have enough work items to keep all of our processors busy, but the more we partition our data, the more overhead we introduce.  In this case, we have an image of data – most likely hundreds of pixels in both dimensions.  By just parallelizing our first loop, each row of pixels can be run as a single task.  With hundreds of rows of data, we are providing fine enough granularity to keep all of our processors busy.

    If we parallelize both loops, we’re potentially creating millions of independent tasks.  This introduces extra overhead with no extra gain, and will actually reduce our overall performance.  This leads to my first guideline when writing parallel code:

    Partition your problem into enough tasks to keep each processor busy throughout the operation, but not more than necessary to keep each processor busy.

    Also note that I parallelized the outer loop.  I could have just as easily partitioned the inner loop.  However, partitioning the inner loop would have led to many more discrete work items, each with a smaller amount of work (operating on one pixel instead of one row of pixels).  My second guideline when writing parallel code reflects this:

    Partition your problem in a way that places the most work possible into each task.

    In practice, this typically means you will want to parallelize the routine at the “highest” point possible, typically the outermost loop.  If you’re looking at parallelizing methods which call other methods, you’ll want to try to partition your work high up in the stack – as you get into lower-level methods, the gains from parallelizing your routines may not overcome the overhead introduced.

    Parallel.For works great for situations where we know the number of elements we’re going to process in advance.  If we’re iterating through an IList<T> or an array, this is a typical approach.  However, there are other iteration statements common in C#.  In many situations, we’ll use foreach instead of a for loop.  This can be more understandable and easier to read, and also has the advantage of working with collections which only implement IEnumerable<T>, where we do not know the number of elements involved in advance.

    As an example, let’s take the following situation.  Say we have a collection of Customers, and we want to iterate through each customer, check some information about the customer, and if a certain condition is met, send an email to the customer and update our instance to reflect this change.  Normally, this might look something like:

        foreach (var customer in customers)
        {
            // Run some process that takes some time...
            DateTime lastContact = theStore.GetLastContact(customer);
            TimeSpan timeSinceContact = DateTime.Now - lastContact;

            // If it's been more than two weeks, send an email, and update...
            if (timeSinceContact.Days > 14)
            {
                theStore.EmailCustomer(customer);
                customer.LastEmailContact = DateTime.Now;
            }
        }

    Here, we’re doing a fair amount of work for each customer in our collection, but we don’t know how many customers exist.
    If we assume that theStore.GetLastContact(customer) and theStore.EmailCustomer(customer) are both side-effect free, thread-safe operations, we could parallelize this using Parallel.ForEach:

        Parallel.ForEach(customers, customer =>
        {
            // Run some process that takes some time...
            DateTime lastContact = theStore.GetLastContact(customer);
            TimeSpan timeSinceContact = DateTime.Now - lastContact;

            // If it's been more than two weeks, send an email, and update...
            if (timeSinceContact.Days > 14)
            {
                theStore.EmailCustomer(customer);
                customer.LastEmailContact = DateTime.Now;
            }
        });

    Just like Parallel.For, we rework our loop into a method call accepting a delegate created via a lambda expression.  This keeps our new code very similar to our original iteration statement, but it will now execute in parallel.  The same guidelines apply to Parallel.ForEach as to Parallel.For.

    The other iteration statements, do and while, do not have direct equivalents in the Task Parallel Library.  These, however, are very easy to implement using Parallel.ForEach and the yield keyword (see the sketch below).

    Most applications can benefit from implementing some form of Data Parallelism.  Iterating through collections and performing “work” is a very common pattern in nearly every application.  When the problem can be decomposed by data, we often can parallelize the workload by merely changing foreach statements to Parallel.ForEach method calls, and for loops to Parallel.For method calls.  Any time your program operates on a collection, and does a set of work on each item in the collection where that work is not dependent on other information, you very likely have an opportunity to parallelize your routine.
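    The do/while adaptation mentioned above isn't shown in the post; here is one minimal sketch of the idea – wrap the loop condition in an iterator (via yield) so that Parallel.ForEach can consume it.  All names are illustrative:

        using System;
        using System.Collections.Generic;
        using System.Threading.Tasks;

        class WhileLoopDemo
        {
            // Turns a while-style condition into a lazy sequence of work item numbers.
            // Note the condition is evaluated serially, on the enumerating thread.
            static IEnumerable<int> WorkItemsWhile(Func<bool> condition)
            {
                int i = 0;
                while (condition())
                {
                    yield return i++;
                }
            }

            static void Main()
            {
                DateTime deadline = DateTime.Now.AddSeconds(1);

                // Processes items in parallel until the condition turns false.
                Parallel.ForEach(WorkItemsWhile(() => DateTime.Now < deadline), item =>
                {
                    Console.WriteLine("Processing item {0}", item);
                });
            }
        }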

    Read the article

  • Exam 70-480 Study Material: Programming in HTML5 with JavaScript and CSS3

    - by Stacy Vicknair
    Here’s a list of sources of information for the different elements that comprise the 70-480 exam:

    General Resources
    - http://www.w3schools.com (As pointed out in David Pallmann’s blog, some of this content is unverified, but it is a decent source of information. For more about when it isn’t decent, see http://www.w3fools.com)
    - http://www.bloggedbychris.com/2012/09/19/microsoft-exam-70-480-study-guide/ (A guy who did a lot of what I did already; sadly, I found this halfway through finishing my resources list. His list is expertly put together, so I would recommend checking it out.)
    - http://davidpallmann.blogspot.com/2012/08/microsoft-certification-exam-70-480.html
    - http://pluralsight.com/training/Courses (Yes, this isn’t free, but if you look at the course listing there is an entire section on HTML5, CSS3 and JavaScript. You can always try the trial!)

    Some of the links I put below will overlap with the other resources above, but I tried to find explanations that looked beneficial to me on links outside those already mentioned.

    Test Breakdown

    Implement and Manipulate Document Structures and Objects (24%)

    Create the document structure.
    o This objective may include but is not limited to: structure the UI by using semantic markup, including for search engines and screen readers (Section, Article, Nav, Header, Footer, and Aside); create a layout container in HTML
    http://www.w3schools.com/html/html5_new_elements.asp

    Write code that interacts with UI controls.
    o This objective may include but is not limited to: programmatically add and modify HTML elements; implement media controls; implement HTML5 canvas and SVG graphics
    http://www.w3schools.com/html/html5_canvas.asp
    http://www.w3schools.com/html/html5_svg.asp

    Apply styling to HTML elements programmatically.
    o This objective may include but is not limited to: change the location of an element; apply a transform; show and hide elements

    Implement HTML5 APIs.
    o This objective may include but is not limited to: implement storage APIs, AppCache API, and Geolocation API
    http://www.w3schools.com/html/html5_geolocation.asp
    http://www.w3schools.com/html/html5_webstorage.asp
    http://www.w3schools.com/html/html5_app_cache.asp

    Establish the scope of objects and variables.
    o This objective may include but is not limited to: define the lifetime of variables; keep objects out of the global namespace; use the “this” keyword to reference an object that fired an event; scope variables locally and globally
    http://robertnyman.com/2008/10/09/explaining-javascript-scope-and-closures/
    http://www.quirksmode.org/js/this.html

    Create and implement objects and methods.
    o This objective may include but is not limited to: implement native objects; create custom objects and custom properties for native objects using prototypes and functions; inherit from an object; implement native methods and create custom methods
    http://www.javascriptkit.com/javatutors/object.shtml
    http://www.crockford.com/javascript/inheritance.html
    http://stackoverflow.com/questions/1635116/javascript-class-method-vs-class-prototype-method
    http://www.javascriptkit.com/javatutors/proto.shtml

    Implement Program Flow (25%)

    Implement program flow.
    o This objective may include but is not limited to: iterate across collections and array items; manage program decisions by using switch statements, if/then, and operators; evaluate expressions
    http://www.javascriptkit.com/jsref/looping.shtml
    http://www.javascriptkit.com/javatutors/varshort.shtml
    http://www.javascriptkit.com/javatutors/switch.shtml

    Raise and handle an event.
    o This objective may include but is not limited to: handle common events exposed by DOM (OnBlur, OnFocus, OnClick); declare and handle bubbled events; handle an event by using an anonymous function
    http://dev.w3.org/2006/webapi/DOM-Level-3-Events/html/DOM3-Events.html
    http://javascript.info/tutorial/bubbling-and-capturing

    Implement exception handling.
    o This objective may include but is not limited to: set and respond to error codes; throw an exception; request for null checks; implement try-catch-finally blocks
    http://www.javascriptkit.com/javatutors/trycatch.shtml

    Implement a callback.
    o This objective may include but is not limited to: receive messages from the HTML5 WebSocket API; use jQuery to make an AJAX call; wire up an event; implement a callback by using anonymous functions; handle the “this” pointer
    http://www.w3.org/TR/2011/WD-websockets-20110419/
    http://www.html5rocks.com/en/tutorials/websockets/basics/
    http://api.jquery.com/jQuery.ajax/

    Create a web worker process.
    o This objective may include but is not limited to: start and stop a web worker; pass data to a web worker; configure timeouts and intervals on the web worker; register an event listener for the web worker; limitations of a web worker
    https://developer.mozilla.org/en-US/docs/DOM/Using_web_workers
    http://www.html5rocks.com/en/tutorials/workers/basics/

    Access and Secure Data (26%)

    Validate user input by using HTML5 elements.
    o This objective may include but is not limited to: choose the appropriate controls based on requirements; implement HTML input types and content attributes (for example, required) to collect user input
    http://diveintohtml5.info/forms.html

    Validate user input by using JavaScript.
    o This objective may include but is not limited to: evaluate a regular expression to validate the input format; validate that you are getting the right kind of data type by using built-in functions; prevent code injection
    http://www.regular-expressions.info/javascript.html
    http://msdn.microsoft.com/en-us/library/66ztdbe6(v=vs.94).aspx
    https://developer.mozilla.org/en-US/docs/JavaScript/Reference/Operators/typeof
    http://blog.stackoverflow.com/2008/06/safe-html-and-xss/
    http://stackoverflow.com/questions/942011/how-to-prevent-javascript-injection-attacks-within-user-generated-html

    Consume data.
    o This objective may include but is not limited to: consume JSON and XML data; retrieve data by using web services; load data or get data from other sources by using XMLHTTPRequest
    http://www.erichynds.com/jquery/working-with-xml-jquery-and-javascript/
    http://www.webdevstuff.com/86/javascript-xmlhttprequest-object.html
    http://www.json.org/
    http://stackoverflow.com/questions/4935632/how-to-parse-json-in-javascript

    Serialize, deserialize, and transmit data.
    o This objective may include but is not limited to: binary data; text data (JSON, XML); implement the jQuery serialize method; Form.Submit; parse data; send data by using XMLHTTPRequest; sanitize input by using URI/form encoding
    http://api.jquery.com/serialize/
    http://www.javascript-coder.com/javascript-form/javascript-form-submit.phtml
    http://stackoverflow.com/questions/327685/is-there-a-way-to-read-binary-data-into-javascript
    https://developer.mozilla.org/en-US/docs/JavaScript/Reference/Global_Objects/encodeURI

    Use CSS3 in Applications (25%)

    Style HTML text properties.
    o This objective may include but is not limited to: apply styles to text appearance (color, bold, italics); apply styles to text font (WOFF and @font-face, size); apply styles to text alignment, spacing, and indentation; apply styles to text hyphenation; apply styles for a text drop shadow
    http://www.w3schools.com/css/css_text.asp
    http://www.w3schools.com/css/css_font.asp
    http://nicewebtype.com/notes/2009/10/30/how-to-use-css-font-face/
    http://webdesign.about.com/od/beginningcss/p/aacss5text.htm
    http://www.w3.org/TR/css3-text/
    http://www.css3.info/preview/box-shadow/

    Style HTML box properties.
    o This objective may include but is not limited to: apply styles to alter appearance attributes (size, border and rounding border corners, outline, padding, margin); apply styles to alter graphic effects (transparency, opacity, background image, gradients, shadow, clipping); apply styles to establish and change an element’s position (static, relative, absolute, fixed)
    http://net.tutsplus.com/tutorials/html-css-techniques/10-css3-properties-you-need-to-be-familiar-with/
    http://www.w3schools.com/css/css_image_transparency.asp
    http://www.w3schools.com/cssref/pr_background-image.asp
    http://ie.microsoft.com/testdrive/graphics/cssgradientbackgroundmaker/default.html
    http://www.w3.org/TR/CSS21/visufx.html
    http://www.barelyfitz.com/screencast/html-training/css/positioning/
    http://davidwalsh.name/css-fixed-position

    Create a flexible content layout.
    o This objective may include but is not limited to: implement a layout using a flexible box model; implement a layout using multi-column; implement a layout using position floating and exclusions; implement a layout using grid alignment; implement a layout using regions, grouping, and nesting
    http://www.html5rocks.com/en/tutorials/flexbox/quick/
    http://www.css3.info/preview/multi-column-layout/
    http://msdn.microsoft.com/en-us/library/ie/hh673558(v=vs.85).aspx
    http://dev.w3.org/csswg/css3-grid-layout/
    http://dev.w3.org/csswg/css3-regions/

    Create an animated and adaptive UI.
    o This objective may include but is not limited to: animate objects by applying CSS transitions; apply 3-D and 2-D transformations; adjust UI based on media queries (device adaptations for output formats, displays, and representations); hide or disable controls
    http://www.bloggedbychris.com/2012/09/19/microsoft-exam-70-480-study-guide/

    Find elements by using CSS selectors and jQuery.
    o This objective may include but is not limited to: choose the correct selector to reference an element; define element, style, and attribute selectors; find elements by using pseudo-elements and pseudo-classes (for example, :before, :first-line, :first-letter, :target, :lang, :checked, :first-child)
    http://www.bloggedbychris.com/2012/09/19/microsoft-exam-70-480-study-guide/

    Structure a CSS file by using CSS selectors.
    o This objective may include but is not limited to: reference elements correctly; implement inheritance; override inheritance by using !important; style an element based on pseudo-elements and pseudo-classes (for example, :before, :first-line, :first-letter, :target, :lang, :checked, :first-child)
    http://www.bloggedbychris.com/2012/09/19/microsoft-exam-70-480-study-guide/

    Technorati Tags: 70-480,CSS3,HTML5,HTML,CSS,JavaScript,Certification

    Read the article

  • Node.js Adventure - Node.js on Windows

    - by Shaun
    Two weeks ago I had a talk with Wang Tao, a C# MVP in China who is currently running his startup company and its product, named worktile. He asked me to figure out a synchronization solution to help his product in the future, and he preferred that I implement the service in Node.js, since worktile is written in Node.js. Even though I have some experience in ASP.NET MVC, HTML, CSS and JavaScript, I don’t consider myself an expert in JavaScript. In fact, I’m very new to it. So it scared me a bit when he asked me to use Node.js. But after about one week of investigation I have to say Node.js is very easy to learn, use and deploy, even if you have very limited JavaScript skill. And I have come to love Node.js. Hence I decided to start a series named “Node.js Adventure”, where I will tell the story of learning and using Node.js on Windows and Windows Azure. This is the first one.

    (Brief) Introduction of Node.js

    I don’t want to give a fully detailed introduction of Node.js here; there are many resources on the internet, and the best one is its homepage. Node.js was created by Ryan Dahl and is sponsored by Joyent. It consists of about 80% C/C++ for the core and 20% JavaScript for the API. It utilizes CommonJS as its module system, which we will explain later. The official definition of Node.js is:

    Node.js is a platform built on Chrome's JavaScript runtime for easily building fast, scalable network applications. Node.js uses an event-driven, non-blocking I/O model that makes it lightweight and efficient, perfect for data-intensive real-time applications that run across distributed devices.

    First of all, Node.js uses JavaScript as its development language and runs on top of the V8 engine, which is also used by Chrome. It brings JavaScript, a client-side language, into the backend service world. So many people say, even if not entirely accurately, that "Node.js is server-side JavaScript".

    Additionally, Node.js uses an event-driven, non-blocking IO model. This means that in Node.js there’s no way to block the currently working thread; every operation in Node.js executes asynchronously. This is a huge benefit, especially if our code needs IO operations such as reading disks, connecting to a database, consuming a web service, etc.

    Unlike IIS or Apache, Node.js doesn’t use a multi-threaded model. In Node.js there’s only one working thread serving all users’ requests and resource responses, shown as the ST star in the figure below. And there is a POSIX async threads pool in Node.js which contains many async threads (AT stars) for IO operations. When a user makes an IO request, the ST serves it, but it does not perform the IO operation itself. Instead, the ST goes to the POSIX async threads pool, picks up an AT, passes the operation to it, and then goes back to serve other requests. The AT performs the IO operation asynchronously. Suppose another user comes along before the AT completes the IO operation: the ST serves this new user's request, picks up another AT from the POSIX pool, and goes back again. When the first AT finishes its IO operation, it takes the result back and waits for the ST; the ST takes the response, returns the AT to the POSIX pool, and then responds to the user. And if the second AT finishes its job, the ST responds back to the second user in the same way. As you can see, in Node.js there’s only one thread serving clients’ requests and POSIX results. This thread loops between the users and POSIX, passing the data back and forth. The async jobs are handled by the POSIX pool.
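    To make the single-thread-plus-async-pool description concrete, here is a tiny sketch in Node.js (the file name is arbitrary). The read is handed off for asynchronous processing while the lone JavaScript thread keeps running:

        var fs = require("fs");

        console.log("before read");

        // The I/O happens off the main thread; the callback runs later,
        // back on the single JavaScript thread.
        fs.readFile("some-file.txt", "utf8", function (err, data) {
            if (err) {
                console.log("read failed: " + err.message);
                return;
            }
            console.log("read " + data.length + " characters");
        });

        console.log("after read (prints before the file contents arrive)");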
    This is the event-driven, non-blocking IO model. The performance of this model is much better than that of the multi-threaded blocking model. For example, Apache is built on the multi-threaded blocking model, while Nginx uses the event-driven non-blocking model. Below is the performance comparison between them, and after that the memory usage comparison. These charts are captured from the video NodeJS Basics: An Introductory Training, presented by a Cloud Foundry Developer Advocate.

    Node.js on Windows

    Executing a Node.js application on Windows is very simple. First, we need to download the latest Node.js platform from its website. Once installed, it registers its folder in the system PATH variable so that we can execute Node.js from anywhere. To confirm the Node.js installation, just open a command window and type “node”; it will show the Node.js console. As you can see, this is a JavaScript interactive console, where we can type simple JavaScript code and commands.

    To run a Node.js JavaScript application, just specify the source code file name as the argument of the “node” command. For example, let’s create a Node.js source code file named “helloworld.js” and copy a sample from the Node.js website:

        var http = require("http");

        http.createServer(function (req, res) {
            res.writeHead(200, {"Content-Type": "text/plain"});
            res.end("Hello World\n");
        }).listen(1337, "127.0.0.1");

        console.log("Server running at http://127.0.0.1:1337/");

    This code creates a web server that listens on port 1337 and returns “Hello World” for any request. Run it in the command window, then open a browser and navigate to http://localhost:1337/.

    As you can see, when using Node.js we are not so much creating a web application; in fact, we are creating a web server. We need to deal with requests, responses and the related headers, status codes, etc. And this is one of the benefits of using Node.js: it's lightweight and straightforward. But creating a website from scratch again and again is not acceptable. The good news is that Node.js utilizes CommonJS as its module system, so we can leverage modules to simplify our job. Furthermore, there are about ten thousand modules available on the internet, covering almost all areas of server-side application development.

    NPM and Node.js Modules

    Node.js utilizes CommonJS as its module system. A module is a set of JavaScript files. In Node.js, if we have an entry file named “index.js”, then all the modules it needs will be located in the “node_modules” folder, and in “index.js” we can import modules by specifying the module name. For example, in the code we’ve just created, we imported a module named “http”, which is a built-in module installed along with Node.js, so that we can use the code in this “http” module.

    Besides the built-in modules, there are many modules available on the NPM website. Thousands of developers are contributing and downloading modules there. Hence this is another benefit of using Node.js: there are many modules we can use, the number of modules is growing very fast, and we can also publish our own modules to the community. When I wrote this post, there were 14,608 modules on NPM in total, with about 10 thousand downloads per day.

    Installing a module is very simple. Let’s go back to our command window and input the command “npm install express”. This command installs a module named “express”, which is an MVC framework on top of Node.js.
    And let’s create another JavaScript file named “helloweb.js” and copy the code below into it. It imports the “express” module. When the user browses the home page, it responds with a text message. If the incoming URL matches “/Echo/:value”, where the “value” is whatever the user specified, it passes the value back along with the current date and time in JSON format. Finally, the website listens on port 12345.

        var express = require("express");
        var app = express();

        app.get("/", function(req, res) {
            res.send("Hello Node.js and Express.");
        });

        app.get("/Echo/:value", function(req, res) {
            var value = req.params.value;
            res.json({
                "Value" : value,
                "Time" : new Date()
            });
        });

        console.log("Web application opened.");
        app.listen(12345);

    For more information and the API of “express”, please have a look here. Start our application from the command window with the command “node helloweb.js”, and then navigate to the home page to see the response in the browser. And if we go to, for example, http://localhost:12345/Echo/Hello Shaun, we can see the JSON result.

    The “express” module is very popular on NPM. It makes the job simple when we need to build an MVC website. There are many other very useful modules on NPM, for example:

    - underscore: A utility module covering many common functionalities such as forEach, map, reduce, select, etc.
    - request: A very simple HTTP request client.
    - async: A library for coordinating async operations.
    - wind: A library which enables us to control flow with plain JavaScript for asynchronous programming (and more) without additional pre-compiling steps.

    Node.js and IIS

    I demonstrated how to run a Node.js application from the console. Since we are on Windows, another common requirement would be: “can I host Node.js in IIS?” The answer is “yes”. Tomasz Janczuk created a project, IISNode, at his GitHub space, which we can find here. And Scott Hanselman published a blog post introducing it.

    Summary

    In this post I provided a very brief introduction to Node.js, including its official definition, its architecture and how it implements the event-driven, non-blocking model. I then described how to install and run a Node.js application from the Windows console, as well as the Node.js module system and the NPM command. At the end I referred to some links about IISNode, an IIS extension that allows Node.js applications to run on IIS.

    Node.js has become a very popular server-side application platform, especially this year. By leveraging its non-blocking IO model and async features, it’s very useful for building highly scalable, asynchronous services. I think Node.js will be used widely in cloud application development in the near future.

    In the next post I will explain how to use SQL Server from Node.js.

    Hope this helps,
    Shaun

    All documents and related graphics, codes are provided "AS IS" without warranty of any kind. Copyright © Shaun Ziyan Xu. This work is licensed under the Creative Commons License.

    Read the article

  • exporting bind and keyframe bone poses from blender to use in OpenGL

    - by SaldaVonSchwartz
    I'm having a hard time trying to understand how exactly Blender's concept of bone transforms maps to the usual math of skinning (which I'm implementing in an OpenGL-based engine of sorts). Or I'm missing something in the math. It's gonna be long, but here's as much background as I can think of.

    First, a few notes and assumptions: I'm using column-major order and multiply from right to left. So for instance, vertex v transformed by matrix A and then further transformed by matrix B would be: v' = BAv. This also means whenever I export a matrix from Blender through Python, I export it (in text format) in 4 lines, each representing a column. This is so I can then read them back into my engine like this:

        if (fscanf(fileHandle, "%f %f %f %f",
                   &skeleton.joints[currentJointIndex].inverseBindTransform.m[0],
                   &skeleton.joints[currentJointIndex].inverseBindTransform.m[1],
                   &skeleton.joints[currentJointIndex].inverseBindTransform.m[2],
                   &skeleton.joints[currentJointIndex].inverseBindTransform.m[3])) {
            if (fscanf(fileHandle, "%f %f %f %f",
                       &skeleton.joints[currentJointIndex].inverseBindTransform.m[4],
                       &skeleton.joints[currentJointIndex].inverseBindTransform.m[5],
                       &skeleton.joints[currentJointIndex].inverseBindTransform.m[6],
                       &skeleton.joints[currentJointIndex].inverseBindTransform.m[7])) {
                if (fscanf(fileHandle, "%f %f %f %f",
                           &skeleton.joints[currentJointIndex].inverseBindTransform.m[8],
                           &skeleton.joints[currentJointIndex].inverseBindTransform.m[9],
                           &skeleton.joints[currentJointIndex].inverseBindTransform.m[10],
                           &skeleton.joints[currentJointIndex].inverseBindTransform.m[11])) {
                    if (fscanf(fileHandle, "%f %f %f %f",
                               &skeleton.joints[currentJointIndex].inverseBindTransform.m[12],
                               &skeleton.joints[currentJointIndex].inverseBindTransform.m[13],
                               &skeleton.joints[currentJointIndex].inverseBindTransform.m[14],
                               &skeleton.joints[currentJointIndex].inverseBindTransform.m[15])) {

    I'm simplifying the code I show because otherwise it would make things unnecessarily harder to explain/follow (in the context of my question). Please refrain from making remarks related to optimizations. This is not final code.

    Having said that, if I understand correctly, the basic idea of skinning/animation is:

    - I have a mesh made up of vertices.
    - I have the mesh model-world transform W.
    - I have my joints, which are really just transforms from each joint's space to its parent's space. I'll call these transforms Bj, meaning the matrix which takes from joint j's bind pose to joint j-1's bind pose. For each of these, I actually import their inverse to the engine, Bj^-1.
    - I have keyframes, each containing a set of current poses Cj for each joint j. These are initially imported to my engine in TQS format, but after (S)LERPing them I compose them into Cj matrices which are equivalent to the Bjs (not the Bj^-1 ones), only for the current spatial configuration of each joint at that frame.

    Given the above, the skeletal animation algorithm is, on each frame: check how much time has elapsed and compute the resulting current time in the animation, from 0 meaning frame 0 to 1 meaning the end of the animation (and I'm looping forever, so the time is mod(total duration)). Then for each joint:

    1. Calculate its world inverse bind pose, that is Bj_w^-1 = Bj^-1 Bj-1^-1 ... B0^-1.
    2. Use the current animation time to LERP the components of the TQS and come up with an interpolated current pose matrix Cj, which should transform from the joint's current configuration space to world space.
    Similar to what I did to get the world version of the inverse bind poses, I come up with the joint's world current pose, Cj_w = C0 C1 ... Cj.

    3. Now that I have world versions of Bj and Cj, I store this joint's world skinning matrix K_wj = Cj_w Bj_w^-1.

    The above is roughly implemented like so:

        - (void)update:(NSTimeInterval)elapsedTime
        {
            static double time = 0;
            time = fmod((time + elapsedTime), 1.);

            uint16_t LERPKeyframeNumber = 60 * time;
            uint16_t lkeyframeNumber = 0;
            uint16_t lkeyframeIndex = 0;
            uint16_t rkeyframeNumber = 0;
            uint16_t rkeyframeIndex = 0;

            for (int i = 0; i < aClip.keyframesCount; i++)
            {
                uint16_t keyframeNumber = aClip.keyframes[i].number;
                if (keyframeNumber <= LERPKeyframeNumber)
                {
                    lkeyframeIndex = i;
                    lkeyframeNumber = keyframeNumber;
                }
                else
                {
                    rkeyframeIndex = i;
                    rkeyframeNumber = keyframeNumber;
                    break;
                }
            }

            double lTime = lkeyframeNumber / 60.;
            double rTime = rkeyframeNumber / 60.;
            double blendFactor = (time - lTime) / (rTime - lTime);

            GLKMatrix4 bindPosePalette[aSkeleton.jointsCount];
            GLKMatrix4 currentPosePalette[aSkeleton.jointsCount];

            for (int i = 0; i < aSkeleton.jointsCount; i++)
            {
                F3DETQSType& lPose = aClip.keyframes[lkeyframeIndex].skeletonPose.jointPoses[i];
                F3DETQSType& rPose = aClip.keyframes[rkeyframeIndex].skeletonPose.jointPoses[i];

                GLKVector3 LERPTranslation = GLKVector3Lerp(lPose.t, rPose.t, blendFactor);
                GLKQuaternion SLERPRotation = GLKQuaternionSlerp(lPose.q, rPose.q, blendFactor);
                GLKVector3 LERPScaling = GLKVector3Lerp(lPose.s, rPose.s, blendFactor);

                GLKMatrix4 currentTransform = GLKMatrix4MakeWithQuaternion(SLERPRotation);
                currentTransform = GLKMatrix4Multiply(currentTransform,
                    GLKMatrix4MakeTranslation(LERPTranslation.x, LERPTranslation.y, LERPTranslation.z));
                currentTransform = GLKMatrix4Multiply(currentTransform,
                    GLKMatrix4MakeScale(LERPScaling.x, LERPScaling.y, LERPScaling.z));

                if (aSkeleton.joints[i].parentIndex == -1)
                {
                    bindPosePalette[i] = aSkeleton.joints[i].inverseBindTransform;
                    currentPosePalette[i] = currentTransform;
                }
                else
                {
                    bindPosePalette[i] = GLKMatrix4Multiply(aSkeleton.joints[i].inverseBindTransform,
                                                            bindPosePalette[aSkeleton.joints[i].parentIndex]);
                    currentPosePalette[i] = GLKMatrix4Multiply(currentPosePalette[aSkeleton.joints[i].parentIndex],
                                                               currentTransform);
                }

                aSkeleton.skinningPalette[i] = GLKMatrix4Multiply(currentPosePalette[i], bindPosePalette[i]);
            }
        }

    At this point, I should have my skinning palette. So on each frame in my vertex shader, I do:

        uniform mat4 modelMatrix;
        uniform mat4 projectionMatrix;
        uniform mat3 normalMatrix;
        uniform mat4 skinningPalette[6];

        attribute vec4 position;
        attribute vec3 normal;
        attribute vec2 tCoordinates;
        attribute vec4 jointsWeights;
        attribute vec4 jointsIndices;

        varying highp vec2 tCoordinatesVarying;
        varying highp float lIntensity;

        void main()
        {
            vec3 eyeNormal = normalize(normalMatrix * normal);
            vec3 lightPosition = vec3(0., 0., 2.);
            lIntensity = max(0.0, dot(eyeNormal, normalize(lightPosition)));
            tCoordinatesVarying = tCoordinates;

            vec4 skinnedVertexPosition = vec4(0.);
            for (int i = 0; i < 4; i++)
            {
                skinnedVertexPosition += jointsWeights[i] * skinningPalette[int(jointsIndices[i])] * position;
            }

            gl_Position = projectionMatrix * modelMatrix * skinnedVertexPosition;
        }

    The result: the mesh parts that are supposed to animate do animate and follow the expected motion; however, the rotations are messed up in terms of orientation. That is, the mesh is not translated somewhere else or scaled in any way, but the orientations of the rotations seem to be off.
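    For reference, here is the intended math gathered in one place, in the same notation used above (this is just a restatement of the equations already described, not a correction):

        Bj_w^-1 = Bj^-1 Bj-1^-1 ... B0^-1      (world inverse bind pose)
        Cj_w    = C0 C1 ... Cj                 (world current pose)
        K_wj    = Cj_w Bj_w^-1                 (skinning matrix for joint j)

        skinnedVertex = sum over the 4 influences i of:
                        jointsWeights[i] * K_w(jointsIndices[i]) * position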
    So a few observations:

    In the above shader, notice I actually did not multiply the vertices by the mesh modelMatrix (the one which would take them to model or world or global space, whichever you prefer, since there is no parent to the mesh itself other than "the world") until after skinning. This is contrary to what I implied in the theory: if my skinning matrix takes vertices from model to joint and back to model space, I'd think the vertices should already be premultiplied by the mesh transform. But if I do so, I just get a black screen.

    As far as exporting the joints from Blender, my Python script exports, for each armature bone in bind pose, its matrix in this way:

        def DFSJointTraversal(file, skeleton, jointList):
            for joint in jointList:
                poseJoint = skeleton.pose.bones[joint.name]
                jointTransform = poseJoint.matrix.inverted()
                file.write('Joint ' + joint.name + ' Transform {\n')
                for col in jointTransform.col:
                    file.write('{:9f} {:9f} {:9f} {:9f}\n'.format(col[0], col[1], col[2], col[3]))
                DFSJointTraversal(file, skeleton, joint.children)
                file.write('}\n')

    And for current / keyframe poses (assuming I'm in the right keyframe):

        def exportAnimations(filepath):
            # Only one skeleton per scene
            objList = [object for object in bpy.context.scene.objects if object.type == 'ARMATURE']
            if len(objList) == 0:
                return
            elif len(objList) > 1:
                return  # raise exception? dialog box?

            skeleton = objList[0]
            jointNames = [bone.name for bone in skeleton.data.bones]

            for action in bpy.data.actions:
                # One animation clip per action in Blender, named as the action
                animationClipFilePath = filepath[0 : filepath.rindex('/') + 1] + action.name + ".aClip"
                file = open(animationClipFilePath, 'w')
                file.write('target skeleton: ' + skeleton.name + '\n')
                file.write('joints count: {:d}'.format(len(jointNames)) + '\n')

                skeleton.animation_data.action = action
                keyframeNum = max([len(fcurve.keyframe_points) for fcurve in action.fcurves])

                keyframes = []
                for fcurve in action.fcurves:
                    for keyframe in fcurve.keyframe_points:
                        keyframes.append(keyframe.co[0])
                keyframes = set(keyframes)
                keyframes = [kf for kf in keyframes]
                keyframes.sort()
                file.write('keyframes count: {:d}'.format(len(keyframes)) + '\n')

                for kfIndex in keyframes:
                    bpy.context.scene.frame_set(kfIndex)
                    file.write('keyframe: {:d}\n'.format(int(kfIndex)))
                    for i in range(0, len(skeleton.data.bones)):
                        file.write('joint: {:d}\n'.format(i))
                        joint = skeleton.pose.bones[i]
                        jointCurrentPoseTransform = joint.matrix
                        translationV = jointCurrentPoseTransform.to_translation()
                        rotationQ = jointCurrentPoseTransform.to_3x3().to_quaternion()
                        scaleV = jointCurrentPoseTransform.to_scale()
                        file.write('T {:9f} {:9f} {:9f}\n'.format(translationV[0], translationV[1], translationV[2]))
                        file.write('Q {:9f} {:9f} {:9f} {:9f}\n'.format(rotationQ[1], rotationQ[2], rotationQ[3], rotationQ[0]))
                        file.write('S {:9f} {:9f} {:9f}\n'.format(scaleV[0], scaleV[1], scaleV[2]))
                    file.write('\n')
                file.close()

    Which I believe follows the theory explained at the beginning of my question. But then I checked out Blender's DirectX .x exporter for reference...
    and what threw me off was that in the .x script they are exporting bind poses like so (transcribed using the same variable names I used so you can compare):

        if joint.parent:
            jointTransform = poseJoint.parent.matrix.inverted()
        else:
            jointTransform = Matrix()
        jointTransform *= poseJoint.matrix

    and exporting current keyframe poses like this:

        if joint.parent:
            jointCurrentPoseTransform = joint.parent.matrix.inverted()
        else:
            jointCurrentPoseTransform = Matrix()
        jointCurrentPoseTransform *= joint.matrix

    Why are they using the parent's inverted transform instead of that of the joint in question? Isn't the joint transform assumed to exist in the context of a parent transform, since after all it transforms from this joint's space to its parent's? And why are they concatenating in the same order for both bind poses and keyframe poses, if these two are then supposed to be concatenated with each other to cancel out the change of basis? Anyway, any ideas are appreciated.

    Read the article

  • C#: String Concatenation vs Format vs StringBuilder

    - by James Michael Hare
    I was looking through my group’s C# coding standards the other day and there were a couple of legacy items in there that caught my eye.  They had been passed down from committee to committee so many times that no one even thought to second-guess or test them for a long time.  It’s yet another example of how micro-optimizations can often get the best of us and cause us to write code that is not as maintainable as it could be, for the sake of squeezing an extra ounce of performance out of our software.

    So the two standards in question were these, in paraphrase:

    - Prefer StringBuilder or string.Format() to string concatenation.
    - Prefer string.Equals() with the case-insensitive option to string.ToUpper().Equals().

    Now some of you may already know what my results are going to show, as these items have been compared before on many blogs, but I think it’s always worth repeating and trying these yourself.  So let’s dig in.

    The first test was a pretty standard one.  When concatenating strings, what is the best choice: StringBuilder, string concatenation, or string.Format()?

    Before we begin, I read in a number of iterations from the console and a length for each string to generate.  Then I generate that many random strings of the given length, and an array to hold the results.  Why am I so keen to keep the results?  Because I want to be able to snapshot the memory and don’t want garbage collection to collect the strings – hence the array to keep hold of them.  I also didn’t want the random strings to be part of the allocation, so I pre-allocate them and the array up front, before the snapshot.  So in the code snippets below:

    - num – Number of iterations.
    - strings – Array of randomly generated strings.
    - results – Array to hold the results of the concatenation tests.
    - timer – A System.Diagnostics.Stopwatch() instance to time code execution.
    - start – Beginning memory size.
    - stop – Ending memory size.
    - after – Memory size after final GC.

    So first, let’s look at the concatenation loop:

        // build num strings using concatenation.
        for (int i = 0; i < num; i++)
        {
            results[i] = "This is test #" + i + " with a result of " + strings[i];
        }

    Pretty standard, right?  Next, string.Format():

        // build strings using string.Format()
        for (int i = 0; i < num; i++)
        {
            results[i] = string.Format("This is test #{0} with a result of {1}", i, strings[i]);
        }

    Finally, StringBuilder:

        // build strings using StringBuilder
        for (int i = 0; i < num; i++)
        {
            var builder = new StringBuilder();
            builder.Append("This is test #");
            builder.Append(i);
            builder.Append(" with a result of ");
            builder.Append(strings[i]);
            results[i] = builder.ToString();
        }

    I take each of these loops and time them using a block like this:

        // get the total amount of memory used, true tells it to run GC first.
        start = System.GC.GetTotalMemory(true);

        // restart the timer
        timer.Reset();
        timer.Start();

        // *** code to time and measure goes here. ***

        // get the current amount of memory, stop the timer, then get memory after GC.
        stop = System.GC.GetTotalMemory(false);
        timer.Stop();
        after = System.GC.GetTotalMemory(true);

    So let’s look at what happens when I run each of these blocks through the timer and memory check at 500,000 iterations:

        Operator +      - Time: 547, Memory: 56104540/55595960 - 500000
        string.Format() - Time: 749, Memory: 57295812/55595960 - 500000
        StringBuilder   - Time: 608, Memory: 55312888/55595960 - 500000

    Egad!
string.Format brings up the rear and + triumphs, well, at least in terms of speed.  The concat burns more memory than StringBuilder but less than string.Format().  This shows two main things: StringBuilder is not always the panacea many think it is. The difference between any of the three is minuscule! The second point is extremely important!  You will often hear people grasp at results like these and say, “look, operator + is 10% faster than StringBuilder, so always use operator +.”  Statements like this are a disservice and often misleading.  For example, if I had a good guess at what the size of the string would be, I could have preallocated my StringBuilder like so:   1: for (int i = 0; i < num; i++) 2: { 3: // pre-declare StringBuilder to have 100 char buffer. 4: var builder = new StringBuilder(100); 5: builder.Append("This is test #"); 6: builder.Append(i); 7: builder.Append(" with a result of "); 8: builder.Append(strings[i]); 9: results[i] = builder.ToString(); 10: }   Now let’s look at the times: 1: Operator + - Time: 551, Memory: 56104412/55595960 - 500000 2: string.Format() - Time: 753, Memory: 57296484/55595960 - 500000 3: StringBuilder - Time: 525, Memory: 59779156/55595960 - 500000   Whoa!  All of a sudden StringBuilder is back on top again!  But notice, it takes more memory now.  This makes perfect sense if you examine the IL behind the scenes.  Whenever you do a string concat (+) in your code, it examines the lengths of the arguments and creates a StringBuilder behind the scenes of the appropriate size for you. But even IF we know the approximate size of our StringBuilder, look how much less readable it is!  That’s why I feel you should always take into account both readability and performance.  After all, consider that all these timings are over 500,000 iterations.  That’s at most a 0.0004 ms difference per call, which is negligible.  The key is to pick the best tool for the job.  What do I mean?  Consider these awesome words of wisdom: Concatenate (+) is best at concatenating.  StringBuilder is best when you need to build. Format is best at formatting. Totally Earth-shattering, right?  But if you consider it carefully, it actually has a lot of beauty in its simplicity.  Remember, there is no magic bullet.  If one of these always beat the others, we’d only have one choice and not three. The fact is, the concatenation operator (+) has been optimized for speed and looks the cleanest for joining together a known set of strings in the simplest manner possible. StringBuilder, on the other hand, excels when you need to build a string of indeterminate length.  Use it when you are looping until you hit a stop condition, building up a result as you go, and it won’t steer you wrong. String.Format seems to be the loser in the stats, but consider which of these is more readable.  Yes, ignore the fact that you could do this with ToString() on a DateTime.  1: // build a date via concatenation 2: var date1 = (month < 10 ? "0" : string.Empty) + month + '/' + (day < 10 ?
"0" : string.Empty) + day + '/' + year; 4:  5: // build a date via string builder 6: var builder = new StringBuilder(10); 7: if (month < 10) builder.Append('0'); 8: builder.Append(month); 9: builder.Append('/'); 10: if (day < 10) builder.Append('0'); 11: builder.Append(day); 12: builder.Append('/'); 13: builder.Append(year); 14: var date2 = builder.ToString(); 15:  16: // build a date via string.Format 17: var date3 = string.Format("{0:00}/{1:00}/{2:0000}", month, day, year); 18:  So the strength of string.Format is that it makes constructing a formatted string easy to read.  Yes, it’s slower, but look at how much more elegant it is to do zero-padding and anything else string.Format does. So my lesson is, don’t look for the silver bullet!  Choose the best tool.  Micro-optimization almost always bites you in the end because you’re sacrificing readability for performance, which is almost exactly the wrong choice 90% of the time. I love the rules of optimization.  They’ve been stated before in many forms, but here’s how I always remember them: For Beginners: Do not optimize. For Experts: Do not optimize yet. It’s so true.  Most of the time on today’s modern hardware, a micro-second optimization at the expense of readability will net you nothing because it won’t be your bottleneck.  Code for readability and choose the best tool for the job, which will usually be the most readable and maintainable option as well.  Then, and only then, if you still need an extra performance boost after profiling your code and exhausting all other options, can you start to think about optimizing.
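    For reference, here is a minimal, self-contained sketch of the measurement harness the post describes: read the iteration count and string length from the console, pre-allocate the random input strings and the results array, then wrap one of the loops in the timer and memory snapshots. It follows the structure of the snippets above, but the RandomString helper and the console prompts are illustrative assumptions, not the author’s original code.

    using System;
    using System.Diagnostics;

    class ConcatBenchmark
    {
        static readonly Random Rng = new Random();

        // Illustrative helper (an assumption, not from the post): one random lowercase string.
        static string RandomString(int length)
        {
            var chars = new char[length];
            for (int i = 0; i < length; i++)
            {
                chars[i] = (char)('a' + Rng.Next(26));
            }
            return new string(chars);
        }

        static void Main()
        {
            Console.Write("Iterations: ");
            int num = int.Parse(Console.ReadLine());
            Console.Write("String length: ");
            int len = int.Parse(Console.ReadLine());

            // Pre-allocate inputs and results so they are not part of the measured allocation.
            var strings = new string[num];
            var results = new string[num];
            for (int i = 0; i < num; i++)
            {
                strings[i] = RandomString(len);
            }

            var timer = new Stopwatch();

            // Force a GC and snapshot memory, then start timing.
            long start = System.GC.GetTotalMemory(true);
            timer.Reset();
            timer.Start();

            // *** code to time and measure goes here, e.g. the concatenation loop: ***
            for (int i = 0; i < num; i++)
            {
                results[i] = "This is test #" + i + " with a result of " + strings[i];
            }

            // Snapshot memory, stop the timer, then snapshot again after a final GC.
            long stop = System.GC.GetTotalMemory(false);
            timer.Stop();
            long after = System.GC.GetTotalMemory(true);

            Console.WriteLine("Operator + - Time: {0}, Memory: {1}/{2} - {3}",
                timer.ElapsedMilliseconds, stop, after, results.Length);
        }
    }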

    Read the article

  • CodePlex Daily Summary for Friday, October 04, 2013

    CodePlex Daily Summary for Friday, October 04, 2013Popular ReleasesMoreTerra (Terraria World Viewer): Version 1.11: Release Notes Release 1.11 =========== =Bug Fixes= =========== Now works with Terraria 1.2 wld files. =============== =Known Issues= =============== Not all tiles and items are accounted for. Missing tiles just show up as pink. This is actively being worked on but wanted to get a build out that works with 1.2VG-Ripper & PG-Ripper: PG-Ripper 1.4.19: NEW: Added Option to login as Guest NEW: Added Support for "ImageTeam.org linksStyleMVVM: 3.1.4: This release has virtually no code change but adds multiple new Item templates for the Windows Phone 8 platformSystem Center Orchestrator Community Project: Orchestrator Visio and Word Generator 1.5: This tool lets you export Orchestrator runbooks as a Visio diagram, and you can also generate an optional Word file as well. Components exported as of v1.5 are : - Title of the runbook - Activities and their names/thumbnails/description (description is displayed as a callout in the Visio diagram, attached to the shape of the activity) - Links and their names/colors - Looping and their interval Thumbnails and activities are grouped in the Visio diagram, for easy manipulation of the diagra...State of Decay Save Manager: Version 1.0.4: Add version at bottom of formDNN® Form and List: DNN Form and List 06.00.07: DotNetNuke Form and List 06.00.06 Changes to 6.0.7•Fixed an error in datatypes.config that caused calculated fields to be missing in 6.0.6 Changes to 6.0.6•Add in Sql to remove 'text on row' setting for UserDefinedTable to make SQL Azure compatible. •Add new azureCompatible element to manifest. •Added a fix for importing templates. Changes to 6.0.2•Fix: MakeThumbnail was broken if the application pool was configured to .Net 4 •Change: Data is now stored in nvarchar(max) instead of ntext C...SimpleExcelReportMaker: Serm 0.03: SourceCode and Sample .Net Framework 3.5 AnyCPU compile.RDFSharp - Start playing with RDF!: RDFSharp-0.6.6: GENERAL (NEW) Introduction of INT64 hashing engine (codenamed "Greta"); QUERY (FIX) Incorrect query evaluation due to faulty detection of optional patterns (v0.6.5 regression); (FIX) Missing update of PatternGroupID information after adding patterns and filters to a pattern group; (FIX) Ensure Context information of a pattern is not null before trying to collect it as variable; (MISC) Changed semantics of Context information of a pattern: if not provided, it will be ignored; (MISC...Application Architecture Guidelines: App Architecture Guidelines 3.0.8: This document is an overview of software qualities, principles, patterns, practices, tools and libraries.C# Intellisense for Notepad++: Release v1.0.7.2: - smart indentation - document formatting To avoid the DLLs getting locked by OS use MSI file for the installation.BlackJumboDog: Ver5.9.6: 2013.09.30 Ver5.9.6 (1)SMTP???????、???????????????? (2)WinAPI??????? (3)Web???????CGI???????????????????????Microsoft Ajax Minifier: Microsoft Ajax Minifier 5.2: Mostly internal code tweaks. added -nosize switch to turn off the size- and gzip-calculations done after minification. removed the comments in the build targets script for the old AjaxMin build task (discussion #458831). Fixed an issue with extended Unicode characters encoded inside a string literal with adjacent \uHHHH\uHHHH sequences. Fixed an IndexOutOfRange exception when encountering a CSS identifier that's a single underscore character (_). 
In previous builds, the net35 and net20...AJAX Control Toolkit: September 2013 Release: AJAX Control Toolkit Release Notes - September 2013 Release (Updated) Version 7.1002September 2013 release of the AJAX Control Toolkit. AJAX Control Toolkit .NET 4.5 – AJAX Control Toolkit for .NET 4.5 and sample site (Recommended). AJAX Control Toolkit .NET 4 – AJAX Control Toolkit for .NET 4 and sample site (Recommended). AJAX Control Toolkit .NET 3.5 – AJAX Control Toolkit for .NET 3.5 and sample site (Recommended). Important UpdateThis release has been updated to fix two issues: Upda...WDTVHubGen - Adds Metadata, thumbnails and subtitles to WDTV Live Hubs: WDTVHubGen.v2.1.4.apifix-alpha: WDTVHubGen.v2.1.4.apifix-alpha is for testers to figure out if we got the NEW api plugged in ok. thanksVisual Log Parser: VisualLogParser: Portable Visual Log Parser for Dotnet 4.0AudioWordsDownloader: AudioWordsDownloader 1.1 build 88: New features list of words (mp3 files) is available upon typing when a download path is defined list of download paths is added paths history settings added Bug fixed case mismatch in word search field fixed path not exist bug fixed when history has been used path, when filled from dialog, not stored refresh autocomplete list after path change word sought is deleted when path is changed at the end sought word list is deleted word list not refreshed download ends. word lis...Wsus Package Publisher: Release v1.3.1309.28: Fix a bug, where WPP crash when running on a computer where Windows was installed in another language than Fr, En or De, and launching the Update Creation Wizard. Fix a bug, where WPP crash if some Multi-Thread job are launch with more than 64 items. Add a button to abort "Install This Update" wizard. Allow WPP to remember which columns are shown last time. Make URL clickable on the Update Information Tab. Add a new feature, when Double-Clicking on an update, the default action exec...Tweetinvi a friendly Twitter C# API: Alpha 0.8.3.0: Version 0.8.3.0 emphasis on the FIlteredStream and ease how to manage Exceptions that can occur due to the network or any other issue you might encounter. Will be available through nuget the 29/09/2013. FilteredStream Features provided by the Twitter Stream API - Ability to track specific keywords - Ability to track specific users - Ability to track specific locations Additional features - Detect the reasons the tweet has been retrieved from the Filtered API. You have access to both the ma...WPF Extended DataGrid: WPF Extended DataGrid 2.0.0.4 binaries: Improved performance of GroupByAcDown?????: AcDown????? v4.5: ??●AcDown??????????、??、??、???????。????,????,?????????????????????????。???????????Acfun、????(Bilibili)、??、??、YouTube、??、???、??????、SF????、????????????。 ●??????AcPlay?????,??????、????????????????。 ● AcDown???????C#??,????.NET Framework 2.0??。?????"Acfun?????"。 ??v4.5 ???? AcPlay????????v3.5 ????????,???????????30% ?? ???????GoodManga.net???? ?? ?????????? ?? ??Acfun?????????? ??Bilibili??????????? ?????????flvcd???????? ??SfAcg????????????? ???????????? ???????????????? ????32...New ProjectsAnalysis Services Activity Viewer 2012: This project was created from a blog post I wrote back in February 2013 http://redphoenix.me/2013/08/22/upgrade-activity-viewer-2008-to-sql-server-2012/ARYSTA SYSTEM: Notebook System A/R Sales * Sales Order * Delivery * Sales Invoice * Sales Return * A/R Credit Memo * A/R Debit Memo A/P Purchasing * Purchase Order BEWELL SYSTEM: This system is design for Bewell-C Incorporated. 
Project Manager: Ben Penafiel System Engineer: Jhay Camba Report Designer: William Damasco Caching IOC: Caching IOC Container This will cache any Interface return results making it really easy to introduce caching to your solution.Grupo Anclita: Trabajo Final Laboratorio 4Halcyonic Skin by HTML5-UP - for DNN: This skin was converted for use in DNN by Michael Doxsey. Original HTML template designed and built by HTML5-UP: http://html5up.net/halcyonic/ ProPro: project about other projectsS3Unlock: Unlocks S3 agents that are stuck installing.sdfsdlfsdlkj01: dsfdffsdSerendipity - Responsive Skin for DNN: This is an HTML Template by Elemis, converted for use in DNN. Elemis URL: http://elemisfreebies.com/premium-themes/ Free for personal use and ed. purposes only.SmartSystemMenu: Smart system menu for you.SP 2013 Custom MultiTenant Adminstration: This Project creates a custom SP 2013 Tenant Admin site template covering limitations of existing tenant admin site.Telephasic Skin by HTML5-UP - for DNN: This skin was converted for use in DNN by Michael Doxsey. Original HTML template designed and built by HTML5-UP: http://html5up.net/telephasic/TelerikedIn: Social web app trying to look and feel like the famous LinkedInTerra 2: Generador de personajes para el juego de rol Terra2tsydev01: TextLineUpdateVds2465 Parser: This Project is about implementing a Parser for the Vds2465 protocol. It includes parsing and generating Vds2465 telegram bytes.Your Appliances: Empresa De ElectrodomesticosZeroFour by HTML5-UP - for DNN: This skin was converted for use in DNN by Michael Doxsey. Original HTML template designed and built by HTML5-UP: http://html5up.net/zerofour.

    Read the article

  • CodePlex Daily Summary for Sunday, December 12, 2010

    CodePlex Daily Summary for Sunday, December 12, 2010Popular ReleasesWii Backup Fusion: Wii Backup Fusion 0.9 Beta: - Aqua or brushed metal style for Mac OS X - Shows selection count beside ID - Game list selection mode via settings - Compare Files <-> WBFS game lists - Verify game images/DVD/WBFS - WIT command line for log (via settings) - Cancel possibility for loading games process - Progress infos while loading games - Localization for dates - UTF-8 support - Shortcuts added - View game infos in browser - Transfer infos for log - All transfer routines rewritten - Extract image from image/WBFS - Support....NETTER Code Starter Pack: v1.0.beta: '.NETTER Code Starter Pack ' contains a gallery of Visual Studio 2010 solutions leveraging latest and new technologies and frameworks based on Microsoft .NET Framework. Each Visual Studio solution included here is focused to provide a very simple starting point for cutting edge development technologies and framework, using well known Northwind database (for database driven scenarios). The current release of this project includes starter samples for the following technologies: ASP.NET Dynamic...WPF Multiple Document Interface (MDI): Beta Release v1.1: WPF.MDI is a library to imitate the traditional Windows Forms Multiple Document Interface (MDI) features in WPF. This is Beta release, means there's still work to do. Please provide feedback, so next release will be better. Features: Position dependency property MdiLayout dependency property Menu dependency property Ctrl + F4, Ctrl + Tab shortcuts should work Behavior: don’t allow negative values for MdiChild position minimized windows: remember position, tile multiple windows, ...EnhSim: EnhSim 2.2.1 ALPHA: 2.2.1 ALPHAThis release adds in the changes for 4.03a. at level 85 To use this release, you must have the Microsoft Visual C++ 2010 Redistributable Package installed. This can be downloaded from http://www.microsoft.com/downloads/en/details.aspx?FamilyID=A7B7A05E-6DE6-4D3A-A423-37BF0912DB84 To use the GUI you must have the .NET 4.0 Framework installed. This can be downloaded from http://www.microsoft.com/downloads/en/details.aspx?FamilyID=9cfb2d51-5ff4-4491-b0e5-b386f32c0992 - Updated th...NuGet (formerly NuPack): NuGet 1.0 Release Candidate: NuGet is a free, open source developer focused package management system for the .NET platform intent on simplifying the process of incorporating third party libraries into a .NET application during development. This release is a Visual Studio 2010 extension and contains the the Package Manager Console and the Add Package Dialog. This new build targets the newer feed (http://go.microsoft.com/fwlink/?LinkID=206669) and package format. See http://nupack.codeplex.com/documentation?title=Nuspe...Free Silverlight & WPF Chart Control - Visifire: Visifire Silverlight, WPF Charts v3.6.5 Released: Hi, Today we are releasing final version of Visifire, v3.6.5 with the following new feature: * New property AutoFitToPlotArea has been introduced in DataSeries. AutoFitToPlotArea will bring bubbles inside the PlotArea in order to avoid clipping of bubbles in bubble chart. You can visit Visifire documentation to know more. http://www.visifire.com/visifirechartsdocumentation.php Also this release includes few bug fixes: * Chart threw exception while adding new Axis in Chart using Vi...PHPExcel: PHPExcel 1.7.5 Production: DonationsDonate via PayPal via PayPal. If you want to, we can also add your name / company on our Donation Acknowledgements page. 
PEAR channelWe now also have a full PEAR channel! Here's how to use it: New installation: pear channel-discover pear.pearplex.net pear install pearplex/PHPExcel Or if you've already installed PHPExcel before: pear upgrade pearplex/PHPExcel The official page can be found at http://pearplex.net. Want to contribute?Please refer the Contribute page.DNN Simple Article: DNNSimpleArticle Module V00.00.03: The initial release of the DNNSimpleArticle module (labelled V00.00.03) There are C# and VB versions of this module for this initial release. No promises that going forward there will be packages for both languages provided for future releases. This module provides the following functionality Create and display articles Display a paged list of articles Articles get created as DNN ContentItems Categorization provided through DNN Taxonomy SEO functionality for article display providi...UOB & ME: UOB_ME 2.5: latest versionAutoLoL: AutoLoL v1.4.3: AutoLoL now supports importing the build pages from Mobafire.com as well! Just insert the url to the build and voila. (For example: http://www.mobafire.com/league-of-legends/build/unforgivens-guide-how-to-build-a-successful-mordekaiser-24061) Stable release of AutoChat (It is still recommended to use with caution and to read the documentation) It is now possible to associate *.lolm files with AutoLoL to quickly open them The selected spells are now displayed in the masteries tab for qu...SubtitleTools: SubtitleTools 1.2: - Added auto insertion of RLE (RIGHT-TO-LEFT EMBEDDING) Unicode character for the RTL languages. - Fixed delete rows issue.PHP Manager for IIS: PHP Manager 1.1 for IIS 7: This is a final stable release of PHP Manager 1.1 for IIS 7. This is a minor incremental release that contains all the functionality available in 53121 plus additional features listed below: Improved detection logic for existing PHP installations. Now PHP Manager detects the location to php.ini file in accordance to the PHP specifications Configuring date.timezone. PHP Manager can automatically set the date.timezone directive which is required to be set starting from PHP 5.3 Ability to ...Algorithmia: Algorithmia 1.1: Algorithmia v1.1, released on December 8th, 2010.SuperSocket, an extensible socket application framework: SuperSocket 1.0 SP1: Fixed bugs: fixed a potential bug that the running state hadn't been updated after socket server stopped fixed a synchronization issue when clearing timeout session fixed a bug in ArraySegmentList fixed a bug on getting configuration value .NET Version: 3.5 sp1My Web Pages Starter Kit: 1.3.1 Production Release (Security HOTFIX): Due to a critical security issue, it's strongly advised to update the My Web Pages Starter Kit to this version. Possible attackers could misuse the image upload to transmit any type of file to the website. If you already have a running version of My Web Pages Starter Kit 1.3.0, you can just replace the ftb.imagegallery.aspx file in the root directory with the one attached to this release.ASP.NET MVC Project Awesome (jQuery Ajax helpers): 1.4: A rich set of helpers (controls) that you can use to build highly responsive and interactive Ajax-enabled Web applications. These helpers include Autocomplete, AjaxDropdown, Lookup, Confirm Dialog, Popup Form, Popup and Pager new stuff: popup WhiteSpaceFilterAttribute tested on mozilla, safari, chrome, opera, ie 9b/8/7/6nopCommerce. 
ASP.NET open source shopping cart: nopCommerce 1.90: To see the full list of fixes and changes please visit the release notes page (http://www.nopCommerce.com/releasenotes.aspx).SharePoint Packager: SharePoint Packager release: Full source code and precompiled / ILmerged exeTweetSharp: TweetSharp v2.0.0.0 - Preview 4: Documentation for this release may be found at http://tweetsharp.codeplex.com/wikipage?title=UserGuide&referringTitle=Documentation. Note: This code is currently preview quality. Preview 4 ChangesReintroduced fluent interface support via satellite assembly Added entities support, entity segmentation, and ITweetable/ITweeter interfaces for client development Numerous fixes reported by preview users Preview 3 ChangesNumerous fixes and improvements to core engine Twitter API coverage: a...myCollections: Version 1.2: New in version 1.2: Big performance improvement. New Design (Added Outlook style View, New detail view, New Groub By...) Added Sort by Media Added Manage Movie Studio Zoom preference is now saved. Media name are now editable. Added Portuguese version You can now Hide details panel Add support for FLAC tags You can now imports books from BibTex Xml file BugFixingNew ProjectsABR Manager: Fast and small Adobe brushes preset files manager developed in native c++.Aplicativo lembretes: Com esse aplicativo você poderá agentar atividades de forma que quando o dia e a hora do compromisso marcado chegar, o aplicativo ira lembra-lo com um sinal sonoro e visual e até poderá desligar seu pc. O programa ficará em execução de forma discreta no canto direito superiorBrogue: Grevious Sword of Carnage: * 24 damage * makes annoying sound on hitsC# LZF Real-Time Compression Algorithm: Improved C# LZF Compressor, a very small and extremely efficient real-time data compression library. The compression algorithm is extremely fast. Well suited for real-time data intensive applications (e.g.: packet streaming, embedded devices...).GIFT: gift appHeroBeastCodeProject: create a HeroBeastCodeProjects project,opensource my codeiFinance: The wpf/sl solution for personal finance manager.MailOnSocketChange: Send an alert via email, if a certain socket/TCP-connection-changes state on the PC running the MailOnSocketChange applicationMath Teacher for kids: This application will help your kids learn and practice Math. midnc: Proyecto en silverlight para el control interno de registro de actividades y eventosMusicWho: MusicWho compose music. It use biological neuron firing signals to generate music. so, what will the brain sound like? we'll see.Nabaztag Enterprise Services: The goal of the Nabaztag Enterprise Services is to come up with a free .NET implementation of the Nabaztag server. Additionally we think of building a whole platform around the core services using the newsest an coolest .NET technologies.pkEditor: pkEditor is a Poketscript or Pokescript script generator and editor.Second Block: Second Block is an block-based 3D multiplayer game.Share my speed: Coding4Fun Window Phone 7 "Share my speed" applicationSharePoint Image Resizer: Image Resizer allows to create custom policies to control image size of pictures in WSS 3.0/SharePoint 2007.ShoutStreamSource: ShoutStreamSource provides you an implementation of the MediaStreamSource of the ShoutCast protocol for the Windows Phone 7. 
It makes it possible to play a ShoutCast stream using a MediaElement on the phone.SSIS Reporting Pack: A suite of SQL Server Reporting Services reports that operate over the SSIS catalog in SQL Server code-named DenaliSunshine2011: testTrabalhando com Functoid Table Looping: Usado em conjunto com o Functoid “Table Extractor”, serve para criar estruturas de any registros podendo conter valores de campos, valores constantes, e valores que resultam de outros functoids. VKontakte API SDK: SDK for work with most popular russian social site VKontakte.ruwho's who: iYasmin: Yasmin - A WPF based Anime Database.YiDeSOFT: YiDe is EzDesk~!?????????? ?????: ?????????? ????? ?????????? ? ?????????????? ????????? ??????? ?????????????????? ? ????????? ???????

    Read the article

  • Filtering List Data with a jQuery-searchFilter Plugin

    - by Rick Strahl
    When dealing with list-based data on HTML forms, filtering that data down based on a search text expression is an extremely useful feature. We’re used to search boxes on just about anything these days, and HTML forms should be no different. In this post I’ll describe how you can easily filter a list down to just the elements that match text typed into a search box. It’s a pretty simple task and it’s super easy to do, but I get a surprising number of comments from developers I work with who are surprised how easy it is to hook up this sort of behavior, so I thought it was worth a blog post. But Angular does that out of the box, right? These days it seems everybody is raving about Angular and the rich SPA features it provides. One of the cool features of Angular is the ability to do drop-dead simple filters where you can specify a filter expression as part of a looping construct and automatically have that filter applied so that only items that match the filter show. I think Angular has single-handedly elevated search filters to first-rate, front-row status because it’s so easy. I love using Angular myself, but Angular is not a generic solution to problems like this. For one thing, using Angular requires you to render the list data with Angular – if you have data that is server-rendered or static, then Angular doesn’t work. Not all applications are client-side rendered SPAs – not by a long shot – nor do all applications need to become SPAs. Long story short, it’s pretty easy to achieve text filtering effects using jQuery (or plain JavaScript for that matter) with just a little bit of work. Let’s take a look at an example. Why Filter? Client-side filtering is a very useful tool that can make it drastically easier to sift through data displayed in client-side lists. In my applications I like to display scrollable lists that contain a reasonably large amount of data, rather than the classic paging-style displays, which tend to be painful to use. So I often display 50 or so items per ‘page’ and it’s extremely useful to be able to filter this list down. Here’s an example in my Time Trakker application where I can quickly glance at various common views of my time entries. I can see Recent Entries, Unbilled Entries, Open Entries, etc. and filter those down by individual customers and so forth. Each of these lists tends to be a few pages’ worth of scrollable content. The following screen shot shows a filtered view of Recent Entries that match the search keyword of CellPage: As you can see in this animated GIF, the filter is applied as you type, displaying only entries that match the text anywhere inside the text of each of the list items. This is an immediately useful feature for just about any list display and adds significant value. A few lines of jQuery The good news is that this is trivially simple using jQuery. To get an idea what this looks like, here’s the relevant page layout showing only the list layout (the search box is a plain text input with the id txtSearchPage):<div id="divItemWrapper"> <div class="time-entry"> <div class="time-entry-right"> May 11, 2014 - 7:20pm<br /> <span style='color:steelblue'>0h:40min</span><br /> <a id="btnDeleteButton" href="#" class="hoverbutton" data-id="16825"> <img src="images/remove.gif" /> </a> </div> <div class="punchedoutimg"></div> <b><a href='/TimeTrakkerWeb/punchout/16825'>Project Housekeeping</a></b><br /> <small><i>Sawgrass</i></small> </div> ...
more items here </div> So we have a search box txtSearchPage and a bunch of DIV elements with a .time-entry CSS class attached that make up the list of items displayed. Hooking up the search filter with jQuery is merely a matter of a few lines of code attached to the .keyup() event handler: <script type="text/javascript"> $("#txtSearchPage").keyup(function() { var search = $(this).val(); $(".time-entry").show(); if (search) $(".time-entry").not(":contains(" + search + ")").hide(); }); </script> The idea here is pretty simple: You capture the keystroke in the search box and grab the search text. Using that search text you first make all items visible and then hide all the items that don’t match. Since DOM changes are applied after a method finishes execution in JavaScript, the show and hide operations are effectively batched up, so the view changes only to the final list rather than flashing the whole list and then removing items on a slow machine. You get the desired effect of the list showing the items in question. Case Insensitive Filtering But there is one problem with the solution above: The jQuery :contains filter is case-sensitive, so your search text has to match expressions explicitly, which is a bit cumbersome when typing. In the screen capture above I actually cheated – I used a custom filter that provides case-insensitive contains behavior. jQuery makes it really easy to create custom query filters, and so I created one called containsNoCase. Here’s the implementation of this custom filter:$.expr[":"].containsNoCase = function(el, i, m) { var search = m[3]; if (!search) return false; return new RegExp(search, "i").test($(el).text()); }; This filter can be added anywhere page-level JavaScript runs – in page script or a separately loaded .js file.  The filter basically extends jQuery with a : expression. Filters get passed a tokenized array that contains the expression. In this case, m[3] contains the search text from inside the brackets. A filter basically looks at the active element that is passed in and then can return true or false to determine whether the item should be matched. Here I check a regular expression that looks for the search text in the element’s text.
So the code for the filter now changes to:$(".time-entry").not(":containsNoCase(" + search + ")").hide(); And voila – you now have a case-insensitive search. You can play around with another, simpler example using this Plunker: http://plnkr.co/edit/hDprZ3IlC6uzwFJtgHJh?p=preview Wrapping it up in a jQuery Plug-in To make this even easier to use, and so that you can more easily remember how to use this search-type filter, we can wrap this logic into a small jQuery plug-in:(function($, undefined) { $.expr[":"].containsNoCase = function(el, i, m) { var search = m[3]; if (!search) return false; return new RegExp(search, "i").test($(el).text()); }; $.fn.searchFilter = function(options) { var opt = $.extend({ // target selector targetSelector: "", // number of characters before search is applied charCount: 1 }, options); return this.each(function() { var $el = $(this); $el.keyup(function() { var search = $(this).val(); var $target = $(opt.targetSelector); $target.show(); if (search && search.length >= opt.charCount) $target.not(":containsNoCase(" + search + ")").hide(); }); }); }; })(jQuery); Using this plug-in now becomes a one-liner:$("#txtSearchPagePlugin").searchFilter({ targetSelector: ".time-entry", charCount: 2}) You attach the .searchFilter() plug-in to the text box you are searching and specify a targetSelector that is to be filtered. Optionally, you can specify a character count at which the filter kicks in, since filtering on a single character is typically not very useful. Summary This is a very easy solution to a cool user interface feature your users will thank you for. Search filtering is a simple but highly effective user interface feature, and as you’ve seen in this post it’s very simple to create this behavior with just a few lines of jQuery code. While all the cool kids are doing Angular these days, jQuery is still useful in many applications that don’t embrace the ‘everything generated in JavaScript’ paradigm. I hope this jQuery plug-in or just the raw jQuery will be useful to some of you… Resources Example on Plunker © Rick Strahl, West Wind Technologies, 2005-2014. Posted in jQuery, HTML5, JavaScript

    Read the article

  • Toon shader with Texture. Can this be optimized?

    - by Alex
    I am quite new to OpenGL. After long trial and error, I have managed to integrate Nehe's Cel-Shading rendering with my Model loaders, and have them drawn using the Toon shade and outline AND their original texture at the same time. The result is actually a very nice Cel Shading effect of the model texture, but it is halving the speed of the program; it's very slow even with just 3 models on screen... Since the result was kind of hacked together, I am thinking that maybe I am performing some extra steps or extra rendering tasks that are not needed and are slowing down the game. Something unnecessary that maybe you guys could spot? Both the MD2 and 3DS loaders have an initToon() function called upon creation to load the shader: initToon(){ int i; // Looping Variable ( NEW ) char Line[255]; // Storage For 255 Characters ( NEW ) float shaderData[32][3]; // Storage For The 96 Shader Values ( NEW ) FILE *In = fopen ("Shader.txt", "r"); // Open The Shader File ( NEW ) if (In) // Check To See If The File Opened ( NEW ) { for (i = 0; i < 32; i++) // Loop Through The 32 Greyscale Values ( NEW ) { if (feof (In)) // Check For The End Of The File ( NEW ) break; fgets (Line, 255, In); // Get The Current Line ( NEW ) shaderData[i][0] = shaderData[i][1] = shaderData[i][2] = float(atof (Line)); // Copy Over The Value ( NEW ) } fclose (In); // Close The File ( NEW ) } else return false; // It Went Horribly Horribly Wrong ( NEW ) glGenTextures (1, &shaderTexture[0]); // Get A Free Texture ID ( NEW ) glBindTexture (GL_TEXTURE_1D, shaderTexture[0]); // Bind This Texture. From Now On It Will Be 1D ( NEW ) // For Crying Out Loud Don't Let OpenGL Use Bi/Trilinear Filtering! ( NEW ) glTexParameteri (GL_TEXTURE_1D, GL_TEXTURE_MAG_FILTER, GL_NEAREST); glTexParameteri (GL_TEXTURE_1D, GL_TEXTURE_MIN_FILTER, GL_NEAREST); glTexImage1D (GL_TEXTURE_1D, 0, GL_RGB, 32, 0, GL_RGB , GL_FLOAT, shaderData); // Upload ( NEW ) } This is the drawing for the animated MD2 model: void MD2Model::drawToon() { float outlineWidth = 3.0f; // Width Of The Lines ( NEW ) float outlineColor[3] = { 0.0f, 0.0f, 0.0f }; // Color Of The Lines ( NEW ) // ORIGINAL PART OF THE FUNCTION //Figure out the two frames between which we are interpolating int frameIndex1 = (int)(time * (endFrame - startFrame + 1)) + startFrame; if (frameIndex1 > endFrame) { frameIndex1 = startFrame; } int frameIndex2; if (frameIndex1 < endFrame) { frameIndex2 = frameIndex1 + 1; } else { frameIndex2 = startFrame; } MD2Frame* frame1 = frames + frameIndex1; MD2Frame* frame2 = frames + frameIndex2; //Figure out the fraction that we are between the two frames float frac = (time - (float)(frameIndex1 - startFrame) / (float)(endFrame - startFrame + 1)) * (endFrame - startFrame + 1); // I ADDED THESE FROM NEHE'S TUTORIAL FOR FIRST PASS (TOON SHADE) glHint (GL_LINE_SMOOTH_HINT, GL_NICEST); // Use The Good Calculations ( NEW ) glEnable (GL_LINE_SMOOTH); // Cel-Shading Code // glEnable (GL_TEXTURE_1D); // Enable 1D Texturing ( NEW ) glBindTexture (GL_TEXTURE_1D, shaderTexture[0]); // Bind Our Texture ( NEW ) glColor3f (1.0f, 1.0f, 1.0f); // Set The Color Of The Model ( NEW ) // ORIGINAL DRAWING CODE //Draw the model as an interpolation between the two frames glBegin(GL_TRIANGLES); for(int i = 0; i < numTriangles; i++) { MD2Triangle* triangle = triangles + i; for(int j = 0; j < 3; j++) { MD2Vertex* v1 = frame1->vertices + triangle->vertices[j]; MD2Vertex* v2 = frame2->vertices + triangle->vertices[j]; Vec3f pos = v1->pos * (1 - frac) + v2->pos * frac; Vec3f normal = v1->normal
* (1 - frac) + v2->normal * frac; if (normal[0] == 0 && normal[1] == 0 && normal[2] == 0) { normal = Vec3f(0, 0, 1); } glNormal3f(normal[0], normal[1], normal[2]); MD2TexCoord* texCoord = texCoords + triangle->texCoords[j]; glTexCoord2f(texCoord->texCoordX, texCoord->texCoordY); glVertex3f(pos[0], pos[1], pos[2]); } } glEnd(); // ADDED THESE FROM NEHE'S FOR SECOND PASS (OUTLINE) glDisable (GL_TEXTURE_1D); // Disable 1D Textures ( NEW ) glEnable (GL_BLEND); // Enable Blending ( NEW ) glBlendFunc(GL_SRC_ALPHA,GL_ONE_MINUS_SRC_ALPHA); // Set The Blend Mode ( NEW ) glPolygonMode (GL_BACK, GL_LINE); // Draw Backfacing Polygons As Wireframes ( NEW ) glLineWidth (outlineWidth); // Set The Line Width ( NEW ) glCullFace (GL_FRONT); // Don't Draw Any Front-Facing Polygons ( NEW ) glDepthFunc (GL_LEQUAL); // Change The Depth Mode ( NEW ) glColor3fv (&outlineColor[0]); // Set The Outline Color ( NEW ) // HERE I AM PARSING THE VERTICES AGAIN (NOT IN THE ORIGINAL FUNCTION) FOR THE OUTLINE AS PER NEHE'S TUT glBegin (GL_TRIANGLES); // Tell OpenGL What We Want To Draw for(int i = 0; i < numTriangles; i++) { MD2Triangle* triangle = triangles + i; for(int j = 0; j < 3; j++) { MD2Vertex* v1 = frame1->vertices + triangle->vertices[j]; MD2Vertex* v2 = frame2->vertices + triangle->vertices[j]; Vec3f pos = v1->pos * (1 - frac) + v2->pos * frac; Vec3f normal = v1->normal * (1 - frac) + v2->normal * frac; if (normal[0] == 0 && normal[1] == 0 && normal[2] == 0) { normal = Vec3f(0, 0, 1); } glNormal3f(normal[0], normal[1], normal[2]); MD2TexCoord* texCoord = texCoords + triangle->texCoords[j]; glTexCoord2f(texCoord->texCoordX, texCoord->texCoordY); glVertex3f(pos[0], pos[1], pos[2]); } } glEnd (); // Tell OpenGL We've Finished glDepthFunc (GL_LESS); // Reset The Depth-Testing Mode ( NEW ) glCullFace (GL_BACK); // Reset The Face To Be Culled ( NEW ) glPolygonMode (GL_BACK, GL_FILL); // Reset Back-Facing Polygon Drawing Mode ( NEW ) glDisable (GL_BLEND); } Whereas this is the drawToon function in the 3DS loader: void Model_3DS::drawToon() { float outlineWidth = 3.0f; // Width Of The Lines ( NEW ) float outlineColor[3] = { 0.0f, 0.0f, 0.0f }; // Color Of The Lines ( NEW ) //ORIGINAL CODE if (visible) { glPushMatrix(); // Move the model glTranslatef(pos.x, pos.y, pos.z); // Rotate the model glRotatef(rot.x, 1.0f, 0.0f, 0.0f); glRotatef(rot.y, 0.0f, 1.0f, 0.0f); glRotatef(rot.z, 0.0f, 0.0f, 1.0f); glScalef(scale, scale, scale); // Loop through the objects for (int i = 0; i < numObjects; i++) { // Enable texture coordinates, normals, and vertices arrays if (Objects[i].textured) glEnableClientState(GL_TEXTURE_COORD_ARRAY); if (lit) glEnableClientState(GL_NORMAL_ARRAY); glEnableClientState(GL_VERTEX_ARRAY); // Point them to the objects arrays if (Objects[i].textured) glTexCoordPointer(2, GL_FLOAT, 0, Objects[i].TexCoords); if (lit) glNormalPointer(GL_FLOAT, 0, Objects[i].Normals); glVertexPointer(3, GL_FLOAT, 0, Objects[i].Vertexes); // Loop through the faces as sorted by material and draw them for (int j = 0; j < Objects[i].numMatFaces; j ++) { // Use the material's texture Materials[Objects[i].MatFaces[j].MatIndex].tex.Use(); // AFTER THE TEXTURE IS APPLIED I INSERT THE TOON FUNCTIONS HERE (FIRST PASS) glHint (GL_LINE_SMOOTH_HINT, GL_NICEST); // Use The Good Calculations ( NEW ) glEnable (GL_LINE_SMOOTH); // Cel-Shading Code // glEnable (GL_TEXTURE_1D); // Enable 1D Texturing ( NEW ) glBindTexture (GL_TEXTURE_1D, shaderTexture[0]); // Bind Our Texture ( NEW ) glColor3f (1.0f, 1.0f, 1.0f); // Set The Color Of The Model ( NEW
) glPushMatrix(); // Move the model glTranslatef(Objects[i].pos.x, Objects[i].pos.y, Objects[i].pos.z); // Rotate the model glRotatef(Objects[i].rot.z, 0.0f, 0.0f, 1.0f); glRotatef(Objects[i].rot.y, 0.0f, 1.0f, 0.0f); glRotatef(Objects[i].rot.x, 1.0f, 0.0f, 0.0f); // Draw the faces using an index to the vertex array glDrawElements(GL_TRIANGLES, Objects[i].MatFaces[j].numSubFaces, GL_UNSIGNED_SHORT, Objects[i].MatFaces[j].subFaces); glPopMatrix(); } glDisable (GL_TEXTURE_1D); // Disable 1D Textures ( NEW ) // THIS IS AN ADDED SECOND PASS AT THE VERTICES FOR THE OUTLINE glEnable (GL_BLEND); // Enable Blending ( NEW ) glBlendFunc(GL_SRC_ALPHA,GL_ONE_MINUS_SRC_ALPHA); // Set The Blend Mode ( NEW ) glPolygonMode (GL_BACK, GL_LINE); // Draw Backfacing Polygons As Wireframes ( NEW ) glLineWidth (outlineWidth); // Set The Line Width ( NEW ) glCullFace (GL_FRONT); // Don't Draw Any Front-Facing Polygons ( NEW ) glDepthFunc (GL_LEQUAL); // Change The Depth Mode ( NEW ) glColor3fv (&outlineColor[0]); // Set The Outline Color ( NEW ) for (int j = 0; j < Objects[i].numMatFaces; j ++) { glPushMatrix(); // Move the model glTranslatef(Objects[i].pos.x, Objects[i].pos.y, Objects[i].pos.z); // Rotate the model glRotatef(Objects[i].rot.z, 0.0f, 0.0f, 1.0f); glRotatef(Objects[i].rot.y, 0.0f, 1.0f, 0.0f); glRotatef(Objects[i].rot.x, 1.0f, 0.0f, 0.0f); // Draw the faces using an index to the vertex array glDrawElements(GL_TRIANGLES, Objects[i].MatFaces[j].numSubFaces, GL_UNSIGNED_SHORT, Objects[i].MatFaces[j].subFaces); glPopMatrix(); } glDepthFunc (GL_LESS); // Reset The Depth-Testing Mode ( NEW ) glCullFace (GL_BACK); // Reset The Face To Be Culled ( NEW ) glPolygonMode (GL_BACK, GL_FILL); // Reset Back-Facing Polygon Drawing Mode ( NEW ) glDisable (GL_BLEND); glPopMatrix(); } Finally this is the tex.Use() function that loads a BMP texture and somehow gets blended perfectly with the Toon shading void GLTexture::Use() { glEnable(GL_TEXTURE_2D); // Enable texture mapping glBindTexture(GL_TEXTURE_2D, texture[0]); // Bind the texture as the current one }

    Read the article

  • CodePlex Daily Summary for Wednesday, June 26, 2013

    CodePlex Daily Summary for Wednesday, June 26, 2013Popular ReleasesNaked Objects: Naked Objects Release 5.5.0: This release includes a number of significant improvements to the usability of the UI, some of which involve new programming conventions or attributes: Action dialogs now appear as pop-up modal dialogs instead of as a new page; query-only actions have an Apply as well as an OK button. See https://nakedobjects.codeplex.com/workitem/175 When a reference object is expanded in-line there is a button to jump straight to an Edit view of that object see https://nakedobjects.codeplex.com/workitem/1...VeraCrypt: VeraCrypt version 1.0b: Changes since version 1.0a :Enhance RIPEMD160 implementation in BootLoaded by using the compiler uint32 type Don't position legacy flag in volume header for newer VeraCrypt releasesPlayer Framework by Microsoft: Player Framework for Windows 8 and WP8 (v1.3 beta): Preview: New MPEG DASH adaptive streaming plugin for WAMS. Preview: New Ultraviolet CFF plugin. Preview: New WP7 version with WP8 compatibility. (source code only) Source code is now available via CodePlex Git Misc bug fixes and improvements: WP8 only: Added optional fullscreen and mute buttons to default xaml JS only: protecting currentTime from returning infinity. Some videos would cause currentTime to be infinity which could cause errors in plugins expecting only finite values. (...SSIS DQS Matching Transformation: SSIS DQS Matching Transformation 1.0: Initial release of the SSIS DQS Matching Component.AssaultCube Reloaded: 2.5.8: SERVER OWNERS: note that the default maprot has changed once again. Linux has Ubuntu 11.10 32-bit precompiled binaries and Ubuntu 10.10 64-bit precompiled binaries, but you can compile your own as it also contains the source. If you are using Mac or other operating systems, please wait while we continue to try to package for those OSes. Or better yet, try to compile it. If it fails, download a virtual machine. The server pack is ready for both Windows and Linux, but you might need to compi...Compare .NET Objects: Version 1.7.2.0: If you like it, please rate it. :) Performance Improvements Fix for deleted row in a data table Added ability to ignore the collection order Fix for Ignoring by AttributesMicrosoft Ajax Minifier: Microsoft Ajax Minifier 4.95: update parser to allow for CSS3 calc( function to nest. add recognition of -pponly (Preprocess-Only) switch in AjaxMinManifestTask build task. Fix crashing bug in EXE when processing a manifest file using the -xml switch and an error message needs to be displayed (like a missing input file). Create separate Clean and Bundle build tasks for working with manifest files (AjaxMinManifestCleanTask and AjaxMinBundleTask). 
Removed the IsCleanOperation from AjaxMinManifestTask -- use AjaxMinMan...VG-Ripper & PG-Ripper: VG-Ripper 2.9.44: changes NEW: Added Support for "ImgChili.net" links FIXED: Auto UpdaterDocument.Editor: 2013.25: What's new for Document.Editor 2013.25: Improved Spell Check support Improved User Interface Minor Bug Fix's, improvements and speed upsStyleMVVM: 3.0.2: This is a minor feature and bug fix release Features: ExportWhenDebuggerIsAttacedAttribute - new attribute that marks an attribute to only be exported when the debugger is attahced InjectedFilterAttributeFilterProvider - new Attribute Filter provider for MVC that injects the attributes Performance Improvements - minor speed improvements all over, and Import collections is now 50% faster Bug Fixes: Open Generic Constraints are now respected when finding exports Fix for fluent registrat...WPF Composites: Version 4.3.0: In this Beta release, I broke my code out into two separate projects. There is a core FasterWPF.dll with the minimal required functionality. This can run with only the Aero.dll and the Rx .dll's. Then, I have a FasterWPFExtras .dll that requires and supports the Extended WPF Toolkit™ Community Edition V 1.9.0 (including Xceed DataGrid) and the Thriple .dll. This is for developers who want more . . . Finally, you may notice the other OPTIONAL .dll's available in the download such as the Dyn...Channel9's Absolute Beginner Series: Windows Phone 8: Entire source code for the Channel 9 series, Windows Phone 8 Development for Absolute Beginners.Indent Guides for Visual Studio: Indent Guides v13: ImportantThis release does not support Visual Studio 2010. The latest stable release for VS 2010 is v12.1. Version History Changed in v13 Added page width guide lines Added guide highlighting options Fixed guides appearing over collapsed blocks Fixed guides not appearing in newly opened files Fixed some potential crashes Fixed lines going through pragma statements Various updates for VS 2012 and VS 2013 Removed VS 2010 support Changed in v12.1: Fixed crash when unable to start...Fluent Ribbon Control Suite: Fluent Ribbon Control Suite 2.1.0 - Prerelease d: Fluent Ribbon Control Suite 2.1.0 - Prerelease d(supports .NET 3.5, 4.0 and 4.5) Includes: Fluent.dll (with .pdb and .xml) Showcase Application Samples (not for .NET 3.5) Foundation (Tabs, Groups, Contextual Tabs, Quick Access Toolbar, Backstage) Resizing (ribbon reducing & enlarging principles) Galleries (Gallery in ContextMenu, InRibbonGallery) MVVM (shows how to use this library with Model-View-ViewModel pattern) KeyTips ScreenTips Toolbars ColorGallery *Walkthrough (do...Magick.NET: Magick.NET 6.8.5.1001: Magick.NET compiled against ImageMagick 6.8.5.10. Breaking changes: - MagickNET.Initialize has been made obsolete because the ImageMagick files in the directory are no longer necessary. - MagickGeometry is no longer IDisposable. - Renamed dll's so they include the platform name. - Image profiles can now only be accessed and modified with ImageProfile classes. - Renamed DrawableBase to Drawable. - Removed Args part of PathArc/PathCurvetoArgs/PathQuadraticCurvetoArgs classes. The...Keyboard Image Viewer: 1.5.4: Upgraded folder picker dialog to better version on Win7+ Fixed bug that stopped slideshow from looping back to the start of the list. 
Added crash dialog that allows you to see and copy exception stack traces for fixing.DependencyAnalysis (Egg and Gherkin): 0.9.4: - Create Visual Studio 2012 Addin for ad-hoc analysis of your project - Display metrics in a grid - Adequate performing serialization between Addin (Visual Studio process) and AnalysisHost process - Display dependencies as graph (proximity graph) - Create a logo for the project - Constructors of anonymous types no longer hide constructors of the declaring type during "build dependencies" phase - Type descriptors were added multiple times to SubmoduleDescriptor. Types collection, same instanc...Bloomberg API Emulator: Bloomberg API Emulator (v 1.0.5): This version contains the full Java port of my original C# code. I just finished the MarketDataSubscription request type. I will start working on a C++ port of my C# code.Three-Dimensional Maneuver Gear for Minecraft: TDMG 1.1.0.0 for 1.5.2: CodePlex???(????????) ?????????(???1/4) ??????????? ?????????? ???????????(??????????) ??????????????????????? ↑????、?????????????????????(???????) ???、??????????、?????????????????????、????????1.5?????????? Shift+W(????)??????????????????10°、?10°(?????????)???Hyper-V Management Pack Extensions 2012: HyperVMPE2012 (v1.0.1.126): Hyper-V Management Pack Extensions 2012 Beta ReleaseNew Projects.Net Encryption App: This is a C#.Net desktop application that will let users encrypt and de-crypt files with the algorithm of their choosing.AutoSPDocumenter: AutoSPDocumenter utilises PowerShell to document SharePoint farms and provide output in usable formats.Azure Storage Redirector: Azure Storage Redirector. Redirects requests to Global Azure Storage to China Azure Storage. ?????Azure Storage?request??????Azure storagebrownbag: Simple project to show branching, merging, and shelvingChannel9's Absolute Beginner Series: Channel 9's absolute beginner series source code. From Windows Phone 7, Windows Phone 8, Windows Store applications, one stop area for all the seriesFAST for Sharepoint 2010 Query Tool (.NET 3.5): .NET 3.5 version of the FAST Search for SharePoint MOSS 2010 Query Tool (https://fastforsharepoint.codeplex.com/). For environments without .NET 4.0HAOest Framework: HAOest????ListManager: ListManager????? by HADB of HAOestMyPS: mypsNNRel: NNRelO - BV - 2: TestOpen XML SDK for JavaScript: Small JavaScript library that enables you to implement Open XML functionality anywhere you can use JavaScript.Orchard Prefix free: Provides a script manifest for the Prefix free script libraries.PVDesktop: PVDesktop is an application for designing and analyzing specific solar energy sites.Red the sound TowePlay: School project at ISEN Lille. Creation of a collaborative music creation software in C# using.NETScience Kits for Kids: This project is the service for Childhood Education which aged 4 - 8.SharePoint ULS for PowerShell: Allows PowerShell to log to the SharePoint ULS.SSIS DQS Matching Transformation: The SSIS DQS Matching Transformation uses Data Quality Services (DQS) to find duplicate data within the SSIS data flow.TARVOS Computer Networks Simulator: Discrete event-based network simulator, supports simulating MPLS architecture, several RSVP-TE protocol functionalities and fast recovery.Web API Explorer 4 DNN: Web API Explorer for DNN(R) aids module development allowing you to examine the Routing Table entries for a DotNetNuke(R) portal.

    Read the article

  • How to do MVC the right way

    - by Ieyasu Sawada
    I've been doing MVC for a few months now using the CodeIgniter framework in PHP, but I still don't know if I'm really doing things right. What I currently do is: Model - this is where I put database queries (select, insert, update, delete). Here's a sample from one of the models that I have: function register_user($user_login, $user_profile, $department, $role) { $department_id = $this->get_department_id($department); $role_id = $this->get_role_id($role); array_push($user_login, $department_id, $role_id); $this->db->query("INSERT INTO tbl_users SET username=?, hashed_password=?, salt=?, department_id=?, role_id=?", $user_login); $user_id = $this->db->insert_id(); array_push($user_profile, $user_id); $this->db->query(" INSERT INTO tbl_userprofile SET firstname=?, midname=?, lastname=?, user_id=? ", $user_profile); } Controller - talks to the model, calls up the methods in the model which query the database, supplies the data which the views will display (success alerts, error alerts, data from the database), inherits a parent controller which checks if the user is logged in. Here's a sample: function create_user(){ $this->load->helper('encryption/Bcrypt'); $bcrypt = new Bcrypt(15); $user_data = array( 'username' => 'Username', 'firstname' => 'Firstname', 'middlename' => 'Middlename', 'lastname' => 'Lastname', 'password' => 'Password', 'department' => 'Department', 'role' => 'Role' ); foreach ($user_data as $key => $value) { $this->form_validation->set_rules($key, $value, 'required|trim'); } if ($this->form_validation->run() == FALSE) { $departments = $this->user_model->list_departments(); $it_roles = $this->user_model->list_roles(1); $tc_roles = $this->user_model->list_roles(2); $assessor_roles = $this->user_model->list_roles(3); $data['data'] = array('departments' => $departments, 'it_roles' => $it_roles, 'tc_roles' => $tc_roles, 'assessor_roles' => $assessor_roles); $data['content'] = 'admin/create_user'; parent::error_alert(); $this->load->view($this->_at, $data); } else { $username = $this->input->post('username'); $salt = $bcrypt->getSalt(); $hashed_password = $bcrypt->hash($this->input->post('password'), $salt); $fname = $this->input->post('firstname'); $mname = $this->input->post('middlename'); $lname = $this->input->post('lastname'); $department = $this->input->post('department'); $role = $this->input->post('role'); $user_login = array($username, $hashed_password, $salt); $user_profile = array($fname, $mname, $lname); $this->user_model->register_user($user_login, $user_profile, $department, $role); $data['content'] = 'admin/view_user'; parent::success_alert(4, 'User Successfully Registered!', 'You may now login using your account'); $data['data'] = array('username' => $username, 'fname' => $fname, 'mname' => $mname, 'lname' => $lname, 'department' => $department, 'role' => $role); $this->load->view($this->_at, $data); } } Views - this is where I put HTML, CSS, and JavaScript code (form validation code for the current form, looping through the data supplied by the controller, a few if statements to hide and show things depending on the data supplied by the controller).
<!--User registration form--> <form class="well min-form" method="post"> <div class="form-heading"> <h3>User Registration</h3> </div> <label for="username">Username</label> <input type="text" id="username" name="username" class="span3" autofocus> <label for="password">Password</label> <input type="password" id="password" name="password" class="span3"> <label for="firstname">First name</label> <input type="text" id="firstname" name="firstname" class="span3"> <label for="middlename">Middle name</label> <input type="text" id="middlename" name="middlename" class="span3"> <label for="lastname">Last name</label> <input type="text" id="lastname" name="lastname" class="span3"> <label for="department">Department</label> <input type="text" id="department" name="department" class="span3" list="list_departments"> <datalist id="list_departments"> <?php foreach ($data['departments'] as $row) { ?> <option data-id="<?php echo $row['department_id']; ?>" value="<?php echo $row['department']; ?>"><?php echo $row['department']; ?></option> <?php } ?> </datalist> <label for="role">Role</label> <input type="text" id="role" name="role" class="span3" list=""> <datalist id="list_it"> <?php foreach ($data['it_roles'] as $row) { ?> <option data-id="<?php echo $row['role_id']; ?>" value="<?php echo $row['role']; ?>"><?php echo $row['role']; ?></option> <?php } ?> </datalist> <datalist id="list_collection"> <?php foreach ($data['tc_roles'] as $row) { ?> <option data-id="<?php echo $row['role_id']; ?>" value="<?php echo $row['role']; ?>"><?php echo $row['role']; ?></option> <?php } ?> </datalist> <datalist id="list_assessor"> <?php foreach ($data['assessor_roles'] as $row) { ?> <option data-id="<?php echo $row['role_id']; ?>" value="<?php echo $row['role']; ?>"><?php echo $row['role']; ?></option> <?php } ?> </datalist> <p> <button type="submit" class="btn btn-success">Create User</button> </p> </form> <script> var departments = []; var roles = []; $('#list_departments option').each(function(i){ departments[i] = $(this).val(); }); $('#list_it option').each(function(i){ roles[roles.length + 1] = $(this).val(); }); $('#list_collection option').each(function(i){ roles[roles.length + 1] = $(this).val(); }); $('#list_assessor option').each(function(i){ roles[roles.length + 1] = $(this).val(); }); $('#department').blur(function(){ var department = $.trim($(this).val()); $('#role').attr('list', 'list_' + department); }); var password = new LiveValidation('password'); password.add(Validate.Presence); password.add(Validate.Length, {minimum: 10}); $('input[type=text]').each(function(i){ var field_id = $(this).attr('id'); var field = new LiveValidation(field_id); field.add(Validate.Presence); if(field_id == 'department'){ field.add(Validate.Inclusion, {within : departments}); } else if(field_id == 'role'){ field.add(Validate.Inclusion, {within : roles}) } }); </script> The code above is actually from the application that I'm currently working on. I'm working on it alone, so I don't really have someone to review my code and point out the wrong things in it, which is why I'm posting it here in hopes that someone could do that. I'm also looking for some guidelines for writing MVC code, like what things should and shouldn't be included in views, models, and controllers. How else can I improve the current code that I have right now? I've written some really terrible code before (duplication of logic, etc.);
that's why I want to improve my code so that I can easily maintain it in the future. Thanks!

    Read the article

  • Stumbling Through: Visual Studio 2010 (Part III)

    The last post ended with us just getting started on stumbling into text template file customization, a task that required a Visual Studio extension (Tangible T4 Editor) to even have a chance at completing.  Despite the benefits of the Tangible T4 Editor, I still had a hard time putting together a solid text template that would be easy to explain.  This is mostly due to the way the files allow you to mix code (encapsulated in <# #>) with straight-up text to generate.  It is effective, to be sure, but not very readable.  Nevertheless, I will try and explain what was accomplished in my custom tt file, though the details of it are not really the point of this article (my way of saying don't criticize my crappy code, and certainly don't use it in any somewhat real application.  You may become dumber just by looking at this code.  You have been warned - really, that's the footnote I should put at the end of all of my blog posts). To begin with, there were two basic requirements that I needed the code generator to satisfy: reading one to many Entity Framework files, and using the entities that were found to write one to many class files.  Thankfully, using the Entity Object Generator as a starting point gave us an example of how to do exactly that using the MetadataLoader and EntityFrameworkTemplateFileManager: you include references to these items and use them like so:

        // Instantiate an entity framework file reader and file writer
        MetadataLoader loader = new MetadataLoader(this);
        EntityFrameworkTemplateFileManager fileManager = EntityFrameworkTemplateFileManager.Create(this);
        // Load the entity model metadata workspace
        MetadataWorkspace metadataWorkspace = null;
        bool allMetadataLoaded = loader.TryLoadAllMetadata("MFL.tt", out metadataWorkspace);
        EdmItemCollection ItemCollection = (EdmItemCollection)metadataWorkspace.GetItemCollection(DataSpace.CSpace);
        // Create an IO class to contain the 'get' methods for all entities in the model
        fileManager.StartNewFile("MFL.IO.gen.cs");

    Next, we want to be able to loop through all of the entities found in the model, and then each property for each entity, so we can generate classes and methods for each.  The code for that is blissfully simple:

        // Iterate through each entity in the model
        foreach (EntityType entity in ItemCollection.GetItems<EntityType>().OrderBy(e => e.Name))
        {
            // Iterate through each primitive property of the entity
            foreach (EdmProperty edmProperty in entity.Properties.Where(p => p.TypeUsage.EdmType is PrimitiveType && p.DeclaringType == entity))
            {
                // TODO:  Create properties
            }
            // Iterate through each relationship of the entity
            foreach (NavigationProperty navProperty in entity.NavigationProperties.Where(np => np.DeclaringType == entity))
            {
                // TODO:  Create associations
            }
        }

    There really isn't anything more advanced than that going on in the text template.  The only thing I had to blunder through was realizing that if you want the generator to interpret a line of code (such as our iterations above), you need to enclose the code in <# and #>, while if you want the generator to interpret the VALUE of code, such as putting the entity name into the class name, you need to enclose the code in <#= and #>, like so:

        public partial class <#=entity.Name#>

    To make a long story short, I did a lot of repetition of the above to come up with a text template that generates a class for each entity based on its properties, and a set of IO methods for each entity based on its relationships.  
The two work together to provide lazy-loading for hierarchical data (such as getting Team.Players), so it should be pretty intuitive to use on a front-end.  This text template is available here; you can tweak the inputFiles array to load one or many different edmx models and generate the basic xml IO and class files, though it will probably only work correctly in the simplest of cases, like our MFL model described in the previous post.  Additionally, there is no validation, logging or error handling, which is something I want to handle later by stumbling through the Enterprise Library 5.0. The code that gets generated isn't anything special, though using the LINQ to XML feature was something very new and exciting for me.  I had only worked with XML in the past using the DOM or XML Reader objects along with XPath, and the LINQ to XML model is just so much more elegant and supposedly efficient (something to test later).  For example, the following code was generated to create a Player object for each Player node in the XML:

        return from element in GetXmlData(_PlayerDataFile).Descendants("Player")
            select new Player
            {
                Id = int.Parse(element.Attribute("Id").Value)
                ,ParentName = element.Parent.Name.LocalName
                ,ParentId = long.Parse(element.Parent.Attribute("Id").Value)
                ,Name = element.Attribute("Name").Value
                ,PositionId = int.Parse(element.Attribute("PositionId").Value)
            };

    It is all done in one line of code, no looping needed.  Even though GetXmlData loads the entire xml file just like the old XML DOM approach would have, it is supposed to be much less resource intensive.  I will definitely put that to the test after we develop a user interface for getting at this data.  Speaking of the data: where IS the data?  We've put together a pretty model and a bunch of code around it, but we don't have any data to speak of.  We can certainly drop to our favorite XML editor and crank out some data, but if it doesn't totally match our model, it will not load correctly.  To help with this, I've built in a method to generate xml at any given layer in the hierarchy.  So for us to get the closest possible thing to real data, we'd need to invoke MFL.IO.GenerateTeamXML and save the results to file.  Doing so should get us something that looks like this:

        <Team Id="0" Name="0">
          <Player Id="0" Name="0" PositionId="0">
            <Statistic Id="0" PassYards="0" RushYards="0" Year="0" />
          </Player>
        </Team>

    Sadly, it is missing the Positions node (haven't thought of a way to generate lookup xml yet) and the data itself isn't quite realistic (well, as realistic as MFL data can be anyway).  Let's manually remedy that for now to give us a decent starter set of data.  
Note that this is TWO xml files, Lookups.xml and Teams.xml:

        <Lookups Id="0">
          <Position Id="0" Name="Quarterback"/>
          <Position Id="1" Name="Runningback"/>
        </Lookups>

        <Teams Id="0">
          <Team Id="0" Name="Chicago">
            <Player Id="0" Name="QB Bears" PositionId="0">
              <Statistic Id="0" PassYards="4000" RushYards="120" Year="2008" />
              <Statistic Id="1" PassYards="4200" RushYards="180" Year="2009" />
            </Player>
            <Player Id="1" Name="RB Bears" PositionId="1">
              <Statistic Id="2" PassYards="0" RushYards="800" Year="2007" />
              <Statistic Id="3" PassYards="0" RushYards="1200" Year="2008" />
              <Statistic Id="4" PassYards="3" RushYards="1450" Year="2009" />
            </Player>
          </Team>
        </Teams>

    Ok, so we have some data, we have a way to read/write that data, and we have a friendly way of representing that data.  Now what remains is the part that I have been looking forward to the most: presenting the data to the user and giving them the ability to add/update/delete, and doing so in a way that is very intuitive (easy) from a development standpoint.
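    For anyone following along, the generated query above is plain LINQ to XML, so it is easy to experiment with outside the template. Here is a minimal, self-contained sketch that parses the Teams.xml sample by hand; the Player class and file name are assumptions based on the samples above, not the generated code itself:

        using System;
        using System.Linq;
        using System.Xml.Linq;

        public class Player
        {
            public int Id { get; set; }
            public string ParentName { get; set; }
            public long ParentId { get; set; }
            public string Name { get; set; }
            public int PositionId { get; set; }
        }

        public static class TeamsXmlDemo
        {
            public static void Main()
            {
                // Load the sample file and project each <Player> node into a Player object,
                // mirroring the generated query shown above.
                XDocument doc = XDocument.Load("Teams.xml");
                var players = from element in doc.Descendants("Player")
                              select new Player
                              {
                                  Id = int.Parse(element.Attribute("Id").Value),
                                  ParentName = element.Parent.Name.LocalName,
                                  ParentId = long.Parse(element.Parent.Attribute("Id").Value),
                                  Name = element.Attribute("Name").Value,
                                  PositionId = int.Parse(element.Attribute("PositionId").Value)
                              };
                foreach (Player p in players)
                {
                    Console.WriteLine("{0} (team {1}) plays position {2}", p.Name, p.ParentId, p.PositionId);
                }
            }
        }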

    Read the article

  • C# Bind DataTable to Existing DataGridView Column Definitions

    - by Timothy
    I've been struggling with a NullReferenceException and hope someone here will be able to point me in the right direction. I'm trying to create and populate a DataTable and then show the results in a DataGridView control. The basic code follows; execution stops with a NullReferenceException at the point where I invoke the new UpdateResults_Delegate. Oddly enough, I can trace entries.Rows.Count successfully before I return it from QueryEventEntries, so I can at least show 1) entries is not a null reference, and 2) the DataTable contains rows of data. I know I have to be doing something wrong, but I just don't know what.

        private void UpdateResults(DataTable entries)
        {
            dataGridView.DataSource = entries;
        }

        private void button_Click(object sender, EventArgs e)
        {
            PerformQuery();
        }

        private void PerformQuery()
        {
            DateTime start = new DateTime(dateTimePicker1.Value.Year, dateTimePicker1.Value.Month, dateTimePicker1.Value.Day, 0, 0, 0);
            DateTime stop = new DateTime(dateTimePicker2.Value.Year, dateTimePicker2.Value.Month, dateTimePicker2.Value.Day, 0, 0, 0);
            DataTable entries = QueryEventEntries(start, stop);
            UpdateResults(entries);
        }

        private DataTable QueryEventEntries(DateTime start, DateTime stop)
        {
            DataTable entries = new DataTable();
            entries.Columns.AddRange(new DataColumn[] {
                new DataColumn("event_type", typeof(Int32)),
                new DataColumn("event_time", typeof(DateTime)),
                new DataColumn("event_detail", typeof(String))});
            using (SqlConnection conn = new SqlConnection(DSN))
            {
                using (SqlDataAdapter adapter = new SqlDataAdapter(
                    "SELECT event_type, event_time, event_detail FROM event_log " +
                    "WHERE event_time >= @start AND event_time <= @stop", conn))
                {
                    adapter.SelectCommand.Parameters.AddRange(new Object[] {
                        new SqlParameter("@start", start),
                        new SqlParameter("@stop", stop)});
                    adapter.Fill(entries);
                }
            }
            return entries;
        }

    Update: I'd like to summarize and provide some additional information I've learned from the discussion here and my debugging efforts since I originally posted this question. I am refactoring old code that retrieved records from a database, collected those records as an array, and then later iterated through the array to populate a DataGridView row by row. Threading was originally implemented to compensate and keep the UI responsive during the unnecessary looping. I have since stripped out Thread/Invoke; everything now occurs on the same execution thread (thank you, Sam). I am attempting to replace the slow, unwieldy approach with a DataTable, which I can fill with a DataAdapter and assign to the DataGridView through its DataSource property (above code updated). I've iterated through the entries DataTable's rows to verify the table contains the expected data before assigning it as the DataGridView's DataSource:

        foreach (DataRow row in entries.Rows)
        {
            System.Diagnostics.Trace.WriteLine(
                String.Format("{0} {1} {2}", row[0], row[1], row[2]));
        }

    One of the columns of the DataGridView is a custom DataGridViewColumn that stylizes the event_type value. I apologize for not mentioning this in the original post, but I wasn't aware it was important to my problem. I have converted this column temporarily to a standard DataGridViewTextBoxColumn control and am no longer experiencing the exception. The fields in the DataTable are appended to the list of fields that were pre-specified in the Design view of the DataGridView, and the records' values are being populated in these appended fields.
When the runtime attempts to render the cell, a null value is provided (the value that should be rendered actually ends up a couple of columns over). In light of this, I am re-titling and re-tagging the question. I would still appreciate it if others who have experienced this could instruct me on how to bind the DataTable to the existing column definitions of the DataGridView.
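    For anyone hitting the same symptom: the usual way to map a DataTable onto columns defined at design time is to turn off AutoGenerateColumns and point each column's DataPropertyName at the matching DataTable column. A minimal sketch follows; the "colEventType" style names are hypothetical stand-ins for whatever the designer file actually calls the columns:

        private void BindEntries(DataTable entries)
        {
            // Stop the grid from auto-appending a column for every DataTable field.
            dataGridView.AutoGenerateColumns = false;
            // Map each designer-defined column to its source field by name.
            // NOTE: the column names here are hypothetical.
            dataGridView.Columns["colEventType"].DataPropertyName = "event_type";
            dataGridView.Columns["colEventTime"].DataPropertyName = "event_time";
            dataGridView.Columns["colEventDetail"].DataPropertyName = "event_detail";
            dataGridView.DataSource = entries;
        }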

    Read the article

  • Need help with setting up comet code

    - by Saif Bechan
    Does anyone know of a way, or think it's possible, to connect Node.js with the Nginx HTTP push module to maintain a persistent connection between server and browser? I am new to Comet, so I just don't understand the publishing side yet; maybe someone can help me with this. What I have set up so far is the following. I downloaded the jQuery.comet plugin and set up the following basic code:

    Client JavaScript

        <script type="text/javascript">
            function updateFeed(data) { $('#time').text(data); }
            function catchAll(data, type) { console.log(data); console.log(type); }
            $.comet.connect('/broadcast/sub?channel=getIt');
            $.comet.bind(updateFeed, 'feed');
            $.comet.bind(catchAll);
            $('#kill-button').click(function() { $.comet.unbind(updateFeed, 'feed'); });
        </script>

    What I can understand from this is that the client will keep on listening to the URL /broadcast/sub?channel=getIt. When there is a message, it will fire updateFeed. Pretty basic and understandable, IMO.

    Nginx HTTP push module config

        default_type application/octet-stream;
        sendfile on;
        keepalive_timeout 65;
        push_authorized_channels_only off;
        server {
            listen 80;
            location /broadcast {
                location = /broadcast/sub {
                    set $push_channel_id $arg_channel;
                    push_subscriber;
                    push_subscriber_concurrency broadcast;
                    push_channel_group broadcast;
                }
                location = /broadcast/pub {
                    set $push_channel_id $arg_channel;
                    push_publisher;
                    push_min_message_buffer_length 5;
                    push_max_message_buffer_length 20;
                    push_message_timeout 5s;
                    push_channel_group broadcast;
                }
            }
        }

    OK, this tells Nginx to listen on port 80 for any calls to /broadcast/sub, and it will give back any messages sent to /broadcast/pub. Pretty basic also. This part is not so hard to understand and is well documented over the internet. Most of the time there is a Ruby or a PHP file behind this that does the broadcasting. My idea is to have Node.js broadcast to /broadcast/pub. I think this will let me have persistent streaming data from the server to the client without breaking the connection. I tried the long-polling approach with looping the request, but I think this will be more efficient. Or is this not going to work?

    Node.js file

    Now, for creating the Node.js file, I'm lost. First of all, I don't know how to get Node.js to work in this way. The setup I used for long polling is as follows:

        var sys = require('sys'),
            http = require('http');
        http.createServer(function (req, res) {
            res.writeHead(200, {'Content-Type': 'text/html'});
            res.write(new Date());
            res.close();
            setTimeout('', 1000);
        }).listen(8000);

    This listens on port 8000 and just writes to the response. For long polling my nginx config looked something like this:

        server {
            listen 80;
            server_name _;
            location / {
                proxy_pass http://mydomain.com:8080$request_uri;
                include /etc/nginx/proxy.conf;
            }
        }

    This just redirected port 80 to 8000, and this worked fine. Does anyone have an idea of how to make Node.js act in a way Comet understands? It would be really nice and would help me out a lot.

    Resources used:
    An example where this is done with Ruby instead of Node.js
    jQuery.comet
    Nginx HTTP push module homepage
    Faye: a Comet client and server for Node.js and Rack (to use Faye I would have to install its Comet client, but I want to use the one supplied with Nginx; that's why I don't just use Faye - the one Nginx uses is much more optimized)
    Extras: Persistent connections; Going evented with Node.js
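    One thing that may help to see the moving parts: the publish side of the Nginx push module is just a plain HTTP POST to /broadcast/pub?channel=..., so any process that can issue HTTP requests can act as the broadcaster. As an illustration only - written in C# to match the rest of this page rather than Node.js, and with a hypothetical localhost URL - a publisher could look like this:

        using System;
        using System.IO;
        using System.Net;
        using System.Text;

        public static class PushPublisher
        {
            public static void Main()
            {
                // Hypothetical endpoint; matches the /broadcast/pub location in the config above.
                string url = "http://localhost/broadcast/pub?channel=getIt";
                byte[] body = Encoding.UTF8.GetBytes(DateTime.Now.ToString());

                HttpWebRequest request = (HttpWebRequest)WebRequest.Create(url);
                request.Method = "POST";
                request.ContentType = "text/plain";
                request.ContentLength = body.Length;
                using (Stream stream = request.GetRequestStream())
                {
                    stream.Write(body, 0, body.Length);
                }
                // The module replies with a status code indicating whether the message
                // was delivered to a live subscriber or only queued.
                using (HttpWebResponse response = (HttpWebResponse)request.GetResponse())
                {
                    Console.WriteLine("Publish status: {0}", response.StatusCode);
                }
            }
        }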

    Read the article

  • Visual C++ 2010 Winform Errors C2238, C2059, C1075.

    - by tracelez
    It errors when I try to compile. If I cut this code out of the program, the program works and compiles correctly, so I'm not sure why it doesn't like this part. This part of the code does single-digit math with numeric strings that are converted into char arrays.

    Error 2 error C2238: unexpected token(s) preceding ';' C:\Users\Alpha\documents\visual studio 2010\Projects\Win32 Form c++\Win32 Form c++\Win32 Form c++.cpp 10 1 Win32 Form c++
    Error 1 error C2059: syntax error : 'namespace' C:\Users\Alpha\documents\visual studio 2010\Projects\Win32 Form c++\Win32 Form c++\Win32 Form c++.cpp 10 1 Win32 Form c++
    Error 3 error C1075: end of file found before the left brace '{' at 'c:\users\alpha\documents\visual studio 2010\projects\win32 form c++\win32 form c++\Form1.h(40)' was matched C:\Users\Alpha\documents\visual studio 2010\Projects\Win32 Form c++\Win32 Form c++\Win32 Form c++.cpp 23 1 Win32 Form c++

        // chX[] and chY[] are char arrays from functional part of program
        std::reverse( chX, &chX[ strlen( chX ) ] );
        std::reverse( chY, &chY[ strlen( chY ) ] );
        // makes sure x is larger or equal to y...makes looping logic easier
        if (strlen(chX) < strlen(chY)) {
            char *chZ = chX;
            chX = chY;
            chY = chZ;
        }
        //Variables for this part of the program
        char chX2;
        char chY2;
        std::string strSum;
        int sum = 0;
        bool carryth1 = false;
        int x=0;
        int y=0;
        for (int i = 0; i <= (strlen(chX)-1); i++) {
            if (i <= strlen(chY)-1) {
                chX2 = chX[i];
                chY2 = chY[i];
                x = atoi(chX2);
                y = atoi(chY2);
                //x = atoi(chX[i]);
                sum = x+y+(int)carryth1;
                carryth1 = false;
                if (sum > 9) {
                    if (i == 0) {
                        sum -= 10;
                        strSum = itoa(sum);
                        carryth1 = true;
                    } else {
                        sum -= 10;
                        strSum += itoa(sum);
                        carryth1 = true;
                    }
                } else {
                    if (i == 0) {
                        strSum = itoa(sum);
                    } else {
                        strSum += itoa(sum);
                    }
                } else {
                    y = 0;
                    chX2 = chX[i];
                    x = atoi(chX2);
                    sum = x+y+(int)carryth1;
                    if ((i == strlen(chX)-1) && (carryth1 == true) && (x == 9)) {
                        strSum += "10";
                    } else {
                        strSum = itoa(sum);
                    }
                }
        std::reverse( strSum, &strSum[ strlen( strSum ) ] );
        //Creates new string for txtDisplay this simplifies conversions
        String^ strDisplay = "X is " + gcnew String((strX1.c_str())) + " Y is " + gcnew String((strY1.c_str())) + " \r\n ";
        strDisplay += "The sum of the X + Y = ";
        txtDisplay->Text = gcnew String((strSum.c_str()));

    Read the article

  • Advance way of using UIView convertRect method to detect CGRectIntersectsRect multiple times

    - by Chris
    I recently asked a question regarding collision detection within subviews, and got a perfect answer. I've come to the last point in implementing the collision in my application, but I've run into a new issue. Using convertRect was fine for getting the CGRect from the subview; I needed it to be a little more complex, as it wasn't exactly rectangles that needed to be detected. In Xcode I created an abstract class called TileViewController. Amongst other properties it has an IBOutlet UIView *detectionView. I now have multiple classes that inherit from TileViewController, and in each class there are multiple views nested inside the detectionView, which I created using Interface Builder. The idea is that an object could be a certain shape or size; I've programmatically placed these 'tiled' detection points bottom center of each object. A user can select an item and interact with it - in this circumstance, move it around. Here is my touchesMoved method:

        -(void)touchesMoved:(NSSet *)touches withEvent:(UIEvent *)event {
            UITouch *touch = [[event allTouches] anyObject];
            CGPoint location = [touch locationInView:touch.view];
            interactiveItem.center = location; // The ViewController the user has chosen to interact with
            interactiveView.view.center = location;
            // checks if the user has selected an item to interact with
            if (interactiveItem) {
                // First check there is more than 1 item in the collection
                NSUInteger assetCount = [itemViewCollection count]; // NSMutableArray that holds the ViewControllers
                int detectionCount = 0; // To count how many times a CGRectIntersectsRect occurred
                UIView *parentView = self.view;
                // if there is more than 1 item begin collision detection
                if (assetCount > 1) {
                    for (TileViewController *viewController in itemViewCollection) {
                        if (viewController.view.tag != interactiveView.view.tag) {
                            if (viewController.detectionView.subviews) {
                                for (UIView *detectView in viewController.detectionView.subviews) {
                                    CGRect viewRect;
                                    viewRect = [detectView convertRect:[detectView frame] toView:parentView];
                                    // I could have checked to see if the below has subviews but didn't - in my current implementation it does anyway
                                    for (UIView *detectInteractView in interactiveView.detectionView.subviews) {
                                        CGRect interactRect;
                                        interactRect = [detectInteractView convertRect:[detectInteractView frame] toView:parentView];
                                        if (CGRectIntersectsRect(viewRect, interactRect) == 1) {
                                            NSLog(@"collision detected");
                                            [detectView setBackgroundColor:[UIColor blueColor]];
                                            [detectInteractView setBackgroundColor:[UIColor blueColor]];
                                            detectionCount++;
                                        } else {
                                            [detectView setBackgroundColor:[UIColor yellowColor]];
                                            [detectInteractView setBackgroundColor:[UIColor yellowColor]];
                                        }
                                    }
                                }
                            }
                        }
                    }
                    // Logic if no items collided
                    if (detectionCount == 0) {
                        NSLog(@"Do something");
                    }
                }
            }
        }

    Now the method itself works to an extent, but I don't think it's working with the nested values properly, as the detection is off. A simplified version of this method works - using CGRectIntersectsRect on the detectionView itself - so I'm wondering if I'm looping through and checking the views correctly. I wasn't sure whether it was comparing in the same view, but I suspect it is. I did modify the code slightly at one point: rather than comparing the values in self.view, I converted the viewController.detectView's UIViews into the interactiveView.detectView, but the outcome was the same. It's rigged so the subviews change colour, but they change colour when they are not even touching, and when they do touch, the wrong UIViews change colour. Many thanks in advance.

    Read the article

  • Token replacement

    - by ClarkeyBoy
    Hey, I currently have a system on my website whereby I can put something like "[cfe]" anywhere in the site and, when the page is rendered, it will be replaced with the root of the customer front end (same for "[afe]" and the admin front end - so in the admin front end I can put "[cfe]/Default.aspx" to link to the homepage on the customer front end). This is in place because I have a development version of the site, then a test and a live version too. All three may have different roots to each section (for example, the way the website is set up, the root to the admin front end in test is "/test/Administration/", but in live and development it is just "/Administration/"). Which version it is depends on the URL - all my development sites are in a folder called "development", whereas test is in a folder called "test", and live URLs contain neither. There are also three different databases, one for each, and all three obviously require a different connection string. I also have a string replacement function in place which can change, for example, "[Product:Cards]" to point to the Cards catalogue page. The problem is that for this I go through all the products and do a replacement on "[Product:" & Product.Name() & "]". However, I would like to take this further. I would like to pick up these custom strings when the page is rendered, so it picks up "[Product:Cards]", goes off to find the product "Cards", and replaces the string with a link to the Cards page, rather than looping through all the products and doing a replace just on the off chance that there are any replacements to make. One use for this, which I may start using in the future if I can figure out how to do it, is like on Wikipedia, where you put the title of the page you want to point to, then a divider (I think it's a pipe, from memory), then the link text. I would like to apply this to the above situation. This way broken links can also be picked up and reported to admin (a major advantage, as they can then locate them and remove the link or add the product/page it refers to). I would like to take this to the stage where the content of entire pages can be rearranged (kinda like web parts, but not as advanced as that). I mean like so you can put [layout type="3columnImageTopRight" image="imageurl"]Content here[/layout]. This will display, as specified, an image in the top right (with padding at the left and bottom) and 3 columns, maybe with the image spanning one or two columns. The imageurl can be specified as another token, maybe like [Image:imagename.gif] or something, which is replaced with the root of the image folder plus the specified filename. I have not really looked into how I am going to split the content into 3 columns yet, but this would be something to look at for my dissertation and then implement after my project deadline at least. Does anyone have any ideas or pointers which could help me with this? Also, if this is not strictly token replacement, please point me to what it is, so I can develop this further. Thanks in advance, Regards, Richard Clarke
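    For what it's worth, the "scan the page for tokens" direction is usually done with a single regular-expression pass and a match evaluator, rather than one Replace call per product. Since the site appears to be ASP.NET, here is a minimal C# sketch of the idea; the token grammar and the LookUpProductUrl helper are hypothetical stand-ins, not part of the poster's codebase:

        using System;
        using System.Text.RegularExpressions;

        public static class TokenExpander
        {
            // Matches tokens of the hypothetical form [Product:Name] or [Product:Name|Link text].
            private static readonly Regex TokenPattern =
                new Regex(@"\[Product:(?<name>[^\]|]+)(\|(?<text>[^\]]+))?\]", RegexOptions.Compiled);

            public static string Expand(string pageContent)
            {
                // One pass over the page: each token found is resolved on demand,
                // so products without tokens are never touched.
                return TokenPattern.Replace(pageContent, match =>
                {
                    string name = match.Groups["name"].Value;
                    string text = match.Groups["text"].Success ? match.Groups["text"].Value : name;
                    string url = LookUpProductUrl(name); // hypothetical catalogue lookup
                    if (url == null)
                    {
                        // Broken token: leave it visible (or log it for the admin report).
                        return match.Value;
                    }
                    return String.Format("<a href=\"{0}\">{1}</a>", url, text);
                });
            }

            private static string LookUpProductUrl(string name)
            {
                // Stand-in for a real database/catalogue lookup.
                return name == "Cards" ? "/Products/Cards.aspx" : null;
            }
        }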

    Read the article

  • WPF: Updating visibility of controls not updating the screen

    - by Brad McBride
    I will preface this by stating that I am new to WPF programming and may be making multiple errors. Any insight that can be provided to help me improve my skills is greatly appreciated. I am working with a WPF application and am looping through a list of objects whose properties describe a document that should be built on the fly and automatically printed. I am attempting to display a small grid in the interface that shows the document being built before it is printed. This serves two purposes: one, it allows the user to see work being done by the application. Two, it renders the items on the screen so that I have something to actually print, since WPF appears unable to load an image for printing dynamically without displaying it on the screen. In my code, I am setting the various elements in the grid and setting the visibility to visible. However, the UI is not updating, and the printed document doesn't look as intended since the image never shows up on the screen. Here is the XAML I have set up:

        <Grid x:Name="LayoutRoot" Background="Black">
            <Grid Name="previewGrid" Grid.Row="1" Grid.Column="1" Background="White" Visibility="Hidden">
                <Canvas Name="pageCanvas" HorizontalAlignment="Center" VerticalAlignment="Center">
                    <Grid Name="pageGrid" Width="163" Height="211">
                        <Grid.ColumnDefinitions>
                            <ColumnDefinition Width="81.5"></ColumnDefinition>
                            <ColumnDefinition Width="81.5"></ColumnDefinition>
                        </Grid.ColumnDefinitions>
                        <TextBlock Grid.Column="0" Name="copyright" TextAlignment="Center" HorizontalAlignment="Center" VerticalAlignment="Bottom"></TextBlock>
                        <Image Name="pageImage" Grid.Column="1" HorizontalAlignment="Center" VerticalAlignment="Center"></Image>
                    </Grid>
                </Canvas>
                .....canvas for pages 2-4 not shown but structure is the same as for pageGrid.....
            </Grid>
        </Grid>
        </Window>

    Here is the code-behind that is supposed to set the elements:

        previewGrid.Visibility = Windows.Visibility.Visible
        pageURI = New Uri(pageCollection(i).imageURL, UriKind.Absolute)
        pageGrid.Visibility = Windows.Visibility.Visible
        Dim bmp As New BitmapImage()
        bmp.BeginInit()
        bmp.StreamSource = getCachedURLStream(pageURI)
        bmp.EndInit()
        pageImage.Source = bmp
        copyright.Text = copyrightText
        previewGrid.UpdateLayout()
        ' More code that prints the visual element pageGrid
        previewGrid.Visibility = Windows.Visibility.Hidden

    The code-behind loops a number of times, depending on how many documents the user prints. Basically it builds a visual element for a page, prints an XPS version of it, then builds the next page and prints it, and so on. Once all pages have been processed, the job is sent to the printer. The only purpose of this application is to let the user print these documents, so there is no other task they can perform while the documents print. I thought that putting this task in a background thread would help update the UI, but since I am trying to manipulate items directly on the UI thread, it would appear that option won't work for me. What am I doing wrong here, and how can I improve the code so that I get the behavior I am trying to achieve?
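    A pattern that often comes up for this symptom (a tight loop on the UI thread never gives WPF a chance to render the visibility change) is to explicitly pump the dispatcher between pages. Below is a hedged C# sketch of the commonly cited "WPF DoEvents" helper; whether it is appropriate here depends on the printing loop staying on the UI thread, and the PrintCurrentPage call in the usage comment is a hypothetical placeholder:

        using System.Windows.Threading;

        public static class UiPump
        {
            // Pushes a nested dispatcher frame so queued layout/render work can run,
            // then returns control to the caller's loop.
            public static void DoEvents()
            {
                DispatcherFrame frame = new DispatcherFrame();
                Dispatcher.CurrentDispatcher.BeginInvoke(
                    DispatcherPriority.Background,
                    new DispatcherOperationCallback(ExitFrame),
                    frame);
                Dispatcher.PushFrame(frame);
            }

            private static object ExitFrame(object frame)
            {
                ((DispatcherFrame)frame).Continue = false;
                return null;
            }
        }

        // Hypothetical usage inside the page loop: update the preview, let WPF render, then print.
        // previewGrid.Visibility = Visibility.Visible;
        // UiPump.DoEvents();
        // PrintCurrentPage();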

    Read the article

  • How to get IMediaControl.Run() to start a file playing with no delay

    - by MusiGenesis
    I am attempting to use DirectShow to play two AVI files consecutively (one after the other) so that there is no interruption in the audio or video when the player transitions from one file to the next. I have two custom controls on my form. Each one is pre-loaded with an AVI file, and before playback begins I set up all the DirectShow interfaces, set the video windows and resize them, call IMediaControl.Run(), then IMediaControl.Pause(), then IMediaSeeking.SetPositions to reset to frame 0, on both controls. On the form, you can see that both files are paused at their initial frames. I then call IMediaControl.Run() on the first control, and wait for it to complete before calling Run() on the second control. Initially, I hooked into the first video's EC_COMPLETE notification message and used this to start the second. Thinking that this event might be slow to arrive (turns out it is, but for a weird reason), I tried two other approaches:

    1) Check the first video's current position inside a timer that goes off every second or so (using IMediaPosition.get_CurrentPosition). When the current position is within a second of the video's stop time (known in advance from IMediaPosition.get_StopTime), I go into a tight while loop, wait for the current position to equal the stop time, and then call Run() on the second video.

    2) Same as the first, except I replace the while loop with a call to timeSetEvent from winmm.dll, with a delay set so that it fires right when the first file is supposed to end, and I use the callback to Run() the second file.

    Either of these two methods substantially cuts down the delay between the end of the first file and the beginning of the second, indicating that the EC_COMPLETE message doesn't arrive immediately after the file is complete (I also tried hooking the EC_SEGMENT_COMPLETE message, which is supposed to be used for looping within a file, but apparently nobody supports this - it never occurs on my machine, at least). Doing all of the above has cut the transition delay from as much as a second down to a barely perceptible glitch; about a third of the time the files transition with no interruption at all, which suggests there's no fundamental reason I can't get this to work all the time. The slight delay is still unacceptable, unfortunately. I assume (and I could easily be wrong) that the remaining delay is due to a slight variable delay between the call to IMediaControl.Run() and when the video actually starts playing. Does anybody know anything I can do to eliminate this little lag? It would also help to be told this is fundamentally impossible for whatever reason, which wouldn't surprise me. I've never encountered a video player in Windows that doesn't have this problem, so it may not be doable. More info: the AVI files I'm playing are completely uncompressed (video and audio are uncompressed), so I don't think the lag is due to DirectShow having to uncompress the video ahead of play start, although it may still buffer ahead as a matter of course (and this may be the source of the problem). I would have thought that starting play, pausing, and then rewinding to the beginning would fix this. Also, the way I'm handling the transition is to actually have the second control underneath the first; when the first completes playing, I start the second and then call BringToFront on it, creating the appearance of a single video transitioning between the two originals. 
I don't think the glitch is due to this, because it works perfectly some of the time, and even if this were problematic, it wouldn't explain the matching audio glitch. Even more: I just tried starting the second video 30-50 milliseconds "early" and that seemed to eliminate even more of the gap, so I'm guessing that the lag in Run() is about that long. It appears to be variable, though, so this is still not where I need it to be.
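    Since the timeSetEvent approach is the one doing the heavy lifting here, a sketch of the interop involved may help anyone reproducing this. This is a hedged C# rendition of the winmm.dll call; the one-shot flags and the need to keep the callback delegate alive are the parts that commonly trip people up, and the StartSecondVideo callback in the usage comment is a hypothetical placeholder:

        using System;
        using System.Runtime.InteropServices;

        public static class MultimediaTimer
        {
            private delegate void TimerCallback(uint id, uint msg, UIntPtr user, UIntPtr dw1, UIntPtr dw2);

            [DllImport("winmm.dll", SetLastError = true)]
            private static extern uint timeSetEvent(uint delayMs, uint resolutionMs,
                TimerCallback callback, UIntPtr user, uint eventType);

            private const uint TIME_ONESHOT = 0x0000;
            private const uint TIME_KILL_SYNCHRONOUS = 0x0100;

            // Keep a reference so the GC cannot collect the delegate while the timer is pending.
            private static TimerCallback _pending;

            public static void FireOnceAfter(uint delayMs, Action action)
            {
                _pending = (id, msg, user, dw1, dw2) => action();
                // 1 ms resolution, one-shot; note the callback runs on a worker thread.
                timeSetEvent(delayMs, 1, _pending, UIntPtr.Zero, TIME_ONESHOT | TIME_KILL_SYNCHRONOUS);
            }
        }

        // Hypothetical usage: schedule the second IMediaControl.Run() just before the first file ends.
        // MultimediaTimer.FireOnceAfter(remainingMs, () => StartSecondVideo());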

    Read the article

  • Core Plot never stops asking for data, hangs device

    - by Ben Collins
    I'm trying to set up a core plot that looks somewhat like the AAPLot example on the core-plot wiki. I have set up my plot like this: - (void)initGraph:(CPXYGraph*)graph forDays:(NSUInteger)numDays { self.cplhView.hostedLayer = graph; graph.paddingLeft = 30.0; graph.paddingTop = 20.0; graph.paddingRight = 30.0; graph.paddingBottom = 20.0; CPXYPlotSpace *plotSpace = (CPXYPlotSpace*)graph.defaultPlotSpace; plotSpace.xRange = [CPPlotRange plotRangeWithLocation:CPDecimalFromFloat(0) length:CPDecimalFromFloat(numDays)]; plotSpace.yRange = [CPPlotRange plotRangeWithLocation:CPDecimalFromFloat(0) length:CPDecimalFromFloat(1)]; CPLineStyle *lineStyle = [CPLineStyle lineStyle]; lineStyle.lineColor = [CPColor blackColor]; lineStyle.lineWidth = 2.0f; // Axes NSLog(@"Setting up axes"); CPXYAxisSet *xyAxisSet = (id)graph.axisSet; CPXYAxis *xAxis = xyAxisSet.xAxis; // xAxis.majorIntervalLength = CPDecimalFromFloat(7); // xAxis.minorTicksPerInterval = 7; CPXYAxis *yAxis = xyAxisSet.yAxis; // yAxis.majorIntervalLength = CPDecimalFromFloat(0.1); // Line plot with gradient fill NSLog(@"Setting up line plot"); CPScatterPlot *dataSourceLinePlot = [[[CPScatterPlot alloc] initWithFrame:graph.bounds] autorelease]; dataSourceLinePlot.identifier = @"Data Source Plot"; dataSourceLinePlot.dataLineStyle = nil; dataSourceLinePlot.dataSource = self; [graph addPlot:dataSourceLinePlot]; CPColor *areaColor = [CPColor colorWithComponentRed:1.0 green:1.0 blue:1.0 alpha:0.6]; CPGradient *areaGradient = [CPGradient gradientWithBeginningColor:areaColor endingColor:[CPColor clearColor]]; areaGradient.angle = -90.0f; CPFill *areaGradientFill = [CPFill fillWithGradient:areaGradient]; dataSourceLinePlot.areaFill = areaGradientFill; dataSourceLinePlot.areaBaseValue = CPDecimalFromString(@"320.0"); // OHLC plot NSLog(@"OHLC Plot"); CPLineStyle *whiteLineStyle = [CPLineStyle lineStyle]; whiteLineStyle.lineColor = [CPColor whiteColor]; whiteLineStyle.lineWidth = 1.0f; CPTradingRangePlot *ohlcPlot = [[[CPTradingRangePlot alloc] initWithFrame:graph.bounds] autorelease]; ohlcPlot.identifier = @"OHLC"; ohlcPlot.lineStyle = whiteLineStyle; ohlcPlot.stickLength = 2.0f; ohlcPlot.plotStyle = CPTradingRangePlotStyleOHLC; ohlcPlot.dataSource = self; NSLog(@"Data source set, adding plot"); [graph addPlot:ohlcPlot]; } And my delegate methods like this: #pragma mark - #pragma mark CPPlotdataSource Methods - (NSUInteger)numberOfRecordsForPlot:(CPPlot *)plot { NSUInteger maxCount = 0; NSLog(@"Getting number of records."); if (self.data1 && [self.data1 count] > maxCount) { maxCount = [self.data1 count]; } if (self.data2 && [self.data2 count] > maxCount) { maxCount = [self.data2 count]; } if (self.data3 && [self.data3 count] > maxCount) { maxCount = [self.data3 count]; } NSLog(@"%u records", maxCount); return maxCount; } - (NSNumber *)numberForPlot:(CPPlot *)plot field:(NSUInteger)fieldEnum recordIndex:(NSUInteger)index { NSLog(@"Getting record @ idx %u", index); return [NSNumber numberWithInt:index]; } All the code above is in the view controller for the view hosting the plot, and when initGraph is called, numDays is 30. I realize of course that this plot, if it even worked, would look nothing like the AAPLot example. The problem I'm having is that the view is never shown. It finished loading because viewDidLoad is the method that calls initGraph above, and the NSLog statements indicate that initGraph finishes. What's strange is that I return a value of 54 from numberOfRecordsForPlot, but the plot asks for more than 54 data points. 
In fact, it never stops asking: the NSLog statement in numberForPlot:field:recordIndex: prints forever, going from 0 to 54 and then looping back around and continuing. What's going on? Why won't the plot stop asking for data and draw itself?

    Read the article

  • populate a tree view with an xml file

    - by syedsaleemss
    I'm using a .NET Windows Forms application. I have an XML file, and I want to populate a tree view with data from it. I am doing this with the following code:

        private void button1_Click(object sender, EventArgs e)
        {
            try
            {
                this.Cursor = System.Windows.Forms.Cursors.WaitCursor;
                //string strXPath = "languages";
                string strRootNode = "Treeview Sample";
                OpenFileDialog Dlg = new OpenFileDialog();
                Dlg.Filter = "All files(*.*)|*.*|xml file (*.xml)|*.txt";
                Dlg.CheckFileExists = true;
                string xmlfilename = "";
                if (Dlg.ShowDialog() == DialogResult.OK)
                {
                    xmlfilename = Dlg.FileName;
                }
                // Load the XML file.
                XmlDocument doc = new XmlDocument();
                doc.Load(xmlfilename);
                string rootName = doc.SelectSingleNode("/*").Name;
                textBox4.Text = rootName.ToString();
                // Load the XML into the TreeView.
                this.treeView1.Nodes.Clear();
                this.treeView1.Nodes.Add(new TreeNode(strRootNode));
                TreeNode tNode = new TreeNode();
                tNode = this.treeView1.Nodes[0];
                XmlNodeList oNodes = doc.SelectNodes(textBox4.Text);
                XmlNode xNode = oNodes.Item(0).ParentNode;
                AddNode(ref xNode, ref tNode);
                this.treeView1.CollapseAll();
                this.treeView1.Nodes[0].Expand();
                this.Cursor = System.Windows.Forms.Cursors.Default;
            }
            catch (Exception ex)
            {
                this.Cursor = System.Windows.Forms.Cursors.Default;
                MessageBox.Show(ex.Message, "Error");
            }
        }

        private void AddNode(ref XmlNode inXmlNode, ref TreeNode inTreeNode)
        {
            // Recursive routine to walk the XML DOM and add its nodes to a TreeView.
            XmlNode xNode;
            TreeNode tNode;
            XmlNodeList nodeList;
            int i;
            // Loop through the XML nodes until the leaf is reached.
            // Add the nodes to the TreeView during the looping process.
            if (inXmlNode.HasChildNodes)
            {
                nodeList = inXmlNode.ChildNodes;
                for (i = 0; i <= nodeList.Count - 1; i++)
                {
                    xNode = inXmlNode.ChildNodes[i];
                    inTreeNode.Nodes.Add(new TreeNode(xNode.Name));
                    tNode = inTreeNode.Nodes[i];
                    AddNode(ref xNode, ref tNode);
                }
            }
            else
            {
                inTreeNode.Text = inXmlNode.OuterXml.Trim();
            }
        }

    My XML file ("hello.xml") looks like this:

        <languages>
          <language>
            <key>abc</key>
            <value>hello how ru</value>
          </language>
          <language>
            <key>def</key>
            <value>i m fine</value>
          </language>
          <language>
            <key>ghi</key>
            <value>how abt u</value>
          </language>
        </languages>

    Now, after using the above code, I am able to populate the tree view. But I don't want to populate the complete XML file; I should only get down to the languages, language, key, and value nodes. I don't want the leaf text like "abc" and "hello how ru" and so on - I mean to say the leaf nodes. Please help me.
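    One way to stop the recursion before the text leaves, sketched under the assumption that only element nodes should become tree nodes:

        // A variation on AddNode that adds element nodes only, so the text
        // content ("abc", "hello how ru", ...) never becomes a tree node.
        private void AddElementNodes(XmlNode inXmlNode, TreeNode inTreeNode)
        {
            foreach (XmlNode child in inXmlNode.ChildNodes)
            {
                if (child.NodeType != XmlNodeType.Element)
                {
                    continue; // skip text, comments, CDATA, etc.
                }
                TreeNode tNode = new TreeNode(child.Name);
                inTreeNode.Nodes.Add(tNode);
                AddElementNodes(child, tNode);
            }
        }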

    Read the article

  • Recursive XML through XSLT to XML

    - by Patrick
    Essentially, I have XML structured like this: <A> <B> <1>data</1> <2>data</2> <C> <1>data</1> <2>data</2> <B> <1>data</1> <2>data</2> <C> <B> <1>data</1> <2>data</2> </B> </C> </B> <B> <1>data</1> <2>data</2> </B> </C> </B> </A> I am trying to get the output to look like this: <A> <B 1="data" 2="data"> <C 1="data" 2="data"> <B 1="data" 2="data"> <C> <B 1="data" 2="data" > </B> </C> </B> <B 1="data" 2="data" > </B> </C> </B> </A> I have figured out how to put everything as attributes and start looping through the elements. The issue I am facing is that when trying to get below the first C, nothing happens. Here is my code: <?xml version="1.0" encoding="utf-8"?> <xsl:stylesheet version="1.0" xmlns:xsl="http://www.w3.org/1999/XSL/Transform" xmlns:msxsl="urn:schemas-microsoft-com:xslt" exclude-result-prefixes="msxsl"> <xsl:output method="xml" indent="yes"/> <xsl:template match="/"> <MenuDataResult> <B> <xsl:apply-templates /> </B> </MenuDataResult> </xsl:template> <xsl:template match="B"> <xsl:for-each select="B"> <B ItemID="{B/ItemID/text()}" ItemType="{ItemType/text()}" ItemSubType="{ItemSubType/text()}" ItemTitle="{ItemTitle/text()}" ItemImage="{ItemImage/text()}" ItemImageOverride="{ItemImageOverride/text()}" ItemLink="{ItemLink/text()}" ItemTarget="{ItemTarget/text()}>"> <xsl:for-each select="C"> <xsl:apply-templates select="C"/> </xsl:for-each> </B> </xsl:for-each> </xsl:template> <xsl:template match="C"> <C ID="{ID/text()}" Title="{Title/text()}" Template="{Template/text()}" Type="{Type/text()}" Link="{Link/text()}" ParentID="{ParentID/text()}" AncestorID="{AncestorID/text()}" FolderID="{FolderID/text()}" Description="{Description/text()}" Image="{Image/text()}" ImageOverride="{ImageOverride/text()}"> <xsl:for-each select="B"> <xsl:apply-templates select=".//B"/> </xsl:for-each> </C> </xsl:template> </xsl:stylesheet>

    Read the article

< Previous Page | 27 28 29 30 31 32 33 34  | Next Page >