Search Results

Search found 2734 results on 110 pages for 'michael wagner'.

Page 7/110 | < Previous Page | 3 4 5 6 7 8 9 10 11 12 13 14  | Next Page >

  • fast, clean, C, timsort implementation?

    - by Drew Wagner
    Does anyone know of a clean implementation of timsort? The Python sources contain a description and code for the original timsort, but it is understandably full of python-specific calls. I have a smoothly varying 2D array of double floats that I would like to sort as quickly as possible. It ought to contain a lot of monotonically increasing and decreasing runs. I'd like to try timsorting the rows individually, and then merging the sorted rows. If you know of a better sort technique, I'm open to suggestions. Thanks!

    Read the article

  • Nested/Child TransactionScope Rollback

    - by Robert Wagner
    I am trying to nest TransactionScopes (.NET 4.0) as you would nest transactions in SQL Server; however, it looks like they operate differently. I want my child transactions to be able to roll back if they fail, but allow the parent transaction to decide whether to commit/rollback the whole operation. A greatly simplified example of what I am trying to do:

        static void Main(string[] args)
        {
            using (var scope = new TransactionScope()) // Trn A
            {
                // Insert Data A
                DoWork(true);
                DoWork(false);
                // Rollback or Commit
            }
        }

        // This class is a few layers down
        static void DoWork(bool fail)
        {
            using (var scope = new TransactionScope()) // Trn B
            {
                // Update Data A
                if (!fail)
                {
                    scope.Complete();
                }
            }
        }

    I can't use the Suppress or RequiresNew options, as Trn B relies on data inserted by Trn A. If I do use those options, Trn B is blocked by Trn A. Any ideas how I would get it to work, or if it is even possible using the System.Transactions namespace? Thanks
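    A minimal sketch of why the inner scope cannot roll back independently under the default TransactionScopeOption.Required (illustrative only; no database is needed to see the behaviour): an inner scope disposed without Complete() dooms the ambient transaction, and the outer scope then throws TransactionAbortedException when it is disposed.

        using System;
        using System.Transactions;

        class NestedScopeBehaviour
        {
            static void Main()
            {
                try
                {
                    using (var outer = new TransactionScope())   // Trn A: creates the ambient transaction
                    {
                        using (var inner = new TransactionScope(TransactionScopeOption.Required)) // Trn B joins Trn A
                        {
                            // work would happen here; Complete() is never called,
                            // so disposing the inner scope dooms the ambient transaction
                        }

                        outer.Complete(); // votes to commit, but the transaction is already doomed
                    } // disposing the outer scope throws because the transaction aborted
                }
                catch (TransactionAbortedException ex)
                {
                    Console.WriteLine("Everything rolled back: " + ex.Message);
                }
            }
        }

    In other words, with Required the two scopes share one transaction and one outcome, which is why rolling back Trn B alone would need a separate transaction (RequiresNew) or savepoints handled at the SQL level.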

    Read the article

  • Does .NET have a built in IEnumerable for multiple collections?

    - by Bryce Wagner
    I need an easy way to iterate over multiple collections without actually merging them, and I couldn't find anything built into .NET that looks like it does that. It feels like this should be a somewhat common situation. I don't want to reinvent the wheel. Is there anything built in that does something like this: public class MultiCollectionEnumerable<T> : IEnumerable<T> { private MultiCollectionEnumerator<T> enumerator; public MultiCollectionEnumerable(params IEnumerable<T>[] collections) { enumerator = new MultiCollectionEnumerator<T>(collections); } public IEnumerator<T> GetEnumerator() { enumerator.Reset(); return enumerator; } IEnumerator IEnumerable.GetEnumerator() { enumerator.Reset(); return enumerator; } private class MultiCollectionEnumerator<T> : IEnumerator<T> { private IEnumerable<T>[] collections; private int currentIndex; private IEnumerator<T> currentEnumerator; public MultiCollectionEnumerator(IEnumerable<T>[] collections) { this.collections = collections; this.currentIndex = -1; } public T Current { get { if (currentEnumerator != null) return currentEnumerator.Current; else return default(T); } } public void Dispose() { if (currentEnumerator != null) currentEnumerator.Dispose(); } object IEnumerator.Current { get { return Current; } } public bool MoveNext() { if (currentIndex >= collections.Length) return false; if (currentIndex < 0) { currentIndex = 0; if (collections.Length > 0) currentEnumerator = collections[0].GetEnumerator(); else return false; } while (!currentEnumerator.MoveNext()) { currentEnumerator.Dispose(); currentEnumerator = null; currentIndex++; if (currentIndex >= collections.Length) return false; currentEnumerator = collections[currentIndex].GetEnumerator(); } return true; } public void Reset() { if (currentEnumerator != null) { currentEnumerator.Dispose(); currentEnumerator = null; } this.currentIndex = -1; } } }
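    Worth noting alongside the hand-rolled enumerator above: LINQ's Concat and SelectMany (built in since .NET 3.5) already chain multiple sequences lazily without merging them into a new collection. A minimal sketch:

        using System;
        using System.Collections.Generic;
        using System.Linq;

        class MultiCollectionIteration
        {
            static void Main()
            {
                var first = new List<int> { 1, 2, 3 };
                var second = new[] { 4, 5 };
                var third = new Queue<int>(new[] { 6 });

                // Concat lazily chains two sequences; nothing is copied up front
                foreach (var item in first.Concat(second).Concat(third))
                    Console.WriteLine(item);

                // SelectMany flattens an arbitrary number of sequences, again lazily
                var collections = new IEnumerable<int>[] { first, second, third };
                foreach (var item in collections.SelectMany(c => c))
                    Console.WriteLine(item);
            }
        }

    Unlike the custom class above, each call to GetEnumerator() here yields a fresh enumerator, which also avoids the pitfall of handing out and resetting a single shared enumerator instance.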

    Read the article

  • fastest way to sort the entries of a "smooth" 2D array

    - by Drew Wagner
    What is the fastest way to sort the values in a smooth 2D array? The input is a small filtered image: about 60 by 80 pixels, single channel, single- or double-precision float, row-major storage (sequential in memory), values of mixed sign, piecewise "smooth" with regions on the order of 10 pixels wide. The output is a flat (about 4800-value) array of the sorted values, along with the indices that sort the original array.
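    On the "sorted values plus the indices that sort them" part, the two-array overload of Array.Sort pairs keys with their original positions directly. A small C# sketch of the argsort idea (the question does not name a language, so this is purely illustrative):

        using System;

        class ArgSortSketch
        {
            static void Main()
            {
                // stand-in for the flattened 60x80 image
                double[] values = { 0.3, -1.2, 2.5, 0.0 };

                // indices 0..n-1, reordered alongside the values
                int[] indices = new int[values.Length];
                for (int i = 0; i < indices.Length; i++)
                    indices[i] = i;

                // sort a copy of the values and carry the original indices with them
                double[] sorted = (double[])values.Clone();
                Array.Sort(sorted, indices); // keys = values, items = their original positions

                for (int i = 0; i < sorted.Length; i++)
                    Console.WriteLine(sorted[i] + " came from index " + indices[i]);
            }
        }

    Array.Sort is an introspective sort and does not exploit pre-sorted runs, so for the run-heavy data described here a run-merging sort (as in the related timsort question above) may still win on speed.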

    Read the article

  • A PHP script to stream internet radio?

    - by Honus Wagner
    I've been searching and searching and I haven't yet come up with a solution to host my own streaming audio player. I'm looking for a way to host an internet radio player that connects to whatever streams I enter and plays them. I'm not looking to play my MP3s or anything like that. I'm looking to play content from 181.fm or 1Club.fm, for example. I'd even settle for ShoutCast-only streams. I've been to www.wavestreaming.com but it didn't work for me. I'm guessing it's because in the very first box, where you enter your website URL, it leads in for you with http://www. and then you fill in the rest. My site is https:// and does not contain a www. in the URL. I'm guessing that has something to do with it. Any links, suggestions for search topics, or even a brief technical overview of what I should be looking into would be greatly appreciated. Thanks for your time.

    Read the article

  • PHP: optimum configuration storage?

    - by Jerome WAGNER
    Hello, My application gets configured via a lot of key/value pairs (let's say 30,000, for instance). I want to find the best deployment method for these configurations, knowing that I want to avoid DEFINEs to allow for runtime re-configuration. I have thought of: pre-compiling them into an array via a PHP file, pre-compiling them into a tmpfs SQLite database, or pre-compiling them into a memcached db. What are my options for the best random access time to these configurations (memory is not an issue)? Thanks Jerome

    Read the article

  • Persisting complex data between postbacks in ASP.NET MVC

    - by Robert Wagner
    I'm developing an ASP.NET MVC 2 application that connects to some services to do data retrieval and update. The services require that I provide the original entity along with the updated entity when updating data. This is so it can do change tracking and optimistic concurrency. The services cannot be changed. My problem is that I need to somehow store the original entity between postbacks. In WebForms I would have used ViewState, but from what I have read, that is out for MVC. The original values do not have to be tamper-proof, as the services treat them as untrusted. The entities would be (max) 1k and it is an intranet app. The options I have come up with are: 1) Session (ruled out) - store the entity in the Session, but I don't like this idea as there are no plans to share session between; 2) URL (ruled out) - the data is too big; 3) HiddenField - store the serialized entity in a hidden field, perhaps with encryption/encoding; 4) HiddenVersion - the entities have a (SQL) version field on them, which I could put into a hidden field, then on a save I get the "original" entity from the services and compare the versions, doing my own optimistic concurrency; 5) Cookies - like 3 or 4, but using a cookie instead of a hidden field. I'm leaning towards option 4, although 3 would be simpler. Are these valid options or am I going down the wrong track? Is there a better way of doing this?
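    A rough sketch of how option 3 could look with what ships in .NET 4.0 (JavaScriptSerializer plus the ASP.NET 4.0 MachineKey.Encode/Decode helpers; the type and field names here are placeholders): the serialized original entity is protected, dropped into a hidden field, and recovered on postback.

        using System;
        using System.Text;
        using System.Web.Script.Serialization; // JavaScriptSerializer
        using System.Web.Security;             // MachineKey (ASP.NET 4.0 Encode/Decode)

        public static class OriginalEntityCarrier
        {
            // Serialize and protect the original entity so it can sit in a hidden field
            public static string Pack(object original)
            {
                string json = new JavaScriptSerializer().Serialize(original);
                return MachineKey.Encode(Encoding.UTF8.GetBytes(json), MachineKeyProtection.All);
            }

            // Recover the original entity on postback
            public static T Unpack<T>(string hiddenFieldValue)
            {
                byte[] bytes = MachineKey.Decode(hiddenFieldValue, MachineKeyProtection.All);
                return new JavaScriptSerializer().Deserialize<T>(Encoding.UTF8.GetString(bytes));
            }
        }

    Since the post notes the values need not be tamper-proof, plain Base64 would also do; option 4 stays simpler still, at the cost of one extra service round-trip to re-fetch the original on save.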

    Read the article

  • WebOrb - Serializing an object as a string

    - by Robert Wagner
    We have an Adobe Flex client talking to a .NET server using WebORB. Simplifying things, on the .NET side we have a struct that wraps a ulong like this:

        public struct MyStruct
        {
            private ulong _val;

            public override string ToString()
            {
                return _val.ToString("x16");
            }

            // Parse method
        }

    I want the Flex client to treat this as a string, so that for the following server method:

        public void DoStuff(int i, MyStruct b);

    it can call it as DoStuff(1, "1234567890ABCDEF"). I've tried playing with custom WebORB serializers, but the documentation is a bit scarce. Is this possible? If so, how?

    Read the article

  • Application doesn't exit with 0 threads

    - by Bryce Wagner
    We have a WinForms desktop application which is heavily multithreaded: three threads run with Application.Run, plus a bunch of other background worker threads. Getting all the threads to shut down properly was kind of tricky, but I thought I finally got it right. But when we actually deployed the application, users started experiencing the application not exiting. There's a System.Threading.Mutex to prevent them from running the app multiple times, so they have to go into Task Manager and kill the old instance before they can run it again. Every thread gets a Thread.Join before the main thread exits, and I added logging to each thread I spawn. According to the log, every single thread that starts also exits, and the main thread also exits. Even stranger, running Sysinternals Process Explorer shows all the threads disappear when the application exits. As in, there are 0 threads (managed or unmanaged), but the process is still running. I can't reproduce this on any developers' computers or in our test environment, and so far I've only seen it happen on Windows XP (not Vista or Windows 7 or any Windows Server). How can a process keep running with 0 threads?
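    For chasing this kind of thing down, a small diagnostic that can be logged right before the main thread returns (a sketch, not taken from the original report): dump the OS-level view of the process's threads, which includes native threads the CLR never surfaces, so a lingering unmanaged thread or a thread stuck in a native call still shows up.

        using System;
        using System.Diagnostics;

        static class ShutdownDiagnostics
        {
            // Call this as the last thing before the main thread exits
            public static void DumpNativeThreads()
            {
                Process process = Process.GetCurrentProcess();

                // ProcessThread is the OS-level view, not the managed Thread view
                foreach (ProcessThread thread in process.Threads)
                {
                    Console.WriteLine("Native thread {0}: state = {1}",
                                      thread.Id, thread.ThreadState);
                }
            }
        }

    If this list is also empty at that point, the culprit is unlikely to be a managed thread that was missed, which at least narrows where to look.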

    Read the article

  • ShoutCast over SSL

    - by Honus Wagner
    So I've gone ahead and set up my ShoutCast DNAS server and set up the DSP in Winamp on my host computer. The server listens on port 8000, so per some instructions I installed an output plugin for Winamp (ShoutCast DSP) and used port 8000 and the password to connect. The server accepts the connection. Now, what the heck do I do? My host computer is SSL secured and the DNAS server is installed within the secure web directory (if that matters). My desired end result is that I want to listen to my ShoutCast setup at home (host computer) from any computer. I try browsing to my IP address and port 8000 (without using HTTPS) and it comes back with nothing. If I browse with HTTPS://my.server.com:8000, I get "Error code: ssl_error_rx_record_too_long". Have I completely missed something, or am I just a total moron? Thanks.

    Read the article

  • Setting the Identity/Principal from a MessageInspector in WCF

    - by Robert Wagner
    I am developing a WCF service that receives the user's credentials in the SOAP header. These credentials are read on the server side using a MessageInspector. So far so good. I want to set the Thread.CurrentPrincipal to a custom principal (CustomPrincipal), but when I do this from the MessageInspector, it gets overridden by the time the service is invoked. When is the best time to set the principal? Also what is the best way to pass the principal, identity or credentials from the inspector to that location?
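    One route that tends to come up for this (sketched here with placeholder names, and assuming principalPermissionMode="Custom" on the serviceAuthorization behavior) is a custom IAuthorizationPolicy: WCF evaluates it after the message inspectors run and copies the "Principal" property onto Thread.CurrentPrincipal just before the operation is invoked, so the value is not overwritten the way it is when set directly from the inspector.

        using System;
        using System.IdentityModel.Claims;
        using System.IdentityModel.Policy;
        using System.Security.Principal;

        public class CustomPrincipalPolicy : IAuthorizationPolicy
        {
            private readonly string _id = Guid.NewGuid().ToString();

            public string Id { get { return _id; } }
            public ClaimSet Issuer { get { return ClaimSet.System; } }

            public bool Evaluate(EvaluationContext evaluationContext, ref object state)
            {
                // Placeholder: in the real service, the credentials read by the
                // message inspector would be looked up here (for example, stashed
                // in OperationContext.Current.IncomingMessageProperties).
                IPrincipal principal =
                    new GenericPrincipal(new GenericIdentity("someUser"), new[] { "Users" });

                evaluationContext.Properties["Principal"] = principal;
                return true;
            }
        }

    The policy is registered under <serviceAuthorization principalPermissionMode="Custom"> / <authorizationPolicies> in config; passing data from the inspector to the policy usually goes through the OperationContext or message properties rather than the thread.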

    Read the article

  • Can Gobby/Sobby be used for collaborative editing by a team of developers?

    - by Jerome WAGNER
    Hello, Gobby/Sobby is an open source client/server for collaborative editing of plain text files (source code). My question is four-fold: Can you share any real-life usage of Gobby/Sobby for development among a group of physically separated developers? Is the project mature enough as a productivity tool? What are the working use cases? Which versions should be used? (It seems the 'undo' feature is not yet officially packaged.) Thanks Jerome

    Read the article

  • How to determine OS Platform with WMI?

    - by cary.wagner
    I am trying to figure out if there is a location in WMI that will return the OS architecture (i.e. 32-bit or 64-bit) and that will work across "all" versions of Windows. I thought I had figured it out looking at my Win2k8 system when I found Win32_OperatingSystem / OSArchitecture. I was wrong. It doesn't appear that this field exists on Win2k3 systems. Argh! So, is anyone aware of another field in WMI that "is" the same across server versions? If not, what about a registry key that is the same? I am using a tool that only allows me to configure simple field queries, so I cannot use a complex script. Any help would be greatly appreciated! Cheers... Cary
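    One field that is commonly pointed at for this, and that is documented back to Windows XP/Server 2003, is Win32_Processor.AddressWidth: it reports 32 on a 32-bit OS and 64 on a 64-bit OS (unlike DataWidth, which describes the hardware). If the tool can take a simple WQL field query, "SELECT AddressWidth FROM Win32_Processor" may be all that is needed; a C# sketch for checking it by hand:

        using System;
        using System.Management; // reference System.Management.dll

        class OsBitnessViaWmi
        {
            static void Main()
            {
                var searcher = new ManagementObjectSearcher(
                    "SELECT AddressWidth FROM Win32_Processor");

                foreach (ManagementObject cpu in searcher.Get())
                {
                    // 32 on a 32-bit OS, 64 on a 64-bit OS
                    Console.WriteLine("AddressWidth = {0}", cpu["AddressWidth"]);
                    break; // one processor entry is enough for the bitness question
                }
            }
        }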

    Read the article

  • YAQ: Yet Another Question

    - by Jerome WAGNER
    When you have to code something, you can either: 1) start from scratch (the Yet Another approach), 2) fork another existing project, or 3) participate in an existing project to add the features you miss. What do you think are the key aspects that make you choose option 1, 2 or 3?

    Read the article

  • How do I correctly decode unicode parameters passed to a servlet

    - by Grant Wagner
    Suppose I have: <a href="http://www.yahoo.com/" target="_yahoo" title="Yahoo!&#8482;" onclick="return gateway(this);">Yahoo!</a> <script type="text/javascript"> function gateway(lnk) { window.open(SERVLET + '?external_link=' + encodeURIComponent(lnk.href) + '&external_target=' + encodeURIComponent(lnk.target) + '&external_title=' + encodeURIComponent(lnk.title)); return false; } </script> I have confirmed external_title gets encoded as Yahoo!%E2%84%A2 and passed to SERVLET. If in SERVLET I do: Writer writer = response.getWriter(); writer.write(request.getParameter("external_title")); I get Yahoo!â„¢ in the browser. If I manually switch the browser character encoding to UTF-8, it changes to Yahoo!TM (which is what I want). So I figured the encoding I was sending to the browser was wrong (it was Content-type: text/html; charset=ISO-8859-1). I changed SERVLET to: response.setContentType("text/html; charset=utf-8"); Writer writer = response.getWriter(); writer.write(request.getParameter("external_title")); Now the browser character encoding is UTF-8, but it outputs Yahoo!â?¢ and I can't get the browser to render the correct character at all. My question is: is there some combination of Content-type and/or new String(request.getParameter("external_title").getBytes(), "UTF-8"); and/or something else that will result in Yahoo!TM appearing in the SERVLET output?

    Read the article

  • SQL Server 2008 - Keyword search using table Join

    - by Aaron Wagner
    Ok, I created a Stored Procedure that, among other things, is searching 5 columns for a particular keyword. To accomplish this, I have the keywords parameter being split out by a function and returned as a table. Then I do a Left Join on that table, using a LIKE constraint. So, I had this working beautifully, and then all of the sudden it stops working. Now it is returning every row, instead of just the rows it needs. The other caveat, is that if the keyword parameter is empty, it should ignore it. Given what's below, is there A) a glaring mistake, or B) a more efficient way to approach this? Here is what I have currently: ALTER PROCEDURE [dbo].[usp_getOppsPaged] @startRowIndex int, @maximumRows int, @city varchar(100) = NULL, @state char(2) = NULL, @zip varchar(10) = NULL, @classification varchar(15) = NULL, @startDateMin date = NULL, @startDateMax date = NULL, @endDateMin date = NULL, @endDateMax date = NULL, @keywords varchar(400) = NULL AS BEGIN SET NOCOUNT ON; ;WITH Results_CTE AS ( SELECT opportunities.*, organizations.*, departments.dept_name, departments.dept_address, departments.dept_building_name, departments.dept_suite_num, departments.dept_city, departments.dept_state, departments.dept_zip, departments.dept_international_address, departments.dept_phone, departments.dept_website, departments.dept_gen_list, ROW_NUMBER() OVER (ORDER BY opp_id) AS RowNum FROM opportunities JOIN departments ON opportunities.dept_id = departments.dept_id JOIN organizations ON departments.org_id=organizations.org_id LEFT JOIN Split(',',@keywords) AS kw ON (title LIKE '%'+kw.s+'%' OR [description] LIKE '%'+kw.s+'%' OR tasks LIKE '%'+kw.s+'%' OR requirements LIKE '%'+kw.s+'%' OR comments LIKE '%'+kw.s+'%') WHERE ( (@city IS NOT NULL AND (city LIKE '%'+@city+'%' OR dept_city LIKE '%'+@city+'%' OR org_city LIKE '%'+@city+'%')) OR (@state IS NOT NULL AND ([state] = @state OR dept_state = @state OR org_state = @state)) OR (@zip IS NOT NULL AND (zip = @zip OR dept_zip = @zip OR org_zip = @zip)) OR (@classification IS NOT NULL AND (classification LIKE '%'+@classification+'%')) OR ((@startDateMin IS NOT NULL AND @startDateMax IS NOT NULL) AND ([start_date] BETWEEN @startDateMin AND @startDateMax)) OR ((@endDateMin IS NOT NULL AND @endDateMax IS NOT NULL) AND ([end_date] BETWEEN @endDateMin AND @endDateMax)) OR ( (@city IS NULL AND @state IS NULL AND @zip IS NULL AND @classification IS NULL AND @startDateMin IS NULL AND @startDateMax IS NULL AND @endDateMin IS NULL AND @endDateMin IS NULL) ) ) ) SELECT * FROM Results_CTE WHERE RowNum >= @startRowIndex AND RowNum < @startRowIndex + @maximumRows; END

    Read the article

  • What is the best way to convert a hexadecimal string to a byte array (.NET)?

    - by Robert Wagner
    I have a hexadecimal string that I need to convert to a byte array. The best way (i.e. efficient and least code) is:

        string hexstr = "683A2134";
        byte[] bytes = new byte[hexstr.Length / 2];
        for (int x = 0; x < bytes.Length; x++)
        {
            bytes[x] = Convert.ToByte(hexstr.Substring(x * 2, 2), 16);
        }

    In the case where I have a 32-bit value I can do the following:

        string hexstr = "683A2134";
        byte[] bytes = BitConverter.GetBytes(Convert.ToInt32(hexstr, 16));

    However, what about in the general case? Is there a better built-in function, or a clearer (doesn't have to be faster, but still performant) way of doing this? I would prefer a built-in function, as there seems to be one for everything (well, common things) except this particular conversion.
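    Not a new framework method, but for comparison, the same Convert.ToByte conversion can be phrased with LINQ (a sketch; behaviour is identical to the loop above):

        using System;
        using System.Linq;

        class HexToBytes
        {
            static void Main()
            {
                string hexstr = "683A2134";

                byte[] bytes = Enumerable.Range(0, hexstr.Length / 2)
                                         .Select(i => Convert.ToByte(hexstr.Substring(i * 2, 2), 16))
                                         .ToArray();

                Console.WriteLine(BitConverter.ToString(bytes)); // 68-3A-21-34
            }
        }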

    Read the article

  • Autofac: Reference from a SingleInstance'd type to a HttpRequestScoped

    - by Michael Wagner
    I've got an application where a shared object needs a reference to a per-request object. Shared: Engine | Per Req: IExtensions() | Request. If I try to inject the IExtensions directly into the constructor of Engine, even as Lazy(Of IExtension), I get a "No scope matching [Request] is visible from the scope in which the instance was requested." exception when it tries to instantiate each IExtension. How can I create a HttpRequestScoped instance and then inject it into a shared instance? Would it be considered good practice to set it in the Request's factory (and therefore inject Engine into RequestFactory)?

    Read the article

  • How can I find out if a data- attribute is set to an empty value?

    - by Stephan Wagner
    Is there a way to find out if a data- attribute is set to an empty value or if it is not set at all? See this fiddle example (check the console when clicking on the elements): http://jsfiddle.net/StephanWagner/yy8qvwfp/

        <div onclick="console.log($(this).attr('data-test'))">undefined</div>
        <div data-test="" onclick="console.log($(this).attr('data-test'))">empty</div>
        <!-- this one will also return an empty value -->
        <div data-test onclick="console.log($(this).attr('data-test'))">null</div>
        <div data-test="value" onclick="console.log($(this).attr('data-test'))">value</div>

    I'm having the issue with the third example. I need to know if the attribute actually is set to an empty value or if it is not set at all. Is that actually possible? EDIT: The reason I'm asking is that I'm updating content with the attribute's value, so data-test="" should update the content to an empty value, but data-test should do nothing at all.

    Read the article

  • How can I get a default value in some instances but not others?

    - by Connor Wagner
    I am making an iPhone app and want to use an 'if' statement and a boolean to set default values in some instances but not others... is this possible? Are there alternative options if it is not possible? In MainViewController.m I have:

        @interface MainViewController (){
            BOOL moveOver;
        }

        [...]

        - (void)viewDidLoad {
            [super viewDidLoad];
            _label.text = [NSString stringWithFormat:@"%i", computerSpeed];
            }
        }

        [...]

        - (void)flipsideViewControllerDidFinish:(FlipsideViewController *)controller
        {
            [self dismissViewControllerAnimated:YES completion:nil];
            moveOver = true;
        }

    The problem is that it gets redefined when viewDidLoad runs... I need a statement that will not be redefined when viewDidLoad runs. I have something that I feel is much closer to working... In viewDidLoad I have:

        if (playToInt != 10 || computerMoveSpeed != 3) {
            moveOver = TRUE;
        }

    which connects to my created method, gameLoop. It has:

        if (moveOver == false) {
            computerMoveSpeed = 3;
            playToInt = 10;
        }

    I have tried putting the code in gameLoop into viewDidLoad, but it had the same effect. When moveOver was false, computerMoveSpeed and playToInt were both seemingly 0. I have two UITextFields and typed 10 and 3 in them... does this not set them to the default? It seems to set the default to 0 for both; how do I change this? THIS IS A DIFFERENT ISSUE THAN THE THREE BOOLEAN VALUES QUESTION

    Read the article

  • Show dialog after Application.Exit()

    - by Bryce Wagner
    Under certain circumstances, I wish to display an error message to the user if the application didn't shut down, but MessageBox.Show() doesn't actually do anything after calling Application.Exit(). Is there a way to convince it to show a dialog after Application.Exit()?

    Read the article

  • C#/.NET Little Wonders: The Concurrent Collections (1 of 3)

    - by James Michael Hare
    Once again we consider some of the lesser known classes and keywords of C#. In the next few weeks, we will discuss the concurrent collections and how they have changed the face of concurrent programming. This week's post will begin with a general introduction and discuss the ConcurrentStack<T> and ConcurrentQueue<T>. Then in the following post we'll discuss the ConcurrentDictionary<T> and ConcurrentBag<T>. Finally, we shall close on the third post with a discussion of the BlockingCollection<T>. For more of the "Little Wonders" posts, see the index here. A brief history of collections In the beginning was the .NET 1.0 Framework. And out of this framework emerged the System.Collections namespace, and it was good. It contained all the basic things a growing programming language needs like the ArrayList and Hashtable collections. The main problem, of course, with these original collections is that they held items of type object, which means you had to be disciplined enough to use them correctly or you could end up with runtime errors if you got an object of a type you weren't expecting. Then came .NET 2.0 and generics and our world changed forever! With generics the C# language finally got an equivalent of the very powerful C++ templates. As such, the System.Collections.Generic namespace was born and we got type-safe versions of all our favorite collections. The List<T> succeeded the ArrayList and the Dictionary<TKey,TValue> succeeded the Hashtable and so on. The new versions of the library were not only safer because they checked types at compile-time, but in many cases they were more performant as well. So much so that it's Microsoft's recommendation that the System.Collections original collections only be used for backwards compatibility. So we as developers came to know and love the generic collections and took them into our hearts and embraced them. The problem is, thread safety in both the original collections and the generic collections can be problematic, for very different reasons. Now, if you are only doing single-threaded development you may not care – after all, no locking is required. Even if you do have multiple threads, if a collection is "load-once, read-many" you don't need to do anything to protect that container from multi-threaded access, as illustrated below: 1: public static class OrderTypeTranslator 2: { 3: // because this dictionary is loaded once before it is ever accessed, we don't need to synchronize 4: // multi-threaded read access 5: private static readonly Dictionary<string, char> _translator = new Dictionary<string, char> 6: { 7: {"New", 'N'}, 8: {"Update", 'U'}, 9: {"Cancel", 'X'} 10: }; 11: 12: // the only public interface into the dictionary is for reading, so inherently thread-safe 13: public static char? Translate(string orderType) 14: { 15: char charValue; 16: if (_translator.TryGetValue(orderType, out charValue)) 17: { 18: return charValue; 19: } 20: 21: return null; 22: } 23: } Unfortunately, most of our computer science problems cannot get by with just single-threaded applications or with multi-threading in a load-once manner. Looking at today's trends, it's clear to see that computers are not so much getting faster because of faster processor speeds -- we've nearly reached the limits we can push through with today's technologies -- but more because we're adding more cores to the boxes.
With this new hardware paradigm, it is even more important to use multi-threaded applications to take full advantage of parallel processing to achieve higher application speeds. So let's look at how to use collections in a thread-safe manner. Using historical collections in a concurrent fashion The early .NET collections (System.Collections) had a Synchronized() static method that could be used to wrap the early collections to make them completely thread-safe. This paradigm was dropped in the generic collections (System.Collections.Generic) because having a synchronized wrapper resulted in atomic locks for all operations, which could prove overkill in many multithreading situations. Thus the paradigm shifted to having the user of the collection specify their own locking, usually with an external object: 1: public class OrderAggregator 2: { 3: private static readonly Dictionary<string, List<Order>> _orders = new Dictionary<string, List<Order>>(); 4: private static readonly object _orderLock = new object(); 5: 6: public void Add(string accountNumber, Order newOrder) 7: { 8: List<Order> ordersForAccount; 9: 10: // a complex operation like this should all be protected 11: lock (_orderLock) 12: { 13: if (!_orders.TryGetValue(accountNumber, out ordersForAccount)) 14: { 15: _orders.Add(accountNumber, ordersForAccount = new List<Order>()); 16: } 17: 18: ordersForAccount.Add(newOrder); 19: } 20: } 21: } Notice how we're performing several operations on the dictionary under one lock. With the Synchronized() static methods of the early collections, you wouldn't be able to specify this level of locking (a more macro-level). So in the generic collections, it was decided that if a user needed synchronization, they could implement their own locking scheme instead so that they could provide synchronization as needed. The need for better concurrent access to collections Here's the problem: it's relatively easy to write a collection that locks itself down completely for access, but anything more complex than that can be difficult and error-prone to write, let alone to make perform efficiently! For example, what if you have a Dictionary that has frequent reads but infrequent updates? Do you want to lock down the entire Dictionary for every access? This would be overkill and would prevent concurrent reads. In such cases you could use something like a ReaderWriterLockSlim, which allows for multiple readers in a lock, and then once a writer grabs the lock it blocks all further readers until the writer is done (in a nutshell). This is all very complex stuff to consider. Fortunately, this is where the Concurrent Collections come in. The Parallel Computing Platform team at Microsoft went through great pains to determine how to make a set of concurrent collections that would have the best performance characteristics for general case multi-threaded use. Now, as in all things involving threading, you should always make sure you evaluate all your container options based on the particular usage scenario and the degree of parallelism you wish to achieve. This article should not be taken to mean that these collections are always superior to the generic collections. Each fills a particular need for a particular situation. Understanding what each container is optimized for is key to the success of your application, whether it be single-threaded or multi-threaded.
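    For readers who have not met it before, the ReaderWriterLockSlim pattern referred to above looks roughly like this in the read-mostly case (a minimal sketch, not one of the article's own samples):

        using System;
        using System.Collections.Generic;
        using System.Threading;

        // Many readers may hold the lock at once; a writer waits for exclusive access.
        public class ReadMostlyCache
        {
            private readonly Dictionary<string, string> _cache = new Dictionary<string, string>();
            private readonly ReaderWriterLockSlim _lock = new ReaderWriterLockSlim();

            public bool TryGet(string key, out string value)
            {
                _lock.EnterReadLock();            // concurrent readers allowed
                try { return _cache.TryGetValue(key, out value); }
                finally { _lock.ExitReadLock(); }
            }

            public void Set(string key, string value)
            {
                _lock.EnterWriteLock();           // exclusive; blocks readers and other writers
                try { _cache[key] = value; }
                finally { _lock.ExitWriteLock(); }
            }
        }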
General points to consider with the concurrent collections The MSDN points out that the concurrent collections all support the ICollection interface. However, since the collections are already synchronized, the IsSynchronized property always returns false, and SyncRoot always returns null. Thus you should not attempt to use these properties for synchronization purposes. Note that the concurrent collections may also have different operations than the traditional data structures you may be used to. Now you may ask why they did this, but it was done out of necessity to keep operations safe and atomic. For example, in order to do a Pop() on a stack you have to know the stack is non-empty, but between the time you check the stack's IsEmpty property and then do the Pop() another thread may have come in and made the stack empty! This is why some of the traditional operations have been changed to make them safe for concurrent use. In addition, some properties and methods in the concurrent collections achieve concurrency by creating a snapshot of the collection, which means that some operations that were traditionally O(1) may now be O(n) in the concurrent models. I'll try to point these out as we talk about each collection so you can be aware of any potential performance impacts. Finally, all the concurrent containers are safe for enumeration even while being modified, but some of the containers support this in different ways (snapshot vs. dirty iteration). Once again I'll highlight how thread-safe enumeration works for each collection. ConcurrentStack<T>: The thread-safe LIFO container The ConcurrentStack<T> is the thread-safe counterpart to the System.Collections.Generic.Stack<T>, which as you may remember is your standard last-in-first-out container. If you think of algorithms that favor stack usage (for example, depth-first searches of graphs and trees) then you can see how using a thread-safe stack would be of benefit. The ConcurrentStack<T> achieves thread-safe access by using System.Threading.Interlocked operations. This means that the multi-threaded access to the stack requires no traditional locking and is very, very fast! For the most part, the ConcurrentStack<T> behaves like its Stack<T> counterpart with a few differences: Pop() was removed in favor of TryPop(), which returns true if an item existed and was popped and false if empty. PushRange() and TryPopRange() were added, which allow you to push multiple items and pop multiple items atomically. Count takes a snapshot of the stack and then counts the items. This means it is an O(n) operation; if you just want to check for an empty stack, call IsEmpty instead, which is O(1). ToArray() and GetEnumerator() both also take snapshots. This means that iteration over a stack will give you a static view at the time of the call and will not reflect updates. Pushing on a ConcurrentStack<T> works just like you'd expect except for the aforementioned PushRange() method that was added to allow you to push a range of items concurrently. 1: var stack = new ConcurrentStack<string>(); 2: 3: // adding to stack is much the same as before 4: stack.Push("First"); 5: 6: // but you can also push multiple items in one atomic operation (no interleaves) 7: stack.PushRange(new [] { "Second", "Third", "Fourth" }); For looking at the top item of the stack (without removing it) the Peek() method has been removed in favor of a TryPeek().
This is because in order to do a peek the stack must be non-empty, but between the time you check for empty and the time you execute the peek the stack contents may have changed. Thus the TryPeek() was created to be an atomic check for empty, and then peek if not empty: 1: // to look at top item of stack without removing it, can use TryPeek. 2: // Note that there is no Peek(), this is because you need to check for empty first. TryPeek does. 3: string item; 4: if (stack.TryPeek(out item)) 5: { 6: Console.WriteLine("Top item was " + item); 7: } 8: else 9: { 10: Console.WriteLine("Stack was empty."); 11: } Finally, to remove items from the stack, we have the TryPop() for single, and TryPopRange() for multiple items. Just like the TryPeek(), these operations replace Pop() since we need to ensure atomically that the stack is non-empty before we pop from it: 1: // to remove items, use TryPop or TryPopRange to get multiple items atomically (no interleaves) 2: if (stack.TryPop(out item)) 3: { 4: Console.WriteLine("Popped " + item); 5: } 6: 7: // TryPopRange will only pop up to the number of spaces in the array, the actual number popped is returned. 8: var poppedItems = new string[2]; 9: int numPopped = stack.TryPopRange(poppedItems); 10: 11: foreach (var theItem in poppedItems.Take(numPopped)) 12: { 13: Console.WriteLine("Popped " + theItem); 14: } Finally, note that as stated before, GetEnumerator() and ToArray() both get a snapshot of the data at the time of the call. That means if you are enumerating the stack you will get a snapshot of the stack at the time of the call. This is illustrated below: 1: var stack = new ConcurrentStack<string>(); 2: 3: // adding to stack is much the same as before 4: stack.Push("First"); 5: 6: var results = stack.GetEnumerator(); 7: 8: // but you can also push multiple items in one atomic operation (no interleaves) 9: stack.PushRange(new [] { "Second", "Third", "Fourth" }); 10: 11: while(results.MoveNext()) 12: { 13: Console.WriteLine("Stack only has: " + results.Current); 14: } The only item that will be printed out in the above code is "First" because the snapshot was taken before the other items were added. This may sound like an issue, but it's really for safety and is more correct. You don't want to enumerate a stack and have half a view of the stack before an update and half a view of the stack after an update, after all. In addition, note that this is still thread-safe, whereas iterating through a non-concurrent collection while updating it in the old collections would cause an exception. ConcurrentQueue<T>: The thread-safe FIFO container The ConcurrentQueue<T> is the thread-safe counterpart of the System.Collections.Generic.Queue<T> class. The concurrent queue uses an underlying list of small arrays and lock-free System.Threading.Interlocked operations on the head and tail arrays. Once again, this allows us to do thread-safe operations without the need for heavy locks! The ConcurrentQueue<T> (like the ConcurrentStack<T>) has some departures from the non-concurrent counterpart. Most notably: Dequeue() was removed in favor of TryDequeue(), which returns true if an item existed and was dequeued and false if empty. Count does not take a snapshot; it subtracts the head and tail index to get the count. This results overall in an O(1) complexity, which is quite good. It's still recommended, however, that for empty checks you call IsEmpty instead of comparing Count to zero. ToArray() and GetEnumerator() both take snapshots.
This means that iteration over a queue will give you a static view at the time of the call and will not reflect updates. The Enqueue() method on the ConcurrentQueue<T> works much the same as the generic Queue<T>: 1: var queue = new ConcurrentQueue<string>(); 2: 3: // adding to queue is much the same as before 4: queue.Enqueue("First"); 5: queue.Enqueue("Second"); 6: queue.Enqueue("Third"); For front item access, the TryPeek() method must be used to attempt to see the first item of the queue. There is no Peek() method since, as you'll remember, we can only peek on a non-empty queue, so we must have an atomic TryPeek() that checks for empty and then returns the first item if the queue is non-empty. 1: // to look at first item in queue without removing it, can use TryPeek. 2: // Note that there is no Peek(), this is because you need to check for empty first. TryPeek does. 3: string item; 4: if (queue.TryPeek(out item)) 5: { 6: Console.WriteLine("First item was " + item); 7: } 8: else 9: { 10: Console.WriteLine("Queue was empty."); 11: } Then, to remove items you use TryDequeue(). Once again this is for the same reason we have TryPeek() and not Peek(): 1: // to remove items, use TryDequeue. If queue is empty returns false. 2: if (queue.TryDequeue(out item)) 3: { 4: Console.WriteLine("Dequeued first item " + item); 5: } Just like the concurrent stack, the ConcurrentQueue<T> takes a snapshot when you call ToArray() or GetEnumerator(), which means that subsequent updates to the queue will not be seen when you iterate over the results. Thus once again the code below will only show the first item, since the other items were added after the snapshot. 1: var queue = new ConcurrentQueue<string>(); 2: 3: // adding to queue is much the same as before 4: queue.Enqueue("First"); 5: 6: var iterator = queue.GetEnumerator(); 7: 8: queue.Enqueue("Second"); 9: queue.Enqueue("Third"); 10: 11: // only shows First 12: while (iterator.MoveNext()) 13: { 14: Console.WriteLine("Dequeued item " + iterator.Current); 15: } Using collections concurrently You'll notice in the examples above I stuck to using single-threaded examples so as to make them deterministic and the results obvious. Of course, if we used these collections in a truly multi-threaded way the results would be less deterministic, but would still be thread-safe and with no locking on your part required! For example, say you have an order processor that takes an IEnumerable<Order> and handles each order in a multi-threaded fashion, then groups the responses together in a concurrent collection for aggregation. This can be done easily with the TPL's Parallel.ForEach(): 1: public static IEnumerable<OrderResult> ProcessOrders(IEnumerable<Order> orderList) 2: { 3: var proxy = new OrderProxy(); 4: var results = new ConcurrentQueue<OrderResult>(); 5: 6: // notice that we can process all these in parallel and put the results 7: // into our concurrent collection without needing any external locking! 8: Parallel.ForEach(orderList, 9: order => 10: { 11: var result = proxy.PlaceOrder(order); 12: 13: results.Enqueue(result); 14: }); 15: 16: return results; 17: } Summary Obviously, if you do not need multi-threaded safety, you don't need to use these collections, but when you do need multi-threaded collections these are just the ticket! The plethora of features (I always think of the movie The Three Amigos when I say plethora) built into these containers and the amazing way they achieve thread-safe access in an efficient manner is wonderful to behold.
Stay tuned next week where we'll continue our discussion with the ConcurrentBag<T> and the ConcurrentDictionary<TKey,TValue>. For some excellent information on the performance of the concurrent collections and how they perform compared to a traditional brute-force locking strategy, see this wonderful whitepaper by the Microsoft Parallel Computing Platform team here.

    Read the article
