
  • C#/.NET Little Wonders: ConcurrentBag and BlockingCollection

    - by James Michael Hare
    In the first week of concurrent collections, we began with a general introduction and discussed the ConcurrentStack<T> and ConcurrentQueue<T>. The last post discussed the ConcurrentDictionary<TKey, TValue>. Finally, this week we close with a discussion of the ConcurrentBag<T> and BlockingCollection<T>. For more of the "Little Wonders" posts, see C#/.NET Little Wonders: A Redux.

    Recap

    As you'll recall from the previous posts, the original collections were object-based containers that accomplished synchronization through a Synchronized member. With the advent of .NET 2.0, the original collections were succeeded by the generic collections, which are fully type-safe but eschew automatic synchronization. With .NET 4.0, a new breed of collections was born in the System.Collections.Concurrent namespace. Of these, the final concurrent collection we will examine is the ConcurrentBag, together with a very useful wrapper class called the BlockingCollection.

    For some excellent information on the performance of the concurrent collections and how they compare to a traditional brute-force locking strategy, see the informative whitepaper by the Microsoft Parallel Computing Platform team (linked here).

    ConcurrentBag<T> – a thread-safe, unordered collection

    Unlike the other concurrent collections, the ConcurrentBag<T> has no non-concurrent counterpart in the .NET collections libraries. Items can be added and removed from a bag just like any other collection, but unlike the other collections, the items are not maintained in any order. This makes the bag handy for those cases when all you care about is that the data be consumed eventually, without regard for order of consumption or even fairness – that is, given the right circumstances, newer items may be consumed before older ones for a period of time.

    So why would you ever want a container that can be unfair? Well, look at it another way: you could use a ConcurrentQueue and get fairness, but it comes at a cost, in that the ordering rules and the synchronization required to maintain that ordering can affect scalability a bit. Thus the bag is great when you want the fastest way to get the next item to process and don't care which item it is or how long it's been waiting.

    The ConcurrentBag works by taking advantage of the new ThreadLocal<T> type (new in System.Threading for .NET 4.0), so that each thread using the bag has a list local to just that thread. This means that adding to or taking from the thread-local list requires very little synchronization. The problem comes in when a thread goes to consume an item but its local list is empty. In that case the bag performs "work-stealing", robbing an item from another thread that has items in its list. This requires a higher level of synchronization, which adds a bit of overhead to the take operation.

    So, as you can imagine, this makes the ConcurrentBag good for situations where each thread both produces and consumes items from the bag, but it would be less than ideal where some threads are dedicated producers and others are dedicated consumers, because the work-stealing synchronization would outweigh the thread-local optimization for a thread taking its own items.
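    To make the "unordered" point concrete, here is a tiny sketch (my addition, not from the original post) that fills a bag and drains it on a single thread:

    ```csharp
    using System;
    using System.Collections.Concurrent;

    class BagOrderDemo
    {
        static void Main()
        {
            var bag = new ConcurrentBag<int>();
            for (int i = 0; i < 5; i++)
            {
                bag.Add(i);
            }

            int item;
            while (bag.TryTake(out item))
            {
                // No particular order is guaranteed by the contract; a single
                // thread happens to drain its local list newest-first, but that
                // is an implementation detail you must not rely on.
                Console.Write(item + " ");
            }
        }
    }
    ```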
    Like the other concurrent collections, there are some curiosities to keep in mind:

    • IsEmpty, Count, ToArray(), and GetEnumerator() lock the collection. Each of these needs to take a snapshot of the whole bag to determine its result, so they tend to be more expensive and cause Add() and Take() operations to block.
    • ToArray() and GetEnumerator() are static snapshots. Because they are based on a snapshot, they will not show updates made after the snapshot was taken.
    • Add() is lightweight. Since it adds to the thread-local list, there is very little overhead on Add().
    • TryTake() is lightweight if there are items in the thread-local list. As long as items are in the thread-local list, TryTake() is very lightweight – much more so than on ConcurrentStack<T> and ConcurrentQueue<T>. However, if the local list is empty, it must steal work from another thread, which is more expensive.

    Remember, a bag is not ideal for all situations. It is mainly suited to situations where a process consumes an item and either decomposes it into more items to be processed, or handles the item partially and places it back to be processed again until some point when it is complete. The main point is that the bag works best when each thread both takes and adds items.

    For example, we could create a totally contrived example where perhaps we want to see the largest power of a number before it crosses a certain threshold. Yes, obviously we could easily do this with a log function, but bear with me while I use this contrived example for simplicity.

    So let's say we have a work function that will take a Tuple out of a bag; this Tuple will contain two ints. The first int is the original number, and the second int is the current power of that number. So we could load our bag with the initial values (let's say we want to know the highest power of each of 2, 3, 5, and 7 under 100):

    ```csharp
    var bag = new ConcurrentBag<Tuple<int, int>>
        {
            Tuple.Create(2, 1),
            Tuple.Create(3, 1),
            Tuple.Create(5, 1),
            Tuple.Create(7, 1)
        };
    ```

    Then we can create a method that, given the bag, will take out an item and try the next power:

    ```csharp
    public static void FindHighestPowerUnder(ConcurrentBag<Tuple<int,int>> bag, int threshold)
    {
        Tuple<int,int> pair;

        // while there are items to take, this will prefer local first, then steal if no local
        while (bag.TryTake(out pair))
        {
            // look at next power
            var result = Math.Pow(pair.Item1, pair.Item2 + 1);

            if (result < threshold)
            {
                // if smaller than threshold, bump power by 1
                bag.Add(Tuple.Create(pair.Item1, pair.Item2 + 1));
            }
            else
            {
                // otherwise, we're done
                Console.WriteLine("Highest power of {0} under {3} is {0}^{1} = {2}.",
                    pair.Item1, pair.Item2, Math.Pow(pair.Item1, pair.Item2), threshold);
            }
        }
    }
    ```

    Now that we have this, we can load this method as an Action into our Tasks and run it:

    ```csharp
    // create array of tasks, start all, wait for all
    var tasks = new[]
        {
            new Task(() => FindHighestPowerUnder(bag, 100)),
            new Task(() => FindHighestPowerUnder(bag, 100)),
        };

    Array.ForEach(tasks, t => t.Start());

    Task.WaitAll(tasks);
    ```

    Totally contrived, I know, but keep in mind the main point! When you have a thread or task that operates on an item and then puts it back for further consumption – or decomposes an item into further sub-items to be processed – you should consider a ConcurrentBag, as the thread-local lists will allow for quick processing.
    However, if you need ordering, or if your processes are dedicated producers or consumers, this collection is not ideal. As with anything, you should performance test, as your mileage will vary depending on your situation!

    BlockingCollection<T> – a producer/consumer pattern collection

    The BlockingCollection<T> can be treated like a collection in its own right, but in reality it adds a producer/consumer paradigm to any collection that implements the interface IProducerConsumerCollection<T>. If you don't specify one at construction time, it will use a ConcurrentQueue<T> as its underlying store. If you don't want to use the ConcurrentQueue, the ConcurrentStack and ConcurrentBag also implement the interface (though ConcurrentDictionary does not). In addition, you are of course free to create your own implementation of the interface.

    So, for those who don't remember the classic producers-and-consumers problem from computer science, the gist of it is that you have one (or more) processes creating items (producers) and one (or more) processes consuming those items (consumers). The crux of the problem is that there is a bin (queue) where the produced items are placed, and typically that bin has a limited size. Thus if a producer creates an item but there is no space to store it, it must wait until an item is consumed. Likewise, if a consumer goes to consume an item and none exists, it must wait until an item is produced.

    The BlockingCollection makes it trivial to implement any standard producer/consumer process by providing that "bin" where the items can be produced into and consumed from, with the appropriate blocking operations. In addition, you can specify whether the bin should have a limited size or be (theoretically) unbounded, and you can specify timeouts on the blocking operations.

    As far as your choice of "bin" goes, for the most part the ConcurrentQueue is the right choice, because it is fairly light and maximizes fairness by ordering items so that they are consumed in the same order they are produced. You can use the concurrent bag or stack, of course, but your ordering would be random-ish in the case of the former and LIFO in the case of the latter.

    So let's look at some of the methods of note in BlockingCollection:

    • BoundedCapacity returns the capacity of the "bin". If the bin is unbounded, the capacity is int.MaxValue.
    • Count returns an internally kept count of items. This makes it O(1), but if you modify the underlying collection directly (not recommended) it is unreliable.
    • CompleteAdding() is used to cut off further adds. This sets IsAddingCompleted and begins to wind down consumers once the bin is empty.
    • IsAddingCompleted is true when producers are "done". Once you are done producing, you should complete the add process to alert consumers.
    • IsCompleted is true when producers are "done" and the "bin" is empty. Once you mark the producers done and all items are removed, this will be true.
    • Add() is a blocking add to the collection. If the bin is full, it will wait until space frees up.
    • Take() is a blocking remove from the collection. If the bin is empty, it will wait until an item is produced or adding is completed.
    • GetConsumingEnumerable() is used to iterate over and consume items. Unlike the standard enumerator, this one consumes the items instead of just iterating them (see the sketch after this list).
    • TryAdd() attempts an add but does not block indefinitely. If adding would block, it returns false instead; you can specify a TimeSpan to wait before giving up.
    • TryTake() attempts a take but does not block indefinitely. Like TryAdd(), if taking would block, it returns false instead; you can specify a TimeSpan to wait.
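    Since GetConsumingEnumerable() is mentioned above but not demonstrated in this post, here is a minimal sketch (my addition) of a consumer loop built on it. The foreach blocks while the bin is empty and ends on its own once CompleteAdding() has been called and the bin drains:

    ```csharp
    public static void ConsumeItemsWithEnumerable(BlockingCollection<int> bin)
    {
        // each iteration removes one item; the loop exits when IsCompleted
        // becomes true, with no manual TryTake/retry logic needed
        foreach (var item in bin.GetConsumingEnumerable())
        {
            Console.WriteLine("Consuming item {0}.", item);
        }
    }
    ```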
    Note the use of CompleteAdding() to signal the BlockingCollection that nothing else should be added. This means that any attempt to TryAdd() or Add() after it is marked completed will throw an InvalidOperationException. In addition, once adding is complete you can still continue to TryTake() and Take() until the bin is empty; after that, Take() will throw an InvalidOperationException and TryTake() will return false.

    So let's create a simple program to try this out. Let's say that you have one process that will be producing items, but a slower consumer process that handles them. This gives us a chance to peek inside what happens when the bin is bounded (by default, the bin is NOT bounded):

    ```csharp
    var bin = new BlockingCollection<int>(5);
    ```

    Now, we create a method to produce items:

    ```csharp
    public static void ProduceItems(BlockingCollection<int> bin, int numToProduce)
    {
        for (int i = 0; i < numToProduce; i++)
        {
            // try for 10 ms to add an item
            while (!bin.TryAdd(i, TimeSpan.FromMilliseconds(10)))
            {
                Console.WriteLine("Bin is full, retrying...");
            }
        }

        // once done producing, call CompleteAdding()
        Console.WriteLine("Adding is completed.");
        bin.CompleteAdding();
    }
    ```

    And one to consume them:

    ```csharp
    public static void ConsumeItems(BlockingCollection<int> bin)
    {
        // This will only be true if CompleteAdding() was called AND the bin is empty.
        while (!bin.IsCompleted)
        {
            int item;

            if (!bin.TryTake(out item, TimeSpan.FromMilliseconds(10)))
            {
                Console.WriteLine("Bin is empty, retrying...");
            }
            else
            {
                Console.WriteLine("Consuming item {0}.", item);
                Thread.Sleep(TimeSpan.FromMilliseconds(20));
            }
        }
    }
    ```

    Then we can fire them off:

    ```csharp
    // create one producer and two consumers
    var tasks = new[]
        {
            new Task(() => ProduceItems(bin, 20)),
            new Task(() => ConsumeItems(bin)),
            new Task(() => ConsumeItems(bin)),
        };

    Array.ForEach(tasks, t => t.Start());

    Task.WaitAll(tasks);
    ```

    Notice that the producer is faster than the consumers, so it should be hitting a full bin often and displaying the message after it times out on TryAdd():

    ```
    Consuming item 0.
    Consuming item 1.
    Bin is full, retrying...
    Bin is full, retrying...
    Consuming item 3.
    Consuming item 2.
    Bin is full, retrying...
    Consuming item 4.
    Consuming item 5.
    Bin is full, retrying...
    Consuming item 6.
    Consuming item 7.
    Bin is full, retrying...
    Consuming item 8.
    Consuming item 9.
    Bin is full, retrying...
    Consuming item 10.
    Consuming item 11.
    Bin is full, retrying...
    Consuming item 12.
    Consuming item 13.
    Bin is full, retrying...
    Bin is full, retrying...
    Consuming item 14.
    Adding is completed.
    Consuming item 15.
    Consuming item 16.
    Consuming item 17.
    Consuming item 19.
    Consuming item 18.
    ```

    Also notice that once CompleteAdding() is called and the bin is empty, the IsCompleted property returns true and the consumers exit.

    Summary

    The ConcurrentBag is an interesting collection that can be used to optimize concurrency scenarios where tasks or threads both produce and consume items. In this way, it will choose to consume its own work if available, and steal from another thread if not.
    However, in situations where you want fair consumption or ordering, or where the producers and consumers are distinct processes, the bag is not optimal.

    The BlockingCollection is a great wrapper around the concurrent queue, stack, and bag that lets you easily add producer and consumer semantics, including waiting when the bin is full or empty.

    That's the end of my dive into the concurrent collections. I'd also strongly recommend, once again, that you read the excellent Microsoft whitepaper that goes into much greater detail on the efficiencies you can gain by using these collections judiciously (here).


  • Plan Caching and Query Memory Part II (Hash Match) – when not to use a stored procedure: the most common performance mistake SQL Server developers make

    - by sqlworkshops
    SQL Server estimates the memory requirement for a query at compile time. When a stored procedure or another plan-caching mechanism like sp_executesql or a prepared statement is used, the memory requirement is estimated based on the first set of execution parameters. This is a common reason for spills over tempdb and hence poor performance. Common memory-allocating queries are those that perform a Sort or a Hash Match operation, such as a Hash Join, Hash Aggregation, or Hash Union. This article covers Hash Match operations with examples. It is recommended to read Plan Caching and Query Memory Part I before this article; it covers an introduction and query memory for Sort. In most cases it is cheaper to pay the compilation cost of dynamic queries than the huge cost of a spill over tempdb, unless the memory requirement for a query does not change significantly based on predicates.

    This article covers underestimation and overestimation of memory for Hash Match operations; Part I covers underestimation and overestimation for Sort. It is important to note that underestimation of memory for Sort and Hash Match operations leads to spills over tempdb and hence negatively impacts performance, while overestimation of memory reduces the memory available to other concurrently executing queries. In addition, with Hash Match operations, overestimation of memory can itself lead to poor performance.

    To read additional articles I wrote, click here. The best way to learn is to practice. To create the below tables and reproduce the behavior, join the mailing list by using this link: www.sqlworkshops.com/ml and I will send you the table creation script. Most of these concepts are also covered in our webcasts: www.sqlworkshops.com/webcasts

    Let's create a CustomersState table that has 99% of customers in NY and the remaining 1% in WA. The Customers table used in Part I of this article is also used here. To observe Hash Warnings, enable 'Hash Warning' in SQL Profiler under Events > Errors and Warnings.

    ```sql
    --Example provided by www.sqlworkshops.com
    drop table CustomersState
    go
    create table CustomersState (CustomerID int primary key, Address char(200), State char(2))
    go
    insert into CustomersState (CustomerID, Address) select CustomerID, 'Address' from Customers
    update CustomersState set State = 'NY' where CustomerID % 100 != 1
    update CustomersState set State = 'WA' where CustomerID % 100 = 1
    go
    update statistics CustomersState with fullscan
    go
    ```

    Let's create a stored procedure that joins Customers with the CustomersState table, with a predicate on State:

    ```sql
    --Example provided by www.sqlworkshops.com
    create proc CustomersByState @State char(2) as
    begin
        declare @CustomerID int
        select @CustomerID = e.CustomerID
            from Customers e
            inner join CustomersState es on (e.CustomerID = es.CustomerID)
            where es.State = @State
            option (maxdop 1)
    end
    go
    ```

    Let's execute the stored procedure first with parameter value 'WA', which will select 1% of the data:

    ```sql
    set statistics time on
    go
    --Example provided by www.sqlworkshops.com
    exec CustomersByState 'WA'
    go
    ```

    The stored procedure took 294 ms to complete. It was granted 6704 KB of memory based on an estimate of 8000 rows. The estimated number of rows, 8000, is similar to the actual number of rows, 8000, so the memory estimation is fine. There was no Hash Warning in SQL Profiler.
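    As an aside (my addition, not part of the original article): on SQL Server 2005 and later you can watch these grants yourself while a query runs, via the sys.dm_exec_query_memory_grants DMV:

    ```sql
    --Observe requested vs. granted workspace memory for currently running queries
    select session_id, requested_memory_kb, granted_memory_kb,
           used_memory_kb, max_used_memory_kb
    from sys.dm_exec_query_memory_grants
    ```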
    Now let's execute the stored procedure with parameter value 'NY', which will select 99% of the data:

    ```sql
    --Example provided by www.sqlworkshops.com
    exec CustomersByState 'NY'
    go
    ```

    The stored procedure took 2922 ms to complete. It was granted 6704 KB based on an estimate of 8000 rows. The estimated number of rows, 8000, is way off from the actual number of rows, 792000, because the estimation is based on the first set of parameter values supplied to the stored procedure – 'WA' in our case. This underestimation leads to a spill over tempdb, resulting in poor performance. There was a Hash Warning (Recursion) event in SQL Profiler.

    Let's recompile the stored procedure and then execute it first with parameter value 'NY'. In a production instance it is not advisable to use sp_recompile; instead one should use DBCC FREEPROCCACHE (plan_handle), because of the locking issues involved with sp_recompile – refer to our webcasts, www.sqlworkshops.com/webcasts, for further details.

    ```sql
    exec sp_recompile CustomersByState
    go
    --Example provided by www.sqlworkshops.com
    exec CustomersByState 'NY'
    go
    ```

    Now the stored procedure took only 1046 ms instead of 2922 ms. It was granted 146752 KB of memory, and the estimated number of rows, 792000, matches the actual number of rows, 792000. The better performance of this execution is due to the better memory estimate, which avoids the spill over tempdb. There was no Hash Warning in SQL Profiler.

    Now let's execute the stored procedure with parameter value 'WA':

    ```sql
    --Example provided by www.sqlworkshops.com
    exec CustomersByState 'WA'
    go
    ```

    The stored procedure took 351 ms to complete, higher than the previous execution time of 294 ms. This execution was granted more memory (146752 KB) than necessary (6704 KB), because the estimate (792000 rows) was based on the first parameter value after the recompile, 'NY', instead of the current parameter value 'WA' (8000 rows). This overestimation hurts the performance of this Hash Match operation, and it might also affect the performance of other concurrently executing queries that need memory, so overestimation is not recommended either. The estimated number of rows, 792000, is much more than the actual number of rows, 8000.

    Intermediate summary: this issue can be avoided by not caching the plan for memory-allocating queries. Another possibility is to use the recompile hint, or the optimize for hint to allocate memory for a predefined data range. Let's recreate the stored procedure with the recompile hint:

    ```sql
    --Example provided by www.sqlworkshops.com
    drop proc CustomersByState
    go
    create proc CustomersByState @State char(2) as
    begin
        declare @CustomerID int
        select @CustomerID = e.CustomerID
            from Customers e
            inner join CustomersState es on (e.CustomerID = es.CustomerID)
            where es.State = @State
            option (maxdop 1, recompile)
    end
    go
    ```

    Let's execute the stored procedure initially with parameter value 'WA' and then with parameter value 'NY':

    ```sql
    --Example provided by www.sqlworkshops.com
    exec CustomersByState 'WA'
    go
    exec CustomersByState 'NY'
    go
    ```

    The stored procedure took 297 ms and 1102 ms, in line with the previous optimal execution times. The execution with parameter value 'WA' has a good estimate like before: the estimated number of rows, 8000, is similar to the actual number of rows, 8000.
    The execution with parameter value 'NY' also has a good estimate and memory grant like before, because the stored procedure was recompiled with the current set of parameter values: the estimated number of rows, 792000, is similar to the actual number of rows, 792000. The compilation time and compilation CPU of 1 ms is not expensive in this case compared to the performance benefit. There was no Hash Warning in SQL Profiler.

    Let's recreate the stored procedure with an optimize for hint of 'NY':

    ```sql
    --Example provided by www.sqlworkshops.com
    drop proc CustomersByState
    go
    create proc CustomersByState @State char(2) as
    begin
        declare @CustomerID int
        select @CustomerID = e.CustomerID
            from Customers e
            inner join CustomersState es on (e.CustomerID = es.CustomerID)
            where es.State = @State
            option (maxdop 1, optimize for (@State = 'NY'))
    end
    go
    ```

    Let's execute the stored procedure initially with parameter value 'WA' and then with parameter value 'NY':

    ```sql
    --Example provided by www.sqlworkshops.com
    exec CustomersByState 'WA'
    go
    exec CustomersByState 'NY'
    go
    ```

    The stored procedure took 353 ms with parameter value 'WA', much slower than the optimal execution time of 294 ms we observed previously. This is because of overestimation of memory: the execution with parameter value 'WA' overestimates the rows because of the optimize for hint value of 'NY', and unlike before, more memory than needed was granted to it. The execution with parameter value 'NY' has the optimal execution time, as before: thanks to the optimize for hint value of 'NY', the estimated number of rows, 792000, is similar to the actual number of rows, 792000, and an optimal amount of memory was granted. There was no Hash Warning in SQL Profiler.

    Summary: A cached plan might lead to underestimation or overestimation of memory, because the memory is estimated based on the first set of execution parameters. It is recommended not to cache the plan if the amount of memory required to execute the stored procedure has a wide range of possibilities. One can mitigate this by using the recompile hint, but that adds compilation overhead; however, in most cases it is better to pay for compilation than to spill a sort over tempdb, which can be very expensive compared to the compilation cost. The other possibility is to use the optimize for hint, but if a query handles more data than the hinted value suggests, it will still spill; on the other side, there is also the possibility of overestimation causing unnecessary memory pressure on other concurrently executing queries, and in the case of Hash Match operations this overestimation might itself lead to poor performance.
    If the values used in the optimize for hint are later archived out of the database, the estimation will be wrong, leading to worse performance; so one has to exercise caution before using the optimize for hint. The recompile hint is better in that case.

    I explain these concepts with detailed examples in my webcasts (www.sqlworkshops.com/webcasts); I recommend you watch them. The best way to learn is to practice. To create the above tables and reproduce the behavior, join the mailing list at www.sqlworkshops.com/ml and I will send you the relevant SQL scripts.

    Register for the upcoming 3-day Level 400 Microsoft SQL Server 2008 and SQL Server 2005 Performance Monitoring & Tuning Hands-on Workshop in London, United Kingdom, during March 15-17, 2011: click here to register / Microsoft UK TechNet. These are hands-on workshops with a maximum of 12 participants, not lectures. For consulting engagements click here.

    Disclaimer and copyright information: This article refers to organizations and products that may be the trademarks or registered trademarks of their various owners. Copyright of this article belongs to R Meyyappan / www.sqlworkshops.com. You may freely use the ideas and concepts discussed in this article with acknowledgement (www.sqlworkshops.com), but you may not claim any of it as your own work. This article is for informational purposes only; you use any of the suggestions given here entirely at your own risk.

    R Meyyappan [email protected] LinkedIn: http://at.linkedin.com/in/rmeyyappan


  • Ruby on Rails vs. Grails vs. Spring Roo vs. Spring App

    - by lizdev
    Hi, I'm planning on writing a simple web application that will be used by lots of users (about as complicated as a simple bookmarking app), and I'm trying to decide which framework/language to use. I'm very experienced with Spring/Hibernate and Java in general, but new to both Grails and RoR (and Spring Roo). The only reason I'm considering RoR is that Java hosting is MUCH more expensive than RoR hosting (which is supported by almost any hosting vendor for $5 per month). Assuming price wasn't an issue, which of the frameworks/languages mentioned above would you recommend to a Java developer (who knows how to configure Spring/Hibernate etc.)? I'm afraid that with RoR I won't be able to easily support many users using the website at the same time. Thanks!


  • EJB3 Remote vs. web services: performance?

    - by HerQuLe
    Hello, I'm planning a webapp where every user would run a client that performs computations on their own computer (because these computations can't all be done on the server: too much load) and then sends the results to the server. I expect a lot of people to be interested in my application, which is why I wonder whether my architecture is sound and whether I'll be able to handle thousands of people. I'm planning to expose remote EJBs through JNDI with a Glassfish server, so that 1000 people could use these EJBs at the same time (I guess there could be 5-50 requests/second) to retrieve the data needed for local computation and then to send back the results. Is it expensive to expose EJBs to many clients? Would it be better to use web services, RMI, or another solution? Would you recommend another architecture for what I'm trying to do?


  • "Online-order" component for Joomla

    - by mIRU
    Hello World :) Can somebody help me find an "online-ordering" component for Joomla with the following features: the ability to create forms with pages, like RSForm; file uploads; support for payment gateways like PayPal, MasterCard, WebMoney, Yandex.Money, Moneybookers, Google Checkout, and more; saving data in the database; and a friendly control panel where the admin can check statistics (income and expenditure, including offline transactions) and report on the data. A good example of what I'm looking for is this component: http://www.nbill.co.uk/ – but it is too expensive and complicated. I'll be happy for some links or advice. Thanks a lot to everyone who tries to help me! And sorry for my bad English :-)


  • Logging IP address for uniqueness without storing the IP address itself for privacy

    - by szabgab
    In a web application, when logging some data, I'd like to make sure I can identify data that came at different times but from the same IP address. On the other hand, for privacy reasons (the data will be released publicly), I'd like to make sure the actual IP cannot be recovered. So I need some one-way mapping of IP addresses to other strings that guarantees a 1-to-1 mapping. If I understand correctly, MD5, SHA-1, or SHA-256 could be a solution, but I wonder if they are too expensive in terms of processing. I'd be interested in any solution, though if there is an implementation in Perl that would be even better.
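    A hedged sketch of one common approach: all three digests are cheap (well under a millisecond per address), but since IPv4 has only about 4.3 billion possible addresses, a bare MD5/SHA digest can be reversed by simply hashing every candidate. Keying the hash with a secret that is never released closes that hole, and collisions are unlikely enough to treat the mapping as 1-to-1 in practice. Digest::SHA ships with modern Perls; the environment-variable name below is my invention:

    ```perl
    use strict;
    use warnings;
    use Digest::SHA qw(hmac_sha256_hex);

    # secret key kept out of the released data set (name is hypothetical)
    my $secret = $ENV{IP_PSEUDONYM_KEY} or die "no key set";

    sub pseudonymize_ip {
        my ($ip) = @_;
        # same IP always maps to the same token; without $secret the
        # mapping cannot be brute-forced from the published data
        return hmac_sha256_hex($ip, $secret);
    }

    print pseudonymize_ip('192.0.2.1'), "\n";
    ```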


  • Batch select with SQLAlchemy

    - by muckabout
    I have a large set of values V, some of which are likely to already exist in a table T. I would like to insert into the table those which are not yet present. So far I have this code:

    ```python
    for value in values:
        s = self.conn.execute(mytable.__table__.select(mytable.value == value)).first()
        if not s:
            to_insert.append(value)
    ```

    I feel like this is running slower than it should. I have a few related questions: Is there a way to construct a select statement such that you provide a list (in this case, 'values') to which SQLAlchemy responds with the records that match that list? Is this code overly expensive in constructing select objects? Is there a way to construct a single select statement and then parameterize it at execution time?
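    On the first question: the usual tool is the in_() column operator, which compiles to a SQL IN clause, so the whole membership check becomes one round-trip. A sketch using the question's own names, assuming the pre-1.0-era select([...]) API and a column actually named value:

    ```python
    from sqlalchemy import select

    col = mytable.__table__.c.value

    # one query: which of our candidate values already exist in the table?
    existing = set(
        row[0]
        for row in self.conn.execute(select([col]).where(col.in_(values)))
    )

    to_insert = [v for v in values if v not in existing]
    ```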


  • Silverlight Navigation Menu

    - by Justin
    I am using the Silverlight Business Application template. The navigation consists of a few HyperlinkButtons in a StackPanel. I would like to create a more robust, multilevel navigation menu with dropdowns and such. Telerik has one for Silverlight (http://demos.telerik.com/silverlight/#Menu/FirstLook), but I don't want to use the Telerik control because it's too expensive and I have had tons of problems with Telerik. I've Googled it, but including "navigation" in my search seems to only return results about the Navigation Framework. Anyway, does anyone have a good example?


  • PLINQ delayed execution

    - by tbischel
    I'm trying to understand how parallelism might work with PLINQ, given deferred execution. Here is a simple example:

    ```csharp
    string[] words = { "believe", "receipt", "relief", "field" };
    bool result = words.AsParallel().Any(w => w.Contains("ei"));
    ```

    With LINQ, I would expect execution to reach the "receipt" value and return true, without evaluating the query for the rest of the values. If we do this in parallel, the evaluation of "relief" may have begun before the result for "receipt" is returned. But once the query knows that "receipt" causes a true result, will the other threads yield immediately? In my case this is important, because the "any" test may be very expensive, and I want to free up the processors for other tasks.
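    One way to answer this empirically (my sketch, not from the question) is to count predicate invocations: work already in flight runs to completion, but the counter shows whether PLINQ keeps handing out new elements after a match has been found.

    ```csharp
    using System;
    using System.Linq;
    using System.Threading;

    class PlinqAnyDemo
    {
        static void Main()
        {
            // larger input so partitioning actually matters; only "receipt"
            // contains "ei"
            string[] words = Enumerable.Repeat("believe", 10000)
                                       .Concat(new[] { "receipt" })
                                       .Concat(Enumerable.Repeat("field", 10000))
                                       .ToArray();
            int evaluated = 0;

            bool result = words.AsParallel().Any(w =>
            {
                Interlocked.Increment(ref evaluated);
                return w.Contains("ei");
            });

            Console.WriteLine("Result: {0}; predicate ran {1} of {2} times.",
                result, evaluated, words.Length);
        }
    }
    ```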


  • Online Perforce Repositories

    - by Oliver Hume
    Is anyone aware of anybody offering hosted Perforce servers? It doesn't have to be free, but preferably not too expensive! My understanding is that Perforce is free to use for personal projects, which mine is. Currently I have a Perforce server set up on the same machine as the code, which doesn't offer much protection in case of computer failure. If there's nothing available, can anyone recommend an alternative solution that is similar to Perforce? I have experience with SVN but can't say I enjoy the experience.


  • How to read time-phased data from Project Server 2007 directly from the Project Server database?

    - by Nikhil Vaghela
    I am working on a custom web part for Project Web Access, for Project Server 2007. So far we have been using only the PSI web services to read and write data from and to the Project Server 2007 databases. But there is a significant performance issue when you retrieve time-phased data through the Statusing web service: it is an expensive call when querying time-phased data for each task. I want to access the time-phased data entered by users for each task by hitting the Project Server database directly. (I do not want the solution suggested at this link: http://blogs.msdn.com/project_programmability/archive/2007/05/24/getting-at-the-task-time-phased-data.aspx, as it reads data from the Reporting database, which only gets entries after the project is published.) I want to get the time-phased data as soon as the user enters it. Any ideas? Thanks.


  • How can I replace this semaphore with a monitor?

    - by Kurru
    Hi. In a previous question of mine, someone mentioned that Semaphores are expensive in C# compared to using a Monitor. So I ask: how can I replace the semaphore in this code with a monitor? I need Function1 to return its value after Function2 (running in a separate thread) has completed. I replaced Semaphore.WaitOne with Monitor.Wait, and Semaphore.Release with Monitor.PulseAll, but the PulseAll was sometimes triggered before the Wait, causing the program to hang. Any idea how to avoid that race condition?

    ```csharp
    Semaphore semaphore = new Semaphore(0, 1);
    byte b;

    public byte Function1()
    {
        // new thread starting in Function2;
        semaphore.WaitOne();
        return b;
    }

    public void Function2()
    {
        // do some thing
        b = 0;
        semaphore.Release();
    }
    ```
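    For what it's worth, here is a minimal sketch of the standard fix: pair the monitor with a boolean flag, and wait in a loop on that flag. If Function2 finishes first, the flag is already true when Function1 acquires the lock, so it never calls Wait and the pulse cannot be lost:

    ```csharp
    private readonly object sync = new object();
    private bool done;
    private byte b;

    public byte Function1()
    {
        lock (sync)
        {
            while (!done)           // guards against lost pulses and spurious wakeups
            {
                Monitor.Wait(sync); // releases the lock while waiting
            }
            return b;
        }
    }

    public void Function2()
    {
        // do some thing
        lock (sync)
        {
            b = 0;
            done = true;
            Monitor.Pulse(sync);    // harmless if no one is waiting yet
        }
    }
    ```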


  • Set username credential for a new channel without creating a new factory

    - by Ramon
    I have a backend service and front-end services that communicate via the trusted subsystem pattern. I want to transfer a username from the frontend to the backend, and do this via username credentials as described here: http://msdn.microsoft.com/en-us/library/ms730288.aspx. This does not work in our scenario, where the front-end builds a backend service channel factory via:

    ```csharp
    channelFactory = new ChannelFactory<IBackEndService>(.....);
    ```

    Creating a new channel is done via the channel factory. I can only set the credentials one time; after that, I get an exception saying the username object is read-only:

    ```csharp
    channelFactory.Credentials.UserName.UserName = "myCoolFrontendUser";
    var channel = channelFactory.CreateChannel();
    ```

    Is there a way to create the channel factory only once (as it is expensive to create) and then specify username credentials when creating each channel?


  • Algorithm for the shortest route through all points

    - by Jeroen
    Hi, suppose I have 10 points and I know the distance between each pair of points. I need to find the shortest possible route passing through all points. I have tried a couple of algorithms (Dijkstra, Floyd-Warshall, ...), and they all give me the shortest path between start and end, but they don't make a route that visits all points. Permutations work fine, but they are too resource-expensive. What algorithms can you advise me to look into for this problem? Or is there a documented way to do this with the above-mentioned algorithms? Thanks, Jeroen
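    What you are describing is the travelling-salesman problem (strictly, a shortest Hamiltonian path). For an exact answer without trying all 10! = 3,628,800 orderings, the Held-Karp dynamic program runs in O(n² · 2ⁿ), which is only on the order of 100,000 operations at n = 10 and stays practical up to roughly 20 points. A minimal sketch, assuming a precomputed distance matrix and a fixed starting point 0:

    ```csharp
    using System;

    static class HeldKarp
    {
        // shortest route that starts at point 0 and visits every point once
        public static double ShortestRoute(double[,] dist)
        {
            int n = dist.GetLength(0);
            int full = 1 << n;

            // dp[mask, last] = cheapest way to visit the points in 'mask',
            // ending at point 'last'
            var dp = new double[full, n];
            for (int mask = 0; mask < full; mask++)
                for (int i = 0; i < n; i++)
                    dp[mask, i] = double.PositiveInfinity;
            dp[1, 0] = 0.0; // only point 0 visited, standing at point 0

            for (int mask = 1; mask < full; mask++)
            {
                for (int last = 0; last < n; last++)
                {
                    if ((mask & (1 << last)) == 0) continue;
                    double cost = dp[mask, last];
                    if (double.IsPositiveInfinity(cost)) continue;

                    // extend the partial route by one unvisited point
                    for (int next = 0; next < n; next++)
                    {
                        if ((mask & (1 << next)) != 0) continue;
                        int nextMask = mask | (1 << next);
                        double cand = cost + dist[last, next];
                        if (cand < dp[nextMask, next])
                            dp[nextMask, next] = cand;
                    }
                }
            }

            // the route may end at any point; take the best
            double best = double.PositiveInfinity;
            for (int last = 0; last < n; last++)
                best = Math.Min(best, dp[full - 1, last]);
            return best;
        }
    }
    ```

    Keeping a parallel "parent" table alongside dp lets you recover the actual route, not just its length.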


  • How to get the data for intra-day candlestick charts for stocks on e.g. Nasdaq

    - by Chris
    Hi, as a learning exercise, I want to create candlestick (stock) charts for stocks using ZedGraph. On Google Finance I can get daily open-high-low-close data, which is perfect for making these graphs, but I want to create intra-day graphs, e.g. open-high-low-close data for an hour (or 5 minutes, or even 1 minute). Is there any way to get that kind of data without having to subscribe to an expensive service? I've heard opentick mentioned in an old SO question, but their site is defunct now. I was thinking of polling Google Finance once a minute to get the latest stock price; then, with an hour's worth of 60 prices, I could roughly calculate the open-high-low-close. But this is a bit of an estimation, and I'm open to other suggestions. Cheers all.
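    On the aggregation step: computing open-high-low-close from polled prices is exact for the samples you have (the estimation error comes only from what happens between polls). A sketch with hypothetical Sample/Bar types:

    ```csharp
    using System;
    using System.Collections.Generic;
    using System.Linq;

    // hypothetical types: one polled quote, and one aggregated bar
    public class Sample { public DateTime Time; public decimal Price; }

    public class Bar
    {
        public DateTime Start;
        public decimal Open, High, Low, Close;
    }

    public static class Ohlc
    {
        // fold one-minute samples into hourly open-high-low-close bars
        public static IEnumerable<Bar> ToHourlyBars(IEnumerable<Sample> samples)
        {
            return samples
                .GroupBy(s => new DateTime(s.Time.Year, s.Time.Month, s.Time.Day,
                                           s.Time.Hour, 0, 0))
                .OrderBy(g => g.Key)
                .Select(g =>
                {
                    var ordered = g.OrderBy(s => s.Time).ToList();
                    return new Bar
                    {
                        Start = g.Key,
                        Open  = ordered.First().Price,  // first price in the hour
                        High  = ordered.Max(s => s.Price),
                        Low   = ordered.Min(s => s.Price),
                        Close = ordered.Last().Price,   // last price in the hour
                    };
                });
        }
    }
    ```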


  • C#: Cached Property: Easier way?

    - by Peterdk
    I have an object with properties that are expensive to compute, so they are only calculated on first access and then cached:

    ```csharp
    private List<Note> notes;

    public List<Note> Notes
    {
        get
        {
            if (this.notes == null)
            {
                this.notes = CalcNotes();
            }
            return this.notes;
        }
    }
    ```

    I wonder, is there a better way to do this? Is it somehow possible to create a cached property, or something like that, in C#?
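    One possibility, if .NET 4 is available: System.Lazy<T> packages exactly this cache-on-first-access pattern, with thread-safety included by default. A sketch (the containing class name is hypothetical):

    ```csharp
    // Lazy<T> is new in .NET 4; its default mode is thread-safe
    // (ExecutionAndPublication), so CalcNotes() runs at most once.
    private readonly Lazy<List<Note>> notes;

    public NotesContainer() // hypothetical name for your containing class
    {
        notes = new Lazy<List<Note>>(CalcNotes);
    }

    public List<Note> Notes
    {
        get { return notes.Value; } // computed on first access, cached thereafter
    }
    ```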


  • Flex: does PopupManager work with mouseover/mouseout events?

    - by Patrick
    Hi, I want to make PopupManager work as a tooltip. I want to create a popup every time I move the mouse over my component, and make it disappear when I move the mouse out. Moreover, I have many components on my canvas, so it must not be too expensive. Also, when I move the mouse off the component but over the popup itself, the popup should not disappear, because I want to be able to click the buttons inside it. I need something similar to the Gmail chat popups. Is this doable with PopupManager? Thanks.


  • Comparison of ASP.Net Reporting Solutions

    - by Brian MacKay
    This week my team spent way too much time trying to do simple things in Reporting Services, and I've decided to start evaluating other options. I know there are some good options out there now that aren't too expensive. I've heard that Telerik, ActiveReports, and a few others are widely used. I was hoping to get some first-hand accounts of reporting tools you've used. Specifically, I definitely want to hear your thoughts about:

    • Pain points and gotchas you ran into.
    • Ease of use for report design. It's a little bizarre to me that Access still seems to hold the throne in this area!
    • What's your favorite tool?
    • Anything I've missed that seems important to you.

    Thanks a lot!


  • JPEG: calculating the maximum file size

    - by Doodle
    I have to say that I don't know much about how file formats work. My question is: say I have a JPEG file that is 200 px by 200 px; how can one calculate the maximum size that file could be, in terms of bytes or megabytes? I think the reasoning that led to the question will help someone answer me. I have a Java applet that uploads images people draw with it to my server. I need to know the maximum size this file could conceivably reach. It is always going to be 200x200. It sounds dumb, but are there colors that take more bytes than others, and if so, what is the most expensive one?
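    For a rough upper bound (my back-of-the-envelope reasoning, not from the question): a 200 x 200 image has 40,000 pixels, and at 24 bits per pixel the uncompressed data is 200 * 200 * 3 = 120,000 bytes, about 117 KB. JPEG is designed to compress below that, and for typical drawings it will come out far smaller; only a pathological image (essentially noise) saved at maximum quality, plus headers and metadata, would come near or slightly above the raw size. Individual colors don't cost more bytes in JPEG; what costs is high-frequency detail, since the format spends its bits on rapid pixel-to-pixel changes.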


  • How to access a descendant object's internal method in C#

    - by Giovanni Galbo
    I'm trying to access a method that is marked internal in a parent class (in its own assembly) from an object that inherits from the same parent. Let me explain what I'm trying to do: I want to create service classes that return IEnumerable<T> with an underlying List<T> to non-service classes (e.g. the UI), and optionally return IEnumerable<T> with an underlying IQueryable<T> to other services. I wrote some sample code to demonstrate what I'm trying to accomplish, shown below. The example is not real life, so please remember that when commenting.

    All services would inherit from something like this (only relevant code shown):

    ```csharp
    public class ServiceBase<T>
    {
        protected readonly ObjectContext _context;
        protected string _setName = String.Empty;

        public ServiceBase(ObjectContext context)
        {
            _context = context;
        }

        public IEnumerable<T> GetAll()
        {
            return GetAll(false);
        }

        // These are not the correct access modifiers.. I want something
        // that is accessible to children classes AND between descendant classes
        internal protected IEnumerable<T> GetAll(bool returnQueryable)
        {
            var query = _context.CreateQuery<T>(GetSetName());

            if (returnQueryable)
            {
                return query;
            }
            else
            {
                return query.ToList();
            }
        }

        private string GetSetName()
        {
            // Some code...
            return _setName;
        }
    }
    ```

    Inherited services would look like this:

    ```csharp
    public class EmployeeService : ServiceBase<Employees>
    {
        public EmployeeService(ObjectContext context) : base(context) { }
    }

    public class DepartmentService : ServiceBase<Departments>
    {
        private readonly EmployeeService _employeeService;

        public DepartmentService(ObjectContext context, EmployeeService employeeService)
            : base(context)
        {
            _employeeService = employeeService;
        }

        public IList<Departments> DoSomethingWithEmployees(string lastName)
        {
            // won't work because a method with this signature is not visible to this class
            var emps = _employeeService.GetAll(true);
            // more code...
        }
    }
    ```

    Because the parent class is reusable, it lives in a different assembly than the child services. With GetAll(bool returnQueryable) marked internal, the child services cannot see each other's GetAll(bool) method, just the public GetAll() method. I know that I can add a new internal GetAll method to each service (or perhaps to an intermediary parent class within the same assembly) so that each child service within the assembly can see the others' method, but that seems unnecessary, since the functionality is already available in the parent class. For example:

    ```csharp
    internal IEnumerable<Employees> GetAll(bool returnIQueryable)
    {
        return base.GetAll(returnIQueryable);
    }
    ```

    Essentially what I want is for services to be able to access other services' methods as IQueryable, so that they can further refine the uncommitted results, while everyone else gets plain old lists. Any ideas?

    EDIT: You know what, I had some fun playing a little code golf with this... but ultimately I wouldn't be able to use this scheme anyway, because I pass interfaces around, not classes. So in my example GetAll(bool returnIQueryable) would not be in the interface, meaning I'd have to do casting, which goes against what I'm trying to accomplish. I'm not sure if I had a brain fart or if I was just too excited trying to get something that I thought was neat to work. Either way, thanks for the responses.
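    For what it's worth, one mechanism that fits this exact situation (a hedged suggestion, not from the question): the InternalsVisibleTo attribute lets the base-class assembly expose its internals to a named friend assembly, so internal protected GetAll(bool) becomes callable between the descendant services without the forwarding method. The assembly name below is a placeholder:

    ```csharp
    // in AssemblyInfo.cs of the assembly that contains ServiceBase<T>;
    // "MyApp.Services" stands in for the child services' assembly name
    using System.Runtime.CompilerServices;

    [assembly: InternalsVisibleTo("MyApp.Services")]
    ```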


  • SSL on Heroku / User Authentication Across Multiple Domains

    - by Euwyn
    I posted a previous question on this, but have a follow-up. I was trying to create a workaround for SSL on the custom domain, which is expensive. I'm willing to live with bumping a user to https://app.heroku.com from http://www.app.com for certain secure pages, and have monkey-patched the SSL requirement to make this happen. However, the issue now is making sure my User is logged in when I do so. As I understand it, cookies aren't cross-domain. Is there a way around this issue?


  • Are there free realtime financial data feeds since the demise of OpenQuant?

    - by Mel Cooper
    Now that the oligopoly of market data providers has successfully killed OpenQuant, is there any alternative to proprietary and expensive subscriptions for realtime market data? Ideally I would like to be able to monitor, tick by tick, securities from the NYSE, NASDAQ, and AMEX (about 6000 symbols). Most vendors put a limit of 500 symbols watchable at the same time; this is unacceptable to me, even if one can imagine rotating among the 500 symbols, i.e. making windows of 5 seconds of effective observation out of each minute for every symbol. Currently I'm doing this with a Java thread pool calling Google Finance, but this is unsatisfactory for several reasons, one being that Google doesn't return the volume traded. Any hint much appreciated. Cheers


  • Split a list using LINQ

    - by Anonym
    I have the following code:

    ```csharp
    var e = someList.GetEnumerator();
    var a = new List<Foo>();
    var b = new List<Foo>();

    while (e.MoveNext())
    {
        if (CheckCondition(e.Current))
        {
            b.Add(e.Current);
            break;
        }
        a.Add(e.Current);
    }

    while (e.MoveNext())
        b.Add(e.Current);
    ```

    This looks ugly. Basically: iterate through a list and add elements to one list until some condition kicks in, then add the rest to another list. Is there a better way, e.g. using LINQ? CheckCondition() is expensive and the lists can be huge, so I'd prefer not to do anything that iterates the lists twice.
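    A sketch of one alternative (mine, not from the question): the standard LINQ operators don't hand back two lists in one pass without awkwardness, but a plain single-pass loop with a flag already meets both constraints, since CheckCondition() runs at most once per element and the list is walked only once:

    ```csharp
    var a = new List<Foo>();
    var b = new List<Foo>();
    bool conditionMet = false;

    foreach (var item in someList)
    {
        // once the condition fires, stop calling the expensive check entirely
        if (!conditionMet && CheckCondition(item))
            conditionMet = true;

        (conditionMet ? b : a).Add(item);
    }
    ```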


  • Where can I find a nice .NET Tab Control for free?

    - by Nazgulled
    I'm building this application in C# using the free Krypton Toolkit, but the Krypton Navigator is a paid product that is rather expensive for me; this application is being developed in my free time and will be available to the public for free. So I'm looking for a free tab control that integrates better with my Krypton application, because the default one doesn't quite fit and looks different depending on the OS version... Any suggestions? P.S.: I know I could owner-draw it, but I'm trying to avoid that kind of work... I'd prefer something already done, if a free one exists.

    EDIT: I just found exactly what I wanted: http://www.angelonline.net/CodeSamples/Lib_3.5.0.zip


  • Digital Signature Device Recommendations (Mini Tablet)

    - by blu
    I'd like to capture signatures from users of an application with a mini-tablet / little POS signature device. I welcome any first-hand experiences about which ones were good and which ones to stay away from. Off the top of my head, I can think of a few features I'd like to see:

    • USB interface
    • Not too expensive (I don't know, 100-200 dollars?)
    • Easy to integrate with a managed .NET application

    Also, I realize most people, myself included, think of digitally signing assemblies with code rather than capturing signatures with a mini-tablet device; if there is a more accurate phrase for this, please let me know. Thanks for any input.

