Search Results

Search found 22067 results on 883 pages for 'point clouds'.


  • C#/.NET Little Wonders: ConcurrentBag and BlockingCollection

    - by James Michael Hare
    In the first week of concurrent collections, we began with a general introduction and discussed the ConcurrentStack<T> and ConcurrentQueue<T>.  The last post discussed the ConcurrentDictionary<T>.  Finally this week, we shall close with a discussion of the ConcurrentBag<T> and BlockingCollection<T>. For more of the "Little Wonders" posts, see C#/.NET Little Wonders: A Redux. Recap As you'll recall from the previous posts, the original collections were object-based containers that accomplished synchronization through a Synchronized member.  With the advent of .NET 2.0, the original collections were succeeded by the generic collections, which are fully type-safe but eschew automatic synchronization.  With .NET 4.0, a new breed of collections was born in the System.Collections.Concurrent namespace.  Of these, the final concurrent collection we will examine is the ConcurrentBag and a very useful wrapper class called the BlockingCollection. For some excellent information on the performance of the concurrent collections and how they perform compared to a traditional brute-force locking strategy, see this informative whitepaper by the Microsoft Parallel Computing Platform team here. ConcurrentBag<T> – Thread-safe unordered collection. Unlike the other concurrent collections, the ConcurrentBag<T> has no non-concurrent counterpart in the .NET collections libraries.  Items can be added to and removed from a bag just like any other collection, but unlike the other collections, the items are not maintained in any order.  This makes the bag handy for those cases when all you care about is that the data be consumed eventually, without regard for order of consumption or even fairness – that is, it's possible new items could be consumed before older items given the right circumstances for a period of time. So why would you ever want a container that can be unfair?  Well, to look at it another way, you can use a ConcurrentQueue and get the fairness, but it comes at a cost in that the ordering rules and synchronization required to maintain that ordering can affect scalability a bit.  Thus sometimes the bag is great when you want the fastest way to get the next item to process, and don't care what item it is or how long it's been waiting. The way that the ConcurrentBag works is to take advantage of the new ThreadLocal<T> type (new in System.Threading for .NET 4.0) so that each thread using the bag has a list local to just that thread.  This means that adding to or removing from a thread-local list requires very little synchronization.  The problem comes in when a thread goes to consume an item but its local list is empty.  In this case the bag performs "work-stealing", where it will rob an item from another thread that has items in its list.  This requires a higher level of synchronization, which adds a bit of overhead to the take operation. So, as you can imagine, this makes the ConcurrentBag good for situations where each thread both produces and consumes items from the bag, but it would be less than ideal in situations where some threads are dedicated producers and the other threads are dedicated consumers, because the work-stealing synchronization would outweigh the thread-local optimization for a thread taking its own items. 
Like the other concurrent collections, there are some curiosities to keep in mind: IsEmpty(), Count, ToArray(), and GetEnumerator() lock collection Each of these needs to take a snapshot of whole bag to determine if empty, thus they tend to be more expensive and cause Add() and Take() operations to block. ToArray() and GetEnumerator() are static snapshots Because it is based on a snapshot, will not show subsequent updates after snapshot. Add() is lightweight Since adding to the thread-local list, there is very little overhead on Add. TryTake() is lightweight if items in thread-local list As long as items are in the thread-local list, TryTake() is very lightweight, much more so than ConcurrentStack() and ConcurrentQueue(), however if the local thread list is empty, it must steal work from another thread, which is more expensive. Remember, a bag is not ideal for all situations, it is mainly ideal for situations where a process consumes an item and either decomposes it into more items to be processed, or handles the item partially and places it back to be processed again until some point when it will complete.  The main point is that the bag works best when each thread both takes and adds items. For example, we could create a totally contrived example where perhaps we want to see the largest power of a number before it crosses a certain threshold.  Yes, obviously we could easily do this with a log function, but bare with me while I use this contrived example for simplicity. So let’s say we have a work function that will take a Tuple out of a bag, this Tuple will contain two ints.  The first int is the original number, and the second int is the last multiple of that number.  So we could load our bag with the initial values (let’s say we want to know the last multiple of each of 2, 3, 5, and 7 under 100. 1: var bag = new ConcurrentBag<Tuple<int, int>> 2: { 3: Tuple.Create(2, 1), 4: Tuple.Create(3, 1), 5: Tuple.Create(5, 1), 6: Tuple.Create(7, 1) 7: }; Then we can create a method that given the bag, will take out an item, apply the multiplier again, 1: public static void FindHighestPowerUnder(ConcurrentBag<Tuple<int,int>> bag, int threshold) 2: { 3: Tuple<int,int> pair; 4:  5: // while there are items to take, this will prefer local first, then steal if no local 6: while (bag.TryTake(out pair)) 7: { 8: // look at next power 9: var result = Math.Pow(pair.Item1, pair.Item2 + 1); 10:  11: if (result < threshold) 12: { 13: // if smaller than threshold bump power by 1 14: bag.Add(Tuple.Create(pair.Item1, pair.Item2 + 1)); 15: } 16: else 17: { 18: // otherwise, we're done 19: Console.WriteLine("Highest power of {0} under {3} is {0}^{1} = {2}.", 20: pair.Item1, pair.Item2, Math.Pow(pair.Item1, pair.Item2), threshold); 21: } 22: } 23: } Now that we have this, we can load up this method as an Action into our Tasks and run it: 1: // create array of tasks, start all, wait for all 2: var tasks = new[] 3: { 4: new Task(() => FindHighestPowerUnder(bag, 100)), 5: new Task(() => FindHighestPowerUnder(bag, 100)), 6: }; 7:  8: Array.ForEach(tasks, t => t.Start()); 9:  10: Task.WaitAll(tasks); Totally contrived, I know, but keep in mind the main point!  When you have a thread or task that operates on an item, and then puts it back for further consumption – or decomposes an item into further sub-items to be processed – you should consider a ConcurrentBag as the thread-local lists will allow for quick processing.  
However, if you need ordering or if your processes are dedicated producers or consumers, this collection is not ideal.  As with anything, you should performance test as your mileage will vary depending on your situation! BlockingCollection<T> – A producers & consumers pattern collection The BlockingCollection<T> can be treated like a collection in its own right, but in reality it adds a producers and consumers paradigm to any collection that implements the interface IProducerConsumerCollection<T>.  If you don’t specify one at the time of construction, it will use a ConcurrentQueue<T> as its underlying store. If you don’t want to use the ConcurrentQueue, the ConcurrentStack and ConcurrentBag also implement the interface (though ConcurrentDictionary does not).  In addition, you are of course free to create your own implementation of the interface. So, for those who don’t remember the producers and consumers classical computer-science problem, the gist of it is that you have one (or more) processes that are creating items (producers) and one (or more) processes that are consuming these items (consumers).  Now, the crux of the problem is that there is a bin (queue) where the produced items are placed, and typically that bin has a limited size.  Thus if a producer creates an item, but there is no space to store it, it must wait until an item is consumed.  Also if a consumer goes to consume an item and none exists, it must wait until an item is produced. The BlockingCollection makes it trivial to implement any standard producers/consumers process set by providing that “bin” where the items can be produced into and consumed from with the appropriate blocking operations.  In addition, you can specify whether the bin should have a limited size or can be (theoretically) unbounded, and you can specify timeouts on the blocking operations. As far as your choice of “bin”, for the most part the ConcurrentQueue is the right choice because it is fairly light and maximizes fairness by ordering items so that they are consumed in the same order they are produced.  You can use the concurrent bag or stack, of course, but your ordering would be random-ish in the case of the former and LIFO in the case of the latter. So let’s look at some of the methods of note in BlockingCollection: BoundedCapacity returns capacity of the “bin” If the bin is unbounded, the capacity is int.MaxValue. Count returns an internally-kept count of items This makes it O(1), but if you modify underlying collection directly (not recommended) it is unreliable. CompleteAdding() is used to cut off further adds. This sets IsAddingCompleted and begins to wind down consumers once empty. IsAddingCompleted is true when producers are “done”. Once you are done producing, should complete the add process to alert consumers. IsCompleted is true when producers are “done” and “bin” is empty. Once you mark the producers done, and all items removed, this will be true. Add() is a blocking add to collection. If bin is full, will wait till space frees up Take() is a blocking remove from collection. If bin is empty, will wait until item is produced or adding is completed. GetConsumingEnumerable() is used to iterate and consume items. Unlike the standard enumerator, this one consumes the items instead of iteration. TryAdd() attempts add but does not block completely If adding would block, returns false instead, can specify TimeSpan to wait before stopping. 
TryTake() attempts to take but does not block completely Like TryAdd(), if taking would block, returns false instead, can specify TimeSpan to wait. Note the use of CompleteAdding() to signal the BlockingCollection that nothing else should be added.  This means that any attempts to TryAdd() or Add() after marked completed will throw an InvalidOperationException.  In addition, once adding is complete you can still continue to TryTake() and Take() until the bin is empty, and then Take() will throw the InvalidOperationException and TryTake() will return false. So let’s create a simple program to try this out.  Let’s say that you have one process that will be producing items, but a slower consumer process that handles them.  This gives us a chance to peek inside what happens when the bin is bounded (by default, the bin is NOT bounded). 1: var bin = new BlockingCollection<int>(5); Now, we create a method to produce items: 1: public static void ProduceItems(BlockingCollection<int> bin, int numToProduce) 2: { 3: for (int i = 0; i < numToProduce; i++) 4: { 5: // try for 10 ms to add an item 6: while (!bin.TryAdd(i, TimeSpan.FromMilliseconds(10))) 7: { 8: Console.WriteLine("Bin is full, retrying..."); 9: } 10: } 11:  12: // once done producing, call CompleteAdding() 13: Console.WriteLine("Adding is completed."); 14: bin.CompleteAdding(); 15: } And one to consume them: 1: public static void ConsumeItems(BlockingCollection<int> bin) 2: { 3: // This will only be true if CompleteAdding() was called AND the bin is empty. 4: while (!bin.IsCompleted) 5: { 6: int item; 7:  8: if (!bin.TryTake(out item, TimeSpan.FromMilliseconds(10))) 9: { 10: Console.WriteLine("Bin is empty, retrying..."); 11: } 12: else 13: { 14: Console.WriteLine("Consuming item {0}.", item); 15: Thread.Sleep(TimeSpan.FromMilliseconds(20)); 16: } 17: } 18: } Then we can fire them off: 1: // create one producer and two consumers 2: var tasks = new[] 3: { 4: new Task(() => ProduceItems(bin, 20)), 5: new Task(() => ConsumeItems(bin)), 6: new Task(() => ConsumeItems(bin)), 7: }; 8:  9: Array.ForEach(tasks, t => t.Start()); 10:  11: Task.WaitAll(tasks); Notice that the producer is faster than the consumer, thus it should be hitting a full bin often and displaying the message after it times out on TryAdd(). 1: Consuming item 0. 2: Consuming item 1. 3: Bin is full, retrying... 4: Bin is full, retrying... 5: Consuming item 3. 6: Consuming item 2. 7: Bin is full, retrying... 8: Consuming item 4. 9: Consuming item 5. 10: Bin is full, retrying... 11: Consuming item 6. 12: Consuming item 7. 13: Bin is full, retrying... 14: Consuming item 8. 15: Consuming item 9. 16: Bin is full, retrying... 17: Consuming item 10. 18: Consuming item 11. 19: Bin is full, retrying... 20: Consuming item 12. 21: Consuming item 13. 22: Bin is full, retrying... 23: Bin is full, retrying... 24: Consuming item 14. 25: Adding is completed. 26: Consuming item 15. 27: Consuming item 16. 28: Consuming item 17. 29: Consuming item 19. 30: Consuming item 18. Also notice that once CompleteAdding() is called and the bin is empty, the IsCompleted property returns true, and the consumers will exit. Summary The ConcurrentBag is an interesting collection that can be used to optimize concurrency scenarios where tasks or threads both produce and consume items.  In this way, it will choose to consume its own work if available, and then steal if not.  
    However, in situations where you want fair consumption or ordering, or in situations where the producers and consumers are distinct processes, the bag is not optimal. The BlockingCollection is a great wrapper around the concurrent queue, stack, and bag that lets you add producer and consumer semantics easily, including waiting when the bin is full or empty. That's the end of my dive into the concurrent collections.  I'd also strongly recommend, once again, that you read this excellent Microsoft white paper that goes into much greater detail on the efficiencies you can gain by using these collections judiciously (here). Technorati Tags: C#,.NET,Concurrent Collections,Little Wonders
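
    The post lists GetConsumingEnumerable() above but never shows it in code. As a rough sketch (not from the original article, reusing the same bounded bin of 5 and 20 produced items), a consumer written with GetConsumingEnumerable() needs no IsCompleted/TryTake loop at all - the foreach simply ends once CompleteAdding() has been called and the bin has drained:

        using System;
        using System.Collections.Concurrent;
        using System.Threading.Tasks;

        class ConsumingEnumerableSketch
        {
            static void Main()
            {
                // Bounded "bin" of capacity 5, backed by the default ConcurrentQueue<T>.
                var bin = new BlockingCollection<int>(5);

                var tasks = new[]
                {
                    new Task(() =>
                    {
                        for (int i = 0; i < 20; i++)
                        {
                            bin.Add(i);       // blocks while the bin is full
                        }
                        bin.CompleteAdding(); // tell consumers no more items are coming
                    }),
                    new Task(() =>
                    {
                        // Ends once adding is complete and the bin is empty -
                        // no IsCompleted / TryTake loop required.
                        foreach (var item in bin.GetConsumingEnumerable())
                        {
                            Console.WriteLine("Consuming item {0}.", item);
                        }
                    })
                };

                Array.ForEach(tasks, t => t.Start());
                Task.WaitAll(tasks);
            }
        }

    This is usually the simplest consumer shape when you don't need per-attempt timeouts like the TryTake example above.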

    Read the article

  • HttpModule Init method is called several times - why?

    - by MartinF
    I was creating an HTTP module and while debugging I noticed something which at first (at least) seemed like weird behaviour. When I set a breakpoint in the Init method of the HTTP module I can see that the Init method is being called several times, even though I have only started up the website for debugging and made one single request (sometimes it is hit only 1 time, other times as many as 10 times). I know that I should expect several instances of the HttpApplication to be running, and that the HTTP modules will be created for each, but when I request a single page it should be handled by a single HttpApplication object and therefore only fire the associated events once. Still, it fires the events several times for each request, which makes no sense - other than that the handler must have been added several times within that HttpApplication - which would mean it is the same HTTP module's Init method being called every time, and not a new HttpApplication being created each time it hits my breakpoint (see my code example at the bottom). What could be going wrong here? Is it because I am debugging and set a breakpoint in the HTTP module? I have noticed that if I start up the website for debugging and quickly step over the breakpoint in the HTTP module, it will only hit the Init method once, and the same goes for the event handler. If I instead let it hang at the breakpoint for a few seconds, the Init method is called several times (it seems to depend on how long I wait before stepping over the breakpoint). Maybe this could be some built-in feature to make sure that the HTTP module is initialised and the HttpApplication can serve requests, but it also seems like something that could have catastrophic consequences. This could seem logical, as it might be trying to finish the request, and since I have set the breakpoint it thinks something has gone wrong and tries to call the Init method again so it can handle the request? But is this what is happening and is everything fine (I am just guessing), or is it a real problem? What I am especially concerned about is that if something makes it hang on the "production/live" server for a few seconds, a lot of event handlers are added through Init and therefore each request to the page suddenly fires the event handler several times. This behaviour could quickly bring any site down. I have looked at the "original" .NET code used for the HTTP modules for forms authentication and the RoleManagerModule etc., but my code isn't any different from what those modules use. My code looks like this: public void Init(HttpApplication app) { if (CommunityAuthenticationIntegration.IsEnabled) { FormsAuthenticationModule formsAuthModule = (FormsAuthenticationModule) app.Modules["FormsAuthentication"]; formsAuthModule.Authenticate += new FormsAuthenticationEventHandler(this.OnAuthenticate); } } Here is an example of how it is done in the RoleManagerModule from the .NET Framework: public void Init(HttpApplication app) { if (Roles.Enabled) { app.PostAuthenticateRequest += new EventHandler(this.OnEnter); app.EndRequest += new EventHandler(this.OnLeave); } } Does anyone know what is going on? (I just hope someone out there can tell me why this is happening and assure me that everything is perfectly fine.) :) UPDATE: I have tried to narrow down the problem and so far I have found that the Init method being called is always on a new object of my HTTP module (contrary to what I thought before). 
    It seems that for the first request (when starting up the site) all of the HttpApplication objects being created, and their modules, are all trying to serve that first request and therefore all hit the event handler that is being added. I can't really figure out why this is happening. If I request another page, all the HttpApplications created (and their modules) will again try to serve the request, causing it to hit the event handler multiple times. But it also seems that if I then jump back to the first page (or another one), only one HttpApplication will take care of the request and everything is as expected - as long as I don't let it hang at a breakpoint. If I let it hang at a breakpoint, it begins to create new HttpApplication objects and starts adding more than one HttpApplication to serve/handle the request (which is already in the process of being served by the HttpApplication that is currently stopped at the breakpoint). I guess, or hope, that it might be some intelligent "behind the scenes" way of helping to distribute and handle load and/or errors. But I have no clue. I hope someone out there can assure me that this is perfectly fine and how it is supposed to be.
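
    As an illustration only (the class below is a placeholder, not the poster's CommunityAuthenticationIntegration code), one way to check what the update describes is to log the hash codes of the module and the HttpApplication inside Init and inside the handler. If every Init hit shows a different module/application pair, the repeated calls would point to ASP.NET pooling several HttpApplication instances (each with its own set of modules) rather than duplicate subscriptions on one application:

        using System;
        using System.Diagnostics;
        using System.Web;

        // Hypothetical diagnostic module (not the poster's code): logs the hash codes of the
        // module instance and the HttpApplication instance so you can see whether each Init
        // call really belongs to a different application/module pair.
        public class InitDiagnosticsModule : IHttpModule
        {
            public void Init(HttpApplication app)
            {
                Trace.WriteLine(string.Format("Init: module {0} on application {1}",
                    GetHashCode(), app.GetHashCode()));

                app.PostAuthenticateRequest += (sender, e) =>
                {
                    var application = (HttpApplication)sender;
                    Trace.WriteLine(string.Format(
                        "PostAuthenticateRequest: module {0}, application {1}, request {2}",
                        GetHashCode(), application.GetHashCode(), application.Context.Request.Path));
                };
            }

            public void Dispose() { }
        }

    In the pooled-application case, each request should still log only one handler invocation per application; genuine double subscription would instead show the same pair of hash codes appearing twice in the Init lines.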

    Read the article

  • MERGE Bug with Filtered Indexes

    - by Paul White
    A MERGE statement can fail, and incorrectly report a unique key violation when: The target table uses a unique filtered index; and No key column of the filtered index is updated; and A column from the filtering condition is updated; and Transient key violations are possible Example Tables Say we have two tables, one that is the target of a MERGE statement, and another that contains updates to be applied to the target.  The target table contains three columns, an integer primary key, a single character alternate key, and a status code column.  A filtered unique index exists on the alternate key, but is only enforced where the status code is ‘a’: CREATE TABLE #Target ( pk integer NOT NULL, ak character(1) NOT NULL, status_code character(1) NOT NULL,   PRIMARY KEY (pk) );   CREATE UNIQUE INDEX uq1 ON #Target (ak) INCLUDE (status_code) WHERE status_code = 'a'; The changes table contains just an integer primary key (to identify the target row to change) and the new status code: CREATE TABLE #Changes ( pk integer NOT NULL, status_code character(1) NOT NULL,   PRIMARY KEY (pk) ); Sample Data The sample data for the example is: INSERT #Target (pk, ak, status_code) VALUES (1, 'A', 'a'), (2, 'B', 'a'), (3, 'C', 'a'), (4, 'A', 'd');   INSERT #Changes (pk, status_code) VALUES (1, 'd'), (4, 'a');          Target                     Changes +-----------------------+    +------------------+ ¦ pk ¦ ak ¦ status_code ¦    ¦ pk ¦ status_code ¦ ¦----+----+-------------¦    ¦----+-------------¦ ¦  1 ¦ A  ¦ a           ¦    ¦  1 ¦ d           ¦ ¦  2 ¦ B  ¦ a           ¦    ¦  4 ¦ a           ¦ ¦  3 ¦ C  ¦ a           ¦    +------------------+ ¦  4 ¦ A  ¦ d           ¦ +-----------------------+ The target table’s alternate key (ak) column is unique, for rows where status_code = ‘a’.  Applying the changes to the target will change row 1 from status ‘a’ to status ‘d’, and row 4 from status ‘d’ to status ‘a’.  The result of applying all the changes will still satisfy the filtered unique index, because the ‘A’ in row 1 will be deleted from the index and the ‘A’ in row 4 will be added. Merge Test One Let’s now execute a MERGE statement to apply the changes: MERGE #Target AS t USING #Changes AS c ON c.pk = t.pk WHEN MATCHED AND c.status_code <> t.status_code THEN UPDATE SET status_code = c.status_code; The MERGE changes the two target rows as expected.  The updated target table now contains: +-----------------------+ ¦ pk ¦ ak ¦ status_code ¦ ¦----+----+-------------¦ ¦  1 ¦ A  ¦ d           ¦ <—changed from ‘a’ ¦  2 ¦ B  ¦ a           ¦ ¦  3 ¦ C  ¦ a           ¦ ¦  4 ¦ A  ¦ a           ¦ <—changed from ‘d’ +-----------------------+ Merge Test Two Now let’s repopulate the changes table to reverse the updates we just performed: TRUNCATE TABLE #Changes;   INSERT #Changes (pk, status_code) VALUES (1, 'a'), (4, 'd'); This will change row 1 back to status ‘a’ and row 4 back to status ‘d’.  
As a reminder, the current state of the tables is:          Target                        Changes +-----------------------+    +------------------+ ¦ pk ¦ ak ¦ status_code ¦    ¦ pk ¦ status_code ¦ ¦----+----+-------------¦    ¦----+-------------¦ ¦  1 ¦ A  ¦ d           ¦    ¦  1 ¦ a           ¦ ¦  2 ¦ B  ¦ a           ¦    ¦  4 ¦ d           ¦ ¦  3 ¦ C  ¦ a           ¦    +------------------+ ¦  4 ¦ A  ¦ a           ¦ +-----------------------+ We execute the same MERGE statement: MERGE #Target AS t USING #Changes AS c ON c.pk = t.pk WHEN MATCHED AND c.status_code <> t.status_code THEN UPDATE SET status_code = c.status_code; However this time we receive the following message: Msg 2601, Level 14, State 1, Line 1 Cannot insert duplicate key row in object 'dbo.#Target' with unique index 'uq1'. The duplicate key value is (A). The statement has been terminated. Applying the changes using UPDATE Let’s now rewrite the MERGE to use UPDATE instead: UPDATE t SET status_code = c.status_code FROM #Target AS t JOIN #Changes AS c ON t.pk = c.pk WHERE c.status_code <> t.status_code; This query succeeds where the MERGE failed.  The two rows are updated as expected: +-----------------------+ ¦ pk ¦ ak ¦ status_code ¦ ¦----+----+-------------¦ ¦  1 ¦ A  ¦ a           ¦ <—changed back to ‘a’ ¦  2 ¦ B  ¦ a           ¦ ¦  3 ¦ C  ¦ a           ¦ ¦  4 ¦ A  ¦ d           ¦ <—changed back to ‘d’ +-----------------------+ What went wrong with the MERGE? In this test, the MERGE query execution happens to apply the changes in the order of the ‘pk’ column. In test one, this was not a problem: row 1 is removed from the unique filtered index by changing status_code from ‘a’ to ‘d’ before row 4 is added.  At no point does the table contain two rows where ak = ‘A’ and status_code = ‘a’. In test two, however, the first change was to change row 1 from status ‘d’ to status ‘a’.  This change means there would be two rows in the filtered unique index where ak = ‘A’ (both row 1 and row 4 meet the index filtering criteria ‘status_code = a’). The storage engine does not allow the query processor to violate a unique key (unless IGNORE_DUP_KEY is ON, but that is a different story, and doesn’t apply to MERGE in any case).  This strict rule applies regardless of the fact that if all changes were applied, there would be no unique key violation (row 4 would eventually be changed from ‘a’ to ‘d’, removing it from the filtered unique index, and resolving the key violation). Why it went wrong The query optimizer usually detects when this sort of temporary uniqueness violation could occur, and builds a plan that avoids the issue.  I wrote about this a couple of years ago in my post Beware Sneaky Reads with Unique Indexes (you can read more about the details on pages 495-497 of Microsoft SQL Server 2008 Internals or in Craig Freedman’s blog post on maintaining unique indexes).  To summarize though, the optimizer introduces Split, Filter, Sort, and Collapse operators into the query plan to: Split each row update into delete followed by an inserts Filter out rows that would not change the index (due to the filter on the index, or a non-updating update) Sort the resulting stream by index key, with deletes before inserts Collapse delete/insert pairs on the same index key back into an update The effect of all this is that only net changes are applied to an index (as one or more insert, update, and/or delete operations).  
In this case, the net effect is a single update of the filtered unique index: changing the row for ak = ‘A’ from pk = 4 to pk = 1.  In case that is less than 100% clear, let’s look at the operation in test two again:          Target                     Changes                   Result +-----------------------+    +------------------+    +-----------------------+ ¦ pk ¦ ak ¦ status_code ¦    ¦ pk ¦ status_code ¦    ¦ pk ¦ ak ¦ status_code ¦ ¦----+----+-------------¦    ¦----+-------------¦    ¦----+----+-------------¦ ¦  1 ¦ A  ¦ d           ¦    ¦  1 ¦ d           ¦    ¦  1 ¦ A  ¦ a           ¦ ¦  2 ¦ B  ¦ a           ¦    ¦  4 ¦ a           ¦    ¦  2 ¦ B  ¦ a           ¦ ¦  3 ¦ C  ¦ a           ¦    +------------------+    ¦  3 ¦ C  ¦ a           ¦ ¦  4 ¦ A  ¦ a           ¦                            ¦  4 ¦ A  ¦ d           ¦ +-----------------------+                            +-----------------------+ From the filtered index’s point of view (filtered for status_code = ‘a’ and shown in nonclustered index key order) the overall effect of the query is:   Before           After +---------+    +---------+ ¦ pk ¦ ak ¦    ¦ pk ¦ ak ¦ ¦----+----¦    ¦----+----¦ ¦  4 ¦ A  ¦    ¦  1 ¦ A  ¦ ¦  2 ¦ B  ¦    ¦  2 ¦ B  ¦ ¦  3 ¦ C  ¦    ¦  3 ¦ C  ¦ +---------+    +---------+ The single net change there is a change of pk from 4 to 1 for the nonclustered index entry ak = ‘A’.  This is the magic performed by the split, sort, and collapse.  Notice in particular how the original changes to the index key (on the ‘ak’ column) have been transformed into an update of a non-key column (pk is included in the nonclustered index).  By not updating any nonclustered index keys, we are guaranteed to avoid transient key violations. The Execution Plans The estimated MERGE execution plan that produces the incorrect key-violation error looks like this (click to enlarge in a new window): The successful UPDATE execution plan is (click to enlarge in a new window): The MERGE execution plan is a narrow (per-row) update.  The single Clustered Index Merge operator maintains both the clustered index and the filtered nonclustered index.  The UPDATE plan is a wide (per-index) update.  The clustered index is maintained first, then the Split, Filter, Sort, Collapse sequence is applied before the nonclustered index is separately maintained. There is always a wide update plan for any query that modifies the database. The narrow form is a performance optimization where the number of rows is expected to be relatively small, and is not available for all operations.  One of the operations that should disallow a narrow plan is maintaining a unique index where intermediate key violations could occur. Workarounds The MERGE can be made to work (producing a wide update plan with split, sort, and collapse) by: Adding all columns referenced in the filtered index’s WHERE clause to the index key (INCLUDE is not sufficient); or Executing the query with trace flag 8790 set e.g. OPTION (QUERYTRACEON 8790). Undocumented trace flag 8790 forces a wide update plan for any data-changing query (remember that a wide update plan is always possible).  Either change will produce a successfully-executing wide update plan for the MERGE that failed previously. Conclusion The optimizer fails to spot the possibility of transient unique key violations with MERGE under the conditions listed at the start of this post.  
    It incorrectly chooses a narrow plan for the MERGE, which cannot provide the protection of a split/sort/collapse sequence for the nonclustered index maintenance. The MERGE plan may fail at execution time depending on the order in which rows are processed, and the distribution of data in the database.  Worse, a previously solid MERGE query may suddenly start to fail unpredictably if a filtered unique index is added to the merge target table at any point. Connect bug filed here Tests performed on SQL Server 2012 SP1 CU1 (build 11.0.3321) x64 Developer Edition © 2012 Paul White – All Rights Reserved Twitter: @SQL_Kiwi Email: [email protected]

    Read the article

  • Using RJS to replace innerHTML with a real live instance variable.

    - by Steve Cotner
    I can't for the life of me get RJS to replace an element's innerHTML with an instance variable's attribute, i.e. something like @thing.name I'll show all the code (simplified from the actual project, but still complete), and I hope the solution will be forehead-slap obvious to someone... In RoR, I've made a simple page displaying a random Chinese character. This is a Word object with attributes chinese and english. Clicking on a link titled "What is this?" reveals the english attribute using RJS. Currently, it also hides the "What is this?" link and reveals a "Try Another?" link that just reloads the page, effectively starting over with a new random character. This is fine, but there are other elements on the page that make their own database queries, so I would like to load a new random character by an AJAX call, leaving the rest of the page alone. This has turned out to be harder than I expected: I have no trouble replacing the html using link_remote_to and page.replace_html, but I can't get it to display anything that includes an instance variable. I have a Word resource and a Page resource, which has a home page, where all this fun takes place. In the PagesController, I've made a couple ways to get random words. Either one works fine... Here's the code: class PagesController < ApplicationController def home all_words = Word.find(:all) @random_word = all_words.rand @random_words = Word.find(:all, :limit => 100, :order => 'rand()') @random_first = @random_words[1] end end As an aside, the SQL call with :limit => 100 is just in case I think of some way to cycle through those random words. Right now it's not useful. Also, the 'rand()' is MySQL specific, as far as I know. In the home page view (it's HAML), I have this: #character_box = render(:partial => "character", :object => @random_word) if @random_word #whatisthis = link_to_remote "? what is this?", :url => { :controller => 'words', :action => 'reveal_character' }, :html => { :title => "Click for the translation." } #tryanother.{:style => "display:none"} = link_to "try another?", root_path Note that the #'s in this case represent divs (with the given ids), not comments, because this is HAML. The "character" partial looks like this (it's erb, for no real reason): <div id="character"> <%= "#{@random_word.chinese}" } %> </div> <div id="revealed" style="display:none"> <ul> <li><span id="english"><%= "#{@random_word.english_name}" %></span></li> </ul> </div> The reveal_character.rjs file looks like this: page[:revealed].visual_effect :slide_down, :duration => '.2' page[:english].visual_effect :highlight, :startcolor => "#ffff00", :endcolor => "#ffffff", :duration => '2.5' page.delay(0.8) do page[:whatisthis].visual_effect :fade, :duration => '.3' page[:tryanother].visual_effect :appear end That all works perfectly fine. But if I try to turn link_to "try another?" into link_to_remote, and point it to an RJS template that replaces the "character" element with something new, it only works when I replace the innerHTML with static text. If I try to pass an instance variable in there, it never works. For instance, let's say I've defined a second random word under Pages#home... I'll add @random_second = @random_words[2] there. Then, in the home page view, I'll replace the "try another?" link (previously pointing to the root_path), with this: = link_to_remote "try another?", :url => { :controller => 'words', :action => 'second_character' }, :html => { :title => "Click for a new character." 
} I'll make that new RJS template, at app/views/words/second_character.rjs, and a simple test like this shows that it's working: page.replace_html("character", "hi") But if I change it to this: page.replace_html("character", "#{@random_second.english}") I get an error saying I fed it a nil object: ActionView::TemplateError (undefined method `english_name' for nil:NilClass) on line #1 of app/views/words/second_character.rjs: 1: page.replace_html("character", "#{@random_second.english}") Of course, actually instantiating @random_second, @random_third and so on ad infinitum would be ridiculous in a real app (I would eventually figure out some better way to keep grabbing a new random record without reloading the page), but the point is that I don't know how to get any instance variable to work here. This is not even approaching my ideal solution of rendering a partial that includes the object I specify, like this: page.replace_html 'character', :partial => 'new_character', :object => @random_second As I can't get an instance variable to work directly, I obviously cannot get it to work via a partial. I have tried various things like: :object => @random_second or :locals => { :random_second => @random_second } I've tried adding these all over the place -- in the link_to_remote options most obviously -- and studying what gets passed in the parameters, but to no avail. It's at this point that I realize I don't know what I'm doing. This is my first question here. I erred on the side of providing all necessary code, rather than being brief. Any help would be greatly appreciated.

    Read the article

  • Mutating the expression tree of a predicate to target another type

    - by Jon
    Intro In the application I 'm currently working on, there are two kinds of each business object: the "ActiveRecord" type, and the "DataContract" type. So for example, we have: namespace ActiveRecord { class Widget { public int Id { get; set; } } } namespace DataContracts { class Widget { public int Id { get; set; } } } The database access layer takes care of "translating" between hierarchies: you can tell it to update a DataContracts.Widget, and it will magically create an ActiveRecord.Widget with the same property values and save that. The problem I have surfaced when attempting to refactor this database access layer. The Problem I want to add methods like the following to the database access layer: // Widget is DataContract.Widget interface DbAccessLayer { IEnumerable<Widget> GetMany(Expression<Func<Widget, bool>> predicate); } The above is a simple general-use "get" method with custom predicate. The only point of interest is that I 'm not passing in an anonymous function but rather an expression tree. This is done because inside DbAccessLayer we have to query ActiveRecord.Widget efficiently (LINQ to SQL) and not have the database return all ActiveRecord.Widget instances and then filter the enumerable collection. We need to pass in an expression tree, so we ask for one as the parameter for GetMany. The snag: the parameter we have needs to be magically transformed from an Expression<Func<DataContract.Widget, bool>> to an Expression<Func<ActiveRecord.Widget, bool>>. This is where I haven't managed to pull it off... Attempted Solution What we 'd like to do inside GetMany is: IEnumerable<DataContract.Widget> GetMany( Expression<Func<DataContract.Widget, bool>> predicate) { var lambda = Expression.Lambda<Func<ActiveRecord.Widget, bool>>( predicate.Body, predicate.Parameters); // use lambda to query ActiveRecord.Widget and return some value } This won't work because in a typical scenario, for example if: predicate == w => w.Id == 0; ...the expression tree contains a MemberAccessExpression instance which has a MemberInfo property (named Member) that point to members of DataContract.Widget. There are also ParameterExpression instances both in the expression tree and in its parameter expression collection (predicate.Parameters); After searching a bit, I found System.Linq.Expressions.ExpressionVisitor (its source can be found here in the context of a how-to, very helpful) which is a convenient way to modify an expression tree. Armed with this, I implemented a visitor. This simple visitor only takes care of changing the types in member access and parameter expressions. It may not be complete, but it's fine for the expression w => w.Id == 0. 
internal class Visitor : ExpressionVisitor { private readonly Func<Type, Type> dataContractToActiveRecordTypeConverter; public Visitor(Func<Type, Type> dataContractToActiveRecordTypeConverter) { this.dataContractToActiveRecordTypeConverter = dataContractToActiveRecordTypeConverter; } protected override Expression VisitMember(MemberExpression node) { var dataContractType = node.Member.ReflectedType; var activeRecordType = this.dataContractToActiveRecordTypeConverter(dataContractType); var converted = Expression.MakeMemberAccess( base.Visit(node.Expression), activeRecordType.GetProperty(node.Member.Name)); return converted; } protected override Expression VisitParameter(ParameterExpression node) { var dataContractType = node.Type; var activeRecordType = this.dataContractToActiveRecordTypeConverter(dataContractType); return Expression.Parameter(activeRecordType, node.Name); } } With this visitor, GetMany becomes: IEnumerable<DataContract.Widget> GetMany( Expression<Func<DataContract.Widget, bool>> predicate) { var visitor = new Visitor(...); var lambda = Expression.Lambda<Func<ActiveRecord.Widget, bool>>( visitor.Visit(predicate.Body), predicate.Parameters.Select(p => visitor.Visit(p)); var widgets = ActiveRecord.Widget.Repository().Where(lambda); // This is just for reference, see below Expression<Func<ActiveRecord.Widget, bool>> referenceLambda = w => w.Id == 0; // Here we 'd convert the widgets to instances of DataContract.Widget and // return them -- this has nothing to do with the question though. } Results The good news is that lambda is constructed just fine. The bad news is that it isn't working; it's blowing up on me when I try to use it (the exception messages are really not helpful at all). I have examined the lambda my code produces and a hardcoded lambda with the same expression; they look exactly the same. I spent hours in the debugger trying to find some difference, but I can't. When predicate is w => w.Id == 0, lambda looks exactly like referenceLambda. But the latter works with e.g. IQueryable<T>.Where, while the former does not (I have tried this in the immediate window of the debugger). I should also mention that when predicate is w => true, it all works just fine. Therefore I am assuming that I 'm not doing enough work in Visitor, but I can't find any more leads to follow on. Can someone point me in the right direction? Thanks in advance for your help!
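
    A hedged observation on the attempted solution (a sketch, not a verified fix for the exact code above): the posted VisitParameter builds a brand-new ParameterExpression on every call, and GetMany then hands predicate.Parameters - which are still typed as DataContract.Widget - to Expression.Lambda, so the parameter referenced inside the rewritten body is never the same object as the lambda's declared parameter. One common shape for this kind of translation is to create each replacement parameter once, cache it, and build the lambda from those same instances. Class and method names here are made up, and it assumes instance properties with matching names across the two hierarchies:

        using System;
        using System.Collections.Generic;
        using System.Linq;
        using System.Linq.Expressions;

        // Hypothetical sketch: maps each original ParameterExpression to ONE replacement
        // instance and reuses it everywhere, so the lambda's body and parameter list
        // refer to the same object.
        internal class TranslatingVisitor : ExpressionVisitor
        {
            private readonly Func<Type, Type> typeMap;
            private readonly Dictionary<ParameterExpression, ParameterExpression> parameters =
                new Dictionary<ParameterExpression, ParameterExpression>();

            public TranslatingVisitor(Func<Type, Type> typeMap)
            {
                this.typeMap = typeMap;
            }

            protected override Expression VisitParameter(ParameterExpression node)
            {
                ParameterExpression replacement;
                if (!parameters.TryGetValue(node, out replacement))
                {
                    replacement = Expression.Parameter(typeMap(node.Type), node.Name);
                    parameters.Add(node, replacement);
                }
                return replacement; // always the same instance for a given original parameter
            }

            protected override Expression VisitMember(MemberExpression node)
            {
                if (node.Expression == null)
                {
                    return base.VisitMember(node); // static member access: leave untouched
                }
                var inner = Visit(node.Expression); // rewrite the parameter (or chain) first
                return Expression.MakeMemberAccess(inner, inner.Type.GetProperty(node.Member.Name));
            }

            public Expression<Func<TTo, bool>> Translate<TFrom, TTo>(Expression<Func<TFrom, bool>> predicate)
            {
                var body = Visit(predicate.Body);
                // Build the lambda from the SAME ParameterExpression instances used in the body.
                var pars = predicate.Parameters.Select(p => (ParameterExpression)Visit(p));
                return Expression.Lambda<Func<TTo, bool>>(body, pars);
            }
        }

    Usage would then look something like new TranslatingVisitor(map).Translate<DataContract.Widget, ActiveRecord.Widget>(predicate), where map converts DataContract types to their ActiveRecord counterparts, instead of passing predicate.Parameters through unchanged.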

    Read the article

  • CodePlex Daily Summary for Sunday, July 08, 2012

    CodePlex Daily Summary for Sunday, July 08, 2012Popular ReleasesBlackJumboDog: Ver5.6.7: 2012.07.08 Ver5.6.7 (1) ????????????????「????? Request.Receve()」?????????? (2) Web???????????FlMML customized: FlMML customized ??: FlMML customized ????。 ??、PCM??????????、??????。ecBlog: ecBlog 0.2: ecBlog alpha realaseTaskScheduler ASP.NET: Release 3 - 1.2.0.0: Release 3 - Version 1.2.0.0 That version was altered only the library: In TaskScheduler was added new properties: UseBackgroundThreads Enables the use of separate threads for each task. StoreThreadsInPool Manager enables to store in the Pool threads that are performing the tasks. OnStopSchedulerAutoCancelThreads Scheduler allows aborting threads when it is stopped. false if the scheduler is not aborted the threads that are running. AutoDeletedExecutedTasks Allows Manager Delete Task afte...DotNetNuke Persian Packages: ??? ?? ???? ????? ???? 6.2.0: *????? ???? ??? ?? ???? 6.2.0 ? ??????? ???? ????? ???? ??? ????? *????? ????? ????? ??? ??? ???? ??? ??????? ??????? - ???? *?????? ???? ??? ?????? ?? ???? ???? ????? ? ?? ??? ?? ???? ???? ?? *????? ????? ????? ????? ????? / ??????? ???? ?? ???? ??? ??? - ???? *???? ???? ???? ????? ? ??????? ??? ??? ??? ?? ???? *????? ????? ???????? ??? ? ??????? ?? ?? ?????? ????? ????????? ????? ?????? - ???? *????? ????? ?????? ????? ?? ???? ?? ?? ?? ???????? ????? ????? ????????? ????? ?????? *???? ?...testtom07052012git02: r1: r1Cypher Bot: Cypher Bot 4.1: Cypher Bot is the most advanced encryption tool on the planet.... and now it actually works. That's right we fixed the bugs! For a full program summary go to the Home Page or visit www.iRareMedia.com So what's new? We've pretty much fixed all the bugs, but here's a run down if you wanna know exactly what's different: Fixed Installation / Setup Error, where an error message would display: "No Internet Connection, Try Again Later" Fixed File Encryption / Decryption error where the file exten...Coding4Fun Kinect Service: Coding4Fun Kinect Service v1.5: Requires Kinect for Windows SDK v1.5 Minor bug fixes + Kinect for Windows SDK v1.5 Aligning version with the Kinect for Windows SDK requiredtedplay: tedplay 1.1: tedplay 1.1 source and Win32 binary is out now. Changes are: SID card support Commodore 64 PSID music format support optimized FIR filter global hotkeys for skipping tracks (Windows only) module properties window (Windows only) mutable noise channel via GUI button (Windows only) disable SID card from the menu (Windows only) bugfixes PSID tunes are played on the C64 clock frequency but in a Commodore plus/4 virtual machine. The purpose is not to have yet another SID player, but t...xUnit.net Contrib: xunitcontrib-resharper 0.6 (RS 7.0, 6.1.1): xunitcontrib release 0.6 (ReSharper runner) This release provides a test runner plugin for Resharper 7.0 (EAP build 82) and 6.1, targetting all versions of xUnit.net. (See the xUnit.net project to download xUnit.net itself.) Copies of the plugin that support previous verions of ReSharper can be downloaded from this release. 
The plan is to support the latest revisions of the last two paid-for major versions of ReSharper (namely 7.0 and 6.1) Also note that all builds work against ALL VERSIONS...Umbraco CMS: Umbraco 4.8.0 Beta: Whats newuComponents in the core Multi-Node Tree Picker, Multiple Textstring, Slider and XPath Lists Easier Lucene searching built in IFile providers for easier file handling Updated 3rd party libraries Applications / Trees moved out of the database SQL Azure support added Various bug fixes Getting Started A great place to start is with our Getting Started Guide: Getting Started Guide: http://umbraco.codeplex.com/Project/Download/FileDownload.aspx?DownloadId=197051 Make sure to...CODE Framework: 4.0.20704.0: See CODE Framework (.NET) Change Log for changes in this version.xUnit.net - Unit testing framework for C# and .NET (a successor to NUnit): xUnit.net 1.9.1: xUnit.net release 1.9.1Build #1600 Important note for Resharper users: Resharper support has been moved to the xUnit.net Contrib project. Important note for TestDriven.net users: If you are having issues running xUnit.net tests in TestDriven.net, especially on 64-bit Windows, we strongly recommend you upgrade to TD.NET version 3.0 or later. Important note for VS2012 users: The VS2012 runner is in the Visual Studio Gallery now, and should be installed via Tools | Extension Manager from insi...MVC Controls Toolkit: Mvc Controls Toolkit 2.2.0: Added Modified all Mv4 related features to conform with the Mvc4 RC Now all items controls accept any IEnumerable<T>(before just List<T> were accepted by most of controls) retrievalManager class that retrieves automatically data from a data source whenever it catchs events triggered by filtering, sorting, and paging controls move method to the updatesManager to move one child objects from a father to another. The move operation can be undone like the insert, update and delete operatio...IronPython: 2.7.3: On behalf of the IronPython team, I'm happy to announce the final release of IronPython 2.7.3. This release includes everything from IronPython 54498, 62475, and 74478 as well. Like all IronPython 2.7-series releases, .NET 4 is required to install it. Installing this release will replace any existing IronPython 2.7-series installation. The incompatibility with IronRuby has been resolved, and they can once again be installed side-by-side. The biggest improvements in IronPython 2.7.3 are: the...Mini SQL Query: Mini SQL Query (v1.0.68.441): Just a bug fix release for when the connections try to refresh after an edit. Make sure you read the Quickstart for an introduction.Microsoft Ajax Minifier: Microsoft Ajax Minifier 4.58: Fix for Issue #18296: provide "ALL" value to the -ignore switch to ignore all error and warning messages. Fix for issue #18293: if encountering EOF before a function declaration or expression is properly closed, throw an appropriate error and don't crash. Adjust the variable-renaming algorithm so it's very specific when renaming variables with the same number of references so a single source file ends up with the same minified names on different platforms. add the ability to specify kno...LogExpert: 1.4 build 4566: This release for the 1.4 version line contains various fixes which have been made some times ago. Until now these fixes were only available in the 1.5 alpha versions. It also contains a fix for: 710. 
Column finder (press F8 to show) Terminal server issues: Multiple sessions with same user should work now Settings Export/Import available via Settings Dialog still incomple (e.g. tab colors are not saved) maybe I change the file format one day no command line support yet (for importin...CommonLibrary.NET: CommonLibrary.NET 0.9.8.5 - Final Release: A collection of very reusable code and components in C# 4.0 ranging from ActiveRecord, Csv, Command Line Parsing, Configuration, Holiday Calendars, Logging, Authentication, and much more. FluentscriptCommonLibrary.NET 0.9.8 contains a scripting language called FluentScript. Releases notes for FluentScript located at http://fluentscript.codeplex.com/wikipage?action=Edit&title=Release%20Notes&referringTitle=Documentation Fluentscript - 0.9.8.5 - Final ReleaseApplication: FluentScript Versio...SharePoint 2010 Metro UI: SharePoint 2010 Metro UI8: Please review the documentation link for how to install. Installation takes some basic knowledge of how to upload and edit SharePoint Artifact files. Please view the discussions tab for ongoing FAQsNew ProjectsAdventures of Adventure Land: Adventures of Adventure Land is a new text based adventure. You will be able to level up, fight challenging enemies, use magical spells, and simply adventure.AdventureWorks Silverlight samples: AdventureWorks Silverlight samplesAFS.Collab.Duplex: my duplexarmmychan: ????????C# - WPF - .Net - MSSQL - Open Source Restaurant POS System: A C# .Net / WPF / MSSQL Restaurant Point of Sale (POS) Software that is PCI-DSS 2.0 Compliant running stable in many restaurants / integrated with MercuryPay.dcview: dcinside.com? ??? ? ?? ?? ??? ???? ???.DotNetNuke Contact Form: simple yet effective DotNetNuke contact formISBNdb.com Helper Library: A .net helper library that encapsulates all the functionality of the API at www.isbndb.com in .NET CLR objects.JPO Class Register: Simple class register.NACHA C# Class with WPF Test App: Do you need to generate a NACHA PPD file? This is great starting point for you! Actually, it's a great, almost finished, point for you! More info below.NotificationsWidget In ASP MVC: SummaryPayPal Manager: I decided to make a basic (for now) desktop client for PayPal to get better at WebRequests in VB.NET. I will be adding much more such as sending payments, etc.Pomodoro Timer Count Down App: Pomodoro count down timer Application Features ------------------------------- Pomodoro Mode Count down timer mode Start stop pause Notification Powershell HTML Highlight: Powershell html syntax highlightingProject F10_P1: F10 p1Razor-sharp your skills: This project will have details about the C# 2.0 C# 3.0 C# 4.0 C# 5.0 Salaria: Bienvenue sur notre projet "Salaria".SharePoint 2010 - Unlock SP Files: Unlock any file in SharePoint or get lock information of a SharePoint file.( Compatible with office 365) ""The file "" is locked for exclusive use by""SharePoint 2010 Metro Masterpage: This project will give you a full metro masterpage for sharepoint 2010SharePoint Cache Refresh Framework: This project's aim is build a small and easy to use framework for SharePoint developers to be able to control cached objects across servers in a SharePoint FarmTFSProjectTest: A test project.uhimania test project: testUpgrade SPSolution: This is a Sharepoint 2010 Management Shell cmdlet, which upgrades a sharepoint solution and installs/activates any new features added in the package.Video Frame Explorer: Video Frame Explorer allows you to make thumbnails (caps, previews) of video files. 
It supports of practically any videos-formats (even MP4, MKV, MOV if you havWML: WML

    Read the article

  • Getting a NullPointerException at seemingly random intervals, not sure why

    - by Miles
    I'm running an example from a Kinect library for Processing (http://www.shiffman.net/2010/11/14/kinect-and-processing/) and sometimes get a NullPointerException pointing to this line: int rawDepth = depth[offset]; The depth array is created in this line: int[] depth = kinect.getRawDepth(); I'm not exactly sure what a NullPointerException is, and much googling hasn't really helped. It seems odd to me that the code compiles 70% of the time and returns the error unpredictably. Could the hardware itself be affecting it? Here's the whole example if it helps: // Daniel Shiffman // Kinect Point Cloud example // http://www.shiffman.net // https://github.com/shiffman/libfreenect/tree/master/wrappers/java/processing import org.openkinect.*; import org.openkinect.processing.*; // Kinect Library object Kinect kinect; float a = 0; // Size of kinect image int w = 640; int h = 480; // We'll use a lookup table so that we don't have to repeat the math over and over float[] depthLookUp = new float[2048]; void setup() { size(800,600,P3D); kinect = new Kinect(this); kinect.start(); kinect.enableDepth(true); // We don't need the grayscale image in this example // so this makes it more efficient kinect.processDepthImage(false); // Lookup table for all possible depth values (0 - 2047) for (int i = 0; i < depthLookUp.length; i++) { depthLookUp[i] = rawDepthToMeters(i); } } void draw() { background(0); fill(255); textMode(SCREEN); text("Kinect FR: " + (int)kinect.getDepthFPS() + "\nProcessing FR: " + (int)frameRate,10,16); // Get the raw depth as array of integers int[] depth = kinect.getRawDepth(); // We're just going to calculate and draw every 4th pixel (equivalent of 160x120) int skip = 4; // Translate and rotate translate(width/2,height/2,-50); rotateY(a); for(int x=0; x<w; x+=skip) { for(int y=0; y<h; y+=skip) { int offset = x+y*w; // Convert kinect data to world xyz coordinate int rawDepth = depth[offset]; PVector v = depthToWorld(x,y,rawDepth); stroke(255); pushMatrix(); // Scale up by 200 float factor = 200; translate(v.x*factor,v.y*factor,factor-v.z*factor); // Draw a point point(0,0); popMatrix(); } } // Rotate a += 0.015f; } // These functions come from: http://graphics.stanford.edu/~mdfisher/Kinect.html float rawDepthToMeters(int depthValue) { if (depthValue < 2047) { return (float)(1.0 / ((double)(depthValue) * -0.0030711016 + 3.3309495161)); } return 0.0f; } PVector depthToWorld(int x, int y, int depthValue) { final double fx_d = 1.0 / 5.9421434211923247e+02; final double fy_d = 1.0 / 5.9104053696870778e+02; final double cx_d = 3.3930780975300314e+02; final double cy_d = 2.4273913761751615e+02; PVector result = new PVector(); double depth = depthLookUp[depthValue];//rawDepthToMeters(depthValue); result.x = (float)((x - cx_d) * depth * fx_d); result.y = (float)((y - cy_d) * depth * fy_d); result.z = (float)(depth); return result; } void stop() { kinect.quit(); super.stop(); } And here are the errors: processing.app.debug.RunnerException: NullPointerException at processing.app.Sketch.placeException(Sketch.java:1543) at processing.app.debug.Runner.findException(Runner.java:583) at processing.app.debug.Runner.reportException(Runner.java:558) at processing.app.debug.Runner.exception(Runner.java:498) at processing.app.debug.EventThread.exceptionEvent(EventThread.java:367) at processing.app.debug.EventThread.handleEvent(EventThread.java:255) at processing.app.debug.EventThread.run(EventThread.java:89) Exception in thread "Animation Thread" java.lang.NullPointerException at 
org.openkinect.processing.Kinect.enableDepth(Kinect.java:70) at PointCloud.setup(PointCloud.java:48) at processing.core.PApplet.handleDraw(PApplet.java:1583) at processing.core.PApplet.run(PApplet.java:1503) at java.lang.Thread.run(Thread.java:637)

    Read the article

  • Gradient algorithm produces little white dots

    - by user146780
    I'm working on an algorithm to generate point to point linear gradients. I have a rough, proof of concept implementation done: GLuint OGLENGINEFUNCTIONS::CreateGradient( std::vector<ARGBCOLORF> &input,POINTFLOAT start, POINTFLOAT end, int width, int height,bool radial ) { std::vector<POINT> pol; std::vector<GLubyte> pdata(width * height * 4); std::vector<POINTFLOAT> linearpts; std::vector<float> lookup; float distance = GetDistance(start,end); RoundNumber(distance); POINTFLOAT temp; float incr = 1 / (distance + 1); for(int l = 0; l < 100; l ++) { POINTFLOAT outA; POINTFLOAT OutB; float dirlen; float perplen; POINTFLOAT dir; POINTFLOAT ndir; POINTFLOAT perp; POINTFLOAT nperp; POINTFLOAT perpoffset; POINTFLOAT diroffset; dir.x = end.x - start.x; dir.y = end.y - start.y; dirlen = sqrt((dir.x * dir.x) + (dir.y * dir.y)); ndir.x = static_cast<float>(dir.x * 1.0 / dirlen); ndir.y = static_cast<float>(dir.y * 1.0 / dirlen); perp.x = dir.y; perp.y = -dir.x; perplen = sqrt((perp.x * perp.x) + (perp.y * perp.y)); nperp.x = static_cast<float>(perp.x * 1.0 / perplen); nperp.y = static_cast<float>(perp.y * 1.0 / perplen); perpoffset.x = static_cast<float>(nperp.x * l * 0.5); perpoffset.y = static_cast<float>(nperp.y * l * 0.5); diroffset.x = static_cast<float>(ndir.x * 0 * 0.5); diroffset.y = static_cast<float>(ndir.y * 0 * 0.5); outA.x = end.x + perpoffset.x + diroffset.x; outA.y = end.y + perpoffset.y + diroffset.y; OutB.x = start.x + perpoffset.x - diroffset.x; OutB.y = start.y + perpoffset.y - diroffset.y; for (float i = 0; i < 1; i += incr) { temp = GetLinearBezier(i,outA,OutB); RoundNumber(temp.x); RoundNumber(temp.y); linearpts.push_back(temp); lookup.push_back(i); } for (unsigned int j = 0; j < linearpts.size(); j++) { if(linearpts[j].x < width && linearpts[j].x >= 0 && linearpts[j].y < height && linearpts[j].y >=0) { pdata[linearpts[j].x * 4 * width + linearpts[j].y * 4 + 0] = (GLubyte) j; pdata[linearpts[j].x * 4 * width + linearpts[j].y * 4 + 1] = (GLubyte) j; pdata[linearpts[j].x * 4 * width + linearpts[j].y * 4 + 2] = (GLubyte) j; pdata[linearpts[j].x * 4 * width + linearpts[j].y * 4 + 3] = (GLubyte) 255; } } lookup.clear(); linearpts.clear(); } return CreateTexture(pdata,width,height); } It works as I would expect most of the time, but at certain angles it produces little white dots. I can't figure out what does this. This is what it looks like at most angles (good) http://img9.imageshack.us/img9/5922/goodgradient.png But once in a while it looks like this (bad): http://img155.imageshack.us/img155/760/badgradient.png What could be causing the white dots? Is there maybe also a better way to generate my gradients if no solution is possible for this? Thanks

    Read the article

  • Rails - session information being cleared?

    - by Jty.tan
    Hi! I'm having a weird issue that I can't track down...

    For context, I have resources of Users, Registries, and Giftlines. Each User has many Registries. Each Registry has many Giftlines. It's a belongs_to association in the reverse direction.

    What is basically happening is that when I create a giftline, the giftline itself is created properly and linked to its associated Registry properly, but then in the process of being redirected back to the Registry show page, the session[:user_id] variable is cleared and I'm logged out.

    As far as I can tell, where it goes wrong is here in the registries_controller:

        def show
          @registry = Registry.find(params[:id])
          @user = User.find(@registry.user_id)
          if (params[:user_id] && (@user.login != params[:user_id]) )
            flash[:notice] = "User #{params[:user_id]} does not have such a registry."
            redirect_to user_registries_path(session[:user_id])
          end
        end

    Now, to be clear, I can do a show of the registry normally and nothing weird happens. It's only after I've added a giftline that the session[:user_id] variable gets cleared. I used the debugger and this is what seems to be happening.

        (rdb:19) list
        [20, 29] in /Users/kriston/Dropbox/ruby_apps/bee_registered/app/controllers/registries_controller.rb
           20        render :action => 'new'
           21      end
           22    end
           23
           24    def show
        => 25      @registry = Registry.find(params[:id])
           26      @user = User.find(@registry.user_id)
           27      if (params[:user_id] && (@user.login != params[:user_id]) )
           28        flash[:notice] = "User #{params[:user_id]} does not have such a registry."
           29        redirect_to user_registries_path(session[:user_id])
        (rdb:19) session[:user_id]
        "tester"
        (rdb:19)

    So from there we can see that the code has gotten back to the show action after the item had been added, and that the session[:user_id] variable is still set.

        (rdb:19) list
        [22, 31] in /Users/kriston/Dropbox/ruby_apps/bee_registered/app/controllers/registries_controller.rb
           22    end
           23
           24    def show
           25      @registry = Registry.find(params[:id])
           26      @user = User.find(@registry.user_id)
        => 27      if (params[:user_id] && (@user.login != params[:user_id]) )
           28        flash[:notice] = "User #{params[:user_id]} does not have such a registry."
           29        redirect_to user_registries_path(session[:user_id])
           30      end
           31    end
        (rdb:19) session[:user_id]
        "tester"
        (rdb:19)

    Stepping on, we get to this point, and the session[:user_id] is still set. At this point the URL is of the format localhost:3000/registries/:id, so params[:user_id] is nil and the if condition doesn't fire. (Unless I am completely wrong.)

    So then the next bit occurs, which is

        (rdb:19) list
        [1327, 1336] in /Library/Ruby/Gems/1.8/gems/actionpack-2.3.5/lib/action_controller/base.rb
           1327      end
           1328
           1329      def perform_action
           1330        if action_methods.include?(action_name)
           1331          send(action_name)
        => 1332          default_render unless performed?
           1333        elsif respond_to? :method_missing
           1334          method_missing action_name
           1335          default_render unless performed?
           1336        else
        (rdb:19) session[:user_id]
        "tester"

    And then when I hit next...

        (rdb:19) next
        2: session[:user_id] =
        /Library/Ruby/Gems/1.8/gems/actionpack-2.3.5/lib/action_controller/filters.rb:618
        return index if nesting != 0 || aborted
        (rdb:19) list
        [613, 622] in /Library/Ruby/Gems/1.8/gems/actionpack-2.3.5/lib/action_controller/filters.rb
           613      private
           614        def call_filters(chain, index, nesting)
           615          index = run_before_filters(chain, index, nesting)
           616          aborted = @before_filter_chain_aborted
           617          perform_action_without_filters unless performed? || aborted
        => 618          return index if nesting != 0 || aborted
           619          run_after_filters(chain, index)
           620        end
           621
           622        def run_before_filters(chain, index, nesting)
        (rdb:19) session
        {:user_id=>nil, :session_id=>"49992cdf2ddc708b441807f998af7ddc", :return_to=>"/registries", "flash"=>{}, :_csrf_token=>"xMDI0oDaOgbzhQhDG7EqOlGlxwIhHlB6c71fWgOIKcs="}

    The session[:user_id] is cleared, and when the page renders, I'm logged out.

    So... any idea why this is occurring? (It just occurred to me that I'm not sure whether I'm meant to be pasting large chunks of debug output in here; somebody point it out to me if not.)

    And yes, this only occurs when I have added a gift item and it is sending me back to the registry page. When I'm just viewing it, the same code runs but the session[:user_id] variable isn't cleared. It's driving me mildly insane. Thanks!
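    For what it's worth, the debugger output above suggests the value survives the action itself and disappears while the filter chain unwinds, so a temporary around filter that logs session[:user_id] on the way into and out of every action can narrow down which controller or filter is responsible. A minimal sketch, assuming Rails 2.3 conventions and a standard ApplicationController (the filter name is made up for illustration):

        # app/controllers/application_controller.rb -- temporary debugging aid
        class ApplicationController < ActionController::Base
          around_filter :trace_session_user

          private

          # Logs session[:user_id] before and after every action, so the request
          # that nils it out shows up in log/development.log.
          def trace_session_user
            logger.info "user_id before #{controller_name}##{action_name}: #{session[:user_id].inspect}"
            yield
            logger.info "user_id after  #{controller_name}##{action_name}: #{session[:user_id].inspect}"
          end
        end

    If the log shows the value still set after the giftline create but gone by RegistriesController#show, whatever runs between the two (the redirect, a before filter, or the session store itself) becomes the prime suspect.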

    Read the article

  • Apache configurations for php "AddType text/html php" or "AddType application/x-httpd-php php .php"

    - by forestclown
    I am taking over an application server and discovered that it contains the following setting:

        AddType text/html php

    Although it works, my understanding is that it should be set as follows:

        AddType application/x-httpd-php php .php

    What are the key differences between the two settings? At this point my application (built using CakePHP) runs fine with either configuration, but I am not sure whether the first one will cause any strange behaviour. Thanks!
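    For reference, a hedged sketch of the forms more commonly seen with mod_php; treat them as illustrative, since the right directive depends on how PHP is wired into this particular Apache build:

        # Classic mod_php form: associate the PHP MIME type with the .php extension
        AddType application/x-httpd-php .php

        # Form preferred in newer mod_php setups: hand .php files to the PHP handler explicitly
        <FilesMatch "\.php$">
            SetHandler application/x-httpd-php
        </FilesMatch>

    Either way, the intent is that Apache hands .php files to the PHP module rather than merely labelling their MIME type; the fact that AddType text/html php "works" suggests PHP is already being hooked in somewhere else in the configuration, so that line may be doing less than it appears.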

    Read the article

  • php extension COM_DOTNET.dll

    - by aXul
    I'm trying to add a PHP extension (php_com_dotnet.dll) to my server by writing the following in my php.ini:

        [COM_DOT_NET]
        extension=php_com_dotnet.dll

    I downloaded the DLL file and put it in my ext folder, but when restarting the server I got the following errors:

        Can't find entry point zend_new_interned_string in php5ts.dll
        PHP Startup: Unable to load dynamic library php_com_dotnet.dll - couldn't find the specified procedure

    I'm using PHP 5.3.18 on a XAMPP-like package (VertrigoServ).
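    Hedged observation: zend_new_interned_string only exists in PHP 5.4 and later, so an unresolved reference to it from php5ts.dll usually means the downloaded DLL was built against a newer PHP than the 5.3.18 runtime loading it. The safer source would be the php_com_dotnet.dll shipped inside the matching PHP 5.3.18 Windows package, with the same thread-safety (TS) and compiler flavour as the running php5ts.dll. Once a matching DLL is in place, a one-liner confirms it loaded (the extension name 'com_dotnet' is the standard Windows build name, assumed here):

        <?php
        // Quick sanity check after restarting the web server: both should report true
        var_dump(extension_loaded('com_dotnet'));  // the extension registered itself
        var_dump(class_exists('COM'));             // the COM class is usable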

    Read the article

  • How to get DCOMperm.exe from Microsoft

    - by Doltknuckle
    So I've been trying to solve the "The application-specific permission settings do not grant Local Activation permission" problem, and everything I've been reading says I need to get DCOMperm.exe. There are plenty of usage guides and download links, but they all point to non-Microsoft sources. I'd like to get this directly from Microsoft, but I can't find it there. Some people say that it's part of an SDK, but I'm not sure which one. Does anyone have any experience getting this exe?

    Read the article

  • ZTE ZXDSL 831 Series firmware update

    - by Eugenio Miró
    Hi everybody! I'm having a problem with my broadband connection, which runs through a ZTE ZXDSL 831 Series DSL modem, and wondered if a firmware update could help me fix it. The networking problem itself is not important at this point; my question is where to find firmware updates for this modem, because ZTE does not provide the bits. Does anyone know where I can find them? Thanks in advance!

    Read the article

  • Unable to set up SSL support for Apache 2 on Debian

    - by Francesco
    I am trying to set up SSL support for Apache 2 on Debian. Versions are:

        Debian GNU/Linux 6.0
        apache2 2.2.16-6+squeeze1

    I followed a lot of how-tos for days but I couldn't make it work. Here are my steps and configuration files (ServerName and DocumentRoot are changed for privacy; tell me if that matters):

        # mkdir /etc/apache2/ssl
        # openssl req $@ -new -x509 -days 365 -nodes -out /etc/apache2/apache.pem -keyout /etc/apache2/apache.pem

    At this point I have a doubt about the permissions on apache.pem; at this step they are

        -rw-r--r-- 1 root root

    Maybe it has to belong to www-data? Then I enable the ssl module with

        # a2enmod ssl
        # /etc/init.d/apache2 restart

    I modify /etc/apache2/sites-available/default-ssl in this way (I put port 8080 because I need port 443 for another purpose):

        <VirtualHost *:8080>
            SSLEngine on
            SSLCertificateFile /etc/apache2/ssl/apache.pem
            SSLCertificateKeyFile /etc/apache2/ssl/apache.pem

            ServerAdmin webmaster@localhost
            DocumentRoot /var/www
            <Directory />
                Options Indexes FollowSymLinks
                AllowOverride All
            </Directory>
            <Directory /var/www/>
                Options Indexes FollowSymLinks MultiViews
                AllowOverride All
                Order allow,deny
                allow from all
            </Directory>

            ScriptAlias /cgi-bin/ /usr/lib/cgi-bin/
            <Directory "/usr/lib/cgi-bin">
                AllowOverride None
                Options +ExecCGI -MultiViews +SymLinksIfOwnerMatch
                Order allow,deny
                Allow from all
            </Directory>

            ErrorLog ${APACHE_LOG_DIR}/error.log

            # Possible values include: debug, info, notice, warn, error, crit,
            # alert, emerg.
            LogLevel warn

            CustomLog ${APACHE_LOG_DIR}/access.log combined

            Alias /doc/ "/usr/share/doc/"
            <Directory "/usr/share/doc/">
                Options Indexes MultiViews FollowSymLinks
                AllowOverride None
                Order deny,allow
                Deny from all
                Allow from 127.0.0.0/255.0.0.0 ::1/128
            </Directory>
        </VirtualHost>

        <VirtualHost *:8080>
            DocumentRoot /home/user1/public_html/
            ServerName first.server.org
            # Other directives here
        </VirtualHost>

        <VirtualHost *:8080>
            DocumentRoot /home/user2/public_html/
            ServerName second.server.org
            # Other directives here
        </VirtualHost>

    I have to point out that the same configuration works over plain HTTP (it is a copy of /etc/apache2/sites-available/default with some differences: port and SSL support). My /etc/apache2/ports.conf is the following:

        # If you just change the port or add more ports here, you will likely also
        # have to change the VirtualHost statement in
        # /etc/apache2/sites-enabled/000-default
        # This is also true if you have upgraded from before 2.2.9-3 (i.e. from
        # Debian etch). See /usr/share/doc/apache2.2-common/NEWS.Debian.gz and
        # README.Debian.gz

        #NameVirtualHost *:80
        Listen 80

        <IfModule mod_ssl.c>
            # If you add NameVirtualHost *:443 here, you will also have to change
            # the VirtualHost statement in /etc/apache2/sites-available/default-ssl
            # to <VirtualHost *:443>
            # Server Name Indication for SSL named virtual hosts is currently not
            # supported by MSIE on Windows XP.
            #NameVirtualHost *:8080
            Listen 8080
        </IfModule>

        <IfModule mod_gnutls.c>
            Listen 8080
        </IfModule>

    Any suggestion? Thanks
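    A few details in the listing above may be worth double-checking (offered as hedged observations, not a definitive diagnosis): the openssl command writes the certificate to /etc/apache2/apache.pem while the vhost reads /etc/apache2/ssl/apache.pem, NameVirtualHost *:8080 is still commented out in ports.conf even though three name-based vhosts share that port, and the default-ssl site also has to be enabled. A minimal sketch of those steps on Debian squeeze:

        # generate the certificate where the vhost expects it
        openssl req -new -x509 -days 365 -nodes \
            -out /etc/apache2/ssl/apache.pem -keyout /etc/apache2/ssl/apache.pem
        chmod 600 /etc/apache2/ssl/apache.pem          # key material should not be world-readable

        # in /etc/apache2/ports.conf, inside <IfModule mod_ssl.c>:
        #   NameVirtualHost *:8080
        #   Listen 8080

        a2enmod ssl
        a2ensite default-ssl
        apache2ctl configtest && /etc/init.d/apache2 restart

    Note that without SNI all name-based SSL vhosts on *:8080 will be served under the same certificate, so the first.server.org and second.server.org blocks need their own SSLEngine/SSLCertificateFile lines as well if they are meant to speak HTTPS.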

    Read the article

  • LCD repair parts for Gateway FPD2275W

    - by eidylon
    Hi all, I am wondering if anyone can point me to someplace where I can buy an inverter and/or backlight bulb for a Gateway FPD2275w LCD monitor. I've been googling, and looking on eBay, but can't seem to find them. I need to repair a monitor... I've done so before with a laptop LCD, so yes, I do know what I'm in for. I just need to find where to buy the parts. Thanks in advance!

    Read the article

  • IIS 7 returning 400 Bad Request on POST

    - by xenolf
    Greetings, I am trying to POST data from an MVC 3 application to a server running IIS 7, using jQuery ajax. When I post normally to the server everything works OK; only when I post with ajax does the server return a 400 Bad Request. I already ran a trace on such a request, but all I got from it was the following:

        ModuleName="ManagedPipelineHandler", Notification="EXECUTE_REQUEST_HANDLER",
        HttpStatus="400", HttpReason="Bad Request", HttpSubStatus="0",
        ErrorCode="The operation completed successfully. (0x0)", ConfigExceptionInfo=""

    Can anyone point me in the right direction to solve this issue? Thanks
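    One thing worth ruling out (a hedged sketch, not a known fix for this particular server): make sure the ajax call sends a well-formed URL, body, and content type that the MVC action expects, since a malformed request is a common way to get a bare 400 before the action ever runs. Something along these lines, where the action route and field names are made up for illustration:

        // hypothetical controller action: [HttpPost] public ActionResult Save(OrderModel model)
        $.ajax({
            type: 'POST',
            url: '/Orders/Save',                            // assumption: route to the MVC action
            contentType: 'application/json; charset=utf-8', // MVC 3 binds JSON bodies out of the box
            data: JSON.stringify({ id: 42, name: 'test' }), // field names are illustrative
            dataType: 'json',
            success: function (result) { console.log('saved', result); },
            error: function (xhr) { console.log('failed', xhr.status, xhr.responseText); }
        });

    Comparing the working non-ajax POST and the failing ajax POST in the browser's network tab (headers, cookies, body) usually shows which part the server is rejecting.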

    Read the article

  • Weird Apache error - Apache hangs & needs restarting regularly

    - by Moe
    I've recently been hitting this weird problem where Apache just becomes unresponsive and completely stops until it is manually restarted. It gets to the point where I can no longer retrieve the Apache status from cPanel, and all websites served by Apache just hang on "connecting" until they time out. Has anyone else run into this? This is a screenshot of my top when the problem occurs; usually the top is full of httpd and php processes. Thanks for your help
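    When Apache wedges like this, it can help to capture its own scoreboard at the moment it hangs rather than relying on top alone. A hedged sketch of enabling mod_status on Apache 2.2 (the Allow line assumes the status page will be queried from the server itself):

        # httpd.conf (or an included conf file) -- requires mod_status to be loaded
        ExtendedStatus On
        <Location /server-status>
            SetHandler server-status
            Order deny,allow
            Deny from all
            Allow from 127.0.0.1
        </Location>

    Then apachectl fullstatus (or curl http://127.0.0.1/server-status) during a hang shows whether every worker slot is busy, which tends to point at slow requests or an exhausted MaxClients rather than Apache itself dying.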

    Read the article

  • Current Motherboard Tech Overview

    - by Ian
    Welp, computer died. Time to get a new motherboard. I haven't been paying attention to the latest motherboard technology and associated models and chipsets. Can someone point me at an overview of what is currently available? I'm primarily interested in the Intel / nvidia side of things.

    Read the article

  • Does Oracle 10g work on Windows 7?

    - by Gold
    I'm trying to install Oracle 10g on Windows 7, but during the installation I get the following error: The procedure entry point GetProcessImageFileNameW could not be located in the dynamic library PSAPI.DLL. Thanks for any help.

    Read the article

  • Change default mouse cursor on OSX Mavericks

    - by Ziarno
    Is there any way to change the default mouse cursor on OS X Mavericks? More precisely, I'd like to make it look like it does on Windows, and even more precisely I want to move the point where the mouse cursor actually "clicks": on Windows it is at the very tip of the arrow, even slightly outside it, while on the Mac it is a little inside the arrow. I've done some research, and the websites either tell me to get Mighty Mouse, which doesn't work on my system, or tell me how to change the size of my cursor.

    Read the article

  • mail merge e-mail in Word 2007 with attachment

    - by kevyn
    Is there a simple way to mail merge with Word 2007 and add an attachment? I've searched Google, and all the results point to pasting in VBA code. I want a small team of novice users to be able to mail merge e-mails and add attachments. Does anyone know a simple way of doing this without code?

    Read the article

  • NFS using FREENAS for ESXi

    - by maruti
    I'm trying to create an NFS share for ESXi 4, using FreeNAS 0.71. Once the NFS mount point is set up, could it also be shared to Windows clients using the CIFS/SMB service? I mean sharing the backed-up VMs on the NFS datastore to Windows clients via CIFS/SMB.

    Read the article

  • How to put the wamp server online?

    - by srisar
    Hi, I have set up the WAMP server and now I'm ready to put it online. I've put it online and forwarded port 80, but when I point the browser at my WAN address it shows my router's home page instead of my site. I don't know how to solve it.

    Read the article

< Previous Page | 187 188 189 190 191 192 193 194 195 196 197 198  | Next Page >