Search Results

Search found 23545 results on 942 pages for 'parallel task library'.


  • Improving Partitioned Table Join Performance

    - by Paul White
    The query optimizer does not always choose an optimal strategy when joining partitioned tables. This post looks at an example, showing how a manual rewrite of the query can almost double performance, while reducing the memory grant to almost nothing. Test Data The two tables in this example share a common partition scheme. The partition function uses 41 equal-size partitions: CREATE PARTITION FUNCTION PFT (integer) AS RANGE RIGHT FOR VALUES ( 125000, 250000, 375000, 500000, 625000, 750000, 875000, 1000000, 1125000, 1250000, 1375000, 1500000, 1625000, 1750000, 1875000, 2000000, 2125000, 2250000, 2375000, 2500000, 2625000, 2750000, 2875000, 3000000, 3125000, 3250000, 3375000, 3500000, 3625000, 3750000, 3875000, 4000000, 4125000, 4250000, 4375000, 4500000, 4625000, 4750000, 4875000, 5000000 ); GO CREATE PARTITION SCHEME PST AS PARTITION PFT ALL TO ([PRIMARY]); The two tables are: CREATE TABLE dbo.T1 ( TID integer NOT NULL IDENTITY(0,1), Column1 integer NOT NULL, Padding binary(100) NOT NULL DEFAULT 0x,   CONSTRAINT PK_T1 PRIMARY KEY CLUSTERED (TID) ON PST (TID) );   CREATE TABLE dbo.T2 ( TID integer NOT NULL, Column1 integer NOT NULL, Padding binary(100) NOT NULL DEFAULT 0x,   CONSTRAINT PK_T2 PRIMARY KEY CLUSTERED (TID, Column1) ON PST (TID) ); The next script loads 5 million rows into T1 with a pseudo-random value between 1 and 5 for Column1. The table is partitioned on the IDENTITY column TID: INSERT dbo.T1 WITH (TABLOCKX) (Column1) SELECT (ABS(CHECKSUM(NEWID())) % 5) + 1 FROM dbo.Numbers AS N WHERE n BETWEEN 1 AND 5000000; In case you don’t already have an auxiliary table of numbers lying around, here’s a script to create one with 10 million rows: CREATE TABLE dbo.Numbers (n bigint PRIMARY KEY);   WITH L0 AS(SELECT 1 AS c UNION ALL SELECT 1), L1 AS(SELECT 1 AS c FROM L0 AS A CROSS JOIN L0 AS B), L2 AS(SELECT 1 AS c FROM L1 AS A CROSS JOIN L1 AS B), L3 AS(SELECT 1 AS c FROM L2 AS A CROSS JOIN L2 AS B), L4 AS(SELECT 1 AS c FROM L3 AS A CROSS JOIN L3 AS B), L5 AS(SELECT 1 AS c FROM L4 AS A CROSS JOIN L4 AS B), Nums AS(SELECT ROW_NUMBER() OVER (ORDER BY (SELECT NULL)) AS n FROM L5) INSERT dbo.Numbers WITH (TABLOCKX) SELECT TOP (10000000) n FROM Nums ORDER BY n OPTION (MAXDOP 1); Table T1 contains data like this: Next we load data into table T2. The relationship between the two tables is that table 2 contains ‘n’ rows for each row in table 1, where ‘n’ is determined by the value in Column1 of table T1. There is nothing particularly special about the data or distribution, by the way. INSERT dbo.T2 WITH (TABLOCKX) (TID, Column1) SELECT T.TID, N.n FROM dbo.T1 AS T JOIN dbo.Numbers AS N ON N.n >= 1 AND N.n <= T.Column1; Table T2 ends up containing about 15 million rows: The primary key for table T2 is a combination of TID and Column1. The data is partitioned according to the value in column TID alone. Partition Distribution The following query shows the number of rows in each partition of table T1: SELECT PartitionID = CA1.P, NumRows = COUNT_BIG(*) FROM dbo.T1 AS T CROSS APPLY (VALUES ($PARTITION.PFT(TID))) AS CA1 (P) GROUP BY CA1.P ORDER BY CA1.P; There are 40 partitions containing 125,000 rows (40 * 125k = 5m rows). The rightmost partition remains empty. The next query shows the distribution for table 2: SELECT PartitionID = CA1.P, NumRows = COUNT_BIG(*) FROM dbo.T2 AS T CROSS APPLY (VALUES ($PARTITION.PFT(TID))) AS CA1 (P) GROUP BY CA1.P ORDER BY CA1.P; There are roughly 375,000 rows in each partition (the rightmost partition is also empty): Ok, that’s the test data done. 
Test Query and Execution Plan The task is to count the rows resulting from joining tables 1 and 2 on the TID column: SET STATISTICS IO ON; DECLARE @s datetime2 = SYSUTCDATETIME();   SELECT COUNT_BIG(*) FROM dbo.T1 AS T1 JOIN dbo.T2 AS T2 ON T2.TID = T1.TID;   SELECT DATEDIFF(Millisecond, @s, SYSUTCDATETIME()); SET STATISTICS IO OFF; The optimizer chooses a plan using parallel hash join, and partial aggregation: The Plan Explorer plan tree view shows accurate cardinality estimates and an even distribution of rows across threads (click to enlarge the image): With a warm data cache, the STATISTICS IO output shows that no physical I/O was needed, and all 41 partitions were touched: Running the query without actual execution plan or STATISTICS IO information for maximum performance, the query returns in around 2600ms. Execution Plan Analysis The first step toward improving on the execution plan produced by the query optimizer is to understand how it works, at least in outline. The two parallel Clustered Index Scans use multiple threads to read rows from tables T1 and T2. Parallel scan uses a demand-based scheme where threads are given page(s) to scan from the table as needed. This arrangement has certain important advantages, but does result in an unpredictable distribution of rows amongst threads. The point is that multiple threads cooperate to scan the whole table, but it is impossible to predict which rows end up on which threads. For correct results from the parallel hash join, the execution plan has to ensure that rows from T1 and T2 that might join are processed on the same thread. For example, if a row from T1 with join key value ‘1234’ is placed in thread 5’s hash table, the execution plan must guarantee that any rows from T2 that also have join key value ‘1234’ probe thread 5’s hash table for matches. The way this guarantee is enforced in this parallel hash join plan is by repartitioning rows to threads after each parallel scan. The two repartitioning exchanges route rows to threads using a hash function over the hash join keys. The two repartitioning exchanges use the same hash function so rows from T1 and T2 with the same join key must end up on the same hash join thread. Expensive Exchanges This business of repartitioning rows between threads can be very expensive, especially if a large number of rows is involved. The execution plan selected by the optimizer moves 5 million rows through one repartitioning exchange and around 15 million across the other. As a first step toward removing these exchanges, consider the execution plan selected by the optimizer if we join just one partition from each table, disallowing parallelism: SELECT COUNT_BIG(*) FROM dbo.T1 AS T1 JOIN dbo.T2 AS T2 ON T2.TID = T1.TID WHERE $PARTITION.PFT(T1.TID) = 1 AND $PARTITION.PFT(T2.TID) = 1 OPTION (MAXDOP 1); The optimizer has chosen a (one-to-many) merge join instead of a hash join. The single-partition query completes in around 100ms. If everything scaled linearly, we would expect that extending this strategy to all 40 populated partitions would result in an execution time around 4000ms. Using parallelism could reduce that further, perhaps to be competitive with the parallel hash join chosen by the optimizer. This raises a question. If the most efficient way to join one partition from each of the tables is to use a merge join, why does the optimizer not choose a merge join for the full query? 
Forcing a Merge Join Let’s force the optimizer to use a merge join on the test query using a hint: SELECT COUNT_BIG(*) FROM dbo.T1 AS T1 JOIN dbo.T2 AS T2 ON T2.TID = T1.TID OPTION (MERGE JOIN); This is the execution plan selected by the optimizer: This plan results in the same number of logical reads reported previously, but instead of 2600ms the query takes 5000ms. The natural explanation for this drop in performance is that the merge join plan is only using a single thread, whereas the parallel hash join plan could use multiple threads. Parallel Merge Join We can get a parallel merge join plan using the same query hint as before, and adding trace flag 8649: SELECT COUNT_BIG(*) FROM dbo.T1 AS T1 JOIN dbo.T2 AS T2 ON T2.TID = T1.TID OPTION (MERGE JOIN, QUERYTRACEON 8649); The execution plan is: This looks promising. It uses a similar strategy to distribute work across threads as seen for the parallel hash join. In practice though, performance is disappointing. On a typical run, the parallel merge plan runs for around 8400ms; slower than the single-threaded merge join plan (5000ms) and much worse than the 2600ms for the parallel hash join. We seem to be going backwards! The logical reads for the parallel merge are still exactly the same as before, with no physical IOs. The cardinality estimates and thread distribution are also still very good (click to enlarge): A big clue to the reason for the poor performance is shown in the wait statistics (captured by Plan Explorer Pro): CXPACKET waits require careful interpretation, and are most often benign, but in this case excessive waiting occurs at the repartitioning exchanges. Unlike the parallel hash join, the repartitioning exchanges in this plan are order-preserving ‘merging’ exchanges (because merge join requires ordered inputs): Parallelism works best when threads can just grab any available unit of work and get on with processing it. Preserving order introduces inter-thread dependencies that can easily lead to significant waits occurring. In extreme cases, these dependencies can result in an intra-query deadlock, though the details of that will have to wait for another time. The potential for waits and deadlocks leads the query optimizer to cost parallel merge join relatively highly, especially as the degree of parallelism (DOP) increases. This high costing resulted in the optimizer choosing a serial merge join rather than a parallel one in this case. The test results certainly confirm its reasoning. Collocated Joins In SQL Server 2008 and later, the optimizer has another available strategy when joining tables that share a common partition scheme. This strategy is a collocated join, also known as a per-partition join. It can be applied in both serial and parallel execution plans, though it is limited to 2-way joins in the current optimizer. Whether the optimizer chooses a collocated join or not depends on cost estimation. The primary benefits of a collocated join are that it eliminates an exchange and requires less memory, as we will see next. Costing and Plan Selection The query optimizer did consider a collocated join for our original query, but it was rejected on cost grounds. The parallel hash join with repartitioning exchanges appeared to be a cheaper option. There is no query hint to force a collocated join, so we have to mess with the costing framework to produce one for our test query. 
Pretending that IOs cost 50 times more than usual is enough to convince the optimizer to use a collocated join with our test query: -- Pretend IOs are 50x cost temporarily DBCC SETIOWEIGHT(50);   -- Co-located hash join SELECT COUNT_BIG(*) FROM dbo.T1 AS T1 JOIN dbo.T2 AS T2 ON T2.TID = T1.TID OPTION (RECOMPILE);   -- Reset IO costing DBCC SETIOWEIGHT(1); Collocated Join Plan The estimated execution plan for the collocated join is: The Constant Scan contains one row for each partition of the shared partitioning scheme, from 1 to 41. The hash repartitioning exchanges seen previously are replaced by a single Distribute Streams exchange using Demand partitioning. Demand partitioning means that the next partition id is given to the next parallel thread that asks for one. My test machine has eight logical processors, and all are available for SQL Server to use. As a result, there are eight threads in the single parallel branch in this plan, each processing one partition from each table at a time. Once a thread finishes processing a partition, it grabs a new partition number from the Distribute Streams exchange…and so on until all partitions have been processed. It is important to understand that the parallel scans in this plan are different from the parallel hash join plan. Although the scans have the same parallelism icon, tables T1 and T2 are not being co-operatively scanned by multiple threads in the same way. Each thread reads a single partition of T1 and performs a hash match join with the same partition from table T2. The properties of the two Clustered Index Scans show a Seek Predicate (unusual for a scan!) limiting the rows to a single partition: The crucial point is that the join between T1 and T2 is on TID, and TID is the partitioning column for both tables. A thread that processes partition ‘n’ is guaranteed to see all rows that can possibly join on TID for that partition. In addition, no other thread will see rows from that partition, so this removes the need for repartitioning exchanges. CPU and Memory Efficiency Improvements The collocated join has removed two expensive repartitioning exchanges and added a single exchange processing 41 rows (one for each partition id). Remember, the parallel hash join plan exchanges had to process 5 million and 15 million rows. The amount of processor time spent on exchanges will be much lower in the collocated join plan. In addition, the collocated join plan has a maximum of 8 threads processing single partitions at any one time. The 41 partitions will all be processed eventually, but a new partition is not started until a thread asks for it. Threads can reuse hash table memory for the new partition. The parallel hash join plan also had 8 hash tables, but with all 5,000,000 build rows loaded at the same time. The collocated plan needs memory for only 8 * 125,000 = 1,000,000 rows at any one time. Collocated Hash Join Performance The collocated join plan has disappointing performance in this case. The query runs for around 25,300ms despite the same IO statistics as usual. This is much the worst result so far, so what went wrong? It turns out that cardinality estimation for the single partition scans of table T1 is slightly low. The properties of the Clustered Index Scan of T1 (graphic immediately above) show the estimation was for 121,951 rows. 
This is a small shortfall compared with the 125,000 rows actually encountered, but it was enough to cause the hash join to spill to physical tempdb: A level 1 spill doesn’t sound too bad, until you realize that the spill to tempdb probably occurs for each of the 41 partitions. As a side note, the cardinality estimation error is a little surprising because the system tables accurately show there are 125,000 rows in every partition of T1. Unfortunately, the optimizer uses regular column and index statistics to derive cardinality estimates here rather than system table information (e.g. sys.partitions). Collocated Merge Join We will never know how well the collocated parallel hash join plan might have worked without the cardinality estimation error (and the resulting 41 spills to tempdb) but we do know: Merge join does not require a memory grant; and Merge join was the optimizer’s preferred join option for a single partition join Putting this all together, what we would really like to see is the same collocated join strategy, but using merge join instead of hash join. Unfortunately, the current query optimizer cannot produce a collocated merge join; it only knows how to do collocated hash join. So where does this leave us? CROSS APPLY sys.partitions We can try to write our own collocated join query. We can use sys.partitions to find the partition numbers, and CROSS APPLY to get a count per partition, with a final step to sum the partial counts. The following query implements this idea: SELECT row_count = SUM(Subtotals.cnt) FROM ( -- Partition numbers SELECT p.partition_number FROM sys.partitions AS p WHERE p.[object_id] = OBJECT_ID(N'T1', N'U') AND p.index_id = 1 ) AS P CROSS APPLY ( -- Count per collocated join SELECT cnt = COUNT_BIG(*) FROM dbo.T1 AS T1 JOIN dbo.T2 AS T2 ON T2.TID = T1.TID WHERE $PARTITION.PFT(T1.TID) = p.partition_number AND $PARTITION.PFT(T2.TID) = p.partition_number ) AS SubTotals; The estimated plan is: The cardinality estimates aren’t all that good here, especially the estimate for the scan of the system table underlying the sys.partitions view. Nevertheless, the plan shape is heading toward where we would like to be. Each partition number from the system table results in a per-partition scan of T1 and T2, a one-to-many Merge Join, and a Stream Aggregate to compute the partial counts. The final Stream Aggregate just sums the partial counts. Execution time for this query is around 3,500ms, with the same IO statistics as always. This compares favourably with 5,000ms for the serial plan produced by the optimizer with the OPTION (MERGE JOIN) hint. This is another case of the sum of the parts being less than the whole – summing 41 partial counts from 41 single-partition merge joins is faster than a single merge join and count over all partitions. Even so, this single-threaded collocated merge join is not as quick as the original parallel hash join plan, which executed in 2,600ms. On the positive side, our collocated merge join uses only one logical processor and requires no memory grant. The parallel hash join plan used 16 threads and reserved 569 MB of memory:   Using a Temporary Table Our collocated merge join plan should benefit from parallelism. The reason parallelism is not being used is that the query references a system table. 
We can work around that by writing the partition numbers to a temporary table (or table variable): SET STATISTICS IO ON; DECLARE @s datetime2 = SYSUTCDATETIME();   CREATE TABLE #P ( partition_number integer PRIMARY KEY);   INSERT #P (partition_number) SELECT p.partition_number FROM sys.partitions AS p WHERE p.[object_id] = OBJECT_ID(N'T1', N'U') AND p.index_id = 1;   SELECT row_count = SUM(Subtotals.cnt) FROM #P AS p CROSS APPLY ( SELECT cnt = COUNT_BIG(*) FROM dbo.T1 AS T1 JOIN dbo.T2 AS T2 ON T2.TID = T1.TID WHERE $PARTITION.PFT(T1.TID) = p.partition_number AND $PARTITION.PFT(T2.TID) = p.partition_number ) AS SubTotals;   DROP TABLE #P;   SELECT DATEDIFF(Millisecond, @s, SYSUTCDATETIME()); SET STATISTICS IO OFF; Using the temporary table adds a few logical reads, but the overall execution time is still around 3500ms, indistinguishable from the same query without the temporary table. The problem is that the query optimizer still doesn’t choose a parallel plan for this query, though the removal of the system table reference means that it could if it chose to: In fact the optimizer did enter the parallel plan phase of query optimization (running search 1 for a second time): Unfortunately, the parallel plan found seemed to be more expensive than the serial plan. This is a crazy result, caused by the optimizer’s cost model not reducing operator CPU costs on the inner side of a nested loops join. Don’t get me started on that, we’ll be here all night. In this plan, everything expensive happens on the inner side of a nested loops join. Without a CPU cost reduction to compensate for the added cost of exchange operators, candidate parallel plans always look more expensive to the optimizer than the equivalent serial plan. Parallel Collocated Merge Join We can produce the desired parallel plan using trace flag 8649 again: SELECT row_count = SUM(Subtotals.cnt) FROM #P AS p CROSS APPLY ( SELECT cnt = COUNT_BIG(*) FROM dbo.T1 AS T1 JOIN dbo.T2 AS T2 ON T2.TID = T1.TID WHERE $PARTITION.PFT(T1.TID) = p.partition_number AND $PARTITION.PFT(T2.TID) = p.partition_number ) AS SubTotals OPTION (QUERYTRACEON 8649); The actual execution plan is: One difference between this plan and the collocated hash join plan is that a Repartition Streams exchange operator is used instead of Distribute Streams. The effect is similar, though not quite identical. The Repartition uses round-robin partitioning, meaning the next partition id is pushed to the next thread in sequence. The Distribute Streams exchange seen earlier used Demand partitioning, meaning the next partition id is pulled across the exchange by the next thread that is ready for more work. There are subtle performance implications for each partitioning option, but going into that would again take us too far off the main point of this post. Performance The important thing is the performance of this parallel collocated merge join – just 1350ms on a typical run. The list below shows all the alternatives from this post (all timings include creation, population, and deletion of the temporary table where appropriate) from quickest to slowest: Collocated parallel merge join: 1350ms Parallel hash join: 2600ms Collocated serial merge join: 3500ms Serial merge join: 5000ms Parallel merge join: 8400ms Collocated parallel hash join: 25,300ms (hash spill per partition) The parallel collocated merge join requires no memory grant (aside from a paltry 1.2MB used for exchange buffers). 
This plan uses 16 threads at DOP 8; but 8 of those are (rather pointlessly) allocated to the parallel scan of the temporary table. These are minor concerns, but it turns out there is a way to address them if it bothers you. Parallel Collocated Merge Join with Demand Partitioning This final tweak replaces the temporary table with a hard-coded list of partition ids (dynamic SQL could be used to generate this query from sys.partitions): SELECT row_count = SUM(Subtotals.cnt) FROM ( VALUES (1),(2),(3),(4),(5),(6),(7),(8),(9),(10), (11),(12),(13),(14),(15),(16),(17),(18),(19),(20), (21),(22),(23),(24),(25),(26),(27),(28),(29),(30), (31),(32),(33),(34),(35),(36),(37),(38),(39),(40),(41) ) AS P (partition_number) CROSS APPLY ( SELECT cnt = COUNT_BIG(*) FROM dbo.T1 AS T1 JOIN dbo.T2 AS T2 ON T2.TID = T1.TID WHERE $PARTITION.PFT(T1.TID) = p.partition_number AND $PARTITION.PFT(T2.TID) = p.partition_number ) AS SubTotals OPTION (QUERYTRACEON 8649); The actual execution plan is: The parallel collocated hash join plan is reproduced below for comparison: The manual rewrite has another advantage that has not been mentioned so far: the partial counts (per partition) can be computed earlier than the partial counts (per thread) in the optimizer’s collocated join plan. The earlier aggregation is performed by the extra Stream Aggregate under the nested loops join. The performance of the parallel collocated merge join is unchanged at around 1350ms. Final Words It is a shame that the current query optimizer does not consider a collocated merge join (Connect item closed as Won’t Fix). The example used in this post showed an improvement in execution time from 2600ms to 1350ms using a modestly-sized data set and limited parallelism. In addition, the memory requirement for the query was almost completely eliminated  – down from 569MB to 1.2MB. The problem with the parallel hash join selected by the optimizer is that it attempts to process the full data set all at once (albeit using eight threads). It requires a large memory grant to hold all 5 million rows from table T1 across the eight hash tables, and does not take advantage of the divide-and-conquer opportunity offered by the common partitioning. The great thing about the collocated join strategies is that each parallel thread works on a single partition from both tables, reading rows, performing the join, and computing a per-partition subtotal, before moving on to a new partition. From a thread’s point of view… If you have trouble visualizing what is happening from just looking at the parallel collocated merge join execution plan, let’s look at it again, but from the point of view of just one thread operating between the two Parallelism (exchange) operators. Our thread picks up a single partition id from the Distribute Streams exchange, and starts a merge join using ordered rows from partition 1 of table T1 and partition 1 of table T2. By definition, this is all happening on a single thread. As rows join, they are added to a (per-partition) count in the Stream Aggregate immediately above the Merge Join. Eventually, either T1 (partition 1) or T2 (partition 1) runs out of rows and the merge join stops. The per-partition count from the aggregate passes on through the Nested Loops join to another Stream Aggregate, which is maintaining a per-thread subtotal. Our same thread now picks up a new partition id from the exchange (say it gets id 9 this time). 
The count in the per-partition aggregate is reset to zero, and the processing of partition 9 of both tables proceeds just as it did for partition 1, and on the same thread. Each thread picks up a single partition id and processes all the data for that partition, completely independently from other threads working on other partitions. One thread might eventually process partitions (1, 9, 17, 25, 33, 41) while another is concurrently processing partitions (2, 10, 18, 26, 34) and so on for the other six threads at DOP 8. The point is that all 8 threads can execute independently and concurrently, continuing to process new partitions until the wider job (of which the thread has no knowledge!) is done. This divide-and-conquer technique can be much more efficient than simply splitting the entire workload across eight threads all at once. Related Reading Understanding and Using Parallelism in SQL Server Parallel Execution Plans Suck © 2013 Paul White – All Rights Reserved Twitter: @SQL_Kiwi

    Read the article

  • Borrow Harry Potter’s eBooks from Amazon Kindle Owner’s Lending Library

    - by Rekha
    From June 19, 2012, Amazon.com customers can borrow all 7 Harry Potter books from the Kindle Owner’s Lending Library (KOLL). The books are available in English, French, Italian, German and Spanish. Amazon Prime members who own a Kindle can choose from 145,000 titles. US customers can borrow for free, with no due dates, as frequently as once a month. There is no limit on the number of copies available, so any number of customers can borrow and read the same book simultaneously. Bookmarks in borrowed books are saved, so customers can continue reading where they stopped even when they re-borrow a book. Prime members also enjoy free two-day shipping on millions of items and unlimited streaming of over 18,000 movies and TV episodes. Amazon has an exclusive license from J.K. Rowling’s Pottermore. Individual books in the series cost between $7.99 and $9.99, and Pottermore’s investment in the books is compensated by a large payment from Amazon. Via Amazon. CC Image Credit Amazon KOLL.

    Read the article

  • This Task Is Currently Locked by a Running Workflow and Cannot Be Edited

    - by Jayant Sharma
    Problem: In SharePoint Workflow, "This task is currently locked by a running workflow and cannot be edited" is a common exception. Solution: Generally this exception occurs in two situations. 1. When the number of items in the Task List gets high. The exception means that the workflow engine is not able to deliver all the events at a given time, so the tasks get locked. Out of the box, the default event delivery throttle value is 15. The event delivery throttle specifies the number of workflows that can be processed at the same time across all front-end Web servers; see the following link (http://blogs.msdn.com/b/vincent_runge/archive/2008/09/16/about-the-workflow-eventdelivery-throttle-parameter.aspx). If the number of running workflows exceeds the throttle (15 by default), any new workflow event will not be processed immediately, so we need to raise the throttle with an stsadm command like: stsadm -o setproperty -pn workflow-eventdelivery-throttle -pv "20" (http://technet.microsoft.com/en-us/library/cc287939(office.12).aspx) 2. When we modify a workflow task from a custom task edit page. This exception also occurs when we try to modify the workflow task outside the workflow's default page, for example from a custom task edit page. Suppose we have a custom task edit page with a dropdown whose values are Submitted / In Progress / Completed, and we want to complete the task from there: the exception is thrown by the SPWorkflowTask.AlterTask method, which changes the task status. When I debugged to find the root cause, I actually found that the workflow is not locked; the InternalState flag of the workflow does not include the Locked flag bits (http://msdn.microsoft.com/en-us/library/dd928318(v=office.12).aspx). Then I found this link: http://geek.hubkey.com/2007/09/locked-workflow.html. It is exactly what I wanted. It says that the error occurs when the WorkflowVersion of the task list item is not equal to 1. The solution proposed there works fantastically: if ((int)task[SPBuiltInFieldId.WorkflowVersion] != 1){    SPList parentList = task.ParentList.ParentWeb.Lists[new Guid(task[SPBuiltInFieldId.WorkflowListId].ToString())];    SPListItem parentItem = parentList.Items.GetItemById((int)task[SPBuiltInFieldId.WorkflowItemId]);    SPWorkflow workflow = parentItem.Workflows[new Guid(task[SPBuiltInFieldId.WorkflowInstanceID].ToString())];    if (!workflow.IsLocked)    {       task[SPBuiltInFieldId.WorkflowVersion] = 1;       task.SystemUpdate();      break;    }} It resets the workflow version to 1. Conclusion: This exception is thoroughly confusing, so first find out whether the workflow is really locked or not. If it is really locked, use the first method; if not, check the workflow version and set it back to 1. Jayant Sharma
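    For context, here is a minimal sketch of how the version-reset workaround above might be combined with the actual AlterTask call on a custom edit page. SPWorkflowTask.AlterTask and the SPBuiltInFieldId fields come from the post itself; the helper name and the "Status" / "PercentComplete" keys and values are assumptions to adapt to your own task content type, and the IsLocked check from the quoted snippet is omitted here for brevity:

    ```csharp
    using System.Collections;
    using Microsoft.SharePoint;
    using Microsoft.SharePoint.Workflow;

    static void CompleteWorkflowTask(SPListItem task)
    {
        // Workaround from the post: a WorkflowVersion other than 1 makes
        // AlterTask report the task as locked even when it is not.
        if ((int)task[SPBuiltInFieldId.WorkflowVersion] != 1)
        {
            task[SPBuiltInFieldId.WorkflowVersion] = 1;
            task.SystemUpdate();
        }

        // Hypothetical field names/values; adjust to your task content type.
        Hashtable data = new Hashtable();
        data["Status"] = "Completed";
        data["PercentComplete"] = 1.0f;

        SPWorkflowTask.AlterTask(task, data, true); // true = synchronous
    }
    ```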

    Read the article

  • Repointing iTunes library from failed external drive to local drive

    - by Andy White
    Up until recently I had my iTunes library on an external drive, but the drive was beginning to act up, so as a precaution, I copied all the media files onto my local hard-drive. My external drive is now completely dead, so I'm in a situation where iTunes is still looking for the files on my F: drive, but the drive is gone, so the library references are now all broken links. A complete copy of the library is on my C: drive, so I'm curious what would be the easiest way to repoint my library to the media files on C:? Should I just clear out my library in iTunes and just import the files from C:? Or is there a more automatic way? I don't think I can use the "Consolidate Library" function in iTunes, b/c I no longer have access to the original library media files.

    Read the article

  • Library order is important

    - by Darryl Gove
    I've written quite extensively about link ordering issues, but I've not discussed the interaction between archive libraries and shared libraries. So let's take a simple program that calls a maths library function: #include <math.h> int main() { for (int i=0; i<10000000; i++) { sin(i); } } We compile and run it to get the following performance: bash-3.2$ cc -g -O fp.c -lm bash-3.2$ timex ./a.out real 6.06 user 6.04 sys 0.01 Now most people will have heard of the optimised maths library which is added by the flag -xlibmopt. This contains optimised versions of key mathematical functions, in this instance, using the library doubles performance: bash-3.2$ cc -g -O -xlibmopt fp.c -lm bash-3.2$ timex ./a.out real 2.70 user 2.69 sys 0.00 The optimised maths library is provided as an archive library (libmopt.a), and the driver adds it to the link line just before the maths library - this causes the linker to pick the definitions provided by the static library in preference to those provided by libm. We can see the processing by asking the compiler to print out the link line: bash-3.2$ cc -### -g -O -xlibmopt fp.c -lm /usr/ccs/bin/ld ... fp.o -lmopt -lm -o a.out... The flag to the linker is -lmopt, and this is placed before the -lm flag. So what happens when the -lm flag is in the wrong place on the command line: bash-3.2$ cc -g -O -xlibmopt -lm fp.c bash-3.2$ timex ./a.out real 6.02 user 6.01 sys 0.01 If the -lm flag is before the source file (or object file for that matter), we get the slower performance from the system maths library. Why's that? If we look at the link line we can see the following ordering: /usr/ccs/bin/ld ... -lmopt -lm fp.o -o a.out So the optimised maths library is still placed before the system maths library, but the object file is placed afterwards. This would be ok if the optimised maths library were a shared library, but it is not - instead it's an archive library, and archive library processing is different - as described in the linker and library guide: "The link-editor searches an archive only to resolve undefined or tentative external references that have previously been encountered." An archive library can only be used to resolve symbols that are outstanding at that point in the link processing. When fp.o is placed before the libmopt.a archive library, then the linker has an unresolved symbol referenced by fp.o, and it will search the archive library to resolve that symbol. If the archive library is placed before fp.o then there are no unresolved symbols at that point, and so the linker doesn't need to use the archive library. This is why libmopt needs to be placed after the object files on the link line. On the other hand if the linker has observed any shared libraries, then at any point these are checked for any unresolved symbols. The consequence of this is that once the linker "sees" libm it will resolve any symbols it can to that library, and it will not check the archive library to resolve them. This is why libmopt needs to be placed before libm on the link line. This leads to the following order for placing files on the link line: Object files Archive libraries Shared libraries If you use this order, then things will consistently get resolved to the archive libraries rather than to the shared libraries.

    Read the article

  • Any task-control algorithms programming practices?

    - by NumberFour
    Hi, I was just wondering if there's any field which concerns task-control programming (or at least that's what I call it). For a better explanation of task-control, consider the following scenario: An application (master thread) waits for a command - which might be a particular action or a set of actions the application should perform. When a command is received, the master thread creates a task (= spawns an independent thread which actually does the action) and adds a record to its task list - thus keeping track of the time of execution, thread handle, task priority, etc. The master thread awaits any other incoming commands while taking care of all the tasks - e.g. it kills tasks running too long, prioritizes tasks with higher priorities, kills a task at the request of another task, limits the number of currently running tasks, allows task scheduling, cleans up finished tasks (threads) and so on. The model is pretty similar to the way an OS deals with running processes. Are there any good practices for programming such task models, or is there some theoretical work done in this field? Maybe my question is too generalized, but at least I wanted to know whether there are any experiences working with such models or if there's a better approach. Thanks for any answers.
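    As a concrete starting point, here is a minimal C# sketch of the model described above, built on the Task Parallel Library. All names (TaskController, Submit, Sweep) are invented for this illustration, and .NET cancellation is cooperative, so each task's work delegate must observe its CancellationToken; priorities and scheduling are left out:

    ```csharp
    using System;
    using System.Collections.Concurrent;
    using System.Threading;
    using System.Threading.Tasks;

    // Illustrative task-control skeleton: track running tasks, kill long runners.
    class TaskController
    {
        private class Entry
        {
            public Task Task;
            public DateTime Started;
            public CancellationTokenSource Cts;
        }

        private readonly ConcurrentDictionary<int, Entry> tasks =
            new ConcurrentDictionary<int, Entry>();
        private readonly TimeSpan timeout;
        private int nextId;

        public TaskController(TimeSpan timeout) { this.timeout = timeout; }

        public int Submit(Action<CancellationToken> work)
        {
            int id = Interlocked.Increment(ref nextId);
            var cts = new CancellationTokenSource();
            tasks[id] = new Entry
            {
                Task = Task.Run(() => work(cts.Token), cts.Token),
                Started = DateTime.UtcNow,
                Cts = cts
            };
            return id;
        }

        public void Kill(int id)
        {
            Entry e;
            if (tasks.TryRemove(id, out e)) e.Cts.Cancel();
        }

        // Run periodically from the master thread: drop finished tasks
        // from the bookkeeping table, cancel any task over its time budget.
        public void Sweep()
        {
            foreach (var pair in tasks)
            {
                if (pair.Value.Task.IsCompleted) { Entry _; tasks.TryRemove(pair.Key, out _); }
                else if (DateTime.UtcNow - pair.Value.Started > timeout) Kill(pair.Key);
            }
        }
    }
    ```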

    Read the article

  • Snow Leopard doesn't repair permissions, despite showing / saying its fixed them?

    - by Jules
    Some time ago, I used carbon copy as I was replacing the hard drive in my Mac Mini running Snow Leopard. Afterwards, on my new drive I had some permission problems. I've tried several times running repair permissions / repair disk from Disk Utility. It shows that there are problems and says it has corrected them. However the problems remain, so what can I do to fix them? It doesn't seem to cause me any problems that I can tell. EDIT Repairing permissions for “Macintosh HD” Permissions differ on "System/Library/CoreServices/RemoteManagement/ARDAgent.app/Contents/Support/Remote Desktop Message.app/Contents/Resources/Italian.lproj/UIAgent.nib", should be drwxr-xr-x , they are -rwxr-xr-x . Repaired "System/Library/CoreServices/RemoteManagement/ARDAgent.app/Contents/Support/Remote Desktop Message.app/Contents/Resources/Italian.lproj/UIAgent.nib". Permissions differ on "System/Library/Frameworks/JavaVM.framework/Versions/1.6.0/Classes/jconsole.jar", should be -rw-r--r-- , they are lrwxr-xr-x . Repaired "System/Library/Frameworks/JavaVM.framework/Versions/1.6.0/Classes/jconsole.jar". User differs on "System/Library/Frameworks/JavaVM.framework/Versions/1.6.0/Home/lib", should be 95, user is 0. Repaired "System/Library/Frameworks/JavaVM.framework/Versions/1.6.0/Home/lib". User differs on "System/Library/Frameworks/JavaVM.framework/Versions/1.6.0/Libraries", should be 95, user is 0. Repaired "System/Library/Frameworks/JavaVM.framework/Versions/1.6.0/Libraries". Permissions differ on "System/Library/Frameworks/JavaVM.framework/Versions/A/Resources/Deploy.bundle/Contents/Home/lib/security/cacerts", should be -rw-r--r-- , they are lrwxr-xr-x . Repaired "System/Library/Frameworks/JavaVM.framework/Versions/A/Resources/Deploy.bundle/Contents/Home/lib/security/cacerts". Permissions differ on "System/Library/Frameworks/JavaVM.framework/Versions/A/Resources/Deploy.bundle/Contents/Resources/Java/deploy.jar", should be -rw-r--r-- , they are lrwxr-xr-x . Repaired "System/Library/Frameworks/JavaVM.framework/Versions/A/Resources/Deploy.bundle/Contents/Resources/Java/deploy.jar". Permissions differ on "System/Library/Frameworks/JavaVM.framework/Versions/A/Resources/Deploy.bundle/Contents/Resources/Java/libdeploy.jnilib", should be -rwxr-xr-x , they are lrwxr-xr-x . Repaired "System/Library/Frameworks/JavaVM.framework/Versions/A/Resources/Deploy.bundle/Contents/Resources/Java/libdeploy.jnilib". Permissions differ on "System/Library/CoreServices/RemoteManagement/AppleVNCServer.bundle/Contents/Support/LockScreenLeopard386.app/Contents/Resources/Italian.lproj/MainMenu.nib", should be drwxr-xr-x , they are -rwxr-xr-x . Repaired "System/Library/CoreServices/RemoteManagement/AppleVNCServer.bundle/Contents/Support/LockScreenLeopard386.app/Contents/Resources/Italian.lproj/MainMenu.nib". Permissions differ on "System/Library/CoreServices/Menu Extras/RemoteDesktop.menu/Contents/Resources/zh_TW.lproj/RemoteDesktopMenu.nib", should be drwxr-xr-x , they are -rwxr-xr-x . Repaired "System/Library/CoreServices/Menu Extras/RemoteDesktop.menu/Contents/Resources/zh_TW.lproj/RemoteDesktopMenu.nib". Permissions differ on "System/Library/CoreServices/RemoteManagement/ARDAgent.app/Contents/Support/Remote Desktop Message.app/Contents/Resources/zh_TW.lproj/UIAgent.nib", should be drwxr-xr-x , they are -rwxr-xr-x . Repaired "System/Library/CoreServices/RemoteManagement/ARDAgent.app/Contents/Support/Remote Desktop Message.app/Contents/Resources/zh_TW.lproj/UIAgent.nib". 
Permissions differ on "System/Library/CoreServices/RemoteManagement/AppleVNCServer.bundle/Contents/Support/LockScreen.app/Contents/Resources/zh_TW.lproj/MainMenu.nib", should be drwxr-xr-x , they are -rwxr-xr-x . Repaired "System/Library/CoreServices/RemoteManagement/AppleVNCServer.bundle/Contents/Support/LockScreen.app/Contents/Resources/zh_TW.lproj/MainMenu.nib". Permissions differ on "System/Library/CoreServices/Menu Extras/RemoteDesktop.menu/Contents/Resources/zh_CN.lproj/RemoteDesktopMenu.nib", should be drwxr-xr-x , they are -rwxr-xr-x . Repaired "System/Library/CoreServices/Menu Extras/RemoteDesktop.menu/Contents/Resources/zh_CN.lproj/RemoteDesktopMenu.nib". Permissions differ on "System/Library/CoreServices/RemoteManagement/ARDAgent.app/Contents/Support/Remote Desktop Message.app/Contents/Resources/zh_CN.lproj/UIAgent.nib", should be drwxr-xr-x , they are -rwxr-xr-x . Repaired "System/Library/CoreServices/RemoteManagement/ARDAgent.app/Contents/Support/Remote Desktop Message.app/Contents/Resources/zh_CN.lproj/UIAgent.nib". Permissions differ on "System/Library/CoreServices/RemoteManagement/AppleVNCServer.bundle/Contents/Support/LockScreen.app/Contents/Resources/zh_CN.lproj/MainMenu.nib", should be drwxr-xr-x , they are -rwxr-xr-x . Repaired "System/Library/CoreServices/RemoteManagement/AppleVNCServer.bundle/Contents/Support/LockScreen.app/Contents/Resources/zh_CN.lproj/MainMenu.nib". Permissions differ on "System/Library/CoreServices/Menu Extras/RemoteDesktop.menu/Contents/Resources/ko.lproj/RemoteDesktopMenu.nib", should be drwxr-xr-x , they are -rwxr-xr-x . Repaired "System/Library/CoreServices/Menu Extras/RemoteDesktop.menu/Contents/Resources/ko.lproj/RemoteDesktopMenu.nib". Permissions differ on "System/Library/CoreServices/RemoteManagement/ARDAgent.app/Contents/Support/Remote Desktop Message.app/Contents/Resources/ko.lproj/UIAgent.nib", should be drwxr-xr-x , they are -rwxr-xr-x . Repaired "System/Library/CoreServices/RemoteManagement/ARDAgent.app/Contents/Support/Remote Desktop Message.app/Contents/Resources/ko.lproj/UIAgent.nib". Permissions differ on "System/Library/CoreServices/RemoteManagement/AppleVNCServer.bundle/Contents/Support/LockScreen.app/Contents/Resources/ko.lproj/MainMenu.nib", should be drwxr-xr-x , they are -rwxr-xr-x . Repaired "System/Library/CoreServices/RemoteManagement/AppleVNCServer.bundle/Contents/Support/LockScreen.app/Contents/Resources/ko.lproj/MainMenu.nib". Permissions differ on "System/Library/CoreServices/Menu Extras/RemoteDesktop.menu/Contents/Resources/Dutch.lproj/RemoteDesktopMenu.nib", should be drwxr-xr-x , they are -rwxr-xr-x . Repaired "System/Library/CoreServices/Menu Extras/RemoteDesktop.menu/Contents/Resources/Dutch.lproj/RemoteDesktopMenu.nib". Permissions differ on "System/Library/CoreServices/RemoteManagement/ARDAgent.app/Contents/Support/Remote Desktop Message.app/Contents/Resources/Dutch.lproj/UIAgent.nib", should be drwxr-xr-x , they are -rwxr-xr-x . Repaired "System/Library/CoreServices/RemoteManagement/ARDAgent.app/Contents/Support/Remote Desktop Message.app/Contents/Resources/Dutch.lproj/UIAgent.nib". Permissions differ on "System/Library/CoreServices/RemoteManagement/AppleVNCServer.bundle/Contents/Support/LockScreen.app/Contents/Resources/Dutch.lproj/MainMenu.nib", should be drwxr-xr-x , they are -rwxr-xr-x . Repaired "System/Library/CoreServices/RemoteManagement/AppleVNCServer.bundle/Contents/Support/LockScreen.app/Contents/Resources/Dutch.lproj/MainMenu.nib". 
Permissions differ on "System/Library/CoreServices/Menu Extras/RemoteDesktop.menu/Contents/Resources/Italian.lproj/RemoteDesktopMenu.nib", should be drwxr-xr-x , they are -rwxr-xr-x . Repaired "System/Library/CoreServices/Menu Extras/RemoteDesktop.menu/Contents/Resources/Italian.lproj/RemoteDesktopMenu.nib". Permissions differ on "System/Library/CoreServices/RemoteManagement/AppleVNCServer.bundle/Contents/Support/LockScreen.app/Contents/Resources/Italian.lproj/MainMenu.nib", should be drwxr-xr-x , they are -rwxr-xr-x . Repaired "System/Library/CoreServices/RemoteManagement/AppleVNCServer.bundle/Contents/Support/LockScreen.app/Contents/Resources/Italian.lproj/MainMenu.nib". Permissions differ on "System/Library/CoreServices/Menu Extras/RemoteDesktop.menu/Contents/Resources/Spanish.lproj/RemoteDesktopMenu.nib", should be drwxr-xr-x , they are -rwxr-xr-x . Repaired "System/Library/CoreServices/Menu Extras/RemoteDesktop.menu/Contents/Resources/Spanish.lproj/RemoteDesktopMenu.nib". Permissions differ on "System/Library/CoreServices/RemoteManagement/ARDAgent.app/Contents/Support/Remote Desktop Message.app/Contents/Resources/Spanish.lproj/UIAgent.nib", should be drwxr-xr-x , they are -rwxr-xr-x . Repaired "System/Library/CoreServices/RemoteManagement/ARDAgent.app/Contents/Support/Remote Desktop Message.app/Contents/Resources/Spanish.lproj/UIAgent.nib". Permissions differ on "System/Library/CoreServices/RemoteManagement/AppleVNCServer.bundle/Contents/Support/LockScreen.app/Contents/Resources/Spanish.lproj/MainMenu.nib", should be drwxr-xr-x , they are -rwxr-xr-x . Repaired "System/Library/CoreServices/RemoteManagement/AppleVNCServer.bundle/Contents/Support/LockScreen.app/Contents/Resources/Spanish.lproj/MainMenu.nib". Permissions differ on "System/Library/CoreServices/Menu Extras/RemoteDesktop.menu/Contents/Resources/French.lproj/RemoteDesktopMenu.nib", should be drwxr-xr-x , they are -rwxr-xr-x . Repaired "System/Library/CoreServices/Menu Extras/RemoteDesktop.menu/Contents/Resources/French.lproj/RemoteDesktopMenu.nib". Permissions differ on "System/Library/CoreServices/RemoteManagement/ARDAgent.app/Contents/Support/Remote Desktop Message.app/Contents/Resources/French.lproj/UIAgent.nib", should be drwxr-xr-x , they are -rwxr-xr-x . Repaired "System/Library/CoreServices/RemoteManagement/ARDAgent.app/Contents/Support/Remote Desktop Message.app/Contents/Resources/French.lproj/UIAgent.nib". Permissions differ on "System/Library/CoreServices/RemoteManagement/AppleVNCServer.bundle/Contents/Support/LockScreen.app/Contents/Resources/French.lproj/MainMenu.nib", should be drwxr-xr-x , they are -rwxr-xr-x . Repaired "System/Library/CoreServices/RemoteManagement/AppleVNCServer.bundle/Contents/Support/LockScreen.app/Contents/Resources/French.lproj/MainMenu.nib". Permissions differ on "System/Library/CoreServices/Menu Extras/RemoteDesktop.menu/Contents/Resources/German.lproj/RemoteDesktopMenu.nib", should be drwxr-xr-x , they are -rwxr-xr-x . Repaired "System/Library/CoreServices/Menu Extras/RemoteDesktop.menu/Contents/Resources/German.lproj/RemoteDesktopMenu.nib". Permissions differ on "System/Library/CoreServices/RemoteManagement/ARDAgent.app/Contents/Support/Remote Desktop Message.app/Contents/Resources/German.lproj/UIAgent.nib", should be drwxr-xr-x , they are -rwxr-xr-x . Repaired "System/Library/CoreServices/RemoteManagement/ARDAgent.app/Contents/Support/Remote Desktop Message.app/Contents/Resources/German.lproj/UIAgent.nib". 
Permissions differ on "System/Library/CoreServices/RemoteManagement/AppleVNCServer.bundle/Contents/Support/LockScreen.app/Contents/Resources/German.lproj/MainMenu.nib", should be drwxr-xr-x , they are -rwxr-xr-x . Repaired "System/Library/CoreServices/RemoteManagement/AppleVNCServer.bundle/Contents/Support/LockScreen.app/Contents/Resources/German.lproj/MainMenu.nib". Permissions differ on "System/Library/CoreServices/Menu Extras/RemoteDesktop.menu/Contents/Resources/Japanese.lproj/RemoteDesktopMenu.nib", should be drwxr-xr-x , they are -rwxr-xr-x . Repaired "System/Library/CoreServices/Menu Extras/RemoteDesktop.menu/Contents/Resources/Japanese.lproj/RemoteDesktopMenu.nib". Permissions differ on "System/Library/CoreServices/RemoteManagement/ARDAgent.app/Contents/Support/Remote Desktop Message.app/Contents/Resources/Japanese.lproj/UIAgent.nib", should be drwxr-xr-x , they are -rwxr-xr-x . Repaired "System/Library/CoreServices/RemoteManagement/ARDAgent.app/Contents/Support/Remote Desktop Message.app/Contents/Resources/Japanese.lproj/UIAgent.nib". Permissions differ on "System/Library/CoreServices/RemoteManagement/AppleVNCServer.bundle/Contents/Support/LockScreen.app/Contents/Resources/Japanese.lproj/MainMenu.nib", should be drwxr-xr-x , they are -rwxr-xr-x . Repaired "System/Library/CoreServices/RemoteManagement/AppleVNCServer.bundle/Contents/Support/LockScreen.app/Contents/Resources/Japanese.lproj/MainMenu.nib". Permissions differ on "System/Library/Frameworks/JavaVM.framework/Versions/1.6.0/Resources/JavaPluginCocoa.bundle/Contents/Resources/Java/deploy.jar", should be -rw-r--r-- , they are lrwxr-xr-x . Repaired "System/Library/Frameworks/JavaVM.framework/Versions/1.6.0/Resources/JavaPluginCocoa.bundle/Contents/Resources/Java/deploy.jar". Permissions differ on "System/Library/Frameworks/JavaVM.framework/Versions/1.6.0/Resources/JavaPluginCocoa.bundle/Contents/Resources/Java/libdeploy.jnilib", should be -rwxr-xr-x , they are lrwxr-xr-x . Repaired "System/Library/Frameworks/JavaVM.framework/Versions/1.6.0/Resources/JavaPluginCocoa.bundle/Contents/Resources/Java/libdeploy.jnilib". Permissions differ on "System/Library/CoreServices/Menu Extras/RemoteDesktop.menu/Contents/Resources/English.lproj/RemoteDesktopMenu.nib", should be drwxr-xr-x , they are -rwxr-xr-x . Repaired "System/Library/CoreServices/Menu Extras/RemoteDesktop.menu/Contents/Resources/English.lproj/RemoteDesktopMenu.nib". Warning: SUID file "System/Library/CoreServices/RemoteManagement/ARDAgent.app/Contents/MacOS/ARDAgent" has been modified and will not be repaired. Permissions differ on "System/Library/CoreServices/RemoteManagement/ARDAgent.app/Contents/Support/Remote Desktop Message.app/Contents/Resources/English.lproj/UIAgent.nib", should be drwxr-xr-x , they are -rwxr-xr-x . Repaired "System/Library/CoreServices/RemoteManagement/ARDAgent.app/Contents/Support/Remote Desktop Message.app/Contents/Resources/English.lproj/UIAgent.nib". Permissions differ on "System/Library/CoreServices/RemoteManagement/AppleVNCServer.bundle/Contents/Support/LockScreen.app/Contents/Resources/English.lproj/MainMenu.nib", should be drwxr-xr-x , they are -rwxr-xr-x . Repaired "System/Library/CoreServices/RemoteManagement/AppleVNCServer.bundle/Contents/Support/LockScreen.app/Contents/Resources/English.lproj/MainMenu.nib". Group differs on "private/var/log/kernel.log", should be 80, group is 0. Permissions differ on "private/var/log/kernel.log", should be -rw-r----- , they are -rw-r--r-- . Repaired "private/var/log/kernel.log". 
Group differs on "private/var/log/secure.log", should be 80, group is 0. Permissions differ on "private/var/log/secure.log", should be -rw-r----- , they are -rw-r--r-- . Repaired "private/var/log/secure.log". Group differs on "private/var/log/system.log", should be 80, group is 0. Permissions differ on "private/var/log/system.log", should be -rw-r----- , they are -rw-r--r-- . Repaired "private/var/log/system.log". Permissions repair complete

    Read the article

  • Parallel Classloading Revisited: Fully Concurrent Loading

    - by davidholmes
    Java 7 introduced support for parallel classloading. A description of that project and its goals can be found here: http://openjdk.java.net/groups/core-libs/ClassLoaderProposal.html The solution for parallel classloading was to add to each class loader a ConcurrentHashMap<String, Object>, referenced through a new field, parallelLockMap. This contains a mapping from class names to Objects to use as a classloading lock for that class name. This was then used in the following way: protected Class<?> loadClass(String name, boolean resolve) throws ClassNotFoundException { synchronized (getClassLoadingLock(name)) { // First, check if the class has already been loaded Class<?> c = findLoadedClass(name); if (c == null) { long t0 = System.nanoTime(); try { if (parent != null) { c = parent.loadClass(name, false); } else { c = findBootstrapClassOrNull(name); } } catch (ClassNotFoundException e) { // ClassNotFoundException thrown if class not found // from the non-null parent class loader } if (c == null) { // If still not found, then invoke findClass in order // to find the class. long t1 = System.nanoTime(); c = findClass(name); // this is the defining class loader; record the stats sun.misc.PerfCounter.getParentDelegationTime().addTime(t1 - t0); sun.misc.PerfCounter.getFindClassTime().addElapsedTimeFrom(t1); sun.misc.PerfCounter.getFindClasses().increment(); } } if (resolve) { resolveClass(c); } return c; } } Where getClassLoadingLock simply does: protected Object getClassLoadingLock(String className) { Object lock = this; if (parallelLockMap != null) { Object newLock = new Object(); lock = parallelLockMap.putIfAbsent(className, newLock); if (lock == null) { lock = newLock; } } return lock; } This approach is very inefficient in terms of the space used per map and the number of maps. First, there is a map per classloader. As per the code above, under normal delegation the current classloader creates and acquires a lock for the given class, checks if it is already loaded, then asks its parent to load it; the parent in turn creates another lock in its own map, checks if the class is already loaded and then delegates to its parent and so on till the boot loader is invoked for which there is no map and no lock. So even in the simplest of applications, you will have two maps (in the system and extensions loaders) for every class that has to be loaded transitively from the application's main class. If you knew beforehand which loader would actually load the class the locking would only need to be performed in that loader. As it stands the locking is completely unnecessary for all classes loaded by the boot loader. Secondly, once loading has completed and findClass will return the class, the lock and the map entry are completely unnecessary. But as it stands, the lock objects and their associated entries are never removed from the map. It is worth understanding exactly what the locking is intended to achieve, as this will help us understand potential remedies to the above inefficiencies. Given this is the support for parallel classloading, the class loader itself is unlikely to need to guard against concurrent load attempts - and if that were not the case it is likely that the classloader would need a different means to protect itself rather than a lock per class. 
Ultimately when a class file is located and the class has to be loaded, defineClass is called which calls into the VM - the VM does not require any locking at the Java level and uses its own mutexes for guarding its internal data structures (such as the system dictionary). The classloader locking is primarily needed to address the following situation: if two threads attempt to load the same class, one will initiate the request through the appropriate loader and eventually cause defineClass to be invoked. Meanwhile the second attempt will block trying to acquire the lock. Once the class is loaded the first thread will release the lock, allowing the second to acquire it. The second thread then sees that the class has now been loaded and will return that class. Neither thread can tell which did the loading and they both continue successfully. Consider if no lock was acquired in the classloader. Both threads will eventually locate the file for the class, read in the bytecodes and call defineClass to actually load the class. In this case the first to call defineClass will succeed, while the second will encounter an exception due to an attempted redefinition of an existing class. It is solely for this error condition that the lock has to be used. (Note that parallel capable classloaders should not need to be doing old deadlock-avoidance tricks like doing a wait() on the lock object!). There are a number of obvious things we can try to solve this problem and they basically take three forms: Remove the need for locking. This might be achieved by having a new version of defineClass which acts like defineClassIfNotPresent - simply returning an existing Class rather than triggering an exception. Increase the coarseness of locking to reduce the number of lock objects and/or maps. For example, using a single shared lockMap instead of a per-loader lockMap. Reduce the lifetime of lock objects so that entries are removed from the map when no longer needed (e.g. remove after loading, use weak references to the lock objects and cleanup the map periodically). There are pros and cons to each of these approaches. Unfortunately a significant "con" is that the API introduced in Java 7 to support parallel classloading has essentially mandated that these locks do in fact exist, and they are accessible to the application code (indirectly through the classloader if it exposes them - which a custom loader might do - and regardless they are accessible to custom classloaders). So while we can reason that we could do parallel classloading with no locking, we cannot implement this without breaking the specification for parallel classloading that was put in place for Java 7. Similarly we might reason that we can remove a mapping (and the lock object) because the class is already loaded, but this would again violate the specification because it can be reasoned that the following assertion should hold true: Object lock1 = loader.getClassLoadingLock(name); loader.loadClass(name); Object lock2 = loader.getClassLoadingLock(name); assert lock1 == lock2; Without modifying the specification, or at least doing some creative wordsmithing on it, options 1 and 3 are precluded. 
Even then there are caveats, for example if findLoadedClass is not atomic with respect to defineClass, then you can have concurrent calls to findLoadedClass from different threads and that could be expensive (this is also an argument against moving findLoadedClass outside the locked region - it may speed up the common case where the class is already loaded, but the cost of re-executing after acquiring the lock could be prohibitive). Even option 2 might need some wordsmithing on the specification because the specification for getClassLoadingLock states "returns a dedicated object associated with the specified class name". The question is, what does "dedicated" mean here? Does it mean unique in the sense that the returned object is only associated with the given class in the current loader? Or can the object actually guard loading of multiple classes, possibly across different class loaders? So it seems that changing the specification will be inevitable if we wish to do something here. In which case let's go for something that more cleanly defines what we want to be doing: fully concurrent class-loading. Note: defineClassIfNotPresent is already implemented in the VM as find_or_define_class. It is only used if the AllowParallelDefineClass flag is set. This gives us an easy hook into existing VM mechanics. Proposal: Fully Concurrent ClassLoaders The proposal is that we expand on the notion of a parallel capable class loader and define a "fully concurrent parallel capable class loader" or fully concurrent loader, for short. A fully concurrent loader uses no synchronization in loadClass and the VM uses the "parallel define class" mechanism. For a fully concurrent loader getClassLoadingLock() can return null (or perhaps not - it doesn't matter as we won't use the result anyway). At present we have not made any changes to this method. All the parallel capable JDK classloaders become fully concurrent loaders. This doesn't require any code re-design as none of the mechanisms implemented rely on the per-name locking provided by the parallelLockMap. This seems to give us a path to remove all locking at the Java level during classloading, while retaining full compatibility with Java 7 parallel capable loaders. Fully concurrent loaders will still encounter the performance penalty associated with concurrent attempts to find and prepare a class's bytecode for definition by the VM. What this penalty is depends on the number of concurrent load attempts possible (a function of the number of threads and the application logic, and dependent on the number of processors), and the costs associated with finding and preparing the bytecodes. This obviously has to be measured across a range of applications. Preliminary webrevs: http://cr.openjdk.java.net/~dholmes/concurrent-loaders/webrev.hotspot/ http://cr.openjdk.java.net/~dholmes/concurrent-loaders/webrev.jdk/ Please direct all comments to the mailing list [email protected].
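    As an aside for .NET readers, the per-name locking pattern discussed above (the parallelLockMap) translates almost directly to a ConcurrentDictionary. This sketch shows only the pattern, with hypothetical LoadClass/FindOrDefine stand-ins rather than the JDK's code:

    ```csharp
    using System.Collections.Concurrent;

    // Per-name lock striping, analogous to Java 7's parallelLockMap:
    // one lock object per class name, created on first request.
    class ParallelCapableLoader
    {
        private readonly ConcurrentDictionary<string, object> lockMap =
            new ConcurrentDictionary<string, object>();

        private object GetClassLoadingLock(string className)
        {
            // GetOrAdd plays the role of putIfAbsent in the Java code above.
            return lockMap.GetOrAdd(className, _ => new object());
        }

        public object LoadClass(string name)
        {
            lock (GetClassLoadingLock(name))
            {
                // check-if-loaded / delegate-to-parent / define elided;
                // note the entry is never removed, which is the same
                // map-growth issue the post identifies in the Java code.
                return FindOrDefine(name);
            }
        }

        private object FindOrDefine(string name) { return name; /* placeholder */ }
    }
    ```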

    Read the article

  • What happens differently when you add a task Asynchronously on GAE?

    - by Ben Grunfeld
    Google's doc on async tasks assumes knowledge of the difference between regular and asynchronously added tasks. add_async(task, transactional=False, rpc=None): asynchronously add a Task or a list of Tasks to this Queue. How is adding tasks asynchronously different from adding them regularly? I.e., what is the difference between using add(task, transactional=False) and add_async(task, transactional=False, rpc=None)? I've heard that adding tasks regularly blocks certain things. Any explanation of what it blocks and how, and of how async tasks avoid blocking, would be greatly appreciated.

    Read the article

  • In Parallel.For, sharing more than one value

    - by user347918
    Here is the problem:

        long sum = 0;
        Parallel.For(1, 10000, y => { sum += y; });

    The solution is:

        Parallel.For<int>(0, result.Count, () => 0,
            (i, loop, subtotal) => { subtotal += result[i]; return subtotal; },
            (x) => Interlocked.Add(ref sum, x));

    But what if there are two accumulators in this code? For example:

        long sum1 = 0;
        long sum2 = 0;
        Parallel.For(1, 10000, y => { sum1 += y; sum2 = sum1 * y; });

    What should we do then? I am guessing that we have to use an array:

        int[] s = new int[2];
        Parallel.For<int[]>(0, result.Count, () => s,
            (i, loop, subtotal) =>
            {
                subtotal[0] += result[i];
                subtotal[1] -= result[i];
                return subtotal;
            },
            (x) => Interlocked.Add(ref sum1, x[0]));
            // but how about sum2? I tried several ways but it doesn't work, for example:
            // (x[0]) => Interlocked.Add(ref sum1, x[0])
            // (x[1]) => Interlocked.Add(ref sum2, x[1])
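    A working pattern for two accumulators, as a sketch (the sample data is invented here; note that the original sum2 = sum1 * y depends on iteration order and cannot be parallelized this way, so this assumes two independent accumulators, as in the array attempt above): give each worker its own long[2] via the localInit delegate, then fold both totals in the localFinally delegate with two Interlocked.Add calls, each of which is atomic on its own:

        using System;
        using System.Threading;
        using System.Threading.Tasks;

        class TwoSums
        {
            static void Main()
            {
                int[] result = { 1, 2, 3, 4, 5 }; // sample data
                long sum1 = 0, sum2 = 0;

                Parallel.For<long[]>(0, result.Length,
                    // localInit: each worker gets its OWN accumulator pair,
                    // so there is no shared state inside the loop body
                    () => new long[2],
                    (i, loop, subtotal) =>
                    {
                        subtotal[0] += result[i];
                        subtotal[1] -= result[i];
                        return subtotal;
                    },
                    // localFinally: fold each worker's totals into the shared sums
                    (x) =>
                    {
                        Interlocked.Add(ref sum1, x[0]);
                        Interlocked.Add(ref sum2, x[1]);
                    });

                Console.WriteLine("sum1={0}, sum2={1}", sum1, sum2);
            }
        }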

    Read the article

  • Parallel doseq for Clojure

    - by andrew cooke
    I haven't used multithreading in Clojure at all, so I'm unsure where to start. I have a doseq whose body can run in parallel. What I'd like is for there always to be 3 threads running (leaving 1 core free) that evaluate the body in parallel until the range is exhausted. There's no shared state, nothing complicated - the equivalent of Python's multiprocessing would be just fine. So, something like:

        (dopar 3 [i (range 100)]
          ; repeated 100 times in 3 parallel threads...
          ...)

    Where should I start looking? Is there a command for this? A standard package? A good reference? So far I have found pmap, and could use that (how do I restrict it to 3 at a time? it looks like it uses 32 at a time - no, the source says 2 + the number of processors), but it seems like this is a basic primitive that should already exist somewhere.

    Clarification: I really would like to control the number of threads. I have processes that are long-running and use a fair amount of memory, so creating a large number and hoping things work out OK isn't a good approach (for example, one run uses a significant chunk of available memory).

    Update: I'm starting to write a macro that does this, and I need a semaphore (or a mutex, or an atom I can wait on). Do semaphores exist in Clojure? Or should I use a ThreadPoolExecutor? It seems odd to have to pull so much in from Java - I thought parallel programming in Clojure was supposed to be easy... Maybe I am thinking about this completely the wrong way? Hmmm. Agents?
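    For comparison with the TPL entries elsewhere on this page, the primitive being asked for - run a loop body with a fixed degree of parallelism - looks like this in C# (a cross-language sketch only, not Clojure; on the JVM, the closest analogue is a fixed-size executor from java.util.concurrent via interop):

        using System.Linq;
        using System.Threading.Tasks;

        class DoPar
        {
            static void Main()
            {
                // At most 3 bodies run at once, whatever the core count.
                var options = new ParallelOptions { MaxDegreeOfParallelism = 3 };
                Parallel.ForEach(Enumerable.Range(0, 100), options, i =>
                {
                    // long-running, memory-hungry body goes here
                });
            }
        }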

    Read the article

  • .NET 4 ... Parallel.ForEach() question

    - by CirrusFlyer
    I understand that the new TPL (Task Parallel Library) has implemented Parallel.ForEach() such that it works with "expressed parallelism." Meaning, it does not guarantee that your delegates will run in multiple threads; rather, it checks to see if the host platform has multiple cores, and only if so does it distribute the work across the cores (essentially one thread per core). If the host system does not have multiple cores (getting harder and harder to find such a computer) then it will run your code sequentially, like a "regular" foreach loop would. Pretty cool stuff, frankly. Normally I would do something like the following to place my long-running operation on a background thread from the ThreadPool:

        ThreadPool.QueueUserWorkItem(new WaitCallback(targetMethod), new Object2PassIn());

    In a situation whereby the host computer only has a single core, does the TPL's Parallel.ForEach() automatically place the invocation on a background thread? Or should I manually invoke any TPL calls from a background thread, so that if I am executing on a single-core computer, at least that logic will be off of the GUI's dispatching thread? My concern is that if I leave the TPL in charge of all this, I want to ensure that if it determines it's a single-core box, it still marshals the code inside the Parallel.ForEach() loop onto a background thread like I would have done, so as to not block my GUI. Thanks for any thoughts or advice you may have ...
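    One point worth making concrete: Parallel.ForEach is a blocking call on whichever thread invokes it, on single-core and multi-core machines alike, so a GUI app needs to move the whole loop off the dispatcher thread itself. A minimal sketch of that (the work items and sleep are stand-ins, not from the original post):

        using System;
        using System.Threading;
        using System.Threading.Tasks;

        class GuiOffload
        {
            // Called from the UI thread; returns immediately.
            static Task StartBackgroundLoop(int[] workItems)
            {
                return Task.Factory.StartNew(() =>
                {
                    // Parallel.ForEach blocks THIS (background) thread until
                    // it finishes, not the UI thread that called us.
                    Parallel.ForEach(workItems, item =>
                    {
                        Thread.Sleep(100); // stand-in for real work
                    });
                });
            }

            static void Main()
            {
                var t = StartBackgroundLoop(new[] { 1, 2, 3, 4 });
                Console.WriteLine("Calling thread is free while work runs...");
                t.Wait();
            }
        }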

    Read the article

  • Parallel features in .Net 4.0

    - by Jonathan.Peppers
    I have been going over the practicality of some of the new parallel features in .NET 4.0. Say I have code like so:

        foreach (var item in myEnumerable)
            myDatabase.Insert(item.ConvertToDatabase());

    Imagine myDatabase.Insert is performing some work to insert to a SQL database. Theoretically you could write:

        Parallel.ForEach(myEnumerable, item => myDatabase.Insert(item.ConvertToDatabase()));

    And automatically you get code that takes advantage of multiple cores. But what if myEnumerable can only be interacted with by a single thread? Will the Parallel class enumerate with a single thread and only dispatch the results to worker threads in the loop? What if myDatabase can only be interacted with by a single thread? It would certainly not be better to make a database connection per iteration of the loop. Finally, what if my "var item" happens to be a UserControl or something that must be interacted with on the UI thread? What design pattern should I follow to solve these problems? It looks to me like switching over to Parallel/PLinq/etc. is not exactly easy when you are dealing with real-world applications.
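    On the single-threaded-resource question, one common pattern (a sketch under assumed names - the connection type here is a hypothetical stand-in, not a real provider) is to give each worker thread its own resource through the localInit/localFinally overload of Parallel.ForEach, rather than opening a connection per iteration or sharing one across threads. The default partitioner already reads the source enumerator under a lock and hands out chunks, so a single-threaded enumerable is consumed safely:

        using System;
        using System.Threading.Tasks;

        class PerWorkerResource
        {
            // Hypothetical stand-in for a connection that must not be shared across threads.
            class DbConnection : IDisposable
            {
                public void Insert(int row) { /* ... */ }
                public void Dispose() { }
            }

            static void Main()
            {
                int[] items = { 1, 2, 3, 4, 5 };

                Parallel.ForEach(items,
                    // localInit: one connection per worker thread, not per iteration
                    () => new DbConnection(),
                    (item, loop, conn) =>
                    {
                        conn.Insert(item);
                        return conn;
                    },
                    // localFinally: dispose each worker's connection exactly once
                    conn => conn.Dispose());
            }
        }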

    Read the article

  • Task Manager Does Not Start Every Time

    - by diek
    I have had a problem that started some time ago, 6 months maybe. I should have noted the first instance, but I didn't. I am using Windows 7 Pro, 32-bit. Under normal circumstances I can open the Task Manager via the taskbar or Ctrl+Alt+Del. When a stuck program causes a freeze or a non-responsive system and I try to open the Task Manager, it will not work. I have had plenty of similar problems in the past and had no trouble getting it open. I have searched the internet, but the only results I can find are for when the Task Manager will not start under any situation. I am running ESET NOD32 as the anti-virus. The latest example happened when I opened a new tab in Google and tried to copy an image; Google accounts for at least 50% of the examples. I ran the System File Checker tool (sfc /scannow) as recommended in another post, and no errors were returned. Any guidance would be appreciated.

    Read the article

  • Scheduled Task unable to create/update any files

    - by East of Nowhere
    I have several tasks in Task Scheduler on Windows Server 2008 SP2 (32-bit), and they all successfully "do their work" except for creating or updating any files on Windows. All the tasks point to simple .cmd files that contain the real work, but beyond that there's no pattern: some call robocopy with the /LOG option, some call .exe files I wrote that manipulate XML files, some just do stuff with > redirection. With all of them, if I double-click the .cmd file myself, it works fine and the files are created or updated or whatever. If I run it from Task Scheduler (on the schedule or just by clicking Run), the task always completes "successfully" but without any of the desired changes to files. I don't see any "unable to create file" errors in Event Viewer either. The tasks do all run as a specific account, but I have logged in as that account and verified that it has permissions to do everything it needs to. Further details: the task is set to "Run whether user is logged on or not", and Configured for "Windows Vista or Windows Server 2008" (there is no other "Configured for" option available).

    Read the article

  • ParallelWork: Feature rich multithreaded fluent task execution library for WPF

    - by oazabir
    ParallelWork is an open source, free helper class that lets you run multiple pieces of work on parallel threads, get success, failure and progress updates on the WPF UI thread, wait for work to complete, abort all work (in case of shutdown), queue work to run after a certain time, and chain parallel work one after another. It's more convenient than using .NET's BackgroundWorker because you don't have to declare one component per work item, nor do you need to declare event handlers to receive notifications and carry additional data through private variables. You can safely pass objects produced on a different thread to the success callback. Moreover, you can wait for work to complete before you do a certain operation, and you can abort all parallel work while it is in flight. If you are building a highly responsive WPF UI where you have to carry out multiple jobs in parallel yet want full control over the completion and cancellation of those jobs, then the ParallelWork library is the right solution for you.

    I am using the ParallelWork library in my PlantUmlEditor project, which is a free open source UML editor built on WPF. You can see some realistic use of the ParallelWork library there. Moreover, the test project comes with 400 lines of Behavior Driven Development flavored tests that confirm it really does what it says it does. The source code of the library is part of the "Utilities" project in the PlantUmlEditor source code hosted at Google Code.

    The library comes in two flavors. One is the ParallelWork static class, which has a collection of static methods that you can call. The other is the Start class, a fluent wrapper over the ParallelWork class that makes for more readable and aesthetically pleasing code.

    ParallelWork allows you to start work immediately on a separate thread, or to queue work to start after some duration. You can start immediate work on a new thread using the following methods:

        void StartNow(Action doWork, Action onComplete)
        void StartNow(Action doWork, Action onComplete, Action<Exception> failed)

    For example:

        ParallelWork.StartNow(() =>
        {
            workStartedAt = DateTime.Now;
            Thread.Sleep(howLongWorkTakes);
        },
        () =>
        {
            workEndedAt = DateTime.Now;
        });

    Or you can use the fluent way, Start.Work:

        Start.Work(() =>
        {
            workStartedAt = DateTime.Now;
            Thread.Sleep(howLongWorkTakes);
        })
        .OnComplete(() =>
        {
            workCompletedAt = DateTime.Now;
        })
        .Run();

    Besides simple execution of work on a parallel thread, you can have the parallel thread produce some object and then pass it to the success callback by using these overloads:

        void StartNow<T>(Func<T> doWork, Action<T> onComplete)
        void StartNow<T>(Func<T> doWork, Action<T> onComplete, Action<Exception> fail)

    For example:

        ParallelWork.StartNow<Dictionary<string, string>>(
            () =>
            {
                test = new Dictionary<string, string>();
                test.Add("test", "test");
                return test;
            },
            (result) =>
            {
                Assert.True(result.ContainsKey("test"));
            });

    Or, the fluent way:

        Start<Dictionary<string, string>>.Work(() =>
        {
            test = new Dictionary<string, string>();
            test.Add("test", "test");
            return test;
        })
        .OnComplete((result) =>
        {
            Assert.True(result.ContainsKey("test"));
        })
        .Run();

    You can also start work after some delay using these methods:

        DispatcherTimer StartAfter(Action onComplete, TimeSpan duration)
        DispatcherTimer StartAfter(Action doWork, Action onComplete, TimeSpan duration)

    You can use this to perform some timed operation on the UI thread, as well as to perform some operation on a separate thread after some time. For example:
        ParallelWork.StartAfter(
            () =>
            {
                workStartedAt = DateTime.Now;
                Thread.Sleep(howLongWorkTakes);
            },
            () =>
            {
                workCompletedAt = DateTime.Now;
            },
            waitDuration);

    Or, the fluent way:

        Start.Work(() =>
        {
            workStartedAt = DateTime.Now;
            Thread.Sleep(howLongWorkTakes);
        })
        .OnComplete(() =>
        {
            workCompletedAt = DateTime.Now;
        })
        .RunAfter(waitDuration);

    There are several overloads of these functions that take an exception callback for handling failures, or that deliver progress updates from the background thread while work is in progress. For example, I use it in my PlantUmlEditor to perform a background update of the application:

        // Check if there's a newer version of the app
        Start<bool>.Work(() =>
        {
            return UpdateChecker.HasUpdate(Settings.Default.DownloadUrl);
        })
        .OnComplete((hasUpdate) =>
        {
            if (hasUpdate)
            {
                if (MessageBox.Show(Window.GetWindow(me),
                    "There's a newer version available. Do you want to download and install?",
                    "New version available",
                    MessageBoxButton.YesNo,
                    MessageBoxImage.Information) == MessageBoxResult.Yes)
                {
                    ParallelWork.StartNow(() =>
                    {
                        var tempPath = System.IO.Path.Combine(
                            Environment.GetFolderPath(Environment.SpecialFolder.ApplicationData),
                            Settings.Default.SetupExeName);
                        UpdateChecker.DownloadLatestUpdate(Settings.Default.DownloadUrl, tempPath);
                    },
                    () => { },
                    (x) =>
                    {
                        MessageBox.Show(Window.GetWindow(me),
                            "Download failed. When you run next time, it will try downloading again.",
                            "Download failed",
                            MessageBoxButton.OK,
                            MessageBoxImage.Warning);
                    });
                }
            }
        })
        .OnException((x) =>
        {
            MessageBox.Show(Window.GetWindow(me), x.Message, "Download failed",
                MessageBoxButton.OK, MessageBoxImage.Exclamation);
        });

    The above code shows how to get exception callbacks on the UI thread so that you can take the necessary actions there. Moreover, it shows how you can chain two parallel works to happen one after another.

    Sometimes you want to do some parallel work when the user does some activity in the UI. For example, you might want to save the file in an editor every 10 seconds while the user is typing. In such a case, you need to make sure you don't start another parallel work every 10 seconds while one is already queued; you should start new work only when no other background work is going on. Here's how you can do it:

        private void ContentEditor_TextChanged(object sender, EventArgs e)
        {
            if (!ParallelWork.IsAnyWorkRunning())
            {
                ParallelWork.StartAfter(SaveAndRefreshDiagram, TimeSpan.FromSeconds(10));
            }
        }

    If you are shutting down your application and want to make sure no parallel work is going on, you can call the StopAll() method:

        ParallelWork.StopAll();

    If you want to block until all parallel work completes, you can call WaitForAllWork(TimeSpan timeout). It will block the current thread until all parallel work completes or the timeout period elapses:

        result = ParallelWork.WaitForAllWork(TimeSpan.FromSeconds(1));

    The result is true if all parallel work completed. If it's false, the timeout period elapsed and the parallel work did not complete.

    For details on how this library is built and how it works, please read the following CodeProject article:

    ParallelWork: Feature rich multithreaded fluent task execution library for WPF
    http://www.codeproject.com/KB/WPF/parallelwork.aspx

    If you like the article, please vote for me.

    Read the article

  • How to set up a Task Scheduler task that cleans up the Firefox window it opens?

    - by WilliamKF
    I have a Windows 7 Task Scheduler task that invokes firefox.exe on a URL which is an iMacro. Upon completion, the Firefox window is left open. I always have Firefox running, but it brings up a new window; I'm not sure if this is a new instance of Firefox or a second window. Upon completion of the macro, I'd like the extra window closed. The extra window has a blank tab and a tab for the macro that ran. How can I set up the task to clean up after itself?

    Read the article

  • Idle state detection for server

    - by odinmillion
    Windows has a service that detects the idle state. Details, from "Task Idle Conditions": the computer is considered idle if all the processors and all the disks were idle for more than 90% of the past 15 minutes, and if there is no keyboard or mouse input during this period of time. When the Task Scheduler service detects that the computer is idle, the service only waits for user input to mark the end of the idle state. This is very useful for ordinary PCs that have a keyboard and mouse: we can use the standard Task Scheduler to start some process like defrag when the PC is in the idle state, and stop it when the PC leaves the idle state. But what should we use on a standalone server without a keyboard and mouse? The server sometimes receives commands over TCP/IP and starts CPU and HDD activity, but sometimes CPU and HDD activity is at zero. I would like to use these periods to run defrag or another process, but such automatically started processes should be terminated when new commands arrive. The standard idle-state conditions can't help me, because there is no user input to mark the end of the idle state. I need a more customizable idle-state detector: automatically started processes shouldn't influence the idle state, but the PC should leave the idle state when another process appears. What should I use? Maybe some advanced task scheduler exists? Or should I write a utility in C#? I hope this is a standard task and all the useful utilities are already compiled :)
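    A minimal sketch of such a custom detector in C#, assuming polling of the standard performance counters (the 5% thresholds and 60-second window are illustrative, not the Task Scheduler's 90%/15-minute rule; a real implementation would also need per-process counters so the background job's own activity doesn't end the idle state, as the question notes):

        using System;
        using System.Diagnostics;
        using System.Threading;

        class IdleWatcher
        {
            static void Main()
            {
                var cpu = new PerformanceCounter("Processor", "% Processor Time", "_Total");
                var disk = new PerformanceCounter("PhysicalDisk", "% Disk Time", "_Total");
                cpu.NextValue(); disk.NextValue(); // the first sample is always 0

                int quietSeconds = 0;
                bool idle = false;
                while (true)
                {
                    Thread.Sleep(1000);
                    bool quiet = cpu.NextValue() < 5f && disk.NextValue() < 5f;
                    quietSeconds = quiet ? quietSeconds + 1 : 0;

                    if (!idle && quietSeconds >= 60)
                    {
                        idle = true;
                        Console.WriteLine("Idle: start background work");   // e.g. Process.Start(...)
                    }
                    else if (idle && !quiet)
                    {
                        idle = false;
                        Console.WriteLine("Busy: terminate background work"); // e.g. process.Kill()
                    }
                }
            }
        }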

    Read the article

  • Logging another person off in Windows 7 using Task Manager

    - by BBlake
    Under WinXP, I could use Task Manager's Users tab to log off my wife's account, which she always leaves logged in, so I don't have to log in to her account and log it out. It's an older machine, so I used that trick to free up every resource that might potentially slow down the game I'm playing at the time. I recently upgraded the machine to Win7, and when I try the same trick I get an "access denied" popup. My logged-in account does have Admin rights, so is it as simple as running Task Manager "as an Administrator" in order to allow this? If so, how can I pull up Task Manager (other than the standard Ctrl+Alt+Del) so that it pops up with Admin rights, in order to log her account off in this manner?

    Read the article

  • Creating a Scheduled Task that runs forever on Windows XP

    - by Mike Fiedler
    When I create a scheduled task, I do so via the command line:

        schtasks.exe /Create /TN "startup-script" /TR "C:\startup.bat" /RU taskuser /RP taskpasswd /SC ONLOGON

    The idea is that this task runs forever; the batch file starts a Java process that is never meant to end. I've used ONLOGON, as the machine auto-logs in as taskuser. All this works fine for about 72 hours, after which the Duration setting kicks in and ends the process. Windows XP doesn't have the /DU flag on the command line - is there an alternative method of creating a task that runs from system startup (without even requiring logon) and runs forever, without touching a GUI?

    Read the article

  • Windows Server 2008 R2 Task Scheduler - Open Programs On Startup

    - by Markive
    I want Fiddler and some other programs to run on startup, so they're there and running every time I bring up an instance of my test server on EC2. There are a few questions about running scripts on startup with Task Scheduler, but this needs to work slightly differently. I have set this up to run on startup, but when I RDP into the server I can see Fiddler running in Task Manager (so I can't manually run a second instance of the program), yet it's not visible on the taskbar - so I can't actually see the interface. Here's my setup. General tab: run with highest privileges; run whether user is logged on or not; configure for Windows Server 2008 R2. Triggers tab: at startup. The others are obvious defaults. What am I doing wrong?

    Read the article

  • unable to schedule a task (access denied)

    - by hiddenkirby
    I have a .bat file I'm trying to schedule every morning. While in the Scheduled Task Wizard, when I click Finish, I get:

        The new task could not be created. The specific error is: 0x80070005: Access is denied.
        Try using the Task page Browse button to locate the application.

    I have attempted using both a domain account that is an administrator on the box and a local account that is an administrator on the box. On another machine I have managed to get this to work, but I cannot figure out the difference in configuration; it is using the domain account to run the .bat file.

    Read the article

  • Does Parallel.ForEach require AsParallel()

    - by dkackman
    ParallelEnumerable has a static member AsParallel. If I have an IEnumerable<T> and want to use Parallel.ForEach, does that imply that I should always be using AsParallel? E.g., are both of these correct (everything else being equal)? Without AsParallel:

        List<string> list = new List<string>();
        Parallel.ForEach<string>(GetFileList().Where(file => reader.Match(file)), f => list.Add(f));

    Or with AsParallel?

        List<string> list = new List<string>();
        Parallel.ForEach<string>(GetFileList().Where(file => reader.Match(file)).AsParallel(), f => list.Add(f));
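    Worth noting when reading the two snippets (an editorial sketch, not from the original post): Parallel.ForEach consumes a plain IEnumerable<T> and does its own partitioning, whereas AsParallel exists to feed the PLINQ query operators, so it buys nothing here. Separately, List<T>.Add is not thread-safe, so either version can corrupt the list; a concurrent collection avoids that, as in this sketch with stand-ins for GetFileList() and reader.Match():

        using System.Collections.Concurrent;
        using System.Linq;
        using System.Threading.Tasks;

        class Program
        {
            static void Main()
            {
                // Stand-ins for GetFileList() and reader.Match() from the question.
                var files = new[] { "a.txt", "b.log", "c.txt" };

                var list = new ConcurrentBag<string>();
                Parallel.ForEach(files.Where(f => f.EndsWith(".txt")),
                                 f => list.Add(f)); // ConcurrentBag<T>.Add is thread-safe
            }
        }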

    Read the article

  • Parallel Task Library WaitAny Design

    - by colithium
    I've just begun to explore the PTL and have a design question.

    My scenario: I have a list of URLs that each refer to an image. I want each image to be downloaded in parallel. As soon as at least one image is downloaded, I want to execute a method that does something with the downloaded image. That method should NOT be parallelized - it should be serial.

    I think the following will work, but I'm not sure if this is the right way to do it. Because I have separate classes for collecting the images and for doing "something" with the collected images, I end up passing around an array of Tasks, which seems wrong since it exposes the inner workings of how images are retrieved. But I don't know a way around it. In reality there is more to both of these methods, but that's not important here. Just know that they really shouldn't be lumped into one large method that both retrieves an image and does something with it.

        Task<Image>[] downloadTasks = collector.RetrieveImages(listOfURLs);
        for (int i = 0; i < listOfURLs.Count; i++)
        {
            // Wait for any of the remaining downloads to complete
            int completedIndex = Task<Image>.WaitAny(downloadTasks);
            Image completedImage = downloadTasks[completedIndex].Result;
            // Now do something with the image (this "something" must happen serially)
        }

        ///////////////////////////////////////////////////

        public Task<Image>[] RetrieveImages(List<string> urls)
        {
            Task<Image>[] tasks = new Task<Image>[urls.Count];
            int index = 0;
            foreach (string url in urls)
            {
                string lambdaVar = url; // Required... Bleh
                int indexVar = index;   // capture a per-task copy, same issue as lambdaVar
                tasks[index] = Task<Image>.Factory.StartNew(() =>
                {
                    string fileName = String.Format("{0}.png", indexVar);
                    using (WebClient client = new WebClient())
                    {
                        // TODO: Replace with live image locations
                        client.DownloadFile(lambdaVar, Path.Combine(Application.StartupPath, fileName));
                    }
                    return Image.FromFile(Path.Combine(Application.StartupPath, fileName));
                }, TaskCreationOptions.LongRunning | TaskCreationOptions.AttachedToParent);
                index++;
            }
            return tasks;
        }
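    One pitfall in the sketch above, for anyone adapting it: Task.WaitAny keeps returning the index of an already-completed task, so the loop as written can process the same image repeatedly and never see the others. A common .NET 4 fix (a sketch only; the handler and type names here are invented) is to remove each completed task from a working list before waiting again:

        using System;
        using System.Collections.Generic;
        using System.Threading.Tasks;

        class SerialConsumer
        {
            // Processes each result exactly once, serially, as tasks finish.
            static void ConsumeAsCompleted<T>(Task<T>[] tasks, Action<T> handle)
            {
                var pending = new List<Task<T>>(tasks);
                while (pending.Count > 0)
                {
                    int done = Task.WaitAny(pending.ToArray());
                    handle(pending[done].Result); // serial: runs on this thread only
                    pending.RemoveAt(done);       // never wait on this task again
                }
            }

            static void Main()
            {
                var tasks = new[]
                {
                    Task<int>.Factory.StartNew(() => 1),
                    Task<int>.Factory.StartNew(() => 2),
                };
                ConsumeAsCompleted(tasks, r => Console.WriteLine("got {0}", r));
            }
        }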

    Read the article

  • Parallel Haskell in order to find the divisors of a huge number

    - by Dragno
    I have written the following program using Parallel Haskell to find the divisors of 1 billion:

        import Control.Parallel

        parfindDivisors :: Integer -> [Integer]
        parfindDivisors n = f1 `par` (f2 `par` (f1 ++ f2))
          where f1 = filter g [1..(quot n 4)]
                f2 = filter g [(quot n 4)+1..(quot n 2)]
                g z = n `rem` z == 0

        main = print (parfindDivisors 1000000000)

    I've compiled the program with ghc -rtsopts -threaded findDivisors.hs and I run it with findDivisors.exe +RTS -s -N2 -RTS. I have found a 50% speedup compared to the simple version, which is this:

        findDivisors :: Integer -> [Integer]
        findDivisors n = filter g [1..(quot n 2)]
          where g z = n `rem` z == 0

    My processor is a dual-core Core 2 Duo from Intel. I was wondering if there can be any improvement in the above code, because the statistics the program prints say:

        Parallel GC work balance: 1.01 (16940708 / 16772868, ideal 2)

    and

        SPARKS: 2 (1 converted, 0 overflowed, 0 dud, 0 GC'd, 1 fizzled)

    What are these converted, overflowed, dud, GC'd and fizzled sparks, and how can they help to improve the time?

    Read the article
