Search Results

Search found 8687 results on 348 pages for 'per'.


  • How to generate distinct random numbers per thread in .NET?

    - by mark
    Dear ladies and sirs. I have to generate 19-bit random numbers. However, there is a constraint: two threads may not generate the same random number when running certain code. The simplest solution is to lock the entire code, but I would like to know if there is a non-locking solution. I thought I could incorporate the ManagedThreadId into the produced random numbers, but the ManagedThreadId documentation on the Internet mentions that it may span the whole Int32 range. The unmanaged thread id seems to be limited to 11 bits, but that still leaves me with just 8 truly random bits. Are there any other ways? Perhaps somehow utilizing thread-local storage? Thanks.
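    A minimal sketch of one possible lock-free approach (not from the original thread; the class and method names are invented for illustration): take a ticket from a shared counter with Interlocked.Increment, then run it through a bijective 19-bit mixing step, so the outputs look random but cannot collide until 2^19 values have been handed out, no matter which threads ask.

        using System.Threading;

        static class DistinctRandom19
        {
            private const int Bits = 19;
            private const uint Mask = (1u << Bits) - 1;   // 0x7FFFF
            private static int _counter = -1;             // shared across all threads

            // Returns a value in [0, 2^19) that cannot repeat until 2^19 calls
            // have been made, regardless of which thread calls it.
            public static uint Next()
            {
                // Interlocked.Increment is lock-free and gives every caller a unique ticket.
                uint ticket = (uint)Interlocked.Increment(ref _counter) & Mask;

                // Mix the ticket so consecutive tickets do not look consecutive.
                // Multiplying by an odd constant is invertible modulo 2^19, and
                // x ^= x >> k is also invertible, so the mapping stays collision-free.
                uint x = (ticket * 654435761u) & Mask;
                x ^= x >> 9;
                return (x * 1597334677u) & Mask;
            }
        }

    The 19-bit space only holds 524,288 values, so the no-collision guarantee necessarily resets once the counter wraps; whether that matters depends on how long "running certain code" lasts.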

    Read the article

  • How many SQL queries per HTTP request is optimal?

    - by Chris Kooken
    I know the answer to this question is, for the most part, "it depends", but I wanted to see if anyone had some pointers. We execute queries on each request in ASP.NET MVC. On each request we need to get user rights information and various data for the views we are displaying. How many is too many? I know I should be conscious of the number of queries I am executing. I would assume that if they are small, optimized queries, half a dozen should be okay? Am I right? What do you think?
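    Not an answer to the "right number" question, but one common way to keep the count down is to make sure the same data is fetched only once per request. A hedged sketch (UserRights and IRightsRepository are invented placeholders, not types from the original post): cache the user-rights lookup in HttpContext.Items, which lives for exactly one request, so every controller and view that needs it reuses a single query.

        using System.Web;

        public class UserRights { /* whatever rights data you load */ }

        public interface IRightsRepository
        {
            UserRights LoadRightsFor(string userName);   // one SQL query
        }

        public static class RequestCache
        {
            private const string RightsKey = "__userRights";

            // Returns the current user's rights, querying the database at most
            // once per HTTP request no matter how many callers ask for it.
            public static UserRights GetUserRights(HttpContextBase context, IRightsRepository repository)
            {
                var cached = context.Items[RightsKey] as UserRights;
                if (cached == null)
                {
                    cached = repository.LoadRightsFor(context.User.Identity.Name);
                    context.Items[RightsKey] = cached;
                }
                return cached;
            }
        }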

    Read the article

  • PHP-FPM and APC for shared hosting?

    - by Tiffany Walker
    We are looking for a way to get APC to create only one cache per account/site. This can be done with FastCGI (last updated in 2006…), but with Fastcgid APC has to create multiple caches for the multiple processes run by the same account. To get around this problem we have been looking into PHP-FPM. The PHP process manager allows multiple PHP processes to share a single APC cache, but from what I have read (I hope I'm wrong), even if you create a pool per site, all sites across all pools will share the same APC cache. This brings us back to the same problem as with shared Memcached: it's not secure! On php-fpm's site I read that you can chroot php-fpm pools and define a specific UID and GID per pool… if that is the case, shouldn't APC run as this user and have no access to the caches of other pools? An article from 2011 suggests that you would need to run one php-fpm process per pool, creating multiple launchers on different ports and different config files with one pool per config file: http://groups.drupal.org/node/198168 Is this still necessary? If so, what would be the impact of running, say, 800 php-fpm processes? Would it mainly be memory? If so, how can I work out what the memory impact would be? I guess it would be better to run 800 php-fpm processes than to have accounts creating multiple APC caches for a single site? If on average an account creates a 50MB cache and there are 3 caches per account, that makes 150MB per account, which makes 120GB. However, if each account uses on average only a single 50MB cache, that would make 40GB. We will have at least 128GB of RAM on our next server, so 40GB is acceptable if running 800 php-fpm processes does not create an overhead of more than 20GB! What do you think: is PHP-FPM the best way to provide a secure APC cache on shared hosting with a server that has a decent amount of memory? Or should I be looking at another system? Thanks!

    Read the article

  • PHP's page generation time is 0.01s and 1/0.01 = 100; however, I'm having problems reaching that number of requests per second. Why?

    - by cedivad
    On average, my PHP page generation time is 10ms, so I should be able to execute 100 requests per second, one after another (using a single core on the server, since PHP is not multithreaded). However, I'm having problems reaching even 50 pages per second; as of now I do 25 on average, under medium load. The application is really light: it consists of a read (<5KB) from a pool of SSDs and some read queries resolved by indexes. Where should I look to solve this bottleneck?

    Read the article

  • Problem with the command line in Windows

    - by Hoang Pham
    I copied cmd.exe to a new location, then ran it so that the current directory is set to that folder. But just recently, there is always this message: Impossibile trovare il testo del messaggio per il numero di messaggio 0x2350 nel file di messaggio per Application. Impossibile trovare il testo del messaggio per il numero di messaggio 0x2334 nel file di messaggio per Application. (Italian: "Unable to find the message text for message number 0x2350/0x2334 in the message file for Application.") C:\cygwin\home\Hoang> Does anyone know how to solve it?

    Read the article

  • Adding local users / passwords on Kerberized Linux box

    - by Brian
    Right now if I try to add a non-system user not in the university's Kerberos realm I am prompted for a Kerberos password anyway. Obviously there is no password to be entered, so I just press enter and see:
    passwd: Authentication token manipulation error
    passwd: password unchanged
    Typing passwd newuser has the same issue with the same message. I tried using pwconv in the hopes that only a shadow entry was needed, but it changed nothing. I want to be able to add a local user not in the realm and give them a local password without being bothered about Kerberos. I am on Ubuntu 10.04. Here are my /etc/pam.d/common-* files (the defaults that Ubuntu's pam-auth-update package generates):

    account
    # here are the per-package modules (the "Primary" block)
    account [success=1 new_authtok_reqd=done default=ignore] pam_unix.so
    # here's the fallback if no module succeeds
    account requisite pam_deny.so
    # prime the stack with a positive return value if there isn't one already;
    # this avoids us returning an error just because nothing sets a success code
    # since the modules above will each just jump around
    account required pam_permit.so
    # and here are more per-package modules (the "Additional" block)
    account required pam_krb5.so minimum_uid=1000
    # end of pam-auth-update config

    auth
    # here are the per-package modules (the "Primary" block)
    auth [success=2 default=ignore] pam_krb5.so minimum_uid=1000
    auth [success=1 default=ignore] pam_unix.so nullok_secure try_first_pass
    # here's the fallback if no module succeeds
    auth requisite pam_deny.so
    # prime the stack with a positive return value if there isn't one already;
    # this avoids us returning an error just because nothing sets a success code
    # since the modules above will each just jump around
    auth required pam_permit.so
    # and here are more per-package modules (the "Additional" block)
    # end of pam-auth-update config

    password
    # here are the per-package modules (the "Primary" block)
    password requisite pam_krb5.so minimum_uid=1000
    password [success=1 default=ignore] pam_unix.so obscure use_authtok try_first_pass sha512
    # here's the fallback if no module succeeds
    password requisite pam_deny.so
    # prime the stack with a positive return value if there isn't one already;
    # this avoids us returning an error just because nothing sets a success code
    # since the modules above will each just jump around
    password required pam_permit.so
    # and here are more per-package modules (the "Additional" block)
    # end of pam-auth-update config

    session
    # here are the per-package modules (the "Primary" block)
    session [default=1] pam_permit.so
    # here's the fallback if no module succeeds
    session requisite pam_deny.so
    # prime the stack with a positive return value if there isn't one already;
    # this avoids us returning an error just because nothing sets a success code
    # since the modules above will each just jump around
    session required pam_permit.so
    # and here are more per-package modules (the "Additional" block)
    session optional pam_krb5.so minimum_uid=1000
    session required pam_unix.so
    # end of pam-auth-update config

    Read the article

  • NUnit-console 2.5.4 not capable of running multiple assemblies?

    - by Per Salmi
    I am having problems running tests with the command-line NUnit test runner. I am using version 2.5.4 with .NET 4 on an x64 machine. Using the following line results in the failure "Could not load file or assembly 'bar' or one of its dependencies. The system cannot find the file specified.": nunit-console-x86 foo.dll bar.dll /framework=4.0.30319 If I reverse the DLL file names it complains about not finding 'foo' instead... It works if I run them separately, like: nunit-console-x86 foo.dll /framework=4.0.30319 Also, the tests in the second file work if I run: nunit-console-x86 bar.dll /framework=4.0.30319 Before upgrading our projects to 4.0 we used NUnit 2.5.2 with the same command-line options, and at that point the runner worked well with multiple assemblies. It seems like the ability to run tests on multiple files at the same time is broken... Can anyone else see the same behavior, or does it work for you, indicating that my environment is somehow broken? /Per

    Read the article

  • Yahoo YQL Rate Limits

    - by catlan
    I'm a bit unsure about the Usage Information and Limits of Yahoo YQL. Per-application limit (identified by your Access Key): 100,000 calls per day. Per-IP limits: /v1/public/: 1,000 calls per hour; /v1/yql/: 10,000 calls per hour. Do I require an application/access key for the /v1/public/ interface? None of the examples uses one. If I don't need an application key and only access the /v1/public/ interface, I only have to worry about the IP limits of 1,000 calls per hour, right?

    Read the article

  • Can I create a dataset XSD without using the designer?

    - by Per Åkerberg
    Hi everyone, I want to be able to create the XSD file for my typed dataset without using the Visual Studio dataset designer. Is there a way to do this using, for instance, a command-line tool? There is some magic happening when a table is dragged from the Server Explorer to the design surface, but where does that magic come from? To add some flavour to the mix, I am using DB2 LUW 9.1, but I am guessing that the process is similar with other database vendors. Once I have the XSD I can use XSD.exe to create my .CS class, no problem. Thanks for any help or suggestions! /Per
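    One non-designer route (a sketch rather than a tested recipe: the provider invariant name, connection string, and table names below are placeholders you would replace for your DB2 LUW setup) is to let ADO.NET infer the schema and write it out, then feed the result to xsd.exe /dataset. It will not reproduce every designer annotation, but it produces a plain .xsd to start from.

        using System.Data;
        using System.Data.Common;

        class SchemaExport
        {
            static void Main()
            {
                // "IBM.Data.DB2" is the usual invariant name for the DB2 provider,
                // but treat it (and the connection string) as an assumption.
                DbProviderFactory factory = DbProviderFactories.GetFactory("IBM.Data.DB2");

                using (DbConnection connection = factory.CreateConnection())
                {
                    connection.ConnectionString = "Database=SAMPLE;UserID=...;Password=...;";
                    connection.Open();

                    DbDataAdapter adapter = factory.CreateDataAdapter();
                    adapter.SelectCommand = connection.CreateCommand();
                    adapter.SelectCommand.CommandText = "SELECT * FROM MYSCHEMA.MYTABLE";

                    // FillSchema copies column names, types and key information
                    // into the DataSet without fetching any rows.
                    var dataSet = new DataSet("MyTypedDataSet");
                    adapter.FillSchema(dataSet, SchemaType.Mapped, "MYTABLE");

                    // This .xsd is what xsd.exe /dataset can turn into a .CS class.
                    dataSet.WriteXmlSchema("MyTypedDataSet.xsd");
                }
            }
        }

    Repeat the FillSchema call for each table you want in the dataset before writing the schema.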

    Read the article

  • WCF Diagnostics tracing and WAS hosting?

    - by Per Salmi
    I have a WAS-hosted set of services configured to use net.tcp, running under an IIS AppPool user account. When hosting the services with WAS I have a hard time getting any diagnostic tracing out of them to track down problems. The same services, with tracing set to use e.g. c:\logs\trace.svclog as the trace output, work fine when self-hosted in a console application. I don't seem to get any trace output at all when hosting with WAS. Are there any special settings I need to get trace output under WAS? I have set a fixed output path for tracing and assigned permissions on the folder to the IIS AppPool\MyAppPool user. /Per Salmi

    Read the article

  • Improving Partitioned Table Join Performance

    - by Paul White
    The query optimizer does not always choose an optimal strategy when joining partitioned tables. This post looks at an example, showing how a manual rewrite of the query can almost double performance, while reducing the memory grant to almost nothing. Test Data The two tables in this example use a common partitioning partition scheme. The partition function uses 41 equal-size partitions: CREATE PARTITION FUNCTION PFT (integer) AS RANGE RIGHT FOR VALUES ( 125000, 250000, 375000, 500000, 625000, 750000, 875000, 1000000, 1125000, 1250000, 1375000, 1500000, 1625000, 1750000, 1875000, 2000000, 2125000, 2250000, 2375000, 2500000, 2625000, 2750000, 2875000, 3000000, 3125000, 3250000, 3375000, 3500000, 3625000, 3750000, 3875000, 4000000, 4125000, 4250000, 4375000, 4500000, 4625000, 4750000, 4875000, 5000000 ); GO CREATE PARTITION SCHEME PST AS PARTITION PFT ALL TO ([PRIMARY]); There two tables are: CREATE TABLE dbo.T1 ( TID integer NOT NULL IDENTITY(0,1), Column1 integer NOT NULL, Padding binary(100) NOT NULL DEFAULT 0x,   CONSTRAINT PK_T1 PRIMARY KEY CLUSTERED (TID) ON PST (TID) );   CREATE TABLE dbo.T2 ( TID integer NOT NULL, Column1 integer NOT NULL, Padding binary(100) NOT NULL DEFAULT 0x,   CONSTRAINT PK_T2 PRIMARY KEY CLUSTERED (TID, Column1) ON PST (TID) ); The next script loads 5 million rows into T1 with a pseudo-random value between 1 and 5 for Column1. The table is partitioned on the IDENTITY column TID: INSERT dbo.T1 WITH (TABLOCKX) (Column1) SELECT (ABS(CHECKSUM(NEWID())) % 5) + 1 FROM dbo.Numbers AS N WHERE n BETWEEN 1 AND 5000000; In case you don’t already have an auxiliary table of numbers lying around, here’s a script to create one with 10 million rows: CREATE TABLE dbo.Numbers (n bigint PRIMARY KEY);   WITH L0 AS(SELECT 1 AS c UNION ALL SELECT 1), L1 AS(SELECT 1 AS c FROM L0 AS A CROSS JOIN L0 AS B), L2 AS(SELECT 1 AS c FROM L1 AS A CROSS JOIN L1 AS B), L3 AS(SELECT 1 AS c FROM L2 AS A CROSS JOIN L2 AS B), L4 AS(SELECT 1 AS c FROM L3 AS A CROSS JOIN L3 AS B), L5 AS(SELECT 1 AS c FROM L4 AS A CROSS JOIN L4 AS B), Nums AS(SELECT ROW_NUMBER() OVER (ORDER BY (SELECT NULL)) AS n FROM L5) INSERT dbo.Numbers WITH (TABLOCKX) SELECT TOP (10000000) n FROM Nums ORDER BY n OPTION (MAXDOP 1); Table T1 contains data like this: Next we load data into table T2. The relationship between the two tables is that table 2 contains ‘n’ rows for each row in table 1, where ‘n’ is determined by the value in Column1 of table T1. There is nothing particularly special about the data or distribution, by the way. INSERT dbo.T2 WITH (TABLOCKX) (TID, Column1) SELECT T.TID, N.n FROM dbo.T1 AS T JOIN dbo.Numbers AS N ON N.n >= 1 AND N.n <= T.Column1; Table T2 ends up containing about 15 million rows: The primary key for table T2 is a combination of TID and Column1. The data is partitioned according to the value in column TID alone. Partition Distribution The following query shows the number of rows in each partition of table T1: SELECT PartitionID = CA1.P, NumRows = COUNT_BIG(*) FROM dbo.T1 AS T CROSS APPLY (VALUES ($PARTITION.PFT(TID))) AS CA1 (P) GROUP BY CA1.P ORDER BY CA1.P; There are 40 partitions containing 125,000 rows (40 * 125k = 5m rows). The rightmost partition remains empty. The next query shows the distribution for table 2: SELECT PartitionID = CA1.P, NumRows = COUNT_BIG(*) FROM dbo.T2 AS T CROSS APPLY (VALUES ($PARTITION.PFT(TID))) AS CA1 (P) GROUP BY CA1.P ORDER BY CA1.P; There are roughly 375,000 rows in each partition (the rightmost partition is also empty): Ok, that’s the test data done. 
Test Query and Execution Plan The task is to count the rows resulting from joining tables 1 and 2 on the TID column: SET STATISTICS IO ON; DECLARE @s datetime2 = SYSUTCDATETIME();   SELECT COUNT_BIG(*) FROM dbo.T1 AS T1 JOIN dbo.T2 AS T2 ON T2.TID = T1.TID;   SELECT DATEDIFF(Millisecond, @s, SYSUTCDATETIME()); SET STATISTICS IO OFF; The optimizer chooses a plan using parallel hash join, and partial aggregation: The Plan Explorer plan tree view shows accurate cardinality estimates and an even distribution of rows across threads (click to enlarge the image): With a warm data cache, the STATISTICS IO output shows that no physical I/O was needed, and all 41 partitions were touched: Running the query without actual execution plan or STATISTICS IO information for maximum performance, the query returns in around 2600ms. Execution Plan Analysis The first step toward improving on the execution plan produced by the query optimizer is to understand how it works, at least in outline. The two parallel Clustered Index Scans use multiple threads to read rows from tables T1 and T2. Parallel scan uses a demand-based scheme where threads are given page(s) to scan from the table as needed. This arrangement has certain important advantages, but does result in an unpredictable distribution of rows amongst threads. The point is that multiple threads cooperate to scan the whole table, but it is impossible to predict which rows end up on which threads. For correct results from the parallel hash join, the execution plan has to ensure that rows from T1 and T2 that might join are processed on the same thread. For example, if a row from T1 with join key value ‘1234’ is placed in thread 5’s hash table, the execution plan must guarantee that any rows from T2 that also have join key value ‘1234’ probe thread 5’s hash table for matches. The way this guarantee is enforced in this parallel hash join plan is by repartitioning rows to threads after each parallel scan. The two repartitioning exchanges route rows to threads using a hash function over the hash join keys. The two repartitioning exchanges use the same hash function so rows from T1 and T2 with the same join key must end up on the same hash join thread. Expensive Exchanges This business of repartitioning rows between threads can be very expensive, especially if a large number of rows is involved. The execution plan selected by the optimizer moves 5 million rows through one repartitioning exchange and around 15 million across the other. As a first step toward removing these exchanges, consider the execution plan selected by the optimizer if we join just one partition from each table, disallowing parallelism: SELECT COUNT_BIG(*) FROM dbo.T1 AS T1 JOIN dbo.T2 AS T2 ON T2.TID = T1.TID WHERE $PARTITION.PFT(T1.TID) = 1 AND $PARTITION.PFT(T2.TID) = 1 OPTION (MAXDOP 1); The optimizer has chosen a (one-to-many) merge join instead of a hash join. The single-partition query completes in around 100ms. If everything scaled linearly, we would expect that extending this strategy to all 40 populated partitions would result in an execution time around 4000ms. Using parallelism could reduce that further, perhaps to be competitive with the parallel hash join chosen by the optimizer. This raises a question. If the most efficient way to join one partition from each of the tables is to use a merge join, why does the optimizer not choose a merge join for the full query? 
Forcing a Merge Join Let’s force the optimizer to use a merge join on the test query using a hint: SELECT COUNT_BIG(*) FROM dbo.T1 AS T1 JOIN dbo.T2 AS T2 ON T2.TID = T1.TID OPTION (MERGE JOIN); This is the execution plan selected by the optimizer: This plan results in the same number of logical reads reported previously, but instead of 2600ms the query takes 5000ms. The natural explanation for this drop in performance is that the merge join plan is only using a single thread, whereas the parallel hash join plan could use multiple threads. Parallel Merge Join We can get a parallel merge join plan using the same query hint as before, and adding trace flag 8649: SELECT COUNT_BIG(*) FROM dbo.T1 AS T1 JOIN dbo.T2 AS T2 ON T2.TID = T1.TID OPTION (MERGE JOIN, QUERYTRACEON 8649); The execution plan is: This looks promising. It uses a similar strategy to distribute work across threads as seen for the parallel hash join. In practice though, performance is disappointing. On a typical run, the parallel merge plan runs for around 8400ms; slower than the single-threaded merge join plan (5000ms) and much worse than the 2600ms for the parallel hash join. We seem to be going backwards! The logical reads for the parallel merge are still exactly the same as before, with no physical IOs. The cardinality estimates and thread distribution are also still very good (click to enlarge): A big clue to the reason for the poor performance is shown in the wait statistics (captured by Plan Explorer Pro): CXPACKET waits require careful interpretation, and are most often benign, but in this case excessive waiting occurs at the repartitioning exchanges. Unlike the parallel hash join, the repartitioning exchanges in this plan are order-preserving ‘merging’ exchanges (because merge join requires ordered inputs): Parallelism works best when threads can just grab any available unit of work and get on with processing it. Preserving order introduces inter-thread dependencies that can easily lead to significant waits occurring. In extreme cases, these dependencies can result in an intra-query deadlock, though the details of that will have to wait for another time to explore in detail. The potential for waits and deadlocks leads the query optimizer to cost parallel merge join relatively highly, especially as the degree of parallelism (DOP) increases. This high costing resulted in the optimizer choosing a serial merge join rather than parallel in this case. The test results certainly confirm its reasoning. Collocated Joins In SQL Server 2008 and later, the optimizer has another available strategy when joining tables that share a common partition scheme. This strategy is a collocated join, also known as as a per-partition join. It can be applied in both serial and parallel execution plans, though it is limited to 2-way joins in the current optimizer. Whether the optimizer chooses a collocated join or not depends on cost estimation. The primary benefits of a collocated join are that it eliminates an exchange and requires less memory, as we will see next. Costing and Plan Selection The query optimizer did consider a collocated join for our original query, but it was rejected on cost grounds. The parallel hash join with repartitioning exchanges appeared to be a cheaper option. There is no query hint to force a collocated join, so we have to mess with the costing framework to produce one for our test query. 
Pretending that IOs cost 50 times more than usual is enough to convince the optimizer to use collocated join with our test query: -- Pretend IOs are 50x cost temporarily DBCC SETIOWEIGHT(50);   -- Co-located hash join SELECT COUNT_BIG(*) FROM dbo.T1 AS T1 JOIN dbo.T2 AS T2 ON T2.TID = T1.TID OPTION (RECOMPILE);   -- Reset IO costing DBCC SETIOWEIGHT(1); Collocated Join Plan The estimated execution plan for the collocated join is: The Constant Scan contains one row for each partition of the shared partitioning scheme, from 1 to 41. The hash repartitioning exchanges seen previously are replaced by a single Distribute Streams exchange using Demand partitioning. Demand partitioning means that the next partition id is given to the next parallel thread that asks for one. My test machine has eight logical processors, and all are available for SQL Server to use. As a result, there are eight threads in the single parallel branch in this plan, each processing one partition from each table at a time. Once a thread finishes processing a partition, it grabs a new partition number from the Distribute Streams exchange…and so on until all partitions have been processed. It is important to understand that the parallel scans in this plan are different from the parallel hash join plan. Although the scans have the same parallelism icon, tables T1 and T2 are not being co-operatively scanned by multiple threads in the same way. Each thread reads a single partition of T1 and performs a hash match join with the same partition from table T2. The properties of the two Clustered Index Scans show a Seek Predicate (unusual for a scan!) limiting the rows to a single partition: The crucial point is that the join between T1 and T2 is on TID, and TID is the partitioning column for both tables. A thread that processes partition ‘n’ is guaranteed to see all rows that can possibly join on TID for that partition. In addition, no other thread will see rows from that partition, so this removes the need for repartitioning exchanges. CPU and Memory Efficiency Improvements The collocated join has removed two expensive repartitioning exchanges and added a single exchange processing 41 rows (one for each partition id). Remember, the parallel hash join plan exchanges had to process 5 million and 15 million rows. The amount of processor time spent on exchanges will be much lower in the collocated join plan. In addition, the collocated join plan has a maximum of 8 threads processing single partitions at any one time. The 41 partitions will all be processed eventually, but a new partition is not started until a thread asks for it. Threads can reuse hash table memory for the new partition. The parallel hash join plan also had 8 hash tables, but with all 5,000,000 build rows loaded at the same time. The collocated plan needs memory for only 8 * 125,000 = 1,000,000 rows at any one time. Collocated Hash Join Performance The collocated join plan has disappointing performance in this case. The query runs for around 25,300ms despite the same IO statistics as usual. This is much the worst result so far, so what went wrong? It turns out that cardinality estimation for the single partition scans of table T1 is slightly low. The properties of the Clustered Index Scan of T1 (graphic immediately above) show the estimation was for 121,951 rows. 
This is a small shortfall compared with the 125,000 rows actually encountered, but it was enough to cause the hash join to spill to physical tempdb: A level 1 spill doesn’t sound too bad, until you realize that the spill to tempdb probably occurs for each of the 41 partitions. As a side note, the cardinality estimation error is a little surprising because the system tables accurately show there are 125,000 rows in every partition of T1. Unfortunately, the optimizer uses regular column and index statistics to derive cardinality estimates here rather than system table information (e.g. sys.partitions). Collocated Merge Join We will never know how well the collocated parallel hash join plan might have worked without the cardinality estimation error (and the resulting 41 spills to tempdb) but we do know: Merge join does not require a memory grant; and Merge join was the optimizer’s preferred join option for a single partition join Putting this all together, what we would really like to see is the same collocated join strategy, but using merge join instead of hash join. Unfortunately, the current query optimizer cannot produce a collocated merge join; it only knows how to do collocated hash join. So where does this leave us? CROSS APPLY sys.partitions We can try to write our own collocated join query. We can use sys.partitions to find the partition numbers, and CROSS APPLY to get a count per partition, with a final step to sum the partial counts. The following query implements this idea: SELECT row_count = SUM(Subtotals.cnt) FROM ( -- Partition numbers SELECT p.partition_number FROM sys.partitions AS p WHERE p.[object_id] = OBJECT_ID(N'T1', N'U') AND p.index_id = 1 ) AS P CROSS APPLY ( -- Count per collocated join SELECT cnt = COUNT_BIG(*) FROM dbo.T1 AS T1 JOIN dbo.T2 AS T2 ON T2.TID = T1.TID WHERE $PARTITION.PFT(T1.TID) = p.partition_number AND $PARTITION.PFT(T2.TID) = p.partition_number ) AS SubTotals; The estimated plan is: The cardinality estimates aren’t all that good here, especially the estimate for the scan of the system table underlying the sys.partitions view. Nevertheless, the plan shape is heading toward where we would like to be. Each partition number from the system table results in a per-partition scan of T1 and T2, a one-to-many Merge Join, and a Stream Aggregate to compute the partial counts. The final Stream Aggregate just sums the partial counts. Execution time for this query is around 3,500ms, with the same IO statistics as always. This compares favourably with 5,000ms for the serial plan produced by the optimizer with the OPTION (MERGE JOIN) hint. This is another case of the sum of the parts being less than the whole – summing 41 partial counts from 41 single-partition merge joins is faster than a single merge join and count over all partitions. Even so, this single-threaded collocated merge join is not as quick as the original parallel hash join plan, which executed in 2,600ms. On the positive side, our collocated merge join uses only one logical processor and requires no memory grant. The parallel hash join plan used 16 threads and reserved 569 MB of memory:   Using a Temporary Table Our collocated merge join plan should benefit from parallelism. The reason parallelism is not being used is that the query references a system table. 
We can work around that by writing the partition numbers to a temporary table (or table variable): SET STATISTICS IO ON; DECLARE @s datetime2 = SYSUTCDATETIME();   CREATE TABLE #P ( partition_number integer PRIMARY KEY);   INSERT #P (partition_number) SELECT p.partition_number FROM sys.partitions AS p WHERE p.[object_id] = OBJECT_ID(N'T1', N'U') AND p.index_id = 1;   SELECT row_count = SUM(Subtotals.cnt) FROM #P AS p CROSS APPLY ( SELECT cnt = COUNT_BIG(*) FROM dbo.T1 AS T1 JOIN dbo.T2 AS T2 ON T2.TID = T1.TID WHERE $PARTITION.PFT(T1.TID) = p.partition_number AND $PARTITION.PFT(T2.TID) = p.partition_number ) AS SubTotals;   DROP TABLE #P;   SELECT DATEDIFF(Millisecond, @s, SYSUTCDATETIME()); SET STATISTICS IO OFF; Using the temporary table adds a few logical reads, but the overall execution time is still around 3500ms, indistinguishable from the same query without the temporary table. The problem is that the query optimizer still doesn’t choose a parallel plan for this query, though the removal of the system table reference means that it could if it chose to: In fact the optimizer did enter the parallel plan phase of query optimization (running search 1 for a second time): Unfortunately, the parallel plan found seemed to be more expensive than the serial plan. This is a crazy result, caused by the optimizer’s cost model not reducing operator CPU costs on the inner side of a nested loops join. Don’t get me started on that, we’ll be here all night. In this plan, everything expensive happens on the inner side of a nested loops join. Without a CPU cost reduction to compensate for the added cost of exchange operators, candidate parallel plans always look more expensive to the optimizer than the equivalent serial plan. Parallel Collocated Merge Join We can produce the desired parallel plan using trace flag 8649 again: SELECT row_count = SUM(Subtotals.cnt) FROM #P AS p CROSS APPLY ( SELECT cnt = COUNT_BIG(*) FROM dbo.T1 AS T1 JOIN dbo.T2 AS T2 ON T2.TID = T1.TID WHERE $PARTITION.PFT(T1.TID) = p.partition_number AND $PARTITION.PFT(T2.TID) = p.partition_number ) AS SubTotals OPTION (QUERYTRACEON 8649); The actual execution plan is: One difference between this plan and the collocated hash join plan is that a Repartition Streams exchange operator is used instead of Distribute Streams. The effect is similar, though not quite identical. The Repartition uses round-robin partitioning, meaning the next partition id is pushed to the next thread in sequence. The Distribute Streams exchange seen earlier used Demand partitioning, meaning the next partition id is pulled across the exchange by the next thread that is ready for more work. There are subtle performance implications for each partitioning option, but going into that would again take us too far off the main point of this post. Performance The important thing is the performance of this parallel collocated merge join – just 1350ms on a typical run. The list below shows all the alternatives from this post (all timings include creation, population, and deletion of the temporary table where appropriate) from quickest to slowest: Collocated parallel merge join: 1350ms Parallel hash join: 2600ms Collocated serial merge join: 3500ms Serial merge join: 5000ms Parallel merge join: 8400ms Collocated parallel hash join: 25,300ms (hash spill per partition) The parallel collocated merge join requires no memory grant (aside from a paltry 1.2MB used for exchange buffers). 
This plan uses 16 threads at DOP 8; but 8 of those are (rather pointlessly) allocated to the parallel scan of the temporary table. These are minor concerns, but it turns out there is a way to address them if it bothers you. Parallel Collocated Merge Join with Demand Partitioning This final tweak replaces the temporary table with a hard-coded list of partition ids (dynamic SQL could be used to generate this query from sys.partitions): SELECT row_count = SUM(Subtotals.cnt) FROM ( VALUES (1),(2),(3),(4),(5),(6),(7),(8),(9),(10), (11),(12),(13),(14),(15),(16),(17),(18),(19),(20), (21),(22),(23),(24),(25),(26),(27),(28),(29),(30), (31),(32),(33),(34),(35),(36),(37),(38),(39),(40),(41) ) AS P (partition_number) CROSS APPLY ( SELECT cnt = COUNT_BIG(*) FROM dbo.T1 AS T1 JOIN dbo.T2 AS T2 ON T2.TID = T1.TID WHERE $PARTITION.PFT(T1.TID) = p.partition_number AND $PARTITION.PFT(T2.TID) = p.partition_number ) AS SubTotals OPTION (QUERYTRACEON 8649); The actual execution plan is: The parallel collocated hash join plan is reproduced below for comparison: The manual rewrite has another advantage that has not been mentioned so far: the partial counts (per partition) can be computed earlier than the partial counts (per thread) in the optimizer’s collocated join plan. The earlier aggregation is performed by the extra Stream Aggregate under the nested loops join. The performance of the parallel collocated merge join is unchanged at around 1350ms. Final Words It is a shame that the current query optimizer does not consider a collocated merge join (Connect item closed as Won’t Fix). The example used in this post showed an improvement in execution time from 2600ms to 1350ms using a modestly-sized data set and limited parallelism. In addition, the memory requirement for the query was almost completely eliminated  – down from 569MB to 1.2MB. The problem with the parallel hash join selected by the optimizer is that it attempts to process the full data set all at once (albeit using eight threads). It requires a large memory grant to hold all 5 million rows from table T1 across the eight hash tables, and does not take advantage of the divide-and-conquer opportunity offered by the common partitioning. The great thing about the collocated join strategies is that each parallel thread works on a single partition from both tables, reading rows, performing the join, and computing a per-partition subtotal, before moving on to a new partition. From a thread’s point of view… If you have trouble visualizing what is happening from just looking at the parallel collocated merge join execution plan, let’s look at it again, but from the point of view of just one thread operating between the two Parallelism (exchange) operators. Our thread picks up a single partition id from the Distribute Streams exchange, and starts a merge join using ordered rows from partition 1 of table T1 and partition 1 of table T2. By definition, this is all happening on a single thread. As rows join, they are added to a (per-partition) count in the Stream Aggregate immediately above the Merge Join. Eventually, either T1 (partition 1) or T2 (partition 1) runs out of rows and the merge join stops. The per-partition count from the aggregate passes on through the Nested Loops join to another Stream Aggregate, which is maintaining a per-thread subtotal. Our same thread now picks up a new partition id from the exchange (say it gets id 9 this time). 
The count in the per-partition aggregate is reset to zero, and the processing of partition 9 of both tables proceeds just as it did for partition 1, and on the same thread. Each thread picks up a single partition id and processes all the data for that partition, completely independently from other threads working on other partitions. One thread might eventually process partitions (1, 9, 17, 25, 33, 41) while another is concurrently processing partitions (2, 10, 18, 26, 34) and so on for the other six threads at DOP 8. The point is that all 8 threads can execute independently and concurrently, continuing to process new partitions until the wider job (of which the thread has no knowledge!) is done. This divide-and-conquer technique can be much more efficient than simply splitting the entire workload across eight threads all at once. Related Reading Understanding and Using Parallelism in SQL Server Parallel Execution Plans Suck © 2013 Paul White – All Rights Reserved Twitter: @SQL_Kiwi

    Read the article

  • Don't miss the chance to meet the members of the Oracle Real-Time Decisions Customer Advisory Board!

    - by Silvia Valgoi
    This year, as a special exception, the doors are being opened to the annual meeting that Oracle dedicates to the customers of certain specific applications: on 20 June 2012 in Rome, the worldwide customers of the Oracle Real-Time Decisions (RTD) solution will meet. It is a unique opportunity to hear directly from those who have implemented this solution what the real returns on investment have been, and to talk with them directly in an international context. The testimony of Dell - which will present its use of RTD, also integrated with Siebel - and the participation of BT, Deutsch Telecom, United Airlines, Bouygues Telecom, Dell and RoomKe make this event an important occasion for everyone who sees Real-Time Decisions as an important building block of their Customer Experience Management strategy. Interested? http://www.oracle.com/goto/RealTimeDecisions

    Read the article

  • Fundtech’s Global PAYplus Achieves Oracle Exadata and Oracle Exalogic Optimized Status

    - by Javier Puerta
    Fundtech, a leader in global transaction banking solutions, has announced that Global PAYplus® – Services Platform (GPP-SP) version 4 has achieved Oracle Exadata Optimized and Oracle Exalogic Optimized status. (Read the full announcement here.) "GPP-SP testing was done in the third quarter of 2012 in the Oracle Exastack Lab located in the Oracle Solution Center in Linlithgow, Scotland. It showed that an integrated solution can result in a highly streamlined installation, enabling reduced cost of evaluation, acquisition and ownership. Highlights of the transaction processing test are as follows: 9.3 million Mass Payments per hour; 5.7 million Single Payments per hour. The test found that the optimized combination of GPP-SP running on Oracle Exadata Database Machine and Oracle Exalogic Elastic Cloud is able to increase transactions-per-second (TPS) output per core, and able to reduce total cost of ownership (TCO). The volumes were achieved using only 25% of Exadata/Exalogic processing capacity."

    Read the article

  • What are the advantages of version control systems that version each file separately?

    - by Mike Daniels
    Over the past few years I have worked with several different version control systems. For me, one of the fundamental differences between them has been whether they version files individually (each file has its own separate version numbering and history) or the repository as a whole (a "commit" or version represents a snapshot of the whole repository). Some "per-file" version control systems: CVS ClearCase Visual SourceSafe Some "whole-repository" version control systems: SVN Git Mercurial In my experience, the per-file version control systems have only led to problems, and require much more configuration and maintenance to use correctly (for example, "config specs" in ClearCase). I've had many instances of a co-worker changing an unrelated file and breaking what would ideally be an isolated line of development. What are the advantages of these per-file version control systems? What problems do "whole-repository" version control systems have that per-file version control systems do not?

    Read the article

  • Financial planning among the works of Peggy Guggenheim

    - by user812481
    On 22 June, in the fantastic setting of Palazzo Venier dei Leoni in Venice, the CFO Executive meeting & event on Cash Flow Planning & Optimization took place. The event, which opened with a networking lunch, allowed guests to enjoy the fantastic view from the palazzo's panoramic terrace overlooking the Grand Canal. During the sessions, Oracle and Reply Consulting, partner of the event, spoke about corporate finance strategy and the value of integrated economic, financial and balance-sheet planning. Thanks to the participation of Banca IMI it was possible to explore the topics of Business Plans, Sensitivity Analysis and Covenant Tests in structured finance transactions. AITI (the Italian Association of Corporate Treasurers) closed the sessions with a 360° view of financial planning, explaining the strategic path required for the capital flows that support the business. Here is the list of presentations: The value of integrated economic-financial-balance-sheet planning for the CFO in corporate governance processes - Lorenzo Mariani, Partner - Reply Consulting; Business Plan, Sensitivity Analysis and Covenant Test in structured finance transactions: applications in the credit granting and risk monitoring phases - Gianluca Vittucci, Head of Structured Finance, Banca dei Territori - Banca IMI; From corporate finance strategy to operational planning: a complete and integrated view of the economic-financial-balance-sheet planning process - Edilio Rossi, EPM Business Development Manager, Italy - Oracle EMEA; Financial planning: a strategic path for optimising the capital flows that develop the company's business, and a core process in relations with the banking system - Giovanni Ceci, AITI board member and Temporary Finance Manager - Associazione Italiana Tesorieri d'Impresa. To view all the presentations, follow us on SlideShare. To view all the photos of the day, click here.

    Read the article

  • CUSTOMER INSIGHT: Trends, Models and Technologies for Success in Latest-Generation CRM

    - by antonella.buonagurio(at)oracle.com
    CRM is a necessity both for large enterprises and for medium-sized companies, which have a growing need for data, information and intelligence about their customers. Many organisations have developed ad hoc CRM systems in-house but, not having IT in their DNA, have spent much of their time on technical and operational aspects rather than on interpreting, processing and reflecting on the data collected. For more information, and to view the event agenda, click here.

    Read the article

  • MSDN Subscriber Benefits

    - by kaleidoscope
    Windows Azure platform offer for MSDN subscribers:

    Introductory offer (subscription levels receiving benefit: MSDN Premium & BizSpark; available for sign-up January 4, 2010; duration of benefit: 8 months; estimated retail value: $1038 for 8 months):
    - Windows Azure compute: 750 hours per month
    - Storage: 10 GB, with 1,000,000 transactions per month
    - AppFabric Service Bus messages: 1,000,000 per month
    - SQL Azure Web Edition (1GB databases): 3
    - Data transfers per month: Europe and North America 7 GB in / 14 GB out; Asia Pacific 2.5 GB in / 5 GB out

    Ongoing MSDN subscription benefit (available after completion of your 8 month introductory Windows Azure benefit; duration: while the MSDN subscription remains active):
    - Visual Studio Ultimate with MSDN & BizSpark (estimated retail value $812/year): 250 compute hours per month; 7.5 GB storage; 750,000 transactions per month; 1,000,000 AppFabric Service Bus messages per month; 3 SQL Azure Web Edition (1GB) databases; data transfers of 5 GB in / 10 GB out (Europe and North America) and 2 GB in / 4 GB out (Asia Pacific)
    - Visual Studio Premium with MSDN ($436/year): 100 compute hours per month; 5 GB storage; 500,000 transactions per month; 500,000 Service Bus messages per month; 2 SQL Azure Web Edition databases; 3 GB in / 6 GB out (Europe and North America) and 1 GB in / 2 GB out (Asia Pacific)
    - Visual Studio Professional with MSDN ($223/year): 50 compute hours per month; 3 GB storage; 300,000 transactions per month; 300,000 Service Bus messages per month; 1 SQL Azure Web Edition database; 2 GB in / 4 GB out (Europe and North America) and .5 GB in / 1 GB out (Asia Pacific)

    This introductory offer will last for 8 months from the time you sign up. After that, you'll cancel your introductory account and sign up for the ongoing MSDN benefit based on your subscription level. The easiest way to cancel your introductory account is to set it to not "auto-renew". Think of "compute" as an instance of your application running in the cloud. So with 750 hours per month, you can keep a single instance running non-stop all month long. Or run 2 compute instances for two weeks a month. Or 4 for a week a piece. Lokesh, M

    Read the article

  • SQL Azure Pricing

    - by kaleidoscope
    Microsoft's pricing for SQL Server in the cloud, SQL Azure, has been announced: $9.99 per month for 0–1 GB and $99.99 per month for up to 10 GB. There's currently a 10GB maximum size cap for SQL Azure. For larger data storage needs, you'll need to break the databases into smaller sizes. Scaling SQL Azure Applications If you think you're going to need 100GB in the near term, it probably makes sense to break your application up into multiple separate databases from the get-go (10 x $9.99 = $99.99 anyway) and just make really sure none of the individual databases exceed 10GB. Beep Beep, Back That Database Up The bandwidth costs for SQL Azure are $.15 per GB of outbound bandwidth. Assuming that you don't compress the data before you pull it out of the cloud, that means daily backups of a 1GB database will add another $4.50 per month, and a 10GB database will add another $45/month. Daily backups will cost about half of what your monthly service charges cost. It's not completely clear from the press release, but if Microsoft follows Amazon's pricing model, bandwidth between Microsoft cloud services will not incur a cost. That would mean it might make sense to spin up a Windows Azure computing application for $.12 per hour, use that application to compress your SQL Azure database, and then send the compressed data off to Azure storage for backup. That would eliminate the data in/out costs and minimize the Azure storage costs ($.15/GB). Database administrators would back up their SQL Azure data to Azure Storage, keep a history of backups there, and restore them to SQL Azure faster when needed. Of course, there's no native backup support in SQL Azure, and it's not clear whether Windows Azure will include tools like SQL Server Integration Services. More details can be found at http://www.brentozar.com/archive/2009/07/sql-azure-pricing-10-for-1gb-100-for-10gb/ Anish, S

    Read the article

  • HP ProLiant DL980-Oracle TPC-C Benchmark spat

    - by jchang
    The Register reported a spat between HP and Oracle over the TPC-C benchmark. Per the above, HP submitted a TPC-C result of 3,388,535 tpm-C for their ProLiant DL980 G7 (8 Xeon X7560 processors), at a cost of $0.63 per tpm-C. Oracle has refused permission to publish. Late last year (2010) Oracle published a result of 30M tpm-C for a 108-processor (socket) SPARC cluster ($30M complete system cost). Oracle is now comparing this to the HP Superdome result from 2007 of 4M tpm-C at $2.93 per tpm-C, calling...(read more)

    Read the article

  • Microsoft Cuts Windows Azure Compute and Storage Pricing

    The savings begin with Microsoft's Windows Azure Storage Pay-As-You-Go service, which now costs $0.125 per GB as opposed to $0.14 per GB, a savings of 12 percent. Microsoft also slashed the pricing for Windows Azure Storage's 6 Month Plans as much as 14 percent across all tiers. Lastly, compute customers can now enjoy Windows Azure Extra Small Compute pricing of $0.02 per hour instead of $0.04 per hour, a savings of 50 percent. To exhibit the cost advantages offered by Windows Azure, Microsoft noted in a blog post that a 24x7 Extra Small Compute instance with a 100MB SQL Azure database can b...

    Read the article

  • Oracle Developer Days 2013

    - by Anne Manke
    The Oracle Database in practice: what's inside the editions? Use cases, tips and tricks to take away, including an outlook on new features. The use cases for the Oracle Database are many and varied, and so Oracle offers its market-leading database in different editions. Over 30 years of continuous development have produced a wealth of useful features, which are sensibly distributed across the various editions. An outlook on the features of the new database release planned for 2013 rounds off the workshop. In this event, put together specially by the Database business unit, we will bring you up to date on the following topics, along with many tips and tricks: the differences between the editions and their secrets; an extensive set of base features even without extra options; performance and scalability in the individual editions; cost and resource savings made easy; security in the database; increasing availability with simple means; handling large volumes of data; cloud technologies in the Oracle Database. Dates: 23.01.2013: Oracle office Stuttgart, Liebknechtstr. 35, D-70565 Stuttgart [registration by email]; 30.01.2013: Oracle office Potsdam, Schiffbauergasse 14, D-14467 Potsdam [registration by email]; 05.02.2013: Oracle office Düsseldorf, Hamborner Str. 51, D-40472 Düsseldorf [registration by email]. Registration: register for the event today - participation is free of charge! By email to Barbara Frank, ORACLE Deutschland B.V. & Co KG, or by telephone: +49 (0)711 72840-211. Agenda: 10:00 start of the event; overview of the Oracle Database and its editions (OracleXE, SE1, SE, EE: who needs what? What are the differences ...?); the Standard Edition - an extensive basic feature set (SQL and PL/SQL: more than SELECT, Application Express, Oracle TEXT and more ...); lunch break; more performance: the sports package in the Enterprise Edition (fast statement execution, guaranteed resource usage, saving storage space ...); more security: the security package in the Enterprise Edition (multi-tenancy out of the box, auditing options); more availability: the mobility package in the Enterprise Edition (Flashback Database, options with Data Guard, ...); 17:00: end of the event. We look forward to seeing you!

    Read the article
