Search Results

Search found 7216 results on 289 pages for 'low cost'.


  • Speeding up a group by date query on a big table in postgres

    - by zaius
    I've got a table with around 20 million rows. For argument's sake, let's say there are two columns in the table - an id and a timestamp. I'm trying to get a count of the number of items per day. Here's what I have at the moment:
    SELECT DATE(timestamp) AS day, COUNT(*) FROM actions WHERE DATE(timestamp) >= '20100101' AND DATE(timestamp) < '20110101' GROUP BY day;
    Without any indices, this takes about 30s to run on my machine. Here's the explain analyze output:
    GroupAggregate (cost=675462.78..676813.42 rows=46532 width=8) (actual time=24467.404..32417.643 rows=346 loops=1)
    -> Sort (cost=675462.78..675680.34 rows=87021 width=8) (actual time=24466.730..29071.438 rows=17321121 loops=1)
    Sort Key: (date("timestamp")) Sort Method: external merge Disk: 372496kB
    -> Seq Scan on actions (cost=0.00..667133.11 rows=87021 width=8) (actual time=1.981..12368.186 rows=17321121 loops=1)
    Filter: ((date("timestamp") >= '2010-01-01'::date) AND (date("timestamp") < '2011-01-01'::date))
    Total runtime: 32447.762 ms
    Since I'm seeing a sequential scan, I tried adding an index on the date expression:
    CREATE INDEX ON actions (DATE(timestamp));
    which cuts the runtime by about 50%:
    HashAggregate (cost=796710.64..796716.19 rows=370 width=8) (actual time=17038.503..17038.590 rows=346 loops=1)
    -> Seq Scan on actions (cost=0.00..710202.27 rows=17301674 width=8) (actual time=1.745..12080.877 rows=17321121 loops=1)
    Filter: ((date("timestamp") >= '2010-01-01'::date) AND (date("timestamp") < '2011-01-01'::date))
    Total runtime: 17038.663 ms
    I'm new to this whole query-optimization business, and I have no idea what to do next. Any clues on how I could get this query running faster?
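    One possible direction (a sketch only - the index name is hypothetical, and it assumes the column really is called "timestamp" as in the question): index the raw column and write the filter as a half-open range over it, so the predicate is sargable against an ordinary btree index rather than evaluating DATE(timestamp) for every row.
    -- Sketch only: index name is made up; column name taken from the question
    CREATE INDEX actions_timestamp_idx ON actions ("timestamp");
    -- Half-open range on the raw column; the day is derived only in the output
    SELECT DATE("timestamp") AS day, COUNT(*)
    FROM actions
    WHERE "timestamp" >= DATE '2010-01-01'
      AND "timestamp" <  DATE '2011-01-01'
    GROUP BY day
    ORDER BY day;
    Since the range still covers most of the 20 million rows, the filter itself may not get much cheaper; if this report runs repeatedly, a pre-aggregated daily summary table (or materialized view) refreshed periodically is usually the bigger win.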

    Read the article

  • multiple join query in entity framework

    - by gvLearner
    I have the following tables:
    tasks
    id | name  | proj_id
    1  | task1 | 1
    2  | task2 | 1
    3  | task3 | 1
    projects
    id | name
    1  | sample proj1
    2  | demo project
    budget_versions
    id | version_name | proj_id
    1  | 50           | 1
    budgets
    id | cost | budget_version_id | task_id
    1  | 3000 | 1                 | 2
    2  | 5000 | 1                 | 1
    I need to join these tables to get a result as below:
    task_id | task_name | project_id | budget_version | budget_id | cost
    1       | task1     | 1          | 1              | 2         | 5000
    2       | task2     | 1          | 1              | 1         | 3000
    3       | task3     | 1          | NULL           | NULL      | NULL
    This is the query I have so far:
    select tsk.id, tsk.name, tsk.project_id, bgtver.id, bgt.id, bgt.cost
    from TASK tsk
    left outer join BUDGET_VERSIONS bgtver on tsk.project_id = bgtver.project_id
    left outer join BUDGETS bgt on bgtver.id = bgt.budget_version_id and tsk.id = bgt.task_id
    where bgtver.id = 1
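    With the schema above, the query loses the NULL budget_version for task3 because budget_versions is joined on the project alone, so version 1 matches every task in project 1 whether or not a budget row exists, and the WHERE clause then pins bgtver.id to 1. A plain SQL sketch of one way to get the output shown (column names follow the table definitions above; the Entity Framework/LINQ version follows the same join shape with GroupJoin/DefaultIfEmpty): join budgets to tasks first, keep the version filter in the ON clause, and derive the version from the budget row.
    -- Sketch only: column names follow the table definitions above
    SELECT tsk.id      AS task_id,
           tsk.name    AS task_name,
           tsk.proj_id AS project_id,
           bgtver.id   AS budget_version,
           bgt.id      AS budget_id,
           bgt.cost
    FROM tasks AS tsk
    LEFT OUTER JOIN budgets AS bgt
           ON bgt.task_id = tsk.id
          AND bgt.budget_version_id = 1      -- version filter kept in the ON clause
    LEFT OUTER JOIN budget_versions AS bgtver
           ON bgtver.id = bgt.budget_version_id
    ORDER BY tsk.id;
    Moving the version filter into the WHERE clause instead would turn the outer join back into an inner one and drop any task that has no budget in that version.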

    Read the article

  • How does this scaling-down CSS code work?

    - by harris
    this is a code for scaling down for css. i was wondering, how this worked. please someone explain to me part by part. thank you very much. /* ======================================================================== / / Copyright (C) 2000 - 2009 ND-Tech. Co., Ltd. / / All Rights Reserved. / / ======================================================================== / / Project : ScaleDown Created : 31-AUG-2009 / / File : main.c Contact : [email protected] / / ======================================================================== / / You are free to use or modify this code to the following restrictions: / / Acknowledge ND Tech. Co. Ltd. / / Or, put "Parts of code by ND Tech. Co., Ltd." / / Or, leave this header as it is. / / in somewhere in your code. / / ======================================================================== */ include "vm3224k.h" define CE0CTL *(volatile int *)(0x01800008) define CE2CTL *(volatile int *)(0x01800010) define SDCTL *(volatile int *)(0x01800018) define LED *(volatile short *)(0x90080000) // Definitions for async access(change as you wish) define WSU (2<<28) // Write Setup : 0-15 define WST (8<<22) // Write Strobe: 0-63 define WHD (2<<20) // Write Hold : 0-3 define RSU (2<<16) // Read Setup : 0-15 define TA (3<<14) // Turn Around : 0-3 define RST (8<<8) // Read Strobe : 0-63 define RHD (2<<0) // Read Hold : 0-3 define MTYPE (2<<4) /* EDMA Registers */ define PaRAM_OPT 0 // Options define PaRAM_SRC 1 // Source Address define PaRAM_CNT 2 // Frame count, Element count define PaRAM_DST 3 // Destination Address define PaRAM_IDX 4 // Frame index, Element index define PaRAM_RDL 5 // Element count reload, Link address define EDMA_CIPR *(volatile int *)0x01A0FFE4 // EDMA Channel interrupt pending low register define EDMA_CIER *(volatile int *)0x01A0FFE8 // EDMA Channel interrupt enable low register define EDMA_CCER *(volatile int *)0x01A0FFEC // EDMA Channel chain enable register define EDMA_ER *(volatile int *)0x01A0FFF0 // EDMA Event low register define EDMA_EER *(volatile int *)0x01A0FFF4 // EDMA Event enable low register define EDMA_ECR *(volatile int *)0x01A0FFF8 // EDMA Event clear low register define EDMA_ESR *(volatile int *)0x01A0FFFC // EDMA Event set low register define PRI (2<<29) // 1:High priority, 2:Low priority define ESIZE (1<<27) // 0:32bit, 1:16bit, 2:8bit, 3:reserved define DS2 (0<<26) // 1:2-Dimensional define SUM (0<<24) // 0:no update, 1:increment, 2:decrement, 3:by index define DD2 (0<<23) // 1:2-Dimensional define DUM (0<<21) // 0:no update, 1:increment, 2:decrement, 3:by index define TCINT (1<<20) // 0:disable, 1:enable define TCC (8<<16) // 4 bit code define LINK (0<<1) // 0:disable, 1:enable define FS (1<<0) // 0:element, 1:frame define OptionField_0 (PRI|ESIZE|DS2|SUM|DD2|DUM|TCINT|TCC|LINK|FS) define DD2_1 (1<<23) // 1:2-Dimensional define DUM_1 (1<<21) // 0:no update, 1:increment, 2:decrement, 3:by index define TCC_1 (9<<16) // 4 bit code define OptionField_1 (PRI|ESIZE|DS2|SUM|DD2_1|DUM_1|TCINT|TCC_1|LINK|FS) define TCC_2 (10<<16)// 4 bit code define OptionField_2 (PRI|ESIZE|DS2|SUM|DD2|DUM|TCINT|TCC_2|LINK|FS) define DS2_3 (1<<26) // 1:2-Dimensional define SUM_3 (1<<24) // 0:no update, 1:increment, 2:decrement, 3:by index define TCC_3 (11<<16)// 4 bit code define OptionField_3 (PRI|ESIZE|DS2_3|SUM_3|DD2|DUM|TCINT|TCC_3|LINK|FS) pragma DATA_SECTION ( lcd,".sdram" ) pragma DATA_SECTION ( cam,".sdram" ) pragma DATA_SECTION ( rgb,".sdram" ) pragma DATA_SECTION ( u,".sdram" ) extern cregister volatile unsigned int IER; extern cregister 
volatile unsigned int CSR; short camcode = 0x08000; short lcdcode = 0x00000; short lcd[2][240][320]; short cam[2][240][320]; short rgb[64][32][32]; short bufsel; int *Cevent,*Levent,*CLink,flag=1; unsigned char v[240][160],out_y[120][160]; unsigned char y[240][320],out_u[120][80]; unsigned char u[240][160],out_v[120][80]; void PLL6713() { int i; // CPU Clock Input : 50MHz *(volatile int *)(0x01b7c100) = *(volatile int *)(0x01b7c100) & 0xfffffffe; for(i=0;i<4;i++); *(volatile int *)(0x01b7c100) = *(volatile int *)(0x01b7c100) | 0x08; *(volatile int *)(0x01b7c114) = 0x08001; // 50MHz/2 = 25MHz *(volatile int *)(0x01b7c110) = 0x0c; // 25MHz * 12 = 300MHz *(volatile int *)(0x01b7c118) = 0x08000; // SYSCLK1 = 300MHz/1 = 300MHz *(volatile int *)(0x01b7c11c) = 0x08001; // SYSCLK2 = 300MHz/2 = 150MHz // Peripheral Clock *(volatile int *)(0x01b7c120) = 0x08003; // SYSCLK3 = 300MHz/4 = 75MHz // SDRAM Clock for(i=0;i<4;i++); *(volatile int *)(0x01b7c100) = *(volatile int *)(0x01b7c100) & 0xfffffff7; for(i=0;i<4;i++); *(volatile int *)(0x01b7c100) = *(volatile int *)(0x01b7c100) | 0x01; } unsigned short ybr_565(short y,short u,short v) { int r,g,b; b = y + 1772*(u-128)/1000; if (b<0) b=0; if (b>255) b=255; g = y - (344*(u-128) + 714*(v-128))/1000; if (g<0) g=0; if (g>255) g=255; r = y + 1402*(v-128)/1000; if (r<0) r=0; if (r>255) r=255; return ((r&0x0f8)<<8)|((g&0x0fc)<<3)|((b&0x0f8)>>3); } void yuyv2yuv(char *yuyv,char *y,char *u,char *v) { int i,j,dy,dy1,dy2,s; for (j=s=dy=dy1=dy2=0;j<240;j++) { for (i=0;i<320;i+=2) { u[dy1++] = yuyv[s++]; y[dy++] = yuyv[s++]; v[dy2++] = yuyv[s++]; y[dy++] = yuyv[s++]; } } } interrupt void c_int06(void) { if(EDMA_CIPR&0x800){ EDMA_CIPR = 0xffff; bufsel=(++bufsel&0x01); Cevent[PaRAM_DST] = (int)cam[(bufsel+1)&0x01]; Levent[PaRAM_SRC] = (int)lcd[(bufsel+1)&0x01]; EDMA_ESR = 0x80; flag=1; } } void main() { int i,j,k,y0,y1,v0,u0; bufsel = 0; CSR &= (~0x1); PLL6713(); // Initialize C6713 PLL CE0CTL = 0xffffbf33;// SDRAM Space CE2CTL = (WSU|WST|WHD|RSU|RST|RHD|MTYPE); SDCTL = 0x57115000; vm3224init(); // Initialize vm3224k2 vm3224rate(1); // Set frame rate vm3224bl(15); // Set backlight VM3224CNTL = VM3224CNTL&0xffff | 0x2; // vm3224 interrupt enable for (k=0;k<64;k++) // Create RGB565 lookup table for (i=0;i<32;i++) for (j=0;j<32;j++) rgb[k][i][j] = ybr_565(k<<2,i<<3,j<<3); Cevent = (int *)(0x01a00000 + 24 * 7); Cevent[PaRAM_OPT] = OptionField_0; Cevent[PaRAM_SRC] = (int)&camcode; Cevent[PaRAM_CNT] = 1; Cevent[PaRAM_DST] = (int)&VM3224ADDH; Cevent = (int *)(0x01a00000 + 24 * 8); Cevent[PaRAM_OPT] = OptionField_1; Cevent[PaRAM_SRC] = (int)&VM3224DATA; Cevent[PaRAM_CNT] = (239<<16)|320; Cevent[PaRAM_DST] = (int)cam[bufsel]; Cevent[PaRAM_IDX] = 0; Levent = (int *)(0x01a00000 + 24 * 9); Levent[PaRAM_OPT] = OptionField_2; Levent[PaRAM_SRC] = (int)&lcdcode; Levent[PaRAM_CNT] = 1; Levent[PaRAM_DST] = (int)&VM3224ADDH; Levent = (int *)(0x01a00000 + 24 * 10); Levent[PaRAM_OPT] = OptionField_3; Levent[PaRAM_SRC] = (int)lcd[bufsel]; Levent[PaRAM_CNT] = (239<<16)|320; Levent[PaRAM_DST] = (int)&VM3224DATA; Levent[PaRAM_IDX] = 0; IER = IER | (1<<6)|3; CSR = CSR | 0x1; EDMA_CCER = (1<<8)|(1<<9)|(1<<10); EDMA_CIER = (1<<11); EDMA_CIPR = 0xffff; EDMA_ESR = 0x80; while (1) { if(flag) { // LED = 0; yuyv2yuv((char *)cam[bufsel],(char *)y,(char *)u,(char *)v); for(j=0;j<240;j++) for(i=0;i<320;i++) lcd[bufsel][j][i]=0; for(j=0;j<240;j+=2) for(i=0;i<320;i+=2) out_y[j>>1][i>>1]=(y[j][i]+y[j][i+1]+y[j+1][i]+y[j+1][i+1])>>2; for(j=0;j<240;j+=2) for(i=0;i<160;i+=2) { 
out_u[j>>1][i>>1]=(u[j][i]+u[j][i+1]+u[j+1][i]+u[j+1][i+1])>>2; out_v[j>>1][i>>1]=(v[j][i]+v[j][i+1]+v[j+1][i]+v[j+1][i+1])>>2; } for (j=0;j<120;j++) for (i=0;i<160;i+=2) { y0 = out_y[j][i]>>2; u0 = out_u[j][i>>1]>>3; v0 = out_v[j][i>>1]>>3; y1 = out_y[j][i+1]>>2; lcd[bufsel][j+60][i+80]=rgb[y0][u0][v0]; lcd[bufsel][j+60][i+81]=rgb[y1][u0][v0]; } flag=0; // LED = 1; } } }

    Read the article

  • Improving Partitioned Table Join Performance

    - by Paul White
    The query optimizer does not always choose an optimal strategy when joining partitioned tables. This post looks at an example, showing how a manual rewrite of the query can almost double performance, while reducing the memory grant to almost nothing. Test Data The two tables in this example use a common partitioning partition scheme. The partition function uses 41 equal-size partitions: CREATE PARTITION FUNCTION PFT (integer) AS RANGE RIGHT FOR VALUES ( 125000, 250000, 375000, 500000, 625000, 750000, 875000, 1000000, 1125000, 1250000, 1375000, 1500000, 1625000, 1750000, 1875000, 2000000, 2125000, 2250000, 2375000, 2500000, 2625000, 2750000, 2875000, 3000000, 3125000, 3250000, 3375000, 3500000, 3625000, 3750000, 3875000, 4000000, 4125000, 4250000, 4375000, 4500000, 4625000, 4750000, 4875000, 5000000 ); GO CREATE PARTITION SCHEME PST AS PARTITION PFT ALL TO ([PRIMARY]); There two tables are: CREATE TABLE dbo.T1 ( TID integer NOT NULL IDENTITY(0,1), Column1 integer NOT NULL, Padding binary(100) NOT NULL DEFAULT 0x,   CONSTRAINT PK_T1 PRIMARY KEY CLUSTERED (TID) ON PST (TID) );   CREATE TABLE dbo.T2 ( TID integer NOT NULL, Column1 integer NOT NULL, Padding binary(100) NOT NULL DEFAULT 0x,   CONSTRAINT PK_T2 PRIMARY KEY CLUSTERED (TID, Column1) ON PST (TID) ); The next script loads 5 million rows into T1 with a pseudo-random value between 1 and 5 for Column1. The table is partitioned on the IDENTITY column TID: INSERT dbo.T1 WITH (TABLOCKX) (Column1) SELECT (ABS(CHECKSUM(NEWID())) % 5) + 1 FROM dbo.Numbers AS N WHERE n BETWEEN 1 AND 5000000; In case you don’t already have an auxiliary table of numbers lying around, here’s a script to create one with 10 million rows: CREATE TABLE dbo.Numbers (n bigint PRIMARY KEY);   WITH L0 AS(SELECT 1 AS c UNION ALL SELECT 1), L1 AS(SELECT 1 AS c FROM L0 AS A CROSS JOIN L0 AS B), L2 AS(SELECT 1 AS c FROM L1 AS A CROSS JOIN L1 AS B), L3 AS(SELECT 1 AS c FROM L2 AS A CROSS JOIN L2 AS B), L4 AS(SELECT 1 AS c FROM L3 AS A CROSS JOIN L3 AS B), L5 AS(SELECT 1 AS c FROM L4 AS A CROSS JOIN L4 AS B), Nums AS(SELECT ROW_NUMBER() OVER (ORDER BY (SELECT NULL)) AS n FROM L5) INSERT dbo.Numbers WITH (TABLOCKX) SELECT TOP (10000000) n FROM Nums ORDER BY n OPTION (MAXDOP 1); Table T1 contains data like this: Next we load data into table T2. The relationship between the two tables is that table 2 contains ‘n’ rows for each row in table 1, where ‘n’ is determined by the value in Column1 of table T1. There is nothing particularly special about the data or distribution, by the way. INSERT dbo.T2 WITH (TABLOCKX) (TID, Column1) SELECT T.TID, N.n FROM dbo.T1 AS T JOIN dbo.Numbers AS N ON N.n >= 1 AND N.n <= T.Column1; Table T2 ends up containing about 15 million rows: The primary key for table T2 is a combination of TID and Column1. The data is partitioned according to the value in column TID alone. Partition Distribution The following query shows the number of rows in each partition of table T1: SELECT PartitionID = CA1.P, NumRows = COUNT_BIG(*) FROM dbo.T1 AS T CROSS APPLY (VALUES ($PARTITION.PFT(TID))) AS CA1 (P) GROUP BY CA1.P ORDER BY CA1.P; There are 40 partitions containing 125,000 rows (40 * 125k = 5m rows). The rightmost partition remains empty. The next query shows the distribution for table 2: SELECT PartitionID = CA1.P, NumRows = COUNT_BIG(*) FROM dbo.T2 AS T CROSS APPLY (VALUES ($PARTITION.PFT(TID))) AS CA1 (P) GROUP BY CA1.P ORDER BY CA1.P; There are roughly 375,000 rows in each partition (the rightmost partition is also empty): Ok, that’s the test data done. 
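    As a side check (not from the original article), the per-partition row counts shown above can also be read from catalog metadata rather than by scanning the tables; a minimal sketch, assuming the same object names used in this post:
    -- Sketch: metadata row counts per partition for T1 (swap in T2 for the second table)
    SELECT ps.partition_number, ps.row_count
    FROM sys.dm_db_partition_stats AS ps
    WHERE ps.[object_id] = OBJECT_ID(N'dbo.T1', N'U')
    AND ps.index_id = 1
    ORDER BY ps.partition_number;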
Test Query and Execution Plan The task is to count the rows resulting from joining tables 1 and 2 on the TID column: SET STATISTICS IO ON; DECLARE @s datetime2 = SYSUTCDATETIME();   SELECT COUNT_BIG(*) FROM dbo.T1 AS T1 JOIN dbo.T2 AS T2 ON T2.TID = T1.TID;   SELECT DATEDIFF(Millisecond, @s, SYSUTCDATETIME()); SET STATISTICS IO OFF; The optimizer chooses a plan using parallel hash join, and partial aggregation: The Plan Explorer plan tree view shows accurate cardinality estimates and an even distribution of rows across threads (click to enlarge the image): With a warm data cache, the STATISTICS IO output shows that no physical I/O was needed, and all 41 partitions were touched: Running the query without actual execution plan or STATISTICS IO information for maximum performance, the query returns in around 2600ms. Execution Plan Analysis The first step toward improving on the execution plan produced by the query optimizer is to understand how it works, at least in outline. The two parallel Clustered Index Scans use multiple threads to read rows from tables T1 and T2. Parallel scan uses a demand-based scheme where threads are given page(s) to scan from the table as needed. This arrangement has certain important advantages, but does result in an unpredictable distribution of rows amongst threads. The point is that multiple threads cooperate to scan the whole table, but it is impossible to predict which rows end up on which threads. For correct results from the parallel hash join, the execution plan has to ensure that rows from T1 and T2 that might join are processed on the same thread. For example, if a row from T1 with join key value ‘1234’ is placed in thread 5’s hash table, the execution plan must guarantee that any rows from T2 that also have join key value ‘1234’ probe thread 5’s hash table for matches. The way this guarantee is enforced in this parallel hash join plan is by repartitioning rows to threads after each parallel scan. The two repartitioning exchanges route rows to threads using a hash function over the hash join keys. The two repartitioning exchanges use the same hash function so rows from T1 and T2 with the same join key must end up on the same hash join thread. Expensive Exchanges This business of repartitioning rows between threads can be very expensive, especially if a large number of rows is involved. The execution plan selected by the optimizer moves 5 million rows through one repartitioning exchange and around 15 million across the other. As a first step toward removing these exchanges, consider the execution plan selected by the optimizer if we join just one partition from each table, disallowing parallelism: SELECT COUNT_BIG(*) FROM dbo.T1 AS T1 JOIN dbo.T2 AS T2 ON T2.TID = T1.TID WHERE $PARTITION.PFT(T1.TID) = 1 AND $PARTITION.PFT(T2.TID) = 1 OPTION (MAXDOP 1); The optimizer has chosen a (one-to-many) merge join instead of a hash join. The single-partition query completes in around 100ms. If everything scaled linearly, we would expect that extending this strategy to all 40 populated partitions would result in an execution time around 4000ms. Using parallelism could reduce that further, perhaps to be competitive with the parallel hash join chosen by the optimizer. This raises a question. If the most efficient way to join one partition from each of the tables is to use a merge join, why does the optimizer not choose a merge join for the full query? 
Forcing a Merge Join Let’s force the optimizer to use a merge join on the test query using a hint: SELECT COUNT_BIG(*) FROM dbo.T1 AS T1 JOIN dbo.T2 AS T2 ON T2.TID = T1.TID OPTION (MERGE JOIN); This is the execution plan selected by the optimizer: This plan results in the same number of logical reads reported previously, but instead of 2600ms the query takes 5000ms. The natural explanation for this drop in performance is that the merge join plan is only using a single thread, whereas the parallel hash join plan could use multiple threads. Parallel Merge Join We can get a parallel merge join plan using the same query hint as before, and adding trace flag 8649: SELECT COUNT_BIG(*) FROM dbo.T1 AS T1 JOIN dbo.T2 AS T2 ON T2.TID = T1.TID OPTION (MERGE JOIN, QUERYTRACEON 8649); The execution plan is: This looks promising. It uses a similar strategy to distribute work across threads as seen for the parallel hash join. In practice though, performance is disappointing. On a typical run, the parallel merge plan runs for around 8400ms; slower than the single-threaded merge join plan (5000ms) and much worse than the 2600ms for the parallel hash join. We seem to be going backwards! The logical reads for the parallel merge are still exactly the same as before, with no physical IOs. The cardinality estimates and thread distribution are also still very good (click to enlarge): A big clue to the reason for the poor performance is shown in the wait statistics (captured by Plan Explorer Pro): CXPACKET waits require careful interpretation, and are most often benign, but in this case excessive waiting occurs at the repartitioning exchanges. Unlike the parallel hash join, the repartitioning exchanges in this plan are order-preserving ‘merging’ exchanges (because merge join requires ordered inputs): Parallelism works best when threads can just grab any available unit of work and get on with processing it. Preserving order introduces inter-thread dependencies that can easily lead to significant waits occurring. In extreme cases, these dependencies can result in an intra-query deadlock, though the details of that will have to wait for another time to explore in detail. The potential for waits and deadlocks leads the query optimizer to cost parallel merge join relatively highly, especially as the degree of parallelism (DOP) increases. This high costing resulted in the optimizer choosing a serial merge join rather than parallel in this case. The test results certainly confirm its reasoning. Collocated Joins In SQL Server 2008 and later, the optimizer has another available strategy when joining tables that share a common partition scheme. This strategy is a collocated join, also known as as a per-partition join. It can be applied in both serial and parallel execution plans, though it is limited to 2-way joins in the current optimizer. Whether the optimizer chooses a collocated join or not depends on cost estimation. The primary benefits of a collocated join are that it eliminates an exchange and requires less memory, as we will see next. Costing and Plan Selection The query optimizer did consider a collocated join for our original query, but it was rejected on cost grounds. The parallel hash join with repartitioning exchanges appeared to be a cheaper option. There is no query hint to force a collocated join, so we have to mess with the costing framework to produce one for our test query. 
Pretending that IOs cost 50 times more than usual is enough to convince the optimizer to use collocated join with our test query: -- Pretend IOs are 50x cost temporarily DBCC SETIOWEIGHT(50);   -- Co-located hash join SELECT COUNT_BIG(*) FROM dbo.T1 AS T1 JOIN dbo.T2 AS T2 ON T2.TID = T1.TID OPTION (RECOMPILE);   -- Reset IO costing DBCC SETIOWEIGHT(1); Collocated Join Plan The estimated execution plan for the collocated join is: The Constant Scan contains one row for each partition of the shared partitioning scheme, from 1 to 41. The hash repartitioning exchanges seen previously are replaced by a single Distribute Streams exchange using Demand partitioning. Demand partitioning means that the next partition id is given to the next parallel thread that asks for one. My test machine has eight logical processors, and all are available for SQL Server to use. As a result, there are eight threads in the single parallel branch in this plan, each processing one partition from each table at a time. Once a thread finishes processing a partition, it grabs a new partition number from the Distribute Streams exchange…and so on until all partitions have been processed. It is important to understand that the parallel scans in this plan are different from the parallel hash join plan. Although the scans have the same parallelism icon, tables T1 and T2 are not being co-operatively scanned by multiple threads in the same way. Each thread reads a single partition of T1 and performs a hash match join with the same partition from table T2. The properties of the two Clustered Index Scans show a Seek Predicate (unusual for a scan!) limiting the rows to a single partition: The crucial point is that the join between T1 and T2 is on TID, and TID is the partitioning column for both tables. A thread that processes partition ‘n’ is guaranteed to see all rows that can possibly join on TID for that partition. In addition, no other thread will see rows from that partition, so this removes the need for repartitioning exchanges. CPU and Memory Efficiency Improvements The collocated join has removed two expensive repartitioning exchanges and added a single exchange processing 41 rows (one for each partition id). Remember, the parallel hash join plan exchanges had to process 5 million and 15 million rows. The amount of processor time spent on exchanges will be much lower in the collocated join plan. In addition, the collocated join plan has a maximum of 8 threads processing single partitions at any one time. The 41 partitions will all be processed eventually, but a new partition is not started until a thread asks for it. Threads can reuse hash table memory for the new partition. The parallel hash join plan also had 8 hash tables, but with all 5,000,000 build rows loaded at the same time. The collocated plan needs memory for only 8 * 125,000 = 1,000,000 rows at any one time. Collocated Hash Join Performance The collated join plan has disappointing performance in this case. The query runs for around 25,300ms despite the same IO statistics as usual. This is much the worst result so far, so what went wrong? It turns out that cardinality estimation for the single partition scans of table T1 is slightly low. The properties of the Clustered Index Scan of T1 (graphic immediately above) show the estimation was for 121,951 rows. 
This is a small shortfall compared with the 125,000 rows actually encountered, but it was enough to cause the hash join to spill to physical tempdb: A level 1 spill doesn’t sound too bad, until you realize that the spill to tempdb probably occurs for each of the 41 partitions. As a side note, the cardinality estimation error is a little surprising because the system tables accurately show there are 125,000 rows in every partition of T1. Unfortunately, the optimizer uses regular column and index statistics to derive cardinality estimates here rather than system table information (e.g. sys.partitions). Collocated Merge Join We will never know how well the collocated parallel hash join plan might have worked without the cardinality estimation error (and the resulting 41 spills to tempdb) but we do know: Merge join does not require a memory grant; and Merge join was the optimizer’s preferred join option for a single partition join Putting this all together, what we would really like to see is the same collocated join strategy, but using merge join instead of hash join. Unfortunately, the current query optimizer cannot produce a collocated merge join; it only knows how to do collocated hash join. So where does this leave us? CROSS APPLY sys.partitions We can try to write our own collocated join query. We can use sys.partitions to find the partition numbers, and CROSS APPLY to get a count per partition, with a final step to sum the partial counts. The following query implements this idea: SELECT row_count = SUM(Subtotals.cnt) FROM ( -- Partition numbers SELECT p.partition_number FROM sys.partitions AS p WHERE p.[object_id] = OBJECT_ID(N'T1', N'U') AND p.index_id = 1 ) AS P CROSS APPLY ( -- Count per collocated join SELECT cnt = COUNT_BIG(*) FROM dbo.T1 AS T1 JOIN dbo.T2 AS T2 ON T2.TID = T1.TID WHERE $PARTITION.PFT(T1.TID) = p.partition_number AND $PARTITION.PFT(T2.TID) = p.partition_number ) AS SubTotals; The estimated plan is: The cardinality estimates aren’t all that good here, especially the estimate for the scan of the system table underlying the sys.partitions view. Nevertheless, the plan shape is heading toward where we would like to be. Each partition number from the system table results in a per-partition scan of T1 and T2, a one-to-many Merge Join, and a Stream Aggregate to compute the partial counts. The final Stream Aggregate just sums the partial counts. Execution time for this query is around 3,500ms, with the same IO statistics as always. This compares favourably with 5,000ms for the serial plan produced by the optimizer with the OPTION (MERGE JOIN) hint. This is another case of the sum of the parts being less than the whole – summing 41 partial counts from 41 single-partition merge joins is faster than a single merge join and count over all partitions. Even so, this single-threaded collocated merge join is not as quick as the original parallel hash join plan, which executed in 2,600ms. On the positive side, our collocated merge join uses only one logical processor and requires no memory grant. The parallel hash join plan used 16 threads and reserved 569 MB of memory:   Using a Temporary Table Our collocated merge join plan should benefit from parallelism. The reason parallelism is not being used is that the query references a system table. 
We can work around that by writing the partition numbers to a temporary table (or table variable): SET STATISTICS IO ON; DECLARE @s datetime2 = SYSUTCDATETIME();   CREATE TABLE #P ( partition_number integer PRIMARY KEY);   INSERT #P (partition_number) SELECT p.partition_number FROM sys.partitions AS p WHERE p.[object_id] = OBJECT_ID(N'T1', N'U') AND p.index_id = 1;   SELECT row_count = SUM(Subtotals.cnt) FROM #P AS p CROSS APPLY ( SELECT cnt = COUNT_BIG(*) FROM dbo.T1 AS T1 JOIN dbo.T2 AS T2 ON T2.TID = T1.TID WHERE $PARTITION.PFT(T1.TID) = p.partition_number AND $PARTITION.PFT(T2.TID) = p.partition_number ) AS SubTotals;   DROP TABLE #P;   SELECT DATEDIFF(Millisecond, @s, SYSUTCDATETIME()); SET STATISTICS IO OFF; Using the temporary table adds a few logical reads, but the overall execution time is still around 3500ms, indistinguishable from the same query without the temporary table. The problem is that the query optimizer still doesn’t choose a parallel plan for this query, though the removal of the system table reference means that it could if it chose to: In fact the optimizer did enter the parallel plan phase of query optimization (running search 1 for a second time): Unfortunately, the parallel plan found seemed to be more expensive than the serial plan. This is a crazy result, caused by the optimizer’s cost model not reducing operator CPU costs on the inner side of a nested loops join. Don’t get me started on that, we’ll be here all night. In this plan, everything expensive happens on the inner side of a nested loops join. Without a CPU cost reduction to compensate for the added cost of exchange operators, candidate parallel plans always look more expensive to the optimizer than the equivalent serial plan. Parallel Collocated Merge Join We can produce the desired parallel plan using trace flag 8649 again: SELECT row_count = SUM(Subtotals.cnt) FROM #P AS p CROSS APPLY ( SELECT cnt = COUNT_BIG(*) FROM dbo.T1 AS T1 JOIN dbo.T2 AS T2 ON T2.TID = T1.TID WHERE $PARTITION.PFT(T1.TID) = p.partition_number AND $PARTITION.PFT(T2.TID) = p.partition_number ) AS SubTotals OPTION (QUERYTRACEON 8649); The actual execution plan is: One difference between this plan and the collocated hash join plan is that a Repartition Streams exchange operator is used instead of Distribute Streams. The effect is similar, though not quite identical. The Repartition uses round-robin partitioning, meaning the next partition id is pushed to the next thread in sequence. The Distribute Streams exchange seen earlier used Demand partitioning, meaning the next partition id is pulled across the exchange by the next thread that is ready for more work. There are subtle performance implications for each partitioning option, but going into that would again take us too far off the main point of this post. Performance The important thing is the performance of this parallel collocated merge join – just 1350ms on a typical run. The list below shows all the alternatives from this post (all timings include creation, population, and deletion of the temporary table where appropriate) from quickest to slowest: Collocated parallel merge join: 1350ms Parallel hash join: 2600ms Collocated serial merge join: 3500ms Serial merge join: 5000ms Parallel merge join: 8400ms Collated parallel hash join: 25,300ms (hash spill per partition) The parallel collocated merge join requires no memory grant (aside from a paltry 1.2MB used for exchange buffers). 
This plan uses 16 threads at DOP 8; but 8 of those are (rather pointlessly) allocated to the parallel scan of the temporary table. These are minor concerns, but it turns out there is a way to address them if it bothers you. Parallel Collocated Merge Join with Demand Partitioning This final tweak replaces the temporary table with a hard-coded list of partition ids (dynamic SQL could be used to generate this query from sys.partitions): SELECT row_count = SUM(Subtotals.cnt) FROM ( VALUES (1),(2),(3),(4),(5),(6),(7),(8),(9),(10), (11),(12),(13),(14),(15),(16),(17),(18),(19),(20), (21),(22),(23),(24),(25),(26),(27),(28),(29),(30), (31),(32),(33),(34),(35),(36),(37),(38),(39),(40),(41) ) AS P (partition_number) CROSS APPLY ( SELECT cnt = COUNT_BIG(*) FROM dbo.T1 AS T1 JOIN dbo.T2 AS T2 ON T2.TID = T1.TID WHERE $PARTITION.PFT(T1.TID) = p.partition_number AND $PARTITION.PFT(T2.TID) = p.partition_number ) AS SubTotals OPTION (QUERYTRACEON 8649); The actual execution plan is: The parallel collocated hash join plan is reproduced below for comparison: The manual rewrite has another advantage that has not been mentioned so far: the partial counts (per partition) can be computed earlier than the partial counts (per thread) in the optimizer’s collocated join plan. The earlier aggregation is performed by the extra Stream Aggregate under the nested loops join. The performance of the parallel collocated merge join is unchanged at around 1350ms. Final Words It is a shame that the current query optimizer does not consider a collocated merge join (Connect item closed as Won’t Fix). The example used in this post showed an improvement in execution time from 2600ms to 1350ms using a modestly-sized data set and limited parallelism. In addition, the memory requirement for the query was almost completely eliminated  – down from 569MB to 1.2MB. The problem with the parallel hash join selected by the optimizer is that it attempts to process the full data set all at once (albeit using eight threads). It requires a large memory grant to hold all 5 million rows from table T1 across the eight hash tables, and does not take advantage of the divide-and-conquer opportunity offered by the common partitioning. The great thing about the collocated join strategies is that each parallel thread works on a single partition from both tables, reading rows, performing the join, and computing a per-partition subtotal, before moving on to a new partition. From a thread’s point of view… If you have trouble visualizing what is happening from just looking at the parallel collocated merge join execution plan, let’s look at it again, but from the point of view of just one thread operating between the two Parallelism (exchange) operators. Our thread picks up a single partition id from the Distribute Streams exchange, and starts a merge join using ordered rows from partition 1 of table T1 and partition 1 of table T2. By definition, this is all happening on a single thread. As rows join, they are added to a (per-partition) count in the Stream Aggregate immediately above the Merge Join. Eventually, either T1 (partition 1) or T2 (partition 1) runs out of rows and the merge join stops. The per-partition count from the aggregate passes on through the Nested Loops join to another Stream Aggregate, which is maintaining a per-thread subtotal. Our same thread now picks up a new partition id from the exchange (say it gets id 9 this time). 
The count in the per-partition aggregate is reset to zero, and the processing of partition 9 of both tables proceeds just as it did for partition 1, and on the same thread. Each thread picks up a single partition id and processes all the data for that partition, completely independently from other threads working on other partitions. One thread might eventually process partitions (1, 9, 17, 25, 33, 41) while another is concurrently processing partitions (2, 10, 18, 26, 34) and so on for the other six threads at DOP 8. The point is that all 8 threads can execute independently and concurrently, continuing to process new partitions until the wider job (of which the thread has no knowledge!) is done. This divide-and-conquer technique can be much more efficient than simply splitting the entire workload across eight threads all at once. Related Reading Understanding and Using Parallelism in SQL Server Parallel Execution Plans Suck © 2013 Paul White – All Rights Reserved Twitter: @SQL_Kiwi
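    The post mentions in passing that dynamic SQL could generate the hard-coded list of partition ids from sys.partitions. A minimal sketch of that idea (not from the original article; it assumes the same T1/T2 tables, the PFT partition function, and the same sysadmin-only QUERYTRACEON hint used above):
    DECLARE @vals nvarchar(max), @sql nvarchar(max);
    -- Build ",(1),(2),...,(41)" from the catalog, then strip the leading comma
    SELECT @vals = STUFF(
        (
            SELECT N',(' + CONVERT(nvarchar(10), p.partition_number) + N')'
            FROM sys.partitions AS p
            WHERE p.[object_id] = OBJECT_ID(N'T1', N'U')
            AND p.index_id = 1
            ORDER BY p.partition_number
            FOR XML PATH(N'')
        ), 1, 1, N'');
    SET @sql = N'
    SELECT row_count = SUM(Subtotals.cnt)
    FROM (VALUES ' + @vals + N') AS P (partition_number)
    CROSS APPLY
    (
        SELECT cnt = COUNT_BIG(*)
        FROM dbo.T1 AS T1
        JOIN dbo.T2 AS T2 ON T2.TID = T1.TID
        WHERE $PARTITION.PFT(T1.TID) = P.partition_number
        AND $PARTITION.PFT(T2.TID) = P.partition_number
    ) AS SubTotals
    OPTION (QUERYTRACEON 8649);';
    EXEC sys.sp_executesql @sql;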

    Read the article

  • Can I still use unity 2d [duplicate]

    - by dragonloverlord
    This question already has an answer here: Is it possible to change Unity 3D to 2D and will I gain any performance boost after that? I cannot run Unity 3D on my Chromebook, but Unity 2D on Ubuntu 12.04 works fine. Is it possible to run Unity in low-graphics mode on Ubuntu 14.04 as an alternative? If I can, how would I go about that? If I cannot, what would be a good Unity-like alternative for Ubuntu 14.04?

    Read the article

  • links for 2010-03-15

    - by Bob Rhubart
    ComputerworldUK: Morrison boosts IT investment by £200 million "[I]mproving efficiencies in areas such as manufacturing and distribution...helped the company make total savings of £526 million, surpassing its expected cost savings of £460 million. A total £43 million in cost savings was due to the IT investment." -- Anh Nguyen, ComputerworldUK (h/t to Brian Dayton for the link) (tags: oracle investment informationtechnology soasuite fusionmiddleware)

    Read the article

  • @CodeStock 2012 Review: Rob Gillen ( @argodev ) - Anatomy of a Buffer Overflow Attack

    Anatomy of a Buffer Overflow Attack. Speaker: Rob Gillen. Twitter: @argodev. Blog: rob.gillenfamily.net. Honestly, this talk was over my head due to my lack of knowledge of low-level programming, and I think that most of the other attendees would agree. However, I did get the basic concepts that he was trying to get across. Fortunately, most high-level programming languages handle most of the low-level concerns regarding preventing buffer overflow attacks. What I took away from this talk was to validate all input data from external sources.

    Read the article

  • HP ProLiant DL980-Oracle TPC-C Benchmark spat

    - by jchang
    The Register reported a spat between HP and Oracle on the TPC-C benchmark. Per above, HP submitted a TPC-C result of 3,388,535 tpm-C for their ProLiant DL980 G7 (8 Xeon X7560 processors), with a cost of $0.63 per tpm-C. Oracle has refused permission to publish. Late last year (2010) Oracle published a result of 30M tpm-C for a 108-processor (socket) SPARC cluster ($30M complete system cost). Oracle is now comparing this to the HP Superdome result from 2007 of 4M tpm-C at $2.93 per tpm-C, calling...(read more)

    Read the article

  • Can frequent state changes decrease rendering performance?

    - by Miro
    Can frequent texture and shader binding decrease rendering performance?
    "Frequent" binding example:
    for object
      for material in object
        render part of object using that material
    "Low count" binding example:
    for material
      for object in material
        render part of object using that material
    I'm planning to use an octree later, and with this "low count" method of rendering it can drastically increase memory consumption. So is it a good idea?

    Read the article

  • Is Financial Inclusion an Obligation or an Opportunity for Banks?

    - by tushar.chitra
    Why should banks care about financial inclusion? First, the statistics, I think this will set the tone for this blog post. There are close to 2.5 billion people who are excluded from the banking stream and out of this, 2.2 billion people are from the continents of Africa, Latin America and Asia (McKinsey on Society: Global Financial Inclusion). However, this is not just a third-world phenomenon. According to Federal Deposit Insurance Corp (FDIC), in the US, post 2008 financial crisis, one family out of five has either opted out of the banking system or has been moved out (American Banker). Moving this huge unbanked population into mainstream banking is both an opportunity and a challenge for banks. An obvious opportunity is the significant untapped customer base that banks can target, so is the positive brand equity a bank can build by fulfilling its social responsibilities. Also, as banks target the cost-conscious unbanked customer, they will be forced to look at ways to offer cost-effective products and services, necessitating technology upgrades and innovations. However, cost is not the only hurdle in increasing the adoption of banking services. The potential users need to be convinced of the benefits of banking and banks will also face stiff competition from unorganized players. Finally, the banks will have to believe in the viability of this business opportunity, and not treat financial inclusion as an obligation. In what ways can banks target the unbanked For financial inclusion to be a success, banks should adopt innovative business models to develop products that address the stated and unstated needs of the unbanked population and also design delivery channels that are cost effective and viable in the long run. Through business correspondents and facilitators In rural and remote areas, one of the major hurdles in increasing banking penetration is connectivity and accessibility to banking services, which makes last mile inclusion a daunting challenge. To address this, banks can avail the services of business correspondents or facilitators. This model allows banks to establish greater connectivity through a trusted and reliable intermediary. In India, for instance, banks can leverage the local Kirana stores (the mom & pop stores) to service rural and remote areas. With a supportive nudge from the central bank, the commercial banks can enlist these shop owners as business correspondents to increase their reach. Since these neighborhood stores are acquainted with the local population, they can help banks manage the KYC norms, besides serving as a conduit for remittance. Banks also have an opportunity over a period of time to cross-sell other financial products such as micro insurance, mutual funds and pension products through these correspondents. To exercise greater operational control over the business correspondents, banks can also adopt a combination of branch and business correspondent models to deliver financial inclusion. Through mobile devices According to a 2012 world bank report on financial inclusion, out of a world population of 7 billion, over 5 billion or 70% have mobile phones and only 2 billion or 30% have a bank account. What this means for banks is that there is scope for them to leverage this phenomenal growth in mobile usage to serve the unbanked population. Banks can use mobile technology to service the basic banking requirements of their customers with no frills accounts, effectively bringing down the cost per transaction. 
As I had discussed in my earlier post on mobile payments, though non-traditional players have taken the lead in P2P mobile payments, banks still hold an edge in terms of infrastructure and reliability. Through crowd-funding According to the Crowdfunding Industry Report by Massolution, the global crowdfunding industry raised $2.7 billion in 2012, and is projected to grow to $5.1 billion in 2013. With credit policies becoming tighter and banks becoming more circumspect in terms of loan disbursals, crowdfunding has emerged as an alternative channel for lending. Typically, these initiatives target the unbanked population by offering small loans that are unviable for larger banks. Though a significant proportion of crowdfunding initiatives globally are run by non-banking institutions, banks are also venturing into this space. The next step towards inclusive finance Banks by themselves cannot make financial inclusion a success. There is a need for a whole ecosystem that is supportive of this mission. The policy makers, that include the regulators and government bodies, must be in sync, the IT solution providers must put on their thinking caps to come out with innovative products and solutions, communication channels such as internet and mobile need to expand their reach, and the media and the public need to play an active part. The other challenge for financial inclusion is from the banks themselves. While it is true that financial inclusion will unleash a hitherto hugely untapped market, the normal banking model may be found wanting because of issues such as flexibility, convenience and reliability. The business will be viable only when there is a focus on increasing the usage of existing infrastructure and that is possible when the banks can offer the entire range of products and services to the large number of users of essential banking services. Apart from these challenges, banks will also have to quickly master and replicate the business model to extend their reach to the remotest regions in their respective geographies. They will need to ensure that the transactions deliver a viable business benefit to the bank. For tapping cross-sell opportunities, banks will have to quickly roll-out customized and segment-specific products. The bank staff should be brought in sync with the business plan by convincing them of the viability of the business model and the need for a business correspondent delivery model. Banks, in collaboration with the government and NGOs, will have to run an extensive financial literacy program to educate the unbanked about the benefits of banking. Finally, with the growing importance of retail banking and with many unconventional players eyeing the opportunity in payments and other lucrative areas of banking, banks need to understand the importance of micro and small branches. These micro and small branches can help banks increase their presence without a huge cost burden, provide bankers an opportunity to cross sell micro products and offer a window of opportunity for the large non-banked population to transact without any interference from intermediaries. These branches can also help diminish the role of the unorganized financial sector, such as local moneylenders and unregistered credit societies. This will also help banks build a brand awareness and loyalty among the users, which by itself has a cascading effect on the business operations, especially among the rural and un-banked centers. 
In conclusion, with the increasingly competitive banking sector facing frequent slowdowns and downturns, the unbanked population presents a huge opportunity for banks to enhance their customer base and fulfill their social responsibility.

    Read the article

  • Expectations + Rewards = Innovation

    - by D'Arcy Lussier
    “Innovation” is a heavy word. We regard those that embrace it as “Innovators”. We describe organizations as being “Innovative”. We hold those associated with the word in high regard, even though its dictionary definition is very simple: Introducing something new. What our culture has done is wrapped Innovation in white robes and a gold crown. Innovation is rarely just introducing something new. Innovations and innovators are typically associated with other terms: groundbreaking, genius, industry-changing, creative, leading. Being a true innovator and creating innovations are a big deal, and something companies try to strive for…or at least say they strive for. There’s huge value in being recognized as an innovator in an industry, since the idea is that innovation equates to increased profitability. IBM ran an ad a few years back that showed what their view of innovation is: “The point of innovation is to make actual money.” If the money aspect makes you feel uneasy, consider it another way: the point of innovation is to <insert payoff here>. Companies that innovate will be more successful. Non-profits that innovate can better serve their target clients. Governments that innovate can better provide services to their citizens. True innovation is not easy to come by though. As with anything in business, how well an organization will innovate is reliant on the employees it retains, the expectations placed on those employees, and the rewards available to them. In a previous blog post I talked about one formula: Right Employees + Happy Employees = Productive Employees I want to introduce a new one, that builds upon the previous one: Expectations + Rewards = Innovation  The level of innovation your organization will realize is directly associated with the expectations you place on your staff and the rewards you make available to them. Expectations We may feel uncomfortable with the idea of placing expectations on our staff, mainly because expectation has somewhat of a negative or cold connotation to it: “I expect you to act this way or else!” The problem is in the or-else part…we focus on the negative aspects of failing to meet expectations instead of looking at the positive side. “I expect you to act this way because it will produce <insert benefit here>”. Expectations should not be set to punish but instead be set to ensure quality. At a recent conference I spoke with some Microsoft employees who told me that you have five years from starting with the company to reach a “Senior” level. If you don’t, then you’re let go. The expectation Microsoft placed on their staff is that they should be working towards improving themselves, taking more responsibility, and thus ensure that there is a constant level of quality in the workforce. Rewards Let me be clear: a paycheck is not a reward. A paycheck is simply the employer’s responsibility in the employee/employer relationship. A paycheck will never be the key motivator to drive innovation. Offering employees something over and above their required compensation can spur them to greater performance and achievement. Working in the food service industry, this tactic was used again and again: whoever has the highest sales over lunch will receive a free lunch/gift certificate/entry into a draw/etc. There was something to strive for, to try beyond the baseline of what our serving jobs were. It was through this that innovative sales techniques would be tried and honed, with key servers being top sellers time and time again. 
At a code camp I spoke at, I was amazed to see that all the employees from one company receive $100 Visa gift cards as a thank you for taking time to speak. Again, offering something over and above that can give that extra push for employees. Rewards work. But what about the fairness angle? In the restaurant example I gave, there were servers that would never win the competition. They just weren’t good enough at selling and never seemed to get better. So should those that did work at performing better and produce more sales for the restaurant not get rewarded because those who weren’t working at performing better might get upset? Of course not! Organizations succeed because of their top performers and those that strive to join their ranks. The Expectation/Reward Graph While the Expectations + Rewards = Innovation formula may seem like a simple mathematics formula, there’s much more going under the hood. In fact there are three different outcomes that could occur based on what you put in as values for Expectations and Rewards. Consider the graph below and the descriptions that follow: Disgruntled – High Expectation, Low Reward I worked at a company where the mantra was “Company First, Because We Pay You”. Even today I still hear stories of how this sentiment continues to be perpetuated: They provide you a paycheck and a means to live, therefore you should always put them as your top priority. Of course, this is a huge imbalance in the expectation/reward equation. Why would anyone willingly meet high expectations of availability, workload, deadlines, etc. when there is no reward other than a paycheck to show for it? Remember: paychecks are not rewards! Instead, you see employees be disgruntled which not only affects the level of production but also the level of quality within an organization. It also means that you see higher turnover. Complacent – Low Expectation, Low Reward Complacency is a systemic problem that typically exists throughout all levels of an organization. With no real expectations or rewards, nobody needs to excel. In fact, those that do try to innovate, improve, or introduce new things into the organization might be shunned or pushed out by the rest of the staff who are just doing things the same way they’ve always done it. The bigger issue for the organization with low/low values is that at best they’ll never grow beyond their current size (and may shrink actually), and at worst will cease to exist. Entitled – Low Expectation, High Reward It’s one thing to say you have the best people and reward them as such, but its another thing to actually have the best people and reward them as such. Organizations with Entitled employees are the former: their organization provides them with all types of comforts, benefits, and perks. But there’s no requirement before the rewards are dolled out, and there’s no short-list of who receives the rewards. Everyone in the company is treated the same and is given equal share of the spoils. Entitlement is actually almost identical with Complacency with one notable difference: just try to introduce higher expectations into an entitled organization! Entitled employees have been spoiled for so long that they can’t fathom having rewards taken from them, or having to achieve specific levels of performance before attaining them. Those running the organization also buy in to the Entitled sentiment, feeling that they must persist the same level of comforts to appease their staff…even though the quality of the employee pool may be suspect. 
Innovative – High Expectation, High Reward Finally we have the Innovative organization which places high expectations but also provides high rewards. This organization gets it: if you truly want the best employees you need to apply equal doses of pressure and praise. Realize that I’m not suggesting crazy overtime or un-realistic working conditions. I do not agree with the “Glengary-Glenross” method of encouragement. But as anyone who follows sports can tell you, the teams that win are the ones where the coaches push their players to be their best; to achieve new levels of performance that they didn’t know they could receive. And the result for the players is more money, fame, and opportunity. It’s in this environment that organizations can focus on innovation – true innovation that builds the business and allows everyone involved to truly benefit. In Closing Organizations love to use the word “Innovation” and its derivatives, but very few actually do innovate. For many, the term has just become another marketing buzzword to lump in with all the other business terms that get overused. But for those organizations that truly get the value of innovation, they will be the ones surging forward while other companies simply fade into the background. And they will be the organizations that expect more from their employees, and give them their just rewards.

    Read the article

  • Intel programming "performance" books? [closed]

    - by user997112
    I vaguely remember seeing that Intel have produced a few good books, especially with regard to low-latency programming, but I cannot remember the titles. Could people suggest the titles of Intel books (or ones relating to Intel products)? Examples include books on: the Intel compiler, the Intel assembler, any low-level programming in Intel assembly, the Intel CPU architecture, and the Intel Threading Building Blocks library.

    Read the article

  • Has anyone bought Market Samurai and had a good experience?

    - by ZakGottlieb
    When a piece of marketing software offers an affiliate program, it's hard to ever find an objective review of it, so I thought I might try on Quora. It just boggles my mind that it can cost only $97 flat, when other SEO or keyword research tools like Wordtracker cost almost the same PER MONTH and don't seem to offer much, if anything, more... Can anyone explain this, and would anyone recommend Market Samurai WITHOUT posting a link to it in their review? :)

    Read the article

  • Building Dynamic Websites With XML, XSLT, and ASP

    With online businesses expanding rapidly and searching for ways to reduce website costs, it is imperative for the internet business executive to understand and utilize the technical tools available to build a dynamic website. XML, XSLT, and ASP are website-building tools that work together effectively to help sites survive in the booming online business market as well as reduce website cost.

    Read the article

  • Code while standing

    - by bgbg
    I have a regular, standard workplace: a desk, a chair, an LCD monitor, a mouse and a keyboard. I would like to have the ability to work while standing. I have the feeling that my employer will not be willing to buy an adjustable desk to replace the existing one, so I would like your help with ideas on how to convert a workplace to a "standable" one on as low a budget as possible. I saw this discussion, but the solutions proposed there are way above my "low budget" definition.

    Read the article

  • Basic Information For Lead Generation

    Online Lead Generation has a very transparent cost structure. It is straightforward to see each lead's origins and quality - and companies can then pay only for data on interested consumers that meet their criteria. This makes the service highly cost-effective and gives each lead higher value.

    Read the article

  • Intel Puts Mobile CPUs on a Diet for Ultra-Thin Laptops

    Hardware Central: "Intel today broadened its number of ultra-low voltage processors (ULV) to include a complete range, from Celeron to Core i7, for the super-thin laptop market. This announcement builds on Intel's January introduction of laptop processors, which included only a few low-end ULV processors."

    Read the article

  • Working out costs to implement WCAG 2.0 (AA) site

    - by Sixfoot Studio
    Hi, I've run our client's site through a WCAG 2.0 validator, which returned 415 tasks that need to be worked through in order to make it WCAG 2.0 compliant. For the most part I can get a rough estimate of how long a task will take, but there are tasks I have never had to do before which I am not sure how to cost. I would like to know if someone has a rough guide on what to charge a client to convert their site to a compliant WCAG 2.0 (AA) site. Many thanks

    Read the article

  • Looking for a web service for student tracking

    - by shannoga
    I am working with a voluntary association with a low budget. They asked me to build a tracking system for the students they work with. It is fairly simple: it needs to let them store data on the students' personal details and grades, and have the ability to produce reports and charts on the students' achievements. Since their budget is low, I thought of looking for a web service that can fit their needs. Any ideas?

    Read the article

  • Buying Backlinks

    - by Lynda
    I came across a website the other day that was selling backlinks. The site was well designed and promised results for a nice low price, but not too low. After a couple of minutes it started to sound similar to buying an email marketing list, which I know is not something you should do. I assume that buying backlinks is considered a black-hat SEO trick and should be avoided. Am I wrong in my assumption?

    Read the article

  • The Benefits of Using Professional SEO Services

    Professional SEO services are offered by individuals and companies that specialize in internet marketing and search engine optimization. They are a cost-effective solution, catering for any online company's marketing needs. If you choose a good SEO company, the chances are the cost of the services will be far outweighed by the increased business to your website.

    Read the article

  • Offshore Development - 3 Challenges and 3 Solutions

    Offshore development has become synonymous with cost saving for software and web development companies situated in North America, Europe and various other eastern countries. It certainly saves cost, but there are challenges that need to be addressed. If those challenges are addressed well, there are millions of small and medium businesses eager to try these offshore software and web development services. I am trying to list a few of those challenges and their solutions in this article.

    Read the article

  • How to write a real time data acquisition program [closed]

    - by Tosin Awe
    I have to write a program in assembly language that will monitor temperature continuously, and I have no idea where to begin. The temperature must be displayed in BCD format, and the high and low set points will be programmed into the system. If the set points are exceeded, an alarm will be indicated. The low point is 20 degrees Celsius, and the high point is 24 degrees Celsius. Can somebody give me some hints on how to complete this task?

    Read the article
