Search Results

Search found 616 results on 25 pages for 'divide'.

Page 17/25

  • Does database affect classes?

    - by satyanarayana
    I have created one class, User, and a UserDAOImpl class for querying the DB using User. As there is only one table to be queried, these two classes are sufficient for me. But what if new fields are added and that one table is divided into 3 tables (user_info, user_profile and user_address) to store a user? As new fields are added I need to change the User and UserDAOImpl classes, so it seems these two are no longer sufficient: database changes affect my classes. In this case, do I need to divide the User class into 3 classes as the tables change? Can anyone suggest how I can solve this without making too many changes?
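
    One common way to absorb this kind of schema change is composition: keep User as the single object the rest of the code talks to, and let it hold sub-objects that mirror the new tables, so only UserDAOImpl has to know the table layout. A minimal sketch in Java; all class and field names here are hypothetical, not taken from the question.

    // Hypothetical sketch: User stays the single domain object the rest of the
    // code uses; the sub-objects mirror the new tables so only the DAO changes.
    class User {
        private UserInfo info;       // maps to user_info
        private UserProfile profile; // maps to user_profile
        private UserAddress address; // maps to user_address

        public UserInfo getInfo() { return info; }
        public void setInfo(UserInfo info) { this.info = info; }
        public UserProfile getProfile() { return profile; }
        public void setProfile(UserProfile profile) { this.profile = profile; }
        public UserAddress getAddress() { return address; }
        public void setAddress(UserAddress address) { this.address = address; }
    }

    class UserInfo    { String userName; String email; }
    class UserProfile { String displayName; String bio; }
    class UserAddress { String street; String city; String zip; }

    The DAO can then join the three tables (or issue three queries) and assemble one User, so callers above the DAO stay untouched when the schema splits.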

    Read the article

  • MSSQL / T-SQL : How to update equal percentages of a resultset?

    - by Kent Comeaux
    I need a way to take a resultset of KeyIDs and divide it up as equally as possible and update records differently for each division based on the KeyIDs. In other words, there is SELECT KeyID FROM TableA WHERE (some criteria exists) I want to update TableA 3 different ways by 3 equal portions of KeyIDs. UPDATE TableA SET FieldA = Value1 WHERE KeyID IN (the first 1/3 of the SELECT resultset above) UPDATE TableA SET FieldA = Value2 WHERE KeyID IN (the second 1/3 of the SELECT resultset above) UPDATE TableA SET FieldA = Value3 WHERE KeyID IN (the third 1/3 of the SELECT resultset above) or something to that effect. Thanks for any and all of your responses.

    Read the article

  • Is catching NumberFormatException a bad practice?

    - by integeruser
    I have to parse a String that can hold either hex values or other, non-hex values: 0xff, 0x31 or A, PC, label, and so on. I use this code to divide the two cases: String input = readInput(); try { int hex = Integer.decode(input); // use hex ... } catch (NumberFormatException e) { // input is not hex, continue parsing } Can this code be considered "ugly" or difficult to read? Are there other (maybe more elegant) solutions? EDIT: I want to clarify that (in my case) wrong input doesn't exist: I just need to distinguish whether it is a hex number or not. And just for completeness, I'm writing a simple assembler for DCPU-16.
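
    If the catch block feels too much like flow control, one alternative is a small helper that screens the token first and returns an Optional, so the "number or label" decision lives in one place. A hedged sketch; the helper name and the pre-check pattern are made up for illustration, not part of any library.

    import java.util.Optional;

    public class HexParse {
        // Hypothetical helper: returns the decoded value, or empty if the token
        // is not something Integer.decode understands (e.g. a label like "PC").
        static Optional<Integer> tryDecode(String token) {
            if (!token.matches("[+-]?(0[xX][0-9a-fA-F]+|#[0-9a-fA-F]+|0[0-7]*|[0-9]+)")) {
                return Optional.empty();
            }
            try {
                return Optional.of(Integer.decode(token));
            } catch (NumberFormatException e) {
                return Optional.empty(); // e.g. value out of int range
            }
        }

        public static void main(String[] args) {
            System.out.println(tryDecode("0xff")); // Optional[255]
            System.out.println(tryDecode("PC"));   // Optional.empty
        }
    }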

    Read the article

  • Improve disk read performance (multiple files) with threading

    - by pablo
    I need to find a method to read a large number of small files (about 300k files) as fast as possible. Reading them sequentially using FileStream, reading each entire file in a single call, takes between 170 and 208 seconds (you know, you re-run, the disk cache plays its role and the time varies). Then I tried using PInvoke with CreateFile/ReadFile and FILE_FLAG_SEQUENTIAL_SCAN, but I didn't notice any change. I tried with several threads (divide the big set into chunks and have every thread read its part) and this way I was able to improve speed just a little bit (not even 5% with each new thread, up to 4). Any ideas on how to find the most effective way to do this?

    Read the article

  • PHP: Iterate through folders and display HTML contents

    - by Mestika
    Hi, I'm currently trying to develop a method to get an overview of all the different web templates I've created and (legally) downloaded over the years. I thought about displaying them the way WordPress previews its templates via a small preview window, showing the actual file with styles and everything. Dividing them into rows and columns, opening an AJAX modal window on preview, pagination and so on I believe I can manage; what I'm missing is the concept itself: iterating over several folders, finding all index.htm / index.html pages and displaying them. I've not worked very much with directories in PHP, and the only references and code snippets I've found so far just list all the files a certain directory contains. I would be really grateful if someone knew of a script, a function or a snippet, or could just give me a nudge in the right direction to create such a (probably simple) preview function. Sincerely, Mestika

    Read the article

  • Double # showing 0 on android

    - by Dave
    I'm embarrassed to ask this question, but after 45 minutes of not finding a solution I will resort to public humiliation. I have a number that is being divided by another number, and I'm storing the result in a double variable. The numbers are randomly generated, but debugging the app shows that both numbers are in fact being generated. Let's just say the numbers are 476 & 733. I then divide them to get the percentage: 476/733 = .64. I then print out the variable and it's always 0. I've tried using DecimalFormat and NumberFormat. No matter what I try it always says the variable is 0. I know there is something simple that I'm missing, I just can't find it =/.
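
    Assuming the two counts are declared as int (the question doesn't show the declarations), the likely culprit is integer division: 476 / 733 truncates to 0 before the result ever reaches the double. A minimal sketch of the difference:

    public class Percentage {
        public static void main(String[] args) {
            int correct = 476;
            int total = 733;

            double wrong = correct / total;          // int / int truncates to 0 first
            double right = (double) correct / total; // promote before dividing

            System.out.println(wrong); // 0.0
            System.out.println(right); // 0.6493...
        }
    }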

    Read the article

  • C++ a map to a 2 dimensional vector

    - by user1701545
    I want to create a C++ map where the key is, say, an int and the value is a 2-D vector of double: map<int, vector<vector<double> > > myMap; Suppose I filled it and now I would like to update the second vector mapped by each key (for example divide each element by 2). How would I access that vector iteratively? The "itr->second[0]" syntax in the statement below is obviously wrong. What would be the right syntax for that action? for(std::map<int, vector<vector<double> > > itr = myMap.begin(); itr != myMap.end();++itr) { for(int i = 0;i < itr->second[0].size();++i) { itr->second[0][i] /= 2; } } thanks, rubi

    Read the article

  • Dividing a list into a specific number of sublists

    - by Surya
    I want to divide a list in "a specific number of" sublists. That is, for example if I have a list List(34, 11, 23, 1, 9, 83, 5) and the number of sublists expected is 3 then I want List(List(34, 11), List(23, 1), List(9, 83, 5)). How do I go about doing this? I tried grouped but it doesn't seem to be doing what I want. PS: This is not a homework question. Kindly give a direct solution instead of some vague suggestions.

    Read the article

  • How to group rows into two groups in sql?

    - by user1055638
    Let's say I have a table like this:
    id | time | operation
    1  | 2    | read
    2  | 5    | write
    3  | 3    | read
    4  | 7    | read
    5  | 2    | save
    6  | 1    | open
    Now I would like to do two things: divide all these records into two groups, 1) all rows where operation equals "read" and 2) all other rows, and sum the time in each group, so that my query results in only two rows. What I have so far is: select sum(time) as total_time, operation group by operation ; although that gives me as many groups as there are distinct operations. How could I group them into only two categories? Cheers!

    Read the article

  • How to find the remainder of large number division in C++?

    - by Beelzeboul
    Hello, I have a question regarding modulus in C++. What I was trying to do was divide a very large number, let's say for example M % 2, where M = 54,302,495,302,423. However, when I go to compile it says that the number is too 'long' for an int. Then when I switch it to a double it repeats the same error message. Is there a way I can do this so that I will get the remainder of this very large number, or possibly an even larger number? Thanks for your help, much appreciated.
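
    For what it's worth, 54,302,495,302,423 fits comfortably in a 64-bit integer (long long in C++), and anything beyond 64 bits calls for an arbitrary-precision type. A hedged illustration of both ideas, shown here in Java (the C++ counterparts would be long long and a bignum library such as GMP):

    import java.math.BigInteger;

    public class BigRemainder {
        public static void main(String[] args) {
            // Fits in 64 bits, so a plain 64-bit integer is enough.
            long m = 54_302_495_302_423L;
            System.out.println(m % 2); // 1

            // Larger than 64 bits: use arbitrary precision.
            BigInteger huge = new BigInteger("123456789012345678901234567890");
            System.out.println(huge.mod(BigInteger.valueOf(2))); // 0
        }
    }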

    Read the article

  • How to algorithmically partition a keyspace?

    - by pbhogan
    This is related to consistent hashing and while I conceptually understand what I need to do, I'm having a hard time translating this into code. I'm trying to divide a given keyspace (say, 128 bits) into equal sized partitions. I want the upper bound (highest key) of each partition. Basically, how would I complete this? #define KEYSPACE_BYTE_SIZE 16 #define KEYSPACE_BIT_SIZE (KEYSPACE_BYTE_SIZE * 8) typedef struct _key { char byte[KEYSPACE_BYTE_SIZE]; } key; key * partition_keyspace( int num_partitions ) { key * partitions = malloc( sizeof(key) * num_partitions ); // ... }
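
    One way to frame it: treat the keyspace as the integer range [0, 2^128) and give partition i (zero-based) the upper bound (i + 1) * 2^128 / num_partitions - 1, written back into the 16 bytes big-endian. A hedged sketch of that arithmetic, shown in Java with BigInteger rather than C (a C version would do the same division with multi-word arithmetic):

    import java.math.BigInteger;

    public class KeyspacePartition {
        static final int KEYSPACE_BYTE_SIZE = 16;
        static final BigInteger KEYSPACE_SIZE =
                BigInteger.ONE.shiftLeft(KEYSPACE_BYTE_SIZE * 8); // 2^128

        // Upper bound (highest key) of partition i out of numPartitions,
        // returned as 16 big-endian bytes.
        static byte[] upperBound(int i, int numPartitions) {
            BigInteger bound = KEYSPACE_SIZE.multiply(BigInteger.valueOf(i + 1))
                                            .divide(BigInteger.valueOf(numPartitions))
                                            .subtract(BigInteger.ONE);
            byte[] raw = bound.toByteArray();          // may carry a leading sign byte or be short
            byte[] out = new byte[KEYSPACE_BYTE_SIZE];
            int copy = Math.min(raw.length, KEYSPACE_BYTE_SIZE);
            System.arraycopy(raw, raw.length - copy, out, KEYSPACE_BYTE_SIZE - copy, copy);
            return out;
        }

        public static void main(String[] args) {
            for (int i = 0; i < 4; i++) {
                System.out.println(new BigInteger(1, upperBound(i, 4)).toString(16));
            }
            // prints 3fff...f, 7fff...f, bfff...f, ffff...f
        }
    }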

    Read the article

  • Date range advanced count calculation in TSQL

    - by cihata87
    I am working on a call center project and I have to calculate the calls arriving at the same time within specific time ranges. I have to write a procedure with parameters StartTime, EndTime and Interval. For example: Start Time: 11:00, End Time: 12:00, Interval: 20 minutes. The program should divide the 1-hour time range into 3 parts, and each part should count the arrivals which started and finished in that range OR the arrivals which started and haven't finished yet. It should look like this: 11:00 - 11:20 15 calls at the same time (TimePeaks), 11:20 - 11:40 21 calls ..., 11:40 - 12:00 8 calls ... Any suggestions on how to calculate this?

    Read the article

  • Finding an integer number after a leading t=

    - by user2966696
    I have a string like this: 33 00 4b 46 ff ff 03 10 30 t=25562 I am only interested in the five digits at the very end, after the t=. How can I get these numbers out of it with a regular expression? I tried grep t=..... but I also got all the characters including the t= at the beginning, which I would like to drop. After finding that five-digit number, I would like to divide it by 1000, so in the above-mentioned case the number 25.562. Is this possible with grep and regular expressions? Thanks for your help.
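
    On GNU grep, a Perl-compatible pattern that keeps only the digits (for example -oP with something like t=\K[0-9]+) is one route; if a small program is acceptable instead, the same extraction plus the divide-by-1000 step might look like this Java sketch (the input line is hard-coded purely for illustration):

    import java.util.regex.Matcher;
    import java.util.regex.Pattern;

    public class ExtractTemp {
        public static void main(String[] args) {
            String line = "33 00 4b 46 ff ff 03 10 30 t=25562";
            Matcher m = Pattern.compile("t=([0-9]+)").matcher(line);
            if (m.find()) {
                int raw = Integer.parseInt(m.group(1)); // 25562
                System.out.println(raw / 1000.0);       // 25.562
            }
        }
    }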

    Read the article

  • Dividing web.config into multiple files in asp.net

    - by Jalpesh P. Vadgama
    When you have different people working remotely on one project, you will get problems with web.config, as everybody ends up with a different version of it. Once you check in your web.config with your latest changes, the other people have to get that latest web.config and then make specific changes for their local environment. Most people who have worked remotely have faced this problem; I think the most common examples are connection string and app settings changes. For this kind of situation there is a good solution: we can split particular sections of web.config into multiple files. For example, we could have a separate ConnectionStrings.config file for connection strings and an AppSettings.config file for app settings. Many people do not know that there is an attribute called ‘configSource’ where we can define the path of an external config file, and that section will be loaded from the external file, just like below. <configuration> <appSettings configSource="AppSettings.config"/> <connectionStrings configSource="ConnectionStrings.config"/> </configuration> And you could have your ConnectionStrings.config file like the following. <connectionStrings> <add name="DefaultConnection" connectionString="Data Source=(LocalDb)\v11.0;Initial Catalog=aspnet-WebApplication1-20120523114732;Integrated Security=True" providerName="System.Data.SqlClient" /> </connectionStrings> In the same way you have another AppSettings.config file, like the following. <appSettings> <add key="aspnet:UseTaskFriendlySynchronizationContext" value="true" /> <add key="ValidationSettings:UnobtrusiveValidationMode" value="WebForms" /> </appSettings> That's it. Hope you like this post. Stay tuned for more...

    Read the article

  • Improving Partitioned Table Join Performance

    - by Paul White
    The query optimizer does not always choose an optimal strategy when joining partitioned tables. This post looks at an example, showing how a manual rewrite of the query can almost double performance, while reducing the memory grant to almost nothing. Test Data The two tables in this example use a common partitioning partition scheme. The partition function uses 41 equal-size partitions: CREATE PARTITION FUNCTION PFT (integer) AS RANGE RIGHT FOR VALUES ( 125000, 250000, 375000, 500000, 625000, 750000, 875000, 1000000, 1125000, 1250000, 1375000, 1500000, 1625000, 1750000, 1875000, 2000000, 2125000, 2250000, 2375000, 2500000, 2625000, 2750000, 2875000, 3000000, 3125000, 3250000, 3375000, 3500000, 3625000, 3750000, 3875000, 4000000, 4125000, 4250000, 4375000, 4500000, 4625000, 4750000, 4875000, 5000000 ); GO CREATE PARTITION SCHEME PST AS PARTITION PFT ALL TO ([PRIMARY]); There two tables are: CREATE TABLE dbo.T1 ( TID integer NOT NULL IDENTITY(0,1), Column1 integer NOT NULL, Padding binary(100) NOT NULL DEFAULT 0x,   CONSTRAINT PK_T1 PRIMARY KEY CLUSTERED (TID) ON PST (TID) );   CREATE TABLE dbo.T2 ( TID integer NOT NULL, Column1 integer NOT NULL, Padding binary(100) NOT NULL DEFAULT 0x,   CONSTRAINT PK_T2 PRIMARY KEY CLUSTERED (TID, Column1) ON PST (TID) ); The next script loads 5 million rows into T1 with a pseudo-random value between 1 and 5 for Column1. The table is partitioned on the IDENTITY column TID: INSERT dbo.T1 WITH (TABLOCKX) (Column1) SELECT (ABS(CHECKSUM(NEWID())) % 5) + 1 FROM dbo.Numbers AS N WHERE n BETWEEN 1 AND 5000000; In case you don’t already have an auxiliary table of numbers lying around, here’s a script to create one with 10 million rows: CREATE TABLE dbo.Numbers (n bigint PRIMARY KEY);   WITH L0 AS(SELECT 1 AS c UNION ALL SELECT 1), L1 AS(SELECT 1 AS c FROM L0 AS A CROSS JOIN L0 AS B), L2 AS(SELECT 1 AS c FROM L1 AS A CROSS JOIN L1 AS B), L3 AS(SELECT 1 AS c FROM L2 AS A CROSS JOIN L2 AS B), L4 AS(SELECT 1 AS c FROM L3 AS A CROSS JOIN L3 AS B), L5 AS(SELECT 1 AS c FROM L4 AS A CROSS JOIN L4 AS B), Nums AS(SELECT ROW_NUMBER() OVER (ORDER BY (SELECT NULL)) AS n FROM L5) INSERT dbo.Numbers WITH (TABLOCKX) SELECT TOP (10000000) n FROM Nums ORDER BY n OPTION (MAXDOP 1); Table T1 contains data like this: Next we load data into table T2. The relationship between the two tables is that table 2 contains ‘n’ rows for each row in table 1, where ‘n’ is determined by the value in Column1 of table T1. There is nothing particularly special about the data or distribution, by the way. INSERT dbo.T2 WITH (TABLOCKX) (TID, Column1) SELECT T.TID, N.n FROM dbo.T1 AS T JOIN dbo.Numbers AS N ON N.n >= 1 AND N.n <= T.Column1; Table T2 ends up containing about 15 million rows: The primary key for table T2 is a combination of TID and Column1. The data is partitioned according to the value in column TID alone. Partition Distribution The following query shows the number of rows in each partition of table T1: SELECT PartitionID = CA1.P, NumRows = COUNT_BIG(*) FROM dbo.T1 AS T CROSS APPLY (VALUES ($PARTITION.PFT(TID))) AS CA1 (P) GROUP BY CA1.P ORDER BY CA1.P; There are 40 partitions containing 125,000 rows (40 * 125k = 5m rows). The rightmost partition remains empty. The next query shows the distribution for table 2: SELECT PartitionID = CA1.P, NumRows = COUNT_BIG(*) FROM dbo.T2 AS T CROSS APPLY (VALUES ($PARTITION.PFT(TID))) AS CA1 (P) GROUP BY CA1.P ORDER BY CA1.P; There are roughly 375,000 rows in each partition (the rightmost partition is also empty): Ok, that’s the test data done. 
Test Query and Execution Plan The task is to count the rows resulting from joining tables 1 and 2 on the TID column: SET STATISTICS IO ON; DECLARE @s datetime2 = SYSUTCDATETIME();   SELECT COUNT_BIG(*) FROM dbo.T1 AS T1 JOIN dbo.T2 AS T2 ON T2.TID = T1.TID;   SELECT DATEDIFF(Millisecond, @s, SYSUTCDATETIME()); SET STATISTICS IO OFF; The optimizer chooses a plan using parallel hash join, and partial aggregation: The Plan Explorer plan tree view shows accurate cardinality estimates and an even distribution of rows across threads (click to enlarge the image): With a warm data cache, the STATISTICS IO output shows that no physical I/O was needed, and all 41 partitions were touched: Running the query without actual execution plan or STATISTICS IO information for maximum performance, the query returns in around 2600ms. Execution Plan Analysis The first step toward improving on the execution plan produced by the query optimizer is to understand how it works, at least in outline. The two parallel Clustered Index Scans use multiple threads to read rows from tables T1 and T2. Parallel scan uses a demand-based scheme where threads are given page(s) to scan from the table as needed. This arrangement has certain important advantages, but does result in an unpredictable distribution of rows amongst threads. The point is that multiple threads cooperate to scan the whole table, but it is impossible to predict which rows end up on which threads. For correct results from the parallel hash join, the execution plan has to ensure that rows from T1 and T2 that might join are processed on the same thread. For example, if a row from T1 with join key value ‘1234’ is placed in thread 5’s hash table, the execution plan must guarantee that any rows from T2 that also have join key value ‘1234’ probe thread 5’s hash table for matches. The way this guarantee is enforced in this parallel hash join plan is by repartitioning rows to threads after each parallel scan. The two repartitioning exchanges route rows to threads using a hash function over the hash join keys. The two repartitioning exchanges use the same hash function so rows from T1 and T2 with the same join key must end up on the same hash join thread. Expensive Exchanges This business of repartitioning rows between threads can be very expensive, especially if a large number of rows is involved. The execution plan selected by the optimizer moves 5 million rows through one repartitioning exchange and around 15 million across the other. As a first step toward removing these exchanges, consider the execution plan selected by the optimizer if we join just one partition from each table, disallowing parallelism: SELECT COUNT_BIG(*) FROM dbo.T1 AS T1 JOIN dbo.T2 AS T2 ON T2.TID = T1.TID WHERE $PARTITION.PFT(T1.TID) = 1 AND $PARTITION.PFT(T2.TID) = 1 OPTION (MAXDOP 1); The optimizer has chosen a (one-to-many) merge join instead of a hash join. The single-partition query completes in around 100ms. If everything scaled linearly, we would expect that extending this strategy to all 40 populated partitions would result in an execution time around 4000ms. Using parallelism could reduce that further, perhaps to be competitive with the parallel hash join chosen by the optimizer. This raises a question. If the most efficient way to join one partition from each of the tables is to use a merge join, why does the optimizer not choose a merge join for the full query? 
Forcing a Merge Join Let’s force the optimizer to use a merge join on the test query using a hint: SELECT COUNT_BIG(*) FROM dbo.T1 AS T1 JOIN dbo.T2 AS T2 ON T2.TID = T1.TID OPTION (MERGE JOIN); This is the execution plan selected by the optimizer: This plan results in the same number of logical reads reported previously, but instead of 2600ms the query takes 5000ms. The natural explanation for this drop in performance is that the merge join plan is only using a single thread, whereas the parallel hash join plan could use multiple threads. Parallel Merge Join We can get a parallel merge join plan using the same query hint as before, and adding trace flag 8649: SELECT COUNT_BIG(*) FROM dbo.T1 AS T1 JOIN dbo.T2 AS T2 ON T2.TID = T1.TID OPTION (MERGE JOIN, QUERYTRACEON 8649); The execution plan is: This looks promising. It uses a similar strategy to distribute work across threads as seen for the parallel hash join. In practice though, performance is disappointing. On a typical run, the parallel merge plan runs for around 8400ms; slower than the single-threaded merge join plan (5000ms) and much worse than the 2600ms for the parallel hash join. We seem to be going backwards! The logical reads for the parallel merge are still exactly the same as before, with no physical IOs. The cardinality estimates and thread distribution are also still very good (click to enlarge): A big clue to the reason for the poor performance is shown in the wait statistics (captured by Plan Explorer Pro): CXPACKET waits require careful interpretation, and are most often benign, but in this case excessive waiting occurs at the repartitioning exchanges. Unlike the parallel hash join, the repartitioning exchanges in this plan are order-preserving ‘merging’ exchanges (because merge join requires ordered inputs): Parallelism works best when threads can just grab any available unit of work and get on with processing it. Preserving order introduces inter-thread dependencies that can easily lead to significant waits occurring. In extreme cases, these dependencies can result in an intra-query deadlock, though the details of that will have to wait for another time to explore in detail. The potential for waits and deadlocks leads the query optimizer to cost parallel merge join relatively highly, especially as the degree of parallelism (DOP) increases. This high costing resulted in the optimizer choosing a serial merge join rather than parallel in this case. The test results certainly confirm its reasoning. Collocated Joins In SQL Server 2008 and later, the optimizer has another available strategy when joining tables that share a common partition scheme. This strategy is a collocated join, also known as as a per-partition join. It can be applied in both serial and parallel execution plans, though it is limited to 2-way joins in the current optimizer. Whether the optimizer chooses a collocated join or not depends on cost estimation. The primary benefits of a collocated join are that it eliminates an exchange and requires less memory, as we will see next. Costing and Plan Selection The query optimizer did consider a collocated join for our original query, but it was rejected on cost grounds. The parallel hash join with repartitioning exchanges appeared to be a cheaper option. There is no query hint to force a collocated join, so we have to mess with the costing framework to produce one for our test query. 
Pretending that IOs cost 50 times more than usual is enough to convince the optimizer to use collocated join with our test query: -- Pretend IOs are 50x cost temporarily DBCC SETIOWEIGHT(50);   -- Co-located hash join SELECT COUNT_BIG(*) FROM dbo.T1 AS T1 JOIN dbo.T2 AS T2 ON T2.TID = T1.TID OPTION (RECOMPILE);   -- Reset IO costing DBCC SETIOWEIGHT(1); Collocated Join Plan The estimated execution plan for the collocated join is: The Constant Scan contains one row for each partition of the shared partitioning scheme, from 1 to 41. The hash repartitioning exchanges seen previously are replaced by a single Distribute Streams exchange using Demand partitioning. Demand partitioning means that the next partition id is given to the next parallel thread that asks for one. My test machine has eight logical processors, and all are available for SQL Server to use. As a result, there are eight threads in the single parallel branch in this plan, each processing one partition from each table at a time. Once a thread finishes processing a partition, it grabs a new partition number from the Distribute Streams exchange…and so on until all partitions have been processed. It is important to understand that the parallel scans in this plan are different from the parallel hash join plan. Although the scans have the same parallelism icon, tables T1 and T2 are not being co-operatively scanned by multiple threads in the same way. Each thread reads a single partition of T1 and performs a hash match join with the same partition from table T2. The properties of the two Clustered Index Scans show a Seek Predicate (unusual for a scan!) limiting the rows to a single partition: The crucial point is that the join between T1 and T2 is on TID, and TID is the partitioning column for both tables. A thread that processes partition ‘n’ is guaranteed to see all rows that can possibly join on TID for that partition. In addition, no other thread will see rows from that partition, so this removes the need for repartitioning exchanges. CPU and Memory Efficiency Improvements The collocated join has removed two expensive repartitioning exchanges and added a single exchange processing 41 rows (one for each partition id). Remember, the parallel hash join plan exchanges had to process 5 million and 15 million rows. The amount of processor time spent on exchanges will be much lower in the collocated join plan. In addition, the collocated join plan has a maximum of 8 threads processing single partitions at any one time. The 41 partitions will all be processed eventually, but a new partition is not started until a thread asks for it. Threads can reuse hash table memory for the new partition. The parallel hash join plan also had 8 hash tables, but with all 5,000,000 build rows loaded at the same time. The collocated plan needs memory for only 8 * 125,000 = 1,000,000 rows at any one time. Collocated Hash Join Performance The collated join plan has disappointing performance in this case. The query runs for around 25,300ms despite the same IO statistics as usual. This is much the worst result so far, so what went wrong? It turns out that cardinality estimation for the single partition scans of table T1 is slightly low. The properties of the Clustered Index Scan of T1 (graphic immediately above) show the estimation was for 121,951 rows. 
This is a small shortfall compared with the 125,000 rows actually encountered, but it was enough to cause the hash join to spill to physical tempdb: A level 1 spill doesn’t sound too bad, until you realize that the spill to tempdb probably occurs for each of the 41 partitions. As a side note, the cardinality estimation error is a little surprising because the system tables accurately show there are 125,000 rows in every partition of T1. Unfortunately, the optimizer uses regular column and index statistics to derive cardinality estimates here rather than system table information (e.g. sys.partitions). Collocated Merge Join We will never know how well the collocated parallel hash join plan might have worked without the cardinality estimation error (and the resulting 41 spills to tempdb) but we do know: Merge join does not require a memory grant; and Merge join was the optimizer’s preferred join option for a single partition join Putting this all together, what we would really like to see is the same collocated join strategy, but using merge join instead of hash join. Unfortunately, the current query optimizer cannot produce a collocated merge join; it only knows how to do collocated hash join. So where does this leave us? CROSS APPLY sys.partitions We can try to write our own collocated join query. We can use sys.partitions to find the partition numbers, and CROSS APPLY to get a count per partition, with a final step to sum the partial counts. The following query implements this idea: SELECT row_count = SUM(Subtotals.cnt) FROM ( -- Partition numbers SELECT p.partition_number FROM sys.partitions AS p WHERE p.[object_id] = OBJECT_ID(N'T1', N'U') AND p.index_id = 1 ) AS P CROSS APPLY ( -- Count per collocated join SELECT cnt = COUNT_BIG(*) FROM dbo.T1 AS T1 JOIN dbo.T2 AS T2 ON T2.TID = T1.TID WHERE $PARTITION.PFT(T1.TID) = p.partition_number AND $PARTITION.PFT(T2.TID) = p.partition_number ) AS SubTotals; The estimated plan is: The cardinality estimates aren’t all that good here, especially the estimate for the scan of the system table underlying the sys.partitions view. Nevertheless, the plan shape is heading toward where we would like to be. Each partition number from the system table results in a per-partition scan of T1 and T2, a one-to-many Merge Join, and a Stream Aggregate to compute the partial counts. The final Stream Aggregate just sums the partial counts. Execution time for this query is around 3,500ms, with the same IO statistics as always. This compares favourably with 5,000ms for the serial plan produced by the optimizer with the OPTION (MERGE JOIN) hint. This is another case of the sum of the parts being less than the whole – summing 41 partial counts from 41 single-partition merge joins is faster than a single merge join and count over all partitions. Even so, this single-threaded collocated merge join is not as quick as the original parallel hash join plan, which executed in 2,600ms. On the positive side, our collocated merge join uses only one logical processor and requires no memory grant. The parallel hash join plan used 16 threads and reserved 569 MB of memory:   Using a Temporary Table Our collocated merge join plan should benefit from parallelism. The reason parallelism is not being used is that the query references a system table. 
We can work around that by writing the partition numbers to a temporary table (or table variable): SET STATISTICS IO ON; DECLARE @s datetime2 = SYSUTCDATETIME();   CREATE TABLE #P ( partition_number integer PRIMARY KEY);   INSERT #P (partition_number) SELECT p.partition_number FROM sys.partitions AS p WHERE p.[object_id] = OBJECT_ID(N'T1', N'U') AND p.index_id = 1;   SELECT row_count = SUM(Subtotals.cnt) FROM #P AS p CROSS APPLY ( SELECT cnt = COUNT_BIG(*) FROM dbo.T1 AS T1 JOIN dbo.T2 AS T2 ON T2.TID = T1.TID WHERE $PARTITION.PFT(T1.TID) = p.partition_number AND $PARTITION.PFT(T2.TID) = p.partition_number ) AS SubTotals;   DROP TABLE #P;   SELECT DATEDIFF(Millisecond, @s, SYSUTCDATETIME()); SET STATISTICS IO OFF; Using the temporary table adds a few logical reads, but the overall execution time is still around 3500ms, indistinguishable from the same query without the temporary table. The problem is that the query optimizer still doesn’t choose a parallel plan for this query, though the removal of the system table reference means that it could if it chose to: In fact the optimizer did enter the parallel plan phase of query optimization (running search 1 for a second time): Unfortunately, the parallel plan found seemed to be more expensive than the serial plan. This is a crazy result, caused by the optimizer’s cost model not reducing operator CPU costs on the inner side of a nested loops join. Don’t get me started on that, we’ll be here all night. In this plan, everything expensive happens on the inner side of a nested loops join. Without a CPU cost reduction to compensate for the added cost of exchange operators, candidate parallel plans always look more expensive to the optimizer than the equivalent serial plan. Parallel Collocated Merge Join We can produce the desired parallel plan using trace flag 8649 again: SELECT row_count = SUM(Subtotals.cnt) FROM #P AS p CROSS APPLY ( SELECT cnt = COUNT_BIG(*) FROM dbo.T1 AS T1 JOIN dbo.T2 AS T2 ON T2.TID = T1.TID WHERE $PARTITION.PFT(T1.TID) = p.partition_number AND $PARTITION.PFT(T2.TID) = p.partition_number ) AS SubTotals OPTION (QUERYTRACEON 8649); The actual execution plan is: One difference between this plan and the collocated hash join plan is that a Repartition Streams exchange operator is used instead of Distribute Streams. The effect is similar, though not quite identical. The Repartition uses round-robin partitioning, meaning the next partition id is pushed to the next thread in sequence. The Distribute Streams exchange seen earlier used Demand partitioning, meaning the next partition id is pulled across the exchange by the next thread that is ready for more work. There are subtle performance implications for each partitioning option, but going into that would again take us too far off the main point of this post. Performance The important thing is the performance of this parallel collocated merge join – just 1350ms on a typical run. The list below shows all the alternatives from this post (all timings include creation, population, and deletion of the temporary table where appropriate) from quickest to slowest: Collocated parallel merge join: 1350ms Parallel hash join: 2600ms Collocated serial merge join: 3500ms Serial merge join: 5000ms Parallel merge join: 8400ms Collated parallel hash join: 25,300ms (hash spill per partition) The parallel collocated merge join requires no memory grant (aside from a paltry 1.2MB used for exchange buffers). 
This plan uses 16 threads at DOP 8; but 8 of those are (rather pointlessly) allocated to the parallel scan of the temporary table. These are minor concerns, but it turns out there is a way to address them if it bothers you. Parallel Collocated Merge Join with Demand Partitioning This final tweak replaces the temporary table with a hard-coded list of partition ids (dynamic SQL could be used to generate this query from sys.partitions): SELECT row_count = SUM(Subtotals.cnt) FROM ( VALUES (1),(2),(3),(4),(5),(6),(7),(8),(9),(10), (11),(12),(13),(14),(15),(16),(17),(18),(19),(20), (21),(22),(23),(24),(25),(26),(27),(28),(29),(30), (31),(32),(33),(34),(35),(36),(37),(38),(39),(40),(41) ) AS P (partition_number) CROSS APPLY ( SELECT cnt = COUNT_BIG(*) FROM dbo.T1 AS T1 JOIN dbo.T2 AS T2 ON T2.TID = T1.TID WHERE $PARTITION.PFT(T1.TID) = p.partition_number AND $PARTITION.PFT(T2.TID) = p.partition_number ) AS SubTotals OPTION (QUERYTRACEON 8649); The actual execution plan is: The parallel collocated hash join plan is reproduced below for comparison: The manual rewrite has another advantage that has not been mentioned so far: the partial counts (per partition) can be computed earlier than the partial counts (per thread) in the optimizer’s collocated join plan. The earlier aggregation is performed by the extra Stream Aggregate under the nested loops join. The performance of the parallel collocated merge join is unchanged at around 1350ms. Final Words It is a shame that the current query optimizer does not consider a collocated merge join (Connect item closed as Won’t Fix). The example used in this post showed an improvement in execution time from 2600ms to 1350ms using a modestly-sized data set and limited parallelism. In addition, the memory requirement for the query was almost completely eliminated  – down from 569MB to 1.2MB. The problem with the parallel hash join selected by the optimizer is that it attempts to process the full data set all at once (albeit using eight threads). It requires a large memory grant to hold all 5 million rows from table T1 across the eight hash tables, and does not take advantage of the divide-and-conquer opportunity offered by the common partitioning. The great thing about the collocated join strategies is that each parallel thread works on a single partition from both tables, reading rows, performing the join, and computing a per-partition subtotal, before moving on to a new partition. From a thread’s point of view… If you have trouble visualizing what is happening from just looking at the parallel collocated merge join execution plan, let’s look at it again, but from the point of view of just one thread operating between the two Parallelism (exchange) operators. Our thread picks up a single partition id from the Distribute Streams exchange, and starts a merge join using ordered rows from partition 1 of table T1 and partition 1 of table T2. By definition, this is all happening on a single thread. As rows join, they are added to a (per-partition) count in the Stream Aggregate immediately above the Merge Join. Eventually, either T1 (partition 1) or T2 (partition 1) runs out of rows and the merge join stops. The per-partition count from the aggregate passes on through the Nested Loops join to another Stream Aggregate, which is maintaining a per-thread subtotal. Our same thread now picks up a new partition id from the exchange (say it gets id 9 this time). 
The count in the per-partition aggregate is reset to zero, and the processing of partition 9 of both tables proceeds just as it did for partition 1, and on the same thread. Each thread picks up a single partition id and processes all the data for that partition, completely independently from other threads working on other partitions. One thread might eventually process partitions (1, 9, 17, 25, 33, 41) while another is concurrently processing partitions (2, 10, 18, 26, 34) and so on for the other six threads at DOP 8. The point is that all 8 threads can execute independently and concurrently, continuing to process new partitions until the wider job (of which the thread has no knowledge!) is done. This divide-and-conquer technique can be much more efficient than simply splitting the entire workload across eight threads all at once. Related Reading Understanding and Using Parallelism in SQL Server Parallel Execution Plans Suck © 2013 Paul White – All Rights Reserved Twitter: @SQL_Kiwi

    Read the article

  • Do unit tests sometimes break encapsulation?

    - by user1288851
    I very often hear the following: "If you want to test private methods, you'd better put that code in another class and expose it." While sometimes that's the case and we have a hidden concept inside our class, other times you end up with classes that have the same attributes (or, worse, every attribute of one class becomes an argument to a method in the other class) and that expose functionality which is, in fact, implementation detail. Especially in TDD, when you refactor a class with public methods out of a previously tested class, that new class is now part of your interface but has no tests of its own (since you refactored it, and it is an implementation detail). Now, I may not be finding an obviously better answer, but if my answer is "correct", that means that writing unit tests can sometimes break encapsulation and divide the same responsibility into different classes. A simple example would be testing a setter method when a getter is not actually needed for anything in the real code. Please, when answering, don't provide simple answers to the specific cases I may have written. Rather, try to explain the generic case and the theoretical approach. And this is not language specific. Thanks in advance. EDIT: The answer given by Matthew Flynn was really insightful, but didn't quite answer the question. Although he made the fair point that you either don't test private methods or extract them because they really are a separate concern and responsibility (or at least that is what I understood from his answer), I think there are situations where unit testing private methods is useful. My primary example is when you have a class that has one responsibility but the output (or input) it gives (takes) is just too complex. For example, a hashing function. There's no good way to break a hashing function apart and maintain cohesion and encapsulation. However, testing a hashing function can be really tough, since you would need to calculate the hash by hand (you can't use code calculation to test code calculation!) and test multiple cases where the hash changes. For that case (and this may be a question worthy of its own topic) I think private method testing is the best way to handle it. Now, I'm not sure if I should ask another question or ask it here, but is there any better way to test such complex output (input)? OBS: Please, if you think I should ask another question on that topic, leave a comment. :)

    Read the article

  • Chuck Norris Be Thy Name

    - by Robz / Fervent Coder
    Chuck Norris doesn’t program with a keyboard. He stares the computer down until it does what he wants. All things need a name. We’ve tossed around a bunch of names for the framework of tools we’ve been working on, but one we kept coming back to was Chuck Norris. Why did we choose Chuck Norris? Well Chuck Norris sort of chose us. Everything we talked about, the name kept drawing us closer to it. We couldn’t escape Chuck Norris, no matter how hard we tried. So we gave in. Chuck Norris can divide by zero. What is the Chuck Norris Framework? @drusellers and I have been working on a variety of tools: WarmuP - http://github.com/chucknorris/warmup (Template your entire project/solution and create projects ready to code - From Zero to a Solution with everything in seconds. Your templates, your choices.) UppercuT - http://projectuppercut.org (Build with Conventions - Professional Builds in Moments, Not Days!) | Code also at http://github.com/chucknorris/uppercut DropkicK - http://github.com/chucknorris/dropkick (Deploy Fluently) RoundhousE - http://projectroundhouse.org (Professional Database Management with Versioning) | Code also at http://github.com/chucknorris/roundhouse SidePOP - http://sidepop.googlecode.com (Does your application need to check email?) HeadlocK - http://github.com/chucknorris/headlock (Hash a directory so you can later know if anything has changed) Others – still in concept or vaporware People ask why we choose such violent names for each tool of our framework? At first it was about whipping your code into shape, but after awhile the naming became, “How can we relate this to Chuck Norris?” People also ask why we uppercase the last letter of each name. Well, that’s more about making you ask questions…but there are a few reasons for it. Project managers never ask Chuck Norris for estimations…ever. The class object inherits from Chuck Norris Chuck Norris doesn’t need garbage collection because he doesn’t call .Dispose(), he calls .DropKick() So what are you waiting for? Join the Google group today, download and play with the tools. And lastly, welcome to Chuck Norris. Or should I say Chuck Norris welcomes you…

    Read the article

  • Cocos2d: Changing b2Body x val every frame causes jitter

    - by Joey Green
    So, I have a jumping mechanism similar to what you would see in doodle jump where character jumps and you use the accelerometer to make character change direction left or right. I have a player object with position and a box2d b2Body with position. I'm changing the player X position via the accelerometer and the Y position according to box2d. pseudocode for this is like so -----accelerometer acceleration------ player.position = new X -----world update--------- physicsWorld-step() //this will get me the new Y according to the physics similation //so we keep the bodys Y value but change x to new X according to accelerometer data playerPhysicsBody.position = new pos(player.position.x, keepYval) player.position = playerPhysicsBody.position Now this is simplifying my code, but I'm doing the position conversion back and forth via mult or divide by PTM_. Well, I'm getting a weird jitter effect after I get big jump in acceleration data. So, my questions are: 1) Is this the right approach to have the accelerometer control the x pos and box2d control the y pos and just sync everthing up every frame? 2) Is there some issue with updating a b2body x position every frame? 3) Any idea what might be creating this jitter effect? I've collecting some data while running the game. Pre-body is before I set the x value on the b2Body in my update method after I world-step(). Post of course is afterwards. As you can see there is definitively a pattern. 012-06-19 08:14:13.118 Game[1073:707] pre-body pos 5.518720~24.362963 2012-06-19 08:14:13.120 Game[1073:707] post-body pos 5.060156~24.362963 2012-06-19 08:14:13.131 Game[1073:707] player velocity x: -31.833529 2012-06-19 08:14:13.133 Game[1073:707] delta 0.016669 2012-06-19 08:14:13.135 Game[1073:707] pre-body pos 5.060156~24.689455 2012-06-19 08:14:13.137 Game[1073:707] post-body pos 5.502138~24.689455 2012-06-19 08:14:13.148 Game[1073:707] player velocity x: -31.833529 2012-06-19 08:14:13.150 Game[1073:707] delta 0.016667 2012-06-19 08:14:13.151 Game[1073:707] pre-body pos 5.502138~25.006948 2012-06-19 08:14:13.153 Game[1073:707] post-body pos 5.043575~25.006948 2012-06-19 08:14:13.165 Game[1073:707] player velocity x: -31.833529 2012-06-19 08:14:13.167 Game[1073:707] delta 0.016644 2012-06-19 08:14:13.169 Game[1073:707] pre-body pos 5.043575~25.315441 2012-06-19 08:14:13.170 Game[1073:707] post-body pos 5.485580~25.315441 2012-06-19 08:14:13.180 Game[1073:707] player velocity x: -31.833529 2012-06-19 08:14:13.182 Game[1073:707] delta 0.016895 2012-06-19 08:14:13.185 Game[1073:707] pre-body pos 5.485580~25.614935 2012-06-19 08:14:13.188 Game[1073:707] post-body pos 5.026768~25.614935 2012-06-19 08:14:13.198 Game[1073:707] player velocity x: -31.833529 2012-06-19 08:14:13.199 Game[1073:707] delta 0.016454 2012-06-19 08:14:13.207 Game[1073:707] pre-body pos 5.026768~25.905428 2012-06-19 08:14:13.211 Game[1073:707] post-body pos 5.469213~25.905428 2012-06-19 08:14:13.217 Game[1073:707] acceleration x -0.137421 2012-06-19 08:14:13.223 Game[1073:707] player velocity x: -65.022644 2012-06-19 08:14:13.229 Game[1073:707] delta 0.016603

    Read the article

  • Algorithm for spreading labels in a visually appealing and intuitive way

    - by mac
    Short version Is there a design pattern for distributing vehicle labels in a non-overlapping fashion, placing them as close as possible to the vehicle they refer to? If not, is any of the method I suggest viable? How would you implement this yourself? Extended version In the game I'm writing I have a bird-eye vision of my airborne vehicles. I also have next to each of the vehicles a small label with key-data about the vehicle. This is an actual screenshot: Now, since the vehicles could be flying at different altitudes, their icons could overlap. However I would like to never have their labels overlapping (or a label from vehicle 'A' overlap the icon of vehicle 'B'). Currently, I can detect collisions between sprites and I simply push away the offending label in a direction opposite to the otherwise-overlapped sprite. This works in most situations, but when the airspace get crowded, the label can get pushed very far away from its vehicle, even if there was an alternate "smarter" alternative. For example I get: B - label A -----------label C - label where it would be better (= label closer to the vehicle) to get: B - label label - A C - label EDIT: It also has to be considered that beside the overlapping vehicles case, there might be other configurations in which vehicles'labels could overlap (the ASCII-art examples show for example three very close vehicles in which the label of A would overlap the icon of B and C). I have two ideas on how to improve the present situation, but before spending time implementing them, I thought to turn to the community for advice (after all it seems like a "common enough problem" that a design pattern for it could exist). For what it's worth, here's the two ideas I was thinking to: Slot-isation of label space In this scenario I would divide all the screen into "slots" for the labels. Then, each vehicle would always have its label placed in the closest empty one (empty = no other sprites at that location. Spiralling search From the location of the vehicle on the screen, I would try to place the label at increasing angles and then at increasing radiuses, until a non-overlapping location is found. Something down the line of: try 0°, 10px try 10°, 10px try 20°, 10px ... try 350°, 10px try 0°, 20px try 10°, 20px ...
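
    The spiralling search is cheap to prototype; a hedged sketch in Java, where the rectangle-overlap test stands in for whatever sprite-collision check the engine already provides:

    import java.awt.Rectangle;
    import java.util.List;

    public class LabelPlacer {
        // Try angles at a fixed radius, then grow the radius, until the label
        // rectangle overlaps nothing already placed. Returns null if no spot is
        // found within maxRadius, so the caller can fall back to the old push-away.
        static Rectangle place(int vx, int vy, int labelW, int labelH,
                               List<Rectangle> occupied, int maxRadius) {
            for (int radius = 10; radius <= maxRadius; radius += 10) {
                for (int deg = 0; deg < 360; deg += 10) {
                    double rad = Math.toRadians(deg);
                    int x = vx + (int) Math.round(radius * Math.cos(rad));
                    int y = vy + (int) Math.round(radius * Math.sin(rad));
                    Rectangle candidate = new Rectangle(x, y, labelW, labelH);
                    boolean clear = true;
                    for (Rectangle r : occupied) {
                        if (candidate.intersects(r)) { clear = false; break; }
                    }
                    if (clear) return candidate;
                }
            }
            return null;
        }
    }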

    Read the article

  • Mathemagics - 3 consecutive numbers

    - by PointsToShare
    © 2011 By: Dov Trietsch. All rights reserved. Three Consecutive Numbers. When I was young and handsome (OK, OK, just young), my father used to challenge us with riddles and tricks involving logic, math and general knowledge. Most of the time, at least after reaching the ripe age of 10, I would see through his tricks in no time. This one is a bit more subtle. I had to think about it for close to an hour, and when I finally had the ‘AHA!’ effect, I could not understand why it had taken me so long. So here it is. You select a volunteer from the audience (or a shill, but that would be cheating!) and ask him to select three consecutive numbers, all of them 1 or 2 digits. So {1, 2, 3} would be good, albeit a trivial set, as would {8, 9, 10} or {97, 98, 99}, but not {99, 99, 100} (why?!). Now, using a calculator – and these days almost every phone has a built-in calculator – he is to perform these steps:
    1. Select a single digit
    2. Multiply it by 3 and write it down
    3. Add the 3 consecutive numbers
    4. Add the number from step 2
    5. Multiply the sum by 67
    6. Now tell me the last 2 digits of the result and also the number you wrote down in step 2
    I will tell you which numbers you selected. How do I do this? I'll give you the mechanical answer, but because I'd like you to have the pleasure of an ‘AHA!’ effect, I will not really explain the ‘why’. So let's say you selected 30, 31, and 32 and that your multiple of 3 was 24; here is what you get: 30 + 31 + 32 = 93; 93 + 24 = 117; 117 x 67 = 7839, last 2 digits are 39, so you say “the last 2 digits are 39, and the other number is 24.” Now, I divide 24 by 3, getting 8. I subtract 8 from 39 and get 31. I then subtract 1 from this, getting 30, and say: “You selected 30, 31, and 32.” This is the ‘how’. I leave the ‘why’ to you! That's all folks! PS: do you really want to know why? Post feedback below. Once 11 or more people have asked for it, I'll add a link to the full explanation.

    Read the article

  • Cannot dual-boot Windows XP and Ubuntu

    - by Fabio Machado
    I am new to Ubuntu and at the moment I am trying to get Ubuntu 12.10 onto one of my machines. The machine is a Pentium 4 @ 3.06, 2GB RAM, 200GB hard drive and an NVidia GeForce 8800 GT. A few days ago, I tried Ubuntu without installing and it worked perfectly. Yesterday, I decided to format the hard drive and divide it into four partitions: 1 for XP, 1 for Ubuntu, 1 for swap and 1 where I will have my documents. Everything went great; I installed XP and then Ubuntu, but I did something wrong in the partition window (the Ubuntu partition window) so that I ended up without a boot loader. This morning, I formatted everything again, installed XP, and when I went to install Ubuntu (with the same DVD as before) the problems started. First, I had a black screen with a message written in white text saying something like: unable to find a medium containing a live file system. After I burned another CD and tried again, I got stuck at the red dots (loading screen). I then went online and read somewhere that it could be the CD, so I checked the integrity of the CD and everything was fine. I also unplugged all USB devices connected to the computer and nothing changed. I googled further options to try to solve my problem, and some users suggested that people having these types of problems should try the alternate installation, which if I am not wrong is for networks. I then tried that install, and yes, the installation process was different from the normal CD, but it got stuck on a page where it was doing something like: ...finding ethd0 and it was stuck at 100%. I tried USB installation as well and it also got stuck at the red dots (I do not have USB 3.0 on the computer in question). I have burned 5 different CDs, all at low speed. I checked their integrity and all are fine. I downloaded other distributions as well as other versions of Ubuntu and I still cannot install or even run the Live CD of Ubuntu or any other distribution. What is really annoying me is that everything was working perfectly before, when I first tried to install Ubuntu. Anyway, any help is welcome. Edit: My boot loader is normal, no errors, and all the hardware is working fine. I forgot to mention that after the loading screen (red dots) gets stuck, the DVD drive and the hard drive go into an idle state. I also restored the default values of the BIOS, with no luck.

    Read the article

  • Equal Gifts Algorithm Problem

    - by 7Aces
    Problem Link - http://opc.iarcs.org.in/index.php/problems/EQGIFTS It is Lavanya's birthday and several families have been invited for the birthday party. As is customary, all of them have brought gifts for Lavanya as well as her brother Nikhil. Since their friends are all of the erudite kind, everyone has brought a pair of books. Unfortunately, the gift givers did not clearly indicate which book in the pair is for Lavanya and which one is for Nikhil. Now it is up to their father to divide up these books between them. He has decided that from each of these pairs, one book will go to Lavanya and one to Nikhil. Moreover, since Nikhil is quite a keen observer of the value of gifts, the books have to be divided in such a manner that the total value of the books for Lavanya is as close as possible to total value of the books for Nikhil. Since Lavanya and Nikhil are kids, no book that has been gifted will have a value higher than 300 Rupees... For the problem, I couldn't think of anything except recursion. The code I wrote is given below. But the problem is that the code is time-inefficient and gives TLE (Time Limit Exceeded) for 9 out of 10 test cases! What would be a better approach to the problem? Code - #include<cstdio> #include<climits> #include<algorithm> using namespace std; int n,g[150][2]; int diff(int a,int b,int f) { ++f; if(f==n) { if(a>b) { return a-b; } else { return b-a; } } return min(diff(a+g[f][0],b+g[f][1],f),diff(a+g[f][1],b+g[f][0],f)); } int main() { int i; scanf("%d",&n); for(i=0;i<n;++i) { scanf("%d%d",&g[i][0],&g[i][1]); } printf("%d",diff(g[0][0],g[0][1],0)); } Note - It is just a practice question, & is not part of a competition.
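
    Since each book is worth at most 300 and the asker's array allows up to 150 pairs, the running difference between the two children's totals is bounded by 45,000, so a subset-sum style DP over reachable differences avoids the 2^n recursion. A hedged Java sketch of that idea (the bounds are assumptions read off the problem statement and the posted array size):

    public class EqualGifts {
        // pairs[i] = {valueA, valueB}; one book of each pair goes to each child.
        // Returns the minimum possible |totalLavanya - totalNikhil|.
        static int minDifference(int[][] pairs) {
            int bound = 0;
            for (int[] p : pairs) bound += Math.abs(p[0] - p[1]);
            boolean[] reachable = new boolean[2 * bound + 1]; // index = difference + bound
            reachable[bound] = true;                          // difference 0 before any pair

            for (int[] p : pairs) {
                int d = Math.abs(p[0] - p[1]);
                boolean[] next = new boolean[reachable.length];
                for (int i = 0; i < reachable.length; i++) {
                    if (!reachable[i]) continue;
                    if (i + d < next.length) next[i + d] = true;
                    if (i - d >= 0)          next[i - d] = true;
                }
                reachable = next;
            }
            for (int diff = 0; diff <= bound; diff++) {
                if (reachable[bound + diff] || reachable[bound - diff]) return diff;
            }
            return 0;
        }

        public static void main(String[] args) {
            System.out.println(minDifference(new int[][]{{1, 5}, {2, 3}})); // 3
        }
    }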

    Read the article

  • Au revoir, Python?

    - by GuySmiley
    I'm an ex-C++ programmer who's recently discovered (and fallen head-over-heels with) Python. I've taken some time to become reasonably fluent in Python, but I've encountered some troubling realities that may lead me to drop it as my language of choice, at least for the time being. I'm writing this in the hopes that someone out there can talk me out of it by convincing me that my concerns are easily circumvented within the bounds of the python universe. I picked up python while looking for a single flexible language that will allow me to build end-to-end working systems quickly on a variety of platforms. These include: - web services - mobile apps - cross-platform client apps for PC Development speed is more of a priority at the time-being than execution speed. However, in order to improve performance over time without requiring major re-writes or architectural changes I think it's imperative to be able to interface easily with Java. That way, I can use Java to optimize specific components as the application scales, without throwing away any code. As far as I can tell, my requirement for an enterprise-capable, platform-independent, fast language with a large developer base means it would have to be Java. .NET or C++ would not cut it due to their respective limitations. Also Java is clearly de rigeur for most mobile platforms. Unfortunately, tragically, there doesn't seem to be a good way to meet all these demands. Jython seems to be what I'm looking for in principle, except that it appears to be practically dead, with no one developing, supporting, or using it to any great degree. And also Jython seems too married to the Java libraries, as you can't use many of the CPython standard libraries with it, which has a major impact on the code you end up writing. The only other option that I can see is to use JPype wrapped in marshalling classes, which may work although it seems like a pain and I wonder if it would be worth it in the long run. On the other hand, everything I'm looking for seems to be readily available by using JRuby, which seems to be much better supported. As things stand, I think this is my best option. I'm sad about this because I absolutely love everything about Python, including the syntax. The perl-like constructs in Ruby just feel like such a step backwards to me in terms of readability, but at the end of the day most of the benefits of python are available in Ruby as well. So I ask you - am I missing something here? Much of what I've said is based on what I've read, so is this summary of the current landscape accurate, or is there some magical solution to the Python-Java divide that will snuff these concerns and allow me to comfortably stay in my happy Python place?

    Read the article

  • Space partitioning when everything is moving

    - by Roy T.
    Background Together with a friend I'm working on a 2D game that is set in space. To make it as immersive and interactive as possible we want there to be thousands of objects freely floating around, some clustered together, others adrift in empty space. Challenge To unburden the rendering and physics engine we need to implement some sort of spatial partitioning. There are two challenges we have to overcome. The first challenge is that everything is moving so reconstructing/updating the data structure has to be extremely cheap since it will have to be done every frame. The second challenge is the distribution of objects, as said before there might be clusters of objects together and vast bits of empty space and to make it even worse there is no boundary to space. Existing technologies I've looked at existing techniques like BSP-Trees, QuadTrees, kd-Trees and even R-Trees but as far as I can tell these data structures aren't a perfect fit since updating a lot of objects that have moved to other cells is relatively expensive. What I've tried I made the decision that I need a data structure that is more geared toward rapid insertion/update than on giving back the least amount of possible hits given a query. For that purpose I made the cells implicit so each object, given it's position, can calculate in which cell(s) it should be. Then I use a HashMap that maps cell-coordinates to an ArrayList (the contents of the cell). This works fairly well since there is no memory lost on 'empty' cells and its easy to calculate which cells to inspect. However creating all those ArrayLists (worst case N) is expensive and so is growing the HashMap a lot of times (although that is slightly mitigated by giving it a large initial capacity). Problem OK so this works but still isn't very fast. Now I can try to micro-optimize the JAVA code. However I'm not expecting too much of that since the profiler tells me that most time is spent in creating all those objects that I use to store the cells. I'm hoping that there are some other tricks/algorithms out there that make this a lot faster so here is what my ideal data structure looks like: The number one priority is fast updating/reconstructing of the entire data structure Its less important to finely divide the objects into equally sized bins, we can draw a few extra objects and do a few extra collision checks if that means that updating is a little bit faster Memory is not really important (PC game)
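
    One incremental tweak to the scheme described above is to stop allocating fresh ArrayLists every frame: keep the HashMap and its lists alive, clear them at the start of the frame, and refill them, so the per-frame cost is mostly array writes rather than object creation. A hedged Java sketch of that reuse idea (class and method names are illustrative only):

    import java.util.ArrayList;
    import java.util.Collections;
    import java.util.HashMap;
    import java.util.List;
    import java.util.Map;

    public class SpatialHashGrid<T> {
        private final float cellSize;
        private final Map<Long, List<T>> cells = new HashMap<>();

        public SpatialHashGrid(float cellSize) { this.cellSize = cellSize; }

        private static long key(int cx, int cy) {         // pack two 32-bit cell coords
            return ((long) cx << 32) ^ (cy & 0xffffffffL);
        }

        // Clear the lists but keep them (and the map entries) alive, so the next
        // frame reuses the same objects instead of reallocating them.
        public void beginFrame() {
            for (List<T> list : cells.values()) list.clear();
        }

        public void insert(T obj, float x, float y) {
            long k = key((int) Math.floor(x / cellSize), (int) Math.floor(y / cellSize));
            cells.computeIfAbsent(k, unused -> new ArrayList<>()).add(obj);
        }

        public List<T> query(float x, float y) {
            long k = key((int) Math.floor(x / cellSize), (int) Math.floor(y / cellSize));
            return cells.getOrDefault(k, Collections.emptyList());
        }
    }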

    Read the article

  • How To Build An Enterprise Application - Introduction

    - by Tuan Nguyen
    An enterprise application is software which fulfills 4 core quality attributes: reliability, flexibility, reusability and maintainability. Reliability is the ability of a system or component to perform its required functions under stated conditions for a specific period of time. Because there is no better way than testing to make sure a system is reliable, we can exchange the term reliability with the term testability. Flexibility is the ability to change a system's core features without violating unrelated features or components. Flexibility can help us achieve interoperability easily, but the opposite is not true. For example, a program might run on multiple platforms and contain logic for many scenarios, but that wouldn't mean it was flexible if it forced us to rewrite code in all components when we just want to change some aspects of a feature it had. Reusability is the ability to share one or more of a system's components with another system. We should only expose a component's reusability in the context in which it is used. For example, we write classes that implement UI logic and deliver them only to classes which implement UI. Maintainability is the ability to add or remove features from a system after it has been released. Maintainability consists of many factors such as readability, analyzability and extensibility, of which extensibility is critical. Maintainability requires us to write code that is longer and more complex than normal, but that doesn't mean we introduce unnecessarily complex code; we always try to make our code clear and transparent to everyone. An enterprise application is built on an enterprise design which consists of two parts: low-level design and high-level design. The low-level design focuses on building loosely coupled classes or components. In particular, it recommends: each class or component undertakes only a single responsibility (design based on unit test); classes or components implement and work through interfaces (design based on contract); dependency relationships between classes and components can be injected at run-time (design based on dependency). The high-level design focuses on architecting the system into tiers and layers. In particular, it recommends: divide the system into subsystems for deployment, each subsystem being called a tier (typically, an enterprise application would have 3 tiers as illustrated in the following figure); arrange classes and components into logical containers called layers (typically, an enterprise application would have 5 layers as illustrated in the following figure).
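
    To make the low-level recommendations concrete, here is a hedged Java sketch of "work through interfaces" (design based on contract) combined with run-time injection of the dependency (design based on dependency); all names are illustrative, not from the article:

    // Design based on contract: callers depend on the interface only.
    interface UserRepository {
        String findNameById(int id);
    }

    // One implementation detail among many possible ones.
    class SqlUserRepository implements UserRepository {
        @Override public String findNameById(int id) { return "user-" + id; }
    }

    // Design based on dependency: the collaborator is injected, not constructed here,
    // which keeps the class testable with a fake repository.
    class GreetingService {
        private final UserRepository users;
        GreetingService(UserRepository users) { this.users = users; }
        String greet(int id) { return "Hello, " + users.findNameById(id); }
    }

    public class Demo {
        public static void main(String[] args) {
            GreetingService service = new GreetingService(new SqlUserRepository());
            System.out.println(service.greet(42)); // Hello, user-42
        }
    }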

    Read the article
