Search Results

Search found 3828 results on 154 pages for 'mathematical optimization'.


  • Storing information in the DOM?

    - by John
    I'm making a small private-message application in the form of a phone. Ten messages are shown at a time, and the list of messages is scrolled up/down by hiding them. Just how bad is it to use the DOM to store information in this way? My main goal for doing this is to reduce calls to the database: instead of making a new call all the time, it only checks whether any new messages have arrived and adds the new message(s). What's the alternative, cookies anyone? Thank you for your time.

    Read the article

  • Why does the order of the loops affect performance when iterating over a 2D array? [closed]

    - by Mark
    Possible Duplicate: Which of these two for loops is more efficient in terms of time and cache performance

    Below are two programs that are almost identical except that I switched the i and j variables around. They both run in different amounts of time. Could someone explain why this happens?

    Version 1:

        #include <stdio.h>
        #include <stdlib.h>

        main ()
        {
            int i,j;
            static int x[4000][4000];
            for (i = 0; i < 4000; i++) {
                for (j = 0; j < 4000; j++) {
                    x[j][i] = i + j;
                }
            }
        }

    Version 2:

        #include <stdio.h>
        #include <stdlib.h>

        main ()
        {
            int i,j;
            static int x[4000][4000];
            for (j = 0; j < 4000; j++) {
                for (i = 0; i < 4000; i++) {
                    x[j][i] = i + j;
                }
            }
        }

    Read the article
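
    For the loop-order question above, the short explanation is that C stores x in row-major order: Version 1's inner loop writes x[j][i] with j varying, so consecutive stores land 4000 ints (about 16 KB) apart and almost every store misses the cache, while Version 2 walks memory sequentially. A minimal timing sketch (C++, assuming std::chrono is available) that contrasts the two traversal orders:

        #include <chrono>
        #include <cstdio>

        static int x[4000][4000];

        // Fill the array with the given loop order and return the elapsed milliseconds.
        template <bool RowMajor>
        static double fill()
        {
            auto t0 = std::chrono::steady_clock::now();
            for (int a = 0; a < 4000; a++)
                for (int b = 0; b < 4000; b++) {
                    if (RowMajor)
                        x[a][b] = a + b;   // consecutive stores, cache friendly
                    else
                        x[b][a] = a + b;   // stride of 4000 ints per store
                }
            auto t1 = std::chrono::steady_clock::now();
            return std::chrono::duration<double, std::milli>(t1 - t0).count();
        }

        int main()
        {
            std::printf("column-major order: %.1f ms\n", fill<false>());
            std::printf("row-major order:    %.1f ms\n", fill<true>());
        }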

  • Optimal (Time paradigm) solution to check variable within boundary

    - by kumar_m_kiran
    Hi all, sorry if the question is very naive. I have to check the condition 0 < x < y in my code, i.e. something like if (x > 0 && x < y). The basic problem at the system level is that, currently, my existing code is hit many times for every call (in the telecom sense), so performance is very critical. Now I need to add a boundary check at many locations, with a different boundary comparison at each location. In ordinary code the above comparison would look perfectly harmless; however, added all over my statistics module (which is invoked many times), it will drag performance down. So I would like to know the best possible way to handle this scenario, i.e. an optimal technique for limit checking. For example, does a bit-level comparison work better than a normal comparison, or can both comparisons be evaluated in a shorter time span? Other info: x is an unsigned integer (which must be checked to be greater than 0 and less than y); y is an unsigned integer, non-const, and varies for every comparison; time is the constraint here rather than space; the language is C++. Also, if I later need to change y to a float/double, would there be another way to optimize the check (i.e. will the suggested optimal technique for integers become non-optimal when y is changed to float/double)? Thanks in advance for any input. PS: the operating systems used are SUSE 10 64-bit x86_64, AIX 5.3 64-bit, and HP-UX 11.1A 64-bit.

    Read the article
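
    One idea that often comes up for the range check above, sketched with its caveats spelled out: because x is unsigned, the pair of comparisons 0 < x && x < y can be folded into the single unsigned comparison (x - 1) < (y - 1), relying on 0 - 1 wrapping around to the largest unsigned value. The rewrite is only equivalent when y >= 1 (it misfires when y == 0), it does not carry over to float/double, and modern compilers frequently perform it on their own, so measuring on the target platforms is essential.

        #include <cstdint>

        // Equivalent to (x > 0 && x < y) for unsigned x and y, *provided y >= 1*.
        // When x == 0, x - 1 wraps to the maximum value, which can never be < y - 1.
        inline bool in_open_range(std::uint32_t x, std::uint32_t y)
        {
            return (x - 1u) < (y - 1u);
        }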

  • JavaScript Optimisation

    - by Jayie
    I am using JavaScript to work out all the combinations of badminton doubles matches from a given list of players. Each player teams up with everyone else. E.g. if I have the following players a, b, c and d, their combinations can be:

        a & b V c & d
        a & c V b & d
        a & d V b & c

    I am using the code below, which I wrote to do the job, but it's a little inefficient. It loops through the PLAYERS array 4 times finding every single combination (including impossible ones). It then sorts the game into alphabetical order and stores it in the GAMES array if it doesn't already exist. I can then use the first half of the GAMES array to list all game combinations. The trouble is that if I have any more than 8 players it runs really slowly because the combination growth is exponential. Does anyone know a better way or algorithm I could use? The more I think about it the more my brain hurts!

        var PLAYERS = ["a", "b", "c", "d", "e", "f", "g"];
        var GAMES = [];
        var p1, p2, p3, p4, i1, i2, i3, i4, entry, found, i;
        var pos = 0;
        var TEAM1 = [];
        var TEAM2 = [];

        // loop through players 4 times to get all combinations
        for (i1 = 0; i1 < PLAYERS.length; i1++) {
            p1 = PLAYERS[i1];
            for (i2 = 0; i2 < PLAYERS.length; i2++) {
                p2 = PLAYERS[i2];
                for (i3 = 0; i3 < PLAYERS.length; i3++) {
                    p3 = PLAYERS[i3];
                    for (i4 = 0; i4 < PLAYERS.length; i4++) {
                        p4 = PLAYERS[i4];
                        if ((p1 != p2 && p1 != p3 && p1 != p4) &&
                            (p2 != p1 && p2 != p3 && p2 != p4) &&
                            (p3 != p1 && p3 != p2 && p3 != p4) &&
                            (p4 != p1 && p4 != p2 && p4 != p3)) {
                            // sort teams into alphabetical order (so we can compare them easily later)
                            TEAM1[0] = p1;
                            TEAM1[1] = p2;
                            TEAM2[0] = p3;
                            TEAM2[1] = p4;
                            TEAM1.sort();
                            TEAM2.sort();
                            // work out the game and search the array to see if it already exists
                            entry = TEAM1[0] + " & " + TEAM1[1] + " v " + TEAM2[0] + " & " + TEAM2[1];
                            found = false;
                            for (i = 0; i < GAMES.length; i++) {
                                if (entry == GAMES[i]) found = true;
                            }
                            // if the game is unique then store it
                            if (!found) {
                                GAMES[pos] = entry;
                                document.write((pos + 1) + ": " + GAMES[pos] + "<br>");
                                pos++;
                            }
                        }
                    }
                }
            }
        }

    Thanks in advance. Jason.

    Read the article
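
    A direction for the question above that avoids the duplicate filtering entirely: pick each unordered group of four players once (indices i < j < k < l) and emit the three distinct pairings that group allows, giving C(n, 4) * 3 games with no de-duplication pass and no string comparisons. The idea is language-agnostic; a minimal sketch of it in C++ (the names are illustrative):

        #include <cstdio>
        #include <string>
        #include <vector>

        int main()
        {
            std::vector<std::string> players = {"a", "b", "c", "d", "e", "f", "g"};
            const int n = static_cast<int>(players.size());
            int pos = 0;

            // Every game uses one unordered group of four players, split into two pairs.
            // For a fixed group {i, j, k, l} there are exactly three possible splits.
            for (int i = 0; i < n; i++)
                for (int j = i + 1; j < n; j++)
                    for (int k = j + 1; k < n; k++)
                        for (int l = k + 1; l < n; l++) {
                            const int splits[3][4] = {
                                {i, j, k, l},   // (i,j) v (k,l)
                                {i, k, j, l},   // (i,k) v (j,l)
                                {i, l, j, k},   // (i,l) v (j,k)
                            };
                            for (const auto &s : splits)
                                std::printf("%d: %s & %s v %s & %s\n", ++pos,
                                            players[s[0]].c_str(), players[s[1]].c_str(),
                                            players[s[2]].c_str(), players[s[3]].c_str());
                        }
        }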

  • Minimizing distance to a weighted grid

    - by Andrew Tomazos - Fathomling
    Let's suppose you have a 1000x1000 grid of positive integer weights W. We want to find the cell that minimizes the average weighted distance to each cell. The brute-force way to do this would be to loop over each candidate cell and calculate the distance:

        int best_x, best_y, best_dist;
        for x0 = 1:1000,
            for y0 = 1:1000,
                int total_dist = 0;
                for x1 = 1:1000,
                    for y1 = 1:1000,
                        total_dist += W[x1,y1] * sqrt((x0-x1)^2 + (y0-y1)^2);
                if (total_dist < best_dist)
                    best_x = x0;
                    best_y = y0;
                    best_dist = total_dist;

    This takes ~10^12 operations, which is too long. Is there a way to do this in or near ~10^8 or so operations?

    Read the article
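
    One approximate route for the question above that fits the ~10^8 budget (a sketch, not the exact optimum): treat the cells as weighted points and run Weiszfeld's iteration for the weighted geometric median. Each iteration is a single O(10^6) pass over the grid and a handful of iterations usually suffice; round the result to the nearest cell and, if the exact grid optimum matters, brute-force a small neighbourhood of candidate cells around it. This assumes the objective is the sum (equivalently the average) of weighted Euclidean distances, and it is a heuristic: the continuous minimiser need not coincide exactly with the best grid cell.

        #include <cmath>
        #include <utility>
        #include <vector>

        const int N = 1000;

        // Weighted geometric median of the grid cells (x, y) with weights W[x][y],
        // computed with Weiszfeld's fixed-point iteration.
        std::pair<double, double> weiszfeld(const std::vector<std::vector<int>> &W, int iters = 30)
        {
            double cx = N / 2.0, cy = N / 2.0;                  // start near the centre
            for (int it = 0; it < iters; it++) {
                double sx = 0, sy = 0, sw = 0;
                for (int x = 0; x < N; x++)
                    for (int y = 0; y < N; y++) {
                        double d = std::hypot(cx - x, cy - y);
                        if (d < 1e-9) continue;                 // skip the cell we currently sit on
                        double w = W[x][y] / d;
                        sx += w * x;
                        sy += w * y;
                        sw += w;
                    }
                cx = sx / sw;
                cy = sy / sw;
            }
            return {cx, cy};
        }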

  • SQL -- How is DISTINCT so fast without an index?

    - by Jonathan
    Hi, I have an SQLite database with a table called 'links' that has 600 million rows in it. There are 2 columns in the table - a "src" column and a "dest" column. At present there are no indices. There are a fair number of common values between src and dest, but also a fair number of duplicated rows. The first thing I'm trying to do is remove all the duplicate rows, and then perform some additional processing on the results; however, I've been encountering some weird issues. Firstly, SELECT * FROM links WHERE src=434923 AND dest=5010182 returns one result fairly quickly and then takes quite a long time to finish, as I assume it's performing a table scan on the rest of the 600m rows. However, if I do SELECT DISTINCT * FROM links, then it immediately starts returning rows really quickly. The question is: how is this possible? Surely for each row, the row must be compared against all of the other rows in the table, but this would require a table scan of the remaining rows in the table, which SHOULD take ages! Any ideas why SELECT DISTINCT is so much quicker than a standard SELECT?

    Read the article

  • Reflector error or optimisation?

    - by David_001
    Long story short: I used Reflector on the System.Security.Util.Tokenizer class, and there are loads of goto statements in there. Here's a brief example snippet:

        Label_0026:
            if (this._inSavedCharacter != -1) {
                num = this._inSavedCharacter;
                this._inSavedCharacter = -1;
            }
            else {
                switch (this._inTokenSource) {
                    case TokenSource.UnicodeByteArray:
                        if ((this._inIndex + 1) < this._inSize) { break; }
                        stream.AddToken(-1);
                        return;
                    case TokenSource.UTF8ByteArray:
                        if (this._inIndex < this._inSize) { goto Label_00CF; }
                        stream.AddToken(-1);
                        return;
                    case TokenSource.ASCIIByteArray:
                        if (this._inIndex < this._inSize) { goto Label_023C; }
                        stream.AddToken(-1);
                        return;
                    case TokenSource.CharArray:
                        if (this._inIndex < this._inSize) { goto Label_0272; }
                        stream.AddToken(-1);
                        return;
                    case TokenSource.String:
                        if (this._inIndex < this._inSize) { goto Label_02A8; }
                        stream.AddToken(-1);
                        return;
                    case TokenSource.NestedStrings:
                        if (this._inNestedSize == 0) { goto Label_030D; }
                        if (this._inNestedIndex >= this._inNestedSize) { goto Label_0306; }
                        num = this._inNestedString[this._inNestedIndex++];
                        goto Label_0402;
                    default:
                        num = this._inTokenReader.Read();
                        if (num == -1) { stream.AddToken(-1); return; }
                        goto Label_0402;
                }
                num = (this._inBytes[this._inIndex + 1] << 8) + this._inBytes[this._inIndex];
                this._inIndex += 2;
            }
            goto Label_0402;
        Label_00CF:
            num = this._inBytes[this._inIndex++];
            if ((num & 0x80) != 0) {
                switch (((num & 240) >> 4)) {
                    case 8:
                    case 9:
                    case 10:
                    case 11:
                        throw new XmlSyntaxException(this.LineNo);
                    case 12:
                    case 13:
                        num &= 0x1f;
                        num3 = 2;
                        break;
                    case 14:
                        num &= 15;
                        num3 = 3;
                        break;
                    case 15:
                        throw new XmlSyntaxException(this.LineNo);
                }
                if (this._inIndex >= this._inSize) {
                    throw new XmlSyntaxException(this.LineNo, Environment.GetResourceString("XMLSyntax_UnexpectedEndOfFile"));
                }
                byte num2 = this._inBytes[this._inIndex++];
                if ((num2 & 0xc0) != 0x80) { throw new XmlSyntaxException(this.LineNo); }
                num = (num << 6) | (num2 & 0x3f);
                if (num3 != 2) {
                    if (this._inIndex >= this._inSize) {
                        throw new XmlSyntaxException(this.LineNo, Environment.GetResourceString("XMLSyntax_UnexpectedEndOfFile"));
                    }
                    num2 = this._inBytes[this._inIndex++];
                    if ((num2 & 0xc0) != 0x80) { throw new XmlSyntaxException(this.LineNo); }
                    num = (num << 6) | (num2 & 0x3f);
                }
            }
            goto Label_0402;
        Label_023C:
            num = this._inBytes[this._inIndex++];
            goto Label_0402;
        Label_0272:
            num = this._inChars[this._inIndex++];
            goto Label_0402;
        Label_02A8:
            num = this._inString[this._inIndex++];
            goto Label_0402;
        Label_0306:
            this._inNestedSize = 0;

    I essentially wanted to know how the class worked, but the number of gotos makes it impossible. Arguably something like a Tokenizer class needs to be heavily optimised, so my question is: is Reflector getting it wrong, or is goto an optimisation for this class?

    Read the article

  • Nested loop traversing arrays

    - by alecco
    There are 2 very big series of elements, the second 100 times bigger than the first. For each element of the first series, there are 0 or more elements in the second series. This can be traversed and processed with 2 nested loops, but the unpredictability of the number of matching elements for each member of the first array makes things very, very slow. The actual processing of the 2nd series of elements involves a logical AND (&) and a population count. I couldn't find good optimizations using C but I am considering doing inline asm, doing rep* mov* or similar for each element of the first series and then doing the batch processing of the matching bytes of the second series, perhaps in buffers of 1MB or something. But the code would get quite messy. Does anybody know of a better way? C preferred but x86 ASM OK too. Many thanks!

    Sample/demo code with a simplified problem; the first series are "people" and the second series are "events", for clarity's sake. (The original problem is actually 100m and 10,000m entries!)

        #include <stdio.h>
        #include <stdint.h>

        #define PEOPLE 1000000 // 1m
        struct Person {
            uint8_t age; // Filtering condition
            uint8_t cnt; // Number of events for this person in E
        } P[PEOPLE]; // Each has 0 or more bytes with bit flags

        #define EVENTS 100000000 // 100m
        uint8_t P1[EVENTS]; // Property 1 flags
        uint8_t P2[EVENTS]; // Property 2 flags

        void init_arrays() {
            for (int i = 0; i < PEOPLE; i++) { // just some stuff
                P[i].age = i & 0x07;
                P[i].cnt = i % 220; // assert( sum < EVENTS );
            }
            for (int i = 0; i < EVENTS; i++) {
                P1[i] = i % 7; // just some stuff
                P2[i] = i % 9; // just some other stuff
            }
        }

        int main(int argc, char *argv[]) {
            uint64_t sum = 0, fcur = 0;
            int age_filter = 7; // just some

            init_arrays(); // Init P, P1, P2

            for (int64_t p = 0; p < PEOPLE ; p++)
                if (P[p].age < age_filter)
                    for (int64_t e = 0; e < P[p].cnt ; e++, fcur++)
                        sum += __builtin_popcount( P1[fcur] & P2[fcur] );
                else
                    fcur += P[p].cnt; // skip this person's events

            printf("(dummy %ld %ld)\n", sum, fcur );
            return 0;
        }

    Compiled with:

        gcc -O5 -march=native -std=c99 test.c -o test

    Read the article
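
    Since each person's events occupy a contiguous range of P1/P2, one common direction for the question above (a sketch assuming GCC/Clang builtins; the helper name is made up) is to AND and popcount eight bytes at a time instead of one, which removes most of the per-byte loop overhead without resorting to assembly:

        #include <cstddef>
        #include <cstdint>
        #include <cstring>

        // Popcount of (a[i] & b[i]) over a contiguous range, eight bytes per step.
        static inline std::uint64_t and_popcount_range(const std::uint8_t *a,
                                                       const std::uint8_t *b,
                                                       std::size_t len)
        {
            std::uint64_t sum = 0;
            std::size_t i = 0;
            for (; i + 8 <= len; i += 8) {
                std::uint64_t wa, wb;
                std::memcpy(&wa, a + i, 8);   // memcpy keeps the loads alignment-safe
                std::memcpy(&wb, b + i, 8);
                sum += __builtin_popcountll(wa & wb);
            }
            for (; i < len; i++)              // leftover tail bytes
                sum += __builtin_popcount(static_cast<unsigned>(a[i] & b[i]));
            return sum;
        }

        // Possible use inside the person loop, replacing the inner event loop:
        //     if (P[p].age < age_filter)
        //         sum += and_popcount_range(P1 + fcur, P2 + fcur, P[p].cnt);
        //     fcur += P[p].cnt;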

  • MySQL TEXT field performance

    - by Jonathon
    I have several TEXT and/or MEDIUMTEXT fields in each of our 1000 MySQL tables. I now know that TEXT fields are written to disk rather than in memory when queried. Is that also true even if that field is not called in the query? For example, if I have a table (tbExam) with 2 fields (id int(11) and comment text) and I run SELECT id FROM tbExam, does MySQL still have to write that to disk before returning results or will it run that query in memory? I am trying to figure out if I need to reconfigure our actual db tables to switch to varchar(xxxx) or keep the text fields and reconfigure the queries.

    Read the article

  • Which one is a faster/better SQL practice?

    - by artsince
    Suppose I have a 2-column table (id, flag) where id is sequential. I expect this table to contain a lot of records. I want to periodically select the first row that is not flagged and update it. Some of the records on the way may have already been flagged, so I want to skip them. Does it make more sense to store the last id I flagged and use it in my select statement, like:

        select * from mytable where id > my_last_id order by id asc limit 1

    or to simply get the first unflagged row, like:

        select * from mytable where flagged = 'F' order by id asc limit 1

    Thank you!

    Read the article

  • How can I speed up this loop (in C)?

    - by splicer
    Hi! I'm trying to parallelize a convolution function in C. Here's the original function, which convolves two arrays of 64-bit floats:

        void convolve(const Float64 *in1, UInt32 in1Len,
                      const Float64 *in2, UInt32 in2Len,
                      Float64 *results)
        {
            UInt32 i, j;
            for (i = 0; i < in1Len; i++) {
                for (j = 0; j < in2Len; j++) {
                    results[i+j] += in1[i] * in2[j];
                }
            }
        }

    In order to allow for concurrency (without semaphores), I created a function that computes the result for a particular position in the results array:

        void convolveHelper(const Float64 *in1, UInt32 in1Len,
                            const Float64 *in2, UInt32 in2Len,
                            Float64 *result, UInt32 outPosition)
        {
            UInt32 i, j;
            for (i = 0; i < in1Len; i++) {
                if (i > outPosition)
                    break;
                j = outPosition - i;
                if (j >= in2Len)
                    continue;
                *result += in1[i] * in2[j];
            }
        }

    The problem is, using convolveHelper slows down the code about 3.5 times (when running on a single thread). Any ideas on how I can speed up convolveHelper while maintaining thread safety?

    Read the article
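
    One way to keep the fast inner loop of convolve for the question above and still avoid locks (a sketch, assuming C++11 std::thread is acceptable next to the C code; the names and the thread count are illustrative): give each thread ownership of a contiguous slice of the output array. Every thread runs the original two-loop form but clamps j so it only writes output indices inside its own slice, so no two threads ever touch the same element.

        #include <algorithm>
        #include <cstdint>
        #include <thread>
        #include <vector>

        // Accumulate only the output indices in [lo, hi), the slice this thread owns.
        static void convolveSlice(const double *in1, std::uint32_t in1Len,
                                  const double *in2, std::uint32_t in2Len,
                                  double *results, std::uint32_t lo, std::uint32_t hi)
        {
            for (std::uint32_t i = 0; i < in1Len; i++) {
                std::uint32_t jBegin = (lo > i) ? lo - i : 0;
                std::uint32_t jEnd   = (hi > i) ? std::min(in2Len, hi - i) : 0;
                for (std::uint32_t j = jBegin; j < jEnd; j++)
                    results[i + j] += in1[i] * in2[j];   // i + j stays inside [lo, hi)
            }
        }

        // Split the zero-initialized results array (length in1Len + in2Len - 1) across threads.
        static void convolveParallel(const double *in1, std::uint32_t in1Len,
                                     const double *in2, std::uint32_t in2Len,
                                     double *results, unsigned nThreads = 4)
        {
            const std::uint32_t outLen = in1Len + in2Len - 1;
            std::vector<std::thread> pool;
            for (unsigned t = 0; t < nThreads; t++) {
                auto lo = static_cast<std::uint32_t>(std::uint64_t{outLen} * t / nThreads);
                auto hi = static_cast<std::uint32_t>(std::uint64_t{outLen} * (t + 1) / nThreads);
                pool.emplace_back(convolveSlice, in1, in1Len, in2, in2Len, results, lo, hi);
            }
            for (auto &th : pool)
                th.join();
        }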

  • How to optimize this MySQL table?

    - by Lost_in_code
    This is for an upcoming project. I have two tables - the first one keeps track of photos, and the second one keeps track of each photo's rank.

    Photos:

        +-------+-----------+------------------+
        | id    | photo     | current_rank     |
        +-------+-----------+------------------+
        | 1     | apple     | 5                |
        | 2     | orange    | 9                |
        +-------+-----------+------------------+

    The photo rank keeps changing on a regular basis, and this is the table that tracks it:

    Ranks:

        +-------+-----------+----------+-------------+
        | id    | photo_id  | ranks    | timestamp   |
        +-------+-----------+----------+-------------+
        | 1     | 1         | 8        | *           |
        | 2     | 2         | 2        | *           |
        | 3     | 1         | 3        | *           |
        | 4     | 1         | 7        | *           |
        | 5     | 1         | 5        | *           |
        | 6     | 2         | 9        | *           |
        +-------+-----------+----------+-------------+

        * = current timestamp

    Every rank is tracked for reporting/analysis purposes. I talked to someone who has experience in this field and he told me that storing ranks like the above is the way to go. But I'm not so sure yet. The problem here is data redundancy. There are going to be tens of thousands of photos. The photo rank changes on an hourly basis (many times within minutes) for recent photos but less frequently for older photos. At this rate the table will have millions of records within months. And since I do not have experience in working with large databases, this makes me a little nervous. I thought of this:

    Ranks:

        +-------+-----------+--------------------+
        | id    | photo_id  | ranks              |
        +-------+-----------+--------------------+
        | 1     | 1         | 8:*,3:*,7:*,5:*    |
        | 2     | 2         | 2:*,9:*            |
        +-------+-----------+--------------------+

        * = current timestamp

    That means some extra code in PHP to split the rank/time (and sorting), but that looks OK to me. Is this a correct way to optimize the table for performance? What would you recommend? Any suggestions would be great.

    Read the article

  • Optimizing MySQL queries with IN operator

    - by Arkadiusz Kondas
    I have a MySQL database with a fairly large products table. Each product has its own id and a categoryId field holding the id of the category it belongs to. Now I have a query that pulls products out of given categories, such as:

        SELECT * FROM products WHERE categoryId IN ( 1, 2, 3, 4, 5, 34, 6, 7, 8, 9, 10, 11, 12 )

    Of course there is also a WHERE clause and an ORDER BY sort, but they are not the issue here. Let's say there are 250k of these products and over 100k visits per day. Under such conditions the slow_log table registers many of these queries with a large generation time. Do you have any ideas how to optimize this problem? The table engine is MyISAM.

    Read the article

  • Improve C function performance with cache locality?

    - by Christoper Hans
    I have to find the diagonal difference in a matrix represented as a 2d array, and the function prototype is:

        int diagonal_diff(int x[512][512])

    I have to use a 2d array, and the data is 512x512. This is tested on a SPARC machine: my current timing is 6ms but I need to be under 2ms. Sample data:

        [3][4][5][9]
        [2][8][9][4]
        [6][9][7][3]
        [5][8][8][2]

    The difference is:

        |4-2| + |5-6| + |9-5| + |9-9| + |4-8| + |3-8| = 2 + 1 + 4 + 0 + 4 + 5 = 16

    In order to do that, I use the following algorithm:

        int i, j, result = 0;
        for (i = 0; i < 4; i++)
            for (j = i + 1; j < 4; j++)
                result += abs(array[i][j] - array[j][i]);
        return result;

    But this algorithm keeps accessing column, row, column, row, etc., which makes inefficient use of the cache. Is there a way to improve my function?

    Read the article
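
    A standard remedy for the question above (a sketch, with the block size as a tunable guess) is loop blocking: walk the matrix in small tiles so the row-wise accesses x[i][j] and the transposed, column-wise accesses x[j][i] both stay inside a pair of tiles that fit in the L1 cache, instead of streaming the whole 512x512 array in two conflicting orders.

        #include <cstdlib>

        // Blocked diagonal difference: visit each unordered pair (i, j) with i < j exactly once,
        // tile by tile, so both x[i][j] and x[j][i] hit cache-resident blocks.
        int diagonal_diff_blocked(int x[512][512])
        {
            const int N = 512;
            const int B = 16;                                  // tile size; tune for the target CPU
            int result = 0;
            for (int ib = 0; ib < N; ib += B)
                for (int jb = ib; jb < N; jb += B)             // only tiles on or above the diagonal
                    for (int i = ib; i < ib + B; i++) {
                        int jStart = (ib == jb) ? i + 1 : jb;  // in diagonal tiles, stay above the diagonal
                        for (int j = jStart; j < jb + B; j++)
                            result += std::abs(x[i][j] - x[j][i]);
                    }
            return result;
        }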

  • With (or similar) statement in jQuery

    - by Salman A
    Very simple question: I want to optimize the following jQuery code with maximum readability, optimal performance and minimum fuss (fuss = declaring new variables etc):

        $(".addthis_toolbox").append('<a class="addthis_button_delicious"></a>');
        $(".addthis_toolbox").append('<a class="addthis_button_facebook"></a>');
        $(".addthis_toolbox").append('<a class="addthis_button_google"></a>');
        $(".addthis_toolbox").append('<a class="addthis_button_reddit"></a>');
        . . .

    Read the article

  • Is it faster to compute values in a query, call a Scalar Function (decimal(28,2) datatype) 4 times,

    - by Pulsehead
    I have a handful of queries I need to write in SQL Server 2005. Each query will be calculating 4 unit cost values based on a handful of (up to 11) fields. Any time I want 1 of these 4 unit cost values, I'll want all 4. Which is quicker: computing the values in the SQL query ((a+b+c+d+e+f+g+h+i)/(j+k)), calling ComputeScalarUnitCost(datapoint.ID) 4 times, or joining to ComputeUnitCostTable(datapoint.ID) one time?

    Read the article

  • Two if conditions or one if with OR

    - by Ram
    I have a small doubt. I have the following code:

        bool res = false;
        if (cond1)
        {
            res = true;
        }
        if (cond2)
        {
            res = true;
        }

    Instead of this, what if I put the following code?

        if (cond1 || cond2)
        {
            res = true;
        }

    Which code snippet will be more optimized? I believe it would be the second one, as I have avoided an if condition.

    Read the article
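
    For what it is worth, a minimal sketch of the usual advice for the snippet above: the whole thing collapses to a single assignment, and with || the second condition is short-circuited (not evaluated at all when the first is already true), which is the one behavioural difference from the two separate ifs. An optimizing compiler will typically emit near-identical code for all of these forms, so readability should decide.

        // Equivalent to both snippets when cond1 and cond2 are plain bools with no side effects.
        inline bool either(bool cond1, bool cond2)
        {
            return cond1 || cond2;   // cond2 is not evaluated when cond1 is already true
        }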

  • Does it make sense to replace sub-queries by join?

    - by Roman
    For example, I have a query like this:

        select col1 from t1
        where col2 > 0
          and col1 in (select col1 from t2 where col2 > 0)

    As far as I understand, I can replace it with the following query:

        select t1.col1 from t1
        join (select col1 from t2 where col2 > 0) as t2 on t1.col1 = t2.col1
        where t1.col2 > 0

    ADDED: In some answers I see join, in others inner join. Are both right? Or are they even identical?

    Read the article
