Search Results

Search found 668 results on 27 pages for 'wobbily col'.

Page 3/27

  • How to insert and call values by row and column in sqlite3 (Python) - a great tutorial problem

    - by user291071
    Let's say I have a simple array of x rows and y columns with corresponding values. What is the best method to do three things: insert or update a value at a specific row and column, and select the value for a given row and column?

        import sqlite3
        con = sqlite3.connect('simple.db')
        c = con.cursor()
        c.execute('''create table simple (links text)''')
        con.commit()
        dic = {'x1':{'y1':1.0,'y2':0.0},'x2':{'y1':0.0,'y2':2.0,'y3':1.5},'x3':{'y2':2.0,'y3':1.5}}
        ucols = {}
        ## my current thoughts are: collect all row values and all column values from dic
        ## and populate the table rows and columns accordingly; how to call by row and
        ## column I haven't figured out yet

        ## populate rows in first column
        for row in dic:
            print row
            c.execute("""insert into simple ('links') values ('%s')""" % row)
        con.commit()

        ## unique columns
        for row in dic:
            print row
            for col in dic[row]:
                print col
                ucols[col] = dic[row][col]

        ## populate columns
        for col in ucols:
            print col
            c.execute("alter table simple add column '%s' 'float'" % col)
        con.commit()

        ## functions needed
        ## insert values into sql by row x and column y? how to do this,
        ## e.g. x1 and y2 should put in 0.0 -- I tried as follows, didn't work
        for row in dic:
            for col in dic[row]:
                val = dic[row][col]
                c.execute("""update simple SET '%s' = '%f' WHERE 'links'='%s'""" % (col, val, row))
        con.commit()

        ## update value at a specific row x and column y?
        ## select a value at a specific row x and column y?
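    A minimal sketch of one commonly suggested alternative (an assumption on my part, not from the question: Python 3 syntax, and a (row, col, value) schema instead of adding a column per y-key), which makes insert, update, and select by row/column one statement each:

        import sqlite3

        con = sqlite3.connect(':memory:')
        c = con.cursor()
        c.execute('create table simple (row text, col text, value real, primary key (row, col))')

        dic = {'x1': {'y1': 1.0, 'y2': 0.0},
               'x2': {'y1': 0.0, 'y2': 2.0, 'y3': 1.5},
               'x3': {'y2': 2.0, 'y3': 1.5}}

        # insert or update a value at a specific row and column
        for row, cols in dic.items():
            for col, val in cols.items():
                c.execute('insert or replace into simple (row, col, value) values (?, ?, ?)',
                          (row, col, val))
        con.commit()

        # select the value at row x1, column y2
        c.execute('select value from simple where row = ? and col = ?', ('x1', 'y2'))
        print(c.fetchone())  # (0.0,)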

    Read the article

  • How to use a static function in a header and compare against a float array

    - by ed k
    I wrote this function:

        static bool colorIsEmpty(const Color col)
        {
            return (col[0] == 0 && col[1] == 0 && col[2] == 0);
        }

    where Color is simply a float[3]. The function doesn't work when all three elements of col are 0, but this works:

        if (col[0] == col[1] == col[2] == 0)
        {
            // gets called
        }

    However, gcc gives me a warning:

        cColorTest.c:212:5: warning: suggest parentheses around comparison in operand of '==' [-Wparentheses]

    It would be nice if that function worked. Why doesn't it?
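    For what it's worth, the chained comparison is almost certainly not testing what it appears to: == associates left to right, so each step compares a 0-or-1 truth value against the next float. A minimal sketch illustrating both forms (assuming Color is typedef'd as an array, which the question implies but doesn't show):

        #include <stdbool.h>
        #include <stdio.h>

        typedef float Color[3];   /* assumption: Color is an array typedef */

        static bool colorIsEmpty(const Color col)
        {
            /* col decays to const float *, so element-wise comparison works */
            return col[0] == 0.0f && col[1] == 0.0f && col[2] == 0.0f;
        }

        int main(void)
        {
            Color black = {0.0f, 0.0f, 0.0f};

            /* Parsed as ((black[0] == black[1]) == black[2]) == 0: it compares
               0/1 truth values against floats, which is exactly why gcc suggests
               parentheses. It happens to be true for all-zero input, but for the
               wrong reason. */
            if (black[0] == black[1] == black[2] == 0)
                printf("chained form fired\n");

            printf("colorIsEmpty: %d\n", colorIsEmpty(black)); /* prints 1 */
            return 0;
        }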

    Read the article

  • dojo: if I only have the column value of a Portlet, can I place it in a GridContainer?

    - by user3639054
    I am facing a problem and don't even know how to approach it, so I would like some advice. I save the column value of each Portlet and the nbZones value of the GridContainer in MongoDB. My situation is: if I read the column and row values of each Portlet and the nbZones value of the GridContainer back from MongoDB, can each Portlet be restored to its original place in the GridContainer? My MongoDB data:

        {
            "GridSeq": 1,
            "nbZones": 3,
            "Portlet": [
                { "row": 1, "col": 0 }
            ]
        }

    I calculate the row value with the following code:

        var perColumn = [];
        var id = portlet.get('id');
        var col = portlet.get('column');
        if (perColumn[col] !== undefined) {
            perColumn[col]++;
        } else {
            perColumn[col] = 0;
        }
        var row = perColumn[col];
        console.log('id:' + id + ", row: " + row + ", col: " + col);

    Read the article

  • Checking if an Unloaded Collection Contains Elements

    - by Ricardo Peres
    If you want to know if an unloaded collection in an entity contains elements, or count them, without actually loading them, you need to use a custom query; that is because the Count property (if the collection is not mapped with lazy="extra") and the LINQ Count() and Any() methods force the whole collection to be loaded. You can use something like these two methods, one for checking if there are any values, the other for actually counting them:

        public static Boolean Exists(this ISession session, IEnumerable collection)
        {
            if (collection is IPersistentCollection)
            {
                IPersistentCollection col = collection as IPersistentCollection;

                if (col.WasInitialized == false)
                {
                    String[] roleParts = col.Role.Split('.');
                    String ownerTypeName = String.Join(".", roleParts, 0, roleParts.Length - 1);
                    String ownerCollectionName = roleParts.Last();
                    String hql = "select 1 from " + ownerTypeName + " it where it.id = :id and exists elements(it." + ownerCollectionName + ")";
                    Boolean exists = session.CreateQuery(hql).SetParameter("id", col.Key).List().Count == 1;

                    return (exists);
                }
            }

            return ((collection as IEnumerable).OfType<Object>().Any());
        }

        public static Int64 Count(this ISession session, IEnumerable collection)
        {
            if (collection is IPersistentCollection)
            {
                IPersistentCollection col = collection as IPersistentCollection;

                if (col.WasInitialized == false)
                {
                    String[] roleParts = col.Role.Split('.');
                    String ownerTypeName = String.Join(".", roleParts, 0, roleParts.Length - 1);
                    String ownerCollectionName = roleParts.Last();
                    String hql = "select count(elements(it." + ownerCollectionName + ")) from " + ownerTypeName + " it where it.id = :id";
                    Int64 count = session.CreateQuery(hql).SetParameter("id", col.Key).UniqueResult<Int64>();

                    return (count);
                }
            }

            return ((collection as IEnumerable).OfType<Object>().Count());
        }

    Here's how to use them:

        MyEntity entity = session.Load<MyEntity>(100);

        if (session.Exists(entity.SomeCollection))
        {
            Int64 count = session.Count(entity.SomeCollection);
            //...
        }

    Read the article

  • OpenCV Mat creation memory leak

    - by Royi Freifeld
    My memory fills up fairly quickly when using the following piece of code. Valgrind shows a memory leak, but everything is allocated on the stack and is (supposed to be) freed once the function ends.

        void mult_run_time(int rows, int cols)
        {
            Mat matrix(rows,cols,CV_32SC1);
            Mat row_vec(cols,1,CV_32SC1);

            /* initialize vector and matrix */
            for (int col = 0; col < cols; ++col)
            {
                for (int row = 0; row < rows; ++row)
                {
                    matrix.at<unsigned long>(row,col) = rand() % ULONG_MAX;
                }
                row_vec.at<unsigned long>(1,col) = rand() % ULONG_MAX;
            }
            /* end initialization of vector and matrix */

            matrix*row_vec;
        }

        int main()
        {
            for (int row = 0; row < 20; ++row)
            {
                for (int col = 0; col < 20; ++col)
                {
                    mult_run_time(row,col);
                }
            }
            return 0;
        }

    Valgrind shows that there is a memory leak on the line Mat row_vec(cols,1,CV_32SC1):

        ==9201== 24,320 bytes in 380 blocks are definitely lost in loss record 50 of 50
        ==9201==    at 0x4026864: malloc (vg_replace_malloc.c:236)
        ==9201==    by 0x40C0A8B: cv::fastMalloc(unsigned int) (in /usr/local/lib/libopencv_core.so.2.3.1)
        ==9201==    by 0x41914E3: cv::Mat::create(int, int const*, int) (in /usr/local/lib/libopencv_core.so.2.3.1)
        ==9201==    by 0x8048BE4: cv::Mat::create(int, int, int) (mat.hpp:368)
        ==9201==    by 0x8048B2A: cv::Mat::Mat(int, int, int) (mat.hpp:68)
        ==9201==    by 0x80488B0: mult_run_time(int, int) (mat_by_vec_mult.cpp:26)
        ==9201==    by 0x80489F5: main (mat_by_vec_mult.cpp:59)

    Is this a known bug in OpenCV, or am I missing something?
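    One thing worth ruling out before suspecting OpenCV (a sketch, not a verified diagnosis of this report): CV_32SC1 elements are 32-bit ints, but the code accesses them through at<unsigned long>, and row_vec.at<unsigned long>(1, col) indexes a cols-by-1 column vector as if it had cols columns. Both are out-of-bounds, type-punned writes that can corrupt the allocator's bookkeeping and confuse leak checkers. A type- and bounds-correct version of the initialization:

        #include <opencv2/core/core.hpp>
        #include <cstdlib>
        #include <climits>

        using cv::Mat;

        void mult_run_time(int rows, int cols)
        {
            Mat matrix(rows, cols, CV_32SC1);
            Mat row_vec(cols, 1, CV_32SC1);

            for (int col = 0; col < cols; ++col)
            {
                for (int row = 0; row < rows; ++row)
                {
                    // CV_32SC1 holds 32-bit signed ints, so access with at<int>
                    matrix.at<int>(row, col) = rand() % INT_MAX;
                }
                // row_vec is cols x 1: the element index is (col, 0), not (1, col)
                row_vec.at<int>(col, 0) = rand() % INT_MAX;
            }

            Mat product = matrix * row_vec; // keep the result so the multiply isn't a no-op
        }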

    Read the article

  • Selenium RC: Selecting elements using the CSS :contains pseudo-class

    - by Andrew
    I would like to assert that a table row contains the data that I expect, in two different tables. Using the following HTML as an example:

        <table>
            <tr>
                <th>Table 1</th>
            </tr>
            <tr>
                <td>Row 1 Col 1</td>
                <td>Row 1 Col 2</td>
            </tr>
        </table>
        <table>
            <tr>
                <th>Table 2</th>
            </tr>
            <tr>
                <td>Row 1 Col 1</td>
                <td>different data</td>
            </tr>
        </table>

    The following assertion passes:

        $this->assertElementPresent('css=table:contains(Table 1)');

    However, this one doesn't:

        $this->assertElementPresent('css=table:contains(Table 1) tr:contains(Row 1 Col 1)');

    And ultimately, I need to be able to test that both columns within the table row contain the data that I expect:

        $this->assertElementPresent('css=table:contains(Table 1) tr:contains(Row 1 Col 1):contains(Row 1 Col 2)');
        $this->assertElementPresent('css=table:contains(Table 2) tr:contains(Row 1 Col 1):contains(different data)');

    What am I doing wrong? How can I achieve this?

    Update: It sounds like the problem is a bug in Selenium when trying to select descendants. The only way I was able to get this to work was to add an extra identifier to the table so I could tell which one I was working with:

        /* HTML */
        <table id="table-1">

        /* PHP */
        $this->assertElementPresent("css=#table-1 tr:contains(Row 1 Col 1):contains(Row 1 Col 2)");

    Read the article

  • Sudoku Recursion Issue (Java)

    - by SkylineAddict
    I'm having an issue with creating a random Sudoku grid. I tried modifying a recursive routine that I used to solve the puzzle. The puzzle itself is a two-dimensional integer array. This is what I have (by the way, the method doesn't only randomize the first row; I started out randomizing just the first row, then decided to do the whole grid):

        public boolean randomizeFirstRow(int row, int col){
            Random rGen = new Random();
            if(row == 9){
                return true;
            }
            else{
                boolean res;
                for(int ndx = rGen.nextInt() + 1; ndx <= 9;){
                    // Input values into the boxes
                    sGrid[row][col] = ndx;
                    // Then test to see if the value is valid
                    if(this.isRowValid(row, sGrid) && this.isColumnValid(col, sGrid)
                            && this.isQuadrantValid(row, col, sGrid)){
                        // grid valid, move to the next cell
                        if(col + 1 < 9){
                            res = randomizeFirstRow(row, col+1);
                        }
                        else{
                            res = randomizeFirstRow(row+1, 0);
                        }
                        // If the value inputted is valid, restart loop
                        if(res == true){
                            return true;
                        }
                    }
                }
            }
            // If no value can be put in, set value to 0 to prevent program counting to 9
            setGridValue(row, col, 0);
            // Return to previous method in stack
            return false;
        }

    This results in an ArrayIndexOutOfBoundsException with a ridiculously high or low index (around plus or minus 100,000). I've tried to see how far the code gets into the method, and it never goes beyond this line:

        if(this.isRowValid(row, sGrid) && this.isColumnValid(col, sGrid) && this.isQuadrantValid(row, col, sGrid))

    I don't understand how the array index gets so high. Can anyone help me out?
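    The suspicious part is the loop header (a hedged reading, since the full class isn't shown): Random.nextInt() with no argument returns any of the 2^32 possible ints, so ndx starts at a huge positive or negative value and is written into sGrid before any validation, and the loop body never increments ndx either. A bounded, shuffling variant of the same idea (sGrid, isRowValid, isColumnValid, isQuadrantValid are the asker's members, assumed unchanged):

        import java.util.ArrayList;
        import java.util.Collections;
        import java.util.List;

        // Sketch of the candidate loop only, inside the same class as sGrid.
        public boolean randomizeGrid(int row, int col) {
            if (row == 9) {
                return true;
            }
            // Try the digits 1..9 in random order instead of starting at nextInt() + 1
            List<Integer> candidates = new ArrayList<Integer>();
            for (int v = 1; v <= 9; v++) {
                candidates.add(v);
            }
            Collections.shuffle(candidates);

            for (int ndx : candidates) {
                sGrid[row][col] = ndx;
                if (isRowValid(row, sGrid) && isColumnValid(col, sGrid)
                        && isQuadrantValid(row, col, sGrid)) {
                    boolean res = (col + 1 < 9) ? randomizeGrid(row, col + 1)
                                                : randomizeGrid(row + 1, 0);
                    if (res) {
                        return true;
                    }
                }
            }
            sGrid[row][col] = 0; // backtrack
            return false;
        }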

    Read the article

  • Memory not being returned after Python function call

    - by Dan
    I've got a function which parses a sentence by building up a big chart. For some reason, Python holds on to whatever memory was allocated during that function call. That is, I do

        best = translate(sentence, grammar)

    and somehow my memory usage goes up and stays up. Here is the function:

        from string import join
        from heapq import nsmallest, heappush

        def translate(f, g):
            words = f.split()
            chart = {}
            for col in range(len(words)):
                for row in reversed(range(0,col+1)):
                    # get rules for this subspan
                    rules = g[join(words[row:col+1], ' ')]
                    # ensure there's at least one rule on the diagonal
                    if not rules and row==col:
                        rules=[(0.0, join(words[row:col+1]))]
                    # pick up rules below & to the left
                    for k in range(row,col):
                        if (row,k) and (k+1,col) in chart:
                            for (w1, e1) in chart[row, k]:
                                for (w2, e2) in chart[k+1,col]:
                                    heappush(rules, (w1+w2, e1+' '+e2))
                    # add all rules to chart
                    chart[row,col] = nsmallest(MAX_TRANSLATIONS, rules)
            (w, best) = chart[0, len(words)-1][0]
            return best

    EDIT: Using Python 2.7 on OS X. The grammar g is just a dictionary from strings to heaps, e.g.:

        g['et']
        [(1.05, 'and'), (6.92, ', and'), (9.95, 'and ,'), (11.17, 'and to')]

    EDIT: If you want to run the code, try the sentence "Cela est difficile" with the following grammar:

        >>> g['cela']
        [(8.28, 'this'), (11.21, 'it'), (11.57, 'that'), (15.26, 'this ,')]
        >>> g['est']
        [(2.69, 'is'), (10.21, 'is ,'), (11.15, 'has'), (11.28, ', is')]
        >>> g['difficile']
        [(2.01, 'difficult'), (10.08, 'hard'), (10.19, 'difficult ,'), (10.57, 'a difficult')]
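    Incidentally, one line here may not do what it looks like (an observation, not necessarily the cause of the memory growth): `if (row,k) and (k+1,col) in chart` parses as `(row,k) and ((k+1,col) in chart)`, and a non-empty tuple is always truthy, so the first membership test is silently skipped. The explicit form would be:

        # Both spans must already be in the chart before combining them;
        # the original condition only tested the second one.
        if (row, k) in chart and (k + 1, col) in chart:
            for (w1, e1) in chart[row, k]:
                for (w2, e2) in chart[k + 1, col]:
                    heappush(rules, (w1 + w2, e1 + ' ' + e2))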

    Read the article

  • Weird error: [Semantical Error] line 0, col 75 near 'submit': Error: 'submit' is not defined.

    - by luxury
    My controller looks like:

        /**
         * @Route("/product/submit", name="product_submit")
         * @Template("GaorenVendorsBundle:Product:index.html.twig")
         */
        public function submitAction()
        {
            $em = $this->getDoctrine()->getManager();
            $uid = $this->getUser()->getId();
            $em->getRepository( 'GaorenVendorsBundle:Product' )->updateStatus( $uid, Product::STATUS_FREE, Product::STATUS_PENDING );
            return $this->redirect( $this->generateUrl( 'product' ) );
        }

    and the repository like:

        class ProductRepository extends EntityRepository
        {
            public function updateStatus($uid, $status, $setter)
            {
                $st = $this->getEntityManager()->getRepository( 'GaorenVendorsBundle:Product' )
                    ->createQueryBuilder( 'p' )
                    ->update( 'GaorenVendorsBundle:Product', 'p' )
                    ->set( 'p.status', ':setter' )
                    ->where( 'p.status= :status AND p.user= :user' )
                    ->setParameters( array( 'user' => $uid, 'status' => $status, 'setter' => $setter ) )
                    ->getQuery()
                    ->execute();
                return $st;
            }
        }

    When I request the "submit" action, it reports "[Semantical Error] line 0, col 75 near 'submit': Error: 'submit' is not defined.". "submit" has nothing to do with the Doctrine ORM query, so why does it appear in the error? I just can't figure it out. Could anyone tell me?

    Read the article

  • I am trying to use user-defined functions to print out a T out of stars, but I need help shifting t

    - by lm
    I know main() and other parts of the program are missing, but please help.

        def horizLine(col):
            for cols in range(col):
                print("*", end='')
            print()

        def line(col):
            # C,E,F,G,I,L,P,T
            for col in range(col//2):
                print("*", end='')
            print()

        def functionT(width):
            horizLine(width)
            line(width)

        # enter width for the box
        width = int(input("Enter a width between 5 and 20: "))
        letter = input("Enter one of the capital letters: T ")

        if width >= 5 and width <= 20:
            if letter == "T":
                functionT(width)
            else:
                print()
                print("Invalid letter!")
        else:
            print("You have entered a wrong range for the width!")

        main()
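    To shift the stem under the middle of the bar, each stem line needs a run of leading spaces (roughly half the width) before its star. A minimal sketch of that idea (the stem height of width // 2 is an arbitrary choice, not from the question):

        def horiz_line(width):
            # top bar of the T
            print("*" * width)

        def stem(width, height):
            # pad each line so the single star sits under the middle of the bar
            for _ in range(height):
                print(" " * (width // 2) + "*")

        def letter_t(width):
            horiz_line(width)
            stem(width, width // 2)

        letter_t(9)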

    Read the article

  • How to generate DELETE statements in PL/SQL, based on the tables' FK relations?

    - by The chicken in the kitchen
    Is it possible, via script or tool, to automatically generate the many DELETE statements needed, based on the tables' FK relations, using Oracle PL/SQL? For example: I have the table CHICKEN (CHICKEN_CODE NUMBER), and there are 30 tables with FK references to its CHICKEN_CODE whose rows I need to delete; there are also another 150 tables foreign-key-linked to those 30 tables that I need to delete from first. Is there some PL/SQL tool/script that I can run in order to generate all the necessary DELETE statements, based on the FK relations, for me? (By the way, I know about cascade deletes on relations, but please pay attention: I CAN'T USE THEM IN MY PRODUCTION DATABASE, because it's dangerous!) I'm using Oracle Database 10g R2.

    This is a view I had previously written, but of course it is not recursive:

        CREATE OR REPLACE FORCE VIEW RUN (OWNER_1, CONSTRAINT_NAME_1, TABLE_NAME_1, TABLE_NAME, VINCOLO) AS
        SELECT OWNER_1, CONSTRAINT_NAME_1, TABLE_NAME_1, TABLE_NAME,
               '(' || LTRIM (EXTRACT (XMLAGG (XMLELEMENT ("x", ',' || COLUMN_NAME)), '/x/text()'), ',') || ')' VINCOLO
        FROM (SELECT CON1.OWNER OWNER_1, CON1.TABLE_NAME TABLE_NAME_1, CON1.CONSTRAINT_NAME CONSTRAINT_NAME_1,
                     CON1.DELETE_RULE, CON1.STATUS, CON.TABLE_NAME, CON.CONSTRAINT_NAME, COL.POSITION, COL.COLUMN_NAME
              FROM DBA_CONSTRAINTS CON, DBA_CONS_COLUMNS COL, DBA_CONSTRAINTS CON1
              WHERE CON.OWNER = 'TABLE_OWNER'
                AND CON.TABLE_NAME = 'TABLE_OWNED'
                AND ((CON.CONSTRAINT_TYPE = 'P') OR (CON.CONSTRAINT_TYPE = 'U'))
                AND COL.TABLE_NAME = CON1.TABLE_NAME
                AND COL.CONSTRAINT_NAME = CON1.CONSTRAINT_NAME
                --AND CON1.OWNER = CON.OWNER
                AND CON1.R_CONSTRAINT_NAME = CON.CONSTRAINT_NAME
                AND CON1.CONSTRAINT_TYPE = 'R'
              GROUP BY CON1.OWNER, CON1.TABLE_NAME, CON1.CONSTRAINT_NAME, CON1.DELETE_RULE, CON1.STATUS,
                       CON.TABLE_NAME, CON.CONSTRAINT_NAME, COL.POSITION, COL.COLUMN_NAME)
        GROUP BY OWNER_1, CONSTRAINT_NAME_1, TABLE_NAME_1, TABLE_NAME;

    ... and it contains the mistake of using DBA_CONSTRAINTS instead of ALL_CONSTRAINTS.

    Please pay attention to this: http://stackoverflow.com/questions/485581/generate-delete-statement-from-foreign-key-relationships-in-sql-2008/2677145#2677145 -- another user has written a solution in SQL Server 2008. Is anyone able to convert it to Oracle 10g PL/SQL? I am not able to. :-( This is the code written by the other user in SQL Server 2008:

        DECLARE @COLUMN_NAME AS sysname
        DECLARE @TABLE_NAME AS sysname
        DECLARE @IDValue AS int

        SET @COLUMN_NAME = '<Your COLUMN_NAME here>'
        SET @TABLE_NAME = '<Your TABLE_NAME here>'
        SET @IDValue = 123456789

        DECLARE @sql AS varchar(max)

        ;WITH RELATED_COLUMNS AS (
            SELECT QUOTENAME(c.TABLE_SCHEMA) + '.' + QUOTENAME(c.TABLE_NAME) AS [OBJECT_NAME], c.COLUMN_NAME
            FROM PBANKDW.INFORMATION_SCHEMA.COLUMNS AS c WITH (NOLOCK)
            INNER JOIN PBANKDW.INFORMATION_SCHEMA.TABLES AS t WITH (NOLOCK)
                ON c.TABLE_CATALOG = t.TABLE_CATALOG
                AND c.TABLE_SCHEMA = t.TABLE_SCHEMA
                AND c.TABLE_NAME = t.TABLE_NAME
                AND t.TABLE_TYPE = 'BASE TABLE'
            INNER JOIN (
                SELECT rc.CONSTRAINT_CATALOG, rc.CONSTRAINT_SCHEMA, lkc.TABLE_NAME, lkc.COLUMN_NAME
                FROM PBANKDW.INFORMATION_SCHEMA.REFERENTIAL_CONSTRAINTS rc WITH (NOLOCK)
                INNER JOIN PBANKDW.INFORMATION_SCHEMA.KEY_COLUMN_USAGE lkc WITH (NOLOCK)
                    ON lkc.CONSTRAINT_CATALOG = rc.CONSTRAINT_CATALOG
                    AND lkc.CONSTRAINT_SCHEMA = rc.CONSTRAINT_SCHEMA
                    AND lkc.CONSTRAINT_NAME = rc.CONSTRAINT_NAME
                INNER JOIN PBANKDW.INFORMATION_SCHEMA.TABLE_CONSTRAINTS tc WITH (NOLOCK)
                    ON rc.CONSTRAINT_CATALOG = tc.CONSTRAINT_CATALOG
                    AND rc.CONSTRAINT_SCHEMA = tc.CONSTRAINT_SCHEMA
                    AND rc.UNIQUE_CONSTRAINT_NAME = tc.CONSTRAINT_NAME
                INNER JOIN PBANKDW.INFORMATION_SCHEMA.KEY_COLUMN_USAGE rkc WITH (NOLOCK)
                    ON rkc.CONSTRAINT_CATALOG = tc.CONSTRAINT_CATALOG
                    AND rkc.CONSTRAINT_SCHEMA = tc.CONSTRAINT_SCHEMA
                    AND rkc.CONSTRAINT_NAME = tc.CONSTRAINT_NAME
                WHERE rkc.COLUMN_NAME = @COLUMN_NAME
                AND rkc.TABLE_NAME = @TABLE_NAME
            ) AS j
                ON j.CONSTRAINT_CATALOG = c.TABLE_CATALOG
                AND j.CONSTRAINT_SCHEMA = c.TABLE_SCHEMA
                AND j.TABLE_NAME = c.TABLE_NAME
                AND j.COLUMN_NAME = c.COLUMN_NAME
        )
        SELECT @sql = COALESCE(@sql, '') + 'DELETE FROM ' + [OBJECT_NAME] + ' WHERE ' + [COLUMN_NAME] + ' = ' + CONVERT(varchar, @IDValue) + CHAR(13) + CHAR(10)
        FROM RELATED_COLUMNS

        PRINT @sql

    Thanks to Charles, this is the latest (not yet working) release of the procedure; I have added a parameter with the OWNER because the referential integrities propagate through about 5 other Oracle users (!!!):

        CREATE OR REPLACE PROCEDURE delete_cascade (parent_table VARCHAR2, parent_table_owner VARCHAR2)
        IS
            cons_name        VARCHAR2 (30);
            tab_name         VARCHAR2 (30);
            tab_name_owner   VARCHAR2 (30);
            parent_cons      VARCHAR2 (30);
            parent_col       VARCHAR2 (30);
            delete1          VARCHAR (500);
            delete2          VARCHAR (500);
            delete_command   VARCHAR (4000);

            CURSOR cons_cursor IS
                SELECT constraint_name, r_constraint_name, table_name, constraint_type
                FROM all_constraints
                WHERE constraint_type = 'R'
                  AND r_constraint_name IN (SELECT constraint_name
                                            FROM all_constraints
                                            WHERE constraint_type IN ('P', 'U')
                                              AND table_name = parent_table
                                              AND owner = parent_table_owner)
                  AND delete_rule = 'NO ACTION';

            CURSOR tabs_cursor IS
                SELECT DISTINCT table_name
                FROM all_cons_columns
                WHERE constraint_name = cons_name;

            CURSOR child_cols_cursor IS
                SELECT column_name, position
                FROM all_cons_columns
                WHERE constraint_name = cons_name
                  AND table_name = tab_name;
        BEGIN
            FOR cons IN cons_cursor LOOP
                cons_name := cons.constraint_name;
                parent_cons := cons.r_constraint_name;

                SELECT DISTINCT table_name, owner
                INTO tab_name, tab_name_owner
                FROM all_cons_columns
                WHERE constraint_name = cons_name;

                delete_cascade (tab_name, tab_name_owner);

                delete_command := '';
                delete1 := '';
                delete2 := '';

                FOR col IN child_cols_cursor LOOP
                    SELECT DISTINCT column_name
                    INTO parent_col
                    FROM all_cons_columns
                    WHERE constraint_name = parent_cons
                      AND position = col.position;

                    IF delete1 IS NULL THEN
                        delete1 := col.column_name;
                    ELSE
                        delete1 := delete1 || ', ' || col.column_name;
                    END IF;

                    IF delete2 IS NULL THEN
                        delete2 := parent_col;
                    ELSE
                        delete2 := delete2 || ', ' || parent_col;
                    END IF;
                END LOOP;

                delete_command := 'delete from ' || tab_name_owner || '.' || tab_name
                               || ' where (' || delete1 || ') in (select ' || delete2
                               || ' from ' || parent_table_owner || '.' || parent_table || ');';

                INSERT INTO ris VALUES (SEQUENCE_COMANDI.NEXTVAL, delete_command);
                COMMIT;
            END LOOP;
        END;
        /

    In the cursor cons_cursor I have added the condition delete_rule = 'NO ACTION' in order to avoid deletions in the case of referential integrities with DELETE_RULE = 'CASCADE' or DELETE_RULE = 'SET NULL'.

    Now I have tried to turn the stored procedure into a stored function, but the generated DELETE statements are not correct:

        CREATE OR REPLACE FUNCTION deletecascade (parent_table VARCHAR2, parent_table_owner VARCHAR2)
            RETURN VARCHAR2
        IS
            cons_name                VARCHAR2 (30);
            tab_name                 VARCHAR2 (30);
            tab_name_owner           VARCHAR2 (30);
            parent_cons              VARCHAR2 (30);
            parent_col               VARCHAR2 (30);
            delete1                  VARCHAR (500);
            delete2                  VARCHAR (500);
            delete_command           VARCHAR (4000);
            AT_LEAST_ONE_ITERATION   NUMBER DEFAULT 0;

            CURSOR cons_cursor IS
                SELECT constraint_name, r_constraint_name, table_name, constraint_type
                FROM all_constraints
                WHERE constraint_type = 'R'
                  AND r_constraint_name IN (SELECT constraint_name
                                            FROM all_constraints
                                            WHERE constraint_type IN ('P', 'U')
                                              AND table_name = parent_table
                                              AND owner = parent_table_owner)
                  AND delete_rule = 'NO ACTION';

            CURSOR tabs_cursor IS
                SELECT DISTINCT table_name
                FROM all_cons_columns
                WHERE constraint_name = cons_name;

            CURSOR child_cols_cursor IS
                SELECT column_name, position
                FROM all_cons_columns
                WHERE constraint_name = cons_name
                  AND table_name = tab_name;
        BEGIN
            FOR cons IN cons_cursor LOOP
                AT_LEAST_ONE_ITERATION := 1;
                cons_name := cons.constraint_name;
                parent_cons := cons.r_constraint_name;

                SELECT DISTINCT table_name, owner
                INTO tab_name, tab_name_owner
                FROM all_cons_columns
                WHERE constraint_name = cons_name;

                delete1 := '';
                delete2 := '';

                FOR col IN child_cols_cursor LOOP
                    SELECT DISTINCT column_name
                    INTO parent_col
                    FROM all_cons_columns
                    WHERE constraint_name = parent_cons
                      AND position = col.position;

                    IF delete1 IS NULL THEN
                        delete1 := col.column_name;
                    ELSE
                        delete1 := delete1 || ', ' || col.column_name;
                    END IF;

                    IF delete2 IS NULL THEN
                        delete2 := parent_col;
                    ELSE
                        delete2 := delete2 || ', ' || parent_col;
                    END IF;
                END LOOP;

                delete_command := 'delete from ' || tab_name_owner || '.' || tab_name
                               || ' where (' || delete1 || ') in (select ' || delete2
                               || ' from ' || parent_table_owner || '.' || parent_table || ');'
                               || deletecascade (tab_name, tab_name_owner);

                INSERT INTO ris VALUES (SEQUENCE_COMANDI.NEXTVAL, delete_command);
                COMMIT;
            END LOOP;

            IF AT_LEAST_ONE_ITERATION = 1 THEN
                RETURN ' where COD_CHICKEN = V_CHICKEN AND COD_NATION = V_NATION;';
            ELSE
                RETURN NULL;
            END IF;
        END;
        /

    Please assume that V_CHICKEN and V_NATION are the criteria used to select the CHICKEN to delete from the root table: the condition on the root table is "where COD_CHICKEN = V_CHICKEN AND COD_NATION = V_NATION".

    Read the article

  • How to attach turrets to tiles in a tile-based game

    - by Joseph St. Pierre
    I am a Flash developer, and I am building a tower defense game. The world is built from tiles, and I have gotten that accomplished easily. I have also gotten level changes and enemy spawning down as well. However, I want the player to be able to spawn turrets, and have those turrets attach to specific tiles based upon where the player placed them. Here is my code:

        stop();
        colOffset = 50;
        rowOffset = 50;
        guns = [];
        placed = true;
        dead = 0;
        spawned = 0;
        level = 1;
        interval = 350 / level;
        amount = level * 20;
        counter = 0;
        numCol = 14;
        numRow = 10;
        tiles = [];
        k = 0;
        create = false;
        tileName = new Array("road", "grass", "end", "start");
        board = new Array(
            new Array(1,1,1,1,3,1,1,1,1,1,2,1,1,1),
            new Array(1,1,1,0,0,1,1,1,1,1,0,1,1,1),
            new Array(1,1,1,0,1,1,1,1,1,1,0,0,1,1),
            new Array(1,1,1,0,0,0,1,1,1,1,1,0,1,1),
            new Array(1,1,1,0,1,0,0,0,1,1,1,0,0,1),
            new Array(1,1,1,0,1,1,1,0,0,1,1,1,0,1),
            new Array(1,1,0,0,1,1,1,1,0,1,1,0,0,1),
            new Array(1,1,0,1,1,1,1,1,0,1,0,0,1,1),
            new Array(1,1,0,0,0,0,0,0,0,1,0,1,1,1),
            new Array(1,1,1,1,1,1,1,1,0,0,0,1,1,1)
        );

        buildBoard();

        function buildBoard(){
            for (col = 0; col < numCol; col++){
                for (row = 0; row < numRow; row++){
                    _root.attachMovie("tile", "tile_" + col + "_" + row, _root.getNextHighestDepth());
                    theTile = eval("tile_" + col + "_" + row);
                    theTile._x = (col * 50);
                    theTile._y = (row * 50);
                    theTile.row = row;
                    theTile.col = col;
                    tileType = board[row][col];
                    theTile.gotoAndStop(tileName[tileType]);
                    tiles.push(theTile);
                }
            }
        }

        init();

        function init(){
            onEnterFrame = function(){
                counter += 1;
                if (spawned < amount && counter > 50){
                    min = _root.attachMovie("minion", "minion", _root.getNextHighestDepth());
                    min._x = tile_4_0._x + 25;
                    min._y = tile_4_0._y + 25;
                    min.health = 100;
                    choose = Math.round(Math.random());
                    if (choose == 0){
                        min.waypointX = [tile_4_1._x + 25, tile_3_1._x + 25, tile_3_2._x + 25, tile_3_6._x + 25,
                            tile_2_6._x + 25, tile_2_8._x + 25, tile_8_8._x + 25, tile_8_9._x + 25,
                            tile_10_9._x + 25, tile_10_7._x + 25, tile_11_7._x + 25, tile_11_6._x + 25,
                            tile_12_6._x + 25, tile_12_4._x + 25, tile_11_4._x + 25, tile_11_2._x + 25,
                            tile_10_2._x + 25, tile_10_0._x + 25];
                        min.waypointY = [tile_4_1._y + 25, tile_3_1._y + 25, tile_3_2._y + 25, tile_3_6._y + 25,
                            tile_2_6._y + 25, tile_2_8._y + 25, tile_8_8._y + 25, tile_8_9._y + 25,
                            tile_10_9._y + 25, tile_10_7._y + 25, tile_11_7._y + 25, tile_11_6._y + 25,
                            tile_12_6._y + 25, tile_12_4._y + 25, tile_11_4._y + 25, tile_11_2._y + 25,
                            tile_10_2._y + 25, tile_10_0._y + 25];
                    }
                    else if (choose == 1){
                        min.waypointX = [tile_4_1._x + 25, tile_3_1._x + 25, tile_3_2._x + 25, tile_3_3._x + 25,
                            tile_5_3._x + 25, tile_5_4._x + 25, tile_7_4._x + 25, tile_7_5._x + 25,
                            tile_8_5._x + 25, tile_8_8._x + 25, tile_8_9._x + 25, tile_10_9._x + 25,
                            tile_10_7._x + 25, tile_11_7._x + 25, tile_11_6._x + 25, tile_12_6._x + 25,
                            tile_12_4._x + 25, tile_11_4._x + 25, tile_11_2._x + 25, tile_10_2._x + 25,
                            tile_10_0._x + 25];
                        min.waypointY = [tile_4_1._y + 25, tile_3_1._y + 25, tile_3_2._y + 25, tile_3_3._y + 25,
                            tile_5_3._y + 25, tile_5_4._y + 25, tile_7_4._y + 25, tile_7_5._y + 25,
                            tile_8_5._y + 25, tile_8_8._y + 25, tile_8_9._y + 25, tile_10_9._y + 25,
                            tile_10_7._y + 25, tile_11_7._y + 25, tile_11_6._y + 25, tile_12_6._y + 25,
                            tile_12_4._y + 25, tile_11_4._y + 25, tile_11_2._y + 25, tile_10_2._y + 25,
                            tile_10_0._y + 25];
                    }
                    min.i = 0;
                    counter = 0;
                    spawned += 1;
                    min.onEnterFrame = function(){
                        dx = this.waypointX[this.i] - this._x;
                        dy = this.waypointY[this.i] - this._y;
                        radians = Math.atan2(dy, dx);
                        degrees = radians * 180 / Math.PI;
                        xspeed = Math.cos(radians);
                        yspeed = Math.sin(radians);
                        this._x += xspeed;
                        this._y += yspeed;
                        if (this._x == this.waypointX[this.i] && this._y == this.waypointY[this.i]){
                            this.i++;
                        }
                        if (this._x == tile_10_0._x + 25 && this._y == tile_10_0._y + 25){
                            this.removeMovieClip();
                            dead += 1;
                        }
                    }
                }
                if (dead >= amount){
                    dead = 0;
                    level += 1;
                    amount = level * 20;
                    spawned = 0;
                }
            }

            btnM.onRelease = function(){
                create = true;
            }
        }

        game.onEnterFrame = function(){
        }

    It is possible for me to complete this task, but only once: I can make the turret, drag it over to a tile, and have it attach itself to the tile, no problem. The issue is that I cannot do this multiple times. Please help.
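    One pattern that is often used for this (a hedged sketch, reusing the 50px tile size and tile_COL_ROW naming from buildBoard() above; placeTurret and the tile.turret flag are hypothetical additions): when a turret is dropped, compute the tile under it from its coordinates, snap it to that tile's center, and record the occupancy on the tile so each placement is independent of the previous one:

        // a sketch: snap a dropped turret to the tile under it
        function placeTurret(turret:MovieClip):Void {
            var col:Number = Math.floor(turret._x / 50);
            var row:Number = Math.floor(turret._y / 50);
            var tile:MovieClip = _root["tile_" + col + "_" + row];
            if (tile != undefined && tile.turret == undefined) {
                turret._x = tile._x + 25;   // center of the 50px tile
                turret._y = tile._y + 25;
                tile.turret = turret;       // remember occupancy: one turret per tile
                guns.push(turret);
                placed = true;              // allow the next placement
            }
        }

    Note also that attachMovie with a fixed instance name (as in attachMovie("minion", "minion", ...)) replaces the previous instance; giving each spawned clip a unique name (e.g. "turret" + guns.length) is what lets you place more than one.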

    Read the article

  • Get cells of a JTable

    - by tuxou
    Hi, how do I display a row of a JTable in a form of JTextFields when the user clicks on the row? (I need this to edit the database from the JTable.) My table model:

        static class TableDataModel extends AbstractTableModel {
            private List nomColonnes;
            private List tableau;

            public TableDataModel(List nomColonnes, List tableau){
                this.nomColonnes = nomColonnes;
                majDonnees(tableau);
            }

            public void majDonnees(List nouvellesDonnees){
                this.tableau = nouvellesDonnees;
                fireTableDataChanged();
            }

            public int getRowCount(){
                return tableau.size();
            }

            public int getColumnCount(){
                return nomColonnes.size();
            }

            public Object getValueAt(int row, int col){
                return ((ArrayList)(tableau.get(row))).get(col);
            }

            public String getColumnName(int col){
                return nomColonnes.get(col).toString();
            }

            public Class getColumnClass(int c) {
                return getValueAt(0, c).getClass();
            }

            public boolean isCellEditable(int row, int col){
                return true;
            }

            public void setValueAt(Object value, int row, int col) {
                ((List)tableau.get(row)).set(col, value);
                fireTableCellUpdated(row, col);
                // I suppose I should update the database here
            }
        }
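    One common way to do this (a sketch; the JTable named table and the JTextField[] named fields are assumptions, one field per column, alongside the model above) is to listen for row-selection changes and copy the selected row's values into the fields:

        // import javax.swing.event.ListSelectionEvent;
        // import javax.swing.event.ListSelectionListener;
        table.getSelectionModel().addListSelectionListener(new ListSelectionListener() {
            public void valueChanged(ListSelectionEvent e) {
                if (e.getValueIsAdjusting()) {
                    return; // wait until the selection is final
                }
                int viewRow = table.getSelectedRow();
                if (viewRow < 0) {
                    return; // selection was cleared
                }
                int modelRow = table.convertRowIndexToModel(viewRow);
                for (int col = 0; col < table.getModel().getColumnCount(); col++) {
                    Object value = table.getModel().getValueAt(modelRow, col);
                    fields[col].setText(value == null ? "" : value.toString());
                }
            }
        });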

    Read the article

  • Binding a null variable in a PreparedStatement

    - by Paul Tomblin
    I swear this used to work, but it's not working in this case. I'm trying to match col1, col2 and col3, even if one or more of them is null. I know that in some languages I've had to resort to circumlocutions like ((? is null AND col1 is null) OR col1 = ?). Is that required here?

        PreparedStatement selStmt = getConn().prepareStatement(
            "SELECT * " +
            "FROM tbl1 " +
            "WHERE col1 = ? AND col2 = ? and col3 = ?");
        try
        {
            int col = 1;
            setInt(selStmt, col++, col1);
            setInt(selStmt, col++, col2);
            setInt(selStmt, col++, col3);

            ResultSet rs = selStmt.executeQuery();
            try
            {
                while (rs.next())
                {
                    // process row
                }
            }
            finally
            {
                rs.close();
            }
        }
        finally
        {
            selStmt.close();
        }

        // Does the equivalent of stmt.setInt(col, i) but preserves nullness.
        protected static void setInt(PreparedStatement stmt, int col, Integer i) throws SQLException
        {
            if (i == null)
                stmt.setNull(col, java.sql.Types.INTEGER);
            else
                stmt.setInt(col, i);
        }
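    For the record, in standard SQL `col = NULL` is never true, even when the null is bound with setNull, so the circumlocution (or a dialect-specific operator such as IS NOT DISTINCT FROM, where available) is generally required. A sketch of the portable form, binding each value twice and reusing the null-preserving helper above (tbl1 and col1..col3 as in the question):

        PreparedStatement stmt = getConn().prepareStatement(
            "SELECT * FROM tbl1 " +
            "WHERE ((? IS NULL AND col1 IS NULL) OR col1 = ?) " +
            "  AND ((? IS NULL AND col2 IS NULL) OR col2 = ?) " +
            "  AND ((? IS NULL AND col3 IS NULL) OR col3 = ?)");
        int ndx = 1;
        for (Integer value : new Integer[] { col1, col1, col2, col2, col3, col3 }) {
            setInt(stmt, ndx++, value);
        }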

    Read the article

  • No transparency in Bitmap loaded from a MemoryStream

    - by Jogi Joseph George
    Please see the C# code. When I write a Bitmap to a file and read it back from the file, I get the transparency correctly:

        using (Bitmap bmp = new Bitmap(2, 2))
        {
            Color col = Color.FromArgb(1, 2, 3, 4);
            bmp.SetPixel(0, 0, col);
            bmp.Save("J.bmp");
        }

        using (Bitmap bmp = new Bitmap("J.bmp"))
        {
            Color col = bmp.GetPixel(0, 0);
            // Here col.A is 1. This is right.
        }

    But if I write the Bitmap to a MemoryStream and read it back from that MemoryStream, the transparency has been removed; all alpha values become 255:

        MemoryStream ms = new MemoryStream();

        using (Bitmap bmp = new Bitmap(2, 2))
        {
            Color col = Color.FromArgb(1, 2, 3, 4);
            bmp.SetPixel(0, 0, col);
            bmp.Save(ms, ImageFormat.Bmp);
        }

        using (Bitmap bmp = new Bitmap(ms))
        {
            Color col = bmp.GetPixel(0, 0);
            // But here col.A is 255. Why? I am expecting 1 here.
        }

    I wish to save the Bitmap to a MemoryStream and read it back with transparency. Could you please help me?
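    A plausible explanation, worth verifying: Image.Save(string) with no explicit format does not actually use the BMP encoder (for an in-memory bitmap it defaults to PNG, which keeps alpha), whereas the stream overload above explicitly asks for ImageFormat.Bmp, and the GDI+ BMP encoder discards the alpha channel. A sketch of the stream round-trip using PNG instead:

        using System.Drawing;
        using System.Drawing.Imaging;
        using System.IO;

        MemoryStream ms = new MemoryStream();

        using (Bitmap bmp = new Bitmap(2, 2))
        {
            bmp.SetPixel(0, 0, Color.FromArgb(1, 2, 3, 4));
            bmp.Save(ms, ImageFormat.Png); // PNG preserves the alpha channel
        }

        ms.Position = 0; // rewind before reading the image back

        using (Bitmap bmp = new Bitmap(ms))
        {
            Color col = bmp.GetPixel(0, 0);
            // col.A is 1 here, as expected
        }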

    Read the article

  • SSRS 2008 Interactive Sorting in Sub-Report not working as expected

    - by Ray J
    I have a (parent) report that contains a list. The details group of this list contains one sub-report, so if the list has 10 records (rows), the sub-report is executed 10 times. The problem seems to be with interactive sorting in the sub-report, which has 4 columns with interactive sorting enabled. When I run the parent report and try to sort columns, SSRS "remembers" the previous sort column and sorts by multiple columns at the same time. For example, if I sort by Col A and then click to sort by Col B, SSRS will preserve the sorting of Col A (and its direction) and then apply the sorting to Col B. However, I simply want to sort by Col B and do not want Col A to be part of the sort. When I run the sub-report directly, everything works as expected. Any ideas why this is happening?

    Read the article

  • Go - Using a container/heap to implement a priority queue

    - by Seth Hoenig
    In the big picture, I'm trying to implement Dijkstra's algorithm using a priority queue. According to members of golang-nuts, the idiomatic way to do this in Go is to use the heap interface with a custom underlying data structure. So I have created Node.go and PQueue.go like so:

        // Node.go
        package pqueue

        type Node struct {
            row    int
            col    int
            myVal  int
            sumVal int
        }

        func (n *Node) Init(r, c, mv, sv int) {
            n.row = r
            n.col = c
            n.myVal = mv
            n.sumVal = sv
        }

        func (n *Node) Equals(o *Node) bool {
            return n.row == o.row && n.col == o.col
        }

    And PQueue.go:

        // PQueue.go
        package pqueue

        import "container/vector"
        import "container/heap"

        type PQueue struct {
            data vector.Vector
            size int
        }

        func (pq *PQueue) Init() { heap.Init(pq) }

        func (pq *PQueue) IsEmpty() bool { return pq.size == 0 }

        func (pq *PQueue) Push(i interface{}) {
            heap.Push(pq, i)
            pq.size++
        }

        func (pq *PQueue) Pop() interface{} {
            pq.size--
            return heap.Pop(pq)
        }

        func (pq *PQueue) Len() int { return pq.size }

        func (pq *PQueue) Less(i, j int) bool {
            I := pq.data.At(i).(Node)
            J := pq.data.At(j).(Node)
            return (I.sumVal + I.myVal) < (J.sumVal + J.myVal)
        }

        func (pq *PQueue) Swap(i, j int) {
            temp := pq.data.At(i).(Node)
            pq.data.Set(i, pq.data.At(j).(Node))
            pq.data.Set(j, temp)
        }

    And main.go (the action is in SolveMatrix):

        // Euler 81
        package main

        import "fmt"
        import "io/ioutil"
        import "strings"
        import "strconv"
        import "./pqueue"

        const MATSIZE = 5
        const MATNAME = "matrix_small.txt"

        func main() {
            var matrix [MATSIZE][MATSIZE]int
            contents, err := ioutil.ReadFile(MATNAME)
            if err != nil {
                panic("FILE IO ERROR!")
            }
            inFileStr := string(contents)
            byrows := strings.Split(inFileStr, "\n", -1)
            for row := 0; row < MATSIZE; row++ {
                byrows[row] = (byrows[row])[0 : len(byrows[row])-1]
                bycols := strings.Split(byrows[row], ",", -1)
                for col := 0; col < MATSIZE; col++ {
                    matrix[row][col], _ = strconv.Atoi(bycols[col])
                }
            }
            PrintMatrix(matrix)
            sum, len := SolveMatrix(matrix)
            fmt.Printf("len: %d, sum: %d\n", len, sum)
        }

        func PrintMatrix(mat [MATSIZE][MATSIZE]int) {
            for r := 0; r < MATSIZE; r++ {
                for c := 0; c < MATSIZE; c++ {
                    fmt.Printf("%d ", mat[r][c])
                }
                fmt.Print("\n")
            }
        }

        func SolveMatrix(mat [MATSIZE][MATSIZE]int) (int, int) {
            var PQ pqueue.PQueue
            var firstNode pqueue.Node
            var endNode pqueue.Node
            msm1 := MATSIZE - 1

            firstNode.Init(0, 0, mat[0][0], 0)
            endNode.Init(msm1, msm1, mat[msm1][msm1], 0)

            if PQ.IsEmpty() { // make compiler stfu about unused variable
                fmt.Print("empty")
            }

            PQ.Push(firstNode) // problem

            return 0, 0
        }

    The problem is, upon compiling I get the error message:

        [~/Code/Euler/81] $ make
        6g -o pqueue.6 Node.go PQueue.go
        6g main.go
        main.go:58: implicit assignment of unexported field 'row' of pqueue.Node in function argument
        make: *** [all] Error 1

    Commenting out the line PQ.Push(firstNode) does satisfy the compiler, but I don't understand why I'm getting the error message in the first place. Push doesn't modify the argument in any way.
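    The message is about copying, not modifying (hedged, since this code predates Go 1): passing firstNode by value copies its unexported fields from outside the pqueue package, which that era's compiler rejected; pushing a pointer avoids the copy. That also happens to match how container/heap queues are usually written today. A sketch against a current toolchain, where container/vector no longer exists:

        package pqueue

        import "container/heap"

        type Node struct {
            row, col, myVal, sumVal int
        }

        func NewNode(r, c, mv, sv int) *Node {
            return &Node{row: r, col: c, myVal: mv, sumVal: sv}
        }

        // PQueue implements heap.Interface over a slice of *Node.
        type PQueue []*Node

        func (pq PQueue) Len() int { return len(pq) }
        func (pq PQueue) Less(i, j int) bool {
            return pq[i].sumVal+pq[i].myVal < pq[j].sumVal+pq[j].myVal
        }
        func (pq PQueue) Swap(i, j int) { pq[i], pq[j] = pq[j], pq[i] }

        func (pq *PQueue) Push(x interface{}) { *pq = append(*pq, x.(*Node)) }
        func (pq *PQueue) Pop() interface{} {
            old := *pq
            n := len(old)
            item := old[n-1]
            *pq = old[:n-1]
            return item
        }

        // Callers use heap.Push(&pq, NewNode(...)) and heap.Pop(&pq).(*Node);
        // no unexported field is ever copied across the package boundary.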

    Read the article

  • How can I get a COUNT(col) ... GROUP BY to use an index?

    - by thecoop
    I've got a table (col1, col2, ...) with an index on (col1, col2, ...). The table has millions of rows in it, and I want to run this query:

        SELECT col1, COUNT(col2)
        FROM my_table
        WHERE col1 NOT IN (<couple of exclusions>)
        GROUP BY col1

    Unfortunately, this results in a full table scan, which takes upwards of a minute. Is there any way of getting Oracle to use the index on the columns to return the results much faster?
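    One detail that often matters here (a hedged suggestion; my_table, my_index and the :excl bind variables stand in for the real names): Oracle can only answer the query from the index alone if it knows every relevant row appears in the index, and rows where all indexed columns are NULL are absent from a normal B-tree index. Declaring col1 NOT NULL, or adding a predicate that excludes NULLs, lets the optimizer consider an index fast full scan:

        -- Either make the intent explicit to the optimizer ...
        SELECT /*+ INDEX_FFS(t my_index) */ col1, COUNT(col2)
        FROM my_table t
        WHERE col1 NOT IN (:excl1, :excl2)
          AND col1 IS NOT NULL
        GROUP BY col1;

        -- ... or guarantee index completeness once and for all:
        ALTER TABLE my_table MODIFY (col1 NOT NULL);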

    Read the article

  • Oracle circular join sometimes gives duplicates, but sometimes does not

    - by Kaushik
    By mistake I wrote a query like this:

        select *
        from a, b, c
        where a.col = b.col
          and b.col2 = c.col2
          and c.col3 = a.col4

    So there is a circular join here. Now, the thing is, sometimes this query returns duplicate results and sometimes it returns unique (correct) results. I am trying to understand why it does not always give duplicates. Also, if circular joins are not allowed, how come Oracle does not throw an error?

    EDIT: This is the actual query. After reading it carefully, I am not sure anymore whether this is a circular join or not. It does not seem so... but then why do I get duplicates only sometimes?

        select *
        from a, b, c, d
        where a.col = b.col
          and b.col = c.col
          and c.col2 = d.col2
          and d.col2 = a.col2
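    A small, hedged illustration of the usual cause (table and column names invented for the demo): duplicates come from non-unique join keys, not from the "circle" itself, so whether you see them depends entirely on the data. Any row that matches more than one row on the other side is repeated once per match:

        create table a (col number, col2 number, col4 number);
        create table b (col number, col2 number);
        insert into a values (1, 10, 20);
        insert into b values (1, 10);
        insert into b values (1, 10);  -- two b-rows share col = 1

        -- Each a-row is returned once per matching b-row: 2 rows here.
        select * from a, b where a.col = b.col;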

    Read the article

  • Resolve a property at runtime (C#)

    - by user1031381
    I have a class with some code like this:

        public IEnumerable<TypeOne> TypeOne
        {
            get
            {
                if (db != null)
                {
                    var col = db.Select<TypeOne>();
                    if (col.Count > 0)
                        return col;
                }
                return db2.TypeOne;
            }
        }

        public IEnumerable<TypeTwo> TypeTwo
        {
            get
            {
                if (db != null)
                {
                    var col = db.Select<TypeTwo>();
                    if (col.Count > 0)
                        return col;
                }
                return db2.TypeTwo;
            }
        }

    As you can see, there is a lot of duplicated code, and the property name and the element type of the enumerable always match. I want to call some property of an object like obj.MyProp, where MyProp is resolved at runtime by some generic or non-generic method. Is that possible?
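    One way to collapse the duplication (a sketch; db.Select<T>() is taken from the code above, while the reflection lookup assumes db2's properties are named after their element types, as TypeOne and TypeTwo are here):

        public IEnumerable<T> Resolve<T>() where T : class
        {
            if (db != null)
            {
                var col = db.Select<T>();
                if (col.Count > 0)
                    return col;
            }
            // Fall back to db2.TypeOne / db2.TypeTwo, located by property name.
            var property = db2.GetType().GetProperty(typeof(T).Name);
            return (IEnumerable<T>)property.GetValue(db2, null);
        }

        // The original properties become one-liners:
        public IEnumerable<TypeOne> TypeOne { get { return Resolve<TypeOne>(); } }
        public IEnumerable<TypeTwo> TypeTwo { get { return Resolve<TypeTwo>(); } }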

    Read the article

  • You have an error in your SQL syntax; check the manual that corresponds to your MySQL

    - by LuisEValencia
    I am trying to run a MySQL query to find all occurrences of a text. I have a syntax error but don't know where it is or how to fix it. I am using SQLyog to execute this script:

        DECLARE @url VARCHAR(255)
        SET @url = '1720'

        SELECT 'select * from ' + RTRIM(tbl.name) + ' where ' + RTRIM(col.name) + ' like %' + RTRIM(@url) + '%'
        FROM sysobjects tbl
        INNER JOIN syscolumns col
            ON tbl.id = col.id
            AND col.xtype IN (167, 175, 231, 239) -- (n)char and (n)varchar, there may be others to include
            AND col.length > 30 -- arbitrary min length into which you might store a URL
        WHERE tbl.type = 'U' -- user defined table

    The result is:

        1 queries executed, 0 success, 1 errors, 0 warnings

        Query: declare @url varchar(255) set @url = '1720' select 'select * from ' + rtrim(tbl.name) + ' where ' + rtrim(col.name) + ' like %' ...

        Error Code: 1064
        You have an error in your SQL syntax; check the manual that corresponds to your MySQL server version for the right syntax to use near 'declare @url varchar(255)
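    The script itself looks like the likely problem rather than any single token: DECLARE @var, sysobjects and syscolumns are SQL Server (T-SQL) constructs, which MySQL does not understand. A hedged MySQL translation of the same idea, built on information_schema (the '1720' search text and 30-character minimum carried over from the script):

        SELECT CONCAT('select * from ', c.table_name,
                      ' where ', c.column_name, ' like ''%1720%''') AS probe
        FROM information_schema.columns AS c
        JOIN information_schema.tables AS t
          ON  t.table_schema = c.table_schema
          AND t.table_name   = c.table_name
          AND t.table_type   = 'BASE TABLE'
        WHERE c.table_schema = DATABASE()
          AND c.data_type IN ('char', 'varchar', 'text')
          AND c.character_maximum_length > 30;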

    Read the article

  • Parallelism in .NET – Part 9, Configuration in PLINQ and TPL

    - by Reed
    Parallel LINQ and the Task Parallel Library contain many options for configuration. Although the default configuration options are often ideal, there are times when customizing the behavior is desirable. Both frameworks provide full configuration support.

    When working with Data Parallelism, there is one primary configuration option we often need to control: the number of threads we want the system to use when parallelizing our routine. By default, PLINQ and the TPL both use the ThreadPool to schedule tasks. Given the major improvements in the ThreadPool in CLR 4, this default behavior is often ideal. However, there are times that the default behavior is not appropriate. For example, if you are working on multiple threads simultaneously, and want to schedule parallel operations from within both threads, you might want to consider restricting each parallel operation to using a subset of the processing cores of the system. Not doing this might over-parallelize your routine, which leads to inefficiencies from having too many context switches.

    In the Task Parallel Library, configuration is handled via the ParallelOptions class. All of the methods of the Parallel class have an overload which accepts a ParallelOptions argument. We configure the Parallel class by setting the ParallelOptions.MaxDegreeOfParallelism property. For example, let's revisit one of the simple data parallel examples from Part 2:

        Parallel.For(0, pixelData.GetUpperBound(0), row =>
        {
            for (int col = 0; col < pixelData.GetUpperBound(1); ++col)
            {
                pixelData[row, col] = AdjustContrast(pixelData[row, col], minPixel, maxPixel);
            }
        });

    Here, we're looping through an image, and calling a method on each pixel in the image. If this was being done on a separate thread, and we knew another thread within our system was going to be doing a similar operation, we likely would want to restrict this to using half of the cores on the system. This could be accomplished easily by doing:

        var options = new ParallelOptions();
        options.MaxDegreeOfParallelism = Math.Max(Environment.ProcessorCount / 2, 1);

        Parallel.For(0, pixelData.GetUpperBound(0), options, row =>
        {
            for (int col = 0; col < pixelData.GetUpperBound(1); ++col)
            {
                pixelData[row, col] = AdjustContrast(pixelData[row, col], minPixel, maxPixel);
            }
        });

    Now, we're restricting this routine to using no more than half the cores in our system. Note that I included a check to prevent a single core system from supplying zero; without this check, we'd potentially cause an exception. I also did not hard code a specific value for the MaxDegreeOfParallelism property. One of our goals when parallelizing a routine is allowing it to scale on better hardware. Specifying a hard-coded value would contradict that goal.

    Parallel LINQ also supports configuration, and in fact has quite a few more options for configuring the system. The main configuration option we most often need is the same as our TPL option: we need to supply the maximum number of processing threads. In PLINQ, this is done via a new extension method on ParallelQuery<T>: ParallelEnumerable.WithDegreeOfParallelism. Let's revisit our declarative data parallelism sample from Part 6:

        double min = collection.AsParallel().Min(item => item.PerformComputation());

    Here, we're performing a computation on each element in the collection, and saving the minimum value of this operation. If we wanted to restrict this to a limited number of threads, we would add our new extension method:

        int maxThreads = Math.Max(Environment.ProcessorCount / 2, 1);
        double min = collection
                        .AsParallel()
                        .WithDegreeOfParallelism(maxThreads)
                        .Min(item => item.PerformComputation());

    This automatically restricts the PLINQ query to half of the threads on the system.

    PLINQ provides some additional configuration options. By default, PLINQ will occasionally revert to processing a query sequentially. This occurs because many queries, if parallelized, typically actually cause an overall slowdown compared to a serial processing equivalent. By analyzing the "shape" of the query, PLINQ often decides to run a query serially instead of in parallel. This can occur for (taken from MSDN):

      - Queries that contain a Select, indexed Where, indexed SelectMany, or ElementAt clause after an ordering or filtering operator that has removed or rearranged original indices.
      - Queries that contain a Take, TakeWhile, Skip, SkipWhile operator and where indices in the source sequence are not in the original order.
      - Queries that contain Zip or SequenceEquals, unless one of the data sources has an originally ordered index and the other data source is indexable (i.e. an array or IList(T)).
      - Queries that contain Concat, unless it is applied to indexable data sources.
      - Queries that contain Reverse, unless applied to an indexable data source.

    If the specific query follows these rules, PLINQ will run the query on a single thread. However, none of these rules look at the specific work being done in the delegates, only at the "shape" of the query. There are cases where running in parallel may still be beneficial, even if the shape is one where it typically parallelizes poorly. In these cases, you can override the default behavior by using the WithExecutionMode extension method. This would be done like so:

        var reversed = collection
                        .AsParallel()
                        .WithExecutionMode(ParallelExecutionMode.ForceParallelism)
                        .Select(i => i.PerformComputation())
                        .Reverse();

    Here, the default behavior would be to not parallelize the query unless collection implemented IList<T>. We can force this to run in parallel by adding the WithExecutionMode extension method in the method chain.

    Finally, PLINQ has the ability to configure how results are returned. When a query is filtering or selecting an input collection, the results will need to be streamed back into a single IEnumerable<T> result. For example, the method above returns a new, reversed collection. In this case, the processing of the collection will be done in parallel, but the results need to be streamed back to the caller serially, so they can be enumerated on a single thread.

    This streaming introduces overhead. IEnumerable<T> isn't designed with thread safety in mind, so the system needs to handle merging the parallel processes back into a single stream, which introduces synchronization issues. There are two extremes of how this could be accomplished, but both extremes have disadvantages.

    The system could watch each thread, and whenever a thread produces a result, take that result and send it back to the caller. This would mean that the calling thread would have access to the data as soon as data is available, which is the benefit of this approach. However, it also means that every item introduces synchronization overhead, since each item needs to be merged individually.

    On the other extreme, the system could wait until all of the results from all of the threads were ready, then push all of the results back to the calling thread in one shot. The advantage here is that the least amount of synchronization is added to the system, which means the query will, on the whole, run the fastest. However, the calling thread will have to wait for all elements to be processed, so this could introduce a long delay between when a parallel query begins and when results are returned.

    The default behavior in PLINQ is actually between these two extremes. By default, PLINQ maintains an internal buffer, and chooses an optimal buffer size to maintain. Query results are accumulated into the buffer, then returned in the IEnumerable<T> result in chunks. This provides reasonably fast access to the results, as well as good overall throughput, in most scenarios.

    However, if we know the nature of our algorithm, we may decide we would prefer one of the other extremes. This can be done by using the WithMergeOptions extension method. For example, if we know that our PerformComputation() routine is very slow, but also variable in runtime, we may want to retrieve results as they are available, with no buffering. This can be done by changing our above routine to:

        var reversed = collection
                        .AsParallel()
                        .WithExecutionMode(ParallelExecutionMode.ForceParallelism)
                        .WithMergeOptions(ParallelMergeOptions.NotBuffered)
                        .Select(i => i.PerformComputation())
                        .Reverse();

    On the other hand, if we are already on a background thread, and we want to allow the system to maximize its speed, we might want to allow the system to fully buffer the results:

        var reversed = collection
                        .AsParallel()
                        .WithExecutionMode(ParallelExecutionMode.ForceParallelism)
                        .WithMergeOptions(ParallelMergeOptions.FullyBuffered)
                        .Select(i => i.PerformComputation())
                        .Reverse();

    Notice, also, that you can specify multiple configuration options in a parallel query. By chaining these extension methods together, we generate a query that will always run in parallel, and will always complete before making the results available in our IEnumerable<T>.
    Read the article

  • Parallelism in .NET – Part 2, Simple Imperative Data Parallelism

    - by Reed
    In my discussion of Decomposition of the problem space, I mentioned that Data Decomposition is often the simplest abstraction to use when trying to parallelize a routine. If a problem can be decomposed based off the data, we will often want to use what MSDN refers to as Data Parallelism as our strategy for implementing our routine. The Task Parallel Library in .NET 4 makes implementing Data Parallelism, for most cases, very simple.

    Data Parallelism is the main technique we use to parallelize a routine which can be decomposed based off data. Data Parallelism refers to taking a single collection of data, and having a single operation be performed concurrently on elements in the collection. One side note here: Data Parallelism is also sometimes referred to as the Loop Parallelism Pattern or Loop-level Parallelism. In general, for this series, I will try to use the terminology used in the MSDN documentation for the Task Parallel Library. This should make it easier to investigate these topics in more detail.

    Once we've determined we have a problem that, potentially, can be decomposed based on data, implementation using Data Parallelism in the TPL is quite simple. Let's take our example from the Data Decomposition discussion: a simple contrast stretching filter. Here, we have a collection of data (pixels), and we need to run a simple operation on each pixel. Once we know the minimum and maximum values, we most likely would have some simple code like the following:

        for (int row = 0; row < pixelData.GetUpperBound(0); ++row)
        {
            for (int col = 0; col < pixelData.GetUpperBound(1); ++col)
            {
                pixelData[row, col] = AdjustContrast(pixelData[row, col], minPixel, maxPixel);
            }
        }

    This simple routine loops through a two dimensional array of pixelData, and calls the AdjustContrast routine on each pixel.

    As I mentioned, when you're decomposing a problem space, most iteration statements are potentially candidates for data decomposition. Here, we're using two for loops: one looping through rows in the image, and a second nested loop iterating through the columns. We then perform one, independent operation on each element based on those loop positions.

    This is a prime candidate: we have no shared data, and no dependencies on anything but the pixel which we want to change. Since we're using a for loop, we can easily parallelize this using the Parallel.For method in the TPL:

        Parallel.For(0, pixelData.GetUpperBound(0), row =>
        {
            for (int col = 0; col < pixelData.GetUpperBound(1); ++col)
            {
                pixelData[row, col] = AdjustContrast(pixelData[row, col], minPixel, maxPixel);
            }
        });

    Here, by simply changing our first for loop to a call to Parallel.For, we can parallelize this portion of our routine. Parallel.For works, as do many methods in the TPL, by creating a delegate and using it as an argument to a method. In this case, our for loop iteration block becomes a delegate created via a lambda expression. This lets you write code that, superficially, looks similar to the familiar for loop, but functions quite differently at runtime.

    We could easily do this to our second for loop as well, but that may not be a good idea. There is a balance to be struck when writing parallel code. We want to have enough work items to keep all of our processors busy, but the more we partition our data, the more overhead we introduce. In this case, we have an image of data, most likely hundreds of pixels in both dimensions. By just parallelizing our first loop, each row of pixels can be run as a single task. With hundreds of rows of data, we are providing fine enough granularity to keep all of our processors busy.

    If we parallelize both loops, we're potentially creating millions of independent tasks. This introduces extra overhead with no extra gain, and will actually reduce our overall performance. This leads to my first guideline when writing parallel code:

        Partition your problem into enough tasks to keep each processor busy throughout the operation, but not more than necessary to keep each processor busy.

    Also note that I parallelized the outer loop. I could have just as easily partitioned the inner loop. However, partitioning the inner loop would have led to many more discrete work items, each with a smaller amount of work (operate on one pixel instead of one row of pixels). My second guideline when writing parallel code reflects this:

        Partition your problem in a way to place the most work possible into each task.

    This typically means, in practice, that you will want to parallelize the routine at the "highest" point possible in the routine, typically the outermost loop. If you're looking at parallelizing methods which call other methods, you'll want to try to partition your work high up in the stack; as you get into lower level methods, the performance impact of parallelizing your routines may not overcome the overhead introduced.

    Parallel.For works great for situations where we know the number of elements we're going to process in advance. If we're iterating through an IList<T> or an array, this is a typical approach. However, there are other iteration statements common in C#. In many situations, we'll use foreach instead of a for loop. This can be more understandable and easier to read, but also has the advantage of working with collections which only implement IEnumerable<T>, where we do not know the number of elements involved in advance.

    As an example, let's take the following situation. Say we have a collection of Customers, and we want to iterate through each customer, check some information about the customer, and if a certain case is met, send an email to the customer and update our instance to reflect this change. Normally, this might look something like:

        foreach(var customer in customers)
        {
            // Run some process that takes some time...
            DateTime lastContact = theStore.GetLastContact(customer);
            TimeSpan timeSinceContact = DateTime.Now - lastContact;

            // If it's been more than two weeks, send an email, and update...
            if (timeSinceContact.Days > 14)
            {
                theStore.EmailCustomer(customer);
                customer.LastEmailContact = DateTime.Now;
            }
        }

    Here, we're doing a fair amount of work for each customer in our collection, but we don't know how many customers exist. If we assume that theStore.GetLastContact(customer) and theStore.EmailCustomer(customer) are both side-effect free, thread safe operations, we could parallelize this using Parallel.ForEach:

        Parallel.ForEach(customers, customer =>
        {
            // Run some process that takes some time...
            DateTime lastContact = theStore.GetLastContact(customer);
            TimeSpan timeSinceContact = DateTime.Now - lastContact;

            // If it's been more than two weeks, send an email, and update...
            if (timeSinceContact.Days > 14)
            {
                theStore.EmailCustomer(customer);
                customer.LastEmailContact = DateTime.Now;
            }
        });

    Just like Parallel.For, we rework our loop into a method call accepting a delegate created via a lambda expression. This keeps our new code very similar to our original iteration statement; however, this will now execute in parallel. The same guidelines apply with Parallel.ForEach as with Parallel.For.

    The other iteration statements, do and while, do not have direct equivalents in the Task Parallel Library. These, however, are very easy to implement using Parallel.ForEach and the yield keyword; a sketch follows below.

    Most applications can benefit from implementing some form of Data Parallelism. Iterating through collections and performing "work" is a very common pattern in nearly every application. When the problem can be decomposed by data, we often can parallelize the workload by merely changing foreach statements to Parallel.ForEach method calls, and for loops to Parallel.For method calls. Any time your program operates on a collection, and does a set of work on each item in the collection where that work is not dependent on other information, you very likely have an opportunity to parallelize your routine.
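    For instance, a while-style loop can be exposed to Parallel.ForEach through an iterator method (a hedged sketch of the idea mentioned above; the Queue<string> of work and the Process method are illustrative stand-ins, not part of the TPL):

        using System.Collections.Generic;
        using System.Threading.Tasks;

        static class WhileLoopSketch
        {
            // Wrap "while there is more work" as a lazy sequence via yield...
            static IEnumerable<string> PendingItems(Queue<string> queue)
            {
                while (queue.Count > 0)
                    yield return queue.Dequeue();
            }

            // ...then let Parallel.ForEach consume that sequence concurrently.
            // The default partitioner synchronizes access to the enumerator.
            static void Drain(Queue<string> queue)
            {
                Parallel.ForEach(PendingItems(queue), item => Process(item));
            }

            static void Process(string item) { /* work goes here */ }
        }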

    Read the article
