Search Results

Search found 5384 results on 216 pages for 'integer division'.

  • I am using VB 2008. I am trying to create text boxes dynamically and remove them; here is the code

    - by fari
        Private Sub setTextBox()
            Dim num As Integer
            Dim pos As Integer
            num = Len(word)
            temp = String.Copy(word)
            Dim intcount As Integer
            remove()
            GuessBox.Visible = True
            letters.Visible = True
            pos = 0
            'To create the dynamic text boxes and add the controls
            For intcount = 0 To num - 1
                Txtdynamic = New TextBox
                Txtdynamic.Width = 20
                Txtdynamic.Visible = True
                Txtdynamic.MaxLength = 1
                Txtdynamic.Location = New Point(pos + 5, 0)
                pos = pos + 30
                'Set the font size
                Txtdynamic.Font = New System.Drawing.Font("Verdana", 8.25!, System.Drawing.FontStyle.Regular, System.Drawing.GraphicsUnit.Point, CType(0, Byte))
                Txtdynamic.Name = "txtdynamic_" & intcount & "_mycntrl"
                Txtdynamic.Enabled = False
                Txtdynamic.Text = ""
                Panel1.Controls.Add(Txtdynamic)
            Next
            Panel1.Visible = True
            Controls.Add(Panel1)
            Controls.Add(GuessBox)
            Controls.Add(letters)
            letter = ""
            letters.Text = ""
            hang_lable.Text = ""
            tries = 0
        End Sub

        Function remove()
            For Each ctrl In Panel1.Controls
                Panel1.Controls.Remove(ctrl)
            Next
        End Function

    I am able to create the text boxes, but only a few of them are removed. Using For Each ctrl In Panel1.Controls doesn't retrieve all the controls, and some are duplicated as well. Can anyone please help me? Thanks.
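
    The usual cause: removing controls while enumerating the same collection mutates it mid-loop, so every second control is skipped. A minimal sketch of a reverse indexed loop that avoids this, assuming Panel1 holds only the dynamic text boxes (method name illustrative):

        Private Sub RemoveDynamicTextBoxes()
            'Walk backwards so removals don't shift the items still to be visited.
            For i As Integer = Panel1.Controls.Count - 1 To 0 Step -1
                Dim ctrl As Control = Panel1.Controls(i)
                Panel1.Controls.RemoveAt(i)
                ctrl.Dispose() 'Release the window handle as well.
            Next
        End Sub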

    Read the article

  • Reading bytes from JavaScript string

    - by Jan
    I have a string containing binary data in JS. Now I want to read, for example, an integer from it. So I get the first 4 characters, use charCodeAt, do some shifting etc. to get an integer. The problem is that strings in JS are UTF-16 (instead of ASCII) and charCodeAt often returns values higher than 255. The Mozilla reference states that "The first 128 Unicode code points are a direct match of the ASCII character encoding." (What about values above 127?) How can I convert the result of charCodeAt to an ASCII value? Or is there a better way to convert a string of four characters to a 4-byte integer?
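
    A common workaround is to mask each code unit down to its low byte before combining. A minimal sketch, assuming the string really holds one byte per character (e.g. fetched via overrideMimeType('text/plain; charset=x-user-defined'), where each byte maps to a code unit whose low byte is the original value):

        // Read a 4-byte little-endian unsigned integer starting at offset.
        function readUint32(str, offset) {
            return (((str.charCodeAt(offset)     & 0xFF)      ) |
                    ((str.charCodeAt(offset + 1) & 0xFF) <<  8) |
                    ((str.charCodeAt(offset + 2) & 0xFF) << 16) |
                    ((str.charCodeAt(offset + 3) & 0xFF) << 24)) >>> 0; // >>> keeps it unsigned
        }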

    Read the article

  • SQLite: does data type depend on the quotes?

    - by Septagram
    In SQLite, the datatype of a value is associated with the value itself, not with the column type. So suppose we have a table with an integer primary key "id" and an integer column "some_number". If I do a query like this: INSERT INTO mytable (id, some_number) VALUES (NULL, "1234") Will 1234 be inserted as an integer or as a string? What consequences will that have for me later, say, when I'm comparing it with another value like "234" (as a number 1234 > 234, but as a string "1234" < "234", right?)?
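
    The declared column type still matters: it gives the column an affinity, and INTEGER affinity converts text that looks like an integer before storing it. A minimal sketch to verify, using SQLite's built-in typeof():

        CREATE TABLE mytable (id INTEGER PRIMARY KEY, some_number INTEGER);
        INSERT INTO mytable (id, some_number) VALUES (NULL, '1234');
        SELECT typeof(some_number) FROM mytable;  -- 'integer', so 1234 > 234 compares numerically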

    Read the article

  • How to generate entities with Objects?

    - by 01
    I want to generate @Entity classes with seam-gen from an existing database. However, it generates only a very simple version. For example:

        @Entity
        @Table(name = "badges")
        public class Badges implements java.io.Serializable {
            private Integer id;
            private Integer userId;   // <-- what it generates
            private String name;
            private String date;

    I want it to generate:

        @Entity
        @Table(name = "badges")
        public class Badges implements java.io.Serializable {
            private Integer id;
            private User user;        // <-- what I want
            private String name;
            private String date;

    I even have a constraint on userId, and it points to column User.id. P.S. I'm using MySQL 5, and seam-gen uses hbm2java to generate entities.
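
    For hbm2java to emit an association instead of a plain column, the reverse-engineering step has to actually see the foreign key; with MySQL that usually means the tables must be InnoDB, since MyISAM does not store FK constraints. A minimal sketch of the mapping you would expect once the FK is detected (annotation values are illustrative):

        @ManyToOne(fetch = FetchType.LAZY)
        @JoinColumn(name = "userId")
        public User getUser() {
            return this.user;
        }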

    Read the article

  • Partially flattening a list

    - by alj
    This is probably a really silly question but, given the example code at the bottom, how would I get a single list that retains the tuples? (I've looked at itertools, but it flattens everything.) What I currently get is:

        ('id', 20, 'integer')
        ('companyname', 50, 'text')
        [('focus', 30, 'text'), ('fiesta', 30, 'text'), ('mondeo', 30, 'text'), ('puma', 30, 'text')]
        ('contact', 50, 'text')
        ('email', 50, 'text')

    What I would like is a single-level list like:

        ('id', 20, 'integer')
        ('companyname', 50, 'text')
        ('focus', 30, 'text')
        ('fiesta', 30, 'text')
        ('mondeo', 30, 'text')
        ('puma', 30, 'text')
        ('contact', 50, 'text')
        ('email', 50, 'text')

        def getproducts():
            temp_list = []
            product_list = ['focus', 'fiesta', 'mondeo', 'puma']  # usually this would come from a db
            for p in product_list:
                temp_list.append((p, 30, 'text'))
            return temp_list

        def createlist():
            column_title_list = (
                ("id", 20, "integer"),
                ("companyname", 50, "text"),
                getproducts(),
                ("contact", 50, "text"),
                ("email", 50, "text"),
            )
            return column_title_list

        for item in createlist():
            print item

    Thanks, ALJ
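
    One way is to flatten exactly one level: wrap the bare tuples so only the embedded list gets expanded. A minimal sketch (helper name is illustrative):

        from itertools import chain

        def flatten_one_level(items):
            # Wrap non-list items in a one-element list so chain
            # expands only the nested lists, leaving tuples intact.
            return list(chain.from_iterable(
                item if isinstance(item, list) else [item]
                for item in items))

        for item in flatten_one_level(createlist()):
            print item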

    Read the article

  • How to insert a value based on lookup from another table [SQL]?

    - by Shaitan00
    I need to find a way to do an INSERT INTO table A, but one of the values is something that comes from a lookup on table B; allow me to illustrate. I have the 2 following tables:

        Table A:
            A1: String
            A2: Integer value coming from table B
            A3: More Data

        Table B:
            B1: String
            B2: Integer Value

    Example row of A: {"Value", 101, MoreData}
    Example row of B: {"English", 101}

    Now, I know I need to INSERT the following into A: {"Value2", "English", MoreData}, but obviously that won't work because it is expecting an Integer in the second column, not the word "English", so I need to do a lookup in Table B first. Something like this:

        INSERT INTO tableA (A1, A2, A3)
        VALUES ("Value2", SELECT B2 FROM tableB where B1="English", MoreData);

    Obviously this doesn't work as-is ... Any suggestions?
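
    Two standard fixes: parenthesize the lookup as a scalar subquery, or switch to the INSERT ... SELECT form. A minimal sketch using the names from the question:

        -- Scalar subquery in the VALUES list
        INSERT INTO tableA (A1, A2, A3)
        VALUES ('Value2', (SELECT B2 FROM tableB WHERE B1 = 'English'), 'MoreData');

        -- Equivalent INSERT ... SELECT
        INSERT INTO tableA (A1, A2, A3)
        SELECT 'Value2', B2, 'MoreData'
        FROM tableB
        WHERE B1 = 'English';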

    Read the article

  • Most proper way to use inherited classes with shared scopes in Mongo?

    - by Trip
    I have a TestVisual class that inherits from the Game class:

        class TestVisual < Game
          include MongoMapper::Document
        end

        class Game
          include MongoMapper::Document
          belongs_to :maestra
          key :incorrect, Integer
          key :correct, Integer
          key :time_to_complete, Integer
          key :maestra_id, ObjectId
          timestamps!
        end

    As you can see, it belongs to Maestra. So I can do Maestra.first.games, but I cannot do Maestra.first.test_visuals. Since I'm working specifically with TestVisuals, that is ideally what I would like to pull. Is this possible with Mongo? If it isn't, or if it isn't necessary, is there any better way to reach the TestVisual object from Maestra and still have it inherit Game?
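
    With MongoMapper's single collection inheritance, the usual pattern is to include MongoMapper::Document only in the parent; subclasses then share the collection with a _type discriminator, and a second association can be narrowed by class. A minimal sketch, assuming a MongoMapper version with SCI support (association options are illustrative):

        class Game
          include MongoMapper::Document
          belongs_to :maestra
        end

        class TestVisual < Game
        end

        class Maestra
          include MongoMapper::Document
          many :games
          many :test_visuals, :class_name => 'TestVisual', :foreign_key => :maestra_id
        end

        Maestra.first.test_visuals  # only documents whose _type is TestVisual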

    Read the article

  • Java generics conversion

    - by LittleGreenMan
    I have built a generic data container and now I want to manipulate data depending on its type. However, I get an "incompatible types" warning. What am I doing wrong?

        Type _Value;

        public void set(Type t) throws Exception {
            if (_Value instanceof Integer && t instanceof Integer) {
                _Value = (((Integer) t - _MinValue + getRange()) % getRange()) + _MinValue;
            } else if (_Value instanceof Boolean && t instanceof Boolean) {
                _Value = t;
            } else
                throw new Exception("Invalid type");
        }
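
    The arithmetic unboxes to int, and the compiler will not convert an int back to the type variable Type on its own. A minimal sketch of the usual fix, assuming _MinValue and getRange() evaluate to int (the cast is unchecked but guarded by the instanceof test):

        if (_Value instanceof Integer && t instanceof Integer) {
            int wrapped = (((Integer) t) - _MinValue + getRange()) % getRange() + _MinValue;
            _Value = (Type) (Object) Integer.valueOf(wrapped); // box, then cast back to Type
        }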

    Read the article

  • How should I do this (business logic) in Sql Server? A constraint?

    - by Pure.Krome
    Hi folks, I wish to add some type of business-logic constraint to a table, but I'm not sure how / where. I have a table with the following fields:

        ID INTEGER IDENTITY
        HubId INTEGER
        CategoryId INTEGER
        IsFeatured BIT
        Foo NVARCHAR(200)
        etc.

    So what I wish is that you can only have one featured thingy per HubId + CategoryId. E.g.:

        1, 1, 1, 1, 'blah'      -- Ok.
        2, 1, 2, 1, 'more blah' -- Also Ok
        3, 1, 1, 1, 'aaa'       -- constraint error
        4, 1, 1, 0, 'asdasdad'  -- Ok.
        5, 1, 1, 0, 'bbbb'      -- Ok.

    So the third row to be inserted would fail because that hub AND category already have a featured thingy. Is this possible?
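
    On SQL Server 2008 and later this is exactly what a filtered unique index does: it constrains only the rows where IsFeatured = 1. A minimal sketch (table and index names illustrative):

        CREATE UNIQUE NONCLUSTERED INDEX UQ_OneFeaturedPerHubCategory
        ON dbo.MyTable (HubId, CategoryId)
        WHERE IsFeatured = 1;

    On SQL Server 2005 the same effect is usually achieved with an indexed view over the IsFeatured = 1 rows, or with a trigger.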

    Read the article

  • In SQL, find the combinations of rows whose values sum to a specific amount (or an amt in another table)

    - by SamH
    Table_1:
        D_ID Integer
        Deposit_amt Integer

    Table_2:
        Total_ID Integer
        Total_amt Integer

    Is it possible to write a select statement to find all the rows in Table_1 whose Deposit_amt values sum to the Total_amt in Table_2? There are multiple rows in both tables. Say the first row in Table_2 has Total_amt = 100. I would want to know that in Table_1 the rows with D_ID 2, 6, 12 summed to 100, the rows with D_ID 2, 3, 42 summed to 100, etc. Help appreciated. Let me know if I need to clarify. I am asking because someone, as part of her job, has a list of transactions and a list of totals, and she needs to find the possible lists of transactions that could have created each total. I agree this sounds dangerous: finding a combination of transactions that sums to a total does not guarantee that they created the total. I wasn't aware it is an NP-complete problem.
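
    This is subset-sum, so any exact SQL solution is exponential, but for small row counts a recursive CTE can enumerate the combinations. A minimal sketch for a single target (T-SQL-flavoured syntax; assumes positive deposit amounts so overshooting paths can be pruned):

        DECLARE @target integer =
            (SELECT Total_amt FROM Table_2 WHERE Total_ID = 1);

        WITH subsets AS
        (
            SELECT D_ID, Deposit_amt AS total,
                   CAST(D_ID AS varchar(max)) AS ids
            FROM Table_1
            WHERE Deposit_amt <= @target
            UNION ALL
            SELECT t.D_ID, s.total + t.Deposit_amt,
                   s.ids + ',' + CAST(t.D_ID AS varchar(max))
            FROM subsets AS s
            JOIN Table_1 AS t
                ON t.D_ID > s.D_ID                   -- ordered ids: no repeats or permutations
            WHERE s.total + t.Deposit_amt <= @target -- prune paths that overshoot
        )
        SELECT ids AS D_IDs
        FROM subsets
        WHERE total = @target
        OPTION (MAXRECURSION 0);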

    Read the article

  • Consolidating separate Loan, Purchase & Sales tables into one transaction table.

    - by Frank Computer
    INFORMIX-SE with ISQL 7.3: I have separate tables for Loan, Purchase & Sales transactions. Each table's rows are joined to their respective customer rows by:

        customer.id [serial] = loan.foreign_id [integer]
                             = purchase.foreign_id [integer]
                             = sale.foreign_id [integer]

    I would like to consolidate the three tables into one table called "transaction", where a column transaction.trx_type [char(1)] {L=Loan, P=Purchase, S=Sale} identifies the transaction type. Is this a good idea, or is it better to keep them in separate tables? Storage space is not a concern; I think it would be easier programming- and user-wise to have all types of transactions under one table.
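
    If the three tables share mostly the same columns, a single table with a type discriminator is a reasonable design. A minimal DDL sketch (column names are illustrative, and "transaction" may need renaming or quoting if it clashes with a reserved word):

        CREATE TABLE trx
        (
            trx_id     SERIAL,
            foreign_id INTEGER,      -- joins to customer.id
            trx_type   CHAR(1),      -- L=Loan, P=Purchase, S=Sale
            trx_date   DATE,
            trx_amt    MONEY(16,2)
        );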

    Read the article

  • Is it possible to write a generic +1 method for numeric box types in Java?

    - by polygenelubricants
    This is NOT homework.

    Part 1. Is it possible to write a generic method, something like this:

        <T extends Number> T plusOne(T num) {
            return num + 1; // DOESN'T COMPILE! How to fix???
        }

    Short of using a bunch of instanceof and casts, is this possible?

    Part 2. The following 3 methods compile:

        Integer plusOne(Integer num) { return num + 1; }
        Double plusOne(Double num) { return num + 1; }
        Long plusOne(Long num) { return num + 1; }

    Is it possible to write a generic version that bounds T to only Integer, Double, or Long?
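
    Java's type system cannot restrict T to exactly those three classes, and Number exposes no arithmetic, so some runtime dispatch is needed. A minimal sketch (the unchecked casts are safe because the returned class always matches the argument's class):

        @SuppressWarnings("unchecked")
        static <T extends Number> T plusOne(T num) {
            if (num instanceof Integer) return (T) Integer.valueOf(num.intValue() + 1);
            if (num instanceof Long)    return (T) Long.valueOf(num.longValue() + 1);
            if (num instanceof Double)  return (T) Double.valueOf(num.doubleValue() + 1);
            throw new IllegalArgumentException("Unsupported Number: " + num.getClass());
        }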

    Read the article

  • Splitting up input using regular expressions in Java

    - by Joe24
    I am making a program that lets a user input a chemical, for example C9H11NO2. When they enter that, I want to split it up into pieces so I can have it like C9, H11, N, O2. When I have it like this, I want to make changes to it, so I can make it C10H12N2O3 and then put it back together. This is what I have done so far. Using the regular expression below I can extract the integer values, but how would I go about getting C10, H11, etc.?

        System.out.println("Enter Data");
        Scanner k = new Scanner( System.in );
        String input = k.nextLine();
        String reg = "\\s\\s\\s";
        String [] data;
        data = input.split( reg );
        int m = Integer.parseInt( data[0] );
        int n = Integer.parseInt( data[1] );
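
    A pattern that matches an element symbol followed by an optional count splits the formula directly; Matcher.find() then walks the tokens. A minimal sketch:

        import java.util.regex.Matcher;
        import java.util.regex.Pattern;

        public class FormulaSplit {
            public static void main(String[] args) {
                // One uppercase letter, optional lowercase letters, optional digits.
                Pattern token = Pattern.compile("([A-Z][a-z]*)(\\d*)");
                Matcher m = token.matcher("C9H11NO2");
                while (m.find()) {
                    String element = m.group(1);
                    int count = m.group(2).isEmpty() ? 1 : Integer.parseInt(m.group(2));
                    // Adjust count here, then append element + newCount to rebuild.
                    System.out.println(element + " -> " + count);
                }
            }
        }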

    Read the article

  • Facing trouble in retrieving relevant records

    - by Umaid
    SELECT * FROM MainCategory
    WHERE Month = 'May'
      AND Day IN ((CAST(strftime('%d', date('now','-1 day')) AS Integer)),
                  (CAST(strftime('%d', date('now')) AS Integer)),
                  (CAST(strftime('%d', date('now','+1 day')) AS Integer)));

    Whenever I run this query in SQLite it returns 33 records instead of 3. I am interested in fetching only 3 records of the current month, but I am unable to do so, so please assist. Please note: if you can't assist, please don't post irrelevant answers. I have also tried to simplify it, without success:

        SELECT day, month FROM MainCategory
        WHERE Month = 'May'
          AND day IN ((date('now','-1 day')), (date('now')), (date('now','+1 day')))
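
    The IN list only filters on day-of-month numbers, so every May row whose Day matches one of the three numbers qualifies; if several rows share those days (e.g. from different years or duplicate entries), you get many matches. Comparing full dates is more reliable. A minimal sketch, assuming the table can carry a real ISO date column (event_date is a hypothetical name):

        SELECT *
        FROM MainCategory
        WHERE date(event_date) BETWEEN date('now', '-1 day') AND date('now', '+1 day');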

    Read the article

  • Design classes/interface to support methods returning different types

    - by Nayn
    Hi, I have classes as below:

        public interface ITest <T> {
            public T MethodHere();
        }

        public class test1 implements ITest<String> {
            String MethodHere() {
                return "Bla";
            }
        }

        public class test2 implements ITest<Integer> {
            Integer MethodHere() {
                return Integer.valueOf(2);
            }
        }

        public class ITestFactory {
            public static ITest getInstance(int type) {
                if(type == 1) return new test1();
                else if(type == 2) return new test2();
            }
        }

    There is a warning in the factory class that ITest is used as a raw type. What modification should I do to get rid of it? Thanks, Nayn
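
    The factory cannot know T from an int argument, so the usual fix is a wildcard return type. A minimal sketch (it also adds the missing return path, which would otherwise be a compile error):

        public static ITest<?> getInstance(int type) {
            switch (type) {
                case 1:  return new test1();
                case 2:  return new test2();
                default: throw new IllegalArgumentException("Unknown type: " + type);
            }
        }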

    Read the article

  • Making sure an 'int' is bound on sqlite

    - by iphone_developer
    I have a method called like this:

        -(void)dbUpdate:(int)forId {

    which contains this sqlite call to bind and update a database:

        sqlite3_bind_int(UPDATE_ACCOUNT, 3, forId);

    When I run the query, it doesn't give me an error, but it doesn't update the database either. If I manually change that line to

        sqlite3_bind_int(UPDATE_ACCOUNT, 3, 6);

    the query runs 100% fine, as expected. Changing the variable to an actual integer proves to me that the variable forId isn't being passed or bound to the query properly. What am I doing wrong? What is the best way to pass an integer (and make sure it is an integer) into an sqlite bind? Cheers,
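
    Checking every sqlite3 return code usually pinpoints this kind of silent failure; a statement that was not reset after a previous run, or a wrong parameter index, shows up there. A minimal sketch, assuming UPDATE_ACCOUNT is a prepared sqlite3_stmt* and database is the sqlite3* handle (names as implied by the question):

        - (void)dbUpdate:(int)forId {
            if (sqlite3_bind_int(UPDATE_ACCOUNT, 3, forId) != SQLITE_OK) {
                NSLog(@"bind failed: %s", sqlite3_errmsg(database));
            }
            if (sqlite3_step(UPDATE_ACCOUNT) != SQLITE_DONE) {
                NSLog(@"step failed: %s", sqlite3_errmsg(database));
            }
            sqlite3_reset(UPDATE_ACCOUNT); // must reset before the statement is reused
        }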

    Read the article

  • Passing a generic class to a Java function?

    - by rodo
    Is it possible to use generics when passing a class to a Java function? I was hoping to do something like this:

        public static class DoStuff {
            public <T extends Class<List>> void doStuffToList(T className) {
                System.out.println(className);
            }

            public void test() {
                doStuffToList(List.class);      // compiles
                doStuffToList(ArrayList.class); // compiler error (undesired behaviour)
                doStuffToList(Integer.class);   // compiler error (desired behaviour)
            }
        }

    Ideally the List.class and ArrayList.class lines would work fine, but the Integer.class line would cause a compile error. I could use Class as my type instead of T extends Class<List>, but then I won't catch the Integer.class case above.
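
    Generics are invariant, so Class<ArrayList> is not a subtype of Class<List>, which is why the bound rejects ArrayList.class. A bounded wildcard accepts the class object of any List subtype while still rejecting Integer.class. A minimal sketch (the raw List in the bound keeps the example short):

        public void doStuffToList(Class<? extends List> clazz) {
            System.out.println(clazz);
        }

        doStuffToList(List.class);       // compiles
        doStuffToList(ArrayList.class);  // now compiles too
        // doStuffToList(Integer.class); // still a compile error, as desired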

    Read the article

  • Problem decrementing in Java with '-='

    - by hanesjw
    I'm making a scrolling game on Android and am having a hard time figuring out why the code below does not decrement past 0. Objects start at the end of the screen (so the x position is equal to the width of the screen); the objects move across the screen by decrementing their x positions. I want them to scroll off of the screen, but when the x position hits 0, the objects just stay at 0; they do not move into the negatives. Here is my code to move objects on the screen:

        private void incrementPositions(long delta) {
            float incrementor = (delta / 1000F) * Globals.MAP_SECTION_SPEED;
            for(Map.Entry<Integer, HashMap<Integer, MapSection>> column : scrollingMap.entrySet()) {
                for(Map.Entry<Integer, MapSection> row : column.getValue().entrySet()) {
                    MapSection section = row.getValue();
                    section.x -= incrementor;
                }
            }
        }

    It works OK if I change section.x -= incrementor; to section.x = section.x - (int)incrementor; but if I do that the scrolling doesn't appear as smooth.
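
    This behaviour points at section.x being an int: the compound assignment compiles to section.x = (int)(section.x - incrementor), and the narrowing cast truncates toward zero, so once x is 0 a fractional decrement lands in (-1, 0) and snaps back to 0 forever. Accumulating the true position in a float and deriving the drawn int keeps the motion smooth and lets x go negative. A minimal sketch (fx is a hypothetical added field on MapSection):

        section.fx -= incrementor;           // float accumulator holds the real position
        section.x = Math.round(section.fx);  // int pixel position derived for drawing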

    Read the article

  • JDBC - error connecting to a remote MySQL database

    - by Guilherme Ruiz
    I'm using JDBC to connect my program to a MySQL database. I have already set the port number and, yes, my database user has access permission. When I use localhost it works perfectly, but when I try to connect to a remote MySQL database, this error shows on the console (the last line is just NetBeans reporting "BUILD SUCCESSFUL" in Portuguese):

        java.lang.ExceptionInInitializerError
        Caused by: java.lang.NumberFormatException: null
            at java.lang.Integer.parseInt(Integer.java:454)
            at java.lang.Integer.parseInt(Integer.java:527)
            at serial.BDArduino.<clinit>(BDArduino.java:25)
        Exception in thread "main" Java Result: 1
        CONSTRUÍDO COM SUCESSO (tempo total: 1 segundo)

    Thank you in advance! Main code (the repetitive per-pin branches and the NetBeans-generated layout code are abridged below, marked with comments):

        package serial;

        import gnu.io.CommPort;
        import gnu.io.CommPortIdentifier;
        import gnu.io.SerialPort;
        import java.awt.event.ActionEvent;
        import java.awt.event.ActionListener;
        import java.io.IOException;
        import java.io.InputStream;
        import java.io.OutputStream;
        import javax.swing.JFrame;
        import javax.swing.JOptionPane;

        /**
         * @author Ruiz
         */
        public class BDArduino extends JFrame {

            static boolean connected = false;
            // BDArduino.java:25 is one of these static initializers.
            static int aux_sql8 = Integer.parseInt(Sql.getDBinfo("SELECT * FROM arduinoData WHERE id=1", "pin8"));
            static int aux_sql2 = Integer.parseInt(Sql.getDBinfo("SELECT * FROM arduinoData WHERE id=1", "pin2"));

            CommPort commPort = null;
            SerialPort serialPort = null;
            InputStream inputStream = null;
            static OutputStream outputStream = null;
            String comPortNum = "COM10";
            int baudRate = 9600;
            int[] intArray = {2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13};

            /** Creates new form ArduinoTest */
            public BDArduino() {
                //super("Arduino Test App");
                initComponents();
            }

            class Escrita extends Thread {
                private int i;
                public void run() {
                    while (true) {
                        System.out.println("Número :" + i++);
                    }
                }
            }

            public static void writeData(int a) throws IOException {
                outputStream.write(a);
            }

            public void action(String arg) {
                System.out.println(arg);
                Object[] msg = {"Baud Rate: ", "9600", "COM Port #: ", "COM10"};
                if (arg == "connect") {
                    if (connected == false) {
                        new BDArduino.ConnectionMaker().start();
                    } else {
                        closeConnection();
                    }
                }
                if (arg == "disconnect") {
                    serialPort.close();
                    closeConnection();
                }
                if (arg == "p2") {
                    System.out.print("Pin #2\n");
                    try {
                        outputStream.write(intArray[0]);
                    } catch (IOException e12) {
                        e12.printStackTrace();
                        System.exit(-1);
                    }
                }
                // ... identical branches for "p3" through "p13", writing
                // intArray[1] .. intArray[11], are omitted in this excerpt ...
            }

            //*******************************************************
            // Arduino connection
            //*******************************************************
            void closeConnection() {
                try {
                    outputStream.close();
                } catch (Exception ex) {
                    ex.printStackTrace();
                    JOptionPane.showMessageDialog(null, "Can't Close Connection!", "ERROR", JOptionPane.ERROR_MESSAGE);
                }
                connected = false;
                System.out.print("\nDesconectado\n");
                JOptionPane.showMessageDialog(null, "Desconectado!", "Desconectado", JOptionPane.INFORMATION_MESSAGE);
            }

            void connect() throws Exception {
                String portName = comPortNum;
                CommPortIdentifier portIdentifier = CommPortIdentifier.getPortIdentifier(portName);
                if (portIdentifier.isCurrentlyOwned()) {
                    System.out.println("Error: Port is currently in use");
                    JOptionPane.showMessageDialog(null, "Port is currently in use!\nTry Again Later...", "ERROR", JOptionPane.ERROR_MESSAGE);
                } else {
                    commPort = portIdentifier.open(this.getClass().getName(), 2000);
                    if (commPort instanceof SerialPort) {
                        serialPort = (SerialPort) commPort;
                        serialPort.setSerialPortParams(baudRate, SerialPort.DATABITS_8, SerialPort.STOPBITS_1, SerialPort.PARITY_NONE);
                        outputStream = serialPort.getOutputStream();
                    } else {
                        System.out.println("Error: Only serial ports are handled");
                        JOptionPane.showMessageDialog(null, "Serial Ports ONLY!", "ERROR", JOptionPane.ERROR_MESSAGE);
                    }
                }
                // wait some time
                try {
                    Thread.sleep(300);
                } catch (InterruptedException ie) {
                }
            }

            public class ConnectionMaker extends Thread {
                public void run() {
                    // try to make a connection
                    try {
                        connect();
                    } catch (Exception ex) {
                        ex.printStackTrace();
                        System.out.print("ERROR: Cannot connect!");
                        JOptionPane.showMessageDialog(null, "Cannot Connect!\nCheck the connection settings\nand/or your configuration\nand try again!", "ERROR", JOptionPane.ERROR_MESSAGE);
                    }
                    // show status
                    serialPort.notifyOnDataAvailable(true);
                    connected = true;
                    // send ack
                    System.out.print("\nConnected\n");
                    JOptionPane.showMessageDialog(null, "Conectado!", "Conectado", JOptionPane.INFORMATION_MESSAGE);
                }
            }

            /**
             * This method is called from within the constructor to initialize the form.
             * WARNING: Do NOT modify this code. The content of this method is always
             * regenerated by the Form Editor.
             */
            @SuppressWarnings("unchecked")
            // <editor-fold defaultstate="collapsed" desc="Generated Code">
            private void initComponents() {
                // ... NetBeans-generated construction of buttons btnp2..btnp12, btn13,
                // btncon and btndesc, their mouseClicked listeners (each one simply
                // calls action("p2").."p13", "connect" or "disconnect"), and the
                // GroupLayout setup are omitted in this excerpt ...
            }// </editor-fold>

            public static void main(String args[]) throws IOException {
                /* Set the Nimbus look and feel, if available */
                try {
                    for (javax.swing.UIManager.LookAndFeelInfo info : javax.swing.UIManager.getInstalledLookAndFeels()) {
                        if ("Nimbus".equals(info.getName())) {
                            javax.swing.UIManager.setLookAndFeel(info.getClassName());
                            break;
                        }
                    }
                } catch (Exception e) {
                }

                /* Create and display the form */
                java.awt.EventQueue.invokeLater(new Runnable() {
                    public void run() {
                        new BDArduino().setVisible(true);
                    }
                });

                while (true) {
                    int sql8 = Integer.parseInt(Sql.getDBinfo("SELECT * FROM arduinoData WHERE id=1", "pin8"));
                    if (connected == true && sql8 != aux_sql8) {
                        aux_sql8 = sql8;
                        if (sql8 == 1) {
                            writeData(2);
                        } else {
                            writeData(3);
                        }
                    }
                    int sql2 = Integer.parseInt(Sql.getDBinfo("SELECT * FROM arduinoData WHERE id=1", "pin2"));
                    if (connected == true && sql2 != aux_sql2) {
                        aux_sql2 = sql2;
                        if (sql2 == 1) {
                            writeData(4);
                        } else {
                            writeData(5);
                        }
                    }
                    try {
                        Thread.sleep(500);
                    } catch (InterruptedException e) {
                        e.printStackTrace();
                    }
                }
            }

            // Variables declaration - do not modify
            private javax.swing.JButton btn13, btncon, btndesc, btnp10, btnp11, btnp12,
                    btnp2, btnp3, btnp4, btnp5, btnp6, btnp7, btnp8, btnp9;
            // End of variables declaration
        }
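
    The trace shows the real failure happens before any GUI appears: Integer.parseInt(null) in the static initializers of aux_sql8/aux_sql2 throws NumberFormatException, which surfaces as ExceptionInInitializerError. In other words, Sql.getDBinfo returns null when pointed at the remote host (connection refused, wrong JDBC URL, or a grant that only allows localhost). A minimal sketch of a guard that makes the failure diagnosable instead of fatal (helper name is illustrative):

        static int readPin(String column) {
            String value = Sql.getDBinfo("SELECT * FROM arduinoData WHERE id=1", column);
            if (value == null) {
                System.err.println("No value for " + column
                        + " - check the remote MySQL URL, user grant and firewall");
                return 0; // illustrative default
            }
            return Integer.parseInt(value);
        }

        static int aux_sql8 = readPin("pin8");
        static int aux_sql2 = readPin("pin2");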

    Read the article

  • Remove accents from String .NET

    - by developerit
    Private Const ACCENT As String = "ÀÁÂÃÄÅàáâãäåÒÓÔÕÖØòóôõöøÈÉÊËèéêëÌÍÎÏìíîïÙÚÛÜùúûüÿÑñÇç"
    Private Const SANSACCENT As String = "AAAAAAaaaaaaOOOOOOooooooEEEEeeeeIIIIiiiiUUUUuuuuyNnCc"

    Public Shared Function FormatForUrl(ByVal uriBase As String) As String
        If String.IsNullOrEmpty(uriBase) Then
            Return uriBase
        End If

        'Variable declarations
        Dim chaine As String = uriBase.Trim.Replace(" ", "-")
        chaine = chaine.Replace(" "c, "-"c)
        chaine = chaine.Replace("--", "-")
        chaine = chaine.Replace("'"c, String.Empty)
        chaine = chaine.Replace("?"c, String.Empty)
        chaine = chaine.Replace("#"c, String.Empty)
        chaine = chaine.Replace(":"c, String.Empty)
        chaine = chaine.Replace(";"c, String.Empty)

        'Convert the strings to character arrays
        Dim tableauSansAccent As Char() = SANSACCENT.ToCharArray
        Dim tableauAccent As Char() = ACCENT.ToCharArray

        'For each accented character
        For i As Integer = 0 To ACCENT.Length - 1
            'Replace the accented character with its unaccented equivalent
            chaine = chaine.Replace(tableauAccent(i).ToString(), tableauSansAccent(i).ToString())
        Next

        'Return the result
        Return chaine
    End Function
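
    An alternative that needs no hand-maintained lookup table is Unicode decomposition: normalizing to form D splits each accented character into a base letter plus combining marks, which can then be dropped. A minimal sketch:

        Imports System.Globalization
        Imports System.Text

        Public Shared Function RemoveDiacritics(ByVal text As String) As String
            Dim decomposed As String = text.Normalize(NormalizationForm.FormD)
            Dim sb As New StringBuilder()
            For Each c As Char In decomposed
                'Skip the combining marks produced by the decomposition.
                If CharUnicodeInfo.GetUnicodeCategory(c) <> UnicodeCategory.NonSpacingMark Then
                    sb.Append(c)
                End If
            Next
            Return sb.ToString().Normalize(NormalizationForm.FormC)
        End Function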

    Read the article

  • Improving Partitioned Table Join Performance

    - by Paul White
    The query optimizer does not always choose an optimal strategy when joining partitioned tables. This post looks at an example, showing how a manual rewrite of the query can almost double performance, while reducing the memory grant to almost nothing.

    Test Data

    The two tables in this example use a common partition scheme. The partition function uses 41 equal-size partitions:

        CREATE PARTITION FUNCTION PFT (integer)
        AS RANGE RIGHT
        FOR VALUES
        (
            125000, 250000, 375000, 500000,
            625000, 750000, 875000, 1000000,
            1125000, 1250000, 1375000, 1500000,
            1625000, 1750000, 1875000, 2000000,
            2125000, 2250000, 2375000, 2500000,
            2625000, 2750000, 2875000, 3000000,
            3125000, 3250000, 3375000, 3500000,
            3625000, 3750000, 3875000, 4000000,
            4125000, 4250000, 4375000, 4500000,
            4625000, 4750000, 4875000, 5000000
        );
        GO
        CREATE PARTITION SCHEME PST
        AS PARTITION PFT
        ALL TO ([PRIMARY]);

    The two tables are:

        CREATE TABLE dbo.T1
        (
            TID integer NOT NULL IDENTITY(0,1),
            Column1 integer NOT NULL,
            Padding binary(100) NOT NULL DEFAULT 0x,
            CONSTRAINT PK_T1
                PRIMARY KEY CLUSTERED (TID)
                ON PST (TID)
        );

        CREATE TABLE dbo.T2
        (
            TID integer NOT NULL,
            Column1 integer NOT NULL,
            Padding binary(100) NOT NULL DEFAULT 0x,
            CONSTRAINT PK_T2
                PRIMARY KEY CLUSTERED (TID, Column1)
                ON PST (TID)
        );

    The next script loads 5 million rows into T1 with a pseudo-random value between 1 and 5 for Column1. The table is partitioned on the IDENTITY column TID:

        INSERT dbo.T1 WITH (TABLOCKX) (Column1)
        SELECT (ABS(CHECKSUM(NEWID())) % 5) + 1
        FROM dbo.Numbers AS N
        WHERE n BETWEEN 1 AND 5000000;

    In case you don't already have an auxiliary table of numbers lying around, here's a script to create one with 10 million rows:

        CREATE TABLE dbo.Numbers (n bigint PRIMARY KEY);

        WITH
            L0 AS(SELECT 1 AS c UNION ALL SELECT 1),
            L1 AS(SELECT 1 AS c FROM L0 AS A CROSS JOIN L0 AS B),
            L2 AS(SELECT 1 AS c FROM L1 AS A CROSS JOIN L1 AS B),
            L3 AS(SELECT 1 AS c FROM L2 AS A CROSS JOIN L2 AS B),
            L4 AS(SELECT 1 AS c FROM L3 AS A CROSS JOIN L3 AS B),
            L5 AS(SELECT 1 AS c FROM L4 AS A CROSS JOIN L4 AS B),
            Nums AS(SELECT ROW_NUMBER() OVER (ORDER BY (SELECT NULL)) AS n FROM L5)
        INSERT dbo.Numbers WITH (TABLOCKX)
        SELECT TOP (10000000) n
        FROM Nums
        ORDER BY n
        OPTION (MAXDOP 1);

    Table T1 contains data like this: [sample rows shown in the original post]. Next we load data into table T2. The relationship between the two tables is that table 2 contains 'n' rows for each row in table 1, where 'n' is determined by the value in Column1 of table T1. There is nothing particularly special about the data or distribution, by the way.

        INSERT dbo.T2 WITH (TABLOCKX) (TID, Column1)
        SELECT T.TID, N.n
        FROM dbo.T1 AS T
        JOIN dbo.Numbers AS N
            ON N.n >= 1
            AND N.n <= T.Column1;

    Table T2 ends up containing about 15 million rows. The primary key for table T2 is a combination of TID and Column1. The data is partitioned according to the value in column TID alone.

    Partition Distribution

    The following query shows the number of rows in each partition of table T1:

        SELECT
            PartitionID = CA1.P,
            NumRows = COUNT_BIG(*)
        FROM dbo.T1 AS T
        CROSS APPLY (VALUES ($PARTITION.PFT(TID))) AS CA1 (P)
        GROUP BY CA1.P
        ORDER BY CA1.P;

    There are 40 partitions containing 125,000 rows (40 * 125k = 5m rows). The rightmost partition remains empty. The next query shows the distribution for table 2:

        SELECT
            PartitionID = CA1.P,
            NumRows = COUNT_BIG(*)
        FROM dbo.T2 AS T
        CROSS APPLY (VALUES ($PARTITION.PFT(TID))) AS CA1 (P)
        GROUP BY CA1.P
        ORDER BY CA1.P;

    There are roughly 375,000 rows in each partition (the rightmost partition is also empty). Ok, that's the test data done.
    Test Query and Execution Plan

    The task is to count the rows resulting from joining tables 1 and 2 on the TID column:

        SET STATISTICS IO ON;
        DECLARE @s datetime2 = SYSUTCDATETIME();

        SELECT COUNT_BIG(*)
        FROM dbo.T1 AS T1
        JOIN dbo.T2 AS T2
            ON T2.TID = T1.TID;

        SELECT DATEDIFF(Millisecond, @s, SYSUTCDATETIME());
        SET STATISTICS IO OFF;

    The optimizer chooses a plan using parallel hash join, and partial aggregation. The Plan Explorer plan tree view shows accurate cardinality estimates and an even distribution of rows across threads. With a warm data cache, the STATISTICS IO output shows that no physical I/O was needed, and all 41 partitions were touched. Running the query without actual execution plan or STATISTICS IO information for maximum performance, the query returns in around 2600ms.

    Execution Plan Analysis

    The first step toward improving on the execution plan produced by the query optimizer is to understand how it works, at least in outline. The two parallel Clustered Index Scans use multiple threads to read rows from tables T1 and T2. Parallel scan uses a demand-based scheme where threads are given page(s) to scan from the table as needed. This arrangement has certain important advantages, but does result in an unpredictable distribution of rows amongst threads. The point is that multiple threads cooperate to scan the whole table, but it is impossible to predict which rows end up on which threads.

    For correct results from the parallel hash join, the execution plan has to ensure that rows from T1 and T2 that might join are processed on the same thread. For example, if a row from T1 with join key value '1234' is placed in thread 5's hash table, the execution plan must guarantee that any rows from T2 that also have join key value '1234' probe thread 5's hash table for matches. The way this guarantee is enforced in this parallel hash join plan is by repartitioning rows to threads after each parallel scan. The two repartitioning exchanges route rows to threads using a hash function over the hash join keys. The two repartitioning exchanges use the same hash function, so rows from T1 and T2 with the same join key must end up on the same hash join thread.

    Expensive Exchanges

    This business of repartitioning rows between threads can be very expensive, especially if a large number of rows is involved. The execution plan selected by the optimizer moves 5 million rows through one repartitioning exchange and around 15 million across the other. As a first step toward removing these exchanges, consider the execution plan selected by the optimizer if we join just one partition from each table, disallowing parallelism:

        SELECT COUNT_BIG(*)
        FROM dbo.T1 AS T1
        JOIN dbo.T2 AS T2
            ON T2.TID = T1.TID
        WHERE
            $PARTITION.PFT(T1.TID) = 1
            AND $PARTITION.PFT(T2.TID) = 1
        OPTION (MAXDOP 1);

    The optimizer has chosen a (one-to-many) merge join instead of a hash join. The single-partition query completes in around 100ms. If everything scaled linearly, we would expect that extending this strategy to all 40 populated partitions would result in an execution time around 4000ms. Using parallelism could reduce that further, perhaps to be competitive with the parallel hash join chosen by the optimizer. This raises a question. If the most efficient way to join one partition from each of the tables is to use a merge join, why does the optimizer not choose a merge join for the full query?
    Forcing a Merge Join

    Let's force the optimizer to use a merge join on the test query using a hint:

        SELECT COUNT_BIG(*)
        FROM dbo.T1 AS T1
        JOIN dbo.T2 AS T2
            ON T2.TID = T1.TID
        OPTION (MERGE JOIN);

    This plan results in the same number of logical reads reported previously, but instead of 2600ms the query takes 5000ms. The natural explanation for this drop in performance is that the merge join plan is only using a single thread, whereas the parallel hash join plan could use multiple threads.

    Parallel Merge Join

    We can get a parallel merge join plan using the same query hint as before, and adding trace flag 8649:

        SELECT COUNT_BIG(*)
        FROM dbo.T1 AS T1
        JOIN dbo.T2 AS T2
            ON T2.TID = T1.TID
        OPTION (MERGE JOIN, QUERYTRACEON 8649);

    This looks promising. It uses a similar strategy to distribute work across threads as seen for the parallel hash join. In practice though, performance is disappointing. On a typical run, the parallel merge plan runs for around 8400ms; slower than the single-threaded merge join plan (5000ms) and much worse than the 2600ms for the parallel hash join. We seem to be going backwards! The logical reads for the parallel merge are still exactly the same as before, with no physical IOs. The cardinality estimates and thread distribution are also still very good.

    A big clue to the reason for the poor performance is shown in the wait statistics (captured by Plan Explorer Pro): CXPACKET waits require careful interpretation, and are most often benign, but in this case excessive waiting occurs at the repartitioning exchanges. Unlike the parallel hash join, the repartitioning exchanges in this plan are order-preserving 'merging' exchanges (because merge join requires ordered inputs). Parallelism works best when threads can just grab any available unit of work and get on with processing it. Preserving order introduces inter-thread dependencies that can easily lead to significant waits occurring. In extreme cases, these dependencies can result in an intra-query deadlock, though the details of that will have to wait for another time to explore in detail. The potential for waits and deadlocks leads the query optimizer to cost parallel merge join relatively highly, especially as the degree of parallelism (DOP) increases. This high costing resulted in the optimizer choosing a serial merge join rather than parallel in this case. The test results certainly confirm its reasoning.

    Collocated Joins

    In SQL Server 2008 and later, the optimizer has another available strategy when joining tables that share a common partition scheme. This strategy is a collocated join, also known as a per-partition join. It can be applied in both serial and parallel execution plans, though it is limited to 2-way joins in the current optimizer. Whether the optimizer chooses a collocated join or not depends on cost estimation. The primary benefits of a collocated join are that it eliminates an exchange and requires less memory, as we will see next.

    Costing and Plan Selection

    The query optimizer did consider a collocated join for our original query, but it was rejected on cost grounds. The parallel hash join with repartitioning exchanges appeared to be a cheaper option. There is no query hint to force a collocated join, so we have to mess with the costing framework to produce one for our test query.
    Pretending that IOs cost 50 times more than usual is enough to convince the optimizer to use collocated join with our test query:

        -- Pretend IOs are 50x cost temporarily
        DBCC SETIOWEIGHT(50);

        -- Co-located hash join
        SELECT COUNT_BIG(*)
        FROM dbo.T1 AS T1
        JOIN dbo.T2 AS T2
            ON T2.TID = T1.TID
        OPTION (RECOMPILE);

        -- Reset IO costing
        DBCC SETIOWEIGHT(1);

    Collocated Join Plan

    In the estimated execution plan for the collocated join, the Constant Scan contains one row for each partition of the shared partitioning scheme, from 1 to 41. The hash repartitioning exchanges seen previously are replaced by a single Distribute Streams exchange using Demand partitioning. Demand partitioning means that the next partition id is given to the next parallel thread that asks for one. My test machine has eight logical processors, and all are available for SQL Server to use. As a result, there are eight threads in the single parallel branch in this plan, each processing one partition from each table at a time. Once a thread finishes processing a partition, it grabs a new partition number from the Distribute Streams exchange... and so on until all partitions have been processed.

    It is important to understand that the parallel scans in this plan are different from the parallel hash join plan. Although the scans have the same parallelism icon, tables T1 and T2 are not being co-operatively scanned by multiple threads in the same way. Each thread reads a single partition of T1 and performs a hash match join with the same partition from table T2. The properties of the two Clustered Index Scans show a Seek Predicate (unusual for a scan!) limiting the rows to a single partition. The crucial point is that the join between T1 and T2 is on TID, and TID is the partitioning column for both tables. A thread that processes partition 'n' is guaranteed to see all rows that can possibly join on TID for that partition. In addition, no other thread will see rows from that partition, so this removes the need for repartitioning exchanges.

    CPU and Memory Efficiency Improvements

    The collocated join has removed two expensive repartitioning exchanges and added a single exchange processing 41 rows (one for each partition id). Remember, the parallel hash join plan exchanges had to process 5 million and 15 million rows. The amount of processor time spent on exchanges will be much lower in the collocated join plan. In addition, the collocated join plan has a maximum of 8 threads processing single partitions at any one time. The 41 partitions will all be processed eventually, but a new partition is not started until a thread asks for it. Threads can reuse hash table memory for the new partition. The parallel hash join plan also had 8 hash tables, but with all 5,000,000 build rows loaded at the same time. The collocated plan needs memory for only 8 * 125,000 = 1,000,000 rows at any one time.

    Collocated Hash Join Performance

    The collocated join plan has disappointing performance in this case. The query runs for around 25,300ms despite the same IO statistics as usual. This is much the worst result so far, so what went wrong? It turns out that cardinality estimation for the single partition scans of table T1 is slightly low. The properties of the Clustered Index Scan of T1 show the estimation was for 121,951 rows.
    This is a small shortfall compared with the 125,000 rows actually encountered, but it was enough to cause the hash join to spill to physical tempdb. A level 1 spill doesn't sound too bad, until you realize that the spill to tempdb probably occurs for each of the 41 partitions. As a side note, the cardinality estimation error is a little surprising because the system tables accurately show there are 125,000 rows in every partition of T1. Unfortunately, the optimizer uses regular column and index statistics to derive cardinality estimates here rather than system table information (e.g. sys.partitions).

    Collocated Merge Join

    We will never know how well the collocated parallel hash join plan might have worked without the cardinality estimation error (and the resulting 41 spills to tempdb), but we do know:

        Merge join does not require a memory grant; and
        Merge join was the optimizer's preferred join option for a single partition join

    Putting this all together, what we would really like to see is the same collocated join strategy, but using merge join instead of hash join. Unfortunately, the current query optimizer cannot produce a collocated merge join; it only knows how to do collocated hash join. So where does this leave us?

    CROSS APPLY sys.partitions

    We can try to write our own collocated join query. We can use sys.partitions to find the partition numbers, and CROSS APPLY to get a count per partition, with a final step to sum the partial counts. The following query implements this idea:

        SELECT
            row_count = SUM(Subtotals.cnt)
        FROM
        (
            -- Partition numbers
            SELECT p.partition_number
            FROM sys.partitions AS p
            WHERE
                p.[object_id] = OBJECT_ID(N'T1', N'U')
                AND p.index_id = 1
        ) AS P
        CROSS APPLY
        (
            -- Count per collocated join
            SELECT cnt = COUNT_BIG(*)
            FROM dbo.T1 AS T1
            JOIN dbo.T2 AS T2
                ON T2.TID = T1.TID
            WHERE
                $PARTITION.PFT(T1.TID) = p.partition_number
                AND $PARTITION.PFT(T2.TID) = p.partition_number
        ) AS SubTotals;

    The cardinality estimates aren't all that good here, especially the estimate for the scan of the system table underlying the sys.partitions view. Nevertheless, the plan shape is heading toward where we would like to be. Each partition number from the system table results in a per-partition scan of T1 and T2, a one-to-many Merge Join, and a Stream Aggregate to compute the partial counts. The final Stream Aggregate just sums the partial counts. Execution time for this query is around 3,500ms, with the same IO statistics as always. This compares favourably with 5,000ms for the serial plan produced by the optimizer with the OPTION (MERGE JOIN) hint. This is another case of the sum of the parts being less than the whole: summing 41 partial counts from 41 single-partition merge joins is faster than a single merge join and count over all partitions. Even so, this single-threaded collocated merge join is not as quick as the original parallel hash join plan, which executed in 2,600ms. On the positive side, our collocated merge join uses only one logical processor and requires no memory grant. The parallel hash join plan used 16 threads and reserved 569 MB of memory.

    Using a Temporary Table

    Our collocated merge join plan should benefit from parallelism. The reason parallelism is not being used is that the query references a system table.
    We can work around that by writing the partition numbers to a temporary table (or table variable):

        SET STATISTICS IO ON;
        DECLARE @s datetime2 = SYSUTCDATETIME();

        CREATE TABLE #P
        (
            partition_number integer PRIMARY KEY
        );

        INSERT #P (partition_number)
        SELECT p.partition_number
        FROM sys.partitions AS p
        WHERE
            p.[object_id] = OBJECT_ID(N'T1', N'U')
            AND p.index_id = 1;

        SELECT
            row_count = SUM(Subtotals.cnt)
        FROM #P AS p
        CROSS APPLY
        (
            SELECT cnt = COUNT_BIG(*)
            FROM dbo.T1 AS T1
            JOIN dbo.T2 AS T2
                ON T2.TID = T1.TID
            WHERE
                $PARTITION.PFT(T1.TID) = p.partition_number
                AND $PARTITION.PFT(T2.TID) = p.partition_number
        ) AS SubTotals;

        DROP TABLE #P;

        SELECT DATEDIFF(Millisecond, @s, SYSUTCDATETIME());
        SET STATISTICS IO OFF;

    Using the temporary table adds a few logical reads, but the overall execution time is still around 3500ms, indistinguishable from the same query without the temporary table. The problem is that the query optimizer still doesn't choose a parallel plan for this query, though the removal of the system table reference means that it could if it chose to. In fact the optimizer did enter the parallel plan phase of query optimization (running search 1 for a second time). Unfortunately, the parallel plan found seemed to be more expensive than the serial plan. This is a crazy result, caused by the optimizer's cost model not reducing operator CPU costs on the inner side of a nested loops join. Don't get me started on that, we'll be here all night. In this plan, everything expensive happens on the inner side of a nested loops join. Without a CPU cost reduction to compensate for the added cost of exchange operators, candidate parallel plans always look more expensive to the optimizer than the equivalent serial plan.

    Parallel Collocated Merge Join

    We can produce the desired parallel plan using trace flag 8649 again:

        SELECT
            row_count = SUM(Subtotals.cnt)
        FROM #P AS p
        CROSS APPLY
        (
            SELECT cnt = COUNT_BIG(*)
            FROM dbo.T1 AS T1
            JOIN dbo.T2 AS T2
                ON T2.TID = T1.TID
            WHERE
                $PARTITION.PFT(T1.TID) = p.partition_number
                AND $PARTITION.PFT(T2.TID) = p.partition_number
        ) AS SubTotals
        OPTION (QUERYTRACEON 8649);

    One difference between this plan and the collocated hash join plan is that a Repartition Streams exchange operator is used instead of Distribute Streams. The effect is similar, though not quite identical. The Repartition uses round-robin partitioning, meaning the next partition id is pushed to the next thread in sequence. The Distribute Streams exchange seen earlier used Demand partitioning, meaning the next partition id is pulled across the exchange by the next thread that is ready for more work. There are subtle performance implications for each partitioning option, but going into that would again take us too far off the main point of this post.

    Performance

    The important thing is the performance of this parallel collocated merge join: just 1350ms on a typical run. The list below shows all the alternatives from this post (all timings include creation, population, and deletion of the temporary table where appropriate) from quickest to slowest:

        Collocated parallel merge join: 1350ms
        Parallel hash join: 2600ms
        Collocated serial merge join: 3500ms
        Serial merge join: 5000ms
        Parallel merge join: 8400ms
        Collocated parallel hash join: 25,300ms (hash spill per partition)

    The parallel collocated merge join requires no memory grant (aside from a paltry 1.2MB used for exchange buffers).
    This plan uses 16 threads at DOP 8, but 8 of those are (rather pointlessly) allocated to the parallel scan of the temporary table. These are minor concerns, but it turns out there is a way to address them if it bothers you.

    Parallel Collocated Merge Join with Demand Partitioning

    This final tweak replaces the temporary table with a hard-coded list of partition ids (dynamic SQL could be used to generate this query from sys.partitions):

        SELECT
            row_count = SUM(Subtotals.cnt)
        FROM
        (
            VALUES
                (1),(2),(3),(4),(5),(6),(7),(8),(9),(10),
                (11),(12),(13),(14),(15),(16),(17),(18),(19),(20),
                (21),(22),(23),(24),(25),(26),(27),(28),(29),(30),
                (31),(32),(33),(34),(35),(36),(37),(38),(39),(40),(41)
        ) AS P (partition_number)
        CROSS APPLY
        (
            SELECT cnt = COUNT_BIG(*)
            FROM dbo.T1 AS T1
            JOIN dbo.T2 AS T2
                ON T2.TID = T1.TID
            WHERE
                $PARTITION.PFT(T1.TID) = p.partition_number
                AND $PARTITION.PFT(T2.TID) = p.partition_number
        ) AS SubTotals
        OPTION (QUERYTRACEON 8649);

    The manual rewrite has another advantage that has not been mentioned so far: the partial counts (per partition) can be computed earlier than the partial counts (per thread) in the optimizer's collocated join plan. The earlier aggregation is performed by the extra Stream Aggregate under the nested loops join. The performance of the parallel collocated merge join is unchanged at around 1350ms.

    Final Words

    It is a shame that the current query optimizer does not consider a collocated merge join (Connect item closed as Won't Fix). The example used in this post showed an improvement in execution time from 2600ms to 1350ms using a modestly-sized data set and limited parallelism. In addition, the memory requirement for the query was almost completely eliminated: down from 569MB to 1.2MB. The problem with the parallel hash join selected by the optimizer is that it attempts to process the full data set all at once (albeit using eight threads). It requires a large memory grant to hold all 5 million rows from table T1 across the eight hash tables, and does not take advantage of the divide-and-conquer opportunity offered by the common partitioning. The great thing about the collocated join strategies is that each parallel thread works on a single partition from both tables, reading rows, performing the join, and computing a per-partition subtotal, before moving on to a new partition.

    From a thread's point of view...

    If you have trouble visualizing what is happening from just looking at the parallel collocated merge join execution plan, let's look at it again, but from the point of view of just one thread operating between the two Parallelism (exchange) operators. Our thread picks up a single partition id from the Distribute Streams exchange, and starts a merge join using ordered rows from partition 1 of table T1 and partition 1 of table T2. By definition, this is all happening on a single thread. As rows join, they are added to a (per-partition) count in the Stream Aggregate immediately above the Merge Join. Eventually, either T1 (partition 1) or T2 (partition 1) runs out of rows and the merge join stops. The per-partition count from the aggregate passes on through the Nested Loops join to another Stream Aggregate, which is maintaining a per-thread subtotal. Our same thread now picks up a new partition id from the exchange (say it gets id 9 this time).
Comparing the actual execution plan with the parallel collocated hash join plan seen earlier, the manual rewrite has another advantage that has not been mentioned so far: the partial counts (per partition) can be computed earlier than the partial counts (per thread) in the optimizer’s collocated join plan. The earlier aggregation is performed by the extra Stream Aggregate under the nested loops join. The performance of the parallel collocated merge join is unchanged at around 1350ms.

Final Words

It is a shame that the current query optimizer does not consider a collocated merge join (Connect item closed as Won’t Fix). The example used in this post showed an improvement in execution time from 2600ms to 1350ms using a modestly-sized data set and limited parallelism. In addition, the memory requirement for the query was almost completely eliminated – down from 569MB to 1.2MB.

The problem with the parallel hash join selected by the optimizer is that it attempts to process the full data set all at once (albeit using eight threads). It requires a large memory grant to hold all 5 million rows from table T1 across the eight hash tables, and does not take advantage of the divide-and-conquer opportunity offered by the common partitioning.

The great thing about the collocated join strategies is that each parallel thread works on a single partition from both tables, reading rows, performing the join, and computing a per-partition subtotal, before moving on to a new partition.

From a thread’s point of view…

If you have trouble visualizing what is happening from just looking at the parallel collocated merge join execution plan, let’s look at it again, but from the point of view of just one thread operating between the two Parallelism (exchange) operators.

Our thread picks up a single partition id from the Distribute Streams exchange, and starts a merge join using ordered rows from partition 1 of table T1 and partition 1 of table T2. By definition, this is all happening on a single thread. As rows join, they are added to a (per-partition) count in the Stream Aggregate immediately above the Merge Join. Eventually, either T1 (partition 1) or T2 (partition 1) runs out of rows and the merge join stops. The per-partition count from the aggregate passes on through the Nested Loops join to another Stream Aggregate, which is maintaining a per-thread subtotal.

Our same thread now picks up a new partition id from the exchange (say it gets id 9 this time). The count in the per-partition aggregate is reset to zero, and the processing of partition 9 of both tables proceeds just as it did for partition 1, and on the same thread. Each thread picks up a single partition id and processes all the data for that partition, completely independently from other threads working on other partitions. One thread might eventually process partitions (1, 9, 17, 25, 33, 41) while another is concurrently processing partitions (2, 10, 18, 26, 34) and so on for the other six threads at DOP 8. The point is that all 8 threads can execute independently and concurrently, continuing to process new partitions until the wider job (of which the thread has no knowledge!) is done.

This divide-and-conquer technique can be much more efficient than simply splitting the entire workload across eight threads all at once.

Related Reading

Understanding and Using Parallelism in SQL Server
Parallel Execution Plans Suck

© 2013 Paul White – All Rights Reserved
Twitter: @SQL_Kiwi


  • Error in running script [closed]

    - by SWEngineer
I'm trying to run heathusf_v1.1.0.tar.gz found here. I installed tcsh to make build_heathusf work, but when I run ./build_heathusf I get the following output (on a Fedora Linux system, from a terminal):

$ ./build_heathusf
Compiling programs to build a library of image processing functions.
convexpolyscan.c: In function ‘cdelete’:
convexpolyscan.c:346:5: warning: incompatible implicit declaration of built-in function ‘bcopy’ [enabled by default]
myalloc.c: In function ‘mycalloc’:
myalloc.c:68:16: error: invalid storage class for function ‘store_link’
myalloc.c: In function ‘mymalloc’:
myalloc.c:101:16: error: invalid storage class for function ‘store_link’
myalloc.c: In function ‘myfree’:
myalloc.c:129:27: error: invalid storage class for function ‘find_link’
myalloc.c:131:12: warning: assignment makes pointer from integer without a cast [enabled by default]
myalloc.c: At top level:
myalloc.c:150:13: warning: conflicting types for ‘store_link’ [enabled by default]
myalloc.c:150:13: error: static declaration of ‘store_link’ follows non-static declaration
myalloc.c:91:4: note: previous implicit declaration of ‘store_link’ was here
myalloc.c:164:24: error: conflicting types for ‘find_link’
myalloc.c:131:14: note: previous implicit declaration of ‘find_link’ was here

Building the mammogram resizing program.
gcc -O2 -I. -I../common mkimage.o -o mkimage -L../common -lmammo -lm
../common/libmammo.a(aggregate.o): In function `aggregate':
aggregate.c:(.text+0x7fa): undefined reference to `mycalloc'
aggregate.c:(.text+0x81c): undefined reference to `mycalloc'
aggregate.c:(.text+0x868): undefined reference to `mycalloc'
../common/libmammo.a(aggregate.o): In function `aggregate_median':
aggregate.c:(.text+0xbc5): undefined reference to `mymalloc'
aggregate.c:(.text+0xbfb): undefined reference to `mycalloc'
aggregate.c:(.text+0xc3c): undefined reference to `mycalloc'
../common/libmammo.a(aggregate.o): In function `aggregate':
aggregate.c:(.text+0x9b5): undefined reference to `myfree'
../common/libmammo.a(aggregate.o): In function `aggregate_median':
aggregate.c:(.text+0xd85): undefined reference to `myfree'
../common/libmammo.a(optical_density.o): In function `linear_optical_density':
optical_density.c:(.text+0x29e): undefined reference to `mymalloc'
optical_density.c:(.text+0x342): undefined reference to `mycalloc'
optical_density.c:(.text+0x383): undefined reference to `mycalloc'
../common/libmammo.a(optical_density.o): In function `log10_optical_density':
optical_density.c:(.text+0x693): undefined reference to `mymalloc'
optical_density.c:(.text+0x74f): undefined reference to `mycalloc'
optical_density.c:(.text+0x790): undefined reference to `mycalloc'
../common/libmammo.a(optical_density.o): In function `map_with_ushort_lut':
optical_density.c:(.text+0xb2e): undefined reference to `mymalloc'
optical_density.c:(.text+0xb87): undefined reference to `mycalloc'
optical_density.c:(.text+0xbc6): undefined reference to `mycalloc'
../common/libmammo.a(optical_density.o): In function `linear_optical_density':
optical_density.c:(.text+0x4d9): undefined reference to `myfree'
../common/libmammo.a(optical_density.o): In function `log10_optical_density':
optical_density.c:(.text+0x8f1): undefined reference to `myfree'
../common/libmammo.a(optical_density.o): In function `map_with_ushort_lut':
optical_density.c:(.text+0xd0d): undefined reference to `myfree'
../common/libmammo.a(virtual_image.o): In function `deallocate_cached_image':
virtual_image.c:(.text+0x3dc6): undefined reference to `myfree'
virtual_image.c:(.text+0x3dd7): undefined reference to `myfree'
../common/libmammo.a(virtual_image.o):virtual_image.c:(.text+0x3de5): more undefined references to `myfree' follow
../common/libmammo.a(virtual_image.o): In function `allocate_cached_image':
virtual_image.c:(.text+0x4233): undefined reference to `mycalloc'
virtual_image.c:(.text+0x4253): undefined reference to `mymalloc'
virtual_image.c:(.text+0x4275): undefined reference to `mycalloc'
virtual_image.c:(.text+0x42e7): undefined reference to `mycalloc'
virtual_image.c:(.text+0x44f9): undefined reference to `mycalloc'
virtual_image.c:(.text+0x47a9): undefined reference to `mycalloc'
virtual_image.c:(.text+0x4a45): undefined reference to `mycalloc'
virtual_image.c:(.text+0x4af4): undefined reference to `myfree'
collect2: error: ld returned 1 exit status
make: *** [mkimage] Error 1

Building the breast segmentation program.
gcc -O2 -I. -I../common breastsegment.o segment.o -o breastsegment -L../common -lmammo -lm
breastsegment.o: In function `render_segmentation_sketch':
breastsegment.c:(.text+0x43): undefined reference to `mycalloc'
breastsegment.c:(.text+0x58): undefined reference to `mycalloc'
breastsegment.c:(.text+0x12f): undefined reference to `mycalloc'
breastsegment.c:(.text+0x1b9): undefined reference to `myfree'
breastsegment.c:(.text+0x1c6): undefined reference to `myfree'
breastsegment.c:(.text+0x1e1): undefined reference to `myfree'
segment.o: In function `find_center':
segment.c:(.text+0x53): undefined reference to `mycalloc'
segment.c:(.text+0x71): undefined reference to `mycalloc'
segment.c:(.text+0x387): undefined reference to `myfree'
segment.o: In function `bordercode':
segment.c:(.text+0x4ac): undefined reference to `mycalloc'
segment.c:(.text+0x546): undefined reference to `mycalloc'
segment.c:(.text+0x651): undefined reference to `mycalloc'
segment.c:(.text+0x691): undefined reference to `myfree'
segment.o: In function `estimate_tissue_image':
segment.c:(.text+0x10d4): undefined reference to `mycalloc'
segment.c:(.text+0x14da): undefined reference to `mycalloc'
segment.c:(.text+0x1698): undefined reference to `mycalloc'
segment.c:(.text+0x1834): undefined reference to `mycalloc'
segment.c:(.text+0x1850): undefined reference to `mycalloc'
segment.o:segment.c:(.text+0x186a): more undefined references to `mycalloc' follow
segment.o: In function `estimate_tissue_image':
segment.c:(.text+0x1bbc): undefined reference to `myfree'
segment.c:(.text+0x1c4a): undefined reference to `mycalloc'
segment.c:(.text+0x1c7c): undefined reference to `mycalloc'
segment.c:(.text+0x1d8e): undefined reference to `myfree'
segment.c:(.text+0x1d9b): undefined reference to `myfree'
segment.c:(.text+0x1da8): undefined reference to `myfree'
segment.c:(.text+0x1dba): undefined reference to `myfree'
segment.c:(.text+0x1dc9): undefined reference to `myfree'
segment.o:segment.c:(.text+0x1dd8): more undefined references to `myfree' follow
segment.o: In function `estimate_tissue_image':
segment.c:(.text+0x20bf): undefined reference to `mycalloc'
segment.o: In function `segment_breast':
segment.c:(.text+0x24cd): undefined reference to `mycalloc'
segment.o: In function `find_center':
segment.c:(.text+0x3a4): undefined reference to `myfree'
segment.o: In function `bordercode':
segment.c:(.text+0x6ac): undefined reference to `myfree'
../common/libmammo.a(aggregate.o): In function `aggregate':
aggregate.c:(.text+0x7fa): undefined reference to `mycalloc'
aggregate.c:(.text+0x81c): undefined reference to `mycalloc'
aggregate.c:(.text+0x868): undefined reference to `mycalloc'
../common/libmammo.a(aggregate.o): In function `aggregate_median':
aggregate.c:(.text+0xbc5): undefined reference to `mymalloc'
aggregate.c:(.text+0xbfb): undefined reference to `mycalloc'
aggregate.c:(.text+0xc3c): undefined reference to `mycalloc'
../common/libmammo.a(aggregate.o): In function `aggregate':
aggregate.c:(.text+0x9b5): undefined reference to `myfree'
../common/libmammo.a(aggregate.o): In function `aggregate_median':
aggregate.c:(.text+0xd85): undefined reference to `myfree'
../common/libmammo.a(cc_label.o): In function `cc_label':
cc_label.c:(.text+0x20c): undefined reference to `mycalloc'
cc_label.c:(.text+0x6c2): undefined reference to `mycalloc'
cc_label.c:(.text+0xbaa): undefined reference to `myfree'
../common/libmammo.a(cc_label.o): In function `cc_label_0bkgd':
cc_label.c:(.text+0xe17): undefined reference to `mycalloc'
cc_label.c:(.text+0x12d7): undefined reference to `mycalloc'
cc_label.c:(.text+0x17e7): undefined reference to `myfree'
../common/libmammo.a(cc_label.o): In function `cc_relabel_by_intensity':
cc_label.c:(.text+0x18c5): undefined reference to `mycalloc'
../common/libmammo.a(cc_label.o): In function `cc_label_4connect':
cc_label.c:(.text+0x1cf0): undefined reference to `mycalloc'
cc_label.c:(.text+0x2195): undefined reference to `mycalloc'
cc_label.c:(.text+0x26a4): undefined reference to `myfree'
../common/libmammo.a(cc_label.o): In function `cc_relabel_by_intensity':
cc_label.c:(.text+0x1b06): undefined reference to `myfree'
../common/libmammo.a(convexpolyscan.o): In function `polyscan_coords':
convexpolyscan.c:(.text+0x6f0): undefined reference to `mycalloc'
convexpolyscan.c:(.text+0x75f): undefined reference to `mycalloc'
convexpolyscan.c:(.text+0x7ab): undefined reference to `myfree'
convexpolyscan.c:(.text+0x7b8): undefined reference to `myfree'
../common/libmammo.a(convexpolyscan.o): In function `polyscan_poly_cacheim':
convexpolyscan.c:(.text+0x805): undefined reference to `mycalloc'
convexpolyscan.c:(.text+0x894): undefined reference to `myfree'
../common/libmammo.a(mikesfileio.o): In function `read_segmentation_file':
mikesfileio.c:(.text+0x1e9): undefined reference to `mycalloc'
mikesfileio.c:(.text+0x205): undefined reference to `mycalloc'
../common/libmammo.a(optical_density.o): In function `linear_optical_density':
optical_density.c:(.text+0x29e): undefined reference to `mymalloc'
optical_density.c:(.text+0x342): undefined reference to `mycalloc'
optical_density.c:(.text+0x383): undefined reference to `mycalloc'
../common/libmammo.a(optical_density.o): In function `log10_optical_density':
optical_density.c:(.text+0x693): undefined reference to `mymalloc'
optical_density.c:(.text+0x74f): undefined reference to `mycalloc'
optical_density.c:(.text+0x790): undefined reference to `mycalloc'
../common/libmammo.a(optical_density.o): In function `map_with_ushort_lut':
optical_density.c:(.text+0xb2e): undefined reference to `mymalloc'
optical_density.c:(.text+0xb87): undefined reference to `mycalloc'
optical_density.c:(.text+0xbc6): undefined reference to `mycalloc'
../common/libmammo.a(optical_density.o): In function `linear_optical_density':
optical_density.c:(.text+0x4d9): undefined reference to `myfree'
../common/libmammo.a(optical_density.o): In function `log10_optical_density':
optical_density.c:(.text+0x8f1): undefined reference to `myfree'
../common/libmammo.a(optical_density.o): In function `map_with_ushort_lut':
optical_density.c:(.text+0xd0d): undefined reference to `myfree'
../common/libmammo.a(virtual_image.o): In function `deallocate_cached_image':
virtual_image.c:(.text+0x3dc6): undefined reference to `myfree'
virtual_image.c:(.text+0x3dd7): undefined reference to `myfree'
../common/libmammo.a(virtual_image.o):virtual_image.c:(.text+0x3de5): more undefined references to `myfree' follow
../common/libmammo.a(virtual_image.o): In function `allocate_cached_image':
virtual_image.c:(.text+0x4233): undefined reference to `mycalloc'
virtual_image.c:(.text+0x4253): undefined reference to `mymalloc'
virtual_image.c:(.text+0x4275): undefined reference to `mycalloc'
virtual_image.c:(.text+0x42e7): undefined reference to `mycalloc'
virtual_image.c:(.text+0x44f9): undefined reference to `mycalloc'
virtual_image.c:(.text+0x47a9): undefined reference to `mycalloc'
virtual_image.c:(.text+0x4a45): undefined reference to `mycalloc'
virtual_image.c:(.text+0x4af4): undefined reference to `myfree'
collect2: error: ld returned 1 exit status
make: *** [breastsegment] Error 1

Building the mass feature generation program.
gcc -O2 -I. -I../common afumfeature.o -o afumfeature -L../common -lmammo -lm
afumfeature.o: In function `afum_process':
afumfeature.c:(.text+0xd80): undefined reference to `mycalloc'
afumfeature.c:(.text+0xd9c): undefined reference to `mycalloc'
afumfeature.c:(.text+0xe80): undefined reference to `mycalloc'
afumfeature.c:(.text+0x11f8): undefined reference to `myfree'
afumfeature.c:(.text+0x1207): undefined reference to `myfree'
afumfeature.c:(.text+0x1214): undefined reference to `myfree'
../common/libmammo.a(aggregate.o): In function `aggregate':
aggregate.c:(.text+0x7fa): undefined reference to `mycalloc'
aggregate.c:(.text+0x81c): undefined reference to `mycalloc'
aggregate.c:(.text+0x868): undefined reference to `mycalloc'
../common/libmammo.a(aggregate.o): In function `aggregate_median':
aggregate.c:(.text+0xbc5): undefined reference to `mymalloc'
aggregate.c:(.text+0xbfb): undefined reference to `mycalloc'
aggregate.c:(.text+0xc3c): undefined reference to `mycalloc'
../common/libmammo.a(aggregate.o): In function `aggregate':
aggregate.c:(.text+0x9b5): undefined reference to `myfree'
../common/libmammo.a(aggregate.o): In function `aggregate_median':
aggregate.c:(.text+0xd85): undefined reference to `myfree'
../common/libmammo.a(convexpolyscan.o): In function `polyscan_coords':
convexpolyscan.c:(.text+0x6f0): undefined reference to `mycalloc'
convexpolyscan.c:(.text+0x75f): undefined reference to `mycalloc'
convexpolyscan.c:(.text+0x7ab): undefined reference to `myfree'
convexpolyscan.c:(.text+0x7b8): undefined reference to `myfree'
../common/libmammo.a(convexpolyscan.o): In function `polyscan_poly_cacheim':
convexpolyscan.c:(.text+0x805): undefined reference to `mycalloc'
convexpolyscan.c:(.text+0x894): undefined reference to `myfree'
../common/libmammo.a(mikesfileio.o): In function `read_segmentation_file':
mikesfileio.c:(.text+0x1e9): undefined reference to `mycalloc'
mikesfileio.c:(.text+0x205): undefined reference to `mycalloc'
../common/libmammo.a(optical_density.o): In function `linear_optical_density':
optical_density.c:(.text+0x29e): undefined reference to `mymalloc'
optical_density.c:(.text+0x342): undefined reference to `mycalloc'
optical_density.c:(.text+0x383): undefined reference to `mycalloc'
../common/libmammo.a(optical_density.o): In function `log10_optical_density':
optical_density.c:(.text+0x693): undefined reference to `mymalloc'
optical_density.c:(.text+0x74f): undefined reference to `mycalloc'
optical_density.c:(.text+0x790): undefined reference to `mycalloc'
../common/libmammo.a(optical_density.o): In function `map_with_ushort_lut':
optical_density.c:(.text+0xb2e): undefined reference to `mymalloc'
optical_density.c:(.text+0xb87): undefined reference to `mycalloc'
optical_density.c:(.text+0xbc6): undefined reference to `mycalloc'
../common/libmammo.a(optical_density.o): In function `linear_optical_density':
optical_density.c:(.text+0x4d9): undefined reference to `myfree'
../common/libmammo.a(optical_density.o): In function `log10_optical_density':
optical_density.c:(.text+0x8f1): undefined reference to `myfree'
../common/libmammo.a(optical_density.o): In function `map_with_ushort_lut':
optical_density.c:(.text+0xd0d): undefined reference to `myfree'
../common/libmammo.a(virtual_image.o): In function `deallocate_cached_image':
virtual_image.c:(.text+0x3dc6): undefined reference to `myfree'
virtual_image.c:(.text+0x3dd7): undefined reference to `myfree'
../common/libmammo.a(virtual_image.o):virtual_image.c:(.text+0x3de5): more undefined references to `myfree' follow
../common/libmammo.a(virtual_image.o): In function `allocate_cached_image':
virtual_image.c:(.text+0x4233): undefined reference to `mycalloc'
virtual_image.c:(.text+0x4253): undefined reference to `mymalloc'
virtual_image.c:(.text+0x4275): undefined reference to `mycalloc'
virtual_image.c:(.text+0x42e7): undefined reference to `mycalloc'
virtual_image.c:(.text+0x44f9): undefined reference to `mycalloc'
virtual_image.c:(.text+0x47a9): undefined reference to `mycalloc'
virtual_image.c:(.text+0x4a45): undefined reference to `mycalloc'
virtual_image.c:(.text+0x4af4): undefined reference to `myfree'
collect2: error: ld returned 1 exit status
make: *** [afumfeature] Error 1

Building the mass detection program.
make: Nothing to be done for `all'.

Building the performance evaluation program.
gcc -O2 -I. -I../common DDSMeval.o polyscan.o -o DDSMeval -L../common -lmammo -lm
../common/libmammo.a(mikesfileio.o): In function `read_segmentation_file':
mikesfileio.c:(.text+0x1e9): undefined reference to `mycalloc'
mikesfileio.c:(.text+0x205): undefined reference to `mycalloc'
collect2: error: ld returned 1 exit status
make: *** [DDSMeval] Error 1

Building the template creation program.
gcc -O2 -I. -I../common mktemplate.o polyscan.o -o mktemplate -L../common -lmammo -lm

Building the drawimage program.
gcc -O2 -I. -I../common drawimage.o -o drawimage -L../common -lmammo -lm
../common/libmammo.a(mikesfileio.o): In function `read_segmentation_file':
mikesfileio.c:(.text+0x1e9): undefined reference to `mycalloc'
mikesfileio.c:(.text+0x205): undefined reference to `mycalloc'
collect2: error: ld returned 1 exit status
make: *** [drawimage] Error 1

Building the compression/decompression program jpeg.
gcc -O2 -DSYSV -DNOTRUNCATE -c lexer.c
lexer.c:41:1: error: initializer element is not constant
lexer.c:41:1: error: (near initialization for ‘yyin’)
lexer.c:41:1: error: initializer element is not constant
lexer.c:41:1: error: (near initialization for ‘yyout’)
lexer.c: In function ‘initparser’:
lexer.c:387:21: warning: incompatible implicit declaration of built-in function ‘strlen’ [enabled by default]
lexer.c: In function ‘MakeLink’:
lexer.c:443:16: warning: incompatible implicit declaration of built-in function ‘malloc’ [enabled by default]
lexer.c:447:7: warning: incompatible implicit declaration of built-in function ‘exit’ [enabled by default]
lexer.c:452:7: warning: incompatible implicit declaration of built-in function ‘exit’ [enabled by default]
lexer.c:455:34: warning: incompatible implicit declaration of built-in function ‘calloc’ [enabled by default]
lexer.c:458:7: warning: incompatible implicit declaration of built-in function ‘exit’ [enabled by default]
lexer.c:460:3: warning: incompatible implicit declaration of built-in function ‘strcpy’ [enabled by default]
lexer.c: In function ‘getstr’:
lexer.c:548:26: warning: incompatible implicit declaration of built-in function ‘malloc’ [enabled by default]
lexer.c:552:4: warning: incompatible implicit declaration of built-in function ‘exit’ [enabled by default]
lexer.c:557:21: warning: incompatible implicit declaration of built-in function ‘calloc’ [enabled by default]
lexer.c:557:28: warning: incompatible implicit declaration of built-in function ‘strlen’ [enabled by default]
lexer.c:561:7: warning: incompatible implicit declaration of built-in function ‘exit’ [enabled by default]
lexer.c: In function ‘parser’:
lexer.c:794:21: warning: incompatible implicit declaration of built-in function ‘calloc’ [enabled by default]
lexer.c:798:8: warning: incompatible implicit declaration of built-in function ‘exit’ [enabled by default]
lexer.c:1074:21: warning: incompatible implicit declaration of built-in function ‘calloc’ [enabled by default]
lexer.c:1078:8: warning: incompatible implicit declaration of built-in function ‘exit’ [enabled by default]
lexer.c:1116:21: warning: incompatible implicit declaration of built-in function ‘calloc’ [enabled by default]
lexer.c:1120:8: warning: incompatible implicit declaration of built-in function ‘exit’ [enabled by default]
lexer.c:1154:25: warning: incompatible implicit declaration of built-in function ‘calloc’ [enabled by default]
lexer.c:1158:5: warning: incompatible implicit declaration of built-in function ‘exit’ [enabled by default]
lexer.c:1190:5: warning: incompatible implicit declaration of built-in function ‘exit’ [enabled by default]
lexer.c:1247:25: warning: incompatible implicit declaration of built-in function ‘calloc’ [enabled by default]
lexer.c:1251:5: warning: incompatible implicit declaration of built-in function ‘exit’ [enabled by default]
lexer.c:1283:5: warning: incompatible implicit declaration of built-in function ‘exit’ [enabled by default]
lexer.c: In function ‘yylook’:
lexer.c:1867:9: warning: cast from pointer to integer of different size [-Wpointer-to-int-cast]
lexer.c:1867:20: warning: cast from pointer to integer of different size [-Wpointer-to-int-cast]
lexer.c:1877:12: warning: cast from pointer to integer of different size [-Wpointer-to-int-cast]
lexer.c:1877:23: warning: cast from pointer to integer of different size [-Wpointer-to-int-cast]
make: *** [lexer.o] Error 1


  • MD5 implementation notes

    - by vaasu
While going through RFC 1321, I came across the following paragraph:

"This step uses a 64-element table T[1 ... 64] constructed from the sine function. Let T[i] denote the i-th element of the table, which is equal to the integer part of 4294967296 times abs(sin(i)), where i is in radians. The elements of the table are given in the appendix."

From what I understood from the paragraph, it means T[i] = integer_part(4294967296 × abs(sin(i))). We know that 0 <= abs(sin(x)) <= 1 is true for all x. Since i is an integer, abs(sin(i)) could well be 0 for every value of i, which would mean the table contains all zero values (4294967296 times 0 is 0). In actual implementations this is not true. Why is this so? The appendix contains just the raw values after the calculation; it does not show how they are derived from the sine function.
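The confusion dissolves once two facts are noted: sin here means the sine of i radians, and sin(x) = 0 only when x is an integer multiple of π, which no positive integer is; and the integer part is taken after multiplying by 4294967296 = 2^32, not before. So abs(sin(i)) is strictly between 0 and 1 for i = 1 ... 64, and scaling by 2^32 produces large constants. A quick sketch reproducing the first few table entries, written in T-SQL purely for consistency with the rest of this page (any language with a double-precision sine function gives the same values):

-- T(i) = integer part of 2^32 * abs(sin(i)), i in radians.
SELECT
    n.i,
    [abs(sin(i))] = ABS(SIN(1.0 * n.i)),
    [T(i)]        = CAST(FLOOR(4294967296.0 * ABS(SIN(1.0 * n.i))) AS bigint)
FROM (VALUES (1), (2), (3), (4)) AS n (i);

-- Returns 3614090360, 3905402710, 606105819, 3250441966, which are
-- 0xD76AA478, 0xE8C7B756, 0x242070DB, 0xC1BDCEEE - the first four
-- constants listed in the RFC 1321 appendix.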


  • So…is it a Seek or a Scan?

    - by Paul White
You’re probably most familiar with the terms ‘Seek’ and ‘Scan’ from the graphical plans produced by SQL Server Management Studio (SSMS). The image to the left shows the most common ones, with the three types of scan at the top, followed by four types of seek. You might look to the SSMS tool-tip descriptions to explain the differences between them:

Not hugely helpful are they? Both mention scans and ranges (nothing about seeks) and the Index Seek description implies that it will not scan the index entirely (which isn’t necessarily true).

Recall also yesterday’s post where we saw two Clustered Index Seek operations doing very different things. The first Seek performed 63 single-row seeking operations; and the second performed a ‘Range Scan’ (more on those later in this post). I hope you agree that those were two very different operations, and perhaps you are wondering why there aren’t different graphical plan icons for Range Scans and Seeks? I have often wondered about that, and the first person to mention it after yesterday’s post was Erin Stellato (twitter | blog).

Before we go on to make sense of all this, let’s look at another example of how SQL Server confusingly mixes the terms ‘Scan’ and ‘Seek’ in different contexts. The diagram below shows a very simple heap table with two columns, one of which is the non-clustered Primary Key, and the other has a non-unique non-clustered index defined on it. The right hand side of the diagram shows a simple query, its associated query plan, and a couple of extracts from the SSMS tool-tip and Properties windows.

Notice the ‘scan direction’ entry in the Properties window snippet. Is this a seek or a scan? The different references to Scans and Seeks are even more pronounced in the XML plan output that the graphical plan is based on – the fragment behind the single Index Seek icon mixes both terms. You’ll find the same confusing references to Seeks and Scans throughout the product and its documentation.

Making Sense of Seeks

Let’s forget all about scans for a moment, and think purely about seeks. Loosely speaking, a seek is the process of navigating an index B-tree to find a particular index record, most often at the leaf level. A seek starts at the root and navigates down through the levels of the index to find the point of interest.

Singleton Lookups

The simplest sort of seek predicate performs this traversal to find (at most) a single record. This is the case when we search for a single value using a unique index and an equality predicate. It should be readily apparent that this type of search will either find one record, or none at all. This operation is known as a singleton lookup. Given the example table from before, the following query is an example of a singleton lookup seek:
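(The query below is the ‘Singleton lookup’ example from the script at the end of this post: a single-value equality seek on the unique primary key.)

SELECT E.key_col
FROM dbo.Example AS E
WHERE E.key_col = 32;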
Sadly, there’s nothing in the graphical plan or XML output to show that this is a singleton lookup – you have to infer it from the fact that this is a single-value equality seek on a unique index. The other common examples of a singleton lookup are bookmark lookups – both the RID and Key Lookup forms are singleton lookups (an RID lookup finds a single record in a heap from the unique row locator, and a Key Lookup does much the same thing on a clustered table). If you happen to run your query with STATISTICS IO ON, you will notice that ‘Scan Count’ is always zero for a singleton lookup.

Range Scans

The other type of seek predicate is a ‘seek plus range scan’, which I will refer to simply as a range scan. The seek operation makes an initial descent into the index structure to find the first leaf row that qualifies, and then performs a range scan (either backwards or forwards in the index) until it reaches the end of the scan range.

The ability of a range scan to proceed in either direction comes about because index pages at the same level are connected by a doubly-linked list – each page has a pointer to the previous page (in logical key order) as well as a pointer to the following page. The doubly-linked list is represented by the green and red dotted arrows in the index diagram presented earlier. One subtle (but important) point is that the notion of a ‘forward’ or ‘backward’ scan applies to the logical key order defined when the index was built. In the present case, the non-clustered primary key index was created as follows:

CREATE TABLE dbo.Example
(
    key_col INTEGER NOT NULL,
    data INTEGER NOT NULL,
    CONSTRAINT [PK dbo.Example key_col]
        PRIMARY KEY NONCLUSTERED (key_col ASC)
)
;

Notice that the primary key index specifies an ascending sort order for the single key column. This means that a forward scan of the index will retrieve keys in ascending order, while a backward scan would retrieve keys in descending key order. If the index had been created instead on key_col DESC, a forward scan would retrieve keys in descending order, and a backward scan would return keys in ascending order.
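To make that last point concrete, here is a small sketch of the descending variant. This is my illustration rather than part of the original post, and dbo.ExampleDesc is a made-up name:

CREATE TABLE dbo.ExampleDesc
(
    key_col INTEGER NOT NULL,
    data INTEGER NOT NULL,
    CONSTRAINT [PK dbo.ExampleDesc key_col]
        PRIMARY KEY NONCLUSTERED (key_col DESC)
)
;
-- Logical key order is now descending, so this ORDER BY matches a
-- forward scan of the index and needs no explicit Sort operator.
SELECT ED.key_col
FROM dbo.ExampleDesc AS ED
WHERE ED.key_col > 95
ORDER BY ED.key_col DESC;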
A range scan seek predicate may have a Start condition, an End condition, or both. Where one is missing, the scan starts (or ends) at one extreme end of the index, depending on the scan direction. Some examples might help clarify that: the following diagram shows four queries, each of which performs a single seek against a column holding every integer from 1 to 100 inclusive. The results from each query are shown in the blue columns, and relevant attributes from the Properties window appear on the right.

Query 1 specifies that all key_col values less than 5 should be returned in ascending order. The query plan achieves this by seeking to the start of the index leaf (there is no explicit starting value) and scanning forward until the End condition (key_col < 5) is no longer satisfied (SQL Server knows it can stop looking as soon as it finds a key_col value that isn’t less than 5 because all later index entries are guaranteed to sort higher).

Query 2 asks for key_col values greater than 95, in descending order. SQL Server returns these results by seeking to the end of the index, and scanning backwards (in descending key order) until it comes across a row that isn’t greater than 95. Sharp-eyed readers may notice that the end-of-scan condition is shown as a Start range value. This is a bug in the XML show plan which bubbles up to the Properties window – when a backward scan is performed, the roles of the Start and End values are reversed, but the plan does not reflect that. Oh well.

Query 3 looks for key_col values that are greater than or equal to 10, and less than 15, in ascending order. This time, SQL Server seeks to the first index record that matches the Start condition (key_col >= 10) and then scans forward through the leaf pages until the End condition (key_col < 15) is no longer met.

Query 4 performs much the same sort of operation as Query 3, but requests the output in descending order. Again, we have to mentally reverse the Start and End conditions because of the bug, but otherwise the process is the same as always: SQL Server finds the highest-sorting record that meets the condition ‘key_col < 25’ and scans backward until ‘key_col >= 20’ is no longer true.

One final point to note: seek operations always have the Ordered: True attribute. This means that the operator always produces rows in a sorted order, either ascending or descending depending on how the index was defined, and whether the scan part of the operation is forward or backward. You cannot rely on this sort order in your queries of course (you must always specify an ORDER BY clause if order is important) but SQL Server can make use of the sort order internally. In the four queries above, the query optimizer was able to avoid an explicit Sort operator to honour the ORDER BY clause, for example.

Multiple Seek Predicates

As we saw yesterday, a single index seek plan operator can contain one or more seek predicates. These seek predicates can either be all singleton seeks or all range scans – SQL Server does not mix them. For example, you might expect the following query to contain two seek predicates, a singleton seek to find the single record in the unique index where key_col = 10, and a range scan to find the key_col values between 15 and 20:

SELECT key_col
FROM dbo.Example
WHERE key_col = 10
OR key_col BETWEEN 15 AND 20
ORDER BY key_col ASC
;

In fact, SQL Server transforms the singleton seek (key_col = 10) to the equivalent range scan, Start:[key_col >= 10], End:[key_col <= 10]. This allows both range scans to be evaluated by a single seek operator. To be clear, this query results in two range scans: one from 10 to 10, and one from 15 to 20.

Final Thoughts

That’s it for today – tomorrow we’ll look at monitoring singleton lookups and range scans, and I’ll show you a seek on a heap table. Yes, a seek. On a heap. Not an index! If you would like to run the queries in this post for yourself, there’s a script below. Thanks for reading!
IF OBJECT_ID(N'dbo.Example', N'U') IS NOT NULL
BEGIN
    DROP TABLE dbo.Example;
END
;
-- Test table is a heap
-- Non-clustered primary key on 'key_col'
CREATE TABLE dbo.Example
(
    key_col INTEGER NOT NULL,
    data INTEGER NOT NULL,
    CONSTRAINT [PK dbo.Example key_col]
        PRIMARY KEY NONCLUSTERED (key_col)
)
;
-- Non-unique non-clustered index on the 'data' column
CREATE NONCLUSTERED INDEX [IX dbo.Example data]
ON dbo.Example (data)
;
-- Add 100 rows
INSERT dbo.Example WITH (TABLOCKX)
(
    key_col,
    data
)
SELECT
    key_col = V.number,
    data = V.number
FROM master.dbo.spt_values AS V
WHERE V.[type] = N'P'
AND V.number BETWEEN 1 AND 100
;
-- ================
-- Singleton lookup
-- ================
;
-- Single value equality seek in a unique index
-- Scan count = 0 when STATISTICS IO is ON
-- Check the XML SHOWPLAN
SELECT E.key_col
FROM dbo.Example AS E
WHERE E.key_col = 32
;
-- ===========
-- Range Scans
-- ===========
;
-- Query 1
SELECT E.key_col
FROM dbo.Example AS E
WHERE E.key_col <= 5
ORDER BY E.key_col ASC
;
-- Query 2
SELECT E.key_col
FROM dbo.Example AS E
WHERE E.key_col > 95
ORDER BY E.key_col DESC
;
-- Query 3
SELECT E.key_col
FROM dbo.Example AS E
WHERE E.key_col >= 10
AND E.key_col < 15
ORDER BY E.key_col ASC
;
-- Query 4
SELECT E.key_col
FROM dbo.Example AS E
WHERE E.key_col >= 20
AND E.key_col < 25
ORDER BY E.key_col DESC
;
-- Final query (singleton + range = 2 range scans)
SELECT E.key_col
FROM dbo.Example AS E
WHERE E.key_col = 10
OR E.key_col BETWEEN 15 AND 20
ORDER BY E.key_col ASC
;
-- === TIDY UP ===
DROP TABLE dbo.Example;

© 2011 Paul White
email: [email protected]
twitter: @SQL_Kiwi

