Search Results

Search found 8916 results on 357 pages for 'bulk insert'.

  • Why does MySQL autoincrement increase on failed inserts?

    - by Sorcy
    A co-worker just made me aware of a very strange MySQL behavior. Assume you have a table with an auto_increment field and another field that is set to unique (e.g. a username field). When trying to insert a row with a username that's already in the table, the insert fails, as expected. Yet the auto_increment value is increased, as can be seen when you insert a valid new entry after several failed attempts. For example, when our last entry looks like this... ID: 10 Username: myname ...and we try five new entries with the same username value, on our next insert we will have created a new row like so: ID: 16 Username: mynewname While this is not a big problem in itself, it seems like a very silly attack vector for killing a table by flooding it with failed insert requests, since the MySQL Reference Manual states: "The behavior of the auto-increment mechanism is not defined if [...] the value becomes bigger than the maximum integer that can be stored in the specified integer type." Is this expected behavior?
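
    A minimal sketch that reproduces the gap (the users table and its columns are hypothetical; with InnoDB a failed duplicate-key insert still consumes an auto_increment value):

        CREATE TABLE users (
            id       INT AUTO_INCREMENT PRIMARY KEY,
            username VARCHAR(50) NOT NULL UNIQUE
        );

        INSERT INTO users (username) VALUES ('myname');     -- gets id 1
        INSERT INTO users (username) VALUES ('myname');     -- duplicate-key error, but id 2 is consumed
        INSERT INTO users (username) VALUES ('mynewname');  -- gets id 3, leaving a gap

    One way to avoid burning values is to attempt the insert only when the name is free, e.g. INSERT ... SELECT ... WHERE NOT EXISTS, accepting the small race between the check and the insert.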

    Read the article

  • IS NULL doesn't work as expected in MSSQL 2000 with no Service Pack on it

    - by user306825
    The following batch executed on different instances of mssql 2000 illustrates the problem. select @@version create table a (a int) create table b (b int) insert into a(a) values (1) insert into a(a) values (2) insert into a(a) values (3) insert into b(b) values (1) insert into b(b) values (2) select * from a left outer join (select 1 as test, b from b) as j on j.b = a.a where j.test IS NULL drop table a drop table b Output 1: Microsoft SQL Server 2000 - 8.00.194 (Intel X86) Aug 6 2000 00:57:48 Copyright (c) 1988-2000 Microsoft Corporation Developer Edition on Windows NT 6.1 (Build 7600: ) (1 row(s) affected) (1 row(s) affected) (1 row(s) affected) (1 row(s) affected) (1 row(s) affected) (1 row(s) affected) a test b (0 row(s) affected) Output 2: Microsoft SQL Server 2000 - 8.00.2039 (Intel X86) May 3 2005 23:18:38 Copyright (c) 1988-2003 Microsoft Corporation Developer Edition on Windows NT 5.2 (Build 3790: Service Pack 2) (1 row(s) affected) (1 row(s) affected) (1 row(s) affected) (1 row(s) affected) (1 row(s) affected) (1 row(s) affected) a test b 3 NULL NULL (1 row(s) affected) If someone encounters the same problem - make sure you have the SP installed!

    Read the article

  • MySQL ignores NOT NULL constraint?

    - by Marga Keuvelaar
    I have created a table with NOT NULL constraints on some columns in MySQL. Then in PHP I wrote a script to insert data, with an insert query. When I omit one of the NOT NULL columns in this insert statement I would expect an error message from MySQL, and I would expect my script to fail. Instead, MySQL inserts empty strings in the NOT NULL fields. In other omitted fields the data is NULL, which is fine. Could someone tell me what I did wrong here? I'm using this table: CREATE TABLE IF NOT EXISTS tblCustomers ( cust_id int(11) NOT NULL AUTO_INCREMENT, custname varchar(50) NOT NULL, company varchar(50), phone varchar(50), email varchar(50) NOT NULL, country varchar(50) NOT NULL, ... date_added timestamp NOT NULL DEFAULT CURRENT_TIMESTAMP, PRIMARY KEY (cust_id) ) ; And this insert statement: $sql = "INSERT INTO tblCustomers (custname,company) VALUES ('".$customerName."','".$_POST["CustomerCompany"]."')"; $res = mysqli_query($mysqli, $sql);
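
    This is usually the server's non-strict SQL mode silently substituting '' for omitted NOT NULL string columns (with a warning rather than an error). A sketch of the check, using placeholder values:

        SET SESSION sql_mode = 'STRICT_ALL_TABLES';

        INSERT INTO tblCustomers (custname, company)
        VALUES ('Some Customer', 'Some Company');
        -- now fails: ERROR 1364 Field 'email' doesn't have a default value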

    Read the article

  • Spring Jdbc Template + MySQL = TransientDataAccessResourceException : Invalid Argument Value : Java.

    - by Vanchinathan
    I was using the Spring JdbcTemplate to insert some data into the database and I was getting this error. Here is my code: JdbcTemplate insert = new JdbcTemplate(dataSource); for(ResultType result : response.getResultSet().getResult()) { Object[] args = new Object[] {result.getAddress(), result.getCity(), result.getState(), result.getPhone(), result.getLatitude(), result.getLongitude(),result.getRating().getAverageRating(), result.getRating().getAverageRating(), result.getRating().getTotalRatings(), result.getRating().getTotalReviews(), result.getRating().getLastReviewDate(), result.getRating().getLastReviewIntro(), result.getDistance(), result.getUrl(), result.getClickUrl(), result.getBusinessUrl(), result.getBusinessClickUrl()}; insert.update("INSERT INTO data.carwashes ( address, city, state, phone, lat, lng, rating, average_rating, total_ratings, total reviews, last_review_date, last_review_intro, distance, url, click_url, business_url, business_click_url, category_id, zipcode_id) VALUES (?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?,96925724,78701)", args); } Quite lengthy code, but basically it gets the values from an object, sticks them into an array, and passes that array to the JdbcTemplate's update call. Any help will be appreciated.

    Read the article

  • Find Date From Non Contiguous Ranges

    - by AGoodDisplayName
    I am looking for the best way to find a date from date ranges that may or may not be contiguous (I am trying to avoid a cursor, or a heavy function if possible). Let's say I have hotel guests that come and go (check in, check out). I want to find the date that a certain guest stayed their 45th night with us. The database we use records the data like so: Create Table #GuestLog( ClientId int, StartDate DateTime, EndDate DateTime) Here is some data: Insert Into #GuestLog Values(1, '01/01/2010', '01/10/2010') Insert Into #GuestLog Values(1, '01/16/2010', '01/29/2010') Insert Into #GuestLog Values(1, '02/13/2010', '02/26/2010') Insert Into #GuestLog Values(1, '04/05/2010', '06/01/2010') Insert Into #GuestLog Values(1, '07/01/2010', '07/21/2010') So far I can only think of solutions that involve functions with temp tables and crazy stuff like that; I feel like I'm overthinking it. Thanks ahead of time.
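
    A set-based sketch along these lines, assuming a night is counted as the difference in days between StartDate and EndDate (the check-out day itself is not a night), and using correlated subqueries so no window functions are needed:

        SELECT TOP 1
               DATEADD(DAY,
                       45 - 1 - (SELECT ISNULL(SUM(DATEDIFF(DAY, g2.StartDate, g2.EndDate)), 0)
                                   FROM #GuestLog g2
                                  WHERE g2.ClientId = g.ClientId
                                    AND g2.StartDate < g.StartDate),   -- nights spent before this stay
                       g.StartDate) AS FortyFifthNight
          FROM #GuestLog g
         WHERE g.ClientId = 1
           AND (SELECT SUM(DATEDIFF(DAY, g2.StartDate, g2.EndDate))
                  FROM #GuestLog g2
                 WHERE g2.ClientId = g.ClientId
                   AND g2.StartDate <= g.StartDate) >= 45              -- running total first reaches 45 in this stay
         ORDER BY g.StartDate;

    With the sample data this lands on 2010-04-14, the 10th night of the fourth stay after 35 nights in the first three.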

    Read the article

  • TSQL to find the start and end date (set based)

    - by priyanka.sarkar_2
    I have the data below: Name Date A 2011-01-01 01:00:00.000 A 2011-02-01 02:00:00.000 A 2011-03-01 03:00:00.000 B 2011-04-01 04:00:00.000 A 2011-05-01 07:00:00.000 The desired output is: Name StartDate EndDate ------------------------------------------------------------------- A 2011-01-01 01:00:00.000 2011-04-01 04:00:00.000 B 2011-04-01 04:00:00.000 2011-05-01 07:00:00.000 A 2011-05-01 07:00:00.000 NULL How can I achieve this using TSQL with a set-based approach? The DDL is as follows: DECLARE @t TABLE(PersonName VARCHAR(32), [Date] DATETIME) INSERT INTO @t VALUES('A', '2011-01-01 01:00:00') INSERT INTO @t VALUES('A', '2011-01-02 02:00:00') INSERT INTO @t VALUES('A', '2011-01-03 03:00:00') INSERT INTO @t VALUES('B', '2011-01-04 04:00:00') INSERT INTO @t VALUES('A', '2011-01-05 07:00:00') Select * from @t
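
    One set-based sketch (SQL Server 2005 and later, assuming it runs in the same batch as the @t declaration above): collapse consecutive rows per name with the difference of two ROW_NUMBER() sequences, then take the next island's start date as the end date:

        ;WITH grouped AS (
            SELECT PersonName, [Date],
                   ROW_NUMBER() OVER (ORDER BY [Date])
                 - ROW_NUMBER() OVER (PARTITION BY PersonName ORDER BY [Date]) AS grp
            FROM @t
        ), islands AS (
            SELECT PersonName, MIN([Date]) AS StartDate,
                   ROW_NUMBER() OVER (ORDER BY MIN([Date])) AS rn
            FROM grouped
            GROUP BY PersonName, grp
        )
        SELECT i.PersonName, i.StartDate, nxt.StartDate AS EndDate
        FROM islands i
        LEFT JOIN islands nxt ON nxt.rn = i.rn + 1
        ORDER BY i.StartDate;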

    Read the article

  • About NullPointerException in ArrayList

    - by user2234456
    List<PrcSubList> listSl = new ArrayList<PrcSubList>(); if (listSl == null || listSl.size() == 0) { PrcSubList subListAdd=coreService.addSubListByAddAlert(childSub); System.out.println("sublist after insert db :" + subListAdd.getName()); listSl.add(subListAdd); System.out.println(listSl.size() +" sublist after insert list:" + listSl.get(0).getName()); } Output of the first System.out.println("sublist after insert db :" + subListAdd.getName()); sublist after insert db :SYSTEM_ALERT_12312313 Problem: I get a NullPointerException on the second System.out.println(listSl.size() +" sublist after insert list:" + listSl.get(0).getName()); Can you help me?

    Read the article

  • Is the syntax written in the description correct for MySQL triggers? Will it work?

    - by Parth
    Is the syntax written below correct for MySQL triggers? Will it work? CREATE OR REPLACE TRIGGER myTableAuditTrigger AFTER INSERT OR DELETE OR UPDATE ON myTable FOR EACH ROW BEGIN IF INSERTING THEN INSERT INTO myTableAudit (id, Operation, NewName, NewPhone) VALUES (1, 'Insert ', :NEW.Name, :NEW.PhoneNo); ELSIF DELETING THEN INSERT INTO myTableAudit (id, Operation, OldName, OldPhone) VALUES (1, 'Delete ', :OLD.Name, :OLD.PhoneNo); ELSIF UPDATING THEN INSERT INTO myTableAudit (id, Operation, OldName, OldPhone, NewName, NewPhone) VALUES (1, 'Update ', :OLD.Name, :OLD.PhoneNo, :NEW.Name, :NEW.PhoneNo); END IF; END; /
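
    As written it will not run in MySQL: that is Oracle PL/SQL syntax. MySQL has no CREATE OR REPLACE TRIGGER, no multi-event (INSERT OR DELETE OR UPDATE) triggers, and no :NEW/:OLD bind variables or INSERTING/DELETING predicates; each event needs its own trigger using NEW.col and OLD.col. A sketch of the insert case only:

        DELIMITER //
        CREATE TRIGGER myTableAuditInsert
        AFTER INSERT ON myTable
        FOR EACH ROW
        BEGIN
            INSERT INTO myTableAudit (id, Operation, NewName, NewPhone)
            VALUES (1, 'Insert', NEW.Name, NEW.PhoneNo);
        END//
        DELIMITER ;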

    Read the article

  • jQuery: How do I insert an html string an arbitrary number of times (i.e. not with an each() function)?

    - by Sam Bivins
    I just need to take a certain html query object and append it to an html element a lot of times. I have this: var box = $("<div class='box'>&nbsp</div>"); $("#firstbox").after(box); and it works fine, but it just adds one 'box' after the #firstbox element. I'd like to do something like this: var box = $("<div class='box'>&nbsp</div>"); $("#firstbox").after(box * 6000); so that it will insert 6000 copies of that 'box' html, but this is not working. This is I'm sure the easiest thing to do, but I can't seem to find how to multiply actions like this without using the each() function, which doesn't apply here because I don't have 6000 of anything on my page. Thanks.

    Read the article

  • Error Handling in T-SQL Scalar Function

    - by hydroparadise
    OK, this question could easily take multiple paths, so I will hit the more specific path first. While working with SQL Server 2005, I'm trying to create a scalar function that acts as a 'TryCast' from varchar to int. Where I encounter a problem is when I add a TRY block in the function: CREATE FUNCTION u_TryCastInt ( @Value as VARCHAR(MAX) ) RETURNS Int AS BEGIN DECLARE @Output AS Int BEGIN TRY SET @Output = CONVERT(Int, @Value) END TRY BEGIN CATCH SET @Output = 0 END CATCH RETURN @Output END Turns out there's all sorts of things wrong with this statement, including "Invalid use of side-effecting or time-dependent operator in 'BEGIN TRY' within a function" and "Invalid use of side-effecting or time-dependent operator in 'END TRY' within a function". I can't seem to find any examples of using try statements within a scalar function, which got me thinking: is error handling in a function even possible? The goal here is to make a robust version of the Convert or Cast functions to allow a SELECT statement to carry through despite conversion errors. For example, take the following: CREATE TABLE tblTest ( f1 VARCHAR(50) ) GO INSERT INTO tblTest(f1) VALUES('1') INSERT INTO tblTest(f1) VALUES('2') INSERT INTO tblTest(f1) VALUES('3') INSERT INTO tblTest(f1) VALUES('f') INSERT INTO tblTest(f1) VALUES('5') INSERT INTO tblTest(f1) VALUES('1.1') SELECT CONVERT(int,f1) AS f1_num FROM tblTest DROP TABLE tblTest It never reaches the point of dropping the table because the execution gets hung on trying to convert 'f' to an integer. I want to be able to do something like this: SELECT u_TryCastInt(f1) AS f1_num FROM tblTest f1_num __________ 1 2 3 0 5 0 Any thoughts on this? Is there anything that exists that handles this? Also, I would like to try to expand the conversation to support SQL Server 2000, since TRY blocks are not an option in that scenario. Thanks in advance.
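
    A hedged sketch that validates instead of catching, which sidesteps the TRY restriction and also works on SQL Server 2000 (where VARCHAR(MAX) would have to become VARCHAR(8000)); it accepts only unsigned whole numbers and returns 0 for everything else, including 'f' and '1.1':

        CREATE FUNCTION u_TryCastInt (@Value VARCHAR(MAX))
        RETURNS INT
        AS
        BEGIN
            -- digits only, and short enough that it cannot overflow an INT
            IF @Value NOT LIKE '%[^0-9]%' AND LEN(@Value) BETWEEN 1 AND 9
                RETURN CONVERT(INT, @Value)
            RETURN 0
        END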

    Read the article

  • SQL select descendants of a row

    - by Joey Adams
    Suppose a tree structure is implemented in SQL like this: CREATE TABLE nodes ( id INTEGER PRIMARY KEY, parent INTEGER -- references nodes(id) ); Although cycles can be created in this representation, let's assume we never let that happen. The table will only store a collection of roots (records where parent is null) and their descendants. The goal is to, given an id of a node on the table, find all nodes that are descendants of it. A is a descendant of B if either A's parent is B or A's parent is a descendant of B. Note the recursive definition. Here is some sample data: INSERT INTO nodes VALUES (1, NULL); INSERT INTO nodes VALUES (2, 1); INSERT INTO nodes VALUES (3, 2); INSERT INTO nodes VALUES (4, 3); INSERT INTO nodes VALUES (5, 3); INSERT INTO nodes VALUES (6, 2); which represents: 1 `-- 2 |-- 3 | |-- 4 | `-- 5 | `-- 6 We can select the (immediate) children of 1 by doing this: SELECT a.* FROM nodes AS a WHERE parent=1; We can select the children and grandchildren of 1 by doing this: SELECT a.* FROM nodes AS a WHERE parent=1 UNION ALL SELECT b.* FROM nodes AS a, nodes AS b WHERE a.parent=1 AND b.parent=a.id; We can select the children, grandchildren, and great grandchildren of 1 by doing this: SELECT a.* FROM nodes AS a WHERE parent=1 UNION ALL SELECT b.* FROM nodes AS a, nodes AS b WHERE a.parent=1 AND b.parent=a.id UNION ALL SELECT c.* FROM nodes AS a, nodes AS b, nodes AS c WHERE a.parent=1 AND b.parent=a.id AND c.parent=b.id; How can a query be constructed that gets all descendants of node 1 rather than those at a finite depth? It seems like I would need to create a recursive query or something. I'd like to know if such a query would be possible using SQLite. However, if this type of query requires features not available in SQLite, I'm curious to know if it can be done in other SQL databases.
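
    A recursive common table expression covers the arbitrary depth. SQLite only gained WITH RECURSIVE in version 3.8.3; on older SQLite the options are repeated self-joins as above or a separately maintained closure table, while PostgreSQL, SQL Server and others accept the same construct (SQL Server without the RECURSIVE keyword). A sketch:

        WITH RECURSIVE descendants(id, parent) AS (
            SELECT id, parent FROM nodes WHERE parent = 1
            UNION ALL
            SELECT n.id, n.parent
            FROM nodes AS n
            JOIN descendants AS d ON n.parent = d.id
        )
        SELECT * FROM descendants;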

    Read the article

  • Query returns too few rows

    - by Tareq
    setup: mysql> create table product_stock( product_id integer, qty integer, branch_id integer); Query OK, 0 rows affected (0.17 sec) mysql> create table product( product_id integer, product_name varchar(255)); Query OK, 0 rows affected (0.11 sec) mysql> insert into product(product_id, product_name) values(1, 'Apsana White DX Pencil'); Query OK, 1 row affected (0.05 sec) mysql> insert into product(product_id, product_name) values(2, 'Diamond Glass Marking Pencil'); Query OK, 1 row affected (0.03 sec) mysql> insert into product(product_id, product_name) values(3, 'Apsana Black Pencil'); Query OK, 1 row affected (0.03 sec) mysql> insert into product_stock(product_id, qty, branch_id) values(1, 100, 1); Query OK, 1 row affected (0.03 sec) mysql> insert into product_stock(product_id, qty, branch_id) values(1, 50, 2); Query OK, 1 row affected (0.03 sec) mysql> insert into product_stock(product_id, qty, branch_id) values(2, 80, 1); Query OK, 1 row affected (0.03 sec) my query: mysql> SELECT IFNULL(SUM(s.qty),0) AS stock, product_name FROM product_stock s RIGHT JOIN product p ON s.product_id=p.product_id WHERE branch_id=1 GROUP BY product_name ORDER BY product_name; returns: +-------+-------------------------------+ | stock | product_name | +-------+-------------------------------+ | 100 | Apsana White DX Pencil | | 80 | Diamond Glass Marking Pencil | +-------+-------------------------------+ 1 row in set (0.00 sec) But I want to have the following result: +-------+------------------------------+ | stock | product_name | +-------+------------------------------+ | 0 | Apsana Black Pencil | | 100 | Apsana White DX Pencil | | 80 | Diamond Glass Marking Pencil | +-------+------------------------------+ To get this result what mysql query should I run?
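
    The WHERE branch_id=1 filter is applied after the outer join and discards the NULL-extended row for the product with no stock; moving that condition into the join keeps it. A sketch, written as a LEFT JOIN from product for readability:

        SELECT IFNULL(SUM(s.qty), 0) AS stock, p.product_name
        FROM product p
        LEFT JOIN product_stock s
               ON s.product_id = p.product_id
              AND s.branch_id = 1
        GROUP BY p.product_name
        ORDER BY p.product_name;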

    Read the article

  • Google Maps API: internal server error when inserting a feature

    - by user142764
    Hi, I am trying to insert features on a custom Google map: I use the sample code from the doc but I get a ServiceException (Internal server error) when I call the service's insert method. Here is what I do: I create a map and get the resulting MapEntry object: myMapEntry = (MapEntry) service.insert(mapUrl, myEntry); This works fine: I can see the map I created in "my maps" on Google. I use the feed url from the map to insert a feature: final URL featureEditUrl = myMapEntry.getFeatureFeedUrl(); I create a KML string using the sample from the doc: String kmlStr = "< Placemark xmlns=\"http://www.opengis.net/kml/2.2\">" + "<name>Aunt Joanas Ice Cream Shop</name>" + "<Point>" + "<coordinates>-87.74613826475604,41.90504663195118,0</ coordinates>" + "</Point></Placemark>"; And when I call the insert method I get an internal server error. I must be doing something wrong but I can't see what; can anybody help? Here is the complete code I use: public void doCreateFeaturesFormap(MapEntry myMap) throws ServiceException, IOException { final URL featureEditUrl = myMap.getFeatureFeedUrl(); FeatureEntry featureEntry = new FeatureEntry(); try { String kmlStr = "<Placemark xmlns=\"http://www.opengis.net/kml/ 2.2\">" + "<name>Aunt Joanas Ice Cream Shop</name>" + "<Point>" + "<coordinates>-87.74613826475604,41.90504663195118,0</ coordinates>" + "</Point></Placemark>"; XmlBlob kml = new XmlBlob(); kml.setFullText(kmlStr); featureEntry.setKml(kml); featureEntry.setTitle(new PlainTextConstruct("Feature Title")); } catch (NullPointerException e) { System.out.println("Error: " + e.getClass().getName()); } FeatureEntry myFeature = (FeatureEntry) service.insert( featureEditUrl, featureEntry); } Thanks in advance, Vincent.

    Read the article

  • What do these .NET auto-generated table adapter commands do? e.g. UPDATE/INSERT followed by a SELECT

    - by RickL
    I'm working with a legacy application which I'm trying to change so that it can work with SQL CE, whilst it was originally written against SQL Server. The problem I am getting now is that when I try to do dataAdapter.Update, SQL CE complains that it is not expecting the SELECT keyword in the command text. I believe this is because SQL CE does not support batch SELECT statements. The auto-generated table adapter command looks like this... this._adapter.InsertCommand.CommandText = @"INSERT INTO [Table] ([Field1], [Field2]) VALUES (@Value1, @Value2); SELECT Field1, Field2 FROM Table WHERE (Field1 = @Value1)"; What is it doing? It looks like it is inserting new records from the datatable into the database, and then reading that record back from the database into the datatable? What's the point of that? Can I just go through the code and remove all these SELECT statements? Or is there an easier way to solve my problem of wanting to use these data adapters with SQL CE? I cannot regenerate these table adapters, as the people who knew how to have long since left.

    Read the article

  • Moving inserted container element if possible

    - by doublep
    I'm trying to achieve the following optimization in my container library: when inserting an lvalue-referenced element, copy it to internal storage; but when inserting rvalue-referenced element, move it if supported. The optimization is supposed to be useful e.g. if contained element type is something like std::vector, where moving if possible would give substantial speedup. However, so far I was unable to devise any working scheme for this. My container is quite complicated, so I can't just duplicate insert() code several times: it is large. I want to keep all "real" code in some inner helper, say do_insert() (may be templated) and various insert()-like functions would just call that with different arguments. My best bet code for this (a prototype, of course, without doing anything real): #include <iostream> #include <utility> struct element { element () { }; element (element&&) { std::cerr << "moving\n"; } }; struct container { void insert (const element& value) { do_insert (value); } void insert (element&& value) { do_insert (std::move (value)); } private: template <typename Arg> void do_insert (Arg arg) { element x (arg); } }; int main () { { // Shouldn't move. container c; element x; c.insert (x); } { // Should move. container c; c.insert (element ()); } } However, this doesn't work at least with GCC 4.4 and 4.5: it never prints "moving" on stderr. Or is what I want impossible to achieve and that's why emplace()-like functions exist in the first place?

    Read the article

  • MySQL: return value as 0 in the fetch result

    - by Karthik
    I have these two tables: -- -- Table structure for table `t1` -- CREATE TABLE `t1` ( `pid` varchar(20) collate latin1_general_ci NOT NULL, `pname` varchar(20) collate latin1_general_ci NOT NULL ) ENGINE=MyISAM DEFAULT CHARSET=latin1 COLLATE=latin1_general_ci; -- -- Dumping data for table `t1` -- INSERT INTO `t1` VALUES ('p1', 'pro1'); INSERT INTO `t1` VALUES ('p2', 'pro2'); -- -------------------------------------------------------- -- -- Table structure for table `t2` -- CREATE TABLE `t2` ( `pid` varchar(20) collate latin1_general_ci NOT NULL, `year` int(6) NOT NULL, `price` int(3) NOT NULL ) ENGINE=MyISAM DEFAULT CHARSET=latin1 COLLATE=latin1_general_ci; -- -- Dumping data for table `t2` -- INSERT INTO `t2` VALUES ('p1', 2009, 50); INSERT INTO `t2` VALUES ('p1', 2010, 60); INSERT INTO `t2` VALUES ('p3', 2007, 200); INSERT INTO `t2` VALUES ('p4', 2008, 501); My query is: SELECT * FROM `t1` LEFT JOIN `t2` ON t1.pid = t2.pid It returns this result: pid pname pid year price p1 pro1 p1 2009 50 p1 pro1 p1 2010 60 p2 pro2 NULL NULL NULL My question is, I want to get a price value of 0 instead of NULL. How can I write the query so that the price comes back as 0? Thanks in advance for the help.
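
    A sketch of one way to get 0 back: wrap the nullable column from the outer join in IFNULL (COALESCE works equally well):

        SELECT t1.pid, t1.pname, t2.year, IFNULL(t2.price, 0) AS price
        FROM t1
        LEFT JOIN t2 ON t1.pid = t2.pid;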

    Read the article

  • ORACLE: can we create global temp tables or any tables in stored proc?

    - by mrp
    Hi, below is the stored proc I wrote: create or replace procedure test005 as begin CREATE GLOBAL TEMPORARY TABLE TEMP_TRAN ( COL1 NUMBER(9), COL2 VARCHAR2(30), COL3 DATE ) ON COMMIT PRESERVE ROWS / INSERT INTO TEMP_TRAN VALUES(1,'D',sysdate); INSERT INTO TEMP_TRAN VALUES(2,'I',sysdate); INSERT INTO TEMP_TRAN VALUES(3,'s',sysdate); COMMIT; end; When I executed it, I got this error message: create or replace procedure test005 as begin CREATE GLOBAL TEMPORARY TABLE TEMP_TRAN ( COL1 NUMBER(9), COL2 VARCHAR2(30), COL3 DATE ) ON COMMIT PRESERVE ROWS / INSERT INTO TEMP_TRAN VALUES(1,'D',sysdate); INSERT INTO TEMP_TRAN VALUES(2,'I',sysdate); INSERT INTO TEMP_TRAN VALUES(3,'s',sysdate); COMMIT; end; Error at line 1 ORA-00955: name is already used by an existing object Script Terminated on line 1. I tried to drop TEMP_TRAN and it says the table doesn't exist, so there is no TEMP_TRAN table in the system. Why am I getting this error? I am using TOAD to create this stored proc. Any help would be highly appreciated.
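
    A sketch of the usual arrangement: a global temporary table in Oracle is a permanent dictionary object (only its rows are private to the session), so it is created once outside the procedure and the procedure only inserts; running DDL inside PL/SQL would otherwise require EXECUTE IMMEDIATE:

        CREATE GLOBAL TEMPORARY TABLE TEMP_TRAN (
            COL1 NUMBER(9),
            COL2 VARCHAR2(30),
            COL3 DATE
        ) ON COMMIT PRESERVE ROWS;

        CREATE OR REPLACE PROCEDURE test005 AS
        BEGIN
            INSERT INTO TEMP_TRAN VALUES (1, 'D', SYSDATE);
            INSERT INTO TEMP_TRAN VALUES (2, 'I', SYSDATE);
            INSERT INTO TEMP_TRAN VALUES (3, 's', SYSDATE);
            COMMIT;
        END;
        /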

    Read the article

  • Hibernate not saving foreign key, but with junit it's ok

    - by Leonardo
    Hi all, I have this strange problem. In a J2EE webapp with Spring, SmartGWT and Hibernate, I have a class A which has a set of class B, both of them mapped to table A and table B. I wrote a simple test case for testing the service manager, which is supposed to do insert, update and delete, and everything works as expected, especially during insert. In the end I have one record in A and records in B with a foreign key to A. But when I try to call the service from the web app, the entities in B are saved without a foreign key reference. I am sure that the service is the same. One thing I noticed, after enabling Hibernate logging, is that when the service is called from the application, one more update is made: insert A insert B update A update B update B (foreign key only) update A <--- ??? update B <--- ??? Instead, when the JUnit test case is run, the update is as follows: insert A insert B update A update B update B (foreign key only) I suppose that last update is what is causing the error; maybe it is overwriting values. Considering that the app is using Spring, with the well-known mechanism of DAO + Manager, where can I investigate to solve this issue? Someone told me that the session is not closed, so Hibernate would do one more update by itself before releasing the objects. I am pretty sure that all the configuration (hbm, xml, and the rest) is fine... but I may be wrong. Thanks.

    Read the article

  • Inorder tree traversal in binary tree in C

    - by srk
    In the code below, I'm creating a binary tree using the insert function and trying to display the inserted elements using the inorder function, which follows the logic of in-order traversal. When I run it, numbers are getting inserted, but when I try the inorder function (input 3), the program continues to the next input without displaying anything. I guess there might be a logical error. Please help me clear it. Thanks in advance... #include<stdio.h> #include<stdlib.h> int i; typedef struct ll { int data; struct ll *left; struct ll *right; } node; node *root1=NULL; // the root node void insert(node *root,int n) { if(root==NULL) //for the first(root) node { root=(node *)malloc(sizeof(node)); root->data=n; root->right=NULL; root->left=NULL; } else { if(n<(root->data)) { root->left=(node *)malloc(sizeof(node)); insert(root->left,n); } else if(n>(root->data)) { root->right=(node *)malloc(sizeof(node)); insert(root->right,n); } else { root->data=n; } } } void inorder(node *root) { if(root!=NULL) { inorder(root->left); printf("%d ",root->data); inorder(root->right); } } main() { int n,choice=1; while(choice!=0) { printf("Enter choice--- 1 for insert, 3 for inorder and 0 for exit\n"); scanf("%d",&choice); switch(choice) { case 1: printf("Enter number to be inserted\n"); scanf("%d",&n); insert(root1,n); break; case 3: inorder(root1); break; default: break; } } }

    Read the article

  • advanced select in Stored Procedure

    - by Auro
    Hey, I have this table: CREATE TABLE Test_Table ( old_val VARCHAR2(3), new_val VARCHAR2(3), Updflag NUMBER, WorkNo NUMBER ); and this is in my table: INSERT INTO Test_Table (old_val, new_val, Updflag , WorkNo) VALUES('1',' 20',0,0); INSERT INTO Test_Table (old_val, new_val, Updflag , WorkNo) VALUES('2',' 20',0,0); INSERT INTO Test_Table (old_val, new_val, Updflag , WorkNo) VALUES('2',' 30',0,0); INSERT INTO Test_Table (old_val, new_val, Updflag , WorkNo) VALUES('3',' 30',0,0); INSERT INTO Test_Table (old_val, new_val, Updflag , WorkNo) VALUES('4',' 40',0,0); INSERT INTO Test_Table (old_val, new_val, Updflag , WorkNo) VALUES('4',' 40',0,0); Now my table looks like this: Row Old_val New_val Updflag WorkNo 1 '1' ' 20' 0 0 2 '2' ' 20' 0 0 3 '2' ' 30' 0 0 4 '3' ' 30' 0 0 5 '4' ' 40' 0 0 6 '5' ' 40' 0 0 (If the values in the new_val column are the same then the rows belong together, and the same goes for old_val.) So in the example above rows 1-4 belong together, and rows 5-6. At the moment I have this cursor in my stored procedure: SELECT t1.Old_val, t1.New_val, t1.updflag, t1.WorkNo FROM Test_Table t1 WHERE t1.New_val = ( SELECT t2.New_val FROM Test_Table t2 WHERE t2.Updflag = 0 AND t2.Worknr = 0 AND ROWNUM = 1 ) The output is this: Row Old_val New_val Updflag WorkNo 1 1 20 0 0 2 2 20 0 0 My problem is, I don't know how to get rows 1 to 4 with one select. (I had an idea with 4 sub-queries, but this won't work if more data matches together.) Does anyone have an idea?

    Read the article

  • How to remove empty tables from a MySQL backup file.

    - by user280708
    I have multiple large MySQL backup files all from different DBs and having different schemas. I want to load the backups into our EDW but I don't want to load the empty tables. Right now I'm cutting out the empty tables using AWK on the backup files, but I'm wondering if there's a better way to do this. If anyone is interested, this is my AWK script: EDIT: I noticed today that this script has some problems, please beware if you want to actually try to use it. Your output may be WRONG... I will post my changes as I make them. # File: remove_empty_tables.awk # Copyright (c) Northwestern University, 2010 # http://edw.northwestern.edu /^--$/ { i = 0; line[++i] = $0; getline if ($0 ~ /-- Definition/) { inserts = 0; while ($0 !~ / ALTER TABLE .* ENABLE KEYS /) { # If we already have an insert: if (inserts > 0) print else { # If we found an INSERT statement, the table is NOT empty: if ($0 ~ /^INSERT /) { ++inserts # Dump the lines before the INSERT and then the INSERT: for (j = 1; j <= i; ++j) print line[j] i = 0 print $0 } # Otherwise we may yet find an insert, so save the line: else line[++i] = $0 } getline # go to the next line } line[++i] = $0; getline line[++i] = $0; getline if (inserts > 0) { for (j = 1; j <= i; ++j) print line[j] print $0 } next } else { print "--" } } { print }
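
    An alternative sketch that avoids post-processing the dump: ask the source database which tables actually have rows and feed only those to mysqldump (the schema name here is a placeholder, and table_rows is an estimate for InnoDB, so a COUNT(*) per table is the safer variant):

        SELECT table_name
        FROM information_schema.tables
        WHERE table_schema = 'source_db'
          AND table_rows > 0;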

    Read the article

  • Sql Server 2005 Check Constraint not being applied in execution when using variables

    - by DarylS
    Here is some SQL sample code: --Create 2 Sales tables with constraints based on the saledate create table Sales1(SaleDate datetime, Amount money) ALTER TABLE dbo.Sales1 ADD CONSTRAINT CK_Sales1 CHECK (([SaleDate]>='01 May 2010')) GO create table Sales2(SaleDate datetime, Amount money) ALTER TABLE dbo.Sales2 ADD CONSTRAINT CK_Sales2 CHECK (([SaleDate]<'01 May 2010')) GO --Insert some data into Sales1 insert into Sales1 (SaleDate, Amount) values ('02 May 2010', 50) insert into Sales1 (SaleDate, Amount) values ('03 May 2010', 60) GO --Insert some data into Sales2 insert into Sales2 (SaleDate, Amount) values ('30 Mar 2010', 10) insert into Sales2 (SaleDate, Amount) values ('31 Mar 2010', 20) GO --Create a view that combines these 2 tables create VIEW [dbo].[Sales] AS SELECT SaleDate, Amount FROM Sales1 UNION ALL SELECT SaleDate, Amount FROM Sales2 GO --Get the results --Query 1 select * from Sales where SaleDate < '31 Mar 2010' -- if you look at the execution plan this query only looks at Sales2 (Which is good) --Query 2 DECLARE @SaleDate datetime SET @SaleDate = '31 Mar 2010' select * from Sales where SaleDate < @SaleDate -- if you look at the execution plan this query looks at Sales1 and Sales2 (Which is NOT good) Looking at the execution plan you will see that the two queries are differnt. For Query 1 the only table that is accessed is Sales1 (which is good). For Query 2 both tables are accessed (Which is bad). Why are these execution plans different, and how do i get Query 2 to only access the relevant table when variables are used? I have tried to add indexes for the SaleDate column and that does not seem to help.
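
    With a literal the optimizer can compare '31 Mar 2010' against each CHECK constraint at compile time and remove Sales1 from the plan; with a variable it has to produce one plan that works for any value, so both tables appear (normally guarded by startup filters, so only the relevant one is actually touched at run time). One hedged experiment is to ask for a recompile so the current value of the variable is visible to the optimizer; whether the plan is actually pruned depends on the version and patch level:

        DECLARE @SaleDate datetime
        SET @SaleDate = '31 Mar 2010'

        SELECT * FROM Sales
        WHERE SaleDate < @SaleDate
        OPTION (RECOMPILE)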

    Read the article

  • In sync query calls, one query causing other query to run slower. Why?

    - by Irchi
    Sorry for the long question, but I think this is an interesting situation and I couldn't find any explanations for it: I was involved in optimization of an application that performed a large number of sequential SELECT and INSERT statements on a single dedicated SQL Server database. The process needs to INSERT a large number of records into a table, but for each of them there should be some value mappings, which performed using SELECT statements on another table in the same database. For a specific execution, it took 90 minutes to run. I used a profiler (JProfiler - the application is Java-based) to determine how much time does each part of the application take. It yields that 60% of the time was spent on INSERT method calls, and almost 20% on SELECT calls (the rest distributed in other parts). After some trials, I came to this situation: I commented out the INSERT query that took 60% of the time. I was expecting for the total run time to be around 35 minutes, as I have removed 60% of the 90 minutes. But the whole process took the same 90 minutes (doing only SELECTs and nothing else), but each SELECT took longer this time! Everything was running sync, there were no async calls. And there was only one single thread of execution. SELECT and INSERT queries are very simple, and don't have anything special, and they are on different tables, but on the same DB. I tested with both the DB on the application machine, and on a remote network machine. I can't think of any explanation for this, as the Profiler (Application profiler, not SQL Profiler) reported the changes in the method call times, and by removing INSERT statements SELECT statements took longer to run. Can anyone give me some kind of explanation of what could have happened? (there can't be cache / query optimization stuff, because the queries were run in sync, and in a single thread, and it was far from affecting the cache this much) I should note that the bottleneck of the speed was in SQL server, using most of the CPU time.

    Read the article

  • Insertion into BST without header Node JAVA

    - by Petiatil
    I am working on a recursive insertion method for a BST. This function is supposed to be a recursive helper method and is in a private class called Node. The Node class is in a class called BinarySearchTree which contains an instance variable for the root. When I am trying to insert an element, I get a NullPointerException at: this.left = insert(((Node)left).element); I am unsure about why this occurs. If I understand correctly, in a BST, I am supposed to insert the item at the last spot on the path traversed. Any help is appreciated! private class Node implements BinaryNode<E> { E item; BinaryNode<E> left, right; public BinaryNode<E> insert(E item) { int compare = item.compareTo(((Node)root).item); if(root == null) { root = new Node(); ((Node)root).item = item; } else if(compare < 0) { this.left = insert(((Node)left).item); } else if(compare > 0) { this.right = insert(((Node)right).item); } return root; } }

    Read the article
