Search Results

Search found 10691 results on 428 pages for 'batch insert'.

Page 100/428 | < Previous Page | 96 97 98 99 100 101 102 103 104 105 106 107  | Next Page >

  • Returning Identity Value in SQL Server: @@IDENTITY Vs SCOPE_IDENTITY Vs IDENT_CURRENT

    - by Arefin Ali
    We have some common misconceptions about returning the last inserted identity value from tables. To return the last inserted identity value we can use @@IDENTITY, SCOPE_IDENTITY or IDENT_CURRENT depending on the requirement, but it can be a real mess if anybody uses any one of these functions without knowing its exact purpose. So here I want to share my thoughts on this. @@IDENTITY, SCOPE_IDENTITY and IDENT_CURRENT are almost similar functions in terms of returning an identity value. They all return values that were inserted into an identity column. Earlier, in SQL Server 7, we used @@IDENTITY to return the last inserted identity value because in those days we didn't have functions like SCOPE_IDENTITY or IDENT_CURRENT, but now we have all three. So let's check out which one is responsible for what. IDENT_CURRENT returns the last identity value inserted into a particular table. It never depends on a connection or the scope of the insert statement. The IDENT_CURRENT function takes a table name as its parameter. Here is the syntax to get the last inserted identity value in a particular table using IDENT_CURRENT: SELECT IDENT_CURRENT('Employee') Both @@IDENTITY and SCOPE_IDENTITY return the last identity value created in any table in the current session. But there is a small difference between the two: SCOPE_IDENTITY returns the value inserted only within the current scope, whereas @@IDENTITY is not limited to any particular scope. Here are the syntaxes to get the last inserted identity value using these functions: SELECT @@IDENTITY SELECT SCOPE_IDENTITY() Now let's have a look at the following example. Suppose I have two tables called Employee and EmployeeLog. CREATE TABLE Employee (EmpId NUMERIC(18, 0) IDENTITY(1,1) NOT NULL, EmpName VARCHAR(100) NOT NULL, EmpSal FLOAT NOT NULL, DateOfJoining DATETIME NOT NULL DEFAULT(GETDATE())) CREATE TABLE EmployeeLog (EmpId NUMERIC(18, 0) IDENTITY(1,1) NOT NULL, EmpName VARCHAR(100) NOT NULL, EmpSal FLOAT NOT NULL, DateOfJoining DATETIME NOT NULL DEFAULT(GETDATE())) I have an insert trigger defined on the Employee table which inserts a new record into EmployeeLog whenever a record is inserted into Employee. So suppose I insert a new record into the Employee table using the following statement: INSERT INTO Employee (EmpName, EmpSal) VALUES ('Arefin', '1') The trigger fires automatically and inserts a record into EmployeeLog. Here the scope of the insert statement and the trigger are different. In this situation, if I retrieve the last inserted identity value using @@IDENTITY, it will simply return the identity value from EmployeeLog because it is not limited to a particular scope. If I want the Employee table's identity value, I need to use SCOPE_IDENTITY in this scenario. So the moral is: always use SCOPE_IDENTITY to return the identity value of a recently created record in a SQL statement or stored procedure. It's safe and helps keep the code bug-free.
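
    A quick runnable sketch of the scenario described above, rebuilt from the article's own tables. The trigger body is an assumption (the article only says an insert trigger copies rows into EmployeeLog), so treat it as illustrative rather than the author's exact code:

      -- Assumed trigger: copy each new Employee row into EmployeeLog
      CREATE TRIGGER trgEmployeeInsert ON Employee AFTER INSERT
      AS
      BEGIN
          INSERT INTO EmployeeLog (EmpName, EmpSal)
          SELECT EmpName, EmpSal FROM inserted;
      END
      GO
      INSERT INTO Employee (EmpName, EmpSal) VALUES ('Arefin', 1);
      SELECT @@IDENTITY                    AS LastIdentityAnyScope,   -- comes from EmployeeLog (set by the trigger)
             SCOPE_IDENTITY()              AS LastIdentityThisScope,  -- comes from Employee (current scope)
             IDENT_CURRENT('Employee')     AS LastIdentityEmployee,
             IDENT_CURRENT('EmployeeLog')  AS LastIdentityEmployeeLog;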

    Read the article

  • Mongodb performance on Windows

    - by Chris
    I've been researching nosql options available for .NET lately and MongoDB is emerging as a clear winner in terms of availability and support, so tonight I decided to give it a go. I downloaded version 1.2.4 (Windows x64 binary) from the mongodb site and ran it with the following options: C:\mongodb\bin>mkdir data C:\mongodb\bin>mongod -dbpath ./data --cpu --quiet I then loaded up the latest mongodb-csharp driver from http://github.com/samus/mongodb-csharp and immediately ran the benchmark program. Having heard about how "amazingly fast" MongoDB is, I was rather shocked at the poor benchmark performance. Starting Tests encode (small).........................................320000 00:00:00.0156250 encode (medium)........................................80000 00:00:00.0625000 encode (large).........................................1818 00:00:02.7500000 decode (small).........................................320000 00:00:00.0156250 decode (medium)........................................160000 00:00:00.0312500 decode (large).........................................2370 00:00:02.1093750 insert (small, no index)...............................2176 00:00:02.2968750 insert (medium, no index)..............................2269 00:00:02.2031250 insert (large, no index)...............................778 00:00:06.4218750 insert (small, indexed)................................2051 00:00:02.4375000 insert (medium, indexed)...............................2133 00:00:02.3437500 insert (large, indexed)................................835 00:00:05.9843750 batch insert (small, no index).........................53333 00:00:00.0937500 batch insert (medium, no index)........................26666 00:00:00.1875000 batch insert (large, no index).........................1114 00:00:04.4843750 find_one (small, no index).............................350 00:00:14.2812500 find_one (medium, no index)............................204 00:00:24.4687500 find_one (large, no index).............................135 00:00:37.0156250 find_one (small, indexed)..............................352 00:00:14.1718750 find_one (medium, indexed).............................184 00:00:27.0937500 find_one (large, indexed)..............................128 00:00:38.9062500 find (small, no index).................................516 00:00:09.6718750 find (medium, no index)................................316 00:00:15.7812500 find (large, no index).................................216 00:00:23.0468750 find (small, indexed)..................................532 00:00:09.3906250 find (medium, indexed).................................346 00:00:14.4375000 find (large, indexed)..................................212 00:00:23.5468750 find range (small, indexed)............................440 00:00:11.3593750 find range (medium, indexed)...........................294 00:00:16.9531250 find range (large, indexed)............................199 00:00:25.0625000 Press any key to continue... For starters, I can get better non-batch insert performance from SQL Server Express. What really struck me, however, was the slow performance of the find_nnnn queries. Why is retrieving data from MongoDB so slow? What am I missing? Edit: This was all on the local machine, no network latency or anything. MongoDB's CPU usage ran at about 75% the entire time the test was running. Edit 2: Also, I ran a trace on the benchmark program and confirmed that 50% of the CPU time spent was waiting for MongoDB to return data, so it's not a performance issue with the C# driver.

    Read the article

  • Implementing set operations in TSQL

    - by dotneteer
    SQL excels at operating on datasets. In this post, I will discuss how to implement basic set operations in Transact-SQL (TSQL): union, intersection and complement (subtraction). Implementing set operations using union, intersect and except We can use the TSQL keywords union, intersect and except to implement set operations. Since we are in an election year, I will use voter records on propositions as an example. We create the following table and insert 6 records into it. declare @votes table (VoterId int, PropId int) insert into @votes values (1, 30) insert into @votes values (2, 30) insert into @votes values (3, 30) insert into @votes values (4, 30) insert into @votes values (4, 31) insert into @votes values (5, 31) Voters 1, 2, 3 and 4 voted for proposition 30 and voters 4 and 5 voted for proposition 31. The following TSQL statement implements union using the union keyword. The union returns voters who voted for either proposition 30 or 31. select VoterId from @votes where PropId = 30 union select VoterId from @votes where PropId = 31 The following TSQL statement implements intersection using the intersect keyword. The intersection returns voters who voted for both propositions 30 and 31. select VoterId from @votes where PropId = 30 intersect select VoterId from @votes where PropId = 31 The following TSQL statement implements complement using the except keyword. The complement returns voters who voted for proposition 30 but not 31. select VoterId from @votes where PropId = 30 except select VoterId from @votes where PropId = 31 Implementing set operations using join An alternative way to implement set operations in TSQL is to use full outer join, inner join and left outer join. The following TSQL statement implements union using a full outer join. select Coalesce(A.VoterId, B.VoterId) from (select VoterId from @votes where PropId = 30) A full outer join (select VoterId from @votes where PropId = 31) B on A.VoterId = B.VoterId The following TSQL statement implements intersection using an inner join. select Coalesce(A.VoterId, B.VoterId) from (select VoterId from @votes where PropId = 30) A inner join (select VoterId from @votes where PropId = 31) B on A.VoterId = B.VoterId The following TSQL statement implements complement using a left outer join. select Coalesce(A.VoterId, B.VoterId) from (select VoterId from @votes where PropId = 30) A left outer join (select VoterId from @votes where PropId = 31) B on A.VoterId = B.VoterId where B.VoterId is null Which one to choose? To decide which technique to use, just keep two things in mind: the union, intersect and except technique treats an entire record as a member, while the join technique allows the member to be specified in the "on" clause. However, with joins it is necessary to use the Coalesce function to project the sets on the two sides of the join into a single set.
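
    As a quick sanity check (the expected results below are derived from the six rows inserted above; the original post does not show them), the three keyword-based queries return:

      select VoterId from @votes where PropId = 30
      union
      select VoterId from @votes where PropId = 31;    -- union: 1, 2, 3, 4, 5

      select VoterId from @votes where PropId = 30
      intersect
      select VoterId from @votes where PropId = 31;    -- intersection: 4

      select VoterId from @votes where PropId = 30
      except
      select VoterId from @votes where PropId = 31;    -- complement: 1, 2, 3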

    Read the article

  • MS Access 2003 - Unbound Form uses INSERT statement to save to table; what about subforms?

    - by Justin
    So I have an unbound form that I use to save data to a table on button click. Is there a way I can have subforms for entry that will allow me to save data to the table within that same button click? Basically I want to add more entry options for the user, and while I know other ways to do it, I am particularly curious about doing it this way (if it can be done). So let's say the 'parent form' is frmMain, and there are two child forms "sub1" and "sub2". Just for example's sake, let's say on frmMain there are two text boxes: txtTitle & txtAuthor. sub1 and sub2 both have a text box on them that represents something like prices. The idea is title & author of a book, and then a price at each store (simplified). So I tried this (because I thought it was worth a shot): Dim db as DAO.database Dim sql as String sql = "INSERT INTO (Title, Author, PriceA, PriceB) VALUES (" if not isnull(me.txtTitle) then sql = sql & """" & me.txtTitle & """," Else sql = sql & " NULL," End If if not IsNull(me.txtAuthor) then sql = sql & " """ & me.txtAuthor & """," else sql = sql & " NULL," end if if not IsNull (forms!sub1.txtPrice) then sql = sql & " """ & forms!sub1.txtPrice & """," else sql = sql & " NULL," end if Without finishing the code, I think you may see the GOTCHA I am headed for. I tried this and got an "Access cannot find the form "" ". I think I can pretty much see why on this approach too, because when I click the button that calls the new subform into the parent form, the values that were just entered are not held/saved as sub1 closes and sub2 opens. I should mention that the idea above is not intended to be a one-or-the-other approach; rather, both subforms are used every time. So this is an example. I want to use this method (if possible) to have about 7 different subform choices in one form, and be able to save to a table via a SQL statement. I realize that there may be better ways, but I am just wondering if I can get there with this approach out of curiosity. Thanks as always!

    Read the article

  • How can I do batch image processing with ImageJ in Java or clojure?

    - by Robert McIntyre
    I want to use ImageJ to do some processing of several thousand images. Is there a way to take any general imageJ plugin and apply it to hundreds of images automatically? For example, say I want to take my thousand images and apply a polar transformation to each--- A polar transformation plugin for ImageJ can be found here: http://rsbweb.nih.gov/ij/plugins/polar-transformer.html Great! Let's use it. From: [http://albert.rierol.net/imagej_programming_tutorials.html#How%20to%20automate%20an%20ImageJ%20dialog] I find that I can apply a plugin using the following: (defn x-polar [imageP] (let [thread (Thread/currentThread) options ""] (.setName thread "Run$_polar-transform") (Macro/setOptions thread options) (IJ/runPlugIn imageP "Polar_Transformer" ""))) This is good because it suppresses the dialog which would otherwise pop up for every image. But running this always brings up a window containing the transformed image, when what I want is to simply return the transformed image. The stupidest way to do what I want is to just close the window that comes up and return the image which it was displaying. Does what I want but is absolutely retarded: (defn x-polar [imageP] (let [thread (Thread/currentThread) options ""] (.setName thread "Run$_polar-transform") (Macro/setOptions thread options) (IJ/runPlugIn imageP "Polar_Transformer" "") (let [return-image (IJ/getImage)] (.hide return-image) return-image))) I'm obviously missing something about how to use imageJ plugins in a programming context. Does anyone know the right way to do this? Thanks, --Robert McIntyre

    Read the article

  • Case Order by using Null

    - by molgan
    Hello, I have the following test code: CREATE TABLE #Foo (Foo int) INSERT INTO #Foo SELECT 4 INSERT INTO #Foo SELECT NULL INSERT INTO #Foo SELECT 2 INSERT INTO #Foo SELECT 5 INSERT INTO #Foo SELECT 1 SELECT * FROM #Foo ORDER BY CASE WHEN Foo IS NULL THEN Foo DESC ELSE Foo END DROP TABLE #Foo I'm trying to produce the following output: 1 2 4 5 NULL. In other words, "if null then put it last". How is that done using SQL Server 2005? /M
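
    One common way to get that ordering (a sketch of the usual answer, not taken from the original post; it should work on SQL Server 2005) is to sort on a computed flag that pushes NULLs last, then on the value itself:

      SELECT *
      FROM #Foo
      ORDER BY CASE WHEN Foo IS NULL THEN 1 ELSE 0 END,  -- non-NULL rows first
               Foo                                       -- then ascending by value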

    Read the article

  • Using a "vo" for joined data?

    - by keithjgrant
    I'm building a small financial system. Because of double-entry accounting, transactions always come in batches of two or more, so I've got a batch table and a transaction table. (The transaction table has batch_id, account_id, and amount fields, and shared data like date and description are relegated to the batch table). I've been using basic vo-type models for each table so far. Because of this table structure, though, transactions will almost always be selected with a join on the batch table. So should I take the selected records and splice them into two separate vo objects, or should I create a "shared" vo that contains both batch and transaction data? There are a few cases in which batch records and/or transaction records. Are there possible pitfalls down the road if I have "overlapping" vo classes?

    Read the article

  • How to force programs out of the swap file when a resource-intensive batch finishes?

    - by sharptooth
    We use employees' desktops for CPU-intensive simulation during the night. Desktops run Windows - usually Windows XP. Employees don't log off, they just lock the desktops, switch off their monitors and go. Every employee has a configuration file which he can edit to specify when he is most likely out of office. When that time comes a background program grabs data for simulation from the server, spawns worker processes, watches them, gets results and sends them to the server. When the time specified by the employee elapses, simulation stops so that normal desktop usage is not interfered with. The problem is that simulation consumes a lot of memory, so when the worker processes run they force other programs into the swap file. So when the employee comes back, all the programs he left are sluggish and slow until he opens them one by one so that they are unswapped. Is there a way the program can force other programs out of the swap file when it stops simulation so that they run smoothly again?

    Read the article

  • More Useful SQL Server Service Broker Queries

    - by ChrisD
    SELECT 'Checking Broker Service Status...' IF (select Top 1 is_broker_enabled from sys.databases where name = 'NWMESSAGE')=1     SELECT ' Broker Service IS Enabled'  -- Should return a 1. ELSE     SELECT '** Broker Service IS DISABLED ***' /* If Is_Broker_enabled returns 0, uncomment and run this code ALTER DATABASE NWMESSAGE SET SINGLE_USER WITH ROLLBACK IMMEDIATE GO Alter Database NWMESSAGE Set enable_broker GO ALTER DATABASE NWDataChannel SET MULTI_USER GO */ SELECT 'Checking For Disabled Queues....' -- ensure the queues are enabled --  0 indicates the queue is disabled. Select '** Receive Queue Disabled: '+name from sys.service_queues where is_receive_enabled = 0 --select [name], is_receive_enabled from sys.service_queues; /*If the queue is disabled, to enable it alter queue QUEUENAME with status=on; – replace QUEUENAME with the name of your queue */ -- Get General information about the queues --select * from sys.service_queues -- Get the message counts in each queue SELECT 'Checking Message Count for each Queue...' select q.name, p.rows from sys.objects as o join sys.partitions as p on p.object_id = o.object_id join sys.objects as q on o.parent_object_id = q.object_id join sys.service_queues sq on sq.name = q.name where p.index_id = 1 -- Ensure all the queue activiation sprocs are present SELECT 'Checking for Activation Stored Procedures....' SELECT  '** Missing Procedure:  '+q.name  From sys.service_queues q Where NOT Exists(Select * from sysobjects where xtype='p' and name='activation_'+q.name) and q.activation_procedure is not null DECLARE @sprocs Table (Name Varchar(2000)) Insert into @sprocs Values ('Echo') Insert into @sprocs Values ('HTTP_POST') Insert into @sprocs Values ('InitializeRecipients') Insert into @sprocs Values ('sp_EnableRecipient') Insert into @sprocs Values ('sp_ProcessReceivedMessage') Insert into @sprocs Values ('sp_SendXmlMessage') SELECT 'Checking for required stored procedures...' SELECT  '** Missing Procedure:  '+s.name  From @sprocs s Where NOT Exists(Select * from sysobjects where xtype='p' and name=s.name) GO -- Check the services Select 'Checking Recipient Message Services...' Select '** Missing Message Service:' + r.RecipientName +'MessageService' From Recipient r Where not exists (Select * from sys.services s where  s.name  COLLATE SQL_Latin1_General_CP1_CI_AS= r.RecipientName+'MessageService') DECLARE @svcs Table (Name Varchar(2000)) Insert into @svcs Values ('XmlMessageSendingService') SELECT  '** Missing Service:  '+s.name  From @svcs s Where NOT Exists(Select * from sys.services where name=s.name COLLATE SQL_Latin1_General_CP1_CI_AS) GO /*** To Test a message send Run: sp_SendXmlMessage  'TSQLTEST', 'CommerceEngine','<Root><Text>Test</Text></Root>' */ Select CAST(message_body as XML) as xml, * From XmlMessageSendingQueue /*** clean out all queues declare @handle uniqueidentifier declare conv cursor for   select conversation_handle from sys.conversation_endpoints open conv fetch next from conv into @handle while @@FETCH_STATUS = 0 Begin    END Conversation @handle with cleanup    fetch next from conv into @handle End close conv deallocate conv ***********************

    Read the article

  • Adding multiple items in a batch to an osCommerce site?

    - by Unkwntech
    I need to add several hundred products to an osCommerce site (ugh, I know, it wasn't my choice), but osCommerce doesn't have a built-in method for this (or at least I couldn't find it). Does anyone know where some (even half-decent) documentation on HOW osCommerce stores products ('cause it certainly is not in any logical manner) can be found? Or possibly some free add-on/software that will do it?

    Read the article

  • Odd SQL behavior, I'm wondering why this works the way it does.

    - by Matthew Vines
    Consider the following Transact-SQL. DECLARE @table TABLE(val VARCHAR(255) NULL) INSERT INTO @table (val) VALUES('a') INSERT INTO @table (val) VALUES('b') INSERT INTO @table (val) VALUES('c') INSERT INTO @table (val) VALUES('d') INSERT INTO @table (val) VALUES(NULL) select val from @table where val not in ('a') I would expect this to return b, c, d, NULL, but instead it returns b, c, d. Why is this the case? Is NULL not evaluated? Is NULL somehow in the set 'a'?
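
    The short answer is that comparisons with NULL evaluate to UNKNOWN under three-valued logic, so the NULL row never satisfies the NOT IN predicate. A sketch of one way to keep the NULL rows, if that is what is wanted:

      -- NULL <> 'a' is UNKNOWN, not TRUE, so the NULL row is filtered out above.
      -- To return it as well, test for NULL explicitly:
      SELECT val FROM @table WHERE val NOT IN ('a') OR val IS NULL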

    Read the article

  • Java Swing: JTextArea columns question

    - by battousai622
    How do I put the text in specific columns with a JTextArea? private javax.swing.JTextArea jTextArea1; jTextArea1.setColumns(4); jTextArea1.insert(price, 0); //column 1 jTextArea1.insert(cost, 0); //column 2 jTextArea1.insert(quantity, 0); //column etc. jTextArea1.insert(itemName, 0); jTextArea1.insert("\n", 0);

    Read the article

  • I need to programmatically remove a batch of unique constraints that I don't know the names of.

    - by Bill
    I maintain a product that is installed at multiple locations which has been haphazardly upgraded. Unique constraints were added to a number of tables, but I have no idea what the names are at any particular instance. What I do know is the table/column-name pair that has the unique constraint, and I would like to write a script to delete any unique constraint on these column/table combinations. This is MSSQL 2000 and later. Something that works on 2000/2005/2008 would be best!
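
    One possible approach (a sketch, assuming the constraints really are declared constraints rather than plain unique indexes; the table and column names here are placeholders) is to look the constraint name up in the INFORMATION_SCHEMA views, which exist in SQL Server 2000 through 2008, and build the DROP statement dynamically:

      DECLARE @table sysname, @column sysname, @constraint sysname, @sql nvarchar(4000)
      SET @table  = 'MyTable'    -- hypothetical table name
      SET @column = 'MyColumn'   -- hypothetical column name

      SELECT @constraint = tc.CONSTRAINT_NAME
      FROM INFORMATION_SCHEMA.TABLE_CONSTRAINTS tc
      JOIN INFORMATION_SCHEMA.CONSTRAINT_COLUMN_USAGE ccu
        ON ccu.CONSTRAINT_NAME = tc.CONSTRAINT_NAME
       AND ccu.TABLE_NAME      = tc.TABLE_NAME
      WHERE tc.CONSTRAINT_TYPE = 'UNIQUE'
        AND tc.TABLE_NAME      = @table
        AND ccu.COLUMN_NAME    = @column

      IF @constraint IS NOT NULL
      BEGIN
          SET @sql = 'ALTER TABLE [' + @table + '] DROP CONSTRAINT [' + @constraint + ']'
          EXEC sp_executesql @sql
      END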

    Read the article

  • Error : 'java' is not recognized as an internal or external command, operable program or batch file.

    - by Setu
    I have my application online on the Google App Engine. When I deploy this application, this error is generated. I am using NetBeans 6.9. My JDK is installed at "C:\Program Files\Java\jdk1.6.0_23\". I have installed Google App Engine for Java. The application deploys and runs on localhost just fine, and I am also able to start the Google App Engine server. I have set the environment variables as follows: 1) classpath : C:\Program Files\Java\jdk1.6.0_23\bin; 2) path : C:\Program Files\Java\jdk1.6.0_23\bin; 3) JAVA_HOME : C:\Program Files\Java\jdk1.6.0_23\bin; 4) include : C:\Program Files\Java\jdk1.6.0_23\include; 5) lib : C:\Program Files\Java\jdk1.6.0_23\lib; However, in system32, I am not finding java.exe, javac.exe or javawc.exe. Also, when running 1) java.exe 2) javac.exe 3) java -version from the command line, they all give proper output. How do I get my application to deploy properly?

    Read the article

  • Would Python make a good substitute for the Windows command-line/batch scripts?

    - by Lawrence Johnston
    I've got some experience with Bash, which I don't mind, but now that I'm doing a lot of Windows development I need to do basic stuff/write basic scripts using the Windows command-line language. For some reason said language really irritates me, so I was considering learning Python and using that instead. Is Python suitable for such things? Moving files around, creating scripts to do things like unzipping a backup and restoring a SQL database, etc.

    Read the article

  • Assertion failure when trying to write (INSERT, UPDATE) to sqlite database on iPhone.

    - by Mark McFarlane
    I have a really frustrating error that I've spent hours looking at and cannot fix. I can get data from my db no problem with this code, but inserting or updating gives me these errors: *** Assertion failure in +[Functions db_insert_answer:question_type:score:], /Volumes/Xcode/Kanji/Classes/Functions.m:129 *** Terminating app due to uncaught exception 'NSInternalInconsistencyException', reason: 'Error inserting: db_insert_answer:question_type:score:' Here is the code I'm using: [Functions db_insert_answer:[[dict_question objectForKey:@"JISDec"] intValue] question_type:@"kanji_meaning" score:arc4random() % 100]; //update EF, Next_question, n here [Functions db_update_EF:[dict_question objectForKey:@"question"] EF:EF]; To call these functions: +(sqlite3_stmt *)db_query:(NSString *)queryText{ sqlite3 *database = [self get_db]; sqlite3_stmt *statement; NSLog(queryText); if (sqlite3_prepare_v2(database, [queryText UTF8String], -1, &statement, nil) == SQLITE_OK) { } else { NSLog(@"HMM, COULDNT RUN QUERY: %s\n", sqlite3_errmsg(database)); } sqlite3_close(database); return statement; } +(void)db_insert_answer:(int)obj_id question_type:(NSString *)question_type score:(int)score{ sqlite3 *database = [self get_db]; sqlite3_stmt *statement; char *errorMsg; char *update = "INSERT INTO Answers (obj_id, question_type, score, date) VALUES (?, ?, ?, DATE())"; if (sqlite3_prepare_v2(database, update, -1, &statement, nil) == SQLITE_OK) { sqlite3_bind_int(statement, 1, obj_id); sqlite3_bind_text(statement, 2, [question_type UTF8String], -1, NULL); sqlite3_bind_int(statement, 3, score); } if (sqlite3_step(statement) != SQLITE_DONE){ NSAssert1(0, @"Error inserting: %s", errorMsg); } sqlite3_finalize(statement); sqlite3_close(database); NSLog(@"Answer saved"); } +(void)db_update_EF:(NSString *)kanji EF:(int)EF{ sqlite3 *database = [self get_db]; sqlite3_stmt *statement; //NSLog(queryText); char *errorMsg; char *update = "UPDATE Kanji SET EF = ? WHERE Kanji = '?'"; if (sqlite3_prepare_v2(database, update, -1, &statement, nil) == SQLITE_OK) { sqlite3_bind_int(statement, 1, EF); sqlite3_bind_text(statement, 2, [kanji UTF8String], -1, NULL); } else { NSLog(@"HMM, COULDNT RUN QUERY: %s\n", sqlite3_errmsg(database)); } if (sqlite3_step(statement) != SQLITE_DONE){ NSAssert1(0, @"Error updating: %s", errorMsg); } sqlite3_finalize(statement); sqlite3_close(database); NSLog(@"Update saved"); } +(sqlite3 *)get_db{ sqlite3 *database; NSFileManager *fileManager = [NSFileManager defaultManager]; NSString *copyFrom = [[[NSBundle mainBundle] resourcePath] stringByAppendingPathComponent:@"/kanji_training.sqlite"]; if([fileManager fileExistsAtPath:[self dataFilePath]]) { //NSLog(@"DB FILE ALREADY EXISTS"); } else { [fileManager copyItemAtPath:copyFrom toPath:[self dataFilePath] error:nil]; NSLog(@"COPIED DB TO DOCUMENTS BECAUSE IT DIDNT EXIST: NEW INSTALL"); } if (sqlite3_open([[self dataFilePath] UTF8String], &database) != SQLITE_OK) { sqlite3_close(database); NSAssert(0, @"Failed to open database"); NSLog(@"FAILED TO OPEN DB"); } else { if([fileManager fileExistsAtPath:[self dataFilePath]]) { //NSLog(@"DB PATH:"); //NSLog([self dataFilePath]); } } return database; } + (NSString *)dataFilePath { NSArray *paths = NSSearchPathForDirectoriesInDomains(NSDocumentDirectory, NSUserDomainMask, YES); NSString *documentsDirectory = [paths objectAtIndex:0]; return [documentsDirectory stringByAppendingPathComponent:@"kanji_training.sqlite"]; } I really can't work it out! Can anyone help me? Many thanks.

    Read the article

  • Insert a transformed integer_sequence into a variadic template argument?

    - by coderforlife
    How do you insert a transformed integer_sequence (or similar since I am targeting C++11) into a variadic template argument? For example I have a class that represents a set of bit-wise flags (shown below). It is made using a nested-class because you cannot have two variadic template arguments for the same class. It would be used like typedef Flags<unsigned char, FLAG_A, FLAG_B, FLAG_C>::WithValues<0x01, 0x02, 0x04> MyFlags. Typically, they will be used with the values that are powers of two (although not always, in some cases certain combinations would be made, for example one could imagine a set of flags like Read=0x1, Write=0x2, and ReadWrite=0x3=0x1|0x2). I would like to provide a way to do typedef Flags<unsigned char, FLAG_A, FLAG_B, FLAG_C>::WithDefaultValues MyFlags. template<class _B, template <class,class,_B> class... _Fs> class Flags { public: template<_B... _Vs> class WithValues : public _Fs<_B, Flags<_B,_Fs...>::WithValues<_Vs...>, _Vs>... { // ... }; }; I have tried the following without success (placed inside the Flags class, outside the WithValues class): private: struct _F { // dummy class which can be given to a flag-name template template <_B _V> inline constexpr explicit _F(std::integral_constant<_B, _V>) { } }; // we count the flags, but only in a dummy way static constexpr unsigned _count = sizeof...(_Fs<_B, _F, 1>); static inline constexpr _B pow2(unsigned exp, _B base = 2, _B result = 1) { return exp < 1 ? result : pow2(exp/2, base*base, (exp % 2) ? result*base : result); } template <_B... _Is> struct indices { using next = indices<_Is..., sizeof...(_Is)>; using WithPow2Values = WithValues<pow2(_Is)...>; }; template <unsigned N> struct build_indices { using type = typename build_indices<N-1>::type::next; }; template <> struct build_indices<0> { using type = indices<>; }; //// Another attempt //template < _B... _Is> struct indices { // using WithPow2Values = WithValues<pow2(_Is)...>; //}; //template <unsigned N, _B... _Is> struct build_indices // : build_indices<N-1, N-1, _Is...> { }; //template < _B... _Is> struct build_indices<0, _Is...> // : indices<_Is...> { }; public: using WithDefaultValues = typename build_indices<_count>::type::WithPow2Values; Of course, I would be willing to have any other alternatives to the whole situation (supporting both flag names and values in the same template set, etc). I have included a "working" example at ideone: http://ideone.com/NYtUrg - by "working" I mean compiles fine without using default values but fails with default values (there is a #define to switch between them). Thanks!

    Read the article

  • jQuery bug when trying to insert partial elements before() / after() ?

    - by RedGlobe
    I'm trying to wrap a div around an element (my 'template' div) by using jQuery's before() and after(). When I try to insert a closing after the selected element, it actually gets placed before the target. Example: <!DOCTYPE html> <html lang="en"> <head> <meta charset="UTF-8" /> <title>Div Wrap</title> <script src="http://code.jquery.com/jquery-1.4.4.min.js"></script> <script> $('document').ready(function() { var beforestr = "<div id=\"wrap\"><div id=\"header\">Top</div><div id=\"page\">"; var afterstr = "</div><div id=\"footer\">Bottom</div></div>"; $('#template').before(beforestr); $('#template').after(afterstr); }); </script> </head> <body> <div id="template"> <h1>Page Title</h1> <p>Pellentesque habitant morbi tristique senectus et netus et malesuada fames ac turpis egestas. Mauris placerat eleifend leo. Quisque sit amet est et sapien ullamcorper pharetra. <script>document.write('This script should still work and might contain variables. Please don\'t recommend concatenation.');</script> Donec non enim in turpis pulvinar facilisis.</p> </div> </body> </html> The result is: <div id="wrap"> <div id="header">Top</div> <div id="page"> </div> </div> <div id="template"> <h1>Page Title</h1> <p>Pellentesque habitant morbi tristique senectus et netus et malesuada fames ac turpis egestas. Mauris placerat eleifend leo. Quisque sit amet est et sapien ullamcorper pharetra. This script should still work and might contain variables. Please don't recommend concatenation. Donec non enim in turpis pulvinar facilisis.</p> </div> <div id="footer">Bottom</div> Why are my closing wrap and page divs getting placed before the target, when I'm trying to place them after() ? Is there an alternative way to accomplish this (keeping in mind I may need to call script functions within the template div)? As I'm sure you're aware, best practices aren't what I'm going for here.

    Read the article

  • How to get data from dynamically created EditText views and insert it into an array?

    - by Snwspeckle
    So basically what I need my program to do at this point is that when I click the submit button, I need to loop through each dynamic row of the ListView and grab the value in the EditText view and then insert that into an array which I will do further calculations after. Here is my code right now. package com.hello_world; import java.util.ArrayList; import com.hello_world.ByteInputActivity.MyAdapter.ViewHolder; import android.app.Activity; import android.content.Context; import android.os.Bundle; import android.util.Log; import android.view.KeyEvent; import android.view.LayoutInflater; import android.view.View; import android.view.View.OnFocusChangeListener; import android.view.View.OnKeyListener; import android.view.ViewGroup; import android.widget.BaseAdapter; import android.widget.Button; import android.widget.EditText; import android.widget.ListView; import android.widget.TextView; public class ByteInputActivity extends Activity { private ListView myList; private MyAdapter myAdapter; private Integer resQuestions; private Integer indexVal = 0; private View caption; ViewHolder holder; ArrayList<Integer> intArrayList = new ArrayList<Integer>(); @Override protected void onCreate(Bundle savedInstanceState) { super.onCreate(savedInstanceState); setContentView(R.layout.fieldlist); //Gets number of questions from MainActivity Bundle extras = getIntent().getExtras(); if(extras !=null) { resQuestions = extras.getInt("index"); } myList = (ListView) findViewById(R.id.FieldList); myList.setItemsCanFocus(true); myAdapter = new MyAdapter(); myList.setAdapter(myAdapter); Button submit = (Button) findViewById(R.id.btn_New); submit.setOnClickListener(new View.OnClickListener() { @Override public void onClick(View v) { for (int i = 0; i < myList.getCount() ; i++) { View vListSortOrder; vListSortOrder = myList.getChildAt(i); String temp = holder.caption.getText().toString(); Log.e("VALUES", "" +temp); } } }); } public class MyAdapter extends BaseAdapter { private LayoutInflater mInflater; public ArrayList myItems = new ArrayList(); public MyAdapter() { mInflater = (LayoutInflater) getSystemService(Context.LAYOUT_INFLATER_SERVICE); for (int i = 0; i < resQuestions; i++) { ListItem listItem = new ListItem(); listItem.caption = "Index " + i; listItem.indexText = "Index " + i; myItems.add(listItem); indexVal += 1; } notifyDataSetChanged(); } public int getCount() { return myItems.size(); } public Object getItem(int position) { return position; } public long getItemId(int position) { return position; } public View getView(int position, View convertView, ViewGroup parent) { if (convertView == null) { holder = new ViewHolder(); convertView = mInflater.inflate(R.layout.item, null); holder.indexText = (TextView) convertView .findViewById(R.id.textView1); holder.caption = (EditText) convertView .findViewById(R.id.ItemCaption); convertView.setTag(holder); } else { holder = (ViewHolder) convertView.getTag(); } //Fill EditText with the value you have in data source holder.caption.setText(""); holder.caption.setId(position); holder.indexText.setText("Index " + position); holder.indexText.setId(position); //we need to update adapter once we finish with editing holder.caption.setOnFocusChangeListener(new OnFocusChangeListener() { public void onFocusChange(View v, boolean hasFocus) { if (!hasFocus){ final int position = v.getId(); final EditText Caption = (EditText) v; myItems.set(position, Caption.getText().toString()); } } }); return convertView; } class ViewHolder { EditText caption; TextView indexText; } class 
ListItem { String caption; String indexText; } } }

    Read the article

  • SQL SERVER – Puzzle – Statistics are not Updated but are Created Once

    - by pinaldave
    After having excellent response to my quiz – Why SELECT * throws an error but SELECT COUNT(*) does not?I have decided to ask another puzzling question to all of you. I am running this test on SQL Server 2008 R2. Here is the quick scenario about my setup. Create Table Insert 1000 Records Check the Statistics Now insert 10 times more 10,000 indexes Check the Statistics – it will be NOT updated Note: Auto Update Statistics and Auto Create Statistics for database is TRUE Expected Result – Statistics should be updated – SQL SERVER – When are Statistics Updated – What triggers Statistics to Update Now the question is why the statistics are not updated? The common answer is – we can update the statistics ourselves using UPDATE STATISTICS TableName WITH FULLSCAN, ALL However, the solution I am looking is where statistics should be updated automatically based on algorithm mentioned here. Now the solution is to ____________________. Vinod Kumar is not allowed to take participate over here as he is the one who has helped me to build this puzzle. I will publish the solution on next week. Please leave a comment and if your comment consist valid answer, I will publish with due credit. Here is the script to reproduce the scenario which I mentioned. -- Execution Plans Difference -- Create Sample Database CREATE DATABASE SampleDB GO USE SampleDB GO -- Create Table CREATE TABLE ExecTable (ID INT, FirstName VARCHAR(100), LastName VARCHAR(100), City VARCHAR(100)) GO -- Insert One Thousand Records -- INSERT 1 INSERT INTO ExecTable (ID,FirstName,LastName,City) SELECT TOP 1000 ROW_NUMBER() OVER (ORDER BY a.name) RowID, 'Bob', CASE WHEN  ROW_NUMBER() OVER (ORDER BY a.name)%2 = 1 THEN 'Smith' ELSE 'Brown' END, CASE WHEN ROW_NUMBER() OVER (ORDER BY a.name)%20 = 1 THEN 'New York' WHEN  ROW_NUMBER() OVER (ORDER BY a.name)%20 = 5 THEN 'San Marino' WHEN  ROW_NUMBER() OVER (ORDER BY a.name)%20 = 3 THEN 'Los Angeles' WHEN  ROW_NUMBER() OVER (ORDER BY a.name)%20 = 7 THEN 'La Cinega' WHEN  ROW_NUMBER() OVER (ORDER BY a.name)%20 = 13 THEN 'San Diego' WHEN  ROW_NUMBER() OVER (ORDER BY a.name)%20 = 17 THEN 'Las Vegas' ELSE 'Houston' END FROM sys.all_objects a CROSS JOIN sys.all_objects b GO -- Display statistics of the table - none listed sp_helpstats N'ExecTable', 'ALL' GO -- Select Statement SELECT FirstName, LastName, City FROM ExecTable WHERE City  = 'New York' GO -- Display statistics of the table sp_helpstats N'ExecTable', 'ALL' GO -- Replace your Statistics over here -- NOTE: Replace your _WA_Sys with stats from above query DBCC SHOW_STATISTICS('ExecTable', _WA_Sys_00000004_7D78A4E7); GO -------------------------------------------------------------- -- Round 2 -- Insert Ten Thousand Records -- INSERT 2 INSERT INTO ExecTable (ID,FirstName,LastName,City) SELECT TOP 10000 ROW_NUMBER() OVER (ORDER BY a.name) RowID, 'Bob', CASE WHEN  ROW_NUMBER() OVER (ORDER BY a.name)%2 = 1 THEN 'Smith' ELSE 'Brown' END, CASE WHEN ROW_NUMBER() OVER (ORDER BY a.name)%20 = 1 THEN 'New York' WHEN  ROW_NUMBER() OVER (ORDER BY a.name)%20 = 5 THEN 'San Marino' WHEN  ROW_NUMBER() OVER (ORDER BY a.name)%20 = 3 THEN 'Los Angeles' WHEN  ROW_NUMBER() OVER (ORDER BY a.name)%20 = 7 THEN 'La Cinega' WHEN  ROW_NUMBER() OVER (ORDER BY a.name)%20 = 13 THEN 'San Diego' WHEN  ROW_NUMBER() OVER (ORDER BY a.name)%20 = 17 THEN 'Las Vegas' ELSE 'Houston' END FROM sys.all_objects a CROSS JOIN sys.all_objects b GO -- Select Statement SELECT FirstName, LastName, City FROM ExecTable WHERE City  = 'New York' GO -- Display statistics of the table sp_helpstats 
N'ExecTable', 'ALL' GO -- Replace your Statistics over here -- NOTE: Replace your _WA_Sys with stats from above query DBCC SHOW_STATISTICS('ExecTable', _WA_Sys_00000004_7D78A4E7); GO -- You will notice that Statistics are still updated with 1000 rows -- Clean up Database DROP TABLE ExecTable GO USE MASTER GO ALTER DATABASE SampleDB SET SINGLE_USER WITH ROLLBACK IMMEDIATE; GO DROP DATABASE SampleDB GO Reference: Pinal Dave (http://blog.SQLAuthority.com) Filed under: PostADay, SQL, SQL Authority, SQL Index, SQL Performance, SQL Query, SQL Server, SQL Tips and Tricks, T SQL, Technology Tagged: SQL Statistics, Statistics

    Read the article

  • SQL SERVER – Solution – Puzzle – Statistics are not Updated but are Created Once

    - by pinaldave
    Earlier I asked puzzle why statistics are not updated. Read the complete details over here: Statistics are not Updated but are Created Once In the question I have demonstrated even though statistics should have been updated after lots of insert in the table are not updated.(Read the details SQL SERVER – When are Statistics Updated – What triggers Statistics to Update) In this example I have created following situation: Create Table Insert 1000 Records Check the Statistics Now insert 10 times more 10,000 indexes Check the Statistics – it will be NOT updated Auto Update Statistics and Auto Create Statistics for database is TRUE Now I have requested two things in the example 1) Why this is happening? 2) How to fix this issue? I have many answers – here is the how I fixed it which has resolved the issue for me. NOTE: There are multiple answers to this problem and I will do my best to list all. Solution: Create nonclustered Index on column City Here is the working example for the same. Let us understand this script and there is added explanation at the end. -- Execution Plans Difference -- Estimated Execution Plan Vs Actual Execution Plan -- Create Sample Database CREATE DATABASE SampleDB GO USE SampleDB GO -- Create Table CREATE TABLE ExecTable (ID INT, FirstName VARCHAR(100), LastName VARCHAR(100), City VARCHAR(100)) GO CREATE NONCLUSTERED INDEX IX_ExecTable1 ON ExecTable (City); GO -- Insert One Thousand Records -- INSERT 1 INSERT INTO ExecTable (ID,FirstName,LastName,City) SELECT TOP 1000 ROW_NUMBER() OVER (ORDER BY a.name) RowID, 'Bob', CASE WHEN  ROW_NUMBER() OVER (ORDER BY a.name)%2 = 1 THEN 'Smith' ELSE 'Brown' END, CASE WHEN ROW_NUMBER() OVER (ORDER BY a.name)%20 = 1 THEN 'New York' WHEN  ROW_NUMBER() OVER (ORDER BY a.name)%20 = 5 THEN 'San Marino' WHEN  ROW_NUMBER() OVER (ORDER BY a.name)%20 = 3 THEN 'Los Angeles' WHEN  ROW_NUMBER() OVER (ORDER BY a.name)%20 = 7 THEN 'La Cinega' WHEN  ROW_NUMBER() OVER (ORDER BY a.name)%20 = 13 THEN 'San Diego' WHEN  ROW_NUMBER() OVER (ORDER BY a.name)%20 = 17 THEN 'Las Vegas' ELSE 'Houston' END FROM sys.all_objects a CROSS JOIN sys.all_objects b GO -- Display statistics of the table sp_helpstats N'ExecTable', 'ALL' GO -- Select Statement SELECT FirstName, LastName, City FROM ExecTable WHERE City  = 'New York' GO -- Display statistics of the table sp_helpstats N'ExecTable', 'ALL' GO -- Replace your Statistics over here DBCC SHOW_STATISTICS('ExecTable', IX_ExecTable1); GO -------------------------------------------------------------- -- Round 2 -- Insert One Thousand Records -- INSERT 2 INSERT INTO ExecTable (ID,FirstName,LastName,City) SELECT TOP 1000 ROW_NUMBER() OVER (ORDER BY a.name) RowID, 'Bob', CASE WHEN  ROW_NUMBER() OVER (ORDER BY a.name)%2 = 1 THEN 'Smith' ELSE 'Brown' END, CASE WHEN ROW_NUMBER() OVER (ORDER BY a.name)%20 = 1 THEN 'New York' WHEN  ROW_NUMBER() OVER (ORDER BY a.name)%20 = 5 THEN 'San Marino' WHEN  ROW_NUMBER() OVER (ORDER BY a.name)%20 = 3 THEN 'Los Angeles' WHEN  ROW_NUMBER() OVER (ORDER BY a.name)%20 = 7 THEN 'La Cinega' WHEN  ROW_NUMBER() OVER (ORDER BY a.name)%20 = 13 THEN 'San Diego' WHEN  ROW_NUMBER() OVER (ORDER BY a.name)%20 = 17 THEN 'Las Vegas' ELSE 'Houston' END FROM sys.all_objects a CROSS JOIN sys.all_objects b GO -- Select Statement SELECT FirstName, LastName, City FROM ExecTable WHERE City  = 'New York' GO -- Display statistics of the table sp_helpstats N'ExecTable', 'ALL' GO -- Replace your Statistics over here DBCC SHOW_STATISTICS('ExecTable', IX_ExecTable1); GO -- Clean up Database DROP TABLE ExecTable GO 
When I created non clustered index on the column city, it also created statistics on the same column with same name as index. When we populate the data in the column the index is update – resulting execution plan to be invalided – this leads to the statistics to be updated in next execution of SELECT. This behavior does not happen on Heap or column where index is auto created. If you explicitly update the index, often you can see the statistics are updated as well. You can see this is for sure happening if you follow the tell of John Sansom. John Sansom‘s suggestion: That was fun! Although the column statistics are invalidated by the time the second select statement is executed, the query is not compiled/recompiled but instead the existing query plan is reused. It is the “next” compiled query against the column statistics that will see that they are out of date and will then in turn instantiate the action of updating statistics. You can see this in action by forcing the second statement to recompile. SELECT FirstName, LastName, City FROM ExecTable WHERE City = ‘New York’ option(RECOMPILE) GO Kevin Cross also have another suggestion: I agree with John. It is reusing the Execution Plan. Aside from OPTION(RECOMPILE), clearing the Execution Plan Cache before the subsequent tests will also work. i.e., run this before round 2: ————————————————————– – Clear execution plan cache before next test DBCC FREEPROCCACHE WITH NO_INFOMSGS; ————————————————————– Nice puzzle! Kevin As this was puzzle John and Kevin both got the correct answer, there was no condition for answer to be part of best practices. I know John and he is finest DBA around – his tremendous knowledge has always impressed me. John and Kevin both will agree that clearing cache either using DBCC FREEPROCCACHE and recompiling each query every time is for sure not good advice on production server. It is correct answer but not best practice. By the way, if you have better solution or have better suggestion please advise. I am open to change my answer and publish further improvement to this solution. On very separate note, I like to have clustered index on my Primary Key, which I have not mentioned here as it is out of the scope of this puzzle. Reference: Pinal Dave (http://blog.SQLAuthority.com) Filed under: Pinal Dave, PostADay, Readers Contribution, Readers Question, SQL, SQL Authority, SQL Index, SQL Puzzle, SQL Query, SQL Scripts, SQL Server, SQL Tips and Tricks, T SQL, Technology Tagged: Statistics

    Read the article
