Search Results

Search found 5233 results on 210 pages for 'records'.


  • UPDATE query that fixes orphaned records

    - by Jed
    I have an Access database with two tables that are related by a PK/FK. Unfortunately, the tables have allowed duplicate/redundant records, which has made the database a bit screwy. I am trying to figure out a SQL statement that will fix the problem. To better explain the problem and goal, I have created example tables to use as reference. You'll notice there are two tables, a Student table and a TestScore table, where StudentID is the PK/FK. The Student table contains duplicate records for the students John, Sally, Tommy, and Suzy. In other words, the Johns with StudentIDs 1 and 5 are the same person, the Sallys with 2 and 6 are the same person, and so on. The TestScore table relates test scores to a student. Ignoring how/why the Student table allowed duplicates, the goal I'm trying to accomplish is to update the TestScore table so that each disabled StudentID is replaced with the corresponding enabled StudentID. So, all StudentIDs = 1 (John) will be updated to 5, all StudentIDs = 2 (Sally) will be updated to 6, and so on. Here's the resultant TestScore table that I'm shooting for (notice there is no longer any reference to the disabled StudentIDs 1-4). Can you think of a query (compatible with MS Access's JET engine) that can accomplish this goal? Or maybe you can offer some tips/perspectives that will point me in the right direction. Thanks.
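
    A minimal sketch of one JET-dialect approach, assuming the duplicated students can be matched on a name column (StudentName here is an assumed column) and that the higher StudentID of each pair is the enabled record:

        -- Repoint each TestScore row from the disabled (lower) StudentID
        -- to the enabled (higher) StudentID of the same-named student.
        UPDATE (TestScore AS ts
        INNER JOIN Student AS s_old ON ts.StudentID = s_old.StudentID)
        INNER JOIN Student AS s_new
            ON s_old.StudentName = s_new.StudentName
           AND s_new.StudentID > s_old.StudentID
        SET ts.StudentID = s_new.StudentID;

    The inequality join is what picks the newer of the two duplicate rows for each student.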

    Read the article

  • Inserting records with ExecuteNonQuery throws an 'invalid column name' exception

    - by tina
    Hi all, I am using SQL Server 2008. I want to insert records into a table using ExecuteNonQuery, for which I have written:

        customUtility.ExecuteNonQuery("insert into furniture_ProductAccessories(Product_id, Accessories_id, SkuNo, Description1, Price, Discount) values(" + prodid + "," + strAcc + "," + txtSKUNo.Text + "," + txtAccDesc.Text + "," + txtAccPrices.Text + "," + txtAccDiscount.Text + ")");

    The following is the ExecuteNonQuery function:

        public static bool ExecuteNonQuery(string SQL)
        {
            bool retVal = false;
            using (SqlConnection con = new SqlConnection(System.Web.Configuration.WebConfigurationManager.ConnectionStrings["dbConnect"].ToString()))
            {
                con.Open();
                SqlTransaction trans = con.BeginTransaction();
                SqlCommand command = new SqlCommand(SQL, con, trans);
                try
                {
                    command.ExecuteNonQuery();
                    trans.Commit();
                    retVal = true;
                }
                catch (Exception ex)
                {
                    //HttpContext.Current.Response.Write(SQL + "<br>" + ex.Message);
                    //HttpContext.Current.Response.End();
                }
                finally
                {
                    // Always call Close when done reading.
                    con.Close();
                }
                return retVal;
            }
        }

    But it throws an 'invalid column name' exception for Description1, and even for its value coming from txtAccDesc.Text. If I remove the Description1 column, the other records get inserted successfully. Could you please help me? Thanks.
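
    This is likely because the concatenation leaves the text values unquoted, so SQL Server parses them as column names, which is why the content of the Description1 textbox shows up in the error. A sketch of what the statement needs to look like when it reaches the server (the values shown are placeholders; a parameterised command would be the safer fix):

        INSERT INTO furniture_ProductAccessories
            (Product_id, Accessories_id, SkuNo, Description1, Price, Discount)
        VALUES (12, 3, 'SKU-001', 'Sample accessory description', 99.50, 5.00);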

    Read the article

  • "Special case" records for foreign key constraints

    - by keithjgrant
    Let's say I have a MySQL table called foo, with a foreign key option_id constrained to the option table. When I create a foo record, the user may or may not have selected an option, and 'no option' is a viable selection. What is the best way to differentiate between 'null' (i.e. the user hasn't made a selection yet) and 'no option' (i.e. the user selected 'no option')? Right now, my plan is to insert a special record into the option table. Let's say that winds up with an id of 227 (this table already has a number of records at this point, so '1' isn't available). I have no need to access this record at a database level, and it would act as nothing more than a placeholder that the foreign key in the foo table can reference. So do I just hard-code '227' in my codebase when I'm creating foo records where the user has selected 'no option'? The hard-coded id seems sloppy and leaves room for error as the code is maintained down the road, but I'm not really sure of another approach.
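
    One common alternative, sketched below: give the option table a stable code column and look the sentinel row up by that code at runtime, so nothing in the codebase depends on the surrogate id 227. The column and code names here are assumptions, not part of the original schema.

        ALTER TABLE `option` ADD COLUMN code VARCHAR(32) NULL;
        UPDATE `option` SET code = 'NO_OPTION' WHERE id = 227;

        -- Creating a foo row for a user who explicitly picked 'no option':
        INSERT INTO foo (option_id)
        SELECT id FROM `option` WHERE code = 'NO_OPTION';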

    Read the article

  • Efficient update of SQLite table with many records

    - by blackrim
    I am trying to use sqlite (sqlite3) for a project to store hundreds of thousands of records (I would like sqlite so users of the program don't have to run a [my]sql server). I sometimes have to update hundreds of thousands of records to enter left and right values (they are hierarchical), but have found the standard update table set left_value = 4, right_value = 5 where id = 12340; to be very slow. I have tried surrounding every thousand or so with begin; .... update... update table set left_value = 4, right_value = 5 where id = 12340; update... .... commit; but again, very slow. Odd, because when I populate it with a few hundred thousand records (with inserts), it finishes in seconds. I am currently trying to test the speed in Python (the slowness shows up both at the command line and in Python) before I move it to the C++ implementation, but right now this is way too slow and I need to find a new solution unless I am doing something wrong. Thoughts? (I would take an open source alternative to SQLite that is portable as well.)
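
    A sketch of the SQL-side settings that usually make this kind of bulk update fast in SQLite. The big one is an index on the column in the WHERE clause (if id is not already the INTEGER PRIMARY KEY, each update is a full table scan); the PRAGMAs trade durability for speed during the batch and are optional. Table and index names here are placeholders:

        CREATE INDEX IF NOT EXISTS idx_tree_id ON tree(id);

        PRAGMA synchronous = OFF;
        PRAGMA journal_mode = MEMORY;

        BEGIN;
        UPDATE tree SET left_value = 4, right_value = 5 WHERE id = 12340;
        -- ... the remaining updates, all inside the same transaction ...
        COMMIT;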

    Read the article

  • Select multiple records by one query

    - by kofto4ka
    Hello there. Please give me advice on how to construct a select query. I have a table named table with the fields type and obj_id. I want to select all records in accordance with the following array:

        $arr = array(
            0 => array('type' => 1,  'obj_id' => 5),
            1 => array('type' => 3,  'obj_id' => 15),
            2 => array('type' => 4,  'obj_id' => 14),
            3 => array('type' => 12, 'obj_id' => 17),
        );

    I want to select the needed rows with one query; is that possible? Something like:

        select * from `table` where type in (1,3,4,12) and obj_id in (5,15,14,17)

    But this query also returns records with, for example, type = 3 and obj_id = 14, or type = 1 and obj_id = 17. p.s. Moderators, please fix my title; I don't know how to describe my question.
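
    A sketch using MySQL row constructors, so each (type, obj_id) pair has to match together rather than each column independently:

        SELECT *
        FROM `table`
        WHERE (type, obj_id) IN ((1, 5), (3, 15), (4, 14), (12, 17));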

    Read the article

  • Session does not give the right records?

    - by Jugal
    I want to keep one session, but when I roll back a transaction it gets IsActive = false, so I cannot commit or roll back later statements using the same transaction; I then need to create a new transaction. What is going wrong here?

        var session = NHibernateHelper.OpenSession(); /* Returns a new session. */
        var transaction1 = session.BeginTransaction();

        var list1 = session.Query<Make>().ToList(); /* Returns 4 records. */
        session.Delete(list1[2]);

        /* After Rollback the transaction has IsActive = false, so I cannot commit
         * or roll back with this transaction in future; I need to create a new one. */
        transaction1.Rollback();

        var transaction2 = session.BeginTransaction();

        /* Returns 3 records. Why am I not getting the object that was deleted
         * but then rolled back? */
        var list2 = session.Query<Make>().ToList();

    Does anyone have an idea what is going wrong here? I am not getting the deleted object back even though the delete was rolled back.

    Read the article

  • Best Approach for Checking and Inserting Records

    - by nevets1219
    One of our existing C programs has this purpose:

        Open connection to DB
        for record in all_records:
            if record contains certain data:
                if record is NOT in table A:                       // see #1
                    insert record information into table A and B   // see #2
        Close connection to DB

    #1 is a "select field from table where field = XXX"; #2 is the two inserts. This is typically done every X months to sync everything up, or so I'm told. I've also been told that this process takes roughly a couple of days. There are (currently) at most 2.5 million records (though not necessarily all 2.5M will be inserted). One of the tables contains 10 fields and the other 5 fields. There isn't much to be done about iterating through the records, since that part can't be changed at the moment. What I would like to do is speed up the part where I query MySQL. I'm not sure if I have left out any important details -- please let me know! I'm also no SQL expert, so feel free to point out the obvious. I have thought about:

        - Putting all the inserts into a transaction (at the moment I'm not sure how important it is for the transaction to be all-or-none, or how this affects performance)
        - Using INSERT X WHERE NOT EXISTS Y
        - LOAD DATA INFILE (but that would require I create a (possibly) large temp file)
        - I read that (hopefully someone can confirm) I should drop indexes so they aren't recalculated on every insert.

    mysql Ver 14.7 Distrib 4.1.22, for sun-solaris2.10 (sparc) using readline 4.3
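
    A sketch of pushing the existence check into the INSERT itself, which avoids the separate SELECT round trip per record (MySQL 4.1-compatible; the table and column names are placeholders):

        INSERT INTO table_a (field1, field2)
        SELECT 'value1', 'value2'
        FROM DUAL
        WHERE NOT EXISTS (SELECT 1 FROM table_a WHERE key_field = 'value1');

    With a unique index on the key column, INSERT IGNORE achieves the same effect in a single statement per record, and many such statements can share one transaction.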

    Read the article

  • Hard to append a table with many records to another without generating duplicates

    - by Bill Mudry
    I may seem a bit wordy at first, but my hope is that this will make it easier for all of you to understand what I am doing in the first place. I have an uncommon but enjoyable activity of collecting as many species of wood from around the world as I can (over 2,900 so far). Ok, that is the real world. Meanwhile, I have spent over 8 years compiling over 5.8 MB of text data on all the woods of the world. That got so large that learning some basic PHP and MySQL was most welcome, so I could build a new database-driven home for all this research. I am still slow at it but getting there. The original premise was to find evidence of as many species of wood in the world as I can. The more names identified, the more successful the project. I have named the project TAXA for ease of conversation (short for Taxonomy). You are most welcome to take a look at what I have so far at www.prowebcanada.com/taxa. It is 95% dynamically driven.
    So far I am reporting about 6,500 botanical wood names and, as said above, the more I can report, the more successful the project. I have a file of all the woods in the second largest wood collection in the world, the Tervuren wood collection in the Netherlands, with over 11,300 wood names even after cleaning out all duplicates. That is almost twice the number I am reporting now, so porting all the new wood names from Tervuren into the 'species' table, where I keep the reported data, would be a major advancement for the project. At one point I was able to add all the Tervuren records to the species table, but over 3,000 duplicates also formed. They were not in the Tervuren file in the first place but represent the same wood names common to both files. It is common sense that there would be woods common to both that, when merged, would create new duplicates.
    At one point, with the help of others from another forum, I may very well have finally gotten the proper SQL statement. When I ran it, though, the system said (semi-amusingly at first) that it had gone away! After looking up on the net what could have done this, one reason is that the MySQL timeout lapses, probably because of the large size of the files I am running. I am running this on a rented account on GoDaddy, so I cannot go about adjusting any config file. For safety, I copied the tervuren.sql file as tervuren_target.sql and the species.sql file as species_master.sql to use as working files, just to make sure I protect the original files from destruction or damage. Later I can rename species_master back to just species.sql once I am happy that everything worked well.
    The species file has about 18 columns, but only 5 of them match the columns in the Tervuren file (name for name, and collation also). The rest of the columns are just along for the ride, so to speak. The common key in both is the 'species_name' column. I am not sure it is proper to call one a primary key and the other a foreign key, since there really is no relational connection between them. One is just more data for the other and can disappear afterwards, never to be referred to by the working code in the application. I have been very surprised and flabbergasted at how hard it can be to append records from one large table into another (with the same column names plus others) without generating NEW duplicates in the first place.
    Watch out for thinking that a SELECT DISTINCT statement may do the job, because absolutely NO records in the species table must get destroyed in the process, and there is no way (well, that I know of) to tell the DISTINCT clause this. Yes, the original 'species' table has duplicates in it even before all this but, trust me, they have to be removed the long, hard way, manually, record by record, or I will lose precious information. It is more important to just make sure no NEW duplicates form when bringing new names from tervuren_target.species_name into species.species_name. I am hoping and thinking that a straight SQL solution should work, except for that nasty timeout. How do I get past that? Could it mean that I may have to turn to a PHP-plus-SQL method? Or would I have to break up the Tervuren file into a few smaller ones and run them independently (I hope not...)?
    So far, what seems like it should be easy has proven to be unexpectedly tricky. I appreciate any help you can give, but start from the assumption that this may be harder to do right than it seems on the surface. By the way, I am running a quad-core 64-bit system with Windows 7, so at least I have some fairly hefty power on the client end. I have a direct ethernet cable feeding a cable connection to the Internet. Once I get an algorithm and code working for this, I also have many other lists to process that could make the 'species' table grow even more. It could be equivalent to (ahem) lighting a rocket under my project (especially compared to doing this record by record manually)! This is my first time in this forum, so I do not know how I will receive any replies. Do I have to come back here periodically, or are replies emailed out also? It would be great if you CC'd copies to me at billmudry at rogers.com :-) Much thanks for your patience and help, Bill Mudry, Mississauga, Ontario, Canada (next to Toronto).
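
    A sketch of the usual pattern for this kind of append, assuming species_name is the matching column in both working tables and that the remaining column names below are placeholders: only Tervuren names with no match in the species table get inserted, and no existing species rows are touched.

        INSERT INTO species_master (species_name, col_b, col_c, col_d, col_e)
        SELECT t.species_name, t.col_b, t.col_c, t.col_d, t.col_e
        FROM tervuren_target AS t
        LEFT JOIN species_master AS s
               ON s.species_name = t.species_name
        WHERE s.species_name IS NULL;

    An index on species_name in both tables keeps the join fast enough that the hosting timeout is much less likely to bite.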

    Read the article

  • Deleting multiple records in ASP.NET MVC using jqGrid

    - by Tevin
    How can you enable multiple selection in a jqGrid, and also allow users to delete all of the selected rows using an ASP.NET MVC controller? I have set the delete url property to my /Controller/Delete method, and this works fine if one record is selected. However, if multiple records are selected, it attempts to send a null value back to the controller where an integer id is required.

    Read the article

  • Data type mismatch when retrieving records from an access database using a DateTimePicker

    - by Daniel
    I get a "Data type mismatch in criteria expression" error when I try to retrieve records from an Access database between two dates using a DateTimePicker in C#. This is the SELECT statement:

        else if (dtpDOBFrom.Value < dtpDOBTo.Value)
        {
            cmdSearch.CommandText = "SELECT [First Name], [Surname], [Contact Type], [Birthdate] FROM [Contacts] WHERE [Birthdate] >= '" + dtpDOBFrom.Value + "' AND [Birthdate] <= '" + dtpDOBTo.Value + "'";
        }
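
    JET is likely comparing [Birthdate] against those quoted values as text, which would explain the mismatch; date literals in Access SQL are delimited with # (or, better, passed as OleDb parameters). A sketch of the literal form, with placeholder dates:

        SELECT [First Name], [Surname], [Contact Type], [Birthdate]
        FROM [Contacts]
        WHERE [Birthdate] >= #01/01/1980# AND [Birthdate] <= #12/31/1990#;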

    Read the article

  • How to get identities of inserted data records using SQL bulk copy

    - by Olga
    Hello, I have an ADO.NET DataTable with about 100,000 records. In this table there is a column "xyID" which has no values in it, because they are generated on insertion into my MSSQL database. Now I have the problem that I need these IDs for other processes. I am looking for a way to bulk copy this DataTable into the MSSQL database and, within the same "step", to "fill" my DataTable with the generated IDs. Thank you for your answers!
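
    SqlBulkCopy itself does not report the identities it generates. One common workaround, sketched here with placeholder names: bulk copy into a staging table, then move the rows into the real table with INSERT ... OUTPUT so SQL Server returns the new xyID values as a result set that can be read back into the DataTable.

        INSERT INTO TargetTable (ColA, ColB)
        OUTPUT inserted.xyID, inserted.ColA, inserted.ColB
        SELECT ColA, ColB
        FROM StagingTable;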

    Read the article

  • Creating a new index in SQL when current records don't meet that index

    - by Jonathan
    Hey all- I'd like to add an index to a table that already contains data. I know that there are a few records currently in the table that are not unique with this new index. Clearly, MySQL won't let me add the index until they all are. I need a query to identify the rows which currently have the same index values; I can then delete or modify these rows as necessary. The new index contains 6 fields. Thanks- Jonathan
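
    A sketch of the standard duplicate finder, with f1 through f6 standing in for the six fields of the planned unique index (names are placeholders):

        SELECT f1, f2, f3, f4, f5, f6, COUNT(*) AS dup_count
        FROM my_table
        GROUP BY f1, f2, f3, f4, f5, f6
        HAVING COUNT(*) > 1;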

    Read the article

  • Function did not insert all records

    - by user1799459
    I wrote the following code to transfer data from Access to Firebird:

        def FirebirdDatetime(dt):
            return '\'%s.%s.%s\'' % (str(dt.day).rjust(2, '0'), str(dt.month).rjust(2, '0'), str(dt.year).rjust(4, '0'))

        def SelectFromAccessTable(tablename):
            return 'select * from [' + tablename + ']'

        def InsertToFirebirdTable(tablename, row):
            values = ''
            i = 0
            for elem in row:
                i += 1
                #print type(elem)
                if type(elem) == int:
                    temp = str(elem)
                elif (type(elem) == str) or (type(elem) == unicode):
                    temp = '\'%s\'' % (elem,)
                elif type(elem) == datetime.datetime:
                    temp = FirebirdDatetime(elem)
                elif type(elem) == decimal.Decimal:
                    temp = str(elem)
                elif elem == None:
                    temp = 'null'
                if (i < len(row)):
                    values += temp + ', '
                else:
                    values += temp
            return 'insert into ' + tablename + ' values (' + values + ')'

        def AccessToFirebird(accesstablename, firebirdtablename, accesscursor, firebirdcursor):
            SelectSql = SelectFromAccessTable(accesstablename)
            for row in accesscursor.execute(SelectSql):
                InsertSql = InsertToFirebirdTable(firebirdtablename, row)
                print InsertSql
                firebirdcursor.execute(InsertSql)

    In the main module there is an AccessToFirebird function call:

        conAcc = pyodbc.connect('DRIVER={Microsoft Access Driver (*.mdb, *.accdb)};DBQ=D:\ThirdTask\Northwind.accdb')
        SqlAccess = conAcc.cursor()
        conn.begin()
        cur = conn.cursor()
        sql.AccessToFirebird('Customers', 'CLIENTS', SqlAccess, cur)
        conn.commit()
        conn.begin()
        cur = conn.cursor()
        sql.AccessToFirebird('??????????', 'EMPLOYEES', SqlAccess, cur)
        sql.AccessToFirebird('????', 'ROLES', SqlAccess, cur)
        sql.AccessToFirebird('???? ???????????', 'EMPLOYEES_ROLES', SqlAccess, cur)
        sql.AccessToFirebird('????????', 'DELIVERY', SqlAccess, cur)
        sql.AccessToFirebird('??????????', 'SUPPLIERS', SqlAccess, cur)
        sql.AccessToFirebird('????????? ?????? ???????', 'TAX_STATUS_OF_ORDERS', SqlAccess, cur)
        sql.AccessToFirebird('????????? ???????? ? ??????', 'STATE_ORDER_DETAILS', SqlAccess, cur)
        sql.AccessToFirebird('????????? ???????', 'CONDITION_OF_ORDERS', SqlAccess, cur)
        sql.AccessToFirebird('??????', 'ORDERS', SqlAccess, cur)
        sql.AccessToFirebird('?????', 'BILLS', SqlAccess, cur)
        sql.AccessToFirebird('????????? ?????? ?? ????????????', 'STATUS_PURCHASE_ORDER', SqlAccess, cur)
        sql.AccessToFirebird('?????? ?? ????????????', 'ORDERS_FOR_ACQUISITION', SqlAccess, cur)
        sql.AccessToFirebird('???????? ? ?????? ?? ????????????', 'INFORMPURCHASEORDER', SqlAccess, cur)
        sql.AccessToFirebird('??????', 'PRODUCTS', SqlAccess, cur)
        conn.commit()
        conAcc.commit()
        conn.close()
        conAcc.close()

    But as a result, not all records were inserted into the PRODUCTS table (the Products table of the Northwind database). For example, this request does not work:

        insert into PRODUCTS values ('4', 1, 'NWTB-1', '?????????? ???', null, 13.5000, 18.0000, 10, 40, '10 ??????? ?? 20 ?????????', '10 ??????? ?? 20 ?????????', 10, '???????', '')

    In IBExpert this request produces the message:

        can't format message 13:587 -- message file C:\Windows\firebird.msg not found.
        conversion error from string "10 ?????????±???? ???? 20 ???°???µ?‚????????".

    Only requests like these worked:

        insert into PRODUCTS values ('1', 82, 'NWTC-82', '???????', null, 2.0000, 4.0000, 20, 100, null, null, null, '????', '')
        insert into PRODUCTS values ('9', 83, 'NWTCS-83', '???????????? ?????', null, 0.5000, 1.8000, 30, 200, null, null, null, '????? ? ???????', '')
        insert into PRODUCTS values ('1', 97, 'NWTC-82', '???????', null, 3.0000, 5.0000, 50, 200, null, null, null, '????', '')
        insert into PRODUCTS values ('6', 98, 'NWTSO-98', '??????? ???', null, 1.0000, 1.8900, 100, 200, null, null, null, '????', '')
        insert into PRODUCTS values ('6', 99, 'NWTSO-99', '??????? ??????', null, 1.0000, 1.9500, 100, 200, null, null, null, '????', '')

    The other records were not inserted.

    Read the article

  • Recipe for adding Drupal node records

    - by bert
    I am looking for a recipe for adding Drupal node records. I have identified three tables:

        node_revisions
            nid = 249  - vid + 1?
            vid = 248  - auto-increment
        node
            nid = 250  - vid + 1?
            vid = 249  - auto-increment
        content_type_my_content
            vid = 248  - from node_revisions table?
            nid = 249  - from node table?

    Am I on the right track? Are there some helper functions for this?

    Read the article

  • Get records that appear more than once per day

    - by milo2010
    How can I see all the records that appear more than once per day? I have this table:

        ID  Name    Date
        1   John    27.03.2010 18:17:00
        2   Mike    27.03.2010 16:38:00
        3   Sonny   28.03.2010 20:23:00
        4   Anna    29.03.2010 13:51:00
        5   Maria   29.03.2010 21:59:00
        6   Penny   29.03.2010 17:25:00
        7   Alba    30.03.2010 09:36:00
        8   Huston  31.03.2010 10:19:00

    I want to get:

        1   John    27.03.2010 18:17:00
        2   Mike    27.03.2010 16:38:00
        4   Anna    29.03.2010 13:51:00
        5   Maria   29.03.2010 21:59:00
        6   Penny   29.03.2010 17:25:00
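
    A sketch of one way to express this (MySQL-style DATE() shown; the table name is a placeholder): keep only the rows whose calendar date occurs more than once.

        SELECT t.*
        FROM records_table AS t
        JOIN (
            SELECT DATE(`Date`) AS d
            FROM records_table
            GROUP BY DATE(`Date`)
            HAVING COUNT(*) > 1
        ) AS dup ON DATE(t.`Date`) = dup.d;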

    Read the article

  • HSQLDB: how to manually insert records

    - by l245c4l
    Hey, my question is how I can manually add records to an HSQLDB database, I mean using the command line or some client. I know I can use the HSQLDB Database Manager, but I cannot execute any query with it. It says that there is no table of the specified name. What might be the problem?
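
    Quite often the cause is identifier case: HSQLDB folds unquoted table names to upper case, so a table created with quoted mixed-case names has to be referenced with the same quoting from the Database Manager or SqlTool. A sketch with placeholder names:

        -- table created without quotes, i.e. stored as MYTABLE:
        INSERT INTO PUBLIC.MYTABLE (ID, NAME) VALUES (1, 'first row');

        -- table created with quoted, mixed-case identifiers:
        INSERT INTO PUBLIC."myTable" ("id", "name") VALUES (1, 'first row');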

    Read the article

  • Inserting Multiple Records in SQL2000

    - by Chris M
    I have a web app that currently inserts x (between 1 and 40) records into a table that contains about 5 fields, via a LINQ-to-SQL stored procedure in a loop. Would it be better to manually write the SQL inserts to, say, a StringBuilder and run them against the database once the loop has completed, rather than issuing 30 separate transactions? Or should I just accept that this is negligible for such a small number of inserts?
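
    If batching does turn out to matter, a sketch of a single-statement, single-transaction form that works on SQL Server 2000 (which predates multi-row VALUES lists); the table and field names are placeholders:

        BEGIN TRANSACTION;
        INSERT INTO MyTable (Field1, Field2, Field3, Field4, Field5)
        SELECT 'a1', 'b1', 'c1', 'd1', 'e1'
        UNION ALL
        SELECT 'a2', 'b2', 'c2', 'd2', 'e2';
        COMMIT TRANSACTION;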

    Read the article

  • Select most recent records using LINQ to Entities

    - by begemotya
    I have a simple LINQ to Entities table to query, and I want to get the most recent records using a date field. So I tried this code:

        IQueryable<Alert> alerts = GetAlerts();
        IQueryable latestAlerts = from a in alerts
                                  group a by a.UpdateDateTime into g
                                  select g.OrderBy(a => a.Identifier).First();

    Error: NotSupportedException: The method 'GroupBy' is not supported. Is there any other way to do it? Thanks a lot!

    Read the article

  • Redis - Records Fall Off

    - by Ian
    With memcache, when you exceed the available RAM, it automatically drops the oldest records off the end of the stack. Is there a way to do this with Redis? I'm trying to find ways to avoid running into a write error (when there's no more available RAM), other than setting a timeout. The only reason the timeout isn't useful is that it doesn't guarantee the ability to write.

    Read the article

  • Count Records Returned MySQL Doctrine

    - by 01010011
    Hi, how do I check the number of records returned from a search of my MySQL database with a statement like this:

        $searchKey = 'Something to search for';
        $searchResults = Doctrine::getTable('TableName')
            ->createQuery('t')
            ->where('columnName LIKE ?', '%'.$searchKey.'%')
            ->execute();

    Read the article
