Search Results

Search found 14354 results on 575 pages for 'existing records'.

  • Delete Duplicate records from large csv file C# .Net

    - by Sandhurst
    I have created a solution which reads a large CSV file, currently 20-30 MB in size, and tries to delete the duplicate rows based on certain column values that the user chooses at run time, using the usual technique of finding duplicate rows. But it is so slow that it seems the program is not working at all. What other technique can be applied to remove duplicate records from a CSV file? Here's the code; I am definitely doing something wrong:

    DataTable dtCSV = ReadCsv(file, columns); // columns is a List<string> of column names
    DataTable dt = RemoveDuplicateRecords(dtCSV, columns);

    private DataTable RemoveDuplicateRecords(DataTable dtCSV, List<string> columns)
    {
        DataView dv = dtCSV.DefaultView;
        string RowFilter = string.Empty;
        if (dt == null)
            dt = dv.ToTable().Clone();
        foreach (DataRow row in dtCSV.Rows)
        {
            try
            {
                RowFilter = string.Empty;
                foreach (string column in columns)
                {
                    string col = column;
                    RowFilter += "[" + col + "]" + "='" + row[col].ToString().Replace("'", "''") + "' and ";
                }
                RowFilter = RowFilter.Substring(0, RowFilter.Length - 4);
                dv.RowFilter = RowFilter;
                DataRow dr = dt.NewRow();
                bool result = RowExists(dt, RowFilter);
                if (!result)
                {
                    dr.ItemArray = dv.ToTable().Rows[0].ItemArray;
                    dt.Rows.Add(dr);
                }
            }
            catch (Exception ex)
            {
            }
        }
        return dt;
    }
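    One commonly suggested alternative - sketched below as an assumption about what is needed, not as the poster's actual fix - is to drop the DataView re-filtering entirely and remember each row's composite key in a HashSet, so the duplicate check is a single lookup per row instead of a filter over the whole table:

    // Minimal sketch (names like RemoveDuplicateRecordsFast are illustrative only).
    // Requires System.Collections.Generic and System.Linq.
    private static DataTable RemoveDuplicateRecordsFast(DataTable source, List<string> keyColumns)
    {
        DataTable result = source.Clone();
        HashSet<string> seen = new HashSet<string>();

        foreach (DataRow row in source.Rows)
        {
            // Build a composite key from the chosen columns; the unit separator
            // character keeps concatenated values from colliding by accident.
            string key = string.Join("\u001F",
                keyColumns.Select(c => Convert.ToString(row[c])).ToArray());

            if (seen.Add(key))          // Add returns false when the key was seen before
                result.ImportRow(row);  // keep only the first occurrence
        }
        return result;
    }

    With 20-30 MB of data this runs in a single pass, at the cost of holding one key string per row in memory.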

  • Retrieve Records from paypal csv file

    - by Pankaj Khurana
    Hi, I want to retrieve all the records from a PayPal CSV file where Type = 'Payment Processed' and store them in a database table. Each record should be available in the following format: 'Heading':'Value'. The format of the CSV is:

    "Transaction ID","Reference Transaction ID","Date","Type","Subject","Item Number","Item Name","Invoice ID","Name","Email","Shipping Name","Shipping Address Line 1","Shipping Address Line 2","Shipping Address City","Shipping State/Province","Shipping Zip/Postal Code","Shipping Address Country","Shipping Method","Address Status","Contact Phone Number","Gross Amount","Receipt ID","Custom Field","Option 1 Name","Option 1 Value","Option 2 Name","Option 2 Value","Note","Auction Site","Auction User ID","Item URL","Auction Closing Date","Insurance Amount","Currency","Fees","Net Amount","Shipping & Handling Amount","Sales Tax Amount","To Email","Time","Time Zone"
    "1T","",5/5/2010 2:10:44 PM,"Payment Processed","CFP Self Study Kit","1","CFP Self Study Kit","","User1","[email protected]","","","","","","","","","N","","68.18","R1","","","","","","","","","",,"","USD","-2.62","65.56","0","0","[email protected]","01:40","Asia/Calcutta"
    "2T","",5/19/2010 4:04:08 PM,"Payment Processed","CFP Self Study Kit","1","CFP Self Study Kit","","User2","[email protected]","","","","","","","","","N","","68.18","R2","","","","","","","","","",,"","USD","-2.62","65.56","0","0","[email protected]","03:34","Asia/Calcutta"
    "3T","1RT",5/19/2010 5:28:45 PM,"Currency Conversion Completed","","","",""," ","","","","","","","","","","N","","17492.6","","","","","","","","","","",,"","INR","0","17492.6","0","0","","04:58","Asia/Calcutta"
    "4T","2RT",5/19/2010 5:28:45 PM,"Currency Conversion Completed","","","",""," ","","","","","","","","","","N","","-393.36","","","","","","","","","","",,"","USD","0","-393.36","0","0","","04:58","Asia/Calcutta"
    "5T","",5/19/2010 5:28:45 PM,"Transfer to Bank Initiated","P1006","","P1006",""," ","","","","","","","","","","N","","-17492.6","","","","","","","","","","",,"","INR","0","-17492.6","0","0","","04:58","Asia/Calcutta"
    "6T","",5/20/2010 5:38:02 PM,"Transfer to Bank Completed","P1006","","P1006",""," ","","","","","","","","","","N","","-17492.6","","","","","","","","","","",,"","INR","0","-17492.6","0","0","","05:08","Asia/Calcutta"
    "7T","",5/21/2010 12:32:37 PM,"Payment Processed","FP - LVC Plus","","FP - LVC Plus","","User3","[email protected]","User3","NEW DELHI","BEHIND KARNATAKA BANK LD","SOUTH","NEW DELHI","110023","IN","","N","","283.96","","","","","","","","","","",,"","USD","-9.95","274.01","0","0","[email protected]","00:02","Asia/Calcutta"
    "8T","",5/25/2010 4:40:48 PM,"Transfer to Bank Initiated","P1006","","P1006",""," ","","","","","","","","","","N","","-12569.85","","","","","","","","","","",,"","INR","0","-12569.85","0","0","","04:10","Asia/Calcutta"
    "9T","3RT",5/25/2010 4:40:48 PM,"Currency Conversion Completed","","","",""," ","","","","","","","","","","N","","-274.01","","","","","","","","","","",,"","USD","0","-274.01","0","0","","04:10","Asia/Calcutta"
    "10T","4RT",5/25/2010 4:40:48 PM,"Currency Conversion Completed","","","",""," ","","","","","","","","","","N","","12569.85","","","","","","","","","","",,"","INR","0","12569.85","0","0","","04:10","Asia/Calcutta"
    "11T","",5/26/2010 4:57:39 PM,"Transfer to Bank Completed","P1006","","P1006",""," ","","","","","","","","","","N","","-12569.85","","","","","","","","","","",,"","INR","0","-12569.85","0","0","","04:27","Asia/Calcutta"
    "Total","-247.05 USD","-15.19","-262.24"
    "Total","0.00 INR","0.00","0.00"

    Please help me on this. Thanks

  • Select multiple records from sql database table in a master-detail scenario

    - by Trex
    Hello, I have two tables in a master-detail relationship. The structure is more or less as follows:

    Master table: MasterID, DetailID, date, ...
    masterID1, detailID1, 2010/5/1, ...
    masterID2, detailID1, 2008/6/14, ...
    masterID3, detailID1, 2009/5/25, ...
    masterID4, detailID2, 2008/7/24, ...
    masterID5, detailID2, 2010/4/1, ...
    masterID6, detailID4, 2008/9/16, ...

    Details table: DetailID, ...
    detailID1, ...
    detailID2, ...
    detailID3, ...
    detailID4, ...

    I need to get all the records from the details table plus the LAST record from the master table (last by the date in the master table). Like this:

    detailID1, masterID1, 2010/5/1, ...
    detailID2, masterID5, 2010/4/1, ...
    detailID3, null, null, ...
    detailID4, masterID6, 2008/9/16, ...

    I have no idea how to do this. Can anybody help me? Thanks a lot. Jan

  • Creating has_many :through records 2x times

    - by antiarchitect
    I have these models:

    class Question < ActiveRecord::Base
      WEIGHTS = %w(medium hard easy)
      belongs_to :test
      has_many :answers, :dependent => :destroy
      has_many :testing_questions
    end

    class Testing < ActiveRecord::Base
      belongs_to :student, :foreign_key => 'user_id'
      belongs_to :subtest
      has_many :testing_questions, :dependent => :destroy
      has_many :questions, :through => :testing_questions
    end

    So when I try to bind questions to a testing on its creation:

    >> questions = Question.all
    ...
    >> questions.count
    => 3
    >> testing = Testing.create(:user_id => 3, :subtest_id => 1, :questions => questions)
    Testing Columns (0.9ms) SHOW FIELDS FROM `testings`
    SQL (0.1ms) BEGIN
    SQL (0.1ms) COMMIT
    SQL (0.1ms) BEGIN
    Testing Create (0.3ms) INSERT INTO `testings` (`created_at`, `updated_at`, `user_id`, `subtest_id`) VALUES('2010-05-18 00:53:05', '2010-05-18 00:53:05', 3, 1)
    TestingQuestion Columns (0.9ms) SHOW FIELDS FROM `testing_questions`
    TestingQuestion Create (0.3ms) INSERT INTO `testing_questions` (`question_id`, `created_at`, `updated_at`, `testing_id`) VALUES(1, '2010-05-18 00:53:05', '2010-05-18 00:53:05', 31)
    TestingQuestion Create (0.4ms) INSERT INTO `testing_questions` (`question_id`, `created_at`, `updated_at`, `testing_id`) VALUES(2, '2010-05-18 00:53:05', '2010-05-18 00:53:05', 31)
    TestingQuestion Create (0.3ms) INSERT INTO `testing_questions` (`question_id`, `created_at`, `updated_at`, `testing_id`) VALUES(3, '2010-05-18 00:53:05', '2010-05-18 00:53:05', 31)
    TestingQuestion Create (0.3ms) INSERT INTO `testing_questions` (`question_id`, `created_at`, `updated_at`, `testing_id`) VALUES(1, '2010-05-18 00:53:05', '2010-05-18 00:53:05', 31)
    TestingQuestion Create (0.3ms) INSERT INTO `testing_questions` (`question_id`, `created_at`, `updated_at`, `testing_id`) VALUES(2, '2010-05-18 00:53:05', '2010-05-18 00:53:05', 31)
    TestingQuestion Create (0.3ms) INSERT INTO `testing_questions` (`question_id`, `created_at`, `updated_at`, `testing_id`) VALUES(3, '2010-05-18 00:53:05', '2010-05-18 00:53:05', 31)
    SQL (90.2ms) COMMIT
    => #<Testing id: 31, subtest_id: 1, user_id: 3, created_at: "2010-05-18 00:53:05", updated_at: "2010-05-18 00:53:05">

    Six TestingQuestion INSERTs are issued and six records are created in testing_questions instead of three. Why?

  • Problem in filtering records using Dataview (C#3.0)

    - by Newbie
    I have a data table that is basically populated from an Excel sheet, and there are many Excel sheets, so I have written a utility method for accomplishing this. In some of the Excel sheets there are date columns and in some there are not (only text/string). My function populates the values from the Excel sheet into the DataTable properly, but there are many blank rows in the Excel sheets: some are filled with NULL, some with " ". So I need to filter those records (which are NULL or " ") before further processing. What I am after is to use a DataView and apply the filter over there:

    DataView dv = dataTable.DefaultView;
    dv.RowFilter = ColumnName + " <> ''";

    By using metadata (GetOleDbSchemaTable(OleDbSchemaGuid.Columns, restrection)) I was able to get the column names from the Excel sheet, so getting the column names is not an issue. But the problem is, as I said, that some Excel sheets have date fields and some do not, so the filter condition of the DataView needs to be built accordingly. If I apply the above logic and it encounters a date field, it throws the error: Cannot perform '<' operation on System.DateTime and System.String. Could you people please help me out? I need to filter columns (not known at compile time, nor their data types) which can have NULL and " ". I am using C# 3.0. Thanks
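    A minimal sketch of one way around the type mismatch (assuming the goal is simply "drop rows where the chosen column is NULL or blank"): build the filter per column from the column's DataType, so date columns are only tested for NULL and never compared to a string literal.

    // Sketch: type-aware "not empty" filter for DataView.RowFilter.
    // BuildNotEmptyFilter is an illustrative helper name, not an existing API.
    static string BuildNotEmptyFilter(DataColumn column)
    {
        string name = "[" + column.ColumnName + "]";
        if (column.DataType == typeof(DateTime))
            return "NOT (" + name + " IS NULL)";                          // no string comparison for dates
        return "NOT (" + name + " IS NULL) AND TRIM(" + name + ") <> ''"; // text columns: reject blanks too
    }

    // Usage:
    // DataView dv = dataTable.DefaultView;
    // dv.RowFilter = BuildNotEmptyFilter(dataTable.Columns[columnName]);

    This relies on the IS NULL and TRIM support in the DataColumn expression syntax; if the blank cells arrive as empty strings rather than DBNull, only the second branch matters.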

  • How to test if a Doctrine record has any relations that are used

    - by murze
    Hi, I'm using a Doctrine table that has several optional relations (of types Doctrine_Relation_Association and Doctrine_Relation_ForeignKey) with other tables. How can I test whether a record from that table has connections with records from the related tables? Here is an example to make my question more clear. Assume that you have a User, a User has a many-to-many relation with Usergroups, and a User can have one Userrole. How can I test if a given user is part of any Usergroups or has a role? The solution starts, I believe, with:

    $relations = Doctrine_Core::getTable('User')->getRelations();
    $user = Doctrine_Core::getTable('User')->findOne(1);
    foreach ($relations as $relation) {
        // here should go a test if the user has a related record for this relation
        if ($relation instanceof Doctrine_Relation_Association) {
            // here the related table probably has more than one foreign key (ex. user_id and group_id)
        }
        if ($relation instanceof Doctrine_Relation_ForeignKey) {
            // here the related table probably has the primary key of this table (id) as a foreign key (user_id)
        }
    }
    // true or false
    echo $result;

    I'm looking for a general solution that will work no matter how many relations there are between User and other tables. Thanks!

  • SQL - Updating records based on most recent date

    - by Remnant
    I am having difficulty updating records within a database based on the most recent date and am looking for some guidance. By the way, I am new to SQL. As background, I have a Windows Forms application with SQL Express and am using ADO.NET to interact with the database. The application is designed to enable the user to track employee attendance on various courses that must be attended on a periodic basis (e.g. every 6 months, every year etc.). For example, they can pull back data to see the last time employees attended a given course and also update attendance dates if an employee has recently completed a course. I have three data tables:

    EmployeeDetailsTable - simple list of employees' names, email addresses etc., each with a unique ID
    CourseDetailsTable - simple list of courses, each with a unique ID (e.g. 1, 2, 3 etc.)
    AttendanceRecordsTable - has the columns { EmployeeID, CourseID, AttendanceDate, Comments }

    For any given course, an employee will have an attendance history, i.e. if the course needs to be attended each year then they will have one record for as many years as they have been at the company. What I want to be able to do is to update the 'Comments' field for a given employee and given course based on the most recent attendance date. What is the 'correct' SQL syntax for this? I have tried many things (like below) but cannot get it to work:

    UPDATE AttendanceRecordsTable
    SET Comments = @Comments
    WHERE AttendanceRecordsTable.EmployeeID =
        (SELECT EmployeeDetailsTable.EmployeeID FROM EmployeeDetailsTable
         WHERE (EmployeeDetailsTable.LastName = @ParameterLastName
                AND EmployeeDetailsTable.FirstName = @ParameterFirstName)
           AND AttendanceRecordsTable.CourseID =
               (SELECT CourseDetailsTable.CourseID FROM CourseDetailsTable
                WHERE CourseDetailsTable.CourseName = @CourseName))
    GROUP BY MAX(AttendanceRecordsTable.LastDate)

    After much googling, I discovered that MAX is an aggregate function and so I need to use GROUP BY. I have also tried using the HAVING keyword, but without success. Can anybody point me in the right direction? What is the 'conventional' syntax to update a database record based on the most recent date?
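    One pattern that is often suggested (a sketch only, under the assumption that the date column is AttendanceDate as described above - the attempted query calls it LastDate) is to drop GROUP BY entirely and instead match the row whose date equals the MAX date for that employee and course, executed from ADO.NET with parameters:

    // Sketch only: update the Comments on the most recent attendance record.
    const string sql = @"
        UPDATE AttendanceRecordsTable
        SET Comments = @Comments
        WHERE EmployeeID = (SELECT EmployeeID FROM EmployeeDetailsTable
                            WHERE LastName = @LastName AND FirstName = @FirstName)
          AND CourseID   = (SELECT CourseID FROM CourseDetailsTable
                            WHERE CourseName = @CourseName)
          AND AttendanceDate = (SELECT MAX(a2.AttendanceDate)
                                FROM AttendanceRecordsTable a2
                                WHERE a2.EmployeeID = AttendanceRecordsTable.EmployeeID
                                  AND a2.CourseID   = AttendanceRecordsTable.CourseID)";

    using (var conn = new SqlConnection(connectionString))
    using (var cmd = new SqlCommand(sql, conn))
    {
        cmd.Parameters.AddWithValue("@Comments", comments);
        cmd.Parameters.AddWithValue("@LastName", lastName);
        cmd.Parameters.AddWithValue("@FirstName", firstName);
        cmd.Parameters.AddWithValue("@CourseName", courseName);
        conn.Open();
        cmd.ExecuteNonQuery();
    }

    The correlated subquery on MAX(AttendanceDate) pins the update to the latest row, which is what GROUP BY cannot do inside an UPDATE.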

  • Summarising grouped records in a dataframe in R

    - by monch1962
    Hello all, I have a data frame in R that looks like this:

    > TimeOffset, Source, Length
    > 0   1 1500
    > 0.1 1 1000
    > 0.2 1 50
    > 0.4 2 25
    > 0.6 2 3
    > 1.1 1 1500
    > 1.4 1 18
    > 1.6 2 2500
    > 1.9 2 18
    > 2.1 1 37
    > ...

    and I want to convert it to

    > TimeOffset, Source, Length
    > 0.2 1 2550
    > 0.6 2 28
    > 1.4 1 1518
    > 1.9 2 2518
    > ...

    Trying to put this into English, I want to group consecutive records with the same 'Source' together, then print out a single record per group showing the highest time offset in that group, the source, and the sum of the lengths in that group. The TimeOffset values will always increase. I suspect this is possible in R, but I really don't know where to start. In a pinch I could export the data frame out and do it in e.g. Python, but I'd prefer to stay within R if possible. Thanks in advance for any assistance you can provide.

  • Summarising grouped records in a dataframe in R (...again)

    - by monch1962
    Hello all, (I tried to ask this question earlier today, but later realised I over-simplified the question; the answers I received were correct, but I couldn't use them because of my over-simplification of the problem in the original question. Here's my 2nd attempt...)

    I have a data frame in R that looks like:

    "Timestamp", "Source", "Target", "Length", "Content"
    0.1 , P1 , P2 , 5 , "ABCDE"
    0.2 , P1 , P2 , 3 , "HIJ"
    0.4 , P1 , P2 , 4 , "PQRS"
    0.5 , P2 , P1 , 2 , "ZY"
    0.9 , P2 , P1 , 4 , "SRQP"
    1.1 , P1 , P2 , 1 , "B"
    1.6 , P1 , P2 , 3 , "DEF"
    2.0 , P2 , P1 , 3 , "IJK"
    ...

    and I want to convert this to:

    "StartTime", "EndTime", "Duration", "Source", "Target", "Length", "Content"
    0.1 , 0.4 , 0.3 , P1 , P2 , 12 , "ABCDEHIJPQRS"
    0.5 , 0.9 , 0.4 , P2 , P1 , 6 , "ZYSRQP"
    1.1 , 1.6 , 0.5 , P1 , P2 , 4 , "BDEF"
    ...

    Trying to put this into English, I want to group consecutive records with the same 'Source' and 'Target' together, then print out a single record per group showing the StartTime, EndTime & Duration (=EndTime-StartTime) for that group, along with the sum of the Lengths for that group, and a concatenation of the Content (which will all be strings) in that group. The TimeOffset values will always increase throughout the data frame. I had a look at melt/recast and have a feeling that it could be used to solve the problem, but couldn't get my head around the documentation. I suspect it's possible to do this within R, but I really don't know where to start. In a pinch I could export the data frame out and do it in e.g. Python, but I'd prefer to stay within R if possible. Thanks in advance for any assistance you can provide.

  • Selecting date NOT NULL records between a specific range with Propel

    - by Jon Winstanley
    Using Propel I would like to find records which have a date field which is not null and also between a specific range. However, Propel seems to overwrite the criteria with the NOTNULL criteria. Is it possible to do this?

    //create the date ranges
    $start_date = mktime(0, 0, 0, date("m"), date("d") + $start, date("Y"));
    $end_date   = mktime(0, 0, 0, date("m"), date("d") + $end, date("Y"));

    //add the start of the range
    $c1 = $c->getNewCriterion(TaskPeer::DUE_DATE, null);
    $c1->addAnd($c->getNewCriterion(TaskPeer::DUE_DATE, $end_date, Criteria::LESS_EQUAL));
    $c->add($c1);

    //add the end of the range
    $c2 = $c->getNewCriterion(TaskPeer::DUE_DATE, null);
    $c2->addAnd($c->getNewCriterion(TaskPeer::DUE_DATE, $start_date, Criteria::GREATER_EQUAL));
    $c->add($c2);

    //remove the null entries
    $c3 = $c->getNewCriterion(TaskPeer::DUE_DATE, null);
    $c3->addAnd($c->getNewCriterion(TaskPeer::DUE_DATE, null, Criteria::ISNULL));
    $c->add($c3);

  • Faster Insertion of Records into a Table with SQLAlchemy

    - by Kyle Brandt
    I am parsing a log and inserting it into either MySQL or SQLite using SQLAlchemy and Python. Right now I open a connection to the DB, and as I loop over each line, I insert it after it is parsed (this is just one big table right now; I'm not very experienced with SQL). I then close the connection when the loop is done. The summarized code is:

    log_table = schema.Table('log_table', metadata,
        schema.Column('id', types.Integer, primary_key=True),
        schema.Column('time', types.DateTime),
        schema.Column('ip', types.String(length=15))
    ....
    engine = create_engine(...)
    metadata.bind = engine
    connection = engine.connect()
    ....
    for line in file_to_parse:
        m = line_regex.match(line)
        if m:
            fields = m.groupdict()
            pythonified = pythoninfy_log(fields) #Turn them into ints, datatimes, etc
            if use_sql:
                ins = log_table.insert(values=pythonified)
                connection.execute(ins)
                parsed += 1

    My two questions are:

    1. Is there a way to speed up the inserts within this basic framework? Maybe have a Queue of inserts and some insertion threads, some sort of bulk inserts, etc.?
    2. When I used MySQL, for about ~1.2 million records the insert time was 15 minutes. With SQLite, the insert time was a little over an hour. Does that time difference between the db engines seem about right, or does it mean I am doing something very wrong?

  • Delete all previous records and insert new ones

    - by carlos
    When updating an employee with id = 1, for example, what is the best way to delete all previous records in the table CERTIFICATE for this employee_id and insert the new ones?

    create table EMPLOYEE (
        id INT NOT NULL auto_increment,
        first_name VARCHAR(20) default NULL,
        last_name VARCHAR(20) default NULL,
        salary INT default NULL,
        PRIMARY KEY (id)
    );

    create table CERTIFICATE (
        id INT NOT NULL auto_increment,
        certificate_name VARCHAR(30) default NULL,
        employee_id INT default NULL,
        PRIMARY KEY (id)
    );

    Hibernate mapping:

    <?xml version="1.0" encoding="utf-8"?>
    <!DOCTYPE hibernate-mapping PUBLIC
        "-//Hibernate/Hibernate Mapping DTD//EN"
        "http://www.hibernate.org/dtd/hibernate-mapping-3.0.dtd">
    <hibernate-mapping>
        <class name="Employee" table="EMPLOYEE">
            <id name="id" type="int" column="id">
                <generator class="sequence">
                    <param name="sequence">employee_seq</param>
                </generator>
            </id>
            <set name="certificates" lazy="false" cascade="all">
                <key column="employee_id" not-null="true"/>
                <one-to-many class="Certificate"/>
            </set>
            <property name="firstName" column="first_name"/>
            <property name="lastName" column="last_name"/>
            <property name="salary" column="salary"/>
        </class>
        <class name="Certificate" table="CERTIFICATE">
            <id name="id" type="int" column="id">
                <param name="sequence">certificate_seq</param>
            </id>
            <property name="employee_id" column="employee_id" insert="false" update="false"/>
            <property name="name" column="certificate_name"/>
        </class>
    </hibernate-mapping>

  • Convert enumerated records to php object

    - by Matt H
    I have a table containing a bunch of records like this:

    +-----------+--------+----------+
    | extension | fwd_to | type     |
    +-----------+--------+----------+
    | 800       | 11111  | noanswer |
    | 800       | 12345  | uncond   |
    | 800       | 22222  | unavail  |
    | 800       | 54321  | busy     |
    | 801       | 123    | uncond   |
    +-----------+--------+----------+
    etc

    The query looks like this:

    select fwd_to, type from forwards where extension='800';

    Now I get back an array containing objects which look like the following when printed with Kohana::debug:

    (object) stdClass Object ( [fwd_to] => 11111 [type] => noanswer )
    (object) stdClass Object ( [fwd_to] => 12345 [type] => uncond )
    (object) stdClass Object ( [fwd_to] => 22222 [type] => unavail )
    (object) stdClass Object ( [fwd_to] => 54321 [type] => busy )

    What I'd like to do is convert this to an object of this form:

    (object) stdClass Object ( [busy] => 54321 [uncond] => 12345 [unavail] => 22222 [noanswer] => 11111 )

    The reason being I want to then call json_encode on it. This will allow me to use jquery populate to populate a form. Is there a suggested way I can do this nicely? I'm fairly new to PHP and I'm sure this is easy but it's eluding me at the moment.

  • Basic user authentication with records in AngularFire

    - by ajkochanowicz
    Having spent literally days trying the different, various recommended ways to do this, I've landed on what I think is the most simple and promising. Also thanks to the kind gents from this SO question: Get the index ID of an item in Firebase AngularFire

    Current setup

    Users can log in with email and social networks, so when they create a record, it saves the userId as a sort of foreign key. Good so far. But I want to create a rule so twitter2934392 cannot read facebook63203497's records.

    Off to the security panel

    Match the IDs on the backend

    Unfortunately, the docs are inconsistent with the method from "is firebase user id unique per provider (facebook, twitter, password)", which suggests appending the social network to the ID. The docs expect you to create a different rule for each of the login methods' ids. Why anyone using 1 login method would want to do that is beyond me. (From: https://www.firebase.com/docs/security/rule-expressions/auth.html)

    So I'll try to match the concatenated auth.provider with auth.id to the record in userId for the respective registry item. According to the API, this should be as easy as (in my case using $registry instead of $user, of course):

    {
      "rules": {
        ".read": true,
        ".write": true,
        "registry": {
          "$registry": {
            ".read": "$registry == auth.id"
          }
        }
      }
    }

    But that won't work, because (see the first image above) AngularFire sets each record under an index value. In the image above, it's 0. Here's where things get complicated. Also, I can't test anything in the simulator, as I cannot edit {some: 'json'} to even authenticate. The input box rejects any input. My best guess is the following:

    {
      "rules": {
        ".write": true,
        "registry": {
          "$registry": {
            ".read": "data.child('userId').val() == (auth.provider + auth.id)"
          }
        }
      }
    }

    Which both throws authentication errors and simultaneously grants full read access to all users. I'm losing my mind. What am I supposed to do here?

  • Limiting a search to records from last_request_at...

    - by bgadoci
    I am trying to figure out how to display a count for records that have been created in a table since the last_request_at of a user. In my view I am counting the notes of a question with the following code:

    <% unless @questions.empty? %>
      <% @questions.each do |question| %>
        <%= h(question.notes.count) %>
      <% end %>
    <% end %>

    This is happening in the /views/users/show.html.erb file. Instead of counting all the notes for the question, I would only like to count the notes that have been created since the user's last_request_at datetime. I don't necessarily want to scope notes to display this 'new notes' count application-wide, just in this one instance. To accomplish this I am assuming I need to create a variable in the User#show action and call it in the view, but I'm not really sure how to do that. Other information you may need:

    class Note < ActiveRecord::Base
      belongs_to :user
      belongs_to :question
    end

    class Question < ActiveRecord::Base
      has_many :notes, :dependent => :destroy
      belongs_to :user
    end

  • Displaying a message after adding duplicate records in database

    - by user1770370
    I wrote a program in C# WinForms with SQL Server and LINQ to SQL. I use a user control instead of a form. In my user control I put 3 textboxes: txtStartNumber, txtEndNumber, txtQuantity. The user fills in the textboxes, and when the button is clicked the program inserts some records according to the value of txtQuantity. When a duplicate number is generated I want it not to be added to the database, and a message to be displayed instead. How do I do this? Should I write the code in the code-behind or on the server side? Should I handle this in a stored procedure or a trigger?

    private void btnSave_Click(object sender, EventArgs e)
    {
        long from = Convert.ToInt64(txt_barcode_f.Text);
        long to = Convert.ToInt64(txt_barcode_t.Text);
        long quantity = Convert.ToInt64(to - from);
        int card_Type_ID = Convert.ToInt32(cmb_BracodeType.SelectedValue);
        long[] arrCardNum = new long[(to - from)];
        arrCardNum[0] = from;
        for (long i = from; i < to; i++)
        {
            for (int j = 0; j < (to - from); j++)
            {
                arrCardNum[j] = from + j;
                string r = arrCardNum[j].ToString();
                sp.SaveCards(r, 2, card_Type_ID, SaveDate, 2);
            }
        }
    }

    Stored procedure code:

    ALTER PROCEDURE dbo.SaveCards
        @Barcode_Num int,
        @Card_Status_ID int,
        @Card_Type_ID int,
        @SaveDate varchar(10),
        @Save_User_ID int
    AS
    BEGIN
        INSERT INTO [Parking].[dbo].[TBL_Cards]
            ([Barcode_Num], [Card_Status_ID], [Card_Type_ID], [Save_User_ID])
        VALUES
            (@Barcode_Num, @Card_Status_ID, @Card_Type_ID, @Save_User_ID)
    END
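    One common pattern (sketched here as an assumption, not the only option) is to enforce uniqueness in the database with a UNIQUE constraint on Barcode_Num, and then catch the violation in the code-behind so the duplicate is skipped and a message is shown:

    // Sketch only: assumes a UNIQUE constraint or unique index exists on TBL_Cards.Barcode_Num.
    try
    {
        sp.SaveCards(r, 2, card_Type_ID, SaveDate, 2);
    }
    catch (SqlException ex)
    {
        // 2627 = unique constraint violation, 2601 = duplicate key in a unique index
        if (ex.Number == 2627 || ex.Number == 2601)
            MessageBox.Show("Barcode " + r + " already exists and was skipped.");
        else
            throw;
    }

    Checking for the barcode with a SELECT before each INSERT also works, but the constraint closes the race window when two inserts happen at the same time.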

  • Keeping files or database records? Java and Python

    - by danpalmer
    My website will use a Neural Network to predict things based on user data. The user can select the data to be used in training the network and then use their trained network to predict things. I am using a framework to create, train and query the networks. This uses Java. The framework has persistence for saving a network to an XML file. What is the best way to store these files? I can see several potential ideas, but I need help on choosing which is best:

    1. Save each network to a separate XML file with a name that is stored in the database. Load this each time.
    2. Save all the networks to the same XML file, with each network having a different name that is stored in the database.
    3. Somehow pass what would normally be written to an XML file to the Django site for writing to the database. This would need to be returned to the Java code when a prediction needs to be made.

    I am able to do 1 or 2, but I think their performance will be quite limited, and I am on shared hosting at the moment, so I don't know how pleased they would be with thousands of files. Also, after adding a few thousand records to one XML file, I was noticing a massive performance hit on saving to it. If I were able to implement version 3 somehow I think it would be best: no issues with separate processes accessing the database, I think performance would be better, not to mention having no files lying around. However, the stuff in the neural network framework I am using (Encog) for saving to a file needs access to a Java file object, not a string that could be saved to a database. Unless there is some Java magic I can do here (I know very little Java), the only way I can see of doing this would be with temporary files, but I don't know if this is the correct way to do it. I would appreciate any ideas on the best way to implement any of the above 3 ideas or any alternatives. Thanks!

  • Issues with LINQ (to Entity) [adding records]

    - by Mario
    I am using LINQ to Entity in a project where I pull a bunch of data (from the database), organize it into a bunch of objects, and save those to the database. I have not had problems writing to the db before using LINQ to Entity, but I have run into a snag with this particular one. Here's the error I get (this is the InnerException; the exception itself is useless!):

    New transaction is not allowed because there are other threads running in the session.

    I have seen that before, when I was trying to save my changes inside a loop. In this case the loop finishes, and it tries to make that call, only to give me the exception. Here's the current code:

    try
    {
        // finalResult is a list of the keys to match on for the records being pulled
        foreach (int i in finalResult)
        {
            var queryEff = (from eff in dbMRI.MemberEligibility
                            where eff.Member_Key == i && eff.EffDate >= DateTime.Now
                            select eff.EffDate).Min();
            if (queryEff != null)
            {
                // Add a record to the Process table
                Process prRecord = new Process();
                prRecord.GroupData = qa;
                prRecord.Member_Key = i;
                prRecord.ProcessDate = DateTime.Now;
                prRecord.RecordType = "F";
                prRecord.UsernameMarkedBy = "Autocard";
                prRecord.GroupsId = qa.GroupsID;
                prRecord.Quantity = 2;
                prRecord.EffectiveDate = queryEff;
                dbMRI.AddObject("Process", prRecord);
            }
        }
        dbMRI.SaveChanges(); //<-- Crashes here

        foreach (int i in finalResult)
        {
            var queryProc = from pro in dbMRI.Process
                            where pro.Member_Key == i && pro.UsernameMarkedBy == "Autocard"
                            select pro;
            foreach (var qp in queryProc)
            {
                Audit aud = new Audit();
                aud.Member_Key = i;
                aud.ProcessId = qp.ProcessId;
                aud.MarkDate = DateTime.Now;
                aud.MarkedByUsername = "Autocard";
                aud.GroupData = qa;
                dbMRI.AddObject("Audit", aud);
            }
        }
        dbMRI.SaveChanges(); //<-- AND here (if the first one is commented out)
    }
    catch (Exception e)
    {
        // Do something here
    }

    Basically, I need it to insert a record, get the identity for that inserted record, and insert a record into another table with the identity from the first record. Given some other constraints, it is not possible to create a FK relationship between the two (I've tried, but some other parts of the app won't allow it, AND my DBA team for whatever reason hates FKs, but that's a topic for another day :)). Any ideas what might be causing this? Thanks!

  • Continuous Form, how to add/update records with external connection

    - by Mohgeroth
    EDIT: After some more research I found that I cannot use a continuous form with an unbound form, since it can only reference a single record at a time. Given that, I've altered my question...

    I have a sample form that pulls out data to enter into a table as an intermediary. Initially the form is unbound and I open connections to two main recordsets. I set the listbox's recordset equal to one of them and the form's recordset equal to the other. The problem is that I cannot add records or update existing ones. Attempting to key into the fields does nothing, almost as if the field was locked (which it is not). Settings of the recordsets are OpenKeyset and LockPessimistic. The tables are not linked; they come from an outside Access database separate from this project and must remain that way. I am using an ADODB connection to get the data. Could the separation of the data from the project be causing this?

    Sample code from the form:

    Option Compare Database
    Option Explicit

    Private conn As CRobbers_Connections
    Private exception As CError_Trapping
    Private mClient_Translations As ADODB.Recordset
    Private mUnmatched_Clients As ADODB.Recordset
    Private mExcluded_Clients As ADODB.Recordset

    ' Construction
    Private Sub Form_Open(Cancel As Integer)
        Set conn = New CRobbers_Connections
        Set exception = New CError_Trapping
        Set mClient_Translations = New ADODB.Recordset
        Set mUnmatched_Clients = New ADODB.Recordset
        Set mExcluded_Clients = New ADODB.Recordset

        mClient_Translations.Open "SELECT * FROM Client_Translation" _
            , conn.RBRS_Conn, adOpenKeyset, adLockPessimistic

        mUnmatched_Clients.Open "SELECT DISTINCT(a.Client) as Client" _
            & " FROM Master_Projections a " _
            & " WHERE Client NOT IN ( " _
            & "   SELECT DISTINCT ClientID " _
            & "   FROM Client_Translation);" _
            , conn.RBRS_Conn, adOpenKeyset, adLockPessimistic

        mExcluded_Clients.Open "SELECT * FROM Clients_Excluded" _
            , conn.RBRS_Conn, adOpenKeyset, adLockPessimistic
    End Sub

    ' Add new record to the client translations
    Private Sub cmdAddNew_Click()
        If lstUnconfirmed <> "" Then
            AddRecord
        End If
    End Sub

    Private Function AddRecord()
        With mClient_Translations
            .AddNew
            .Fields("ClientID") = Me.lstUnconfirmed
            .Fields("ClientAbbr") = Me.txtTmpShort
            .Fields("ClientName") = Me.txtTmpLong
            .Update
        End With
        UpdateRecords
    End Function

    Private Function UpdateRecords()
        Me.lstUnconfirmed.Requery
    End Function

    ' Load events (after construction)
    Private Sub Form_Load()
        Set lstUnconfirmed.Recordset = mUnmatched_Clients ' link recordset into listbox
        Set Me.Recordset = mClient_Translations
    End Sub

    ' Destruction method
    Private Sub Form_Close()
        Set conn = Nothing
        Set exception = Nothing
        Set lstUnconfirmed.Recordset = Nothing
        Set Me.Recordset = Nothing
        Set mUnmatched_Clients = Nothing
        Set mExcluded_Clients = Nothing
        Set mClient_Translations = Nothing
    End Sub

  • Loading the last related record instantly for multiple parent records using Entity framework

    - by Guillaume Schuermans
    Does anyone know a good approach using Entity Framework for the problem described below?

    I am trying for our next release to come up with a performant way to show the placed orders for the logged-on customer. Of course paging is always a good technique to use when a lot of data is available, but I would like to see an answer without any paging techniques.

    Here's the story: a customer places an order which gets an orderstatus = PENDING. Depending on some strategy we move that order up the chain in order to get it APPROVED. Every change of status is logged so we can see a trace of statuses, and maybe even an extra line of comment per status which can provide some extra valuable information to whoever sees this order in an interface. So an Order is linked to a Customer. One order can have multiple orderstatuses stored in OrderStatusHistory. In my test scenario I am using a customer which has 100+ Orders, each with about 5 records in the OrderStatusHistory table. I would for now like to see all orders in one page, not using paging, where for each Order I show the last relevant Status and the extra comment (if there is any for this last status; both fields coming from OrderStatusHistory; the record with the highest Id for the given OrderId).

    There are multiple scenarios I have tried, but I would like to see any potential other solutions or comments on the things I have already tried:

    1. Doing Include() when getting Orders, but this still results in multiple queries launched on the database. Each order triggers an extra query to the database to get all orderstatuses in the history table. So all statuses are queried here instead of just returning the last relevant one, plus 100 extra queries are launched for 100 orders. You can imagine the problem when there are 100000+ orders in the database.
    2. Having 2 computed columns on the database, LastStatus and LastStatusInformation, and a regular LINQ query which gets those columns, which are available through the Entity model. The problem with this approach is the fact that those computed columns are determined using a scalar function which cannot be changed without removing the formula from the computed column, etc...

    In the end I am very familiar with SQL and stored procedures, but since the rest of the data layer uses Entity Framework I would like to stick to it as long as possible, even though I have my doubts about performance. Using the SQL approach I would write something like this:

    WITH cte (RN, OrderId, [Status], Information) AS
    (
        SELECT ROW_NUMBER() OVER (PARTITION BY OrderId ORDER BY Id DESC),
               OrderId, [Status], Information
        FROM OrderStatus
    )
    SELECT o.Id, cte.[Status], cte.Information AS StatusInformation, o.*
    FROM [Order] o
    INNER JOIN cte ON o.Id = cte.OrderId AND cte.RN = 1
    WHERE CustomerId = @CustomerId
    ORDER BY 1 DESC;

    which returns all orders for the customer with the status information provided by the Common Table Expression. Does anyone know a good approach using Entity Framework?
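    For the "last related record" part specifically, one pattern that often comes up (a sketch only - the navigation property and context names below are assumptions based on the description) is to project each order together with the top-1 history row in a single LINQ query, which EF translates into one SQL statement instead of one query per order:

    // Sketch: one round trip that pairs each order with its latest status row.
    var orders = context.Orders
        .Where(o => o.CustomerId == customerId)
        .Select(o => new
        {
            Order = o,
            LastStatus = o.OrderStatusHistories          // assumed navigation property
                          .OrderByDescending(h => h.Id)  // highest Id = most recent
                          .FirstOrDefault()
        })
        .OrderByDescending(x => x.Order.Id)
        .ToList();

    // x.LastStatus.Status and x.LastStatus.Information then correspond to the
    // [Status] and Information columns the CTE version returns.

    Whether the generated SQL (typically an OUTER APPLY or a correlated subquery) performs as well as the hand-written CTE is worth checking against the real data volumes.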

  • SQL Server Table Polling by Multiple Subscribers

    - by Daniel Hester
    Background

    Designing Stored Procedures that are safe for multiple subscribers (to call simultaneously) can be challenging. For example, let's say that you want multiple worker processes to poll a shared work queue that's encapsulated as a SQL Table. This is a common scenario, and through experience you'll find that you want to use Table Hints to prevent unwanted locking when performing simultaneous queries on the same table.

    There are three table hints to consider: NOLOCK, READPAST and UPDLOCK. Both NOLOCK and READPAST table hints allow you to SELECT from a table without placing a LOCK on that table. However, SELECTs with the READPAST hint will ignore any records that are locked due to being updated/inserted (or otherwise "dirty"), whereas a SELECT with NOLOCK ignores all locks, including dirty reads.

    For the initial update of the flag (that marks the record as available for subscription) I don't use the NOLOCK Table Hint because I want to be sensitive to the "active" records in the table and I want to exclude them. I use an Update Lock (UPDLOCK) in conjunction with a WHERE clause that uses a sub-select with a READPAST Table Hint in order to explicitly lock the records I'm updating (UPDLOCK) but not place a lock on the table when selecting the records that I'm going to update (READPAST).

    UPDATEs should be allowed to lock the rows affected because we're probably changing a flag on a record so that it is not included in a SELECT from another subscriber. On the UPDATE statement we should explicitly use the UPDLOCK hint to guard against lock escalation.

    A SELECT to check for the next record(s) to process can result in a shared read lock being held by more than one subscriber polling the shared work queue (SQL table). It is expected that more than one worker process (or server) might try to process the same new record(s) at the same time. When each process then tries to obtain the update lock, none of them can, because another process has a shared read lock in place. Thus without the UPDLOCK hint the result would be a lock escalation deadlock; however, with the UPDLOCK hint this condition is mitigated against.

    Note that using the READPAST table hint requires that you also set the ISOLATION LEVEL of the transaction to READ COMMITTED (rather than the default of SERIALIZABLE).

    Guidance

    In the Stored Procedure that returns records to the multiple subscribers:

    1. Perform the UPDATE first. Change the flag that makes the record available to subscribers. Additionally, you may want to update a LastUpdated datetime field in order to be able to check for records that "got stuck" in an intermediate state or for other auditing purposes.
    2. In the UPDATE statement use the (UPDLOCK) Table Hint to prevent lock escalation.
    3. In the UPDATE statement also use a WHERE clause that uses a sub-select with a (READPAST) Table Hint to select the records that you're going to update.
    4. In the UPDATE statement use the OUTPUT clause in conjunction with a Temporary Table to isolate the record(s) that you've just updated and intend to return to the subscriber. This is the fastest way to update the record(s) and to get the records' identifiers within the same operation.
    5. Finally, do a set-based SELECT on the main Table (using the Temporary Table to identify the records in the set) with either a READPAST or NOLOCK table hint. Use NOLOCK if there are other processes (besides the multiple subscribers) that might be changing the data that you want to return to the multiple subscribers; or use READPAST if you're sure there are no other processes (besides the multiple subscribers) that might be updating column data in the table for other purposes (e.g. changes to a person's last name). NOLOCK is generally the better fit in this part of the scenario.

    See the following as an example:

    CREATE PROCEDURE [dbo].[usp_NewCustomersSelect]
    AS
    BEGIN
        -- OVERRIDE THE DEFAULT ISOLATION LEVEL
        SET TRANSACTION ISOLATION LEVEL READ COMMITTED
        -- SET NOCOUNT ON
        SET NOCOUNT ON
        -- DECLARE TEMP TABLE
        -- Note that this example uses CustomerId as an identifier;
        -- you could just use the Identity column Id if that's all you need.
        DECLARE @CustomersTempTable TABLE
        (
            CustomerId NVARCHAR(255)
        )
        -- PERFORM UPDATE FIRST
        -- [Customers] is the name of the table
        -- [Id] is the Identity Column on the table
        -- [CustomerId] is the business document key used to identify the
        -- record globally, i.e. in other systems or across SQL tables
        -- [Status] is an INT or BIT field (if the status is a binary state)
        -- [LastUpdated] is a datetime field used to record the time of the
        -- last update
        UPDATE [Customers] WITH (UPDLOCK)
        SET [Status] = 1, [LastUpdated] = GETDATE()
        OUTPUT [INSERTED].[CustomerId] INTO @CustomersTempTable
        WHERE ([Id] = (SELECT TOP 100 [Id]
                       FROM [Customers] WITH (READPAST)
                       WHERE ([Status] = 0)
                       ORDER BY [Id] ASC))
        -- PERFORM SELECT FROM ENTITY TABLE
        SELECT [C].[CustomerId], [C].[FirstName], [C].[LastName],
               [C].[Address1], [C].[Address2], [C].[City], [C].[State], [C].[Zip],
               [C].[ShippingMethod], [C].[Id]
        FROM [Customers] AS [C] WITH (NOLOCK), @CustomersTempTable AS [TEMP]
        WHERE ([C].[CustomerId] = [TEMP].[CustomerId])
    END

    In a system that has been designed to have multiple status values for records that need to be processed in the Work Queue, it is necessary to have a "Watch Dog" process by which "stale" records in intermediate states (such as "In Progress") are detected, i.e. a [Status] of 0 = New or Unprocessed; a [Status] of 1 = In Progress; a [Status] of 2 = Processed; etc. Thus, if you have a business rule that states that the application should only process new records if all of the old records have been processed successfully (or marked as an error), then it will be necessary to build a monitoring process to detect stalled or stale records in the Work Queue, hence the use of the LastUpdated column in the example above. The Status field along with the LastUpdated field can be used as the criteria to detect stalled / stale records. It is possible to put this watchdog logic into the stored procedure above, but I would recommend making it a separate monitoring function. In writing the stored procedure that checks for stale records I would recommend using the same kind of lock semantics as suggested above.

    The example below looks for records that have been in the "In Progress" state ([Status] = 1) for greater than 60 seconds:

    CREATE PROCEDURE [dbo].[usp_NewCustomersWatchDog]
    AS
    BEGIN
        -- TO OVERRIDE THE DEFAULT ISOLATION LEVEL
        SET TRANSACTION ISOLATION LEVEL READ COMMITTED
        -- SET NOCOUNT ON
        SET NOCOUNT ON
        DECLARE @MaxWait int;
        SET @MaxWait = 60
        IF EXISTS (SELECT 1 FROM [dbo].[Customers] WITH (READPAST)
                   WHERE ([Status] = 1)
                     AND (DATEDIFF(s, [LastUpdated], GETDATE()) > @MaxWait))
        BEGIN
            SELECT 1 AS [IsWatchDogError]
        END
        ELSE
        BEGIN
            SELECT 0 AS [IsWatchDogError]
        END
    END

    Downloads

    The zip file below contains two SQL scripts: one to create a sample database with the above stored procedures and one to populate the sample database with 10,000 sample records. I am very grateful to Red-Gate software for their excellent SQL Data Generator tool, which enabled me to create these sample records in no time at all.

    References

    http://msdn.microsoft.com/en-us/library/ms187373.aspx
    http://www.techrepublic.com/article/using-nolock-and-readpast-table-hints-in-sql-server/6185492
    http://geekswithblogs.net/gwiele/archive/2004/11/25/15974.aspx
    http://grounding.co.za/blogs/romiko/archive/2009/03/09/biztalk-sql-receive-location-deadlocks-dirty-reads-and-isolation-levels.aspx
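    On the consuming side, each worker can simply call the dequeue procedure on its own connection; because the procedure does the UPDLOCK/READPAST work internally, the callers need no extra coordination. A minimal sketch in C# (the connection string and result handling are placeholders):

    // Sketch: one polling subscriber; any number of these can run concurrently.
    using (var conn = new SqlConnection(connectionString))
    using (var cmd = new SqlCommand("dbo.usp_NewCustomersSelect", conn))
    {
        cmd.CommandType = CommandType.StoredProcedure;
        conn.Open();
        using (SqlDataReader reader = cmd.ExecuteReader())
        {
            while (reader.Read())
            {
                string customerId = reader.GetString(reader.GetOrdinal("CustomerId"));
                // Process the claimed record here; no other subscriber will receive it
                // because the procedure already flipped its [Status] flag to 1.
            }
        }
    }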

  • C# Bind DataTable to Existing DataGridView Column Definitions

    - by Timothy
    I've been struggling with a NullReferenceException and hope someone here will be able to point me in the right direction. I'm trying to create and populate a DataTable and then show the results in a DataGridView control. The basic code follows, and execution stops with a NullReferenceException at the point where I invoke the new UpdateResults_Delegate. Oddly enough, I can trace entries.Rows.Count successfully before I return it from QueryEventEntries, so I can at least show that 1) entries is not a null reference, and 2) the DataTable contains rows of data. I know I have to be doing something wrong, but I just don't know what.

    private void UpdateResults(DataTable entries)
    {
        dataGridView.DataSource = entries;
    }

    private void button_Click(object sender, EventArgs e)
    {
        PerformQuery();
    }

    private void PerformQuery()
    {
        DateTime start = new DateTime(dateTimePicker1.Value.Year, dateTimePicker1.Value.Month, dateTimePicker1.Value.Day, 0, 0, 0);
        DateTime stop = new DateTime(dateTimePicker2.Value.Year, dateTimePicker2.Value.Month, dateTimePicker2.Value.Day, 0, 0, 0);
        DataTable entries = QueryEventEntries(start, stop);
        UpdateResults(entries);
    }

    private DataTable QueryEventEntries(DateTime start, DateTime stop)
    {
        DataTable entries = new DataTable();
        entries.Columns.AddRange(new DataColumn[] {
            new DataColumn("event_type", typeof(Int32)),
            new DataColumn("event_time", typeof(DateTime)),
            new DataColumn("event_detail", typeof(String))});
        using (SqlConnection conn = new SqlConnection(DSN))
        {
            using (SqlDataAdapter adapter = new SqlDataAdapter(
                "SELECT event_type, event_time, event_detail FROM event_log " +
                "WHERE event_time >= @start AND event_time <= @stop", conn))
            {
                adapter.SelectCommand.Parameters.AddRange(new Object[] {
                    new SqlParameter("@start", start),
                    new SqlParameter("@stop", stop)});
                adapter.Fill(entries);
            }
        }
        return entries;
    }

    Update

    I'd like to summarize and provide some additional information I've learned from the discussion here and debugging efforts since I originally posted this question. I am refactoring old code that retrieved records from a database, collected those records as an array, and then later iterated through the array to populate a DataGridView row by row. Threading was originally implemented to compensate and keep the UI responsive during the unnecessary looping. I have since stripped out Thread/Invoke; everything now occurs on the same execution thread (thank you, Sam). I am attempting to replace the slow, unwieldy approach with a DataTable which I can fill with a DataAdapter and assign to the DataGridView through its DataSource property (code above updated). I've iterated through the entries DataTable's rows to verify the table contains the expected data before assigning it as the DataGridView's DataSource:

    foreach (DataRow row in entries.Rows)
    {
        System.Diagnostics.Trace.WriteLine(
            String.Format("{0} {1} {2}", row[0], row[1], row[2]));
    }

    One of the columns of the DataGridView is a custom DataGridViewColumn used to stylize the event_type value. I apologize I didn't mention this before in the original post, but I wasn't aware it was important to my problem. I have converted this column temporarily to a standard DataGridViewTextBoxColumn control and am no longer experiencing the exception. The fields in the DataTable are appended to the list of fields that have been pre-specified in the Design view of the DataGridView, and the records' values are being populated in these appended fields. When the runtime attempts to render the cell, a null value is provided (the value that should be rendered ends up a couple of columns over). In light of this, I am re-titling and re-tagging the question. I would still appreciate it if others who have experienced this can instruct me on how to go about binding the DataTable to the existing column definitions of the DataGridView.
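    The usual remedy for this symptom (offered here as a hedged sketch, since it depends on how the designer columns were set up) is to stop the grid from appending new columns and instead map each pre-defined column onto the incoming table by name via DataPropertyName:

    // Sketch: bind the DataTable to the columns already defined in the designer.
    // The column variable names (colEventType, etc.) are illustrative.
    dataGridView.AutoGenerateColumns = false;          // don't append extra columns for the DataTable fields
    colEventType.DataPropertyName = "event_type";      // each designer column picks its source field
    colEventTime.DataPropertyName = "event_time";
    colEventDetail.DataPropertyName = "event_detail";
    dataGridView.DataSource = entries;

    With AutoGenerateColumns set to false and DataPropertyName set on every existing column (including the custom one), each cell receives the value from its mapped field instead of the null that an unmapped, appended column would provide.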

  • how to run existing ant script from groovy

    - by Omnipresent
    My build is a 3-step process:

    1. run ant to build
    2. transfer the war to the server
    3. touch the reload file

    I have moved the last two steps into Groovy, using AntBuilder. However, I am not able to run my existing ant script from Groovy. Usually I run it with the following command at the DOS prompt:

    ant -Dsystem=mysystem -DsomeotherOption=true

    From Groovy, when I try to do

    "ant -Dsystem=mysystem -DsomeotherOption=true".execute()

    it gives an error saying ant is not a recognized command. How can I utilize my existing ant script from Groovy?

  • Web framework for an application utilizing existing database?

    - by tputkonen
    A legacy web application written using PHP and utilizing a MySQL database needs to be rewritten completely. However, the existing database structure must not be changed at all. I'm looking for suggestions on which framework would be most suitable for this task. Language candidates are Python, PHP, Ruby and Java.

    According to many sources it might be challenging to utilize Rails effectively with an existing database, and I have not found a way to automatically generate models out of the database. With Django it's very easy to generate models automatically. However, I'd appreciate first-hand experience of its suitability for working with legacy DBs. I'd also appreciate suggestions of other frameworks worth considering.
