Search Results

Search found 955 results on 39 pages for 'inserts'.

Page 21 of 39

  • Postgres: post-statement (or post-insert) asynchronous, non-blocking processing

    - by Hassan Syed
    I'm wondering whether it is possible, after a collection of rows is inserted, to initiate an operation that executes asynchronously, is non-blocking, and doesn't need to inform the originator of the request of the result. I am working with large numbers of events, and I can guarantee that the post-insert logic will not fail -- I just want to have a single insert thread in each of my event sources, and I want this thread to keep flying without blocking and without being responsible for any post-delivery book-keeping. I can tell you that I would potentially have 100 of these jobs executing concurrently, and each job might operate on 5 tables with anywhere between 200 and 1,000 inserts on each of them. A hint in the right direction should be enough.
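
    One common fit for this, sketched below as an assumption rather than a definitive answer, is PostgreSQL's LISTEN/NOTIFY: the inserting connection fires a notification and moves on, and a separate worker connection does the book-keeping. The events table and channel names here are hypothetical.

        -- Fire a notification after each insert; the inserting thread never waits.
        CREATE OR REPLACE FUNCTION notify_event_insert() RETURNS trigger AS $$
        BEGIN
            PERFORM pg_notify('events_inserted', NEW.id::text);
            RETURN NEW;
        END;
        $$ LANGUAGE plpgsql;

        CREATE TRIGGER events_after_insert
            AFTER INSERT ON events
            FOR EACH ROW EXECUTE PROCEDURE notify_event_insert();

        -- A separate worker connection subscribes and handles the post-insert logic:
        LISTEN events_inserted;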

    Read the article

  • PostgreSQL - pg_class question

    - by Sachin Chourasiya
    PostgreSQL stores statistics about tables in the system table called pg_class. The query planner accesses this table for every query. These statistics can only be updated by the ANALYZE command; if ANALYZE is not run often, the statistics in this table may be inaccurate, and the query planner may make poor decisions that degrade system performance. An alternative strategy would be for the query planner to generate these statistics for each query (including selects, inserts, updates, and deletes), which would give it the most up-to-date statistics possible. Why does Postgres always rely on pg_class instead?
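
    For reference, a short sketch of how the statistics in question are refreshed and inspected (the orders table name is hypothetical):

        -- Refresh the planner statistics for one table.
        ANALYZE orders;

        -- Inspect the row-count and page-count estimates the planner reads.
        SELECT relname, reltuples, relpages
        FROM pg_class
        WHERE relname = 'orders';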

    Read the article

  • Java library to partially export a database while respecting referential integrity constraints

    - by Mwanji Ezana
    My production database is several GB uncompressed, and it's getting to be a pain to download and run locally when trying to reproduce a bug or test a feature with real data. I would like to be able to select the specific records that interest me, have the library figure out what other records are necessary to produce a dataset that respects the database's integrity constraints, and finally print it out as a list of insert statements or a dump that I can restore. For example: given Author, Blog and Comment tables, when I select comments posted after a certain date, I should get inserts for the Blog records the comments have foreign keys to, and for the Author records those Blogs have foreign keys to.
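
    To make the expected output concrete, here is a sketch of the closure such a tool would have to compute for the example, written as plain SQL with hypothetical column names:

        -- The selected records, plus every row they transitively reference.
        SELECT * FROM comment WHERE posted_at > '2010-01-01';

        SELECT * FROM blog
        WHERE blog_id IN (SELECT blog_id FROM comment WHERE posted_at > '2010-01-01');

        SELECT * FROM author
        WHERE author_id IN (
            SELECT b.author_id
            FROM blog b
            JOIN comment c ON c.blog_id = b.blog_id
            WHERE c.posted_at > '2010-01-01');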

    Read the article

  • MySQL ignores NOT NULL constraint?

    - by Marga Keuvelaar
    I have created a table with NOT NULL constraints on some columns in MySQL. Then in PHP I wrote a script to insert data with an INSERT query. When I omit one of the NOT NULL columns in the insert statement, I would expect an error message from MySQL and my script to fail. Instead, MySQL inserts empty strings in the NOT NULL fields. In other omitted fields the data is NULL, which is fine. Could someone tell me what I did wrong here? I'm using this table:

        CREATE TABLE IF NOT EXISTS tblCustomers (
            cust_id int(11) NOT NULL AUTO_INCREMENT,
            custname varchar(50) NOT NULL,
            company varchar(50),
            phone varchar(50),
            email varchar(50) NOT NULL,
            country varchar(50) NOT NULL,
            ...
            date_added timestamp NOT NULL DEFAULT CURRENT_TIMESTAMP,
            PRIMARY KEY (cust_id)
        );

    And this insert statement:

        $sql = "INSERT INTO tblCustomers (custname, company)
                VALUES ('".$customerName."','".$_POST["CustomerCompany"]."')";
        $res = mysqli_query($mysqli, $sql);
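
    For what it's worth, this is MySQL's documented non-strict behavior: with strict mode off, an omitted NOT NULL string column silently gets the implicit default ''. A minimal sketch of making the insert fail instead (session-level here; it can also be set in my.cnf):

        -- Reject inserts that omit NOT NULL columns that have no default.
        SET SESSION sql_mode = 'STRICT_ALL_TABLES';

        -- This now fails with "Field 'email' doesn't have a default value"
        -- instead of inserting empty strings.
        INSERT INTO tblCustomers (custname, company) VALUES ('Jane Doe', 'Acme');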

    Read the article

  • SQL Server 2008 Insert performance issue

    - by mithiya
    Is there any way to increase the performance of SQL Server inserts? As you can see below, I have used SQL Server 2005, SQL Server 2008 and Oracle. I am moving data from Oracle to SQL Server, and while inserting the data into SQL Server I am using a procedure. Inserts into Oracle are very fast compared to SQL Server; is there any way to increase performance, or a better way to move data from Oracle to SQL Server (data size approx. 100,000 records an hour)? Please find the stats below as I gathered them; the RUN1 and RUN2 times are in milliseconds.
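
    One lever that often dominates this kind of row-by-row load, sketched here under the assumption that the procedure currently commits per row: batch the inserts in explicit transactions, so the log is flushed once per batch rather than once per row (the staging table is hypothetical):

        -- One log flush per batch instead of one per row.
        BEGIN TRANSACTION;
        INSERT INTO staging_orders (id, payload) VALUES (1, 'a');
        INSERT INTO staging_orders (id, payload) VALUES (2, 'b');
        -- ... the rest of the batch ...
        COMMIT TRANSACTION;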

    Read the article

  • SQL Server pivots? Some way to set column names to values within a row

    - by ccsimpson3
    I am building a system of multiple trackers that will share a lot of the same columns, so there is a table for the trackers, a table for the tracker columns, and a cross reference recording which columns go with which tracker. When a user inserts a tracker row, the different column values are stored in multiple rows that share the same record id and hold both the value and the name of the particular column. I need to find a way to dynamically turn the column name stored in each row into an actual column name, i.e.

        id | value | name
        ------------------
        23 | red   | color
        23 | fast  | speed

    needs to look like this:

        id | color | speed
        ------------------
        23 | red   | fast

    Any help is greatly appreciated, thank you.
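
    A minimal sketch of the PIVOT form that produces the second shape from the first, assuming the key/value rows live in a hypothetical tracker_values table (a dynamic-SQL variant is needed when the column names are not known in advance):

        SELECT id, [color], [speed]
        FROM (SELECT id, value, name FROM tracker_values) AS src
        PIVOT (MAX(value) FOR name IN ([color], [speed])) AS p;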

    Read the article

  • PHP XML add/edit/delete

    - by user560411
    Hi there. I am kind of new to this, and I have the XML structure below. I would like code that can: INSERT a new Game element; DELETE an entire Game element based on its TITLE value; and EDIT a Game, replacing the PUBLISHER's value with another value.

        <Game type="XXX">
            <TITLE>XXX</TITLE>
            <PUBLISHER>XXX</PUBLISHER>
        </Game>

    This is my XML structure:

        <GameStore>
            <Game type="adventure">
                <TITLE>Assassin's Creed: Brotherhood</TITLE>
                <PUBLISHER>Ubisoft</PUBLISHER>
            </Game>
            <Game type="adventure">
                <TITLE>Batman: Arkham Asylum</TITLE>
                <PUBLISHER>Eidos</PUBLISHER>
            </Game>
        </GameStore>

    Any help would be great, and thank you in advance!

    Read the article

  • Oracle + Java encoding problem while inserting

    - by Ahmad
    Hi, I am kind of stuck on this one. I'm not a Java or Oracle guru, so please give detailed answers :) I have a web service that inserts something into the DB. The web service is hosted on Axis. The DB is Oracle with the following properties:

        NLS_LANGUAGE      AMERICAN
        NLS_TERRITORY     AMERICA
        NLS_CHARACTERSET  ZHS16GBK

    The web service is hosted on Windows Server 2008, English version, but I have changed the locale of the system to Chinese. Now the data, after insert, has an encoding problem and shows strange characters like ????,exxk??. The .jws file has GBK encoding, and the data that is inserted into the DB is hard-coded in the file [we are not reading it from the REQUEST].
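
    A sanity check worth running first, sketched below: confirm what the database and the current session actually report for character-set conversion (these are standard Oracle data dictionary views):

        -- Database-side character sets (expected to show ZHS16GBK here).
        SELECT parameter, value
        FROM nls_database_parameters
        WHERE parameter IN ('NLS_CHARACTERSET', 'NLS_NCHAR_CHARACTERSET');

        -- What the current session negotiates with the client driver.
        SELECT parameter, value FROM nls_session_parameters;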

    Read the article

  • Ruby: would using Fibers increase my DB insert throughput?

    - by Zombies
    Currently I am using Ruby 1.9.1 and the 'ruby-mysql' gem, which, unlike the 'mysql' gem, is written in Ruby only. This is pretty slow, actually, as it seems to insert at a rate of almost 1 row per second (SLOOOOOWWWWWW), and I have a lot of inserts to make; that's pretty much what this script ultimately does. I am using just 1 connection (since I am using just one thread). I am hoping to speed things up by creating fibers that will each: create a new DB connection, insert 1-3 records, and close the DB connection. I would imagine launching 20-50 of these would greatly increase DB throughput. Am I correct to go along this route? I feel that this is the best option, as opposed to refactoring all of my DB code :(
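
    Worth noting before reaching for fibers: Ruby 1.9 fibers are cooperative, so a blocking MySQL call still stalls the whole process, whereas batching on the SQL side removes most of the round trips. A sketch of the multi-row form, with a hypothetical table:

        -- One round trip and one commit for many rows.
        START TRANSACTION;
        INSERT INTO events (name, payload) VALUES
            ('a', '1'),
            ('b', '2'),
            ('c', '3');
        COMMIT;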

    Read the article

  • Cannot add an entity that already exists. (LINQ to SQL)

    - by Vicheanak
    Hello guys, in my database there are 3 tables:

        CustomerType       (CusID)
        EventType          (EventTypeID)
        CustomerEventType  (CusID, EventTypeID)

        Dim db = New CustomerEventDataContext
        Dim newEvent = New EventType
        newEvent.EventTypeID = txtEventID.Text
        db.EventType.InsertOnSubmit(newEvent)
        db.SubmitChanges()

        ' To select the last ID of event
        Dim lastEventID = (From e In db.EventType
                           Select e.EventTypeID
                           Order By EventTypeID Descending).First()

        Dim chkbx As CheckBoxList = CType(form1.FindControl("CheckBoxList1"), CheckBoxList)
        Dim newCustomerEventType = New CustomerEventType
        Dim i As Integer
        For i = 0 To chkbx.Items.Count - 1 Step i + 1
            If (chkbx.Items(i).Selected) Then
                newCustomerEventType.INTEVENTTYPEID = lastEventID
                newCustomerEventType.INTSTUDENTTYPEID = chkbxStudentType.Items(i).Value
                db.CustomerEventType.InsertOnSubmit(newCustomerEventType)
                db.SubmitChanges()
            End If
        Next

    It works fine when I check only a single CustomerEventType ID from CheckBoxList1: it inserts data into EventType with ID 1 and CustomerEventType with ID 1. However, when I check both of them, the error message says "Cannot add an entity that already exists." Any suggestions, please? Thanks in advance.

    Read the article

  • Get current updated column name to use in a trigger

    - by Serge
    Is there a way to get the name of the column that was updated, in order to use it in a trigger? Basically I'm trying to keep an audit trail whenever a user inserts into or updates a table (in this case a Contact table):

        CREATE TRIGGER `after_update_contact`
        AFTER UPDATE ON `contact`
        FOR EACH ROW BEGIN
            INSERT INTO user_audit
                (id_user, even_date, table_name, record_id, field_name, old_value, new_value)
            VALUES
                (NEW.updatedby, NEW.lastUpdate, 'contact', NEW.id_contact, [...])
        END

    How can I get the name of the column that's been updated, and from that get the OLD and NEW values of that column? If multiple columns have been updated in a row, or even multiple rows, would it be possible to have an audit entry for each update?
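
    As far as I know, MySQL triggers expose no generic "changed column" variable; the usual workaround is one explicit NULL-safe comparison per audited column, sketched here for a single column (the audit columns are copied from the question):

        -- Inside the AFTER UPDATE trigger body, repeat one block per column:
        IF NOT (OLD.location <=> NEW.location) THEN
            INSERT INTO user_audit
                (id_user, even_date, table_name, record_id, field_name, old_value, new_value)
            VALUES
                (NEW.updatedby, NEW.lastUpdate, 'contact', NEW.id_contact,
                 'location', OLD.location, NEW.location);
        END IF;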

    Read the article

  • Check if a row already exists and, if so, hand its id to the referencing table

    - by flhe
    Let's assume I have a table magazine:

        CREATE TABLE magazine (
            magazine_id integer NOT NULL DEFAULT nextval(('public.magazine_magazine_id_seq'::text)::regclass),
            longname  character varying(1000),
            shortname character varying(200),
            issn      character varying(9),
            CONSTRAINT pk_magazine PRIMARY KEY (magazine_id)
        );

    And another table issue:

        CREATE TABLE issue (
            issue_id integer NOT NULL DEFAULT nextval(('public.issue_issue_id_seq'::text)::regclass),
            number integer,
            year   integer,
            volume integer,
            fk_magazine_id integer,
            CONSTRAINT pk_issue PRIMARY KEY (issue_id),
            CONSTRAINT fk_magazine_id FOREIGN KEY (fk_magazine_id)
                REFERENCES magazine (magazine_id) MATCH SIMPLE
                ON UPDATE NO ACTION ON DELETE NO ACTION
        );

    Current INSERTs:

        INSERT INTO magazine (longname, shortname, issn)
        VALUES ('a long name', 'ee', '1111-2222');

        INSERT INTO issue (fk_magazine_id, number, year, volume)
        VALUES (currval('magazine_magazine_id_seq'), 8, 1982, 6);

    Now a row should only be inserted into magazine if it does not already exist. However, if it does exist, the issue table needs to get the magazine_id of the already existing row in order to establish the reference. How can I do this? Thanks in advance!
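
    A sketch of one way to do this in a single statement, assuming PostgreSQL 9.5+ and a UNIQUE constraint on issn (neither is given in the question, so treat both as assumptions):

        -- Insert the magazine only if its ISSN is new, then reference
        -- whichever row ends up existing.
        WITH ins AS (
            INSERT INTO magazine (longname, shortname, issn)
            VALUES ('a long name', 'ee', '1111-2222')
            ON CONFLICT (issn) DO NOTHING
            RETURNING magazine_id
        )
        INSERT INTO issue (fk_magazine_id, number, year, volume)
        SELECT COALESCE(
                   (SELECT magazine_id FROM ins),
                   (SELECT magazine_id FROM magazine WHERE issn = '1111-2222')),
               8, 1982, 6;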

    Read the article

  • What should I do so that GWT uses <th> instead of <td>?

    - by Paulus
    To build an HTML table I use the DOM directly, instead of a predefined widget, so that I can set thead and tbody. But when I insert a cell with the insertCell() method, GWT inserts a td element where it should insert a th element. So

        TableElement table = Document.get().createTableElement();
        TableSectionElement thead = table.createTHead();
        TableRowElement headRow = thead.insertRow(-1);
        headRow.insertCell(-1).setInnerText("header1");

    gives table/thead/tr/td but should give table/thead/tr/th. Any idea how to solve this problem? Use the "old-school" DOM.createXXX() methods?

    Read the article

  • How to handle a large table in MySQL?

    - by Frantz Miccoli
    I have a database used to store items and properties of these items. The number of properties is extensible, so there is a join table that stores each property value associated with an item:

        CREATE TABLE `item_property` (
            `property_id` int(11) NOT NULL,
            `item_id`     int(11) NOT NULL,
            `value`       double NOT NULL,
            PRIMARY KEY (`property_id`, `item_id`),
            KEY `item_id` (`item_id`)
        ) ENGINE=InnoDB DEFAULT CHARSET=utf8 COLLATE=utf8_unicode_ci;

    This database has two goals: storing (which has first priority and has to be very quick; I would like to perform many inserts, hundreds, in a few seconds), and retrieving data (selects using item_id and property_id), which is a second priority (it can be slower, but not too much, because that would ruin my usage of the DB). Currently this table hosts 1.6 billion entries, and a simple count can take up to 2 minutes... Inserting isn't fast enough to be usable. I'm using Zend_Db to access my data and would really be happy if you don't suggest that I develop anything on the PHP side. Thanks for your advice!
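
    One structural option worth benchmarking, sketched here as an assumption rather than a recommendation: hash-partition the table so inserts and (property_id, item_id) lookups each touch only a fraction of the index (the syntax is available from MySQL 5.1):

        -- Same shape as the question's table, split into 64 physical pieces.
        CREATE TABLE item_property_part (
            property_id INT NOT NULL,
            item_id     INT NOT NULL,
            value       DOUBLE NOT NULL,
            PRIMARY KEY (property_id, item_id),
            KEY item_id (item_id)
        ) ENGINE=InnoDB
        PARTITION BY HASH(property_id)
        PARTITIONS 64;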

    Read the article

  • How do I update from table-2 to table-1?

    - by mathew
    Hi, what is wrong with this code? I have set it to update the value in the table every 24 hours, but the problem is: if $row is empty, it inserts a value from table-2, but after 24 hours it won't update the value. What I want is for it to either delete the existing value and insert a new one (a random value), or update the same row with a new value.

        if ($row == 0) {
            mysql_query("INSERT INTO table-1
                (regtime, person, location, address, rank, ip, geocode)
                SELECT NOW(), person, location, address, rank, ip, geocode
                FROM table-2 ORDER BY RAND() LIMIT 1");
        } else {
            mysql_query("UPDATE table-1
                SELECT regtime=NOW(), person=person, location=location,
                       address=address, rank=rank, ip=ip, geocode=geocode
                FROM table-2 ORDER BY RAND() LIMIT 1");
        }
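
    The else branch fails because MySQL has no UPDATE ... SELECT form. A sketch of the join-based multi-table UPDATE that does what that branch intends (the hyphenated table names need backquotes, as below):

        UPDATE `table-1` t1
        JOIN (SELECT person, location, address, `rank`, ip, geocode
              FROM `table-2` ORDER BY RAND() LIMIT 1) t2
        SET t1.regtime  = NOW(),
            t1.person   = t2.person,
            t1.location = t2.location,
            t1.address  = t2.address,
            t1.`rank`   = t2.`rank`,
            t1.ip       = t2.ip,
            t1.geocode  = t2.geocode;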

    Read the article

  • Converting a delimited string to multiple values in MySQL

    - by epo
    I have a MySQL legacy table which contains a client identifier and a list of items, the latter as a comma-delimited string, e.g. "xyz001", "foo,bar,baz". This is legacy stuff and the user insists on being able to edit a comma-delimited string. They now have a requirement for a report table with the above broken into separate rows:

        "xyz001", "foo"
        "xyz001", "bar"
        "xyz001", "baz"

    Breaking the string into substrings is easily doable, and I have written a procedure to do this by creating a separate table, but that requires triggers to deal with deletes, updates and inserts. The query is needed rarely (say once a month) but has to be absolutely up to date when it is run, so e.g. the overhead of triggers is not warranted, and scheduled tasks to create the table might not be timely enough. Is there any way to write a function that returns a table or a set, so that I can join the identifier with the individual items on demand?
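
    MySQL stored functions can't return a set, but the classic on-demand alternative is a join against a small numbers table, sketched here with hypothetical table and column names:

        -- items(client_id, item_list) holds e.g. ('xyz001', 'foo,bar,baz');
        -- numbers(n) holds 1, 2, 3, ... up to the longest possible list.
        SELECT i.client_id,
               SUBSTRING_INDEX(SUBSTRING_INDEX(i.item_list, ',', n.n), ',', -1) AS item
        FROM items i
        JOIN numbers n
          ON n.n <= 1 + LENGTH(i.item_list) - LENGTH(REPLACE(i.item_list, ',', ''));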

    Read the article

  • Save changes problem

    - by ognjenb
        // GET: /Transporter/Edit/5
        public ActionResult Edit(int id)
        {
            var FilesQueryEdit = (from i in FilesEntities.transporter
                                  where i.Id == id
                                  select i).First();
            return View(FilesQueryEdit);
        }

        // POST: /Transporter/Edit/5
        [HttpPost]
        public ActionResult Edit(int id, transporter trn)
        {
            try
            {
                // TODO: Add update logic here
                FilesEntities.AddTotransporter(trn);
                FilesEntities.SaveChanges();
                return RedirectToAction("Transporters");
            }
            catch
            {
                return View();
            }
        }

    This code inserts a new row after the edited data is posted. How do I modify the POST action so it saves the changes to the edited row without inserting a new one?

    Read the article

  • Map null column as 0 in a legacy database (JPA)

    - by Indrek
    Using the Play! framework and its JPASupport class, I have run into a problem with a legacy database. I have the following class:

        @Entity
        @Table(name = "product_catalog")
        public class ProductCatalog extends JPASupport {

            @Id
            @GeneratedValue(strategy = GenerationType.IDENTITY)
            public Integer product_catalog;

            @OneToOne
            @JoinColumn(name = "upper_catalog")
            public ProductCatalog upper_catalog;

            public String name;
        }

    Some product catalogs don't have an upper catalog, and this is represented as 0 in the legacy database. If I supply upper_catalog as NULL, then, as expected, JPA inserts a NULL value into that database column. How could I force null values to be written as 0 to the database, and the other way around when reading from the database?

    Read the article

  • PHP Array Efficiency and Memory Clarification

    - by CogitoErgoSum
    When declaring an array in PHP, the indexes may be created out of order, i.e.

        $array[1]  = 1;
        $array[19] = 2;
        $array[4]  = 3;

    My question: in creating an array like this, is its length 19, with nulls in between? If I attempted to get $array[3], would it come back as undefined or throw an error? Also, how does this affect memory: would memory for 3 indexes be taken up, or for 19? Also, a developer currently wrote a script with 3 arrays, $FailedUpdates, $FailedDeletes and $FailedInserts. Is it more efficient to do it this way, or to use an associative array controlling several sub-arrays?

        "Failures" => array(
            "Updates" => array(
                0 => 12,
                1 => 41
            ),
            "Deletes" => array(
                0 => 122,
                1 => 414,
                2 => 43
            ),
            "Inserts" => array(
                0 => 12
            )
        )

    Read the article

  • Hibernate / MySQL Bulk insert problem

    - by Marty Pitt
    I'm having trouble getting Hibernate to perform a bulk insert on MySQL. I'm using Hibernate 3.3 and MySQL 5.1. At a high level, this is what's happening:

        @Transactional
        public Set<Long> doUpdate(Project project, IRepository externalSource) {
            List<IEntity> entities = externalSource.loadEntites();
            buildEntities(entities, project);
            persistEntities(project);
        }

        public void persistEntities(Project project) {
            projectDAO.update(project);
        }

    This results in n log entries (one for every row), as follows:

        Hibernate: insert into ProjectEntity (name, parent_id, path, project_id, state, type)
        values (?, ?, ?, ?, ?, ?)

    I'd like to see this get batched, so the update is more performant. It's possible that this routine could result in tens of thousands of rows being generated, and a DB trip per row is a killer. Why isn't this getting batched? (It's my understanding that Hibernate is supposed to batch inserts by default where appropriate.)

    Read the article

  • What are the best settings of the H2 database for high concurrency?

    - by dexter
    There are a lot of settings that can be used in the H2 database: AUTO_SERVER, MVCC, LOCK_MODE, FILE_LOCK and MULTI_THREADED. I wonder what combination works best for a high-concurrency setup, e.g. one connection doing INSERTs while another connection does some UPDATEs and SELECTs? I tried MVCC=TRUE;LOCK_MODE=3;FILE_LOCK=NO, but whenever I do some UPDATEs in one connection, the other connection does not see them even though I commit. By the way, the connections are from different processes, i.e. separate programs.

    Read the article

  • How to use jQuery to remove dynamic content?

    - by ed.talmadge
    I have an HTML page that is generated by a CMS. I cannot modify the page, but I can add JavaScript. Each time the page loads, a JavaScript function (that I cannot modify) dynamically inserts a paragraph onto the page. How can I use jQuery's .remove() to delete that paragraph whenever it is loaded? For example, when the page first loads, it looks like this (blank):

        <div></div>

    Then, a few seconds later, a JavaScript function (that I have no control over) adds a paragraph to the page. The page then looks like this:

        <div><p id="foo">bar</p></div>

    How can I use jQuery to remove the paragraph with id="foo" each time it is dynamically added to the page?

    Read the article

  • SQLite (C/C++ interface) - how to commit a transaction

    - by AJ
    I am using the SQLite C/C++ interface. Here is my scenario: I have 3 related tables, say A, B and C. There is a function called Set which gets some inputs and, based on those inputs, inserts rows into these three tables (sometimes it can be an update in one of the tables). Now I need two things. One, I don't want autocommit behavior; basically I would like to commit after every 1000 calls to the Set function. Secondly, within the Set function itself, if I find that after inserting into two tables the third insert fails, then I have to revert those particular changes made in that Set call. I don't see any sqlite3_commit function exposed; I only see a function called sqlite3_commit_hook(), which the documentation describes as something slightly different. Are there any functions exposed for this purpose? What is the way to achieve this behaviour? Can you help me with the best approach for doing this? Regards, Arjun
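
    SQLite drives transactions with ordinary SQL passed through sqlite3_exec() rather than a dedicated commit call. A sketch of the statement sequence matching both requirements (the column names on A, B and C are hypothetical):

        BEGIN;                        -- once, at the start of a 1000-call batch

        SAVEPOINT set_call;           -- at the start of each Set()
        INSERT INTO A (id, val) VALUES (1, 'x');
        INSERT INTO B (id, val) VALUES (1, 'y');
        INSERT INTO C (id, val) VALUES (1, 'z');
        -- if the third insert failed:   ROLLBACK TO set_call;
        -- if all three succeeded:       RELEASE set_call;

        COMMIT;                       -- after the 1000th successful Set() call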

    Read the article

  • Scraping HTML without unique identifiers using Python

    - by Nicholas Law
    I would like to design an algorithm, using Python, that scrapes thousands of pages like this one and this one, gathers all the data and inserts it into a MySQL database. The script will be run on a weekly or bi-weekly basis to update the database with any new information added to each individual page. Ideally I would like a scraper that is easy to work with for table-structured data, but also for data that does not have unique identifiers (i.e. id and class attributes). Which scraper library should I use: BeautifulSoup, Scrapy or Mechanize? Are there any particular tutorials/books I should be looking at for this desired result? In the long run I will be implementing a mobile app that works with all this data by querying the database.

    Read the article

  • Open source framework à la Microsoft Sync Framework suggestions?

    - by drskol
    We are implementing a warehouse management system on top of an open source stack (Java, web services & friends). In this system we want to integrate many mobile devices, which should also be capable of adequate online/offline functionality, e.g. preparing database inserts while a mobile device is temporarily disconnected and performing them on the backend database when it reconnects. For a .NET stack, Microsoft Sync Framework would be a perfect solution, e.g. for database replication and hoarding. Can anyone suggest an open source alternative to the MS Sync Framework and possibly describe their experiences with it? Thanks in advance for any answers.

    Read the article
