Search Results

Search found 19499 results on 780 pages for 'transaction log'.


  • Ibatis startBatch() only works with SqlMapClient's own start and commit transactions, not with Spring ones

    - by Brian
    Hi, I'm finding that even though I have code wrapped by Spring transactions, and it commits/rolls back when I would expect, in order to make use of JDBC batching when using Ibatis and Spring I need to use explicit SqlMapClient transaction methods. I.e. this does batching as I'd expect:

      dao.getSqlMapClient().startTransaction();
      dao.getSqlMapClient().startBatch();
      int i = 0;
      for (MyObject obj : allObjects) {
          dao.storeChange(obj);
          i++;
          if (i % DB_BATCH_SIZE == 0) {
              dao.getSqlMapClient().executeBatch();
              dao.getSqlMapClient().startBatch();
          }
      }
      dao.getSqlMapClient().executeBatch();
      dao.getSqlMapClient().commitTransaction();

    but if I don't have the opening and closing transaction statements, and rely on Spring to manage things (which is what I want to do!), batching just doesn't happen. Given that Spring does otherwise seem to be handling its side of the bargain regarding transaction management, can anyone advise on any known issues here? (Database is MySQL; I'm aware of the issues regarding its JDBC pseudo-batch approach with INSERT statement rewriting, that's definitely not an issue here.)

    Read the article

  • Can't commit or rollback: MySQL "Commands out of sync" error on .NET

    - by sergiogx
    I'm having trouble with a stored procedure: I can't commit after I execute it. It shows this error: "[System.Data.Odbc.OdbcException] = {"ERROR [HY000] [MySQL][ODBC 5.1 Driver]Commands out of sync; you can't run this command now"}". The SP by itself works fine. Does anyone have an idea of what might be happening? The .NET code:

      [WebMethod()]
      [SoapHeader("sesion")]
      public Boolean aceptarTasaCero(int idMunicipio, double valor)
      {
          Boolean resultado = false;
          OdbcConnection cxn = new OdbcConnection();
          cxn.ConnectionString = ConfigurationManager.ConnectionStrings["mysqlConnection"].ConnectionString;
          cxn.Open();
          OdbcCommand cmd = new OdbcCommand("call aceptarTasaCero(?,?)", cxn);
          cmd.Parameters.Add("idMunicipio", OdbcType.Int).Value = idMunicipio;
          cmd.Parameters.Add("valor", OdbcType.Double).Value = valor;
          cmd.Transaction = cxn.BeginTransaction();
          try
          {
              OdbcDataReader dr = cmd.ExecuteReader();
              if (dr.Read())
              {
                  resultado = Convert.ToBoolean(dr[0]);
              }
              cmd.Transaction.Commit();
          }
          catch (Exception ex)
          {
              ExBi.log(ex, sesion.idUsuario);
              cmd.Transaction.Rollback();
          }
          finally
          {
              cxn.Close();
          }
          return resultado;
      }

    And this is the code for the stored procedure:

      DELIMITER $$
      DROP PROCEDURE IF EXISTS `aceptartasacero` $$
      CREATE DEFINER=`database`@`%` PROCEDURE `aceptartasacero`(pidMun INTEGER, pvalor double)
      BEGIN
          declare vExito BOOLEAN;
          INSERT INTO tasacero(anio,valor,idmunicipios) VALUES(YEAR(curdate()),pValor,pidMun);
          set vExito = true;
          select vExito;
      END $$
      DELIMITER ;

    Thanks.

    Read the article

  • Can 1 Oracle schema support a large number of requests per day? Is this safe?

    - by Hlex
    I'm a Java system designer. We have several large projects to deliver on a tight schedule; they are Java APIs without web pages. My design is a general flow engine that supports all of the projects. The idea uses 1 Oracle schema with a general transaction table plus other control/routing tables, and it is nearly complete. But the DBA team is concerned that maintaining a very large request volume against 1 schema will be painful. One reason: if there is a problem in some table, he must take the tablespace offline to fix it, and this is a problem because all projects will be affected. I am trying to convince them by splitting the data of each table into partitions keyed by project_code and a "month number to delete", for example partitions PROJ1_05, PROJ1_06, PROJ1_07, PROJ2_05, PROJ2_06, PROJ2_07, so every transaction table stores its rows in its own partition. Then, if there is a problem in part of a tablespace, only some partitions need to go offline, and another project using the same table can still be serviced. The volume should be around 10 million transaction records per day. Is this a good idea? If I must use 1 schema, what is a good strategy? Do you have any comments?
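    A minimal sketch of the kind of composite partitioning being described, using Oracle range-list partitioning (table, column, and boundary values are illustrative assumptions, not taken from the actual system); old months can then be removed with ALTER TABLE ... DROP PARTITION without touching other projects' data:

      -- Illustrative only: one shared transaction table, partitioned by month
      -- and sub-partitioned by project, so a problem segment can be taken
      -- offline or dropped per project/month instead of per tablespace.
      CREATE TABLE general_transaction (
        txn_id        NUMBER         NOT NULL,
        project_code  VARCHAR2(10)   NOT NULL,
        txn_date      DATE           NOT NULL,
        payload       VARCHAR2(4000)
      )
      PARTITION BY RANGE (txn_date)
      SUBPARTITION BY LIST (project_code)
      SUBPARTITION TEMPLATE (
        SUBPARTITION proj1 VALUES ('PROJ1'),
        SUBPARTITION proj2 VALUES ('PROJ2')
      )
      (
        PARTITION m_2010_05 VALUES LESS THAN (TO_DATE('2010-06-01','YYYY-MM-DD')),
        PARTITION m_2010_06 VALUES LESS THAN (TO_DATE('2010-07-01','YYYY-MM-DD')),
        PARTITION m_2010_07 VALUES LESS THAN (TO_DATE('2010-08-01','YYYY-MM-DD'))
      );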

    Read the article

  • CommandBuilder and SqlTransaction to insert/update a row

    - by Jesse
    I can get this to work, but I feel as though I'm not doing it properly. The first time this runs, it works as intended, and a new row is inserted where "thisField" contains "doesntExist". However, if I run it a subsequent time, I get a run-time error that I can't insert a duplicate key as it violates the primary key "thisField".

      static void Main(string[] args)
      {
          using (var sqlConn = new SqlConnection(connString))
          {
              sqlConn.Open();
              var dt = new DataTable();
              var sqlda = new SqlDataAdapter("SELECT * FROM table WHERE thisField ='doesntExist'", sqlConn);
              sqlda.Fill(dt);

              DataRow dr = dt.NewRow();
              dr["thisField"] = "doesntExist"; //Primary key
              dt.Rows.Add(dr);
              //dt.AcceptChanges(); //I thought this may fix the problem. It didn't.

              var sqlTrans = sqlConn.BeginTransaction();
              try
              {
                  sqlda.SelectCommand = new SqlCommand("SELECT * FROM table WITH (HOLDLOCK, ROWLOCK) WHERE thisField = 'doesntExist'", sqlConn, sqlTrans);
                  SqlCommandBuilder sqlCb = new SqlCommandBuilder(sqlda);
                  sqlda.InsertCommand = sqlCb.GetInsertCommand();
                  sqlda.InsertCommand.Transaction = sqlTrans;
                  sqlda.DeleteCommand = sqlCb.GetDeleteCommand();
                  sqlda.DeleteCommand.Transaction = sqlTrans;
                  sqlda.UpdateCommand = sqlCb.GetUpdateCommand();
                  sqlda.UpdateCommand.Transaction = sqlTrans;
                  sqlda.Update(dt);
                  sqlTrans.Commit();
              }
              catch (Exception)
              {
                  //...
              }
          }
      }

    Even when I can get that working through trial and error of moving AcceptChanges around, or encapsulating changes within Begin/EndEdit, I begin to experience a "Concurrency violation" in which it won't update the changes, but rather tells me it failed to update 0 of 1 affected rows. Is there something crazy obvious I'm missing?

    Read the article

  • Transactions and delete using Fluent NHibernate

    - by Will I Am
    I am starting to play with (Fluent) NHibernate and I am wondering if someone can help with the following. I'm sure it's a total noob question. I want to do:

      delete from TABX where name = 'abc'

    where table TABX is defined as:

      ID int
      name varchar(32)
      ...

    I built the code based on internet samples:

      using (ITransaction transaction = session.BeginTransaction())
      {
          IQuery query = session.CreateQuery("FROM TABX WHERE name = :uid")
              .SetString("uid", "abc");
          session.Delete(query.List<Person>()[0]);
          transaction.Commit();
      }

    but alas, it's generating two queries (one select and one delete). I want to do this in a single statement, as in my original SQL. What is the correct way of doing this? Also, I noticed that in most samples on the internet, people tend to always wrap all queries in transactions. Why is that? If I'm only running a single statement, that seems like overkill. Do people tend to just mindlessly cut and paste, or is there a reason beyond that? For example, in my query above, if I do manage to get it from two queries down to one, I should be able to remove the begin/commit transaction, no? If it matters, I'm using PostgreSQL for experimenting.
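    One hedged possibility (assuming an NHibernate version that supports DML-style HQL, i.e. 2.1 or later; the entity name is taken from the question's query): a bulk delete can be issued as a single statement by passing delete-style HQL to session.CreateQuery(...).SetString("uid", "abc").ExecuteUpdate(), so nothing is selected first and NHibernate emits one DELETE:

      delete from TABX where name = :uid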

    Read the article

  • SQL2008 merge replication fails to update dependent items when table is added

    - by Dan Puzey
    Setup: an existing SQL2008 merge replication scenario. A large server database, including views and stored procs, is being replicated to client machines. What I'm doing:

      * adding a new table to the database
      * marking the new table for replication (using SP_AddMergeArticle)
      * altering a view (which is already part of the replicated content) to include fields from this new table (which is joined to the tables in the existing view); a stored procedure is similarly updated

    The problem: the table gets replicated to client machines, but the view is not updated. The stored procedure is also not updated. Non-useful workaround: if I run the snapshot agent after calling SP_AddMergeArticle and before updating the view/SP, both the view and the stored procedure changes correctly replicate to the client. The bigger problem: I'm running a list of database scripts in a transaction, as part of a larger process. The snapshot agent can't be run during a transaction, and if I interrupt the transaction (e.g. by running the scripts in multiple transactions), I lose the ability to roll back the changes should something fail. Does anyone have any suggestions? It seems like I must be missing something obvious, because I don't see why the changes to the view/sproc wouldn't be replicating anyway, regardless of what's going on with the new table.

    Read the article

  • Can't wrap my head around appengine data store persistence

    - by aloo
    Hi, I've run into the "can't operate on multiple entity groups in a single transaction." problem when using App Engine for Java with JDO and the following code:

      PersistenceManager pm = PMF.get().getPersistenceManager();
      Query q = pm.newQuery("SELECT this FROM " + TypeA.class.getName()
          + " WHERE userId == userIdParam ");
      q.declareParameters("String userIdParam");
      List<TypeA> typeAs = (List<TypeA>) q.execute(userIdParam);
      for (TypeA a : typeAs) {
          a.setSomeField(someValue);
      }
      pm.close();

    The problem, it seems, is that I can't operate on multiple entities at the same time because they aren't in the same entity group while in a transaction. Even though it doesn't seem like I'm in a transaction, App Engine generates one because I have the following set in my jdoconfig.xml:

      <property name="datanucleus.appengine.autoCreateDatastoreTxns" value="true"/>

    Fine. So far I think I understand. BUT - if I replace TypeA in the above code with TypeB, I don't get the error. I don't believe there is anything different between TypeA and TypeB - they both have the same key structure. They do have different fields, but that shouldn't matter, right? My question is: what could possibly be different between TypeA and TypeB that gives this different behavior? And consequently, what do I fundamentally misunderstand that this behavior could even exist? Thanks.

    Read the article

  • SQL: Join multiple tables and get a grouped sum

    - by Scienceprodigy
    I have a database with 3 tables that have related data. One table has transactions, and the other two relate to transaction categories. Basically it's financial data, so each transaction has a category (i.e. "gasoline" for a gas purchase transaction). A short version of my Transactions table looks like this:

      Transactions Table:
      | ID | Type | Amount | Category |

    I also have two more tables relating a category to a category's parent. Basically, every Category entry in the Transactions table belongs to a parent category (i.e. "gasoline" would belong to, say, "Automotive Expenses"). For categories and their parents, I have two tables:

      Category Children:
      | ID | Parent Category ID | Child Category |

      Category Parent:
      | ID | Parent Category |

    What I'm trying to do is query the database and have it return total spending by parent category. To get "spending", the Type of the transaction must be "Debit". I tried the following statement:

      SELECT category_parents.parent_category, SUM(amount) AS totals
      FROM (transactions
        INNER JOIN category_children
          ON transactions.category = 'category_children.child_category')
        INNER JOIN category_parents
          ON category_children.parent_category_id = category_parents._id
      WHERE trans_type = 'Debit'
      GROUP BY parent_category
      ORDER BY totals DESC

    but it gives me the following exception:

      12-31 13:51:21.515: ERROR/Exception on query(4403): android.database.sqlite.SQLiteException: no such column: category_children.parent_category_id: , while compiling: SELECT category_parents.parent_category, SUM(amount) AS totals FROM (transactions INNER JOIN category_children ON transactions.category='category_children.child_category') INNER JOIN category_parents ON category_children.parent_category_id=category_parents._id where trans_type='Debit' group by parent_category order by totals desc

    Any help is appreciated. (EXTRA CREDIT: I also need to make another statement to do spending by child category, given the parent category.)
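    Two things stand out in the posted query, so here is a hedged corrected sketch (SQLite syntax; it assumes the column really is named parent_category_id - the exception says that column does not exist, so the actual name in the CREATE TABLE should be double-checked). Note also that the posted join quotes 'category_children.child_category', which compares the category name against that literal string instead of against the column:

      SELECT cp.parent_category, SUM(t.amount) AS totals
      FROM transactions t
      INNER JOIN category_children cc ON t.category = cc.child_category
      INNER JOIN category_parents cp ON cc.parent_category_id = cp._id
      WHERE t.trans_type = 'Debit'
      GROUP BY cp.parent_category
      ORDER BY totals DESC;

      -- Spending by child category within one parent (the "extra credit" query);
      -- the parent name here is just an example value
      SELECT cc.child_category, SUM(t.amount) AS totals
      FROM transactions t
      INNER JOIN category_children cc ON t.category = cc.child_category
      INNER JOIN category_parents cp ON cc.parent_category_id = cp._id
      WHERE t.trans_type = 'Debit' AND cp.parent_category = 'Automotive Expenses'
      GROUP BY cc.child_category
      ORDER BY totals DESC;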

    Read the article

  • How to bind an ADF Table on button click

    - by Juan Manuel Formoso
    Coming from ASP.NET, I'm having a hard time with basic ADF concepts. I need to bind a table on a button click, and for some reason I don't understand (I'm leaning towards the page life cycle, which I guess is different from ASP.NET) it's not working. This is my ADF code:

      <af:commandButton text="#{viewcontrollerBundle.CMD_SEARCH}" id="cmdSearch"
                        action="#{backingBeanScope.indexBean.cmdSearch_click}"
                        partialSubmit="true"/>
      <af:table var="row" rowBandingInterval="0" id="t1"
                value="#{backingBeanScope.indexBean.transactionList}"
                partialTriggers="::cmdSearch"
                binding="#{backingBeanScope.indexBean.table}">
        <af:column sortable="false" headerText="idTransaction" id="c2">
          <af:outputText value="#{row.idTransaction}" id="ot4"/>
        </af:column>
        <af:column sortable="false" headerText="referenceCode" id="c5">
          <af:outputText value="#{row.referenceCode}" id="ot7"/>
        </af:column>
      </af:table>

    This is cmdSearch_click:

      public String cmdSearch_click() {
          List l = new ArrayList();
          Transaction t = new Transaction();
          t.setIdTransaction(BigDecimal.valueOf(1));
          t.setReferenceCode("AAA");
          l.add(t);
          t = new Transaction();
          t.setIdTransaction(BigDecimal.valueOf(2));
          t.setReferenceCode("BBB");
          l.add(t);
          setTransactionList(l);
          // AdfFacesContext.getCurrentInstance().addPartialTarget(table);
          return null;
      }

    The commented line also doesn't work. If I populate the list in my bean's constructor, the table renders OK. Any ideas?

    Read the article

  • JTA or LOCAL transactions in JPA2+Hibernate 3.6.0?

    - by Pangea
    We are in the process of re-thinking our tech stack and below are our choices (we can't live without Spring and Hibernate due to the complexity etc. of the app). We are also moving from J2EE 1.4 to JEE 5. Tech stack:

      * JEE 5
      * JPA 2.0 (I know JEE 5 only supports JPA 1.0, but we want to use Hibernate as the JPA provider)
      * Hibernate 3.6.0 (we already have lots of hbm files with custom types etc., so we don't want to migrate them to JPA at this time; this means we want both JPA and hbm mappings to work together, hence Hibernate as the JPA provider instead of the default that comes with the app server)

    Now the problem is that I want to stick with local transactions, but other team members want to use JTA. I have been working with J2EE for the last 9 years, and I've heard time and again people suggesting to stick with local transactions if you don't need two-phase commits. This is not only for performance reasons; debugging/troubleshooting a local transaction is a lot easier than a distributed transaction. My suggestion is to use Spring declarative transaction management + local transactions (HibernateTransactionManager). I want to make sure whether I am being paranoid or I have a valid point. I'd like to hear what the rest of the JEE world thinks. Thank you.

    Read the article

  • Need help fixing a unique key in Rails: Rails is adding its own id, causing a duplicate key

    - by railsnew
    I need some help fixing the issue below. I had transaction blocks in my Rails code like this:

      @sqlcontact = "INSERT INTO contacts (id,\"cid\", \"hphone\", mphone, provider, cemail, email, sms , mail, phone) VALUES ('"+@id1+"','" + @id1 + "', '"+ params[:hphone] + "', '"+params[:mphone]+ "', '" + params[:provider] + "', '" + params[:cemail]+ "', '" + @varemail+ "', '"+@varsms+ "', '"+ @varmail+"', '"+@varphone+"')"

    My app was deployed to Heroku, so I was advised by them to remove the transaction blocks. So I changed the above to:

      @cont = Contact.new(:id => @id1, :cid => @id1, :hphone => params[:hphone],
                          :mphone => params[:mphone], :provider => params[:provider],
                          :cemail => params[:cemail], :email => @varemail, :sms => @varsms,
                          :mail => @varmail, :phone => @varphone)
      @cont.save

    My app also already had data stored. Now the problem is that when I try to save a record I keep getting the error:

      duplicate key value violates unique constraint "contacts_pkey"

    The error also shows the SQL query trying to insert data; however, in that SQL query I do not see the id value. As you can see from my code, I am passing the id, so why is Rails not accepting it? Does it always include its own sequential id? Can I not overwrite the default Rails magic? And if it does that, does it not look at the data that is already in the DB? I am really stuck here. What should I do? Should I just go back to my transaction block?
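    A hedged observation: Heroku's database is PostgreSQL, and if rows were previously inserted with explicit ids (as the old raw INSERT did), the sequence backing contacts.id can be left pointing at values that already exist, so the next ActiveRecord insert collides. One common fix is to advance the sequence past the existing data (this assumes the default Rails sequence name contacts_id_seq; confirm the real name with \d contacts in psql):

      SELECT setval('contacts_id_seq', (SELECT MAX(id) FROM contacts));

    After that, letting Rails/PostgreSQL assign the id (rather than passing :id explicitly) keeps the sequence and the data in step.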

    Read the article

  • How can the Three-Phase Commit Protocol (3PC) guarantee atomicity?

    - by AndiDog
    I'm currently exploring worst case scenarios of atomic commit protocols like 2PC and 3PC and am stuck at the point that I can't find out why 3PC can guarantee atomicity. That is, how does it guarantee that if cohort A commits, cohort B also commits? Here's the simplified 3PC from the Wikipedia article. Now let's assume the following case:

      * Two cohorts participate in the transaction (A and B)
      * Both do their work, then vote for commit
      * Coordinator now sends precommit messages...
      * A receives the precommit message, acknowledges, and then goes offline for a long time
      * B doesn't receive the precommit message (whatever the reason might be) and is thus still in "uncertain" state

    The results:

      * Coordinator aborts the transaction because not all precommit messages were sent and acknowledged successfully
      * A, who is in precommit state, is still offline, thus times out and commits
      * B aborts in any case: he either stays offline and times out (causes abort) or comes online and receives the abort command from the coordinator

    And there you have it: one cohort committed, another aborted. The transaction is screwed. So what am I missing here? In my understanding, if the automatic commit on timeout (in precommit state) was replaced by infinitely waiting for a coordinator command, that case should work fine.

    Read the article

  • Auto increment with a Unit Of Work

    - by Derick
    Context: I'm building a persistence layer to abstract the different types of databases that I'll be needing. On the relational side I have MySQL, Oracle and PostgreSQL. Let's take the following simplified MySQL tables:

      CREATE TABLE Contact (
        ID varchar(15),
        NAME varchar(30)
      );

      CREATE TABLE Address (
        ID varchar(15),
        CONTACT_ID varchar(15),
        NAME varchar(50)
      );

    I use code to generate system-specific alphanumeric unique IDs, fitting 15 chars in this case. Thus, if I insert a Contact record with its Addresses, I have my generated Contact.ID and Address.CONTACT_IDs before committing. I've created a Unit of Work (amongst others) as per Martin Fowler's patterns to add transaction support. I'm using a key-based Identity Map in the UoW to track the changed records in memory. It works like a charm for the scenario above; all pretty standard stuff so far. The question scenario comes in when I have a database that is not under my control and the ID fields are auto-increment (or, in Oracle, sequences). In this case I do not have the db-generated Contact.ID beforehand, so when I create my Address I do not have a value for Address.CONTACT_ID. The transaction has not been started on the DB session, since everything is kept in the Identity Map in memory. Question: what is a good approach to address this (avoiding unnecessary db round trips)? Some ideas: retrieve the last ID. I can do a call to the database to retrieve the last id, like:

      SELECT Auto_increment FROM information_schema.tables WHERE table_name='Contact';

    But this is MySQL specific, and probably something similar can be done for the other databases. If I do this, then I would need to do the 1st insert, get the ID and then update the children (Address.CONTACT_IDs) - all in the current transaction context.
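    A hedged sketch of the more usual pattern: let the database generate the key during the insert itself and read it back on the same connection, instead of querying information_schema (which also reflects other sessions' inserts). Table names follow the question; the Oracle sequence name is an assumption:

      -- MySQL: LAST_INSERT_ID() is scoped to the current connection
      INSERT INTO Contact (NAME) VALUES ('Alice');
      SELECT LAST_INSERT_ID();

      -- PostgreSQL: generate and return the key in one statement
      INSERT INTO Contact (NAME) VALUES ('Alice') RETURNING ID;

      -- Oracle: draw the sequence value first, then use it for parent and children
      SELECT contact_seq.NEXTVAL FROM dual;
      INSERT INTO Contact (ID, NAME) VALUES (:new_id, 'Alice');
      INSERT INTO Address (ID, CONTACT_ID, NAME) VALUES (:addr_id, :new_id, 'Home');

    The unit of work can defer filling in Address.CONTACT_ID until the Contact insert has been flushed inside the transaction, which keeps it to one extra round trip at most.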

    Read the article

  • Using NULLs in matchup table

    - by TomWilsonFL
    I am working on the accounting portion of a reservation system (think limo company). In the system there are multiple objects that can either be paid or submit a payment. I am tracking all of these "transactions" in three tables called tx, tx_cc, and tx_ch. tx generates a new tx_id (for transaction ID) and keeps the information about amount, validity, etc. tx_cc and tx_ch keep the information about the credit card or check used, respectively, which link to other tables (credit_card and bank_account among others). This seems fairly normalized to me, no? Now here is my problem: the payment transaction can take place for a myriad of reasons. Either a reservation is being paid for, a travel agent who booked a reservation is being paid, a driver is being paid, etc. This results in multiple tables, one for each of the entities: agent_tx, driver_tx, reservation_tx, etc. They look like this:

      CREATE TABLE IF NOT EXISTS `driver_tx` (
        `tx_id` int(10) unsigned zerofill NOT NULL,
        `driver_id` int(11) NOT NULL,
        `reservation_id` int(11) default NULL,
        `reservation_item_id` int(11) default NULL,
        PRIMARY KEY (`tx_id`)
      ) ENGINE=InnoDB DEFAULT CHARSET=utf8;

    Now this transaction is for a driver, but it could be applied to an individual item on the reservation or to the entire reservation overall. Therefore I demand either reservation_id OR reservation_item_id to be null. In the future there may be other things which a driver is paid for, which I would also add to this table, defaulting to null. What is the rule on this? Opinion? Obviously I could break this out into MANY three-column tables, but the amount of OUTER JOINing needed seems outrageous. Your input is appreciated. Peace, Tom
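    If the intent is "at most one of reservation_id and reservation_item_id may be set", that rule can live in the table definition instead of in convention. A hedged sketch (note the big assumption: MySQL only enforces CHECK constraints from 8.0.16; the 5.x-era server implied by this schema would parse the constraint but silently ignore it, so a trigger or application-level check would be needed there):

      CREATE TABLE IF NOT EXISTS `driver_tx` (
        `tx_id` int(10) unsigned zerofill NOT NULL,
        `driver_id` int(11) NOT NULL,
        `reservation_id` int(11) DEFAULT NULL,
        `reservation_item_id` int(11) DEFAULT NULL,
        PRIMARY KEY (`tx_id`),
        -- at most one payment target per row; compare the two IS NULL results
        -- with <> instead of OR if exactly one target must always be set
        CONSTRAINT chk_one_target
          CHECK (reservation_id IS NULL OR reservation_item_id IS NULL)
      ) ENGINE=InnoDB DEFAULT CHARSET=utf8;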

    Read the article

  • Database design: Calculating the Account Balance

    - by 001
    How do I design the database to calculate the account balance? 1) Currently I calculate the account balance from the transaction table. In my transaction table I have "description", "amount", etc. I add up all the "amount" values and that works out the user's account balance. I showed this to my friend and he said that is not a good solution: when my database grows, it's going to slow down. He said I should create a separate table to store the calculated account balance. If I did this, I would have to maintain two tables, and it's risky - the account balance table could go out of sync. Any suggestions? EDIT - OPTION 2: should I add an extra "Balance" column to my transaction table? Then I would not need to go through many rows of data to perform the calculation. Example: John buys $100 of credit, is debited $60, then adds $200 of credit:

      Amount  $100, Balance  $100
      Amount  -$60, Balance   $40
      Amount  $200, Balance  $240
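    For scale, a hedged sketch of how each option looks in SQL (generic syntax; table and column names are assumptions based on the description). Option 1 stays a single indexed aggregate for a long time, and a per-row running balance can be derived at query time rather than stored:

      -- Option 1: current balance straight from the transaction table
      -- (an index on account_id keeps this fast as the table grows)
      SELECT SUM(amount) AS balance
      FROM   transactions
      WHERE  account_id = 42;

      -- Option 2 without a stored column: running balance via a window function
      -- (PostgreSQL, SQL Server, Oracle; MySQL only from 8.0)
      SELECT id, description, amount,
             SUM(amount) OVER (ORDER BY id) AS running_balance
      FROM   transactions
      WHERE  account_id = 42
      ORDER  BY id;

    If a denormalized balance table is still wanted later, updating it in the same transaction that inserts the transaction row keeps the two from drifting out of sync.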

    Read the article

  • Sql Compact and __sysobjects

    - by Scott Wisniewski
    I have some SQL Compact queries that create tables inside a transaction. This is mainly because I need to simulate temporary tables, which SQL Compact does not support. I do this by creating a real table, and then dropping it at the end of the transaction. This mostly works. Sometimes, however, when creating the tables SQL Compact will try to acquire PAGE-level locks on the __sysobjects table. If there are several concurrent queries running that create "temp" tables, the attempt to acquire a page lock can result in a deadlock followed by a SqlLockTimeout exception. For normal tables I could fix this using a "with (rowlock)" hint. However, because I'm not the one writing the query that inserts into __sysobjects (SQL Server does that in response to "create table"), I can't do this. Does anyone know of a way I could get around this? I've thought about pulling the table creation out of the transaction, but that opens up the possibility of phantom temporary tables that I'd then need to clean up regularly. Ideally I'd like to avoid that if possible.

    Read the article

  • Combining two-part SQL query into one query

    - by user332523
    Hello, I have a SQL query that I'm currently solving by doing two queries. I am wondering if there is a way to do it in a single query that makes it more efficient. Consider two tables, Transactions and Transaction_Entries, each defined below:

      Transactions
        - id
        - reference_number (varchar)

      Transaction_Entries
        - id
        - account_id
        - transaction_id (references Transactions table)

    Notes: there are multiple transaction entries per transaction. Some transactions are related, and will have the same reference_number string. To get all transaction entries for account X, I would do:

      SELECT E.*, T.reference_number
      FROM Transaction_Entries E
      JOIN Transactions T ON (E.transaction_id = T.id)
      WHERE E.account_id = X

    The next part is the hard part. I want to find all related transactions, regardless of the account id. First I make a list of all the unique reference numbers I found in the previous result set. Then for each one, I can query all the transactions that have that reference number. Assume that I hold all the rows from the previous query in PreviousResultSet:

      UniqueReferenceNumbers = GetUniqueReferenceNumbers(PreviousResultSet)  // in Java
      foreach R in UniqueReferenceNumbers                                    // in Java
          SELECT * FROM Transaction_Entries
          WHERE transaction_id IN (SELECT id FROM Transactions WHERE reference_number = R)

    Any suggestions how I can put this into a single efficient query?
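    A hedged single-query sketch (schema exactly as described above): join Transactions to itself on reference_number, so the query first finds the transactions that account X touches and then pulls every entry whose transaction shares a reference number with one of them. DISTINCT guards against duplicates when account X has several entries under the same reference number:

      SELECT DISTINCT e2.*, t2.reference_number
      FROM Transaction_Entries e1
      JOIN Transactions t1 ON t1.id = e1.transaction_id
      JOIN Transactions t2 ON t2.reference_number = t1.reference_number
      JOIN Transaction_Entries e2 ON e2.transaction_id = t2.id
      WHERE e1.account_id = :account_x;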

    Read the article

  • Ant build script executing <sql> task using Java code

    - by Jay
    Any idea why none of the debugging comments are printed after executing the Ant build script's SQL task via Java code? The Java class used to execute the SQL in the build script is:

      public class AntRunnerTest {
          private Project project;

          public void executeTask(String taskName) {
              try {
                  project = new Project();
                  project.init();
                  project.setBasedir(new String("."));
                  ProjectHelper helper = ProjectHelper.getProjectHelper();
                  project.addReference("ant.projectHelper", helper);
                  helper.parse(project, new File("build-copy.xml"));
                  System.out.println("Before");
                  project.executeTarget(taskName);
                  System.out.println("After");
              } catch(Exception ex) {
                  System.out.println(ex.getMessage());
              }
          }

          public static void main(String args[]) {
              try {
                  AntRunnerTest newInst = new AntRunnerTest();
                  newInst.executeTask("sql");
              } catch(Exception e) {
                  System.out.println(""+e);
              }
          }
      }

    I don't see the debug string "After" getting printed in the console. I noticed this issue only when I try to execute a sql task using Java code. The Ant script has the following simple transaction tag in it:

      <transaction>
        <![CDATA[
        select now()
        ]]>
      </transaction>

    Any thoughts? Thanks in advance.

    Read the article

  • NHibernate + Fluent long startup time

    - by PaRa
    Hi all, I am new to NHibernate. Performing the test below took 11.2 seconds (debug mode); I am seeing this large startup time in all my tests (basically creating the first session takes a ton of time). Setup: Windows 2003 SP2 / Oracle 10gR2 latest CPU / ODP.net 2.111.7.20 / FNH 1.0.0.636 / NHibernate 2.1.2.4000 / NUnit 2.5.2.9222 / VS2008 SP1.

      using System;
      using System.Collections;
      using System.Data;
      using System.Globalization;
      using System.IO;
      using System.Text;
      using NUnit.Framework;
      using System.Collections.Generic;
      using System.Data.Common;
      using NHibernate;
      using log4net.Config;
      using System.Configuration;
      using FluentNHibernate;

      [Test()]
      public void GetEmailById()
      {
          Email result;
          using (EmailRepository repository = new EmailRepository())
          {
              result = repository.GetById(1111);
          }
          Assert.IsTrue(result != null);
      }

      public class EmailRepository : RepositoryBase
      {
          public EmailRepository() : base() { }
      }

    In my RepositoryBase:

      public T GetById(object id)
      {
          using (var session = sessionFactory.OpenSession())
          using (var transaction = session.BeginTransaction())
          {
              try
              {
                  T returnVal = session.Get<T>(id);
                  transaction.Commit();
                  return returnVal;
              }
              catch (HibernateException ex)
              {
                  // Logging here
                  transaction.Rollback();
                  return null;
              }
          }
      }

    The query time is very small. The resulting entity is really small. Subsequent queries are fine. It seems to be the act of getting the first session started. Has anyone else seen something similar?

    Read the article

  • Mapping one column in a table to multiple tables

    - by user1721814
    I am working on a new product development project, creating a WMS system. I have done this in the past using ASP, VB, and other techniques where we did not hard-code the mapping. But now I am working on it using MVC and Entity Framework and I am stumped. How can I map one column in a transaction table to a column in multiple tables? I have a transaction table:

      trans: Transid, orderref, TType, productid, qty, ... (more columns)

    The orderref column will hold either Receiptkey, orderkey, movementkey, or adjustmentkey, and the TType column tells me which type of transaction I am dealing with; based on that I would know which table to link to further. How can I achieve this in Entity Framework? This is the most important step. I have done this many times in other languages, but with EF I am stuck. Please help. I have checked a lot online but have not found an answer. I am new to MVC and the Entity Framework architecture. Any guidance would be highly appreciated. Ranjit

    Read the article

  • Concurrent usage of table causing issues

    - by Sven
    Hello In our current project we are interfacing with a third party data provider. They need to insert data in a table of ours. This inserting can be frequent every 1 min, every 5min, every 30, depends on the amount of new data they need to provide. The use the isolation level read committed. On our end we have an application, windows service, that calls a webservice every 2 minutes to see if there is new data in this table. Our isolation level is repeatable read. We retrieve the records and update a column on these rows. Now the problem is that sometimes this third party provider needs to insert a lot of data, let's say 5000 records. They do this per transaction (5rows per transaction), but they don't close the connection. They do one transaction and then the next untill all records are inserted. This caused issues for our process, we receive a timeout. If this goes on for a long time the database get's completely unstable. For instance, they maybe stopped, but the table somehow still stays unavailable. When I try to do a select on the table, I get several records but at a certain moment I don't get any response anymore. It just says retrieving data but nothing comes anymore until I get a timeout exception. Only solution is to restart the database and then I see the other records. How can we solve this. What is the ideal isolation level setting in this scenario?

    Read the article

  • Under what circumstances will an entity be able to lazily load its relationships in JPA

    - by Mowgli
    Assume a Java EE container is being used with JPA persistence and JTA transaction management, where EJB and WAR packages are inside an EAR package. Say an entity with lazy-load relationships has just been returned from a JPQL search, such as the getBoats method below:

      @Stateless
      public class BoatFacade implements BoatFacadeRemote, BoatFacadeLocal {
          @PersistenceContext(unitName = "boats")
          private EntityManager em;

          @Override
          public List<Boat> getBoats(Collection<Integer> boatIDs) {
              if (boatIDs.isEmpty()) {
                  return Collections.<Boat>emptyList();
              }
              Query query = em.createNamedQuery("getAllBoats");
              query.setParameter("boatID", boatIDs);
              List<Boat> boats = query.getResultList();
              return boats;
          }
      }

    The entity:

      @Entity
      @NamedQuery(
          name = "getAllBoats",
          query = "Select b from Boat b where b.id in :boatID")
      public class Boat {
          @Id
          private long id;

          @OneToOne(fetch = FetchType.LAZY)
          private Gun mainGun;

          public Gun getMainGun() {
              return mainGun;
          }
      }

    Where will its lazy-load relationships be loadable (assuming the same stateless request)?

      * Same JAR: a method in the same EJB; a method in another EJB; a method in a POJO in the same EJB JAR
      * Same EAR, but outside the EJB JAR: a method in a web tier managed bean; a method in a web tier POJO
      * Different EAR: a method in a different EAR which receives the entity through RMI

    What is it that restricts the scope - for example, the JPA transaction, persistence context or JTA transaction?

    Read the article

  • How to show useful error messages from a database error callback in Phonegap?

    - by Magnus Smith
    Using PhoneGap you can set a function to be called back if the whole database transaction or an individual SQL statement errors. I'd like to know how to get more information about the error. I have one generic error-handling function, and lots of different SELECTs or INSERTs that may trigger it. How can I tell which one was at fault? It is not always obvious from the error message. My code so far is:

      function get_rows(tx) {
          tx.executeSql("SELECT * FROM Blah", [], lovely_success, statement_error);
      }

      function add_row(tx) {
          tx.executeSql("INSERT INTO Blah (1, 2, 3)", [], carry_on, statement_error);
      }

      function statement_error(tx, error) {
          alert(error.code + ' / ' + error.message);
      }

    From various examples I see the error callback will be passed a transaction object and an error object. I read that .code can have the following values:

      UNKNOWN_ERR    = 0
      DATABASE_ERR   = 1
      VERSION_ERR    = 2
      TOO_LARGE_ERR  = 3
      QUOTA_ERR      = 4
      SYNTAX_ERR     = 5
      CONSTRAINT_ERR = 6
      TIMEOUT_ERR    = 7

    Are there any other properties/methods of the error object? What are the properties/methods of the transaction object at this point? I can't seem to find a good online reference for this - certainly not on the PhoneGap website!

    Read the article

  • UPDATE Table SET Field

    - by davlyo
    This is my very first post! Bear with me. I have an UPDATE statement and I am trying to understand how SQL Server handles it:

      UPDATE a
      SET a.vField3 = b.vField3
      FROM tableName a
      INNER JOIN tableName b
        ON a.vField1 = b.vField1
       AND b.nField2 = a.nField2 - 1

    This is my query in its simplest form. vField1 is a varchar, nField2 is an int (autonumber), and vField3 is a varchar. I have left the WHERE clause out, so understand there is logic that otherwise makes it a necessity. Say vField1 is a customer number and that customer has 3 records; the value in nField2 is 1, 2, and 3 consecutively, and vField3 is a status. When the update comes to a.nField2 = 1 there is no a.nField2 - 1, so it continues. When the update comes to a.nField2 = 2, b.nField2 = 1. When the update comes to a.nField2 = 3, b.nField2 = 2. So when the update is on a.nField2 = 2, alias b reflects the line prior (b.nField2 = 1) and it SETs the varchar value of a.vField3 = b.vField3. When the update is on a.nField2 = 3, alias b reflects the line prior (b.nField2 = 2) and it (should) SET the varchar value of a.vField3 = b.vField3. When the process is complete, the second of the three records looks as expected: the value in vField3 of the second record reflects the value in vField3 from the first record. However, vField3 of the third record does not reflect the value in vField3 from the second record. I think this demonstrates that SQL Server may be producing a transaction of some sort and then an update. Question: how can I get the DB to update after each step so I can reference the values generated by the previous step?
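    For context, a single UPDATE statement reads the table as it was when the statement began, so row 3 can never see the value row 2 receives within the same statement; that behaviour is by design rather than a bug. If the value really has to cascade row by row, one hedged workaround is to run the update once per nField2 value in order (a T-SQL sketch; it ignores the WHERE logic omitted from the question and assumes nField2 starts at 1 per customer):

      DECLARE @n INT, @max INT;
      SET @n = 2;
      SELECT @max = MAX(nField2) FROM tableName;

      WHILE @n <= @max
      BEGIN
          UPDATE a
          SET a.vField3 = b.vField3
          FROM tableName a
          INNER JOIN tableName b
            ON a.vField1 = b.vField1
           AND b.nField2 = a.nField2 - 1
          WHERE a.nField2 = @n;   -- each pass sees the values written by the previous pass

          SET @n = @n + 1;
      END;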

    Read the article

  • Unity: how to apply programmatic changes to the Terrain SplatPrototype?

    - by Shivan Dragon
    I have a script to which I add a Terrain object (I drag and drop the terrain onto the public Terrain field). The Terrain is already set up in Unity to have 2 PaintTextures: the first is a square (set up with a tile size so that it forms a checkered pattern) and the second one is a grass image. Also, the Target Strength of the first PaintTexture is lowered so that the checkered pattern also reveals some of the grass underneath. Now I want, at run time, to change the Tile Size of the first PaintTexture, i.e. have more or fewer checkers depending on various run-time conditions. I've looked through Unity's documentation and I've seen you have the Terrain.terrainData.splatPrototypes array which allows you to change this. Also, there's a RefreshPrototypes() method on the terrainData object and a Flush() method on the Terrain object. So I made a script like this:

      public class AStarTerrain : MonoBehaviour {
          public int aStarCellColumns, aStarCellRows;
          public GameObject aStarCellHighlightPrefab;
          public GameObject aStarPathMarkerPrefab;
          public GameObject utilityRobotPrefab;
          public Terrain aStarTerrain;

          void Start () {
              // I've also tried NOT drag and dropping the Terrain on the public field
              // and instead just using the commented line below, but I get the same results
              //aStarTerrain = this.GetComponents<Terrain>()[0];
              Debug.Log ("Got terrain "+aStarTerrain.name);
              SplatPrototype[] splatPrototypes = aStarTerrain.terrainData.splatPrototypes;
              Debug.Log("Terrain has "+splatPrototypes.Length+" splat prototypes");
              SplatPrototype aStarCellSplat = splatPrototypes[0];
              Debug.Log("Re-tyling splat prototype "+aStarCellSplat.texture.name);
              aStarCellSplat.tileSize = new Vector2(2000,2000);
              Debug.Log("Tyling is now "+aStarCellSplat.tileSize.x+"/"+aStarCellSplat.tileSize.y);
              aStarTerrain.terrainData.RefreshPrototypes();
              aStarTerrain.Flush();
          }
          //...

    Problem is, nothing gets changed; the checker map is not re-tiled. The console output correctly tells me that I've got the Terrain object with the right name, that it has the right number of splat prototypes, and that I'm modifying the tileSize on the SplatPrototype object corresponding to the right texture. It also tells me the value has changed. But nothing gets updated in the actual graphical view. So please, what am I missing?

    Read the article
