Search Results

Search found 2993 results on 120 pages for 'distributed transactions'.


  • What arguments to use to explain why a SQL DB is far better than a flat file

    - by jamone
    The higher-ups in my company were told by good friends that flat files are the way to go, and that we should switch from MS SQL Server to them for everything we do. We have over 300 servers and hundreds of different databases. Just the few I'm involved with hold 10 billion records, with upwards of 100k new records a day and who knows how many updates. A couple of others and I need to come up with a response explaining why we shouldn't do this. Most of our stuff is ASP.NET with some legacy ASP. Our idea is a simple console app that tests/times the same interactions against a flat file (stored on the network) and SQL over the network: large inserts, searches, updates, etc., along with things like random network disconnects. This would show them how bad flat files can be, especially when you are dealing with millions of records. What things should I use in my response? What should I do with my demo code to illustrate this? My short list so far:
    - Security
    - Concurrent access
    - Performance with large amounts of data
    - Amount of time to do such a massive rewrite/switch
    - Lack of transactions
    - PITA to map relational data to flat files
    I fear that this will be a great post on The Daily WTF someday if I can't stop it now.
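
    For the demo, a minimal sketch along these lines (file name and record format are made up) makes the concurrency point directly -- parallel appends to a shared flat file collide, where SQL Server would simply queue the writes:

        // Hypothetical demo: many parallel writers appending to one flat file.
        // File.AppendAllText opens the file exclusively, so concurrent writers
        // throw IOException -- exactly the failure SQL Server's locking prevents.
        using System;
        using System.IO;
        using System.Threading;
        using System.Threading.Tasks;

        class FlatFileConcurrencyDemo
        {
            static void Main()
            {
                const string path = "shared.dat";
                int failures = 0;

                Parallel.For(0, 1000, i =>
                {
                    try
                    {
                        File.AppendAllText(path, "record " + i + Environment.NewLine);
                    }
                    catch (IOException)
                    {
                        Interlocked.Increment(ref failures);
                    }
                });

                Console.WriteLine("Concurrent append failures: " + failures);
            }
        }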


  • Reading/Writing DataTables to and from an OleDb Database LINQ

    - by jsmith
    My current project is to take information from an OleDb database and .CSV files and place it all into a larger OleDb database. I have currently read all the information I need from both the .CSV files and the OleDb database into DataTables. Where it is getting hairy is writing all of the information back to another OleDb database. Right now my current method is to do something like this:

        OleDbTransaction myTransaction = null;
        try
        {
            OleDbConnection conn = new OleDbConnection("PROVIDER=Microsoft.Jet.OLEDB.4.0;" +
                "Data Source=" + Database);
            conn.Open();
            OleDbCommand command = conn.CreateCommand();
            myTransaction = conn.BeginTransaction();
            command.Transaction = myTransaction;
            string strSQL = "Insert into TABLE (FirstName, LastName) values ('" +
                FirstName + "', '" + LastName + "')";
            command.CommandType = CommandType.Text;
            command.CommandText = strSQL;
            command.ExecuteNonQuery();
            myTransaction.Commit();
            conn.Close();
        }
        catch (Exception)
        {
            // If invalid data is entered, roll back the transaction
            myTransaction.Rollback();
        }

    Of course, this is very basic and I'm using an SQL command to commit my transactions to a connection. My problem is that I could do this, but I have about 200 fields that need to be inserted over several tables. I'm willing to do the legwork if that's the only way to go, but I feel like there is an easier method. Is there anything in LINQ that could help me out with this?
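
    One hedged alternative sketch (table and column names are placeholders): let an OleDbDataAdapter and OleDbCommandBuilder derive the insert commands from a SELECT, so 200 fields need no hand-written SQL and the values are parameterized for free:

        using System.Data;
        using System.Data.OleDb;

        static void WriteBack(DataTable table, string connectionString)
        {
            using (var conn = new OleDbConnection(connectionString))
            {
                conn.Open();
                // the SELECT defines the target columns; the builder generates
                // the matching INSERT/UPDATE/DELETE commands automatically
                var adapter = new OleDbDataAdapter("SELECT FirstName, LastName FROM [TABLE]", conn);
                var builder = new OleDbCommandBuilder(adapter);
                adapter.Update(table);  // rows with RowState.Added are inserted
            }
        }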


  • Hibernate 3.5.0 causes extreme performance problems

    - by user303396
    I've recently updated from Hibernate 3.3.1.GA to Hibernate 3.5.0 and I'm having a lot of performance issues. As a test, I added around 8000 entities to my DB (which in turn causes other entities to be saved). These entities are saved in batches of 20 so that the transactions don't get too large, for performance reasons. With Hibernate 3.3.1.GA, all 8000 entities are saved in about 3 minutes. With Hibernate 3.5.0 it starts out slower than 3.3.1, and it gets slower and slower. At around 4,000 entities it sometimes takes 5 minutes just to save a batch of 20. If I then go to a MySQL console and manually type in an insert statement from the MySQL general query log, half of them run perfectly in 0.00 seconds, and half of them take a long time (maybe 40 seconds) or time out with "ERROR 1205 (HY000): Lock wait timeout exceeded; try restarting transaction" from MySQL. Has something changed in Hibernate's transaction management in version 3.5.0 that I should be aware of? The ONLY thing I changed to trigger these unusable performance issues was replacing the following Hibernate 3.3.1.GA jar files: com.springsource.org.hibernate-3.3.1.GA.jar, com.springsource.org.hibernate.annotations-3.4.0.GA.jar, com.springsource.org.hibernate.annotations.common-3.3.0.ga.jar, com.springsource.javassist-3.3.0.ga.jar with the new Hibernate 3.5.0 release hibernate3.jar and javassist-3.9.0.GA.jar. Thanks.
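
    One thing worth ruling out (a hedged sketch, not from the original post): without a periodic flush/clear, Hibernate's first-level cache grows with every save and each flush re-walks all managed entities, which matches the "slower and slower" symptom:

        import java.util.List;
        import org.hibernate.Session;
        import org.hibernate.SessionFactory;
        import org.hibernate.Transaction;

        class BatchSaver {
            static void saveAll(SessionFactory sessionFactory, List<Object> entities) {
                Session session = sessionFactory.openSession();
                Transaction tx = session.beginTransaction();
                for (int i = 0; i < entities.size(); i++) {
                    session.save(entities.get(i));
                    if (i % 20 == 0) {       // same batch size as in the question
                        session.flush();     // push pending inserts to MySQL
                        session.clear();     // evict entities so the session cache stays small
                    }
                }
                tx.commit();
                session.close();
            }
        }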


  • Disadvantages of MySQL Row Locking

    - by Nyxynyx
    I am using row locking (transactions) in MySQL to create a job queue. The engine used is InnoDB. The SQL is:

        START TRANSACTION;
        SELECT * FROM mytable WHERE status IS NULL ORDER BY timestamp DESC LIMIT 1 FOR UPDATE;
        UPDATE mytable SET status = 1;
        COMMIT;

    According to this webpage:

        The problem with SELECT FOR UPDATE is that it usually creates a single synchronization point for all of the worker processes, and you see a lot of processes waiting for the locks to be released with COMMIT.

    Question: Does this mean that if a second, similar query arrives before the first transaction is committed, it has to wait for the first one to finish before it can execute? If this is true, I don't understand why locking a single row (which is what I assume happens) should affect the next transaction, which would not need to read that locked row. Additionally, can this problem be solved (while still getting the effect row locking gives a job queue) by using a single UPDATE instead of the transaction?

        UPDATE mytable SET status = 1 WHERE status IS NULL ORDER BY timestamp DESC LIMIT 1
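
    For what it's worth, a hedged sketch of a common lock-free claim pattern (the LAST_INSERT_ID(expr) trick is standard MySQL; column names follow the question): the single-statement UPDATE acquires and releases its row lock on its own, so workers never sit inside a long transaction, and each worker can still find out which row it claimed:

        UPDATE mytable
           SET status = 1,
               id = LAST_INSERT_ID(id)   -- leaves id unchanged, but remembers it
         WHERE status IS NULL
         ORDER BY timestamp DESC
         LIMIT 1;

        SELECT LAST_INSERT_ID();          -- the id of the row this worker claimed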


  • What arguments to use to explain why SQL Server is far better than a flat file

    - by jamone
    The higher-ups in my company were told by good friends that flat files are the way to go, and that we should switch from SQL Server to them for everything we do. We have over 300 servers and hundreds of different databases. Just the few I'm involved with hold 10 billion records, with upwards of 100k new records a day and who knows how many updates. A couple of others and I need to come up with a response explaining why we shouldn't do this. Most of our stuff is ASP.NET with some legacy ASP. Our idea is a simple console app that tests/times the same interactions against a flat file (stored on the network) and SQL over the network: large inserts, searches, updates, etc., along with things like random network disconnects. This would show them how bad flat files can be, especially when you are dealing with millions of records. What things should I use in my response? What should I do with my demo code to illustrate this? My short list so far:
    - Security
    - Concurrent access
    - Performance with large amounts of data
    - Amount of time to do such a massive rewrite/switch
    - Lack of transactions
    - PITA to map relational data to flat files
    - NTFS doesn't handle tons of files in a directory well
    I fear that this will be a great post on The Daily WTF someday if I can't stop it now.
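
    For the "lack of transactions" bullet, a small hedged T-SQL sketch (table and values are made up) is easy to demo: an all-or-nothing change is trivial in SQL Server and has no flat-file equivalent:

        BEGIN TRANSACTION;
        UPDATE Accounts SET Balance = Balance - 100 WHERE Id = 1;
        UPDATE Accounts SET Balance = Balance + 100 WHERE Id = 2;
        -- kill the connection here (the "random network disconnect" test) and
        -- SQL Server rolls the whole transfer back; a flat file is left half-written
        COMMIT TRANSACTION;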


  • SQL Server becomes slow after restart

    - by Tobi DM
    We use SQL Server 2005 on a Windows Server 2008 machine. The server has 48 GB RAM, and SQL Server is configured to use 40 GB of it. There is only one database hosted (about 70 GB). The only app besides SQL Server is our app server, which connects the clients to the database. Now we encounter the following problem: after a restart of the server, performance is great. SQL Server grabs the 40 GB RAM it is allowed and runs fast as hell. But after about 4 weeks the system becomes slower and slower. The execution time of statements (seen in the profiler) rises slowly, but I cannot see that anything is going wrong on the server:
    - CPU usage is at about 20%
    - I/O also seems to be no problem
    - Process Monitor does not show any strange apps or anything like that
    - The event log has no interesting messages
    - No open transactions or blocking to be seen
    We have already tried the following things without effect:
    - Dropped the caches using DBCC FREEPROCCACHE, DBCC FREESYSTEMCACHE('ALL') and DBCC DROPCLEANBUFFERS
    - Restarted the app server we are using
    - Restarted the SQL Server service
    Nothing helped except restarting the whole server. Any ideas?
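
    When it next degrades, a hedged diagnostic sketch like the following (both DMVs exist in SQL Server 2005) can narrow down whether the time is going to waits or to a ballooning memory clerk; the interesting part is comparing the output against a freshly restarted baseline:

        SELECT TOP 10 wait_type, wait_time_ms, waiting_tasks_count
        FROM sys.dm_os_wait_stats
        ORDER BY wait_time_ms DESC;

        SELECT TOP 10 type, SUM(single_pages_kb + multi_pages_kb) AS total_kb
        FROM sys.dm_os_memory_clerks
        GROUP BY type
        ORDER BY total_kb DESC;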


  • Messages not forwarded to error queue when exception is thrown in handler (it works on my machine)

    - by darthjit
    We are using NServiceBus 4.0.5 with SQL Server (SQL Server 2012) as the transport. When a handler throws an exception, NSB does not retry or move the message to the error queue. Successful messages make it to the audit queue, but the failed/errored ones don't. Interestingly, all of this works on our local machines (Windows 7, SQL Server LocalDB) but not on Windows Server 2012 (SQL Server 2012). Here is the config info on the subscriber:

        <add name="NServiceBus/Transport"
             connectionString="Data Source=xxx;Initial Catalog=NServiceBus;Integrated Security=SSPI;Enlist=false;" />
        <add name="NServiceBus/Persistence"
             connectionString="Data Source=xxx;Initial Catalog=NServiceBus;Integrated Security=SSPI;Enlist=false;" />
        <MessageForwardingInCaseOfFaultConfig ErrorQueue="error" />
        <UnicastBusConfig ForwardReceivedMessagesTo="audit">
          <MessageEndpointMappings>
            <add Assembly="Services.Section.Messages" Endpoint="Services.ACL.Worker" />
          </MessageEndpointMappings>
        </UnicastBusConfig>

    And in code it is configured as follows:

        public class EndpointConfig : IConfigureThisEndpoint, AsA_Server, IWantCustomInitialization
        {
            public void Init()
            {
                IContainer container = ContainerInstanceProvider.GetContainerInstance();
                Configure.Transactions.Enable();
                Configure.With()
                    .AutofacBuilder(container)
                    .UseTransport<SqlServer>()
                    .Log4Net()
                    //.Serialization.Json()
                    .UseNHibernateSubscriptionPersister()
                    .UseNHibernateTimeoutPersister()
                    .MessageForwardingInCaseOfFault()
                    .RijndaelEncryptionService()
                    .DefiningCommandsAs(type => type.Namespace != null && type.Namespace.EndsWith("Commands"))
                    .DefiningEventsAs(type => type.Namespace != null && type.Namespace.EndsWith("Events"))
                    .UnicastBus();
            }
        }

    Any ideas on how to fix this? Here is the log info (there is a lot there; search for "error" to see the relevant parts): https://gist.github.com/ranji/7378249


  • Using NULLs in matchup table

    - by TomWilsonFL
    I am working on the accounting portion of a reservation system (think limo company). In the system there are multiple objects that can either be paid or submit a payment. I am tracking all of these "transactions" in three tables called tx, tx_cc, and tx_ch. tx generates a new tx_id (transaction ID) and keeps the information about amount, validity, etc. tx_cc and tx_ch keep the information about the credit card or check used, respectively, and link to other tables (credit_card and bank_account among others). This seems fairly normalized to me, no? Now here is my problem: the payment transaction can take place for a myriad of reasons. Either a reservation is being paid for, a travel agent that booked a reservation is being paid, a driver is being paid, etc. This results in multiple tables, one for each of the entities: agent_tx, driver_tx, reservation_tx, etc. They look like this:

        CREATE TABLE IF NOT EXISTS `driver_tx` (
          `tx_id` int(10) unsigned zerofill NOT NULL,
          `driver_id` int(11) NOT NULL,
          `reservation_id` int(11) default NULL,
          `reservation_item_id` int(11) default NULL,
          PRIMARY KEY (`tx_id`)
        ) ENGINE=InnoDB DEFAULT CHARSET=utf8;

    Now this transaction is for a driver, but it could apply to an individual item on the reservation or to the entire reservation. Therefore I require either reservation_id or reservation_item_id to be NULL. In the future there may be other things a driver is paid for, which I would also add to this table, defaulting to NULL. What is the rule on this? Opinions? Obviously I could break this out into MANY three-column tables, but the amount of OUTER JOINing needed seems outrageous. Your input is appreciated. Peace, Tom
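
    If you keep the nullable pair, a hedged sketch of stating the "at most one target" rule in the schema itself (note that MySQL of this era parses CHECK constraints but does not enforce them, so a BEFORE INSERT trigger would be needed to actually reject bad rows):

        ALTER TABLE driver_tx
          ADD CONSTRAINT chk_one_target
          CHECK (reservation_id IS NULL OR reservation_item_id IS NULL);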


  • Creating a function in Postgresql that does not return composite values

    - by celenius
    I'm learning how to write functions in PostgreSQL. I've defined a function called _tmp_myfunction() which takes an id and returns a table (I also define a composite type called _tmp_mytable):

        -- create the type to be returned
        CREATE TYPE _tmp_mytable AS (
            id integer,
            cost double precision
        );

        -- create the function which returns the query
        CREATE OR REPLACE FUNCTION _tmp_myfunction(id integer)
        RETURNS SETOF _tmp_mytable AS $$
        BEGIN
            RETURN QUERY
            SELECT id, cost
            FROM sales
            WHERE id = sales.id;
        END;
        $$ LANGUAGE plpgsql;

    This works fine when I use one id and call it using the following approach:

        SELECT * FROM _tmp_myfunction(402);

    What I would like to do is call it with a column of values instead of just one value. However, if I use the following approach I end up with all the values in one column, separated by commas:

        SELECT _tmp_myfunction(t.id) FROM transactions AS t;

    I understand that I get the same result if I use SELECT _tmp_myfunction(402); instead of SELECT * FROM _tmp_myfunction(402); but I don't know how to construct the query in such a way that I can separate out the results.
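
    A hedged sketch of the usual workaround: select the function result into a named composite column in a subquery, then expand it with (row).field, which splits each "(id,cost)" tuple back into ordinary columns:

        SELECT (r).id, (r).cost
        FROM (SELECT _tmp_myfunction(t.id) AS r FROM transactions AS t) AS sub;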


  • sqlite3 'database is locked' won't go away with retries

    - by Azarias
    I have a sqlite3 database that is accessed by a few threads (3-4). I am aware of the general limitations of sqlite3 with regard to concurrency as stated at http://www.sqlite.org/faq.html#q6, but I am convinced that is not the problem. All of the threads both read and write from this database. Whenever I do a write, I have the following construct:

        try:
            Cursor.execute(q, params)
            Connection.commit()
        except sqlite3.IntegrityError:
            Notify
        except sqlite3.OperationalError:
            print sys.exc_info()
            print("DATABASE LOCKED; sleeping for 3 seconds and trying again")
            time.sleep(3)
            Retry

    On some runs I won't even hit this block, but when I do, it never comes out of it: it keeps retrying, but I keep getting the 'database is locked' error from exc_info. If I understand the reader/writer lock usage correctly, some amount of waiting should help with the contention. This sounds like deadlock, but I do not use any transactions in my code, and every SELECT or INSERT is a simple one-off. Some threads, however, keep the same connection when they do their operations (which include a mix of SELECTs and INSERTs and other modifiers). I would appreciate it if you could shed some light on this, and on ways to fix it (besides using a different database engine).
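
    One hedged detail worth checking: Python's sqlite3 module implicitly opens a transaction before any INSERT/UPDATE and holds the write lock until commit(), so a long-lived connection that did DML without committing holds the database locked indefinitely, and no amount of sleeping in other threads helps. A sketch of the short-lived-connection shape that avoids that (table name is made up):

        import sqlite3

        def insert_row(db_path, value):
            # each write uses a short-lived connection; timeout makes sqlite
            # itself retry while another connection holds the write lock
            conn = sqlite3.connect(db_path, timeout=10.0)
            try:
                with conn:  # commits on success, rolls back on exception
                    conn.execute("INSERT INTO items (value) VALUES (?)", (value,))
            finally:
                conn.close()  # nothing lingers to hold the lock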


  • Treeview dynamically populated

    - by Laziale
    Hello everyone. I have a treeview control where I want to put uploaded files on the server, and I want to create the nodes and child nodes dynamically from the database. I am using this query to get the data from the DB:

        SELECT c.Category, d.DocumentName
        FROM Categories c
        INNER JOIN DocumentUserFile d ON c.ID = d.CategoryId
        WHERE d.UserId = '9rge333a-91b5-4521-b3e6-dfb49b45237c'

    The result of that query is:

        Agendas    transactions.pdf
        Minutes    accounts.pdf

    I want to have the treeview sorted that way too. I am trying with this piece of code:

        TreeNode tn = new TreeNode();
        TreeNode tnSub = new TreeNode();
        foreach (DataRow dt in tblTreeView.Rows)
        {
            tn.Text = dt[0].ToString();
            tn.Value = dt[0].ToString();
            tnSub.Text = dt[1].ToString();
            tnSub.NavigateUrl = "../downloading.aspx?file=" + dt[1].ToString() + "&user=" + userID;
            tn.ChildNodes.Add(tnSub);
            tvDocuments.Nodes.Add(tn);
        }

    I get the treeview populated nicely for the first category and the document under it, but I can't get it to work when I want to show more documents under that category, or (more complicated) a new category beneath the first one with documents of its own. How can I solve this? I appreciate the answers a lot. Thanks, Laziale
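
    A hedged sketch of one likely fix (control and column names follow the question): because the two TreeNode objects are created once outside the loop, every iteration overwrites the same pair; allocating nodes inside the loop and reusing one parent per category gives one branch per category with all of its documents:

        var categories = new Dictionary<string, TreeNode>();
        foreach (DataRow row in tblTreeView.Rows)
        {
            string category = row[0].ToString();
            TreeNode parent;
            if (!categories.TryGetValue(category, out parent))
            {
                parent = new TreeNode(category, category);  // text, value
                categories[category] = parent;
                tvDocuments.Nodes.Add(parent);
            }
            TreeNode child = new TreeNode(row[1].ToString());
            child.NavigateUrl = "../downloading.aspx?file=" + row[1] + "&user=" + userID;
            parent.ChildNodes.Add(child);
        }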


  • Intermittent Issue Writing to Google Appengine Datastore

    - by user242153
    Hi, I have a functioning app and have recently had intermittent problems writing to the datastore. I did not make any relevant code changes, yet in the last few days my attempts to write to the datastore sometimes work and sometimes don't. I am trying to save an object that is in a many-to-one relationship with an existing persisted parent. The logic works like this:
    1) The parent is pulled from the datastore.
    2) The child is created/instantiated using its constructor.
    3) Parent.addSingleChild(child); // "addSingleChild" adds the argument to the collection of children
    4) child.setParent(Parent); // sets the Parent object on the parent field
    I am using transactions as explained in the documentation, ending with finally { if (tx.isActive()) { tx.rollback(); } }. When the servlet is called, the parent is fetched from the datastore and the child object is created and added to the many-to-one mapping of the pre-existing parent. The child should automatically be persisted, since the parent is already persistent and the child is added to the collection of children that map to the parent. And it worked this way in the past. However, to be sure, I did add a pm.makePersistent(child). That doesn't seem to help; I still have the intermittent problem. Any suggestions would be appreciated, and if you need to see the actual code I can post it. Thanks
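
    For reference, a hedged sketch of the shape described above (class, field, and key names are assumptions; PMF is the usual singleton holder from the App Engine docs). Worth checking against it: on App Engine, parent and child must be in the same entity group, and intermittent failures on a contended group often surface at commit time, so catching the failure and retrying the whole transaction is common practice:

        import javax.jdo.PersistenceManager;
        import javax.jdo.Transaction;

        public class ChildSaver {
            public static void saveChild(Object parentKey) {
                PersistenceManager pm = PMF.get().getPersistenceManager();
                Transaction tx = pm.currentTransaction();
                try {
                    tx.begin();
                    Parent parent = pm.getObjectById(Parent.class, parentKey);
                    Child child = new Child();
                    parent.addSingleChild(child);
                    child.setParent(parent);
                    pm.makePersistent(parent);  // cascades to the newly added child
                    tx.commit();
                } finally {
                    if (tx.isActive()) {
                        tx.rollback();
                    }
                    pm.close();
                }
            }
        }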


  • Filtering a PHP array containing dates into a yearly summary

    - by privateace
    I'm looking for a way to create a summary of transactions within a certain month based on the contents of a PHP array. Intended outcome (excusing layout):

        -------------------------------
        | December 2009  |  12 |
        | January 2010   |  02 |
        | February 2010  |  47 |
        | March 2010     | 108 |
        | April 2010     | 499 |
        -------------------------------

    Based on my array:

        Array
        (
            [0] => Array
                (
                    [name] => 2009-10-23
                    [values] => Array ( [0] => INzY2MTI4ZWM4OGRm )
                )
            [1] => Array
                (
                    [name] => 2009-10-26
                    [values] => Array ( [0] => IYmIzOWNmMmU3OWQz )
                )
            [2] => Array
                (
                    [name] => 2009-11-23
                    [values] => Array ( [0] => INTg4YzgxYWU1ODkx [1] => IMjhkNDZkY2FjNDhl )
                )
            [3] => Array
                (
                    [name] => 2009-11-24
                    [values] => Array ( [0] => INTg4YzgxYWU1ODkx [1] => INTg4YzgxYWU1ODkx )
                )
            [4] => Array
                (
                    [name] => 2009-12-01
                    [values] => Array ( [0] => IMWFiODk5ZjU1OTFk )
                )
        )

    I've had absolutely no luck no matter what I've tried, especially with adding months that do not contain any transactions.
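
    A hedged sketch of one way to do it ($data is assumed to hold the array above): bucket the counts by month, then walk month by month from the first date to the last so that empty months still print as zero:

        <?php
        $counts = array();
        foreach ($data as $day) {
            $month = date('F Y', strtotime($day['name']));
            $count = isset($counts[$month]) ? $counts[$month] : 0;
            $counts[$month] = $count + count($day['values']);
        }

        // walk from the first month to the last, zero-filling the gaps
        $last = end($data);
        $cur  = strtotime(date('Y-m-01', strtotime($data[0]['name'])));
        $stop = strtotime(date('Y-m-01', strtotime($last['name'])));
        while ($cur <= $stop) {
            $label = date('F Y', $cur);
            $n = isset($counts[$label]) ? $counts[$label] : 0;
            printf("| %-14s | %3d |\n", $label, $n);
            $cur = strtotime('+1 month', $cur);
        }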


  • Client-server synchronization pattern / algorithm?

    - by tm_lv
    I have a feeling that there must be client-server synchronization patterns out there, but I have totally failed to google one up. The situation is quite simple: the server is the central node that multiple clients connect to, and they manipulate the same data. Data can be split into atoms; in case of conflict, whatever is on the server has priority (to avoid dragging the user into conflict resolution). Partial synchronization is preferred due to the potentially large amounts of data. Are there any patterns / good practices for such a situation, or if you don't know of any, what would be your approach? Below is how I currently plan to solve it: parallel to the data, a modification journal will be kept, with all transactions timestamped. When a client connects, it receives all changes since its last check, in consolidated form (the server goes through the lists and removes additions that are followed by deletions, merges updates for each atom, etc.). Et voila, we are up to date. The alternative would be keeping a modification date for each record and, instead of performing deletes, just marking records as deleted. Any thoughts?
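
    A hedged sketch of the journal idea in SQL (names and types are assumptions, MySQL-flavored): each change is one timestamped row, and a client's pull is a single range query over its last sync point:

        CREATE TABLE change_journal (
            id          BIGINT AUTO_INCREMENT PRIMARY KEY,
            atom_id     BIGINT NOT NULL,
            change_type ENUM('insert','update','delete') NOT NULL,
            changed_at  TIMESTAMP NOT NULL DEFAULT CURRENT_TIMESTAMP
        );

        -- client pull (:last_sync is the client's stored sync point);
        -- consolidation (drop insert+delete pairs, merge updates per atom)
        -- happens server-side before sending
        SELECT atom_id, change_type, changed_at
        FROM change_journal
        WHERE changed_at > :last_sync
        ORDER BY changed_at;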


  • What's the most DRY-appropriate way to execute an SQL command?

    - by Sean U
    I'm looking to figure out the best way to execute a database query using the least amount of boilerplate code. The method suggested in the SqlCommand documentation:

        private static void ReadOrderData(string connectionString)
        {
            string queryString = "SELECT OrderID, CustomerID FROM dbo.Orders;";
            using (SqlConnection connection = new SqlConnection(connectionString))
            {
                SqlCommand command = new SqlCommand(queryString, connection);
                connection.Open();
                SqlDataReader reader = command.ExecuteReader();
                try
                {
                    while (reader.Read())
                    {
                        Console.WriteLine(String.Format("{0}, {1}", reader[0], reader[1]));
                    }
                }
                finally
                {
                    reader.Close();
                }
            }
        }

    mostly consists of code that would have to be repeated in every method that interacts with the database. I'm already in the habit of factoring out the establishment of a connection, which yields code more like the following (I'm also modifying it so that it returns data, to make the example a bit less trivial):

        private SqlConnection CreateConnection()
        {
            var connection = new SqlConnection(_connectionString);
            connection.Open();
            return connection;
        }

        private List<int> ReadOrderData()
        {
            using (var connection = CreateConnection())
            using (var command = connection.CreateCommand())
            {
                command.CommandText = "SELECT OrderID FROM dbo.Orders;";
                using (var reader = command.ExecuteReader())
                {
                    var results = new List<int>();
                    while (reader.Read())
                        results.Add(reader.GetInt32(0));
                    return results;
                }
            }
        }

    That's an improvement, but there's still enough boilerplate to nag at me. Can this be reduced further? In particular, I'd like to do something about the first two lines of the procedure; I don't feel like the method should be in charge of creating the SqlCommand. It's a tiny piece of repetition as it is in the example, but it seems to grow if transactions are being managed manually, or timeouts are being altered, or anything like that.
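
    One hedged sketch of squeezing the rest out (names are made up, not an established API): a generic helper owns the connection/command/reader plumbing and takes the SQL plus a row-mapping delegate, so each query becomes a one-liner:

        private List<T> ExecuteQuery<T>(string sql, Func<IDataRecord, T> map)
        {
            using (var connection = CreateConnection())
            using (var command = connection.CreateCommand())
            {
                command.CommandText = sql;
                using (var reader = command.ExecuteReader())
                {
                    var results = new List<T>();
                    while (reader.Read())
                        results.Add(map(reader));
                    return results;
                }
            }
        }

        // usage: the per-query method shrinks to one line
        private List<int> ReadOrderData()
        {
            return ExecuteQuery("SELECT OrderID FROM dbo.Orders;", r => r.GetInt32(0));
        }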


  • SQL different joins not making any difference to result

    - by Chrissi
    I'm trying to write a quick (ha!) program to organise some of my financial information. What I ideally want is a query that returns all records with financial information from TableA. There should be one row for each month, but where there were no transactions in a month there is no record. I get results like this:

        SELECT Period, Year, TotalValue FROM TableA WHERE Year = '1997'

        Period  Year  TotalValue
        1       1997  298.16
        2       1997  435.25
        4       1997  338.37
        8       1997  336.07
        9       1997  578.97
        11      1997  361.23

    By joining on a table (well, a view in this instance) which just contains a field Period with values from 1 to 12, I expect to get something like this:

        SELECT p.Period, a.Year, a.TotalValue
        FROM Periods AS p
        LEFT JOIN TableA AS a ON p.Period = a.Period
        WHERE Year = '1997'

        Period  Year  TotalValue
        1       1997  298.16
        2       1997  435.25
        3       NULL  NULL
        4       1997  338.37
        5       NULL  NULL
        6       NULL  NULL
        7       NULL  NULL
        8       1997  336.07
        9       1997  578.97
        10      NULL  NULL
        11      1997  361.23
        12      NULL  NULL

    What I'm actually getting, though, is the same result no matter how I join (except CROSS JOIN, which goes nuts, but that's not what I wanted anyway; it was just to see whether different joins do anything at all). LEFT JOIN, RIGHT JOIN and INNER JOIN all fail to produce the NULL records I'm expecting. Is there something obvious I'm doing wrong in the JOIN? Does it matter that I'm joining onto a view?
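
    A hedged sketch of the usual culprit here: with the filter in the WHERE clause, rows where a.Year is NULL are discarded after the join, which silently turns the LEFT JOIN back into an INNER JOIN; moving the condition into the ON clause keeps the unmatched periods:

        SELECT p.Period, a.Year, a.TotalValue
        FROM Periods AS p
        LEFT JOIN TableA AS a
               ON p.Period = a.Period
              AND a.Year = '1997'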


  • Logic for rate approximation

    - by Rohan
    I am looking for some logic to solve the problem below. There are n transaction amounts: T1, T2, T3, ... Tn. Commission for these transactions is calculated using a rate table like this:

        if the amount is between 0 and A1  - the rate is r1
        if the amount is between A1 and A2 - the rate is r2
        if the amount is between A2 and A3 - the rate is r3
        ...
        if the amount is greater than An   - the rate is r(n+1)

    So if T1 < A1 the rate table returns r1; if A1 < T1 < A2, it returns r2. Let's say the rate table results for T1, T2 and T3 are r1, r2 and r3 respectively. The commission is then C = T1 * r1 + T2 * r2 + T3 * r3. For example, if the rate table is defined as (rates are in %):

        0 - 2500      : 1
        2501 - 5000   : 2
        5001 - 10000  : 4
        10000 or more : 6

    and T1 = 6000, T2 = 3000, T3 = 2000, then C = 6000 * 0.04 + 3000 * 0.02 + 2000 * 0.01 = 320. Now my question: can we approximate the commission if, instead of the individual values of T1, T2 and T3, we are given only their sum T = T1 + T2 + T3? In the example above, applying T (11000) to the rate table would give 6%, resulting in a commission of 660. Is there a way to approximate the commission given T instead of the individual values of T1, T2, T3?
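
    Without the split, the exact commission is unrecoverable, but it can be bracketed. A hedged sketch (brackets from the example above; it assumes the number of transactions is unknown, so T could be split arbitrarily): splitting T into many tiny pieces pushes every piece into the lowest bracket, while one lump gets T's own bracket, so because the rate is non-decreasing the true commission lies between T*r1 and T*rate(T):

        BRACKETS = [(2500, 0.01), (5000, 0.02), (10000, 0.04), (float("inf"), 0.06)]

        def rate(amount):
            for upper, r in BRACKETS:
                if amount <= upper:
                    return r

        def commission_bounds(total):
            # lower bound: everything in the cheapest bracket
            # upper bound: one single transaction of the full amount
            return total * BRACKETS[0][1], total * rate(total)

        print(commission_bounds(11000))  # (110.0, 660.0); the true value was 320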


  • ecommerce platform or from scratch? customer specific catalogs and purchase orders

    - by rafi
    I have a possible freelance job in front of me for a distributor who wants product ordering set up, but the orders are basically all P.O.s: no actual credit card or PayPal transaction. The customer is simply billed and the order archived. Customers will need to log in to this site, and each customer will have their own custom catalog of a few dozen products, set up via a control panel this distributor uses. So there will be a master catalog of over 1,000 products (perhaps browsable, but not orderable on the site), but each customer will only be able to order the products specified for their account. I know I can build this from scratch, but I figured it's worth looking into which ecommerce platforms would give me a head start. Obviously shopping cart, order history and catalog management are concepts I can reuse, but are any of the ecommerce systems out there also capable of handling custom catalogs (maybe as multi-stores?) or transactions billed to accounts without a credit card? The more I can reuse the better. I've messed with osCommerce (way back) and a little Zen Cart more recently. I've also worked on a number of totally custom ecommerce sites, but my knowledge of the open source ecommerce tools is pretty limited, and I'm trying to keep the effort as simple as I possibly can. I'm pretty flexible on the language of the platform, by the way. Thanks in advance.


  • Need some advice before starting to code my next iPhone app...

    - by Tom
    Hi! I need some advice about how I should start coding something. Here is the context: I've just finished building a CMS that manages a SQLite database. My application will pick up this database and use its content as the application's content. So far it's pretty simple. The application will have a navigation that browses through various workflows and, at the end of a workflow, shows content from the database. A consultation kind of thing, for example: Liquids - Juice - Orange Juice - Information about orange juice. For my SQLite transactions, so far I believe I'll be using FMDB; it looks like a great wrapper. Here's a simple schema from one of the tables:

        Workflow:
          id:          { type: integer(3), primary: true, autoincrement: true }
          workflow_id: { type: integer(1) }
          name:        { type: string(255) }

    That table's rows will be my navigation entries. Do you believe I should use a navigation controller? If so, how could I generate the navigation tree from it? I have a good working knowledge of Objective-C and the Foundation framework, but I've never gone too far with it, which is why I'm asking before starting off in the wrong direction :) Thanks a lot.
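
    A hedged sketch of how one level of that navigation could be loaded with FMDB (column names follow the schema above; the view-controller wiring is omitted): each row whose workflow_id matches the current node becomes one child entry, which fits a UINavigationController that pushes a new table view per level:

        FMDatabase *db = [FMDatabase databaseWithPath:dbPath];
        [db open];
        NSMutableArray *children = [NSMutableArray array];
        FMResultSet *rs = [db executeQuery:
            @"SELECT id, name FROM Workflow WHERE workflow_id = ?",
            [NSNumber numberWithInt:parentId]];
        while ([rs next]) {
            // one dictionary per child node: enough to build a table row
            [children addObject:[NSDictionary dictionaryWithObjectsAndKeys:
                [NSNumber numberWithInt:[rs intForColumn:@"id"]], @"id",
                [rs stringForColumn:@"name"], @"name",
                nil]];
        }
        [rs close];
        [db close];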


  • Parse large XML file w/ script or use BioPython API ?

    - by jeremy04
    Hey guys, this is my first question on here. I'm trying to make a local copy of the UniProtKB in SQL. The UniProtKB is 2.1 GB, and it comes in XML and in a special text format used by SwissProt. Here are my options:
    1) Use a SAX parser (XML). I chose Ruby and Nokogiri. I started writing the parser, but my initial reaction was: how would I map the XML schema to the SAX parser?
    2) BioPython. I already have BioSQL/Biopython installed, which literally created my SQL schema for me, and I was able to successfully insert one SwissProt/UniProt text file into the database. I'm running it right now (crosses fingers) on the entire 2.1 GB. Here is the code I'm running:

        from Bio import SeqIO
        from BioSQL import BioSeqDatabase
        from Bio import SwissProt

        server = BioSeqDatabase.open_database(driver="MySQLdb", user="root",
                                              passwd="", host="localhost",
                                              db="bioseqdb")
        db = server["uniprot"]
        iterator = SeqIO.parse(open("/path/to/uniprot_sprot.dat", "r"), "swiss")
        db.load(iterator)
        server.commit()

    Edit: it's now crashing because the transactions are getting locked (since the tables are InnoDB): Error Number 1205, "Lock wait timeout exceeded; try restarting transaction". I'm using MySQL version 5.1.43. Should I switch my database to PostgreSQL?
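
    A hedged tweak worth trying before switching engines (the batch size is a guess): commit in batches rather than loading all 2.1 GB inside one InnoDB transaction, so the locks are released as you go:

        from Bio import SeqIO
        from BioSQL import BioSeqDatabase

        server = BioSeqDatabase.open_database(driver="MySQLdb", user="root",
                                              passwd="", host="localhost",
                                              db="bioseqdb")
        db = server["uniprot"]

        batch, BATCH_SIZE = [], 1000
        for record in SeqIO.parse(open("/path/to/uniprot_sprot.dat"), "swiss"):
            batch.append(record)
            if len(batch) >= BATCH_SIZE:
                db.load(batch)      # load() takes any iterable of SeqRecords
                server.commit()     # releases the InnoDB locks for this batch
                batch = []
        if batch:
            db.load(batch)
            server.commit()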


  • Hibernate Search + Spring

    - by Zane
    I'm trying to integrate Hibernate Search with Spring, but I can't seem to index anything. I was able to get Hibernate Search to work without Spring, but I'm having a problem integrating it with Spring. Any help would be much appreciated. Below is my springmvc-servlet.xml:

        <bean id="transactionManager" class="org.springframework.orm.jpa.JpaTransactionManager">
            <property name="entityManagerFactory" ref="entityManagerFactory" />
        </bean>

        <bean id="entityManagerFactory" class="org.springframework.orm.jpa.LocalEntityManagerFactoryBean">
            <property name="persistenceUnitName" value="enewsclipsPersistenceUnit" />
        </bean>

    And here is my DAO class:

        @Repository
        public class SearchDaoImpl implements SearchDao {

            JpaTemplate jpaTemplate;

            @Autowired
            public SearchDaoImpl(EntityManagerFactory entityManagerFactory) {
                this.jpaTemplate = new JpaTemplate(entityManagerFactory);
            }

            @SuppressWarnings("unchecked")
            public void updateSearchIndex() {
                /* Implement the callback method */
                jpaTemplate.execute(new JpaCallback() {
                    public Object doInJpa(EntityManager em) throws PersistenceException {
                        List<Article> articles = em.createQuery("select a from Article a").getResultList();
                        FullTextEntityManager ftEm = Search.getFullTextEntityManager(em);
                        ftEm.getTransaction().begin();
                        for (Article article : articles) {
                            System.out.println("Indexing Item " + article.getTitle());
                            ftEm.index(article);
                        }
                        ftEm.getTransaction().commit();
                        return null;
                    }
                });
            }
        }

    I think it may have something to do with the transactions, but I'm not exactly sure. If you could just point me in the right direction, that would be helpful too! Thank you.


  • Are application servers necessary? Advantages of using one? (And other JEE questions)

    - by Mike
    Apologies for the long question; there seem to be other similar questions on here, but none really clear up my confusion. I'd be really grateful if someone could confirm or correct my understanding: Java Enterprise Edition is a set of APIs for building enterprise applications which take away the burden of developing the parts of the system that aren't actually features of the application you are trying to build (i.e. messaging, transactions etc.). To do this, you can use an application server, which implements these APIs. So you could use JBoss, GlassFish, WebSphere, WebLogic etc., which would provide your application with these enterprise services. However, there are many other implementations of these individual services available, such as ActiveMQ for messaging, Hibernate for persistence, OpenEJB etc. You can download these implementations as Java libraries, include them in your application, and use the services they provide in a similar way to using the services provided by an application server. So if my understanding is correct, my questions are:
    - I've read in a lot of places that application servers are necessary for JEE features like EJB, but can't you just use an implementation such as OpenEJB and not need an application server at all?
    - Are there any features an application server provides which you cannot get from another source?
    - Why would/wouldn't I choose an application server over a custom stack such as Tomcat, OpenEJB, ActiveMQ and Hibernate?
    - Is Spring a complete alternative to JEE? Does it ever require an application server, or always just a servlet container? Why would someone choose Spring over JEE?
    Any help would be much appreciated!


  • Uncommitted reads in SSIS

    - by OldBoy
    I'm trying to debug some legacy Integration Services code and really want confirmation on what I think the problem is. We have a very large data task inside a control flow container. This control flow container is set up with TransactionOption = Supported, i.e. it will 'inherit' transactions from parent containers, but none are set up here. Inside the data flow there is a call to a stored proc that writes to a table, with pseudocode something like: "if no record exists that matches these parameters, then write one". Now, the issue is that three records are being passed into this proc, all with the same parameters, so logically the first record doesn't find a match and a record is created. The second record (with the same parameters) also doesn't find a match, and another record is created. My understanding is that the first 'record' passed to the proc in the dataflow is uncommitted and therefore can't be 'read' by the second call. The upshot is that all three records create a row, when logically only the first should. In this scenario, am I right in thinking that it is the uncommitted transaction that stops the second call from seeing the first? Even setting the isolation level on the container doesn't help, because it's not being wrapped in a transaction anyway... Hope that makes sense; any advice gratefully received. Work-arounds confer god-like status on you.
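
    For reference, a hedged sketch of the usual way to make such an "insert if missing" proc safe regardless of what the surrounding package does with transactions (table, key and parameter names are made up): holding an update-range lock across the check and the insert stops two calls from both passing the existence test:

        BEGIN TRANSACTION;

        IF NOT EXISTS (SELECT 1
                       FROM dbo.Target WITH (UPDLOCK, HOLDLOCK)
                       WHERE KeyCol = @Key)
        BEGIN
            INSERT INTO dbo.Target (KeyCol, Payload)
            VALUES (@Key, @Payload);
        END

        COMMIT TRANSACTION;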


  • Efficient database access when dealing with multiple abstracted repositories

    - by Nathan Ridley
    I want to know how most people deal with the repository pattern when it involves hitting the same database multiple times (sometimes transactionally) while trying to do so efficiently, maintaining database agnosticism, and using multiple repositories together. Let's say we have repositories for three different entities: Widget, Thing and Whatsit. Each repository is abstracted via a base interface as per normal decoupling design processes, so the base interfaces would be IWidgetRepository, IThingRepository and IWhatsitRepository. Now we have our business layer or equivalent (whatever you want to call it). In this layer we have classes that access the various repositories. Often the methods in these classes need to do batch/combined operations where multiple repositories are involved. Sometimes one method may make use of another method internally, while that method can still be called independently. And what about when the operation needs to be transactional? Example:

        class Bob
        {
            private IWidgetRepository _widgetRepo;
            private IThingRepository _thingRepo;
            private IWhatsitRepository _whatsitRepo;

            public Bob(IWidgetRepository widgetRepo, IThingRepository thingRepo,
                       IWhatsitRepository whatsitRepo)
            {
                _widgetRepo = widgetRepo;
                _thingRepo = thingRepo;
                _whatsitRepo = whatsitRepo;
            }

            public void DoStuff()
            {
                _widgetRepo.StoreSomeStuff();
                _thingRepo.ReadSomeStuff();
                _whatsitRepo.SaveSomething();
            }

            public void DoOtherThing()
            {
                _widgetRepo.UpdateSomething();
                DoStuff();
            }
        }

    How do I keep my access to the database efficient and avoid a constant stream of open-close-open-close on connections, with inadvertent invocation of MSDTC and whatnot? If my database is something like SQLite, standard mechanisms like creating nested transactions are going to inherently fail, yet the business layer should not have to concern itself with such things. How do you handle such issues? Does ADO.NET provide simple mechanisms to handle this, or do most people end up wrapping their own custom bits of code around ADO.NET to solve these types of problems?
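
    A hedged sketch of one common answer, a unit of work (all names here are made up, and the factory is assumed to return an unopened connection): the business operation owns one connection and one transaction, each repository call borrows them, and there is exactly one open/close and one commit per operation, so nothing escalates to MSDTC:

        using System;
        using System.Data;

        public sealed class UnitOfWork : IDisposable
        {
            public IDbConnection Connection { get; private set; }
            public IDbTransaction Transaction { get; private set; }

            public UnitOfWork(Func<IDbConnection> connectionFactory)
            {
                Connection = connectionFactory();
                Connection.Open();
                Transaction = Connection.BeginTransaction();
            }

            public void Commit()
            {
                Transaction.Commit();
            }

            public void Dispose()
            {
                Transaction.Dispose();
                Connection.Dispose();
            }
        }

        // DoStuff then becomes (repositories accept the unit of work):
        //   using (var uow = _uowFactory())
        //   {
        //       _widgetRepo.StoreSomeStuff(uow);
        //       _thingRepo.ReadSomeStuff(uow);
        //       _whatsitRepo.SaveSomething(uow);
        //       uow.Commit();
        //   }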


  • object.valid? returns false but object.errors.full_messages is empty

    - by user549563
    Hello, I'm confused about an object that I can't save. The simplified model is:

        class Subscription < ActiveRecord::Base
          belongs_to :user, :class_name => "User", :foreign_key => "user_id"
          has_many :transactions, :class_name => "SubscriptionTransaction"

          validates_presence_of :first_name, :message => "ne peut être vide"
          validates_presence_of :last_name, :message => "ne peut être vide"
          validates_presence_of :card_number, :message => "ne peut être vide"
          validates_presence_of :card_verification, :message => "ne peut être vide"
          validates_presence_of :card_type, :message => "ne peut être vide"
          validates_presence_of :card_expires_on, :message => "ne peut être vide"

          attr_accessor :card_number, :card_verification

          validate_on_create :validate_card

          def validate_card
            unless credit_card.valid?
              credit_card.errors.full_messages.each do |message|
                errors.add_to_base message
              end
            end
          end

          def credit_card
            @credit_card ||= ActiveMerchant::Billing::CreditCard.new(
              :type               => card_type,
              :number             => card_number,
              :verification_value => card_verification,
              :month              => card_expires_on.month,
              :year               => card_expires_on.year,
              :first_name         => first_name,
              :last_name          => last_name
            )
          end
        end

    And in my subscriptions controller:

        if subscription.save
          # do something
        else
          debugger # breakpoint where I inspect subscription.errors.full_messages
          # do something else
        end

    I tried using ruby-debug with a breakpoint as above. subscription.valid? returns false, which explains why ActiveRecord doesn't allow the save, but I can't find out why the object is invalid: subscription.errors.full_messages # => []. I'm stuck; if you have any idea, thank you.
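
    A hedged debugging sketch: ActiveRecord rebuilds the errors collection every time valid? runs, so inspect it immediately after the failed save on the same object rather than after a separate valid? call, and check the ActiveMerchant object directly, since its messages are only copied onto the subscription inside validate_card:

        if subscription.save
          # ...
        else
          puts subscription.errors.full_messages.inspect
          # the card's own errors, before they are merged into the base object
          puts subscription.credit_card.errors.full_messages.inspect
        end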

