Search Results

Search found 2819 results on 113 pages for 'healthcare transaction ba'.


  • Django IntegrityError: foreign key violation upon delete

    - by Lukasz Korzybski
    I have Order and Shipment models. Shipment has a foreign key to Order:

        class Order(...):
            ...

        class Shipment(...):
            order = m.ForeignKey('Order')
            ...

    Now in one of my views I want to delete an order object along with all related objects, so I invoke order.delete(). I have Django 1.0.4 and PostgreSQL 8.4, and I use the transaction middleware, so the whole request is enclosed in a single transaction. The problem is that upon order.delete() I get:

        ...
        File "/usr/local/lib/python2.6/dist-packages/django/db/backends/__init__.py", line 28, in _commit
            return self.connection.commit()
        IntegrityError: update or delete on table "main_order" violates foreign key constraint
            "main_shipment_order_id_fkey" on table "main_shipment"
        DETAIL: Key (id)=(45) is still referenced from table "main_shipment".

    I checked in connection.queries that the proper queries are executed in the proper order. The shipment is deleted first, and after that Django executes the delete on the order row:

        {'time': '0.000', 'sql': 'DELETE FROM "main_shipment" WHERE "id" IN (17)'},
        {'time': '0.000', 'sql': 'DELETE FROM "main_order" WHERE "id" IN (45)'}

    The foreign key has ON DELETE NO ACTION (the default) and is initially deferred. I don't know why I get the foreign key constraint violation. I also tried registering a pre_delete signal and manually deleting the shipment objects before the delete on the order is called, but it resulted in the same error. I could change the ON DELETE behaviour for this key in Postgres, but that would be just a hack; I wonder if anyone has a better idea of what's going on here. There is also a small detail: my Order model inherits from a Cart model, so it actually doesn't have an id field but a cart_ptr_id, and after the DELETE on order is executed there is also a DELETE on cart, but that seems unrelated to the shipment-order problem, so I simplified it in the example.
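
    Since the constraint is INITIALLY DEFERRED, Postgres only checks it at COMMIT, which is why the failure surfaces in the middleware's _commit rather than at the DELETE itself. A minimal diagnostic sketch (not a fix; the constraint name comes from the error message): force the check to run per statement, so the traceback points at the exact DELETE that leaves a dangling reference.

        from django.db import connection

        def delete_order_with_immediate_checks(order):
            # Make the deferred FK fire at statement time for this transaction,
            # so the offending DELETE raises directly instead of failing at COMMIT.
            cursor = connection.cursor()
            cursor.execute('SET CONSTRAINTS "main_shipment_order_id_fkey" IMMEDIATE')
            order.delete()

    If the DELETE on main_order then fails immediately, some shipment row other than id 17 still references order 45, for example one reached through the inherited Cart rather than the Order.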

    Read the article

  • ClassCastException in iterating list returned by Query using Hibernate Query Language

    - by Tushar Paliwal
    I'm a beginner with Hibernate. I'm trying the simplest possible example using HQL, but it throws a ClassCastException at line 25 when I iterate the list. When I try to cast the object returned by the iterator's next() method, it produces the same problem. I could not identify the problem; kindly give me a solution.

    Employee.java:

        package one;

        import javax.persistence.Entity;
        import javax.persistence.Id;

        @Entity
        public class Employee {
            @Id
            private Long id;
            private String name;

            public Long getId() { return id; }
            public void setId(Long id) { this.id = id; }
            public String getName() { return name; }
            public void setName(String name) { this.name = name; }

            public Employee(Long id, String name) {
                super();
                this.id = id;
                this.name = name;
            }

            public Employee() {
            }
        }

    Main2.java:

        package one;

        import java.util.Iterator;
        import java.util.List;
        import org.hibernate.Query;
        import org.hibernate.Session;
        import org.hibernate.SessionFactory;
        import org.hibernate.Transaction;
        import org.hibernate.cfg.Configuration;

        public class Main2 {
            public static void main(String[] args) {
                SessionFactory sf = new Configuration().configure().buildSessionFactory();
                Session s1 = sf.openSession();
                Query q = s1.createQuery("from Employee ");
                Transaction tx = s1.beginTransaction();
                List l = q.list();
                Iterator itr = l.iterator();
                while (itr.hasNext()) {
                    Object obj[] = (Object[]) itr.next(); // Line 25
                    for (Object temp : obj) {
                        System.out.println(temp);
                    }
                }
                tx.commit();
                s1.close();
                sf.close();
            }
        }
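
    A likely explanation (a sketch of the usual fix, not a confirmed diagnosis): an HQL query that selects a single entity, such as "from Employee", returns a List whose elements are Employee instances, not Object[] rows. Object[] is only returned for projections with several select items, e.g. "select e.id, e.name from Employee e". Iterating without the array cast should work:

        List<Employee> employees = q.list(); // unchecked assignment, but each element is an Employee
        for (Employee e : employees) {
            System.out.println(e.getId() + " " + e.getName());
        }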

    Read the article

  • Help with Java Generics: Cannot use "Object" as argument for "? extends Object"

    - by AniDev
    Hello, I have the following code:

        import java.util.*;

        public class SellTransaction extends Transaction {
            private Map<String, ? extends Object> origValueMap;

            public SellTransaction(Map<String, ? extends Object> valueMap) {
                super(Transaction.Type.Sell);
                assignValues(valueMap);
                this.origValueMap = valueMap;
            }

            public SellTransaction[] splitTransaction(double splitAtQuantity) {
                Map<String, ? extends Object> valueMapPart1 = origValueMap;
                valueMapPart1.put(nameMappings[3], (Object) new Double(splitAtQuantity));
                Map<String, ? extends Object> valueMapPart2 = origValueMap;
                valueMapPart2.put(nameMappings[3], ((Double) origValueMap.get(nameMappings[3])) - splitAtQuantity);
                return new SellTransaction[] { new SellTransaction(valueMapPart1), new SellTransaction(valueMapPart2) };
            }
        }

    The code fails to compile when I call valueMapPart1.put and valueMapPart2.put, with the error:

        The method put(String, capture#5-of ? extends Object) in the type Map is not applicable
        for the arguments (String, Object)

    I have read on the Internet about generics and wildcards and captures, but I still don't understand what is going wrong. My understanding is that the values of the Map can be of any class that extends Object, which I think might be redundant, because all classes extend Object. And I cannot change the generics to something like ? super Object, because the Map is supplied by some library. So why is this not compiling? Also, if I try to cast valueMap to Map<String, Object>, the compiler gives me the 'Unchecked conversion' warning. Thanks!
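
    The compiler rejects the put because ? extends Object means "some specific but unknown subtype of Object": values can be read out as Object, but nothing except null can be proven safe to put in. One way around it (a sketch using the names from the code above) is to copy into a map with a concrete value type, which also avoids mutating the library-supplied map:

        // Reading from Map<String, ? extends Object> is always safe;
        // the HashMap copy gives us a type we are allowed to write to.
        Map<String, Object> valueMapPart1 = new HashMap<String, Object>(origValueMap);
        valueMapPart1.put(nameMappings[3], Double.valueOf(splitAtQuantity));

        Map<String, Object> valueMapPart2 = new HashMap<String, Object>(origValueMap);
        double original = ((Double) origValueMap.get(nameMappings[3])).doubleValue();
        valueMapPart2.put(nameMappings[3], Double.valueOf(original - splitAtQuantity));

    As a side effect, this also fixes an aliasing bug in the original: valueMapPart1 and valueMapPart2 both pointed at origValueMap, so the two "parts" were the same map.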

    Read the article

  • Rescuing a failed WCF call

    - by illdev
    Hello, I am happily using Castle's WcfFacility. From MonoRail I know the handy concept of rescues: consumer-friendly results that often, but not necessarily, contain data about what went wrong. I am creating a Silverlight application right now, doing quite a few WCF service calls. All these requests return an implementation of:

        public class ServiceResponse
        {
            private string _messageToUser = string.Empty;
            private ActionResult _result = ActionResult.Success;

            public ActionResult Result // Success, Failure, Timeout
            {
                get { return _result; }
                set { _result = value; }
            }

            public string MessageToUser
            {
                get { return _messageToUser; }
                set { _messageToUser = value; }
            }
        }

        public abstract class ServiceResponse<TResponseData> : ServiceResponse
        {
            public TResponseData Data { get; set; }
        }

    If the service has trouble responding the right way, I would want the thrown exception to be intercepted and converted to the expected implementation. Based on the thrown exception, I would want to pass on a nice message. Here is what one of the service methods looks like:

        [Transaction(TransactionMode.Requires)]
        public virtual SaveResponse InsertOrUpdate(WarehouseDto dto)
        {
            var w = dto.Id > 0 ? _dao.GetById(dto.Id) : new Warehouse();
            w.Name = dto.Name;
            _dao.SaveOrUpdate(w);
            return new SaveResponse { Data = new InsertData { Id = w.Id } };
        }

    I need the thrown exception for the transaction to be rolled back, so I cannot actually catch it and return something else. Any ideas where I could hook in?
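
    One place to hook in (a sketch, assuming plain WCF extensibility is acceptable alongside the WcfFacility) is an IErrorHandler: it runs after the method has thrown, so the transaction facility still sees the exception and rolls back, while you control the fault the client receives.

        using System;
        using System.ServiceModel;
        using System.ServiceModel.Channels;
        using System.ServiceModel.Dispatcher;

        public class RescueErrorHandler : IErrorHandler
        {
            public bool HandleError(Exception error)
            {
                // Returning true marks the exception as handled once the fault is sent.
                return true;
            }

            public void ProvideFault(Exception error, MessageVersion version, ref Message fault)
            {
                // Translate the exception into a consumer-friendly fault; a richer version
                // could carry a ServiceResponse-shaped detail object instead of plain text.
                var fe = new FaultException("The operation could not be completed.");
                fault = Message.CreateMessage(version, fe.CreateMessageFault(), fe.Action);
            }
        }

    The handler is attached through a service or endpoint behavior; the WcfFacility should also let you register such extensions when the host is wired up.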

    Read the article

  • How should I properly format this code?

    - by ct2k7
    Hi, I have a small issue here. I am using an if statement with UIAlertView, and I have two situations, both of which result in UIAlertViews. However, in one situation I want to dismiss just the UIAlertView; in the other, I want the UIAlertView to be dismissed and the view to return to the root view. This code describes it:

        if ([serverOutput isEqualToString:@"login.true"]) {
            [Alert dismissWithClickedButtonIndex:0 animated:YES];
            [UIApplication sharedApplication].networkActivityIndicatorVisible = NO;
            UIAlertView *success = [[UIAlertView alloc] initWithTitle:@"Success"
                                                              message:@"The transaction was a success!"
                                                             delegate:self
                                                    cancelButtonTitle:@"Ok"
                                                    otherButtonTitles:nil, nil];
            [success show];
            [success release];
        } else {
            UIAlertView *failure = [[UIAlertView alloc] initWithTitle:@"Failure"
                                                              message:@"The transaction failed. Contact sales operator!"
                                                             delegate:self
                                                    cancelButtonTitle:@"Ok"
                                                    otherButtonTitles:nil, nil];
            [failure show];
            [failure release];
        }

        - (void)alertView:(UIAlertView *)success clickedButtonAtIndex:(NSInteger)buttonIndex {
            switch (buttonIndex) {
                case 0: {
                    [self.navigationController popToRootViewControllerAnimated:YES];
                }
            }
        }

    So in both cases they follow the above action, but obviously that's not what I want. Any ideas on what I should do here?
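
    Both alerts report to the same delegate method, which is why the pop-to-root happens either way. A common pattern (a sketch; the tag values are arbitrary) is to tag each alert and branch on the tag in the delegate:

        // when creating the alerts
        success.tag = 1;
        failure.tag = 2;

        - (void)alertView:(UIAlertView *)alertView clickedButtonAtIndex:(NSInteger)buttonIndex {
            if (alertView.tag == 1 && buttonIndex == 0) {
                // success alert: dismiss and return to the root view
                [self.navigationController popToRootViewControllerAnimated:YES];
            }
            // tag 2 (failure): do nothing, the alert simply dismisses
        }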

    Read the article

  • Why is Grails Searchable Plugin causing errors on Hibernate AutoFlush?

    - by Mark Rogers
    In the Grails 1.2.5 project that I am trying to troubleshoot, we use the Grails Searchable plugin 0.5.5.1. The problem is that whenever we attempt to index large sets of domain classes, Grails keeps throwing:

        ERROR hibernate.AssertionFailure - an assertion failure occured
        (this may indicate a bug in Hibernate, but is more likely due to unsafe use of the session)
        org.hibernate.AssertionFailure: collection [domain-class] was not processed by flush()

    But the domain classes involved have been mapped and used by Hibernate without issues outside of the calls to the Searchable plugin. The use of the plugin goes as follows:

    1. Create a Compass session with compass.openSession()
    2. Begin a Compass transaction: compassSession.beginTransaction()
    3. Then compassSession.create(result.get(0)) is called on an important unindexed domain class
    4. Finally compassTransaction.commit() is called to commit the transaction
    5. Go to 2 and process the next domain class

    Between the 3rd and 4th domain class, an autoflush is triggered that throws the error. Can anyone give me any hints about how to solve this problem? Has anyone encountered it before? I know that they had a systemic issue with this back in pre-0.5 versions of the Searchable plugin. Is it possible those issues weren't totally fixed?
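
    One hedged mitigation (an assumption based on the "unsafe use of the session" hint, not a verified fix): make sure the Hibernate session has no pending changes before each Compass transaction touches the entities, by flushing and clearing it between iterations of the loop above.

        // sketch, inside the indexing loop between step 4 and the jump back to step 2
        def hibSession = sessionFactory.currentSession
        hibSession.flush()   // push pending collection changes out before Compass reads the entity
        hibSession.clear()   // detach processed entities so autoflush has nothing half-processed

    Running the whole indexing pass inside Domain.withNewSession is another variant of the same idea: it keeps the bulk indexing out of the request's main session.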

    Read the article

  • Compound Primary Key in Hibernate using Annotations

    - by Rich
    Hi, I have a table that uses two columns to represent its primary key: a transaction id and then a sequence number. I tried what is recommended in http://docs.jboss.org/hibernate/stable/annotations/reference/en/html_single/#entity-mapping, section 2.2.3.2.2, but when I use the Hibernate session to commit this Entity object, it leaves out the TXN_ID field in the insert statement and only includes the BA_SEQ field! What's going wrong? Here's the related code excerpt:

        @Id
        @Column(name = "TXN_ID")
        private long txn_id;
        public long getTxnId() { return txn_id; }
        public void setTxnId(long t) { this.txn_id = t; }

        @Id
        @Column(name = "BA_SEQ")
        private int seq;
        public int getSeq() { return seq; }
        public void setSeq(int s) { this.seq = s; }

    And here are some log statements to show what exactly happens to fail:

        In createKeepTxnId of DAO base class: about to commit
        Transaction :: txn_id->90625 seq->0
        ...<Snip>...
        Hibernate: insert into TBL (BA_ACCT_TXN_ID, BA_AUTH_SRVC_TXN_ID, BILL_SRVC_ID,
            BA_BILL_SRVC_TXN_ID, BA_CAUSE_TXN_ID, BA_CHANNEL, CUSTOMER_ID, BA_MERCHANT_FREETEXT,
            MERCHANT_ID, MERCHANT_PARENT_ID, MERCHANT_ROOT_ID, BA_MERCHANT_TXN_ID, BA_PRICE,
            BA_PRICE_CRNCY, BA_PROP_REQS, BA_PROP_VALS, BA_REFERENCE, RESERVED_1, RESERVED_2,
            RESERVED_3, SRVC_PROD_ID, BA_STATUS, BA_TAX_NAME, BA_TAX_RATE, BA_TIMESTAMP, BA_SEQ)
            values (?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?)
        [WARN] util.JDBCExceptionReporter SQL Error: 1400, SQLState: 23000
        [ERROR] util.JDBCExceptionReporter ORA-01400: cannot insert NULL into ("SCHEMA"."TBL"."TXN_ID")

    The important thing to note is that I print out the entity object, which has a txn_id set, and then the following insert statement does not include TXN_ID in the column list, so the NOT NULL table constraint rejects the query.
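
    For multiple @Id properties, the section cited above pairs them with an @IdClass (or uses @EmbeddedId); without a registered identifier class, composite-key mappings can misbehave. A sketch of the @IdClass shape, with names assumed from the excerpt:

        import java.io.Serializable;
        import javax.persistence.Column;
        import javax.persistence.Entity;
        import javax.persistence.Id;
        import javax.persistence.IdClass;

        class TxnKey implements Serializable {
            long txn_id;
            int seq;

            // equals() and hashCode() over both fields are mandatory for id classes
            @Override
            public boolean equals(Object o) {
                if (!(o instanceof TxnKey)) return false;
                TxnKey k = (TxnKey) o;
                return k.txn_id == txn_id && k.seq == seq;
            }

            @Override
            public int hashCode() {
                return 31 * (int) txn_id + seq;
            }
        }

        @Entity
        @IdClass(TxnKey.class)
        public class Txn {
            @Id @Column(name = "TXN_ID") private long txn_id;
            @Id @Column(name = "BA_SEQ") private int seq;
            // ... remaining columns ...
        }

    It is also worth checking that no generator annotation or mapping elsewhere treats BA_SEQ as the sole generated id, since the insert behaves as if TXN_ID were not part of the key at all.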

    Read the article

  • NHibernate.MappingException on table insertion.

    - by Suja
    The table structure is:

    The controller action to insert a row into the table is:

        public bool CreateInstnParts(string data)
        {
            IDictionary myInstnParts = DeserializeData(data);
            try
            {
                HSInstructionPart objInstnPartBO = new HSInstructionPart();
                using (ISession session = Document.OpenSession())
                {
                    using (ITransaction transaction = session.BeginTransaction())
                    {
                        objInstnPartBO.DocumentId = Convert.ToInt32(myInstnParts["documentId"]);
                        objInstnPartBO.InstructionId = Convert.ToInt32(myInstnParts["instructionId"]);
                        objInstnPartBO.PartListId = Convert.ToInt32(myInstnParts["part"]);
                        objInstnPartBO.PartQuantity = Convert.ToInt32(myInstnParts["quantity"]);
                        objInstnPartBO.IncPick = Convert.ToBoolean(myInstnParts["incpick"]);
                        objInstnPartBO.IsTracked = Convert.ToBoolean(myInstnParts["istracked"]);
                        objInstnPartBO.UpdatedBy = User.Identity.Name;
                        objInstnPartBO.UpdatedAt = DateTime.Now;
                        session.Save(objInstnPartBO);
                        transaction.Commit();
                    }
                    return true;
                }
            }
            catch (Exception ex)
            {
                Console.Write(ex.Message);
                return false;
            }
        }

    This throws an exception:

        NHibernate.MappingException was caught
        Message="No persister for: Hexsolve.Data.BusinessObjects.HSInstructionPart"
        Source="NHibernate"
        StackTrace:
            at NHibernate.Impl.SessionFactoryImpl.GetEntityPersister(String entityName)
            at NHibernate.Impl.SessionImpl.GetEntityPersister(String entityName, Object obj)
            at NHibernate.Event.Default.AbstractSaveEventListener.SaveWithGeneratedId(Object entity, String entityName, Object anything, IEventSource source, Boolean requiresImmediateIdAccess)
            at NHibernate.Event.Default.DefaultSaveOrUpdateEventListener.SaveWithGeneratedOrRequestedId(SaveOrUpdateEvent event)
            at NHibernate.Event.Default.DefaultSaveEventListener.SaveWithGeneratedOrRequestedId(SaveOrUpdateEvent event)
            at NHibernate.Event.Default.DefaultSaveOrUpdateEventListener.EntityIsTransient(SaveOrUpdateEvent event)
            at NHibernate.Event.Default.DefaultSaveEventListener.PerformSaveOrUpdate(SaveOrUpdateEvent event)
            at NHibernate.Event.Default.DefaultSaveOrUpdateEventListener.OnSaveOrUpdate(SaveOrUpdateEvent event)
            at NHibernate.Impl.SessionImpl.FireSave(SaveOrUpdateEvent event)
            at NHibernate.Impl.SessionImpl.Save(Object obj)
            at HexsolveMVC.Controllers.InstructionController.CreateInstnParts(String data)
               in F:\Project\HexsolveMVC\Controllers\InstructionController.cs:line 1342
        InnerException:

    Can anyone help me solve this?
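
    "No persister for" almost always means NHibernate never loaded a mapping for that class. A sketch of the usual checks, assuming a classic .hbm.xml mapping named HSInstructionPart.hbm.xml:

        // 1. the mapping file's Build Action must be "Embedded Resource"
        // 2. the assembly containing the mapping must be added to the configuration:
        var cfg = new NHibernate.Cfg.Configuration().Configure();
        cfg.AddAssembly(typeof(HSInstructionPart).Assembly); // picks up embedded *.hbm.xml files
        ISessionFactory factory = cfg.BuildSessionFactory();

    If Document.OpenSession() builds its session factory elsewhere, the same AddAssembly call (or a <mapping assembly="..."/> line in the configuration file) has to happen there.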

    Read the article

  • Database doesn't update using TransactionScope

    - by Dissonant
    I have a client trying to communicate with a WCF service in a transactional manner. The client passes some data to the service, and the service adds the data to its database accordingly. For some reason, the new data the service submits to its database isn't being persisted; when I have a look at the table data in the Server Explorer, no new rows are added. Relevant code snippets are below.

    Client:

        static void Main()
        {
            MyServiceClient client = new MyServiceClient();

            Console.WriteLine("Please enter your name:");
            string name = Console.ReadLine();
            Console.WriteLine("Please enter the amount:");
            int amount = int.Parse(Console.ReadLine());

            using (TransactionScope transaction = new TransactionScope(TransactionScopeOption.Required))
            {
                client.SubmitData(amount, name);
                transaction.Complete();
            }

            client.Close();
        }

    Service (note: I'm using Entity Framework to persist objects to the database):

        [OperationBehavior(TransactionScopeRequired = true, TransactionAutoComplete = true)]
        public void SubmitData(int amount, string name)
        {
            DatabaseEntities db = new DatabaseEntities();
            Payment payment = new Payment();
            payment.Amount = amount;
            payment.Name = name;
            db.AddToPayment(payment); // add to Payment table
            db.SaveChanges();
            db.Dispose();
        }

    I'm guessing it has something to do with the TransactionScope being used in the client. I've tried all combinations of db.SaveChanges() and db.AcceptAllChanges() as well, but the new payment data just doesn't get added to the database!
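
    One thing worth checking (a sketch of the usual setup, not a confirmed diagnosis): for the client's TransactionScope to govern the service's database work, the transaction has to flow across the wire, which requires both an opt-in on the operation contract and a binding with transaction flow enabled. If it does not flow, the service's work commits (or is lost) independently of transaction.Complete() in the client.

        [ServiceContract]
        public interface IMyService
        {
            [OperationContract]
            [TransactionFlow(TransactionFlowOption.Mandatory)] // or Allowed
            void SubmitData(int amount, string name);
        }

    And on the binding side (names assumed), e.g. for wsHttpBinding:

        <wsHttpBinding>
          <binding name="txBinding" transactionFlow="true" />
        </wsHttpBinding>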

    Read the article

  • Uncommitted reads in SSIS

    - by OldBoy
    I'm trying to debug some legacy Integration Services code, and I really want some confirmation on what I think the problem is. We have a very large data flow task inside a control flow container. This control flow container is set up with TransactionOption = Supported, i.e. it will 'inherit' transactions from parent containers, but none are set up here. Inside the data flow there is a call to a stored proc that writes to a table, with pseudocode something like:

        if a record doesn't exist that matches these parameters
            then write it

    Now, the issue is that three records are being passed into this proc, all with the same parameters, so logically the first record doesn't find a match and a record is created. The second record (with the same parameters) also doesn't find a match, and another record is created. My understanding is that the first 'record' passed to the proc in the data flow is uncommitted and therefore can't be 'read' by the second call. The upshot is that all three records create a row, when logically only the first should. In this scenario, am I right in thinking that it is the uncommitted transaction that stops the second call from seeing the first? Even setting the isolation level on the container doesn't help, because it's not being wrapped in a transaction anyway. Hope that makes sense, and any advice gratefully received. Work-arounds confer god-like status on you.
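
    For what it's worth, a session can always read its own uncommitted changes on the same connection, so if all three calls share one connection and transaction, the first insert would be visible to the second existence check. Duplicates like this usually point to the calls racing on separate connections, or to the check and the insert not being atomic. A hedged work-around sketch (table and parameter names assumed): collapse the check-and-insert into one statement and serialize it with locking hints.

        -- two concurrent calls cannot both pass the existence test:
        -- the first holds a range lock until its insert commits
        INSERT INTO dbo.TargetTable (ColA, ColB)
        SELECT @ParamA, @ParamB
        WHERE NOT EXISTS (
            SELECT 1
            FROM dbo.TargetTable WITH (UPDLOCK, HOLDLOCK)
            WHERE ColA = @ParamA AND ColB = @ParamB
        );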

    Read the article

  • DataReader is not returning results from a MySQL Stored Proc

    - by Glenn Slaven
    I have a stored proc in MySQL (5.5) that I'm calling from a C# application (using MySql.Data v6.4.4.0). I have a bunch of other procs that work fine, but this one is not returning any results; the data reader says the result set is empty. The proc does a couple of inserts and an update inside a transaction, then selects two local variables to return. The inserts and the update are happening, but the select is not returning. When I run the proc manually it works and gives a single row with the two fields, but the data reader is empty. This is the proc:

        CREATE DEFINER=`root`@`localhost` PROCEDURE `File_UpdateFile`(
            IN siteId INT, IN fileId INT, IN description VARCHAR(100), IN folderId INT,
            IN fileSize INT, IN filePath VARCHAR(100), IN userId INT)
        BEGIN
            START TRANSACTION;

            SELECT MAX(v.versionNumber) + 1 INTO @versionNumber
            FROM `file_version` v
            JOIN `file` f ON (v.fileId = f.fileId)
            WHERE v.fileId = fileId AND f.siteId = siteId;

            INSERT INTO `file_version` (fileId, versionNumber, description, fileSize, filePath,
                                        uploadedOn, uploadedBy, fileVersionState)
            VALUES (fileId, @versionNumber, description, fileSize, filePath, NOW(), userId, 0);

            INSERT INTO filehistory (fileId, `action`, userId, createdOn)
            VALUES (fileId, 'UPDATE', userId, NOW());

            UPDATE `file` f SET f.checkedOutBy = NULL WHERE f.fileId = fileId;

            COMMIT;

            SELECT fileId, @versionNumber `versionNumber`;
        END$$

    I'm calling the proc using Dapper, but I've debugged into the SqlMapper class and I can see that the reader is not returning anything.
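
    One thing worth ruling out (an assumption, not a verified diagnosis): that the command is actually executed as a stored procedure call. With Dapper, passing the proc name without a commandType executes it as command text, which can behave differently across providers. A sketch of the call shape (FileVersionResult is a hypothetical result type for the final SELECT):

        // requires: using Dapper; using System.Data;
        public class FileVersionResult
        {
            public int fileId { get; set; }
            public int versionNumber { get; set; }
        }

        var result = connection.Query<FileVersionResult>(
            "File_UpdateFile",
            new { siteId, fileId, description, folderId, fileSize, filePath, userId },
            commandType: CommandType.StoredProcedure).SingleOrDefault();

    It may also help to rename parameters that collide with column names (fileId, description, fileSize, filePath) to something like p_fileId: inside a MySQL proc, unqualified references resolve to the parameter rather than the column, which is easy to misread while debugging.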

    Read the article

  • Configure NHibernate for a multiple-project solution

    - by NoOne
    Hello, I'm doing a project with C# WinForms. This project is composed of:

    - Client project: Windows Forms where the user will do CRUD operations on the models;
    - Server project;
    - Common project: this project holds the models (in the image only the model Item is shown);
    - ListSingleton: a remote object that will do the operations on the models.

    I already have all the communication working, but now I need to work on persisting the data in a MySQL database. I was trying to use NHibernate, but I'm having some trouble. My main problem is how to organize my NHibernate configuration. In which project do I keep the mapping? The Common project? In which project do I keep the NHibernate configuration file (App.config)? The ListSingleton project? And in which project do I do this:

        Configuration cfg = new Configuration();
        cfg.AddXmlFile("Item.hbm.xml");

        ISessionFactory factory = cfg.BuildSessionFactory();
        ISession session = factory.OpenSession();
        ITransaction transaction = session.BeginTransaction();

        Item newItem = new Item("BLAA");

        // Tell NHibernate that this object should be saved
        session.Save(newItem);

        // commit all of the changes to the DB and close the ISession
        transaction.Commit();
        session.Close();

    In the ListSingleton project? Although I added a reference to the Common project in ListSingleton, I keep getting an error on the AddXmlFile line. My mapping is correct, because I tried it in a one-project solution and it worked :X
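
    A common layout (a sketch, not the only valid one): the mapping lives next to the model in the Common project, compiled in as an embedded resource, so every project that references Common can build a session factory from it; the connection settings live in the config file of whichever executable actually hosts the data access (here, the server/ListSingleton side). AddXmlFile("Item.hbm.xml") tends to fail in a multi-project setup because it looks for a loose file in the working directory; loading from the assembly avoids that:

        // assumes Item.hbm.xml is marked Build Action = Embedded Resource in Common
        Configuration cfg = new Configuration().Configure(); // connection info from App.config
        cfg.AddAssembly(typeof(Item).Assembly);              // finds the embedded mapping
        ISessionFactory factory = cfg.BuildSessionFactory();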

    Read the article

  • FluentNHibernate error -- "Invalid object name"

    - by goober
    I'm attempting to do the simplest of mappings with Fluent NHibernate and SQL Server 2005. Basically, I have a database table called "sv_Categories". I'd like to add a category, setting the ID automatically and adding the user id and title supplied.

    Database table layout:

        CategoryID -- int              -- not null, primary key, auto-incrementing
        UserID     -- uniqueidentifier -- not null
        Title      -- varchar(50)      -- not null

    My SessionFactory code (which works, as far as I can tell):

        _SessionFactory = Fluently.Configure()
            .Database(MsSqlConfiguration.MsSql2005
                .ConnectionString(c => c.FromConnectionStringWithKey("SVTest")))
            .Mappings(x => x.FluentMappings.AddFromAssemblyOf<CategoryMap>())
            .BuildSessionFactory();

    My ClassMap code:

        public class CategoryMap : ClassMap<Category>
        {
            public CategoryMap()
            {
                Id(x => x.ID).Column("CategoryID").Unique();
                Map(x => x.Title).Column("Title").Not.Nullable();
                Map(x => x.UserID).Column("UserID").Not.Nullable();
            }
        }

    My class code:

        public class Category
        {
            public virtual int ID { get; private set; }
            public virtual string Title { get; set; }
            public virtual Guid UserID { get; set; }

            public Category()
            {
                // do nothing
            }
        }

    And the page where I save the object:

        public void Add(Category catToAdd)
        {
            using (ISession session = SessionProvider.GetSession())
            {
                using (ITransaction Transaction = session.BeginTransaction())
                {
                    session.Save(catToAdd);
                    Transaction.Commit();
                }
            }
        }

    I receive the error:

        Invalid object name 'Category'.
        Description: An unhandled exception occurred during the execution of the current web request.
        Please review the stack trace for more information about the error and where it originated
        in the code.
        Exception Details: System.Data.SqlClient.SqlException: Invalid object name 'Category'.

    I think it might be that I haven't told the CategoryMap class to use the "sv_Categories" table, but I'm not sure how to do that. Any help would be appreciated. Thanks!
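
    That reading of the error looks right: with no explicit table name, Fluent NHibernate uses the class name, so it queries a table called Category. A sketch of the mapping with the table declared (plus, under the assumption that CategoryID is an identity column, a matching generator):

        public class CategoryMap : ClassMap<Category>
        {
            public CategoryMap()
            {
                Table("sv_Categories");                                    // map to the real table
                Id(x => x.ID).Column("CategoryID").GeneratedBy.Identity(); // let SQL Server assign it
                Map(x => x.Title).Column("Title").Not.Nullable();
                Map(x => x.UserID).Column("UserID").Not.Nullable();
            }
        }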

    Read the article

  • NetBeans, JPA Entity Beans in separate projects. Unknown entity bean class

    - by Stu
    I am working in NetBeans and have my entity beans and web services in separate projects. I include the entity beans in the web services project; however, the ApplicationConfig.java file keeps getting overwritten, removing the entries I make for the entity beans in the associated jar file. My question is: is it required to have both the entity beans and the web services share the same project/jar file? If not, what is the appropriate way to include the entity beans that are in the jar file?

        <?xml version="1.0" encoding="UTF-8"?>
        <persistence version="2.1" xmlns="http://xmlns.jcp.org/xml/ns/persistence"
                     xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
                     xsi:schemaLocation="http://xmlns.jcp.org/xml/ns/persistence
                                         http://xmlns.jcp.org/xml/ns/persistence/persistence_2_1.xsd">
          <persistence-unit name="jdbc/emrPool" transaction-type="JTA">
            <provider>org.eclipse.persistence.jpa.PersistenceProvider</provider>
            <jta-data-source>jdbc/emrPool</jta-data-source>
            <exclude-unlisted-classes>false</exclude-unlisted-classes>
            <properties>
              <property name="eclipselink.ddl-generation" value="none"/>
              <property name="eclipselink.cache.shared.default" value="false"/>
            </properties>
          </persistence-unit>
        </persistence>

    Based on Melc's input, I verified that the transaction type is set to JTA and the jta-data-source is set to the value of the GlassFish JDBC resource. Unfortunately the problem still persists. I have opened the WAR file and validated that the EntityBean.jar file is the latest version and is located in the WEB-INF/lib directory of the WAR file. I think the problem is tied to the fact that the entities are not being "registered" with the entity manager, but I do not know why they are not being registered.
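
    One way to register entities that live in a separate jar (a sketch; the path is an assumption based on where the jar sits inside the WAR) is to name the jar explicitly in persistence.xml with <jar-file>, so the provider scans it even though the classes are outside the persistence unit's own archive:

        <persistence-unit name="jdbc/emrPool" transaction-type="JTA">
          <provider>org.eclipse.persistence.jpa.PersistenceProvider</provider>
          <jta-data-source>jdbc/emrPool</jta-data-source>
          <!-- resolved relative to the persistence unit root; adjust to match the packaging -->
          <jar-file>lib/EntityBean.jar</jar-file>
          <exclude-unlisted-classes>false</exclude-unlisted-classes>
        </persistence-unit>

    This keeps the entity registration in persistence.xml, which you own, rather than in generated files that NetBeans overwrites.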

    Read the article

  • Hibernate Session flush behaviour [ and Spring @Transactional ]

    - by EugeneP
    I use Spring and Hibernate in a web app; the SessionFactory is injected into a DAO bean, and then this DAO is used in a servlet through the web service context. The DAO methods are transactional; inside one of them I use:

        getCurrentSession().save(myObject);

    A servlet calls this method with an object passed in. The update seems not to be flushed at once: it takes about 5 seconds to see the changes in the database, while the servlet method that calls the DAO's update method takes a fraction of a second to complete. So after the @Transactional method of the DAO completes, flushing may NOT have happened? It does not seem to follow a consistent rule [I have already observed this]. Then the question is this: what should I do to force the session to flush after every DAO method? It may not be a good thing to do, but speaking of a service layer, some methods must end with an immediate flush, and the Hibernate session's behaviour is not predictable. So what do I do to guarantee that my @Transactional method persists all the changes after the last line of the method's code? Is getCurrentSession().flush() the only solution?

    P.S. I read somewhere that @Transactional IS ASSOCIATED with a DB transaction: the method returns, so the transaction must be committed. I do not see this happening.
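
    For reference, a sketch of the explicit-flush variant. Note that Hibernate flushes automatically before a transaction commit, so if the commit really happened when the method returned, the data should be visible immediately; a multi-second delay often means the transaction is committed later than expected, for example because the @Transactional proxy is not actually applied to the invoked method.

        @Transactional
        public void saveThing(MyObject myObject) {          // MyObject is a placeholder name
            sessionFactory.getCurrentSession().save(myObject);
            sessionFactory.getCurrentSession().flush();     // SQL is issued here; the commit
                                                            // still happens with the transaction
        }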

    Read the article

  • Git Svn dcommit error - restart the commit

    - by Rob Wilkerson
    Last week, I made a number of changes to my local branch before leaving town for the weekend. This morning I wanted to dcommit all of those changes to the company's Svn repository, but I get a merge conflict in one file:

        Merge conflict during commit: Your file or directory 'build.properties.sample' is probably
        out-of-date: The version resource does not correspond to the resource within the transaction.
        Either the requested version resource is out of date (needs to be updated), or the requested
        version resource is newer than the transaction root (restart the commit).

    I'm not sure exactly why I'm getting this, but before attempting to dcommit, I did a git svn rebase. That "overwrote" my commits. To recover from that, I did a git reset --hard HEAD@{1}. Now my working copy seems to be where I expect it to be, but I have no idea how to get past the merge conflict; there's not actually any conflict to resolve that I can find. Any thoughts would be appreciated.

    EDIT: Just wanted to specify that I am working locally. I have a local branch for the trunk that references svn/trunk (the remote branch). All of my work was done on the local trunk:

        $ git branch
          maint-1.0.x
          master
        * trunk

        $ git branch -r
          svn/maintenance/my-project-1.0.0
          svn/trunk

    Similarly, git log currently shows 10 commits on my local trunk since the last commit with a Svn ID. Hopefully that answers a few questions. Thanks again.
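
    The reset --hard HEAD@{1} most likely undid the rebase, so the 10 local commits once again sit on an old view of svn/trunk, and Svn rejects the commit because build.properties.sample changed upstream in the meantime. A sketch of one recovery path (assuming a clean working tree and that the local commits should be kept):

        git svn fetch          # update the svn/trunk remote branch
        git svn rebase         # replay the 10 local commits onto the new tip
        #   ...resolve any conflicts in build.properties.sample here...
        git svn dcommit        # then commit them to Svn one by one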

    Read the article

  • Pickled my dictionary from ZODB but I got a smaller one?

    - by Someone Someoneelse
    I use ZODB and I want to copy my 'database_1.fs' file to another 'database_2.fs'. So I opened the root dictionary of 'database_1.fs' and pickle.dump'ed it into a file, then pickle.load'ed it into a dictionary variable, and in the end I updated the root dictionary of the other 'database_2.fs' with the dictionary variable. It works, but I wonder why the size of 'database_1.fs' is not equal to the size of the other 'database_2.fs', since they are still copies of each other.

        from ZODB import DB
        from ZODB.FileStorage import FileStorage
        import transaction

        def openstorage(path):  # opens the database at the given path
            data = {}
            data['file'] = FileStorage(path)
            data['db'] = DB(data['file'])
            data['conn'] = data['db'].open()
            data['root'] = data['conn'].root()
            return data

        def getroot(dicty):
            return dicty['root']

        def closestorage(dicty):  # close the database after saving
            transaction.commit()
            dicty['file'].close()
            dicty['db'].close()
            dicty['conn'].close()
            transaction.get().abort()

    Then this is what I do:

        import pickle

        loc1 = 'G:\\database_1.fs'
        op1 = openstorage(loc1)
        root1 = getroot(op1)

        loc2 = 'G:\\database_2.fs'
        op2 = openstorage(loc2)
        root2 = getroot(op2)

        >>> len(root1)
        215
        >>> len(root2)
        0

        pickle.dump(root1, open("save.txt", "wb"))
        item = pickle.load(open("save.txt", "rb"))  # now item is a dictionary
        root2.update(item)

        closestorage(op1)
        closestorage(op2)

        # after I open both of the databases
        # I get the same keys in both databases,
        # but database_2.fs is smaller than database_1.fs in size:
        >>> len(root2) == len(root1) == 215  # they have the same keys
        True

    Note: (1) there are persistent dictionaries and lists in the original database_1.fs; (2) both of them have the same length and the same indexes.
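
    The size difference is expected: a FileStorage is append-only and keeps a record of every object revision ever committed, so database_1.fs carries its whole history, while database_2.fs received only one transaction containing the current state (packing database_1.fs would shrink it in the same way). For a copy that preserves history, a storage-level copy can be used; a minimal sketch:

        from ZODB.FileStorage import FileStorage

        # copy all committed transactions, history included (sketch)
        src = FileStorage('G:\\database_1.fs', read_only=True)
        dst = FileStorage('G:\\database_2.fs')
        dst.copyTransactionsFrom(src)
        src.close()
        dst.close()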

    Read the article

  • How to efficiently use LOCK_ESCALATION mssql 2008

    - by Avias
    I'm currently having trouble with frequent deadlocks on a specific user table in MS SQL 2008. Here are some facts about this particular table:

    - It has a large number of rows (1 to 2 million).
    - All the indexes used on this table only have "use row lock" ticked in their options.
    - Rows are frequently updated by multiple transactions but are unique (e.g. probably a thousand or more update statements are executed against different unique rows every hour).
    - The table does not use partitions.

    Upon checking the table in sys.tables, I found that lock_escalation is set to TABLE. I'm very tempted to set lock_escalation for this table to DISABLE, but I'm not really sure what side effects this would incur. From what I understand, using DISABLE will minimize escalating locks to TABLE level, which, combined with the row lock settings of the indexes, should theoretically minimize the deadlocks I am encountering. From what I have read in "Determining threshold for lock escalation", it seems that locking automatically escalates when a single transaction fetches 5000 rows. What does a single transaction mean in this sense? A single session/connection acquiring 5000 rows through individual update/select statements? Or a single SQL update/select statement that fetches 5000 or more rows? Any insight is appreciated. BTW, n00b DBA here. Thanks!
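
    For what it's worth, Books Online describes the trigger per statement: escalation is attempted when a single Transact-SQL statement acquires roughly 5,000 locks on one nonpartitioned table or index, not when a session gradually accumulates 5,000 locks across many small statements. The per-table switch itself is one command; a sketch with an assumed table name:

        -- DISABLE mostly prevents escalation to table level for this table;
        -- the trade-off is more memory spent tracking many row/page locks
        ALTER TABLE dbo.UserActivity SET (LOCK_ESCALATION = DISABLE);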

    Read the article

  • Best Approach for Checking and Inserting Records

    - by nevets1219
    One of our existing C programs has this purpose:

        open connection to DB
        for record in all_records:
            if record contains certain data:
                if record is NOT in table A:                     // see #1
                    insert record information into table A and B // see #2
        close connection to DB

    #1 is a "select field from table where field=XXX"; #2 is two inserts. This is typically done every X months to sync everything up, or so I'm told. I've also been told that this process takes roughly a couple of days. There are (currently) at most 2.5 million records, though not necessarily all 2.5M will be inserted. One of the tables contains 10 fields and the other 5 fields. There isn't much to be done about iterating through the records, since that part can't be changed at the moment. What I would like to do is speed up the part where I query MySQL. I'm not sure if I have left out any important details; please let me know! I'm also no SQL expert, so feel free to point out the obvious. I have thought about:

    - Putting all the inserts into a transaction (at the moment I'm not sure how important it is for the transaction to be all-or-none, or whether this affects performance)
    - Using INSERT X WHERE NOT EXISTS Y
    - LOAD DATA INFILE (but that would require that I create a (possibly) large temp file)
    - I read that (hopefully someone can confirm) I should drop indexes so they aren't recalculated

    MySQL version: mysql Ver 14.7 Distrib 4.1.22, for sun-solaris2.10 (sparc) using readline 4.3
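
    The select-then-insert round trip can be collapsed into one statement, which also removes the race between the check and the insert. A sketch with assumed table and column names (MySQL 4.1 supports INSERT ... SELECT):

        START TRANSACTION;

        -- one statement per record: insert only if no matching row exists yet
        INSERT INTO tableA (field1, field2)
        SELECT 'value1', 'value2' FROM DUAL
        WHERE NOT EXISTS (
            SELECT 1 FROM tableA WHERE field = 'XXX'
        );

        -- ... batch a few hundred or thousand records, then:
        COMMIT;

    If the checked column is (or can be made) unique, INSERT IGNORE or a unique key plus ON DUPLICATE KEY UPDATE is simpler still, and batching many inserts per transaction avoids paying a commit per row.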

    Read the article

  • What is the best way to create a running integer id on the AppEngine data storage?

    - by Freed
    For various reasons, I need a unique running integer id for my entities stored on Google App Engine. The automatically generated key sort of has this behaviour, but it doesn't start from 1 (or 0) and doesn't guarantee that the generated integer part will come from a continuous sequence. What would be the best way to efficiently implement this on App Engine? Is there any support from the storage system? To add to the complexity, I might need to do this over entities from different entity groups, meaning I can't just get the highest id right now and save an entity with the next id in a transaction. Might memcache be the way to go..?

    Edit: I haven't yet implemented this, but to clarify the memcache idea: I know memcache is unreliable, but in practice it probably won't lose data "too often" to hurt performance. Basically, I would have a memcache entry for the last used id, update it (somehow atomically) whenever I create a new entity, and use that id. In the case of memcache not having a value for this entry, I'd get the highest id so far by doing a query on my entities sorted by the id and update memcache (unless someone else had already done so). The only problem I can see with this right now would be the atomicity of the operation as a whole if the save of my new entity was also part of a transaction. Thoughts..?
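
    A rough sketch of that memcache scheme (Python runtime assumed; incr() is atomic on an existing entry, and add() only succeeds for the first caller, which covers the reseeding race described above):

        from google.appengine.api import memcache

        COUNTER_KEY = 'last_running_id'  # assumed key name

        def next_id(highest_known_id):
            """Reserve the next running id. highest_known_id comes from a query
            over existing entities ordered by the id field, descending (sketch)."""
            value = memcache.incr(COUNTER_KEY)
            if value is None:
                # counter evicted or never set: reseed; add() is a no-op if
                # another request already won the race
                memcache.add(COUNTER_KEY, highest_known_id)
                value = memcache.incr(COUNTER_KEY)
            return value

    The caveat from the question stands: an id reserved this way is lost if the entity save later fails, so the sequence is unique and increasing but not guaranteed gap-free.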

    Read the article

  • Time.new does not work as I would expect

    - by Marius Pop
    I am trying to generate some seed material:

        seed_array.each do |seed|
          Task.create(date: Date.new(2012, 06, seed[1]),
                      start_t: Time.new(2012, 6, 2, seed[2], seed[3]),
                      end_t: Time.new(2012, 6, 2, seed[2] + 2, seed[3]),
                      title: "#{seed[0]}")
        end

    Ultimately I will put in random hours, minutes and seconds. The problem I am facing is that instead of creating a time with the 2012-06-02 date, it creates a time with a different date: 2000-01-01. I tested Time.new(2012,6,2,2,20,45) in the Rails console and it works as expected. When I seed my database, however, some voodoo magic happens and I don't get the date I want. Any input is appreciated. Thank you!

    Update 1: a small sample of the log:

        (0.0ms)  begin transaction
        SQL (0.5ms)  INSERT INTO "tasks" ("created_at", "date", "description", "end_t", "group_id",
            "start_t", "title", "updated_at") VALUES (?, ?, ?, ?, ?, ?, ?, ?)
            [["created_at", Tue, 03 Jul 2012 02:15:34 UTC +00:00], ["date", Thu, 07 Jun 2012],
             ["description", nil], ["end_t", 2012-06-02 10:02:00 -0400], ["group_id", nil],
             ["start_t", 2012-06-02 08:02:00 -0400], ["title", "99"],
             ["updated_at", Tue, 03 Jul 2012 02:15:34 UTC +00:00]]
        (2.3ms)  commit transaction

    Update 2: this is what shows up in the Rails console:

        Task id: 101, date: "2012-06-26", start_t: "2000-01-01 08:45:00", end_t: "2000-01-01 10:45:00",
        title: "1", description: nil, group_id: nil, created_at: "2012-07-03 02:15:33",
        updated_at: "2012-07-03 02:15:33"
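
    The log explains it: the full timestamp goes into the INSERT, but start_t and end_t come back with the date pinned to 2000-01-01, which is exactly what a :time column does; it stores only the time of day, and ActiveRecord normalizes the date part to the dummy date 2000-01-01. If the full date matters, a migration sketch (assuming the columns are currently :time):

        class ChangeTaskTimesToDatetime < ActiveRecord::Migration
          def up
            change_column :tasks, :start_t, :datetime
            change_column :tasks, :end_t, :datetime
          end

          def down
            change_column :tasks, :start_t, :time
            change_column :tasks, :end_t, :time
          end
        end

    Alternatively, keep the :time columns and combine them with the separate date column when reading.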

    Read the article

  • Can't start my Windows XP Virtual Machine: Insufficient Disk Space

    - by Rob
    Okay, I currently have a server with two virtual machines installed on it: a CentOS 5.4 and a Windows XP. I was remote desktopping into the Windows XP machine, chatting on IRC, and all of a sudden I lost the connection. I checked with my hypervisor and tried to restart the VM, and it won't start at all. It's giving me this error:

        Message from server0297.serverpool.gnet.ba: Failed to extend swap file
        (fileHandle 16414) from 0 KB to 524288 KB: No space left on device.
        Could not power on VM: No space left on device.
        Failed to power on VM    info    4/17/2010 9:49:20 PM    root

    Basically I bought the setup from a host; he installed the hypervisor and the virtual machines, and honestly I don't really know what I'm doing. I've looked at some of the settings and I can't figure it out. If you need any additional information, I'll try to provide it. The CentOS 5.4 VM still starts and works flawlessly, if that's relevant.

    Read the article

  • NMap route determination on Windows 7 x64

    - by user30772
    C:\Windows\system32>nmap --iflist

        Starting Nmap 6.01 ( http://nmap.org ) at 2012-08-31 06:51 Central Daylight Time

        ************************INTERFACES************************
        DEV  (SHORT) IP/MASK                       TYPE        UP   MTU   MAC
        eth0 (eth0)  fe80::797f:b9b6:3ee0:27b8/64  ethernet    down 1500  5C:AC:4C:E9:2D:46
        eth0 (eth0)  169.254.39.184/4              ethernet    down 1500  5C:AC:4C:E9:2D:46
        eth1 (eth1)  fe80::5c02:7e48:8fbe:c7c9/64  ethernet    down 1500  00:FF:3F:7C:7C:2B
        eth1 (eth1)  169.254.199.201/4             ethernet    down 1500  00:FF:3F:7C:7C:2B
        eth2 (eth2)  fe80::74e4:1ab7:1b7d:a0d0/64  ethernet    up   1500  14:FE:B5:BA:8A:C3
        eth2 (eth2)  10.0.0.0.253/24               ethernet    up   1500  14:FE:B5:BA:8A:C3
        eth3 (eth3)  fe80::b03e:ddf5:bb5c:5f76/64  ethernet    up   1500  00:50:56:C0:00:01
        eth3 (eth3)  169.254.95.118/16             ethernet    up   1500  00:50:56:C0:00:01
        eth4 (eth4)  fe80::b175:831d:e60:27b/64    ethernet    up   1500  00:50:56:C0:00:08
        eth4 (eth4)  192.168.153.1/24              ethernet    up   1500  00:50:56:C0:00:08
        lo0  (lo0)   ::1/128                       loopback    up   -1
        lo0  (lo0)   127.0.0.1/8                   loopback    up   -1
        tun0 (tun0)  fe80::100:7f:fffe/64          point2point down 1280
        tun1 (tun1)  (null)/0                      point2point down 1280
        tun2 (tun2)  fe80::5efe:a9fe:5f76/128      point2point down 1280
        tun3 (tun3)  (null)/0                      point2point down 1280
        tun4 (tun4)  fe80::5efe:c0a8:9901/128      point2point down 1280
        tun5 (tun5)  fe80::5efe:ac14:fd/128        point2point down 1280

        DEV  WINDEVICE
        eth0 \Device\NPF_{0024872A-5A41-42DF-B484-FB3D3ED3FCE9}
        eth0 \Device\NPF_{0024872A-5A41-42DF-B484-FB3D3ED3FCE9}
        eth1 \Device\NPF_{3F7C7C2B-9AF3-45BB-B96E-2F00143CC2F7}
        eth1 \Device\NPF_{3F7C7C2B-9AF3-45BB-B96E-2F00143CC2F7}
        eth2 \Device\NPF_{08116FE5-F0FF-498A-9BF1-515528C57C13}
        eth2 \Device\NPF_{08116FE5-F0FF-498A-9BF1-515528C57C13}
        eth3 \Device\NPF_{AA83C6CE-AB2E-4764-92D1-CDEAFBA7AD21}
        eth3 \Device\NPF_{AA83C6CE-AB2E-4764-92D1-CDEAFBA7AD21}
        eth4 \Device\NPF_{D0679889-E9D4-411D-BDC5-F4DDB758E151}
        eth4 \Device\NPF_{D0679889-E9D4-411D-BDC5-F4DDB758E151}
        lo0  <none>
        lo0  <none>
        tun0 <none>
        tun1 <none>
        tun2 <none>
        tun3 <none>
        tun4 <none>
        tun5 <none>

        **************************ROUTES**************************
        DST/MASK            DEV   GATEWAY
        192.168.153.255/32  eth0
        255.255.255.255/32  eth0
        255.255.255.255/32  eth0
        127.0.0.1/32        eth0
        127.255.255.255/32  eth0
        255.255.255.255/32  eth0
        169.254.95.118/32   eth0
        169.254.255.255/32  eth0
        10.0.0.0.253/32     eth0
        255.255.255.255/32  eth0
        10.0.0.0.255/32     eth0
        255.255.255.255/32  eth0
        192.168.153.1/32    eth0
        255.255.255.255/32  eth0
        10.0.0.0.0/24       eth0
        192.168.153.0/24    eth0
        10.10.10.0/24       eth0  10.0.0.0.4
        169.254.0.0/16      eth0
        127.0.0.0/8         eth0
        224.0.0.0/4         eth0
        224.0.0.0/4         eth0
        224.0.0.0/4         eth0
        224.0.0.0/4         eth0
        224.0.0.0/4         eth0
        224.0.0.0/4         eth0
        0.0.0.0/0           eth0  10.0.0.0.1

    JMeterX - I worded it that way in hopes of raising answer efficiency, but that probably wasn't the smartest choice. IMHO the problem (it could be a symptom) is that nmap stubbornly chooses eth0 as the gateway interface for any and all networks. Here's the result:

        C:\Windows\system32>nmap 10.0.0.55

        Starting Nmap 6.01 ( http://nmap.org ) at 2012-08-31 07:43 Central Daylight Time
        Note: Host seems down. If it is really up, but blocking our ping probes, try -Pn
        Nmap done: 1 IP address (0 hosts up) scanned in 0.95 seconds

        C:\Windows\system32>nmap -e eth2 10.0.0.55

        Starting Nmap 6.01 ( http://nmap.org ) at 2012-08-31 07:44 Central Daylight Time
        Nmap scan report for esxy5.dionne.net (10.0.0.55)
        Host is up (0.00070s latency).
        Not shown: 991 filtered ports
        PORT     STATE  SERVICE
        22/tcp   open   ssh
        80/tcp   open   http
        427/tcp  open   svrloc
        443/tcp  open   https
        902/tcp  open   iss-realsecure
        5988/tcp closed wbem-http
        5989/tcp open   wbem-https
        8000/tcp open   http-alt
        8100/tcp open   xprint-server
        MAC Address: 00:1F:29:59:C7:03 (Hewlett-Packard Company)

        Nmap done: 1 IP address (1 host up) scanned in 5.29 seconds

    Just to be clear, this is what makes absolutely no sense to me whatsoever. For reference, I've included similar info from an Ubuntu VM (that works normally) on the affected host below.

    Jacked Windows 7:

        **************************ROUTES**************************
        DST/MASK            DEV   GATEWAY
        192.168.153.255/32  eth0
        255.255.255.255/32  eth0
        255.255.255.255/32  eth0
        127.0.0.1/32        eth0
        127.255.255.255/32  eth0
        255.255.255.255/32  eth0
        169.254.95.118/32   eth0
        169.254.255.255/32  eth0
        10.0.0.0.253/32     eth0
        255.255.255.255/32  eth0
        10.0.0.0.255/32     eth0
        255.255.255.255/32  eth0
        192.168.153.1/32    eth0
        255.255.255.255/32  eth0
        10.0.0.0.0/24       eth0
        192.168.153.0/24    eth0
        10.10.10.0/24       eth0  10.0.0.0.4
        169.254.0.0/16      eth0
        127.0.0.0/8         eth0
        224.0.0.0/4         eth0
        224.0.0.0/4         eth0
        224.0.0.0/4         eth0
        224.0.0.0/4         eth0
        224.0.0.0/4         eth0
        224.0.0.0/4         eth0
        0.0.0.0/0           eth0  10.0.0.0.1

    Working Ubuntu VM:

        root@ubuntu:~# nmap --iflist

        Starting Nmap 5.21 ( http://nmap.org ) at 2012-08-31 07:44 PDT

        ************************INTERFACES************************
        DEV  (SHORT) IP/MASK             TYPE     UP MAC
        lo   (lo)    127.0.0.1/8         loopback up
        eth0 (eth0)  172.20.0.89/24      ethernet up 00:0C:29:0A:C9:35
        eth1 (eth1)  192.168.225.128/24  ethernet up 00:0C:29:0A:C9:3F
        eth2 (eth2)  192.168.150.128/24  ethernet up 00:0C:29:0A:C9:49

        **************************ROUTES**************************
        DST/MASK         DEV   GATEWAY
        192.168.225.0/0  eth1
        192.168.150.0/0  eth2
        172.20.0.0/0     eth0
        169.254.0.0/0    eth0
        0.0.0.0/0        eth0  172.20.0.1

        root@ubuntu:~# nmap esxy2

        Starting Nmap 5.21 ( http://nmap.org ) at 2012-08-31 07:44 PDT
        Nmap scan report for esxy2 (172.20.0.52)
        Host is up (0.00036s latency).
        rDNS record for 172.20.0.52: esxy2.dionne.net
        Not shown: 994 filtered ports
        PORT     STATE  SERVICE
        80/tcp   open   http
        427/tcp  closed svrloc
        443/tcp  open   https
        902/tcp  closed iss-realsecure
        8000/tcp open   http-alt
        8100/tcp open   unknown
        MAC Address: 00:04:23:B1:FA:6A (Intel)

        Nmap done: 1 IP address (1 host up) scanned in 4.76 seconds

    Read the article

  • How to build a small network/server at home, basics

    - by Moe
    I'm one class away from my BA in IT, and I have taken several classes in general IT. Out of all the books, I found just two to be really beneficial. I'm trying to get hands-on experience, so my question is this: I want to build a small network in my home, wireless and also wired, with a printer, laptop, desktop and server (I have four 1 TB external drives of movies/music I want to make available to all computers). Where would I start, from building a server with my hard drives to choosing a good modem, router, switch, firewall, internet speed/connection, etc.? This is the first project I want to try.

    Read the article

  • PHP server settings that restrict POST queries

    - by Korjavin Ivan
    I have a PHP script on a hosted server that receives big data payloads via Ajax/POST. Just now, after some work done on the hosting side, I see that the script is broken. I checked with curl. With a file temp1 containing (237 characters total):

        user_avatar=&user_baner=&user_sig=....

    this works perfectly:

        curl -H "X-Requested-With: XMLHttpRequest" -X POST --data @temp1 'http://host/mypage.php'

    But with a file temp2 containing (65563 characters total):

        name=%D0%9C%D0%B5%D0%B1%%B5%D0%BB%D1%8C%D0%A4%%B0%D0%B1%D1%80%D0%B8%D0%BA%D1%8A&user_payed=0000-00-00&...positions%5B5231%5D=on

    curl returns nothing:

        curl -H "X-Requested-With: XMLHttpRequest" -X POST --data @temp2 'http://host/mypage.php'

    It looks like a problem with Apache/PHP/php.ini or something like that. I checked .htaccess:

        php_value post_max_size 20M

    Which other parameters should I check? Is it possible that the %B0 encoding kills PHP/Apache? Or the total number of parameters (about 2800)?
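
    The parameter count is the prime suspect: since PHP 5.3.9 there is a max_input_vars directive (default 1000) that truncates or rejects requests with more input variables, and the Suhosin patch common on shared hosts enforces similar limits. A sketch of the directives worth checking (directive names are real; the values are illustrative):

        ; php.ini, or via php_value lines in .htaccess
        max_input_vars = 10000            ; default 1000; ~2800 POST fields exceeds it

        ; if Suhosin is installed:
        suhosin.post.max_vars = 10000     ; default 1000
        suhosin.request.max_vars = 10000  ; default 1000

    Malformed sequences like %%B5 should not crash PHP or Apache; they just decode oddly, so the variable-count limit is the more likely culprit.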

    Read the article
