Search Results

Search found 3634 results on 146 pages for 'commit charge'.


  • Locking a table for getting MAX in LINQ

    - by Hossein Margani
    Hi everyone! I have a query in LINQ: I want to get the MAX of Code in my table, increase it, and insert a new record with the new Code - just like SQL Server's IDENTITY feature, except my Code column is char(5) and can contain both letters and digits. My problem is that when inserting a new row, two concurrent processes read the same max and insert an identical Code. My command is:

        var maxCode = db.Customers.Select(c=>c.Code).Max();
        var anotherCustomer = db.Customers.Where(...).SingleOrDefault();
        anotherCustomer.Code = GenerateNextCode(maxCode);
        db.SubmitChanges();

    I ran this command across 1000 threads, each updating 200 customers, and used a transaction with IsolationLevel.Serializable; after two or three executions an error occurred:

        using (var db = new DBModelDataContext())
        {
            DbTransaction tran = null;
            try
            {
                db.Connection.Open();
                tran = db.Connection.BeginTransaction(IsolationLevel.Serializable);
                db.Transaction = tran;
                . . . .
                tran.Commit();
            }
            catch
            {
                tran.Rollback();
            }
            finally
            {
                db.Connection.Close();
            }
        }

    error: Transaction (Process ID 60) was deadlocked on lock resources with another process and has been chosen as the deadlock victim. Rerun the transaction. Other IsolationLevels generate this error: Row not found or changed. Please help me, thank you.
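
    One pattern that can remove the duplicates (and usually the deadlocks) is to serialize only the key generation, by reading the current maximum with an update lock inside the same transaction so that concurrent writers wait for each other. A minimal sketch, assuming LINQ to SQL and the table/column names from the question; GenerateNextCode is the question's own helper and customerId is a hypothetical parameter:

        using (var db = new DBModelDataContext())
        {
            db.Connection.Open();
            using (var tran = db.Connection.BeginTransaction(IsolationLevel.ReadCommitted))
            {
                db.Transaction = tran;

                // UPDLOCK + HOLDLOCK makes concurrent readers of the max block each other,
                // so only one transaction at a time can compute the next Code.
                string maxCode = db.ExecuteQuery<string>(
                    "SELECT MAX(Code) FROM Customers WITH (UPDLOCK, HOLDLOCK)").Single();

                var customer = db.Customers.Single(c => c.ID == customerId); // hypothetical lookup
                customer.Code = GenerateNextCode(maxCode);

                db.SubmitChanges();
                tran.Commit();
            }
        }

    The point of the hint is that the blocking happens only on the MAX read, so the rest of the work can stay at ReadCommitted instead of Serializable.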

    Read the article

  • What strategy do you use to sync your code when working from home

    - by Ben Daniel
    At my work I currently have my development environment inside a Virtual Machine. When I need to do work from home I copy my VM and any databases I need onto a laptop drive sized external USB drive. After about 10 minutes of copying I put the drive in my pocket and head home, copy back the VM and databases onto my personal computer and I'm ready to work. I follow the same steps to take the work back with me. So if I count the total amount of time I spend waiting around for files to finish copying in order for me to take work home and bring it back again, it comes to around 40 minutes! I do have a VPN connection to my work from home (providing the internet is up at both sites) and a decent internet speed (8mbits down/?up) but I find Remote Desktoping into my work machine laggy enough for me to want to work on my VM directly. So in looking at what other options I have or how I could improve my existing option I'm interested in what strategy you use or recommend to do work at home and keeping your code/environment in sync. EDIT: I'd prefer an option where I don't have to commit my changes into version control before I leave work - as I like to make meaningful descriptive comments in my commits, committing would take longer than just copying my VM onto a portable drive! lol Also I'd prefer a solution where my dev environment stays in sync too. Having said that I'm still very interested in your own solutions even if they don't exactly solve my problem as best as I'd like. :)
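
    Not a full answer, but if the VM lives on disk as an image file, one option that can cut the copy time is delta-syncing the image over the VPN instead of copying it whole, so only changed blocks travel. A rough sketch with made-up paths and host names (rsync has to be available on both ends):

        # copy only the parts of the VM image that changed since the last sync
        rsync -av --partial --inplace /work/vms/dev-vm/ user@home-box:/vms/dev-vm/

    The --inplace flag updates the existing remote file rather than rewriting it, which matters when the file is tens of gigabytes.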

    Read the article

  • Good resources for building web-app in Tapestry

    - by Rich
    Hi, I'm currently researching into Tapestry for my company and trying to decide if I think we can port our pre-existing proprietary web applications to something better. Currently we are running Tomcat and using JSP for our front end backed by our own framework that eventually uses JDBC to connect to an Oracle database. I've gone through the Tapestry tutorial, which was really neat and got me interested, but now I'm faced with what seems to be a common issue of documentation. There are a lot of things I'd need to be sure that I could accomplish with Tapestry before I'd be ready to commit fully to it. Does anyone have any good resources, be it a book or web article or anything else, that go into more detail beyond what the Tapestry tutorial explains? I am also considering integrating with Hibernate, and have read a little bit about Spring too. I'm still having a hard time understanding how Spring would be more useful than cumbersome in tandem with Tapestry,as they seem to have a lot of overlapping features. An example I read seemed to use Spring to interface with Hibernate, and then Tapestry to Spring, but I was under the impression Tapestry integrates to the same degree with Hibernate. The resource I'm speaking of is http://wiki.apache.org/tapestry/Tapstry5First_project_with_Tapestry5,_Spring_and_Hibernate . I was interested because I hadn't found information anywhere else on how to maintain user levels and sessions through a Tapestry application before, but wasn't exactly impressed by the need to use Spring in the example.

    Read the article

  • iPhone app crashes on start-up, in stack-trace only messages from built-in frameworks

    - by Aleksejs
    My app sometimes crashes at start-up, and the stack trace contains only messages from built-in frameworks. An excerpt from a crash log:

        OS Version:      iPhone OS 3.1.3 (7E18)
        Report Version:  104
        Exception Type:  EXC_BAD_ACCESS (SIGBUS)
        Exception Codes: KERN_PROTECTION_FAILURE at 0x000e6000
        Crashed Thread:  0

        Thread 0 Crashed:
        0   CoreGraphics       0x339305d8 argb32_image_mark_RGB32 + 704
        1   CoreGraphics       0x338dbcd4 argb32_image + 1640
        2   libRIP.A.dylib     0x320d99f0 ripl_Mark
        3   libRIP.A.dylib     0x320db3ac ripl_BltImage
        4   libRIP.A.dylib     0x320cc2a0 ripc_RenderImage
        5   libRIP.A.dylib     0x320d5238 ripc_DrawImage
        6   CoreGraphics       0x338d7da4 CGContextDelegateDrawImage + 80
        7   CoreGraphics       0x338d7d14 CGContextDrawImage + 364
        8   UIKit              0x324ee68c compositeCGImageRefInRect
        9   UIKit              0x324ee564 -[UIImage(UIImageDeprecated) compositeToRect:fromRect:operation:fraction:]
        10  UIKit              0x32556f44 -[UINavigationBar drawBackButtonBackgroundInRect:withStyle:pressed:]
        11  UIKit              0x32556b00 -[UINavigationItemButtonView drawRect:]
        12  UIKit              0x324ecbc4 -[UIView(CALayerDelegate) drawLayer:inContext:]
        13  QuartzCore         0x311cacfc -[CALayer drawInContext:]
        14  QuartzCore         0x311cab00 backing_callback
        15  QuartzCore         0x311ca388 CABackingStoreUpdate
        16  QuartzCore         0x311c978c -[CALayer _display]
        17  QuartzCore         0x311c941c -[CALayer display]
        18  QuartzCore         0x311c9368 CALayerDisplayIfNeeded
        19  QuartzCore         0x311c8848 CA::Context::commit_transaction(CA::Transaction*)
        20  QuartzCore         0x311c846c CA::Transaction::commit()
        21  QuartzCore         0x311c8318 +[CATransaction flush]
        22  UIKit              0x324f5e94 -[UIApplication _reportAppLaunchFinished]
        23  UIKit              0x324a7a80 -[UIApplication _runWithURL:sourceBundleID:]
        24  UIKit              0x324f8df8 -[UIApplication handleEvent:withNewEvent:]
        25  UIKit              0x324f8634 -[UIApplication sendEvent:]
        26  UIKit              0x324f808c _UIApplicationHandleEvent
        27  GraphicsServices   0x335067dc PurpleEventCallback
        28  CoreFoundation     0x323f5524 CFRunLoopRunSpecific
        29  CoreFoundation     0x323f4c18 CFRunLoopRunInMode
        30  UIKit              0x324a6c00 -[UIApplication _run]
        31  UIKit              0x324a5228 UIApplicationMain
        32  Journaler          0x000029ac main (main.m:14)
        33  Journaler          0x00002948 start + 44

    File main.m is as simple as possible:

        #import <UIKit/UIKit.h>

        int main(int argc, char *argv[]) {
            NSAutoreleasePool * pool = [[NSAutoreleasePool alloc] init];
            int retVal = UIApplicationMain(argc, argv, nil, nil); // line 14
            [pool release];
            return retVal;
        }

    What might cause the app to crash?

    Read the article

  • Flush separate Castle ActiveRecord Transaction, and refresh object in another Transaction

    - by eanticev
    I've got all of my ASP.NET requests wrapped in a Session and a Transaction that gets commited only at the very end of the request. At some point during execution of the request, I would like to insert an object and make it visible to other potential threads - i.e. split the insertion into a new transaction, commit that transaction, and move on. The reason is that the request in question hits an API that then chain hits another one of my pages (near-synchronously) to let me know that it processed, and thus double submits a transaction record, because the original request had not yet finished, and thus not committed the transaction record. So I've tried wrapping the insertion code with a new SessionScope, TransactionScope(TransactionMode.New), combination of both, flushing everything manually, etc. However, when I call Refresh on the object I'm still getting the old object state. Here's some code sample for what I'm seeing: Post outsidePost = Post.Find(id); // status of this post is Status.Old using (TransactionScope transaction = new TransactionScope(TransactionMode.New)) { Post p = Post.Find(id); p.Status = Status.New; // new status set here p.Update(); SessionScope.Current.Flush(); transaction.Flush(); transaction.VoteCommit(); } outsidePost.Refresh(); // refresh doesn't get the new status, status is still Status.Old Any suggestions, ideas, and comments are appreciated!

    Read the article

  • WYSIHAT Photos upload not working - Paperclip - Ruby on Rails

    - by bgadoci
    I just successfully installed WysiHat in my rails blog. Seems that the 'add a picture' feature is not working. It successfully allows me to find and select a picture from my desktop but upon clicking save, it does nothing. I also have Paperclip successfully installed. I am wondering if this may have something to do with it. Perhaps Paperclip is getting in the way, or, perhaps I need to connect Paperclip and WysiHat somehow. Any ideas? (let me know if I need to post any code). Also, WysiHat-engine uses facebox, not sure if that is relevant. UPDATE: Added Server Log, looks like paperclip is saving it so not sure what else is going wrong. Processing PostsController#update (for 127.0.0.1 at 2010-04-23 16:42:14) [PUT] Parameters: {"commit"=>"Update", "post"=>{"body"=>"<p>Duis autem vel eum iriure dolor in hendrerit in vulputate velit esse molestie consequat, vel illum dolore eu feugiat nulla facilisis at vero eros et accumsan et iusto odio dignissim qui blandit praesent luptatum zzril delenit augue duis dolore te feugait nulla facilisi. Lorem ipsum dolor sit amet, consectetuer adipiscing elit, sed diam nonummy nibh euismod tincidunt ut laoreet dolore magna aliquam erat volutpat.</p>", "title"=>"Rails Code for Search"}, "authenticity_token"=>"hndm6pxaPLfgnSMFAmLDGNo86mZG3XnlfJoNOI/P+O8=", "id"=>"105"} Post Load (0.2ms) SELECT * FROM "posts" WHERE ("posts"."id" = 105) Post Update (0.3ms) UPDATE "posts" SET "updated_at" = '2010-04-23 21:42:14', "body" = '<p>Duis autem vel eum iriure dolor in hendrerit in vulputate velit esse molestie consequat, vel illum dolore eu feugiat nulla facilisis at vero eros et accumsan et iusto odio dignissim qui blandit praesent luptatum zzril delenit augue duis dolore te feugait nulla facilisi. Lorem ipsum dolor sit amet, consectetuer adipiscing elit, sed diam nonummy nibh euismod tincidunt ut laoreet dolore magna aliquam erat volutpat.</p>' WHERE "id" = 105 [paperclip] Saving attachments. Redirected to http://localhost:3000/posts/105 Completed in 12ms (DB: 0) | 302 Found [http://localhost/posts/105]
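
    One thing worth checking, purely as a guess from the log above (no file parameter shows up in Parameters at all, so the upload may never reach the controller): the form that submits the image has to be multipart for Paperclip to receive the attachment. A minimal sketch in Rails 2.x syntax, where :photo is an assumed Paperclip attachment name, not something taken from the question:

        <% form_for @post, :html => { :multipart => true } do |f| %>
          <%= f.file_field :photo %>   <!-- :photo is an assumed attachment name -->
          <%= f.submit "Update" %>
        <% end %>

    If the picture is instead meant to be uploaded by wysihat-engine's own facebox dialog, the thing to verify is whether that dialog's request appears in the log at all.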

    Read the article

  • How to select and crop an image in android?

    - by Guy
    Hey, I am currently working on a live wallpaper and I allow the user to select an image which will go behind my effects. Currently I have: Intent i = new Intent(Intent.ACTION_PICK, android.provider.MediaStore.Images.Media.EXTERNAL_CONTENT_URI); i.putExtra("crop", "true"); startActivityForResult(i, 1); And slightly under that: @Override public void onActivityResult(int requestCode, int resultCode, Intent data) { super.onActivityResult(requestCode, resultCode, data); if (requestCode == 1) if (resultCode == Activity.RESULT_OK) { Uri selectedImage = data.getData(); Log.d("IMAGE SEL", "" + selectedImage); // TODO Do something with the select image URI SharedPreferences customSharedPreference = getSharedPreferences("imagePref", Activity.MODE_PRIVATE); SharedPreferences.Editor editor = customSharedPreference.edit(); Log.d("HO", "" + selectedImage); editor.putString("imagePref", getRealPathFromURI(selectedImage)); Log.d("IMAGE SEL", getRealPathFromURI(selectedImage)); editor.commit(); } } When my code is ran, Logcat tells me that selectedImage is null. If I comment out the i.putExtra("crop", "true"): Logcat does not give me the null pointer exception, and I am able to do what I want with the image. So, what is the problem here? Does any one have any idea how I can fix this? Thanks, for your time.
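
    For what it's worth, the "crop" extra is not part of the public Android API, and on many devices the cropped result does not come back through data.getData() at all - it arrives either as a small Bitmap in the "data" extra or at a location you specify yourself. A hedged sketch of the second approach; the extra names below are the ones the stock Gallery app happens to read, and they are not guaranteed on every device:

        File tmp = new File(android.os.Environment.getExternalStorageDirectory(), "wallpaper_src.jpg"); // assumed temp location
        Uri output = Uri.fromFile(tmp);

        Intent i = new Intent(Intent.ACTION_PICK,
                android.provider.MediaStore.Images.Media.EXTERNAL_CONTENT_URI);
        i.putExtra("crop", "true");
        i.putExtra("aspectX", 1);
        i.putExtra("aspectY", 1);
        i.putExtra("outputX", 256);
        i.putExtra("outputY", 256);
        i.putExtra(android.provider.MediaStore.EXTRA_OUTPUT, output); // cropped result is written here
        startActivityForResult(i, 1);

        // in onActivityResult, read the image from 'output' instead of data.getData()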

    Read the article

  • jqModal dialog - only displaying once

    - by James Inman
    I've got a jqModal dialog with the following code: $(document).ready( function() { $('td.item.active').click( function(e) { $(this).append( '<div class="new">&nbsp;</div>' ); $("#jqModal").jqm({ modal:true, onHide: function(e) { e.w.hide(); // Hide window e.o.remove(); // Remove overlay } }); $('#jqModal').jqmShow(); $('input#add_session').click( function(e) { e.preventDefault(); $('#jqModal').hide(); $('.jqmOverlay').remove(); }); }); }); The HTML used is as follows: <div id="jqModal" class="jqmWindow"> <form action="" method="post"> <ul> <li> <input id="add_session" name="commit" type="submit" value="Add Session" /> <input type="button" name="close" value="Close" id="close" class="jqmClose" /> </li> </ul> </form> </div> When I first click a <td>, the dialog launches without a problem. On the second click (or subsequent), the new class is added to the <div>, but the dialog doesn't launch.
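
    Part of the problem may be that the dialog is re-initialised with .jqm() and the overlay is removed by hand on every click, which can leave jqModal's internal state out of sync after the first open. A hedged sketch of initialising once and letting jqModal manage its own overlay, using the same element ids as above:

        $(document).ready(function() {
            // initialise the modal once, not inside the td click handler
            $('#jqModal').jqm({ modal: true });

            $('td.item.active').click(function() {
                $(this).append('<div class="new">&nbsp;</div>');
                $('#jqModal').jqmShow();
            });

            $('input#add_session').click(function(e) {
                e.preventDefault();
                $('#jqModal').jqmHide();   // jqModal's own hide, instead of .hide()/.remove()
            });
        });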

    Read the article

  • SQL deadlock on delete then bulk insert

    - by StarLite
    I have an issue with a deadlock in SQL Server that I haven't been able to resolve. Basically I have a large number of concurrent connections (from many machines) that are executing transactions where they first delete a range of entries and then re-insert entries within the same range with a bulk insert. Essentially, the transaction looks like this:

        BEGIN TRANSACTION T1

        DELETE FROM [TableName] WITH( XLOCK HOLDLOCK )
        WHERE [Id]=@Id AND [SubId]=@SubId

        INSERT BULK [TableName] (
            [Id] Int,
            [SubId] Int,
            [Text] VarChar(max) COLLATE SQL_Latin1_General_CP1_CI_AS
        ) WITH(CHECK_CONSTRAINTS, FIRE_TRIGGERS)

        COMMIT TRANSACTION T1

    The bulk insert only inserts items matching the Id and SubId of the deletion in the same transaction. Furthermore, these Id and SubId entries should never overlap. When I have enough concurrent transactions of this form, I start to see a significant number of deadlocks between these statements. I added the locking hints XLOCK HOLDLOCK to attempt to deal with the issue, but they don't seem to be helping. The canonical deadlock graph for this error shows:

        Connection 1:
            Holds RangeX-X on PK_TableName
            Holds IX Page lock on the table
            Requesting X Page lock on the table

        Connection 2:
            Holds IX Page lock on the table
            Requests RangeX-X lock on the table

    What do I need to do in order to ensure that these deadlocks don't occur? I have been doing some reading on the RangeX-X locks and I'm not sure I fully understand what is going on with these. Do I have any options short of locking the entire table here?
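
    One way to sidestep the key-range/page lock interaction entirely is to serialize the delete-plus-insert for a given (Id, SubId) pair with an application lock, so two transactions touching the same range never interleave, while different ranges stay fully concurrent. A sketch under the assumption that every writer follows the same convention:

        BEGIN TRANSACTION T1

        -- every writer for a given (Id, SubId) queues on the same lock name,
        -- while writers for other ranges proceed in parallel
        DECLARE @lockName nvarchar(255)
        SET @lockName = N'TableName_' + CAST(@Id AS nvarchar(20)) + N'_' + CAST(@SubId AS nvarchar(20))

        EXEC sp_getapplock @Resource = @lockName, @LockMode = 'Exclusive', @LockOwner = 'Transaction'

        DELETE FROM [TableName] WHERE [Id] = @Id AND [SubId] = @SubId

        -- ... bulk insert the replacement rows here, as in the original transaction ...

        COMMIT TRANSACTION T1   -- the app lock is released when the transaction ends

    With the app lock doing the serialization, the XLOCK HOLDLOCK hints on the DELETE are usually no longer needed.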

    Read the article

  • Refactoring ADO.NET - SqlTransaction vs. TransactionScope

    - by marc_s
    I have "inherited" a little C# method that creates an ADO.NET SqlCommand object and loops over a list of items to be saved to the database (SQL Server 2005). Right now, the traditional SqlConnection/SqlCommand approach is used, and to make sure everything works, the two steps (delete old entries, then insert new ones) are wrapped into an ADO.NET SqlTransaction. using (SqlConnection _con = new SqlConnection(_connectionString)) { using (SqlTransaction _tran = _con.BeginTransaction()) { try { SqlCommand _deleteOld = new SqlCommand(......., _con); _deleteOld.Transaction = _tran; _deleteOld.Parameters.AddWithValue("@ID", 5); _con.Open(); _deleteOld.ExecuteNonQuery(); SqlCommand _insertCmd = new SqlCommand(......, _con); _insertCmd.Transaction = _tran; // add parameters to _insertCmd foreach (Item item in listOfItem) { _insertCmd.ExecuteNonQuery(); } _tran.Commit(); _con.Close(); } catch (Exception ex) { // log exception _tran.Rollback(); throw; } } } Now, I've been reading a lot about the .NET TransactionScope class lately, and I was wondering, what's the preferred approach here? Would I gain anything (readibility, speed, reliability) by switching to using using (TransactionScope _scope = new TransactionScope()) { using (SqlConnection _con = new SqlConnection(_connectionString)) { .... } _scope.Complete(); } What you would prefer, and why? Marc

    Read the article

  • problem when trying to empty a stack in c

    - by frx08
    Hi all, (probably it's a stupid thing, but) I have a problem with a stack implementation in C: when I try to empty it, the function that empties the stack goes into an infinite loop - the top of the stack is never null. Where am I making a mistake? Thanks, bye!

        #include <stdio.h>
        #include <stdlib.h>

        typedef struct stack{
            size_t a;
            struct stack *next;
        } stackPos;

        typedef stackPos *ptr;

        void push(ptr *top, size_t a){
            ptr temp;
            temp = malloc(sizeof(stackPos));
            temp->a = a;
            temp->next = *top;
            *top = temp;
        }

        void freeStack(ptr *top){
            ptr temp = *top;
            while(*top!=NULL){ //the program does an infinite loop
                *top = temp->next;
                free(temp);
            }
        }

        int main(){
            ptr top = NULL;
            push(&top, 4);
            push(&top, 8);
            //down here the problem
            freeStack(&top);
            return 0;
        }
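
    The loop never moves temp forward: after the first pass temp still points at the node that was just freed, so temp->next reads freed memory and *top never reaches NULL. A sketch of the usual fix, re-reading the top inside the loop before freeing:

        void freeStack(ptr *top){
            while (*top != NULL) {
                ptr temp = *top;      /* remember the current node            */
                *top = temp->next;    /* advance the head, then free the node */
                free(temp);
            }
        }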

    Read the article

  • Hibernate one-to-one mapping

    - by Andrey Yaskulsky
    I have one-to-one hibernate mapping between class Student and class Points: @Entity @Table(name = "Users") public class Student implements IUser { @Id @Column(name = "id") private int id; @Column(name = "name") private String name; @Column(name = "password") private String password; @OneToOne(fetch = FetchType.EAGER, mappedBy = "student") private Points points; @Column(name = "type") private int type = getType(); //gets and sets... @Entity @Table(name = "Points") public class Points { @GenericGenerator(name = "generator", strategy = "foreign", parameters = @Parameter(name = "property", value = "student")) @Id @GeneratedValue(generator = "generator") @Column(name = "id", unique = true, nullable = false) private int Id; @OneToOne @PrimaryKeyJoinColumn private Student student; //gets and sets And then i do: Student student = new Student(); student.setId(1); student.setName("Andrew"); student.setPassword("Password"); Points points = new Points(); points.setPoints(0.99); student.setPoints(points); points.setStudent(student); Session session = HibernateUtil.getSessionFactory().getCurrentSession(); session.beginTransaction(); session.save(student); session.getTransaction().commit(); And hibernate saves student in the table but not saves corresponding points. Is it OK? Should i save points separately?
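
    That behaviour is expected with this mapping: session.save(student) does not cascade to the Points instance unless the association says so, so either save points with its own session.save(points) call or add a cascade on the student side. A minimal sketch of the cascade option, everything else staying as in the mapping above:

        @OneToOne(fetch = FetchType.EAGER, mappedBy = "student", cascade = CascadeType.ALL)
        private Points points;

    With the cascade in place, setting both sides of the association (as the code above already does) is enough for one save(student) to persist both rows.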

    Read the article

  • Transaction Isolation Level of Serializable not working for me

    - by Shahriar
    I have a website which is used by all branches of a store, and what it does is record customer purchases in a table called myTransactions. The myTransactions table has a column named SerialNumber. For each purchase I create a record in the transactions table and assign a serial to it. The stored procedure that does this calls a UDF to get a new SerialNumber before inserting the record, like below:

        Create Procedure mytransaction_Insert as
        begin
            insert into myTransactions(column1,column2,column3,...SerialNumber)
            values( Value1 ,Value2,Value3,...., getTransactionNSerialNumber())
        end

        Create function getTransactionNSerialNumber as
        begin
            RETURN isnull(SELECT TOP (1) SerialNumber FROM myTransactions READUNCOMMITTED ORDER BY SerialNumber DESC),0) + 1
        end

    The website is used by so many users in different stores at the same time that it is creating many duplicate SerialNumbers (the same SerialNumber). So I wrapped the insert in a SQL transaction at the ReadCommitted level and still got duplicate serial numbers. I changed it to SERIALIZABLE in order to lock the resources, and I not only got duplicate serial numbers (!!HOW!!) but also sporadic deadlocks between the same stored procedure calls. This is what I tried (with try/catch blocks and rollbacks omitted):

        Create Procedure mytransaction_Insert as
        begin
            SET TRANSACTION ISOLATION LEVEL SERIALIZABLE
            BEGIN TRANSACTION ins

            insert into myTransactions(column1,column2,column3,...SerialNumber)
            values( Value1 ,Value2 , Value3, ...., getTransactionNSerialNumber())

            COMMIT TRANSACTION ins
            SET TRANSACTION ISOLATION LEVEL READ COMMITTED
        end

    I even copied the function that gets the serial number directly into the stored procedure instead of calling the UDF, and still got duplicate SerialNumbers. So, how can a stored procedure create something like the C# lock() {} block? By the way, I have to implement the transaction serial number using this same pattern: I can't change SerialNumber to an identity field or anything else, and for various reasons I need to generate the SerialNumber inside the database rather than at the application level. Thank you.
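
    For what it's worth, the READUNCOMMITTED hint on the max lookup is precisely what lets two sessions see the same "latest" serial; reading it with an update lock inside the insert's own transaction serializes the generation without needing SERIALIZABLE. A sketch that inlines the lookup into the procedure, with table and column names as in the question and placeholder parameters whose types are assumed:

        Create Procedure mytransaction_Insert
            @Value1 int, @Value2 int, @Value3 int   -- placeholder parameters, types assumed
        as
        begin
            BEGIN TRANSACTION ins

            DECLARE @serial int
            -- UPDLOCK/HOLDLOCK: a concurrent caller waits here instead of reading the same max
            SELECT @serial = ISNULL(MAX(SerialNumber), 0) + 1
            FROM myTransactions WITH (UPDLOCK, HOLDLOCK)

            INSERT INTO myTransactions(column1, column2, column3, SerialNumber)
            VALUES (@Value1, @Value2, @Value3, @serial)

            COMMIT TRANSACTION ins
        end

    The effect is close to a lock() block around just the serial generation: callers queue on the max read, not on the whole table for the whole transaction.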

    Read the article

  • Versions. "Is not a working copy"

    - by bartclaeys
    A little background first: I'm a designer/developer and decided to use subversion for a personal project. I'm the only one working on this project. I've setup a Beanstalk account and installed Versions on Mac. Locally I have MySQL and PHP running through MAMP. What I want to do is develop locally and push code into Beanstalk. I'm not planning on deploying from Beanstalk to my live server at this moment. In Beanstalk I created a repository and imported all my code. I then installed Versions and added a bookmark to the Beanstalk repository. So far so good. Next I guess (this is a wild guess) I need to add a so called 'working copy bookmark' so that Versions can watch my local copy for changes and commit it to my Beanstalk repository. Problem: When I click 'Create working copy bookmark' in Versions and I select a folder on my computer I get the error: '/Applications/MAMP/www_mydomain' is not a working copy' I have no clue what that means and now I'm stuck. How can I tell Versions to keep track of changes of a local folder?
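
    The message usually just means that the folder was never checked out from the repository: Subversion (and therefore Versions) can only treat a directory as a working copy if it contains the .svn metadata that a checkout creates. Since the code is already imported into Beanstalk, the idea - shown here on the command line with a made-up repository URL; Beanstalk displays the real one on the repository page - is to check out a fresh working copy and then point MAMP (and the Versions bookmark) at that folder:

        svn checkout https://youraccount.svn.beanstalkapp.com/yourrepo /Applications/MAMP/www_mydomain_checkout
        # then point MAMP's document root (or replace the old folder) at the checked-out copy

    Versions can also do the same thing from its own UI via a checkout on the repository bookmark, if I remember the workflow correctly; the key point is checkout first, working-copy bookmark second.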

    Read the article

  • How implement the Open Session in View pattern in NHibernate?

    - by MCardinale
    I'm using ASP.NET MVC + NHibernate + Fluent NHibernate and having a problem with lazy loading. Through this question (http://stackoverflow.com/questions/2519964/how-to-fix-a-nhibernate-lazy-loading-error-no-session-or-session-was-closed), I've discovered that I have to implement the Open Session in View pattern , but I don't know how. In my repositories classes, I use methods like this public ImageGallery GetById(int id) { using(ISession session = NHibernateSessionFactory.OpenSession()) { return session.Get<ImageGallery>(id); } } public void Add(ImageGallery imageGallery) { using(ISession session = NHibernateSessionFactory.OpenSession()) { using(ITransaction transaction = session.BeginTransaction()) { session.Save(imageGallery); transaction.Commit(); } } } And this is my Session Factory helper class: public class NHibernateSessionFactory { private static ISessionFactory _sessionFactory; private static ISessionFactory SessionFactory { get { if(_sessionFactory == null) { _sessionFactory = Fluently.Configure() .Database(MySQLConfiguration.Standard.ConnectionString(MyConnString)) .Mappings(m => m.FluentMappings.AddFromAssemblyOf<ImageGalleryMap>()) .ExposeConfiguration(c => c.Properties.Add("hbm2ddl.keywords", "none")) .BuildSessionFactory(); } return _sessionFactory; } } public static ISession OpenSession() { return SessionFactory.OpenSession(); } } Someone could help me to implements Open Session in View pattern? Thank you.
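
    A common way to implement the pattern in ASP.NET is an IHttpModule that opens one ISession at the start of each request, binds it to NHibernate's current-session context, and disposes it at the end; repositories then ask the factory for the current session instead of opening their own. A rough sketch, not a drop-in implementation: it assumes current_session_context_class is set to "web" (e.g. via ExposeConfiguration) and that the helper class exposes its ISessionFactory through a public Factory property (it is private in the snippet above):

        using System.Web;
        using NHibernate;
        using NHibernate.Context;

        public class NHibernateSessionModule : IHttpModule
        {
            public void Init(HttpApplication app)
            {
                // one session per request, bound to NHibernate's current-session context
                app.BeginRequest += (sender, e) =>
                    CurrentSessionContext.Bind(NHibernateSessionFactory.OpenSession());

                app.EndRequest += (sender, e) =>
                {
                    ISession session = CurrentSessionContext.Unbind(NHibernateSessionFactory.Factory);
                    if (session != null)
                        session.Dispose();
                };
            }

            public void Dispose() { }
        }

    The repository methods then become calls like NHibernateSessionFactory.Factory.GetCurrentSession().Get<ImageGallery>(id), with no using blocks, so lazy loading still works when the view renders; the module itself has to be registered in web.config.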

    Read the article

  • Pitfalls and practical Use-Cases: Toplink, Hibernate, Eclipse Link, Ibatis ...

    - by Martin K.
    I have worked a lot with Hibernate as my JPA implementation. In most cases it works fine! But I have also seen a lot of pitfalls: Remoting with persisted objects is difficult, because Hibernate replaces the Java collections with its own collection implementations, so every client must have the Hibernate .jar libraries and you have to take care of LazyLoading exceptions etc. (one way to get around this is to use webservices). Dirty checking is done against the database without any lock, and this "delayed SQL" means the data access isn't ACID compliant (lost data...). Implicit updates mean we don't know whether an object has been modified or not (commit causes updates). Are there similar issues with Toplink, Eclipse Link and Ibatis? When should I use them? Do they have similar performance? Are there reasons to choose Eclipse Link/Toplink... over Hibernate?

    Read the article

  • Explaining verity index and document search limits

    - by Ahmad
    At present we have a CF8 Standard Edition server, which has some limitations around Verity indexing. According to Adobe, Verity Server has the following document search limits (the limits apply to all collections registered to Verity Server):

        - 10,000 documents for ColdFusion Developer Edition
        - 125,000 documents for ColdFusion Standard Edition
        - 250,000 documents for ColdFusion Enterprise Edition

    We have now reached a stage where the server-wide number of documents indexed exceeds 125k. However, the largest Verity collection consists of about 25k documents (and this is expected to grow), and only one collection is ever searched at a time. My understanding is that this means I can still search an entire collection with no restrictions. Is this correct? Or does it mean that only documents that were indexed, across all collections, prior to reaching the limit are actually searchable? We are considering moving to CF9 Standard as a solution and using Solr, which has no such restrictions. The coldfusionjedi blog highlights some differences between Verity and Solr, but before we commit to an upgrade I am trying to gain a clearer understanding. Can someone provide a clear explanation of what this limit means and how it actually affects Verity searching and indexing?

    Read the article

  • Django Cannot set values on a ManyToManyField which specifies an intermediary model

    - by dana
    I am using a m2m field with a through table, and when I was trying to save, my error was: "Cannot set values on a ManyToManyField which specifies an intermediary model". So I've modified my code so that when I save the form it also inserts data into the 'through' table, but now I'm getting another error (the lines I think are wrong are marked below). I have in models.py:

        class Classroom(models.Model):
            user = models.ForeignKey(User, related_name = 'classroom_creator')
            classname = models.CharField(max_length=140, unique = True)
            date = models.DateTimeField(auto_now=True)
            open_class = models.BooleanField(default=True)
            members = models.ManyToManyField(User, related_name="list of invited members", through = 'Membership')

        class Membership(models.Model):
            accept = models.BooleanField(User)
            date = models.DateTimeField(auto_now = True)
            classroom = models.ForeignKey(Classroom, related_name = 'classroom_membership')
            member = models.ForeignKey(User, related_name = 'user_membership')

    and in the view:

        def save_classroom(request):
            if request.method == 'POST':
                form = ClassroomForm(request.POST, request.FILES, user = request.user)
                classroom_instance = Classroom        # <-- suspect line
                member_instance = Membership          # <-- suspect line
                if form.is_valid():
                    new_obj = form.save(commit=False)
                    new_obj.user = request.user
                    r = Relations.objects.filter(initiated_by = request.user)
                    membership = Membership.objects.create(classroom = classroom_instance,   # <-- suspect line
                                                           member = member_instance,
                                                           date=datetime.datetime.now())
                    new_obj.save()
                    form.save_m2m()
                    return HttpResponseRedirect('/classroom/classroom_view/{{user}}/')
            else:
                form = ClassroomForm(user = request.user)
            return render_to_response('classroom/classroom_form.html', {
                'form': form,
            }, context_instance=RequestContext(request))

    But I don't seem to be initialising classroom_instance and member_instance correctly. My error is: Cannot assign "": "Membership.classroom" must be a "Classroom" instance. Thanks!
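
    The error comes from assigning the model classes themselves (Classroom, Membership) rather than instances of them. One sketch of the usual fix, kept close to the view above: save the classroom first so it has a primary key, then create the through row from real instances (which User to add as the member depends on the application; request.user is used here only as an illustration):

        if form.is_valid():
            new_obj = form.save(commit=False)
            new_obj.user = request.user
            new_obj.save()                      # the classroom must exist before its memberships

            # create the through-table row with instances, not the model classes
            Membership.objects.create(
                classroom=new_obj,
                member=request.user,            # or whichever User is being invited
                accept=False,
            )
            return HttpResponseRedirect('/classroom/classroom_view/%s/' % request.user.username)

    The date field can be left out because auto_now fills it in on save.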

    Read the article

  • How do you deal with denormalization / secondary indexes in database sharding?

    - by Continuation
    Say I have a "message" table with 2 secondary indexes: "recipient_id" "sender_id" I want to shard the "message" table by "recipient_id". That way to retrieve all messages sent to a certain recipient I only need to query one shard. But at the same time, I want to be able to make a query that ask for all messages sent by a certain sender. Now I don't want to send that query to every single shard of the "message" table. One way to do this is to duplicate the data and have a "message_by_sender" table sharded by "sender_id". The problem with that approach is that every time a message has been sent, I need to insert the message into both "message" and "message_by_sender" tables. But what if after inserting into "message" the insertion into "message_by_sender" fail? In that case the message exists in "message" but not in "message_by_sender". How do I make sure that if a message exists in "message" then it also exists in "message_by_sender" without resorting to 2 phase commit? This must be a very common issue for anyone who shards their databases. How do you deal woth it?

    Read the article

  • In git, how can I get the diff between two dates?

    - by Weidenrinde
    Basically, I am looking for something equivalent to cvs diff -D"1 day ago" -D"2010-02-29 11:11". While collecting more and more information, I found a solution; I paste it here for others who might have similar problems. Things I have tried:

        git whatchanged --since="1 day ago" -p

    This is how "In git, how can I get the diff between all the commits that occurred between two dates?" was answered, but it gives a diff for each commit, even if there are multiple commits to one file. I know that "date" is a bit of a loose concept in git, but I thought there must be some way to do this.

        git diff 'master@{1 day ago}..master'

    gives the warning "warning: Log for 'master' only goes back to Tue, 16 Mar 2010 14:17:32 +0100." and does not show old diffs.

        git format-patch --since=yesterday --stdout

    does not give anything.

        revs=$(git log --pretty="format:%H" --since="1 day ago"); git diff $(echo "$revs"|tail -n1) $(echo "$revs"|head -n1)

    works somehow, but seems complicated and does not restrict to the current branch.

        git diff $(git rev-list -n1 --before="1 day ago" master)

    seems to work and is the most standard way of doing this kind of thing, although more complicated than I expected. Funnily, git-cvsserver does not support "cvs diff -D" (not that this is documented anywhere).
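
    For the record, the same rev-list trick extends to the original two-date form of the cvs command by resolving each date to the nearest commit and diffing those two; a sketch using the dates from the question:

        git diff $(git rev-list -n1 --before="2010-02-29 11:11" master) \
                 $(git rev-list -n1 --before="1 day ago" master)

    Each rev-list call picks the last commit on master before the given date, so the result is the change between the two points in time on that branch only.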

    Read the article

  • The type of field isn't supported by declared persistence strategy "OneToMany"

    - by Robert
    We are new to JPA and trying to setup a very simple one to many relationship where a pojo called Message can have a list of integer group id's defined by a join table called GROUP_ASSOC. Here is the DDL: CREATE TABLE "APP"."MESSAGE" ( "MESSAGE_ID" INTEGER NOT NULL GENERATED ALWAYS AS IDENTITY (START WITH 1, INCREMENT BY 1) ); ALTER TABLE "APP"."MESSAGE" ADD CONSTRAINT "MESSAGE_PK" PRIMARY KEY ("MESSAGE_ID"); CREATE TABLE "APP"."GROUP_ASSOC" ( "GROUP_ID" INTEGER NOT NULL, "MESSAGE_ID" INTEGER NOT NULL ); ALTER TABLE "APP"."GROUP_ASSOC" ADD CONSTRAINT "GROUP_ASSOC_PK" PRIMARY KEY ("MESSAGE_ID", "GROUP_ID"); ALTER TABLE "APP"."GROUP_ASSOC" ADD CONSTRAINT "GROUP_ASSOC_FK" FOREIGN KEY ("MESSAGE_ID") REFERENCES "APP"."MESSAGE" ("MESSAGE_ID"); Here is the pojo: @Entity @Table(name = "MESSAGE") public class Message { @Id @Column(name = "MESSAGE_ID") @GeneratedValue(strategy = GenerationType.IDENTITY) private int messageId; @OneToMany(fetch=FetchType.LAZY, cascade=CascadeType.PERSIST) private List groupIds; public int getMessageId() { return messageId; } public void setMessageId(int messageId) { this.messageId = messageId; } public List getGroupIds() { return groupIds; } public void setGroupIds(List groupIds) { this.groupIds = groupIds; } } When we try to execute the following test code we get <openjpa-1.2.3-SNAPSHOT-r422266:907835 fatal user error> org.apache.openjpa.util.MetaDataException: The type of field "pojo.Message.groupIds" isn't supported by declared persistence strategy "OneToMany". Please choose a different strategy. Message msg = new Message(); List groups = new ArrayList(); groups.add(101); groups.add(102); EntityManager em = Persistence.createEntityManagerFactory("TestDBWeb").createEntityManager(); em.getTransaction().begin(); em.persist(msg); em.getTransaction().commit(); Help!
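
    The underlying issue is that @OneToMany expects the target to be another entity, while groupIds (a raw List in the snippet, presumably holding Integer group ids) is a collection of basic values. If JPA 2.0 is available, that case is mapped with @ElementCollection plus @CollectionTable instead; on OpenJPA 1.2, which is a JPA 1.0 provider, the provider-specific equivalent is org.apache.openjpa.persistence.PersistentCollection. A hedged sketch of the JPA 2.0 mapping against the GROUP_ASSOC table above:

        @ElementCollection(fetch = FetchType.LAZY)
        @CollectionTable(name = "GROUP_ASSOC",
                         joinColumns = @JoinColumn(name = "MESSAGE_ID"))
        @Column(name = "GROUP_ID")
        private List<Integer> groupIds;

    Declaring the field as List<Integer> rather than a raw List also gives the provider the element type it needs.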

    Read the article

  • Unable to Git-push master to Github

    - by Masi
    This question is related to my problem in understanding rebase, branch and merge, and to the problem "How can you commit to your github account as you have a teamMate in your remote list?" I found out that other people have had the same problem. The problem seems to be related to /etc/xinet.d/.

    Problem: unable to push my local branch to my master branch at Github. I run

        git push origin master

    and get

        fatal: 'origin' does not appear to be a git repository
        fatal: The remote end hung up unexpectedly

    The error message suggests to me that the branch 'origin' is not in my local git repository, so Git stops connecting to Github. This is strange, since I have not removed the branch 'origin'. My git tree is

        dev
        * master
        ticgit
        remotes/Math/Math
        remotes/Math/master
        remotes/origin/master
        remotes/Masi/master

    How can you push your local branch to Github while you have a teamMate's branch in your local Git?

    VonC's answer solves the main problem. I put a passphrase on my ssh keys. Now I run

        $ git push github master

    and get

        Permission denied (publickey).
        fatal: The remote end hung up unexpectedly

    It seems that I need to give the passphrase to Git somehow. How can you make Git ask for the passphrase?
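
    The "Permission denied (publickey)" part is SSH rather than Git: once the key has a passphrase, something has to supply it, and the usual answer is ssh-agent. A sketch of checking the setup from a shell, assuming the key is at the default path (adjust if yours differs):

        eval `ssh-agent`             # start an agent for this shell if one isn't already running
        ssh-add ~/.ssh/id_rsa        # asks for the passphrase once, then caches it
        ssh -T git@github.com        # should greet you by username if the key is accepted
        git push github master

    On Mac OS X the Keychain can play the same role, so the passphrase prompt may appear as a GUI dialog the first time the key is used.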

    Read the article

  • What objects would get defined in a Bridge scoring app (Javascript)

    - by Alex Mcp
    I'm writing a Bridge (card game) scoring application as practice in javascript, and am looking for suggestions on how to set up my objects. I'm pretty new to OO in general, and would love a survey of how and why people would structure a program to do this (the "survey" begets the CW mark. Additionally, I'll happily close this if it's out of range of typical SO discussion). The platform is going to be a web-app for webkit on the iPhone, so local storage is an option. My basic structure is like this: var Team = { player1: , player2: , vulnerable: , //this is a flag for whether or //not you've lost a game yet, affects scoring scoreAboveLine: , scoreBelowLine: gamesWon: }; var Game = { init: ,//function to get old scores and teams from DB currentBid: , score: , //function to accept whether bid was made and apply to Teams{} save: //auto-run that is called after each game to commit to DB }; So basically I'll instantiate two teams, and then run loops of game.currentBid() and game.score. Functionally the scoring is working fine, but I'm wondering if this is how someone else would choose to break down the scoring of Bridge, and if there are any oversights I've made with regards to OO-ness and how I've abstracted the game. Thanks!

    Read the article

  • Why does ActiveMQ hold messages that should be deleted from Topic?

    - by rauch
    I use ActiveMQ as a notification system (pub/sub model). On the server side, whenever data changes, the server sends the updated data (a file) to a topic using BlobMessages. A few clients subscribe to this topic and receive the updated file if it exists on the topic. The problem is that all BlobMessages that were ever sent to the topic are held by ActiveMQ indefinitely.

        this.producer = new ProducerTool.Builder("tcp://localhost:61616?jms.blobTransferPolicy.uploadUrl=http://localhost:8161/fileserver/", "ServerProdTopic").topic(true)
                .transacted(false).durable(false).timeToLive(10000L).build();
        this.consumer = new ConsumerTool.Builder("tcp://localhost:61616", "ServerConsTopic").topic(true)
                .transacted(false).durable(false).build();
        consumer.setMessageListener(this);

    The file is sent like this:

        connection = createConnection();
        session = createSession(connection);
        producer = createProducer(session);
        BlobMessage blobMsg = ((ActiveMQSession) session).createBlobMessage(resource);
        blobMsg.setStringProperty("sourceName", resource.getName());
        producer.send(blobMsg);
        if (transacted) {
            System.out.println("Producer Committing...");
            session.commit();
        }

    where createConnection is:

        protected Connection createConnection() throws JMSException, Exception {
            ActiveMQConnectionFactory connectionFactory = new ActiveMQConnectionFactory(url);
            //connectionFactory.getBlobTransferPolicy().setUploadUrl("http://localhost:8161/fileserver/");
            Connection connection = connectionFactory.createConnection();
            connection.start();
            ((ActiveMQConnection) connection).setCopyMessageOnSend(false);
            return connection;
        }

    Everything I could think of is already set the way it should be: Session.AUTO_ACKNOWLEDGE, non-durable subscription, TimeToLive = 9000, JMSDeliveryMode = Non-Persistent. What I see at runtime, in the ActiveMQ directory ~/apache-activemq-5.3.0/webapps/fileserver/, is that all the files are there - both those delivered and those not delivered to subscribers. Why? Sometimes the server sends big files of about 1 GB, and even these files are kept in that directory, even after stopping the subscribers (clients), the publisher (server) and the ActiveMQ broker.

    Read the article

  • Best way to enforce inter-table constraints inside database

    - by FerranB
    I'm looking for the best way to check inter-table constraints, a step beyond what foreign keys can do. For instance, to check that a date value on a child record falls within a date range stored in two columns of its parent row:

        Parent table
        ID     DATE_MIN     DATE_MAX
        -----  ----------   ----------
        1      01/01/2009   01/03/2009
        ...

        Child table
        PARENT_ID   DATE
        ----------  ----------
        1           01/02/2009
        1           01/12/2009   <--- HAS TO FAIL!
        ...

    I see two approaches: create on-commit materialized views as shown in this article (or the equivalent on other RDBMSs), or use stored procedures and triggers. Any other approach? Which is the best option?

    UPDATE: The motivation of this question is not the "constraints in the database vs. in the application" debate; I think that is a tired question and everyone does it the way they prefer. And, sorry detractors, I'm developing with constraints in the database. So the question is: which is the best option for managing inter-table constraints inside the database? I've added "inside database" to the question title.

    UPDATE 2: Someone added the "oracle" tag. Materialized views are of course an Oracle tool, but I'm interested in any option, regardless of whether it's Oracle or another RDBMS.
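
    Alongside the on-commit materialized-view approach, the other common pattern is a child-side trigger that re-reads the parent row and rejects out-of-range dates; it runs row by row but keeps the rule inside the database. An Oracle-flavoured sketch against the example tables, with table and column names assumed from the question (DATE itself is a reserved word, so the child column is called date_value here):

        CREATE OR REPLACE TRIGGER child_date_in_parent_range
        BEFORE INSERT OR UPDATE ON child_table
        FOR EACH ROW
        DECLARE
            v_min parent_table.date_min%TYPE;
            v_max parent_table.date_max%TYPE;
        BEGIN
            SELECT date_min, date_max
              INTO v_min, v_max
              FROM parent_table
             WHERE id = :NEW.parent_id
               FOR UPDATE;   -- stops the parent's range changing underneath this row

            IF :NEW.date_value NOT BETWEEN v_min AND v_max THEN
                RAISE_APPLICATION_ERROR(-20001, 'Child date outside parent range');
            END IF;
        END;

    The weak spot of the trigger approach is the other direction: an UPDATE that narrows the parent's DATE_MIN/DATE_MAX needs its own check (or the materialized-view constraint) to catch child rows that fall outside the new range.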

    Read the article
