Search Results

Search found 5295 results on 212 pages for 'transaction scope'.

Page 5/212 | < Previous Page | 1 2 3 4 5 6 7 8 9 10 11 12  | Next Page >

  • Transaction classification. Artificial intelligence

    - by Alex
    For a project, I have to classify a list of banking transactions based on their description. Suppose I have 2 categories: health and entertainment. Initially, the transactions will have basic information: date and time, amount, and a description given by the user. For example:

    Transaction 1: 09/17/2012 12:23:02 pm - 45.32$ - "medicine payments"
    Transaction 2: 09/18/2012 1:56:54 pm - 8.99$ - "movie ticket"
    Transaction 3: 09/18/2012 7:46:37 pm - 299.45$ - "dentist appointment"
    Transaction 4: 09/19/2012 6:50:17 am - 45.32$ - "videogame shopping"

    The idea is to use that description to classify the transaction: 1 and 3 would go to the "health" category, while 2 and 4 would go to "entertainment". I want to use the Google Prediction API to do this. In reality, I have 7 different categories, and for each one, a lot of keywords related to that category. I would use some for training and some for testing. Is this even possible? I mean, to determine the category given a few words? Plus, the number of words is not necessarily the same in every transaction. Thanks for any help or guidance! Very appreciated. Possible solution: https://developers.google.com/prediction/docs/hello_world?hl=es#theproblem
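
    Prediction API aside, a plain keyword model is enough to sanity-check whether a few description words can separate the categories. Below is a minimal stand-in sketch in Python (not the Prediction API; the keyword sets are invented examples) that scores a description by how many of its words appear in each category's keyword list:

    ```python
    # Minimal keyword-overlap classifier -- a stand-in sketch, not the
    # Google Prediction API. Training keywords below are invented examples.
    TRAINING = {
        "health": {"medicine", "dentist", "doctor", "pharmacy", "hospital"},
        "entertainment": {"movie", "ticket", "videogame", "concert", "game"},
    }

    def classify(description):
        words = set(description.lower().split())
        # Score each category by keyword overlap; ties fall to the first max.
        scores = {cat: len(words & kws) for cat, kws in TRAINING.items()}
        return max(scores, key=scores.get)

    for desc in ["medicine payments", "movie ticket",
                 "dentist appointment", "videogame shopping"]:
        print(desc, "->", classify(desc))
    ```

    If a crude overlap score like this already separates your 7 categories on held-out descriptions, a trained classifier such as the Prediction API should do at least as well.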

    Read the article

  • In languages which create a new scope each time in a loop block, a new local copy of the local loop

    - by Jian Lin
    It seems that in languages like C, Java, and Ruby (as opposed to JavaScript), a new scope is created for each iteration of a loop block, and the variable defined for the loop is made into a new local variable every single time and recorded in this new scope. For example, in Ruby:

    ```ruby
    p RUBY_VERSION

    $foo = []

    (1..5).each do |i|
      $foo[i] = lambda { p i }
    end

    (1..5).each do |j|
      $foo[j].call()
    end
    ```

    The printout is:

    ```
    [MacBook01:~] $ ruby scope.rb
    "1.8.6"
    1
    2
    3
    4
    5
    [MacBook01:~] $
    ```

    So it looks like when a new scope is created, a new local copy of i is also created and recorded in that scope, so that when the lambda is executed at a later time, the i is found in those scope chains as 1, 2, 3, 4, 5 respectively. Is this true? (It sounds like a heavy operation.) Contrast that with:

    ```ruby
    p RUBY_VERSION

    $foo = []
    i = 0

    (1..5).each do |i|
      $foo[i] = lambda { p i }
    end

    (1..5).each do |j|
      $foo[j].call()
    end
    ```

    This time, i is defined before entering the loop, so Ruby 1.8.6 will not put this i in the new scope created for the loop block. Therefore, when i is looked up in the scope chain, it always refers to the i in the outer scope, and gives 5 every time:

    ```
    [MacBook01:~] $ ruby scope2.rb
    "1.8.6"
    5
    5
    5
    5
    5
    [MacBook01:~] $
    ```

    I heard that in Ruby 1.9, i will be treated as a local variable of the loop block even when there is an i defined earlier. Creating a new scope and a new local copy of i each time through the loop seems heavy, and it wouldn't have mattered if we were not invoking the lambdas at a later time. So when the functions don't need to be invoked later, can the interpreter (or a compiler for C / Java) optimize away the per-iteration copy?
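
    For what it's worth, the Ruby 1.9 behavior is easy to check directly: from 1.9 on, block parameters are always block-local and shadow any outer variable of the same name (running with -w warns about the shadowing), so the second script prints 1 through 5 as well. A quick sketch:

    ```ruby
    # Under Ruby >= 1.9 the block parameter |i| shadows the outer i,
    # so each lambda captures its own iteration value.
    i = 0
    foo = (1..5).map { |i| lambda { p i } }
    foo.each(&:call)   # => 1 2 3 4 5  (prints 5 5 5 5 5 under 1.8)
    p i                # => 0 under 1.9 (the outer i is untouched)
    ```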

    Read the article

  • Rails nested models and data separation by scope

    - by jobrahms
    I have Teacher, Student, and Parent models that all belong to User. This is so that a Teacher can create Students and Parents that can or cannot log into the app, depending on the teacher's preference. Student and Parent both accept nested attributes for User, so a Student and User object can be created in the same form. All four models also belong to Studio so I can do data separation by scope. The current studio is set in application_controller.rb by looking up the current subdomain. In my students controller (all of my controllers, actually) I'm using @studio.students.new instead of Student.new, etc., to scope the new student to the correct studio, and therefore the correct subdomain. However, the nested User does not pick up the studio from its parent - it gets set to nil. I was thinking that I could do something like params[:student][:user_attributes][:studio_id] = @student.studio.id in the controller, but that would require doing attr_accessible :studio_id in User, which would be bad. How can I make sure that the nested User picks up the same scope that the Student model gets when it's created?

    student.rb:

    ```ruby
    class Student < ActiveRecord::Base
      belongs_to :studio
      belongs_to :user, :dependent => :destroy
      attr_accessible :user_attributes
      accepts_nested_attributes_for :user, :reject_if => :all_blank
    end
    ```

    students_controller.rb:

    ```ruby
    def create
      @student = @studio.students.new
      @student.attributes = params[:student]
      if @student.save
        redirect_to @student, :notice => "Successfully created student."
      else
        render :action => 'new'
      end
    end
    ```

    user.rb:

    ```ruby
    class User < ActiveRecord::Base
      belongs_to :studio
      accepts_nested_attributes_for :studio
      attr_accessible :email, :password, :password_confirmation, :remember_me, :studio_attributes
      devise :invitable, :database_authenticatable, :recoverable, :rememberable, :trackable
    end
    ```
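
    One hedged way around this (a sketch, not necessarily the idiomatic Rails answer): assign the studio on the nested user in the controller after building the student. Assigning through the association touches the object directly, so it sidesteps mass assignment and attr_accessible entirely:

    ```ruby
    # In students_controller.rb -- set the nested user's studio explicitly.
    # Direct association assignment bypasses mass assignment, so no
    # attr_accessible :studio_id is needed on User.
    def create
      @student = @studio.students.new
      @student.attributes = params[:student]
      @student.user.studio = @studio if @student.user
      if @student.save
        redirect_to @student, :notice => "Successfully created student."
      else
        render :action => 'new'
      end
    end
    ```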

    Read the article

  • Can a large transaction log cause cpu hikes to occur

    - by Simon Rigby
    Hello all, I have a client with a very large database on SQL Server 2005. The total space allocated to the db is 15Gb, with roughly 5Gb to the db and 10Gb to the transaction log. Just recently a web application that connects to that db has been timing out. I have traced the actions on the web page and examined the queries that execute whilst these web operations are performed. There is nothing untoward in the execution plan. The query itself uses multiple joins but completes very quickly. However, the db server's CPU spikes to 100% for a few seconds. The issue occurs when several simultaneous users are working on the system (when I say several... read about 5). Under this load, timeouts start to occur. I suppose my question is: can a large transaction log cause issues with CPU performance? There is about 12Gb of free space on the disk currently. The configuration is a little out of my hands, but the db and log are both on the same physical disk. I appreciate that the log file is massive and needs attending to, but I'm just looking for a heads up as to whether this may cause CPU spikes (i.e. trying to find the correlation). The timeouts are a recent thing and this app has been responsive for a few years (i.e. it's a recent manifestation). Many thanks,
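
    As a first diagnostic step, a sketch of the usual checks (the database name ClientDb is a placeholder): confirm how full the log actually is, how big the files really are, and what the server is waiting on while the CPU spikes:

    ```sql
    -- How much of each database's log file is actually in use.
    DBCC SQLPERF(LOGSPACE);

    -- Physical file sizes for the suspect database (size is in 8 KB pages).
    USE ClientDb;
    SELECT name, size * 8 / 1024 AS size_mb, physical_name
    FROM sys.database_files;

    -- What is currently running and what it is waiting on.
    SELECT session_id, status, cpu_time, wait_type, wait_time
    FROM sys.dm_exec_requests
    WHERE session_id > 50;
    ```

    If the spikes coincide with log growth events rather than query execution, the wait types should point at I/O on the log file rather than CPU-bound work.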

    Read the article

  • Shrinking the transaction log of a mirrored SQL Server 2005 database

    - by Peter Di Cecco
    I've been looking all over the internet and I can't find an acceptable solution to my problem; I'm wondering if there even is a solution without a compromise... I'm not a DBA, but I'm a one man team working on a huge web site with no extra funding for extra bodies, so I'm doing the best I can. Our backup plan sucks, and I'm having a really hard time improving it. Currently, there are two servers running SQL Server 2005. I have a mirrored database (no witness) that seems to be working well. I do a full backup at noon and at midnight. These get backed up to tape by our service provider nightly, and I burn the backup files to DVD weekly to keep old records on hand. Eventually I'd like to switch to log shipping, since mirroring seems kinda pointless without a witness server. The issue is that the transaction log is growing non-stop. From the research I've done, it seems that I can't truncate the log file of a mirrored database. So how do I stop the file from growing!? Based on this web page, I tried this:

    ```sql
    USE dbname
    GO
    CHECKPOINT
    GO
    BACKUP LOG dbname TO DISK='NULL'
        WITH NOFORMAT, INIT, NAME = N'dbnameLog Backup', SKIP, NOREWIND, NOUNLOAD
    GO
    DBCC SHRINKFILE('dbname_Log', 2048)
    GO
    ```

    But that didn't work. Everything else I've found says I need to disable the mirror before running the backup log command in order for it to work.

    My Question (TL;DR): How can I shrink my transaction log file without disabling the mirror?
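
    For what it's worth, the standard remedy (sketched below; the database name and backup path are placeholders) doesn't require breaking the mirror: mirroring forces the FULL recovery model, but regular log backups to a real file are still allowed, and each one marks the backed-up log records as reusable, after which the file can be shrunk:

    ```sql
    -- Back the log up to an actual file (mirroring permits log backups;
    -- the path and database name here are placeholders).
    BACKUP LOG dbname TO DISK = N'D:\Backups\dbname_log.trn';
    GO
    -- With the log records backed up, the file can be shrunk.
    -- 2048 is the target size in MB.
    USE dbname;
    GO
    DBCC SHRINKFILE('dbname_Log', 2048);
    GO
    ```

    The backup then needs to run on a schedule; a one-off backup just delays the growth until the log fills again.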

    Read the article

  • MSDTC - Communication with the underlying transaction manager has failed (Firewall open, MSDTC netwo

    - by SocialAddict
    I'm having problems with my ASP.NET web forms system. It worked on our test server, but now that we are putting it live, one of the servers is within a DMZ and the SQL Server is outside of it (still on our network, though on a different subnet). I have opened up the firewall completely between these two boxes to see if that was the issue, and it still gives the error message "Communication with the underlying transaction manager has failed" whenever we try to use a TransactionScope. We can access the data for retrieval; it's just transactions that break it. We have also used msdtc ping to test the connection, and with the amendments on the firewall it pings successfully, but the same error occurs! How do I resolve this error? Any help would be great as we have a system to go live today. Panic :)

    Edit: I have created a more straightforward test page with a transaction, as below, and this works fine. Could a nested transaction cause this kind of error, and if so, why would this only cause an issue when using a live box in a DMZ with a firewall?

    ```csharp
    AuditRepository auditRepository = new AuditRepository();
    try
    {
        using (TransactionScope scope = new TransactionScope())
        {
            auditRepository.Add(DateTime.Now, 1, "TEST-TRANSACTIONS#1", 1);
            auditRepository.Save();
            auditRepository.Add(DateTime.Now, 1, "TEST-TRANSACTIONS#2", 1);
            auditRepository.Save();
            scope.Complete();
        }
    }
    catch (Exception ex)
    {
        Response.Write("Test Error For Transaction: " + ex.Message + "<br />" + ex.StackTrace);
    }
    ```

    This is the error stack we are getting when the problem occurs:

    ```
    at System.Transactions.TransactionInterop.GetOletxTransactionFromTransmitterPropigationToken(Byte[] propagationToken)
    at System.Transactions.TransactionStatePSPEOperation.PSPEPromote(InternalTransaction tx)
    at System.Transactions.TransactionStateDelegatedBase.EnterState(InternalTransaction tx)
    at System.Transactions.EnlistableStates.Promote(InternalTransaction tx)
    at System.Transactions.Transaction.Promote()
    at System.Transactions.TransactionInterop.ConvertToOletxTransaction(Transaction transaction)
    at System.Transactions.TransactionInterop.GetExportCookie(Transaction transaction, Byte[] whereabouts)
    at System.Data.SqlClient.SqlInternalConnection.GetTransactionCookie(Transaction transaction, Byte[] whereAbouts)
    at System.Data.SqlClient.SqlInternalConnection.EnlistNonNull(Transaction tx)
    at System.Data.SqlClient.SqlInternalConnection.Enlist(Transaction tx)
    at System.Data.SqlClient.SqlInternalConnectionTds.Activate(Transaction transaction)
    at System.Data.ProviderBase.DbConnectionInternal.ActivateConnection(Transaction transaction)
    at System.Data.ProviderBase.DbConnectionPool.GetConnection(DbConnection owningObject)
    at System.Data.ProviderBase.DbConnectionFactory.GetConnection(DbConnection owningConnection)
    at System.Data.ProviderBase.DbConnectionClosed.OpenConnection(DbConnection outerConnection, DbConnectionFactory connectionFactory)
    at System.Data.SqlClient.SqlConnection.Open()
    at System.Data.Linq.SqlClient.SqlConnectionManager.UseConnection(IConnectionUser user)
    at System.Data.Linq.SqlClient.SqlProvider.get_IsSqlCe()
    at System.Data.Linq.SqlClient.SqlProvider.InitializeProviderMode()
    at System.Data.Linq.SqlClient.SqlProvider.System.Data.Linq.Provider.IProvider.Execute(Expression query)
    at System.Data.Linq.ChangeDirector.StandardChangeDirector.DynamicInsert(TrackedObject item)
    at System.Data.Linq.ChangeDirector.StandardChangeDirector.Insert(TrackedObject item)
    at System.Data.Linq.ChangeProcessor.SubmitChanges(ConflictMode failureMode)
    at System.Data.Linq.DataContext.SubmitChanges(ConflictMode failureMode)
    at System.Data.Linq.DataContext.SubmitChanges()
    at RegBook.classes.DbBase.Save()
    at RegBook.usercontrols.BookingProcess.confirmBookingButton_Click(Object sender, EventArgs e)
    ```
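
    On the nested-transaction question: the stack trace shows a promotion to an OleTx (MSDTC) transaction. A plausible explanation, sketched below with a hypothetical connection string, is that the real code path opens more than one connection inside the same TransactionScope; on SQL Server 2005 the second open promotes the lightweight local transaction to a distributed one, and that promotion is exactly the step that needs working MSDTC traffic between the web box in the DMZ and the SQL box. The single-connection test page never promotes, which would explain why it works.

    ```csharp
    using System.Data.SqlClient;
    using System.Transactions;

    class PromotionDemo
    {
        static void Main()
        {
            // Placeholder -- point this at any SQL Server 2005 database.
            var connStr = "Server=.;Database=Test;Integrated Security=true";
            using (var scope = new TransactionScope())
            {
                using (var first = new SqlConnection(connStr))
                {
                    first.Open();   // first enlistment: lightweight local transaction
                    using (var second = new SqlConnection(connStr))
                    {
                        second.Open();  // second enlistment: promoted to MSDTC
                    }
                }
                scope.Complete();
            }
        }
    }
    ```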

    Read the article

  • Proving Transaction Processing in Azure

    - by dayscott
    Since I am writing a seminar paper on "Transaction Processing in MS Azure" for my university, I wanted to build a bank-transfer simulation. I have already implemented a getting-started thingy to get familiar with Azure: http://www.c-sharpcorner.com/UploadFile/dhananjaycoder/48/Default.aspx . Question: What is the easiest way (using SQL Azure) to implement a small app which (dis)proves that transactions in Azure are done properly? (e.g. no lost updates)
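
    One simple experimental design (a sketch; the Accounts table and column names are invented): hammer a transfer transaction from several concurrent clients and check an invariant that only holds if no update was lost:

    ```sql
    -- Sketch of one bank transfer against SQL Azure (schema is illustrative).
    BEGIN TRANSACTION;
    UPDATE Accounts SET Balance = Balance - 100 WHERE Id = 1;
    UPDATE Accounts SET Balance = Balance + 100 WHERE Id = 2;
    COMMIT TRANSACTION;

    -- Run many of these concurrently from several clients, then verify:
    -- the total money in the system must be unchanged if no update was lost.
    SELECT SUM(Balance) FROM Accounts;
    ```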

    Read the article

  • How can I find the space used by a SQL Transaction Log?

    - by Sean Earp
    The SQL Server sp_spaceused stored procedure is useful for finding out a database size, unallocated space, etc. However (as far as I can tell), it does not report that information for the transaction log (and looking at database properties within SQL Server Management Studio also does not provide that information for transaction logs). While I can easily find the physical space used by a transaction log by looking at the .ldf file, how can I find out how much of the log file is used and how much is unused?
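
    The usual answer here is DBCC SQLPERF(LOGSPACE), which reports, for every database on the instance, the log file size and the percentage of it currently in use:

    ```sql
    -- One row per database: Log Size (MB), Log Space Used (%), Status.
    DBCC SQLPERF(LOGSPACE);
    ```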

    Read the article

  • JS Anonymous Scope...

    - by Simon
    ```javascript
    Application.EventManager.on('Click', function(args) { // event listener, args is JSON
        TestAction.getContents(args.node.id, function(result, e) {
            console.log(result);
            this.add({
                title: args.node.id,
                html: result
            }).show();
        });
    });
    ```

    I'm really struggling with scope and anonymous functions... I want this in the outer listener to be the same object as this inside the inner callback... .call() and .apply() seemed to be the right sort of idea, but I don't want to trigger the event - just change its scope. For a bit of context: the this in question is a TabContainer, and TestAction is an RPC that returns content. Thanks....
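
    A common fix (a sketch; the variable name self is mine): capture the outer this in an ordinary local before entering the callback. Each function gets its own this, but closures see ordinary locals just fine:

    ```javascript
    Application.EventManager.on('Click', function(args) {
        var self = this; // capture the TabContainer; the callback closes over it
        TestAction.getContents(args.node.id, function(result, e) {
            console.log(result);
            self.add({
                title: args.node.id,
                html: result
            }).show();
        });
    });
    ```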

    Read the article

  • Scope of the Pages in a Silverlight application

    - by AngryHacker
    I have an app built with the Silverlight Navigation Application template. I have a main form (e.g. MainPage.xaml) and a bunch of Silverlight pages, which are swapped in and out of the main content area. In MainPage.xaml, I have a DispatcherTimer which hits some URI resources, regardless of which page I am on. Every now and then, it will inexplicably stop firing. I have an inkling that it has to do with the scope of the various pages. Can pages inside MainPage.xaml take the scope away from their parent? Or is this something much simpler?

    Read the article

  • Help needed with Javascript Variable Scope / OOP and Call Back Functions

    - by gargantaun
    I think this issue goes beyond typical variable scope and closure stuff, or maybe I'm an idiot. Here goes anyway... I'm creating a bunch of objects on the fly in a jQuery plugin. The objects look something like this:

    ```javascript
    function WedgePath(canvas) {
        this.targetCanvas = canvas;
        this.label;
        this.logLabel = function() {
            console.log(this.label);
        }
    }
    ```

    The jQuery plugin looks something like this:

    ```javascript
    (function($) {
        $.fn.myPlugin = function() {
            return $(this).each(function() {
                // Create Wedge Objects
                for (var i = 1; i <= 30; i++) {
                    var newWedge = new WedgePath(canvas);
                    newWedge.label = "my_wedge_" + i;
                    globalFunction(i, newWedge);
                }
            });
        }
    })(jQuery);
    ```

    So... the plugin creates a bunch of WedgePath objects, then calls globalFunction for each one, passing in the latest WedgePath instance. globalFunction looks like this:

    ```javascript
    function globalFunction(indicator_id, pWedge) {
        var targetWedge = pWedge;
        targetWedge.logLabel();
    }
    ```

    What happens next is that the console logs each wedge's label correctly. However, I need a bit more complexity inside globalFunction, so it actually looks like this...

    ```javascript
    function globalFunction(indicator_id, pWedge) {
        var targetWedge = pWedge;
        someSql = "SELECT * FROM myTable WHERE id = ?";
        dbInterface.executeSql(someSql, [indicator_id], function(transaction, result) {
            targetWedge.logLabel();
        });
    }
    ```

    There's a lot going on here, so I'll explain. I'm using client-side database storage (WebSQL, I call it). dbInterface is an instance of a simple JavaScript object I created which handles the basics of interacting with a client-side database [shown at the end of this question]. The executeSql method takes up to 4 arguments:

    - the SQL string
    - an optional arguments array
    - an optional onSuccess handler
    - an optional onError handler (not used in this example)

    What I need to happen is: when the WebSQL query has completed, it takes some of that data and manipulates some attribute of a particular wedge. But when I call logLabel on an instance of WedgePath inside the onSuccess handler, I get the label of the very last instance of WedgePath that was created way back in the plugin code. Now I suspect that the problem lies in the var newWedge = new WedgePath(canvas); line. So I tried pushing each newWedge into an array, which I thought would prevent that line from replacing or overwriting the WedgePath instance at every iteration...

    ```javascript
    wedgeArray = [];

    // Inside the plugin...
    for (var i = 1; i <= 30; i++) {
        var newWedge = new WedgePath(canvas);
        newWedge.label = "my_wedge_" + i;
        wedgeArray.push(newWedge);
    }

    for (var i = 0; i < wedgeArray.length; i++) {
        wedgeArray[i].logLabel();
    }
    ```

    But again, I get the last instance of WedgePath to be created. This is driving me nuts. I apologise for the length of the question, but I wanted to be as clear as possible.

    END ==============================================================

    Also, here's the code for the dbInterface object, should it be relevant:

    ```javascript
    function DatabaseInterface(db) {
        var DB = db;

        this.sql = function(sql, arr, pSuccessHandler, pErrorHandler) {
            successHandler = (pSuccessHandler) ? pSuccessHandler : this.defaultSuccessHandler;
            errorHandler = (pErrorHandler) ? pErrorHandler : this.defaultErrorHandler;
            DB.transaction(function(tx) {
                if (!arr || arr.length == 0) {
                    tx.executeSql(sql, [], successHandler, errorHandler);
                } else {
                    tx.executeSql(sql, arr, successHandler, errorHandler);
                }
            });
        }

        // ----------------------------------------------------------------
        // A Default Error Handler
        // ----------------------------------------------------------------
        this.defaultErrorHandler = function(transaction, error) {
            // error.message is a human-readable string.
            // error.code is a numeric error code
            console.log('WebSQL Error: ' + error.message + ' (Code ' + error.code + ')');
            // Handle errors here
            var we_think_this_error_is_fatal = true;
            if (we_think_this_error_is_fatal) return true;
            return false;
        }

        // ----------------------------------------------------------------
        // A Default Success Handler
        // This doesn't do anything except log a success message
        // ----------------------------------------------------------------
        this.defaultSuccessHandler = function(transaction, results) {
            console.log("WebSQL Success. Default success handler. No action taken.");
        }
    }
    ```
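
    One thing worth checking (an observation about the code as posted, not a guaranteed diagnosis): in DatabaseInterface's sql method, successHandler and errorHandler are assigned without var, so they are implicit globals. Every call overwrites them before the asynchronous transaction callback runs, which means all queued transactions end up invoking the handler pair from the last call - the one that closed over the last wedge. Declaring them as locals keeps each call's handlers intact:

    ```javascript
    this.sql = function(sql, arr, pSuccessHandler, pErrorHandler) {
        // 'var' makes these local to this call; without it they are globals
        // shared (and clobbered) across every queued transaction.
        var successHandler = pSuccessHandler || this.defaultSuccessHandler;
        var errorHandler = pErrorHandler || this.defaultErrorHandler;
        DB.transaction(function(tx) {
            tx.executeSql(sql, arr || [], successHandler, errorHandler);
        });
    }
    ```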

    Read the article

  • validates_uniqueness_of...limiting scope - How do I restrict someone from creating a certain number

    - by bgadoci
    I have the following code:

    ```ruby
    class Like < ActiveRecord::Base
      belongs_to :site
      validates_uniqueness_of :ip_address, :scope => [:site_id]
    end
    ```

    This limits a person from "liking" a site more than one time, based on a remote IP request. Essentially, when someone "likes" a site, a record is created in the likes table, and I use a hidden field to request and pass their IP address to the :ip_address column in the like table. With the above code I am limiting the user to one "like" per IP address. I would like to limit this to a certain number, for instance 10. My initial thought was to do something like this:

    ```ruby
    validates_uniqueness_of :ip_address, :scope => [:site_id, :limit => 10]
    ```

    But that doesn't seem to work. Is there a simple syntax here that will allow me to do such a thing?
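
    validates_uniqueness_of has no such :limit option, so this likely needs a custom validation that counts existing rows. A sketch, assuming Rails 3-style finders (the method name, constant, and message are mine):

    ```ruby
    class Like < ActiveRecord::Base
      belongs_to :site

      MAX_LIKES_PER_IP = 10

      validate :ip_under_limit, :on => :create

      private

      def ip_under_limit
        existing = Like.where(:site_id => site_id, :ip_address => ip_address).count
        if existing >= MAX_LIKES_PER_IP
          errors.add(:ip_address, "may only like a site #{MAX_LIKES_PER_IP} times")
        end
      end
    end
    ```

    Note that a count-then-insert check like this can race under concurrent requests; a unique-ish guarantee would also need a database-level constraint or lock.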

    Read the article

  • SQL Server 2000, large transaction log, almost empty, performance issue?

    - by Mafu Josh
    This is for a company whose database I have been helping troubleshoot. In SQL Server 2000, the database is about 120 gig. Something caused the transaction log to grow MUCH larger than normal, to over 100 gig - some hung transaction that didn't commit or roll back for a few days. That has been resolved, and it now stays around 1% full or less, due to its hourly transaction log backups. It IS my understanding that a GROWING transaction log file can cause performance issues. But what I am a little paranoid about is the size. Although mainly empty, MIGHT it be having a negative effect on performance? I haven't found any documentation that suggests this is true. I did find this link: http://www.bigresource.com/MS_SQL-Large-Transaction-Log-dramatically-Slows-down-processing-any-idea-why--2ahzP5wK.html but in this post I can't tell if their log was full or empty, and there are no replies to it. So I am guessing it is not a problem - does anyone know for sure?
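
    If the size itself turns out to be the concern, the log can simply be shrunk once it is mostly empty. A sketch using SQL Server 2000-era commands (the database and logical file names are placeholders):

    ```sql
    -- Find the logical name of the log file.
    USE MyDatabase;
    GO
    EXEC sp_helpfile;
    GO
    -- Shrink the log to a saner target (here 4096 MB) now that it is ~1% used.
    DBCC SHRINKFILE('MyDatabase_Log', 4096);
    GO
    ```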

    Read the article

  • What did programmers do before variable scope, where everything is global?

    - by hydroparadise
    So, I am having to deal with a seemingly archaic language (called PowerOn) where I have a main method, a few datatypes to define variables with, and the ability to have sub-procedures (essentially void methods) that neither return a type nor accept any arguments. The problem here is that EVERYTHING is global. I've read about these types of languages, but most books take the approach of "OK, we used to use a horse and carriage, but now here's a car, so let's learn how to work on THAT! We will NEVER relive those days." I have to admit, the mind is struggling to think outside of scope and extent. Well, here I am. I am trying to figure out how to best manage nothing but global variables across several open methods. Yep, even iterators for for-loops have to be defined globally, and I find myself recycling them in different parts of my code. My question, for those who have this type of experience: how did programmers deal with a large number of variables in a global playing field? I have a feeling it just became a mental juggling trick, but I would be interested to know if there were any known approaches.

    Read the article

  • JPA and MySQL transaction isolation level

    - by armandino
    I have a native query that does a batch insert into a MySQL database:

    ```java
    String sql = "insert into t1 (a, b) select x, y from t2 where x = 'foo'";
    EntityTransaction tx = entityManager.getTransaction();
    try {
        tx.begin();
        int rowCount = entityManager.createNativeQuery(sql).executeUpdate();
        tx.commit();
        return rowCount;
    } catch (Exception ex) {
        tx.rollback();
        log.error(...);
    }
    ```

    This query causes a deadlock: while it reads from t2 with insert .. select, another process tries to insert a row into t2. I don't care about the consistency of reads from t2 when doing an insert .. select and want to set the transaction isolation level to READ_UNCOMMITTED. How do I go about setting it in JPA?
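
    JPA itself has no portable isolation-level setting, so this usually drops down to the JDBC connection. A minimal sketch of the JDBC call itself (how you obtain the Connection from your JPA provider varies - Hibernate, for example, exposes it through its Session - and the URL and credentials below are placeholders; the MySQL driver must be on the classpath):

    ```java
    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.Statement;

    public class ReadUncommittedInsert {
        public static void main(String[] args) throws Exception {
            // Placeholders -- point these at the same MySQL database JPA uses.
            String url = "jdbc:mysql://localhost:3306/mydb";
            try (Connection conn = DriverManager.getConnection(url, "user", "password")) {
                // Must be set before the transaction starts doing work.
                conn.setTransactionIsolation(Connection.TRANSACTION_READ_UNCOMMITTED);
                conn.setAutoCommit(false);
                try (Statement st = conn.createStatement()) {
                    int rows = st.executeUpdate(
                            "insert into t1 (a, b) select x, y from t2 where x = 'foo'");
                    conn.commit();
                    System.out.println(rows + " rows copied");
                } catch (Exception ex) {
                    conn.rollback();
                    throw ex;
                }
            }
        }
    }
    ```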

    Read the article

  • Spring Transaction Manager

    - by cedric
    Hi. I have a J2EE application running on the Spring framework. I am implementing a transaction manager with AOP, and it works fine: when an exception occurs in my method, it is detected by my AOP configuration, which rolls back the changes in the DB. The problem is that the code where I expect the error must not be surrounded with try-catch - when I surround it with try-catch, it won't roll back. But I need to do some things like logging whenever there are errors, and the only place I can think of putting that is in the catch block.
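
    A common pattern here (a sketch, assuming rollback is driven by the exception propagating out of the advised method; doDatabaseWork and log are stand-ins for your own code): catch the exception to log it, then rethrow so the transaction advice still sees it. Alternatively, Spring's TransactionAspectSupport lets the method mark the current transaction for rollback explicitly without rethrowing:

    ```java
    public void updateAccounts() {
        try {
            doDatabaseWork();
        } catch (RuntimeException ex) {
            log.error("Update failed, rolling back", ex);
            // Option 1: rethrow so the AOP transaction advice triggers rollback.
            throw ex;
            // Option 2 (instead of rethrowing): mark the transaction directly.
            // TransactionAspectSupport.currentTransactionStatus().setRollbackOnly();
        }
    }
    ```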

    Read the article

  • Recovering transaction log from corrupt SQL database

    - by Don Kirkham
    We have a database that is backed up weekly in simple mode. Yesterday, a CRC error corrupted the mdf file and we were unable to save it. I restored the backup from last week, but now we have a gap from the time of the backup to the time of the restore. Since I have the ldf file from that database, is there any way to "replay" that transaction log to fill in the gap? I have tried reattaching the ldf file to the recovered mdf file, but SQL will not allow me to do that. (It just creates a new ldf file with a different name when I reattach the database.) Any ideas would help. This is a lot of data to lose, and although it is not critical data, I'd like to get it back (as well as learn how to do it).

    Read the article

  • Getting around IBActions limited scope

    - by Septih
    Hello, I have an NSCollectionView, and the view is an NSBox with a label and an NSButton. I want a double click, or a click of the NSButton, to tell the controller to perform an action with the represented object of the NSCollectionViewItem. The item view has been subclassed; the code is as follows:

    ```objc
    #import <Cocoa/Cocoa.h>
    #import "WizardItem.h"

    @interface WizardItemView : NSBox {
        id delegate;
        IBOutlet NSCollectionViewItem * viewItem;
        WizardItem * wizardItem;
    }

    @property(readwrite,retain) WizardItem * wizardItem;
    @property(readwrite,retain) id delegate;

    -(IBAction)start:(id)sender;

    @end
    ```

    ```objc
    #import "WizardItemView.h"

    @implementation WizardItemView

    @synthesize wizardItem, delegate;

    -(void)awakeFromNib {
        [self bind:@"wizardItem" toObject:viewItem withKeyPath:@"representedObject" options:nil];
    }

    -(void)mouseDown:(NSEvent *)event {
        [super mouseDown:event];
        if([event clickCount] > 1) {
            [delegate performAction:[wizardItem action]];
        }
    }

    -(IBAction)start:(id)sender {
        [delegate performAction:[wizardItem action]];
    }

    @end
    ```

    The problem I've run into is that as an IBAction, the only things in the scope of -start are the things that have been bound in IB, so delegate and viewItem. This means that I cannot get at the represented object to send it to the delegate. Is there a way around this limited scope, or a better way of getting hold of the represented object? Thanks.

    Read the article

  • question about MySQL transaction and trigger

    - by WilliamLou
    I quickly browsed the MySQL manual but didn't find exact information about my question. Here it is: suppose I have an InnoDB table A with two triggers, triggered by 'AFTER INSERT ON A' and 'AFTER UPDATE ON A'. More specifically, one trigger is defined as:

    ```sql
    CREATE TRIGGER test_trigger AFTER INSERT ON A
    FOR EACH ROW
    BEGIN
        INSERT INTO B SELECT * FROM A WHERE A.col1 = NEW.col1;
    END;
    ```

    You can ignore the query between BEGIN and END; basically I mean this trigger will insert several rows into table B, which is also an InnoDB table. Now, suppose I start a transaction and then insert many rows, say 10K, into table A. If there is no trigger associated with table A, all these inserts are atomic - that's for sure. But if table A is associated with several insert/update triggers which insert/update many rows into table B and/or table C, etc., will all these inserts and/or updates still be atomic? I think it's still atomic, but it's difficult to test and I can't find any explanation in the manual. Can anyone confirm this? Thanks a lot!
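
    One way to convince yourself (a sketch, assuming B starts out empty): roll back a parent insert and check that the trigger's writes to B disappear with it, since trigger statements execute inside the same transaction as the statement that fired them:

    ```sql
    START TRANSACTION;
    INSERT INTO A (col1) VALUES ('x');   -- fires test_trigger, which writes to B
    ROLLBACK;
    SELECT COUNT(*) FROM B;              -- 0: the trigger's insert rolled back too
    ```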

    Read the article

  • Extending WikiPlex with Scope Augmenters

    - by mhawley
    [In addition to blogging, I am also using Twitter. Follow me: @matthawley] Another extension point with WikiPlex is Scope Augmenters. Scope Augmenters allow you to post process the collection of scopes to further augment, or insert/remove, new scopes prior to being rendered. WikiPlex comes with 3 out-of-the-box Scope Augmenters that it uses for indentation, tables, and lists. For reference, I'll be explaining… (read more)

    Read the article

  • Announcing the Drive Installation Scope

    On September 12, Google Drive released a new feature of great interest to many Drive web app developers: the installation scope. In this session we'll discuss the benefits of the installation scope, walk through the related documentation, and do a brief demo of how it works. From: GoogleDevelopers

    Read the article

  • askubuntu scope seems to lag

    - by Nkciy84
    I installed the askubuntu scope/lens in Ubuntu 12.04 LTS, but it seems extremely laggy. Sometimes when I enter text in the search field, it takes a (whole lot of a) while before anything is displayed underneath the icons. Usually I just see the first letters of the text I put in. For example, when I type "damn, why is this scope so slow", I see "find 'dam' on askubuntu". This bug makes the lens/scope almost impossible to use for me.

    Read the article

< Previous Page | 1 2 3 4 5 6 7 8 9 10 11 12  | Next Page >