Search Results

Search found 2359 results on 95 pages for 'transaction'.

Page 43 of 95

  • Enterprise Library Exception Handling for WCF Fault Contracts - CLIENT SIDE

    - by Huw
    I have a Windows Service which communicates with WCF services. The WCF services are all fault shielded and generate custom UserFaultContracts and ServiceFaultContracts. No problems there. In the Windows Service I am using EntLib for exception handling and logging. I do not want to try catch for faults try { } catch (FaultException<UserFaultContract>) { } I want to use EntLib try { } catch (Exception ex) { var rethrow = ExceptionPolicy.HandleException(ex, "Transaction Policy"); if (rethrow) throw; } This also works, however, in my Tranasaction Policy I want to Log the details of the UserFaultContract. This is where I am unglued. And I hate becoming unglued. The fault is captured and logged...but I can't get the details of the fault. My exception policy is <add name="Transaction Policy"> <exceptionTypes> <add type="System.Exception, mscorlib, Version=2.0.0.0, Culture=neutral, PublicKeyToken=b77a5c561934e089" postHandlingAction="None" name="Exception"> <exceptionHandlers> <add logCategory="General" eventId="200" severity="Error" title="Transaction Error" formatterType="Microsoft.Practices.EnterpriseLibrary.ExceptionHandling.TextExceptionFormatter, Microsoft.Practices.EnterpriseLibrary.ExceptionHandling, Version=4.1.0.0, Culture=neutral, PublicKeyToken=31bf3856ad364e35" priority="2" useDefaultLogger="true" type="Microsoft.Practices.EnterpriseLibrary.ExceptionHandling.Logging.LoggingExceptionHandler, Microsoft.Practices.EnterpriseLibrary.ExceptionHandling.Logging, Version=4.1.0.0, Culture=neutral, PublicKeyToken=31bf3856ad364e35" name="Logging Handler" /> </exceptionHandlers> </add> <add type="System.ServiceModel.FaultException, System.ServiceModel, Version=3.0.0.0, Culture=neutral, PublicKeyToken=b77a5c561934e089" postHandlingAction="None" name="FaultException"> <exceptionHandlers> <add logCategory="General" eventId="200" severity="Error" title="Service Fault" formatterType="Microsoft.Practices.EnterpriseLibrary.ExceptionHandling.TextExceptionFormatter, Microsoft.Practices.EnterpriseLibrary.ExceptionHandling, Version=4.1.0.0, Culture=neutral, PublicKeyToken=31bf3856ad364e35" priority="2" useDefaultLogger="true" type="Microsoft.Practices.EnterpriseLibrary.ExceptionHandling.Logging.LoggingExceptionHandler, Microsoft.Practices.EnterpriseLibrary.ExceptionHandling.Logging, Version=4.1.0.0, Culture=neutral, PublicKeyToken=31bf3856ad364e35" name="Logging Handler" /> </exceptionHandlers> </add> </exceptionTypes> </add> The exception logged is: Timestamp: 5/13/2010 14:53:40 Message: HandlingInstanceID: e9038634-e16e-4d87-ab1e-92379431838b An exception of type 'System.ServiceModel.FaultException`1[[LCI.DispatchMaster.FaultContracts.ServiceFaultContract, LCI.DispatchMaster.FaultContracts, Version=1.0.0.0, Culture=neutral, PublicKeyToken=null]]' occurred and was caught. 05/13/2010 10:53:40 Type : System.ServiceModel.FaultException`1[[LCI.DispatchMaster.FaultContracts.ServiceFaultContract, LCI.DispatchMaster.FaultContracts, Version=1.0.0.0, Culture=neutral, PublicKeyToken=null]], System.ServiceModel, Version=3.0.0.0, Culture=neutral, PublicKeyToken=b77a5c561934e089 Message : There was an internal fault at the DispatchMaster service. Source : mscorlib Help link : Detail : LCI.DispatchMaster.FaultContracts.ServiceFaultContract Action : http://LCI.DispatchMaster.LogicalChoices.com/ITruckMasterService/MergeScenarioServiceFaultContractFault Code : System.ServiceModel.FaultCode Reason : There was an internal fault at the DispatchMaster service. 
    Data : System.Collections.ListDictionaryInternal TargetSite : Void HandleReturnMessage(System.Runtime.Remoting.Messaging.IMessage, System.Runtime.Remoting.Messaging.IMessage) Stack Trace : In the fault contract there is an ID and a Message. I would, as you can see, like the ID and Message to be logged by EntLib. I am assuming that I'm going to have to write a custom handler to extract the fault details - but thought I'd ask if I'm missing something in EntLib which might help me avoid that task. Thanks to anyone who is willing to help.
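    For reference, a rough sketch of the kind of custom handler the poster suspects is needed, assuming Enterprise Library 4.1 and a fault contract that exposes Id and Message properties (the handler class and property names here are hypothetical, not from the question); the "Transaction Policy" would then reference this handler alongside, or instead of, the LoggingExceptionHandler:

```csharp
using System;
using System.Collections.Specialized;
using System.ServiceModel;
using Microsoft.Practices.EnterpriseLibrary.Common.Configuration;
using Microsoft.Practices.EnterpriseLibrary.ExceptionHandling;
using Microsoft.Practices.EnterpriseLibrary.ExceptionHandling.Configuration;
using Microsoft.Practices.EnterpriseLibrary.Logging;

// Hypothetical handler: pulls the Detail object out of any FaultException<T>
// and writes its contents to the EntLib log before normal handling continues.
[ConfigurationElementType(typeof(CustomHandlerData))]
public class FaultDetailLoggingHandler : IExceptionHandler
{
    public FaultDetailLoggingHandler(NameValueCollection attributes) { }

    public Exception HandleException(Exception exception, Guid handlingInstanceId)
    {
        var faultType = exception.GetType();
        if (exception is FaultException && faultType.IsGenericType)
        {
            // FaultException<TDetail> exposes the contract on its Detail property.
            object detail = faultType.GetProperty("Detail").GetValue(exception, null);
            if (detail != null)
            {
                // Assumes the fault contract carries Id and Message members;
                // adjust the property names to the real contract.
                var idProp = detail.GetType().GetProperty("Id");
                var msgProp = detail.GetType().GetProperty("Message");
                Logger.Write(string.Format(
                    "Fault detail ({0}): Id={1}, Message={2}",
                    handlingInstanceId,
                    idProp != null ? idProp.GetValue(detail, null) : "?",
                    msgProp != null ? msgProp.GetValue(detail, null) : "?"));
            }
        }
        return exception; // let the policy's postHandlingAction decide what happens next
    }
}
```

    It would be wired into the FaultException entry of the policy with an <add type="...FaultDetailLoggingHandler, YourAssembly" name="Fault Detail Logging Handler" /> element (the assembly name is a placeholder).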

    Read the article

  • invalid input by the user

    - by TJL
    hey guys so this is my program. I need to notify the user that if he/she enters a letter other than w, d, b or q, that is an invalid request. What I've done so far does this, but when I input a number for dollars_withdraw, dollars_deposit or account_balance, the program will do the transaction but also print "Invalid request" before going back to the main loop. How do I change it so the program won't do that for numerical inputs for the withdraw, deposit and balance (a fix is sketched below)?: // Atm machine.cpp : Defines the entry point for the console application. #include <iostream> #include <string> using namespace std; int main () { char user_request; string user_string; double account_balance, dollars_withdraw, dollars_deposit; account_balance = 5000; while(account_balance >0) { cout << "Would you like to [W]ithdraw, [D]eposit, Check your [b]alance or [Q]uit?" << endl; cin >> user_string; user_request= user_string[0]; if(user_request == 'w' || user_request== 'W') { cout << "How much would you like to withdraw?" << endl; cin >> dollars_withdraw; if (dollars_withdraw > account_balance || dollars_withdraw <0) cout << "Invalid transaction" << endl; else account_balance = account_balance - dollars_withdraw; cout << "Your new balance is $" << account_balance << endl; } if (user_request == 'd' || user_request== 'D') { cout << "How much would you like to deposit?" << endl; cin >> dollars_deposit; if (dollars_deposit <0) cout << "Invalid transaction" << endl; else account_balance= account_balance + dollars_deposit; cout << "Your new balance is $" << account_balance << endl; } if(user_request == 'b' || user_request == 'B') { account_balance= account_balance; cout << "Your available balance is $" << account_balance << endl; } if(user_request == 'q' || user_request == 'Q') break; else cout << "Invalid request " << endl; } cout << "Goodbye" << endl; return 0; }
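    The stray "Invalid request" comes from the last if/else pair: the else is attached only to the [Q]uit test, so it runs for every input that is not q, including valid withdrawals and deposits. A minimal sketch of the corrected dispatch inside the while loop, chaining the tests with else if so exactly one branch runs per menu choice:

```cpp
// Chained dispatch: the trailing "else" now means "none of w/d/b/q matched".
if (user_request == 'w' || user_request == 'W') {
    // ... withdraw logic unchanged ...
} else if (user_request == 'd' || user_request == 'D') {
    // ... deposit logic unchanged ...
} else if (user_request == 'b' || user_request == 'B') {
    cout << "Your available balance is $" << account_balance << endl;
} else if (user_request == 'q' || user_request == 'Q') {
    break;
} else {
    cout << "Invalid request" << endl;  // reached only for unrecognized letters
}
```

    Note that the original also prints "Your new balance" even when a withdrawal or deposit is rejected, because only the assignment sits in the else branch; wrapping that else body in braces so the cout is included fixes that as well.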

    Read the article

  • Keeping video viewing statistics breakdown by video time in a database

    - by Septagram
    I need to keep a number of statistics about the videos being watched, and one of them is what parts of the video are being watched most. The design I came up with is to split the video into 256 intervals and keep the floating-point number of views for each of them. I receive the data as a number of intervals the user watched continuously. The problem is how to store them. There are two solutions I see. Row per every video segment Let's have a database table like this: CREATE TABLE `video_heatmap` ( `id` int(11) NOT NULL AUTO_INCREMENT, `video_id` int(11) NOT NULL, `position` tinyint(3) unsigned NOT NULL, `views` float NOT NULL, PRIMARY KEY (`id`), UNIQUE KEY `idx_lookup` (`video_id`,`position`) ) ENGINE=MyISAM Then, whenever we have to process a number of views, make sure there are the respective database rows and add appropriate values to the views column. I found out it's a lot faster if the existence of rows is taken care of first (SELECT COUNT(*) of rows for a given video and INSERT IGNORE if they are lacking), and then a number of update queries is used like this: UPDATE video_heatmap SET views = views + ? WHERE video_id = ? AND position >= ? AND position < ? This seems, however, a little bloated. The other solution I came up with is Row per video, update in transactions A table will look (sort of) like this: CREATE TABLE video ( id INT NOT NULL AUTO_INCREMENT, heatmap BINARY (4 * 256) NOT NULL, ... ) ENGINE=InnoDB Then, every time a view needs to be stored, it will be done in a transaction with consistent snapshot, in a sequence like this: If the video doesn't exist in the database, it is created. A row is retrieved; heatmap, an array of floats stored in binary form, is converted into a form more friendly for processing (in PHP). Values in the array are increased appropriately and the array is converted back. The row is changed via an UPDATE query. So far the advantages can be summed up like this: First approach Stores data as floats, not as some magical binary array. Doesn't require transaction support, so doesn't require InnoDB, and we're using MyISAM for everything at the moment, so there won't be any need to mix storage engines. (only applies in my specific situation) Doesn't require a transaction WITH CONSISTENT SNAPSHOT. I don't know what the performance penalties of those are. I already implemented it and it works. (only applies in my specific situation) Second approach Uses a lot less storage space (the first approach stores the video ID 256 times and a position for every segment of the video, not to mention the primary key). Should scale better, because of InnoDB's per-row locking as opposed to MyISAM's table locking. Might generally work faster because there are a lot fewer requests being made. Easier to implement in code (although the other one is already implemented). So, what should I do? If it wasn't for the rest of our system using MyISAM consistently, I'd go with the second approach, but currently I'm leaning to the first one. But maybe there are some reasons to favour one approach or another?
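    If the second design is chosen, a rough sketch of one write cycle, assuming InnoDB and a placeholder video id of 42; a locking read (SELECT ... FOR UPDATE) is used here rather than a plain consistent-snapshot read so that two sessions incrementing the same video cannot overwrite each other's changes:

```sql
-- One write cycle for a single video (application code unpacks/repacks the blob).
START TRANSACTION;

-- Create the row if it is missing; 1024 = 4 bytes * 256 float slots, all zero.
INSERT IGNORE INTO video (id, heatmap)
VALUES (42, REPEAT(CHAR(0), 1024));

-- Lock the row so concurrent sessions serialize on this video only.
SELECT heatmap FROM video WHERE id = 42 FOR UPDATE;

-- ...application code: unpack the 256 floats, add the watched intervals, repack...

UPDATE video SET heatmap = ? WHERE id = 42;

COMMIT;
```

    The design point is that FOR UPDATE turns the read into a row lock, which is what makes the read-modify-write safe; a consistent-snapshot read alone would still allow lost updates when two viewers of the same video are processed at once.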

    Read the article

  • SQL SERVER – Guest Post – Jacob Sebastian – Filestream – Wait Types – Wait Queues – Day 22 of 28

    - by pinaldave
    Jacob Sebastian is a SQL Server MVP, Author, Speaker and Trainer. Jacob is one of the top rated experts in the community. Jacob wrote the book The Art of XSD – SQL Server XML Schema Collections and wrote the XML Chapter in SQL Server 2008 Bible. See his Blog | Profile. He is currently researching the subject of FILESTREAM and has submitted this interesting article on the very subject. What is FILESTREAM? FILESTREAM is a new feature introduced in SQL Server 2008 which provides an efficient storage and management option for BLOB data. Many applications that deal with BLOB data today store it in the file system and store the path to the file in the relational tables. Storing BLOB data in the file system is more efficient than storing it in the database. However, this brings up a few disadvantages as well. When the BLOB data is stored in the file system, it is hard to ensure transactional consistency between the file system data and relational data. Some applications store the BLOB data within the database to overcome the limitations mentioned earlier. This approach ensures transactional consistency between the relational data and BLOB data, but is very bad in terms of performance. FILESTREAM combines the benefits of both approaches mentioned above without the disadvantages we examined. FILESTREAM stores the BLOB data in the file system (thus taking advantage of the IO streaming capabilities of NTFS) and ensures transactional consistency between the BLOB data in the file system and the relational data in the database. For more information on the FILESTREAM feature, visit: http://beyondrelational.com/filestream/default.aspx FILESTREAM Wait Types Since this series is on the different SQL Server wait types, let us take a look at the various wait types that are related to the FILESTREAM feature. FS_FC_RWLOCK This wait type is generated by the FILESTREAM Garbage Collector. It occurs when garbage collection is disabled prior to a backup/restore operation or when a garbage collection cycle is being executed. FS_GARBAGE_COLLECTOR_SHUTDOWN This wait type occurs during the cleanup process of a garbage collection cycle. It indicates that the garbage collector is waiting for the cleanup tasks to be completed. FS_HEADER_RWLOCK This wait type indicates that the process is waiting to obtain access to the FILESTREAM header file for a read or write operation. The FILESTREAM header is a disk file located in the FILESTREAM data container and is named “filestream.hdr”. FS_LOGTRUNC_RWLOCK This wait type indicates that the process is trying to perform a FILESTREAM log truncation related operation. It can be either a log truncate operation or disabling log truncation prior to a backup or restore operation. FSA_FORCE_OWN_XACT This wait type occurs when a FILESTREAM file I/O operation needs to bind to the associated transaction, but the transaction is currently owned by another session. FSAGENT This wait type occurs when a FILESTREAM file I/O operation is waiting for a FILESTREAM agent resource that is being used by another file I/O operation. FSTR_CONFIG_MUTEX This wait type occurs when there is a wait for another FILESTREAM feature reconfiguration to be completed. FSTR_CONFIG_RWLOCK This wait type occurs when there is a wait to serialize access to the FILESTREAM configuration parameters. Waits and Performance System waits have a direct relationship with the overall performance. In most cases, when waits increase, performance degrades.
SQL Server documentation does not say much about how we can reduce these waits. However, following the FILESTREAM best practices will help you to improve the overall performance and reduce the wait types to a good extent. Read all the posts in the Wait Types and Queues series. Reference: Pinal Dave (http://blog.SQLAuthority.com) Filed under: Pinal Dave, PostADay, Readers Contribution, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, SQL Wait Stats, SQL Wait Types, T SQL, Technology Tagged: Filestream
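    A generic sketch (not from the original post) of how to check whether any of these FILESTREAM wait types are actually accumulating on an instance, using sys.dm_os_wait_stats:

```sql
-- FILESTREAM-related waits accumulated since the last service restart
-- (or since the wait statistics were last cleared).
SELECT wait_type,
       waiting_tasks_count,
       wait_time_ms,
       signal_wait_time_ms
FROM sys.dm_os_wait_stats
WHERE wait_type IN ('FS_FC_RWLOCK', 'FS_GARBAGE_COLLECTOR_SHUTDOWN',
                    'FS_HEADER_RWLOCK', 'FS_LOGTRUNC_RWLOCK',
                    'FSA_FORCE_OWN_XACT', 'FSAGENT',
                    'FSTR_CONFIG_MUTEX', 'FSTR_CONFIG_RWLOCK')
ORDER BY wait_time_ms DESC;
```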

    Read the article

  • Oracle Enterprise Manager 11g is Here!

    - by chung.wu
    We hope that you enjoyed the launch event. If you missed it, you may still watch it via our on demand webcast, which is being produced and will be posted very shortly. 11gR1 is a major release of Oracle Enterprise Manager, and as one would expect from a big release, there are many new capabilities that appeal to a broad audience. Before going into the laundry list of new features, let's talk about the key themes for this release to put things in perspective. First, this release is about Business Driven Application Management. The traditional paradigm of component centric systems management simply cannot satisfy the management needs of modern distributed applications, as it does not provide adequate visibility into whether these applications are truly meeting the service level expectations of the business users. Business Driven Application Management helps IT manage applications according to the needs of the business users so that valuable IT resources can be better focused to help deliver better business results. To support Business Driven Application Management, 11gR1 builds on the work that we started in 10g to provide better support for user experience management. This capability helps IT better understand how users use applications and the experience that the applications provide so that IT can take actions to help end users get their work done more effectively. In addition, this release also delivers improved business transaction management capabilities to make it faster and easier to understand and troubleshoot transaction problems that impact end user experience. Second, this release includes strengthened Integrated Application-to-Disk Management. Every component of an application environment, from the application logic to the application server, to database, host machines and storage devices, etc... can affect end user experience. After user experience improvement needs are identified, IT needs tools that can be used to do deep-dive diagnostics for each application environment component, analyze configurations and deploy changes. Enterprise Manager 11gR1 extends coverage of key application environment components to include full support for Oracle Database 11gR2, Exadata V2, and Fusion Middleware 11g. For composite and Java application management, two key pieces of technology, JVM Diagnostic and Composite Application Monitoring and Modeler, are now fully integrated into Enterprise Manager so there is no need to install and maintain separate tools. In addition, we have delivered the first set of integration between Enterprise Manager Grid Control and Enterprise Manager Ops Center so that hardware level events can be centrally monitored via Grid Control. Finally, this release delivers Integrated Systems Management and Support for customers of Oracle technologies. Traditionally, systems management tools and tech support were separate silos. When problems occur, administrators used internally deployed tools to try to solve the problems themselves. If they couldn't fix the problems, then they would use some sort of support website to get help from the vendor's support staff. Oracle Enterprise Manager 11g integrates problem diagnostic and remediation workflow. Administrators can use Oracle Enterprise Manager's various diagnostic tools to begin the troubleshooting process. They can also use the integrated access to My Oracle Support to look up solutions and download software patches.
If further help is needed, administrators can open service requests from right within Oracle Enterprise Manager and track status updates. Oracle's support staff, using Enterprise Manager's configuration management capabilities, can collect important configuration information about customer environments in order to expedite problem resolution. This tight integration between Oracle Enterprise Manager and My Oracle Support helps Oracle customers achieve a Superior Ownership Experience for their Oracle products. So there you have it. This is a brief 50,000-foot overview of Oracle Enterprise Manager 11g. We know you are hungry for the details. We are going to write about it in the coming days and weeks. For those of you that absolutely can't wait to find out more, you may download our software to try it out today. In fact, for the first time ever, the initial release of Oracle Enterprise Manager is available for both 32- and 64-bit Linux. Additional O/S ports will arrive in the coming weeks. Please stay tuned on the Oracle Enterprise Manager blog for additional updates.

    Read the article

  • Banshee encountered a Fatal Error (sqlite error 11: database disk image is malformed)

    - by Nik
    I am running ubuntu 10.10 Maverick Meerkat, and recently I am helping in testing out indicator-weather using the unstable buids. However there was a bug which caused my system to freeze suddenly (due to indicator-weather not ubuntu) and the only way to recover is to do a hard reset of the system. This happened a couple of times. And when i tried to open banshee after a couple of such resets I get the following fatal error which forces me to quit banshee. The screenshot is not clear enough to read the error, so I am posting it below, An unhandled exception was thrown: Sqlite error 11: database disk image is malformed (SQL: BEGIN TRANSACTION; DELETE FROM CoreSmartPlaylistEntries WHERE SmartPlaylistID IN (SELECT SmartPlaylistID FROM CoreSmartPlaylists WHERE IsTemporary = 1); DELETE FROM CoreSmartPlaylists WHERE IsTemporary = 1; COMMIT TRANSACTION) at Hyena.Data.Sqlite.Connection.CheckError (Int32 errorCode, System.String sql) [0x00000] in <filename unknown>:0 at Hyena.Data.Sqlite.Connection.Execute (System.String sql) [0x00000] in <filename unknown>:0 at Hyena.Data.Sqlite.HyenaSqliteCommand.Execute (Hyena.Data.Sqlite.HyenaSqliteConnection hconnection, Hyena.Data.Sqlite.Connection connection) [0x00000] in <filename unknown>:0 Exception has been thrown by the target of an invocation. at System.Reflection.MonoCMethod.Invoke (System.Object obj, BindingFlags invokeAttr, System.Reflection.Binder binder, System.Object[] parameters, System.Globalization.CultureInfo culture) [0x00000] in <filename unknown>:0 at System.Reflection.MonoCMethod.Invoke (BindingFlags invokeAttr, System.Reflection.Binder binder, System.Object[] parameters, System.Globalization.CultureInfo culture) [0x00000] in <filename unknown>:0 at System.Reflection.ConstructorInfo.Invoke (System.Object[] parameters) [0x00000] in <filename unknown>:0 at System.Activator.CreateInstance (System.Type type, Boolean nonPublic) [0x00000] in <filename unknown>:0 at System.Activator.CreateInstance (System.Type type) [0x00000] in <filename unknown>:0 at Banshee.Gui.GtkBaseClient.Startup () [0x00000] in <filename unknown>:0 at Hyena.Gui.CleanRoomStartup.Startup (Hyena.Gui.StartupInvocationHandler startup) [0x00000] in <filename unknown>:0 .NET Version: 2.0.50727.1433 OS Version: Unix 2.6.35.27 Assembly Version Information: gkeyfile-sharp (1.0.0.0) Banshee.AudioCd (1.9.0.0) Banshee.MiniMode (1.9.0.0) Banshee.CoverArt (1.9.0.0) indicate-sharp (0.4.1.0) notify-sharp (0.4.0.0) Banshee.SoundMenu (1.9.0.0) Banshee.Mpris (1.9.0.0) Migo (1.9.0.0) Banshee.Podcasting (1.9.0.0) Banshee.Dap (1.9.0.0) Banshee.LibraryWatcher (1.9.0.0) Banshee.MultimediaKeys (1.9.0.0) Banshee.Bpm (1.9.0.0) Banshee.YouTube (1.9.0.0) Banshee.WebBrowser (1.9.0.0) Banshee.Wikipedia (1.9.0.0) pango-sharp (2.12.0.0) Banshee.Fixup (1.9.0.0) Banshee.Widgets (1.9.0.0) gio-sharp (2.14.0.0) gudev-sharp (1.0.0.0) Banshee.Gio (1.9.0.0) Banshee.GStreamer (1.9.0.0) System.Configuration (2.0.0.0) NDesk.DBus.GLib (1.0.0.0) gconf-sharp (2.24.0.0) Banshee.Gnome (1.9.0.0) Banshee.NowPlaying (1.9.0.0) Mono.Cairo (2.0.0.0) System.Xml (2.0.0.0) Banshee.Core (1.9.0.0) Hyena.Data.Sqlite (1.9.0.0) System.Core (3.5.0.0) gdk-sharp (2.12.0.0) Mono.Addins (0.4.0.0) atk-sharp (2.12.0.0) Hyena.Gui (1.9.0.0) gtk-sharp (2.12.0.0) Banshee.ThickClient (1.9.0.0) Nereid (1.9.0.0) NDesk.DBus.Proxies (0.0.0.0) Mono.Posix (2.0.0.0) NDesk.DBus (1.0.0.0) glib-sharp (2.12.0.0) Hyena (1.9.0.0) System (2.0.0.0) Banshee.Services (1.9.0.0) Banshee (1.9.0.0) mscorlib (2.0.0.0) Platform Information: Linux 2.6.35-27-generic 
i686 unknown GNU/Linux Distribution Information: [/etc/lsb-release] DISTRIB_ID=Ubuntu DISTRIB_RELEASE=10.10 DISTRIB_CODENAME=maverick DISTRIB_DESCRIPTION="Ubuntu 10.10" [/etc/debian_version] squeeze/sid Just to make it clear, this happened only after the hard resets and not before. I used to use Banshee every day and it worked perfectly. Can anyone help me fix this?
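    Not part of the original question, but a common recovery path for a corrupted SQLite file is to dump whatever is still readable and rebuild the database from it. A rough sketch, assuming the sqlite3 command-line tool is installed and that Banshee 1.9 keeps its library at ~/.config/banshee-1/banshee.db (the path is an assumption; worst case, moving the file aside and letting Banshee re-import the library also works):

```bash
# Close Banshee first. Paths below are assumptions for Banshee 1.9 on Ubuntu 10.10.
DB=~/.config/banshee-1/banshee.db

# 1. Ask SQLite itself whether the file is damaged.
sqlite3 "$DB" "PRAGMA integrity_check;"

# 2. Keep a copy of the damaged file before touching anything.
cp "$DB" "$DB.broken"

# 3. Dump whatever is still readable and rebuild a fresh database from it.
sqlite3 "$DB.broken" ".dump" | sqlite3 "$DB.rebuilt"
mv "$DB.rebuilt" "$DB"
```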

    Read the article

  • Oracle Enterprise Manager Cloud Control 12c Release 2 (12.1.0.2) Now Available!

    - by Javier Puerta
    Oracle Enterprise Manager Cloud Control 12c Release 2 (12.1.0.2) is now available on OTN on ALL platforms. This is the first major release since the launch of Enterprise Manager 12c in October of 2011 and the first ever Enterprise Manager release available on all platforms simultaneously. This is primarily a stability release which incorporates many of the issues and much of the feedback reported by early adopters. In addition, this release contains many new features and enhancements in areas across the board.   New Capabilities and Features   Enhanced management capabilities for enterprise private clouds: Introduces new capabilities to allow customers to build and manage a Java Platform-as-a-Service (PaaS) cloud based on Oracle WebLogic Server. The new capabilities include guided setup of the PaaS cloud, self-service provisioning, automatic scale out and metering and chargeback. Enhanced lifecycle management capabilities for Oracle WebLogic Server environments: Combining in-context multiple domain, patching and configuration file synchronizations. Integrated Hardware-Software management for Oracle Exalogic Elastic Cloud through features such as rack schematics visualization and integrated monitoring of all hardware and software components. The latest management capabilities for business-critical applications include: Business Application Management: A new Business Application (BA) target type and dashboard with flexible definitions provides a logical view of an application’s business transactions, end-user experiences and the cloud infrastructure the monitored application is running on. Enhanced User Experience Reporting: Oracle Real User Experience Insight has been enhanced to provide reporting capabilities on client-side issues for applications running in the cloud and has been more tightly coupled with Oracle Business Transaction Management to help ensure that real-time user experience and transaction tracing data is provided to users in context. Several key improvements address ease of administration, reporting and extensibility for massively scalable cloud environments, including dynamic groups, self-updateable monitoring templates, bulk operations against many events, etc. New and Revised Plug-Ins:   Several plug-ins have been updated as a part of this release, resulting in either new versions or revisions. Revised plug-ins contain only bug fixes, while new plug-ins incorporate both bug fixes and new functionality.   Plug-In Name / Version:
    Enterprise Manager for Oracle Database: 12.1.0.2 (revision)
    Enterprise Manager for Oracle Fusion Middleware: 12.1.0.3 (new)
    Enterprise Manager for Chargeback and Capacity Planning: 12.1.0.3 (new)
    Enterprise Manager for Oracle Fusion Applications: 12.1.0.3 (new)
    Enterprise Manager for Oracle Virtualization: 12.1.0.3 (new)
    Enterprise Manager for Oracle Exadata: 12.1.0.3 (new)
    Enterprise Manager for Oracle Cloud: 12.1.0.4 (new)
    Installation and Upgrade:   All major platforms have been released simultaneously (Linux 32 / 64 bit, Solaris (SPARC), Solaris x86-64, IBM AIX 64-bit, and Windows x86-64 (64-bit)). Enterprise Manager 12.1.0.2 is a complete release that includes both the EM OMS and Agent versions of 12.1.0.2. Installation options available with EM 12.1.0.2: Users can do a fresh install or an upgrade from EM 10.2.0.5, 11.1, or 12.1.0.1 (Bundle Patch 1 not mandatory). Upgrading to EM 12.1.0.2 from EM 12.1.0.1 is not a patch application (similar to Bundle Patch 1) but is achieved through a 1-system upgrade.
Documentation:   Oracle Enterprise Manager Cloud Control Introduction Document provides a broad overview of capabilities and highlights "What's New" in EM 12.1.0.2.   All updated Oracle Enterprise Manager documentation can be found on OTN   Customer Webcast - EM 12c Installation and Upgrade: This webcast is for customers who are interested in learning how to successfully deploy or upgrade to EM 12.1.0.2.   Customer Webcast - Installation and Upgrade - September 21 (registration and info on OTN starting September 12)   Enterprise Manager 12c R2 Resources:   OTN Download Page Upgrade Guide

    Read the article

  • Drinking Our Own Champagne: Fusion Accounting Hub at Oracle

    - by Di Seghposs
    A guest post by Corey West, Senior Vice President, Oracle's Corporate Controller and Chief Accounting Officer There's no better story to tell than one about Oracle using its own products with blowout success. Here's how this one goes. As you know, Oracle has increased its share of the software market through a number of high-profile acquisitions. Legally combining companies is a very complicated process -- it can take months to complete, especially for the acquisitions with offices in several countries, each with its own unique laws and regulations. It's a mission critical and time sensitive process to roll an acquired company's legacy systems (running vital operations, such as accounts receivable and general ledger (GL)) into the existing systems at Oracle. To date, we've run our primary financial ledgers in E-Business Suite R12 -- and we've successfully met the requirements of the business and closed the books on time every single quarter. But there's always room for improvement and that comes in the form of Fusion Applications. We are now live on Fusion Accounting Hub (FAH), which is the first critical step in moving to a full Fusion Financials instance. We started with FAH so that we could design a global chart of accounts. Eventually, every transaction in every country will originate from this global chart of accounts -- it becomes the structure for managing our business more uniformly. In conjunction, we're using Oracle Hyperion Data Relationship Management (DRM) to centralize and automate governance of our global chart of accounts and related hierarchies, which will help us lower our costs and greatly reduce risk. Each month, we have to consolidate data from our primary general ledgers. We have been able to simplify this process considerably using FAH. We can now submit our primary ledgers running in E-Business Suite (EBS) R12 directly to FAH, eliminating the need for more than 90 redundant consolidation ledgers. Also we can submit incrementally, so if we need to book an adjustment in a primary ledger after close, we can do so without re-opening it and re-submitting. As a result, we have earlier visibility to period-end actuals during the close. A goal of this implementation, and one that we successfully achieved, is that we are able to use FAH globally with no customization. This means we have the ability to fully deploy ledger sets at the consolidation level, plus we can use standard functionality for currency translation and mass allocations. We're able to use account monitoring and drill down functionality from the consolidation level all the way through to EBS primary ledgers and sub-ledgers, which allows someone to click through a transaction appearing at the consolidation level clear through to its original source, a significant productivity enhancement when doing research. We also see a significant improvement in reporting using Essbase cube and Hyperion Smart View. Specifically, "the addition of an Essbase cube on top of the GL gives us tremendous versatility to automate and speed our elimination process," says Claire Sebti, Senior Director of Corporate Accounting at Oracle. A highlight of this story is that FAH is running in a co-existence environment. Our plan is to move to Fusion Financials in steps, starting with FAH. Next, our Oracle Financial Services Software subsidiary will move to a full Fusion Financials instance. Then we'll replace our EBS instance with Fusion Financials. This approach allows us to plan in steps, learn as we go, and not overwhelm our teams. 
It also reduces the risk that comes with moving the entire instance at once. Maria Smith, Vice President of Global Controller Operations, is confident about how they've positioned themselves to uptake more Fusion functionality and is eager to "continue to drive additional efficiency and cost savings." In this story, the happy customers are Oracle controllers, financial analysts, accounting specialists, and our management team that get earlier access to more flexible reporting. "Fusion Accounting Hub simplifies our processes and gives us more transparency into account activity," raves Alex SanJuan, Senior Director, Record to Report Strategic Process Owner. Overall, the team has been very impressed with the usability and functionality of FAH and are pleased with the quantifiable improvements. Claire Sebti states, "Our WD5 close activities have been reduced by at least four hours of system processing time, just for the consolidation group." Fusion Accounting Hub is an inspiring beginning to our Fusion Financials implementation story. There's no doubt it's going to be an international bestseller! Corey West, Senior Vice President Oracle's Corporate Controller and Chief Accounting Officer

    Read the article

  • Talking JavaOne with Rock Star Kirk Pepperdine

    - by Janice J. Heiss
    Kirk Pepperdine is not only a JavaOne Rock Star but a Java Champion and a highly regarded expert in Java performance tuning who works as a consultant, educator, and author. He is the principal consultant at Kodewerk Ltd. He speaks frequently at conferences and co-authored the Ant Developer's Handbook. In the rapidly shifting world of information technology, Pepperdine, as much as anyone, keeps up with what's happening with Java performance tuning. Pepperdine will participate in the following sessions: CON5405 - Are Your Garbage Collection Logs Speaking to You? BOF6540 - Java Champions and JUG Leaders Meet Oracle Executives (with Jeff Genender, Mattias Karlsson, Henrik Stahl, Georges Saab) HOL6500 - Finding and Solving Java Deadlocks (with Heinz Kabutz, Ellen Kraffmiller, Martijn Verburg, Jeff Genender, and Henri Tremblay) I asked him what technological changes need to be taken into account in performance tuning. “The volume of data we're dealing with just seems to be getting bigger and bigger all the time,” observed Pepperdine. “A couple of years ago you'd never think of needing a heap that was 64g, but today there are deployments where the heap has grown to 256g and tomorrow there are plans for heaps that are even larger. Dealing with all that data simply requires more horse power and some very specialized techniques. In some cases, teams are trying to push hardware to the breaking point. Under those conditions, you need to be very clever just to get things to work -- let alone to get them to be fast. We are very quickly moving from a world where everything happens in a transaction to one where if you were to even consider using a transaction, you've lost." When asked about the greatest misconceptions about performance tuning that he currently encounters, he said, “If you have a performance problem, you should start looking at code at the very least and for that extra step, whip out an execution profiler. I'm not going to say that I never use execution profilers or look at code. What I will say is that execution profilers are effective for a small subset of performance problems and code is literally the last thing you should look at.” And what is the most exciting thing happening in the world of Java today? “Interesting question because so many people would say that nothing exciting is happening in Java. Some might be disappointed that a few features have slipped in terms of scheduling. But I'd disagree with the first group and I'm not so concerned about the slippage because I still see a lot of exciting things happening. First, lambda will finally be with us and with lambda will come better ways.” For JavaOne, he is proctoring for Heinz Kabutz's lab. “I'm actually looking forward to that more than I am to my own talk,” he remarked. “Heinz will be the third non-Sun/Oracle employee to present a lab and the first since Oracle began hosting JavaOne. He's got a great message. He's spent a ton of time making sure things are going to work, and we've got a great team of proctors to help out. After that, getting my talk done, the Java Champion's panel session and then kicking back and just meeting up and talking to some Java heads.” Finally, what should Java developers know that they currently do not know? “’Write Once, Run Everywhere’ is a great slogan and Java has come closer to that dream than any other technology stack that I've used. That said, different hardware bits work differently and as hard as we try, the JVM can't hide all the differences. 
Plus, if we are to get good performance we need to work with our hardware and not against it. All this implies that Java developers need to know more about the hardware they are deploying to.” Originally published on blogs.oracle.com/javaone.

    Read the article

  • Guest blog: A Closer Look at Oracle Price Analytics by Will Hutchinson

    - by Takin Babaei
    Overview:  Price Analytics helps companies understand how much of each sale goes into discounts, special terms, and allowances. This visibility lets sales management see the panoply of discounts and start seeing whether each discount drives desired behavior. Price Analytics monitors parts of the quote-to-order process, tracking quotes, including the whole price waterfall, and seeing which result in orders. The “price waterfall” shows all discounts between list price and “pocket price”. Pocket price is the final price the vendor puts in its pocket after all discounts are taken. The value proposition: Based on benchmarks from leading consultancies and companies I have talked to that have studied the effects of discounting and started enforcing what many of them call “discount discipline”, they find they can increase the pocket price by 0.8-3%. Yes, in today’s zero or negative inflation environment, one can, through better monitoring of discounts, collect what amounts to a price rise of a few percent. We are not talking about selling more product, merely about collecting a higher pocket price without decreasing quantities sold. Higher prices fall straight to the bottom line. The best reference I have ever found for understanding this phenomenon comes from an article from the September-October 1992 issue of Harvard Business Review called “Managing Price, Gaining Profit” by Michael Marn and Robert Rosiello of McKinsey & Co. They describe the outsized impact price management has on bottom line performance compared to selling more product or cutting variable or fixed costs. Price Analytics manages what Marn and Rosiello call “transaction pricing”, namely the prices of a given transaction, as opposed to what is on the price list or pricing according to the value received. They make the point that if the vendor does not manage the price waterfall, customers will, to the vendor’s detriment. The article also reports that in the companies studied, there was no correlation between discount levels and any indication of customer value. I urge you to read this article. What Price Analytics does: Price Analytics looks at quotes the company issues and tracks them until either the quote is accepted or rejected or it expires. There are prebuilt adapters for EBS and Siebel as well as a universal adapter. The target audience includes pricing analysts, product managers, sales managers, and VP’s of sales, marketing, finance, and sales operations. It tracks how effective discounts have been, the win rate on quotes, how well pricing policies have been followed, customer and product profitability, and customer performance against commitments. It has the concept of price waterfall, the deal lifecycle, and price segmentation built into the product. These help product and sales managers understand their pricing and its effectiveness in driving revenue and profit. They also help understand how terms are adhered to during negotiations. They also help people understand what segments exist and how well they are adhered to. To help your company increase its profits and revenues, I urge you to look at this product. If you have questions, please contact me. Will Hutchinson, Master Principal Sales Consultant – Analytics, Oracle Corp. Will Hutchinson has worked in business intelligence and data warehousing for over 25 years. He started building data warehouses in 1986 at Metaphor, advancing to running Metaphor UK’s sales consulting area. He also worked in A.T. 
Kearney’s business intelligence practice for over four years, running projects and providing training to new consultants in the IT practice. He also worked at Informatica and then Siebel, before coming to Oracle with the Siebel acquisition. He became Master Principal Sales Consultant in 2009. He has worked on developing ROI and TCO models for business intelligence for over ten years. Mr. Hutchinson has a BS degree in Chemical Engineering from Princeton University and an MBA in Finance from the University of Chicago.

    Read the article

  • Indexed view deadlocking

    - by Dave Ballantyne
    Deadlocks can be a really tricky thing to track down the root cause of.  There are lots of articles on the subject of tracking down deadlocks, but seldom do I find that in a production system the cause is as straightforward.  That being said, deadlocks are always caused by process A needing a resource that process B has locked while process B holds a resource that process A needs.  There may be a longer chain of processes involved, but that is the basic premise. Here is one such (much simplified) scenario whose cause was at first non-obvious: The system has two tables, Product and Stock.  The Product table holds the description and prices of a product whilst Stock records the current stock level. USE tempdb GO CREATE TABLE Product ( ProductID INTEGER IDENTITY PRIMARY KEY, ProductName VARCHAR(255) NOT NULL, Price MONEY NOT NULL ) GO CREATE TABLE Stock ( ProductId INTEGER PRIMARY KEY, StockLevel INTEGER NOT NULL ) GO INSERT INTO Product SELECT TOP(1000) CAST(NEWID() AS VARCHAR(255)), ABS(CAST(CAST(NEWID() AS VARBINARY(255)) AS INTEGER))%100 FROM sys.columns a CROSS JOIN sys.columns b GO INSERT INTO Stock SELECT ProductID,ABS(CAST(CAST(NEWID() AS VARBINARY(255)) AS INTEGER))%100 FROM Product There is a single stored procedure of GetStock: Create Procedure GetStock as SELECT Product.ProductID,Product.ProductName FROM dbo.Product join dbo.Stock on Stock.ProductId = Product.ProductID where Stock.StockLevel <> 0 Analysis of the system showed that this procedure was causing a performance overhead and as reads of this data were many times more frequent than writes, an indexed view was created to lower the overhead. CREATE VIEW vwActiveStock With schemabinding AS SELECT Product.ProductID,Product.ProductName FROM dbo.Product join dbo.Stock on Stock.ProductId = Product.ProductID where Stock.StockLevel <> 0 go CREATE UNIQUE CLUSTERED INDEX PKvwActiveStock on vwActiveStock(ProductID) This worked perfectly, performance was improved, the team name was cheered to the rafters and beers all round.  Then, after a while, something else happened… The system updating the data changed. The update pattern of both the Stock update and the Product update used to be: BEGIN TRAN UPDATE... COMMIT BEGIN TRAN UPDATE... COMMIT BEGIN TRAN UPDATE... COMMIT It changed to: BEGIN TRAN UPDATE... UPDATE... UPDATE... COMMIT Nothing that would raise an eyebrow in even the closest of code reviews.  But after this change we saw deadlocks occurring. You can reproduce this by opening two sessions. In session 1 begin transaction Update Product set ProductName ='Test' where ProductID = 998 Then in session 2 begin transaction Update Stock set Stocklevel = 5 where ProductID = 999 Update Stock set Stocklevel = 5 where ProductID = 998 Hop back to session 1 and.. Update Product set ProductName ='Test' where ProductID = 999 Looking at the deadlock graphs we could see the contention was between two processes, one updating stock and the other updating product, but we knew that all the processes do to the tables is update them.  Period.  There are separate processes that handle the update of stock and product and never the twain shall meet, no reason why one should be requiring data from the other.  Then it struck us, AH the indexed view. Naturally, when you make an update to any table involved in an indexed view, the view has to be updated.  When this happens, the data in all the tables has to be read, so that explains our deadlocks.  The data from stock is read when you update product and vice-versa. 
The fix, once you understand the problem fully, is pretty simple: the apps did not guarantee the order in which data was updated.  Luckily it was a relatively simple fix to order the updates, and the deadlocks went away.  Note that there is still a *slight* risk of a deadlock occurring if both a stock update and a product update occur at *exactly* the same time.
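    A sketch (not from the post) of what the ordering fix can look like in practice, assuming the applications can sort the keys they are about to touch before issuing the statements; both writers now acquire locks on the indexed view's rows in the same ascending ProductID order, so the circular wait from the repro above cannot form:

```sql
-- Stock updater: always touch rows in ascending ProductID order.
BEGIN TRAN;
UPDATE Stock SET StockLevel = 5 WHERE ProductID = 998;
UPDATE Stock SET StockLevel = 5 WHERE ProductID = 999;
COMMIT;

-- Product updater: same ordering rule, so lock acquisition on the
-- vwActiveStock rows happens in the same sequence in both processes.
BEGIN TRAN;
UPDATE Product SET ProductName = 'Test' WHERE ProductID = 998;
UPDATE Product SET ProductName = 'Test' WHERE ProductID = 999;
COMMIT;
```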

    Read the article

  • SQL Server IO handling mechanism can be severely affected by high CPU usage

    - by sqlworkshops
    Are you using an SSD or a SAN / NAS based storage solution and sporadically observe SQL Server experiencing high IO wait times, or does your DAS / HDD from time to time become very slow according to SQL Server statistics? Read on… I need your help to up vote my connect item – https://connect.microsoft.com/SQLServer/feedback/details/744650/sql-server-io-handling-mechanism-can-be-severely-affected-by-high-cpu-usage. Instead of taking a few seconds, queries could take minutes/hours to complete when the CPU is busy. In SQL Server when a query / request needs to read data that is not in the data cache or when the request has to write to disk, like transaction log records, the request / task will queue up the IO operation and wait for it to complete (task in suspended state; this wait time is the resource wait time). When the IO operation is complete, the task will be queued to run on the CPU. If the CPU is busy executing other tasks, this task will wait (task in runnable state) until other tasks in the queue either complete or get suspended due to waits or exhaust their quantum of 4ms (this is the signal wait time, which along with resource wait time will increase the overall wait time). When the CPU becomes free, the task will finally be run on the CPU (task in running state). The signal wait time can be up to 4ms per runnable task; this is by design. So if a CPU has 5 runnable tasks in the queue, then this query after the resource becomes available might wait up to a maximum of 5 X 4ms = 20ms in the runnable state (normally less as other tasks might not use the full quantum). In case the CPU usage is high, let’s say many CPU intensive queries are running on the instance, there is a possibility that the IO operations that are completed at the Hardware and Operating System level are not yet processed by SQL Server, keeping the task in the resource wait state for longer than necessary. In case of an SSD, the IO operation might even complete in less than a millisecond, but it might take SQL Server 100s of milliseconds, for instance, to process the completed IO operation. For example, let’s say you have a user inserting 500 rows in individual transactions. When the transaction log is on an SSD or battery backed up controller that has write cache enabled, all of these inserts will complete in 100 to 200ms. With a CPU intensive parallel query executing across all CPU cores, the same inserts might take minutes to complete. WRITELOG wait time will be very high in this case (both under sys.dm_io_virtual_file_stats and sys.dm_os_wait_stats). In addition you will notice a large number of WRITELOG waits since log records are written by the LOG WRITER, and hence very high signal_wait_time_ms, leading to more query delays. However, the Performance Monitor counter PhysicalDisk: Avg. Disk sec/Write will report very low latency times. Such delayed IO handling also occurs for read operations, with artificially very high PAGEIOLATCH_SH wait time (with the number of PAGEIOLATCH_SH waits remaining the same). This problem will manifest more and more as customers start using SSD based storage for SQL Server, since they drive the CPU usage to the limits with faster IOs. We have a few workarounds for specific scenarios, but we think Microsoft should resolve this issue at the product level. 
We have a connect item open – https://connect.microsoft.com/SQLServer/feedback/details/744650/sql-server-io-handling-mechanism-can-be-severely-affected-by-high-cpu-usage – (with example scripts) to reproduce this behavior; please up vote the item so the issue will be addressed by the SQL Server product team soon. Thanks for your help and best regards, Ramesh Meyyappan. Home: www.sqlworkshops.com LinkedIn: http://at.linkedin.com/in/rmeyyappan
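    Not part of the original article, but one way to gauge how much of these waits is CPU starvation (signal wait) rather than the IO itself is to compare signal wait time against total wait time for the wait types discussed above:

```sql
-- Share of each wait spent waiting for a CPU after the resource was ready.
SELECT wait_type,
       waiting_tasks_count,
       wait_time_ms,
       signal_wait_time_ms,
       CAST(100.0 * signal_wait_time_ms / NULLIF(wait_time_ms, 0)
            AS DECIMAL(5, 2)) AS signal_wait_pct
FROM sys.dm_os_wait_stats
WHERE wait_type IN ('WRITELOG', 'PAGEIOLATCH_SH', 'PAGEIOLATCH_EX')
ORDER BY wait_time_ms DESC;
```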

    Read the article

  • MySQL Connector/Net 6.6.2 has been released

    - by fernando
    MySQL Connector/Net 6.6.2, a new version of the all-managed .NET driver for MySQL, has been released.  This is the first of two beta releases intended to introduce users to the new features in the release.  This release is feature complete and should be stable enough for users to understand the new features and how we expect them to work.  As is the case with all non-GA releases, it should not be used in any production environment.  It is appropriate for use with MySQL server versions 5.0-5.6. It is now available in source and binary form from http://dev.mysql.com/downloads/connector/net/#downloads and mirror sites (note that not all mirror sites may be up to date at this point - if you can't find this version on some mirror, please try again later or choose another download site.) The 6.6 version of MySQL Connector/Net brings the following new features:   * Stored routine debugging   * Entity Framework 4.3 Code First support   * Pluggable authentication (now third parties can plug new authentication mechanisms into the driver).   * Full Visual Studio 2012 support: everything from Server Explorer to Intellisense & the Stored Routine debugger. Stored Procedure Debugging ------------------------------------------- We are very excited to introduce stored procedure debugging into our Visual Studio integration.  It works in a very intuitive manner by simply clicking 'Debug Routine' from Server Explorer. You can debug stored routines, functions & triggers. Some of the new features in this release include:   * Besides normal breakpoints, you can define conditional & pass count breakpoints.   * Now the debugger editor shows colorizing.   * Now you can change the values of locals in a function scope (previously caused deadlock due to functions executing within their own transaction).   * Now you can also debug triggers for 'replace' sql statements.   * In general anything related to locals, watches, breakpoints, stepping & call stack should work in a similar way to the C# Visual Studio debugger. Some limitations remain, due to the current debugger architecture:   * Some MySQL functions cannot be debugged currently (get_lock, release_lock, begin, commit, rollback, set transaction level).   * Only one debug session may be active on a given server. The Debugger is feature complete at this point. We look forward to your feedback. Documentation ------------------------------------- The documentation is still being developed and will be readily available soon (before Beta 2).  You can view current Connector/Net documentation at http://dev.mysql.com/doc/refman/5.5/en/connector-net.html You can find our team blog at http://blogs.oracle.com/MySQLOnWindows. You can also post questions on our forums at http://forums.mysql.com/. Enjoy and thanks for the support! 
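    Not from the announcement itself, but a minimal sketch of the kind of call the new routine debugger is aimed at: invoking a stored procedure through Connector/Net's standard ADO.NET classes (the server, database and procedure names here are placeholders):

```csharp
using System;
using System.Data;
using MySql.Data.MySqlClient;

class Demo
{
    static void Main()
    {
        // Placeholder connection details; point these at a MySQL 5.0-5.6 server.
        var connectionString = "server=localhost;database=test;uid=appuser;pwd=secret;";

        using (var conn = new MySqlConnection(connectionString))
        {
            conn.Open();

            // Call a hypothetical stored procedure; this is the sort of routine
            // you could then step through with 'Debug Routine' in Server Explorer.
            using (var cmd = new MySqlCommand("add_order", conn))
            {
                cmd.CommandType = CommandType.StoredProcedure;
                cmd.Parameters.AddWithValue("@customer_id", 42);
                cmd.Parameters.AddWithValue("@amount", 99.95m);
                cmd.ExecuteNonQuery();
            }
        }
    }
}
```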

    Read the article

  • Thinking Local, Regional and Global

    - by Apeksha Singh-Oracle
    The FIFA World Cup tournament is the biggest single-sport competition: it’s watched by about 1 billion people around the world. Every four years each national team’s manager is challenged to pull together a group of players who ply their trade across the globe. For example, of the 23 members of Brazil’s national team, only four actually play for Brazilian teams, and the rest play in England, France, Germany, Spain, Italy and Ukraine. Each country’s national league, each team and each coach has a unique style. Getting all these “localized” players to work together successfully as one unit is no easy feat. In addition to $35 million in prize money, much is at stake – not least national pride and global bragging rights until the next World Cup in four years’ time. Achieving economic integration in the ASEAN region by 2015 is a bit like trying to create the next World Cup champion by 2018. The team comprises Brunei Darussalam, Cambodia, Indonesia, Lao PDR, Malaysia, Myanmar, Philippines, Singapore, Thailand and Vietnam. All have different languages, currencies, cultures and customs, rules and regulations. But if they can pull together as one unit, the opportunity is not only great for business and the economy, but it’s also a source of regional pride. BCG expects by 2020 the number of firms headquartered in Asia with revenue exceeding $1 billion will double to more than 5,000. Their trade in the region and with the world is forecast to increase to 37% of an estimated $37 trillion of global commerce by 2020 from 30% in 2010. Banks offering transactional banking services to the emerging marketplace need to prepare to respond to customer needs across the spectrum – MSMEs, SMEs, corporates and multinational corporations. Customers want innovative, differentiated, value-added products and services that provide: • Pan-regional operational independence while enabling a single source of truth at a regional level • Regional connectivity and Cash & Liquidity optimization • Enabling a consistent experience for their customers by offering standardized products & services across all ASEAN countries • Multi-channel & self service capabilities / access to real-time information on liquidity and cash flows • Convergence of cash management with supply chain and trade finance While enabling the above to meet customer demands, a comprehensive and robust credit management solution for effective regional banking operations is a must to manage risk. According to BCG, Asia-Pacific wholesale transaction-banking revenues are expected to triple to $139 billion by 2022 from $46 billion in 2012. To take advantage of the trend, banks will have to manage and maximize their own growth opportunities, compete on a broader scale, manage the complexity within the region and increase efficiency. They’ll also have to choose the right operating model and regional IT platform to offer: • Account Services • Cash & Liquidity Management • Trade Services & Supply Chain Financing • Payments • Securities services • Credit and Lending • Treasury services The core platform should be able to balance global needs and local nuances. Certain functions need to be performed at a regional level, while others need to be performed on a country level. Financial reporting and regulatory compliance are a case in point. The ASEAN Economic Community is in the final lap of its preparations for the ultimate challenge: becoming a formidable team in the global league. 
Meanwhile, transaction banks are designing their own hat trick: implementing a world-class IT platform, positioning themselves to respond to customer needs and establishing a foundation for revenue generation for years to come. Anand Ramachandran, Senior Director, Global Banking Solutions Practice, Oracle Financial Services Global Business Unit

    Read the article

  • Why doesn't TransactionScope work with Entity Framework?

    - by NotDan
    See the code below. If I initialize more than one entity context, then I get the following exception on the 2nd set of code only. If I comment out the second set, it works. {"The underlying provider failed on Open."} Inner: {"Communication with the underlying transaction manager has failed."} Inner: {"Error HRESULT E_FAIL has been returned from a call to a COM component."} Note that this is a sample app and I know it doesn't make sense to create 2 contexts in a row. However, the production code does have reason to create multiple contexts in the same TransactionScope, and this cannot be changed. Edit: Here is a previous question of mine about trying to set up MS-DTC. It seems to be enabled on both the server and the client. I'm not sure if it is set up correctly. Also note that one of the reasons I am trying to do this is that existing code within the TransactionScope uses ADO.NET and Linq 2 Sql... I would like those to use the same transaction also. (That probably sounds crazy, but I need to make it work if possible.) http://stackoverflow.com/questions/794364/how-do-i-use-transactionscope-in-c Solution: Windows Firewall was blocking the connections to MS-DTC. using(TransactionScope ts = new System.Transactions.TransactionScope()) { using (DatabaseEntityModel o = new DatabaseEntityModel()) { var v = (from s in o.Advertiser select s).First(); v.AcceptableLength = 1; o.SaveChanges(); } //-> By commenting out this section, it works using (DatabaseEntityModel o = new DatabaseEntityModel()) { //Exception on this next line var v = (from s1 in o.Advertiser select s1).First(); v.AcceptableLength = 1; o.SaveChanges(); } //-> ts.Complete(); }
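    A related workaround that comes up often for this error, sketched below and distinct from the poster's eventual fix (the firewall rule): share one open EntityConnection between both contexts so the TransactionScope never needs a second connection and is not promoted to MS-DTC at all. This assumes the EF-generated DatabaseEntityModel exposes the usual constructor that accepts an EntityConnection, and that "name=DatabaseEntityModel" is the connection string name in the config file; adjust both to the real model.

    // Sketch only: keep the transaction local by reusing a single connection.
    // Requires references to System.Transactions and System.Data.Entity
    // (EntityConnection lives in System.Data.EntityClient).
    using (TransactionScope ts = new TransactionScope())
    using (EntityConnection conn = new EntityConnection("name=DatabaseEntityModel")) // assumed connection string name
    {
        conn.Open(); // one physical connection, opened once, used by both contexts

        using (DatabaseEntityModel o = new DatabaseEntityModel(conn))
        {
            var v = (from s in o.Advertiser select s).First();
            v.AcceptableLength = 1;
            o.SaveChanges();
        }

        using (DatabaseEntityModel o = new DatabaseEntityModel(conn))
        {
            var v = (from s1 in o.Advertiser select s1).First();
            v.AcceptableLength = 1;
            o.SaveChanges();
        }

        ts.Complete();
    }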

    Read the article

  • nhibernate : a different object with the same identifier value was already associated with the sessi

    - by frosty
    I am getting the following error when I try to save my "Company" entity in my MVC application: a different object with the same identifier value was already associated with the session: 2, of entity: I am using an IoC container: private class EStoreDependencies : NinjectModule { public override void Load() { Bind<ICompanyRepository>().To<CompanyRepository>().WithConstructorArgument("session", NHibernateHelper.OpenSession()); } } My CompanyRepository: public class CompanyRepository : ICompanyRepository { private ISession _session; public CompanyRepository(ISession session) { _session = session; } public void Update(Company company) { using (ITransaction transaction = _session.BeginTransaction()) { _session.Update(company); transaction.Commit(); } } } And my session helper: public class NHibernateHelper { private static ISessionFactory _sessionFactory; const string SessionKey = "MySession"; private static ISessionFactory SessionFactory { get { if (_sessionFactory == null) { var configuration = new Configuration(); configuration.Configure(); configuration.AddAssembly(typeof(UserProfile).Assembly); configuration.SetProperty(NHibernate.Cfg.Environment.ConnectionStringName, System.Environment.MachineName); _sessionFactory = configuration.BuildSessionFactory(); } return _sessionFactory; } } public static ISession OpenSession() { var context = HttpContext.Current; //.GetCurrentSession() if (context != null && context.Items.Contains(SessionKey)) { //Return already open ISession return (ISession)context.Items[SessionKey]; } else { //Create new ISession and store in HttpContext var newSession = SessionFactory.OpenSession(); if (context != null) context.Items[SessionKey] = newSession; return newSession; } } } My MVC action: [HttpPost] public ActionResult Edit(EStore.Domain.Model.Company company) { if (company.Id > 0) { _companyRepository.Update(company); _statusResponses.Add(StatusResponseHelper.Create(Constants .RecordUpdated(), StatusResponseLookup.Success)); } else { company.CreatedByUserId = currentUserId; _companyRepository.Add(company); } var viewModel = EditViewModel(company.Id, _statusResponses); return View("Edit", viewModel); }
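    This error typically means the request-scoped session already holds a Company instance with the same id (for example, one loaded while building the edit view model), so Update() cannot attach a second copy. One commonly suggested fix, sketched below on the assumption that this is NHibernate 2.1 or later (older versions expose the equivalent SaveOrUpdateCopy), is to use Merge in the repository:

    public void Update(Company company)
    {
        using (ITransaction transaction = _session.BeginTransaction())
        {
            // Merge copies the detached entity's state onto the instance the
            // session already tracks instead of attaching a duplicate with the
            // same identifier.
            _session.Merge(company); // older NHibernate: _session.SaveOrUpdateCopy(company);
            transaction.Commit();
        }
    }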

    Read the article

  • Stresstesting ASP.NET/IIS with WCAT

    - by MartinHN
    I'm trying to set up a stress/load test using the WCAT toolkit included in the IIS Resources. Using LogParser, I've generated a UBR file with the configuration. It looks something like this: [Configuration] NumClientMachines: 1 # number of distinct client machines to use NumClientThreads: 100 # number of threads per machine AsynchronousWait: TRUE # asynchronous wait for think and delay Duration: 5m # length of experiment (m = minutes, s = seconds) MaxRecvBuffer: 8192K # suggested maximum received buffer ThinkTime: 0s # maximum think-time before next request WarmupTime: 5s # time to warm up before taking statistics CooldownTime: 6s # time to cool down at the end of the experiment [Performance] [Script] SET RequestHeader = "Accept: */*\r\n" APP RequestHeader = "Accept-Language: en-us\r\n" APP RequestHeader = "User-Agent: Mozilla/4.0 (compatible; MSIE 6.0; Windows NT 5.2; .NET CLR 1.0.3705)\r\n" APP RequestHeader = "Host: %HOST%\r\n" NEW TRANSACTION classId = 1 NEW REQUEST HTTP ResponseStatusCode = 200 Weight = 45117 verb = "GET" URL = "http://Url1.com" NEW TRANSACTION classId = 3 NEW REQUEST HTTP ResponseStatusCode = 200 Weight = 13662 verb = "GET" URL = "http://Url1.com/test.aspx" Does it look OK? I execute the controller with this command: wcctl -z StressTest.ubr -a localhost The client is executed like this: wcclient localhost When the client is executed, I get this error: main client thread Connect Attempt 0 Failed. Error = 10061 Has anyone in this world ever used WCAT?

    Read the article

  • Multiple database with Spring+Hibernate+JPA

    - by ziftech
    Hi everybody! I'm trying to configure Spring+Hibernate+JPA for work with two databases (MySQL and MSSQL) my datasource-context.xml: <?xml version="1.0" encoding="UTF-8"?> <beans xmlns="http://www.springframework.org/schema/beans" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xmlns:aop="http://www.springframework.org/schema/aop" xsi:schemaLocation="http://www.springframework.org/schema/aop http://www.springframework.org/schema/aop/spring-aop-2.5.xsd http://www.springframework.org/schema/beans http://www.springframework.org/schema/beans/spring-beans-2.5.xsd http://www.springframework.org/schema/tx http://www.springframework.org/schema/tx/spring-tx-2.5.xsd http://www.springframework.org/schema/util http://www.springframework.org/schema/util/spring-util-2.5.xsd" xmlns:p="http://www.springframework.org/schema/p" xmlns:tx="http://www.springframework.org/schema/tx" xmlns:util="http://www.springframework.org/schema/util"> <!-- Data Source config --> <bean id="dataSource" class="org.apache.commons.dbcp.BasicDataSource" destroy-method="close" p:driverClassName="${local.jdbc.driver}" p:url="${local.jdbc.url}" p:username="${local.jdbc.username}" p:password="${local.jdbc.password}"> </bean> <bean id="dataSourceRemote" class="org.apache.commons.dbcp.BasicDataSource" destroy-method="close" p:driverClassName="${remote.jdbc.driver}" p:url="${remote.jdbc.url}" p:username="${remote.jdbc.username}" p:password="${remote.jdbc.password}" /> <bean id="transactionManager" class="org.springframework.orm.jpa.JpaTransactionManager" p:entity-manager-factory-ref="entityManagerFactory" /> <!-- JPA config --> <tx:annotation-driven transaction-manager="transactionManager" /> <bean id="persistenceUnitManager" class="org.springframework.orm.jpa.persistenceunit.DefaultPersistenceUnitManager"> <property name="persistenceXmlLocations"> <list value-type="java.lang.String"> <value>classpath*:config/persistence.local.xml</value> <value>classpath*:config/persistence.remote.xml</value> </list> </property> <property name="dataSources"> <map> <entry key="localDataSource" value-ref="dataSource" /> <entry key="remoteDataSource" value-ref="dataSourceRemote" /> </map> </property> <property name="defaultDataSource" ref="dataSource" /> </bean> <bean id="entityManagerFactory" class="org.springframework.orm.jpa.LocalContainerEntityManagerFactoryBean"> <property name="jpaVendorAdapter"> <bean class="org.springframework.orm.jpa.vendor.HibernateJpaVendorAdapter" p:showSql="true" p:generateDdl="true"> </bean> </property> <property name="persistenceUnitManager" ref="persistenceUnitManager" /> <property name="persistenceUnitName" value="localjpa"/> </bean> <bean class="org.springframework.orm.jpa.support.PersistenceAnnotationBeanPostProcessor" /> </beans> each persistence.xml contains one unit, like this: <persistence-unit name="remote" transaction-type="RESOURCE_LOCAL"> <properties> <property name="hibernate.ejb.naming_strategy" value="org.hibernate.cfg.DefaultNamingStrategy" /> <property name="hibernate.dialect" value="${remote.hibernate.dialect}" /> <property name="hibernate.hbm2ddl.auto" value="${remote.hibernate.hbm2ddl.auto}" /> </properties> </persistence-unit> PersistenceUnitManager cause following exception: Cannot resolve reference to bean 'persistenceUnitManager' while setting bean property 'persistenceUnitManager'; nested exception is org.springframework.beans.factory.BeanCreationException: Error creating bean with name 'persistenceUnitManager' defined in class path resource [config/datasource-context.xml]: Initialization 
of bean failed; nested exception is org.springframework.beans.TypeMismatchException: Failed to convert property value of type [java.util.ArrayList] to required type [java.lang.String] for property 'persistenceXmlLocation'; nested exception is java.lang.IllegalArgumentException: Cannot convert value of type [java.util.ArrayList] to required type [java.lang.String] for property 'persistenceXmlLocation': no matching editors or conversion strategy found If left only one persistence.xml without list, every works fine but I need 2 units... I also try to find alternative solution for work with two databases in Spring+Hibernate context, so I would appreciate any solution new error after changing to persistenceXmlLocations No single default persistence unit defined in {classpath:config/persistence.local.xml, classpath:config/persistence.remote.xml} UPDATE: I add persistenceUnitName, it works, but only with one unit, still need help UPDATE: thanks, ChssPly76 I changed config files: datasource-context.xml <?xml version="1.0" encoding="UTF-8"?> <beans xmlns="http://www.springframework.org/schema/beans" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xmlns:aop="http://www.springframework.org/schema/aop" xsi:schemaLocation="http://www.springframework.org/schema/aop http://www.springframework.org/schema/aop/spring-aop-2.5.xsd http://www.springframework.org/schema/beans http://www.springframework.org/schema/beans/spring-beans-2.5.xsd http://www.springframework.org/schema/tx http://www.springframework.org/schema/tx/spring-tx-2.5.xsd http://www.springframework.org/schema/util http://www.springframework.org/schema/util/spring-util-2.5.xsd" xmlns:p="http://www.springframework.org/schema/p" xmlns:tx="http://www.springframework.org/schema/tx" xmlns:util="http://www.springframework.org/schema/util"> <bean id="dataSource" class="org.apache.commons.dbcp.BasicDataSource" destroy-method="close" p:driverClassName="${local.jdbc.driver}" p:url="${local.jdbc.url}" p:username="${local.jdbc.username}" p:password="${local.jdbc.password}"> </bean> <bean id="dataSourceRemote" class="org.apache.commons.dbcp.BasicDataSource" destroy-method="close" p:driverClassName="${remote.jdbc.driver}" p:url="${remote.jdbc.url}" p:username="${remote.jdbc.username}" p:password="${remote.jdbc.password}"> </bean> <bean class="org.springframework.orm.jpa.support.PersistenceAnnotationBeanPostProcessor"> <property name="defaultPersistenceUnitName" value="pu1" /> </bean> <bean id="persistenceUnitManager" class="org.springframework.orm.jpa.persistenceunit.DefaultPersistenceUnitManager"> <property name="persistenceXmlLocation" value="${persistence.xml.location}" /> <property name="defaultDataSource" ref="dataSource" /> <!-- problem --> <property name="dataSources"> <map> <entry key="local" value-ref="dataSource" /> <entry key="remote" value-ref="dataSourceRemote" /> </map> </property> </bean> <bean id="entityManagerFactory" class="org.springframework.orm.jpa.LocalContainerEntityManagerFactoryBean"> <property name="jpaVendorAdapter"> <bean class="org.springframework.orm.jpa.vendor.HibernateJpaVendorAdapter" p:showSql="true" p:generateDdl="true"> </bean> </property> <property name="persistenceUnitManager" ref="persistenceUnitManager" /> <property name="persistenceUnitName" value="pu1" /> <property name="dataSource" ref="dataSource" /> </bean> <bean id="entityManagerFactoryRemote" class="org.springframework.orm.jpa.LocalContainerEntityManagerFactoryBean"> <property name="jpaVendorAdapter"> <bean 
class="org.springframework.orm.jpa.vendor.HibernateJpaVendorAdapter" p:showSql="true" p:generateDdl="true"> </bean> </property> <property name="persistenceUnitManager" ref="persistenceUnitManager" /> <property name="persistenceUnitName" value="pu2" /> <property name="dataSource" ref="dataSourceRemote" /> </bean> <tx:annotation-driven /> <bean id="transactionManager" class="org.springframework.orm.jpa.JpaTransactionManager" p:entity-manager-factory-ref="entityManagerFactory" /> <bean id="transactionManagerRemote" class="org.springframework.orm.jpa.JpaTransactionManager" p:entity-manager-factory-ref="entityManagerFactoryRemote" /> </beans> persistence.xml <?xml version="1.0" encoding="UTF-8"?> <persistence xmlns="http://java.sun.com/xml/ns/persistence" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="http://java.sun.com/xml/ns/persistence http://java.sun.com/xml/ns/persistence/persistence_1_0.xsd" version="1.0"> <persistence-unit name="pu1" transaction-type="RESOURCE_LOCAL"> <properties> <property name="hibernate.ejb.naming_strategy" value="org.hibernate.cfg.DefaultNamingStrategy" /> <property name="hibernate.dialect" value="${local.hibernate.dialect}" /> <property name="hibernate.hbm2ddl.auto" value="${local.hibernate.hbm2ddl.auto}" /> </properties> </persistence-unit> <persistence-unit name="pu2" transaction-type="RESOURCE_LOCAL"> <properties> <property name="hibernate.ejb.naming_strategy" value="org.hibernate.cfg.DefaultNamingStrategy" /> <property name="hibernate.dialect" value="${remote.hibernate.dialect}" /> <property name="hibernate.hbm2ddl.auto" value="${remote.hibernate.hbm2ddl.auto}" /> </properties> </persistence-unit> </persistence> Now it builds two entityManagerFactory, but both are for Microsoft SQL Server [main] INFO org.hibernate.ejb.Ejb3Configuration - Processing PersistenceUnitInfo [ name: pu1 ...] [main] INFO org.hibernate.cfg.SettingsFactory - RDBMS: Microsoft SQL Server [main] INFO org.hibernate.ejb.Ejb3Configuration - Processing PersistenceUnitInfo [ name: pu2 ...] [main] INFO org.hibernate.cfg.SettingsFactory - RDBMS: Microsoft SQL Server (but must MySQL) I suggest, that use only dataSource, dataSourceRemote (no substitution) is not worked. That's my last problem

    Read the article

  • Generate JSON object with transactionReceipt

    - by Carlos
    Hi, I've spent the past few days trying to test my first in-app purchase iPhone application. Unfortunately I can't find the way to talk to the iTunes server to verify the transactionReceipt. Because it's my first try with this technology, I chose to verify the receipt directly from the iPhone instead of using server support. But after trying to send the POST request with a JSON object created using the JSON API from Google Code, iTunes always returns a strange response (instead of the "status = 0" string I am waiting for). Here's the code that I use to verify the receipt: - (void)recordTransaction:(SKPaymentTransaction *)transaction { NSString *receiptStr = [[NSString alloc] initWithData:transaction.transactionReceipt encoding:NSUTF8StringEncoding]; NSDictionary *jsonDictionary = [NSDictionary dictionaryWithObjectsAndKeys:@"algo mas",@"receipt-data",nil]; NSString *jsonString = [jsonDictionary JSONRepresentation]; NSLog(@"string to send: %@",jsonString); NSLog(@"JSON Created"); urlData = [[NSMutableData data] retain]; //NSURL *sandboxStoreURL = [[NSURL alloc] initWithString:@"https://sandbox.itunes.apple.com/verifyReceipt"]; NSMutableURLRequest *request = [NSMutableURLRequest requestWithURL:[NSURL URLWithString:@"https://sandbox.itunes.apple.com/verifyReceipt"]]; [request setHTTPMethod:@"POST"]; [request setHTTPBody:[jsonString dataUsingEncoding:NSUTF8StringEncoding]]; NSLog(@"will create connection"); [[NSURLConnection alloc] initWithRequest:request delegate:self]; } Maybe I'm forgetting something in the request's headers, but I think the problem is in the method I use to create the JSON object. Here's what the JSON object looks like before I add it to the HTTPBody: string to send: {"receipt-data":"{\n\t\"signature\" = \"AUYMbhY ........... D0gIjEuMCI7Cn0=\";\n\t\"pod\" = \"100\";\n\t\"signing-status\" = \"0\";\n}"} The response I've got: complete response { exception = "java.lang.IllegalArgumentException: Property list parsing failed while attempting to read unquoted string. No allowable characters were found. At line number: 1, column: 0."; status = 21002; } Thanks a lot for your guidance.

    Read the article

  • WF 4.0 can't get to resume workflow on the staging/production environment

    - by Yasmine Atta Hajjaj
    I have developed various registration workflows using WF 4.0. Each workflow has various bookmarks. I am using the registration workflows in an ASP.NET application. I tested the ASP.NET application locally and it works fine (starting workflows, persisting to the database and resuming bookmarks). When I try to test it on the staging server, everything breaks. I can no longer resume workflows and I get this error message: System.Runtime.DurableInstancing.InstancePersistenceCommandException was unhandled by user code Message=The execution of the InstancePersistenceCommand named {urn:schemas-microsoft-com:System.Activities.Persistence/command}LoadWorkflow was interrupted by an error. Source=System.Runtime.DurableInstancing StackTrace: at System.Runtime.AsyncResult.End[TAsyncResult](IAsyncResult result) at System.Runtime.DurableInstancing.InstancePersistenceContext.OuterExecute(InstanceHandle initialInstanceHandle, InstancePersistenceCommand command, Transaction transaction, TimeSpan timeout) at System.Runtime.DurableInstancing.InstanceStore.Execute(InstanceHandle handle, InstancePersistenceCommand command, TimeSpan timeout) at System.Activities.WorkflowApplication.PersistenceManager.Load(TimeSpan timeout) at System.Activities.WorkflowApplication.LoadCore(TimeSpan timeout, Boolean loadAny) at System.Activities.WorkflowApplication.Load(Guid instanceId, TimeSpan timeout) at System.Activities.WorkflowApplication.Load(Guid instanceId) at CEO_StartUpCEORegisterationTest.LoadInstance(Guid wfInstanceId) in c:\Users\Kunoichi\Documents\Visual Studio 2010\Projects\CMERegistrationSystem\RegistrationPortal\CEO\StartUpCEORegisterationTest.aspx.cs:line 64 at CEO_StartUpCEORegisterationTest.Page_Load(Object sender, EventArgs e) in c:\Users\Kunoichi\Documents\Visual Studio 2010\Projects\CMERegistrationSystem\RegistrationPortal\CEO\StartUpCEORegisterationTest.aspx.cs:line 44 at System.Web.Util.CalliHelper.EventArgFunctionCaller(IntPtr fp, Object o, Object t, EventArgs e) at System.Web.Util.CalliEventHandlerDelegateProxy.Callback(Object sender, EventArgs e) at System.Web.UI.Control.OnLoad(EventArgs e) at System.Web.UI.Control.LoadRecursive() at System.Web.UI.Page.ProcessRequestMain(Boolean includeStagesBeforeAsyncPoint, Boolean includeStagesAfterAsyncPoint) InnerException: System.Data.SqlClient.SqlException Message=Index 'NCIX_KeysTable_SurrogateInstanceId' on table 'KeysTable' (specified in the FROM clause) does not exist. Source=.Net SqlClient Data Provider ErrorCode=-2146232060 Class=16 LineNumber=211 Number=308 Procedure=LoadInstance Server= State=1 StackTrace: at System.Runtime.AsyncResult.End[TAsyncResult](IAsyncResult result) at System.Activities.DurableInstancing.SqlWorkflowInstanceStoreAsyncResult.SqlCommandAsyncResultCallback(IAsyncResult result) I know that this is quite verbose, but I have been banging my head against the wall for more than a week. I did search, and all I could find was advice about MS DTC. I enabled it on the staging server, I installed Application Server on the staging server, and I am still getting the same error. I would appreciate it if anyone could help with the problem. Thanks in advance :)

    Read the article

  • Membership.GetUser() within TransactionScope throws TransactionPromotionException

    - by Bob Kaufman
    The following code throws a TransactionAbortedException with message "The transaction has aborted" and an inner TransactionPromotionException with message "Failure while attempting to promote transaction": using ( TransactionScope transactionScope = new TransactionScope() ) { try { using ( MyDataContext context = new MyDataContext() ) { Guid accountID = new Guid( Request.QueryString[ "aid" ] ); Account account = ( from a in context.Accounts where a.UniqueID.Equals( accountID ) select a ).SingleOrDefault(); IQueryable < My_Data_Access_Layer.Login > loginList = from l in context.Logins where l.AccountID == account.AccountID select l; foreach ( My_Data_Access_Layer.Login login in loginList ) { MembershipUser membershipUser = Membership.GetUser( login.UniqueID ); } [... lots of DeleteAllOnSubmit() calls] context.SubmitChanges(); transactionScope.Complete(); } } catch ( Exception E ) { [... reports the exception ...] } } The error occurs at the call to Membership.GetUser(). My Connection String is: <add name="MyConnectionString" connectionString="Data Source=localhost\SQLEXPRESS;Initial Catalog=MyDatabase;Integrated Security=True" providerName="System.Data.SqlClient" /> Everything I've read tells me that TransactionScope should just get magically applied to the Membership calls. The user exists (I'd expect a null return otherwise.)
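    One workaround worth trying, sketched below under the assumption that the membership lookup does not need to participate in the transaction: the membership provider opens its own connection, and that second connection is what forces the ambient transaction to be promoted to MS-DTC (which then fails if DTC is unavailable). Wrapping the call in a suppressed scope keeps the lookup outside the ambient transaction.

    foreach ( My_Data_Access_Layer.Login login in loginList )
    {
        MembershipUser membershipUser;

        // Suppress the ambient transaction for the read-only membership lookup
        // so the provider's connection does not trigger promotion to MS-DTC.
        using ( TransactionScope suppressed = new TransactionScope( TransactionScopeOption.Suppress ) )
        {
            membershipUser = Membership.GetUser( login.UniqueID );
        }

        // ... use membershipUser as before ...
    }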

    Read the article

  • ejb testing issues with netbeans and openejb

    - by SibzTer
    I have created a NetBeans 6.7 Enterprise Application project with EJB and WAR modules and a test stateless session EJB with a simple sayHello() method. I also added the OpenEJB library in order to unit test the EJB. Everything runs fine except that I keep getting the following error: Testsuite: com.myapp.test.NewEmptyJUnitTest Apache OpenEJB 3.1.1 build: 20090530-06:18 http://openejb.apache.org/ INFO - openejb.home = C:\Users\me\Documents\NetBeansProjects\TestEnterpriseApp\TestEnterpriseApp-ejb INFO - openejb.base = C:\Users\me\Documents\NetBeansProjects\TestEnterpriseApp\TestEnterpriseApp-ejb INFO - Configuring Service(id=Default Security Service, type=SecurityService, provider-id=Default Security Service) INFO - Configuring Service(id=Default Transaction Manager, type=TransactionManager, provider-id=Default Transaction Manager) INFO - Found ClientModule in classpath: C:\Program Files\NetBeans 6.7.1\java2\ant\lib\ant.jar INFO - Found ClientModule in classpath: C:\Program Files\NetBeans 6.7.1\java2\ant\lib\ant-launcher.jar INFO - Found EjbModule in classpath: C:\Users\me\Documents\NetBeansProjects\TestEnterpriseApp\TestEnterpriseApp-ejb\build\jar INFO - Found ClientModule in classpath: C:\Users\me\Documents\NetBeansProjects\TestEnterpriseApp\lib\OpenEJB\xml-resolver-1.2.jar INFO - Found ClientModule in classpath: C:\Users\me\Documents\Downloads\glassfish\lib\webservices-tools.jar INFO - Beginning load: C:\Program Files\NetBeans 6.7.1\java2\ant\lib\ant.jar INFO - Beginning load: C:\Program Files\NetBeans 6.7.1\java2\ant\lib\ant-launcher.jar INFO - Beginning load: C:\Users\me\Documents\NetBeansProjects\TestEnterpriseApp\TestEnterpriseApp-ejb\build\jar INFO - Beginning load: C:\Users\me\Documents\NetBeansProjects\TestEnterpriseApp\lib\OpenEJB\xml-resolver-1.2.jar INFO - Beginning load: C:\Users\me\Documents\Downloads\glassfish\lib\webservices-tools.jar INFO - Configuring enterprise application: classpath.ear WARN - No application-client.xml found assuming annotations present: classpath.ear, module: ant.jar WARN - No application-client.xml found assuming annotations present: classpath.ear, module: ant-launcher.jar WARN - No application-client.xml found assuming annotations present: classpath.ear, module: xml-resolver-1.2.jar WARN - No application-client.xml found assuming annotations present: classpath.ear, module: webservices-tools.jar java.lang.Exception: Could not load 1/0/com/sun/codemodel/CodeWriter.class at org.apache.xbean.finder.ClassFinder.readClassDef(ClassFinder.java:730) .... It turns out that I am picking up the GlassFish library webservices-tools.jar from somewhere, and I can't find out how to get rid of it so that I don't get this bunch of exceptions whenever I try to run any JUnit tests. Has anyone faced this issue before? Can you help me resolve it please? Thanks.

    Read the article

  • Trouble with Google Finance API

    - by BANSAL MOHIT
    When I try to buy shares using the Google Finance API, I get an exception. Please help. run: Enter user ID: **@gmail.com Enter user password: ** Enter transaction type: Buy Enter transaction date (yyyy-mm-dd): 2010-03-10 Enter number of shares (optional, e.g. 100.0): Enter price (optional, e.g. 141.14): 12.0 Enter commission (optional, e.g. 20.0): 23.0 Enter currency (optional, e.g. USD, EUR, JPY): USD Enter any notes: Notes Enter portfolio ID: 1 Enter ticker (EXCHANGE:SYMBOL): NASDAQ:INFY Inserting Entry at location: http://finance.google.com/finance/feeds/default/portfolios/1/positions/NASDAQ:INFY/transactions The server had a problem handling your request. com.google.gdata.util.ServiceForbiddenException: Forbidden Exception message unavailable at com.google.gdata.client.http.HttpGDataRequest.handleErrorResponse(HttpGDataRequest.java:561) at com.google.gdata.client.http.GoogleGDataRequest.handleErrorResponse(GoogleGDataRequest.java:563) at com.google.gdata.client.http.HttpGDataRequest.checkResponse(HttpGDataRequest.java:536) at com.google.gdata.client.http.HttpGDataRequest.execute(HttpGDataRequest.java:515) at com.google.gdata.client.http.GoogleGDataRequest.execute(GoogleGDataRequest.java:535) at com.google.gdata.client.Service.insert(Service.java:1347) at com.google.gdata.client.GoogleService.insert(GoogleService.java:599) at financetester.Main.insertTransactionEntry(Main.java:169) at financetester.Main.main(Main.java:81) BUILD SUCCESSFUL (total time: 1 minute 4 seconds)

    Read the article

  • Cannot turn off autocommit in a script using the Django ORM

    - by Wes
    I have a command line script that uses the Django ORM and MySQL backend. I want to turn off autocommit and commit manually. For the life of me, I cannot get this to work. Here is a pared down version of the script. A row is inserted into testtable every time I run this and I get this warning from MySQL: "Some non-transactional changed tables couldn't be rolled back". #!/usr/bin/python import os import sys django_dir = os.path.abspath(os.path.normpath(os.path.join(os.path.dirname(__file__), '..'))) sys.path.append(django_dir) os.environ['DJANGO_DIR'] = django_dir os.environ['DJANGO_SETTINGS_MODULE'] = 'myproject.settings' from django.core.management import setup_environ from myproject import settings setup_environ(settings) from django.db import transaction, connection cursor = connection.cursor() cursor.execute('SET autocommit = 0') cursor.execute('insert into testtable values (\'X\')') cursor.execute('rollback') I also tried placing the insert in a function and adding Django's commit_manually wrapper, like so: @transaction.commit_manually def myfunction(): cursor = connection.cursor() cursor.execute('SET autocommit = 0') cursor.execute('insert into westest values (\'X\')') cursor.execute('rollback') myfunction() I also tried setting DISABLE_TRANSACTION_MANAGEMENT = True in settings.py, with no further luck. I feel like I am missing something obvious. Any help you can give me is greatly appreciated. Thanks!

    Read the article

< Previous Page | 39 40 41 42 43 44 45 46 47 48 49 50  | Next Page >