Search Results

Search found 4921 results on 197 pages for 'django transactions'.

Page 144/197 | < Previous Page | 140 141 142 143 144 145 146 147 148 149 150 151  | Next Page >

  • What is the best way to achieve an RPO of zero and the lowest possible RTO (less than 15 minutes) with SQL 2008 R2?

    - by Adrian Hope-Bailie
    We are running a payments (EFT transaction processing) application which is processing high volumes of transactions 24/7, and we are currently investigating a better way of doing DB replication to our disaster recovery site.

    Our current and previous strategies have included using both DoubleTake and Redgate to replicate data to a warm stand-by. DoubleTake is the supported solution from the payments software vendor, however their (DoubleTake's) support in South Africa is very poor. We had a few issues and simply couldn't ever resolve them, so we had to give up on DoubleTake. We have been using Redgate to manually read the data from the primary site (via queries) and write to the DR site, but this is:

    - A bad solution
    - Getting the software vendor hot and bothered whenever we have support issues, as it has a tendency to interfere with the payment application, which is very DB intensive

    We recently upgraded the whole system to run on SQL 2008 R2 Enterprise, which means we should probably be looking at using some of the built-in replication features. The server has 2 fairly large databases with a mixture of tables containing highly volatile transactional data and pretty static configuration data. Replication would be done over a WAN link to a separate physical site and needs to achieve the following objectives:

    - RPO: Zero loss - This is transactional data with financial impact, so we can't lose anything.
    - RTO: Tending to zero - The business depends on our ability to process transactions; every minute we are down we are losing money.

    I have looked at a few of the other questions/answers but none meet our case exactly:

    - SQL Server 2008 failover strategy - Log shipping or replication?
    - How to achieve the following RTO & RPO with logshipping only using SQL Server?
    - What is the best of two approaches to achieve DB Replication?

    My current thinking is that we should use mirroring, but I am concerned that for RPO:0 we will need to do delayed commits, and this could impact the performance of the primary DB, which is not an option. Our current DR process is to:

    1. Stop incoming traffic to the primary site and allow all in-flight transactions to complete.
    2. Allow the replication to DR to complete.
    3. Change network routing to route to the DR site.
    4. Start all applications and services on the secondary site (ideally we can change this to a warmer stand-by whereby the applications are already running but not processing any transactions).

    In other words, the DR database needs to, as quickly as possible, catch up with primary and be ready for processing as the new primary. We would then need to be able to reverse this when we are ready to switch back. Is there a better option than mirroring (should we be doing log-shipping too) and can anyone suggest other considerations that we should keep in mind?

    Read the article

  • How to unmangle PDF format into a usable text or spreadsheet document?

    - by Chuck
    Upon requesting some daily/hourly sales data from a coworker who is responsible for such requests, I was given a series of PDF files. The point of sale program that is used, for some reason, answers requests for this type of information in the form of PDF files.

    The issue: the PDF files look to be in a format that should easily be copied and pasted into a spreadsheet. There are three columns that look to be neatly organized across two pages. When copy/pasting the first page, all three columns from the PDF's first page are dumped into a single column consisting of the Date followed by the Hours for the transactions on that day. The end of this Date/Time information is followed by all of the Total Sales values that should be attached to a Date and Time of the transaction. (Note: there are no duplicated Dates in the Date column, i.e., multiple transactions for a day only have one yyyy/mm/dd listed for the first row but not the following rows.) While it was a huge pain, it was possible to, in about four or five steps, get the single column of data broken out into three columns that matched the PDF.

    The second page of the PDF file, when attempting to copy/paste into a spreadsheet, creates a single column with the first third of the cells being the Dates from the PDF, the second third of the cells being the Hours of the transactions, and the final third of the cells being filled with the Total Sales. After the copy/paste there is no way to figure out which Hours belong to which Dates or Total Sales, due to the lack of the duplicated Dates in the Date column as mentioned above.

    My PDF-fu is next to non-existent. I've just now started to work with PDF editors and some www.convertmyPDFforfree.com websites, so far with absolutely nothing remotely coming anywhere near usable output. (Both methods have so far done nothing but produce blank documents.) Before I go back and pester my co-worker into figuring out a way to create a report in some other format than PDF, is there any method by which to take the data that looks to be formatted correctly in a PDF and copy/paste it into a spreadsheet that will look the same?

    I appreciate any help that can be made available. The sales data isn't so sensitive that I couldn't part with a bit to let somebody actually see what it is that needs to be dealt with, just let me know. The PDFs are less than 100kb each so sending them shouldn't be a burden to any interested party.
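
    If a little scripting is an option, extracting the text programmatically sometimes preserves the column layout better than copy/paste from a viewer. A minimal sketch, assuming the Apache PDFBox 2.x library (the file name is a placeholder); this only dumps position-sorted text and would still need column parsing afterwards:

        import java.io.File;
        import java.io.IOException;

        import org.apache.pdfbox.pdmodel.PDDocument;
        import org.apache.pdfbox.text.PDFTextStripper;

        // Dumps each page's text with position-based sorting, which often keeps the
        // contents of a row together better than copying from a PDF viewer.
        public class PdfTextDump {
            public static void main(String[] args) throws IOException {
                try (PDDocument doc = PDDocument.load(new File("sales-report.pdf"))) {
                    PDFTextStripper stripper = new PDFTextStripper();
                    stripper.setSortByPosition(true); // keep left-to-right order per line
                    for (int page = 1; page <= doc.getNumberOfPages(); page++) {
                        stripper.setStartPage(page);
                        stripper.setEndPage(page);
                        System.out.println("--- page " + page + " ---");
                        System.out.println(stripper.getText(doc));
                    }
                }
            }
        }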

    Read the article

  • Is the community MySQL safe for production use?

    - by n_kips
    Or will I need to get the Enterprise version? This is because I found this on MySQL's site: "If you are running a MySQL production level system, we would like to direct your attention to the product description of MySQL Enterprise Edition at: http://mysql.com/products/enterprise/". When I check the features, it seems like the Community edition does not support transactions, while the Enterprise version does. If it is true that the Community edition is not right for production, then it seems like PostgreSQL may be my way out, for it supports transactions and it is fully open source. Will the SQL syntax need to change (much) if I have to change? Thank you.
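
    For reference, transactions against MySQL look the same from client code regardless of edition; whether a commit is actually atomic depends on the storage engine (InnoDB supports transactions). A minimal JDBC sketch, with an invented schema, URL and credentials:

        import java.sql.Connection;
        import java.sql.DriverManager;
        import java.sql.PreparedStatement;
        import java.sql.SQLException;

        // Transfers an amount between two rows inside one transaction: both updates
        // commit together or neither is applied.
        public class MySqlTransactionDemo {
            public static void main(String[] args) throws SQLException {
                String url = "jdbc:mysql://localhost:3306/shop";
                try (Connection conn = DriverManager.getConnection(url, "user", "password")) {
                    conn.setAutoCommit(false);
                    try (PreparedStatement debit = conn.prepareStatement(
                             "UPDATE accounts SET balance = balance - ? WHERE id = ?");
                         PreparedStatement credit = conn.prepareStatement(
                             "UPDATE accounts SET balance = balance + ? WHERE id = ?")) {
                        debit.setBigDecimal(1, new java.math.BigDecimal("10.00"));
                        debit.setInt(2, 1);
                        debit.executeUpdate();

                        credit.setBigDecimal(1, new java.math.BigDecimal("10.00"));
                        credit.setInt(2, 2);
                        credit.executeUpdate();

                        conn.commit();   // both updates succeed together
                    } catch (SQLException e) {
                        conn.rollback(); // or neither is applied
                        throw e;
                    }
                }
            }
        }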

    Read the article

  • "Local transaction already has 1 non-XA Resource: cannot add more resources" error

    - by jthg
    After reading previous questions about this error, it seems like all of them conclude that you need to enable XA on all of the data sources. But:

    - What if I don't want a distributed transaction?
    - What would I do if I want to start transactions on two different databases at the same time, but commit the transaction on one database and roll back the transaction on the other?

    I'm wondering how my code actually initiated a distributed transaction. It looks to me like I'm starting completely separate transactions on each of the databases.

    Info about the application: the application is an EJB running on a Sun Java Application Server 9.1. I use something like the following Spring context to set up the Hibernate session factories:

        <bean id="dbADatasource" class="org.springframework.jndi.JndiObjectFactoryBean">
            <property name="jndiName" value="jdbc/dbA"/>
        </bean>
        <bean id="dbASessionFactory" class="org.springframework.orm.hibernate3.LocalSessionFactoryBean">
            <property name="dataSource" ref="dbADatasource" />
            <property name="hibernateProperties">
                [hibernate properties...]
            </property>
            <property name="mappingResources">
                [mapping resources...]
            </property>
        </bean>

        <bean id="dbBDatasource" class="org.springframework.jndi.JndiObjectFactoryBean">
            <property name="jndiName" value="jdbc/dbB"/>
        </bean>
        <bean id="dbBSessionFactory" class="org.springframework.orm.hibernate3.LocalSessionFactoryBean">
            <property name="dataSource" ref="dbBDatasource" />
            <property name="hibernateProperties">
                [hibernate properties...]
            </property>
            <property name="mappingResources">
                [mapping resources...]
            </property>
        </bean>

    Both of the JNDI resources are javax.sql.ConnectionPoolDataSources. They actually both point to the same connection pool, but we have two different JNDI resources because there's the possibility that the two groups of tables will move to different databases in the future. Then in code, I do:

        sessionA = dbASessionFactory.openSession();
        sessionB = dbBSessionFactory.openSession();
        sessionA.beginTransaction();
        sessionB.beginTransaction();

    The sessionB.beginTransaction() line produces the error in the title of this post - sometimes. I ran the app on two different Sun application servers. One runs it fine, the other throws the error. I don't see any difference in how the two servers are configured, although they do connect to different, but equivalent, databases.

    So the questions are:

    1. Why doesn't the above code start completely independent transactions?
    2. How can I force it to start independent transactions rather than a distributed transaction?
    3. What configuration could cause the difference in behavior between the two application servers?

    Thanks.
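
    For context, a minimal sketch (not the poster's code) of what fully independent, resource-local transactions look like with the Hibernate 3 API. It assumes both JNDI datasources are plain non-XA, non-JTA pools so the container never promotes the work to a distributed transaction; the class and field names are hypothetical:

        import org.hibernate.Session;
        import org.hibernate.SessionFactory;
        import org.hibernate.Transaction;

        public class IndependentTransactions {

            // The two factories wired up by the Spring context above.
            private final SessionFactory dbASessionFactory;
            private final SessionFactory dbBSessionFactory;

            public IndependentTransactions(SessionFactory a, SessionFactory b) {
                this.dbASessionFactory = a;
                this.dbBSessionFactory = b;
            }

            public void run() {
                Session sessionA = dbASessionFactory.openSession();
                Session sessionB = dbBSessionFactory.openSession();
                try {
                    Transaction txA = sessionA.beginTransaction();
                    Transaction txB = sessionB.beginTransaction();

                    // ... work against sessionA and sessionB ...

                    txA.commit();    // commit on database A
                    txB.rollback();  // independently roll back on database B
                } finally {
                    sessionA.close();
                    sessionB.close();
                }
            }
        }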

    Read the article

  • Problem using form builder & DOM manipulation in Rails with multiple levels of nested partials

    - by Chris Hart
    I'm having a problem using nested partials with dynamic form builder code (from the "complex form example" code on github) in Rails. I have my top level view "new" (where I attempt to generate the template):

        <% form_for (@transaction_group) do |txngroup_form| %>
          <%= txngroup_form.error_messages %>
          <% content_for :jstemplates do -%>
            <%= "var transaction='#{generate_template(txngroup_form, :transactions)}'" %>
          <% end -%>
          <%= render :partial => 'transaction_group',
                     :locals => { :f => txngroup_form, :txn_group => @transaction_group } %>
        <% end -%>

    This renders the transaction_group partial:

        <div class="content">
          <% logger.debug "in partial, class name = " + txn_group.class.name %>
          <% f.fields_for txn_group.transactions do |txn_form| %>
            <table id="transactions" class="form">
              <tr class="header"><td>Price</td><td>Quantity</td></tr>
              <%= render :partial => 'transaction', :locals => { :tf => txn_form } %>
            </table>
          <% end %>
          <div>&nbsp;</div>
          <div id="container">
            <%= link_to 'Add a transaction', '#transaction', :class => "add_nested_item", :rel => "transactions" %>
          </div>
          <div>&nbsp;</div>

    ... which in turn renders the transaction partial:

        <tr><td><%= tf.text_field :price, :size => 5 %></td>
            <td><%= tf.text_field :quantity, :size => 2 %></td></tr>

    The generate_template code looks like this:

        def generate_html(form_builder, method, options = {})
          options[:object] ||= form_builder.object.class.reflect_on_association(method).klass.new
          options[:partial] ||= method.to_s.singularize
          options[:form_builder_local] ||= :f
          form_builder.fields_for(method, options[:object], :child_index => 'NEW_RECORD') do |f|
            render(:partial => options[:partial], :locals => { options[:form_builder_local] => f })
          end
        end

        def generate_template(form_builder, method, options = {})
          escape_javascript generate_html(form_builder, method, options)
        end

    (Obviously my code is not the most elegant - I was trying to get this nested partial thing worked out first.)

    My problem is that I get an undefined variable exception from the transaction partial when loading the view:

        /Users/chris/dev/ss/app/views/transaction_groups/_transaction.html.erb:2:in _run_erb_app47views47transaction_groups47_transaction46html46erb_locals_f_object_transaction'
        /Users/chris/dev/ss/app/helpers/customers_helper.rb:29:in generate_html'
        /Users/chris/dev/ss/app/helpers/customers_helper.rb:28:in generate_html'
        /Users/chris/dev/ss/app/helpers/customers_helper.rb:34:in generate_template'
        /Users/chris/dev/ss/app/views/transaction_groups/new.html.erb:4:in _run_erb_app47views47transaction_groups47new46html46erb'
        /Users/chris/dev/ss/app/views/transaction_groups/new.html.erb:3:in _run_erb_app47views47transaction_groups47new46html46erb'
        /Users/chris/dev/ss/app/views/transaction_groups/new.html.erb:1:in _run_erb_app47views47transaction_groups47new46html46erb'
        /Users/chris/dev/ss/app/controllers/transaction_groups_controller.rb:17:in new'

    I'm pretty sure this is because the do loop for form_for hasn't executed yet (?)... I'm not sure that my approach to this problem is the best, but I haven't been able to find a better solution for dynamically adding form partials to the DOM. Basically I need a way to add records to a has_many model dynamically on a nested form. Any recommendations on a way to fix this particular problem or (even better!) a cleaner solution are appreciated. Thanks in advance. Chris

    Read the article

  • Technical Article: Oracle Magazine Java Developer of the Year Adam Bien on Java EE 6 Simplicity by Design

    - by janice.heiss(at)oracle.com
    Java Champion and Oracle Magazine Java Developer of the Year Adam Bien offers his unique perspective on how to leverage new Java EE 6 features to build simple and maintainable applications in a new article in Oracle Magazine. Bien examines different Java EE 6 architectures and design approaches in an effort to help developers build efficient, simple, and maintainable applications.

    From the article: "Java EE 6 consists of a set of independent APIs released together under the Java EE name. Although these APIs are independent, they fit together surprisingly well. For a given application, you could use only JavaServer Faces (JSF) 2.0, you could use Enterprise JavaBeans (EJB) 3.1 for transactional services, or you could use Contexts and Dependency Injection (CDI) with Java Persistence API (JPA) 2.0 and the Bean Validation model to implement transactions."

    "With a pragmatic mix of available Java EE 6 APIs, you can entirely eliminate the need to implement infrastructure services such as transactions, threading, throttling, or monitoring in your application. The real challenge is in selecting the right subset of APIs that minimizes overhead and complexity while making sure you don't have to reinvent the wheel with custom code. As a general rule, you should strive to use existing Java SE and Java EE services before expanding your search to find alternatives."

    Read the entire article here.
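
    As a concrete illustration of the "no infrastructure code" point (my own minimal sketch, not code from the article), an EJB 3.1 bean plus a JPA 2.0 entity is enough to get container-managed transactions; the entity and method names are invented:

        import javax.ejb.Stateless;
        import javax.persistence.Entity;
        import javax.persistence.EntityManager;
        import javax.persistence.GeneratedValue;
        import javax.persistence.Id;
        import javax.persistence.PersistenceContext;

        // Hypothetical JPA entity used only for this illustration.
        @Entity
        class PurchaseOrder {
            @Id
            @GeneratedValue
            private Long id;
        }

        // EJB 3.1 session bean: the business method runs in a container-managed
        // transaction by default, so no explicit transaction or threading code is needed.
        @Stateless
        public class OrderService {

            @PersistenceContext
            private EntityManager em;

            public void placeOrder(PurchaseOrder order) {
                em.persist(order); // committed (or rolled back) by the container
            }
        }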

    Read the article

  • SOA Suite 11gR1 Patch Set 2 (PS2) released today!

    - by Demed L'Her
    We just released this morning SOA Suite 11gR1 Patch Set 2 (PS2)! You can download it as usual from:

    - OTN (main platforms only)
    - eDelivery (all platforms)

    11gR1 PS2 is delivered as a sparse installer, that is to say that it is meant to be applied on the latest full release (11gR1 PS1). The good part is that it’s great for existing PS1 users who simply need to apply the patch and run the patch assistant – the not so good part is that new users will first need to download PS1.

    What’s in that release? Bug fixes of course but also several significant new features. Here is a short selection of the most significant features in PS2:

    - Spring component (for native Java extensibility and integration)
    - SOA Partitions (to organize and manage your composites)
    - Direct Binding (for transactional invocations to and from Oracle Service Bus)
    - HTTP binding (for those of you trying to do away with SOAP and looking for simple GET and POST)
    - Resequencer (for ordering out-of-order messages)
    - WS Atomic Transactions (WS-AT) support (for propagation of transactions across heterogeneous environments)

    Check out the complete list of new features in PS2 for more (including links to the documentation for the above)! But maybe even more importantly we are also releasing Oracle Service Bus 11gR1 and BPM Suite 11gR1 at the same time – all on the same base platform (WebLogic Server 10.3.3)! (NB: it might take a while for all pages and caches to be updated with the new content so if you don’t find what you need today, try again soon!)

    Technorati Tags: ps1,11gr1ps2,new release,oracle soa suite,oracle

    Read the article

  • What is ODBC?

    According to Microsoft, ODBC is a specification for a database API. This API is database and operating system agnostic because the primary goal of the ODBC API is to be language-independent. Additionally, the open functions of the API are created by the manufacturers of DBMS-specific drivers. Developers can use these exposed functions from within their own custom applications so that they can communicate with the DBMS through the language-independent drivers.

    ODBC advantages:

    - Multiple ODBC drivers for each DBMS, for example Oracle's ODBC driver, Merant's Oracle driver, and Microsoft's Oracle driver
    - ODBC drivers are constantly updated for the latest data types
    - ODBC allows for more control when querying
    - ODBC allows for isolation levels

    ODBC disadvantages:

    - ODBC requires a DSN
    - ODBC is the proxy between an application and a database
    - ODBC is dependent on third-party drivers

    ODBC transaction isolation levels are related to and limited by the transaction management capabilities of the data source. Transaction isolation levels:

    - READ UNCOMMITTED: Data is allowed to be read prior to the committing of a transaction.
    - READ COMMITTED: Data is only accessible after a transaction has completed.
    - REPEATABLE READ: The same data value is read during the entire transaction.
    - SERIALIZABLE: Transactions have no effect on other transactions.
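
    For Java readers, the closest equivalent of these ODBC isolation levels is JDBC's Connection.setTransactionIsolation. A minimal sketch (the connection URL and credentials are placeholders, and which levels are honored depends on the driver and data source):

        import java.sql.Connection;
        import java.sql.DriverManager;
        import java.sql.SQLException;

        // Sets one of the four standard isolation levels on a JDBC connection.
        public class IsolationLevelDemo {
            public static void main(String[] args) throws SQLException {
                try (Connection conn = DriverManager.getConnection(
                        "jdbc:oracle:thin:@//dbhost:1521/ORCL", "user", "password")) {
                    conn.setAutoCommit(false);
                    // Other options: TRANSACTION_READ_UNCOMMITTED,
                    // TRANSACTION_REPEATABLE_READ, TRANSACTION_SERIALIZABLE.
                    conn.setTransactionIsolation(Connection.TRANSACTION_READ_COMMITTED);
                    // ... run statements ...
                    conn.commit();
                }
            }
        }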

    Read the article

  • Iterative Conversion

    - by stuart ramage
    Question received: I am toying with the idea of migrating the current information first and the remainder of the history at a later date. I have heard that the conversion tool copes with this, but haven't found any information on how it does.

    Answer: The Toolkit will support iterative conversions as long as the original master data key tables (the CK_* tables) are not cleared down from Staging (the already converted Transactional Data would need to be cleared down) and the Production instance being migrated into is actually Production (we have migrated into a pre-prod instance in the past and then unloaded this and loaded it into the real PROD instance, but this will not work for your situation; you need to be migrating directly into your intended environment).

    In this case the migration tool will still know all about the original keys and the generated keys for the primary objects (Account, SA, etc.), and as such it will be able to link the data converted as part of a second pass onto these entities. It should be noted that this may result in the original opening balances potentially being displayed with an incorrect value (if we are talking about Financial Transactions), and also that care will have to be taken to ensure that all related objects are aligned (e.g. a Bill must have a set of bill segments, meter reads and financial transactions, and these entities cannot exist independently).

    It should also be noted that subsequent runs of the conversion tool would need to be 'trimmed' to ensure that they are only doing work on the objects affected. You would not want to revalidate and migrate all Person, Account, SA, SA/SP, SP and Premise details since this information has already been processed, but you would definitely want to run the affected transactional record validation and keygen processes. There is no real "hard-and-fast" rule around this processing since it is specific to each implementation's needs, but the majority of the effort required should be detailed in the Conversion Tool section of the online help (under Administration / The Conversion Tool). The major rule is to ensure that you only run the steps and validation/keygen steps that you need and do not do a complete rerun for your subsequent conversion.

    Read the article

  • How do I manage the technical debate over WCF vs. Web API?

    - by Saeed Neamati
    I'm managing a team of about 15 developers now, and we are stuck at a point on choosing the technology, where the team is broken into two completely opposite camps debating over usage of WCF vs. Web API.

    Team A, which supports usage of Web API, brings forward these reasons:

    - Web API is just the modern way of writing services (Wikipedia)
    - WCF is an overhead for HTTP; it's a solution for TCP, Named Pipes, and other protocols
    - WCF models are not POCO, because of the [DataContract] and [DataMember] attributes
    - SOAP is not as readable and handy as JSON
    - SOAP is an overhead for the network compared to JSON (transport over HTTP)
    - No method overloading

    Team B, which supports the usage of WCF, says:

    - WCF supports multiple protocols (via configuration)
    - WCF supports distributed transactions
    - Many good examples and success stories exist for WCF (while Web API is still young)
    - Duplex is excellent for two-way communication

    This debate is continuing, and I don't know what to do now. Personally, I think that we should use a tool only in its right place of usage. In other words, we'd better use Web API if we want to expose a service over HTTP, but use WCF when it comes to TCP and Duplex. By searching the Internet, we can't get to a solid result. Many posts exist supporting WCF, but on the contrary we also find people complaining about it. I know that the nature of this question might sound arguable, but we need some good hints to decide. We're stuck at a point where choosing a technology by chance might make us regret it later. We want to choose with open eyes.

    Our usage would be mostly for the web, and we would expose our services over HTTP. In some cases (say 5 to 10 percent) we might need distributed transactions though. What should I do now? How do I manage this debate in a constructive way?

    Read the article

  • New PeopleSoft Applications Search

    - by Matthew Haavisto
    As you may have seen from the PeopleTools 8.52 Release Value Proposition, PeopleTools intends to introduce a new search capability in release 8.52. We believe this feature will not only improve the ability of users to find content, but will fundamentally change the way people navigate around the PeopleSoft ecosystem. PeopleSoft applications will be delivering this new search in coming releases and feature packs.

    PeopleSoft Application Search is actually a framework—a group of features that provides an improved means of searching for a variety of content across PeopleSoft applications. From a user experience perspective, the new search offers a powerful, keyword-based search presented in a familiar, intuitive user experience. Rather than browsing through long menu hierarchies to find a page, data item, or transaction, users can use PeopleSoft Application Search to directly navigate to desired locations. We envision this to be similar to how people navigate across the internet. This capability may reduce or even eliminate the need to navigate PeopleSoft applications using the existing application menu system (though menus will still be available to people that prefer that method).

    The new search will be available at any point in an application and can be configured to span multiple PeopleSoft applications. It enables users to initiate transactions or navigate to key information without using the PeopleSoft application menus. In addition, filters and facets will enable people to narrow their search result sets, making it easier to identify and navigate to desired application content. Action menus are embedded directly in the search results, allowing users to navigate straight to specific related transactions – pre-populated with the selected search results data.

    The PeopleSoft Application Search framework uses Oracle's Secure Enterprise Search as its search engine. Most customers will benefit from the new search when it is delivered with applications. However, customers can start deploying it after a Tools-only upgrade. In this case, however, customers would have to create their own indices and implement security.

    Read the article

  • Update: GTAS and EBS

    - by jeffrey.waterman
    Provided below are updated target timeframes for patches for upcoming legislative enhancements. Dates have been pushed out from previous dates provided due to changes in Treasury mandatory dates; mandatory dates for GTAS and IPAC have changed since the previous target dates for patches were provided. These are target dates, not commitments to deliver functionality.

    R12 target timeframes for customer patches:

    - GTAS Configuration (Apr 2012): Patch is available.
    - GTAS Key Processes (Oct/Nov 2012): Includes GTAS processes necessary to create the GTAS interface file, migration of FACTS balances to GTAS, GTAS Trial Balance, and GTAS Transaction Register.
    - GTAS Reports (Nov/Dec 2012): GTAS Trial Balance, GTAS Transaction Register.
    - Capture of Trading Partner TAS/BETC (Apr/May 2013): Includes modification necessary to capture BETC, Trading Partner TAS/BETC on relevant transactions.
    - GTAS Other Processes (May/Jun 2013): Includes GTAS Customer and Vendor update processes.
    - IPAC (Aug/Sep): Includes modification required to IPAC to accommodate Componentized TAS and BETC.

    11i target timeframes for customer patches:

    - GTAS Configuration (May 2012): Patch is available.
    - GTAS Key Processes (Nov/Dec 2012): Includes GTAS processes necessary to create the GTAS interface file, migration of FACTS balances to GTAS, GTAS Trial Balance, and GTAS Transaction Register.
    - GTAS Reports (Dec/Jan 2012): GTAS Trial Balance, GTAS Transaction Register.
    - Capture of Trading Partner TAS/BETC (May/Jun 2013): Includes modification necessary to capture BETC, Trading Partner TAS/BETC on relevant transactions.
    - GTAS Other Processes (Jun/Jul 2013): Includes GTAS Customer and Vendor update processes.
    - IPAC (Sep/Oct 2013): Includes modification required to IPAC to accommodate Componentized TAS and BETC.

    Read the article

  • Transaction classification. Artificial intelligence

    - by Alex
    For a project, I have to classify a list of banking transactions based on their description. Suppose I have 2 categories: health and entertainment. Initially, the transactions will have basic information: date and time, amount and a description given by the user. For example:

    - Transaction 1: 09/17/2012 12:23:02 pm - 45.32$ - "medicine payments"
    - Transaction 2: 09/18/2012 1:56:54 pm - 8.99$ - "movie ticket"
    - Transaction 3: 09/18/2012 7:46:37 pm - 299.45$ - "dentist appointment"
    - Transaction 4: 09/19/2012 6:50:17 am - 45.32$ - "videogame shopping"

    The idea is to use that description to classify the transaction. 1 and 3 would go to the "health" category while 2 and 4 would go to "entertainment". I want to use the Google Prediction API to do this. In reality, I have 7 different categories, and for each one, a lot of keywords related to that category. I would use some for training and some for testing. Is this even possible? I mean, to determine the category given a few words? Plus, the number of words is not necessarily the same in every transaction. Thanks for any help or guidance! Very appreciated.

    Possible solution: https://developers.google.com/prediction/docs/hello_world?hl=es#theproblem
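
    Before reaching for a hosted prediction service, a simple keyword-scoring baseline is often worth trying; a minimal sketch of that idea (my own illustration, not the Google Prediction API, with invented keyword lists):

        import java.util.Arrays;
        import java.util.HashMap;
        import java.util.List;
        import java.util.Map;

        // Assigns a description to the category whose seed keywords match it most often.
        public class TransactionClassifier {

            private final Map<String, List<String>> keywordsByCategory = new HashMap<>();

            public TransactionClassifier() {
                keywordsByCategory.put("health",
                        Arrays.asList("medicine", "dentist", "doctor", "pharmacy"));
                keywordsByCategory.put("entertainment",
                        Arrays.asList("movie", "videogame", "concert", "ticket"));
            }

            public String classify(String description) {
                String text = description.toLowerCase();
                String best = "unknown";
                int bestScore = 0;
                for (Map.Entry<String, List<String>> e : keywordsByCategory.entrySet()) {
                    int score = 0;
                    for (String keyword : e.getValue()) {
                        if (text.contains(keyword)) {
                            score++;
                        }
                    }
                    if (score > bestScore) {
                        bestScore = score;
                        best = e.getKey();
                    }
                }
                return best;
            }

            public static void main(String[] args) {
                TransactionClassifier c = new TransactionClassifier();
                System.out.println(c.classify("medicine payments")); // health
                System.out.println(c.classify("movie ticket"));      // entertainment
            }
        }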

    Read the article

  • Understanding Service Compensation (part of the Industrial SOA series)

    - by JuergenKress
    Some of the most important SOA design patterns that we have successfully applied in projects will be described in this article. These include the Compensation pattern and the UI Mediator pattern, the Common Data Format pattern and the Data Access pattern. All of these patterns are included in Thomas Erl's book, "SOA Design Patterns", and are presented here in detail, together with our practical experiences.

    We begin our "best of" SOA pattern collection with the Compensation pattern. Compensation is required in error situations in an SOA, as multiple atomic service operations cannot generally be linked with classic transactions – this would violate the principle of loose coupling. An error situation of this sort will occur particularly if service operations are combined into processes or new services during orchestration or by applying the Composite pattern, and the transaction bracket has to be expanded as a result. We need mechanisms to undo the effects of individual services (the status changes in the overall system) and to ensure that a consistent system state is maintained at all times, so as to preserve system integrity.

    For the Compensation pattern, we would like to address the following questions:

    - Why is compensation important in relation to SOA?
    - How is the topic of compensation linked with the topic of transactions?
    - What are the challenges with regard to compensation...

    Read the full article in the Service Technology Magazine or at OTN. Share your comments and feedback on the Industrial SOA series by using the hashtag #industrialsoa. Missed an article of the Industrial SOA series? Visit the overview at OTN.

    SOA & BPM Partner Community: For regular information on Oracle SOA Suite become a member of the SOA & BPM Partner Community. For registration please visit www.oracle.com/goto/emea/soa (OPN account required). If you need support with your account please contact the Oracle Partner Business Center.

    Technorati Tags: Industrial SOA,SOA,SOA Service Compensation,Community,Oracle SOA,Oracle BPM,OPN,Jürgen Kress
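
    A technology-neutral sketch of the Compensation pattern itself (my own illustration, not code from the article): each completed step registers an undo action, and if a later step fails the registered compensations run in reverse order. The service operations here are hypothetical:

        import java.util.ArrayDeque;
        import java.util.Deque;

        public class CompensationExample {

            interface Compensation {
                void undo();
            }

            private final Deque<Compensation> compensations = new ArrayDeque<>();

            public void bookTrip() {
                try {
                    reserveFlight();
                    compensations.push(this::cancelFlight);

                    reserveHotel();
                    compensations.push(this::cancelHotel);

                    chargeCustomer(); // if this throws, the reservations above are undone
                } catch (RuntimeException e) {
                    while (!compensations.isEmpty()) {
                        compensations.pop().undo(); // run compensations in reverse order
                    }
                    throw e;
                }
            }

            // Hypothetical service operations for illustration.
            private void reserveFlight()  { }
            private void reserveHotel()   { }
            private void chargeCustomer() { }
            private void cancelFlight()   { }
            private void cancelHotel()    { }
        }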

    Read the article

  • Hidden Gems: Accelerating Oracle Data Integrator with SOA, Groovy, SDK, and XML

    - by Alex Kotopoulis
    On the last day of Oracle OpenWorld, we had a final advanced session on getting the most out of Oracle Data Integrator through the use of various advanced techniques.

    The primary way to improve your ODI processes is to choose the optimal knowledge modules for your load and take advantage of the optimized tools of your database, such as Oracle Data Pump and similar mechanisms in other databases. Knowledge modules also allow you to customize tasks, allowing you to codify best practices that are consistently applied by all integration developers.

    The ODI SDK is another very powerful means to automate and speed up your integration development process. It allows you to automate life cycle management, code comparison, repetitive code generation and change of your integration projects. The SDK is easily accessible through Java or scripting languages such as Groovy and Jython.

    Finally, all Oracle Data Integration products provide services that can be integrated into a larger Service Oriented Architecture. This moves data integration from an isolated environment into an agile part of a larger business process environment. All Oracle data integration products can play a part in this:

    - Oracle GoldenGate can integrate into business event streams by processing JMS queues or publishing new events based on database transactions.
    - Oracle Data Integrator allows full control of its runtime sessions through web services, so that integration jobs can become part of business processes.
    - Oracle Data Service Integrator provides a data virtualization layer over your distributed sources, allowing unified reading and updating for heterogeneous data without replicating and moving data.
    - Oracle Enterprise Data Quality provides data quality services to cleanse and deduplicate your records through web services.

    Read the article

  • SQL Server Database Settings

    - by rbishop
    For those using Data Relationship Management on Oracle DB this does not apply, but for those using Microsoft SQL Server it is highly recommended that you run with Snapshot Isolation Mode. The Data Governance module will not function correctly without this mode enabled. All new Data Relationship Management repositories are created with this mode enabled by default.

    This mode makes SQL Server (2005+) behave more like Oracle DB, where readers simply see older versions of rows while a write is in progress, instead of readers being blocked by locks while a write takes place. Many common sources of deadlocks are eliminated. For example, if one user starts a 5 minute transaction updating half the rows in a table, without snapshot isolation everyone else reading the table will be blocked waiting. With snapshot isolation, they will see the rows as they were before the write transaction started. Conversely, if the readers had started first, the writer won't be stuck waiting for them to finish reading... the writes can begin immediately without affecting the current transactions.

    To make this change, make sure no one is using the target database (e.g. put it into single-user mode), then run these commands:

        ALTER DATABASE [DB] SET ALLOW_SNAPSHOT_ISOLATION ON
        ALTER DATABASE [DB] SET READ_COMMITTED_SNAPSHOT ON

    Please make sure you coordinate with your DBA team to ensure tempdb is appropriately set up to support snapshot isolation mode, as the extra row versions are stored in tempdb until the transactions are committed.

    Let me take this opportunity to strongly recommend that you use solid state storage for your databases with appropriate iSCSI, Fibre Channel, or SAN bandwidth. The performance gains are significant and there is no excuse for not using 100% solid state storage in 2013. Actually, unless you need to store petabytes of archival data, there is no excuse for using hard drives in any systems, whether laptops, desktops, application servers, or database servers. The productivity benefits alone are tremendous, not to mention power consumption, heat, etc.

    Read the article

  • Work Execution in EAM

    - by Annemarie Provisero
    ADVISOR WEBCAST: Work Execution in EAM
    PRODUCT FAMILY: Manufacturing Enterprise Asset Management
    July 5, 2011 at 8 am PT, 9 am MT, 11 am ET

    The purpose of this webcast is to discuss EAM Work Order Management. This one-hour session is ideal for Functional Users, System Administrators, Database Administrators, and Customers with a basic knowledge of EAM and who raise or manage work orders and related processes. During this webcast, Zar will cover the various types of work orders and look at all the related activities associated with work orders including: setup, operations, tasks, work order transactions, relationships and planning.

    TOPICS WILL INCLUDE:

    - Work Order Types (Routine, Planned Maintenance, Rebuild, Easy)
    - Work Order statuses and other important setups
    - Operations and Tasks
    - Relationships
    - Work Order Transactions
    - Work Order Planning

    A short, live demonstration (only if applicable) and question and answer period will be included. Oracle Advisor Webcasts are dedicated to building your awareness around our products and services. This session does not replace offerings from Oracle Global Support Services. Click here to register for this session.

    The above webcast is a service of the E-Business Suite Communities in My Oracle Support. For more information on other webcasts, please reference the Oracle Advisor Webcast Schedule. Click here to visit the E-Business Communities in My Oracle Support. Note that all links require access to My Oracle Support.

    Read the article

  • TXPAUSE: polite waiting for hardware transactional memory

    - by Dave
    Classic locks are an appropriate tool to prevent potentially conflicting operations A and B, invoked by different threads, from running at the same time. In a sense the locks cause either A to run before B or vice-versa. Similarly, we can replace the locks with hardware transactional memory, or use transactional lock elision to leverage potential disjoint access parallelism between A and B.

    But often we want A to wait until B has run. In a Pthreads environment we'd usually use locks in conjunction with condition variables to implement our "wait until" constraint. MONITOR-MWAIT is another way to wait for a memory location to change, but it only allows us to track one cache line and it's only available on x86. There's no similar "wait until" construct for hardware transactions.

    At the instruction-set level a simple way to express "wait until" in transactions would be to add a new TXPAUSE instruction that could be used within an active hardware transaction. TXPAUSE would politely stall the invoking thread, possibly surrendering or yielding compute resources, while at the same time continuing to track the transaction's address-set. Once a transaction has executed TXPAUSE it can only abort. Ideally that'd happen when some other thread modifies a variable that's in the transaction's read-set or write-set. And since we're aborting, all writes would be discarded. In a sense this gives us multi-location MWAIT but with much more flexibility. We could also augment TXPAUSE with a cycle-count bound to cap the time spent stalled.

    I should note that we can already enter a tight spin loop in a transaction to wait for updates to the address-set to cause an abort. Assuming that the implementation monitors the address-set via cache-coherence probes, by waiting in this fashion we actually communicate via the probes, and not via memory values. That is, the updating thread signals the waiter via probes instead of by traditional memory values. But TXPAUSE gives us a polite way to spin.
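
    For contrast with the proposed TXPAUSE, here is the classic lock plus condition-variable way to express "wait until B has run", sketched in Java rather than Pthreads (a generic illustration, not code from the post):

        import java.util.concurrent.locks.Condition;
        import java.util.concurrent.locks.Lock;
        import java.util.concurrent.locks.ReentrantLock;

        public class WaitUntil {
            private final Lock lock = new ReentrantLock();
            private final Condition done = lock.newCondition();
            private boolean bHasRun = false;

            // A: wait until B has run, then do its work.
            public void a() throws InterruptedException {
                lock.lock();
                try {
                    while (!bHasRun) {
                        done.await();
                    }
                    // ... A's work, now guaranteed to follow B ...
                } finally {
                    lock.unlock();
                }
            }

            // B: do its work, then wake any waiting A.
            public void b() {
                lock.lock();
                try {
                    // ... B's work ...
                    bHasRun = true;
                    done.signalAll();
                } finally {
                    lock.unlock();
                }
            }
        }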

    Read the article

  • Oracle E-Business Products New Search Helpers for Guided Resolution of Customer Issues

    - by user793044
    Oracle E-Business Proactive Support has created many new guided resolution documents that you may find helpful in resolving issues in your EBS applications. These new documents are called "Search Helpers" and they guide you through your issue to a solution. They are meant to be an easy and fast method of finding a relevant, complete solution. Hundreds of notes and service requests were reviewed and the best solutions to these known issues were selected. For some issues, notes were updated to better clarify the solution. In other cases, if a note with a solution did not already exist, one was created.

    You start the process by selecting the scenario you have encountered. You may have received an error message, or there may be a particular area of the application in which you have encountered an issue. Based on your selection of the issue, the Search Helper will present one or more additional possible symptoms. When you have selected from both of these two sections, you are then presented with one or more articles known to have fully solved this issue in the past.

    Several EBS products have produced Search Helper documents. Take a look at Doc ID 1501724.1 for an index of the current EBS Search Helpers. Here is an example of a Search Helper from the Receivables Transactions area: after selecting the Functional Area of "Entering / Updating Transactions", a list of Known Symptoms is presented, and when "Transaction numbers are not in sequence" is selected, a solution link is provided for Document ID 197212.1: How To Setup Gapless Document Sequencing in Receivables.

    The EBS applications that currently have published Search Helpers are: Advanced Pricing, Applications Technology, Configurator, General Ledger, Human Capital Management, Inventory Management, Order Management, Payables, Process Manufacturing, Purchasing, Receivables, Shipping, and Value Chain Planning.

    Read the article

  • Is there a formula for this?

    - by Gortron
    TL/DR: Is there any way to work out whether known numbers between a known starting and ending figure should be positive or negative numbers?

    I am developing an application in PHP which can import and read PDFs. The PDFs are financial ones such as bank statements with records of transactions in and out of a bank account. I only have PDFs to work with, no other formats such as CSV unfortunately. I convert the PDF to HTML using pdftohtml and start parsing the data; the intended end result is an array of transactions. So far I have it working smoothly collecting dates, descriptions and balance. Converting to XML instead doesn't help.

    There are other pieces of transactional data such as debit or credit amounts. In the PDF, the credit amount is in one column and the debit amount is in another column, so it is quite clear in the PDF. However, when converted to HTML, the formatting is lost and therefore I don't know if the amount was a credit or debit amount.

    So, my question is: given a starting balance and an ending balance and several known figures in between, is it possible for a programme to work out if those known figures in between are credit or debit amounts? I imagine there could potentially be several combinations of those known values that reach the ending balance, so I'd like to apply a formula that returns the correct credit/debit sequence only if it's the only possible solution. If there are several ways of adding/subtracting the known values to reach the end balance, I can ask the user to look at it manually, but I'd like to keep this to a minimum if possible.

    Possible to do, do you think? Thank you in advance for any help.
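
    A brute-force sketch of the idea, shown in Java for illustration (the same approach ports directly to PHP): try every +/- assignment of the known amounts and accept it only if exactly one assignment takes the starting balance to the ending balance. Amounts are in cents to avoid floating-point drift; this is fine for a page's worth of transactions, but the search grows exponentially, so a larger statement would need a smarter algorithm.

        import java.util.ArrayList;
        import java.util.List;

        public class SignSolver {

            // Returns the unique sign assignment (+1 credit, -1 debit) if exactly one
            // exists, or null if there are zero or multiple solutions.
            public static int[] solve(long startCents, long endCents, long[] amountsCents) {
                List<int[]> solutions = new ArrayList<>();
                int n = amountsCents.length;
                for (int mask = 0; mask < (1 << n); mask++) {
                    long balance = startCents;
                    int[] signs = new int[n];
                    for (int i = 0; i < n; i++) {
                        signs[i] = ((mask >> i) & 1) == 1 ? 1 : -1;
                        balance += signs[i] * amountsCents[i];
                    }
                    if (balance == endCents) {
                        solutions.add(signs);
                        if (solutions.size() > 1) {
                            return null; // ambiguous -- fall back to manual review
                        }
                    }
                }
                return solutions.size() == 1 ? solutions.get(0) : null;
            }

            public static void main(String[] args) {
                // Hypothetical statement: start 100.00, end 120.00, amounts 50.00 and 30.00.
                int[] signs = solve(10000, 12000, new long[] {5000, 3000});
                System.out.println(signs == null ? "ambiguous or impossible"
                                                 : java.util.Arrays.toString(signs)); // [1, -1]
            }
        }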

    Read the article

  • Information I need to know as a Java Developer [on hold]

    - by Woy
    I'm a Java developer. I'm trying to get more knowledge to become a better programmer. I've listed a number of technologies to learn. Apart from what I've listed, what technologies would you suggest learning as well for a Junior Java Developer? I realize there's a lot to study.

    Java:
    - how a garbage collector works
    - resource management
    - network programming, TCP/IP, HTTP
    - transactions, consistency
    - interfaces, classes, collections, hash codes
    - algorithms, computational complexity
    - concurrent programming: synchronizing, semaphores, stream management, mutability and thread-safety
    - byte code manipulation, reflection, Aspect-Oriented Programming as a base to understand frameworks such as Spring, etc.

    Web stack: servlets, filters, socket programming

    Libraries: JDK, GWT, Apache Commons, Joda-Time; Dependency Injection: Spring, Nano

    Tools:
    - IDE: very good knowledge
    - debugger
    - profiler
    - web analyzers: Wireshark, Firebug
    - unit testing

    SQL/Databases:
    - Basics: SELECTing columns from a table; Aggregates Part 1: COUNT, SUM, MAX/MIN; Aggregates Part 2: DISTINCT, GROUP BY, HAVING
    - Intermediate: JOINs, ANSI-89 and ANSI-92 syntax; UNION vs UNION ALL; NULL handling: COALESCE and native NULL handling; Subqueries: IN, EXISTS, and inline views; correlated subqueries; WITH syntax: Subquery Factoring/CTE; Views
    - Advanced topics: Functions, Stored Procedures, Packages; Pivoting data: CASE and PIVOT syntax; Hierarchical Queries; Cursors: Implicit and Explicit; Triggers; Dynamic SQL; Materialized Views; Query Optimization: Indexes, Explain Plans, Profiling; Data Modelling: Normal Forms 1 through 3, Primary and Foreign Keys, Table Constraints, Link/Corollary Tables; Full Text Searching; XML; Isolation Levels; Entity Relationship Diagrams (ERDs), Logical and Physical; Transactions: COMMIT, ROLLBACK, Error Handling

    Read the article

  • why are transaction monitors on the decline? or are they?

    - by mrkafk
    http://www.itjobswatch.co.uk/jobs/uk/cics.do
    http://www.itjobswatch.co.uk/jobs/uk/tuxedo.do

    Look at the demand for programmers (% of job ads that the keyword appears in), first graph under the table. It seems like demand for CICS and Tuxedo has fallen from 2.5%/1% respectively to almost zero. To me, it seems bizarre: now we have more networked and internet-enabled machines than ever before, and most of them are talking to some kind of database. So it would seem that use of products whose developers spent the last 20-30 years working on distributing, coordinating and optimizing transactions should be on the rise. And it appears they're not.

    I can see a few causes but can't tell whether they are true:

    - We forgot that concurrency and distribution are really hard, and we're redoing it all by ourselves, in Java, badly.
    - Erlang killed them all.
    - Projects nowadays have changed character, like most business software has already been built and we're all doing internet services, using stuff like Node.js, Erlang, Haskell. (I've used RabbitMQ, which is written in Erlang, but it was a "small specialized side project" kind of thing.)
    - Big Data is the emphasis now and Big Data doesn't need transactions very much (?).

    None of those explanations seem particularly convincing to me, which is why I'm looking for a better one. Anyone?

    Read the article

  • Sun Fire X4800 M2 Delivers World Record TPC-C for x86 Systems

    - by Brian
    Oracle's Sun Fire X4800 M2 server equipped with eight 2.4 GHz Intel Xeon Processor E7-8870 chips obtained a result of 5,055,888 tpmC on the TPC-C benchmark. This result is a world record for x86 servers. Oracle demonstrated this world record database performance running Oracle Database 11g Release 2 Enterprise Edition with Partitioning.

    - The Sun Fire X4800 M2 server delivered a new x86 TPC-C world record of 5,055,888 tpmC with a price/performance of $0.89/tpmC using Oracle Database 11g Release 2. This configuration is available 06/26/12.
    - The Sun Fire X4800 M2 server delivers 3.0x better performance than the next 8-processor result, an IBM System p 570 equipped with POWER6 processors.
    - The Sun Fire X4800 M2 server has 3.1x better price/performance than the 8-processor 4.7GHz POWER6 IBM System p 570.
    - The Sun Fire X4800 M2 server has 1.6x better performance than the 4-processor IBM x3850 X5 system equipped with Intel Xeon processors.
    - This is the first TPC-C result on any system using eight Intel Xeon Processor E7-8800 Series chips.
    - The Sun Fire X4800 M2 server is the first x86 system to get over 5 million tpmC.
    - The Oracle solution utilized the Oracle Linux operating system and Oracle Database 11g Enterprise Edition Release 2 with Partitioning to produce the x86 world record TPC-C benchmark performance.

    Performance Landscape

    Select TPC-C results (sorted by tpmC, bigger is better); p/c/t = processors/cores/threads, Avail = availability date:

    - Sun Fire X4800 M2: p/c/t 8/80/160, 5,055,888 tpmC, 0.89 USD/tpmC, avail 6/26/2012, Oracle 11g R2, 4 TB memory
    - IBM x3850 X5: p/c/t 4/40/80, 3,014,684 tpmC, 0.59 USD/tpmC, avail 7/11/2011, DB2 ESE 9.7, 3 TB memory
    - IBM x3850 X5: p/c/t 4/32/64, 2,308,099 tpmC, 0.60 USD/tpmC, avail 5/20/2011, DB2 ESE 9.7, 1.5 TB memory
    - IBM System p 570: p/c/t 8/16/32, 1,616,162 tpmC, 3.54 USD/tpmC, avail 11/21/2007, DB2 9.0, 2 TB memory

    Oracle and IBM TPC-C response times:

    - Sun Fire X4800 M2: 5,055,888 tpmC, New Order 90th% response time 0.210 sec, New Order average response time 0.166 sec
    - IBM x3850 X5: 3,014,684 tpmC, New Order 90th% response time 0.500 sec, New Order average response time 0.272 sec
    - Ratios, Oracle better: 1.6x tpmC, 1.4x 90th% response time, 1.3x average response time

    Oracle uses average New Order response time for comparison between Oracle and IBM. Graphs of Oracle's and IBM's response times for New Order can be found in the full disclosure reports on TPC's website, on the TPC-C Official Result Page.

    Configuration Summary and Results

    Hardware configuration:

    - Server: Sun Fire X4800 M2 server with 8 x 2.4 GHz Intel Xeon Processor E7-8870, 4 TB memory, 8 x 300 GB 10K RPM SAS internal disks, 8 x dual-port 8 Gbs FC HBA
    - Data storage: 10 x Sun Fire X4270 M2 servers configured as COMSTAR heads, each with 1 x 3.06 GHz Intel Xeon X5675 processor, 8 GB memory, 10 x 2 TB 7.2K RPM 3.5" SAS disks; 2 x Sun Storage F5100 Flash Array storage (1.92 TB each); 1 x Brocade 5300 switch
    - Redo storage: 2 x Sun Fire X4270 M2 servers configured as COMSTAR heads, each with 1 x 3.06 GHz Intel Xeon X5675 processor, 8 GB memory, 11 x 2 TB 7.2K RPM 3.5" SAS disks
    - Clients: 8 x Sun Fire X4170 M2 servers, each with 2 x 3.06 GHz Intel Xeon X5675 processors, 48 GB memory, 2 x 300 GB 10K RPM SAS disks

    Software configuration:

    - Oracle Linux (Sun Fire X4800 M2)
    - Oracle Solaris 11 Express (COMSTAR for Sun Fire X4270 M2)
    - Oracle Solaris 10 9/10 (Sun Fire X4170 M2)
    - Oracle Database 11g Release 2 Enterprise Edition with Partitioning
    - Oracle iPlanet Web Server 7.0 U5
    - Tuxedo CFS-R Tier 1

    Results:

    - System: Sun Fire X4800 M2
    - tpmC: 5,055,888
    - Price/tpmC: 0.89 USD
    - Available: 6/26/2012
    - Database: Oracle Database 11g
    - Cluster: no
    - New Order average response: 0.166 seconds

    Benchmark Description

    TPC-C is an OLTP system benchmark. It simulates a complete environment where a population of terminal operators executes transactions against a database. The benchmark is centered around the principal activities (transactions) of an order-entry environment. These transactions include entering and delivering orders, recording payments, checking the status of orders, and monitoring the level of stock at the warehouses.

    Key Points and Best Practices

    - Oracle Database 11g Release 2 Enterprise Edition with Partitioning scales easily to this high level of performance.
    - COMSTAR (Common Multiprotocol SCSI Target) is the software framework that enables an Oracle Solaris host to serve as a SCSI target platform. COMSTAR uses a modular approach to break the huge task of handling all the different pieces in a SCSI target subsystem into independent functional modules which are glued together by the SCSI Target Mode Framework (STMF). The modules implementing functionality at the SCSI level (disk, tape, medium changer, etc.) are not required to know about the underlying transport, and the modules implementing the transport protocol (FC, iSCSI, etc.) are not aware of the SCSI-level functionality of the packets they are transporting. The framework hides the details of allocation, providing execution context and cleanup of SCSI commands and associated resources, and simplifies the task of writing the SCSI or transport modules.
    - Oracle iPlanet Web Server middleware is used for the client tier of the benchmark. Each web server instance supports more than a quarter-million users while satisfying the response time requirement of the TPC-C benchmark.

    See Also

    - Oracle Press Release -- Sun Fire X4800 M2 TPC-C
    - Executive Summary (tpc.org)
    - Complete Sun Fire X4800 M2 TPC-C Full Disclosure Report (tpc.org)
    - Transaction Processing Performance Council (TPC) Home Page
    - Ideas International Benchmark Page
    - Sun Fire X4800 M2 Server (oracle.com OTN)
    - Oracle Linux (oracle.com OTN)
    - Oracle Solaris (oracle.com OTN)
    - Oracle Database 11g Release 2 Enterprise Edition (oracle.com OTN)
    - Sun Storage F5100 Flash Array (oracle.com OTN)

    Disclosure Statement

    TPC Benchmark C, tpmC, and TPC-C are trademarks of the Transaction Processing Performance Council (TPC). Sun Fire X4800 M2 (8/80/160) with Oracle Database 11g Release 2 Enterprise Edition with Partitioning, 5,055,888 tpmC, $0.89 USD/tpmC, available 6/26/2012. IBM x3850 X5 (4/40/80) with DB2 ESE 9.7, 3,014,684 tpmC, $0.59 USD/tpmC, available 7/11/2011. IBM x3850 X5 (4/32/64) with DB2 ESE 9.7, 2,308,099 tpmC, $0.60 USD/tpmC, available 5/20/2011. IBM System p 570 (8/16/32) with DB2 9.0, 1,616,162 tpmC, $3.54 USD/tpmC, available 11/21/2007. Source: http://www.tpc.org/tpcc, results as of 7/15/2011.

    Read the article

  • SQL SERVER – Backing Up and Recovering the Tail End of a Transaction Log – Notes from the Field #042

    - by Pinal Dave
    [Notes from Pinal]: The biggest challenge people face is not taking a backup; the biggest challenge is restoring a backup successfully. I have seen so many different examples where users have failed to restore their database because they made some mistake while taking the backup and were not aware of it. The tail log backup was such an issue in earlier versions of SQL Server, but in the latest version of SQL Server the Microsoft team has fixed the confusion with additional information on the backup and restore screen itself. Now that there is additional information, there are a few more people confused, as they have no clue about it. Previously they did not see this as an issue, and now they are finding the tail log to be a new learning.

    Linchpin People are database coaches and wellness experts for a data driven world. In this 42nd episode of the Notes from the Field series, database expert Tim Radney (partner at Linchpin People) explains in very simple words backing up and recovering the tail end of a transaction log.

    Many times when restoring a database over an existing database, SQL Server will warn you about needing to make a tail end of the log backup. This might be your reminder that you have to choose to overwrite the database, or it could be your reminder that you are about to write over and lose any transactions since the last transaction log backup. You might be asking yourself, "What is the tail end of the transaction log?" The tail end of the transaction log is simply any committed transactions that have occurred since the last transaction log backup. This is a very crucial part of a recovery strategy if you are lucky enough to be able to capture this part of the log.

    Most organizations have chosen to accept some amount of data loss. You might be shaking your head at this statement, however if your organization is taking transaction log backups every 15 minutes, then your potential risk of data loss is up to 15 minutes. Depending on the extent of the issue causing you to have to perform a restore, you may or may not have access to the transaction log (LDF) to be able to back up those vital transactions. For example, if the storage array or disk that holds your transaction log file becomes corrupt or damaged, then you wouldn't be able to recover the tail end of the log. If you do have access to the physical log file then you can still back up the tail end of the log.

    In 2013 I presented a session at the PASS Summit called "The Ultimate Tail Log Backup and Restore" and have been invited back this year to present it again. During this session I demonstrate how you can back up the tail end of the log even after the data file becomes corrupt. In my demonstration I set my database offline and then delete the data file (MDF). The database can't become more corrupt than that. I attempt to bring the database back online to change the state to RECOVERY PENDING and then back up the tail end of the log. I can do this by specifying WITH NO_TRUNCATE. Using NO_TRUNCATE is equivalent to specifying both COPY_ONLY and CONTINUE_AFTER_ERROR. As its name says, it does not try to truncate the log. This is a great demo, however how could I achieve backing up the tail end of the log if the failure destroys my entire instance of SQL and all I had was the LDF file?

    During my demonstration I also show that I can attach the log file to a database on another instance and then back up the tail end of the log. If I am performing proper backups, then my most recent full, differential and log files should be on a server other than the one that crashed. I am able to achieve this task by creating a new database with the same name as the failed database. I then set the database offline, delete my data file and overwrite the log with my good log file. I attempt to bring the database back online and then back up the log with NO_TRUNCATE just like in the first example.

    I encourage each of you to view my blog post and watch the video demonstration on how to perform these tasks. I really hope that none of you ever have to perform this in production, however it is a really good idea to know how to do this just in case. It really isn't a matter of "IF" you will have to perform a restore of a production system but more of a "WHEN". Being able to recover the tail end of the log in these severe cases could be the difference between having to notify all your business customers of data loss or not.

    If you want me to take a look at your server and its settings, or if your server is facing any issue, we can Fix Your SQL Server. Note: Tim has also written an excellent book on SQL Backup and Recovery, a must have for everyone.

    Reference: Pinal Dave (http://blog.sqlauthority.com). Filed under: Notes from the Field, PostADay, SQL, SQL Authority, SQL Performance, SQL Query, SQL Server, SQL Tips and Tricks, T SQL
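
    For illustration, the tail-log backup discussed above can also be issued from application code. A minimal JDBC sketch (the connection string, database name and backup path are placeholders; in practice you would normally run the same BACKUP LOG statement in SSMS or sqlcmd):

        import java.sql.Connection;
        import java.sql.DriverManager;
        import java.sql.SQLException;
        import java.sql.Statement;

        // Takes a tail-of-the-log backup with NO_TRUNCATE via the Microsoft JDBC driver.
        public class TailLogBackup {
            public static void main(String[] args) throws SQLException {
                String url = "jdbc:sqlserver://localhost;databaseName=master;integratedSecurity=true";
                try (Connection conn = DriverManager.getConnection(url);
                     Statement stmt = conn.createStatement()) {
                    stmt.execute(
                        "BACKUP LOG [ProductionDb] " +
                        "TO DISK = N'D:\\Backups\\ProductionDb_tail.trn' " +
                        "WITH NO_TRUNCATE");
                }
            }
        }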

    Read the article

  • Right-Time Retail Part 2

    - by David Dorf
    This is part two of the three-part series.

    Right-Time Integration

    Of course these real-time enabling technologies are only as good as the systems that utilize them, and it only takes one bottleneck to slow everyone else down. What good is an immediate stock-out notification if the supply chain can't react until tomorrow? Since being formed in 2006, Oracle Retail has been not only adding more integrations between systems, but also modernizing integrations for appropriate speed. Notice I tossed in the word "appropriate." Not everything needs to be real-time – again, we're talking about Right-Time Retail. The speed of data capture, analysis, and execution must be synchronized or you're wasting effort. Unfortunately, there isn't an enterprise-wide dial that you can crank up for your estate. You'll need to improve things piecemeal, with people and processes as limiting factors while choosing the appropriate types of integrations.

    There are three integration styles we see in the retail industry. First is batch. I know, the word "batch" just sounds slow, but this pattern is less about velocity and more about volume. When there are large amounts of data to be moved, you'll want to use batch processes. Our technology of choice here is Oracle Data Integrator (ODI), which provides a fast version of Extract-Transform-Load (ETL). Instead of the three-step process, the load and transform steps are combined to save time. ODI is a key technology for moving data into Retail Analytics where we can apply science. Performing analytics on each sale as it occurs doesn't make any sense, so we batch up a statistically significant amount and submit it all at once.

    The second style is fire-and-forget. For some types of data, we want the data to arrive ASAP but immediacy is not necessary. Speed is less important than guaranteed delivery, so we use the message-oriented middleware available in both WebLogic and the Oracle database. For example, Point-of-Service transactions are queued for delivery to Central Office at corporate. If the network is offline, those transactions remain in the queue and will be delivered when the network returns. Transactions cannot be lost and they must be delivered in order. (Ever tried processing a return before the sale?) To enhance the standard queues, we offer the Retail Integration Bus (RIB) to help with the management and monitoring of fire-and-forget messaging in the enterprise.

    The third style is request-response and is most commonly implemented as Web services. This is a synchronous message where the sender waits for a response. In this situation, the volume of data is small, guaranteed delivery is not necessary, but speed is very important. Examples include the website checking inventory, a price lookup, or processing a credit card authorization. The Oracle Service Bus (OSB) typically handles the routing of such messages, and we've enhanced its abilities with the Retail Service Backbone (RSB).

    To better understand these integration patterns and where they apply within the retail enterprise, we're providing the Retail Reference Library (RRL) at no charge to Oracle Retail customers. The library is composed of a large number of industry business processes, including those necessary to support Commerce Anywhere, as well as detailed architectural diagrams. These diagrams allow implementers to understand the systems involved in integrations and the specific data payloads. Furthermore, with our upcoming release we'll be providing a new tool called the Retail Integration Console (RIC) that allows IT to monitor and manage integrations from a single point. Using RIC, retailers can quickly discern where integration activity is occurring, volume statistics, average response times, and errors. The dashboards provide the ability to dive down into the architecture documentation to gather information all the way down to the specific payload. Retailers that want real-time integrations will also need real-time monitoring of those integrations to ensure service-level agreements are maintained.

    Part 3 looks at marketing.
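
    A generic JMS sketch of the fire-and-forget style described above (my own illustration, not Oracle Retail code): the sender enqueues a persistent message and moves on, and the middleware guarantees delivery once the consumer is reachable again. The destination and payload names are hypothetical:

        import javax.jms.Connection;
        import javax.jms.ConnectionFactory;
        import javax.jms.DeliveryMode;
        import javax.jms.JMSException;
        import javax.jms.MessageProducer;
        import javax.jms.Queue;
        import javax.jms.Session;

        public class FireAndForgetSender {

            private final ConnectionFactory connectionFactory; // looked up from JNDI in practice
            private final Queue posTransactionQueue;           // hypothetical destination

            public FireAndForgetSender(ConnectionFactory cf, Queue queue) {
                this.connectionFactory = cf;
                this.posTransactionQueue = queue;
            }

            public void send(String posTransactionXml) throws JMSException {
                Connection connection = connectionFactory.createConnection();
                try {
                    Session session = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);
                    MessageProducer producer = session.createProducer(posTransactionQueue);
                    producer.setDeliveryMode(DeliveryMode.PERSISTENT); // survive broker restarts
                    producer.send(session.createTextMessage(posTransactionXml));
                    // No reply is awaited: delivery is guaranteed by the queue, not the caller.
                } finally {
                    connection.close();
                }
            }
        }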

    Read the article
