Search Results

Search found 2359 results on 95 pages for 'transaction'.

  • Detecting Duplicates Using Oracle Business Rules

    - by joeywong-Oracle
    Recently I was involved with a Business Process Management Proof of Concept (BPM PoC) where we wanted to show how customers could use Oracle Business Rules (OBR) to easily define rules to detect certain conditions, such as duplicate account numbers, duplicate names, or high transaction amounts, in a set of transactions. Traditionally you would have to loop through the transactions and compare each transaction with every other one to find matching conditions. This is not particularly nice, as it relies on a more traditional approach (coding) and is not the most efficient way. OBR is a great place to house these types of rules: it externalises the rules from the message flows in a simpler manner and allows users to change them when required. So I went ahead looking for some examples. After quite a bit of time spent Googling, I did not find much out in the blogosphere. In fact the best example was actually from...... wait for it...... Oracle documentation! (http://docs.oracle.com/cd/E28271_01/user.1111/e10228/rules_start.htm#ASRUG228) However, if you followed the link there was not much explanation provided with the example. So the aim of this article is to provide a little more explanation so that the example can be better understood. Note: I won't be covering the BPM parts in great detail.

    Use case: a payment instruction file needs to be processed. Before the file can be processed it must be approved by a business user. Before the approval process, it is useful to run the payment instruction file through OBR to look for transactions of interest. The output of the OBR can then be used to flag the transactions for the approvers to investigate. (Figure: Example BPM Process)

    So let's start by defining the Business Rules Dictionary. For the input into our rules, we will pass in an array of payments which contain some basic information for our demo purposes. (Figure: Input to Business Rules) For our output we want an array of rule output messages. Note that the element I am using for the output is for a single rule message element, not an array; we will configure the Business Rules component later to return an array instead. (Figure: Output from Business Rules)

    Create the dictionary (Figure: Business Rule - Create Dictionary), fill in all the details and click OK. Open the Business Rules component and select Decision Functions from the side. (Figure: Modify the Decision Function Configuration) Select the decision function and click on the edit button (the pencil); don't worry that JDeveloper indicates that there is an error with the decision function. Then click the Outputs tab and make sure the checkbox under the List column is checked. This tells the Business Rules component that it should return an array of rule message elements. (Figure: Updating the Decision Service)

    Next we will define the actual rules. Click on Ruleset1 on the side and then Create Rule in the IF/THEN Rule section. (Figure: Creating new rule in ruleset) OK, this is where some detailed explanation is required. Remember that the input to this Business Rules dictionary is a list of payments, each of the complex type PaymentType. Each of those payments is treated by the Oracle Business Rules engine as a fact in its working memory. (Figure: Implemented rule) So in the IF/THEN rule, the first task is to grab two PaymentType facts from the working memory and assign them to temporary variable names (payment1 and payment2 in our example). (Figure: Matching facts)

    Once we have them in the temporary variables, we can start comparing them to each other. For our demonstration we want to find payments where the account numbers are the same but the account name is different. (Figure: Suspicious payment instruction) And to stop the rule from comparing the same facts to each other, over and over again, we have to include one last test. (Figure: Stop rule from comparing endlessly) And that's it! No for loops, no need to keep track of what you have or have not compared; OBR handles all that for you because everything is done in its working memory. Once all the tests have been satisfied, we assert a new fact for the output. (Figure: Assert the output fact) Save your Business Rules.

    The next step is to complete the data association in the BPM process. Take extra care to use Copy List instead of the default Copy when doing data association at an array level. (Figure: Input and output data association) Deploy and test. (Figures: Test data, Rule matched)

    Parting words: ideally you would then use the output of the Business Rules component to display/flag the transactions which triggered the rule so that the approver can investigate. Link: SOA Project Archive [Download]
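
    For context, here is a minimal sketch (plain Java, with a hypothetical PaymentType class standing in for the article's schema element) of the traditional nested-loop comparison that OBR replaces. Note the index bookkeeping needed to avoid comparing a payment with itself and to avoid flagging the same pair twice; OBR's working memory handles that for you.

        import java.util.ArrayList;
        import java.util.List;

        // Hypothetical payment record mirroring the PaymentType element used in the article.
        class PaymentType {
            final String accountNumber;
            final String accountName;
            PaymentType(String accountNumber, String accountName) {
                this.accountNumber = accountNumber;
                this.accountName = accountName;
            }
        }

        public class DuplicateScan {
            // Naive pairwise scan: flag payments with the same account number but different names.
            static List<String> findSuspicious(List<PaymentType> payments) {
                List<String> messages = new ArrayList<>();
                for (int i = 0; i < payments.size(); i++) {
                    for (int j = i + 1; j < payments.size(); j++) { // j > i avoids self/duplicate pairs
                        PaymentType p1 = payments.get(i);
                        PaymentType p2 = payments.get(j);
                        if (p1.accountNumber.equals(p2.accountNumber)
                                && !p1.accountName.equals(p2.accountName)) {
                            messages.add("Account " + p1.accountNumber + " used with different names: "
                                    + p1.accountName + " / " + p2.accountName);
                        }
                    }
                }
                return messages;
            }

            public static void main(String[] args) {
                List<PaymentType> batch = List.of(
                        new PaymentType("12345", "Alice Smith"),
                        new PaymentType("12345", "Bob Jones"),
                        new PaymentType("67890", "Carol White"));
                findSuspicious(batch).forEach(System.out::println);
            }
        }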

    Read the article

  • Oracle GoldenGate 12c - Leading Enterprise Replication

    - by Doug Reid
    Oracle GoldenGate 12c was released on October 17th and includes several new cutting-edge features that firmly establish GoldenGate's leadership position in the data replication space. In fact, this release more than doubles the performance of data delivery, supports Oracle's new multitenant database feature, is more secure, has more options for high availability, and has made great strides to simplify the configuration and deployment of the product. Read through the press release if you haven't already, and do not miss the quote from CERN's Eva Dafonte Perez regarding Oracle GoldenGate 12c: "…performs five times faster compared to previous GoldenGate versions and simplifies the management of a multi-tier environment".

    There are a variety of new and improved features in Oracle GoldenGate 12c. Here are the highlights:

    Optimized for Oracle Database 12c - GoldenGate 12c is custom tailored to the unique capabilities of Oracle Database 12c, and out of the box it supports multitenant (pluggable database (PDB)) and non-consolidated deployments of Oracle Database 12c. The naming convention used by Database 12c is now in three parts (PDB name, schema name, and object name). We have made changes to the GoldenGate capture process to support the new naming convention and streamlined the whole process so that a single GoldenGate capture process is used at the container level rather than at each individual PDB. By having the capture process at the container level, resource usage and the number of processes are reduced. To view a conceptual architecture diagram click here.

    Integrated Delivery for the Oracle Database - Leveraging a lightweight streaming API built exclusively for Oracle GoldenGate 12c, this process distributes load, auto-tunes the degree of parallelism, scales better, and delivers changed data to the Oracle database at blinding rates. One of the goals for Oracle GoldenGate 12c was to reduce IT costs by simplifying the configuration and reducing the time needed to manage complex infrastructures. In previous versions of Oracle GoldenGate, customers would split transaction loads by grouping tables into multiple different delivery processes (click here to view the previous method). Each delivery process executed independently and without any interaction with, or knowledge of, other delivery processes. This setup was complicated to configure and time consuming, as the developer needed in-depth knowledge of the source and target schemas and the transaction profile. With GoldenGate 12c and Integrated Delivery we have made it easier to configure and faster to deploy. To view a conceptual architecture diagram of Integrated Delivery click here.

    Coordinated Delivery for Non-Oracle Databases - Coordinated Delivery orchestrates high-speed apply processes and simplifies the configuration of GoldenGate for non-Oracle targets. In Oracle GoldenGate 12c a single delivery process is used with multiple threads (click here), and key events, such as primary key updates, event markers, DDL, etc., are coordinated between the various threads to ensure that transactions are applied in the same sequence as they were captured, all while delivering improved performance.

    Replication Between On-Premises and Cloud-Based Systems - The trend for businesses to utilize both on-premises and cloud-based systems is rising, and businesses need to replicate data back and forth. GoldenGate 12c can be configured in a variety of ways to provide real-time replication over unrestricted or restricted (limited ports or HTTP tunneling) networks between on-premises and cloud-based systems.

    Expanded Heterogeneity - It wouldn't be a GoldenGate release without new and improved platform support. Release 1 includes support for MySQL 5.6 and Sybase 15.7. In the next GoldenGate release, support will be expanded for MS SQL Server, DB2, and Teradata.

    Tighter Security - Oracle GoldenGate 12c is integrated with the Oracle Wallet to shield usernames and passwords using strong encryption and aliases. Customers accustomed to using the Oracle Wallet with other Oracle products will instantly be familiar with how to use this great new feature.

    Expanded Oracle Application and Technology Support - GoldenGate can be used along with Oracle Coherence to enable real-time changed data feeds to the Coherence cache using TopLink and the Oracle GoldenGate JMS adapter. Plus, Oracle Advanced Customer Services (ACS) now offers low-downtime E-Business Suite platform and database migrations using GoldenGate as the enabling technology.

    Stay tuned for more blogs on the new features and the upcoming launch webcast, where we will go into these new features in more detail. In the meantime make sure to read through our white paper "Oracle GoldenGate 12c Release 1 New Features Overview".

    Read the article

  • synchronization web service methodologies or papers

    - by Grady Player
    I am building a web service (PHP+JSON) to sync with my iPhone app. The main goals are: backup; provide a web view for printing, sorting and manipulating; and allow a group to sync up and down. I am aware of the logic problems with all of these items, i.e. if one person deletes something, do you persist this change to other users, collisions, etc. I am looking for any book or scholarly work, or even words of wisdom, that addresses common issues:
    - when to detect changes of data with hashes vs. modified dates, or a combination (a small sketch of both follows below)
    - how to address consolidation of sequential IDs originating on different client nodes (can be sidestepped in my context, but it would be interesting)
    - dealing with collisions (is there a universally safe way to do so?)
    - general best practices
    - how to structure the actual data transaction (ask for the whole list then detect changes, ...)
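
    To illustrate the first item, here is a hedged sketch in Java (the record class and field names are made up for illustration): a content hash detects any change regardless of clock skew between nodes, while a modified timestamp is cheaper to compute and compare but trusts the client clock.

        import java.nio.charset.StandardCharsets;
        import java.security.MessageDigest;
        import java.time.Instant;
        import java.util.Base64;

        // Hypothetical synced record; the field names are illustrative only.
        class Note {
            String id;
            String body;
            Instant modifiedAt; // set by whichever node last edited the note
        }

        public class ChangeDetection {
            // Content-based: compare a SHA-256 digest of the payload; immune to unsynchronized clocks.
            static String contentHash(Note n) throws Exception {
                MessageDigest md = MessageDigest.getInstance("SHA-256");
                byte[] digest = md.digest(n.body.getBytes(StandardCharsets.UTF_8));
                return Base64.getEncoder().encodeToString(digest);
            }

            // Timestamp-based: cheaper (no hashing), but relies on every client clock being sane.
            static boolean changedSince(Note n, Instant lastSync) {
                return n.modifiedAt.isAfter(lastSync);
            }
        }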

    Read the article

  • In-Memory OLTP Sample for SQL Server 2014 RTM

    - by Damian
    I have just found a very good resource about Hekaton (the In-Memory OLTP feature in SQL Server 2014). On the Codeplex site you can find the newest Hekaton samples - https://msftdbprodsamples.codeplex.com/releases/view/114491. The samples we had so far were related to the CTP2 version, but the newest ones work with the RTM version. There are some issues fixed that you might have run into if you tried the previous samples on the RTM version: Update (Apr 28, 2014): Fixed an issue where the isolation level for sample stored procedures demonstrating integrity checks was too low. The transaction isolation level for the following stored procedures was updated: Sales.uspInsertSpecialOfferProductinmem, Sales.uspDeleteSpecialOfferinmem, Production.uspInsertProductinmem, and Production.uspDeleteProductinmem.

    Read the article

  • SOA 11g Technology Adapters – ECID Propagation

    - by Greg Mally
    Overview Many SOA Suite 11g deployments include the use of the technology adapters for various activities including integration with FTP, database, and files to name a few. Although the integrations with these adapters are easy and feature rich, there can be some challenges from the operations perspective. One of these challenges is how to correlate a logical business transaction across SOA component instances. This correlation is typically accomplished via the execution context ID (ECID), but we lose the ECID correlation when the business transaction spans technologies like FTP, database, and files. A new feature has been introduced in the Oracle adapter JCA framework to allow the propagation of the ECID. This feature is available in the forthcoming SOA Suite 11.1.1.7 (PS6). The basic concept of propagating the ECID is to identify somewhere in the payload of the message where the ECID can be stored. Then two Binding Properties, relating to the location of the ECID in the message, are added to either the Exposed Service (left-hand side of composite) or External Reference (right-hand side of composite). This will give the JCA framework enough information to either extract the ECID from or add the ECID to the message. In the scenario of extracting the ECID from the message, the ECID will be used for the new component instance. Where to Put the ECID When trying to determine where to store the ECID in the message, you basically have two options: Add a new optional element to your message schema. Leverage an existing element that is not used in your schema. The best scenario is that you are able to add the optional element to your message since trying to find an unused element will prove difficult in most situations. The schema will be holding the ECID value which looks something like the following: 11d1def534ea1be0:7ae4cac3:13b4455735c:-8000-00000000000002dc Configuring Composite Services/References Now that you have identified where you want the ECID to be stored in the message, the JCA framework needs to have this information as well. The two pieces of information that the framework needs relates to the message schema: The namespace for the element in the message. The XPath to the element in the message. To better understand this, let's look at an example for the following database table: When an Exposed Service is created via the Database Adapter Wizard in the composite, the following schema is created: For this example, the two Binding Properties we add to the ReadRow service in the composite are: <!-- Properties for the binding to propagate the ECID from the database table --> <property name="jca.ecid.nslist" type="xs:string" many="false">  xmlns:ns1="http://xmlns.oracle.com/pcbpel/adapter/db/top/ReadRow"</property> <property name="jca.ecid.xpath" type="xs:string" many="false">  /ns1:EcidPropagationCollection/ns1:EcidPropagation/ns1:ecid</property> Notice that the property called jca.ecid.nslist contains the targetNamespace defined in the schema and the property called jca.ecid.xpath contains the XPath statement to the element. The XPath statement also contains the appropriate namespace prefix (ns1) which is defined in the jca.ecid.nslist property. When the Database Adapter service reads a row from the database, it will retrieve the ECID value from the payload and remove the element from the payload. When the component instance is created, it will be associated with the retrieved ECID and the payload contains everything except the ECID element/value. 
    The only time the ECID is visible is when it is stored safely in the resource technology, like the database, a file, or a queue.

    Simple Database/File/JMS Example: This section contains a simplified example of how the ECID can propagate through a database table, a file, and a JMS queue. The composite for the example looks like the following: The flow of this example is as follows:
    1. Invoke the database insert using the insertwithecidbpelprocess_client_ep Service.
    2. The InsertWithECIDBPELProcess adds a row to the database via the Database Adapter. The JCA Framework adds the ECID to the message prior to inserting.
    3. The ReadRow Service retrieves the record and the JCA Framework extracts the ECID from the message. The ECID element is removed from the message. An instance of ReadRowBPELProcess is created and associated with the retrieved ECID.
    4. The ReadRowBPELProcess then writes the record to the file system via the File Adapter. The JCA Framework adds the ECID to the message prior to writing the message to file.
    5. The ReadFile Service retrieves the record from the file system and the JCA Framework extracts the ECID from the message. The ECID element is removed from the message. An instance of ReadFileBPELProcess is created and associated with the retrieved ECID.
    6. The ReadFileBPELProcess then enqueues the message via the JMS Adapter. The JCA Framework adds the ECID to the message prior to enqueuing the message.
    7. The DequeueMessage Service retrieves the record and the JCA Framework extracts the ECID from the message. The ECID element is removed from the message. An instance of DequeueMessageBPELProcess is created and associated with the retrieved ECID.
    8. The logical flow ends.

    When viewing the Flow Trace in Enterprise Manager, you will now see all the instances correlated via ECID. Please check back here when SOA Suite 11.1.1.7 is released for this example; with the example you can run it yourself and reinforce what has been shared in this blog via a hands-on experience. One final note: the contents of this blog may be included in the official SOA Suite 11.1.1.7 documentation, but you will still need to come here to get the example.

    Read the article

  • DCOGS Balance Breakup Diagnostic in OPM Financials

    - by ChristineS-Oracle
    The purpose of this diagnostic (OPMDCOGSDiag.sql) is to identify the sales orders which constitute the Deferred COGS account balance. It helps you get the detailed transaction information for the sales order(s) across Order Management, Accounts Receivable, Inventory and the OPM Financials subledger at the organization level. The script is applicable to various scenarios of standard sales orders and return orders (RMA), coupled with all the applicable OPM costing methods: Standard, Actual and Lot costing.

    OBJECTIVE: To collect, in one spreadsheet at one go, the information for sales order(s) which are at different stages of their life cycle. This will help in:
    - Less time spent on data collection.
    - Faster diagnosis of the issue.
    - Easy collaboration across different modules like Order Management, Accounts Receivable, Inventory and Cost Management.

    You can download the script from Doc ID 1617599.1 DCOGS Balance Breakup (SO/RMA) and Diagnostic Analyzer in OPM Financials.

    Read the article

  • How to spawn a character at a certain point and walk to a set point

    - by Robert H.
    I am making a game where I have a background image of a neighborhood. Each location has a different number of customers that are generated to walk on the sidewalks. They all walk to a specific location (like a stand or cart that sells stuff); after they reach the location I want them to interact with the cart. However, if another customer is already in a sale interaction, the others get in line in order of arrival. After the transaction the customers walk off screen. Any information on how I can do this and what game engine would be needed? Does anyone have any idea where I should go for this? I already have my game done up through Eclipse/Java without any game engine.

    Read the article

  • Data management in unexpected places

    - by Ashok_Ora
    When you think of network switches, routers, firewall appliances, etc., it may not be obvious that at the heart of these kinds of solutions is an engine that can manage huge amounts of data at very high throughput with low latencies and high availability. Consider a network router that is processing tens (or hundreds) of thousands of network packets per second. So what really happens inside a router? Packets are streaming in at the rate of tens of thousands per second. Each packet has multiple attributes, for example a destination, associated SLAs, etc. For each packet, the router has to determine the address of the next "hop" to the destination; it has to determine how to prioritize this packet. If it's a high-priority packet, then it has to be sent on its way before lower-priority packets. As a consequence of prioritizing high-priority packets, lower-priority data packets may need to be temporarily stored (held back), but addressed fairly. If there are security or privacy requirements associated with the data packet, those have to be enforced. You probably need to keep track of statistics related to the packets processed (someone's sure to ask). You have to do all this (and more) while preserving high availability, i.e. if one of the processors in the router goes down, you have to have a way to continue processing without interruption (the customer won't be happy with a "choppy" VoIP conversation, right?). And all this has to be achieved without ANY intervention from a human operator; the router is most likely in a remote location, so it must JUST CONTINUE TO WORK CORRECTLY, even when bad things happen.

    How is this implemented? As soon as a packet arrives, it is interpreted by the receiving software. The software decodes the packet headers in order to determine the destination, kind of packet (e.g. voice vs. data), SLAs associated with the "owner" of the packet, etc. It looks up the internal database of "rules" for how to process this packet and handles the packet accordingly. The software might choose to hold on to the packet safely for some period of time, if it's a low-priority packet. Ah, this sounds very much like a database problem. For each packet, you have to minimally:
    · Look up the most efficient next "hop" towards the destination. The "most efficient" next hop can change, depending on latency, availability, etc.
    · Look up the SLA and determine the priority of this packet (e.g. voice calls get priority over data FTP).
    · Look up security information associated with this data packet. It may be necessary to retrieve the context for this network packet, since a network packet is a small "slice" of a session. The context for the "header" packet needs to be stored in the router in order to make this work.
    · If the priority of the packet is low, then "store" the packet temporarily in the router until it is time to forward the packet to the next hop.
    · Update various statistics about the packet.
    In most cases, you have to do all this in the context of a single transaction. For example, you want to look up the forwarding address and perform the "send" in a single transaction so that the forwarding address doesn't change while you're sending the packet. So, how do you do all this? Berkeley DB is a proven, reliable, high-performance, highly available embeddable database, designed for exactly these kinds of usage scenarios.

    Berkeley DB is a robust, reliable, proven solution that is currently being used in these scenarios. First and foremost, Berkeley DB (or BDB for short) is very, very fast. It can process tens or hundreds of thousands of transactions per second. It can be used as a pure in-memory database, or as a disk-persistent database. BDB provides high availability: if one board in the router fails, the system can automatically fail over to another board, with no manual intervention required. BDB is self-administering; there's no need for manual intervention in order to maintain a BDB application, and no need to send a technician to a remote site in the middle of nowhere on a freezing winter day to perform maintenance operations. BDB has been used in over 200 million deployments worldwide over the past two decades for mission-critical applications such as the one described here. You have a choice of spending valuable resources to implement similar functionality, or you could simply embed BDB in your application and off you go! I know what I'd do: choose BDB, so I can focus on my business problem. What will you do?
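
    To make the single-transaction point concrete, here is a minimal sketch using the Berkeley DB Java Edition API (com.sleepycat.je); the key/value layout for a routing entry is an assumption made for illustration, not any vendor's actual schema.

        import com.sleepycat.je.*;
        import java.io.File;
        import java.nio.charset.StandardCharsets;

        public class RoutingStore {
            public static void main(String[] args) {
                // Open (or create) an embedded, transactional environment on local storage.
                EnvironmentConfig envCfg = new EnvironmentConfig();
                envCfg.setAllowCreate(true);
                envCfg.setTransactional(true);
                Environment env = new Environment(new File("/tmp/bdb-routes"), envCfg);

                DatabaseConfig dbCfg = new DatabaseConfig();
                dbCfg.setAllowCreate(true);
                dbCfg.setTransactional(true);
                Database routes = env.openDatabase(null, "routes", dbCfg);

                // Look up the next hop and record a statistic in ONE transaction, so the
                // route cannot change between the lookup and the send bookkeeping.
                Transaction txn = env.beginTransaction(null, null);
                boolean committed = false;
                try {
                    DatabaseEntry key = new DatabaseEntry("10.1.2.0/24".getBytes(StandardCharsets.UTF_8));
                    DatabaseEntry nextHop = new DatabaseEntry();
                    if (routes.get(txn, key, nextHop, LockMode.RMW) == OperationStatus.SUCCESS) {
                        // ... forward the packet to the stored next hop here ...
                        DatabaseEntry statKey = new DatabaseEntry("stats:10.1.2.0/24".getBytes(StandardCharsets.UTF_8));
                        DatabaseEntry statVal = new DatabaseEntry("packets+1".getBytes(StandardCharsets.UTF_8));
                        routes.put(txn, statKey, statVal);
                    }
                    txn.commit();
                    committed = true;
                } finally {
                    if (!committed) {
                        txn.abort();
                    }
                    routes.close();
                    env.close();
                }
            }
        }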

    Read the article

  • Webcast: Flow Manufacturing Work Order-less Completion

    - by ChristineS-Oracle
    Webcast: Flow Manufacturing Work Order-less Completion
    Date: August 27, 2014 at 11:00 am ET, 10:00 am CT, 9:00 am MT, 8:00 am PT, 8:30 pm India Time (Mumbai, GMT+05:30)
    This advisor webcast is intended for technical and functional users who want to understand Flow Manufacturing Work Order-less Completion and its transaction types. The presentation will include the required setups and provide details of the business process. Topics will include:
    - Overview of Flow Manufacturing and integration with Work In Process
    - Overview of Work Order-less Completion
    - Basic setup steps
    - Transactions performed in the Work Order-less Completion form
    Details & Registration: Doc ID 1906749.1

    Read the article

  • Oracle Enterprise Manager Cloud Control 12c Release 2 (12.1.0.2) is Available Now !

    - by Anand Akela
    Oracle today announced the availability of Oracle Enterprise Manager Cloud Control 12c Release 2 (12.1.0.2). It is now available for download on OTN on ALL platforms. This is the first major release since the launch of Enterprise Manager 12c in October of 2011, and the first time an Enterprise Manager release is available on all platforms simultaneously. This is primarily a stability release which incorporates many of the issues and feedback reported by early adopters. In addition, this release contains many new features and enhancements in areas across the board.

    New Capabilities and Features:

    Enhanced management capabilities for enterprise private clouds: Introduces new capabilities to allow customers to build and manage a Java Platform-as-a-Service (PaaS) cloud based on Oracle WebLogic Server. The new capabilities include guided setup of the PaaS cloud, self-service provisioning, automatic scale-out, and metering and chargeback.

    Enhanced lifecycle management capabilities for Oracle WebLogic Server environments: Combining in-context multiple domain, patching and configuration file synchronizations.

    Integrated hardware-software management for Oracle Exalogic Elastic Cloud through features such as rack schematics visualization and integrated monitoring of all hardware and software components.

    The latest management capabilities for business-critical applications include:
    - Business Application Management: A new Business Application (BA) target type and dashboard with flexible definitions provides a logical view of an application's business transactions, end-user experiences and the cloud infrastructure the monitored application is running on.
    - Enhanced User Experience Reporting: Oracle Real User Experience Insight has been enhanced to provide reporting capabilities on client-side issues for applications running in the cloud, and has been more tightly coupled with Oracle Business Transaction Management to help ensure that real-time user experience and transaction tracing data is provided to users in context.

    Several key improvements address ease of administration, reporting and extensibility for massively scalable cloud environments, including dynamic groups, self-updateable monitoring templates, bulk operations against many events, etc.

    New and Revised Plug-ins: Several plug-ins have been updated as a part of this release, resulting in either new versions or revisions. Revised plug-ins contain only bug fixes, while new plug-ins incorporate both bug fixes and new functionality.
    - Enterprise Manager for Oracle Database 12.1.0.2 (revision)
    - Enterprise Manager for Oracle Fusion Middleware 12.1.0.3 (new)
    - Enterprise Manager for Chargeback and Capacity Planning 12.1.0.3 (new)
    - Enterprise Manager for Oracle Fusion Applications 12.1.0.3 (new)
    - Enterprise Manager for Oracle Virtualization 12.1.0.3 (new)
    - Enterprise Manager for Oracle Exadata 12.1.0.3 (new)
    - Enterprise Manager for Oracle Cloud 12.1.0.4 (new)

    Installation and Upgrade: All major platforms have been released simultaneously (Linux 32/64-bit, Solaris (SPARC), Solaris x86-64, IBM AIX 64-bit, and Windows x86-64 (64-bit)). Enterprise Manager 12.1.0.2 is a complete release that includes both the EM OMS and Agent versions of 12.1.0.2. Installation options available with EM 12.1.0.2: users can do a fresh install or an upgrade from versions EM 10.2.0.5, 11.1, or 12.1.0.1 (Bundle Patch 1 not mandatory). Upgrading to EM 12.1.0.2 from EM 12.1.0.1 is not a patch application (similar to Bundle Patch 1) but is achieved through a 1-system upgrade.

    Documentation: The Oracle Enterprise Manager Cloud Control Introduction document provides a broad overview of capabilities and highlights "What's New" in EM 12.1.0.2. All updated Oracle Enterprise Manager documentation can be found on OTN, including the Upgrade Guide.

    Please feel free to ask questions related to the new Oracle Enterprise Manager Cloud Control 12c Release 2 (12.1.0.2) at the Oracle Enterprise Manager Forum. You could also share your feedback on Twitter using the hash tag #em12c or on Facebook. Stay Connected: Twitter | Facebook | YouTube | LinkedIn | Newsletter

    Read the article

  • Webcast Replay: Extreme Performance for Consolidated Workloads with Oracle Exadata

    - by kimberly.billings
    If you missed our live webcast Extreme Performance for Consolidated Workloads with Oracle Exadata last week, the replay is now available. Watch the free on-demand webcast in which Tim Shetler, Vice President of Oracle Database Product Management, and Richard Exley, Consulting Member of Technical Staff, discuss how Oracle Exadata can help you significantly improve application performance and reduce infrastructure costs by consolidating transaction processing, data warehousing, or mixed workloads on Oracle Exadata. Note: (1) Turn off pop-up blockers if the slides do not advance automatically. (2) Slides are available for download.

    Read the article

  • Off-site Cardholder Data Storage

    - by LinuxGnut
    Is there a service or site out there that will store cardholder data for me? I don't need any kind of transaction processing or recurring billing; I just need somewhere I can store data until someone in my company is able to look at it. The specific need is allowing customers to input data that will be used for credit checks: name, address, credit card(s), and so on. Google Checkout, PayPal, NetSuite, and Authorize.net seem to be what everyone suggests to me, but they don't offer what I need -- they're just payment gateways.

    Read the article

  • A Patent for Workload Management Based on Service Level Objectives

    - by jsavit
    I'm very pleased to announce that after a tiny :-) wait of about 5 years, my patent application for a workload manager was finally approved.

    Background: Many operating systems have a resource manager which lets you control machine resources. For example, Solaris provides controls for CPU with several options: shares for proportional CPU allocation (if you have twice as many shares as I do, and we are competing for CPU, you'll get about twice as many CPU cycles); dedicated CPU allocation, in which a number of CPUs are exclusively dedicated to an application's use (you can say that a zone or project "owns" 8 CPUs on a 32-CPU machine, for example); and capped CPU, in which you specify the upper bound, or cap, of how much CPU an application gets (for example, you can throttle an application to 0.125 of a CPU). (This isn't meant to be an exhaustive list of Solaris RM controls.)

    Workload management: Useful as that is (and tragic that some other operating systems have little resource management and isolation, and frighten people into running only one app per OS instance, wastefully sizing every server for the peak workload it might experience), that's not really workload management. With resource management one controls the resources and hopes that's enough to meet application service objectives. In fact, we hold resource distribution constant, see if that was good enough, and adjust resource distribution if that didn't meet service level objectives. Here's an example of what happens today: Let's try 30% dedicated CPU. Not enough? Let's try 80%. Oh, that's too much, and we're achieving much better response time than the objective, but other workloads are starving. Let's back that off and try again. It's not the process I object to; it's that we too often do this manually. Worse, we sometimes identify and adjust the wrong resource and fiddle with that to no useful result. Back in my days as a customer managing large systems, one of my users would call me up to beg for a "CPU boost": Me: "It won't make any difference - there's plenty of spare CPU to be had, and your application is completely I/O bound." User: "Please do it anyway." Me: "Oh, all right, but it won't do you any good." (I did, because he was a friend, but it didn't help.)

    Prior art: There are some operating environments that take a stab at workload management (rather than resource management), but I find them lacking. I know of one that uses synthetic "service units" composed of the sum of CPU, I/O and memory allocations multiplied by weighting factors. A workload is set to make a target rate of service units consumed per second. But this seems to be missing a key point: what is the relationship between artificial "service units" and actually meeting a throughput or response time objective? What if I get plenty of one of the components (so am getting enough service units), but not enough of the resource that's needed to remove the bottleneck?

    Actual workload management: That's not really the answer either. What is needed is to specify a workload's service levels in terms of externally visible metrics that are meaningful to a business, such as response times or transactions per second, and have the workload manager figure out which resources are not being adequately provided, and then adjust them as needed. If an application is not meeting its service level objectives and the reason is that it's not getting enough CPU cycles, adjust its CPU resource accordingly. If the reason is that the application isn't getting enough RAM to keep its working set in memory, then adjust its RAM assignment appropriately so it stops swapping. Simple idea, but that's a task we keep dumping on system administrators. In other words: don't hold the number of CPU shares constant and watch the achievement of the service level vary. Instead, hold the service level constant, and dynamically adjust the number of CPU shares (or the amount of other resources like RAM or I/O bandwidth) in order to meet the objective.

    Instrumenting non-instrumented applications: There's one little problem here: how do I measure application performance in a way relating to a service level? I don't want to do it based on internal resources like the number of CPU seconds it received per minute; we need to make resource decisions based on externally visible and meaningful measures of performance, not synthetic items or internal resource counters. If I have a way of marking the beginning and end of a transaction, I can then measure whether or not the application is meeting an objective based on it. If I can observe the delay factors for an application, I can see which resource shortages are slowing an application enough to keep it from meeting its objectives. I can then adjust resource allocations to relieve those shortages. Fortunately, Solaris provides facilities for both marking application progress and determining what factors cause application latency. The Solaris DTrace facility lets me introspect on application behavior: in particular I can see events like "receive a web hit" and "respond to that web hit", so I can get transaction rate and response time. DTrace (and tools like prstat) let me see where latency is being added to an application, so I know which resource to adjust.

    Summary: After a delay of a mere few years, I am the proud creator of a patent (advice to anyone interested in going through the process: don't hold your breath!). The fundamental idea is fairly simple: instead of holding resources constant and suffering variable levels of success meeting service level objectives, properly characterise the service level objective in meaningful terms, instrument the application to see if it's meeting the objective, and then have a workload manager change resource allocations to remove delays preventing service level attainment. I've done it by hand for a long time; I think that's what a computer should do for me.

    Read the article

  • Implementing a set of processes in a stored procedure or through the code?

    - by just_name
    I want to know the suitable method (best practice) to implement the following case. I have a set of processes like this:
    1. Select data from a set of DB tables.
    2. Loop over the selected results.
    3. Make some checks on each iteration.
    4. Insert the result into another table.
    Should I implement the previous steps in a stored procedure, or in a transaction through my code (ASP.NET)? I am concerned about the performance, security and reliability issues. (A sketch of the application-side option follows below.)
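
    For the "transaction through my code" option, here is a minimal sketch of the shape, written in Java/JDBC purely for illustration (the question's stack is ASP.NET, where SqlConnection/SqlTransaction play the same roles, and the table and column names below are made up): select, check each row, insert the rows that pass, and commit or roll back the whole thing as one unit.

        import java.sql.*;

        public class CopyWithChecks {
            // Illustrative table and column names; the real schema comes from the application.
            static void run(Connection conn) throws SQLException {
                conn.setAutoCommit(false); // one transaction around the whole loop
                try (Statement select = conn.createStatement();
                     ResultSet rs = select.executeQuery("SELECT id, amount FROM source_rows");
                     PreparedStatement insert =
                             conn.prepareStatement("INSERT INTO checked_rows (id, amount) VALUES (?, ?)")) {
                    while (rs.next()) {
                        long id = rs.getLong("id");
                        double amount = rs.getDouble("amount");
                        if (amount > 0) { // stand-in for the per-row business checks
                            insert.setLong(1, id);
                            insert.setDouble(2, amount);
                            insert.executeUpdate();
                        }
                    }
                    conn.commit();
                } catch (SQLException e) {
                    conn.rollback(); // nothing is inserted if any step fails
                    throw e;
                }
            }
        }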

    Read the article

  • Slow when browsing the web, but normal when using apt-get

    - by Arif Setyawan
    I have a problem with my fresh installation of Ubuntu 11.10. The problem happens when I start up my browser and access websites such as Google, askubuntu.com, Facebook and other sites. Frustrated with the situation, I tried watching the progress in System Monitor. The result from the monitor shows that there is no data transfer when I try to access the page. At first I assumed it was a problem with my ISP or my home connection, but it is not: when I ping Google it shows traffic between my computer and the network, yet the browser still receives nothing. Then I tested aptitude and it worked fine, but the browser still won't receive any packets from the internet. Here are the details of my setup: Laptop: Toshiba Satellite e205 (Atheros network card). Operating system: Ubuntu 11.04 dual-booted with Win7. Browser: the Mozilla Firefox that comes with the Ubuntu 11.10 installation. Chrome: latest version (installed just yesterday). Thank you in advance :) [this question posted via win]

    Read the article

  • Application Composer Series: Where and When to use Groovy

    - by Richard Bingham
    This brief post is really intended as more of a reference than an article. The reference below highlights two things for each feature: when you might use it (the most common use case) and whether you can add your own custom logic via groovy code. Obviously this applies only where Application Composer exists, namely Fusion CRM and Oracle Sales Cloud, and is based on current (release 8) functionality.

    - Field Triggers: React to run-time data changes. Only fired when the field is changed and upon submit. (Groovy: Y)
    - Object Triggers: To extend the standard processing logic for an object, based on record creation, updates and deletes. There is a split between these firing events, with some related to UI/ADF actions and others originating in the database. (Groovy: Y)
      UI trigger points:
      - After Create - fires when a new object record is created. Commonly used to set default values for fields.
      - Before Modify - fires when the end user tries to modify a field value. Could be used for generic warnings or extra security logic.
      - Before Invalidate - fires on the parent object when one of its child object records is created, updated, or deleted. For building in relationship logic.
      - Before Remove - fires when an attempt is made to delete an object record. Can be used to create conditions that prevent deletes.
      Database trigger points:
      - Before Insert in Database - fires before a new object is inserted into the database. Can be used to ensure a dependent record exists or check for duplicates.
      - After Insert in Database - fires after a new object is inserted into the database. Could be used to create a complementary record.
      - Before Update in Database - fires before an existing object is modified in the database. Could be used to check dependent record values.
      - After Update in Database - fires after an existing object is modified in the database. Could be used to update a complementary record.
      - Before Delete in Database - fires before an existing object is deleted from the database. Could be used to check dependent record values.
      - After Delete in Database - fires after an existing object is deleted from the database. Could be used to remove dependent records.
      - After Commit in Database - fires after the change pending for the current object (insert, update, delete) is made permanent in the current transaction. Could be used when committed data that has passed all validation is required.
      - After Changes Posted to Database - fires after all changes have been posted to the database, but before they are permanently committed. Could be used to make additional changes that will be saved as part of the current transaction.
    - Field Validation: Displays a user-entered error message based on groovy logic validating the field value. The message is shown only when the validation logic returns false, and the logic is triggered only when tabbing out of the field on the user interface. (Groovy: Y)
    - Object Validation: Commonly used where validation is needed across multiple related fields on the object. Triggered on the submit UI action. (Groovy: Y)
    - Object Workflows: All object workflows are fired upon either record creation or update, along with the option of adding a custom groovy firing condition. (Groovy: Y)
      - Field Updates - change another field when a specified one changes. Intended as an easy way to set different run-time values (e.g. pick values for LOVs), plus the value field permits groovy logic entry. (Groovy: Y)
      - E-Mail Notification - sends an email notification to specified users/roles. Templates support using run-time value tokens and rich text. (Groovy: N)
      - Task Creation - for adding standard tasks for use in the worklist functionality. (Groovy: N)
      - Outbound Message - will create and send an XML payload of the related object SDO to a specified endpoint. (Groovy: N)
      - Business Process Flow - intended for approval using the seeded process, however can also trigger custom BPMN flows. (Groovy: N)
    - Global Functions: Utility functions that can be called from any groovy code in Application Composer (across applications). (Groovy: Y)
    - Object Functions: Utility functions that are local to the parent object. Usually triggered from within 'Buttons and Actions' definitions in Application Composer, although they can be called from other code for that object (e.g. from a trigger). (Groovy: Y)
    - Add Custom Fields: When adding custom fields there are a few places you can include groovy logic. (Groovy: Y)
      - Default Value - to add logic within setting the default value when new records are entered. (Groovy: Y)
      - Conditionally Updateable - to add logic to set the field to read-only or not. (Groovy: Y)
      - Conditionally Required - to add logic to set the field to required or not. (Groovy: Y)
      - Formula Field - used to provide a new aggregate field that is entirely based on groovy logic and other field values. (Groovy: Y)
    - Simplified UI Layouts - Advanced Expressions: Used for creating dynamic layouts for simplified UI pages where fields and regions show/hide based on run-time context values and logic. Also includes support for the depends-on feature as a trigger. (Groovy: Y)

    Related References: This Blog: Application Composer Series; Extending Sales Guide: Using Groovy Scripts; Groovy Scripting Reference Guide

    Read the article

  • Transactional Interceptors in Java EE 7 - Request for feedback

    - by arungupta
    Linda described how EJB's container-managed transactions can be applied to the Java EE 7 platform as a whole using a solution based on CDI interceptors. This can then be used by other Java EE components as well, such as Managed Beans. The plan is to add an annotation and standardized values in the javax.transaction package. For example:

    @Inherited
    @InterceptorBinding
    @Target({TYPE, METHOD})
    @Retention(RUNTIME)
    public @interface Transactional {
        TxType value() default TxType.REQUIRED;
    }

    And then this can be specified on a class or a method of a class as:

    public class ShoppingCart {
        ...
        @Transactional
        public void checkOut() {...}
        ...
    }

    This interceptor will be defined as part of the update to the Java Transactions API spec at jta-spec.java.net. The Java EE 7 Expert Group needs your help and is looking for feedback on the exact semantics. The complete discussion can be read here. Please post your feedback to [email protected] and we'll also consider comments posted to this entry.

    Read the article

  • Generating Landed Cost Management Charges using Custom Pricing Attributes

    - by ChristineS-Oracle
    Learn how to incorporate Custom Pricing Attributes into Landed Cost Management through a new whitepaper. The new application, Landed Cost Management (LCM), enables exact shipment charges to be applied to incoming receipts. These charges are calculated using the Freight and Special Charges functionality from Advanced Pricing within the Pricing Transaction Entity of "Purchasing". Advanced Pricing is very flexible in that custom attributes can be defined to derive specific charges. The way that Landed Cost Management builds these attributes is different from the processing for Advanced Pricing with Purchasing. The whitepaper can be downloaded from document Oracle Advanced Pricing White Papers, Doc ID 136687.1.

    Read the article

  • How to generate a Visa Checkout token? [on hold]

    - by Muhammad Junaid
    I am in the process of creating a Visa Checkout plugin but am stuck generating the token. Here are the token requirements:

    Format: alphanumeric; maximum 100 characters, in the form x:UNIX_UTC_Timestamp:SHA256_hash, where:
    - UNIX_UTC_Timestamp is a UNIX Epoch timestamp
    - SHA256_hash is an SHA256 hash of the following unseparated items:
      - your shared secret
      - the timestamp from the transaction; exactly the same as UNIX_UTC_Timestamp
      - the resource path (API name)
      - this HTTPS request's query string
    Note: The query string includes one or more parameters in name-value pair format, whose names are separated from values by equal signs (=); an empty value may be omitted but the name and equal sign must be present. The initial question mark (?) is not included. Note: All parameters must be present. The parameters must be in lexicographic sort order (UTF-8, uppercase hex characters) with parameters separated from each other by an ampersand (&). Note: The query string must be URL encoded (excepting the following characters, per RFC 3986: hyp
    You can find on Google "visa checkout developer updating 1 px image"
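
    Here is a hedged sketch (Java) of how such a token could be assembled; the shared secret, resource path, and query string values are placeholders, and the lexicographic parameter ordering and URL encoding described above must already have been applied to the query string you pass in.

        import java.nio.charset.StandardCharsets;
        import java.security.MessageDigest;

        public class CheckoutToken {
            // Builds x:UNIX_UTC_Timestamp:SHA256_hash as described in the requirements above.
            static String build(String sharedSecret, String resourcePath, String sortedEncodedQuery)
                    throws Exception {
                long timestamp = System.currentTimeMillis() / 1000L; // UNIX epoch seconds (UTC)
                String material = sharedSecret + timestamp + resourcePath + sortedEncodedQuery;
                MessageDigest md = MessageDigest.getInstance("SHA-256");
                byte[] digest = md.digest(material.getBytes(StandardCharsets.UTF_8));
                StringBuilder hex = new StringBuilder();
                for (byte b : digest) {
                    hex.append(String.format("%02x", b));
                }
                return "x:" + timestamp + ":" + hex;
            }

            public static void main(String[] args) throws Exception {
                // Placeholder inputs; real values come from your Visa Checkout configuration.
                System.out.println(build("mySharedSecret", "payment/info/123", "apikey=abc&format=json"));
            }
        }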

    Read the article

  • Rollback in Oracle and SQL Server

    - by CatherineRussell
    I have an Oracle background, so it was interesting to see how rollback is handled in Oracle and SQL Server. There is no "begin tran" in Oracle. What Oracle does is store the data in a temporary area called the rollback segments. Until you issue the commit command the records are kept there, so you can even roll back your update statement by issuing the rollback command. When you issue the commit command, the records in the rollback segments are written to the redo log files. The same logic applies to inserts, except that there is no mirror image of the record kept. In SQL Server, if you want to be able to roll back a statement, you need to start it with a "begin tran". Then you can roll back the transaction if needed:
    begin tran
    update Person set FirstName = 'Arthur' where PersonId = 10
    -- select firstname from Person
    rollback

    Read the article

  • WP: Oracle Multitenant on SuperCluster T5-8: Study of Database Consolidation Efficiency

    - by uwes
    Consolidation in the data center is the driving factor in reducing capital and operational expense in IT today. This is particularly relevant as customers invest more in cloud infrastructure and associated service delivery. Database consolidation is a strategic component in this effort. Oracle Database 12c introduces Oracle Multitenant, a new database consolidation model in which multiple Pluggable Databases (PDBs) are consolidated within a Container Database (CDB). While keeping many of the isolation aspects of single databases, it allows PDBs to share the system global area (SGA) and background processes of a common CDB. The white paper recently published on OTN, Oracle Multitenant on SuperCluster T5-8: Study of Database Consolidation Efficiency, analyzes and quantifies savings in compute resources, efficiencies in transaction processing, and consolidation density of Oracle Multitenant compared to consolidated single instance databases (SIDBs) running in a bare-metal environment.

    Read the article

  • Highlighting new rows in ADF Table

    - by Sireesha Pinninti
    About: This article explains how to highlight newly inserted rows in an ADF Table without writing any extra Java/JavaScript code.
    Introduction: Sometimes we may wish to give more clarity to the end user by differentiating between newly inserted rows and the existing rows (i.e. the rows from the DB) in a table, by highlighting new rows in a different color as in the figure shown below.
    Solution: We can achieve this by setting the following EL on the inlineStyle property of every column inside af:table: #{row.row.entities[0].entityState == 0?'background-color:#307D7E;':''}
    Explanation: Here is the explanation for row.row.entities[0].entityState given inside the EL, which returns the state of the row (i.e. New, Modified, Unmodified, Initialized, etc.):
    - row - refers to a tree node binding (an instance of FacesCtrlHierNodeBinding) at runtime
    - row.row - refers to the instance of the row that the tree node is based on
    - row.row.entities[0] - gets the entity row at the zeroth index. In most cases the table will be based on a single entity; if your table is based on multiple entities then the index needs to be given accordingly.
    - row.row.entities[0].entityState - gets the Entity Object's current entity state in the transaction (0 - New, 2 - Modified, 1 - Unmodified, -1 - Initialized, etc.)

    Read the article

  • Immutable design with an ORM: How are sessions managed?

    - by Programmin Tool
    If I were to make a site with a mutable language like C# and use NHibernate, I would normally approach sessions with the idea of creating them only when needed and disposing of them at request end. This has helped with keeping a session open for multiple transactions by a user, while keeping it from staying open so long that its state might be corrupted. In an immutable system, like F#, I would think I shouldn't do this, because it supposes that a single session could be updated constantly by any number of inserts/updates/deletes/etc... I'm not against the "using" solution, since I would think that connection pooling will help cut down on the cost of connecting every time, but I don't know if all database systems do connection pooling. It just seems like there should be a better way that doesn't compromise the immutability goal. Should I just do a simple "using" block per transaction, or is there a better pattern for this?

    Read the article

  • Nokia's CEO denies rumors that Microsoft will buy his company, and announces the first product is in the works

    Nokia's CEO denies rumors that Microsoft will buy his company, and announces the first product is in the works. Update of 18.03.2011 by Katleen: The rapprochement between Microsoft and Nokia continues to generate a lot of ink. The rumors now speak of an acquisition of the Finnish manufacturer by the American giant. This irritates Stephen Elop, Nokia's boss, who sees no reason that would push Redmond to buy his company. Moreover, such a transaction would, according to him, risk upsetting the market by bringing "a good year of anti-trust investigations, big problems, and delays". A big strategic blunder, in other words....

    Read the article
