Search Results

Search found 16971 results on 679 pages for 'blogs'.

  • Database Vault integration available

    - by Anthony Shorten
    One of the major features of Oracle Utilities Application Framework V4.1 is a base solution for integration with the Database Vault product. Database Vault is part of Oracle's security portfolio and allows database user permissions to be locked down so that only appropriate users get appropriate access to the product data. By default, when you install the product database, administrators and SYSDBA users have full DML access (SELECT, INSERT, UPDATE and DELETE) to the schemas they own and, in the case of SYSDBA users, to all schemas on the database. This can be perceived as an issue. Database Vault adds a layer of security that disables inappropriate access. For Oracle Utilities Application Framework, a prebuilt Database Vault solution is provided that restricts base DML access on product data to product users only. The solution ships with the database installation files and includes a set of SQL files to create, disable, enable and delete the Database Vault objects. It contains a Database Vault Realm, Rule Sets, Rules and Command Rules that can be used as-is or extended to meet site-specific needs. The solution is consistent with the Database Vault solutions provided for other Oracle applications such as PeopleSoft, E-Business Suite, JD Edwards and Siebel; customers familiar with those solutions will recognize the similarities. For more details, refer to Database Vault Integration for Oracle Utilities Application Framework Based Products on My Oracle Support (KB Id: 1290700.1).
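
    To give a flavor of what such a solution does under the covers, here is a minimal sketch using the standard Database Vault administration API. The realm name and the CISADM/CISUSER accounts are illustrative placeholders, not necessarily the objects the product's shipped scripts define:

        BEGIN
          -- Create a realm around the product schema (names are illustrative).
          DBMS_MACADM.CREATE_REALM(
            realm_name    => 'Product Data Realm',
            description   => 'Restricts DML on product data to product users',
            enabled       => DBMS_MACUTL.G_YES,
            audit_options => DBMS_MACUTL.G_REALM_AUDIT_FAIL);

          -- Protect all objects in the product schema with the realm.
          DBMS_MACADM.ADD_OBJECT_TO_REALM(
            realm_name   => 'Product Data Realm',
            object_owner => 'CISADM',
            object_name  => '%',
            object_type  => '%');

          -- Authorize the product user as a realm participant.
          DBMS_MACADM.ADD_AUTH_TO_REALM(
            realm_name   => 'Product Data Realm',
            grantee      => 'CISUSER',
            auth_options => DBMS_MACUTL.G_REALM_AUTH_PARTICIPANT);
        END;
        /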

    Read the article

  • RIM's current BB7 developer toolset is a joke

    - by mbrit
    tl;dr - RIM's current developer toolset is not fit for purpose.

    Background to this is that I'm currently working on a PhoneGap/Cordova project for a client that has to run on BlackBerry. The tooling is so ridiculous to use that even though I had a gentle dig at them in a Guardian piece, it's worth having a more full-on attack.

    At the moment, RIM's pitch is that apps are built for the current BBOS7 devices using WebWorks. This is an HTML-based toolset. Essentially a browser is spun up in a native app container and your app is powered by JavaScript. Specific JavaScript libraries exist that thunk down to native capabilities on the device. I happen to use PhoneGap/Cordova in combination with this.

    The tooling is non-existent. I'm using TextMate, Ant, and Terminal to develop the app. There's no "console.log" output, and no debugging. The only way to instrument the app is to put "alert" calls in your code. Apart from the fact that that's *not* fine in 2012, how about this: every time you deploy a new app to the device, the device has to reboot. This process takes six minutes on a relatively modern BlackBerry device.

    How about this as well - in order to get a file into the package it has to be signed. My small app over here has 100 different files (75 or so generated). Signing doesn't happen locally; it happens on RIM's servers in Waterloo. Thus whenever you deploy the app, the utility has to call RIM's servers 100 times. More to the point, sometimes during the day these servers have "micro-downtime" moments where they're unreachable for five or ten minutes, normally two or three times a day. Oh yes, you'll also get an email sent to you per signing, on success or failure. 100 inbound emails, per deployment. (I started this post at the beginning of one of these cycles, by the way. That's how long it takes to build and deploy *once*. By the way, the change I made didn't work.)

    To clarify:
    * Change the script.
    * Build it using Ant.
    * Ant will spin up a Java app that talks to RIM's servers to sign it.
    * Receive 100 emails, assuming the server is up.
    * App deployed - takes about 30 seconds.
    * BlackBerry device restarts - takes about six minutes.
    * Find and open the app. Go through security prompts.
    * Test the app, with no "console.log" output and no debugger.

    "Why not use the simulator?" I hear you ask. Well, apart from the fact that the simulator refused to reach any network service over HTTPS that I happen to own? (Some people suggest changing DNS settings for this known issue.) Admittedly, the simulator does show you console.log, but you still have the "six minute" restart issue on the simulator.

    Developers will understand this problem. Breaking concentration for six-plus minutes every time you want to deploy an app turns developing into a nightmare. Combining that with no worthy debugging tools turns the toolset into a joke.

    Read the article

  • June 2012 Oracle Technology Network Member Offers

    - by programmarketingOTN
    Happy Friday! Here are some NEW offers just for Oracle Technology Network (OTN) Members!

    Oracle Store - Save 10% on Your Next Purchase from the Oracle Store.
    Oracle Press - Now get 40% off select Ebook titles as well!
    Packt Publishing Offers - Get 25% off the print books and 35% off the eBooks listed below:
        Oracle SOA Infrastructure Implementation Certification Handbook (1Z0-451)
        Oracle BPM Suite 11g Developer's Cookbook
    Apress Offers - Get 40% off the Ebook of Beginning Database Design.
    Murach Offers - Get 30% off Murach's Oracle SQL and PL/SQL.

    Get discount codes and links to buy for these offers at the OTN Members Discount page.

    Read the article

  • Oracle Retail Consulting and the Implementation Process, with Maria Porretta

    - by user801960
    Maria Porretta, Engagement Director, discusses Oracle Retail Consulting, its involvement in the implementation process, and how it supports customers in maximizing the ROI of their Oracle Retail solutions. Maria explains the wide range of factors customers need to consider when preparing to implement Oracle Retail, from the solutions being utilized to the end user's current IT infrastructure and available resources. Oracle Retail Consulting ensures a smooth and efficient process by working with customers from design right through to final implementation, and continues working with them afterwards to ensure they get value from, and further extend, their investment in Oracle Retail solutions. Further information is available on our website regarding Oracle Retail Consulting.

    Read the article

  • Change a Foreign Action's Display Text

    - by Geertjan
    I want the display text on an Action on a Node to show something about the underlying object. But the Action is registered somewhere in the layer (i.e., in the registry), so I have no control over it. How do I change the display text in this scenario? Here's how. Below I look in the "Actions/Events" folder, iterate through all the Actions registered there, look for an Action with display text starting with "Edit", change it to display something from the underlying object, wrap a new Action around that Action, build up a new list of Actions, and return those (together with all the other Actions in that folder) from "getActions" on my Node:

        @Override
        public Action[] getActions(boolean context) {
            List<Action> newEventActions = new ArrayList<Action>();
            List<? extends Action> eventActions = Utilities.actionsForPath("Actions/Events");
            for (final Action action : eventActions) {
                String value = action.getValue(Action.NAME).toString();
                if (value.startsWith("Edit")) {
                    // Wrap the registered Action, replacing its display text
                    // with something from the underlying object:
                    Action editAction = new AbstractAction("Edit " + getLookup().lookup(Event.class).getPlace()) {
                        @Override
                        public void actionPerformed(ActionEvent e) {
                            action.actionPerformed(e);
                        }
                    };
                    newEventActions.add(editAction);
                } else {
                    newEventActions.add(action);
                }
            }
            return newEventActions.toArray(new Action[newEventActions.size()]);
        }

    If someone knows of a better way, please let me know.

    Read the article

  • Evaluating and Investigating Drug Safety Signals with Public Databases Webinar

    - by Roxana Babiciu
    In this one-hour webinar, BioPharm Systems' Dr. Rodney Lemery, vice president of safety and pharmacovigilance, will review a number of public databases available to use during the evaluation and investigation of identified safety signals. The discussion will focus on the use of free and paid longitudinal healthcare databases available online. After attending this presentation, you will better understand how these data sources can be used in your daily PV work. Read more here

    Read the article

  • Oracle OpenWorld 2012 - What's New

    - by Cinzia Mascanzoni
    Oracle OpenWorld 2012 is now over. Here is a summary of the major announcements on hardware and technology:

    - Oracle Unveils Expanded Oracle Cloud Services Portfolio
    - Oracle Unveils New Partner Cloud Programs
    - Oracle Announces Latest Release of Oracle Exalogic Elastic Cloud
    - Oracle Announces Oracle Exadata X3 Database In-Memory Machine
    - Oracle Outlines Opportunity to Transform Industries from Device to Data Center with Embedded Java
    - Oracle Announces Oracle Solaris 11.1
    - Latest Release of Oracle Exalytics In-Memory Machine Software Enables Customers to View and Analyze Data at the Speed of Business
    - New Release of Oracle Business Intelligence Enables Users to Quickly Access and Analyze Key Business Information, Anytime, Anywhere

    Read the article

  • Membership Provider Part 1

    - by Jason Ulloa
    ASP.NET has been one of the fastest-growing technologies created by Microsoft, thanks to how easy it makes building web sites for developers. One of its most important pieces is the Membership Provider, which supports the creation, management and maintenance of a complete user control and authentication system. To kick off the series of posts I will write about what Membership is and what its main capabilities are, let's start with some definitions. As mentioned above, with the membership provider we can build a complete user management system. Among its main features we find:

    * User creation
    * Storage of user information in a database
    * Authentication, lockouts and tracking

    Another advantage worth highlighting is that some ASP.NET controls, such as the "Login" control and the login-status controls, already implement the membership provider "natively", which means that simply dragging them onto the designer is enough to have them working. The membership provider is powerful on its own, but its functionality and security are enhanced when it is integrated with other ASP.NET providers such as the Role Provider and the Profile Provider (we will discuss those in other posts). In the following figure we can see how some of the providers created by Microsoft are organized. Before starting with a membership implementation you should know the basics, such as the namespace it belongs to, which is System.Web.Security, found inside the System.Web assembly. Keep in mind that, to use any member of the class, we must add the corresponding reference. By default, the membership provider is designed to work directly with SQL Server, hence its full name: SQL Membership Provider. However, thanks to its great flexibility, we can extend it to any database or modify it to adapt it to our needs. In the following posts we will discuss how to create a custom provider using Entity Framework, separating the access and data layers, and which main functions we can apply. In short, and without going too deep into the subject, we have described the purpose of the Membership Provider; anyone who wants to learn more can do so at: http://msdn.microsoft.com/es-es/library/system.web.security.membership%28v=vs.100%29.aspx
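
    As a first taste of the API, here is a minimal sketch (assuming a membership provider is already configured in web.config; the user name, password and email are invented placeholders) showing how a user is created and authenticated through System.Web.Security:

        using System;
        using System.Web.Security;

        public static class MembershipSketch
        {
            public static void Run()
            {
                // Create a user through the configured membership provider.
                MembershipCreateStatus status;
                MembershipUser user = Membership.CreateUser(
                    "jsmith", "P@ssw0rd!", "jsmith@example.com",
                    "Favorite color?", "blue", true, out status);

                if (status == MembershipCreateStatus.Success)
                {
                    Console.WriteLine("Created: " + user.UserName);
                    // Validate the credentials, as the Login control does internally.
                    bool ok = Membership.ValidateUser("jsmith", "P@ssw0rd!");
                    Console.WriteLine("Authenticated: " + ok);
                }
                else
                {
                    Console.WriteLine("Create failed: " + status);
                }
            }
        }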

    Read the article

  • Why the R# Method Group Refactoring is Evil

    - by Liam McLennan
    The refactoring I'm talking about is recommended by ReSharper when it sees a lambda that consists entirely of a method call that is passed the object that is the parameter to the lambda. Here is an example:

        public class IWishIWasAScriptingLanguage
        {
            public void SoIWouldntNeedAllThisJunk()
            {
                (new List<int> {1, 2, 3, 4}).Select(n => IsEven(n));
            }

            private bool IsEven(int number)
            {
                return number % 2 == 0;
            }
        }

    When ReSharper gets to n => IsEven(n) it underlines the lambda with a green squiggly telling me that the code can be replaced with a method group. If I apply the refactoring the code becomes:

        public class IWishIWasAScriptingLanguage
        {
            public void SoIWouldntNeedAllThisJunk()
            {
                (new List<int> {1, 2, 3, 4}).Select(IsEven);
            }

            private bool IsEven(int number)
            {
                return number % 2 == 0;
            }
        }

    The method group syntax implies that the lambda's parameter is the same as the IsEven method's parameter. So a readable, explicit syntax has been replaced with an obfuscated, implicit syntax. That is why the method group refactoring is evil.

    Read the article

  • Applications: The Mathematics of Movement, Part 2

    - by TechTwaddle
    In part 1 of this series we saw how we can make the marble move towards the click point with a fixed speed. In this post we'll see, first, how to get rid of Atan2(), sine and cosine in our calculations, and, second, how to reduce the speed of the marble as it approaches the destination, so it looks like the marble is easing into its final position. As I mentioned in one of the previous posts, this is achieved by making the speed of the marble a function of the distance between the marble and the destination point.

    Getting rid of Atan2(), sine and cosine

    Ok, to be fair we are not exactly getting rid of these trigonometric functions, rather, replacing one form with another. So instead of writing sin(θ), we write y/length. You see the point. So instead of using the trig functions as below,

        double x = destX - marble1.x;
        double y = destY - marble1.y;
        //distance between destination and current position, before updating marble position
        distanceSqrd = x * x + y * y;
        double angle = Math.Atan2(y, x);
        //Cos and Sin give us the unit vector, 6 is the value we use to magnify the unit vector along the same direction
        incrX = speed * Math.Cos(angle);
        incrY = speed * Math.Sin(angle);
        marble1.x += incrX;
        marble1.y += incrY;

    we use the following,

        double x = destX - marble1.x;
        double y = destY - marble1.y;
        //distance between destination and marble (before updating marble position)
        lengthSqrd = x * x + y * y;
        length = Math.Sqrt(lengthSqrd);
        //unit vector along the same direction as vector(x, y)
        unitX = x / length;
        unitY = y / length;
        //update marble position
        incrX = speed * unitX;
        incrY = speed * unitY;
        marble1.x += incrX;
        marble1.y += incrY;

    so we replaced cos(θ) with x/length and sin(θ) with y/length. The result is the same.

    Adding oomph to the way it moves

    In the last post we had the speed of the marble fixed at 6,

        double speed = 6;

    To make the marble decelerate as it moves, we have to keep updating the speed of the marble in every frame, such that the speed is calculated as a function of the length. So we may have,

        speed = length / 12;

    'length' keeps decreasing as the marble moves, and so does speed.
    The Form1_MouseUp() function remains the same as before; here is the UpdatePosition() method:

        private void UpdatePosition()
        {
            double incrX = 0, incrY = 0;
            double lengthSqrd = 0, length = 0, lengthSqrdNew = 0;
            double unitX = 0, unitY = 0;
            double speed = 0;

            double x = destX - marble1.x;
            double y = destY - marble1.y;

            //distance between destination and marble (before updating marble position)
            lengthSqrd = x * x + y * y;
            length = Math.Sqrt(lengthSqrd);

            //unit vector along the same direction as vector(x, y)
            unitX = x / length;
            unitY = y / length;

            //speed as a function of length
            speed = length / 12;

            //update marble position
            incrX = speed * unitX;
            incrY = speed * unitY;
            marble1.x += incrX;
            marble1.y += incrY;

            //check for bounds
            if ((int)marble1.x < MinX + marbleWidth / 2)
            {
                marble1.x = MinX + marbleWidth / 2;
            }
            else if ((int)marble1.x > (MaxX - marbleWidth / 2))
            {
                marble1.x = MaxX - marbleWidth / 2;
            }

            if ((int)marble1.y < MinY + marbleHeight / 2)
            {
                marble1.y = MinY + marbleHeight / 2;
            }
            else if ((int)marble1.y > (MaxY - marbleHeight / 2))
            {
                marble1.y = MaxY - marbleHeight / 2;
            }

            //distance between destination and marble (after updating marble position)
            x = destX - (marble1.x);
            y = destY - (marble1.y);
            lengthSqrdNew = x * x + y * y;

            /*
             * End Condition:
             * 1. If there is not much difference between lengthSqrd and lengthSqrdNew
             * 2. If the marble has moved more than or equal to a distance of totLenToTravel (see Form1_MouseUp)
             */
            x = startPosX - marble1.x;
            y = startPosY - marble1.y;
            double totLenTraveledSqrd = x * x + y * y;
            if ((int)totLenTraveledSqrd >= (int)totLenToTravelSqrd)
            {
                System.Console.WriteLine("Stopping because Total Len has been traveled");
                timer1.Enabled = false;
            }
            else if (Math.Abs((int)lengthSqrd - (int)lengthSqrdNew) < 4)
            {
                System.Console.WriteLine("Stopping because no change in Old and New");
                timer1.Enabled = false;
            }
        }

    A point to note here is that, in this implementation, the marble never stops because it travelled a distance of totLenToTravelSqrd (the first if condition). This happens because speed is a function of the length. During the final few frames length becomes very small, and so does speed; the amount by which the marble shifts is therefore quite small, and the second if condition always hits true first. I'll end this series with a third post. In part 3 we will cover two things. One, when the user clicks, the marble keeps moving in that direction, rebounding off the screen edges, and keeps moving forever. Two, when the user clicks on the screen, the marble moves towards it, with its speed reducing every frame. It doesn't come to a halt when the destination point is reached; instead, it continues to move, rebounds off the screen edges and slowly comes to a halt. The amount of time that the marble keeps moving depends on how far the user clicks from the marble. I had mentioned this second situation here. Finally, here's a video of this program running.

    Read the article

  • Oracle Solaris Crash Analysis Tool 5.3 now available

    - by user12609056
    Oracle Solaris Crash Analysis Tool 5.3

    The Oracle Solaris Crash Analysis Tool Team is happy to announce the availability of release 5.3. This release addresses bugs discovered since the release of 5.2 and adds enhancements to support Oracle Solaris 11, plus updates for Oracle Solaris versions 7 through 10. The packages are available on My Oracle Support - simply search for Patch 13365310 to find the downloadable packages.

    Release Notes

    General

    blast support: The blast GUI has been removed and is no longer supported.

    Oracle Solaris 2.6 support: As of Oracle Solaris Crash Analysis Tool 5.3, support for Oracle Solaris 2.6 has been dropped. If you have systems running Solaris 2.6, you will need to use Oracle Solaris Crash Analysis Tool 5.2 or earlier to read their crash dumps.

    New Commands

    Sanity command: Though one can re-run the sanity checks that are run at tool start-up using the coreinfo command, many users were unaware that they were being run. Though these checks can still be run using that command, a new command, namely sanity, can now be used to re-run the checks at any time.

    Interface Changes

    scat_explore -r and -t options: The -r option has been added to scat_explore so that a base directory can be specified, and the -t option was added to enable color tagging of the output. The scat_explore sub-command now accepts new options. Usage is:

        scat --scat_explore [-atv] [-r base_dir] [-d dest] [unix.N] [vmcore.N]

    Where:

    -v          Verbose mode: the command will print messages highlighting what it's doing.
    -a          Auto mode: the command does not prompt for input from the user as it runs.
    -d dest     Instructs scat_explore to save its output in the directory dest instead of the present working directory.
    -r base_dir Instructs scat_explore to save its output under the directory base_dir instead of the present working directory. If a destination is not specified using the -d option, scat_explore names its output file "scat_explore_system_name_hostid_lbolt_value_corefile_name".
    -t          Enable color tags. When enabled, scat_explore tags important text with colors that match the level of importance. These colors correspond to the colors normally printed when running Oracle Solaris Crash Analysis Tool in interactive mode:

        FATAL   - An extremely important message which should be investigated.
        WARNING - A warning that may or may not have anything to do with the crash.
        ERROR   - An error, usually printed with a suggested command.
        ALERT   - Used to indicate something the tool discovered.
        INFO    - A purely informational message.
        INFO2   - A follow-up to an INFO-tagged message.
        REDZONE - Usually used when printing memory info, showing something is in the kernel's REDZONE.

    N           The number of the crash dump. Specifying unix.N vmcore.N is optional and not required.

    Example:

        $ scat --scat_explore -a -v -r /tmp vmcore.0
        #Output directory: /tmp/scat_explore_oomph_833a2959_0x28800_vmcore.0
        #Tar filename: scat_explore_oomph_833a2959_0x28800_vmcore.0.tar
        #Extracting crash data...
        #Gathering standard crash data collections...
        #Panic string indicates a possible hang...
        #Gathering Hang Related data...
        #Creating tar file...
        #Compressing tar file...
        #Successful extraction
        SCAT_EXPLORE_DATA_DIR=/tmp/scat_explore_oomph_833a2959_0x28800_vmcore.0

    Sending scat_explore results: The .tar.gz file that results from a scat_explore run may be sent using Oracle Secure File Transfer. The Oracle Secure File Transfer User Guide describes how to use it to send a file. The send_scat_explore script now has a -t option for specifying a "to" address for sending the results. This option is mandatory.
    Known Issues

    There are a couple of known issues that we are addressing in release 5.4, which you should expect to see soon:
    - Display of timestamps in threads and clock information is incorrect in some cases.
    - There are alignment issues with some of the tables produced by the tool.

    Read the article

  • Another JavaOne Latin America around the corner

    - by alexismp
    For the second year in a row, JavaOne is traveling to Latin America: São Paulo, on December 6-8, 2011, at the Transamerica Expo Center. As with any such event, participants will be able to attend the Strategy, Technical and Community keynotes, a large number of sessions (including Hands-On Labs) featuring a good number of local speakers chosen through a dedicated Call for Papers, and wander around the Exhibition Hall. Both Java EE 6 and GlassFish will be well represented in keynotes, sessions and hands-on labs. You can follow updates to this upcoming conference on Twitter and, of course, Register! New this year is the "Meet your Java gurus" geek bike ride that Fabiane and friends are organizing in São Paulo on the Sunday prior to the conference. Sounds like fun!

    Read the article

  • Less Can Be More In E-Commerce

    - by Michael Hylton
    Today's consumers are inundated with product choices and vendors. Visit your favorite electronics retailer and see the vast assortment of flat-panel televisions. Or the variety of detergents at the supermarket. All of this can be daunting for the average consumer who is looking for the products and services that interest them. In a study titled "Choice is Demotivating: Can One Desire Too Much of a Good Thing", the author, Sheena Iyengar, found that participants actually reported greater subsequent satisfaction with their selections and wrote better essays when their original set of options had been limited. The same can be said for e-commerce and your website. Being able to quickly convert shoppers into buyers with effective merchandising is what makes leading businesses successful. You want to engage each individual visitor with the most relevant content to drive higher conversions and order values while decreasing abandonment, but predicting what will resonate with each customer is difficult. In a world of choices, online merchandising tools can help personalize, streamline, and refine what your customers view when they browse your online catalog. The key to being effective is to align your products and content as closely as possible with the customer's needs. The goal on the home page is to promote your brand and push visitors farther into the site. The home page is often the starting point for repeat customers as well as for new visitors hoping to address their current product needs. As the customer selects different filters and narrows the choices, valuable information is being provided to the retailer about the customer's current need - regardless of previous search behavior or what other customers with a similar demographic profile have purchased. Together with search pages, category browse pages are among the primary options available to customers as a means of finding products on your site. Once a customer reaches the product detail page, it is clear what that person desires, regardless of the segment the customer falls into. However, don't disregard campaign-based promotions completely. A campaign targeted to all customers but featuring rule-driven promotions tied to the product can be effective. Click here to learn more about merchandising techniques, so that what your customer sees is half full and not half empty.

    Read the article

  • Conflict Minerals - Design to Compliance

    - by C. Chadwick
    Dr. Christina Schröder - Principal PLM Consultant, Enterprise PLM Solutions EMEA

    What does the Conflict Minerals regulation mean?

    Conflict Minerals has recently become a new buzzword in the manufacturing industry, particularly in electronics and medical devices. Known as "Dodd-Frank Section 1502", this regulation requires SEC-listed companies to declare the origin of certain minerals by 2014. The intention is to reduce the use of tantalum, tungsten, tin, and gold which originate from mines in the Democratic Republic of Congo (DRC) and adjoining countries that are controlled by violent armed militia abusing human rights. Manufacturers now request information from their suppliers to see if their raw materials are sourced from this region and which smelters are used to extract the metals from the minerals. A standardized questionnaire has been developed for this purpose (download and further information). Soon, even companies which are not directly affected by the Conflict Minerals legislation will have to collect and maintain this information, since their customers will request the data from their suppliers. Furthermore, it is expected that public opinion and consumer interests will force manufacturers to avoid the use of metals with questionable origin.

    Impact on existing products

    Several departments are involved in the process of collecting data and providing Conflict Minerals compliance information. For already marketed products, purchasing typically requests Conflict Minerals declarations from the suppliers. In order to address requests from customers, technical operations or product management are usually responsible for keeping track of all parts, raw materials and their suppliers, so that the required information can be provided. For complex BOMs, it is very tedious to maintain complete, accurate, up-to-date, and traceable data. Any product change or new supplier can, in addition to all other implications, have an effect on the Conflict Minerals compliance status.

    Influence on product development

    It makes sense to consider compliance early in the planning and design of new products. Companies should evaluate which metals are needed or contained in supplier parts, and whether these could originate from problematic sources. The answer influences the cost and risk analysis during development. If it is known early on that a part could be non-compliant with respect to Conflict Minerals, alternatives can be evaluated, and costly changes at a later stage can thus be avoided.

    Integrated compliance management

    Ideally, compliance data for Conflict Minerals, but also for other regulations like REACH and RoHS, should be managed in an integrated supply chain system. The compliance status is directly visible across the entire BOM, at any part level and for the finished product. If data is missing, a request to the supplier can be triggered right away, without having to switch to another system. The entire process - from identification of the relevant parts, requesting information, handling responses, and data entry, to compliance calculation - is fully covered end-to-end while being transparent for all stakeholders.

    Agile PLM Product Governance and Compliance (PG&C)

    The PG&C module extends Agile PLM with exactly this integrated functionality. As with the entire Agile product suite, PG&C can be configured according to customer requirements: data fields, attributes, workflows, routing, notifications, permissions, etc. can be quickly and easily tailored to a customer's needs.
    Optionally, external databases can be interfaced to query commercially available sources of Conflict Minerals declarations, which obviates the need for a separate supplier request in many cases. Suppliers can access the system directly for data entry through a special portal. The responses to the standard EICC-GeSI questionnaire can be imported by the supplier or internally. Manual data entry is also supported. A set of compliance-specific dashboards and reports complements the functionality.

    Conclusion

    The increasing number of product compliance regulations, of which Conflict Minerals is just one example, requires companies to implement efficient data and process management in this area. Consumer awareness in this matter is increasing as well, so an integrated system from development to production also provides a competitive advantage. Follow this link to learn more about Agile's PG&C solution.

    Read the article

  • .NET Code Evolution

    - by Alois Kraus
    Originally posted on: http://geekswithblogs.net/akraus1/archive/2013/07/24/153504.aspx

    At my day job I look at a lot of code written by other people. Most of the code is quite good and some is even a masterpiece. And there is also code which makes you think WTF… oh, it was written by me. Hm, not so bad after all. There are many excuses reasons for bad code. Most often it is time pressure, followed by not enough ambition (who cares) or insufficient training. Normally I care about code quality quite a lot, which makes me a (perceived) slow worker who writes many tests and refines the code quite a lot because of design deficiencies. Most of the deficiencies I find by putting my design under stress while checking for invariants. It also helps a lot to step into the code with a debugger (sometimes also Windbg). I do this much more often when my tests are red. That way I get a much better understanding of what my code really does, and not what I think it should be doing.

    This time I want to show you how code can evolve over the years with different .NET Framework versions. Once there was a time when .NET 1.1 was new and many C++ programmers switched over to get rid of uninitialized pointers and memory leaks. There were also nice new data structures available, such as the Hashtable, which is a fast lookup table with O(1) time complexity. All was good and much code was written since then. In 2005 a new version of the .NET Framework arrived, which brought many new things like generics and new data structures. The "old-fashioned" Hashtable was coming to an end, and everyone used the new Dictionary<xx,xx> type instead, which was type safe and faster because the object-to-type conversion (aka boxing) was no longer necessary.

    I think 95% of all Hashtables and dictionaries use string as the key. Often it is convenient to ignore casing, to make it easy to look up values which the user did enter. An often-followed route is to convert the string to upper case before putting it into the Hashtable:

        Hashtable Table = new Hashtable();

        void Add(string key, string value)
        {
            Table.Add(key.ToUpper(), value);
        }

    This is valid and working code, but it has problems. First, we can pass the Hashtable a custom IEqualityComparer to do the string matching case-insensitively. Second, we can switch over to the (now also old) Dictionary type to become a little faster, and we can keep the original keys (not upper-cased) in the dictionary:

        Dictionary<string, string> DictTable =
            new Dictionary<string, string>(StringComparer.OrdinalIgnoreCase);

        void AddDict(string key, string value)
        {
            DictTable.Add(key, value);
        }

    Many people do not use the other ctors of Dictionary because they shy away from the overhead of writing their own comparer. They do not know that .NET already has predefined comparers for strings at hand, which you can use directly.

    Today, in the many-core era, we use threads all over the place. Sometimes things break in subtle ways, but most of the time it is sufficient to place a lock around the offender. Threading has become so mainstream that it may sound weird that in the year 2000 some guy got a huge incentive for the idea to reduce the time to process calibration data from 12 hours to 6 hours by using two threads on a dual-core machine. Threading makes it easy to become faster at the expense of correctness. Correct and scalable multithreading can be arbitrarily hard to achieve, depending on the problem you are trying to solve.
    Let's suppose we want to process millions of items with two threads and count the items processed by all threads. A typical beginner's code might look like this:

        int Counter;

        void IJustLearnedToUseThreads()
        {
            var t1 = new Thread(ThreadWorkMethod);
            t1.Start();
            var t2 = new Thread(ThreadWorkMethod);
            t2.Start();
            t1.Join();
            t2.Join();

            if (Counter != 2 * Increments)
                throw new Exception("Hmm " + Counter + " != " + 2 * Increments);
        }

        const int Increments = 10 * 1000 * 1000;

        void ThreadWorkMethod()
        {
            for (int i = 0; i < Increments; i++)
            {
                Counter++;
            }
        }

    It throws an exception with a message like "Hmm 10.222.287 != 20.000.000" and never succeeds. The code fails because the assumption that Counter++ is an atomic operation is wrong. The ++ operator is just a shortcut for

        Counter = Counter + 1

    This involves reading the counter from a memory location into the CPU, incrementing the value on the CPU, and writing the new value back to the memory location. When we look at the generated assembly code we will see only

        inc dword ptr [ecx+10h]

    which is only one instruction. Yes, it is one instruction, but it is not atomic. All modern CPUs have several layers of caches (L1, L2, L3) which try to hide how slow actual main memory accesses are. Since cache is just another word for redundant copy, it can happen that one CPU reads a value from main memory into the cache, modifies it, and writes it back to main memory. The problem is that at least the L1 cache is not shared between CPUs, so one CPU can make changes to a value which has changed in the meantime in main memory. From the exception you can see we did increment the value 20 million times, but half of the changes were lost because we overwrote the already-changed value from the other thread. This is a very common case, and people learn to protect their data with proper locking:

        void Intermediate()
        {
            var time = Stopwatch.StartNew();
            Action acc = ThreadWorkMethod_Intermediate;
            var ar1 = acc.BeginInvoke(null, null);
            var ar2 = acc.BeginInvoke(null, null);
            ar1.AsyncWaitHandle.WaitOne();
            ar2.AsyncWaitHandle.WaitOne();

            if (Counter != 2 * Increments)
                throw new Exception(String.Format("Hmm {0:N0} != {1:N0}", Counter, 2 * Increments));

            Console.WriteLine("Intermediate did take: {0:F1}s", time.Elapsed.TotalSeconds);
        }

        void ThreadWorkMethod_Intermediate()
        {
            for (int i = 0; i < Increments; i++)
            {
                lock (this)
                {
                    Counter++;
                }
            }
        }

    This is better and uses the .NET Threadpool to get rid of manual thread management. It gives the expected result, but it can result in deadlocks because you lock on this. That is in general a bad idea, since it can lead to deadlocks when other threads use your class instance as a lock object. It is therefore recommended to create a private object as the lock object, to ensure that nobody else can lock your lock object.

    When you read more about threading you will read about lock-free algorithms. They are nice and can improve performance quite a lot, but you need to pay close attention to the CLR memory model. It makes quite weak guarantees in general, but it can still work because your CPU architecture gives you more invariants than the CLR memory model. For a simple counter there is an easy lock-free alternative present with the Interlocked class in .NET. As a general rule you should not try to write lock-free algos, since most likely you will fail to get it right on all CPU architectures.
        void Experienced()
        {
            var time = Stopwatch.StartNew();
            Task t1 = Task.Factory.StartNew(ThreadWorkMethod_Experienced);
            Task t2 = Task.Factory.StartNew(ThreadWorkMethod_Experienced);
            t1.Wait();
            t2.Wait();

            if (Counter != 2 * Increments)
                throw new Exception(String.Format("Hmm {0:N0} != {1:N0}", Counter, 2 * Increments));

            Console.WriteLine("Experienced did take: {0:F1}s", time.Elapsed.TotalSeconds);
        }

        void ThreadWorkMethod_Experienced()
        {
            for (int i = 0; i < Increments; i++)
            {
                Interlocked.Increment(ref Counter);
            }
        }

    Since time moves forward, we no longer use threads explicitly but the much nicer Task abstraction, which was introduced with .NET 4 in 2010. It is educational to look at the generated assembly code. The Interlocked.Increment method must be called, which does wondrous things, right? Let's see:

        lock inc dword ptr [eax]

    The first thing to note is that there is no method call at all. Why? Because the JIT compiler knows very well about CPU intrinsic functions. Atomic operations which lock the memory bus, to prevent other processors from reading stale values, are such things. Second: this is the same increment call, prefixed with a lock instruction. The only reason for the existence of the Interlocked class is that the JIT compiler can compile it to the matching CPU intrinsic functions, which can not only increment by one but can also do an add, an exchange, and a combined compare-and-exchange operation. But be warned that the correct usage of its methods can be tricky. If you try to be clever and look at the generated IL code to reason about its efficiency, you will fail. Only the generated machine code counts.

    Is this the best code we can write? Perhaps. It is nice and clean. But can we make it any faster? Let's see how well we are doing currently:

        Level                      Time in s
        IJustLearnedToUseThreads   Flawed code
        Intermediate               1.5 (lock)
        Experienced                0.3 (Interlocked.Increment)
        Master                     0.1 (1.0 for int[2])

    That lock-free thing is really a nice thing. But if you read more about CPU caches, cache coherency and false sharing, you can do even better:

        // Cache line size is 64 bytes on my machine with an 8-way associative cache;
        // try for yourself, e.g. 64 on more modern CPUs
        int[] Counters = new int[12];

        void Master()
        {
            var time = Stopwatch.StartNew();
            Task t1 = Task.Factory.StartNew(ThreadWorkMethod_Master, 0);
            Task t2 = Task.Factory.StartNew(ThreadWorkMethod_Master, Counters.Length - 1);
            t1.Wait();
            t2.Wait();

            Counter = Counters[0] + Counters[Counters.Length - 1];

            if (Counter != 2 * Increments)
                throw new Exception(String.Format("Hmm {0:N0} != {1:N0}", Counter, 2 * Increments));

            Console.WriteLine("Master did take: {0:F1}s", time.Elapsed.TotalSeconds);
        }

        void ThreadWorkMethod_Master(object number)
        {
            int index = (int)number;
            for (int i = 0; i < Increments; i++)
            {
                Counters[index]++;
            }
        }

    The key insight here is to give each core its own value. But if you simply use an integer array of two items, one for each core, and add the items at the end, you will be much slower than the lock-free version (factor 3). Each CPU core has its own cache line size, which is something in the range of 16-256 bytes. When you access a value from one location, the CPU does not fetch only one value from main memory but a complete cache line (e.g. 16 bytes). This means that you do not pay for the next 15 bytes when you access them. This can lead to dramatic performance improvements, and to non-obvious code which is faster although it has many more memory reads than another algorithm.

    So what have we done here?
    We started with correct code, but it was lacking knowledge of how to use the .NET Base Class Libraries optimally. Then we tried to get fancy and used threads for the first time, and failed. Our next try was better, but it still had non-obvious issues (the lock object exposed to the outside). Knowledge increased further, and we found a lock-free version of our counter, which is a nice and clean way and a perfectly valid solution. The last example is only here to show you how you can get the most out of threading by paying close attention to your data structures and CPU cache coherency. Although we are working in a virtual execution environment, in a high-level language with automatic memory management, it pays off to know the details down to the assembly level. Only if you continue to learn and to dig deeper can you come up with solutions no one else was even considering. I have studied particle physics, which helps with the digging-deeper part. Have you ever tried to solve Quantum Chromodynamics equations? Compared to that, the rest must be easy ;-). Although I am no longer working in the science field, I take pride in discovering non-obvious things. This can be a very hard-to-find bug or a new way to restructure data to make something 10 times faster. Now I need to get some sleep…

    Read the article

  • Use CompiledQuery.Compile to improve LINQ to SQL performance

    - by Michael Freidgeim
    After reading DLinq (Linq to SQL) Performance, and in particular Part 4, I had a few questions. If CompiledQuery.Compile gives so many benefits, why not do it for all LINQ to SQL queries? Are there essential disadvantages to compiling all select queries? Under what conditions does compiling hurt performance, and by how much? Would it be good to have a default at the application config level, or at the DBML level, to specify that all select queries are to be compiled? And the same questions apply to the Entity Framework CompiledQuery class. However, in the comments I found an answer from the author, ricom (6 Jul 2007 3:08 AM): "Compiling the query makes it durable. There is no need for this, nor is there any desire, unless you intend to run that same query many times. SQL provides regular select statements, prepared select statements, and stored procedures for a reason. Linq now has analogs." Also from 10 Tips to Improve your LINQ to SQL Application Performance: "If you are using CompiledQuery make sure that you are using it more than once as it is more costly than normal querying for the first time. The resulting function coming as a CompiledQuery is an object, having the SQL statement and the delegate to apply it. And your delegate has the ability to replace the variables (or parameters) in the resulting query." However, I feel that many developers are not informed enough about the benefits of Compile. I think that tools like FxCop and ReSharper should check the queries and suggest when compiling is recommended.

    Related articles for LINQ to SQL:
    - MSDN: How to: Store and Reuse Queries (LINQ to SQL)
    - 10 Tips to Improve your LINQ to SQL Application Performance

    Related articles for Entity Framework:
    - MSDN: CompiledQuery Class
    - Exploring the Performance of the ADO.NET Entity Framework - Part 1
    - Exploring the Performance of the ADO.NET Entity Framework - Part 2
    - ADO.NET Entity Framework 4.0: Making it fast through Compiled Query
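
    For readers who have not tried it, here is a minimal sketch of the pattern in question. CompiledQuery.Compile is the real System.Data.Linq API; the Northwind-style context and entity are invented for illustration:

        using System;
        using System.Data.Linq;
        using System.Data.Linq.Mapping;
        using System.Linq;

        [Table(Name = "Customers")]
        public class Customer
        {
            [Column(IsPrimaryKey = true)] public int Id;
            [Column] public string Name;
            [Column] public string City;
        }

        public class NorthwindContext : DataContext
        {
            public NorthwindContext(string connection) : base(connection) { }
            public Table<Customer> Customers { get { return GetTable<Customer>(); } }
        }

        public static class CustomerQueries
        {
            // Compiled once into a reusable delegate held in a static field;
            // the compilation cost is paid on the first call and amortized
            // over every later call.
            private static readonly Func<NorthwindContext, string, IQueryable<Customer>> ByCity =
                CompiledQuery.Compile((NorthwindContext db, string city) =>
                    db.Customers.Where(c => c.City == city));

            public static void Run(NorthwindContext db)
            {
                foreach (Customer c in ByCity(db, "London"))
                    Console.WriteLine(c.Name);
            }
        }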

    Read the article

  • Three Fusion Applications Communities are Now Live

    - by cwarticki
    The Fusion Application Support Team (FAST) launched three communities on the My Oracle Support Community. These communities provide another channel for customers to get the information about Fusion Applications that they need. The three Fusion Applications communities are:

    - Technical - FA community: covers the entire Fusion Applications technology stack and technical questions from users.
    - Applications and Business Processes community: covers the functional questions and issues raised by users for all Fusion Applications except HCM.
    - Fusion Applications HCM community: covers the functional questions and issues raised by users for the Fusion HCM product family.

    Good for Our Customers

    Customers participating in these communities can ask questions and get timely responses from Oracle Fusion Applications experts who monitor the communities. The customers can search the Fusion Applications Community contents for information and answers. They also can collaborate with other customers and benefit from the collective experience of the community -- especially from people like you. All customers and partners are invited to join My Oracle Support Community for Fusion Applications. We believe that participating in the Fusion Applications communities can be a win-win option for everyone. We invite you to become an active part of the thriving Fusion Applications communities and experience how this interesting and insightful dialog can benefit you.

    How to Join the Community

    Navigate to http://communities.oracle.com. Click the Profile tab to register yourself and edit your profile.
    - You can subscribe to the Fusion Applications communities by editing your Community Subscriptions.
    - You can get RSS feeds for each of your subscribed communities from the same section.

    Read the article

  • WebLogic Partner Community Newsletter November 2012

    - by JuergenKress
    Dear WebLogic partner community member,

    Too many different products from Oracle and no idea how they fit together? Get a copy of the Oracle catalog, an excellent overview of the Oracle middleware portfolio. If you missed the Oracle OpenWorld WebLogic, Java and ExaLogic highlights, you can now watch our community webcast on-demand. To experience and learn more about WebLogic 12c, make sure you attend one of the upcoming WebLogic 12c bootcamps. We are continuously adding many more locations to our training road-show! If you would like to suggest an additional location, please feel free to write to us @wlscommunity on Twitter.

    The key presentations from Oracle OpenWorld 2012 are published at our WebLogic Community Workspace (WebLogic Community membership required): Exalogic X3-2 launch (.pptx) & ExaLogic references 2012 (ppt) & General Session Building and Managing a Private Oracle Java & Experiences building JavaEE based PaaS Platform Compressed presentation & Oracle Enterprise Manager 12c Cloud Control Demo (Zip) & Coherence Past Present And Future (ppt) & Coherence Web Elastic Data on WebLogic 12c (zip) & Oracle Tuxedo What's New in 12c (.pptx) & Tuxedo Java Services (.pptx).

    Two of the newest products in the middleware family, ADF Mobile & ADF Essentials, are now available. Andrejus published an article on how to implement ADF Essentials on GlassFish. When you design mobile solutions, you might want to make use of the Oracle Fusion Applications user experience design patterns. We continue to promote and create joint partner marketing campaigns to upgrade iAS to WebLogic; please contact me if you are interested! Critical patch updates have also been released for iAS and the whole middleware stack; please make sure that you implement them.

    Jürgen Kress, Oracle WebLogic Partner Adoption EMEA

    To read the newsletter please visit http://tinyurl.com/WebLogicnewsNovember2012 (OPN account required). To become a member of the WebLogic Partner Community please register at http://www.oracle.com/partners/goto/wls-emea (OPN account required). If you need support with your account please contact the Oracle Partner Business Center.

    Blog Twitter LinkedIn Mix Forum Wiki
    Technorati Tags: WebLogic Community newsletter, newsletter, WebLogic, WebLogic Community, Oracle, OPN, Jürgen Kress

    Read the article

  • Oracle Developer Day: The Oracle Database in Practice

    - by A&C Redaktion
    In the new year, Oracle Developer Days will again take place in various cities! In this event, put together specifically by our database colleagues, you will pick up many tips and tricks from the field and be brought up to date on the following topics:

    - The differences between the editions and their secrets
    - A comprehensive set of base features, even without extra-cost options
    - Performance and scalability in the individual editions
    - Cost and resource savings made easy
    - Security in the database
    - Increasing availability with simple means
    - Handling large data volumes
    - Cloud technologies in the Oracle Database

    An outlook on the features of the new database version planned for 2013 rounds off the workshop. Dates, agenda, venues and registration details can be found here. Register for the event today - attendance is free!

    Read the article

  • Using Stored Procedures in SSIS

    - by dataintegration
    The SSIS Data Flow source and destination components are the easiest way to transfer data in SSIS. Some data operations do not fit this model, however; they are procedural tasks modeled as stored procedures. In this article we show how you can call the stored procedures available in the RSSBus ADO.NET Providers from SSIS. We will use the CreateJob and CreateBatch stored procedures from the RSSBus ADO.NET Provider for Salesforce, but the same steps can be used to call a stored procedure in any of our data providers.

    Step 1: Open Visual Studio and create a new Integration Services Project.

    Step 2: Add a new Data Flow Task to the Control Flow window.

    Step 3: Open the Data Flow Task and add a Script Component to the data flow pane. A dialog box will pop up allowing you to select the Script Component type: pick the source type, as we will be outputting columns from our stored procedure.

    Step 4: Double-click the Script Component to open the editor.

    Step 5: In the "Inputs and Outputs" settings, enter all the columns you want to output to the data flow, and ensure the correct data type has been set for each output. You can check the data type by selecting the output and then changing the "DataType" property in the property editor. In our example, we add the column JobID of type String.

    Step 6: Select the "Script" option in the left-hand pane and click the "Edit Script" button. This will open a new Visual Studio window with some boilerplate code in it.

    Step 7: In the CreateOutputRows() function you can add code that executes the stored procedures included with the Salesforce component. In this example we will be using the CreateJob and CreateBatch stored procedures. You can find a list of the available stored procedures, along with their inputs and outputs, in the product help.

    //Configure the connection string with your credentials
    String connectionString = "Offline=False;user=myusername;password=mypassword;access token=mytoken;";

    using (SalesforceConnection conn = new SalesforceConnection(connectionString))
    {
      //Create the command to call the stored procedure CreateJob
      SalesforceCommand cmd = new SalesforceCommand("CreateJob", conn);
      cmd.CommandType = CommandType.StoredProcedure;
      cmd.Parameters.Add(new SalesforceParameter("ObjectName", "Contact"));
      cmd.Parameters.Add(new SalesforceParameter("Action", "insert"));

      //Execute CreateJob; CreateBatch requires the JobID as input, so we store it for later
      SalesforceDataReader rdr = cmd.ExecuteReader();
      String JobID = "";
      while (rdr.Read())
      {
        JobID = (String)rdr["JobID"];
      }

      //Create the command for CreateBatch; for this example we are adding two new rows
      SalesforceCommand batCmd = new SalesforceCommand("CreateBatch", conn);
      batCmd.CommandType = CommandType.StoredProcedure;
      batCmd.Parameters.Add(new SalesforceParameter("JobID", JobID));
      batCmd.Parameters.Add(new SalesforceParameter("Aggregate",
        "<Contact><Row><FirstName>Bill</FirstName><LastName>White</LastName></Row>" +
        "<Row><FirstName>Bob</FirstName><LastName>Black</LastName></Row></Contact>"));

      //Execute CreateBatch
      SalesforceDataReader batRdr = batCmd.ExecuteReader();
    }

    Step 7b: If you specified output columns earlier, you can now add data to them using the UserComponent Output0Buffer. For example, we set an output column called JobID of type String, so now we can set a value for it. We modify the loop that reads the output of CreateJob like so:

    while (rdr.Read())
    {
      Output0Buffer.AddRow();
      JobID = (String)rdr["JobID"];
      Output0Buffer.JobID = JobID;
    }

    Step 8: Note that you will need to modify the connection string to include your credentials. Also ensure that the System.Data.RSSBus.Salesforce assembly is referenced, and add the following using statements to the top of the class:

    using System.Data;
    using System.Data.RSSBus.Salesforce;

    Step 9: Once you are done editing your script, save it and close the window. Click OK in the Script Transformation window to go back to the main pane.

    Step 10: If you had any outputs from the Script Component, you can use them in your data flow. For example, we will use a Flat File Destination: configure it to output the results to a file, and you should see the JobID in the file.

    Step 11: Your project should be ready to run.
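    A note on error handling: the script above assumes both calls succeed. As a minimal sketch (using the same hypothetical credentials as above, and the standard SSIS script-component error interface), you could wrap the calls in a try/catch and report failures through ComponentMetaData.FireError so the Data Flow Task fails visibly instead of writing a partial file downstream:

    try
    {
      using (SalesforceConnection conn = new SalesforceConnection(connectionString))
      {
        SalesforceCommand cmd = new SalesforceCommand("CreateJob", conn);
        cmd.CommandType = CommandType.StoredProcedure;
        cmd.Parameters.Add(new SalesforceParameter("ObjectName", "Contact"));
        cmd.Parameters.Add(new SalesforceParameter("Action", "insert"));

        using (SalesforceDataReader rdr = cmd.ExecuteReader())
        {
          while (rdr.Read())
          {
            //Push each JobID into the data flow as it is read
            Output0Buffer.AddRow();
            Output0Buffer.JobID = (String)rdr["JobID"];
          }
        }
      }
    }
    catch (Exception ex)
    {
      //Surface the failure to SSIS rather than letting the component die silently
      bool cancel;
      ComponentMetaData.FireError(0, "CreateOutputRows", ex.Message, String.Empty, 0, out cancel);
    }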

    Read the article

  • Oracle Advanced Benefits: Plan Design Maintenance for Open Enrollment

    - by Annemarie Provisero
    ADVISOR WEBCAST: Oracle Advanced Benefits: Plan Design Maintenance for Open Enrollment
    PRODUCT FAMILY: Oracle HCM - Benefits
    July 13, 2011 at 1 pm PT, 2 pm MT, 4 pm ET
    This session gives you the information needed to define new, and maintain existing, Compensation Objects used in your Benefits setup. The course highlights things to consider when getting ready for Open Enrollment or when there is a need to change compensation objects. We will review creating a new program, plan, or option and ending an old one, and we will also review what to do when you need to move from an Unrestricted program to a Restricted one.
    TOPICS WILL INCLUDE:
    - Adding or Modifying Compensation Objects
    - Ending Compensation Objects
    - Elements and Element Links
    - Standard and Variable Rates
    - Dependents and Beneficiaries
    - Moving from Oracle Standard Benefits to Oracle Advanced Benefits
    A short, live demonstration (only if applicable) and a question and answer period will be included. Oracle Advisor Webcasts are dedicated to building your awareness around our products and services. This session does not replace offerings from Oracle Global Support Services. Click here to register for this session.
    -------------------------------------------------------------------------------------------------------------
    The above webcast is a service of the E-Business Suite Communities in My Oracle Support. For more information on other webcasts, please reference the Oracle Advisor Webcast Schedule. Click here to visit the E-Business Communities in My Oracle Support. Note that all links require access to My Oracle Support.

    Read the article

  • Reminder: GlassFish 3.1 Clustering Webinar Today!

    - by alexismp
    A quick reminder for those of you that missed the GlassFish Clustering Webinar in March: we have a new session today (June 28th, 2011). The session is planned for 10:00 a.m. PT / 1:00 p.m. ET / 19:00 CT, and you'll need to register first. John Clingan, Principal Product Manager for GlassFish, will walk you through the various clustering features introduced and enhanced in version 3.1. These include SSH-based provisioning of clusters (never log in to any machine again), centralized administration, high availability and smart failover, load balancing, Domain Admin Server (DAS) performance improvements, cluster deployments, and more. Besides learning about these new product features, this is also your chance to ask questions of John and other GlassFish team members. See you there!

    Read the article

  • 12c - Utl_Call_Stack...

    - by noreply(at)blogger.com (Thomas Kyte)
    Over the next couple of months, I'll be writing about some cool new little features of Oracle Database 12c - things that might not make the front page of Oracle.com. I'm going to start with a new package: UTL_CALL_STACK.

    In the past, developers have had access to three functions to try to figure out "where the heck am I in my code":

    dbms_utility.format_call_stack
    dbms_utility.format_error_backtrace
    dbms_utility.format_error_stack

    Now these routines, while useful, were of somewhat limited use. Let's look at the format_call_stack routine for a reason why. Here is a procedure that will just print out the current call stack for us:

    ops$tkyte%ORA12CR1> create or replace
      2  procedure Print_Call_Stack
      3  is
      4  begin
      5    DBMS_Output.Put_Line(DBMS_Utility.Format_Call_Stack());
      6  end;
      7  /
    Procedure created.

    Now, suppose we have a package with nested functions and even duplicated function names:

    ops$tkyte%ORA12CR1> create or replace
      2  package body Pkg is
      3    procedure p
      4    is
      5      procedure q
      6      is
      7        procedure r
      8        is
      9          procedure p is
     10          begin
     11            Print_Call_Stack();
     12            raise program_error;
     13          end p;
     14        begin
     15          p();
     16        end r;
     17      begin
     18        r();
     19      end q;
     20    begin
     21      q();
     22    end p;
     23  end Pkg;
     24  /
    Package body created.

    When we execute the procedure PKG.P, we'll see as a result:

    ops$tkyte%ORA12CR1> exec pkg.p
    ----- PL/SQL Call Stack -----
      object      line  object
      handle    number  name
    0x6e891528         4  procedure OPS$TKYTE.PRINT_CALL_STACK
    0x6ec4a7c0        10  package body OPS$TKYTE.PKG
    0x6ec4a7c0        14  package body OPS$TKYTE.PKG
    0x6ec4a7c0        17  package body OPS$TKYTE.PKG
    0x6ec4a7c0        20  package body OPS$TKYTE.PKG
    0x76439070         1  anonymous block
    BEGIN pkg.p; END;

    *
    ERROR at line 1:
    ORA-06501: PL/SQL: program error
    ORA-06512: at "OPS$TKYTE.PKG", line 11
    ORA-06512: at "OPS$TKYTE.PKG", line 14
    ORA-06512: at "OPS$TKYTE.PKG", line 17
    ORA-06512: at "OPS$TKYTE.PKG", line 20
    ORA-06512: at line 1

    The first block above is the output from format_call_stack, whereas the remainder is the error message returned to the client application (it would also be available to you via the format_error_backtrace API call). As you can see, it contains useful information, but to use it you would need to parse it - and that can be trickier than it seems. The format of those strings is not set in stone; they have changed over the years. (I wrote the "who_am_i" and "who_called_me" functions by parsing these strings - trust me, they change over time!)

    Starting in 12c, we have structured access to the call stack and a series of API calls to interrogate this structure. I'm going to rewrite the print_call_stack procedure as follows:

    ops$tkyte%ORA12CR1> create or replace
      2  procedure Print_Call_Stack
      3  as
      4    Depth pls_integer := UTL_Call_Stack.Dynamic_Depth();
      5
      6    procedure headers
      7    is
      8    begin
      9        dbms_output.put_line( 'Lexical   Depth   Line    Name' );
     10        dbms_output.put_line( 'Depth             Number      ' );
     11        dbms_output.put_line( '-------   -----   ----    ----' );
     12    end headers;
     13    procedure print
     14    is
     15    begin
     16        headers;
     17        for j in reverse 1..Depth loop
     18          DBMS_Output.Put_Line(
     19            rpad( utl_call_stack.lexical_depth(j), 10 ) ||
     20            rpad( j, 7) ||
     21            rpad( To_Char(UTL_Call_Stack.Unit_Line(j), '99'), 9 ) ||
     22            UTL_Call_Stack.Concatenate_Subprogram
     23                       (UTL_Call_Stack.Subprogram(j)));
     24        end loop;
     25    end;
     26  begin
     27    print;
     28  end;
     29  /

    Here we are able to figure out what 'depth' we are at in the code (utl_call_stack.dynamic_depth) and then walk up the stack using a loop. We print out the lexical depth, along with the line number within the unit we were executing, plus the unit name - and not just any unit name, but the fully qualified name, all of the way down to the subprogram name within a package. Not only that, but down to the subprogram name within a subprogram name within a subprogram name. For example, running the PKG.P procedure again results in:

    ops$tkyte%ORA12CR1> exec pkg.p
    Lexical   Depth   Line    Name
    Depth             Number
    -------   -----   ----    ----
    1         6       20      PKG.P
    2         5       17      PKG.P.Q
    3         4       14      PKG.P.Q.R
    4         3       10      PKG.P.Q.R.P
    0         2       26      PRINT_CALL_STACK
    1         1       17      PRINT_CALL_STACK.PRINT
    BEGIN pkg.p; END;

    *
    ERROR at line 1:
    ORA-06501: PL/SQL: program error
    ORA-06512: at "OPS$TKYTE.PKG", line 11
    ORA-06512: at "OPS$TKYTE.PKG", line 14
    ORA-06512: at "OPS$TKYTE.PKG", line 17
    ORA-06512: at "OPS$TKYTE.PKG", line 20
    ORA-06512: at line 1

    This time we get much more than just a line number and a package name, as we did previously with format_call_stack. We not only get the line number and package (unit) name - we get the names of the subprograms. We can see that P called Q called R called P as nested subprograms. Also note that we can see a 'truer' calling level with the lexical depth: we can see that we "stepped" out of the package to call print_call_stack and that it in turn called another nested subprogram.

    This new package will be a nice addition to everyone's error logging packages. Of course there are other functions in there to get owner names, the edition in effect when the code was executed, and more. See UTL_CALL_STACK for all of the details.
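    To make that last point concrete, here is a minimal sketch of what such an error logger might look like. The procedure name log_errors is mine (hypothetical, not from the article); it uses only the documented error and backtrace functions in UTL_CALL_STACK and is meant to be called from an exception handler:

    create or replace procedure log_errors
    as
    begin
      -- Walk the error stack, outermost error first.
      for j in 1 .. utl_call_stack.error_depth loop
        dbms_output.put_line(
          'ORA-' || to_char( utl_call_stack.error_number(j), 'fm00000' ) ||
          ': '   || utl_call_stack.error_msg(j) );
      end loop;
      -- Walk the backtrace to see where the error was raised and how it propagated.
      for j in 1 .. utl_call_stack.backtrace_depth loop
        dbms_output.put_line(
          nvl( utl_call_stack.backtrace_unit(j), 'anonymous block' ) ||
          ' line ' || utl_call_stack.backtrace_line(j) );
      end loop;
    end;
    /

    Called from a WHEN OTHERS handler, this yields the same information as the ORA-06512 stack shown above, but as discrete values you can log or filter rather than a string you have to parse.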

    Read the article
