Search Results

  • NetBeans Podcast #61

    - by TinuA
    Download mp3: 39 minutes – 31.6 MB. Subscribe to the NetBeans Podcast on iTunes.

    NetBeans Community News with Geertjan and Tinu
    - What's NEW? The smarter and now FASTER NetBeans IDE 7.2, available since July. Is it faster for you too? Tell us about it on Twitter! (#netbeans)
    - NetBeans Community Day at JavaOne is BACK!!! Join the NetBeans team in San Francisco on Sunday, September 30th for a full day of sessions on how various Java EE, JavaFX, and NetBeans Platform experts are using NetBeans in the real world. NetBeans Community Day is just the start of the fun at JavaOne 2012; check out the full listing of ALL NetBeans-related sessions at the conference.
    - NetBeans Governance Board elections are around the corner. Nominate yourself or someone you think can represent the interests of the NetBeans community. Email us at nbpodcast at netbeans dot org to get on the ballot in September.

    Community Interview: Çagatay Çivici, PrimeFaces
    Çagatay Çivici is the lead architect and founder of PrimeFaces, the popular JSF component library. Find out what the project is about, how it began, how to create PrimeFaces-based applications inside NetBeans IDE, and more. Learn more about PrimeFaces at NetBeans Community Day at JavaOne 2012, and dig deeper into PrimeFaces at JavaOne 2012: CON6139 - Lessons Learned in Building Enterprise and Desktop Applications with the NetBeans IDE.

    Community Interview: Timon Veenstra, Agrosense
    Timon Veenstra is the architect behind Agrosense, an open-source farm management system built on the NetBeans Platform. Find out how Agrosense helps farms run more efficiently and productively, and why NetBeans is the platform of choice for Timon and the Agrosense team. Catch a demo of Agrosense at NetBeans Community Day at JavaOne 2012.

    API Design with Jarda Tulach
    Geertjan has been using the Lookup API incorrectly; Jarda sets him on the right path.

    *Have ideas for NetBeans Podcast topics? Send them to nbpodcast at netbeans dot org.
    *Subscribe to the official NetBeans page on Facebook! Check us out as well on Twitter, YouTube, and Google+.

  • Join us for Live Oracle Linux and Oracle VM Cloud Events in Europe

    - by Monica Kumar
    Join us for a series of live events and discover how Oracle VM and Oracle Linux offer an integrated and optimized infrastructure for quickly deploying a private cloud environment at lower cost. As one of the most widely deployed operating systems today, Oracle Linux delivers higher performance, better reliability, and stability at a lower cost for your cloud environments. Oracle VM is an application-driven server virtualization solution, fully integrated and certified with Oracle applications, that delivers rapid application deployment and simplified management. With Oracle VM, you have peace of mind that the entire Oracle stack deployed is fully certified by Oracle. Register now for any of the upcoming events, and meet with Oracle experts to discuss how we can help in enabling your private cloud.
    - Nov 20: Foundation for the Cloud: Oracle Linux and Oracle VM (Belgium)
    - Nov 21: Oracle Linux & Oracle VM Enabling Private Cloud (Germany)
    - Nov 28: Realize Substantial Savings and Increased Efficiency with Oracle Linux and Oracle VM (Luxembourg)
    - Nov 29: Foundation for the Cloud: Oracle Linux and Oracle VM (Netherlands)
    - Dec 5: MySQL Tech Tour, including Oracle Linux and Oracle VM (France)
    Hope to see you at one of these events!

  • Oracle Data Integrator 11.1.1.5 Complex Files as Sources and Targets

    - by Alex Kotopoulis
    Overview
    ODI 11.1.1.5 adds the new Complex File technology for use with file sources and targets. The goal is to read or write file structures that are too complex to be parsed using the existing ODI File technology. This includes:
    - Different record types in one list that use different parsing rules
    - Hierarchical lists, for example customers with nested orders
    - Parsing instructions in the file data, such as delimiter types, field lengths, and type identifiers
    - Complex headers, such as multiple header lines or parseable information in the header
    - Skipping of lines
    - Conditional or choice fields
    Similar to the ODI File and XML File technologies, complex file parsing is done through a JDBC driver that exposes the flat file as relational table structures. Complex files are mapped to one or more table structures, as opposed to the (simple) File technology, which always has a one-to-one relationship between file and table. The resulting set of tables follows the same concept as the ODI XML driver: table rows have additional PK-FK relationships to express hierarchy, as well as order values to maintain the file order in the resulting tables.
    The parsing instruction format used for complex files is the nXSD (native XSD) format that is already in use with Oracle BPEL. This format extends the XML Schema standard by adding parsing instructions to each element. Using nXSD parsing technology, the native file is converted into an internal XML format. It is important to understand that the XML is streamed to improve performance; there is no size limitation on the native file based on memory size, because the XML data is never fully materialized. The internal XML is then converted to a relational schema using the same mapping rules as the ODI XML driver.

    How to Create an nXSD File
    Complex file models depend on the nXSD schema for the given file. This nXSD file has to be created using a text editor or the Native Format Builder Wizard that is part of Oracle BPEL. BPEL is included in the ODI Suite, but not in standalone ODI Enterprise Edition. The nXSD format extends the standard XSD format through nxsd attributes. nXSD is valid XML Schema, since the XSD standard allows extra attributes with their own namespaces.
    The following is a sample nXSD schema:

        <?xml version="1.0"?>
        <xsd:schema xmlns:xsd="http://www.w3.org/2001/XMLSchema"
                    xmlns:nxsd="http://xmlns.oracle.com/pcbpel/nxsd"
                    xmlns:tns="http://xmlns.oracle.com/pcbpel/demoSchema/csv"
                    targetNamespace="http://xmlns.oracle.com/pcbpel/demoSchema/csv"
                    elementFormDefault="qualified"
                    attributeFormDefault="unqualified"
                    nxsd:encoding="US-ASCII" nxsd:stream="chars" nxsd:version="NXSD">
          <xsd:element name="Root">
            <xsd:complexType>
              <xsd:sequence>
                <xsd:element name="Header">
                  <xsd:complexType>
                    <xsd:sequence>
                      <xsd:element name="Branch" type="xsd:string"
                                   nxsd:style="terminated" nxsd:terminatedBy=","/>
                      <xsd:element name="ListDate" type="xsd:string"
                                   nxsd:style="terminated" nxsd:terminatedBy="${eol}"/>
                    </xsd:sequence>
                  </xsd:complexType>
                </xsd:element>
                <xsd:element name="Customer" maxOccurs="unbounded">
                  <xsd:complexType>
                    <xsd:sequence>
                      <xsd:element name="Name" type="xsd:string"
                                   nxsd:style="terminated" nxsd:terminatedBy=","/>
                      <xsd:element name="Street" type="xsd:string"
                                   nxsd:style="terminated" nxsd:terminatedBy=","/>
                      <xsd:element name="City" type="xsd:string"
                                   nxsd:style="terminated" nxsd:terminatedBy="${eol}"/>
                    </xsd:sequence>
                  </xsd:complexType>
                </xsd:element>
              </xsd:sequence>
            </xsd:complexType>
          </xsd:element>
        </xsd:schema>

    The nXSD schema annotates elements to describe their position and delimiters within the flat text file. The schema above uses almost exclusively the nxsd:terminatedBy instruction to look for the next terminator characters. There are various constructs in nXSD to parse fixed-length fields, look ahead in the document for string occurrences, perform conditional logic, use variables to remember state, and more. nXSD files can either be written manually using an XML Schema editor or created using the Native Format Builder Wizard. Both the Native Format Builder Wizard and the nXSD language are described in the Application Server Adapter Users Guide. The way to start the Native Format Builder in BPEL is to create a new File Adapter; in step 8 of the Adapter Configuration Wizard a new Schema for Native Format can be created. The Native Format Builder guides you through a number of steps to generate the nXSD based on a sample native file. If the format is complex, it is often a good idea to "approximate" it with a similar simple format and then add the complex components manually. The resulting *.xsd file can be copied and used as the format for ODI; other BPEL constructs such as the file adapter definition are not relevant for ODI. Using this technique it is also possible to parse the same file format in SOA Suite and ODI, for example using SOA for small real-time messages and ODI for large batches.
    The nXSD schema in this example describes a file with one header row, followed by customer rows of 3 comma-delimited string fields each, for example:

        Redwood City Downtown Branch, 06/01/2011
        Ebeneezer Scrooge, Sandy Lane, Atherton
        Tiny Tim, Winton Terrace, Menlo Park

    The ODI Complex File JDBC driver exposes the file structure through a set of relational tables with PK-FK relationships.
    The tables for this example are:

    Table ROOT (1 row):
    - ROOTPK: Primary key for the root element
    - SNPSFILENAME: Name of the file
    - SNPSFILEPATH: Path of the file
    - SNPSLOADDATE: Date of load

    Table HEADER (1 row):
    - ROOTFK: Foreign key to the ROOT record
    - ROWORDER: Order of the row in the native document
    - BRANCH: Data
    - BRANCHORDER: Order of Branch within the row
    - LISTDATE: Data
    - LISTDATEORDER: Order of ListDate within the row

    Table ADDRESS (2 rows):
    - ROOTFK: Foreign key to the ROOT record
    - ROWORDER: Order of the row in the native document
    - NAME: Data
    - NAMEORDER: Order of Name within the row
    - STREET: Data
    - STREETORDER: Order of Street within the row
    - CITY: Data
    - CITYORDER: Order of City within the row

    Every table has PK and/or FK fields to reflect the document hierarchy through relationships. In this example this is trivial, since the HEADER and all CUSTOMER records point back to the PK of ROOT; deeper-nested documents require this to identify parent elements. All tables also have a ROWORDER field to define the order of rows, as well as order fields for each column, in case the order of columns varies in the original document and needs to be maintained. If order is not relevant, these fields can be ignored.
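    Because the driver exposes plain relational tables, the parsed file can be queried with ordinary SQL. As a sketch, using the table and column names generated in this example, the following query reassembles the customer rows together with their header and file information:

        SELECT r.SNPSFILENAME,
               h.BRANCH,
               h.LISTDATE,
               a.NAME,
               a.STREET,
               a.CITY
          FROM ROOT r
          JOIN HEADER  h ON h.ROOTFK = r.ROOTPK  -- one header per file
          JOIN ADDRESS a ON a.ROOTFK = r.ROOTPK  -- customer rows
         ORDER BY a.ROWORDER;                    -- preserve native file order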
    How to Create a Complex File Data Server in ODI
    After creating the nXSD file and a test data file, and storing them on a local file system accessible to ODI, you can go to the ODI Topology Navigator to create a Data Server and Physical Schema under the Complex File technology. This technology follows the conventions of other ODI technologies and is very similar to the XML technology. The parsing settings, such as the source native file, the nXSD schema file, the root element, and the external database, can be set in the JDBC URL. The use of an external database defined by dbprops is optional, but is strongly recommended for production use; ideally, the staging database should be used for this. Also, when using a complex file exclusively for read purposes, it is recommended to use the ro=true property to ensure the file is not unnecessarily synchronized back from the database when the connection is closed. A data file is always required to be present at the filename path during design time. Without this file, operations like testing the connection, reading the model data, or reverse engineering the model will fail. All properties of the Complex File JDBC Driver are documented in the Oracle Fusion Middleware Connectivity and Knowledge Modules Guide for Oracle Data Integrator, in Appendix C: Oracle Data Integrator Driver for Complex Files Reference. David Allan has created a great viewlet, Complex File Processing - 0 to 60, which shows the creation of a Complex File data server as well as a model based on this server.

    How to Create Models Based on a Complex File Schema
    Once the physical schema and logical schema have been created, the Complex File can be used to create a Model as if it were based on a database. When reverse-engineering the Model, data stores (tables) are created for each XSD element of complex type. Using complex files as sources is straightforward; when using them as targets, you have to make sure that all dependent tables have matching PK-FK pairs. The same applies to the XML driver as well.

    Debugging and Error Handling
    There are different ways to test an nXSD file. The Native Format Builder Wizard can be used even if the nXSD wasn't created in it; it will show issues related to the schema and/or test data. In ODI, the nXSD will be parsed and run against the existing test data file when testing a connection in the data server. If either the nXSD has an error or the data is non-compliant with the schema, an error will be displayed. Sample error message:

        Error while reading native data. [Line=1, Col=5] Not enough data available
        in the input, when trying to read data of length "19" for "element with
        name D1" from the specified position, using "style" as "fixedLength" and
        "length" as "". Ensure that there is enough data from the specified
        position in the input.

    Complex File FAQ

    Q: Is the size of the native file limited by available memory?
    A: No. Since the native data is streamed through the driver, only the available space in the staging database limits the size of the data. There are limits on individual field sizes, though; a single large object field needs to fit in memory.

    Q: Should I always use the Complex File driver instead of the File driver in ODI now?
    A: No. Use the File technology for all simple file parsing tasks, for example any fixed-length or delimited files that have just one row format and can be mapped into a simple table. Because of its narrow assumptions, the ODI File driver is easy to configure within ODI and can stream file data without writing it into a database. The Complex File driver should be used whenever the use case cannot be handled through the File driver.

    Q: Are we generating XML out of flat files before we write them into a database?
    A: We don't materialize any XML as part of parsing a flat file, either in memory or on disk. The data produced by the XML parser is streamed as Java objects that just use the XSD-derived nXSD schema as their type system. We use the nXSD schema because it is the standard for describing complex flat file metadata in Oracle Fusion Middleware, and it enables users to share schemas across products.

    Q: Is the nXSD file interchangeable with SOA Suite?
    A: Yes. ODI can use the same nXSD files as SOA Suite, allowing mixed use cases with the same data format.

    Q: Can I start the Native Format Builder from ODI Studio?
    A: No. The Native Format Builder has to be started from a JDeveloper instance with BPEL. You can get BPEL as part of the SOA Suite bundle. Users without SOA Suite can manually develop nXSD files using XSD editors.

    Q: When is the database data written back to the native file?
    A: Data is synchronized using the SYNCHRONIZE and CREATE FILE commands, and when the JDBC connection is closed. It is recommended to set the ro or read_only property to true when a file is exclusively used for reading, so that no unnecessary write-backs occur.

    Q: Is the nXSD metadata part of the ODI Master or Work Repository?
    A: No. The data server definition in the master repository only contains the JDBC URL with file paths; the nXSD files have to be accessible on the file systems where the JDBC driver is executed during production, either by copying them or by using a network file system.

    Q: Where can I find sample nXSD files?
    A: The Application Server Adapter Users Guide contains nXSD samples for various use cases.
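    To round out the data server discussion above, a Complex File JDBC URL might look like the sketch below. The parameter style mirrors the ODI XML driver, but treat the exact property names as an assumption to verify against Appendix C of the driver reference:

        jdbc:snps:complexfile?f=/odi/demo/customers.csv&d=/odi/demo/customers.xsd&re=Root&ro=true

    Here f would point at the native data file, d at the nXSD schema, re at the root element, and ro=true marks the connection read-only, as recommended above.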

  • Floating Panels and Describe Windows in Oracle SQL Developer

    - by thatjeffsmith
    One of the challenges I face as I try to share tips about our software is that I tend to assume there are features that you just 'know about.' Either they're so intuitive that you MUST know about them, or it's a feature that I've been using for so long I forget that others may have never even seen it before. I want to cover two of those today:
    - Describe (DESC) – SHIFT+F4
    - Floating Panels

    [Photo: my super-exciting desktop, SQL Developer and Describe]

    DESC, or Describe, is an Oracle SQL*Plus command. It shows what a table or view is composed of in terms of its column definition. Here's an example:

        SQL*Plus: Release 11.2.0.3.0 Production on Fri Sep 21 14:25:37 2012
        Copyright (c) 1982, 2011, Oracle. All rights reserved.
        Connected to:
        Oracle Database 11g Enterprise Edition Release 11.2.0.3.0 - Production
        With the Partitioning, OLAP, Data Mining and Real Application Testing options

        SQL> desc beer;
         Name                   Null?    Type
         ---------------------- -------- --------------
         BREWERY                NOT NULL VARCHAR2(100)
         CITY                            VARCHAR2(100)
         STATE                           VARCHAR2(100)
         COUNTRY                         VARCHAR2(100)
         ID                              NUMBER

        SQL>

    You can get the same information – and a good bit more – in SQL Developer using the SQL Developer DESC command. You invoke it with SHIFT+F4. It will open a floating (non-modal!) window with the information you want: column definitions, constraints, stats, privileges, etc. A few 'cool' things you should be aware of:
    - I can open as many as I want, and still work in my worksheet, browser, etc.
    - I can also DESC an index, user, or most any other database object.
    - I can of course move them off my primary desktop display.
    - The DESC panels are read-only. I can't drop a constraint from within the DESC window of a given table. But for dragging columns into my worksheet, and checking out the stats for my objects as I query them, it's very, very handy.

    Try This Right Now
    Type 'scott.emp' (or some other table you have), place your cursor on the text, and hit SHIFT+F4. You'll see the EMP object open. Now click into a column name in the columns page. Drag it into your worksheet. It will paste that column name into your query. This is an alternative for those that don't like our code insight feature or dragging columns off the connection tree (new for v3.2!) Got it?

    SQL Developer's Floating Panels
    Ok, let's talk about a similar feature. Did you know that any dockable panel from the View menu can also be 'floated'? One of my favorite features is the SQL History. Every query I run is recorded, and I can recall them later without having to remember what I ran and when. And I USUALLY use the keyboard shortcuts for this. But sometimes I still want to see my recall list without having to give up my screen real estate. So I just right-click on the panel tab and select 'Float.' Then I move it over to my secondary display – see the poorly lit picture at the beginning of this post. [Photo: let your troubles float away... if only it were as easy as a right-click in the real world.] And that's it. Simple, I know. But I thought you should know about these two things!

  • Auto DOP and Concurrency

    - by jean-pierre.dijcks
    After spending some time in the cloud, I figured it is time to come down to earth and discuss some of the new Auto DOP features some more. As Database Machines (the v2 machine runs Oracle Database 11.2) are effectively selling like hotcakes, it makes sense to talk about the new parallel features in more detail. For basic understanding, make sure you have read the initial post. The focus there is on Auto DOP and queuing, which is to some extent the focus here as well. But now I want to discuss concurrency a little and explain some of the relevant parameters and their impact, specifically in a situation with concurrency on the system.

    The Goal of Auto DOP
    The idea behind calculating the Automatic Degree of Parallelism is to find the highest possible DOP (the ideal DOP) that still scales. In other words, if we were to increase the DOP beyond a certain point, we would see the performance curve tail off, and the resource cost per unit of performance would become less optimal. The ideal DOP is therefore the best resource/performance point for that statement.

    The Goal of Queuing
    On a normal production system we should see statements running concurrently. On a Database Machine we typically see high concurrency rates, so we need to find a way to deal with both high DOPs and high concurrency. Queuing is intended to make sure we:
    - Don't throttle down a DOP because other statements are running on the system
    - Stay within the physical limits of a system's processing power
    Instead of making statements run at a lower DOP, we queue them to make sure they will get all the resources they want to run efficiently without thrashing the system. The theory – and hopefully practice – is that by giving a statement the optimal DOP, the sum of all statements runs faster with queuing than without queuing.

    Increasing the Number of Potential Parallel Statements
    To determine how many statements we will consider running in parallel, a single parameter should be looked at: PARALLEL_MIN_TIME_THRESHOLD. The default value is 10 seconds. So far there is nothing new here..., but do realize that anything serial (e.g. anything that stays under the threshold) goes straight into processing and is not considered in the rest of this post. Now, if you have a system with two groups of queries (serial short-running ones and potentially parallel long-running ones), you may want to worry only about the long-running ones with this parallel statement threshold. As an example, let's assume the short-running stuff runs on average between 1 and 15 seconds in serial (and the business is quite happy with that), while the long-running stuff is in the realm of 1 – 5 minutes. It might be a good choice to set the threshold somewhere north of 30 seconds. That way the short-running queries all run serial as they do today (if it ain't broken, don't fix it), while the long-running ones are evaluated for (higher degrees of) parallelism. This makes sense because the longer-running ones are (at least in theory) more interesting to unleash a parallel processing model on, and the benefits of running them in parallel are much more significant (again, that is mostly the case).

    Setting a Maximum DOP for a Statement
    Now that you know how to control how many of your statements are considered to run in parallel, let's talk about the specific degree any given statement will be evaluated at. As the initial post describes, this is controlled by PARALLEL_DEGREE_LIMIT.
    This parameter controls the degree limit for the entire cluster, and by default it is CPU (meaning it equals the Default DOP). For the sake of an example, let's say our Default DOP is 32. Looking at our 5-minute queries from the previous paragraph, the limit of 32 means that none of the statements evaluated for Auto DOP ever runs at more than a DOP of 32.

    Concurrently Running a High DOP
    A basic assumption about running high-DOP statements at high concurrency is that at some point in time (and this is true on any parallel processing platform!) you will run into a resource limitation. And yes, you can then buy more hardware (e.g. expand the Database Machine in Oracle's case), but that is not the point of this post... The goal is to find a balance between the highest possible DOP for each statement and the number of statements running concurrently, but with an emphasis on running each statement at its highest-efficiency DOP. The PARALLEL_SERVERS_TARGET parameter is the all-important concurrency slider here. Setting this parameter to a higher number means more statements get to run at their maximum parallel degree before queuing kicks in. PARALLEL_SERVERS_TARGET is set per instance (so it needs to be set to the same value on all 8 nodes in a full-rack Database Machine). Just as a side note, this parameter is set in processes, not in DOP; its default works out to 4 * Default DOP (2 processes per unit of DOP, and a default value of 2 * Default DOP, hence 4 * Default DOP). Let's say we have PARALLEL_SERVERS_TARGET set to 128. With our limit set to 32 (the default), we are able to run 4 statements concurrently at the highest DOP possible on this system before we start queuing. If these 4 statements are running, any subsequent statement will be queued. To run a system at high concurrency, PARALLEL_SERVERS_TARGET should be raised from its default to be much closer to PARALLEL_MAX_SERVERS (start with 60% or so). By using both PARALLEL_SERVERS_TARGET and PARALLEL_DEGREE_LIMIT you can easily control how many statements run concurrently at good DOPs without excessive queuing. Because each workload is a little different, it makes sense to plan ahead, look at these parameters, and set them based on your requirements.
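    Pulling the example numbers together, the settings discussed above could be sketched out as follows. The values are the illustrative ones from this post, not recommendations; validate them against your own workload:

        -- Only statements estimated to run longer than 30 seconds are
        -- considered for Auto DOP; shorter ones stay serial.
        ALTER SYSTEM SET parallel_min_time_threshold = 30;

        -- Cap the DOP of any single statement at 32 (the example Default DOP).
        ALTER SYSTEM SET parallel_degree_limit = 32;

        -- Start queuing once active parallel server processes would exceed
        -- 128; set identically on every instance of the cluster.
        ALTER SYSTEM SET parallel_servers_target = 128;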

  • A new version of Oracle Enterprise Manager Ops Center Doctor (OCDoctor) Utility released

    - by Anand Akela
    In February, we posted a blog about the Oracle Enterprise Manager Ops Center Doctor, aka the OCDoctor utility. This utility assists in various stages of an Ops Center deployment and can be a real lifesaver. It is updated on a regular basis with additional knowledge (similar to an antivirus subscription) to help you identify and resolve known issues or suggest ways to improve performance.
    A new version (Version 4.00) of the OCDoctor is now available. This new version adds full support for the recently announced Oracle Enterprise Manager Ops Center 12c, including prerequisite checks, troubleshooting tests, log collection, tuning, and product metadata updates. In addition, it adds several bug fixes and enhancements to the OCDoctor utility.

    To download the OCDoctor for new installations:
        https://updates.oracle.com/OCDoctor/OCDoctor-latest.zip

    For existing installations, simply run:
        # /var/opt/sun/xvm/OCDoctor/OCDoctor.sh --update

    Tip: If you have Oracle Enterprise Manager Ops Center 12c EC installed, your OCDoctor will automatically update overnight.

    Join the Oracle launch webcast "Total Cloud Control for Systems" on April 12th at 9 AM PST to learn more about Oracle Enterprise Manager Ops Center 12c from Oracle Senior Vice President John Fowler, Oracle Vice President of Systems Management Steve Wilson, and a panel of Oracle executives. Stay connected with Oracle Enterprise Manager: Twitter | Facebook | YouTube | Linkedin | Newsletter

  • How-to dynamically filter model-driven LOV

    - by Frank Nimphius
    Often developers need to filter an LOV query with information obtained from an ADF Faces form or elsewhere. The sample below shows how to define a launch popup listener, configured on the launchPopupListener property of the af:inputListOfValues component, to filter a list of values.

        <af:inputListOfValues id="departmentIdId"
            value="#{bindings.DepartmentId.inputValue}"
            model="#{bindings.DepartmentId.listOfValuesModel}"
            launchPopupListener="#{PopupLauncher.onPopupLaunch}" … >
            …
        </af:inputListOfValues>

    A list of values is queried using a search binding that gets created in the PageDef file of a view when a list of values component gets added. The managed bean code below looks this search binding up and then adds a view criteria that filters the query. Note: There is no public API available yet for the FacesCtrlLOVBinding class, which is why I use the internal class in the example.

        public void onPopupLaunch(LaunchPopupEvent launchPopupEvent) {
            BindingContext bctx = BindingContext.getCurrent();
            BindingContainer bindings = bctx.getCurrentBindingsEntry();
            FacesCtrlLOVBinding lov =
                (FacesCtrlLOVBinding) bindings.get("DepartmentId");
            ViewCriteriaManager vcm =
                lov.getListIterBinding().getViewObject().getViewCriteriaManager();
            //make sure the view criteria is cleared
            vcm.removeViewCriteria(vcm.DFLT_VIEW_CRITERIA_NAME);
            //create a new view criteria
            ViewCriteria vc =
                new ViewCriteria(lov.getListIterBinding().getViewObject());
            //use the default view criteria name "__DefaultViewCriteria__"
            vc.setName(vcm.DFLT_VIEW_CRITERIA_NAME);
            //create a view criteria row for all queryable attributes
            ViewCriteriaRow vcr = new ViewCriteriaRow(vc);
            //for this sample I set the query filter to DepartmentId 60.
            //You may determine it at runtime by reading it from a managed
            //bean or the binding layer
            vcr.setAttribute("DepartmentId", 60);
            //also note that the view criteria row consists of all attributes
            //that belong to the LOV list view object, which means that you
            //can filter on multiple attributes
            vc.addRow(vcr);
            lov.getListIterBinding().getViewObject().applyViewCriteria(vc);
        }

    Note: Instead of using the vcm.DFLT_VIEW_CRITERIA_NAME name, you can also define a custom name for the view criteria.

  • Oracle Solaris: Zones on Shared Storage

    - by Jeff Victor
    Oracle Solaris 11.1 has several new features. At oracle.com you can find a detailed list. One of the significant new features (and the most significant new feature related to Oracle Solaris Zones) is casually called "Zones on Shared Storage," or simply ZOSS (rhymes with "moss"). ZOSS offers much more flexibility because you can store Solaris Zones on shared storage (surprise!) so that you can perform quick and easy migration of a zone from one system to another. This blog entry describes and demonstrates the use of ZOSS.
    ZOSS provides complete support for a Solaris Zone that is stored on "shared storage." In this case, "shared storage" refers to Fibre Channel (FC) or iSCSI devices, although there is one lone exception that I will demonstrate soon. The primary intent is to enable you to store a zone on FC or iSCSI storage so that it can be migrated from one host computer to another much more easily and safely than in the past.
    With this blog entry, I wanted to make it easy for you to try this yourself. I couldn't assume that you have a SAN available – which is a good thing, because neither do I! What could I use instead? [There he goes, foreshadowing again... -Ed.] Developing this entry reinforced the lesson that the solution to every lab problem is VirtualBox. Oracle VM VirtualBox (its formal name) helps here in a couple of important ways. It offers the ability to easily install multiple copies of Solaris as guests on top of any popular system (Microsoft Windows, MacOS, Solaris, Oracle Linux (and other Linuxes), etc.). It also offers the ability to create a separate virtual disk drive (VDI) that appears as a local hard disk to a guest, and this virtual disk can be moved very easily from one guest to another. In other words, you can follow the steps below on a laptop or larger x86 system.
    Please note that the ability to use ZOSS to store a zone on a local disk is very useful for a lab environment, but not so useful for production. I do not suggest regularly moving disk drives among computers. In the method I describe below, that virtual hard disk will contain the zone that will be migrated among the (virtual) hosts. In production, you would use FC or iSCSI LUNs instead. The zonecfg(1M) man page details the syntax for each of the three types of devices.

    Why Migrate?
    Why is the migration of virtual servers important? Some of the most common reasons are:
    - Moving a workload to a different computer so that the original computer can be turned off for extensive maintenance.
    - Moving a workload to a larger system because the workload has outgrown its original system.
    - Restoring the service of a workload on an alternate computer if the original computer has failed and will not reboot; this works if the workload runs in an environment (such as a Solaris Zone) that is stored on shared storage.
    - Simplifying lifecycle management of a workload by developing it on a laptop, migrating it to a test platform when it's ready, and finally moving it to a production system.

    Concepts
    For ZOSS, the important new concept is named "rootzpool". You can read about it in the zonecfg(1M) man page, but here's the short version: it's the backing store (hard disk(s) or LUN(s)) that will be used to make a ZFS zpool – the zpool that will hold the zone. This zpool:
    - contains the zone's Solaris content, i.e. the root file system
    - does not contain any content not related to the zone
    - can only be mounted by one Solaris instance at a time

    Method Overview
    Here is a brief list of the steps to create a zone on shared storage and migrate it. The next section shows the commands and output. You will need a host system with an x86 CPU (hopefully at least a couple of CPU cores), at least 2GB of RAM, and at least 25GB of free disk space. (The steps below will not actually use 25GB of disk space, but I don't want to lead you down a path that ends in a big sign that says "Your HDD is full. Good luck!")
    1. Configure the zone on both systems, specifying the rootzpool that both will use. The best way is to configure it on one system and then copy the output of "zonecfg export" to the other system to be used as input to zonecfg. This method reduces the chances of pilot error. (It is not necessary to configure the zone on both systems before creating it. You can configure this zone in multiple places, whenever you want, and migrate it to one of those places at any time – as long as those systems all have access to the shared storage.)
    2. Install the zone on one system, onto shared storage.
    3. Boot the zone.
    4. Provide system configuration information to the zone. (In the Real World(tm) you will usually automate this step.)
    5. Shut down the zone.
    6. Detach the zone from the original system.
    7. Attach the zone to its new "home" system.
    8. Boot the zone.
    The zone can be used normally, and even migrated back, or to a different system.

    Details
    The rest of this shows the commands and output. The two hostnames are "sysA" and "sysB". Note that each Solaris guest might use a different device name for the VDI that they share. I used the device names shown below, but you must discover the device name(s) after booting each guest. In a production environment you would also discover the device name first and then configure the zone with that name. Fortunately, you can use the command "zpool import" or "format" to discover the device on the "new" host for the zone.
    The first steps create the VirtualBox guests and the shared disk drive. I describe the steps here without demonstrating them.
    1. Download VirtualBox and install it using a method normal for your host OS. You can read the complete instructions.
    2. Create two VirtualBox guests, each to run Solaris 11.1. Each will use its own VDI as its root disk.
    3. Install Solaris 11.1 in each guest. To install a Solaris 11.1 guest, you can either download a pre-built VirtualBox guest and import it, or install Solaris 11.1 from the "text install" media. If you use the latter method, after booting you will not see a windowing system. To install the GUI and other important things, log in and run "pkg install solaris-desktop" and take a break while it installs those important things. Life is usually easier if you install the VirtualBox Guest Additions, because then you can copy and paste between the host and guests, etc. You can find the guest additions in the folder matching the version of VirtualBox you are using, and you can also read the instructions for installing them.
    4. To create the zone's shared VDI in VirtualBox, you can open the storage configuration for one of the two guests, select the SATA controller, and click on the "Add Hard Disk" icon nearby. Choose "Create New Disk" and specify an appropriate path name for the file that will contain the VDI. The shared VDI must be at least 1.5 GB. Note that the guest must be stopped to do this.
    5. Add that VDI to the other guest – using its Storage configuration – so that each can access it while running. The steps start out the same, except that you choose "Choose Existing Disk" instead of "Create New Disk." Because the disk is configured on both of them, VirtualBox prevents you from running both guests at the same time.
    6. Identify the device names of that VDI in each of the guests. Solaris chooses the name based on existing devices. The names may be the same, or may be different from each other. This step is shown below as "Step 1."

    Assumptions
    In the example shown below, I make these assumptions:
    - The guest that will own the zone at the beginning is named sysA.
    - The guest that will own the zone after the first migration is named sysB.
    - On sysA, the shared disk is named /dev/dsk/c7t2d0.
    - On sysB, the shared disk is named /dev/dsk/c7t3d0.

    (Finally!) The Steps

    Step 1) Determine the name of the disk that will move back and forth between the systems.

        root@sysA:~# format
        Searching for disks...done
        AVAILABLE DISK SELECTIONS:
            0. c7t0d0
               /pci@0,0/pci8086,2829@d/disk@0,0
            1. c7t2d0
               /pci@0,0/pci8086,2829@d/disk@2,0
        Specify disk (enter its number): ^D

    Step 2) The first thing to do is partition and label the disk. The magic needed to write an EFI label is not overly complicated.

        root@sysA:~# format -e c7t2d0
        selecting c7t2d0
        [disk formatted]
        FORMAT MENU:
        ...
        format> fdisk
        No fdisk table exists. The default partition for the disk is:
          a 100% "SOLARIS System" partition
        Type "y" to accept the default partition, otherwise type "n" to edit the
        partition table. n
        SELECT ONE OF THE FOLLOWING:
        ...
        Enter Selection: 1
        ...
        G=EFI_SYS 0=Exit? f
        SELECT ONE...
        ...
        6
        format> label
        ...
        Specify Label type[1]: 1
        Ready to label disk, continue? y
        format> quit
        root@sysA:~# ls /dev/dsk/c7t2d0
        /dev/dsk/c7t2d0

    Step 3) Configure zone1 on sysA.

        root@sysA:~# zonecfg -z zone1
        Use 'create' to begin configuring a new zone.
        zonecfg:zone1> create
        create: Using system default template 'SYSdefault'
        zonecfg:zone1> set zonename=zone1
        zonecfg:zone1> set zonepath=/zones/zone1
        zonecfg:zone1> add rootzpool
        zonecfg:zone1:rootzpool> add storage dev:dsk/c7t2d0
        zonecfg:zone1:rootzpool> end
        zonecfg:zone1> exit
        root@sysA:~# zonecfg -z zone1 info
        zonename: zone1
        zonepath: /zones/zone1
        brand: solaris
        autoboot: false
        bootargs:
        file-mac-profile:
        pool:
        limitpriv:
        scheduling-class:
        ip-type: exclusive
        hostid:
        fs-allowed:
        anet:
        ...
        rootzpool:
            storage: dev:dsk/c7t2d0

    Step 4) Install the zone. This step takes the most time, but you can wander off for a snack or a few laps around the gym – or both! (Just not at the same time...)

        root@sysA:~# zoneadm -z zone1 install
        Created zone zpool: zone1_rpool
        Progress being logged to /var/log/zones/zoneadm.20121022T163634Z.zone1.install
          Image: Preparing at /zones/zone1/root.
          AI Manifest: /tmp/manifest.xml.RXaycg
          SC Profile: /usr/share/auto_install/sc_profiles/enable_sci.xml
          Zonename: zone1
        Installation: Starting ...
          Creating IPS image
          Startup linked: 1/1 done
          Installing packages from:
              solaris
                  origin: http://pkg.us.oracle.com/support/
          DOWNLOAD      PKGS        FILES   XFER (MB)  SPEED
          Completed  183/183  33556/33556  222.2/222.2 2.8M/s
          PHASE                          ITEMS
          Installing new actions   46825/46825
          Updating package state database  Done
          Updating image state             Done
          Creating fast lookup database    Done
        Installation: Succeeded
        Note: Man pages can be obtained by installing pkg:/system/manual
        done.
        Done: Installation completed in 1696.847 seconds.
        Next Steps: Boot the zone, then log into the zone console (zlogin -C)
        to complete the configuration process.
        Log saved in non-global zone as
        /zones/zone1/root/var/log/zones/zoneadm.20121022T163634Z.zone1.install

    Step 5) Boot the zone.

        root@sysA:~# zoneadm -z zone1 boot

    Step 6) Log in to the zone's console to complete the specification of system information.

        root@sysA:~# zlogin -C zone1

    Answer the usual questions and wait for a login prompt. Then you can end the console session with the usual "~." incantation.

    Step 7) Shut down the zone so it can be "moved."

        root@sysA:~# zoneadm -z zone1 shutdown

    Step 8) Detach the zone so that the original global zone can't use it.

        root@sysA:~# zoneadm list -cv
          ID NAME    STATUS     PATH          BRAND    IP
           0 global  running    /             solaris  shared
           - zone1   installed  /zones/zone1  solaris  excl
        root@sysA:~# zpool list
        NAME         SIZE  ALLOC   FREE  CAP  DEDUP  HEALTH  ALTROOT
        rpool       17.6G  11.2G  6.47G  63%  1.00x  ONLINE  -
        zone1_rpool 1.98G   484M  1.51G  23%  1.00x  ONLINE  -
        root@sysA:~# zoneadm -z zone1 detach
        Exported zone zpool: zone1_rpool

    Step 9) Review the result and shut down sysA so that sysB can use the shared disk.

        root@sysA:~# zpool list
        NAME         SIZE  ALLOC   FREE  CAP  DEDUP  HEALTH  ALTROOT
        rpool       17.6G  11.2G  6.47G  63%  1.00x  ONLINE  -
        root@sysA:~# zoneadm list -cv
          ID NAME    STATUS      PATH          BRAND    IP
           0 global  running     /             solaris  shared
           - zone1   configured  /zones/zone1  solaris  excl
        root@sysA:~# init 0

    Step 10) Now boot sysB and configure a zone with the parameters shown above in Step 1. (Again, the safest method is to use "zonecfg ... export" on sysA as described in the section "Method Overview" above.) The one difference is the name of the rootzpool storage device, which was shown in the list of assumptions, and which you must determine by booting sysB and using the "format" or "zpool import" command. When that is done, you should see the output shown next. (I used the same zonename – "zone1" – in this example, but you can choose any valid zonename you want.)

        root@sysB:~# zoneadm list -cv
          ID NAME    STATUS      PATH          BRAND    IP
           0 global  running     /             solaris  shared
           - zone1   configured  /zones/zone1  solaris  excl
        root@sysB:~# zonecfg -z zone1 info
        zonename: zone1
        zonepath: /zones/zone1
        brand: solaris
        autoboot: false
        bootargs:
        file-mac-profile:
        pool:
        limitpriv:
        scheduling-class:
        ip-type: exclusive
        hostid:
        fs-allowed:
        anet:
            linkname: net0
        ...
        rootzpool:
            storage: dev:dsk/c7t3d0

    Step 11) Attaching the zone automatically imports the zpool.

        root@sysB:~# zoneadm -z zone1 attach
        Imported zone zpool: zone1_rpool
        Progress being logged to /var/log/zones/zoneadm.20121022T184034Z.zone1.attach
            Installing: Using existing zone boot environment
              Zone BE root dataset: zone1_rpool/rpool/ROOT/solaris
                             Cache: Using /var/pkg/publisher.
          Updating non-global zone: Linking to image /.
          Processing linked: 1/1 done
          Updating non-global zone: Auditing packages.
        No updates necessary for this image.
          Updating non-global zone: Zone updated.
                            Result: Attach Succeeded.
        Log saved in non-global zone as
        /zones/zone1/root/var/log/zones/zoneadm.20121022T184034Z.zone1.attach
        root@sysB:~# zoneadm -z zone1 boot
        root@sysB:~# zlogin zone1
        [Connected to zone 'zone1' pts/2]
        Oracle Corporation  SunOS 5.11  11.1  September 2012

    Step 12) Now let's migrate the zone back to sysA. Create a file in zone1 so we can verify it exists after we migrate the zone back, then begin migrating it back.
        root@zone1:~# ls /opt
        root@zone1:~# touch /opt/fileA
        root@zone1:~# ls -l /opt/fileA
        -rw-r--r-- 1 root root 0 Oct 22 14:47 /opt/fileA
        root@zone1:~# exit
        logout
        [Connection to zone 'zone1' pts/2 closed]
        root@sysB:~# zoneadm -z zone1 shutdown
        root@sysB:~# zoneadm -z zone1 detach
        Exported zone zpool: zone1_rpool
        root@sysB:~# init 0

    Step 13) Back on sysA, check the status.

        Oracle Corporation  SunOS 5.11  11.1  September 2012
        root@sysA:~# zoneadm list -cv
          ID NAME    STATUS      PATH          BRAND    IP
           0 global  running     /             solaris  shared
           - zone1   configured  /zones/zone1  solaris  excl
        root@sysA:~# zpool list
        NAME         SIZE  ALLOC   FREE  CAP  DEDUP  HEALTH  ALTROOT
        rpool       17.6G  11.2G  6.47G  63%  1.00x  ONLINE  -

    Step 14) Re-attach the zone to sysA.

        root@sysA:~# zoneadm -z zone1 attach
        Imported zone zpool: zone1_rpool
        Progress being logged to /var/log/zones/zoneadm.20121022T190441Z.zone1.attach
            Installing: Using existing zone boot environment
              Zone BE root dataset: zone1_rpool/rpool/ROOT/solaris
                             Cache: Using /var/pkg/publisher.
          Updating non-global zone: Linking to image /.
          Processing linked: 1/1 done
          Updating non-global zone: Auditing packages.
        No updates necessary for this image.
          Updating non-global zone: Zone updated.
                            Result: Attach Succeeded.
        Log saved in non-global zone as
        /zones/zone1/root/var/log/zones/zoneadm.20121022T190441Z.zone1.attach
        root@sysA:~# zpool list
        NAME         SIZE  ALLOC   FREE  CAP  DEDUP  HEALTH  ALTROOT
        rpool       17.6G  11.2G  6.47G  63%  1.00x  ONLINE  -
        zone1_rpool 1.98G   491M  1.51G  24%  1.00x  ONLINE  -
        root@sysA:~# zoneadm -z zone1 boot
        root@sysA:~# zlogin zone1
        [Connected to zone 'zone1' pts/2]
        Oracle Corporation  SunOS 5.11  11.1  September 2012
        root@zone1:~# zpool list
        NAME   SIZE  ALLOC   FREE  CAP  DEDUP  HEALTH  ALTROOT
        rpool 1.98G   538M  1.46G  26%  1.00x  ONLINE  -

    Step 15) Check for the file created on sysB earlier.

        root@zone1:~# ls -l /opt
        total 1
        -rw-r--r-- 1 root root 0 Oct 22 14:47 fileA

    Next Steps
    Here is a brief list of some of the fun things you can try next:
    - Add space to the zone by adding a second storage device to the rootzpool. Make sure that you add it to the configurations of both zones!
    - Create a new zone, specifying two disks in the rootzpool when you first configure the zone. When you install that zone, or clone it from another zone, zoneadm uses those two disks to create a mirrored pool. (Three disks will result in a three-way mirror, etc.)

    Conclusion
    Hopefully you have seen the ease with which you can now move Solaris Zones from one system to another.

  • ATG Live Webcast April 5: Managing Your Oracle E-Business Suite with Oracle Enterprise Manager

    - by BillSawyer
    The next ATG Live Webcast covers one of the hottest topic areas in E-Business Suite Tools and Technology: Lifecycle Management. Angelo Rosado, Product Manager, ATG Development, will lead you through using Oracle Enterprise Manager 12c and the latest E-Business Suite Plug-in to manage E-Business Suite systems. You can register for the Apr. 5, 2012 event at: Managing Your Oracle E-Business Suite with Oracle Enterprise Manager

    The topics covered in this webcast will be:
    - Manage your EBS system configurations
    - Monitor your EBS environment's performance and uptime
    - Keep multiple EBS environments in sync with their patches and configurations
    - Create patches for your EBS customizations and apply them with Oracle's own patching tools

    Date: Thursday, April 5, 2012
    Time: 8:00 AM - 9:00 AM Pacific Standard Time
    Presenter: Angelo Rosado, Product Manager, ATG Development
    Webcast Registration Link (Preregistration is optional but encouraged)

    To hear the audio feed:
    - Domestic Participant Dial-In Number: 877-697-8128
    - International Participant Dial-In Number: 706-634-9568
    - Additional International Dial-In Numbers Link
    - Dial-In Passcode: 99342

    To see the presentation, the Direct Access Web Conference details are:
    - Website URL: https://ouweb.webex.com
    - Meeting Number: 597073984

    If you miss the webcast, or you have missed any webcast, don't worry -- we'll post links to the recording as soon as it's available from Oracle University. You can monitor this blog for pointers to the replay, and you can find our archive of past webcasts and training here. If you have any questions or comments, feel free to email Bill Sawyer (Senior Manager, Applications Technology Curriculum) at BilldotSawyer-AT-Oracle-DOT-com.

  • 11gR2 DB 11.2.0.1 Certified with E-Business Suite on Solaris 10 (x86-64)

    - by Steven Chan
    Oracle Database 11g Release 2 version 11.2.0.1 is now certified with Oracle E-Business Suite 11i (11.5.10.2) and Release 12 (12.0.4 or higher, 12.1.1 or higher) on Oracle Solaris on x86-64 (64-bit) running Solaris 10. This announcement includes:
    - Oracle Database 11gR2 version 11.2.0.1
    - Oracle Database 11gR2 version 11.2.0.1 Real Application Clusters (RAC)
    - Transparent Data Encryption (TDE) Column Encryption with EBS 11i and R12
    - Advanced Security Option (ASO)/Advanced Networking Option (ANO)
    - Export/Import Process for E-Business Suite 11i and R12 Database Instances
    - Transparent Data Encryption (TDE) Tablespace Encryption

  • The Diamond Model for Project Management

    - by fernando.galdino
    This year I started a master's degree in Project Management. Over this period we have studied several topics involving project management approaches. One of these approaches is the Diamond Model. Developed by Aaron Shenhar and Dov Dvir, and explained in detail in the book "Reinventing Project Management", it is a framework for assessing a project; based on the results, the project manager can apply an approach such as the one described in the PMBOK (PMI) so as to make the best possible use of the good practices it lists. The presentation below was given by me in one of the course classes. It explains in some detail, while also providing an overview, the NTCP model, a framework that assesses a project in terms of novelty, technological uncertainty, complexity, and pace.

    NTCP Model (slides by Fernando Galdino)

  • CRM at Oracle Series: Do Not Call & Do Not Email

    - by tony.berk
    Who you gonna call? Or not call! Sorry, just kidding, this isn't a movie blog! Do Not Call is an important topic for all businesses, as there are government regulations that can lead to significant fines and, of course, possible damage to your brand. Oracle leverages Siebel CRM to deliver an effective solution that addresses the Do Not Call and Email Permissible Use requirements. The application uses the Contacts functionality to manage communication preferences; once defined, these preferences are synchronized centrally across all contact records that share the same phone number and email address. Additionally, the relevant information is masked so Oracle employees cannot accidentally reach out to the contact. The solution therefore ensures that we are compliant with regulations, enables us to respect individuals' communication preferences, and provides an audit trail of changes to those preferences. Today's CRM at Oracle slidecast discusses the requirements, highlights benefits, and provides screenshots of the solution: CRM at Oracle Series: Do Not Call & Do Not Email. Click here to learn more about Siebel CRM and other Oracle CRM products. Are you enjoying the CRM at Oracle series? We are working on more topics for this year, but if there is a particular CRM area or function you'd like to hear how Oracle implemented internally, leave us a comment and we'll try to get it on our list.

  • New WebLogic Server 12.1.2 Installation and Patching Technology By Monica Riccelli

    - by JuergenKress
    WebLogic Server 12.1.2 has many new features, but the first one you are likely to notice is the change in installer technology. WebLogic Server and Coherence 12.1.2 are installed using Oracle Universal Installer (OUI) installer technology. We have also changed WebLogic Server patching technology from SmartUpdate to OPatch, the patching tool used to patch OUI installations. Note that the installation and patching technology used for prior versions of WebLogic Server has not changed. The primary motivation for this change is to provide consistency across the Oracle stack. Prior to WebLogic Server 12.1.2, Fusion Middleware customers were required to use different technologies to install and patch, for example, Oracle Application Development Framework (ADF) with WebLogic Server. Now users can perform installation and patching across products more efficiently by using the same technologies, and by using new installation packages that simplify installation of Fusion Middleware products with WebLogic Server. Check out the YouTube video that describes how to install WebLogic Server 12.1.2 using the OUI installer. The following WebLogic Server distributions are now available on the Oracle Technology Network (OTN) under the OTN license, and from Oracle Software Delivery Cloud (OSDC) for licensed customers:
    - wls_121200.jar - This OUI installer package includes WebLogic Server and Coherence and is targeted at WebLogic Server users who do not require other Fusion Middleware components such as ADF. This generic installer can be used to install WebLogic Server and Coherence on any supported operating system, and is intended for development or production purposes. This is available on OTN and OSDC.
    Read the complete article here.
    WebLogic Partner Community: For regular information, become a member of the WebLogic Partner Community at http://www.oracle.com/partners/goto/wls-emea (OPN account required). If you need support with your account, please contact the Oracle Partner Business Center.
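    Since 12.1.2 patches are applied with OPatch rather than SmartUpdate, a minimal patching session could look like the sketch below. The Oracle home path and patch number are illustrative assumptions, not values from the article; opatch apply and opatch lsinventory are the standard OPatch subcommands:

        # Point at the 12.1.2 Oracle home created by the OUI installer
        # (illustrative path)
        export ORACLE_HOME=/u01/oracle/mw12.1.2
        # Change into the unzipped patch directory (hypothetical patch ID)
        cd /tmp/patches/16175470
        # Apply the patch to the Oracle home
        $ORACLE_HOME/OPatch/opatch apply
        # Verify which patches are now installed
        $ORACLE_HOME/OPatch/opatch lsinventory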

  • FairScheduling Conventions in Hadoop

    - by dan.mcclary
    While scheduling and resource allocation control have been present in Hadoop since 0.20, a lot of people haven't discovered or utilized them in their initial investigations of the Hadoop ecosystem. We could chalk this up to many things:
    - Organizations are still determining what their dataflow and analysis workloads will comprise
    - Small deployments under test aren't likely to show the signs of strain that would send someone looking for resource allocation options
    - The default scheduling options -- the FairScheduler and the CapacityScheduler -- are not placed in the most prominent position within the Hadoop documentation
    However, for production deployments, it's wise to start with at least the foundations of scheduling in place so that you can tune the cluster as workloads emerge. To do that, we have to ask ourselves something about what the off-the-rack scheduling options are. We have some choices:
    - The FairScheduler, which will work to ensure resource allocations are enforced on a per-job basis.
    - The CapacityScheduler, which will ensure resource allocations are enforced on a per-queue basis.
    - Writing your own implementation of the abstract class org.apache.hadoop.mapred.job.TaskScheduler is an option, but usually overkill.
    If you're going to have several concurrent users and leverage the more interactive aspects of the Hadoop environment (e.g. Pig and Hive scripting), the FairScheduler is definitely the way to go. In particular, we can define user-specific pools so that default users get their fair share and specific users are given the resources their workloads require.
    To enable fair scheduling, we're going to need to do a couple of things. First, we need to tell the JobTracker that we want to use scheduling and where we're going to be defining our allocations. We do this by adding the following to the mapred-site.xml file in HADOOP_HOME/conf:

        <property>
          <name>mapred.jobtracker.taskScheduler</name>
          <value>org.apache.hadoop.mapred.FairScheduler</value>
        </property>
        <property>
          <name>mapred.fairscheduler.allocation.file</name>
          <value>/path/to/allocations.xml</value>
        </property>
        <property>
          <name>mapred.fairscheduler.poolnameproperty</name>
          <value>pool.name</value>
        </property>
        <property>
          <name>pool.name</name>
          <value>${user.name}</value>
        </property>

    What we've done here is simply tell the JobTracker that we'd like task scheduling to use the FairScheduler class rather than a single FIFO queue. Moreover, we're going to be defining our resource pools and allocations in a file called allocations.xml. For reference, the allocation file is read every 15s or so, which allows for tuning allocations without having to take down the JobTracker. Our allocation file is now going to look a little like this:

        <?xml version="1.0"?>
        <allocations>
          <pool name="dan">
            <minMaps>5</minMaps>
            <minReduces>5</minReduces>
            <maxMaps>25</maxMaps>
            <maxReduces>25</maxReduces>
            <minSharePreemptionTimeout>300</minSharePreemptionTimeout>
          </pool>
          <user name="dan">
            <maxRunningJobs>6</maxRunningJobs>
          </user>
          <userMaxJobsDefault>3</userMaxJobsDefault>
          <fairSharePreemptionTimeout>600</fairSharePreemptionTimeout>
        </allocations>

    In this case, I've explicitly set my username to have upper and lower bounds on the maps and reduces, and allotted myself double the default number of running jobs. Now, if I run Hive or Pig jobs from either the console or via the Hue web interface, I'll be treated "fairly" by the JobTracker. There's a lot more tweaking that can be done to the allocations file, so it's best to dig down into the description and start trying out allocations that might fit your workload.
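    Because pool.name defaults to ${user.name} but is an ordinary job property, a job can also be steered into a specific pool at submission time. A hedged sketch (the example jar and HDFS paths are hypothetical; any job submitted through ToolRunner accepts -D properties this way):

        # Submit a job into the "dan" pool explicitly instead of relying
        # on the ${user.name} default.
        hadoop jar hadoop-examples.jar wordcount \
          -D pool.name=dan \
          /user/dan/input /user/dan/output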

    Read the article

  • Using the af:poll to refresh parts of the page periodically

    - by shay.shmeltzer
    Just a quick sample of using the af:poll component, which enables you to do things in a periodic fashion. For example, check whether something has changed on the server and update the UI. A more "modern" approach is to use push instead of pull, and ADF Faces lets you do that with ADS (here, and here). But the poll still has its place: it's quite useful for dashboard-type applications where you want periodic updates of the graphs shown on the page. As the snippet below shows, it's quite simple to use the tag. I also show my lazy approach to invoking declarative operations on a data control from a backing bean without manually writing code.
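    A minimal sketch of the pattern (the component IDs, bean name, and graph binding are illustrative, not taken from the original post):

    <af:poll id="pollRefresh" interval="5000"
             pollListener="#{dashboardBean.onPoll}"/>
    <dvt:barGraph id="salesGraph" partialTriggers="pollRefresh"
                  value="#{bindings.SalesView1.graphModel}"/>

    The interval is in milliseconds, so this polls the server every five seconds. Because the graph lists the poll component in its partialTriggers, it re-renders on each poll event, and the pollListener method gives the backing bean a chance to refresh the underlying data first.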

    Read the article

  • Installing Multiple OWB Patches

    - by [email protected]
    When an OUBI bug requires a fix to the Oracle Warehouse Builder (OWB) code, the fix is delivered as an MDL export file that needs to be imported and deployed in OWB. A recent question asked: if more than one bug is being patched, is it possible to import the MDL files one after the next and then do the deploy steps once? The answer is yes: all of the imports can be done before any of the objects are deployed. Once all of the objects have been imported, the TCL scripts that need to be rerun can be run, and then the objects that were changed can be deployed. The order in which the MDL files are loaded does not matter unless the same object appears in two or more MDL files; in that case, the latest MDL file should be loaded last. For example, if two MDL files both contain changes to the SPLMAP_F_RECENT_CREW mapping, and one was created on January 2, 2009 and the second on March 14, 2009, then the January 2 file should be loaded first and the March 14 file second. Note that if the MDL files are always loaded in the order in which they were created by Release Services, this will work correctly.
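    For environments that script their patch loads, the same ordering rule can be expressed in OMB*Plus, OWB's Tcl-based scripting shell. This is only a rough sketch: the connect string and file names are placeholders, and the exact OMBIMPORT clauses vary by OWB release, so check the scripting reference for your version:

    # Connect to the OWB design repository (credentials and connect string are placeholders)
    OMBCONNECT owb_owner/password@host:1521:orcl
    # Load the older patch first so the newer copy of any shared object wins
    OMBIMPORT MDL_FILE 'patch_20090102.mdl' USE UPDATE_MODE MATCH_BY NAMES
    OMBIMPORT MDL_FILE 'patch_20090314.mdl' USE UPDATE_MODE MATCH_BY NAMES
    OMBCOMMIT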

    Read the article

  • Failure Sucks, But Does It Have To?

    - by steve.diamond
    Hey Folks--It's "elephant in the room" time. Imagine a representative from a CRM VENDOR discussing CRM FAILURES. Well, I recently saw this blog post from Michael Krigsman on "six ways CRM projects go wrong." Now, I know this may come off as defensive, but my comments apply to ALL CRM vendors, not just Oracle. As I perused the list, I couldn't find any failures related to technology; they all seemed related to people or process. Now, this isn't about finger pointing, or impugning customers. I love customers! And when they fail, WE fail. Although I sit in the cheap seats, i.e., I haven't funded any multi-million dollar CRM initiatives lately, I kept wondering how to convert the perception of failure as something that ends and is never to be mentioned again (see Michael's reason #4) into something that one learns from and builds upon. So to continue my tradition of speaking in platitudes, let me propose the following three tenets: 1) Try to get ahead of your failures while they're very, very small. 2) Immediately assess what you can learn from those failures. 3) Seek out those vendors that have a track record, built over more than 15 years of CRM deployments, both in learning from "misses" and in supporting MANY THOUSANDS of CRM successes at companies of all types and sizes. Now let me digress briefly with an unpleasant (for me, anyway) analogy. I really don't like flying. Call it 'fear of dying' or 'fear of no control.' Whatever! I've spoken with quite a few commercial pilots over the years, and they reassure me that there are multiple failures on almost every flight. We as passengers just don't know about them. Most are too minuscule to make a difference, and most are "caught" before they become LARGER failures. It's typically the mid-sized to colossal failures we hear about, and a significant percentage of those are due to human error. What's the point? I'd propose that organizations consider the topic of FAILURE in five grades. On one end, FAILURE Grade 1 is a minor/minuscule failure. On the other end, FAILURE Grade 5 is a colossal failure. A Grade 1 CRM FAILURE could be that a particular interim milestone was missed. Why? What can we learn from that? How can we prevent it from happening again as we proceed through the project? Individual organizations will need to define their own Grade 2 and Grade 3 failures. The opportunity is to keep those Grade 3 failures from escalating any further, because honestly, a GRADE 5 failure may not be recoverable. It could result in a project being pulled, countless hours and dollars lost, and jobs lost. We don't want to go there. In closing, I want to thank Michael for opening my eyes up to the world of "color," versus thinking of failure as "black and white": a dead-end road that organizations can't learn from and instead avoid discussing like the plague.

    Read the article

  • Out-of-the-Box Spatial Dashboards Improve Utility Outage Decisions

    - by stephen.garth
    Oracle Utilities Advanced Spatial Outage Analytics leverages the capabilities of Oracle Business Intelligence with map visualization and geospatial analysis of outage data from utility network management systems, providing BI dashboards to support utility executives and other decision makers throughout the enterprise. This excellent article by Oracle's Guerry Waters, published by Directions Media, gives details. Read the article here. Get more information: - Oracle Spatial - Oracle Utilities - Oracle Business Intelligence

    Read the article

  • Oracle Developer Day in January 2013: "The Oracle Database in Practice"

    - by britta wolf
    What's inside the different database editions? Use cases, plus tips and tricks to take home, including an outlook on new features ... At this Oracle Developer Day you will be brought up to date, with plenty of tips and tricks, on the following topics:
    - The differences between the editions and their secrets
    - The extensive base feature set you get even without extra-cost options
    - Performance and scalability in the individual editions
    - Cost and resource savings made easy
    - Security in the database
    - Increasing availability with simple means
    - Handling large data volumes
    - Cloud technologies in the Oracle Database
    The free event takes place on the following dates and at the following locations:
    - 23.01.2013: Oracle office, Stuttgart
    - 30.01.2013: Oracle office, Potsdam
    - 05.02.2013: Oracle office, Düsseldorf
    The agenda and registration contact can be found here.

    Read the article

  • Get the Latest on MySQL Enterprise Edition

    - by monica.kumar
    Oracle just announced MySQL 5.5 Enterprise Edition. MySQL Enterprise Edition is a comprehensive subscription that includes:
    - MySQL Database
    - MySQL Enterprise Backup
    - MySQL Enterprise Monitor
    - MySQL Workbench
    - Oracle Premier Support; 24x7, worldwide
    New in this release is the addition of MySQL Enterprise Backup and MySQL Workbench, along with enhancements in MySQL Enterprise Monitor. Recent integration with My Oracle Support allows MySQL customers to access the same support infrastructure used for Oracle Database customers, and joint MySQL and Oracle customers can experience faster problem resolution by using a common technical support interface. Supporting multiple operating systems, including Linux and Windows, MySQL Enterprise Edition can enable customers to achieve up to 90 percent TCO savings over Microsoft SQL Server. See what Booking.com is saying: "With more than 50 million unique monthly visitors, performance and uptime are our first priorities," said Bert Lindner, Senior Systems Architect, Booking.com. "The MySQL Enterprise Monitor is an essential tool to monitor, tune and manage our many MySQL instances. It allows us to zoom in quickly on the right areas, so we can spend our time and resources where it matters." Read the press release for details on technology enhancements.

    Read the article

  • Bookbinding Samples

    - by Tim Dexter
    I have finally found a home for the bookbinding samples I put together in support of my white paper on bookbinding. OTN has a great newish sample code site where you can create code samples to share with the community. In their own words: "Welcome to the Oracle Sample Code public repository, where Oracle Technology Network members collaboratively build and share sample applications, code snippets, skins and templates, and more." Note the word 'templates'; I read that as an open invitation to share your latest and greatest! If you have template samples or code snippets that you think would benefit the wider BIP community, please create new code samples and let me know the link, and I'll ensure they get promotion through the blog. https://www.samplecode.oracle.com/ You just need an OTN account to get started. I'll be pushing some more samples and snippets in the near future; it's a great, centrally managed repository. Finally, Oracle has somewhere to get code and files hosted. The two samples I have created cover the bookbinding function from a couple of angles:
    - S523: Oracle BI Publisher Bookbinding Examples -- walks you through a series of examples that show you how to create the bookbinding control files to generate the final bound document.
    - S522: Oracle BI Publisher Bookbinding Demonstration -- a sample J2EE application that demonstrates how to create an HTML/servlet combination to allow users to make sub-document selections and choose the document features (e.g. TOC, page numbering, cross links, etc.) they would like added to the final document.
    I'd be very interested in any feedback. Happy Binding!

    Read the article

  • Oracle Database 11g Underground Advice for Database Administrators, by April C. Sims

    - by alejandro.vargas
    Recently I received a request to review the book "Oracle Database 11g Underground Advice for Database Administrators" by April C. Sims. I was happy to have the opportunity to learn some details about the author; she is an active contributor to the Oracle DBA community through her blog, "Oracle High Availability". The book is a serious and interesting work; I think it provides a good study and reference guide for DBAs who want to understand and implement highly available environments. She starts by walking through the more general aspects and skills required of a DBA, and then goes on to explain the steps required to implement Data Guard, use RMAN, upgrade to 11g, etc.

    Read the article

  • Data Warehouse Best Practices

    - by jean-pierre.dijcks
    In our quest to share our endless wisdom (ahem…), one of the things we figured might be handy is recording some of the best practices for data warehousing. And so we did. And, we did some more… We have now recreated our websites on Oracle Technology Network and have a separate page for best practices, parallelism, and other cool topics related to data warehousing. But the main topic of this post is the set of recorded best practices. Here is what is available (it is a series that ties together but can be read independently), applicable to almost any database version:
    - Partitioning
    - 3NF schema design for a data warehouse
    - Star schema design
    - Data loading
    - Parallel execution
    - Optimizer and stats management
    The best practices page has a lot of other useful information, so have a look here.

    Read the article

  • New Year's Resolution: Highest Availability at the Lowest Cost

    - by margaret.hamburger(at)oracle.com
    Don't miss this Webcast: Achieve 24/7 Cloud Availability Without Expensive Redundancy. Event date: 01/11/2011, 10:00 AM Pacific Standard Time. You'll learn how Oracle's Maximum Availability Architecture and Oracle Database 11g help you:
    - Achieve the highest availability at the lowest cost
    - Protect your systems from unplanned downtime
    - Eliminate idle redundancy
    Register Now!

    Read the article

< Previous Page | 287 288 289 290 291 292 293 294 295 296 297 298  | Next Page >