Search Results

Search found 27098 results on 1084 pages for 'oracle it services industries financial services'.


  • Globe Trotters: Asian Healthcare CIOs need ‘Security Inside Out’ Approach

    - by Tanu Sood
    In our second edition of Globe Trotters, we wanted to share a feature article that was recently published in Enterprise Innovation. EnterpriseInnovation.net, part of Questex Media Group, is Asia's premier business and technology publication. The article featured MOH Holdings (a holding company of Singapore's Public Healthcare Institutions) and highlighted the National Electronic Health Record (NEHR) system currently being deployed in Singapore.

    According to the feature, the NEHR system was built to facilitate seamless exchange of medical information as patients move across different healthcare settings and to give healthcare providers more timely access to patients' healthcare records in Singapore. The NEHR consolidates all clinically relevant information from patients' visits across the healthcare system throughout their lives and pulls it into a single record. It allows for data sharing, making the record accessible to authorized healthcare providers across the continuum of care throughout the country.

    In healthcare, patient data privacy is critical, as is the need to prevent unauthorized access to electronic medical records. As Alan Dawson, director for infrastructure and operations at MOH Holdings, is quoted in the feature: “Protecting the perimeter is no longer enough. Healthcare CIOs today need to adopt a ‘security inside out’ approach that protects information assets all the way from databases to end points.”

    Oracle has long advocated the ‘Security Inside Out’ approach. From operating systems and infrastructure to databases, middleware and applications, organizations need to build in security at every layer and between these layers. This comprehensive approach to security has never been as important as it is today in the social, mobile, cloud (SoMoClo) world. To learn more about Oracle’s Security Inside Out approach, visit our Security page. And for more information on how to prevent unauthorized access, streamline user administration, bolster security and enforce compliance in healthcare, learn more about Oracle Identity Management.

    Read the article

  • Internationalize WebCenter Portal - Content Presenter

    - by Stefan Krantz
    Lately we have been involved in engagements where internationalization has been holding the project back from success. In this post we are going to explain how to get Content Presenter and its editorials to comply with the currently selected locale for the WebCenter Portal session. As you probably know by now, WebCenter Portal leverages the localization support from Java Server Faces (JSF); in this post we will assume that localization is controlled and enforced by switching the current browser's locale between English and Spanish.

    There are two main scenarios in the internationalization of content-enabled pages, since Content Presenter offers both presentation and contribution of information. In this post we will look at how to enable seamless integration of the correct localized version of the back-end content file, and how to enable the editor/author to edit the correct localized version of the file based on the current browser locale.

    Solution Scenario 1 - Localization-aware content presentation

    Due to the number of steps required to implement the enclosed solution proposal, I have decided to share the solution with you in grouped components for each facet of the solution. If you want more details on each step, you can review the enclosed components. This post will guide you through the steps of enabling each component and what it enables/changes in each section of the system.

    Enable Content Presenter Customization

    By leveraging a predictable naming convention for the data files used to hold the content for the Content Presenter instance, we can easily develop a component that dynamically switches the name out before presenting the information. The naming convention we have leveraged is the industry best practice of having a shared identifier as prefix (ContentABC) and a language-enabled suffix (_EN) (_ES). So the assumption is that each file pair in the above example should look like the following:
    - English version - (ContentABC_EN)
    - Spanish version - (ContentABC_ES)

    Based on the above, regardless of the primary version assigned to the Content Presenter instance, we can now easily switch the language out by using the localization support from JSF.
    The Java bean below (oracle.webcenter.doclib.internal.view.presenter.NLSHelperBean) is enclosed in the customization project available for download at the bottom of the post:

        public static final String CP_D_DOCNAME_FORMAT = "%s_%s";
        public static final int CP_UNIQUE_ID_INDEX = 0;
        private ContentPresenter presenter = null;

        public NLSHelperBean() {
            super();
        }

        /**
         * This method updates the configuration for the pageFlowScope to have
         * the correct data file for the current locale.
         */
        public void initLocaleForDataFile() {
            String dataFile = null;
            // Check that the state of the presenter is present, and make sure the item
            // is eligible for localization by locating the "_" in the name
            if (presenter.getConfiguration().getDatasource() != null &&
                presenter.getConfiguration().getDatasource().isNodeDatasource() &&
                presenter.getConfiguration().getDatasource().getNodeIdDatasource() != null &&
                !presenter.getConfiguration().getDatasource().getNodeIdDatasource().equals("") &&
                presenter.getConfiguration().getDatasource().getNodeIdDatasource().indexOf("_") > 0) {
                dataFile = presenter.getConfiguration().getDatasource().getNodeIdDatasource();
                FacesContext fc = FacesContext.getCurrentInstance();
                // Leverage the current faces context to get the current localization language
                String currentLocale = fc.getViewRoot().getLocale().getLanguage().toUpperCase();
                String newDataFile = dataFile;
                String[] uniqueIdArr = dataFile.split("_");
                if (uniqueIdArr.length > 0) {
                    newDataFile = String.format(CP_D_DOCNAME_FORMAT, uniqueIdArr[CP_UNIQUE_ID_INDEX], currentLocale);
                }
                // Replace the current node datasource with the localized data file
                presenter.getConfiguration().getDatasource().setNodeIdDatasource(newDataFile);
            }
        }

    With this bean code available to our WebCenter Portal implementation, we can start the next step: overriding the standard behavior of Content Presenter by applying an MDS taskflow customization to the Content Presenter taskflow. The following taskflow customization has been applied in the customization project attached to this post:
    - Library: WebCenter Document Library Service View
    - Path: oracle.webcenter.doclib.view.jsf.taskflows.presenter
    - File: contentPresenter.xml

    Changes made in the above customization view:
    1. A new method invocation activity has been added (initLocaleForDataFile)
    2. The method invocation invokes the new NLSHelperBean
    3. The default activity is moved to the new method invocation (initLocaleForDataFile)
    4. The outcome from the method invocation goes to determine-navigation (the original default activity)

    The above changes conclude the presentation modifications needed to support a compatible localization scenario for a content-driven page. In addition, this customization does not limit or disable the out-of-the-box capabilities of WebCenter Portal.
    Steps to enable the above customization:
    1. Start JDeveloper and open your WebCenter Portal application
    2. Select "Open Project" and include the extracted project you downloaded (CPNLSCustomizations.zip)
    3. Make sure the build output from the CPNLSCustomizations project is a dependency of your Portal project
    4. Deploy your Portal application to your WC_CustomPortal managed server
    5. Make sure the naming convention of the two data files follows the above recommendation

    Example result of the solution:

    Solution Scenario 2 - Localization-aware content creation and authoring

    As you could see in Solution Scenario 1, we require the naming convention to be strictly followed; in the hands of a user with limited technology knowledge, this can be one of the weak links in the solution. Therefore I strongly recommend that you also follow this part, since it will eliminate that risk and also significantly improve usability for editors and authors. The current WebCenter Portal architecture leverages WebCenter Content to maintain, publish and manage content, so we need to make a few efforts to get this part of the architecture on board with our new naming practice, and also to simplify the creation of content for our end users.

    As you probably remember, the naming convention requires a common prefix, so I propose we enable a new component that auto-names the content item's dDocName (this means that the readable title can still be in a human-readable format). The new component (WCP-LocalizationSupport.zip) built for this scenario enables a couple of things:

    1. A new service where a sequential number can be generated on request - service name: GET_WCP_LOCALE_CONTENTID
    2. Content Presenter leverages a specific function when launching the content creation wizard from within Content Presenter. The assumption is that users will create the content by clicking the "Create Web Content" button. When clicking the button, the wizard opened actually runs inside the WebCenter Content server; the file executed is (contentwizard.hcsp).
    This file uses JSON commands that generate operations in the content server. I have extended this file to create two identical data files instead of one:
    - First it creates the English version, leveraging the new service and a global rule to set the dDocName on the original check-in screen; this global rule is available in a configuration package attached to this blog (NLSContentProfileRule.zip)
    - Secondly, we run a set of JSON JavaScript calls to create the Spanish version with the same details, except for the name, where we replace the suffix with (_ES)
    - Then the content creation wizard ends with its out-of-the-box behavior and assigns the Content Presenter instance the English version

    See the JavaScript markup below - this can be changed in (WCP-LocalizationSupport.zip/component/WCP-LocalizationSupport/publish/webcenter):

        //---------------------------------------A-TEAM---------------------------------------
        WCM.ContentWizard.CheckinContentPage.OnCheckinComplete = function(returnParams)
        {
            var callback = WCM.ContentWizard.CheckinContentPage.checkinCompleteCallback;
            WCM.ContentWizard.ChooseContentPage.OnSelectionComplete(returnParams, callback);
            // Load latest DOC_INFO_SIMPLE
            var cgiPath = DOCLIB.config.httpCgiPath;
            var jsonBinder = new WCM.Idc.JSONBinder();
            jsonBinder.SetLocalDataValue('IdcService', 'DOC_INFO_SIMPLE');
            jsonBinder.SetLocalDataValue('dID', returnParams.dID);
            jsonBinder.Send(cgiPath, $CB(this, function(http) {
                var ret = http.GetResponseText();
                var binder = new WCM.Idc.JSONBinder(ret);
                var dDocName = binder.GetResultSetValue('DOC_INFO', 'dDocName', 0);
                if (dDocName.indexOf("_") > 0) {
                    var ssBinder = new WCM.Idc.JSONBinder();
                    ssBinder.SetLocalDataValue('IdcService', 'SS_CHECKIN_NEW');
                    // Additional localized dDocName generated
                    ssBinder.SetLocalDataValue('dDocName', getLocalizedDocName(dDocName, "es"));
                    ssBinder.SetLocalDataValue('primaryFile', 'default.xml');
                    ssBinder.SetLocalDataValue('ssDefaultDocumentToken', 'SSContributorDataFile');

                    // Copy all remaining metadata fields from the English original
                    for (var n = 0; n < binder.GetResultSetFields('DOC_INFO').length; n++) {
                        var field = binder.GetResultSetFields('DOC_INFO')[n];
                        if (field != 'dID' &&
                            field != 'dDocName' &&
                            field != 'dReleaseState' &&
                            field != 'dRevClassID' &&
                            field != 'dRevisionID' &&
                            field != 'dRevLabel') {
                            ssBinder.SetLocalDataValue(field, binder.GetResultSetValue('DOC_INFO', field, 0));
                        }
                    }
                    ssBinder.Send(cgiPath, $CB(this, function(http) {}));
                }
            }));
        }

        // Support function to create localized dDocNames
        function getLocalizedDocName(dDocName, lang) {
            return dDocName.replace("_EN", ("_" + lang));
        }
        //---------------------------------------A-TEAM---------------------------------------
    3. By applying the enclosed NLSContentProfileRule.zip, the check-in screen for data file creation will have auto-naming enabled with a localization suffix (the default is English). You can change the default language by updating the GlobalNlsRule and assigning your preferred suffix. See the rule markup for the dDocName field below:

        <$executeService("GET_WCP_LOCALE_CONTENTID")$><$dprDefaultValue=WCP_LOCALE.LocaleContentId & "_EN"$>

    Steps to enable the above extensions and configurations:
    1. Install the WebCenter component (WCP-LocalizationSupport.zip) via the AdminServer in the WebCenter Content Administration menus
    2. Enable the component and restart the content server
    3. Apply the configuration bundle to enable the new global rule (GlobalNlsRule), via WebCenter Content Administration/Config Migration Admin

    New Content Creation Experience Result

    Content Editing
    Content editing will by default be enabled for authoring in the currently selected locale, since the content file is selected by (Solution Scenario 1). This means that a user can switch his browser locale and then get an editing experience adapted to the currently selected locale.

    Notes
    A-Team is planning to post a solution on how to switch the locale of the WebCenter Portal session inline, so that Content Presenter, the navigation model and other Faces-related features are localized accordingly.

    The Content Presenter examples used in this post are an extension to the following post:
    https://blogs.oracle.com/ATEAM_WEBCENTER/entry/enable_content_editing_of_iterative

    Downloads
    - CPNLSCustomizations.zip - WebCenter Portal, Content Presenter customization
      https://blogs.oracle.com/ATEAM_WEBCENTER/resource/stefan.krantz/CPNLSCustomizations.zip
    - WCP-LocalizationSupport.zip - WebCenter Content, extension component to enable localized creation of files with compliant auto-naming
      https://blogs.oracle.com/ATEAM_WEBCENTER/resource/stefan.krantz/WCP-LocalizationSupport.zip
    - NLSContentProfileRule.zip - WebCenter Content, configuration update bundle to enable the global rule for check-in naming of new data files
      https://blogs.oracle.com/ATEAM_WEBCENTER/resource/stefan.krantz/NLSContentProfileRule.zip

    Read the article

  • How to Use USER_DEFINED Activity in OWB Process Flow

    - by Jinggen He
    Process Flow is a very important component of Oracle Warehouse Builder. With Process Flow, we can create and control the ETL process by setting all kinds of activities in a well-constructed flow. In Oracle Warehouse Builder 11gR2, there are 28 kinds of activities, which fall into three categories: Control activities, OWB-specific activities and Utility activities. For more information about Process Flow activities, please refer to the OWB online doc.

    Most of those activities are pre-defined for some specific use. For example, the Mapping activity allows execution of an OWB mapping in a Process Flow, and the FTP activity allows interaction between the local host and a remote FTP server. Besides those activities for specific purposes, the User Defined activity enables you to incorporate into a Process Flow an activity that is not defined within Warehouse Builder. So the User Defined activity brings flexibility and extensibility to Process Flow. In this article, we will take a tour of using the User Defined activity. Let's start.

    Enable execution of the User Defined activity

    Let's start this section by creating a very simple Process Flow, which contains a Start activity, a User Defined activity and an End Success activity. Leave all parameters of the activity USER_DEFINED unchanged, except that we enter /tmp/test.sh into the Value column of the COMMAND parameter. Then let's create the shell script test.sh in the /tmp directory. Here is the content of /tmp/test.sh (this article demonstrates a scenario on a Linux system, and /tmp/test.sh is a Bash shell script):

        echo Hello World! > /tmp/test.txt

    Note: don't forget to grant the execution privilege on /tmp/test.sh to the OS oracle user. For simplicity, we just use the following command:

        chmod +x /tmp/test.sh

    OK, it's so simple that we're almost done. Now deploy the Process Flow and run it. On a newly installed OWB, we will come across an error saying "RPE-02248: For security reasons, activity operator Shell has been disabled by the DBA". That's because, by default, the User Defined activity is DISABLED. The configuration for this can be found in <ORACLE_HOME>/owb/bin/admin/Runtime.properties:

        property.RuntimePlatform.0.NativeExecution.Shell.security_constraint=DISABLED

    The property can be set to three different values: NATIVE_JAVA, SCHEDULER and DISABLED. NATIVE_JAVA uses the Java 'Runtime.exec' interface; SCHEDULER uses a DBMS Scheduler external job, submitted by the Control Center repository owner and executed by the default operating system user configured by the DBA; DISABLED prevents execution via these operators. We enable the execution of the User Defined activity by setting:

        property.RuntimePlatform.0.NativeExecution.Shell.security_constraint=NATIVE_JAVA

    Restart the Control Center service for the change to take effect:

        cd <ORACLE_HOME>/owb/rtp/sql
        sqlplus OWBSYS/<password of OWBSYS> @stop_service.sql
        sqlplus OWBSYS/<password of OWBSYS> @start_service.sql

    And then run the Process Flow again. We will see that the Process Flow completes successfully, and the execution of /tmp/test.sh generates a file /tmp/test.txt containing the line Hello World!.

    Pass parameters to the User Defined activity

    The Process Flow created in the above section has a drawback: the User Defined activity doesn't accept any information from OWB, nor does it give any meaningful results back to OWB. That's to say, it lacks interaction. Maybe, sometimes, such a Process Flow can fulfill the business requirement.
    But most of the time, we need the User Defined activity to execute according to some information from prior steps. In this section, we will see how to pass parameters to the User Defined activity and onward into the to-be-executed shell script.

    First, let's see how to pass parameters to the script. The User Defined activity has an input parameter named PARAMETER_LIST. This is a list of parameters that will be passed to the command. Parameters are separated from one another by a token. The token is taken as the first character of the PARAMETER_LIST string, and the string must also end in that token. Warehouse Builder recommends the '?' character, but any character can be used. For example, to pass 'abc', 'def' and 'ghi' you can use any of the following equivalents:

        ?abc?def?ghi?
        !abc!def!ghi!
        |abc|def|ghi|

    If the token character or '\' needs to be included as part of a parameter, it must be preceded with '\', for example '\\'. If '\' is the token character, then '/' becomes the escape character.

    Let's configure the PARAMETER_LIST parameter as below, and modify the shell script /tmp/test.sh as follows:

        echo $1 is saying hello to $2! > /tmp/test.txt

    Re-deploy the Process Flow and run it. We will see that the generated /tmp/test.txt contains the following line:

        Bob is saying hello to Alice!

    In the example above, the parameters passed into the shell script are static. This case is not so useful, because instead of passing parameters we could directly write their values in the shell script. To make the case more meaningful, we can pass two dynamic parameters, obtained from the previous activity, to the shell script. Prepare the Process Flow as below:

    The Mapping activity MAPPING_1 has two output parameters: FROM_USER and TO_USER. The User Defined activity has two input parameters: FROM_USER and TO_USER. All four parameters are of String type. Additionally, the Process Flow has two string variables: VARIABLE_FOR_FROM_USER and VARIABLE_FOR_TO_USER. Through VARIABLE_FOR_FROM_USER, the input parameter FROM_USER of USER_DEFINED gets its value from the output parameter FROM_USER of MAPPING_1. We achieve this by binding both parameters to VARIABLE_FOR_FROM_USER. In the same way, through VARIABLE_FOR_TO_USER, the input parameter TO_USER of USER_DEFINED gets its value from the output parameter TO_USER of MAPPING_1. We also need to change the PARAMETER_LIST of the User Defined activity accordingly.

    Now the shell script is getting its input from the Mapping activity dynamically. Deploy the Process Flow and all of its necessary dependencies, then run the Process Flow. We see that the generated /tmp/test.txt contains the following line:

        USER B is saying hello to USER A!

    'USER B' and 'USER A' are two outputs of the Mapping execution.

    Write the shell script within Oracle Warehouse Builder

    In the previous section, the shell script is located in the /tmp directory. But sometimes, when the shell script is small, or for the sake of maintaining consistency, you may want to keep the shell script inside Oracle Warehouse Builder. We can achieve this by configuring these three parameters of the User Defined activity properly:

    - COMMAND: Set the path of the interpreter by which the shell script will be interpreted.
    - PARAMETER_LIST: Leave it blank.
    - SCRIPT: Enter the shell script content.

    Note that in Linux the shell script content is passed into the interpreter as standard input at runtime. To actually pass parameters to the shell script, we can utilize variable substitutions.
    As in the following figure, ${FROM_USER} will be replaced by the value of the FROM_USER input parameter of the User Defined activity, and so will the ${TO_USER} symbol. Besides these custom substitution variables, OWB also provides some pre-defined system substitution variables; you can refer to the online document for those. Deploy the Process Flow and run it. We see that the generated /tmp/test.txt contains the following line:

        USER B is saying hello to USER A!

    Leverage the return value of the User Defined activity

    All of the previous sections connect the User Defined activity to END_SUCCESS with an unconditional transition. But what should we do if we want different subsequent activities for different shell script execution results?

    1. The simplest way is to add three simple-conditioned outgoing transitions to the User Defined activity. To simplify the scenario, we connect the User Defined activity to three End activities. Basically, if the shell script ends successfully, the whole Process Flow ends at END_SUCCESS; otherwise, the whole Process Flow ends at END_ERROR (in our case, ending at END_WARNING seldom happens). In the real world, we can add more complex and meaningful subsequent business logic.

    2. Or we can use complex conditions to work with different results of the User Defined activity. Previously, our script only had this line:

        echo ${FROM_USER} is saying hello to ${TO_USER}! > /tmp/test.txt

    We can add more logic to it and return different values accordingly:

        echo ${FROM_USER} is saying hello to ${TO_USER}! > /tmp/test.txt
        if CONDITION_1 ; then
            ......
            exit 0
        fi
        if CONDITION_2 ; then
            ......
            exit 2
        fi
        if CONDITION_3 ; then
            ......
            exit 3
        fi

    After that, we can leverage the result by checking RESULT_CODE in the condition expressions of those outgoing transitions. Let's suppose that we have a Process Flow as in the following graph (SUB_PROCESS_n stands for further processing). We can set a complex condition for the transition from USER_DEFINED to SUB_PROCESS_1, and other transitions can be set in the same way. Note that in our shell script we return 0, 2 and 3, but not 1: on a Linux system, if the shell script comes across a system error like an IO error, the return value will be 1, and we can explicitly handle such a return value.

    Summary

    Let's summarize what has been discussed in this article:
    - How to create a Process Flow with a User Defined activity in it
    - How to pass parameters from the prior activity to the User Defined activity and finally into the shell script
    - How to write the shell script within Oracle Warehouse Builder
    - How to do variable substitutions
    - How to let the User Defined activity return different values, and in what ways we can leverage them

    Read the article

  • Android “open for embedded”? Must-read Ars Technica article

    - by terrencebarr
    A few days ago Ars Technica published an article, “Google’s iron grip on Android: Controlling open source by any means necessary”. If you are considering Android for embedded, this article is a must-read to understand the severe ramifications of Google’s tight (and tightening) control over the Android technology and ecosystem. Some quotes from the Ars Technica article:

    - “Android is open – except for all the good parts“
    - “Android actually falls into two categories: the open parts from the Android Open Source Project (AOSP) … and the closed source parts, which are all the Google-branded apps”
    - “Android open source apps … turn into abandonware by moving all continuing development to a closed source model.”
    - “Joining the OHA requires a company to sign its life away and promise to not build a device that runs a competing Android fork.”
    - “Google Play Services is a closed source app owned by Google … to turn the “Android App Ecosystem” into the “Google Play Ecosystem”
    - “You’re allowed to contribute to Android and allowed to use it for little hobbies, but in nearly every area, the deck is stacked against anyone trying to use Android without Google’s blessing“

    Compare this with a recent Wired article, “Oracle Makes Java More Relevant Than Ever”: “Oracle has actually opened up Java even more — getting rid of some of the closed-door machinations that used to be part of the Java standards-making process. Java has been raked over the coals for security problems over the past few years, but Oracle has kept regular updates coming. And it’s working on a major upgrade to Java, due early next year.”

    Cheers, – Terrence

    Filed under: Embedded, Mobile & Embedded Tagged: Android, embedded, Java Embedded, Open Source

    Read the article

  • OTN Developer Days (Review) - San Juan, PR - April 29, 2010

    - by dana.singleterry
    A quick update on the San Juan, PR event. First off, it was a great success, with a keynote audience of 200+. Mickey Ralat, Managing Director of Oracle Caribbean, kicked off the event with a quick introduction, followed by me delivering the keynote message - The Fusion Development Platform - which is the first session in the regular OTN DD events that we run in North America. Following this session was a partner, SDT, basically marketing their services, which cover the Oracle stack, and then a very brief presentation on APEX. After this we broke out into the various tracks: Java, (APEX) DB SQL Developer, and .NET on Oracle.

    After the breakout we ran the following sessions in the Java track: Developing with JDBC, UCP, and Java in Database; Rich Internet Applications in Web 2.0; and Development Made Simple Without Coding: Developing Reusable Business Components. As expected with the various tracks, we ended up with 50-70 in each of the sessions within the Java track, and the audience was very impressed with the power of JDeveloper/ADF 11g. We got a number of questions ranging from licensing cost to upgrading / integrating from Forms. As for the Forms questions, I fielded a number of them, and for those I couldn't, I pointed the attendees towards Grant's resources, which seemed to suffice. They were all, for the most part, unaware of the recent 11.1.1.3 release, which occurred only a couple of days prior to the event. The indication was that they were going to download it and use it for the lab that was included on the DVD, which we did not have time for them to even start on.

    For those of you that attended the event, you can download the updated presentations as follows:
    - Keynote - The Fusion Development Platform
    - Rich Internet Applications in Web 2.0
    - Development Made Simple Without Coding - Developing Reusable Business Components

    Read the article

  • Performing a silent install of JDeveloper

    - by draikes
    Installing JDeveloper

    Now that you have downloaded the latest version of JDeveloper from the Oracle Technology Network, you are almost ready to install it. The problem is that the GUI installer is not as accessible as it could be. However, there is an alternative called a silent install. To perform a silent install, follow these steps:

    1. Download the silent.xml file into the same folder as the JDeveloper installer. You can customize the silent.xml file by setting the folder where JDeveloper will be installed, and by setting the location where you have a 1.6 JDK installed with the Access Bridge already configured (a rough sketch of such a file appears at the end of this post). The defaults are: JDeveloper will be installed at c:\jdev, and a JDK is installed at c:\jdk\1.6.0_25 (see the instructions at the top of the silent.xml file).
    2. Open a command window and navigate to the folder where the JDeveloper installer and silent.xml files are located.
    3. Run the following command:

        jdevstudio11120install.exe -mode=silent -silent_xml=silent.xml -log=install.log

    Note: this assumes that you are installing JDeveloper 11.1.2.0.0; change the above command to match the installer package you have. This command will start by extracting the archive, then the Oracle installer will launch, but you just have to wait until the command prompt returns and voila, it will be installed.

    To run JDeveloper: now you can use Windows Explorer to navigate to the %JDEV_HOME% specified in the silent.xml file (c:\jdev unless you changed it) and drill down to jdeveloper\jdev\bin, where you have a couple of choices. If you have a 32-bit JDK configured with the Access Bridge, run jdevw.exe; however, if you have a 64-bit JDK configured with the Access Bridge, you should run jdev64w.exe. For instructions on setting up Access Bridge 2.0.1, see my earlier post.

    Disclaimer: As always, if something doesn't quite go as planned and you have a problem, please feel free to contact me via email at: don dot raikes at oracle dot com.
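    For reference, below is a rough sketch of the kind of silent.xml this generation of installer (the generic Oracle/BEA-style installer implied by the -mode=silent -silent_xml= flags above) consumes. Treat it as an illustrative assumption only: the data-value entry names and paths here are placeholders and must be checked against the instructions at the top of the actual silent.xml referenced above.

        <?xml version="1.0" encoding="UTF-8"?>
        <!-- Hypothetical sketch of a silent.xml; verify the entry names
             against the silent.xml shipped with these instructions. -->
        <bea-installer>
          <input-fields>
            <!-- Folder where JDeveloper will be installed -->
            <data-value name="BEAHOME" value="c:\jdev" />
            <!-- Pre-installed 1.6 JDK with the Access Bridge configured -->
            <data-value name="LOCAL_JVMS" value="c:\jdk\1.6.0_25" />
          </input-fields>
        </bea-installer>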

    Read the article

  • Grant Ronald - Forms and ADF guru in Budapest!

    - by peter.nagy
    I know I am late in mentioning (well, blogging) this, but here is the gist: Grant Ronald will be our guest speaker at the Oracle Technology Forum event here in Hungary. Briefly about him: Grant Ronald (Senior Group Product Manager, BSc.) has worked in the IT industry since 1989 and joined Oracle Support's Forms/Reports/Discoverer team in 1997, which he later led. He is currently a member of the group responsible for the development of Application Development Tools (including Forms and JDeveloper). His main responsibility is setting the strategic direction of the development tools, as well as supporting the migration to the Java platform that is so important for Forms users. At the moment he is therefore a defining figure in JEE (ADF) evangelism, and, most importantly, (also) from the Forms perspective and from the viewpoint of 4GL developers! So if you are a Forms or ADF developer (or want to become one, the latter of course), or simply want to hear a great talk about JEE, and Oracle's take on it in particular, register here.

    Important! The planned sessions are changing, but unfortunately the page has not yet been updated to reflect this. I will let you know as soon as it happens.

    Logistics: Wednesday, May 5, 2010, Novotel Budapest Congress, 1123 Budapest, Alkotás u. 63-67.

    Read the article

  • In-Application Support Made Easier

    - by matt.hicks
    With the availability of Oracle UPK 3.6.1 and Enablement Service Pack 1 for Oracle UPK 3.6.1 (Oracle Support login required for both), there are quite a few changes for content admins to absorb. In addition to the support added for dozens of application releases, patches and new target applications, we've also added features to make implementing and using In-Application Support even easier.

    First, the old Help Menu Integration Guides have been updated and combined into a single In-Application Support Guide. If you integrate UPK content for user assistance, or if you're interested in doing so, read the new guide! It covers all the integration steps, including a section on the new In-Application Support Configuration Utility.

    If you've integrated content in multiple languages, or if you've ever had to make configuration changes for UPK Help Integration, then you know how cumbersome it was to manually edit javascript files. No longer! The Player now includes a configuration utility that provides a web browser interface for setting all In-Application Support options. From the main screen, you see a list of applications covered by the published content. Clicking on an application name takes you to the edit configuration screen, where you can set all Player options for that application. No more digging through the Player folders to find the right javascript file to edit. No more complicated javascript syntax just to make changes.

    And with Enablement Service Pack 1 we've added a new feature we're calling the Tabbed Gateway. The Tabbed Gateway is a top-level navigation bar for Help Integration, and all tabs, links, and text are controlled with the Configuration Utility. I think the Tabbed Gateway is a really cool and exciting feature for content launch, and I can't wait to hear your ideas for how to use it with your content. Let me know in comments or email!

    Read the article

  • Thoughts on the new JavaFX by Jim Connors

    - by Jacob Lehrbaum
    First, a brief editorial if I may. The upcoming JavaFX 2.0 platform has been getting an overwhelmingly positive reaction from the community so far. While the public sentiment seems to be cautiously optimistic, I've heard nothing but positive reactions from everyone I've spoken to about the platform. In fact, many of the early adopters of JavaFX have told us directly that they are very encouraged by the direction the platform is taking.

    One such early adopter is Oracle's own Jim Connors. In his day job, Jim is a principal sales consultant (basically an engineer who supports Oracle's sales efforts) in the New York area. However, Jim also co-wrote a book on JavaFX with Jim Clarke and Eric Bruno, and has spoken and conducted training sessions at events like the New York Java Developer Day, the Java Road Trip, and other events.

    In his thoughtful editorial, Jim discusses some of the reasons why he believes the new direction Oracle is taking JavaFX makes sense, including:
    - Better developer tools
    - Lower barriers to adoption -> better accessibility for existing Java developers
    - Improved performance
    - More flexibility (ability to use other dynamic languages, etc.)

    To read more about Jim's thoughts on the new JavaFX, check out his blog. Or if you want to learn more about the JavaFX platform, pick up a copy of his book. And if you still want to use JavaFX Script, you can check out Project Visage.

    Read the article

  • Windows Azure Evolution &ndash; Preview Developer Portal

    - by Shaun
    With the MEET Windows Azure event on June 7th, there are many new features and updates in the Windows Azure platform. In the coming several posts I will try to cover some of them, and in this first post I would like to take a quick walkthrough of the new preview developer portal.

    History of the Developer Portal

    If you have been working with Windows Azure since 2009 or 2010, you should remember the first version of the developer portal. It was built in HTML with very limited features, and that is the impression I still have from using the old one: the layout was not that attractive and you had very limited features. In November 2010, along with the SDK 1.3 release, the developer portal took a big jump. In order to provide more usability and features, it was rebuilt on Silverlight, so it ran like a desktop application with many windows, lists, commands and context menus. From 2010 till now, many features have been added to this portal, such as remote desktop, co-admin, virtual connect, VM role, etc., and the portal itself became more and more complicated.

    But using Silverlight brought some problems. The first one is browser capability. As you know, most mobile and tablet device browsers don't allow rich content plugins such as Flash and Silverlight. This means people cannot open and configure their Azure services from their iPad, iPhone, Windows Phone, etc., even though what they need may just be to restart a hosted service or view the status of their databases. Another problem is performance: Silverlight provides a rich experience to users, but it also needs more bandwidth. So in this upgrade, the preview developer portal goes back to HTML, with JavaScript, as a mobile-friendly, cross-browser, interactive web site.

    Preview Portal vs. Silverlight Portal

    Before I start to talk about the new preview portal, I'd better highlight that this is a PREVIEW version. This means that even though you can do almost everything that was available in the old one, plus some cool new features I will mention in the coming posts, some things are still being developed and migrated. So sometimes you need to switch back to the old portal. For example, in the preview portal there is no co-admin management function, no remote desktop function, and the SQL database management function will take you back to the old SQL Azure Management Portal. But as Microsoft said, these missing features will be moved into the preview portal over the next few months. Since the public URL of the developer portal, https://windows.azure.com/, has been changed to point to this preview one, to go back you need to click the preview button on top of the page and click the "Take me to the previous portal" link.

    Overview

    There are four parts in the preview portal. On the top is the header, which shows the account you are currently logged in with. If you click on the header it will show the top menu of Windows Azure, where you can navigate to the Windows Azure home page, the price information page, community and account, etc. The navigation bar is on the left-hand side, with the categories listed below:

    - ALL ITEMS: All items in your Windows Azure account, including the web sites, services, databases, etc.
    - WEB SITES: The web sites in your Windows Azure account. It will only show the web sites you have; the linked resources will be shown if you drill down into a web site.
    - VIRTUAL MACHINES: The virtual machines that you have deployed to Azure.
    - CLOUD SERVICES: All Windows Azure hosted services in your account.
    - SQL DATABASES: All SQL databases (SQL Azure) in your account.
    - STORAGE: All Windows Azure storage services in your account.
    - NETWORKS: The virtual networks (Windows Azure Connect) you have created.

    The available items are listed in the main part of the page based on which category is currently selected. If there is no item, it will show a quick-create link. At the bottom of the page is the command and information bar. Based on what is selected and what the user is doing, it shows the related information and commands. For example, in the image below, when I was creating a new web site, the information bar told me that my web site was being provisioned, and there were two commands in the command bar. Once it was ready, the command bar showed some commands that I could apply to my new web site.

    "Web Sites" is a new feature introduced along with this upgrade. It gives us an easier and quicker way to establish a website from scratch or from an existing library; I will cover it in more detail in the coming post. Also, in the command bar you can create a service by clicking the NEW button, which slides the creation panel up.

    Where's My Hosted Service?

    The Windows Azure Hosted Services have been renamed to Cloud Services. Creating a new service is very easy: just click the NEW button at the bottom of the page, and select CLOUD SERVICE and QUICK CREATE. This creates a blank hosted service without a deployment or certificate; it just needs you to specify the service URL and the affinity/region. The service will then be shown in the list. If you click the item, all its information will be shown in the main part. Since there is no package deployed to this service yet, we currently cannot see much information about it, but we can upload a package by using the command at the bottom. And as you can see, we can manage the configuration, instances and certificates, and we can scale up and down (change the VM size) and in and out (increase and decrease the instance count) for our service.

    Assuming I had created an ASP.NET MVC 3 web role project in Visual Studio and completed the package, I can then click the UPLOAD button on this page to deploy my package. In the window that pops up, I just specify my deployment name, package file and configuration file. I can also check the box below so that it will NOT warn me if there is only one instance in this deployment. Once we click the OK button, our package is uploaded and provisioned by the platform.

    After a while we can see from the information bar that the service is ready. We can find the basic information about this service and deployment on the dashboard page: for example, the usage overview diagram, status, URL, public IP address, etc. On the configure page we can view and change the CSCFG content, such as the monitoring setting, connection strings and OS family. On the scale page we can increase and decrease the count of the instances, and on the instances page we can view the status of all instances. If your service uses SQL databases and storage, they will be shown as linked resources under the linked resources page, and you can manage the certificates of this service under the certificates page as well.

    How About My Storage Services?

    Storage services can be managed by clicking the STORAGE link in the navigation bar, and we can create a new storage service with the NEW button.
    After specifying the storage name and region, it will be provisioned by the platform. If you want to copy or manage the storage keys, you can just click the Manage Keys button at the bottom, which is very easy. What I want to highlight here is that you can monitor your storage service by enabling the monitoring configuration. Click the storage item in the list and navigate to the configure page. As you can see on the page, you can enable monitoring for blob, table and queue, and you can also enable logging of any requests that come to the storage account. But as the tooltip on the page notes, enabling monitoring and logging will increase the usage of the storage account, which means it will increase your bill. So make sure you enable them appropriately.

    And My SQL Databases (SQL Azure)?

    The last thing I want to quickly introduce is SQL databases, formerly named SQL Azure. You can create a new SQL Database Server and a new database by clicking the ADD button under the SQL Databases navigation item. In the window that pops up, just specify the database name, the edition, size, collation and the server. You can select an existing SQL Database Server if you have one, or create a new one. If you chose to create a new server, there is another step where you need to specify the server login, password and region. Once it is ready you can manage your databases, as well as the servers, in the portal. For a particular server you can update the firewall settings on its Configure page.

    So, What Else?

    There are some other areas of the preview portal I didn't cover, such as virtual machines, virtual networks and web sites. I will talk about virtual machines and web sites in future separate posts. The virtual network is the Windows Azure Connect we are familiar with. But as I mentioned at the beginning of this post, the preview portal is still under development, and some features are not available here. For example, you cannot manage the co-admins of your subscriptions, you cannot open a remote desktop on your hosted services, and you cannot navigate directly to Windows Azure Service Bus, Access Control and Caching, which were formerly named Windows Azure AppFabric. In these cases you need to navigate back to the old portal, so in the coming several months we might need to use both of these two sites.

    Summary

    In this post I quickly introduced the new Windows Azure developer portal. Since things have been rearranged and renamed, I demonstrated some features that existed in the old portal, such as how to create and deploy a hosted service and how to provision a storage service and a SQL database. All features in the old portal have been, are being, or will be migrated into this new portal, but some of them are in a different category or page that we need to figure out.

    Hope this helps,
    Shaun

    All documents and related graphics and code are provided "AS IS" without warranty of any kind. Copyright © Shaun Ziyan Xu. This work is licensed under the Creative Commons License.

    Read the article

  • What's up with LDoms: Part 1 - Introduction & Basic Concepts

    - by Stefan Hinker
    LDoms - the correct name is Oracle VM Server for SPARC - have been around for quite a while now. But to my surprise, I get more and more requests to explain how they work or to give advice on how to make good use of them. This made me think that writing up a few articles discussing the different features would be a good idea. Now - I don't intend to rewrite the LDoms Admin Guide or to copy and reformat the (hopefully) well known "Beginners Guide to LDoms" by Tony Shoumack from 2007. Those documents are very recommendable - the Beginners Guide especially, although based on LDoms 1.0, is still a good place to begin with. However, LDoms have come a long way since then, and I hope to contribute to their adoption by discussing how they work and what features there are today. In this and the following posts, I will use the term "LDoms" as a common abbreviation for Oracle VM Server for SPARC, just because it's a lot shorter and easier to type (and presumably, read).

    So, just to get everyone on the same baseline, let's briefly discuss the basic concepts of virtualization with LDoms. LDoms make use of a hypervisor as a layer of abstraction between real, physical hardware and virtual hardware. This virtual hardware is then used to create a number of guest systems which each behave very similarly to a system running on bare metal: each has its own OBP, each will install its own copy of the Solaris OS, and each will see a certain amount of CPU, memory, disk and network resources available to it. Unlike some other type 1 hypervisors running on x86 hardware, the SPARC hypervisor is embedded in the system firmware and makes use both of supporting functions in the sun4v SPARC instruction set and of the overall CPU architecture to fulfill its function.

    The CMT architecture of the supporting CPUs (T1 through T4) provides a large number of cores and threads to the OS. For example, the current T4 CPU has eight cores, each running 8 threads, for a total of 64 threads per socket. To the OS, this looks like 64 CPUs. The SPARC hypervisor, when creating guest systems, simply assigns a certain number of these threads exclusively to one guest, thus avoiding the overhead of having to schedule OS threads to CPUs, as typical x86 hypervisors do. The hypervisor only assigns CPUs and then steps aside. It is not involved in the actual work being dispatched from the OS to the CPU; all it does is maintain isolation between different guests. Likewise, memory is assigned exclusively to individual guests. Here, the hypervisor provides generic mappings between the physical hardware addresses and the guest's view of memory. Again, the hypervisor is not involved in the actual memory access; it only maintains isolation between guests.

    During the initial setup of a system with LDoms, you start with one special domain, called the Control Domain. Initially, this domain owns all the hardware available in the system, including all CPUs, all RAM and all IO resources. If you were running the system un-virtualized, this would be what you'd be working with. To allow for guests, you first resize this initial domain (also called a primary domain in LDoms speak), assigning it a small amount of CPU and memory. This frees up most of the available CPU and memory resources for guest domains.

    IO is a little more complex, but very straightforward. When LDoms 1.0 first came out, the only way to provide IO to guest systems was to create virtual disk and network services and attach guests to these services.
    In the meantime, several different ways to connect guest domains to IO have been developed, the most recent one being SR-IOV support for network devices, released in version 2.2 of Oracle VM Server for SPARC. I will cover these more advanced features in detail later. For now, let's have a short look at the initial way IO was virtualized in LDoms.

    For virtualized IO, you create two services: one "Virtual Disk Service" or vds, and one "Virtual Switch" or vswitch. You can, of course, also create more of these, but that's more advanced than I want to cover in this introduction. These IO services connect real, physical IO resources like a disk LUN or a network port to the virtual devices that are assigned to guest domains. For disk IO, the normal case would be to connect a physical LUN (or some other storage option that I'll discuss later) to one specific guest. That guest would be assigned a virtual disk, which would appear to be just like a real LUN to the guest, while the IO is actually routed through the virtual disk service down to the physical device. For network, the vswitch acts very much like a real, physical Ethernet switch - you connect one physical port to it for outside connectivity and define one or more connections per guest, just like you would plug cables between a real switch and a real system. For completeness, there is another service that provides console access to guest domains, which mimics the behavior of serial terminal servers.

    The connections between the virtual devices on the guest's side and the virtual IO services in the primary domain are created by the hypervisor. It uses so-called "Logical Domain Channels" or LDCs to create point-to-point connections between all of these devices and services. These LDCs work very similarly to high-speed serial connections and are configured automatically whenever the Control Domain adds or removes virtual IO.

    To see all this in action, let's now look at a first example. I will start with a newly installed machine and configure the control domain so that it's ready to create guest systems. In a first step, after we've installed the software, let's start the virtual console service and downsize the primary domain:

        root@sun # ldm list
        NAME     STATE   FLAGS   CONS  VCPU  MEMORY   UTIL  UPTIME
        primary  active  -n-c--  UART  512   261632M  0.3%  2d 13h 58m
        root@sun # ldm add-vconscon port-range=5000-5100 \
                   primary-console primary
        root@sun # svcadm enable vntsd
        root@sun # svcs vntsd
        STATE   STIME    FMRI
        online  9:53:21  svc:/ldoms/vntsd:default
        root@sun # ldm set-vcpu 16 primary
        root@sun # ldm set-mau 1 primary
        root@sun # ldm start-reconf primary
        root@sun # ldm set-memory 7680m primary
        root@sun # ldm add-config initial
        root@sun # shutdown -y -g0 -i6

    So what have I done: I've defined a range of ports (5000-5100) for the virtual network terminal service and then started that service. The vntsd will later provide console connections to guest systems, very much like serial NTSs do in the physical world. Next, I assigned 16 vCPUs (on this platform, a T3-4, that's two cores) to the primary domain, freeing the rest up for future guest systems. I also assigned one MAU to this domain. A MAU is a crypto unit in the T3 CPU. These need to be explicitly assigned to domains, just like CPU or memory. (This is no longer the case with T4 systems, where crypto is always available everywhere.) Before I reassigned the memory, I started what's called a "delayed reconfiguration" session.
    That avoids actually doing the change right away, which would take a considerable amount of time in this case. Instead, I'll need to reboot once I'm all done. I've assigned 7680MB of RAM to the primary. That's 8GB less the 512MB which the hypervisor uses for its own private purposes. You can, depending on your needs, work with less. I'll spend a dedicated article on sizing, discussing the pros and cons in detail. Finally, just before the reboot, I saved my work on the ILOM, to make this configuration available after a powercycle of the box. (It'll always be available after a simple reboot, but the ILOM needs to know the configuration of the hypervisor after a power-cycle, before the primary domain is booted.)

    Now, let's create a first disk service and a first virtual switch which is connected to the physical network device igb2. We will later use these to connect virtual disks and virtual network ports of our guest systems to real-world storage and network:

        root@sun # ldm add-vds primary-vds primary
        root@sun # ldm add-vswitch net-dev=igb2 switch-primary primary

    You are free to choose whatever names you like for the virtual disk service and the virtual switch. I strongly recommend that you choose names that make sense to you and describe the function of each service in the context of your implementation. For the vswitch, for example, you could choose names like "admin-vswitch" or "production-network" etc.

    This already concludes the configuration of the control domain. We've freed up considerable amounts of CPU and RAM for guest systems and created the necessary infrastructure - console, vds and vswitch - so that guest systems can actually interact with the outside world. The system is now ready to create guests, which I'll describe in the next section (a short preview sketch follows the links below).

    For further reading, here are some recommendable links:
    - The LDoms 2.2 Admin Guide
    - The "Beginners Guide to LDoms"
    - The LDoms Information Center on MOS
    - LDoms on OTN
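    And, as promised, a small preview of the guest-creation step that the next post will walk through properly. This is only a minimal sketch under assumptions: the guest name, CPU and memory sizes, and the backing LUN path below are made-up placeholders, while the vds and vswitch names refer to the services created above.

        # Hypothetical example - guest name, sizes and the LUN path are placeholders
        root@sun # ldm add-domain guest1
        root@sun # ldm set-vcpu 8 guest1
        root@sun # ldm set-memory 4g guest1
        # One virtual network port, attached to the vswitch created above
        root@sun # ldm add-vnet vnet0 switch-primary guest1
        # Export a backing device through the virtual disk service, then
        # hand it to the guest as a virtual disk
        root@sun # ldm add-vdsdev /dev/dsk/c3t0d0s2 guest1-boot@primary-vds
        root@sun # ldm add-vdisk bootdisk guest1-boot@primary-vds guest1
        root@sun # ldm bind guest1
        root@sun # ldm start guest1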

    Read the article

  • New P6 Reporting Database R2

    - by mark.kromer
    Along with our recently announced GA release of P6 Analytics R1, you may have noticed that when you purchase P6 Analytics, we provide a restricted-use license for P6 Reporting Database R2. This represents an updated version of the previous P6 Reporting Database 6.2 and can be purchased individually on a per-CPU basis. Typically, you will want just the reporting database if you would like the P6 data warehouse components, such as the ETL, data models, ODS and star schemas, in order to report on that data with a reporting tool other than Oracle's: the P6 Analytics solution will only work on Oracle BI (OBI). Below are some examples of a simplistic matrix report that I built from the P6 Reporting Database using Microsoft SQL Server Reporting Services. This is the Report Builder tool, which is very similar to other report-building tools on the market today, such as Crystal Reports or Oracle BI Publisher. This is an example of what you can do (in a very simple format) by using the P6 Reporting Database without P6 Analytics.

    Here is a quick run-down of some of the key new features in P6 Reporting Database R2 that were added as enhancements to the 6.2 version:

    - 4 new star schemas (improved projects star, project history, resource utilization and resource allocation)
    - Improved ETL performance and reliability
    - P6 security is inherited at the star schema level
    - Custom P6 project, activity & resource codes are now available as customizable dimensions in the star schemas
    - Time-phased data down to the day is now available from the star schemas
    - An updated Operational Data Store (ODS) for operational reporting that includes the WBS hierarchy
    - The ODS now includes daily spreads for activity and resource assignments

    Read the article

  • WebLogic Weekly for June 20th, 2011

    - by james.bayer
    Welcome to the first edition of the WebLogic Weekly. The WebLogic Server team has been trying to extend our community outreach to new mediums like an Oracle WebLogic YouTube channel (how-to videos and feature showcases), Twitter (sharing WebLogic links, typically blogs), and a Facebook page, to do a better job of sharing information, providing learning alternatives to product documentation and, perhaps most importantly, collecting feedback from all of our users using the tools they prefer. This is our attempt to provide a round-up of what has been going on in WebLogic over the past week. If you would like to have something shared here, use the #weblogic tag on tweets, post on the Oracle WebLogic Facebook page, or comment on these blog entries.

    Blogs
    - WebLogic Server: Listing Groups of an Authenticated User by Steve Button
    - Weblogic, QBrowser And Topics by Eric Elzinga
    - Weblogic, Topics And (Non)-Durable Subscribers by Eric Elzinga
    - Database Web Service using Toplink DB Provider by Vishal Jain
    - WebLogic Server – Use the Execution Context ID in Applications – Lessons From Hansel and Gretel by James Bayer
    - Getting All Server's Lifecycle State in a Domain by Jay SenSharma
    - Steps to Move Messages From One Queue To Another Queue Using WLST (Updated Version) by Ravish Mody

    Events
    If you want to share a story of something innovative you or your organization has done with WebLogic Server or other Fusion Middleware, you could win a pass to Oracle OpenWorld 2011 and share the story there. See Ruma Sanyal's posting on the Application Grid blog for details. The deadline for submissions is July 22nd, 2011.

    Read the article

  • OTN Virtual Developer Day for WebLogic Server and WebLogic Developer Broadcasts

    - by mike.lehmann
    To keep the new year of 2011 moving for WebLogic Server, quite a series of hands-on technical online events and broadcasts is about to get underway from the WebLogic team. The first is Virtual Developer Day: Oracle WebLogic Server, an online event that combines hands-on labs with WebLogic Server through a series of VirtualBox images. This event will cover things like the new Java EE 6 capabilities you can use on WebLogic Server, using Maven and Hudson with WebLogic Server, developing with Web services on WebLogic Server and even upgrading from Oracle Application Server. Very technical, very hands-on. And it's global - multiple geographies covered.  Nice! James Bayer has put out a full agenda for this on his blog, as well as links on how to register. The second is a five-week weekly technical broadcast under the umbrella of Accelerate Your Development with Oracle WebLogic Suite, walking through topics like working with JPA, designing distributed caching strategies with WebLogic Server, advanced JMS topics and UI topics like jQuery, as well as RESTful Web services with Jersey and JAX-RS.  Again, the full agenda is available in James' blog if you are interested in attending, including a brief video introduction outlining in a bit more detail exactly what will be covered. Hopefully, between these two events and the release of WebLogic Server 10.3.4 earlier in January, we are kicking off 2011 in good fashion.  Looking forward to sharing more as we go forward in 2011.

    Read the article

  • Iterative Conversion

    - by stuart ramage
    Question Received: I am toying with the idea of migrating the current information first and the remainder of the history at a later date. I have heard that the conversion tool copes with this, but I haven't found any information on how it does so.

    Answer: The Toolkit will support iterative conversions as long as the original master data key tables (the CK_* tables) are not cleared down from Staging (the already converted transactional data would need to be cleared down) and the Production instance being migrated into is actually Production. (We have migrated into a pre-prod instance in the past, then unloaded this and loaded it into the real PROD instance, but this will not work for your situation: you need to be migrating directly into your intended environment.) In this case the migration tool will still know all about the original keys and the generated keys for the primary objects (Account, SA, etc.) and as such it will be able to link the data converted as part of a second pass onto these entities. It should be noted that this may result in the original opening balances potentially being displayed with an incorrect value (if we are talking about Financial Transactions) and also that care will have to be taken to ensure that all related objects are aligned (e.g. a Bill must have a set of bill segments, meter reads and financial transactions, and these entities cannot exist independently). It should also be noted that subsequent runs of the conversion tool would need to be 'trimmed' to ensure that they only do work on the objects affected. You would not want to revalidate and migrate all Person, Account, SA, SA/SP, SP and Premise details, since this information has already been processed, but you would definitely want to run the affected transactional record validation and keygen processes. There is no real "hard-and-fast" rule around this processing since it is specific to each implementation's needs, but the majority of the effort required should be detailed in the Conversion Tool section of the online help (under Administration / The Conversion Tool). The major rule is to ensure that you only run the steps and validation/keygen processes that you need, and do not do a complete rerun for your subsequent conversion.
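    As a rough illustration of what 'trimming' a second pass means in practice - the staging table names below are hypothetical stand-ins, not the actual Staging schema - the principle is to clear only the converted transactional data while leaving the master key tables in place:

    -- keep the CK_* master data key tables untouched, so the original
    -- and generated keys remain known to the migration tool;
    -- clear only the already-converted transactional staging data
    TRUNCATE TABLE stg_bill;          -- hypothetical staging table names
    TRUNCATE TABLE stg_bill_segment;
    TRUNCATE TABLE stg_ft;            -- financial transactions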

    Read the article

  • MySQL Connect Content Catalog Live

    - by Bertrand Matthelié
    The MySQL Connect Content Catalog is now live, and you can check out the great program the content committee put together for you. We received a lot of very good submissions during the call for papers and we’d like to thank you all again for those - it was a very difficult job to choose. Overall, across two days MySQL Connect will include:

    Keynotes, with speakers such as Oracle Chief Corporate Architect Edward Screven and Vice President of MySQL Engineering Tomas Ulin

    66 conference sessions, enabling you to hear from:
    - Oracle engineers on MySQL 5.6 new features, InnoDB, performance and scalability, security, NoSQL, MySQL Cluster…and more
    - MySQL users and customers including Facebook, Twitter, PayPal, Yahoo, Ticketmaster, and CERN
    - Internationally recognized MySQL community members and partners on topics such as performance, security or high availability

    6 Birds-of-a-Feather sessions, in which you’ll be able to engage in passionate discussions about replication, backup and other subjects, and help influence the MySQL roadmap

    8 Hands-On Labs designed to give you hands-on experience with MySQL replication, MySQL Cluster, the MySQL Performance Schema…and more

    Demo pods about MySQL Workbench, MySQL Cluster, MySQL Enterprise Edition and other technologies and services

    We’ll also have networking receptions on both Saturday and Sunday evening, enabling you to talk with the Oracle engineers developing and supporting the MySQL products, as well as with other users and customers. Additionally, you’ll have the opportunity to meet and learn from our partners in the exhibition hall. Some of the MySQL Connect speakers, such as Henrik Ingo and Andrew Morgan, have already blogged about their presence at MySQL Connect, and you can find more information about their sessions and their thoughts about the conference in their blogs. We also published an interview with Tomas Ulin a few weeks ago. In summary, don’t miss MySQL Connect! You only have about 3 weeks left to register with the early bird discount and save US$500. Don’t wait - Register Now! Interested in sponsorship and exhibit opportunities? You will find more information here.

    Read the article

  • OBIEE 11.1.1.5 or above: Admin Server as a single point of failure (SPOF) is REALLY not impacting OBIEE work

    - by Ahmed Awan
    Applies To: 11.1.1.5, 11.1.1.6

    The Admin Server as a single point of failure (SPOF) really does not impact OBIEE work. Setting the virtualize tag to true (in EM) to manage multiple LDAP providers enables failover and HA for authentication and authorization inside OBIEE.

    The following test cases were used to check the impact on OBIEE when the Admin Server is not available:

    a. Test 1: The Admin Server crashes. Scenario: all OBIEE components are up and running.
    b. Test 2: The Admin Server had not been started. Scenario: the OBIEE server bi_server1 is started, but the Admin Server isn’t.

    For more details on each of the above tests, click here to download the Test Results.

    Links to the official documentation:
    http://docs.oracle.com/cd/E23943_01/bi.1111/e10543/privileges.htm#BIESC6077
    http://docs.oracle.com/cd/E23943_01/bi.1111/e10543/privileges.htm#BABHFFEI
    http://docs.oracle.com/cd/E23943_01/bi.1111/e10543/authentication.htm#BIESC6075
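    For reference, the virtualize flag that EM sets ends up as a property on the identity store service instance in jps-config.xml. A minimal sketch of the effect - instance and provider names vary per installation, so treat this as an illustration rather than a file to copy:

    <serviceInstance name="idstore.ldap" provider="idstore.ldap.provider">
      <!-- enables libOVD so multiple LDAP providers are virtualized,
           giving failover/HA for authentication and authorization -->
      <property name="virtualize" value="true"/>
    </serviceInstance>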

    Read the article

  • 4th International SOA Symposium + 3rd International Cloud Symposium by Thomas Erl - call for presentations

    - by Jürgen Kress
    At the last SOA & Cloud Symposium by Thomas Erl, the SOA Partner Community had a great presence. The next conference takes place in April 2011 in Brazil - make sure you submit your papers. The International SOA and Cloud Symposium brings together lessons learned and emerging topics from SOA and Cloud projects, practitioners and experts. The two-day conference agenda will be organized into the following primary tracks:

    SOA Architecture & Design
    SOA & BPM
    Real World SOA Case Studies
    SOA & Cloud Security
    Real World Cloud Computing Case Studies
    REST & Service-Orientation
    BPM, BPMN & Service-Orientation
    Business of SOA
    SOA & Cloud: Infrastructure & Architecture
    Business of Cloud Computing

    Presentation Submissions
    The SOA and Cloud Symposium 2010 program committees invite submissions on all topics related to SOA and Cloud, including but not limited to those listed in the preceding track descriptions. While contributions from consultants and vendors are appreciated, product demonstrations or vendor showcases will not be accepted. All contributions must be accompanied by a biography that describes the SOA or Cloud Computing related experience of the presenter(s). Presentation proposals should be submitted by filling out the speaker form and sending the completed form to [email protected]. All submissions must be received no later than January 31, 2010. To download the speaker form, please click here. We are especially looking for Oracle SOA Suite and BPM Suite case studies! For additional calls for papers please visit our SOA Community Wiki.

    For more information on SOA Specialization and the SOA Partner Community please feel free to register at www.oracle.com/goto/emea/soa (OPN account required) Blog Twitter LinkedIn Mix Forum Wiki Website Technorati Tags: SOA Symposium,Cloud Symposium,Thomas Erl,SOA,SOA Suite,Oracle,Call for papers,OPN,BPM,Jürgen Kress

    Read the article

  • NRF Big Show 2011 -- Part 3

    - by David Dorf
    I'm back from the NRF show, having been one of the lucky people whose flight was not canceled. The show was very crowded, with a reported 20% increase in attendance, and everyone seemed in high spirits. After two years of sluggish retail sales, things are really picking up, and it was reflected in everyone's mood. The pop-up Disney Store in the Oracle booth was great and attracted lots of interest in their mobile POS. I know many attendees visited the Disney Store in Times Square to see the entire operation. It's an impressive two-story store that keeps kids engaged. The POS demonstration station, where most of our innovations were demoed, was always crowded. Unfortunately most of the demos used WiFi, and the signals from other booths prevented anything from working reliably. Nevertheless, the demo team did an excellent job walking people through the scenarios and explaining how shopping is being impacted by mobile, analytics, and RFID.

    Big Show Links
    Disney uncovers its store magic
    Top 10 Things You Missed at the NRF Big Show 2011
    Oracle Retail Stores Innovation Station at NRF Big Show 2011 (video)

    The buzz of the show was again around mobile solutions. Several companies are creating mobile POS using the iPod Touch, including integrations to Oracle POS for the following retailers:
    Disney Stores with InfoGain
    Victoria's Secret with InfoGain
    Urban Outfitters with Starmount
    The Gap with Global Bay

    Keeping with the mobile theme, the NRF released a revised version of their Mobile Blueprint at the show. It will be posted to the NRF site very soon. The alternate payments section had a major rewrite and now provides a great overview of proximity and remote payment technologies.

    NRF Mobile Blueprint Links
    New mobile blueprint provides fresh insights
    NRF Mobile Blueprint 2011 (slides)

    I hope to do some posts on some of the interesting companies I spoke with in the coming weeks.

    Read the article

  • ODI - Creating a Repository in a 12c Pluggable Database

    - by David Allan
    To install ODI 11g into an Oracle 12c pluggable database, one way is to connect using a TNS string to the pluggable database service that is executing. For example, when I installed my master repository, I used a JDBC URL such as:

    jdbc:oracle:thin:@(DESCRIPTION=(ADDRESS_LIST=(ADDRESS=(PROTOCOL=TCP)(HOST=mydbserver)(PORT=1522)))(CONNECT_DATA=(SERVER=DEDICATED)(SERVICE_NAME=PDBORA12.US.ORACLE.COM)))

    I used the above approach rather than the host:port:sid form, which is a common mechanism many users rely on to get up and going quickly. Below you can see the repository creation wizard in action; I used the 11g release and simply installed the master and work repository into my pluggable database. Choose your repository IDs wisely - I simply used the default, but you should be aware that these are key in larger deployments. The database in 12c has much tighter control over users and resources, so just getting the user created with sufficient resources on tablespaces etc. was a little more work in 12c. Once you have the repositories up and running, the fun starts with the 12c features. More to come.
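    For the user-creation step mentioned above, here is a minimal sketch of what that could look like. The tablespace and user names are placeholders, and the tablespace clause assumes Oracle Managed Files are configured:

    -- connect to the PDB service itself (as with the JDBC URL above),
    -- or switch containers from a CDB connection as a common user:
    ALTER SESSION SET CONTAINER = PDBORA12;

    CREATE TABLESPACE odi_data DATAFILE SIZE 512M AUTOEXTEND ON;

    CREATE USER dev_odi_repo IDENTIFIED BY "a_password"
      DEFAULT TABLESPACE odi_data
      QUOTA UNLIMITED ON odi_data;

    GRANT CONNECT, RESOURCE TO dev_odi_repo;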

    Read the article

  • Upcoming MySQL Events in Europe

    - by Bertrand Matthelié
    Oracle’s European MySQL team is actively running many events during the upcoming couple of months. We hope to see you there - Register Now!

    Scale with MySQL
    Are you looking to scale with MySQL? On-premise or in the cloud? Leveraging SQL and NoSQL access? Join us for a free Oracle seminar focusing on best practices for MySQL performance and scalability.
    April 25: London
    May 22: Berlin

    MySQL Enterprise Edition Workshop
    In this hands-on seminar we will present the MySQL Enterprise Edition management tools, under the guidance of Oracle’s MySQL experts providing hints and tips.
    May 8: Düsseldorf

    High Availability Solutions for MySQL
    Web-based and business-critical applications must typically be available 24/7. In addition to being very costly due to lost revenue opportunities, downtime can be extremely detrimental to customer loyalty, and can present regulatory issues if data is compromised. Join us for this seminar to better understand how to achieve high availability with MySQL.
    May 10: Helsinki
    May 23: Munich
    May 24: Baden-Dättwil (near Zürich)

    Read the article

  • Demantra USA Based Companies and SOX Compliance

    - by user702295
    A USA-based company is assessing Demantra Trade Promotion Management (TPM) capability. It appears that SOX is necessary in their case due to the nature of what TPM does and the necessity for auditability. Do we have any detail on SOX compliance for Demantra?

    Answer: SOX compliance with regards to IT:

    1. Requires auditing of data changes: done by whom, what, and when.
       a. Audit trail profiles can be set up for key financial series, and you can view them in audit trail reports.
       b. One piece of functionality we do not have, which is typically asked for, is user login history. We have only active sessions; history is not available.

    2. Segregation of duties.
       a. With respect to TPM, you could have the deduction and financial analyst for settlement be different from the promotion creator, promotion approver or sales team.
       b. The budget approver for funds can be different from the funds consumer.
       c. The promotion creator can be different from the promotion approver.
       d. For a US customer you may have to write some custom scripts to capture promotion status changes and produce an external report as part of compliance.

    One additional requirement is transparency of forward commitments entered into with retailers/distributors for trade spending and promotions. This is covered outside of Demantra - by Consumer Goods Trade Funds Analytics.

    Read the article

  • 5 New Java Champions

    - by Tori Wieldt
    The Java Champions have nominated and accepted five new members to their group: Jonas Bonér, James Strachan, Rickard Oberg, Régina ten Bruggencate, and Clara Ko. Congratulations, and we look forward to hearing more from each of them!

    Jonas Bonér (Sweden) is a Java entrepreneur, programmer, teacher, speaker and author. He is an active contributor to the open source community and most notably created the Akka project and the AspectWerkz Aspect-Oriented Programming (AOP) framework.

    James Strachan (UK) has more than 20 years' experience in enterprise software development, with a background in finance and middleware, and is a committer on a number of open source projects, including Apache Karaf, Maven, Lift and Jersey.

    Rickard Oberg (Malaysia) has worked on several open source projects that involve JEE development, such as JBoss, XDoclet and WebWork. He has also been the principal architect of the SiteVision CMS/portal platform, where he used AOP as the foundation. Now he works for Jayway, developing the Qi4j framework and the Composite Oriented Programming paradigm.

    Régina ten Bruggencate (Netherlands) is a senior Java developer for iProfs with 10-plus years of Java experience, mainly on enterprise applications. Régina is the current president of Duchess, and as such is responsible for the site and community. Duchess is a global organization for women in Java technology, currently with 350 members in over 50 countries.

    Clara Ko (Netherlands) is a freelance Java/J2EE professional living in Amsterdam. She has worked as a developer, architect, and project manager. She promotes the use of open source software and has led initiatives to adopt agile practices across multiple organizations. Clara is also a co-founder of Duchess.

    The Java Champions are an exclusive group of passionate Java technology and community leaders who are community-nominated and selected under a project sponsored by Oracle. Java Champions get the opportunity to provide feedback, ideas, and direction that will help Oracle grow the Java Platform. This interchange may be in the form of technical discussions and/or community-building activities with Oracle's Java Development and Developer Program teams. Full bios and details about the champions are on http://java-champions.java.net/.

    Read the article

  • WebCenter .NET Accelerator - Microsoft SharePoint Data via WSRP

    - by john.brunswick
    Platforms in the enterprise will never be homogeneous. As much as any vendor would enjoy having their single development or application technology exclusively adopted by customers, too much legacy, time, education, innovation and vertical business need exists to make using a single platform practical. Java and .NET are the two industry application platform heavyweights, and more often than not business users are leveraging various systems in their day-to-day activities that incorporate applications developed on both platforms. BEA Systems acquired Plumtree Software to complete their "liquid" view of data, stressing that regardless of the particular source system, heterogeneous data could interoperate not only through layers that allowed for data aggregation, but also at the "glass", or UI, layer. The technical components that allowed integration at the glass thrive today at Oracle, helping WebCenter provide a rich composite application framework. Oracle Ensemble and the Oracle .NET Application Accelerator allow WebCenter to consume and interact with the UI layers provided by .NET applications and a series of other technologies. The beauty of the .NET accelerator is that it can consume any .NET application and act as a Web Services for Remote Portlets (WSRP) producer. I recently had a chance to leverage the .NET accelerator to expose an ASP.NET 2.0 (C#) application in the WebCenter UI (pictured above) and wanted to share a few tips to help others get started with similar integrations. I was using two virtual machines for the exercise - one with Windows Server 2003, running SharePoint, and the other running WebCenter Spaces 11g. For my sample application data I ended up using SharePoint 2007 lists and calendars (MOSS 2007) to supply results, using a .NET API for SharePoint.

    Read the article

  • BPM 11g and Human Workflow Shadow Rows by Adam Desjardin

    - by JuergenKress
    During the OFM Forum last week, there were a few discussions around the relationship between the Human Workflow (WF_TASK*) tables in the SOA_INFRA schema and BPMN processes.  It is important to know how these are related because the relationship can have a performance impact.  We have seen this performance issue several times when BPMN processes are used to model high-volume system integrations without knowing all of the implications of using BPMN in this pattern.

    Most people assume that BPMN instances and their related data are stored in the CUBE_*, DLV_* and AUDIT_* tables in the same way that BPEL instances are stored, with additional data in the BPM_* tables as well.  The group of tables that is not usually considered, though, is the WF* tables that are used for Human Workflow.  The WFTASK table is used by all BPMN processes in order to support features such as process-level comments and attachments, whether those features are currently used in the process or not.

    For a standard human task that is created from a BPMN process, the following data is stored in the WFTASK table:
    • One row per human task that is created
    • COMPONENTTYPE = "Workflow"
    • TASKDEFINITIONID = Human Task ID (partition/CompositeName!Version/TaskName)
    • ACCESSKEY = NULL

    Read the complete article here.

    SOA & BPM Partner Community
    For regular information on Oracle SOA Suite become a member of the SOA & BPM Partner Community. For registration please visit www.oracle.com/goto/emea/soa (OPN account required). If you need support with your account please contact the Oracle Partner Business Center. Blog Twitter LinkedIn Facebook Wiki
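    To get a feel for what is accumulating in the WFTASK table in your own SOA_INFRA schema, a quick hedged check along these lines can help; WFTASK and its COMPONENTTYPE column are as described above, while interpreting the counts per component type is left to your environment:

    -- count Human Workflow rows per component type in SOA_INFRA
    SELECT componenttype, COUNT(*) AS task_rows
    FROM   wftask
    GROUP  BY componenttype
    ORDER  BY task_rows DESC;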

    Read the article
