Search Results

Search found 23581 results on 944 pages for 'oracle customer success'.


  • My search what the Cloud will mean for my Work, part 2

    - by Kay Sellenrode
    My experience with the cloud, and why work will change, not disappear. Until now I have had multiple experiences with the cloud, most of them good. I have worked on multiple cloud solutions in the past, but let me describe those as 0.x versions. For me the first real, serious cloud experience came a bit more than a year ago, when our company switched from an in-house server to Microsoft BPOS as a complete replacement. Since we are a small consultancy firm and don't have much else to do besides consulting, our IT requirements are quite simple: we need mail and storage space for our documents. With the in-house server we had multiple outages during a year, mostly through lack of administration. Being consultants in the field and hardly having time to maintain a server, BPOS was and still is the right solution for us. Since the migration we have had fewer outages and a much more robust solution. Have we run into issues with BPOS for our own environment? No, not that I'm aware of. Based on this experience I took a stance on the deployability of BPOS and cloud solutions: they are suitable for the MKB (Dutch for medium and small businesses). Most small businesses don't have enough work to justify a full-time IT admin, and hiring a service provider to maintain their own server might be even more costly than hiring an admin. So, seeing the capabilities of BPOS and the needs of most businesses, I see it as a great solution that gives the business a complete server-replacement solution for a fixed price per user, resulting in a clear budget for IT spending, something most small businesses have been looking for, for a long time. Right now I'm deploying BPOS with a customer, and I am running into some of the Cloud 1.0 issues. In my opinion BPOS is a good, working Cloud 1.0 solution. What do I mean by 1.0? Well, 1.0 is mostly a tested solution (unlike the 0.x versions), but it still has quite a few limitations caused by too little market experience. In my opinion this is also the reason why we don't see that many BPOS customers yet, and why I think Office 365 will make a huge difference. What I have seen of 365 shows me it is a Cloud 2.0 version, meaning it has all the needed features and is much more flexible for the customer. This is also why I see changes happening in my field of work: changes, not unemployment, due to cloud solutions. Cloud 1.0 solutions gave me the idea that if every customer adopted them I would be out of work. But in reality, Cloud 1.0 solutions are here just to establish the market needs. The Cloud 2.0 and higher versions will give the customer much more flexibility, but they also require a consultant. Where the 1.0 versions are simple to set up and maintain, the 2.0 solutions need more thought up front and afterwards. For example, BPOS in its 1.0 version brings you a very simplified Exchange 2007 solution, suitable for some customers; with Office 365 you receive an almost full-blown Exchange 2010 solution. I expect this to be even more customizable in the next version. In my search for the changes to my work I try to regularly write a post with my thoughts on the cloud and its impact on my work as a consultant. I'm also planning to present on this topic, so if anyone is interested in seeing me present on it, you're more than welcome to contact me.

    Read the article

  • JNDI Datasource Problem on Tomcat 6, Hibernate

    - by Asuman AKYILDIZ
    Hello, I am using Tomcat 6 as the application server, Struts-Hibernate, and MyEclipse 6.0. My application uses a JDBC driver, but I need to modify it to use a JNDI datasource. I followed the steps described in the Tomcat 6.0 how-to tutorial. I defined my resource in the Tomcat conf:

<Resource name="jdbc/ats" global="jdbc/ats" auth="Container" type="javax.sql.DataSource"
          driverClassName="oracle.jdbc.OracleDriver" url="jdbc:oracle:thin:@//localhost:1521/MISDEV"
          username="TEST" password="TEST" maxActive="20" maxIdle="10" maxWait="-1"
          validationQuery="SELECT 1 from dual" removeAbandoned="true" removeAbandonedTimeout="30"
          logAbandoned="false"/>

I referenced it in my application's web.xml:

<resource-ref>
    <description>Oracle Datasource example</description>
    <res-ref-name>jdbc/ats</res-ref-name>
    <res-type>javax.sql.DataSource</res-type>
    <res-auth>Container</res-auth>
</resource-ref>

And I defined the datasource and dialect in my hibernate.cfg.xml:

<property name="connection.datasource">java:comp/env/jdbc/ats</property>
<property name="dialect">org.hibernate.dialect.Oracle9Dialect</property>

But when I create the Hibernate session, it cannot open the connection:

09:18:11,322 ERROR JDBCExceptionReporter:72 - Connections could not be acquired from the underlying database!
org.hibernate.exception.GenericJDBCException: Cannot open connection

I also tried to set the properties at runtime:

Configuration configuration = new Configuration();
configuration.setProperty("hibernate.dialect", "org.hibernate.dialect.Oracle9Dialect");
//configuration.setProperty("hibernate.connection.datasource", "java:comp/env/jdbc/ats");
configuration.setProperty("hibernate.current_session_context_class", "thread");
configuration.setProperty("hibernate.connection.provider_class", "org.hibernate.connection.C3P0ConnectionProvider");
configuration.setProperty("hibernate.show_sql", "true");
sessionFactory = configuration.configure().buildSessionFactory();

It still does not open a connection. But when I use the JDBC driver directly, it works:

Configuration configuration = new Configuration();
configuration.setProperty("hibernate.dialect", "org.hibernate.dialect.Oracle9Dialect");
//configuration.setProperty("hibernate.connection.datasource", "java:comp/env/jdbc/ats");
configuration.setProperty("hibernate.connection.url", "jdbc:oracle:thin:@//localhost:1521/MISDEV");
configuration.setProperty("hibernate.connection.username", "test");
configuration.setProperty("hibernate.connection.password", "test");
configuration.setProperty("hibernate.connection.driver_class", "oracle.jdbc.OracleDriver");
configuration.setProperty("hibernate.transaction.factory_class", "org.hibernate.transaction.JDBCTransactionFactory");
configuration.setProperty("hibernate.current_session_context_class", "thread");
configuration.setProperty("hibernate.connection.provider_class", "org.hibernate.connection.C3P0ConnectionProvider");
configuration.setProperty("hibernate.show_sql", "true");
sessionFactory = configuration.configure().buildSessionFactory();

I have been searching for three days with no success. What may be the problem?
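
    For reference, the usual way to hand Hibernate a container-managed JNDI datasource is to name the datasource and drop the C3P0 provider entirely, so that Hibernate falls back to its datasource-based connection provider. The following is only a minimal sketch, assuming Hibernate 3.x and the jdbc/ats resource defined above; it is not a verified fix for the problem described in this post:

    import org.hibernate.SessionFactory;
    import org.hibernate.cfg.Configuration;

    Configuration configuration = new Configuration();
    // Point Hibernate at the container-managed pool instead of configuring C3P0.
    configuration.setProperty("hibernate.connection.datasource", "java:comp/env/jdbc/ats");
    configuration.setProperty("hibernate.dialect", "org.hibernate.dialect.Oracle9Dialect");
    configuration.setProperty("hibernate.current_session_context_class", "thread");
    configuration.setProperty("hibernate.show_sql", "true");
    // No hibernate.connection.provider_class here: with a datasource set,
    // Hibernate uses its DatasourceConnectionProvider by default.
    SessionFactory sessionFactory = configuration.configure().buildSessionFactory();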

    Read the article

  • How to Open Any Folder as a Project in the NetBeans Platform

    - by Geertjan
    Typically, as described in the NetBeans Project Type Tutorial, you'll define a project type based on the presence of a file (e.g., "project.xml" or "customer.txt" or something like that) in a folder. I.e., if the file is there, then its parent, i.e., the folder that contains the file, is a project and should be opened in your application. However, in some scenarios (as with the HTML5 project type introduced in NetBeans IDE 7.3), the user should be able to open absolutely any folder at all into the application. How to create a project type that is that liberal? Here you go, the only condition that needs to be true is that the selected item in the "Open Project" dialog is a folder, as defined in the "isProject" method below. Nothing else. That's it. If you select a folder, it will be opened in your application, displaying absolutely everything as-is (since below there's no ProjectLogicalView defined):

import java.beans.PropertyChangeListener;
import java.io.IOException;
import javax.swing.Icon;
import org.netbeans.api.project.Project;
import org.netbeans.api.project.ProjectInformation;
import org.netbeans.spi.project.ProjectFactory;
import org.netbeans.spi.project.ProjectState;
import org.openide.filesystems.FileObject;
import org.openide.loaders.DataFolder;
import org.openide.loaders.DataObjectNotFoundException;
import org.openide.nodes.FilterNode;
import org.openide.util.Exceptions;
import org.openide.util.ImageUtilities;
import org.openide.util.Lookup;
import org.openide.util.lookup.Lookups;
import org.openide.util.lookup.ServiceProvider;

@ServiceProvider(service = ProjectFactory.class)
public class FolderProjectFactory implements ProjectFactory {

    @Override
    public boolean isProject(FileObject projectDirectory) {
        return DataFolder.findFolder(projectDirectory) != null;
    }

    @Override
    public Project loadProject(FileObject dir, ProjectState state) throws IOException {
        return isProject(dir) ? new FolderProject(dir) : null;
    }

    @Override
    public void saveProject(Project prjct) throws IOException, ClassCastException {
        // leave unimplemented for the moment
    }

    private class FolderProject implements Project {

        private final FileObject projectDir;
        private Lookup lkp;

        private FolderProject(FileObject dir) {
            this.projectDir = dir;
        }

        @Override
        public FileObject getProjectDirectory() {
            return projectDir;
        }

        @Override
        public Lookup getLookup() {
            if (lkp == null) {
                lkp = Lookups.fixed(new Object[]{
                    new Info(),
                });
            }
            return lkp;
        }

        private final class Info implements ProjectInformation {

            @Override
            public Icon getIcon() {
                Icon icon = null;
                try {
                    icon = ImageUtilities.image2Icon(
                            new FilterNode(DataFolder.find(
                            getProjectDirectory()).getNodeDelegate()).getIcon(1));
                } catch (DataObjectNotFoundException ex) {
                    Exceptions.printStackTrace(ex);
                }
                return icon;
            }

            @Override
            public String getName() {
                return getProjectDirectory().getName();
            }

            @Override
            public String getDisplayName() {
                return getName();
            }

            @Override
            public void addPropertyChangeListener(PropertyChangeListener pcl) {
                //do nothing, won't change
            }

            @Override
            public void removePropertyChangeListener(PropertyChangeListener pcl) {
                //do nothing, won't change
            }

            @Override
            public Project getProject() {
                return FolderProject.this;
            }
        }
    }
}

Even the ProjectInformation implementation really isn't needed at all, since it provides nothing more than the icon in the "Open Project" dialog; the rest (i.e., the display name in the "Open Project" dialog) is provided by default regardless of whether you have a ProjectInformation implementation or not.
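
    As a footnote to that last point, here is a minimal sketch (an illustration, not part of the original listing) of the same folder project with no ProjectInformation registered at all; the "Open Project" dialog then simply falls back to its default icon and display-name handling. It reuses the imports from the listing above:

    private static class BareFolderProject implements Project {

        private final FileObject projectDir;

        BareFolderProject(FileObject dir) {
            this.projectDir = dir;
        }

        @Override
        public FileObject getProjectDirectory() {
            return projectDir;
        }

        @Override
        public Lookup getLookup() {
            // Nothing registered: no ProjectInformation, no ProjectLogicalView.
            return Lookup.EMPTY;
        }
    }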

    Read the article

  • Critical Threads Optimization

    - by Rafael Vanoni
    Background

One of the more common issues we've been seeing in the field is the growing difficulty in optimizing performance of multi-threaded applications. A good portion of this difficulty is due to the increasing complexity of modern processors that present various degrees of sharing relationships between hardware components. Take any current CMT processor and you'll find any number of CPUs sharing execution pipelines, floating point units, caches, etc. Consequently, applying the traditional recipe of one software thread for each CPU will have varying degrees of success, according to the layout of the underlying hardware. On top of this increasing complexity, we've also seen processors with features that aim at dynamically resourcing software threads according to their utilization. Intel's Turbo Boost allows processors to increase their operating frequency if there is enough thermal headroom available and the processor isn't fully utilized. More recently, the SPARC T4 processor introduced dynamic threading, allowing each core to dynamically allocate more resources to its active CPUs. Both cases are in essence recognizing that current processors will be running a wide mix of workloads; some will be designed for throughput, others for low latency. The hardware is providing mechanisms to dynamically resource threads according to their runtime behavior. We're very aware of these challenges in Solaris, and have been working to provide the best out-of-box performance while providing mechanisms to further optimize applications when necessary. The Critical Threads Optimization was introduced in Solaris 10 8/11 and Solaris 11 as one such mechanism that allows customers both to address issues caused by contention over shared hardware resources and to explicitly take advantage of features such as T4's dynamic threading.

What it is

The basic idea is to allow performance-critical threads to execute with more exclusive access to hardware resources. For example, when deploying an application that implements a producer/consumer model, it'll likely be advantageous to give the producer more exclusive access to the hardware instead of having it compete for resources with all the consumers. In the case of a T4-based system, we may want to have a producer running by itself on a single core and create one consumer for each of the remaining CPUs. With the Critical Threads Optimization we're extending the semantics of scheduling priorities (which thread should run first) to include priority over shared resources (which thread should have more "space"). Now the scheduler will not only run higher-priority threads first: it will also provide them with more exclusive access to hardware resources if they are available.

How does it work?

Using the previous example, in Solaris 11 all you'd have to do would be to place the producer in the Fixed Priority (FX) scheduling class at priority 60, or in the Real Time (RT) class at any priority, and Solaris will try to give it more "hardware space". On both Solaris 10 8/11 and Solaris 11 this can be achieved through the existing priocntl(1,2) and priocntlset(2) interfaces. If your application already assigns these priorities to performance-critical threads, there's no additional step you need to take. One important aspect of this optimization is that it requires some level of idleness in the system, either as a result of sizing the application beforehand or through periods of transient idleness during runtime. If the system is fully committed, the scheduler will put all the available CPUs to work.

Best practices

If you're an application developer, we encourage you to look into assigning the right priorities to the different threads in your application. Solaris provides different scheduling classes (Time Share, Interactive, Fair Share, Fixed Priority and Real Time) that offer different policies and behaviors. It is not always simple to figure out which set of threads is critical to the performance of a workload, and it may not always be feasible to take advantage of this optimization, but we believe that this can be correctly (and safely) done during development. Overall, the out-of-box performance in Solaris should meet your workload's requirements. If you are looking for that extra bit of performance, then the Critical Threads Optimization may be what you're looking for.

    Read the article

  • Two Candidates + One Job = Two Different Outcomes

    - by david.talamelli
    Recruiters have always headhunted (side note: I do not like this word; in general I think the type of people who use the phrase "headhunting" are the ones who are trying to sound more important than they likely are). Any serious recruiter engages in direct recruiting activity; it is part and parcel of the business, not something unique. With the uptake in social media over the past 4-5 years, we have seen an increase in the number of recruiters proactively reaching out to people about job opportunities. We have also seen this activity increase across all levels of hire, from help desk roles to C-level executives. While getting approached about a role can be a nice boost to a person's ego, do not let it give you an inflated sense of entitlement. The way that people handle themselves during these calls and subsequent interviews will have a large impact on their potential to land that job. Last week I spoke to two very different candidates, both about the same position and both with very different outcomes. On paper, Candidate #1 looked fantastic; they ticked many of the boxes that we were looking for. The person is working at a global IT company in a similar role to the one we were hiring for, but not one as senior as the role we had. This role would have been the perfect step to getting involved in more complex work for the person. Candidate #2 had less polished IT experience, ticked some of the boxes we were looking for, and on paper, in comparison, was not as close a fit as Candidate #1. It seemed like I was comparing apples and oranges. After speaking to both candidates it turned out I was indeed comparing apples and oranges, except the person better suited for our role was not the one I was expecting it to be. The first candidate looked great on paper, with the experience we were looking for, and appeared to be just right for the role, but after talking to them, they gave me the impression that they thought the world owed them. The impression I was left with was that they did not equate success with hard work; they seemed more interested in "what is in it for me". Rather than having a proper conversation with me, I was often cut off and asked to hurry it up when explaining our business, what we are doing, and so on. This person seemed more interested in the job title and money than in thinking about ways to make the role successful. Candidate #2, who had limited experience, made up for any perceived lack of experience and then some with a demonstrated motivation to succeed and do the things needed to make that happen. Candidate #2 made a great first impression, did not seem afraid of hard work, and demonstrated a "team player" attitude. In talking to them they kept me engaged, listened, and asked thoughtful questions that made me think this is the type of person who creates their own luck and who would thrive in a place like Oracle. Skills, capabilities, experience and a good resume can certainly get your foot in the door, but the wrong attitude or approach to work can close those opportunities just as easily. On the other hand, hard work, effort and a genuine work ethic may help open doors that would otherwise be closed to you. A resume with all the credentials gets you in the front door, but that is just the beginning of the process. It is not how we start the race that is important; it's how things end that matters most.

    Read the article

  • SQL Server Substr Equivalent

    - by Derek D.
    The SQL Server equivalent of Oracle's SUBSTR function is SUBSTRING, all spelled out. The function is otherwise identical to Oracle's.

DECLARE @BaseString varchar(max)
SET @BaseString = 'My grandmothers pillows are blue'
SELECT SUBSTRING ( @BaseString -- The base string to extract from
                 , 4           -- Start position
                 , 5           -- Length in characters
                 )

The above query returns the value 'grand'.

    Read the article

  • 25 reasons to attend JavaOne 2012

    - by arungupta
    The 17th JavaOne is just around the corner, less than 3 weeks away! If you are still thinking about registering for the conference, here are my top 25 reasons to attend:
1. Biggest gathering of Java geeks in the world
2. Latest and greatest content with 475 technical sessions/Birds of a Feather/hands-on lab sessions (about 20% more than last year)
3. Reduced number of keynotes to accommodate room for more technical content
4. No product pitches, exclusive focus on technology (I can tell you that from my experience as a track lead)
5. Sessions are divided into different in-depth technical tracks to focus on the Java technology that most interests you
6. Reruns of several popular sessions
7. Expert- and practitioner-led HOLs and tutorials
8. Rock star speakers, panelists, faculties, and instructors; meet several Java Champions and JUG leaders from all around the world
9. Engage with speakers and discuss with fellow developers in a casual setting with lots of networking space
10. A complete conference dedicated to Java Embedded
11. Extensive and fast-paced hands-on University Sessions on Sunday; learn while you are at the conference. You can register for Java University only or attend it with the conference.
12. Duke's Choice Awards recognize and celebrate the most innovative usage of the Java platform
13. DEMOgrounds and Exhibition Hall provide extensive opportunities for networking and engagement with the biggest names in Java (dedicated hours on each day as well)
14. Dedicated day for Java User Groups and Communities (GlassFish Community Event and NetBeans Community Day)
15. Multiple registration packages to meet your needs
16. Pay for 4 full conference passes and get a fifth one free
17. Students and bloggers get a free pass
18. Geek Bike Ride with fellow speakers and attendees in a casual setting
19. Greenest conference on the planet
20. Enjoy different cuisines in San Francisco, take a trip to Alcatraz or Napa Valley, or go running on the crooked street ;-) There are tons of tourist opportunities in/around San Francisco.
21. Tons of parties during the conference, in the evening, late night, and early mornings. Don't forget the Thirsty Bear Party!
22. Pearl Jam and Kings of Leon at the Appreciation Party
23. Oracle Music Festival at Yerba Buena Gardens
24. Grab the bragging rights: "I have attended JavaOne"!
25. Learn a new skill, build new connections, conceive a new idea and push the boundaries of Java in the most important educational and networking event of the year for Java developers and enthusiasts.
With so much geekgasm going on during the 5 days of JavaOne, is there a reason for you to wait? Register for the conference now! Grab your buttons, banners, and other collateral at the JavaOne Toolkit. You can also send an email to [email protected]. And reach out to us using different social media channels ... As a 13-year veteran of the conference, I can tell you this is something every Java developer must experience! I will be there, will you?

    Read the article

  • Sprinkle Some Magik on that Java Virtual Machine

    - by Jim Connors
    GE Energy, through its Smallworld subsidiary, has been providing geospatial software solutions to the utility and telco markets for over 20 years. One of the fundamental building blocks of their technology is a dynamically typed, object-oriented programming language called Magik. Like Java, Magik source code is compiled down to bytecodes that run on a virtual machine, in this case the Magik Virtual Machine. Throughout the years, GE has invested considerable engineering talent in the support and maintenance of this virtual machine. At the same time, vast energy and resources have been invested in the Java Virtual Machine. The question for GE has been whether to continue to make that investment on its own or to leverage the massive effort provided by the Java community. Utilizing the Java Virtual Machine instead of maintaining its own virtual machine would give GE more opportunity to focus on application solutions. At last count, there are dozens, perhaps hundreds, of examples of programming languages that have been hosted atop the Java Virtual Machine. Prior to the release of Java 7, that effort, although certainly possible, was generally less than optimal for languages like Magik because of their dynamic nature; Java, as a statically typed language, had little use for this capability. In the quest to be a more universal virtual machine, Java 7, via JSR-292, introduced a new bytecode called invokedynamic. In short, invokedynamic affords the more flexible method-call mechanism needed by dynamic languages like Magik. With this new capability, GE Energy has succeeded in hosting their Magik environment on top of the Java Virtual Machine. So you may ask, why would GE wish to do such a thing? The benefits are many: Competitors to GE Energy claimed that the Magik environment was proprietary; by utilizing the Java Virtual Machine, that argument gets put to bed, since JVM development is done in open source, where contributions are made worldwide by all types of organizations and individuals. The unprecedented wealth of class libraries and applications written for the Java platform is now opened up to the Magik/JVM platform as first-class citizens. In addition, the Magik/JVM solution vastly increases the developer pool to include the 9 million Java developers, the largest developer community on the planet. Applications running on the JVM showed substantial performance gains, in some cases as much as a 5x speed-up over the original Magik platform. Legacy Magik applications can still run on the original platform, and they can be seamlessly migrated to run on the JVM by simply recompiling the source code. GE can now leverage the huge Java community: undeniably the best virtual machine ever created, the Java platform is continually improved, poked, prodded and scrutinized by hundreds if not thousands of world-class developers, and as enhancements are made, GE automatically gains access to them. As Magik has little in the way of support for multi-threading, GE will benefit from current and future Java offerings (e.g. lambda expressions) that aim to further facilitate multi-core/multi-threaded application development. And as the JVM is available for many more platforms, it broadens the reach of Magik, including the potential to run on a class of devices never envisioned just a few short years ago; for example, Java SE compatible runtime environments are available for popular embedded ARM/Intel/PowerPC configurations that could theoretically host this software too.
As compared to other JVM language projects, the Magik integration differs in that it represents a serious commercial entity betting a sizable part of its business on the success of this effort. Expect to see announcements not only from General Electric, but from other organizations as they realize the benefits of utilizing the Java Virtual Machine.
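
    To make the invokedynamic point concrete, here is a small illustrative sketch (not GE's Magik runtime code) of the java.lang.invoke machinery that invokedynamic call sites are built on: a MutableCallSite whose target can be relinked at runtime, which is exactly the kind of flexible method-call mechanism a dynamically typed language needs when the right implementation is only known at call time:

    import java.lang.invoke.MethodHandle;
    import java.lang.invoke.MethodHandles;
    import java.lang.invoke.MethodType;
    import java.lang.invoke.MutableCallSite;

    public class DynamicCallSiteSketch {

        // Two possible implementations a dynamic language runtime might link to.
        static String greetEnglish(String name) { return "Hello, " + name; }
        static String greetDutch(String name)   { return "Hallo, " + name; }

        public static void main(String[] args) throws Throwable {
            MethodHandles.Lookup lookup = MethodHandles.lookup();
            MethodType type = MethodType.methodType(String.class, String.class);

            // A call site initially linked to one target...
            MutableCallSite site = new MutableCallSite(
                    lookup.findStatic(DynamicCallSiteSketch.class, "greetEnglish", type));
            MethodHandle invoker = site.dynamicInvoker();
            System.out.println((String) invoker.invokeExact("Magik")); // Hello, Magik

            // ...and relinked later without the caller changing: this is the runtime
            // flexibility that invokedynamic exposes to language implementers.
            site.setTarget(lookup.findStatic(DynamicCallSiteSketch.class, "greetDutch", type));
            System.out.println((String) invoker.invokeExact("Magik")); // Hallo, Magik
        }
    }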

    Read the article

  • What information must never appear in logs?

    - by MainMa
    I'm about to write the company guidelines about what must never appear in logs (the trace of an application). In fact, some developers try to include as much information as possible in the trace, making it risky to store those logs, and extremely dangerous to submit them, especially when the customer doesn't know this information is stored, because she never cared about this and never read the documentation and/or warning messages. For example, when dealing with files, some developers are tempted to trace the names of the files. If we trace everything on error, it will be easy to notice, for example, that a name appended to a directory is too long, and that the bug in the code was forgetting to check the length of the concatenated string. That is helpful, but this is sensitive data and must never appear in logs. In the same way, passwords, IP addresses and network information (MAC address, host name, etc.)¹, database accesses, direct input from the user, and stored business data must never appear in the trace. So what other types of information must be banished from the logs? Are there any guidelines already written which I can use? ¹ Obviously, I'm not talking about things such as IIS or Apache logs. What I'm talking about is the sort of information which is collected with the sole intent of debugging the application itself, not of tracing the activity of untrusted entities. Edit: Thank you for your answers and your comments. Since my question is not very precise, I'll try to answer the questions asked in the comments: What am I doing with the logs? The logs of the application may be stored locally, which means either in plain text on the hard disk of the local host, in a database (again in plain text), or in Windows Events. In every case, the concern is that those sources may not be safe enough. For example, when a customer runs an application and this application stores logs in a plain text file in the temp directory, anybody who has physical access to the PC can read those logs. The logs of the application may also be sent over the internet. For example, if a customer has an issue with an application, we can ask her to run this application in full-trace mode and to send us the log file. Also, some applications may automatically send the crash report to us (and even if there are warnings about sensitive data, in most cases customers don't read them). Am I talking about specific fields? No. I'm working on general business applications only, so the only sensitive data is business data. There is nothing related to health or other fields covered by specific regulations. But thank you for mentioning that; I probably should take a look at those fields for some clues about what I can include in the guidelines. Isn't it easier to encrypt the data? No. It would make every application much more complicated, especially if we want to use C# diagnostics and TraceSource. It would also require managing authorizations, which is not the easiest thing to do. Finally, if we are talking about the logs submitted to us by a customer, we must be able to read the logs, but without having access to sensitive data. So technically, it's easier to never include sensitive information in logs at all and to never worry about how and where those logs are stored.
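
    As a purely illustrative companion to such a guideline (the class and the masking rules below are hypothetical, not taken from any real product), one common pattern is to funnel trace messages through a small sanitizer that masks known-sensitive values before they ever reach the log:

    import java.util.regex.Pattern;

    // Hypothetical helper: masks obviously sensitive values before a message is traced.
    public final class LogSanitizer {

        // Illustrative patterns only; real rules would come from the company guidelines.
        private static final Pattern IPV4 =
                Pattern.compile("\\b\\d{1,3}(\\.\\d{1,3}){3}\\b");
        private static final Pattern PASSWORD_PAIR =
                Pattern.compile("(?i)(password\\s*=\\s*)\\S+");

        private LogSanitizer() {
        }

        public static String sanitize(String message) {
            String cleaned = IPV4.matcher(message).replaceAll("[ip-redacted]");
            cleaned = PASSWORD_PAIR.matcher(cleaned).replaceAll("$1[redacted]");
            return cleaned;
        }

        public static void main(String[] args) {
            // Prints: "Login failed from [ip-redacted], password=[redacted]"
            System.out.println(sanitize("Login failed from 192.168.0.12, password=hunter2"));
        }
    }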

    Read the article

  • Java Spotlight Episode 104: Devoxx 4 Kids

    - by Roger Brinkley
    Stephan Janssen talks about the new Devoxx 4 Kids that he launched this past weekend in Belgium. Right-click or Control-click to download this MP3 file. You can also subscribe to the Java Spotlight Podcast Feed to get the latest podcast automatically. If you use iTunes you can open iTunes and subscribe with this link: Java Spotlight Podcast in iTunes.

Show Notes

News: WebSocket JSR Early Draft (JSR 356); JAX-RS 2 Public Draft (JSR 339); JMS 2, JAX-RS 2, WebSocket, and JSON integrated in GlassFish 4 Promoted Builds; Java EE 7 Revised Scope - Q2 2013; JavaOne Content Available for Free; Please try Oracle's Java Uninstall Applet; OpenJDK Community and Project Scorecard; Experimental new utility to detect issues in javadoc comments; PermGen Elimination project is promoting; JDK bug migration milestone: JIRA now the system of record; Project Jigsaw: On the next train; New OpenJDK Projects: ThreeTen & Project Sumatra.

Events: Oct 15-17, JAX London, London, United Kingdom; Oct 20, Devoxx 4 Kids Français, Brussels, Belgium; Oct 22-23, Freescale Technology Forum - Japan, Tokyo, Japan; Oct 23-25, EclipseCon Europe, Ludwigsburg, Germany; Oct 30-Nov 1, Arm TechCon, Santa Clara, United States of America; Oct 31, JFall, Hart van Holland, Netherlands; Nov 2-3, JMaghreb, Rabat, Morocco; Nov 5-9, Øredev Developer Conference, Malmö, Sweden; Nov 13-17, Devoxx, Antwerp, Belgium; Nov 20-22, DOAG 2012, Nuremberg, Germany; Dec 3-5, jDays, Göteborg, Sweden; Dec 4-6, JavaOne Latin America, Sao Paolo, Brazil.

Feature Interview: Stephan Janssen is a serial entrepreneur who has founded several successful organizations such as the Belgian Java User Group (BeJUG) in 1996, JCS Int. in 1998, JavaPolis in 2002 and now Parleys.com in 2006. He has been using Java since its early releases in 1995, with experience developing and implementing real-world Java solutions in the finance and manufacturing industries. Today Stephan is the CTO of the Java Competence Center at RealDolmen. He was selected by BEA Systems as the first European (independent) BEA Technical Director. He has also been recognized by The Server Side as one of the 54 "Who is Who in Enterprise Java" in 2004. In 2005 Sun recognized his efforts for the Java community and engaged him in the Java Champion project. He has spoken at numerous Java and JUG conferences around the world. Devoxx 4 Kids. New to Java Programming Center -- Young Developers.

What's Cool: "Here is the draft proposal to add a public Base64 utility class for JDK8." Default methods for JDK8: request for code review. Raspberry Pi Model B now ships with 512MB of RAM. JDuchess roadshow on the island of Java, with Nety and Mila from Meruvian: first week roadshow, second week roadshow, third week part 1.

    Read the article

  • LCM says Smart List import is complete but actually its not...Here are probable reasons

    - by RahulS
    First of all, some basics of Smart Lists: administrators use Smart Lists to create custom drop-down lists that users access from data form cells. When clicking in cells whose members are associated with a Smart List (as a member property), users select items from drop-down lists instead of entering data. Users cannot type in cells that contain Smart Lists. Smart Lists display in cells as down arrows that expand when users click into the cells. The link below will give you more information on Smart Lists: http://download.oracle.com/docs/cd/E17236_01/epm.1112/hp_admin/enum_pg.html I got a simple query today: "The LCM process generates and indicates a status of 'Complete'; however, the 3,018 records do not appear in the Planning application. No error exists in the log to identify the problem." Things which can be checked in this case:
1. Spaces are not allowed in the Entry Name,
2. Spaces are allowed in the Entry Label,
3. The name must start with an alpha character or underscore,
4. Valid characters for the remaining part of the name must be alpha, numeric, or an underscore,
5. Enter a name that is unique within the Smart List,
6. I am not sure about the limits, but I have seen 22,000 members loaded fine,
7. The ID for every entry should be unique,
8. IDs need not be consecutive; e.g., they can go from 1 to 100, then 500 to 900, then 1900 to 4500, etc.
While importing the .xml file using LCM there were no errors in the Foundation and LCM migration logs, but when the HyS9PlanningSysErr.log was checked, a few errors were found, e.g.: The name Data_Coord_(Prod)_ACS is invalid, The name Sr_Dir_b+Medcd_Gvt_Rel_Sls_Mkt is invalid, The name entered is invalid. Enter a name that is unique within the smart list. Also, we can load Smart List dimensions and Smart List dimension entries using the /DS:HSP_SMARTLISTS parameter in the OutlineLoad utility:

OutlineLoad /A:acpt /U:admin /M /I:c:/smartlist_create1.csv /DS:HSP_SMARTLISTS /L:c:/OutlineLogs/outlineLoad.log /X:c:/OutlineLogs/outlineLoad.exc

SmartList Name, Operation, Label, Display Order, Missing Label, Use Form Missing Label, Entry ID, Entry Name, Entry Label
SL1,addsmartlist,SL1Label,,,,,,
SL1,addEntry,,,,,,entry1,entrylabel1
SL1,addEntry,,,,,,entry2,entrylabel2

Cheers..!!! Rahul S. http://www.facebook.com/pages/HyperionPlanning/117320818374228

    Read the article

  • How to integrate Java ME SDK 3.2 with NetBeans

    - by SungmoonCho
    Many people like to use the Java ME SDK with IDEs. We provided instructions on how to integrate the SDK with NetBeans through the download page and also through the release notes; however, let me explain it here once again with some screenshots.
1. Download the Java ME SDK and the NetBeans plugin from here.
2. Install the Java ME SDK first. You will have the emulator and the runtime on your machine. Also, please unarchive the NetBeans plugin somewhere.
3. Launch NetBeans.
4. Go to "Tools" - "Plugins".
5. Check out the "Installed" tab and check "Show details". If you see a previous version of the Java ME SDK Tools already installed, check those entries to uninstall them.
6. Go to the "Settings" tab.
7. Click "Add", and provide the location of the NetBeans plugin. In my case, it is "file:/C:/Users/sungcho/Downloads/nb-me-sdk-plugins-uc/updates.xml". Don't forget to add "updates.xml" at the end.
8. Click "Okay".
9. Click the "Available Plugins" tab.
10. If you scroll down, you will see three Java ME SDK plugins. Check the "Java ME SDK Tools" plugin, and also check the others as you desire.
11. Follow the instructions and install them.
12. Restart NetBeans.
13. That is it. Done. Now you will see Oracle Java ME SDK 3.2 in your Java Platform list.

    Read the article

  • SOASuite 11.1.1.4 : Error Logging into BPM11g Composer?

    - by angelo.santagata
    Hey all, I've just installed SOA Suite 11.1.1.4 and noticed a few funnies which people might hit; thankfully each of them has an easy solution.
1. Some applications are installed but don't appear to work. When you install SOA Suite you may notice that the following applications don't appear to work, although they do appear as deployments in the WebLogic Server Console, e.g. SOA Composer (composer), FMW Welcome Page Application (11.1.0.0.0) and some of the adapters. If they appear in the deployments list with the state "Installed" and not "Active", then it's likely that they haven't been targeted to a specific server. The solution is to target the application to the desired managed server, e.g. AdminServer in a development environment. This is done by selecting the application, going to the "Targets" tab, selecting all components, clicking the "Change Targets" button, and selecting the appropriate server. This change can be done without restarting the WebLogic Server.
2. You might find that when you try to log into the BPM Composer at http://machine:7001/bpm/composer, the login screen will appear but you can't log in. The error log might mention the following. The solution to this is twofold: a) when creating the domain, avoid using 127.0.0.1 as the listener address, or "Any Addresses"; if this is a development machine, create an alias in your /etc/hosts file and then use this alias in the domain creation wizard. E.g. my hosts file contains the entry "mypc 127.0.0.1", and that alias is then used in the Fusion Middleware Configuration wizard. It will then work! b) If it still doesn't, you can try setting the ServerURL attribute to http://mypc in the SoaInfraMBean instead of leaving it blank. This is accomplished using Enterprise Manager: use the System MBean Browser to navigate to Application Defined MBeans -> oracle.as.soainfra.config -> [server] -> SoaInfraConfig -> soa-infra, then change the value of 'ServerURL' to http://mypc. Failing that, give support a call.

    Read the article

  • Choice and setup of version control

    - by Peter M
    I am about to set up a new laptop and in the process transition to a new version control system as part of a general cleanup. Currently I use a centralized version control system (yes, it is VSS, and yes, I know all the pros and cons of that system, but as a single-user system it works well for me). I have very few requirements for a new system and I am free to choose among any of the current mainstream players, but cost constraints will push me towards OSS. Some of my requirements are: runs on a single machine (i.e. the laptop in question) under Windows; I am not sharing things with other developers or workers - this is more for my own historical benefit; I want to version source code, documentation and binary files; I have a large hierarchy of projects that are unrelated (see below); I have files within the hierarchy that don't need to be controlled (but could be); some projects use Visual Studio, so some integration there could be nice; there could be some sharing of files between jobs; and I generally only need a small amount of branching in code files. The directory hierarchy that I have at the moment is somewhat like:

Root
|
|--Customer #1
|  |
|  |--Job #1
|  |  |
|  |  |--Data files received from Customer for Job (not controlled)
|  |  |--Documentation files (controlled)
|  |  |--Project information files (not controlled - but could be)
|  |  |--Software Project Files (controlled)
|  |  |--Scratch dir for job (not controlled)
|  |
|  |--Job #2
|  |  (same structure as above)
|
|--Customer #2
|  |..
|
|--Customer #n
|..

Currently I have about 22 customers with differing numbers of projects underneath them. At the moment I have a single VSS repository based at the root of the directory structure. If I kept with a centralized system (i.e. SVN) I believe that I should keep the same approach and continue with a single repository based at the root dir. Is this a valid approach? However, if I move to a distributed tool then I am unsure of how I should handle the situation. My initial guess is that I should not have a repository based on the root of my entire directory structure - but that is a guess, so I really don't know how valid it is. Should I pitch a distributed approach at the Root, Customer, Job or sub-Job directory level? Also, what I am not clear on with distributed tools (and perhaps with SVN as well) is whether I can branch parts of a repository. For example, I can see branching source code in software projects as being useful, but branching my documentation as not being useful. So if I pitch a repository at the Job level, can I just branch the Software Project Files? Or would all files in that Job be branched? Every time I look at distributed tools I get a nagging feeling that they are not suited to my style of setup. I am uncomfortable with the idea of having to manually set up something like 50 to 80 separate repositories (if I pitch at the Job level, or 20+ if at the Customer level) within my directory hierarchy. This feeling also extends to having all those repositories scattered around as well - however, I do have a backup strategy that I trust, so this latter feeling is pretty much unfounded. So what advice can you all give me? Thanks in advance!

    Read the article

  • Expanding on requestaudit - Tracing who is doing what...and for how long

    - by Kyle Hatlestad
    One of the most helpful tracing sections in WebCenter Content (and one that is on by default) is the requestaudit tracing. This tracing section summarizes the top service requests happening in the server along with how they are performing. By default, it has 2 different rotations. One happens every 2 minutes (listing up to 5 services) and another happens every 60 minutes (listing up to 20 services). These traces provide the total time for all the requests against that service along with the number of requests and the average request time. This information can provide a good start in troubleshooting performance issues or tracking a particular issue.

>requestaudit/6 12.10 16:48:00.493 Audit Request Monitor !csMonitorTotalRequests,47,1,0.39009329676628113,0.21034042537212372,1
>requestaudit/6 12.10 16:48:00.509 Audit Request Monitor Request Audit Report over the last 120 Seconds for server wcc-base_4444****
requestaudit/6 12.10 16:48:00.509 Audit Request Monitor -Num Requests 47 Errors 1 Reqs/sec. 0.39009329676628113 Avg. Latency (secs) 0.21034042537212372 Max Thread Count 1
requestaudit/6 12.10 16:48:00.509 Audit Request Monitor 1 Service FLD_BROWSE Total Elapsed Time (secs) 3.5320000648498535 Num requests 10 Num errors 0 Avg. Latency (secs) 0.3531999886035919
requestaudit/6 12.10 16:48:00.509 Audit Request Monitor 2 Service GET_SEARCH_RESULTS Total Elapsed Time (secs) 2.694999933242798 Num requests 6 Num errors 0 Avg. Latency (secs) 0.4491666555404663
requestaudit/6 12.10 16:48:00.509 Audit Request Monitor 3 Service GET_DOC_PAGE Total Elapsed Time (secs) 1.8839999437332153 Num requests 5 Num errors 1 Avg. Latency (secs) 0.376800000667572
requestaudit/6 12.10 16:48:00.509 Audit Request Monitor 4 Service DOC_INFO Total Elapsed Time (secs) 0.4620000123977661 Num requests 3 Num errors 0 Avg. Latency (secs) 0.15399999916553497
requestaudit/6 12.10 16:48:00.509 Audit Request Monitor 5 Service GET_PERSONALIZED_JAVASCRIPT Total Elapsed Time (secs) 0.4099999964237213 Num requests 8 Num errors 0 Avg. Latency (secs) 0.051249999552965164
requestaudit/6 12.10 16:48:00.509 Audit Request Monitor ****End Audit Report*****

To change the default rotation or size of output, these can be set as configuration variables for the server:

RequestAuditIntervalSeconds1 – Used for the shorter of the two summary intervals (default is 120 seconds)
RequestAuditIntervalSeconds2 – Used for the longer of the two summary intervals (default is 3600 seconds)
RequestAuditListDepth1 – Number of services listed for the first request audit summary interval (default is 5)
RequestAuditListDepth2 – Number of services listed for the second request audit summary interval (default is 20)

If you want to get more granular, you can enable 'Full Verbose Tracing' from the System Audit Information page, and then you will get an audit entry for each and every service request.

>requestaudit/6 12.10 16:58:35.431 IdcServer-68 GET_USER_INFO [dUser=bob][StatusMessage=You are logged in as 'bob'.] 0.08765099942684174(secs)

What's nice is that it reports who executed the service and how long that particular request took. In some cases, depending on the service, additional information relevant to that service will be added to the tracing.

>requestaudit/6 12.10 17:00:44.727 IdcServer-81 GET_SEARCH_RESULTS [dUser=bob][QueryText=%28+dDocType+%3cmatches%3e+%60Document%60+%29][StatusCode=0][StatusMessage=Success] 0.4696030020713806(secs)

You can even go into more detail and insert any additional data into the tracing. You simply need to add this configuration variable with a comma-separated list of variables from local data to insert:

RequestAuditAdditionalVerboseFieldsList=TotalRows,path

In this case, for any search results, the number of items the user found is traced:

>requestaudit/6 12.10 17:15:28.665 IdcServer-36 GET_SEARCH_RESULTS [TotalRows=224][dUser=bob][QueryText=%28+dDocType+%3cmatches%3e+%60Application%60+%29][Sta...

I also recently ran into a case where services were being called from a client through RIDC. All of the services were being executed as the same user, but they wanted to correlate the requests coming from the client to the ones being executed on the server. So what we did was add a new field to the request audit list:

RequestAuditAdditionalVerboseFieldsList=ClientToken

And then in the RIDC client, ClientToken was added to the binder along with a unique value that could be traced for that request. Now they had a way of tracing on both ends and identifying exactly which client request resulted in which request on the server.
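
    As a rough sketch of that client-side piece (assuming the standard RIDC client library; the connection string, user, and query below are placeholders, not values from the original scenario), the ClientToken is just another value placed in the binder before the service call:

    import java.util.UUID;

    import oracle.stellent.ridc.IdcClient;
    import oracle.stellent.ridc.IdcClientManager;
    import oracle.stellent.ridc.IdcContext;
    import oracle.stellent.ridc.model.DataBinder;

    public class ClientTokenExample {
        public static void main(String[] args) throws Exception {
            IdcClientManager manager = new IdcClientManager();
            IdcClient client = manager.createClient("idc://contentserver:4444"); // placeholder host/port
            IdcContext context = new IdcContext("bob");

            DataBinder binder = client.createBinder();
            binder.putLocal("IdcService", "GET_SEARCH_RESULTS");
            binder.putLocal("QueryText", "dDocType <matches> `Document`");

            // Unique value that will show up in the server-side requestaudit trace,
            // because ClientToken was added to RequestAuditAdditionalVerboseFieldsList.
            binder.putLocal("ClientToken", UUID.randomUUID().toString());

            client.sendRequest(context, binder);
        }
    }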

    Read the article

  • OWB 11gR2 – OMB and File Editing

    - by David Allan
    Here we will see how we can use the IDE for editing OMB scripts. The 11gR2 release is based on the common Oracle platform IDE, used also by JDeveloper. It comes with a bunch of standard behavior for editing and rendering code. One of the lesser-known things is that if you drop a text file into OWB you can edit it. So you can drop your Tcl scripts right into OWB and edit them in place, and you don't need another IDE like Eclipse just for this task. Cool, so you have the file here. There may be no line numbers; you can toggle line numbers on by right-clicking in the gutter. If we edit the file within the OWB IDE, the save is a little different from normal. OWB doesn't normally manipulate files, so things like Ctrl-S save the OWB objects, but if you edit a file, closing the file will ask if you want to save it - check it out. Now we enter the realm of 'he who dares'… Note the IDE doesn't know about Tcl files out of the box, so you see above there is no syntax highlighting. The code is identified by the extension: .java is Java, .html is HTML, etc. With OWB, the OMB scripts are Tcl; we usually have a .tcl extension on these files. One of the things we can do to trick up the syntax highlighting is to simply rename the file to have a .java suffix; then all of a sudden we get syntax highlighting. See the illustration here where, side by side, we see the file with a .java extension and with a .tcl extension. Not ideal pretending to be .java, but it gets us something more useful than Notepad. We can then change the syntax highlighting such that we get Eclipse-like highlighting within the IDE from the Tools > Preferences option. You then get the Eclipse-like rendering, albeit using a little tweak on the file names… Might be useful if you are doing any kind of heavy-duty OMB script development and just want a single IDE. The OMBPlus panel is then at hand for executing and testing it out.

    Read the article

  • Welcome To The Nashorn Blog

    - by Homma
    This is the Japanese translation of jlaskey's "Welcome to the Nashorn Blog" post: https://blogs.oracle.com/nashorn/entry/welcome_to_the_nashorn_blog. It recaps the Nashorn-related activity at JavaOne: Georges Saab's announcement of the project, the "Nashorn: JavaScript on the JVM" sessions, the Q&A with Nashorn JavaScript team members Michel, Attila, and Marcus, Akhil's Node.jar material, the "Nashorn, Node, and Java Persistence" session with Doug Clarke and Akhil, and Attila's session on Dynalink and Java integration with Nashorn. It also mentions Sam Pullara's Mustache.js demonstration and John Ceccarelli's NetBeans integration demo, and closes by promising future how-to posts from the Nashorn team.

    Read the article

  • Comparing Isis, Google, and Paypal

    - by David Dorf
    Back in 2010 I was sure NFC would make great strides, but here we are two years later and NFC doesn't seem to be sticking. The obvious reason is the chicken-and-egg problem: retailers don't want to install the terminals until the phones support NFC, and vice versa. So consumers continue to sit on the sidelines waiting for either side to blink and make the necessary investment. In the meantime, EMV is looking for a way to sneak into the US with the help of the card brands. There are currently three major solutions battling in the marketplace. All three know that replacing the mag-stripe alone is not sufficient to move consumers; long term, it's the offers and loyalty programs combined with tendering that make NFC attractive. NFC solutions cross lots of barriers, so a strong partner system is required. The solutions need to include the carriers, card brands, banks, handset manufacturers, POS terminals, and most of all lots of merchants. Lots of coordination is necessary to make the solution seamless to the consumer. Google Wallet: Google's problem has always been that only the Nexus phone has an NFC chip that supports their wallet. There are a couple of additional phones out there now, but adoption is still slow. They acquired Zavers a while back to incorporate digital coupons, but the bulk of their users continue to be non-NFC. They have taken an open approach by not specifying particular payment brands. Google is piloting in San Francisco and New York, supporting both MasterCard PayPass and stored value. I suppose the other card brands may eventually follow. There's no cost for consumers or merchants -- Google will make money via targeted ads. Isis: Not long after Google announced its wallet, AT&T, Verizon, and T-Mobile announced a joint venture called Isis. They are in the unique position of owning the SIM in the phones they issue. At first it seemed Isis was a vehicle for the carriers to compete with the existing card brands, but Isis later switched to a generic wallet that supports the major card brands. Isis reportedly charges issuers a $5 fee per customer per year. Isis will pilot this summer in Salt Lake City and Austin. PayPal: PayPal, the clear winner in the online payment space beyond traditional credit cards, is trying to move into physical stores. After negotiations with Google to provide a wallet broke off, PayPal decided to avoid NFC altogether, at least for now, and focus on payments without any physical card or phone. By avoiding NFC, consumers don't need an NFC-enabled phone and merchants don't need a new reader. Consumers must enter their phone number and PIN in the merchant's existing device, or they can enter their PIN in the PayPal inStore app running on their phone and then show the merchant a unique barcode which authorizes payment. PayPal is free for consumers and charges a fee for merchants. It's not clear, at least to me, how PayPal handles fraudulent transactions and whether the consumer is protected. The wildcard is, of course, Apple. Their mobile technologies set the standard, so incorporating NFC chips would certainly accelerate adoption of many payment solutions. Their announcement today of the iOS Passbook is a step in the right direction, but it stops short of handling payments.

    Read the article

  • JavaOne User Group Sunday

    - by Tori Wieldt
    Before any "official" sessions of JavaOne 2012, the Java community was already sizzling. User Group Sunday was a great success, with several sessions offered by Java community members for anyone wanting to attend. Sessions were both about Java and about best practices for running a JUG. Technical sessions included "Autoscaling Web Java Applications: Handle Peak Traffic with Zero Downtime and Minimized Cost," "Using Java with HTML5 and CSS3," and "Gooey and Sticky Bits: Everything You Ever Wanted to Know About Java." Several sessions were about how to start and run a JUG, like "Getting Speakers, Finding Sponsors, Planning Events: A Day in the Life of a JUG" and "JCP and OpenJDK: Using the JUGs' "Adopt" Programs in Your Group." Badr ElHouari and Faiçal Boutaounte presented the session "Why Communities Are Important and How to Start One." They used the example of the Morocco JUG, which they started. Before the JUG, there was no "Java community," they explained. They shared their best practices, including: have fun and enjoy what you are doing; get a free venue for regular meetings, a university is a good choice; run a conference, it gives you visibility and brings in new members; and students are a great way to grow a JUG. Badr was proud to mention JMaghreb, a first-time conference that the Morocco JUG is hosting in November. They have secured sponsors and international speakers, and are able to offer a free conference for Java developers in North Africa. The session also included a free-flowing discussion about recruiters (OK to come to meetings, but not to dominate them), giving out email addresses (NEVER do it without permission), no-show rates (50% for free events) and the importance of good content (good speakers really help!). Trisha Gee, a member of the London Java Community (LJC), was one of the presenters for the session "Benefits of Open Source." She explained how open sourcing the LMAX Disruptor (a high-performance inter-thread messaging library) gave her company, LMAX, several benefits, including more users, more really good quality new hires, and more access to third-party companies. Being open source raised the visibility of the company and the product, which was good in many ways. "We hired six really good coders in three months," Gee said. They also got community contributors for their code and more cred with technologists. "We had been unsuccessful at getting access to executives from other companies in the high-performance space. But once we were open source, the techies at the company had heard of us, knew our code was good, and that opened lots of doors for us." So, instead of "giving away the secret sauce," by going open source LMAX gained many benefits. "It was a great day," said Bruno Souza, AKA the Brazilian Java Man; "the sessions were well attended and there was lots of good interaction." Sizzle and steak!

    Read the article

  • Latency Matters

    - by Frederic P
    A lot of interest in low latencies has been expressed within the financial services segment, most especially in stock trading applications, where every millisecond directly influences the profitability of the trader. These days, much of the trading is executed by software applications which are trained to respond to each other almost instantaneously. In fact, you could say that we are in an arms race where traders are using any and all options to cut down on the delay in executing transactions, even by moving physically closer to the trading venue. The Solaris OS network stack has traditionally been engineered for high throughput, at the expense of higher latencies. Knowledge of tuning parameters to redress the imbalance is critical for applications that are latency sensitive. In this blog we present how to further configure a default Oracle Solaris 10 installation to reduce network latency. There are in fact many parameters that can be altered, but the most effective ones are intr_blank_time and intr_blank_packets. These parameters affect on-board network throughput and latency on Solaris systems. If interrupt blanking is disabled, packets are processed by the driver as soon as they arrive, resulting in higher network throughput and lower latency, but with higher CPU utilization. With interrupt blanking disabled, processor utilization can be as high as 80-90% in some high-load web server environments. If interrupt blanking is enabled, packets are processed when the interrupt is issued. Enabling interrupt blanking can result in reduced processor utilization and network throughput, but higher network latency. Both parameters should be set at the same time. You can set these parameters by using the ndd command as follows:

# ndd -set /dev/eri intr_blank_time 0
# ndd -set /dev/eri intr_blank_packets 0

You can add them to the /etc/system file as follows:

set eri:intr_blank_time 0
set eri:intr_blank_packets 0

The value of the interrupt blanking parameter is a trade-off between network throughput and processor utilization. If higher processor utilization is acceptable for achieving higher network throughput, then disable interrupt blanking. If lower processor utilization is preferred and higher network latency is the penalty, then enable interrupt blanking. Our experience at ISV Engineering is that, under controlled experiments, the above settings result in a reduction of network latency by at least 50%; on a two-socket 3GHz Sun Fire X4170 M2 running Solaris 10 Update 9, the above settings improved ping-pong latency from 60µs to 25-30µs with the on-board NIC.

    Read the article

  • Synchronizing ODSEE and OUD

    - by Etienne Remillon
    When it comes to synchronizing between ODSEE and OUD, what are the best options? A few options are available:
    - use OUD's built-in capability called the Replication Gateway;
    - use our synchronization tool called Directory Integration Platform (DIP), part of Oracle Directory Services Plus;
    - manual export and import.
    Let's check the pros and cons of each method. The Replication Gateway is the natural, out-of-the-box solution to perform the task. We created it as a feature of OUD because it works at the replication protocol level: the gateway performs the required adaptation between ODSEE's replication protocol and OUD's. The benefit of doing this is that it provides strong consistency between the two types of directories. It fully leverages the conflict management implemented in the replication protocols to ensure that changes are applied in a coherent and ordered manner. It does not require specific modifications on existing ODSEE production instances, such as turning on the "retro changelog". Changes are propagated at near replication speed in both directions. The Replication Gateway can also synchronize information that is stored internally in the directory server, such as "xxxxx" account locking managed at the ODSEE server level and not via the nsyyyy attribute. The Replication Gateway does not require any specific tools or installation-specific procedures; it is managed like any other OUD component, with monitoring and configuration via the standard console. The Replication Gateway does not, however, perform data remapping or transformation between ODSEE and OUD. Using Directory Integration Platform as a component external to OUD brings flexibility in remapping and transformations between ODSEE and OUD. There is a price to pay in using DIP to perform the synchronization task: you will have to turn on the retro changelog to get access to changes on the ODSEE side (this will impact disk and CPU usage and performance, which could be a serious challenge for your existing ODSEE environment if you have not provisioned additional hardware and instances), and you will not benefit from conflict resolution management, which might then have to be addressed at the application level, something that is not always possible to implement. Using export and import seems very simple, but this method cannot ensure a highly available deployment with up-to-date entries on both sides. It can be used if full HA with up-to-date data is not needed (during synchronization time). It is often used when data cleaning needs to take place, to avoid polluting a new environment with old, unnecessary data.
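    If you choose the DIP route, the retro changelog prerequisite on the ODSEE side is a one-time configuration step. The sketch below is illustrative only and rests on assumptions: an ODSEE 11g instance managed with dsconf/dsadm, a placeholder host, port and instance path, and the retro-cl-enabled property name, all of which should be verified against your ODSEE documentation before use.

        #!/bin/sh
        # Illustrative sketch only. Assumptions: ODSEE 11g, dsconf/dsadm in
        # PATH, directory instance at /local/dsee7/instances/ds1, LDAP port 1389.

        # Enable the retro changelog so DIP can read changes from ODSEE.
        dsconf set-server-prop -h localhost -p 1389 retro-cl-enabled:on

        # Enabling the retro changelog typically requires an instance restart.
        dsadm restart /local/dsee7/instances/ds1

        # Verify that the property is now on.
        dsconf get-server-prop -h localhost -p 1389 retro-cl-enabled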

    Read the article

  • Portal Server comparisons / TCoO

    - by Scott
    We have a client who is looking to incorporate Oracle Portal into our next release. I'm newer to this team, but the team is currently working with Apache, so whichever portal server we choose will likely involve a bit of a learning curve. Is there any comparison (not marketing material) out there that discusses the differences between the servers and/or their total cost of ownership? With 5 developers, installing RAD becomes expensive, a cost I'd assume they'd wish to pass on to us with the change to Oracle Portal and WebSphere.

    Read the article

  • SmartView 11.1.2.2.103 - Support for MS Office 64 added

    - by THE
    (Thanks to Nancy, who shared this with me.) New for Smart View v11.1.2.2.103, Patch 14362638: 64-bit Microsoft Office is now supported. Information for 64-bit Microsoft Office installations: in this release, Smart View supports the 64-bit version of Microsoft Office. If you use 64-bit Office, please note the following:
    - Oracle provides separate Smart View installation files for 64-bit and 32-bit Office systems: smartview-x64.exe is the file for 64-bit Office installations; smartview.exe is the file for 32-bit Office installations.
    - The 64-bit version of Smart View pertains only to the 64-bit version of Microsoft Office and not to the version of the operating system. Customers with 64-bit operating systems and the 32-bit version of Microsoft Office should install the 32-bit version of Smart View.
    - You cannot install the 64-bit version of Smart View from EPM Workspace (13530466).
    - Although Planning Offline is supported for 64-bit operating systems, it is not supported for 64-bit Smart View installations. If you use Planning Offline with Smart View, you must use the 32-bit version of Smart View and the 32-bit version of Microsoft Office.
    - In 64-bit versions of Excel 2010 SP1, the presence of Smart View functions may cause Excel to terminate abruptly and may prevent the Copy Data Point and Paste Data Point functions from working. This is a Microsoft issue, and a service request has been filed with Microsoft. Workaround: until the Microsoft fix is available, use the 32-bit version of Smart View. (13606492)
    - The Smart View function migration utility is not supported on 64-bit Office. (14342207)

    Read the article

  • Feynman's inbox

    - by user12607414
    Here is Richard Feynman writing on the ease of criticizing theories, and the difficulty of forming them: The problem is not just to say something might be wrong, but to replace it by something — and that is not so easy. As soon as any really definite idea is substituted it becomes almost immediately apparent that it does not work. The second difficulty is that there is an infinite number of possibilities of these simple types. It is something like this. You are sitting working very hard, you have worked for a long time trying to open a safe. Then some Joe comes along who knows nothing about what you are doing, except that you are trying to open the safe. He says ‘Why don’t you try the combination 10:20:30?’ Because you are busy, you have tried a lot of things, maybe you have already tried 10:20:30. Maybe you know already that the middle number is 32 not 20. Maybe you know as a matter of fact that it is a five digit combination… So please do not send me any letters trying to tell me how the thing is going to work. I read them — I always read them to make sure that I have not already thought of what is suggested — but it takes too long to answer them, because they are usually in the class ‘try 10:20:30’. (“Seeking New Laws”, page 161 in The Character of Physical Law.) As a sometime designer (and longtime critic) of widely used computer systems, I have seen similar difficulties appear when anyone undertakes to publicly design a piece of software that may be used by many thousands of customers. (I have been on both sides of the fence, of course.) The design possibilities are endless, but the deep design problems are usually hidden beneath a mass of superfluous detail. The sheer numbers can be daunting. Even if only one customer out of a thousand feels a need to express a passionately held idea, it can take a long time to read all the mail. And it is a fact of life that many of those strong suggestions are only weakly supported by reason or evidence. Opinions are plentiful, but substantive research is time-consuming, and hence rare. A related phenomenon commonly seen with software is bike-shedding, where interlocutors focus on surface details like naming and syntax… or (come to think of it) like lock combinations. On the other hand, software is easier than quantum physics, and the population of people able to make substantial suggestions about software systems is several orders of magnitude bigger than Feynman’s circle of colleagues. My own work would be poorer without contributions — sometimes unsolicited, sometimes passionately urged on me — from the open source community. If a Nobel prize winner thought it was worthwhile to read his mail on the faint chance of learning a good idea, I am certainly not going to throw mine away. (In case anyone is still reading this, and is wondering what provoked a meditation on the quality of one’s inbox contents, I’ll simply point out that the volume has been very high, for many months, on the Lambda-Dev mailing list, where the next version of the Java language is being discussed. Bravo to those of my colleagues who are surfing that wave.) I started this note thinking there was an odd parallel between the life of the physicist and that of a software designer. On second thought, I’ll bet that is the story for anybody who works in public on something requiring special training. (And that would be pretty much anything worth doing.) In any case, Feynman saw it clearly and said it well.

    Read the article

  • Can't save data for a member in a data form

    - by RahulS
    Implied sharing is an old topic, and everyone knows the reasons for it and the solutions, but still, a little theory first: with Essbase implied sharing, some members are shared even if you do not explicitly set them as shared. These members are implied shared members. When an implied share relationship is created, each implied member assumes the other member's value. Essbase assumes (or implies) a shared member relationship in these situations:
    1. A parent has only one child.
    2. A parent has only one child that consolidates to the parent.
    In a Planning form that contains members with an implied sharing relationship, when a value is added for the parent, the child assumes the same value after the form is saved. Likewise, if a value is added for the child, the parent usually assumes the same value after the form is saved. For example, when a calculation script or load rule populates an implied share member, the other implied share member assumes the value of the member populated by the calculation script or load rule. The last value calculated or imported takes precedence. The result is the same whether you refer to the parent or the child as a variable in a calculation script. For more information have a look at: http://docs.oracle.com/cd/E17236_01/epm.1112/hp_admin_11122/ch14s11.html Now the issue we are going to talk about: we lose data on save even when the parent is dynamic calc and has a single child. A dynamic calc parent with a single child: if we design the form with this selection, in the data form we will find the parent below the child member. This is by design: whenever you make a selection using commands that select all the members below a parent, the children always appear before the parent. Let's try to enter data and save it, then try to change the way we selected the members. Now the question again: why this behavior?
    1. Data from a Planning data form passes to Essbase row by row.
    2. In the data form, the child member appears before the parent.
    3. So data first goes to Essbase for the child (SingleStoreChild).
    4. Then, when Planning passes the data for the parent, that row holds #Missing (no data),
    5. which overwrites the child's data to #Missing.
    PS: As we know, dynamic calc members are calculated on the fly and are not allocated any memory in Essbase. Here the parent was dynamic calc and, in the background, it pointed to the same storage as the child, so when Planning passed the data to Essbase for the second row it updated the child with missing data. (A little confusing; let me know if you need more explanation.)
    6. As one of the solutions, just change the order of appearance of the parent and child. Cheers..!!! Rahul S. https://www.facebook.com/pages/HyperionPlanning/117320818374228

    Read the article
