Search Results

Search found 37908 results on 1517 pages for 'anne carlson (oracle development)'.


  • Summary of the Solaris 11 webcast's live chat QnA session

    - by Karoly Vegh
    This is a followup post to the previous summary on the "What's new with Solaris 11 since the launch" webcast. That webcast had a chatroom with a live Questions and Answers session running. I went through the archive and compiled a list of the (IMHO) most relevant and most frequently asked questions, which I'd like to share. This is the first part, covering the QnA of Sessions I and II of the webcast; in a followup post we can look at the rest of the sessions if required - let me know in the comments. Also, should you have questions, as usual, feel free to ask them there, too. ...and here come the answered questions:

    Q: When will Exadata be based on Solaris in place of Oracle Enterprise Linux?
    A: Exadata offers both Solaris 11 and Oracle Enterprise Linux. The choice can be made at deployment time based on your OS needs.

    Q: What other benefits and features are available in Solaris 11 (cloud OS) compared to cloud-based Red Hat Linux and Windows?
    A: We suggest you check out our cloud white paper for a view of this. The OTN Solaris 11 page also has some good articles. Here are the links: http://www.oracle.com/technetwork/server-storage/solaris11/documentation/o11-106-sol11-cloud-501066.pdf and http://www.oracle.com/technetwork/server-storage/solaris11/overview/index.html

    Q: Will 11.1 have a more complete IPS repository for Oracle and FOSS software?
    A: Yes, we are adding additional packages to the various package repositories. Since Solaris 11 was launched, both the Oracle Solaris Studio tools and Oracle Solaris Cluster have been made available, along with numerous new FOSS packages. We will continue to add Oracle products and open source packages in the future.

    Q: Will Exadata be based on SPARC in place of Intel/AMD x86 in the near future?
    A: We can't publicly discuss futures, but we actually have a SPARC version of Exadata today: it's called SuperCluster. It is such a powerful multipurpose system that it has multiple personalities built into one system: Exadata, Exalogic, and it can be a general-purpose platform if you want.

    Q: Have I understood this right? Is Ksplice-style live patching coming to Solaris 11 too?
    A: We're looking at that for certain types of Solaris patches in the future.

    Q: Will there be a security framework like SST/JASS for Solaris 11?
    A: We can't talk about future projects on a public forum, but we recognize the need for SST/JASS and want to address it as soon as possible. On the other hand, a whole set of best practices is now embedded in Solaris 11 by default, so out of the box Solaris 11 should already address part of what SST/JASS gave you. (For example, we did a lot of work on improving auditing performance so that it can now be turned on by default.)

    Q: On x86, can I install VirtualBox in a Zone and use that to host other OSes?
    A: Yes, this was one of the first things we made sure would work when we acquired VirtualBox, back when we were still Sun Microsystems.

    Q: If I have a Solaris 11 control domain on a T-series, can I run a Solaris 10 LDom with Solaris 8 branded containers?
    A: Yes, you can.

    Q: Is Oracle Solaris free, or do we need to purchase it?
    A: Solaris is free; the entitlement to run it comes either with a Sun system (new or historical) or, for third-party systems, with a support contract. Note that for production use you will be expected to get a support contract. If you don't want to use the Solaris system (Sun or third-party) for production use (i.e. development), you can get an OTN license on the Oracle Technology Network website.

    Q: Will encryption and deduplication both work on a share?
    A: Yes, they should work at the same time.

    Q: What approaches does Solaris use to monitor usage?
    A: There are many different tools in Solaris to monitor usage. The main ones are the "stats" (vmstat, mpstat, prstat, ...), the kstat interface, and DTrace (to get details you couldn't see before). And then there are layered tools that can interface with these (Ops Center, BMC, CA, Tivoli, ...).

    Q: Apart from little-endian/big-endian issues, how easy is it to port Solaris applications from SPARC to x86 and vice versa?
    A: Very easy. Except for certain hardware-specific applications (those that use hardware-specific drivers), the same Oracle Solaris APIs exist on all architectures.

    Q: Is IPS-based patching aware of the fact that zones can reside on ZFS and move from one physical server to another?
    A: IPS is definitely aware of zones and uses ZFS to support boot environments for non-global zones in the same way as for the global zone. With respect to moving a zone from one physical server to another, Solaris 11 supports the same zone attach/detach method that was introduced in Solaris 10.

    Q: Is VNIC support in LDoms planned?
    A: This is currently being investigated for a future LDoms release.

    Q: Is it possible with the new patching system to build a system later with the same patch level as a system built a few months earlier?
    A: Yes, you can choose/define exactly which version should go to the system and it will always put the same bits in place. The technical answer is that you choose the version of the "entire" package you want on the system and the rest flows from there.

    Q: Are there plans to allow zones to add/remove zpools dynamically while running, in future updates?
    A: Work in this area is currently under investigation.

    Q: Any plans to release the Solaris 11 source code, i.e. OpenSolaris?
    A: We currently can't comment on publicly releasing the source code. If you need/want this access, please let your Oracle account team know.

    Q: What about VirtualBox and Solaris 11 for virtualization?
    A: Solaris 11 works great with VirtualBox, as both a client and a host system.

    Q: Will Oracle DB software eventually be supplied as IPS packages? When?
    A: We don't have a date yet, but this is actively being worked on.

    Q: What are the new artifacts in Oracle Solaris 11 compared to previous versions?
    A: There are quite a few, actually. The best start is our "Evaluate Solaris 11" page, where you can also find a Transition Guide: http://www.oracle.com/technetwork/server-storage/solaris11/overview/evaluate-1530234.html

    Q: So, this seems just like Red Hat's YUM environment?
    A: IPS offers certain features beyond those in YUM or other packaging systems. For example, IPS works with ZFS and Solaris Boot Environments to provide a safe environment for software lifecycle management, so that changes can be reverted by switching to an older boot environment (a short illustrative session follows at the end of this post).

    Q: With Zones on Solaris 11, can I do paravirtualization?
    A: The great thing about zones is that you don't *need* paravirtualization. You're making the same direct kernel calls that you would outside of a zone. It's an incredibly significant performance win over hypervisor-based virtualization.

    Q: Are zones/containers officially supported to run Oracle Databases? EBIZ?
    A: Hi Calvin, the answer is yes. Here is the support matrix for the DB: http://www.oracle.com/technetwork/database/virtualizationmatrix-172995.html

    Q: I've found some nasty bugs in Solaris 11 (one of them today) that have been fixed in community forks (i.e., Illumos). Will Oracle ever restart collaboration with the community?
    A: We continue to work with the community, just not as openly on all projects as we did before (for example, IPS is an open project), and the source of more than half of the Solaris packages is posted on our open source websites. I can't comment on what we will do in the future. With regard to bugs, please file them through the support organization and we will get them resolved.

    Q: Is zpool vdev removal on the fly now possible?
    A: This is actively being investigated, although we don't have a date for when the feature will be available.

    Q: Is pgstat now the official replacement for corestat?
    A: It's intended to provide similar functionality.

    Q: Where are the open source websites?
    A: For Oracle Solaris, visit http://www.oracle.com/technetwork/opensource/systems-solaris-1562786.html

    Q: As a cloud-scale virtualization, is it going to be easier to move zones between machines? Maybe even automatically in case of a hardware failure?
    A: Hi Gashaw, we already have customers that have implemented what they refer to as "flying zones" that they can move around very easily. They use Solaris Cluster to do this.

    Q: What about a VMware vMotion-like feature?
    A: We have secure live migration with both Logical Domains on SPARC T-series systems and Oracle VM on x86 systems.

    Q: When running Solaris 10/11 on an enterprise server with a lot of zones, what are the best-practice commands to show that the system is running fine (has enough hardware resources), for example CPU / memory / I/O / system load? What are the recommended values?
    A: For Solaris 11, look into the new zonestat(1M) command, which provides a great deal of information about zone utilization. In addition, new work is underway to provide additional observability in areas such as per-zone file system I/O.

    Q: Were Java optimizations done with Solaris 11? For x86 platforms too? Where can I find more detail about this?
    A: There is lots of work that goes into optimizing Java for Oracle Solaris 10 & 11, on both SPARC and x86. See http://www.oracle.com/technetwork/articles/servers-storage-dev/solarisforjavadevelop-168642.pdf

    Q: What is meant by "ZFS Shadow Migration"?
    A: It's a way to migrate data from another file system to ZFS: http://docs.oracle.com/cd/E23824_01/html/E24456/filesystem-3.html

    Q: Is Flash Archive available with S11?
    A: Flash Archive is not. There is a procedure for disaster recovery, and we're working on a modern archive-based deployment tool for a future update. The disaster recovery tool is here: http://www.oracle.com/technetwork/articles/servers-storage-admin/o11-091-sol-dis-recovery-489183.html You can also use the Distribution Constructor to build common golden images.

    Q: Will Solaris 11 be available on the ODA soon?
    A: The idea is under evaluation; we'll share your interest with the team.

    Q: What steps can be taken to ensure that breaches of security are identified quickly?
    A: There are a number of tools, including the "bart" tool and "pkg verify", to ensure that software has not been compromised. Solaris Audit can also be used to detect unauthorized access. You can also use Immutable Zones to protect against compromise. There is a wide variety of security tools, and I've covered only a few.

    Q: What is the relation between Solaris and Java 7 speed optimization?
    A: There is constant work done between the Oracle Solaris and Java teams on performance optimizations. See http://docs.oracle.com/javase/7/docs/technotes/guides/vm/performance-enhancements-7.html for examples.

    Q: What is the difference in the Solaris 11 installation compared to Solaris 10? Where can I find the document describing basic repository concepts?
    A: The best place to start is: http://www.oracle.com/technetwork/server-storage/solaris11/index.html

    Hope you found the post useful. For questions, input, or requests for the second half of the QnA, please use the comment section below.  -- charlie
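    As promised above, here is a minimal illustrative shell session showing the IPS/boot-environment safety net described in the YUM comparison. The commands are standard Solaris 11 administration commands; the boot environment name "solaris" is illustrative:

        # Update all packages; pkg typically clones the current boot environment
        # so the change can be reverted:
        pkg update
        # List boot environments; the newly created BE is marked active on reboot:
        beadm list
        # To roll back, re-activate the previous BE (name is illustrative) and reboot:
        beadm activate solaris
        init 6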

    Read the article

  • Best of OTN - Week of August 17th

    - by CassandraClark-OTN
    Architect Community
    The Top 3 most popular OTN ArchBeat video interviews of all time:
    - Oracle Coherence Community on Java.net | Brian Oliver and Randy Stafford [October 24, 2013] - Brian Oliver (Senior Principal Solutions Architect, Oracle Coherence) and Randy Stafford (Architect At-Large, Oracle Coherence Product Development) discuss the evolution of the Oracle Coherence Community on Java.net and how developers can actively participate in product development through Coherence Community open projects. Visit the Coherence Community at: https://java.net/projects/coherence.
    - The Raspberry Pi Java Carputer and Other Wonders | Simon Ritter [February 13, 2014] - Oracle lead Java evangelist Simon Ritter talks about his Raspberry Pi-based Java Carputer IoT project and other topics he presented at QCon London 2014.
    - Hot Features in Oracle APEX 5.0 | Joel Kallman [May 14, 2014] - Joel Kallman (Director, Software Development, Oracle) shares key points from his Great Lakes Oracle Conference 2014 session on new features in Oracle APEX 5.0.
    Friday Funny from OTN Architect Community Manager Bob Rhubart: Comedy legend Steve Martin entertains dogs in this 1976 clip from The Carol Burnett Show.
    Database Community
    OTN Database Community Home Page - See all tech articles, downloads, etc. related to Oracle Database, for DBAs and developers.
    Java Community
    JavaOne Blog - JRuby and JVM Languages at JavaOne! In this video interview, Charles shares the JRuby features he presented at the JVM Language Summit. He'll be at JavaOne; read the blog to see all the sessions.
    Java Source Blog - IoT: Wearables! Wearables are a subset of the Internet of Things that has gained a lot of attention. Learn more.
    I Love Java Facebook - Java Advanced Management Console demo - Watch as Jim Weaver, Java Technology Ambassador at Oracle, walks through a demonstration of the new Java Advanced Management Console (AMC) tool.
    Systems Community
    OTN Garage Blog - Why Wouldn't Root Be Able to Change a Zone's IP Address in Oracle Solaris 11? - Read and learn the answer.
    OTN Garage Facebook - Securing Your Cloud-Based Data Center with Oracle Solaris 11 - An overview of the security precautions a sysadmin needs to take to secure data in a cloud infrastructure, and how to implement them with the security features in Oracle Solaris 11.

    Read the article

  • How to go from Mainframe to the Cloud?

    - by Ruma Sanyal
    Running applications on IBM mainframes is expensive and complex, and it hinders IT responsiveness. The high costs of frequent forced upgrades, long integration cycles, and complex operations infrastructures can only be alleviated by migrating away from the mainframe environment. Further, data centers are planning for cloud enablement pinned on the principles of operating at significantly lower cost, very low upfront investment, commodity hardware, open standards-based systems, and the decoupling of hardware, infrastructure software, and business applications. These operating principles are in direct contrast with the principles of operating businesses on mainframes. By utilizing technologies such as Oracle Tuxedo, Oracle Coherence, and Oracle GoldenGate, businesses are able to quickly and safely migrate away from their IBM mainframe environments. Further, by running Oracle Tuxedo and Oracle Coherence on Oracle Exalogic, the first and only integrated cloud machine on the market, Oracle customers can not only run their applications on standards-based open systems, significantly cutting their time to market and costs, they can also start their journey of cloud-enabling their mainframe applications.
    - Oracle Tuxedo re-hosting tools and techniques can provide automated migration coverage for more than 95% of mainframe application assets, at a fraction of the cost
    - Oracle GoldenGate can migrate data from mainframe systems to open systems, eliminating the risks associated with data migration
    - Oracle Coherence hosts transactional data in memory, providing mainframe-like data performance and linear scalability
    - Running Oracle software on top of Oracle Exalogic empowers customers to start their journey of cloud-enabling their mainframe applications
    Join us in a series of events across the globe where you'll learn how you can build your enterprise cloud and add tremendous value to your business. In addition, meet with Oracle experts and your peers to discuss best practices and see how successful organizations are lowering total cost of ownership and achieving rapid returns by moving to the cloud. Register for the Oracle Fusion Middleware Forum event in a city near you!

    Read the article

  • The 'desktops' move to Oracle

    - by [email protected]
    The move to Oracle has been most interesting. Here we have an organization that is interested in what it is interested in - not so much in things that aren't 'core'. The legacy Sun desktop products are things that Oracle is interested in. To that end there are some changes coming to policies and products - and from my perspective they are all good. Very good. One of the changes to the product suite is that we are now referred to as part of the Virtualization team, falling under Oracle's Chief Corporate Architect, Edward Screven. Edward says that the products were a 'gem' found inside the great pile of stuff that was Sun. Another change is that StarOffice/OpenOffice has certainly been endorsed by Oracle; it also falls under Edward's purview, and there has been a push on to use it as opposed to... well... you know. It is not, however, part of the Virtualization team's product suite any more. There are some other really interesting changes coming that you will hear about quite soon. The big message for today, though, is that Sun Rays, Secure Global Desktop, VirtualBox, and Oracle VDI software are all still alive and kicking and moving forward. In fact, at the Oracle earnings call last week, Charles Phillips announced more significant wins with Sun Rays in the US Federal Government space. He could have talked about all kinds of legacy Sun products, but chose to mention Sun Rays in the first quarterly statement since the acquisition of Sun - you should see this as a very good sign indeed. More soon - until then...

    Read the article

  • Oracle Tutor: Learn Tutor in the comfort of your own home or office

    - by emily.chorba(at)oracle.com
    The primary challenge for companies faced with documenting policies and procedures is to realize that they can do this documentation in-house, with existing resources, using Oracle Tutor. Procedure documentation is a critical success component when supporting corporate governance or other regulatory compliance initiatives, and when implementing or upgrading to a new business application. More than 1000 Oracle Tutor customers worldwide have used Tutor to create, distribute, and maintain their business procedures. This is easily accomplished because of Tutor's:
    - Ease of use by those who have to write procedures (Microsoft Word based authoring)
    - Ease of company-wide implementation (complex document management activities are centralized)
    - Ease of use by workers who have to follow the procedures (play script format)
    - Ease of access by remote workers (web-enabled)
    Oracle University is offering live virtual Tutor classes! The class lasts four days, starting on a Tuesday and finishing on the Friday. This course is an introduction to the Oracle Tutor suite of products. It focuses on the policy and procedure writing feature set of the Tutor applications. Participants will learn about writing procedures and maintaining these particular process document types, all using the Tutor method. The next three classes are scheduled for:
    - April 19 - 22
    - May 31 - June 3
    - July 5 - 8
    You will learn to:
    - Write procedures
    - Create procedure flowcharts
    - Write support documents
    - Create Impact Analysis Reports
    - Create role-based Employee Manuals
    - Deploy online Employee Manuals on an intranet
    Enjoy learning Tutor in your local environment. Start the sign-up process from this link.
    Learn More: For more information about Tutor, visit Oracle.com or the Tutor Blog. Post your questions at the Tutor Forum.
    Emily Chorba
    Principal Product Manager, Oracle Tutor & BPM

    Read the article

  • Partner Webcast - Oracle VM Server for SPARC

    - by dmitry.nefedkin(at)oracle.com
    March 17th, 9am CET (10am EET)
    Oracle VM Server for SPARC (previously called Sun Logical Domains) provides highly efficient, enterprise-class virtualization capabilities for Oracle's SPARC T-Series servers. Oracle VM Server for SPARC allows you to create up to 128 virtual servers on one system to take advantage of the massive thread scale offered by SPARC T-Series servers and the Oracle Solaris operating system. And all this capability is available at no additional cost.
    Agenda:
    - Overview of VM technologies from Oracle
    - LDoms introduction
    - Values and benefits
    - Feature details
    - LDoms demo
    - Q&A
    Delivery Format: This FREE online LIVE eSeminar will be delivered over the Web and Conference Call. To register, please click here. For any questions please contact [email protected].

    Read the article

  • Web 2.0 Extension for ASP.NET

    - by Visual WebGui
    ASP.NET is now greatly extended to support line-of-business and data-centric applications, providing rich Web 2.0 user interfaces within a native web environment. The new capabilities enabled by the Visual WebGui extension turn Visual Studio into a rapid development tool for the web, leveraging the wide set of ASP.NET web infrastructure at runtime and extending its paradigms to support highly interactive applications. Taking advantage of the ASP.NET infrastructure using the native ASP.NET ISAPI filter: aspnet_isapi...(read more)

    Read the article

  • Including additional DLL’s in an MSBuild script for Module Packaging

    - by Chris Hammond
    Late last year I created a blog post and video about a new version of the module development template that I released on CodePlex. This new template uses MSBuild scripts instead of NAnt scripts to automate the packaging process for modules built with the template. The MSBuild script works well out of the box: to package your module you simply switch to RELEASE mode and then execute the build. If your project contains references to DLLs (in the website's BIN folder) that you also need to package...(read more)

    Read the article

  • New Visual Studio 2012 Project Templates for DotNetNuke

    - by Chris Hammond
    Earlier this month Microsoft put the bits for Visual Studio 2012 RTM up on MSDN Subscriber Downloads, and during the first two weeks of September they will officially release Visual Studio 2012. I started working with VS2012 late in the release candidate cycle, doing some DNN module development using my templates at http://christoctemplate.codeplex.com. These templates work fine in Visual Studio 2012 in my testing, but they still face the same problem that they had in Visual Studio 2008...(read more)

    Read the article

  • The Oracle Healthcare Team - On the Road at MEDICA 2012

    - by Anne Manke
    On 14 November 2012, this year's MEDICA opens its doors, and the Oracle Healthcare team (Daniela Wahrmann and Anne Manke) will be on site to meet customers, partners, and service providers. Will you be there too? Then let's talk over a coffee about current topics and trends, points of criticism, or future projects - a great opportunity to get to know each other in person or to deepen an existing contact. Call us or send us an email. We look forward to seeing you!

    Read the article

  • Instructions on how to configure a WebLogic Cluster and use it with Oracle Http Server

    - by Laurent Goldsztejn
    On October 17th I delivered a webcast on WebLogic Clustering that included a demo with Apache as the proxy server. I realized that many steps are needed to set up the configuration I used during the demo. The purpose of this article is to go through these steps to show how quickly and easily one can define a new cluster and then proxy requests via an Oracle Http Server (OHS).
    The domain configuration wizard offers the option to create a cluster. The administration console or WLST, the WebLogic Scripting Tool, can also be used to define a new cluster. A cluster can be created at any time, but the servers that will participate in it cannot be in a running state.
    Cluster Creation using the configuration wizard
    Network and architecture requirements need to be considered when choosing between unicast and multicast. Multicast Vs. Unicast with WebLogic Clustering is of great help in making the best decision between the two messaging modes. In addition, Configure Cluster offers details on each single field displayed above. After this initial configuration page, individual servers can be assigned to the newly created cluster, although servers can also be added to the cluster later. What is not recommended is for the Admin server to participate in a cluster, as the main purpose of the Admin server is to perform the administrative work for the domain. Servers need to be stopped before being assigned to a cluster. There is also no minimum number of servers that have to participate in the cluster.
    At this point the configuration should be done and the cluster created successfully. This can easily be verified from the console. Each clustered managed server can be launched to join the cluster. At startup the following messages should be logged for each clustered managed server:
    <Notice> <WeblogicServer> <BEA-000365> <Server state changed to STARTING>
    <Notice> <Cluster> <BEA-000197> <Listening for announcements from cluster using messaging_mode cluster messaging>
    <Notice> <Cluster> <BEA-000133> <Waiting to synchronize with other running members of cluster_name>
    It's time to try sending requests to the cluster, and we will do this with the help of Oracle Http Server playing the role of a proxy server, to demonstrate load balancing.
    Proxy Server configuration
    The first step is to download the WebLogic Server Web Server Plug-in, which enhances the web server by handling requests aimed at the WebLogic cluster. For our test, Oracle Http Server (OHS) will be used; however, plug-ins are also available for Apache Http Server, Microsoft Internet Information Server (IIS), Oracle iPlanet Web Server, or even WebLogic Server with the HttpClusterServlet. Once OHS is installed on the system, the configuration file, mod_wl_ohs.conf, needs to be altered to include the WebLogic proxy specifics.
    First of all, add the following directive to instruct Apache to load the WebLogic shared object module extracted from the plug-ins file just downloaded:
    LoadModule weblogic_module modules/mod_wl_ohs.so
    Then create an IfModule directive to encapsulate the following Location block, so that proxying is enabled by path (each request including /wls will be directed to the WebLogic cluster). You could also proxy requests by MIME type using MatchExpression in the Location block.
    <IfModule weblogic_module>
      <Location /wls>
        SetHandler weblogic-handler
        PathTrim /wls
        WebLogicCluster MS1_URL:port,MS2_URL:port
        Debug ON
        WLLogFile c:/tmp/global_proxy.log
        WLTempDir "c:/myTemp"
        DebugConfigInfo On
      </Location>
    </IfModule>
    SetHandler specifies the handler for the plug-in module. PathTrim instructs the plug-in to trim /wls from the URL before forwarding the request to the cluster. The list of WebLogic Servers defined in WebLogicCluster can contain a mixed set of clustered and single servers; however, the dynamic list returned for this parameter will only contain valid clustered servers, and may contain more servers if not all clustered servers are listed in WebLogicCluster.
    Testing proxy and load balancing
    It's time to start the OHS web server, which should at this point be configured correctly to proxy requests to the clustered servers. By default, round-robin is the load balancing strategy set by WebLogic. Testing the load balancing can easily be done by disabling cookies on your browser, given that a request containing a cookie attempts to connect to the primary server; if that attempt fails, the plug-in attempts to make a connection to the next available server in the list, in round-robin fashion. With cookies enabled, you could use two different browsers to test the load balancing with a JSP page that contains the following (a scripted alternative is sketched at the end of this article):
    <%@ page contentType="text/html; charset=iso-8859-1" language="java" %>
    <%
      String path = request.getContextPath();
      String getProtocol = request.getScheme();
      String getDomain = request.getServerName();
      String getPort = Integer.toString(request.getLocalPort());
      String getPath = getProtocol + "://" + getDomain + ":" + getPort + path + "/";
    %>
    <html>
    <body>
    Receiving Server <%=getPath%>
    </body>
    </html>
    Assuming that you name the JSP page Test.jsp and the webapp that contains it TestApp, your browsers should open the following URL: http://localhost/wls/TestApp/Test.jsp
    Each browser should connect to a different clustered server, and this simple JSP should confirm that. The webapp that contains the JSP needs to be deployed to the cluster. You can also verify that the load is correctly balanced by looking at the proxy log file. Each request generates a set of log entries that starts with:
    timestamp ================New Request:
    Each request is associated with a primary server, and a secondary server if one is available. For our test request, the following entries should appear in the log as well:
    Using Uri /wls/TestApp/Test.jsp
    After trimming path: '/TestApp/Test.jsp'
    The final request string is '/TestApp/Test.jsp'
    If an exception occurs, it should also be logged in the proxy log file with the prefix:
    timestamp *******Exception type
    WebLogicBridgeConfig
    DebugConfigInfo enables runtime statistics and the production of configuration information. For security purposes, this parameter should be turned off in production. http://webserver_host:port/path/xyz.jsp?__WebLogicBridgeConfig will display a proxy bridge page detailing the plug-in configuration, followed by runtime statistics, which can help in diagnosing issues along with analysis of the proxy log file. In our example the URL would be: http://localhost/wls/TestApp/Test.jsp?__WebLogicBridgeConfig
    The top section of that page shows the plug-in configuration; the bottom part contains runtime statistics.
    This entire plug-in configuration should be very similar for other web servers; what varies is the name of the proxy server configuration file. So, as you can see, it only takes a few minutes to configure a WebLogic cluster and get servers to join it.
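    As a scripted alternative to the two-browser test above: a cookie-less client never sticks to a primary server, so repeated requests should rotate across the cluster members. A minimal shell sketch using curl (the URL matches the example above; the loop count is arbitrary):

        # Each response should report a different receiving server as the
        # plug-in round-robins; curl sends no cookies by default, so no
        # session stickiness applies.
        for i in 1 2 3 4; do
          curl -s http://localhost/wls/TestApp/Test.jsp
        done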

    Read the article

  • Oracle Enterprise Manager Ops Center : Using Operational Profiles to Install Packages and other Content

    - by LeonShaner
    Oracle Enterprise Manager Ops Center provides numerous ways to deploy content, such as through OS Update Profiles, as part of an OS Provisioning plan, or through combinations of those and other "Install Software" capabilities of Deployment Plans. This short "how-to" blog will highlight an alternative way to deploy content, using Operational Profiles.
    Usually we think of Operational Profiles as a way to execute a simple "one-time" script to perform a basic system administration function, which can optionally be based on user input; however, Operational Profiles can be much more powerful than that. There is often more to performing an action than merely running a script -- sometimes configuration files, packages, binaries, other scripts, etc. are needed to perform the action, and sometimes the user would like to leave such content on the system for later use.
    For shell scripts and other content written to be generic enough to work on any flavor of UNIX, converting the same scripts and configuration files into a Solaris 10 SVR4 package, a Solaris 11 IPS package, and/or a Linux RPM might be seen as three times the work, for little appreciable gain. That is where using an Operational Profile to deploy simple scripts and other generic content can be very helpful. The approach is so powerful that pretty much any kind of content can be deployed using an Operational Profile, provided the files involved are not overly large and it is not necessary to convert the content into UNIX variant-specific formats.
    The basic formula for deploying content with an Operational Profile is as follows:
    - Begin with a traditional script header, which is a UNIX shell script that will be responsible for decoding and extracting content, copying files into the right places, and executing any other scripts and commands needed to install and configure that content.
    - Include steps to make the script platform-aware, to do the right thing for a given UNIX variant, or print a "sorry" message if the operator has somehow tried to run the Operational Profile on a system where the script is not designed to run. Ops Center can constrain execution by target type, so such checks at this level are an added safeguard, but they are also useful with the generic target type of "Operating System", where the admin wants the script to "do the right thing," whatever the UNIX variant.
    - Include helpful output to show script progress, and any other informational messages that can help the admin determine what has gone wrong in the case of a problem in script execution. Such messages will be shown in the job execution log.
    - Include necessary "clean up" steps for normal and error exit conditions.
    - Set non-zero exit codes when appropriate -- a non-zero exit code will cause an Operational Profile job to be marked failed, which is the admin's cue to look into the job details for diagnostic messages in the output from the script.
    That first bullet deserves some explanation. If Operational Profiles are usually simple "one-time" scripts and binary content is not allowed, then how does the actual content -- packages, binaries, and other scripts -- get delivered along with the script? More specifically, how does one include such content without needing to first create some kind of traditional package? All that is required is to simply encode the content and append it to the end of the Operational Profile. The header portion of the Operational Profile will need to contain the commands to decode the embedded content that has been appended to the bottom of the script.
    The header code can do whatever else is needed, and finally clean up any intermediate files that were created during the decoding and extraction of the content. One way to encode binary and other content for inclusion in a script is to use the "uuencode" utility to convert the content into simple base64 ASCII text -- a form that is suitable to be appended to an Operational Profile. The behavior of the "uudecode" utility is such that it will skip over any parts of the input that do not fit the uuencoded "begin" and "end" clauses. For that reason, your header script will be skipped over, and uudecode will find your embedded content, which you uuencode and paste at the end of the Operational Profile. You can have as many "begin" / "end" clauses as you need -- just separate each embedded file by an empty line between "begin" and "end" clauses. (A compact end-to-end packaging sketch appears at the end of this post.)
    Example: Install SUNWsneep and set the system serial number
    Script: deploySUNWsneep.sh ( <- right-click / save to download)
    Highlights:
    #!/bin/sh
    # Required variables:
    OC_SERIAL="$OC_SERIAL" # The user-supplied serial number for the asset
    ...
    The above is good practice, showing right up front what kind of input the Operational Profile will require. The right-hand side, where $OC_SERIAL appears in this example, will be filled in by Ops Center based on the user input at deployment time.
    The script goes on to restrict the use of the program to the intended OS type (Solaris 10 or older, in this example, but other content might be suitable for Solaris 11 or Linux -- it depends on the content and the script that will handle it). A temporary working directory is created, and then we have the command that decodes the embedded content from "self", which in scripting terms is $0 (a variable that expands to the name of the currently executing script):
    # Pass myself through uudecode, which will extract content to the current dir
    uudecode $0
    At that point, whatever content was appended in uuencoded form at the end of the script has been written out to the current directory. In this example that yields a file, SUNWsneep.7.0.zip, which the rest of the script proceeds to unzip and pkgadd, followed by running "/opt/SUNWsneep/bin/sneep -s $OC_SERIAL", which is the command that stores the system serial for future use by other programs such as Explorer. Don't get hung up on the example having used a pkgadd command. The content started as a zip file, but it could have been a tar.gz or any other file. This approach simply decodes the file; the header portion of the script has to make sense of the file and do the right thing (e.g. it's up to you).
    The script goes on to clean up after itself, whether or not the above was successful. Errors are echo'd by the script, and a non-zero exit code is set where appropriate. Second to last, we have:
    # just in case, exit explicitly, so that uuencoded content will not cause error
    OPCleanUP
    exit
    # The rest of the script is ignored, except by uudecode
    #
    # UUencoded content follows
    #
    # e.g. for each file needed,
    #  $ uuencode -m {source} {source} > {target}.uu5
    # then paste the {target}.uu5 files below
    # they will be extracted into the working dir at $TDIR
    #
    The commentary above also describes how to encode the content. Finally we have the uuencoded content:
    begin-base64 444 SUNWsneep.7.0.zip
    UEsDBBQAAAAIAPsRy0Di3vnukAAAAMcAAAAKABUAcmVhZG1lLnR4dFVUCQADOqnVT7up
    ...
    VXgAAFBLBQYAAAAAAgACAJEAAADTNwEAAAA=
    ====
    That last line of "====" is the base64 uuencode equivalent of a blank line, followed by "end", and as mentioned you can have as many begin/end clauses as you need. Just separate each embedded file by a blank line after each "====" and before each "begin-base64".
    Deploying the example Operational Profile looks like this (where I have pasted the system serial number into the required field):
    The job succeeded, but here is an example of the kind of diagnostic messages that the example script produces, and how Ops Center displays them in the job details:
    This same general approach could be used to deploy Explorer and other useful utilities and scripts.
    Please let us know what you think. Until next time...
    -- Leon
    Leon Shaner | Senior IT/Product Architect, Systems Management | Ops Center Engineering @ Oracle
    The views expressed on this [blog; Web site] are my own and do not necessarily reflect the views of Oracle. For more information, please go to the Oracle Enterprise Manager web page or follow us at: Twitter | Facebook | YouTube | Linkedin | Newsletter
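    To round out the encoding steps described in this post, here is a minimal shell sketch of assembling such a self-extracting Operational Profile from the example payload. The file name deploySUNWsneep_header.sh is hypothetical; it stands for the header script portion shown in the highlights above:

        # Encode the payload as base64 ASCII text (one .uu5 file per embedded file):
        uuencode -m SUNWsneep.7.0.zip SUNWsneep.7.0.zip > SUNWsneep.7.0.zip.uu5
        # Append the encoded payload to the header script to form the full profile body:
        cat deploySUNWsneep_header.sh SUNWsneep.7.0.zip.uu5 > deploySUNWsneep.sh
        # Sanity check: uudecode skips the header and extracts the embedded zip here:
        uudecode deploySUNWsneep.sh && ls -l SUNWsneep.7.0.zip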

    Read the article

  • What are the tools used by modern desktop/"native" application developers? [closed]

    - by kunjaa
    Besides the usual editor and debugger, what do modern desktop (Windows and Linux) application developers use for their development? I am more interested in profilers, code analyzers, memory analyzers, packaging tools, GUI frameworks, libraries, and any other handy tools and secrets that you couldn't live without. For example, as a web application developer, I have my Firebug and its extensions, Wireshark, jQuery and its extensions, client-side and server-side MVC frameworks, Selenium tests, jsfiddle, etc.
    Edit: OK, let us constrain this by saying you are using C++.

    Read the article

  • As the current draft stands, what is the most significant change the "National Strategy for Trusted Identities in Cyberspace" will provoke?

    - by mfg
    A current draft of the "National Strategy for Trusted Identities in Cyberspace" has been posted by the Department of Homeland Security. This question is not asking about privacy or constitutionality, but about how this act will impact developers' business models and development strategies. When the post was made I was reminded of Jeff's November blog post regarding an internet driver's license. Whether that is a perfect model or not, both approaches are attempting to handle a problem shared by both developers and end users: how do we establish an online identity?
    The question I ask here is: with respect to the various burdens that would be imposed on developers and users, what are some of the major, foreseeable implementation issues that will arise from the current U.S. Government's proposed solution?
    For a quick primer on the setup, jump to page 12 for the infrastructure components; here are two stand-outs:
    An Identity Provider (IDP) is responsible for the processes associated with enrolling a subject, and establishing and maintaining the digital identity associated with an individual or NPE. These processes include identity vetting and proofing, as well as revocation, suspension, and recovery of the digital identity. The IDP is responsible for issuing a credential, the information object or device used during a transaction to provide evidence of the subject’s identity; it may also provide linkage to authority, roles, rights, privileges, and other attributes. The credential can be stored on an identity medium, which is a device or object (physical or virtual) used for storing one or more credentials, claims, or attributes related to a subject. Identity media are widely available in many formats, such as smart cards, security chips embedded in PCs, cell phones, software based certificates, and USB devices. Selection of the appropriate credential is implementation specific and dependent on the risk tolerance of the participating entities.
    Here are the first actionable components considered in the draft:
    Action 1: Designate a Federal Agency to Lead the Public/Private Sector Efforts Associated with Achieving the Goals of the Strategy
    Action 2: Develop a Shared, Comprehensive Public/Private Sector Implementation Plan
    Action 3: Accelerate the Expansion of Federal Services, Pilots, and Policies that Align with the Identity Ecosystem
    Action 4: Work Among the Public/Private Sectors to Implement Enhanced Privacy Protections
    Action 5: Coordinate the Development and Refinement of Risk Models and Interoperability Standards
    Action 6: Address the Liability Concerns of Service Providers and Individuals
    Action 7: Perform Outreach and Awareness Across all Stakeholders
    Action 8: Continue Collaborating in International Efforts
    Action 9: Identify Other Means to Drive Adoption of the Identity Ecosystem across the Nation

    Read the article

  • Extend Your Applications Your Way: Oracle OpenWorld Live Poll Results

    - by Applications User Experience
    Lydia Naylor, Oracle Applications User Experience Manager
    At OpenWorld 2012, I attended one of our team's very exciting sessions: "Extend Your Applications, Your Way". It was clear that customers were engaged by the topics presented. Not only did we see many heads enthusiastically nodding in agreement during the presentation, and witness a large crowd surround our speakers Killian Evers, Kristin Desmond and Greg Nerpouni afterwards, but we can prove it…with data!
    Figure 1. Killian Evers, Kristin Desmond, and Greg Nerpouni of Oracle at the OOW 2012 session.
    At the beginning of our OOW 2012 journey, Greg Nerpouni, Fusion HCM Principal Product Manager, told me he really wanted to get feedback from the audience on our extensibility direction. Initially, we were thinking of doing a group activity at the OOW UX labs events that we hold every year, but Greg was adamant - he wanted "real-time" feedback. So, after a little tinkering, we came up with a way to use an online survey tool, a simple QR code (Quick Response code: a matrix barcode that can include information like URLs and can be read by mobile device cameras), and the audience's mobile devices to do just that.
    Figure 2. Actual QR code for the survey
    Prior to the session, we developed a short survey in Vovici (an online survey tool), with questions to gather feedback on certain points in the presentation, as well as demographic data from our participants. We used Vovici's feature to generate a mobile HTML version of the survey. At the session, attendees accessed the survey by simply scanning a QR code or typing in a TinyURL (a shorthand web address that is easily accessible through mobile devices). Killian, Kristin and Greg paused at certain points during the session and asked participants to answer a few survey questions about what they had just presented.
    Figure 3. Session survey deployed on a mobile phone
    The nice thing about Vovici's survey tool is that you can see the data in real time as participants respond to questions - so we knew during the session that our direction was not only on track but hitting the mark, fulfilling Greg's request. We had planned on showing the live polling results to the audience at the end of the presentation, but it ran just a little over time and we were gently nudged out of the room by the session attendants. We've included a quick summary below, and this link to the full results for your enjoyment.
    Figure 4. Most important extensions to Fusion Applications
    So what did participants think of our direction for extensibility? A total of 94% agreed that it was an improvement on their current process. The vast majority, 80%, concurred that the extensibility model accounts for the major roles involved: end user, business systems analyst and programmer. Attendees suggested a few supporting roles, such as systems administrator, data architect and integrator. Customers and partners in the audience verified that Oracle's Fusion Composers allow them to make changes in the most common areas they need to: user interface, business processes, reporting and analytics. Integrations were also suggested. All top 10 things customers can do on a page rated highly in importance, with all but two getting an average rating above 4.4 on a 5-point scale. The kinds of layout changes our composers allow customers to make align well with customers' needs. The most common were adding columns to a table (94%) and resizing regions and drag-and-drop content (both selected by 88% of participants).
We want to thank the attendees of the session for allowing us another great opportunity to gather valuable feedback from our customers! If you didn’t have a chance to attend the session, we will provide a link to the OOW presentation when it becomes available.

    Read the article

  • Editor's Notebook - Social Aura: Insights from the Oracle Social Media Summit

    - by user462779
    Panelists talk social marketing at the Oracle Social Media Summit.
    On November 14, I traveled to Las Vegas for the first-ever Oracle Social Media Summit. The two-day event featured an impressive collection of social media luminaries, including David Kirkpatrick (founder and CEO of Techonomy Media and author of The Facebook Effect), John Yi (Head of Marketing Partnerships, Facebook), Matt Dickman (EVP of Social Business Innovation, Weber Shandwick), and Lyndsay Iorio (Social Media & Communications Manager, NBC Sports Group), among others. It was also a great opportunity to talk shop with some of our new Vitrue and Involver colleagues, who had been returning great social media results even before their companies were acquired by Oracle.
    I was live tweeting the event from @OracleProfit, which was great for those who wanted to follow along with the proceedings from the comfort of their office or blackjack table. But I've also found over the years that live tweeting an event is a handy way to take notes: I can sift back through my record of what people said, or thoughts I had at the time, and organize the Twitter messages into some kind of summary account of the proceedings. I've had nearly a month to reflect on the presentations and conversations at the event, and a few key topics have emerged:
    David Kirkpatrick's comment during the opening presentation really set the stage for the conversations that followed. Especially if you are a marketer or publisher, the idea that you are in a one-way broadcast relationship with your audience is a thing of the past. "Rising above the noise" does not mean reaching for a megaphone, ALL CAPS, or exclamation marks. Hype will not motivate social media denizens to do anything but unfollow and tune you out. But knowing your audience, creating quality content and/or offers for them, treating them with respect, and making an authentic effort to please them: that's what I believe is now necessary. And Kirkpatrick's comment early in the day really made the point.
    Later in the day, our friends @Vitrue demonstrated this point by elaborating on a comment by Facebook's John Yi. If a social strategy consists of nothing more than cutting and pasting the same message into different social media properties, you're missing the opportunity to have an actual conversation. That's not shouting at your audience, but it does feel like an empty gesture.
    Walter Benjamin, perplexed by auraless Twitter messages.
    Not to get too far afield, but 20th-century cultural critic Walter Benjamin has a concept that is useful for understanding the dynamics of the empty social media gesture: aura. In his work The Work of Art in the Age of Mechanical Reproduction, Benjamin struggled to understand the difference he perceived between the value of a hand-made art object (a painting, wood cutting, sculpture, etc.) and a photograph. For Benjamin, aura is similar to the "soul" of an artwork - the intangible essence that is created when an artist picks up a tool and puts creative energy and effort into a work. I'll defer to Wikipedia: "He argues that the 'sphere of authenticity is outside the technical' so that the original artwork is independent of the copy, yet through the act of reproduction something is taken from the original by changing its context. He also introduces the idea of the 'aura' of a work and its absence in a reproduction."
    So make sure you put aura into your social interactions. Don't just mechanically reproduce them.
    Keeping aura in your interactions requires the intervention of an actual human being. That's why @NoahHorton's comment about content curation struck me as incredibly important. Maybe it's just my own prejudice, being in the content curation business myself. And it's not to totally discount machine-aided content management systems, content recommendation engines, and other tech-driven tools for building an exceptional content experience. It's just that without that human interaction - that editor who reviews the analytics and responds to user feedback - interactions over social media feel a bit empty. It is SOCIAL media, right? (We'll leave the conversation about social machines for another day.) At the end of the day, experimentation is key. Just like trying to find the right joke to tell at the beginning of your presentation or that good opening line at a cocktail party, social media messages and interactions can take some trial and error. Don't be afraid to try things, tinker with incomplete ideas, abandon things that don't work, and engage in the conversation. And make sure your heart is in it; otherwise your audience can tell. And finally:

    Read the article

  • ODI 11g – Oracle Multi Table Insert

    - by David Allan
    With the IKM Oracle Multi Table Insert you can generate Oracle-specific DML for inserting into multiple target tables from a single query result - without reprocessing the query or staging its result. When designing to exploit this IKM you must split the problem into its reusable parts: the select part goes in one interface (I named it SELECT_PART), then each target goes in a separate interface (INSERT_SPECIAL and INSERT_REGULAR). So for my statement below...
    /* INSERT_SPECIAL interface */
    insert all
    when 1=1 and (INCOME_LEVEL > 250000) then
    into SCOTT.CUSTOMERS_NEW
    (ID, NAME, GENDER, BIRTH_DATE, MARITAL_STATUS, INCOME_LEVEL, CREDIT_LIMIT, EMAIL, USER_CREATED, DATE_CREATED, USER_MODIFIED, DATE_MODIFIED)
    values
    (ID, NAME, GENDER, BIRTH_DATE, MARITAL_STATUS, INCOME_LEVEL, CREDIT_LIMIT, EMAIL, USER_CREATED, DATE_CREATED, USER_MODIFIED, DATE_MODIFIED)
    /* INSERT_REGULAR interface */
    when 1=1 then
    into SCOTT.CUSTOMERS_SPECIAL
    (ID, NAME, GENDER, BIRTH_DATE, MARITAL_STATUS, INCOME_LEVEL, CREDIT_LIMIT, EMAIL, USER_CREATED, DATE_CREATED, USER_MODIFIED, DATE_MODIFIED)
    values
    (ID, NAME, GENDER, BIRTH_DATE, MARITAL_STATUS, INCOME_LEVEL, CREDIT_LIMIT, EMAIL, USER_CREATED, DATE_CREATED, USER_MODIFIED, DATE_MODIFIED)
    /* SELECT_PART interface */
    select
    CUSTOMERS.EMAIL EMAIL,
    CUSTOMERS.CREDIT_LIMIT CREDIT_LIMIT,
    UPPER(CUSTOMERS.NAME) NAME,
    CUSTOMERS.USER_MODIFIED USER_MODIFIED,
    CUSTOMERS.DATE_MODIFIED DATE_MODIFIED,
    CUSTOMERS.BIRTH_DATE BIRTH_DATE,
    CUSTOMERS.MARITAL_STATUS MARITAL_STATUS,
    CUSTOMERS.ID ID,
    CUSTOMERS.USER_CREATED USER_CREATED,
    CUSTOMERS.GENDER GENDER,
    CUSTOMERS.DATE_CREATED DATE_CREATED,
    CUSTOMERS.INCOME_LEVEL INCOME_LEVEL
    from SCOTT.CUSTOMERS CUSTOMERS
    where (1=1)
    First I create a SELECT_PART temporary interface for the query to be reused, and in the IKM assignment I state that it defines the query, that it is not a target, and that it should not be executed.
    Then, in my INSERT_SPECIAL interface loading a target with a filter, I set "define query" to false, "target table" to true, and "execute" to false. This interface uses the SELECT_PART query definition interface as a source.
    Finally, in my last interface loading another target, I set "define query" to false again, "target table" to true, and "execute" to true - this is the go-run-it indicator! To coordinate the statement construction you will need to create a package with the select and insert statements. With 11g you can now execute the package in simulation mode and preview the generated code, including the SQL statements.
    Hopefully this helps shed some light on how you can leverage the Oracle MTI statement. A similar IKM exists for Teradata. The ODI IKM Teradata Multi Statement supports this multi-statement request in 11g; here is an extract from the paper at www.teradata.com/white-papers/born-to-be-parallel-eb3053/
    Teradata Database offers an SQL extension called a Multi-Statement Request that allows several distinct SQL statements to be bundled together and sent to the optimizer as if they were one. Teradata Database will attempt to execute these SQL statements in parallel. When this feature is used, any sub-expressions that the different SQL statements have in common will be executed once, and the results shared among them.
    It works in the same way as the ODI MTI IKM: multiple interfaces orchestrated in a package, each interface contributes some SQL, and the last interface in the chain executes the multi-statement request.

    Read the article

  • Oracle MDM Maturity Model

    - by David Butler
    A few weeks ago, I discussed the results of a survey conducted by Oracle's Insight team. The survey was based on the data management maturity model that the Oracle Insight team has developed over the years as they analyzed customer IT organizations to help them get more out of everything they already have. I thought you might like to learn more about the maturity model itself. It can help you figure out where you stand when it comes to getting your organization's data management act together. The model covers maturity levels around five key areas: profiling data sources; defining a data strategy; defining a data consolidation plan; data maintenance; and data utilization.
    Profile data sources: Profiling data sources involves taking an inventory of all data sources from across your IT landscape. Then evaluate the quality of the data in each source system. This enables the scoping of what data to collect into an MDM hub and what rules are needed to ensure data harmonization across systems.
    Define data strategy: A data strategy requires an understanding of the data usage. Given data usage, various data governance requirements need to be developed. This includes data controls and security rules as well as data structure and usage policies.
    Define data consolidation strategy: Consolidation requires defining your operational data model and how integration is to be accomplished. Cross-referencing common data attributes from multiple systems is needed. Synchronization policies also need to be developed.
    Data maintenance: The desired standardization needs to be defined, including what constitutes a 'match' once the data has been standardized. Cleansing rules are a part of this methodology. Data quality monitoring requirements also need to be defined.
    Utilize the data: What data gets published, and who consumes the data, must be determined. How to get the right data to the right place in the right format, given its intended use, must be understood. Validating the data and ensuring security rules are in place and enforced are crucial aspects for full no-risk data utilization.
    For each of the above data management areas, a maturity level needs to be assessed. Where your organization wants to be should also be identified using the same maturity levels. This results in a sound gap analysis your organization can use to create action plans to achieve the ultimate goals.
    Marginal is the lowest level. It is characterized by manually maintained trusted sources, lacking or inconsistent siloed structures with limited integration, and gaps in automation.
    Stable is the next leg up the MDM maturity staircase. It is characterized by tactical MDM implementations that are limited in scope and target a specific division. It includes limited data stewardship capabilities as well.
    Best Practice is a serious MDM maturity level, characterized by process automation improvements. The scope is enterprise-wide. It is a business solution that provides a single version of the truth, with closed-loop data quality capabilities. It is typically driven by an enterprise architecture group with both business and IT representation.
    Transformational is the highest MDM maturity level. At this level, MDM is quantitatively managed. It is integrated with Business Intelligence, SOA, and BPM, and MDM is leveraged in business process orchestration.
Take an inventory using this MDM Maturity Model and see where you are in your journey to full MDM maturity with all the business benefits that accrue to organizations who have mastered their data for the benefit of all operational applications, business processes, and analytical systems. To learn more, Trevor Naidoo and I have written the Oracle MDM Maturity Model whitepaper. It’s free, so go ahead and download it and use it as you see fit.
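    To make the first assessment area concrete, profiling a source system usually starts with simple completeness, cardinality and duplicate checks. The queries below are a minimal sketch against a hypothetical CUSTOMERS source table (the table and column names are illustrative, not part of the maturity model):

        -- Completeness and uniqueness of a candidate matching attribute (EMAIL).
        SELECT COUNT(*)              AS total_rows,
               COUNT(email)          AS email_populated,
               COUNT(DISTINCT email) AS email_distinct
        FROM   customers;

        -- Duplicate candidates that an MDM match rule would have to resolve.
        SELECT email, COUNT(*) AS occurrences
        FROM   customers
        WHERE  email IS NOT NULL
        GROUP  BY email
        HAVING COUNT(*) > 1;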

    Read the article

  • Opportunities in Development in our Swedish office

    - by anca.rosu
    Hi everyone, my name is Henrik and I joined the JRockit group in 2004. Before that I was at Microsoft, as both a Test Competence Lead and a Program Manager. As an Engineering Manager at Oracle I lead a team of 11 developers. I focus on people management and the daily operations of the department, with a heavy focus on interaction and dependencies between the groups and departments here at the Stockholm development site. I also make sure my team delivers on our commitments.

    I would like to give you a brief summary of the Oracle JRockit team:

    - The development group in Stockholm delivers several products for the Oracle Fusion Middleware stack. Our main products are JRockitVE, which allows you to run a Java virtual machine without an operating system; the JRockit Java Virtual Machine, the default JVM for all Oracle middleware products; and JRockit Mission Control, a set of tools that allows developers to monitor their applications at runtime and perform advanced latency analysis as well as in-production memory leak detection.

    - The office has several departments focusing on different aspects of the product development process – not only building features and testing them, but everything from building the infrastructure needed to automatically build and test the products, to sustaining engineering that tracks down bugs in customer systems and provides them with patches.

    Some inspirational lines on what the Oracle JRockit group can offer you in terms of progress, development and learning: it is a unique chance to gain insight into, and experience of, building enterprise-class software for one of the world’s largest software companies. There are almost unlimited possibilities for the right candidate to learn about silicon features and how to implement support for them in software, as well as about compiler optimizations. The position will also give insight into the processes needed to produce software at this level in the industry.

    If you have any questions related to this article feel free to contact [email protected]. You can find our job opportunities via http://campus.oracle.com.

    Technorati Tags: Development, Sweden, JRockit, Java, Virtual Machine, Oracle Fusion Middleware, software

    Read the article

  • 13 MORE Things from the Oracle Social Summit You Should Know

    - by Mike Stiles
    In our previous blog, we started giving those of you who couldn’t make it a sampling of the valuable takeaways from the first annual Oracle Social Summit, held Nov 14 and 15 in Las Vegas. And while 13 items is a pretty healthy sampling, we wanted to go the extra mile and give you 13 more – an indication of just how much great information came out of it. Follow the arrow, and come on in as if you were there with us.

    1. Weber Shandwick takes a 70/20/10 approach when advising clients how to allocate resources to paid social opportunities: 70% of spend should go toward paid opportunities the agency and client both know work, 20% toward paid social the agency knows works, and 10% toward experimentation. (Matt Dickman – Weber Shandwick)
    2. By 2017, the technically competent CMO will spend more on IT than the CIO. (Gartner study)
    3. CIOs are focused on infrastructure. As the roles of the CMO and CIO continue coming together, those CIOs have to make a very conscious decision to get CMOs what they need.
    4. It’s now harder for brands to differentiate based on product. The advantage will go to the brands that are successful in garnering customer trust.
    5. More and more, enterprise software is going to start looking like the software consumers are used to seeing and using.
    6. You will see brands prioritizing mobile and dropping investments in www, HTML, POS systems, etc.
    7. The social graph has to be added to brands’ customer data for a more holistic view. Customers will give you the information you need if the reward is appropriate.
    8. Viacom did a study that showed viewers are most honest on social – not so much on surveys or other feedback vehicles.
    9. How are you determining your influencers? Influence isn’t about reach. It’s about getting people to change behavior.
    10. A mix of skills is becoming critically important on a social staff. It should be a mixture of several disciplines, not just a bunch of “social experts.”
    11. If senior management isn’t engaged, the social team is forced into guessing what might be considered a “success” by the C-suite.
    12. Mobile customization will be getting big investments from brands in 2013. Brands need to provide shoppers utility, not just information. 75% will use mobile this holiday season to avoid in-store madness.
    13. Data becomes information, information becomes insight, and insight becomes actionable.

    The Oracle Social Summit brought together brands, agencies, Oracle social experts and industry thought leaders to take a serious look at where social stands today, and where it’s headed in the near future. Given the speed of social’s evolution, attending such events (or at least reading nifty summary blogs) is a good investment in making sure your enterprise isn’t falling gradually behind.

    Read the article

  • Oracle Announces Release of PeopleSoft HCM 9.1 Feature Pack 2

    - by Jay Zuckert
    Big things sometimes come in small packages. Today Oracle announced the availability of PeopleSoft HCM 9.1 Feature Pack 2, which delivers a new HR self-service user experience that fundamentally changes the way managers and employees interact with the HCM system. Earlier this year we reviewed a number of new concept designs with our Customer Advisory Boards. With the accelerated feature pack development cycle we have adopted, these innovations are now available to all 9.1 customers without the need for an upgrade, and there are no new products that need to be licensed for the capabilities below. For more details on Feature Pack 2, please see the Oracle press release.

    Included in Feature Pack 2 is a new search-based, menu-free navigation that allows managers to search for employees by name and take actions directly from the secure search results. For example, a manager can now simply type in part of an employee’s first or last name and receive meaningful results from documents related to performance, compensation, learning, recruiting, career planning and more. Delivered actions can be initiated directly from these search results, and the actions are securely tied to HCM security and user role.

    The feature pack also includes new pages that will enable managers to be more productive by aggregating key employee data into a single page. The new Manager Dashboard and Talent Summary provide a consolidated view of data related to a manager’s team and individual team members, respectively. The Manager Dashboard displays information relevant to their direct reports, including team learning, objective alignment, alerts, and pending approvals requiring their attention. The Talent Summary provides managers with an aggregated view of talent management-related data for an individual employee, including performance history, salary history, succession options, total rewards, and competencies. The information displayed in both the Manager Dashboard and Talent Summary is configurable by system administrators and can be personalized by each of your managers.

    Other Feature Pack 2 enhancements allow organizations to administer Matrix or Dotted-Line Relationship Management, which addresses the challenge of tracking and maintaining project-based organizations that cut across the enterprise and geographic regions. From within the Company Directory and Org Viewer organization charts, managers now have access to manager self-service transactions from related actions. More than 70 manager and employee self-service transactions have been tied into the related action framework accessible from Org Viewer, Manager Dashboard, Talent Summary and Secure Enterprise Search (SES) results. In addition to making it easier to access manager self-service transactions, the feature pack delivers streamlined transaction pages, making everyday tasks such as promoting an employee faster and more efficient.

    With the delivery of PeopleSoft HCM 9.1 Feature Pack 2, Oracle continues to deliver on its commitment to our PeopleSoft customers. With this feature pack, HCM 9.1 customers will be able to deploy the newest functionality quickly, without a major release upgrade, and realize added value from their existing PeopleSoft investment. For customers newly deploying 9.1, a new download with all of Feature Pack 2 will be available early next year. This will also include recertified upgrade paths from 8.8, 8.9 and 9.0 for customers in the upgrade process.

    Read the article

  • Oracle GoldenGate Active-Active Part 1

    - by Nick_W
    My name is Nick Wagner, and I’m a recent addition to the Oracle Maximum Availability Architecture (MAA) product management team. I’ve spent the last 15+ years working on database replication products, and the last 10 years on the Oracle GoldenGate product, so most of my posts will probably be focused on OGG.

    One question that comes up all the time is around active-active replication with Oracle GoldenGate: how do I know if my application is a good fit for active-active replication with GoldenGate? To answer that, it really comes down to how you plan on handling conflict resolution. I will delve into topology and deployment in a later blog; for now, picture the simplest architecture: two databases, each capturing its changes and applying the other’s.

    The two most common resolution routines are host-based resolution and timestamp-based resolution.

    Host-based resolution is used less often, but works with the fewest application changes. Think of it like this: any transactions from SystemA always take precedence over any transactions from SystemB. If there is a conflict on SystemB, then the record from SystemA will overwrite it. If there is a conflict on SystemA, then it will be ignored. It is quite a bit less restrictive, and in most cases, as long as all the tables have primary keys, host-based resolution will work just fine.

    Timestamp-based resolution, on the other hand, is a little trickier. In this case, you decide which record is overwritten based on timestamps. For example, does the older record get overwritten with the newer record, or vice versa? This method not only requires primary keys on every table, it also requires every table to have a timestamp/date column that is updated each time a record is inserted or updated. Most homegrown applications can be customized to meet these requirements, but it’s more difficult with 3rd party applications, and might even be impossible for large ERP-type applications.

    If your database has these features – whether it’s primary keys for host-based resolution, or primary keys and timestamp columns for timestamp-based resolution – then your application could be a great candidate for active-active replication.

    But table structure is not the only requirement. The other consideration applies when there is a conflict: do I need to perform any notification, or track down the user whose data was overwritten? In most cases, I don’t think it’s necessary, but if it is required, OGG can always create an exceptions table that contains all of the overwritten transactions so that people can be notified. It’s a bit of extra work to implement this type of option, but if the business requires it, then it can be done. Unless someone is constantly monitoring this exceptions table or has an automated process for dealing with exceptions, there will be a delay in getting a response back to the end user.

    Ideally, when setting up active-active resolution we can include some simple procedural steps or configuration options that reduce, or in some cases eliminate, the potential for conflicts. This makes the whole implementation that much easier and more foolproof. I’ll cover these in my next blog.
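    To see what timestamp-based resolution amounts to at the row level, here is a minimal SQL sketch (hypothetical CUSTOMERS table and bind variables; GoldenGate configures this kind of rule declaratively rather than through hand-written DML):

        -- Timestamp-based resolution expressed as plain DML: the incoming
        -- change wins only if it is at least as new as the stored row.
        UPDATE customers
        SET    name          = :new_name,
               income_level  = :new_income_level,
               date_modified = :new_date_modified
        WHERE  id = :id
        AND    date_modified <= :new_date_modified;
        -- If no rows are updated, the local row was newer: the losing change
        -- can be written to an exceptions table so that users can be notified.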

    Read the article

  • PaaS, DBaaS and the Oracle Database Cloud Service

    - by yaldahhakim
    As with many widely hyped areas, there is much more variation within the broad spectrum of products referred to as “Cloud” than is immediately apparent. This variation is evident in one of the key misunderstandings about the Oracle Database Cloud Service. People could be forgiven for thinking that the Database Cloud Service was a Database-as-a-Service (DBaaS), but this is actually not true. The Database Cloud Service is a Platform-as-a-Service (PaaS), which presents a different user and developer interface and has a different set of qualities.

    A good way to think about the difference between these two varieties of Cloud offerings is that you, the customer, have to deal with things at the level of the offering, but not with anything below it. In practice, this means that with DBaaS you do not have to deal with hardware or system software, including installation and maintenance, although you also do not have much control over the configuration of those layers. With PaaS, you don’t have to deal with hardware, system software, or database software – and likewise do not have control over those levels in the stack.

    So you cannot modify configuration parameters for the database with the Database Cloud Service. Your interface is through SQL and PL/SQL, with Application Express (included in the Database Cloud Service), through JDBC for Java apps running in the Java Cloud Service, or through RESTful Web Services. You will notice what is not mentioned there: SQL*Net. You cannot access your Oracle Database Cloud Service by changing an entry in the TNSNames file and using SQL*Net, so the effort involved in migrating an existing Oracle Database in your data center to the Database Cloud Service may be prohibitive. The good news is that Application Express and the RESTful Web Services wizard in the Database Cloud Service allow you to develop new applications very quickly, and, of course, the provisioning of the entire Database Cloud Service takes only minutes.

    Read the article
