Search Results


  • June is going to be a busy month!

    - by Monica Kumar
    Who says things slow down in summer? Well, maybe for school kids, but certainly not for Oracle's Virtualization team! June is turning out to be one of the busiest months for us: we are participating in a number of industry events. If you happen to be at any of these, please stop by the Oracle booth and our sessions. Let's go through a rundown of these events.

    1. 13th Annual Call Center Week, June 4-8, Caesar's Palace, Las Vegas (Event website). You're now wondering why we are at a call center show. It's really simple: Oracle's desktop virtualization solutions offer the best way for call centers to reliably and securely access enterprise apps using a variety of endpoint devices, such as an iPad or a Sun Ray Client. Provisioning new employees becomes a breeze. We'll be jointly showcasing our solution with Oracle's CRM team. Come check us out.

    2. Gartner Infrastructure & Operations Management Summit, June 5-7, Orlando, FL (Event website). Oracle is a Premier sponsor of the Gartner IOM Summit this June 5-7, 2012 in Orlando, FL. Attendees will have the opportunity to meet with Oracle experts in a variety of sessions, including demonstrations during the showcase receptions.

    3. Cloud Expo East. Check out our website for details of our participation, and stop by booth 511 to talk to our Cloud, Virtualization, and Big Data experts. In addition, we're delivering a number of sessions at Cloud Expo. The one I want to highlight is the following:

    Session: Borderless Applications in the Cloud with Oracle VM and Oracle Virtual Assembly Builder

    Abstract: As virtualization adoption progresses beyond server consolidation, it is also transforming how enterprise applications are deployed and managed in an agile environment. The traditional method of business-critical application deployment, in which administrators contend with an array of unrelated tools and custom scripts to deploy and manage applications, OS instances, and VM instances in a fast-changing cloud computing environment, can no longer scale effectively to achieve the desired response time and efficiency. Oracle VM and Oracle Virtual Assembly Builder allow applications, associated components, deployment metadata, management policies, and best practices to be encapsulated into ready-to-run VMs for rapid, repeatable deployment and ease of management. Join us in this Cloud Expo session to see how Oracle VM and Oracle Virtual Assembly Builder allow you to deploy complex multi-tier applications in minutes and enable you to easily onboard existing applications to cloud environments.

    Get your free Cloud Expo pass now! We're offering complimentary VIP Gold Passes. Go to https://www.blueskyz.com/v3/Login.aspx?ClientID=19&EventID=56&sg=177 and click "Continue" if you are a new user, or log in if you have already created an account. Once there, you can view the agenda or register for Cloud Expo. To register, fill out the basic business card questions and then enter oracleVIPgold in the Priority Code field to change the price from $2,000 to $0.

    4. CiscoLive 2012, June 10-14, San Diego, CA (Event website). Our Oracle VM and Oracle Linux experts will talk about our joint collaboration with Cisco on UCS. We'll also highlight customer use cases.

    5. Gartner Infrastructure & Operations Management Summit EMEA, June 11-12, Frankfurt, Germany (Event website). Meet experts from our Virtualization and Linux team in EMEA. Stop by our booth and find out what's new in Oracle VM Server for x86 and Oracle Linux.

    June is going to be busy.

    Read the article

  • Custom Templates: Using user exits

    - by Anthony Shorten
    One of the features of Oracle Utilities Application Framework V4.1 is the ability to use templates and user exits to extend the base configuration files. The configuration files used by the product are based upon a set of templates shipped with the product. When the configureEnv utility asks for configuration settings, they are stored in a configuration file, ENVIRON.INI, which outlines the environment settings. These settings are then used by the initialSetup utility to populate the various configuration files used by the product, using templates located in the templates directory of the installation.

    Now, whilst the majority of installations at any site are non-production and the templates provided are generally adequate for that need, there are circumstances where extension of templates is needed to take advantage of more advanced facilities (such as advanced security and environment settings). The issue then becomes that if you alter the configuration files manually (directly or indirectly), you may lose all your custom settings the next time you run initialSetup. To counter this, we allow customers either to override templates with their own template, or to use the user exits we now provide in the templates to add fragments of configuration unique to that part of the configuration file. The latter means the base template is still used, but additions are included to provide the extensions. The provision of custom templates is supported, but as soon as you use a custom template you are responsible for reflecting any changes we make to the base template over time. Not a big task, but annoying if you have to do it for multiple copies of the product. I prefer to use user exits, as they represent the least-effort solution.

    The way to find the available user exits is to either read the Server Administration Guide that comes with your product or look at individual templates for lines of the form:

        #ouaf_user_exit <user exit name>

    where <user exit name> is the name of the user exit. User exits are not always present, but they are in the places that we feel are the most likely to be changed. If a user exit does not exist, you can always use a custom template instead.

    Now let's look at an example. By default, the product generates a config.xml file to be used with Oracle WebLogic. This configuration file has the basic settings contained in it to manage the product. If you want to take advantage of the advanced Oracle WebLogic settings, you can use the console to make those changes, and they will be reflected in config.xml automatically. To retain those changes across invocations of initialSetup, you need to alter the template that generates config.xml, or use user exits. The technique is this: make the change in the console, and when you save it, WebLogic will reflect it in config.xml for you. Compare the old and new versions of config.xml to determine what to add, then find the user exit to put it in by examining the base template. For example, by default the console is not automatically deployed (it is deployed on demand) in the base config.xml. To make the console deploy, you can add the following line to the templates/CM_config.xml.win.exit_3.include file (for Windows) or the templates/CM_config.xml.exit_3.include file (for Linux/UNIX):

        <internal-apps-deploy-on-demand-enabled>false</internal-apps-deploy-on-demand-enabled>

    Now run initialSetup to reflect the change, and if you check the splapp/config/config.xml file you will see the change applied for you.

    Now, how did I know which include file to use? I checked the template for config.xml and found there was a user exit at the right place. I prefixed my include file name with "CM_" to denote it as a custom user exit. This tells the upgrade tools to leave that file alone whenever you decide to upgrade (or even apply fixes). User exits can be powerful and allow customizations to be added for advanced configuration. You will see products using Oracle Utilities Application Framework use these exits themselves (usually prefixed with the product code); you are simply taking advantage of the same mechanism.
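
    To make the splice mechanism concrete, here is a toy sketch in Java of what a template expander of this style does: walk the base template, and wherever a #ouaf_user_exit marker appears, splice in the matching CM_ include file if one exists. This illustrates the concept only; it is not the actual initialSetup implementation, and the template file name, marker parsing, and output path are my assumptions.

        import java.io.IOException;
        import java.nio.file.Files;
        import java.nio.file.Path;
        import java.nio.file.Paths;

        // Toy illustration of user-exit expansion; NOT the real initialSetup code.
        public class TemplateExpander {
            public static void main(String[] args) throws IOException {
                Path templates = Paths.get("templates");
                StringBuilder out = new StringBuilder();
                for (String line : Files.readAllLines(templates.resolve("config.xml.template"))) {
                    if (line.trim().startsWith("#ouaf_user_exit")) {
                        // e.g. "#ouaf_user_exit exit_3.include": splice in the
                        // customer's CM_ fragment at this point, if one exists
                        String exitName = line.trim().split("\\s+", 2)[1];
                        Path custom = templates.resolve("CM_config.xml." + exitName);
                        if (Files.exists(custom)) {
                            out.append(String.join("\n", Files.readAllLines(custom))).append("\n");
                        }
                    } else {
                        out.append(line).append("\n");  // base template line passes through
                    }
                }
                Files.write(Paths.get("splapp/config/config.xml"), out.toString().getBytes());
            }
        }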

    Read the article

  • HOWTO Turn off SPARC T4 or Intel AES-NI crypto acceleration.

    - by darrenm
    Since we released hardware crypto acceleration for SPARC T4 and Intel AES-NI support, we have had a common question come up: "How do I test without the hardware crypto acceleration?" Initially this came up just for development use, so developers can do unit testing on a machine that has hardware offload but still cover the code paths for a machine that doesn't (our integration and release testing runs on all supported types of hardware anyway). I've also seen it asked in a customer context, so that we can show that there is a performance gain from the hardware crypto acceleration (not just from the fact that the SPARC T4 is a much faster processor than the T3) and measure what that gain is for their application.

    With SPARC T2/T3 we could easily disable the hardware crypto offload by running 'cryptoadm disable provider=n2cp/0'. We can't do that with SPARC T4 or with Intel AES-NI, because in both of those classes of processor the encryption doesn't require a device driver; instead it is implemented as unprivileged, userland-callable instructions. It turns out there is a way to do this using features of the Solaris runtime loader (ld.so.1).

    First I need to expose a little bit of implementation detail about how the Solaris Cryptographic Framework is implemented in Solaris 11. One of the new Solaris 11 features of the linker/loader is the ability to have a single ELF object that has multiple different implementations of the same functions, selected at runtime based on the capabilities of the machine. The alternative to this is having the application coded to call getisax() and make the choice itself. We use this functionality of the linker/loader when we build the userland libraries for the Solaris Cryptographic Framework (specifically libmd.so and the, unfortunately misnamed for historical reasons, libsoftcrypto.so).

    The Solaris linker/loader allows control of a lot of its functionality via environment variables; we can use that to control which version of the cryptographic functions we run. To do this we simply export the LD_HWCAP environment variable with values that tell ld.so.1 not to select the HWCAP section matching certain features, even if isainfo says they are present. For SPARC T4 that would be:

        export LD_HWCAP="-aes -des -md5 -sha256 -sha512 -mont -mpul"

    and for Intel systems with AES-NI support:

        export LD_HWCAP="-aes"

    This works for consumers of the Solaris Cryptographic Framework that use the Solaris PKCS#11 libraries or use the libmd.so interfaces directly. It also works for the Oracle DB and the Java JCE. However, it does not work for the default-enabled OpenSSL "t4" or "aes-ni" engines (unfortunately), because they make explicit calls to getisax() themselves rather than using multiple ELF capability sections. We can still use OpenSSL to demonstrate the effect by explicitly selecting the "pkcs11" engine, using only a single process and thread:

        $ openssl speed -engine pkcs11 -evp aes-128-cbc
        ...
        type            16 bytes     64 bytes    256 bytes    1024 bytes   8192 bytes
        aes-128-cbc    54170.81k   187416.00k   489725.70k   805445.63k   1018880.00k

        $ LD_HWCAP="-aes" openssl speed -engine pkcs11 -evp aes-128-cbc
        ...
        type            16 bytes     64 bytes    256 bytes    1024 bytes   8192 bytes
        aes-128-cbc    29376.37k    58328.13k    79031.55k    86738.26k     89191.77k

    We can clearly see the difference this makes when AES offload to the SPARC T4 is disabled. The "t4" engine is faster than the pkcs11 one because there is less overhead (again on a SPARC T4-1 using only a single process/thread; using -multi you will get even bigger numbers):

        $ openssl speed -evp aes-128-cbc
        ...
        type            16 bytes     64 bytes    256 bytes    1024 bytes   8192 bytes
        aes-128-cbc    85526.61k    89298.84k    91970.30k    92662.78k     92842.67k

    Yet another cool feature of the Solaris linker/loader; thanks Rod and Ali. Note that the openssl speed output above is not intended to show the actual performance of any particular benchmark, just that there is a significant improvement from using hardware acceleration on SPARC T4. For cryptographic performance benchmarks see the http://blogs.oracle.com/BestPerf/ postings.
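
    Since the LD_HWCAP setting also applies to the Java JCE, you can observe the same effect from Java. Below is a minimal throughput sketch assuming the default JCE provider; the class name and loop parameters are mine, and this is a rough illustration rather than a rigorous benchmark (a real measurement would use a proper harness).

        import javax.crypto.Cipher;
        import javax.crypto.spec.IvParameterSpec;
        import javax.crypto.spec.SecretKeySpec;

        public class AesThroughput {
            public static void main(String[] args) throws Exception {
                byte[] key = new byte[16];   // all-zero key and IV: benchmarking only
                byte[] iv = new byte[16];
                byte[] buf = new byte[8192]; // 8 KB chunks, like the openssl speed run
                Cipher c = Cipher.getInstance("AES/CBC/NoPadding");
                c.init(Cipher.ENCRYPT_MODE, new SecretKeySpec(key, "AES"),
                       new IvParameterSpec(iv));
                for (int i = 0; i < 10000; i++) c.update(buf); // warm up the JIT
                long bytes = 0;
                long start = System.nanoTime();
                while (System.nanoTime() - start < 3000000000L) { // run for ~3 seconds
                    c.update(buf);
                    bytes += buf.length;
                }
                double secs = (System.nanoTime() - start) / 1e9;
                System.out.printf("aes-128-cbc: %.2f MB/s%n", bytes / 1e6 / secs);
            }
        }

    Run it twice, once normally and once as LD_HWCAP="-aes" java AesThroughput, and the gap between the two numbers should mirror the openssl results above.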

    Read the article

  • Updated SOA Documents now available in ITSO Reference Library

    - by Bob Rhubart
    Nine documents within the IT Strategies from Oracle (ITSO) reference library have recently been updated. (Access to the ITSO collection is free to registered Oracle.com members -- and that membership is free.) All nine documents fall within the Service Oriented Architecture section of the ITSO collection and cover the following topics:

    SOA Practitioner Guides
    - Creating an SOA Roadmap (PDF, 54 pages, published: February 2012): The secret to successful SOA is to build a roadmap that can be successfully executed. SOA offers an opportunity to adopt an iterative technique to deliver solutions incrementally. This document offers a structured, iterative methodology to help you stay focused on business results, mitigate technology and organizational risk, and deliver successful SOA projects.
    - A Framework for SOA Governance (PDF, 58 pages, published: February 2012): Successful SOA requires a strong governance strategy that designs in measurement, management, and enforcement procedures. Enterprise SOA adoption introduces new assets, processes, technologies, standards, roles, etc., which require application of appropriate governance policies and procedures. This document offers a framework for defining and building a proper SOA governance model.
    - Determining ROI of SOA through Reuse (PDF, 28 pages, published: February 2012): SOA offers the opportunity to save millions of dollars annually through reuse. Sharing common services intuitively reduces workload, increases developer productivity, and decreases maintenance costs. This document provides an approach for estimating the reuse value of the various software assets contained in a typical portfolio.
    - Identifying and Discovering Services (PDF, 64 pages, published: March 2012): What services should we build? How can we promote the reuse of existing services? A sound approach to answering these questions is a primary measure of the success of an SOA initiative. This document describes a pragmatic approach to collecting the necessary information for identifying proper services and facilitating service reuse.
    - Software Engineering in an SOA Environment (PDF, 66 pages, published: March 2012): Traditional software delivery methods are too narrowly focused and need to be adjusted to enable SOA. This document describes an engineering approach for delivering projects within an SOA environment. It identifies the unique software engineering challenges faced by enterprises adopting SOA and provides a framework to remove the hurdles and improve the efficiency of the SOA initiative.

    SOA Reference Architectures
    - SOA Foundation (PDF, 70 pages, published: February 2012): This document describes the key tenets for SOA design, development, and execution environments. Topics include: service definition, service layering, service types, the service model, composite applications, invocation patterns, and standards.
    - SOA Infrastructure (PDF, 86 pages, published: February 2012): Properly architected, SOA provides a robust and manageable infrastructure that enables faster solution delivery. This document describes the role of infrastructure and its capabilities. Topics include: logical architecture, deployment views, and Oracle product mapping.

    SOA White Papers and Data Sheets
    - Oracle's Approach to SOA (white paper) (PDF, 14 pages, published: February 2012): Oracle has developed a pragmatic, holistic approach, based on years of experience with numerous companies, to help customers successfully adopt SOA and realize measurable business benefits. This executive white paper describes Oracle's proven approach to SOA.
    - Oracle's Approach to SOA (data sheet) (PDF, 3 pages, published: March 2012): SOA adoption is complex and success is far from assured. This is why Oracle has developed a pragmatic, holistic approach, based on years of experience with numerous companies, to help customers successfully adopt SOA and realize measurable business benefits. This data sheet provides an executive overview of Oracle's proven approach to SOA.

    Read the article

  • Sprinkle Some Magik on that Java Virtual Machine

    - by Jim Connors
    GE Energy, through its Smallworld subsidiary, has been providing geospatial software solutions to the utility and telco markets for over 20 years. One of the fundamental building blocks of their technology is a dynamically typed, object-oriented programming language called Magik. Like Java, Magik source code is compiled down to bytecodes that run on a virtual machine -- in this case the Magik Virtual Machine. Throughout the years, GE has invested considerable engineering talent in the support and maintenance of this virtual machine. At the same time, vast energy and resources have been invested in the Java Virtual Machine. The question for GE has been whether to continue to make that investment on its own or to leverage the massive effort provided by the Java community. Utilizing the Java Virtual Machine instead of maintaining its own virtual machine would give GE more opportunity to focus on application solutions.

    At last count, there are dozens, perhaps hundreds, of examples of programming languages that have been hosted atop the Java Virtual Machine. Prior to the release of Java 7, that effort, although certainly possible, was generally less than optimal for languages like Magik because of their dynamic nature. Java, as a statically typed language, had little use for this capability. In the quest to be a more universal virtual machine, Java 7, via JSR-292, introduced a new bytecode called invokedynamic. In short, invokedynamic affords the more flexible method call mechanism needed by dynamic languages like Magik.

    With this new capability, GE Energy has succeeded in hosting their Magik environment on top of the Java Virtual Machine. So you may ask, why would GE wish to do such a thing? The benefits are many:

    - Competitors to GE Energy claimed that the Magik environment was proprietary. By utilizing the Java Virtual Machine, that argument gets put to bed. JVM development is done in open source, where contributions are made worldwide by all types of organizations and individuals.
    - The unprecedented wealth of class libraries and applications written for the Java platform is now opened up to the Magik/JVM platform as first-class citizens. In addition, the Magik/JVM solution vastly increases the developer pool to include the 9 million Java developers -- the largest developer community on the planet.
    - Applications running on the JVM showed substantial performance gains, in some cases as much as a 5x speedup over the original Magik platform.
    - Legacy Magik applications can still run on the original platform. They can be seamlessly migrated to run on the JVM by simply recompiling the source code.
    - GE can now leverage the huge Java community. Undeniably the best virtual machine ever created, the Java platform is continually improved, poked, prodded, and scrutinized by hundreds if not thousands of world-class developers. As enhancements are made, GE automatically gains access to them.
    - As Magik has little in the way of support for multi-threading, GE will benefit from current and future Java offerings (e.g. lambda expressions) that aim to further facilitate multi-core/multi-threaded application development.
    - As the JVM is available for many more platforms, it broadens the reach of Magik, including the potential to run on a class of devices never envisioned just a few short years ago. For example, Java SE compatible runtime environments are available for popular embedded ARM/Intel/PowerPC configurations that could theoretically host this software too.

    As compared to other JVM language projects, the Magik integration differs in that it represents a serious commercial entity betting a sizable part of its business on the success of this effort. Expect to see announcements not only from General Electric, but from other organizations as well, as they realize the benefits of utilizing the Java Virtual Machine.
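
    To give a feel for the machinery JSR-292 exposes, the sketch below uses the java.lang.invoke method-handle API that invokedynamic call sites are bootstrapped with. A dynamic-language runtime such as Magik's resolves a method by name at run time and links the resulting handle into the call site, after which the JVM can optimize it like an ordinary call. The class and the example lookup are illustrative, not GE's actual code.

        import java.lang.invoke.MethodHandle;
        import java.lang.invoke.MethodHandles;
        import java.lang.invoke.MethodType;

        public class DynamicDispatch {
            public static void main(String[] args) throws Throwable {
                // Resolve String.concat(String) by name at run time, the way a
                // dynamic language runtime would when linking a call site.
                MethodHandles.Lookup lookup = MethodHandles.lookup();
                MethodType type = MethodType.methodType(String.class, String.class);
                MethodHandle concat = lookup.findVirtual(String.class, "concat", type);
                // Once linked, invocation goes through the handle at near-native speed.
                System.out.println((String) concat.invokeExact("Magik ", "on the JVM"));
            }
        }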

    Read the article

  • Nominations now open for the Oracle FMW Excellence Awards 2014

    - by Greg Jensen
    2014 Oracle Excellence Award Nominations: Who Is the Innovative Leader for Identity Management?

    - Is your organization leveraging one of Oracle's Identity and Access Management solutions in your production environment?
    - Are you a leading-edge organization that has adopted a forward-thinking approach to Identity and Access Management processes across the organization?
    - Are you ready to promote and highlight the success of your deployment to your peers?
    - Would you like a chance to win FREE registration to Oracle OpenWorld 2014?

    Oracle is pleased to announce the call for nominations for the 2014 Oracle Excellence Awards: Oracle Fusion Middleware Innovation. The Oracle Excellence Awards for Oracle Fusion Middleware Innovation honor organizations using Oracle Fusion Middleware to deliver unique business value. This year, the awards will recognize customers across nine distinct categories, including Identity and Access Management. Oracle customers who feel they are pioneers in their implementation of at least one of the Oracle Identity and Access Management offerings in a production environment or active deployment should submit a nomination. If it is submitted by June 20th, 2014, you will have a chance to win a FREE registration to Oracle OpenWorld 2014 (September 28 - October 2) in San Francisco, CA. Top customers will be showcased at Oracle OpenWorld and featured in Oracle publications. See the Identity and Access Management Nomination Form.

    Additional benefits to nominees: Nominating your organization opens additional opportunities to partner with Oracle, such as:

    - Promotion of your Customer Success Stories: provides a platform for you to share the success of your initiatives and programs with peer groups, raising the overall visibility of your team and your organization as a leader in security.
    - Social media promotion (video, blog & podcast): reach the masses of Oracle's customers through the sharing of success stories, or customer-created blog content that highlights your advanced thought leadership role in security, with co-authored articles on the Oracle Blog page, which reaches close to 100,000 subscribers. There are numerous options to promote activities on Facebook and Twitter, and co-branded activities using video and audio.
    - Live speaking opportunities to your peers: as a technology leader within your organization, you can represent your organization at Oracle-sponsored events (online, in person, or webcasts) to help share the success of your organization's efforts, building out your team's and organization's brand and success.
    - Invitation to the IDM Architect Forum: Oracle is able to invite the right customers into the IDM Architect Forum, an invitation-only group of customers that meets monthly to hear technology-driven presentations from their own peers (not from Oracle) on today's trends. If you want to hear privately what some of the most successful companies in every industry are doing about security, this is the forum to be in. All presentations are private and remain within the forum, and only members can take advantage of the lessons gained from these meetings. To date, there are 125 members.

    There are many more advantages to partnering with Oracle; however, it can all start with the simple nomination form for the Identity and Access Management category of the 2014 Oracle Excellence Awards.

    Read the article

  • Administer, manage, monitor, and fine tune the performance of your Oracle SOA Suite 11g Service Infrastructure and SOA composite applications.

    - by JuergenKress
    Key features of the book: If you are an Oracle SOA Suite administrator, then this book is your bible. It gives you everything you need to know about all your tasks and helps you apply what you learn in your everyday life, right from the first chapter. The book walks through promoting code across environments, performance tuning the service infrastructure, monitoring the environment, configuring security policies, managing the dehydration store, backing up and restoring environments, and so on. Packed with real-world examples from the authors' own experiences, this book offers a unique insight into Oracle SOA Suite administration.

    Detailed description: The book begins with an introduction to SOA and quickly moves on to the management of SOA composite applications. Readers will learn how to manage composite applications and their deployments and lifecycles. Equipped with this knowledge, readers will be introduced to monitoring and performance tuning SOA Suite; monitoring instances, messages, and composite applications; managing faults and exceptions; configuring audit levels of composite applications to include end-to-end monitoring through the use of extended logging; as well as administering and configuring all SOA Suite components.

    A very important aspect of administration is tuning and optimizing the infrastructure for performance, and the book offers real-world recommendations to monitor and performance tune service engines, the underlying WebLogic server, threads and timeouts, file systems, and composite applications. It also covers detailed administration of individual service components, configuring the infrastructure MBeans using both Oracle Enterprise Manager Fusion Middleware Control and WLST-based scripts, migrating worklist preferences and BAM data across environments, and setting up Email, LDAP, and custom XPath.

    An administrator is always trusted with troubleshooting and root-causing problems in the infrastructure, and this book will help you through troubleshooting approaches such as identifying faults and exceptions through extended logging and thread dumps, and finding solutions to common startup problems and deployment issues. The advanced content of this book explains the OWSM security framework and how to secure components deployed to the infrastructure, along with the details of all the groundwork needed to ready the environment. The last few chapters help you understand and deal with managing the metadata services repository and dehydration store, and backup and recovery, concluding with advanced topics such as silent/scripted installations, cloning, upgrading, patching, and high-availability installations.

    Packed with real-world examples and tips straight from the trenches, this book offers insights into SOA Suite administration that you will not find elsewhere. Part of our writing style in this book draws heavily on the philosophy of reuse, and as such the book provides ample executable SQL queries and WLST scripts that administrators can reuse and extend to perform most administration tasks, such as monitoring instances, processing times, and instance states, and performing automatic deployments, tuning, migration, and installation. These scripts are spread over each of the chapters in the book and can also be downloaded from here.

    The book is available in different formats at the following websites: Paperback and eBook versions & Kindle version. It is available for order, and signed copies are available through our web site.

    SOA & BPM Partner Community: For regular information on Oracle SOA Suite, become a member of the SOA & BPM Partner Community; for registration please visit www.oracle.com/goto/emea/soa (OPN account required). If you need support with your account please contact the Oracle Partner Business Center. Blog Twitter LinkedIn Mix Forum Technorati Tags: SOA book,SOA Suite Administration,SOA Community,Oracle SOA,Oracle BPM,Community,OPN,Jürgen Kress

    Read the article

  • Join us on our Journey to be #1 in SaaS!

    - by jessica.ebbelaar(at)oracle.com
    WHY ORACLE? Oracle is a robust organization that has proven to maintain growth and innovation at all levels, with a constantly evolving attitude. The main ingredient of Oracle's success is the 105,000 talented employees who constantly amaze each other in building a better and more innovative organization. Oracle is a company where YOU can make a difference.

    What is OD? Oracle Direct is a state-of-the-art, multi-channel EMEA sales operation bringing to life the benefits of Oracle's complete technology stack. It offers you the unique opportunity to work with the most talented and like-minded sales professionals in the industry. You will have access to world-class training and structured career development programmes, allowing you to accelerate your Solution Sales career across a multitude of product lines and a choice of attractive locations.

    What positions is OD hiring for? Oracle is on a journey to be the #1 SaaS vendor in EMEA. Due to recent expansion and acquisitions within our Cloud Business, we are now growing our EMEA Cloud Applications Sales Group in Dublin. We have many exciting NEW opportunities across our CRM and HCM SaaS Sales teams. As a SaaS Sales Account Manager, you will proactively manage an assigned territory/vertical with responsibility for the full sales cycle. This role requires strong business development, solution selling, account management, and closing skills.

    What is the Business Development Group (BDG)? The Business Development Group is the key entry point in Oracle for the future sales and management talent of the organisation. We are the demand generation engine for Oracle in EMEA. We provide revenue-generating, quality sales pipeline to our Inside and Field Sales professionals as well as to our Channel Partners. Our current focus is to provide an agile and flexible service offering to our customers and stakeholders to meet ever-changing business needs, whilst constantly striving to improve the customer experience, the quality of our pipeline, market coverage, and penetration. As a SaaS Business Development Consultant (BDC) you will be the first touch point with new customers. Your goal is to proactively identify and qualify business opportunities leading to revenue for Oracle. You will work closely with your Inside Sales colleagues, who will progress your qualified pipeline and opportunities.

    Work for us:
    - Work for the only multi-pillar SaaS vendor in the market
    - Be part of a FUN, fast-paced and truly international sales team
    - Develop your solution sales EXPERTISE
    - Drive your CAREER development within a structured and supportive environment

    The profile:
    - You have a passion for selling cutting-edge technology
    - You thrive in a fast-paced and dynamic work environment where being the best is paramount
    - Your priority is always the customer
    - You live for a challenge and you love to win

    Join us on our journey to be #1 in SaaS and be part of our Cloud success story! You will find more information about open roles here.

    Read the article

  • At the Java DEMOgrounds - Oracle Java ME Embedded Enables the “Internet of Things”

    - by Janice J. Heiss
    I caught up with Oracle's Robert Barnes, Senior Director, Java Product Management, who was demonstrating a new product from Oracle's Java Platform, Micro Edition (Java ME) product portfolio: Oracle Java ME Embedded 3.2, a complete client Java runtime optimized for microcontrollers and other resource-constrained devices. Oracle Java ME Embedded 3.2 is a Java ME runtime based on CLDC 1.1 (JSR-139) and IMP-NG (JSR-228).

    "What we are showing here is the Java ME Embedded 3.2 that we announced last week," explained Barnes. "It's the start of the 'Internet of Things,' in which you have very, very small devices that are on the edge of the network where the sensors sit. You often have a middle area called a gateway or a concentrator, which is fairly middle-to-higher performance. On the back end you have a very high performance server. What this is showing is Java spanning all the way from the server side right down to the type of chip that you will get on the sensor side of the network."

    Barnes explained that he had two different demos running. The first, called the Solar Panel System Demo, measures the brightness of the light. "This," said Barnes, "is a light source demo with a Cortex M3 controlling the motor, on the end of which is a sensor which is measuring the brightness of the lamp. This is recording the data of the brightness of the lamp, and as we move the lamp out of the way, we should be able, using the server, to turn the sensor towards the lamp so the brightness reading will go higher. This sends the message back to the server, and we can look at the web server sitting on the PC underneath the desk. We can actually see the data being passed back, effectively through a back-office type of function within a utility environment."

    The second demo, the Smart Grid Response Demo, Barnes explained, "has the same board and processor and is still using Java ME Embedded, with a different app on top. This is a demand response demo. What we are seeing within the managing environment is that people want to track the pricing signals of the electricity. If it's particularly expensive at any point in time, they may turn something off. This demo sets the price of the electricity as though it is coming from the back-end server sending pricing signals to my home."

    The demo had a lamp and a fan, and it was tracking the price of electricity. "If I set the price of the electricity to go over 5 cents, then the device will turn off," explained Barnes. "I can go into my settings and, in this case, change the price to 50 cents, and we can wait a minute and the lamp will go off. When I change the pricing signal so that it is lower, the lamp will come back on. The key point is that the Java software we have running is the same across all the different devices; it's a way to build applications across multiple devices using the same software. This is important because it fixes peak loading on the network and can stop blackouts."

    This demo brought me back to a prior decade, when Sun Microsystems first promoted Jini technology, a version of Java that would put everything on the network and give us the smart home. Your home would be automated to tell you when you were out of milk, when to change your light bulbs, etc. You would have access to the web and the network throughout your home. It's interesting to see how technology moves over time -- from the smart home to the Internet of Things.
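
    For a rough sense of what a demand-response application of this shape looks like in code, here is a bare-bones sketch of an IMP-NG (JSR-228) style application. The 5-cent threshold mirrors the demo; readPrice() and setLoadEnabled() are hypothetical stand-ins for the real demo's pricing transport and device I/O, which weren't shown.

        import javax.microedition.midlet.MIDlet;

        // Hypothetical sketch of a demand-response IMlet; the real demo's
        // server protocol and GPIO access are not public, so stubs are used.
        public class DemandResponse extends MIDlet {
            private volatile boolean running = true;

            public void startApp() {
                new Thread(new Runnable() {
                    public void run() {
                        while (running) {
                            double cents = readPrice();    // pricing signal from the server
                            setLoadEnabled(cents <= 5.0);  // lamp/fan off above 5 cents
                            try { Thread.sleep(1000); } catch (InterruptedException e) { }
                        }
                    }
                }).start();
            }

            public void pauseApp() { }

            public void destroyApp(boolean unconditional) { running = false; }

            private double readPrice() { return 4.0; }      // stub for illustration
            private void setLoadEnabled(boolean on) { }     // stub: would drive the device
        }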

    Read the article

  • My collection of favourite TFS utilities

    - by Aaron Kowall
    So, you're in charge of your company's or team's Team Foundation Server. Wish it was easier to manage, administer, and extend? Well, here are a few utilities that I highly recommend looking at. I've recently had need to rebuild my laptop and upgrade my local TFS environment to TFS 2012 Update 1. This gave me cause to enumerate some of the utilities I like to have on hand.

    One of the reasons I love to use TFS on projects is that it's basically a complete ALM toolkit. Everything from task management, version control, build management, test management, metrics, and reporting is there 'in the box'. However, no matter how complete a product set is, there are always ways to make it better. Here is a list of utilities and libraries that are pretty generally useful. This is not intended to be an exhaustive list of TFS extensions, but rather a set that I recommend you look at; there are many more out there that may be applicable in one scenario or another. This set of tools should work with TFS 2012, or 2010 if you grab the right version. Most of these tools (and more) are available from the Visual Studio Gallery or CodePlex.

    General
    - TFS Power Tools: 'the' collection of utilities and extensions delivered by the Product Group. Highly recommended from here are the Best Practice Analyzer, for ensuring your TFS implementation is healthy, and Team Foundation Server Backups, to ensure your TFS databases are backed up correctly.
    - TFS Administrators Toolkit: helps make updates to work item types and reports across many team projects. Also provides visibility of disk usage by finding large files in version control or test attachments, to assist in managing storage utilization.

    Version Control
    - Git-TF: a set of cross-platform, command-line tools that facilitate sharing of changes between TFS and Git. These tools allow a developer to use a local Git repository and configure it to share changes with a TFS server. Great for all Git lovers who must integrate with a TFS repository.

    Testing
    - TFS 2012 Tester Power Tool: a utility for bulk copying test cases, which assists in an approach for managing test cases across multiple releases. A little plug that this utility was written and is maintained by Anna Russo of Imaginet, where I also work.
    - Test Scribe: a documentation power tool designed to construct documents directly from TFS test plan and test run artifacts for the purpose of discussion, reporting, etc.

    Reporting
    - Community TFS Report Extensions: a single repository of SQL Server Reporting Services reports for Team Foundation Server 2010 (and above). Check out the Test Plan Status report by Imaginet's Steve St. Jean; very valuable for your test managers.

    Builds
    - TFS Build Manager: a great utility if you are the build manager over a complex build environment with many TFS build definitions.
    - Community TFS Build Extensions: contains many custom build activities. Current release binaries are for TFS 2010, but many of the activities can be recompiled for use with TFS 2012.

    While compiling this list, I was surprised by the number of TFS utilities and extensions I no longer use or need in TFS 2012, because of the great work by the TFS team addressing many gaps since the 2010 release. Are there any utilities you depend on that I've missed? I'd love to hear about them in the comments!

    Read the article

  • Webcast Q&A: ResCare Solves Content Lifecycle Challenges with Oracle WebCenter

    - by Kellsey Ruppel
    Last week we had the fourth webcast in our WebCenter in Action webcast series, "ResCare Solves Content Lifecycle Challenges with Oracle WebCenter", where customer Joe Lichtefeld from ResCare and Wayne Boerger & Doug Thompson from Oracle Partner TEAM Informatics shared how Oracle WebCenter is allowing ResCare to solve content lifecycle challenges, reduce compliance and business risks, and increase adoption of the intranet as their primary business communication tool. In case you missed it, here's a recap of the Q&A.

    Joe Lichtefeld, ResCare

    Q: Did you run into any issues in the deployment of the platform?
    A: We experienced very few issues when implementing the content management and search functionalities. There were some challenges in determining the metadata structure. We tried to find a fine balance between having enough fields to provide the functionality needed, while trying to limit the impact on the contributing members.

    Q: What has been the biggest benefit your end users have seen?
    A: The biggest benefit to date is two-fold. Content on the intranet can be maintained by the individual contributors in a more timely fashion than in our old process, where all requests were updated by IT. The other big benefit is the ability to find the most current version of a document, instead of relying on emails and phone calls to track down the "current" version.

    Q: Was there any resistance internally when implementing the solution? If so, how did you overcome that?
    A: We experienced very little resistance. Most of our community groups were eager to be able to contribute and maintain their information. We had the normal hurdles of training and follow-up training that come with implementing a new system and process. As our second phase rolled out access to all employees, we have received more positive feedback on the accessibility of information.

    Wayne Boerger & Doug Thompson, TEAM Informatics

    Q: Can you integrate multiple repositories with the Google Search Appliance?
    A: Yes, the Google Search Appliance is designed to index lots of different repositories, from both public and internal sources. There are included connectors for many repositories, such as SharePoint, databases, file systems, and LDAP, and, with the TEAM GSA Connector, the Oracle Content Server. The index for these repositories can be configured into different collections depending on the use cases that each customer has -- really, for each need within a customer environment.

    Q: How many different filters can you add when the search results are returned?
    A: Presuming this question is about filtering the search results: you can add as many filters as you like, and it can be done by collection or any number of other criteria. Most importantly, customers now have the ability to limit the returned content by a set metadata value.

    Q: With the TEAM Sites Connector, what types of content can you sync?
    A: There's really no limit; if it can be checked into the content server, then it is eligible for sync into Sites. So basically, any digital file that has relevance to a Sites implementation can be checked into the WebCenter Content central repository, and then the connector can and will manage it.

    Q: Using the Connector, are there any limitations around where in Sites that synced content can be used?
    A: There are no limitations on where it can be used. When setting up your environment to use it, you just need to think through the different destinations on the Sites side that might use the content; that way you've got the right information to create the rules needed for the connector.

    If you missed the webcast, be sure to catch the replay to see a live demonstration of WebCenter in action! ResCare Solves Content Lifecycle Challenges with Oracle WebCenter from Oracle WebCenter

    Read the article

  • PowerShell Script To Find Where SharePoint 2010 Features Are Activated

    - by Brian Jackett
    The script in this post will find where features are activated within your SharePoint 2010 farm.

    Problem

    Over the past few months I've gotten literally dozens of emails, blog comments, or personal requests from people asking "how do I find where a SharePoint feature has been activated?" I wrote a script to find which features are installed on your farm almost 3 years ago. There is also the Get-SPFeature PowerShell commandlet in SharePoint 2010. The problem is that these only tell you if a feature is installed, not where it has been activated. This is especially important to know if you have multiple web applications, site collections, and/or sites.

    Solution

    The default call (no parameters) for Get-SPFeature will return all features in the farm. Many of the parameter sets accept filters for specific scopes, such as web application, site collection, and site. If those are supplied, then only the enabled/activated features are returned for that filtered scope. Taking the concept of recursively traversing a SharePoint farm and merging that with calls to Get-SPFeature at each level of the farm, you can find out which features are activated at that level. Store the results into a variable and you end up with all features that are activated at every level.

    Below is the script I came up with (slightly edited for posting on the blog). With no parameters, the function lists all features activated at all scopes. If you provide an Identity parameter, you will find where a specific feature is activated. Note that the display name for a feature you see in the SharePoint UI rarely matches the "internal" display name; I recommend using the feature ID instead. You can download a full copy of the script by clicking on the link below.

    Note: This script is not optimized for medium to large farms. In my testing it took 1-3 minutes to recurse through my demo environment. This script is provided as-is with no warranty. Run it in a smaller dev / test environment first.

        function Get-SPFeatureActivated
        {
            # see full script for help info, removed for formatting
            [CmdletBinding()]
            param(
                [Parameter(position = 1, valueFromPipeline=$true)]
                [Microsoft.SharePoint.PowerShell.SPFeatureDefinitionPipeBind]
                $Identity
            )#end param

            Begin
            {
                # declare empty array to hold results. Will add custom member
                # for Url to show where activated at on objects returned from Get-SPFeature.
                $results = @()
                $params = @{}
            }
            Process
            {
                if([string]::IsNullOrEmpty($Identity) -eq $false)
                {
                    $params = @{Identity = $Identity
                                ErrorAction = "SilentlyContinue"}
                }

                # check farm features
                $results += (Get-SPFeature -Farm -Limit All @params |
                             % {Add-Member -InputObject $_ -MemberType noteproperty `
                                -Name Url -Value ([string]::Empty) -PassThru} |
                             Select-Object -Property Scope, DisplayName, Id, Url)

                # check web application features
                foreach($webApp in (Get-SPWebApplication))
                {
                    $results += (Get-SPFeature -WebApplication $webApp -Limit All @params |
                                 % {Add-Member -InputObject $_ -MemberType noteproperty `
                                    -Name Url -Value $webApp.Url -PassThru} |
                                 Select-Object -Property Scope, DisplayName, Id, Url)

                    # check site collection features in current web app
                    foreach($site in ($webApp.Sites))
                    {
                        $results += (Get-SPFeature -Site $site -Limit All @params |
                                     % {Add-Member -InputObject $_ -MemberType noteproperty `
                                        -Name Url -Value $site.Url -PassThru} |
                                     Select-Object -Property Scope, DisplayName, Id, Url)

                        # check site features in current site collection
                        foreach($web in ($site.AllWebs))
                        {
                            $results += (Get-SPFeature -Web $web -Limit All @params |
                                         % {Add-Member -InputObject $_ -MemberType noteproperty `
                                            -Name Url -Value $web.Url -PassThru} |
                                         Select-Object -Property Scope, DisplayName, Id, Url)
                            $web.Dispose()
                        }
                        # dispose of the site only after its webs have been enumerated
                        $site.Dispose()
                    }
                }
            }
            End
            {
                $results
            }
        } #end Get-SPFeatureActivated

    Snippet of output from Get-SPFeatureActivated

    Conclusion

    This script has been requested for a long time, and I'm glad to finally have a working "clean" version. If you find any bugs or issues with the script, please let me know. I'll be posting this to the TechNet Script Center after some internal review. Enjoy the script, and I hope it helps with your admin / developer needs.

    -Frog Out

    Read the article

  • Heterogeneous Datacenter Management with Enterprise Manager 12c

    - by Joe Diemer
    The following is a guest blog, contributed by Bryce Kaiser, Product Manager at Blue Medora.

    When I envision a perfect datacenter, it would consist of technologies acquired from a single vendor across the entire server, middleware, application, network, and storage stack -- Apps to Disk -- that meets your organization's every IT requirement with absolute best-of-breed solutions in every category. To quote a familiar motto, your datacenter would consist of "Hardware and Software, Engineered to Work Together". In almost all cases, practical realities dictate something far less than the IT Utopia mentioned above. You may wish to leverage multiple vendors to keep licensing costs down, a single vendor may not have an offering in the IT category you need, or your preferred vendor may quite simply not have the solution that meets your needs. In other words, your IT needs dictate a heterogeneous IT environment. Heterogeneity, however, comes with additional complexity. The following are two pretty typical challenges:

    1) No end-to-end visibility into the enterprise-wide application deployment. Each vendor solution added to an infrastructure may bring its own tooling, creating different consoles for different vendor applications and platforms.

    2) No visibility into performance bottlenecks. When multiple management tools operate independently, you lose diagnostic capabilities, including the ability to identify cross-tier issues with the database, hung requests, slowness, memory leaks, and hardware errors/failures causing DB/MW issues.

    As adoption of Oracle Enterprise Manager (EM) has increased, especially since the release of Enterprise Manager 12c, Oracle has seen an increase in the number of customers who want to leverage their investments in EM to manage non-Oracle workloads. Enterprise Manager provides a single pane of glass view into their entire datacenter. By creating a highly extensible framework via the Oracle EM Extensibility Development Kit (EDK), Oracle has provided the tooling for business partners, such as my company Blue Medora, as well as customers, to easily fill gaps in the ecosystem and enhance existing solutions. As mentioned in the previous post on the Enterprise Manager Extensibility Exchange, customers have access to an assortment of Oracle and partner-provided solutions through this Exchange, which is accessed at http://www.oracle.com/goto/emextensibility. Currently, there are over 80 Oracle and partner-provided plug-ins across the EM 11g and EM 12c versions. Blue Medora is one of those contributing partners, and you will find three of our solutions there, including our flagship plug-in for VMware.

    Let's look at Blue Medora's VMware plug-in as an example of what I'm trying to convey. Here is a common situation solved by true visibility into your entire stack:

    Symptoms
    - My database is bogging down; however, the database appears okay internally. Maybe it's starved for resources?
    - My OS tooling is showing everything is "OK". Something doesn't add up.

    Root cause
    - Through the VMware plug-in we can see the problem is actually on the virtualization layer.

    Solution
    - From within Enterprise Manager -- the same tool you use for all of your database tuning -- we can overlay the data of the database target, host target, and virtual machine target for a true picture of the root cause.

    Here is the console view:

    Perhaps your monitoring conditions are more specific to your environment. No worries, Enterprise Manager still has you covered. With Metric Extensions you have the "next generation" of user-defined metrics, which easily bring the power of your existing management scripts into a single console while leveraging the proven Enterprise Manager framework. Simply put, Oracle Enterprise Manager boasts a growing ecosystem that provides the single pane of glass for your entire datacenter, from the database and beyond. Bryce can be contacted at [email protected]

    Read the article

  • Easy Made Easier - Networking

    - by dragonfly
    In my last post, I highlighted the feature of the Appliance Manager Configurator that auto-fills some fields based on previous field values, including host names based on the System Name and sequential IP addresses from the first IP address entered. This can make configuration a little faster and a little less subject to data entry errors, particularly if you are doing the configuration on the Oracle Database Appliance itself.

    The Oracle Database Appliance Appliance Manager Configurator is available for download here. But why would you download it, if it comes pre-installed on the Oracle Database Appliance? A common reason for customers interested in this new Engineered System is to get a good idea of how easy it is to configure. Beyond that, you can save the resulting configuration as a file and use it on an Oracle Database Appliance. This allows you to verify the data entered in advance, and in the comfort of your office. In addition, the topic of this post is another strong reason to download and use the Appliance Manager Configurator prior to deploying your Oracle Database Appliance.

    The most common source of hiccups in deploying an Oracle Database Appliance, based on my experiences with a variety of customers, involves the network configuration. It is during Step 11, when network validation occurs, that these come to light. That is almost halfway through the 24 total steps, and it can be frustrating, whether the cause was a typo, a DNS mis-configuration, or an IP address already in use. This is why I recommend, as a best practice, taking advantage of the Appliance Manager Configurator prior to deploying an Oracle Database Appliance.

    Why? Not only do you get the benefit of being able to double-check your entries before you even start on the Oracle Database Appliance, you can also take advantage of the Network Validation step. This is the final step before you review all the data and can save it to a text file. It can be skipped if you aren't ready or are not connected to the network that the Oracle Database Appliance will be on. My recommendation, though, is to run the Appliance Manager Configurator on your laptop, enter the data or re-load a previously saved file of the data, and then connect to the network that the Oracle Database Appliance will be on. Now run the Network Validation. It will check to make sure that the host names you entered are in DNS and resolve to the IP addresses you specified. It will also ping the IP addresses you specified, so that you can verify that no other machine is already using them (yes, that has happened at customer sites).

    After you have completed the validation, as seen in the screen shot below, you can review the results and move on to saving your settings to a file for use on your Oracle Database Appliance; or, if there are errors, you can use the Back button to return to the appropriate screen and correct the data. Once you are satisfied with the Network Validation, just check the Skip/Ignore Network Validation checkbox at the top of the screen, then click Next. Is the Network Validation in the Appliance Manager Configurator required? No, but it can save you time later. I should also note that the Network Validation screen is not part of the Appliance Manager Configurator that currently ships on the Oracle Database Appliance, so this is the easiest way to verify your network configuration.

    I hope you are finding this series of posts useful. My next post will cover some aspects of the windowing environment that is run by the 'startx' command on the Oracle Database Appliance, since this is needed to run the Appliance Manager Configurator via a directly connected monitor, keyboard, and mouse, or via the ILOM. If it's been a while since you've used an OpenWindows environment, you'll want to check it out.
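
    If you want a feel for what the Network Validation step is doing under the covers, the sketch below performs the same two checks in Java: forward DNS resolution against the planned IP, and a reachability probe to catch addresses that are already in use. This is purely illustrative; it is not Oracle's validation code, and the host/IP pairs are made up.

        import java.net.InetAddress;

        // Illustrative pre-checks like those in the Network Validation step.
        public class NetworkPrecheck {
            public static void main(String[] args) throws Exception {
                String[][] planned = {                  // hypothetical host/IP plan
                    {"oda-node0.example.com", "192.168.16.24"},
                    {"oda-node1.example.com", "192.168.16.25"},
                };
                for (String[] entry : planned) {
                    // Does DNS resolve the host name to the IP we intend to assign?
                    String resolved = InetAddress.getByName(entry[0]).getHostAddress();
                    boolean dnsOk = resolved.equals(entry[1]);
                    // Does anything already answer at that IP? Before deployment,
                    // a reply means the address is already taken.
                    boolean inUse = InetAddress.getByName(entry[1]).isReachable(2000);
                    System.out.printf("%-25s %-15s DNS: %-8s address: %s%n",
                        entry[0], entry[1], dnsOk ? "OK" : "MISMATCH",
                        inUse ? "IN USE" : "free");
                }
            }
        }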

    Read the article

  • The Three-Legged Milk Stool - Why Oracle Fusion Incentive Compensation makes the difference!

    - by Richard Lefebvre
    During the London Olympics, we were exposed to dozens of athletes who worked with sports psychologists to maximize their performance. Executives often hire business psychologists to coach their teams to excellence. In the same vein, Fusion Incentive Compensation can be used to get people to change their sales behavior so we can make our numbers. But what about using incentive compensation solutions in a non-sales scenario to drive change? Recently, I was working an opportunity where a company was having a low user adoption rate for Salesforce.com, which was causing problems for them. I suggested they use Fusion Incentive Comp to change the reps' behavior. We tossed around the idea of tracking user adoption by creating a variable bonus for reps based on how well they forecasted revenues in the new system. Another thought was to reward the reps for how often they logged into the system, or for the percentage of leads that became opportunities and turned into revenue. A new twist on a great product.

    Fusion CRM's Sweet Spot. I'm excited about the sales performance management (SPM) tools in Fusion CRM. This trio of Incentive Compensation, Territory Management, and Quota Management sets us apart from the competition, because Oracle is the only vendor that provides all three of these capabilities on a single tech stack, in a single application, and with a single look and feel. The niche vendors offer standalone territory or incentive compensation solutions, but then the customer has to custom-build the other tools and can end up with a Frankenstein-type environment. On average, companies overpay sales commissions by three to eight percent. Calculate that number for a company the size of Oracle for one quarter, and it makes a pretty airtight financial case for using SPM tools to figure accurate commissions. Plus, when sales reps get the right compensation, they can be out selling rather than spending precious time figuring out what they didn't get paid or looking for another job. And one more thing: Oracle knows incentive comp. We have been a Gartner MarketScope leader in this space for the last five years. Our solution gets high marks because of its scalability and because of its interoperability with other technologies. And now that we're leading with Fusion, our incentive compensation offering includes the innovations that the Fusion team built, plus enhancements from the E-Business Suite Incentive Comp team. It's a case of making a good thing even better. (See product video.)

    The "Wedge" Apps. In a number of accounts that I'm working on, there is a non-Oracle CRM system of record. That gives me the perfect opportunity to introduce the benefits of our SPM tools and to get the customer using Fusion. Then the door is wide open for the company to take up more of Fusion CRM, especially since all the integrations they need are out of the box. I really believe that implementing this wedge of SPM tools is the ticket to taking market share away from other vendors. It allows us to insert ourselves in an environment where no other CRM solution in the market has the extending capabilities of Fusion.

    Not Just Your Usual Suspects. Usually the stakeholders that I talk to for Territory Management are tightly aligned with the sales management team. When I sell the quota planning tool, I'm talking to finance people on the ERP side of the house who are measuring quotas and forecasting revenue. And then Incentive Comp is of most interest to the sales operations people, and generally these people roll up to either HR or the payroll department. I think of our Fusion SPM tools as a three-legged stool straddling an organization's Sales, Finance, and HR departments. So when you're prospecting for opportunities, yes, people with a CRM perspective will be very interested, but don't limit yourself to that constituency. You might find stakeholders in accounting, revenue planning, or HR compensation teams. You just might discover, as I did at United Airlines, that the HR organization is spearheading the CRM project, because incentive compensation is what they need... and they're the ones with the budget.

    Jason Loh
    Global Solutions Manager, Fusion CRM Sales Planning
    Oracle Corporation

    Read the article

  • ARTS Reference Model for Retail

    - by Sanjeev Sharma
    Consider a hypothetical scenario where you have been tasked to set up retail operations for an electronic goods, daily consumables, or luxury brand retailer. It is very likely you will be faced with the following questions:
    1. What are the essential business capabilities that you must have in place?
    2. What are the essential business activities under-pinning each of the business capabilities identified in Step 1?
    3. What are the steps that you need to perform to execute each of the business activities identified in Step 2?
    Answers to the above will drive your investments in software and hardware to enable the core retail operations. More importantly, the choices you make in responding to these questions will have several implications in the short run and in the long run. In the short term, you will incur the time and cost of defining your technology requirements, procuring the software/hardware components, and getting them up and running. In the long term, as you grow your operations organically or through M&A, partnerships, and franchise business models, you will invariably need to make more technology investments to manage the greater complexity (scale and scope) of business operations.

    "As new software applications, such as time & attendance, labor scheduling, and POS transactions, just to mention a few, are introduced into the store environment, it takes a disproportionate amount of time and effort to integrate them with existing store applications. These integration projects can add up to 50 percent to the time needed to implement a new software application and contribute significantly to the cost of the overall project, particularly if a systems integrator is called in. This has been the reality that all retailers have had to live with over the last two decades. The effect of the environment has not only been to increase costs, but also to limit retailers' ability to implement change and the speed with which they can do so." (excerpt taken from here)

    Now, one would think a lot of retailers have already gone through the pain of finding answers to these questions, so why re-invent the wheel? Precisely for that reason, a major effort began almost 17 years ago in the retail industry to make it less expensive and less difficult to deploy new technology in stores and at the retail enterprise level. This effort is the Association for Retail Technology Standards (ARTS). Without standards such as those defined by ARTS, you would very likely end up experiencing the following:
    - Increased time and cost, due to resource wastage arising from re-inventing the wheel (re-creating vanilla processes from scratch) and incurring otherwise avoidable mistakes and errors by ignoring the experience of others
    - Sub-optimal process efficiency, due to a narrow, isolated view of processes that ignores process inter-dependencies (optimizing the parts but not the whole), resulting in a lack of transparency and inter-departmental finger-pointing
    Embracing ARTS standards as a blueprint for establishing, managing, or streamlining your retail operations can benefit you in the following ways:
    - Improved time-to-market, from parity with industry best-practice processes such as ARTS, thus avoiding "reinventing the wheel" for common retail processes and focusing more on customizing processes for differentiation, and from lowering integration complexity and risk with a standardized vocabulary for exchange between internal and external (i.e. partner) systems
    - Lower operating costs, by embracing the ARTS enterprise-wide process reference model for developing and streamlining retail operations holistically instead of through a narrow, silo-ed view, and by procuring IT systems in compliance with ARTS, thus avoiding IT budget marginalization
    While parity with an industry standard such as the ARTS business process model does not by itself create a differentiation, it does provide a higher starting point for bridging the strategy-execution gap in setting up and improving retail operations.

    Read the article

  • Meet Matthijs, Dutch Inside Sales Representative for Oracle Direct

    - by Maria Sandu
    Today we would like to share some information about the Dutch Core Technology team in Malaga. Matthijs is one of the team members who decided to relocate from the Netherlands to Malaga to join Oracle Direct two years ago.

    Matthijs: "For the past two years I have been working as an Oracle Direct Core Technology Inside Sales representative for Named Accounts in the Netherlands, based in Malaga, Spain. In my case, working for the Dutch OD Core Technology team means that I am responsible for the account management of larger companies in the Travel & Transportation and the Manufacturing, Retail & Distribution sectors. I work together with the Oracle Field Account Managers and our Field Sales Management in the Netherlands, where I am often the main point of contact for customers. This means that I deal with their requests and manage their various issues, providing solutions and suggestions based on the Oracle Core Technology portfolio. I work on interesting projects with end customers, making financial proposals and building business cases. It is a very interesting sales environment, and over the last two years I have improved my skills substantially. This month I will finish my Inside Sales career in Malaga to move to a position within Field Sales in the Netherlands. Oracle Direct has proven to be a great stepping stone for my career.

    Boost your personal development
    One of the reasons for joining Oracle was to boost my personal and career development. You can choose from various trainings to follow all over Europe which enable you to reach both your personal and professional goals. Furthermore, you can decide your own career path and plan the steps necessary to achieve your goal. Many people aim to grow into Field Sales in their native countries, Business Development or Sales Management, but there are many possibilities once you decide to join Oracle. Overall, working at Oracle means working for an international company and one of the worldwide leaders in enterprise hardware and software. Here you get all the tools necessary to develop yourself personally and professionally. Another great advantage of working for Oracle Direct is working from our office in Malaga, Southern Spain, where we have over 400 employees from many countries across EMEA. It is a truly international environment! Working and living in Spain gives you an excellent opportunity to learn Spanish and of course enjoy the Spanish lifestyle, cuisine, beaches and much, much more!"

    Interview day Utrecht
    If you are inspired by the story of Matthijs and would like to explore the opportunity to join the Technology Sales team for the Dutch market in Malaga, let us know! We will organise an interview day in the Oracle office in Utrecht on the 18th and 19th of September. We currently have multiple openings in the Core Technology team that focus on selling our Database portfolio in the Dutch market. We are looking for native Dutch speakers with a Bachelor's degree and 2-5 years of sales experience (ideally in IT) who are willing to relocate to Malaga for at least 2 years! For more information please contact [email protected] or [email protected].

    Read the article

  • PowerShell Script To Find Where SharePoint 2007 Features Are Activated

    - by Brian T. Jackett
    Recently I posted a script to find where SharePoint 2010 Features are activated. I built the original version to use SharePoint 2010 PowerShell commandlets, as that saved me a number of steps for filtering and gathering features at each level. If there was ever demand for a 2007 version, I could modify the script to handle that by using the object model instead of commandlets. Just the other week a fellow SharePoint PFE, Jason Gallicchio, had a customer asking about a version for SharePoint 2007. With a little bit of work I was able to convert the script to work against SharePoint 2007.

    Solution
    Below is the converted script that works against a SharePoint 2007 farm.
    Note: There appears to be a bug with the 2007 version that does not give accurate results against a SharePoint 2010 farm. I ran the 2007 version against a 2010 farm and got fewer results than with my 2010 version of the script. Discussing with some fellow PFEs, I think the discrepancy may be due to sandboxed features, a new concept in SharePoint 2010. I have not had enough time to test or confirm. For the time being, only use the 2007 version of the script against SharePoint 2007 farms and the 2010 version against SharePoint 2010 farms.
    Note: This script is not optimized for medium to large farms. In my testing it took 1-3 minutes to recurse through my demo environment. This script is provided as-is with no warranty. Run this in a smaller dev/test environment first.

        function Get-SPFeatureActivated
        {
            # see full script for help info, removed for formatting
            [CmdletBinding()]
            param(
                [Parameter(position = 1, valueFromPipeline=$true)]
                [string]
                $Identity
            )#end param

            Begin
            {
                # load SharePoint assembly to access object model
                [void][System.Reflection.Assembly]::LoadWithPartialName("Microsoft.SharePoint")

                # declare empty array to hold results. Will add custom member for Url to show where activated at on objects returned from Get-SPFeature.
                $results = @()
                $params = @{}
            }
            Process
            {
                if([string]::IsNullOrEmpty($Identity) -eq $false)
                {
                    $params = @{Identity = $Identity}
                }

                # create hashtable of farm features to lookup definition ids later
                $farm = [Microsoft.SharePoint.Administration.SPFarm]::Local

                # check farm features
                $results += ($farm.FeatureDefinitions | Where-Object {$_.Scope -eq "Farm"} | Where-Object {[string]::IsNullOrEmpty($Identity) -or ($_.DisplayName -eq $Identity)} |
                             % {Add-Member -InputObject $_ -MemberType noteproperty -Name Url -Value ([string]::Empty) -PassThru} |
                             Select-Object -Property Scope, DisplayName, Id, Url)

                # check web application features
                $contentWebAppServices = $farm.services | ? {$_.typename -like "Windows SharePoint Services Web Application"}

                foreach($webApp in $contentWebAppServices.WebApplications)
                {
                    $results += ($webApp.Features | Select-Object -ExpandProperty Definition | Where-Object {[string]::IsNullOrEmpty($Identity) -or ($_.DisplayName -eq $Identity)} |
                                 % {Add-Member -InputObject $_ -MemberType noteproperty -Name Url -Value $webApp.GetResponseUri(0).AbsoluteUri -PassThru} |
                                 Select-Object -Property Scope, DisplayName, Id, Url)

                    # check site collection features in current web app
                    foreach($site in ($webApp.Sites))
                    {
                        $results += ($site.Features | Select-Object -ExpandProperty Definition | Where-Object {[string]::IsNullOrEmpty($Identity) -or ($_.DisplayName -eq $Identity)} |
                                     % {Add-Member -InputObject $_ -MemberType noteproperty -Name Url -Value $site.Url -PassThru} |
                                     Select-Object -Property Scope, DisplayName, Id, Url)

                        # check site features in current site collection
                        foreach($web in ($site.AllWebs))
                        {
                            $results += ($web.Features | Select-Object -ExpandProperty Definition | Where-Object {[string]::IsNullOrEmpty($Identity) -or ($_.DisplayName -eq $Identity)} |
                                         % {Add-Member -InputObject $_ -MemberType noteproperty -Name Url -Value $web.Url -PassThru} |
                                         Select-Object -Property Scope, DisplayName, Id, Url)

                            $web.Dispose()
                        }
                        $site.Dispose()
                    }
                }
            }
            End
            {
                $results
            }
        } #end Get-SPFeatureActivated

        Get-SPFeatureActivated

    Conclusion
    I have posted this script to the TechNet Script Repository (click here). As always, I appreciate any feedback on scripts. If anyone is motivated to run this 2007 version of the script against a SharePoint 2010 farm to see if they find any differences in the number of features reported versus what they get with the 2010 version of the script, I'd love to hear from you.

    -Frog Out
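    For reference, a quick usage sketch based on the function's parameter; the feature name below is hypothetical:

        # report every scope at which a specific feature is activated
        Get-SPFeatureActivated -Identity "MyCompany.Branding"

        # or dump all activated features across the farm
        Get-SPFeatureActivated | Format-Table Scope, DisplayName, Url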

    Read the article

  • Apache config that uses two document roots based on whether the requested resource exists in the first [closed]

    - by mattalexx
    Background
    I have a client site that consists of a CakePHP installation and a Magento installation:

        /web/example.com/
        /web/example.com/app/                    <== CakePHP
        /web/example.com/app/webroot/            <== DocumentRoot
        /web/example.com/app/webroot/store/      <== Magento
        /web/example.com/config/                 <== Site-wide config
        /web/example.com/vendors/                <== Site-wide libraries

    The server runs Apache 2.2.3.

    The problem
    The whole company has FTP access and got used to clogging up the /web/example.com/, /web/example.com/app/webroot/, and /web/example.com/app/webroot/store/ directories with their own files. Sometimes these files need HTTP access and sometimes they don't. In any case, this mess makes my job harder when it comes to maintaining the site. Code merges, tarring the live code, etc., are very complicated and usually require a bunch of filters.

    Abandoned solution
    At first, I thought I would set up a new subdomain on the same server, move all of their files there, and change their FTP chroot. But that wouldn't work, for these reasons: Firstly, I have no idea (and neither do they remember) what marketing materials they've sent out that contain URLs to certain resources they've uploaded to the server, using the main domain, and also using arbitrary subdomains that resolve to the main virtual host because it has ServerAlias *.example.com. So suddenly having them use only static.example.com isn't feasible. Secondly, the PHP scripts in their projects are potentially very non-portable. I want their files to stay in as similar an environment to the one they were built in as I can. Also, I do not want to debug their code to make it portable.

    Half-baked solution
    After some thought, I decided to find a way to section off the actual website files into another directory that they would not touch. The company's uploaded files would stay where they were. This would ensure that I didn't break any of their projects that needed HTTP access. It would look something like this:

        /web/example.com/                        <== A bunch of their files are in here
        /web/example.com/app/webroot/            <== 1st DocumentRoot; a bunch of their files are in here
        /web/example.com/app/webroot/store/      <== Some more are in here
        /web/example.com/site/                   <== New dir; contains only site files
        /web/example.com/site/app/               <== CakePHP
        /web/example.com/site/app/webroot/       <== 2nd DocumentRoot
        /web/example.com/site/app/webroot/store/ <== Magento
        /web/example.com/site/config/            <== Site-wide config
        /web/example.com/site/vendors/           <== Site-wide libraries

    After I made this change, I would not need to pay attention to anything except the stuff within /web/example.com/site/, and my job would be a lot easier. I would be the only one changing stuff in there. So here's where the Apache magic would happen: I need an HTTP request to http://www.example.com/ to first use /web/example.com/app/webroot/ as the document root. If nothing is found there (no miscellaneous uploaded company projects), try finding something within /web/example.com/site/app/webroot/. Another thing to keep in mind: the site might have problems if the $_SERVER['DOCUMENT_ROOT'] variable reads /web/example.com/app/webroot/ but the actual files are within /web/example.com/site/app/webroot/. It would be better if the DOCUMENT_ROOT environment variable could be /web/example.com/site/app/webroot/ for anything within the /web/example.com/site/app/webroot/ directory.

    Conclusion
    Is my half-baked solution possible with Apache 2.2.3? Is there a better way to solve this problem?
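    A minimal sketch of one way to express this fallback in Apache 2.2, assuming mod_rewrite and mod_alias are loaded; the /legacy prefix is an arbitrary internal name, and note that legacy PHP scripts would still see the new DOCUMENT_ROOT, so the caveat above would still need testing:

        <VirtualHost *:80>
            ServerName www.example.com
            ServerAlias *.example.com
            # serve the clean site tree by default
            DocumentRoot /web/example.com/site/app/webroot

            # internal alias pointing at the old, cluttered tree
            Alias /legacy /web/example.com/app/webroot

            RewriteEngine On
            # if the requested resource exists as a file in the old tree, serve it from there
            RewriteCond /web/example.com/app/webroot%{REQUEST_URI} -f
            RewriteRule ^/(.*)$ /legacy/$1 [PT,L]
        </VirtualHost>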

    Read the article

  • Do Great Work

    - by user12601034
    Have you ever attended an online conference and actually had a desire to attend all of it? Yesterday I attended the first day of the Great Work MBA program, sponsored by Box of Crayons and hosted by Michael Bungay Stanier. The topic of the day was "Grounding Yourself," and the day featured five speakers on five different topics. I have to admit that I started the first session with kind of a "blech" feeling that I didn't really want to participate, but for some reason I did. So I listened to the first session, and I was hooked. I ended up listening to all of the sessions for the day, and I had some great take-aways from the sessions. My highlights included:

    - The opposite of bravery isn't fear, it's settling. In essence, you need to be brave in order to accomplish anything. If you're settling, you're not being brave, and your accomplishments will likely be lackluster.
    - Bravery requires confidence and permission. You need to work at being brave by taking small wins, building on them, and then taking slightly larger risks. Additionally, you need to "claim your own crown." Nobody in the business world is going to give you permission to be a guru in X; you need to give yourself permission to become a guru in X and then do it.
    - Fall in love with obstacles. Everyone is going to face some form of failure. One way to deal with this is to fall in love with solving the puzzle of obstacles. You don't have to hit it if you can go around it.
    - Understanding purpose brings out the best in people and the best people. As a leader, drawing in people who are passionate and highly motivated about their work creates velocity for your organization. Being clear about purpose is the first step in doing this.
    - You must own your own story. Everything about you creates a "unique you" that is distinct from everyone else. As you take ownership of this, it becomes part of your strength. It's not a strength if you're running away from it.
    - Focus on what's right. Be aware of your tendency to interpret a situation a certain way, and differentiate between helpful and unhelpful interpretations.
    - Three questions for how to think differently: 1) Why? 2) Who says so? 3) What would happen if? These three questions can help you build alternative perspectives and options that can increase resiliency.

    Even though this first day was focused on "Grounding Yourself," I see plenty of application in the corporate environment for both individuals and leaders of teams. To apply these highlights to my work environment, I would do the following:

    - Understand the purpose of my company, of my team, and of my role on the team. If I know the purpose, I know what I need to bring to the table to make me, my team, and my company successful.
    - Declare your goals... your BHAGs (big, hairy, audacious goals). Have the confidence to declare what you and/or your team are going to accomplish. Sure, you might have to re-state those goals down the line, but you can learn from that as well.
    - Get creative about achieving your goals. Break down your obstacles by asking yourself what is going to stop you from achieving your goals, and then, for each obstacle, ask those three questions: Why? Who says so? What would happen if?
    - Focus on what's right. I had a manager who asked us to write status reports every week. "Status" consisted of 1) what did I accomplish, 2) what will I accomplish next week, and 3) how can my manager help me. The focus of our status reports was always "what's right" ("what's wrong" was always a conversation at the point in time it was needed).

    I'm normally a skeptic of online webcasts/conferences, and I usually expect to take away maybe one or two ideas. I'm really glad, however, that I took the time to listen to all of the sessions yesterday, and I hope that my take-aways inspire you to think about how you might do great work also.

    --

    Read the article

  • The Growing Importance of Network Virtualization

    - by user12608550
    We often focus on server virtualization when we discuss cloud computing, but just as often we neglect to consider some of the critical implications of that technology. The ability to create virtual environments (or VEs [1]) means that we can create, destroy, activate and deactivate, and, more importantly, MOVE them around within the cloud infrastructure. This elasticity and mobility has profound implications for how network services are defined, managed, and used to provide cloud services. It's not just servers that benefit from virtualization; it's the network as well.

    Network virtualization is becoming a hot topic, and not just for discussion but for companies like Oracle and others that have recently acquired network virtualization companies [2,3]. But even before this topic became so prominent, Solaris engineers were working on technologies in Solaris 11 to virtualize network services, known as Project Crossbow [4].

    And why is network virtualization so important? Because old assumptions about network devices, topology, and management must be re-examined in light of the self-service, elasticity, and resource-sharing requirements of cloud computing infrastructures. Static, hierarchical network designs and inter-system traffic flows need to be reconsidered and quite likely re-architected to take advantage of new features like virtual NICs and switches, bandwidth control, load balancing, and traffic isolation. For example, traditional multi-tier Web services (Web server, App server, DB server) that share net traffic over Ethernet wires can now be virtualized and hosted on shared-resource systems that communicate within a larger server at system bus speeds, increasing performance and reducing wired network traffic. And virtualized traffic flows can be monitored and adjusted as needed to optimize network performance for dynamically changing cloud workloads. Additionally, as VEs come and go and move around in the cloud, static network configuration methods cannot easily accommodate the routing and addressing flexibility that VE mobility implies; virtualizing the network itself is a requirement.

    Oracle Solaris 11 [5] includes key network virtualization technologies needed to implement cloud computing infrastructures. It includes features for the creation and management of virtual NICs and switches, and for the allocation and control of the traffic flows among VEs [6]. Additionally, it allows for both sharing and dedication of hardware components to network tasks, such as allocating specific CPUs and vNICs to VEs, and even protocol-specific management of traffic.

    So, have a look at your current network topology and management practices in view of evolving cloud computing technologies. And don't simply duplicate the physical architecture of servers and connections in a virtualized environment... rethink the traffic flows among VEs and how they can be optimized using Oracle Solaris 11 and other Oracle products and services.

    [1] I use the term "virtual environment" or VE here instead of the more commonly used "virtual machine" or VM, because not all virtualized operating system environments are full OS kernels under the control of a hypervisor... in other words, not all VEs are VMs. In particular, VEs include Oracle Solaris zones, as well as SPARC VMs (previously called LDoms), and x86-based Solaris and Linux VMs running under hypervisors such as OEL, Xen, KVM, or VMware.
    [2] Oracle follows VMware into network virtualization space with Xsigo purchase; http://www.mercurynews.com/business/ci_21191001/oracle-follows-vmware-into-network-virtualization-space-xsigo
    [3] Oracle Buys Xsigo; http://www.oracle.com/us/corporate/press/1721421
    [4] Oracle Solaris 11 Networking Virtualization Technology; http://www.oracle.com/technetwork/server-storage/solaris11/technologies/networkvirtualization-312278.html
    [5] Oracle Solaris 11; http://www.oracle.com/us/products/servers-storage/solaris/solaris11/overview/index.html
    [6] For example, the Solaris 11 'dladm' command can be used to limit the bandwidth of a virtual NIC, as follows: dladm create-vnic -l net0 -p maxbw=100M vnic0
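    Building on [6], here is a minimal sketch of assembling an internal virtual network with dladm on Solaris 11; the link and VNIC names are illustrative:

        # create an etherstub -- an internal virtual switch not bound to a physical NIC
        dladm create-etherstub stub0

        # attach a VNIC per virtual environment to the etherstub
        dladm create-vnic -l stub0 vnic1
        dladm create-vnic -l stub0 vnic2

        # cap vnic1 at 100 Mbps, as in the example in [6]
        dladm set-linkprop -p maxbw=100M vnic1

        # verify the virtual links
        dladm show-vnic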

    Read the article

  • Gnome-shell fails to load on 12.10

    - by Githlar
    I'm usually the one answering questions, but on this one I'm thoroughly stumped!

    My Setup:
    - Ubuntu 12.10 (dist upgrade from 12.04)
    - ATI M96 [Mobility Radeon HD 4650]

    Upon the first installation of 12.10 I had all kinds of issues getting the legacy ATI drivers to install (I guess the source for the drivers isn't kosher with kernel 3.5). So, I added the repository ppa:makson96/fglrx, which has a version of the ATI source patched to work with kernel 3.5. After installation of fglrx-legacy from that PPA, gnome-shell and all my graphics worked fine... until today.

    The Problem
    I unsuspended my computer today and the screen was black (not off, the black from the gnome lock screen). I'd move my mouse/hit a key and the background would flash and then it'd go back to black. I restarted via VT1 and logged into a Gnome (gnome-shell) session, but no gnome-shell!

    Investigation:
    First, I went to VT1 and tried export DISPLAY=:0; gnome-shell --replace. It appeared to work fine; I switched back to X and nothing. Went back to VT1 and saw this error message:

        JS ERROR: !!! Exception was: TypeError: Object 0x7fc748129c30 is not a subclass of (null), it's a xO
        JS ERROR: !!! message = '"Object 0x7fc748129c30 is not a subclass of (null), it's a xO"'
        JS ERROR: !!! fileName = '"/usr/share/gnome-shell/js/ui/tweener.js"'
        JS ERROR: !!! lineNumber = '218'
        JS ERROR: !!! stack = '"()@/usr/share/gnome-shell/js/ui/tweener.js:218
        wrapper()@/usr/share/gjs-1.0/lang.js:204
        ()@/usr/share/gjs-1.0/lang.js:145
        ()@/usr/share/gjs-1.0/lang.js:239
        init()@/usr/share/gnome-shell/js/ui/tweener.js:49
        init()@/usr/share/gnome-shell/js/ui/environment.js:96
        @<main>:1 "'
        Window manager warning: Log level 32: Execution of main.js threw exception: TypeError: Object 0x7fc748129c30 is not a subclass of (null), it's a xO

    Note: Everywhere it says "it's a xO", xO is actually garbled and changes every time (I'm thinking memory corruption?). This error is thrown by line 96 of /usr/share/gnome-shell/js/ui/environment.js: tweener.Init()

    Did a purge of fglrx-legacy, reboot, reinstall of fglrx-legacy, reboot... same thing. Did a ppa-purge of ppa:gnome3-team/gnome3, and reinstalled gnome-shell and ubuntu-desktop from the standard repositories... same thing. I'm really at a loss here. I love gnome-shell, and after using it for nearly a year gnome classic just seems so archaic.

    Additional Information
    Apt log from the day I first suspended my machine (these are upgrades from the gnome3-team/gnome3 and ubuntu-wine/ppa PPAs):

        Start-Date: 2012-11-24 17:30:28
        Commandline: aptdaemon role='role-commit-packages' sender=':1.618'
        Install: gkbd-capplet:amd64 (3.6.0-0ubuntu1), gnome-control-center-unity:amd64 (1.0-0ubuntu1~ubuntu12.10.1)
        Upgrade: nautilus:amd64 (3.6.2-0ubuntu0.1~quantal1, 3.6.3-0ubuntu2~ubuntu12.10.1), libgnome-control-center1:amd64 (3.4.2-0ubuntu19, 3.6.3-0ubuntu6~ubuntu12.10.1), wine1.5-i386:i386 (1.5.17-0ubuntu4, 1.5.18-0ubuntu1), wine1.5:amd64 (1.5.17-0ubuntu4, 1.5.18-0ubuntu1), gnome-settings-daemon:amd64 (3.4.2-0ubuntu14, 3.6.3-0ubuntu1~ubuntu12.10.1), gnome-control-center-data:amd64 (3.4.2-0ubuntu19, 3.6.3-0ubuntu6~ubuntu12.10.1), gnome-accessibility-themes:amd64 (3.6.0.2-0ubuntu1, 3.6.2-0ubuntu2~ubuntu12.10.1), gnome-themes-standard:amd64 (3.6.0.2-0ubuntu1, 3.6.2-0ubuntu2~ubuntu12.10.1), wine1.5-amd64:amd64 (1.5.17-0ubuntu4, 1.5.18-0ubuntu1), nautilus-data:amd64 (3.6.2-0ubuntu0.1~quantal1, 3.6.3-0ubuntu2~ubuntu12.10.1), gnome-control-center:amd64 (3.4.2-0ubuntu19, 3.6.3-0ubuntu6~ubuntu12.10.1), libnautilus-extension1a:amd64 (3.6.2-0ubuntu0.1~quantal1, 3.6.3-0ubuntu2~ubuntu12.10.1)
        End-Date: 2012-11-24 17:31:32

    fglrxinfo (the driver appears to be working):

        display: :0  screen: 0
        OpenGL vendor string: Advanced Micro Devices, Inc.
        OpenGL renderer string: ATI Mobility Radeon HD 4650
        OpenGL version string: 3.3.11653 Compatibility Profile Context

    Does anybody have any further ideas?
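    For anyone retracing the steps above, a sketch of the purge/reinstall cycle described (assuming the same PPAs; ppa-purge comes from the ppa-purge package):

        # remove and reinstall the patched legacy driver
        sudo apt-get purge fglrx-legacy
        sudo reboot
        sudo apt-get install fglrx-legacy

        # roll back the gnome3 PPA and reinstall the desktop packages
        sudo ppa-purge ppa:gnome3-team/gnome3
        sudo apt-get install --reinstall gnome-shell ubuntu-desktop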

    Read the article

  • Consolidation in a Database Cloud

    - by B R Clouse
    Consolidation of multiple databases onto a shared infrastructure is the next step after Standardization. The potential consolidation density is a function of the extent to which the infrastructure is shared. The three models provide increasing degrees of sharing:

    - Server: each database is deployed in a dedicated VM. Hardware is shared, but most of the software infrastructure is not. Standardization is often applied incompletely since operating environments can be moved as-is onto the shared platform. The potential for VM sprawl is an additional downside.
    - Database: multiple database instances are deployed on a shared software/hardware infrastructure. This model is very efficient and easily implemented with the features in the Oracle Database and supporting products. Many customers have moved to this model and achieved significant, measurable benefits.
    - Schema: multiple schemas are deployed within a single database instance. The most efficient model, it places constraints on the environment. Usually this model will be implemented only by customers deploying their own applications. (Note that a single deployment can combine Database and Schema consolidations.)

    Customer value: lower costs, better system utilization
    In this phase of the maturity model, under-utilized hardware can be used to host more workloads, or retired and those workloads migrated to consolidation platforms. Customers benefit from higher utilization of the hardware resources, resulting in reduced data center floor space, and lower power and cooling costs. And, the OpEx savings from Standardization are multiplied, since there are fewer physical components (both hardware and software) to manage.

    Customer value: higher productivity
    The OpEx benefits from Standardization are compounded since not only are there fewer types of things to manage, now there are fewer entities to manage. In this phase, customers discover that their IT staff has time to move away from "day-to-day" tasks and start investing in higher-value activities. Database users benefit from consolidating onto shared infrastructures by relieving themselves of the requirement to maintain their own dedicated servers. Also, if the shared infrastructure offers capabilities such as High Availability / Disaster Recovery, which are often beyond the budget and skillset of a standalone database environment, then moving to the consolidation platform can provide access to those capabilities, resulting in less downtime.

    Capabilities / Characteristics
    In this phase, customers will typically deploy fixed-size clusters and consolidate on a cluster until that cluster is deemed "full," at which point a new cluster is built. Customers will define one or a few cluster architectures that are used wherever possible; occasionally there may be deployments which must be handled as exceptions. The "full" policy may be based on the number of databases deployed on the cluster, or observed peak workload, etc. IT will own the provisioning of new databases on a cluster, making the decision of when and where to place new workloads. Resources may be managed dynamically, e.g., as a priority workload increases, it may be given more CPU and memory to handle the spike. Users will be charged at a fixed, relatively coarse level; or in some cases, no charging will be applied.

    Activities / Tasks
    Oracle offers several tools to plan a successful consolidation. Real Application Testing (RAT) has a feature to help plan and validate database consolidations. Enterprise Manager 12c's Cloud Management Pack for Database includes a planning module. Looking ahead, customers should start planning for the Services phase by defining the Service Catalog that will be made available for database services.
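    As one concrete illustration of the dynamic resource control described under Capabilities / Characteristics (a sketch, not from the article; the plan and group names are hypothetical), Oracle Resource Manager can shift CPU between consolidated workloads:

        -- create a plan that favors a priority workload on the shared instance
        BEGIN
          DBMS_RESOURCE_MANAGER.CREATE_PENDING_AREA();

          DBMS_RESOURCE_MANAGER.CREATE_PLAN(
            plan    => 'CONSOL_PLAN',
            comment => 'CPU shares for consolidated workloads');

          DBMS_RESOURCE_MANAGER.CREATE_CONSUMER_GROUP(
            consumer_group => 'PRIORITY_APPS',
            comment        => 'latency-sensitive workloads');

          -- give the priority group the majority share of CPU
          DBMS_RESOURCE_MANAGER.CREATE_PLAN_DIRECTIVE(
            plan             => 'CONSOL_PLAN',
            group_or_subplan => 'PRIORITY_APPS',
            comment          => 'majority CPU share',
            mgmt_p1          => 75);

          -- OTHER_GROUPS must be covered for the plan to validate
          DBMS_RESOURCE_MANAGER.CREATE_PLAN_DIRECTIVE(
            plan             => 'CONSOL_PLAN',
            group_or_subplan => 'OTHER_GROUPS',
            comment          => 'everything else',
            mgmt_p1          => 25);

          DBMS_RESOURCE_MANAGER.VALIDATE_PENDING_AREA();
          DBMS_RESOURCE_MANAGER.SUBMIT_PENDING_AREA();
        END;
        /

        -- activate the plan; changing it later re-balances CPU without downtime
        ALTER SYSTEM SET resource_manager_plan = 'CONSOL_PLAN';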

    Read the article

  • Try the Oracle Database Appliance Manager Configurator - For Fun!

    - by pwstephe-Oracle
    If you would like to get a first-hand glimpse of how easy it is to configure an ODA, even if you don't have access to one, it's possible to download the Appliance Manager Configurator from the Oracle Technology Network and run it standalone on your PC or Linux/UNIX workstation. The configurator is packaged in a zip file that contains the complete Java environment to run standalone. Once the package is downloaded and unzipped, it's simply a matter of launching it using the config command or shell, depending on your runtime environment.

    Oracle Appliance Manager Configurator is a Java-based tool that enables you to input your deployment plan and validate your network settings before an actual deployment, or you can just preview and experiment with it. Simply download and run the configurator on a local client system, which can be a Windows, Linux, or UNIX system. (For Windows, launch the batch file config.bat; for Linux/UNIX environments, run ./config.sh.) You will be presented with the very same dialogs and options used to configure a production ODA, but on your workstation.

    At the end of a configurator session, you may save your deployment plan in a configuration file. If you were actually ready to deploy, you could copy this configuration file to a real ODA, where the online Oracle Appliance Manager Configurator would use its contents to deploy your plan in production. You may also print the file's contents and use the printout as a checklist for setting up your production external network configuration. Be sure to use the actual production network addresses you intend to use, as this will only work correctly if your client system is connected to the same network that will be used for the ODA. (This step is not necessary if you are just previewing the Configurator.)

    This is a great way to get an introductory look at the simple and intuitive Database Appliance configuration interface and the steps to configure a system.
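    A quick sketch of the try-out loop on Linux/UNIX (the zip and directory names here are illustrative; use whatever file name OTN gives you):

        # unpack the standalone configurator downloaded from OTN
        unzip oda_configurator.zip
        cd oda_configurator

        # launch the configurator
        ./config.sh

        # on Windows, unzip and run config.bat from a command prompt instead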

    Read the article

  • MSBuild (.NET 4.0) access problems

    - by JMP
    I'm using CruiseControl.NET as my build server (Windows Server 2008). Yesterday I upgraded my ASP.NET MVC project from VS 2008/.NET 3.5 to VS 2010/.NET 4.0. The only change I made to my ccnet.config's MSBuild task was the location of MSBuild.exe. Ever since I made that change, the build has been broken with the error:

        MSB4019 - The imported project "C:\Program Files (x86)\MSBuild\Microsoft\VisualStudio\v10.0\WebApplications\Microsoft.WebApplication.targets" was not found. Confirm that the path in the <Import> declaration is correct, and that the file exists on disk.

    This file does, in fact, exist in the location specified (I solved a problem similar to this when setting up the build server for VS 2008/.NET 3.5 by copying the files from my dev environment to my build environment). So I RDP'ed into the build machine, opened a command prompt, and used MSBuild to attempt to build my project. MSBuild returns the error:

        MSB3021 - Unable to copy file "obj\debug....dll". Access to the path 'bin....dll' is denied.

    Since I'm running MSBuild from the command prompt, logged in with an account that has administrative privileges, I'm assuming that MSBuild is running with the same privileges that I have. Next, I tried to copy the file that MSBuild was attempting to copy. In this case, I get the UAC dialog that makes me click the [Continue] button to complete the copy. I'd like to avoid installing Visual Studio 2010 on my build machine. Can anyone suggest other fixes I might try?
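    One diagnostic worth sketching out (not from the original post; the solution name is hypothetical): invoke the .NET 4.0 MSBuild directly from an elevated command prompt and capture a diagnostic log, to see exactly which step loses file access:

        REM run from an elevated command prompt on the build machine
        "%WINDIR%\Microsoft.NET\Framework\v4.0.30319\MSBuild.exe" MySolution.sln ^
            /p:Configuration=Release /v:diag /fl /flp:logfile=msbuild.log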

    Read the article

< Previous Page | 172 173 174 175 176 177 178 179 180 181 182 183  | Next Page >