Search Results

Search found 31504 results on 1261 pages for 'oracle xml gateway'.


  • April 12 EBS Webcast: Value Chain Planning 12.1.3.6 Rapid Planning Enhancements

    - by Oracle_EBS
    ADVISOR WEBCAST: 12.1.3.6 Rapid Planning Enhancements. PRODUCT FAMILY: Value Chain Planning. April 12, 2012 at 11 am ET, 9 am MT, 8 am PT. This one-hour session is recommended for functional users working on the implementation of Oracle Rapid Planning, and for consultants interested in the latest Oracle Rapid Planning features and enhancements available through VCP version 12.1.3.6. This webcast will discuss Safety Stock Calculation Using Quantities, Substitution Logic, and RP-CP Collaboration. Topics will include: insight into the latest enhancements available in Oracle Rapid Planning, what is available in this version compared to earlier versions, and version changes. A short live demonstration (only if applicable) and a question-and-answer period will be included. Oracle Advisor Webcasts are dedicated to building your awareness around our products and services. This session does not replace offerings from Oracle Global Support Services. The current schedule can be found in Note 740966.1; post-presentation recordings can be found in Note 740964.1.

    Read the article

  • Value Chain Planning in Las Vegas

    - by Paul Homchick
    Several Oracle Value Chain Planning experts will be presenting at the Mandalay Bay Convention Center in Las Vegas for Collaborate 2010, April 18-22, 2010. We have five sessions as follows: Monday, April 19, 1:15 pm - 2:15 pm, Breakers H, Roger Goossens, Oracle VCP Vice President: Leveraging Oracle Value Chain Planning for Your Planning Business Transformation; Monday, April 19, 3:45 pm - 4:45 pm, Breakers I, Scott Malcolm, Oracle VCP Development: Complex Supply Chain Planning Made Easy: Introducing Oracle Rapid Planning; Tuesday, April 20, 2:00 pm - 3:00 pm, Breakers I, John Bermudez, Oracle VCP Strategy: Synchronize Your Financial and Operating Plans with Oracle Integrated Business Planning; Wednesday, April 21, 10:30 am - 11:30 am, Breakers I, Vikash Goyal, Oracle VCP Strategy: Oracle Demantra: What's New?; Wednesday, April 21, 2:15 pm - 3:15 pm, Mandalay Bay Ballroom A, Roger Goossens, Oracle VCP Vice President: Value Chain Planning for JD Edwards EnterpriseOne. We will also be in the demogrounds, so stop by to see the latest VCP innovations from Oracle and talk to our experts.

    Read the article

  • Protecting offline IRM rights and the error "Unable to Connect to Offline database"

    - by Simon Thorpe
    One of the most common problems I get asked about Oracle IRM relates to the error message "Unable to Connect to Offline database". This error message is a result of how Oracle IRM protects the cached rights on the local machine: if that cache has become invalid in any way, this error is thrown. Offline rights and security. First we need to understand how Oracle IRM handles offline use. The way it is implemented is one of the main reasons why Oracle IRM is the leading document security solution; it demonstrates our methodology of ensuring that solutions address both security and usability, and puts the balance of the two in your control. Each classification has a set of predefined roles that the manager of the classification can assign to users. Each role has an offline period which determines the amount of time a user can access content without having to communicate with the IRM server. By default for the context model, the classification system that ships out of the box with Oracle IRM, the offline period for each role is 3 days. This is easily changed, however, and can be as low as under an hour or as long as years. It is also possible to switch off the ability to access content offline, which can be useful when content is very sensitive and requires a tight leash. So when a user is online, the Oracle IRM Desktop transparently communicates with the server in the background and updates the user's rights and offline periods. This transparent synchronization period is determined by the server and communicated to all IRM Desktops, and allows users' rights to be kept up to date without their intervention. This allows us to support some very important scenarios which are key to a successful IRM solution. A user doesn't have to make any decision when going offline; they simply unplug their laptop and they already have their offline periods synchronized to the maximum values. Any solution that requires a user to make a decision at the point of going offline isn't going to work, because people forget to do this and will therefore be unable to legitimately access their content offline. If your rights change to REMOVE your access to content, this also happens in the background. This is very useful when someone has an offline duration of a week and happens to make a connection to the internet 3 days into that offline period: the Oracle IRM Desktop detects this online state and automatically updates all rights for the user. This means the business risk of setting long offline periods is reduced; because of the daily transparent sync, you can reflect changes as soon as the user is online. Of course, if they choose not to come online at all during that week-long offline period, you cannot effect change, but you take that risk in giving the 7-day offline period in the first place. If you are added to a NEW classification during the day, this will automatically be synchronized without the user even having to open a piece of content secured against that classification. This is very important. Consider the scenario where a senior executive downloads all their email but doesn't open any of it, disconnects the laptop, and then gets on a plane. During the flight they attempt to open a document attached to a downloaded email which has been secured against an IRM classification the user was not even aware they had access to. Because their new role in this classification was automatically synchronized, their experience is a good one and the document opens.
More information on how the Oracle IRM classification model works can be found in this article by Martin Abrahams. So what about problems accessing the offline rights database? On to the core issue: when these rights are cached to your machine, they are stored in an encrypted database. The encryption of this offline database is keyed to the instance of the installation of the IRM Desktop and the Windows user account. Why? Because what you do not want is for someone to get their rights for content and then copy these files across hundreds of other machines, thereby getting access to sensitive content across many environments. The IRM server has a setting which controls how many times you can cache these rights on unique machines, because people typically access IRM content on more than one computer: their work desktop, a laptop, and often a home computer. So Oracle IRM allows for the usability of caching rights on more than one computer whilst retaining strong security over this cache. So what happens if these files are corrupted in some way? That's when you will see the error "Unable to Connect to Offline database". The most common case is when you are using virtual machines and copy them from one computer to the next. The virtual machine software, VMware Workstation for example, changes the unique information of that virtual machine and as such invalidates the offline database. How do you solve the problem? The resolution is simple: just delete all of the offline database files on the machine and they will be recreated with working encryption when the Oracle IRM Desktop next starts. However, this does mean that the IRM server will think you have your rights cached to more than one computer, because it still considers the old cache valid, and you will need to re-request your rights even though you are only going to be accessing them on one machine. So be aware: it is good practice to increase the server limit from the default of 1 to, say, 3 or 4. This is done using the Enterprise Manager instance of IRM. To delete these offline files I have a simple .bat file you can use: Download DeleteOfflineDBs.bat. Note that this uses pskill to stop irmBackground.exe from running; this process is part of the IRM Desktop and holds open a lock on the offline database. Either kill it from Task Manager or use pskill as part of the script.

    Read the article

  • What’s Your Tax Strategy? Automate the Tax Transfer Pricing Process!

    - by tobyehatch
    Does your business operate in multiple countries? Well, whether you like it or not, many local and international tax authorities inspect your tax strategy. Legal, effective tax planning is perceived as a "moral" issue. CEOs are being asked to testify on their process of tax transfer pricing between multinational legal entities. Marc Seewald, Senior Director of Product Management for EPM Applications specializing in all tax subjects and Product Manager for Oracle Hyperion Tax Provisioning, and Bart Stoehr, Senior Director of Product Strategy for Oracle Hyperion Profitability and Cost Management, joined me for a discussion/podcast on this interesting subject. So what exactly is "tax transfer pricing"? Marc defined it this way: "Tax transfer pricing is a profit allocation methodology required to be used by multinational corporations. Specifically, the ultimate goal of transfer pricing is to ensure that the global multinational pays their fair share of income tax in each of their local markets. Specifically, it prevents companies from unfairly moving profit from 'high tax' countries to 'low tax' countries." According to Marc, in today's global economy, profitability can be significantly impacted by goods and services exchanged between the related divisions within a single multinational company. To ensure that these cost allocations are done fairly, there are rules that govern the process. These rules ensure that intercompany allocations fairly represent the actual nature of the business's activity - as if the two divisions were unrelated - and provide a clear audit trail of how the costs have been allocated, to prove that allocations fall within reasonable ranges. What are the repercussions of improper tax transfer pricing? How important is it? Tax transfer pricing allocations can materially impact the amount of overall corporate income taxes paid by a company worldwide, in some cases by hundreds of millions of dollars! Since so much tax revenue is at stake, revenue agencies like the IRS, and international regulatory bodies like the Organization for Economic Cooperation and Development (OECD), are pushing to reform and clarify reporting for tax transfer pricing. Most recently the OECD announced an "Action Plan for Base Erosion and Profit Shifting". As Marc explained, the times are changing and companies need to be responsive to this issue. "It feels like every other week there is another company being accused of avoiding taxes," said Marc. Most recently, Caterpillar was accused of avoiding billions of dollars in taxes. In the last couple of years, Apple, GE, Ikea, and Starbucks have all been accused of tax avoidance. It's imperative that companies like these have a clear and auditable tax transfer process that enables them to justify tax transfer pricing allocations and avoid steep penalties and bad publicity. Transparency and efficiency are what is needed when it comes to the tax transfer pricing process. Bart explained that tax transfer pricing is driving a deeper inspection of profit recognition, specifically focused on the tax element of profit. However, allocations needed to support tax profitability are nearly identical in process to allocations taking place in other parts of the finance organization. For example, the methods and processes necessary to arrive at tax profitability by legal entity are no different than those used to arrive at fully loaded profitability for a product line.
In fact, there is a great opportunity for alignment across these two different functions. So it seems that tax transfer pricing should be reflected in profitability in general. Bart agreed and told us more about some of the critical sub-processes of an overall tax transfer pricing process within the Oracle solution for tax transfer pricing. "First, there is a ton of data preparation, enrichment and pre-allocation data analysis that is managed in the Oracle Hyperion solution. This serves as the "data staging" to the next, critical sub-processes. From here, we leverage the Oracle EPM platform's ability to re-use dimensions and legal entity driver data and financial data with Oracle Hyperion Profitability and Cost Management (HPCM). Within HPCM, we manage the driver data, define the legal entity to legal entity allocation rules (like cost plus), and have the option to test out multiple, simultaneous tax transfer pricing what-if scenarios. Once processed, a tax expert can evaluate the effectiveness of any one scenario result versus another via a variance analysis configured with HPCM's pre-packaged reporting capability known as Oracle Hyperion SmartView for Office." Further, Bart explained that the ability to visibly demonstrate how a cost or revenue has been allocated is really helpful and auditable. "HPCM's Traceability Maps are that visual representation of all allocation flows that have been executed and are the tax transfer analyst's best friend in maintaining clear documentation for tax transfer pricing audits. Simply click and drill as you inspect the chain of allocation definitions and results. Once final, the post-allocated tax data can be compared to the GL to create invoices and journal entries for posting to your GL system of choice. Of course, there is a framework for overall governance of the journal entries, allocation percentages, and reporting to include necessary approvals." Lastly, Marc explained that the key value in using the Oracle Hyperion solution for tax transfer pricing is that it keeps everything in alignment in one single place. Specifically, Oracle Hyperion effectively becomes the single book of record for the GAAP, management, and tax sets of books. There are many benefits to having one source of the truth. These include EFFICIENCY, CONTROLS and TRANSPARENCY. So, what's your tax strategy? Why not automate the tax transfer pricing process! To listen to the entire podcast, click here. To learn more about Oracle Hyperion Profitability and Cost Management (HPCM), click here.

    Read the article

  • Today in the OTN Lounge (Thursday October 4, 2012)

    - by Bob Rhubart
    Here's a quick rundown of today's activities in the OTN Lounge. OTN Lounge hours today: 8:00 am - 2:00 pm. 9:00 am - 1:00 pm: RAC Attack. Learn about Oracle Real Application Clusters (RAC) in this collaborative event. You'll work with experts from the IOUG RAC SIG to get an Oracle Database 11gR2 RAC cluster running inside a virtual machine. For more information: RAC Attack at Oracle Open World (Pythian Blog); RAC Attack - Oracle Cluster Database at Home/Events (WikiBooks). The OTN Lounge is located in the Howard St. Tent, between 3rd and 4th, directly between Moscone North and Moscone South. Access to the OTN Lounge requires an Oracle OpenWorld or JavaOne conference badge.

    Read the article

  • UKOUG Application Server & Middleware SIG Meeting

    - by JuergenKress
    Date: Wednesday 10th Oct 2012. Time: 09:00 - 16:00. Location: Reading. Venue: Oracle, Thames Valley Park, Reading. Agenda: 09:00 Registration and Coffee; 10:00 Welcome, Application Server & Middleware Committee; 10:10 Oracle Support Updates, Nick Pounder, Oracle Customer Services; 10:30 OpenWorld 2012 - News Round-up for Middleware Admins, Simon Haslam, Veriton Limited; 11:00 Coffee break; 11:20 Oracle Single Sign-On to Oracle Access Manager Migration, Rob Otto, Oracle Consulting Services UK; 12:05 Supporting Fusion Middleware through First Failure Capture (theory), Greg Cook, Oracle; 12:50 Lunch and Networking; 13:35 Deputy Chair Elections, UKOUG; 13:45 Supporting Fusion Middleware through First Failure Capture (demos), Greg Cook, Oracle; 14:15 Networking session including tea/coffee; 14:45 Real Life WebLogic Performance Tuning: Tales and Techniques from the Field, Steve Millidge, C2B2 Consulting Limited; 15:30 WLST: WebLogic's Swiss Army Knife, Simon Haslam, Veriton Limited; 15:45 AOB and Close. For details please visit the registration page. WebLogic Partner Community: for regular information, become a member of the WebLogic Partner Community; please visit http://www.oracle.com/partners/goto/wls-emea (OPN account required). If you need support with your account please contact the Oracle Partner Business Center. Blog Twitter LinkedIn Mix Forum Wiki Technorati Tags: UK user group,Simon Haslam,WebLogic Community,Oracle,OPN,Jürgen Kress

    Read the article

  • Internal but no external Citrix Access?

    - by leeand00
    We recently had to reload our Citrix configuration on our server Server1, and since then we can access Citrix internally, but not externally. Normally we access Citrix from http://remote.xyz.org/Citrix/XenApp, but since the configuration was reloaded we are met with a Service Unavailable message. Internally, we can access the web application from http://localhost/Citrix/XenApp/ on Server1, and also from machines on our local network using http://Server1/Citrix/XenApp/. I have gone into the Citrix Access Management Console and, from the tree pane on the left, clicked on Citrix Access Management Console->Citrix Resources->Configuration Tools->Web Interface->http://remote.xyz.org/Citrix/PNAgent and Citrix Access Management Console->Citrix Resources->Configuration Tools->Web Interface->http://remote.xyz.org/Citrix/XenApp, which in both cases displays a screen that reads Secure client access. Here it offers me several options: Direct, Alternate, Translated, Gateway Direct, Gateway Alternate, Gateway Translated. I know that I can change the method of use by clicking Manage secure client access->Edit secure client access settings, which opens a window that reads "Specify Access Methods" and, below that, "Specify details of the DMZ settings, including IP address, mask, and associated access method". I don't know what the original settings were, and I also don't know how our DMZ is configured, so I can't specify the correct settings to give our external users access to the http://remote.xyz.org/Citrix/XenApp site. We have a vendor who set up our DMZ and does not allow us access to the gateway to see these settings. What sorts of questions should I ask them to restore remote access?

    Read the article

  • top tweets WebLogic Partner Community – March 2012

    - by JuergenKress
    Send us your tweets @wlscommunity #WebLogicCommunity and follow us on twitter http://twitter.com/wlscommunity PeterPaul RT @JDeveloper: EJB 3 Deployment guide for WebLogic Server Version: 10.3.4.0 dlvr.it/1J5VcV Andrejus Baranovskis Open ADF PopUp on Page Load fb.me/1Rx9LP3oW Sten Vesterli RT @OracleBlogs: Using the Oracle E-Business Suite SDK for Java on ADF Applications ow.ly/1hVKbB <- Neat! No more WS calls Java Buddy JavaFX 2.0: Example of MediaPlay java-buddy.blogspot.com/2012/03/javafx… Georges Saab Build improvements coming to #openJDK for #jdk8 mail.openjdk.java.net/pipermail/buil… NetBeans Team Share your #Java experience! JavaOne 2012 India call for papers: ow.ly/9xYg0 GlassFish GlassFish 3.1.2 Screencasts & Videos – bit.ly/zmQjn2 chriscmuir G+: New blog post: ADF Runtimes vs WLS versions as of JDeveloper 11.1.1.6.0 – bit.ly/y8tkgJ Michael Heinrichs New article: Creating a Sprite Animation with JavaFX blog.netopyr.com/2012/03/09/cre… Oracle WebLogic #WebLogic Devcast Webinar Series for March: Enterprise Java Scale Out, JPA, Distributed Grid Data Cache bit.ly/zeUXEV #Coherence Andrejus Baranovskis Extending Application Module for ADF BC Proxy User DB Connection fb.me/Bj1hLUqm OTNArchBeat Oracle Fusion Middleware on JDK 7 | Mark Nelson bit.ly/w7IroZ OTNArchBeat Java Champion Jonas Bonér Explains the Akka Platform bit.ly/x2GbXm Adam Bien (Java) FX Experience Tools–Feels Like Native Mac App: FX Experience Tools application comes with a native Mac O… bit.ly/waHF3H GlassFish GlassFish new recruit and Eclipse integration progress – bit.ly/y5eEkk JDeveloper & ADF Prototyping ADF Libraries dlvr.it/1Hhnw0 Eric Elzinga Oracle Fusion Middleware on JDK 7, bit.ly/xkphFQ ADF EMG Working with ADF in Arabic, Hebrew or other right-to-left-written language? Oracle UX asks for your help. groups.google.com/forum/?fromgro… Java A simple #JavaFX Login Form with a TRON like effect ow.ly/9n9AG JDeveloper & ADF Logging in Oracle ADF Applications dlvr.it/1HZhcX OTNArchBeat Oracle Cloud Conference: dates and locations worldwide bit.ly/ywXydR UK Oracle User Group Simon Haslam, ACE Director, presents on #WebLogic for DBAs at #oug_ire2012 j.mp/zG6vz3 @oraclewebcenter @oracleace #dublin Steven Davelaar Working with ADF and not a member of ADF EMG? You miss lots of valuable info, join now! sites.google.com/site/oracleemg… Simon Haslam @MaciejGruszka: Oracle plans to provide Forms & Reports plug-in for OVAB next year to help deployment. #ukoug MW SIG GlassFish Introducing JSR 357: Social Media API – bit.ly/yC8vez JAX London Are you coming to Java EE workshops by @AdamBien at JAX Days? Save £100 by registering today. #jaxdays #javaee jaxdays.com WebLogic Community Welcome to our WebLogic 12c Bootcamp in Munich! If you also want to attend a training, register for the Community oracle.com/partners/goto/… chriscmuir My first webcast for Oracle! (be kind) Basing ADF Business Component View Objects on More than one Entity Object bit.ly/ArKija OTNArchBeat Oracle Weblogic Server 12c is available on Oracle Solaris 11 (SPARC and x86) bit.ly/xE3TLg JDeveloper & ADF Basing ADF Business Component View Objects on More than one Entity Object – YouTube dlvr.it/1H93Qr OTNArchBeat Application-Driven Virtualization with Oracle Virtual Assembly Builder | Ronen Kofman bit.ly/wF1C1N Oracle WebLogic
Steve Button's blog: WebLogic Server Singleton Services ow.ly/1hOu4U Barbara Ann May @oracledevtools: New update: #NetBeans IDE 7.1.1, with support for #GlassFish 3.1.2 bit.ly/mOLcQd #java #developer OTNArchBeat Using Coherence with JDeveloper: bit.ly/AkoEQb WebLogic Community WebLogic Partner Community Newsletter February 2012 wp.me/p1LMIb-f3 GlassFish GlassFish 3.1.2 – new Podcast episode: bit.ly/wc6oBE Frank Nimphius Cool! Open JDeveloper 11.1.1.5, go help -> check for updates. First thing shown is that 11.1.1.6 is available. Never miss a new release Adam Bien 5 Minutes (Video) With Java EE …Or With NetBeans + GlassFish: This screencast covers a 5-minute development of a… bit.ly/xkOJMf WebLogic Community Free Oracle WebLogic Certification Application Grid Implementation Specialist wp.me/p1LMIb-eT OTNArchBeat Oracle Coherence: First Steps Using Clusters and Basic API Usage | Ricardo Ferreira bit.ly/yYQ3Wz GlassFish JMS 2.0 Early Draft is here – bit.ly/ygT1VN OTNArchBeat Exalogic Networking Part 2 | The Old Toxophilist bit.ly/xuYMIi OTNArchBeat New Release: GlassFish Server 3.1.2. Read All About It! | Paul Davies bit.ly/AtlGxo Oracle WebLogic OTN Virtual Developer Day: #WebLogic 12c & #Coherence post-conference on-demand page live with bonus #Virtualbox lab – bit.ly/xUy6BJ Oracle WebLogic Steve Button's blog: WebLogic Server 11g (10.3.6) Documentation ow.ly/1hJgUB Lucas Jellema Just published an article on the AMIS blog: technology.amis.nl/2012/03/adf-11… ADF 11g – programmatically sorting rich table columns. Java Certification New Course! Learn how to create mobile applications using Java ME: bit.ly/xZj1Jh Simon Haslam @MaciejGruszka WebLogic 12c can run against 11g domain config without changes …and can rollback to 11. #ukoug MW SIG Justin Kestelyn Learn Advanced ADF, free and online bit.ly/wEKSRc WebLogic Partner Community For regular information, become a member of the WebLogic Partner Community; please visit: http://www.oracle.com/partners/goto/wls-emea (OPN account required). If you need support with your account please contact the Oracle Partner Business Center. Blog Twitter LinkedIn Mix Forum Wiki Technorati Tags: twitter,WebLogic,WebLogic Community,OPN,Oracle,Jürgen Kress,WebLogic 12c

    Read the article

  • New Exadata Book Available Soon

    - by Rob Reynolds
    Oracle Press is set to release the first book on data warehouse performance and Exadata on March 14th. Achieving Extreme Performance with Oracle Exadata, by my colleagues Rick Greenwald, Robert Stackowiak, Maqsood Alam, and Mans Bhuller, will be available at your favorite booksellers next week. I've seen a sneak peek of the content in this book and it's a great way to fully grasp the power of Exadata and how to best apply it to achieve extreme data warehouse performance. From the publisher's description: Achieving Extreme Performance with Oracle Exadata and the Sun Oracle Database Machine is filled with best practices for deployments, hardware sizing, architecting the database machine environments for maximum availability, and backup and recovery. Oracle Database 11gR2 features used within these offerings, as well as migration options and paths for Oracle and non-Oracle databases to Oracle Exadata, are covered. This Oracle Press guide also discusses architecture, administration, maintenance, monitoring, and tuning of Oracle Exadata Storage Servers and the Sun Oracle Database Machine. If your company is considering Exadata, or if you need more horsepower out of your data warehouse, I highly recommend grabbing a copy of this book next week.

    Read the article

  • Open World 2012

    - by jeffrey.waterman
    For those of you fortunate enough to be attending this year's Oracle OpenWorld, here is a session I recommend carving time out of your hectic schedule to attend: Public Sector General Session (session ID#: GEN8536), Wednesday, October 3, 10:15 a.m.–11:15 a.m., Westin San Francisco, Metropolitan III Room. Speakers: Mark Johnson, SVP Oracle Public Sector; Peter Doolan, CTO Oracle Public Sector; Robert Livingston, founding partner of the Livingston Group and former member of the US Congress. Join Mark Johnson for an update on Oracle in government. Mark will be joined by Peter Doolan and Robert Livingston to discuss current topics facing governments and how Oracle can help organizations achieve their goals. I'll be posting more interesting sessions as I peruse the conference agenda over the next week or so. If you see an interesting session, please feel free to share your suggestions in the comments section.

    Read the article

  • Building Simple Workflows in Oozie

    - by dan.mcclary
    Introduction More often than not, data doesn't come packaged exactly as we'd like it for analysis. Transformation, match-merge operations, and a host of data munging tasks are usually needed before we can extract insights from our Big Data sources. Few people find data munging exciting, but it has to be done. Once we've suffered that boredom, we should take steps to automate the process. We want to codify our work into repeatable units and create workflows which we can leverage over and over again without having to write new code. In this article, we'll look at how to use Oozie to create a workflow for the parallel machine learning task I described on Cloudera's site. Hive Actions: Prepping for Pig In my parallel machine learning article, I use data from the National Climatic Data Center to build weather models on a state-by-state basis. NCDC makes the data freely available as gzipped files of day-over-day observations stretching from the 1930s to today. In reading that post, one might get the impression that the data came in handy, ready-to-model files with convenient delimiters. The truth of it is that I need to perform some parsing and projection on the dataset before it can be modeled. If I get more observations, I'll want to retrain and test those models, which will require more parsing and projection. This is a good opportunity to start building up a workflow with Oozie. I store the data from the NCDC in HDFS and create an external Hive table partitioned by year. This gives me the flexibility of Hive's query language when I want it, but lets me put the dataset in a directory of my choosing in case I want to treat the same data with Pig or MapReduce code. CREATE EXTERNAL TABLE IF NOT EXISTS historic_weather(column1 STRING, column2 STRING) PARTITIONED BY (yr STRING) STORED AS ... LOCATION '/user/oracle/weather/historic'; As new weather data comes in from NCDC, I'll need to add partitions to my table. That's an action I should put in the workflow. Similarly, the weather data requires parsing in order to be useful as a set of columns. Because of their long history, the weather data is broken up into fields of specific byte lengths: x bytes for the station ID, y bytes for the dew point, and so on. The delimiting is consistent from year to year, so writing a SerDe or a parser for transformation is simple. Once that's done, I want to select columns on which to train, classify certain features, and place the training data in an HDFS directory for my Pig script to access. ALTER TABLE historic_weather ADD IF NOT EXISTS PARTITION (yr='2011') LOCATION '/user/oracle/weather/historic/yr=2011'; INSERT OVERWRITE DIRECTORY '/user/oracle/weather/cleaned_history' SELECT w.stn, w.wban, w.weather_year, w.weather_month, w.weather_day, w.temp, w.dewp, w.weather FROM ( FROM historic_weather SELECT TRANSFORM(...) USING '/path/to/hive/filters/ncdc_parser.py' AS stn, wban, weather_year, weather_month, weather_day, temp, dewp, weather ) w; Since I'm going to prepare training directories with at least the same frequency that I add partitions, I should also add that to my workflow. Oozie is going to invoke these Hive actions using what's somewhat obviously referred to as a Hive action. Hive actions amount to Oozie running a script file containing our query language statements, so we can place them in a file called weather_train.hql. Starting Our Workflow Oozie offers two types of jobs: workflows and coordinator jobs. Workflows are straightforward: they define a set of actions to perform as a sequence or directed acyclic graph.
Coordinator jobs can take all the same actions as workflow jobs, but they can be automatically started either periodically or when new data arrives in a specified location. To keep things simple we'll make a workflow job; coordinator jobs simply require another XML file for scheduling. The bare minimum for workflow XML defines a name, a starting point, and an end point: <workflow-app name="WeatherMan" xmlns="uri:oozie:workflow:0.1"> <start to="ParseNCDCData"/> <end name="end"/> </workflow-app> To this we need to add an action, and within that we'll specify the Hive parameters. Also, keep in mind that actions require <ok> and <error> tags to direct the next action on success or failure. <action name="ParseNCDCData"> <hive xmlns="uri:oozie:hive-action:0.2"> <job-tracker>localhost:8021</job-tracker> <name-node>localhost:8020</name-node> <configuration> <property> <name>oozie.hive.defaults</name> <value>/user/oracle/weather_ooze/hive-default.xml</value> </property> </configuration> <script>ncdc_parse.hql</script> </hive> <ok to="WeatherMan"/> <error to="end"/> </action> There are a couple of things to note here: I have to give the FQDN (or IP) and port of my JobTracker and NameNode. I have to include a hive-default.xml file. I have to include a script file. The hive-default.xml and script file must be stored in HDFS. That last point is particularly important. Oozie doesn't make assumptions about where a given workflow is being run. You might submit workflows against different clusters, or have different hive-default.xml files on different clusters (e.g. MySQL or Postgres-backed metastores). A quick way to ensure that all the assets end up in the right place in HDFS is just to make a working directory locally, build your workflow.xml in it, and copy in the assets you'll need as you add actions to workflow.xml. At this point, our local directory should contain: workflow.xml, hive-default.xml (make sure this file contains your metastore connection data), and ncdc_parse.hql. Adding Pig to the Ooze Adding our Pig script as an action is slightly simpler from an XML standpoint. All we do is add an action to workflow.xml as follows: <action name="WeatherMan"> <pig> <job-tracker>localhost:8021</job-tracker> <name-node>localhost:8020</name-node> <script>weather_train.pig</script> </pig> <ok to="end"/> <error to="end"/> </action> Once we've done this, we'll copy weather_train.pig to our working directory. However, there's a bit of a "gotcha" here. My Pig script registers the Weka jar and a chunk of Jython. If those aren't also in HDFS, our action will fail from the outset -- but where do we put them? The Jython script goes into the working directory at the same level as the Pig script, because Pig attempts to load Jython files from the directory in which the script executes. However, that's not where our Weka jar goes. While Oozie doesn't assume much, it does make an assumption about the Pig classpath. Anything under working_directory/lib gets automatically added to the Pig classpath and no longer requires a REGISTER statement in the script. Anything that uses a REGISTER statement cannot be in the working_directory/lib directory. Instead, it needs to be in a different HDFS directory and attached to the Pig action with an <archive> tag. Yes, that's as confusing as you think it is. You can get the exact rules for adding jars to the distributed cache from Oozie's Pig Cookbook. Making the Workflow Work We've got a workflow defined and have collected all the components we'll need to run.
But we can't run anything yet, because we still have to define some properties about the job and submit it to Oozie. We need to start with the job properties, as this is essentially the "request" we'll submit to the Oozie server. In the same working directory, we'll make a file called job.properties as follows: nameNode=hdfs://localhost:8020 jobTracker=localhost:8021 queueName=default weatherRoot=weather_ooze mapreduce.jobtracker.kerberos.principal=foo dfs.namenode.kerberos.principal=foo oozie.libpath=${nameNode}/user/oozie/share/lib oozie.wf.application.path=${nameNode}/user/${user.name}/${weatherRoot} outputDir=weather-ooze While some of the pieces of the properties file are familiar (e.g., the JobTracker address), others take a bit of explaining. The first is weatherRoot: this is essentially an environment variable for the script (as are jobTracker and queueName). We're simply using them to simplify the directives for the Oozie job. The oozie.libpath piece is extremely important. This is a directory in HDFS which holds Oozie's shared libraries: a collection of jars necessary for invoking Hive, Pig, and other actions. It's a good idea to make sure this has been installed and copied up to HDFS. The last two lines are straightforward: run the application defined by workflow.xml at the application path listed and write the output to the output directory. We're finally ready to submit our job! After all that work we only need to do a few more things: validate our workflow.xml, copy our working directory to HDFS, submit our job to the Oozie server, and run our workflow. Let's do them in order. First, validate the workflow: oozie validate workflow.xml Next, copy the working directory up to HDFS: hadoop fs -put working_dir /user/oracle/working_dir Now we submit the job to the Oozie server. We need to ensure that we've got the correct URL for the Oozie server, and we need to specify our job.properties file as an argument. oozie job -oozie http://url.to.oozie.server:port_number/ -config /path/to/working_dir/job.properties -submit We've submitted the job, but we don't see any activity on the JobTracker. All we got was this funny bit of output: 14-20120525161321-oozie-oracle This is because submitting a job to Oozie creates an entry for the job and places it in PREP status. What we got back, in essence, is a ticket for our workflow to ride the Oozie train. We're responsible for redeeming our ticket and running the job: oozie job -oozie http://url.to.oozie.server:port_number/ -start 14-20120525161321-oozie-oracle Of course, if we really want to run the job from the outset, we can change the "-submit" argument above to "-run." This will prep and run the workflow immediately. Takeaway So, there you have it: the somewhat laborious process of building an Oozie workflow. It's a bit tedious the first time out, but it does present a pair of real benefits to those of us who spend a great deal of time data munging. First, when new data arrives that requires the same processing, we already have the workflow defined and ready to run. Second, as we build up a set of useful action definitions over time, creating new workflows becomes quicker and quicker.
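
    To recap the asset layout and upload in one place, here is a minimal shell sketch of assembling the working directory described above. The file names match those used in this article, while weka.jar and the HDFS directory /user/oracle/weather_jars are illustrative assumptions rather than anything the article prescribes.

# Local working directory: one place for everything the workflow needs.
mkdir -p working_dir/lib
cp workflow.xml working_dir/             # workflow definition
cp hive-default.xml working_dir/         # must contain your metastore connection data
cp ncdc_parse.hql weather_train.pig working_dir/
cp ncdc_parser.py working_dir/           # Jython helper sits beside the Pig script
# Jars under lib/ join the Pig classpath automatically (no REGISTER needed);
# jars referenced by a REGISTER statement must live elsewhere in HDFS and be
# attached to the pig action with an <archive> tag.
hadoop fs -put working_dir /user/oracle/working_dir
hadoop fs -put weka.jar /user/oracle/weather_jars/   # REGISTERed jar kept outside lib/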

    Read the article

  • Mega Trends 4 Financial Services, May 21, 2014

    - by Claudia Caramelli-Oracle
    Oracle sponsored this event dedicated to banks and the insurance world. The main theme was exploring the future to improve customer engagement and innovation in this market. Oracle had the opportunity to meet the general managers and CxOs of the most important Italian and international banks and insurers across four different settings: 1. Executive dinner on May 20; 2. Plenary session; 3. Parallel session on the theme Social & Digital Engaging; 4. CRM & Big Data Intelligence. The hashtag #mt4financialservices saw heavy traffic on Twitter, which shows that the topics discussed during the event find a real echo in this market. There is interest, and above all the market is just waiting to be engaged in these ways! For more information, write to Silvia Valgoi.

    Read the article

  • 2 nics. 2 Defaults Gateways

    - by andre.dias
    Here is my scenario: I have a server with two NICs, each with a different IP, connected to different routers. Almost everything is configured the way I need: traffic coming in on eth0 exits using eth0, and traffic coming in on eth1 exits using eth1. And there is a default gateway configured. $route: default IP 0.0.0.0 UG 0 0 0 eth0 With this configuration, the traffic generated on the server goes out using eth0 (lynx www.google.com, for example). The problem: the Internet link on eth0 went down today. The traffic coming in on eth1 was fine, no problem. But the traffic generated on the server was a problem; the default gateway was out, so no access to the Internet anymore (no more lynx www.google.com). So I added a new default gateway configuration pointing to eth1. For 30 minutes I kept it that way, two default gateways with just one of them "working", and everything ran just fine. But then I removed the eth0 gateway entry because, well, two default gateways is kind of weird. My question: is there any problem with keeping these two default gateways, one for each interface, so I don't need to do anything when one link goes down again? $route: default IP1 0.0.0.0 UG 0 0 0 eth0 default IP2 0.0.0.0 UG 0 0 0 eth1
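
    For reference, the per-interface behavior described above ("traffic coming in on eth0 exits using eth0") is normally achieved with source-based policy routing rather than a second default route alone. A minimal iproute2 sketch, using documentation addresses (203.0.113.x, 198.51.100.x) as placeholders for the real interface IPs and gateways:

# Per-uplink routing tables so replies leave via the interface they arrived on.
ip route add default via 203.0.113.1 dev eth0 table 100
ip route add default via 198.51.100.1 dev eth1 table 200
ip rule add from 203.0.113.10 table 100    # eth0 address
ip rule add from 198.51.100.10 table 200   # eth1 address
# Two ranked defaults in the main table for locally generated traffic:
# eth0 is preferred, eth1 takes over if the eth0 route disappears.
ip route add default via 203.0.113.1 dev eth0 metric 100
ip route add default via 198.51.100.1 dev eth1 metric 200

    Note that the metric-based fallback only kicks in if the failed route is actually removed (for example, when the interface itself goes down); a dead gateway behind a live link still needs some form of monitoring.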

    Read the article

  • How Are Businesses Advancing with the Experience Revolution?

    - by Charles Knapp
    Businesses worldwide are operating in a new era. Customers are taking charge of their relationships with brands, and the customer experience has become the most important differentiator and driver of business value. Where is the experience heading? And how can businesses take advantage of the customer experience revolution? Find out from the experts at a one-of-a-kind event: the Oracle Customer Experience Summit at Oracle OpenWorld, San Francisco, October 3-5. Our featured speakers are global visionaries including Seth Godin, George Kembel from the Stanford d:School, Bruce Temkin, Kerry Bodine and Paul Hagen from Forrester, and Gene Alvarez from Gartner. Featured industry leaders will include speakers from Athene Group, Bazaarvoice, Comcast, Consortium of Service Innovation, Haworth, Intuit, KPN, Marriott, Nikon, Quicksilver, Royal Caribbean, SapientNitro, Southwest, Stryker, Stuart Concannon, and Twilio. Featured speakers from Oracle will include Oracle President Mark Hurd, Anthony Lye, David Vap, Brian Curran, John Kembel, and Matthew Banks. So, please join us at the Customer Experience Summit at the Oracle OpenWorld Conference.

    Read the article

  • Best Practices For Database Consolidation On Exadata - New Whitepapers

    - by Javier Puerta
    Best Practices For Database Consolidation On Exadata Database Machine (Nov. 2011). Consolidation can minimize idle resources, maximize efficiency, and lower costs when you host multiple schemas, applications or databases on a target system. Consolidation is a core enabler for deploying Oracle database on public and private clouds. This paper provides the Exadata Database Machine (Exadata) consolidation best practices to set up and manage systems and applications for maximum stability and availability: Download here. Oracle Exadata Database Machine Consolidation: Segregating Databases and Roles (Sep. 2011). This paper is focused on the aspects of segregating databases from each other in a platform consolidation environment on an Oracle Exadata Database Machine. Platform consolidation is the consolidation of multiple databases onto a single Oracle Exadata Database Machine. When multiple databases are consolidated on a single Database Machine, it may be necessary to isolate certain database components or functions in order to meet business requirements and provide best practices for a secure consolidation. In this paper we outline the use of Oracle Exadata database-scoped security to securely separate database management and provide a detailed case study that illustrates the best practices. Download here

    Read the article

  • Enterprise 2.0: Expectations vs. Reality

    - by kellsey.ruppel(at)oracle.com
    If you haven't heard it already, check out the podcast interview that Enterprise 2.0 expert John Brunswick did with Bob Rhubart, host of ArchBeat Podcast. Listen as John discusses some of the expectations vs. reality when it comes to Enterprise 2.0. Listen to Part 1 Listen to Part 2 Listen to Part 3 You can connect with John Brunswick and learn more about Enterprise 2.0 at the following links: John's Homepage Twitter: @johnbrunswick Linked In Oracle Fusion ECM Blog AIIM Enterprise 2.0 Blog Enterprise 2.0 Workbench (Youtube) JSP and Beyond (ebook) OTN Technical Articles: Extending the Business Value of SOA through Business Process Management Unlocking the Value of Enterprise 2.0 Collaboration and Authoring Tools Principles of Natural Participation And here are some additional links if you are interested in learning more about Bob Rhubart and ArchBeat: ArchBeat blog ArchBeat Podcast Oracle Architect Community Mix Profile Linked In FriendFeed Twitter: @brhubart

    Read the article

  • A Head Start for Partners – in Support, Too

    - by Alliances & Channels Redaktion
    Solid support is a matter of course for Oracle; that is nothing new. But did you know that Oracle Support offers special conditions and tools for partners? Getting there is easy: log in to the OPN portal. Via the click path "Partner with Oracle", "Get started", "Levels and Benefits" and "View all benefits" you reach an overview of which level brings which support benefits. As a partner you receive your own Oracle Partner SI number, that is, a Support Identifier, which opens access to the knowledge base, technical documents, the patch download area and various communities in the "My Oracle Support" portal. You also have, of course, the option to purchase Service Request (SR) packages. Depending on your partner level, you have a certain number of free Service Requests, and you can increase that number with each additional specialization. And when buying support for their own use, our partners receive a discount. A look into the OPN portal is therefore worthwhile for support questions, too!

    Read the article

  • Part 1 - 12c Database and WLS - Overview

    - by Steve Felts
    The download of Oracle 12c database became available on June 25, 2013. There are some big new features in 12c database, and WebLogic Server will take advantage of them. Immediately, we will support using the 12c database and drivers with WLS 10.3.6 and 12.1.1. When the next version of WLS ships, additional functionality will be supported (the features below that are currently unsupported in every combination will become supported). The following summarizes which Oracle 12c database features are supported with the currently available WLS releases (10.3.6 and 12.1.1), 11g and 12c drivers, and 11gR2 and 12c databases: - JDBC replay: requires 12c drivers and a 12c database (Active GridLink only in 10.3.6; generic data sources added in 12.1.1). - Multi Tenant Database: requires a 12c database, with either driver version (except set container). - Dynamic switching between tenants: not supported in any combination. - Database Resident Connection Pooling (DRCP): not supported in any combination. - Oracle Notification Service (ONS) auto configuration: not supported in any combination. - Global Database Services (GDS): requires a 12c database (Active GridLink only), with either driver version. - JDBC 4.1 (using the ojdbc7.jar files and JDK 7): requires 12c drivers, with either database version. The My Oracle Support (MOS) document covering this is "WebLogic Server 12.1.1 and 10.3.6 Support for Oracle 12c Database [ID 1564509.1]" at the link https://support.oracle.com/epmos/faces/DocumentDisplay?id=1564509.1. The following documents are also key references: 12c Oracle Database Developer Guide http://docs.oracle.com/cd/E16655_01/appdev.121/e17620/toc.htm and 12c Oracle Database Administrator's Guide http://docs.oracle.com/cd/E16655_01/server.121/e17636/toc.htm. I plan to write some related blog articles, not to duplicate existing product documentation, but to introduce the features, provide some examples, and tie together some information to make it easier to understand. How do you get started with 12c? The easiest way is to point your data source at a 12c database. The only change on the WLS side is to update the URL in your data source (assuming that you are not just upgrading your database). You can continue to use the 11.2.0.3 driver jar files that shipped with WLS 10.3.6 or 12.1.1. You shouldn't see any changes in your application. You can take advantage of enhancements on the database side that don't affect the mid-tier. On the WLS side, you can take advantage of using Global Database Services or connecting to a tenant in a multi-tenant database transparently. If you want to use the 12c client jar files, it's a bit of work because they aren't shipped with WLS and you can't just drop in ojdbc6.jar as in the old days. You need to use a matched set of jar files, and they need to come before the existing jar files in the CLASSPATH. The MOS article is written from the standpoint that you need to get the jar files directly - download almost 1 GB and install over a 600 MB footprint to get 15 jar files. Assuming that you have the database installed and you can get access to the installation (or ask the DBA), you need to copy the 15 jar files to each machine with a WLS installation and get them into your CLASSPATH. You can play with setting the PRE_CLASSPATH, but the more practical approach may be to just update WL_HOME/common/bin/commEnv.sh directly.
There's a change in the transaction completion behavior (read the MOS note), so if you think you might run into that, you will want to set -Doracle.jdbc.autoCommitSpecCompliant=false. Also, if you are running with Active GridLink, you must set -Doracle.ucp.PreWLS1212Compatible=true (how's that for telling you that this is fixed in WLS 12.1.2). Once you get the configuration out of the way, you can start using the new ojdbc7.jar in place of ojdbc6.jar to get the new JDBC 4.1 APIs. You can also start using Application Continuity. This feature is also known as JDBC Replay because when a connection fails you get a new one with all JDBC operations up to the failure point automatically replayed. As you might expect, there are some limitations, but it's an interesting feature. Obviously I'm going to focus on the 12c database features that we can leverage in WLS data sources. You will need to read other sources or the product documentation to get all of the new features.
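
    To make that concrete, here is a hedged shell sketch of the CLASSPATH and JVM settings discussed above, written as the sort of edit you might make to WL_HOME/common/bin/commEnv.sh. The directory /u01/oracle/db12c_jdbc is an assumed location for the matched set of 15 client jars, only two of which are shown.

# Put the matched 12c client jars ahead of the 11.2.0.3 jars shipped with WLS.
# /u01/oracle/db12c_jdbc is a placeholder for wherever you copied the jar set.
PRE_CLASSPATH=/u01/oracle/db12c_jdbc/ojdbc7.jar:/u01/oracle/db12c_jdbc/ucp.jar
export PRE_CLASSPATH
# Transaction completion behavior change (see the MOS note):
JAVA_OPTIONS="${JAVA_OPTIONS} -Doracle.jdbc.autoCommitSpecCompliant=false"
# Required when running Active GridLink with the 12c driver on WLS 10.3.6/12.1.1:
JAVA_OPTIONS="${JAVA_OPTIONS} -Doracle.ucp.PreWLS1212Compatible=true"
export JAVA_OPTIONS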

    Read the article

  • Heterogeneous Data Access with OWB: ODI EE Enterprise ETL

    - by Fekete Zoltán
    Following up on the previous two blog posts, the question arises: how can you access heterogeneous data sources with Oracle Warehouse Builder? Recommended reading: Oracle Warehouse Builder 11gR2: OWB ETL Using ODI Knowledge Modules. Naturally, using Oracle Database Heterogeneous Services via ODBC, or using Oracle Gateways, OWB has always been able to reach any ODBC-compatible database as well as mainframe databases. Oracle Database Gateways cover: MS SQL Server, Sybase, Teradata, Informix, ODBC, DRDA, APPC, WebSphere MQ, DB2, DB2/400. By purchasing the appropriate Application Adapters, OWB can connect, for example, to the following sources: SAP, Oracle E-Business Suite, PeopleSoft, Siebel, Oracle Customer Data Hub (CDH), Universal Customer Master (UCM), Product Information Management (PIM). Starting with OWB 11gR2, OWB can use the Knowledge Modules of Oracle Data Integrator for heterogeneous data access, via JDBC or other heterogeneous access methods. Download: Oracle Warehouse Builder. BTW, the OWB Java client software can be used on both Linux and Windows; the server side, of course, runs in the Oracle database on the Solaris, Linux, HP-UX, AIX, and Windows operating systems.

    Read the article

  • Extensible Metadata in Oracle IRM 11g

    - by martin.abrahams
    Another significant change in Oracle IRM 11g is that we now use XML to create the tamperproof header for each sealed document. This article explains what this means, and what benefit it offers. So, every sealed file has a metadata header that contains information about the document - its classification, its format, the user who sealed it, the name and URL of the IRM Server, and much more. The IRM Desktop and other IRM applications use this information to formulate the request for rights, as well as to enhance the user experience by exposing some of the metadata in the user interface. For example, in Windows Explorer you can see some metadata exposed as properties of a sealed file and in the mouse-over tooltip. The following image shows 10g and 11g metadata side by side. As you can see, the 11g metadata is written as XML as opposed to the simple delimited text format used in 10g. So why does this matter? The key benefit of using XML is that it creates the opportunity for sealing applications to use custom metadata. This in turn creates the opportunity for custom classification models to be defined and enforced. Out of the box, the solution uses the context classification model, in which two particular pieces of metadata form the basis of rights evaluation - the context name and the document's item code. But a custom sealing application could use some other model entirely, enabling rights decisions to be evaluated on some other basis. The integration with Oracle Beehive is a great example of this. When a user adds a document to a Beehive workspace, that document can be automatically sealed with metadata that represents the Beehive security model rather than the context model. As a consequence, IRM can enforce the Beehive security model precisely, and all rights configuration can actually be managed through the Beehive UI rather than the IRM UI. In this scenario, IRM simply supports the Beehive application, seamlessly extending Beehive security to all copies of workspace documents without any additional administration. Finally, I mentioned that the metadata header is tamperproof. This is obviously to stop a rogue user modifying the metadata with a view to gaining unauthorised access - reclassifying a board document to a less sensitive classification, for example. To prevent this, the header is digitally signed and can only be manipulated by a suitably authorised sealing application.

    Read the article

  • 3 Performance Presentations from SAE added to the portal

    - by uwes
    The following three presentations have been added to the eSTEP portal: Oracle's Systems Performance Oct 2012 Update; Oracle Leads the Way on Realistic Sizing; Oracle's Performance: Oracle SPARC SuperCluster. All presentations were created by Brad Carlile, Sr. Director Strategic Applications Engineering, SAE. How to get to the presentations: URL: http://launch.oracle.com/ Email Address: <provide your email address> Access URL/Page Token: eSTEP_2011. To get access, push the Agree button on the left side of the page. Then click on eSTEP Download (tab band at the top) ---> presentations on the right-hand side, or click on Miscellaneous (menu on the left-hand side) ---> presentations on the right-hand side.

    Read the article

  • How to setup Mac server to use two gateways

    - by Brady
    I recently asked this question: How to set Mac server to use different Gateway for internet bound traffic. The answer given works, but it has presented me with another issue that I didn't make clear in that question. Here is my network layout as it stands: At the moment, outside staff members use some services on the existing internet 1 link. Those services are hosted by the Mac server. If I change the gateway of the Mac server to the second modem, those outside staff lose visibility of those services. Now I don't know how to go about solving this issue. I want the second link to be used when the Mac server goes to rsync data offsite, but everything else to use link one. How do I do this? Thanks, Scott EDIT: This has been resolved by setting the default gateway on the Mac server to 192.168.1.254, thus leaving everything on the network as it was before. But to get the Mac server to use the other link for rsync, I've added a route to the Mac server to route traffic to the rsync server through the second gateway: sudo route add -net {server IP's}/{Netmask} 192.168.1.1 I've awarded the answer to gravyface for pointing me to a post on how to make this route persistent on the Mac.
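
    For completeness, one common way to make a static route like this survive reboots on OS X is a LaunchDaemon that re-adds the route at boot. The sketch below is a hedged example: the label com.example.rsyncroute is hypothetical, and 192.0.2.0/24 is a placeholder standing in for the real {server IP's}/{Netmask} used above.

# Create a LaunchDaemon that re-adds the static route at boot.
# Label, plist path, and target network below are illustrative placeholders.
sudo tee /Library/LaunchDaemons/com.example.rsyncroute.plist <<'EOF'
<?xml version="1.0" encoding="UTF-8"?>
<plist version="1.0">
<dict>
  <key>Label</key><string>com.example.rsyncroute</string>
  <key>ProgramArguments</key>
  <array>
    <string>/sbin/route</string>
    <string>add</string>
    <string>-net</string>
    <string>192.0.2.0/24</string>
    <string>192.168.1.1</string>
  </array>
  <key>RunAtLoad</key><true/>
</dict>
</plist>
EOF
sudo launchctl load /Library/LaunchDaemons/com.example.rsyncroute.plist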

    Read the article

  • The Future of the Database Begins

    - by Thanos Terentes Printzios
    For more than three-and-a-half decades, Oracle has defined database innovation. With our leading technologies, Oracle customers have been able to out-think and out-perform their competition. Soon organizations will be able to do that even faster. With the introduction of the Oracle Database In-Memory Option it will be possible to perform TRUE real-time, ad-hoc analytic queries on your organization's business data as it exists at that moment and receive the results immediately. Imagine your sales team being able to know the total sales they have made as of right now -- not last week, or even last night, but right now. Imagine innovation that accelerates business decision making to real-time speeds. That's the power of Oracle Database In-Memory. Watch Larry Ellison to find out what this and the other new features of Oracle Database 12c will do for you. Register Now for the Live Webcast

    Read the article

  • Why partners should visit the new AutoVue Knowledge Zone

    - by [email protected]
    Learn more about AutoVue and connect with your peers to distinguish your offerings, seize opportunities, and to increase your sales! Explore the latest releases, integration solution offerings, marketing assets, partner enablement tools, events and latest partner initiatives by clicking through the tabs - Why Partner, Develop, Sell, and Connect. Knowledge Zones are designed to accelerate the partner's knowledge about Oracle solutions, as well as provide new opportunities to collaborate with the entire Oracle partner ecosystem. The AutoVue Knowledge Zone, launched in March 2010, is continuously being updated with the latest information to better equip and enable our partners to sell AutoVue solutions. Get all the information you always wanted to convert your sales opportunities into wins. Check out and bookmark the AutoVue Knowledge Zone now!

    Read the article

  • The Tape Library That Grows with You

    - by A&C Redaktion
    With the StorageTek SL150 Modular Tape Library, Oracle has developed an archive solution that can grow together with a company. The goals were ambitious: the new tape library had to be not only extremely scalable but also affordable, since it is intended as an entry-level library for small, growing, and midsize companies. For the launch of the tape library, Oracle presents impressive facts and figures: - 75% lower acquisition cost than comparable products - space-saving thanks to 40% higher density - the highest security standards - expandable from 30 to up to 300 slots, and thus 900 terabytes - simple operation thanks to an intuitive user interface based on Oracle Fusion Middleware and Oracle Linux - installation takes only 30 minutes - support for many different system environments. Partners have the opportunity to offer their own support services for this new member of the Oracle product family. Details on the resell and support requirements can be found here (OPN login required): SL150 product overview; Partner Support Option with StorageTek SL150 Modular Tape Library; FAQ - Partner Support Option with StorageTek SL150 Modular Tape Library. The English-language press release for the launch also offers comprehensive information, and details from dimensions to power consumption can be found in the StorageTek SL150 Data Sheet. And of course we don't want to withhold the first reactions to the StorageTek SL150 from the German-language trade press: Speicherguide, IT SecCity, IT-Administrator, DOAG

    Read the article
