Search Results

Search found 16971 results on 679 pages for 'blogs'.

  • Key announcements from Oracle Openworld - Video series

    - by Javier Puerta
    If you missed Oracle OpenWorld, you now have the opportunity to watch a series of four 15-minute webcasts covering the key announcements, explained by key EMEA executives. Oracle OpenWorld I, OMN Part 1 - OPENWORLD I: Oracle's Cloud, interview with Alan Hartwell. Gaye Hudson and Steve Walker, EMEA Corporate Communications, take a look at Oracle's announcements leading up to Oracle OpenWorld and talk to Alan Hartwell, VP Sales, Engineered Solutions, Exadata, Exalogic, about Oracle's cloud offering. Oracle OpenWorld II, OMN Part 2 - OPENWORLD II: Engineered Systems with Alan Hartwell. Gaye Hudson, VP Corporate Communications, EMEA, talks to Alan Hartwell, VP Sales, Engineered Solutions, Exadata, Exalogic, about Oracle's Engineered Systems, parallel hardware and software; Exalytics, Big Data Appliance & Enterprise Manager. Oracle OpenWorld III, OMN Part 3 - OPENWORLD III: HW with John Abel, Storage with Luc Gheysens. Gaye Hudson and Steve Walker talk to John Abel, Chief Technology Architect, Oracle Server and Storage, EMEA, about SPARC SuperCluster and T4; and to Luc Gheysens, Senior Director, Storage Sales Specialist, EMEA, about ZFS Storage and Pillar Axiom 600. Oracle OpenWorld IV, OMN Part 4 - OPENWORLD IV: Oracle Fusion Applications with Noel Coloe. Gaye Hudson, VP Corporate Communications, EMEA, talks to Noel Coloe, Head of Western Europe Applications Sales Development, about Oracle Fusion Applications, a new paradigm in enterprise applications.

    Read the article

  • Camunda BPM 7.0 on WebLogic 12c

    - by JuergenKress
    If we go on tour together with Oracle I think we have to have camunda BPM running on the Oracle WebLogic application server 12c (WLS in short). And one of our enterprise customers asked - so I invested a Sunday and got it running (okay - to be honest - I needed quite some help from our Java EE server guru Christian). In this blog post I give a step-by-step description of how to run camunda BPM on WLS. Please note that this is not an official distribution (which would include sophisticated QA, comprehensive documentation and a proper distribution) - it was my personal hobby. And I did not fire the whole test suite against WLS - so there might be some issues. We will do the real productization as soon as we have a customer for it (let us know if this is interesting for you). Necessary steps: after installing and starting up WLS (I used the zip distribution of WLS 12c by the way) you have to add a datasource, add shared libraries, add a resource adapter (for the Job Executor, using a proper WorkManager from WLS), add an EAR starting up one process engine, add a WAR file containing the REST API, and add other WAR files (e.g. cockpit) and your own process applications. Actually that sounds like more work than it is ;-) So let's get started: Add a datasource. Add a datasource via the Administration Console (or any other convenient way on WLS - I should admit that personally I am not the WLS expert). Make sure that you target it on your server - this is not done by default. Read the full article here. For regular information become a member of the WebLogic Partner Community; please visit: http://www.oracle.com/partners/goto/wls-emea (OPN account required). If you need support with your account please contact the Oracle Partner Business Center. Blog Twitter LinkedIn Mix Forum Wiki Technorati Tags: Camunda,BPM,JavaEE7,WebLogic Community,Oracle,OPN,Jürgen Kress
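    As a rough illustration of the datasource step, here is a minimal WLST (Jython) sketch that creates a JDBC datasource and targets it explicitly. The admin URL, credentials, datasource name, JNDI name and target server are placeholders made up for the example, not values from the post; check the camunda and WebLogic documentation for the settings your installation actually needs.

        # Minimal WLST sketch: create a JDBC datasource and target it to a server.
        # Run with <WL_HOME>/common/bin/wlst.sh create_ds.py
        # All names, URLs and credentials below are illustrative placeholders.
        import jarray
        from java.lang import String

        connect('weblogic', 'welcome1', 't3://localhost:7001')

        edit()
        startEdit()

        dsName = 'CamundaDS'                                   # hypothetical datasource name
        sysRes = cmo.createJDBCSystemResource(dsName)
        jdbcRes = sysRes.getJDBCResource()
        jdbcRes.setName(dsName)

        # JNDI name the process engine would look up (assumed for this sketch)
        jdbcRes.getJDBCDataSourceParams().setJNDINames(
            jarray.array([String('jdbc/ProcessEngine')], String))

        drv = jdbcRes.getJDBCDriverParams()
        drv.setDriverName('oracle.jdbc.OracleDriver')
        drv.setUrl('jdbc:oracle:thin:@//dbhost:1521/ORCL')     # placeholder database URL
        drv.setPassword('secret')
        drv.getProperties().createProperty('user').setValue('camunda')

        # Target the datasource explicitly -- as the post notes, this is not done by default.
        sysRes.addTarget(getMBean('/Servers/AdminServer'))

        save()
        activate(block='true')
        disconnect()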

    Read the article

  • Attention Extension Developers: Your input wanted!

    - by John 'JB' Brock
    Your Input Wanted! I've posted a lot of different topics throughout 2011, and would really like to provide info that is most important to you, the extension developer, as we head for 2012. What are the most important areas that you want to learn more about? Post your requests for examples and topics in the comments section. Let me know what you are struggling with, or something that you worked out but took way too long to figure out. I'll take the list and do my best to provide samples over the coming months. Please provide the version of JDeveloper that you want the topic to cover. Remember: 11gR1 = 11.1.1.x (e.g. 11.1.1.5.0) 11gR2 = 11.1.2.x (e.g. 11.1.2.1.0) Thanks in advance for your comments and suggestions. Let's get the JDev Extension community going in 2012! --jb John "JB" Brock, Oracle Product Manager - JDev ESDK

    Read the article

  • 64-bit Archives Needed

    - by user9154181
    A little over a year ago, we received a question from someone who was trying to build software on Solaris. He was getting errors from the ar command when creating an archive. At that time, the ar command on Solaris was a 32-bit command. There was more than 2GB of data, and the ar command was hitting the file size limit for a 32-bit process that doesn't use the largefile APIs. Even in 2011, 2GB is a very large amount of code, so we had not heard this one before. Most of our toolchain was extended to handle 64-bit sized data back in the 1990's, but archives were not changed, presumably because there was no perceived need for it. Since then of course, programs have continued to get larger, and in 2010, the time had finally come to investigate the issue and find a way to provide for larger archives. As part of that process, I had to do a deep dive into the archive format, and also do some Unix archeology. I'm going to record what I learned here, to document what Solaris does, and in the hope that it might help someone else trying to solve the same problem for their platform. Archive Format Details Archives are hardly cutting edge technology. They are still used of course, but their basic form hasn't changed in decades. Other than to fix a bug, which is rare, we don't tend to touch that code much. The archive file format is described in /usr/include/ar.h, and I won't repeat the details here. Instead, here is a rough overview of the archive file format, implemented by System V Release 4 (SVR4) Unix systems such as Solaris: Every archive starts with a "magic number". This is a sequence of 8 characters: "!<arch>\n". The magic number is followed by 1 or more members. A member starts with a fixed header, defined by the ar_hdr structure in /usr/include/ar.h. Immediately following the header comes the data for the member. Members must be padded at the end with newline characters so that they have even length. The requirement to pad members to an even length is a dead giveaway as to the age of the archive format. It tells you that this format dates from the 1970's, and more specifically from the era of 16-bit systems such as the PDP-11 that Unix was originally developed on. A 32-bit system would have required 4 bytes, and 64-bit systems such as we use today would probably have required 8 bytes. 2 byte alignment is a poor choice for ELF object archive members. 32-bit objects require 4 byte alignment, and 64-bit objects require 64-bit alignment. The link-editor uses mmap() to process archives, and if the members have the wrong alignment, we have to slide (copy) them to the correct alignment before we can access the ELF data structures inside. The archive format requires 2 byte padding, but it doesn't prohibit more. The Solaris ar command takes advantage of this, and pads ELF object members to 8 byte boundaries. Anything else is padded to 2 as required by the format. The archive header (ar_hdr) represents all numeric values using an ASCII text representation rather than as binary integers. This means that an archive that contains only text members can be viewed using tools such as cat, more, or a text editor. The original designers of this format clearly thought that archives would be used for many file types, and not just for objects. Things didn't turn out that way of course — nearly all archives contain relocatable objects for a single operating system and machine, and are used primarily as input to the link-editor (ld). 
    Archives can have special members that are created by the ar command rather than being supplied by the user. These special members are all distinguished by having a name that starts with the slash (/) character. This is an unambiguous marker that says that the user could not have supplied it. The reason for this is that regular archive members are given the plain name of the file that was inserted to create them, and any path components are stripped off. Slash is the delimiter character used by Unix to separate path components, and as such cannot occur within a plain file name. The ar command hides the special members from you when you list the contents of an archive, so most users don't know that they exist. There are only two possible special members: A symbol table that maps ELF symbols to the object archive member that provides it, and a string table used to hold member names that exceed 15 characters. The '/' convention for tagging special members provides room for adding more such members should the need arise. As I will discuss below, we took advantage of this fact to add an alternate 64-bit symbol table special member which is used in archives that are larger than 4GB. When an archive contains ELF object members, the ar command builds a special archive member known as the symbol table that maps all ELF symbols in the object to the archive member that provides it. The link-editor uses this symbol table to determine which symbols are provided by the objects in that archive. If an archive has a symbol table, it will always be the first member in the archive, immediately following the magic number. Unlike member headers, symbol tables do use binary integers to represent offsets. These integers are always stored in big-endian format, even on a little endian host such as x86. The archive header (ar_hdr) provides 15 characters for representing the member name. If any member has a name that is longer than this, then the real name is written into a special archive member called the string table, and the member's name field instead contains a slash (/) character followed by a decimal representation of the offset of the real name within the string table. The string table is required to precede all normal archive members, so it will be the second member if the archive contains a symbol table, and the first member otherwise. The archive format is not designed to make finding a given member easy. Such operations move through the archive from front to back examining each member in turn, and run in O(n) time. This would be bad if archives were commonly used in that manner, but in general, they are not. Typically, the ar command is used to build a new archive from scratch, inserting all the objects in one operation, and then the link-editor accesses the members in the archive in constant time by using the offsets provided by the symbol table. Both of these operations are reasonably efficient. However, listing the contents of a large archive with the ar command can be rather slow. Factors That Limit Solaris Archive Size As is often the case, there was more than one limiting factor preventing Solaris archives from growing beyond the 32-bit limits of 2GB (32-bit signed) and 4GB (32-bit unsigned). These limits are listed in the order they are hit as archive size grows, so the earlier ones mask those that follow. The original Solaris archive file format can handle sizes up to 4GB without issue. However, the ar command was delivered as a 32-bit executable that did not use the largefile APIs. 
    As such, the ar command itself could not create a file larger than 2GB. One can solve this by building ar with the largefile APIs which would allow it to reach 4GB, but a simpler and better answer is to deliver a 64-bit ar, which has the ability to scale well past 4GB. Symbol table offsets are stored as 32-bit big-endian binary integers, which limits the maximum archive size to 4GB. To get around this limit requires a different symbol table format, or an extension mechanism to the current one, similar in nature to the way member names longer than 15 characters are handled in member headers. The size field in the archive member header (ar_hdr) is an ASCII string capable of representing a 32-bit unsigned value. This places a 4GB size limit on the size of any individual member in an archive. In considering format extensions to get past these limits, it is important to remember that very few archives will require the ability to scale past 4GB for many years. The old format, while no beauty, continues to be sufficient for its purpose. This argues for a backward compatible fix that allows newer versions of Solaris to produce archives that are compatible with older versions of the system unless the size of the archive exceeds 4GB. Archive Format Differences Among Unix Variants While considering how to extend Solaris archives to scale to 64-bits, I wanted to know how similar archives from other Unix systems are to those produced by Solaris, and whether they had already solved the 64-bit issue. I've successfully moved archives between different Unix systems before with good luck, so I knew that there was some commonality. If it turned out that there was already a viable de facto standard for 64-bit archives, it would obviously be better to adopt that rather than invent something new. The archive file format is not formally standardized. However, the ar command and archive format were part of the original Unix from Bell Labs. Other systems started with that format, extending it in various often incompatible ways, but usually with the same common shared core. Most of these systems use the same magic number to identify their archives, despite the fact that their archives are not always fully compatible with each other. It is often true that archives can be copied between different Unix variants, and if the member names are short enough, the ar command from one system can often read archives produced on another. In practice, it is rare to find an archive containing anything other than objects for a single operating system and machine type. Such an archive is only of use on the type of system that created it, and is only used on that system. This is probably why cross platform compatibility of archives between Unix variants has never been an issue. Otherwise, the use of the same magic number in archives with incompatible formats would be a problem. I was able to find information for a number of Unix variants, described below. These can be divided roughly into three tribes, SVR4 Unix, BSD Unix, and IBM AIX. Solaris is a SVR4 Unix, and its archives are completely compatible with those from the other members of that group (GNU/Linux, HP-UX, and SGI IRIX). AIX AIX is an exception to the rule that Unix archive formats are all based on the original Bell Labs Unix format. It appears that AIX supports 2 formats (small and big), both of which differ in fundamental ways from other Unix systems: These formats use a different magic number than the standard one used by Solaris and other Unix variants. 
    They include support for removing archive members from a file without reallocating the file, marking dead areas as unused, and reusing them when new archive items are inserted. They have a special table of contents member (File Member Header) which lets you find out everything that's in the archive without having to actually traverse the entire file. Their symbol table members are quite similar to those from other systems though. Their member headers are doubly linked, containing offsets to both the previous and next members. Of the Unix systems described here, AIX has the only format I saw that will have reasonable insert/delete performance for really large archives. Everyone else has O(n) performance, and will be slow to use with large archives. BSD BSD has gone through 4 versions of archive format, which are described in its manpage. They use the same member header as SVR4, but their symbol table format is different, and their scheme for long member names puts the name directly after the member header rather than into a string table. GNU/Linux The GNU toolchain uses the SVR4 format, and is compatible with Solaris. HP-UX HP-UX seems to follow the SVR4 model, and is compatible with Solaris. IRIX IRIX has 32 and 64-bit archives. The 32-bit format is the standard SVR4 format, and is compatible with Solaris. The 64-bit format is the same, except that the symbol table uses 64-bit integers. IRIX assumes that an archive contains objects of a single ELFCLASS/MACHINE, and any archive containing ELFCLASS64 objects receives a 64-bit symbol table. Although they only use it for 64-bit objects, nothing in the archive format limits it to ELFCLASS64. It would be perfectly valid to produce a 64-bit symbol table in an archive containing 32-bit objects, text files, or anything else. Tru64 Unix (Digital/Compaq/HP) Tru64 Unix uses a format much like ours, but their symbol table is a hash table, making specific symbol lookup much faster. The Solaris link-editor uses archives by examining the entire symbol table looking for unsatisfied symbols for the link, and not by looking up individual symbols, so there would be no benefit to Solaris from such a hash table. The Tru64 ld must use a different approach in which the hash table pays off for them. Widening the existing SVR4 archive symbol tables rather than inventing something new is the simplest path forward. There is ample precedent for this approach in the ELF world. When ELF was extended to support 64-bit objects, the approach was largely to take the existing data structures, and define 64-bit versions of them. We called the old set ELF32, and the new set ELF64. My guess is that there was no need to widen the archive format at that time, but had there been, it seems obvious that this is how it would have been done. The Implementation of 64-bit Solaris Archives As mentioned earlier, there was no desire to improve the fundamental nature of archives. They have always had O(n) insert/delete behavior, and for the most part it hasn't mattered. AIX made efforts to improve this, but those efforts did not find widespread adoption. For the purposes of link-editing, which is essentially the only thing that archives are used for, the existing format is adequate, and issues of backward compatibility trump the desire to do something technically better. Widening the existing symbol table format to 64-bits is therefore the obvious way to proceed. For Solaris 11, I implemented that, and I also updated the ar command so that a 64-bit version is run by default. 
This eliminates the 2 most significant limits to archive size, leaving only the limit on an individual archive member. We only generate a 64-bit symbol table if the archive exceeds 4GB, or when the new -S option to the ar command is used. This maximizes backward compatibility, as an archive produced by Solaris 11 is highly likely to be less than 4GB in size, and will therefore employ the same format understood by older versions of the system. The main reason for the existence of the -S option is to allow us to test the 64-bit format without having to construct huge archives to do so. I don't believe it will find much use outside of that. Other than the new ability to create and use extremely large archives, this change is largely invisible to the end user. When reading an archive, the ar command will transparently accept either form of symbol table. Similarly, the ELF library (libelf) has been updated to understand either format. Users of libelf (such as the link-editor ld) do not need to be modified to use the new format, because these changes are encapsulated behind the existing functions provided by libelf. As mentioned above, this work did not lift the limit on the maximum size of an individual archive member. That limit remains fixed at 4GB for now. This is not because we think objects will never get that large, for the history of computing says otherwise. Rather, this is based on an estimation that single relocatable objects of that size will not appear for a decade or two. A lot can change in that time, and it is better not to overengineer things by writing code that will sit and rot for years without being used. It is not too soon however to have a plan for that eventuality. When the time comes when this limit needs to be lifted, I believe that there is a simple solution that is consistent with the existing format. The archive member header size field is an ASCII string, like the name, and as such, the overflow scheme used for long names can also be used to handle the size. The size string would be placed into the archive string table, and its offset in the string table would then be written into the archive header size field using the same format "/ddd" used for overflowed names.
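    To make the layout described above concrete, here is a small Python sketch that walks an SVR4-style archive: it checks the magic number, reads the fixed-size ASCII member headers, resolves "/nnn" long names from the string table, and decodes the big-endian symbol table offsets in both the traditional 32-bit form and the widened 64-bit form discussed here. It is only a sketch based on the description above; in particular, the "/SYM64/" member name and the exact 64-bit layout are assumptions on my part, so treat /usr/include/ar.h and the Solaris documentation as authoritative.

        # Sketch of an SVR4-style archive reader, following the format described above.
        # Not a replacement for ar(1) or libelf; field widths come from the standard
        # ar_hdr layout, and the 64-bit symbol table details are assumptions.
        import struct
        import sys

        MAGIC = b"!<arch>\n"
        HDR_FMT = "16s12s6s6s8s10s2s"       # ar_name, ar_date, ar_uid, ar_gid, ar_mode, ar_size, ar_fmag
        HDR_LEN = struct.calcsize(HDR_FMT)  # 60 bytes

        def read_members(path):
            with open(path, "rb") as f:
                if f.read(len(MAGIC)) != MAGIC:
                    raise ValueError("not an archive")
                strtab = b""
                while True:
                    hdr = f.read(HDR_LEN)
                    if len(hdr) < HDR_LEN:
                        return
                    name, _, _, _, _, size, _ = struct.unpack(HDR_FMT, hdr)
                    size = int(size)                 # numeric header fields are ASCII text
                    data = f.read(size)
                    if size % 2:                     # members are padded to an even length
                        f.read(1)
                    name = name.rstrip(b" ")
                    if name == b"/":                 # symbol table, 32-bit big-endian offsets
                        yield "<symtab32>", data
                    elif name == b"/SYM64/":         # assumed name of the 64-bit symbol table
                        yield "<symtab64>", data
                    elif name == b"//":              # string table holding long member names
                        strtab = data
                        yield "<strtab>", data
                    elif name.startswith(b"/"):      # "/nnn" is an offset into the string table
                        off = int(name[1:])
                        end = strtab.index(b"/", off)
                        yield strtab[off:end].decode(), data
                    else:
                        yield name.rstrip(b"/").decode(), data

        def decode_symtab(data, width):
            """Symbol count and offsets are big-endian binary integers (4 or 8 bytes)."""
            kind = "I" if width == 4 else "Q"
            (count,) = struct.unpack_from(">" + kind, data, 0)
            offsets = struct.unpack_from(">%d%s" % (count, kind), data, width)
            names = data[width * (count + 1):].split(b"\0")[:count]
            return list(zip([n.decode() for n in names], offsets))

        if __name__ == "__main__":
            for name, data in read_members(sys.argv[1]):
                if name == "<symtab32>":
                    print("32-bit symbol table:", len(decode_symtab(data, 4)), "symbols")
                elif name == "<symtab64>":
                    print("64-bit symbol table:", len(decode_symtab(data, 8)), "symbols")
                else:
                    print("member:", name, len(data), "bytes")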

    Read the article

  • Gartner Market Share: Oracle is #1 in PPM for WW revenues

    - by Melissa Centurio Lopes
    By Sylvie Mackenzie, PMP The Gartner report published March 2014, Market Share: All Software Markets, Worldwide, 2013, shows Oracle as the leader in the Project and Portfolio Management space, with a market share of 22.5% and a growth rate of 4.9%. Gartner WW PPM Vendor Share, 2013

    Read the article

  • Fast Data: Go Big. Go Fast.

    - by J Swaroop
    Cross-posting Dain Hansen's excellent recap of the Big Data/Fast Data announcement during OOW: For those of you who may have missed it, today’s second full day of Oracle OpenWorld 2012 started with a rumpus. Joe Tucci, from EMC, outlined the human face of big data with real examples of how big data is transforming our world. And no, not the usual tried-and-true weblog examples, but real stories about taxi cab drivers in Singapore using big data to better optimize their routes as well as folks just trying to get a better hair cut. Next we heard from Thomas Kurian who talked at length about the important platform characteristics of Oracle’s Cloud and more specifically Oracle’s expanded Cloud Services portfolio. Especially interesting to our integration customers is the messaging support for Oracle’s Cloud applications. What this means is that Oracle’s Cloud applications now have a lightweight integration fabric that on-premise applications can communicate with via REST APIs using Oracle SOA Suite. It’s an important element of our strategy at Oracle that supports this idea that whether your requirements are for private or public, Oracle has a solution in the Cloud for all of your applications and we give you more deployment choice than any vendor. If this wasn’t enough to get the juices flowing, later that morning we heard from Hasan Rizvi who outlined in his Fusion Middleware session the four most important enterprise imperatives: Social, Mobile, Cloud, and a brand new one: Fast Data. Today, Rizvi made an important step in the definition of this term to explain that he believes it’s a convergence of four essential technology elements: event processing for event filtering and business rules – with Oracle Event Processing; data transformation and loading – with Oracle Data Integrator; real-time replication and integration – with Oracle GoldenGate; and analytics and data discovery – with Oracle Business Intelligence. Each of these four elements can be considered (and architected) together on a single integrated platform that can help customers integrate any type of data (structured, semi-structured) leveraging new styles of big data technologies (MapReduce, HDFS, Hive, NoSQL) to process more volume and variety of data at a faster velocity with greater results. Fast data processing (and especially real-time) has always been our credo at Oracle with each one of these products in Fusion Middleware. For example, Oracle GoldenGate continues to be made even faster with the recent 11g R2 Release of Oracle GoldenGate, which gives us some even greater optimization to Oracle Database with Integrated Capture, as well as some new heterogeneity capabilities. With Oracle Data Integrator with Big Data Connectors, we’re seeing much improved performance by running MapReduce transformations natively on Hadoop systems. And with Oracle Event Processing we’re seeing some remarkable performance with customers like NTT Docomo. Check out their upcoming session at Oracle OpenWorld on Wednesday to hear more about how this customer is using Event Processing and Big Data together. If you missed any of these sessions and keynotes, not to worry. There are on-demand versions available on the Oracle OpenWorld website. You can also check out our upcoming webcast where we will outline some of these new breakthroughs in Data Integration technologies for Big Data, Cloud, and Real-time in more detail.

    Read the article

  • New Computer

    - by Matt Christian
    Last night I received my computer that was ordered with my tax return money.  Here are the specs of my old computer: - Pentium 4 Processor - 3-4 GB RAM - ~256 GB HDD space (2 drives) - nVidia card (AGP 8x) Sorry I can't be more specific, my memory is gone :p  Here are the new computer specs (mostly): - 2.8ghz Pentium i7 quadcore - 6 GB RAM - 1 TB HDD space (1 drive) - 1 GB Radeon card (PCI-X) I also got a new monitor (22" Asus with HDMI) so will be using my 19" widescreen as a secondary monitor. If I remember I'll hop on here and post the specifics later on...

    Read the article

  • Silverlight Cream for May 18, 2010 -- #864

    - by Dave Campbell
    In this Issue: Jesse Liberty, Chris Koenig, Kyle McClellan, Kunal Chowdhury(-2-), Tim Heuer, and Jonathan van de Veen. Shoutout: René Schulte has posted a SLARToolkit Beginner's Guide Erik Mork and the Sparkling Podcast crew posted Silverlight Week – Silverlight Android? John Papa opens up a dialog: Ask the Experts on Silverlight TV ... get your questions answered! From SilverlightCream.com: Windows Phone 7 For Silverlight Programmers Jesse Liberty's starting a series on WP7, so you obviously don't want to miss this... source, commentary, external links, how-to's... what more could you ask for?? WP7 Part 3: Navigation Chris Koenig is revamping his WP7 application to use Community Megaphone instead of Nerd Dinner and in this episode 3 he's looking into Navigation ... definitely good stuff here. RIA Services Authentication Out-Of-Browser Kyle McClellan has code up demonstrating how to get around the fact that the Browser networking stack handles cookies differently than the client networking stack used OOB, and achieve forms authentication OOB. How to work with the Silverlight BusyIndicator? Kunal Chowdhury has a post up talking about the busy indicator and how to use it to show an active indicator while disabling other content. Drag and Drop Operation in Silverlight ListBox In a second entry, Kunal Chowdhury has a nice long post displaying drag-and-drop within and between ListBox controls. Silverlight 4 Tools, WCF RIA Services and Themes Released As usual, Tim Heuer has a great post up about the new releases not only for those with 'clean' machines, but also instructions for those that have been playing along. Advanced printing in Silverlight 4 Just after a post on printing yesterday, Jonathan van de Veen has a post up at SilverlightShow on printing as well, and is demonstrating fitting the text to the page and printing multiple pages. Stay in the 'Light! Twitter SilverlightNews | Twitter WynApse | WynApse.com | Tagged Posts | SilverlightCream Join me @ SilverlightCream | Phoenix Silverlight User Group Technorati Tags: Silverlight    Silverlight 3    Silverlight 4    Windows Phone MIX10

    Read the article

  • Problem with WCF-SQL Adapter

    - by Paul Petrov
    When using the WCF receive adapter with SQL binding in Polling mode, please be aware of the following problem. Problem: At some regular but seemingly random intervals the application stops processing new requests, places a lock on the database and prevents other applications from accessing it. Initially it looked like a DTC issue, as it was a distributed transaction that stalled most of the time. Symptoms: Orchestration instances in Dehydrated state, receive location not picking up new messages, exclusive locks on database tables, errors in the DTC trace. Cause: Microsoft has confirmed that there is a bug in the WCF-SQL adapter. In the receive adapter binding configuration there's a receiveTimeout property set to 10 minutes by default. If during this period data is not found in the table, the adapter starts a new thread and allocates more memory without releasing the old resources. Thus if there's no new data in the table for a long time, a new thread will be created in the host instance every 10 minutes until it reaches the threshold (1000), and then there are no threads left for this host instance and it can't start/complete any tasks. Then this host instance won't be able to do anything. If other artifacts are hosted in the instance they will suffer consequences as well. Solution: - Set receiveTimeout to the maximum time, 24.20:31:23.6470000. - Place WCF-SQL receive locations in a separate host to provide their own thread pool and eliminate impact on other processes. - Ensure WCF-SQL dedicated host instances are restarted at an interval less than or equal to receiveTimeout to flush threads and memory. - Monitor the performance counters Process/Thread Count/BTSNTSvc{n} for the thread count trend and respond to alerts, if it grows, by restarting the host instance. If you use the WCF-SQL Adapter in Notification mode then make sure to remove sqlAdapterInboundTransactionBehavior; otherwise this location will exhibit the same issue. In this case though, setting receiveTimeout doesn't help and a new thread will be created at the default interval (10 min), ignoring the maximum setting.
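    The monitoring advice above can also be scripted. Below is a small, hedged Python sketch (it relies on the third-party psutil package, which is my choice for the example, not something the post prescribes) that watches the thread count of the BizTalk host instance processes (BTSNTSvc) and warns before the roughly 1000-thread ceiling described above is reached. The process name, threshold and polling interval are illustrative; in production you would more likely point your existing monitoring at the Process/Thread Count/BTSNTSvc{n} performance counters and restart the dedicated host instance when the trend climbs.

        # Hedged sketch: warn when BizTalk host instance thread counts approach the
        # ~1000-thread ceiling described above. Requires the third-party psutil
        # package (pip install psutil); names and thresholds are illustrative only.
        import time
        import psutil

        PROCESS_NAME = "BTSNTSvc"   # BizTalk host instance executable (BTSNTSvc.exe / BTSNTSvc64.exe)
        WARN_THRESHOLD = 700        # warn well before the limit so there is time to restart the host
        POLL_SECONDS = 60

        def check_once():
            for proc in psutil.process_iter(attrs=["pid", "name", "num_threads"]):
                name = proc.info["name"] or ""
                if name.lower().startswith(PROCESS_NAME.lower()):
                    threads = proc.info["num_threads"]
                    status = "WARNING" if threads >= WARN_THRESHOLD else "ok"
                    # Hook real alerting (email, SCOM, etc.) in here; restarting the dedicated
                    # WCF-SQL host instance is the mitigation suggested above.
                    print("%s: %s (pid %d) has %d threads" % (status, name, proc.info["pid"], threads))

        if __name__ == "__main__":
            while True:
                check_once()
                time.sleep(POLL_SECONDS)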

    Read the article

  • John Hitchcock of Pace Describes the Oracle Agile PLM Customer Experience

    John Hitchcock, Senior Manager of Configuration Management at Pace (formerly 2Wire, Inc.), sat down for an interview during Oracle's Innovation Summit with Kerrie Foy, Manager of PLM Product Marketing at Oracle. Learn why his organization upgraded to the latest version of Agile and expanded the footprint to achieve impressive savings and productivity gains across the global, networked product value-chain.

    Read the article

  • OAGi Architecture Council OAGIS Ten Work Group Completes first round review of Concepts for OAGIS Ten

    - by michael.rowell
    Today the OAGi Architecture Council OAGIS Ten Work group completed the first-level review of concepts for existing content for OAGIS Ten. This is one of the first milestones for OAGIS Ten. In doing this, the concepts of key objects (the Nouns) have been identified along with the key context for their use. While OAGIS Ten remains a work in progress, the work group shows progress. Going forward, the other councils will provide additional input to these and their own concepts and the contexts for each. Additionally, sub-groups will focus on concepts for given domains. Stay tuned for future progress. If anyone is interested in joining the effort, OAGi membership is open to anyone; please see the OAGi Web site.

    Read the article

  • Oracle Fusion Tap is now available in the Apple App Store!

    - by Noelia Gomez
    You can now get Oracle's cloud applications for the iPad and gain instant access to your company's data. Now you and your employees can be more effective in your jobs anywhere, at any time. And this is not just about using enterprise applications on any mobile device. We took the power and convenience of the most popular tablet, the iPad, and combined it with the latest advances in cloud-based enterprise applications to bring you Oracle Fusion Tap. It is innovative, it is easy to use and, best of all, it is from Oracle, the name you can trust with your cloud projects. We strive to be highly productive and to get things done incrementally. Our mobile devices let us take advantage of opportunities that might otherwise have slipped away because we were not connected. With Oracle Fusion Tap you can connect, analyze, and work whenever and however you want. An easy, agile workforce is no longer the future. It is here and now, and Oracle is taking the lead with Fusion Tap! Download it now!

    Read the article

  • Lorem Ipsum – Generating in Word 2010

    - by Shawn Cicoria
    Well, apparently I missed this hidden feature, having used the Lorem Ipsum website for some time, but if you enter the following in a blank Word document you'll get 10 paragraphs of generated text: =Lorem(10) Such as: Lorem ipsum dolor sit amet, consectetuer adipiscing elit. Maecenas porttitor congue massa. Fusce posuere, magna sed pulvinar ultricies, purus lectus malesuada libero, sit amet commodo magna eros quis urna. Nunc viverra imperdiet enim. Fusce est. Vivamus a tellus. Pellentesque habitant morbi tristique senectus et netus et malesuada fames ac turpis egestas. Proin pharetra nonummy pede. Mauris et orci. Aenean nec lorem. In porttitor. Donec laoreet nonummy augue. Suspendisse dui purus, scelerisque at, vulputate vitae, pretium mattis, nunc. Mauris eget neque at sem venenatis eleifend. Ut nonummy. Fusce aliquet pede non pede. Suspendisse dapibus lorem pellentesque magna. Integer nulla. Donec blandit feugiat ligula. Donec hendrerit, felis et imperdiet euismod, purus ipsum pretium metus, in lacinia nulla nisl eget sapien. Donec ut est in lectus consequat consequat. Etiam eget dui. Aliquam erat volutpat. Sed at lorem in nunc porta tristique. Proin nec augue. Quisque aliquam tempor magna. Pellentesque habitant morbi tristique senectus et netus et malesuada fames ac turpis egestas. Nunc ac magna. Maecenas odio dolor, vulputate vel, auctor ac, accumsan id, felis. Pellentesque cursus sagittis felis.
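    If you want the same kind of filler text outside of Word, a tiny self-contained Python sketch like the one below will do; it simply cycles a few canned lorem ipsum sentences into paragraphs and makes no claim to match Word's own generator.

        # Self-contained sketch: emit N paragraphs of canned lorem ipsum text,
        # loosely mimicking what typing =Lorem(N) produces inside Word.
        import itertools

        SENTENCES = [
            "Lorem ipsum dolor sit amet, consectetuer adipiscing elit.",
            "Maecenas porttitor congue massa.",
            "Fusce posuere, magna sed pulvinar ultricies, purus lectus malesuada libero.",
            "Nunc viverra imperdiet enim.",
            "Pellentesque habitant morbi tristique senectus et netus et malesuada fames ac turpis egestas.",
            "Proin pharetra nonummy pede.",
        ]

        def lorem(paragraphs, sentences_per_paragraph=4):
            pool = itertools.cycle(SENTENCES)
            return "\n\n".join(
                " ".join(next(pool) for _ in range(sentences_per_paragraph))
                for _ in range(paragraphs)
            )

        if __name__ == "__main__":
            print(lorem(10))   # roughly what =Lorem(10) gives you in Word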

    Read the article

  • Announcement: Oracle Database Appliance 2.4 patch update now available

    - by uwes
    The Oracle Database Appliance 2.4 patch is now available from My Oracle Support (MOS). If you search for the Oracle Database Appliance 2.4.0.0.0 Kit under Patches it will display the newly uploaded bundles. The patch highlights include: Normal redundancy (double-mirroring) option providing 6TB of usable storage; Enhanced Diagnostics - Trace File Analyzer and ODACHK. Also, if you review the README, you may see content that says: "The grid infrastructure and database patching, both are rolling upgradable. During our patching, we patch the node 1 first and when completed, we patch the node 2." I would like to clarify that the 'infrastructure' updates (OS, Firmware, ILOM, etc.) will require a short downtime of the ODA while they are applied. When you update the grid infrastructure (--gi), the appliance manager verifies that the infrastructure was updated, so you cannot just patch the GI without first updating the infrastructure. The high-level update patch steps include (but are not limited to): download the patch update to your ODA; the --infra (infrastructure) is updated, ODA databases are down and the ODA is/may be rebooted; ODA and GI/databases are restarted; issue the command to update the Grid Infrastructure/databases (the steps are completed automatically in this order and you cannot control when the nodes are brought up and down during the patching): Node 1 -- shut down databases and GI; Node 1 -- patch GI/database; Node 1 -- bring up databases and GI; Node 2 -- shut down databases and GI; Node 2 -- patch GI/database; Node 2 -- bring up databases and GI. A replay of Friday's session with Sohan on the 2.4 release can be found here. The PDF of the presentation is here. The Data Sheet, WP, and 2.4 Configurator are available on the ODA OTN site.

    Read the article

  • Why Your ERP System Isn't Ready for the Next Evolution of the Enterprise

    - by [email protected]
    By ken.pulverman on March 24, 2010 8:51 AM ERP has been the backbone of enterprise software. The data held in your ERP system is core of most companies. Efficiencies gained through the accounting and resource allocation through ERP software have literally saved companies trillions of dollars. Not only does everything seem to be fine with your ERP system, you haven't had to touch it in years. Why aren't you ready for what comes next? Well judging by the growth rates in the space (Oracle posted only a 3% growth rate, while SAP showed a 12% decline) there hasn't been much modernization going on, just a little replacement activity. If you are like most companies, your ERP system is connected to a proprietary middleware solution that only effectively talks with a handful of other systems you might have acquired from the same vendor. Connecting your legacy system through proprietary middleware is expensive and brittle and if you are like most companies, you were only willing to pay an SI so much before you said "enough." So your ERP is working. It's humming along. You might not be able to get Order to Promise information when you take orders in your call center, but there are work arounds that work just fine. So what's the problem? The problem is that you built your business around your ERP core, and now there is such pressure to innovate your business processes to keep up that you need a whole new slew of modern apps and you need ERP data to be accessible from everywhere. Every time you change a sales territory or a comp plan or change a benefits provider your ERP system, literally the economic brain of your business, needs to know what's going on. And this giant need to access and provide information to your ERP is only growing. What makes matters even more challenging is that apps today come in every flavor under the Sun™. SaaS, cloud, managed, hybrid, outsourced, composite....and they all have different integration protocols. The only easy way to get ahead of all this is to modernize the way you connect and run your applications. Unlike the middleware solutions of yesteryear, modern middleware is effectively the operating system of the enterprise. In the same way that you rely on Apple, Microsoft, and Google to find a video driver for your 23" monitor or to ensure that Word or Keynote runs, modern middleware takes care of intra-application connectivity and process execution. It effectively allows you to take ERP out of the middle while ensuring connectivity to your vital data for anything you want to do. The diagram below reflects that change. In this model, the hegemony of ERP is over. It too has to become a stealthy modern app to help you quickly adapt to business changes while managing vital information. And through modern middleware it will connect to everything. So yes ERP as we've know it is dead, but long live ERP as a connected application member of the modern enterprise. I want to Thank Andrew Zoldan, Group Vice President Oracle Manufacturing Industries Business Unit for introducing me to how some of his biggest customers have benefited by modernizing their applications infrastructure and making ERP a connected application. by John Burke, Group Vice President, Applications Business Unit

    Read the article

  • Implementing SOA & Security with Oracle Fusion Middleware in your solution – partner webcast September 20th 2012

    - by JuergenKress
    Security has always been one of the main pain points for the IT industry, and new security challenges have been introduced with the proliferation of the service-oriented approach to building modern software. Oracle Fusion Middleware provides a wide variety of features that ease building service-oriented solutions, but how can these services be secured? Should we implement the security features in each and every service, or is there a better way? During the webinar we are going to show how to implement non-intrusive declarative security for your SOA components by introducing the Oracle product portfolio in this area, such as Oracle Web Services Manager and Oracle Enterprise Gateway. Agenda: SOA & Web Services basics: quick refresher Building your SOA with Oracle Fusion Middleware: product review Common security risks in the Web Services world SOA & Web Services security standards Implementing Web Services Security with the Oracle products Web Services Security with Oracle – the big picture Declarative end point security with Oracle Web Services Manager Perimeter Security with Oracle Enterprise Gateway Utilizing the other Oracle IDM products for the advanced scenarios Q&A session Delivery Format This FREE online LIVE eSeminar will be delivered over the Web. Registrations received less than 24 hours prior to start time may not receive confirmation to attend. Duration: 1 hour Register Now Send your questions and migration/upgrade requests to [email protected] Visit our ISV Migration Center blog regularly or follow us @oracleimc to learn more about Oracle Technologies, upcoming partner webcasts and events. All content is made available through our YouTube - SlideShare - Oracle Mix. SOA & BPM Partner Community For regular information on Oracle SOA Suite become a member of the SOA & BPM Partner Community; for registration please visit www.oracle.com/goto/emea/soa (OPN account required) If you need support with your account please contact the Oracle Partner Business Center. Blog Twitter LinkedIn Mix Forum Technorati Tags: ISV migration center,SOA,IDM,SOA Community,Oracle SOA,Oracle BPM,BPM,Community,OPN,Jürgen Kress

    Read the article

  • Announcing: Oracle's Sun Flash Accelerator F80 PCIe Card

    - by uwes
    Ramp Up Your Server Performance with Oracle's Sun Flash Accelerator F80 PCIe Card! Oracle’s Sun Flash Accelerator F80 PCIe Card accelerates IO-starved applications and server performance by reducing storage latencies and increasing I/O throughput for greater productivity and business response! The Sun Flash Accelerator F80 PCIe Card offers the following: Helps servers and their applications run faster and more efficiently, while reducing power and space With 800GB capacity, delivers 2x the capacity of the previous F40 Flash Card for less than half the $/GB Accelerates I/O-constrained databases with increased IOPS and consistent low-latency response times Current and planned server support includes: The F80 is currently supported in Oracle’s SPARC T4-1, T4-2 and X4-2L servers. SPARC T5, M5, M6 and Fujitsu M10 server support is planned for December 2013 (Preliminary only) Please read the Sales Bulletin on Oracle HW TRC for more details. (If you are not registered on Oracle HW TRC, click here ... and follow the instructions.) For More Information Go To: Oracle.com Flash Page Oracle Technology Network Flash Page

    Read the article

  • Getting Started with Oracle Fusion Procurement

    Designed from the ground-up using the latest technology advances and incorporating the best practices gathered from Oracle's thousands of customers, Fusion Applications are 100 percent open standards-based business applications that set a new standard for the way we innovate, work and adopt technology. Delivered as a complete suite of modular applications, Fusion Applications work with your existing portfolio to evolve your business to a new level of performance. In this AppCast, part of a special series on Fusion Applications, you hear about the unique advantages of Fusion Procurement, learn about the scope of the first release and discover how Fusion Procurement modules can be used to complement and enhance your existing Procurement solutions.

    Read the article

  • Siebel-Oracle Integrations

    Aseem Chandra, Senior Director, Industry Strategy talks to Cliff about how customers can benefit now from packaged integration of Siebel CRM with Oracle back office applications, and the roadmap to upgrade to Fusion Applications.

    Read the article

  • Sorry about the wait.

    - by Ratman21
    In the last two days have been trying remove “Iolo System Mechanic Professional” (With anti-virus and FireWall) from 3 of the 5 pc’s we have (3 lap tops and two Desk tops) as it was going to expire on the 13th.   So I could replace them with a free anti-virus (AVG) and just use the windows fire wall. I have been using the same set up on one of my desk tops (XP Pro) for 8 months and one of the Lap tops (Vista) for 5 months.   The problem was that System Mechanic did not want to go. Even after using the uninstall option on the desk top (my main PC, well its that because has the larger of all the PC’s hard drives but, is the oldest and runs XP home) and using Ccleaner to try and remove it.  It was still showing up as there and after I went a head and tried installing AVG and ran it. I found that the TCP/IP module was missing.  So no internet, I had to restore the PC back to the 1st to get the module back and then install AVG (after making sure window firewall was back on. I didn’t check that on the first try). Got the PC back to normal, very late last night. Only one of the two lap tops was easy but, even at that there are still some parts of System Mechanic on it but, AVG and firewall are working.   I may try an hunt down parts of System Mechanic on it and delete them on this lap top. Which was what finally had to do on the one of the Lap tops (also XP Home) as it would not uninstall after I restored the PC back to the 4th. So delete, delete, delete and Ccleaner (one dl file would not delete though). And I just finish installing AVG and now running a scan on the lap top. So all of this took two days (well three counting today). I started late Friday night and just finishing up now.   I only started this switch over after I had finished my Job search for day on Friday.   As for blogging on Tuesday, Wednesday and Thursday, I was busy and by the end of the day was too tired to blog, that and was hung up still on that 2nd dare of The Love Dare. So I cleaned the house, while she was out of the house. I mean, I cleaned, not just vacuumed house I cleaned the kitchen counter tops and the sinks. Did the dishes and some of the laundry over two of the those days.   As to the third day of Love Dare which is “Love is not selfish” and the dare “Whatever you put your time, energy, and money into will become more important to you. It’s hard to care for something you are not investing in. Along with restraining from negative comments, buy your spouse something that says, I was thinking of you today.”   Being on a very limited income, a lot of normal guy buying for girls is out (for one thing, the comment why did you waste our money on flowers, etc, etc, would come up. Not from me though). So that one is on hold till money issues are not a problem (no that does not mean never). The 4th day “Love is thoughtful” and the dare “Contact your spouse sometime during the business of the day. Have no agenda other than asking how he or she is doing and if there is anything you could do for them”.   I did this dare while I was still working with census last week and trying to do the dares. Well I start my CCNA classes Monday the 15th and I move on to the next Love Dare day “Love is not rude”.

    Read the article

  • ArchBeat Link-o-Rama for November 15, 2012

    - by Bob Rhubart
    WLST Starting and Stopping a WebLogic Environment | Rene van Wijk Oracle ACE Rene van Wijk explores how to start a server with as little input as possible. Developing and Enforcing a BYOD Policy | Darin Pendergraft Darin Pendergraft's post includes links to a recent Mobile Access Policy Survey by SANS as well as registration information for a Nov 15 webcast featuring security expert Tony DeLaGrange from Secure Ideas, SANS instructor, attorney and technology law expert Ben Wright, and Oracle IDM product manager Lee Howarth. Cloud Integration White Paper Now Available |Bruce Tierney Bruce Tierney shares an overview of Cloud Integration - A Comprehensive Solution, a new white paper he co-authored with David Baum, Rajesh Raheja, Bruce Tierney, and Vijay Pawar. My iPad & This Cloud Thing | Floyd Teter Oracle ACE Director Floyd Teter explains why the Cloud is making it possible for him to use his iPad for tasks previously relegated to his laptop, and why this same scenario is likely to play out for a great many people. 3 steps to a cloud database strategy that works | InfoWorld "Every day, cloud-based databases add more features, decrease in cost, and become better at handling prime-time business," says InfoWorld blogger David Linthicum. "However, enterprise IT is reluctant to move data to public clouds, citing the tried-and-true excuses of security, privacy, and compliance. Although some have valid points, their reasons often boil down to 'I don't wanna.'" Oracle VM Templates for EBS 12.1.3 for Exalogic Now Available | Elke Phelps "The templates contain all the required elements to create an Oracle E-Business Suite R12 demonstration system on an Exalogic server," says Elke Phelps. "You can use these templates to quickly build an EBS 12.1.3 demonstration environment, bypassing the operating system and the software install (via the EBS Rapid Install)." Thought for the Day "A good plan executed today always beats a perfect plan executed tomorrow." — George S. Patton (November 11, 1885 - December 21, 1945) Source: SoftwareQuotes.com

    Read the article

  • Announcing Oracle Knowledge 8.5: Even Superheroes Need Upgrades

    - by Chris Warner
    It’s no secret that we like Iron Man here at Oracle. We've certainly got stuff in common: one of the world’s largest technology companies and one of the world’s strongest technology-driven superheroes. If you've seen the recent Iron Man movies, you might have even noticed some of our servers sitting in Tony Stark’s lab. Heck, our CEO made a cameo appearance in one of the movies. Yeah, we’re fans. Especially as Iron Man is a regular guy with some amazing technology – like us. But Like all great things even Superheroes need upgrades, whether it’s their suit, their car or their spacestation. Oracle certainly has its share of advanced technology.  For example, Oracle acquired InQuira in 2011 after years of watching the company advance the science of Knowledge Management.  And it was some extremely super technology.  At that time, Forrester’s Kate Leggett wrote about it in ‘Standalone Knowledge Management Is Dead With Oracle's Announcement To Acquire InQuira’ saying ‘Knowledge, accessible via web self-service or agent UIs, is a critical customer service component for industries fielding repetitive questions about policies, procedures, products, and solutions.’  One short sentence that amounts to a very tall order.  Since the acquisition our KM scientists have been hard at work in their labs. Today Oracle announced its first major knowledge management release since its acquisition of InQuira: Oracle Knowledge 8.5. We’ve put a massively-upgraded supersuit on our KM solution because we still have bad guys to fight. And we are very proud to say that we went way beyond our original plans. So what, exactly, did we do in Oracle Knowledge 8.5? We did what any high-tech super-scientist would do. We made Oracle Knowledge smarter, stronger and faster. First, we gave Oracle Knowledge a stronger heart: Certified on Oracle technologies, including Oracle WebLogic Server, Oracle Business Intelligence, Oracle Exadata Database Machine and Oracle Exalogic Elastic Cloud. Huge scaling and performance improvements. Then we gave it a better reach: Improved iConnect functionality that delivers contextualized knowledge directly into CRM applications. Better content acquisition support across disparate sources. Enhanced Language Support including Natural Language search support for 16 Languages. Enhanced Keyword Search for 23 authoring languages, as well as enhanced out-of-the-box industry ontologies covering 14 languages. And finally we made Oracle Knowledge ridiculously smarter: Improved Natural Language Search and a new Contextual Answer Delivery that understands the true intent of each inquiry to deliver the best possible answers. AnswerFlow for Guided Navigation & Answer Delivery, a new application for guided troubleshooting and answer delivery. Knowledge Analytics standardized on Oracle’s Business Intelligence Enterprise Edition. Knowledge Analytics Dashboards optimized search and content creation through targeted, actionable insights. A new three-level language model "Global - Language - Locale" that provides an improved search experience for organizations with a global footprint. We believe that Oracle Knowledge 8.5 is the most sophisticated KM solution in existence today and we’ve worked very hard to help it fulfill the promise of KM: empowering customers and employees with deep insights wherever they need them. We hope you agree it’s a suit worth wearing. 
We are continuing to invest in Knowledge Management as it continues to be especially relevant today with the enterprise push for peer collaboration, crowd-sourced wisdom, agile innovation, social interaction channels, applied real-time analytics, and personalization. In fact, we believe that Knowledge Management is a critical part of the Customer Experience portfolio for success. From empowering employee’s, to empowering customers, to gaining the insights from interactions across all channels, businesses today cannot efficiently scale their efforts, strengthen their customer relationships or achieve their growth goals without a solid Knowledge Management foundation to build from. And like every good superhero saga, we’re not even close to being finished. Next we are taking Oracle Knowledge into the Cloud. Yes, we’re thinking what you’re thinking: ROCKET BOOTS! Stay tuned for the next adventure… By Nav Chakravarti, Vice-President, Product Management, CRM Knowledge and previously the CTO of InQuira, a knowledge management company acquired by Oracle in 2011. 

    Read the article
