Search Results

Search found 41329 results on 1654 pages for 'oracle database vault'.


  • Automatic database generation / migration with perl

    - by pistacchio
    Hi, in RoR, Django, or web2py you can "describe" a database (as a set of classes that map to tables) and the framework (having been provided with a connection string to the desired database) generates the tables, fields, and relations; in the case of RoR and web2py it also keeps them up to date (e.g., removing a class drops the table, adding a property to a class triggers an "alter table add", etc.). Is there any Perl module that does the same? E.g., one that takes a YAML / XML / JSON description of a database as input and modifies / generates the database accordingly? Thanks in advance.

    Read the article

  • ADF Essentials - Available for free and certified on GlassFish!

    - by delabassee
    If you are an Oracle customer, you are probably familiar with Oracle ADF (Application Development Framework). If you are not, ADF is, in a nutshell, a Java EE based framework that simplifies the development of enterprise applications. It is the development framework that was used, among other things, to build Oracle Fusion Applications. Oracle has just released ADF Essentials, a free-to-develop-and-deploy version of Oracle ADF's core technologies. And since good news never comes alone, GlassFish 3.1.2 is now a certified container for ADF Essentials! ADF Essentials leverages core ADF features and includes:
        Oracle ADF Faces - a set of more than 150 JSF 2.0 rich components that simplify the creation of rich Web user interfaces (charting, data visualization, advanced tables, drag and drop, touch gesture support, extensive windowing capabilities, etc.)
        Oracle ADF Controller - an extension of the JSF controller that helps build reusable process flows and provides the ability to create dynamic regions within Web pages.
        Oracle ADF Binding - an XML-based, metadata abstraction layer to connect user interfaces to business services.
        Oracle ADF Business Components - a declaratively configured layer that simplifies developing business services against relational databases by providing reusable components that implement common design patterns.
    ADF is a highly declarative framework, and it has always had very good tooling support. Visual development for Oracle ADF Essentials is provided in Oracle JDeveloper 11.1.2.3. Eclipse support is planned for a later OEPE (Oracle Enterprise Pack for Eclipse) release. Here are some relevant links to quickly learn how to use ADF Essentials on GlassFish:
        Video: Oracle ADF Essentials Overview and Demo
        Deploying Oracle ADF Essentials Applications to GlassFish
        OTN: Oracle ADF Essentials Resources

    Read the article

  • Oracle Primavera Partner Programs

    - by mark.kromer
    Here is the slide presentation, with only the slides that can be shared at this time, for our Oracle Primavera partner programs focused on expanding P6's workflow and reporting capabilities. By leveraging Oracle's BPM and BI Publisher products, you can build exciting new workflows and enhanced reports to expand the capabilities of Primavera applications.

    Read the article

  • OpenWorld hands on labs: HOL9558, HOL9559 and HOL9870

    - by cpauliat
    At the upcoming Oracle OpenWorld event, which starts in two days in San Francisco, Olivier Canonge, Simon Coter, Eric Bezille and I will run three hands-on labs about cloud computing using the Oracle VM for x86 virtualization tool (details below). For each lab, a detailed document (in PDF format) explains all the steps. If you don't have the opportunity to attend the OpenWorld lab sessions, you can still run the labs at home or in the office using those documents.
        Lab 9558: Deploying Infrastructure as a Service (IaaS) with Oracle VM
        Session ID: HOL9558
        Tuesday, October 2nd, 2012, 10:15am - 11:15am
        Location: Marriott Marquis - Salon 14/15
        PDF Document: part1 part2 part3 (right-click and save the link for each part, then use WinZip on file .001 to extract the PDF doc from the 3 zip files)
        Lab 9559: Virtualize and Deploy Oracle Applications Using Oracle VM Templates
        Session ID: HOL9559
        Tuesday, October 2nd, 2012, 11:45am - 12:45pm
        Location: Marriott Marquis - Salon 14/15
        PDF Document
        Lab 9870: x86 Enterprise Cloud Infrastructure with Oracle VM 3.x and Sun ZFS Storage Appliance
        Session ID: HOL9870
        Wednesday, October 3rd, 2012, 5:00pm - 6:00pm
        Location: Marriott Marquis - Salon 14/15
        PDF Document: part1 part2 part3 (right-click and save the link for each part, then use WinZip on file .001 to extract the PDF doc from the 3 zip files)

    Read the article

  • ZFS for Database Log Files

    - by user12620111
    I've been troubled by drop outs in CPU usage in my application server, characterized by the CPUs suddenly going from close to 90% CPU busy to almost completely CPU idle for a few seconds. Here is an example of a drop out as shown by a snippet of vmstat data taken while the application server is under a heavy workload.

        # vmstat 1
         kthr      memory            page            disk          faults      cpu
         r b w   swap      free      re mf pi po fr de sr s3 s4 s5 s6   in     sy     cs     us sy id
         1 0 0 130160176 116381952  0 16  0  0  0  0  0  0  0  0  0  207377 117715 203884  70 21  9
        12 0 0 130160160 116381936  0 25  0  0  0  0  0  0  0  0  0  200413 117162 197250  70 20  9
        11 0 0 130160176 116381920  0 16  0  0  0  0  0  0  1  0  0  203150 119365 200249  72 21  7
         8 0 0 130160176 116377808  0 19  0  0  0  0  0  0  0  0  0  169826  96144 165194  56 17 27
         0 0 0 130160176 116377800  0 16  0  0  0  0  0  0  0  0  1   10245   9376   9164   2  1 97
         0 0 0 130160176 116377792  0 16  0  0  0  0  0  0  0  0  2   15742  12401  14784   4  1 95
         0 0 0 130160176 116377776  2 16  0  0  0  0  0  0  1  0  0   19972  17703  19612   6  2 92
        14 0 0 130160176 116377696  0 16  0  0  0  0  0  0  0  0  0  202794 116793 199807  71 21  8
         9 0 0 130160160 116373584  0 30  0  0  0  0  0  0 18  0  0  203123 117857 198825  69 20 11

    This behavior occurred consistently while the application server was processing synthetic transactions: HTTP requests from JMeter running on an external machine. I explored many theories trying to explain the drop outs, including:
        Unexpected JMeter behavior
        Network contention
        Java Garbage Collection
        Application Server thread pool problems
        Connection pool problems
        Database transaction processing
        Database I/O contention

    Graphing the CPU %idle led to a breakthrough: several of the drop outs were 30 seconds apart. With that insight, I went digging through the data again, looking for other outliers that were 30 seconds apart. In the database server statistics, I found spikes in the iostat "asvc_t" (average response time of disk transactions, in milliseconds) for the disk drive that was being used for the database log files. Here is an example:

                        extended device statistics
        r/s    w/s   kr/s    kw/s  wait actv wsvc_t asvc_t  %w  %b device
        0.0 2053.6    0.0  8234.3   0.0  0.2    0.0    0.1   0  24 c3t60080E5...F4F6d0s0
        0.0 2162.2    0.0  8652.8   0.0  0.3    0.0    0.1   0  28 c3t60080E5...F4F6d0s0
        0.0 1102.5    0.0 10012.8   0.0  4.5    0.0    4.1   0  69 c3t60080E5...F4F6d0s0
        0.0   74.0    0.0  7920.6   0.0 10.0    0.0  135.1   0 100 c3t60080E5...F4F6d0s0
        0.0  568.7    0.0  6674.0   0.0  6.4    0.0   11.2   0  90 c3t60080E5...F4F6d0s0
        0.0 1358.0    0.0  5456.0   0.0  0.6    0.0    0.4   0  55 c3t60080E5...F4F6d0s0
        0.0 1314.3    0.0  5285.2   0.0  0.7    0.0    0.5   0  70 c3t60080E5...F4F6d0s0

    Here is a little more information about my database configuration:
        The database and application server were running on two different SPARC servers.
        Storage for the database was on a storage array connected via 8 gigabit Fibre Channel.
        Data storage and log files were on different physical disk drives.
        Reliable low latency I/O is provided by battery backed NVRAM.
        Highly available: two Fibre Channel links accessed via MPxIO, two mirrored cache controllers, and the log file physical disks mirrored in the storage device.
        Database log files are on a ZFS filesystem with cutting-edge technologies, such as copy-on-write and end-to-end checksumming.

    Why would I be getting service time spikes in my high-end storage?

    First, I wanted to verify that the database log disk service time spikes aligned with the application server CPU drop outs, and they did. At first, I guessed that the disk service time spikes might be related to flushing the write-through cache on the storage device, but I was unable to validate that theory. After searching the WWW for a while, I decided to try using a separate log device:

        # zpool add ZFS-db-41 log c3t60080E500017D55C000015C150A9F8A7d0

    The ZFS log device is configured in a similar manner as described above: two physical disks mirrored in the storage array. This change to the database storage configuration eliminated the application server CPU drop outs. Here is the zpool configuration:

        # zpool status ZFS-db-41
          pool: ZFS-db-41
         state: ONLINE
          scan: none requested
        config:
                NAME                   STATE
                ZFS-db-41              ONLINE
                  c3t60080E5...F4F6d0  ONLINE
                logs
                  c3t60080E5...F8A7d0  ONLINE

    Now, the I/O spikes look like this:

                        extended device statistics
        r/s    w/s   kr/s    kw/s  wait actv wsvc_t asvc_t  %w  %b device
        0.0 1053.5    0.0  4234.1   0.0  0.8    0.0    0.7   0  75 c3t60080E5...F8A7d0s0
                        extended device statistics
        r/s    w/s   kr/s    kw/s  wait actv wsvc_t asvc_t  %w  %b device
        0.0 1131.8    0.0  4555.3   0.0  0.8    0.0    0.7   0  76 c3t60080E5...F8A7d0s0
                        extended device statistics
        r/s    w/s   kr/s    kw/s  wait actv wsvc_t asvc_t  %w  %b device
        0.0 1167.6    0.0  4682.2   0.0  0.7    0.0    0.6   0  74 c3t60080E5...F8A7d0s0
        0.0  162.2    0.0 19153.9   0.0  0.7    0.0    4.2   0  12 c3t60080E5...F4F6d0s0
                        extended device statistics
        r/s    w/s   kr/s    kw/s  wait actv wsvc_t asvc_t  %w  %b device
        0.0 1247.2    0.0  4992.6   0.0  0.7    0.0    0.6   0  71 c3t60080E5...F8A7d0s0
        0.0   41.0    0.0    70.0   0.0  0.1    0.0    1.6   0   2 c3t60080E5...F4F6d0s0
                        extended device statistics
        r/s    w/s   kr/s    kw/s  wait actv wsvc_t asvc_t  %w  %b device
        0.0 1241.3    0.0  4989.3   0.0  0.8    0.0    0.6   0  75 c3t60080E5...F8A7d0s0
                        extended device statistics
        r/s    w/s   kr/s    kw/s  wait actv wsvc_t asvc_t  %w  %b device
        0.0 1193.2    0.0  4772.9   0.0  0.7    0.0    0.6   0  71 c3t60080E5...F8A7d0s0

    We can see the steady flow of 4k writes to the ZIL device from O_SYNC database log file writes. The spikes are from flushing the transaction group. Like almost all problems that I run into, once I thoroughly understand the problem, I find that other people have documented similar experiences. Thanks to all of you who have documented alternative approaches.

    Saved for another day: now that the problem is obvious, I should try "zfs:zfs_immediate_write_sz" as recommended in the ZFS Evil Tuning Guide.

    References:
        The ZFS Intent Log
        Solaris ZFS, Synchronous Writes and the ZIL Explained
        ZFS Evil Tuning Guide: Cache Flushes
        ZFS Evil Tuning Guide: Tuning ZFS for Database Performance
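
    As a rough sketch of the mechanism behind that steady ZIL traffic (not code from this post; the file name and record format are hypothetical), here is what an O_SYNC-style log writer looks like in Java: each write blocks until the record is on stable storage, which is exactly the kind of small synchronous I/O that lands on the ZFS intent log.

        import java.io.IOException;
        import java.nio.ByteBuffer;
        import java.nio.channels.FileChannel;
        import java.nio.charset.StandardCharsets;
        import java.nio.file.Path;
        import java.nio.file.StandardOpenOption;

        public class SyncLogWriterSketch {
            public static void main(String[] args) throws IOException {
                Path logFile = Path.of("redo.log");  // hypothetical log file
                try (FileChannel channel = FileChannel.open(logFile,
                        StandardOpenOption.CREATE,
                        StandardOpenOption.WRITE,
                        // SYNC behaves like O_SYNC: write() returns only after data is durable
                        StandardOpenOption.SYNC)) {
                    for (int i = 0; i < 1000; i++) {
                        ByteBuffer record = ByteBuffer.wrap(
                                ("commit record " + i + "\n").getBytes(StandardCharsets.UTF_8));
                        // Each call is a small synchronous write, like the 4k ZIL writes above.
                        channel.write(record);
                    }
                }
            }
        }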

    Read the article

  • New Worklist features on 12.1.3

    - by Vijay Shanmugam
    The following new Worklist features are available in E-Business Suite 12.1.3 via Patch 13646173.

    Ability to view comments on top of a notification
    If an action is performed on a notification, such as Reassign, Request for Information or Provide Information, the recipient of the notification will see who performed the last action and the associated comment on top of the notification.

    Reassigning a request-for-information notification
    If an approver requests more information on a notification from its submitter, the submitter now has two options:
        Answer Request for More Information
        Transfer Request for More Information
    If the submitter thinks the requested information can be provided by another user, he/she can transfer the request to that user. Please note that only Transfer is supported for Request for More Information. Once transferred, the submitter cannot access the notification and provide the requested information.

    Use actual sent date when reassigning a notification
    The Sent field in the notification header always showed the date on which the notification was first created. If the notification was later reassigned, the Sent date was not updated to show the last action date. This caused problems in the following scenario:
        An approval notification was sent to JACK on 01-JAN-2012.
        JACK waited for 10 days before reassigning it to JILL on 10-JAN-2012.
        JILL does not see the notification as sent on 10-JAN-2012; instead she sees it as sent on 01-JAN-2012.
        Although the notification was originally created on 01-JAN-2012, it was sent to JILL only on 10-JAN-2012.
    The enhancement now shows the correct sent date in the Worklist and on the Notification Details page.

    Figure 1 - Depicts all the above 3 features

    Related Action History for response-required notifications
    So far it was possible to embed the Action History of a response-required notification into another FYI notification using the #RELATED_HISTORY attribute (please refer to the Workflow Developer Guide for details about this attribute). The enhancement now enables developers to embed the Action History of one response-required notification into another response-required notification. To do this, create the message attribute #RELATED_HISTORY and set its value at run-time in the following format:
        {TITLE}[ITEM_TYPE:ITEM_KEY]PROCESS_NAME:ACTIVITY_LABEL_NAME
    The TITLE, ITEM_TYPE and ITEM_KEY are optional values. TITLE is used as the Related Action History header title; if TITLE is not present, a default title, "Related Action History", is shown. If ITEM_TYPE is present and ITEM_KEY is not, for example {TITLE}[ITEM_TYPE]PROCESS_NAME:ACTIVITY_LABEL_NAME, the Related Action History is populated from the parent item type of the current item. If both ITEM_TYPE and ITEM_KEY are present, for example {TITLE}[ITEM_TYPE:ITEM_KEY]PROCESS_NAME:ACTIVITY_LABEL_NAME, the Related Action History is populated from that specific instance activity.

    Figure 2 - Depicts Related Action History feature

    Read the article

  • Oracle Database In-Memory Launch - Featuring Larry Ellison - June 10 - Join the webcast!

    - by Javier Puerta
    For more than three-and-a-half decades, Oracle has defined database innovation. With our market-leading technologies, customers have been able to out-think and out-perform their competition. Soon they will be able to do that even faster. At a live launch event and simultaneous webcast, Larry Ellison will reveal the future of the database. Promote this strategic event to customers. Watch Larry Ellison on Tuesday, June 10, 2014, 19:00 - 20:30 CET (6:00 pm - 7:30 pm UK). Join the webcast here!

    Read the article

  • Critical Patch Update for October 2012 Now Available

    - by Steven Chan (Oracle Development)
    The Critical Patch Update (CPU) for October 2012 was released on October 16, 2012. Oracle strongly recommends applying the patches as soon as possible. The Critical Patch Update Advisory is the starting point for relevant information. It includes a list of products affected, pointers to obtain the patches, a summary of the security vulnerabilities, and links to other important documents. Supported products that are not listed in the "Supported Products and Components Affected" section of the advisory do not require new patches to be applied. Also, it is essential to review the Critical Patch Update supporting documentation referenced in the Advisory before applying patches, as this is where you can find important pertinent information. The Critical Patch Update Advisory is available at the following location: Oracle Technology Network. The next four Critical Patch Update release dates are: January 15, 2013; April 16, 2013; July 16, 2013; and October 15, 2013. E-Business Suite Releases 11i and 12 Reference: Oracle E-Business Suite Releases 11i and 12 Critical Patch Update Knowledge Document (October 2012) (Note 1486535.1)

    Read the article

  • 1.6 or 1.7: Browser side Java version for Oracle VM console

    - by katsumii
    I noticed that one of the recent FAQs in the OVM forum is about the console: OTN Discussion Forums: Oracle VM Server for x86 - "vnc console not running on Oracle VM 3.1.1". One of the variables for getting the console running is the Java version. I myself hit the bug below on Windows with JDK 1.7. This is a Windows-only bug; I had success with JDK 1.7 on Linux. Bug ID: 7191616 - javaws.exe crashes when starting a jnlp file. The jnlp file provided in the steps to reproduce always crashes javaws.exe in Java 7 Update 6. No problems in Update 4.

    Read the article

  • How do you handle objects that need custom behavior, and need to exist as an entity in the database?

    - by Scott Whitlock
    For a simple example, assume your application sends out notifications to users when various events happen. So in the database I might have the following tables:

        TABLE Event
            EventId uniqueidentifier
            EventName varchar

        TABLE User
            UserId uniqueidentifier
            Name varchar

        TABLE EventSubscription
            EventUserId
            EventId
            UserId

    The events themselves are generated by the program. So there are hard-coded points in the application where an event instance is generated, and it needs to notify all the subscribed users. So, the application itself doesn't edit the Event table, except during initial installation, and during an update where a new Event might be created. At some point, when an event is generated, the application needs to look up the Event and get a list of Users. What's the best way to link the event in the source code to the event in the database?

    Option 1: Store the EventName in the program as a fixed constant, and look it up by name.
    Option 2: Store the EventId in the program as a static Guid, and look it up by ID.

    Extra Credit
    In other similar circumstances I may want to include custom behavior with the event type. That is, I'll want subclasses of my Event entity class with different behaviors, and when I look up an event, I want it to return an instance of my subclass. For instance:

        class Event
        {
            public Guid Id { get; }
            public string EventName { get; }
            public ReadOnlyCollection<EventSubscription> EventSubscriptions { get; }

            public void NotifySubscribers()
            {
                foreach(var eventSubscription in EventSubscriptions)
                {
                    eventSubscription.Notify();
                }
                this.OnSubscribersNotified();
            }

            public virtual void OnSubscribersNotified() {}
        }

        class WakingEvent : Event
        {
            private readonly IWaker waker;

            public WakingEvent(IWaker waker)
            {
                if(waker == null) throw new ArgumentNullException("waker");
                this.waker = waker;
            }

            public override void OnSubscribersNotified()
            {
                this.waker.Wake();
                base.OnSubscribersNotified();
            }
        }

    So, that means I need to map WakingEvent to whatever key I'm using to look it up in the database. Let's say that's the EventId. Where do I store this relationship? Does it go in the event repository class? Should the WakingEvent declare its own ID in a static member or method? ...and then, is this all backwards? If all events have a subclass, then instead of retrieving events by ID, should I be asking my repository for the WakingEvent like this:

        public T GetEvent<T>() where T : Event
        {
            ... // what goes here? ...
        }

    I can't be the first one to tackle this. What's the best practice?
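
    One possible shape for "option 2" plus subclass lookup, sketched in Java rather than the question's C# (all class and method names here are hypothetical, not an established best practice): each event subclass declares its well-known ID, and a small registry maps that ID to a factory, so the repository can hand back the right subclass without a switch statement.

        import java.util.HashMap;
        import java.util.Map;
        import java.util.UUID;
        import java.util.function.Supplier;

        interface Waker { void wake(); }

        abstract class Event {
            public void notifySubscribers() {
                // ...notify each EventSubscription here...
                onSubscribersNotified();
            }
            protected void onSubscribersNotified() {}
        }

        class WakingEvent extends Event {
            // Well-known ID, matching this event type's EventId row in the Event table.
            static final UUID ID = UUID.fromString("9f1c2a34-6b7d-4e8f-a0b1-c2d3e4f5a6b7");
            private final Waker waker;
            WakingEvent(Waker waker) { this.waker = waker; }
            @Override protected void onSubscribersNotified() { waker.wake(); }
        }

        class EventRegistry {
            private final Map<UUID, Supplier<? extends Event>> factories = new HashMap<>();

            <T extends Event> void register(UUID eventId, Supplier<T> factory) {
                factories.put(eventId, factory);
            }

            // A repository would load the row by EventId, then ask the registry
            // for an instance of the matching subclass.
            Event create(UUID eventId) {
                Supplier<? extends Event> factory = factories.get(eventId);
                if (factory == null) {
                    throw new IllegalArgumentException("No event registered for id " + eventId);
                }
                return factory.get();
            }
        }

        public class EventLookupSketch {
            public static void main(String[] args) {
                EventRegistry registry = new EventRegistry();
                registry.register(WakingEvent.ID, () -> new WakingEvent(() -> System.out.println("wake!")));

                Event event = registry.create(WakingEvent.ID);
                event.notifySubscribers();  // prints "wake!"
            }
        }

    The registrations can live next to each subclass or in a single composition-root module; either way, the ID-to-class mapping exists in exactly one place.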

    Read the article

  • Oracle’s Primavera Inspire for SAP: Aligning Business Priorities with Project Priorities

    Oracle’s Primavera Inspire for SAP integrates schedule, financial, and resource information between Oracle’s Primavera project portfolio management applications and SAP’s enterprise resource planning solutions. Join Tracy Bowman, Principal Product Manager, and learn how Primavera Inspire for SAP can help utilities and oil and gas companies complete projects on time and within budget by providing them with a single access point for all project- and portfolio-related information.

    Read the article

  • Oracle’s Visual CRM Solution

    Visual CRM adds the powerful visualization and document-centric collaboration capabilities of Oracle’s AutoVue to Oracle’s best-in-class CRM solutions. By introducing a visual aspect to call center, field service, and ordering processes, Visual CRM helps teams provide faster responses to customer issues, optimize field service performance, and shorten ordering cycles while minimizing order errors. With Visual CRM, organizations can achieve improved customer service levels and field service operations, which help drive margin, top-line revenue, and customer retention.

    Read the article

  • Best practice - logging events (general) and changes (database)

    - by b0x0rz
    I need help with logging all activities on a site as well as database changes.

    Requirements:
        * should be in the database
        * should be easily searchable by initiator (user name / session id), event (activity type) and event parameters

    I can think of a database design, but either it involves a lot of tables (one per event) so I can log each of the parameters of an event in a separate field, OR it involves one table with generic fields (7 int/numeric and 7 text types) and logs everything in one table, with an event type field determining what parameter got written where (and hoping that I don't need more than 7 fields of a certain type, or 8 or 9 or whatever number I choose)...

    Example of entries (the usual things):
        [username] login failed @datetime
        [username] login successful @datetime
        [username] changed password, estimated security of password [low/ok/high/perfect] @datetime
        [username] clicked result [result number] [result id] after searching for [search string] and got [number of results] @datetime
        [username] changed profile name from [old name] to [new name] @datetime
        [username] verified name with [credit card type] credit card @datetime
        database table [table name] purged of old entries via automated process @datetime
        etc...

    So, has anyone dealt with this before? Any best practices / links you can share? I've seen it done with the generic solution mentioned above, but somehow that goes against what I learned from database design. As you can see, the sheer number of events that need to be trackable (each user will be able to see this info) is giving me headaches, BUT I do LOVE the one-event-per-table solution more than the generic one. Any thoughts?

    Edit: also, is there maybe an authoritative list of such (likely) events somewhere? Thanks.

    Stack Overflow says: the question you're asking appears subjective and is likely to be closed. My answer: it probably is subjective, but it is directly related to an issue I have with designing a database / writing my code, so I'd welcome any help. Also, I tried narrowing the ideas down to two, so hopefully one of these will prevail, unless there already is an established solution for these kinds of things.
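
    For what it's worth, a third design that is often used for exactly this trade-off is one generic event table plus a child name/value parameter table: two tables total, no guessing at how many generic columns are enough, and each parameter remains individually searchable. A minimal, hedged sketch in Java/JDBC (all table and column names are made up for illustration):

        import java.sql.Connection;
        import java.sql.PreparedStatement;
        import java.sql.ResultSet;
        import java.sql.SQLException;
        import java.sql.Timestamp;
        import java.time.Instant;
        import java.util.Map;

        final class AuditLogger {
            private final Connection connection;

            AuditLogger(Connection connection) { this.connection = connection; }

            // One row in audit_event per event, one row in audit_event_param per parameter.
            void log(String initiator, String eventType, Map<String, String> params) throws SQLException {
                long eventId;
                try (PreparedStatement insertEvent = connection.prepareStatement(
                        "INSERT INTO audit_event (initiator, event_type, created_at) VALUES (?, ?, ?)",
                        new String[] {"event_id"})) {
                    insertEvent.setString(1, initiator);
                    insertEvent.setString(2, eventType);
                    insertEvent.setTimestamp(3, Timestamp.from(Instant.now()));
                    insertEvent.executeUpdate();
                    try (ResultSet keys = insertEvent.getGeneratedKeys()) {
                        keys.next();
                        eventId = keys.getLong(1);
                    }
                }
                try (PreparedStatement insertParam = connection.prepareStatement(
                        "INSERT INTO audit_event_param (event_id, param_name, param_value) VALUES (?, ?, ?)")) {
                    for (Map.Entry<String, String> p : params.entrySet()) {
                        insertParam.setLong(1, eventId);
                        insertParam.setString(2, p.getKey());
                        insertParam.setString(3, p.getValue());
                        insertParam.addBatch();
                    }
                    insertParam.executeBatch();
                }
            }
        }

        // Usage, e.g.: logger.log("jack", "login_failed", Map.of("ip", "10.0.0.7", "reason", "bad password"));

    Searching by initiator, event type, or any single parameter is then a join between the two tables.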

    Read the article

  • Syncing Magento database from development to production

    - by ringerce
    I use git for version control. I have development, staging, and production environments. When I finish in development, I push to staging for review by the client. When approved, I push changes from staging to production. That works fine as long as there are no database changes. What happens if I install modules via Magento Connect on local development and they make database modifications? How would I push those changes up to the production server, since the production server is always changing?

    Edit: I wrote two shell scripts. One pulls the production database down to my development server, replaces the base URL with the development URL, and updates my development DB accordingly. It also leaves the production SQL dump behind to be added to my git repo. I'm not really sure it's beneficial to keep the raw dumps in source control, but I'm going to try it out. The second script moves the development database up to staging and essentially performs the same operations as the first. Now, when it comes time to move to production, I pull the updated production repo into the production server and allow Magento to do its thing. I also started using SQLyog recently, and it has a database comparison wizard which will give me the differences between my development and production databases and allow me to merge the changes in selectively. It always creates a migration script, which I add to source control as well. If anything goes wrong, I can run the comparison to see if anything was missed. Does this sound like a decent workflow to you guys?

    Read the article

  • The Benefits of Oracle's Compliance Architecture

    Fred chats with Deborah Hamilton, Senior Compliance Product Marketing Director at Oracle, about what the Oracle Compliance Architecture is, how customers are benefiting from its integrated approach to compliance - of technology, people and processes - and how it helps organizations meet multiple compliance mandates.

    Read the article

  • How to distribute a unique database already in production?

    - by JVerstry
    Let's assume a successful Spring web application running on a MySQL or PostgreSQL kind of database. The traffic is becoming so high and the amount of data is becoming so big that a distributed database solution needs to be implemented. It is a scalability issue. Let's assume this application is using Hibernate and the data access layer is cleanly separated with DAO objects. What would be the best strategy to scale this database? Does anyone have hands-on experience to share? Is it possible to minimize sharding code (Shard) in the application? Ideally, one should be able to add or remove databases easily. A failback solution is welcome too. I am not looking for "you could go for sharding" or "you could go NoSQL" kinds of answers. I am looking for deeper answers from people with experience.
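
    To illustrate the "minimize sharding code in the application" part only: one common pattern is to hide shard selection behind a single router that the DAO layer uses to obtain connections, so adding or removing a database only touches the router's configuration. A rough Java sketch under that assumption (names are hypothetical; with Spring/Hibernate the same idea is usually pushed into a routing DataSource so the DAOs stay unchanged):

        import java.sql.Connection;
        import java.sql.SQLException;
        import java.util.List;
        import javax.sql.DataSource;

        final class ShardRouter {
            private final List<DataSource> shards;  // one DataSource per physical database

            ShardRouter(List<DataSource> shards) {
                if (shards.isEmpty()) {
                    throw new IllegalArgumentException("at least one shard is required");
                }
                this.shards = List.copyOf(shards);
            }

            // Simple modulo placement; consistent hashing would reduce data movement
            // when shards are added or removed.
            private int shardIndexFor(Object routingKey) {
                return Math.floorMod(routingKey.hashCode(), shards.size());
            }

            Connection connectionFor(Object routingKey) throws SQLException {
                return shards.get(shardIndexFor(routingKey)).getConnection();
            }
        }

        // A DAO then stays almost shard-unaware:
        // try (Connection c = router.connectionFor(customerId)) {
        //     // run the usual queries against the shard that owns customerId
        // }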

    Read the article

  • Oracle moves to Java embedded middleware

    - by hinkmond
    Here's another article pointing out our move to Java embedded middleware with our launch of Oracle Java Embedded Suite 7.0. See: Oracle moves to Java embedded middleware. Here's a quote: "At the JavaOne Embedded conference, a wafer thin embedded device that was smaller than a Ritz cracker was loaded up with the Java Embedded Suite." I like that: "a wafer thin embedded device". Just one thin wafer. Reminds me of the scene from Monty Python's The Meaning of Life. "Better?" Hinkmond

    Read the article

  • UK Oracle User Group Event: Trends in Identity Management

    - by B Shashikumar
    As threat levels rise and new technologies such as cloud and mobile computing gain widespread acceptance, security is occupying more and more mindshare among IT executives. To help prepare for the rapidly changing security landscape, the Oracle UK User Group community and our partners at Enline/SENA have put together a User Group event in London on Apr 19 where you can learn more from your industry peers about upcoming trends in identity management. Here are some of the key trends in identity management and security that we predicted at the beginning of last year, and a look at how they have turned out so far. You have to admit that we have a pretty good track record when it comes to forecasting trends in identity management and security.

    Threat levels will grow, and there will be more serious breaches: We have since witnessed breaches of high-value targets like RSA and Epsilon. Most organizations have not done enough to protect against insider threats. Organizations need to look for security solutions that stop user access to applications based on real-time patterns of fraud, and that cover situations in which employees change roles or employment status within a company.

    Cloud computing will continue to grow, and require new security solutions: Cloud computing has since exploded into a dominant secular trend in the industry. Cloud computing continues to present many opportunities, like low upfront costs, rapid deployment, etc. But cloud computing also increases policy fragmentation and reduces visibility and control. So organizations require solutions that bridge the security gap between the enterprise and cloud applications to reduce fragmentation and increase control.

    Mobile devices will challenge traditional security solutions: Since that time, we have witnessed a proliferation of mobile devices, combined with increasing numbers of employees bringing their own devices to work (BYOD). These trends continue to dissolve the traditional boundaries of the enterprise. This, in turn, requires a holistic approach within an organization that combines strong authentication and fraud protection, externalization of entitlements, and centralized management across multiple applications, plus open standards to make all that possible.

    Security platforms will continue to converge: As organizations move increasingly toward vendor consolidation, security solutions are also evolving. Next-generation identity management platforms have best-of-breed features, and must also remain open and flexible to remain viable. As a result, developers need products such as the Oracle Access Management Suite in order to efficiently and reliably build identity and access management into applications, without requiring security experts.

    Organizations will increasingly pursue "business-centric compliance": Privacy and security regulations have continued to increase, so businesses are increasingly looking for solutions that combine strong security and compliance management tools with a business-ready experience for faster, lower-cost implementations.

    If you'd like to hear more about the top trends in identity management and learn how to empower yourself, then join us for the Oracle UK User Group on Thu Apr 19 in London, where Oracle and Enline/SENA product experts will come together to share security trends, best practices, and solutions for your business. Register here.

    Read the article

  • Database Change Auditing - Part of or Abstracted from ORM / Application Layer?

    - by BrandonV
    My fellow developers and I are at a crossroads in how to go about continuing our auditing of database changes. Most of our applications log changes via INSERT, UPDATE, and DELETE triggers. A few of our newer applications audit at the ORM layer, specifically using Hibernate Envers. While ORM-layer auditing provides a much cleaner interface and is much more maintainable, it will not capture any manual database changes that are made. ORM-layer auditing also means that our libraries will currently require a dependency on our ORM implementation, unless JPA itself eventually provides something comparable, which in our specific case would remove that dependency. Is there a common paradigm that addresses this?
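
    For readers who haven't seen the ORM-layer option mentioned above, here is a minimal sketch of what Hibernate Envers auditing looks like (entity and field names are hypothetical; on older Hibernate versions the persistence annotations come from javax.persistence rather than jakarta.persistence). As the post notes, this only records changes made through the ORM; direct SQL edits bypass it entirely, which is exactly the trade-off against triggers.

        import jakarta.persistence.Entity;
        import jakarta.persistence.EntityManager;
        import jakarta.persistence.Id;
        import java.util.List;
        import org.hibernate.envers.AuditReader;
        import org.hibernate.envers.AuditReaderFactory;
        import org.hibernate.envers.Audited;

        @Entity
        @Audited  // Envers now records a revision for every insert/update/delete of Customer made via the ORM
        class Customer {
            @Id
            private Long id;
            private String name;

            protected Customer() {}  // no-arg constructor required by JPA
            Customer(Long id, String name) { this.id = id; this.name = name; }
        }

        class CustomerAuditQueries {
            // List the revision numbers at which a given Customer row changed.
            static List<Number> revisionsOf(EntityManager em, Long customerId) {
                AuditReader reader = AuditReaderFactory.get(em);
                return reader.getRevisions(Customer.class, customerId);
            }
        }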

    Read the article

  • How can I give my client "full access" to their PHP application's MySQL database?

    - by Micah Delane Bolen
    I am building a PHP application for a client and I'm seriously considering WordPress or a simple framework that will allow me to quickly build out features like forums, etc. However, the client is adamant about having "full access" to the database and the ability to "mine the data." Unfortunately, I'm almost certain they will be disappointed when they realize they won't be able to easily glean meaningful insight by looking at serialized fields in wp_usermeta, etc. One thought I had was to replicate a variation on the live database where I flatten out all of those ambiguous and/or serialized fields into something that is then parsable by a mere mortal using a tool as simple as phpMyAdmin. Unfortunately, the client is not going to settle for a simple backend dashboard where I create the custom reports for them even though I know that would be the easiest and most sane approach.

    Read the article

  • FORBES.COM: Oracle's message is loud & clear – “we've got the cloud”

    - by Richard Lefebvre
    In a two-part series on Oracle's cloud strategy, Bob Evans reports on the October 4 meeting where Wall Street analysts questioned Mark Hurd and Safra Catz about the company's positioning for the shift to cloud computing. Check out Bob's related Forbes.com piece "The Dumbest Idea of 2013," in response to the preposterous chatter that Larry Ellison and Oracle don't "get" the cloud. His powerful six-point argument unravels our competitors' spin. Read the "Dumbest Idea."

    Read the article
