Search Results

Search found 22357 results on 895 pages for 'virtual member manager'.


  • Partner Blog Series: PwC Perspectives - Looking at R2 for Customer Organizations

    - by Tanu Sood
    Welcome to the first of our partner blog series. November Mondays are all about PricewaterhouseCoopers' perspective on Identity and R2. In this series, we have identity management experts from PricewaterhouseCoopers (PwC) share their perspective on (and experiences with) the recent identity management release, Oracle Identity Management R2. The purpose of the series is to discuss real-world identity use cases that helped shape the innovations in the recent R2 release, and the implementation strategies that customers are employing today, with expertise from PwC.
    Part 1: Looking at R2 for Customer Organizations
    In this inaugural post, we will discuss some of the new features of the R2 release of Oracle Identity Manager that our customer organizations are implementing today, and the business rationale for them. Oracle's R2 Security portfolio represents a solid step forward for a platform that is already market-leading. Prior to R2, Oracle was an industry titan in security with reliable products, expansive compatibility, and a large customer base. Oracle has taken its identity platform to the next level in its latest version, R2. The new features include a customizable UI, a request catalog, flexible security, enhanced connectors, and more.
    Oracle customers will be impressed by the new Oracle Identity Manager (OIM) business-friendly UI. Without question, Oracle has invested significant time in responding to customer feedback about making access requests and related activities easier for non-IT users. The flexibility to add information to screens, hide fields that are not important to a particular customer, and adjust web themes to suit a company's preference makes Oracle Identity Manager stand out among its peers. Customers can also expect to carry UI configurations forward to future versions of OIM with minimal migration effort. Oracle's flexible UI will benefit many organizations looking for a customized feel with out-of-the-box configurations.
    Organizations looking to extend their services to end users will benefit significantly from new usability features like OIM's 'Catalog.' Customers familiar with Oracle Identity Analytics' 'Glossary' feature will be able to relate to the concept. It enables Roles, Entitlements, Accounts, and Resources to be requested through the out-of-the-box UI. This is an industry-changing feature, as it makes the process of requesting access easier than ever. For additional ease of use, Oracle has introduced a shopping-cart-style request interface that further simplifies the experience for end users. Common requests can be set up as profiles to save time. All of this is combined with the approval workflow engine introduced in R1, which provides the flexibility customers need to meet their compliance requirements.
    Enhanced security was also on the list of features Oracle wanted to deliver to its customers. The new end-user UI provides additional granular access controls. Common Help Desk use cases can be implemented with ease by updating the application profiles. Access can be rolled out so that administrators can only manage a certain department or organization. Further, OIM can be more easily configured to select which fields are read-only and which can be updated. Finally, this security model can be used to limit search results for roles and entitlements intended for a particular department. Every customer has a different need for access, and OIM now matches this need with a flexible security model.
    One of the important considerations when selecting an Identity Management platform is compatibility. The number of supported platform connectors, and how well the suite integrates with unsupported platforms, is a key factor when selecting an identity suite. Oracle has a long list of supported connectors, and when a customer has a requirement for a platform not on that list, Oracle has a solution too. Oracle is introducing a simplified architecture called the Identity Connector Framework (ICF), which holds the potential to simplify custom connectors. Finally, Oracle has introduced a simplified process to profile new disconnected applications from the web browser. This is a useful feature that enables administrators to profile applications quickly and empowers application owners to fulfill requests from their web browser. Connectors based on previous versions will still be supported in R2.
    Oracle Identity Manager's new R2 version delivers many new features customers have been asking for. Oracle has matured its platform with R2, making it a truly distinctive offering among its peers. In our next post, expect a deep dive into use cases for a customer considering R2 as their new enterprise identity solution. In the meantime, we look forward to hearing from you about the specific challenges you are facing and your experience in solving them.
    Meet the Writers
    Dharma Padala is a Director in the Advisory Security practice within PwC. He has been implementing medium to large scale Identity Management solutions across multiple industries, including the utility, health care, entertainment, retail, and financial sectors. Dharma has 14 years of experience delivering IT solutions, of which the past 8 have been spent implementing Identity Management solutions.
    Scott MacDonald is a Director in the Advisory Security practice within PwC. He has consulted for several clients across multiple industries, including financial services, health care, automotive, and retail. Scott has 10 years of experience delivering Identity Management solutions.
    John Misczak is a member of the Advisory Security practice within PwC. He has experience implementing multiple Identity and Access Management solutions, specializing in Oracle Identity Manager and Business Process Execution Language (BPEL).
    Jenny (Xiao) Zhang is a member of the Advisory Security practice within PwC. She has consulted across multiple industries, including financial services, entertainment, and retail. Jenny has three years of experience delivering IT solutions, of which the past year and a half has been spent implementing Identity Management solutions.
    Praveen Krishna is a Manager in the Advisory Security practice within PwC. Over the last decade Praveen has helped clients plan, architect, and implement Oracle identity solutions across diverse industries. His experience spans network, infrastructure, application, and data security, where he brings a holistic point of view to problem solving.

    Read the article

  • Right click is disabled on Desktop

    - by RameshKatkam
    I installed elementary OS and e17 on my Ubuntu 12.04 from PPAs. After logging out and back in, right-click on the desktop does not work in GNOME or Unity, but it does work in the file manager. When I log into a Pantheon session, I can't find an option to open my Home folder, and my login display manager theme has also changed. I want to get back to the default Ubuntu login theme. Can I change the login theme with simple-lightdm manager? I don't want to mess up my system. Please help me fix these issues.
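    One hedged pointer, not a verified fix: on a stock 12.04 install the login screen is drawn by unity-greeter, and LightDM picks its greeter from /etc/lightdm/lightdm.conf, so restoring something like the following under [SeatDefaults] (assuming nothing else in that file was customized) usually brings back the default Ubuntu login theme:
      [SeatDefaults]
      greeter-session=unity-greeter
      user-session=ubuntu
    Restarting LightDM (or rebooting) applies the change.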

    Read the article

  • Reboot without sudoer privileges?

    - by Lincoln
    Hi all, I've been trying to get my Ubuntu to restart without having to edit the sudoers file. This used to be possible (in Lucid, I think) using a D-Bus command: dbus-send --system --print-reply --dest=org.freedesktop.ConsoleKit /org/freedesktop/ConsoleKit/Manager org.freedesktop.ConsoleKit.Manager.Restart But this now gives me an error; it looks like things have changed. In KDE (which I don't use) there is something similar (see this answer). Could anyone show me an alternative way to make my machine reboot from a script (without adjusting rights)?
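    On releases that run systemd-logind instead of ConsoleKit, the equivalent call (a hedged suggestion, subject to the local polkit policy rather than a guaranteed no-privilege path) is:
      dbus-send --system --print-reply --dest=org.freedesktop.login1 /org/freedesktop/login1 org.freedesktop.login1.Manager.Reboot boolean:true
    Whether it works without a password prompt depends on the polkit rules for the active session.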

    Read the article

  • Linux Mint 14 has arrived: "Nadia" is available with the MATE and Cinnamon desktop environments

    "Nadia" : Linux Mint 14 disponible en Release Candidate avec Nemo, le fork de Nautilus les bureaux Cinnamon 1.6 et Mate 1.4 [IMG]http://www.franck-depan.fr/images/logo/systemes-exploitation/linux/distribution-mint/mint-logo.png[/IMG] L'équipe de développement de GNU/Linux Mint annonce la Release Candidate de la quatorzième version de sa distribution fondée sur Ubuntu Voici une brève liste des nouveautés :Mate 1.4 Cinnamon 1.6 Mint Desktop Manager Software Manager améliorations système Mate 1.4 Mate 1.4 renforce non seulement la...

    Read the article

  • Linux Mint 14 available as a Release Candidate: "Nadia" ships with Nemo, the Nautilus fork, and the Cinnamon 1.6 and Mate 1.4 desktops

    "Nadia" : Linux Mint 14 disponible en Release Candidate avec Nemo, le fork de Nautilus les bureaux Cinnamon 1.6 et Mate 1.4 [IMG]http://www.franck-depan.fr/images/logo/systemes-exploitation/linux/distribution-mint/mint-logo.png[/IMG] L'équipe de développement de GNU/Linux Mint annonce la Release Candidate de la quatorzième version de sa distribution fondée sur Ubuntu Voici une brève liste des nouveautés :Mate 1.4 Cinnamon 1.6 Mint Desktop Manager Software Manager améliorations système Mate 1.4 Mate 1.4 renforce non seulement la...

    Read the article

  • MegaCli newly created disk doesn't appear under /dev/sdX

    - by Henry-Nicolas Tourneur
    After successfully adding 2 new disks in a new RAID virtual drive (background initialization done), I would have expected it to appear under /dev/sdh, but it's not there (so it is unusable). The system is running CentOS 5.2 64-bit, the HAL and udev daemons are running, there is no record of any sdh appearing in the message log file or in dmesg, and only MegaCli sees that virtual drive. Any idea? Some data:
      [root@server ~]# ./MegaCli -LDInfo -LALL -a0
      Adapter 0 -- Virtual Drive Information:
      Virtual Disk: 0 (target id: 0)
      Name:
      RAID Level: Primary-1, Secondary-0, RAID Level Qualifier-0
      Size:139392MB
      State: Optimal
      Stripe Size: 64kB
      Number Of Drives:2
      Span Depth:1
      Default Cache Policy: WriteBack, ReadAheadNone, Direct, No Write Cache if Bad BBU
      Current Cache Policy: WriteBack, ReadAheadNone, Direct, No Write Cache if Bad BBU
      Access Policy: Read/Write
      Disk Cache Policy: Disk's Default
      Virtual Disk: 1 (target id: 1)
      Name:
      RAID Level: Primary-1, Secondary-0, RAID Level Qualifier-0
      Size:285568MB
      State: Optimal
      Stripe Size: 64kB
      Number Of Drives:2
      Span Depth:1
      Default Cache Policy: WriteBack, ReadAheadNone, Direct, No Write Cache if Bad BBU
      Current Cache Policy: WriteBack, ReadAheadNone, Direct, No Write Cache if Bad BBU
      Access Policy: Read/Write
      Disk Cache Policy: Disk's Default
      [root@server ~]# ls -l /dev/disk/by-id/scsi-360*
      lrwxrwxrwx 1 root root 9 Nov 17 2010 /dev/disk/by-id/scsi-36001ec90f82fe100108ca0a704098d09 -> ../../sda
      lrwxrwxrwx 1 root root 10 Nov 17 2010 /dev/disk/by-id/scsi-36001ec90f82fe100108ca0a704098d09-part1 -> ../../sda1
      lrwxrwxrwx 1 root root 10 Nov 17 2010 /dev/disk/by-id/scsi-36001ec90f82fe100108ca0a704098d09-part2 -> ../../sda2
      lrwxrwxrwx 1 root root 9 Nov 17 2010 /dev/disk/by-id/scsi-36090a028e0fe07e78f94940c0000a0ee -> ../../sdf
      lrwxrwxrwx 1 root root 10 Nov 17 2010 /dev/disk/by-id/scsi-36090a028e0fe07e78f94940c0000a0ee-part1 -> ../../sdf1
      lrwxrwxrwx 1 root root 9 Nov 17 2010 /dev/disk/by-id/scsi-36090a028e0fe972a3f91240a0000005f -> ../../sdb
      lrwxrwxrwx 1 root root 10 Nov 17 2010 /dev/disk/by-id/scsi-36090a028e0fe972a3f91240a0000005f-part1 -> ../../sdb1
      lrwxrwxrwx 1 root root 9 Nov 17 2010 /dev/disk/by-id/scsi-36090a028e0fea7e18f94640c000020ec -> ../../sde
      lrwxrwxrwx 1 root root 10 Nov 17 2010 /dev/disk/by-id/scsi-36090a028e0fea7e18f94640c000020ec-part1 -> ../../sde1
      lrwxrwxrwx 1 root root 9 Nov 17 2010 /dev/disk/by-id/scsi-36090a028e0feb7da8f94340c0000203d -> ../../sdd
      lrwxrwxrwx 1 root root 10 Nov 17 2010 /dev/disk/by-id/scsi-36090a028e0feb7da8f94340c0000203d-part1 -> ../../sdd1
      lrwxrwxrwx 1 root root 9 Nov 17 2010 /dev/disk/by-id/scsi-36090a028e0fed7d78f94040c000080b7 -> ../../sdc
      lrwxrwxrwx 1 root root 10 Nov 17 2010 /dev/disk/by-id/scsi-36090a028e0fed7d78f94040c000080b7-part1 -> ../../sdc1
      lrwxrwxrwx 1 root root 9 Nov 17 2010 /dev/disk/by-id/scsi-36090a05830145e58e0b9c479000010a1 -> ../../sdg
      lrwxrwxrwx 1 root root 10 Nov 17 2010 /dev/disk/by-id/scsi-36090a05830145e58e0b9c479000010a1-part1 -> ../../sdg1
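    A hedged thing to try (the host number below is hypothetical; check /sys/class/scsi_host/ for the one belonging to the MegaRAID controller): forcing a rescan of the controller's SCSI host sometimes makes the kernel notice a newly created virtual drive without a reboot, for example
      echo "- - -" > /sys/class/scsi_host/host0/scan
    followed by checking dmesg for a new sdX device.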

    Read the article

  • Introduction to Human Workflow 11g

    - by agiovannetti
    Human Workflow is a component of SOA Suite, just like BPEL, Mediator, Business Rules, etc. The Human Workflow component allows you to incorporate human intervention in a business process. You can use Human Workflow to create a business process that requires a manager to approve purchase orders greater than $10,000, or a business process that handles article reviews in which a group of reviewers needs to vote on or approve an article before it gets published. Human Workflow can handle the task assignment and routing as well as the generation of notifications to the participants. There are three common patterns or usages of Human Workflow: 1) Approval Scenarios: manage documents and other transactional data through approval chains. For example: expense report approval, vacation approval, hiring approval, etc. 2) Reviews by multiple users or groups: group collaboration and review of documents or proposals. For example, processing a sales quote which is subject to review by multiple people. 3) Case Management: workflows around work management or case management. For example, processing a service request, which could be routed to various people who all need to modify the task. It may also incorporate ad hoc routing, which is unknown at design time.
    SOA 11g Human Workflow includes the following features: assignment and routing of tasks to the correct users or groups; deadlines, escalations, notifications, and other features required for ensuring the timely performance of a task; presentation of tasks to end users through a variety of mechanisms, including a Worklist application; organization, filtering, prioritization, and other features required for end users to productively perform their tasks; and reports, reassignments, load balancing, and other features required by supervisors and business owners to manage the performance of tasks.
    Human Workflow Architecture
    The Human Workflow component is divided into 3 modules: the service interface, the task definition, and the client interface module. The Service Interface handles the interaction with BPEL and other components. The Client Interface handles the presentation of task data through clients like the Worklist application, portals, and notification channels. The task definition module is in charge of managing the lifecycle of a task: who should get the task assigned, what should happen next with the task, when must the task be completed, should the task be escalated, and so on.
    Stages and Participants
    When you create a Human Task you need to specify how the task is assigned and routed. The first step is to define the stages and participants. A stage is just a logical group. A participant can be a user, a group of users, or an application role. The participants indicate the type of assignment and routing that will be performed. Stages can be sequential or in parallel, and you can combine them to create any usage you require.
    Assignment and Routing
    There are different ways a task can be assigned and routed: Single Approver: the task is assigned to a single user, group, or role. For example, a vacation request is assigned to a manager. Once the manager approves or rejects the request, the employee is notified of the decision. If the task is assigned to a group, then once one of the managers acts on it, the task is completed. Parallel: the task is assigned to a set of people that must work in parallel. This is commonly used for voting. For example, a task gets approved once 50% of the participants approve it. You can also set it up to require a unanimous vote.
    Serial: participants must work in sequence. The most common scenario for this is management chain escalation. FYI (For Your Information): the task is assigned to participants who can view it and add comments and attachments, but cannot modify or complete the task.
    Task Actions
    The following is the list of actions that can be performed on a task: Claim: if a task is assigned to a group or multiple users, then the task must be claimed first before anyone can act on it. Escalate: if the participant is not able to complete a task, he/she can escalate it; the task is reassigned to his/her manager (up one level in a hierarchy). Pushback: the task is sent back to the previous assignee. Reassign: if the participant is a manager, he/she can delegate a task to his/her reports. Release: if a task is assigned to a group or multiple users, it can be released if the user who claimed the task cannot complete it; any of the other assignees can then claim and complete the task. Request Information and Submit Information: used when the participant needs to supply more information or to request more information from the task creator or any of the previous assignees. Suspend and Resume: if a task is not relevant, it can be suspended; a suspension is indefinite and does not expire until Resume is used to resume working on the task. Withdraw: if the creator of a task does not want to continue with it, for example because he wants to cancel a vacation request, he can withdraw the task; the business process determines what happens next. Renew: if a task is about to expire, the participant can renew it; the task expiration date is extended one week.
    Notifications
    Human Workflow provides a mechanism for sending notifications to participants to alert them of changes on a task. Notifications can be sent via email, telephone voice message, instant messaging (IM), or short message service (SMS). Notifications can be sent when the task status changes to any of the following: Assigned/renewed/delegated/reassigned/escalated, Completed, Error, Expired, Request Info, Resume, Suspended, Added/Updated comments and/or attachments, Updated Outcome, Withdraw, or Other Actions (e.g. acquiring a task). Here is an example of an email notification:
    Worklist Application
    The Oracle BPM Worklist application is the default user interface included in SOA Suite. It allows users to access and act on tasks that have been assigned to them. For example, from the Worklist application, a loan agent can review loan applications or a manager can approve employee vacation requests. Through the Worklist Application users can: perform authorized actions on tasks, acquire and check out shared tasks, define personal to-do tasks, and define subtasks; filter the task view based on various criteria; work with standard work queues, such as high priority tasks, tasks due soon, and so on (work queues allow users to create a custom view to group a subset of tasks in the worklist, for example high priority tasks, tasks due in 24 hours, expense approval tasks, and more); define custom work queues; gain proxy access to part of another user's tasks; define custom vacation rules and delegation rules; enable group owners to define task dispatching rules for shared tasks; collect a complete workflow history and audit trail; use digital signatures for tasks; and run reports like Unattended tasks, Tasks productivity, etc. Here is a screenshot of what the Worklist Application looks like: on the right-hand side you can see the tasks that have been assigned to the user and the task's details.
    References: Introduction to SOA Suite 11g Human Workflow Webcast | Note 1452937.2 Human Workflow Information Center | Using the Human Workflow Service Component 11.1.1.6 | Human Workflow Samples | Human Workflow APIs Java Docs

    Read the article

  • WEBCAST: Strategies for Managing the Oracle Database Lifecycle

    - by Scott McNeil
    Thursday, November 1, 10:00 a.m. PST / 1:00 p.m. EST. Join us for a live Webcast and see how Oracle Enterprise Manager 12c makes database lifecycle management easier. You'll learn how to: simplify database configurations thanks to extensive automation for discovery and change detection; improve IT service levels with Oracle's next-generation database patching and provisioning automation; and ensure consistency and compliance with comprehensive database change management. Register today. Stay Connected: Twitter | Facebook | YouTube | Linkedin | Newsletter. Download the Oracle Enterprise Manager Cloud Control 12c Mobile app.

    Read the article

  • Do best practices to avoid vendor lock-in exist?

    - by user1598390
    Is there a set of community-approved rules to avoid vendor lock-in? I mean something one can show to a manager or other decision maker that is easy to understand and easily verifiable. Is there some universally accepted set of rules, checklist, or conditions that helps detect and prevent vendor lock-in in an objective, measurable way? Have any of you warned a manager about the danger of vendor lock-in during the initial stages of a project?

    Read the article

  • SCOM, 90 Days In, I

    - by merrillaldrich
    At my office we’re about 90 days into our implementation of System Center Operations Manager for Windows Server and SQL Server monitoring. All in all it’s been a good experience, and I’m really excited to have access to this tool. I’ve logged a fair number of years as a DBA on products like Idera’s SQL Diagnostic Manager and Quest Spotlight on SQL Server Enterprise (and “roll-your-own” solutions) in smaller environments, and liked them, but they always, in my experience, struggled with really large...(read more)

    Read the article

  • OT: Improbable use for an iPad?

    - by merrillaldrich
    Here's an interesting tidbit: I have noticed an even more pronounced trend toward centralized or virtual workstations lately. Both my wife and I can sit at home, as we are now, at the dining room table and work on our laptops (exciting life, I know!), but neither of us is actually working locally on these machines. We are both remoting into machines at our respective workplaces. Hers is a desktop machine physically located at her desk, while mine is a virtual workstation in my company's data center...(read more)

    Read the article

  • Cannot update 12.04 due to "Failed to fetch" warning

    - by harsha
    I can't update my Ubuntu 12.04 system. I tried from the terminal, from Update Manager, and also from the Synaptic package manager. The error is:
      W: Failed to fetch http://in.archive.ubuntu.com/ubuntu/dists/precise-backports/universe/i18n/Translation-en_IN Unable to connect to 172.20.0.100:8080:
      W: Failed to fetch http://in.archive.ubuntu.com/ubuntu/dists/precise-backports/universe/i18n/Translation-en Unable to connect to 172.20.0.100:8080:
      E: Some index files failed to download. They have been ignored, or old ones used instead.
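    The "Unable to connect to 172.20.0.100:8080" part suggests apt is trying to reach a proxy at that address. A hedged place to look (not a confirmed diagnosis): an http_proxy environment variable, or a proxy line in /etc/apt/apt.conf or in a file under /etc/apt/apt.conf.d/, such as
      Acquire::http::Proxy "http://172.20.0.100:8080/";
    If that proxy is no longer reachable, removing or correcting the line (or the environment variable) should let the update go through.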

    Read the article

  • Last chance for a day of free SQL Server training at SQL in the City 2012

    SQL Server developers and database administrators have one last chance for a full day of free training and networking at SQL in the City 2012. NEW! Deployment Manager Early Access Release: deploy SQL Server changes and .NET applications fast, frequently, and without fuss, using Deployment Manager, the new tool from Red Gate. Try the Early Access Release to get a 20% discount on Version 1. Download the Early Access Release.

    Read the article

  • A new dimension in middleware (Java) virtualization

    - by peter.nagy
    I have written about this before, but now the official announcement has been made. JRockit Virtual Edition will be available, and it is expected to set a new standard in the middleware space. In addition, Virtual Assembly Builder has been released, which efficiently supports configuration management in rapidly spreading virtualized environments. Please register here for the product's first public webcast.

    Read the article

  • What causes Multi-Page allocations?

    - by SQLOS Team
    Writing about changes in the Denali Memory Manager in his last post, Rusi mentioned: "In previous SQL versions only the 8k allocations were limited by the 'max server memory' configuration option. Allocations larger than 8k weren't constrained." In SQL Server versions before Denali, single-page allocations and multi-page allocations are handled by different components: the Single Page Allocator (which is responsible for Buffer Pool allocations and is governed by 'max server memory') and the Multi-Page Allocator (MPA), which handles allocations larger than an 8K page. If there are many multi-page allocations, this can affect how much memory needs to be reserved outside 'max server memory', which may in turn involve setting the -g memory_to_reserve startup parameter. We'll follow up with more general articles on the new Memory Manager structure, but in this post I want to clarify what might cause these larger allocations. So what kinds of queries result in MPA activity? I was asked this question the other day after delivering an MCM webcast on Memory Manager changes in Denali. After asking around our Dev team I was connected to one of our test leads, Sangeetha, who had tested the plan cache and kindly provided this example of an MPA-intensive query: a workload that has stored procedures with a large number of parameters (say > 100 or > 500), invoked via large ad hoc batches where each SP call has different parameters, will result in a plan being cached for this "exec proc" batch, and this plan will result in MPA.
      Exec proc_name @p1, ….@p500
      Exec proc_name @p1, ….@p500
      . . .
      Exec proc_name @p1, ….@p500
      Go
    Another workload would be large ad hoc batches of the form:
      Select * from t where col1 in (1, 2, 3, ….500)
      Select * from t where col1 in (1, 2, 3, ….500)
      Select * from t where col1 in (1, 2, 3, ….500)
      …
      Go
    In Denali all page allocations are handled by an "any size page allocator" and are included in 'max server memory'. The buffer pool effectively becomes a client of the any size page allocator, which in turn relies on the memory manager. - Guy. Originally posted at http://blogs.msdn.com/b/sqlosteam/
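    For readers who want to see where multi-page allocations are going on a pre-Denali instance, one hedged way (assuming VIEW SERVER STATE permission) is to query the memory clerk DMV, which in those versions still splits single-page and multi-page usage into separate columns; in Denali the two columns are merged into pages_kb:
      SELECT type, name, single_pages_kb, multi_pages_kb
      FROM sys.dm_os_memory_clerks
      ORDER BY multi_pages_kb DESC;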

    Read the article

  • SQL Server Storage Internals 101

    This article is an extract from the book Tribal SQL. In this article, Mark S. Rasmussen offers a concise introduction to the physical storage internals behind SQL Server databases. He doesn't dive into every detail, but provides a simple, clear picture of how SQL Server stores data. Deployment Manager 2 is now free! The new version includes tons of new features and we've launched a completely free Starter Edition! Get Deployment Manager here

    Read the article

  • How to keep a data structure synchronized over a network?

    - by David Gouveia
    Context
    In the game I'm working on (a sort of point-and-click graphic adventure), pretty much everything that happens in the game world is controlled by an action manager. So, for instance, if the result of examining an object should make the character say hello, walk a bit, and then sit down, I simply spawn the following code:
      var actionGroup = actionManager.CreateGroup();
      actionGroup.Add(new TalkAction("Guybrush", "Hello there!"));
      actionGroup.Add(new WalkAction("Guybrush", new Vector2(300, 300)));
      actionGroup.Add(new SetAnimationAction("Guybrush", "Sit"));
    This creates a new action group and adds it to the manager. All of the groups are executed in parallel, but actions within each group are chained together so that the second one only starts after the first one finishes. When the last action in a group finishes, the group is destroyed.
    Problem
    Now I need to replicate this information across a network, so that in a multiplayer session all players see the same thing. Serializing the individual actions is not the problem, but I'm an absolute beginner when it comes to networking and I have a few questions. I think for the sake of simplicity in this discussion we can abstract the action manager component to being simply:
      var actionManager = new List<List<string>>();
    How should I proceed to keep the contents of the above data structure synchronized between all players? Besides the core question, I also have a few other concerns related to it (i.e. all possible implications of the same problem above): If I use a server/client architecture (with one of the players acting as both a server and a client), and one of the clients has spawned a group of actions, should he add them directly to the manager, or only send a request to the server, which in turn will order every client to add that group? What about packet losses and the like? The game is deterministic, but I'm thinking that any discrepancy in the sequence of actions executed on a client could lead to inconsistent states of the world. How do I safeguard against that sort of problem? What if I add too many actions at once, won't that cause problems for the connection? Any way to alleviate that?
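    One way to approach this, sketched below with hypothetical names (none of this code is from the original post), is to make the server authoritative: a client never adds a spawned group directly, it sends the serialized group to the server, the server stamps it with a sequence number and broadcasts it over a reliable, ordered channel, and every client (including the one that requested it) applies groups strictly in sequence order. That ordering is what protects the deterministic simulation from diverging:
      using System;
      using System.Collections.Generic;

      [Serializable]
      class ActionGroupMessage
      {
          public int Sequence;                     // assigned by the server
          public List<string> SerializedActions;   // one serialized action per entry
      }

      class ActionSync
      {
          int nextSequence;   // server-side counter
          int nextToApply;    // client-side counter
          readonly List<ActionGroupMessage> pending = new List<ActionGroupMessage>();

          // Server: stamp the requested group; the transport layer then broadcasts it to every client.
          public ActionGroupMessage StampGroup(List<string> serializedActions)
          {
              return new ActionGroupMessage { Sequence = nextSequence++, SerializedActions = serializedActions };
          }

          // Client: buffer incoming groups and apply them strictly in sequence order.
          public void Receive(ActionGroupMessage msg, Action<List<string>> applyGroup)
          {
              pending.Add(msg);
              pending.Sort((a, b) => a.Sequence.CompareTo(b.Sequence));
              while (pending.Count > 0 && pending[0].Sequence == nextToApply)
              {
                  applyGroup(pending[0].SerializedActions);   // rebuild the group in the local action manager
                  pending.RemoveAt(0);
                  nextToApply++;
              }
          }
      }
    Packet loss is then the transport's job (TCP, or a reliable-ordered channel in a UDP library), and batching several groups into one message helps keep large bursts from flooding the connection.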

    Read the article

  • XSIGO Product Training Now Available

    - by Cinzia Mascanzoni
    Xsigo Sales and Pre-Sales training is available via iLearning for partners registered in OPN Server and Storage Systems Knowledge Zones. The recommended online training sessions provide sales training solutions that equip partners with the product knowledge, market knowledge and selling strategies to help achieve their revenue targets. Partners are invited to learn more here: Sales Training: Product Essentials For Sales - Oracle Virtual Networking Pre-Sales Training and Product Essentials For Sales Consultants - Oracle Virtual Networking.

    Read the article

  • The Latest Developments with Oracle's Report Tool, XML Publisher

    Rich Colton, Application Integration Manager for Washington Group International (WGI), and Tim Dexter, XML Publisher Group Product Manager, speak with Cliff about the Enterprise release of XML Publisher, the new extraction engine that allows developers to create reports that access multiple databases and data sources, and WGI's XML strategy and the benefits for their business applications.

    Read the article

  • The Benefits of Oracle's Reporting Tool, XML Publisher

    During this session, Cliff speaks with Mike Tobin, IT Manager, Oracle Apps Development and Architecture for Qualcomm, and Tim Dexter, XML Publisher Group Product Manager for Oracle, about what XML Publisher is and the business need or reporting headache this solution solves for organizations.

    Read the article

  • Ubuntu and Postfix Configuration Issues

    - by Obi Hill
    I recently installed postfix on Ubuntu Natty. I'm having a problem with the configuration. Firstly here is my postfix configuration file: # Debian specific: Specifying a file name will cause the first # line of that file to be used as the name. The Debian default # is /etc/mailname. myorigin = /etc/mailname smtpd_banner = $myhostname ESMTP $mail_name (Ubuntu) biff = no # appending .domain is the MUA's job. append_dot_mydomain = no # Uncomment the next line to generate "delayed mail" warnings delay_warning_time = 4h readme_directory = no # TLS parameters smtpd_tls_cert_file=/etc/ssl/certs/ssl-cert-snakeoil.pem smtpd_tls_key_file=/etc/ssl/private/ssl-cert-snakeoil.key smtpd_use_tls=yes smtpd_tls_session_cache_database = btree:${data_directory}/smtpd_scache smtp_tls_session_cache_database = btree:${data_directory}/smtp_scache # See /usr/share/doc/postfix/TLS_README.gz in the postfix-doc package for # information on enabling SSL in the smtp client. mydomain = $myorigin myhostname = mail.nairanode.com alias_maps = hash:/etc/postfix/aliases alias_database = hash:/etc/postfix/aliases # this specifies where the virtual mailbox folders will be located virtual_mailbox_base = /var/spool/mail/virtual # this specifies where the virtual mailbox folders will be located virtual_mailbox_base = /var/spool/mail/virtual # this is for the mailbox location for each user virtual_mailbox_maps = mysql:/etc/postfix/mysql_mailbox.cf # and this is for aliases virtual_alias_maps = mysql:/etc/postfix/mysql_alias.cf # and this is for domain lookups virtual_mailbox_domains = mysql:/etc/postfix/mysql_domains.cf # this is how to connect to the domains (all virtual, but the option is there) # not used yet # transport_maps = mysql:/etc/postfix/mysql_transport.cf virtual_uid_maps = static:5000 virtual_gid_maps = static:5000 mydestination = $myorigin, $myhostname, localhost.localdomain, , localhost relayhost = mynetworks = 127.0.0.0/8 [::ffff:127.0.0.0]/104 [::1]/128 mailbox_size_limit = 0 recipient_delimiter = + inet_interfaces = all #mynetworks_style = host # ADDITIONAL unknown_local_recipient_reject_code = 550 maximal_queue_lifetime = 7d minimal_backoff_time = 1000s maximal_backoff_time = 8000s smtp_helo_timeout = 60s smtpd_recipient_limit = 16 smtpd_soft_error_limit = 3 smtpd_hard_error_limit = 12 # Requirements for the HELO statement smtpd_helo_restrictions = permit_mynetworks, warn_if_reject reject_non_fqdn_hostname, reject_invalid_hostname, permit # Requirements for the sender details smtpd_sender_restrictions = permit_mynetworks, warn_if_reject reject_non_fqdn_sender, reject_unknown_sender_domain, reject_unauth_$ # Requirements for the connecting server smtpd_client_restrictions = reject_rbl_client sbl.spamhaus.org, reject_rbl_client blackholes.easynet.nl, reject_rbl_client dnsbl.n$ # Requirement for the recipient address smtpd_recipient_restrictions = reject_unauth_pipelining, permit_mynetworks, reject_non_fqdn_recipient, reject_unknown_recipient_do$ # require proper helo at connections smtpd_helo_required = yes # waste spammers time before rejecting them smtpd_delay_reject = yes disable_vrfy_command = yes Here is also my /etc/postfix/aliases: # See man 5 aliases for format postmaster: root Here is also my /etc/mailname: nairanode.com I've also updated my hostname to nairanode.com However, when I run postalias /etc/postfix/aliases I get the following : postalias: warning: valid_hostname: invalid character 47(decimal): /etc/mailname postalias: fatal: file /etc/postfix/main.cf: parameter mydomain: bad parameter 
value: /etc/mailname Is there something I'm doing wrong?! I noticed that when I replace myorigin = /etc/mailname with myorigin = nairanode.com in my postfix config, I don't see any errors anymore after calling postalias. Is this a bug or something?!
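    A hedged reading of the error, based only on what is shown above: mydomain = $myorigin expands to the literal string "/etc/mailname" rather than to the contents of that file, and a path is not a valid hostname, hence the fatal "bad parameter value". Setting the domain explicitly, as you already discovered, avoids it; a minimal sketch of the relevant main.cf lines, assuming the domain really is nairanode.com:
      myorigin = nairanode.com
      mydomain = nairanode.com
      myhostname = mail.nairanode.com
    Keeping myorigin = /etc/mailname works for myorigin itself, but any parameter derived from it via $myorigin will see the path, not the domain.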

    Read the article

  • Violation of the DRY Principle

    - by Onorio Catenacci
    I am sure there's a name for this anti-pattern somewhere; however, I am not familiar enough with the anti-pattern literature to know it. Consider the following scenario: or0 is a member function in a class. For better or worse, it's heavily dependent on class member variables. Programmer A comes along and needs functionality like or0, but rather than calling or0, Programmer A copies and renames the entire class. I'm guessing that she doesn't call or0 because, as I say, it's heavily dependent on member variables for its functionality. Or maybe she's a junior programmer and doesn't know how to call it from other code. So now we've got or0 and c0 (c for copy). I can't completely fault Programmer A for this approach--we all get under tight deadlines and we hack code to get work done. Several programmers maintain or0, so it's now version orN. c0 is now version cN. Unfortunately most of the programmers that maintained the class containing or0 seemed to be completely unaware of c0--which is one of the strongest arguments I can think of for the wisdom of the DRY principle. And there may also have been independent maintenance of the code in c. Either way, it appears that or0 and c0 were maintained independently of each other. And, joy and happiness, an error is occurring in cN that does not occur in orN. So I have a few questions: 1.) Is there a name for this anti-pattern? I've seen this happen so often I'd find it hard to believe it is not a named anti-pattern. 2.) I can see a few alternatives: a.) Fix orN to take parameters that specify the values of all the member variables it needs, then modify cN to call orN with all of the needed parameters passed in. b.) Try to manually port fixes from orN to cN. (Mind you, I don't want to do this, but it is a realistic possibility.) c.) Recopy orN to cN--again, yuck, but I list it for the sake of completeness. d.) Try to figure out where cN is broken and then repair it independently of orN. Alternative a seems like the best fix in the long term, but I doubt the customer will let me implement it. Never time or money to fix things right, but always time and money to repair the same problem 40 or 50 times, right? Can anyone suggest other approaches I may not have considered? If you were in my place, which approach would you take? If there are other questions and answers here along these lines, please post links to them. I don't mind removing this question if it's a dupe, but my searching hasn't turned up anything that addresses this question yet. EDIT: Thanks everyone for all the thoughtful responses. I asked about a name for the anti-pattern so I could research it further on my own. I'm surprised this particular bad coding practice doesn't seem to have a "canonical" name.
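    A minimal sketch of what alternative a could look like, using hypothetical names and made-up logic (nothing below comes from the code described in the question): the shared computation moves into one function whose inputs are the formerly implicit member variables, and both the original class and the copy call it, so a fix lands in exactly one place.
      // Shared logic extracted once; all former member-variable state is passed in explicitly.
      static class SharedCalculation
      {
          public static decimal Compute(decimal rate, int quantity, bool applyDiscount)
          {
              var total = rate * quantity;
              return applyDiscount ? total * 0.9m : total;   // illustrative rule only
          }
      }

      class OriginalClass
      {
          decimal rate; int quantity; bool applyDiscount;
          public decimal OrN() { return SharedCalculation.Compute(rate, quantity, applyDiscount); }
      }

      class CopiedClass
      {
          decimal rate; int quantity; bool applyDiscount;
          public decimal CN() { return SharedCalculation.Compute(rate, quantity, applyDiscount); }
      }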

    Read the article

  • What best practices exist to avoid vendor lock-in?

    - by user1598390
    Is there a set of community-approved rules to avoid vendor lock-in? I mean something one can show to a manager or other decision maker that is easy to understand and easily verifiable. Is there some universally accepted set of rules, checklist, or conditions that helps detect and prevent vendor lock-in in an objective, measurable way? Have any of you warned a manager about the danger of vendor lock-in during the initial stages of a project?

    Read the article
