Search Results

Search found 67448 results on 2698 pages for 'data management'.


  • Coherence Management with EM Cloud Control 12c –demo for partners

    - by JuergenKress
    For access to the Oracle demo systems, please visit OPN and talk to your Partner Expert. We are pleased to announce the availability of the Coherence Management demo, which showcases some of the key capabilities of Management Pack for Oracle Coherence and JVM Diagnostics (licensed under WLS Management Pack EE and Management Pack for Non-Oracle Middleware). This demo focuses on some of the performance management and configuration management solutions for Oracle Coherence, and highlights the key enhancements made in the Enterprise Manager 12c release, including the new customizable performance summary, cache data management, and configuration management.

    Demo Highlights. The demo showcases the following capabilities:
    - Centralized monitoring for enterprise-wide Coherence deployments
    - Drill-down diagnostics
    - Customizable performance views
    - Monitoring performance trends
    - Monitoring caches, nodes, services, etc.
    - Performance and log alerts
    - Real-time Java diagnostics and memory-leak analysis
    - Cache data management
    - Lifecycle management
    - Provisioning Coherence on a new machine
    - Starting nodes on a machine where Coherence is already running
    - Killing a node process

    Demo Instructions. Go to the DSS website for Oracle Partners. On the Standard Demo Launchpad page, under the "Middleware Management" section, click the link "EM Cloud Control 12c Coherence Management" (tagged as "NEW"). The demo launchpad page contains a link to the detailed demo script with instructions on how to show the demo. Read more on Community Events and post your comment here.

    WebLogic Partner Community: for regular information, become a member of the WebLogic Partner Community; please visit http://www.oracle.com/partners/goto/wls-emea (OPN account required). If you need support with your account, please contact the Oracle Partner Business Center.

    Read the article

  • Oracle Financial Management Analytics 11.1.2.2.300 is available

    - by THE
    (guest post by Greg)
    Oracle Financial Management Analytics 11.1.2.2.300 is now available for download from My Oracle Support as Patch 15921734.

    New features in this release:
    - Support for the new Oracle BI mobile HD iPad client
    - New Account Reconciliation Management and Financial Data Quality Management analytics
    - Improved Hyperion Financial Management analytics and usability enhancements
    - Enhanced Configuration Utility to support multiple products; for HFM, FCM or ARM, and FDM, both Oracle and Microsoft SQL Server databases are supported
    - Simplified test-to-production migration of OFMA

    Supported web browsers for Oracle Financial Management Analytics:
    - Internet Explorer 9 (both 32- and 64-bit)
    - Firefox 6.x
    - Chrome 12.x
    See the OBIEE 11.1.1.6 certification matrix: http://www.oracle.com/technetwork/middleware/ias/downloads/fusion-certification-100350.html

    Oracle Financial Management Analytics compatibility. OFMA supports the following product versions:
    - Oracle Hyperion Financial Data Quality Management Release 11.1.2.2.300
    - Oracle Financial Close Manager Release 11.1.2.2.300
    - Oracle Hyperion Financial Management Release 11.1.2.2.300

    Read the article

  • New Feature in ODI 11.1.1.6: ODI for Big Data

    - by Julien Testut
    By Ananth Tirupattur

    Starting with Oracle Data Integrator 11.1.1.6.0, ODI offers a solution for processing Big Data. This post provides an overview of this feature. With all the buzz around Big Data, and before getting into the details of ODI for Big Data, I will provide a brief introduction to Big Data and the Oracle solution for Big Data.

    So, what is Big Data? Big Data includes:
    - structured data (data from relational data stores and XML data stores)
    - semi-structured data (data from weblogs)
    - unstructured data (data from text blobs and images)

    Traditionally, business decisions are based on information gathered from transactional data. For example, transactional data from CRM applications is fed to a decision system for analysis and decision making. Products such as ODI play a key role in enabling decision systems. However, with the emergence of massive amounts of semi-structured and unstructured data, it is important for decision systems to include them in the analysis to achieve better decision-making capability. While there is an abundance of opportunities for businesses to gain competitive advantages, processing Big Data has its challenges:
    - the volume of data
    - the velocity of data (the high rate at which data is generated)
    - the variety of data

    In order to address these challenges and convert them into opportunities, we need an appropriate framework, platform, and the right set of tools. Hadoop is an open-source framework: a highly scalable, fault-tolerant system for storing and processing large amounts of data. Hadoop provides two key services: distributed, reliable storage, called the Hadoop Distributed File System (HDFS), and a framework for parallel data processing called Map-Reduce (a small word-count sketch at the end of this post illustrates the Map-Reduce model). Innovations in Hadoop and its related technologies continue to evolve rapidly, so it is highly recommended to follow information on the web to keep up with the latest developments.

    Oracle's vision is to provide a comprehensive solution to address the challenges posed by Big Data. Oracle provides the necessary hardware, software, and tools for processing Big Data. The Oracle solution includes:
    - Big Data Appliance
    - Oracle NoSQL Database
    - Cloudera distribution for Hadoop
    - Oracle R Enterprise (R is a statistical package which is very popular among data scientists)
    - ODI solution for Big Data
    - Oracle Loader for Hadoop, for loading data from Hadoop to Oracle
    Further details can be found here: http://www.oracle.com/us/products/database/big-data-appliance/overview/index.html

    ODI Solution for Big Data: ODI's goal is to minimize the need to understand the complexity of the Hadoop framework and to simplify the adoption of Big Data processing in an enterprise. ODI provides the capabilities for an integrated architecture for processing Big Data, including the capability to load data into Hadoop, process data in Hadoop, and load data from Hadoop into Oracle. ODI is expanding its support for Big Data by providing the following out-of-the-box Knowledge Modules (KMs):
    - IKM File to Hive (LOAD DATA): loads unstructured data from a file (local file system or HDFS) into Hive
    - IKM Hive Control Append: transforms and validates structured data on Hive
    - IKM Hive Transform: transforms unstructured data on Hive
    - IKM File/Hive to Oracle (OLH): loads processed data in Hive to Oracle
    - RKM Hive: reverse-engineers Hive tables to generate models

    Using the loading KM, you can map files (local and HDFS files) to the corresponding Hive tables. For example, you can map weblog files categorized by date into a correspondingly partitioned Hive table schema.

    Using the Hive Control Append KM, you can validate and transform data in Hive. For example, two source Hive tables can be joined and mapped to a target Hive table.

    The Hive Transform KM facilitates processing of semi-structured data in Hive. For example, weblog data can be processed using a Perl script and mapped to a target Hive table.

    Using the Oracle Loader for Hadoop (OLH) KM, you can load data from a Hive table or HDFS to a corresponding table in Oracle. OLH is available as a standalone product. ODI greatly enhances OLH's capability by generating the configuration and mapping files for OLH based on the configuration provided in the interface and KM options. ODI seamlessly invokes OLH when executing the scenario. For example, an HDFS file can be mapped to a table in Oracle.

    Development and Deployment: Using ODI Studio on your development machine, create and develop the ODI solution for processing Big Data by connecting to a MySQL DB or Oracle database on a BDA machine or Hadoop cluster. Schedule the ODI scenarios to be executed on the ODI agent deployed on the BDA machine or Hadoop cluster.

    The ODI solution for Big Data provides several exciting new capabilities to facilitate the adoption of Big Data in an enterprise. You can find more information about the Oracle Big Data connectors on OTN.
You can find an overview of all the new features introduced in ODI 11.1.1.6 in the following document: ODI 11.1.1.6 New Features Overview
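    To make the Map-Reduce model mentioned above concrete, here is a minimal word-count sketch in Python, written in the Hadoop Streaming style. The file name and the streaming-jar invocation are illustrative assumptions, not part of the original post:

    ```python
    #!/usr/bin/env python
    # wordcount.py -- a minimal Hadoop Streaming word count (illustrative only).
    # Hypothetical invocation, with the same file acting as mapper and reducer:
    #   hadoop jar hadoop-streaming.jar -input /logs -output /counts \
    #     -mapper "python wordcount.py map" -reducer "python wordcount.py reduce"
    import sys

    def map_phase():
        # Emit one "word<TAB>1" pair for every word read from stdin.
        for line in sys.stdin:
            for word in line.split():
                print(f"{word}\t1")

    def reduce_phase():
        # Hadoop sorts mapper output by key, so counts for a word arrive contiguously.
        current, count = None, 0
        for line in sys.stdin:
            word, _, n = line.rstrip("\n").partition("\t")
            if word != current:
                if current is not None:
                    print(f"{current}\t{count}")
                current, count = word, 0
            count += int(n)
        if current is not None:
            print(f"{current}\t{count}")

    if __name__ == "__main__":
        map_phase() if sys.argv[1] == "map" else reduce_phase()
    ```

    The split into a stateless mapper and a key-grouped reducer is exactly what lets Hadoop run the two phases in parallel across a cluster.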

    Read the article

  • Document Management System

    - by rjayavrp
    Is there any document management system for Ubuntu? I tried Alfresco, RavenDB, Owl, and Document Manager. Alfresco and RavenDB are heavy, more than my requirements call for; Owl has source issues; Document Manager I am still trying to install. My requirements:
    - Should keep data on the same machine, as this is for internal use
    - Should allow uploading Zip files (extracting them would be a great plus)
    - Should allow sending email to preconfigured email addresses
    - Should allow uploading around 100 MB of data in one go
    - Should maintain the history of documents, including deleted documents
    - Should allow role-based document access
    - Should be free :)
    - Should not do any spoofing on the data; the documents are confidential
    Please share your knowledge. Thanks.

    Read the article

  • Data recovery on a data HDD (no OS)

    - by aCuria
    I am helping a family member with a dead hard disk. It is a Seagate 200 GB 3.5" HDD in one of those old-school external enclosures. The problem was that Windows failed to detect the hard disk when plugged in through USB. I removed the hard disk from its enclosure and plugged it into my desktop PC. The BIOS does detect it upon POST, but unfortunately Windows 7 refuses to boot: it gets stuck on the loading screen with the glowing Windows logo, and Safe Mode doesn't help either. What options do I have before going for professional data recovery?
    Edit: someone changed the title to something completely different from what I was asking, so I changed it back. To summarize:
    1. There are two drives: disk A (dead) and disk B (my OS disk).
    2. When B alone is connected to my system, everything works fine.
    3. When A and B are both connected, the machine POSTs fine, but Windows won't load.
    4. A has no OS; it is pure data. It came from an external HDD enclosure which doesn't belong to me, and I'm trying to do data recovery.

    Read the article

  • Will final version of 12.04 use the power management changes found in kernel 3.3

    - by Luis Alvarado
    I have seen that in some versions of Ubuntu, instead of making the huge change of updating to the latest kernel, they take some of the good stuff out of it, for the sake of stability, and put it in a previous version. In this case, kernel 3.3 has seen some very good power management enhancements that are not all found in kernel 3.2. My question then is: will these updates in 3.3 somehow be pulled into the 3.2 kernel used for Ubuntu 12.04?

    Read the article

  • Windows Azure: Backup Services Release, Hyper-V Recovery Manager, VM Enhancements, Enhanced Enterprise Management Support

    - by ScottGu
    This morning we released a huge set of updates to Windows Azure. These new capabilities include:
    - Backup Services: general availability of Windows Azure Backup Services
    - Hyper-V Recovery Manager: public preview of Windows Azure Hyper-V Recovery Manager
    - Virtual Machines: delete attached disks, availability set warnings, SQL AlwaysOn configuration
    - Active Directory: securely manage hundreds of SaaS applications
    - Enterprise Management: use Active Directory to better manage Windows Azure
    - Windows Azure SDK 2.2: a massive update of our SDK + Visual Studio tooling support
    All of these improvements are now available to use immediately. Below are more details about them.

    Backup Service: General Availability Release of Windows Azure Backup
    Today we are releasing Windows Azure Backup Service as a general availability service. This release is now live in production, backed by an enterprise SLA, supported by Microsoft Support, and ready to use for production scenarios.
    Windows Azure Backup is a cloud-based backup solution for Windows Server which allows files and folders to be backed up and recovered from the cloud, and provides off-site protection against data loss. The service gives IT administrators and developers the option to back up and protect critical data in an easily recoverable way, from any location, with no upfront hardware cost. Windows Azure Backup is built on the Windows Azure platform and uses Windows Azure blob storage for storing customer data. Windows Server uses the downloadable Windows Azure Backup Agent to transfer file and folder data securely and efficiently to the Windows Azure Backup Service. Along with providing cloud backup for Windows Server, the Windows Azure Backup Service can also back up data from System Center Data Protection Manager and Windows Server Essentials to the cloud. All data is encrypted onsite before it is sent to the cloud, and customers retain and manage the encryption key (meaning the data is stored entirely secured and can't be decrypted by anyone but yourself).
    Getting Started
    To get started with the Windows Azure Backup Service, create a new Backup Vault within the Windows Azure Management Portal: click New->Data Services->Recovery Services->Backup Vault. Once the backup vault is created you'll be presented with a simple tutorial that will help guide you on how to register your Windows Servers with it. Once the servers you want to back up are registered, you can use the appropriate local management interface (such as the Microsoft Management Console snap-in, System Center Data Protection Manager Console, or Windows Server Essentials Dashboard) to configure the scheduled backups and to optionally initiate recoveries. You can follow these tutorials to learn more about how to do this:
    - Tutorial: Schedule Backups Using the Windows Azure Backup Agent. This tutorial helps you set up a backup schedule for your registered Windows Servers; it also explains how to use Windows PowerShell cmdlets to set up a custom backup schedule.
    - Tutorial: Recover Files and Folders Using the Windows Azure Backup Agent. This tutorial helps you recover data from a backup; it also explains how to use Windows PowerShell cmdlets to do the same tasks.
    Below are some of the key benefits the Windows Azure Backup Service provides:
    - Simple configuration and management. Windows Azure Backup Service integrates with the familiar Windows Server Backup utility in Windows Server, the Data Protection Manager component in System Center, and Windows Server Essentials, in order to provide a seamless backup and recovery experience to a local disk or to the cloud.
    - Block-level incremental backups. The Windows Azure Backup Agent performs incremental backups by tracking file- and block-level changes and transferring only the changed blocks, reducing storage and bandwidth utilization. Different point-in-time versions of the backups use storage efficiently by storing only the changed blocks between these versions. (A small sketch of this idea follows the Hyper-V Recovery Manager section below.)
    - Data compression, encryption, and throttling. The Windows Azure Backup Agent ensures that data is compressed and encrypted on the server before being sent to the Windows Azure Backup Service over the network. As a result, the Windows Azure Backup Service only stores encrypted data in cloud storage. The encryption key is not available to the Windows Azure Backup Service, so the data is never decrypted in the service. Users can also set up throttling and configure how the Windows Azure Backup Service utilizes the network bandwidth when backing up or restoring information.
    - Data integrity verified in the cloud. In addition to secure backups, the backed-up data is automatically checked for integrity once the backup is done. As a result, any corruption that may arise from data transfer can be easily identified and is fixed automatically.
    - Configurable retention policies for storing data in the cloud. The Windows Azure Backup Service accepts and implements retention policies to recycle backups that exceed the desired retention range, thereby meeting business policies and managing backup costs.

    Hyper-V Recovery Manager: Now Available in Public Preview
    I'm excited to also announce the public preview of a new Windows Azure service – the Windows Azure Hyper-V Recovery Manager (HRM). Windows Azure Hyper-V Recovery Manager helps protect your business-critical services by coordinating the replication and recovery of System Center Virtual Machine Manager 2012 SP1 and System Center Virtual Machine Manager 2012 R2 private clouds at a secondary location. With automated protection, asynchronous ongoing replication, and orderly recovery, the Hyper-V Recovery Manager service can help you implement disaster recovery and restore important services accurately, consistently, and with minimal downtime. Application data in Hyper-V Recovery Manager scenarios always travels on your on-premises replication channel. Only the metadata (such as names of logical clouds, virtual machines, networks, etc.) needed for orchestration is sent to Azure, and all traffic sent to/from Azure is encrypted. You can begin using Windows Azure Hyper-V Recovery Manager today by clicking New->Data Services->Recovery Services->Hyper-V Recovery Manager within the Windows Azure Management Portal. You can read more about Windows Azure Hyper-V Recovery Manager in Brad Anderson's 9-part series, Transform the Datacenter. To learn more about setting up Hyper-V Recovery Manager, follow our detailed step-by-step guide.
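    Here is the promised sketch of the block-level incremental backup idea. This is a conceptual illustration in Python (with hypothetical file names and an assumed 4 KB block size), not the actual Windows Azure Backup Agent logic:

    ```python
    import hashlib

    BLOCK_SIZE = 4096  # assumed block size, purely for illustration

    def block_hashes(path):
        # Hash every fixed-size block of a file.
        hashes = []
        with open(path, "rb") as f:
            while block := f.read(BLOCK_SIZE):
                hashes.append(hashlib.sha256(block).hexdigest())
        return hashes

    def changed_blocks(old_path, new_path):
        # Return the indexes of blocks that differ between two versions of a
        # file; an incremental backup would transfer and store only these.
        old, new = block_hashes(old_path), block_hashes(new_path)
        return [i for i, h in enumerate(new) if i >= len(old) or old[i] != h]

    # Hypothetical usage: compare the last snapshot against the current file.
    # print(changed_blocks("snapshots/data.vhd", "live/data.vhd"))
    ```

    Storing per-version sets of changed blocks like this is what lets multiple point-in-time versions share the unchanged majority of the data.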
    Virtual Machines: Delete Attached Disks, Availability Set Warnings, SQL AlwaysOn
    Today's Windows Azure release includes a number of nice updates to Windows Azure Virtual Machines. These improvements include:

    Ability to Delete both VM Instances + Attached Disks in One Operation
    Prior to today's release, when you deleted VMs within Windows Azure we would delete the VM instance – but not the drives attached to the VM; you had to delete those manually from the storage account. With today's update we've added a convenience option that allows you to either retain or delete the attached disks when you delete the VM. We've also added the ability to delete a cloud service, its deployments, and its role instances with a single action. This can be a cloud service that has production and staging deployments with web and worker roles, or a cloud service that contains virtual machines. To do this, simply select the cloud service within the Windows Azure Management Portal and click the "Delete" button.

    Warnings on Availability Sets with Only One Virtual Machine in Them
    One of the nice features that Windows Azure Virtual Machines supports is the concept of "availability sets". An availability set allows you to define a tier/role (e.g. web front ends, database servers, etc.) that you can map virtual machines into – and when you do this, Windows Azure separates them across fault domains and ensures that at least one of them is always available during servicing operations. This enables you to deploy applications in a highly available way. One issue we've seen some customers run into is where they define an availability set but then forget to map more than one VM into it (which defeats the purpose of having an availability set). With today's release we now display a warning in the Windows Azure Management Portal if you have only one virtual machine deployed in an availability set, to help highlight this. You can learn more about configuring the availability of your virtual machines here.

    Configuring SQL Server AlwaysOn
    SQL Server AlwaysOn is a great feature that you can use with Windows Azure to enable high availability and DR scenarios with SQL Server. Today's Windows Azure release makes it even easier to configure SQL Server AlwaysOn by enabling "Direct Server Return" endpoints to be configured and managed within the Windows Azure Management Portal. Previously, setting this up required using PowerShell to complete the endpoint configuration; starting today you can enable it simply by checking the "Direct Server Return" checkbox. You can learn more about how to use Direct Server Return for SQL Server AlwaysOn availability groups here.

    Active Directory: Application Access Enhancements
    This summer we released our initial preview of the Application Access Enhancements for Windows Azure Active Directory. This service enables you to securely implement single sign-on (SSO) support against SaaS applications (including Office 365, Salesforce, Workday, Box, Google Apps, GitHub, etc.) as well as LOB applications (including ones built with the new Windows Azure AD support we shipped last week with ASP.NET and VS 2013). Since the initial preview we've enhanced our SAML federation capabilities, integrated our new password-vaulting system, and shipped multi-factor authentication support. We've also turned on our outbound identity provisioning system and have it working with hundreds of additional SaaS applications. Earlier this month we published an update on dates and pricing for when the service will be released in general availability form.
    In that blog post we announced our intention to release the service in general availability form by the end of the year. We also announced that the features below will be available in a free tier:
    - SSO to every SaaS app we integrate with – users can single sign on to any app we are integrated with, at no charge. This includes all the top SaaS apps and every app in our application gallery, whether they use federation or password vaulting.
    - Application access assignment and removal – IT admins can assign access privileges for web applications to the users in their Active Directory, ensuring that every employee has access to the SaaS apps they need. And when a user leaves the company or changes jobs, the admin can just as easily remove their access privileges, assuring data security and minimizing IP loss.
    - User provisioning (and de-provisioning) – IT admins will be able to automatically provision users in third-party SaaS applications like Box, Salesforce.com, GoToMeeting, DropBox, and others. We are working with key partners in the ecosystem to establish these connections, meaning you no longer have to continually update user records in multiple systems.
    - Security and auditing reports – security is a key priority for us. With the free version of these enhancements you'll get access to our standard set of access reports, giving you visibility into which users are using which applications, when they were using them, and where they are using them from. In addition, we'll alert you to unusual usage patterns, for instance when a user logs in from multiple locations at the same time.
    - Our Application Access Panel – users are logging in from every type of device, including Windows, iOS, and Android. Not all of these devices handle authentication in the same manner, but the user doesn't care; they need to access their apps from the devices they love. Our Application Access Panel will support the ability for users to access and launch their apps from any device, anywhere.
    You can learn more about our plans for application management with Windows Azure Active Directory here. Try out the preview and start using it today.

    Enterprise Management: Use Active Directory to Better Manage Windows Azure
    Windows Azure Active Directory provides the ability to manage your organization in a directory which is hosted entirely in the cloud, or alternatively kept in sync with an on-premises Windows Server Active Directory solution (allowing you to seamlessly integrate with the directory you already have). With today's Windows Azure release we are integrating Windows Azure Active Directory even more deeply within the core Windows Azure management experience, enabling an even richer enterprise security offering. Specifically:
    1) All Windows Azure accounts now have a default Windows Azure Active Directory created for them. You can create and map any users you want into this directory, and grant administrative rights to manage resources in Windows Azure to these users.
    2) You can keep this directory entirely hosted in the cloud – or optionally sync it with your on-premises Windows Server Active Directory. Both options are free. The latter approach is ideal for companies that wish to use their corporate user identities to sign in and manage Windows Azure resources. It also ensures that if an employee leaves an organization, his or her access rights to the company's Windows Azure resources are immediately revoked.
    3) The Windows Azure Service Management APIs have been updated to support using Windows Azure Active Directory credentials to sign in and perform management operations. Prior to today's release, customers had to download and use management certificates (which were not scoped to individual users) to perform management operations. We still support this management certificate approach (don't worry – nothing will stop working), but we think the new Windows Azure Active Directory authentication support enables an even easier and more secure way for customers to manage resources going forward (a small sketch of the idea follows at the end of this section).
    4) The Windows Azure SDK 2.2 release (which is also shipping today) includes built-in support for the new Service Management APIs that authenticate with Windows Azure Active Directory, and now allows you to create and manage Windows Azure applications and resources directly within Visual Studio using your Active Directory credentials. This, combined with updated PowerShell scripts that also support Active Directory, enables an end-to-end enterprise authentication story with Windows Azure.
    Below are some details on how all of this works.

    Subscriptions within a Directory
    As part of today's update, we have associated all existing Windows Azure accounts with a Windows Azure Active Directory (and created one for you if you didn't already have one). When you log in to the Windows Azure Management Portal you'll now see the directory name in the URI of the browser; for example, I have a "scottgu" directory that my subscriptions are hosted within. Note that you can continue to use Microsoft Accounts (formerly known as Microsoft Live IDs) to sign in to Windows Azure. These map just fine to a Windows Azure Active Directory – so there is no need to create new usernames that are specific to a directory if you don't want to. In the scenario above I'm actually logged in using my @hotmail.com-based Microsoft ID, which is now mapped to a "scottgu" Active Directory that was created for me. By default, everything will continue to work just as it did before.

    Manage your Directory
    You can manage an Active Directory (including the one we now create for you by default) by clicking the "Active Directory" tab on the left-hand side of the portal. This lists all of the directories in your account. Clicking one for the first time displays a getting-started page that provides documentation and links for performing common tasks with it. You can use the built-in directory management support within the Windows Azure Management Portal to add/remove/manage users within the directory, enable multi-factor authentication, associate a custom domain (e.g. mycompanyname.com) with the directory, and/or rename the directory to whatever friendly name you want (just click the Configure tab to do this). You can also set up the directory to automatically sync with an on-premises Active Directory using the "Directory Integration" tab. Note that, by default, users within a directory do not have admin rights to log in to or manage Windows Azure resources. You still need to explicitly grant them co-admin permissions on a subscription for them to log in or manage resources in Windows Azure. You can do this by clicking the Settings tab on the left-hand side of the portal and then clicking the Administrators tab within it.
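    To illustrate point 3 above, here is a minimal sketch of calling the Service Management REST API with an Active Directory bearer token instead of presenting a management certificate. The subscription ID, the way the token is obtained, and the API version header value are all assumptions for illustration, not details from this post:

    ```python
    import requests

    SUBSCRIPTION_ID = "00000000-0000-0000-0000-000000000000"  # hypothetical
    TOKEN = "<access token obtained from Windows Azure Active Directory>"

    # List hosted services through the Service Management REST API, sending an
    # AD bearer token in the Authorization header.
    resp = requests.get(
        f"https://management.core.windows.net/{SUBSCRIPTION_ID}/services/hostedservices",
        headers={
            "Authorization": f"Bearer {TOKEN}",
            "x-ms-version": "2013-08-01",  # assumed API version header value
        },
    )
    resp.raise_for_status()
    print(resp.text)  # XML list of cloud services in the subscription
    ```

    Because the token is issued to an individual directory user, access can be revoked per user, which is exactly the advantage over shared management certificates described above.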
    Sign-In Integration within Visual Studio
    If you install the new Windows Azure SDK 2.2 release, you can now connect to Windows Azure from directly inside Visual Studio without having to download any management certificates. Just right-click the "Windows Azure" icon within the Server Explorer and choose the "Connect to Windows Azure" context menu option. Doing this will prompt you to enter the email address of the username you wish to sign in with (make sure this account is a user in your directory with co-admin rights on a subscription). You can use either a Microsoft Account (e.g. Windows Live ID) or an Active Directory based organizational account as the email; the dialog will update with an appropriate login prompt depending on which type of email address you enter. Once you sign in, the Windows Azure resources that you have permissions to manage show up automatically within the Visual Studio Server Explorer and are available to start using. No downloading of management certificates required – all of the authentication was handled using your Windows Azure Active Directory!

    Manage Subscriptions across Multiple Directories
    If you already have multiple directories and multiple subscriptions within your Windows Azure account, we have done our best to create a good default mapping of your subscriptions to directories as part of today's update. If you don't like the default subscription-to-directory mapping, you can click the Settings tab in the left-hand navigation of the Windows Azure Management Portal and browse to the Subscriptions tab within it. To map a subscription to a different directory in your account, simply select the subscription from the list, then click the "Edit Directory" button to choose which directory to map it to. Mapping a subscription to a different directory takes only seconds and will not cause any of the resources within the subscription to recycle or stop working. We've made the directory-to-subscription mapping process self-service so that you always have complete control and can map things however you want.

    Filtering By Directory and Subscription
    Within the Windows Azure Management Portal you can filter resources by subscription (allowing you to show/hide different subscriptions). If you have subscriptions mapped to multiple directory tenants, we also now have a filter drop-down that allows you to filter the subscription list by directory tenant. This filter is only available if you have multiple subscriptions mapped to multiple directories within your Windows Azure account.

    Windows Azure SDK 2.2
    Today we are also releasing a major update of our Windows Azure SDK. The Windows Azure SDK 2.2 release adds some great new features, including:
    - Visual Studio 2013 support
    - Integrated Windows Azure sign-in support within Visual Studio
    - Remote debugging of Cloud Services with Visual Studio
    - Firewall management support within Visual Studio for SQL Databases
    - Visual Studio 2013 RTM VM images for MSDN subscribers
    - Windows Azure Management Libraries for .NET
    - Updated Windows Azure PowerShell cmdlets and ScriptCenter
    I'll post a follow-up blog shortly with more details about all of the above.
    Additional Updates
    In addition to the above enhancements, today's release also includes a number of additional improvements:
    - AutoScale: richer time- and date-based scheduling support (set different rules on different dates)
    - AutoScale: ability to scale to zero virtual machines (very useful for dev/test scenarios)
    - AutoScale: support for time-based scheduling of Mobile Services AutoScale rules
    - Operation Logs: auditing support for Service Bus management operations
    Today we also shipped a major update to the Windows Azure SDK – Windows Azure SDK 2.2. It has so much goodness in it that I have a whole second blog post coming shortly on it! :-)

    Summary
    Today's Windows Azure release enables a bunch of great new scenarios and a much richer enterprise authentication offering. If you don't already have a Windows Azure account, you can sign up for a free trial and start using all of the above features today. Then visit the Windows Azure Developer Center to learn more about how to build apps with it.
    Hope this helps,
    Scott
    P.S. In addition to blogging, I am also now using Twitter for quick updates and to share links. Follow me at: twitter.com/scottgu

    Read the article

  • Data Governance 2010 Conference in San Diego

    - by Tony Ouk
    The Data Governance Annual Conference is one of the world's most authoritative and vendor-neutral events on data governance and data quality. The conference will focus on the "how-tos", from starting a data governance and stewardship program to attaining data governance maturity, with specific topics on MDM. This year's event will be hosted June 7 through June 10 in San Diego, California. For more information, including registration details, visit the Data Governance 2010 Conference website.

    Read the article

  • How to search for newline or linebreak characters in Excel?

    - by Highly Irregular
    I've imported some data into Excel (from a text file) and it contains some sort of newline characters. If I hit F2 (to edit) then Enter (to save changes) on each of the cells with a newline (without actually editing anything), Excel automatically changes the layout so the line breaks become visible. I don't want these newline characters here, as they mess up data processing further down the track. How can I do a search for these to detect more of them? The usual search function doesn't accept an Enter character as a search character.
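    One detail worth knowing here (not from the original question): in Excel's Find and Replace dialog, you can type Ctrl+J in the "Find what" box to enter a line-feed character, which makes the ordinary search work for this case. For bulk detection, a small script can also flag the affected cells; here is a hedged sketch using the openpyxl library (the file name is hypothetical):

    ```python
    from openpyxl import load_workbook

    # Flag every cell whose value contains a line feed (LF) or carriage return
    # (CR) -- the characters Excel's ordinary Find box cannot accept directly.
    wb = load_workbook("imported.xlsx")  # hypothetical file name
    for ws in wb.worksheets:
        for row in ws.iter_rows():
            for cell in row:
                if isinstance(cell.value, str) and ("\n" in cell.value or "\r" in cell.value):
                    print(f"{ws.title}!{cell.coordinate}: {cell.value!r}")
    ```

    The same loop can be changed to clean the cells in place, e.g. by assigning cell.value.replace("\n", " ") back to the cell and saving the workbook.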

    Read the article

  • Oracle Data Integration 12c: Simplified, Future-Ready, High-Performance Solutions

    - by Thanos Terentes Printzios
    In today's data-driven business environment, organizations need to cost-effectively manage the ever-growing streams of information originating both inside and outside the firewall, and to address emerging deployment styles like cloud, big data analytics, and real-time replication. Oracle Data Integration delivers pervasive and continuous access to timely and trusted data across heterogeneous systems.
    Oracle is enhancing its data integration offering by announcing the general availability of the 12c release of its key data integration products: Oracle Data Integrator 12c and Oracle GoldenGate 12c, delivering simplified, high-performance solutions for cloud, big data analytics, and real-time replication. The new release delivers extreme performance, increases IT productivity, and simplifies deployment, while helping IT organizations keep pace with new data-oriented technology trends, including cloud computing, big data analytics, and real-time business intelligence. With the 12c release, Oracle becomes the leader in data integration and replication technologies, as no other vendor offers such a complete set of data integration capabilities for pervasive, continuous access to trusted data across Oracle platforms as well as third-party systems and applications.
    The Oracle Data Integration 12c release addresses data-driven organizations' critical and evolving data integration requirements under three key themes:
    - Future-ready solutions: supporting current and emerging initiatives
    - Extreme performance: even higher performance than ever before
    - Fast time-to-value: higher IT productivity and simplified solutions
    With the new capabilities in Oracle Data Integrator 12c, customers can benefit from:
    - Superior developer productivity, ease of use, and rapid time-to-market with the new flow-based mapping model, reusable mappings, and step-by-step debugger.
    - Increased performance when executing data integration processes, due to improved parallelism.
    - Improved productivity and monitoring via tighter integration with Oracle GoldenGate 12c and Oracle Enterprise Manager 12c.
    - Improved interoperability with Oracle Warehouse Builder, enabling faster and easier migration to Oracle Data Integrator's strategic data integration offering.
    - Faster implementation of business analytics through Oracle Data Integrator pre-integrated with the latest release of Oracle BI Applications. Oracle Data Integrator also integrates simply and easily with Oracle business analytics tools, including OBIEE and Oracle Hyperion.
    - Support for loading and transforming big and fast data, enabled by integration with big data technologies: Hadoop, Hive, HDFS, and Oracle Big Data Appliance.
    Only Oracle GoldenGate provides best-of-breed real-time replication of data in heterogeneous data environments. With the new capabilities in Oracle GoldenGate 12c, customers can benefit from:
    - Simplified setup and management of Oracle GoldenGate 12c when using multiple database delivery processes, via the new Coordinated Delivery feature for non-Oracle databases.
    - Expanded heterogeneity through added support for the latest versions of major databases such as Sybase ASE 15.7, MySQL NDB Cluster 7.2, and MySQL 5.6, as well as integration with Oracle Coherence.
    - Enhanced high availability and data protection via integration with Oracle Data Guard and Fast-Start Failover.
    - Enhanced security for credentials and encryption keys using Oracle Wallet.
    - Real-time replication for databases hosted on public cloud environments supported by third-party clouds.
    Tight integration between Oracle Data Integrator 12c and Oracle GoldenGate 12c, and with other Oracle technologies such as Oracle Database 12c and Oracle Applications, provides a number of benefits for organizations:
    - Tight integration between Oracle Data Integrator 12c and Oracle GoldenGate 12c enables developers to leverage Oracle GoldenGate's low-overhead, real-time change data capture completely within Oracle Data Integrator Studio, without additional training.
    - Integration with Oracle Database 12c provides a strong foundation for seamless private cloud deployments.
    - The release delivers real-time data for reporting, zero-downtime migration, and improved performance and availability for Oracle Applications such as Oracle E-Business Suite and ATG Web Commerce.
    - Oracle's data integration offering is optimized for Oracle Engineered Systems and is an integral part of Oracle's fast data, real-time analytics strategy on Oracle Exadata Database Machine and Oracle Exalytics In-Memory Machine.
    With these many new features, Oracle Data Integrator 12c and Oracle GoldenGate 12c differentiate the new data integration offering. This is just a quick glimpse into Oracle Data Integrator 12c and Oracle GoldenGate 12c. Find out much more about the new release in the video webcast "Introducing 12c for Oracle Data Integration", where customer and partner speakers, including SolarWorld, BT, and Rittman Mead, will join us in launching the new release.
    Resource kits:
    - Meet Oracle Data Integration 12c
    - Discover what's new with Oracle GoldenGate 12c
    The Oracle EMEA DIS (Data Integration Solutions) Partner Community is available for all your questions, and additional partner-focused webcasts will be made available through our blog here, so stay connected. For any questions please contact us at partner.imc-AT-beehiveonline.oracle-DOT-com.
    Stay Connected: Oracle Newsletters

    Read the article

  • Can you recommend a good Idea Management System?

    - by Tedi
    I'm trying to find a good (and cheap) idea management system for a non-profit project. I've browsed lots of good options, but they cost a lot of money. They are probably worth it, but since we're planning to run this as a non-profit project, a heavy financial investment is unfortunately not an option. Basically, what we want to run is a platform where users can propose ideas that are voted on, commented on, and enriched by the rest of the community. Thanks.

    Read the article

  • More info: a "stand-alone" installer for Management Studio Express 2008

    - by AaronBertrand
    Last February, I blogged about something I was initially very happy about: a stand-alone installer for Management Studio Express (SSMSE) 2008 . Now users could allegedly download a much smaller installer, and only install the client tools without having to install an instance of SQL Server Express. While the latter is true, the former remains a pipe dream. Bill Ramos stated in his 2009-02-20 announcement : "We teased out the Tools portion of SQL Server 2008 Express with Tools into it’s own download."...(read more)

    Read the article

  • Building a Data Mart with Pentaho Data Integration Video Review by Diethard Steiner, Packt Publishing

    - by Compudicted
    Originally posted on: http://geekswithblogs.net/Compudicted/archive/2014/06/01/building-a-data-mart-with-pentaho-data-integration-video-review.aspx
    The Building a Data Mart with Pentaho Data Integration video by Diethard Steiner from Packt Publishing is more than just a course on how to use Pentaho Data Integration; it also implements and uses the principles of data warehousing (I even heard the name of Ralph Kimball in the video). Indeed, a viewer should be familiar with concepts such as the star schema, slowly changing dimension types, etc., so prior to watching this course I suggest skimming through the data warehouse concepts (if unfamiliar) or, even better, reading Ralph Kimball's excellent The Data Warehouse Toolkit. By the way, the author expands beyond using Pentaho alone, to MySQL and MonetDB, which is real icing on the cake! Indeed, I would even suggest the course be named 'Building a Data Warehouse with Pentaho'. To successfully complete the course one needs to know some Linux (Ubuntu is used in the course), the vi editor, and the Bash command shell, though similar requirements would also apply on the Windows OS. Additionally, knowing some basic SQL would not hurt. As I said, MonetDB is used in this course several times. It seems no more complex than, say, MySQL, and based on what I read it is very well suited to fast querying of big volumes of data thanks to its column store (vertical data storage). I don't see what else could be a barrier; the material is very digestible.
    On this note, I must add that the author does not cover how to acquire the software, so here is what I found may help:
    - Pentaho: the free Community Edition should be more than anyone needs to learn it, or even to go into a POC.
    - MonetDB can be downloaded (for both Linux and Windows) from http://goo.gl/FYxMy0 (just see the appropriate link on the left).
    - The author seems to be using Eclipse to run SQL code; one can get it from http://goo.gl/5CcuN.
    - To create or edit database entities and/or schemas, one can use a universal tool called SQuirreL; get it from http://squirrel-sql.sourceforge.net.
    Next, I must confess Diethard is very knowledgeable in what he does and beyond. There is some accent to be heard, especially if one's mother tongue is English, but I got over it within a few chapters. I liked the rate at which the material is presented; it made me feel I got value for every second.
    Eventually, my impressions are:
    - Pentaho is an awesome ETL offering; it is very much worth learning (I am an ETL fan and a heavy user of SSIS)
    - MonetDB is nice; it tickles my fancy to get to know it better
    - Data warehousing, despite all the Big Data tool offerings (Hive, Sqoop, Pig on Hadoop), still rocks with the traditional tools
    - Chapters 2 to 6 were the most fun for me, with chapter 8 being the most difficult
    In closing, I highly recommend this video to anyone who needs to grasp Pentaho concepts quickly; likewise, the course is very well suited to any developer on a "supposed to be done yesterday" type of project. It is for a beginner-to-intermediate-level ETL/DW developer. One would then need to learn more about data warehousing and Pentaho; for that I recommend the 5-star Pentaho Data Integration 4 Cookbook. Enjoy it!
    Disclaimer: I received this video from the publisher for the purpose of a public review.

    Read the article

  • power management of USB-enclosed hard drives

    - by intuited
    With a typical USB hard drive enclosure, is the full range of drive power management functionality available? In what may be an unrelated matter: is it possible to suspend a PC without the attached USB-powered drive being unmounted and then remounted on resume? That unmount/remount cycle is the behaviour I'm currently seeing (running Ubuntu Linux 10.10). Are there certain models or brands that provide more complete control over this aspect of drive operation? My Friendly Neighbourhood Computer Store carries (part of) the Vantec Nexstar product line.
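    One way to answer the first part empirically (an assumption on my part, not from the original question) is to check whether the enclosure's USB-to-SATA bridge passes ATA power management commands through at all. A minimal Python sketch wrapping the hdparm tool:

    ```python
    import subprocess

    # Probe whether an external drive exposes ATA power management (APM)
    # through its USB bridge; many enclosures filter these commands out.
    # Run as root; /dev/sdb is a hypothetical device node.
    device = "/dev/sdb"
    result = subprocess.run(["hdparm", "-B", device],
                            capture_output=True, text=True)
    print(result.stdout or result.stderr)
    # A reading like "APM_level = 128" means APM is reachable through the
    # bridge; "not supported" or an I/O error usually means it is blocked.
    ```

    If the bridge blocks these commands, no enclosure setting will expose the drive's full power management range, which would also explain OS-level behaviour like forced unmounts around suspend.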

    Read the article

  • Macbook (non-pro, late 2010) Power Management Issues relating to Nvidia-Current drivers

    - by gbvitaobscura
    I have tried many potential solutions, but to no avail! (This is going to be a long post.)
    Essentially, when I first install Ubuntu (I have tested 10.11-12.04 beta) I can change the brightness of the MacBook backlight. One of the first things I usually do once my machine finishes installing is enter "sudo apt-get install nvidia-current" into the terminal to get my Nvidia graphics card set up. Before I install nvidia-current, power management works flawlessly. After installing nvidia-current, my graphics driver works mostly properly (more on that later). When I press F1 or F2, notify-osd pops up and shows the backlight icon along with a meter which changes according to how much I press F1 or F2, but the backlight does not change brightness at all.
    Two supplementary facts: when I am running off battery power, the backlight never dims; and changing the backlight brightness through the terminal does not work either (the command is recognized but changes nothing).
    Things which did not work:
    1. editing xorg.conf
    2. a Python script/hack
    3. editing grub
    Things which did work:
    1. removing nvidia-current
    Final piece of information: although my graphics card works in every way except power management, it cannot be found through System Settings/Details (Graphics: Unknown). I'm using the NVIDIA GeForce 320M. The output of lspci is:
    00:00.0 Host bridge: NVIDIA Corporation MCP89 HOST Bridge (rev a1)
    00:00.1 RAM memory: NVIDIA Corporation MCP89 Memory Controller (rev a1)
    00:01.0 RAM memory: NVIDIA Corporation Device 0d6d (rev a1)
    00:01.1 RAM memory: NVIDIA Corporation Device 0d6e (rev a1)
    00:01.2 RAM memory: NVIDIA Corporation Device 0d6f (rev a1)
    00:01.3 RAM memory: NVIDIA Corporation Device 0d70 (rev a1)
    00:02.0 RAM memory: NVIDIA Corporation Device 0d71 (rev a1)
    00:02.1 RAM memory: NVIDIA Corporation Device 0d72 (rev a1)
    00:03.0 ISA bridge: NVIDIA Corporation MCP89 LPC Bridge (rev a2)
    00:03.1 RAM memory: NVIDIA Corporation MCP89 Memory Controller (rev a1)
    00:03.2 SMBus: NVIDIA Corporation MCP89 SMBus (rev a1)
    00:03.3 RAM memory: NVIDIA Corporation MCP89 Memory Controller (rev a1)
    00:03.4 Co-processor: NVIDIA Corporation MCP89 Co-Processor (rev a1)
    00:04.0 USB controller: NVIDIA Corporation MCP89 OHCI USB 1.1 Controller (rev a1)
    00:04.1 USB controller: NVIDIA Corporation MCP89 EHCI USB 2.0 Controller (rev a2)
    00:06.0 USB controller: NVIDIA Corporation MCP89 OHCI USB 1.1 Controller (rev a1)
    00:06.1 USB controller: NVIDIA Corporation MCP89 EHCI USB 2.0 Controller (rev a2)
    00:08.0 Audio device: NVIDIA Corporation MCP89 High Definition Audio (rev a2)
    00:09.0 Ethernet controller: NVIDIA Corporation MCP89 Ethernet (rev a1)
    00:0a.0 IDE interface: NVIDIA Corporation MCP89 SATA Controller (rev a2)
    00:0b.0 RAM memory: NVIDIA Corporation Device 0d75 (rev a1)
    00:15.0 PCI bridge: NVIDIA Corporation Device 0d9b (rev a1)
    00:17.0 PCI bridge: NVIDIA Corporation MCP89 PCI Express Bridge (rev a1)
    01:00.0 Network controller: Broadcom Corporation BCM43224 802.11a/b/g/n (rev 01)
    02:00.0 VGA compatible controller: NVIDIA Corporation Device 08a0 (rev a2)
    There is a very similar bug on Launchpad, which I have posted below.
    https://bugs.launchpad.net/ubuntu/+source/nvidia-graphics-drivers/+bug/764195

    Read the article

  • Internal Mutation of Persistent Data Structures

    - by Greg Ros
    To clarify, when I use the terms persistent and immutable for a data structure, I mean that:
    - The state of the data structure remains unchanged for its lifetime. It always holds the same data, and the same operations always produce the same results.
    - The data structure allows Add, Remove, and similar methods that return new objects of its kind, modified as instructed, that may or may not share some of the data of the original object.

    However, while a data structure may seem persistent to the user, it may do other things under the hood. To be sure, all data structures are, internally, at least somewhere, based on mutable storage. If I were to base a persistent vector on an array, and copy it whenever Add is invoked, it would still be persistent, as long as I modify only locally created arrays.

    However, sometimes you can greatly increase performance by mutating a data structure under the hood. In more, say, insidious, dangerous, and destructive ways. Ways that might leave the abstraction untouched, not letting the user know anything has changed about the data structure, but that are critical at the implementation level.

    For example, let's say we have a class called ArrayVector implemented using an array. Whenever you invoke Add, you get an ArrayVector built on top of a newly allocated array that has an additional item. A sequence of such updates will involve n array copies and allocations.

    However, let's say we implement a lazy mechanism that stores all sorts of updates -- such as Add, Set, and others -- in a queue. In this case, each update requires constant time (adding an item to a queue), and no array allocation is involved. When a user tries to get an item from the array, all the queued modifications are applied under the hood, requiring a single array allocation and copy (since we know exactly what data the final array will hold, and how big it will be). Future get operations will be performed on an empty cache, so they will take a single operation. (A small sketch of this idea follows below.) But in order to implement this, we need to 'switch' or mutate the internal array to the new one, and empty the cache -- a very dangerous action. However, considering that in many circumstances (most updates are going to occur in sequence, after all) this can save a lot of time and memory, it might be worth it -- you will need to ensure exclusive access to the internal state, of course.

    This isn't a question about the efficacy of such a data structure. It's a more general question: is it ever acceptable to mutate the internal state of a supposedly persistent or immutable object in destructive and dangerous ways? Does performance justify it? Would you still be able to call it immutable? Oh, and could you implement this sort of laziness without mutating the data structure in the specified fashion?
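    For concreteness, here is a minimal Python sketch of the queued-update idea described above (a hypothetical class, for illustration only). Note that the flush never touches an array shared with older versions; it replaces this instance's private reference wholesale:

    ```python
    import threading

    class LazyVector:
        """Observably persistent vector that queues updates and applies them
        in one batch, mutating only its own private internal state."""

        def __init__(self, items=(), pending=()):
            # `items` may be shared with other versions; it is never mutated,
            # only replaced wholesale by _flush with a freshly built list.
            self._items = items if isinstance(items, list) else list(items)
            self._pending = list(pending)   # queued ("add"/"set") updates
            self._lock = threading.Lock()   # exclusive access for the flush

        def add(self, value):
            # O(1): share the backing list, just extend the pending queue.
            return LazyVector(self._items, self._pending + [("add", value)])

        def set(self, index, value):
            return LazyVector(self._items, self._pending + [("set", index, value)])

        def _flush(self):
            # The "dangerous" part: a single allocation + copy, done once,
            # under a lock, then the internal array is switched and the
            # pending queue emptied.
            with self._lock:
                if self._pending:
                    items = list(self._items)
                    for op in self._pending:
                        if op[0] == "add":
                            items.append(op[1])
                        else:
                            items[op[1]] = op[2]
                    self._items, self._pending = items, []

        def __getitem__(self, index):
            self._flush()                   # apply queued updates lazily
            return self._items[index]
    ```

    With this design, v2 = v1.add(x) costs O(1); the first read of v2 pays one array copy, after which reads go straight to the array, while v1 remains observably unchanged throughout.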

    Read the article

  • What is Mozilla's new release management strategy?

    - by RonK
    I saw today that Firefox released a new version (5). I tried reading about what was added and ran into this link: http://arstechnica.com/open-source/news/2011/06/firefox-5-released-arrives-only-three-months-after-firefox-4.ars It states that: "Mozilla has launched Firefox 5, a new version of the popular open source Web browser. This is the first update that Mozilla has issued since adopting a new release management strategy that has drastically shortened the Firefox development cycle." I find this very intriguing - any idea what this new strategy is?

    Read the article

  • More info: a "stand-alone" installer for Management Studio Express 2008

    - by AaronBertrand
    Last February, I blogged about something I was initially very happy about: a stand-alone installer for Management Studio Express (SSMSE) 2008. Now users could allegedly download a much smaller installer and install only the client tools, without having to install an instance of SQL Server Express. While the latter is true, the former remains a pipe dream. Bill Ramos stated in his 2009-02-20 announcement: "We teased out the Tools portion of SQL Server 2008 Express with Tools into it’s own download."...(read more)

    Read the article

  • *New Movie* featuring AutoVue value in Primavera Contract Management

    - by prasenjit.niyogi(at)oracle.com
    With the Oscar season approaching fast, one movie that you wouldn't want to miss is the latest release from our post-production studios. Check out the new demo movie featuring the joint value of AutoVue's enterprise visualization within Primavera Contract Management. Please feel free to share it with your partners, customers and prospects. The video can be found on O.com (here), on AutoVue's YouTube channel (here), or by simply clicking on the image below.

    Read the article

  • Sabre Manages Fast Data Growth with Oracle Data Integration Products

    - by Irem Radzik
    Last year at OpenWorld we announced Sabre Holdings as a winner of the Fusion Middleware Innovation Awards. The Sabre team did an excellent job of leveraging cutting-edge technologies to manage the rapid data growth and exponential scalability demands they have experienced in the travel industry. Today we announced the details and specific benefits of Sabre's new real-time data integration solution in a press release. Please take a look if you haven't seen it yet: Sabre Holdings Deploys Oracle Data Integrator and Oracle GoldenGate to Support Rapid Customer Growth.

    Sabre achieved benefits in three areas by using Oracle Data Integration products:
    1. Managed a 7x increase in data sources for the enterprise data warehouse
    2. Reduced infrastructure complexity
    3. Decreased time to market for new products and services by 30 percent

    This simply shows that using the latest technologies helps companies build robust solutions to today's key data management challenges. And the benefit of a next-generation data integration technology shows up not only in IT operations but also on the business side: a better data integration solution for the enterprise data warehouse delivered the platform Sabre needs to accelerate how they service their customers, improving their competitive advantage. Tomorrow I will give another great example of innovation with next-generation data integration from Oracle, when we discuss the Fusion Middleware Innovation Awards 2012 winners and their results with Oracle's data integration products.

    Read the article

  • Automating release management and CI on python projects under mercurial VCS

    - by ms4py
    I have a set of Python projects under the Mercurial VCS. I would like to automate the following tasks:

    1. Run the test suite for every commit (CI).
    2. Make a source distribution for every commit that has a tag in Mercurial; this is regarded as a new release.
    3. Copy the distribution to a special repository.

    Jenkins has been proposed in answers to similar questions, but I'm not sure whether it can handle the release management as intended.
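
    A rough sketch of what the per-commit step could look like, whether invoked from a Jenkins job or a Mercurial hook. The use of pytest, setup.py, and the release-repository path are all assumptions, not part of the question:

```python
#!/usr/bin/env python
"""Per-commit CI/release step for a Mercurial working copy (a sketch)."""
import shutil
import subprocess
import sys
from pathlib import Path

RELEASE_REPO = Path("/srv/releases")  # assumed shared destination

def main():
    # 1. Run the test suite for every commit (CI)
    subprocess.check_call([sys.executable, "-m", "pytest"])

    # 2. A tag other than the floating 'tip' marks the commit as a release
    out = subprocess.check_output(
        ["hg", "log", "-r", ".", "--template", "{tags}"])
    release_tags = [t for t in out.decode().split() if t != "tip"]

    if release_tags:
        # 3. Build a source distribution and copy it to the release repo
        subprocess.check_call([sys.executable, "setup.py", "sdist"])
        for dist in Path("dist").iterdir():
            shutil.copy(dist, RELEASE_REPO / dist.name)

if __name__ == "__main__":
    main()
```

    Jenkins (or any CI server) could simply invoke such a script on every build; the tag check keeps plain commits from producing releases.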

    Read the article

  • Paranoid management, contractor checking work [closed]

    - by user833345
    Just wanted to get some opinions and experiences on an issue I'm having at work. First, a little background. I've been working at a company for some time (past any probation period), rewriting a horrendous system: no tests, incomplete and broken functionality everywhere, enough copypasta to feed a small village, redundant code, more unused SQL tables than used ones, and terrible performance. I've never seen such bad code; pretty much all of it is worthy of being posted on TheDailyWTF. The company has been operating for a number of years and has had a string of bad developers working on this system.

    I made the call to rewrite instead of refactor, since I judged it to be less work overall and decided that the result would address the requirements more appropriately: the central requirement is a future-proof system for the next decade with plenty of room to scale up. Refactoring would have entailed untangling a huge ball of yarn while integrating it with a proper foundation, or building a foundation from scratch. I've introduced the latest spiffy framework, unit and functional testing, CI, a bug tracker, and an agile workflow to the environment. I've fixed most of the performance issues of the old system (there were no indexes on any of the tables, for example), and I've created an automated deployment process for it. The CTO has been maintaining the old system while I build the new one, and he has been advising management that everything is being done as per best practices.

    However, management is hiring a contractor to come in and verify my work. In my experience, this is unprecedented. I can understand their reasoning to an extent, since they've had bad luck in the past, but I can't help feeling somewhat offended that they distrust two senior developers, who have been working with them for some time, enough to bring in a third party. And it's not just me who is under watch: people's emails are constantly checked; someone had a remote desktop application installed on their computer, and I was asked to check its usage logs to try to determine whether they were stealing sensitive data; and there are CCTV cameras in one of the rooms. It's the first time I've decided to disable my Skype history at work.

    Am I right to feel indignant here? Has anyone else encountered such a situation? If so, how did it work out in the end? Was it worth sticking around? Should I just find another job?

    Read the article

  • Webcast Q&A: ING on How to Scale Role Management and Compliance

    - by Tanu Sood
    Thanks to all who attended the live webcast we hosted on ING: Scaling Role Management and Access Certifications to Thousands of Applications on Wednesday, April 11th. For those of you who couldn't join us, the webcast replay is now available. Many thanks to our guest speaker, Mark Robison, Enterprise Architect at ING, for walking us through ING's drivers and rationale for the platform approach, the phased implementation strategy, results and metrics, roadmap, and recommendations. We greatly appreciate the insight he shared on the deployment synergies between Oracle Identity Manager (OIM) and Oracle Identity Analytics (OIA) to enforce streamlined user and role management and scalable compliance. Mark was also kind enough to walk us through the specific solution features that helped ING manage the problem of role explosion and implement closed-loop remediation. Our host speaker, Neil Gandhi, Principal Product Manager at Oracle, rounded off the presentation by discussing common use cases and deployment scenarios we see organizations implement to automate user/identity administration and enforce closed-loop, scalable compliance. Neil also called out the specific features in Oracle Identity Analytics 11gR1 that expedite and streamline compliance processes such as access certifications.

    While we tackled a few questions during the webcast, we have captured the responses to those we weren't able to get to here; our sincere thanks to Mark Robison for taking the time to respond to questions specific to ING's implementation and strategy.

    Q. Did you include business-friendly entitlement descriptions, or is the business seeing application descriptors?
    A. We include very business-friendly descriptions. The OIA tool has the facility to allow this.

    Q. When doing attestation on job change, who is in the workflow to review and confirm that the employee should continue to have access? Is that a best practice?
    A. The new and old managers are in the workflow. The tool can check for any Separation of Duties (SOD) violations with both having similar access. It may not be a best practice, but it is a reality of doing your old and new job for a transition period on a transfer.

    Q. What versions of OIM and OIA are being used at ING?
    A. OIM 11gR1 and OIA 11gR1 -- the very latest versions available.

    Q. Are you using an entitlements/role catalog?
    A. Yes. We use both roles and entitlements.

    Q. What specific unexpected benefits did the Identity Warehouse provide ING?
    A. The most unanticipated was helping Legal Hold identify user IDs in the various applications. Other benefits included providing a one-stop shop for all aggregated ID information.

    Q. How fine-grained are your applications and entitlements? Did OIA and OIM support that level of granularity?
    A. We have some very fine-grained entitlements, but we roll these up into approved roles to allow for easier management. For managing very fine-grained entitlements, Oracle offers Oracle Entitlements Server. We currently do not own this software but are considering it.

    Q. Do you allow any individual access, or is everything truly role-based?
    A. We are a hybrid environment, with roles plus individual positive and negative entitlements.

    Q. Did you use an agile methodology like Scrum to deliver functionality during your project?
    A. We started with waterfall, but used an agile approach to deliver benefits after the initial implementation.

    Q. How did you handle rolling out the standard ID format to existing users?
    A. We just used the standard IDs for new users. We have not taken on a project to address the existing nonstandard IDs.

    Q. To avoid role explosion, how do you deal with apps that require more than a couple of entitlement TYPES? For example, an app may have different levels of access, and it may need to know the user's country/state to associate them with particular customers.
    A. We focus on the functional user and craft the role around their daily job requirements. The role captures the required application entitlements. To keep role explosion down, we use role mining in OIA and also meet with and interview the business. It is an iterative process to reach role consensus.

    Q. Great presentation! How many rounds of certifications has ING performed so far?
    A. Around seven quarters, plus constant certifications on transfer.

    Q. Did you have executive support from the top down?
    A. Yes. Executive support was key to our success.

    Q. For your cloud instances, are you using OIA or OIM as SaaS?
    A. No. We are just provisioning and deprovisioning to various cloud providers (ServiceNow is an example).

    Q. How do you ensure a role owner does not get more privileges than intended and thus violate another role? E.g., a DBA role should not get the right to run something as root, as that would affect the root role.
    A. We have SOD checks. Also, all roles are initially approved by external audit, and the role owners have to certify the roles and any changes.

    Q. What is your ratio of employees to roles?
    A. We are still in the process of going through our various lines of business, so I do not have a final ratio. From what we have seen, the ratio varies greatly depending on the line of business and the diversity of job functions. For standardized lines of business such as call centers, the ratio is very good: we can have a single role that covers many employees. For specialized lines of business like treasury, it can be one or two people per role.

    Q. Is ING using the Oracle On Demand service?
    A. No.

    Q. Do you have to implement or migrate to OIM in order to get the Identity Warehouse, or can OIA provide the Identity Warehouse as well if you haven't reached OIM yet?
    A. No, an OIM deployment is not required to implement OIA's Identity Warehouse, but as you heard during the webcast, there are tremendous deployment synergies in deploying OIA and OIM together.

    Q. When is the Security Governor product coming out?
    A. Oracle Security Governor for Healthcare is available today.

    Hope you enjoyed the webcast, and we look forward to having you join us for the next webcast in the Customers Talk: Identity as a Platform series: Toyota: Putting Customers First -- Identity Platform as a Business Enabler, Wednesday, May 16th at 10 am PST / 1 pm EST. Register Today.

    You can also register for a live event at a city near you, where Aberdeen's Derek Brink will discuss the survey results from the recently published report "Analyzing Platform vs. Point Solution Approach in Identity." And you can do a quick (and free) online assessment of your identity programs by benchmarking them against the 160 organizations surveyed in the Aberdeen report, compliments of Oracle.

    Here's the slide deck from our ING webcast: ING webcast platform.

    Read the article

  • implementing dynamic query handler on historical data

    - by user2390183
    EDIT: Refined question to focus on the core issue.

    Context: I have historical data about property (house) sales, collected from various sources into a centralized/cloud data source (assume info collection is handled by a third party). I am planning to develop an application to query and retrieve data from this centralized data source.

    Example queries:
    1. Simple: for a given postcode XYZ, what is the average house price for a 3-bedroom house?
    2. Complex: what is the estimated price for a house at "DD, Some Street, XYZ Post Code" (worked out from average values of historic data filtered by various characteristics of the house: postcode, number of bedrooms, total area, and deeper attributes like building type, year built, and features)?

    In addition to the average price, the application should support other property statistics (maximum or minimum price, etc.) and trends (graphs) of a selected property attribute over a period of time. Hence, the queries should not force the search onto a primary key or a few fixed fields. In other words, queries can be:
    1. What is the change in 3-bedroom house prices (irrespective of location) over the last 30 days?
    2. What kind of properties can we get for price X (irrespective of location or house type)?

    The challenge I have is identifying the domain (BI/data analytics, DB design, DB query interface, DW-related, or something else) this problem (dynamic queries on historical data) belongs to, so that I can explore further.

    My findings so far (I could be wrong on the following, so please correct me):
    1. BI/data analytics -- I think it is a heavyweight solution for my problem and has scalability issues.
    2. DB design -- As I understand it, an RDBMS works well if you know the data model at design time. I expect the attributes of a property, or of other entities such as users, to evolve quickly, so maintenance would be an issue. And since multiple users will be executing queries at the same time, performance would be a bottleneck.
    3. Other options like graph DBs (http://www.tinkerpop.com/) seem a bit complex (they are good, but using such general-purpose tools feels like writing assembly to solve my problem).
    4. Big Data solutions are for analysing data from multiple unrelated domains.

    So, any suggestion on the space this problem fits in? (Especially if you have design/implementation experience of the back-end for a property-listing or similar portal.)
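
    To make the "dynamic query" requirement concrete, here is a toy Python sketch: schemaless records, arbitrary field/predicate filters supplied at query time, and a pluggable aggregate. All field names and figures are illustrative assumptions, not real data:

```python
from statistics import mean

# Illustrative records only -- real data would come from the cloud source.
# Records are plain dicts, so new attributes simply become new keys.
sales = [
    {"postcode": "XYZ", "bedrooms": 3, "price": 250000, "sold": "2013-05-01"},
    {"postcode": "XYZ", "bedrooms": 3, "price": 270000, "sold": "2013-06-12"},
    {"postcode": "ABC", "bedrooms": 2, "price": 180000, "sold": "2013-06-20"},
]

def query(records, filters, field, agg=mean):
    """Aggregate `field` over records matching every {field: predicate} filter.

    No primary key or fixed set of fields is baked in -- the caller
    assembles whichever filters the user asked for at run time.
    """
    matches = [r for r in records
               if all(f in r and pred(r[f]) for f, pred in filters.items())]
    return agg(r[field] for r in matches) if matches else None

# "For postcode XYZ, what is the average price of a 3-bedroom house?"
print(query(sales,
            {"postcode": lambda v: v == "XYZ",
             "bedrooms": lambda v: v == 3},
            "price"))  # -> 260000

# "What is the maximum price, irrespective of location?"
print(query(sales, {}, "price", agg=max))  # -> 270000
```

    Whatever storage layer ends up underneath (document store, column store, RDBMS with an EAV layer), the query interface would need roughly this shape: filters and aggregates chosen by the user, not fixed at design time.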

    Read the article
