Search Results

Search found 14780 results on 592 pages for 'low level'.


  • WNA Configuration in OAM 11g

    - by P Patra
    Pre-requisite: A Kerberos authentication scheme has to exist. This is usually a pre-configured OAM authentication scheme. It should have Authentication Level "2", Challenge Method "WNA", Challenge Direct URL "/oam/server", and Authentication Module "Kerberos". The default authentication scheme name is "KerberosScheme"; this name can be changed. The DNS name has to be resolvable on the OAM server, and the DNS names with referrals to AD have to be resolvable on the OAM server as well. Ensure nslookup works for the referrals.
    Pre-install: The AD team produces the keytab file on the AD server by running the ktpass command. Provide the OAM hostname to the AD team. Receive from the AD team the following: the keytab file produced when running the ktpass command, the ktpass username, and the ktpass password. Copy the keytab file to a convenient location in the OAM install tree and rename the file if desired, for instance where the oam-policy.xml file resides, i.e. /fa_gai2_d/idm/admin/domains/idm-admin/IDMDomain/config/fmwconfig/keytab.kt
    Configure WNA authentication on the OAM server: Create the config file krb.conf and set the environment variable to the path to this file: KRB_CONFIG=/fa_gai2_d/idm/admin/domains/idm-admin/IDMDomain/config/fmwconfig/krb.conf. The variable KRB_CONFIG has to be set in the profile of the user that the OAM Java container (i.e. WebLogic Server) runs as, e.g. the "applmgr" user, so that this setting is available to the OAM server. In the krb.conf file specify:

      [libdefaults]
      default_realm = NOA.ABC.COM
      dns_lookup_realm = true
      dns_lookup_kdc = true
      ticket_lifetime = 24h
      forwardable = yes

      [realms]
      NOA.ABC.COM = {
        kdc = hub21.noa.abc.com:88
        admin_server = hub21.noa.abc.com:749
        default_domain = NOA.ABC.COM
      }

      [domain_realm]
      .abc.com = ABC.COM
      abc.com = ABC.COM
      .noa.abc.com = NOA.ABC.COM
      noa.abc.com = NOA.ABC.COM

    where hub21.noa.abc.com is the load-balanced DNS VIP name for the AD server and NOA.ABC.COM is the name of the domain.
    Create an authentication policy to WNA-protect the resource (i.e. EBSR12) and choose "KerberosScheme" as the authentication scheme. Log in to the OAM Console => Policy Configuration tab => Browse tab => Shared Components => Application Domains => IAM Suite => Authentication Policies => Create. Name: ABC WNA Auth Policy. Authentication Scheme: KerberosScheme. Failure URL: http://hcm.noa.abc.com/cgi-bin/welcome
    Edit the system configuration for Kerberos: System Configuration tab => Access Manager Settings => expand Authentication Modules => expand Kerberos Authentication Module => double-click Kerberos. Edit the "Key Tab File" textbox: put in /fa_gai2_d/idm/admin/domains/idm-admin/IDMDomain/config/fmwconfig/keytab.kt. Edit the "Principal" textbox: put in HTTP/[email protected]. Edit the "KRB Config File" textbox: put in /fa_gai2_d/idm/admin/domains/idm-admin/IDMDomain/config/fmwconfig/krb.conf. Click "Apply".
    In the script setting the environment for the WLS server where OAM is deployed, set the variable KRB_CONFIG=/fa_gai2_d/idm/admin/domains/idm-admin/IDMDomain/config/fmwconfig/krb.conf. Restart the OAM server and the OAM server container (WebLogic Server).
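
    For reference, keytab generation on the AD side typically looks like the following ktpass invocation. This is only a hedged sketch: the service account name and the password placeholder are hypothetical, and the AD team may choose different crypto and principal-type options.

      REM Hedged sketch; svc-oam and the password placeholder are hypothetical.
      ktpass -princ HTTP/[email protected] -mapuser svc-oam@NOA.ABC.COM ^
             -pass <ktpass_password> -crypto RC4-HMAC-NT -ptype KRB5_NT_PRINCIPAL ^
             -out keytab.kt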

    Read the article

  • links for 2011-02-18

    - by Bob Rhubart
    VirtualBox: Pre-Built Developer VMs - "Learning your way around a new software stack is challenging enough without having to spend multiple cycles on the install process. Instead, we have packaged such stacks into pre-built Oracle VM VirtualBox appliances that you can download, install, and experience as a single unit." (tags: oracle virtualization virtualbox)
    Java Space on Parleys (The Java Source) - "Oracle partnered with Stephan Janssen, founder of Parleys, to make this happen. The Parleys website offers a user-friendly experience to view online content. You can download some of the talks to your desktop or watch them on the go on mobile devices." (tags: oracle java parleys)
    Why ADF Developers Should Attend ODTUG This Year (Shay Shmeltzer's Weblog) - Shay says: "A new track called the "Fusion Middleware" track has been formed and it has lots of sessions for any level of ADF developer. The track is run by several Oracle ACEs who are also involved in the ADF Enterprise Methodology Group." (tags: oracle otn odtug fusionmiddleware)
    Wrapping up an Exciting Mobile World Congress (The Java Source) - "One of the more popular topics in our booth was the use of Java in the Smart Grid. In our booth we were showing off some of the work of the Hydra Consortium whose goal it is to leverage the emerging smart grid infrastructure to securely enable the delivery of personal health data..." (tags: oracle java smartgrid)
    How to Audit and Monitor BI Publisher Reports Access? (Oracle BI Publisher Blog) - "Do you know who is accessing which report at what time in your reporting environment? As you deliver the BI Publisher reports to the production environment and your users start using them as part of their daily business operations, you might wonder about such questions." (tags: oracle otn businessintelligence)
    Oracle VM VirtualBox 4.0.4 Released! (Oracle's Virtualization Blog) - Fat Bloke says: "Oracle made a maintenance update release of Oracle VM VirtualBox version 4.0.4 today. You can download it now, or read about the changes in the ChangeLog." (tags: oracle otn virtualization virtualbox)
    Obama says Cloud and Data Center Consolidation Will Help Curb IT Costs | WHIR Web Hosting Industry News - "In the report, he estimated that the federal government could reallocate some $20 billion of IT spending to cloud computing technologies and reduce 'data center infrastructure expenditure by approximately 30 percent' through cloud computing." (tags: cloud obama datacenter)
    Chris Muir: ADF BC: Creating an "EXISTS" View Criteria - Oracle ACE Director Chris Muir shares some ADF tips. (tags: oracle otn oracleace adf)
    Translation and Multiple Languages with Oracle UCM | Bex Huff - Bex says: "Last year, I gave a presentation at Oracle Open World about Creating and Maintaining an Internationalized Web Site. Well, I'm happy to announce that one of the several add-ons to UCM is now available for purchase!" (tags: oracle otn enterprise2.0 ecm oracleace)
    ORACLENERD: Design Documentation - Oracle ACE Chet "ORACLENERD" Justice makes a pledge. (tags: oracle otn oracleace database)

    Read the article

  • What are the alternatives to "overriding a method" when using composition instead of inheritance?

    - by Sebastien Diot
    If we should favor composition over inheritance, the data part of it is clear, at least for me. What I don't have a clear solution to is how overriding methods, or simply implementing them if they are defined in a pure virtual form, should be implemented. An obvious way is to wrap the instance representing the base class in the instance representing the subclass. But the major downsides of this are that if you have, say, 10 methods and you want to override a single one, you still have to delegate every other method anyway. And if there are several layers of inheritance, you now have several layers of wrapping, which becomes less and less efficient. Also, this only solves the problem for the object's "client": when another object calls the top wrapper, things happen like in inheritance. But when a method of the deepest instance, the base class, calls its own methods that have been wrapped and modified, the wrapping has no effect: the call is handled by its own method instead of by the highest wrapper. One extreme alternative that would solve those problems would be to have one instance per method. You only wrap methods that you want to override, so there is no pointless delegation. But now you end up with an incredible number of classes and object instances, which will have a negative effect on memory usage, and this will require a lot more coding too. So, are there alternatives (preferably alternatives that can be used in Java) that:
    - do not result in many levels of pointless delegation without any changes;
    - make sure that not only the client of an object, but also all the code of the object itself, is aware of which implementation of a method should be called;
    - do not result in an explosion of classes and instances;
    - ideally put the extra memory overhead that is required at the "class"/"particular composition" level (static, if you will), rather than having every object pay the memory overhead of composition.
    My feeling tells me that the instance representing the base class should be at the "top" of the stack/layers so it receives calls directly, and can process them directly too if they are not overridden. But I don't know how to do it that way.
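
    To make the delegation problem concrete, here is a minimal Java sketch (the Shape/Circle/LoudShape names are made up for illustration). It shows both downsides at once: the wrapper has to forward every method it does not change, and a self-call inside the wrapped object bypasses the wrapper entirely.

      // Minimal sketch of composition-based "overriding"; all names are illustrative.
      interface Shape {
          double area();
          String describe();
      }

      class Circle implements Shape {
          private final double radius;
          Circle(double radius) { this.radius = radius; }
          public double area() { return Math.PI * radius * radius; }
          // Self-call: this always uses Circle.area(), even when Circle is wrapped.
          public String describe() { return "circle with area " + area(); }
      }

      // Wrapper that "overrides" describe() but must delegate everything else.
      class LoudShape implements Shape {
          private final Shape inner;
          LoudShape(Shape inner) { this.inner = inner; }
          public double area() { return inner.area(); }  // pointless delegation
          public String describe() { return inner.describe().toUpperCase(); }
      }

    Calling new LoudShape(new Circle(1)).describe() goes through the wrapper, but nothing Circle calls on itself ever does, which is exactly the "deepest instance calls its own methods" problem described above.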

    Read the article

  • Improve Playback Using Enhancements in Windows Media Player 12

    - by DigitalGeekery
    Are you looking for ways to improve the playback of your media in Windows Media Player 12? We'll show you how to do that by using the enhancements in WMP 12. If you are in Library mode, you'll need to click the icon at the lower right to switch to Now Playing mode. Right-click anywhere in Media Player while in Now Playing mode, select Enhancements, and select any of the available options. You can switch between the individual enhancements by clicking the right and left buttons at the top left.
    Crossfading and Auto Volume Leveling: The Auto Volume Leveling setting is a simple on/off toggle; it applies if your MP3 or WMA files have volume leveling information values. You can automatically add volume leveling information values to all files you add to your library by switching to Library view, going to Tools > Options, and selecting "Add volume leveling information values for new files" on the Library tab. Click OK when finished. Crossfading will gradually decrease the volume of the song that is ending (fade out) and increase the volume of the song that is beginning (fade in). Click Turn on Crossfading, then click and drag the slider left or right to change the amount of overlap between tracks.
    Graphic Equalizer: The graphic equalizer is toggled on and off by clicking Turn on / Turn off at the top left. You can select pre-defined equalizer settings by music genre by clicking the Default list. The radio buttons on the left allow you to move the sliders individually, in a loose group, or in a tight group. You can always return to the default settings by clicking Reset.
    Play Speed Settings: Choose a pre-defined setting by clicking Slow, Normal, or Fast. Uncheck "Snap slider to common speeds", then move the slider right and left to your desired speed. If nothing else, these settings provide a little fun and amusement.
    Quiet Mode: Quiet mode will level out any sharp volume highs and lows within a single track. Simply toggle the setting on or off and select whether you prefer Medium difference or Little difference by selecting one of the radio buttons.
    SRS WOW effects: SRS WOW effects enhance low-frequency and stereo sound performance. Click Turn on to enable the TruBass and WOW Effect sliders. You can also optimize for your speaker type: click to switch between Regular, Large, and Headphones.
    Video Settings: Video Settings allow you to adjust the Hue, Brightness, Saturation, and Contrast. You can also adjust the zoom settings by clicking Select video zoom settings.
    Dolby Digital Settings: Choose between Normal, Night, and Theater settings to adjust the audio for Dolby Digital content. This setting will only affect media with Dolby Digital sound.
    Looking for more ways to improve your media experience in WMP 12? Check out how to update metadata and cover art and how to share media with other Windows 7 computers on your home network.

    Read the article

  • IPgallery banks on Solaris SPARC

    - by Frederic Pariente
    IPgallery is a global supplier of converged legacy and Next Generation Networks (NGN) products and solutions, including core network components and cloud-based Value Added Services (VAS) for voice, video and data sessions. IPgallery enables network operators and service providers to offer advanced converged voice, chat, video/content services and rich unified social communications in combined legacy (fixed/mobile), Over-the-Top (OTT) and Social Community (SC) environments for home and business customers. Technically speaking, this offer is a scalable and robust telco solution enabling operators to offer new services while controlling operating expenses (OPEX). In its solutions, IPgallery leverages the following Oracle components: Oracle Solaris, Netra T4 and SPARC T4, in order to provide a competitive and scalable solution without the price tag often associated with high-end systems.
    Oracle Solaris Binary Application Guarantee: A unique feature of Oracle Solaris is the guaranteed binary compatibility between releases of the Solaris OS. That means that if a binary application runs on Solaris 2.6 or later, it will run on the latest release of Oracle Solaris. IPgallery developed their application on Solaris 9 and Solaris 10, then runs it on Solaris 11 without any code modification or rebuild. The Solaris Binary Application Guarantee helps IPgallery protect their long-term investment in the development, training and maintenance of their applications.
    Oracle Solaris Image Packaging System (IPS): IPS is a new repository-based package management system that comes with Oracle Solaris 11. It provides a framework for complete software life-cycle management such as installation, upgrade and removal of software packages. IPgallery leverages this new packaging system in order to speed up and simplify software installation for the R&D and production environments. Notably, they use IPS to deliver Solaris Studio 12.3 packages as part of the rapid installation process of R&D environments, and during the production software deployment phase they ensure software package integrity using the built-in verification feature. Solaris IPS thus improves IPgallery's time-to-market with faster, more reliable software installation and deployment in production environments.
    Extreme Network Performance: IPgallery saw a huge improvement in application performance, both in CPU and I/O, when running on the SPARC T4 architecture compared to UltraSPARC T2 servers. The same application (with the same activation environment) consumes 40%-50% CPU on T2, while it consumes only 10% of the CPU on T4. The testing environment comprised Softswitch (call management), TappS (Telecom Application Server) and a billing server running on the same machine and initiating various services at a capacity of 1000 CAPS (Call Attempts Per Second). In addition, tests showed a huge improvement in the performance of the TCP/IP stack, which reduces network layer processing and, in the end, call attempt latency. Finally, there is a huge improvement within the file system and disk I/O operations; they ran all tests with maximum logging capability and it didn't influence any benchmark values. "Due to the huge improvements in performance and capacity using the T4-1 architecture, IPgallery has engineered the solution with less hardware.
This means instead of deploying the solution on six T2-based machines, we will deploy on 2 redundant machines while utilizing Oracle Solaris Zones and Oracle VM for higher availability and virtualization" Shimon Lichter, VP R&D, IPgallery In conclusion, using the unique combination of Oracle Solaris and SPARC technologies, IPgallery is able to offer solutions with much lower TCO, while providing a higher level of service capacity, scalability and resiliency. This low-OPEX solution enables the operator, the end-customer, to deliver a high quality service while maintaining high profitability.
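
    For readers unfamiliar with IPS, the install-and-verify workflow described above boils down to a pair of commands. This is a hedged sketch; the package name is illustrative rather than IPgallery's actual repository layout.

      # Install a package (e.g. Solaris Studio) from the configured IPS publisher.
      pkg install developer/solarisstudio-123
      # Verify the integrity of the installed package against its manifest.
      pkg verify developer/solarisstudio-123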

    Read the article

  • Silverlight Cream for November 22, 2011 -- #1172

    - by Dave Campbell
    In this Issue: XAMLGeek, WindowsPhoneGeek, Nigel Sampson, Jesse Liberty, Sumit Dutta (-2-), Dave Bost, Jared Bienz, Joost van Schaik, and Michael Crump.
    Above the Fold:
    Silverlight: "10 Laps around Silverlight 5 (Part 7 of 10)" - Michael Crump
    WP7: "Using MVVMLight, ItemsControl, Blend and behaviors to make a 'heads up compass'" - Joost van Schaik
    Metro/WinRT/W8: "'Badevand' for Windows 8" - XAMLGeek
    Shoutouts: Michael Palermo's latest Desert Mountain Developers is up. Michael Washington's latest Visual Studio #LightSwitch Daily is up.
    From SilverlightCream.com:
    "Badevand" for Windows 8 - XAMLGeek posted a Metro app that shows water and air temperature and rain level for 5 beaches in Copenhagen, Denmark... no source, but good to see people posting apps.
    Getting Started with Windows Phone Reminders - WindowsPhoneGeek digs into Reminders in this WP7.1 post... the code you need, description, and a project to download.
    Help my app has been revoked! - Nigel Sampson had a surprise when his latest app was revoked on his device, and then another... read what the solution was.
    A Dozen Windows Phone Videos... And Counting - Jesse Liberty posted his 12th WP7.1 video on Channel 9 - all about Reminders in Mango.
    Part 23 - Windows Phone 7 - Detect Operator and Network Information - Sumit Dutta has 2 more parts to his WP7 quest up... this part 23 is about getting mobile operator information and how to get network capabilities using Microsoft.Phone.Net.DeviceNetworkInformation.
    Part 24 - Windows Phone 7 - Microphone Repeater - In part 24, Sumit Dutta uses the Microsoft.Xna.Framework Microphone class to record and play back voice.
    31 Days of Mango | Day #13: Marketplace Test Kit - Dave Bost is at the helm of Jeff Blankenburg's Day 13 in his 31-day quest, discussing the Marketplace Test Kit and showing how to use it to determine if your app is ready for certification.
    31 Days of Mango | Day #12: Beta Distribution - Jeff Blankenburg's Day 12 is written by guest author Jared Bienz, and shows how to submit an application for beta testing.
    Using MVVMLight, ItemsControl, Blend and behaviors to make a 'heads up compass' - Joost van Schaik has a tutorial up showing how to make a WP7 compass app using MVVMLight and Expression Blend, and then shares his thoughts on using the ItemsControl and Behaviors... code, descriptions and a project to download... and I think I got your name right for the first time, Joost :)
    10 Laps around Silverlight 5 (Part 7 of 10) - Michael Crump put out part 7 of his Silverlight 5 series at SilverlightShow... this is actually part 2 of OS Integration with Silverlight covering, among other things, 64-bit browser support and Power Awareness.
    Stay in the 'Light!

    Read the article

  • Boot Problem in Asus EEE PC 1015CX

    - by Sâmrat VikrãmAdityá
    I am a newbie to the Linux world, although I have previously worked on Ubuntu 11.04 for daily use (net access and simple recordings using Audacity). I am not sure at what level I stand as a newbie. I bought this Asus Eee PC two days back. The model is Asus 1015CX. See the specs here: http://www.flipkart.com/asus-1015cx-blk011w-laptop-2nd-gen-atom-dual-core-1gb-320gb-linux/p/itmd8qu4quzu8srr . I created a live USB to install 12.10. The USB booted fine. When I clicked the "Try Ubuntu" option, it showed me a black screen with a cursor blinking. I waited for 15 minutes and had to restart using the power button. On clicking the "Install Ubuntu" button, the install process went seamlessly. [I have Windows 7 installed on one of the partitions.] I installed it alongside the previous Windows installation. The system was then rebooted for the first time. It showed the GRUB menu and I selected the first option, Ubuntu. After showing the splash screen for a second, it began showing various messages on a black screen and then it got stuck on the "Stopping Save kernel state" message. I had to force-shut the system using the power button. Sometimes it just gives a blank screen with a cursor blinking, and on pressing the power button some messages pop up stating that acpid is doing something and stopping services, and the system shuts down. I tried booting with "nomodeset" and other parameters, as directed in solutions to previous such problems on the forums. Also, Ctrl+Alt+F1,F2,F3,F4,F5,F6..F12 is not doing anything for me anywhere. At installation, I checked the "Login automatically" option. On booting into recovery, several options come up. Clicking resume just gives me a blank screen with a cursor blinking. On dropping to a root shell and remounting the filesystem as RW, I am able to supply some commands that worked for others:
    startx -- several messages come up, with the last one stating "Fatal error: No screen found"
    sudo service lightdm start -- gives a blank screen with a cursor blinking
    lspci | grep VGA -- shows some Intel integrated graphics... something I don't remember
    I have reconfigured xserver-xorg and lightdm, and reinstalled ubuntu-desktop and unity. What should I do..?? Will going back to 11.04 work..?? Or should I give up all hope of running Ubuntu on my netbook? Please help.
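
    (In case it helps others with the same hardware: "nomodeset" can be tested once by editing the kernel line from the GRUB menu, or made persistent with the generic Ubuntu procedure below.)

      sudo nano /etc/default/grub
      # change this line so it includes nomodeset:
      #   GRUB_CMDLINE_LINUX_DEFAULT="quiet splash nomodeset"
      sudo update-grub
      sudo reboot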

    Read the article

  • It's intellisense for SQL Server

    - by Nick Harrison
    Anyone who has ever worked with me, heard me speak, or read any of my writings knows that I am a HUGE fan of Reflector. By extension, I am a big fan of Red Gate. I have recently begun exploring some of their other offerings and came across this jewel: SQL Prompt, a plug-in for Visual Studio and SQL Server Management Studio. It provides several tools to make dealing with SQL a little easier for your friendly neighborhood developer. When you open a query window in a database, the plug-in kicks in and gathers the metadata for the database that you are in. As you type a query, you get handy feedback like a list of tables after you type "select". You can select one of the tables, specify *, and then tab to expand the select clause to include all of the columns from the selected table. As you are building up the where clause, you are prompted with the names of columns in the selected tables. If you spend any time writing ad hoc queries or building stored procedures by hand, this can save you substantial time. If you are learning a new data model, this can greatly cut down on your frustration level. The other really cool thing here is Format SQL. I have searched all over the place for a really good SQL formatter. Badly formatted SQL is so much harder to read than well formatted SQL. Unfortunately, Management Studio offers no support for keeping your SQL well formatted. There are many tools available to format your SQL. Some work better than others; some don't work that well at all. Most will give you some measure of control over how the formatted SQL looks. SQL Prompt produces good results and is easy to configure. Sadly no tool is perfect, and what would we be without a wish list. There are some features that I would like to see: make it easier to paste SQL in and out of code (strip off string builders, etc.); automate replacing hard-coded values with bind variables or parameters; and, in addition to reformatting SQL, which is a huge refactor, support for other SQL refactors would be nice (convert join to subquery and vice versa come to mind). Wish list aside, this is a wonderful tool that easily saves me an hour or more most weeks.
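
    To illustrate the kind of reformatting I mean (with a made-up query, not something shipped with the tool), a formatter turns the first statement below into the second:

      select o.id,o.total,c.name from orders o join customers c on c.id=o.customer_id where o.total>100 order by o.total desc;

      SELECT o.id,
             o.total,
             c.name
      FROM   orders o
             JOIN customers c ON c.id = o.customer_id
      WHERE  o.total > 100
      ORDER  BY o.total DESC;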

    Read the article

  • VSDB to SSDT Part 1 : Converting projects and trimming excess files

    - by Etienne Giust
    Visual Studio 2012 introduces a change regarding database projects: they now use the SSDT technology, which means old VS2010 database projects (VSDB projects) need to be converted. Fortunately, VS2012 does that for you and it is quite painless, but in my case some unnecessary artifacts from the old project were left in place. Also, when reopening the solution, database projects appeared unconverted even though I had converted them in the previous session and saved the solution.
    Converting the project(s): When opening your Visual Studio 2010 solution with Visual Studio 2012, every standard project should be converted by default, but Visual Studio will ask you about your database projects: "Functional changes required. Visual Studio will automatically make functional changes to the following projects in order to open them. The project behavior will change as a result. You will be able to open these projects in this version and Visual Studio 2010 SP1." If you accept, your project is converted. And it should compile with no errors right away, except if you have dependencies on dbschema files, which are no longer supported. The output of an SSDT project is a dacpac file, which replaces the dbschema file you were accustomed to. References to dacpac files can be added to SSDT projects in the same fashion that references to dbschema files could be added to VSDB projects.
    Cleaning up: You will notice that your project file is now a sqlproj file, but the old dbproj is still there. In fact, at that point you can still reopen the solution in Visual Studio 2010 and everything should show up. If, like me, you plan on using VS2012 exclusively, you can get rid of the following files, which are still on your disk and in your source control: the dbproj and dbproj.vspscc files, Properties/Database.sqlcmdvars, Properties/Database.sqldeployment, Properties/Database.sqlpermissions, and Properties/Database.sqlsettings.
    You might wonder where the information which used to be in the Properties files is now stored. Permissions: a Permissions.sql file was created at the root level of your project. Note that when you create a new database project and import a database using the Schema Compare capabilities from Visual Studio, imported table and stored procedure definition files will hold the permission information (along with constraints and indexes). SQLVars: they are defined inside the publish.xml files, as sketched below. Deployment: these settings are also in the publish.xml files. Settings: I was unable to find where those are now; I suppose they are not defined anymore.
    But Visual Studio still says my database projects should be converted! I had this error upon closing and then reopening the solution: my database projects would appear unconverted even though I did all the necessary steps previously. Easy solution: remove those projects from the solution and add them again (the sqlproj files).
    More: For those who run into problems when converting from VSDB to SSDT, I suggest reading the following post: http://blogs.msdn.com/b/ssdt/archive/2011/11/21/top-vsdb-gt-ssdt-project-conversion-issues.aspx Also interesting is a side-by-side comparison of VSDB and SSDT project features: http://blogs.msdn.com/b/ssdt/archive/2011/11/21/sql-server-data-tools-ctp4-vs-vs2010-database-projects.aspx
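
    As an illustration of the SQLVars point above, here is a rough sketch of what an SSDT publish profile (publish.xml) can look like; the database name, connection string and variable are illustrative, not taken from a real project.

      <?xml version="1.0" encoding="utf-8"?>
      <Project ToolsVersion="4.0" xmlns="http://schemas.microsoft.com/developer/msbuild/2003">
        <PropertyGroup>
          <TargetDatabaseName>MyDatabase</TargetDatabaseName>
          <TargetConnectionString>Data Source=.;Integrated Security=True</TargetConnectionString>
        </PropertyGroup>
        <ItemGroup>
          <!-- Plays the role of an entry from the old Database.sqlcmdvars file -->
          <SqlCmdVariable Include="Environment">
            <Value>Dev</Value>
          </SqlCmdVariable>
        </ItemGroup>
      </Project>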

    Read the article

  • Bad Data is Really the Monster

    - by Dain C. Hansen
    "Bad Data Is Really the Monster" is an article written by Bikram Sinha, from whom I borrowed the title and the inspiration for this blog. Sinha writes: "Bad or missing data makes application systems fail when they process order-level data. One of the key items in the supply-chain industry is the product (aka SKU). Therefore, it becomes the most important data element to tie up multiple merchandising processes including purchase order allocation, stock movement, shipping notifications, and inventory details... Bad data can cause huge operational failures and cost millions of dollars in terms of time, resources, and money to clean up and validate data across multiple participating systems." Yes, bad data really is the monster, so what do we do about it? Close our eyes and hope it stays in the closet? We've tackled this problem for some years now at Oracle, and our latest introduction of Oracle Enterprise Data Quality, along with our integrated Oracle Master Data Management products, provides a complete, best-in-class answer to the bad data monster. What's unique about it? Oracle Enterprise Data Quality combines powerful data profiling, cleansing, matching, and monitoring capabilities while offering unparalleled ease of use. What makes it unique is that it has dedicated capabilities to address the distinct challenges of both customer and product data quality [different monsters have different needs, of course!]. And the ability to profile data is just as important, to identify and measure poor quality data and identify new rules and requirements. Included are semantic and pattern-based recognition to accurately parse and standardize data that is poorly structured. Finally, all of the data quality components are integrated with Oracle Master Data Management, including Oracle Customer Hub and Oracle Product Hub, as well as Oracle Data Integrator Enterprise Edition and Oracle CRM. Want to learn more? On Tuesday, Nov 15th, I invite you to listen to our webcast, "Reduce ERP consolidation risks with Oracle Master Data Management". I'll be joined by our partner iGate Patni and will be talking about one specific way to deal with the bad data monster, specifically around ERP consolidation. Look forward to seeing you there!

    Read the article

  • Ed Burns' Servlet 4/HTTP 2 Session at JavaOne

    - by Yolande Poirier
    By Guest Blogger Reza Rahman. For the Java EE track at JavaOne 2014 we are highlighting some key sessions and speakers to better inform you of what you can expect, right up until the start of the conference. To this end we recently interviewed Ed Burns. Ed is a veteran of Sun and now Oracle. He has been, and is, instrumental in pushing the JSF ecosystem forward as specification lead. Besides his specification lead work, Ed is well regarded as an author and speaker in his own right. In addition to carrying the JSF torch, Ed will be co-leading the key Servlet 4 specification for Java EE 8, along with Servlet specification guru Shing Wai Chan. The primary goal of Servlet 4 is to enable the fundamentally important changes in HTTP 2 for the entire server-side Java ecosystem. We wanted to talk to Ed about his Servlet 4 session at JavaOne 2014 and HTTP 2 generally. The details for the Servlet 4 session can be found here. Ed has several other key sessions on the track that we hope to talk to him about separately in the near future: What's Next for JSF?: In this key session, Ed will be sharing the next steps for the continued evolution of the JSF specification in Java EE 8. Where's My UI? The 2014 JavaOne Web App UI Smackdown: The UI space for web applications, especially in the Java ecosystem, continues to be as hotly contested as ever. This is especially true with the (re)introduction of JavaScript-based rich client frameworks like AngularJS. This lively panel brings together experts representing the diverse schools of thought on web UIs. Ed will be representing JSF, of course. Neal Ford will moderate the panel as an independent and hopefully reasonably neutral party. Adopt-a-JSR for Java EE 7 and Java EE 8: Adopt-a-JSR has been a reasonable success for Java EE 7. With Java EE 8 we are planning to strengthen it far more as a way of getting grassroots-level participation in the specification efforts. This session will introduce Adopt-a-JSR, share how it worked for Java EE 7 and what we plan to do with it in Java EE 8. Ed will be sharing his perspectives on Adopt-a-JSR for both Java EE 7 and Java EE 8. Besides Ed's sessions, we have a very strong program for the Java EE track and JavaOne overall; just explore the content catalog. If you can't make it, you can be assured that we will make key content available after the conference, just as we have always done.

    Read the article

  • Oracle Data Integration 12c: Perspectives of Industry Experts, Customers and Partners

    - by Irem Radzik
    As you may have seen from our recent blog posts on Oracle Data Integrator 12c and Oracle GoldenGate 12c, we are very excited to share with you the great new features the 12c release brings to Oracle's data integration solutions. And, fortunately, we are not alone in this sentiment. Since the press announcement on October 17th, which incorporates our customers' and experts' testimonials, we have seen positive comments in leading technology publications and social media as well. Here are some examples: In CIO and PCWorld you can find Joab Jackson's article, "Oracle Data Integrator 12c ready for real-time analysis", where he wrote about the tight integration between Oracle Data Integrator and Oracle GoldenGate. He noted, "Heeding the call from enterprise customers who clamor for more immediacy in their data-driven reports, Oracle has updated its data-integration software portfolio so that it can more rapidly deliver data to data warehouses and analysis applications." Integration Developer News' Vance McCarthy wrote the article "Oracle Ships 'Future Proofs' Integration Tools for Traditional, Cloud, Big Data, Real-Time Projects" and mentioned that "Oracle Data Integrator 12c and Oracle GoldenGate 12c sport a wide range of improvements to let devs more easily deliver data integration for cloud, analytics, big data and other new projects that leverage multiple datasets for business." InformationWeek's Doug Henschen gave a great overview of several key features, including the new flow-based UI in Oracle Data Integrator. Doug said, "Oracle Data Integrator 12c introduces a complete makeover of the job-building experience, while real-time oriented GoldenGate 12c introduces performance gains." Database Trends and Applications' article "Oracle Strengthens Data Integration with Release of Oracle Data Integrator 12c and Oracle GoldenGate 12c" highlighted the productivity aspect of the new solution: "tight integration between Oracle Data Integrator 12c and Oracle GoldenGate 12c enables developers to leverage Oracle GoldenGate's low overhead, real-time change data capture completely within the Oracle Data Integrator Studio without additional training." We are also thrilled about what our customers and partners have to say about our products and the new release. And we are equally excited to share those perspectives with you in our upcoming launch video webcast on November 12th. SolarWorld Industries America's Senior Database Manager, Russ Toyama, will join our executives in our studio in Redwood Shores to discuss GoldenGate's core benefits and the new release, while Surren Partharb, CTO of Strategic Technology Services for BT, and Mark Rittman, CTO of Rittman Mead, will provide their comments via interviews conducted in the UK. This interactive panel discussion in the video webcast will unveil the new release with the expertise of our development executives and great insight from our customers and partners. In addition, our product experts will be available online to answer chat questions. This is really a great opportunity to learn how Oracle's data integration offering has changed the integration and replication technology space with the new release, and established itself as the new leader. If you have not registered for this free event yet, you can do so via this link. We will run the live event at 8am PT/4pm GMT, followed by a replay of the event with live chat for Q&A at 10am PT/6pm GMT.
The replay will be available on-demand for those who register but cannot attend either session on November 12th.

    Read the article

  • Book Review (Book 11) - Applied Architecture Patterns on the Microsoft Platform

    - by BuckWoody
    This is a continuation of the books I challenged myself to read to help my career - one a month, for a year. You can read my first book review here, and the entire list is here. The book I chose for April 2012 was Applied Architecture Patterns on the Microsoft Platform. I was traveling at the end of last month, so I'm a bit late posting this review here. Why I chose this book: I actually know a few of the authors of this book, so when they told me about it I wanted to check it out. The premise of the book is exactly as it states in the title: to learn how to solve a problem using products from Microsoft. What I learned: I liked the book - a lot. They've arranged the content in a "Solution Decision Framework" that presents a few elements to help you identify a need and then propose alternate solutions to solve it, along with the rationale for the choice. But the payoff is that the authors then walk through the solution they implement and what they ran into doing it. I really liked this approach. It's not a huge book, but one I've referred to again since I've read it. It's fairly comprehensive, and includes server-oriented products, not things like Microsoft Office or other client-side tools. In fact, I would LOVE to have a work like this for open source and other vendors as well - it would make for a great library for a systems architect. This one is unashamedly aimed at the Microsoft products, and even if I didn't work here, I'd be fine with that. As I said, it would be interesting to see some books on other platforms like this, but I haven't run across something that presents other systems in quite this way. And that brings up an interesting point: this book is aimed at folks who create solutions within an organization. It's not aimed at administrators, DBAs, developers or the like, although I think all of those audiences could benefit from reading it. The solutions are made up, and not covered to a huge level of depth - nor should they be. It's a great exercise in thinking these kinds of things through in a structured way. The information is a bit dated, especially for Windows and SQL Azure. While the general concepts hold, the cloud platform from Microsoft is evolving so quickly that any printed book finds it hard to keep up with the improvements. I do have one quibble with the text: the chapters are a bit uneven. This is always a danger with multiple authors, but it shows up in a couple of chapters. I winced at one of the chapters that tried to take a more conversational, humorous style. This kind of academic work doesn't lend itself to that style. I recommend you get the book - and use it. I hope they keep it updated - I'll be a frequent customer. :)

    Read the article

  • MythTV lost recordings - "No recordings available" and no recording rules either

    - by nimasmi
    I have a c.6-year-old MythTV database. I recently upgraded from Ubuntu 10.04 to 12.04. This brought a MythTV upgrade from 0.24 to 0.25, which went well. Today, all my recordings have disappeared. They still exist in the /var/lib/mythtv/recordings folder, and the 'M' key in the Watch Recordings page says that there are 201 recordings available somewhere, but they will not display. See screenshot (implicit thanks to whomever upvoted this, giving me sufficient reputation to upload images). Changing the filter does not remedy the fact that there is nothing shown in the lists. My Upcoming Recordings screen says that there are no rules set, but my list of previously recorded shows is still there, and has an entry from as recently as 3am today. mythbackend --printsched gives the following:

      user@box:~$ mythbackend --printsched
      2012-09-22 12:59:20.537008 C mythbackend version: fixes/0.25 [v0.25.2-15-g46cab93] www.mythtv.org
      2012-09-22 12:59:20.537043 C Qt version: compile: 4.8.1, runtime: 4.8.1
      2012-09-22 12:59:20.537048 N Enabled verbose msgs: general
      2012-09-22 12:59:20.537076 N Setting Log Level to LOG_INFO
      2012-09-22 12:59:20.537142 I Added logging to the console
      2012-09-22 12:59:20.537152 I Added database logging to table logging
      2012-09-22 12:59:20.537279 N Setting up SIGHUP handler
      2012-09-22 12:59:20.537373 N Using runtime prefix = /usr
      2012-09-22 12:59:20.537394 N Using configuration directory = /home/user/.mythtv
      2012-09-22 12:59:20.537999 I Assumed character encoding: en_GB.UTF-8
      2012-09-22 12:59:20.538599 N Empty LocalHostName.
      2012-09-22 12:59:20.538610 I Using localhost value of box
      2012-09-22 12:59:20.538792 I Testing network connectivity to '192.168.1.2'
      2012-09-22 12:59:20.539420 I Starting process manager
      2012-09-22 12:59:20.541412 I Starting IO manager (read)
      2012-09-22 12:59:20.541715 I Starting IO manager (write)
      2012-09-22 12:59:20.541836 I Starting process signal handler
      2012-09-22 12:59:20.684497 N Setting QT default locale to EN_GB
      2012-09-22 12:59:20.684694 I Current locale EN_GB
      2012-09-22 12:59:20.684813 N Reading locale defaults from /usr/share/mythtv//locales/en_gb.xml
      2012-09-22 12:59:20.697623 I New static DB connectionDataDirectCon
      2012-09-22 12:59:20.704769 I MythCoreContext: Connecting to backend server: 192.168.1.2:6543 (try 1 of 1)
      Calculating Schedule from database.
      Inputs, Card IDs, and Conflict info may be invalid if you have multiple tuners.
      2012-09-22 12:59:27.710538 E MythSocket(21dfcd0:14): readStringList: Error, timed out after 7000 ms.
      2012-09-22 12:59:27.710592 C Protocol version check failure.
      The response to MYTH_PROTO_VERSION was empty. This happens when the backend is too busy to respond, or has deadlocked in due to bugs or hardware failure.

    Things I have tried so far:
    - restart the backend
    - restart the frontend
    - run mythtv-setup and check database passwords and IP addresses
    - change the frontend setting for backend IP from localhost to 192.168.1.2 (the backend/frontend's IP)
    - run optimize_mythdb.pl
    Other suggestions appreciated.
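
    (One hedged check that may help whoever answers: MythTV stores recording metadata in the recorded table and recording rules in the record table of the mythconverg database, so querying them directly shows whether the data itself survived the upgrade. Credentials are whatever is in config.xml/mysql.txt.)

      mysql -u mythtv -p mythconverg -e "SELECT COUNT(*) FROM recorded;"
      mysql -u mythtv -p mythconverg -e "SELECT COUNT(*) FROM record;"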

    Read the article

  • Laptop monitor stopped working and can't be re-enabled on a Dell Latitude E6410

    - by xektrum
    I'm using Ubuntu 12.04 (upgraded from 11.10); everything seemed to work fine until today, when my laptop monitor suddenly stopped working. Here are the facts: My laptop is a Dell Latitude E6410 with Intel graphics. The external monitor is attached through a docking station. Everything worked fine for about 6-7 months, then I upgraded to 12.04. The issue started today, about a week after the upgrade. I think the issue started after I ran Counter-Strike 1.6: both monitors blinked, and then only the attached monitor, which is connected to the docking station, continued to work. I thought at first that was a transient issue, but then I rebooted and removed the battery, and the same happens. The laptop monitor and external monitor work fine up to the login screen, but after I log in the laptop monitor goes black. Whenever I try to re-enable the laptop monitor from the Display Manager I get the errors "The selected configuration for displays could not be applied" and "could not set the configuration for CRTC 63". Not sure what technical details are required, but here are some:

      $ xrandr
      Screen 0: minimum 320 x 200, current 3120 x 1050, maximum 8192 x 8192
      eDP1 connected (normal left inverted right x axis y axis)
         1440x900       60.0 +   40.0
      VGA1 disconnected (normal left inverted right x axis y axis)
      HDMI1 connected 1680x1050+0+0 (normal left inverted right x axis y axis) 474mm x 296mm
         1680x1050      60.0*+
         1280x1024      75.0     60.0
         1152x864       75.0
         1024x768       75.1     60.0
         800x600        75.0     60.3
         640x480        75.0     60.0
         720x400        70.1
      DP1 disconnected (normal left inverted right x axis y axis)
      HDMI2 disconnected (normal left inverted right x axis y axis)
      DP2 disconnected (normal left inverted right x axis y axis)

      $ tail /var/log/Xorg.0.log
      [ 8367.132] (WW) intel(0): flip queue failed: Device or resource busy
      [ 8367.132] (WW) intel(0): Page flip failed: Device or resource busy
      [ 8367.174] (WW) intel(0): flip queue failed: Device or resource busy
      [ 8367.174] (WW) intel(0): Page flip failed: Device or resource busy
      [ 8367.174] (WW) intel(0): flip queue failed: Device or resource busy
      [ 8367.174] (WW) intel(0): Page flip failed: Device or resource busy
      [ 8367.265] (WW) intel(0): flip queue failed: Device or resource busy
      [ 8367.265] (WW) intel(0): Page flip failed: Device or resource busy
      [ 8367.265] (WW) intel(0): flip queue failed: Device or resource busy
      [ 8367.265] (WW) intel(0): Page flip failed: Device or resource busy

    I'm using gnome-shell, and the only ways I've been able to get both displays working have been: 1) booting with the laptop disconnected from the dock and then re-attaching the external monitor with VGA instead of DVI, but that only worked for a session; and 2) removing xserver-xorg-video-intel, but then gnome-shell is gone, as well as DRI. I would appreciate any suggestions. Regards,
    ============================= WORKAROUND FOUND =============================
    So I have tried a few things and here is what worked: I installed a newer version of xserver-xorg-video-intel (2.19 vs 2.17) from ppa:xorg-edgers/ppa. It didn't work at first (it was only showing low-graphics mode), so I tried a different linux-image, 3.0.0-19-generic-pae instead of 3.2.0-24-generic-pae, which I believe is the 12.04 precise default, and then everything started to work again. Now I've installed 3.4.0-1-generic-pae from the same ppa and everything goes flawlessly, so I believe the issue is either with linux-image 3.2.0-24-generic-pae or with xserver-xorg-video-intel 2.17. Hope this helps someone in the future.
    PS: Now xrandr shows multiple modes for my laptop monitor:

      $ xrandr
      Screen 0: minimum 320 x 200, current 3120 x 1050, maximum 8192 x 8192
      eDP1 connected 1440x900+1680+0 (normal left inverted right x axis y axis) 303mm x 189mm
         1440x900       60.0*+   59.9     40.0
         1360x768       59.8     60.0
         1152x864       60.0
         1024x768       60.0
         800x600        60.3     56.2
         640x480        59.9
      VGA1 disconnected (normal left inverted right x axis y axis)
      HDMI1 connected 1680x1050+0+0 (normal left inverted right x axis y axis) 474mm x 296mm
         1680x1050      60.0*+
         1280x1024      75.0     60.0
         1152x864       75.0
         1024x768       75.1     60.0
         800x600        75.0     60.3
         640x480        75.0     60.0
         720x400        70.1
      DP1 disconnected (normal left inverted right x axis y axis)
      HDMI2 disconnected (normal left inverted right x axis y axis)
      DP2 disconnected (normal left inverted right x axis y axis)
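
    (For reference, the workaround above amounts to the following commands; the exact driver and kernel versions pulled from the ppa will vary over time.)

      sudo add-apt-repository ppa:xorg-edgers/ppa
      sudo apt-get update
      sudo apt-get install xserver-xorg-video-intel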

    Read the article

  • PayPal India Problems Continues

    - by Ravish
    The Reserve Bank of India has been giving PayPal and its users in India a hard time. RBI had previously blocked PayPal transactions in India a few times, and made it difficult to withdraw payments by enforcing exports and forex related compliance. Here is yet more bad news for Indian PayPal users. With effect from March 1st, Indian users cannot receive payments of more than $500 per transaction into their PayPal accounts. Moreover, they cannot keep or use any funds in their PayPal accounts: you cannot use your PayPal balance to pay for any goods or services, and you must withdraw it to your bank account within 7 days of receipt. These changes have rendered PayPal almost useless for small businesses, webmasters and publishers. Most webmasters and publishers rely on PayPal to receive payments from advertisers and clients. It has also made it impossible to buy anything online with PayPal. Sending payments abroad via other channels is already a pain; sending a bank wire requires too many formalities, documentation and time. Moreover, you are even required to deduct TDS on payments you make for any products or services. The restrictions take effect on March 1st, so you have 30 days to complete any pending transactions you may have. This step by RBI is yet another gimmick by a corrupt Indian government to make life difficult for entrepreneurs, kill innovation, slap on more taxes and create more channels for bribes. Following is the notification from PayPal about this issue: "As part of our commitment to provide a high level of customer service, we would like to give you a 30-day advance notice on changes to our user agreement for India. With effect from 1 March 2011, you are required to comply with the requirements set out in the notification of the Reserve Bank of India governing the processing and settlement of export-related receipts facilitated by online payment gateways ("RBI Guidelines"). In order to comply with the RBI Guidelines, our user agreement in India will be amended for the following services as follows: Any balance in and all future payments into your PayPal account may not be used to buy goods or services and must be transferred to your bank account in India within 7 days from the receipt of confirmation from the buyer in respect of the goods or services; and export-related payments for goods and services into your PayPal account may not exceed US$500 per transaction. We seek your understanding as we continue to employ our best efforts to comply with the RBI Guidelines in a timely manner."

    Read the article

  • SQL SERVER – Why Do We Need Master Data Management – Importance and Significance of Master Data Management (MDM)

    - by pinaldave
    Let me paint a picture of everyday life for you. Let's say you and your wife both have address books for your groups of friends. There is definitely overlap between them, so that you both have the addresses of your mutual friends, there are addresses that only you know, and some only she knows. They also might be organized differently. You might list your friend under "J" for "Joe" or even under "W" for "Work," while she might list him under "S" for "Joe Smith" or under your name because he is your friend. If you happened to trade, neither of you would be able to find anything! This is where data management becomes very important. If you were to consolidate into one address book, you would have to set rules about how to organize the book, and both of you would have to follow them. You would also make sure that poor Joe doesn't get entered twice, under "J" and under "S." This might be a familiar situation to you, whether you are thinking about address books, record collections, books, or even shopping lists. Wherever there is a lot of data to consolidate, you are going to run into problems unless everyone is following the same rules. I'm sure that my readers can figure out where I am going with this. What is SQL Server but a computerized way to organize data? And Microsoft is making it easier and easier to get all your "addresses" into one place. In the SQL Server 2008 R2 version they introduced a new tool called Master Data Services (MDS) for Master Data Management, and they have improved it for the new 2012 version. MDM was hailed as a major improvement for business intelligence. You might not think that an organizational system is terribly exciting, but think about the kind of "address books" a company might have. Many companies have lots of important information, like addresses, credit card numbers, purchase history, and so much more. Organizing all this efficiently, so that customers are well cared for and properly billed (only once, not never or multiple times!), is a major part of business intelligence. MDM comes into play because it will comb through these mountains of data and make sure that all the information is consistent, accurate, and all placed in one database so that employees don't have to search high and low and waste their time. MDM also has operational MDM functions. This is not a redundancy. Operational MDM means that when one employee updates one bit of information in the database, for example updating a new address for a customer, operational MDM ensures that this address is updated throughout the system so that all departments will have the correct information. Another cool thing about MDM is that it features Master Data Services Configuration Manager, which is exactly what it sounds like: a built-in "helper" that lets you set up your database quickly, easily, and with the correct configuration. While talking about cool features, I can't skip over the add-in for Excel. This allows you to link certain data to Excel files for easier sharing and uploading. In summary, I want to emphasize that the scariest part of the database is slowly disappearing. Everyone knows that a database, one consolidated area for all your data, is a good idea, but the idea of setting one up is daunting. SQL Server is making data management easier and easier with features like Master Data Services (MDS).
Reference: Pinal Dave (http://blog.SQLAuthority.com)

    Read the article

  • TechEd 2010 Day Three: The Database Designer (Isn't)

    - by BuckWoody
    Yesterday at TechEd 2010 here in New Orleans I worked the front booth, answering general SQL Server questions for the masses. I was actually a little surprised to find that most of the questions I got were from folks who wanted to know more about StreamInsight and Master Data Services. At past conferences I've been asked a lot of "free consulting" questions about problems folks have had with older products. I don't mind that a bit; in fact, I'm always happy to help in any way I can. But this time people are really interested in the new features in the product, and I like that they are thinking ahead, not just having to solve problems in production. My presentation was on "Database Design in an Hour". We had the usual fun, and Sideshow Bob made an appearance - I kid you not. The guy in the back of the room looked just like Sideshow Bob, so I quickly held a "best hair" contest, and he won. During the presentation, I explained the tools you can use to design databases. I also explained that the "Database Designer" tool in SQL Server Management Studio (SSMS) isn't truly a designer: it uses non-standard notation, doesn't have a metadata dictionary, and, worst of all, it works at the physical level. In other words, whatever you do in SSMS will automatically change the field/table/relationship structures in the database. We fixed this in SSMS 2008 and higher by adding an option to block that, but the tool is not a good design tool nonetheless. To be fair, no one I know of at Microsoft recommends that it is - but I was shocked to hear so many developers in the room defending it as a good tool. I think the main issue for someone who doesn't have to work with relational systems a great deal is that it can be difficult to figure out foreign keys. The syntax makes them look "backwards", so it's just easier to grab a field and place it on the table you want to point to. There are options. You can download a couple of free tools (CA has a community edition of ERwin, Quest has one, and Embarcadero also has one), and if you design more than one or two databases a year, it may be worth buying a true design tool. For years I used Visio, but we changed it so that it doesn't forward-engineer (create the DDL) any more, so it isn't a true design tool either. So investigate those free and not-so-free tools. You'll find they help you in your job - but stay away from the Database Designer in SSMS. Or I'll send Sideshow Bob over there to straighten you out.

    Read the article

  • Oracle OpenWorld Update: Demo Pods and Hands-on Labs

    - by Doug Reid
    Less than one week until the start of Oracle OpenWorld 2012, and the Data Integration Solutions team is ready to go! We have an exciting lineup for you this year, which we have summarized in the Oracle OpenWorld Focus on Data Integration Solutions document. In past posts we have discussed session themes and our customer panel, but today I would like to summarize the Hands-on Labs and demo pods that we have available for attendees. For Oracle GoldenGate Hands-on Labs, we have two labs that we are running this year.
    Deep Dive into Oracle GoldenGate - Thursday, October 4th at 11:15 AM in the Marriott Marquis Salon 1/2. Oracle GoldenGate provides real-time, log-based change data capture and delivery between heterogeneous systems. It enables cost-effective, low-impact, real-time data integration and continuous availability solutions. This session covers Oracle GoldenGate 11g's internal product architecture and includes a hands-on lab that covers configuration examples for target database instantiation and real-time change data capture and delivery. The participants will configure Oracle GoldenGate to instantiate a secondary database that can be used for disaster recovery or a reporting instance. Come learn how easy it is to use and how this can be a very valuable and easy technology solution for your organization.
    Introduction to Oracle GoldenGate Veridata - Wednesday, October 3rd at 10:15 AM in the Marriott Marquis Salon 1/2. Oracle GoldenGate Veridata compares one set of data with another and identifies data that is out of synchronization. In this hands-on lab, you will be introduced to the key features of this product. Using the Oracle GoldenGate Veridata web client, you will have the opportunity to configure comparison objects and rules, initiate a comparison, review the status and output of a comparison, and review out-of-sync data.
    As a bonus this year, we have recorded the labs and made them available on youtube.com/oraclegoldengate. These will be available the day of the labs. Our demo pods are an opportunity for attendees to see our products, but more so to meet the product management and development teams. I would like to point out that we have two Oracle GoldenGate 11gR2 demo pods, one in the database camp and the other in the middleware camp. The one in the middleware camp will be focused on all platforms, while the one in the database camp will have a focus on the Oracle platform. The other two I would like to point out are the Monitoring Oracle GoldenGate and the Oracle Enterprise Manager demo pods; both of these pods will focus on methods to monitor GoldenGate, but the OEM demo pod will have a specific focus on the Oracle GoldenGate Management Pack plug-in for OEM. Below is a list of our demo pods and their locations.
    Monitoring Oracle GoldenGate for End-to-End Visibility | Moscone South, Right - S-241
    Oracle Data Integrator and Oracle GoldenGate for Oracle Applications | Moscone South, Right - S-240
    Oracle GoldenGate 11gR2 New Features | Moscone South, Right - S-239
    Oracle GoldenGate 11gR2: Real-Time, Transactional Database Replication | Moscone South, Left - S-027
    Oracle GoldenGate Veridata and Adapters | Moscone South, Right - S-242
    Oracle Enterprise Manager | Moscone South, Left - S-040

    Stay tuned to our blog during the show for news and highlights from the Data Integration Solutions team. See you there.

    Read the article

  • Visual Studio 2010 Best Practices

    - by Etienne Tremblay
    I'd like to thank Packt for providing me with a review copy of the Visual Studio 2010 Best Practices eBook. In fairness, I also know the author, Peter, having seen him speak at DevTeach on many occasions. I started by looking at the table of contents to see what this book was about; knowing that "best practices" is a real misnomer, I wanted to see what they were. I really like the fact that he starts the book by saying they are not really best practices but rather recommended practices.

    As a Team Foundation Server user I found that chapter 2 was more for the open source crowd, and I really skimmed it. The portion on branching was well documented - although I'm not a fan of the testing branch myself - but the rest was right on. The section on the merge-remote-changes (bring the outside to you) paradigm is really important and was touched on.

    Chapter 3 has good, solid practices on low-level constructs like generics and exceptions. Chapter 4 dives into architectural practices like decoupling, distributed architecture, and data-based architecture. DTOs and ORMs are touched on briefly, as is NoSQL.

    Chapter 5 is about deployment and is really a great primer on all the "packaging" technologies, like Visual Studio Setup and Deployment (deprecated in 2012), ClickOnce, and WiX, the major player outside of commercial solutions. There is a nice section on how to move from VSSD to WiX; this is going to be important in the coming years because VS 2012 doesn't support VSSD.

    In chapter 6 we dive into automated testing practices, including test coverage, mocking, TDD, SpecDD, and continuous testing. Peter covers all those concepts nicely, albeit succinctly. For a book on recommended practices I find this is really good.

    I really enjoyed chapter 7, which gave me a lot of great tips to enhance my Visual Studio "experience". The tips on organizing projects were good. Also, even though I knew about exporting configurations, I like that he put that in there so you can move all your settings to another machine; a lot of people don't know about that. Quick Find and ReSharper are also briefly covered. He touches on macros (deprecated in 2012). Finally he touches on continuous integration, a very important concept in today's ALM landscape.

    Chapter 8 is all about parallelization: threads, async, division of labor, Reactive Extensions. All those concepts are touched on, and again generalized approaches to those modern problems are given.

    Chapter 9 goes into distributed apps, the most used and accepted practice in the industry for .NET projects. The chapter tackles concepts like scalability, messaging, and cloud (the flavor of the month in distributed apps, although I think this one will stick ;-)). He also looks at protocols (TCP/UDP) and how to debug distributed apps. He touches on logging and health monitoring.

    Chapter 10 tackles recommended practices for web services, starting with implementing WCF services. It goes into all sorts of goodness, like how to host in IIS or self-host and how to manually test WCF services; there is also a section on authentication and authorization. ASP.NET web services are also touched on in that chapter.

    All in all a good read, with nice tips and accepted practices. I like the conciseness of the subjects; Peter touches on a lot of things in this book and uses a lot of the current technology flavors to explain the concepts.

    Cheers, ET

    Read the article

  • Did I lose my RAID again?

    - by BarsMonster
    Hi! A little history: two years ago I was really excited to find out that mdadm is so powerful it can even reshape arrays, so you can start with a smaller array and then grow it as you need. I bought three 1TB drives and built a RAID-5. It was fine for a year. Then I bought two more drives and tried to reshape to a 5-drive RAID-6, and due to some mess with superblock versions I lost all the content. I had to rebuild it from scratch, but 2TB of data were gone.

    Yesterday I bought two more drives, and this time I had everything: a properly built array and a UPS. I disabled the write-intent bitmap, added the two new drives as spares, and ran a command to grow the array to 7 disks. It started working, but the speed was ridiculously slow, ~100KB/sec. After processing the first 37MB at such an amazing speed, one of the old HDDs failed. I properly shut down the PC and disconnected the failed drive. After bootup it appeared that the write-intent bitmap had been recreated, since it was still in the mdadm config, so I removed it from the config and rebooted again. Now all I see is that all the mdadm processes are deadlocked and don't do anything:

      PID USER      PR  NI  VIRT  RES  SHR S %CPU %MEM    TIME+  COMMAND
     1937 root      20   0 12992   608   444 D    0  0.1   0:00.00 mdadm
     2283 root      20   0 12992   852   704 D    0  0.1   0:00.01 mdadm
     2287 root      20   0     0     0     0 D    0  0.0   0:00.01 md0_reshape
     2288 root      18  -2 12992   820   676 D    0  0.1   0:00.01 mdadm

    And all I see in mdstat is:

    $ cat /proc/mdstat
    Personalities : [linear] [multipath] [raid0] [raid1] [raid6] [raid5] [raid4] [raid10]
    md0 : active raid6 sdb1[1] sdg1[4] sdf1[7] sde1[6] sdd1[0] sdc1[5]
          2929683456 blocks super 1.2 level 6, 1024k chunk, algorithm 2 [7/6] [UU_UUUU]
          [>....................] reshape = 0.0% (37888/976561152) finish=567604147.2min speed=0K/sec

    I've already tried mdadm 2.6.7, 3.1.4, and 3.2 - nothing helps. Did I lose my data again? Any suggestions on how I can make it work? The OS is Ubuntu Server 10.04.2...

    PS. Needless to say, the data is inaccessible - I cannot mount /dev/md0 to save the most valuable files. You can see my disappointment - the very specific feature I was excited about has failed twice, taking 5TB of my data with it.

    Read the article

  • Enterprise MDM: Rationalizing Reference Data in a Fast Changing Environment

    - by Mala Narasimharajan
    By Rahul Kamath

    Enterprises must move at a rapid pace to establish and retain global market leadership by continuously focusing on operational efficiency, customer intimacy, and relentless execution.

    Reference Data Management

    For multi-national companies with a presence in multiple industry categories, market segments, and geographies, the ability to proactively manage changes and harness them to align the front office with back-office operations and performance management initiatives is critical to making the proverbial elephant dance. Managing reference data, including types and codes, business taxonomies, complex relationships, and mappings, represents a key component of the broader agenda for enabling flexibility and agility without sacrificing enterprise-level consistency, regulatory compliance, and control.

    Financial Transformation

    Periodically, companies find that processes implemented a decade or more ago no longer mirror the way they do business, and they seek to proactively transform how they operate and the processes underneath. Financial transformation often begins with the redesign of one's chart of accounts. The ability to model and redesign the chart of accounts collaboratively, quickly validate it against historical transaction bases, and secure business buy-in across multiple line-of-business stakeholders, all while continuing to manage changes within the legacy general ledger systems and downstream analytical applications as the in-flight transformation is piloted, can mean the difference between controlled success and project failure.

    Attend the session titled CON8275 - Oracle Hyperion Data Relationship Management: Enabling Enterprise Transformation, at Oracle OpenWorld on Monday, October 1, 2012 at 4:45pm in Ballroom A of the InterContinental Hotel, to learn how Oracle's Data Relationship Management solution can help you stay ahead of the competition and proactively harness master (and reference) data changes to transform your enterprise. Hear in-depth customer testimonials from GE Healthcare and Old Mutual South Africa to learn how others have harnessed this technology to build enduring competitive advantage through business process innovation and investments in master data governance. Hear GE Healthcare discuss how DRM has enabled financial transformation, ERP consolidation, mergers and acquisitions, and the alignment of reference data across financial and management reporting applications. Also, learn how Old Mutual SA has upgraded to EBS R12 Financials and is transforming the management of its chart of accounts for corporate reporting.

    Separately, an esteemed panel of DRM customers, including Cisco Systems, Nationwide Insurance, Ralcorp Holdings, and Mentor Graphics, will discuss their perspectives on how DRM has helped them address business challenges associated with enterprise MDM, including major change management initiatives such as financial transformations, corporate restructuring, mergers and acquisitions, and the rationalization of financial and analytical master reference data to support alternate business perspectives for the alignment of EPM/BI initiatives. Attend the session titled CON9377 - Customer Showcase: Success with Oracle Hyperion Data Relationship Management, at OpenWorld on Thursday, October 4, 2012 at 12:45pm in the Ballroom of the InterContinental Hotel to interact with our esteemed speakers firsthand.

    Read the article

  • The Spotlight is on You

    - by Claudia McDonald
    On the field or off the field, in ballet slippers or singing your heart out on stage, offering a stellar performance every time is key to holding the attention of your audience and having them come back hungry for more. Similarly, showing up to a new business meeting wearing pink tights and a tutu might be one way to hold the attention of your customer, but offering them an unmatched, ground-breaking software solution will certainly get their attention!

    Simply put, the Oracle Exastack program enables both ISVs and OEMs to rapidly build and deliver faster, more reliable applications. It comes as no surprise that the success of the Oracle Exastack program is centered on establishing Oracle Exadata Database Machine and Oracle Exalogic Elastic Cloud as the highest-performance, lowest-cost platforms available in the industry today. But here is where the real standing-ovation-worthy facts come in. The Oracle Exadata Database Machine is the only database machine that provides extreme performance for both data warehousing and online transaction processing (OLTP) workloads, making it the ideal platform for consolidating onto private clouds. The Oracle Exalogic Elastic Cloud, meanwhile, is an engineered hardware and software system tested and tuned by Oracle to provide the best foundation for cloud computing, while allowing Java applications, Oracle Applications, and other enterprise applications to run with extreme performance. And the crowd goes wild, ladies and gentlemen!

    In just four months, our partners have already achieved over 150 Oracle Exastack Ready milestones for Oracle Solaris, Oracle Linux, Oracle Database, and Oracle WebLogic Server. As Judson has said, "With the Oracle Exastack program, Oracle is helping partners test, tune and optimize their applications to deliver optimal performance and reliability, accelerating innovation and delivering superior value to customers." And get this: not only are their applications running faster and more efficiently, they are actually being delivered to customers at a lower cost than ever before. Extreme performance well deserving of three consecutive arabesques!

    If you haven't already, check out what some of our partners are saying about the Oracle Exastack program in this video, and find out all that is available to you today. By participating in the Oracle Exastack program, partners now have the ability to achieve Oracle Exadata Optimized, Oracle Exalogic Optimized, Oracle Exadata Ready, and Oracle Exalogic Ready status for their solutions. New Oracle Exastack labs provide OPN members with access to Oracle technical resources, on-premise facilities, and remote lab environments. With Oracle Exastack Optimized, partners deliver faster and more reliable applications on the Oracle Exadata Database Machine as well as the long-awaited Oracle Exalogic Elastic Cloud. Savvy OPN members are leveraging the Oracle Exastack Optimized program toward their advancement to Platinum or Diamond level in OPN. Partners achieving Oracle Exadata Ready and Oracle Exalogic Ready gain a competitive advantage and signal to customers that their applications readily support Oracle Exadata Database Machine or Oracle Exalogic Elastic Cloud to deliver extreme performance.

    Get your dancing shoes on,
    The OPN Communications Team

    Read the article

  • Oracle Utilities Application Framework V4.2.0.0.0 Released

    - by ACShorten
    The Oracle Utilities Application Framework V4.2.0.0.0 has been released with Oracle Utilities Customer Care and Billing V2.4. This release includes new functionality and updates to existing functionality, and it will be progressively rolled out across the Oracle Utilities applications. The release is quite substantial, with lots of new and exciting changes. The release notes shipped with the product include a summary of the changes implemented in V4.2.0.0.0, among them the following:

    Configuration Migration Assistant (CMA) - A new data management capability that allows you to export and import configuration data from one environment to another, with support for approval/rejection of individual changes.

    Database Connection Tagging - Additional tags have been added to the database connection so that database administrators, Oracle Enterprise Manager, and other Oracle technology can monitor and use individual database connection information.

    Native Support for Oracle WebLogic - In the past the Oracle Utilities Application Framework used Oracle WebLogic in embedded mode; now, to support advanced configuration and the Exalogic platform, we are adding native support for Oracle WebLogic as a configuration option.

    Native Web Services Support - In the past the Oracle Utilities Application Framework supplied a servlet to handle web service calls; now we offer the alternative of using the native web services capability of Oracle WebLogic. This allows for enhanced clustering, a greater level of web service standards support, enhanced security options, and the ability to use the web services management capabilities in Oracle WebLogic to implement higher levels of management, including defining additional security rules to control access to individual web services.

    XML Data Type Support - The Oracle Utilities Application Framework now allows implementors to use Oracle's XML data types in the definition of custom objects, to take advantage of XQuery and other XML features.

    Fuzzy Operator Support - The Oracle Utilities Application Framework supports the use of the fuzzy operator in conjunction with Oracle Text, to take advantage of the fuzzy searching capabilities within the database (see the sketch at the end of this post).

    Global Batch View - A new JMX-based API has been implemented to allow JSR 160 (JMX Remote)-compliant consoles to view batch execution across all threadpools in the Coherence-based named cache cluster.

    Portal Personalization - It is now possible to store runtime customizations of query zones, such as preferred sorting, field order, and filters, and reuse them as personal preferences each time the zone is used.

    These are just the major changes; quite a few more have been delivered (and more are to come in the service packs!!). Over the next few weeks we will be publishing new whitepapers and new entries in this blog outlining the new facilities you will want to take advantage of.
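    As a quick, hedged illustration of what the fuzzy operator enables at the database level, here is a minimal Oracle Text sketch; the table, column, and index names are hypothetical, and it assumes a CONTEXT index has already been built on the searched column:

        -- Hypothetical setup: an Oracle Text CONTEXT index on the NAME column.
        -- CREATE INDEX cust_name_idx ON customer (name) INDEXTYPE IS CTXSYS.CONTEXT;

        -- fuzzy() expands the query term to similarly spelled words, so a search
        -- for 'Johnsen' can also match rows containing 'Johnson' or 'Jonson'.
        SELECT customer_id, name, SCORE(1) AS relevance
          FROM customer
         WHERE CONTAINS(name, 'fuzzy(Johnsen)', 1) > 0
         ORDER BY SCORE(1) DESC;

    The third argument to CONTAINS is a label that ties the predicate to its SCORE() call for relevance ranking.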

    Read the article

  • OpenWorld Day 1

    - by Antony Reynolds
    A Day in the Life of an OpenWorld Attendee, Part I

    Lots of people are blogging insightfully about OpenWorld, so I thought I would provide some non-insightful remarks to buck the trend! With 50,000 attendees I didn't expect to bump into too many people I knew; boy, was I wrong! I walked into the registration area and was immediately hailed by a couple of customers I had worked with a few months ago. Moving to the employee registration area in a different hall, I bumped into a colleague from the UK who was also registering. As soon as I got my badge I bumped into a friend from Ireland! So maybe OpenWorld isn't so big after all.

    First port of call was Larry's keynote. As always, Larry was provocative and thought-provoking. His key points were announcing the Oracle Cloud offering across IaaS, PaaS, and SaaS, pointing out that Fusion Apps are cloud-enabled, and finally announcing the 12c database, making a big play of its new multi-tenancy features. His contention was that multi-tenancy will simplify cloud development and provide better security by providing database-level isolation for applications and customers.

    The next day, Monday, was my first full day at OpenWorld. The first session I attended was on monitoring of OSB: a very interesting presentation on the benefits achieved by an Illinois-area telco, US Cellular. It was a great discussion of why they bought the SOA Management Packs and the benefits they are already seeing from their investment, in terms of improved provisioning and time to market as well as better performance insight and assistance with capacity planning.

    Craig Blitz provided a nice walkthrough of where Coherence has been and where it is going.

    Last night I attended the BOF on Managed File Transfer, where Dave Berry relayed Oracle's thoughts on providing dedicated Managed File Transfer as part of the 12c SOA release. Dave laid out the perceived requirements and solicited feedback from the audience on what, if anything, was missing. He also demoed an early version of the functionality, which would simplify setting up MFT in SOA Suite and make tracking activity much easier.

    So much for Day 1. I also ran into scores of old friends and colleagues and had a pleasant dinner with my friend from Ireland, where I caught up on the latest news from Oracle UK. Not bad for Day 1!

    Read the article
