Search Results

Search found 92 results on 4 pages for 'standardization'.

Page 3/4 | < Previous Page | 1 2 3 4  | Next Page >

  • Advice on whether to use scripting, run time compile or something else

    - by Gaz83
    I work in the production area at my workplace and I design and create the software that runs our automated test equipment. Every time I get involved with a new machine I end up with a different and (hopefully) better design. Anyway, I have come to the point where I feel I need to start standardizing all the machines on the same program. I see a problem when it comes to applying updates, as at the moment the test procedures are hard coded into the program at each station. I need to be able to update the core program without affecting the testing section. The way I see it, this will mean splitting the program into 2 sections. Main UI - This is the core that talks to everything on the machine such as cameras, sensors, printer etc. and is a standalone application. Test Procedure - These are the steps that are executed every time the machine runs through a test. The main UI will load the test procedure and execute it whenever a test is required. My question is: what is the best approach to this in terms of having an application load a file and execute the code within it? Take into account that the code in the test procedure will need access to public methods on the UI/core system to communicate with sensors etc. I have heard about MS Roslyn and had a quick look; would this solve my issue?
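
    One way to keep the test procedures out of the compiled core is to treat each procedure as a C# script that the core loads and runs on demand, using the Roslyn scripting API. A minimal sketch, assuming the Microsoft.CodeAnalysis.CSharp.Scripting package and a hypothetical MachineApi facade exposed by the core UI (all names here are illustrative, not from the original post):

        // Core application side: read a test procedure from disk and run it,
        // handing the script a "globals" object so it can call back into the machine.
        using System;
        using System.IO;
        using System.Threading.Tasks;
        using Microsoft.CodeAnalysis.CSharp.Scripting;
        using Microsoft.CodeAnalysis.Scripting;

        public class MachineApi   // hypothetical facade over cameras, sensors, printer, ...
        {
            public double ReadSensor(string name) { /* talk to hardware */ return 0.0; }
            public void Log(string message) { Console.WriteLine(message); }
        }

        public static class TestRunner
        {
            public static async Task RunProcedureAsync(string scriptPath, MachineApi machine)
            {
                // e.g. "station42.csx", replaceable without rebuilding the core application
                string code = File.ReadAllText(scriptPath);
                ScriptOptions options = ScriptOptions.Default.WithReferences(typeof(MachineApi).Assembly);
                // Public members of MachineApi become callable as top-level names inside the script.
                await CSharpScript.RunAsync(code, options, globals: machine);
            }
        }

    Updating a station then means replacing its .csx file rather than redeploying the core; an embedded scripting language (IronPython, Lua) or plug-in assemblies loaded via MEF are alternative routes to the same split.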

    Read the article

  • Teradata equivalent of persisted computed column (in SQL Server)

    - by Cade Roux
    We have a few tables with persisted computed columns in SQL Server. Is there an equivalent of this in Teradata? And, if so, what is the syntax and are there any limitations? The particular computed columns I am looking at conform some account numbers by removing leading zeros - an index is also created on this conformed account number: ACCT_NUM_std AS ISNULL(CONVERT(varchar(39), SUBSTRING(LTRIM(RTRIM([ACCT_NUM])), PATINDEX('%[^0]%', LTRIM(RTRIM([ACCT_NUM])) + '.'), LEN(LTRIM(RTRIM([ACCT_NUM]))))), '') PERSISTED With the Teradata TRIM function, the trimming part would be a little simpler: ACCT_NUM_std AS COALESCE(CAST(TRIM(LEADING '0' FROM TRIM(BOTH FROM ACCT_NUM)) AS varchar(39)), '') I guess I could just make this a normal column and put the code to standardize the account numbers in all the processes which insert into the table, but we chose the computed column to keep the standardization code in one place.
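
    If the Teradata release in use offers no direct equivalent, one workaround is to keep ACCT_NUM_std as an ordinary column (or drop it entirely) and centralize the expression in a single view that every consumer and loading process reads through, so the standardization logic still lives in one place even though it is no longer persisted or indexable. A rough sketch reusing the expression from the question (table and view names are illustrative):

        -- Base table stores only the raw account number.
        CREATE TABLE acct_stage
        (
          ACCT_NUM VARCHAR(39)
        );

        -- Every consumer/loader selects from this view, so the
        -- standardization expression is defined exactly once.
        REPLACE VIEW acct_stage_std AS
        SELECT
          ACCT_NUM,
          COALESCE(CAST(TRIM(LEADING '0' FROM TRIM(BOTH FROM ACCT_NUM))
                   AS VARCHAR(39)), '') AS ACCT_NUM_std
        FROM acct_stage;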

    Read the article

  • What is the correct usage of blueprint-typography-body([$font-size])?

    - by Alexis Abril
    Recent convert to RoR and I've been using Compass w/ Blueprint to dip into the proverbial pool. Compass has been fantastic, but I've come across something strange within the Typography library. The blueprint-typography-body mixin contains the following: =blueprint-typography-body($font-size: $blueprint-font-size) line-height: 1.5 +normal-text font-size: 100% * $font-size / 16px My question revolves around "font-size." I'm a bit lost, as I would expect to pass in a font size and have that size reflected upon page load. However, in this scenario the formula seems to dictate a percentage against the default font. ie: +blueprint-typography-body(10px) //produces 7.5px off of the default font size of 12px from what I can tell. In essence, I'm curious if there is a standard to setting font size within Compass other than explicitly declaring "font-size: 10px". Note: The reason I'm leaning towards Blueprint/Compass font stylings is due to the standardization of line-heights, fonts and colors.
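
    For what it is worth, the formula in the mixin emits a percentage of the 16px browser default rather than an absolute size, so the pixel value you pass only comes out as-is when nothing else overrides the inherited font size. A small worked sketch in Sass indented syntax (the numbers follow from the formula quoted above, not from the Compass docs):

        // 100% * 10px / 16px                  ->  font-size: 62.5%
        // 62.5% of the 16px browser default   ->  10px rendered
        // 62.5% of an inherited 12px default  ->  7.5px, matching the behaviour observed
        body
          +blueprint-typography-body(10px)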

    Read the article

  • Best Practice Guide: Swing

    - by wishi_
    Hi! Does anybody know of Swing-related GUI guidelines - specifically on how to design Swing apps and which components I should use? I'm not looking for an official standard, but pragmatic tips I can use to set a good standard for my projects. I haven't used too much of Swing myself. Surely clicking together a GUI in a GUI designer isn't a big deal. However, I'd like to get some insights from people who have experience with Swing and know what to avoid. Swing got decent changes lately (in Java 6u10), so there isn't too much specific standardization out there currently.

    Read the article

  • Is the F# language reference documentation available in an offline format (PDF, CHM)?

    - by stakx
    I've found several posts on hubFS of people asking if there is, or will be, offline documentation for F#. These posts haven't been answered, so I want to give it a shot and ask the same question here on SO. Where I've looked for offline documentation so far: The April 2010 CTP release of Visual F# (version 2.0) is available for VS 2008, but it doesn't come with any offline help. There's a question on SO about offline documentation for various programming languages, but F# isn't mentioned there at the time of this writing. There is of course Microsoft's F# language reference documentation (available on MSDN), which could be downloaded for offline browsing using e.g. wget. Question: Does anyone know whether any "official" offline documentation is on the way, anytime soon? (And related to this, albeit this probably can't be answered objectively: Would it be reasonable to expect that F# likely won't undergo ECMA or ISO standardization, i.e. there likely won't be a standards document describing the language?)

    Read the article

  • In Python, are there builtin functions for elementwise map of boolean operators over tuples of lists

    - by bshanks
    For example, if you have n lists of bools of the same length, then elementwise boolean AND should return another list of that length that has True in those positions where all the input lists have True, and False everywhere else. It's pretty easy to write; I would just prefer to use a builtin if one exists (for the sake of standardization/readability). Here's an implementation of elementwise AND: def eAnd(*args): return [all(tuple) for tuple in zip(*args)] Example usage: >>> eAnd([True, False, True, False, True], [True, True, False, False, True], [True, True, False, False, True]) [True, False, False, False, True] Thanks.
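
    There is no single builtin that does this elementwise over plain lists, but it can be written with builtins only, and NumPy provides vectorized equivalents for larger data. A short sketch of the usual options (the variable names are just illustrative):

        from functools import reduce
        import operator

        a = [True, False, True, False, True]
        b = [True, True, False, False, True]
        c = [True, True, False, False, True]

        # Builtins only: all()/any() over the zipped columns.
        e_and = [all(col) for col in zip(a, b, c)]   # [True, False, False, False, True]
        e_or  = [any(col) for col in zip(a, b, c)]   # [True, True, True, False, True]

        # Other boolean operators via functools.reduce and the operator module.
        e_xor = [reduce(operator.xor, col) for col in zip(a, b, c)]

        # Third-party: NumPy vectorizes the whole operation.
        # import numpy as np
        # np.logical_and.reduce([a, b, c])   # array([ True, False, False, False,  True])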

    Read the article

  • iphone @property(retain), init(), and standards

    - by inyourcorner
    I'm new to memory management on the iPhone and had a question about standards/correctness. My header file declares: IBOutlet UITabBarController *tabBarController; @property (nonatomic, retain) UITabBarController *tabBarController; In my init() code I was doing something like the following: self.tabBarController = [[UITabBarController alloc] init]; [tabBarController release]; NSLog(@"Retain count of tbc: %d",[tabBarController retainCount]); to get the retain count back to one. Is this correct from a standardization point of view? It just looked a bit different to me, but again, I'm new to this. Thanks
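
    Under manual reference counting the more conventional pattern is to hold the alloc/init result in a local variable, assign it to the retaining property, and then release the local reference - and to treat retainCount as a curiosity rather than something to verify against, since the frameworks may retain and autorelease objects internally. A short sketch of that pattern (not taken from the original post):

        // Manual reference counting (pre-ARC)
        UITabBarController *tbc = [[UITabBarController alloc] init]; // local reference owns it
        self.tabBarController = tbc;   // the retain property takes its own reference
        [tbc release];                 // balance the alloc; the property keeps the object alive
        // Avoid asserting on -retainCount; its value is an implementation detail and is
        // not guaranteed to be 1 at any particular moment.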

    Read the article

  • ruby on rails w/ SQLServer

    - by jaydel
    I've heard from some people that RoR doesn't marry cleanly with SQLServer. We have a series of historical reasons and a standardization mandate to use SQLServer, but if we can push back with valid reasons we can move to another db. One person on the team wants MySql and another wants Postgres, etc. I'm trying to stay out of the religious wars and really understand what the pain point is with SQLServer. We're running the app server on a linux box, and the database will be on a windows box, and the SQLServer that we're supposed to standardize on is 2008, if those details help any... thanks in advance!
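
    Much of the historical friction comes from driver support on non-Windows app servers: the usual stack for a Linux Rails host is FreeTDS underneath the tiny_tds gem, with the community-maintained activerecord-sqlserver-adapter on top. A hedged sketch of what that wiring typically looks like (host names and credentials are placeholders):

        # Gemfile
        gem 'tiny_tds'                          # FreeTDS-based driver, works from a Linux app server
        gem 'activerecord-sqlserver-adapter'    # community adapter for ActiveRecord

        # config/database.yml
        production:
          adapter: sqlserver
          mode: dblib                 # TinyTDS/FreeTDS connection mode
          host: sqlbox.example.internal
          port: 1433
          database: myapp_production
          username: myapp
          password: <%= ENV['MYAPP_DB_PASSWORD'] %>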

    Read the article

  • Oracle Utilities Application Framework future feature deprecation

    - by Paula Speranza-Hadley
    From time to time, existing functionality is replaced with alternative features to offer greater flexibility and standardization. In Oracle Utilities Application Framework V4.2.0.0.0 the following features are being announced for deprecation in the next release, or have been previously announced and are not being delivered with this version of the Oracle Utilities Application Framework:
    • No SQL Server Support – Oracle Utilities Application Framework V4.2.0.0.0 or above does not ship with any support for SQL Server.
    • No MPL Support – Oracle Utilities Application Framework V4.2.0.0.0 or above does not ship with the Multi-Purpose Listener (MPL) component of the XML Application Integration (XAI) component. Customers using the MPL should migrate to Oracle Service Bus.
    • No provided Crystal Reports/Business Objects Interface – Oracle Utilities Application Framework V4.2.0.0.0 or above does not ship with a supported Crystal Reports/Business Objects Interface. This facility is now available as a downloadable customization for existing or new customers. Responsibility for maintenance and new features now rests with the individual customer.
    • XAI Servlet deprecation – The XAI Servlet (xaiserver and classicxai) will be removed in the next release of the Oracle Utilities Application Framework. Customers are encouraged to migrate to the native Web Services Support as outlined in the XAI Best Practices whitepaper available from My Oracle Support (Doc Id: 942074.1).
    • ConfigLab deprecation – The ConfigLab facility will be removed in the next release of the Oracle Utilities Application Framework for products it is shipped with. Customers are recommended to migrate to the Configuration Migration Assistant, which provides the same and more functionality.
    • Archiving deprecation – The inbuilt Archiving has been removed from Oracle Utilities Application Framework V4.2.0.0.0 or above, for products it is shipped with. Customers considering an Archiving solution should migrate to the Information Lifecycle Management based solution provided for their product.
    • DISTRIBUTED batch execution mode deprecation – The DISTRIBUTED execution mode used by the batch component of the Oracle Utilities Application Framework will be deprecated in the next release of the Oracle Utilities Application Framework. Customers using DISTRIBUTED mode should migrate to CLUSTERED mode as outlined in the Batch Best Practices For Oracle Utilities Application Framework Based Products whitepaper available from My Oracle Support (Doc Id: 836362.1).
    • XAI Schema Editor deprecation – The XAI Schema Editor, which is a component of the Oracle Utilities Software Development Kit, will be removed in the next release of the Oracle Utilities Application Framework. Customers should migrate their existing schemas to Business Object based schemas and use the browser-based Schema Editor instead.

    Read the article

  • Digital Storage for Airline Entertainment

    - by Bill Evjen
    by Thomas Coughlin Common flash memory cards The most common flash memory products currently in use are SD cards and derivative products (e.g. mini and micro-SD cards) Some compact flash used for professional applications (such as DSLR cameras) Evolution of leading flash formats Standardization –> market expansion Market expansion –> volume iNAND –> focus is on enabling embedded X3 iSSD –> ideal for thin form factor devices Flash memory applications Phones are the #1 user of flash memory Flash memory is used as embedded and removable storage in many mobile applications Flash memory is being used in computers as USB sticks and SSDs Possible use of flash memory in computer combined with HDDs (hybrid HDDs and paired or dual storage computers) It can be a removable card or an embedded card These devices can only handle a specific number of writes Flash memory reads considerably quicker than hard drives Hybrid and dual storage in computers SSDs can provide fast performance but they are expensive HDDs can provide cheap storage but they are relatively slow Combining some flash memory with a HDD can provide costs close to those of HDDs and performance close to flash memory Seagate Momentus XT hybrid HDD Various dual storage offerings putting flash memory with HDDs Other common flash memory devices USB sticks All forms and colors Used for moving files around Some sold with content on them (Sony Movies on USB sticks) Solid State Drives (SSDs) Floating Gate Flash Memory Cell When a bit is programmed, electrons are stored upon the floating gate This has the effect of offsetting the charge on the control gate of the transistor If there is no charge upon the floating gate, then the control gate’s charge determines whether or not a current flows through the channel A strong charge on the control gate assumes that no current flows. A weak charge will allow a strong current to flow through. Similar to HDDs, flash memory must provide: Bit error correction Bad block management NAND and NOR memories are treated differently when it comes to managing wear In many NOR-based systems no management is used at all, since the NOR is simply used to store code, and data is stored in other devices. In this case, it would take a near-infinite amount of time for wear to become an issue since the only time the chip would see an erase/write cycle is when the code in the system is being upgraded, which rarely if ever happens over the life of a typical system. NAND is usually found in very different application than is NOR Flash memory wears out This is expected to get worse over time Retention: Disappearing data Bits fade away Retention decreases with increasing read/writes Bits may change when adjacent bits are read Time and traffic are concerns Controllers typically groom read disturb errors Like DRAM refresh Increases erase/write frequency Application characteristics Music – reads high / writes very low Video – r high / writes very low Internet Cache – r high / writes low On airplanes Many consumers now have their own content viewing devices – do they need the airlines? Is there a way to offer more to consumers, especially with their own viewers Additional special content tie into airplane network access to electrical power, internet Should there be fixed embedded or removable storage for on-board airline entertainment? Is there a way to leverage personal and airline viewers and content in new and entertaining ways?

    Read the article

  • CIO's Corner: Achieving a Balance

    - by Michelle Kimihira
    Author: Rick Beers Senior Director, Product Management, Oracle Fusion Middleware All too often, a CIO is unfairly characterized as either technology-focused or business-focused; as more concerned with either infrastructure performance or business excellence. It seems to me that this completely misses the point. I have long thought that a CIO has probably the most complex C-level position in an enterprise, one that requires an artful balance among four entirely different constituencies, often with competing values and needs. How a CIO balances these is the single largest determinant of success. I was reminded of this while reading the excellent interview of Mark Hurd by CNBC’s Maria Bartiromo in a recent issue of USATODAY (Bartiromo: Oracle's Hurd is in tech sweet spot). The interview covers topics such as Big Data, Leadership and Oracle’s growth strategy. But the topic that really got my interest, and reminded me of the need for balance, was on IT spending trends, in which Mark Hurd observed, “…budgets are tight. What most of our customers have today is both an austerity plan to save money and at the same time a plan to reapply that money to innovation. There isn't a customer we have that doesn't have an austerity plan and an innovation plan.” In an era of economic uncertainty, and an accelerating pace of business change, this is probably the toughest balance a CIO must achieve. Yet for far too many IT organizations, operating costs consume over 75% of their budgets, leaving precious little for innovation and investment in business-critical technology programs. I have found that many CIO’s are trapped by their enterprise systems platforms, which were originally architected for Standardization, Compliance and tightly integrated linear Workflows. Yes, these traits are still required for specific reasons and cannot be compromised. But they are no longer enough. New demands are emerging: the explosion in the volume and diversity of Data, the Consumerization of IT, the rise of Social Media, and the need for continual Business Process Reengineering. These were simply not the design criteria for Enterprise 1.0 and attempting to leverage them with current systems platforms results in an escalation in complexity and a resulting increase in operating costs for many IT organizations. This is the cost vs investment trap and what most constrains CIO’s from achieving the balance they need. But there is a way out of this trap. Enterprise 2.0 represents an entirely new enterprise systems architecture, one that is ‘Business-Centric’ rather than ‘ERP Centric’, which defined the architecture of Enterprise 1.0. Oracle’s best in class suite of Fusion Middleware Products enables a layered approach to enterprise systems architectures that provides the balance that an enterprise needs. The most exciting part of all this? The bottom two layers are focused upon reducing costs and the upper two layers provide business value and innovation. Finally, the Balance a CIO needs.  Additional Information Product Information on Oracle.com: Oracle Fusion Middleware Follow us on Twitter and Facebook Subscribe to our regular Fusion Middleware Newsletter

    Read the article

  • JavaOne: Parleys.com, Spring Vs. Java EE and HTML5 tooling

    - by delabassee
    Parleys.com, a 2012 Duke's Choice Award winner, is an E-Learning platform that hosts content from different sources (conferences, JUG meetings, etc.). There is a lot of technical content available for online but also offline consumption, including many sessions on Java EE. Parleys has just released, for free, all the Devoxx 2011 sessions (video and slides sync'ed!). From a technical point of view, Parleys.com is interesting as they have switched from Spring to Java EE 6 to avoid being locked into a proprietary framework. During the GlassFish Community BoF, Stephan Janssen (Parleys.com and Devoxx founder) also presented how GlassFish is used to support 2000 concurrent Parleys users over a cluster of 2 GlassFish instances. Talking about Java EE and/or Spring, Harshad Oak has posted an update on the 'Spring Vs. Java EE' panel discussion that took place on Tuesday. As Arun said, standards such as Java EE do not necessarily restrain innovation: "JBoss Forge & Arquillian from RedHat are great examples of innovation in the JavaEE community. Standardization is important but innovation does continue even within that framework." Simplicity and productivity, along with HTML5, are the driving themes of Java EE 7. In terms of simplicity and productivity, the developer experience can also be improved by the tooling. Every NetBeans release comes with a large set of improvements, and the just-released NetBeans 7.3 beta is no exception. The goal of NB 7.3's Project Easel is to improve HTML5 development, something that will be handy for Java EE 7 developers. Project Easel can, for example, communicate directly with Chrome's WebKit engine; this feature was shown during Sunday's Technical Keynote at the end of the Java EE section. In this beta release, Chrome and the embedded JavaFX browser are the only supported browsers, but the NetBeans team plans to add support, over time, for other WebKit based browsers. See: NetBeans 7.3 beta; NetBeans 7.3 screencasts. Today (i.e. Wednesday 3rd) is also the final exhibition day, so make sure to visit the Java EE and GlassFish pods on the Java DEMOgrounds (Hilton Grand Ballroom, 9:30 am - 5:00 pm). Finally, here are some Java EE and GlassFish related activities worth attending today if you are at JavaOne:
    Wednesday October 3rd
    Time            Title                                                                           Location
    8:30-9:30am     What's New in Servlet 3.1: An Overview                                          Parc 55 Mission
    8:30-9:30am     Bean Validation 1.1: What's New Under the Hood                                  Parc 55 Cyril Magnin II/III
    10:00-11:00am   JSR 353: Java API for JSON Processing                                           Parc 55 Mission
    10:00-12:00pm   Tutorial: Integrating Your Service into the GlassFish PaaS Platform             Parc 55 Devisidero
    11:30-12:30pm   What's New in JSF: A Complete Tour of JSF 2.2                                   Parc 55 Cyril Magnin I
    11:30-12:30pm   Best of Both Worlds: Java Persistence with NoSQL and SQL                        Parc 55 Mission
    1:00-2:00pm     Sharding Middleware to Achieve Elasticity and High Availability in the Cloud    Parc 55 Market Street
    1:00-2:00pm     Pimp My RESTful Java Applications                                               Parc 55 Cyril Magnin I
    3:00-4:00pm     Migrating Spring to Java EE                                                     Parc 55 Cyril Magnin II/III
    4:30-5:30pm     JavaEE.Next(): Java EE 7, 8, and Beyond                                         Parc 55 Cyril Magnin II/III
    4:30-5:30pm     HTML5 WebSocket and Java                                                        Parc 55 Cyril Magnin I
    4:30-5:30pm     Easy Middleware for Your Embedded Device                                        Nikko Ballroom II/III

    Read the article

  • Criteria for a programming language to be considered "mature"

    - by Giorgio
    I was recently reading an answer to this question, and I was struck by the statement "The language is mature". So I was wondering what we actually mean when we say that "A programming language is mature"? Normally, a programming language is initially developed out of a need, e.g. Try out / implement a new programming paradigm or a new combination of features that cannot be found in existing languages. Try to solve a problem or overcome a limitation of an existing language. Create a language for teaching programming. Create a language that solves a particular class of problems (e.g. concurrency). Create a language and an API for a special application field, e.g. the web (in this case the language might reuse a well-known paradigm, but the whole API must be new). Create a language to push your competitor out of the market (in this case the creator might want the new language to be very similar to an existing one, in order to attract developers to the new programming language and platform). Regardless of what the original motivation and scenario in which a language has been created, eventually some languages are considered mature. In my intuition, this means that the language has achieved (at least one of) its goals, e.g. "We can now use language X as a reliable tool for writing web applications." This is however a bit vague, so I wanted to ask what you consider the most important criteria (if any) that are applied when saying that a language is mature. IMPORTANT NOTE This question is (on purpose) language-agnostic because I am only interested in general criteria. Please write only language-agnostic answers and comments! I am not asking whether any specific "language X is mature" or "which programming languages can be considered mature", or whether "language X is more mature than language Y": please avoid posting any opinions or reference about any specific languages because these are out of the scope of this question. EDIT To make the question more precise, by criteria I mean such things as "tool support", "adoption by the industry", "stability", "rich API", "large user community", "successful application record", "standardization", "clean and uniform semantics", and so on.

    Read the article

  • Bancassurers Seek IT Solutions to Support Distribution Model

    - by [email protected]
    Oracle Insurance's director of marketing for EMEA, John Sinclair, attended the third annual Bancassurance Forum in Vienna last month. He reports that the outlook for bancassurance in EMEA remains positive, despite changing market conditions that have led a number of bancassurers to re-examine their business models. Vienna is at the crossroads between mature Western European markets, where bancassurance is now an established best practice, and more recently tapped Eastern European markets that offer the greatest growth potential. Attendance at the Bancassurance Forum was good, with 87 bancassurance attendees, most in very senior positions in the industry. The conference provided the chance for a lively discussion among bancassurers looking to keep abreast of the latest trends in one of Europe's most successful distribution models for insurance. Even under normal business conditions, there is a great demand for best practice sharing within the industry as there is no standard formula for success. Each company has to chart its own course and choose the strategies for sales, product development and the structure of ownership that make sense for their business, and as soon as they get it right bancassurers need to adapt the mix to keep up with ever changing regulations, competition and economic conditions. To optimize the overall relationship between banking and insurance for mutual benefit, a balance needs to be struck between potentially conflicting interests. The banking side of the house is looking for greater wallet share from its customers and the ability to increase profitability by bundling insurance products with higher margins - especially in light of the recent economic crisis, where margins for traditional banking products are low and competition high. The insurance side of the house seeks access to new customers through a complementary distribution channel that is efficient and cost effective. To make the relationship work, it is important that both sides of the same house forge strategic and long term relationships - irrespective of whether the underlying business model is supported by a distribution agreement, cross-ownership or other forms of capital structure. However, this third annual conference was not held under normal business conditions. The conference took place in challenging, yet interesting times. ING's forced spinoff of its insurance operations under pressure by the EU Commission and the troubling losses suffered by Allianz as a result of the Dresdner bank sale were fresh in everyone's mind. One year after markets crashed, there is now enough hindsight to better understand the implications for bancassurance and best practices that are emerging to deal with them. The loan-driven business that has been crucial to bancassurance up till now evaporated during the crisis, leaving bancassurers grappling with how to change their overall strategy from a loan-driven to a more diversified model. Attendees came to the conference to learn what strategies were working - not only to cope with the market shift, but to take advantage of it as markets pick up. Over the course of 14 customer case studies and numerous analyst presentations, topical issues ranging from getting the business model right to the impact on capital structuring of Solvency II were debated openly.
Many speakers alluded to the need to specifically design insurance products with the banking distribution channel in mind, which brings with it specific requirements such as a high degree of standardization to achieve efficiency and reduce training costs. Moreover, products must be engineered to suit end consumers who consider banks a one-stop shop. The importance of IT to the successful implementation of bancassurance strategies was a theme that surfaced regularly throughout the conference.  The cross-selling opportunity - that will ultimately determine the success or failure of any bancassurance model - can only be fully realized through a flexible IT architecture that enables banking and insurance processes to be integrated and presented to front-line staff through a common interface. However, the reality is that most bancassurers have legacy IT systems, which constrain the businesses' ability to implement new strategies to maintaining competitiveness in turbulent times. My colleague Glenn Lottering, who chaired the conference, believes that the primary opportunities for bancassurers to extract value from their IT infrastructure investments lie in distribution management, risk management with the advent of Solvency II, and achieving operational excellence. "Oracle is ideally suited to meet the needs of bancassurance," Glenn noted, "supplying market-leading software for both banking and insurance. Oracle provides adaptive systems that let customers easily integrate hybrid business processes from both worlds while leveraging existing IT infrastructure." Overall, the consensus at the conference was that the outlook for bancassurance in EMEA remains positive, despite changing market conditions that have led a number of bancassurers to re-examine their business models. John Sinclair is marketing director for Oracle Insurance in EMEA. He has more than 20 years of experience in insurance and financial services.    

    Read the article

  • Technical workshop with the gurus: Architecting Oracle Database-As-A-Service (DBaaS)

    - by Javier Puerta
    OCTOBER 2013 Invitation: Architecting Oracle Database-As-A-Service (DBaaS). Dear partner, We are pleased to invite you to a 2-day workshop dedicated to EMEA partners on "Architecting Oracle Private Database Cloud & Delivering Database-As-A-Service (DBaaS)". This exclusive workshop will be delivered by Product Management and Product Development from Oracle HQ and focuses on the main theme CIOs have been tackling over the last decade: consolidation to private cloud. For many customers the journey to consolidation has led to DBaaS cloud deployments to significantly reduce costs and offer agile IT services. With the recent launch of Oracle Database 12c, the game really has changed in terms of what Oracle offers and how database clouds can be deployed. Who should attend: Enterprise Architects, Infrastructure Architects, and DB Architects from System Integrators and large Independent Software Vendors. Take this opportunity to learn from the gurus how you can help your customers maximize their cloud consolidation strategies. The workshop's main focus is service delivery, which includes standardization and consolidation, and how you would help your customers transform their current IT infrastructure to a service delivery model. It will discuss best practices and review customer examples that have successfully implemented a database cloud. The agenda is split into two days of sessions: Day 1: Overview & Planning; Database Cloud - Demos; Customer Case Studies; Database 12c. Day 2: Database Cloud - Design; Database Cloud - Implementation; EM Cloud Control; DBaaS on Engineered Systems; Questions and Answers. Attendance is free of charge for qualified Oracle partners - register now for one of the sessions below: 5 & 6 November 2013, United Kingdom, Manchester; 7 & 8 November 2013, Germany, Munich; 11 & 12 November 2013, Netherlands, Amsterdam; 14 & 15 November 2013, Turkey, Istanbul; 18 & 19 November 2013, Austria, Vienna. Looking forward to seeing you! Javier Puerta, Director, Core Technology Partner Programs EMEA; Prashant Barot, Director, Core Technology.

    Read the article

  • Oracle MDM Maturity Model

    - by David Butler
    A few weeks ago, I discussed the results of a survey conducted by Oracle’s Insight team. The survey was based on the data management maturity model that the Oracle Insight team has developed over the years as they analyzed customer IT organizations to help them get more out of everything they already have. I thought you might like to learn more about the maturity model itself. It can help you figure out where you stand when it comes to getting your organizations data management act together. The model covers maturity levels around five key areas: Profiling data sources; Defining a data strategy; Defining a data consolidation plan; Data maintenance; and Data utilization. Profile data sources: Profiling data sources involves taking an inventory of all data sources from across your IT landscape. Then evaluate the quality of the data in each source system. This enables the scoping of what data to collect into an MDM hub and what rules are needed to insure data harmonization across systems. Define data strategy: A data strategy requires an understanding of the data usage. Given data usage, various data governance requirements need to be developed. This includes data controls and security rules as well as data structure and usage policies. Define data consolidation strategy: Consolidation requires defining your operational data model. How integration is to be accomplished. Cross referencing common data attributes from multiple systems is needed. Synchronization policies also need to be developed. Data maintenance: The desired standardization needs to be defined, including what constitutes a ‘match’ once the data has been standardized. Cleansing rules are a part of this methodology. Data quality monitoring requirements also need to be defined. Utilize the data: What data gets published, and who consumes the data must be determined. How to get the right data to the right place in the right format given its intended use must be understood. Validating the data and insuring security rules are in place and enforced are crucial aspects for full no-risk data utilization. For each of the above data management areas, a maturity level needs to be assessed. Where your organization wants to be should also be identified using the same maturity levels. This results in a sound gap analysis your organization can use to create action plans to achieve the ultimate goals. Marginal is the lowest level. It is characterized by manually maintaining trusted sources; lacking or inconsistent, silo’d structures with limited integration, and gaps in automation. Stable is the next leg up the MDM maturity staircase. It is characterized by tactical MDM implementations that are limited in scope and target a specific division.  It includes limited data stewardship capabilities as well. Best Practice is a serious MDM maturity level characterized by process automation improvements. The scope is enterprise wide. It is a business solution that provides a single version of the truth, with closed-loop data quality capabilities. It is typically driven by an enterprise architecture group with both business and IT representation.   Transformational is the highest MDM maturity level. At this level, MDM is quantitatively managed. It is integrated with Business Intelligence, SOA, and BPM. MDM is leveraged in business process orchestration. 
Take an inventory using this MDM Maturity Model and see where you are in your journey to full MDM maturity with all the business benefits that accrue to organizations who have mastered their data for the benefit of all operational applications, business processes, and analytical systems. To learn more, Trevor Naidoo and I have written the Oracle MDM Maturity Model whitepaper. It’s free, so go ahead and download it and use it as you see fit.

    Read the article

  • Partner Webcast - Focus on Oracle Data Profiling and Data Quality 11g

    - by lukasz.romaszewski(at)oracle.com
    Partner Webcast - Focus on Oracle Data Profiling and Data Quality 11g, February 24th, 12am CET. Oracle offers an integrated suite of Data Quality software architected to discover and correct today's data quality problems and establish a platform prepared for tomorrow's yet unknown data challenges. Oracle Data Profiling provides data investigation, discovery, and profiling in support of quality, migration, integration, stewardship, and governance initiatives. It includes a broad range of features that expand upon basic profiling, including automated monitoring, business-rule validation, and trend analysis. Oracle Data Quality for Data Integrator provides cleansing, standardization, matching, address validation, location enrichment, and linking functions for global customer data and operational business data. It ensures that data adheres to established standards that are adaptable to fit each organization's specific needs. Both single- and double-byte data are processed in local languages to provide a unique and centralized view of customers, products and services. During this briefing, Data Integration Solution Specialists will be providing a technical overview and a walkthrough. Agenda: Oracle Data Integration Strategy overview; a focus on Oracle Data Profiling and Oracle Data Quality for Data Integrator (Oracle Data Profiling, Oracle Data Quality for Data Integrator, live demo, Q&A). Delivery Format: This FREE online LIVE eSeminar will be delivered over the Web and Conference Call. Registrations received less than 24 hours prior to start time may not receive confirmation to attend. To register, click here. For any questions please contact [email protected]

    Read the article

  • Gaming on Cloud

    - by technomad
    Sometimes I wonder if the pundits of cloud computing are way too consumed with enterprise applications. With all the CAPEX/OPEX and ROI talk taking center stage, an opportunity to affect the masses directly is getting overlooked. I am a self-proclaimed die hard gamer. I come from the generation of gamers who started their journey in DOS games like Wolfenstein 3D and Allan Border Cricket (the latter is still a favorite pastime). In the late 90s, a revolution called accelerated graphics started in DirectX and OpenGL. Games got more advanced. The likes of Quake III and Unreal Tournament became the crown jewels of the industry. But with all these advancements, there started a race: a race between GFX giants ATI and NVIDIA to beat each other for better frame rates and image quality. Revisions to the graphics chipsets became frequent. Games became eye candy, but at the cost of more GPU power / memory. Every eagerly awaited title started demanding more muscle power in graphics and PC hardware. The latest games and all the liquid-smooth frame rates became the territory of the ones with deep pockets who could spend lavishly on the latest hardware. Enthusiasts like yours truly, who couldn't afford this route, started exploring over-clocking, optimized hardware cooling... etc. to pursue the passion. The ever rising cost of hardware requirements led to rampant piracy of PC games. Gamers were willing to spend on the latest titles, but the ones with a tight budget preferred hardware upgrades over a legal copy of the game. It was also fueled by the emergence of P2P file sharing networks. Then came the era of the Xbox and PS3s. It solved the major issue of hardware standardization and provided an alternative to ever increasing hardware costs. I have always admired these consoles, but being born and brought up in a keyboard/mouse environment, I still find it difficult to play first person shooters with a gamepad. I leave the topic of PC vs. console gaming for another day, but the bottom line is… PC gamers deserve an equally democratized solution. This is where I think Cloud Computing can come to the rescue. It can minimize hardware requirements. Virtually end software piracy and rationalize costs for gamers. Subscription based models like pay-as-you-play. In-game rewards, like extended subscription credits for exceptional gamers (oh yes, I have beaten Xaero on nightmare in Quake III, time and again!). Easy deployment for patches and fixes. Better game AI. The list goes on and on… Fortunately, companies like OnLive are thinking in the same direction. Their gaming service is all set to launch on 17th June 2010 at the E3 2010 expo in L.A. I wish them all the luck. I hope they will start a trend which will bring the smiles back on the faces of budget gamers with the help of cloud computing.

    Read the article

  • Java EE 7 Roadmap

    - by Linda DeMichiel
    The Java EE 6 Platform, released in December 2009, has seen great uptake from the community with its POJO-based programming model, lightweight Web Profile, and extension points. There are now 13 Java EE 6 compliant appserver implementations today! When we announced the Java EE 7 JSR back in early 2011, our plans were that we would release it by Q4 2012. This target date was slightly over three years after the release of Java EE 6, but at the same time it meant that we had less than two years to complete a fairly comprehensive agenda — to continue to invest in significant enhancements in simplification, usability, and functionality in updated versions of the JSRs that are currently part of the platform; to introduce new JSRs that reflect emerging needs in the community; and to add support for use in cloud environments. We have since announced a minor adjustment in our dates (to the spring of 2013) in order to accommodate the inclusion of JSRs of importance to the community, such as Web Sockets and JSON-P. At this point, however, we have to make a choice. Despite our best intentions, our progress has been slow on the cloud side of our agenda. Partially this has been due to a lack of maturity in the space for provisioning, multi-tenancy, elasticity, and the deployment of applications in the cloud. And partially it is due to our conservative approach in trying to get things "right" in view of limited industry experience in the cloud area when we started this work. Because of this, we believe that providing solid support for standardized PaaS-based programming and multi-tenancy would delay the release of Java EE 7 until the spring of 2014 — that is, two years from now and over a year behind schedule. In our opinion, that is way too long. We have therefore proposed to the Java EE 7 Expert Group that we adjust our course of action — namely, stick to our current target release dates, and defer the remaining aspects of our agenda for PaaS enablement and multi-tenancy support to Java EE 8. Of course, we continue to believe that Java EE is well-suited for use in the cloud, although such use might not be quite ready for full standardization. Even today, without Java EE 7, Java EE vendors such as Oracle, Red Hat, IBM, and CloudBees have begun to offer the ability to run Java EE applications in the cloud. Deferring the remaining cloud-oriented aspects of our agenda has several important advantages: It allows Java EE Platform vendors to gain more experience with their implementations in this area and thus helps us avoid risks entailed by trying to standardize prematurely in an emerging area. It means that the community won't need to wait longer for those features that are ready at the cost of those features that need more time. Because we have already laid some of the infrastructure for cloud support in Java EE 7, including resource definition metadata, improved security configuration, JPA schema generation, etc., it will allow us to expedite a Java EE 8 release. We therefore plan to target the Java EE 8 Platform release for the spring of 2015. This shift in the scope of Java EE 7 allows us to better retain our focus on enhancements in simplification and usability and to deliver on schedule those features that have been most requested by developers. 
These include the support for HTML 5 in the form of Web Sockets and JSON-P; the simplified JMS 2.0 APIs; improved Managed Bean alignment, including transactional interceptors; the JAX-RS 2.0 client API; support for method-level validation; a much more comprehensive expression language; and more. We feel strongly that this is the right thing to do, and we hope that you will support us in this proposed direction.

    Read the article

  • How to estimate the contribution of an individual to a software project?

    - by Amit Kumar
    I work on a software project and would like to estimate the percentage of the total contribution that I have put into the development of the software. Is there some tool for doing this? Such a tool can be useful for appraisals or negotiations, for example. After all, we work for money (yes, not only money, but the point remains). I think there is enough hand-waving around the most important things. The estimation is very subjective (at least to me now), but I do not know of any tool that provides even a subjective estimate. I know of Sloccount, which spells out the total effort using the lines of code, but not on a per-developer basis. My idea of an ideal tool for this purpose would: measure the complexity of the code (more complex is more effort, but more effort is not necessarily more contribution); measure the decomposability/flexibility of the software (more decomposable is better); measure how much library code is used -- using library code speeds up the development process, increases the associated risk, and requires the developer to know about or learn the library; be intelligent enough to differentiate between "who wrote the code", "who copied the code" and "who indented the code". It is difficult to differentiate between the complexity of the implementation and the intrinsic complexity of the problem. Perhaps a comparison can be made with an equivalent open source counterpart, if there is one, or for each submodule separately. If there is no such tool, is there no merit in having such a tool? Or do you believe in "I do work, I do not measure"? It takes time, after all. Perhaps the project manager should do this estimation continuously, say, weekly. Are there any standards? Yes, standardization is difficult because every project has a different goal, but difficult does not mean it is not useful.
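
    Short of such a tool, version control history gives a crude first approximation - commit counts and lines touched per author - as long as you remember that lines changed are a poor proxy for complexity or value. A sketch of the usual git incantations (the author name is a placeholder):

        # Commits per author, merges excluded
        git shortlog -s -n --no-merges

        # Lines added/removed by one author across the whole history
        git log --author="Amit Kumar" --no-merges --pretty=tformat: --numstat \
          | awk '{ added += $1; removed += $2 } END { printf "+%d -%d lines\n", added, removed }'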

    Read the article

  • Char C question about encoding signed/unsigned.

    - by drigoSkalWalker
    Hi guys. I read that C does not define whether a char is signed or unsigned, and the GCC page says that it can be signed on x86 and unsigned on PowerPC and ARM. OK, I'm writing a program with GLib, which defines char as gchar (nothing more than that, just a typedef for standardization). My question is, what about UTF-8? Does it use more than one block of memory? Say that I have a variable unsigned char *string = "My string with UTF8 encoding ~ çã"; See, if I declare my variable as unsigned, will I have only 127 values (so my program will have to store more blocks of memory), or do the UTF-8 values change to negative too? Sorry if I can't explain it correctly, but I think it is a bit complex. NOTE: Thanks for all the answers. I don't understand how it is interpreted normally. I think that, like ASCII, if I have a signed and an unsigned char in my program, the strings have different values, and that leads to confusion - imagine it with UTF-8 then.
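
    Either way, a char is still one byte; signedness only changes how bytes at or above 0x80 look when treated as integers, not how much memory the string takes. A small C sketch illustrating this (the printed values assume an 8-bit char and a UTF-8 source encoding):

        #include <stdio.h>
        #include <string.h>

        int main(void) {
            const char *s = "ç";                  /* UTF-8 encodes this as two bytes: 0xC3 0xA7 */
            printf("strlen = %zu\n", strlen(s));  /* 2 - one byte per UTF-8 code unit */
            printf("as signed char:   %d %d\n",
                   (int)(signed char)s[0], (int)(signed char)s[1]);       /* -61 -89 */
            printf("as unsigned char: %u %u\n",
                   (unsigned)(unsigned char)s[0], (unsigned)(unsigned char)s[1]); /* 195 167 */
            return 0;
        }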

    Read the article

  • Abstract away a compound identity value for use in business logic?

    - by John K
    While separating business logic and data access logic into two different assemblies, I want to abstract away the concept of identity so that the business logic deals with one consistent identity without having to understand its actual representation in the data source. I've been calling this a compound identity abstraction. Data sources in this project are swappable and varied, and the business logic shouldn't care which data source is currently in use. The identity is the toughest part because its implementation can change per kind of data source, whereas other fields like name, address, etc. are consistently scalar values. What I'm searching for is a good way to abstract the concept of identity, whether that be an existing library, a software pattern, or just a solid idea of some kind. The proposed compound identity value would have to be comparable and usable in the business logic and passed back to the data source to specify which records, entities and/or documents to affect, so the data source must be able to parse back out the details of its own compound ids. Data Source Examples (these serve to give an idea of what I mean by various data sources having different identity implementations): A relational data source might express a piece of content with an integer identifier plus a language-specific code, for example:

        content_id   language   (other columns expressing details of the content)
        1            en_us      ...
        1            fr_ca      ...

    The identity of the first record in the above example is: 1 + en_us. However, when a NoSQL data source is substituted, it might represent each piece of content with a GUID string 936DA01F-9ABD-4d9d-80C7-02AF85C822A8 plus a language code of a different standardization, and a third kind of data source might use just a simple scalar value. So on and so forth, you get the idea.
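
    One shape that fits this description is an opaque, equatable key type owned by the data-access layer: the business logic only stores and compares the key, while each data source pattern-matches back to its own concrete key type when asked to load or persist. A rough C# sketch of the idea (type names are illustrative, not from the question):

        using System;

        // The business logic sees identity only as an opaque, comparable value.
        public interface IContentKey : IEquatable<IContentKey> { }

        // Relational source: integer id plus language code.
        public sealed record RelationalContentKey(int ContentId, string Language) : IContentKey
        {
            public bool Equals(IContentKey other) =>
                other is RelationalContentKey k && k.ContentId == ContentId && k.Language == Language;
        }

        // Document source: GUID plus a language code in a different standard.
        public sealed record DocumentContentKey(Guid Id, string Language) : IContentKey
        {
            public bool Equals(IContentKey other) =>
                other is DocumentContentKey k && k.Id == Id && k.Language == Language;
        }

        // Each repository implementation casts the opaque key back to its own type
        // when it needs the underlying columns/fields to issue a query.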

    Read the article

  • SQL SERVER – Advanced Data Quality Services with Melissa Data – Azure Data Market

    - by pinaldave
    There has been much fanfare over the new SQL Server 2012, and especially around its new companion product Data Quality Services (DQS). Among the many new features is the addition of this integrated knowledge-driven product that enables data stewards everywhere to profile, match, and cleanse data. In addition to the homegrown rules that data stewards can design and implement, there are also connectors to third party providers that are hosted in the Azure Datamarket marketplace.  In this review, I leverage SQL Server 2012 Data Quality Services, and proceed to subscribe to a third party data cleansing product through the Datamarket to showcase this unique capability. Crucial Questions For the purposes of the review, I used a database I had in an Excel spreadsheet with name and address information. Upon a cursory inspection, there are miscellaneous problems with these records; some addresses are missing ZIP codes, others missing a city, and some records are slightly misspelled or have unparsed suites. With DQS, I can easily add a knowledge base to help standardize my values, such as for state abbreviations. But how do I know that my address is correct? And if my address is not correct, what should it be corrected to? The answer lies in a third party knowledge base by the acknowledged USPS certified address accuracy experts at Melissa Data. Reference Data Services Within DQS there is a handy feature to actually add reference data from many different third-party Reference Data Services (RDS) vendors. DQS simplifies the processes of cleansing, standardizing, and enriching data through custom rules and through service providers from the Azure Datamarket. A quick jump over to the Datamarket site shows me that there are a handful of providers that offer data directly through Data Quality Services. Upon subscribing to these services, one can attach a DQS domain or composite domain (fields in a record) to a reference data service provider, and begin using it to cleanse, standardize, and enrich that data. Besides what I am looking for (address correction and enrichment), it is possible to subscribe to a host of other services including geocoding, IP address reference, phone checking and enrichment, as well as name parsing, standardization, and genderization.  These capabilities extend the data quality that DQS has natively by quite a bit. For my current address correction review, I needed to first sign up to a reference data provider on the Azure Data Market site. For this example, I used Melissa Data’s Address Check Service. They offer free one-month trials, so if you wish to follow along, or need to add address quality to your own data, I encourage you to sign up with them. Once I subscribed to the desired Reference Data Provider, I navigated my browser to the Account Keys within My Account to view the generated account key, which I then inserted into the DQS Client – Configuration under the Administration area. Step by Step to Guide That was all it took to hook in the subscribed provider -Melissa Data- directly to my DQS Client. The next step was for me to attach and map in my Reference Data from the newly acquired reference data provider, to a domain in my knowledge base. On the DQS Client home screen, I selected “New Knowledge Base” under Knowledge Base Management on the left-hand side of the home screen. Under New Knowledge Base, I typed a Name and description of my new knowledge base, then proceeded to the Domain Management screen. 
Here I established a series of domains (fields) and then linked them all together as a composite domain (record set). Using the Create Domain button, I created the following domains according to the fields in my incoming data: Name Address Suite City State Zip I added a Suite column in my domain because Melissa Data has the ability to return missing Suites based on last name or company. And that’s a great benefit of using these third party providers, as they have data that the data steward would not normally have access to. The bottom line is, with these third party data providers, I can actually improve my data. Next, I created a composite domain (fulladdress) and added the (field) domains into the composite domain. This essentially groups our address fields together in a record to facilitate the full address cleansing they perform. I then selected my newly created composite domain and under the Reference Data tab, added my third party reference data provider –Melissa Data’s Address Check- and mapped in each domain that I had to the provider’s Schema. Now that my composite domain has been married to the Reference Data service, I can take the newly published knowledge base and create a project to cleanse and enrich my data. My next task was to create a new Data Quality project, mapping in my data source and matching it to the appropriate domain column, and then kick off the verification process. It took just a few minutes with some progress indicators indicating that it was working. When the process concluded, there was a helpful set of tabs that place the response records into categories: suggested; new; invalid; corrected (automatically); and correct. Accepting the suggestions provided by  Melissa Data allowed me to clean up all the records and flag the invalid ones. It is very apparent that DQS makes address data quality simplistic for any IT professional. Final Note As I have shown, DQS makes data quality very easy. Within minutes I was able to set up a data cleansing and enrichment routine within my data quality project, and ensure that my address data was clean, verified, and standardized against real reference data. As reviewed here, it’s easy to see how both SQL Server 2012 and DQS work to take what used to require a highly skilled developer, and empower an average business or database person to consume external services and clean data. Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, SQL Utility, T SQL, Technology Tagged: DQS

    Read the article

< Previous Page | 1 2 3 4  | Next Page >