Search Results

Search found 7281 results on 292 pages for 'quality attributes'.


  • How to convince my boss that quality is a good thing to have in code?

    - by Kristof Claes
    My boss came to me today to ask me if we could implement a certain feature in 1.5 days. I had a look at it and told him that 2 to 3 days would be more realistic. He then asked me: "And what if we do it quick and dirty?" I asked him to explain what he meant by "quick and dirty". It turns out, he wants us to write code as quickly as humanly possible by (for example) copying bits and pieces from other projects, putting all code in the code-behind of the WebForms pages, not caring about DRY and SOLID, and assuming that the code and functionality will never ever have to be modified or changed. What's even worse, he doesn't want us to do it for just this one feature, but for all the code we write. His reasoning: we make more profit when we do things quick and dirty; clients don't want to pay for you taking into account that something might change in the future; the profit is in delivering code as quickly as possible; and as long as the application does what it needs to do, the quality of the code doesn't matter, because clients never see the code. I have tried to convince him that this is a bad way to think as the manager of a software company, but he just wouldn't listen to my arguments:

    - Developer motivation: I explained that it is hard to keep developers motivated when they are constantly under the pressure of unrealistic deadlines and budgets to write sloppy code very quickly.
    - Readability: when a project gets passed on to another developer, cleaner and better structured code will be easier to read and understand.
    - Maintainability: it is easier, safer and less time consuming to adapt, extend or change well written code.
    - Testability: it is usually easier to test and find bugs in clean code.

    My co-workers are as baffled as I am by my boss's standpoint, but we can't seem to get through to him. He keeps saying that by building things more quickly, we can sell more projects and ask a lower price for them while still making a bigger profit. And in the end these projects pay the developers' salaries. What more can I say to make him see he is wrong? I want to buy him copies of Peopleware and The Mythical Man-Month, but I have a feeling they won't change his mind either. A lot of you will probably say something like "Run! Get out of there now!" or "I'd quit!", but that's not really an option, since .NET web development jobs are rather rare in the region where I live...

    Read the article

  • SQL SERVER – Advanced Data Quality Services with Melissa Data – Azure Data Market

    - by pinaldave
    There has been much fanfare over the new SQL Server 2012, and especially around its new companion product Data Quality Services (DQS). Among the many new features is the addition of this integrated knowledge-driven product that enables data stewards everywhere to profile, match, and cleanse data. In addition to the homegrown rules that data stewards can design and implement, there are also connectors to third-party providers that are hosted in the Azure Datamarket marketplace. In this review, I leverage SQL Server 2012 Data Quality Services and subscribe to a third-party data cleansing product through the Datamarket to showcase this unique capability.

    Crucial Questions

    For the purposes of the review, I used a dataset I had in an Excel spreadsheet with name and address information. Upon a cursory inspection, there are miscellaneous problems with these records; some addresses are missing ZIP codes, others are missing a city, and some records are slightly misspelled or have unparsed suites. With DQS, I can easily add a knowledge base to help standardize my values, such as state abbreviations. But how do I know that my address is correct? And if my address is not correct, what should it be corrected to? The answer lies in a third-party knowledge base by the acknowledged USPS-certified address accuracy experts at Melissa Data.

    Reference Data Services

    Within DQS there is a handy feature to add reference data from many different third-party Reference Data Services (RDS) vendors. DQS simplifies the processes of cleansing, standardizing, and enriching data through custom rules and through service providers from the Azure Datamarket. A quick jump over to the Datamarket site shows me that there are a handful of providers that offer data directly through Data Quality Services. Upon subscribing to these services, one can attach a DQS domain or composite domain (fields in a record) to a reference data service provider, and begin using it to cleanse, standardize, and enrich that data. Besides what I am looking for (address correction and enrichment), it is possible to subscribe to a host of other services including geocoding, IP address reference, phone checking and enrichment, as well as name parsing, standardization, and genderization. These capabilities extend the data quality that DQS has natively by quite a bit.

    For my address correction review, I first needed to sign up with a reference data provider on the Azure Data Market site. For this example, I used Melissa Data's Address Check service. They offer free one-month trials, so if you wish to follow along, or need to add address quality to your own data, I encourage you to sign up with them. Once I subscribed to the desired Reference Data Provider, I navigated my browser to the Account Keys page within My Account to view the generated account key, which I then inserted into the DQS Client - Configuration under the Administration area.

    Step-by-Step Guide

    That was all it took to hook the subscribed provider (Melissa Data) directly into my DQS Client. The next step was to attach and map the reference data from the newly acquired provider to a domain in my knowledge base. On the DQS Client home screen, I selected "New Knowledge Base" under Knowledge Base Management on the left-hand side of the home screen. Under New Knowledge Base, I typed a name and description for my new knowledge base, then proceeded to the Domain Management screen. Here I established a series of domains (fields) and then linked them all together as a composite domain (record set). Using the Create Domain button, I created the following domains according to the fields in my incoming data: Name, Address, Suite, City, State, Zip. I added a Suite column in my domain because Melissa Data has the ability to return missing suites based on last name or company. And that's a great benefit of using these third-party providers: they have data that the data steward would not normally have access to. The bottom line is, with these third-party data providers, I can actually improve my data.

    Next, I created a composite domain (fulladdress) and added the (field) domains into the composite domain. This essentially groups our address fields together in a record to facilitate the full address cleansing they perform. I then selected my newly created composite domain and, under the Reference Data tab, added my third-party reference data provider (Melissa Data's Address Check) and mapped each domain that I had to the provider's schema. Now that my composite domain has been married to the Reference Data service, I can take the newly published knowledge base and create a project to cleanse and enrich my data.

    My next task was to create a new Data Quality project, mapping in my data source and matching it to the appropriate domain column, and then kick off the verification process. It took just a few minutes, with some progress indicators showing that it was working. When the process concluded, there was a helpful set of tabs that placed the response records into categories: suggested, new, invalid, corrected (automatically), and correct. Accepting the suggestions provided by Melissa Data allowed me to clean up all the records and flag the invalid ones. It is very apparent that DQS makes address data quality simple for any IT professional.

    Final Note

    As I have shown, DQS makes data quality very easy. Within minutes I was able to set up a data cleansing and enrichment routine within my data quality project, and ensure that my address data was clean, verified, and standardized against real reference data. As reviewed here, it's easy to see how both SQL Server 2012 and DQS work to take what used to require a highly skilled developer, and empower an average business or database person to consume external services and clean data. Reference: Pinal Dave (http://blog.sqlauthority.com)

    Read the article

  • C#/.NET: Retrieving the contents/file attributes from a file inside a recycle bin

    - by eibhrum
    I just want to ask if it's possible to retrieve the details of a 'dumped' file from the Recycle Bin programmatically. The details I'm looking for are file attributes like 'Date Last Modified', 'Date Created', 'Size', etc. (without restoring the file to its original location, so the attributes it has while inside the Recycle Bin are preserved). Comments and suggestions are highly appreciated. Thanks.
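    One way to poke at this, offered as a sketch rather than a definitive answer: the Shell COM API exposes the Recycle Bin as a virtual folder, and its items carry the original name, size and dates without any restore. This assumes a COM reference to "Microsoft Shell Controls and Automation" (Shell32), and the GetDetailsOf column indices differ between Windows versions, so treat the one below as an assumption to verify:

        using System;
        using Shell32;

        class RecycleBinReader
        {
            [STAThread]
            static void Main()
            {
                var shell = new Shell();
                // 10 == ssfBITBUCKET, the Recycle Bin virtual folder
                Folder recycleBin = shell.NameSpace(10);

                foreach (FolderItem item in recycleBin.Items())
                {
                    // Basic attributes, read in place -- no restore needed
                    Console.WriteLine("Name: {0}", item.Name);
                    Console.WriteLine("Size: {0}", item.Size);
                    Console.WriteLine("Modified: {0}", item.ModifyDate);

                    // Extra columns (original location, date deleted, ...)
                    // come from GetDetailsOf; verify the index on your OS.
                    Console.WriteLine("Deleted: {0}",
                        recycleBin.GetDetailsOf(item, 2));
                }
            }
        }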

    Read the article

  • Attributes don't inherit?

    - by Stebi
    I played with attributes and assumed that they are inherited, but it doesn't seem so:

        type
          [MyAttribute]
          TClass1 = class
          end;

          TClass2 = class(TClass1)
          end;

    TClass2 doesn't have the attribute "MyAttribute", although it inherits from TClass1. Is there any way to make an attribute inheritable? Or do I have to go up the class hierarchy and search for attributes?
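    As far as I know, Delphi's RTTI only reports attributes declared on the type itself, so walking the hierarchy is the usual workaround. A minimal sketch, assuming Delphi 2010 or later and the Rtti unit (the helper function and its name are my own, not a library API):

        uses
          Rtti;   // System.Rtti in XE2 and later

        type
          TCustomAttributeClass = class of TCustomAttribute;

        // Returns the first attribute of the given class found on AClass
        // or any of its ancestors, or nil if none is found.
        function FindInheritedAttribute(AClass: TClass;
          AAttrClass: TCustomAttributeClass): TCustomAttribute;
        var
          Ctx: TRttiContext;
          RType: TRttiType;
          Attr: TCustomAttribute;
        begin
          Result := nil;
          Ctx := TRttiContext.Create;
          try
            RType := Ctx.GetType(AClass);
            while RType <> nil do
            begin
              for Attr in RType.GetAttributes do
                if Attr.InheritsFrom(AAttrClass) then
                  Exit(Attr);
              RType := RType.BaseType;   // climb to the ancestor
            end;
          finally
            Ctx.Free;
          end;
        end;

    With the types from the question, FindInheritedAttribute(TClass2, MyAttribute) would find the attribute declared on TClass1.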

    Read the article

  • Automating HP Quality Center with Python or Java

    - by Hari
    We have a project that uses HP Quality Center, and one of the regular issues we face is people not updating comments on their defects. So I was thinking we could come up with a small script or tool that could periodically throw up a reminder and force the user to update the comments. I came across the Open Test Architecture (OTA) API and was wondering if there are any good Python or Java examples of it that I could look at. Thanks, Hari
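    Not a full answer, but a starting point: OTA is a COM API, so from Python the usual route is pywin32. A minimal sketch that connects and lists defects with empty comments; the ProgID is the QC 9/10-era one, and the server URL, credentials, domain/project and field names are placeholders to adjust for your installation:

        import win32com.client

        # Connect to the Quality Center OTA COM server
        td = win32com.client.Dispatch("TDApiOle80.TDConnection")
        td.InitConnectionEx("http://qcserver:8080/qcbin")   # placeholder URL
        td.Login("qc_user", "qc_password")                  # placeholder credentials
        td.Connect("MY_DOMAIN", "MY_PROJECT")               # placeholder names

        # An empty filter string returns every defect in the project
        for bug in td.BugFactory.NewList(""):
            # BG_DEV_COMMENTS is the usual comments field; verify in your schema
            if not bug.Field("BG_DEV_COMMENTS"):
                print "Defect %s (%s) has no comments" % (bug.ID, bug.Summary)

        td.Disconnect()
        td.Logout()
        td.ReleaseConnection()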

    Read the article

  • Mocking attributes - C#

    - by bob
    I use custom attributes in a project and I would like to integrate them into my unit tests. I currently use Rhino Mocks to create my mocks, but I don't see a way to add my attributes (and their parameters) to them. Did I miss something, or is it not possible? Another mocking framework? Or do I have to create dummy implementations with my attributes? Example: I have an interface in a plugin architecture (IPlugin), and there is an attribute to add meta info to a property. I then look for properties with this attribute in the plugin implementation for extra processing (storing its value, marking it as GUI read-only, ...). Now, when I create a mock, can I easily add an attribute to a property or to the object instance itself? EDIT: I found a post with the same question - link. The answer there is not 100% and it is Java... EDIT 2: It can be done... searched some more (on SO) and found 2 related questions (+ answers) here and here. Now, is this already implemented in one or another mocking framework?
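    For what it's worth, the technique behind those linked answers is to sidestep the mocking framework: attributes live in type metadata, so you can generate a small dummy type at runtime with System.Reflection.Emit and decorate it there. A minimal sketch, with a hypothetical MyMetaAttribute standing in for the real plugin attribute:

        using System;
        using System.Reflection;
        using System.Reflection.Emit;

        // Hypothetical attribute standing in for the plugin metadata attribute
        [AttributeUsage(AttributeTargets.Class | AttributeTargets.Property)]
        public class MyMetaAttribute : Attribute
        {
            public string Value { get; private set; }
            public MyMetaAttribute(string value) { Value = value; }
        }

        static class Demo
        {
            static void Main()
            {
                var asm = AppDomain.CurrentDomain.DefineDynamicAssembly(
                    new AssemblyName("DynamicMocks"), AssemblyBuilderAccess.Run);
                var module = asm.DefineDynamicModule("Main");
                var tb = module.DefineType("FakePlugin", TypeAttributes.Public);

                // Attach [MyMeta("gui-readonly")] to the generated type
                var ctor = typeof(MyMetaAttribute).GetConstructor(new[] { typeof(string) });
                tb.SetCustomAttribute(new CustomAttributeBuilder(
                    ctor, new object[] { "gui-readonly" }));

                Type fake = tb.CreateType();
                var attr = (MyMetaAttribute)fake
                    .GetCustomAttributes(typeof(MyMetaAttribute), false)[0];
                Console.WriteLine(attr.Value);   // prints "gui-readonly"
            }
        }

    The same SetCustomAttribute call exists on PropertyBuilder, so per-property attributes work the same way; the generated type can also be made to implement IPlugin, at the cost of emitting method bodies.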

    Read the article

  • How to improve DrawDib's quality?

    - by sxingfeng
    I am coding in C++ with GDI. I use StretchDIBits to draw images to a DC:

        ::SetStretchBltMode(hDC, HALFTONE);
        ::StretchDIBits(hDC,
            des.left, des.top, des.right - des.left, des.bottom - des.top,
            0, 0, img.getWidth(), img.getHeight(),
            img.accessPixels(), img.getInfo(),
            DIB_RGB_COLORS, SRCCOPY);

    However, it is slow, so I switched to the DrawDib functions:

        ::SetStretchBltMode(hDC, HALFTONE);
        DrawDibDraw(hdd, hDC,
            des.left, des.top, des.right - des.left, des.bottom - des.top,
            (LPBITMAPINFOHEADER)(img.getInfo()),
            img.accessPixels(),
            0, 0, img.getWidth(), img.getHeight(),
            DDF_HALFTONE);

    However, the result looks as if it had been drawn in COLORONCOLOR mode. How can I improve the drawing quality?

    Read the article

  • High quality software examples

    - by Francisco Garcia
    One of the best ways to learn about programming is reading high quality code/projects from great engineers. Which open-source projects do you think are worth looking at? I mean code that you could print out and enjoy reading under a tree with a glass of wine. If you can, also specify whether the software is great to look at because of its documentation, design, UML diagrams or just plain code. I believe UML is not very common within open-source projects. Is there such a thing as a project branch that polishes code and design with the sole objective of giving other programmers a great example of great software?

    Read the article

  • Saving custom attributes in NSAttributedString

    - by regulus6633
    I need to add a custom attribute to the selected text in an NSTextView. I can do that by getting the attributed string for the selection, adding a custom attribute to it, and then replacing the selection with my new attributed string. I then get the text view's attributed string as NSData and write it to a file. Later, when I open that file and restore it to the text view, my custom attributes are gone! After working out the entire scheme for my custom attribute, I find that custom attributes are not saved for you. Look at the IMPORTANT note here: http://developer.apple.com/mac/library/DOCUMENTATION/Cocoa/Conceptual/AttributedStrings/Tasks/RTFAndAttrStrings.html So I have no idea how to save and restore my documents with this custom attribute. Any help?
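    A sketch of one way out, assuming the attribute values themselves conform to NSCoding: skip the RTF round-trip (which is what drops custom attributes) and archive the attributed string directly, since NSAttributedString adopts NSCoding. Here `path` is a placeholder file path:

        // Saving: archive the whole attributed string, custom attributes included
        NSAttributedString *text = [[textView textStorage] copy];
        NSData *data = [NSKeyedArchiver archivedDataWithRootObject:text];
        [data writeToFile:path atomically:YES];

        // Loading: unarchive and hand the string back to the text view
        NSData *saved = [NSData dataWithContentsOfFile:path];
        NSAttributedString *restored = [NSKeyedUnarchiver unarchiveObjectWithData:saved];
        [[textView textStorage] setAttributedString:restored];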

    Read the article

  • How to save nested form attributes to the database

    - by siulamvictor
    I don't really understand how nested attributes work in Rails. I have 2 models, Account and User. Account has_many Users. When a new user fills in the form, Rails reports: User(#2164802740) expected, got Array(#2148376200) Is it that Rails cannot read the nested attributes from the form? How can I fix it? How can I save the data from the nested attributes form to the database? Thanks all~ Here are the MVCs:

    Account Model:

        class Account < ActiveRecord::Base
          has_many :users
          accepts_nested_attributes_for :users

          validates_presence_of :company_name, :message => "companyname is required."
          validates_presence_of :company_website, :message => "website is required."
        end

    User Model:

        class User < ActiveRecord::Base
          belongs_to :account

          validates_presence_of :user_name, :message => "username too short."
          validates_presence_of :password, :message => "password too short."
        end

    Account Controller:

        class AccountController < ApplicationController
          def new
          end

          def created
          end

          def create
            @account = Account.new(params[:account])
            if @account.save
              redirect_to :action => "created"
            else
              flash[:notice] = "error!!!"
              render :action => "new"
            end
          end
        end

    Account/new View:

        <h1>Account#new</h1>
        <% form_for :account, :url => { :action => "create" } do |f| %>
          <% f.fields_for :users do |ff| %>
            <p>
              <%= ff.label :user_name %><br />
              <%= ff.text_field :user_name %>
            </p>
            <p>
              <%= ff.label :password %><br />
              <%= ff.password_field :password %>
            </p>
          <% end %>
          <p>
            <%= f.label :company_name %><br />
            <%= f.text_field :company_name %>
          </p>
          <p>
            <%= f.label :company_website %><br />
            <%= f.text_field :company_website %>
          </p>
        <% end %>

    Account Migration:

        class CreateAccounts < ActiveRecord::Migration
          def self.up
            create_table :accounts do |t|
              t.string :company_name
              t.string :company_website
              t.timestamps
            end
          end

          def self.down
            drop_table :accounts
          end
        end

    User Migration:

        class CreateUsers < ActiveRecord::Migration
          def self.up
            create_table :users do |t|
              t.string :user_name
              t.string :password
              t.integer :account_id
              t.timestamps
            end
          end

          def self.down
            drop_table :users
          end
        end

    Thanks everyone. :)
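    For what it's worth, the "expected User, got Array" error is usually a params-shape problem: fields_for only emits the users_attributes keys that accepts_nested_attributes_for expects when the form is bound to an object rather than a bare symbol, and when a nested record has been built. A sketch of the usual fix (Rails 2.3/3-era conventions, view abbreviated):

        # app/controllers/account_controller.rb -- build the nested record first
        def new
          @account = Account.new
          @account.users.build
        end

        # app/views/account/new.html.erb -- bind the form to the object so
        # fields_for generates account[users_attributes][0][...] params
        <% form_for @account, :url => { :action => "create" } do |f| %>
          <% f.fields_for :users do |ff| %>
            <%= ff.label :user_name %>
            <%= ff.text_field :user_name %>
            <%= ff.label :password %>
            <%= ff.password_field :password %>
          <% end %>
          <%= f.submit "Create" %>
        <% end %>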

    Read the article

  • RegEx to extract all HTML tag attributes including inline JavaScript

    - by Mike
    I found this useful regex code here while looking to parse HTML tag attributes: (\S+)=["']?((?:.(?!["']?\s+(?:\S+)=|[>"']))+.)["']? It works great, but it's missing one key element that I need. Some attributes are event triggers that contain inline JavaScript code, like this: onclick="doSomething(this, 'foo', 'bar');return false;" Or: onclick='doSomething(this, "foo", "bar");return false;' I can't figure out how to get the original expression to ignore the quotes from the JS (single or double) while they are nested inside the pair of quotes that delimits the attribute's value.
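    One approach, assuming the attribute values you care about are always quoted: let each regex alternative own a single quote style, so a double-quoted value can contain single quotes as plain data and vice versa. A quick Python sketch to illustrate (the pattern itself is portable to other engines):

        import re

        # Each alternative consumes only its own quote style, so quotes of the
        # other style inside the value (common in inline JS) are treated as data.
        ATTR_RE = re.compile(r'''(\S+)\s*=\s*("[^"]*"|'[^']*')''')

        tag = '''<a href="/x" onclick='doSomething(this, "foo", "bar");return false;'>'''
        for name, value in ATTR_RE.findall(tag):
            print name, value[1:-1]    # strip the outer quotes

    Unquoted attribute values aren't handled here, and for anything beyond a quick scrape a real HTML parser is the safer tool.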

    Read the article

  • Inheritance of Custom Attributes on Abstract Properties

    - by Marty Trenouth
    I've got a custom attribute that I want to apply to my base abstract class so that I can skip elements that don't need to be viewed by the user when displaying the item in HTML. It seems that the properties overriding the base class are not inheriting the attributes. Does overriding base properties (abstract or virtual) blow away attributes placed on the original property?

    The attribute class definition:

        [AttributeUsage(AttributeTargets.Property, Inherited = true, AllowMultiple = false)]
        public class NoHtmlOutput : Attribute
        {
        }

    The abstract class definition:

        [NoHtmlOutput]
        public abstract Guid UniqueID { get; set; }

    The concrete class definition:

        public override Guid UniqueID { get { return MasterId; } set { MasterId = value; } }

    The class checking for the attribute:

        Type t = o.GetType();
        foreach (PropertyInfo pi in t.GetProperties())
        {
            if (pi.GetCustomAttributes(typeof(NoHtmlOutput), true).Length == 1)
                continue;
            // processing logic goes here
        }
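    As far as I know the attribute isn't blown away; the catch is that PropertyInfo.GetCustomAttributes ignores its inherit flag for properties and only inspects the declaring type. The static Attribute.GetCustomAttributes overloads do walk the override chain, so a sketch of the loop rewritten that way:

        Type t = o.GetType();
        foreach (PropertyInfo pi in t.GetProperties())
        {
            // Unlike pi.GetCustomAttributes(..., true), the static helper
            // honors Inherited = true for overridden properties.
            if (Attribute.GetCustomAttributes(pi, typeof(NoHtmlOutput), true).Length == 1)
                continue;   // attribute found on this property or a base override

            // processing logic goes here
        }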

    Read the article

  • Ruby on Rails - nested attributes: How do I access the parent model from the child model?

    - by TMaYaD
    I have a couple of models, like so:

        class Bill < ActiveRecord::Base
          has_many :bill_items
          belongs_to :store
          accepts_nested_attributes_for :bill_items
        end

        class BillItem < ActiveRecord::Base
          belongs_to :product
          belongs_to :bill

          validate :has_enough_stock

          def has_enough_stock
            stock_available = Inventory.product_is(self.product).store_is(self.bill.store).one.quantity
            errors.add(:quantity, "only #{stock_available} is available") if stock_available < self.quantity
          end
        end

    The validation obviously doesn't work: when the bill_items are built from nested attributes in the bill form, bill_item.bill_id and bill_item.bill are not available before the record is saved. So how do I go about doing something like this?
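    A common fix, assuming a Rails version with inverse_of support (2.3.6+): declare inverse_of on both sides of the association, so records built through nested attributes get their bill back-reference populated in memory before anything is saved. A sketch:

        class Bill < ActiveRecord::Base
          has_many :bill_items, :inverse_of => :bill
          belongs_to :store
          accepts_nested_attributes_for :bill_items
        end

        class BillItem < ActiveRecord::Base
          belongs_to :bill, :inverse_of => :bill_items

          def has_enough_stock
            # self.bill is now set in memory even for unsaved nested records
            stock_available = Inventory.product_is(product).store_is(bill.store).one.quantity
            errors.add(:quantity, "only #{stock_available} is available") if stock_available < quantity
          end
        end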

    Read the article

  • rails nested attributes

    - by user342798
    I am using Rails 3.0.0.beta3 and I am trying to implement a form with nested attributes using accepts_nested_attributes_for. My form is nested three levels deep: Survey, Question, Answer. Survey has_many Questions, and Question has_many Answers. Inside the Survey model there is accepts_nested_attributes_for :questions, and inside the Question model there is accepts_nested_attributes_for :answers. Everything works fine except that when I add a new answer to an existing question, it doesn't get created. However, if I also make a change to the corresponding question while creating the answer, the answer is created successfully. This example closely follows a Railscast: http://railscasts.com/episodes/197-nested-model-form-part-2 but it doesn't work in Rails 3 (at least in my case). Please let me know if there is any issue with nested attributes in Rails 3. Thanks in advance.

    Read the article

  • .Net SvcUtil: attributes must be optional

    - by Michel van Engelen
    I'm trying to generate C# code classes with SvcUtil.exe instead of Xsd.exe, because the latter is giving me some problems. Command line:

        SvcUtil.exe myschema.xsd /dconly /ser:XmlSerializer

    Several SvcUtil problems are described and solved here: http://blog.shutupandcode.net/?p=761 One problem I can't solve is this one:

        Error: Type 'DatafieldDescription' in namespace '' cannot be imported.
        Attributes must be optional and from namespace
        'http://schemas.microsoft.com/2003/10/Serialization/'. Either change the
        schema so that the types can map to data contract types or use
        ImportXmlType or use a different serializer.

    I changed <xs:attribute name="Order" use="required"> to <xs:attribute name="Order" use="optional"> and to <xs:attribute name="Order">, but the error remains. Is it possible to use attributes, or do I have to delete them all (in which case this exercise is over)?
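    Not a definitive answer, but the error text itself names the two usual ways out: the data contract model simply has no representation for XSD attributes, so either let svcutil import such types as XML types, or generate XmlSerializer classes without /dconly. Hedged sketches of both command lines:

        rem Option 1: stay in data-contract-only mode, importing
        rem attribute-bearing types as IXmlSerializable XML types
        svcutil.exe myschema.xsd /dconly /importXmlTypes

        rem Option 2: generate XmlSerializer types instead; as far as I can
        rem tell, /ser:XmlSerializer has no effect in /dconly mode anyway
        xsd.exe myschema.xsd /classes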

    Read the article

  • Attributes.Add in Server-Side Code (aspx.cs) to Change a Label's Text with JavaScript

    - by DFM
    I am trying to change a label's text with a client-side onclick handler that I attach from C# in the Page_Load event. For example, I would like to write something like the following:

        Label1.Attributes.Add("onclick", "Label2.text='new caption'")

    Does anyone know the correct code for this? Also, what is this type of code called: is it just JavaScript, or JavaScript in C#, or is there a specific name for it? Lastly, is there a book or online resource that lists the choices of control.Attributes.Add("event", "syntax") code to use with C#? Thank you, DFM
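    A sketch of the usual shape of this, not an authoritative answer: the second argument is client-side JavaScript, so it must address the rendered DOM element rather than the server control name. A Label renders as a <span> whose id comes from ClientID:

        // In Page_Load: attach a client-side onclick that rewrites the
        // <span> rendered for Label2; ClientID maps the server ID to the DOM id.
        Label1.Attributes.Add("onclick", string.Format(
            "document.getElementById('{0}').innerHTML = 'new caption';",
            Label2.ClientID));

    Note that this only changes the browser's copy of the text; Label2.Text on the server is not updated by the click.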

    Read the article

  • How to clone a model's attributes easily?

    - by Zabba
    I have these models:

        class Address < ActiveRecord::Base
          belongs_to :event
          attr_accessible :street, :city
          validates :street, :city, :presence => true
        end

        class Event < ActiveRecord::Base
          has_one :address
          accepts_nested_attributes_for :address
        end

    If I do the assignment below in the Event controller's create action and save the event, I get an error:

        # Use the current user's address for the event
        @event.address_attributes = current_user.address.attributes
        # Error occurs at the above mentioned line

        ActiveRecord::RecordNotFound (Couldn't find Address with ID=1 for Event with ID=)

    I think what's happening is that all the address's attributes (including the primary key) are getting assigned by the @event.address_attributes = line. But all I really want copied over is the "real data" (street, city), not the primary key or created_at etc. I suppose I could write a small method to do this sort of selective copy, but I can't help feeling there must be a built-in method for this. What's the best/right way to do this?
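    One built-in-ish route, assuming ActiveSupport's Hash#except is available (it is in any Rails app): attributes returns a plain hash, so the bookkeeping columns can be dropped before assignment. A sketch:

        # Copy only the "real" data, dropping keys that identify the source row
        @event.address_attributes =
          current_user.address.attributes.except("id", "created_at", "updated_at")

    The whitelist version, attributes.slice("street", "city"), reads even more clearly when the interesting columns are few.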

    Read the article

  • Oracle Enterprise Data Quality: Ever Integration-ready

    - by Mala Narasimharajan
    It is closing in on a year now since Oracle's acquisition of Datanomic, and the addition of Oracle Enterprise Data Quality (EDQ) to the Oracle software family. The big move has caused some big shifts in emphasis and some very encouraging excitement from the field. To give an illustration, combined with a shameless promotion of how EDQ can help to give quick insights into your data, I did a quick Phrase Profile of the subject field of emails to the Global EDQ mailing list since it was set up last September. The results revealed a very clear theme: Integration, Integration, Integration! As well as the important Siebel and Oracle Data Integrator (ODI) integrations, we have been asked about integration with a huge variety of Oracle applications, including EBS, Peoplesoft, CRM on Demand, Fusion, DRM, Endeca, RightNow, and more - and we have not stood still! While it would not have been possible to develop specific pre-integrations with all of the above within a year, we have developed a package of feature-rich out-of-the-box web services and batch processes that can be plugged into any application or middleware technology with ease. And with Siebel, they work out of the box. Oracle Enterprise Data Quality version 9.0.4 includes the Customer Data Services (CDS) pack - a ready set of standard processes with standard interfaces, providing integrated:

    - Address verification and cleansing
    - Individual matching
    - Organization matching

    The services are suitable for either batch or real-time processing, and are enabled for international data, with simple configuration options driving the set of locale-specific dictionaries that are used. For example, large dictionaries are provided to support international name transcription and variant matching, including highly specialized handling for Arabic, Japanese, Chinese and Korean data. In total across all locales, CDS includes well over a million dictionary entries.

    [Figure: Excerpt from EDQ's CDS Individual Name Standardization Dictionary]

    CDS has been developed to replace the OEM of Informatica Identity Resolution (IIR) for attached Data Quality on the Oracle price list, but does this in a way that creates a 'best of both worlds' situation for customers, who can harness not only the out-of-the-box functionality of pre-packaged matching and standardization services, but also the flexibility of OEDQ if they want to customize the interfaces or the process logic, without having to learn more than one product. From a competitive point of view, we believe this stands us in good stead against our key competitors, including Informatica, who have separate 'Identity Resolution' and general DQ products, and IBM, who provide limited out-of-the-box capabilities (with a steep learning curve) in both their QualityStage data quality and Initiate matching products. Here is a brief guide to the main services provided in the pack.

    Address Verification and Standardization

    [Figure: EDQ's CDS Address Cleaning Process]

    The Address Verification and Standardization service uses EDQ Address Verification (an OEM of Loqate software) to verify and clean addresses in either real time or batch. The Address Verification processor is wrapped in an EDQ process - this adds significant capabilities over calling the underlying Address Verification API directly, specifically:

    - Country-specific thresholds to determine when to accept the verification result (and therefore to change the input address) based on the confidence level of the API
    - Optimization of address verification by pre-standardizing data where required
    - Formatting of output addresses into the input address fields normally used by applications
    - Adding descriptions of the address verification and geocoding return codes

    The process can then be used to provide real-time and batch address cleansing in any application, such as a simple web page calling address cleaning and geocoding as part of a check on individual data.

    Duplicate Prevention

    Unlike Informatica Identity Resolution (IIR), EDQ uses stateless services for duplicate prevention, to avoid issues caused by complex replication and synchronization of large-volume customer data. When a record is added or updated in an application, the EDQ Cluster Key Generation service is called, and returns a number of key values. These are used to select other records ('candidates') that may match in the application data (which has been pre-seeded with keys using the same service). The 'driving record' (the new or updated record) is then presented along with all selected candidates to the EDQ Matching Service, which decides which of the candidates are a good match with the driving record, and scores them according to the strength of match. In this model, complex multi-locale EDQ techniques can be used to generate the keys and ensure that the right balance between performance and matching effectiveness is maintained, while ensuring that the application retains control of data integrity and transactional commits.

    [Figure: EDQ Duplicate Prevention Architecture]

    Note that where the integration is with a hub, there may be an additional call to the Cluster Key Generation service if the master record has changed due to merges with other records (and therefore needs to have new key values generated before commit).

    Batch Matching

    To allow customers to use different match rules in batch than in real time, separate matching templates are provided for batch matching. For example, some customers want to minimize intervention in key user flows (such as adding new customers) in front-end applications, but to conduct a more exhaustive match on a regular basis in the back office. The batch matching jobs are also used when migrating data between systems, and in this case normally a more precise (and automated) type of matching is required, in order to minimize the review work performed by Data Stewards. In batch matching, data is captured into EDQ using its standard interfaces, and records are standardized, clustered and matched in an EDQ job before matches are written out. As with all EDQ jobs, batch matching may be called from Oracle Data Integrator (ODI) if required. When working with Siebel CRM (or master data in Siebel UCM), Siebel's Data Quality Manager is used to instigate batch jobs, and a shared staging database is used to write records for matching and to consume match results. The CDS batch matching processes automatically adjust to Siebel's 'Full Match' (match all records against each other) and 'Incremental Match' (match a subset of records against all of their selected candidates) modes.

    The Future

    The Customer Data Services Pack is an important part of the Oracle strategy for EDQ, offering a clear path to making Data Quality Assurance an integral part of enterprise applications, and providing a strong value proposition for adopting EDQ. We are planning various additions and improvements, including:

    - An out-of-the-box Data Quality Dashboard
    - Even more comprehensive international data handling
    - Address search (suggesting multiple results)
    - Integrated address matching

    The EDQ Customer Data Services Pack is part of the Enterprise Data Quality Media Pack, available for download at http://www.oracle.com/technetwork/middleware/oedq/downloads/index.html.

    Read the article

  • Oracle Enterprise Data Quality: A Leader in Customer Satisfaction

    - by Mala Narasimharajan
    It's always good to hear feedback from practitioners - the ones in the trenches who have experienced both the good and the bad sides of enterprise software. Gartner recently released a report which surveyed 260 data quality professionals from around the world and found that most expressed considerable satisfaction with their data quality tool vendors. One key finding stands out: Datanomic (acquired by Oracle) leads the pack in overall customer satisfaction among data quality tools. Read all about it right here: http://bit.ly/Ay45SG

    Read the article

  • Partner Webcast - Focus on Oracle Data Profiling and Data Quality 11g

    - by lukasz.romaszewski(at)oracle.com
    Partner Webcast - Focus on Oracle Data Profiling and Data Quality 11g, February 24th, 12am CET. Oracle offers an integrated suite of Data Quality software architected to discover and correct today's data quality problems and establish a platform prepared for tomorrow's yet unknown data challenges. Oracle Data Profiling provides data investigation, discovery, and profiling in support of quality, migration, integration, stewardship, and governance initiatives. It includes a broad range of features that expand upon basic profiling, including automated monitoring, business-rule validation, and trend analysis. Oracle Data Quality for Data Integrator provides cleansing, standardization, matching, address validation, location enrichment, and linking functions for global customer data and operational business data. It ensures that data adheres to established standards that are adaptable to fit each organization's specific needs. Both single- and double-byte data are processed in local languages to provide a unique and centralized view of customers, products and services. During this in-person briefing, Data Integration Solution Specialists will provide a technical overview and a walkthrough.

    Agenda:

    - Oracle Data Integration Strategy overview
    - A focus on Oracle Data Profiling and Oracle Data Quality for Data Integrator:
      - Oracle Data Profiling
      - Oracle Data Quality for Data Integrator
      - Live demo
      - Q&A

    Delivery Format: This FREE online LIVE eSeminar will be delivered over the Web and Conference Call. Registrations received less than 24 hours prior to start time may not receive confirmation to attend. To register, click here. For any questions please contact [email protected].

    Read the article

  • How can I have better sound quality?

    - by joelalmeidaptg
    I have been using Ubuntu for years now, and the latest Ubuntu 13.10 is working perfectly on my N56VZ. The image quality is awesome, better than on Windows, but there is one thing that is really bugging me and that kills my "cinema" experience: the sound. Ubuntu's sound quality isn't nearly as good as Windows' with the Realtek driver (with "Powerful" on the equalizer). On Ubuntu the sound is "faded"; it isn't as clear as on Windows. This happens across the whole system: VLC, YouTube, Rhythmbox... I think it is PulseAudio itself that has horrible sound quality. So, does anyone know a solution for this? How can I get better sound quality on Ubuntu?
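    One knob worth trying, offered as a guess rather than a guaranteed fix: PulseAudio's default resampler trades quality for CPU, and /etc/pulse/daemon.conf lets you raise both the resampler quality and the sample format. A sketch of the relevant settings (uncomment and edit the lines, then restart PulseAudio with "pulseaudio -k"):

        ; /etc/pulse/daemon.conf -- higher-quality resampling and wider samples
        resample-method = src-sinc-best-quality
        default-sample-format = s24le
        default-sample-rate = 44100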

    Read the article

  • How to draw the line between quality and time?

    - by m3th0dman
    On one hand, I have been taught by various software engineering books ([1] as an example) that my job as a programmer is to make the best possible software: great design, flexibility, easy maintainability, etc. On the other hand, I realize that I actually write software for money, not for entertainment. Although it is very nice to write good code, plan ahead, refactor after writing and so on, I wonder if it is always best for the business (after all, we should be responsible). Does the business always benefit from the best code? Maybe I'm over-engineering something that isn't always useful? So how should I know when to stop in the process of achieving the best possible code? I am sure that experience makes a difference here, but I believe it cannot be the only answer. [1] Uncle Bob, in Clean Code (page 6), says: They [managers] may defend the schedule and requirements with passion; but that's their job. It's your job to defend the code with equal passion.

    Read the article

