Search Results

Search found 58965 results on 2359 pages for 'ssis data transformations'.


  • SQL SERVER – Why Do We Need Data Quality Services – Importance and Significance of Data Quality Services (DQS)

    - by pinaldave
    Databases are awesome.  I'm sure my readers know my opinion about this – I have made SQL Server my life's work after all!  I love technology and all things computer-related.  Of course, even with my love for technology, I have to admit that it has its limits.  For example, it takes a human brain to notice that data has been input incorrectly.  Computer "brains" might be faster than humans, but human brains are still better at pattern recognition.  A human brain will notice that "300" is a ridiculous age for a human to be, but to a computer it is just a number.  A human will also notice similarities between "P. Dave" and "Pinal Dave," but this would stump most computers.

    In a database, these sorts of anomalies are incredibly important.  Databases are often used by multiple people who rely on the data to be true and accurate, so data quality is key.  That is why SQL Server's improved Master Data Management story includes Data Quality Services.  This service has the ability to recognize and flag anomalies such as out-of-range numbers and similarities between data.  This allows a human brain, with its pattern-recognition abilities, to double-check and ensure that P. Dave is the same as Pinal Dave.

    A nice feature of Data Quality Services is that once you set the rules for the program to follow, it will not only keep your data organized in the future, but also go back and "fix up" any data that has already been entered.  It also allows you to combine data from multiple places and will apply these rules across the board, so that you don't have any weird issues that crop up when trying to fit a round peg into a square hole.

    There are two parts of Data Quality Services that help you accomplish all these neat things.  The first part is DQS Server, which you can think of as the server component of the system.  It is installed alongside SQL Server (it must be installed separately, after SQL Server itself) and runs quietly in the background, performing all its cleanup services.  DQS Client is the user interface that you interact with to set the rules and check over your data.  There are three main aspects of the Client: knowledge base management, data quality projects, and administration.  Knowledge base management is the part of the system that allows you to set the rules, or program the "knowledge base," so that your database is clean and consistent.  Data quality projects are what run in the background and clean up the data that is already present.  Administration allows you to check what DQS Client is doing, change rules, and generally oversee the entire process.  The whole process is user-friendly and a pleasure to use.  I highly recommend implementing Data Quality Services in your database.  Here are a few of my blog posts related to Data Quality Services, and I encourage you to try this out.
    SQL SERVER – Installing Data Quality Services (DQS) on SQL Server 2012
    SQL SERVER – Step by Step Guide to Beginning Data Quality Services in SQL Server 2012 – Introduction to DQS
    SQL SERVER – DQS Error – Cannot connect to server – A .NET Framework error occurred during execution of user-defined routine or aggregate “SetDataQualitySessions” – SetDataQualitySessionPhaseTwo
    SQL SERVER – Configuring Interactive Cleansing Suggestion Min Score for Suggestions in Data Quality Services (DQS) – Sensitivity of Suggestion
    SQL SERVER – Unable to DELETE Project in Data Quality Projects (DQS)
    Reference: Pinal Dave (http://blog.SQLAuthority.com)
    Filed under: PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, T SQL, Technology
    Tagged: Data Quality Services, DQS

    Read the article

  • Welcome Oracle Data Integration 12c: Simplified, Future-Ready Solutions with Extreme Performance

    - by Irem Radzik
    The big day for the Oracle Data Integration team has finally arrived! It is my honor to introduce you to Oracle Data Integration 12c. Today we announced the general availability of the 12c release for Oracle's key data integration products: Oracle Data Integrator 12c and Oracle GoldenGate 12c. The new release delivers extreme performance, increases IT productivity, and simplifies deployment, while helping IT organizations keep pace with new data-oriented technology trends including cloud computing, big data analytics, and real-time business intelligence. With the 12c release Oracle becomes the new leader in data integration and replication technologies, as no other vendor offers such a complete set of data integration capabilities for pervasive, continuous access to trusted data across Oracle platforms as well as third-party systems and applications.

    Oracle Data Integration 12c addresses data-driven organizations' critical and evolving data integration requirements under three key themes: future-ready solutions, extreme performance, and fast time-to-value. There are many new features that support these key differentiators for Oracle Data Integrator 12c and for Oracle GoldenGate 12c. In this first 12c blog post, I will highlight only a few:

    Future-Ready Solutions to Support Current and Emerging Initiatives: Oracle Data Integration offers robust and reliable solutions for key technology trends including cloud computing, big data analytics, real-time business intelligence, and continuous data availability. Via the tight integration with Oracle's database, middleware, and application offerings, Oracle Data Integration will continue to support new features and capabilities right away as these products evolve and provide advanced features.

    Extreme Performance: Both GoldenGate and Data Integrator are known for their high performance. The new release widens the gap even further against the competition. Oracle GoldenGate 12c's Integrated Delivery feature enables higher throughput via a special application programming interface into Oracle Database. As mentioned in the press release, customers already report up to 5X higher performance compared to earlier versions of GoldenGate. Oracle Data Integrator 12c introduces parallelism that significantly increases its performance as well.

    Fast Time-to-Value via Higher IT Productivity and Simplified Solutions: Oracle Data Integrator 12c's new flow-based declarative UI brings superior developer productivity, ease of use, and ultimately fast time to market for end users. The ability to seamlessly reuse mapping logic also speeds development. Oracle GoldenGate 12c's Integrated Delivery feature automatically and optimally tunes the process, saving time while improving performance.

    This is just a quick glimpse into Oracle Data Integrator 12c and Oracle GoldenGate 12c. On November 12th we will reveal much more about the new release in our video webcast "Introducing 12c for Oracle Data Integration". Our customer and partner speakers, including SolarWorld, BT, and Rittman Mead, will join us in launching the new release. Please join us at this free event to learn more from our executives about the 12c release, hear our customers' perspectives on the new features, and ask your questions to our experts in the live Q&A. Also, please continue to follow our blogs, tweets, and Facebook updates as we unveil more about the new features of the latest release.

    Read the article

  • SSIS how to set connection string dynamically from a config file

    - by swapna
    Hi, I am using SQL Server Integration Services (SSIS) in SQL Server Business Intelligence Development Studio. The task I need to do is this: I have to read from a source database and write the data to a destination flat file, but the source database should be configurable. That means the connection string in the OLE DB connection manager should change dynamically, and this connection string should be taken from a configuration/XML/flat file. I read that I can use variables and expressions to change the connection string dynamically, but how do I read the connection string value from a config/XML/flat file and set the variable? This is the part I am unable to do. Or is this even the right way to achieve this? Can we add web.config files to an SSIS project? I am new to SSIS. Please provide some help with examples, etc. This is quite urgent for me. Thanks SNA.

    Read the article

  • How to create SSIS package to update from one database to another database within same server

    - by Pavan Kumar
    My query is related to the answers I got for questions I had posted earlier in the same forum. I have a copy of a client database which is attached to the SQL Server where the central database exists. The copy already contains the updated data. I just want to update from that copy to the central database; both hold the same schema and are present on the same server, using C# .NET. One of the solutions I got was to create an SSIS package and run it from the UI. I want to know how I can create an SSIS package to achieve this. I am new to SSIS. I am using SQL Server 2005 Standard Edition and have Visual Studio 2008 SP1 installed. I learnt that BIDS 2005, which comes by default with SQL Server 2005, is used to create packages. Can someone please give me an example, as I am new to this. (A sketch of the kind of statement such a package could run is shown below.)
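
    Since both databases share the same schema and sit on the same instance, one simple option is a package with an Execute SQL Task per table running cross-database T-SQL. A minimal, hedged sketch follows; the database, table, and column names (ClientCopy, CentralDB, dbo.Customer) are hypothetical, and SQL Server 2005 is assumed (hence no MERGE):

        -- Update rows that already exist in the central database
        UPDATE c
        SET    c.Name = s.Name,
               c.City = s.City
        FROM   CentralDB.dbo.Customer AS c
        JOIN   ClientCopy.dbo.Customer AS s
               ON s.CustomerID = c.CustomerID;

        -- Insert rows that exist only in the client copy
        INSERT INTO CentralDB.dbo.Customer (CustomerID, Name, City)
        SELECT s.CustomerID, s.Name, s.City
        FROM   ClientCopy.dbo.Customer AS s
        WHERE  NOT EXISTS (SELECT 1
                           FROM   CentralDB.dbo.Customer AS c
                           WHERE  c.CustomerID = s.CustomerID);

    Alternatively, a Data Flow Task (OLE DB Source on the copy, Lookup, OLE DB Destination or OLE DB Command on the central database) achieves the same result without hand-written SQL.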

    Read the article

  • Recordset manipulation in SSIS

    - by JSacksteder
    In my SSIS job, I have a need to accumulate a set of rows and commit them all transactionally when processing has completed successfully. If this were pure SQL, I would use a temp table inside a transaction (see the sketch below). In SSIS there are a number of issues complicating this. It's difficult to have multiple components share the same transaction, and having temp tables that do not exist at design time is a pain. If I use Recordsets inside SSIS for this purpose, there are other issues. My understanding is that an 'Execute SQL' component will re-initialize the Recordset when it runs, so I can't use that to append an additional row. Is there a way to create an OLE DB connection that references an in-memory Recordset? Is there a better way to achieve this result?
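
    For reference, a minimal sketch of the "pure SQL" pattern described above, with hypothetical table and column names: rows accumulate in a temp table and only reach the real table when the whole batch commits.

        BEGIN TRANSACTION;

        CREATE TABLE #Staging (OrderID int NOT NULL, Amount money NOT NULL);

        -- ... one insert per row as processing produces it ...
        INSERT INTO #Staging (OrderID, Amount) VALUES (1, 10.00);
        INSERT INTO #Staging (OrderID, Amount) VALUES (2, 25.50);

        -- Move everything to the target table in one shot
        INSERT INTO dbo.Orders (OrderID, Amount)
        SELECT OrderID, Amount FROM #Staging;

        COMMIT TRANSACTION;

        DROP TABLE #Staging;

    Replicating this in SSIS typically means either setting RetainSameConnection = True on the connection manager (so the temp table and transaction survive across tasks) or pushing the whole accumulate-and-commit step into a single Execute SQL Task.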

    Read the article

  • Search for Variable Usage In SSIS tasks

    - by yoni.s
    Hi all: As what seems to be some sort of penance for sins in a prior life, I have been tasked with maintaining some SSIS packages. (NO! NO BADMOUTHING SSIS!! BAD PROGRAMMER! NO DOUGHNUT!) Anyhow, many of the packages have variables, defined in an outer container, which are used in multiple inner containers and in script tasks. What I want to do is find out all the places in a package where a variable is being used; in other words, search for instances of variable usage in all tasks of a package. This would be a huge help, but I cannot for the life of me find out how this can be done in BIDS. (This is SSIS/BIDS 2008.) Thanks for any help, YS

    Read the article

  • transforming binary data using ssis and sql server 2008

    - by Rick
    Hello All - I have a task to import, transform, and extract zipped binary files that contain both text data and embedded binary data. Within the files is data that is relational in nature and needs to be processed into a defined database structure. Currently I have a single-threaded C# app that essentially grabs all the files from the directory (currently there are about 13K files of varying sizes) and extracts the data on a single thread, inserting to the database line by line. As you can imagine this is a very slow process and unacceptable. There are several different parsing routines used depending on the header record in the file. There are potentially up to a million rows per file when all the data is extracted to the row level of detail. A follow-on task is to parse those rows into their appropriate tables based on their content, i.e. the textual content has to be parsed further into "buckets" of like data in the database. That about sums up the big picture. Now for the problem task list. How do I iterate through a packet of data using SSIS? In the app, the file is decompressed and then parsed using streams and byte arrays, and is routed to the required parsing routine based on the header data of each packet. There is bit swapping involved as well. Should I wrap the app code up into script task(s) and let it do the custom processing? The data is separated by year and the SQL Server tables are partitioned by year as well. I need to be able to "catch" bad file data as well, and most likely process it by hand. Should I simply load the zipped file into SQL as a blob and parse the file with T-SQL (see the sketch below)? Would that be multi-threaded if done that way? I am not sure how to do the parsing in T-SQL that is involved here. Which do you think would be faster? Potentially the data that is currently processed via files could come to us via a socket. Can SSIS collect that data in real time? How would I go about setting that up? Processing these new files from the directories will become a daily task. I can manage the data once I get it to SQL Server. Getting it there in a timely fashion seems to be the long pole in the tent for me. I would appreciate any comments or suggestions from the group. Rick
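
    A hedged sketch of the "load the zipped file as a blob" option mentioned above; the path, table, and column names are hypothetical. OPENROWSET ... SINGLE_BLOB reads the entire file into one varbinary(max) value, which can then be parsed server-side (though bit-level parsing in T-SQL tends to be painful compared with a Script Task):

        INSERT INTO dbo.RawPackets (SourceFile, FileContent)
        SELECT 'packet_2010_0001.bin',
               BulkColumn
        FROM   OPENROWSET(
                   BULK 'D:\Import\packet_2010_0001.bin',
                   SINGLE_BLOB
               ) AS f;

    The INSERT itself is single-threaded per file, but several such statements can run in parallel from different SSIS tasks or connections.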

    Read the article

  • How to retrieve a bigint from a database and place it into an Int64 in SSIS

    - by b0fh
    I ran into this problem a couple of years back and am hoping there has been a fix I just don't know about. I am using an 'Execute SQL Task' in the Control Flow of an SSIS package to retrieve a 'bigint' ID value. The task is supposed to place this in an Int64 SSIS variable, but I am getting the error: "The type of the value being assigned to variable "User::AuditID" differs from the current variable type. Variables may not change type during execution. Variable types are strict, except for variables of type Object." When I brought this to MS' attention a couple of years back, they stated that I had to 'work around' this by placing the bigint into an SSIS Object variable and then converting the value to Int64 as needed. Does anyone know if this has been fixed, or do I still have to 'work around' this mess? edit: Server Stats - Product: Microsoft SQL Server Enterprise Edition; Operating System: Microsoft Windows NT 5.2 (3790); Platform: NT INTEL X86; Version: 9.00.1399.06
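
    Besides the Object-variable route, one workaround sometimes used (a hedged sketch; the table and column names are hypothetical) is to return the bigint as a string, map it to a String SSIS variable, and cast it back with (DT_I8) in expressions wherever an integer is actually needed:

        SELECT CAST(MAX(AuditID) AS varchar(20)) AS AuditID
        FROM   dbo.AuditLog;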

    Read the article

  • SSIS process files from folder

    - by RT
    Background: I've a folder that gets pumped with files continuously. My SSIS package needs to process the files and delete them. The SSIS package is scheduled to run once every minute. I'm picking up the files in ascending order of file creation time. I'm building an array of files and then processing and deleting them one at a time. Problem: If an instance of my package takes longer than one minute to run, the next instance of the SSIS package will pick up some of the files the previous instance has in its buffer. By the time the second instance of the package gets around to processing a file, it may already have been deleted by the first instance, creating an exception condition. I was wondering whether there was a way to avoid the exception condition. Thanks.

    Read the article

  • Convert date from access to SQL Server with SSIS

    - by Arne
    Hi, I want to convert a database from Access to SQL Server using SSIS. I cannot convert the date/time columns of the Access DB. SSIS says something like: conversion between DT_Date and DT_DBTIMESTAMP is not supported. (This is translated from my German version; it might be worded differently in the English version.) In Access I have a Date/Time column, in SQL Server I have datetime. In the data flow of the SSIS package I have an OLE DB source for the Access DB, a SQL Server destination, and a Data Conversion transformation. In the Data Conversion I convert the columns to date [DT_DATE]. They are connected like this: Access DB -> conversion -> SQL DB. What am I doing wrong? How can I convert the Access date columns to SQL Server date columns?

    Read the article

  • Run SSIS Package from T-SQL

    - by Dr. Zim
    I noticed you can use the following stored procedures (in order) to schedule an SSIS package:

        msdb.dbo.sp_add_category @class=N'JOB', @type=N'LOCAL', @name=N'[Uncategorized (Local)]'
        msdb.dbo.sp_add_job ...
        msdb.dbo.sp_add_jobstep ...
        msdb.dbo.sp_update_job ...
        msdb.dbo.sp_add_jobschedule ...
        msdb.dbo.sp_add_jobserver ...

    (You can see an example by right-clicking a scheduled job and selecting "Script Job as > CREATE To".) AND you can use sp_start_job to execute the job immediately (see the sketch below), effectively running SSIS packages on demand. Question: does anyone know of any msdb.dbo.[...] stored procedures that simply allow you to run SSIS packages on the fly without using xp_cmdshell directly, or some easier approach?
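
    For the "run it now" part, a minimal sketch of starting an existing Agent job that wraps the package; the job name is hypothetical, and sp_start_job returns as soon as the job is queued (asynchronous), so check the job history if you need its outcome:

        EXEC msdb.dbo.sp_start_job @job_name = N'Run MyPackage';

        -- Later, inspect the job's status and last run outcome
        EXEC msdb.dbo.sp_help_job @job_name = N'Run MyPackage';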

    Read the article

  • SSIS 2005 How to add a zip task?

    - by swapna
    Hi, I have an SSIS 2005 project. I have to add a zipping task, i.e. I have to zip a text file. But I have to strictly follow the organization's DB standards: 1) No 3rd-party software on the DB server, not even resource kit tools. 2) No EXEs on the DB server, so I can't use Windows resource kit tools, 7-Zip, or an Execute Process Task that invokes a C# EXE. Please suggest a way to achieve zipping in SSIS without compromising the DB standards. Is any zip task available as an add-in to SSIS 2005? Thanks SNA

    Read the article

  • SQL SERVER – List of Article on Expressor Data Integration Platform

    - by pinaldave
    The ability to transform data into meaningful and actionable information is one of the most important needs in the current business world. With fast-growing and changing business needs, effective data integration is the single most important factor in proper decision making. I have been following expressor software since November 2010, when I met the expressor team in Seattle. Here are my posts on their innovative data integration platform and expressor Studio, a free desktop ETL tool:
    4 Tips for ETL Software IDE Developers
    Introduction to Adaptive ETL Tool – How adaptive is your ETL?
    Sharing your ETL Resources Across Applications with Ease
    expressor Studio Includes Powerful Scripting Capabilities
    expressor 3.2 Release Review
    5 Tips for Improving Your Data with expressor Studio
    As I had mentioned in some of my blog posts on them, I encourage you to download and test-drive their Studio product – it's free.
    Reference: Pinal Dave (http://blog.SQLAuthority.com)
    Filed under: PostADay, SQL, SQL Authority, SQL Documentation, SQL Download, SQL Query, SQL Server, SQL Tips and Tricks, T SQL, Technology
    Tagged: SSIS

    Read the article

  • Performance problems loading XML with SSIS, an alternative way!

    - by AtulThakor
    I recently needed to load several thousand XML files into a SQL database, so I created an SSIS package structured as follows: a Foreach Loop container loops through a directory and loads each file path into a variable, and the "Import XML" data flow then loads each XML file into a SQL table.

    Running this, it took approximately 1 second to load each file, which seemed a massive amount of time just to parse the XML and load the data. Speaking to my colleague Martin Croft, he suggested the use of T-SQL Bulk Insert and OPENROWSET, so we adjusted the package as follows: the same Foreach container was used, but instead the following SQL command was executed (this is an SSIS expression):

        "INSERT INTO MyTable(FileDate) SELECT CAST(bulkcolumn AS XML) FROM OPENROWSET( BULK '" + @[User::CurrentFile] + "', SINGLE_BLOB ) AS x"

    Using this method we managed to load approximately 20 records per second, much faster… for data loading! For what we wanted to achieve this was perfect, but I'll leave you with the following points when making your own decision on which solution to choose:

    OPENROWSET method:
    - Much faster to get the data into SQL
    - You'll need to parse or create a view over the XML data to make it more usable (another post on this!)
    - Not able to apply validation/transformation against the data when loading it
    - The SQL Server service account will need permission to the file
    - No schema validation when loading files

    SSIS:
    - Slower (in our case)
    - Schema validation
    - Allows you to apply transformations/joins to the data
    - Permissions should be less of a problem
    - Data can be loaded into its final form through the package
    - When using a schema, validation errors can fail the package (I'll do another post on this)
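
    As the post notes, XML loaded this way still has to be parsed or exposed through a view before it is usable. A hedged sketch of that shredding step, assuming the file lands in an xml-typed column; the table, column, and element names below are hypothetical:

        SELECT  o.n.value('(OrderId)[1]',   'int')      AS OrderId,
                o.n.value('(OrderDate)[1]', 'datetime') AS OrderDate
        FROM    dbo.MyTable AS t
        CROSS APPLY t.FileData.nodes('/Orders/Order') AS o(n);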

    Read the article

  • Big Data – Final Wrap and What Next – Day 21 of 21

    - by Pinal Dave
    In yesterday's blog post we explored various resources related to learning Big Data, and in this blog post we will wrap up this 21-day series on Big Data. I have been exploring various terms and technologies related to Big Data this entire month. It was indeed fun to write about Big Data in 21 days, but the subject of Big Data is much bigger than anyone can cover in 21 days. My first goal was to write about the basics, and I think we have got that one covered pretty well. During these 21 days I have received many questions and answers related to Big Data. I have covered a few of the questions in this series, and a few more I will be covering in the coming months. Now, after understanding the Big Data basics, here is a list of the things I am personally going to do next. I thought I would share it with you, as it will give you a good idea of how to continue the Big Data journey:
    Build a schedule to read various Apache documentation
    Watch all Pluralsight courses
    Explore the HortonWorks Sandbox
    Start building a presentation about Big Data – this is a great way to learn something new
    Present in user group meetings on Big Data topics
    Write more blog posts about Big Data
    I am going to continue learning about Big Data, and I want you to continue learning Big Data too. Please leave a comment on how you are going to continue learning about Big Data. I will publish all the informative comments on this blog with due credit. I want to end this series with the infographic by UMUC.
    Reference: Pinal Dave (http://blog.sqlauthority.com)
    Filed under: Big Data, PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, T SQL

    Read the article

  • Partner Webcast - Focus on Oracle Data Profiling and Data Quality 11g

    - by lukasz.romaszewski(at)oracle.com
    Partner Webcast – Focus on Oracle Data Profiling and Data Quality 11g
    February 24th, 12am CET

    Oracle offers an integrated suite of Data Quality software architected to discover and correct today's data quality problems and establish a platform prepared for tomorrow's yet unknown data challenges. Oracle Data Profiling provides data investigation, discovery, and profiling in support of quality, migration, integration, stewardship, and governance initiatives. It includes a broad range of features that expand upon basic profiling, including automated monitoring, business-rule validation, and trend analysis. Oracle Data Quality for Data Integrator provides cleansing, standardization, matching, address validation, location enrichment, and linking functions for global customer data and operational business data. It ensures that data adheres to established standards that are adaptable to fit each organization's specific needs. Both single- and double-byte data are processed in local languages to provide a unique and centralized view of customers, products and services. During this briefing, Data Integration Solution Specialists will provide a technical overview and a walkthrough.

    Agenda:
    - Oracle Data Integration Strategy overview
    - A focus on Oracle Data Profiling and Oracle Data Quality for Data Integrator:
      - Oracle Data Profiling
      - Oracle Data Quality for Data Integrator
      - Live demo
      - Q&A

    Delivery Format: This FREE online LIVE eSeminar will be delivered over the Web and Conference Call. Registrations received less than 24 hours prior to start time may not receive confirmation to attend. To register, click here. For any questions please contact [email protected]

    Read the article

  • How to build a data flow?

    - by salvationishere
    I am running Visual Studio 2008 and the SSIS tutorial described at: http://msdn.microsoft.com/en-us/library/ms167106.aspx I finished all of the tasks but am getting the following errors:

    Error 1: Validation error. Extract Sample Currency Data: Extract Sample Currency Data: input column "CurrencyAlternateKey" (123) has lineage ID 55 that was not previously used in the Data Flow task. Lesson 1.dtsx 0 0
    Error 2: Validation error. Extract Sample Currency Data SSIS.Pipeline: input column "CurrencyAlternateKey" (123) has lineage ID 55 that was not previously used in the Data Flow task. Lesson 1.dtsx 0 0

    Can you tell me what I need to do to make this build without errors?

    Read the article

  • SSIS package run from file system

    - by Pramodtech
    Initially I deployed the packages on SQL Server, but since my machine does not have SSIS installed, I faced a version issue while executing the packages. Then I deployed the packages to the file system on a server which has SQL Server Enterprise Edition with SSIS installed on it. I access the folder on the server where I have deployed the packages from my system and execute the package, but I get an error saying "cannot run on this edition of Integration Services, need higher version." Do I need to remote login (RDP) to execute the package?

    Read the article

  • Loading Hybrid Dimension Table with SCD1 and SCD2 attributes + SSIS

    - by Nev_Rahd
    Hello, I am just in the process of starting a new task, wherein I need to load a hybrid dimension table with both SCD1 and SCD2 attributes. This needs to be achieved as an SSIS package. Can someone guide me on the best way to deal with this in SSIS? Should I use the SCD component, or is there another way? What are the best practices for this? For the SCD2 attributes I am using a MERGE statement (a sketch of that pattern is shown below). Thanks
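
    A hedged sketch of the MERGE-based hybrid pattern mentioned above (SQL Server 2008+ assumed; the dimension, staging, and column names are all hypothetical). SCD1 attributes are overwritten in place; an SCD2 change expires the current row inside the MERGE, and the outer INSERT adds a fresh current version from MERGE's OUTPUT:

        INSERT INTO dbo.DimCustomer
               (CustomerBK, Email, City, EffectiveDate, ExpiryDate, IsCurrent)
        SELECT  CustomerBK, Email, City, GETDATE(), NULL, 1
        FROM (
            MERGE dbo.DimCustomer AS tgt
            USING stg.Customer    AS src
               ON tgt.CustomerBK = src.CustomerBK AND tgt.IsCurrent = 1
            WHEN NOT MATCHED BY TARGET THEN                -- brand-new member
                INSERT (CustomerBK, Email, City, EffectiveDate, ExpiryDate, IsCurrent)
                VALUES (src.CustomerBK, src.Email, src.City, GETDATE(), NULL, 1)
            WHEN MATCHED AND tgt.City <> src.City THEN     -- SCD2 change: expire current row
                UPDATE SET tgt.IsCurrent = 0, tgt.ExpiryDate = GETDATE()
            WHEN MATCHED AND tgt.Email <> src.Email THEN   -- SCD1-only change: overwrite
                UPDATE SET tgt.Email = src.Email
            OUTPUT $action, src.CustomerBK, src.Email, src.City, deleted.City
        ) AS changes (MergeAction, CustomerBK, Email, City, OldCity)
        WHERE changes.MergeAction = 'UPDATE'
          AND changes.City <> changes.OldCity;             -- only SCD2 expirations get a new row

    The built-in SCD Wizard can generate a similar flow, but it issues row-by-row OLE DB Commands, which is why set-based MERGE logic like this is often preferred for larger dimensions.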

    Read the article

  • Unzip password protected Zip file in SSIS

    - by MarcoF
    Does anyone know how to unzip password-protected files in an SSIS package? We have an SSIS package that currently uses java.util.zip to unzip zip files, and it has been working perfectly for some time now. They now want to password-protect the files, and unfortunately this library cannot handle that. We are using SQL Server 2008 with .NET 3.5 on Windows Server 2008 R2.

    Read the article

  • Using version control with SSIS packages (saving 'sensitive' data)

    - by sandesh247
    Hi. We are a team working on a bunch of SSIS packages, which we share using version control (SVN). We have three ways of saving sensitive data in these packages:
    - not storing it at all
    - storing it with a user key
    - storing it with a password
    However, each of these options is inconvenient when testing packages saved and committed by another developer. For each such package, one has to update the credentials, no matter how the sensitive data was persisted. Is there a better way to collaborate on SSIS packages?

    Read the article
