Search Results

Search found 102772 results on 4111 pages for 'sql server 2008'.

  • SQL Interview Preparation : QA Engineer Position

    - by user9009
    Hello, I have an interview with an enterprise company for a QA Engineer (new grad to mid-level experience) position, and I was told to expect some questions on SQL. The company is an eCommerce shopping portal. What kind of SQL coding questions should I expect? Do I need to learn how to write complex queries? Any input would be appreciated; please provide links which you think could be helpful. Yes, I found a similar question on Stack Overflow, but I wanted to know the important SQL topics from a QA engineer's perspective. Thanks
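
    (For a flavor of the kind of query such interviews tend to ask for, here is a minimal sketch; the schema is hypothetical, not from the question: find customers who have placed more than one order.)

        -- Hypothetical table: Orders(OrderId, CustomerId, OrderDate)
        SELECT CustomerId, COUNT(*) AS OrderCount
        FROM Orders
        GROUP BY CustomerId
        HAVING COUNT(*) > 1;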

    Read the article

  • Check/Monitor Amount of SQL Queries per Hour

    - by deathlock
    My website is hosted on shared hosting, and I'd like to know how many SQL queries it runs per hour. I navigated through cPanel and found nothing to check or monitor the number of SQL queries per hour. I asked my host, and they said it is not possible to do manually. However, I found this http://forum.powweb.com/archive/index.php/t-49937.html and another one on Stack Overflow: http://stackoverflow.com/questions/9842094/sql-how-can-i-get-the-number-of-executed-queries-per-database-or-hour-or Since these exist, I assume that it is actually possible. The problem is I can't execute that in my phpMyAdmin. Can someone here guide me through the process?
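
    (One low-tech approach, sketched on the assumption that the shared host runs MySQL, the usual pairing with cPanel and phpMyAdmin, and that the status variables are readable from phpMyAdmin's SQL tab: sample the server's cumulative statement counter twice, an hour apart, and take the difference.)

        SHOW GLOBAL STATUS LIKE 'Questions';
        -- note the value, run it again one hour later;
        -- queries per hour = second value - first value
        -- caveat: on a shared server this counter covers all accounts, not just yours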

    Read the article

  • The SQL Beat Podcast-Capturing a SQL Rockstar

    - by SQLBeat
      This is the first permissible (waiting for signed disclaimers) episode of the SQL Beat Podcast featuring the gracious and famous Thomas La Rock. We talk about gay marriage, abortion, SQL community and a 9 inch pipe with a hole in it at the tip. No really. If there ever was a gentleman, SQL Rockstar is one and I want to thank him from the bottom of my digital recorder for agreeing to talk to me and my audience. All forty of them will appreciate the candor. Enjoy World. I did. Oh and a special rock star drum intro from me to you. CLICK HERE TO PLAY >>

    Read the article

  • What criteria would I use SQL Stream Insight vs TPL Dataflow [closed]

    - by makerofthings7
    There is an add-in to the Task Parallel Library (TPL) called TPL Dataflow that allows a variety of data-processing scenarios. It seems that there are some parallels to the SQL StreamInsight product; however, SQL's StreamInsight has some interesting licensing around it, and its performance depends on which license I get... so I found myself asking whether I should use TPL Dataflow instead, avoiding any licensing issues and possibly getting better performance. Can anyone tell me if performance is a valid criterion for comparing SQL StreamInsight and TPL Dataflow? What other criteria should I be looking at when comparing the two?

    Read the article

  • SQL and Database: Where to start! [closed]

    - by Nizar
    First of all, I know just HTML and CSS (this is my background in web development and design), and I have found that before I move to a server-side language I need to learn about databases and SQL. My first question: do you think this order of learning is good (I mean, learning SQL after HTML and CSS)? My second, related question: do I have to learn a lot about SQL and databases, or just the basics? If you know any good beginners' books, please write their titles.
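
    (As a taste of what "the basics" amounts to in practice, a minimal sketch; the table and data are made up: create a table, put a row in it, read it back.)

        CREATE TABLE pages (id INT PRIMARY KEY, title VARCHAR(100));
        INSERT INTO pages (id, title) VALUES (1, 'Home');
        SELECT title FROM pages WHERE id = 1;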

    Read the article

  • Task failed because "sgen.exe" was not found

    - by Kinze
    I am getting the following error when attempting to build my project in Visual Studio 2008 Professional Edition: "Task failed because 'sgen.exe' was not found, or the correct Microsoft Windows SDK is not installed. The task is looking for 'sgen.exe' in the 'bin' subdirectory beneath the location specified in the InstallationFolder value of the registry key HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Microsoft SDKs\Windows\v6.0A. You may be able to solve the problem by doing one of the following: 1) Install the Microsoft Windows SDK for Windows Server 2008 and .NET Framework 3.5. 2) Install Visual Studio 2008. 3) Manually set the above registry key to the correct location. 4) Pass the correct location into the 'ToolPath' parameter of the task." I tried downloading the Microsoft Windows SDK for Windows Server 2008 and .NET Framework 3.5 and still get the error. I also tried downloading the Windows 7 SDK and .NET Framework 3.5, with the same result. I also tried manually editing the registry to change the InstallationFolder, and I tried repairing the Visual Studio install. The project was originally created on Windows XP, and I am trying to compile on a reformatted machine running Windows 7 Enterprise.

    Read the article

  • Passing integer lists in a sql query, best practices

    - by Artiom Chilaru
    I'm currently looking at ways to pass lists of integers in a SQL query, and I'm trying to decide which of them is best in which situation, what the benefits of each are, and what pitfalls should be avoided :) Right now I know of 3 ways that we currently use in our application.

    1) Table-valued parameter. Create a new table-valued type in SQL Server:

        CREATE TYPE [dbo].[TVP_INT] AS TABLE( [ID] [int] NOT NULL )

    Then run the query against it:

        using (var conn = new SqlConnection(DataContext.GetDefaultConnectionString))
        {
            var comm = conn.CreateCommand();
            comm.CommandType = CommandType.Text;
            comm.CommandText = @"
                UPDATE DA
                SET [tsLastImportAttempt] = CURRENT_TIMESTAMP
                FROM [Account] DA
                JOIN @values IDs ON DA.ID = IDs.ID";
            comm.Parameters.Add(new SqlParameter("values", downloadResults.Select(d => d.ID).ToDataTable()) { TypeName = "TVP_INT" });
            conn.Open();
            comm.ExecuteScalar();
        }

    The major disadvantage of this method is that Linq doesn't support table-valued parameters (if you create an SP with a TVP parameter, Linq won't be able to run it) :(

    2) Convert the list to binary and use it in Linq! This is a bit better: create an SP, and you can run it within Linq :) To do this, the SP takes an IMAGE parameter, and a user-defined function (UDF) converts it to a table. We currently have implementations of this function written in C++ and in assembly; both have pretty much the same performance :) Basically, each integer is represented by 4 bytes and passed to the SP. In .NET we have an extension method that converts an IEnumerable to a byte array.

    The extension method:

        public static Byte[] ToBinary(this IEnumerable<Int32> intList)
        {
            return ToBinaryEnum(intList).ToArray();
        }

        private static IEnumerable<Byte> ToBinaryEnum(IEnumerable<Int32> intList)
        {
            IEnumerator<Int32> marker = intList.GetEnumerator();
            while (marker.MoveNext())
            {
                Byte[] result = BitConverter.GetBytes(marker.Current);
                Array.Reverse(result);
                foreach (byte b in result)
                    yield return b;
            }
        }

    The SP:

        CREATE PROCEDURE [Accounts-UpdateImportAttempts]
            @values IMAGE
        AS
        BEGIN
            UPDATE DA
            SET [tsLastImportAttempt] = CURRENT_TIMESTAMP
            FROM [Account] DA
            JOIN dbo.udfIntegerArray(@values, 4) IDs ON DA.ID = IDs.Value4
        END

    And we can use it by running the SP directly, or in any Linq query we need:

        using (var db = new DataContext())
        {
            db.Accounts_UpdateImportAttempts(downloadResults.Select(d => d.ID).ToBinary());
            // or
            var accounts = db.Accounts
                .Where(a => db.udfIntegerArray(downloadResults.Select(d => d.ID).ToBinary(), 4)
                    .Select(i => i.Value4)
                    .Contains(a.ID));
        }

    This method has the benefit of using compiled queries in Linq (which will have the same SQL definition and query plan, so they will also be cached), and it can be used in SPs as well. Both these methods are theoretically unlimited, so you can pass millions of ints at a time :)

    3) The simple Linq .Contains(). This is a simpler approach, and it is perfect in simple scenarios, but it is of course limited:

        using (var db = new DataContext())
        {
            var accounts = db.Accounts
                .Where(a => downloadResults.Select(d => d.ID).Contains(a.ID));
        }

    The biggest drawback of this method is that each integer in the downloadResults variable is passed as a separate parameter, so the query is limited by SQL Server's maximum number of parameters per query (a couple of thousand, if I remember right).

    So I'd like to ask: which of these do you think is best, and what other methods and approaches have I missed?
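
    (One further option, not from the original question, offered as a sketch against the same hypothetical [Account] table: pass the integers as a single XML parameter and shred it with the xml type's nodes() method. This works on SQL Server 2005 and later and avoids the per-integer parameter limit of option 3.)

        DECLARE @ids xml;
        SET @ids = N'<i>1</i><i>7</i><i>42</i>';  -- built by the client, one <i> element per integer

        UPDATE DA
        SET [tsLastImportAttempt] = CURRENT_TIMESTAMP
        FROM [Account] DA
        JOIN @ids.nodes('/i') AS IDs(x) ON DA.ID = IDs.x.value('.', 'int');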

    Read the article

  • Problems with Continuous Integration (CI) in TFS during Build Automation?

    - by Steve Johnson
    Hi all, I am using TFS 2008 and Visual Studio, and my boss has instructed me to implement build automation for development and release builds of a web project. I am a total newbie in build automation. There are multiple developers working on the project on different machines using Visual Studio 2008 Team System. Source is already maintained in TFS 2008. The SQL Server in use is SQL Server 2000, and the hosting IIS is IIS 7.5 on Windows Server 2008 x64. I have searched the net and found continuous integration (CI) and nightly builds as two important build automation techniques. I was just wondering about any disadvantages associated with the two methodologies (CI and nightly builds). If someone could guide me to a working tutorial that explains both techniques, it would be quite helpful. Please also tell me the requirements for IIS, SQL Server, and anything else that might be a prerequisite to implementing build automation. Also, I would like to know whether there are other techniques that are better than CI. Replies and discussion much appreciated. Thanks

    Read the article

  • Debugging SQL Server Slowness: Same Database, Different Servers

    - by Craig Walker
    For a while now we've been seeing anecdotal slowness on our newly-minted (VMware-based) SQL Server 2005 database servers. Recently the problem came to a head, and I've started looking for the root cause of the issue. Here's the weird part: on the stored procedure that I'm using as a performance test case, I get a 30x difference in execution speed depending on which DB server I run it on. This is using the same database (mdf) and log (ldf) files, detached, copied, and reattached from the slow server to the fast one. This doesn't appear to be a (virtualized) hardware issue: the slow server has 4x the CPU capacity and 2x the memory of the fast one. As best as I can tell, the problem lies in the environment/configuration of the servers (either the operating system or the SQL Server installation). However, I've checked a bunch of variables (SQL Server config options, running services, disk fragmentation) and found nothing that made a difference in testing. What things should I be looking at? What tools can I use to investigate why this is happening?
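
    (A concrete starting point, offered as a sketch rather than a diagnosis: diff the instance-level configuration of the two servers, since the same attached database can behave very differently under different sp_configure settings such as max degree of parallelism or memory limits.)

        -- Run on both servers and compare the two result sets
        SELECT name, value_in_use
        FROM sys.configurations
        ORDER BY name;

        -- Also worth comparing: CPU and memory as each instance sees them
        SELECT cpu_count, hyperthread_ratio, physical_memory_in_bytes
        FROM sys.dm_os_sys_info;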

    Read the article

  • Cannot connect Linux XAMPP PHP to SQL Server database.

    - by Jim
    I've searched many sites without success. I'm using XAMPP 1.7.3a on Ubuntu 9.10. I have tried the methods found at http://www.webcheatsheet.com/PHP/connect_mssql_database.php; they all fail. I am able to "connect" with a linked database through MS Access; however, that is not an acceptable solution, as not all users will have Access. The first method (at webcheatsheet) uses mssql_connect() et al., but I get this error from the mssql_connect() call:

        Warning: mssql_connect() [function.mssql-connect]: Unable to connect to server: [my server] in [my code]

    [my server] is the server address; I have used both the host name and the IP address. [my code] is a reference to the file and line number in my .php file. Is there a log file somewhere that would have more information about the failure, both on my machine and on SQL Server? We do not have a bona fide DBA, so I will need specific information to pass on if the issue seems to be on the server side. All assistance is appreciated, including RTFM when the location of the M is provided! Thanks

    Read the article

  • pass username and password to get-credential or run sql query without using invoke-sqlcmd in Powersh

    - by Emo
    I am trying to connect to a remote SQL database and simply run the "select @@servername" query in PowerShell. I'm trying to do this without using integrated security. I've been struggling with Get-Credential and Invoke-Sqlcmd, only to find (I think) that you can't pass the password from Get-Credential to other PowerShell cmdlets. Here's the code I'm using:

        Add-PSSnapin SqlServerProviderSnapin100
        Add-PSSnapin SqlServerCmdletSnapin100

        # load assemblies
        [Reflection.Assembly]::Load("Microsoft.SqlServer.Smo, Version=9.0.242.0, Culture=neutral, PublicKeyToken=89845dcd8080cc91")
        [Reflection.Assembly]::Load("Microsoft.SqlServer.SqlEnum, Version=9.0.242.0, Culture=neutral, PublicKeyToken=89845dcd8080cc91")
        [Reflection.Assembly]::Load("Microsoft.SqlServer.SmoEnum, Version=9.0.242.0, Culture=neutral, PublicKeyToken=89845dcd8080cc91")
        [Reflection.Assembly]::Load("Microsoft.SqlServer.ConnectionInfo, Version=9.0.242.0, Culture=neutral, PublicKeyToken=89845dcd8080cc91")

        # connect to SQL Server
        $serverName = "HLSQLSRV03"
        $server = New-Object -TypeName Microsoft.SqlServer.Management.Smo.Server -ArgumentList $serverName

        # login using SQL authentication
        $server.ConnectionContext.LoginSecure = $false
        $credential = Get-Credential
        $userName = $credential.UserName -replace "\\", ""
        $pass = $credential.Password
        $server.ConnectionContext.set_Login($userName)
        $server.ConnectionContext.set_SecurePassword($credential.Password)

        $DB = "Master"
        Invoke-Sqlcmd -Query "select @@Servername" -Database $DB -ServerInstance $serverName -Username $userName -Password $pass

    If I just hardcode the password at the end of the Invoke-Sqlcmd line, it works. Is this because you can't use Get-Credential with Invoke-Sqlcmd? If so, what are my alternatives? Thanks so much for your help. Emo

    Read the article

  • Remote connection to SQL Server Express fails

    - by worlds-apart89
    I have two computers that share the same Internet IP address. Using one of the computers, I can remotely connect to a SQL Server database on the other. Here is my connection string:

        SqlConnection connection = new SqlConnection(@"Data Source=192.168.1.101\SQLEXPRESSNI,1433;Network Library=DBMSSOCN;Initial Catalog=FirstDB;Persist Security Info=True;User ID=username;Password=password;");

    192.168.1.101 is the server, SQLEXPRESSNI is the SQL Server instance name, and FirstDB is the name of the database. Now, I have another computer with a different Internet IP address, and I want to connect to the server above from that third computer, which does not belong to my local area network. I don't have access to the third computer at the moment, so I want to use (if possible) the client computer in the LAN again:

        SqlConnection connection = new SqlConnection(@"Data Source=SharedInternetIP\SQLEXPRESSNI,1433;Network Library=DBMSSOCN;Initial Catalog=FirstDB;Persist Security Info=True;User ID=username;Password=password;");

    This does not work. Note that I am a beginner, so I am not quite sure what I am doing, even though I know what I want to do. By passing the Internet IP to the SqlConnection object rather than the local IP address, how can I successfully connect to the server computer using the client computer in the same network? Also note that my ultimate goal is to connect to the server with an external client, but I don't have access to that computer right now. I'd appreciate any help.

    Read the article

  • Handy SQL Server Functions Series (HSSFS) Part 2.0 - Prelude to Parsing Patterns Properly

    - by Most Valuable Yak (Rob Volk)
    In Part 1 of the series I wrote about 2 lesser-known and somewhat undocumented functions. In this part, I'm going to cover some familiar string functions like Substring(), Parsename(), Patindex(), and Charindex() and delve into their strengths and weaknesses. I'm also splitting this part up into sub-parts to help focus on a particular technique and/or problem with the technique, hence the Part 2.0. Consider this a composite post, or com-post, if you will. (It may just turn out to be a pile of sh_t after all) I'll be using a contrived example, perhaps the most frustratingly useful, or usefully frustrating, function in SQL Server: @@VERSION. Contrived, because there are better ways to get the information (which I'll cover later); frustrating, because of the way Microsoft formatted the value; and useful because it does have 1 or 2 bits of information not found elsewhere. First let's take a look at the output of @@VERSION:

        Microsoft SQL Server 2008 R2 (RTM) - 10.50.1600.1 (Intel X86)
            Apr 2 2010 15:53:02
            Copyright (c) Microsoft Corporation
            Developer Edition on Windows NT 5.1 <X86> (Build 2600: Service Pack 3)

    There are 4 lines, with lines 2-4 indented with a tab character. In case your browser (or this blog software) doesn't show it correctly, I gave each line a different color. While this PRINTs nicely, if you SELECT @@VERSION in grid mode it all runs together because it ignores carriage return/line feed (CR/LF) characters. Not fatal, but annoying. Note that @@VERSION's output will vary depending on edition and version of SQL Server, and also the OS it's installed on. Despite the differences, the output is laid out the same way and the relevant pieces are in the same order. I'll be using the following view for Parts 2.1 onward, so we have a nice collection of @@VERSION information:

        create view version(SQLVersion, VersionString) AS (
            select 2000, 'Microsoft SQL Server 2000 - 8.00.2055 (Intel X86) Dec 16 2008 19:46:53 Copyright (c) 1988-2003 Microsoft Corporation Developer Edition on Windows NT 5.1 (Build 2600: Service Pack 3)'
            union all
            select 2005, 'Microsoft SQL Server 2005 - 9.00.4053.00 (Intel X86) May 26 2009 14:24:20 Copyright (c) 1988-2005 Microsoft Corporation Developer Edition on Windows NT 5.1 (Build 2600: Service Pack 3)'
            union all
            select 2008, 'Microsoft SQL Server 2008 R2 (RTM) - 10.50.1600.1 (Intel X86) Apr 2 2010 15:53:02 Copyright (c) Microsoft Corporation Developer Edition on Windows NT 5.1 <X86> (Build 2600: Service Pack 3)'
            union all
            select 2005, 'Microsoft SQL Server 2005 - 9.00.3080.00 (Intel X86) Sep 6 2009 01:43:32 Copyright (c) 1988-2005 Microsoft Corporation Standard Edition on Windows NT 5.2 (Build 3790: Service Pack 2)'
            union all
            select 2008, 'Microsoft SQL Server 2008 R2 (RTM) - 10.50.1600.1 (X64) Apr 2 2010 15:48:46 Copyright (c) Microsoft Corporation Developer Edition (64-bit) on Windows NT 6.1 <X64> (Build 7600: ) (Hypervisor)'
            union all
            select 2008, 'Microsoft SQL Server 2008 R2 (RTM) - 10.50.1600.1 (X64) Apr 2 2010 15:48:46 Copyright (c) Microsoft Corporation Express Edition with Advanced Services (64-bit) on Windows NT 6.1 <X64> (Build 7600: ) (Hypervisor)'
        )

    Feel free to add your own @@VERSION info if it's not already there. In Part 2.1 I'll focus on extracting the SQL Server version number (10.50.1600.1 in the first example) and the Edition (Developer), but will have a solution that works with all versions. Stay tuned!
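
    (A side note, not from the post itself: the "better ways to get the information" it alludes to include the documented SERVERPROPERTY() function, which returns the version and edition with no string parsing at all. A minimal sketch:)

        SELECT SERVERPROPERTY('ProductVersion') AS ProductVersion,  -- e.g. 10.50.1600.1
               SERVERPROPERTY('ProductLevel')   AS ProductLevel,    -- e.g. RTM, SP1
               SERVERPROPERTY('Edition')        AS Edition;         -- e.g. Developer Edition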

    Read the article

  • Crystal reports .net visual studio 2008 bundled edition

    - by DeveloperChris
    I have a serious issue with Crystal Reports. When run in my development environment or debugged on my local machine it works fine, but when the application is published to a Windows Server 2003 box it shows the dreaded "The report you requested requires further information" message. I have had no luck trying to get rid of this message. Anybody know what I can try? DC

    Here is a bunch more info. I use a placeholder in the aspx page and then set the user/password and database in the code-behind. I could not get it to work with a dataset and found that I had to assign the ODBC connection in the CR designer, then change those details as required in the code-behind. This is done because the same report can get its data from 3 different databases (live, development and training).

        protected override void Page_Load(object sender, EventArgs e)
        {
            base.Page_Load(sender, e);
            CrystalReportSource1.ReportDocument.Load(Server.MapPath(@"~/Reports/Report5asp.rpt"));
            CrystalReportViewer1.ReportSource = ConfigureCrystalReports(CrystalReportSource1.ReportDocument, CrystalReportViewer1);
            // parameters
            CrystalReportViewer1.ParameterFieldInfo.Clear();
            AddParameter("DIid", _app.Data["DIid"], CrystalReportViewer1.ParameterFieldInfo);
            AddParameter("EEid", _app.Data["EEid"], CrystalReportViewer1.ParameterFieldInfo);
            AddParameter("CTid", _app.Data["CTid"], CrystalReportViewer1.ParameterFieldInfo);
        }

        public ReportDocument ConfigureCrystalReports(ReportDocument report, CrystalReportViewer viewer)
        {
            String _connectionString = _app.ConnectionString();
            String dsn = _app.DSN();
            SqlConnectionStringBuilder SConn = new SqlConnectionStringBuilder(_connectionString);
            TableLogOnInfos crtableLogoninfos = new TableLogOnInfos();
            TableLogOnInfo crtableLogoninfo = new TableLogOnInfo();
            ConnectionInfo crConnectionInfo = new ConnectionInfo();
            crConnectionInfo.ServerName = dsn; // SConn.DataSource;
            crConnectionInfo.DatabaseName = SConn.InitialCatalog;
            crConnectionInfo.UserID = SConn.UserID;
            crConnectionInfo.Password = SConn.Password;
            crConnectionInfo.Type = ConnectionInfoType.SQL;
            crConnectionInfo.IntegratedSecurity = false;
            foreach (CrystalDecisions.CrystalReports.Engine.Table CrTable in report.Database.Tables)
            {
                crtableLogoninfo = CrTable.LogOnInfo;
                crtableLogoninfo.ConnectionInfo = crConnectionInfo;
                CrTable.ApplyLogOnInfo(crtableLogoninfo);
            }
            return report;
        }

    As stated, this works fine on my XP machine used for development; when deployed on Windows Server 2003 I get the error. DC

    Some interesting additional information: I moved the development to my home machine so I could work on the problem this weekend, so now I am developing, debugging and testing on the same machine! In VS2008 I can edit and preview the reports with no problems. If I fire up the debugger I can view the reports in the browser with no problems. But if I publish the website to another folder on the same machine, fire up IIS and try to browse to a report, I get the aforementioned error. All else works as expected. IIS runs under different permissions than VS2008, so perhaps it's something to do with that, but I have tried lots of different permissions and cannot get it to run. DC

    Read the article

  • ELMAH with IIS6.0 and SQL Server 2005

    - by RajeshT
    On production I have ELMAH configured so that errors are logged into a SQL Server 2005 database created specially for ELMAH. The sites run on an IIS 6.0 web server. ELMAH is able to send emails for the errors, but nothing is getting logged into the SQL Server database. The connection is a trusted connection, and my web.config contains:

        <section name="errorLog" requirePermission="false" type="Elmah.ErrorLogSectionHandler, Elmah"/>

    in the section group,

        <errorLog type="Elmah.SqlErrorLog, Elmah" connectionStringName="Elmah.Sql" applicationName="VBCPortal"/>

    under elmah, and

        <add name="Elmah.Sql" connectionString="Data Source=(local);Initial Catalog=ELMAH;Trusted_Connection=True" />

    in connectionStrings. Any idea why the errors are not getting logged into the database?
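
    (A guess, not from the original question: with a trusted connection under IIS 6.0, the worker-process identity, typically NT AUTHORITY\NETWORK SERVICE, needs rights to ELMAH's database objects, and email can keep working while logging silently fails. A minimal sketch, assuming the default objects created by ELMAH's SQL Server script:)

        USE ELMAH;
        -- create the login only if it doesn't already exist on the instance
        CREATE LOGIN [NT AUTHORITY\NETWORK SERVICE] FROM WINDOWS;
        CREATE USER [NT AUTHORITY\NETWORK SERVICE] FOR LOGIN [NT AUTHORITY\NETWORK SERVICE];
        GRANT EXECUTE ON dbo.ELMAH_LogError TO [NT AUTHORITY\NETWORK SERVICE];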

    Read the article

  • Converting from SQL Server 2000 to 2005 for ASP.NET web App

    - by Bazza Formez
    Hi there, I'm moving my ASP.NET website to a new provider. The only problem is, the old host supports my SQL Server 2000 db, while the new host only supports SQL Server 2005. How should I go about the conversion? Can I simply produce a backup (.bak) file of the 2000 database at the old host and restore that file into SQL Server 2005 at the new host, or is there more to it? Note that I don't own a copy of SQL Server 2005 at home... and I'm trying to avoid having to buy one. Thanks, Bazza
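
    (For what it's worth, backup/restore is supported in that direction, from 2000 up to 2005; a minimal sketch follows. The database and logical file names here are made up, and after the restore the database stays at compatibility level 80 until it is raised:)

        -- On the old (SQL Server 2000) host:
        BACKUP DATABASE MySite TO DISK = 'C:\Backups\MySite.bak'

        -- On the new (SQL Server 2005) host:
        RESTORE DATABASE MySite FROM DISK = 'C:\Backups\MySite.bak'
            WITH MOVE 'MySite_Data' TO 'D:\Data\MySite.mdf',
                 MOVE 'MySite_Log'  TO 'D:\Data\MySite_log.ldf'
        EXEC sp_dbcmptlevel 'MySite', 90  -- optionally raise the compatibility level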

    Read the article

  • Sql Server copying table information between databases

    - by Andrew
    Hi, I have a script that I am using to copy data from a table in one database to a table in another database on the same SQL Server instance. The script works great when I am connected to the instance as myself, since I have dbo access to both databases. The problem is that this won't be the case on the client's SQL Server: they have separate logins for each database (SQL authentication logins). Does anyone know if there is a way to run a script under these circumstances? The script does something like:

        use sourceDB

        Insert targetDB.dbo.tblTest (id, test_name)
        Select id, test_name
        from dbo.tblTest

    Thanks
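
    (One possible arrangement, sketched with a hypothetical login name: map a single SQL login as a user in both databases and grant it only the rights the copy needs, so the cross-database INSERT...SELECT works without dbo access.)

        -- hypothetical login "etl_copy", created once at the server level
        USE sourceDB;
        CREATE USER etl_copy FOR LOGIN etl_copy;
        GRANT SELECT ON dbo.tblTest TO etl_copy;

        USE targetDB;
        CREATE USER etl_copy FOR LOGIN etl_copy;
        GRANT INSERT ON dbo.tblTest TO etl_copy;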

    Read the article

  • Integrate Lucene or any other search product with SQL server 2005

    - by HBACHARYA
    Hi, I need to use full-text search with SQL Server 2005, and I have explored its built-in approach (SQL Server full-text indexing), but it seems less powerful. I have also looked at the features of Lucene. Now my questions: Is it possible to integrate Lucene and SQL Server in any way? 1. Can my T-SQL queries use a Lucene index for returning results (maybe using a CLR-based function internally)? 2. How do I update the Lucene index while data in the tables is being updated? 3. What could the overall architecture be? 4. Are there any commercial products available which provide this kind of support? Thanks, HB
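
    (For reference, the built-in approach the question finds less powerful looks like this; a minimal sketch with a hypothetical table, assuming a full-text index already exists on the Name column:)

        SELECT ProductId, Name
        FROM dbo.Products
        WHERE CONTAINS(Name, '"search*"');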

    Read the article

  • SSIS Lookup component tuning tips

    - by jamiet
    Yesterday evening I attended a London meeting of the UK SQL Server User Group at Microsoft's offices in London Victoria. As usual it was both a fun and informative evening and in particular there seemed to be a few questions arising about tuning the SSIS Lookup component; I rattled off some comments and figured it would be prudent to drop some of them into a dedicated blog post, hence the one you are reading right now.

    Scene setting

    A popular pattern in SSIS is to use a Lookup component to determine whether a record in the pipeline already exists in the intended destination table or not, and I cover this pattern in my 2006 blog post Checking if a row exists and if it does, has it changed? (note to self: must rewrite that blog post for SSIS2008). Fundamentally the SSIS Lookup component (when using the FullCache option) sucks some data out of a database and holds it in memory so that it can be compared to data in the pipeline. One of the big benefits of using SSIS dataflows is that they process data one buffer at a time; that means that not all of the data from your source exists in the dataflow at the same time, and is why a SSIS dataflow can process data volumes that far exceed the available memory. However, that only applies to data in the pipeline; for reasons that are hopefully obvious ALL of the data in the lookup set must exist in the memory cache for the duration of the dataflow's execution, which means that any memory used by the lookup cache will not be available to be used as a pipeline buffer. Moreover, there's an obvious correlation between the amount of data in the lookup cache and the time it takes to charge that cache; the more data you have then the longer it will take to charge and the longer you have to wait until the dataflow actually starts to do anything. For these reasons your goal is simple: ensure that the lookup cache contains as little data as possible.

    General tips

    Here is a simple tick list you can follow in order to tune your lookups (a minimal example of such a cache-charging query follows this post):

    - Use a SQL statement to charge your cache, don't just pick a table from the dropdown list made available to you. (Read why in SELECT *... or select from a dropdown in an OLE DB Source component?)
    - Only pick the columns that you need, ignore everything else.
    - Make the database columns that your cache is populated from as narrow as possible. If a column is defined as VARCHAR(20) then SSIS will allocate 20 bytes for every value in that column – that is a big waste if the actual values are significantly less than 20 characters in length.
    - Do you need DT_WSTR typed columns or will DT_STR suffice? DT_WSTR uses twice the amount of space to hold values that can be stored using a DT_STR, so if you can use DT_STR, consider doing so. The same principle goes for the numerical datatypes DT_I2/DT_I4/DT_I8.
    - Only populate the cache with data that you KNOW you will need. In other words, think about your WHERE clause!

    Thinking outside the box

    It is tempting to build a large monolithic dataflow that does many things, one of which is a Lookup. Often though you can make better use of your available resources by, well, mixing things up a little and here are a few ideas to get your creative juices flowing:

    - There is no rule that says everything has to happen in a single dataflow. If you have some particularly resource intensive lookups then consider putting that lookup into a dataflow all of its own and using raw files to pass the pipeline data in and out of that dataflow.
    - Know your data. If you think, for example, that the majority of your incoming rows will match with only a small subset of your lookup data then consider chaining multiple lookup components together; the first would use a FullCache containing that data subset, and the remaining data that doesn't find a match could be passed to a second lookup that perhaps uses a NoCache lookup, thus negating the need to pull all of that least-used lookup data into memory.
    - Do you need to process all of your incoming data all at once? If you can process different partitions of your data separately then you can partition your lookup cache as well. For example, if you are using a lookup to convert a location into a [LocationId] then why not process your data one region at a time? This will mean your lookup cache only has to contain data for the location that you are currently processing, and with the ability of the Lookup in SSIS2008 and beyond to charge the cache using a dynamically built SQL statement you'll be able to achieve it using the same dataflow and simply loop over it using a ForEach loop.
    - Taking the previous data partitioning idea further... a dataflow can contain more than one data path, so why not split your data using a Conditional Split component and, again, charge your lookup caches with only the data that they need for that partition.
    - Lookups have two uses: to (1) find a matching row from the lookup set and (2) put attributes from that matching row into the pipeline. Ask yourself, do you need to do these two things at the same time? After all, once you have the key column(s) from your lookup set then you can use that key to get the rest of the attributes further downstream, perhaps even in another dataflow.
    - Are you using the same lookup data set multiple times? If so, consider the file caching option in SSIS 2008 and beyond.
    - Above all, experiment and be creative with different combinations. You may be surprised at what works.

    Final thoughts

    If you want to know more about how the Lookup component differs in SSIS2008 from SSIS2005 then I have a dedicated blog post about that at Lookup component gets a makeover. I am on a mini-crusade at the moment to get a BULK MERGE feature into the database engine, the thinking being that if the database engine can quickly merge massive amounts of data in a similar manner to how it can insert massive amounts using BULK INSERT, then that's a lot of work that wouldn't have to be done in the SSIS pipeline. If you think that is a good idea then go and vote for BULK MERGE on Connect. If you have any other tips to share then please stick them in the comments. Hope this helps! @Jamiet
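
    (A minimal sketch of the kind of narrow, filtered cache-charging query the tips above describe; the table and column names are hypothetical:)

        -- Charge the lookup cache with only the columns and rows the dataflow needs
        SELECT CustomerKey,            -- the key the Lookup matches on
               CustomerAlternateKey    -- the one attribute needed downstream
        FROM dbo.DimCustomer
        WHERE IsCurrentRow = 1;        -- think about your WHERE clause!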

    Read the article

  • T-SQL Tuesday #31: Paradox of the Sawtooth Log

    - by merrillaldrich
    Today's T-SQL Tuesday, hosted by Aaron Nelson (@sqlvariant | sqlvariant.com), has the theme Logging. I was a little pressed for time today to pull this post together, so this will be short and sweet. For a long time, I wondered why and how a database in Full Recovery Mode, which you'd expect to have an ever-growing log -- as all changes are written to the log file -- could in fact have a log usage pattern that looks like this: This graph shows the Percent Log Used (bold, red) and the Log File(s)...(read more)

    Read the article

  • SQL University: Parallelism Week - Introduction

    - by Adam Machanic
    Welcome to Parallelism Week at SQL University. My name is Adam Machanic, and I'm your professor. Imagine having 8 brains, or 16, or 32. Imagine being able to break up complex thoughts and distribute them across your many brains, so that you could solve problems faster. Now quit imagining that, because you're human and you're stuck with only one brain, and you only get access to the entire thing if you're lucky enough to have avoided abusing too many recreational drugs. For your database server,...(read more)

    Read the article

  • SQL University: Parallelism Week - Part 3, Settings and Options

    - by Adam Machanic
    Congratulations! You've made it back for the third and final installment of Parallelism Week here at SQL University. So far we've covered the fundamentals of multitasking vs. parallel processing and delved into how parallel query plans actually work. Today we'll take a look at the settings and options that influence intra-query parallelism and discuss how best to set things up in various situations. Instance-Level Configuration: Your database server probably has more than one logical processor....(read more)

    Read the article
