Search Results

Search found 37457 results on 1499 pages for 'sql 2008 r2'.


  • What criteria would I use to compare SQL StreamInsight vs TPL Dataflow [closed]

    - by makerofthings7
    There is an add-in to the Task Parallel Library (TPL) called TPL Dataflow that allows a variety of data processing scenarios. It seems to have some parallels to the SQL StreamInsight product; however, StreamInsight has some interesting licensing around it, and its performance varies depending on which license I get. So I found myself asking whether I should just use TPL Dataflow, avoid the licensing issues, and possibly get better performance. Can anyone tell me if performance is a valid criterion for comparing SQL StreamInsight vs TPL Dataflow? What other criteria should I be looking at when comparing the two?

    Read the article

  • The SQL Beat Podcast-Capturing a SQL Rockstar

    - by SQLBeat
    This is the first permissible (waiting for signed disclaimers) episode of the SQL Beat Podcast, featuring the gracious and famous Thomas La Rock. We talk about gay marriage, abortion, and the SQL community, and are generally convivial and ergonomic, as will be witnessed by THAT LONG PIPE IN THE CHAIR. If there ever was a gentleman, SQL Rockstar is one, and I want to thank him from the bottom of my digital recorder for agreeing to talk to me and my audience. All forty of them will appreciate the candor. Enjoy, world. I did. Oh, and a special rock star drum intro from me to you. CLICK HERE TO PLAY

    Read the article

  • SQL and Database: Where to start! [closed]

    - by Nizar
    First of all, I know just HTML and CSS (this is my background in web development and design), and I have found that before I move to a server-side language I need to learn about databases and SQL. My first question: do you think this order of learning is good (I mean, learning SQL after HTML and CSS)? My second, related question: do I have to learn a lot about SQL and databases, or just the basics? And if you know any good beginners' books, please write their titles.

    Read the article

  • Reg Query Issues

    - by Fitz
    I have a batch script that does reporting on our systems, and part of it queries the registry for information. The script fails to get the key's value when it's run by the system, but whenever I run the script myself, it works perfectly. The command:

    REG QUERY "HKEY_LOCAL_MACHINE\SOFTWARE\TrendMicro\ScanMail for Exchange\CurrentVersion" /v PatternStringFormatted > current1.tmp

    should return:

    HKEY_LOCAL_MACHINE\SOFTWARE\TrendMicro\ScanMail for Exchange\CurrentVersion
        PatternStringFormatted    REG_SZ    6.645.00

    This script is failing on a Server 2008 R2 machine, but runs fine on Server 2003 R2 machines.

    Read the article

  • I have a problem installing SQL Server 2008 R2 on Windows Server 2008 R2. It gives me the following error.

    - by user411072
    TITLE: Microsoft SQL Server 2008 R2 Setup

    The following error has occurred: Error reading from file: C:\en_sql_server_2008_r2_standard_x86_x64_ia64_dvd_521546\1033_ENU_LP\x64\setup\sqlsupport_msi\PFiles\SqlServr\100\Setup\fe72iemr\e4grzzmx\x64\1033\ynzwp_7q.rtf. Verify that the file exists and that you can access it.

    For help, click: http://go.microsoft.com/fwlink?LinkID=20476&ProdName=Microsoft+SQL+Server&EvtSrc=setup.rll&EvtID=50000&ProdVer=10.50.1600.1&EvtType=0xDF039760%25401201%25401

    BUTTONS: OK

    Read the article

  • Task failed because "sgen.exe" was not found

    - by Kinze
    I am getting the following error when attempting to build my project in Visual Studio 2008 Professional Edition: *Task failed because “sgen.exe” was not found, or the correct Microsoft Windows SDK is not installed. The task is looking for “sgen.exe” in the “bin” subdirectory beneath the location specified in the InstallationFolder value of the registry key HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Microsoft SDKs\Windows\v6.0A. You may be able to solve the problem by doing one of the following: 1) Install the Microsoft Windows SDK for Windows Server 2008 and .NET Framework 3.5. 2) Install Visual Studio 2008. 3) Manually set the above registry key to the correct location. 4) Pass the correct location into the “ToolPath” parameter of the task.* I tried downloading the Microsoft Windows SDK for Windows Server 2008 and .NET Framework 3.5 and still get the error. I also tried downloading the Windows 7 SDK and .NET Framework 3.5, with the same result. I also tried manually editing the registry to change the InstallationFolder value, and I tried repairing the Visual Studio install. The project was originally created on WinXP, and I am trying to compile on a reformatted machine running Windows 7 Enterprise.

    Read the article

  • Passing integer lists in a SQL query, best practices

    - by Artiom Chilaru
    I'm currently looking at ways to pass lists of integers in a SQL query, trying to decide which of them is best in which situation, what the benefits of each are, and what pitfalls should be avoided :) Right now I know of 3 ways that we currently use in our application.

    1) Table-valued parameter. Create a new table-valued parameter type in SQL Server:

    CREATE TYPE [dbo].[TVP_INT] AS TABLE( [ID] [int] NOT NULL )

    Then run the query against it:

    using (var conn = new SqlConnection(DataContext.GetDefaultConnectionString))
    {
        var comm = conn.CreateCommand();
        comm.CommandType = CommandType.Text;
        comm.CommandText = @"
            UPDATE DA
            SET [tsLastImportAttempt] = CURRENT_TIMESTAMP
            FROM [Account] DA
            JOIN @values IDs ON DA.ID = IDs.ID";
        comm.Parameters.Add(new SqlParameter("values", downloadResults.Select(d => d.ID).ToDataTable()) { TypeName = "TVP_INT" });
        conn.Open();
        comm.ExecuteScalar();
    }

    The major disadvantage of this method is the fact that Linq doesn't support table-valued parameters (if you create an SP with a TVP parameter, Linq won't be able to run it) :(

    2) Convert the list to binary and use it in Linq! This is a bit better: create an SP, and you can run it within Linq :) To do this, the SP takes an IMAGE parameter, and we use a user-defined function (UDF) to convert it to a table. We currently have implementations of this function written in C++ and in assembly; both have pretty much the same performance :) Basically, each integer is represented by 4 bytes and passed to the SP. In .NET we have an extension method that converts an IEnumerable<Int32> to a byte array.

    The extension method:

    public static Byte[] ToBinary(this IEnumerable<Int32> intList)
    {
        return ToBinaryEnum(intList).ToArray();
    }

    private static IEnumerable<Byte> ToBinaryEnum(IEnumerable<Int32> intList)
    {
        IEnumerator<Int32> marker = intList.GetEnumerator();
        while (marker.MoveNext())
        {
            Byte[] result = BitConverter.GetBytes(marker.Current);
            Array.Reverse(result);
            foreach (byte b in result)
                yield return b;
        }
    }

    The SP:

    CREATE PROCEDURE [Accounts-UpdateImportAttempts]
        @values IMAGE
    AS
    BEGIN
        UPDATE DA
        SET [tsLastImportAttempt] = CURRENT_TIMESTAMP
        FROM [Account] DA
        JOIN dbo.udfIntegerArray(@values, 4) IDs ON DA.ID = IDs.Value4
    END

    And we can use it by running the SP directly, or in any Linq query we need:

    using (var db = new DataContext())
    {
        db.Accounts_UpdateImportAttempts(downloadResults.Select(d => d.ID).ToBinary());
        // or
        var accounts = db.Accounts
            .Where(a => db.udfIntegerArray(downloadResults.Select(d => d.ID).ToBinary(), 4)
                .Select(i => i.Value4)
                .Contains(a.ID));
    }

    This method has the benefit of using compiled queries in Linq (which will have the same SQL definition, and query plan, so will also be cached), and can be used in SPs as well. Both these methods are theoretically unlimited, so you can pass millions of ints at a time :)

    3) The simple Linq .Contains(). This is a simpler approach, and is perfect in simple scenarios. But it is, of course, limited by this:

    using (var db = new DataContext())
    {
        var accounts = db.Accounts
            .Where(a => downloadResults.Select(d => d.ID).Contains(a.ID));
    }

    The biggest drawback of this method is that each integer in the downloadResults variable is passed as a separate parameter, so the query is limited by SQL Server's maximum number of parameters per query (a couple of thousand, if I remember right). So I'd like to ask: what do you think is the best of these, and what other methods and approaches have I missed?
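
    (An illustrative footnote to option 1: a TVP can still be consumed inside a stored procedure, as long as the parameter is declared READONLY. A minimal sketch, reusing the [dbo].[TVP_INT] type and [Account] table above; the procedure name is made up:)

    CREATE PROCEDURE dbo.Accounts_UpdateImportAttempts_TVP
        @values dbo.TVP_INT READONLY  -- table-valued parameters must be READONLY
    AS
    BEGIN
        UPDATE DA
        SET [tsLastImportAttempt] = CURRENT_TIMESTAMP
        FROM [Account] DA
        JOIN @values IDs ON DA.ID = IDs.ID;
    END;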

    Read the article

  • Problems with Continuous Integration (CI) in TFS during Build Automation?

    - by Steve Johnson
    Hi all, I am using TFS 2008 and Visual Studio, and my boss has instructed me to implement build automation for development and release builds for a web project. I am a total newbie in build automation. There are multiple developers working on the project on different machines using Visual Studio 2008 Team System. Source is already being maintained on TFS 2008. The SQL Server in use is SQL Server 2000, and the hosting IIS is IIS 7.5 on Windows Server 2008 x64. I have searched over the net and found Continuous Integration (CI) and Nightly Builds to be two important build automation techniques. I was just wondering about any disadvantages associated with the two methodologies (CI and Nightly Builds). If someone could guide me to a working tutorial that explains both techniques, it would be quite helpful. Please also tell me the requirements of IIS, SQL Server, and anything else that might be a pre-requisite to implementing build automation. Also, I would like to know whether there are other techniques that are better than CI. Replies and discussion much appreciated. Thanks

    Read the article

  • Handy SQL Server Functions Series (HSSFS) Part 2.0 - Prelude to Parsing Patterns Properly

    - by Most Valuable Yak (Rob Volk)
    In Part 1 of the series I wrote about 2 lesser-known and somewhat undocumented functions. In this part, I'm going to cover some familiar string functions like Substring(), Parsename(), Patindex(), and Charindex() and delve into their strengths and weaknesses. I'm also splitting this part up into sub-parts to help focus on a particular technique and/or problem with the technique, hence the Part 2.0. Consider this a composite post, or com-post, if you will. (It may just turn out to be a pile of sh_t after all)

    I'll be using a contrived example, perhaps the most frustratingly useful, or usefully frustrating, function in SQL Server: @@VERSION. Contrived, because there are better ways to get the information (which I'll cover later); frustrating, because of the way Microsoft formatted the value; and useful because it does have 1 or 2 bits of information not found elsewhere. First let's take a look at the output of @@VERSION:

    Microsoft SQL Server 2008 R2 (RTM) - 10.50.1600.1 (Intel X86)
        Apr 2 2010 15:53:02
        Copyright (c) Microsoft Corporation
        Developer Edition on Windows NT 5.1 <X86> (Build 2600: Service Pack 3)

    There are 4 lines, with lines 2-4 indented with a tab character. While this PRINTs nicely, if you SELECT @@VERSION in grid mode it all runs together because grid mode ignores carriage return/line feed (CR/LF) characters. Not fatal, but annoying. Note that @@VERSION's output will vary depending on the edition and version of SQL Server, and also the OS it's installed on. Despite the differences, the output is laid out the same way and the relevant pieces are in the same order.

    I'll be using the following view for Parts 2.1 onward, so we have a nice collection of @@VERSION information:

    create view version(SQLVersion, VersionString) AS (
    select 2000, 'Microsoft SQL Server 2000 - 8.00.2055 (Intel X86) Dec 16 2008 19:46:53 Copyright (c) 1988-2003 Microsoft Corporation Developer Edition on Windows NT 5.1 (Build 2600: Service Pack 3)'
    union all
    select 2005, 'Microsoft SQL Server 2005 - 9.00.4053.00 (Intel X86) May 26 2009 14:24:20 Copyright (c) 1988-2005 Microsoft Corporation Developer Edition on Windows NT 5.1 (Build 2600: Service Pack 3)'
    union all
    select 2008, 'Microsoft SQL Server 2008 R2 (RTM) - 10.50.1600.1 (Intel X86) Apr 2 2010 15:53:02 Copyright (c) Microsoft Corporation Developer Edition on Windows NT 5.1 <X86> (Build 2600: Service Pack 3)'
    union all
    select 2005, 'Microsoft SQL Server 2005 - 9.00.3080.00 (Intel X86) Sep 6 2009 01:43:32 Copyright (c) 1988-2005 Microsoft Corporation Standard Edition on Windows NT 5.2 (Build 3790: Service Pack 2)'
    union all
    select 2008, 'Microsoft SQL Server 2008 R2 (RTM) - 10.50.1600.1 (X64) Apr 2 2010 15:48:46 Copyright (c) Microsoft Corporation Developer Edition (64-bit) on Windows NT 6.1 <X64> (Build 7600: ) (Hypervisor)'
    union all
    select 2008, 'Microsoft SQL Server 2008 R2 (RTM) - 10.50.1600.1 (X64) Apr 2 2010 15:48:46 Copyright (c) Microsoft Corporation Express Edition with Advanced Services (64-bit) on Windows NT 6.1 <X64> (Build 7600: ) (Hypervisor)'
    )

    Feel free to add your own @@VERSION info if it's not already there. In Part 2.1 I'll focus on extracting the SQL Server version number (10.50.1600.1 in the first example) and the Edition (Developer), but will have a solution that works with all versions. Stay tuned!
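
    (For reference, the "better ways to get the information" alluded to above are presumably the documented SERVERPROPERTY values; a minimal sketch using real property names:)

    SELECT SERVERPROPERTY('ProductVersion') AS ProductVersion, -- e.g. 10.50.1600.1
           SERVERPROPERTY('ProductLevel')   AS ProductLevel,   -- e.g. RTM, SP1
           SERVERPROPERTY('Edition')        AS Edition;        -- e.g. Developer Edition (64-bit)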

    Read the article

  • SSIS Lookup component tuning tips

    - by jamiet
    Yesterday evening I attended a London meeting of the UK SQL Server User Group at Microsoft's offices in London Victoria. As usual it was both a fun and informative evening, and in particular there seemed to be a few questions arising about tuning the SSIS Lookup component; I rattled off some comments and figured it would be prudent to drop some of them into a dedicated blog post, hence the one you are reading right now.

    Scene setting. A popular pattern in SSIS is to use a Lookup component to determine whether a record in the pipeline already exists in the intended destination table or not, and I cover this pattern in my 2006 blog post Checking if a row exists and if it does, has it changed? (note to self: must rewrite that blog post for SSIS2008). Fundamentally the SSIS Lookup component (when using the FullCache option) sucks some data out of a database and holds it in memory so that it can be compared to data in the pipeline. One of the big benefits of using SSIS dataflows is that they process data one buffer at a time; that means that not all of the data from your source exists in the dataflow at the same time, and is why a SSIS dataflow can process data volumes that far exceed the available memory. However, that only applies to data in the pipeline; for reasons that are hopefully obvious, ALL of the data in the lookup set must exist in the memory cache for the duration of the dataflow's execution, which means that any memory used by the lookup cache will not be available to be used as a pipeline buffer. Moreover, there's an obvious correlation between the amount of data in the lookup cache and the time it takes to charge that cache; the more data you have, the longer it will take to charge and the longer you have to wait until the dataflow actually starts to do anything. For these reasons your goal is simple: ensure that the lookup cache contains as little data as possible.

    General tips. Here is a simple tick list you can follow in order to tune your lookups:

    - Use a SQL statement to charge your cache, don't just pick a table from the dropdown list made available to you. (Read why in SELECT *... or select from a dropdown in an OLE DB Source component?)
    - Only pick the columns that you need; ignore everything else.
    - Make the database columns that your cache is populated from as narrow as possible. If a column is defined as VARCHAR(20) then SSIS will allocate 20 bytes for every value in that column – that is a big waste if the actual values are significantly less than 20 characters in length.
    - Do you need DT_WSTR typed columns or will DT_STR suffice? DT_WSTR uses twice the amount of space to hold values that can be stored using a DT_STR, so if you can use DT_STR, consider doing so. The same principle goes for the numerical datatypes DT_I2/DT_I4/DT_I8.
    - Only populate the cache with data that you KNOW you will need. In other words, think about your WHERE clause! A sketch of such a cache-charging query appears at the end of this post.

    Thinking outside the box. It is tempting to build a large monolithic dataflow that does many things, one of which is a Lookup. Often though you can make better use of your available resources by, well, mixing things up a little, and here are a few ideas to get your creative juices flowing:

    - There is no rule that says everything has to happen in a single dataflow. If you have some particularly resource-intensive lookups then consider putting that lookup into a dataflow all of its own and using raw files to pass the pipeline data in and out of that dataflow.
    - Know your data. If you think, for example, that the majority of your incoming rows will match with only a small subset of your lookup data, then consider chaining multiple lookup components together; the first would use a FullCache containing that data subset, and the remaining data that doesn't find a match could be passed to a second lookup that perhaps uses a NoCache lookup, thus negating the need to pull all of that least-used lookup data into memory.
    - Do you need to process all of your incoming data all at once? If you can process different partitions of your data separately then you can partition your lookup cache as well. For example, if you are using a lookup to convert a location into a [LocationId], then why not process your data one region at a time? This will mean your lookup cache only has to contain data for the region that you are currently processing, and with the ability of the Lookup in SSIS2008 and beyond to charge the cache using a dynamically built SQL statement you'll be able to achieve it using the same dataflow and simply loop over it using a ForEach loop.
    - Taking the previous data-partitioning idea further: a dataflow can contain more than one data path, so why not split your data using a Conditional Split component and, again, charge your lookup caches with only the data that they need for that partition.
    - Lookups have two uses: to (1) find a matching row from the lookup set and (2) put attributes from that matching row into the pipeline. Ask yourself, do you need to do these two things at the same time? After all, once you have the key column(s) from your lookup set then you can use that key to get the rest of the attributes further downstream, perhaps even in another dataflow.
    - Are you using the same lookup data set multiple times? If so, consider the file caching option in SSIS 2008 and beyond.
    - Above all, experiment and be creative with different combinations. You may be surprised at what works.

    Final thoughts. If you want to know more about how the Lookup component differs in SSIS2008 from SSIS2005 then I have a dedicated blog post about that at Lookup component gets a makeover. I am on a mini-crusade at the moment to get a BULK MERGE feature into the database engine, the thinking being that if the database engine can quickly merge massive amounts of data in a similar manner to how it can insert massive amounts using BULK INSERT, then that's a lot of work that wouldn't have to be done in the SSIS pipeline. If you think that is a good idea then go and vote for BULK MERGE on Connect. If you have any other tips to share then please stick them in the comments. Hope this helps! @Jamiet
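
    (As promised in the tick list above: a minimal sketch of a cache-charging query that applies those tips, i.e. needed columns only, narrowed types, and a WHERE clause. All table and column names here are hypothetical:)

    SELECT CustomerKey,                                       -- the join key the Lookup matches on
           CAST(CustomerCode AS varchar(10)) AS CustomerCode  -- narrowed from a wider source column
    FROM dbo.DimCustomer
    WHERE IsCurrent = 1         -- only rows we KNOW we will need
      AND Region = 'EMEA';      -- partition the cache if processing one region at a time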

    Read the article

  • T-SQL Tuesday #31: Paradox of the Sawtooth Log

    - by merrillaldrich
    Today’s T-SQL Tuesday, hosted by Aaron Nelson (@sqlvariant | sqlvariant.com), has the theme Logging. I was a little pressed for time today to pull this post together, so this will be short and sweet. For a long time, I wondered why and how a database in the Full Recovery Model, which you’d expect to have an ever-growing log -- as all changes are written to the log file -- could in fact have a log usage pattern that looks like this: This graph shows the Percent Log Used (bold, red) and the Log File(s)...(read more)

    Read the article

  • SQL University: Parallelism Week - Introduction

    - by Adam Machanic
    Welcome to Parallelism Week at SQL University. My name is Adam Machanic, and I'm your professor. Imagine having 8 brains, or 16, or 32. Imagine being able to break up complex thoughts and distribute them across your many brains, so that you could solve problems faster. Now quit imagining that, because you're human and you're stuck with only one brain, and you only get access to the entire thing if you're lucky enough to have avoided abusing too many recreational drugs. For your database server,...(read more)

    Read the article

  • SQL University: Parallelism Week - Part 3, Settings and Options

    - by Adam Machanic
    Congratulations! You've made it back for the third and final installment of Parallelism Week here at SQL University. So far we've covered the fundamentals of multitasking vs. parallel processing and delved into how parallel query plans actually work. Today we'll take a look at the settings and options that influence intra-query parallelism and discuss how best to set things up in various situations. Instance-Level Configuration: your database server probably has more than one logical processor....(read more)
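
    (The excerpt cuts off here, but the canonical instance-level parallelism knobs are exposed through sp_configure; a minimal sketch with illustrative values, not recommendations:)

    EXEC sp_configure 'show advanced options', 1;  -- make the parallelism settings visible
    RECONFIGURE;

    EXEC sp_configure 'max degree of parallelism', 4;        -- cap schedulers per parallel query (0 = server decides)
    EXEC sp_configure 'cost threshold for parallelism', 25;  -- minimum estimated cost before a parallel plan is considered
    RECONFIGURE;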

    Read the article

  • SQL Down Under podcast 60 with SQL Server MVP Adam Machanic

    - by Greg Low
    I managed to get another podcast posted over the weekend. Late last week, I recorded a show with Adam Machanic. Adam's always fascinating. In this show, he's talking about what he's found regarding increasing query performance using parallelism. Late in the show, he gives his thoughts on a number of topics related to the upcoming SQL Server 2014. Enjoy! The show is online now: http://www.sqldownunder.com/Podcasts

    Read the article

  • SQL Server Boolean (bit datatype)

    - by Derek D.
    In SQL Server, boolean values can be represented using the bit datatype. Bit values differ from boolean values in that a bit can actually be one of three values: 1, 0, or NULL; while booleans can only be either true or false. When assigning bits, it is best to use 1 or zero [...]
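
    (A minimal sketch of the three possible states; the table and column names are made up:)

    DECLARE @flags TABLE (IsActive bit NULL);
    INSERT INTO @flags (IsActive) VALUES (1), (0), (NULL);  -- the three states a bit can hold
    SELECT IsActive FROM @flags;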

    Read the article

  • T-SQL Tuesday # 16 : This is not the aggregate you're looking for

    - by AaronBertrand
    This week, T-SQL Tuesday is being hosted by Jes Borland (blog | twitter), and the theme is "Aggregate Functions." When people think of aggregates, they tend to think of MAX(), SUM() and COUNT(). And occasionally, less common functions such as AVG() and STDEV(). I thought I would write a quick post about a different type of aggregate: string concatenation. Even going back to my classic ASP days, one of the more common questions out in the community has been, "how do I turn a column into a comma-separated...(read more)
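
    (The excerpt is cut off, but the classic answer of that era for string-concatenation aggregates is the FOR XML PATH trick; a minimal sketch against made-up data:)

    DECLARE @t TABLE (CustomerID int, Tag varchar(20));
    INSERT INTO @t VALUES (1, 'red'), (1, 'green'), (2, 'blue');

    SELECT CustomerID,
           STUFF((SELECT ',' + Tag
                  FROM @t AS i
                  WHERE i.CustomerID = o.CustomerID
                  FOR XML PATH('')), 1, 1, '') AS Tags  -- STUFF strips the leading comma
    FROM @t AS o
    GROUP BY CustomerID;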

    Read the article

  • T-SQL Tuesday #006: LOB Data

    - by Adam Machanic
    Just a quick note for those of you who may not have seen Michael Coles's post (and a reminder for the rest of you): the topic of this month's T-SQL Tuesday is LOB data. Get your posts ready; next Tuesday we go big!...(read more)

    Read the article

  • SSIS Design Patterns Training in London 8-11 Sep!

    - by andyleonard
    A few seats remain for my course SQL Server Integration Services 2012 Design Patterns, to be delivered in London 8-11 Sep 2014. Register today to learn more about:

    - New features in SSIS 2012 and 2014
    - Advanced patterns for loading data warehouses
    - Error handling
    - The (new) Project Deployment Model
    - Scripting in SSIS
    - The (new) SSIS Catalog
    - Designing custom SSIS tasks
    - Executing, managing, monitoring, and administering SSIS in the enterprise
    - Business Intelligence Markup Language (Biml)
    - BimlScript
    - ETL Instrumentation...(read more)

    Read the article

  • T-SQL User-Defined Functions: the good, the bad, and the ugly (part 4)

    - by Hugo Kornelis
    Scalar user-defined functions are bad for performance. I already showed that for T-SQL scalar user-defined functions without and with data access, and for most CLR scalar user-defined functions without data access, and in this blog post I will show that CLR scalar user-defined functions with data access fit into that picture. First attempt: sticking to my simplistic example of finding the triple of an integer value by reading it from a pre-populated lookup table and following the standard recommendations...(read more)
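
    (For context, a minimal sketch of the kind of data-accessing scalar UDF the series describes: finding the triple of an integer by reading it from a pre-populated lookup table. Object names here are mine, not the article's:)

    CREATE TABLE dbo.Triples (Value int PRIMARY KEY, Triple int NOT NULL);
    GO
    CREATE FUNCTION dbo.Triple (@Value int)
    RETURNS int
    AS
    BEGIN
        -- Data access inside a scalar UDF runs once per calling row,
        -- which is exactly why these functions tend to hurt performance.
        RETURN (SELECT Triple FROM dbo.Triples WHERE Value = @Value);
    END;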

    Read the article

  • Using Extended Events in SQL Server Denali CTP1 to Map out the TransactionLog SQL Trace Event EventSubClass Values

    - by Jonathan Kehayias
    John Samson (Blog | Twitter) asked on the MSDN Forums about the meaning/description of the numeric values returned by the EventSubClass column of the TransactionLog SQL Trace Event. John pointed out that this information is not available for this event like it is for the other events in the Books Online topic (TransactionLog Event Class), or in the sys.trace_subclass_values DMV. John wanted to know if there was a way to determine this information. I did some looking and found...(read more)
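
    (The excerpt is truncated, but Extended Events map metadata lives in sys.dm_xe_map_values; a minimal sketch of how one might hunt for a transaction-log-related map (the LIKE filter is a guess, not the article's exact query):)

    SELECT name, map_key, map_value
    FROM sys.dm_xe_map_values
    WHERE name LIKE '%transaction%'   -- speculative filter for log-related maps
    ORDER BY name, map_key;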

    Read the article

  • SQL Server Cast

    - by Derek Dieter
    The SQL Server CAST function is the easiest data type conversion function to use, compared to the CONVERT function. It takes only one parameter, followed by the AS clause, to convert a specified value. A quick example is the following:

    SELECT UserID_String = CAST(UserID AS varchar(50)) FROM dbo.[User]

    This example will convert the integer to a character value. [...]
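
    (Since CAST is being contrasted with CONVERT, a quick sketch of the practical difference: CONVERT accepts an optional style argument that CAST lacks. Style 112 below is the documented ISO yyyymmdd date style:)

    SELECT CAST(GETDATE() AS varchar(30));       -- type only
    SELECT CONVERT(varchar(8), GETDATE(), 112);  -- type plus style: yyyymmdd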

    Read the article

  • T-SQL Tuesday: Aggregations in SSIS

    - by andyleonard
    Introduction. Jes Borland (Blog | @grrl_geek) is hosting this month's T-SQL Tuesday - started by SQLBlog's own Adam Machanic (Blog | @AdamMachanic) - and it is about aggregation. I thought I'd show a couple of ways to do aggregation using SSIS. The Aggregate Transformation in SSIS: the Aggregate transform in SSIS is fast. I built an SSIS package (AggregateScripts.dtsx) with two Data Flow Tasks (Using the Aggregate Transform and Using a Script Component). Using the Aggregate Transform looks like this:...(read more)

    Read the article

  • SQL Server 2012 - Upgrade Whitepaper

    - by JustinL
    Just a short note to mention that Microsoft have released the Technical Reference Guide for upgrading to SQL Server 2012. The paper is available for download here: http://tinyurl.com/84xm5b4 There are some interesting details on approaches to upgrade, including features such as high availability, full-text search, Service Broker and other components (SSIS, SSAS, SSRS). Additionally, following a (fairly) recent initiative to organise and present TechNet content more easily, there's some useful content (with interesting presentation) at the link below: http://technet.microsoft.com/en-us/library/hh393545.aspx Good luck planning your upgrades, Regards, Justin

    Read the article
