Search Results

Search found 5861 results on 235 pages for 'ssis reporting pack'.

Page 36 of 235

  • SSIS Training Comes to NYC 30 Jul-3 Aug!

    - by andyleonard
    Linchpin People is excited to announce the scheduling of From Zero To SSIS in New York City 30 Jul – 03 Aug 2012! Training Description From Zero to SSIS was developed by Andy Leonard to train technology professionals in the fine art of using SQL Server Integration Services (SSIS) to build data integration and Extract-Transform-Load (ETL) solutions. The training is focused around labs and emphasizes a hands-on approach. Most technologists learn by doing; this training is designed to maximize the time...(read more)

    Read the article

  • JavaScript pack("d") - binary strings

    - by Tim Whitlock
    I'm trying to replicate the Perl and PHP style pack and unpack functions in JavaScript. Unsigned integers were easy enough, so my pack('n') and pack('N') are ok. But my lack of a computer science background is a hurdle now and I don't know where to start with pack('d') for packing JavaScript's standard floating point. Is there a JavaScript library for this out there? If not, is there a good resource where I can learn how to do this? I am fine with bitwise and binary level operations in JS, I just don't know where to start with the logic. Thanks.

    Read the article

  • Stored Procedures with SSRS? Hmm… not so much

    - by Rob Farley
    Little Bobby Tables’ mother says you should always sanitise your data input. Except that I think she’s wrong. The SQL Injection aspect is for another post, where I’ll show you why I think SQL Injection is the same kind of attack as many other attacks, such as the old buffer overflow, but here I want to have a bit of a whinge about the way that some people sanitise data input, and even have a whinge about people who insist on using stored procedures for SSRS reports. Let me say that again, in case you missed it the first time: I want to have a whinge about people who insist on using stored procedures for SSRS reports. Let’s look at the data input sanitisation aspect – except that I’m going to call it ‘parameter validation’. I’m talking about code that looks like this: create procedure dbo.GetMonthSummaryPerSalesPerson(@eomdate datetime) as begin     /* First check that @eomdate is a valid date */     if isdate(@eomdate) != 1     begin         select 'Please enter a valid date' as ErrorMessage;         return;     end     /* Then check that time has passed since @eomdate */     if datediff(day,@eomdate,sysdatetime()) < 5     begin         select 'Sorry - EOM is not complete yet' as ErrorMessage;         return;     end         /* If those checks have succeeded, return the data */     select SalesPersonID, count(*) as NumSales, sum(TotalDue) as TotalSales     from Sales.SalesOrderHeader     where OrderDate >= dateadd(month,-1,@eomdate)         and OrderDate < @eomdate     group by SalesPersonID     order by SalesPersonID; end Notice that the code checks that a date has been entered. Seriously??!! This must only be to check for NULL values being passed in, because anything else would have to be a valid datetime to avoid an error. The other check is maybe fair enough, but I still don’t like it. The two problems I have with this stored procedure are the result sets and the small fact that the stored procedure even exists in the first place. But let’s consider the first one of these problems for starters. I’ll get to the second one in a moment. If you read Jes Borland (@grrl_geek)’s recent post about returning multiple result sets in Reporting Services, you’ll be aware that Reporting Services doesn’t support multiple results sets from a single query. And when it says ‘single query’, it includes ‘stored procedure call’. It’ll only handle the first result set that comes back. But that’s okay – we have RETURN statements, so our stored procedure will only ever return a single result set.  Sometimes that result set might contain a single field called ErrorMessage, but it’s still only one result set. Except that it’s not okay, because Reporting Services needs to know what fields to expect. Your report needs to hook into your fields, so SSRS needs to have a way to get that information. For stored procs, it uses an option called FMTONLY. When Reporting Services tries to figure out what fields are going to be returned by a query (or stored procedure call), it doesn’t want to have to run the whole thing. That could take ages. (Maybe it’s seen some of the stored procedures I’ve had to deal with over the years!) So it turns on FMTONLY before it makes the call (and turns it off again afterwards). FMTONLY is designed to be able to figure out the shape of the output, without actually running the contents. It’s very useful, you might think. 
set fmtonly on exec dbo.GetMonthSummaryPerSalesPerson '20030401'; set fmtonly off Without the FMTONLY lines, this stored procedure returns a result set that has three columns and fourteen rows. But with FMTONLY turned on, those rows don’t come back. But what I do get back hurts Reporting Services. It doesn’t run the stored procedure at all. It just looks for anything that could be returned and pushes out a result set in that shape. Despite the fact that I’ve made sure that the logic will only ever return a single result set, the FMTONLY option kills me by returning three of them. It would have been much better to push these checks down into the query itself. alter procedure dbo.GetMonthSummaryPerSalesPerson(@eomdate datetime) as begin     select SalesPersonID, count(*) as NumSales, sum(TotalDue) as TotalSales     from Sales.SalesOrderHeader     where     /* Make sure that @eomdate is valid */         isdate(@eomdate) = 1     /* And that it's sufficiently past */     and datediff(day,@eomdate,sysdatetime()) >= 5     /* And now use it in the filter as appropriate */     and OrderDate >= dateadd(month,-1,@eomdate)     and OrderDate < @eomdate     group by SalesPersonID     order by SalesPersonID; end Now if we run it with FMTONLY turned on, we get the single result set back. But let’s consider the execution plan when we pass in an invalid date. First let’s look at one that returns data. I’ve got a semi-useful index in place on OrderDate, which includes the SalesPersonID and TotalDue fields. It does the job, despite a hefty Sort operation. …compared to one that uses a future date: You might notice that the estimated costs are similar – the Index Seek is still 28%, the Sort is still 71%. But the size of that arrow coming out of the Index Seek is a whole bunch smaller. The coolest thing here is what’s going on with that Index Seek. Let’s look at some of the properties of it. Glance down it with me… Estimated CPU cost of 0.0005728, 387 estimated rows, estimated subtree cost of 0.0044385, ForceSeek false, Number of Executions 0. That’s right – it doesn’t run. So much for reading plans right-to-left... The key is the Filter on the left of it. It has a Startup Expression Predicate in it, which means that it doesn’t call anything further down the plan (to the right) if the predicate evaluates to false. Using this method, we can make sure that our stored procedure contains a single query, and therefore avoid any problems with multiple result sets. If we wanted, we could always use UNION ALL to make sure that we can return an appropriate error message. alter procedure dbo.GetMonthSummaryPerSalesPerson(@eomdate datetime) as begin     select SalesPersonID, count(*) as NumSales, sum(TotalDue) as TotalSales, /*Placeholder: */ '' as ErrorMessage     from Sales.SalesOrderHeader     where     /* Make sure that @eomdate is valid */         isdate(@eomdate) = 1     /* And that it's sufficiently past */     and datediff(day,@eomdate,sysdatetime()) >= 5     /* And now use it in the filter as appropriate */     and OrderDate >= dateadd(month,-1,@eomdate)     and OrderDate < @eomdate     group by SalesPersonID     /* Now include the error messages */     union all     select 0, 0, 0, 'Please enter a valid date' as ErrorMessage     where isdate(@eomdate) != 1     union all     select 0, 0, 0, 'Sorry - EOM is not complete yet' as ErrorMessage     where datediff(day,@eomdate,sysdatetime()) < 5     order by SalesPersonID; end But still I don’t like it, because it’s now a stored procedure with a single query. 
And I don’t like stored procedures that should be functions. That’s right – I think this should be a function, and SSRS should call the function. And I apologise to those of you who are now planning a bonfire for me. Guy Fawkes’ night has already passed this year, so I think you miss out. (And I’m not going to remind you about when the PASS Summit is in 2012.) create function dbo.GetMonthSummaryPerSalesPerson(@eomdate datetime) returns table as return (     select SalesPersonID, count(*) as NumSales, sum(TotalDue) as TotalSales, '' as ErrorMessage     from Sales.SalesOrderHeader     where     /* Make sure that @eomdate is valid */         isdate(@eomdate) = 1     /* And that it's sufficiently past */     and datediff(day,@eomdate,sysdatetime()) >= 5     /* And now use it in the filter as appropriate */     and OrderDate >= dateadd(month,-1,@eomdate)     and OrderDate < @eomdate     group by SalesPersonID     union all     select 0, 0, 0, 'Please enter a valid date' as ErrorMessage     where isdate(@eomdate) != 1     union all     select 0, 0, 0, 'Sorry - EOM is not complete yet' as ErrorMessage     where datediff(day,@eomdate,sysdatetime()) < 5 ); We’ve had to lose the ORDER BY – but that’s fine, as that’s a client thing anyway. We can have our reports leverage this stored query still, but we’re recognising that it’s a query, not a procedure. A procedure is designed to DO stuff, not just return data. We even get entries in sys.columns that confirm what the shape of the result set actually is, which makes sense, because a table-valued function is the right mechanism to return data. And we get so much more flexibility with this. If you haven’t seen the simplification stuff that I’ve preached on before, jump over to http://bit.ly/SimpleRob and watch the video of when I broke a microphone and nearly fell off the stage in Wales. You’ll see the impact of being able to have a simplifiable query. You can also read the procedural functions post I wrote recently, if you didn’t follow the link from a few paragraphs ago. So if we want the list of SalesPeople that made any kind of sales in a given month, we can do something like: select SalesPersonID from dbo.GetMonthSummaryPerSalesPerson(@eomonth) order by SalesPersonID; This doesn’t need to look up the TotalDue field, which makes a simpler plan. select * from dbo.GetMonthSummaryPerSalesPerson(@eomonth) where SalesPersonID is not null order by SalesPersonID; This one can avoid having to do the work on the rows that don’t have a SalesPersonID value, pushing the predicate into the Index Seek rather than filtering the results that come back to the report. If we had joins involved, we might see some of those being simplified out. We also get the ability to include query hints in individual reports. We shift from having a single-use stored procedure to having a reusable stored query – and isn’t that one of the main points of modularisation? Stored procedures in Reporting Services are just a bit limited for my liking. They’re useful in plenty of ways, but if you insist on using stored procedures all the time rather that queries that use functions – that’s rubbish. @rob_farley

    Read the article

  • Kill all the project files!

    - by jamiet
    Like many folks I’m a keen podcast listener and yesterday my commute was filled by listening to Scott Hunter being interviewed on .Net Rocks about the next version of ASP.Net. One thing Scott said really struck a chord with me. I don’t remember the full quote but he was talking about how the ASP.Net project file (i.e. the .csproj file) is going away. The rationale being that the main purpose of that file is to list all the other files in the project, and that’s something that the file system is pretty good at. In Scott’s own words (that someone helpfully put in the comments): A file that lists files is really redundant when the OS already does this Romeliz Valenciano correctly pointed out on Twitter that there will still be a project.json file however no longer will there be a need to keep a list of files in a project file. I suspect project.json will simply contain a list of exclusions where necessary rather than the current approach where the project file is a list of inclusions. On the face of it this seems like a pretty good idea. I’ve long been a fan of convention over configuration and this is a great example of that. Instead of listing all the files in a separate file, just treat all the files in the directory as being part of the project. Ostensibly the approach is if its in the directory, its part of the project. Simple. Now I’m not an ASP.net developer, far from it, but it did occur to me that the same approach could be applied to the two Visual Studio project types that I am most familiar with, SSIS & SSDT. Like many people I’ve long been irritated by SSIS projects that display a faux file system inside Solution Explorer. As you can see in the screenshot below the project has Miscellaneous and Connection Managers folders but no such folders exist on the file system: This may seem like a minor thing but it means useful Solution Explorer features like Show All Files and Open Folder in Windows Explorer don’t work and quite frankly it makes me feel like a second class citizen in the Microsoft ecosystem. I’m a developer, treat me like one. Don’t try and hide the detail of how a project works under the covers, show it to me. I’m a big boy, I can handle it! Would it not be preferable to simply treat all the .dtsx files in a directory as being part of a project? I think it would, that’s pretty much all the .dtproj file does anyway (that, and present things in a non-alphabetic order – something else that wildly irritates me), so why not just get rid of the .dtproj file? In the case of SSDT the .sqlproj actually does a whole lot more than simply list files because it also states the BuildAction of each file (Build, NotInBuild, Post-Deployment, etc…) but I see no reason why the convention over configuration approach can’t help us there either. Want to know which is the Post-deployment script? Well, its the one called Post-DeploymentScript.sql! Simple! So that’s my new crusade. Let’s kill all the project files (well, the .dtproj & .sqlproj ones anyway). Are you with me? @Jamiet

    Read the article

  • Consolidate SQL Server Reporting Services

    - by Eric C. Singer
    I've been a big fan of consolidating as many DBs as possible onto a few SQL servers for a while and I've had great success with it. However, I've never had to deal with SQL Server Reporting Services. Has anyone migrated SSRS from a bunch of random SQL servers onto a consolidated SQL server? I don't exactly know a whole lot about SSRS, which is part of the problem. To my knowledge, it's one DB per SSRS instance, so it sounds like I'd need to find a way of exporting data and merging it. Basically the process used to look like this: move the DB from SQL Express to the shared SQL server, then change the connection in the app to point at the new SQL server. With Reporting Services, how do I move the Reporting Services component of the DB as well? I realize I may need to tweak the app, but my question is on the SQL side.
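
    For anyone attempting this, a minimal sketch of the usual native-mode approach, with illustrative database names, paths and password: back up the ReportServer catalog and its encryption key on the source instance, restore both on the consolidated server, then point the target SSRS instance at the restored catalog in Reporting Services Configuration Manager. Note this moves a single instance's catalog; merging several catalogs typically means redeploying the reports (for example with rs.exe scripts) rather than restoring databases.

        -- On the source server: back up the report server catalog databases.
        BACKUP DATABASE ReportServer       TO DISK = N'C:\Backups\ReportServer.bak';
        BACKUP DATABASE ReportServerTempDB TO DISK = N'C:\Backups\ReportServerTempDB.bak';

        -- Also export the encryption key from a command prompt (not T-SQL):
        --   rskeymgmt -e -f C:\Backups\rs_key.snk -p <password>

        -- On the consolidated server: restore both databases, then re-apply the key:
        RESTORE DATABASE ReportServer       FROM DISK = N'C:\Backups\ReportServer.bak';
        RESTORE DATABASE ReportServerTempDB FROM DISK = N'C:\Backups\ReportServerTempDB.bak';
        --   rskeymgmt -a -f C:\Backups\rs_key.snk -p <password>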

    Read the article

  • Heap corruption error after language pack installation for Visual Studio 2012

    - by Lyndon
    I have installed the German version of Visual Studio 2012 Premium on my German Windows machine and installed the English language pack for Visual Studio 2012 Premium, and it works great, but after I installed the German language pack I get the heap corruption error 0xc0000374. The faulty module is ntdll.dll, version 6.3.9600.16408. Only restoring Windows resolves this issue. Edit: This error also occurs when changing the displayed language, and I was able to observe this behavior only after updating from Windows 8 to Windows 8.1 and updating from DevExpress 12.1 to DevExpress 13.1. Not only that, but the error does not occur immediately after installing a language; sometimes I can start debugging my program as usual and then, after three to five times or so, the error occurs. Is there any solution other than restoring Windows?

    Read the article

  • .NET Framework 4.0 Targeting Pack does not show in Visual Studio

    - by balexandre
    How can I install the .NET 4.0 Framework on Windows 8 Pro / Visual Studio 2012 Professional? I get this: and if I follow the Install other frameworks... link I get to a Microsoft page where I find this information: I have then installed the .NET Framework 4.0.1 Targeting Pack and the .NET Framework 4.0.2 Targeting Pack (as I can't install 4.0.3) and restarted the machine over and over, but Visual Studio still does not show the framework in the dropdown menu. What am I doing wrong? Here is what regedit says I have installed on my machine:

    Read the article

  • Configuring Corporate Windows Error Reporting On Windows 7

    - by Clément
    Is there any good documentation out there explaining how to set up Corporate Error Reporting (CER) on Windows 7? I found some information in Advanced Windows Debugging, but the book targets Windows XP and things have changed quite a bit since then. I could not find any tutorials on the Internet/MSDN either. To give a bit of background information, I work for a company with 25 employees and I would like to send crash reports to a local server so that I can analyze what causes our tools to crash. I think I need to know two things: how to set up a Corporate Error Reporting server, and how to set up the client computers to send error reports to it.

    Read the article

  • Operating system not found after downloading skin pack

    - by 8BitSensei
    My brother has downloaded an executable file from Skin Pack — I believe it was the Mountain Lion IO6 skin pack. He's using an Acer Aspire 553G, and now his operating system (Windows 7) won't start. It gets through the BIOS and then goes to load the OS, but the screen goes blank and it just goes back to the BIOS over and over again. He decided to play with the boot settings, tried different options, and got the error message "Operating System not found." Does anyone have any idea how to solve this?

    Read the article

  • What .NET reporting tools are best for dynamic report generation?

    - by bvanderw
    Perhaps I need to define "dynamic generation". By this I mean using graphics primitives to draw on the page (such as DrawText or DrawLine, etc.). This is what System.Drawing.Printing provides. I often need to create forms and reports for Windows applications that either require dynamic generation or need control over formatting that goes beyond the capabilities of most report designers. Essentially, I need the ability to create my own pages using graphics primitives, like you can with System.Drawing.Printing, as part of a package that also provides a report designer, exporting to PDF, etc. In my Delphi days, I used Rave Reports (along with the exporting add-ons from Gnostice) because it was the only Delphi reporting tool that gave you that kind of fine control. I've been struggling with the reporting tools provided by Developer Express and I have given up trying to make them do what I need to do. I downloaded a trial of ActiveReports and was able to completely create one of my dynamic reports (using their Page class) in a few hours one afternoon. It's likely I will buy their product, but it's a bit frustrating to have to do so after investing in the Developer Express tools. Before I do so, are there any other products that offer this functionality that I should investigate? As far as I can tell, Crystal Reports does not - is this correct? Thanks.... --Bruce

    Read the article

  • Using Table-Valued Parameters With SQL Server Reporting Services

    - by Jesse
    In my last post I talked about using table-valued parameters to pass a list of integer values to a stored procedure without resorting to using comma-delimited strings and parsing out each value into a TABLE variable. In this post I’ll extend the “Customer Transaction Summary” report example to see how we might leverage this same stored procedure from within an SQL Server Reporting Services (SSRS) report. I’ve worked with SSRS off and on for the past several years and have generally found it to be a very useful tool for building nice-looking reports for end users quickly and easily. That said, I’ve been frustrated by SSRS from time to time when seemingly simple things are difficult to accomplish or simply not supported at all. I thought that using table-valued parameters from within a SSRS report would be simple, but unfortunately I was wrong. Customer Transaction Summary Example Let’s take the “Customer Transaction Summary” report example from the last post and try to plug that same stored procedure into an SSRS report. Our report will have three parameters: Start Date – beginning of the date range for which the report will summarize customer transactions End Date – end of the date range for which the report will summarize customer transactions Customer Ids – One or more customer Ids representing the customers that will be included in the report The simplest way to get started with this report will be to create a new dataset and point it at our Customer Transaction Summary report stored procedure (note that I’m using SSRS 2012 in the screenshots below, but there should be little to no difference with SSRS 2008): When you initially create this dataset the SSRS designer will try to invoke the stored procedure to determine what the parameters and output fields are for you automatically. As part of this process the following dialog pops-up: Obviously I can’t use this dialog to specify a value for the ‘@customerIds’ parameter since it is of the IntegerListTableType user-defined type that we created in the last post. Unfortunately this really throws the SSRS designer for a loop, and regardless of what combination of Data Type, Pass Null Value, or Parameter Value I used here, I kept getting this error dialog with the message, "Operand type clash: nvarchar is incompatible with IntegerListTableType". This error message makes some sense considering that the nvarchar type is indeed incompatible with the IntegerListTableType, but there’s little clue given as to how to remedy the situation. I don’t know for sure, but I think that behind-the-scenes the SSRS designer is trying to give the @customerIds parameter an nvarchar-typed SqlParameter which is causing the issue. When I first saw this error I figured that this might just be a limitation of the dataset designer and that I’d be able to work around the issue by manually defining the parameters. I know that there are some special steps that need to be taken when invoking a stored procedure with a table-valued parameter from ADO .NET, so I figured that I might be able to use some custom code embedded in the report  to create a SqlParameter instance with the needed properties and value to make this work, but the “Operand type clash" error message persisted. The Text Query Approach Just because we’re using a stored procedure to create the dataset for this report doesn’t mean that we can’t use the ‘Text’ Query Type option and construct an EXEC statement that will invoke the stored procedure. 
In order for this to work properly the EXEC statement will also need to declare and populate an IntegerListTableType variable to pass into the stored procedure. Before I go any further I want to make one point clear: this is a really ugly hack and it makes me cringe to do it. Simply put, I strongly feel that it should not be this difficult to use a table-valued parameter with SSRS. With that said, let’s take a look at what we’ll have to do to make this work. Manually Define Parameters First, we’ll need to manually define the parameters for report by right-clicking on the ‘Parameters’ folder in the ‘Report Data’ window. We’ll need to define the ‘@startDate’ and ‘@endDate’ as simple date parameters. We’ll also create a parameter called ‘@customerIds’ that will be a mutli-valued Integer parameter: In the ‘Available Values’ tab we’ll point this parameter at a simple dataset that just returns the CustomerId and CustomerName of each row in the Customers table of the database or manually define a handful of Customer Id values to make available when the report runs. Once we have these parameters properly defined we can take another crack at creating the dataset that will invoke the ‘rpt_CustomerTransactionSummary’ stored procedure. This time we’ll choose the ‘Text’ query type option and put the following into the ‘Query’ text area: 1: exec('declare @customerIdList IntegerListTableType ' + @customerIdInserts + 2: ' EXEC rpt_CustomerTransactionSummary 3: @startDate=''' + @startDate + ''', 4: @endDate='''+ @endDate + ''', 5: @customerIds=@customerIdList')   By using the ‘Text’ query type we can enter any arbitrary SQL that we we want to and then use parameters and string concatenation to inject pieces of that query at run time. It can be a bit tricky to parse this out at first glance, but from the SSRS designer’s point of view this query defines three parameters: @customerIdInserts – This will be a Text parameter that we use to define INSERT statements that will populate the @customerIdList variable that is being declared in the SQL. This parameter won’t actually ever get passed into the stored procedure. I’ll go into how this will work in a bit. @startDate – This is a simple date parameter that will get passed through directly into the @startDate parameter of the stored procedure on line 3. @endDate – This is another simple data parameter that will get passed through into the @endDate parameter of the stored procedure on line 4. At this point the dataset designer will be able to correctly parse the query and should even be able to detect the fields that the stored procedure will return without needing to specify any values for query when prompted to. Once the dataset has been correctly defined we’ll have a @customerIdInserts parameter listed in the ‘Parameters’ tab of the dataset designer. We need to define an expression for this parameter that will take the values selected by the user for the ‘@customerIds’ parameter that we defined earlier and convert them into INSERT statements that will populate the @customerIdList variable that we defined in our Text query. In order to do this we’ll need to add some custom code to our report using the ‘Report Properties’ dialog: Any custom code defined in the Report Properties dialog gets embedded into the .rdl of the report itself and (unfortunately) must be written in VB .NET. 
Note that you can also add references to custom .NET assemblies (which could be written in any language), but that’s outside the scope of this post so we’ll stick with the “quick and dirty” VB .NET approach for now. Here’s the VB .NET code (note that any embedded code that you add here must be defined in a static/shared function, though you can define as many functions as you want): 1: Public Shared Function BuildIntegerListInserts(ByVal variableName As String, ByVal paramValues As Object()) As String 2: Dim insertStatements As New System.Text.StringBuilder() 3: For Each paramValue As Object In paramValues 4: insertStatements.AppendLine(String.Format("INSERT {0} VALUES ({1})", variableName, paramValue)) 5: Next 6: Return insertStatements.ToString() 7: End Function   This method takes a variable name and an array of objects. We use an array of objects here because that is how SSRS will pass us the values that were selected by the user at run-time. The method uses a StringBuilder to construct INSERT statements that will insert each value from the object array into the provided variable name. Once this method has been defined in the custom code for the report we can go back into the dataset designer’s Parameters tab and update the expression for the ‘@customerIdInserts’ parameter by clicking on the button with the “function” symbol that appears to the right of the parameter value. We’ll set the expression to: 1: =Code.BuildIntegerListInserts("@customerIdList ", Parameters!customerIds.Value)   In order to invoke our custom code method we simply need to invoke “Code.<method name>” and pass in any needed parameters. The first parameter needs to match the name of the IntegerListTableType variable that we used in the EXEC statement of our query. The second parameter will come from the Value property of the ‘@customerIds’ parameter (this evaluates to an object array at run time). Finally, we’ll need to edit the properties of the ‘@customerIdInserts’ parameter on the report to mark it as a nullable internal parameter so that users aren’t prompted to provide a value for it when running the report. Limitations And Final Thoughts When I first started looking into the text query approach described above I wondered if there might be an upper limit to the size of the string that can be used to run a report. Obviously, the size of the actual query could increase pretty dramatically if you have a parameter that has a lot of potential values or you need to support several different table-valued parameters in the same query. I tested the example Customer Transaction Summary report with 1000 selected customers without any issue, but your mileage may vary depending on how much data you might need to pass into your query. If you think that the text query hack is a lot of work just to use a table-valued parameter, I agree! I think that it should be a lot easier than this to use a table-valued parameter from within SSRS, but so far I haven’t found a better way. It might be possible to create some custom .NET code that could build the EXEC statement for a given set of parameters automatically, but exploring that will have to wait for another post. For now, unless there’s a really compelling reason or requirement to use table-valued parameters from SSRS reports I would probably stick with the tried and true “join-multi-valued-parameter-to-CSV-and-split-in-the-query” approach for using mutli-valued parameters in a stored procedure.
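
    For readers who haven't seen the earlier post, the user-defined table type and the stored procedure signature it assumes might look roughly like the following (a sketch with invented table and column names, not the author's original definitions); the key detail is that the table-valued parameter must be declared READONLY:

        -- Hypothetical definition of the integer-list table type.
        CREATE TYPE dbo.IntegerListTableType AS TABLE (Value int NOT NULL);
        GO

        -- Sketch of the report procedure; the TVP carries the selected customer ids.
        CREATE PROCEDURE dbo.rpt_CustomerTransactionSummary
            @startDate   datetime,
            @endDate     datetime,
            @customerIds dbo.IntegerListTableType READONLY
        AS
        BEGIN
            SELECT t.CustomerId,
                   COUNT(*)      AS TransactionCount,
                   SUM(t.Amount) AS TotalAmount
            FROM dbo.CustomerTransactions AS t
            JOIN @customerIds AS ids ON ids.Value = t.CustomerId
            WHERE t.TransactionDate >= @startDate
              AND t.TransactionDate <  @endDate
            GROUP BY t.CustomerId;
        END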

    Read the article

  • Calling SSIS from BizTalk Orchestration

    - by aceinthehole
    I have a scenario where I need to move a vast amount of data, and I need to use BizTalk to control the flow and contain the business logic. The problem is that BizTalk will not be able to handle the amount of data that needs to be moved. We have decided to use a BizTalk Orchestration to kick off an SSIS package that does the actual heavy lifting. However, there is a caveat in that we have to be able to pass information into SSIS, such as the file location and info about how to split certain data up. My question is: what is the best way to call into SSIS from an Orchestration given those parameters? Should I build a web service around it? Is there an adapter or stored procedure that I can call? Or is there a way to call it directly from the Orchestration?
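
    One hedged option, assuming the package is deployed to the SSIS catalog (SQL Server 2012 or later), is to have the orchestration call the catalog's stored procedures (for example via the WCF-SQL adapter) and pass the file location in as a package parameter. Folder, project, package and parameter names below are placeholders:

        DECLARE @execution_id bigint;

        -- Create an execution for the deployed package (names are illustrative).
        EXEC SSISDB.catalog.create_execution
             @folder_name     = N'ETL',
             @project_name    = N'BulkLoad',
             @package_name    = N'LoadLargeFile.dtsx',
             @use32bitruntime = 0,
             @execution_id    = @execution_id OUTPUT;

        -- Pass the file location as a package parameter (object_type 30 = package parameter).
        EXEC SSISDB.catalog.set_execution_parameter_value
             @execution_id    = @execution_id,
             @object_type     = 30,
             @parameter_name  = N'SourceFilePath',
             @parameter_value = N'\\fileserver\drop\large_extract.csv';

        -- Start the package; by default this returns immediately rather than blocking BizTalk.
        EXEC SSISDB.catalog.start_execution @execution_id;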

    Read the article

  • SQL SERVER – List of Article on Expressor Data Integration Platform

    - by pinaldave
    The ability to transform data into meaningful and actionable information is one of the most important capabilities in today's business world. With fast-growing and changing business needs, effective data integration is the single most important factor in proper decision making. I have been following expressor software since November 2010, when I met the expressor team in Seattle. Here are my posts on their innovative data integration platform and expressor Studio, a free desktop ETL tool: 4 Tips for ETL Software IDE Developers; Introduction to Adaptive ETL Tool – How adaptive is your ETL?; Sharing your ETL Resources Across Applications with Ease; expressor Studio Includes Powerful Scripting Capabilities; expressor 3.2 Release Review; 5 Tips for Improving Your Data with expressor Studio. As I have mentioned in some of my blog posts on them, I encourage you to download and test-drive their Studio product – it’s free. Reference: Pinal Dave (http://blog.SQLAuthority.com) Filed under: PostADay, SQL, SQL Authority, SQL Documentation, SQL Download, SQL Query, SQL Server, SQL Tips and Tricks, T SQL, Technology Tagged: SSIS

    Read the article

  • It's Official, I'm a Geek

    - by andyleonard
    I'm honored to join Glen Gordon ( Blog - @glengordon ) and G. Andrew Duthie ( Blog - @devhammer ) today at 3:00 PM EDT for an MSDN Webcast entitled GeekSpeak: Inside SQL Server Integration Services (SSIS). This is a LiveMeeting and you can join in the fun as an attendee here. It's a live show, so bring your questions! :{> Andy ...(read more)

    Read the article

  • Importing data from text file to specific columns using BULK INSERT

    - by Dinesh Asanka
    Bulk insert is much faster than using other techniques such as SSIS. However, when you are using bulk insert you can’t insert into specific columns only. If, for example, there are five columns in a table, you should have five values for each record in the text file you are importing from. This is an issue when you are expecting default values to be inserted into tables. Let us say you have a table as below: In this table, you are expecting ID, Status and CreatedDate to be updated automatically, so your text file may only have FirstName and LastName values, as below: Dinesh,Asanka Saman,Liyanage Ruwan,Silva Susantha,Bathige Jude,Peires Sanjeewa,Jayawickrama If you use bulk insert against this table as follows, you will get an error: Bulk load data conversion error (type mismatch or invalid character for the specified codepage) for row 1, column 1 (ID). To avoid this you will need to create a view with the columns you are expecting to fill and use bulk insert against it. If you check the table now, you will see the values from the text file together with the default values.
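
    A minimal sketch of the workaround the post describes, with illustrative table, view and file names (the post's actual table only appears in a screenshot):

        -- View exposing only the columns supplied by the text file;
        -- ID, Status and CreatedDate keep their defaults when rows arrive through it.
        CREATE VIEW dbo.vw_PersonImport
        AS
            SELECT FirstName, LastName
            FROM dbo.Person;
        GO

        -- Bulk load the comma-delimited file through the view instead of the table.
        BULK INSERT dbo.vw_PersonImport
        FROM 'C:\Import\people.txt'
        WITH (FIELDTERMINATOR = ',', ROWTERMINATOR = '\n');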

    Read the article

  • DTLoggedExec 1.0.0.2 Released

    - by Davide Mauri
    These last days have been full of work, and the next days, up until the end of July, will follow the same ultra-busy scheme. This makes the improvement of DTLoggedExec a little slower than I would like, but nonetheless on Friday I was able to release an updated version of the tool that fixes a bug and adds a very convenient option to make the creation of execution logs even more straightforward: [bugfix] Fixed a bug that prevented loading packages from the SSIS Package Store [new] Added support for the {filename} placeholder in both Data Flow Profiling and the CSV Log Provider The added feature allows generating Data Flow profile logs and CSV logs that have the same name as the package that generated them, e.g.: DTLoggedExec.exe /FILE:"MyPackage.dtsx" /LPA:"FILE=C:\Log\{filename}_{date}_{time}.dtsCSVLog"

    Read the article

  • Speaking at Triangle SQL Server User Group 16 Mar 2010!

    - by andyleonard
    I'm excited to present Applied SSIS Design Patterns to the Triangle SQL Server User Group 16 Mar 2010! This is a reprise of my PASS Summit 2009 spotlight session. If you read this blog and make the meeting, introduce yourself! :{> Andy ...(read more)

    Read the article

  • How do I convert this Crystal Report IF statement for use in a WHERE clause in Reporting Services?

    - by Spacehamster
    I'm trying to translate this Crystal Reports IF Statement for use in a WHERE clause - {@receipt_datetime_daylight} in {?DateRange} and (if {?Call Sign} = "All Call Signs" Then {cacs_incident_task.resource_or_class_id} = {cacs_incident_task.resource_or_class_id} Else If {?Call Sign} = "All Sierra Call Signs" Then {cacs_incident_task.resource_or_class_id} in ["S10", "S11", "S12"] Else If {?Call Sign} = "All Whiskey Call Signs" Then {cacs_incident_task.resource_or_class_id} in ["W01", "W02", "W03"] Else {cacs_incident_task.resource_or_class_id} = {?Call Sign}) and (if {?OffenceType} = "All Offences" Then {cacs_inc_type.description} = {cacs_inc_type.description} else {cacs_inc_type.description} = {?OffenceType}) CASE statements don't work in Reporting Services, so I need to find a way of translating this into a WHERE clause. Does anyone know a way?
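
    For what it's worth, a sketch of that logic as a plain WHERE clause, using the column names from the question; the parameter names and the date-range mapping (a start/end pair standing in for {?DateRange}) are assumptions:

        -- Drop-in record selection; @StartDate/@EndDate, @CallSign and @OffenceType
        -- are the report parameters (names are illustrative).
        WHERE receipt_datetime_daylight BETWEEN @StartDate AND @EndDate
          AND (    @CallSign = 'All Call Signs'
                OR (@CallSign = 'All Sierra Call Signs'  AND cacs_incident_task.resource_or_class_id IN ('S10', 'S11', 'S12'))
                OR (@CallSign = 'All Whiskey Call Signs' AND cacs_incident_task.resource_or_class_id IN ('W01', 'W02', 'W03'))
                OR (@CallSign NOT IN ('All Call Signs', 'All Sierra Call Signs', 'All Whiskey Call Signs')
                    AND cacs_incident_task.resource_or_class_id = @CallSign) )
          AND ( @OffenceType = 'All Offences' OR cacs_inc_type.description = @OffenceType )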

    Read the article

  • If Html File Has No Ending "/tr" Tag OR "/td" Tag Then HTML Agility Pack Does Not Read That Informat

    - by Harikrishna
    I am using HTML Agility Pack to parse html content. I am using parsing to extract table information. It works. But if there is no ending "/tr" tag or "/td" tag then it does not parse that information perfectly.(in which there is no ending tr tag or td tag.) Like <TABLE border=0><TBODY><TR height=20><TD class=xl27boL noWrap width="7%">01890345&nbsp;</TD> <TD class=xl27boL noWrap width="4%">1416</TD> <TD class=xl27boL noWrap width="7%">kjlkjkls&nbsp;</TD><TD class=xl27boL noWrap width="4%">14:01:57&nbsp;</TD> <TD class=xl27boL noWrap align=left width="15%">Football</TD><TD class=xl27boL noWrap align=right width="5%">&nbsp;</TD> <TD class=xl27boL noWrap align=right width="5%">50&nbsp;</TD> <TD class=xl27boL noWrap align=right width="5%">4997.2500</TD><TD class=xl27boL noWrap align=right width="7%">249862.50&nbsp;</TD><TD class=xl27boL noWrap align=right width="5%">&nbsp;</TD><TD class=xl27boL noWrap align=right width="5%">&nbsp;</TD><TD class=xl27boRLnoWrap align=right width="8%">249612.64&nbsp;</TD><TD class=xl27boL noWrap align=right width="5%">4997.2500</TD><TD class=xl27boL noWrap align=right width="7%">249862.50&nbsp;</TD><TD class=xl27boL noWrap align=right width="5%">249.86</TD><TD class=xl27boL noWrap align=right width="5%">4992.2528</TD><TD class=xl27boL noWrap align=right width="5%">&nbsp;</TD><TD class=xl27boL noWrap align=right width="5%">&nbsp;</TD> <TD class=xl27boRL noWrap align=right width="8%">249612.64&nbsp;</TD> </table> So for that what should I do ?

    Read the article

  • Executing Stored Procedure for each InputRow + SSIS Script Component.

    - by Nev_Rahd
    Hello, In my Script Component, am trying to execute Stored Procedure = which return multiple rows = of which need to generate output rows. Code as below: /* Microsoft SQL Server Integration Services Script Component * Write scripts using Microsoft Visual C# 2008. * ScriptMain is the entry point class of the script.*/ using System; using System.Data; using System.Data.SqlClient; using Microsoft.SqlServer.Dts.Pipeline.Wrapper; using Microsoft.SqlServer.Dts.Runtime.Wrapper; [Microsoft.SqlServer.Dts.Pipeline.SSISScriptComponentEntryPointAttribute] public class ScriptMain : UserComponent { SqlConnection cnn = new SqlConnection(); IDTSConnectionManager100 cnManager; //string cmd; SqlCommand cmd = new SqlCommand(); public override void AcquireConnections(object Transaction) { cnManager = base.Connections.myConnection; cnn = (SqlConnection)cnManager.AcquireConnection(null); } public override void PreExecute() { base.PreExecute(); } public override void PostExecute() { base.PostExecute(); } public override void InputRows_ProcessInputRow(InputRowsBuffer Row) { while(Row.NextRow()) { DataTable dt = new DataTable(); cmd.Connection = cnn; cmd.CommandText = "OSPATTRIBUTE_GetOPNforOP"; cmd.CommandType = CommandType.StoredProcedure; cmd.Parameters.Add("@NK", SqlDbType.VarChar).Value = Row.OPNK.ToString(); cmd.Parameters.Add("@EDWSTARTDATE", SqlDbType.DateTime).Value = Row.EDWEFFECTIVESTARTDATETIME; SqlDataAdapter adapter = new SqlDataAdapter(cmd); adapter.Fill(dt); foreach (DataRow dtrow in dt.Rows) { OutputValidBuffer.AddRow(); OutputValidBuffer.OPNK = Row.OPNK; OutputValidBuffer.OSPTYPECODE = Row.OSPTYPECODE; OutputValidBuffer.ORGPROVTYPEDESC = Row.ORGPROVTYPEDESC; OutputValidBuffer.HEALTHSECTORCODE = Row.HEALTHSECTORCODE; OutputValidBuffer.HEALTHSECTORDESCRIPTION = Row.HEALTHSECTORDESCRIPTION; OutputValidBuffer.EDWEFFECTIVESTARTDATETIME = Row.EDWEFFECTIVESTARTDATETIME; OutputValidBuffer.EDWEFFECTIVEENDDATETIME = Row.EDWEFFECTIVEENDDATETIME; OutputValidBuffer.OPQI = Row.OPQI; OutputValidBuffer.OPNNK = dtrow[0].ToString(); OutputValidBuffer.OSPNAMETYPECODE = dtrow[1].ToString(); OutputValidBuffer.NAMETYPEDESC = dtrow[2].ToString(); OutputValidBuffer.OSPNAME = dtrow[3].ToString(); OutputValidBuffer.EDWEFFECTIVESTARTDATETIME1 = Row.EDWEFFECTIVESTARTDATETIME; OutputValidBuffer.EDWEFFECTIVEENDDATETIME1 = Row.EDWEFFECTIVEENDDATETIME; OutputValidBuffer.OPNQI = dtrow[6].ToString(); } } } public override void ReleaseConnections() { cnManager.ReleaseConnection(cnn); } } This is always skipping the first row. while(Row.NextRow()) is always bringing the second row of the input buffer. What am I doing wrong. Thanks

    Read the article

  • How to cope with null results in SQL Tasks that return single rows in SSIS 2005?

    - by JSacksteder
    In a data flow task, I can slip a row count into the processing flow and place the count into a variable. I can later use that variable to conditionally perform some other work if the row count was 0. This works well for me, but I have no corresponding strategy for SQL tasks expected to return a single row. In that event, I'm returning those values into variables. If the lookup produces no rows, the SQL task fails when assigning values into those variables. I can branch on that component failing, but there's a side effect: if I'm running the job as a SQL Server Agent job step, the step returns DTSER_FAILURE, causing the step to fail. I can tell the SQL Agent to disregard the step failure, but then I won't know if I have a legitimate error in that step. This seems harder than it should be. The only strategy I can think of is to run the same query with a count(*) aggregate, test whether that returns a number > 0, and if so run the query again without the count. That's ugly because I have the same query in two places that I need to keep in sync. Is there a better way?
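
    One common workaround (a sketch against an invented table; names are illustrative, not from the question) is to shape the lookup so it always returns exactly one row, using aggregates plus a found-flag, so the Execute SQL Task can always bind its single-row result set to variables and the package branches on the flag instead of on task failure:

        -- Always returns exactly one row, even when nothing matches.
        SELECT
            COUNT(*)           AS MatchCount,   -- 0 means "not found"; branch on this
            MAX(c.ConfigValue) AS ConfigValue,  -- NULL when no row matched
            MAX(c.LastUpdated) AS LastUpdated
        FROM dbo.JobConfig AS c
        WHERE c.JobName = ?;                    -- OLE DB parameter marker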

    Read the article

  • How to create dynamic number of output files with SSIS?

    - by JSacksteder
    I will be creating flat files, and based on the data in the batch it might be necessary to split the data into an undetermined number of files. I can make the connection string dynamic with an expression, but that is only evaluated when the package starts. I'd like to change that expression to include a '-a' or '-b' in the filename. Alternatively, if I have to create new connection manager objects on demand at run time, how do I go about that?

    Read the article
