Search Results

Search found 708 results on 29 pages for 'ole morten amundsen'.


  • Seeking reporting or templating tool to generate large formatted PDF reports from dataset

    - by Mr. Tacos
    Say I have some data in MySQL or a big ole CSV file. I also have a report. It's a PDF, call it 100 pages long. I need to generate variations on this PDF for slices of the data.

    A more specific example: I have a CSV file with each StackOverflow user in a row, and each column contains various statistics about that user. I have a report called "Your StackOverflow Performance". It's got lots of text, always the same, but each section contains something like "You vs. the Average StackOverflow Poster on this metric". I want a table to appear there that has the average data, which is the same in every run of the PDF, in one column. In the second column, I want your data, which is different for each PDF/row in the CSV file/user of StackOverflow.

    I'm pretty sure people use things like Crystal for this? Is there something in MS SQL Server that's good for this? An open source template language? I'm not even really sure whether what I need is called a 'reporting' tool (since I don't really need to do any crunching - the data in this case is being crunched by a series of scripts and SPSS, and I don't need bands and sub-bands and so on) or a 'templating' tool. Is there even such a thing as templating PDFs? Natch, I'd be fine with something that generates output easily scriptable to PDF, like EPS, but not something like HTML. The report formatting is fussy, done, externally determined, and handed down from on high. It's print-oriented, not webby.

    Thanks in advance.

    Read the article

  • Can I create template-based library objects in Dreamweaver CS5?

    - by Danjah
    At work we need two 'streams' of template. The first is general layout templates, like the ones already available in the MX through CS5 packages (except we'd have our own customised ones). The second is more granular objects, some of which are functional. In both cases, I don't want Jimmy to be able to wreak havoc inside anything other than the 'editable regions' which make up the templates.

    Now this is fine if I stick with the first scenario (layout templates), where there's simply a big chunk of editable region for good ole Jim to sprawl into - think of this as the 'body content' area. But I really do need these granular library (or snippet) objects to work in the same way. Unfortunately, with my attempts so far they don't work as I'd have thought - perhaps for good reason?

    When I create a blank template and throw in my chunk of HTML (unobtrusive JS and external CSS use selectors in this HTML to provide style and function) and save it as a new library item or snippet, all looks well. Then I create a new document based on a layout template and save it as a plain HTML file (still all good so far). Next I drop in my custom library item... still all good... but then I go to save the document and it only allows me to save it as a new template! I expected it would just allow me to save it as HTML and have it simply respect the defined editable regions, as happens in the containing page's 'body content' editable region.

    Apologies if that got specific and technical quite quickly, but it is quite particular. If you want some example files lemme know and I'll zip some up. Many thanks :)

    p.s. It is not a requirement that library objects must somehow inject their dependency files into the newly created page - I already know what they'll be. Also, I know I must 'detach from original' once I drop a library item into a document, which then allows customisation of the library object.

    Read the article

  • Database not updating after UPDATE SQL statement in ASP.net

    - by Ronnie
    I currently have a problem attempting to update a record within my database. I have a webpage that displays a user's details in text boxes; these details are taken from the session upon login. The aim is to update the details when the user overwrites the current text in the text boxes. I have a function that runs when the user clicks the 'Save Details' button, and it appears to work, as I have tested the number of rows affected and it outputs 1. However, when checking the database, the record has not been updated and I am unsure as to why. I've checked the SQL statement that is being processed by displaying it as a label, and it looks like so:

        UPDATE [users] SET [email]=@email, [firstname]=@firstname, [lastname]=@lastname, [promo]=@promo WHERE ([users].[user_id] = 16)

    The function and other relevant code is:

        Sub Button1_Click(sender As Object, e As EventArgs)
            changeDetails(emailBox.text, firstBox.text, lastBox.text, promoBox.text)
        End Sub

        Function changeDetails(ByVal email As String, ByVal firstname As String, ByVal lastname As String, ByVal promo As String) As Integer
            Dim connectionString As String = "Provider=Microsoft.Jet.OLEDB.4.0; Ole DB Services=-4; Data Source=C:\Documents an" & _
                "d Settings\Paul Jarratt\My Documents\ticketoffice\datab\ticketoffice.mdb"
            Dim dbConnection As System.Data.IDbConnection = New System.Data.OleDb.OleDbConnection(connectionString)
            Dim queryString As String = "UPDATE [users] SET [email]=@email, [firstname]=@firstname, [lastname]=@lastname, " & _
                "[promo]=@promo WHERE ([users].[user_id] = " + session.contents.item("ID") + ")"
            Dim dbCommand As System.Data.IDbCommand = New System.Data.OleDb.OleDbCommand
            dbCommand.CommandText = queryString
            dbCommand.Connection = dbConnection

            Dim dbParam_email As System.Data.IDataParameter = New System.Data.OleDb.OleDbParameter
            dbParam_email.ParameterName = "@email"
            dbParam_email.Value = email
            dbParam_email.DbType = System.Data.DbType.[String]
            dbCommand.Parameters.Add(dbParam_email)

            Dim dbParam_firstname As System.Data.IDataParameter = New System.Data.OleDb.OleDbParameter
            dbParam_firstname.ParameterName = "@firstname"
            dbParam_firstname.Value = firstname
            dbParam_firstname.DbType = System.Data.DbType.[String]
            dbCommand.Parameters.Add(dbParam_firstname)

            Dim dbParam_lastname As System.Data.IDataParameter = New System.Data.OleDb.OleDbParameter
            dbParam_lastname.ParameterName = "@lastname"
            dbParam_lastname.Value = lastname
            dbParam_lastname.DbType = System.Data.DbType.[String]
            dbCommand.Parameters.Add(dbParam_lastname)

            Dim dbParam_promo As System.Data.IDataParameter = New System.Data.OleDb.OleDbParameter
            dbParam_promo.ParameterName = "@promo"
            dbParam_promo.Value = promo
            dbParam_promo.DbType = System.Data.DbType.[String]
            dbCommand.Parameters.Add(dbParam_promo)

            Dim rowsAffected As Integer = 0
            dbConnection.Open()
            Try
                rowsAffected = dbCommand.ExecuteNonQuery()
            Finally
                dbConnection.Close()
            End Try

            labelTest.text = rowsAffected.ToString()
            If rowsAffected = 1 Then
                labelSuccess.text = "* Your details have been updated and saved"
            Else
                labelError.text = "* Your details could not be updated"
            End If
        End Function

    Any help would be greatly appreciated.
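    One detail worth knowing when reading the code above: the OLE DB .NET provider binds parameters by position, not by name, so the order of the Parameters.Add calls must match the order of the markers in the SQL, and it is safer to pass the user_id as a parameter too rather than concatenating it into the string. A minimal C# sketch of the same update under those assumptions (the method name is hypothetical, not from the question):

        using System.Data.OleDb;

        // Hedged sketch, not the poster's code: every value, including user_id,
        // is bound as a positional parameter. OLE DB ignores the parameter names
        // and matches them to the ? markers in order.
        static int ChangeDetails(string connectionString, int userId,
                                 string email, string firstName, string lastName, string promo)
        {
            const string sql =
                "UPDATE [users] SET [email]=?, [firstname]=?, [lastname]=?, [promo]=? " +
                "WHERE [user_id]=?";
            using (var connection = new OleDbConnection(connectionString))
            using (var command = new OleDbCommand(sql, connection))
            {
                command.Parameters.AddWithValue("@email", email);
                command.Parameters.AddWithValue("@firstname", firstName);
                command.Parameters.AddWithValue("@lastname", lastName);
                command.Parameters.AddWithValue("@promo", promo);
                command.Parameters.AddWithValue("@user_id", userId); // bound last to match the final ?
                connection.Open();
                return command.ExecuteNonQuery(); // rows affected
            }
        }

    If ExecuteNonQuery reports 1 row but the table still looks unchanged, it is also worth checking that the .mdb file being updated is the same file being inspected afterwards (for example, not a copy in the build output folder).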

    Read the article

  • SSIS: "Failure inserting into the read-only column <ColumnName>"

    - by Cory
    I have an Excel source going into an OLE DB destination. I'm inserting data into a view that has an INSTEAD OF trigger that handles all inserts. When I try to execute the package I receive this error:

        "Failure inserting into the read-only column ColumnName"

    What can I do to let SSIS know that this view is safe to insert into, because there is an INSTEAD OF trigger that will handle the insert?

    EDIT (additional info): I have a flat file that is being inserted into a normalized database. My initial problem was how to take a flat file and insert that data into multiple tables while keeping track of all the primary/foreign key relationships. My solution was to create a VIEW that mimicked the structure of the flat file and then create an INSTEAD OF trigger on that view. In my INSTEAD OF trigger I would handle the logic of maintaining all the relationships between tables.

    My view looks something like this:

        CREATE VIEW ImportView
        AS
        SELECT
            CONVERT(varchar(100), NULL) AS CustomerName,
            CONVERT(varchar(100), NULL) AS Address1,
            CONVERT(varchar(100), NULL) AS Address2,
            CONVERT(varchar(100), NULL) AS City,
            CONVERT(char(2), NULL) AS State,
            CONVERT(varchar(250), NULL) AS ItemOrdered,
            CONVERT(int, NULL) AS QuantityOrdered
        ...

    I will never need to select from this view; I only use it to insert data into it from this flat file I receive. I need some way to tell SQL Server that the fields aren't really read-only, because there is an INSTEAD OF trigger on this view.

    Read the article

  • Is this an OleDbDataAdapter bug?

    - by ????
    It doesn't look to me like OleDbDataAdapter should throw an exception when trying to fill a DataSet from a DB table column of type decimal(28,3), but it does. The message is "The numerical value is too large to fit into a 96 bit decimal". Could you check this? I have no significant experience with ADO.NET and OLE DB components. The VB.NET code we have in the application is this:

        Dim dbDataSet As New DataSet
        Dim dbDataAdapter As OleDbDataAdapter
        Dim dbCommand As OleDbCommand
        Dim conn As OleDbConnection
        Dim connectionString As String
        'parts where connectionString is set
        conn = New OleDbConnection(connectionString)
        'part where sqlQuery is set but it ends up being "SELECT Price As 'Price' From PricebookView" - Price is of type decimal(28,3)
        dbCommand = New OleDbCommand(sqlQuery, conn)
        dbCommand.CommandTimeout = cmdTimeout
        dbDataAdapter = New OleDbDataAdapter(dbCommand)
        dbDataAdapter.Fill(dbDataSet)

    The last line is where the exception is thrown, and the top of the stack trace is:

        at System.Data.ProviderBase.DbBuffer.ReadNumeric(Int32 offset)
        at System.Data.OleDb.ColumnBinding.Value_NUMERIC()
        at System.Data.OleDb.ColumnBinding.Value()
        at System.Data.OleDb.OleDbDataReader.GetValues(Object[] values)
        at System.Data.ProviderBase.DataReaderContainer.CommonLanguageSubsetDataReader.GetValues(Object[] values)
        at System.Data.ProviderBase.SchemaMapping.LoadDataRow()
        at System.Data.Common.DataAdapter.FillLoadDataRow(SchemaMapping mapping)
        at System.Data.Common.DataAdapter.FillFromReader(DataSet dataset, DataTable datatable, String srcTable, DataReaderContainer dataReader, Int32 startRecord, Int32 maxRecords, DataColumn parentChapterColumn, Object parentChapterValue)
        at System.Data.Common.DataAdapter.Fill(DataSet dataSet, String srcTable, IDataReader dataReader, Int32 startRecord, Int32 maxRecords)
        at System.Data.Common.DbDataAdapter.FillInternal(DataSet dataset, DataTable[] datatables, Int32 startRecord, Int32 maxRecords, String srcTable, IDbCommand command, CommandBehavior behavior)
        at System.Data.Common.DbDataAdapter.Fill(DataSet dataSet, Int32 startRecord, Int32 maxRecords, String srcTable, IDbCommand command, CommandBehavior behavior)
        at System.Data.Common.DbDataAdapter.Fill(DataSet dataSet)
        ...

    I am not sure why it tries to set the value to Int32. Thank you for your time!
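    For context on the message itself: .NET's System.Decimal stores its value in a 96-bit integer mantissa, which is the "96 bit decimal" the exception refers to; a value arriving through the OLE DB numeric binding with too much precision cannot be converted into it. One common workaround (an assumption here, not something confirmed in the thread) is to narrow the value server-side so the provider delivers something ADO.NET can hold:

        // Hedged C# sketch: CAST the column in the query to a lower precision
        // before the OLE DB column binding has to convert it to System.Decimal.
        string sqlQuery = "SELECT CAST(Price AS decimal(18,3)) AS Price FROM PricebookView";

        using (var conn = new OleDbConnection(connectionString))
        using (var cmd = new OleDbCommand(sqlQuery, conn))
        using (var adapter = new OleDbDataAdapter(cmd))
        {
            var dataSet = new DataSet();
            adapter.Fill(dataSet); // values within decimal(18,3) convert cleanly
        }

    The trade-off is that any value genuinely needing more than 18 digits of precision would fail the CAST instead, so this only helps if the real data fits the narrower type.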

    Read the article

  • System.Threading.ThreadStateException

    - by Yasindu
    Hi, I'm developing an add-in for the Office PowerPoint application. I'm trying to display a description of the object (a customized object) currently dropped on the PowerPoint slide in design mode (PowerPoint's design mode). When I click on my add-in, the related object description is displayed on a tabbed window as the first tab page. There is a button on the tab page, and when I click on it I need the description to get copied to the Windows clipboard. I tried this using the Clipboard class, and it throws the following System.Threading.ThreadStateException:

        "Current thread must be set to single thread apartment (STA) mode before OLE calls can be made. Ensure that your Main function has STAThreadAttribute marked on it."

    Code for the clipboard:

        Clipboard.Clear()
        Clipboard.SetText(lblObjectID.Text)

    I searched the net for a solution and got a couple of answers like:

    1. Put [STAThread] in the main function.
    2. Call Thread.CurrentThread.SetApartmentState(ApartmentState.STA) immediately before your call to SetDataObject.

    But I'm not sure where to put the first one, and the second option didn't work. Can anyone help me please? Thanks.
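    In an Office add-in you don't own the host's Main method, so the [STAThread] attribute isn't an option, and an already-started thread can't have its apartment changed. One pattern that generally works in this situation (a hedged C# sketch, not from the thread; the method name is illustrative) is to make the clipboard call from a short-lived thread that is explicitly put into STA mode before it starts:

        using System.Threading;
        using System.Windows.Forms;

        // Hedged sketch: run the clipboard call on a dedicated STA thread,
        // since the add-in host's thread may be running in MTA mode.
        static void CopyToClipboard(string text)
        {
            var staThread = new Thread(() => Clipboard.SetText(text));
            staThread.SetApartmentState(ApartmentState.STA); // must be set before Start
            staThread.Start();
            staThread.Join(); // block briefly so the caller knows the copy finished
        }

    The Join keeps the call synchronous from the button handler's point of view, which matches how the original Clipboard.SetText call was being used.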

    Read the article

  • How to get resultset with stored procedure calls over two linked servers?

    - by räph
    I have problems filling a temporary table with the result set from a procedure call on a linked server, in which again a procedure on another server is called.

    I have a stored procedure sproc1 with the following code, which calls another procedure sproc2 on a linked server:

        SET @sqlCommand = 'INSERT INTO #tblTemp ( ModuleID, ParamID) ' +
                          '( SELECT * FROM OPENQUERY(' + @targetServer + ', ' +
                          '''SET FMTONLY OFF; EXEC ' + @targetDB + '.usr.sproc2 ' + @param + ''' ) )'
        exec ( @sqlCommand )

    Now in the called sproc2 I again call a third procedure sproc3 on another linked server, which returns my result set:

        SET @sqlCommand = 'EXEC ' + @targetServer + '.database.usr.sproc3 ' + @param
        exec ( @sqlCommand )

    The whole thing doesn't work, as I get SQL error 7391:

        The operation could not be performed because OLE DB provider "%ls" for linked server "%ls" was unable to begin a distributed transaction.

    I already checked the hints in this Microsoft article, but without success. But maybe I can change the code in sproc1. Would there be some alternative to the temp table and the OPENQUERY? Just calling stored procedures from server A to B to C and returning the result set works (I do this often in the application). But this special case with the temp table and OPENQUERY doesn't work! Or is what I am trying to do just not possible? The Microsoft article states:

        Check the object you refer on the destination server. If it is a view or a stored procedure, or causes an execution of a trigger, check whether it implicitly references another server. If so, the third server is the source of the problem. Run the query directly on the third server. If you cannot run the query directly on the third server, the problem is not actually with the linked server query. Resolve the underlying problem first.

    Is this my case?

    PS: I can't avoid the architecture with the three servers.

    Read the article

  • Return if remote stored procedure fails

    - by njk
    I am in the process of creating a stored procedure. This stored procedure runs local as well as external stored procedures. For simplicity, I'll call the local server [LOCAL] and the remote server [REMOTE].

        USE [LOCAL]
        GO
        SET ANSI_NULLS ON
        GO
        SET QUOTED_IDENTIFIER ON
        GO
        ALTER PROCEDURE [dbo].[monthlyRollUp]
        AS
        SET NOCOUNT, XACT_ABORT ON
        BEGIN TRY
            EXEC [REMOTE].[DB].[table].[sp]
            --This transaction should only begin if the remote procedure does not fail
            BEGIN TRAN
                EXEC [LOCAL].[DB].[table].[sp1]
            COMMIT
            BEGIN TRAN
                EXEC [LOCAL].[DB].[table].[sp2]
            COMMIT
            BEGIN TRAN
                EXEC [LOCAL].[DB].[table].[sp3]
            COMMIT
            BEGIN TRAN
                EXEC [LOCAL].[DB].[table].[sp4]
            COMMIT
        END TRY
        BEGIN CATCH
            -- Insert error into log table
            INSERT INTO [dbo].[log_table] (stamp, errorNumber, errorSeverity, errorState, errorProcedure, errorLine, errorMessage)
            SELECT GETDATE(), ERROR_NUMBER(), ERROR_SEVERITY(), ERROR_STATE(), ERROR_PROCEDURE(), ERROR_LINE(), ERROR_MESSAGE()
        END CATCH
        GO

    When using a transaction on the remote procedure, it throws this error:

        OLE DB provider ... returned message "The partner transaction manager has disabled its support for remote/network transactions.".

    I get that I'm unable to run a transaction locally for a remote procedure. How can I ensure that this procedure will exit and roll back if any part of the procedure fails?

    Read the article

  • Working with Decimal fields in SSIS

    - by CoffeeAddict
    I'm using SQL Server 2008 w/SP2. I've got an incoming decimal(9,2) field coming through my OLE DB transformation to my recordset destination transformation. It's like it's reading it as something other than a decimal? I don't know... I'm not an SSIS guru.

    So continuing on... the problem starts with me trying to stuff the value of this decimal field into a variable. In a Foreach Loop, I have a variable to represent this decimal field so I can work with it. The first problem, which I believe is pretty well known, is that SSIS variables do not have a decimal type. From my own testing and what I've read out there, people are using the Object type for the variable to make SSIS "happy" with decimal values. It makes mine happy.

    But then in my Foreach Loop I have a For Loop, and inside that I'm using an Execute SQL Task. In it, I need to create a parameter mapping to my variable so I can work with that decimal field in my T-SQL call. So now I see a type decimal for the parameter, use it, and set it to point to my variable. When I run SSIS and it hits my SQL call, I get this in my output window:

        The type is not supported.DBTYPE_DECIMAL

    So I am hitting a wall here. All I wanna do is work with a decimal!!!
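    One route around DBTYPE_DECIMAL that sidesteps the Execute SQL Task parameter mapping entirely (a hedged sketch, with hypothetical variable names) is to do the conversion in a Script Task: read the Object-typed package variable and convert it to a decimal in .NET, where the conversion is trivial:

        // Hedged sketch for an SSIS 2008 Script Task. Assumes User::DecimalValue
        // is listed under the task's ReadOnlyVariables; Dts.Variables returns
        // values as object, so Convert.ToDecimal does the narrowing.
        public void Main()
        {
            decimal value = Convert.ToDecimal(Dts.Variables["User::DecimalValue"].Value);

            // ... work with the decimal here ...

            Dts.TaskResult = (int)ScriptResults.Success;
        }

    Once the value is a real decimal inside the script, it can be used to drive whatever logic the Execute SQL Task was going to perform, without ever mapping a DBTYPE_DECIMAL parameter.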

    Read the article

  • Send C++ Structure to MSMQ Message

    - by Gobalakrishnan
    Hi, I am trying to send the below structure through an MSMQ message:

        typedef struct
        {
            char cfiller[7];
            short MsgCode;
            char cfiller1[11];
            short MsgLength;
            char cfiller2[2];
        } MESSAGECODE;

        typedef struct
        {
            MESSAGECODE Header;
            char DealerId[16];
            char GroupId[16];
            long Token;
            short Periodicity;
            double Deposit;
            double GrossExposureLimit;
            double NetExposureLimit;
            double NetSaleExposureLimit;
            double NetPositionLimit;
            double TurnoverLimit;
            double PendingOrdersLimit;
            double MTMLossLimit;
            double MaxSingleTransValue;
            long MaxSingleTransQty;
            double IMLimit;
            long NetQuantityLimit;
        } LIMITUPDATE;

        void main()
        {
            // create queue
            // open queue
            // send message
            OleInitialize(NULL); // have to init OLE

            // declare some variables
            IMSMQQueueInfoPtr qinfo("MSMQ.MSMQQueueInfo");
            IMSMQQueuePtr qSend;
            IMSMQMessagePtr m("MSMQ.MSMQMessage");

            LIMITUPDATE l1;
            l1.Header.MsgCode = 26001;
            l1.Header.MsgLength = 150;

            qinfo->PathName = ".\\private$\\q99";
            m->Body = l1;
            qSend = qinfo->Open(MQ_SEND_ACCESS, MQ_DENY_NONE);
            m->Send(qSend);
            qSend->Close();
        }

    While compiling I am getting the following error:

        Error 2 error C2664: 'IMSMQMessage::PutBody' : cannot convert parameter 1 from 'LIMITUPDATE' to 'const _variant_t &' c:\temp\msmq\msmq.cpp 58 msmq

    Thank you.
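    The C2664 error arises because the message Body is a VARIANT property, and a raw C struct has no conversion to _variant_t; the body has to be sent as bytes (in C++, typically a SAFEARRAY of VT_UI1). The question is C++, but as a hedged illustration of the same "send the struct's raw bytes" idea, here is a C# sketch using System.Messaging; only the queue path is taken from the question, everything else is illustrative:

        using System;
        using System.IO;
        using System.Messaging;
        using System.Runtime.InteropServices;

        static class StructSender
        {
            // Copies a fixed-layout struct into a byte array and sends the raw
            // bytes as the message body, bypassing any message formatter.
            public static void SendStruct<T>(string queuePath, T value) where T : struct
            {
                int size = Marshal.SizeOf(typeof(T));
                var bytes = new byte[size];
                IntPtr buffer = Marshal.AllocHGlobal(size);
                try
                {
                    Marshal.StructureToPtr(value, buffer, false);
                    Marshal.Copy(buffer, bytes, 0, size);
                }
                finally
                {
                    Marshal.FreeHGlobal(buffer);
                }

                using (var queue = new MessageQueue(queuePath))
                using (var message = new Message())
                {
                    message.BodyStream = new MemoryStream(bytes); // raw bytes, no formatter
                    queue.Send(message);
                }
            }
        }

        // usage, mirroring the question's queue:
        // StructSender.SendStruct(@".\private$\q99", limitUpdate);

    The receiving side then reads the bytes back from BodyStream and overlays the same fixed layout, so struct packing must match on both ends.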

    Read the article

  • TransactionRequiredException on OptimisticLockException

    - by João Madureira Pires
    Hi there. I have the following class that generates sequential card numbers. I'm trying to recover from OptimisticLockException by calling the same method recursively; however, I'm getting TransactionRequiredException. Does anyone know how to recover from OptimisticLockException in my case? Thanks a lot in advance.

        @Name("simpleAutoIncrementGenerator")
        public class SimpleAutoIncrementGenerator extends CardNumberGenerator {

            private static final long serialVersionUID = 2869548248468809665L;
            private int numberOfRetries = 0;

            @Override
            public String generateNextNumber(CardInstance cardInstance, EntityManager entityManager) {
                try {
                    EntityCard card = (EntityCard) entityManager.find(EntityCard.class, cardInstance.getId());
                    if (card != null) {
                        String nextNumber = "";
                        String currentNumber = card.getCurrentCardNumber();
                        if (currentNumber != null && !currentNumber.isEmpty()) {
                            Long numberToInc = Long.parseLong(currentNumber);
                            numberToInc++;
                            nextNumber = String.valueOf(numberToInc);
                            card.setCurrentCardNumber(nextNumber);
                            // this is just to cause an OptimisticLockException
                            try {
                                Thread.sleep(4000);
                            } catch (InterruptedException e) {
                                // TODO Auto-generated catch block
                                e.printStackTrace();
                            }
                            entityManager.persist(card);
                            entityManager.flush();
                            return nextNumber;
                        }
                    }
                } catch (OptimisticLockException oLE) {
                    System.out.println("\n\n\n\n OptimisticLockException \n\n\n\n");
                    if (numberOfRetries < CentralizedConfig.CARD_NUMBER_GENERATOR_MAX_TRIES) {
                        numberOfRetries++;
                        return generateNextNumber(cardInstance, entityManager);
                    }
                } catch (TransactionRequiredException trE) {
                    System.out.println("\n\n\n\n TransactionRequiredException \n\n\n\n");
                    if (numberOfRetries < CentralizedConfig.CARD_NUMBER_GENERATOR_MAX_TRIES) {
                        numberOfRetries++;
                        return generateNextNumber(cardInstance, entityManager);
                    }
                } catch (StaleObjectStateException e) {
                    System.out.println("\n\n\n\n StaleObjectStateException \n\n\n\n");
                    if (numberOfRetries < CentralizedConfig.CARD_NUMBER_GENERATOR_MAX_TRIES) {
                        numberOfRetries++;
                        return generateNextNumber(cardInstance, entityManager);
                    }
                }
                return null;
            }
        }

    Read the article

  • Making it look like the computer is thinking in TicTacToe game

    - by rage
    Here is a slice of code that I've written in VB.NET:

        Private Sub L00_Click(ByVal sender As System.Object, ByVal e As System.EventArgs) Handles L00.Click, L01.Click, L02.Click, L03.Click, L10.Click, L11.Click, L12.Click, L13.Click, L20.Click, L21.Click, L22.Click, L23.Click, L30.Click, L31.Click, L32.Click, L33.Click
            Dim ticTac As Label = CType(sender, Label)
            Dim strRow As String
            Dim strCol As String
            'Once a move is made, do not allow user to change whether player/computer goes first, because it doesn't make sense to do so since the game has already started.
            ComputerFirstStripMenuItem.Enabled = False
            PlayerFirstToolStripMenuItem.Enabled = False
            'Check to make sure clicked tile is a valid tile i.e. an empty tile.
            If (ticTac.Text = String.Empty) Then
                ticTac.Text = "X"
                ticTac.ForeColor = ColorDialog1.Color
                ticTac.Tag = 1
                'After the player has made his move it becomes the computer's turn.
                computerTurn(sender, e)
            Else
                MessageBox.Show("Please pick an empty tile to make next move", "Invalid Move")
            End If
        End Sub

        Private Sub computerTurn(ByVal sender As System.Object, ByVal e As System.EventArgs)
            Call Randomize()
            row = Int(4 * Rnd())
            col = Int(4 * Rnd())
            'Check to make sure clicked tile is a valid tile i.e. an empty tile.
            If Not ticTacArray(row, col).Tag = 1 And Not ticTacArray(row, col).Tag = 4 Then
                ticTacArray(row, col).Text = "O"
                ticTacArray(row, col).ForeColor = ColorDialog2.Color
                ticTacArray(row, col).Tag = 4
                checkIfGameOver(sender, e)
            Else
                'Some good ole pseudo-recursion (doesn't require a base case(s)).
                computerTurn(sender, e)
            End If
        End Sub

    Everything works smoothly, except I'm trying to make it seem like the computer has to "think" before making its move. So what I've tried to do is place a System.Threading.Thread.Sleep() call in different places in the code above. The problem is that instead of making the computer look like it's thinking, the program waits and then puts the X and O up together at the same time. Can someone help me make it so that the program puts an X wherever I click AND THEN waits before it places an O?

    Edit: in case any of you are wondering, I realize that the computer's AI is ridiculously dumb, but it's just to mess around right now. Later on I will implement a serious AI... hopefully.
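    The reason the X and O appear together is that Thread.Sleep blocks the UI thread before the label has a chance to repaint, so nothing is drawn until the click handler returns. One hedged way around it (the question's code is VB.NET, but this sketch is C#; all names are hypothetical) is to place the X, return from the handler, and let a WinForms Timer fire the computer's move after a delay:

        using System;
        using System.Windows.Forms;

        // Hedged sketch: the Timer's Tick also runs on the UI thread, but only
        // after the click handler has returned, so the X is painted first.
        Timer thinkTimer = new Timer { Interval = 800 }; // "thinking" delay in ms
        // in the form's constructor: thinkTimer.Tick += ThinkTimerTick;

        void TileClick(object sender, EventArgs e)
        {
            var tile = (Label)sender;
            if (tile.Text != string.Empty) return; // ignore occupied tiles
            tile.Text = "X";
            thinkTimer.Start(); // computer moves when the timer ticks
        }

        void ThinkTimerTick(object sender, EventArgs e)
        {
            thinkTimer.Stop(); // one move per tick
            ComputerTurn();    // hypothetical: the existing computer-move logic
        }

    Calling the label's Refresh() before the Sleep would also force the repaint, but the Timer approach keeps the UI responsive while the computer "thinks".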

    Read the article

  • How do I tweak columns in a Flat File Destination in SSIS?

    - by theog
    I have an OLE DB Data Source and a Flat File Destination in the Data Flow of my SSIS project. The goal is simply to pump data into a text file, and it does that. Where I'm having problems is with the formatting. I need to be able to rtrim() a couple of columns to remove trailing spaces, and I have a couple more that need their leading zeros preserved. The current process is losing all the leading zeros.

    The rtrim() can be done by simple truncation and ignoring the truncation errors, but that's very inelegant and error-prone. I'd like to find a better way, like actually doing the rtrim() function where needed.

    Exploring similar SSIS questions and answers on SO, the thing to do seems to be "use a Script Task", but that's usually just thrown out there with no details, and it's not at all an intuitive thing to set up. I don't see how to use scripting to do what I need. Do I use a Script Task on the Control Flow, or a Script Component in the Data Flow? Can I do rtrim() and pad strings where needed in a script? Anybody got an example of doing this or similar things? Many thanks in advance.
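    Since this is per-row work, the fit is a Script Component (configured as a transformation) in the Data Flow rather than a Script Task on the Control Flow. A hedged sketch of what the script body could look like, assuming hypothetical column names that have been marked ReadWrite on the component's input:

        // Hedged sketch for an SSIS Script Component (transformation):
        // ProcessInputRow fires once per pipeline row. The Row properties are
        // generated from the input metadata, so the names below are illustrative.
        public override void Input0_ProcessInputRow(Input0Buffer Row)
        {
            if (!Row.CustomerName_IsNull)
                Row.CustomerName = Row.CustomerName.TrimEnd();     // the rtrim()

            if (!Row.AccountCode_IsNull)
                Row.AccountCode = Row.AccountCode.PadLeft(8, '0'); // restore leading zeros
        }

    The _IsNull guards avoid exceptions on NULL columns, and PadLeft assumes the columns losing their zeros are strings by the time they reach the script; if they were converted to numerics upstream, keeping them typed as strings from the source onwards is the other half of the fix.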

    Read the article

  • Is there a reason why SSIS significantly slows down after a few minutes?

    - by Mark
    I'm running a fairly substantial SSIS package against SQL 2008 - and I'm getting the same results both in my dev environment (Win7-x64 + SQL-x64-Developer) and the production environment (Server 2008 x64 + SQL Std x64).

    The symptom is that initial data loading screams along at between 50K - 500K records per second, but after a few minutes the speed drops off dramatically and eventually crawls embarrassingly slowly. The database is in the Simple recovery model, the target tables are empty, and all of the prerequisites for minimally logged bulk inserts are being met. The data flow is a simple load from a RAW input file to a schema-matched table (i.e. no complex transforms of data, no sorting, no lookups, no SCDs, etc.)

    The problem has the following qualities and resiliences:

    - Problem persists no matter what the target table is.
    - RAM usage is lowish (45%) - there's plenty of spare RAM available for SSIS buffers or SQL Server to use.
    - Perfmon shows buffers are not spooling, disk response times are normal, disk availability is high.
    - CPU usage is low (hovers around 25% shared between sqlserver.exe and DtsDebugHost.exe).
    - Disk activity is primarily on TempDB.mdf, but I/O is very low (< 600 Kb/s).
    - OLE DB destination and SQL Server Destination both exhibit this problem.

    To sum it up, I expect either disk, CPU or RAM to be exhausted before the package slows down, but instead it's as if the SSIS package is taking an afternoon nap. SQL Server remains responsive to other queries, and I can't find any performance counters or logged events that betray the cause of the problem. I'll gratefully reward any reasonable answers / suggestions.

    Read the article

  • window.open() on an iPad on load of a frame does not work

    - by user278859
    I am trying to modify a site that uses "Morten's JavaScript Tree Menu" to display PDFs in a frameset using the Adobe Reader plug-in. On the iPad the frame is useless, so I want to open the PDF in a new tab. Not wanting to mess with the tree menu, I thought I could use JavaScript in the web page being opened in the viewer frame to open a new tab with the PDF. I am using window.open() in $(document).ready(function() to open the PDF in the new tab.

    The problem is that window.open() does not want to work on the iPad. The body of the HTML normally looks like this:

        <body>
            <object data="MypdfFileName.pdf#toolbar=1&amp;navpanes=1&amp;scrollbar=0&amp;page=1&amp;view=FitH" type="application/pdf" width="100%" height="100%">
            </object>
        </body>

    I changed it to only have a div, like this:

        <body>
            <div class="myviewer"></div>
        </body>

    Then I used the following script:

        $(document).ready(function() {
            var isMobile = {
                Android: function() {
                    return navigator.userAgent.match(/Android/i) ? true : false;
                },
                BlackBerry: function() {
                    return navigator.userAgent.match(/BlackBerry/i) ? true : false;
                },
                iOS: function() {
                    return navigator.userAgent.match(/iPhone|iPad|iPod/i) ? true : false;
                },
                Windows: function() {
                    return navigator.userAgent.match(/IEMobile/i) ? true : false;
                },
                any: function() {
                    return (isMobile.Android() || isMobile.BlackBerry() || isMobile.iOS() || isMobile.Windows());
                }
            };

            if (isMobile.any()) {
                var file = "MypdfFileName.pdf";
                window.open(file);
            } else {
                var markup = "<object data='MypdfFileName.pdf#toolbar=1&amp;navpanes=1&amp;scrollbar=0&amp;page=1&amp;view=FitH' type='application/pdf' width='100%' height='100%'></object>";
                $('.myviewer').append(markup);
            }
        });

    Everything works except for window.open() on the iPad. If I switch things around, window.open() works fine on a computer. In another project I am using window.open() successfully on the iPad from an onclick function.

    I tried using a timer function. I also tried adding an onclick function to the div and posting a click event. In both cases they worked on a computer but not on an iPad. I am stumped.

    I know it would make more sense to handle the iPad in the tree menu frame, but that code is so complex I can't figure out where to put/modify the onclick event. Is there a way to change the object so that it opens in a new tab? Is anyone familiar enough with Morten's Tree Menu code to tell me how to change the onclick event so that it opens the PDF in a new tab instead of opening a page in the frame? Thanks.

    Read the article

  • Logical Domain Modeling Made Simple

    - by Knut Vatsendvik
    How can logical domain modeling be made simple and collaborative? Many non-technical end-users, managers and business domain experts find it difficult to understand the visual models offered by many UML tools. This creates trouble in capturing and verifying the information that goes into a logical domain model. The tools are also too advanced and complex for a non-technical user to learn and use. We have therefore, in our current project, ended up using Confluence as the tool for designing the logical domain model, with the help of a few very useful plugins. Big thanks to Ole Nymoen and Per Spilling for their expertise in this field that made this posting possible.

    Confluence Plugins

    Here is a list of the Confluence plugins used in this solution. Install these before trying out the macros used below.

    - Copy Space: allows a space administrator to copy a space, including the pages within the space
    - Metadata: supports adding metadata to Wiki pages
    - Label: manages labeling of pages
    - Linking: contains macros for linking to templates, the dashboard and other pages
    - Table: enhances the table capability in Confluence

    Creating a Confluence Space

    First we need to create a new Confluence space for the domain model. Click the link Create a Space located below the list of spaces on the Dashboard. Please contact your Confluence administrator if you do not have permissions to do this.

    For illustrative purposes, all attributes and entities in this posting are based on my imaginary project manager domain model. When a logical domain model is good enough to be implemented, make a copy of the Confluence space (see the Copy Space plugin). In this way you create a stable version of the logical domain model while further design can continue in the new, copied space. Typically the implementation phase will result in a database design and/or an XSD schema design.

    Add Space Templates

    Go to the Home page of your Confluence space. Navigate to the Browse drop-down menu and click on Advanced. Then click the Templates option in the left navigation panel. Click Add New Space Template to add the following three templates.

    Name: attribute

        {metadata-list}
        || Name | |
        || Type | |
        || Format | |
        || Description | |
        {metadata-list}
        {add-label:attribute}

    Name: primary-type

        {metadata-list}
        || Name | ||
        || Type | ||
        || Format | ||
        || Description | ||
        {metadata-list}
        {add-label:primary-type}

    Name: complex-type

        {metadata-list}
        || Name | ||
        || Description | ||
        {metadata-list}

        h3. Attributes
        || Name || Type || Format || Description ||
        | [name] | {metadata-from:name|Type} | {metadata-from:name|Format} | {metadata-from:name|Description} |
        {add-label:complex-type,entity}

    The metadata-list macro (see Metadata plugin) will save a list of metadata values to the page. The add-label macro (see Label plugin) will automatically label the page.

    Primary Types Page

    Our first page to add will act as container for our primary types. Switch to Wiki markup when adding the following content to the page.

        | (+) {add-page:template=primary-type|parent=@self}Add new primary type{add-page} |
        {metadata-report:Name,Type,Format,Description|sort=Name|root=@self|pages=@descendents}

    Once the page is created, click Add new primary type (the add-page macro) to start creating new pages. Here is an example of input for the LocalDate page. Embrace LocalDate with square brackets [] to make the page linkable. Again, switch to Wiki markup before editing.

        {metadata-list}
        || Name | [LocalDate] ||
        || Type | Date ||
        || Format | YYYY-MM-DD ||
        || Description | Date in local time zone. YYYY = year, MM = month and DD = day ||
        {metadata-list}
        {add-label:primary-type}

    The metadata-report macro will show a tabular report of all child pages.

    Attributes Page

    The next page will act as container for all of our attributes.

        | (+) {add-page:template=attribute|parent=@self|title=attribute}Add new attribute{add-page} |
        {metadata-report:Name,Type,Format,Description|sort=Name|pages=@descendants}

    Here is an example of input for the startDate page.

        {metadata-list}
        || Name | [startDate] ||
        || Type | [LocalDate] ||
        || Format | {metadata-from:LocalDate|Format} ||
        || Description | The project's start date ||
        {metadata-list}
        {add-label:attribute}

    Using the metadata-from macro we fetch the text from the previously created LocalDate page.

    Complex Types Page

    The last page in this example shows how attributes can be combined to form more complex types.

        h3. Intro
        Overview of complex types in the domain model.
        | (+) {add-page:template=complex-type|parent=@self}Add a new complex type{add-page}\\ |
        {metadata-report:Name,Description|sort=Name|root=@self|pages=@descendents}

    Here is an example of input for the ProjectType page.

        {metadata-list}
        || Name | [ProjectType] ||
        || Description | Represents a project ||
        {metadata-list}

        h3. Attributes
        || Name || Type || Format || Description ||
        | [projectId] | {metadata-from:projectId|Type} | {metadata-from:projectId|Format} | {metadata-from:projectId|Description} |
        | [name] | {metadata-from:name|Type} | {metadata-from:name|Format} | {metadata-from:name|Description} |
        | [description] | {metadata-from:description|Type} | {metadata-from:description|Format} | {metadata-from:description|Description} |
        | [startDate] | {metadata-from:startDate|Type} | {metadata-from:startDate|Format} | {metadata-from:startDate|Description} |
        {add-label:complex-type,entity}

    This gives us the rendered page (shown as a screenshot in the original post).

    Conclusion

    Using a web-based corporate Wiki like Confluence to create a logical domain model increases collaboration between people with different roles in the enterprise. It is my belief that this helps the domain model to be more accurate and better documented. In our real project we have more pages than illustrated here to complete the documentation. We also still use UML tools to create the types of diagrams that Confluence does not support. As a last tip, an ImageMap plugin can make those diagrams clickable when used in pages. Enjoy!

    Read the article

  • How to read values from the remote OPC server

    - by Shailesh Jaiswal
    I have created an ASP.NET web service. In this web service I am using the web method below. The web service is related to OPC (OLE for Process Control).

        public string ReadServerItems(string ServerName)
        {
            string txt = "";
            ArrayList obj = new ArrayList();
            XmlServer Srv = new XmlServer("XDAGW.CS.WCF.Eval.1");
            RequestOptions opt = new RequestOptions();
            ReadRequestItemList iList = new ReadRequestItemList();
            iList.Items = new ReadRequestItem[3];
            iList.Items[0] = new ReadRequestItem();
            iList.Items[0].ItemName = "ServerInfo.ConnectedClients";
            iList.Items[1] = new ReadRequestItem();
            iList.Items[1].ItemName = "ServerInfo.TotalGroups";
            iList.Items[2] = new ReadRequestItem();
            iList.Items[2].ItemName = "EventSources.Area1.Tracking";
            ReplyItemList rslt;
            OPCError[] err;
            ReplyBase reply = Srv.Read(opt, iList, out rslt, out err);
            if ((rslt == null))
                txt += err[0].Text;
            else
            {
                foreach (xmldanet.xmlda.ItemValue iv in rslt.Items)
                {
                    txt += iv.ItemName;
                    if (iv.ResultID == null) // success
                    {
                        txt += " = " + iv.Value.ToString() + "\r\n";
                        obj.Add(txt);
                    }
                    else
                        txt += " : Error: " + iv.ResultID.Name + "\r\n";
                }
            }
            return txt;
        }

    I am using the namespaces:

        using xmldanet;
        using xmldanet.xmlda;

    I have installed the XMLDA.NET client component evaluation. It includes a built-in test client which successfully reads the values of these data items from the remote OPC server, and it also provides templates through which OPC-based applications can be built.

    In the above code I am trying to read the values of the data items, but I am not able to. I have set a breakpoint, and I can see that the condition if (iv.ResultID == null) becomes false, and that the variable rslt contains null values. Please tell me where I am making a mistake, how I should correct it, and whether you can provide corrected code?

    Read the article

  • How to develop an ASP.NET web service with a web method that takes a parameter of type Windows Forms control

    - by Shailesh Jaiswal
    I am developing an ASP.NET web service so that an OPC (OLE for Process Control) client application can use it. In this web service I am using the built-in functions provided by the namespaces using OPC, using OPCDA, using OPCDA.NET. I have also added the namespace using System.Windows.Forms so that I can use Windows Forms controls. In that web service I have created a web method which takes parameters of Windows Forms control types, as given below:

        public void getOPCServerItems(TreeView tvServerItems, ListView lvBranchItems)
        {
            ArrayList ArrlstObj = new ArrayList();
            ItemShowTreeList = OpcSrv.ShowBrowseTreeList(tvServerItems, lvBranchItems);
            ItemShowTreeList.BrowseModeOneLevel = true; // browse hierarchy levels when selected. (default)
            ItemShowTreeList.Show(OpcSrv.ServerName);
        }

    In the above web method I need to pass the values to the built-in function ShowBrowseTreeList() (found in the OPC, OPCDA and OPCDA.NET namespaces). This function takes two parameters of Windows Forms control types: a TreeView and a ListView control from a Windows Form. In the above web method, ShowBrowseTreeList() automatically creates the treeview and listview structure of the available items.

    Now, as I am consuming the web service, I need to pass values to the web method getOPCServerItems(). But as my consuming application is an ASP.NET application, there are no such Windows Forms controls. ASP.NET has its own TreeView and ListView controls, and I want to display the returned data in those ASP.NET controls rather than in Windows Forms controls. I cannot work out what I need to do or how I should pass the values from my client application to this web service.

    When I use parameters of type TreeView and ListView in getOPCServerItems(), it generates the error "Cannot serialize member System.ComponentModel.Component.Site of type System.ComponentModel.ISite because it is an interface.". Can you show me how I can write the above web method, how I should pass parameters for the TreeView and ListView controls (Windows Forms controls) from my ASP.NET application, which controls I should use to pass parameters, and whether any type of casting is needed? Can you provide the code for the above web method so that I can resolve this issue?
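    The serialization error is the underlying constraint here: web methods exchange XML, and WinForms controls are not XML-serializable, so a control can never travel across a web service boundary. A hedged sketch of the usual alternative, returning a plain serializable tree that any client can bind to whatever control it likes (all names below are illustrative, and the OPC browse call is stubbed since the full XMLDA.NET API isn't shown in the question):

        using System.Collections.Generic;
        using System.Web.Services;

        // Plain data node: XML-serializable, unlike a WinForms TreeView.
        public class OpcNode
        {
            public string Name;
            public List<OpcNode> Children = new List<OpcNode>();
        }

        public class OpcService : WebService
        {
            [WebMethod]
            public OpcNode GetOpcServerItems(string serverName)
            {
                // Hypothetical: walk the OPC server's browse hierarchy here and
                // copy each branch/leaf name into OpcNode objects, instead of
                // letting ShowBrowseTreeList populate a TreeView.
                var root = new OpcNode { Name = serverName };
                return root;
            }
        }

    On the ASP.NET side, the returned OpcNode tree can then be walked to add items to an asp:TreeView, and the same data would work equally well for a WinForms client.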

    Read the article

  • Creating packages in code - Package Configurations

    Continuing my theme of building various types of packages in code, this example shows how to build a package with package configurations. Incidentally, it shows you how to add a variable and a connection too. It covers the five most common configurations:

    - Configuration File
    - Indirect Configuration File
    - SQL Server
    - Indirect SQL Server
    - Environment Variable

    For a general overview try the SQL Server Books Online Package Configurations topic.

    The sample uses a simple helper function, ApplyConfig, to create or update a configuration, although in the example we will only ever create. The most useful knowledge is the configuration string (Configuration.ConfigurationString) that you need to set:

    Configuration File - the configuration string is the full path and file name of an XML configuration file. The file can contain one or more configurations and includes the target path and new value to set.

    Indirect Configuration File - an environment variable, the value of which contains the full path and file name of an XML configuration file as per the Configuration File type described above.

    SQL Server - a three-part configuration string, with each part being quote-delimited and separated by a semi-colon.
    - The first part is the connection manager name. The connection tells you which server and database to look for the configuration table.
    - The second part is the name of the configuration table. The table is of a standard format; use the Package Configuration Wizard to help create an example, or see the sample script files below. The table contains one or more rows or configuration items, each with a target path and new value.
    - The third and final part is the optional filter name. A configuration table can contain multiple configurations, and the filter is a literal value that can be used to group items together and act as a filter clause when configurations are being read. If you do not need a filter, just leave the value empty.

    Indirect SQL Server - an environment variable, the value of which is the three-part configuration string as per the SQL Server type described above.

    Environment Variable - an environment variable, the value of which is the value to set in the package. This is slightly different to the other examples, as the configuration definition in the package also includes the target information. In our ApplyConfig function this is the only example that actually supplies a target value for the Configuration.PackagePath property. The path is an XPath-style path for the target property, \Package.Variables[User::Variable].Properties[Value], the equivalent of which can be seen in the screenshot below, with the object being our variable called Variable, and the property to set being the Value property of that variable object.

    The configurations as seen when opening the generated package in BIDS (screenshot in the original post).

    The sample code creates the package, adds a variable and connection manager, enables configurations, and then adds our example configurations. The package is then saved to disk, useful for checking the package and testing, before finally executing, just to prove it is valid. There are some external resources used here, namely some environment variables and a table; see below for more details.
        namespace Konesans.Dts.Samples
        {
            using System;
            using Microsoft.SqlServer.Dts.Runtime;

            public class PackageConfigurations
            {
                public void CreatePackage()
                {
                    // Create a new package
                    Package package = new Package();
                    package.Name = "ConfigurationSample";

                    // Add a variable, the target for our configurations
                    package.Variables.Add("Variable", false, "User", 0);

                    // Add a connection, for SQL configurations
                    // Add the SQL OLE-DB connection
                    ConnectionManager connectionManagerOleDb = package.Connections.Add("OLEDB");
                    connectionManagerOleDb.Name = "SQLConnection";
                    connectionManagerOleDb.ConnectionString =
                        "Provider=SQLOLEDB.1;Data Source=(local);Initial Catalog=master;Integrated Security=SSPI;";

                    // Add our example configurations, first must enable package setting
                    package.EnableConfigurations = true;

                    // Direct configuration file, see sample file
                    this.ApplyConfig(package, "Configuration File", DTSConfigurationType.ConfigFile,
                        "C:\\Temp\\XmlConfig.dtsConfig", string.Empty);

                    // Indirect configuration file, the environment variable XmlConfigFileEnvironmentVariable
                    // contains the path to the configuration file, e.g. C:\Temp\XmlConfig.dtsConfig
                    this.ApplyConfig(package, "Indirect Configuration File", DTSConfigurationType.IConfigFile,
                        "XmlConfigFileEnvironmentVariable", string.Empty);

                    // Direct SQL Server configuration, uses the SQLConnection package connection to read
                    // configurations from the [dbo].[SSIS Configurations] table, with a filter of "SampleFilter"
                    this.ApplyConfig(package, "SQL Server", DTSConfigurationType.SqlServer,
                        "\"SQLConnection\";\"[dbo].[SSIS Configurations]\";\"SampleFilter\";", string.Empty);

                    // Indirect SQL Server configuration, the environment variable "SQLServerEnvironmentVariable"
                    // contains the configuration string e.g. "SQLConnection";"[dbo].[SSIS Configurations]";"SampleFilter";
                    this.ApplyConfig(package, "Indirect SQL Server", DTSConfigurationType.ISqlServer,
                        "SQLServerEnvironmentVariable", string.Empty);

                    // Direct environment variable, the value of the EnvironmentVariable environment variable is
                    // applied to the target property, the value of the "User::Variable" package variable
                    this.ApplyConfig(package, "EnvironmentVariable", DTSConfigurationType.EnvVariable,
                        "EnvironmentVariable", "\\Package.Variables[User::Variable].Properties[Value]");

        #if DEBUG
                    // Save package to disk, DEBUG only
                    new Application().SaveToXml(String.Format(@"C:\Temp\{0}.dtsx", package.Name), package, null);
                    Console.WriteLine(@"C:\Temp\{0}.dtsx", package.Name);
        #endif

                    // Execute package
                    package.Execute();

                    // Basic check for warnings
                    foreach (DtsWarning warning in package.Warnings)
                    {
                        Console.WriteLine("WarningCode : {0}", warning.WarningCode);
                        Console.WriteLine(" SubComponent : {0}", warning.SubComponent);
                        Console.WriteLine(" Description : {0}", warning.Description);
                        Console.WriteLine();
                    }

                    // Basic check for errors
                    foreach (DtsError error in package.Errors)
                    {
                        Console.WriteLine("ErrorCode : {0}", error.ErrorCode);
                        Console.WriteLine(" SubComponent : {0}", error.SubComponent);
                        Console.WriteLine(" Description : {0}", error.Description);
                        Console.WriteLine();
                    }

                    package.Dispose();
                }

                /// <summary>
                /// Add or update a package configuration.
                /// </summary>
                /// <param name="package">The package.</param>
                /// <param name="name">The configuration name.</param>
                /// <param name="type">The type of configuration</param>
                /// <param name="setting">The configuration setting.</param>
                /// <param name="target">The target of the configuration, leave blank if not required.</param>
                internal void ApplyConfig(Package package, string name, DTSConfigurationType type, string setting, string target)
                {
                    Configurations configurations = package.Configurations;
                    Configuration configuration;

                    if (configurations.Contains(name))
                    {
                        configuration = configurations[name];
                    }
                    else
                    {
                        configuration = configurations.Add();
                    }

                    configuration.Name = name;
                    configuration.ConfigurationType = type;
                    configuration.ConfigurationString = setting;
                    configuration.PackagePath = target;
                }
            }
        }

    The following table lists the environment variables required for the full example to work, along with some sample values.

    - EnvironmentVariable: 1
    - SQLServerEnvironmentVariable: "SQLConnection";"[dbo].[SSIS Configurations]";"SampleFilter";
    - XmlConfigFileEnvironmentVariable: C:\Temp\XmlConfig.dtsConfig

    Sample code, package and configuration file: ConfigurationApplication.cs, ConfigurationSample.dtsx, XmlConfig.dtsConfig
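    For anyone wanting to run the sample outside of a test harness, a minimal console entry point is all that's needed (this is an assumed addition, not part of the original sample):

        // Hypothetical entry point for the sample above.
        public static class Program
        {
            public static void Main()
            {
                new Konesans.Dts.Samples.PackageConfigurations().CreatePackage();
            }
        }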

    Read the article

  • Top 10 Reasons SQL Developer is Perfect for Oracle Beginners

    - by thatjeffsmith
    Learning new technologies can be daunting. If you've never used a Mac before, you'll probably be a bit baffled at first. But you're probably at least coming from a desktop computing background (Windows), so you have a common frame of reference. But what if you're just now learning to use a relational database? Yes, you've played with Access a bit, but now your employer or college instructor has charged you with becoming proficient with Oracle Database. Here are 10 reasons why I think Oracle SQL Developer is the perfect vehicle to help get you started.

    1. It's free

    No need to break into one of these… No start-up costs, no need to wrangle budget dollars from your company. Students don't have any money after books and lab fees anyway. And most employees don't like having to ask for 'special' software anyway. So avoid all of that and make sure the free stuff doesn't suit your needs first. Upgrades are available on a regular basis, also at no cost, and support is freely available via our public forums.

    2. It will run pretty much anywhere

    Windows – check. OSX (Apple) – check. Unix – check. Linux – check. No need to start up a Windows VM to run your Windows-only software on your lab machine.

    3. Anyone can install it

    There's no installer, no registry to be updated, no admin privs to be obtained. If you can download and extract files to your machine or USB storage device, you can run it. You can be up and running with SQL Developer in under 5 minutes. Here's a video tutorial to see how to get started.

    4. It's ubiquitous

    I admit it, I learned a new word yesterday and I wanted an excuse to use it. SQL Developer's everywhere. It's had over 2,500,000 downloads in the past year, and is one of the most downloaded items from OTN. This means if you need help, there's someone sitting nearby you that can assist, and since they're in the same tool as you, they'll be speaking the same language.

    5. Simple user interface

    Up-up-down-down-left-right-left-right-A-B-A-B-START will get you 30 lives, but you already knew that, right? You connect, you see your objects, you click on your objects. Or, you can use the worksheet to write your queries and programs in. There's only one toolbar, and just a few buttons. If you're like me, video games became less fun when each button had 6 action items mapped to it. I just want the good ole 'A', 'B', 'SELECT', and 'START' controls. If you're new to Oracle, you shouldn't have the double workload of learning a new complicated tool as well.

    6. It's not a 'black box'

    Click through your objects, but also get the SQL that drives the GUI. As you use the wizards to accomplish tasks for you, you can view the SQL statement being generated on your behalf. Just because you have a GUI doesn't mean you're ceding your responsibility to learn the underlying code that makes the database work.

    7. It's four tools in one

    It's not just a query tool. Maybe you need to design a data model first? Or maybe you need to migrate your Sybase ASE database to Oracle for a new project? Or maybe you need to create some reports? SQL Developer does all of that. So once you get comfortable with one part of the tool, the others will be much easier to pick up as your needs change.

    8. Great learning resources available

    Videos, blogs, hands-on learning labs – you name it, we got it. Why wait for someone to train you, when you can train yourself at your own pace?

    9. You can use it to teach yourself SQL

    Instead of being faced with the white-screen-of-panic, you can visually build your queries by dragging and dropping tables and views into the Query Builder. Yes, 'just like Access' – only better. And as you build your query, toggle to the Worksheet panel and see the SQL statement. Again, SQL Developer is not a black box. If you prefer to learn by trial and error, the worksheet will attempt to suggest the next bit of your SQL statement with its completion insight feature. And if you have syntax errors, those will be highlighted – just like your misspelled words in your favorite word processor.

    10. It scales to match your experience level

    You won't be a n00b forever. In 6-8 months, when you're ready to tackle something a bit more complicated, like XML DB or Oracle Spatial, the tool is already there waiting on you. No need to go out and find the 'advanced' tool.

    11. Wait, you said this was a 'Top 10' list?

    Yes. Yes, I did. I'm using this 'trick' to get you to continue reading because I'm going to say something you might not want to hear. Are you ready? Tools won't replace experience, failure, hard work, and training. Just because you have the keys to the car doesn't mean you're ready to head out on the race track. While SQL Developer reduces the barriers to entry, it does not completely remove them. Many experienced folks simply do not like tools. Rather, they don't like the people that pick up tools without the know-how to properly use them. If you don't understand what 'TRUNCATE' means, don't try it out. Try picking up a book first. Of course, it's very nice to have your own sandbox to play in, so you don't upset the other children. That's why I really like our Dev Days Database Virtual Box image. It's your own database to learn and experiment with.

    Read the article

  • SSIS Lookup component tuning tips

    - by jamiet
    Yesterday evening I attended a London meeting of the UK SQL Server User Group at Microsoft's offices in London Victoria. As usual it was both a fun and informative evening, and in particular there seemed to be a few questions arising about tuning the SSIS Lookup component; I rattled off some comments and figured it would be prudent to drop some of them into a dedicated blog post, hence the one you are reading right now.

    Scene setting

    A popular pattern in SSIS is to use a Lookup component to determine whether a record in the pipeline already exists in the intended destination table or not, and I cover this pattern in my 2006 blog post Checking if a row exists and if it does, has it changed? (note to self: must rewrite that blog post for SSIS2008). Fundamentally the SSIS Lookup component (when using the FullCache option) sucks some data out of a database and holds it in memory so that it can be compared to data in the pipeline.

    One of the big benefits of using SSIS dataflows is that they process data one buffer at a time; that means that not all of the data from your source exists in the dataflow at the same time, and is why a SSIS dataflow can process data volumes that far exceed the available memory. However, that only applies to data in the pipeline; for reasons that are hopefully obvious, ALL of the data in the lookup set must exist in the memory cache for the duration of the dataflow's execution, which means that any memory used by the lookup cache will not be available to be used as a pipeline buffer. Moreover, there's an obvious correlation between the amount of data in the lookup cache and the time it takes to charge that cache; the more data you have, the longer it will take to charge and the longer you have to wait until the dataflow actually starts to do anything. For these reasons your goal is simple: ensure that the lookup cache contains as little data as possible.

    General tips

    Here is a simple tick list you can follow in order to tune your lookups:

    - Use a SQL statement to charge your cache, don't just pick a table from the dropdown list made available to you. (Read why in SELECT *... or select from a dropdown in an OLE DB Source component?)
    - Only pick the columns that you need, ignore everything else.
    - Make the database columns that your cache is populated from as narrow as possible. If a column is defined as VARCHAR(20) then SSIS will allocate 20 bytes for every value in that column – that is a big waste if the actual values are significantly less than 20 characters in length.
    - Do you need DT_WSTR typed columns or will DT_STR suffice? DT_WSTR uses twice the amount of space to hold values that can be stored using a DT_STR, so if you can use DT_STR, consider doing so. The same principle goes for the numerical datatypes DT_I2/DT_I4/DT_I8.
    - Only populate the cache with data that you KNOW you will need. In other words, think about your WHERE clause!

    Thinking outside the box

    It is tempting to build a large monolithic dataflow that does many things, one of which is a Lookup. Often though you can make better use of your available resources by, well, mixing things up a little, and here are a few ideas to get your creative juices flowing:

    - There is no rule that says everything has to happen in a single dataflow. If you have some particularly resource intensive lookups then consider putting that lookup into a dataflow all of its own and using raw files to pass the pipeline data in and out of that dataflow.
    - Know your data. If you think, for example, that the majority of your incoming rows will match with only a small subset of your lookup data, then consider chaining multiple lookup components together; the first would use a FullCache containing that data subset, and the remaining data that doesn't find a match could be passed to a second lookup that perhaps uses a NoCache lookup, thus negating the need to pull all of that least-used lookup data into memory.
    - Do you need to process all of your incoming data all at once? If you can process different partitions of your data separately then you can partition your lookup cache as well. For example, if you are using a lookup to convert a location into a [LocationId], then why not process your data one region at a time? This will mean your lookup cache only has to contain data for the location that you are currently processing, and with the ability of the Lookup in SSIS2008 and beyond to charge the cache using a dynamically built SQL statement, you'll be able to achieve it using the same dataflow and simply loop over it using a ForEach loop.
    - Taking the previous data partitioning idea further… a dataflow can contain more than one data path, so why not split your data using a Conditional Split component and, again, charge your lookup caches with only the data that they need for that partition.
    - Lookups have two uses: to (1) find a matching row from the lookup set and (2) put attributes from that matching row into the pipeline. Ask yourself, do you need to do these two things at the same time? After all, once you have the key column(s) from your lookup set then you can use that key to get the rest of the attributes further downstream, perhaps even in another dataflow.
    - Are you using the same lookup data set multiple times? If so, consider the file caching option in SSIS 2008 and beyond.
    - Above all, experiment and be creative with different combinations. You may be surprised at what works.

    Final thoughts

    If you want to know more about how the Lookup component differs in SSIS2008 from SSIS2005 then I have a dedicated blog post about that at Lookup component gets a makeover.

    I am on a mini-crusade at the moment to get a BULK MERGE feature into the database engine, the thinking being that if the database engine can quickly merge massive amounts of data in a similar manner to how it can insert massive amounts using BULK INSERT, then that's a lot of work that wouldn't have to be done in the SSIS pipeline. If you think that is a good idea then go and vote for BULK MERGE on Connect.

    If you have any other tips to share then please stick them in the comments. Hope this helps!

    @Jamiet

    Read the article

  • The SSIS tuning tip that everyone misses

    - by Rob Farley
    I know that everyone misses this, because I'm yet to find someone who doesn't have a bit of an epiphany when I describe it.

    When tuning Data Flows in SQL Server Integration Services, people see the Data Flow as moving from the Source to the Destination, passing through a number of transformations. What people don't consider is the Source: getting the data out of a database. Remember, the source of data for your Data Flow is not your Source Component. It's wherever the data is, within your database, probably on a disk somewhere. You need to tune your query to optimise it for SSIS, and this is what most people fail to do.

    I'm not suggesting that people don't tune their queries – there's plenty of information out there about making sure that your queries run as fast as possible. But for SSIS, it's not about how fast your query runs. Let me say that again, but in bolder text: The speed of an SSIS Source is not about how fast your query runs. If your query is used in a Source component for SSIS, the thing that matters is how fast it starts returning data – in particular, those first 10,000 rows to populate that first buffer, ready to pass down the rest of the transformations on its way to the Destination.

    Let's look at a very simple query as an example, using the AdventureWorks database. We're picking the different Weight values out of the Product table, and it does this by scanning the table and doing a Sort. It's a Distinct Sort, which means that the duplicates are discarded. It'll be no surprise to see that the data produced is sorted. Obvious, I know, but I'm making a comparison to what I'll do later.

    Before I explain the problem here, let me jump back into the SSIS world... If you've investigated how to tune an SSIS flow, then you'll know that some SSIS Data Flow Transformations are known to be Blocking, some are Partially Blocking, and some are simply Row transformations. Take the SSIS Sort transformation, for example. I'm using a larger data set for this, because my small list of Weights won't demonstrate it well enough. Seven buffers of data came out of the source, but none of them could be pushed past the Sort operator, just in case the last buffer contained the data that would be sorted into the first buffer. This is a blocking operation.

    Back in the land of T-SQL, consider our Distinct Sort operator. It's also blocking. It won't let data through until it's seen all of it. If you weren't okay with blocking operations in SSIS, why would you be happy with them in an execution plan? The source of your data is not your OLE DB Source. Remember this. The source of your data is the NCIX/CIX/Heap from which it's being pulled.

    Picture it like this... the data flows from the Clustered Index, through the Distinct Sort operator, into the SELECT operator, where a series of SSIS Buffers are populated, flowing (as they get full) down through the SSIS transformations. Alright, I know that I'm taking some liberties here, because the two queries aren't the same, but consider the visual. The data is flowing from your disk and through your execution plan before it reaches SSIS, so you could easily find that a blocking operation in your plan is just as painful as a blocking operation in your SSIS Data Flow.

    Luckily, T-SQL gives us a brilliant query hint to help avoid this: OPTION (FAST 10000). This hint makes the optimiser choose a plan that is optimised for returning the first 10,000 rows – the default SSIS buffer size. And the effect can be quite significant.
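    The plan screenshots from the original post are not reproduced here, but reconstructing the two queries from the description, the comparison is roughly the following (a sketch against the AdventureWorks Production.Product table):

        -- Plain version: the optimiser picks a (blocking) Distinct Sort.
        SELECT DISTINCT Weight
        FROM Production.Product;

        -- Source-query version: optimised to fill the first SSIS buffer quickly.
        SELECT DISTINCT Weight
        FROM Production.Product
        OPTION (FAST 10000);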
    First let's consider a simple example, then we'll look at a larger one. Consider our Weights. We don't have 10,000 of them, so I'm going to use OPTION (FAST 1) instead. You'll notice that the query is more expensive, using a Flow Distinct operator instead of the Distinct Sort. This operator is consuming 84% of the query, instead of the 59% we saw from the Distinct Sort. But the first row can be returned quicker – a Flow Distinct operator is non-blocking.

    The data here isn't sorted, of course. It's in the same order that it came out of the index, just with duplicates removed. As soon as a Flow Distinct sees a value that it hasn't come across before, it pushes it out to the operator on its left. It still has to maintain the list of what it's seen so far, but by handling it one row at a time, it can push rows through quicker. Overall, it's a lot more work than the Distinct Sort, but if the priority is the first few rows, then perhaps that's exactly what we want.

    The Query Optimizer seems to do this by optimising the query as if there were only one row coming through. This 1-row estimation is caused by the Query Optimizer imagining the SELECT operation saying "Give me one row" first, and this message being passed all the way along. The request might not make it all the way back to the source, but in my simple example, it does. I hope this simple example has helped you understand the significance of the blocking operator.

    Now I'm going to show you an example on a much larger data set. This data flow was fetching about 780,000 rows, and these are the Estimated Plans. The data needed to be Sorted, to support further SSIS operations that required that. First, without the hint, and then with OPTION (FAST 10000): a very different plan, I'm sure you'll agree. In case you're curious, those arrows in the top plan are 780,000 rows in size. In the second, they're estimated to be 10,000, although the Actual figures end up being 780,000.

    The top one definitely runs faster – it finished several times faster than the second one. With the amount of data being considered, these numbers were in minutes. Look at the second one – it's doing Nested Loops across 780,000 rows! That's not generally recommended at all. That's "go and make yourself a coffee" time. In this case, it was about six or seven minutes. The faster one finished in about a minute.

    But in SSIS-land, things are different. The particular data flow that was consuming this data was significant: it was being pumped into a Script Component to process each row based on previous rows, creating about a dozen different flows. The data flow would take roughly ten minutes to run – ten minutes from when the data first appeared. The query that completes faster – chosen by the Query Optimizer with no hints, based on accurate statistics (rather than pretending the numbers are smaller) – would take a minute to start getting the data into SSIS, at which point the ten-minute flow would start, taking eleven minutes to complete. The query that took longer – chosen by the Query Optimizer pretending it only wanted the first 10,000 rows – would take only ten seconds to fill the first buffer. Despite the fact that it might have taken the database another six or seven minutes to get the data out, SSIS didn't care. Every time it wanted the next buffer of data, it was already available, and the whole process finished in about ten minutes and ten seconds.

    When debugging SSIS, you run the package and sit there waiting for the Debug information to start appearing.
    You look for the numbers on the data flow and watch the operators turn Yellow and Green. Without the hint, I'd sit there for a minute. With the hint, just ten seconds. You can imagine which one I preferred. By adding this hint, it felt like a magic wand had been waved across the query to make it run several times faster. That wasn't actually the case – but it felt like it to SSIS.

    Read the article

  • Maintenance plans love story

    - by Maria Zakourdaev
    There are about 200 QA and DEV SQL Servers out there. There is a maintenance plan on many of them that performs a backup of all databases and removes the backup history files.

    First of all, I must admit that I'm no big fan of maintenance plans in particular or of SSIS packages in general. In this specific case, if I ever need to change anything in the way the backup is performed, such as the compression feature or some other option, I have to open each plan one by one. This is quite a pain. Therefore, I have decided to replace the maintenance plans with a stored procedure that performs exactly the same thing. Having such a procedure allows me to open multiple server connections and just execute an ALTER PROCEDURE whenever I need to change anything in it. There is nothing like good ole T-SQL.

    The first challenge was to remove the unneeded maintenance plans. Of course, I didn't want to do it server by server. I found the procedure msdb.dbo.sp_maintplan_delete_plan, but it only has a parameter for the maintenance plan id and no other parameters, like plan name, which would have been much more useful. Now I needed to find the table that holds all maintenance plans on the server. You would think that it would be msdb.dbo.sysdbmaintplans but, unfortunately, regardless of the number of maintenance plans on the instance, it contains just one row. After a while I found another table: msdb.dbo.sysmaintplan_subplans. It contains the plan id that I was looking for, in the plan_id column, as well as the agent's job id which executes the plan's package.

    That was all I needed, and the rest turned out to be quite easy. Here is a script that can be executed against hundreds of servers from a multi-server query window to drop the specific maintenance plans:

        DECLARE @PlanID uniqueidentifier

        SELECT @PlanID = plan_id
        FROM msdb.dbo.sysmaintplan_subplans
        WHERE name LIKE 'BackupPlan%'

        EXECUTE msdb.dbo.sp_maintplan_delete_plan @plan_id = @PlanID

    The second step was to create a procedure that performs all of the old maintenance plan tasks: create a folder for each database, back up all databases on the server and clean up the old files. The script is below. Enjoy.

        ALTER PROCEDURE BackupAllDatabases
              @PrintMode BIT = 1
        AS
        BEGIN
              DECLARE @BackupLocation VARCHAR(500)
              DECLARE @PurgeAfterDays INT
              DECLARE @PurgingDate VARCHAR(30)
              DECLARE @SQLCmd VARCHAR(MAX)
              DECLARE @FileName VARCHAR(100)

              SET @PurgeAfterDays = -14
              SET @BackupLocation = '\\central_storage_servername\BACKUPS\' + @@SERVERNAME

              SET @PurgingDate = CONVERT(VARCHAR(19), DATEADD(dd, @PurgeAfterDays, GETDATE()), 126)

              SET @FileName = '?_full_'
                            + REPLACE(CONVERT(VARCHAR(19), GETDATE(), 126), ':', '-')
                            + '.bak';

              SET @SQLCmd = '
              IF ''?'' <> ''tempdb'' BEGIN
                    EXECUTE master.dbo.xp_create_subdir N''' + @BackupLocation + '\?\'' ;

                    BACKUP DATABASE ? TO DISK = N''' + @BackupLocation + '\?\' + @FileName + '''
                    WITH NOFORMAT, NOINIT, SKIP, REWIND, NOUNLOAD, COMPRESSION, STATS = 10 ;

                    EXECUTE master.dbo.xp_delete_file 0, N''' + @BackupLocation + '\?\'', N''bak'', N''' + @PurgingDate + ''', 1;
              END'

              IF @PrintMode = 1 BEGIN
                    PRINT @SQLCmd
              END

              EXEC sp_MSforeachdb @SQLCmd
        END
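    A note on all the question marks in that procedure: sp_MSforeachdb substitutes each database name for the ? placeholder, so the single command string fans out across every database on the instance. With the default @PrintMode = 1 the procedure first prints the command template it has built, which is handy for eyeballing the generated BACKUP statement:

        -- Prints the ?-parameterised command template, then runs it
        -- once per database via sp_MSforeachdb.
        EXEC dbo.BackupAllDatabases @PrintMode = 1;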

    Read the article

  • Accessing Oracle DB through SQL Server using OPENROWSET

    - by Ken Paul
    I'm trying to access a large Oracle database through SQL Server using OPENROWSET in client-side JavaScript, and not having much luck. Here are the particulars:

    - A SQL Server view that accesses the Oracle database using OPENROWSET works perfectly, so I know I have valid connection string parameters.
    - However, the new requirement is for extremely dynamic Oracle queries that depend on client-side selections, and I haven't been able to get dynamic (or even parameterized) Oracle queries to work from SQL Server views or stored procedures.
    - Client-side access to the SQL Server database works perfectly with dynamic and parameterized queries.
    - I cannot count on clients having any Oracle client software. Therefore, access to the Oracle database has to be through the SQL Server database, using views, stored procedures, or dynamic queries using OPENROWSET.
    - Because the SQL Server database is on a shared server, I'm not allowed to use globally-linked databases.

    My idea was to define a function that would take my own version of a parameterized Oracle query, make the parameter substitutions, wrap the query in an OPENROWSET, and execute it in SQL Server, returning the resulting recordset. Here's sample code:

        // db is a global variable containing an ADODB.Connection opened to the SQL Server DB
        // rs is a global variable containing an ADODB.Recordset
        . . .
        ss = "SELECT myfield FROM mytable WHERE {param0} ORDER BY myfield;";
        OracleQuery(ss, ["somefield='" + somevalue + "'"]);
        . . .
        function OracleQuery(sql, params) {
          var s = sql;
          var i;
          for (i = 0; i < params.length; i++)
            s = s.replace("{param" + i + "}", params[i]);
          var e = "SELECT * FROM OPENROWSET('MSDAORA','(connect-string-values)';" +
                  "'user';'pass','" + s.split("'").join("''") + "') q";
          try {
            rs.Open("EXEC ('" + e.split("'").join("''") + "')", db);
          } catch (eobj) {
            alert("SQL ERROR: " + eobj.description + "\nSQL: " + e);
          }
        }

    The SQL error that I'm getting is:

        Ad hoc access to OLE DB provider 'MSDAORA' has been denied.
        You must access this provider through a linked server.

    which makes no sense to me. The Microsoft explanation for this error relates to a registry setting (DisallowAdhocAccess). This is set correctly on my PC, but surely this relates to the DB server and not the client PC, and I would expect that the setting there is correct since the view mentioned above works.

    One alternative that I've tried is to eliminate the enclosing EXEC in the Open statement:

        rs.Open(e, db);

    but this generates the same error. I also tried putting the OPENROWSET in a stored procedure. This works perfectly when executed from within SQL Server Management Studio, but fails with the same error message when the stored procedure is called from JavaScript.

    Is what I'm trying to do possible? If so, can you recommend how to fix my code? Or is a completely different approach necessary? Any hints or related information will be welcome. Thanks in advance.
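    For context, both of the switches relevant to that error live on the SQL Server itself, not on the client PC, so a sysadmin on the server would check something like the following. This is a hedged sketch only: sp_MSset_oledb_prop is an undocumented procedure, and MSDAORA is simply the provider name taken from the question.

        -- Run as sysadmin on the SQL Server, not on the client.
        -- 1) OPENROWSET/OPENDATASOURCE must be enabled at the instance level:
        EXEC sp_configure 'show advanced options', 1; RECONFIGURE;
        EXEC sp_configure 'Ad Hoc Distributed Queries', 1; RECONFIGURE;

        -- 2) The provider must allow ad hoc access (this is the
        --    DisallowAdHocAccess setting the error message refers to):
        EXEC master.dbo.sp_MSset_oledb_prop N'MSDAORA', N'DisallowAdHocAccess', 0;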

    Read the article

  • How to deploy a visual studio custom tool?

    - by Aen Sidhe
    Hello. I have my own custom tool for Visual Studio 2008 SP1. It consists of 5 assemblies: 3 assemblies with code that is used heavily in my other projects, 1 wrapper assembly over the VS2008 SDK, and 1 assembly with the tool itself. If I debug my tool from Visual Studio, using the "Run external program" option with the command line "C:\Program Files (x86)\Microsoft Visual Studio 9.0\Common7\IDE\devenv.exe" and the arguments "/ranu /rootsuffix Exp", everything works perfectly. After that I try to deploy it to my working VS copy (not to the experimental hive) by running gacutil /i Asm1.dll for all my assemblies and RegAsm Asm1.dll only for the assembly with the custom tool. Neither utility prints any error and everything seems to work as planned – the registry keys even appear. But my tool doesn't work, even after a PC restart. What did I do wrong?

    The wrapper looks like this:

        [ComVisible(true)]
        public abstract class CustomToolBase : IVsSingleFileGenerator, IObjectWithSite
        {
            #region IVsSingleFileGenerator Members

            int IVsSingleFileGenerator.DefaultExtension(out string pbstrDefaultExtension)
            {
                pbstrDefaultExtension = ".cs";
                return 0;
            }

            int IVsSingleFileGenerator.Generate(string wszInputFilePath, string bstrInputFileContents,
                string wszDefaultNamespace, IntPtr[] rgbOutputFileContents, out uint pcbOutput,
                IVsGeneratorProgress pGenerateProgress)
            {
                GenerationEventArgs gea = new GenerationEventArgs(
                    bstrInputFileContents,
                    wszInputFilePath,
                    wszDefaultNamespace,
                    new ServiceProvider(Site as Microsoft.VisualStudio.OLE.Interop.IServiceProvider)
                        .GetService(typeof(ProjectItem)) as ProjectItem,
                    new GenerationProgressFacade(pGenerateProgress));

                if (OnGenerateCode != null)
                {
                    OnGenerateCode(this, gea);
                }

                byte[] bytes = gea.GetOutputCodeBytes();
                int outputLength = bytes.Length;
                rgbOutputFileContents[0] = Marshal.AllocCoTaskMem(outputLength);
                Marshal.Copy(bytes, 0, rgbOutputFileContents[0], outputLength);
                pcbOutput = (uint)outputLength;
                return VSConstants.S_OK;
            }

            #endregion

            #region IObjectWithSite Members

            void IObjectWithSite.GetSite(ref Guid riid, out IntPtr ppvSite)
            {
                IntPtr pUnk = Marshal.GetIUnknownForObject(Site);
                IntPtr intPointer = IntPtr.Zero;
                Marshal.QueryInterface(pUnk, ref riid, out intPointer);
                ppvSite = intPointer;
            }

            void IObjectWithSite.SetSite(object pUnkSite)
            {
                Site = pUnkSite;
            }

            #endregion

            #region Public Members

            public object Site { get; private set; }

            public event EventHandler<GenerationEventArgs> OnGenerateCode;

            [ComRegisterFunction]
            public static void Register(Type type)
            {
                using (var parent = Registry.LocalMachine.OpenSubKey(@"Software\Microsoft\VisualStudio\9.0", true))
                    foreach (CustomToolRegistrationAttribute ourData in
                             type.GetCustomAttributes(typeof(CustomToolRegistrationAttribute), false))
                        ourData.Register(x => parent.CreateSubKey(x), (x, name, value) => x.SetValue(name, value));
            }

            [ComUnregisterFunction]
            public static void Unregister(Type type)
            {
                using (var parent = Registry.LocalMachine.OpenSubKey(@"Software\Microsoft\VisualStudio\9.0", true))
                    foreach (CustomToolRegistrationAttribute ourData in
                             type.GetCustomAttributes(typeof(CustomToolRegistrationAttribute), false))
                        ourData.Unregister(x => parent.DeleteSubKey(x, false));
            }

            #endregion
        }

    My tool code:
        [ComVisible(true)]
        [Guid("55A6C192-D29F-4e22-84DA-DBAF314ED5C3")]
        [CustomToolRegistration(ToolName, typeof(TransportGeneratorTool))]
        [ProvideObject(typeof(TransportGeneratorTool))]
        public class TransportGeneratorTool : CustomToolBase
        {
            private const string ToolName = "TransportGeneratorTool";

            public TransportGeneratorTool()
            {
                OnGenerateCode += GenerateCode;
            }

            private static void GenerateCode(object s, GenerationEventArgs e)
            {
                try
                {
                    var serializer = new XmlSerializer(typeof(Parser.System));
                    using (var reader = new StringReader(e.InputText))
                    using (var writer = new StringWriter(e.OutputCode))
                    {
                        Generator.System = (Parser.System)serializer.Deserialize(reader);
                        Generator.System.Namespace = e.Namespace;
                        Generator.GenerateSource(writer);
                    }
                }
                catch (Exception ex)
                {
                    e.Progress.GenerateError(ex.ToString());
                }
            }
        }

    Resulting registry keys:

        Windows Registry Editor Version 5.00

        [HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\VisualStudio\9.0\Generators]

        [HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\VisualStudio\9.0\Generators\{FAE04EC1-301F-11D3-BF4B-00C04F79EFBC}]

        [HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\VisualStudio\9.0\Generators\{FAE04EC1-301F-11D3-BF4B-00C04F79EFBC}\TransportGeneratorTool]
        @="TransportGeneratorTool"
        "CLSID"="{55a6c192-d29f-4e22-84da-dbaf314ed5c3}"
        "GeneratesDesignTimeSource"=dword:00000001
        "GeneratesSharedDesignTimeSource"=dword:00000001

    Read the article

< Previous Page | 24 25 26 27 28 29  | Next Page >