Search Results

Search found 3824 results on 153 pages for 'conditional statements'.


  • Replication Services in a BI environment

    - by jorg
    In this blog post I will explain the principles of SQL Server Replication Services without going into too much detail, and I will take a look at the BI capabilities that, in my opinion, Replication Services can offer. SQL Server Replication Services provides tools to copy and distribute database objects from one database system to another and maintain consistency afterwards. These tools basically copy or synchronize data with little or no transformation; they do not offer capabilities to transform data or apply business rules the way ETL tools do. The only “transformation” Replication Services offers is filtering records or columns out of your data set. You can achieve this by selecting the desired columns of a table and/or by using WHERE statements like this: SELECT <published_columns> FROM [Table] WHERE [DateTime] >= getdate() - 60

    There are three types of replication:

    Transactional Replication. This type replicates data at the transaction level. The Log Reader Agent reads directly from the transaction log of the source database (Publisher) and clones the transactions to the Distribution Database (Distributor); this database acts as a queue for the destination database (Subscriber). Next, the Distribution Agent moves the cloned transactions stored in the Distribution Database to the Subscriber. The Distribution Agent can run either at scheduled intervals or continuously, which offers near real-time replication of data! So, for example, when a user executes an UPDATE statement on one or more records in the publisher database, the transaction (not the data itself) is copied to the distribution database and is then also executed on the subscriber. When the Distribution Agent is set to run continuously this process runs all the time and transactions on the publisher are replicated in small batches (near real-time); when it runs at scheduled intervals it executes larger batches of transactions, but the idea is the same.

    Snapshot Replication. This type of replication makes an initial copy of the database objects that need to be replicated, including the schemas and the data itself. All types of replication must start with a snapshot of the database objects from the Publisher to initialize the Subscriber; transactional replication needs an initial snapshot of the replicated publisher tables/objects to run its cloned transactions on and maintain consistency. The Snapshot Agent copies the schemas of the tables that will be replicated to files stored in the Snapshot Folder, which is a normal folder on the file system. When all the schemas are ready, the data itself is copied from the Publisher to the snapshot folder. The snapshot is generated as a set of bulk copy program (BCP) files. Next, the Distribution Agent moves the snapshot to the Subscriber; if necessary it applies schema changes first and copies the data itself afterwards. The application of schema changes to the Subscriber is a nice feature: when you change the schema of the Publisher with, for example, an ALTER TABLE statement, that change is propagated by default to the Subscriber(s).

    Merge Replication. Merge replication is typically used in server-to-client environments, for example when subscribers need to receive data, make changes offline, and later synchronize those changes with the Publisher and other Subscribers, as with mobile devices that need to synchronize once in a while. Because I don’t really see BI capabilities here, I will not explain this type of replication any further.
    Replication Services in a BI environment. Transactional Replication can be very useful in BI environments. In my opinion you never want users running custom (SSRS) reports or PowerPivot solutions directly against your production database; it can slow down the system and cause deadlocks, which in turn cause errors. Transactional Replication can offer a read-only, near real-time database for reporting purposes with minimal overhead on the source system. Snapshot Replication can also be useful in BI environments: if you don’t need a near real-time copy of the database, you can choose this form of replication. Besides being an alternative to Transactional Replication, it can be used to stage data so it can be transformed and moved into the data warehousing environment afterwards. In many solutions I have seen developers create multiple SSIS packages that simply copy data from one or more source systems to a staging database that serves as the source for the ETL process. The creation of these packages takes a lot of (boring) time, while Replication Services can do the same in minutes. It is possible to filter out columns and/or records, and it can even apply schema changes automatically, so I think it offers enough features here. I don’t know how well it will perform and whether it really works as well for this purpose as I expect, but I want to try this out soon!
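
    For illustration only, here is a minimal T-SQL sketch of how a filtered article in a transactional publication could be set up. The database, publication, table and column names are hypothetical, the Distributor is assumed to be configured already, and the exact parameters of these replication stored procedures vary between SQL Server versions, so treat this as a sketch rather than a ready-to-run script.

        -- Minimal sketch of a filtered transactional publication (hypothetical names).
        USE SalesDb;
        GO
        -- Enable the database for publishing.
        EXEC sp_replicationdboption
             @dbname  = N'SalesDb',
             @optname = N'publish',
             @value   = N'true';

        -- Create a transactional publication that replicates continuously.
        EXEC sp_addpublication
             @publication = N'SalesDb_Reporting',
             @repl_freq   = N'continuous',
             @status      = N'active';

        -- Publish only the last 60 days of orders, mirroring the WHERE filter shown above.
        EXEC sp_addarticle
             @publication   = N'SalesDb_Reporting',
             @article       = N'Orders',
             @source_object = N'Orders',
             @filter_clause = N'[OrderDate] >= DATEADD(DAY, -60, GETDATE())';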

    Read the article

  • MySQL Connector/Net 6.4.6 Maintenance Release has been released

    - by fernando
    MySQL Connector/Net 6.4.6, a new version of the all-managed .NET driver for MySQL has been released.  This is a maintenance release and is recommended for use in production environments. It is appropriate for use with MySQL server versions 5.0-5.6. This is intended to be the final release for Connector/NET 6.4. It is now available in source and binary form from http://dev.mysql.com/downloads/connector/net/#downloads and mirror sites (note that not all mirror sites may be up to date at this point-if you can't find this version on some mirror, please try again later or choose another download site.) The 6.4.6 version of MySQL Connector/Net brings the following fixes: - Fix for List.Contains generates a bunch of ORs instead of more efficient IN clause in   LINQ to Entities (Oracle bug #14016344, MySql bug #64934). - Fix for error when trying to change the name of an Index on the Indexes/Keys editor; along with this fix now users can change the Index type of a new Index which could not be done   in previous versions, and when changing the Index name the change is reflected on the list view at the left side of the Index/Keys editor (Oracle bug #13613801). - Fix for stored procedure call using only its name with EF code first (MySql bug #64999, Oracle bug #14008699). - Fix for performance issue in generated EF query: .NET StartsWith/Contains/EndsWith produces MySql's locate instead of Like (MySql bug #64935, Oracle bug #14009363). - Fix for script generated for code first contains wrong alter table and wrong declaration for byte[] (MySql bug #64216, Oracle bug #13900091). - Fix for Exception thrown when using cascade delete in an EDM Model-First in Entity Framework (Oracle bug #14008752, MySql bug #64779). - Fix for Session locking issue with MySqlSessionStateStore (MySql bug #63997, Oracble bug #13733054). - Fixed deleting a user profile using Profile provider (MySQL bug #64409, Oracle bug #13790123). - Fix for bug Cannot Create an Entity with a Key of Type String (MySQL bug #65289, Oracle bug #14540202). This fix checks if the type has a FixedLength facet set in order to create a char otherwise should create varchar, mediumtext or longtext types when using a String CLR type in Code First or Model First also tested in Database First. Unit tests added for Code First and ProviderManifest. - Fix for bug "CacheServerProperties can cause 'Packet too large' error" (MySQL Bug #66578 Orabug #14593547). - Fix for handling unnamed parameter in MySQLCommand. This fix allows the mysqlcommand to handle parameters without requiring naming (e.g. INSERT INTO Test (id,name) VALUES (?, ?) ) (MySQL Bug #66060, Oracle bug #14499549). - Fixed inheritance on Entity Framework Code First scenarios. Discriminator column is created using its correct type as varchar(128) (MySql bug #63920 and Oracle bug #13582335). - Fixed "Trying to customize column precision in Code First does not work" (MySql bug #65001, Oracle bug #14469048). - Fixed bug ASP.NET Membership database fails on MySql database UTF32 (MySQL bug #65144, Oracle bug #14495292). - Fix for MySqlCommand.LastInsertedId holding only 32 bit values (MySql bug #65452, Oracle bug #14171960) by changing   several internal declaration of lastinsertid from int to long. - Fixed "Decimal type should have digits at right of decimal point", now default is 2, but user's changes in   EDM designer are recognized (MySql bug #65127, Oracle bug #14474342). 
- Fix for NullReferenceException when saving an uninitialized row in Entity Framework (MySql bug #66066, Oracle bug #14479715). - Fix for error when calling RoleProvider.RemoveUserFromRole(): causes an exception due to a wrong table being used (MySql bug #65805, Oracle bug #14405338). - Fix for "Memory Leak on MySql.Data.MySqlClient.MySqlCommand", too many MemoryStream's instances created (MySql bug #65696, Oracle bug #14468204). - Small improvement on MySqlPoolManager CleanIdleConnections for better mysqlpoolmanager idlecleanuptimer at startup (MySql bug #66472 and Oracle bug #14652624). - Fix for bug TIMESTAMP values are mistakenly represented as DateTime with Kind = Local (Mysql bug #66964, Oracle bug #14740705). - Fix for bug Keyword not supported. Parameter name: AttachDbFilename (Mysql bug #66880, Oracle bug #14733472). - Added support to MySql script file to retrieve data when using "SHOW" statements. - Fix for Package Load Failure in Visual Studio 2005 (MySql bug #63073, Oracle bug #13491674). - Fix for bug "Unable to connect using IPv6 connections" (MySQL bug #67253, Oracle bug #14835718). - Added auto-generated values for Guid identity columns (MySql bug #67450, Oracle bug #15834176). - Fix for method FirstOrDefault not supported in some LINQ to Entities queries (MySql bug #67377, Oracle bug #15856964). The release is available to download at http://dev.mysql.com/downloads/connector/net/6.4.html Documentation ------------------------------------- You can view current Connector/Net documentation at http://dev.mysql.com/doc/refman/5.5/en/connector-net.html You can find our team blog at http://blogs.oracle.com/MySQLOnWindows. You can also post questions on our forums at http://forums.mysql.com/. Enjoy and thanks for the support!

    Read the article

  • The way I think about Diagnostic tools

    - by Daniel Moth
    Every software has issues, or as we like to call them "bugs". That is not a discussion point, just a mere fact. It follows that an important skill for developers is to be able to diagnose issues in their code. Of course we need to advance our tools and techniques so we can prevent bugs getting into the code (e.g. unit testing), but beyond designing great software, diagnosing bugs is an equally important skill. To diagnose issues, the most important assets are good techniques, skill, experience, and maybe talent. What also helps is having good diagnostic tools and what helps further is knowing all the features that they offer and how to use them. The following classification is how I like to think of diagnostics. Note that like with any attempt to bucketize anything, you run into overlapping areas and blurry lines. Nevertheless, I will continue sharing my generalizations ;-) It is important to identify at the outset if you are dealing with a performance or a correctness issue. If you have a performance issue, use a profiler. I hear people saying "I am using the debugger to debug a performance issue", and that is fine, but do know that a dedicated profiler is the tool for that job. Just because you don't need them all the time and typically they cost more plus you are not as familiar with them as you are with the debugger, doesn't mean you shouldn't invest in one and instead try to exclusively use the wrong tool for the job. Visual Studio has a profiler and a concurrency visualizer (for profiling multi-threaded apps). If you have a correctness issue, then you have several options - that's next :-) This is how I think of identifying a correctness issue Do you want a tool to find the issue for you at design time? The compiler is such a tool - it gives you an exact list of errors. Compilers now also offer warnings, which is their way of saying "this may be an error, but I am not smart enough to know for sure". There are also static analysis tools, which go a step further than the compiler in identifying issues in your code, sometimes with the aid of code annotations and other times just by pointing them at your raw source. An example is FxCop and much more in Visual Studio 11 Code Analysis. Do you want a tool to find the issue for you with code execution? Just like static tools, there are also dynamic analysis tools that instead of statically analyzing your code, they analyze what your code does dynamically at runtime. Whether you have to setup some unit tests to invoke your code at runtime, or have to manually run your app (and interact with it) under the tool, or have to use a script to execute your binary under the tool… that varies. The result is still a list of issues for you to address after the analysis is complete or a pause of the execution when the first issue is encountered. If a code path was not taken, no analysis for it will exist, obviously. An example is the GPU Race detection tool that I'll be talking about on the C++ AMP team blog. Another example is the MSR concurrency CHESS tool. Do you want you to find the issue at design time using a tool? Perform a code walkthrough on your own or with colleagues. There are code review tools that go beyond just diffing sources, and they help you with that aspect too. For example, there is a new one in Visual Studio 11 and searching with my favorite search engine yielded this article based on the Developer Preview. Do you want you to find the issue with code execution? Use a debugger - let’s break this down further next. 
    This is how I think of debugging: There is post-mortem debugging. That means your code has executed and you did something in order to examine what happened during its execution. This can vary from manual printf and other tracing statements, to trace events (e.g. ETW), to taking dumps. In all cases you are left with some artifact that you examine after the fact (after code execution) to discern what took place, hoping it will help you find the bug. Learn how to debug dump files in Visual Studio. There is live debugging. I will elaborate on this in a separate post, but this is where you inspect the state of your program during its execution and try to find what the problem is. More from me in a separate post on live debugging. There is a hybrid of live plus post-mortem debugging. This is, for example, what tools like IntelliTrace offer.

    If you are a tools vendor interested in the diagnostics space, it helps to understand where in the above classification your tool excels, where its primary strength is, so you can market it as such. Then it helps to see which of the other areas above your tool touches on, and how you can make it even better there. Finally, see what areas your tool doesn't help with at all, and evaluate whether it should, or whether it should continue to stay clear of them. Even though the classification helps us think about this space, the reality is that the best tools are either excellent in only one of these areas or, more often, very good across a number of them. Another approach is to offer a toolset covering all areas, with appropriate integration and hand-off points from one to the other. Anyway, with that brain dump out of the way, in follow-up posts I will dive into live debugging, and specifically live debugging in Visual Studio - stay tuned if that interests you. Comments about this post by Daniel Moth are welcome at the original blog.

    Read the article

  • Personal search – the future of search

    - by jamiet
    [Four months ago I wrote a meandering blog post on another blogging site entitled Personal search – the future of search. The points I made therein are becoming more relevant to what I'm reading about and hoping to get involved in in the future, so I'm re-posting here to a wider audience to hopefully get some more feedback and gauge reaction to it. This has been prompted by the book Pull by David Siegel that is forming my current holiday reading (recommended to me by a commenter on my previous post Interesting things – Twitter annotations and your phone as a web server) and in particular by Siegel's notion of us all in the future having a personal online data vault.]

    My one-time colleague Paul Dawson recently wrote an article called The Future of Search and in it he proposed some interesting ideas. Some choice quotes: The growth of Chinese search giant Baidu is an indicator that fully localised and tailored content and offerings have great traction with local audiences. This trend is already driving an increase in the use of specialist searches … Look at how Farecast is now integrated into Bing for example, or how Flightstats is now integrated into Google. Search does not necessarily have to begin with a keyword, but could start instead with a click or a touch. Take a look at Retrievr. Start drawing a picture in the box and see what happens. This is certainly search without the need for typing in keywords; search technology has advanced greatly in recent years. The recent launch of Microsoft Live Labs’ Pivot has given us a taste of what we can expect to see in the future.

    This really got me thinking about where search might go in the future and, as my mind wandered, I realised that as the amount of data that we collect about ourselves increases, so too will the need and the desire to search it. The amount of electronic data that exists about each and every person is increasing and in the near future I fully expect that we are going to be able to store personal data such as: a history of our location (in fact Google Latitude already offers this facility), recordings of all our phone conversations, health information history (weight, blood pressure etc…), energy usage, spending history, what films we watch, what radio stations we listen to, and voting history.

    Of course, most of this stuff is already stored somewhere but crucially we don’t have easy access to it. My utilities supplier knows how much electricity I’m using but if I want to know for myself I have to go and dig through my statements (assuming I have kept them). Similarly my doctor probably has ready access to all of my health records, my bank knows exactly what I have spent my money on, my cable supplier knows what I watch on TV and my mobile phone supplier probably knows exactly where I am and where I’ve been for the past few years. Strange then that none of this electronic information is available to me in a way that I can really make use of it; after all, it's MY information. It's MY data. I created it. That is set to change. As technologies mature and customers become more technically cognizant they will demand more access to the data that companies hold about them. The companies themselves will realise the benefit that they derive from giving users what they want and will embrace ways of providing it.
    As a result the amount of data that we store about ourselves is going to increase exponentially and the desire to search and derive value from that data is going to grow with it; we are about to enter the era of the “personal datastore” and we will want, and need, to search through it in order to make sense of it all. It's interesting then that today when we think of search we think of search engines, and yet in these personal datastores we’re referring to data that search engines can’t touch because WE own it and we (hopefully) choose to keep it private. Someone, I know not who, is going to lead in this space by making it easy for us to search our data and retrieve information that we have either forgotten or maybe didn’t even know in the first place. We will learn new things about ourselves and about our habits; we will share these findings with whomever we choose; we will compare what we discover with others; we will collaborate for mutual benefit and, most of all, we will educate ourselves as to how to live our lives better. Search will be the means to that end; it will enable us to make sense of the wealth of information that we will collect day in, day out. The future of search is personal - why would we be interested in anything else? @Jamiet

    Read the article

  • SQL SERVER – Auditing and Profiling Database Made Easy with SQL Audit and Comply

    - by Pinal Dave
    Do you like auditing your database, or can you think of about a million other things you’d rather do?  Unfortunately, auditing is incredibly important.  As with tax audits, it is important to audit databases to ensure they are following all the rules, but audits are also important for troubleshooting and security. There are several ways to audit SQL Server.  There is manual auditing, which is going through your database “by hand”; this obviously takes a long time and is quite inefficient.  SQL Server also provides programs to help you audit your systems.  Different administrators will have different opinions about best practices and which tools to use, and each one is suited to certain systems and certain users.

    Today, though, I would like to talk about Apex SQL Audit.  It is an auditing tool that acts like “track changes” in a word processing document.  It will log what has changed on the database, who made the changes, and what effects these changes have had (i.e. what objects were affected down the line).  All this information is logged and can be easily viewed or printed for easy access.

    One of the best features of Apex is that it is so customizable (and easy to use!).  First, start Apex.  Then you can connect to the database you would like to monitor. Once you select your database, you can select which table you want to audit. You can customize right down to the field you’d like to audit, and then select which types of actions you’d like tracked – insert, delete, or update.  Repeat these steps for every database you want monitored. To create the logs, choose “Create triggers” in the menu.  The script written here will be what logs each insert, delete, and update.  Press F5 to execute.  (A rough idea of what such a change-tracking trigger might look like is sketched at the end of this post.)  All this tracking information will be stored in the AUDIT_LOG_DATA and AUDIT_LOG_TRANSACTIONS tables.  View these tables using ApexSQL Audit reports.

    These transaction logs can be extremely detailed – especially on very busy servers, where every move is traced.  Reading them can be overwhelming, to say the least.  Apex has tried to make things easier for the average DBA, though. You can read these tracking logs in Apex, and it will display data and objects that affect your server – even things that were happening on your server before you installed Apex! To read these logs, open Apex and connect to the database you want to audit. Go to the Transaction Logs tab, and add the logs you want to read. To narrow down what results you want to see, you can use the Filter tab to choose time, operation type, name, users, and more. Click Open, and you can see the results in a grid (as shown below).  You can export these results to CSV, HTML, XML or SQL files and save them to disk.

    One of the advantages is that since there are no triggers involved here, there are no extra processes that will affect SQL Server performance.  Using this method is also how you view history from your database that occurred before Apex was installed.  This type of tracking does require storage space for the data sources, the database must be fully running, and the transaction logs must exist (things not stored in the transaction logs will not be recoverable). Apex can also replace SQL Server Profiler and SQL Server Traces – which are much more complex and error-prone – with its ApexSQL Comply.  It can do fault-tolerant auditing, centralized reporting, and “who saw what” reporting in an easy-to-use interface.
    The tracking settings can be altered by the user, or the default options will provide solutions to the most common auditing problems. To get started, open ApexSQL Comply and select Database Filter Settings to choose which database you’d like to audit.  You can select which tracking you’d like in Operation Types – DML, DDL, queries executed, execute statements, and more – then click Start Auditing. After this, every action will be stored in the central repository database (ApexSQLCrd).  You can view the audit and create a report (or view the standard default report) using a wizard. You can see how easy it is to use ApexSQL Comply.  You can easily set audits, including the type and time, and create customized reports.  Remote users can easily access the reports through the user interface (available online as well), and security concerns are all taken care of by the program.  Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, SQL Utility, T SQL, Technology
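
    As promised above, here is a minimal hand-rolled sketch of a change-tracking trigger, just to give a rough idea of the mechanism. This is not the script ApexSQL Audit generates; the dbo.Customers table, its CustomerId key and the dbo.AuditLog table are hypothetical names used purely for illustration.

        -- Rough sketch of a change-tracking trigger (hypothetical objects, not the tool's script).
        CREATE TABLE dbo.AuditLog
        (
            AuditId    INT IDENTITY(1,1) PRIMARY KEY,
            TableName  SYSNAME     NOT NULL,
            Operation  VARCHAR(10) NOT NULL,   -- INSERT / UPDATE / DELETE
            KeyValue   INT         NOT NULL,   -- primary key of the audited row
            ChangedBy  SYSNAME     NOT NULL DEFAULT SUSER_SNAME(),
            ChangedAt  DATETIME    NOT NULL DEFAULT GETDATE()
        );
        GO

        CREATE TRIGGER dbo.trg_Customers_Audit
        ON dbo.Customers
        AFTER INSERT, UPDATE, DELETE
        AS
        BEGIN
            SET NOCOUNT ON;

            -- Work out which operation fired the trigger.
            DECLARE @op VARCHAR(10) =
                CASE WHEN EXISTS (SELECT 1 FROM inserted) AND EXISTS (SELECT 1 FROM deleted) THEN 'UPDATE'
                     WHEN EXISTS (SELECT 1 FROM inserted) THEN 'INSERT'
                     ELSE 'DELETE' END;

            -- Log one row per affected key.
            INSERT dbo.AuditLog (TableName, Operation, KeyValue)
            SELECT 'Customers', @op, COALESCE(i.CustomerId, d.CustomerId)
            FROM inserted AS i
            FULL OUTER JOIN deleted AS d ON i.CustomerId = d.CustomerId;
        END;
        GO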

    Read the article

  • SQL SERVER – Parsing SSIS Catalog Messages – Notes from the Field #030

    - by Pinal Dave
    [Note from Pinal]: This is a new episode of Notes from the Field series. SQL Server Integration Service (SSIS) is one of the most key essential part of the entire Business Intelligence (BI) story. It is a platform for data integration and workflow applications. The tool may also be used to automate maintenance of SQL Server databases and updates to multidimensional cube data. In this episode of the Notes from the Field series I requested SSIS Expert Andy Leonard to discuss one of the most interesting concepts of SSIS Catalog Messages. There are plenty of interesting and useful information captured in the SSIS catalog and we will learn together how to explore the same. The SSIS Catalog captures a lot of cool information by default. Here’s a query I use to parse messages from the catalog.operation_messages table in the SSISDB database, where the logged messages are stored. This query is set up to parse a default message transmitted by the Lookup Transformation. It’s one of my favorite messages in the SSIS log because it gives me excellent information when I’m tuning SSIS data flows. The message reads similar to: Data Flow Task:Information: The Lookup processed 4485 rows in the cache. The processing time was 0.015 seconds. The cache used 1376895 bytes of memory. The query: USE SSISDB GO DECLARE @MessageSourceType INT = 60 DECLARE @StartOfIDString VARCHAR(100) = 'The Lookup processed ' DECLARE @ProcessingTimeString VARCHAR(100) = 'The processing time was ' DECLARE @CacheUsedString VARCHAR(100) = 'The cache used ' DECLARE @StartOfIDSearchString VARCHAR(100) = '%' + @StartOfIDString + '%' DECLARE @ProcessingTimeSearchString VARCHAR(100) = '%' + @ProcessingTimeString + '%' DECLARE @CacheUsedSearchString VARCHAR(100) = '%' + @CacheUsedString + '%' SELECT operation_id , SUBSTRING(MESSAGE, (PATINDEX(@StartOfIDSearchString,MESSAGE) + LEN(@StartOfIDString) + 1), ((CHARINDEX(' ', MESSAGE, PATINDEX(@StartOfIDSearchString,MESSAGE) + LEN(@StartOfIDString) + 1)) - (PATINDEX(@StartOfIDSearchString, MESSAGE) + LEN(@StartOfIDString) + 1))) AS LookupRowsCount , SUBSTRING(MESSAGE, (PATINDEX(@ProcessingTimeSearchString,MESSAGE) + LEN(@ProcessingTimeString) + 1), ((CHARINDEX(' ', MESSAGE, PATINDEX(@ProcessingTimeSearchString,MESSAGE) + LEN(@ProcessingTimeString) + 1)) - (PATINDEX(@ProcessingTimeSearchString, MESSAGE) + LEN(@ProcessingTimeString) + 1))) AS LookupProcessingTime , CASE WHEN (CONVERT(numeric(3,3),SUBSTRING(MESSAGE, (PATINDEX(@ProcessingTimeSearchString,MESSAGE) + LEN(@ProcessingTimeString) + 1), ((CHARINDEX(' ', MESSAGE, PATINDEX(@ProcessingTimeSearchString,MESSAGE) + LEN(@ProcessingTimeString) + 1)) - (PATINDEX(@ProcessingTimeSearchString, MESSAGE) + LEN(@ProcessingTimeString) + 1))))) = 0 THEN 0 ELSE CONVERT(bigint,SUBSTRING(MESSAGE, (PATINDEX(@StartOfIDSearchString,MESSAGE) + LEN(@StartOfIDString) + 1), ((CHARINDEX(' ', MESSAGE, PATINDEX(@StartOfIDSearchString,MESSAGE) + LEN(@StartOfIDString) + 1)) - (PATINDEX(@StartOfIDSearchString, MESSAGE) + LEN(@StartOfIDString) + 1)))) / CONVERT(numeric(3,3),SUBSTRING(MESSAGE, (PATINDEX(@ProcessingTimeSearchString,MESSAGE) + LEN(@ProcessingTimeString) + 1), ((CHARINDEX(' ', MESSAGE, PATINDEX(@ProcessingTimeSearchString,MESSAGE) + LEN(@ProcessingTimeString) + 1)) - (PATINDEX(@ProcessingTimeSearchString, MESSAGE) + LEN(@ProcessingTimeString) + 1)))) END AS LookupRowsPerSecond , SUBSTRING(MESSAGE, (PATINDEX(@CacheUsedSearchString,MESSAGE) + LEN(@CacheUsedString) + 1), ((CHARINDEX(' ', MESSAGE, PATINDEX(@CacheUsedSearchString,MESSAGE) + LEN(@CacheUsedString) 
+ 1)) - (PATINDEX(@CacheUsedSearchString, MESSAGE) + LEN(@CacheUsedString) + 1))) AS LookupBytesUsed ,CASE WHEN (CONVERT(bigint,SUBSTRING(MESSAGE, (PATINDEX(@StartOfIDSearchString,MESSAGE) + LEN(@StartOfIDString) + 1), ((CHARINDEX(' ', MESSAGE, PATINDEX(@StartOfIDSearchString,MESSAGE) + LEN(@StartOfIDString) + 1)) - (PATINDEX(@StartOfIDSearchString, MESSAGE) + LEN(@StartOfIDString) + 1)))))= 0 THEN 0 ELSE CONVERT(bigint,SUBSTRING(MESSAGE, (PATINDEX(@CacheUsedSearchString,MESSAGE) + LEN(@CacheUsedString) + 1), ((CHARINDEX(' ', MESSAGE, PATINDEX(@CacheUsedSearchString,MESSAGE) + LEN(@CacheUsedString) + 1)) - (PATINDEX(@CacheUsedSearchString, MESSAGE) + LEN(@CacheUsedString) + 1)))) / CONVERT(bigint,SUBSTRING(MESSAGE, (PATINDEX(@StartOfIDSearchString,MESSAGE) + LEN(@StartOfIDString) + 1), ((CHARINDEX(' ', MESSAGE, PATINDEX(@StartOfIDSearchString,MESSAGE) + LEN(@StartOfIDString) + 1)) - (PATINDEX(@StartOfIDSearchString, MESSAGE) + LEN(@StartOfIDString) + 1)))) END AS LookupBytesPerRow FROM [catalog].[operation_messages] WHERE message_source_type = @MessageSourceType AND MESSAGE LIKE @StartOfIDSearchString GO Note that you have to set some parameter values: @MessageSourceType [int] – represents the message source type value from the following results: Value     Description 10           Entry APIs, such as T-SQL and CLR Stored procedures 20           External process used to run package (ISServerExec.exe) 30           Package-level objects 40           Control Flow tasks 50           Control Flow containers 60           Data Flow task 70           Custom execution message Note: Taken from Reza Rad’s (excellent!) helper.MessageSourceType table found here. @StartOfIDString [VarChar(100)] – use this to uniquely identify the message field value you wish to parse. In this case, the string ‘The Lookup processed ‘ identifies all the Lookup Transformation messages I desire to parse. @ProcessingTimeString [VarChar(100)] – this parameter is message-specific. I use this parameter to specifically search the message field value for the beginning of the Lookup Processing Time value. For this execution, I use the string ‘The processing time was ‘. @CacheUsedString [VarChar(100)] – this parameter is also message-specific. I use this parameter to specifically search the message field value for the beginning of the Lookup Cache  Used value. It returns the memory used, in bytes. For this execution, I use the string ‘The cache used ‘. The other parameters are built from variations of the parameters listed above. The query parses the values into text. The string values are converted to numeric values for ratio calculations; LookupRowsPerSecond and LookupBytesPerRow. Since ratios involve division, CASE statements check for denominators that equal 0. Here are the results in an SSMS grid: This is not the only way to retrieve this information. And much of the code lends itself to conversion to functions. If there is interest, I will share the functions in an upcoming post. If you want to get started with SSIS with the help of experts, read more over at Fix Your SQL Server. Reference: Pinal Dave (http://blog.sqlauthority.com)Filed under: Notes from the Field, PostADay, SQL, SQL Authority, SQL Backup and Restore, SQL Query, SQL Server, SQL Tips and Tricks, T SQL Tagged: SSIS
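
    If you just want to eyeball the raw messages before running the full parsing query above, a simple query against the same catalog view is enough. This is only a convenience sketch built from the table, column and message text already described in the post:

        -- Quick look at the raw Lookup messages before parsing them.
        USE SSISDB;
        GO
        SELECT TOP (20)
               operation_id,
               message_time,
               [message]
        FROM   [catalog].[operation_messages]
        WHERE  message_source_type = 60                    -- Data Flow task
          AND  [message] LIKE '%The Lookup processed %'
        ORDER BY message_time DESC;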

    Read the article

  • SQL SERVER – SSMS: Top Object and Batch Execution Statistics Reports

    - by Pinal Dave
    The month of June till the middle of July has been full of sports fever. First it was Wimbledon tennis, and then soccer fever was all over. There is a huge number of fans, and it is great to see the level at which people sometimes worship these sports. Being an Indian, I cannot forget to mention India's tour of England in the later part of July. Following these sports as the events unfold towards the finals, there are a number of ways the statisticians can slice and dice the numbers. Taking a cue from soccer, I can surely say there is a team's performance against another team, and then there is how an individual member fares against a particular opponent. Such statistics give us a fair idea of how one team has fared against another in the past or the recent past: head-to-head stats during the World Cup and during other neutral-venue games. All these statistics are just pointers. In reality, they don’t reflect the calibre of the current team because the individuals who performed in each of these games are totally different (a typical example being the Brazil vs Germany semi-final match in FIFA 2014). So at times these numbers are misleading. It is worth investigating and getting the next level of information.

    Similar to these statistics, SQL Server Management Studio is also equipped with a number of reports, like a) the Object Execution Statistics report and b) the Batch Execution Statistics report. As discussed in the example, the team scorecard is like the Batch Execution Statistics and the individual stats are like the Object Execution Statistics. The analogy can be taken only this far, trust me – there is no correlation between SQL Server functioning and playing sports. It is like I think about diet all the time except while I am eating.

    Performance – Batch Execution Statistics

    Let us view the first report, which can be invoked from Server Node -> Reports -> Standard Reports -> Performance – Batch Execution Statistics. Most of the values that are displayed in this report come from the DMVs sys.dm_exec_query_stats and sys.dm_exec_sql_text(sql_handle). This report contains 3 distinctive sections, as outlined below. Section 1: This is a bar graph representation of Average CPU Time, Average Logical Reads and Average Logical Writes for individual batches. The batch numbers are indicative, and the details of each individual batch are available in section 3 (detailed below). Section 2: This represents a pie chart of all the batches by Total CPU Time (%) and Total Logical IO (%). This graphical representation tells us which batch has consumed the highest CPU and IO since the server started, provided the plan is available in the cache. Section 3: This is the section where we can find the SQL statements associated with each of the batch numbers. It also gives us the details of Average CPU, Average Logical Reads and Average Logical Writes in the system for the given batch, with object details. Expanding the rows, I will also get the # Executions and # Plans Generated for each of the queries.

    Performance – Object Execution Statistics

    The second report worth a look is Object Execution Statistics. This is a similar report to the previous one, but turned on its head by SQL Server objects. The report has 3 areas to look at, as above. Section 1 gives the Average CPU and Average IO bar charts for specific objects. Section 2 is a graphical representation of Total CPU by objects and Total Logical IO by objects. The final section details the various objects with the Avg. CPU, IO and other details, which are self-explanatory.
    At a high level, both reports are based on queries over two DMVs (sys.dm_exec_query_stats and sys.dm_exec_sql_text) and build their values from calculations using columns in them:

    SELECT *
    FROM sys.dm_exec_query_stats s1
    CROSS APPLY sys.dm_exec_sql_text(sql_handle) AS s2
    WHERE s2.objectid IS NOT NULL AND DB_NAME(s2.dbid) IS NOT NULL
    ORDER BY s1.sql_handle;

    This is one of the simplest forms of report, and in future blogs we will look at more complex reports. I truly hope that these reports can give DBAs and developers a hint about possible performance tuning areas. As a closing point I must emphasize that all the above reports pick up data from the plan cache. If a particular query consumed a lot of resources earlier, but its plan is not available in the cache, none of the above reports will show that bad query. Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: SQL, SQL Authority, SQL Query, SQL Server, SQL Server Management Studio, SQL Tips and Tricks, T SQL Tagged: SQL Reports
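
    The reports roll these numbers up per batch and per object. A rough hand-written approximation of the same idea is sketched below; it works per cached statement rather than per batch, so treat it as an illustration of the calculation, not as the exact query behind the report.

        -- Sketch: top cached statements by average CPU, with average logical reads and writes.
        -- Approximates what the Batch/Object Execution Statistics reports aggregate.
        SELECT TOP (10)
               s2.[text]                                            AS statement_text,
               s1.execution_count,
               s1.total_worker_time    / s1.execution_count         AS avg_cpu_microseconds,
               s1.total_logical_reads  / s1.execution_count         AS avg_logical_reads,
               s1.total_logical_writes / s1.execution_count         AS avg_logical_writes
        FROM   sys.dm_exec_query_stats AS s1
        CROSS APPLY sys.dm_exec_sql_text(s1.sql_handle) AS s2
        ORDER BY s1.total_worker_time / s1.execution_count DESC;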

    Read the article

  • JavaOne pictures and Community Commentary on JCP Awards

    - by heathervc
    We posted some pictures from JCP related events at JavaOne 2012 on the JCP Facebook page today.  The 2012 JCP Program Award winners and some of the nominees responded to the community recognition of their achievements during some of the JCP events last week.     “Our job on the EC is to balance the need of innovation – so we don’t standardize too early, or too late. We try to find that sweet spot that makes innovation and standardization work together, and not against each other.”- Ben Evans, CEO of jClarity and Executive Committee (EC) representative of the London Java Community, 2012 JCP Member/Participant of the Year Winner“SouJava has been evangelizing the Java platform, promoting the Java ecosystem in Brazil, and contributing to JSRs for several years. It’s very gratifying to have our work recognized, on behalf of many developers and Java User Groups around the world. This really is the work of a large group of people, represented by the few that can be here tonight.”- Michael Santos, representative of SouJava, 2012 JCP Member/Participant of the Year Winner "In the last years Credit Suisse has contributed to the development of Java EE specifications through participation in many customer advisory boards, through statements of requirements for extensions to the core Java related products in use, and active participation in JSRs. Winning the JCP Outstanding Spec Lead Award 2012 is very encouraging for our engagement and also demonstrates the level of expertise and commitment to drive the evolution of Java. Victor Grazi is happy and honored to receive this award." - Susanne Cech Previtali, Executive Committee (EC) representative of Credit Suisse, accepting award for 2012 JCP Outstanding Spec Lead Winner "Managing a JSR is difficult. There are so many decisions to be made and so many good and varied opinions, you never really know if you have decided correctly. The key to success is transparency and collaboration. I am truly humbled by receiving this award, there are so many other active JSRs.” Victor added that going forward in the JCP EC, they would like to simplify and open the process of participation – being addressed in the JCP.Next initiative of the JCP EC. "We would also like to encourage the engagement of universities, professors and students – as an important part of the Java community. While innovation is the lifeblood of our community and industry, without strong standards and compatibility requirements, we all end up in a maze of technology where everything is slightly different and doesn’t quite work with everything else." Victo Grazi, Executive Committee (EC) representative of Credit Suisse, 2012 JCP Outstanding Spec Lead Winner“I am very pleased, of course, to accept this award, but the credit really should go to all of those who have participated in the work of the JCP, while pushing for changes in the way it operates.  JCP.Next represents three JSRs. The first two are done, but the final step, JSR 358, is the complicated one, and it will bring in the lawyers. 
Just to give you an idea of what we’re dealing with, it affects licensing, intellectual property, patents, implementations not based on the Reference Implementation (RI), the role of the RI, compatibility policy, possible changes to the Technical Compatibility Kit (TCK), transparency, where do individuals fit in, open source, and more.”- Patrick Curran, JCP Chair, Spec Lead on JCP.Next JSRs (JSR 348, JSR 355 and JSR 358), 2012 JCP Most Significant JSR Winner“I’m especially glad to see the JCP community recognize JCP.Next for its importance. The governance work it represents is KEY to moving the Java platform forward and the success of the technology.”- John Rizzo, Executive Committee (EC) representative of Aplix Corporation, JSR Expert Group Member “I am deeply honored to be nominated. I had the privilege to receive two awards on behalf of Expert Groups and Spec Leads two years ago. But this time, I am nominated personally, which values my own contribution to the JCP, and of course, participation in JSRs and the EC work. I’m a fan of Agile Principles and Values Working. Being an Agile Coach and Consultant, I use it for some of the biggest EC Member companies and projects. It fuels my ability to help the JCP become more agile, lean and transparent as part of the JCP.Next effort.” - Werner Keil, Individual Executive Committee (EC) Member, a 2012 JCP Member/Participant of the Year Nominee, JSR Expert Group Member“The JCP ever has been some kind of institution for me,” Markus said. “If in technical doubt, I go there, look for the specifications of the implementation I work with at the moment and verify what I had observed. Since the beginning of my Java journey more than 12 years back now, I always had a strong relationship with the JCP. Shaping the future of a technology by joining the JCP – giving feedback and contributing to the road ahead through individual JSRs – that brings you to a whole new level.”Calling himself, “the new kid on the block,” he explained that for years he was afraid to join the JCP and contribute. But in reality, “Every single one of the big names I meet from the different Expert Groups is a nice person. People you can actually work with,” he says. “And nobody blames you for things you don't know. As long as you are committed and bring what is worth the most: passion, experiences and the desire to make a difference.” - Markus Eisele, a 2012 JCP Member of the Year Nominee, JSR Expert Group MemberCongratulations again to all of the nominees and winners of the JCP Program Awards.  Next year, we will add another award for the group of JUG members (not an entire JUG) that makes the best contribution to the Adopt-a-JSR program.  Let us know if you have other suggestions or improvements.

    Read the article

  • MySQL Connector/Net 6.5.5 Maintenance Release has been released

    - by fernando
    MySQL Connector/Net 6.5.5, a new maintenance release of our 6.5 series, has been released.  This release is GA quality and is appropriate for use in production environments.  Please note that 6.6 is our latest driver series and is the recommended product for development. It is now available in source and binary form from http://dev.mysql.com/downloads/connector/net/#downloads and mirror sites (note that not all mirror sites may be up to date at this point-if you can't find this version on some mirror, please try again later or choose another download site.) The 6.5.5 version of MySQL Connector/Net brings the following fixes: - Fix for ArgumentNull exception when using Take().Count() in a LINQ to Entities query (bug MySql #64749, Oracle bug #13913047). - Fix for type varchar changed to bit when saving in Table Designer (Oracle bug #13916560). - Fix for error when trying to change the name of an Index on the Indexes/Keys editor; along with this fix now users can change the Index type of a new Index which could not be done   in previous versions, and when changing the Index name the change is reflected on the list view at the left side of the Index/Keys editor (Oracle bug #13613801). - Fix for stored procedure call using only its name with EF code first (MySql bug #64999, Oracle bug #14008699). - Fix for List.Contains generates a bunch of ORs instead of more efficient IN clause in   LINQ to Entities (Oracle bug #14016344, MySql bug #64934). - Fix for performance issue in generated EF query: .NET StartsWith/Contains/EndsWith produces MySql's locate instead of Like (MySql bug #64935, Oracle bug #14009363). - Fix for script generated for code first contains wrong alter table and wrong declaration for byte[] (MySql bug #64216, Oracle bug #13900091). - Fix and code contribution for bug Timed out sessions are removed without notification which allow to enable the Expired CallBack when Session Provider times out any session (bug MySql #62266 Oracle bug # 13354935) - Fix for Exception thrown when using cascade delete in an EDM Model-First in Entity Framework (Oracle bug #14008752, MySql bug #64779). - Fix for Session locking issue with MySqlSessionStateStore (MySql bug #63997, Oracble bug #13733054). - Fixed deleting a user profile using Profile provider (MySQL bug #64470, Oracle bug #13790123) - Fix for bug Cannot Create an Entity with a Key of Type String (MySQL bug #65289, Oracle bug #14540202). This fix checks if the type has a FixedLength facet set in order to create a char otherwise should create varchar, mediumtext or longtext types when using a String CLR type in Code First or Model First also tested in Database First. Unit tests added for Code First and ProviderManifest. - Fix for bug "CacheServerProperties can cause 'Packet too large' error". The issue was due to a missing reading of Max_allowed_packet server property when CacheServerProperties is in true, since the value was read only in the first connection but the following pooled connections had a wrong value causing a Packet too large error. Including also a unit test for this scenario. All unit test passed. MySQL Bug #66578 Orabug #14593547. - Fix for handling unnamed parameter in MySQLCommand. This fix allows the mysqlcommand to handle parameters without requiring naming (e.g. INSERT INTO Test (id,name) VALUES (?, ?) ) (MySQL Bug #66060, Oracle bug #14499549). - Fixed inheritance on Entity Framework Code First scenarios. 
Discriminator column is created using its correct type as varchar(128) (MySql bug #63920 and Oracle bug #13582335). - Fixed "Trying to customize column precision in Code First does not work" (MySql bug #65001, Oracle bug #14469048). - Fixed bug ASP.NET Membership database fails on MySql database UTF32 (MySQL bug #65144, Oracle bug #14495292). - Fix for MySqlCommand.LastInsertedId holding only 32 bit values (MySql bug #65452, Oracle bug #14171960) by changing   several internal declaration of lastinsertid from int to long. - Fixed "Decimal type should have digits at right of decimal point", now default is 2, but user's changes in   EDM designer are recognized (MySql bug #65127, Oracle bug #14474342). - Fix for NullReferenceException when saving an uninitialized row in Entity Framework (MySql bug #66066, Oracle bug #14479715). - Fix for error when calling RoleProvider.RemoveUserFromRole(): causes an exception due to a wrong table being used (MySql bug #65805, Oracle bug #14405338). - Fix for "Memory Leak on MySql.Data.MySqlClient.MySqlCommand", too many MemoryStream's instances created (MySql bug #65696, Oracle bug #14468204). - Added ANTLR attribution notice (Oracle bug #14379162). - Fixed Entity Framework + mysql connector/net in partial trust throws exceptions (MySql bug #65036, Oracle bug #14668820). - Added support in Parser for Datetime and Time types with precision when using Server 5.6 (No bug Number). - Small improvement on MySqlPoolManager CleanIdleConnections for better mysqlpoolmanager idlecleanuptimer at startup (MySql bug #66472 and Oracle bug #14652624). - Fix for bug TIMESTAMP values are mistakenly represented as DateTime with Kind = Local (Mysql bug #66964, Oracle bug #14740705). - Fix for bug Keyword not supported. Parameter name: AttachDbFilename (Mysql bug #66880, Oracle bug #14733472). - Added support to MySql script file to retrieve data when using "SHOW" statements. - Fix for Package Load Failure in Visual Studio 2005 (MySql bug #63073, Oracle bug #13491674). - Fix for bug "Unable to connect using IPv6 connections" (MySQL bug #67253, Oracle bug #14835718). - Added auto-generated values for Guid identity columns (MySql bug #67450, Oracle bug #15834176). - Fix for method FirstOrDefault not supported in some LINQ to Entities queries (MySql bug #67377, Oracle bug #15856964). The release is available to download at http://dev.mysql.com/downloads/connector/net/6.5.html Documentation ------------------------------------- You can view current Connector/Net documentation at http://dev.mysql.com/doc/refman/5.5/en/connector-net.html You can find our team blog at http://blogs.oracle.com/MySQLOnWindows. You can also post questions on our forums at http://forums.mysql.com/. Enjoy and thanks for the support! 

    Read the article

  • SQL SERVER – SQL in Sixty Seconds – 5 Videos from Joes 2 Pros Series – SQL Exam Prep Series 70-433

    - by pinaldave
    The Joes 2 Pros SQL Server learning series is indeed fun. The Joes 2 Pros series is written for beginners and for those who want to build expertise in SQL Server programming and development from the fundamentals. At the beginning of the series, author Rick Morelan is not shy about explaining a concept as simple as how to open SQL Server Management Studio. Honestly, the book starts that basic, but as it progresses Rick discusses various advanced concepts, from query tuning to core architecture. This five-part series is written with SQL Server Exam 70-433 in mind. Instead of just focusing on what will be in the exam, the series focuses on learning the important concepts thoroughly. The books in no way take shortcuts in explaining any concept and, at times, will go beyond the topic at length. The best part is that all the books have many companion videos explaining the concepts. Every Wednesday I like to post a video which explains something in a quick few seconds. Today we will go over five videos which I posted in my earlier posts related to the Joes 2 Pros series.

    Introduction to XML Data Type Methods – SQL in Sixty Seconds #015

    The XML data type was first introduced with SQL Server 2005. This data type continues with SQL Server 2008, where expanded XML features are available, most notably the power of the XQuery language to analyze and query the values contained in your XML instance. There are five XML data type methods available in SQL Server 2008: query() – used to extract XML fragments from an XML data type; value() – used to extract a single value from an XML document; exist() – used to determine if a specified node exists (returns 1 if yes and 0 if no); modify() – updates XML data in an XML data type; nodes() – shreds XML data into multiple rows (not covered in this blog post). [Detailed Blog Post] | [Quiz with Answer]

    Introduction to SQL Error Actions – SQL in Sixty Seconds #014

    Most people believe that when SQL Server encounters an error of severity level 11 or higher, the remaining SQL statements will not get executed. In addition, people also believe that if any error of severity level 11 or higher is hit inside an explicit transaction, then the whole statement will fail as a unit. While both of these beliefs are true 99% of the time, they are not true in all cases. It is these outlying cases that frequently cause unexpected results in your SQL code. To understand how to achieve consistent results you need to know the four ways SQL error actions can react to error severity levels 11-16: Statement Termination – the statement within the procedure fails but the code keeps on running to the next statement; transactions are not affected. Scope Abortion – the current procedure, function or batch is aborted and the next calling scope keeps running; that is, if Stored Procedure A calls B and C, and B fails, then nothing further in B runs but A continues to call C; @@Error is set but the procedure does not have a return value. Batch Termination – the entire client call is terminated. XACT_ABORT – (ON = the entire client call is terminated) or (OFF = SQL Server will choose how to handle each error). [Detailed Blog Post] | [Quiz with Answer]

    Introduction to Basics of a Query Hint – SQL in Sixty Seconds #013

    Query hints specify that the indicated hints should be used throughout the query. Query hints affect all operators in the statement and are implemented using the OPTION clause.
Cautionary Note: Because the SQL Server Query Optimizer typically selects the best execution plan for a query, it is highly recommended that hints be used only as a last resort, and only by experienced developers and database administrators, to achieve the desired results. [Detailed Blog Post] | [Quiz with Answer] Introduction to Hierarchical Query – SQL in Sixty Seconds #012 A CTE can be thought of as a temporary result set and is similar to a derived table in that it is not stored as an object and lasts only for the duration of the query. A CTE is generally considered to be more readable than a derived table and does not require the extra effort of declaring a Temp Table while providing the same benefits to the user. However, a CTE is more powerful than a derived table as it can also be self-referencing, or even referenced multiple times in the same query. A recursive CTE requires four elements in order to work properly: Anchor query (runs once and the results ‘seed’ the Recursive query) Recursive query (runs multiple times and is the criteria for the remaining results) UNION ALL statement to bind the Anchor and Recursive queries together. INNER JOIN statement to bind the Recursive query to the results of the CTE. [Detailed Blog Post] | [Quiz with Answer] Introduction to SQL Server Security – SQL in Sixty Seconds #011 Let’s get some basic definitions down first. Take the workplace example where “Tom” needs “Read” access to the “Financial Folder”. What are the Securable, Principal, and Permissions from that last sentence? A Securable is a resource that someone might want to access (like the Financial Folder). A Principal is anything that might want to gain access to the securable (like Tom). A Permission is the level of access a principal has to a securable (like Read). [Detailed Blog Post] | [Quiz with Answer] Please leave a comment explaining which video was your favorite, as that will help me understand what works and what needs improvement. Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, T SQL, Technology, Video

    Read the article

  • Towards an F# .NET Reflector add-in

    - by CliveT
    When I had the opportunity to spend some time during Red Gate's recent "down tools" week on a project of my choice, the obvious project was an F# add-in for Reflector. To be honest, this was a bit of a misnomer as the amount of time in the designated week for coding was really less than three days, so it was always unlikely that very much progress would be made in such a small amount of time (and that certainly proved to be the case), but I did learn some things from the experiment. As with lots of problems, one useful technique is to take examples, get them to work, and then generalise to get something that works across the board. Unfortunately, I didn't have enough time to do the last stage. The obvious first step is to take a few function definitions, starting with the obvious hello world, moving on to a non-recursive function and finishing with the ubiquitous recursive Fibonacci function. let rec printMessage message  =     printfn  message let foo x  =    (x + 1) let rec fib x  =     if (x >= 2) then (fib (x - 1) + fib (x - 2)) else 1 The major problem in decompiling these simple functions is that Reflector has an in-memory object model that is designed to support object-oriented languages. In particular it has a return statement that allows function bodies to finish early. I used some of the in-built functionality to take the IL and produce an in-memory object model for the language, but then needed to write a transformer to push the return statements to the top of the tree to make it easy to render the code into a functional language. This tree transform works in some scenarios, but not in others where we simply regenerate code that looks more like CPS style. The next thing to get working was library level bindings of values where these values are calculated at runtime. let x = [1 ; 2 ; 3 ; 4] let y = List.map  (fun x -> foo x) x The way that this is translated into a set of classes for the underlying platform means that the code needs to follow references around, from the property exposing the calculated value to the class in which the code for generating the value is embedded. One of the strongest selling points of functional languages is the algebraic datatypes, which allow definitions via standard mathematical-style inductive definitions across the union cases. type Foo =     | Something of int     | Nothing type 'a Foo2 =     | Something2 of 'a     | Nothing2 Such a definition is compiled into a number of classes for the cases of the union, which all inherit from a class representing the type itself. It wasn't too hard to get such a de-compilation happening in the cases I tried. What did I learn from this? Firstly, that there are various bits of functionality inside Reflector that it would be useful for us to allow add-in writers to access. In particular, there are various implementations of the Visitor pattern which implement algorithms such as calculating the number of references for particular variables, and which perform various substitutions that could be more generally useful to add-in writers. I hope to do something about this at some point in the future. Secondly, when you transform a functional language into something that runs on top of an object-based platform, you lose some fidelity in the representation. The F# compiler leaves attributes in place so that tools can tell which classes represent classes from the source program and which are there for purposes of the implementation, allowing the decompiler to regenerate these constructs again.
However, decompilation technology is a long way from being able to take unannotated IL and transform it into a program in a different language. For a simple function definition, like Fibonacci, I could write a simple static function and have it come out in F# as the same function, but it would be practically impossible to take a mass of class definitions and have a decompiler translate it automatically into an F# algebraic data type. What have we got out of this? Some data on the feasibility of implementing an F# decompiler inside Reflector, though it's hard at the moment to say how long this would take to do. The work we did is included in the 6.5 EAP for Reflector, which you can get from the EAP forum. All things considered though, it was a useful way to gain more familiarity with the process of writing an add-in and to understand difficulties other add-in authors might experience. If you'd like to check out a video of Down Tools Week, click here.

    Read the article

  • Write your Tests in RSpec with IronRuby

    - by kazimanzurrashid
    [Note: This is not a continuation of my previous post, treat it as an experiment out in the wild. ] Lets consider the following class, a fictitious Fund Transfer Service: public class FundTransferService : IFundTransferService { private readonly ICurrencyConvertionService currencyConvertionService; public FundTransferService(ICurrencyConvertionService currencyConvertionService) { this.currencyConvertionService = currencyConvertionService; } public void Transfer(Account fromAccount, Account toAccount, decimal amount) { decimal convertionRate = currencyConvertionService.GetConvertionRate(fromAccount.Currency, toAccount.Currency); decimal convertedAmount = convertionRate * amount; fromAccount.Withdraw(amount); toAccount.Deposit(convertedAmount); } } public class Account { public Account(string currency, decimal balance) { Currency = currency; Balance = balance; } public string Currency { get; private set; } public decimal Balance { get; private set; } public void Deposit(decimal amount) { Balance += amount; } public void Withdraw(decimal amount) { Balance -= amount; } } We can write the spec with MSpec + Moq like the following: public class When_fund_is_transferred { const decimal ConvertionRate = 1.029m; const decimal TransferAmount = 10.0m; const decimal InitialBalance = 100.0m; static Account fromAccount; static Account toAccount; static FundTransferService fundTransferService; Establish context = () => { fromAccount = new Account("USD", InitialBalance); toAccount = new Account("CAD", InitialBalance); var currencyConvertionService = new Moq.Mock<ICurrencyConvertionService>(); currencyConvertionService.Setup(ccv => ccv.GetConvertionRate(Moq.It.IsAny<string>(), Moq.It.IsAny<string>())).Returns(ConvertionRate); fundTransferService = new FundTransferService(currencyConvertionService.Object); }; Because of = () => { fundTransferService.Transfer(fromAccount, toAccount, TransferAmount); }; It should_decrease_from_account_balance = () => { fromAccount.Balance.ShouldBeLessThan(InitialBalance); }; It should_increase_to_account_balance = () => { toAccount.Balance.ShouldBeGreaterThan(InitialBalance); }; } and if you run the spec it will give you a nice little output like the following: When fund is transferred » should decrease from account balance » should increase to account balance 2 passed, 0 failed, 0 skipped, took 1.14 seconds (MSpec). Now, lets see how we can write exact spec in RSpec. 
require File.dirname(__FILE__) + "/../FundTransfer/bin/Debug/FundTransfer" require "spec" require "caricature" describe "When fund is transferred" do Convertion_Rate = 1.029 Transfer_Amount = 10.0 Initial_Balance = 100.0 before(:all) do @from_account = FundTransfer::Account.new("USD", Initial_Balance) @to_account = FundTransfer::Account.new("CAD", Initial_Balance) currency_convertion_service = Caricature::Isolation.for(FundTransfer::ICurrencyConvertionService) currency_convertion_service.when_receiving(:get_convertion_rate).with(:any, :any).return(Convertion_Rate) fund_transfer_service = FundTransfer::FundTransferService.new(currency_convertion_service) fund_transfer_service.transfer(@from_account, @to_account, Transfer_Amount) end it "should decrease from account balance" do @from_account.balance.should be < Initial_Balance end it "should increase to account balance" do @to_account.balance.should be > Initial_Balance end end I think the above code is self-explanatory. Treat the require statements (lines 1-4) as the add-reference step of our Visual Studio projects; we are adding all the required libraries with these statements. Next comes describe, which is an RSpec keyword. The before does exactly the same as NUnit's Setup or MsTest's TestInitialize attribute, but in the above we are using before(:all), which acts as MsTest's ClassInitialize, meaning it will be executed only once before all the test methods. In the before(:all) we are first instantiating the from and to accounts, which is the same as creating them with the full name (including namespace) like fromAccount = new FundTransfer.Account(.., ..). Next, we are creating a mock object of ICurrencyConvertionService; note that for creating the mock we are not using Moq as in the MSpec version. This is a somewhat interesting issue with IronRuby, or maybe the DLR: it seems that it is not possible in IronRuby to use the lambda expressions that most mocking tools use in the arrange phase, like: currencyConvertionService.Setup(ccv => ccv.GetConvertionRate(Moq.It.IsAny<string>(), Moq.It.IsAny<string>())).Returns(ConvertionRate); But the good news is that there is already an excellent mocking tool called Caricature, written completely in IronRuby, which we can use to mock the .NET classes. Maybe all the mocking tool providers should give some thought to adding support for the DLR, so that we can use the tools that we are already familiar with. I think the rest of the code is simple enough, so I am skipping the explanation. Now, the last thing: how are we going to run it with RSpec? Let's first install the required gems. Open your command prompt and type the following: igem sources -a http://gems.github.com This will add GitHub as a gem source. Next type: igem install uuidtools caricature rspec At last we have to create a batch file so that we can execute it in Notepad++. Create a batch file in the IronRuby bin directory like in my previous post and put the following in it: @echo off cls call spec %1 --format specdoc pause Next, add a run menu entry and shortcut in Notepad++ like in my previous post. Now when we run it, it will show the following output: When fund is transferred - should decrease from account balance - should increase to account balance Finished in 0.332042 seconds 2 examples, 0 failures Press any key to continue . . . You will find the complete code of this post at the bottom. That's it for today. Download: RSpecIntegration.zip

    Read the article

  • CodePlex Daily Summary for Wednesday, May 19, 2010

    CodePlex Daily Summary for Wednesday, May 19, 2010New Projects3FD - Framework For Fast Development: This is a C++ framework that provides a solid error handling structure, garbage collection, multi-threading and portability between compilers. The ...ali test project: test projectAttribute Builder: The Attribute Builder builds an attribute from a lambda expression because it can.BDK0008: it is a food lovers websitecgdigest: cg digest template for non-profit orgCokmez: Bilmuh cokmez duyuru sistemiDot Game: It is a dot game that our Bangladeshi people used to play at their childhood time and their last time when they are poor for working.ESRI Javascript .NET Integration: Visual Studio project that shows how to integrate the Esri Javascript API with .NET Exchange 2010 RBAC Editor (RBAC GUI): Exchange 2010 RBAC Editor (RBAC GUI) Developed in C# and using Powershell behind the scenes RBAC tool to simplfy RBAC administrationFile Validator (Validador de Archivos): Componente que permite realizar la validación de archivos (txt, imagenes, PDF, etc) actualmente solo tiene implementado la parte de los txt, permit...Grip 09 Lab4: GripjPageFlipper: This is a wonderful implementation of page flipper entirely based on HTML 5 <canvas> tag. It means that it can work in any browser that supports HT...Main project: Index bird families and associated species. Malware Analysis and Can Handler: MACH is a tool to organize and catalog your malware analysis canned responses, and to track the topic response lifecycle for forum experts.Perf Web: Performance team web sitePiPiBugNet: PiPiBugNet是一套全新的开源Bug管理系统。 PiPiBugNet代码基于ASP.NET 2.0平台开发,编程语言为C#。 PiPiBugNet界面基于Ext JS设计,提供了极佳的用户体验。RemoteDesktop: integrated remote console, desktop and chat utilityRuneScape emulation done right.: RuneScape emulator.Sandkasten: SandkastenSilverlight Metro Theme: Metro Theme for Silverlight.Silverlight Stereoscopy: Stereoscopy with Silverlight.Twitivia: Twitivia is an online trivia service that runs through twitter and is being used as an example set of projects. C#, MVC, Windows Services, Linq ...XPool: A simple school project.New ReleasesDot Game: 'Dot Game' first release: Dot Game first release This is the 'Dot Game' first release.DotNetNuke® Store: 02.01.35: What's New in this release? Bugs corrected: - Fixed a resource for the header in the Category list of the Store Admin module. 
- Added several test...ESRI Javascript .NET Integration: Map search results in a DataView: Visual Studio 2010 example showing how to pass Map results back to ASP.NET for use in a DataView.Exchange 2010 RBAC Editor (RBAC GUI): RBAC Editor: This binary is still beta (0.0.9.1) but in most case it's very stableExtending C# editor - Outlining, classification: first revision: a couple of bug has been eliminated, performance improvementFloe IRC Client: Floe IRC Client 2010-05 R6: Corrected bug where text would be unexpectedly copied to the clipboard.Floe IRC Client: Floe IRC Client 2010-05 R7: - Fixed bug where text would show up in a query window with someone if they said something on a channel that you are both present on.Free Silverlight & WPF Chart Control - Visifire: Visifire SL and WPF Charts v3.0.9 GA released: Hi, Today we have released the final version of Visifire v3.0.9 which contains the following enhancements: * Two new properties ActualAxisMin...Free Silverlight & WPF Chart Control - Visifire: Visifire SL and WPF Charts v3.5.2 GA Released: Hi, Today we have released the final version of Visifire v3.5.2 which contains the following enhancements: Two new properties ActualAxisMinimum a...HB Batch Encoder Mk 2: HB Batch Encoder Mk2 v1.02: Added .mov support.jPageFlipper: jPageFlipper 0.9: This is an initial community preview of jPageFlipper. It's not ready for production usage but has almost all functionality implemented.linq.js - LINQ for JavaScript: ver 2.1.0.0: Add Class Dictionary Lookup Grouping OrderedEnumerable Add Method ToDictionary MemoizeAll Share Let Add Overload ...Microsoft Research Biology Extension for Excel: MSR Biology Extension for Excel - M9: M9 Release includes the following updates to the previous release: > Import / Export support from Excel for multiple file formats > Bug fixes and ...Nifty CSharp Tools: Event Watcher: Event Watcher!Paint.NET Bulk Image Processor: Paint.NET Bulk Image Processor v1.0: This is the initial release of the Paint.NET Bulk Image processor plugin. All feedback is welcome.PiPiBugNet: PiPiBugNet架构设计: PiPiBugNet架构设计,未包含功能实现RuneScape emulation done right.: rc0: Release cantidate 0.Rx Contrib: V1.6: Adding CCR queue as adapter for the ReactiveQueue credits goes to Yuval Mazor http://blogs.microsoft.co.il/blogs/yuvmaz/Silverlight Metro Theme: Silverlight Metro Theme Alpha 1: Silverlight Metro Theme Alpha 1Silverlight Stereoscopy: Silverlight Stereoscopy Alpha 1: Silverlight Stereoscopy Alpha 20100518Stratosphere: Stratosphere 1.0.6.0: Introduced support for batch put Introduced Support for conditional updates and consistent read Added support for select conditions Brought t...VCC: Latest build, v2.1.30518.0: Automatic drop of latest buildVideo Downloader: Example Program - 1.1: Example Program showing the features of the DLL and what can be achieved using it. For DLL Version 1.1.Video Downloader: Version 1.1: Version 1.1 See Home Page for usage and more information regarding new features. 
Please remember changes at You-Tube can prevent this software from...WatchersNET.TagCloud: WatchersNET.TagCloud 01.06.00: Whats New New Tag Mode: Show Tags from Ventrian.com NewsArticles Module New Tag Mode: Show Tags from Ventrian.com SimpleGallery Module Hyperlin...Windows Double Explorer: WDE v0.4: -optimization -switch to new vst2010 -viewer close now by pressing escape -reorder tabs -send selected fullname or shortnames via email (eye button...Most Popular ProjectsRawrWBFS ManagerAJAX Control ToolkitMicrosoft SQL Server Product Samples: DatabaseSilverlight ToolkitWindows Presentation Foundation (WPF)patterns & practices – Enterprise LibraryMicrosoft SQL Server Community & SamplesPHPExcelASP.NETMost Active Projectspatterns & practices – Enterprise LibraryRawrPHPExcelGMap.NET - Great Maps for Windows Forms & PresentationCustomer Portal Accelerator for Microsoft Dynamics CRMBlogEngine.NETWindows Azure Command-line Tools for PHP DevelopersCassiniDev - Cassini 3.5/4.0 Developers EditionSQL Server PowerShell ExtensionsFluent Ribbon Control Suite

    Read the article

  • Performing a clean database creation using msbuild

    - by Robert May
    So I’m taking a break from writing about other Agile stuff for a post. :)  I’m still going to get back to the other subjects, but this is fun too. Something I’ve done quite a bit of is MSBuild and CI work.  I’m experimenting with ways to improve what I’ve done in the past, particularly around database CI. Today, I developed a mechanism for starting from scratch with your database.  By scratch, I mean blowing away the existing database and creating it again from a single command line call.  I’m a firm believer that developers should be able to get to a known clean state at the database level with a single command and that they should be operating off of their own isolated database to improve productivity.  These scripts will help that. Here’s how I did it.  First, we have to disconnect users.  I did so using the help of a script from sql server central.  Note that I’m using sqlcmd variable replacement. -- kills all the users in a particular database -- dlhatheway/3M, 11-Jun-2000 declare @arg_dbname sysname declare @a_spid smallint declare @msg varchar(255) declare @a_dbid int set @arg_dbname = '$(DatabaseName)' select @a_dbid = sdb.dbid from master..sysdatabases sdb where sdb.name = @arg_dbname declare db_users insensitive cursor for select sp.spid from master..sysprocesses sp where sp.dbid = @a_dbid open db_users fetch next from db_users into @a_spid while @@fetch_status = 0 begin select @msg = 'kill '+convert(char(5),@a_spid) print @msg execute (@msg) fetch next from db_users into @a_spid end close db_users deallocate db_users GO Once all users are booted from the database, we can commence with recreating the database.  I generated the script that is used to create a database from SQL Server management studio, so I’m only going to show the bits that weren’t generated that are important.  There are a bunch of Alter Database statements that aren’t shown. First, I had to find the default location of the database files in the install, since they can be in many different locations.  I used Method 1 from a technet blog and then modified it a bit to do what I needed to do.  I ended up using dynamic SQL because for the life of me, I couldn’t get the “Filename” property to not return an error when I used anything besides a string.  I’m dropping the database first, if it exists.  Here’s the code:   IF EXISTS(SELECT 1 FROM [master].[sys].[databases] WHERE [name] = N'$(DatabaseName)') BEGIN drop database $(DatabaseName) END; go IF EXISTS(SELECT 1 FROM [master].[sys].[databases] WHERE [name] = 'zzTempDBForDefaultPath') BEGIN DROP DATABASE zzTempDBForDefaultPath END; -- Create temp database. Because no options are given, the default data and --- log path locations are used CREATE DATABASE zzTempDBForDefaultPath; DECLARE @Default_Data_Path VARCHAR(512), @Default_Log_Path VARCHAR(512); --Get the default data path SELECT @Default_Data_Path = ( SELECT LEFT(physical_name,LEN(physical_name)-CHARINDEX('\',REVERSE(physical_name))+1) FROM sys.master_files mf INNER JOIN sys.[databases] d ON mf.[database_id] = d.[database_id] WHERE d.[name] = 'zzTempDBForDefaultPath' AND type = 0); --Get the default Log path SELECT @Default_Log_Path = ( SELECT LEFT(physical_name,LEN(physical_name)-CHARINDEX('\',REVERSE(physical_name))+1) FROM sys.master_files mf INNER JOIN sys.[databases] d ON mf.[database_id] = d.[database_id] WHERE d.[name] = 'zzTempDBForDefaultPath' AND type = 1); --Clean up. 
IF EXISTS(SELECT 1 FROM [master].[sys].[databases] WHERE [name] = 'zzTempDBForDefaultPath') BEGIN DROP DATABASE zzTempDBForDefaultPath END; DECLARE @SQL nvarchar(max) SET @SQL= 'CREATE DATABASE $(DatabaseName) ON PRIMARY ( NAME = N''$(DatabaseName)'', FILENAME = N''' + @Default_Data_Path + N'$(DatabaseName)' + '.mdf' + ''', SIZE = 2048KB , FILEGROWTH = 1024KB ) LOG ON ( NAME = N''$(DatabaseName)Log'', FILENAME = N''' + @Default_Log_Path + N'$(DatabaseName)' + '.ldf' + ''', SIZE = 1024KB , FILEGROWTH = 10%) ' exec (@SQL) GO And with that, your database is created.  You can run these scripts on any server and on any database name.  To do that, I created an MSBuild script that looks like this: <Project xmlns="http://schemas.microsoft.com/developer/msbuild/2003" ToolsVersion="4.0"> <PropertyGroup> <DatabaseName>MyDatabase</DatabaseName> <Server>localhost</Server> <SqlCmd>sqlcmd -v DatabaseName=$(DatabaseName) -S $(Server) -i </SqlCmd> <ScriptDirectory>.\Scripts</ScriptDirectory> </PropertyGroup> <Target Name ="Rebuild"> <ItemGroup> <ScriptFiles Include="$(ScriptDirectory)\*.sql"/> </ItemGroup> <Exec Command="$(SqlCmd) &quot;%(ScriptFiles.Identity)&quot;" ContinueOnError="false"/> </Target> </Project> Note that the Scripts directory is underneath the directory where I’m running the msbuild command and is relative to that directory.  Note also that the target is using batching to run each script in the scripts subdirectory, one after the other.  Each script is passed to the sqlcmd command line execution using the .Identity property on the itemgroup that is created.  This target file is saved in the file “Database.target”. To make this work, you’ll need msbuild in your path, and then run the following command: msbuild database.target /target:Rebuild Once you’ve got your virgin database setup, you’d then need to use a tool like dbdeploy.net to determine that it was a virgin database, build a change script based on the change scripts, and then you’d want another sqlcmd call to update the database with the appropriate scripts.  I’m doing that next, so I’ll post a blog update when I’ve got it working. Technorati Tags: MSBuild,Agile,CI,Database
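Relating back to the disconnect-users step at the top of this post, a shorter alternative on SQL Server 2005 and later is to force exclusive access just before the drop. This is a hedged sketch using the same sqlcmd variable, not part of the original scripts:

IF EXISTS(SELECT 1 FROM [master].[sys].[databases] WHERE [name] = N'$(DatabaseName)')
BEGIN
    -- Roll back open transactions, disconnect all other sessions, then drop
    ALTER DATABASE [$(DatabaseName)] SET SINGLE_USER WITH ROLLBACK IMMEDIATE;
    DROP DATABASE [$(DatabaseName)];
END;
GO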

    Read the article

  • Dynamic Bursting ... no really!

    - by Tim Dexter
    If any of you have seen me or my colleagues present BI Publisher to you then we have hopefully mentioned 'bursting.' You may have even seen a demo where we talk about being able to take a batch of data, say invoices. Then split them by some criteria, say customer id; format them with a template; generate the output and then deliver the documents to the recipients with a click. We, and especially I, always say this can be completely dynamic! By this I mean that you could store customer preferences in a database: what layout each customer would like, what output format they would like and how they would like the document delivered. We (I) talk a good talk, but typically don't do the walk in a demo. We hard code everything in the bursting query or bursting control file to get the concept across. But no more peeps! I have finally put together a dynamic bursting demo! It's been minutes in the making but it's been tough to find those minutes! Read on ... It's nothing amazing in terms of making the burst dynamic. I created a CUSTOMER_PREFS table with some simple UI in an APEX application so that I can maintain each customer's requirements. In EBS you have descriptive flexfields that could do the same thing, or probably even 'contact' fields to store most of the info. Here's my table structure: Name                           Type ------------------------------ -------- CUSTOMER_ID                    NUMBER(6) TEMPLATE_TYPE                  VARCHAR2(20) TEMPLATE_NAME                  VARCHAR2(120) OUTPUT_FORMAT                  VARCHAR2(20) DELIVERY_CHANNEL               VARCHAR2(50) EMAIL                          VARCHAR2(255) FAX                            VARCHAR2(20) ATTACH                         VARCHAR2(20) FILE_LOC                       VARCHAR2(255) Simple enough, right? You just need CUSTOMER_ID as the key for the bursting engine to join it to the customer data at burst time. I have not covered the full delivery options, just email, fax and file location. Remember, it's a demo, people :0) However, the principle is exactly the same for each delivery type. They each have a set of attributes that need to be provided and you will need to handle that in your bursting query. On a side note, in EBS you use a bursting control file; you can apply the same principles that I'm laying out here, you just need to get the customer bursting info into the XML data stream so that you can refer to it in the control file using XPATH expressions. Next, we need to look up what attributes or parameters are required for each delivery method. That can be found in the documentation here.
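For reference, a minimal DDL sketch matching the structure above could look like the following; treat it as an assumption rather than the exact script shipped in the downloadable example:

CREATE TABLE cust_prefs (
  customer_id      NUMBER(6),
  template_type    VARCHAR2(20),
  template_name    VARCHAR2(120),
  output_format    VARCHAR2(20),
  delivery_channel VARCHAR2(50),
  email            VARCHAR2(255),
  fax              VARCHAR2(20),
  attach           VARCHAR2(20),
  file_loc         VARCHAR2(255)
);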
Now that we know the combinations of parameters and delivery methods, we can construct the query using a series of decode statements: select distinct cp.customer_id "KEY", cp.template_name TEMPLATE, cp.template_type TEMPLATE_FORMAT, 'en-US' LOCALE, cp.output_format OUTPUT_FORMAT, 'false' SAVE_FORMAT, cp.delivery_channel DEL_CHANNEL, decode(cp.delivery_channel,'FILE', cp.file_loc , 'EMAIL', cp.email , 'FAX', cp.fax) PARAMETER1, decode(cp.delivery_channel,'FILE', c.cust_last_name||'_orders.pdf' ,'EMAIL','[email protected]' ,'FAX', 'faxserver.com') PARAMETER2, decode(cp.delivery_channel,'FILE',NULL ,'EMAIL','[email protected]' ,'FAX', null) PARAMETER3, decode(cp.delivery_channel,'FILE',NULL ,'EMAIL','Your current orders' ,'FAX',NULL) PARAMETER4, decode(cp.delivery_channel,'FILE',NULL ,'EMAIL','Please find attached a copy of your current orders with BI Publisher, Inc' ,'FAX',NULL) PARAMETER5, decode(cp.delivery_channel,'FILE',NULL ,'EMAIL','false' ,'FAX',NULL) PARAMETER6, decode(cp.delivery_channel,'FILE',NULL ,'EMAIL','[email protected]' ,'FAX',NULL) PARAMETER7 from cust_prefs cp, customers c, orders_view ov where cp.customer_id = c.customer_id and cp.customer_id = ov.customer_id order by cp.customer_id Pretty straightforward; just test, test, test the query and ensure it's bringing back the correct data based on each customer's preferences. Notice the NULL values for parameters that are not relevant for a given delivery channel. You should end up with bursting control data that the bursting engine can use. Now your users can run the burst and documents will be formatted, generated and delivered based on the customer prefs. If you're interested in the example, I have used the sample OE schema data for the base report. The report files and CUST_PREFS table are zipped up here. The zip contains the data model (.xdmz), the report and templates (.xdoz) and the sql scripts to create and load data to the CUST_PREFS table. Once you load the report into the catalog, you'll need to create the OE data connection and point the data model at it. You'll probably need to re-point the report to the data model too. Happy Bursting!

    Read the article

  • SQL SERVER – Number-Crunching with SQL Server – Exceed the Functionality of Excel

    - by Pinal Dave
    Imagine this. Your users have developed an Excel spreadsheet that extracts data from your SQL Server database, manipulates that data through the use of Excel formulas and, possibly, some VBA code which is then used to calculate P&L, hedging requirements or even risk numbers. Management comes to you and tells you that they need to get rid of the spreadsheet and that the results of the spreadsheet calculations need to be persisted in the database. SQL Server has a very small set of functions for analyzing data. Excel has hundreds of functions for analyzing data, with many of them focused on specific financial and statistical calculations. Is it even remotely possible that you can use SQL Server to replace the complex calculations being done in a spreadsheet? Westclintech has developed a library of functions that match or exceed the functionality of Excel's functions and contains many functions that are not available in Excel. Their XLeratorDB library of functions contains over 700 functions that can be incorporated into T-SQL statements. XLeratorDB takes advantage of the SQL CLR architecture introduced in SQL Server 2005. SQL CLR permits managed code to be compiled into the database and run alongside built-in SQL Server functions like COUNT or SUM. The Westclintech developers have taken advantage of this architecture to bring robust analytical functions to the database. In our hypothetical spreadsheet, let's assume that our users are using the YIELD function and that the data are extracted from a table in our database called BONDS. Here's what the spreadsheet might look like. We go to column G and see that it contains the following formula. Obviously, SQL Server does not offer a native YIELD function. However, with XLeratorDB we can replicate this calculation in SQL Server with the following statement: SELECT *, wct.YIELD(CAST(GETDATE() AS date),Maturity,Rate,Price,100,Frequency,Basis) AS YIELD FROM BONDS This produces the following result. This illustrates one of the best features about XLeratorDB; it is so easy to use. Since I knew that the spreadsheet was using the YIELD function I could use the same function with the same calling structure to do the calculation in SQL Server. I didn't need to know anything at all about the mechanics of calculating the yield on a bond. It was pretty close to cut and paste. In fact, that's one way to construct the SQL. Just copy the function call from the cell in the spreadsheet, paste it into SSMS and change the cell references to column names. I built the SQL for this query by starting with this. SELECT * ,YIELD(TODAY(),B2,C2,D2,100,E2,F2) FROM BONDS I then changed the cell references to column names. SELECT * --,YIELD(TODAY(),B2,C2,D2,100,E2,F2) ,YIELD(TODAY(),Maturity,Rate,Price,100,Frequency,Basis) FROM BONDS Finally, I replicated the TODAY() function using GETDATE() and added the schema name to the function name. SELECT * --,YIELD(TODAY(),B2,C2,D2,100,E2,F2) --,YIELD(TODAY(),Maturity,Rate,Price,100,Frequency,Basis) ,wct.YIELD(GETDATE(),Maturity,Rate,Price,100,Frequency,Basis) FROM BONDS Then I am able to execute the statement returning the results seen above. The XLeratorDB libraries are heavy on financial, statistical, and mathematical functions. Where there is an analog to an Excel function, the XLeratorDB function uses the same naming conventions and calling structure as the Excel function, but there are also hundreds of additional functions for SQL Server that are not found in Excel.
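Coming back to the requirement of persisting the calculated results in the database, a minimal hedged sketch could look like this; BOND_YIELDS is a hypothetical target table name, not something from the article:

SELECT b.*,
       wct.YIELD(CAST(GETDATE() AS date), b.Maturity, b.Rate, b.Price, 100, b.Frequency, b.Basis) AS Yield
INTO   dbo.BOND_YIELDS   -- SELECT ... INTO creates the target table on the fly
FROM   BONDS AS b;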
You can find the functions by opening Object Explorer in SQL Server Management Studio (SSMS) and expanding the Programmability folder under the database where the functions have been installed. The Functions folder expands to show four sub-folders: Table-valued Functions, Scalar-valued Functions, Aggregate Functions, and System Functions. You can expand any of the first three folders to see the XLeratorDB functions. Since the wct.YIELD function is a scalar function, we will open the Scalar-valued Functions folder, scroll down to the wct.YIELD function and click the plus sign (+) to display the input parameters. The functions are also Intellisense-enabled, with the input parameters displayed directly in the query tab. The Westclintech website contains documentation for all the functions, including examples that can be copied directly into a query window and executed. There are also more than one hundred articles on the site which go into more detail about how some of the functions work and demonstrate some of the extensive business processes that can be done in SQL Server using XLeratorDB functions and some T-SQL. XLeratorDB is organized into libraries: finance, statistics, math, strings, engineering, and financial options. There is also a windowing library for SQL Server 2005, 2008, and 2012 which provides functions for calculating things like running and moving averages (which were introduced in SQL Server 2012), FIFO inventory calculations, financial ratios and more, without having to use triangular joins. To get started you can download the XLeratorDB 15-day free trial from the Westclintech web site. It is a fully-functioning, unrestricted version of the software. If you need more than 15 days to evaluate the software, you can simply download another 15-day free trial. XLeratorDB is an easy and cost-effective way to start adding sophisticated data analysis to your SQL Server database without having to know anything more than T-SQL. Get XLeratorDB Today and Now! Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, T SQL Tagged: Excel

    Read the article

  • Trace File Source Adapter

    The Trace File Source adapter is a useful addition to your SSIS toolbox.  It allows you to read 2005 and 2008 Profiler traces stored as .trc files and bring them into the Data Flow.  From there you can perform filtering and analysis using the power of SSIS. There is no need for a SQL Server connection; this just uses the trace file. Example Usages Cache warming for SQL Server Analysis Services Reading the flight recorder Find out the longest running queries on a server Analyze statements for CPU or memory, by user or some other criteria you choose Properties The Trace File Source adapter has two properties, both of which combine to control the source trace file that is read at runtime. SQL Server 2005 and SQL Server 2008 trace files are supported for both the Database Engine (SQL Server) and Analysis Services. The properties are managed by the Editor form or can be set directly from the Properties Grid in Visual Studio. Property Type Description AccessMode Enumeration This property determines how the Filename property is interpreted. The values available are: DirectInput Variable Filename String This property holds the path for the trace file to load (*.trc). The value is either a full path, or the name of a variable which contains the full path to the trace file, depending on the AccessMode property. Trace Column Definition Hopefully the majority of you can skip this section entirely, but if you encounter some problems processing a trace file this may explain it and allow you to fix the problem. The component is built upon the trace management API provided by Microsoft. Unfortunately the API methods that expose the schema of a trace file have known issues and are unreliable; put simply, the data often differs from what was specified. To overcome these limitations the component uses some simple XML files. These files enable the trace column data types and sizing attributes to be overridden. For example, SQL Server Profiler or TMO generated structures define EventClass as an integer, but the real value is a string. TraceDataColumnsSQL.xml  - SQL Server Database Engine Trace Columns TraceDataColumnsAS.xml    - SQL Server Analysis Services Trace Columns The files can be found in the %ProgramFiles%\Microsoft SQL Server\100\DTS\PipelineComponents folder, e.g. "C:\Program Files\Microsoft SQL Server\100\DTS\PipelineComponents\TraceDataColumnsSQL.xml" "C:\Program Files\Microsoft SQL Server\100\DTS\PipelineComponents\TraceDataColumnsAS.xml" If at runtime the component encounters a type conversion or sizing error it is most likely due to a discrepancy between the column definition as reported by the API and the actual value encountered. Whilst most common issues have already been fixed through these files, we have implemented specific exception traps to direct you to the files to enable you to fix any further issues due to different usage or data scenarios that we have not tested. An example error that you can fix through these files is shown below. Buffer exception writing value to column 'Column Name'. The string value is 999 characters in length, the column is only 111. Columns can be overridden by the TraceDataColumns XML files in "C:\Program Files\Microsoft SQL Server\100\DTS\PipelineComponents\TraceDataColumnsAS.xml". Installation The component is provided as an MSI file which you can download and run to install it. This simply places the files on disk in the correct locations and also installs the assemblies in the Global Assembly Cache as per Microsoft's recommendations.
You may need to restart the SQL Server Integration Services service, as this caches information about what components are installed, as well as restarting any open instances of Business Intelligence Development Studio (BIDS) / Visual Studio that you may be using to build your SSIS packages. Finally you will have to add the transformation to the Visual Studio toolbox manually. Right-click the toolbox, and select Choose Items.... Select the SSIS Data Flow Items tab, and then check the Trace File Source transformation in the Choose Toolbox Items window. This process has been described in detail in the related FAQ entry for How do I install a task or transform component? We recommend you follow best practice and apply the current Microsoft SQL Server Service pack to your SQL Server servers and workstations. Please note that the Microsoft Trace classes used in the component are not supported on 64-bit platforms. To use the Trace File Source on a 64-bit host you need to ensure you have the 32-bit (x86) tools available, and the way you execute your package is setup to use them, please see the help topic 64-bit Considerations for Integration Services for more details. Downloads Trace Sources for SQL Server 2005 -- Trace Sources for SQL Server 2008 Version History SQL Server 2008 Version 2.0.0.382 - SQL Sever 2008 public release. (9 Apr 2009) SQL Server 2005 Version 1.0.0.321 - SQL Server 2005 public release. (18 Nov 2008) -- Screenshots

    Read the article

  • SQLCMD Mode: give it one more chance

    - by Maria Zakourdaev
      - Click on me. Choose me. - asked one forgotten feature when some bored DBA was purposelessly wandering through the Management Studio menu at the end of her long and busy working day. - Why would I use you? I have heard of no one who does. What are you for? - perplexedly wondered the aged and wise DBA. At least that DBA thought she was aged and wise, though each day tried to prove to her that she wasn't. - I know you. You are quite lazy. Why would you do additional clicks to move from window to window? From tool to tool? This is irritating, isn't it? I can run Windows system commands, SQL statements and much more from the same script, from the same query window! - I have all the tools that I'm used to; I have Management Studio, Cmd, Powershell. They can do anything for me. I don't need additional tools. - I promise you, you will like me. – the thing continued to whine. - All right, show me. – she gave up. It's always this way, she thought sadly - easier to agree than to explain why you don't want to. - Enable me and then think about anything that you always couldn't do through Management Studio and had to use other tools for. - Ok. Google for me the list of greatest features of SQL SERVER 2012. - Well... I'm not sure... Think about something else. - Ok, here is something easy for you. I want to check if a file folder exists or if a file is there. Though, I can easily do this using xp_cmdshell … - This is easy for me. – rejoiced the feature. By the way, having the items of the menu talking to you usually means you should stop working and go home. Or drink coffee. Or both. Well, the aged and wise DBA wasn't thinking about the weirdness of the situation at that moment. - After enabling me, – said the unfairly forgotten feature (it was thinking of itself in such a manner) – after enabling me you can use all command line commands in the same Management Studio query window by adding two exclamation marks !! at the beginning of the script line to denote that you want to run a cmd command: - Just keep in mind that when using this feature, you are actually running the commands ON YOUR computer and not on the SQL Server that the query window is connected to. This is the main difference from using xp_cmdshell, which executes commands on the SQL Server itself. Bottom line: use a UNC path instead of a local path. - Look, there is much more than that. - The SQLCMD feature was getting excited. - You can get the IP of your servers, create, rename and drop folders. You can see the contents of any file anywhere and even start different tools from the same query window: The not so aged and wise DBA was getting interested: - I also want to run different scripts on different servers without changing the connection of the query window. - Sure, sure! That is another great feature that SQLCMD mode provides, giving more power to querying. Use ":" to access the additional features, like :connect, which allows you to change the connection: - Now imagine you have one script where you have all your changes, like creating a staging table on the DWH staging server, adding a fact table to the DWH itself and updating stored procedures on the server where the reporting database is located. - Now, give me more challenges! - Script out a list of stored procedures into text files. - You can do that easily by using the :out command, which will write the query results into the specified text file. The output can be the code of the stored procedure or any data. Actually this is the same as changing the query output to a file instead of the grid.
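Pulling the commands from this conversation together, here is a minimal hedged sketch of a SQLCMD-mode script; the server, database, path and file names are made up for illustration:

-- runs an operating system command on the client machine (hence the UNC path)
!!dir \\FileServer\Scripts
-- switches the query window connection to another server
:connect DWHStagingServer
:setvar TargetDb StagingDB
-- sends the next result set to a text file instead of the grid
:out C:\Output\procedures.txt
SELECT name FROM $(TargetDb).sys.procedures;
GO
-- runs another script file from within this one
:r \\FileServer\Scripts\create_staging_table.sql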
- Now, take all of the scripts and run all of them, one by one, on a different server. - Easily. - Come on... I'm sure that you cannot... - Why not? Naturally, I can do it using the :r command, which opens a script and executes it. Look, I can also use the :setvar command to define an environment variable in SQLCMD mode. Just note that you have to leave an empty line between :r commands, otherwise it doesn't work, although I have no idea why. - Wow. - She was really impressed. - Ok, I'll go and try all those… - Wait, wait! I know how to google the SQL SERVER features for you! This example will open the Chrome browser with search results for "SQL Server 2012 top features" (change the path to suit your PC): "Well, this can probably be useful stuff; maybe this feature really is unfairly forgotten", thought the DBA while walking through the dark empty parking lot to her lonely car. As someone really wise once said: "It is what we think we know that keeps us from learning. Learn, unlearn and relearn".

    Read the article

  • SUPINFO International University in Mauritius

    For a while now I've been considering picking up my activities as a student, and I'd like to get a degree in Computer Science. Personal motivation I mean, after all these years as a professional software (and database) developer, I have the personal urge to complete this part of my education. Having various certifications from Microsoft and having been awarded Microsoft Most Valuable Professional (MVP) twice looks pretty awesome on a resume, but having a "proper" degree would just complete my package. During the last couple of years I already got in touch with C-SAC (a local business school with degree courses), the University of Mauritius and BCS, the Chartered Institute for IT, to check the options to enroll as an experienced software developer. Quite frankly, it was kind of alienating to receive that feedback: Start from scratch! No, seriously? Spending x amount of years to sit for courses that might be outdated and form part of your daily routine? Probably being in an awkward situation in which your professional expertise might exceed the lecturer's knowledge? I don't know... but if that's the path to walk... Well, then I might have to go for it. SUPINFO International University Some weeks ago I was contacted by the General Manager, Education Recruitment and Development of Medine Education Village, Yamal Matabudul, to have a chat on how the local IT scene, namely the Mauritius Software Craftsmanship Community (MSCC), could assist in their plans to promote their upcoming campus. Medine went into partnership with the French-based SUPINFO International University, and Mauritius will be the 36th location world-wide for SUPINFO. Actually, the concept of SUPINFO is very similar to the common understanding of an apprenticeship in Germany. Not only does a student enroll in the programme, but they will also be placed into various internships as part of the curriculum. It's a big advantage in my opinion, as the person stays in touch with the daily procedures and workflows in the real world of IT. Statements like "We just received a 'crash course' of information and learned new technology which is equivalent to 1.5 months of lectures at the university" wouldn't form part of the experience of such an education. Open Day at the Medine Education Village Last Saturday, Medine organised their Open Day and it was the official inauguration of the SUPINFO campus in Mauritius. It's now listed on their website, too - but be warned, the site is mainly in French, although the courses are all done in English. Not only was it a big opportunity to "hang out" on the campus of Medine, but it was great to see the first professional partners for their internship programme, too. Oh, just for the record, IOS Indian Ocean Software Ltd. will also be among the future employers for SUPINFO students. More about that in an upcoming blog entry. Open Day at Medine Education Village - SUPINFO International University in Mauritius Mr Alick Mouriesse, President of SUPINFO, arrived the previous day and he gave all attendees a great overview of the roots of SUPINFO, the general development of the educational syllabus and their high emphasis on their partnerships with local IT companies in order to assist their students to get future jobs but also to feel the heartbeat of technology live. Something which is completely missing in classic institutions of tertiary education in Computer Science. And since I was on tour with my children, as usual during weekends, he also talked about the outlook of having a SUPINFO campus in Mauritius.
Apart from the close connection to IT companies and providing internships to students, SUPINFO clearly works on an international level, meaning that students of SUPINFO can move around the globe and continue their studies seamlessly. For example, you might enroll for your first year in France, then continue to do your 2nd and 3rd year in Canada or any other country with a SUPINFO campus to earn your bachelor degree, and then live and study in Mauritius for the next 2 years to achieve a Master degree. Having a chat with Dale Smith, Expand Technologies, after his interesting session on Technological Entrepreneurship - TechPreneur More questions by other craftsmen of the Mauritius Software Craftsmanship Community And of course, this concept works in any direction, giving Mauritian students a huge (!) opportunity to live, study and work abroad. And thanks to this, Medine already announced that there will be new facilities near Cascavelle to provide dormitories and other facilities to international students coming to our island. Awesome! Okay, but why SUPINFO? Well, coming back to my original statement - I'd like to get a degree in Computer Science - SUPINFO has a process called Validation of Acquired Experience (VAE) which is tailor-made for employees in the field of IT, and allows you to enroll in their course programme. I already got in touch with their online support chat but was only redirected to some FAQs on their website, unfortunately. So, during the Open Day I seized the opportunity to have a one-on-one conversation with Alick Mouriesse, and he clearly encouraged me to gather my certifications and working experience. SUPINFO does an individual evaluation prior to their assignment regarding course level, and hopefully my chances of getting some modules ahead of studies are looking better compared to the other institutes. Don't get me wrong, I don't want to go down the easy route, but why should someone sit for "Database 101" or "Principles of OOP" when applying and preaching database normalisation and practicing Clean Code Developer are like flesh and blood? Anyway, I'll be off to get my transcripts of certificates together with my course assignments from the old days at the university. Yes, I studied Applied Chemistry for a couple of years before intersecting into IT and software development particularly... ;-)

    Read the article

  • Using Hadooop (HDInsight) with Microsoft - Two (OK, Three) Options

    - by BuckWoody
    Microsoft has many tools for “Big Data”. In fact, you need many tools – there’s no product called “Big Data Solution” in a shrink-wrapped box – if you find one, you probably shouldn’t buy it. It’s tempting to want a single tool that handles everything in a problem domain, but with large, complex data, that isn’t a reality. You’ll mix and match several systems, open and closed source, to solve a given problem. But there are tools that help with handling data at large, complex scales. Normally the best way to do this is to break up the data into parts, and then put the calculation engines for that chunk of data right on the node where the data is stored. These systems are in a family called “Distributed File and Compute”. Microsoft has a couple of these, including the High Performance Computing edition of Windows Server. Recently we partnered with Hortonworks to bring the Apache Foundation’s release of Hadoop to Windows. And as it turns out, there are actually two (technically three) ways you can use it. (There’s a more detailed set of information here: http://www.microsoft.com/sqlserver/en/us/solutions-technologies/business-intelligence/big-data.aspx, I’ll cover the options at a general level below)  First Option: Windows Azure HDInsight Service  Your first option is that you can simply log on to a Hadoop control node and begin to run Pig or Hive statements against data that you have stored in Windows Azure. There’s nothing to set up (although you can configure things where needed), and you can send the commands, get the output of the job(s), and stop using the service when you are done – and repeat the process later if you wish. (There are also connectors to run jobs from Microsoft Excel, but that’s another post)   This option is useful when you have a periodic burst of work for a Hadoop workload, or the data collection has been happening into Windows Azure storage anyway. That might be from a web application, the logs from a web application, telemetrics (remote sensor input), and other modes of constant collection.   You can read more about this option here:  http://blogs.msdn.com/b/windowsazure/archive/2012/10/24/getting-started-with-windows-azure-hdinsight-service.aspx Second Option: Microsoft HDInsight Server Your second option is to use the Hadoop Distribution for on-premises Windows called Microsoft HDInsight Server. You set up the Name Node(s), Job Tracker(s), and Data Node(s), among other components, and you have control over the entire ecostructure.   This option is useful if you want to  have complete control over the system, leave it running all the time, or you have a huge quantity of data that you have to bulk-load constantly – something that isn’t going to be practical with a network transfer or disk-mailing scheme. You can read more about this option here: http://www.microsoft.com/sqlserver/en/us/solutions-technologies/business-intelligence/big-data.aspx Third Option (unsupported): Installation on Windows Azure Virtual Machines  Although unsupported, you could simply use a Windows Azure Virtual Machine (we support both Windows and Linux servers) and install Hadoop yourself – it’s open-source, so there’s nothing preventing you from doing that.   Aside from being unsupported, there are other issues you’ll run into with this approach – primarily involving performance and the amount of configuration you’ll need to do to access the data nodes properly. 
But for a single-node installation (where all components run on one system) such as learning, demos, training and the like, this isn’t a bad option. Did I mention that’s unsupported? :) You can learn more about Windows Azure Virtual Machines here: http://www.windowsazure.com/en-us/home/scenarios/virtual-machines/ And more about Hadoop and the installation/configuration (on Linux) here: http://en.wikipedia.org/wiki/Apache_Hadoop And more about the HDInsight installation here: http://www.microsoft.com/web/gallery/install.aspx?appid=HDINSIGHT-PREVIEW Choosing the right option Since you have two or three routes you can go, the best thing to do is evaluate the need you have, and place the workload where it makes the most sense.  My suggestion is to install the HDInsight Server locally on a test system, and play around with it. Read up on the best ways to use Hadoop for a given workload, understand the parts, write a little Pig and Hive, and get your feet wet. Then sign up for a test account on HDInsight Service, and see how that leverages what you know. If you're a true tinkerer, go ahead and try the VM route as well. Oh - there’s another great reference on the Windows Azure HDInsight that just came out, here: http://blogs.msdn.com/b/brunoterkaly/archive/2012/11/16/hadoop-on-azure-introduction.aspx  

    Read the article

  • PASS Summit 2011 &ndash; Part III

    - by Tara Kizer
    Well we’re about a month past PASS Summit 2011, and yet I haven’t finished blogging my notes! Between work and home life, I haven’t been able to come up for air in a bit.  Now on to my notes… On Thursday of the PASS Summit 2011, I attended Klaus Aschenbrenner’s (blog|twitter) “Advanced SQL Server 2008 Troubleshooting”, Joe Webb’s (blog|twitter) “SQL Server Locking & Blocking Made Simple”, Kalen Delaney’s (blog|twitter) “What Happened? Exploring the Plan Cache”, and Paul Randal’s (blog|twitter) “More DBA Mythbusters”.  I think my head grew two times in size from the Thursday sessions.  Just WOW! I took a ton of notes in Klaus' session.  He took a deep dive into how to troubleshoot performance problems.  Here is how he goes about solving a performance problem: Start by checking the wait stats DMV System health Memory issues I/O issues I normally start with blocking and then hit the wait stats.  Here’s the wait stat query (Paul Randal’s) that I use when working on a performance problem.  He highlighted a few waits to be aware of such as WRITELOG (indicates an IO subsystem problem), SOS_SCHEDULER_YIELD (indicates a CPU problem), and PAGEIOLATCH_XX (indicates an IO subsystem problem or a buffer pool problem).  Regarding memory issues, Klaus recommended that as a bare minimum, one should set the “max server memory (MB)” in sp_configure to 2GB or 10% reserved for the OS (whichever comes first).  This is just a starting point though! Regarding I/O issues, Klaus talked about disk partition alignment, which can improve SQL I/O performance by up to 100%.  You should use a 64KB NTFS cluster size, and it’s automatic in Windows 2008 R2. Joe’s locking and blocking presentation was a good session to really clear up the fog in my mind about locking.  One takeaway that I had no idea could be done was that you can set a timeout in T-SQL code via SET LOCK_TIMEOUT.  If you do this via the application, you should trap error 1222. Kalen’s session went into execution plans.  The minimum size of a plan is 24k.  This adds up fast especially if you have a lot of plans that don’t get reused much.  You can use sys.dm_exec_cached_plans to check how often a plan is being reused by checking the usecounts column.  She said that we can use DBCC FLUSHPROCINDB to clear out the stored procedure cache for a specific database.  I didn’t know we had this available, so this was great to hear.  This will be less intrusive when an emergency comes up where I’ve needed to run DBCC FREEPROCCACHE. Kalen said one should enable “optimize for ad hoc workloads” if you have an ad hoc workload.  This stores only a 300-byte stub of the first plan, and if it gets run again, it’ll store the whole thing.  This helps with plan cache bloat.  I have a lot of systems that use prepared statements, and Kalen says we simulate those calls by using sp_executesql.  Cool! Paul did a series of posts last year to debunk various myths and misconceptions around SQL Server.  He continues to debunk things via “DBA Mythbusters”.  You can get a PDF of a bunch of these here.  One of the myths he went over is the number of tempdb data files that you should have.  Back in 2000, the recommendation was to have as many tempdb data files as there are CPU cores on your server.  This no longer holds true due to the numerous cores we have on our servers.  Paul says you should start out with 1/4 to 1/2 the number of cores and work your way up from there.  BUT!
    Paul did a series of posts last year to debunk various myths and misconceptions around SQL Server. He continues to debunk things via “DBA Mythbusters”. You can get a PDF of a bunch of these here. One of the myths he went over is the number of tempdb data files that you should have. Back in 2000, the recommendation was to have as many tempdb data files as there are CPU cores on your server. This no longer holds true due to the numerous cores we have on our servers. Paul says you should start out with 1/4 to 1/2 the number of cores and work your way up from there. BUT! Paul likes what Bob Ward (twitter) says on this topic:

    - 8 or fewer cores –> set the number of files equal to the number of cores
    - Greater than 8 cores –> start with 8 files and increase in blocks of 4

    One common myth out there is to set your MAXDOP to 1 for an OLTP workload with high CXPACKET waits. Instead of that, dig deeper first. Look for missing indexes and out-of-date statistics, increase the “cost threshold for parallelism” setting, and perhaps set MAXDOP at the query level.

    Paul stressed that you should not plan a backup strategy but instead plan a restore strategy. What are your recoverability requirements? Once you know that, now plan out your backups.

    As Paul always does, he talked about DBCC CHECKDB. He said how fabulous it is. I didn’t want to interrupt the presentation, so after his session had ended, I asked Paul about the need to run DBCC CHECKDB on your mirror systems. You could have data corruption occur at the mirror and not at the principal server. If you aren’t checking for data corruption on your mirror systems, you could be failing over to a corrupt database in the case of a disaster or even a planned failover. You can’t run DBCC CHECKDB against the mirrored database, but you can run it against a snapshot of the mirrored database.
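    A minimal sketch of that last point, run on the mirror server; the database name, logical file name, and snapshot path here are hypothetical placeholders you would replace with your own:

    -- Create a database snapshot of the mirror (the mirror itself cannot be read directly)
    CREATE DATABASE MyDB_CheckSnap
    ON (NAME = MyDB_Data, FILENAME = 'D:\Snapshots\MyDB_CheckSnap.ss')
    AS SNAPSHOT OF MyDB;

    -- Run consistency checks against the snapshot instead of the mirror
    DBCC CHECKDB (MyDB_CheckSnap) WITH NO_INFOMSGS, ALL_ERRORMSGS;

    -- Clean up when done
    DROP DATABASE MyDB_CheckSnap;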

    Read the article

  • Explaining Explain Plan Notes for Auto DOP

    - by jean-pierre.dijcks
    I've recently gotten some questions around "why do I not see a parallel plan" while Auto DOP is on (I think)...? It is probably worthwhile to quickly go over some of the ways to find out what Auto DOP was thinking. In general, there is no need to go tracing sessions and look under the hood. The thing to start with is to do an explain plan on your statement and to look at the parameter settings on the system.

    Parameter Settings to Look At

    First and foremost, make sure that parallel_degree_policy = AUTO. If you have that parameter set to LIMITED you will not have queuing, and we will only do the auto magic if your objects are set to default parallel (so no degree specified).

    Next you want to look at the value of parallel_degree_limit. It is typically set to CPU, which in default settings equates to the Default DOP of the system. If you are testing Auto DOP itself and the impact it has on performance, you may want to leave it at this CPU setting. If you are running concurrent statements you may want to give this some more thought. See here for more information. In general, do stick with either CPU or with a specific number. For now avoid the IO setting, as I've seen some mixed results with that...

    In 11.2.0.2 you should also check that IO Calibrate has been run. Best to simply do a:

    SQL> select * from V$IO_CALIBRATION_STATUS;

    STATUS        CALIBRATION_TIME
    ------------- ----------------------------------------------------------------
    READY         04-JAN-11 10.04.13.104 AM

    You should see that your IO Calibrate is READY and therefore Auto DOP is ready. In any case, if you did not run the IO Calibrate step you will get the following note in the explain plan:

    Note
    -----
       - automatic DOP: skipped because of IO calibrate statistics are missing

    One more note on calibrate_io: if you do not have asynchronous IO enabled you will see:

    ERROR at line 1:
    ORA-56708: Could not find any datafiles with asynchronous i/o capability
    ORA-06512: at "SYS.DBMS_RMIN", line 463
    ORA-06512: at "SYS.DBMS_RESOURCE_MANAGER", line 1296
    ORA-06512: at line 7

    While this is changed in some fixes to the calibrate procedure, you should really consider switching asynchronous IO on for your data warehouse.

    Explain Plan Explanation

    To see the notes that are shown and explained here (and the above little snippet) you can use a simple explain plan mechanism. There should be no need to add +parallel etc.

    explain plan for <statement>
    SELECT PLAN_TABLE_OUTPUT FROM TABLE(DBMS_XPLAN.DISPLAY());

    Auto DOP

    The note structure displaying why Auto DOP did not work (with the exception noted above on IO Calibrate) is like this:

    Automatic degree of parallelism is disabled: <reason>

    These are the reason codes:

    - Parameter - parallel_degree_policy = MANUAL, which will not allow Auto DOP to kick in
    - Hint - One of the following hints is used: NOPARALLEL, PARALLEL(1), PARALLEL(MANUAL)
    - Outline - A SQL outline of an older version (before 11.2) is used
    - SQL property restriction - The statement type does not allow for parallel processing
    - Rule-based mode - Instead of the Cost Based Optimizer the system is using the RBO
    - Recursive SQL statement - The statement type does not allow for parallel processing
    - pq disabled/pdml disabled/pddl disabled - For some reason (alter session?) parallelism is disabled
    - Limited mode but no parallel objects referenced - Your parallel_degree_policy = LIMITED and no objects in the statement are decorated with the default PARALLEL degree. In most cases all objects have a specific degree, in which case Auto DOP will honor that degree.
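    Coming back to the IO Calibrate prerequisite above: if V$IO_CALIBRATION_STATUS does not show READY, a minimal sketch of running the calibration looks like this (the num_physical_disks and max_latency values are placeholders for your own storage; run it from a suitably privileged account during a quiet period):

    SET SERVEROUTPUT ON
    DECLARE
      l_max_iops   PLS_INTEGER;
      l_max_mbps   PLS_INTEGER;
      l_actual_lat PLS_INTEGER;
    BEGIN
      -- num_physical_disks and max_latency (ms) are example values, not recommendations
      DBMS_RESOURCE_MANAGER.CALIBRATE_IO(
        num_physical_disks => 10,
        max_latency        => 20,
        max_iops           => l_max_iops,
        max_mbps           => l_max_mbps,
        actual_latency     => l_actual_lat);
      DBMS_OUTPUT.PUT_LINE('max_iops=' || l_max_iops ||
                           ' max_mbps=' || l_max_mbps ||
                           ' latency=' || l_actual_lat);
    END;
    /

    Once the calibration has completed and the status is READY, the "skipped because of IO calibrate statistics are missing" note should no longer appear.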
    Parallel Degree Limited

    When Auto DOP does do its work, you may see the cap you imposed with parallel_degree_limit showing up in the note section of the explain plan:

    Note
    -----
       - automatic DOP: Computed Degree of Parallelism is 16 because of degree limit

    This is an obvious indication that you are being capped for this statement. There is one quite interesting case that happens when you are being capped at DOP = 1. First off, you get a serial plan, and the note changes slightly in that it does not indicate it is being capped (we hope to update the note at some point in time to be more specific). It right now looks like this:

    Note
    -----
       - automatic DOP: Computed Degree of Parallelism is 1

    Dynamic Sampling

    With 11.2.0.2 you will start seeing another interesting change in parallel plans, and since we are talking about the note section here, I figured we'd throw this in for good measure. If we deem the parallel (!) statement complex enough, we will enact dynamic sampling on your query. This happens as long as you did not change the default for dynamic sampling on the system. The note looks like this:

    Note
    -----
       - dynamic sampling used for this statement (level=5)
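    For illustration, here is a hedged sketch of pulling just these notes for a hypothetical query against a made-up SALES table; the +NOTE format modifier of DBMS_XPLAN.DISPLAY keeps the output down to the plan shape plus the note section:

    -- Hypothetical table and query; substitute your own statement
    EXPLAIN PLAN FOR
      SELECT customer_id, SUM(amount_sold)
      FROM   sales
      GROUP  BY customer_id;

    -- 'BASIC +NOTE' trims the output to the plan outline and the note section
    SELECT plan_table_output
    FROM   TABLE(DBMS_XPLAN.DISPLAY('PLAN_TABLE', NULL, 'BASIC +NOTE'));

    Depending on your settings, the note section should show either a computed degree of parallelism or one of the "disabled" reason codes listed above.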

    Read the article

  • Introducing sp_ssiscatalog (v1.0.0.0)

    - by jamiet
    Regular readers of my blog may know that over the last year I have made available a suite of SQL Server Reporting Services (SSRS) reports that provide visualisations of the data in the SQL Server Integration Services (SSIS) 2012 Catalog. Those reports are available at http://ssisreportingpack.codeplex.com. As I have built these reports and used them myself on a real-life project, a couple of things have dawned on me:

    - As soon as your SSIS Catalog gets a significant amount of data in it, the performance of the reports degrades rapidly. This is hampered by the fact that there are limitations as to the SQL statements that I can embed within an SSRS report.
    - SSIS professionals are data guys at heart, and those types of people feel more comfortable in a query environment rather than having to go through the rigmarole of standing up a reporting server (well, I know I do anyway).

    Hence I have decided to take a different tack with the reporting pack. Taking my lead from Adam Machanic’s sp_whoisactive and Brent Ozar’s sp_blitz, I have produced sp_ssiscatalog, a stored procedure that makes it easy to get at the crucial data in the SSIS Catalog. I will spend the rest of this blog explaining exactly what sp_ssiscatalog does and how to use it, but if you would rather just download the bits yourself and start to play, you can download v1.0.0.0 from DB v1.0.0.0.

    Usage Scenarios

    Most Recent Execution

    I find that the most frequent information that one needs to get from the SSIS Catalog is information pertaining to the most recent execution. Hence if you execute sp_ssiscatalog with no parameters, that is exactly what you will get:

    EXEC [dbo].[sp_ssiscatalog]

    This will return up to 5 resultsets:

    - EXECUTION - Summary information about the execution including status, start time & end time
    - EVENTS - All events that occurred during the execution
    - OnError,OnTaskFailed - All events where event_name is either OnError or OnTaskFailed
    - OnWarning - All events where event_name is OnWarning
    - EXECUTABLE_STATS - Duration and execution result of every executable in the execution

    All 5 resultsets will be displayed if there is any data satisfying that resultset. In other words, if there are no (for example) OnWarning events then the OnWarning resultset will not be displayed. The display of these 5 resultsets can be toggled respectively by these 5 optional parameters (all of which are of type BIT): @exec_execution, @exec_events, @exec_errors, @exec_warnings, @exec_executable_stats.
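    As a quick sketch of that toggling (my own example, which assumes a value of 1 shows a resultset and 0 hides it; check the procedure’s defaults before relying on this):

    -- Most recent execution, but only the errors and warnings resultsets
    EXEC [dbo].[sp_ssiscatalog]
         @exec_execution        = 0,
         @exec_events           = 0,
         @exec_errors           = 1,
         @exec_warnings         = 1,
         @exec_executable_stats = 0;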
    Any Execution

    As just explained, the default behaviour is to supply data for the most recent execution. If you wish to specify which execution to return data for, simply supply the execution_id as a parameter:

    EXEC [dbo].[sp_ssiscatalog] 6

    All Executions

    sp_ssiscatalog can also return information about all executions:

    EXEC [dbo].[sp_ssiscatalog] @operation_type='execs'

    The most recent execution will appear at the top. sp_ssiscatalog provides a number of parameters that enable you to filter the resultset: @execs_folder_name, @execs_project_name, @execs_package_name, @execs_executed_as_name, @execs_status_desc. Some typical usages might be:

    -- Return all failed executions
    EXEC [dbo].[sp_ssiscatalog] @operation_type='execs', @execs_status_desc='failed'

    -- Return all executions for a specified folder
    EXEC [dbo].[sp_ssiscatalog] @operation_type='execs', @execs_folder_name='My folder'

    -- Return all executions of a specified package in a specified project
    EXEC [dbo].[sp_ssiscatalog] @operation_type='execs', @execs_project_name='My project', @execs_package_name='Pkg.dtsx'

    Installing sp_ssiscatalog

    Under the covers sp_ssiscatalog actually calls many other stored procedures and functions, hence creating it on your server is not simply a case of running a CREATE PROCEDURE script. I maintain the code in a SQL Server Data Tools (SSDT) database project, which means that you have two ways of obtaining it.

    Download the source code

    You can download the latest (at the time of writing) source code from http://ssisreportingpack.codeplex.com/SourceControl/changeset/view/70192. Hit the download button to download all the source code in a zip file. The contents of that zip file will include an SSDT database project which you can open up in SSDT and publish just like any other SSDT database project. You can publish to a new database or any existing database, even [SSISDB] if you prefer.

    Download a dacpac

    Maintaining the code in an SSDT database project means that it can all get packaged up into a dacpac that you can then publish to your SQL Server. That dacpac is available from DB v1.0.0.0.

    Ordinarily a dacpac can be deployed to a SQL Server from SSMS using the Deploy Dacpac wizard, however in this case there is a limitation. Due to sp_ssiscatalog referring to objects in the SSIS Catalog (which it has to do, of course) the dacpac contains a SqlCmd variable to store the name of the database that underpins the SSIS Catalog; unfortunately the Deploy Dacpac wizard in SSMS has a rather gaping limitation in that it cannot deploy dacpacs containing SqlCmd variables. Hence, we can use the command-line tool, sqlpackage.exe, instead. Don’t worry if reverting to the command line sounds a little daunting, I assure you it is not. Simply open a Visual Studio command prompt, cd to the folder containing the downloaded dacpac, and type:

    "%PROGRAMFILES(x86)%\Microsoft SQL Server\110\DAC\bin\sqlpackage.exe" /action:Publish /TargetDatabaseName:SsisReportingPack /SourceFile:SSISReportingPack.dacpac /Variables:SSISDB=SSISDB /TargetServerName:(local)

    or the shortened form:

    "%PROGRAMFILES(x86)%\Microsoft SQL Server\110\DAC\bin\sqlpackage.exe" /a:Publish /tdn:SsisReportingPack /sf:SSISReportingPack.dacpac /v:SSISDB=SSISDB /tsn:(local)

    remembering to set your server name appropriately (here mine is set to “(local)”). If everything works successfully, you’re done! You’ll have a new database called [SsisReportingPack] which contains sp_ssiscatalog.

    Good luck with sp_ssiscatalog. I have been using it extensively on my own projects recently and it has proved to be very useful indeed. Rest assured, however, I will be adding many new capabilities in the future. Feedback is welcome. @Jamiet

    Read the article

  • Using a Predicate as a key to a Dictionary

    - by Tom Hines
    I really love Linq and Lambda Expressions in C#. I also love certain community forums and programming websites like DaniWeb. A user on DaniWeb posted a question about comparing the results of a game that is like poker (5-card stud), but is played with dice. The question stemmed around determining what was the winning hand. I looked at the question and issued some comments and suggestions toward a potential answer, but I thought it was a neat homework exercise.

    [A little explanation]

    I eventually realized not only could I compare the results of the hands (by name) with a certain construct – I could also compare the values of the individual dice with the same construct. That piece of code eventually became a Dictionary with the KEY as a Predicate<int> and the Value a Func<T> that returns a string from another structure that contains the mapping of an ENUM to a string. In one instance, that string is the name of the hand and in another instance, it is a string (CSV) representation of the digits in the hand. An added benefit is that the digits are returned in the order they would be for a proper poker hand. For instance, the hand 1,2,5,3,1 would be returned as ONE_PAIR (1,1,5,3,2).

    [Getting to the point]

    using System;
    using System.Collections.Generic;

    namespace DicePoker
    {
        using KVP_E2S = KeyValuePair<CDicePoker.E_DICE_POKER_HAND_VAL, string>;

        public partial class CDicePoker
        {
            /// <summary>
            /// Magical construction to determine the winner of a given hand Key/Value.
            /// </summary>
            private static Dictionary<Predicate<int>, Func<List<KVP_E2S>, string>>
                map_prd2fn = new Dictionary<Predicate<int>, Func<List<KVP_E2S>, string>>
            {
                {new Predicate<int>(i => i.Equals(0)), PlayerTie}, // first tie

                {new Predicate<int>(i => i > 0),
                    (m => string.Format("Player One wins\n1={0}({1})\n2={2}({3})",
                        m[0].Key, m[0].Value, m[1].Key, m[1].Value))},

                {new Predicate<int>(i => i < 0),
                    (m => string.Format("Player Two wins\n2={2}({3})\n1={0}({1})",
                        m[0].Key, m[0].Value, m[1].Key, m[1].Value))},

                {new Predicate<int>(i => i.Equals(0)),
                    (m => string.Format("Tie({0}) \n1={1}\n2={2}",
                        m[0].Key, m[0].Value, m[1].Value))}
            };
        }
    }

    When this is called, the code calls the Invoke method of the predicate to return a bool. The first one matching true will have its value invoked.

    private static Func<DICE_HAND, E_DICE_POKER_HAND_VAL> GetHandEval = dh =>
        map_dph2fn[map_dph2fn.Keys.Where(enm2fn => enm2fn(dh)).First()];

    After coming up with this process, I realized (with a little modification) it could be called to evaluate the individual values in the dice hand in the event of a tie.

    private static Func<List<KVP_E2S>, string> PlayerTie = lst_kvp =>
        map_prd2fn.Skip(1)
            .Where(x => x.Key.Invoke(RenderDigits(dhPlayerOne).CompareTo(RenderDigits(dhPlayerTwo))))
            .Select(s => s.Value)
            .First().Invoke(lst_kvp);

    After that, I realized I could now create a program completely without “if” statements or “for” loops!

    static void Main(string[] args)
    {
        Dictionary<Predicate<int>, Action<Action<string>>> main =
            new Dictionary<Predicate<int>, Action<Action<string>>>
        {
            {(i => i.Equals(0)), PlayGame},
            {(i => true), Usage}
        };

        main[main.Keys.Where(m => m.Invoke(args.Length)).First()].Invoke(Display);
    }

    …and there you have it. :) ZIPPED Project

    Read the article

< Previous Page | 130 131 132 133 134 135 136 137 138 139 140 141  | Next Page >