Search Results

Search found 82 results on 4 pages for 'davide'.

Page 2/4 | < Previous Page | 1 2 3 4  | Next Page >

  • Auto-Configuring SSIS Packages

    - by Davide Mauri
    SSIS Package Configurations are very useful to make packages flexible: you can change object properties at run-time and thus make the package configurable without having to open and edit it. In a complex scenario where you have dozens of packages (even the smallest BI project I worked on had 50 packages), each package may have its own configuration needs. This means that each time you have to run a package you have to pass it the correct Package Configuration. I usually use XML configuration files, and I also force everyone that works with me to make sure that an object used in several packages has the same name in all the packages where it is used, in order to simplify configuration usage. Connection Managers are a good example of one of those objects: for example, all the packages that need to access the Data Warehouse database must have a Connection Manager named DWH. Basically we define a set of "global" objects so that we can have a single configuration file for them, usable by all packages. If a package has some specific configuration needs, we create a specific – or "local" – XML configuration file, or we set the value that needs to be configured at runtime using DTLoggedExec's Package Parameters: http://dtloggedexec.davidemauri.it/Package%20Parameters.ashx
    Now, how can we improve this even further? I'd like to have a package that, when it's run, automatically goes "somewhere", searches for its global or local configuration, loads it and applies it to itself. That's the basic idea of Auto-Configuring Packages. The "somewhere" is a SQL Server table, defined as sketched below. In this table you'll put the values that you want to be used at runtime by your package. The ConfigurationFilter column specifies to which package a configuration line has to be applied: a package will use a line only if the value in the ConfigurationFilter column is equal to its name. In the above sample, only the package named "simple-package" will use line number two. There is an exception here: the $$Global value indicates a configuration row that has to be applied to any package. With this simple behavior it's possible to replicate the "global" and "local" configuration approach I've described before. The ConfigurationValue column contains the value you want to be applied at runtime, and the PackagePath column contains the object to which that value will be applied. The ConfiguredValueType column defines the data type of the value, and the Checksum column contains a calculated value – simply the hash of ConfigurationFilter plus PackagePath – so that it can be used as a Primary Key to guarantee uniqueness of configuration rows. As you may have noticed, the table is very similar to the table originally used by SSIS to put DTS Configurations into SQL Server tables: SQL Server SSIS Configuration Type: http://msdn.microsoft.com/en-us/library/ms141682.aspx
    Now, how does it work? It's very easy: you just have to call DTLoggedExec with the /AC option: DTLoggedExec.exe /FILE:"mypackage.dtsx" /AC:"localhost;ssis_auto_configuration;ssiscfg.configuration". The /AC option expects a string with the format <database_server>;<database_name>;<table_name>; only Windows Authentication is supported. When DTLoggedExec finds an Auto-Configuration request, it injects a new connection manager into the loaded package.
    The injected connection manager is named $$DTLoggedExec_AutoConfigure and is used by the two SQL Server DTS Configurations ($$DTLoggedExec_Global and $$DTLoggedExec_Local), also injected by DTLoggedExec, to load the "local" and "global" configurations. Now, you may start to wonder why this approach is needed at all, instead of simply always passing two XML DTS Configuration files to a package (one for the "local" and one for the "global" configuration), doing something like this: DTLoggedExec.exe /FILE:"mypackage.dtsx" /CONF:"global.dtsConfig" /CONF:"mypackage.dtsConfig". The problem is that this approach doesn't work if one of the two configuration files contains a value that has to be applied to an object that doesn't exist in the loaded package: that situation raises an error that halts package execution. To solve this problem, you may want to create a configuration file for each package. Unfortunately this makes deployment and management harder, since you'll have to deal with a great number of configuration files. The Auto-Configuration approach solves all these problems at once! We're using it in a project where we have hundreds of packages, and I can tell you that deployment of packages and their configuration for the pre-production and production environments has never been so easy! To use the Auto-Configuration option you have to download the latest DTLoggedExec release: http://dtloggedexec.codeplex.com/releases/view/62218 Feedback, as usual, is very welcome!
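    For reference, here is a minimal sketch of how such a configuration table could be defined, based purely on the columns described above; the exact definition used by DTLoggedExec may differ, so treat the names, sizes and the checksum computation as assumptions:

        -- Hypothetical schema/table matching the /AC example above:
        -- ssis_auto_configuration database, ssiscfg.configuration table
        CREATE SCHEMA ssiscfg;
        GO
        CREATE TABLE ssiscfg.configuration
        (
            ConfigurationFilter NVARCHAR(255) NOT NULL,  -- package name, or $$Global for rows applied to every package
            ConfigurationValue  NVARCHAR(255) NOT NULL,  -- value applied at runtime
            PackagePath         NVARCHAR(255) NOT NULL,  -- e.g. \Package.Connections[DWH].Properties[ConnectionString]
            ConfiguredValueType NVARCHAR(20)  NOT NULL,  -- e.g. String, Int32, Boolean
            -- hash of filter + path, used as primary key to guarantee uniqueness of configuration rows
            [Checksum] AS CHECKSUM(ConfigurationFilter, PackagePath) PERSISTED NOT NULL,
            CONSTRAINT PK_configuration PRIMARY KEY ([Checksum])
        );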

    Read the article

  • Execute a SSIS package in Sync or Async mode from SQL Server 2012

    - by Davide Mauri
    Today I had to schedule a package stored in the shiny new SSIS Catalog that comes with SQL Server 2012 (http://msdn.microsoft.com/en-us/library/hh479588(v=SQL.110).aspx). Once your packages are stored there, they are executed using the new stored procedures created for this purpose. The script that gets executed if you run your package right from Management Studio, or through a SQL Server Agent job, will be similar to the following:

        Declare @execution_id bigint
        EXEC [SSISDB].[catalog].[create_execution] @package_name='my_package.dtsx', @execution_id=@execution_id OUTPUT, @folder_name=N'BI', @project_name=N'DWH', @use32bitruntime=False, @reference_id=Null
        Select @execution_id
        DECLARE @var0 smallint = 1
        EXEC [SSISDB].[catalog].[set_execution_parameter_value] @execution_id, @object_type=50, @parameter_name=N'LOGGING_LEVEL', @parameter_value=@var0
        DECLARE @var1 bit = 0
        EXEC [SSISDB].[catalog].[set_execution_parameter_value] @execution_id, @object_type=50, @parameter_name=N'DUMP_ON_ERROR', @parameter_value=@var1
        EXEC [SSISDB].[catalog].[start_execution] @execution_id
        GO

    The problem here is that the procedure simply starts the execution of the package and returns as soon as the package has been started… thus giving you the opportunity to execute packages asynchronously from your T-SQL code. This is just *great*, but what happens if I want to execute a package and WAIT for it to finish (thus having a synchronous execution of it)? You have to be sure that you add the "SYNCHRONIZED" parameter to the package execution, before the start_execution procedure:

        exec [SSISDB].[catalog].[set_execution_parameter_value] @execution_id, @object_type=50, @parameter_name=N'SYNCHRONIZED', @parameter_value=1

    And that's it. PS: From RC0 onwards, the SYNCHRONIZED parameter is automatically added each time you schedule a package execution through SQL Server Agent. If you're using an external scheduler, just keep this post in mind.
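    Putting the pieces together, here is a minimal sketch of a fully synchronous execution with a final outcome check; the status codes in the comment (7 = succeeded, 4 = failed) are assumed from the SSISDB catalog documentation:

        DECLARE @execution_id BIGINT;

        EXEC [SSISDB].[catalog].[create_execution]
            @folder_name     = N'BI',
            @project_name    = N'DWH',
            @package_name    = N'my_package.dtsx',
            @use32bitruntime = 0,
            @reference_id    = NULL,
            @execution_id    = @execution_id OUTPUT;

        -- Ask for a synchronous execution: start_execution will not return until the package has finished
        EXEC [SSISDB].[catalog].[set_execution_parameter_value]
            @execution_id,
            @object_type     = 50,
            @parameter_name  = N'SYNCHRONIZED',
            @parameter_value = 1;

        EXEC [SSISDB].[catalog].[start_execution] @execution_id;

        -- The package is done at this point: check how it went
        SELECT status                       -- 7 = succeeded, 4 = failed (assumed from the SSISDB docs)
        FROM [SSISDB].[catalog].[executions]
        WHERE execution_id = @execution_id;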

    Read the article

  • Back from PASS Europe 2010

    - by Davide Mauri
    PASS Europe 2010 is finished and I'm now finally back home, where I will stay for a while. I would like to thank all the people who came to my sessions for their feedback, especially for the "Adaptive BI" session! Slides and demos should be available for download from the PASS European Conference website in a couple of days. Meanwhile, if you want to rate my sessions online, you can do it here: Adaptive BI http://speakerrate.com/talks/3136-adaptive-bi-best-practices Blazing Fast Queries http://speakerrate.com/talks/3135-blazing-fast-queries-when-indexes-are-not-enough

    Read the article

  • Wordpress Installation (on IIS and SQL Server)

    - by Davide Mauri
    To proceed with the installation of WordPress on SQL Server and IIS, first of all you need to do the following steps (see the T-SQL sketch at the end of this post for the first two): create a database on SQL Server that will be used by WordPress; create a login that can access the newly created database and add its user to the db_ddladmin, db_datareader and db_datawriter roles; download and unpack the WordPress 3.3.2 (latest version as of 27 May 2012) zip file into a directory of your choice; download the wp-db-abstraction 1.1.4 (latest version as of 27 May 2012) plugin from the wordpress.org website. Now that the basic actions have been done, you can start to set up and configure your WordPress installation. Unpack and follow the instructions in the README.TXT file to install the Database Abstraction Layer. Mainly you have to: upload wp-db-abstraction.php and the wp-db-abstraction directory to wp-content/mu-plugins (this should be parallel to your regular plugins directory; if the mu-plugins directory does not exist, you must create it), and put the db.php file from inside the wp-db-abstraction directory into wp-content/db.php. Now you can create an application pool in IIS like the following one, and create a website, using the above Application Pool, that points to the folder where you unpacked the WordPress files. Be sure to give the "Write" permission to the IIS account, as pointed out in this (old, but still quite valid) installation manual: http://wordpress.visitmix.com/development/installing-wordpress-on-sql-server#iis Now you're ready to go. Point your browser to the configured website and the WordPress installation screen will be there for you. When you're asked to enter the information to connect to the MySQL database, simply skip that page, leaving the default values. If you have installed the Database Abstraction Layer, another database installation screen will appear after the MySQL one, and here you can enter the configuration information needed to connect to SQL Server. After having finished the installation steps, you should be able to access and navigate your WordPress site. A final touch, and it's done: just add the needed rewrite rules http://wordpress.visitmix.com/development/installing-wordpress-on-sql-server#urlrewrite and that's it! Well, not really. Unfortunately the current (as of 27 May 2012) version of the Database Abstraction Layer (1.1.4) has some bugs. Luckily they can be quickly fixed: Backslash Fix http://wordpress.org/support/topic/plugin-wp-db-abstraction-fix-problems-with-backslash-usage Select Top 0 Fix: make the change to the file ".\wp-content\mu-plugins\wp-db-abstraction\translations\sqlsrv\translations.php" suggested by "debettap" http://sourceforge.net/tracker/?func=detail&aid=3485384&group_id=315685&atid=1328061 And now you have a 100% working WordPress installation on SQL Server! Since I also wanted to take advantage of SQL Server Full-Text Search, I've created a very simple WordPress plugin to set up full-text search and use it as the website search engine: http://wpfts.codeplex.com/ Enjoy!
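    A minimal T-SQL sketch of the first two steps, assuming a database named WordpressDB and a SQL login named wordpress (names and password are illustrative; on pre-2012 instances use sp_addrolemember instead of ALTER ROLE):

        -- 1. Database that will host the WordPress tables
        CREATE DATABASE WordpressDB;
        GO

        -- 2. Login and database user, added to the three roles mentioned above
        CREATE LOGIN wordpress WITH PASSWORD = N'Use-a-Str0ng-Password-Here!';
        GO

        USE WordpressDB;
        GO
        CREATE USER wordpress FOR LOGIN wordpress;
        ALTER ROLE db_ddladmin   ADD MEMBER wordpress;
        ALTER ROLE db_datareader ADD MEMBER wordpress;
        ALTER ROLE db_datawriter ADD MEMBER wordpress;
        GO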

    Read the article

  • How can the number of indexes built on a table impact performance?

    - by Davide Mauri
    We all know that putting too many indexes (I'm talking about non-clustered indexes only, of course) on a table may produce performance problems, due to the overhead that each index brings to all insert/update/delete operations on that table. But how much? I mean, we all agree – I think – that, generally speaking, having many indexes on a table is "bad". But how bad can it be? How much will performance degrade? And on a concurrent system, how much can this situation also hurt SELECT performance? If SQL Server takes more time to update a row in a table because of the number of indexes it also has to update, this also means that locks will be held longer, slowing down the perceived performance of all the queries involved. I was quite curious to measure this, also because when teaching it's far more impressive and effective to show attendees a chart with the measured impact, so that they can really "feel" what it means!
    To do the tests, I've created a script that creates a table (with a clustered index on the primary key, which is an identity column), loads 1000 rows into the table (inserting the 1000 rows with a single insert, instead of issuing 1000 single-row inserts, in order to minimize the transaction-handling overhead that would otherwise be incurred), and measures the time taken to do it. The process is then repeated 16 times, each time adding a new index to the table, using columns from the table in a round-robin fashion. Tests are done against different row sizes, so that it's possible to check whether performance changes depending on row size.
    The results are interesting, although expected. This is the chart showing how much time it takes to insert 1000 rows into a table that has from 0 to 16 non-clustered indexes. Each test has been run 20 times in order to have an average value, and the values have been cleaned of outliers caused by unpredictable fluctuations due to machine activity. The test shows that in a table with a row size of 80 bytes, 1000 rows can be inserted in 9.05 msec if no indexes are present on the table, and the value grows up to 88 (!!!) msec when you have 16 indexes on it. This means an impact on performance of 975%. That's *huge*! Now, what happens if we have a bigger row size? Say we have a table with a row size of 1520 bytes. Here's the data, from 0 to 16 indexes on that table: in this case we need nearly 22 msec to insert 1000 rows into a table with no indexes, but we need more than 500 msec if the table has 16 active indexes! Now we're talking about a 2410% impact on performance!
    Now we have a tangible idea of the impact of having (too?) many indexes on a table, and also of how the size of a row impacts performance. That's why the golden rule of OLTP databases, "few indexes, but good ones", is so true! (And in fact last week I saw a database with tables with a 1700-byte row size and 23 (!!!) indexes on them!) This also means that too heavy a denormalization is really not a good idea (we're always talking about OLTP systems, keep it in mind), since performance gets worse as row size increases. So, be careful out there, and keep in mind that "equilibrium" is the key word for a database professional: equilibrium between read and write performance, between normalization and denormalization, between too few and too many indexes.
    PS: Tests were done on a VMware Workstation 7 VM with 2 CPUs and 4 GB of memory. The host machine is a Dell Precision M6500 with an i7 Extreme X920 Quad-Core HT 2.0GHz and 16GB of RAM. The database is stored on an Intel X25-E SSD drive, uses the Simple recovery model, and runs on SQL Server 2008 R2. If you want to run the tests on your own, you can download the test script here: TestIndexPerformance.sql
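    The linked script is the authoritative version; just to illustrate the idea, here is a minimal sketch of one measurement step (table, column and index names are made up, and the timing is a simple elapsed-time delta rather than the outlier-cleaned average used for the charts):

        -- Target table: clustered index on the identity primary key
        CREATE TABLE dbo.TestTable
        (
            id   INT IDENTITY(1,1) PRIMARY KEY CLUSTERED,
            col1 VARCHAR(20), col2 VARCHAR(20), col3 VARCHAR(20), col4 VARCHAR(20)
        );
        GO

        -- Non-clustered indexes added round-robin on the columns (repeat up to 16)
        CREATE NONCLUSTERED INDEX ix1 ON dbo.TestTable (col1);
        CREATE NONCLUSTERED INDEX ix2 ON dbo.TestTable (col2);
        GO

        -- Measure a single 1000-row INSERT
        DECLARE @t0 DATETIME2 = SYSDATETIME();

        INSERT INTO dbo.TestTable (col1, col2, col3, col4)
        SELECT TOP (1000) 'aaa', 'bbb', 'ccc', 'ddd'
        FROM sys.all_objects a CROSS JOIN sys.all_objects b;

        SELECT DATEDIFF(MILLISECOND, @t0, SYSDATETIME()) AS elapsed_ms;
        GO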

    Read the article

  • SSISDB Analysis Script on Gist

    - by Davide Mauri
    I've created two simple, yet very useful, scripts to extract some data that helps to quickly monitor SSIS package executions in SQL Server 2012 and later: get-ssis-execution-status and get-ssis-data-pumped-rows. I've started to use Gist since it comes in very handy for these "quick'n'dirty" scripts and snippets, and you can find the above scripts and others there (hopefully the number will increase over time... I plan to use Gist to store all the code snippets I used to keep in a dedicated folder on my machine). Now, back to the aforementioned scripts. The first one ("get-ssis-execution-status") returns a list of all executed and executing packages along with the latest successful and running executions (so that one can have an idea of the expected run time), error messages, and warning messages related to duplicate rows found in lookups. The second one ("get-ssis-data-pumped-rows") returns information on Data Flow status. Here there's something interesting, IMHO. Nothing exceptional, let it be clear, but nonetheless useful: the script extracts information on destinations, and on the rows sent to destinations, right from the messages produced by the Data Flow component. This helps to quickly understand how many rows have been sent and where... without having to increase the logging level. Enjoy! PS: I haven't tested them with SQL Server 2014, but AFAIK they should work without problems. Of course any feedback on this is welcome.
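    Just to give an idea of the ground the first script covers, here is a minimal sketch of the kind of query it is built around (the real gist adds the last successful execution, expected run time and error/warning messages; the [catalog].[executions] columns used below are the standard SSISDB ones):

        -- Latest SSIS executions, most recent first
        SELECT TOP (50)
            e.folder_name,
            e.project_name,
            e.package_name,
            e.status,                                          -- 7 = succeeded, 4 = failed (per the SSISDB docs)
            e.start_time,
            e.end_time,
            DATEDIFF(SECOND, e.start_time, e.end_time) AS duration_sec
        FROM [SSISDB].[catalog].[executions] AS e
        ORDER BY e.execution_id DESC;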

    Read the article

  • SQL Rally Nordic & Amsterdam slides & demos

    - by Davide Mauri
    Last week I had the pleasure of speaking at two GREAT conferences, as you can see from the wordclouds I've posted, here for Stockholm and here for Amsterdam (I used two different filtering techniques to produce the wordclouds, that's why they look different. I'm playing a lot with R these days… so I like to experiment with different stuff). The workshop with my friend Thomas Kejser on "Data Warehouse Modeling – Making the Right Choices" and my sessions on "Automating DWH Patterns through Metadata" were really appreciated by attendees, given the amount of good feedback I had on Twitter and in some blog posts (here and here). Of course many asked for slides & demos to download, so here they are! Automating DWH Patterns through Metadata – Stockholm http://sdrv.ms/1bcRAaW Automating DWH Patterns through Metadata – Amsterdam http://sdrv.ms/1cNDAex I'm still trying to understand if and how I can publicly post the slides & demos of the workshop, so for that you'll have to wait a couple of days; I will post about it as soon as possible. Anyway, if you were in the workshop and would like to get the slides & demos ASAP, just send me an email and I'll happily send you the protected link to my SkyDrive folder. Enjoy!

    Read the article

  • DTLoggedExec 1.0.0.2 Released

    - by Davide Mauri
    These last days have been full of work, and the next days, up until the end of July, will follow the same ultra-busy scheme. This makes the improvement of DTLoggedExec a little slower than I would like but, nonetheless, on Friday I was able to release an updated version of the tool that fixes a bug and adds a very convenient option to make the creation of execution logs even more straightforward: [bugfix] Fixed a bug that prevented loading packages from the SSIS Package Store; [new] Added support for the {filename} placeholder in both the Data Flow Profiling and CSV Log Provider. The added feature allows generating Data Flow profile logs and CSV logs that have the same name as the package that generated them, e.g.: DTLoggedExec.exe /FILE:"MyPackage.dtsx" /LPA:"FILE=C:\Log\{filename}_{date}_{time}.dtsCSVLog"

    Read the article

  • PASS Summit 2010 BI Workshop Feedbacks

    - by Davide Mauri
    As many other speakers have already done, I'd like to share with the SQL community the feedback from my PASS Summit 2010 workshop. For those who were not there, my workshop was "BI From A-Z" and its main objective was to introduce people to the BI world not only from a technical point of view, but also insisting a lot on the methodological and "engineered" approach. The will to put more engineering into IT (and especially into the BI field) is something that has been growing stronger and stronger in me every day over the last 5 years, since I simply envy the fact that Airbus, Fincantieri, BMW (just to name a few) can create very complex machines "just" by putting people together and giving them some rules to follow (of course this is an oversimplification, but I think you get what I mean). The key point of engineering is that, after having defined the project blueprint, you can give a huge number of people the rules to follow, the correct tools to implement those rules easily and semi-automatically, and a way to measure the quality of the results. Could this be done in IT? Very big question, so my scope here is limited to BI.
    So that's the main point of my workshop: an entry-level approach to BI (the level was 200) that allows attendees to learn the basics, to understand which tools they should use for which purpose and, above all, a set of rules and tools that make a BI solution scalable in terms of the people working on it, while still maintaining very good quality. All this was done not by focusing only on practice, but by explaining the theory behind it, to show how much it can help to build a correct solution regardless of the technology used to implement it. The idea is to reach a point where more than 70% of the work done to create a BI solution can be reused even if technologies change. This is a very demanding challenge nowadays, with the coming of Denali, its column-aligned storage and the shiny new DAX language. As you may understand, I was looking forward to getting the feedback, since you may have noticed that there's a lot of "architectural" stuff in IT but really nothing on "engineering", so how the session would be perceived by the attendees was really unknown to me. The feedback could also give a good indication of whether the need for more "engineering" is something only I feel, or something broader.
    I'm very happy to be able to say that the overall score of 4.75 put my workshop in the top 20 sessions (out of nearly 200)! Here are the detailed evaluations:
    How would you rate the usefulness of the information presented in your day-to-day environment? 4.75 (3: 1 response, 4: 12, 5: 42)
    How would you rate the Speaker's presentation skills? 4.80 (3: 1, 4: 9, 5: 45)
    How would you rate the Speaker's knowledge of the subject? 4.95 (4: 3, 5: 52)
    How would you rate the accuracy of the session title, description and experience level to the actual session? 4.75 (3: 2, 4: 10, 5: 43)
    How would you rate the amount of time allocated to cover the topic/session? 4.44 (3: 7, 4: 17, 5: 31)
    How would you rate the quality of the presentation materials? 4.62 (4: 21, 5: 34)
    The comments were all very positive. Many of them asked for more time on the subject (or to shorten the very last topics). I'll treasure these comments and review the content accordingly. We'll organize two-day classes on this topic, where more examples will be shown and some topics will be explained more deeply. I'd just like to answer one comment that asks how much of what I showed is "universally applicable". I can tell you that all of our BI projects follow these rules and that they've been applied to different markets (Insurance, Fashion, GDO – large-scale retail) with different people and different teams, and they have allowed us to be "Adaptive" towards the customer. The better the rules are defined, and the more tools there are supporting their implementation, the easier it is to add new people to the project and to add or change solution features. Think of a car: how come almost any mechanic can help you fix a problem? Because they know what to expect. Because there are rules that allow them to identify the problem without having to discover each time how the car has been built. And this is of course also true for car upgrades/improvements. Last but not least: thanks a lot to everyone for coming!

    Read the article

  • DTLoggedExec 1.1.2008.4 Released!

    - by Davide Mauri
    Today I've released the latest version of my DTExec replacement tool, DTLoggedExec. The main changes are the following: a new strategy for version numbers, which will now follow the pattern Major.Minor.TargetSQLServerVersion.Revision; added support for Auto-Configurations; fixed a bug that reported an incorrect number of errors and warnings to Log Providers; fixed a bug that prevented correct casting of values when using the /Set and /Param options; errors and warnings are now counted more precisely; updated database and log import scripts to categorize logs by project and section (e.g. Project: MyBIProject; Sections: Staging, Datawarehouse); removed unused report stored procedures from the database; updated samples: 12 samples are now available to show ALL DTLoggedExec features; from this version only SSIS 2008 will be supported. http://dtloggedexec.codeplex.com/releases/view/62218 It is useful to say something more about a couple of specific points. From this version only SSIS 2008 will be supported: yes, Integration Services 2005 is not supported anymore; the latest version capable of running SSIS 2005 packages is 1.0.0.2. Updated database and log import scripts to categorize logs by projects and sections: when you import a log file, you can now assign it to a Project and to a Section of that project. In this way it's easier to gather statistical information for an entire project or a subsection of it. This also allows storing logged data of packages belonging to different projects in the same database. Updated samples: a complete set of samples that shows how to use all DTLoggedExec features is now shipped with the product. Added support for Auto-Configurations: this point will have a post of its own, since it's quite important and is by far the biggest new feature introduced in this release. To explain it in a few words, I can just say that you don't need to waste time with complex DTS configuration files or options, since a package will configure itself automatically; you just need to pass a single statement as a parameter to DTLoggedExec. This feature can simplify deployment *a lot* :) In the next days I'll write the mentioned post on Auto-Configurations and I'll update the documentation available on the DTLoggedExec website: http://dtloggedexec.davidemauri.it/MainPage.ashx Enjoy!

    Read the article

  • DAX editor for SQL Server

    - by Davide Mauri
    One of the major criticisms of DAX is the lack of a decent editor and, more generally, of a dedicated IDE like the one we have for T-SQL or MDX. Well, this is no longer true. On CodePlex, a very interesting and promising Visual Studio 2010 extension was released at the beginning of November 2011: http://daxeditor.codeplex.com/ IntelliSense, syntax highlighting and all the typical features offered by Visual Studio are also available for DAX. Right now you have to download the source code and compile it, and that's it!

    Read the article

  • SQL Server 2012 RTM Available!

    - by Davide Mauri
    SQL Server 2012 is available for download! http://www.microsoft.com/sqlserver/en/us/default.aspx The Evaluation version is available here: http://www.microsoft.com/download/en/details.aspx?id=29066 and along with the SQL Server 2012 RTM there's also the Feature Pack: http://www.microsoft.com/download/en/details.aspx?id=29065 The Feature Pack is full of useful and interesting stuff: some of it is required by specific features, like the Semantic Language Statistics Database, and some of it is a very good (I would say necessary) download if you use certain technologies, like MDS or Data Mining. Btw, for Data Mining the updated Excel Add-in has also been released and is available in the Feature Pack. As if this were not enough, the SQL Server Data Tools IDE has also been released in RTM: http://msdn.microsoft.com/en-us/data/hh297027 Remember that SQL Server Data Tools is completely free and can be used with SQL Server 2005 and later. Happy downloading!

    Read the article

  • Data Warehouse Workshop

    - by Davide Mauri
    I’m really really pleased to announce that it’s possible to register to the Data Warehouse Workshop that I and Thomas Kejser developed togheter.  Several months ago we decided to join forces in order to create a workshop that would contain not only the theoretical stuff, but also the experience we both have and all the best practices and lesson learned that can make the difference between a success and a failure when building a Data Warehouse. The first sheduled date is 7 February in Kista (Sweden): http://www.eventzilla.net/web/event?eventid=2138965081 and until 30th November there is the Super Early Bird to save more the 100€ (150$). The workshop will be very similar to the one I delivered at PASS Summit summit, with some extra technical stuff since it’s one hour longer. In addition to that for this first version both me and Thomas will be present, so it’s a great change  to make sure you super-charge your DW/BI project with insights that aren’t available anywhere else! If you’re into the BI field and you live in Europe, don’t miss this opportunity!

    Read the article

  • PASS Summit Preconference and Sessions

    - by Davide Mauri
    I’m very pleased to announce that I’ll be delivering a Pre-Conference at PASS Summit 2012. I’ll speak about Business Intelligence again (as I did in 2010) but this time I’ll focus only on Data Warehouse, since it’s big topic even alone. I’ll discuss not only what is a Data Warehouse, how it can be modeled and built, but also how it’s development can be approached using and Agile approach, bringing the experience I gathered in this field. Building the Agile Data Warehouse with SQL Server 2012 http://www.sqlpass.org/summit/2012/Sessions/SessionDetails.aspx?sid=2821 I’m sure you’ll like it, especially if you’re starting to create a BI Solution and you’re wondering what is a Data Warehouse, if it is still useful nowadays that everyone talks about Self-Service BI and In-Memory databases, and what’s the correct path to follow in order to have a successful project up and running. Beside this Preconference, I’ll also deliver a regular session, this time related to database administration, monitoring and tuning: DMVs: Power in Your Hands http://www.sqlpass.org/summit/2012/Sessions/SessionDetails.aspx?sid=3204 Here we’ll dive into the most useful DMVs, so that you’ll see how that can help in everyday management in order to discover, understand and optimze you SQL Server installation, from the server itself to the single query. See you there!!!!!

    Read the article

  • PASS Summit for SQL Starters

    - by Davide Mauri
    I’ve received a buch of emails from PASS Summit “First Timers” that are also somehow new to SQL Server (for “somehow” I mean people with less than 6 month experience but with some basic knowledge of SQL Server engine) or are catching up from SQL Server 2000. The common question regards the session one should not miss to have a broad view of the entire SQL Server platform have some insight into some specific areas of SQL Server Given that I’m on (semi-)vacantion and that I have more free time (not true, I have to prepare slides & demos for several conferences, PASS Summit  - Building the Agile Data Warehouse with SQL Server 2012 - and PASS 24H - Agile Data Warehousing with SQL Server 2012 - among them…but let’s pretend it to be true), I’ve decided to make a post to answer to this common questions. Of course this is my personal point of view and given the fact that the number and quality of session that will be delivered at PASS Summit is so high that is very difficoult to make a choice, fell free to jump into the discussion and leave your feedback or – even better – answer with another post. I’m sure it will be very helpful to all the SQL Server beginners out there. I’ve imposed to myself to choose 6 session at maximum for each Track. Why 6? Because it’s the maximum number of session you can follow in one day, and given that all the session will be on the Summit DVD, they are the answer to the following question: “If I have one day to spend in training, which session I should watch?”. Of course a Summit is not like a Course so a lot of very basics concept of well-established technologies won’t be found here. Analysis Services, Integration Services, MDX are not part of the Summit this time (at least for the basic part of them). Enough with that, let’s start with the session list ideal to have a good Overview of all the SQL Server Platform: Geospatial Data Types in SQL Server 2012 Inside Unstructured Data: SQL Server 2012 FileTable and Semantic Search XQuery and XML in SQL Server: Common Problems and Best Practice Solutions Microsoft's Big Play for Big Data Dashboards: When to Choose Which MSBI Tool Microsoft BI End-User Tools 360° for what concern Database Development, I recommend the following sessions Understanding Transaction Isolation Levels What to Look for in Execution Plans Improve Query Performance by Fixing Bad Parameter Sniffing A Window into Your Data: Using SQL Window Functions Practical Uses and Optimization of New T-SQL Features in SQL Server 2012 Taking MERGE Beyond the Basics For Business Intelligence Information Delivery Analyzing SSAS Data with Excel Building Compelling Power View Reports Managed Self-Service BI PowerPivot 101  SharePoint for Business Intelligence The Best Microsoft BI Tools You've Never Heard Of and for Business Intelligence Architecture & Development BI Power Hour Building a Tabular Model Database Enterprise Information Management: Bringing Together SSIS, DQS, and MDS SSIS Design Patterns Storing Columnstore Indexes Hadoop and Its Ecosystem Components in Action Beside the listed sessions, First Timers should also take a look the the page PASS set up for them: http://www.sqlpass.org/summit/2012/Connect/FirstTimers.aspx See you at PASS Summit!

    Read the article

  • SSIS Dashboard 0.5.2 and Live Demo Website

    - by Davide Mauri
    In the last days I’ve worked again on the SQL Server Integration Service Dashboard and I did some updates: Beta Added support for "*" wildcard in project names. Now you can filter a specific project name using an url like: http://<yourserver>/project/MyPro* Added initial support for Package Execution History. Just click on a package name and you'll see its latest 15 executions and I’ve also created a live demo website for all those who want to give it a try before downloading and using it: http://ssis-dashboard.azurewebsites.net/

    Read the article

  • DevWeek Slides & Demos available for download

    - by Davide Mauri
    Anyone interested can download the slides, demos and demo database (WhitepagesDB) of my "SQL Server best practices for developers" session here: http://www.davidemauri.it/resources/slide--demos.aspx Happy downloading! :)

    Read the article

  • QueryUnit 0.0.0.8 – Trust No One

    - by Davide Mauri
    Yesterday I’ve release an updated version of QueryUnit, the version 0.0.0.8. QueryUnit now supports AreNotEqual, Greater, and Less assertions and is more capable of managing strings results. I must say that I cannot live anymore without a proper Unit Testing of a BI solution. Just yesterday happened that one of the unit tests at a customer site failed showing a subtle situation where the release of a new version of custom application would have corrupted the source of BI data with a very low chance that someone would have noticed it before several days. It may happen when you have more the 15 systems that handles the data needed by your BI solution. The key message of this situation is “Trust No One”: if your data hasn’t passed quality testing it’s not trustable. Period. QueryUnit is now officialy an hero :) No superpowers still, but useful above all. http://queryunit.codeplex.com/ Share this post: email it! | bookmark it! | digg it! | reddit! | kick it! | live it!

    Read the article

  • Insert or Update (aka Replace or Upsert)

    - by Davide Mauri
    The topic is really not new but, since it's the second time in a few days that I've had to explain it to different customers, I think it's worth making a post out of it. Many times developers would like to insert a new row into a table or, if the row already exists, update it with new data. MySQL has a specific statement for this action, called REPLACE: http://dev.mysql.com/doc/refman/5.0/en/replace.html or the INSERT … ON DUPLICATE KEY UPDATE option: http://dev.mysql.com/doc/refman/5.0/en/insert-on-duplicate.html With SQL Server you can do the very same thing in a more standard way, using the MERGE statement with the support of row constructors. Let's say that you have this table:

        CREATE TABLE dbo.MyTargetTable
        (
            id INT NOT NULL PRIMARY KEY IDENTITY,
            alternate_key VARCHAR(50) UNIQUE,
            col_1 INT,
            col_2 INT,
            col_3 INT,
            col_4 INT,
            col_5 INT
        )
        GO

        INSERT [dbo].[MyTargetTable] VALUES
        ('GUQNH', 10, 100, 1000, 10000, 100000),
        ('UJAHL', 20, 200, 2000, 20000, 200000),
        ('YKXVW', 30, 300, 3000, 30000, 300000),
        ('SXMOJ', 40, 400, 4000, 40000, 400000),
        ('JTPGM', 50, 500, 5000, 50000, 500000),
        ('ZITKS', 60, 600, 6000, 60000, 600000),
        ('GGEYD', 70, 700, 7000, 70000, 700000),
        ('UFXMS', 80, 800, 8000, 80000, 800000),
        ('BNGGP', 90, 900, 9000, 90000, 900000),
        ('AMUKO', 100, 1000, 10000, 100000, 1000000)
        GO

    If you want to insert or update a row, you can just do this:

        MERGE INTO
            dbo.MyTargetTable T
        USING
            (SELECT * FROM (VALUES ('ZITKS', 61, 601, 6001, 60001, 600001)) Dummy(alternate_key, col_1, col_2, col_3, col_4, col_5)) S
        ON
            T.alternate_key = S.alternate_key
        WHEN NOT MATCHED THEN
            INSERT VALUES (alternate_key, col_1, col_2, col_3, col_4, col_5)
        WHEN MATCHED AND T.col_1 != S.col_1 THEN
            UPDATE SET
                T.col_1 = S.col_1,
                T.col_2 = S.col_2,
                T.col_3 = S.col_3,
                T.col_4 = S.col_4,
                T.col_5 = S.col_5
        ;

    If you want to insert/update more than one row at once, you can super-charge the idea using Table-Valued Parameters, which you can send directly from your .NET application, as sketched below. Easy, powerful and effective.
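    As a sketch of the Table-Valued Parameter idea mentioned above (the type and procedure names are illustrative, not from the original post):

        -- Table type matching the source rows
        CREATE TYPE dbo.MyTargetTableType AS TABLE
        (
            alternate_key VARCHAR(50) PRIMARY KEY,
            col_1 INT, col_2 INT, col_3 INT, col_4 INT, col_5 INT
        );
        GO

        -- Upsert procedure: @rows is filled from .NET as a SqlDbType.Structured parameter
        CREATE PROCEDURE dbo.UpsertMyTargetTable
            @rows dbo.MyTargetTableType READONLY
        AS
        BEGIN
            MERGE INTO dbo.MyTargetTable T
            USING @rows S
                ON T.alternate_key = S.alternate_key
            WHEN NOT MATCHED THEN
                INSERT (alternate_key, col_1, col_2, col_3, col_4, col_5)
                VALUES (S.alternate_key, S.col_1, S.col_2, S.col_3, S.col_4, S.col_5)
            WHEN MATCHED THEN
                UPDATE SET
                    T.col_1 = S.col_1, T.col_2 = S.col_2, T.col_3 = S.col_3,
                    T.col_4 = S.col_4, T.col_5 = S.col_5;
        END
        GO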

    Read the article

  • Nautilus (File) 3.10.1 how to lock drag and drop bookmarks

    - by Davide Marchi
    Recent Nautilus releases add the ability to create shortcuts/bookmarks simply by dragging and dropping a folder onto the Nautilus left panel. However, this behavior can unintentionally create additional bookmarks if you are not very accurate when dragging files onto bookmarks, especially for people with little computer experience. Is it possible to disable drag-and-drop bookmark creation? Or, ideally, could drag-and-drop bookmark creation be made to work only when a key on the keyboard is pressed at the same time?

    Read the article

  • The one feature that would make me invest in SSIS 2012

    - by Peter Larsson
    This week I was invited by Microsoft to give two presentations in Slovenia. My presentations went well, I had good energy and the audience was interacting with me. When I had some time left over from networking and partying, I attended a few other presentations – at least the ones that were held in English. One of these was "SQL Server Integration Services 2012 - All the News, and More", given by Davide Mauri, a fellow co-worker from SolidQ. We started to talk and soon got into the details of the new things in SSIS 2012. All of the official things Davide talked about are good stuff but, for me, the best thing is one he didn't cover in his presentation. In versions of SSIS earlier than 2012, it is possible to have a stored procedure act as a data source, as long as it doesn't have a temp table in it; if it does, you will get an error message from SSIS saying that "Metadata could not be found". This is still true with SSIS 2012, so the thing I am talking about is not really an SSIS feature, it's a SQL Server 2012 feature: the EXECUTE … WITH RESULT SETS feature! With this, you can have a stored procedure with a temp table deliver its resultset to SSIS, if you execute the stored procedure from SSIS and add the WITH RESULT SETS option. If you do this, SSIS takes the metadata from the code you write in SSIS and not from the stored procedure! And it's very fast too. Let's say you have a stored procedure that, in earlier versions, forced SSIS to call it (which can take hours) just to retrieve the metadata. Now, with RESULT SETS, SSIS 2012 can continue in milliseconds! This is because you provide the metadata in the RESULT SETS clause and, if the data from the stored procedure doesn't match it, you will get an error anyway, so it makes sense that Microsoft has provided this optimization for us.
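    A minimal sketch of what this looks like (procedure and column names are made up; the key part is the WITH RESULT SETS clause, which is where SSIS reads the metadata from):

        -- A procedure whose output shape SSIS cannot infer on its own, because of the temp table
        CREATE PROCEDURE dbo.GetDailySales
        AS
        BEGIN
            SELECT OrderDate, SUM(Amount) AS TotalAmount
            INTO #tmp
            FROM dbo.Orders
            GROUP BY OrderDate;

            SELECT OrderDate, TotalAmount FROM #tmp;
        END
        GO

        -- Used as the SQL command of the SSIS source: the metadata comes from the RESULT SETS definition
        EXEC dbo.GetDailySales
        WITH RESULT SETS
        (
            (
                OrderDate   DATE          NULL,
                TotalAmount DECIMAL(18,2) NULL
            )
        );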

    Read the article

  • Text Formatting toolbar continuously disappears in Impress (open office)

    - by Davide
    This is a very weird and annoying problem; I'm not sure if it's a bug or a "feature". I'm using OpenOffice 3.2 (on Ubuntu 10.04). The Text Formatting toolbar disappears in many circumstances, e.g. each time I click outside of a writing area. It's becoming very time consuming to go to View > Toolbars > Text Formatting to re-enable it each time. Three questions: Is this expected behavior, and if so, is there any setting where I can change it (note this wasn't happening in the past with other presentations I made)? Is there a workaround, such as defining a shortcut like Ctrl+something that would make the toolbar appear without menu juggling? Is anybody else experiencing this, especially someone using LibreOffice?

    Read the article

  • Silverlight Cream for April 04, 2010 -- #830

    - by Dave Campbell
    In this Issue: Michael Washington, Hassan, David Anson, Jeff Wilcox, UK Application Development Consulting, Davide Zordan, Victor Gaudioso, Anoop Madhusudanan, Phil Middlemiss, and Laurent Bugnion. Shoutouts: Josh Smith has a good-read post up: Design-time data is still data. Shawn Hargreaves reported his MIX demo released. From SilverlightCream.com:
    Silverlight MVVM: Enabling Design-Time Data in Expression Blend When Using Web Services – Michael Washington has a tutorial up on MVVM and using a web service to get design-time data that also works in Blend... lots of information and screenshots.
    WP7 Transition Animation – Hassan has a new WP7 tutorial up that demonstrates playing media and adding transition animations between pages.
    Tip: For a truly read-only custom DependencyProperty in Silverlight, use a read-only CLR property instead – David Anson's latest tip is in response to comments on his previous post and details one by Dr. WPF, who points out that a read-only DependencyProperty doesn't actually need to be a DependencyProperty as long as the class implements INotifyPropertyChanged.
    Template parts and custom controls (quick tip) – Jeff Wilcox has posted a set of tips and recommendations to use when developing controls in Silverlight... this is a post to bookmark.
    Flexible Data Template Support in Silverlight – The UK Application Development Consulting details a 'problem' in Silverlight that doesn't exist in WPF, namely data templates that vary by type, and discusses a way around it.
    Multi-Touch enabling Silverlight Simon using Blend behaviors and the Surface sample for Silverlight – Davide Zordan brought Multi-Touch to the Silverlight Simon game on CodePlex using Blend Behaviors.
    New Video Tutorial: How to Use a Behavior to Fire Methods from Objects in Styles – Victor Gaudioso has a video tutorial up responding to a question from a developer; he demonstrates the development of a Behavior that can be attached to objects in or out of Styles and allows you to specify which Method they need to fire.
    Creating a Silverlight Client for @shanselman's Nerd Dinner, using oData and Bing Maps – Anoop Madhusudanan took Scott Hanselman's post on an OData API for StackOverflow and has created a Silverlight client for Nerd Dinner, including Bing Maps.
    A Chrome and Glass Theme - Part 2 – Phil Middlemiss has the next part of his Chrome and Glass Theme up; in this one he creates a very nice chrome-look button with visual state changes.
    MVVM Light Toolkit V3 SP1 for Windows Phone 7 – Laurent Bugnion has released a new version of MVVM Light for WP7; included is an installation manual and information about what was changed.
    Stay in the 'Light!

    Read the article

< Previous Page | 1 2 3 4  | Next Page >