Search Results

Search found 422 results on 17 pages for 'marco galassi'.

Page 4/17

  • White Paper on Analysis Services Tabular Large-scale Solution #ssas #tabular

    - by Marco Russo (SQLBI)
    Since the first beta of Analysis Services 2012, I have worked with many companies designing and implementing solutions based on Analysis Services Tabular. I am glad that Microsoft published a white paper about a case study based on one of these scenarios: An Analysis Services Case Study: Using Tabular Models in a Large-scale Commercial Solution. Alberto Ferrari is the author of the white paper and many people contributed to it. The final result is a very technical document based on a case study, which provides a level of detail that I don't often see in other case studies (which are usually more marketing-oriented). This white paper has the following structure:

    - Requirements (data model, capacity planning, client tool)
    - Options considered (SQL Server Columnstore Indexes, SSAS Multidimensional, SSAS Tabular)
    - Data Model optimizations (memory compression, query performance, scalability)
    - Partitioning and Processing strategy for near real-time latency
    - Hardware selection (NUMA analysis, Azure VM tests)
    - Scalability tests (estimation of maximum users per node)

    If you are in charge of evaluating Tabular as an analytical engine, or if you have to design your solution based on Tabular, this white paper is a must-read. But even if you just want to increase your knowledge of Analysis Services, you will find a lot of useful technical information. That said, my favorite quote of the document is the following one, funny but true: […] After several trials, the clear winner was a video gaming machine that one guy on the team used at home. That computer outperformed any available server, running twice as fast as the server-class machines we had in house. At that point, it was clear that the criteria for choosing the server would have to be expanded a bit, simply because it would have been impossible to convince the boss to build a cluster of gaming machines and trust it to serve our customers. But, honestly, if a business has the flexibility to buy gaming machines (assuming the machines can handle capacity) – do this. Owen Graupman, inContact

    I want to write a longer discussion about how companies are adopting Tabular in scenarios where it is the hidden engine of a more complex solution (and not the classical "BI system"), because this is more frequent than you might expect (and has several advantages over many alternative approaches).

    Read the article

  • #MDX in London and speculation about future books

    - by Marco Russo (SQLBI)
    Chris Webb, who wrote the Expert Cube Development with Microsoft SQL Server 2008 Analysis Services book with me and Alberto, is preparing another Introduction to MDX course in London, this time from October 26th to 28th. It is now a three-day course (previously it was a two-day one) and you can find every other detail here. You might be wondering whether we are writing something else... well, we have no plans to release a new edition of the Analysis Services book - after all, all the content of the...(read more)

    Read the article

  • Tips for adapting Date table to Power View forecasting #powerview #powerbi

    - by Marco Russo (SQLBI)
    During the keynote of the PASS Business Analytics Conference, Amir Netz presented the new forecasting capabilities in Power View for Office 365. I tried the new feature right away (it was already available, a welcome surprise in a Microsoft announcement of a new release) and ran into several issues trying to use existing data models. Forecasting has a few requirements that are not compatible with the "best practices" commonly used for a calendar table until this announcement. For example, if you have a Year-Month-Day hierarchy and you want to display a line chart aggregating data at the month level, you use a column containing month and year as a string (e.g. May 2014) sorted by a numeric column (such as 201405). Such a column cannot be used in the x-axis of a line chart for forecasting, because you need a date or numeric column. There are also other requirements, and I wrote the article Prepare Data for Power View Forecasting in Power BI on SQLBI, describing how to create columns that can be used with the new forecasting capabilities in Power View for Office 365.
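    As a minimal sketch of the idea, assuming a 'Date' table with a date-typed 'Date'[Date] column (names are illustrative; the SQLBI article covers the details), calculated columns like these satisfy the date-or-numeric requirement for the x-axis:

    -- A date-typed column at month granularity, usable on the forecasting
    -- x-axis instead of a "May 2014" string
    Month Start Date = DATE ( YEAR ( 'Date'[Date] ), MONTH ( 'Date'[Date] ), 1 )

    -- Alternatively, a numeric column such as 201405 also qualifies
    Year Month Number = YEAR ( 'Date'[Date] ) * 100 + MONTH ( 'Date'[Date] )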

    Read the article

  • Optimize Many-to-Many with SUMMARIZE and Other Techniques

    - by Marco Russo (SQLBI)
    We are still in the early days of DAX, and even though I have been using it for two years now, there is still a lot to learn about it. One of the topics that has historically interested me (and many of the readers here, probably) is many-to-many relationships between dimensions in a dimensional data model. When Alberto and I wrote The Many-to-Many Revolution 2.0, we discovered the SUMMARIZE-based pattern very late in the writing of the whitepaper. It is very important for performance optimization and it should always be used. In the last month, Gerhard Brueckl also presented an approach based on cross table filtering behavior that simplifies the syntax involved, even if it's harder to explain how it works internally. I published a short article titled Optimize Many-to-Many Calculation in DAX with SUMMARIZE and Cross Table Filtering on the SQLBI website just to provide a quick reference to the three patterns available, sketched below. A further study is still required to compare performance between the SUMMARIZE and Cross Table Filtering patterns. Up to now, I haven't observed big differences between them, even if their execution plans might not be identical, and this suggests that, depending on other conditions, you might favor one over the other.
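    As a hedged sketch of the two patterns, assume hypothetical tables: a Transactions fact related to an Account dimension, and an AccountCustomer bridge linking Account to Customer; the measure must respect a filter placed on Customer:

    -- SUMMARIZE-based pattern: propagate the Customer selection
    -- through the bridge table to the Account dimension
    SalesM2M :=
    CALCULATE (
        SUM ( Transactions[Amount] ),
        SUMMARIZE ( AccountCustomer, Account[AccountKey] )
    )

    -- Cross table filtering pattern: use the bridge table itself
    -- as a filter argument, relying on relationship propagation
    SalesM2M_CTF :=
    CALCULATE (
        SUM ( Transactions[Amount] ),
        AccountCustomer
    )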

    Read the article

  • Templates for forms, tabs etc? - Patterntap alternatives

    - by Marco Demaio
    I used to find http://www.patterntap.com quite useful to get design inspiration for forms, tabs, and other web elements. Unfortunately, after the ZURB acquisition of Patterntap, they now force you to sign in with your Twitter account simply to view larger images of the patterns provided by the crowd. So in some way it's not free anymore. Do you know of alternatives to Patterntap that are free and do not require signing in?

    Read the article

  • PivotViewer demo for photographers

    - by Marco Russo (SQLBI)
    I have a friend (Sandro Rizzetto) who is also a photographer. A good one, but his job is in IT. For these reasons, after reading some posts of mine, he built a PivotViewer page to navigate his photo database. You can drill down by category, date, lens, focal length, ISO, shutter speed and many other parameters. The result is awesome and is visible here. If you are interested in technical details and comments about the meaning of the data, read his blog here (it is in Italian)....(read more)

    Read the article

  • Yahoo search: different results shown in two identical searches

    - by Marco Demaio
    Hello, simple question: searching on http://www.yahoo.it for villa matrimonio bologna, I noticed Yahoo shows different results. You need to retry a few times to get this to happen, maybe exiting the browser and opening it again, or maybe searching once, then clearing browser cookies and searching again (it's even easier to test if you use two different browsers at the same time to search for the same phrase). Anyway, in order to reproduce this easily, I write down here the queries shown in the address bar after the search, so you can just click on these to see the results they return:

    http://it.search.yahoo.com/search;_ylt=AirvLYKvBMPP_6MpAmONN14brK5_?vc=&p=villa+matrimonio+bologna&toggle=1&cop=mss&ei=UTF-8&fr=yfp-t-709
    http://it.search.yahoo.com/search;_ylt=AirvLYKvBMPP_6MpAmONN14brK5_?vc=&p=villa+matrimonio+bologna&toggle=1&cop=mss&ei=UTF-8&fr=sfp

    Note that the last parameter fr is different, but it's Yahoo that sets it (not me); I don't even know what it means. You can see in the search box that the searched phrase is IDENTICAL in both cases. So why is Yahoo giving out different results for the same search phrase? I used the same browser and performed the test within a few minutes by simply trying more than once. You may also notice that the number of results returned (written on the left side of the page) is different: the 1st search returns 274K results, the 2nd one 5.38M results. You might think this is just an error on Yahoo's side, but for almost a year now, while looking once in a while at some websites to see how they are ranking on Yahoo and also Google, I have noticed that two searches on the same phrase show different results even on the same day, a few minutes or hours apart. I couldn't reproduce this behaviour on Google, so I can't say for sure, but since it seems to have happened several times, I was wondering if any of you have noticed it too. Do you know if this is normal behaviour for search engines? Because if it is normal (and it's just me noticing it only now), I wonder how you can tell how well a site is ranking on a search engine; you could even see one of your customers' websites ranking differently compared to what your customer sees on his PC.

    Read the article

  • Correct installation and configuration of OpenJDK and R

    - by Marco K
    I am relatively new to Ubuntu, so I won't know a lot of commands that are probably standard for many of you. I am trying to set up R and, with it, the necessary Java dependencies to install e.g. JGR, rJava, etc. I read through quite a few instructions to do that, but somehow I must have done something wrong. Here is the state of R and Java:

    R --version
    R version 2.14.1 (2011-12-22)
    Copyright (C) 2011 The R Foundation for Statistical Computing
    ISBN 3-900051-07-0
    Platform: x86_64-pc-linux-gnu (64-bit)

    java -version
    java version "1.6.0_23"
    OpenJDK Runtime Environment (IcedTea6 1.11pre) (6b23~pre11-0ubuntu1.11.10.1)
    OpenJDK 64-Bit Server VM (build 20.0-b11, mixed mode)

    R CMD javareconf
    Java interpreter : /usr/bin/java
    Java version     : 1.6.0_23
    Java home path   : /usr/lib/jvm/java-6-openjdk/jre
    Java compiler    : /usr/bin/javac
    Java headers gen.: /usr/bin/javah
    Java archive tool: /usr/bin/jar
    Java library path: /usr/lib/jvm/java-6-openjdk/jre/lib/amd64/server:/usr/lib/jvm/java-6-openjdk/jre/lib/amd64:/usr/lib/jvm/java-6-openjdk/jre/../lib/amd64:/usr/java/packages/lib/amd64:/usr/lib/jni:/lib:/usr/lib
    JNI linker flags : -L/usr/lib/jvm/java-6-openjdk/jre/lib/amd64/server -L/usr/lib/jvm/java-6-openjdk/jre/lib/amd64 -L/usr/lib/jvm/java-6-openjdk/jre/../lib/amd64 -L/usr/java/packages/lib/amd64 -L/usr/lib/jni -L/lib -L/usr/lib -ljvm
    JNI cpp flags    :

    But when I try to install 'JavaGD' in R, which is a dependency for JGR, I get:

    ...
    checking Java support in R... present:
    interpreter : '/usr/bin/java'
    cpp flags   : ''
    java libs   : '-L/usr/lib/jvm/java-6-openjdk/jre/lib/amd64/server -L/usr/lib/jvm/java-6-openjdk/jre/lib/amd64 -L/usr/lib/jvm/java-6-openjdk/jre/../lib/amd64 -L/usr/java/packages/lib/amd64 -L/usr/lib/jni -L/lib -L/usr/lib -ljvm'
    configure: error: One or more Java configuration variables are not set.
    Make sure R is configured with full Java support (including JDK). Run
    R CMD javareconf
    as root to add Java support to R.
    ...

    Any help would be greatly appreciated. Thanks!
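    The configure error itself points at the usual first step: the javareconf output above came from a run as a normal user, so its results were never stored in R's system-wide configuration. A hedged sketch of the typical commands (the package name assumes the Ubuntu repositories of that era):

    sudo R CMD javareconf               # re-run as root so the Java settings are stored system-wide
    sudo apt-get install r-cran-rjava   # pre-built rJava package from the Ubuntu repositories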

    Read the article

  • Rhythmbox plugin radio-browser crashes

    - by Marco
    I have just installed the plugins package from the fossfreedom repository on my Rhythmbox 2.96, running on 12.04 (64 bit). The radio-browser plugin crashes Rhythmbox after a few seconds of playing. When I restarted Rhythmbox after the crash, the plugin was marked as "error" and it was impossible to re-enable it. I've since manually removed (sudo apt-get remove ...) and reinstalled it, but I still can't enable it. Unfortunately, radio-browser was the reason for me to install the plugin package, and I can't use it... Help please?

    Read the article

  • Expert Cube Development book finally on Kindle!

    - by Marco Russo (SQLBI)
    The book Expert Cube Development with Microsoft SQL Server 2008 Analysis Services is finally available on Kindle! I received many requests for that, the last one just a couple of days ago from Greg Low in his useful review. I'm curious to see whether the sales of this book will continue on Kindle too. Two years on, this book is still selling as well as it did in its first months. The content is still fresh and will also be valid for the next release of Analysis Services for developing multidimensional...(read more)

    Read the article

  • Share on: FB, Tweet, Digg, Linkedin, Delicious, My mother, ... is it just fashion, or some real value?

    - by Marco Demaio
    Nowadays your site is not in fashion if you don't show at least a couple of share buttons like these: Is this just fashion, or do people actually get something good out of it? When I say "something good" I mostly mean something that you could measure, not just a feeling. Maybe I can explain better with an example: did you notice (in some way) that many people clicked on those links to share your page/s on those web 2.0 social sites? And in that case, on which social networks did you see your pages shared the most? BTW, I'm not talking about Google PR; I know all web 2.0 social sites use nofollow everywhere, and even hidden links, so they are useless by themselves for PR. UPDATE: According to this video, Google's Alter Ego says that they now use data from social sites in ranking in some way. If this is true, it's obvious that the Share on buttons for FB, Tweet, etc. are definitely of some value. But again, my question is more about what you have noticed in your real experience to be a direct benefit of adding those types of "Share On" links on your website. I.e. did you see more traffic coming in from FB, or some users who bought your products because of FB or Twitter? Or any other benefits? Thanks

    Read the article

  • User Group meeting in Copenhagen for #powerpivot

    - by Marco Russo (SQLBI)
    Next Monday, March 21st, I will join a special event organized by the Danish SQL Server User Group, Excelbi.dk and the Swedish SQL Server User Group. The meeting will start at 18:00 at the Radisson Royal Blu in Copenhagen, and this is the topic we will discuss: PowerPivot / BISM and the future of a BI Solution. The next version of Analysis Services will offer the BI Semantic Model (BISM) that is based on Vertipaq, the same engine that runs PowerPivot. DAX and PowerPivot have been created as...(read more)

    Read the article

  • Basket Analysis with #dax in #powerpivot and #ssas #tabular

    - by Marco Russo (SQLBI)
    A few days ago I published a new article on the DAX Patterns web site describing how to implement Basket Analysis in DAX. This is a very classical topic, and it is also covered in The Many-to-Many Revolution white paper. It has also been discussed in several blog posts, listed here in historical order:

    - Simple Basket Analysis in DAX by Chris Webb
    - PowerPivot, basket analysis and the hidden many to many by Alberto Ferrari
    - Applied Basket Analysis in Power Pivot using DAX by Gerhard Brueckl

    As usual, in DAX Patterns we try to present the required DAX formulas in a way that is easy to adapt to specific models. We also try to show a good implementation from a performance point of view. Further optimizations are always possible in DAX. However, in order to keep the model simple to adapt to different scenarios, we avoid presenting optimizations that would require particular assumptions or restrictions on the data model. I hope you will find the Basket Analysis pattern useful. Even if you do not need it today, reading the DAX formula is a good exercise to check your knowledge of evaluation contexts in DAX. For example, describing how the following expression works is not a trivial task!

    [Orders with Both Products] :=
    CALCULATE (
        DISTINCTCOUNT ( Sales[SalesOrderNumber] ),
        CALCULATETABLE (
            SUMMARIZE ( Sales, Sales[SalesOrderNumber] ),
            ALL ( Product ),
            USERELATIONSHIP ( Sales[ProductCode], 'Filter Product'[Filter ProductCode] )
        )
    )

    The good news is that you can use the patterns even if you do not really understand all the details of the DAX formulas you are using! Any feedback on this new pattern is very welcome.

    Read the article

  • #DAX Query Plan in SQL Server 2012 #Tabular

    - by Marco Russo (SQLBI)
    The SQL Server Profiler provides a lot of information regarding the internal behavior of DAX queries sent to a BISM Tabular model. As in MDX, in DAX there are a Formula Engine (FE) and a Storage Engine (SE). The SE is usually handled by Vertipaq (unless you are using DirectQuery mode), and the Vertipaq SE Query class of events gives you a SQL-like syntax that represents the query sent to the storage engine. Another interesting class of events is the DAX Query Plan, which contains a couple...(read more)

    Read the article

  • How to build a 4X game?

    - by Marco
    I'm trying to study how to successfully implement a 4X game. Areas of interest:

    1) map data: how to store stellar systems (graphs?), how to generate them, and so on
    2) multiplayer: how to organize code in a non-graphical server and a client to display it
    3) command system: what patterns there are to catch user and AI decisions and handle them, adding at first "explore" and "colonize", then "combat", "research", "spy" and so on (commands can affect ships, planets, research, etc.)
    4) ai system: the AI can use commands to expand and upgrade planets and ships

    I know these are big questions, so help is appreciated :D

    1) Map data. The best choice is a graph to model a galaxy. A node is a stellar system, and every system has a list of planets. Ships cannot travel outside of predefined paths, like in Ascendancy: http://www.abandonia.com/files/games/221/Ascendancy_2.png Every connection between two stellar systems has a cost, in turns. Generating a galaxy is only a matter of:
    - dimension: number of stellar systems
    - variety: randomizing the number of planets and their types (desertic, earth, etc.)
    - positions of each stellar system in game space
    - connections: ensuring that a path exists between every pair of nodes, so the graph is "connected" (not sure if this is the mathematically correct term)

    2) Multiplayer. The game is organized in turns: player 1, player 2, ai1, ai2. The server takes care of all data, and the clients just display it and collect data changes. Because it is a turn-based game, latency is not a problem :D

    3) Command system. I would like to design a hierarchy of commands to take care of this aspect:
    - abstract GenericCommand (target)
    - ExploreCommand (Ship) extends GenericCommand
    - ColonizeCommand (Ship)
    - BuildCommand (planet, object)
    and so on. In my head, all these commands are stored in a queue for every planet, ship, research center or spy, and each turn a command is sent to the server, which applies it and changes the data state.

    4) AI system. I don't have any idea about this. It is a big topic, and what I want is a simple AI, something like "expand and fight against everyone". I'm thinking about a behaviour tree to control AI moves, so I can develop an AI that tries to build ships to expand, then colonizes planets, upgrades them through science, and fights enemies. Could it be done with a finite state machine too? Any ideas, resources or articles are welcome!

    Read the article

  • Write DAX queries in Report Builder #ssrs #dax #ssas #tabular

    - by Marco Russo (SQLBI)
    If you use Report Builder with Reporting Services, you can use DAX queries even if the editor for the Analysis Services provider does not support DAX syntax. In fact, the DMX editor that you can use in the Visual Studio editor of Reporting Services (see a previous post on that) is not available in Report Builder. However, as Sagar Salvi commented in this Microsoft Connect entry, you can use DAX query text in the query of a Dataset by using the OLE DB provider instead of the Analysis Services one. I think it's a good idea to show the steps required. First, create a Data Source using the OLE DB connection type, and provide in the connection string the provider (Provider), the server name (Data Source) and the database name (Initial Catalog), such as:

    Provider=MSOLAP;Data Source=SERVERNAME\TABULAR;Initial Catalog=AdventureWorks Tabular Model SQL 2012

    Then, create a Dataset using the data source previously defined, select the Text query type, and write the DAX code in the Query pane. You can also use the Query Designer window, which doesn't provide any particular help in writing the DAX query, but at least can show a preview of the result of the query execution. I hope DAX will get better editors in the future… in the meantime, remember you can use DAX Studio to write and test your DAX queries, and DAX Formatter to improve their readability! If you want to learn the DAX query language, I suggest watching my video Data Analysis Expressions as a Query Language on Project Botticelli!
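    As an illustration, here is the kind of query text that can go in such a dataset. It is a hedged example only: it assumes the Adventure Works tabular model named in the connection string above, and the table and column names may differ in your model:

    EVALUATE
    SUMMARIZE (
        'Internet Sales',
        'Date'[Calendar Year],
        "Sales Amount", SUM ( 'Internet Sales'[Sales Amount] )
    )
    ORDER BY 'Date'[Calendar Year]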

    Read the article

  • How to get multiple open-source projects to use a standard way of doing something.

    - by Marco
    Problem. In the last couple of weeks, I've used 3 different "repository" tools (listed in alphabetical order):

    - gradle
    - ivy
    - maven

    I'm calling them "repository" tools because I've also used sbt -- which fortunately uses ivy to manage its cache or local repository. Each of these tools will create its own repository. The defaults are:

    - ~/.m2/repository for maven
    - ~/.gradle/cache for gradle
    - ~/.ivy2/cache for ivy

    Why can't they all use the same cache?

    Goal. I'd like to change the world so that all three build tools could use the same cache. I'm looking for advice about issues I'm likely to run into and smart ways to get around them. By "use the same cache", I do not mean "retrieve from another build tool's cache". I mean "retrieve from and store in another build tool's cache". While I could go ahead and submit issues to the three projects, I know from experience (as a developer on an open source project) that if you want something done, you're best off getting it done yourself. Also, it seems like I need to get all 3 communities on board to some degree. What is the recommended approach for getting this kind of thing done? How do I approach the different communities? Do I work on patches for the 3 different projects, or would it be better to create my own "interface" project that deals with these issues and have the 3 tools interface with that? Is this a standards question that I need to address on that front? Lastly, if I'm missing something and this is possible (in a globally configurable fashion), then please let me know.
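    For what it's worth, each tool's cache location is at least individually configurable. A hedged sketch of those knobs follows (the settings are real, but pointing them all at one shared directory does not make the layouts interchangeable, which is exactly the problem described above; the paths and the ant target name are hypothetical):

    # Gradle: move its user home (and thus caches/) via an environment variable
    export GRADLE_USER_HOME=/shared/build-cache/gradle

    # Maven: override the local repository per invocation (or in ~/.m2/settings.xml)
    mvn -Dmaven.repo.local=/shared/build-cache/m2 install

    # Ivy: override the default .ivy2 directory via a property
    ant -Divy.default.ivy.user.dir=/shared/build-cache/ivy resolve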

    Read the article

  • The Many-to-Many Revolution 2.0 #ssas #mdx #dax #m2m

    - by Marco Russo (SQLBI)
    In September 2006 I announced in this blog the release of the first version of The Many-to-Many Revolution, a whitepaper that describes how to leverage the many-to-many dimension relationships feature that has been available since Analysis Services 2005. The paper contains many generic patterns that can be applied in many common data analysis scenarios. More than 5 years and more than 20,000 unique downloads of the 1.0 paper later, I am proud to announce that we released The Many-to-Many...(read more)

    Read the article

  • Common request: export #Tabular model and data to #PowerPivot

    - by Marco Russo (SQLBI)
    I have received this request in many courses, messages and forum discussions: given an Analysis Services Tabular model, it would be nice to be able to extract a corresponding PowerPivot data model. In order of priority, here are the specific features people (including me) would like to see:

    - Create an empty PowerPivot workbook with the same data model as a Tabular model
    - Change the connections of the tables in the PowerPivot workbook so they extract data from the Tabular data model; every table should have an EVALUATE 'TableName' query in DAX (see the sketch below)
    - Apply a filter to the data extracted from every table; for example, you might want to extract all data for a single country, year or customer group, and using the same filtering technique used for role-based security would be nice
    - Expose an API to automate the process of creating a PowerPivot workbook; use case: prepare one workbook for every employee containing only his data, which he can use offline - a common request for salespeople who want a mini-BI tool to use in front of the customer/lead/supplier, regardless of whether a connection is available

    This feature would increase the adoption of PowerPivot and Tabular (and, therefore, of Business Intelligence licenses instead of Standard), and would probably raise the sales of Office 2013 / Office 365 driven by ISVs, the companies who request this feature the most. If Microsoft did this, it would be acceptable for it to work only on Office 2013. But if a third party does it, it will make sense (for their revenues) to cover both Excel 2010 and Excel 2013. Another important reason for this feature is that the "Offline cube" feature that you have in Excel is not available when your PivotTable is connected to a Tabular model; it can only be used when you connect to Analysis Services Multidimensional. If you think this is an important feature, you can vote for this Connect item.
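    As a hedged illustration of the extraction queries mentioned above (the table and column names are hypothetical), the plain and filtered variants would look like this in DAX:

    -- Full table extraction
    EVALUATE 'Sales'

    -- The same table, restricted to a single year for an offline workbook
    EVALUATE FILTER ( 'Sales', 'Sales'[Year] = 2012 )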

    Read the article

  • Updates about Multidimensional vs Tabular #ssas #msbi

    - by Marco Russo (SQLBI)
    I recently read the blog post from James Serra, Tabular model: Not ready for prime time? (read the comments too, because there are discussions about a few points raised by James), and the following post from Christian Wade, Multidimensional or Tabular. In the last 2 years I have worked with many companies adopting Tabular in different scenarios, and I agree with some of the points expressed by James in his post (especially about missing features in Tabular compared to Multidimensional), but I strongly disagree on others. In general, Tabular is a good choice for a new project when:

    - the development team does not have a good knowledge of Multidimensional and MDX (DAX is faster to learn - not as easy as MS sells it, but definitely easier than MDX)
    - you don't need calculations based on hierarchies (common in certain financial applications, but not as common as it might seem)
    - there are important calculations based on distinct count measures
    - there are complex calculations based on many-to-many relationships

    Until now, I have never suggested migrating an existing Multidimensional model to a Tabular one. There would have to be very important reasons for that, such as performance issues in distinct counts and many-to-many relationships that cannot easily be solved by optimizing the Multidimensional model, but I have still never encountered this scenario. I would say that in 80% of new projects you might use either Multidimensional or Tabular, and the real difference is the time-to-market depending on the skills of the development team. So it's not strange that those who are used to Multidimensional are not moving to Tabular, as they do not get a particular benefit from the new model unless specific requirements exist. The recent DAXMD feature that allows using SharePoint Power View on Multidimensional is a really important one, even if I'd also like to see Excel Power View enabled for this scenario (this should be just a question of time). Another scenario in which I'm seeing a growing adoption of Tabular is in companies that create models for their product/service and do so by using XMLA or Tabular AMO 2012. I usually call them ISVs, even if those providing services cannot really be defined that way. These companies are facing the multitenancy challenge with Tabular, and even if this is a niche market, I see some potential here, because adopting Tabular seems a much more natural choice than Multidimensional in those scenarios where an analytical engine has to be embedded to deliver one of the features of a larger product/service delivered to customers. I'd like to see more feedback in the comments: tell your story of choosing between Tabular and Multidimensional in a BI project you started with SQL Server 2012, thanks!

    Read the article

  • Possible SWITCH Optimization in DAX – #powerpivot #dax #tabular

    - by Marco Russo (SQLBI)
    In one of the Advanced DAX Workshops I taught this year, I had an interesting discussion about how to optimize a SWITCH statement (which is frequently used to check a slicer, as in the Parameter Table pattern). Let's start with the problem. What happens when you have a statement like this?

    Sales :=
    SWITCH (
        VALUES ( Period[Period] ),
        "Current", [Internet Total Sales],
        "MTD", [MTD Sales],
        "QTD", [QTD Sales],
        "YTD", [YTD Sales],
        BLANK ()
    )

    The SWITCH statement is in reality just syntactic sugar for a nested IF statement. When you place such a measure in a pivot table, the IF options are evaluated for every cell of the pivot table. In order to optimize performance, the DAX engine usually does not compute cell by cell, but tries to compute the values in bulk mode. However, if a measure contains an IF statement, every cell might have a different execution path, so the current implementation might evaluate all the possible IF branches in bulk mode, so that for every cell the result from one of the branches will already be available in a pre-calculated dataset. The price for that could be high. If you consider the previous Sales measure, the YTD Sales measure could be evaluated for all the cells where it's not required, and also when YTD is not selected at all in a pivot table. The actual optimization made by the DAX engine could be different in every build, and I expect newer builds of Tabular and Power Pivot to be better than older ones. However, we still don't live in an ideal world, so it can be worth trying to help the engine find a better execution plan. One student (Niek de Wit) proposed this approach:

    Selection :=
    IF (
        HASONEVALUE ( Period[Period] ),
        VALUES ( Period[Period] )
    )

    Sales :=
    CALCULATE (
        [Internet Total Sales],
        FILTER (
            VALUES ( 'Internet Sales'[Order Quantity] ),
            'Internet Sales'[Order Quantity]
                = IF ( [Selection] = "Current", 'Internet Sales'[Order Quantity], -1 )
        )
    )
        + CALCULATE (
            [MTD Sales],
            FILTER (
                VALUES ( 'Internet Sales'[Order Quantity] ),
                'Internet Sales'[Order Quantity]
                    = IF ( [Selection] = "MTD", 'Internet Sales'[Order Quantity], -1 )
            )
        )
        + CALCULATE (
            [QTD Sales],
            FILTER (
                VALUES ( 'Internet Sales'[Order Quantity] ),
                'Internet Sales'[Order Quantity]
                    = IF ( [Selection] = "QTD", 'Internet Sales'[Order Quantity], -1 )
            )
        )
        + CALCULATE (
            [YTD Sales],
            FILTER (
                VALUES ( 'Internet Sales'[Order Quantity] ),
                'Internet Sales'[Order Quantity]
                    = IF ( [Selection] = "YTD", 'Internet Sales'[Order Quantity], -1 )
            )
        )

    At first sight, you might think it's impossible that this approach could be faster. However, if you examine with the profiler what happens, there is a different story. Every original IF execution branch is now a separate CALCULATE statement, which applies a filter that does not execute the required measure calculation if the result of the FILTER is empty. I used the 'Internet Sales'[Order Quantity] column in this example just because in Adventure Works it has only one value (every row has 1): in the real world, you should use a column that has a very low number of distinct values, or a column that has the same value in every row (so it will be compressed very well!). Because the value -1 is never used in this column, the IF comparison in the filter discards all the values iterated in the filter whenever the selection does not match the desired value. I hope to have time in the future to write a longer article about this optimization technique, but in the meantime I've seen it be useful in many other implementations. Please write your feedback if you find scenarios (in both Power Pivot and Tabular) where you obtain performance improvements using this technique!

    Read the article

  • Git-flow "finish feature" reverted commits from develop

    - by marco-fiset
    I am using git as a version control system, with git-flow as the branching model. I started a feature branch some weeks ago in order to keep the system in a clean state while developing that feature. The main development continued on the develop branch, and changes from develop were merged periodically into the feature branch, to keep it as up to date as possible. Eventually the feature was finished, and I used git-flow's finish feature to merge it back into develop. The merge was successfully done, but then I found out that some of the commits I had made in develop were reverted by the merge commit! Nowhere in develop or in the feature branch were these changes reverted; I can't see any commit that overwrote them. I just can't find anything. The only theory I have for the moment is that git is failing on me, but that would be extremely unlikely. Maybe I made some kind of wrong manipulation that brought this situation about? I can trace back in the history when the commit was made. I can see that the changes from that commit were reverted by the merge commit. Nowhere in the branch do I see a commit that reverts those changes. Yet they were reverted. How is this even possible?

    Read the article

  • Distinct Count of Customers in a SCD Type 2 in #DAX

    - by Marco Russo (SQLBI)
    If you have a Slowly Changing Dimension (SCD) Type 2 for your customer and you want to calculate the number of distinct customers that bought a product, you cannot use the simple formula:

    Customers := DISTINCTCOUNT ( FactTable[Customer Id] )

    because it would return the number of distinct versions of customers. What you really want to do is calculate the number of distinct application keys of the customers, which could be a lower number than the one returned by the previous formula. Assuming that a Customer Code column in the Customers dimension contains the application key, you should use the following DAX formula:

    Customers := COUNTROWS ( SUMMARIZE ( FactTable, Customers[Customer Code] ) )

    Be careful: only the version above is really fast, because it is solved by the xVelocity (formerly known as VertiPaq) engine. Other formulas involving nested calculations might be more complex and move computation to the formula engine, resulting in slower queries. This is absolutely an interesting pattern, and I have to say it's a killer feature. Try to do the same in Multidimensional…

    Read the article

  • TechEd North America 2010 (and MS BI Conference 2010) Sessions

    - by Marco Russo (SQLBI)
    I just read Dave Wickert's post about his Microsoft sessions on PowerPivot at TechEd 2010 in New Orleans (June 7-10, 2010), and there are at least two things I'd like to add. First of all, there is also another conference! In fact, this time the Microsoft Business Intelligence Conference 2010 is co-located with TechEd 2010, and all the BI sessions of TechEd… are sessions of the MS BI Conference too! The second piece of news is that there are many other sessions about PowerPivot at the conference!...(read more)

    Read the article

  • Approaching events #mstc11 #ppws #sqlbits

    - by Marco Russo (SQLBI)
    The spring season is always full of events, and I'm preparing for a number of them. First of all, we are getting very good interest in the PowerPivot Workshop in Copenhagen on 21-22 March 2011. Tomorrow (Friday, March 4) will be the last day to take advantage of the Early Bird rate for this date. We will also participate in an evening meeting of local user groups on March 21 in Copenhagen; more news about this in the next few days. Other scheduled dates are in Dublin (28-29 March 2011) and in...(read more)

    Read the article
