Search Results

Search found 445 results on 18 pages for 'marco ramos'.


  • PASS Summit 2012: keynote and Mobile BI announcements #sqlpass

    - by Marco Russo (SQLBI)
    Today at PASS Summit 2012 there were several announcements during the keynote. Other news items were not highlighted in the keynote but are equally important, if not more so, for the BI community. Let’s start with the big news in the keynote (more details on the SQL Server Blog):
    Hekaton: this is the codename for the in-memory OLTP technology that will appear (I suppose) in the next release of the SQL Server relational engine. The improvement in performance and scalability is impressive and it enables new scenarios. I’m curious to see whether it can also be used to improve ETL performance, and how it differs from using SSD technology.
    Updates on Columnstore: in the next major release of SQL Server, columnstore indexes will be updatable and it will be possible to create a clustered columnstore index. This is really great news for near real-time reporting needs!
    Polybase: SQL Server 2012 Parallel Data Warehouse (PDW), which will include the Polybase technology, will debut in 2013. By using Polybase, a single T-SQL query can run across relational data and Hadoop data – a single query language for both. It sounds really interesting for using Big Data in a more integrated way with existing relational databases and, of course, for loading a data warehouse using Big Data, which is the ultimate goal that all of us BI pros have, right?
    SQL Server 2012 SP1: Service Pack 1 for SQL Server 2012 is available now, and it enables the use of PowerPivot for SharePoint and Power View on a SharePoint 2013 installation with Excel 2013.
    Power View works with Multidimensional cubes: the long-awaited feature of being able to use Power View with Multidimensional cubes was shown by Amir Netz in an amazing demonstration during the keynote. The interesting thing is that the data model behind it was based on a many-to-many relationship (something that is not fully supported by Power View with Tabular models). Another interesting aspect is that it is Analysis Services 2012 that supports DAX queries run on a Multidimensional model, enabling the use of any future tool that generates DAX queries on top of a Multidimensional model. There is still no information about availability, but this is *not* included in SQL Server 2012 SP1.
    So what about Mobile BI? Even though it was not announced during the keynote, there is a dedicated session on this topic and there is very important news in this area:
    iOS, Android and Microsoft mobile platforms: the commitment is to get data exploration and visualization capabilities working by June 2013. This should affect at least Power View and SharePoint/Excel Services. This is the type of UI experience we are all waiting for, in order to satisfy the requests coming from users and customers. The important news here is that native applications will be available for both iOS and Windows 8, so it seems that Android will initially be supported only through the web. Unfortunately we haven’t seen any demo, so it’s not clear what the offline navigation experience will be (and whether there will be one). But at least we know that Microsoft is working on native applications in this area. I’m not too surprised that HTML5 is not the magic bullet for all the platforms. The next PASS Business Analytics conference in 2013 seems a good place to see this in action, even if I hope we don’t have to wait another six months before seeing a demo of native BI applications on mobile platforms!
    Viewing Reporting Services reports on iPad is supported starting with SQL Server 2012 SP1, which was released today. This is another good reason to install SP1 on SQL Server 2012. If you are at PASS Summit 2012, come and join me, Alberto Ferrari and Chris Webb at our book signing event tomorrow, Thursday, November 8, 2012, at the bookstore between 12:00pm and 12:30pm, or follow one of our sessions!

    Read the article

  • Tips for adapting Date table to Power View forecasting #powerview #powerbi

    - by Marco Russo (SQLBI)
    During the keynote of the PASS Business Analytics Conference, Amir Netz presented the new forecasting capabilities in Power View for Office 365. I tried the new feature right away (it was immediately available, a welcome surprise for a Microsoft announcement of a new release) and I ran into several issues trying to use existing data models. The forecasting has a few requirements that are not compatible with the “best practices” commonly used for a calendar table until this announcement. For example, if you have a Year-Month-Day hierarchy and you want to display a line chart aggregating data at the month level, you use a column containing month and year as a string (e.g. May 2014), sorted by a numeric column (such as 201405). Such a column cannot be used on the x-axis of a line chart for forecasting, because you need a date or numeric column there. There are other requirements as well, and I wrote the article Prepare Data for Power View Forecasting in Power BI on SQLBI, describing how to create columns that can be used with the new forecasting capabilities in Power View for Office 365.
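
    Just to give the idea, a minimal sketch of a forecasting-friendly column (the table and column names are illustrative, and the full requirements are in the article): a calculated column in the Date table that exposes the month as a real date, usable on the x-axis:

        MonthStartDate = DATE ( YEAR ( 'Date'[Date] ), MONTH ( 'Date'[Date] ), 1 )

    Every row of a month collapses to the first day of that month, so a line chart at the month granularity gets a true date on the x-axis instead of a sorted string.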

    Read the article

  • PowerPivot FILTER condition optimizations

    - by Marco Russo (SQLBI)
    In the comments of a recent post by Alberto Ferrari there was an interesting note about the different performance obtained by changing the order of the conditions in a FILTER call. I investigated this, and Jeffrey Wang was kind enough to give me some information about the actual implementation, which I can share in a blog post. First of all, an important disclaimer: PowerPivot is intended to make life easier, not to require the user to think about the order of the elements in a formula just to get better performance....(read more)
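
    Just to illustrate the kind of difference being discussed (a hedged sketch with hypothetical names, not the example from the post): two logically equivalent measures can perform differently depending on which condition appears first, because a cheap column predicate and an expensive measure reference (which triggers a context transition for every row) have very different costs:

        Filtered Rows A := COUNTROWS ( FILTER ( Sales, Sales[Quantity] > 1 && [Expensive Measure] > 0 ) )
        Filtered Rows B := COUNTROWS ( FILTER ( Sales, [Expensive Measure] > 0 && Sales[Quantity] > 1 ) )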

    Read the article

  • Rounding functions in DAX

    - by Marco Russo (SQLBI)
    Today I prepared a table of the many rounding functions available in DAX (yes, it’s part of the book we’re writing), so that I have a complete picture of the best function to use, depending on the rounding operation I need. Here is the list of functions used, followed by the results for a relevant set of values.
        FLOOR = FLOOR( Tests[Value], 0.01 )
        TRUNC = TRUNC( Tests[Value], 2 )
        ROUNDDOWN = ROUNDDOWN( Tests[Value], 2 )
        MROUND = MROUND( Tests[Value], 0.01 )
        ROUND = ROUND( Tests[Value], 2 )
    ...(read more)

    Read the article

  • TechEd North America 2012–Day 2 #msTechEd #teched

    - by Marco Russo (SQLBI)
    This is the second day at TechEd North America 2012, and yesterday I had many conversations about PowerPivot and SSAS Tabular. In the evening, the book signing at the O’Reilly booth was a big success! I’m writing this post from the speakers’ room. It’s not crowded this morning because the keynote is going on, and there is nobody in the hall either: everyone is in the keynote room. Today will be a very busy day: I’ll be staffing the Technical Learning Center from 12:30pm to 3:30pm, so this is a first chance to join the conversation about Tabular and DAX. There is another chance this evening at Community Night, from 6:30pm until 9:00pm: join us at the Ask the Expert event! And don’t miss Alberto’s Many-to-Many Relationships in BISM Tabular session this afternoon at 5:00pm in room S330E. Take a look at yesterday’s post if you want to see our full schedule for the week. Enjoy TechEd!

    Read the article

  • Parallelize incremental processing in Tabular #ssas #tabular

    - by Marco Russo (SQLBI)
    I recently ran into a problem while trying to improve the parallelism of Tabular processing. As you know, multiple tables can be processed in parallel, whereas the processing of several partitions within the same table cannot be parallelized. When you perform an incremental update by adding only new rows to an existing table, what you are really doing is adding rows to a partition; so adding rows to many tables means adding rows to several partitions. The particular condition you have in this case is that every partition you add rows to belongs to a different table. Adding rows implies using the ProcessAdd command; its QueryBinding parameter specifies the SQL statement that reads the new rows, otherwise the original query specified for the partition will be used, which could generate duplicated data if you don’t have dynamic behavior on the SQL side. If you create the required XMLA code manually, you will find that the QueryBinding node that should be part of the ProcessAdd command has to be moved out of ProcessAdd when you are using a Batch command with more than one Process command (which is the very reason you want to use a single batch: running multiple process operations in parallel!). If you use AMO (Analysis Management Objects), you will find that this combination is not supported: even though the code compiles without a syntax error, you might get this error at execution time: “The syntax for the 'Process' command is incorrect. The 'Bindings' keyword cannot appear under a 'Process' command if the 'Process' command is a part of a 'Batch' command and there are more than one 'Process' commands in the 'Batch' or the 'Batch' command contains any out of line related information. In this case, the 'Bindings' keyword should be a part of the 'Batch' command only.” If this is happening to you, the best solution I’ve found is manipulating the XMLA code generated by AMO, moving the Bindings nodes to the right place. A more detailed description of the issue, and the code required to send a correct XMLA batch to Analysis Services, is available in my article Parallelize ProcessAdd with AMO. By the way, the same technique (and code) can also be used if you have the same problem in a Multidimensional model.

    Read the article

  • Understanding #DAX Query Plans for #powerpivot and #tabular

    - by Marco Russo (SQLBI)
    Alberto Ferrari wrote a very interesting white paper about DAX query plans. We published it on a page where we’ll gather articles and tools about DAX query plans: http://www.sqlbi.com/topics/query-plans/ I reviewed the paper, which is the result of many months of study; we know we have just scratched the surface of this topic, also because we still don’t have enough information about the internal behavior of many of the operators contained in a query plan. However, by reading the paper you will be able to start reading a query plan and to understand how the optimization that Chris Webb found one month ago for the events-in-progress scenario works. The white paper also contains a more optimized query (10 times faster), even if the performance depends on data distribution and the best choice really depends on the data you have. Now you should be curious enough to read the paper to the end, because the more optimized query is the last example in the paper!
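
    For reference, a minimal sketch of a typical events-in-progress measure (the table and column names are illustrative, this is not necessarily the exact formulation used in the paper, and it assumes the Date table is not related to Orders): counting the orders still open at the end of the selected period:

        Open Orders :=
        COUNTROWS (
            FILTER (
                Orders,
                Orders[OrderDate] <= MAX ( 'Date'[Date] )
                    && Orders[ShipDate] > MAX ( 'Date'[Date] )
            )
        )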

    Read the article

  • Parameterize Charts using Excel Slicers in PowerPivot

    - by Marco Russo (SQLBI)
    One nice new feature of Excel 2010 is the Slicer. Usually, slicers are used to filter data in a PivotTable, but they can also be useful to parameterize an algorithm or a chart! We discussed this technique in our book, but Alberto Ferrari wrote a post that shows how to use it to let the user select two stocks to be compared in an Excel chart – and, as you might imagine, this will also work when you publish the workbook to SharePoint! This is the result: Nice to see that...(read more)
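
    A hedged sketch of the underlying trick (all names here are hypothetical): a disconnected table, StockSelection, feeds the slicer, and the measures read the user’s selection through HASONEVALUE and VALUES:

        Selected Stock := IF ( HASONEVALUE ( StockSelection[Symbol] ), VALUES ( StockSelection[Symbol] ) )

        Selected Close :=
        CALCULATE (
            SUM ( Quotes[Close] ),
            FILTER ( Quotes, Quotes[Symbol] = [Selected Stock] )
        )

    With two copies of the selection table and two slicers, two such measures plotted on the same chart compare the two stocks chosen by the user.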

    Read the article

  • Optimize Many-to-Many with SUMMARIZE and Other Techniques

    - by Marco Russo (SQLBI)
    We are still in the early days of DAX and, even though I have been using it for two years, there is still a lot to learn about it. One of the topics that has historically interested me (and probably many of the readers here) is many-to-many relationships between dimensions in a dimensional data model. When Alberto and I wrote The Many-to-Many Revolution 2.0, we discovered the SUMMARIZE-based pattern very late in the writing of the white paper. It is very important for performance optimization and it should always be used. In the last month, Gerhard Brueckl also presented an approach based on cross table filtering behavior that simplifies the syntax involved, even if it’s harder to explain how it works internally. I published a short article titled Optimize Many-to-Many Calculation in DAX with SUMMARIZE and Cross Table Filtering on the SQLBI website, just to provide a quick reference to the three patterns available. Further study is still required to compare performance between the SUMMARIZE and cross table filtering patterns. Up to now, I haven’t observed big differences between them, even if their execution plans might not be identical; this suggests that, depending on other conditions, you might favor one over the other.
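
    For quick reference, hedged sketches of two of the patterns (the model is hypothetical: a Sales fact related to Account, a Bridge table between Account and Customer, and the report sliced by Customer):

        -- SUMMARIZE-based pattern
        Sales M2M :=
        CALCULATE (
            SUM ( Sales[Amount] ),
            SUMMARIZE ( Bridge, Account[AccountKey] )
        )

        -- cross table filtering pattern: the bridge table itself as a filter argument
        Sales M2M XTF :=
        CALCULATE ( SUM ( Sales[Amount] ), Bridge )

    The exact formulations and their performance trade-offs are discussed in the article.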

    Read the article

  • Meet @marcorus and @ferrarialberto at TechEd Europe 2012 #tee2012

    - by Marco Russo (SQLBI)
    Alberto and I are in Amsterdam this week at TechEd Europe 2012. If you are at the conference, you can meet us here:
    Wed, Jun 27, 10:15 AM - 11:30 AM – Room G106 – DBI319 - BISM: Multidimensional vs. Tabular
    Wed, Jun 27, 02:15 PM - 02:30 PM – Microsoft Press Booth in the TechExpo area – PowerPivot for Excel 2010 Book Signing
    Thu, Jun 28, 8:30 AM - 9:45 AM – Room E107 – Many-to-Many Relationships in BISM Tabular
    Fri, Jun 29, 1:00 PM - 2:45 PM – Breakthrough Insight at Microsoft SQL Server Booth, TechExpo area – Staff and Q&A
    We’ll try to visit the Microsoft booth very often, and we’ll be in the Breakthrough Insight area of the SQL Server zone (see the picture to identify it). And don’t miss the PowerPivot for Excel 2010 book signing event:

    Read the article

  • Do we still need to avoid using frame/iframe for good SEO?

    - by Marco Demaio
    I thought frames and iframes were bad for SEO. Today I was searching for "numismatica" in Google.it, and the 3rd site is www.lemonete.com (it also ranks high in Google.com). It's all made with frames. A competitor site (www.nummus.com) that is built without frames ranks much lower when searching for the same word: "numismatica". So can we use frames now? Don't they hurt SEO ranking anymore? Any explanation (possibly simple) would be appreciated, also about why the 2nd site ranks much lower. :) UPDATE: thanks for all the replies. I noticed I wrote down the wrong URL: it was not lemonete.it (which is just a spam site) but lemonete.com.

    Read the article

  • Strange date relationships with #PowerPivot

    - by Marco Russo (SQLBI)
    A reader of my PowerPivot book highlighted a strange behavior of the relationship between a datetime column and a Calendar table. Long story short: it seems that PowerPivot automatically rounds the date to the “nearest day”; instead of simply removing the time (truncating the decimal part of the number internally used to represent a datetime value), a rounding function seems to be used, moving the date to the next day if the time part contains a PM time. As you can imagine, this becomes particularly...(read more)
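
    A hedged workaround sketch (the column names are invented for illustration): relate the Calendar table to a calculated column that extracts the date parts, instead of relating it to the raw datetime column, because YEAR, MONTH and DAY truncate rather than round:

        OrderDate =
        DATE ( YEAR ( Orders[OrderTimestamp] ), MONTH ( Orders[OrderTimestamp] ), DAY ( Orders[OrderTimestamp] ) )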

    Read the article

  • Investigating on xVelocity (VertiPaq) column size

    - by Marco Russo (SQLBI)
    In January I published an article about how to optimize high-cardinality columns in VertiPaq. In the meantime, VertiPaq has been rebranded to xVelocity: the official name is now “xVelocity in-memory analytics engine (VertiPaq)”, but xVelocity and VertiPaq mean the same thing when we talk about Analysis Services. In this post I’ll show how to investigate the column sizes of an existing Tabular database, so that you can find the most important columns to optimize. A first approach is to look in the DataDir of Analysis Services for the folder containing the database, and then look for the biggest files in all subfolders: the name of such a file contains the name of the most expensive column. However, this heuristic process is not very efficient. A better approach is using a DMV that provides the exact information. For example, the following query (open SSMS, open an MDX query window on the database you are interested in, and execute it) shows all database objects sorted by used size in descending order:

        SELECT * FROM $SYSTEM.DISCOVER_STORAGE_TABLE_COLUMN_SEGMENTS ORDER BY used_size DESC

    Look at the first rows to understand which are the most expensive columns in your tabular model. The interesting data provided are:
    TABLE_ID: the name of the object – it can also be a dictionary or an index
    COLUMN_ID: the column the object belongs to – you may also see ID_TO_POS and POS_TO_ID when they refer to internal indexes
    RECORDS_COUNT: the number of rows in the column
    USED_SIZE: the memory used by the object
    By looking at the ratio between USED_SIZE and RECORDS_COUNT you can understand what you can do to optimize your tabular model. Your options are:
    Remove the column. Yes: if it contains data you will never use in a query, simply remove the column from the tabular model.
    Change the granularity. If you are tracking time and you included milliseconds but seconds would be enough, round the data source column to the nearest second. If you have a floating-point number but two decimals are good enough (e.g. a temperature), round the number to the nearest decimal that is relevant to you.
    Split the column. Create two or more columns that have to be combined together to produce the original value. This technique is described in the VertiPaq optimization article.
    Sort the table by that column. When you read the data source, you might consider sorting data by this column, so that the compression will be more efficient. However, this technique works better on columns that don’t have too many distinct values, and you will probably just move the problem to another column. Sorting data starting from the lower-density columns (those with few distinct values) and moving toward the higher-density columns (those with high cardinality) is the technique that provides the best compression ratio.
    After the optimization you should be able to reduce the used size and improve the count/size ratio you measured before. If you are interested in a longer discussion about internal storage in VertiPaq, and you want to understand why this approach can save you space (and time), you can attend my 24 Hours of PASS session “VertiPaq Under the Hood” on March 21 at 08:00 GMT.
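
    Just to make the “split the column” option concrete, here is a hedged sketch (the table and column names are hypothetical): if a wide integer such as a transaction ID is split into two lower-cardinality columns when the table is loaded, the original value can still be recomputed in DAX when it is really needed:

        TransactionID = Sales[TransactionIDHigh] * 10000 + Sales[TransactionIDLow]

    The two split columns compress much better than the original one; just avoid materializing the recombined value as a calculated column, otherwise you reintroduce the high-cardinality column you removed.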

    Read the article

  • Power Query in Modern Corporate BI–Copenhagen, June 3, 2014–#powerquery

    - by Marco Russo (SQLBI)
    I will be in Copenhagen to deliver the SSAS Tabular Workshop on June 2-4, 2014 (a few seats are still available, but hurry up!). In the same week I will be a speaker at an evening community event, MsBIP møde nr. 21, delivering the Power Query in Modern Corporate BI session that I also presented at TechEd North America 2014 last week. It’s not just a session about Power Query: there is a broader scope related to Corporate BI vs. Self-Service BI, which is open to many considerations. I think that the two worlds can (and should) collaborate, instead of fighting each other, especially when there is an existing investment in Corporate BI. I hope to meet many of you there!

    Read the article

  • API Class with intensive network requests

    - by Marco Acierno
    I'm working on an API which acts as an "intermediary" between a REST API and the developer. This way, when the programmer does something like this: User user = client.getUser(nickname); it executes a network request to download the user's data from the service, and then the programmer can use the data by doing things like user.getLocation(); user.getDisplayName(); and so on. Now there are some methods, like getFollowers(), which execute another network request, and I could handle that in two ways: download all the data in the getUser method (not only the most important parts), but then the request time could be very long, since it would have to send requests to various URLs; or download the data only when the user calls the method. The latter looks like the best way, and to improve it I could cache the result, so the next call to getFollowers returns immediately with the data already downloaded instead of executing the request again. What is the best way? And should I let methods like getUser and getFollowers block execution until the data is ready, or should I implement a callback that gets fired when the data is ready? (The latter looks like JavaScript.)

    Read the article

  • Slowly Changing Dimensions handling in PowerPivot (and BISM?)

    - by Marco Russo (SQLBI)
    During the PowerPivot Workshop in London we received many interesting questions, and Alberto had the inspiration to write this nice post about Slowly Changing Dimensions handling in PowerPivot. The consideration about SCD Type I attributes in an SCD Type II dimension is interesting – you can probably generate them in a more dynamic way in PowerPivot (thanks to VertiPaq and DAX), instead of relying on a relational table containing all the data you need, which usually requires a more complex ETL process....(read more)

    Read the article

  • Thinking in DAX (#powerpivot and #bism)

    - by Marco Russo (SQLBI)
    Last week Alberto published an interesting post about Counting Products in the Current Status with PowerPivot. Starting from a question raised by a reader, Alberto described how to solve a common issue (determine the “current status” of each item at a given point in time, starting from a transactions table) by using a single DAX formula. I suggest you read his post to understand the technical details. What is inspiring about this example is that we can look at VertiPaq and DAX from several...(read more)

    Read the article

  • Is using goto to improve DRY-ness OK?

    - by Marco Scannadinari
    My code has many checks to detect errors in various cases (many conditions would result in the same error), inside a function returning an error struct. Instead of looking like this:

        err_struct myfunc(...) {
            err_struct error = { .error = false };
            ...
            if(something) {
                error.error = true;
                error.description = "invalid input";
                return error;
            }
            ...
            case 1024:
                error.error = true;
                error.description = "invalid input"; // same error, but different detection scenario
                return error;
                break; // don't comment on this break please (EDIT: pun unintended)
            ...

    Is use of goto in the following context considered better than the previous example?

        err_struct myfunc(...) {
            err_struct error = { .error = false };
            ...
            if(something)
                goto invalid_input;
            ...
            case 1024:
                goto invalid_input;
                break;

            return error;

        invalid_input:
            error.error = true;
            error.description = "invalid input";
            return error;

    Read the article

  • PowerPivot Workshop in Frankfurt (and London early-bird expiring soon) #ppws

    - by Marco Russo (SQLBI)
    One week ago I described the PowerPivot Workshop Roadshow that we are planning in several European countries. The news today is that the Workshop will be in Frankfurt (Germany) on February 21-22, 2011! Registrations are open on the www.powerpivotworkshop.com web site. The early-bird price for Frankfurt will expire on February 4, 2011. And if you are willing to attend the London date on February 7-8, remember that the early-bird price for London is going to expire on Monday (January 17)! Save your money...(read more)

    Read the article

  • ALL, ALLEXCEPT and VALUES in DAX

    - by Marco Russo (SQLBI)
    When you use CALCULATE in DAX you are creating a new filter context for the calculation, based on the existing one. There are a few functions that are used to clear or preserve a column filter. These functions are: ALL – it can be used with one or more columns from a table, or with the name of a table. It returns all the values of the column(s) or all the rows of the table, ignoring any existing filter context. In other words, ALL clears an existing filter context on the columns or table. We can use...(read more)
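
    A minimal sketch of the typical use of ALL (the names are illustrative): a ratio-to-total measure that clears the filter on a single column:

        Pct Over All Colors :=
        SUM ( Sales[Amount] )
            / CALCULATE ( SUM ( Sales[Amount] ), ALL ( Products[Color] ) )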

    Read the article

  • The updated Survey pattern for Power Pivot and Tabular #powerpivot #tabular #ssas #dax

    - by Marco Russo (SQLBI)
    One of the first models I created for The Many-to-Many Revolution white paper was the Survey one. At the time, it was in Analysis Services Multidimensional; we then implemented it in Analysis Services Tabular and in Power Pivot, using the DAX language. I recently reviewed the data model and published it in the Survey article on the DAX Patterns site. The Survey pattern is the foundation for others, such as Basket Analysis, and it is widely used in many different business scenarios. I was particularly happy to learn it has been used to perform data analysis for cancer research! In this article I did some maintenance on the DAX formulas, checking that proper error handling is part of the formulas, and highlighting some differences in slicer behavior between Excel 2010 and Excel 2013, which could be particularly important for the Survey scenario. As usual, we provide sample workbooks for both Excel 2010 and Excel 2013, and we use DAX Formatter to make the DAX code easier to read. Any feedback will be appreciated!
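
    A hedged sketch of the kind of measure the pattern is built on (the model is hypothetical: an Answers table with one row per respondent and given answer): counting the respondents who gave two specific answers:

        Respondents With Both Answers :=
        COUNTROWS (
            FILTER (
                VALUES ( Answers[RespondentID] ),
                CALCULATE ( COUNTROWS ( Answers ), Answers[AnswerID] = 1 ) > 0
                    && CALCULATE ( COUNTROWS ( Answers ), Answers[AnswerID] = 2 ) > 0
            )
        )

    See the article for the real formulas, including the error handling mentioned above.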

    Read the article

  • How to allow Google Images search to bypass hotlink protection?

    - by Marco Demaio
    I saw that Google Images seems to index my images only if hotlink protection is off. I use hotlink protection anyway, because I don't like the idea of people sucking my bandwidth; I simply use this code to protect my sites from being hotlinked:

        RewriteEngine on
        RewriteCond %{HTTP_REFERER} !^$
        RewriteCond %{HTTP_REFERER} !^http(s)?://(www\.)?mydomain\.com/.*$ [NC]
        RewriteCond %{HTTP_REFERER} !^http(s)?://(www\.)?mydomain\.com$ [NC]
        RewriteRule .*\.(jpg|jpeg|png|gif)$ - [F,NC,L]

    But in order to allow Google Images search to bypass my hotlink protection (I want Google Images search to show my images), would it suffice to add lines like these:

        RewriteCond %{HTTP_REFERER} !^http(s)?://(www\.)?google\.com/.*$ [NC]
        RewriteCond %{HTTP_REFERER} !^http(s)?://(www\.)?google\.com$ [NC]

    Because I'm wondering: does the crawler crawl just from google.com? And what about google.it, google.co.uk, etc.? FYI: I did not find info about this in Google's official guidelines. I suppose hotlink protection prevents Google Images from showing images in its results, because I did some tests and it seems hotlink protection does prevent my images from being shown in Google Images search.

    Read the article

  • SEO - PageRank on Facebook pages, but pages have no back links to them?

    - by Marco Demaio
    Have a look at these two pages:
    1) http://it-it.facebook.com/jeanchristophe.cataliotti (PageRank 2 from the Google toolbar). Amazingly, it has got NO links to it: http://siteexplorer.search.yahoo.com/search?p=it-it.facebook.com/jeanchristophe.cataliotti&fr=sfp
    2) http://www.facebook.com/group.php?gid=18463182878&v=wall&viewas=0 (PageRank 1 from the Google toolbar). Still amazingly, it has got NO links to it: http://siteexplorer.search.yahoo.com/search?p=www.facebook.com/group.php?gid=18463182878&v=wall&viewas=0&fr=sfp
    How do you explain this? I'm hoping for an explanation that goes beyond just saying that the PR in the Google toolbar is not updated, because that cannot be the reason for this!

    Read the article
