Search Results

Search found 422 results on 17 pages for 'marco scannadinari'.

Page 6/17 | < Previous Page | 2 3 4 5 6 7 8 9 10 11 12 13  | Next Page >

  • Strange date relationships with #PowerPivot

    - by Marco Russo (SQLBI)
    A reader of my PowerPivot book highlighted a strange behavior of the relationship between a datetime column and a Calendar table. Long story short: it seems that PowerPivot automatically rounds the date to the “nearest day”, but instead of simply removing the time (truncating the decimal part of the number internally used to represent a datetime value), a rounding function seems to be used, moving the date to the next day if the time part contains a PM time. As you can imagine, this becomes particularly...(read more)
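
    As a rough illustration of the rounding behavior described in this excerpt (plain Java, not PowerPivot code; the serial value 40544.75 is hypothetical), truncating the underlying date serial keeps the same day, while rounding pushes any PM time to the following day:

        public class DateSerialDemo {
            public static void main(String[] args) {
                // A datetime is internally a number: the integer part is the day,
                // the fractional part is the time of day (0.75 = 6:00 PM).
                double serial = 40544.75;                   // hypothetical serial for "some day, 6 PM"
                long truncated = (long) Math.floor(serial); // 40544 -> time removed, same day
                long rounded = Math.round(serial);          // 40545 -> PM fraction rounds up to the next day
                System.out.println(truncated + " vs " + rounded);
            }
        }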

    Read the article

  • Investigating on xVelocity (VertiPaq) column size

    - by Marco Russo (SQLBI)
    In January I published an article about how to optimize high cardinality columns in VertiPaq. In the meantime, VertiPaq has been rebranded to xVelocity: the official name is now “xVelocity in-memory analytics engine (VertiPaq)”, but using xVelocity or VertiPaq when we talk about Analysis Services has the same meaning. In this post I’ll show how to investigate the column sizes of an existing Tabular database so that you can find the most important columns to optimize.

    A first approach can be looking in the DataDir of Analysis Services for the folder containing the database, and then looking for the biggest files in all subfolders: the name of the biggest file will contain the name of the most expensive column. However, this heuristic process is not very efficient. A better approach is using a DMV that provides the exact information. For example, by using the following query (open SSMS, open an MDX query on the database you are interested in, and execute it) you will see all database objects sorted by used size in descending order:

        SELECT * FROM $SYSTEM.DISCOVER_STORAGE_TABLE_COLUMN_SEGMENTS ORDER BY used_size DESC

    You can look at the first rows in order to understand which are the most expensive columns in your tabular model. The interesting data provided are:

    - TABLE_ID: the name of the object – it can also be a dictionary or an index
    - COLUMN_ID: the column the object belongs to – you may also see ID_TO_POS and POS_TO_ID in case they refer to internal indexes
    - RECORDS_COUNT: the number of rows in the column
    - USED_SIZE: the memory used for the object

    By looking at the ratio between USED_SIZE and RECORDS_COUNT you can understand what you can do in order to optimize your tabular model. Your options are:

    - Remove the column. If it contains data you will never use in a query, simply remove the column from the tabular model.
    - Change granularity. If you are tracking time and you included milliseconds but seconds would be enough, round the data source column to the nearest second. If you have a floating point number but two decimals are good enough (e.g. a temperature), round the number to the nearest decimal that is relevant to you.
    - Split the column. Create two or more columns that have to be combined together in order to produce the original value. This technique is described in the VertiPaq optimization article.
    - Sort the table by that column. When you read the data source, you might consider sorting data by this column, so that the compression will be more efficient. However, this technique works better on columns that don’t have too many distinct values, and you will probably move the problem to another column. Sorting data starting from the lower density columns (those with few distinct values) and moving to the higher density columns (those with high cardinality) is the technique that provides the best compression ratio. (See the toy example after this excerpt.)

    After the optimization you should be able to reduce the used size and improve the count/size ratio you measured before. If you are interested in a longer discussion about internal storage in VertiPaq and you want to understand why this approach can save you space (and time), you can attend my 24 Hours of PASS session “VertiPaq Under the Hood” on March 21 at 08:00 GMT.
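
    As a toy illustration of the sorting option above (plain Java, unrelated to the actual VertiPaq engine; the column values are made up), run-length-style compression benefits from sorting because equal values become contiguous and collapse into fewer runs:

        import java.util.Arrays;

        public class RunLengthDemo {
            // Count how many runs of identical consecutive values a column contains:
            // fewer runs means better run-length-style compression.
            static int countRuns(int[] values) {
                int runs = (values.length == 0) ? 0 : 1;
                for (int i = 1; i < values.length; i++) {
                    if (values[i] != values[i - 1]) runs++;
                }
                return runs;
            }

            public static void main(String[] args) {
                int[] column = {3, 1, 3, 2, 1, 3, 2, 2, 1, 3};  // made-up low-cardinality column
                int[] sorted = column.clone();
                Arrays.sort(sorted);
                System.out.println("unsorted runs: " + countRuns(column));  // 9
                System.out.println("sorted runs:   " + countRuns(sorted));  // 3, one per distinct value
            }
        }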

    Read the article

  • Power Query in Modern Corporate BI–Copenhagen, June 3, 2014–#powerquery

    - by Marco Russo (SQLBI)
    I will be in Copenhagen to deliver the SSAS Tabular Workshop on June 2-4, 2014 (a few seats are still available, but hurry up!). In the same week I will be a speaker at an evening community event, MsBIP møde nr. 21, delivering the Power Query in Modern Corporate BI session that I also presented at TechEd North America 2014 last week. It’s not just a session about Power Query: there is a broader scope related to Corporate BI vs. Self-Service BI, which is open to many considerations. I think that the two worlds can (and should) collaborate, instead of fighting against each other, especially when there is an existing investment in Corporate BI. I hope to meet many of you there!

    Read the article

  • API Class with intensive network requests

    - by Marco Acierno
    I'm working on an API which acts as an "intermediary" between a REST API and the developer. When the programmer does something like this: User user = client.getUser(nickname); it will execute a network request to download the user's data from the service, and then the programmer can use the data by doing things like user.getLocation(); user.getDisplayName(); and so on. Now there are some methods like getFollowers() which execute another network request, and I could do it in two ways: (1) download all the data in the getUser method (and not only the most important), but this way the request time could be very long since it would have to hit various URLs; or (2) download the data when the user calls the method, which looks like the best way, and to improve it I could cache the result so the next call to getFollowers returns immediately with the data already downloaded instead of executing the request again. What is the best way? And should I let methods like getUser and getFollowers block code execution until the data is ready, or should I implement a callback that gets fired when the data is ready (which looks like JavaScript)?
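
    A minimal sketch of the second option (plain Java; RestBackend and the method names are hypothetical, not the poster's actual API): followers are fetched lazily on the first call and cached, so repeated calls skip the network.

        import java.util.List;

        // Hypothetical backend wrapper assumed to perform the actual HTTP call.
        interface RestBackend {
            List<String> fetchFollowers(String nickname);
        }

        class User {
            private final RestBackend backend;
            private final String nickname;
            private List<String> followers;             // null until first requested

            User(RestBackend backend, String nickname) {
                this.backend = backend;
                this.nickname = nickname;
            }

            synchronized List<String> getFollowers() {
                if (followers == null) {                // first call: blocking network request
                    followers = backend.fetchFollowers(nickname);
                }
                return followers;                       // later calls: cached data
            }
        }

    If blocking the caller is not acceptable, the same method could instead accept a callback or return a Future so the request runs on a background thread.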

    Read the article

  • Slowly Changing Dimensions handling in PowerPivot (and BISM?)

    - by Marco Russo (SQLBI)
    During the PowerPivot Workshop in London we received many interesting questions, and Alberto had the inspiration to write this nice post about Slowly Changing Dimensions handling in PowerPivot. The consideration about SCD Type I attributes in a SCD Type II dimension is interesting – you can probably generate them in a more dynamic way in PowerPivot (thanks to VertiPaq and DAX) instead of relying on a relational table containing all the data you need, which usually requires a more complex ETL process....(read more)

    Read the article

  • Thinking in DAX (#powerpivot and #bism)

    - by Marco Russo (SQLBI)
    Last week Alberto published an interesting post about Counting Products in the Current Status with PowerPivot. Starting from a question raised by a reader, Alberto described how to solve a common issue (getting the “current status” of each item at a given point in time, starting from a transactions table) by using a single DAX formula. I suggest you read his post to understand the technical details. What is inspiring about this example is that we can look at VertiPaq and DAX from several...(read more)

    Read the article

  • PowerPivot Workshop in Frankfurt (and London early-bird expiring soon) #ppws

    - by Marco Russo (SQLBI)
    One week ago I described the PowerPivot Workshop Roadshow that we are planning in several European countries. The news today is that the Workshop will be in Frankfurt (Germany) on February 21-22, 2011! Registrations are open on the www.powerpivotworkshop.com web site. The early-bird price for Frankfurt will expire on February 4, 2011. And if you are willing to attend the London date on February 7-8, remember that the early-bird price for London is going to expire on Monday (January 17)! Save your money...(read more)

    Read the article

  • ALL, ALLEXCEPT and VALUES in DAX

    - by Marco Russo (SQLBI)
    When you use CALCULATE in DAX you are creating a new filter context for the calculation, based on the existing one. There are a few functions that are used to clear or preserve a column filter. These functions are: ALL – it can be used with one or more columns from a table, or with the name of a table. It returns all the values from the column(s) or all the rows from the table, ignoring any existing filter context. In other words, ALL clears an existing filter context on columns or a table. We can use...(read more)

    Read the article

  • The updated Survey pattern for Power Pivot and Tabular #powerpivot #tabular #ssas #dax

    - by Marco Russo (SQLBI)
    One of the first models I created for the many-to-many revolution white paper was the Survey one. At the time, it was in Analysis Services Multidimensional, and then we implemented it in Analysis Services Tabular and in Power Pivot, using the DAX language. I recently reviewed the data model and published it in the Survey article on the DAX Patterns site. The Survey pattern is the foundation for others, such as Basket Analysis, and it is widely used in many different business scenarios. I was particularly happy to learn it has been used to perform data analysis for cancer research! In this article I did some maintenance on the DAX formulas, checking that proper error handling is part of the formulas, and highlighting some differences in slicer behavior between Excel 2010 and Excel 2013, which could be particularly important for the Survey scenario. As usual, we provide sample workbooks for both Excel 2010 and Excel 2013, and we use DAX Formatter to make the DAX code easier to read. Any feedback will be appreciated!

    Read the article

  • How to allow Google Images search to bypass hotlink protection?

    - by Marco Demaio
    I saw that Google Images seems to index my images only if hotlink protection is off. I use hotlink protection anyway because I don't like the idea of people sucking my bandwidth; I simply use this code to protect my sites from being hotlinked:

        RewriteEngine on
        RewriteCond %{HTTP_REFERER} !^$
        RewriteCond %{HTTP_REFERER} !^http(s)?://(www\.)?mydomain\.com/.*$ [NC]
        RewriteCond %{HTTP_REFERER} !^http(s)?://(www\.)?mydomain\.com$ [NC]
        RewriteRule .*\.(jpg|jpeg|png|gif)$ - [F,NC,L]

    But in order to allow Google Images search to bypass my hotlink protection (I want Google Images search to show my images), would it suffice to add lines like these:

        RewriteCond %{HTTP_REFERER} !^http(s)?://(www\.)?google\.com/.*$ [NC]
        RewriteCond %{HTTP_REFERER} !^http(s)?://(www\.)?google\.com$ [NC]

    Because I'm wondering: is the crawler crawling just from google.com? And what about google.it / google.co.uk, etc.? FYI: in the official Google guidelines I did not find info about this. I suppose hotlink protection prevents Google Images from showing my images in its results, because I did some tests and it seems hotlink protection does prevent my images from being shown in Google Images search.

    Read the article

  • SEO - PageRank on Facebook pages, but pages have no back links to them?

    - by Marco Demaio
    Have a look at these two pages: 1) http://it-it.facebook.com/jeanchristophe.cataliotti (PageRank 2 from the Google toolbar). Amazingly, it has got NO links to it: http://siteexplorer.search.yahoo.com/search?p=it-it.facebook.com/jeanchristophe.cataliotti&fr=sfp 2) http://www.facebook.com/group.php?gid=18463182878&v=wall&viewas=0 (PageRank 1 from the Google toolbar). Still amazingly, it has got NO links to it: http://siteexplorer.search.yahoo.com/search?p=www.facebook.com/group.php?gid=18463182878&v=wall&viewas=0&fr=sfp How do you explain this? I'm hoping for an explanation that goes beyond just saying that the PR in the Google toolbar is not updated, because that cannot be the reason for this!

    Read the article

  • SEO: disallowing Google from indexing forms in iframes or not?

    - by Marco Demaio
    I usually place forms in iframes (i.e. order forms, request-assistance forms, contact forms, etc.). Just the forms – I never place other contents or pages in iframes. From a SEO point of view, would you exclude forms from being indexed/crawled by Google or not? I mean, my forms hardly ever contain keywords/keyphrases; moreover, I obviously place empty title/meta description tags in the pages shown in an iframe to display forms, because those titles are never displayed in the browser title bar. So I'm wondering what's the point of letting Google index them? Moreover, I think these form pages might suck out PR from all the other pages that are more valuable for SEO. If your answer is "yes, I would exclude them from indexing", would you simply use robots.txt to exclude them? Thanks!

    Read the article

  • Stock Analysis and Moving Average with PowerPivot

    - by Marco Russo (SQLBI)
    One week ago Alberto Ferrari wrote a post about how to do working-days calculations in PowerPivot. You might think this is necessary only for the accounting department or something like that… but in reality the same techniques are really useful to implement calculations you need when you want to do some stock analysis using PowerPivot and Excel! As you might know, in PowerPivot it is important to have a Dates table containing all the days, without exceptions. But when you manage stock...(read more)

    Read the article

  • How to upgrade from 11.10 to 12.04?

    - by Marco
    I am using 11.10 and I really want to upgrade to 12.04, but it seems impossible. I cannot see it in the Update Manager (I did select the option for the releases). I tried sudo update-manager -d, sudo apt-get upgrade and sudo do-release-upgrade -d, but nothing. Plain sudo do-release-upgrade is not working either (I get a "no release found" message). So finally I put 12.04 on a live USB, and when I boot I click on install; it tells me that I do have 11.10 and I can select to erase everything and install Ubuntu 12.04, or install alongside 11.10, but the second option to upgrade 11.10 to 12.04 is greyed out. I cannot select it! Why? Am I running out of options? What else can I do to upgrade to 12.04?

    Read the article

  • Speaking at PASS 2012 Summit in Seattle #sqlpass

    - by Marco Russo (SQLBI)
    I will deliver two sessions at the next PASS Summit 2012: one is titled Inside DAX Query Plans and the other is Near Real-Time Analytics with xVelocity (without DirectQuery). These will be two sessions that require a lot of preparation, and even if I already have much to say, I still have a lot of work to do this summer in order to go deeper into several details that I want to investigate for completing these sessions. I already look forward to coming back to Seattle! In the meantime, you have to study SSAS Tabular, and if you want a real jumpstart, why not attend one of the next SSAS Tabular Workshop Online dates? We are working on more dates for this fall, but a few dates are already scheduled. And, last but not least, the early Rough Cuts edition of our upcoming SSAS Tabular book is finally available here (really near to the final print)!

    Read the article

  • PowerPivot FILTER condition optimizations

    - by Marco Russo (SQLBI)
    In the comments of a recent post from Alberto Ferrari there was an interesting note about performance differences related to the order of conditions in a FILTER call. I investigated this, and Jeffrey Wang has been so nice to give me some info about the actual implementation that I can share in a blog post. First of all, an important disclaimer: PowerPivot is intended to make life easier, not to require the user to think about the order of elements in a formula just to get better performance....(read more)

    Read the article

  • Is it bad practice to call a controller action from a view that was rendered by another controller?

    - by marco-fiset
    Let's say I have an OrderController which handles orders. The user adds products to it through the view, and then the final price gets calculated through an AJAX call to a controller action. The price calculation logic is implemented in a separate class and used in a controller action. What happens is that I have many views from different controllers that need to use that particular action. I'd like to have some kind of PriceController that I could call an action on, but then the view would have to know about that PriceController and call an action on it. Is it bad practice for a view to call an action on a controller other than the one that rendered it?
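
    A framework-agnostic sketch of the setup described (plain Java; all class and method names are hypothetical): the calculation lives in one service class and a dedicated PriceController exposes it, so every view only needs the URL of that single endpoint rather than a reference to any particular controller class.

        import java.util.List;

        // Shared calculation logic, reusable from any controller.
        class PriceCalculator {
            double total(List<Double> itemPrices, double taxRate) {
                double subtotal = itemPrices.stream().mapToDouble(Double::doubleValue).sum();
                return subtotal * (1.0 + taxRate);
            }
        }

        class PriceController {
            private final PriceCalculator calculator = new PriceCalculator();

            // In a real MVC framework this method would be mapped to a route such as /price/calculate.
            double calculate(List<Double> itemPrices) {
                return calculator.total(itemPrices, 0.20);  // assumed 20% tax, illustrative only
            }
        }

    Since the view's AJAX call only targets a URL, not a controller class, routing price requests to a dedicated controller keeps the coupling in the routing configuration rather than in the view itself.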

    Read the article

  • Is it bad practice to use the <?= tag in PHP?

    - by marco-fiset
    I've come across this PHP tag <?= ?> recently and I am reluctant to use it, but it itches so hard that I wanted to have your take on it. I know it is bad practice to use short tags <? ?> and that we should use full tags <?php ?> instead, but what about this one: <?= ?>? It would save some typing and it would be better for code readability, IMO. So instead of this: <input name="someVar" value="<?php echo $someVar; ?>"> I could write it like this, which is cleaner: <input name="someVar" value="<?= $someVar ?>"> Is using this tag frowned upon?

    Read the article

  • Ecommerce item deleted by user, 301 redirect to HOME PAGE or 404 not found?

    - by Marco Demaio
    I know this question is somewhat similar to this one where they recommend using 404, but after reading this other one where they suggest using 301 when changing site URLs (in that specific case it was due to redesign/refactoring) I got a bit confused, and I hope someone could clarify this for the following specific example: Let's say I have an ecommerce site, and let's also say the final user inserted some interesting items in the site, and the ecommerce webapp created the item pages at the URLs http://...?id=20, http://...?id=30, etc. Now let's say some of these interesting items got many external links toward them from many other sites, because some people found those items very interesting and linked to them. After some years the final user deletes those items, so obviously the pages/URLs http://...?id=20, http://...?id=30, etc. do not exist anymore, but many pages on the web still link toward them. What should the ecommerce site do now, just show a 404 page for those items? But, I'm confused: wouldn't this lose all the Google PR passed by the external links to the item pages? So isn't it better to use a 301 redirect to the HOME PAGE, which at least passes the PR to the HOME PAGE? Thanks.

    EDIT: Well, according to the answers, the best thing to do so far is a 404/410. In order to make this question more complete, I would like to talk about a special case, just to make sure I understood properly. Let's say the user creates those items again (the ones he previously deleted at point 4); maybe he changes their names and descriptions a bit, but they are basically the same items. The webapp has no way to know these newly added items were the old items, so it obviously creates them as new items with new URLs http://...?id=100, http://...?id=101. Does it make sense at this point to redirect 301 the old URLs to the new ones?

    MORE EDIT (It would be VERY IMPORTANT TO UNDERSTAND): Well, according to the clever answers received so far, it seems that for the special case explained in my last EDIT I could use 301, since it's not deceptive: the new page is basically a replacement for the old page in terms of content. This is done to keep the PR passed from external links and also for a better user experience. But besides the user experience, which is debatable (*1), in order to preserve PR from external broken links why not just always use 301? In my understanding Google dislikes duplicated content, but are we sure that a 301 redirect to the HOME PAGE is seen as duplicated content by Google? Google itself suggests a 301 redirect from index.html to the document root, so if they considered 301 duplicated content, wouldn't that be duplicated content too? Why do they suggest it? Let me provoke you: “why not just add a 301 to the HOME PAGE for every not-found page?”

    (*1) As a user, when I follow a broken URL from some external link to some website's page, I would stick more with that website if I got redirected to the HOME PAGE rather than seeing a 404 page, where I would think the website does not even exist anymore and maybe not even try to go to its HOME PAGE.
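
    A minimal sketch of the 410-versus-301 policy being discussed (Java, using the javax.servlet API; all ids and URLs are made up): an item deleted without a successor answers 410 Gone, and only an item with a genuine replacement gets a 301 redirect to the new URL.

        import java.io.IOException;
        import java.util.Map;
        import java.util.Set;
        import javax.servlet.http.HttpServlet;
        import javax.servlet.http.HttpServletRequest;
        import javax.servlet.http.HttpServletResponse;

        public class ItemStatusServlet extends HttpServlet {
            private final Set<String> deletedIds = Set.of("20");                        // deleted, no replacement
            private final Map<String, String> replacedBy = Map.of("30", "/item?id=101"); // recreated elsewhere

            @Override
            protected void doGet(HttpServletRequest req, HttpServletResponse resp) throws IOException {
                String id = req.getParameter("id");
                if (replacedBy.containsKey(id)) {
                    resp.setStatus(HttpServletResponse.SC_MOVED_PERMANENTLY);   // 301 to the new item page
                    resp.setHeader("Location", replacedBy.get(id));
                } else if (deletedIds.contains(id)) {
                    resp.sendError(HttpServletResponse.SC_GONE);                // 410: intentionally removed
                } else {
                    resp.getWriter().println("normal item page for id " + id);
                }
            }
        }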

    Read the article

  • TechEd North America 2012–Day 3 #msTechEd #teched

    - by Marco Russo (SQLBI)
    Yesterday I spent the longest day at this TechEd: we talked with many people at Community Night until 9pm, and I have to say that, just a few months after Analysis Services 2012 was released, there are many people already using it. And the adoption of PowerPivot is starting to be quite large. Many new ideas and challenges are coming from several different real-world scenarios. I was tired but really happy. Alberto presented his Many-to-Many Relationships in BISM Tabular session, which was in the same time slot as the BI Power Hour. For this reason, very few people attended Alberto’s session, so I think many will watch the recorded session (it should be available within a few days). So what about today? I’ll spend some time at the Technical Learning Center area (full schedule here), but the most important event today will be Querying multi-billion rows with many to many relationships in SSAS Tabular (xVelocity) at the Private Cloud, Public Cloud and Data Platform Theater in the Technical Learning Center area (next to the SQL Server 2012 zone). Why should you attend? Mainly because you will see a live demo over a 4-billion-row table with many-to-many relationships involved in complex queries. And for those of you who think this is not enough to attend a 15-minute funny session, well, we’ll give away some 8GB USB memory keys to those of you who guess the exact response time of the queries before execution. Convinced? Join us at 11:15am and don’t be late, the session will finish at 11:30am! After that, we’ll run a book signing session at the Bookstore at 12:30pm, and I will be in the Technical Learning Center area from 3:00pm until 5:00pm. See you there!

    Read the article

  • Parent-child hierarchies and unary operators in PowerPivot

    - by Marco Russo (SQLBI)
    Alberto wrote an excellent post describing how to implement the Unary Operator feature (which is present in Analysis Services) in PowerPivot (there was a previous post about parent-child hierarchies, too). I have to say that the solution is not as easy to implement as in Analysis Services, but it just works and, from a practical point of view, it is not so difficult to implement if you understand how it works and accept its limitations (only sums and subtractions are supported). I think that many...(read more)

    Read the article

  • Community Events in Köln (October) and Copenhagen (November) #ssas #tabular #powerpivot

    - by Marco Russo (SQLBI)
    A short update about community events in Europe where I will speak. On October 11 I will present DAX in Action in Köln – all details are on the local PASS chapter page here: http://www.sqlpass.de/Regionen/Deutschland/K%C3%B6lnBonnD%C3%BCsseldorf.aspx I will be speaking at a community event in Copenhagen on November 21, 2012. The session will be Excel 2013 PowerPivot in Action, and details about time and location are available here: http://msbip.dk/events/30/msbip-mode-nr-9/ I will be in Köln and Copenhagen to teach the SSAS Tabular Workshop. The workshop in Köln is the first in Germany and I look forward to meeting new BI developers there. Copenhagen is the second edition after the one we delivered this spring. It is a convenient location also for people coming from Malmoe and Göteborg in Sweden. The last event in Copenhagen conflicted with a large event in Sweden; maybe this time I'll meet more people coming from the other side of the Øresund Bridge! Many other dates and locations are available on the SSAS Tabular Workshop website.

    Read the article

  • T9 patented while QWERTY is not?

    - by Marco W.
    I've seen that there are lots of custom keyboards for Android, but all are QWERTY keyboards. I couldn't find any keyboard with a T9 layout. Is this because T9 is patented and the QWERTY layout is not? So if I made a T9 keyboard, would I have to pay patent fees? And what exactly does the T9 patent protect? Only the layout? Or the prediction engine? The problem is, this way of predicting words is the only one that makes sense for this layout...

    Read the article
