Search Results

Search found 352 results on 15 pages for 'pablo russo'.

Page 1/15 | 1 2 3 4 5 6 7 8 9 10 11 12  | Next Page >

  • Community Events and Workshops in November 2012 #ssas #tabular #powerpivot

    - by Marco Russo (SQLBI)
    Alberto and I have a busy agenda until the end of the month, but if you are based in Northern Europe there are many chances to meet one of us in the next couple of weeks!
    - Belgium, 20 November 2012 – SQL Server Days 2012 with Marco Russo. I will present two sessions at this conference: “Data Modeling for Tabular” and “Querying and Optimizing DAX”.
    - Copenhagen, 21-22 November 2012 – SSAS Tabular Workshop with Alberto Ferrari. Alberto will be the speaker for two days – you can still register if you want a full immersion!
    - Copenhagen, 21 November 2012 – Free Community Event with Alberto Ferrari (hosted at Microsoft Hellerup). In the evening Alberto will present “Excel 2013 PowerPivot in Action”.
    - Munich, 27-28 November 2012 – SSAS Tabular Workshop with Alberto Ferrari. The SSAS workshop will also run in Germany, this time in Munich. Here, too, some seats are still available.
    - Munich, 27 November 2012 – Free Community Event with Alberto Ferrari (hosted at Microsoft). In the evening Alberto will present “Excel 2013 PowerPivot in Action”.
    - Moscow, 27-28 November 2012 – TechEd Russia 2012 with Marco Russo. I will speak during the keynote on November 27 and present two sessions the day after: “Developing an Analysis Services Tabular Project BI Semantic Model” and “Excel 2013 PowerPivot in Action”.
    - Stockholm, 29-30 November 2012 – SSAS Tabular Workshop with Marco Russo. I will run this workshop in Stockholm – if you want to register, hurry up! A few seats are still available!
    - Stockholm, 29 November 2012 – Free Community Event (sold out!) with Marco Russo. In the evening I will present “Excel 2013 PowerPivot in Action”.
    If you want to attend an SSAS Tabular Workshop online, you can also register for the Online edition on December 5-6, 2012, which is still in early bird and is scheduled in a time zone friendly to the Americas (it could be good for Europe too, in case you don’t mind attending a workshop until midnight!).

    Read the article

  • Meet SQLBI at PASS Summit 2012 #sqlpass

    - by Marco Russo (SQLBI)
    Next week Alberto Ferrari and I will be in Seattle at PASS Summit 2012. You can meet us at our sessions, at a book signing, and hopefully at some other sessions during the conference. Here are our appointments:
    - Thursday, November 08, 2012, 10:15 AM - 11:45 AM – Alberto Ferrari – Room 606-607: Querying and Optimizing DAX (BIA-321-S). Do you want to learn how to write DAX queries and how to optimize them? Don’t miss this session!
    - Thursday, November 08, 2012, 12:00 PM - 12:30 PM – Bookstore: book signing event at the bookstore corner with Alberto Ferrari, Marco Russo and Chris Webb. Visit the bookstore and get your copy of our Microsoft SQL Server 2012 Analysis Services: The BISM Tabular Model book signed.
    - Thursday, November 08, 2012, 1:30 PM - 2:45 PM – Marco Russo – Room 611: Near Real-Time Analytics with xVelocity (without DirectQuery) (BIA-312). What latency can you tolerate for your data? Discover the limits of Tabular without using DirectQuery and learn how to optimize your data model and your queries for a near real-time analytical system. Not a trivial task, but more affordable than you might think.
    - Friday, November 09, 2012, 9:45 AM - 11:00 AM: Parent-Child Hierarchies in Tabular (BIA-301). Multidimensional has more advanced support for hierarchies than Tabular, but in reality you can do almost the same things by using data modeling, DAX functions and BIDS Helper!
    - Friday, November 09, 2012, 1:00 PM - 2:15 PM – Marco Russo – Room 612: Inside DAX Query Plans (BIA-403). Discover the query plan for your DAX query and learn how to read it and how to optimize a DAX query by using this information.
    If you meet us at the conference, stop us and say hello: it’s always nice to meet our readers!

    Read the article

  • Fix Nautilus URIs in a Python script

    - by Pablo
    I have a very basic Python script I wrote mostly for learning purposes. It opens a terminal in the current folder. However, I can't get it to work in folders with accented characters in the URI (e.g. /home/pablo/Vídeos or /home/pablo/Área de Trabalho), because Nautilus URIs are percent-encoded, so accented characters become those %xx values. Is there a way to convert these URIs to normal paths without having to translate every possible accented value by hand? Thanks in advance!
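    Percent-encoded file URIs can be decoded with the Python standard library; here is a minimal sketch using Python 3's urllib.parse (on Python 2 the same functions live in urllib and urlparse; the uri_to_path helper name is invented for the example):

        from urllib.parse import unquote, urlparse

        def uri_to_path(uri):
            # Nautilus hands scripts file:// URIs with %xx escapes;
            # urlparse splits off the scheme, unquote decodes the escapes
            # (UTF-8 by default), covering any accented character.
            return unquote(urlparse(uri).path)

        print(uri_to_path("file:///home/pablo/V%C3%ADdeos"))  # -> /home/pablo/Vídeos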

    Read the article

  • Microsoft BI Conference 2010 Recap & books promo

    - by Marco Russo (SQLBI)
    Last week I was at the Microsoft BI Conference and I presented an interactive session about PowerPivot DAX Patterns. Unfortunately only the breakout sessions were recorded and made available on TechEd Online. The room was full and there were probably many other people in an overflow room. I would like to thank all the attendees of my session, and you can write me (marco dot russo [at] sqlbi dot com) if you have other questions and/or feedback about the session. The interest about PowerPivot (especially...(read more)

    Read the article

  • MDX Studio download #mdx #ssas

    - by Marco Russo (SQLBI)
    Short version: the latest available version of MDX Studio can be downloaded from http://www.sqlbi.com/tools/mdx-studio/ Long version: Last week Stacia Misner tweeted that the online version of MDX Studio was no longer available. It was hosted on http://mdx.mosha.com. It was sad news, and it is also not good that nobody is maintaining the desktop version of MDX Studio. The latest release is 0.4.14 and, as I write, it is still available on a SkyDrive link provided by Mosha Pasumansky, who wrote MDX Studio. Mosha no longer works at Microsoft, and the entire BI community hopes that somebody will continue his work on this product. Unfortunately, it cannot be published on CodePlex because of some IP restrictions. Only bad news? Well, I hope not. The first good news is that MDX Studio also works with Analysis Services 2012 in Multidimensional mode. The second is that, after having checked that we can do so, we created a web page on the SQLBI web site to download the latest available release of MDX Studio. I hope it will be necessary to update it in the future; for now, it is just a way to make finding and downloading this precious tool easier, and to ensure that it will not disappear in case the SkyDrive currently hosting the download is discontinued, as happened to the MDX Studio online version. Now a question for the BI community: I know that there was some tutorial content available for MDX Studio. I’d like to gather it and put it all in a single place. If you have such content, please contact me directly by writing to marco (dot) russo (at) sqlbi [dot] com. Thanks!

    Read the article

  • Software Testing Humor

    - by mbcrump
    I usually don’t share this kind of thing unless it really makes me laugh. At least I can provide a link to a free eBook on Pablo’s S.O.L.I.D. principles. S.O.L.I.D. is a collection of best-practice object-oriented design principles that you can apply to your design to accomplish various desirable goals like loose coupling, higher maintainability, intuitive location of interesting code, and so on. You may also want to check out Pablo’s 31 Days of Refactoring eBook as well.

    Read the article

  • Analysis Services Tabular books #ssas #tabular

    - by Marco Russo (SQLBI)
    Many people are looking for books about Analysis Services Tabular. Today there are two books available, and they complement each other: Microsoft SQL Server 2012 Analysis Services: The BISM Tabular Model by Marco Russo, Alberto Ferrari and Chris Webb, and Applied Microsoft SQL Server 2012 Analysis Services: Tabular Modeling by Teo Lachev. The book I wrote with Alberto and Chris is a complete guide to creating tabular models and has good coverage of DAX, including how to use it to enrich a semantic model with calculated columns and measures and how to use it for querying a Tabular model. In my experience, DAX as a query language is a very interesting option for custom analytical applications that require a fast calculation engine, or simply for standard reports running in Reporting Services and accessing a Tabular model. You can freely preview the table of contents and read some excerpts from the book on Safari Books Online. The book is in printing and should ship by mid-July, so it will very soon be on the shelves of all the people who already preordered it! Teo Lachev’s book covers the full spectrum of Tabular models provided by Microsoft: starting with self-service BI, you have users creating a model with PowerPivot for Excel, publishing it to PowerPivot for SharePoint and exploring data by using Power View; then, the PowerPivot for Excel model can be imported into a Tabular model and published in Analysis Services, adding more control over the model through row-level security and partitioning, for example. Teo’s book follows a step-by-step approach describing each feature, which is very good for a beginner who is new to PowerPivot and/or BISM Tabular. If you need to get the big picture and start using the products that are part of the new Microsoft wave of BI products, Teo’s book is for you. After you read Teo’s book, or if you already have a certain confidence with PowerPivot or BISM Tabular and you want to go deeper into internals, best practices and design patterns in BISM Tabular, then our book is a suggested read: it contains several chapters about DAX, includes discussions of the new opportunities in data model design offered by Tabular models, and also provides examples of optimizations you can obtain in DAX and best practices in data modeling and queries. It might seem strange that an author writes a review of a book that might seem to compete with his own, but in reality these two books complement each other and are not alternatives. If you have any doubt, buy both: you will not be disappointed! Moreover, Amazon usually offers you a deal to buy three books, including Visualizing Data with Microsoft Power View, another good choice for getting all the details about Power View.

    Read the article

  • sed problem with scripting

    - by Pablo Ramos
    I am trying to run a script that uses sed. I am running it like this:

        for et in 1 # 2 3
        do
            if [ -d ET$et ]; then rm -rf ET$et; fi
            mkdir ET$et
            cd ET$et
            cp $home/step_$i/FDE/diabatA/run.adf .
            cp $home/step_$i/FDE/diabatA/mas$i.xyz .
            awk1=`awk '/type=fde/{print NR }' run.adf | head -1`
            awk2=`$(echo "$a+379" | bc -l )`
            sed -n "$awk1,"$awk2"p" run.adf > first
            awk3=`awk '/ATOMS/{print NR +1}' first`
            awk4=`cat mas$i.xyz | wc -l`
            awk4=$( echo "$awk4-1" | bc -l )
            awk5=`awk "/ATOMS/{print NR +"${awk4}" }" run.adf`
            sed -n "$awk3,"$awk4"p" first > atoms
            par=$( echo "$awk4-99" | bc -l )
            rho1=$(cat atoms | head -34 )
            rho2=$(cat atoms | head -64 | tail -31)
            rho3=$(cat atoms | head -97 | tail -33)
            rhoall=$(cat atoms | tail -${par} )
            echo -e "$rho1\n$rho2\n$rhoall" > eje
        done

    but it is telling me this:

        (standard_in) 1: syntax error
        sed: -e expression #1, char 6: unexpected `,'
        sed: -e expression #1, char 1: unknown command: `,'

    Please, I appreciate any help with this issue... Thanks, Pablo
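    The error messages point at two lines of the script; a sketch of the likely fix, assuming $a was meant to be $awk1 (the variable computed just above it):

        awk2=$(echo "$awk1 + 379" | bc -l)    # use $(...) alone: wrapping it in backticks tries to
                                              # execute bc's output as a command, and since $a is
                                              # never set, bc receives just "+379" and prints the
                                              # "(standard_in) 1: syntax error" message
        sed -n "${awk1},${awk2}p" run.adf > first    # quote the whole range expression; when $awk2
                                                     # ends up empty, the original quoting hands sed
                                                     # a dangling comma, which is what triggers both
                                                     # "unexpected `,'" errors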

    Read the article

  • Why does quickly package --extras fail (where quickly package doesn't)?

    - by Pablo
    When I attempt to use quickly package --verbose --extras on my application, I get these errors at the end:

        sed -i "s|__soundboard_data_directory__ =.*|__soundboard_data_directory__ = '/opt/extras.ubuntu.com/soundboard/share/soundboard/'|" debian/soundboard/opt/extras.ubuntu.com/soundboard/soundboard*/soundboardconfig.py
        sed: can't read debian/soundboard/opt/extras.ubuntu.com/soundboard/soundboard*/soundboardconfig.py: No such file or directory
        make[1]: *** [override_dh_install] Error 2
        make[1]: Leaving directory `/home/pablo/soundboard'
        make: *** [binary] Error 2
        dpkg-buildpackage: error: fakeroot debian/rules binary gave error exit status 2

    I don't have a clue what is wrong here. When I run quickly package --extras on a clean template, it runs fine. soundboardconfig.py is an unmodified appnameconfig.py that the template generates. I'm not sure whether my full source code is needed for this, but it can be provided. EDIT: Forgot to mention that quickly package creates a working package; only --extras fails.

    Read the article

  • Wine PPA dependency problems after upgrading to 12.04

    - by Pablo
    I recently upgraded my Ubuntu to 12.04. Since then, I can't use Synaptic at all. After trying to repair the issue through Synaptic, it returns:

        installArchives() failed:
        dpkg: dependency problems prevent configuration of wine1.4-i386:i386:
         wine1.4-i386:i386 depends on wine1.4-common (= 1.4.1-0ubuntu1~precise1~ppa3); however:
          Version of wine1.4-common on system is 1.4-0ubuntu4.
        dpkg: error processing wine1.4-i386:i386 (--configure):
         dependency problems - leaving unconfigured
        dpkg: dependency problems prevent configuration of wine1.4:
         wine1.4 depends on wine1.4-amd64 (= 1.4-0ubuntu4); however:
          Version of wine1.4-amd64 on system is 1.4.1-0ubuntu1~precise1~ppa3.
         wine1.4 depends on wine1.4-i386 (= 1.4-0ubuntu4); however:
        No apport report written because MaxReports is reached already
        No apport report written because MaxReports is reached already
          Package wine1.4-i386 is not installed.
          Version of wine1.4-i386:i386 on system is 1.4.1-0ubuntu1~precise1~ppa3.
        dpkg: error processing wine1.4 (--configure):
         dependency problems - leaving unconfigured
        dpkg: dependency problems prevent configuration of wine1.4-common:
         wine1.4-common depends on wine1.4 (= 1.4-0ubuntu4); however:
          Package wine1.4 is not configured yet.
        dpkg: error processing wine1.4-common (--configure):
         dependency problems - leaving unconfigured
        dpkg: dependency problems prevent configuration of wine1.4-amd64:
         wine1.4-amd64 depends on wine1.4-common (= 1.4.1-0ubuntu1~precise1~ppa3); however:
          Version of wine1.4-common on system is 1.4-0ubuntu4.
        dpkg: error processing wine1.4-amd64 (--configure):
         dependency problems - leaving unconfigured
        dpkg: dependency problems prevent configuration of wine:
         wine depends on wine1.4; however:
          Package wine1.4 is not configured yet.
        dpkg: error processing wine (--configure):
         dependency problems - leaving unconfigured
        No apport report written because MaxReports is reached already
        No apport report written because MaxReports is reached already
        No apport report written because MaxReports is reached already
        dpkg: dependency problems prevent configuration of playonlinux:
         playonlinux depends on wine | wine-unstable; however:
          Package wine is not configured yet.
          Package wine1.4 which provides wine is not configured yet.
          Package wine-unstable is not installed.
        dpkg: error processing playonlinux (--configure):
         dependency problems - leaving unconfigured
        Errors were encountered while processing:
         wine1.4-i386:i386
         wine1.4
         wine1.4-common
         wine1.4-amd64
         wine
         playonlinux
        Error in function: SystemError: E:Sub-process /usr/bin/dpkg returned an error code (1)
        dpkg: dependency problems prevent configuration of wine1.4:
         wine1.4 depends on wine1.4-amd64 (= 1.4-0ubuntu4); however:
          Version of wine1.4-amd64 on system is 1.4.1-0ubuntu1~precise1~ppa3.
         wine1.4 depends on wine1.4-i386 (= 1.4-0ubuntu4); however:
          Package wine1.4-i386 is not installed.
          Version of wine1.4-i386:i386 on system is 1.4.1-0ubuntu1~precise1~ppa3.
        dpkg: error processing wine1.4 (--configure):
         dependency problems - leaving unconfigured
        dpkg: dependency problems prevent configuration of wine:
         wine depends on wine1.4; however:
          Package wine1.4 is not configured yet.
        dpkg: error processing wine (--configure):
         dependency problems - leaving unconfigured
        dpkg: dependency problems prevent configuration of wine1.4-common:
         wine1.4-common depends on wine1.4 (= 1.4-0ubuntu4); however:
          Package wine1.4 is not configured yet.
        dpkg: error processing wine1.4-common (--configure):
         dependency problems - leaving unconfigured
        dpkg: dependency problems prevent configuration of wine1.4-amd64:
         wine1.4-amd64 depends on wine1.4-common (= 1.4.1-0ubuntu1~precise1~ppa3); however:
          Version of wine1.4-common on system is 1.4-0ubuntu4.
        dpkg: error processing wine1.4-amd64 (--configure):
         dependency problems - leaving unconfigured
        dpkg: dependency problems prevent configuration of wine1.4-i386:i386:
         wine1.4-i386:i386 depends on wine1.4-common (= 1.4.1-0ubuntu1~precise1~ppa3); however:
          Version of wine1.4-common on system is 1.4-0ubuntu4.
        dpkg: error processing wine1.4-i386:i386 (--configure):
         dependency problems - leaving unconfigured
        dpkg: dependency problems prevent configuration of playonlinux:
         playonlinux depends on wine | wine-unstable; however:
          Package wine is not configured yet.
          Package wine1.4 which provides wine is not configured yet.
          Package wine-unstable is not installed.
        dpkg: error processing playonlinux (--configure):
         dependency problems - leaving unconfigured

    I tried the following:

        sudo apt-get clean
        sudo apt-get autoclean
        sudo apt-get install -f

    And it returns:

        Reading package lists... Done
        Building dependency tree
        Reading state information... Done
        Correcting dependencies... Done
        The following extra packages will be installed:
          wine1.4 wine1.4-common
        Suggested packages:
          dosbox
        The following packages will be upgraded:
          wine1.4 wine1.4-common
        2 upgraded, 0 newly installed, 0 to remove and 110 not upgraded.
        6 not fully installed or removed.
        Need to get 0 B/1,126 kB of archives.
        After this operation, 9,216 B of additional disk space will be used.
        Do you want to continue [Y/n]? y
        dpkg: dependency problems prevent configuration of wine1.4-i386:i386:
         wine1.4-i386:i386 depends on wine1.4-common (= 1.4.1-0ubuntu1~precise1~ppa3); however:
          Version of wine1.4-common on system is 1.4-0ubuntu4.
        dpkg: error processing wine1.4-i386:i386 (--configure):
         dependency problems - leaving unconfigured
        dpkg: dependency problems prevent configuration of wine1.4:
         wine1.4 depends on wine1.4-amd64 (= 1.4-0ubuntu4); however:
          Version of wine1.4-amd64 on system is 1.4.1-0ubuntu1~precise1~ppa3.
         wine1.4 depends on wine1.4-i386 (= 1.4-0ubuntu4); however:
          Package wine1.4-i386 is not installed.
          Version of wine1.4-i386:i386 on system is 1.4.1-0ubuntu1~precise1~ppa3.
        No apport report written because the error message indicates its a followup error from a previous failure.
        dpkg: error processing wine1.4 (--configure):
         dependency problems - leaving unconfigured
        dpkg: dependency problems prevent configuration of wine1.4-common:
         wine1.4-common depends on wine1.4 (= 1.4-0ubuntu4); however:
          Package wine1.4 is not configured yet.
        dpkg: error processing wine1.4-common (--configure):
         dependency problems - leaving unconfigured
        dpkg: dependency problems prevent configuration of wine1.4-amd64:
         wine1.4-amd64 depends on wine1.4-common (= 1.4.1-0ubuntu1~precise1~ppa3); however:
          Version of wine1.4-common on system is 1.4-0ubuntu4.
        dpkg: error processing wine1.4-amd64 (--configure):
         dependency problems - leaving unconfigured
        dpkg: dependency problems prevent configuration of wine:
         wine depends on wine1.4; however:
          Package wine1.4 is not configured yet.
        dpkg: error processing wine (--configure):
         dependency problems - leaving unconfigured
        No apport report written because the error message indicates its a followup error from a previous failure.
        No apport report written because the error message indicates its a followup error from a previous failure.
        No apport report written because MaxReports is reached already
        No apport report written because MaxReports is reached already
        dpkg: dependency problems prevent configuration of playonlinux:
         playonlinux depends on wine | wine-unstable; however:
          Package wine is not configured yet.
          Package wine1.4 which provides wine is not configured yet.
          Package wine-unstable is not installed.
        dpkg: error processing playonlinux (--configure):
         dependency problems - leaving unconfigured
        Errors were encountered while processing:
         wine1.4-i386:i386
         wine1.4
         wine1.4-common
         wine1.4-amd64
         wine
         playonlinux
        E: Sub-process /usr/bin/dpkg returned an error code (1)

    Can anyone help me? I've been searching for a solution for days and still can't solve it. Thanks! Pablo
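    A possible way out, assuming the newer wine packages came from the ubuntu-wine PPA (the ~ppa3 version strings suggest so), is to let ppa-purge downgrade everything back to the Ubuntu archive versions:

        sudo apt-get install ppa-purge
        sudo ppa-purge ppa:ubuntu-wine/ppa   # disables the PPA and downgrades its packages
                                             # to the versions in the official archive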

    Read the article

  • PowerShell Script to Deploy Multiple VM on Azure in Parallel #azure #powershell

    - by Marco Russo (SQLBI)
    This blog is usually dedicated to Business Intelligence and SQL Server, but I couldn’t easily find simple PowerShell scripts on the web to help me deploy a number of virtual machines on Azure that I use for testing and development. Since I need to deploy, start, stop and remove many virtual machines created from a common image I created (you know, Tabular is not part of the standard images provided by Microsoft…), I wanted to minimize the time required to execute every operation from my Windows Azure PowerShell console (but I suggest using Windows PowerShell ISE), so I also wanted to fire the commands in parallel as soon as possible, without losing the results in the console. In order to execute multiple commands in parallel, I used the Start-Job cmdlet; using Get-Job and Receive-Job, I wait for job completion and display the messages generated during background command execution. This technique allows me to reduce execution time when I have to deploy, start, stop or remove virtual machines. Please note that a few operations on Azure acquire an exclusive lock and cannot really be executed in parallel, but only one part of their execution time is subject to this lock. Thus, you obtain a better response time in these scenarios too (this is the case for the provisioning of a new VM). Finally, when you remove the VMs you still have the disks containing the virtual machines to remove. This cannot be done just after the VM removal, because you have to wait until the removal operation has completed on Azure. So I wrote a script that you have to run a few minutes after VM removal; it deletes the disks (and VHDs) no longer related to a VM. I just check that the disks were associated with the original image name used to provision the VMs (so I don’t remove other disks deployed by other batches that I might want to preserve). These examples are specific to my scenario; if you need more complex configurations, you have to change and adapt the code. But if your need is to create multiple instances of the same VM running in a workgroup, these scripts should be good enough. I prepared the following PowerShell scripts:
    - ProvisionVMs: provisions many VMs in parallel starting from the same image. It creates one service for each VM.
    - RemoveVMs: removes all the VMs in parallel – it also removes the service created for each VM.
    - StartVMs: starts all the VMs in parallel.
    - StopVMs: stops all the VMs in parallel.
    - RemoveOrphanDisks: removes all the disks no longer used by any VM. Run this script a few minutes after the RemoveVMs script.
    ProvisionVMs

        # Name of subscription
        $SubscriptionName = "Copy the SubscriptionName property you get from Get-AzureSubscription"

        # Name of storage account (where VMs will be deployed)
        $StorageAccount = "Copy the Label property you get from Get-AzureStorageAccount"

        function ProvisionVM( [string]$VmName ) {
            Start-Job -ArgumentList $VmName {
                param($VmName)
                $Location = "Copy the Location property you get from Get-AzureStorageAccount"
                $InstanceSize = "A5" # You can use any other instance, such as Large, A6, and so on
                $AdminUsername = "UserName" # Write the name of the administrator account in the new VM
                $Password = "Password"      # Write the password of the administrator account in the new VM
                $Image = "Copy the ImageName property you get from Get-AzureVMImage"
                # You can list your own images using the following command:
                # Get-AzureVMImage | Where-Object {$_.PublisherName -eq "User" }
                New-AzureVMConfig -Name $VmName -ImageName $Image -InstanceSize $InstanceSize |
                    Add-AzureProvisioningConfig -Windows -Password $Password -AdminUsername $AdminUsername |
                    New-AzureVM -Location $Location -ServiceName "$VmName" -Verbose
            }
        }

        # Set the proper storage - you might remove this line if you have only one storage in the subscription
        Set-AzureSubscription -SubscriptionName $SubscriptionName -CurrentStorageAccount $StorageAccount

        # Select the subscription - this line is fundamental if you have access to multiple subscriptions
        # You might remove this line if you have only one subscription
        Select-AzureSubscription -SubscriptionName $SubscriptionName

        # Every line in the following list provisions one VM using the name specified in the argument
        # You can change the number of lines - use a unique name for every VM - don't reuse names
        # already used in other VMs already deployed
        ProvisionVM "test10"
        ProvisionVM "test11"
        ProvisionVM "test12"
        ProvisionVM "test13"
        ProvisionVM "test14"
        ProvisionVM "test15"
        ProvisionVM "test16"
        ProvisionVM "test17"
        ProvisionVM "test18"
        ProvisionVM "test19"
        ProvisionVM "test20"

        # Wait for all to complete
        While (Get-Job -State "Running") {
            Get-Job -State "Completed" | Receive-Job
            Start-Sleep 1
        }

        # Display output from all jobs
        Get-Job | Receive-Job

        # Cleanup of jobs
        Remove-Job *

        # Displays batch completed
        echo "Provisioning VM Completed"

    RemoveVMs

        # Name of subscription
        $SubscriptionName = "Copy the SubscriptionName property you get from Get-AzureSubscription"

        function RemoveVM( [string]$VmName ) {
            Start-Job -ArgumentList $VmName {
                param($VmName)
                Remove-AzureService -ServiceName $VmName -Force -Verbose
            }
        }

        # Select the subscription - this line is fundamental if you have access to multiple subscriptions
        # You might remove this line if you have only one subscription
        Select-AzureSubscription -SubscriptionName $SubscriptionName

        # Every line in the following list removes one VM using the name specified in the argument
        # You can change the number of lines - use a unique name for every VM - don't reuse names
        # already used in other VMs already deployed
        RemoveVM "test10"
        RemoveVM "test11"
        RemoveVM "test12"
        RemoveVM "test13"
        RemoveVM "test14"
        RemoveVM "test15"
        RemoveVM "test16"
        RemoveVM "test17"
        RemoveVM "test18"
        RemoveVM "test19"
        RemoveVM "test20"

        # Wait for all to complete
        While (Get-Job -State "Running") {
            Get-Job -State "Completed" | Receive-Job
            Start-Sleep 1
        }

        # Display output from all jobs
        Get-Job | Receive-Job

        # Cleanup
        Remove-Job *

        # Displays batch completed
        echo "Remove VM Completed"

    StartVMs

        # Name of subscription
        $SubscriptionName = "Copy the SubscriptionName property you get from Get-AzureSubscription"

        function StartVM( [string]$VmName ) {
            Start-Job -ArgumentList $VmName {
                param($VmName)
                Start-AzureVM -Name $VmName -ServiceName $VmName -Verbose
            }
        }

        # Select the subscription - this line is fundamental if you have access to multiple subscriptions
        # You might remove this line if you have only one subscription
        Select-AzureSubscription -SubscriptionName $SubscriptionName

        # Every line in the following list starts one VM using the name specified in the argument
        # You can change the number of lines - use a unique name for every VM - don't reuse names
        # already used in other VMs already deployed
        StartVM "test10"
        StartVM "test11"
        StartVM "test12"
        StartVM "test13"
        StartVM "test14"
        StartVM "test15"
        StartVM "test16"
        StartVM "test17"
        StartVM "test18"
        StartVM "test19"
        StartVM "test20"

        # Wait for all to complete
        While (Get-Job -State "Running") {
            Get-Job -State "Completed" | Receive-Job
            Start-Sleep 1
        }

        # Display output from all jobs
        Get-Job | Receive-Job

        # Cleanup
        Remove-Job *

        # Displays batch completed
        echo "Start VM Completed"

    StopVMs

        # Name of subscription
        $SubscriptionName = "Copy the SubscriptionName property you get from Get-AzureSubscription"

        function StopVM( [string]$VmName ) {
            Start-Job -ArgumentList $VmName {
                param($VmName)
                Stop-AzureVM -Name $VmName -ServiceName $VmName -Verbose -Force
            }
        }

        # Select the subscription - this line is fundamental if you have access to multiple subscriptions
        # You might remove this line if you have only one subscription
        Select-AzureSubscription -SubscriptionName $SubscriptionName

        # Every line in the following list stops one VM using the name specified in the argument
        # You can change the number of lines - use a unique name for every VM - don't reuse names
        # already used in other VMs already deployed
        StopVM "test10"
        StopVM "test11"
        StopVM "test12"
        StopVM "test13"
        StopVM "test14"
        StopVM "test15"
        StopVM "test16"
        StopVM "test17"
        StopVM "test18"
        StopVM "test19"
        StopVM "test20"

        # Wait for all to complete
        While (Get-Job -State "Running") {
            Get-Job -State "Completed" | Receive-Job
            Start-Sleep 1
        }

        # Display output from all jobs
        Get-Job | Receive-Job

        # Cleanup
        Remove-Job *

        # Displays batch completed
        echo "Stop VM Completed"

    RemoveOrphanDisks

        $ImageName = "Copy the ImageName property you get from Get-AzureVMImage"
        # You can list your own images using the following command:
        # Get-AzureVMImage | Where-Object {$_.PublisherName -eq "User" }

        # Remove all orphan disks coming from the image specified in $ImageName
        Get-AzureDisk |
            Where-Object {$_.AttachedTo -eq $null -and $_.SourceImageName -eq $ImageName} |
            Remove-AzureDisk -DeleteVHD -Verbose

    Read the article

  • SSIS packages incompatibilities between SSIS 2008 and SSIS 2008 R2

    - by Marco Russo (SQLBI)
    When you install the SQL Server 2008 R2 workstation components, you get a newer version of BIDS (BI Development Studio, included in the workstation components) that replaces the BIDS 2008 version (BIDS 2005 still lives side-by-side). Everything would be fine if you could use the newer version to edit any 2008 AND 2008 R2 project. The SSIS editor doesn't offer a way to set the "compatibility level" of a package, because it is almost entirely unchanged. However, if a package has an ADO.NET Destination Adapter, there is a difference...(read more)

    Read the article

  • How to install SpeedFiler on Outlook 2010 (aka Outlook 14)

    - by Marco Russo (SQLBI)
    This is off-topic here on SQLBlog, I know, but I think there will be many users like me wanting to find the solution to this problem. If you have SpeedFiler, there is a problem installing it on Outlook 2010. The SpeedFiler setup stops, showing this message:

        SpeedFiler 2.0.0.0 works with the following products:
        Microsoft Office Outlook 2003
        Microsoft Office Outlook 2007
        None of these products seems to be installed on your system. SpeedFiler will not be installed.

    Well, in reality SpeedFiler works...(read more)

    Read the article

  • Difference between LASTDATE and MAX for semi-additive measures in #DAX

    - by Marco Russo (SQLBI)
    I recently wrote an article on SQLBI about semi-additive measures in DAX. I included the formulas for common calculations, and there is an interesting point that is worth a longer digression: the difference between LASTDATE and MAX (the same applies to FIRSTDATE and MIN – I just describe the former; for the latter, replace the corresponding names). LASTDATE is a DAX function that receives an argument that has to be a date column and returns the last date active in the current filter context. Apparently, it is the same value returned by MAX, which returns the maximum value of the argument in the current filter context. Of course, MAX can receive any numeric type (including date), whereas LASTDATE only accepts a column of type date. But overall, they seem identical in the result. However, the difference is a semantic one. In fact, this expression:

        LASTDATE ( 'Date'[Date] )

    could also be rewritten as:

        FILTER ( VALUES ( 'Date'[Date] ), 'Date'[Date] = MAX ( 'Date'[Date] ) )

    LASTDATE is a function that returns a table with a single column and one row, whereas MAX returns a scalar value. In DAX, any expression with one row and one column can be automatically converted into the corresponding scalar value of the single cell returned. The opposite is not true. So you can use LASTDATE in any expression where a table or a scalar is required, but MAX can be used only where a scalar expression is expected. Since LASTDATE returns a table, you can use it in any expression that expects a table as an argument, such as COUNTROWS. In fact, you can write this expression:

        COUNTROWS ( LASTDATE ( 'Date'[Date] ) )

    which will always return 1 or BLANK (if there are no dates active in the current filter context). You cannot pass MAX as an argument of COUNTROWS. You can pass to LASTDATE a reference to a column or any table expression that returns a column. The following two syntaxes are semantically identical:

        LASTDATE ( 'Date'[Date] )
        LASTDATE ( VALUES ( 'Date'[Date] ) )

    The result is the same, and the use of VALUES is not required because it is implicit in the first syntax, unless you have a row context active. In that case, be careful: using the LASTDATE function with a direct column reference in a row context will produce a context transition (the row context is transformed into a filter context) that hides the external filter context, whereas using VALUES in the argument preserves the existing filter context without applying the context transition of the row context (see the LastDate and Values columns in the following query and result). You can use any other table expression (including a FILTER) as the LASTDATE argument. For example, the following expression will always return the last date available in the Date table, regardless of the current filter context:

        LASTDATE ( ALL ( 'Date'[Date] ) )

    The following query recaps the results produced by the different syntaxes described:

        EVALUATE
        CALCULATETABLE (
            ADDCOLUMNS (
                VALUES ( 'Date'[Date] ),
                "LastDate", LASTDATE ( 'Date'[Date] ),
                "Values", LASTDATE ( VALUES ( 'Date'[Date] ) ),
                "Filter", LASTDATE ( FILTER ( VALUES ( 'Date'[Date] ), 'Date'[Date] = MAX ( 'Date'[Date] ) ) ),
                "All", LASTDATE ( ALL ( 'Date'[Date] ) ),
                "Max", MAX ( 'Date'[Date] )
            ),
            'Date'[Calendar Year] = 2008
        )
        ORDER BY 'Date'[Date]

    The LastDate column repeats the current date, because the context transition happens within ADDCOLUMNS. The Values column prevents the existing filter context from being replaced by the context transition, so the result corresponds to the last day in year 2008 (which is filtered in the external CALCULATETABLE). The Filter column works like the Values one, even though we use the FILTER instead of the LASTDATE approach. The All column shows the result of LASTDATE ( ALL ( 'Date'[Date] ) ), which ignores the filter on Calendar Year (in fact, the date returned is in year 2010). Finally, the Max column shows the result of the MAX formula, which is the easiest to use; the only thing it cannot do is return a table when you need one (as in a filter argument of CALCULATE or CALCULATETABLE, where using LASTDATE is shorter). I know that using LASTDATE in complex expressions might create some issues. In my experience, the fact that a context transition happens automatically in the presence of a row context is the main source of confusion and unexpected results in DAX formulas using this function. For a reference of DAX formulas using MAX and LASTDATE, read my article about semi-additive measures in DAX.

    Read the article

  • Optimize SUMMARIZE with ADDCOLUMNS in Dax #ssas #tabular #dax #powerpivot

    - by Marco Russo (SQLBI)
    If you started using DAX as a query language, you might have encountered some performance issues with SUMMARIZE. The problem is related to the calculations you put in the SUMMARIZE by adding what are called extended columns, which compute their value within a filter context defined by the rows of the group that SUMMARIZE uses to produce each row in the output. Most of the time, for simple table expressions used in the first parameter of SUMMARIZE, you can optimize performance by removing the extended columns from the SUMMARIZE and adding them with an ADDCOLUMNS function. In practice, instead of writing:

        SUMMARIZE ( <table>, <group_by_column>, <column_name>, <expression> )

    you can write:

        ADDCOLUMNS (
            SUMMARIZE ( <table>, <group_by_column> ),
            <column_name>, CALCULATE ( <expression> )
        )

    The performance difference might be huge (orders of magnitude), but this optimization might produce different semantics, and in those cases it should not be used. A longer discussion of this topic is included in my Best Practices Using SUMMARIZE and ADDCOLUMNS article on SQLBI, which also includes several details about the DAX syntax with extended columns. For example, did you know that you can create an extended column in SUMMARIZE and ADDCOLUMNS with the same name as an existing measure? It is *not* a good thing to do, and by reading the article you will discover why. Enjoy DAX!
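    As a concrete instance of the pattern (a sketch only – the Sales table, its columns and the 'Date'[Calendar Year] column are hypothetical names from a generic sales model):

        -- slower: the extended column is computed by SUMMARIZE itself
        EVALUATE
        SUMMARIZE (
            Sales,
            'Date'[Calendar Year],
            "Sales Amount", SUMX ( Sales, Sales[Quantity] * Sales[Unit Price] )
        )

        -- usually much faster: group first, then add the column with CALCULATE,
        -- which restores the per-group filter context via context transition
        EVALUATE
        ADDCOLUMNS (
            SUMMARIZE ( Sales, 'Date'[Calendar Year] ),
            "Sales Amount", CALCULATE ( SUMX ( Sales, Sales[Quantity] * Sales[Unit Price] ) )
        )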

    Read the article

  • Implement Budget Allocation in DAX for Power Pivot and Tabular #powerpivot #tabular #ssas #dax

    - by Marco Russo (SQLBI)
    Comparing sales and budget, or costs and budget, is a very common operation. However, it is often the case that the tables containing the budget and the data to compare it with have different granularities. There are two ways to deal with that: you can limit the comparison to the granularity that is common to the two tables, or you can allocate the budget where it’s not defined. For example, if you have a budget defined by quarter and category, you might want to allocate it by month and product. In this way, you can do the comparison as if you had a more granular definition of the budget, without actually having to do the manual job of allocating data (usually in an Excel worksheet!). If you want to do budget allocation in DAX, you can use the Budget Patterns we published on DAX Patterns; a simplified sketch of the idea follows below. If you come from an MDX/OLAP background, at first you might find it hard to work without the attribute hierarchies that help you propagate the budget values to lower hierarchical levels. However, I think that once you get used to DAX, you will find the behavior very predictable and easy to “debug”, even for more complex allocation formulas. You just have to be careful in writing the DAX formula, but the pattern we wrote should help you design the right data model, without creating physical relationships to the budget table! This pattern is also based on the Handling Different Granularities scenario I discussed a couple of weeks ago.
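    A minimal sketch of the allocation idea, not the full DAX Patterns formula, with hypothetical names (a Budget table holding one Amount per Quarter, and a standard 'Date' table with no physical relationship to Budget): the quarterly budget is spread over the selected days in proportion to the number of days.

        Allocated Budget :=
        SUMX (
            VALUES ( 'Date'[Quarter] ),    -- iterate the quarters in the current selection
            CALCULATE (
                SUM ( Budget[Amount] ),
                FILTER ( ALL ( Budget ), Budget[Quarter] = 'Date'[Quarter] )  -- virtual relationship
            )
                * DIVIDE (
                    CALCULATE ( COUNTROWS ( 'Date' ) ),       -- days selected within this quarter
                    CALCULATE (
                        COUNTROWS ( 'Date' ),
                        ALLEXCEPT ( 'Date', 'Date'[Quarter] ) -- days in the whole quarter
                    )
                )
        )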

    Read the article

  • Microsoft SQL Server 2012 Analysis Services – The BISM Tabular Model #ssas #tabular #bism

    - by Marco Russo (SQLBI)
    Alberto, Chris and I spent many months (many nights, holidays and also working days of the last months) writing the book we would have liked to read when we started working with Analysis Services Tabular. A book that explains how to use Tabular, how to model data with Tabular, how Tabular works internally and how to optimize a Tabular model – all those things you need in order to start on a real project and make a happy customer. You know, we’re all consultants after all, so customer satisfaction is really important if we want to be paid for our job! Now the writing is finished; we’re in the final stage of editing and reviews and we look forward to getting our print copies. Its title is very long: Microsoft SQL Server 2012 Analysis Services – The BISM Tabular Model. But the important thing is that you can already (pre)order it. This is the list of chapters:
    01. BISM Architecture
    02. Guided Tour on Tabular
    03. Loading Data Inside Tabular
    04. DAX Basics
    05. Understanding Evaluation Contexts
    06. Querying Tabular
    07. DAX Advanced
    08. Understanding Time Intelligence in DAX
    09. Vertipaq Engine
    10. Using Tabular Hierarchies
    11. Data Modeling in Tabular
    12. Using Advanced Tabular Relationships
    13. Tabular Presentation Layer
    14. Tabular and PowerPivot for Excel
    15. Tabular Security
    16. Interfacing with Tabular
    17. Tabular Deployment
    18. Optimization and Monitoring
    And this is the book cover – have a good read!

    Read the article

  • DATE function does not support all the dates in DAX by design #powerpivot #tabular #dax

    - by Marco Russo (SQLBI)
    The DATE function in DAX has this simple syntax: DATE( <year>, <month>, <day> ). If you are like me, you never read the BOL note that clearly says it supports dates beginning with March 1, 1900. In fact, I wrongly assumed that it would support any date that can be represented in the Date data type in Data Models, that is, all dates beginning with January 1, 1900. The funny thing is that in some of the BOL documentation you will find that the Date data type supports dates after March 1, 1900 (which would seem not to include that date, but this is a detail…). But we should not digress. The real issue is that if you try to call the DATE function passing values between January 1 and February 28, 1900, you will see a different day as a result:

        evaluate row ( "x", DATE( 1900, 1, 1 ) )
        -- returns the WRONG result: [x] 12/31/1899 12:00:00 AM

        evaluate row ( "x", DATE( 1900, 2, 29 ) )
        -- returns the WRONG result: [x] 2/28/1900 12:00:00 AM

        evaluate row ( "x", DATE( 1900, 3, 1 ) )
        -- returns the CORRECT result: [x] 3/1/1900 12:00:00 AM

    As usual, this is not a bug. It is “by design”. The DATE function works this way in Excel, and in Excel, too, it was “by design”. In this case the design replicates a bug in Lotus 1-2-3, which handled 1900 as a leap year, even though it isn’t. The first release of Lotus 1-2-3 is dated 1983. I hope many of my readers are younger than that. I opened a bug on Connect. Please vote for it. I would like Microsoft to change the classification of this type of item from “by design” (as we can expect) to “by genetic disease”. Or “by historical respect”, in order to be more politically correct.

    Read the article

  • #PowerPivot Workshop Online for America’s Time Zones #ppws

    - by Marco Russo (SQLBI)
    After so many requests, we have finally arranged a PowerPivot Workshop online edition dedicated to America’s time zones! It is scheduled for December 19-20, 2012, with this schedule:
    - US Eastern Time (EST): 10:00am-1:00pm / 2:00pm-5:00pm
    - US Central Time (CST): 9:00am-12:00pm / 1:00pm-4:00pm
    - US Pacific Time (PST): 7:00am-10:00am / 11:00am-1:00pm
    - Bogotá (Colombia): 10:00am-1:00pm / 2:00pm-5:00pm
    - São Paulo (Brazil): 1:00pm-4:00pm / 5:00pm-8:00pm
    - Buenos Aires (Argentina): 12:00pm-3:00pm / 4:00pm-7:00pm
    ...(read more)

    Read the article

  • xVelocity engines compared: VertiPaq vs ColumnStore #ssas #vertipaq #xvelocity #sql #tabular

    - by Marco Russo (SQLBI)
    During the last months, Alberto and I worked on several projects using Analysis Services Tabular, and we had to face real-world issues such as complex queries, large data volumes, frequent data updates and so on. Sometimes we faced the challenge of comparing Tabular performance with SQL Server. It seemed nonsense, because even if the same core xVelocity technology is implemented in both products (SQL Server 2012 uses ColumnStore indexes, whereas Analysis Services 2012 uses VertiPaq), we initially assumed that the better-optimized in-memory engine used by Analysis Services would always beat SQL Server. However, we discovered several important things:
    - Processing time might be different, and having data on SQL Server could make ColumnStore way faster for processing.
    - Partitioning in SQL Server might be much more effective for query performance than in Analysis Services.
    - A single query can easily scale to more processors on SQL Server, whereas in Analysis Services the formula engine is single-threaded and could be a bottleneck for certain queries.
    - In case of a large workload with many concurrent users, the storage engine cache in Analysis Services could be a big advantage over SQL Server, especially for scalability.
    As you can see, these considerations are not always obvious, and you might be tempted to make other assumptions based on this information. Well, don’t do that. Before anything else, read the whitepaper VertiPaq vs ColumnStore Comparison written by Alberto Ferrari. Then, measure your workload. Finally, draw some conclusions. But don’t make too many assumptions. You might be wrong, as we were at the beginning of this journey.

    Read the article

  • New licensing for SQL Server 2012 and #BISM #Tabular usage

    - by Marco Russo (SQLBI)
    Last week Microsoft announced a new licensing schema for SQL Server 2012. If you are interested in an extensive discussion of the new licensing scheme, Denny Cherry wrote a great blog post about it. I’d like to comment on the new BI Edition license. Teo Lachev has already commented on the numbers, and I agree with him. I generally like the new licensing model of SQL 2012. It maintains a very low entry barrier for SSRS/SSAS/SSIS (Standard Edition). It has a reasonable licensing schema for 20-50...(read more)

    Read the article

  • Converting #MDX to #DAX and PowerPivot Workshop online #ppws

    - by Marco Russo (SQLBI)
    I just published the article Converting MDX to DAX – First Steps on the renewed SQLBI web site. The reason is that with BISM Tabular in Analysis Services 2012 you will be able to write queries in both DAX and MDX. If you already know MDX, you might wonder how to “translate” your MDX knowledge into DAX. I think this is another way to improve your knowledge of DAX: it is built on different concepts, and the comparison should be helpful for this purpose. This is...(read more)
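    To give the flavor of such a translation, here is one minimal pair (a sketch with hypothetical model names – a Sales Amount measure and a Calendar Year column on a Date dimension):

        -- the MDX query:
        -- SELECT [Measures].[Sales Amount] ON COLUMNS,
        --        [Date].[Calendar Year].MEMBERS ON ROWS
        -- FROM [Model]
        --
        -- one possible DAX equivalent (the measure reference is evaluated
        -- per year thanks to the implicit CALCULATE around measures):
        EVALUATE
        ADDCOLUMNS (
            VALUES ( 'Date'[Calendar Year] ),
            "Sales Amount", [Sales Amount]
        )
        ORDER BY 'Date'[Calendar Year]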

    Read the article

1 2 3 4 5 6 7 8 9 10 11 12  | Next Page >