Search Results

Search found 5751 results on 231 pages for 'analysis patterns'.

Page 52 of 231

  • Performance analysis for a Java application

    - by user1827614
    I want to measure the performance of my application and would like to be able to configure the stats per module (enable them for some modules and disable them for others). I want to measure things like memory usage, thread counts, average bandwidth, etc. Can anyone suggest something? I am new to this. I think VisualVM is good, but it does not support per-module configuration. Would Perf4j or Admin4j work here? Has anyone used these before?
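
    As a starting point, the JDK's built-in management beans can report process-wide memory and thread stats. A minimal sketch follows; the per-module on/off switch and the module names are assumptions for illustration, not a feature of any of the tools mentioned:

        import java.lang.management.ManagementFactory;
        import java.lang.management.MemoryMXBean;
        import java.lang.management.ThreadMXBean;
        import java.util.Set;

        public class ModuleStats {
            // Hypothetical per-module switch; a real tool would make this configurable.
            private static final Set<String> ENABLED = Set.of("billing", "reporting");

            public static void sample(String module) {
                if (!ENABLED.contains(module)) return;   // stats disabled for this module
                MemoryMXBean mem = ManagementFactory.getMemoryMXBean();
                ThreadMXBean threads = ManagementFactory.getThreadMXBean();
                System.out.printf("[%s] heap used: %d bytes, live threads: %d%n",
                        module,
                        mem.getHeapMemoryUsage().getUsed(),
                        threads.getThreadCount());
            }

            public static void main(String[] args) {
                sample("billing");     // prints stats
                sample("imports");     // silently skipped: not enabled
            }
        }

    Note the numbers are JVM-wide; attributing them to a module still requires sampling at module boundaries, which is roughly what Perf4j-style timing wrappers do.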

    Read the article

  • C# string analysis

    - by Sergei
    I have a string, for example " :)text :)text:) :-) word :-( ", and I need to append it to a textbox (or somewhere else) with one condition: instead of ':)', ':-(', etc., I need to call a function that inserts a specific symbol. I think a solution exists using a finite-state machine, but I don't know how to implement it. Any advice is welcome.
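
    A minimal sketch of the scanning idea, written in Java (the emoticon-to-symbol mapping and the replaceEmoticons name are illustrative assumptions; the same approach ports directly to C#): at each position, try to match the longest known token, emitting either the mapped symbol or the raw character.

        import java.util.LinkedHashMap;
        import java.util.Map;

        public class EmoticonReplacer {
            // Longest tokens first, so ":-)" is tried before ":)".
            private static final Map<String, String> TOKENS = new LinkedHashMap<>();
            static {
                TOKENS.put(":-)", "\u263A");  // hypothetical symbol choices
                TOKENS.put(":-(", "\u2639");
                TOKENS.put(":)", "\u263A");
            }

            static String replaceEmoticons(String input) {
                StringBuilder out = new StringBuilder();
                int i = 0;
                while (i < input.length()) {
                    String matched = null;
                    for (Map.Entry<String, String> e : TOKENS.entrySet()) {
                        if (input.startsWith(e.getKey(), i)) {
                            matched = e.getKey();
                            out.append(e.getValue());
                            break;
                        }
                    }
                    if (matched != null) i += matched.length();
                    else out.append(input.charAt(i++));
                }
                return out.toString();
            }

            public static void main(String[] args) {
                System.out.println(replaceEmoticons(" :)text :)text:) :-) word :-( "));
            }
        }

    This is effectively the finite-state machine you suspected: the scanner's state is the current position plus the partial token match.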

    Read the article

  • PL/SQL pre-compile and code quality checks in an automated build environment?

    - by Lars Corneliussen
    We build software using Hudson and Maven. We have C#, Java and, last but not least, PL/SQL sources (sprocs, packages, DDL, CRUD). For C# and Java we do unit tests and code analysis, but we don't really know the health of our PL/SQL sources before we actually publish them to the target database.

    Requirements. There are a couple of things we want to test, in the following priority: (1) Are the sources valid, hence "compilable"? For packages, with respect to a certain database, would they compile? (2) Code quality: do we have code flaws like duplicates, too-complex methods, or other violations of a defined set of rules? Also, the tool must run headless (command line, Ant, ...), and we want to be able to run the analysis on a partial code base (changed sources only).

    Tools. We did a little research and found the following tools that could potentially help:

    - Cast Application Intelligence Platform (AIP): seems to be a server that grasps information about "anything". Couldn't find a console version that would export in a readable format.
    - Toad for Oracle: the Professional version is said to include something called Xpert that validates a set of rules against a code base.
    - Sonar + PL/SQL plugin: uses Toad for Oracle to display code health the Sonar way. This is for browsing the current state of the code base.
    - Semantic Designs DMSToolkit: quite general analysis of a source code base. Command line available?
    - Semantic Designs Clones Detector: detects clones. But also via command line?
    - Fortify Source Code Analyzer: seems to be focused on security issues. But maybe it is extensible?
    - more...

    So far, Toad for Oracle together with Sonar seems to be an elegant solution. But maybe we are missing something here? Any ideas? Other products? Experiences?

    Related questions on SO:
    http://stackoverflow.com/questions/531430/any-static-code-analysis-tools-for-stored-procedures
    http://stackoverflow.com/questions/839707/any-code-quality-tool-for-pl-sql
    http://stackoverflow.com/questions/956104/is-there-a-static-analysis-tool-for-python-ruby-sql-cobol-perl-and-pl-sql
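
    For the "would it compile" check specifically, one headless option (independent of the tools above) is to recompile each changed object against a scratch schema over JDBC and read Oracle's user_errors view. A minimal sketch, where the connection details and the package name are placeholders:

        import java.sql.Connection;
        import java.sql.DriverManager;
        import java.sql.ResultSet;
        import java.sql.SQLException;
        import java.sql.Statement;

        public class PlsqlCompileCheck {
            public static void main(String[] args) throws SQLException {
                boolean failed = false;
                // Placeholder connection details for a scratch/build schema;
                // requires the Oracle JDBC driver on the classpath.
                try (Connection con = DriverManager.getConnection(
                        "jdbc:oracle:thin:@//dbhost:1521/BUILD", "build_user", "secret");
                     Statement st = con.createStatement()) {

                    st.execute("ALTER PACKAGE my_pkg COMPILE");   // recompile the object

                    // user_errors lists compilation errors for objects in this schema.
                    try (ResultSet rs = st.executeQuery(
                            "SELECT line, position, text FROM user_errors "
                            + "WHERE name = 'MY_PKG' ORDER BY sequence")) {
                        while (rs.next()) {
                            failed = true;
                            System.err.printf("line %d, col %d: %s%n",
                                    rs.getInt(1), rs.getInt(2), rs.getString(3));
                        }
                    }
                }
                if (failed) System.exit(1);   // fail the Hudson build step
            }
        }

    Run from Ant or a Maven exec step, this covers requirement (1) for changed sources only; the quality rules in (2) still need one of the analysis tools.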

    Read the article

  • SSAS – Synchronisation performance

    - by ACALVETT
    I’ve always thought of SSAS synchronisation as a clever file-mirroring utility built into SSAS, and I have never considered the technology as bringing any performance gains to the table. So, it's a good job I like to revisit areas… :) I decided to compare the performance of Robocopy and SSAS synchronisation between 2 Windows 2003 servers running SSAS 2008 SP1 CU7 with 1Gb network links. For the Robocopy of the data directory I used the SQLCat Robocopy script. The results are shown below. SSAS Sync...(read more)

    Read the article

  • Sublinear Extra Space MergeSort

    - by hulkmeister
    I am reviewing basic algorithms from the book Algorithms by Robert Sedgewick, and I came across a problem in MergeSort that I am, sad to say, having difficulty solving. The problem is below:

    Sublinear extra space. Develop a merge implementation that reduces the extra space requirement to max(M, N/M), based on the following idea: divide the array into N/M blocks of size M (for simplicity in this description, assume that N is a multiple of M). Then, (i) considering the blocks as items with their first key as the sort key, sort them using selection sort; and (ii) run through the array, merging the first block with the second, then the second block with the third, and so forth.

    My problem with this exercise is that, based on the idea Sedgewick recommends, the following set of arrays will not be sorted: {0, 10, 12}, {3, 9, 11}, {5, 8, 13}. The algorithm I use is the following:

    1. Divide the full array into subarrays of size M.
    2. Run selection sort on each of the subarrays.
    3. Merge each of the subarrays using the method Sedgewick recommends in (ii). (This is where I encounter the problem of where to store the results after the merge.)

    This leads to wanting to increase the auxiliary space to hold at least two subarrays at a time (for merging), but based on the specification of the problem, that is not allowed. I have also considered using the original array as space for one subarray and the auxiliary space for the second subarray, but I can't envision a solution that does not end up overwriting the entries of the first subarray. Any ideas on other ways this can be done?

    NOTE: If this is supposed to be on StackOverflow.com, please let me know how I can move it. I posted here because the question is academic.
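
    For step (ii), two adjacent sorted blocks of size M can be merged in place with only M extra slots: copy the left block into the buffer, then merge the buffer with the right block back into the space both blocks occupy. A minimal sketch, assuming int keys (this addresses only the buffer question, not the overall ordering concern raised above):

        public class BlockMerge {
            /**
             * Merges the sorted blocks a[lo..lo+m) and a[lo+m..lo+2m) in place,
             * using an auxiliary buffer of only m elements.
             */
            static void mergeBlocks(int[] a, int lo, int m, int[] aux) {
                System.arraycopy(a, lo, aux, 0, m);   // move left block out of the way
                int i = 0, j = lo + m, k = lo;
                int hi = lo + 2 * m;
                while (i < m) {                        // left block always empties first
                    if (j >= hi || aux[i] <= a[j]) a[k++] = aux[i++];
                    else a[k++] = a[j++];
                }
                // Remaining right-block elements are already in their final positions.
            }

            public static void main(String[] args) {
                int[] a = {3, 9, 11, 0, 10, 12};
                mergeBlocks(a, 0, 3, new int[3]);
                System.out.println(java.util.Arrays.toString(a)); // [0, 3, 9, 10, 11, 12]
            }
        }

    The trick is that once the left block's copy is drained, whatever is left of the right block never moved, so no element is overwritten before it is consumed.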

    Read the article

  • SSAS 2008 R2– Little Gems

    - by ACALVETT
    I have spent the last few days working with SSAS 2008 R2 and noticed a few small enhancements which many people probably won't notice, but I will list them here and explain why they are important to me. New profiler events. Commit: this is a new subclass event for "Progress Report End". It represents the elapsed time taken for the server to commit your data. It is important because, for the duration of this event, a server-level lock will be in place, blocking all incoming connections and causing time out...(read more)

    Read the article

  • Which is Better? The Start Screen in Windows 8 or the Old Start Menu? [Analysis]

    - by Asian Angel
    There has been quite a bit of controversy surrounding Microsoft’s emphasis on the new Metro UI Start Screen in Windows 8, but when it comes down to it, which is really better: the Start Screen in Windows 8 or the old Start Menu? Tech blog 7 Tutorials has done a quick analysis to see which one actually works better (and faster) for launching applications and doing searches. You can view the results and a comparison table in the blog post linked below. Windows 8 Analysis: Is the Start Screen an Improvement vs. the Start Menu? [7 Tutorials]

    Read the article

  • SSAS Multithreaded sync with Windows 2008 R2

    - by ACALVETT
    We have been happily running some of our systems on Windows 2003 and have had an upgrade to W2K8 R2 on the list for quite some time. The upgrade has now completed and we can start taking advantage of some of the new features, which is the reason for this post. For a long time we have used the sample Robocopy script from the SQLCat team to synchronize some of our larger SSAS databases. If you're wondering what I mean by large: around 5 TB with a good few thousand partitions. The script works like a dream...(read more)

    Read the article

  • Optimize SUMMARIZE with ADDCOLUMNS in Dax #ssas #tabular #dax #powerpivot

    - by Marco Russo (SQLBI)
    If you started using DAX as a query language, you might have encountered performance issues with SUMMARIZE. The problem is related to the calculations you put in the SUMMARIZE by adding what are called extension columns, which compute their value within a filter context defined by the rows considered in the group that SUMMARIZE uses to produce each row in the output. Most of the time, for simple table expressions used in the first parameter of SUMMARIZE, you can optimize performance by removing the extended columns from the SUMMARIZE and adding them with an ADDCOLUMNS function. In practice, instead of writing

        SUMMARIZE( <table>, <group_by_column>, <column_name>, <expression> )

    you can write:

        ADDCOLUMNS(
            SUMMARIZE( <table>, <group_by_column> ),
            <column_name>, CALCULATE( <expression> )
        )

    The performance difference might be huge (orders of magnitude), but this optimization might produce different semantics, and in those cases it should not be used. A longer discussion of this topic is included in my Best Practices Using SUMMARIZE and ADDCOLUMNS article on SQLBI, which also includes several details about the DAX syntax with extended columns. For example, did you know that you can create an extended column in SUMMARIZE and ADDCOLUMNS with the same name as an existing measure? It is *not* a good thing to do, and by reading the article you will discover why. Enjoy DAX!

    Read the article

  • Is there any open source code analyzer for Java on which I can implement my software metrics algorithm?

    - by daneshkohan
    I am doing my master's dissertation, and I have designed a set of software metrics. I need to implement my metrics on top of an open source tool. I have found PMD and Checkstyle on sourceforge.net, but there is no adequate explanation of their code, and I couldn't find their source code in a form I could customize. I would appreciate it if you could suggest an open source code analyzer for Java whose code I can customize.
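
    Before committing to a tool's rule API, a metric can be prototyped as a small standalone scanner and ported later. A minimal sketch; the decision-keyword count below is an illustrative assumption (a crude textual stand-in for a real metric) and deliberately uses no PMD or Checkstyle APIs:

        import java.io.IOException;
        import java.nio.file.Files;
        import java.nio.file.Path;
        import java.util.regex.Matcher;
        import java.util.regex.Pattern;

        public class SimpleMetric {
            // Rough proxy metric: count branching keywords in the source text.
            // A real implementation would walk a parsed AST instead.
            private static final Pattern DECISION =
                    Pattern.compile("\\b(if|for|while|case|catch)\\b");

            public static void main(String[] args) throws IOException {
                Path file = Path.of(args[0]);          // e.g. src/Foo.java
                String source = Files.readString(file);
                Matcher m = DECISION.matcher(source);
                int count = 0;
                while (m.find()) count++;
                System.out.printf("%s: %d decision points%n", file, count);
            }
        }

    Once the metric's definition is stable, it becomes much easier to judge which tool's extension points fit it.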

    Read the article

  • SSAS Native v .net Provider

    - by ACALVETT
    Recently I was investigating why a new server, which is in its parallel-running phase, was taking significantly longer to process the daily data than the server it is due to replace. The server has SQL & SSAS installed, so the problem was not likely to be in the network transfer, as it's using shared memory. As I dug around the SQL DMVs I noticed in sys.dm_exec_connections that the SSAS connection had a packet size of 8000 bytes instead of the usual 4096 bytes, and from there I found that the datasource...(read more)

    Read the article

  • Help with algorithmic complexity in custom merge sort implementation

    - by bitcycle
    I've got an implementation of merge sort in C++ using a custom doubly linked list. I'm coming up with a Big-O complexity of n^2, based on the merge_sort() slice operation. But from what I've read, this algorithm should be n*log(n), where the log has base two. Can someone help me determine whether I'm deriving the complexity incorrectly, or whether the implementation can/should be improved to achieve n*log(n) complexity? If you would like some background on my goals for this project, see my blog. I've added comments in the code outlining what I understand the complexity of each method to be. Clarification: I'm focusing on the C++ implementation in this question. I've got another implementation written in Python, but that was added in addition to my original goal(s).
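
    For reference: a slice that walks to the midpoint costs O(n) per recursion level, and there are O(log n) levels, so splitting alone still yields O(n log n); quadratic behavior usually means the slice or merge re-walks the list once per element. A minimal sketch of the standard list merge sort, written in Java rather than C++ (node and method names are illustrative):

        public class ListMergeSort {
            static final class Node {
                int val; Node next;
                Node(int v, Node n) { val = v; next = n; }
            }

            // O(n log n): each level does O(n) work (split + merge) over O(log n) levels.
            static Node sort(Node head) {
                if (head == null || head.next == null) return head;
                Node slow = head, fast = head.next;
                while (fast != null && fast.next != null) {  // find midpoint once per level
                    slow = slow.next;
                    fast = fast.next.next;
                }
                Node right = slow.next;
                slow.next = null;                            // split into two halves
                return merge(sort(head), sort(right));
            }

            static Node merge(Node a, Node b) {
                Node dummy = new Node(0, null), tail = dummy;
                while (a != null && b != null) {
                    if (a.val <= b.val) { tail.next = a; a = a.next; }
                    else                { tail.next = b; b = b.next; }
                    tail = tail.next;
                }
                tail.next = (a != null) ? a : b;
                return dummy.next;
            }

            public static void main(String[] args) {
                Node list = new Node(3, new Node(1, new Node(2, null)));
                for (Node n = sort(list); n != null; n = n.next) System.out.print(n.val + " ");
            }
        }

    Comparing your slice against the slow/fast split above should show quickly whether the extra factor of n comes from the splitting step.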

    Read the article

  • What skills should a developer/tester learn in order to move into a permanent Systems Analysis role?

    - by shenaz
    I have been with a software services firm for 5 years and have fallen into a "jack of all trades" role, which I am looking to move out of. I've spent about 1 year each in programming (VB/VB.NET), application support, systems analysis, and most recently, software testing, which in my current position is all manual. I've really lost interest in the programming and testing roles; I would prefer a position where I get to work more with people, such as systems analysis. I even got a chance to be a trainer at the same company for a few months, a temporary position which I enjoyed very much. Given that most of my real experience is with software, support, and testing, what knowledge areas and skills should I focus on learning and mastering in order to make myself an attractive candidate for a permanent position as a business/systems analyst?

    Read the article

  • New licensing for SQL Server 2012 and #BISM #Tabular usage

    - by Marco Russo (SQLBI)
    Last week Microsoft announced a new licensing scheme for SQL Server 2012. If you are interested in an extensive discussion of the new licensing scheme, Denny Cherry wrote a great blog post about it. I'd like to comment on the new BI Edition license. Teo Lachev has already commented on the numbers, and I agree with him. I generally like the new licensing model of SQL 2012. It maintains a very low entry barrier for SSRS/SSAS/SSIS (Standard Edition). It has a reasonable licensing scheme for 20-50...(read more)

    Read the article

  • Strategy to find bottlenecks in a network

    - by Simone
    Our enterprise is having some problems when the number of incoming requests goes beyond a certain level. To make things simpler: we have N websites that use, amongst other things, a local web service. This service is hosted by IIS, and it's a .NET 4.0 (C#) application executed in a farm. It's REST-oriented, built around OpenRasta. As already mentioned, by stress testing it with JMeter we've found that beyond a certain number of requests the service's performance drops. However, this service is itself a client of 3 other distinct web services and also a client of a DB server, so it's not very clear what the real culprit of this abrupt decay is. In turn, those 3 other web services are installed in our farm too, and are clients of other DB servers (and, possibly, of services that are out of my team's control). What strategy do you suggest to try to locate where the bottleneck(s) are? Do you have any high-level suggestions?
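
    One low-tech starting point is to time every downstream dependency at its call site, so the JMeter runs can be correlated with which hop degrades first. A minimal sketch of the idea in Java (the service itself is .NET, where the same pattern applies; the helper and service names are hypothetical):

        import java.util.concurrent.Callable;

        // Hypothetical helper: wrap each downstream call (web service, DB) so you
        // can see where the time goes as the request rate rises.
        final class Timed {
            static <T> T call(String name, Callable<T> downstream) throws Exception {
                long start = System.nanoTime();
                try {
                    return downstream.call();
                } finally {
                    long elapsedMs = (System.nanoTime() - start) / 1_000_000;
                    // In a real system, feed this into a histogram/percentile sink
                    // rather than printing, so you can plot latency vs. load.
                    System.out.printf("%s took %d ms%n", name, elapsedMs);
                }
            }
        }
        // usage: Timed.call("inventory-service", () -> inventoryClient.fetch(id));

    If one dependency's latency curve bends sharply at the same request rate where the front service degrades, that hop (or a connection pool in front of it) is the prime suspect.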

    Read the article

  • How meaningful is the Big-O time complexity of an algorithm?

    - by james creasy
    Programmers often talk about the time complexity of an algorithm, e.g. O(log n) or O(n^2). Time-complexity classifications are made as the input size goes to infinity, yet ironically no real computation ever runs on an infinite input. Put another way, the classification of an algorithm is based on a situation that the algorithm will never be in: n = infinity. Also, consider that a polynomial-time algorithm with a huge exponent is just as useless as an exponential-time algorithm with a tiny base (e.g., 1.00000001^n) is useful. Given this, how much can I rely on Big-O time complexity when choosing between algorithms?
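
    For reference, the formal definition the question leans on, in standard asymptotic notation:

        f(n) = O(g(n)) \iff \exists\, c > 0,\ n_0 \in \mathbb{N} \ \text{such that} \ f(n) \le c \cdot g(n) \ \text{for all} \ n \ge n_0

    The bound only constrains behavior past some finite n_0 and up to an arbitrary constant c, which is exactly why constant factors, huge polynomial exponents, and bases like 1.00000001 are invisible to the classification.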

    Read the article

  • Watch @marcorus and @ferrarialberto sessions online #teched #msteched #tee2012

    - by Marco Russo (SQLBI)
    In June I participated in two TechEd editions (North America and Europe). Alberto and I delivered a pre-conference and two sessions about Tabular. Both conferences provide recorded sessions freely available on Channel 9, so you can compare which one was delivered best! If you have to choose between the two versions, consider that in North America we received more questions during and after the session (still recording), increasing the interaction, whereas in Europe questions usually came after the session finished (so no recording is available). If you're curious, watch both and let me know which version you prefer, especially for Multidimensional vs. Tabular!

    - BISM: Multidimensional vs. Tabular (TechEd North America 2012)
    - BISM: Multidimensional vs. Tabular (TechEd Europe 2012)
    - Many-to-Many Relationships in BISM Tabular (TechEd North America 2012)
    - Many-to-Many Relationships in BISM Tabular (TechEd Europe 2012)

    If you are interested in learning SSAS Tabular, don't miss the next SSAS Tabular Workshop online on September 3-4, 2012. We are also planning dates for another roadshow in Europe this fall, and I'm happy to announce we'll have two dates in Germany, too. More updates in the coming weeks.

    Read the article

  • Adventures in the Land of CloudDB/NoSQL/NoAcid

    - by KKline
    Cloud, Bunny, or CloudBunny? Last year, some of my friends from Quest Software attended Hadoop World in New York. In 2009, who would have guessed that Quest would be there with products and community initiatives, as a major sponsor, and with presenters? There were just under 1,000 attendees, who weren't the typical devheads and geekasaurs you'd normally see at very techie events like Code Camps, SQL Saturdays, Cloud Camps, or even other NoSQL events such as the Cassandra Summit. We're talkin' enterprise...(read more)

    Read the article

  • SQL Bits X – Temporal Snapshot Fact Table Session Slide & Demos

    - by Davide Mauri
    Ten days have already passed since SQL Bits X in London. I really enjoyed it! These kinds of events are great not only for the content but also for meeting friends who, due to distance, you cannot meet every day. Friends from PASS, SQL CAT, Microsoft, MVPs and so on, all in one place, drinking beer and whisky and having fun. A perfect mixture for a great learning and sharing experience! I also really enjoyed delivering my session on Temporal Snapshot Fact Tables. Given that the subject is very specific, I was not expecting a lot of attendees… but I was totally wrong! It seems the problem of handling daily snapshots of data is more common than I expected. I have also already had feedback from several attendees who applied the explained technique to their existing solutions with success. This is just what a speaker at such a conference wishes to hear! :) If you want to take a look at the slides and the demos, you can find them on SkyDrive: https://skydrive.live.com/redir.aspx?cid=377ea1391487af21&resid=377EA1391487AF21!1151&parid=root The demo is available both for SQL Server 2008 and for SQL Server 2012. With this last version, you can also simplify the ETL process using the new LEAD analytic function. (This is not done in the demo; I've left this option as a little exercise for you. :) )

    Read the article

  • how do you manage application performance reviews

    - by CoolBeans
    I have been trying to figure out how to do effective performance reviews of every release our team ships, before the release is installed. Do you usually make this part of the code review process, or do you handle it as a separate review task? FYI, we do not have a dedicated performance testing team; it is up to the developers to make sure the app performs well. The apps I am referring to are web applications.

    Read the article

  • How to read Scala code with lots of implicits?

    - by Petr Pudlák
    Consider the following code fragment (adapted from http://stackoverflow.com/a/12265946/1333025):

        // Using scalaz 6
        import scalaz._, Scalaz._

        object Example extends App {
          case class Container(i: Int)

          def compute(s: String): State[Container, Int] = state {
            case Container(i) => (Container(i + 1), s.toInt + i)
          }

          val d = List("1", "2", "3")
          type ContainerState[X] = State[Container, X]

          println(
            d.traverse[ContainerState, Int](compute) ! Container(0)
          )
        }

    I understand what it does at a high level, but I wanted to trace what exactly happens during the call to d.traverse at the end. Clearly, List doesn't have traverse, so it must be implicitly converted to another type that does. Even though I spent a considerable amount of time trying to find out, I wasn't very successful. First I found that there is a method in scalaz.Traversable:

        traverse[F[_], A, B](f: (A) => F[B], t: T[A])(implicit arg0: Applicative[F]): F[T[B]]

    but clearly this is not it (although it's most likely that "my" traverse is implemented using this one). After a lot of searching, I grepped the scalaz sources and found scalaz.MA's method:

        traverse[F[_], B](f: (A) => F[B])(implicit a: Applicative[F], t: Traverse[M]): F[M[B]]

    which seems to be very close. Still, I don't know what List is converted to in my example, or whether it uses MA.traverse or something else. The question is: what procedure should I follow to find out exactly what is called at d.traverse? That even such simple code is so hard to analyze seems like a big problem to me. Am I missing something very simple? How should I proceed when I want to understand code that uses a lot of imported implicits? Is there some way to ask the compiler what implicits it used? Or is there something like Hoogle for Scala, so that I can search for a method just by its name?
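
    One way to answer the "ask the compiler" part directly is the -Xprint:typer compiler option, which prints the program as it looks after type checking, with implicit conversions and implicit arguments filled in explicitly (the exact output shape varies by compiler version):

        scalac -Xprint:typer Example.scala

    In the printed tree, d.traverse appears rewritten with the implicit conversion applied to d and the implicit Applicative/Traverse arguments spelled out, so you can see exactly which definitions were chosen.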

    Read the article

  • Given an XML which contains a representation of a graph, how to apply the DFS algorithm to it? [on hold]

    - by winston smith
    Given the following XML, which represents a directed graph:

        <?xml version="1.0" encoding="iso-8859-1" ?>
        <!DOCTYPE graph PUBLIC "-//FC//DTD red//EN" "../dtd/graph.dtd">
        <graph direct="1">
            <vertex label="V0"/>
            <vertex label="V1"/>
            <vertex label="V2"/>
            <vertex label="V3"/>
            <vertex label="V4"/>
            <vertex label="V5"/>
            <edge source="V0" target="V1" weight="1"/>
            <edge source="V0" target="V4" weight="1"/>
            <edge source="V5" target="V2" weight="1"/>
            <edge source="V5" target="V4" weight="1"/>
            <edge source="V1" target="V2" weight="1"/>
            <edge source="V1" target="V3" weight="1"/>
            <edge source="V1" target="V4" weight="1"/>
            <edge source="V2" target="V3" weight="1"/>
        </graph>

    With these classes I parsed the graph and gave it an adjacency-list representation:

        import java.io.IOException;
        import java.util.HashSet;
        import java.util.LinkedList;
        import java.util.Collection;
        import java.util.Iterator;
        import java.util.logging.Level;
        import java.util.logging.Logger;
        import practica3.util.Disc;

        public class ParsingXML {

            public static void main(String[] args) {
                try {
                    // TODO code application logic here
                    Collection<Vertex> sources = new HashSet<Vertex>();
                    LinkedList<String> lines = Disc.readFile("xml/directed.xml");
                    for (String lin : lines) {
                        int i = Disc.find(lin, "source=\"");
                        String data = "";
                        if (i > 0 && i < lin.length()) {
                            while (lin.charAt(i + 1) != '"') {
                                data += lin.charAt(i + 1);
                                i++;
                            }
                            Vertex v = new Vertex();
                            v.setName(data);
                            v.setAdy(new HashSet<Vertex>());
                            sources.add(v);
                        }
                    }
                    Iterator it = sources.iterator();
                    while (it.hasNext()) {
                        Vertex ver = (Vertex) it.next();
                        Collection<Vertex> adyacencias = ver.getAdy();
                        LinkedList<String> ls = Disc.readFile("xml/graphs.xml");
                        for (String lin : ls) {
                            int i = Disc.find(lin, "target=\"");
                            String data = "";
                            if (lin.contains("source=\"" + ver.getName())) {
                                Vertex v = new Vertex();
                                if (i > 0 && i < lin.length()) {
                                    while (lin.charAt(i + 1) != '"') {
                                        data += lin.charAt(i + 1);
                                        i++;
                                    }
                                    v.setName(data);
                                }
                                i = Disc.find(lin, "weight=\"");
                                data = "";
                                if (i > 0 && i < lin.length()) {
                                    while (lin.charAt(i + 1) != '"') {
                                        data += lin.charAt(i + 1);
                                        i++;
                                    }
                                    v.setWeight(Integer.parseInt(data));
                                }
                                if (v.getName() != null) {
                                    adyacencias.add(v);
                                }
                            }
                        }
                    }
                    for (Vertex vert : sources) {
                        System.out.println(vert);
                        System.out.println("adyacencias: " + vert.getAdy());
                    }
                } catch (IOException ex) {
                    Logger.getLogger(ParsingXML.class.getName()).log(Level.SEVERE, null, ex);
                }
            }
        }

    This is another class:

        import java.util.Collection;
        import java.util.Objects;

        public class Vertex {

            private String name;
            private int weight;
            private Collection ady;

            public Collection getAdy() { return ady; }
            public void setAdy(Collection adyacencias) { this.ady = adyacencias; }
            public String getName() { return name; }
            public void setName(String nombre) { this.name = nombre; }
            public int getWeight() { return weight; }
            public void setWeight(int weight) { this.weight = weight; }

            @Override
            public int hashCode() {
                int hash = 7;
                hash = 43 * hash + Objects.hashCode(this.name);
                hash = 43 * hash + this.weight;
                return hash;
            }

            @Override
            public boolean equals(Object obj) {
                if (obj == null) return false;
                if (getClass() != obj.getClass()) return false;
                final Vertex other = (Vertex) obj;
                if (!Objects.equals(this.name, other.name)) return false;
                if (this.weight != other.weight) return false;
                return true;
            }

            @Override
            public String toString() {
                return "Vertice{" + "name=" + name + ", weight=" + weight + '}';
            }
        }

    And finally:

        /* -*-jde-*- */
        /* <Disc.java> Contains the main argument */
        import java.io.*;
        import java.util.LinkedList;

        /**
         * Reading and writing of files as lists of strings.
         * Intended for use by the graph classes.
         *
         * @author Peralta Santa Anna Victor Miguel
         * @since July 2011
         */
        public class Disc {

            /**
             * Reads a file.
             *
             * @param fileName the file to read
             * @return the file represented as a list of strings
             */
            public static LinkedList<String> readFile(String fileName) throws IOException {
                BufferedReader file = new BufferedReader(new FileReader(fileName));
                LinkedList<String> textlist = new LinkedList<String>();
                while (file.ready()) {
                    textlist.add(file.readLine().trim());
                }
                file.close();
                /*
                for (String linea : textlist) {
                    if (linea.contains("source")) {
                        // String generado = linea.replaceAll("<\\w+\\s+\"", "");
                        // System.out.println(generado);
                    }
                }
                */
                return textlist;
            } // readFile

            public static int find(String linea, String palabra) {
                int i, j;
                boolean found = false;
                for (i = 0, j = 0; i < linea.length(); i++) {
                    if (linea.charAt(i) == palabra.charAt(j)) {
                        j++;
                        if (j == palabra.length()) {
                            found = true;
                            return i;
                        }
                    } else {
                        continue;
                    }
                }
                if (!found) {
                    i = -1;
                }
                return i;
            }

            /**
             * Writes a list of strings to a file.
             *
             * @param fileName the file to write
             * @param tofile   the list of strings to put in the file
             * @param append   whether to append or start from scratch
             */
            public static void writeFile(String fileName, LinkedList<String> tofile, boolean append) throws IOException {
                FileWriter file = new FileWriter(fileName, append);
                for (int i = 0; i < tofile.size(); i++) {
                    file.write(tofile.get(i) + "\n");
                }
                file.close();
            } // writeFile

            /**
             * Writes a string to a file.
             *
             * @param msg    the file to write
             * @param tofile the string to put in the file
             * @param append whether to append or start from scratch
             */
            public static void writeFile(String msg, String tofile, boolean append) throws IOException {
                FileWriter file = new FileWriter(msg, append);
                file.write(tofile);
                file.close();
            } // writeFile
        }

    I'm stuck on what the best way is, given an adjacency-list representation of the graph, to apply the depth-first search algorithm to it. Any idea of how to approach the task?
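
    A minimal DFS sketch over an adjacency-list map (a simplified stand-in for the Vertex/Collection structure built above; method and field names are illustrative):

        import java.util.*;

        public class Dfs {
            // Graph as adjacency lists: vertex label -> labels of its neighbours.
            static void dfs(Map<String, List<String>> adj, String start,
                            Set<String> visited) {
                if (!visited.add(start)) return;       // already seen
                System.out.println("visiting " + start);
                for (String next : adj.getOrDefault(start, List.of())) {
                    dfs(adj, next, visited);
                }
            }

            public static void main(String[] args) {
                Map<String, List<String>> adj = new HashMap<>();
                adj.put("V0", List.of("V1", "V4"));
                adj.put("V5", List.of("V2", "V4"));
                adj.put("V1", List.of("V2", "V3", "V4"));
                adj.put("V2", List.of("V3"));

                Set<String> visited = new HashSet<>();
                dfs(adj, "V0", visited);               // V0 V1 V2 V3 V4
                dfs(adj, "V5", visited);               // picks up V5 (rest already visited)
            }
        }

    To cover a possibly disconnected graph, the same dfs call is repeated from every unvisited source vertex, which is what the two calls in main demonstrate.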

    Read the article

  • SSAS Tabular Workshop online and other upcoming dates (and updates!) #ssas #tabular

    - by Marco Russo (SQLBI)
    After many conferences and trips, this summer I had some time to write and prepare new sessions for the next wave of conferences. In fact, I am still doing just that, even though I have already restarted traveling for consulting and training. So expect new content about DAX and Tabular in the coming months! Starting to see real customers adopting Tabular is revealing many new challenges, and there is still a lot to learn and to create. If you haven't started working on Tabular yet, well, you should. As I always say, as a BI developer you should be able to choose between Tabular and Multidimensional, and in order to do that you should know both of them! One thing I don't like very much about the marketing is the message that "Tabular is simpler", because it's often translated into "Tabular is for simple projects", and that last statement is not true. Actually, I see a lot of good reasons to adopt Tabular in complex data models, especially in non-traditional scenarios. I know, this is because I love to understand what the actual limits of a technology are, and I'm learning that there is simply a lot of room for improvement for Tabular too. It's already fast, but it could be faster! How can you start? Well, first of all, by reading our book. Then, by attending our SSAS Tabular workshop. There is an online edition of the workshop on September 3-4, 2012 (hurry up if you want to register), and there are already several dates planned for the next months (and others will be added soon!). And, of course, by installing SQL Server 2012 and trying to create models over your databases. If you are too lazy, just start with PowerPivot. As soon as you start working with Tabular or PowerPivot, you will see that there is one important skill you need: learning DAX. In the next few days I should publish an article I'm finishing about best practices using SUMMARIZE and ADDCOLUMNS. If only someone had published this article one year ago, it would have saved many hours of my life. But, you know, flight manuals are written in blood… and someone has to write them! Stay tuned.

    Read the article
