Search Results

Search found 3339 results on 134 pages for 'frequency analysis'.

  • Write DAX queries in Report Builder #ssrs #dax #ssas #tabular

    - by Marco Russo (SQLBI)
    If you use Report Builder with Reporting Services, you can use DAX queries even though the editor for the Analysis Services provider does not support DAX syntax. In fact, the DMX editor that you can use in the Visual Studio editor of Reporting Services (see a previous post on that) is not available in Report Builder. However, as Sagar Salvi commented in this Microsoft Connect entry, you can use DAX query text in the query of a Dataset by using the OLE DB provider instead of the Analysis Services one. I think it’s a good idea to show the steps required. First, create a Data Source using the OLE DB connection type, and provide in the connection string the provider (Provider), the server name (Data Source) and the database name (Initial Catalog), such as: Provider=MSOLAP;Data Source=SERVERNAME\TABULAR;Initial Catalog=AdventureWorks Tabular Model SQL 2012. Then, create a Dataset using the data source previously defined, select the Text query type, and write the DAX code in the Query pane. You can also use the Query Designer window, which doesn’t provide any particular help in writing the DAX query, but at least shows a preview of the result of the query execution. I hope DAX will get better editors in the future… in the meantime, remember you can use DAX Studio to write and test your DAX queries, and DAX Formatter to improve their readability! If you want to learn the DAX Query Language, I suggest watching my video Data Analysis Expressions as a Query Language on Project Botticelli!

    Read the article

  • Best tools for "ssh tail -f" style log file monitoring and analysis

    - by dougnukem
    I'm looking for a tool to monitor custom PHP error logs, Apache logs, and possibly Java logs on remote development servers. I'm not looking for a full production log system like Splunk, but something a little more flexible than an ssh terminal running "tail -f". Perhaps something that will:
    * Monitor multiple log files and bring them to my local machine for searching/analysis later
    * Allow "alerts" when certain strings appear in the log
    * Provide some kind of tabbed/dashboard view of the multiple logs being monitored (fewer than 10 logs in total)

    Read the article

  • VS2010 / Code Analysis: Turn off a rule for a project without custom ruleset....

    - by TomTom
    ...any chance? The scenario is this: for our company we are developing a standard for how code should look. This will be the full MS rule set as it stands now. For some specific projects we may want to turn off specific rules, simply because for a specific project the rule is a "known exception". Example? CA1026 – while perfectly ok in most cases, there are 1-2 specific libraries where we don't want to change the code. We also want to avoid having a custom rule set. OTOH putting a suppress attribute on every occurrence gets pretty convoluted pretty fast. Any way to turn off a code analysis warning for a complete assembly without a custom rule set? We'd rather have that in a specific file (GlobalSuppressions.cs) than in a rule set, for maintenance reasons and to be more explicit ;)

    Read the article

  • Why do I get Code Analysis CA1062 on an out parameter in this code?

    - by brickner
    I have a very simple piece of code (simplified from the original code – so I know it's not very clever code) that, when I compile in Visual Studio 2010 with Code Analysis, gives me warning CA1062: Validate arguments of public methods.

    public class Foo
    {
        protected static void Bar(out int[] x)
        {
            x = new int[1];
            for (int i = 0; i != 1; ++i)
                x[i] = 1;
        }
    }

    The warning I get: CA1062 : Microsoft.Design : In externally visible method 'Foo.Bar(out int[])', validate local variable '(*x)', which was reassigned from parameter 'x', before using it. I don't understand why I get this warning, and how can I resolve it without suppressing it? Can new return null? Is this a Visual Studio 2010 bug?

    Read the article

  • When to stop following the advice of static code analysis?

    - by bananeweizen
    I have been using static code analysis on a project with more than 100,000 lines of Java code for quite a while now. I started with FindBugs, which gave me around 1500 issues at the beginning. I fixed the most severe issues over time and started using additional tools like PMD, Lint4j, JNorm and now Enerjy. With the more severe issues fixed, there is a huge number of low-severity issues left. How do you handle these low-priority issues? Do you try to fix all of them? Or only those in newly written code? Do you regularly disable certain rules? (I found that I do on nearly all of the available tools.) And if you ignore or disable rules, do you document those decisions? What do your managers say about leaving some thousand low-priority issues unfixed? Do you use (multiple) tool-specific comments in the code, or is there a better way?

    Read the article

  • Java static source analysis/parsing (possibly with ANTLR), what is a good tool to do this?

    - by Berlin Brown
    I need to perform static source analysis on Java code. Ideally, I want a system that works out of the box without much modification from me. For example, I have used ANTLR in the past, but I spent a lot of time building grammar files and still didn't get what I wanted. I want to be able to parse a Java file and have it return the character positions of, say:
    * the start and end of a Java block comment
    * the start and end of a Java class file
    * the start and end of a Java method declaration, signature, and implementation
    It looks like ANTLR will do that, but I have yet to finish a grammar that actually gives me the positions of the code I need. Does anyone have a complete ANTLR grammar and Java code that give the character positions of these parts of the Java source?
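    If a ready-made parser is acceptable instead of a hand-rolled ANTLR grammar, the open-source JavaParser library can report source ranges for comments, classes and methods out of the box. A minimal sketch, assuming the com.github.javaparser:javaparser-core artifact (note it reports line/column positions, so absolute character offsets need one extra pass over the file to add up line lengths):

        import com.github.javaparser.StaticJavaParser;
        import com.github.javaparser.ast.CompilationUnit;
        import com.github.javaparser.ast.body.ClassOrInterfaceDeclaration;
        import com.github.javaparser.ast.body.MethodDeclaration;
        import com.github.javaparser.ast.comments.Comment;
        import java.io.File;

        public class PositionDump {
            public static void main(String[] args) throws Exception {
                CompilationUnit cu = StaticJavaParser.parse(new File("Example.java"));

                // Block and line comments with their source ranges
                for (Comment c : cu.getAllContainedComments())
                    c.getRange().ifPresent(r ->
                        System.out.println("comment: " + r.begin + " .. " + r.end));

                // Class declarations
                for (ClassOrInterfaceDeclaration cls : cu.findAll(ClassOrInterfaceDeclaration.class))
                    cls.getRange().ifPresent(r ->
                        System.out.println("class " + cls.getNameAsString() + ": " + r.begin + " .. " + r.end));

                // Method declarations (the range covers signature plus body)
                for (MethodDeclaration m : cu.findAll(MethodDeclaration.class))
                    m.getRange().ifPresent(r ->
                        System.out.println("method " + m.getNameAsString() + ": " + r.begin + " .. " + r.end));
            }
        }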

    Read the article

  • Cannot connect to a SQL Server 2005 Analysis Services cube after installing SQL Server 2008 SP1.

    - by Luc
    I've been developing an application that talks directly to an SSAS 2005 OLAP cube. Note that I also have SQL Server 2008 installed, so the other day I did a Windows Update and decided to include SQL Server 2008 SP1 in my update. After doing that, my SSAS 2005 cube is no longer accessible from my application. I'm able to browse the data just fine within SQL Server 2005 BI Studio Manager, but I'm not able to connect to the cube from my application. Here is my connection string that used to work: Data Source=localhost;Provider=msolap;Initial Catalog=Adventure Works DW. Here is the error message I get: Either the user, [Server]/[User], does not have access to the Adventure Works DW database, or the database does not exist. Here is the beginning of my stack trace if it would help:

    Microsoft.AnalysisServices.AdomdClient.AdomdErrorResponseException was unhandled by user code
      HelpLink=""
      Message="Either the user, Luc-PC\Luc, does not have access to the Adventure Works DW database, or the database does not exist."
      Source="Microsoft SQL Server 2005 Analysis Services"
      ErrorCode=-1055391743
      StackTrace:
        at Microsoft.AnalysisServices.AdomdClient.AdomdConnection.XmlaClientProvider.Microsoft.AnalysisServices.AdomdClient.IDiscoverProvider.Discover(String requestType, IDictionary restrictions, DataTable table)
        at Microsoft.AnalysisServices.AdomdClient.ObjectMetadataCache.Discover(AdomdConnection connection, String requestType, ListDictionary restrictions, DataTable destinationTable, Boolean doCreate)
        at Microsoft.AnalysisServices.AdomdClient.ObjectMetadataCache.PopulateSelf()
        at Microsoft.AnalysisServices.AdomdClient.ObjectMetadataCache.Microsoft.AnalysisServices.AdomdClient.IObjectCache.Populate()
        at Microsoft.AnalysisServices.AdomdClient.CacheBasedNotFilteredCollection.PopulateCollection()
        at Microsoft.AnalysisServices.AdomdClient.CacheBasedNotFilteredCollection.get_Count()
        at Microsoft.AnalysisServices.AdomdClient.CubesEnumerator.MoveNext()
        at Microsoft.AnalysisServices.AdomdClient.CubeCollection.Enumerator.MoveNext()
        at blah blah...

    I've looked for a solution for the last 4+ hours and haven't had any success. Thanks in advance for any help. Luc

    Read the article

  • Measuring the CPU frequency scaling effect

    - by Bryan Fok
    Recently I have been trying to measure the effect of CPU frequency scaling. Is it accurate if I use this clock to measure it?

    template<std::intmax_t clock_freq>
    struct rdtsc_clock {
        typedef unsigned long long rep;
        typedef std::ratio<1, clock_freq> period;
        typedef std::chrono::duration<rep, period> duration;
        typedef std::chrono::time_point<rdtsc_clock> time_point;
        static const bool is_steady = true;

        static time_point now() noexcept {
            unsigned lo, hi;
            // Read the CPU's time-stamp counter into EDX:EAX
            asm volatile("rdtsc" : "=a" (lo), "=d" (hi));
            return time_point(duration(static_cast<rep>(hi) << 32 | lo));
        }
    };

    Update: according to a comment on another post of mine, I believe rdtsc cannot be used to measure the effect of CPU frequency scaling, because the counter read by rdtsc is not affected by the CPU frequency. Am I right?

    Read the article

  • Frequency of RAM

    - by Vignesh Palani
    I have a very old system, an HP Compaq dx2080. It had 1 GB of RAM. I recently bought an EVM DDR2 1 GB PC RAM module with a clock rate of 667 MHz. I am dual-booting Windows 7 and 8. After I installed the module, Windows 7 was still using only the older 1 GB: System Properties showed 2 GB installed and 1 GB usable. I searched around and found that I can set the maximum memory in msconfig, so I did, setting it to 2048. Still, it was using only 1 GB. When I switched to Windows 8, it was using the full 2 GB. Now, for my question: my system only supports 533 MHz and 667 MHz RAM, but in the BIOS I saw the new RAM showing as 800 MHz. I rechecked using Speccy and CPU-Z, and they showed different values from each other. The RAM itself is labeled 667 MHz, no mistake there. Am I missing something? And can I continue using it? Note again that there are only two slots.

    Read the article

  • MySQL : table organisation for very large sets with high update frequency

    - by Remiz
    I'm facing a dilemma in the choice of my MySQL schema for my application. Before I start, here is an extremely simplified picture of my database. Schema here: http://i43.tinypic.com/2wp5lxz.png. In one sentence: for each customer, the application harvests text data and attaches tags to each piece of data collected. As an approximation of the usage of each table, here is what I expect:
    * customer: ~5000, shouldn't grow fast
    * data: 5 million rows per customer, could double or triple for big customers
    * tag: ~1000, fairly fixed size
    * data_tag: easily hundreds of millions of rows per customer; each piece of data can be tagged a lot
    The harvesting process is permanent, meaning that around every 15 minutes new data comes in and is tagged, which requires very constant index refreshing. A lot of my queries are SELECT COUNTs of DATA between specific DATES, tagged with a specific TAG, on a specific CUSTOMER (very rarely do they involve several customers). You can imagine that with this kind of data volume I'm facing a challenge in terms of data organization and indexing. Again, this is a very minimalistic and simplified version of my structure. My question is, is it better:
    * to stick with this model and manage crazy index optimization? (which potentially means billions of rows in the data_tag table)
    * to change the schema and use one data table and one data_tag table per customer? (which means having 5000 tables in my database)
    I'm running all of this on a MySQL 5.0 dedicated server (quad-core, 8 GB of RAM), replicated. I only use InnoDB; I also have another server running Sphinx. Knowing all of this, I can't wait to hear your opinion. Thanks.

    Read the article

  • Looking for an open source real-time network analysis program

    - by JrSysAdmin
    Can somebody recommend an open source real-time network analysis program? What I'm looking for is a program that displays a graph of bandwidth usage by IP within our internal network and can be viewed quickly any time we need it (typically when we want to find out who is using large amounts of bandwidth and slowing down the network). Ideally we want to hook up a monitor on the wall of our server room to a system whose NIC will be in promiscuous mode, logging all network activity in a visual manner that can easily be seen, running 24/7. I prefer open source as I have no budget for this project and favor open source projects in general. I'd also prefer something available for CentOS, but any Linux distro or Windows OS would be acceptable. Thanks!

    Read the article

  • Video streaming and internet browsing on different bands/frequency

    - by user47207
    I have a Netgear WNDR3700, which lets clients on the 2.4 GHz or 5 GHz band access the internet and see every client and device on the network. I have a computer with two NICs, one on the 2.4 GHz band and the other on the 5 GHz band. My specific problem is that I would like to serve my video streams (Hulu, PS3 Media Server, PlayOn) to my PS3 over the 5 GHz band while internet browsing is routed over the 2.4 GHz band, so that the video streams aren't affected by general internet use. While the easiest solution would be to disable internet access on the 5 GHz access point, I would like to know of a solution that doesn't require that.

    Read the article

  • Eclipse CDT code analysis thinks size_t is ambiguous

    - by Chris
    It does, after all, get defined in stddef.h AND c++config.h:

    c++config.h:

    namespace std {
        typedef __SIZE_TYPE__ size_t;
        typedef __PTRDIFF_TYPE__ ptrdiff_t;
    #ifdef __GXX_EXPERIMENTAL_CXX0X__
        typedef decltype(nullptr) nullptr_t;
    #endif
    }

    stddef.h:

    #define __SIZE_TYPE__ long unsigned int

    So when a file does using namespace std, the Eclipse CDT code analysis gets confused and says the symbol is ambiguous. I don't know how gcc works around this, but does anybody have any suggestions on what to do about the Eclipse code analysis?

    Read the article

  • C++ Professional Code Analysis Tools

    - by Voulnet
    Hello there, I would like to ask about the available (free or not) static and dynamic code analysis tools that can be used on C++ applications, ESPECIALLY COM and ActiveX. I am currently using Visual Studio's /analyze compiler option, which is good and all, but I still feel there is a lot of analysis to be done. I'm talking about a C++ application where memory management and code security are of utmost importance.

    Read the article

  • Principal component analysis in C#

    - by vj4u
    Hi, I'm presently working with data in text files. I need to use an algorithm called principal component analysis, so I have counted the words in the text file that occurred more than once. For example:
    * relation occurred … times
    * help occurred 6 times
    * between occurred 3 times
    * analysis occurred 4 times
    * component occurred 5 times
    * present occurred 6 times
    By taking the counts of the above distinct words, I need to form an m x n matrix in C#. Help me, it's a bit urgent for me.
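    A minimal sketch of the matrix-building step, shown in Java here rather than the C# the asker wants (the structure translates directly), with hypothetical word counts: one row per document, one column per distinct word, so PCA can then run on the resulting m x n count matrix.

        import java.util.*;

        public class TermMatrix {
            public static void main(String[] args) {
                // Hypothetical per-document word counts (word -> count)
                List<Map<String, Integer>> docs = Arrays.asList(
                    Map.of("help", 6, "between", 3, "analysis", 4),
                    Map.of("component", 5, "present", 6, "analysis", 1));

                // Column order: sorted union of all distinct words
                SortedSet<String> vocab = new TreeSet<>();
                for (Map<String, Integer> d : docs) vocab.addAll(d.keySet());
                List<String> cols = new ArrayList<>(vocab);

                // m x n matrix: m documents (rows) x n distinct words (columns)
                int[][] matrix = new int[docs.size()][cols.size()];
                for (int i = 0; i < docs.size(); i++)
                    for (int j = 0; j < cols.size(); j++)
                        matrix[i][j] = docs.get(i).getOrDefault(cols.get(j), 0);

                System.out.println("columns: " + cols);
                for (int[] row : matrix) System.out.println(Arrays.toString(row));
            }
        }

    PCA itself then operates on this matrix after centering each column (subtracting the column mean).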

    Read the article

  • SSAS Maestro Training in July 2012 #ssasmaestro #ssas

    - by Marco Russo (SQLBI)
    A few hours ago Chris Webb blogged about SSAS Maestro and I'd like to propagate the news, adding also some background info. SSAS Maestro is the premier certification on Analysis Services, selecting the best experts in Analysis Services around the world. In 2011 Microsoft organized two rounds of training/exams for SSAS Maestros, and up to now only 11 people from the first wave have been announced – around 10% of the attendees of the course! In the next few days the new Maestros from the second round should be announced, and this long process is caused by many factors that I'm going to explain.
    First, the course is just a step in the process. Before the course you receive a list of topics to study, including the slides of the course. During the course, students receive a lot of information that might not be included in the slides, and the best part of the course is class interaction. Students are expected to bring their experience to the table, and comparing case studies and experiences and having long debates is an important part of the learning process. It is also a part of the evaluation: good questions might be even more important than good answers! Finally, after the course, students have their homework, and this may require one or two months to complete. After that, a long (very long) evaluation process begins, taking into account homework, labs, participation… and for this reason the final evaluation may arrive months after the course. We are going to improve and shorten this process with the next courses.
    The first wave of SSAS Maestro was by invitation only; now the program is opening, requiring a fee to participate in order to cover the cost of preparation, training and exam. The number of attendees will be limited and candidates will have to send their CVs in order to be admitted to the course. Only experienced Analysis Services developers will be able to participate in this challenging program. So why should you do that? Well, only 10% of students have passed the exam so far, so if you need a 100% guarantee of passing, you need to study a lot before, during and after the course. But the course by itself is a precious opportunity to share experience, build a network and learn mission-critical, enterprise-level best practices that are hard to find written in books. Oh, and many existing white papers are required reading *before* the course!
    The course is now 5 days long, and every day can be *very* long. We'll have lectures and discussions in the morning and labs in the afternoon/evening, plus some more lectures in one or two afternoons. A heavy part of the course is about performance optimization, capacity planning and monitoring. This edition will also introduce Tabular models – but don't expect something you might find in the SSAS Tabular Workshop: only performance, scalability, monitoring and optimization will be covered, and knowing Analysis Services is a requirement just to be accepted! Chris Webb and I will be the teachers for this edition.
    The course is expensive: applying for SSAS Maestro will cost around 7000€ plus taxes (reduced to 5000€ for students of a previous SSAS Maestro edition), and you will be locked in a training room for the large part of the week. So why should you do that? Well, as I said, this is a challenging course. You will not find the time to check your email – the content is just too interesting to let yourself be distracted by something else. Another good reason is that this course will take place in Italy, in the brand new Microsoft Innovation Campus; we'll be able to provide hints on where to get great food and, if you are willing to attach a weekend to your trip, there are plenty of places to visit (and I'm not talking about the classic Rome-Florence-Venice) – you might really need to relax after such a week! Finally, the marking process after the course will be faster – we'd like to complete the evaluation within three months of the course, considering that 1-2 months might be required to complete the homework.
    If at this point you are not scared: registration will open in mid-April, but you can already write to [email protected] sending your CV/resume and a short description of your level of SSAS knowledge and experience. The selection process will start early and you may want to put your admission form at the top of the FIFO queue!

    Read the article

  • Need explanation on theorem from the book [closed]

    - by Pradeep
    I need some explanation of amortized analysis in the context of analysis of algorithms, specifically of one of the theorems in the attached page. Explanation needed:
    1. How did the author arrive at M_{i_j} being O(i_j - i_{j-1})?
    2. I need an explanation of this quote from the book: "because at most i_j - i_{j-1} - 1 elements have been added into the table since the clear operation M_{i_{j-1}}, or since the beginning of the series."
    3. Also, what does the summation equation mean? I need a more thorough explanation of the essence of the theorem.
    Attached was a scanned copy of the page from the book (since removed).
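    For what it's worth, the structure of the quote matches the standard amortized analysis of a table with a clear operation (a guess, since the scanned theorem itself is not shown): if the j-th clear operation happens at position i_j in the series, it can remove at most the i_j - i_{j-1} - 1 elements added since the previous clear at i_{j-1}, so its cost is O(i_j - i_{j-1}). The summation then telescopes: summing i_j - i_{j-1} over all clear operations gives at most n for a series of n operations, so the whole series costs O(n), i.e. O(1) amortized per operation.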

    Read the article

  • probability and relative frequency

    - by Alexandru
    If I use relative frequency to estimate the probability of an event, how good is my estimate based on the number of experiments? Is standard deviation a good measure? A paper/link/online book would be perfect. http://en.wikipedia.org/wiki/Frequentist
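    For what it's worth, the standard frequentist answer (not from the question itself): over n independent trials of an event with true probability p, the relative frequency p̂ is an unbiased estimate with standard deviation sd(p̂) = sqrt(p(1 - p)/n) ≤ 1/(2·sqrt(n)), so the error shrinks like 1/sqrt(n). Plugging p̂ in for p gives the usual standard error estimate sqrt(p̂(1 - p̂)/n), and a rough 95% confidence interval is p̂ ± 2·sqrt(p̂(1 - p̂)/n).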

    Read the article

  • 2D Array values frequency

    - by Morano88
    If I have a 2D array that is arranged as follows:

    String X[][] = new String[][] {
        {"127.0.0.9", "60", "75000", "UDP", "Good"},
        {"127.0.0.8", "75", "75000", "TCP", "Bad"},
        {"127.0.0.9", "75", "70000", "UDP", "Good"},
        {"127.0.0.1", "",   "70000", "UDP", "Good"},
        {"127.0.0.1", "75", "75000", "TCP", "Bad"}
    };

    I want to know the frequency of each value, so 127.0.0.9 gets 2. How can I write a general solution for this, in Java or as an algorithm for any language?
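    A minimal sketch of the usual approach in Java (class and variable names are mine): one pass over every cell, accumulating counts in a hash map from value to number of occurrences.

        import java.util.HashMap;
        import java.util.Map;

        public class ValueFrequency {
            public static void main(String[] args) {
                String[][] x = {
                    {"127.0.0.9", "60", "75000", "UDP", "Good"},
                    {"127.0.0.8", "75", "75000", "TCP", "Bad"},
                    {"127.0.0.9", "75", "70000", "UDP", "Good"},
                    {"127.0.0.1", "",   "70000", "UDP", "Good"},
                    {"127.0.0.1", "75", "75000", "TCP", "Bad"}
                };

                // value -> number of occurrences across the whole array
                Map<String, Integer> freq = new HashMap<>();
                for (String[] row : x)
                    for (String value : row)
                        freq.merge(value, 1, Integer::sum);

                System.out.println(freq.get("127.0.0.9")); // prints 2
                System.out.println(freq);
            }
        }

    If the count should be restricted to one field (e.g. only the IP column), loop over row[0] instead of every cell; the algorithm is the same in any language with a hash table.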

    Read the article
