Search Results

Search found 3339 results on 134 pages for 'frequency analysis'.

Page 19 of 134

  • Using MySQL as data source in Microsoft SQL Server Analysis Services

    - by coldilocks
    Hi, I have installed the latest .NET connector (http://www.mysql.com/downloads/connector/net/). I can add MySQL databases as Data Sources, and I can even browse through the data from Business Intelligence Studio. The problem is that I CANNOT create a data source view; or, if I do create one without tables, trying to add them after the fact gives me the same error. Specifically, it looks like the data source view wizard submits queries against the MySQL database using square brackets, and the query bombs. I get an error message like: You have an error in your SQL syntax; check the manual that corresponds to your MySQL server version for the right syntax to use near '[my_db].[cheatType]' at line 2. So, in summary: has anyone been able to create a data source view using MySQL tables, and if so, can they please show me how it's done? Thanks for any help!

    Read the article

  • Merging k sorted linked lists - analysis

    - by Kotti
    Hi! I am thinking about different solutions for one problem. Assume we have K sorted linked lists and we are merging them into one. All these lists together have N elements. The well-known solution is to use a priority queue, popping / pushing the first element of every list, and I can understand why it takes O(N log K) time. But let's take a look at another approach. Suppose we have some MERGE_LISTS(LIST1, LIST2) procedure that merges two sorted lists in O(T1 + T2) time, where T1 and T2 stand for the sizes of LIST1 and LIST2. What we do now is pair the lists up and merge them pair-by-pair (if the number is odd, the last list can, for example, be ignored in the first steps). This gives the following "tree" of merge operations, where N1, N2, N3... stand for the sizes of LIST1, LIST2, LIST3...:

    O(N1 + N2) + O(N3 + N4) + O(N5 + N6) + ...
    O(N1 + N2 + N3 + N4) + O(N5 + N6 + N7 + N8) + ...
    O(N1 + N2 + N3 + N4 + ... + NK)

    It looks obvious that there will be log(K) of these rows, each of them doing O(N) work in total, so the time for MERGE(LIST1, LIST2, ..., LISTK) would actually equal O(N log K). My friend told me (two days ago) it would take O(K N) time. So, the question is: did I f%ck up somewhere, or is he actually wrong about this? And if I am right, why can't this 'divide & conquer' approach be used instead of the priority queue approach?
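    A minimal sketch of that pairwise scheme in Python (the names are made up for illustration; each pass touches every element once, and there are about log2(K) passes):

        def merge_two(a, b):
            # Standard two-way merge of two sorted lists: O(len(a) + len(b)).
            result, i, j = [], 0, 0
            while i < len(a) and j < len(b):
                if a[i] <= b[j]:
                    result.append(a[i]); i += 1
                else:
                    result.append(b[j]); j += 1
            result.extend(a[i:])
            result.extend(b[j:])
            return result

        def merge_k(lists):
            # Pairwise (divide & conquer) merging: each pass halves the number
            # of lists, so the total work is O(N log K).
            lists = [lst for lst in lists if lst]
            if not lists:
                return []
            while len(lists) > 1:
                merged = [merge_two(lists[i], lists[i + 1])
                          for i in range(0, len(lists) - 1, 2)]
                if len(lists) % 2 == 1:   # odd list out rides along to the next pass
                    merged.append(lists[-1])
                lists = merged
            return lists[0]

        assert merge_k([[1, 4], [2, 5], [3]]) == [1, 2, 3, 4, 5]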

    Read the article

  • Windows Disk I/O Analysis

    - by Jonathon
    It appears that we are having a problem with disk I/O speed on our Windows 2003 Enterprise Edition server (64-bit). As we were initializing a database that created two 1 GB tablespaces on 3 different machines, it became obvious that the two smaller machines (each 32-bit Windows 2003 Standard Edition with less RAM) beat the larger machine handily when creating the files: the larger machine took 10x as long to create the tablespaces as the other machines did. Now I am left wondering how that could be. What programs or scripts would you recommend for tracking down the I/O problem? I think the issue may be with the controller card (all boxes are hardware RAID 10 but have different controller cards), but I would like to check the actual disk I/O speed as well, so I have some hard numbers to work with. Any help would be appreciated.

    Read the article

  • how to use insert command in sql analysis services

    - by imcolin
    Hi all, I have a problem regarding how to use the insert command in SSAS. I send a request to the server, and the server responds with an error saying "No binding exists for it". My request envelope is (inner XML tags were stripped when the question was indexed): <Envelope xmlns="http://schemas.xmlsoap.org/soap/envelope/"> SSAS SsasCubes Sales Territory Sales Territory Key fds aa 1033. Should I create an attribute binding before using the insert command? Can anybody help me? One more question: when I use the Update command in SSAS, I also get an error saying "Attributes cannot appear in Where". Can you tell me why? Thanks

    Read the article

  • Performing your own runtime analysis of your code in C#

    - by Matt
    I have written a large C# app with many methods in many classes, and I'm trying to keep a log of what gets called and how often during my development (I keep a record in a DB). Every method is padded with the following calls:

        void myMethod()
        {
            log(entering, args[]);
            // ... method body ...
            log(exiting, args[]);
        }

    Since I want to do this for all my methods, is there a better way to do it than replicating those lines of code in every method?
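    In .NET this kind of cross-cutting entry/exit logging is usually delegated to an AOP or interception library (PostSharp and Castle DynamicProxy are common choices) rather than pasted into every method. As a language-neutral sketch of the wrap-the-method idea, here is a hypothetical Python decorator; the names are made up:

        import functools

        def logged(func):
            # Wrap a function so entry and exit are recorded automatically,
            # instead of repeating log calls in every method body.
            @functools.wraps(func)
            def wrapper(*args, **kwargs):
                print(f"entering {func.__name__} args={args}")
                try:
                    return func(*args, **kwargs)
                finally:
                    print(f"exiting {func.__name__}")
            return wrapper

        @logged
        def my_method(x):
            return x * 2

        my_method(21)   # logs the entry and exit around the call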

    Read the article

  • WinUSB application or User-Mode Driver as a filter driver for USB Analysis/Sniffer/Trending

    - by Robert
    A question, maybe, for some who have worked extensively with the WinUSB APIs or user-mode USB drivers: does anyone know if the WinUSB API or a user-mode driver can be used as a passive observer of USB connections, capturing notification of interrupts, control requests, data transfers, etc. without interfering with other applications (such as iTunes) which would obviously require concurrent access to the device while my application is monitoring the connection and displaying data on it? Or do you pretty much have to write a kernel-mode filter driver and inject yourself into the USB stack to make that happen? In the past, there have been a few credible options (libusb-win32 and usbsnoop, to be specific), though both are built around the old DDK, not the Windows Driver Foundation, and are no longer supported on a regular basis. As a result, I'm hesitant to build anything significant around them.

    Read the article

  • Function Point Analysis -- a seriously over-estimating technique?

    - by kizzx2
    I know questions about FPA have been asked numerous times before, but this time I'm taking a more analytical angle at it, backed up with data.

    1. First, some data

    This question is based on a tutorial. The author had a "Sample Count" section where he demonstrated the technique step by step (you can see some screenshots of his sample application here). In the end, he calculated the unadjusted FP count to be 99. There is another article on InformIT with industry data on typical hours/FP; it ranges from 2 hours/FP to 27.4 hours/FP. Let's stick with 2 for the moment (since SO readers are probably the more efficient crowd :p).

    2. Reality check!?

    Now just check out the screenshots again and do a little math:

    99 * 2 = 198 hours
    198 hours / 40 hours per week = 5 weeks

    Seriously? That sample application is going to take 5 weeks to implement? Is it just my feeling, or would it not take any decent programmer longer than one week to have it completed? Now let's try estimating the cost of the project. We'll use New York's current minimum wage (Wikipedia), which is $7.25:

    198 * $7.25 = $1435.50

    From what I could see from the screenshots, this application is a small Excel-improvement app. I could have bought MS Office Pro for 200 bucks, which gives me greater interoperability (.xls files) and flexibility (spreadsheets). (For the record, that same Web site has another article discussing productivity. It seems they typically use 4.2 hours/FP, which gives us even more shocking stats:

    99 * 4.2 = 415 hours = 10 weeks = almost 3 whopping months!
    415 hours * $7.25 = $3000 zomg

    That's even assuming that all our poor coders get the minimum wage!)

    3. Am I missing something here?

    Right now, I can come up with several possible explanations:

    - FPA is really only suited to bigger projects (1000+ FPs), so it becomes extremely inaccurate at smaller scales.
    - The hours/FP metric fluctuates abruptly from team to team and project to project. For a small project like this, we could have used something like 0.5 hours/FP. (That kind of makes the whole estimation exercise pointless, unless my firm does the same type of project for several years with the same team, which is not really common.)

    From my experience with several software metrics, Function Point is really not a lightweight metric. If the hours/FP figure fluctuates so much, then what's the point? Maybe I could have gone with User Story Points, which are a lot faster to produce and arguably almost as uncertain. What would be the FP experts' answers to this?

    Read the article

  • Code Analysis Error: Declare types in namespaces

    - by George
    In VS2010, I analyzed my code and got this warning: Warning 64 CA1050 : Microsoft.Design : 'ApplicationVariables' should be declared inside a namespace. C:\My\Code\BESI\BESI\App_Code\ApplicationVariables.vb 10 C:\...\BESI\ Here is some reference info on the error. Essentially, I tried to create a class to be used to access data in the Application object in a typed way. The warning message said that unless I put my ApplicationVariables class in a namespace, I wouldn't be able to use it. But I am using it, so what gives? Also, here is a link to another StackOverflow article that talks about how to disable this warning in VS2008, but how would you disable it for 2010? There is no GlobalSuppressions.vb file for VS2010. Here is the code it is complaining about:

        Public Class ApplicationVariables
            'Shared Sub New()
            'End Sub 'New
            Public Shared Property PaymentMethods() As PaymentMethods
                Get
                    Return CType(HttpContext.Current.Application.Item("PaymentMethods"), PaymentMethods)
                End Get
                Set(ByVal value As PaymentMethods)
                    HttpContext.Current.Application.Item("PaymentMethods") = value
                End Set
            End Property
            'Etc, Etc...
        End Class

    Read the article

  • lexical analysis gives only one output?

    - by Caffè
    I tested this example (lexe.java), but it gave me only one output. I gave this text as the reader input:

        public class LexeTest {
            private int a = 14;
        }

    And the nextToken() function is:

        public Category nextToken () {
            if (inp.findWithinHorizon (tokenPat, 0) == null)
                return Category.EOF;
            else {
                lastLexeme = inp.match ().group (0);
                if (inp.match ().start (1) != -1)
                    return nextToken ();
                else if (inp.match ().start (2) != -1)
                    return Category.IDENT;
                else if (inp.match ().start (3) != -1)
                    return Category.NUMERAL;
                Category result = tokenMap.get (lastLexeme);
                if (result == null)
                    return Category.ERROR;
                else
                    return result;
            }
        }

    Inside the main method:

        System.out.println(lexeObject.nextToken());

    The output is: IDENT. Why? The text file contains multiple keywords. Does anyone know what the problem is?
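    Note that each call to a nextToken()-style scanner consumes exactly one token, so a driver has to keep calling it until EOF to see them all. A toy Python analogue of the same scanner-loop pattern (not the poster's Java code; the token classes here are simplified):

        import re

        TOKEN_PAT = re.compile(r"\s*(?:([A-Za-z_]\w*)|(\d+)|(\S))")
        KEYWORDS = {"public", "class", "private", "int"}

        def tokens(text):
            # Keep scanning until the input is exhausted; one call to the
            # scanner yields one token, so the loop is essential.
            pos = 0
            while pos < len(text):
                m = TOKEN_PAT.match(text, pos)
                if not m:
                    break
                pos = m.end()
                ident, number, other = m.groups()
                if ident:
                    yield ("KEYWORD" if ident in KEYWORDS else "IDENT"), ident
                elif number:
                    yield "NUMERAL", number
                else:
                    yield "OP", other

        for cat, lexeme in tokens("public class LexeTest { private int a = 14; }"):
            print(cat, lexeme)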

    Read the article

  • problem with squid and ecap for content analysis

    - by Cornel
    Hi, I need to inspect the content of a form before it is processed on the server. I am using the Squid proxy server with an eCAP adapter; in the adapter I can see the text typed into the text area, but when the eCAP adapter (which lets me look at the incoming data) is in place, the content never reaches my JSP page. I am using Squid 3.1.10 with the eCAP adapter library libecap v0.0.3 and WebSphere App Server 6.1. Does anybody have an idea what is causing this? Thank you, Cornel

    Read the article

  • Wav analysis in python

    - by mudder
    I'm looking for a Python library that will help me analyze the audio in WAV files. At the very least I'm hoping to find some kind of interface that understands the .wav format so that I don't have to :P. At best, I need a module with methods for reading waveform parameters like pitch, volume levels, etc.
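    For the container-format part, Python's standard library already ships a wave module; here is a minimal sketch that prints the basic parameters and yields a rough RMS volume per chunk (it leans on the stdlib audioop helper, which was removed in Python 3.13, and pitch detection would need something heavier, such as an FFT):

        import wave
        import audioop  # stdlib PCM helpers (removed in Python 3.13)

        def rms_levels(path, chunk_frames=1024):
            # Yield a rough RMS "volume" value for each chunk of frames.
            with wave.open(path, "rb") as wav:
                print(wav.getnchannels(), "channels,",
                      wav.getframerate(), "Hz,",
                      wav.getsampwidth(), "bytes/sample")
                while True:
                    frames = wav.readframes(chunk_frames)
                    if not frames:
                        break
                    yield audioop.rms(frames, wav.getsampwidth())

        # Hypothetical usage; 'example.wav' is a placeholder path.
        # for level in rms_levels("example.wav"):
        #     print(level)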

    Read the article

  • Automatic Data sorting/ analysis in Excel

    - by hkf
    I have data in the form of four columns. The first three columns represent time, value1, and value2. The fourth column is binary, all 0's or 1's. Is there a way to tell Excel to delete time, value1 and value2 when the corresponding binary value in column four is 0? I know this would be a lot easier in C++ or MATLAB, but for reasons beyond my control, I must do it in Excel.
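    (The asker is stuck with Excel, but for comparison with the C++/MATLAB remark, the same filter is a couple of lines with pandas; the file and column names below are hypothetical:)

        import pandas as pd

        # Assume the sheet was exported to CSV with these four columns.
        df = pd.read_csv("data.csv", names=["time", "value1", "value2", "flag"])
        df[df["flag"] == 1].to_csv("filtered.csv", index=False)  # drop flag-0 rows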

    Read the article

  • Java Function Analysis

    - by khan
    Okay... I am a total Python guy and have only rarely worked with Java and its methods. The situation is that I have a Java function that I have to explain to my instructor, and I have no clue how to do so; if one of you can read this properly, kindly help me out by breaking it down and explaining it. Also, I need to find out whether there is any flaw in its operation (i.e. usage of loops, etc.). Finally, what is the difference between the 'String' and 'String[]' types?

        public static void search(String findfrom, String[] thething) {
            if (thething.length > 5) {
                System.err.println("The thing is quite long");
            } else {
                int[] rescount = new int[thething.length];
                for (int i = 0; i < thething.length; i++) {
                    String[] characs = findfrom.split("[ \"\'\t\n\b\f\r]", 0);
                    for (int j = 0; j < characs.length; j++) {
                        if (characs[j].compareTo(thething[i]) == 0) {
                            rescount[i]++;
                        }
                    }
                }
                for (int j = 0; j < thething.length; j++) {
                    System.out.println(thething[j] + ": " + rescount[j]);
                }
            }
        }

    Read the article

  • Balanced Search Tree Query, Asymptotic Analysis

    - by AGeek
    Hi, the situation is as follows: we have n numbers and we have to print them in sorted order. We have access to a balanced dictionary data structure, which supports the operations search, insert, delete, minimum and maximum, each in O(log n) time. We want to retrieve the numbers in sorted order in O(n log n) time using only insert and in-order traversal. The answer to this is:

        Sort()
            initialize(t)
            while (not EOF)
                read(x)
                insert(x, t)
            Traverse(t)

    Now the query: if we read the elements in time "n" and then traverse the elements in "log n" (in-order traversal) time, then the total time for this algorithm is (n + log n), according to me. Please explain how the time is calculated. How does it sort the list in O(n log n) time? Thanks.
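    For intuition about where the O(n log n) comes from: each of the n inserts costs O(log n) on a balanced tree, while the final in-order traversal visits every node once and is O(n), not O(log n). A toy sketch in Python (a plain unbalanced BST stands in for the balanced dictionary, so the O(log n)-per-insert bound is only an assumption here):

        class Node:
            def __init__(self, key):
                self.key, self.left, self.right = key, None, None

        def insert(root, key):
            # O(height) per insert; a balanced tree keeps the height at O(log n).
            if root is None:
                return Node(key)
            if key < root.key:
                root.left = insert(root.left, key)
            else:
                root.right = insert(root.right, key)
            return root

        def traverse(root, out):
            # In-order traversal visits each node exactly once: O(n) total.
            if root is not None:
                traverse(root.left, out)
                out.append(root.key)
                traverse(root.right, out)

        def tree_sort(numbers):
            root = None
            for x in numbers:       # n inserts, O(log n) each: O(n log n)
                root = insert(root, x)
            out = []
            traverse(root, out)     # plus O(n) for the traversal
            return out

        print(tree_sort([5, 1, 4, 2, 3]))   # [1, 2, 3, 4, 5]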

    Read the article

  • Code Analysis Warning CA1004 with generic method

    - by Vaccano
    I have the following generic method:

        // Load an object from the disk
        public static T DeserializeObject<T>(String filename) where T : class
        {
            XmlSerializer xmlSerializer = new XmlSerializer(typeof(T));
            try
            {
                TextReader textReader = new StreamReader(filename);
                var result = (T)xmlSerializer.Deserialize(textReader);
                textReader.Close();
                return result;
            }
            catch (FileNotFoundException)
            {
            }
            return null;
        }

    When I compile, I get the following warning: CA1004 : Microsoft.Design : Consider a design where 'MiscHelpers.DeserializeObject(string)' doesn't require explicit type parameter 'T' in any call to it. I have considered this, and I don't know a way to do what it requests without limiting the types that can be deserialized. I freely admit that I might be missing an easy way to fix this. But if I am not, is my only recourse to suppress the warning? I have a clean project with no warnings or messages, and I would like to keep it that way. I guess I am really asking: why is this a warning? At best it seems like it should be a message, and even that seems a bit much. Either it can or it can't be fixed; if it can't, then you are just stuck with the warning, with no recourse but suppressing it. Am I wrong?

    Read the article

  • How to create a conditional measure in SQL Server 2008 Analysis services

    - by Jonathan
    Hi there. I am not sure the title has the correct terms; I am a developer and am very new to cubes. I have a cube which has data associated with materials that are broken down into chemical compounds. For example, a rock material is 10% of this chemical and 10% of that chemical, etc. Samples are taken daily, and sample is a dimension with date, etc. So the measure needs to average over the sample dimension but sum across the chemical compound dimension (to add up to 100%, for example). Is this at all possible?

    Read the article

  • MySQL Non Index Queries Analysis

    - by Markii
    I'm using the log-queries-not-using-indexes option, but it also logs queries that do use indexes and are just more advanced or use IFs. Is there a parser or a program out there that can analyze the log and give me a literal output saying "table.column should be an index"? Thanks

    Read the article

  • Image analysis on the iPhone

    - by user362808
    Hi, guys! Hopefully a quick one. I'm working on a devious little algorithm that will automatically scan a user's iPhone for a photo of some emotional poignance (e.g. go back in time a ways, look for one of a group of photos that are proximally timestamped, etc.). I need a quick-and-dirty way of picking out a photo containing two people in close proximity to each other. I know that the OpenCV Python lib [for image processing] works on the iPhone; I'm just curious whether it's actually the best way of doing this (from scratch or otherwise). Cheers!
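    For the "two people in close proximity" part, here is a rough sketch using OpenCV's bundled Haar face cascade from Python (cv2.data.haarcascades assumes a reasonably recent opencv-python build, and the distance threshold is an arbitrary placeholder):

        import cv2

        def has_two_close_faces(image_path, max_gap_ratio=0.5):
            # Detect frontal faces, then test whether any two face centers are
            # closer than max_gap_ratio * image width (an arbitrary "closeness").
            img = cv2.imread(image_path)
            if img is None:
                return False
            gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
            cascade = cv2.CascadeClassifier(
                cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
            faces = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
            centers = [(x + w / 2, y + h / 2) for (x, y, w, h) in faces]
            limit = img.shape[1] * max_gap_ratio
            for i in range(len(centers)):
                for j in range(i + 1, len(centers)):
                    dx = centers[i][0] - centers[j][0]
                    dy = centers[i][1] - centers[j][1]
                    if (dx * dx + dy * dy) ** 0.5 < limit:
                        return True
            return False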

    Read the article

  • Ignore designer and generated files in Resharper analysis

    - by RaYell
    I've been using ReSharper for a few days and I really like the tool, but there's one thing that annoys me about it, and I wonder if it can be changed. I'm getting lots of issue notifications from generated code (almost 1400 in my project). I want to mark those files as ignored so they won't be checked, as you can do with StyleCop and Code Analysis. Unfortunately, it looks like ReSharper ignores the Generated Code settings in its options, because I'm still getting the same notifications. I've tried setting a file mask (i.e. for *.resx) and adding files manually to the generated list, but it doesn't change anything. I don't know if it matters, but I'm using VS 2010.

    Read the article

  • Sum and average over null values SQL Server 2008 Analysis Services

    - by Jonathan
    I have a simple problem, I think, but I have googled and can't find the solution. I have a cube with MeasureA, MeasureB and MeasureC. Not all three measures have values for each record; sometimes they can be null, depending on whether they were applicable. For my totals I need an average, but the average must not take the nulls into account. When I view the measures, the null values show as zeros. Any help will be much appreciated.

    Read the article
