Search Results

Search found 3339 results on 134 pages for 'frequency analysis'.

Page 14 of 134

  • order keywords by frequency in PHP MySQL

    - by Gusepo
    Hello, I've got a database with video IDs and N keywords for each video. I made a table with one video ID and one keyword ID in each row. What's the easiest way to order keywords by frequency, i.e. to extract the number of times each keyword is used and sort by that count? Is it possible to do that in SQL, or do I need to use PHP arrays? Thanks
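
    A minimal sketch of the GROUP BY / ORDER BY COUNT(*) approach this is asking about, using Python's built-in sqlite3 purely for illustration; the table and column names (video_keywords, video_id, keyword_id) are assumptions, not the asker's actual schema:

        import sqlite3

        # Illustrative schema: one (video_id, keyword_id) pair per row, as described above.
        conn = sqlite3.connect(":memory:")
        conn.execute("CREATE TABLE video_keywords (video_id INTEGER, keyword_id INTEGER)")
        conn.executemany(
            "INSERT INTO video_keywords VALUES (?, ?)",
            [(1, 10), (1, 11), (2, 10), (3, 10), (3, 12)],
        )

        # No PHP arrays needed: COUNT(*) with GROUP BY and ORDER BY does the ranking in SQL.
        query = """
            SELECT keyword_id, COUNT(*) AS freq
            FROM video_keywords
            GROUP BY keyword_id
            ORDER BY freq DESC
        """
        for keyword_id, freq in conn.execute(query):
            print(keyword_id, freq)   # e.g. 10 3, then 11 and 12 with 1 each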

    Read the article

  • Crash dump analysis

    - by Ryan Ries
    I hope this isn't a stupid question, and if it is, then I want to at least get it over with so I don't feel so dumb in the future. Here we are, loading up a Windows crash dump with Windbg. Here are the first few lines of the debugger output:
        0: kd> .dumpdebug
        ----- 64 bit Kernel Summary Dump Analysis
        DUMP_HEADER64:
        MajorVersion 0000000f
        MinorVersion 00001db1
        ...
    The MinorVersion I mostly understand. It's hexadecimal and it translates to 7601 in decimal. Windows admins would already be able to tell from that that this must be either a Win7 x64 machine or a 2k8 R2 machine with SP1. But isn't 7601 the build number? It's supposed to be Major.Minor.Build/Revision... right? Also I don't understand the MajorVersion. It should be 6. This version of Windows is 6. But isn't 0000000f in hexadecimal 15 in decimal? The full version string of this version of Windows, when you launch the Command Prompt for instance, is 6.1.7601. If 7601 is the MinorVersion, then what is 1 and what is 6? And why does the crash dump say 0F?

    Read the article

  • Setting up multiple wireless access points on same network

    - by SqlRyan
    I'd like to add wireless to my network, and I need multiple access points to cover the whole area. I'd like to set them up so that there's only one "wireless network" that the clients see, and it switches them as seamlessly as possible between access points as they wander around (if that's not possible, then at least have it so that they don't need to set up the security by hand on each one the first time, if possible). I've searched online, and there are quite a few sets of mixed instructions (same vs. different SSID, frequency, whether the security settings need to match exactly, etc.). Can somebody who has some experience doing this please let me know what they did? I imagine it's pretty simple, but there seems to be no clear-cut "yes, you can do this" online, even though I know you can. I have a mid-size LAN with about 20 workstations and two Domain Controllers on it. Also, I'll be doing this with consumer wireless components, if it makes a difference, not enterprise-level components (i.e. Linksys rather than Cisco).

    Read the article

  • word frequency problem

    - by davit-datuashvili
    Hi, in Programming Pearls I have met the following problem: print words in decreasing order of frequency. As I understand it, suppose we are given a string array, let's call it s (the words are chosen randomly, it does not matter):
        String s[] = {"cat","cat","dog","fox","cat","fox","dog","cat","fox"};
    We see that the string "cat" occurs 4 times, "fox" 3 times and "dog" 2 times, so the result should be: cat fox dog. I have the following code in Java:
        import java.util.*;
        public class string {
            public static void main(String[] args) {
                String s[] = {"fox","cat","cat","fox","dog","cat","fox","dog","cat"};
                Arrays.sort(s);
                int counts;
                int count[] = new int[s.length];
                for (int i=0;i
            }
        }
    So I have sorted the array and created a count array where I write the number of occurrences of each word. The problem is that somehow the index of the integer array element and the string array element are not the same. How can I print the words according to the largest elements of the integer array? Please help me.
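
    For comparison, here is a small sketch of the counting idea in Python rather than the asker's Java: count each word with a hash map and then list the distinct words by descending count. The example array mirrors the one in the question.

        from collections import Counter

        words = ["fox", "cat", "cat", "fox", "dog", "cat", "fox", "dog", "cat"]

        # Count occurrences, then order the distinct words by decreasing frequency.
        counts = Counter(words)
        for word, n in counts.most_common():
            print(word, n)   # cat 4, fox 3, dog 2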

    Read the article

  • Update frequency for dynamic favicons

    - by rosscowar
    I wanted to learn how to dynamically update the favicon in Google Chrome, and I've noticed that the browser seems to throttle how often you can update the favicon per second, which makes things look sloppy. The test page I've made for this is http://staticadmin.com/countdown.html, which is simply a scrolling message displaying the results of a countdown. I added an input field to tweak how many pixels per second are moved by the script, and I've eyeballed the maximum at about 5 frames per second for smooth motion in Google Chrome; I have not tested it in any other browsers. My question is: what is the maximum update frequency, is there any way to change it, and is there a particular reason behind it? NOTE: I've also noticed that this value changes based on window focus as well. It seems to drop to about 1 update per second when the browser's window isn't in focus and returns to the maximum when you return.

    Read the article

  • How to simulate different CPU frequency and limit RAM

    - by user351103
    Hi, I have to build a simulator in C#. This simulator should be able to run a second thread with configurable CPU speed and limited RAM size, e.g. 144 MHz and 50 MB. Of course I know that a simulator can never be as accurate as the real hardware, but I am trying to get reasonably close. At the moment I'm thinking about creating a thread which I will stop/sleep from time to time. Depending on the desired CPU speed, the simulator would adjust the sleep time of this thread and therefore simulate different CPU frequencies. To measure the achieved speed I thought about using PerformanceCounters. But with this approach I don't know how to limit the RAM size the thread could use. Do you have any ideas how to realize such a simulator? Thanks in advance!
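
    One common way to approximate a slower CPU is duty-cycle throttling: let the worker run for a short slice, then force it to sleep so that only a chosen fraction of wall-clock time is spent working. The sketch below shows that idea in Python rather than the asker's C#; it does not address the separate problem of capping RAM, which generally needs OS-level support (e.g. a job object or container limit) rather than anything a thread can do on its own.

        import time
        import threading

        def throttled(work_step, duty_cycle=0.25, slice_s=0.01):
            """Run work_step repeatedly, keeping the CPU busy only duty_cycle of the time."""
            while True:
                start = time.perf_counter()
                while time.perf_counter() - start < slice_s:
                    work_step()                      # burn a small slice of CPU
                # Sleep long enough that the busy slice is duty_cycle of the total period.
                time.sleep(slice_s * (1.0 - duty_cycle) / duty_cycle)

        counter = 0
        def work_step():
            global counter
            counter += 1

        # Hypothetical usage: simulate a CPU running at roughly 25% of the host's speed.
        threading.Thread(target=throttled, args=(work_step,), daemon=True).start()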

    Read the article

  • Data on the Frequency of Edit Operations Required to Correct a Misspelt Word

    - by gvkv
    Does anybody know of any data on the relative frequency of the types of mistakes people make when they misspell a word? I'm not referring to the words themselves, but to the errors made by the typist. For example, I personally make transposition errors the most, followed by deletion errors (that is, omitting a letter I should have typed), substitution errors and, lastly, insertion errors. However, it would not surprise me to find out that typing a wrong letter (a substitution error, e.g. xat instead of cat) is more frequent than omitting a letter. My purpose is to be able to make best guesses at correcting a word when I only have the original user's input. The idea is that if one type of error is more frequent than the others, then it's more likely that correcting a word via that type of operation is correct. I don't object to using a database of commonly misspelt words, but I prefer an algorithmic solution to depending on a corpus, especially if it might be faster.
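
    Absent published frequency data, one pragmatic approach is a weighted Damerau-Levenshtein-style distance: give each edit type a cost reflecting how likely you believe that mistake is (cheaper = more likely), then prefer dictionary words with the lowest weighted distance to the user's input. A rough Python sketch, with the weights purely illustrative:

        def weighted_dl(a, b, w_sub=1.0, w_ins=1.2, w_del=0.9, w_swap=0.7):
            """Edit distance with per-operation weights (illustrative values)."""
            INF = float("inf")
            d = [[INF] * (len(b) + 1) for _ in range(len(a) + 1)]
            for i in range(len(a) + 1):
                d[i][0] = i * w_del
            for j in range(len(b) + 1):
                d[0][j] = j * w_ins
            for i in range(1, len(a) + 1):
                for j in range(1, len(b) + 1):
                    sub = 0.0 if a[i - 1] == b[j - 1] else w_sub
                    d[i][j] = min(d[i - 1][j] + w_del,      # delete from a
                                  d[i][j - 1] + w_ins,      # insert into a
                                  d[i - 1][j - 1] + sub)    # substitute / match
                    if i > 1 and j > 1 and a[i - 1] == b[j - 2] and a[i - 2] == b[j - 1]:
                        d[i][j] = min(d[i][j], d[i - 2][j - 2] + w_swap)  # transposition
            return d[len(a)][len(b)]

        # 'xat' scores closer to 'cat' (one substitution) than to 'chat' under these weights.
        print(weighted_dl("xat", "cat"), weighted_dl("xat", "chat"))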

    Read the article

  • Tracking/Counting Word Frequency

    - by Joel Martinez
    I'd like to get some community consensus on a good design to be able to store and query word frequency counts. I'm building an application in which I have to parse text inputs and store how many times a word has appeared (over time). So given the following inputs:
        "To Kill a Mocking Bird"
        "Mocking a piano player"
    it would store the following values:
        Word     Count
        -----------------
        To       1
        Kill     1
        A        2
        Mocking  2
        Bird     1
        Piano    1
        Player   1
    And later be able to quickly query for the count value of a given arbitrary word. My current plan is to simply store the words and counts in a database, and rely on caching word count values... But I suspect that I won't get enough cache hits to make this a viable solution long term. Can anyone suggest algorithms, or data structures, or any other idea that might make this a well-performing solution?
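
    As a baseline before worrying about caching, an in-memory hash map (word to count) gives O(1) average-time updates and lookups; persistence can then be a periodic flush of the map to the database. A minimal Python sketch of that baseline:

        import re
        from collections import Counter

        counts = Counter()

        def ingest(text):
            # Normalize case so "Mocking" and "mocking" share one counter.
            counts.update(re.findall(r"[a-z']+", text.lower()))

        ingest("To Kill a Mocking Bird")
        ingest("Mocking a piano player")

        print(counts["mocking"])   # 2
        print(counts["piano"])     # 1
        # Periodically write `counts` back to the database (e.g. an UPSERT of word, count).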

    Read the article

  • FFT in MATLAB: wrong 0Hz frequency

    - by roujhan
    Hello, I want to use fft in MATLAB to analyze some experimental data saved as an Excel file. My code:
        A = xlsread('Book.xls');
        G = A';
        x = G(2, :);
        N = length(x);
        F = [-N/2:N/2-1]/N;
        X = abs(fft(x-mean(x),N));
        X = fftshift(X);
        plot(F,X)
    But it plots a graph with a large, spurious 0 Hz component, and my true frequency of about 395 Hz is not shown in the plotted graph. Please tell me what is wrong. Any help would be appreciated.
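
    Two things are worth checking here: the DC component should largely disappear once the mean is subtracted, and the frequency axis is only in Hz if it is scaled by the sampling rate (with [-N/2 : N/2-1]/N alone, the axis is in cycles per sample, so 395 Hz would not appear where expected). A NumPy sketch of the same pipeline, where fs is an assumed sampling rate and the signal is a stand-in for the spreadsheet column:

        import numpy as np
        import matplotlib.pyplot as plt

        fs = 1000.0                      # assumed sampling rate of the experiment, in Hz
        t = np.arange(0, 1.0, 1.0 / fs)
        x = np.sin(2 * np.pi * 395 * t)  # stand-in for the measured column

        N = len(x)
        X = np.abs(np.fft.fft(x - x.mean()))         # remove the mean to suppress the 0 Hz spike
        F = np.arange(-N // 2, N - N // 2) / N * fs  # frequency axis in Hz, not cycles/sample
        plt.plot(F, np.fft.fftshift(X))
        plt.show()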

    Read the article

  • Matlab: plotting frequency distribution with a curve

    - by Kaly
    I have to plot 10 frequency distributions on one graph. In order to keep things tidy, I would like to avoid making a histogram with bins and would prefer having lines that follow the contour of each histogram plot. I tried the following:
        [counts, bins] = hist(data);
        plot(bins, counts)
    But this gives me a very inexact and jagged line. I read about ksdensity, which gives me a nice curve, but it changes the scaling of my y-axis and I need to be able to read the frequencies from the y-axis. Can you recommend anything else?
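
    One way to keep counts on the y-axis while getting a smooth curve is to rescale the density estimate by N times the bin width, since a histogram count in a bin of width w approximates N*f(x)*w. A sketch of that rescaling in Python/SciPy (the question is about MATLAB, where ksdensity output can be rescaled the same way); the data here is a placeholder:

        import numpy as np
        from scipy.stats import gaussian_kde
        import matplotlib.pyplot as plt

        data = np.random.normal(size=500)          # placeholder for one of the ten data sets
        counts, edges = np.histogram(data, bins=10)
        binwidth = edges[1] - edges[0]

        xs = np.linspace(data.min(), data.max(), 200)
        kde = gaussian_kde(data)
        # Rescale the density so the curve is in the same count units as the histogram.
        plt.plot(xs, kde(xs) * len(data) * binwidth, label="smoothed counts")
        plt.plot(0.5 * (edges[:-1] + edges[1:]), counts, "o", label="bin counts")
        plt.legend()
        plt.show()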

    Read the article

  • indexing according to frequency

    - by Bfu38
    Sorry to bother you. I have a data.frame like this:
        24.8 23.2 22.8 22.5 22.5 22.4 22.4 22.4 22.3 22.2 22.2 22.2 22 21.9 21.9 21.8
    I would like to add an index according to the frequency of each value, to get the following output:
        24.8 1
        23.2 1
        22.8 1
        22.5 2
        22.5 2
        22.4 3
        22.4 3
        22.4 3
        22.3 1
        22.2 3
        22.2 3
        22.2 3
        22   1
        21.9 2
        21.9 2
        21.8 1
    How can this be done? In other words, since 24.8 occurs 1 time it gets index 1; since 22.5 occurs two times it gets index 2, and so on.
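
    The question is about R (where something like ave(x, x, FUN = length) or a table() lookup does this), but the underlying idea is simply "pair each value with how often that value occurs". A small Python sketch of that mapping, using the data above:

        from collections import Counter

        values = [24.8, 23.2, 22.8, 22.5, 22.5, 22.4, 22.4, 22.4,
                  22.3, 22.2, 22.2, 22.2, 22.0, 21.9, 21.9, 21.8]

        freq = Counter(values)
        indexed = [(v, freq[v]) for v in values]   # each value paired with its frequency
        for v, n in indexed:
            print(v, n)        # 24.8 1, 23.2 1, 22.8 1, 22.5 2, ...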

    Read the article

  • Importing Analysis Services 2008 KPI's in a PerformancePoint scorecard

    - by Colin
    I am trying to import a KPI from Analysis Services into a PerformancePoint scorecard, and when I do, the Dashboard Designer throws an error: "An unknown error has occurred. If the problem persists contact an administrator. There may be additional information in the server application event log." When I examine the event log, I find the following exception:
        System.IO.FileNotFoundException: Could not load file or assembly 'Microsoft.AnalysisServices, Version=9.0.242.0, Culture=neutral, PublicKeyToken=89845dcd8080cc91' or one of its dependencies. The system cannot find the file specified.
        File name: 'Microsoft.AnalysisServices, Version=9.0.242.0, Culture=neutral, PublicKeyToken=89845dcd8080cc91'
           at Microsoft.PerformancePoint.Scorecards.Server.ImportExportHelper.GetImportableAsKpis(IBpm pmService, DataSource asDataSource)
           at Microsoft.PerformancePoint.Scorecards.Server.PmServer.GetImportableAsKpis(DataSource dataSource)
    I have found this thread which recommends reinstalling Microsoft ADOMD.NET, but the installer for that won't run because the server already has a newer version of the product (the server is running SQL Server Analysis Services 2008, which includes Microsoft.AnalysisServices.AdomdClient.dll version 9.0.3042.0). Anyone have any ideas (short of finding the DLL myself and manually installing it to the GAC)?

    Read the article

  • Java library for HTML analysis

    - by Raj
    Hi, (I've seen similar questions, but I think none of them cater to my specific needs, hence...) I would like to know if there is a Java library for analysis of real-world (read: incomplete, ill-formed) HTML. By analysis, I mean things like:
      - figuring out the most prominent color in an HTML chunk
      - changing that color to some other color (hence, it has to support modification of the HTML as well)
      - pruning out unwanted tags
      - fixing up the HTML to result in a well-formed HTML snippet
    Parts of the last two are done by libraries such as Jericho and jTidy. 'Plugins' on top of these would be great. Thanks in advance!

    Read the article

  • Exclude complete namespace from FxCop code analysis?

    - by hangy
    Is it possible to exclude a complete namespace from all FxCop analysis while still analyzing the rest of the assembly, using the SuppressMessageAttribute? In my current case, I have a bunch of classes generated by LINQ to SQL which cause a lot of FxCop issues, and obviously I will not modify all of those to match FxCop standards, as a lot of those modifications would be lost if I re-generated the classes. I know that FxCop has a project option to suppress analysis on generated code, but it does not seem to recognize the entity and context classes created by LINQ to SQL as generated code.

    Read the article

  • SQL Server - Schema/Code Analysis Rules - What would your rules include?

    - by Randy Minder
    We're using Visual Studio Database Edition (DBPro) to manage our schema. This is a great tool that, among the many things it can do, can analyse our schema and T-SQL code based on rules (much like what FxCop does with C# code), and flag certain things as warnings and errors. Some example rules might be that every table must have a primary key, no underscores in column names, every stored procedure must have comments, etc. The number of rules built into DBPro is fairly small, and a bit odd. Fortunately DBPro has an API that allows the developer to create their own. I'm curious as to the types of rules you and your DB team would create (both schema rules and T-SQL rules). Looking at some of your rules might help us decide what we should consider. Thanks - Randy

    Read the article

  • Thoughts on Static Code Analysis Warning CA1806 for TryParse calls

    - by Tim
    I was wondering what people's thoughts were on the CA1806 (DoNotIgnoreMethodResults) static code analysis warning when using FxCop. I have several cases where I use Int32.TryParse to pull in internal configuration information that was saved in a file. I end up with a lot of code that looks like:
        Int32.TryParse(someString, NumberStyles.Integer, CultureInfo.InvariantCulture, out intResult);
    MSDN says the default result of intResult is zero if something fails, which is exactly what I want. Unfortunately, this code will trigger CA1806 when performing static code analysis. It seems like a lot of redundant/useless code to fix the errors with something like the following:
        bool success = Int32.TryParse(someString, NumberStyles.Integer, CultureInfo.InvariantCulture, out intResult);
        if (!success)
        {
            intResult = 0;
        }
    Should I suppress this message or bite the bullet and add all this redundant error checking? Or maybe someone has a better idea for handling a case like this? Thanks!

    Read the article

  • Add title to meta analysis forest plot

    - by Timothy Alston
    I am meta-analysing some studies and drawing a forest plot for my results. However, I can't seem to get the forest plot to display the title. An example of my code is:
        require(meta)
        parameter1 <- metaprop(sm="PLOGIT", event=c(4,16,3,2,10,1,0,2), n=c(90,402,89,29,153,86,21,48),
                               level = 0.95,
                               studlab=c("study 1", "study 2", "study 3", "study 4",
                                         "study 5", "study 6", "study 7", "study 8"),
                               title="meta analysis 1")
        forest(parameter1)
    When it produces the forest plot, the title "meta analysis 1" is missing. How can I add this in? Thanks in advance, Timothy

    Read the article

  • MS Analysis Services OLAP API for Python

    - by Kaloyan Todorov
    I am looking for a way to connect to a MS Analysis Services OLAP cube, run MDX queries, and pull the results into Python. In other words, exactly what Excel does. Is there a solution in Python that would let me do that? Someone asking a similar question was pointed to Django's ORM. As much as I like the framework, this is not what I am looking for. I am also not looking for a way to pull rows and aggregate them -- that's what Analysis Services is for in the first place. Ideas? Thanks.
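
    One workable route on Windows is COM automation of ADO with the MSOLAP OLE DB provider (via pywin32), which flattens an MDX result into an ordinary recordset. A rough sketch only: it assumes pywin32 and the MSOLAP provider are installed, and the server, catalog, cube and MDX names below are placeholders, not anything from the question.

        import win32com.client   # assumes pywin32 and the MSOLAP provider are installed

        conn = win32com.client.Dispatch("ADODB.Connection")
        conn.Open("Provider=MSOLAP;Data Source=my_ssas_server;Initial Catalog=MyCubeDb")

        mdx = """
        SELECT [Measures].[Sales Amount] ON COLUMNS,
               [Date].[Calendar Year].Members ON ROWS
        FROM [MyCube]
        """
        rs = win32com.client.Dispatch("ADODB.Recordset")
        rs.Open(mdx, conn)

        while not rs.EOF:
            row = [rs.Fields.Item(i).Value for i in range(rs.Fields.Count)]
            print(row)
            rs.MoveNext()

        rs.Close()
        conn.Close()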

    Read the article

  • Data structure for pattern matching.

    - by alvonellos
    Let's say you have an input file with many entries like these:
        date, ticker, open, high, low, close, <and some other values>
    And you want to execute a pattern-matching routine on the entries (rows) in that file, using a candlestick pattern, for example (see Doji). That pattern can appear on any uniform time interval (let t = 1s, 5s, 10s, 1d, 7d, 2w, 2y, and so on...). Say a pattern-matching routine can take an arbitrary number of rows to perform an analysis and contain an arbitrary number of subpatterns. In other words, some patterns may require 4 entries to operate on. Say also that the routine may later have to find and classify extrema (local and global maxima and minima as well as inflection points) for the ticker over a closed interval; for example, you could say that a cubic function (x^3) has its extrema on the interval [-1, 1] (see link). What would be the most natural choice in terms of a data structure? What about an interface that conforms a Ticker object containing one row of data to a collection of Ticker, so that an arbitrary pattern can be applied to the data? What's the first thing that comes to mind? I chose a doubly-linked circular linked list that has the following methods:
        push_front()
        push_back()
        pop_front()
        pop_back()
        []   // overloaded, can be used with negative parameters
    But that data structure seems very clumsy: since so much pushing and popping is going on, I have to make a deep copy of the data structure before running an analysis on it. So, I don't know if I made my question very clear, but the main points are: What kind of data structures should be considered when analyzing sequential data points to conform to a pattern that does NOT require random access? What kind of data structures should be considered when classifying extrema of a set of data points?
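
    For the "pattern needs the last k rows" part specifically, a fixed-size sliding window avoids the copy-heavy push/pop gymnastics described above: keep only the most recent k entries in a ring buffer and hand the window to each matcher as a read-only sequence. A Python sketch of that shape; the Doji test and the sample rows are illustrative stand-ins, not the asker's data.

        from collections import deque, namedtuple

        Tick = namedtuple("Tick", "date ticker open high low close")

        def is_doji(window):
            """Illustrative single-candle pattern: open and close nearly equal."""
            t = window[-1]
            return abs(t.close - t.open) <= 0.1 * (t.high - t.low or 1.0)

        def scan(rows, pattern, width):
            window = deque(maxlen=width)      # ring buffer: old rows fall off automatically
            for row in rows:
                window.append(row)
                if len(window) == width and pattern(window):
                    yield row.date            # report where the pattern completed

        rows = [Tick("2024-01-0%d" % i, "XYZ", 10.0, 10.5, 9.5, 10.0 + 0.01 * i)
                for i in range(1, 8)]
        print(list(scan(rows, is_doji, width=1)))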

    Read the article

  • need explanation on amortization in algorithm

    - by Pradeep
    I am learning algorithm analysis and came across an analysis tool for understanding the running time of an algorithm with widely varying performance, which is called amortization. The author quotes: "An array with an upper bound of n elements, with a fixed bound N on its size. Operation clear takes O(n) time, since we should dereference all the elements in the array in order to really empty it." The above statement is clear and valid. Now consider the next passage: "Now consider a series of n operations on an initially empty array. If we take the worst-case viewpoint, the running time is O(n^2), since the worst case of a single clear operation in the series is O(n) and there may be as many as O(n) clear operations in the series." From the above statement, how is the time complexity O(n^2)? I did not understand the logic behind it. If n operations are performed, how is it O(n^2)? Please explain what the author is trying to convey.
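
    The O(n^2) figure is just the pessimistic product: there are n operations, each clear can touch up to n elements, so the worst case is bounded by n * n. The point of amortization is that this bound is not tight, because a clear can only dereference elements that earlier insertions put there, so the actual work over the whole sequence is at most about 2n steps. A small Python counting sketch of that argument:

        class CountingArray:
            """Array with a work counter, to compare the worst-case bound with actual totals."""
            def __init__(self):
                self.items, self.work = [], 0

            def add(self, x):
                self.items.append(x)
                self.work += 1                    # one step per insertion

            def clear(self):
                self.work += len(self.items)      # one step per element dereferenced
                self.items = []

        n = 1000
        arr = CountingArray()
        for i in range(n):
            arr.clear() if i % 10 == 9 else arr.add(i)

        print(arr.work, "<=", 2 * n)   # actual work stays linear, far below the n*n bound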

    Read the article
