Search Results

Search found 1745 results on 70 pages for 'probability theory'.


  • Semantic algorithms

    - by Mythago
    I have a more theoretical than practical question. I'll start with an example: when I get an email and open it on my iPad, there is a feature that recognizes the date and time in the text and offers to create an event in the calendar. Simply put, I want to know how this is done in theory. I believe it is some kind of semantic parsing, and I would be glad if someone could point me to resources where I can read more about this.
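
    A minimal sketch (my own illustration, not how the mail client actually does it) of the rule-based entity-extraction idea: match date/time patterns with regular expressions and turn each match into a candidate calendar event. In practice Apple exposes this kind of detection through NSDataDetector, which handles far more formats and locales.

        import re
        from datetime import datetime

        # toy pattern: dates like "21/06/2024", optionally followed by "14:30"
        PATTERN = re.compile(r"\b(\d{1,2})/(\d{1,2})/(\d{4})(?:\s+(\d{1,2}):(\d{2}))?\b")

        def extract_datetimes(text):
            """Yield a datetime for every date/time pattern found in the text."""
            for m in PATTERN.finditer(text):
                day, month, year, hour, minute = m.groups()
                yield datetime(int(year), int(month), int(day),
                               int(hour or 0), int(minute or 0))

        mail = "Let's meet on 21/06/2024 14:30 at the office."
        print(list(extract_datetimes(mail)))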

    Read the article

  • Come up with a real-world problem in which only the best solution will do (a problem from Introduction to algorithms) [closed]

    - by Mike
    EDITED (I realized that the question certainly needs context.) Problem 1.1-5 in Introduction to Algorithms by Thomas Cormen et al. is: "Come up with a real-world problem in which only the best solution will do. Then come up with one in which a solution that is “approximately” the best is good enough." I'm interested in its first statement, and (from my understanding) it asks me to name a real-world problem where only the exact solution will work, as opposed to a real-world problem where a good-enough solution will be OK. So what is the difference between an exact and a good-enough solution? Consider some physics problem, for example the simulation of fluid flow in a permeable medium. To make this simulation happen, some simplifying assumptions have to be made when deriving a mathematical model; otherwise the model becomes too complex to solve. Virtually every particle in the universe has some influence on the fluid flow, but not all particles are equal: those that form the permeable medium are much more influential than the ones located light years away. Then, when the mathematical model needs to be solved, an exact solution can rarely be found unless the model is simple enough (which probably means the model isn't close to reality). We take an approximate numerical method and, after hours of coding and days of verification, come up with a program or algorithm which is a solution. And if the model and the algorithm give results close to the real problem to some degree, that is a good-enough solution. It's worth noting the difference between an exact solution algorithm and an exact computation result. When considering real-world problems and real-world computing machines, I believe that no solution of a physical problem involving any calculation can be exact, because universal physical constants are represented approximately in the computer. Any number is represented with limited precision, at the very least limited by the amount of memory available to the computing machine. I can imagine plenty of problems where a good-enough solution, good to some degree, will work: train scheduling, automated trading, satellite orbit calculation, health care expert systems. In those cases exact solutions can't be derived due to constraints on computation time, limitations of computer memory, or the nature of the problem. I googled this question and like what this guy suggests: there are kinds of mathematical problems that need exact solutions (a little note here: because the question is taken from the book Introduction to Algorithms, the term "solution" means an algorithm or a program, which in this case gives an exact answer on each input). But that is probably more of theoretical interest. So I would like to narrow the question down to: what are the real-world practical problems where only the best (exact) solution algorithm or program will do (and not a good-enough solution)? There are problems like breaking cryptographic ciphers where only the exact solution matters in practice, and again in practice the process of deciphering without knowing the secret should take a reasonable amount of time. Returning to the original question, this is a problem where a good-enough (fast-enough) solution will do; there is no practical need for an instant crack, though it is desired. So the quality of "best" can be understood in any sense: exact, fastest, requiring the least memory, having the minimal possible network traffic, etc. And still I want this question to be theoretical if possible.
    In the sense that there may be an example of a computer X that has a limited resource R of amount Y, where the best solution to problem P is one that takes no more than the available Y for inputs of size N*Y. But that is the problem of finding a solution for P on computer X, which is... well, good enough. My final thought is that we live in a world where programming solutions for practical purposes are required to be good enough, in rare cases really very, very good, but still not the best ones. Isn't it so? :) If not, can you provide an example? Or can you name any such unsolved problem of practical interest?

    Read the article

  • Validation and Verification explanation (Boehm) - I cannot understand its point

    - by user970696
    Hopefully my last thread about V&V, as I have found that B. Boehm's text is one I just do not understand well (likely my technical English is not that good). http://csse.usc.edu/csse/TECHRPTS/1979/usccse79-501/usccse79-501.pdf Basically he says that verification is about checking that products derived from the requirements baseline correspond to it, and that any deviation leads only to changes in these derived products (design, code). But he also says it begins with design and ends with acceptance tests (you can check the V model inside). The thing is, I have accepted ISO 12207's view that all testing is validation, yet that does not make any sense here: in order to be sure the product complies with the requirements (acceptance test) I need to test it. It also says that validation problems mean the requirements are bad and need to be changed, which does not happen with the testing that testers do; they just check correspondence with the requirements.

    Read the article

  • Quality Assurance=inspections, reviews..?

    - by user970696
    Studying this subject extensively, I find that most books state the following: Quality Assurance: a prevention activity; the act of inspection, reviewing, etc. Quality Control: testing. There are some exceptions that say QA deals only with processes (planning, strategy, application of standards, etc.), which is IMHO much closer to real QA, yet I cannot find any good reference in Google Books. I believe that inspections, reviews and testing are all quality control, as they are about checking products, no matter whether it is the final product or work products. The problem is that so many authors do not agree. I would be grateful for a detailed explanation, ideally with a reference.

    Read the article

  • Is there such a thing these days as programming in the small?

    - by WeNeedAnswers
    With all the programming languages that are out there, what exactly does it mean to program in the small, and is it still possible without the possibility of re-purposing to the large? The original article that mentions programming in the small dates to 1975 and referred to scripting languages (as glue languages). Maybe I am missing the point, but any language that you can build components of code out of I would regard as able to handle "the large". Is there confusion about what objects are, and do they really figure as mandatory for being able to handle "the large"? Many have argued that this is the true meaning of "in the large" and that the concepts of objects are the best fit for the job.

    Read the article

  • Bug severity classification issues

    - by KyleMinn
    In a book I have, there is the following classification of defects: Critical: a defect receives a "critical" severity level if one or more critical system functionalities are impaired and there is no workaround. High: a defect receives a "high" severity level if some fundamental system functionalities are impaired but a workaround exists. Medium: a defect receives a "medium" severity level if no critical functionality is impaired and a workaround exists for the defect. Low: a defect receives a "low" severity level if the problem involves a cosmetic feature of the system. To be honest, I do not get it. For example, point 2: what if a fundamental but not critical feature is impaired and there is NOT a workaround? The same for point 3: what if no critical functionality is affected but there is no workaround? E.g. an optional field in the registration form does not work: no workaround, but barely an issue.
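
    A minimal sketch (my own illustration, not from the book) of making the scheme complete by treating severity as a lookup over two independent factors, impact and workaround availability, so that cases like "fundamental functionality impaired, no workaround" are covered explicitly:

        # (impact, workaround available?) -> severity; the values are purely illustrative
        SEVERITY = {
            ("critical",    False): "critical",
            ("critical",    True):  "high",
            ("fundamental", False): "high",
            ("fundamental", True):  "medium",
            ("minor",       False): "medium",
            ("minor",       True):  "low",
            ("cosmetic",    False): "low",
            ("cosmetic",    True):  "low",
        }

        def severity(impact, has_workaround):
            return SEVERITY[(impact, has_workaround)]

        print(severity("fundamental", False))   # "high"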

    Read the article

  • How to use lists in equivalence partitioning?

    - by KhDonen
    I have read that equivalence partitioning can typically be used for intervals or lists, so I assume it can be used for any set of inputs. Anyway, if the requirement says that the allowed colors are (RED, BLUE, BLACK, GREEN), I cannot treat them like a list, right? I mean, testing one of them would not be enough, because the developers most likely used some switch-case, and thus it is not a real "set" where one value could represent the others. So how is this meant with lists? Also, what is not that clear to me: I do not think it is always possible to do the initial partitioning first and then design the test cases. What about checking the intersection of two lines y = mx + c (two inputs)? 1) The lines are parallel: M1 = M2 but C1 must be different from C2. 2) The lines are intersecting: M1 must be different from M2. 3) Coincident: they are the same. How can I use partitioning here? This is actually taken from a book, and it says that these sets are equivalence classes.
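
    A minimal sketch (my own illustration) of how those three classes can drive the test design: classify any pair of lines y = m*x + c into one equivalence class and pick one representative input per class.

        def classify(m1, c1, m2, c2):
            """Equivalence class of the line pair y = m1*x + c1 and y = m2*x + c2."""
            if m1 == m2 and c1 == c2:
                return "coincident"
            if m1 == m2:
                return "parallel"
            return "intersecting"

        # one representative test case per equivalence class
        tests = [
            ((1, 0, 1, 0), "coincident"),
            ((1, 0, 1, 5), "parallel"),
            ((1, 0, 2, 3), "intersecting"),
        ]
        for args, expected in tests:
            assert classify(*args) == expected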

    Read the article

  • Legal Applications of Metamorphic Code

    - by V_P
    Firstly, I would like to state that I already understand the 'vx' applications of metamorphic code. I am not here to ask a question related to any of those topics, as that would be inappropriate in this context. I would like to know if anyone has ever used metamorphic code in practice for purposes other than those previously stated, and if so, what the reasoning for using said concept was. In essence, I am trying to discover a purpose for this concept, if any, other than circumventing anti-virus scanners and the like.

    Read the article

  • Weighted random selection using Walker's Alias Method (C# implementation)

    - by Chuck Norris
    I was looking for this algorithm (an algorithm which will randomly select from a list of elements where each element has a different probability of being picked, i.e. a weight) and found only Python and C implementations. After I did a C# one, a bit different (but I think simpler), I thought I should share it and ask your opinion. This is it:

        using System;
        using System.Collections.Generic;
        using System.Linq;

        namespace ChuckNorris
        {
            class Program
            {
                static void Main(string[] args)
                {
                    var oo = new Dictionary<string, int>
                    {
                        {"A", 7}, {"B", 1}, {"C", 9}, {"D", 8}, {"E", 11},
                    };
                    var rnd = new Random();
                    var pick = rnd.Next(oo.Values.Sum());   // pick is in [0, total - 1]
                    var sum = 0;
                    var res = "";
                    foreach (var o in oo)
                    {
                        sum += o.Value;
                        if (sum > pick)   // strict comparison: pick never reaches the total
                        {
                            res = o.Key;
                            break;
                        }
                    }
                    Console.WriteLine("result is " + res);
                }
            }
        }

    If anyone can remake it in F#, please post your code.

    Read the article

  • What is the best Java numerical method package?

    - by Bob Cross
    I am looking for a Java-based numerical method package that provides functionality including: Solving systems of equations using different numerical analysis algorithms. Matrix methods (e.g., inversion). Spline approximations. Probability distributions and statistical methods. In this case, "best" is defined as a package with a mature and usable API, solid performance and numerical accuracy. Edit: derick van brought up a good point in that cost is a factor. I am heavily biased in favor of free packages but others may have a different emphasis.

    Read the article

  • Reinforcement learning toy project

    - by Betamoo
    My toy project to learn and apply Reinforcement Learning is: an agent tries to reach a goal state "safely" and "quickly", but there are projectiles and rockets launched at the agent along the way. The agent can determine the rockets' positions (with some noise) only if they are "near", and must learn to avoid crashing into them. The agent has fuel (rechargeable with time) which is consumed by its motion. Continuous actions: accelerating forward, turning by an angle. I need some hints and names of RL algorithms that suit this case. I think it is a POMDP, but can I model it as an MDP and just ignore the noise? In the POMDP case, what is the recommended way of evaluating probabilities? Which is better to use in this case: value functions or policy iteration? Can I use a NN to model the environment dynamics instead of using explicit equations? If yes, is there a specific type/model of NN to recommend? I think the actions must be discretized, right? I know it will take time and effort to learn such a topic, but I am eager to. You may answer some of the questions if you cannot answer them all. Thanks
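
    A minimal sketch (my own illustration, with a made-up action discretization) of the tabular Q-learning update you would start from if the problem is treated as an MDP with discretized actions; the state is assumed to already be a hashable tuple such as (position bucket, fuel bucket, nearest-rocket bucket):

        import random
        from collections import defaultdict

        # discretized actions: (acceleration, steering angle in degrees)
        ACTIONS = [(a, turn) for a in (0.0, 1.0) for turn in (-30, 0, 30)]

        Q = defaultdict(float)           # Q[(state, action)] -> estimated value
        ALPHA, GAMMA, EPS = 0.1, 0.95, 0.1

        def choose_action(state):
            # epsilon-greedy over the discretized action set
            if random.random() < EPS:
                return random.choice(ACTIONS)
            return max(ACTIONS, key=lambda a: Q[(state, a)])

        def q_update(state, action, reward, next_state):
            # standard one-step Q-learning backup
            best_next = max(Q[(next_state, a)] for a in ACTIONS)
            Q[(state, action)] += ALPHA * (reward + GAMMA * best_next - Q[(state, action)])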

    Read the article

  • PCA extended face recognition

    - by cMinor
    The state of the art says that we can use PCA to perform face recognition, like this, this or this. I am working on a project that involves training a classifier to detect a person who is wearing glasses, a hat, or even a mustache. The purpose of doing this is to detect when a person who has robbed a bank or a store, or has committed some sort of crime (we have their image in a database), enters a certain place (historically we know these guys have robbed, so we should take care to avoid problems). We first came to have a distributed database with all the images of criminals; then I thought of having a layer that classifies these criminals by accessories like hats, mustaches, or anything that hides their face, and then applying that knowledge to detect when a particular or suspect person enters a commercial place. (In practice, when someone is going to rob, they are not always wearing an accessory...) What do you think about this idea of using PCA to first detect the principal components of the face and then the components of an accessory? I was thinking that maybe a probabilistic approach is better, so we can compute the probability that the criminal is the person who entered the place and call the respective authorities.
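
    A minimal sketch (my own illustration) of the eigenfaces idea behind those references: flatten the gallery images, keep the top principal components, and match a probe face to its nearest projected neighbour.

        import numpy as np

        def train_pca(images, n_components=20):
            # images: (n_samples, n_pixels) array of flattened, equally sized face images
            mean = images.mean(axis=0)
            centered = images - mean
            # rows of vt are the principal directions ("eigenfaces")
            _, _, vt = np.linalg.svd(centered, full_matrices=False)
            components = vt[:n_components]
            gallery_proj = centered @ components.T           # gallery coordinates in face space
            return mean, components, gallery_proj

        def recognize(probe, mean, components, gallery_proj):
            # project the probe face and return the index of the closest gallery face
            proj = (probe - mean) @ components.T
            dists = np.linalg.norm(gallery_proj - proj, axis=1)
            return int(np.argmin(dists))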

    Read the article

  • The mathematics of Schelling's segregation model

    - by Bruce
    For those who don't know the model, you can read this pdf. I want to find the probability that 2 nodes are each other's neighbors when the algorithm converges (i.e. when all nodes are happy). Here's the model in a gist. You have a grid (say 10x10). You have nodes of two kinds (red and green), 45 of each, so we have 10 empty spaces. We randomly place the nodes on the grid. Now we scan through this grid (the exact order does not matter according to Schelling). Each node wants a specific percentage of people of the same kind in its Moore neighborhood (say b = 50% for both red and green). We calculate the happiness of each node (a = number of neighbors of the same kind / total number of neighbors). If a node is unhappy (a < b) it moves to an empty cell where it knows it will be happy. This movement can change the dynamics of the old as well as the new neighborhood. The algorithm converges when all nodes are happy. PS - I am looking for links to any mathematical analysis of Schelling's model.
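
    A minimal sketch (my own illustration; it uses a wrap-around grid for simplicity, which the original model need not do) of one sweep of the model, with happiness measured as the fraction of same-kind Moore neighbours:

        import random

        SIZE, EMPTY, RED, GREEN, THRESHOLD = 10, 0, 1, 2, 0.5

        def neighbours(grid, r, c):
            for dr in (-1, 0, 1):
                for dc in (-1, 0, 1):
                    if (dr, dc) != (0, 0):
                        rr, cc = (r + dr) % SIZE, (c + dc) % SIZE   # wrap-around for simplicity
                        if grid[rr][cc] != EMPTY:
                            yield grid[rr][cc]

        def happy(grid, r, c):
            kinds = list(neighbours(grid, r, c))
            if not kinds:
                return True
            same = sum(1 for k in kinds if k == grid[r][c])
            return same / float(len(kinds)) >= THRESHOLD

        def sweep(grid):
            """Move every unhappy node to a random empty cell where it would be happy."""
            moved = False
            for r in range(SIZE):
                for c in range(SIZE):
                    if grid[r][c] != EMPTY and not happy(grid, r, c):
                        empties = [(i, j) for i in range(SIZE) for j in range(SIZE)
                                   if grid[i][j] == EMPTY]
                        random.shuffle(empties)
                        for i, j in empties:
                            grid[i][j], grid[r][c] = grid[r][c], EMPTY   # try this empty cell
                            if happy(grid, i, j):
                                moved = True
                                break
                            grid[r][c], grid[i][j] = grid[i][j], EMPTY   # undo and try the next one
            return moved   # repeat sweeps until this returns False (convergence)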

    Read the article

  • Why does a hard disk suddenly look to Windows as if it "needs to be formatted"?

    - by pufferfish
    This is more of a theory question, but what are the reason(s) for a disk suddenly causing Windows to start saying it "needs to be formatted"? It happens to an IDE disk that I have in a cheap external enclosure, and I can usually get most of the data back by using software like Recuva. It has now happened to an internal disk I have. I'm not looking for software to fix this (although links would be appreciated), but rather a low-level explanation as to what gets corrupted on the disk.

    Read the article

  • Are there any other causes of this error that are NOT related to initial setup?

    - by LordScree
    I'm trying to diagnose an issue at a customer site. They are receiving the following error: "A network-related or instance-specific error occurred while establishing a connection to SQL Server". I've seen this a few times, but only during the initial setup; it's often caused by one of the following: the database server is turned off; the network connection between the database server and the application is closed or somehow blocked (e.g. by a firewall); or the SQL Server instance is not set up to receive remote connections from the application server (e.g. TCP is turned off, remote connections are disabled, or the "SQL Server Browser" service is stopped/disabled). However, if I assume that no configuration changes have been made, I'm trying to work out what the reason might be for getting this error at a random point after the initial setup. My initial thought is: the SQL Server machine has run out of resources (e.g. RAM) and is unable to accept new requests from the application server. Is this a valid theory? What other possible causes are there for this error that are not related to the initial setup of the server/application connection? Or is it simply impossible that this error could occur without a configuration change having been made (either on the SQL Server side, the application side, or somewhere in between (the network))? NOTE: I believe this question differs from the plethora of questions related to this error message because the application and server have been talking to each other quite happily until now (most, if not all, other questions seem to relate to initial setup).

    Read the article

  • Fitting Gaussian KDE in numpy/scipy in Python

    - by user248237
    I am fitting a Gaussian kernel density estimator to a variable that is the difference of two vectors, called "diff", as follows: gaussian_kde_covfact(diff, smoothing_param), where gaussian_kde_covfact is defined as:

        # imports needed for this snippet
        import numpy as np
        from numpy import sqrt
        import scipy.stats
        from scipy import stats

        class gaussian_kde_covfact(stats.gaussian_kde):
            def __init__(self, dataset, covfact='scotts'):
                self.covfact = covfact
                scipy.stats.gaussian_kde.__init__(self, dataset)

            def _compute_covariance_(self):
                '''not used'''
                self.inv_cov = np.linalg.inv(self.covariance)
                self._norm_factor = sqrt(np.linalg.det(2 * np.pi * self.covariance)) * self.n

            def covariance_factor(self):
                if self.covfact in ['sc', 'scotts']:
                    return self.scotts_factor()
                if self.covfact in ['si', 'silverman']:
                    return self.silverman_factor()
                elif self.covfact:
                    return float(self.covfact)
                else:
                    raise ValueError('covariance factor has to be scotts, silverman or a number')

            def reset_covfact(self, covfact):
                self.covfact = covfact
                self.covariance_factor()
                self._compute_covariance()

    This works, but there is an edge case where diff is a vector of all 0s. In that case, I get the error:

        File "/srv/pkg/python/python-packages/python26/scipy/scipy-0.7.1/lib/python2.6/site-packages/scipy/stats/kde.py", line 334, in _compute_covariance
          self.inv_cov = linalg.inv(self.covariance)
        File "/srv/pkg/python/python-packages/python26/scipy/scipy-0.7.1/lib/python2.6/site-packages/scipy/linalg/basic.py", line 382, in inv
          if info>0: raise LinAlgError, "singular matrix"
        numpy.linalg.linalg.LinAlgError: singular matrix

    What's a way to get around this? In this case, I'd like it to return a density that is essentially peaked completely at a difference of 0, with no mass everywhere else. Thanks.
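
    A sketch of one way around the degenerate all-zeros case (my own workaround, not from the post): detect data with zero spread up front and return an explicit, very narrow Gaussian instead of calling gaussian_kde.

        import numpy as np
        from scipy import stats

        def safe_kde(data, eps=1e-12, width=1e-6):
            """Return a callable density; fall back to a narrow Gaussian
            when the data has (numerically) zero spread."""
            data = np.asarray(data, dtype=float)
            if data.std() < eps:
                mu = data.mean()
                # approximate delta: a Gaussian with a tiny fixed bandwidth
                return lambda x: stats.norm.pdf(np.asarray(x), loc=mu, scale=width)
            kde = stats.gaussian_kde(data)
            return lambda x: kde(np.asarray(x))

    With this wrapper, safe_kde(diff) behaves like a callable density in both the normal and the all-zeros case.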

    Read the article

  • Generating all unique combinations for "drive ya nuts" puzzle

    - by Yuval A
    A while back I wrote a simple Python program to brute-force the single solution for the drive ya nuts puzzle. The puzzle consists of 7 hexagons with the numbers 1-6 on them, and all pieces must be aligned so that each number is adjacent to the same number on the next piece. The puzzle has ~1.4G non-unique possibilities: you have 7! options to sort the pieces by order (for example, center=0, top=1, continuing in clockwise order...). After you have sorted the pieces, you can rotate each piece in 6 ways (each piece is a hexagon), so you get 6**7 possible rotations for a given permutation of the 7 pieces. Totalling: 7!*(6**7) = ~1.4G possibilities. The following Python code generates these possible solutions:

        from itertools import product   # needed by constructs()

        def rotations(p):
            for i in range(len(p)):
                yield p[i:] + p[:i]

        def permutations(l):
            if len(l) <= 1:
                yield l
            else:
                for perm in permutations(l[1:]):
                    for i in range(len(perm) + 1):
                        yield perm[:i] + l[0:1] + perm[i:]

        def constructs(l):
            for p in permutations(l):
                for c in product(*(rotations(x) for x in p)):
                    yield c

    However, note that the puzzle has only ~0.2G unique possible solutions, as you must divide the total number of possibilities by 6: each possible solution is equivalent to 5 other solutions (simply rotate the entire puzzle by 1/6 of a turn). Is there a better way to generate only the unique possibilities for this puzzle?
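
    One common way to drop the six whole-puzzle rotations (a sketch of my own, reusing rotations() and permutations() from the snippet above and assuming, as in the post's ordering, that index 0 is the centre piece) is to pin the centre piece to a single fixed rotation:

        from itertools import product

        def unique_constructs(l):
            for p in permutations(l):
                # the centre piece keeps one fixed rotation; rotating the whole puzzle
                # would rotate the centre too, so each equivalence class of 6 rotations
                # contributes exactly one candidate here
                for c in product([p[0]], *(rotations(x) for x in p[1:])):
                    yield c   # 7! * 6**6, roughly 0.23G candidates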

    Read the article

  • Team matchups for Dota Bot

    - by Dan
    I have a ghost++ bot that hosts games of Dota (a Warcraft 3 map that is played 5 players versus 5 players) and I'm trying to come up with good formulas to balance the players going into a match based on their records (I have game history for several thousand games). I'm familiar with some of the concepts required to match up players, like confidence based on the sample size of the number of games they have played, and also parameter approximation and degrees of freedom, and thus throwing out any variables that don't contribute enough to the r^2. My bot collects quite a few variables for each player from each game. The important ones: win/lose/game did not finish, # of player kills, # of player deaths, # of kills the player assisted. The not so important ones: # of enemy creep kills, # of creep sneak attacks, # of neutral creep kills, # of tower kills, # of rax kills, # of courier kills. Quick explanation: the kills/deaths don't determine who wins, but the gold gained and lost from them is usually enough to tilt the game. Tower/rax kills are the goal of the game (once a team loses all its towers/rax its throne can be attacked; if that is destroyed, they lose), but I don't really count these as important because it is pretty random who gets the credit for a tower kill, and chances are if you destroy a tower it is only because some other player is doing well and distracting the other team elsewhere on the map. I'm getting a bit confused when trying to deal with the fact that 5 players are on a team, so ultimately each individual isn't that responsible for the team winning or losing. Take a player that is really good at killing and has 40 kills and only 10 deaths, but in their 5 games they have only won 1. Should I give him extra credit for such a high kill score despite losing? (When losing, it is hard to keep a positive kill/death ratio.) Or should I dock him for losing, assuming that despite the nice kill/death ratio he probably plays in a really greedy way, only looking out for himself and not helping the team? Ultimately I don't think I have to guess at questions like this because I have so much data... but I don't really know how to look at the data to answer questions like this. Can anyone help me come up with formulas to help balance teams and predict the outcome? Thanks, Dan
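
    One standard starting point (a sketch of my own, not from the post; it uses an Elo-style logistic rating and ignores the kill/death variables entirely): give every player on a team the same update from the team result, then pick the 5-v-5 split whose rating totals are closest.

        import itertools

        def expected(team_a, team_b, ratings):
            # logistic win expectation from the difference of average team ratings
            ra = sum(ratings[p] for p in team_a) / len(team_a)
            rb = sum(ratings[p] for p in team_b) / len(team_b)
            return 1.0 / (1.0 + 10 ** ((rb - ra) / 400.0))

        def update(team_a, team_b, a_won, ratings, k=24):
            ea = expected(team_a, team_b, ratings)
            sa = 1.0 if a_won else 0.0
            for p in team_a:
                ratings[p] += k * (sa - ea)
            for p in team_b:
                ratings[p] += k * ((1.0 - sa) - (1.0 - ea))

        def balance(players, ratings):
            # brute-force the split with the smallest total-rating difference
            best = None
            for team_a in itertools.combinations(players, len(players) // 2):
                team_b = [p for p in players if p not in team_a]
                diff = abs(sum(ratings[p] for p in team_a) - sum(ratings[p] for p in team_b))
                if best is None or diff < best[0]:
                    best = (diff, list(team_a), team_b)
            return best[1], best[2]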

    Read the article

  • Histogram matching - image processing - C/C++

    - by Raj
    Hello, I have two histograms: int Hist1[10] = {1,4,3,5,2,5,4,6,3,2}; int Hist2[10] = {1,4,3,15,12,15,4,6,3,2}; Hist1's distribution is multi-modal, while Hist2's distribution is uni-modal with a single prominent peak. My questions are: Is there any way that I could determine the type of distribution programmatically? How can I quantify whether these two histograms are similar/dissimilar? Thanks
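
    A minimal sketch (my own illustration) of two common bin-wise similarity measures, chi-square distance and histogram intersection, plus a crude peak count as a modality hint; on raw, noisy bins you would usually smooth the histogram before counting peaks.

        def chi_square(h1, h2):
            # symmetric chi-square distance; 0 means identical
            return sum((a - b) ** 2 / float(a + b) for a, b in zip(h1, h2) if a + b > 0)

        def intersection(h1, h2):
            # normalised overlap; 1.0 means identical shape
            return sum(min(a, b) for a, b in zip(h1, h2)) / float(min(sum(h1), sum(h2)))

        def count_peaks(h):
            # crude modality estimate: count strict local maxima
            return sum(1 for i in range(1, len(h) - 1) if h[i] > h[i - 1] and h[i] > h[i + 1])

        hist1 = [1, 4, 3, 5, 2, 5, 4, 6, 3, 2]
        hist2 = [1, 4, 3, 15, 12, 15, 4, 6, 3, 2]
        print(chi_square(hist1, hist2), intersection(hist1, hist2))
        print(count_peaks(hist1), count_peaks(hist2))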

    Read the article

  • Confusion Matrix of Bayesian Network

    - by iva123
    Hi, I'm trying to understand Bayesian networks. I have a data file which has 10 attributes, and I want to acquire the confusion table of this data, so I thought I need to calculate the TP, FP, FN and TN for all fields. Is that true? If it is, what do I then need to do for the Bayesian network? I really need some guidance; I'm lost.
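
    A minimal sketch (my own illustration, assuming you already have predicted and actual class labels for one target attribute, e.g. from a trained classifier) of how a confusion matrix and the per-class TP/FP/FN/TN counts are derived:

        from collections import Counter

        def confusion_matrix(actual, predicted):
            # counts of (actual, predicted) label pairs
            return Counter(zip(actual, predicted))

        def per_class_counts(cm, label):
            tp = cm[(label, label)]
            fp = sum(v for (a, p), v in cm.items() if p == label and a != label)
            fn = sum(v for (a, p), v in cm.items() if a == label and p != label)
            tn = sum(v for (a, p), v in cm.items() if a != label and p != label)
            return tp, fp, fn, tn

        actual    = ["yes", "no", "yes", "no", "yes"]
        predicted = ["yes", "yes", "yes", "no", "no"]
        cm = confusion_matrix(actual, predicted)
        print(per_class_counts(cm, "yes"))   # (2, 1, 1, 1)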

    Read the article
