Search Results

Search found 19586 results on 784 pages for 'machine instruction'.


  • computing z-scores for 2D matrices in scipy/numpy in Python

    - by user248237
    How can I compute the z-score for matrices in Python? Suppose I have the array: a = array([[ 1, 2, 3], [ 30, 35, 36], [2000, 6000, 8000]]) and I want to compute the z-score for each row. The solution I came up with is array([zs(item) for item in a]), where zs is in scipy.stats.stats. Is there a better, built-in vectorized way to do this? Also, is it always a good idea to z-score numbers before using hierarchical clustering with euclidean or seuclidean distance? Can anyone discuss the relative advantages/disadvantages? Thanks.
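
    A minimal vectorized sketch, assuming a SciPy version whose scipy.stats.zscore accepts an axis argument (the array is the one from the question):

    import numpy as np
    from scipy.stats import zscore

    a = np.array([[1, 2, 3],
                  [30, 35, 36],
                  [2000, 6000, 8000]], dtype=float)

    row_z = zscore(a, axis=1)   # (x - row mean) / row std, one row at a time
    print(row_z)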

    Read the article

  • Naive Bayesian for Topic detection using "Bag of Words" approach

    - by AlgoMan
    I am trying to implement a naive Bayesian approach to find the topic of a given document or stream of words. Is there a Naive Bayesian approach that I might be able to look up for this? Also, I am trying to improve my dictionary as I go along. Initially, I have a bunch of words that map to topics (hard-coded). Depending on the occurrences of words other than the ones that are already mapped, I want to add them to the mappings, hence improving and learning new words that map to a topic, and also changing the probabilities of words. How should I go about doing this? Is my approach the right one? Which programming language would be best suited for the implementation?
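
    As a rough illustration only (all names and the toy topics below are made up), an incrementally updated multinomial Naive Bayes over word counts might look like this in Python; add-one smoothing keeps newly seen words from zeroing out the product, and calling train again with new labelled documents is one way to keep learning new word-to-topic mappings:

    from collections import defaultdict
    import math

    topic_doc_count = defaultdict(int)                   # documents seen per topic
    word_count = defaultdict(lambda: defaultdict(int))   # word_count[topic][word]
    vocab = set()

    def train(words, topic):
        # fold one labelled document into the counts; can be called again as new data arrives
        topic_doc_count[topic] += 1
        for w in words:
            word_count[topic][w] += 1
            vocab.add(w)

    def classify(words):
        total_docs = sum(topic_doc_count.values())
        best_topic, best_score = None, float('-inf')
        for topic, n_docs in topic_doc_count.items():
            n_words = sum(word_count[topic].values())
            score = math.log(n_docs / total_docs)        # log prior
            for w in words:
                # add-one (Laplace) smoothing so unseen words don't zero the product
                score += math.log((word_count[topic][w] + 1) / (n_words + len(vocab)))
            if score > best_score:
                best_topic, best_score = topic, score
        return best_topic

    train("goal match player team".split(), "sports")
    train("election vote senate".split(), "politics")
    print(classify("the team won the match".split()))    # -> 'sports'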

    Read the article

  • hierarchical clustering with gene expression matrix in python

    - by user248237
    How can I do hierarchical clustering (in this case for gene expression data) in Python in a way that shows the matrix of gene expression values along with the dendrogram? What I mean is like the example here: http://www.mathworks.cn/access/helpdesk/help/toolbox/bioinfo/ug/a1060813239b1.html shown after bullet point 6 (Figure 1), where the dendrogram is plotted to the left of the gene expression matrix and the rows have been reordered to reflect the clustering. How can I do this in Python using numpy/scipy or other tools? Also, is it computationally practical to do this with a matrix of about 11,000 genes, using euclidean distance as a metric? Thanks.
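
    A minimal sketch, assuming scipy and matplotlib are available (the random matrix stands in for the real expression data); scipy.cluster.hierarchy.leaves_list gives the row order implied by the dendrogram, which is then used to reorder the heatmap:

    import numpy as np
    import matplotlib.pyplot as plt
    from scipy.cluster.hierarchy import linkage, dendrogram, leaves_list
    from scipy.spatial.distance import pdist

    D = np.random.rand(50, 8)                  # placeholder for the genes-by-conditions matrix

    Z = linkage(pdist(D, metric='euclidean'), method='average')

    fig = plt.figure(figsize=(8, 6))
    ax_dendro = fig.add_axes([0.05, 0.1, 0.2, 0.8])
    dendrogram(Z, orientation='left', no_labels=True)
    ax_dendro.set_xticks([])

    order = leaves_list(Z)                     # row order implied by the dendrogram
    ax_heat = fig.add_axes([0.3, 0.1, 0.6, 0.8])
    ax_heat.imshow(D[order, :], aspect='auto', interpolation='nearest')
    plt.show()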

    Read the article

  • How to create a container that holds different types of function pointers in C++?

    - by Alex
    I'm doing a linear genetic programming project, where programs are bred and evolved by means of natural evolution mechanisms. Their "DNA" is basically a container (I've used arrays and vectors successfully) which contains function pointers to a set of available functions. Now, for simple problems, such as mathematical problems, I could use one type-defined function pointer which could point to functions that all return a double and all take as parameters two doubles. Unfortunately this is not very practical. I need to be able to have a container which can hold different sorts of function pointers, say a function pointer to a function which takes no arguments, or a function which takes one argument, or a function which returns something, etc. (you get the idea)... Is there any way to do this using any kind of container? Could I do it using a container which contains polymorphic classes, which in their turn have various kinds of function pointers? I hope someone can direct me towards a solution, because redesigning everything I've done so far is going to be painful.

    Read the article

  • How to perform FST (Finite State Transducer) composition

    - by Tasbeer
    Consider the following FSTs:

      T1
      0 1 a : b
      0 2 b : b
      2 3 b : b
      0 0 a : a
      1 3 b : a

      T2
      0 1 b : a
      1 2 b : a
      1 1 a : d
      1 2 a : c

    How do I perform the composition operation on these two FSTs (i.e. T1 o T2)? I saw some algorithms but couldn't understand much. If anyone could explain it in an easy way it would be a major help. Please note that this is NOT homework. The example is taken from lecture slides where the solution is given, but I couldn't figure out how to get to it.
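
    For what it's worth, a sketch of the basic product construction (epsilon-free, and ignoring the bookkeeping for start/final states and reachability): pair up every T1 transition whose output symbol matches a T2 transition's input symbol, and label the combined move with T1's input and T2's output.

    # transitions are (src, dst, input, output)
    def compose(t1, t2):
        result = []
        for (p, q, a, b) in t1:
            for (r, s, b2, c) in t2:
                if b == b2:                      # T1's output feeds T2's input
                    result.append(((p, r), (q, s), a, c))
        return result

    T1 = [(0, 1, 'a', 'b'), (0, 2, 'b', 'b'), (2, 3, 'b', 'b'),
          (0, 0, 'a', 'a'), (1, 3, 'b', 'a')]
    T2 = [(0, 1, 'b', 'a'), (1, 2, 'b', 'a'), (1, 1, 'a', 'd'), (1, 2, 'a', 'c')]

    for tr in compose(T1, T2):
        print(tr)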

    Read the article

  • Alright, I'm still stuck on this homework problem. C++

    - by Josh
    Okay, for the past few days I have been trying to get some input on my programs. I decided to scrap them for the most part and try again, so once again I'm in need of help. The first program I'm trying to fix needs to show the sum of SEVEN numbers. I'm trying to change it so that I don't need the mem[##] = #### lines; I just want the user to be able to input the numbers, have the program run from there through my switch loop, and then show some kind of display saying what the sum is. Here's my code so far:

    #include <iostream>
    #include <iomanip>
    #include <ios>
    using namespace std;

    int main()
    {
        const int READ = 10;
        const int WRITE = 11;
        const int LOAD = 20;
        const int STORE = 21;
        const int ADD = 30;
        const int SUBTRACT = 31;
        const int DIVIDE = 32;
        const int MULTIPLY = 33;
        const int BRANCH = 40;
        const int BRANCHNEG = 41;
        const int BRANCHZERO = 42;
        const int HALT = 43;

        int mem[100] = {0}; // making it 100, since the Simpletron contains a 100-word memory
        int operation;      // taking the rest of these variables straight out of the book, seeing as they were italicized
        int operand;
        int accum = 0;      // the special register starts at 0
        int counter;

        for (counter = 0; counter < 100; counter++)
            mem[counter] = 0;

        // This is for part a: it takes in positive values in a sentinel-controlled loop
        // and computes + prints their sum.  Values from the example in the text.
        mem[0] = 1009;  mem[1] = 1109;  mem[2] = 2010;
        mem[3] = 2111;  mem[4] = 2011;  mem[5] = 3100;
        mem[6] = 2113;  mem[7] = 1113;  mem[8] = 4300;

        counter = 0; // makes the variable counter start at 0

        while (true)
        {
            operand = mem[counter] % 100;   // operand from the low two digits
            operation = mem[counter] / 100; // op code from the high two digits

            // using a switch to handle each op code
            switch (operation)
            {
            case READ:   // reads a value into a word from a location; enter -1 to exit
                cout << "\n Input a positive variable: ";
                cin >> mem[operand];
                counter++;
                break;
            case WRITE:  // writes a word from a location
                cout << "\n\nThe content at location " << operand << " is " << mem[operand];
                counter++;
                break;
            case LOAD:      accum = mem[operand]; counter++; break;
            case STORE:     mem[operand] = accum; counter++; break;
            case ADD:       accum += mem[operand]; counter++; break;
            case SUBTRACT:  accum -= mem[operand]; counter++; break;
            case DIVIDE:    accum /= mem[operand]; counter++; break;
            case MULTIPLY:  accum *= mem[operand]; counter++; break;
            case BRANCH:     counter = operand; break;                                  // branches to location
            case BRANCHNEG:  if (accum < 0) counter = operand; else counter++; break;   // branches if accumulator < 0
            case BRANCHZERO: if (accum == 0) counter = operand; else counter++; break;  // branches if accumulator == 0
            case HALT:       break;                                                     // program ends
            }
        }
        return 0;
    }

    Part B:

    int main()
    {
        const int READ = 10;
        const int WRITE = 11;
        const int LOAD = 20;
        const int STORE = 21;
        const int ADD = 30;
        const int SUBTRACT = 31;
        const int DIVIDE = 32;
        const int MULTIPLY = 33;
        const int BRANCH = 40;
        const int BRANCHNEG = 41;
        const int BRANCHZERO = 41;
        const int HALT = 43;

        int mem[100] = {0};
        int operation;
        int operand;
        int accum = 0;
        int pos = 0;
        int j;

        mem[22] = 7; // loop 7 times
        mem[25] = 1; // increment by 1

        mem[00] = 4306;  mem[01] = 2303;  mem[02] = 3402;  mem[03] = 6410;
        mem[04] = 3412;  mem[05] = 2111;  mem[06] = 2002;  mem[07] = 2312;
        mem[08] = 4210;  mem[09] = 2109;  mem[10] = 4001;  mem[11] = 2015;
        mem[12] = 3212;  mem[13] = 2116;  mem[14] = 1101;  mem[15] = 1116;
        mem[16] = 4300;

        j = 0;
        while (true)
        {
            operand = memory[j] % 100;   // operand from the low two digits
            operation = memory[j] / 100; // op code from the high two digits

            switch (operation)
            {
            case 1:  // reads a value into a word from a location; enter -1 to exit
                cout << "\n enter #: ";
                cin >> memory[operand];
                break;
            case 2:  // writes a word from a location
                cout << "\n\nThe content at location " << operand << "is " << memory[operand];
                break;
            case 3:  accum = memory[operand]; break;      // loads
            case 4:  memory[operand] = accum; break;      // stores
            case 5:  accum += mem[operand]; break;        // adds
            case 6:  accum -= memory[operand]; break;     // subtracts
            case 7:  accum /= memory[operand]; break;     // divides
            case 8:  accum *= memory[operand]; break;     // multiplies
            case 9:  j = operand; break;                  // branches to location
            case 10: break;                               // branches if accumulator < 0
            case 11: if (accum == 0) j = operand; break;  // branches if accumulator == 0
            case 12: exit(0); break;                      // program ends
            }
            j++;
        }
        return 0;
    }

    Read the article

  • How to compute the probability of a multi-class prediction using libsvm?

    - by Cuga
    I'm using libsvm, and the documentation leads me to believe that there's a way to output the estimated probability for a predicted classification. Is this so? And if so, can anyone provide a clear example of how to do it in code? Currently, I'm using the Java libraries in the following manner:

      SvmModel model = Svm.svm_train(problem, parameters);
      SvmNode x[] = getAnArrayOfSvmNodesForProblem();
      double predictedValue = Svm.svm_predict(model, x);
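
    A hedged sketch using libsvm's own Python interface (the Java API has the analogous svm_parameter.probability flag and an svm_predict_probability call): probability estimates have to be requested both at training time and at prediction time with -b 1. The toy data and the exact import path are assumptions, not from the question.

    import random
    from libsvm.svmutil import svm_train, svm_predict   # older installs: from svmutil import *

    random.seed(0)
    y, x = [], []
    for label, centre in [(0, 0.2), (1, 0.5), (2, 0.8)]:
        for _ in range(20):                              # small synthetic 3-class problem
            y.append(label)
            x.append({1: centre + random.gauss(0, 0.05),
                      2: 1.0 - centre + random.gauss(0, 0.05)})

    model = svm_train(y, x, '-c 1 -b 1 -q')              # -b 1: fit probability estimates
    labels, acc, probs = svm_predict(y, x, model, '-b 1')
    print(probs[0])                                      # per-class probabilities for the first item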

    Read the article

  • Training sets for AdaBoost algorithm

    - by palau1
    How do you find the negative and positive training data sets of Haar features for the AdaBoost algorithm? So say you have a certain type of blob that you want to locate in an image and there are several of them in your entire array - how do you go about training it? I'd appreciate a nontechnical explanation as much as possible. I'm new to this. Thanks.

    Read the article

  • Reinforcement learning And POMDP

    - by Betamoo
    I am trying to use a multi-layer NN to implement the probability function in a Partially Observable Markov Decision Process. I thought the inputs to the NN would be: current state, selected action, result state; the output is a probability in [0,1] (the probability that performing the selected action in the current state will lead to the result state). In training, I fed the inputs stated above into the NN and taught it output = 1.0 for each case that actually occurred. The problem: for nearly all test cases the output probability is near 0.95; no output was under 0.9, even for nearly impossible results. PS: I think this is because I taught it only the cases that happened, not the ones that did not happen, but I cannot, at each step in the episode, teach it output = 0.0 for every action that did not happen! Any suggestions on how to overcome this problem? Or maybe another way to use an NN or to implement the probability function? Thanks
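
    One possible fix, sketched below with made-up names: instead of labelling every non-occurring transition, sample only a few of them per observed transition and give those a target of 0.0, so the network is not trained on 1.0 targets alone. Another option worth considering is to have the network output a distribution over result states directly (e.g. a softmax output layer), which forces the probabilities for one (state, action) pair to sum to 1.

    import random

    def make_training_pairs(observed, all_states, neg_per_pos=3):
        """observed: list of (state, action, next_state) tuples that actually happened."""
        happened = set(observed)
        pairs = []
        for (s, a, s_next) in observed:
            pairs.append(((s, a, s_next), 1.0))          # positive example
            for _ in range(neg_per_pos):
                fake_next = random.choice(all_states)
                if (s, a, fake_next) not in happened:
                    pairs.append(((s, a, fake_next), 0.0))  # sampled negative example
        return pairs

    observed = [('s0', 'a1', 's1'), ('s1', 'a2', 's2')]
    states = ['s0', 's1', 's2', 's3']
    print(make_training_pairs(observed, states))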

    Read the article

  • Update Rule in Temporal difference

    - by Betamoo
    The TD(0) Q-learning update rule: Q(t-1) = (1 - alpha) * Q(t-1) + alpha * (Reward(t-1) + gamma * Max(Q(t))), then take either the current best action (to optimize) or a random action (to explore), where Max(Q(t)) is the maximum Q that can be obtained in the next state. But in TD(1) I think the update rule will be: Q(t-2) = (1 - alpha) * Q(t-2) + alpha * (Reward(t-2) + gamma * Reward(t-1) + gamma * gamma * Max(Q(t))). My question: the term gamma * Reward(t-1) means that I will always take my best action at t-1, which I think will prevent exploring. Can someone give me a hint? Thanks
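
    For reference, a table-based sketch of the standard one-step Q-learning update with epsilon-greedy action selection (names are illustrative). Note that the max in the target is just the bootstrapped estimate of the next state's value and is independent of which action the agent actually takes next (Q-learning is off-policy), so it does not by itself prevent exploration.

    from collections import defaultdict
    import random

    Q = defaultdict(lambda: defaultdict(float))         # Q[state][action], defaults to 0.0

    def choose_action(state, actions, epsilon=0.1):
        if random.random() < epsilon:                   # explore
            return random.choice(actions)
        return max(actions, key=lambda a: Q[state][a])  # exploit current estimate

    def q_update(s, a, reward, s_next, alpha=0.1, gamma=0.9):
        best_next = max(Q[s_next].values()) if Q[s_next] else 0.0   # max over next actions
        Q[s][a] = (1 - alpha) * Q[s][a] + alpha * (reward + gamma * best_next)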

    Read the article

  • Explaining training method for AdaBoost algorithm

    - by konzti8
    Hi, I'm trying to understand the Haar feature method used for the training step in the AdaBoost algorithm. I don't understand the math that well, so I'd appreciate more of a conceptual answer (as much as possible, anyway). Basically, what does it do? How do you choose positive and negative sets for what you want to select? Can it be generalized? What I mean by that is: can you choose it to find any kind of feature you want, no matter what the background is? So, for example, if I want to find some kind of circular blob - can I do that? I've also read that it is used on small patches of the images around the possible feature - does that mean you have to manually select that image patch, or can it be automated to process the entire image? Is there MATLAB code for the training step? Thanks for any help...
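
    Conceptually, each Haar-like feature is just a difference of rectangle sums over a small patch, computed cheaply from an integral image; the AdaBoost training step then scores many such features on labelled positive and negative patches and keeps the most discriminative ones. A small Python sketch of evaluating one two-rectangle feature (the 24x24 random patch is a placeholder):

    import numpy as np

    def integral_image(img):
        return img.cumsum(axis=0).cumsum(axis=1)

    def rect_sum(ii, r0, c0, r1, c1):
        """Sum of img[r0:r1, c0:c1] using the integral image (exclusive upper bounds)."""
        total = ii[r1 - 1, c1 - 1]
        if r0 > 0:
            total -= ii[r0 - 1, c1 - 1]
        if c0 > 0:
            total -= ii[r1 - 1, c0 - 1]
        if r0 > 0 and c0 > 0:
            total += ii[r0 - 1, c0 - 1]
        return total

    patch = np.random.rand(24, 24)                 # one candidate window
    ii = integral_image(patch)
    # two horizontally adjacent rectangles: left-half sum minus right-half sum
    feature = rect_sum(ii, 0, 0, 24, 12) - rect_sum(ii, 0, 12, 24, 24)
    print(feature)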

    Read the article

  • Neural Network with softmax activation

    - by Cambium
    This is more or less a research project for a course, and my understanding of NNs is very/fairly limited, so please be patient :) ============== I am currently in the process of building a neural network that attempts to examine an input dataset and output the probability/likelihood of each classification (there are 5 different classifications). Naturally, the sum of all output nodes should add up to 1. Currently, I have two layers, and I set the hidden layer to contain 10 nodes. I came up with two different types of implementations: 1) logistic sigmoid for hidden layer activation, softmax for output activation; 2) softmax for both hidden layer and output activation. I am using gradient descent to find local maximums in order to adjust the hidden nodes' weights and the output nodes' weights. I am certain that I have this correct for sigmoid. I am less certain with softmax (or whether I can use gradient descent at all); after a bit of research, I couldn't find the answer and decided to compute the derivative myself, obtaining softmax'(x) = softmax(x) - softmax(x)^2 (this returns a column vector of size n). I have also looked into the MATLAB NN toolkit; the derivative of softmax provided by the toolkit returns a square matrix of size n x n, where the diagonal coincides with the softmax'(x) that I calculated by hand, and I am not sure how to interpret that output matrix. I ran each implementation with a learning rate of 0.001 and 1000 iterations of back propagation. However, my NN returns 0.2 (an even distribution) for all five output nodes, for any subset of the input dataset. My conclusions: (a) I am fairly certain that my gradient descent is done incorrectly, but I have no idea how to fix it; (b) perhaps I am not using enough hidden nodes; (c) perhaps I should increase the number of layers. Any help would be greatly appreciated! The dataset I am working with can be found here (processed Cleveland): http://archive.ics.uci.edu/ml/datasets/Heart+Disease
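
    On the derivative question specifically, a small numerical check: the full Jacobian of softmax is diag(s) - s*s^T, and its diagonal is exactly the s - s^2 vector computed by hand. In backpropagation the error vector is multiplied by this full matrix; alternatively, with a softmax output layer and a cross-entropy loss the combined gradient collapses to (output - target), which avoids the Jacobian entirely.

    import numpy as np

    def softmax(x):
        e = np.exp(x - x.max())                    # shift for numerical stability
        return e / e.sum()

    x = np.array([1.0, 2.0, 0.5, -1.0, 0.0])
    s = softmax(x)

    jacobian = np.diag(s) - np.outer(s, s)         # d softmax_i / d x_j
    print(np.allclose(np.diag(jacobian), s - s**2))   # True: the diagonal matches s - s^2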

    Read the article

  • Using CallExternalMethodActivity/HandleExternalEventActivity in StateMachine

    - by AngrySpade
    I'm attempting to make a StateMachine execute some database action between states. So I have a "starting" state that uses CallExternalMethodActivity to call a "BeginExecuteNonQuery" function on a class decorated with ExternalDataExchangeAttribute. After that it uses a SetStateActivity to change to an "ending" state. The "ending" state uses a HandleExternalEventActivity to listen for an "EndExecuteNonQuery" event. I can step through the local service, into the "BeginExecuteNonQuery" function. The problem is that "EndExecuteNonQuery" is null.

    public class FailoverWorkflowController : IFailoverWorkflowController
    {
        private readonly WorkflowRuntime workflowRuntime;
        private readonly FailoverWorkflowControlService failoverWorkflowControlService;
        private readonly DatabaseControlService databaseControlService;

        public FailoverWorkflowController()
        {
            workflowRuntime = new WorkflowRuntime();
            workflowRuntime.WorkflowCompleted += workflowRuntime_WorkflowCompleted;
            workflowRuntime.WorkflowTerminated += workflowRuntime_WorkflowTerminated;

            ExternalDataExchangeService dataExchangeService = new ExternalDataExchangeService();
            workflowRuntime.AddService(dataExchangeService);

            databaseControlService = new DatabaseControlService();
            workflowRuntime.AddService(databaseControlService);

            workflowRuntime.StartRuntime();
        }
        ...
    }

    ...

    public void BeginExecuteNonQuery(string command)
    {
        Guid workflowInstanceID = WorkflowEnvironment.WorkflowInstanceId;
        ThreadPool.QueueUserWorkItem(delegate(object state)
        {
            try
            {
                int result = ExecuteNonQuery((string)state);
                EndExecuteNonQuery(null, new ExecuteNonQueryResultEventArgs(workflowInstanceID, result));
            }
            catch (Exception exception)
            {
                EndExecuteNonQuery(null, new ExecuteNonQueryResultEventArgs(workflowInstanceID, exception));
            }
        }, command);
    }

    What am I doing wrong with my implementation? -Stan

    Read the article

  • Mahout Naive Bayes Classifier for Items

    - by Nimesh Parikh
    Team, I am working on a project where I need to classify items into certain categories. I have a single file as input, which contains the target variable and space-separated features. My training data will look like:

      Category Name [Tab] DataString
      Plumbing [Tab] Pipe Tap Plastic Pipe PVC Pipe Cold Water Line Hot Water Line Tee outlet up Elbow turned up Elbow turned down Gate valve Globe valve
      Paint [Tab] Ivory Black Burnt Umber Caput Mortuum Violet Earth Red Yellow Ochre Titanium White Cadmium Yellow Light Cadmium Yellow Deep
      Cloths [Tab] Shirt T-Shirt Pent Jeans Tee Cargo

    Well, I have a really big set of categories. I have a couple of questions here: am I using the correct data for training? If not, what should I use? Once I train and test my model, what is the next step? How can I use the output? Please help me with this. Thanks, Nimesh

    Read the article

  • hierarchical clustering on correlations in Python scipy/numpy?

    - by user248237
    How can I run hierarchical clustering on a correlation matrix in scipy/numpy? I have a matrix of 100 rows by 9 columns, and I'd like to hierarchically cluster the rows by the correlations of each entry across the 9 conditions. I'd like to use 1 - Pearson correlation as the distance for clustering. Assuming I have a numpy array X that contains the 100 x 9 matrix, how can I do this? I tried using hcluster, based on this example:

      Y = pdist(X, 'seuclidean')
      Z = linkage(Y, 'single')
      dendrogram(Z, color_threshold=0)

    However, that is not what I want, since it uses euclidean distance. Any ideas? Thanks.
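
    A minimal sketch: pdist's 'correlation' metric is exactly 1 - Pearson correlation between rows, so it can be passed straight to linkage (the random matrix stands in for the real 100 x 9 data):

    import numpy as np
    from scipy.spatial.distance import pdist
    from scipy.cluster.hierarchy import linkage, dendrogram

    X = np.random.rand(100, 9)                 # placeholder for the real 100 x 9 matrix

    Y = pdist(X, metric='correlation')         # 1 - Pearson correlation between rows
    Z = linkage(Y, 'single')
    dendrogram(Z, color_threshold=0)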

    Read the article

  • searching for a programming platform with hot code swap

    - by Andreas
    I'm currently brainstorming the idea of how to upgrade a program while it is running. (Not while debugging; a "production" system.) One thing that is required for it is to actually submit the changed source code or compiled byte code into the running process. Pseudo code:

      var method = typeof(MyClass).GetMethod("Method1");
      var content = // get it from a database (bytecode or source code): SELECT content FROM methods WHERE id=? AND version=?
      method.SetContent(content);

    At first, I want the system to work without the complexity of object orientation. That leads to the following requirements: change the source code or byte code of a function, drop functions, add new functions, and change the signature of a function. With .NET (and others) I could inject a class via an IoC container and could thus change the source code, but the loading would be cumbersome, because everything has to be in an Assembly or created via Emit. Maybe with Java this would be easier? The whole ClassLoader is replaceable, I think. With JavaScript I could achieve many of the goals: simply eval a new function (MyMethod_V25) and assign it to MyClass.prototype.MyMethod. I think one can also drop functions somehow with "del". Which general-purpose platform can handle such things?
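
    As a point of comparison, the same trick is essentially a one-liner in other dynamic runtimes. A Python sketch (all names invented) that compiles source fetched at run time and rebinds it on the class without restarting the process:

    class MyClass:
        def method1(self):
            return "old behaviour"

    new_source = '''
    def method1(self):
        return "new behaviour, version 25"
    '''

    namespace = {}
    exec(compile(new_source, '<db>', 'exec'), namespace)
    MyClass.method1 = namespace['method1']     # hot-swap the implementation

    print(MyClass().method1())                 # -> "new behaviour, version 25"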

    Read the article

  • Neural Network: Handling unavailable inputs (missing or incomplete data)

    - by Mike
    Hopefully this is the last NN question you'll get from me this weekend, but here goes :) Is there a way to handle an input that you "don't always know"... so that it doesn't affect the weightings somehow? So... if I ask someone whether they are male or female and they would rather not answer, is there a way to disregard this input? Perhaps by placing it squarely in the centre? (assuming 0/1 inputs, at 0.5?) Thanks
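
    Two common encodings, sketched below purely as an illustration: place the unknown value at the midpoint of the scale (as suggested in the question), or add an explicit "is missing" indicator input so the network can learn to discount the value itself.

    def encode_sex(answer):
        if answer == 'male':
            return [1.0, 0.0]      # value, missing-flag
        if answer == 'female':
            return [0.0, 0.0]
        return [0.5, 1.0]          # unknown: neutral value plus a missing indicator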

    Read the article

  • Disk2vhd Hyper-V server question

    - by user297378
    Hello all. I have backed up about 30 servers using Disk2vhd, and now I have built the first of many Hyper-V servers. I did not realize it is all command line; I did download CoreConfigurator, and that does have some of the functionality I have been looking for. My question is: how do I get the VHD files to run as virtual machines? It's all command line. I tried to mount the VHDs via VBS and have not been able to. Any help on this would be great! Thanks!

    Read the article

  • A simple explanation of Naive Bayes Classification

    - by Jaggerjack
    I am finding it hard to understand the process of Naive Bayes, and I was wondering if someone could explain it with a simple step-by-step process in English. I understand it takes comparisons by times occurred as a probability, but I have no idea how the training data is related to the actual dataset. Please give me an explanation of what role the training set plays. I am giving a very simple example for fruits here, like banana for example:

      training set: round-red, round-orange, oblong-yellow, round-red
      dataset: round-red, round-orange, round-red, round-orange, oblong-yellow, round-red, round-orange, oblong-yellow, oblong-yellow, round-red
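
    A tiny worked illustration (the fruit labels below are invented for the example, since the question's lists carry no labels): the training set is simply where the per-class frequencies come from, and classifying a new observation is just multiplying those frequencies together with the class priors.

    from collections import Counter

    # labelled training data: (shape, colour) -> fruit
    train = [(('round', 'red'), 'apple'), (('round', 'orange'), 'orange'),
             (('oblong', 'yellow'), 'banana'), (('round', 'red'), 'apple')]

    labels = Counter(fruit for _, fruit in train)

    def likelihood(value, index, fruit):
        rows = [feats for feats, f in train if f == fruit]
        hits = sum(1 for feats in rows if feats[index] == value)
        return (hits + 1) / (len(rows) + 2)    # add-one smoothing to avoid zero counts

    def posterior(shape, colour):
        scores = {}
        for fruit, n in labels.items():
            scores[fruit] = (n / len(train)) * likelihood(shape, 0, fruit) * likelihood(colour, 1, fruit)
        total = sum(scores.values())
        return {f: s / total for f, s in scores.items()}

    print(posterior('round', 'red'))           # apple gets the highest probability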

    Read the article

  • How to figure out optimal C / Gamma parameters in libsvm?

    - by Cuga
    I'm using libsvm for multi-class classification of datasets with a large number of features/attributes (around 5,800 per item). I'd like to choose better parameters for C and gamma than the defaults I am currently using. I've already tried running easy.py, but for the datasets I'm using the estimated time is near forever (I ran easy.py at 20, 50, 100, and 200 data samples and got a super-linear regression which projected my necessary runtime to take years). Is there a way to arrive at better C and gamma values than the defaults more quickly? I'm using the Java libraries, if that makes any difference.
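
    One hedged sketch of a shortcut, using libsvm's Python interface (the import path, sub-sample size, and grid ranges are assumptions mirroring grid.py's defaults): run the usual coarse log2 grid, but only on a random sub-sample of the data, and let libsvm's '-v 5' option return cross-validation accuracy instead of a model; the best (C, gamma) found there can then be refined or reused on the full set.

    import random
    from libsvm.svmutil import svm_train       # older installs: from svmutil import *

    def coarse_grid(y, x, subsample=500):
        idx = random.sample(range(len(y)), min(subsample, len(y)))
        ys = [y[i] for i in idx]
        xs = [x[i] for i in idx]
        best = (None, None, -1.0)
        for log2c in range(-5, 16, 2):         # grid.py's default coarse ranges
            for log2g in range(-15, 4, 2):
                acc = svm_train(ys, xs, '-q -v 5 -c %g -g %g' % (2.0 ** log2c, 2.0 ** log2g))
                if acc > best[2]:
                    best = (2.0 ** log2c, 2.0 ** log2g, acc)
        return best                            # (C, gamma, cross-validation accuracy)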

    Read the article

  • how useful is Turing completeness? are neural nets turing complete?

    - by Albert
    While reading some papers about the Turing completeness of recurrent neural nets (for example: "Turing computability with neural nets", Hava T. Siegelmann and Eduardo D. Sontag, 1991), I got the feeling that the proof given there was not really that practical. For example, the referenced paper needs a neural network whose neuron activity must be of infinite exactness (to reliably represent any rational number). Other proofs need a neural network of infinite size. Clearly, that is not really practical. But I started to wonder whether it makes sense at all to ask for Turing completeness. By the strict definition, no computer system nowadays is Turing complete, because none of them is able to simulate the infinite tape. Interestingly, programming language specifications most often leave it open whether they are Turing complete or not. It all boils down to the question of whether they will always be able to allocate more memory and whether the function call stack size is infinite. Most specifications don't really specify this. Of course, all available implementations are limited here, so all practical implementations of programming languages are not Turing complete. So what you can say is that all computer systems are just equally powerful as finite state machines and no more. And that brings me to the question: how useful is the term "Turing complete" at all? And back to neural nets: for any practical implementation of a neural net (including our own brain), they will not be able to represent an infinite number of states, i.e. by the strict definition of Turing completeness, they are not Turing complete. So does the question of whether neural nets are Turing complete make sense at all? The question of whether they are as powerful as finite state machines was answered much earlier (1954 by Minsky; the answer, of course: yes) and also seems easier to answer. I.e., at least in theory, that was already the proof that they are as powerful as any computer.

    Read the article

  • Classification: Dealing with Abstain/Rejected Class

    - by abner.ayala
    I am asking for your input and/or help on a classification problem. If anyone has any references that I can read to help me solve my problem even better, please share. I have a classification problem of four discrete and very well separated classes. However, my input is continuous and has a high frequency (50 Hz), since it's a real-time problem. The circles represent the clusters of the classes, the blue line the decision boundary, and class 5 equals the (neutral/resting, do-nothing) class. This class is the rejected class. The problem is that when I move from one class to another I activate a lot of false positives during the transition movements, since the movement is clearly non-linear. For example, every time I move from class 5 (the neutral class) to class 1, I first see a lot of 3's before getting to class 1. Ideally, I would want my decision boundary to look like the one in the picture below, where the rejected class is class 5; it has a higher decision boundary than the other classes to avoid misclassification during transitions. I am currently implementing my algorithm in MATLAB using Naive Bayes, kNN, and SVM optimized algorithms. Question: what is the best/common way to handle an abstain/rejected class? Should I use fuzzy logic or a loss function, or should I include the resting cluster in the training?
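
    One common baseline, sketched below with made-up names and an arbitrary threshold, is to reject on confidence: predict as usual, but output the resting class whenever the winning posterior is not high enough. On a 50 Hz stream this is often combined with majority voting over a short sliding window to suppress the transition spikes.

    import numpy as np

    REST_CLASS = 5

    def classify_with_reject(posteriors, threshold=0.8):
        """posteriors: P(class k | x) for the four active classes (k = 1..4)."""
        posteriors = np.asarray(posteriors)
        k = int(np.argmax(posteriors)) + 1
        if posteriors[k - 1] < threshold:
            return REST_CLASS                  # abstain during ambiguous transitions
        return k

    print(classify_with_reject([0.30, 0.25, 0.25, 0.20]))   # -> 5 (reject)
    print(classify_with_reject([0.90, 0.05, 0.03, 0.02]))   # -> 1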

    Read the article
