Search Results

Search found 34337 results on 1374 pages for 'build machine'.

  • Naive Bayes MATLAB, row classification

    - by Jungle Boogie
    How do you classify a row of separate cells in MATLAB? At the moment I can classify single columns like so:

        training = [1;0;-1;-2;4;0;1];  % sample data
        % target_class holds the target class for each training sample;
        % here 'posi', 'zero' and 'negi' are the classes
        target_class = ['posi';'zero';'negi';'negi';'posi';'zero';'posi'];

        % Training and testing the classifier
        test = 10*randn(25, 1);  % random numbers for testing
        % classifies the test data against the given training data
        % using a naive Bayes classifier
        class = classify(test, training, target_class, 'diaglinear')

    Unlike the above, I want to classify whole rows, where each row of cells is one observation:

                A   B   C
        Row A | 1 | 1 | 1  = a house
        Row B | 1 | 2 | 1  = a garden

    Can anyone help? Here is a code example from MATLAB's site:

        nb = NaiveBayes.fit(training, class)
        nb = NaiveBayes.fit(..., 'param1', val1, 'param2', val2, ...)

    I don't understand what 'param1' is or what 'val1' etc. should be.
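
    A minimal sketch of the row-wise setup in Python with scikit-learn (my assumption, since the question is about MATLAB, but the rows-as-observations layout is easier to see here): each row of X is one sample, each column one feature, and y holds one label per row.

        import numpy as np
        from sklearn.naive_bayes import GaussianNB

        X = np.array([[1, 1, 1],    # Row A
                      [1, 2, 1]])   # Row B
        y = np.array(["house", "garden"])  # one label per row

        clf = GaussianNB()               # Gaussian naive Bayes classifier
        clf.fit(X, y)                    # train on whole rows
        print(clf.predict([[1, 2, 1]]))  # -> ['garden']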

  • Rails: Multi-Step New User Signup Form (FSM?)

    - by neezer
    I've read the "Create Multi-Step Wizard" in Advanced Rails Recipes. I've also read and re-read the documentation for the updated FSM I'm using called Workflow, and looked here and here. The Advanced Rails Recipe focuses on records (quizzes) that already exist, and doesn't cover creating new ones. The Workflow docs don't cover any code for controllers or views, so I've no idea what to do with all this model magic, and the last two links barely touch on implementation either. From the aforementioned resources, I have a good understanding of what a FSM in Rails is and how to play with it in the console or IRB, but I've got very little direction or understanding how to implement one into my Rails app. What I would like is this: a simple, multi-step user signup process. Step 1: User enters in their critical details (with validations). Step 2: User enters in their search criteria, for their profile (with validations). Step 3: User agrees to the Terms of Service (with validations). Step 4: User is greeted by a confirmation page, including a link that takes them to their newly created account. I'd also like full navigation between the steps and full capture (saves to the database) with each transition. Can someone please give me a clear implementation of something similar to this? I would LOVE an example app that includes a multi-step signup process where I can look at the code (FULL source code--models AND controllers and views) under the hood, but I've been unable to find anything like that. Any guidance would be appreciated! EDIT: Please help make this a Railscast! Ryan B. (a.k.a. Superman), if you're reading this, we need you! http://feedback.railscasts.com/forums/77-episode-suggestions/suggestions/35553-multi-step-forms-and-wizards
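
    As a language-agnostic illustration (a Python sketch rather than Ruby, with all names invented for the example, not taken from the Workflow gem), the wizard reduces to a small state machine in which each transition validates only the current step's fields and persists what it has so far:

        SIGNUP_STEPS = ["details", "criteria", "terms", "confirmation"]

        class SignupWizard:
            def __init__(self):
                self.state = "details"
                self.record = {}           # stands in for the User row

            def advance(self, step_input):
                self.validate(self.state, step_input)
                self.record.update(step_input)   # capture on every transition
                i = SIGNUP_STEPS.index(self.state)
                if i < len(SIGNUP_STEPS) - 1:
                    self.state = SIGNUP_STEPS[i + 1]

            def back(self):
                i = SIGNUP_STEPS.index(self.state)
                if i > 0:
                    self.state = SIGNUP_STEPS[i - 1]

            def validate(self, state, step_input):
                # Per-step validation: only the current step's fields matter.
                if state == "details" and not step_input.get("email"):
                    raise ValueError("email is required")
                if state == "terms" and not step_input.get("accepted_tos"):
                    raise ValueError("you must accept the Terms of Service")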

  • Ngram IDF smoothing

    - by adi92
    I am trying to use IDF scores to find interesting phrases in my pretty huge corpus of documents. I basically need something like Amazon's Statistically Improbable Phrases, i.e. phrases that distinguish a document from all the others.

    The problem I am running into is that some 3- and 4-grams in my data that have a super-high IDF actually consist of component unigrams and bigrams with really low IDF. For example, "you've never tried" has a very high IDF, while each of its component unigrams has a very low IDF.

    I need to come up with a function that takes the document frequencies of an n-gram and of all its component (n-k)-grams and returns a more meaningful measure of how much the phrase distinguishes the parent document from the rest. If I were dealing with probabilities, I would try interpolation or backoff models. I am not sure what assumptions or intuitions those models rely on to perform well, and so how well they would carry over to IDF scores. Does anybody have any better ideas?
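
    For what it's worth, a sketch of the interpolation idea applied directly to IDF (whether linear interpolation is as well-founded for IDF as it is for probabilities is exactly the open question here): blend an n-gram's own IDF with the average interpolated IDF of its two (n-1)-gram components, so low-IDF parts drag down phrases like "you've never tried".

        import math

        def idf(term, doc_freq, n_docs):
            # Smoothed IDF; doc_freq maps a term to its document count.
            return math.log(n_docs / (1 + doc_freq.get(term, 0)))

        def interpolated_idf(ngram, doc_freq, n_docs, lam=0.5):
            # lam near 1 trusts the raw n-gram IDF; lam near 0 lets the
            # component (n-1)-grams dominate the score.
            words = ngram.split()
            own = idf(ngram, doc_freq, n_docs)
            if len(words) == 1:
                return own
            k = len(words) - 1
            parts = [" ".join(words[i:i + k]) for i in range(2)]
            sub = sum(interpolated_idf(p, doc_freq, n_docs, lam)
                      for p in parts) / len(parts)
            return lam * own + (1 - lam) * sub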

  • Java text classification problem

    - by yox
    Hello, I have a set of Book objects; the class Book is defined as follows:

        class Book {
            String title;
            ArrayList<Tag> taglist;
        }

    Here title is the title of the book, for example "Javascript for dummies", and taglist is its list of tags, for example: Javascript, jquery, "web dev", ...

    As I said, I have a set of books about different topics: IT, BIOLOGY, HISTORY, ... Each book has a title and a set of tags describing it. I have to automatically classify those books into separate sets by topic, for example:

    IT BOOKS:
    Java for dummies
    Javascript for dummies
    Learn flash in 30 days
    C++ programming

    HISTORY BOOKS:
    World wars
    America in 1960
    Martin Luther King's life

    BIOLOGY BOOKS:
    ....

    Do you know a classification algorithm or method for that kind of problem? One solution is to use an external API to determine the category of the text, but the problem here is that the books are in different languages: French, Spanish, English, ...
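
    A hedged sketch of one common approach (Python rather than Java, and scikit-learn is my assumption, not something from the question): treat each book's tags as a tiny bag-of-words document and cluster the tag vectors, which sidesteps the language issue as long as the tags themselves are used consistently.

        from sklearn.feature_extraction.text import TfidfVectorizer
        from sklearn.cluster import KMeans

        books = {
            "Java for dummies":       ["java", "programming"],
            "Javascript for dummies": ["javascript", "web dev"],
            "World wars":             ["history", "war"],
            "America in 1960":        ["history", "america"],
        }

        # One "document" per book: its tags joined into a string.
        docs = [" ".join(tags) for tags in books.values()]
        X = TfidfVectorizer().fit_transform(docs)

        # Unsupervised: no labeled training data needed, only a topic count.
        labels = KMeans(n_clusters=2, n_init=10).fit_predict(X)
        for title, label in zip(books, labels):
            print(label, title)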

  • Which server should I buy?

    - by sri
    Hello, I recently registered a startup (home office). My intention is to run a virtual company server for LAMP website development. Initially, 2-5 people will be logging in to the server remotely.

    a. I'm not sure where to shop or what to buy.
    b. Should I get my own server, or share one?
    c. If I get my own server, should I go with a rack or a tower model?
    d. Which brand and model should I buy?
    e. If I go with a shared (cloud) option, which is the best provider?

    It would be a great help if someone could provide insight. Thanks in advance. Sri

  • Automated Legal Processing

    - by Chris S
    Will it ever be possible to make legal systems quantifiable enough to process with computer algorithms? What technologies would have to be in place before this is possible? Are there any existing technologies already trying to accomplish this?

    Out of curiosity, I downloaded the text of the laws in my local municipality and tried applying some simple NLP tricks to extract rules from sentences. I had mixed results. Some sentences were very explicit (e.g. "Cars may not be left in the park overnight"), but other sentences seemed hopelessly vague (e.g. "The council's purpose is to ensure the well-being of the community").

    I apologize if this is too open-ended a topic, but I've often wondered what society would look like if legal systems were based on less ambiguous language. Lawyers, and the legal process in general, are so expensive because they have to manually process a complex set of rules codified in ambiguous legal texts. If this system could be represented in software, that huge expense could potentially be eliminated, making the legal system more accessible for everyone.

  • Neural Networks test cases

    - by Betamoo
    Can increasing the number of test cases for precision neural networks lead to problems (over-fitting, for example)? Is it always good to increase the number of test cases? Will that always lead to convergence? If not, in which cases does it fail? An example would help. Thanks,

  • "Anagram solver" based on statistics rather than a dictionary/table?

    - by James M.
    My problem is conceptually similar to solving anagrams, except I can't just use a dictionary lookup. I am trying to find plausible words rather than real words. I have created an N-gram model (for now, N=2) based on the letters in a bunch of text. Now, given a random sequence of letters, I would like to permute them into the most likely sequence according to the transition probabilities. I thought I would need the Viterbi algorithm when I started this, but as I look deeper, the Viterbi algorithm optimizes a sequence of hidden random variables based on the observed output. I am trying to optimize the output sequence. Is there a well-known algorithm for this that I can read about? Or am I on the right track with Viterbi and I'm just not seeing how to apply it?
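
    As a brute-force baseline while looking for something better (a sketch only; the transition probabilities below are made-up stand-ins for a trained letter bigram model, and enumerating permutations is only feasible for short sequences): score every permutation by its sum of log transition probabilities and keep the best.

        import math
        from itertools import permutations

        # Stand-in bigram model: P(next letter | current letter).
        # In practice these come from letter counts over a large corpus.
        bigram = {("t", "h"): 0.30, ("h", "e"): 0.40, ("t", "e"): 0.10,
                  ("e", "t"): 0.05, ("h", "t"): 0.02, ("e", "h"): 0.01}

        def score(seq, floor=1e-6):
            # Log-probability of the letter sequence under the bigram model;
            # unseen transitions get a small floor instead of zero.
            return sum(math.log(bigram.get(pair, floor))
                       for pair in zip(seq, seq[1:]))

        letters = "hte"
        best = max(permutations(letters), key=score)
        print("".join(best))  # -> "the" under this toy model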

  • problem with hierarchical clustering in Python

    - by user248237
    I am doing hierarchical clustering of a 2-dimensional matrix using the correlation distance metric (i.e. 1 - Pearson correlation). My code is the following (the data is in a variable called "data"):

        from hcluster import *
        Y = pdist(data, 'correlation')
        cluster_type = 'average'
        Z = linkage(Y, cluster_type)
        dendrogram(Z)

    The error I get is:

        ValueError: Linkage 'Z' contains negative distances.

    What causes this error? The matrix "data" that I use is simply:

        [[  156.651968  2345.168618]
         [  158.089968  2032.840106]
         [  207.996413  2786.779081]
         [  151.885804  2286.70533 ]
         [  154.33665   1967.74431 ]
         [  150.060182  1931.991169]
         [  133.800787  1978.539644]
         [  112.743217  1478.903191]
         [  125.388905  1422.3247  ]]

    I don't see how pdist could ever produce negative numbers when taking 1 - Pearson correlation. Any ideas on this? Thank you.
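
    A hedged guess at the cause, with a quick way to check it: each row here has only two components, so the Pearson correlation between any pair of rows is exactly +1 or -1, and floating-point rounding can turn the intended 0.0 in 1 - r into a tiny negative value such as -2.2e-16, which linkage() then rejects. Clipping before linkage is one workaround (the scipy import is my assumption; hcluster's pdist computes the same distances):

        import numpy as np
        from scipy.spatial.distance import pdist

        data = np.array([[156.651968, 2345.168618],
                         [158.089968, 2032.840106],
                         [207.996413, 2786.779081]])

        Y = pdist(data, 'correlation')
        print(Y)                 # look for values like -2.2e-16 instead of 0.0
        Y = np.clip(Y, 0, None)  # clamp rounding error before calling linkage()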

  • Operant conditioning algorithm?

    - by Ken
    What's the best way to implement real-time operant conditioning (supervised reward/punishment-based learning) for an agent? Should I use a neural network (and if so, what type)? Or something else?

    I want the agent to be trainable to follow commands like a dog. The commands would be gestures on a touchscreen. I want the agent to be trainable to follow a path (in continuous 2D space), make behavioral changes on command (modeled by FSM state transitions), and perform sequences of actions. The agent would be in a simulated physical environment.
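
    One common starting point (a sketch only, with toy names; tabular Q-learning is the simplest reinforcement-learning form of reward/punishment-driven training, though continuous 2D state would realistically need discretization or function approximation):

        import random
        from collections import defaultdict

        ACTIONS = ["forward", "left", "right", "stay"]
        Q = defaultdict(float)               # Q[(state, action)] -> value
        alpha, gamma, eps = 0.1, 0.9, 0.1    # learning rate, discount, exploration

        def choose(state):
            if random.random() < eps:        # explore occasionally
                return random.choice(ACTIONS)
            return max(ACTIONS, key=lambda a: Q[(state, a)])

        def learn(state, action, reward, next_state):
            # 'reward' is the trainer's signal: positive for reward,
            # negative for punishment (the operant-conditioning part).
            best_next = max(Q[(next_state, a)] for a in ACTIONS)
            Q[(state, action)] += alpha * (reward + gamma * best_next
                                           - Q[(state, action)])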

  • Problems with real-valued input deep belief networks (of RBMs)

    - by Junier
    I am trying to recreate the results reported in "Reducing the dimensionality of data with neural networks": autoencoding the Olivetti face dataset with an adapted version of the MNIST digits MATLAB code. But I am having some difficulty. It seems that no matter how much tweaking I do on the number of epochs, rates, or momentum, the stacked RBMs enter the fine-tuning stage with a large amount of error and consequently fail to improve much during fine-tuning. I am also experiencing a similar problem on another real-valued dataset.

    For the first layer I am using an RBM with a smaller learning rate (as described in the paper) and with

        negdata = poshidstates*vishid' + repmat(visbiases,numcases,1);

    I'm fairly confident I am following the instructions found in the supporting material, but I cannot achieve the correct errors. Is there something I am missing? See the code I'm using for real-valued visible-unit RBMs below, and for the whole deep training. The rest of the code can be found here.

    rbmvislinear.m:

        epsilonw  = 0.001;   % Learning rate for weights
        epsilonvb = 0.001;   % Learning rate for biases of visible units
        epsilonhb = 0.001;   % Learning rate for biases of hidden units
        weightcost = 0.0002;
        initialmomentum = 0.5;
        finalmomentum = 0.9;

        [numcases numdims numbatches] = size(batchdata);

        if restart == 1,
          restart = 0;
          epoch = 1;

          % Initializing symmetric weights and biases.
          vishid = 0.1*randn(numdims, numhid);
          hidbiases = zeros(1,numhid);
          visbiases = zeros(1,numdims);

          poshidprobs = zeros(numcases,numhid);
          neghidprobs = zeros(numcases,numhid);
          posprods = zeros(numdims,numhid);
          negprods = zeros(numdims,numhid);
          vishidinc = zeros(numdims,numhid);
          hidbiasinc = zeros(1,numhid);
          visbiasinc = zeros(1,numdims);
          sigmainc = zeros(1,numhid);
          batchposhidprobs = zeros(numcases,numhid,numbatches);
        end

        for epoch = epoch:maxepoch,
          fprintf(1,'epoch %d\r',epoch);
          errsum = 0;
          for batch = 1:numbatches,
            if (mod(batch,100)==0)
              fprintf(1,' %d ',batch);
            end

            %%%%% START POSITIVE PHASE %%%%%
            data = batchdata(:,:,batch);
            poshidprobs = 1./(1 + exp(-data*vishid - repmat(hidbiases,numcases,1)));
            batchposhidprobs(:,:,batch) = poshidprobs;
            posprods = data' * poshidprobs;
            poshidact = sum(poshidprobs);
            posvisact = sum(data);
            %%%%% END OF POSITIVE PHASE %%%%%

            poshidstates = poshidprobs > rand(numcases,numhid);

            %%%%% START NEGATIVE PHASE %%%%%
            negdata = poshidstates*vishid' + repmat(visbiases,numcases,1); % + randn(numcases,numdims) if not using mean
            neghidprobs = 1./(1 + exp(-negdata*vishid - repmat(hidbiases,numcases,1)));
            negprods = negdata'*neghidprobs;
            neghidact = sum(neghidprobs);
            negvisact = sum(negdata);
            %%%%% END OF NEGATIVE PHASE %%%%%

            err = sum(sum( (data-negdata).^2 ));
            errsum = err + errsum;

            if epoch > 5,
              momentum = finalmomentum;
            else
              momentum = initialmomentum;
            end;

            %%%%% UPDATE WEIGHTS AND BIASES %%%%%
            vishidinc = momentum*vishidinc + ...
                epsilonw*( (posprods-negprods)/numcases - weightcost*vishid);
            visbiasinc = momentum*visbiasinc + (epsilonvb/numcases)*(posvisact-negvisact);
            hidbiasinc = momentum*hidbiasinc + (epsilonhb/numcases)*(poshidact-neghidact);
            vishid = vishid + vishidinc;
            visbiases = visbiases + visbiasinc;
            hidbiases = hidbiases + hidbiasinc;
            %%%%% END OF UPDATES %%%%%
          end
          fprintf(1, '\nepoch %4i error %f \n', epoch, errsum);
        end

    dofacedeepauto.m:

        clear all
        close all

        maxepoch = 200;  % In the Science paper we use maxepoch=50, but it works just fine.
        numhid = 2000; numpen = 1000; numpen2 = 500; numopen = 30;

        fprintf(1,'Pretraining a deep autoencoder. \n');
        fprintf(1,'The Science paper used 50 epochs. This uses %3i \n', maxepoch);

        load fdata  % makeFaceData;
        [numcases numdims numbatches] = size(batchdata);

        fprintf(1,'Pretraining Layer 1 with RBM: %d-%d \n', numdims, numhid);
        restart = 1;
        rbmvislinear;
        hidrecbiases = hidbiases;
        save mnistvh vishid hidrecbiases visbiases;

        maxepoch = 50;
        fprintf(1,'\nPretraining Layer 2 with RBM: %d-%d \n', numhid, numpen);
        batchdata = batchposhidprobs;
        numhid = numpen;
        restart = 1;
        rbm;
        hidpen = vishid; penrecbiases = hidbiases; hidgenbiases = visbiases;
        save mnisthp hidpen penrecbiases hidgenbiases;

        fprintf(1,'\nPretraining Layer 3 with RBM: %d-%d \n', numpen, numpen2);
        batchdata = batchposhidprobs;
        numhid = numpen2;
        restart = 1;
        rbm;
        hidpen2 = vishid; penrecbiases2 = hidbiases; hidgenbiases2 = visbiases;
        save mnisthp2 hidpen2 penrecbiases2 hidgenbiases2;

        fprintf(1,'\nPretraining Layer 4 with RBM: %d-%d \n', numpen2, numopen);
        batchdata = batchposhidprobs;
        numhid = numopen;
        restart = 1;
        rbmhidlinear;
        hidtop = vishid; toprecbiases = hidbiases; topgenbiases = visbiases;
        save mnistpo hidtop toprecbiases topgenbiases;

        backpropface;

    Thanks for your time.

  • git: can I speed up committing?

    - by AndreasT
    I have a big repository in a shared folder. I use git from within a VM on that folder. Everything works nicely, but the repository is big, and git's search through all directories and files when committing is slow. I cannot move the repository out of the shared folder. I tried to git add specific files and directories, but when I do git commit -m "something" it still goes off on its odyssey through the directory tree. Can I do commits that ignore the rest of the tree?

  • How do polymorphic inline caches work with mutable types?

    - by kingkilr
    A polymorphic inline cache works by caching the actual method by the type of the object, in order to avoid expensive lookup procedures (usually a hashtable lookup). How does one handle the type comparison if the type objects are mutable, i.e. a method might be monkey-patched into something different at run time?

    The one idea I've come up with is a "class counter" that gets incremented each time a method is adjusted. However, this seems like it would be exceptionally expensive in a heavily monkey-patched environment, since it would kill all the PICs for that class even if the methods they cache weren't altered. I'm sure there must be a good solution to this, as the issue applies directly to JavaScript, and AFAIK all 3 of the big JS VMs have PICs (wow, acronym ahoy).
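
    A sketch of a finer-grained variant of that idea (hypothetical Python names; real VMs bake these checks into JIT-compiled code): version each selector separately instead of the whole class, so patching one method invalidates only the call sites that cached it.

        class VersionedClass:
            def __init__(self, name):
                self.name = name
                self.methods = {}
                self.versions = {}            # per-selector counter

            def define(self, selector, fn):
                # Bump only the edited selector; caches for the class's
                # other methods survive the monkey patch.
                self.methods[selector] = fn
                self.versions[selector] = self.versions.get(selector, 0) + 1

        class CallSite:
            def __init__(self, selector):
                self.selector = selector
                self.cache = {}               # cls -> (version, method)

            def call(self, cls, *args):
                version = cls.versions.get(self.selector, 0)
                hit = self.cache.get(cls)
                if hit is not None and hit[0] == version:
                    return hit[1](*args)      # fast path: cache still valid
                method = cls.methods[self.selector]   # slow path: full lookup
                self.cache[cls] = (version, method)
                return method(*args)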

  • Why are so many new languages written for the Java VM?

    - by sdudo
    There are more and more programming languages (Scala, Clojure, ...) coming out that are made for the Java VM and are therefore compatible with Java bytecode. I'm beginning to ask myself: why the Java VM? What makes it so powerful or popular that new programming languages, which seem to be gaining popularity too, are created for it? Why don't they write a new VM for a new language?

  • What is the point of padding?

    - by ktm5124
    In particular, I'm reading into the Mach-O binary file format for Intel 32 on OS X. After the FAT header there is a whole bunch of padding before the offset of the first archive. What is the point of all this padding? To be more specific, there are upwards of 4000 bytes of padding between the FAT header and the first archive (in particular, its mach_header). Why include all these extra bytes? Is OS X fond of adding 4 KB to all its universal binaries?
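
    The padding is almost certainly alignment: each fat_arch record in the FAT header carries an align field (a power-of-two exponent), and each architecture's archive is placed at the next 2^align boundary, typically 2^12 = 4096 bytes, the VM page size, which matches the roughly 4000 bytes observed. A small sketch that reads those fields (the file path is a placeholder):

        import struct

        with open("/usr/bin/some_universal_binary", "rb") as f:  # placeholder
            magic, nfat = struct.unpack(">II", f.read(8))
            assert magic == 0xCAFEBABE          # FAT magic, stored big-endian
            for _ in range(nfat):
                cputype, cpusubtype, offset, size, align = \
                    struct.unpack(">iiIII", f.read(20))
                # 'align' is an exponent: this slice starts on a 2**align
                # boundary, usually 2**12 = 4096, hence the padding.
                print(f"cputype={cputype} offset={offset} "
                      f"(aligned to {2 ** align})")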

  • What is the difference between causal models and directed graphical models?

    - by Neil G
    What is the difference between causal models and directed graphical models? Or: what is the difference between causal relationships and directed probabilistic relationships? Or, even better: what would you put in the interface of a DirectedProbabilisticModel class, and what in a CausalModel class? Would one inherit from the other?

    Collaborative solution:

        interface DirectedModel {
            map<Node, double> InferredProbabilities(map<Node, double> observed_probabilities,
                                                    set<Node> nodes_of_interest)
        }

        interface CausalModel : DirectedModel {
            bool NodesDependent(set<Node> nodes, map<Node, double> context)
            map<Node, double> InferredProbabilities(map<Node, double> observed_probabilities,
                                                    map<Node, double> externally_forced_probabilities,
                                                    set<Node> nodes_of_interest)
        }

  • GA Framework for Virtual Machines

    - by PeanutPower
    Does anyone know of any .NET genetic algorithm frameworks for evolving instruction sets in virtual machines to solve abstract problems? I would be particularly interested in a framework that allows virtual machines to self-propagate within a pool and evolve against a fitness function determined by a data set of "good" outputs for expected inputs.
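
    Not a framework pointer, but a hedged sketch of the core loop being asked about (Python, with a made-up three-instruction machine; the fitness function scores programs against expected input/output pairs):

        import random

        OPS = ["inc", "dec", "double"]         # toy instruction set
        CASES = [(1, 4), (2, 6), (3, 8)]       # (input, expected output): 2x + 2

        def run(program, x):
            for op in program:
                x = x + 1 if op == "inc" else x - 1 if op == "dec" else x * 2
            return x

        def fitness(program):
            # Negative total error over the "good outputs" data set.
            return -sum(abs(run(program, i) - out) for i, out in CASES)

        def mutate(program):
            p = list(program)
            p[random.randrange(len(p))] = random.choice(OPS)
            return p

        pool = [[random.choice(OPS) for _ in range(3)] for _ in range(50)]
        for generation in range(200):
            pool.sort(key=fitness, reverse=True)
            survivors = pool[:25]              # selection
            pool = survivors + [mutate(random.choice(survivors))
                                for _ in range(25)]

        print(pool[0], fitness(pool[0]))       # e.g. ['double', 'inc', 'inc']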

  • Playground for Artificial Intelligence?

    - by Dolph Mathews
    In school, one of my professors had created a 3D game (not just an engine), where all the players were entirely AI-controlled, and it was our assignment to program the AI of a single player. We were basically provided an API to interact with the game world. Our AI implementations were then dropped into the game together, and we watched as our programs went to battle against each other. It was like robot soccer, but virtual, and with lots of big guns. I'm now looking for anything similar (and open source) to play with. (Preferably in Java, but I'm open to any language.) I'm not looking for a game engine, or a framework... I'm looking for a complete game that simply lacks AI code... preferably set up for this kind of exercise. Suggestions?

  • Hebbian learning

    - by Bane
    I have asked another question on Hebbian learning before, and I guess I got a good answer, which I accepted. But the problem is that I now realize I was completely mistaken about Hebbian learning, and I'm a bit confused. So could you please explain how it can be useful, and what for? Because the way Wikipedia and some other pages describe it, it doesn't make sense!

    Why would we want to keep increasing the weight between the input and the output neuron if they fire together? What kinds of problems can it be used to solve? When I simulate it in my head, it certainly can't do the basic AND, OR, and other operations (say you initialize the weights at zero: the output neurons never fire, and the weights are never increased!).
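
    For reference, a minimal sketch of the plain Hebbian update (toy numbers; it also illustrates the point above, since with all-zero weights and a thresholded output nothing would ever fire, which is why Hebbian learning is typically used for unsupervised pattern association rather than for learning AND/OR from scratch):

        import numpy as np

        eta = 0.1                        # learning rate
        x = np.array([1.0, 0.0, 1.0])    # pre-synaptic (input) activity
        w = np.array([0.2, 0.2, 0.2])    # small nonzero initial weights

        for step in range(10):
            y = w @ x                    # post-synaptic output, linear unit
            w += eta * y * x             # Hebb: "fire together, wire together"

        print(w)  # weights on active inputs grow; the silent input's stays put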

  • PHP code works on guest OS but doesn't work on host OS

    - by Ieyasu Sawada
    Can you give me some guidance on how to determine what the problem is when the same piece of code works on the guest OS but doesn't work on the host OS? I created the project on Windows 7, but now it seems to work on XP only. Here's what I have installed on the host OS (Windows 7): [screenshot]. And here's what I have on the guest OS: [screenshot]. And here are the guest OS and host OS side by side: [screenshot].

    Other things that are the same:
    - PHP version
    - MySQL version
    - Apache version
    - the data stored in the database

    Here's the code of checkout.php: http://cu.pastebin.com/YeBR9rTs Forgive me if it's messy.
