Search Results

Search found 3771 results on 151 pages for 'social intelligence'.

Page 38/151 | < Previous Page | 34 35 36 37 38 39 40 41 42 43 44 45  | Next Page >

  • What is the 'order' of a perceptron?

    - by Martin
    A few simple marks for those who know the answer. I'm doing revision for exams at the moment, and one of the past questions is: what is meant by the order of a perceptron? I can't find any information about this in my lecture notes, and even Google seems at a loss. My guess is that the order is the number of layers in a neural network, but this doesn't seem quite right.

    Read the article

  • Why is Lisp used for AI?

    - by Cristián Romo
    I've been learning Lisp to expand my horizons because I have heard that it is used in AI programming. After doing some exploring, I have yet to find AI examples, or anything in the language that would make it better suited to AI. Was Lisp used in the past simply because it was available, or is there something that I'm just missing?

    Read the article

  • Question about convolutional neural networks

    - by Nhu Phuong
    I have read a few books and articles about convolutional neural networks, and I seem to understand the concept, but I don't know how to put it together as in the image below: from a 28x28 normalized pixel INPUT we get 4 feature maps of 24x24. But how do we get them? Resize the INPUT image? Perform an image transformation? If so, what kind of transformation? Cut the input image into four 24x24 pieces at the four corners? I don't understand the process; it seems to me that they cut up or resize the image to something smaller at each step. Please help, thanks.
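
    For intuition, here is a minimal sketch (assuming NumPy; the 5x5 kernel size is an assumption inferred from the dimensions) showing that the 24x24 size comes from sliding a 5x5 kernel over the 28x28 input with no padding: 28 - 5 + 1 = 24. Each of the 4 feature maps comes from its own kernel; the image is neither resized nor cut up.

        import numpy as np

        def conv2d_valid(image, kernel):
            # Slide the kernel over every position where it fully fits ("valid" mode).
            kh, kw = kernel.shape
            oh, ow = image.shape[0] - kh + 1, image.shape[1] - kw + 1
            out = np.empty((oh, ow))
            for i in range(oh):
                for j in range(ow):
                    out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
            return out

        image = np.random.rand(28, 28)                       # stand-in for a normalized input
        kernels = [np.random.randn(5, 5) for _ in range(4)]  # one learned kernel per feature map
        maps = [conv2d_valid(image, k) for k in kernels]
        print(maps[0].shape)  # (24, 24), since 28 - 5 + 1 = 24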

    Read the article

  • Prolog: Finding the Nth Element in a List

    - by Thomas
    I am attempting to locate the nth element of a list in Prolog. Here is the code I am attempting to use:

        Cells = [OK, _, _, _, _, _].
        ...
        next_safe(_) :-
            facing(CurrentDirection),
            delta(CurrentDirection, Delta),
            in_cell(OldLoc),
            NewLoc is OldLoc + Delta,
            nth1(NewLoc, Cells, SafetyIdentifier),
            SafetyIdentifier = OK.

    Basically, I am trying to check whether a given cell is "OK" to move into. Am I missing something?

    Read the article

  • Counting Sublist Elements in Prolog

    - by idea_
    How can I count nested list elements in Prolog? I have the following predicate defined, which counts a nested list as one element:

        length([], 0).
        length([H|T], N) :- length(T, M), N is M+1.

    Usage:

        ?- length([a,b,c], Out).
        Out = 3

    This works, but I would like to count nested elements as well, i.e.:

        ?- length([a,b,[c,d,e],f], Output).
        Output = 6
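
    For illustration of the recursion involved, here is the same nested count sketched in Python (descend into sublists instead of counting each one as a single element):

        def deep_length(lst):
            # Count atoms, recursing into nested lists.
            total = 0
            for item in lst:
                if isinstance(item, list):
                    total += deep_length(item)
                else:
                    total += 1
            return total

        print(deep_length(['a', 'b', ['c', 'd', 'e'], 'f']))  # 6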

    Read the article

  • Criteria for a software program being intelligent

    - by bobah
    Just out of curiosity: assuming there exists a software life form, how would you detect it? What are your criteria for deciding whether something or someone is intelligent? It seems to me that it should be quite simple to create such software once you set the right target (not just the naive "mimic a human, pass the Turing Test" route). When posting an answer, try also to find a counterexample. I have real difficulty inventing anything consistent that I myself agree with. Warmup

    Read the article

  • Searching in graphs/trees with depth-first/breadth-first/A* algorithms

    - by devoured elysium
    I have a couple of questions about searching in graphs and trees. Let's assume I have an empty chess board and I want to move a pawn around from point A to B.

    A. When using depth-first search or breadth-first search, must we use open and closed lists, that is, a list of all the elements still to check and another of all the elements that were already checked? Is it even possible to do it without those lists? What about A*, does it need them?

    B. When using lists, after having found a solution, how can you get the sequence of states from A to B? I assume that instead of just keeping the (x, y) states in the open and closed lists, you keep an "extended state" of the form (x, y, parent_of_this_node)?

    C. State A has 4 possible moves (right, left, up, down). If my first move is left, should I allow the next state to come back to the original state, that is, make the "right" move? If not, must I traverse the search tree every time to check which states I've already been to?

    D. When I see a state in the tree where I've already been, should I just ignore it, as I know it's a dead end? I guess to do this I'd have to always keep the list of visited states, right?

    E. Is there any difference between search trees and graphs? Are they just different ways to look at the same thing?
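
    A minimal sketch touching on A and B (assumptions: 8x8 board, 4-directional moves; all names are mine). A plain DFS/BFS needs only a visited set plus the frontier (stack/queue); A*'s open list additionally has to be a priority queue ordered by f = g + h. The parent map below doubles as the visited set and gives back the state sequence without storing whole paths:

        from collections import deque

        def bfs_path(start, goal, neighbors):
            parent = {start: None}        # maps each state to its predecessor; also the "visited" set
            queue = deque([start])
            while queue:
                state = queue.popleft()
                if state == goal:
                    path = []
                    while state is not None:      # walk parent pointers back to the start
                        path.append(state)
                        state = parent[state]
                    return path[::-1]
                for nxt in neighbors(state):
                    if nxt not in parent:         # skip states already seen
                        parent[nxt] = state
                        queue.append(nxt)
            return None

        def moves(p):
            # 4-directional moves on an 8x8 board
            x, y = p
            steps = [(x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)]
            return [(a, b) for a, b in steps if 0 <= a < 8 and 0 <= b < 8]

        print(bfs_path((0, 0), (3, 4), moves))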

    Read the article

  • Correct formulation of the A* algorithm

    - by Eli Bendersky
    Hello, I'm looking at definitions of the A* path-finding algorithm, and it seems to be defined somewhat differently in different places. The difference is in the action performed when going through the successors of a node and finding that a successor is on the closed list. One approach (suggested by Wikipedia, and this article) says: if the successor is on the closed list, just ignore it. Another approach (suggested here and here, for example) says: if the successor is on the closed list, examine its cost; if it's higher than the currently computed score, remove the item from the closed list for future examination. I'm confused - which method is correct? Intuitively, the first makes more sense to me, but I wonder about the difference in definition. Is one of the definitions wrong, or are they somehow isomorphic?
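
    For what it's worth, the two formulations coincide when the heuristic is consistent (monotone): a node is then never closed with a suboptimal g, so ignoring closed successors is safe. With a heuristic that is admissible but not consistent, a better path to an already-closed node can turn up later, and reopening is needed to guarantee optimality. A sketch of the reopening variant (hypothetical neighbors/cost/h callables, not taken from the cited articles); pushing a node again on a g-improvement effectively reopens it, so no explicit closed list is needed:

        import heapq

        def astar(start, goal, neighbors, cost, h):
            g = {start: 0.0}
            parent = {start: None}
            open_heap = [(h(start), start)]
            while open_heap:
                _, node = heapq.heappop(open_heap)
                if node == goal:
                    path = []
                    while node is not None:
                        path.append(node)
                        node = parent[node]
                    return path[::-1]
                for nxt in neighbors(node):
                    new_g = g[node] + cost(node, nxt)
                    if new_g < g.get(nxt, float('inf')):
                        # Better route found; re-push even if nxt was expanded before.
                        g[nxt] = new_g
                        parent[nxt] = node
                        heapq.heappush(open_heap, (new_g + h(nxt), nxt))
            return None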

    Read the article

  • Improved genetic algorithm for the multiknapsack problem

    - by user347918
    Hello guys, I've recently been improving a traditional genetic algorithm for the multiknapsack problem, and in my tests the improved GA performs better than the traditional one (I used the publicly available instances from the OR-Library, http://people.brunel.ac.uk/~mastjjb/jeb/orlib/mknapinfo.html, to test the GAs). Does anybody know of other improved GAs? I would like to compare mine against them. I searched the internet, but couldn't find a good algorithm to compare with.

    Read the article

  • Do you think the AI industry will ever come back?

    - by Isaiah
    I just spent some time reading about the collapse of the AI industry, and realized that a lot of the reason it failed was that technology was slow to catch up with the theories about when it would be available. I also read that computers able to emulate human synapses are believed to be possible around 2015-2025. It's 2010 now, and we're getting pretty close to that time frame. I was wondering whether anyone thinks the AI industry will return as the technology arrives? And if so, will it change the language market? Could Lisp-like languages suddenly experience a burst of growth? I just thought it was interesting to think about.

    Read the article

  • Hebbian learning

    - by Bane
    I have asked another question on Hebbian learning before, and I got a good answer which I accepted, but the problem is that I now realize I was mistaken about Hebbian learning completely, and that I'm a bit confused. So, could you please explain how it can be useful, and what for? Because the way Wikipedia and some other pages describe it, it doesn't make sense! Why would we want to keep increasing the weight between the input and the output neuron if they fire together? What kind of problems can it be used to solve? When I simulate it in my head, it certainly can't do the basic AND, OR, and other operations (say you initialize the weights at zero: the output neurons never fire, and the weights are never increased!)
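
    A toy sketch of the plain Hebb rule on a linear unit (assuming NumPy; the normalization step is an extra assumption to keep the weights bounded). With correlated inputs, the repeated "strengthen co-active connections" update pulls the weight vector toward the dominant direction of the input data, which is why Hebbian learning is usually presented as unsupervised correlation/feature extraction rather than a way to learn logic gates:

        import numpy as np

        rng = np.random.default_rng(0)
        eta = 0.01
        w = rng.normal(scale=0.1, size=2)    # small random init; all-zero weights would never move

        for _ in range(2000):
            x = rng.multivariate_normal([0, 0], [[3, 2], [2, 2]])
            y = w @ x                         # linear "output neuron"
            w += eta * y * x                  # Hebb: co-active connections get stronger
            w /= np.linalg.norm(w)            # keep |w| = 1 so the weights don't blow up

        print(w)  # ends up along the leading eigenvector of the input covariance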

    Read the article

  • Identifying voice as male or female

    - by duder
    I'm not much into audio engineering, so please go easy on me. I'm receiving an audio file as input, and need to detect whether the speaker is male or female. Any ideas how to go about doing this? I'm using PHP, but am open to using other languages, and don't mind learning a little bit of sound theory as long as the time is proportionate to the task.
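
    One common first approximation is fundamental frequency (pitch): adult male voices typically sit around 85-180 Hz and female voices around 165-255 Hz, so a crude classifier can threshold an f0 estimate; the ranges overlap, so this heuristic does fail on some speakers. A sketch assuming NumPy/SciPy and a hypothetical speech.wav, using autocorrelation on a short frame:

        import numpy as np
        from scipy.io import wavfile

        def estimate_f0(frame, rate, fmin=60, fmax=400):
            # Autocorrelation pitch estimate: the lag of the strongest
            # self-similarity within the plausible voice range.
            frame = frame - frame.mean()
            corr = np.correlate(frame, frame, mode='full')[len(frame) - 1:]
            lo, hi = int(rate / fmax), int(rate / fmin)
            lag = lo + np.argmax(corr[lo:hi])
            return rate / lag

        rate, data = wavfile.read('speech.wav')   # hypothetical input file
        if data.ndim > 1:
            data = data[:, 0]                     # take one channel
        frame = data[:4096].astype(float)         # short frame; pick a voiced region in practice
        f0 = estimate_f0(frame, rate)
        print('likely female' if f0 > 165 else 'likely male', round(f0, 1), 'Hz')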

    Read the article

  • Measuring the performance of a classification algorithm

    - by Silver Dragon
    I've got a classification problem at hand, which I'd like to address with a machine learning algorithm (Bayes, or Markovian, probably; the question is independent of the classifier to be used). Given a number of training instances, I'm looking for a way to measure the performance of an implemented classifier, taking the overfitting problem into account. That is: given N[1..100] training samples, if I run the training algorithm on every one of the samples and use these very same samples to measure fitness, I might run into a data overfitting problem: the classifier will know the exact answers for the training instances without having much predictive power, rendering the fitness results useless. An obvious solution would be separating the hand-tagged samples into training and test samples; I'd like to learn about methods for selecting the statistically significant samples for training. White papers, book pointers, and PDFs much appreciated!
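
    The standard recipe for this is k-fold cross-validation: split the hand-tagged samples into k folds, train on k-1 of them, test on the held-out fold, and rotate so every sample is tested exactly once by a model that never saw it in training. A sketch assuming scikit-learn, with the iris data standing in for the hand-tagged samples and naive Bayes standing in for the classifier under test:

        from sklearn.datasets import load_iris
        from sklearn.model_selection import cross_val_score
        from sklearn.naive_bayes import GaussianNB

        X, y = load_iris(return_X_y=True)
        # 5-fold cross-validation: mean/std give a fitness estimate
        # that is not inflated by memorizing the training set.
        scores = cross_val_score(GaussianNB(), X, y, cv=5)
        print(scores.mean(), scores.std())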

    Read the article

  • How to program simple chat bot AI?

    - by Larsenal
    I want to build a bot that asks someone a few simple questions and branches based on the answer. I realize parsing meaning from the human responses will be challenging, but how do you set up the program to deal with the "state" of the conversation? EDIT: It will be a one-to-one conversation between a human and the bot.
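
    The usual structure is a finite state machine: each state carries a prompt plus a map from (classified) answers to the next state. A minimal sketch with a hypothetical three-question flow and a crude yes/no classifier standing in for real response parsing:

        states = {
            "start": ("Do you need help with an order?",
                      {"yes": "order", "no": "other"}),
            "order": ("Is the order late?",
                      {"yes": "late", "no": "human"}),
            "other": ("Would you like to talk to a person?",
                      {"yes": "human", "no": "end"}),
        }
        endings = {
            "late": "Sorry about that; someone will look into it.",
            "human": "Connecting you to a person.",
            "end": "Okay, goodbye!",
        }

        def classify(answer):
            # Stand-in for real NLP: anything starting with 'y' counts as yes.
            return "yes" if answer.strip().lower().startswith("y") else "no"

        state = "start"
        while state not in endings:
            prompt, branches = states[state]
            state = branches[classify(input(prompt + " "))]
        print(endings[state])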

    Read the article

  • Searching Natural Language Sentence Structure

    - by Cerin
    What's the best way to store and search a database of natural-language sentence structure trees? Using OpenNLP's English Treebank Parser, I can get fairly reliable sentence structure parses for arbitrary sentences. What I'd like to do is create a tool that can extract all the doc strings from my source code, generate these trees for all sentences in the doc strings, store the trees and their associated function names in a database, and then allow a user to search the database using natural language queries. So, given the sentence "This uploads files to a remote machine." for the function upload_files(), I'd have the tree:

        (TOP (S (NP (DT This)) (VP (VBZ uploads) (NP (NNS files)) (PP (TO to) (NP (DT a) (JJ remote) (NN machine)))) (. .)))

    If someone entered the query "How can I upload files?", equating to the tree:

        (TOP (SBARQ (WHADVP (WRB How)) (SQ (MD can) (NP (PRP I)) (VP (VB upload) (NP (NNS files)))) (. ?)))

    how would I store and query these trees in a SQL database? I've written a simple proof-of-concept script that can perform this search using a mix of regular expressions and network graph parsing, but I'm not sure how I'd implement it in a scalable way. And yes, I realize my example would be trivial to retrieve using a simple keyword search. The idea I'm trying to test is how I might take advantage of grammatical structure, so I can weed out entries with similar keywords but a different sentence structure. For example, with the above query, I wouldn't want to retrieve the entry associated with the sentence "Checks a remote machine to find a user that uploads files.", which has similar keywords but obviously describes completely different behavior.
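
    One way to make grammatical structure searchable in plain SQL is to index each tree by its production rules: explode every inner node into a "label -> children" row, then match queries on shared productions and rank by overlap. A sketch with SQLite; the parse/productions helpers are illustrative, not part of OpenNLP:

        import re
        import sqlite3

        def parse_sexpr(s):
            # Parse "(TOP (S ...))" into nested (label, children) tuples.
            tokens = re.findall(r'\(|\)|[^\s()]+', s)
            pos = 0
            def walk():
                nonlocal pos
                pos += 1                        # consume '('
                label = tokens[pos]; pos += 1
                children = []
                while tokens[pos] != ')':
                    if tokens[pos] == '(':
                        children.append(walk())
                    else:
                        children.append(tokens[pos]); pos += 1
                pos += 1                        # consume ')'
                return (label, children)
            return walk()

        def productions(node):
            # Yield one "label -> child labels" rule per inner node.
            label, children = node
            kids = [c[0] if isinstance(c, tuple) else c for c in children]
            yield label + ' -> ' + ' '.join(kids)
            for c in children:
                if isinstance(c, tuple):
                    yield from productions(c)

        db = sqlite3.connect(':memory:')
        db.execute('CREATE TABLE rules (func TEXT, rule TEXT)')
        tree = parse_sexpr('(TOP (S (NP (DT This)) (VP (VBZ uploads) '
                           '(NP (NNS files)) (PP (TO to) (NP (DT a) '
                           '(JJ remote) (NN machine)))) (. .)))')
        db.executemany('INSERT INTO rules VALUES (?, ?)',
                       [('upload_files', r) for r in productions(tree)])

        # Rank stored functions by how many productions they share with a query tree.
        for row in db.execute(
                "SELECT func, COUNT(*) FROM rules WHERE rule IN (?, ?) "
                "GROUP BY func ORDER BY 2 DESC",
                ('VP -> VBZ NP PP', 'NP -> NNS')):
            print(row)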

    Read the article

  • Problems with real-valued deep belief networks (of RBMs)

    - by Junier
    I am trying to recreate the results reported in "Reducing the Dimensionality of Data with Neural Networks" of autoencoding the Olivetti face dataset with an adapted version of the MNIST digits MATLAB code, but am having some difficulty. It seems that no matter how much tweaking I do on the number of epochs, rates, or momentum, the stacked RBMs enter the fine-tuning stage with a large amount of error and consequently fail to improve much there. I am also experiencing a similar problem on another real-valued dataset. For the first layer I am using an RBM with a smaller learning rate (as described in the paper) and with

        negdata = poshidstates*vishid' + repmat(visbiases,numcases,1);

    I'm fairly confident I am following the instructions found in the supporting material, but I cannot achieve the correct errors. Is there something I am missing? See below for the code I'm using for real-valued visible unit RBMs, and for the whole deep training. The rest of the code can be found here.

    rbmvislinear.m:

        epsilonw  = 0.001; % Learning rate for weights
        epsilonvb = 0.001; % Learning rate for biases of visible units
        epsilonhb = 0.001; % Learning rate for biases of hidden units
        weightcost = 0.0002;
        initialmomentum = 0.5;
        finalmomentum   = 0.9;

        [numcases numdims numbatches] = size(batchdata);

        if restart == 1,
          restart = 0;
          epoch = 1;

          % Initializing symmetric weights and biases.
          vishid    = 0.1*randn(numdims, numhid);
          hidbiases = zeros(1,numhid);
          visbiases = zeros(1,numdims);

          poshidprobs = zeros(numcases,numhid);
          neghidprobs = zeros(numcases,numhid);
          posprods    = zeros(numdims,numhid);
          negprods    = zeros(numdims,numhid);
          vishidinc   = zeros(numdims,numhid);
          hidbiasinc  = zeros(1,numhid);
          visbiasinc  = zeros(1,numdims);
          sigmainc    = zeros(1,numhid);
          batchposhidprobs = zeros(numcases,numhid,numbatches);
        end

        for epoch = epoch:maxepoch,
          fprintf(1,'epoch %d\r',epoch);
          errsum = 0;
          for batch = 1:numbatches,
            if (mod(batch,100)==0)
              fprintf(1,' %d ',batch);
            end

            %%%%%%%%% START POSITIVE PHASE %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
            data = batchdata(:,:,batch);
            poshidprobs = 1./(1 + exp(-data*vishid - repmat(hidbiases,numcases,1)));
            batchposhidprobs(:,:,batch) = poshidprobs;
            posprods  = data' * poshidprobs;
            poshidact = sum(poshidprobs);
            posvisact = sum(data);
            %%%%%%%%% END OF POSITIVE PHASE %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%

            poshidstates = poshidprobs > rand(numcases,numhid);

            %%%%%%%%% START NEGATIVE PHASE %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
            negdata = poshidstates*vishid' + repmat(visbiases,numcases,1); % + randn(numcases,numdims) if not using mean
            neghidprobs = 1./(1 + exp(-negdata*vishid - repmat(hidbiases,numcases,1)));
            negprods  = negdata'*neghidprobs;
            neghidact = sum(neghidprobs);
            negvisact = sum(negdata);
            %%%%%%%%% END OF NEGATIVE PHASE %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%

            err = sum(sum( (data-negdata).^2 ));
            errsum = err + errsum;

            if epoch > 5,
              momentum = finalmomentum;
            else
              momentum = initialmomentum;
            end;

            %%%%%%%%% UPDATE WEIGHTS AND BIASES %%%%%%%%%%%%%%%%%%%%%%%%%%%%%
            vishidinc  = momentum*vishidinc + ...
                epsilonw*( (posprods-negprods)/numcases - weightcost*vishid);
            visbiasinc = momentum*visbiasinc + (epsilonvb/numcases)*(posvisact-negvisact);
            hidbiasinc = momentum*hidbiasinc + (epsilonhb/numcases)*(poshidact-neghidact);

            vishid    = vishid + vishidinc;
            visbiases = visbiases + visbiasinc;
            hidbiases = hidbiases + hidbiasinc;
            %%%%%%%%%%%%%%%% END OF UPDATES %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
          end
          fprintf(1, '\nepoch %4i error %f \n', epoch, errsum);
        end

    dofacedeepauto.m:

        clear all
        close all

        maxepoch = 200; % In the Science paper we use maxepoch=50, but it works just fine.
        numhid = 2000; numpen = 1000; numpen2 = 500; numopen = 30;

        fprintf(1,'Pretraining a deep autoencoder. \n');
        fprintf(1,'The Science paper used 50 epochs. This uses %3i \n', maxepoch);

        load fdata % makeFaceData;
        [numcases numdims numbatches] = size(batchdata);

        fprintf(1,'Pretraining Layer 1 with RBM: %d-%d \n', numdims, numhid);
        restart = 1;
        rbmvislinear;
        hidrecbiases = hidbiases;
        save mnistvh vishid hidrecbiases visbiases;

        maxepoch = 50;
        fprintf(1,'\nPretraining Layer 2 with RBM: %d-%d \n', numhid, numpen);
        batchdata = batchposhidprobs;
        numhid = numpen;
        restart = 1;
        rbm;
        hidpen = vishid; penrecbiases = hidbiases; hidgenbiases = visbiases;
        save mnisthp hidpen penrecbiases hidgenbiases;

        fprintf(1,'\nPretraining Layer 3 with RBM: %d-%d \n', numpen, numpen2);
        batchdata = batchposhidprobs;
        numhid = numpen2;
        restart = 1;
        rbm;
        hidpen2 = vishid; penrecbiases2 = hidbiases; hidgenbiases2 = visbiases;
        save mnisthp2 hidpen2 penrecbiases2 hidgenbiases2;

        fprintf(1,'\nPretraining Layer 4 with RBM: %d-%d \n', numpen2, numopen);
        batchdata = batchposhidprobs;
        numhid = numopen;
        restart = 1;
        rbmhidlinear;
        hidtop = vishid; toprecbiases = hidbiases; topgenbiases = visbiases;
        save mnistpo hidtop toprecbiases topgenbiases;

        backpropface;

    Thanks for your time.

    Read the article

  • Large test data for the knapsack problem

    - by user347918
    I am a research student, and I am searching for large test data for the knapsack problem to test my algorithm, but I haven't been able to find any large instances. I need data with around 1000 items; the capacity doesn't matter. The point is that the more items an instance has, the better it is for my algorithm. Is there any huge data set available on the internet? Does anybody know? I need it urgently.
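
    If nothing suitable turns up, instances of this size are easy to generate. A sketch below; the uncorrelated weights/profits scheme and the capacity = half the total weight convention are common choices in the knapsack literature, not requirements:

        import random

        def knapsack_instance(n=1000, max_coeff=1000, seed=0):
            # Uncorrelated instance: weights and profits drawn independently.
            rng = random.Random(seed)
            weights = [rng.randint(1, max_coeff) for _ in range(n)]
            profits = [rng.randint(1, max_coeff) for _ in range(n)]
            capacity = sum(weights) // 2
            return weights, profits, capacity

        weights, profits, capacity = knapsack_instance(n=1000)
        print(len(weights), capacity)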

    Read the article

  • Finding patterns in puzzle games

    - by José Joel.
    I was wondering which algorithms are most commonly used for finding patterns in puzzle games built on grids of cells. I know it depends on many factors, like the kind of patterns you want to detect or the rules of the game, but I wanted to know which algorithms are most commonly applied to this kind of problem. For example, games like Columns, Bejeweled, even Tetris. I also want to know whether detecting patterns by "brute force" (like scanning the whole grid trying to find three adjacent cells of the same color) is significantly worse than using particular algorithms on very small grids, like 4x4 for example (and again, I know that depends on the kind of game and the rules...). Which data structures are commonly used in this kind of game?
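
    For scale, the brute-force scan is tiny: checking every horizontal and vertical run of length 3 on an R x C grid is O(R*C) work, which is negligible on a 4x4 (or even a typical 8x8 match-3 board). A sketch of that scan:

        def find_matches(grid, run=3):
            # Report (orientation, row, col) for each run of `run` equal cells.
            matches = []
            rows, cols = len(grid), len(grid[0])
            for r in range(rows):
                for c in range(cols - run + 1):
                    if all(grid[r][c + i] == grid[r][c] for i in range(1, run)):
                        matches.append(('row', r, c))
            for c in range(cols):
                for r in range(rows - run + 1):
                    if all(grid[r + i][c] == grid[r][c] for i in range(1, run)):
                        matches.append(('col', r, c))
            return matches

        board = [
            ['R', 'G', 'B', 'B'],
            ['R', 'B', 'B', 'B'],
            ['R', 'G', 'R', 'G'],
            ['G', 'R', 'G', 'R'],
        ]
        print(find_matches(board))  # [('row', 1, 1), ('col', 0, 0)]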

    Read the article

  • How to store visited states in iterative deepening / depth limited search?

    - by colinfang
    Update: I'm searching for the first solution only.

    For a normal depth-first search it is simple: just use a hash set.

        bool DFS(currentState) {
            if (myHashSet.Contains(currentState))
                return false;
            myHashSet.Add(currentState);
            if (IsSolution(currentState))
                return true;
            foreach (var nextState in GetNextStates(currentState))
                if (DFS(nextState))
                    return true;
            return false;
        }

    However, when the search becomes depth-limited, I cannot simply do this:

        bool DFS(currentState, maxDepth) {
            if (maxDepth == 0)
                return false;
            if (myHashSet.Contains(currentState))
                return false;
            myHashSet.Add(currentState);
            if (IsSolution(currentState))
                return true;
            foreach (var nextState in GetNextStates(currentState))
                if (DFS(nextState, maxDepth - 1))
                    return true;
            return false;
        }

    because then it is not going to do a complete search (in the sense of always finding a solution if one exists) within maxDepth. How should I fix it? Would it add more space complexity to the algorithm? Or does it simply not require memoizing the states at all?

    Update: for example, consider the graph A - B - C - D - E - A (a cycle), with F - G hanging off E, where G is the goal. Starting from state A, there is clearly a solution within depth 3 (A -> E -> F -> G). However, with my implementation at depth limit 4, if the direction of search happens to be A(0) -> B(1) -> C(2) -> D(3) -> E(4), then F(5) exceeds the depth and the search backtracks to A; but E is now marked visited, so the search ignores the direction A - E - F - G.
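
    One standard fix, sketched here in Python as an illustration rather than a definitive answer: instead of a plain visited set, remember the largest remaining depth budget with which each state has been reached, and re-expand a state when it is reached again with a larger budget. Space complexity stays proportional to the number of stored states; you keep an int per state instead of a bare set entry.

        def dls(state, depth, best_budget, next_states, is_goal):
            # Depth-limited DFS; best_budget[state] = largest remaining
            # depth with which `state` has been expanded so far.
            if depth < 0:
                return False
            if best_budget.get(state, -1) >= depth:
                return False        # already explored at least this deeply from here
            best_budget[state] = depth
            if is_goal(state):
                return True
            return any(dls(s, depth - 1, best_budget, next_states, is_goal)
                       for s in next_states(state))

        # The graph from the question: cycle A-B-C-D-E-A, with F-G hanging off E.
        graph = {'A': ['B', 'E'], 'B': ['C'], 'C': ['D'], 'D': ['E'],
                 'E': ['A', 'F'], 'F': ['G'], 'G': []}
        print(dls('A', 3, {}, graph.__getitem__, lambda s: s == 'G'))  # True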

    Read the article

  • C++ AI Design Question

    - by disney
    Hi, I am currently writing a bot for an MMORPG, but I am stuck trying to figure out how to implement this nicely. The design problem is related to casting the character's spells in the correct order. Here is a simple example of what I need to achieve; it's not about the casting itself, but about doing it in the correct order. I would know how to simply cast them randomly, by checking which skill has not yet been cast, but casting them in the right order, as shown in the GUI, I don't. Note: the number of skills may differ; it's not always 3, but 10 at most.

    Character <foobar> has 3 skills:

        Skill 1: Name (random1), cooldown (1000 ms), cast duration (500 ms)
        Skill 2: Name (random2), cooldown (1500 ms), cast duration (700 ms)
        Skill 3: Name (random3), cooldown (2000 ms), cast duration (900 ms)

    I don't really know how I could implement this; if anyone has some thoughts, feel free to share. I know that most people don't like the idea of cheating in games. I don't like it either, nor am I actually playing the game, but it's an interesting field for me. Thank you.
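
    A sketch of one way to structure this (in Python rather than C++, just to show the idea): keep the skills in their GUI order, track when each comes off cooldown, and always cast the first ready skill in list order; while a cast is in progress, nothing else fires.

        import time

        class Skill:
            def __init__(self, name, cooldown, cast_time):
                self.name = name
                self.cooldown = cooldown      # seconds until re-usable
                self.cast_time = cast_time    # seconds the cast blocks
                self.ready_at = 0.0

        def run_rotation(skills, duration=8.0):
            # Priority = position in the list (the GUI order).
            end = time.monotonic() + duration
            while time.monotonic() < end:
                now = time.monotonic()
                for skill in skills:
                    if now >= skill.ready_at:
                        print('casting', skill.name)
                        time.sleep(skill.cast_time)               # casting blocks the bot
                        skill.ready_at = time.monotonic() + skill.cooldown
                        break
                else:
                    time.sleep(0.05)                              # nothing ready; idle a tick

        run_rotation([Skill('random1', 1.0, 0.5),
                      Skill('random2', 1.5, 0.7),
                      Skill('random3', 2.0, 0.9)])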

    Read the article

  • Approaches for Content-based Item Recommendations

    - by PartlyCloudy
    Hello, I'm currently developing an application in which I want to group similar items. Items (like videos) can be created by users, and their attributes can be altered or extended later (like new tags). Instead of relying on users' preferences, as most collaborative filtering mechanisms do, I want to compare item similarity based on the items' attributes (like similar length, similar colors, similar set of tags, etc.). The computation is necessary for two main purposes: suggesting x similar items for a given item, and clustering into groups of similar items. My application so far follows an asynchronous design, and I want to decouple this clustering component as far as possible. The creation of new items or the addition of new attributes to an existing item will be advertised by publishing events, which the component can then consume. Computations can be provided best-effort and "snapshotted": I'm okay with the best result possible at a given point in time, although result quality should eventually increase. So I am now searching for appropriate algorithms to compute both similar items and clusters. An important constraint is scalability. Initially the application has to handle a few thousand items, but later millions of items might be possible as well. Of course, computations would then be executed on additional nodes, but the algorithm itself should scale. It would also be nice if the algorithm supported some kind of incremental mode for partial changes of the data. My initial thought, comparing each item with every other and storing the numerical similarity, sounds a little bit crude. Also, it requires n*(n-1)/2 entries for storing all similarities, and any change or new item will eventually cause n similarity computations.

    UPDATE tl;dr To clarify what I want, here is my targeted scenario:

        - Users generate entries (think of documents)
        - Users edit entry meta data (think of tags)

    And here is what my system should provide:

        - A list of similar entries to a given item, as recommendations
        - Clusters of similar entries

    Both calculations should be based on:

        - The meta data/attributes of entries (i.e., usage of similar tags)
        - Thus, the distance of two entries using appropriate metrics
        - NOT on user votes, preferences, or actions (unlike collaborative filtering)

    Although users may create entries and change attributes, the computation should take into account only the items and their attributes, not the users associated with them (just like a system where only items and no users exist). Ideally, the algorithm should also support:

        - permanent changes of the attributes of an entry
        - incremental computation of similar entries/clusters on changes
        - scaling
        - something better than a simple distance table, if possible (because of the O(n²) space complexity)
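
    For the attribute/tag side, the usual baseline is to vectorize items (e.g., TF-IDF over tags) and compare them with cosine similarity; approximate nearest-neighbor indexes or locality-sensitive hashing are then the standard route around the full O(n²) table at scale. A baseline sketch assuming scikit-learn, with made-up tag strings:

        from sklearn.feature_extraction.text import TfidfVectorizer
        from sklearn.metrics.pairwise import cosine_similarity

        # Each item's tags flattened into one string (hypothetical data).
        items = ["beach sunset hd nature",
                 "sunset beach timelapse",
                 "cat funny compilation"]
        vectors = TfidfVectorizer().fit_transform(items)
        sim = cosine_similarity(vectors)
        print(sim.round(2))   # items 0 and 1 share tags, so sim[0][1] is high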

    Read the article

  • Is it possible to "learn" a regular expression from user-provided examples?

    - by DR
    Is it possible to "learn" a regular expression from user-provided examples? To clarify: I do not want to learn regular expressions. I want to create a program which "learns" a regular expression from examples which are interactively provided by a user, perhaps by selecting parts of a text or selecting begin and end markers. Is it possible? Are there algorithms, keywords, etc. which I can Google for? EDIT: Thank you for the answers, but I'm not interested in tools which provide this feature. I'm looking for theoretical information, like papers, tutorials, source code, names of algorithms, so I can create something myself.
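
    Keywords worth searching for: grammar induction and regular language inference, e.g., Angluin's L* algorithm (learning regular languages from queries and counterexamples) and RPNI (inference from positive and negative samples). Just to make the problem concrete, here is a deliberately naive per-position generalizer for equal-length examples; real inference algorithms work on automata instead:

        import re

        def generalize(examples):
            # Naive template induction: generalize each character position
            # across examples (equal-length examples only; zip truncates).
            pattern = ''
            for chars in zip(*examples):
                if all(c.isdigit() for c in chars):
                    pattern += r'\d'
                elif all(c.isalpha() for c in chars):
                    pattern += '[A-Za-z]'
                elif len(set(chars)) == 1:
                    pattern += re.escape(chars[0])
                else:
                    pattern += '.'
            return pattern

        p = generalize(['AB-123', 'XY-987'])
        print(p)                                 # e.g. [A-Za-z][A-Za-z]\-\d\d\d
        print(bool(re.fullmatch(p, 'QZ-555')))   # True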

    Read the article

< Previous Page | 34 35 36 37 38 39 40 41 42 43 44 45  | Next Page >