Search Results

Search found 25519 results on 1021 pages for 'virtual machine'.


  • A simple explanation of Naive Bayes Classification

    - by Jaggerjack
    I am finding it hard to understand how Naive Bayes works, and I was wondering if someone could explain it with a simple step-by-step process in English. I understand that it turns counts of occurrences into probabilities, but I have no idea how the training data is related to the actual dataset. Please explain what role the training set plays. Here is a very simple fruit example (bananas and the like):

        Training set:
            round-red   round-orange   oblong-yellow   round-red

        Dataset:
            round-red   round-orange   round-red      round-orange   oblong-yellow
            round-red   round-orange   oblong-yellow  oblong-yellow  round-red
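
    For reference, here is a minimal sketch of how a Naive Bayes classifier could be trained on labelled examples and then applied to new items. The fruit labels (apple, orange, banana) and the add-one smoothing are illustrative assumptions added for the example; the question's data does not name the classes:

        from collections import Counter, defaultdict

        # Hypothetical labelled training data: (shape, colour) -> fruit label.
        training = [
            (("round", "red"), "apple"),
            (("round", "orange"), "orange"),
            (("oblong", "yellow"), "banana"),
            (("round", "red"), "apple"),
        ]

        # Count class priors and per-class feature frequencies.
        class_counts = Counter(label for _, label in training)
        feature_counts = defaultdict(Counter)
        for features, label in training:
            feature_counts[label].update(features)

        def predict(features):
            """Pick the label with the highest P(label) * prod P(feature | label)."""
            best_label, best_score = None, 0.0
            total = sum(class_counts.values())
            for label, count in class_counts.items():
                score = count / total  # class prior from the training counts
                for f in features:
                    # Crude add-one smoothing so an unseen feature value
                    # does not zero out the whole score.
                    score *= (feature_counts[label][f] + 1) / (count + 2)
                if score > best_score:
                    best_label, best_score = label, score
            return best_label

        print(predict(("round", "red")))       # -> 'apple'
        print(predict(("oblong", "yellow")))   # -> 'banana'

    The training set is only used to build these counts; the classifier then scores every new item in the dataset against the counts, which is the relationship the question asks about.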

    Read the article

  • Searching for a programming platform with hot code swap

    - by Andreas
    I'm currently brainstorming the idea of upgrading a program while it is running. (Not while debugging; a "production" system.) One thing that is required for this is to actually submit the changed source code or compiled byte code into the running process. Pseudo code:

        var method = typeof(MyClass).GetMethod("Method1");
        // get it from a database (bytecode or source code)
        // SELECT content FROM methods WHERE id=? AND version=?
        var content = ...;
        method.SetContent(content);

    At first, I want the system to work without the complexity of object orientation. That leads to the following requirements:

        - change the source code or byte code of a function
        - drop functions
        - add new functions
        - change the signature of a function

    With .NET (and others) I could inject a class via an IoC container and could thus change the source code. But the loading would be cumbersome, because everything has to be in an Assembly or created via Emit. Maybe this would be easier with Java? The whole ClassLoader is replaceable, I think. With JavaScript I could achieve many of the goals: simply eval a new function (MyMethod_V25) and assign it to MyClass.prototype.MyMethod. I think one can also drop functions with delete. Which general-purpose platform can handle such things?
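
    As a point of comparison, here is a minimal sketch in Python of the kind of in-process swap described above. Python is simply one platform where this is easy; the class, method, and source text below are made up for the example:

        class MyClass:
            def method1(self):
                return "version 1"

        obj = MyClass()
        print(obj.method1())  # version 1

        # New source code, e.g. fetched from a database as text.
        new_source = """
        def method1(self):
            return "version 2"
        """

        # Compile the text and rebind the attribute on the class;
        # existing instances pick up the new behaviour immediately.
        namespace = {}
        exec(new_source, namespace)
        MyClass.method1 = namespace["method1"]

        print(obj.method1())  # version 2

        # Dropping a function is just deleting the attribute.
        del MyClass.method1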

    Read the article

  • How to figure out optimal C / Gamma parameters in libsvm?

    - by Cuga
    I'm using libsvm for multi-class classification of datasets with a large number of features/attributes (around 5,800 per item). I'd like to choose better parameters for C and Gamma than the defaults I am currently using. I've already tried running easy.py, but for the datasets I'm using the estimated time is near forever (I ran easy.py at 20, 50, 100, and 200 data samples and got a super-linear trend that projected my necessary runtime at years). Is there a way to arrive at better C and Gamma values than the defaults more quickly? I'm using the Java libraries, if that makes any difference.
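
    The usual shortcut is to run a coarse, logarithmically spaced grid on a random subsample first and only refine around the best cell afterwards. As an illustration (not libsvm's Java API), here is a sketch with scikit-learn's SVC, which wraps libsvm; the stand-in data, subsample size, and grid spacing are assumptions:

        import numpy as np
        from sklearn.svm import SVC
        from sklearn.model_selection import GridSearchCV

        # X, y stand in for your feature matrix and labels.
        rng = np.random.default_rng(0)
        X = rng.normal(size=(2000, 5800))
        y = rng.integers(0, 4, size=2000)

        # 1) Work on a random subsample so each fit is cheap.
        idx = rng.choice(len(X), size=500, replace=False)
        X_small, y_small = X[idx], y[idx]

        # 2) Coarse log2 grid over C and gamma.
        param_grid = {
            "C": [2 ** k for k in range(-5, 16, 4)],
            "gamma": [2 ** k for k in range(-15, 4, 4)],
        }
        search = GridSearchCV(SVC(kernel="rbf"), param_grid, cv=3, n_jobs=-1)
        search.fit(X_small, y_small)
        print(search.best_params_)

        # 3) Refine with a finer grid around search.best_params_ on more data.

    easy.py drives essentially the same log2 grid, just more densely and on the full dataset, which is where its runtime goes.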

    Read the article

  • Alright, I'm still stuck on this homework problem. C++

    - by Josh
    Okay, the past few days I have been trying to get some input on my programs. I decided to scrap them for the most part and try again, so once again I'm in need of help. The first program I'm trying to fix needs to show the sum of SEVEN numbers. I'm trying to change it so that I don't need the hard-coded mem[##] = #### instructions; I just want the user to input the numbers, have the program run through my switch loop from there, and print some kind of output along the lines of "the sum is ...". Here's my code so far:

        #include <iostream>
        #include <iomanip>
        #include <ios>
        using namespace std;

        int main()
        {
            const int READ = 10;
            const int WRITE = 11;
            const int LOAD = 20;
            const int STORE = 21;
            const int ADD = 30;
            const int SUBTRACT = 31;
            const int DIVIDE = 32;
            const int MULTIPLY = 33;
            const int BRANCH = 40;
            const int BRANCHNEG = 41;
            const int BRANCHZERO = 42;
            const int HALT = 43;

            int mem[100] = {0}; // Making it 100, since the Simpletron contains a 100-word memory.
            int operation;      // Taking the rest of these variables straight out of the book, seeing as they were italicized.
            int operand;
            int accum = 0;      // The special register starts at 0.
            int counter;

            for (counter = 0; counter < 100; counter++)
                mem[counter] = 0;

            // This is for part a: it takes in positive values in a sentinel-controlled
            // loop and computes + prints their sum. Values from the example in the text.
            mem[0] = 1009;  mem[1] = 1109;  mem[2] = 2010;
            mem[3] = 2111;  mem[4] = 2011;  mem[5] = 3100;
            mem[6] = 2113;  mem[7] = 1113;  mem[8] = 4300;

            counter = 0; // Makes the counter start at 0.

            while (true)
            {
                operand   = mem[counter] % 100; // Splits each word into operand...
                operation = mem[counter] / 100; // ...and op code.

                switch (operation)
                {
                case READ:      // Reads a value into a word from a location. Enter -1 to exit.
                    cout << "\n Input a positive variable: ";
                    cin >> mem[operand];
                    counter++; break;
                case WRITE:     // Writes a word from a location.
                    cout << "\n\nThe content at location " << operand << " is " << mem[operand];
                    counter++; break;
                case LOAD:       accum = mem[operand];  counter++; break;
                case STORE:      mem[operand] = accum;  counter++; break;
                case ADD:        accum += mem[operand]; counter++; break;
                case SUBTRACT:   accum -= mem[operand]; counter++; break;
                case DIVIDE:     accum /= mem[operand]; counter++; break;
                case MULTIPLY:   accum *= mem[operand]; counter++; break;
                case BRANCH:     counter = operand; break;                                 // Branches to a location.
                case BRANCHNEG:  if (accum < 0)  counter = operand; else counter++; break; // Branches if acc < 0.
                case BRANCHZERO: if (accum == 0) counter = operand; else counter++; break; // Branches if acc == 0.
                case HALT:       break; // Program ends.
                }
            }
            return 0;
        }

    And part B:

        int main()
        {
            const int READ = 10;
            const int WRITE = 11;
            const int LOAD = 20;
            const int STORE = 21;
            const int ADD = 30;
            const int SUBTRACT = 31;
            const int DIVIDE = 32;
            const int MULTIPLY = 33;
            const int BRANCH = 40;
            const int BRANCHNEG = 41;
            const int BRANCHZERO = 41;
            const int HALT = 43;

            int mem[100] = {0};
            int operation;
            int operand;
            int accum = 0;
            int pos = 0;
            int j;

            mem[22] = 7; // loop 7 times
            mem[25] = 1; // increment by 1

            mem[0]  = 4306;  mem[1]  = 2303;  mem[2]  = 3402;  mem[3]  = 6410;
            mem[4]  = 3412;  mem[5]  = 2111;  mem[6]  = 2002;  mem[7]  = 2312;
            mem[8]  = 4210;  mem[9]  = 2109;  mem[10] = 4001;  mem[11] = 2015;
            mem[12] = 3212;  mem[13] = 2116;  mem[14] = 1101;  mem[15] = 1116;
            mem[16] = 4300;

            j = 0;
            while (true)
            {
                operand   = memory[j] % 100;
                operation = memory[j] / 100;

                switch (operation)
                {
                case 1: // read
                    cout << "\n enter #: ";
                    cin >> memory[operand];
                    break;
                case 2: // write
                    cout << "\n\nThe content at location " << operand << " is " << memory[operand];
                    break;
                case 3:  accum = memory[operand];  break;    // load
                case 4:  memory[operand] = accum;  break;    // store
                case 5:  accum += mem[operand];    break;    // add
                case 6:  accum -= memory[operand]; break;    // subtract
                case 7:  accum /= memory[operand]; break;    // divide
                case 8:  accum *= memory[operand]; break;    // multiply
                case 9:  j = operand; break;                 // branch to location
                case 10: break;                              // branches if acc < 0
                case 11: if (accum == 0) j = operand; break; // branches if acc == 0
                case 12: exit(0); break;                     // program ends
                }
                j++;
            }
            return 0;
        }

    Read the article

  • Prolog: Not executing code as expected.

    - by Louis
    Basically I am attempting to have an AI agent navigate a world based on given percepts. My issue is handling how the agent moves. I have created find_action/4 such that we pass in the percepts, the action, the current cell, and the direction the agent is facing. The entire code is at http://wesnoth.pastebin.com/kdNvzZ6Y ; my issue is mainly with lines 102 to 106. In its current form the code does not work and this find_action clause is skipped even when the agent is in fact facing right (I have verified this). The broken code is as follows:

        % If we are headed right, take a left turn
        find_action([_, _, _, _, _], Action, _, right) :-
            retractall(facing(_)),
            assert(facing(up)),
            Action = turnleft.

    However, after some experimentation I have concluded that the following works:

        % If we are headed right, take a left turn
        find_action([_, _, _, _, _], Action, _, _) :-
            facing(right),
            retractall(facing(_)),
            assert(facing(up)),
            Action = turnleft.

    I am not entirely sure why this is. I've also attempted to create several similar find_action clauses, each checking a different direction using the facing(_) form, but swipl does not like this and throws an error. Any help would be greatly appreciated.

    Read the article

  • Classification: Dealing with Abstain/Rejected Class

    - by abner.ayala
    I am asking for your input/help on a classification problem; if anyone has references I can read to solve it even better, please share them. I have a classification problem with four discrete and very well separated classes. However, my input is continuous and arrives at a high frequency (50 Hz), since it is a real-time problem. In my figure, the circles represent the clusters of the classes, the blue line is the decision boundary, and Class 5 is the neutral/resting "do nothing" class; this is the rejected class. The problem is that when I move from one class to another I trigger a lot of false positives during the transition movements, since the movement is clearly non-linear. For example, every time I move from class 5 (the neutral class) to class 1, I first see a lot of 3's before reaching class 1. Ideally, I want my decision boundary to be like the one in my picture, where the rejected class (Class 5) has a higher decision threshold than the other classes, to avoid misclassification during transitions. I am currently implementing the algorithm in Matlab using naive Bayes, kNN, and SVM classifiers. Question: What is the best/common way to handle an abstain/rejected class? Should I use fuzzy logic or a loss function, and should I include the resting cluster in the training set?
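
    One common way to handle a rejected class is a reject option: predict one of the four classes only when the classifier's posterior probability clears a threshold, and fall back to the neutral class otherwise. A minimal sketch follows; the threshold value, the stand-in data, and the use of scikit-learn's GaussianNB (rather than the Matlab implementations mentioned above) are assumptions for illustration:

        import numpy as np
        from sklearn.naive_bayes import GaussianNB

        REJECT_CLASS = 5      # the neutral/resting class
        THRESHOLD = 0.9       # illustrative; tune on held-out transition data

        # Stand-in training data: X are 50 Hz feature vectors, y in {1, 2, 3, 4}.
        rng = np.random.default_rng(0)
        X = np.vstack([rng.normal(loc=c, size=(100, 3)) for c in range(1, 5)])
        y = np.repeat([1, 2, 3, 4], 100)

        clf = GaussianNB().fit(X, y)

        def classify_with_reject(sample):
            """Return a class only if the model is confident, else the reject class."""
            probs = clf.predict_proba(sample.reshape(1, -1))[0]
            if probs.max() < THRESHOLD:
                return REJECT_CLASS          # abstain during ambiguous transitions
            return clf.classes_[probs.argmax()]

        print(classify_with_reject(np.array([1.0, 1.0, 1.0])))   # likely class 1
        print(classify_with_reject(np.array([2.5, 2.5, 2.5])))   # likely rejected

    In practice a short temporal filter, for example a majority vote over the last few 50 Hz frames, is often combined with the threshold to suppress the spurious 3's during transitions.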

    Read the article

  • Why does GetClusterShape return null when the cluster specification was retrieved through the GetClusteredShapes method?

    - by Markus Olsson
    Suppose I have a Virtual Earth shape layer called shapeLayer1 (my creative energy is apparently at an all-time low). When I call the GetClusteredShapes method I get an array of VEClusterSpecification objects that represent each and every one of my currently visible clusters; no problem there. But when I call the GetClusterShape() method it returns null... null! Why on earth would it do that? I used Firebug to confirm that the private variable of the VEClusterSpecification that is supposed to hold a reference to the shape is indeed null, so it's not the method itself that's causing the problem. Some have suggested that this is actually documented behavior:

        Returns null if a VEClusterSpecification object was returned from the
        VEShapeLayer.GetClusteredShapes Method

    But the current MSDN documentation for the VEShape class says:

        Returns if a VEClusterSpecification object was returned from the
        VEShapeLayer.GetClusteredShapes Method

    Is this a bug or a feature? Are there any known workarounds or (if it is a bug) some plan for when they are going to fix it?

    Read the article

  • Rails: How to test state_machine?

    - by petRUShka
    Please, help me. I'm confused. I know how to write state-driven behavior for a model, but I don't know what I should write in the specs. My model.rb file looks like this:

        class Ratification < ActiveRecord::Base
          belongs_to :user
          attr_protected :status_events

          state_machine :status, :initial => :boss do
            state :boss
            state :owner
            state :declarant
            state :done

            event :approve do
              transition :boss => :owner, :owner => :done
            end

            event :divert do
              transition [:boss, :owner] => :declarant
            end

            event :repeat do
              transition :declarant => :boss
            end
          end
        end

    I use the state_machine gem. Please point me in the right direction.

    Read the article

  • How much time does grid.py take to run?

    - by trinity
    Hello all, I am using libsvm for binary classification. I wanted to try grid.py, as it is said to improve results. I ran this script for five files in separate terminals, and the script has been running for more than 12 hours. This is the state of my 5 terminals now:

        [root@localhost tools]# python grid.py sarts_nonarts_feat.txt > grid_arts.txt
        Warning: empty z range [61.3997:61.3997], adjusting to [60.7857:62.0137]
        line 2: warning: Cannot contour non grid data. Please use "set dgrid3d".
        Warning: empty z range [61.3997:61.3997], adjusting to [60.7857:62.0137]
        line 4: warning: Cannot contour non grid data. Please use "set dgrid3d".

        [root@localhost tools]# python grid.py sgames_nongames_feat.txt > grid_games.txt
        Warning: empty z range [64.5867:64.5867], adjusting to [63.9408:65.2326]
        line 2: warning: Cannot contour non grid data. Please use "set dgrid3d".
        Warning: empty z range [64.5867:64.5867], adjusting to [63.9408:65.2326]
        line 4: warning: Cannot contour non grid data. Please use "set dgrid3d".

        [root@localhost tools]# python grid.py sref_nonref_feat.txt > grid_ref.txt
        Warning: empty z range [62.4602:62.4602], adjusting to [61.8356:63.0848]
        line 2: warning: Cannot contour non grid data. Please use "set dgrid3d".
        Warning: empty z range [62.4602:62.4602], adjusting to [61.8356:63.0848]
        line 4: warning: Cannot contour non grid data. Please use "set dgrid3d".

        [root@localhost tools]# python grid.py sbiz_nonbiz_feat.txt > grid_biz.txt
        Warning: empty z range [67.9762:67.9762], adjusting to [67.2964:68.656]
        line 2: warning: Cannot contour non grid data. Please use "set dgrid3d".
        Warning: empty z range [67.9762:67.9762], adjusting to [67.2964:68.656]
        line 4: warning: Cannot contour non grid data. Please use "set dgrid3d".

        [root@localhost tools]# python grid.py snews_nonnews_feat.txt > grid_news.txt
        Wrong input format at line 494
        Traceback (most recent call last):
          File "grid.py", line 223, in run
            if rate is None: raise "get no rate"
        TypeError: exceptions must be classes or instances, not str

    I had redirected the outputs to files, but those files contain nothing so far. The following files were created:

        sbiz_nonbiz_feat.txt.out        sbiz_nonbiz_feat.txt.png
        sarts_nonarts_feat.txt.out      sarts_nonarts_feat.txt.png
        sgames_nongames_feat.txt.out    sgames_nongames_feat.txt.png
        sref_nonref_feat.txt.out        sref_nonref_feat.txt.png
        snews_nonnews_feat.txt.out      (empty)

    There is just one line of information in the .out files, and the .png files are gnuplot plots, but I don't understand what the plots/warnings above convey. Should I re-run them? Can anyone please tell me how much time this script might take if each input file contains about 144,000 lines? Thanks and regards

    Read the article

  • Cocoa app not launching on build & go but launching manually

    - by Matt S.
    I have quite the interesting problem. Yesterday my program worked perfectly, but today I'm getting exc_bad_access when I hit Build and Go; yet if I launch the app from the build folder it launches perfectly and nothing seems wrong. The last bunch of lines from the debugger are:

        #0  0xffff07c2 in __memcpy
        #1  0x969f7961 in CFStringGetBytes
        #2  0x96a491b9 in CFStringCreateMutableCopy
        #3  0x991270cc in -[NSCFString mutableCopyWithZone:]
        #4  0x96a5572a in -[NSObject(NSObject) mutableCopy]
        #5  0x9913e6c7 in -[NSString stringByReplacingOccurrencesOfString:withString:options:range:]
        #6  0x9913e62f in -[NSString stringByReplacingOccurrencesOfString:withString:]
        #7  0x99181ad0 in -[NSScanner(NSDecimalNumberScanning) scanDecimal:]
        #8  0x991ce038 in -[NSDecimalNumberPlaceholder initWithString:locale:]
        #9  0x991cde75 in -[NSDecimalNumberPlaceholder initWithString:]
        #10 0x991ce44a in +[NSDecimalNumber decimalNumberWithString:]

    Why did my app work perfectly yesterday but not today?

    Read the article

  • How useful is Turing completeness? Are neural nets Turing complete?

    - by Albert
    While reading some papers about the Turing completeness of recurrent neural nets (for example: "Turing computability with neural nets", Hava T. Siegelmann and Eduardo D. Sontag, 1991), I got the feeling that the proof given there was not really that practical. For example, the referenced paper needs a neural network whose neuron activations have infinite precision (to reliably represent any rational number). Other proofs need a neural network of infinite size. Clearly, that is not really practical.

    But I started to wonder whether it makes sense at all to ask for Turing completeness. By the strict definition, no computer system today is Turing complete because none of them can simulate the infinite tape. Interestingly, programming language specifications most often leave open whether the language is Turing complete or not. It all boils down to the question of whether they will always be able to allocate more memory and whether the function call stack size is infinite. Most specifications don't really say. Of course all available implementations are limited here, so all practical implementations of programming languages are not Turing complete. So what you can say is that all computer systems are just as powerful as finite state machines and no more. And that brings me to the question: how useful is the term "Turing complete" at all?

    Back to neural nets: any practical implementation of a neural net (including our own brain) will not be able to represent an infinite number of states, i.e. by the strict definition of Turing completeness, it is not Turing complete. So does the question of whether neural nets are Turing complete make sense at all? The question of whether they are as powerful as finite state machines was answered much earlier (1954, by Minsky; the answer, of course: yes) and also seems easier to answer. That is, at least in theory, it was already proven that they are as powerful as any real computer.

    Read the article

  • Tag/Keyword based recommendation

    - by Hellnar
    Hello, I am wondering what algorithm would be a clever choice for a tag-driven e-commerce environment. Each item has several tags, for example:

        Item name: "Metallica - Black Album CD"
        Tags: "metallica", "black-album", "rock", "music"

    Each user has several tags and friends (other users) bound to them, for example:

        Username: "testguy"
        Interests: "python", "rock", "metal", "computer-science"
        Friends: "testguy2", "testguy3"

    I need to generate recommendations for such users by checking their interest tags in a reasonably sophisticated way. Ideas:

        - A hybrid recommendation algorithm could be used, since each user has friends (a mixture of collaborative and content-based recommendations).
        - Maybe similar users (peers) could be found via their tags and used to generate recommendations.
        - Maybe user tags could be matched directly against item tags.

    Any suggestion is welcome. Any Python-based library is also welcome, as I will be building this experimental engine in Python.
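
    As one possible starting point, here is a minimal sketch that scores items by tag overlap (Jaccard similarity) and adds a small boost for items bought by friends. The data layout, the purchase sets, and the 0.5 boost weight are illustrative assumptions rather than anything given in the question:

        items = {
            "metallica-black-album": {"metallica", "black-album", "rock", "music"},
            "python-cookbook":       {"python", "programming", "computer-science"},
            "gardening-gloves":      {"garden", "outdoor"},
        }

        users = {
            "testguy":  {"tags": {"python", "rock", "metal", "computer-science"},
                         "friends": {"testguy2", "testguy3"},
                         "purchases": set()},
            "testguy2": {"tags": {"rock", "metal"},
                         "friends": set(),
                         "purchases": {"metallica-black-album"}},
            "testguy3": {"tags": {"python"},
                         "friends": set(),
                         "purchases": {"python-cookbook"}},
        }

        def jaccard(a, b):
            return len(a & b) / len(a | b) if a | b else 0.0

        def recommend(username, top_n=3):
            user = users[username]
            scores = {}
            for item, tags in items.items():
                score = jaccard(user["tags"], tags)            # content-based match
                bought_by_friends = sum(item in users[f]["purchases"]
                                        for f in user["friends"])
                score += 0.5 * bought_by_friends               # collaborative boost
                scores[item] = score
            return sorted(scores, key=scores.get, reverse=True)[:top_n]

        print(recommend("testguy"))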

    Read the article

  • Are there programs that iteratively write new programs?

    - by chris
    For about a year I have been thinking about writing a program that writes programs. This would primarily be a playful exercise that might teach me some new concepts. My inspiration came from negentropy and the ability of order to emerge from chaos and new chaos to arise out of order in infinite succession.

    To be more specific, the program would start by writing a short random string. If the string compiles, the program will log it for later comparison. If the string does not compile, the program will try to rewrite it until it does compile. As more strings (mini 'useless' programs) are logged, they can be parsed for similarities and used to generate a grammar. This grammar can then be drawn on to write more strings that have a higher probability of compiling than purely random strings.

    This is obviously more than a little silly, but I thought it would be fun to try and grow a program like this. And as a byproduct I get a bunch of unique programs that I can visualize and call art. I'll probably write this in Ruby due to its simple syntax and dynamic compilation, and then visualize it in Processing using ruby-processing.

    What I would like to know is: Is there a name for this type of programming? What currently exists in this field? Who are the primary contributors?

    BONUS! In what ways can I procedurally assign value to output programs beyond whether they compile (y/n)? I may want to extend the functionality of this program to generate a program based on parameters, but I want the program to define those parameters by running the programs that compile and assigning meaning to their output. This question is probably more involved than is reasonable for a bonus, but if you can think of a simple way to get something like this done in less than 23 lines or one hyperlink, please toss it into your response.

    I know that this is not quite metaprogramming, and from the little I know of AI and generative algorithms, they are usually more goal-oriented than what I am thinking of. What would be optimal is a program that continually rewrites and improves itself so I don't have to ^_^
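
    A minimal sketch of the generate-and-check loop described above, written in Python rather than the Ruby the question plans to use; the token set, string lengths, and attempt cap are arbitrary choices:

        import random

        TOKENS = ["x", "y", "1", "2", "+", "*", "(", ")", " ", "=", "\n"]

        def random_program(length):
            """Generate a short random token string as a candidate 'program'."""
            return "".join(random.choice(TOKENS) for _ in range(length))

        def compiles(source):
            """Check whether the string is at least syntactically valid Python."""
            try:
                compile(source, "<candidate>", "exec")
                return True
            except SyntaxError:
                return False

        survivors = []
        attempts = 0
        while len(survivors) < 5 and attempts < 100_000:
            attempts += 1
            candidate = random_program(random.randint(3, 10))
            if compiles(candidate):
                survivors.append(candidate)

        # These logged strings are the raw material for inferring a grammar later.
        for s in survivors:
            print(repr(s))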

    Read the article

  • Candidate Elimination Question---Please help!

    - by leon
    Hi, I am working through a question on the Candidate Elimination algorithm and I am a little confused by the general boundary G. Here is the example; I got G and S up to the fourth training example, but I am not sure about the last one.

        Sunny, Warm, Normal, Strong, Warm, Same   -> EnjoySport = yes
        Sunny, Warm, High,   Strong, Warm, Same   -> EnjoySport = yes
        Rainy, Cold, High,   Strong, Warm, Change -> EnjoySport = no
        Sunny, Warm, High,   Strong, Cool, Change -> EnjoySport = yes
        Sunny, Warm, Normal, Weak,   Warm, Same   -> EnjoySport = no

    What I have so far is:

        S0:         {0, 0, 0, 0, 0, 0}
        S1:         {Sunny, Warm, Normal, Strong, Warm, Same}
        S2, S3:     {Sunny, Warm, ?, Strong, Warm, Same}
        S4:         {Sunny, Warm, ?, Strong, ?, ?}

        G0, G1, G2: {?, ?, ?, ?, ?, ?}
        G3:         {<Sunny, ?, ?, ?, ?, ?>, <?, Warm, ?, ?, ?, ?>, <?, ?, ?, ?, ?, Same>}
        G4:         {<Sunny, ?, ?, ?, ?, ?>, <?, Warm, ?, ?, ?, ?>}

    What would G5 be? Is it empty, {}? Or is it {<?, ?, ?, Strong, ?, ?>}? Thanks

    Read the article

  • Naive Bayes matlab, row classification

    - by Jungle Boogie
    How do you classify a row of separate cells in Matlab? At the moment I can classify single columns like so:

        training = [1; 0; -1; -2; 4; 0; 1];   % sample data
        target_class = ['posi'; 'zero'; 'negi'; 'negi'; 'posi'; 'zero'; 'posi'];
        % target_class holds the class of each training row; here 'posi', 'zero'
        % and 'negi' are the classes for the given training data.

        % Training and testing the classifier
        test = 10*randn(25, 1);               % random numbers, just for testing
        class = classify(test, training, target_class, 'diaglinear')
        % classify() labels the test data based on the training data, using a
        % naive Bayes-style (diagonal linear discriminant) classifier.

    Unlike the above, I want to classify whole rows, e.g.:

                  A   B   C
        Row A  |  1 | 1 | 1  =  a house
        Row B  |  1 | 2 | 1  =  a garden

    Can anyone help? Here is a code example from the Matlab site:

        nb = NaiveBayes.fit(training, class)
        nb = NaiveBayes.fit(..., 'param1', val1, 'param2', val2, ...)

    I don't understand what param1 is or what val1 etc. should be.
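
    The question is about Matlab, but as a language-neutral illustration of the idea that each row is one sample and each column one feature, here is a sketch using scikit-learn's GaussianNB; the extra training rows beyond the two shown above are made up:

        import numpy as np
        from sklearn.naive_bayes import GaussianNB

        # Each row is one observation, each column one feature (A, B, C).
        training = np.array([
            [1, 1, 1],   # a house
            [1, 2, 1],   # a garden
            [2, 1, 1],   # a house
            [1, 3, 2],   # a garden
        ])
        target_class = ["house", "garden", "house", "garden"]

        nb = GaussianNB().fit(training, target_class)

        test = np.array([[1, 1, 2],
                         [1, 3, 1]])
        print(nb.predict(test))   # e.g. ['house' 'garden']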

    Read the article

  • Rails: Multi-Step New User Signup Form (FSM?)

    - by neezer
    I've read the "Create Multi-Step Wizard" in Advanced Rails Recipes. I've also read and re-read the documentation for the updated FSM I'm using called Workflow, and looked here and here. The Advanced Rails Recipe focuses on records (quizzes) that already exist, and doesn't cover creating new ones. The Workflow docs don't cover any code for controllers or views, so I've no idea what to do with all this model magic, and the last two links barely touch on implementation either. From the aforementioned resources, I have a good understanding of what a FSM in Rails is and how to play with it in the console or IRB, but I've got very little direction or understanding how to implement one into my Rails app. What I would like is this: a simple, multi-step user signup process. Step 1: User enters in their critical details (with validations). Step 2: User enters in their search criteria, for their profile (with validations). Step 3: User agrees to the Terms of Service (with validations). Step 4: User is greeted by a confirmation page, including a link that takes them to their newly created account. I'd also like full navigation between the steps and full capture (saves to the database) with each transition. Can someone please give me a clear implementation of something similar to this? I would LOVE an example app that includes a multi-step signup process where I can look at the code (FULL source code--models AND controllers and views) under the hood, but I've been unable to find anything like that. Any guidance would be appreciated! EDIT: Please help make this a Railscast! Ryan B. (a.k.a. Superman), if you're reading this, we need you! http://feedback.railscasts.com/forums/77-episode-suggestions/suggestions/35553-multi-step-forms-and-wizards

    Read the article

  • Ngram IDF smoothing

    - by adi92
    I am trying to use IDF scores to find interesting phrases in my pretty huge corpus of documents. I basically need something like Amazon's Statistically Improbable Phrases, i.e. phrases that distinguish a document from all the others. The problem I am running into is that some (3,4)-grams in my data that have a super-high IDF actually consist of component unigrams and bigrams with really low IDF. For example, "you've never tried" has a very high IDF, while each of the component unigrams has a very low IDF. I need to come up with a function that takes the document frequencies of an n-gram and all of its component (n-k)-grams and returns a more meaningful measure of how much the phrase distinguishes the parent document from the rest. If I were dealing with probabilities, I would try interpolation or backoff models. I am not sure what assumptions/intuitions those models rely on to perform well, and so how well they would do for IDF scores. Does anybody have any better ideas?
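
    Since the question already mentions interpolation, here is one possible shape such a function could take: interpolate the n-gram's own IDF with the average IDF of its components, so a rare phrase made entirely of very common words is pulled down. The lambda weight, the mean aggregation, and the toy document frequencies are assumptions, not an established method:

        import math

        def idf(df, n_docs):
            """Standard smoothed IDF from a document frequency."""
            return math.log(n_docs / (1 + df))

        def interpolated_idf(ngram_df, component_dfs, n_docs, lam=0.5):
            """
            Mix the n-gram's own IDF with the mean IDF of its component
            (n-k)-grams; lam controls how much the full phrase dominates.
            """
            own = idf(ngram_df, n_docs)
            components = sum(idf(df, n_docs) for df in component_dfs) / len(component_dfs)
            return lam * own + (1 - lam) * components

        n_docs = 100_000  # toy corpus size

        # Rare phrase built from very common words: high own IDF, low component IDF.
        print(interpolated_idf(ngram_df=3, component_dfs=[40_000, 55_000, 30_000], n_docs=n_docs))

        # Rare phrase built from moderately rare words: stays high.
        print(interpolated_idf(ngram_df=3, component_dfs=[300, 800, 150], n_docs=n_docs))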

    Read the article

  • How do I create a good evaluation function for a new board game?

    - by A. Rex
    I write programs to play board game variants sometimes. The basic strategy is standard alpha-beta pruning or similar searches, sometimes augmented by the usual approaches to endgames or openings. I've mostly played around with chess variants, so when it comes time to pick my evaluation function, I use a basic chess evaluation function. However, now I am writing a program to play a completely new board game. How do I choose a good, or even decent, evaluation function? The main challenges are that the same pieces are always on the board, so a usual material function won't change based on position, and the game has been played fewer than a thousand times or so, so humans don't necessarily play it well enough yet to give insight. (PS. I considered a MoGo approach, but random games aren't likely to terminate.) Any ideas? Game details: The game is played on a 10-by-10 board with a fixed six pieces per side. The pieces have certain movement rules and interact in certain ways, but no piece is ever captured. The goal of the game is to have enough of your pieces on certain special squares of the board. The goal of the computer program is to provide a player which is competitive with, or better than, current human players.
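
    A common starting point when no domain knowledge exists yet is a linear combination of cheap hand-crafted features whose weights are tuned later, by hand or by self-play. The sketch below only shows the shape of such a function; the board encoding, the feature set, the goal squares, and the weights are placeholders, since the game's actual rules are not given:

        # Hypothetical goal squares on the 10x10 board.
        SPECIAL_SQUARES = {(4, 4), (4, 5), (5, 4), (5, 5)}

        def evaluate(board, player):
            """board: dict mapping (row, col) -> owner (+1 / -1); higher is better for player."""
            opponent = -player
            score = 0.0
            score += 10.0 * occupancy(board, player) - 10.0 * occupancy(board, opponent)
            score += 1.0 * mobility(board, player) - 1.0 * mobility(board, opponent)
            score += 0.1 * centralisation(board, player)
            return score

        def occupancy(board, player):
            """How many of the player's pieces sit on the special squares."""
            return sum(1 for sq, owner in board.items()
                       if owner == player and sq in SPECIAL_SQUARES)

        def mobility(board, player):
            """Stands in for counting legal moves with the real move generator."""
            return sum(1 for sq, owner in board.items() if owner == player)  # placeholder

        def centralisation(board, player):
            """Mild preference for pieces near the middle of the 10x10 board."""
            return -sum(abs(r - 4.5) + abs(c - 4.5)
                        for (r, c), owner in board.items() if owner == player)

        example_board = {(4, 4): 1, (0, 0): 1, (9, 9): -1, (5, 5): -1}
        print(evaluate(example_board, player=1))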

    Read the article

  • WITH_OBJECT_HEADERS enabled GC from Dalvik?

    - by Wonil
    Hello, as far as I know the Dalvik VM does not support generational GC by default. However, I found a "WITH_OBJECT_HEADERS" compilation flag in HeapInternal.h which could be related to generational GC:

        typedef struct DvmHeapChunk {
        #if WITH_OBJECT_HEADERS
            u4 header;
            const Object *parent;
            const Object *parentOld;
            const Object *markFinger;
            const Object *markFingerOld;
            u2 birthGeneration;
            u2 markCount;
            u2 scanCount;
            u2 oldMarkGeneration;
            u2 markGeneration;
            u2 oldScanGeneration;
            u2 scanGeneration;
        #endif

    Has anyone tried to build Dalvik with this option enabled? Do you know anything about generational GC support in Dalvik? Regards, Wonil.

    Read the article

  • Java text classification problem

    - by yox
    Hello, I have a set of Book objects. The class Book is defined as follows:

        class Book {
            String title;
            ArrayList<tags> taglist;
        }

    where title is the title of the book, for example "Javascript for Dummies", and taglist is its list of tags, for example: Javascript, jquery, "web dev", ...

    As I said, I have a set of books on different topics: IT, BIOLOGY, HISTORY, ... Each book has a title and a set of tags describing it. I have to classify those books automatically into separate sets by topic, for example:

        IT BOOKS:
            Java for Dummies
            Javascript for Dummies
            Learn Flash in 30 Days
            C++ Programming

        HISTORY BOOKS:
            World Wars
            America in 1960
            Martin Luther King's Life

        BIOLOGY BOOKS:
            ...

    Do you know a classification algorithm/method for this kind of problem? One solution would be to use an external API to determine the category of the text, but the problem is that the books are in different languages: French, Spanish, English, ...
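
    One way to use the tags directly is to treat each tag as a token and train a small classifier on books that have already been labelled by hand; that hand-labelled seed set is an assumption beyond the question. A minimal sketch with scikit-learn:

        from sklearn.feature_extraction.text import CountVectorizer
        from sklearn.naive_bayes import MultinomialNB
        from sklearn.pipeline import make_pipeline

        # Hand-labelled seed books: tag strings and their topics (made-up examples).
        labelled_books = [
            ("javascript jquery web-dev", "IT"),
            ("java programming beginner", "IT"),
            ("world-war history europe", "HISTORY"),
            ("civil-rights history usa", "HISTORY"),
            ("cells genetics biology", "BIOLOGY"),
        ]
        tag_strings = [tags for tags, _ in labelled_books]
        topics = [topic for _, topic in labelled_books]

        # Tags are treated as whole tokens, so the language of the title matters less.
        model = make_pipeline(CountVectorizer(token_pattern=r"[^ ]+"), MultinomialNB())
        model.fit(tag_strings, topics)

        print(model.predict(["flash web-dev programming",        # likely IT
                             "martin-luther-king usa history"]))  # likely HISTORY

    Because the tags carry the signal rather than the title text, the mixed French/Spanish/English titles matter less, as long as the tags themselves are reasonably consistent across books.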

    Read the article

  • Automated Legal Processing

    - by Chris S
    Will it ever be possible to make legal systems quantifiable enough to process with computer algorithms? What technologies would have to be in place before this is possible? Are there any existing technologies that are already trying to accomplish this? Out of curiosity, I downloaded the text for laws in my local municipality, and tried applying some simple NLP tricks to extract rules from sentences. I had mixed results. Some sentences were very explicit (e.g. "Cars may not be left in the park overnight"), but other sentences seemed hopelessly vague (e.g. "The council's purpose is to ensure the well-being of the community"). I apologize if this is too open-ended a topic, but I've often wondered what society would look like if legal systems were based on less ambiguous language. Lawyers, and the legal process in general, are so expensive because they have to manually process a complex set of rules codified in ambiguous legal texts. If this system could be represented in software, this huge expense could potentially be eliminated, making the legal system more accessible for everyone.

    Read the article

  • Neural Networks test cases

    - by Betamoo
    Can increasing the number of test cases for precision neural networks lead to problems (like over-fitting, for example)? Is it always good to increase the number of test cases? Will that always lead to convergence? If not, what are the cases where it does not? An example would be best. Thanks,

    Read the article
