Search Results

Search found 6697 results on 268 pages for 'e learning'.


  • Algorithm to generate numerical concept hierarchy

    - by Christophe Herreman
    I have a couple of numerical datasets that I need to create a concept hierarchy for. So far I have been doing this manually, by observing the data (and a corresponding line chart). Based on my intuition, I created some acceptable hierarchies. This seems like a task that can be automated. Does anyone know of an algorithm to generate a concept hierarchy for numerical data? To give an example, I have the following dataset:

      Bangladesh 521
      Brazil 8295
      Burma 446
      China 3259
      Congo 2952
      Egypt 2162
      Ethiopia 333
      France 46037
      Germany 44729
      India 1017
      Indonesia 2239
      Iran 4600
      Italy 38996
      Japan 38457
      Mexico 10200
      Nigeria 1401
      Pakistan 1022
      Philippines 1845
      Russia 11807
      South Africa 5685
      Thailand 4116
      Turkey 10479
      UK 43734
      US 47440
      Vietnam 1042

    for which I created the following hierarchy:

      LOWEST  (< 1000)
      LOW     (1000 - 2500)
      MEDIUM  (2501 - 7500)
      HIGH    (7501 - 30000)
      HIGHEST (> 30000)
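    One automated starting point, a minimal sketch rather than a definitive answer: equal-frequency (quantile) binning, which places cut points so each level holds roughly the same number of rows. The level names and the choice of five levels are taken from the hierarchy above; for breaks closer to hand-picked ones, 1-D k-means or Jenks natural breaks are the usual upgrades.

      import numpy as np

      values = [521, 8295, 446, 3259, 2952, 2162, 333, 46037, 44729, 1017,
                2239, 4600, 38996, 38457, 10200, 1401, 1022, 1845, 11807,
                5685, 4116, 10479, 43734, 47440, 1042]
      levels = ['LOWEST', 'LOW', 'MEDIUM', 'HIGH', 'HIGHEST']

      # Cut points at the 20th/40th/60th/80th percentiles give five
      # equal-frequency bins; searchsorted maps a value to its bin index.
      cuts = np.percentile(values, [20, 40, 60, 80])

      def level(v):
          return levels[int(np.searchsorted(cuts, v, side='right'))]

      for v in sorted(values):
          print(v, level(v))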

    Read the article

  • What problems have you solved using artificial neural networks?

    - by knorv
    I'd like to know about specific problems you, the SO reader, have solved using artificial neural network techniques, and what libraries/frameworks you used if you didn't roll your own. Questions: What problems have you used artificial neural networks to solve? What libraries/frameworks did you use? I'm looking for first-hand experiences, so please do not answer unless you have that.

    Read the article

  • SVM Classification - minimum number of input sets for each class

    - by Amol Joshi
    I'm trying to build an app to detect images which are advertisements on webpages. Once I detect those, I won't allow them to be displayed on the client side. From the help that I got here on Stack Overflow, I thought SVM was the best approach for my aim. So I have coded the SVM and an SMO myself. The dataset which I got from the UCI data repository has 3280 instances (link to dataset: http://archive.ics.uci.edu/ml/datasets/Internet+Advertisements), where around 400 of them are from the class representing advertisement images and the rest represent non-advertisement images. Right now I'm taking the first 2800 input sets and training the SVM. But after looking at the accuracy rate, I realised that most of those 2800 input sets are from the non-advertisement image class, so I'm getting very good accuracy for that class. So what can I do here? How many input sets should I give the SVM to train on, and how many of them for each class? Thanks. Cheers. (I made a new question because the context was different from my previous question: http://stackoverflow.com/questions/1991113/optimization-of-neural-network-input-data)
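    Two standard remedies worth noting here, neither from the original thread: draw the training set with a stratified split so both classes keep their overall proportions (taking the first 2800 rows inherits whatever ordering the file happens to have), and weight the rare class more heavily (stock libsvm exposes this as the -wi option to svm-train). A minimal pure-Python sketch of the stratified split; the function name and the 85% split are illustrative choices:

      import random
      from collections import defaultdict

      def stratified_split(instances, labels, train_fraction=0.85, seed=0):
          """Split so each class keeps its overall proportion in both halves."""
          rng = random.Random(seed)
          by_class = defaultdict(list)
          for x, y in zip(instances, labels):
              by_class[y].append(x)
          train, test = [], []
          for y, xs in by_class.items():
              rng.shuffle(xs)
              n = int(len(xs) * train_fraction)
              train += [(x, y) for x in xs[:n]]
              test += [(x, y) for x in xs[n:]]
          rng.shuffle(train)
          rng.shuffle(test)
          return train, test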

    Read the article

  • Nominal Attributes in LibSVM

    - by Chris S
    When creating a libsvm training file, how do you differentiate between a nominal attribute and a numeric attribute? I'm trying to encode certain nominal attributes as integers, but I want to ensure libsvm doesn't misinterpret them as numeric values. Unfortunately, libsvm's site seems to have very little documentation. Pentaho's docs seem to imply libsvm makes this distinction, but I'm still not clear how it's made.
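    For what it's worth, libsvm itself treats every feature as numeric; its beginners' guide recommends expanding an m-valued nominal attribute into m binary indicator features rather than encoding the categories as a single integer. A small sketch of that expansion (the names and category list are made up):

      def one_hot(value, categories):
          """Encode one nominal attribute as len(categories) 0/1 features."""
          return [1 if value == c else 0 for c in categories]

      colors = ['red', 'green', 'blue']
      print(one_hot('green', colors))  # -> [0, 1, 0]
      # In libsvm's sparse file format only the nonzero entry is written,
      # e.g. if features 4-6 are reserved for color, green becomes "5:1".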

    Read the article

  • What's the Easiest Way to Learn Programming?

    - by Chris
    If a friend of yours wanted to get into development and didn't have any experience, what would you suggest? What language/resources would you suggest for breaking into programming? With all of the technologies and buzzwords out there right now, where should one even start explaining this stuff to people?

    Read the article

  • Precomputed Kernels with LibSVM in Python

    - by Lyyli
    I've been searching the net for ~3 hours, but I couldn't find a solution yet. I want to give a precomputed kernel to libsvm and classify a dataset, but: How can I generate a precomputed kernel? (For example, what is the basic precomputed kernel for the Iris data?) In the libsvm documentation, it is stated that for precomputed kernels, the first element of each instance must be the ID. For example:

      samples = [[1, 0, 0, 0, 0], [2, 0, 1, 0, 1], [3, 0, 0, 1, 1], [4, 0, 1, 1, 2]]
      problem = svm_problem(labels, samples)
      param = svm_parameter(kernel_type=PRECOMPUTED)

    What is an ID? There are no further details on that. Can I assign IDs sequentially? Any libsvm help and an example of precomputed kernels would be really appreciated.
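    A sketch of both halves, under the assumption of a linear kernel (the simplest precomputed case) and with toy numbers standing in for the Iris data: a precomputed kernel is just the Gram matrix of pairwise kernel values, and as far as I can tell the ID is simply the 1-based serial number of the instance, exactly as in the docs' own example, so sequential assignment is right.

      import numpy as np

      # Toy stand-in for Iris: 4 samples, 3 features (values made up).
      X = np.array([[5.1, 3.5, 1.4],
                    [4.9, 3.0, 1.4],
                    [6.3, 3.3, 6.0],
                    [5.8, 2.7, 5.1]])
      labels = [1, 1, 2, 2]

      # Linear kernel: K[i][j] = X[i] . X[j] (the Gram matrix).
      K = X @ X.T

      # Prepend each row's 1-based serial number as the required ID,
      # producing the list-of-lists shape shown in the docs above.
      samples = [[i + 1] + row.tolist() for i, row in enumerate(K)]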

    Read the article

  • Forth: free video tutorials?

    - by Peter Mortensen
    Can you recommend any free Forth video tutorials (except for the following)? The only one I know of is Samuel A. Falvo's excellent "Over The Shoulder Episode 1: Text Preprocessing in Forth". MPEG. 102 MB. There are also videos from the annual Forth Day, but I don't consider those to be tutorials. (Unfortunately Forth is, like R, C, C++, Java, C#, D, COM, .NET, F# and Frontier, an unspecific search term. Search tip for Forth: qualify it with "ans", as in ANS Forth, the ANSI Forth standard.)

    Accumulated based on answers and other information:

    Introductions to Forth
      Forth. By Ben Stiglitz. At RubyConf 2008, Orlando, Florida, U.S.A. 13 min 35 secs. 32 MB. MP4.

    Advanced
      Over The Shoulder Episode 1: Text Preprocessing in Forth. By Samuel A. Falvo. 1 h 06 min 25 secs. 102 MB. MPEG.

    Read the article

  • What should programmers practice every day?

    - by Jacinda S
    Musicians practice scales, arpeggios, etc. every day before they begin playing "real" music. The top sports players spend time every day practicing fundamentals like dribbling before playing the "real" game. Are there fundamentals that programmers should practice every day before writing "real" code?

    Read the article

  • Where do I find a good explanation of Javascript-ese?

    - by tzenes
    I realize that title may require explanation. The language I first learned was C, and it shows in all my programs, even those not written in C. For example, when I first learned F#, I wrote my F# programs like C programs. It wasn't until someone explained the pipe operator and mapping with anonymous functions that I started to understand the F#-ese: how to write F# like an F# programmer and not a C programmer. Now I've written a little JavaScript, mostly basic stuff using jQuery, but I was hoping there was a good resource where I could learn to write JavaScript programs like a JavaScript programmer.
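    To make the "-ese" idea concrete with an analogy in another language (a sketch of my own, not from the question): the same shift away from C-ese shows up in Python, where an index-driven loop gives way to a comprehension.

      nums = [1, 2, 3, 4]

      # C-ese: index-driven loop.
      squares = []
      for i in range(len(nums)):
          squares.append(nums[i] * nums[i])

      # Python-ese: a comprehension, the analogue of F#'s pipe-and-map style.
      squares = [n * n for n in nums]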

    Read the article

  • Java equivalent to VS solution file

    - by Chris
    I'm a C# guy trying to learn Java. I understand the syntax and the basic architecture of the Java platform, and have no problem doing smaller projects myself, but I'd really like to be able to download some open source projects to learn from the work of others. However, I'm running into a stumbling block that I can't seem to find any information on.

    When I download an open source .NET project, I can open the .sln file with Visual Studio and everything just loads. Sure, there's occasionally a missing reference or something, but there's really very little configuration required to get things going. I'm not sensing the same ease of use with Java. I'm using Eclipse at the moment, and it feels like for every project I have to create a brand new Eclipse project using "create from existing source", and almost nothing compiles properly without significant reconfiguration. In the case of web projects, it's even worse, because Eclipse doesn't appear to support creating a web project from existing source. I have to create a standard Java project from source, then apparently modify the project file to include the bindings for the web toolkit stuff to work properly.

    Assuming I want to be able to contribute to a project later on, I shouldn't have to be making such drastic changes to the file structure to get my IDE to a workable state. What am I missing?

    Read the article

  • building a backend for generating webquests with rails

    - by buk
    Hello, I want to learn Rails, and as a project to learn it I came across webquests. What a webquest is is clearly written here, and this is an example of what a webquest looks like. I started with script/generate nifty_scaffold introduction index and repeated this for every section, like task, process, evaluation and so on. But I don't think that's the right way, because I get a lot of code for the same thing. On the other side, I am more flexible in designing views or controllers, instead of having only one controller for all pages. I am not asking here to get code. I am asking how to "build" such a backend, where you can click on "New Webquest", a form comes up and you can enter all the text that belongs to the topic. Maybe I can add some drawings later. I hope someone can show me how to do that, or post some links or some rtfms :D Regards, buk

    Read the article

  • Calculating Nearest Match to Mean/Stddev Pair With LibSVM

    - by Chris S
    I'm new to SVMs, and I'm trying to use the Python interface to libsvm to classify a sample containing a mean and stddev. However, I'm getting nonsensical results. Is this task inappropriate for SVMs, or is there an error in my use of libsvm? Below is the simple Python script I'm using to test:

      #!/usr/bin/env python
      # Simple classifier test.
      # Adapted from the svm_test.py file included in the standard libsvm distribution.
      from collections import defaultdict
      from svm import *

      # Define our sparse data formatted training and testing sets.
      labels = [1, 2, 3, 4]
      train = [  # key: 0=mean, 1=stddev
          {0: 2.5, 1: 3.5},
          {0: 5, 1: 1.2},
          {0: 7, 1: 3.3},
          {0: 10.3, 1: 0.3},
      ]
      problem = svm_problem(labels, train)
      test = [
          ({0: 3, 1: 3.11}, 1),
          ({0: 7.3, 1: 3.1}, 3),
          ({0: 7, 1: 3.3}, 3),
          ({0: 9.8, 1: 0.5}, 4),
      ]

      # Test classifiers.
      kernels = [LINEAR, POLY, RBF]
      kname = ['linear', 'polynomial', 'rbf']

      correct = defaultdict(int)
      for kn, kt in zip(kname, kernels):
          print kt
          param = svm_parameter(kernel_type=kt, C=10, probability=1)
          model = svm_model(problem, param)
          for test_sample, correct_label in test:
              pred_label, pred_probability = model.predict_probability(test_sample)
              correct[kn] += pred_label == correct_label

      # Show results.
      print '-' * 80
      print 'Accuracy:'
      for kn, correct_count in correct.iteritems():
          print '\t', kn, '%.6f (%i of %i)' % (
              correct_count / float(len(test)), correct_count, len(test))

    The domain seems fairly simple. I'd expect that if it's trained to know a mean of 2.5 means label 1, then when it sees a mean of 2.4, it should return label 1 as the most likely classification. However, each kernel has an accuracy of 0%. Why is this? On a side note, is there a way to hide all the verbose training output dumped by libsvm in the terminal? I've searched libsvm's docs and code, but I can't find any way to turn it off.
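    A hedged guess, not from the original thread: libsvm estimates class probabilities by running an internal cross-validation over the training data, which has almost nothing to work with when each class has exactly one example, so the probability-based label can come out degenerate even when the decision function is fine. Before blaming the kernels, it's worth comparing against plain predict(), which skips the probability model (same old-style bindings as the script above; model and test are the variables defined there):

      # Compare the raw decision-function prediction with the
      # probability-based one from predict_probability().
      for test_sample, correct_label in test:
          print model.predict(test_sample), correct_label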

    Read the article

  • Issues in Convergence of Sequential minimal optimization for SVM

    - by Amol Joshi
    I have been working on support vector machines for about 2 months now. I have coded the SVM myself, and for the optimization problem of the SVM I have used Sequential Minimal Optimization (SMO) as described by John Platt. Right now I am in the phase where I am going to grid search to find the optimal C value for my dataset. (Please find details of my project application and dataset here: http://stackoverflow.com/questions/2284059/svm-classification-minimum-number-of-input-sets-for-each-class) I have successfully checked my custom implemented SVM's accuracy for C values ranging from 2^0 to 2^6. But now I am having some issues regarding the convergence of the SMO for C >= 128. I have tried to find the alpha values for C=128, and it takes a long time before it actually converges and successfully gives the alpha values. The time taken for the SMO to converge is about 5 hours for C=100. This is huge, I think, because SMO is supposed to be fast, though I am getting good accuracy. I am stuck right now because I cannot test the accuracy for higher values of C. I am displaying the number of alphas changed in every pass of SMO and getting 10, 13, 8... alphas changing continuously. The KKT conditions assure convergence, so what is going on here? Please note that my implementation is working fine for C <= 100 with good accuracy, though the execution time is long. Please give me inputs on this issue. Thank you, and cheers.

    Read the article

  • What is the best way to generate fake data for a classification problem?

    - by Berkay
    I'm working on a project and I have a subset of a user's keystroke time data. This means that the user makes n attempts, and I will use this recorded attempt time data in various kinds of classification algorithms, so that future login attempts can be verified as being made by the user himself rather than some other person. (Simply put, this is biometrics.) I have 3 different time measurements of the user login attempt process; of course this is a subset of the infinite possible data. So far it is an easy classification problem. I decided to use WEKA, but as far as I understand I have to create some fake data to feed the classification algorithm. Can I use some optimization algorithms? Or is there any way to create this fake data so as to get minimal false positives? Thanks
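    One common approach, offered as a sketch rather than the answer: synthesize impostor (negative) samples by perturbing the genuine timings with random noise, so the classifier sees both classes. The noise scale below is an assumption to tune: too small and the classes overlap, too large and the problem becomes trivially easy. (A one-class approach, such as a one-class SVM, would sidestep the need for fake negatives entirely.)

      import random

      def make_fake_attempts(real_attempts, n_fake=100, noise_scale=0.35, seed=0):
          """Perturb genuine timing vectors with Gaussian noise to get impostors."""
          rng = random.Random(seed)
          fakes = []
          for _ in range(n_fake):
              base = rng.choice(real_attempts)
              fakes.append([max(0.0, t * (1 + rng.gauss(0, noise_scale)))
                            for t in base])
          return fakes

      # Hypothetical hold/flight times (seconds) for three genuine attempts.
      genuine = [[0.12, 0.31, 0.08, 0.22],
                 [0.11, 0.29, 0.09, 0.25],
                 [0.13, 0.33, 0.07, 0.21]]
      impostors = make_fake_attempts(genuine)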

    Read the article

  • learn dbms online

    - by siva
    Hi, I want to learn DBMS concepts, including writing complex SQL, normalisation and other such topics. Can anyone please help me find some useful online resources?

    Read the article

  • Bag of words Classification

    - by AlgoMan
    I need to find training words and their classifications, simple classifications such as Sports, Entertainment and Politics, things like that. Where can I find words with their classifications? I know many universities have done bag-of-words classification. Is there any repository of training examples?

    Read the article

  • What really happens when I use varchar(10) in the sqlite command-line shell?

    - by romandas
    I'm messing around with SQLite for the first time by working through some of the SQLite documentation. In particular, I'm using Command Line Shell For SQLite and the SoupToNuts SQLite Tutorial on Sourceforge. According to the SQLite datatype documentation, there are only 5 datatypes in SQLite. However, in the two tutorial documents above, I see the authors use commands such as create table tbl1(one varchar(10), two smallint); or create table t1 (t1key INTEGER PRIMARY KEY,data TEXT,num double,timeEnter DATE); which contain datatypes that aren't listed by SQLite, yet these commands work just fine. Additionally, when I ran .dump to see the SQL statements, these datatype specifications are preserved. So, what gives? Does SQLite keep a reference for any datatype specified in the SQL yet convert it behind the scenes to one of its 5 datatypes? Or is there something else I'm missing?
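    A quick way to see the behavior from Python's sqlite3 module (a sketch; the point is that a declared type like varchar(10) is kept in the schema but only sets a type affinity, and the length limit is not enforced):

      import sqlite3

      con = sqlite3.connect(":memory:")
      con.execute("CREATE TABLE tbl1(one varchar(10), two smallint)")

      # varchar(10) maps to TEXT affinity, smallint to INTEGER affinity;
      # the (10) length limit is silently ignored.
      con.execute("INSERT INTO tbl1 VALUES (?, ?)",
                  ("far longer than ten characters", 12345))

      for row in con.execute("SELECT one, typeof(one), two, typeof(two) FROM tbl1"):
          print(row)
      # -> ('far longer than ten characters', 'text', 12345, 'integer')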

    Read the article
