Search Results

Search found 13429 results on 538 pages for 'model reject'.

Page 72/538

  • Using Hidden Markov Model for designing AI mp3 player

    - by Casper Slynge
    Hey guys. I'm working on an assignment where I want to design an AI for an mp3 player. The AI must be trained and designed using an HMM. The mp3 player should adapt to its user by analyzing incoming biological sensor data, and from that data it will choose a genre for the next song. The assignment provides 14 samples of data; each sample consists of Heart Rate, Respiration, Skin Conductivity, Activity and finally the output genre. The 14 samples are listed below, just to give you an impression of what I'm talking about.

    Sample  HR      RSP     SC      Activity  Genre
    S1      Medium  Low     High    Low       Rock
    S2      High    Low     Medium  High      Rock
    S3      High    High    Medium  Low       Classic
    S4      High    Medium  Low     Medium    Classic
    S5      Medium  Medium  Low     Low       Classic
    S6      Medium  Low     High    High      Rock
    S7      Medium  High    Medium  Low       Classic
    S8      High    Medium  High    Low       Rock
    S9      High    High    Low     Low       Classic
    S10     Medium  Medium  Medium  Low       Classic
    S11     Medium  Medium  High    High      Rock
    S12     Low     Medium  Medium  High      Classic
    S13     Medium  High    Low     Low       Classic
    S14     High    Low     Medium  High      Rock

    My experience with HMMs is quite limited, so my question is whether I have the right angle on the assignment. I have three different states for each sensor: Low, Medium, High. There are two observations/output symbols: Rock, Classic. In my own opinion, the start probabilities are the weighted factors for a Low, Medium or High state of the Heart Rate. So the ideal solution is that the AI learns these 14 samples. When a user's sensor input is received, the AI compares the combination of states for all four sensors with the memorized samples. If a matching combination exists, the AI chooses that genre; if not, it chooses a genre according to the weighted transition probabilities, while simultaneously updating the transition probabilities with the new data. Is this the right approach, or am I missing something? Is there another way to determine the output probabilities (I read about maximum likelihood estimation by EM, but I don't understand the concept)? Best regards, Casper
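
    To make the lookup-plus-fallback idea above concrete, here is a minimal sketch (not a full HMM): it memorizes the training samples and, when no exact match exists, falls back to maximum-likelihood estimates obtained by simply counting genre frequencies per sensor value. The sample tuples are taken from the table above; every function and variable name is illustrative.

    from collections import Counter, defaultdict

    # (HR, RSP, SC, Activity) -> Genre; only the first three samples are shown,
    # the remaining rows from the table would be added the same way.
    samples = [
        (("Medium", "Low", "High", "Low"), "Rock"),
        (("High", "Low", "Medium", "High"), "Rock"),
        (("High", "High", "Medium", "Low"), "Classic"),
    ]

    def train(samples):
        lookup = {features: genre for features, genre in samples}
        # Maximum-likelihood estimate of P(genre | sensor i = value):
        # just the relative frequency observed in the training data.
        counts = defaultdict(Counter)
        for features, genre in samples:
            for i, value in enumerate(features):
                counts[(i, value)][genre] += 1
        return lookup, counts

    def classify(features, lookup, counts):
        if features in lookup:                 # memorized combination
            return lookup[features]
        score = Counter()                      # otherwise vote by per-sensor counts
        for i, value in enumerate(features):
            score.update(counts[(i, value)])
        return score.most_common(1)[0][0]

    lookup, counts = train(samples)
    print(classify(("Medium", "Low", "High", "High"), lookup, counts))  # -> Rock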

    Read the article

  • Updating Model with entity collection

    - by jean27
    There is a group of entities named Book and Magazine which inherit from the abstract class PublishedItem. PublishedItem has these properties: ID, Name, Publisher, List of Authors, List of Genres. The Book entity has an ISBN property and the Magazine entity has an ISSN property. I just want to ask how I can update a book's or magazine's list of genres or list of authors.

    Read the article

  • When to use which mysql partitioning model

    - by Industrial
    Ok guys, just starting out with partitioning some tables in MySQL. There are a couple of different ways of partitioning described, but what I can't find is a more practical approach: which type of data does each way of partitioning have the best effect on? Or doesn't it really matter?

    Read the article

  • Perceptron Classification and Model Training

    - by jake pinedo
    I'm having an issue with understanding how the Perceptron algorithm works and implementing it.

    cLabel = 0  # class label: corresponds directly with featureVectors and tweets
    for m in range(miters):
        for point in featureVectors:
            margin = answers[cLabel] * self.dot_product(point, w)
            if margin <= 0:
                modifier = float(lrate) * float(answers[cLabel])
                modifiedPoint = point
                for x in modifiedPoint:
                    if x != 0:
                        x *= modifier
                newWeight = [modifiedPoint[i] + w[i] for i in range(len(w))]
                w = newWeight
    self._learnedWeight = w

    This is what I've implemented so far, where I have a list of class labels in answers, a learning rate (lrate) and a list of feature vectors. I run it for the number of iterations in miters and then get the final weight at the end. However, I'm not sure what to do with this weight. I've trained the perceptron and now I have to classify a set of tweets, but I don't know how to do that.

    EDIT: Specifically, in my classify method I go through and create a feature vector for the data I'm given, which isn't a problem at all, and then I take the self._learnedWeight that I get from the earlier training code and compute the dot product of the vector and the weight. My weight and feature vectors include a bias in the 0th term of the list, so I'm including that. I then check whether the dot product is less than or equal to 0: if so, I classify it as -1; otherwise, it's 1. However, this doesn't seem to be working correctly.
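
    As a point of comparison, here is a minimal sketch of the classification step described in the EDIT above, assuming (as the question states) that the bias sits in index 0 of both the weight vector and each feature vector; the names are illustrative, not the question's actual code.

    def dot_product(a, b):
        return sum(x * y for x, y in zip(a, b))

    def classify(feature_vector, learned_weight):
        # sign of w . x decides the class; a result <= 0 maps to -1, otherwise +1
        return -1 if dot_product(feature_vector, learned_weight) <= 0 else 1

    # e.g. bias weight 0.5 in index 0, feature vector with constant 1 in index 0
    print(classify([1, 2.0, 0.0], [0.5, -1.0, 3.0]))  # -> -1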

    Read the article

  • Mocking using 'traditional' Record/Replay vs Moq model

    - by fung
    I'm new to mocks and am deciding on a mock framework. The Moq home page quotes: "Currently, it's the only mocking library that goes against the generalized and somewhat unintuitive (especially for novices) Record/Replay approach from all other frameworks." Can anyone explain simply what the Record/Replay approach is and how Moq differs? What are the pros and cons of each, especially from the point of view of deciding on a framework? Thanks.

    Read the article

  • Many-To-Many dimensional model

    - by Mevdiven
    Folks, I have a dimension table called DIM_FILE which holds information about the files we receive from customers. Each file has detail records, which constitute my FACT table, CUST_DETAIL. In the main process, a file goes through several stages, and each stage tags a status to it. Long story short, I have a many-to-many relationship. Any ideas around star schema dimensional modeling? A customer record belongs to only a single file, and a file can have multiple statuses.

    FACT
    ----
    CustID
    FileID
    AmountDue

    DIM_FILE
    --------
    FileID
    FileName
    DateReceived

    FILE_STATUS
    -----------
    FileID
    StatusDateTime
    StatusCode

    Read the article

  • Ejb Update Model from selectOneMenu

    - by Ignacio
    Can anyone please tell me why the following isn't working?

    <h:selectOneMenu value="#{modelosController.selected.idMarca}">
        <f:selectItems value="#{marcasController.itemsAvailableSelectOne}" />
    </h:selectOneMenu>
    <h:commandButton action="#{modelosController.createByMarcas}" value="Buscar" />

    public String createByMarcas() {
        current = new Modelos(selectedItemIndex, current.getIdMarca());
        items = (DataModel) ejbFacade.findByMarcas();
        getPagination().getItemsCount();
        recreateModel();
        return "List";
    }

    public List<Modelos> findByMarcas() {
        CriteriaQuery cq = (CriteriaQuery) em.createNamedQuery("SELECT m FROM Modelos WHERE m.id_marca :id_marca");
        cq.select(cq.from(Modelos.class));
        return em.createQuery(cq).getResultList();
    }

    Thank you very much!

    Read the article

  • Reloading the model of a TTTableViewController

    - by user341338
    My problem is that I have a Register Controller and a Login Controller. The Login screen displays either a login screen or a logout screen depending on whether a user is logged in. Now when a user registers, does not close the app, and then goes to the Login screen, it will still display the login screen, although a user is already logged in. This is because the screen is created when the application loads and does not change afterwards. I tried doing this:

    - (id)init {
        if (self = [super init]) {
            [self invalidateModel];
            [self reload];

    but that did not work, since it is only called on the first init. Then I tried:

    - (void)viewDidLoad {
        [self invalidateModel];
        [self reload];
    }

    But that method had the same problem. Then I found this method:

    - (TTNavigationMode)navigationModeForURL:(NSString*)URL;

    with the following options:

    typedef enum {
        TTNavigationModeNone,
        TTNavigationModeCreate,    // a new view controller is created each time
        TTNavigationModeShare,     // a new view controller is created, cached and re-used
        TTNavigationModeModal,     // a new view controller is created and presented modally
        TTNavigationModeExternal,  // an external app will be opened
    } TTNavigationMode;

    It seems like TTNavigationModeCreate would be the right thing to use, but I have no clue how to use it. Any help? Thanks.

    Read the article

  • Why can't I save my model with a generic relation twice in Django?

    - by e-satis
    I have a model TrackedItem with a generic relation linking it to any model it is supposed to track. If I do this:

    t = TrackedItem(content_object=MyModel)
    t.save()
    t.save()

    I get:

    IntegrityError: (1062, "Duplicate entry '1' for key 'PRIMARY'")

    Indeed, the first save has created an entry with "1" as the PK. But the second save should not insert, it should update. How am I supposed to update a model I can't save twice? With an ordinary model I can save as much as I want.
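
    For reference, a minimal sketch of what the model described above might look like, using current Django import paths; the question does not show the definition, so the field names here are assumptions, not the asker's actual code.

    from django.contrib.contenttypes.fields import GenericForeignKey
    from django.contrib.contenttypes.models import ContentType
    from django.db import models

    class TrackedItem(models.Model):
        # standard generic-relation plumbing: content type + object id + accessor
        content_type = models.ForeignKey(ContentType, on_delete=models.CASCADE)
        object_id = models.PositiveIntegerField()
        content_object = GenericForeignKey("content_type", "object_id")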

    Read the article

  • Java paint speed relative to color model

    - by Jon
    I have a BufferedImage with an IndexColorModel. I need to paint that image onto the screen, but I've noticed that this is slow when using an IndexColorModel. However, if I run the BufferedImage through an identity affine transform, it creates an image with a DirectColorModel and the painting is significantly faster. Here's the code I'm using:

    AffineTransformOp identityOp = new AffineTransformOp(new AffineTransform(), AffineTransformOp.TYPE_BILINEAR);
    displayImage = identityOp.filter(displayImage, null);

    I have three questions:
    1. Why is painting slower with an IndexColorModel?
    2. Is there any way to speed up the painting of an IndexColorModel?
    3. If the answer to 2 is no, is this the most efficient way to convert from an IndexColorModel to a DirectColorModel? I've noticed that this conversion depends on the size of the image, and I'd like to remove that dependency.

    Thanks for the help

    Read the article

  • Database model for saving random boolean expressions

    - by zarko.susnjar
    I have expressions like this:

    (cat OR cats OR kitten OR kitty) AND (dog OR dogs) NOT (pigeon OR firefly)

    Does anyone have an idea how to design tables to save those? Before I got the request to support brackets, I limited the use of operators to avoid ambiguous situations. So only ANDs and NOTs, or only ORs, and I saved those in this manner:

    operators
    id | name
    1  | AND
    2  | OR
    3  | NOT

    keywords
    id | keyword
    1  | cat
    2  | dog
    3  | firefly

    expressions
    id | operator | keywordId
    1  | 0        | 1
    1  | 1        | 2
    1  | 3        | 3

    which was: cat AND dog NOT firefly. But now, I'm really puzzled...
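
    As a sanity check of the bracket-free scheme above, here is a small sketch that re-reads those flat rows back into "cat AND dog NOT firefly"; it deliberately does not handle the new parenthesised expressions, and all names are illustrative.

    operators = {1: "AND", 2: "OR", 3: "NOT"}
    keywords = {1: "cat", 2: "dog", 3: "firefly"}
    # (expressionId, operatorId, keywordId); operator 0 marks the first term
    expression_rows = [(1, 0, 1), (1, 1, 2), (1, 3, 3)]

    def render(rows):
        parts = []
        for _, op, kw in rows:
            if op:
                parts.append(operators[op])
            parts.append(keywords[kw])
        return " ".join(parts)

    print(render(expression_rows))  # -> cat AND dog NOT firefly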

    Read the article

  • Nginx. How do I reject request to unlisted ssl virtual server?

    - by Osw
    I have a wildcard SSL certificate and several subdomains on the same IP. Now I want my nginx to handle only the mentioned server names and drop the connection for all others, so that it looks like nginx is not running for unlisted server names (not responding, rejecting, dead, not a single byte in response). I do the following:

    ssl_certificate tls/domain.crt;
    ssl_certificate_key tls/domain.key;

    server {
        listen 1.2.3.4:443 ssl;
        server_name validname.domain.com;
        //
    }

    server {
        listen 1.2.3.4:443 ssl;
        server_name _;
        // deny all;
        // return 444;
        // return 404;
        //location {
        //    deny all;
        //}
    }

    I've tried almost everything in the last server block, but no success. I either get a valid response from a known virtual server or an error code. Please help.

    Read the article

  • Back Orders for ERP: data model references ?

    - by Patrick Honorez
    I have built an ERP using SQL Server as a back-end. These are the different types of client documents (there are also supplier docs):

    Order -- impact: BO
    Delivery Note (also used for returns, with negative quantity) -- impact: BO, Stock
    Invoice -- impact: accounting only
    Credit Note -- impact: accounting, BO

    I use a complex system of self joins (at the detail level) to find out the quantities in each OrderDetail that still have a backorder (BO). I'd like to simplify this using a [group] field that could be used through all the detail lines related to an original order. There are many difficult things to trace: a return of a product may be due to a defect and thus increase the BO, or it can be just a return, joined with a Credit Note, and then have no impact on BO. My question is: do you know of any really good reference (book, web) for this matter?

    Read the article

  • rails semi-complex STI with ancestry data model planning the routes and controllers

    - by ere
    I'm trying to figure out the best way to manage my controller(s) and models for a particular use case. I'm building a review system where a User may build a review of several distinct types with a polymorphic Reviewable:

    Country (has_many reviews & cities)
    Subdivision/State (optional, sometimes it doesn't exist, also reviewable, has_many cities)
    City (has places & reviews)
    Borough (optional, also reviewable, ex: Brooklyn)
    Neighborhood (optional & reviewable, ex: Williamsburg)
    Place (belongs to city)

    I'm also wondering about adding more complexity. I also want to include subdivisions occasionally, i.e. for the US I might add Texas, or for Germany, Bavaria, and have it be reviewable as well, but not every country has regions, and even those that do might never be reviewed. So it's not at all strict. I would like it to be as simple and flexible as possible. It'd be nice if the user could just land on one form and select either a city or a country, and then drill down using data from, say, Foursquare to find a particular place in a city and make a review. I'm really not sure which route I should take. For example, what happens if I have a Country and a City, and then I decide to add a Borough? Could I give places tags (e.g. Williamsburg, Brooklyn) that belong_to NY City, with the tags belonging to NY? Tags are more flexible, optionally describe which areas a place might be in, belong to a city, but also have places and are reviewable? So I'm looking for suggestions from anyone who's done something related. Using Rails 3.2 and Mongoid.

    Read the article

  • 'e-Commerce' scalable database model

    - by Ruben Trancoso
    I would like to understand database scalability, so I just listened to a talk about Habits of Highly Scalable Web Applications: http://techportal.ibuildings.com/2010/03/02/habits-of-highly-scalable-web-applications/ In it, the presenter mainly talks about relational database scalability. I have also read something about MapReduce and column-oriented tables, BigTable, Hypertable, etc., trying to understand which are the most up-to-date methods for scaling web application data. But the second group is hard for me to place. Does it serve as a transactional, reliable data store? Or is it just for large-scale access and processing, and to handle fine-grained operations will we always need to rely on RDBMSs? Could someone give a comprehensive landscape of those new technologies and how to use them?

    Read the article

  • Implementing Excel 2003 COM Add-in UDF in Asyc Programming model using C#(VS 2005)

    - by Venu
    Hi: I am trying to implement a UDF using an Excel COM Add-in (2003) with Visual Studio 2005 in C#. I would like to implement the UDF using async programming, because the UDF is a slow operation: its results are fetched from a server. As an illustration (not a real-world example), the following UDF works fine without any issue:

    public double mul(double number1, double number2)
    {
        return number1 * number2;
    }

    How can I do the same functionality in an async way? For example, I would like the UDF to return immediately and later, when the results are available from the server, update the desired cells.

    // This method returns immediately.
    public object mul(double number1, double number2)
    {
        return "calculating";
    }

    // This method of a worker thread will update the results.
    public void OnResultsAvailable(object result)
    {
        // Question: how should I update the cells that triggered the calculations above?
    }

    Constraints: I cannot use Excel RTD as I have to work with an existing codebase written using an Excel C# COM Add-in. Thanks for the help. -Venu

    Read the article

  • which iphone data model to choose?

    - by Tronic
    I need to get some data from somewhere to put in a timeline. The data structure is like this:

    - Item
      - Name
      - Year
      - ShortInfo (mainly keywords and short texts)
      - LongInfo (much text, with videos/audios (URLs))

    A friend of mine told me I should use a plist and put all that stuff in there, but what about an SQLite database? Any advice? Regards

    Read the article

  • Using attr_accessible in a join model with has_many :through relationship

    - by Paulo Oliveira
    I have a USER that creates a COMPANY and becomes an EMPLOYEE in the process. The employees table has a :user_id and a :company_id.

    class User
      has_many :employees
      has_many :companies, :through => :employees

    class Employee
      belongs_to :user
      belongs_to :company
      attr_accessible :active

    class Company
      has_many :employees
      has_many :users, :through => :employees

    Pretty basic. But here's the thing: the resource EMPLOYEE has other attributes than its foreign keys, like the boolean :active. I would like to use attr_accessible, but this causes some problems. The attribute :user_id is set right, but :company_id is nil.

    @user.companies << Company.new(...)
    Employee id: 1, user_id: 1, company_id: nil

    So my question is: if :user_id is set right, despite not being attr_accessible, why isn't :company_id set right just the same? It shouldn't have to be attr_accessible. I'm using Rails 3.0.8, and have also tested with 3.0.7.

    Read the article

  • Ruby is already using the class name of my model

    - by Octopus Inc
    I'm making a forum application with various levels of authorization, one of which is a Monitor. I am doing this by extending my User class, and I plan on fine-tuning this with "-ship" classes (e.g. administratorship, authorship, moderatorship, etc.). Apparently the Monitor class is already part of Ruby's monitor mixin library. How do I keep my resource name without the collision?

    Read the article

  • How do I reject if exists? for non-nested attributes?

    - by GoodGets
    Currently my controller lets a user submit multiple "links" at a time. It collects them into an array, creates them for that user, and catches any errors for the user to go back and fix. How can I ignore the creation of any links that already exist for that user? I know that I can use validates_uniqueness_of with a scope for that user, but I'd rather just ignore their creation completely. Here's my controller:

    @links = params[:links].values.collect { |link| current_user.links.create(link) }.reject { |p| p.errors.empty? }

    Each link has a url, so I thought about checking whether that link.url already exists for that user, but wasn't really sure how, or where, to do that. Should I tack this onto my controller somehow? Or should it be a new method in the model, like in a before_validation callback? (Note: these "links" are not nested, but they do belong_to :user.) So I'd like to just be able to ignore the creation of these links if possible. For example, if a user submits 5 links but 2 of them already exist for him, then I'd like those 2 to be ignored while the other 3 are created. How should I go about doing this?

    Read the article

  • Why does Exchange 2003 silently reject emails with large attachments?

    - by Cypher
    Our environment: Exchange Server 2003 Standard, single instance, running on Windows Server 2003 Standard, configured to not send/receive mail with attachments larger than 10 MB. NDRs are not enabled.

    The issue: When an external sender sends an email with an attachment larger than 10 MB, Exchange, as configured, does not receive the message. However, the sender of that message does not receive any notification from his own mail server that the message could not be delivered due to attachment size. However, if an external user tries to send an email to a non-existent user, they do receive a message from their mail server indicating that the user does not exist. Why is that, and is there anything I can do about it? It would be nice if the sender received notification that the attachment size exceeds our limits and their message was never received.

    Update: The Exchange server has a SpamAssassin box in front of it... could that have something to do with it? Here is one of the last lines from SpamAssassin's logs when searching for my test e-mails:

    mail postfix/smtp[19133]: 2B80917758: to=, relay=10.0.0.8[10.0.0.8]:25, delay=4.3, delays=2.6/0/0/1.7, dsn=2.6.0, status=sent (250 2.6.0 Queued mail for delivery)

    My assumption is that SpamAssassin thinks the message is OK and is forwarding it on to Exchange.

    Update: I've verified that Exchange is receiving the message and generating an NDR. However, delivery of NDRs is disabled to prevent backscatter. Is there something I can do to get Exchange to send a bounce message to the sending mail server (or verify that a message is being sent) so the sending mail server can notify its sender of the bounce?

    Read the article
