Search Results

Search found 5104 results on 205 pages for 'evolutionary algorithm'.

Page 181/205

  • 2D Ball Collisions with Corners

    - by Aaron
    I'm trying to write a 2D simulation of a ball that bounces off of fixed vertical and horizontal walls. Simulating collisions with the faces of the walls was pretty simple--just negate the X-velocity for a vertical wall or the Y-velocity for a horizontal wall. The problem is that the ball can also collide with the corners of the walls, where a horizontal wall meets a vertical wall. I have already figured out how to detect when a collision with a corner is occurring. My question is how the ball should react to this collision--that is, how its X and Y velocities will change as a result.

    Here's a list of what I already know or know how to find:

    * The X and Y coordinates of the ball's center during the frame when a collision is detected
    * The X and Y components of the ball's velocity
    * The X and Y coordinates of the corner
    * The angle between the ball's center and the corner
    * The angle in which the ball is traveling just before the collision
    * The amount that the ball is overlapping the corner when the collision is detected

    I'm guessing that it's best to pretend that the corner is an infinitely small circle, so I can treat a collision between the ball and that circle as if the ball were colliding with a wall that runs tangent to the circle at the point of collision. It seems to me that all I need to do is rotate the coordinate system to line up with this imaginary wall, reverse the X component of the ball's velocity under this system, and rotate the coordinates back to the original system. The problem is that I have no idea how to program this.

    By the way, this is an ideal simulation. I'm not taking anything like friction or the ball's rotation into account. I'm using Objective-C, but I'd really just like a general algorithm or some advice. Many thanks if anyone can help!
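
    For reference, a minimal sketch of the reflection step described above (plain Python rather than Objective-C, assuming a perfectly elastic bounce; the function name is made up): treat the corner as a point, take the unit normal from the corner to the ball's centre, and reflect the velocity about it. No explicit rotation of the coordinate system is needed.

        import math

        def bounce_off_corner(corner_x, corner_y, ball_x, ball_y, vx, vy):
            # Unit normal pointing from the corner to the ball's centre
            nx, ny = ball_x - corner_x, ball_y - corner_y
            length = math.hypot(nx, ny)
            nx, ny = nx / length, ny / length
            # Only reflect if the ball is actually moving toward the corner
            dot = vx * nx + vy * ny
            if dot >= 0:
                return vx, vy
            # Reflect the velocity about the normal: v' = v - 2 (v . n) n
            return vx - 2.0 * dot * nx, vy - 2.0 * dot * ny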

    Read the article

  • Java Access Token PKCS11 Not found Provider

    - by oracleruiz
    Hello, I'm trying to access the keystore on my smart card from Java, using the PKCS11 implementation from OpenSC: http://www.opensc-project.org/opensc

    File windows.cnf:

        name=dnie
        library=C:\WINDOWS\system32\opensc-pkcs11.dll

    Java code:

        String configName = "windows.cnf";
        String PIN = "####";
        Provider p = new sun.security.pkcs11.SunPKCS11(configName);
        Security.addProvider(p);
        KeyStore keyStore = KeyStore.getInstance("PKCS11", "SunPKCS11-dnie"); // =)(=
        char[] pin = PIN.toCharArray();
        keyStore.load(null, pin);

    When execution reaches the line marked =)(=, it throws the following exception:

        java.security.KeyStoreException: PKCS11 not found
            at java.security.KeyStore.getInstance(KeyStore.java:635)
            at ObtenerDatos.LeerDatos(ObtenerDatos.java:52)
            at ObtenerDatos.obtenerNombre(ObtenerDatos.java:19)
            at main.main(main.java:27)
        Caused by: java.security.NoSuchAlgorithmException: no such algorithm: PKCS11 for provider SunPKCS11-dnie
            at sun.security.jca.GetInstance.getService(GetInstance.java:70)
            at sun.security.jca.GetInstance.getInstance(GetInstance.java:190)
            at java.security.Security.getImpl(Security.java:662)
            at java.security.KeyStore.getInstance(KeyStore.java:632)

    I think the problem is the provider name "SunPKCS11-dnie", but I don't know what to put there. I have tried a lot of combinations... Can anyone help me?

    Read the article

  • Automate the signature of the update.rdf manifest for my firefox extension

    - by streetpc
    Hello, I'm developing a Firefox extension and I'd like to provide automatic updates to my beta-testers (who are not tech-savvy). Unfortunately, the update server doesn't provide HTTPS. According to the Extension Developer Guide on signing updates, I have to sign my update.rdf and provide an encoded public key in the install.rdf.

    There is the McCoy tool to do all of this, but it is an interactive GUI tool and I'd like to automate the extension packaging using an Ant script (as this is part of a much bigger process). I can't find a more precise description of what happens when the update.rdf manifest is signed than the one below, and the McCoy source is an awful lot of JavaScript. The doc says:

        The add-on author creates a public/private RSA cryptographic key pair. The public part of the key is DER encoded and then base 64 encoded and added to the add-on's install.rdf as an updateKey entry. (...) Roughly speaking the update information is converted to a string, then hashed using a sha512 hashing algorithm and this hash is signed using the private key. The resultant data is DER encoded then base 64 encoded for inclusion in the update.rdf as a signature entry.

    I don't know much about DER encoding, but it seems like it needs some parameters. So would anyone know either:

    * the full algorithm to sign the update.rdf and install.rdf using a predefined keypair, or
    * a scriptable alternative to McCoy (whether a command-line tool like asn1coding will suffice), or
    * a good/simple developer tutorial on DER encoding

    Read the article

  • boost lambda::bind return type selection

    - by psaghelyi
    I would like to call a member through lambda::bind. Unfortunately I have two members with the same name but different return types. Is there a way to help lambda::bind deduce the right return type for a member function call?

        #include <vector>
        #include <iostream>
        #include <algorithm>
        #include <boost/bind.hpp>
        #include <boost/lambda/lambda.hpp>
        #include <boost/lambda/bind.hpp>

        using namespace std;
        using namespace boost;

        struct A
        {
            A (const string & name) : m_name(name) {}
            string & name () { return m_name; }
            const string & name () const { return m_name; }
            string m_name;
        };

        vector<A> av;

        int main ()
        {
            av.push_back (A ("some name"));

            // compiles fine
            find_if(av.begin(), av.end(), bind<const string &>(&A::name, _1) == "some name");

            // error: call of overloaded 'bind(<unresolved overloaded function type>,
            // const boost::lambda::lambda_functor<boost::lambda::placeholder<1> >&)' is ambiguous
            find_if(av.begin(), av.end(), lambda::bind(&A::name, lambda::_1) == "some name");

            return 0;
        }

    Read the article

  • ASP.NET Emails blocked by Spam filter on Exchange

    - by Amadiere
    I'm trying to send an email via some C# ASP.NET code. This is being sent to our internal mail relay server, with our standard "from" address (e.g. [email protected]). In some instances this gets through OK; in others it gets blocked by the spam filter.

    An example of our Web.config:

        <mailSettings>
          <smtp from="[email protected]">
            <network host="mailrelay.domain.com" defaultCredentials="true" />
          </smtp>
        </mailSettings>

    I've spoken with our Exchange Server team and they inform me that, on occasion, our mail looks sufficiently like spam and is automatically blocked. The algorithm appears to be points-based and blocks on a score of 45. 20 points are instantly added because our system is not sending the hostname with the domain name suffixed: e.g. the server is hoping for myServerName.domain.com, but despite being part of that domain, the server is sending from myServerName.

    I've been asked to look at altering the EHLO string that is sent and/or influencing the host so that it uses its fully qualified name. However, this makes little sense to me, and although I understand the concept of what I need to change, I don't know where to begin looking for the fix.

    Read the article

  • Atlas style map index for static google map

    - by Ben Holland
    Hello, I'm using a static Google map, but really this problem could apply to any maps project. I want to divide a map into multiple quadrants (of, say, 50x50 pixels) and label the columns as A, B, C... and the rows as 1, 2, 3...

    Next I plan to do something like:

    1) Find the markers which are the farthest north, east, south, and west
    2) Use that info to define the bounding boxes of each row and column
    3) Classify each marker by its row and column (example: Marker 1 = [A,2])

    A few requirements: I don't know the zoom level, because I let Google set the zoom level appropriately for me, and I would rather not use an algorithm that depends on the zoom level. I do, however, know the locations of all of the markers that are shown on the map.

    Here is an example of a map that I would like to classify the markers for: static map example link. I found these, which look like a good start: Resource 1, Resource 2. But I think I'm still in need of some help getting started. Can anyone help write out some pseudo code or post a few more resources? I'm kind of in a rut at the moment. Thanks! Any help is much appreciated!
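
    A rough sketch of the classification step in Python (illustrative only; the grid size and the (lat, lng) marker format are assumptions): bound the markers, carve that bounding box into equal cells, and label each marker by the cell it falls in. Because it works purely from the marker coordinates, it does not depend on whatever zoom level Google picks.

        import string

        def label_markers(markers, cols=10, rows=10):
            """markers: list of (lat, lng) tuples; returns (column letter, row number, marker)."""
            lats = [lat for lat, _ in markers]
            lngs = [lng for _, lng in markers]
            min_lat, max_lat = min(lats), max(lats)
            min_lng, max_lng = min(lngs), max(lngs)
            eps = 1e-9  # avoid division by zero when all markers share a coordinate
            labelled = []
            for lat, lng in markers:
                col = int((lng - min_lng) / (max_lng - min_lng + eps) * cols)
                row = int((max_lat - lat) / (max_lat - min_lat + eps) * rows)
                col = min(col, cols - 1)   # clamp markers sitting exactly on the edge
                row = min(row, rows - 1)
                labelled.append((string.ascii_uppercase[col], row + 1, (lat, lng)))
            return labelled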

    Read the article

  • How to speed up marching cubes?

    - by Dan Vinton
    I'm using this marching cube algorithm to draw 3D isosurfaces (ported into C#, outputting MeshGeometry3Ds, but otherwise the same). The resulting surfaces look great, but are taking a long time to calculate. Are there any ways to speed up marching cubes? The most obvious one is to simply reduce the spatial sampling rate, but this reduces the quality of the resulting mesh. I'd like to avoid this.

    I'm considering a two-pass system, where the first pass samples space much more coarsely, eliminating volumes where the field strength is well below my isolevel. Is this wise? What are the pitfalls?

    Edit: the code has been profiled, and the bulk of CPU time is split between the marching cubes routine itself and the field strength calculation for each grid cell corner. The field calculations are beyond my control, so speeding up the cubes routine is my only option... I'm still drawn to the idea of trying to eliminate dead space, since this would reduce the number of calls to both systems considerably.
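
    A toy sketch of the coarse first pass described above (Python pseudocode rather than the asker's C#; block size and margin are assumptions): sample the field only at the corners of large blocks and skip blocks whose samples all sit far below the isolevel. The main pitfall is that a surface feature smaller than a block, whose corners all miss it, can be culled by mistake, so the margin needs to be generous.

        def active_blocks(field, size, block, isolevel, margin):
            """Yield origins of coarse blocks that might still contain the isosurface."""
            for x in range(0, size, block):
                for y in range(0, size, block):
                    for z in range(0, size, block):
                        corners = [field(x + dx, y + dy, z + dz)
                                   for dx in (0, block)
                                   for dy in (0, block)
                                   for dz in (0, block)]
                        # Keep the block unless every coarse sample is well below the isolevel
                        if max(corners) >= isolevel - margin:
                            yield (x, y, z)

    Full-resolution marching cubes (and the expensive field evaluations) then only need to run inside the blocks this generator yields.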

    Read the article

  • How to loop through a boost::mpl::list?

    - by Kyle
    This is as far as I've gotten:

        #include <boost/mpl/list.hpp>
        #include <algorithm>

        namespace mpl = boost::mpl;

        class RunAround {};
        class HopUpAndDown {};
        class Sleep {};

        template<typename Instructions> int doThis();
        template<> int doThis<RunAround>()    { /* run run run.. */ return 3; }
        template<> int doThis<HopUpAndDown>() { /* hop hop hop.. */ return 2; }
        template<> int doThis<Sleep>()        { /* zzz.. */ return -2; }

        int main()
        {
            typedef mpl::list<RunAround, HopUpAndDown, Sleep> acts;
            // std::for_each(mpl::begin<acts>::type, mpl::end<acts>::type, doThis<????>);
            return 0;
        };

    How do I complete this? (I don't know if I should be using std::for_each; that's just a guess based on another answer here.)

    Read the article

  • Generating permutations in Python with a specific rule

    - by twfx
    Let's say a=[A, B, C, D]; each element has a weight w, and is set to 1 if selected, 0 otherwise. I'd like to generate the permutations in the order below:

        1,1,1,1
        1,1,1,0
        1,1,0,1
        1,1,0,0
        1,0,1,1
        1,0,1,0
        1,0,0,1
        1,0,0,0

    Let w=[1,2,3,4] for items A, B, C, D ... and max_weight = 4. For each permutation, if the accumulated weight exceeds max_weight, stop the calculation for that permutation and move to the next one. For example:

        1,1,1 --> 6 > 4, exceeded, stop, move to next
        1,1,1 --> 6 > 4, exceeded, stop, move to next
        1,1,0,1 --> 7 > 4, finished, move to next
        1,1,0,0 --> 3, finished, move to next
        1,0,1,1 --> 8 > 4, finished, stop, move to next
        1,0,1,0 --> 4, finished, move to next
        1,0,0,1 --> 5 > 4, finished, move to next
        1,0,0,0 --> 1, finished, move to next

    [1,0,1,0] is the best combination which does not exceed max_weight = 4.

    My questions are:

    1. What is an algorithm that generates the required permutations (or any suggestion for how I could generate them)?
    2. As the number of elements can be up to 10000, and the calculation stops as soon as the accumulated weight for a branch exceeds max_weight, it is not necessary to generate all permutations before the calculation. How can the algorithm in (1) generate the permutations on the fly?
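
    A short sketch of one way to do this (illustrative only): a recursive generator that emits each selection as a tuple and prunes a branch as soon as its accumulated weight exceeds max_weight, so pruned branches are never expanded.

        def selections(weights, max_weight, prefix=(), total=0):
            if total > max_weight:
                return                      # prune: this branch already exceeds max_weight
            if len(prefix) == len(weights):
                yield prefix, total
                return
            i = len(prefix)
            # "selected" (1) first, then "not selected" (0), to match the required order
            for item in selections(weights, max_weight, prefix + (1,), total + weights[i]):
                yield item
            for item in selections(weights, max_weight, prefix + (0,), total):
                yield item

        best = max(selections([1, 2, 3, 4], 4), key=lambda pair: pair[1])
        # best == ((1, 0, 1, 0), 4)

    For 10000 elements the recursion would need to be unrolled into an explicit stack, but the pruning idea stays the same.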

    Read the article

  • Problem: Sorting for GridView/ObjectDataSource changes depending on page

    - by user148298
    I have a GridView tied to an ObjectDataSource using paging. The paging works fine, except that the sort order changes depending on which page of the results is being viewed. This causes items to reappear on subsequent pages, among other issues. I traced the problem to my DAL, which reads a page at a time and then sorts it. Obviously the sorting is going to change as the result set size changes. Is there an improvement to this algorithm? I would like to use a datareader if possible:

        [System.ComponentModel.DataObjectMethod(System.ComponentModel.DataObjectMethodType.Select)]
        public static WordsCollection LoadForCriteria(string sqlCriteria, int maximumRows, int startRowIndex, string sortExpression)
        {
            //DEFAULT SORT EXPRESSION
            if (string.IsNullOrEmpty(sortExpression))
                sortExpression = "OrderBy";

            //CREATE THE DYNAMIC SQL TO LOAD OBJECT
            StringBuilder selectQuery = new StringBuilder();
            selectQuery.Append("SELECT");
            if (maximumRows > 0)
                selectQuery.Append(" TOP " + (startRowIndex + maximumRows).ToString());
            selectQuery.Append(" " + Words.GetColumnNames(string.Empty));
            selectQuery.Append(" FROM sw_Words");
            string whereClause = string.IsNullOrEmpty(sqlCriteria) ? string.Empty : " WHERE " + sqlCriteria;
            selectQuery.Append(whereClause);
            selectQuery.Append(" ORDER BY " + sortExpression);
            Database database = Token.Instance.Database;
            DbCommand selectCommand = database.GetSqlStringCommand(selectQuery.ToString());

            //EXECUTE THE COMMAND
            WordsCollection results = new WordsCollection();
            int thisIndex = 0;
            int rowCount = 0;
            using (IDataReader dr = database.ExecuteReader(selectCommand))
            {
                while (dr.Read() && ((maximumRows < 1) || (rowCount < maximumRows)))
                {
                    if (thisIndex >= startRowIndex)
                    {
                        Words varWords = new Words();
                        Words.LoadDataReader(varWords, dr);
                        results.Add(varWords);
                        rowCount++;
                    }
                    thisIndex++;
                }
                dr.Close();
            }
            return results;
        }

    Read the article

  • How do I find the most “naturally” direct route using A-star (A*)

    - by Greg B
    I have implemented the A* algorithm in AS3 and it works great, except for one thing: often the resulting path does not take the most “natural” or smooth route to the target. In my environment the object can move diagonally as inexpensively as it can move horizontally or vertically.

    Here is a very simple example; the start point is marked by the S, and the end (or finish) point by the F.

        | | | | | | | | | | |S| | | | | | | | | x| | | | | | | | | | x| | | | | | | | | | x| | | | | | | | | | x| | | | | | | | | | x| | | | | | | | | | |F| | | | | | | | | | | | | | | | | | | | | | | | | | | | |

    As you can see, during the 1st round of finding, nodes [0,2], [1,2], [2,2] will all be added to the list of possible nodes as they all have a score of N. The issue I’m having comes at the next point, when I’m trying to decide which node to proceed with. In the example above I am using possibleNodes[0] to choose the next node. If I change this to possibleNodes[possibleNodes.length-1] I get the following path.

        | | | | | | | | | | |S| | | | | | | | | | |x| | | | | | | | | | |x| | | | | | | | | | |x| | | | | | | | |x| | | | | | | | |x| | | | | | | | |F| | | | | | | | | | | | | | | | | | | | | | | | | | | | |

    And then with possibleNextNodes[Math.round(possibleNextNodes.length / 2)-1]:

        | | | | | | | | | | |S| | | | | | | | | |x| | | | | | | | | x| | | | | | | | | | x| | | | | | | | | | x| | | | | | | | | | x| | | | | | | | | | |F| | | | | | | | | | | | | | | | | | | | | | | | | | | | |

    All these paths have the same cost, as they all contain the same number of steps, but in this situation the most sensible path would be as follows...

        | | | | | | | | | | |S| | | | | | | | | |x| | | | | | | | | |x| | | | | | | | | |x| | | | | | | | | |x| | | | | | | | | |x| | | | | | | | | |F| | | | | | | | | | | | | | | | | | | | | | | | | | | | |

    Is there a formally accepted method of making the path appear more sensible rather than just mathematically correct?
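
    One commonly used trick (not from the question itself, and sketched here in Python rather than AS3) is to break ties between equally cheap nodes by nudging the heuristic toward the straight line from start to finish, so that among equal-cost paths the one hugging the direct diagonal wins:

        def tie_breaking_heuristic(node, start, goal, base_heuristic):
            # Cross product of (node - goal) and (start - goal): zero on the
            # straight start-goal line, growing as the node drifts off it.
            dx1, dy1 = node[0] - goal[0], node[1] - goal[1]
            dx2, dy2 = start[0] - goal[0], start[1] - goal[1]
            cross = abs(dx1 * dy2 - dx2 * dy1)
            # The factor must be small enough never to outweigh a real step cost.
            return base_heuristic(node, goal) + 0.001 * cross

    An alternative with a similar effect is to add a tiny penalty whenever a step changes direction relative to the step that reached the parent node.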

    Read the article

  • How to convert CMYK to RGB programmatically in InDesign

    - by BBDeveloper
    Hi, I have a CMYK color space in InDesign and I want to convert it to an RGB color space. I found some code, but I am getting incorrect data. Some of the code I tried is given below.

        double cyan = 35.0;
        double magenta = 29.0;
        double yellow = 0.0;
        double black = 16.0;

        cyan = Math.min(255, cyan + black); //black is from K
        magenta = Math.min(255, magenta + black);
        yellow = Math.min(255, yellow + black);

        l_res[0] = 255 - cyan;
        l_res[1] = 255 - magenta;
        l_res[2] = 255 - yellow;

        @Override
        public float[] toRGB(float[] p_colorvalue) {
            float[] l_res = {0,0,0};
            if (p_colorvalue.length >= 4) {
                float l_black = (float)1.0 - p_colorvalue[3];
                l_res[0] = l_black * ((float)1.0 - p_colorvalue[0]);
                l_res[1] = l_black * ((float)1.0 - p_colorvalue[1]);
                l_res[2] = l_black * ((float)1.0 - p_colorvalue[2]);
            }
            return (l_res);
        }

    The values are C=35, M=29, Y=0, K=16 in the CMYK color space, and the correct RGB values are R=142, G=148, B=186. In Adobe InDesign you can change a swatch's mode to CMYK or to RGB, but I want to do that programmatically. Is there an algorithm to convert CMYK to RGB which will give the correct RGB values? And one more question: if the alpha value for RGB is 1, then what will be the alpha value for CMYK? Can anyone help me solve these issues... Thanks in advance.
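
    For comparison, a sketch of the usual naive device-independent formula (Python, with CMYK given as percentages as in the question). InDesign converts through ICC colour profiles, so a colour-managed conversion will not match this simple formula exactly, which is probably why the expected (142, 148, 186) differs from the naive result.

        def cmyk_to_rgb(c, m, y, k):
            c, m, y, k = c / 100.0, m / 100.0, y / 100.0, k / 100.0
            r = int(round(255 * (1 - c) * (1 - k)))
            g = int(round(255 * (1 - m) * (1 - k)))
            b = int(round(255 * (1 - y) * (1 - k)))
            return r, g, b

        print(cmyk_to_rgb(35, 29, 0, 16))   # (139, 152, 214) with the naive formula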

    Read the article

  • What algorithms are suitable for this simple machine learning problem?

    - by user213060
    I have what I think is a simple machine learning question. Here is the basic problem: I am repeatedly given a new object and a list of descriptions of the object. For example: new_object: 'bob', new_object_descriptions: ['tall','old','funny']. I then have to use some kind of machine learning to find previously handled objects that had similar descriptions, for example, past_similar_objects: ['frank','steve','joe']. Next, I have an algorithm that can directly measure whether these objects are indeed similar to bob, for example, correct_objects: ['steve','joe']. The classifier is then given this feedback training of successful matches. Then this loop repeats with a new object.

    Here's the pseudo-code:

        Classifier = new_classifier()
        while True:
            new_object, new_object_descriptions = get_new_object_and_descriptions()
            past_similar_objects = Classifier.classify(new_object, new_object_descriptions)
            correct_objects = calc_successful_matches(new_object, past_similar_objects)
            Classifier.train_successful_matches(new_object, correct_objects)

    But there are some stipulations that may limit what classifier can be used: there will be millions of objects put into this classifier, so classification and training need to scale well to millions of object types and still be fast. I believe this disqualifies something like a spam classifier that is optimal for just two types: spam or not spam. (Update: I could probably narrow this to thousands of objects instead of millions, if that is a problem.) Again, I prefer speed, when millions of objects are being classified, over accuracy.

    What are decent, fast machine learning algorithms for this purpose?
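
    Not a full learned model, but one scalable baseline worth sketching (Python; all names are illustrative): keep an inverted index from description to objects, so candidate similar objects are found by counting shared descriptions instead of scanning every stored object. The feedback step could then, for example, up-weight descriptions that led to confirmed matches.

        from collections import defaultdict

        class DescriptionIndex:
            def __init__(self):
                self.objects_by_description = defaultdict(set)

            def add(self, obj, descriptions):
                for d in descriptions:
                    self.objects_by_description[d].add(obj)

            def most_similar(self, descriptions, top_n=10):
                # Count how many descriptions each known object shares with the query
                counts = defaultdict(int)
                for d in descriptions:
                    for obj in self.objects_by_description.get(d, ()):
                        counts[obj] += 1
                return sorted(counts, key=counts.get, reverse=True)[:top_n]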

    Read the article

  • Java split xml file

    - by CC
    Hi all, I'm working on a piece of code to split files. I want to split a flat file (that's OK, it is working fine) and an XML file. The idea is to split based on a number of files: I have a file, and I want to split it into x files (x is a parameter). I'm doing the split by taking the size of the file and dividing it by the number of files. My solution was then to use a BufferedReader and read it like this:

        while ((n = reader.read(buffer, 0, buffer.length)) != -1) {

    The main problem is that I cannot just split the XML file anywhere: I have to split it on a block delimited by a start tag and an end tag:

        <start tag>
        bla bla xml stuff
        </end tag>

    So I cannot cut a block in the middle. If, halfway through a block, the size of my new file is already greater than my maximum, I have to keep reading until the end tag and only then start the next file. The problem is that there are all sorts of cases, and it is a bit difficult to find the end tag:

    - the buffer ends in the middle of the end tag
    - the buffer ends exactly at the end of the end tag, with no characters after it
    - etc.

    ...and at the same time I have to keep looping and reading the next block. Sometimes the end tag only appears once the end of one buffer is concatenated with the start of the next one. I hope you get the idea. My question is: does anyone have an algorithm that does this more accurately and handles all the special cases? The idea is to split the file as quickly as possible. Thanks a lot.

    Read the article

  • Issue with clipping rectangles and back to front rendering

    - by Milo
    Here is my problem: my rendering algorithm renders from back to front, but logically, clipping rectangles need to be applied from front to back. Hence the following does not work:

        void AguiWidgetManager::recursiveRender(const AguiWidget *root)
        {
            //recursively calls itself to render widgets from back to front
            AguiWidget* nonConstRoot = (AguiWidget*)root;

            if(!nonConstRoot->isVisable())
            {
                return;
            }

            //push the clipping rectangle
            if(nonConstRoot->isClippingChildren())
            {
                graphicsContext->pushClippingRect(nonConstRoot->getClippingRectangle());
            }

            if(nonConstRoot->isEnabled())
            {
                nonConstRoot->paint(AguiPaintEventArgs(true,graphicsContext));

                for(std::vector<AguiWidget*>::const_iterator it = root->getPrivateChildBeginIterator();
                    it != root->getPrivateChildEndIterator(); ++it)
                {
                    recursiveRender(*it);
                }
                for(std::vector<AguiWidget*>::const_iterator it = root->getChildBeginIterator();
                    it != root->getChildEndIterator(); ++it)
                {
                    recursiveRender(*it);
                }
            }
            else
            {
                nonConstRoot->paint(AguiPaintEventArgs(false,graphicsContext));

                for(std::vector<AguiWidget*>::const_iterator it = root->getPrivateChildBeginIterator();
                    it != root->getPrivateChildEndIterator(); ++it)
                {
                    recursiveRenderDisabled(*it);
                }
                for(std::vector<AguiWidget*>::const_iterator it = root->getChildBeginIterator();
                    it != root->getChildEndIterator(); ++it)
                {
                    recursiveRenderDisabled(*it);
                }
            }

            //release clipping rectangle
            if(nonConstRoot->isClippingChildren())
            {
                graphicsContext->popClippingRect();
            }
        }

    I could of course go to the top of the tree, then apply clipping rectangles inward until I get to the currently rendered widget, but that would involve lots of clipping rectangles at 60 frames per second. I want to minimize calls to pushing and popping rectangles. What could I do? Thanks

    Read the article

  • Calculate car filled up times

    - by Ivan
    Here is the question:

        The driving distance between Perth and Adelaide is 1996 miles. On the average, the fuel consumption of a 2.0 litre 4 cylinder car is 8 litres per 100 kilometres. The fuel tank capacity of such a car is 60 litres. Design and implement a JAVA program that prompts for the fuel consumption and fuel tank capacity of the aforementioned car. The program then displays the minimum number of times the car’s fuel tank has to be filled up to drive from Perth to Adelaide. Note that 62 miles is equal to 100 kilometres. What data will you use to test that your algorithm works correctly?

    Here is what I've done so far:

        import java.util.Scanner;

        public class Ex4 {
            public static void main(String args[]) {
                Scanner input = new Scanner(System.in);
                double distance, consumption, capacity, time;
                distance = Math.sqrt(1996/62*100);
                consumption = Math.sqrt(8/100);
                capacity = 60;
                time = Math.sqrt(distance*consumption/capacity);
                System.out.println("The car's fuel tank need to be filled up:" + time + "times");
            }
        }

    I can compile it, but the problem is that the result is always 0.0. Can anyone tell me what's wrong with it?
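
    As a cross-check on the expected output, the arithmetic the exercise is after looks roughly like this (sketched in Python for brevity; whether the first fill of an initially empty tank counts is an assumption of this sketch). Note there is no square root anywhere in the calculation.

        import math

        distance_km = 1996 / 62.0 * 100            # 62 miles = 100 km
        litres_needed = distance_km * 8 / 100.0    # 8 litres per 100 km
        fill_ups = int(math.ceil(litres_needed / 60.0))   # 60-litre tank
        print(fill_ups)   # 5 fill-ups, counting the first fill of an empty tank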

    Read the article

  • Trying to use boost lambda, but my code won't compile

    - by hamishmcn
    Hi, I am trying to use boost lambda to avoid having to write trivial functors. For example, I want to use a lambda to access a member of a struct or call a method of a class, e.g.:

        #include <vector>
        #include <utility>
        #include <algorithm>
        #include <boost/lambda/lambda.hpp>

        using namespace std;
        using namespace boost::lambda;

        vector< pair<int,int> > vp;
        vp.push_back( make_pair<int,int>(1,1) );
        vp.push_back( make_pair<int,int>(3,2) );
        vp.push_back( make_pair<int,int>(2,3) );
        sort(vp.begin(), vp.end(), _1.first > _2.first );

    When I try to compile this I get the following errors:

        error C2039: 'first' : is not a member of 'boost::lambda::lambda_functor<T>'
            with [ T=boost::lambda::placeholder<1> ]
        error C2039: 'first' : is not a member of 'boost::lambda::lambda_functor<T>'
            with [ T=boost::lambda::placeholder<2> ]

    Since vp contains pair<int,int>, I thought that _1.first should work. What am I doing wrong?

    Read the article

  • Using multiple aggregate functions in an (ANSI) SQL statement

    - by morpheous
    I have aggregate functions foo(), foobar(), fredstats(), barneystats(). I want to create a domain specific query language (DSQL) above my DB, to facilitate using a domain language to query the DB. The 'language' comprises algebraic expressions (or more specifically SQL-like criteria) which I use to generate (ANSI) SQL statements which are sent to the db engine. The following lines are examples of what the language statements will look like, and hopefully it will help further clarify the concept:

    Example 1
    DQL statement: foobar('yellow') between 1 and 3 and fredstats('weight') > 42
    Translation: fetch all rows in an underlying table where the computed value for aggregate function foobar() is between 1 and 3 AND the computed value for agg func fredstats() is greater than 42

    Example 2
    DQL statement: fredstats('weight') < barneystats('weight') AND foo('fighter') in (9,10,11) AND foobar('green') <> 42
    Translation: fetch all rows where the specified criteria match

    Example 3
    DQL statement: foobar('green') / foobar('red') <> 42
    Translation: fetch all rows where the specified criteria match

    Example 4
    DQL statement: foobar('green') - foobar('red') >= 42
    Translation: fetch all rows where the specified criteria match

    Given the following information:

    * The table upon which the queries above are being executed is called 'tbl'
    * Table 'tbl' has the following structure (id int, name varchar(32), weight float)
    * The result set returns only tbl.id, tbl.name and the names of the aggregate functions as columns - so for example the foobar() agg func column will be called foobar in the result set

    So for example, the first DQL query will return a result set with the following columns: id, name, foobar, fredstats.

    Given the above, my questions then are:

    1. What would be the underlying SQL required for Example 1?
    2. What would be the underlying SQL required for Example 3?
    3. Given an algebraic equation comprising aggregate functions, is there a way of generalizing the algorithm needed to generate the required ANSI SQL statement(s)?

    I am using PostgreSQL as the db, but I would prefer to use ANSI SQL wherever possible.

    Read the article

  • Collaborative filtering in MySQL ?

    - by user281434
    Hi, I'm trying to develop a site that recommends items (e.g. books) to users based on their preferences. So far, I've read O'Reilly's "Collective Intelligence" and numerous other online articles. They all, however, seem to deal with single instances of recommendation, for example: if you like book A then you might like book B.

    What I'm trying to do is to create a set of 'preference-nodes' for each user on my site. Let's say a user likes books A, B and C. Then, when they add book D, I don't want the system to recommend other books based solely on other users' experience with book D. I want the system to look up similar 'preference-nodes' and recommend books based on that. Here's an example of 4 nodes:

        User1: 'book A'->'book B'->'book C'
        User2: 'book A'->'book B'->'book C'->'book D'
        user3: 'book X'->'book Y'->'book C'->'book Z'
        user4: 'book W'->'book Q'->'book C'->'book Z'

    So a recommendation system, as described in the material I've read, would recommend book Z to User1, because there are two people who recommend Z in conjunction with liking C (i.e. Z weighs more than D), even though a user with a similar 'preference-node', User2, would be more qualified to recommend book D because he has a more similar interest pattern.

    So do any of you have experience with this sort of thing? Are there things I should try to read, or do any open source systems for this exist? Thanks for your time!

    Small edit: I think last.fm's algorithm is doing exactly what I want my system to do: using people's preference-trees to recommend music more personally, instead of just saying "you might like B because you liked A".
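
    A minimal sketch of the 'similar preference-node' idea (Python; purely illustrative and not from the question): score other users by how much their book set overlaps the current user's, then weight candidate books by the similarity of the users who hold them.

        def recommend(user_books, all_users, top_n=5):
            """user_books: set of books; all_users: dict mapping user -> set of books."""
            candidates = {}
            for other, books in all_users.items():
                union = user_books | books
                if not union:
                    continue
                similarity = len(user_books & books) / float(len(union))   # Jaccard
                for book in books - user_books:
                    candidates[book] = candidates.get(book, 0.0) + similarity
            return sorted(candidates, key=candidates.get, reverse=True)[:top_n]

    With the four nodes above, User2's high overlap with User1 makes book D outrank book Z for User1, which is the behaviour the question is after.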

    Read the article

  • Bad crypto error in .NET 4.0

    - by Andrey
    Today I moved my web application to .NET 4.0 and Forms Auth just stopped working. After several hours of digging into my SqlMembershipProvider (a simplified version of the built-in SqlMembershipProvider), I found that the HMACSHA256 hash is not consistent. This is the encryption method:

        internal string EncodePassword(string pass, int passwordFormat, string salt)
        {
            if (passwordFormat == 0) // MembershipPasswordFormat.Clear
                return pass;

            byte[] bIn = Encoding.Unicode.GetBytes(pass);
            byte[] bSalt = Convert.FromBase64String(salt);
            byte[] bAll = new byte[bSalt.Length + bIn.Length];
            byte[] bRet = null;

            Buffer.BlockCopy(bSalt, 0, bAll, 0, bSalt.Length);
            Buffer.BlockCopy(bIn, 0, bAll, bSalt.Length, bIn.Length);

            if (passwordFormat == 1)
            {
                // MembershipPasswordFormat.Hashed
                HashAlgorithm s = HashAlgorithm.Create( Membership.HashAlgorithmType );
                bRet = s.ComputeHash(bAll);
            }
            else
            {
                bRet = EncryptPassword( bAll );
            }

            return Convert.ToBase64String(bRet);
        }

    Passing the same password and salt twice returns different results! It was working perfectly in .NET 3.5. Is anyone aware of any breaking changes, or is it a known bug?

    UPDATE: When I specify SHA512 as the hashing algorithm, everything works fine, so I do believe it's a bug in the .NET 4.0 crypto. Thanks! Andrey

    Read the article

  • Is it possible in any Java IDE to collapse the type definitions in the source code?

    - by asmaier
    Lately I often have to read Java code like this:

        LinkedHashMap<String, Integer> totals = new LinkedHashMap<String, Integer>(listOfRows.get(0));
        for (LinkedHashMap<String, Integer> row : (ArrayList<LinkedHashMap<String,Integer>>) table.getValue()) {
            for (Entry<String, Integer> elem : row.entrySet()) {
                String colName = elem.getKey();
                int Value = elem.getValue();
                int oldValue = totals.get(colName);
                int sum = Value + oldValue;
                totals.put(colName, sum);
            }
        }

    Due to the long and nested type definitions, the simple algorithm becomes quite obscured. So I wish I could remove or collapse the type definitions with my IDE to see the Java code without types, like:

        totals = new (listOfRows.get(0));
        for (row : table.getValue()) {
            for (elem : row.entrySet()) {
                colName = elem.getKey();
                Value = elem.getValue();
                oldValue = totals.get(colName);
                sum = Value + oldValue;
                totals.put(colName, sum);
            }
        }

    The best way, of course, would be to collapse the type definitions but show the type as a tooltip when moving the mouse over a variable. Is there a Java IDE or a plugin for an IDE that can do this?

    Read the article

  • Replicating SQL's 'Join' in Python

    - by Daniel Mathews
    I'm in the process of trying to switch from R to Python (mainly for issues around general flexibility). With NumPy, matplotlib and IPython, I am able to cover all my use cases save for merging 'datasets'. I would like to simulate SQL's join by clause (inner, outer, full) purely in Python. R handles this with the 'merge' function.

    I've tried numpy.lib.recfunctions join_by, but it has critical issues with duplicates along the 'key':

        join_by(key, r1, r2, jointype='inner', r1postfix='1', r2postfix='2', defaults=None, usemask=True, asrecarray=False)

        Join arrays r1 and r2 on key key. The key should be either a string or a sequence of string corresponding to the fields used to join the array. An exception is raised if the key field cannot be found in the two input arrays. Neither r1 nor r2 should have any duplicates along key: the presence of duplicates will make the output quite unreliable. Note that duplicates are not looked for by the algorithm.

    Source: http://presbrey.mit.edu:1234/numpy.lib.recfunctions.html

    Any pointers or help will be most appreciated!
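
    For reference, a minimal pure-Python inner join over rows represented as dicts (illustrative only, and deliberately tolerant of duplicate keys on either side, which is where join_by falls down):

        from collections import defaultdict

        def inner_join(left_rows, right_rows, key):
            index = defaultdict(list)
            for row in right_rows:
                index[row[key]].append(row)
            joined = []
            for left in left_rows:
                for right in index.get(left[key], []):
                    merged = dict(right)
                    merged.update(left)      # left-hand columns win on name clashes
                    joined.append(merged)
            return joined

    Outer variants follow the same pattern, emitting a row padded with None when no match is found on the other side.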

    Read the article

  • Neural Network settings for fast training

    - by danpalmer
    I am creating a tool for predicting the time and cost of software projects based on past data. The tool uses a neural network to do this and so far, the results are promising, but I think I can do a lot more optimisation just by changing the properties of the network. There don't seem to be any rules or even many best practices when it comes to these settings, so if anyone with experience could help me I would greatly appreciate it.

    The input data is made up of a series of integers that could go up as high as the user wants to go, but most will be under 100,000 I would have thought. Some will be as low as 1. They are details like number of people on a project and the cost of a project, as well as details about database entities and use cases.

    There are 10 inputs in total and 2 outputs (the time and cost). I am using Resilient Propagation to train the network. Currently it has: 10 input nodes, 1 hidden layer with 5 nodes and 2 output nodes. I am training to get under a 5% error rate.

    The algorithm must run on a webserver so I have put in a measure to stop training when it looks like it isn't going anywhere. This is set to 10,000 training iterations. Currently, when I try to train it with some data that is a bit varied, but well within the limits of what we expect users to put into it, it takes a long time to train, hitting the 10,000 iteration limit over and over again.

    This is the first time I have used a neural network and I don't really know what to expect. If you could give me some hints on what sort of settings I should be using for the network and for the iteration limit I would greatly appreciate it. Thank you!

    Read the article

  • Output iterator's value_type

    - by wilhelmtell
    The STL commonly defines an output iterator like so:

        template<class Cont>
        class insert_iterator
            : public iterator<output_iterator_tag,void,void,void,void> {
            // ...

    Why do output iterators define value_type as void? It would be useful for an algorithm to know what type of value it is supposed to output. For example, consider a function that translates a URL query "key1=value1&key2=value2&key3=value3" into any container that holds key-value string elements:

        template<typename Ch,typename Tr,typename Out>
        void parse(const std::basic_string<Ch,Tr>& str, Out result)
        {
            std::basic_string<Ch,Tr> key, value;
            // loop over str, parse into key and value ...
            *result = typename iterator_traits<Out>::value_type(key, value);
        }

    The SGI reference page for value_type hints this is because it's not possible to dereference an output iterator. But that's not the only use of value_type: I might want to instantiate one in order to assign it to the iterator.

    Read the article

  • Approximate string matching with a letter confusion matrix?

    - by zigglenaut
    I'm trying to model a phonetic recognizer that has to isolate instances of words (strings of phones) out of a long stream of phones that doesn't have gaps between each word. The stream of phones may have been poorly recognized, with letter substitutions/insertions/deletions, so I will have to do approximate string matching. However, I want the matching to be phonetically-motivated, e.g. "m" and "n" are phonetically similar, so the substitution cost of "m" for "n" should be small, compared to say, "m" and "k". So, if I'm searching for [mein] "main", it would match the letter sequence [meim] "maim" with, say, cost 0.1, whereas it would match the letter sequence [meik] "make" with, say, cost 0.7. Similarly, there are differing costs for inserting or deleting each letter. I can supply a confusion matrix that, for each letter pair (x,y), gives the cost of substituting x with y, where x and y are any letter or the empty string. I know that there are tools available that do approximate matching such as agrep, but as far as I can tell, they do not take a confusion matrix as input. That is, the cost of any insertion/substitution/deletion = 1. My question is, are there any open-source tools already available that can do approximate matching with confusion matrices, and if not, what is a good algorithm that I can implement to accomplish this?

    Read the article
