Search Results

Search found 15882 results on 636 pages for 'similar'.

Page 10 of 636

  • Get last n lines of a file with Python, similar to tail

    - by Armin Ronacher
    I'm writing a log file viewer for a web application and for that I want to paginate through the lines of the log file. The items in the file are line based with the newest item on the bottom. So I need a tail() method that can read n lines from the bottom and supports an offset. What I came up with looks like this: def tail(f, n, offset=0): """Reads a n lines from f with an offset of offset lines.""" avg_line_length = 74 to_read = n + offset while 1: try: f.seek(-(avg_line_length * to_read), 2) except IOError: # woops. apparently file is smaller than what we want # to step back, go to the beginning instead f.seek(0) pos = f.tell() lines = f.read().splitlines() if len(lines) >= to_read or pos == 0: return lines[-to_read:offset and -offset or None] avg_line_length *= 1.3 Is this a reasonable approach? What is the recommended way to tail log files with offsets?
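
    A hedged sketch of one alternative approach, assuming the log is opened in binary mode: read fixed-size blocks backwards from the end until enough newlines have been seen, then slice. The block size and the sample filename below are illustrative, not part of the question.

        import io

        def tail_lines(f, n, offset=0):
            """Return the last n lines of binary file f, skipping `offset`
            lines counted from the very end."""
            want = n + offset
            block = 1024
            data = b""
            f.seek(0, io.SEEK_END)
            remaining = f.tell()
            # Keep prepending blocks until we have enough newlines or hit the start.
            while remaining > 0 and data.count(b"\n") <= want:
                step = min(block, remaining)
                remaining -= step
                f.seek(remaining)
                data = f.read(step) + data
            lines = data.splitlines()
            return lines[-want:-offset or None]

        with open("app.log", "rb") as f:          # "app.log" is a placeholder path
            print(tail_lines(f, 10, offset=0))

    This avoids guessing an average line length, at the cost of possibly reading a little more than strictly needed.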

  • Neural Network Always Produces Same/Similar Outputs for Any Input

    - by l33tnerd
    I have a problem where I am trying to create a neural network for Tic-Tac-Toe. However, for some reason, training the neural network causes it to produce nearly the same output for any given input. I did take a look at Artificial neural networks benchmark, but my network implementation is built for neurons with the same activation function for each neuron, i.e. no constant neurons. To make sure the problem wasn't just due to my choice of training set (1218 board states and moves generated by a genetic algorithm), I tried to train the network to reproduce XOR. The logistic activation function was used. Instead of using the derivative, I multiplied the error by output*(1-output) as some sources suggested that this was equivalent to using the derivative. I can put the Haskell source on HPaste, but it's a little embarrassing to look at. The network has 3 layers: the first layer has 2 inputs and 4 outputs, the second has 4 inputs and 1 output, and the third has 1 output. Increasing to 4 neurons in the second layer didn't help, and neither did increasing to 8 outputs in the first layer. I then calculated errors, network output, bias updates, and the weight updates by hand based on http://hebb.mit.edu/courses/9.641/2002/lectures/lecture04.pdf to make sure there wasn't an error in those parts of the code (there wasn't, but I will probably do it again just to make sure). Because I am using batch training, I did not multiply by x in equation (4) there. I am adding the weight change, though http://www.faqs.org/faqs/ai-faq/neural-nets/part2/section-2.html suggests to subtract it instead. The problem persisted, even in this simplified network. For example, these are the results after 500 epochs of batch training and of incremental training. Input |Target|Output (Batch) |Output(Incremental) [1.0,1.0]|[0.0] |[0.5003781562785173]|[0.5009731800870864] [1.0,0.0]|[1.0] |[0.5003740346965251]|[0.5006347214672715] [0.0,1.0]|[1.0] |[0.5003734471544522]|[0.500589332376345] [0.0,0.0]|[0.0] |[0.5003674110937019]|[0.500095157458231] Subtracting instead of adding produces the same problem, except everything is 0.99 something instead of 0.50 something. 5000 epochs produces the same result, except the batch-trained network returns exactly 0.5 for each case. (Heck, even 10,000 epochs didn't work for batch training.) Is there anything in general that could produce this behavior? Also, I looked at the intermediate errors for incremental training, and the although the inputs of the hidden/input layers varied, the error for the output neuron was always +/-0.12. For batch training, the errors were increasing, but extremely slowly and the errors were all extremely small (x10^-7). Different initial random weights and biases made no difference, either. Note that this is a school project, so hints/guides would be more helpful. Although reinventing the wheel and making my own network (in a language I don't know well!) was a horrible idea, I felt it would be more appropriate for a school project (so I know what's going on...in theory, at least. There doesn't seem to be a computer science teacher at my school). EDIT: Two layers, an input layer of 2 inputs to 8 outputs, and an output layer of 8 inputs to 1 output, produces much the same results: 0.5+/-0.2 (or so) for each training case. I'm also playing around with pyBrain, seeing if any network structure there will work. Edit 2: I am using a learning rate of 0.1. Sorry for forgetting about that. 
Edit 3: Pybrain's "trainUntilConvergence" doesn't get me a fully trained network, either, but 20000 epochs does, with 16 neurons in the hidden layer. 10000 epochs and 4 neurons, not so much, but close. So, in Haskell, with the input layer having 2 inputs & 2 outputs, hidden layer with 2 inputs and 8 outputs, and output layer with 8 inputs and 1 output...I get the same problem with 10000 epochs. And with 20000 epochs. Edit 4: I ran the network by hand again based on the MIT PDF above, and the values match, so the code should be correct unless I am misunderstanding those equations. Some of my source code is at http://hpaste.org/42453/neural_network__not_working; I'm working on cleaning my code somewhat and putting it in a Github (rather than a private Bitbucket) repository. All of the relevant source code is now at https://github.com/l33tnerd/hsann.
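
    For reference, a minimal NumPy sketch of batch backpropagation on XOR (the layer sizes, learning rate and epoch count are illustrative assumptions, not taken from the question). Missing bias terms are one detail often behind a network that returns roughly 0.5 for every input.

        import numpy as np

        rng = np.random.default_rng(0)

        # XOR training set: 2 inputs -> 1 output.
        X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
        T = np.array([[0], [1], [1], [0]], dtype=float)

        def sigmoid(z):
            return 1.0 / (1.0 + np.exp(-z))

        # One hidden layer of 4 logistic units, with bias terms.
        W1 = rng.uniform(-1, 1, (2, 4)); b1 = np.zeros(4)
        W2 = rng.uniform(-1, 1, (4, 1)); b2 = np.zeros(1)

        lr = 0.5
        for epoch in range(10000):
            # forward pass
            h = sigmoid(X @ W1 + b1)
            y = sigmoid(h @ W2 + b2)
            # backward pass: error * y * (1 - y) is the delta for logistic units
            d2 = (y - T) * y * (1 - y)
            d1 = (d2 @ W2.T) * h * (1 - h)
            # batch gradient step (note: the gradient is subtracted)
            W2 -= lr * h.T @ d2; b2 -= lr * d2.sum(axis=0)
            W1 -= lr * X.T @ d1; b1 -= lr * d1.sum(axis=0)

        print(np.round(y, 3))  # should approach [[0], [1], [1], [0]]; depends on the seed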

  • using BOSH/similar technique for existing application/system

    - by SnapConfig.com
    We have an existing system which connects to the back end via HTTP (Apache/SSL) and polls the server for new messages; needless to say, we have scalability issues. I'm researching how to remove this polling and have come across BOSH/XMPP, but I'm not sure how we should adopt the BOSH technique (using a long-lived HTTP connection). I've seen there are a few libraries available, but the whole thing seems bloated, since we do not need buddy lists etc. and simply want to notify the clients of available messages. The client is written in C/C++ and works across most OSes, so that is an important factor. The server is in Java. Does BOSH result in a huge number of httpd processes, given that it has to keep all the clients connected, and what would the limit be on that? We are also planning to move to a 64-bit JVM/Apache; what would the maximum number of clients be in that case? Any hints?
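
    For what it's worth, the core of the long-lived-connection idea can be sketched from the client side in a few lines (the endpoint, payload shape and the third-party 'requests' dependency below are assumptions for illustration only; BOSH itself adds session management on top of this):

        import requests   # third-party HTTP client, assumed available

        NOTIFY_URL = "https://example.com/notify"   # hypothetical endpoint

        def wait_for_messages(last_seen_id=0):
            """Long poll: the server holds each request open until a message newer
            than last_seen_id exists (or its own timeout fires), so the client is
            not hammering the server every few seconds."""
            while True:
                try:
                    resp = requests.get(NOTIFY_URL,
                                        params={"since": last_seen_id},
                                        timeout=60)
                except requests.exceptions.Timeout:
                    continue            # nothing arrived in time; re-issue the request
                if resp.status_code == 200:
                    for msg in resp.json()["messages"]:   # assumed JSON shape
                        last_seen_id = msg["id"]
                        print("got message", msg)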

  • Framework Similar to Pylons for Ruby

    - by Travis
    I've been using Python for most of my web projects lately, and have come to really love the Pylons MVC framework. I like the incredible transparency (lack of magic), the built-in components they selected (sqlalchemy, formencode, routes), and the ability to easily change things up (use a different ORM or templating engine). Moving forward, due to constraints at my company, I'm going to be trying out Ruby rather than Python. I'm wondering if people with experience in both have any recommendations for a Ruby framework that is comparable to Pylons. Python is to Django as Ruby is to Rails; Python is to Pylons as Ruby is to what?

  • How to differentiate between two similar fields in Linq Join tables

    - by Azhar
    How to differentiate between two select new fields e.g. Description c.Description and lt.Description DataTable lDt = new DataTable(); try { lDt.Columns.Add(new DataColumn("AreaTypeID", typeof(Int32))); lDt.Columns.Add(new DataColumn("CategoryRef", typeof(Int32))); lDt.Columns.Add(new DataColumn("Description", typeof(String))); lDt.Columns.Add(new DataColumn("CatDescription", typeof(String))); EzEagleDBDataContext lDc = new EzEagleDBDataContext(); var lAreaType = (from lt in lDc.tbl_AreaTypes join c in lDc.tbl_AreaCategories on lt.CategoryRef equals c.CategoryID where lt.AreaTypeID== pTypeId select new { lt.AreaTypeID, lt.Description, lt.CategoryRef, c.Description }).ToArray(); for (int j = 0; j< lAreaType.Count; j++) { DataRow dr = lDt.NewRow(); dr["AreaTypeID"] = lAreaType[j].LandmarkTypeID; dr["CategoryRef"] = lAreaType[j].CategoryRef; dr["Description"] = lAreaType[j].Description; dr["CatDescription"] = lAreaType[j].; lDt.Rows.Add(dr); } } catch (Exception ex) { }

  • Optimizing a Soundex Query for finding similar names

    - by xkingpin
    My application will offer a list of suggestions for English names that "sound like" a given typed name. The query will need to be optimized and return results as quick as possible. Which option would be most optimal for returning results quickly. (Or your own suggestion if you have one) A. Generate the Soundex Hash and store it in the "Names" table then do something like the following: (This saves generating the soundex hash for at least every row in my db per query right?) select name from names where NameSoundex = Soundex('Ann') B. Use the Difference function (This must generate the soundex for every name in the table?) select name from names where Difference(name, 'Ann') = 3 C. Simple comparison select name from names where Soundex(name) = Soundex('Ann') Option A seems like to me it would be the fastest to return results because it only generates the Soundex for one string then compares to an indexed column "NameSoundex" Option B should give more results than option A because the name does not have to be an exact match of the soundex, but could be slower Assuming my table could contain millions of rows, what would yield the best results?
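
    Option A in miniature, as a Python sketch (the name list is made up, and the soundex() below is only a compact approximation of SQL Server's SOUNDEX): the hash is computed once per stored name when the index is built, and only once per query afterwards.

        def soundex(name):
            """Compact American Soundex; close enough for a sketch."""
            codes = {"b": "1", "f": "1", "p": "1", "v": "1",
                     "c": "2", "g": "2", "j": "2", "k": "2", "q": "2",
                     "s": "2", "x": "2", "z": "2",
                     "d": "3", "t": "3", "l": "4",
                     "m": "5", "n": "5", "r": "6"}
            name = name.lower()
            out, prev = name[0].upper(), codes.get(name[0], "")
            for ch in name[1:]:
                code = codes.get(ch, "")
                if code and code != prev:
                    out += code
                if ch not in "hw":      # h/w do not break up a run of equal codes
                    prev = code
            return (out + "000")[:4]

        names = ["Ann", "Anne", "Andy", "Aaron", "John"]       # made-up data
        index = {}                                             # NameSoundex -> names
        for n in names:
            index.setdefault(soundex(n), []).append(n)

        print(index.get(soundex("Ann"), [])) # ['Ann', 'Anne'] -- one hash per lookup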

  • An idea for something similar to delicious bookmarking

    - by Andrew Welch
    Hi, This is an idea more than a question, but I thought it would be the right community. I really like mindmapping as a way to organise information and I think it would be really cool to have a piece of software that allowed you to organise bookmarks into a dynamic mind map. If I get time I would start to create such a thing. Any thoughts or does it already exist? Thanks Andy

  • How to handle similar items in rails MVC?

    - by mocker
    I'm working on building a pretty simple site mainly as an exercise in learning more about rails. You can see my rough progress at statific.com. It's working pretty much as I wanted it for keeping track of server information, but now I'd like to expand it to other things, next on the list being firewalls. I can pretty easily duplicate all the models, views, etc.. that I have for my servers. The problem I see with that is that it isn't very DRY since most of the code would look the same, the only difference would be the attributes I have setup for firewalls would be different than for servers. I know in plain ruby this is pretty simple, you can have a 'Product' w/ common attributes ('manufacturer', 'model') and then have children with more specific attributes. Does the same type of concept exist for rails, or am I just over thinking this?
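
    The "Product with common attributes, children with specifics" idea from plain Ruby carries over directly; as a language-neutral sketch (Python here, with invented attribute names), the shared attributes live on the parent and each child adds only its own. In Rails the usual names for this are single table inheritance or a polymorphic association.

        class Asset:                       # shared attributes and behaviour
            def __init__(self, name, location):
                self.name = name
                self.location = location

        class Server(Asset):               # server-specific attributes only
            def __init__(self, name, location, os, ram_gb):
                super().__init__(name, location)
                self.os, self.ram_gb = os, ram_gb

        class Firewall(Asset):             # firewall-specific attributes only
            def __init__(self, name, location, vendor, rule_count):
                super().__init__(name, location)
                self.vendor, self.rule_count = vendor, rule_count

        web1 = Server("web1", "rack 4", os="Debian", ram_gb=16)
        fw1 = Firewall("fw1", "rack 1", vendor="pfSense", rule_count=42)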

  • SQL - How to join on similar (not exact) columns

    - by BlueRaja
    I have two tables which get updated at almost the exact same time - I need to join on the datetime column. I've tried this: SELECT * FROM A, B WHERE ABS(DATEDIFF(second, A.Date_Time, B.Date_Time) = ( SELECT MIN(ABS(DATEDIFF(second, A.Date_Time, B2.Date_Time))) FROM B AS B2 ) But it tells me: Multiple columns are specified in an aggregated expression containing an outer reference. If an expression being aggregated contains an outer reference, then that outer reference must be the only column referenced in the expression. How can I join these tables?
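
    Expressed outside SQL, the intended "join on the nearest timestamp" pairing looks like this (a Python sketch with invented sample times; B's timestamps must be sorted for the binary search):

        from bisect import bisect_left
        from datetime import datetime

        a_times = [datetime(2010, 1, 1, 12, 0, 1), datetime(2010, 1, 1, 12, 5, 2)]
        b_times = sorted([datetime(2010, 1, 1, 12, 0, 0), datetime(2010, 1, 1, 12, 5, 3)])

        def nearest(ts, candidates):
            """Return the candidate closest in time to ts (candidates sorted ascending)."""
            i = bisect_left(candidates, ts)
            neighbours = candidates[max(i - 1, 0):i + 1]
            return min(neighbours, key=lambda c: abs(c - ts))

        pairs = [(a, nearest(a, b_times)) for a in a_times]
        print(pairs)   # each A timestamp paired with the B timestamp nearest in time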

  • Existing LINQ extension method similar to Parallel.For?

    - by Joel Martinez
    The linq extension methods for ienumerable are very handy ... but not that useful if all you want to do is apply some computation to each item in the enumeration without returning anything. So I was wondering if perhaps I was just missing the right method, or if it truly doesn't exist as I'd rather use a built-in version if it's available ... but I haven't found one :-) I could have sworn there was a .ForEach method somewhere, but I have yet to find it. In the meantime, I did write my own version in case it's useful for anyone else: using System.Collections; using System.Collections.Generic; public delegate void Function<T>(T item); public delegate void Function(object item); public static class EnumerableExtensions { public static void For(this IEnumerable enumerable, Function func) { foreach (object item in enumerable) { func(item); } } public static void For<T>(this IEnumerable<T> enumerable, Function<T> func) { foreach (T item in enumerable) { func(item); } } } usage is: myEnumerable.For<MyClass>(delegate(MyClass item) { item.Count++; });

  • Does Android AsyncTaskQueue or similar exist?

    - by Ben L.
    I read somewhere (and have observed) that starting threads is slow. I always assumed that AsyncTask created and reused a single thread because it required being started inside the UI thread. The following (anonymized) code is called from a ListAdapter's getView method to load images asynchronously. It works well until the user moves the list quickly, and then it becomes "janky". final File imageFile = new File(getCacheDir().getPath() + "/img/" + p.image); image.setVisibility(View.GONE); view.findViewById(R.id.imageLoading).setVisibility(View.VISIBLE); (new AsyncTask<Void, Void, Bitmap>() { @Override protected Bitmap doInBackground(Void... params) { try { Bitmap image; if (!imageFile.exists() || imageFile.length() == 0) { image = BitmapFactory.decodeStream(new URL( "http://example.com/images/" + p.image).openStream()); image.compress(Bitmap.CompressFormat.JPEG, 85, new FileOutputStream(imageFile)); image.recycle(); } image = BitmapFactory.decodeFile(imageFile.getPath(), bitmapOptions); return image; } catch (MalformedURLException ex) { // TODO Auto-generated catch block ex.printStackTrace(); return null; } catch (IOException ex) { // TODO Auto-generated catch block ex.printStackTrace(); return null; } } @Override protected void onPostExecute(Bitmap image) { if (view.getTag() != p) // The view was recycled. return; view.findViewById(R.id.imageLoading).setVisibility( View.GONE); view.findViewById(R.id.image) .setVisibility(View.VISIBLE); ((ImageView) view.findViewById(R.id.image)) .setImageBitmap(image); } }).execute(); I'm thinking that a queue-based method would work better, but I'm wondering if there is one or if I should attempt to create my own implementation.
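
    The queue-based method amounts to one long-lived worker draining a job queue, which is easy to sketch outside Android (Python below; download() is a stand-in for the fetch-and-decode step, and posting results back to the UI thread is elided):

        import queue
        import threading

        jobs = queue.Queue()            # (url, callback) pairs submitted by the UI layer

        def download(url):
            return "bitmap-for " + url  # stand-in for fetch + decode + cache

        def worker():
            """Single background thread: one image at a time, in submission order,
            instead of spawning a new task per list row."""
            while True:
                url, callback = jobs.get()
                try:
                    data = download(url)
                except Exception:
                    data = None         # report failure rather than crashing the worker
                callback(data)
                jobs.task_done()

        threading.Thread(target=worker, daemon=True).start()

        # getView()-equivalent: enqueue a job instead of starting a new task
        jobs.put(("http://example.com/images/cover.jpg", lambda bmp: print("loaded:", bmp)))
        jobs.join()                     # wait for the queue to drain (demo only)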

  • a program similar to ls with some modifications

    - by Bond
    Hi, here is a simple puzzle I wanted to discuss. A C program to take directory name as command line argument and print last 3 directories and 3 files in all subdirectories without using api 'system' inside it. suppose directory bond0 contains bond1, di2, bond3, bond4, bond5 and my_file1, my_file2, my_file3, my_file4, my_file5, my_file6 and bond1 contains bond6 my_file7 my_file8 my_file9 my_file10 program should output - bond3, bond4, bond5, my_file4, my_file5, my_file6, bond6, my_file8, my_file9, my_file10 My code for the above problem is here #include<dirent.h> #include<unistd.h> #include<string.h> #include<sys/stat.h> #include<stdlib.h> #include<stdio.h> char *directs[20], *files[20]; int i = 0; int j = 0; int count = 0; void printdir(char *); int count_dirs(char *); int count_files(char *); int main() { char startdir[20]; printf("Scanning user directories\n"); scanf("%s", startdir); printdir(startdir); } void printdir(char *dir) { printf("printdir called %d directory is %s\n", ++count, dir); DIR *dp = opendir(dir); int nDirs, nFiles, nD, nF; nDirs = 0; nFiles = 0; nD = 0; nF = 0; if (dp) { struct dirent *entry = 0; struct stat statBuf; nDirs = count_dirs(dir); nFiles = count_files(dir); printf("The no of subdirectories in %s is %d \n", dir, nDirs); printf("The no of files in %s is %d \n", dir, nFiles); while ((entry = readdir(dp)) != 0) { if (strcmp(entry->d_name, ".") == 0 || strcmp(entry->d_name, "..") == 0) { continue; } char *filepath = malloc(strlen(dir) + strlen(entry->d_name) + 2); if (filepath) { sprintf(filepath, "%s/%s", dir, entry->d_name); if (lstat(filepath, &statBuf) != 0) { } if (S_ISDIR(statBuf.st_mode)) { nD++; if ((nDirs - nD) < 3) { printf("The directory is %s\n",entry->d_name); } } else { nF++; if ((nFiles - nF) < 3) { printf("The files are %s\n", entry->d_name); } //if } //else free(filepath); } //if(filepath) } //while while ((entry = readdir(dp)) != 0) { if (strcmp(entry->d_name, ".") == 0 || strcmp(entry->d_name, "..") == 0) { continue; } printf("In second while loop *entry=%s\n",entry->d_name); char *filepath = malloc(strlen(dir) + strlen(entry->d_name) + 2); if (filepath) { sprintf(filepath, "%s/%s", dir, entry->d_name); if (lstat(filepath, &statBuf) != 0) { } if (S_ISDIR(statBuf.st_mode)) { printdir(entry->d_name); } } //else free(filepath); } //2nd while closedir(dp); } else { fprintf(stderr, "Error, cannot open directory %s\n", dir); } } //printdir int count_dirs(char *dir) { DIR *dp = opendir(dir); int nD; nD = 0; if (dp) { struct dirent *entry = 0; struct stat statBuf; while ((entry = readdir(dp)) != 0) { if (strcmp(entry->d_name, ".") == 0 || strcmp(entry->d_name, "..") == 0) { continue; } char *filepath = malloc(strlen(dir) + strlen(entry->d_name) + 2); if (filepath) { sprintf(filepath, "%s/%s", dir, entry->d_name); if (lstat(filepath, &statBuf) != 0) { fprintf(stderr, "File Not found? %s\n", filepath); } if (S_ISDIR(statBuf.st_mode)) { nD++; } else { continue; } free(filepath); } } closedir(dp); } else { fprintf(stderr, "Error, cannot open directory %s\n", dir); } return nD; } int count_files(char *dir) { DIR *dp = opendir(dir); int nF; nF = 0; if (dp) { struct dirent *entry = 0; struct stat statBuf; while ((entry = readdir(dp)) != 0) { if (strcmp(entry->d_name, ".") == 0 || strcmp(entry->d_name, "..") == 0) { continue; } char *filepath = malloc(strlen(dir) + strlen(entry->d_name) + 2); if (filepath) { sprintf(filepath, "%s/%s", dir, entry->d_name); if (lstat(filepath, &statBuf) != 0) { fprintf(stderr, "File Not found? 
%s\n", filepath); } if (S_ISDIR(statBuf.st_mode)) { continue; } else { nF++; } free(filepath); } } closedir(dp); } else { fprintf(stderr, "Error, cannot open file %s\n", dir); } return nF; } The above code I wrote is a bit not functioning correctly can some one help me to understand the error which is coming.So that I improve it further.There seems to be some small glitch which is not clear to me right now.

  • C array assignment and indexing with similar variable.

    - by Todd R.
    Hello! I apologize if this has been posted before. Compiling under two separate compilers, BCC 5.5 and LCC, yields 0 and 1. #include <stdio.h> int main(void) { int i = 0, array[2] = {0, 0}; array[i] = ++i; printf("%d\n", array[1]); } Am I to assume not all compilers evaluate expressions within an array from right to left?

  • Existing parsers in c# (BSD license or similar)

    - by Sylverdrag
    I am looking for parsers (in C#) for a bunch of formats (PHP, ASP, some XML-based formats, HTML... pretty much anything I can get my hands on). The purpose is to separate the text from the code and do some edits without messing up the code. I had a look at ANTLR, but while it seems like the "right tool", there is just too much prior knowledge assumed. I have an easier time writing a parser from scratch than understanding how to "easily" generate parsers from ANTLR. (I wrote a small parser for a specific type of RTF files within a couple of days, so the task is probably within my reach, but as I have no formal knowledge of parsing/lexing, I am at a loss with ANTLR.) Then it occurred to me that there must be existing parsers for many formats, so before I start writing yet another brand-new and potentially buggy reinvention of the wheel, I figured I would check what parsers already exist and can be reused in a commercial product. I could use parsers for just about every format in existence, so this question would be a good place to make a list of all existing free parsers written in C#, if there are any. Thanks in advance for your suggestions.

  • Hosting Javascript/CSS file on CDN similar to Google hosting jQuery

    - by Alec Smart
    Hello, I am wondering whether there are any hosts, or whether I can host my files (JS & CSS) on Google, so that they are cached and load quickly (thanks to a CDN and gzip). A number of my customers use these files, and I would prefer if they could somehow include a single file to receive the JS, ideally as filename.js?publickey=sdfgsdfg (which will be tied to a particular domain name). My hosting needs are very small: only about 100 KB. The problem is that the customers using the JS & CSS files have no clue about gzipping content or caching (their shared hosts do not support it), which causes the JS/CSS to take forever to load. I'm wondering if I can leverage an existing free service, though I don't mind paying either. Any suggestions? Thank you for your time.

  • jQuery Grouping Similar Items w/ Multidimensional Array

    - by NessDan
    So I have this structure setup: <ul> <li>http://www.youtube.com/watch?v=dw1Vh9Yzryo</li> (Vid1) <li>http://www.youtube.com/watch?v=bOF3o8B292U</li> (Vid2) <li>http://www.youtube.com/watch?v=yAY4vNJd7A8</li> (Vid3) <li>http://www.youtube.com/watch?v=yAY4vNJd7A8</li> <li>http://www.youtube.com/watch?v=dw1Vh9Yzryo</li> <li>http://www.youtube.com/watch?v=bOF3o8B292U</li> <li>http://www.youtube.com/watch?v=yAY4vNJd7A8</li> <li>http://www.youtube.com/watch?v=dw1Vh9Yzryo</li> </ul> Vid1 is repeated 3 times, Vid2 is repeated 3 times, and Vid3 is repeated 2 times. I want to put them into a structure where I can reference them like this: youtube[0][repeated] = 3; youtube[0][download] = "http://www.youtube.com/get_video?video_id=dw1Vh9Yzryo&fmt=36" youtube[1][repeated] = 3; youtube[1][download] = "http://www.youtube.com/get_video?video_id=bOF3o8B292U&fmt=36" youtube[2][repeated] = 3; youtube[2][download] = "http://www.youtube.com/get_video?video_id=yAY4vNJd7A8&fmt=36" "This video was repeated " + youtube[0][repeated] + " times and you can download it here: " + youtube[0][download]; How can I set this multidimensional array up? Been Googling for hours and I don't know how to set it up. Can anyone help me out?
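
    The grouping step itself is independent of jQuery; a Python sketch of counting duplicates and deriving the download link (the get_video URL pattern is copied from the question, not verified) looks like this:

        from collections import Counter
        from urllib.parse import urlparse, parse_qs

        urls = [
            "http://www.youtube.com/watch?v=dw1Vh9Yzryo",
            "http://www.youtube.com/watch?v=bOF3o8B292U",
            "http://www.youtube.com/watch?v=yAY4vNJd7A8",
            "http://www.youtube.com/watch?v=yAY4vNJd7A8",
            "http://www.youtube.com/watch?v=dw1Vh9Yzryo",
            "http://www.youtube.com/watch?v=bOF3o8B292U",
            "http://www.youtube.com/watch?v=yAY4vNJd7A8",
            "http://www.youtube.com/watch?v=dw1Vh9Yzryo",
        ]

        youtube = []
        for url, repeated in Counter(urls).items():
            video_id = parse_qs(urlparse(url).query)["v"][0]
            youtube.append({
                "repeated": repeated,
                "download": "http://www.youtube.com/get_video?video_id=%s&fmt=36" % video_id,
            })

        print("This video was repeated %d times: %s"
              % (youtube[0]["repeated"], youtube[0]["download"]))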

  • Algorithm for finding similar users through a join table

    - by Gdeglin
    I have an application where users can select a variety of interests from around 300 possible interests. Each selected interest is stored in a join table containing the columns user_id and interest_id. Typical users select around 50 interests out of the 300. I would like to build a system where users can find the top 20 users that have the most interests in common with them. Right now I am able to accomplish this using the following query: SELECT i2.user_id, count(i2.interest_id) AS count FROM interests_users as i1, interests_users as i2 WHERE i1.interest_id = i2.interest_id AND i1.user_id = 35 GROUP BY i2.user_id ORDER BY count DESC LIMIT 20; However, this query takes approximately 500 milliseconds to execute with 10,000 users and 500,000 rows in the join table. All indexes and database configuration settings have been tuned to the best of my ability. I have also tried avoiding the use of joins altogether using the following query: select user_id,count(interest_id) count from interests_users where interest_id in (13,14,15,16,17,18,19,20,21,22,23,24,25,26,27,28,29,30,31,32,33,34,35,36,37,38,39,40,41,42,43,44,45,46,47,48,49,50,51,52,53,54,55,56,57,58,59,60,61,62,63,64,65,66,68,69,70,71,72,73,74,75,76,77,78,79,80,81,82,83,84,85,86,87,88,89,90,91,92,93,94,95,96,97,98,508) group by user_id order by count desc limit 20; But this one is even slower (~800 milliseconds). How could I best lower the time that I can gather this kind of data to below 100 milliseconds? I have considered putting this data into a graph database like Neo4j, but I am not sure if that is the easiest solution or if it would even be faster than what I am currently doing.
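
    The same counting can also be done in application memory with an inverted index, which is roughly the shape of work a graph store would do for you; a Python sketch on toy data (the real table has 500,000 rows, so memory use is the trade-off to check):

        from collections import Counter, defaultdict

        # interests_users rows as (user_id, interest_id) pairs -- toy data
        rows = [(35, 13), (35, 14), (42, 13), (42, 14), (7, 14), (7, 99)]

        by_interest = defaultdict(set)          # inverted index: interest -> users
        for user_id, interest_id in rows:
            by_interest[interest_id].add(user_id)

        def top_matches(me, limit=20):
            """Count shared interests between `me` and every other user."""
            mine = [i for i, users in by_interest.items() if me in users]
            counts = Counter()
            for i in mine:
                for other in by_interest[i]:
                    if other != me:
                        counts[other] += 1
            return counts.most_common(limit)

        print(top_matches(35))   # [(42, 2), (7, 1)] on the toy data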

  • Run SQL Queries on DataTables, or similar, in .Net, without an RDBMS

    - by FastAl
    I'd like to have a DataSet or DataTables and be able to run SQL statements on them without using any external RDBMS. For example, take two DataTables in a DataSet and join them outright with a SQL statement and a Where clause, the result being a new DataTable. For example, if I have two DataTables named People and Orders in a DataSet (that I built using code, not from a database; pardon the old-fashioned join syntax): dim dtJoined as DataTable = MyDataSet.RunSQLQuery ("Select * from People, Orders Where People.PersonID=Orders.OrdereID") Thanks
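
    The question is about ADO.NET DataTables, but the underlying idea, an embedded in-memory SQL engine instead of an external RDBMS, is easy to see with Python's built-in sqlite3, shown here purely as a point of comparison:

        import sqlite3

        con = sqlite3.connect(":memory:")       # no external RDBMS involved
        con.execute("CREATE TABLE People (PersonID INTEGER, Name TEXT)")
        con.execute("CREATE TABLE Orders (OrderID INTEGER, PersonID INTEGER, Item TEXT)")
        con.executemany("INSERT INTO People VALUES (?, ?)", [(1, "Ann"), (2, "Bob")])
        con.executemany("INSERT INTO Orders VALUES (?, ?, ?)",
                        [(10, 1, "Book"), (11, 2, "Pen")])

        joined = con.execute(
            "SELECT * FROM People, Orders WHERE People.PersonID = Orders.PersonID"
        ).fetchall()
        print(joined)   # [(1, 'Ann', 10, 1, 'Book'), (2, 'Bob', 11, 2, 'Pen')]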

  • XPath: How to check multiple attributes across similar nodes

    - by Justin
    Hi, If I have some xml like: <root> <customers> <customer firstname="Joe" lastname="Bloggs" description="Member of the Bloggs family"/> <customer firstname="Joe" lastname="Soap" description="Member of the Soap family"/> <customer firstname="Fred" lastname="Bloggs" description="Member of the Bloggs family"/> <customer firstname="Jane" lastname="Bloggs" description="Is a member of the Bloggs family"/> </customers> </root> How do I get, in pure XPath - not XSLT - an xpath expression that detects rows where lastname is the same, but has a different description? So it would pull the last node above? Thanks a mill if you can help, been scratching at it for ages, and I can't find it by searching (apologies if it is) Cheers, J

  • C#: Problem trying to resolve a class when two namespaces are similar

    - by rally25rs
    I'm running into an issue where I can't make a reference to a class in a different namespace. I have 2 classes: namespace Foo { public class Class1 { ... } } namespace My.App.Foo { public class Class2 { public void SomeMethod() { var x = new Foo.Class1; // compile error! } } } The compile error is: The type or namespace name 'Class1' does not exist in the namespace 'My.App.Foo' In this situation, I can't seem to get Visual Studio to recognize that "Foo.Class1" refers to the first class. If I mouse-over "Foo", it shows that its trying to resolve that to "My.App.Foo.Class1" If I put the line: using Foo; at the top of the .cs file that contains Class2, then it also resolves that to "My.App.Foo". Is there some trick to referencing the right "Foo" namespace without just renaming the namespaces so they don't conflict? Both of these namespaces are in the same assembly.
