Search Results

Search found 3758 results on 151 pages for 'efficient'.


  • What's the most efficient way to set up a multi-lingual website

    - by Jasper De Bruijn
    I'm developing a website that will be available in different languages. It is a LAMP (Linux, Apache, MySQL, PHP) setup and uses Smarty, mostly as the template engine. The way we currently translate is with a self-written Smarty plugin, which recognizes certain tags in the HTML files and finds the corresponding tag in a previously defined language file. The HTML could look as follows: <p>Hi, welcome to $#gamedesc;!</p> And the language file could look like this: gamedesc:Poing 2009$; welcome:this is another tag$; Which would then output: <p>Hi, welcome to Poing 2009!</p> This system is very basic, but it is pretty hard to manage if, for example, I want to keep track of what has been translated so far, or give certain users the rights to translate only certain tags. I've been looking at some alternative approaches: either replacing the text file with XML files that could store some extra metadata, or storing all the texts in the database and retrieving them from there. My question is: what would be the best way to make this system both maintainable and able to perform well under high user traffic? Are there perhaps any (lightweight) plugins I could take a look at?
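
    A minimal sketch of the database-backed option described above, written in Python purely for illustration (the site itself is PHP/Smarty, and the table, column, and function names here are invented): translations live in one table with per-tag metadata, are loaded once per language, and are served from an in-memory dictionary afterwards, so rendering only does dictionary lookups.

        # Sketch only: hypothetical schema; one query per language per process, cached afterwards.
        import sqlite3

        conn = sqlite3.connect(":memory:")
        conn.execute("""CREATE TABLE translations (
            tag TEXT, lang TEXT, text TEXT, translated_by TEXT, status TEXT,
            PRIMARY KEY (tag, lang))""")
        conn.execute("INSERT INTO translations VALUES ('gamedesc', 'en', 'Poing 2009', 'alice', 'done')")

        _cache = {}  # lang -> {tag: text}

        def load_language(lang):
            # loaded once; reused for every subsequent page render
            rows = conn.execute("SELECT tag, text FROM translations WHERE lang = ?", (lang,))
            _cache[lang] = dict(rows)
            return _cache[lang]

        def t(tag, lang):
            table = _cache.get(lang) or load_language(lang)
            return table.get(tag, tag)  # fall back to the tag itself if untranslated

        print("Hi, welcome to %s!" % t("gamedesc", "en"))

    The extra columns (translated_by, status) are where the tracking and per-user translation rights mentioned in the question would hang off.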

    Read the article

  • Ruby efficient way of building an array from an array of arrays

    - by randombits
    I have an array of ActiveRecord objects, each of which has its own errors array. I want to flatten it all out and get only the unique values into one array. So the top-level array might look like:

        foo0 = Foo.new
        foo1 = Foo.new
        foo2 = Foo.new
        foo3 = Foo.new
        arr = [foo0, foo1, foo2, foo3]

    Each one of those objects could potentially have an array of errors, and I'd like to get just the unique messages out of them and put them in another array, say called error_arr. How would you do it the "Ruby" way?

    Read the article

  • Storing a bucket of numbers in an efficient data structure

    - by BlitzKrieg
    I have buckets of numbers, e.g. 1 to 4, 5 to 15, 16 to 21, 22 to 34, ... I have roughly 600,000 such buckets. The range of numbers that falls in each of the buckets varies. I need to store these buckets in a suitable data structure so that the lookup for a number is as fast as possible. So my question is: what is a suitable data structure and sorting mechanism for this type of problem? Thanks in advance.
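
    If the buckets don't overlap, one standard approach is to keep them sorted by lower bound and binary-search the lookup value, which is O(log n) even at ~600,000 buckets. A small sketch in Python (the bucket values are just the ones from the example):

        # Sketch: sorted, non-overlapping (lo, hi) pairs plus a parallel list of lower bounds.
        import bisect

        buckets = [(1, 4), (5, 15), (16, 21), (22, 34)]
        lows = [lo for lo, hi in buckets]

        def find_bucket(n):
            i = bisect.bisect_right(lows, n) - 1        # rightmost bucket whose lower bound <= n
            if i >= 0 and buckets[i][0] <= n <= buckets[i][1]:
                return buckets[i]
            return None                                 # n falls in a gap between buckets

        print(find_bucket(18))   # (16, 21)
        print(find_bucket(35))   # None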

    Read the article

  • How to shift pixels of a pixmap efficiently in Qt4

    - by stanleyxu2005
    I have implemented a marquee text widget using Qt4. I paint the text content onto a pixmap first, and then paint a portion of this pixmap onto a paint device by calling painter.drawTiledPixmap(offsetX, offsetY, myPixmap). My expectation is that Qt will fill the whole marquee text rectangle with the content from myPixmap. Is there an even faster way: shift all existing content to the left by 1px, and then fill the newly exposed 1px-wide and N-px-high area with the content from myPixmap?

    Read the article

  • Python: Most efficient way to concatenate and rearrange files

    - by user300890
    I am reading from several files; each file is divided into 2 pieces, first a header section of a few thousand lines followed by a body of a few thousand. My problem is I need to concatenate these files into one file where all the headers are at the top, followed by the bodies. Currently I am using two loops: one to pull out all the headers and write them, and the second to write the body of each file (I also include a tmp_count variable to limit the number of lines loaded into memory before dumping to file). This is pretty slow - about 6 minutes for a 13 GB file. Can anyone tell me how to optimize this, or if there is a faster way to do this in Python? Thanks! Here is my code:

        def cat_files_sam(final_file_name,work_directory_master,file_count):
            final_file = open(final_file_name,"w")
            if len(file_count) > 1:
                file_count=sort_output_files(file_count)
            # only for @ headers
            for bowtie_file in file_count:
                #print bowtie_file
                tmp_list = []
                tmp_count = 0
                for line in open(os.path.join(work_directory_master,bowtie_file)):
                    if line.startswith("@"):
                        if tmp_count == 1000000:
                            final_file.writelines(tmp_list)
                            tmp_list = []
                            tmp_count = 0
                        tmp_list.append(line)
                        tmp_count += 1
                    else:
                        final_file.writelines(tmp_list)
                        break
            for bowtie_file in file_count:
                #print bowtie_file
                tmp_list = []
                tmp_count = 0
                for line in open(os.path.join(work_directory_master,bowtie_file)):
                    if line.startswith("@"):
                        continue
                    if tmp_count == 1000000:
                        final_file.writelines(tmp_list)
                        tmp_list = []
                        tmp_count = 0
                    tmp_list.append(line)
                    tmp_count += 1
                final_file.writelines(tmp_list)
            final_file.close()
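
    A possible single-pass variant (only a sketch, not the original poster's code; the file names in the usage comment are hypothetical): make one pass over each input file, sending header lines straight to the output and body lines to a temporary file, then append the temporary file in large blocks. Ordinary file buffering replaces the manual tmp_list batching.

        import shutil, tempfile

        def cat_files_sam(final_file_name, input_paths):
            with open(final_file_name, "w") as final_file, tempfile.TemporaryFile("w+") as body:
                for path in input_paths:
                    with open(path) as f:
                        for line in f:
                            if line.startswith("@"):
                                final_file.write(line)   # header section goes straight out
                            else:
                                body.write(line)         # body section is kept for later
                body.seek(0)
                shutil.copyfileobj(body, final_file, 1024 * 1024)  # append all bodies in 1 MB blocks

        # cat_files_sam("merged.sam", ["run1.sam", "run2.sam"])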

    Read the article

  • What's faster and more efficient: loading a JS file with arrays, or populating arrays from tables?

    - by Leigh
    I am rebuilding an ecommerce site where the product data is stored in a multidimensional JS array that gets loaded on page load. This data is constantly being accessed with JS, due to the nature of the site, to update prices based on user selections. There are many options that affect the final price. From a programming standpoint, a DB table is much easier to maintain and update than JS arrays, and since I am porting the site over to PHP and MySQL, I have been considering moving these arrays into tables. So, would it be better to populate an array from the DB on load so that the pricing data is always available to the JS, or stay with hard-coded JS files? I considered getting data via Ajax as needed, but since this site has to constantly update pricing with user interaction, I have pretty much ruled that out. How would you handle it?
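
    One middle ground (a sketch in Python for illustration, since the port itself is PHP/MySQL; the table and column names are invented): keep the prices in a table and serialize them to JSON at page-load time, so the client-side JS still has every option in memory without hand-maintained JS files.

        import json, sqlite3

        conn = sqlite3.connect(":memory:")
        conn.execute("CREATE TABLE option_prices (product_id INTEGER, option TEXT, price REAL)")
        conn.executemany("INSERT INTO option_prices VALUES (?, ?, ?)",
                         [(1, "small", 9.99), (1, "large", 14.99), (2, "small", 19.99)])

        def pricing_json():
            rows = conn.execute("SELECT product_id, option, price FROM option_prices")
            data = {}
            for product_id, option, price in rows:
                data.setdefault(product_id, {})[option] = price
            return json.dumps(data)   # e.g. emitted into: <script>var PRICES = {...};</script>

        print(pricing_json())   # {"1": {"small": 9.99, "large": 14.99}, "2": {"small": 19.99}}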

    Read the article

  • Efficient storage/retrieval method for replayable comet style applications (Google Wave, Etherpad)

    - by Gareth Simpson
    I am considering a web application that would have the same kind of multi-user, automatic-saving, infinite undo/replay capabilities that you see in Google Wave and Etherpad (albeit on a drastically smaller scale and userbase). Before I go away and reinvent the wheel, is this something that has already been addressed as a piece of technology or library, or even just a design pattern? I know this isn't necessarily the best Stack Overflow question, as there is probably not a "right" answer, but my Google-fu has failed me and I'd just like a reading list! Ordinarily I would be developing under Python/Django, but this is not a firm requirement, just a preference :)
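
    Not a library recommendation, just a sketch of the data shape this usually boils down to (an append-only operation log, as used in event-sourcing and operational-transformation systems): every edit is stored with a sequence number, and any historical state is rebuilt by replaying a prefix of the log.

        class Document:
            def __init__(self):
                self.ops = []                      # append-only list of (author, op) records

            def apply(self, author, op):
                self.ops.append((author, op))      # persisted as one INSERT per op, never an UPDATE

            def state_at(self, revision):
                text = ""
                for author, (pos, insert) in self.ops[:revision]:
                    text = text[:pos] + insert + text[pos:]   # replaying insert-only ops, for brevity
                return text

        doc = Document()
        doc.apply("alice", (0, "Hello"))
        doc.apply("bob", (5, " world"))
        print(doc.state_at(1))   # "Hello"
        print(doc.state_at(2))   # "Hello world"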

    Read the article

  • Good data structure for efficient insert/querying on arbitrary properties

    - by Juliet
    I'm working on a project where arrays are the default data structure for everything, and every query is a linear search in the form of:

        Need a customer with a particular name?
            customer.Find(x => x.Name == name)
        Need a customer with a particular unique id?
            customer.Find(x => x.Id == id)
        Need a customer of a particular type and age?
            customer.Find(x => x is PreferredCustomer && x.Age >= age)
        Need a customer of a particular name and age?
            customer.Find(x => x.Name == name && x.Age == age)

    In almost all instances, the criteria for lookups are well-defined. For example, we only search for customers by one or more of the properties Id, Type, Name, or Age. We rarely search by anything else. Is there a good data structure to support arbitrary queries of these types with lookups better than O(n)? Are there any out-of-the-box implementations for .NET?
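
    One common direction, sketched in Python only to show the shape (in .NET the analogous container would be a Dictionary<TKey, List<Customer>> per indexed property): keep a hash index for each frequently queried property, and fall back to a linear scan for everything else.

        from collections import defaultdict

        class CustomerIndex:
            def __init__(self, customers):
                self.customers = list(customers)
                self.by_id = {c["id"]: c for c in self.customers}   # unique key
                self.by_name = defaultdict(list)                     # non-unique key
                for c in self.customers:
                    self.by_name[c["name"]].append(c)

            def find_by_id(self, id):
                return self.by_id.get(id)                            # O(1) on average

            def find_by_name(self, name):
                return self.by_name.get(name, [])                    # O(1) on average

            def find(self, predicate):                               # arbitrary criteria: still O(n)
                return [c for c in self.customers if predicate(c)]

        idx = CustomerIndex([{"id": 1, "name": "Ann", "age": 30}, {"id": 2, "name": "Bob", "age": 41}])
        print(idx.find_by_name("Ann"))
        print(idx.find(lambda c: c["age"] >= 40))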

    Read the article

  • Efficient algorithm for Next button on a MySQL result set

    - by David Grayson
    I have a website that lets people view rows in a table (each row is a picture). There are more than 100,000 rows. You can view different subsets of the rows, and you can view them with different sort orders. While you are viewing one of the rows, you can click the "Next" or "Previous" buttons to go to the next/previous row in the list. How would you implement the "Next" and "Previous" features of the website? More specifically, if you have an arbitrary query that returns a list of up to 100,000+ rows, and you know some information about the current row someone is viewing, how do you determine the NEXT row efficiently? Here is the pseudo-code of the solution I came up with when the website was young; it worked well when there were only 1000 rows, but now that there are 100,000 rows I think it is eating up too much memory:

        int nextRowId(string query, int currentRowId)
        {
            array allRowIds = mysql_query(query); // Takes up a lot of memory!
            int currentIndex = (index of currentRowId in allRowIds); // Takes time!
            return allRowIds[currentIndex+1];
        }

    While you are thinking about this problem, remember that the website can store more information about the current row than just its ID (for example, the position of the current row in the result set), and this information can be used as a hint to help determine the ID of the next row. Edit: Sorry for not mentioning this earlier, but this isn't just a static website: rows can often be added to the list, and rows can be re-ordered in the list. (Much more rarely, rows can be removed from the list.) I think that I should worry about that kind of thing, but maybe you can convince me otherwise.
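
    A common way to avoid loading the whole id list is what is often called keyset (cursor) pagination: the current row's sort value and id act as the cursor, so the next row is one small indexed query. A sketch that builds such a query in Python (the pictures table and taken_at column are hypothetical):

        def next_row_sql(sort_column):
            # assumes an index on (sort_column, id); the OR term handles ties in the sort column
            return (
                "SELECT id FROM pictures "
                "WHERE {0} > %s OR ({0} = %s AND id > %s) "
                "ORDER BY {0} ASC, id ASC "
                "LIMIT 1"
            ).format(sort_column)

        # usage with a MySQL driver, e.g.:
        # cursor.execute(next_row_sql("taken_at"), (current_taken_at, current_taken_at, current_id))
        print(next_row_sql("taken_at"))

    Because the cursor is the sort value rather than a position, rows being added or re-ordered elsewhere in the list don't shift the result the way an offset-based approach would.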

    Read the article

  • Most efficient method of turning multiple 1D arrays into columns of a 2D array

    - by Ty W
    As I was writing a for loop earlier today, I thought there must be a neater way of doing this... so I figured I'd ask. I looked briefly for a duplicate question but didn't see anything obvious. The problem: given N arrays of length M, turn them into an M-row by N-column 2D array. Example:

        $id = [1,5,2,8,6]
        $name = [a,b,c,d,e]
        $result = [[1,a], [5,b], [2,c], [8,d], [6,e]]

    My solution is pretty straightforward and probably not optimal, but it does work:

        <?php
        // $row is returned from a DB query
        // $row['<var>'] is a comma separated string of values
        $categories = array();
        $ids = explode(",", $row['ids']);
        $names = explode(",", $row['names']);
        $titles = explode(",", $row['titles']);
        for($i = 0; $i < count($ids); $i++) {
            $categories[] = array("id" => $ids[$i], "name" => $names[$i], "title" => $titles[$i]);
        }
        ?>

    Note: I didn't put the name => value bit in the spec, but it'd be awesome if there was some way to keep that as well.
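
    The transformation being described is essentially a "zip" of parallel arrays (in PHP, array_map(null, $id, $name) produces the same pairing). A sketch in Python just to show the shape, reusing the values from the example above:

        ids = [1, 5, 2, 8, 6]
        names = ["a", "b", "c", "d", "e"]

        result = list(zip(ids, names))
        print(result)   # [(1, 'a'), (5, 'b'), (2, 'c'), (8, 'd'), (6, 'e')]

        # keeping the name => value pairing as well, as a list of dicts:
        titles = ["t1", "t2", "t3", "t4", "t5"]
        categories = [{"id": i, "name": n, "title": t} for i, n, t in zip(ids, names, titles)]
        print(categories[0])   # {'id': 1, 'name': 'a', 'title': 't1'}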

    Read the article

  • Efficient persistent storage for simple id to table of values map for java

    - by wds
    I need to store some data that follows the simple pattern of mapping an "id" to a full table (with multiple rows) of several columns (i.e. some integer values [u, v, w]). The size of one of these tables would be a couple of KB. Basically what I need is to store a persistent cache of some intermediate results. This could quite easily be implemented as simple SQL, but there are a couple of problems. Namely, I need to compress the size of this structure on disk as much as possible (because of the amount of values I'm storing). Also, it's not transactional; I just need to write once and simply read the contents of the entire table, so a relational DB isn't actually a very good fit. I was wondering if anyone had any good suggestions? For some reason I can't seem to come up with something decent at the moment. Especially something with an API in Java would be nice.
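
    Only a sketch of the general shape, in Python rather than the requested Java (in Java the same idea could be built on DataOutputStream plus a Deflater, or an embedded key-value store): pack each table of [u, v, w] rows into a flat binary blob, compress it, and store it keyed by id. Write-once / read-whole-table access needs nothing transactional.

        import struct, zlib

        def encode(rows):                     # rows: list of (u, v, w) integer triples
            packed = b"".join(struct.pack("<iii", u, v, w) for u, v, w in rows)
            return zlib.compress(packed, 9)   # a few KB per table compresses well

        def decode(blob):
            packed = zlib.decompress(blob)
            return [struct.unpack_from("<iii", packed, off) for off in range(0, len(packed), 12)]

        store = {}                            # id -> blob; on disk this could be one file per id
        store[42] = encode([(1, 2, 3), (4, 5, 6)])
        print(decode(store[42]))              # [(1, 2, 3), (4, 5, 6)]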

    Read the article

  • Memory efficient collection class

    - by Joe
    I'm building an array of dictionaries (called keys) in my iPhone application to hold the section names and row counts for a table view. The code looks like this:

        [self.results removeAllObjects];
        [self.keys removeAllObjects];
        NSUInteger i,j = 0;
        NSString *key = [NSString string];
        NSString *prevKey = [NSString string];
        if ([self.allResults count] > 0) {
            prevKey = [NSString stringWithString:[[[self.allResults objectAtIndex:0] valueForKey:@"name"] substringToIndex:1]];
            for (NSDictionary *theDict in self.allResults) {
                key = [NSString stringWithString:[[theDict valueForKey:@"name"] substringToIndex:1]];
                if (![key isEqualToString:prevKey]) {
                    NSDictionary *newDictionary = [NSDictionary dictionaryWithObjectsAndKeys:
                        [NSNumber numberWithInt:i], @"count",
                        prevKey, @"section",
                        [NSNumber numberWithInt:j], @"total", nil];
                    [self.keys addObject:newDictionary];
                    prevKey = [NSString stringWithString:key];
                    i = 1;
                } else {
                    i++;
                }
                j++;
            }
            NSDictionary *newDictionary = [NSDictionary dictionaryWithObjectsAndKeys:
                [NSNumber numberWithInt:i], @"count",
                prevKey, @"section",
                [NSNumber numberWithInt:j], @"total", nil];
            [self.keys addObject:newDictionary];
        }
        [self.tableview reloadData];

    The code works fine the first time through, but I sometimes have to rebuild the entire table, so I redo this code. That works fine on the simulator, but on my device the program bombs when I execute the reloadData line:

        malloc: *** mmap(size=3772944384) failed (error code=12)
        *** error: can't allocate region
        *** set a breakpoint in malloc_error_break to debug
        malloc: *** mmap(size=3772944384) failed (error code=12)
        *** error: can't allocate region
        *** set a breakpoint in malloc_error_break to debug
        Program received signal: “EXC_BAD_ACCESS”.

    If I remove the reloadData line, the code works on the device. I'm wondering if this is something to do with the way I've built the keys array (i.e. using autoreleased strings and dictionaries).

    Read the article

  • Is writing shorter code/algorithms more efficient (performance-wise)?

    - by Carlos
    After coming across the code golf trivia around the site, it is obvious people try to find ways to write code and algorithms as short as they possibly can in terms of characters, lines and total size, even if that means writing something like:

        n=input()
        while n>1:n=(n/2,n*3+1)[n%2];print n

    So as a beginner I start to wonder whether size actually matters :D. It is obviously a very subjective question, highly dependent on the actual code being used, but what is the rule of thumb in the real world? In the case that size doesn't matter, how come we don't focus more on performance rather than size?

    Read the article

  • Is there a more efficient way to retrieve tables from websites using WPF + WebClient?

    - by Jordan Brooker
    I'm new to the community, but here is my first question, one I am stuck on. I am new to WPF and WebClient using C#, and I am attempting to make a program that accesses www.nba.com to populate a combobox I have with team names; then, when a user selects a team from the combobox, I want to populate a portion of the main window with the roster from the team's home site, same style and everything. I was able to populate the combobox using WebClient.OpenRead and reading in the markup to extract the team names. Now I am on the more difficult part. I was planning on using the same method to grab all the markup and then somehow display the table in a content panel, but I feel that this is a very tedious thing to do. Can anyone give me any tips for completing this, or is there a method in the WebClient class that allows me to search a webpage for a table or object other than text? Thanks.

    Read the article

  • Most efficient way to write over file after reading

    - by Ryan McClure
    I'm reading in some data from a file, manipulating it, and then overwriting it to the same file. Until now, I've been doing it like so:

        open (my $inFile, $file) or die "Could not open $file: $!";
        $retString .= join ('', <$inFile>);
        ...
        close ($inFile);
        open (my $outFile, '>', $file) or die "Could not open $file: $!";
        print $outFile $retString;
        close ($outFile);

    However, I realized I can just use the truncate function and open the file for read/write:

        open (my $inFile, '+<', $file) or die "Could not open $file: $!";
        $retString .= join ('', <$inFile>);
        ...
        truncate $inFile, 0;
        print $inFile $retString;
        close ($inFile);

    I don't see any examples of this anywhere. It seems to work well, but am I doing it correctly? Is there a better way to do this?

    Read the article

  • Simple and efficient distribution of C++/Boost source code (amalgamation)

    - by Arrieta
    My job mostly consists of engineering analysis, but I find myself distributing code more and more frequently among my colleagues. A big pain is that not every user is proficient in the intricacies of compiling source code, and I cannot distribute executables. I've been working with C++ using Boost, and the problem is that I cannot ask every sysadmin of every network to install the libraries. Instead, I want to distribute a single source file (or as few as possible) so that the user can just run g++ source.c -o program. So, the question is: can you pack the Boost libraries with your code and end up with a single file? I am talking about the Boost libraries which are "headers only" or "templates only". As an inspiration, please look at the distribution of SQLite or the Lemon Parser Generator; the author amalgamates the stuff into a single source file which is trivial to compile. Thank you.

    Read the article

  • Passing parameters to DAO methods to fetch entities most efficiently for read-only access

    - by Blankman
    A lot of my use of Hibernate, at least for data that is presented on many parts of the web application, is for read-only purposes. I want to add some parameters to my DAO methods so I can modify the way Hibernate pulls the data and how it handles transactions, etc. Example usage: data on the front page of my website is displayed to the users; it is read-only, so I want to avoid any session/entity tracking that Hibernate usually does. This is data that is read-only, will not be changed in this transaction, etc. What would be the most performant way to pull the data? (The code below is C#/NHibernate; I'm implementing this in Java as I learn it.)

        public IList<Article> GetArticles()
        {
            return Session.CreateCriteria(typeof(Article))
                // some where clause
        }

    Read the article

  • Efficient file buffering & scanning methods for large files in python

    - by eblume
    The description of the problem I am having is a bit complicated, and I will err on the side of providing more complete information. For the impatient, here is the briefest way I can summarize it: What is the fastest (least execution time) way to split a text file into ALL (overlapping) substrings of size N (bound N, e.g. 36) while throwing out newline characters? I am writing a module which parses files in the FASTA ASCII-based genome format. These files comprise what is known as the 'hg18' human reference genome, which you can download from the UCSC genome browser (go slugs!) if you like. As you will notice, the genome files are composed of chr[1..22].fa and chr[XY].fa, as well as a set of other small files which are not used in this module. Several modules already exist for parsing FASTA files, such as BioPython's SeqIO. (Sorry, I'd post a link, but I don't have the points to do so yet.) Unfortunately, none of the modules I've been able to find does the specific operation I am trying to do. My module needs to split the genome data ('CAGTACGTCAGACTATACGGAGCTA' could be a line, for instance) into every single overlapping N-length substring. Let me give an example using a very small file (the actual chromosome files are between 355 and 20 million characters long) and N=8:

        import cStringIO
        example_file = cStringIO.StringIO("""\
        header
        CAGTcag
        TFgcACF
        """)
        for read in parse(example_file):
            print read

        CAGTCAGTF
        AGTCAGTFG
        GTCAGTFGC
        TCAGTFGCA
        CAGTFGCAC
        AGTFGCACF

    The function that I found had the absolute best performance from the methods I could think of is this:

        def parse(file):
            size = 8 # of course in my code this is a function argument
            file.readline() # skip past the header
            buffer = ''
            for line in file:
                buffer += line.rstrip().upper()
                while len(buffer) >= size:
                    yield buffer[:size]
                    buffer = buffer[1:]

    This works, but unfortunately it still takes about 1.5 hours (see note below) to parse the human genome this way. Perhaps this is the very best I am going to see with this method (a complete code refactor might be in order, but I'd like to avoid it as this approach has some very specific advantages in other areas of the code), but I thought I would turn this over to the community. Thanks! Note, this time includes a lot of extra calculation, such as computing the opposing strand read and doing hashtable lookups on a hash of approximately 5G in size. Post-answer conclusion: it turns out that using fileobj.read() and then manipulating the resulting string (string.replace(), etc.) took relatively little time and memory compared to the remainder of the program, and so I used that approach. Thanks everyone!
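
    A sketch of the whole-file variant mentioned in the closing note (the function name is made up): read the file once, strip the header line and newlines with plain string operations, then slide a window over the resulting string. A chromosome-sized string of tens of millions of characters fits comfortably in memory.

        def parse_whole(path, size=36):
            with open(path) as f:
                f.readline()                          # skip the FASTA header line
                seq = f.read().replace("\n", "").upper()
            for i in range(len(seq) - size + 1):
                yield seq[i:i + size]                 # every overlapping read of length `size`

        # for read in parse_whole("chr21.fa", 36):
        #     process(read)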

    Read the article

  • Efficient implementation of natural logarithm (ln) and exponentiation

    - by Donotalo
    Basically, I'm looking for an implementation of the log() and exp() functions provided in the C library <math.h>. I'm working with 8-bit microcontrollers (OKI 411 and 431). I need to calculate the Mean Kinetic Temperature. The requirement is that we should be able to calculate MKT as fast as possible and with as little code memory as possible. The compiler comes with log() and exp() functions in <math.h>, but calling either function and linking with the library causes the code size to increase by 5 kilobytes, which will not fit in one of the micros we work with (OKI 411), because our code has already consumed ~12K of the available ~15K code memory. The implementation I'm looking for should not use any other C library functions (like pow(), sqrt(), etc.). This is because all library functions are packed in one library, and even if one function is called, the linker will bring the whole 5K library into code memory.
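
    The usual small-footprint technique is range reduction plus a short polynomial, sketched here in Python only to show the math (on the OKI parts this would be a few lines of fixed-point C, and the function name and constants are mine): reduce x to m * 2^e with m in [1, 2), then approximate ln(m) with a few terms of 2*artanh((m-1)/(m+1)). No pow(), sqrt() or other libm calls are needed, and exp() can be built on the same constants.

        import math   # used here only to check the error of the approximation

        LN2 = 0.6931471805599453

        def ln_approx(x):
            # range reduction: x = m * 2**e (on a micro: count shifts / read the exponent field)
            e = 0
            while x >= 2.0:
                x *= 0.5; e += 1
            while x < 1.0:
                x *= 2.0; e -= 1
            z = (x - 1.0) / (x + 1.0)
            z2 = z * z
            # 3-term odd polynomial; error is roughly 1e-4 over [1, 2)
            return 2.0 * z * (1.0 + z2 / 3.0 + z2 * z2 / 5.0) + e * LN2

        for v in (0.37, 1.0, 42.0, 1234.5):
            print(v, ln_approx(v), math.log(v))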

    Read the article
