Search Results

Search found 3758 results on 151 pages for 'efficient'.

Page 100/151

  • Performing regex on a stream

    - by takoi
    I have some large text files on which I'm going to perform consecutive matching (just capturing, not replacing). I don't think it's a good idea to keep the whole file in memory; I'd rather use a Reader. What I know about the input is that if there's a match, it won't span more than 5 lines. So my idea is to have some sort of buffer that keeps just those 5 lines or so, do the first search, and continue. But the buffer has to "know" where the regex match ended for this to work: if the match ends at line 2, the next search should start from there. Is it possible to do something like this in an efficient way?
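
    A minimal Java sketch of the sliding-buffer idea, assuming line-oriented input: keep at most five lines, run the pattern over the joined buffer, and drop everything up to the end of the last match so the next search starts from there (class and method names are illustrative):

        import java.io.BufferedReader;
        import java.io.IOException;
        import java.io.Reader;
        import java.util.ArrayList;
        import java.util.List;
        import java.util.regex.Matcher;
        import java.util.regex.Pattern;

        public class StreamMatcher {
            private static final int WINDOW = 5; // a match never spans more than 5 lines

            public static List<String> findAll(Reader source, Pattern pattern) throws IOException {
                BufferedReader reader = new BufferedReader(source);
                StringBuilder buffer = new StringBuilder();
                List<String> matches = new ArrayList<>();
                int bufferedLines = 0;
                String line;
                while ((line = reader.readLine()) != null) {
                    if (buffer.length() > 0) buffer.append('\n');
                    buffer.append(line);
                    bufferedLines++;

                    Matcher m = pattern.matcher(buffer);
                    int lastEnd = 0;
                    while (m.find()) {
                        matches.add(m.group());
                        lastEnd = m.end();
                    }
                    if (lastEnd > 0) {
                        // Restart the next search right after the last match.
                        String rest = buffer.substring(lastEnd);
                        buffer.setLength(0);
                        buffer.append(rest);
                        bufferedLines = rest.isEmpty() ? 0 : rest.split("\n", -1).length;
                    } else if (bufferedLines > WINDOW) {
                        // No match can start in the oldest line any more, so drop it.
                        int firstNewline = buffer.indexOf("\n");
                        buffer.delete(0, firstNewline + 1);
                        bufferedLines--;
                    }
                }
                return matches;
            }
        }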

    Read the article

  • How to rate a Connect Four game situation in Java

    - by MrPink
    Hey, I am trying to write a simple AI for a "Connect Four" game. The basic game mechanics are done, so I can drop in coins of different colors, they stack on each other, fill a 2D array, and so on. So far the insert method looks like this: public int insert(int x, int color) // 0 = empty, 1 = player1, 2 = player2. X is the horizontal coordinate; the y coordinate is determined by how many stones are already in that column, so I think the idea is obvious. Now the problem is that I have to rate specific game situations: find how many new pairs, triples and possible four-in-a-rows I can get in a given situation, and then give each situation a value. With these values I can set up a game tree and decide which move is best next (later on implementing alpha-beta pruning). My current problem is that I can't think of an efficient way to implement such a rating in a Java method. Any ideas would be greatly appreciated! Greetings from Germany, Mr. Pink
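
    One common way to rate a position is to score every window of four cells (horizontal, vertical and both diagonals) by how many of the player's stones it contains, counting only windows that are still winnable. A rough Java sketch along those lines, with illustrative weights:

        // Assumes board[x][y] holds 0 = empty, 1 = player1, 2 = player2, as in the question.
        public class Evaluator {
            private static final int[][] DIRECTIONS = {{1, 0}, {0, 1}, {1, 1}, {1, -1}};

            public static int rate(int[][] board, int color) {
                int width = board.length, height = board[0].length;
                int score = 0;
                for (int x = 0; x < width; x++) {
                    for (int y = 0; y < height; y++) {
                        for (int[] d : DIRECTIONS) {
                            int endX = x + 3 * d[0], endY = y + 3 * d[1];
                            if (endX < 0 || endX >= width || endY < 0 || endY >= height) continue;
                            int own = 0, free = 0;
                            for (int i = 0; i < 4; i++) {
                                int cell = board[x + i * d[0]][y + i * d[1]];
                                if (cell == color) own++;
                                else if (cell == 0) free++;
                            }
                            if (own + free == 4) {              // window still winnable for 'color'
                                if (own == 2) score += 1;       // pair
                                else if (own == 3) score += 10; // triple
                                else if (own == 4) score += 1000; // four in a row
                            }
                        }
                    }
                }
                return score;
            }
        }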

    Read the article

  • Manage dirty rect efficiently

    - by Tianzhou Chen
    Hi all, I am implementing a view system and I want to keep track of all the dirty rects. At the moment my dirty-rect management seems to be a bottleneck for the whole system. On one hand, invalidating the bounding box of the dirty region is an easy approach. But consider this situation: say I have a client area of 100x100, one dirty rect at (0, 0, 1, 1) and another at (99, 99, 1, 1). Invalidating the bounding box, which turns out to be the whole 100x100 area, is not efficient at all. Can someone give me a hint, or a link to related literature? Thanks in advance!
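
    One possible alternative to a single bounding box is to keep a short list of dirty rects and merge a new rect into an existing one only when the merged box does not waste much area. A sketch of that idea, with arbitrary thresholds:

        import java.awt.Rectangle;
        import java.util.ArrayList;
        import java.util.List;

        public class DirtyRegion {
            private static final int MAX_RECTS = 8;        // cap to bound per-frame work
            private static final double WASTE_FACTOR = 1.5;

            private final List<Rectangle> rects = new ArrayList<>();

            public void invalidate(Rectangle r) {
                for (Rectangle existing : rects) {
                    Rectangle union = existing.union(r);
                    long unionArea = (long) union.width * union.height;
                    long combined = (long) existing.width * existing.height
                                  + (long) r.width * r.height;
                    if (unionArea <= combined * WASTE_FACTOR) {
                        existing.setBounds(union);          // cheap merge, little wasted area
                        return;
                    }
                }
                if (rects.size() < MAX_RECTS) {
                    rects.add(new Rectangle(r));
                } else {
                    rects.get(0).add(r);                    // too many fragments: grow an existing rect
                }
            }

            public List<Rectangle> dirtyRects() {
                return rects;
            }
        }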

    Read the article

  • The best way to make a CodeIgniter website multi-language: calling from lang arrays depending on the lang session

    - by artmania
    Hi friends, I've been researching for hours and hours, but I could not find any clear, efficient way to do it. I have a CodeIgniter-based website in English and I now have to add Polish as a second language. What is the best way to serve my site in two languages depending on the visitor's selection? Is there a way to create array files for each language and load them in the view files depending on a session value set by the language selection? I don't want to use the database. I appreciate any help; I'm running up against a deadline. Thanks!!

    Read the article

  • Is there a better way to do SELECT queries in MySQL and sort them in PHP than this way?

    - by Kent
    I am just learning PHP/MySQL. One thing I have to do a lot is display data that was previously inserted into the database in the user's browser. So I am doing this:

        $select = mysql_query('SELECT * FROM pages');
        while ($return = mysql_fetch_assoc($select)) {
            $title = $return['title'];
            $author = $return['author'];
            $content = $return['content'];
        }

    Then I can use these variables throughout the page. Doing it this way isn't an issue when I only have 3 columns in a table, but what if I am dealing with a huge database with many more columns? I have a nagging feeling that the pros do it in some more efficient way, where they maybe loop through the columns of the table they are selecting from and associate them with variables automatically. Is that the case, or is the above how you do it too?

    Read the article

  • To create new DB connection or not?

    - by Yeti
    I'm running a cron job (every 15 minutes) which takes about a minute to execute. It makes lots of API calls and stores data in the database. Right now I create a MySQL connection at the beginning and use the same connection throughout the code. Most of the time is spent making the API calls. Would it be more efficient to create a new database connection only when it's time to store the data, as below?

        1. Kill the last connection
        2. Wait for the API call to complete
        3. Create a new DB connection
        4. Execute the query
        5. Go to 1
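
    A minimal JDBC sketch of the sequence described above (table, column and helper names are hypothetical). Whether it actually helps depends mostly on how the cost of re-establishing a connection compares with the risk of an idle connection being timed out during the long API calls:

        import java.sql.Connection;
        import java.sql.DriverManager;
        import java.sql.PreparedStatement;
        import java.util.List;

        public class CronJob {
            // Placeholder for whatever the API returns; name and shape are hypothetical.
            record ApiResult(String payload) {}

            public void run(List<String> endpoints) throws Exception {
                for (String endpoint : endpoints) {
                    ApiResult result = fetchFromApi(endpoint); // slow part, no DB connection held

                    // Open a connection only for the short store step, then release it.
                    try (Connection conn = DriverManager.getConnection(
                             "jdbc:mysql://localhost/mydb", "user", "pass");
                         PreparedStatement stmt = conn.prepareStatement(
                             "INSERT INTO results (endpoint, payload) VALUES (?, ?)")) {
                        stmt.setString(1, endpoint);
                        stmt.setString(2, result.payload());
                        stmt.executeUpdate();
                    }
                }
            }

            private ApiResult fetchFromApi(String endpoint) {
                return new ApiResult("{}"); // stands in for the real (slow) API call
            }
        }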

    Read the article

  • Function chaining depending on boolean result

    - by Markive
    This is really just an efficiency question. I'm interested to know whether there is a more efficient or logical way to handle this sort of scenario. In my ASP.NET application I am running a script to generate a new project; my code at the top level looks like this:

        Dim ok As Boolean = True
        ok = createFolderStructure()
        If ok Then ok = createMDB()
        If ok Then ok = createProjectConfig()
        If ok Then ok = updateCompanyConfig()

    I create a Boolean, each function returns a Boolean result, and the next function in the chain only runs if the previous one was successful. I do this because an ASP.NET application will continue to run through the page life cycle unless there is an unhandled exception, and I don't want my whole application to be screwed up if something in the chain goes wrong (there is a lot of copying and deleting of files, etc. in this example). I was just wondering how other people handle this scenario? The VB.NET single-line If statement is quite succinct, but I'm wondering if there is a better way.
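
    One alternative is to collapse the chain into a single short-circuiting expression, so each step runs only if the previous one returned True; VB.NET's AndAlso behaves this way. A sketch of the shape in Java, with stub methods standing in for the functions in the question:

        public class ProjectSetup {
            public boolean createProject() {
                // && short-circuits: each step runs only if the previous one succeeded
                // (the VB.NET equivalent operator is AndAlso).
                return createFolderStructure()
                    && createMDB()
                    && createProjectConfig()
                    && updateCompanyConfig();
            }

            // Stubs standing in for the question's methods.
            private boolean createFolderStructure() { return true; }
            private boolean createMDB() { return true; }
            private boolean createProjectConfig() { return true; }
            private boolean updateCompanyConfig() { return true; }
        }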

    Read the article

  • Ruby parametrized regular expression

    - by astropanic
    I have a string like "{some|words|are|here}" or "{another|set|of|words}". In general the string consists of an opening curly bracket, words delimited by pipes, and a closing curly bracket. What is the most efficient way to get the selected word out of such a string? I would like to do something like this:

        @my_string = "{this|is|a|test|case}"
        @my_string.get_column(0) # => "this"
        @my_string.get_column(2) # => "is"
        @my_string.get_column(4) # => "case"

    What should the get_column method contain?
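
    The underlying idea is language-independent: strip the braces, split on the pipe character, and index into the result. A minimal sketch in Java, with an illustrative helper name:

        public class Columns {
            public static String getColumn(String source, int index) {
                String[] words = source.replaceAll("[{}]", "").split("\\|");
                return words[index];
            }

            public static void main(String[] args) {
                String myString = "{this|is|a|test|case}";
                System.out.println(getColumn(myString, 0)); // this
                System.out.println(getColumn(myString, 4)); // case
            }
        }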

    Read the article

  • Error with global variable in PHP not extending to included PHP file

    - by phileaton
    I have an HTML file with a PHP section in the header, like so: $page = basename($_SERVER['PHP_SELF']); Further down in the HTML file I include a footer: <?php include("footer.php"); ?>. In that script I reference the $page variable, but it doesn't find it. I tried echoing the variable and nothing happened. When I place the basename() call in the footer.php file itself, it handles it fine. I'd like to keep the code efficient, so how can I get footer.php to see that $page variable? Is there a better way to do this than setting the $page variable twice?

    Read the article

  • Is it faster to count down than it is to count up?

    - by Bob
    Our computer science teacher once said that for some reason it is more efficient to count down than to count up. For example, if you need a FOR loop and the loop index is not used anywhere (like printing a line of N '*' characters to the screen), I mean that code like this:

        for (i=N; i>=0; i--)
            putchar('*');

    is better than:

        for (i=0; i<N; i++)
            putchar('*');

    Is it really true? And if so, does anyone know why?

    Read the article

  • What is the fastest method to calculate a substring?

    - by Misha Moroshko
    I have a huge "binary" string, like: 1110 0010 1000 1111 0000 1100 1010 0111... Its length is 0 modulo 4 and may reach 500,000. I also have a corresponding array: {14, 2, 8, 15, 0, 12, 10, 7, ...} (every number in the array corresponds to 4 bits in the string). Given this string, this array, and a number N, I need to calculate the substring string.substr(4*N, 4), i.e.:

        for N=0 the result should be 1110
        for N=1 the result should be 0010

    I need to perform this task many, many times, and my question is: what would be the fastest method to calculate this substring? One method is to compute the substring straightforwardly with string.substr(4*N, 4); I'm afraid this is not efficient for such huge strings. Another method is to use array[N].toString(2) and then pad the result with zeros if needed; I'm not sure how fast that is. Maybe you have other ideas?
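
    A sketch of the second idea: because each array element already holds the value of its 4-bit group, the answer can be read from a precomputed table of the 16 possible nibble strings, so no work is done on the long string at all (Java here is only for illustration):

        public class NibbleLookup {
            private static final String[] NIBBLES = new String[16];
            static {
                for (int i = 0; i < 16; i++) {
                    String bits = Integer.toBinaryString(i);
                    NIBBLES[i] = "0000".substring(bits.length()) + bits; // left-pad to 4 bits
                }
            }

            // array is the companion array from the question: one value per 4-bit group
            public static String group(int[] array, int n) {
                return NIBBLES[array[n]];
            }

            public static void main(String[] args) {
                int[] array = {14, 2, 8, 15, 0, 12, 10, 7};
                System.out.println(group(array, 0)); // 1110
                System.out.println(group(array, 1)); // 0010
            }
        }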

    Read the article

  • Hudson: where to download a file, and how to stop specific running builds?

    - by Kim Jong Woo
    I have a file that is generated on the Hudson server at /var/lib/hudson/jobs/jobtitle/1/out.txt. I need to fetch this file, but doing a GET request for http://myhudson:8090/job/jobtitle/1/out.txt doesn't actually locate it. Basically, I have another box that will grab this file from the Hudson server and make out.txt available for download. Another challenge is the build-number directories: how can I use the Hudson API to stop or delete specific running builds? Right now I am forced to iterate through all build numbers and send a STOP or DELETE API call from PHP, using wget to do the REST call, which is not very efficient:

        for ($i = 0; $i < 3000; $i++) {
            exec('wget -O /dev/null "http://myhudson:8090/job/jobtitle/' . $i . '/stop"');
        }

    Read the article

  • Getting list of fields back from 'use fields' pragma?

    - by makenai
    So I'm familiar with the use fields pragma in Perl, which can be used to restrict the fields that are stored in a class:

        package Fruit;
        use fields qw( color shape taste );

        sub new {
            my ( $class, $params ) = @_;
            my $self = fields::new( $class ) unless ref $class;
            foreach my $name ( keys %$params ) {
                $self->{ $name } = $params->{ $name };
            }
            return $self;
        }

    My question is: once I've declared the fields at the top, how can I get the list back, say because I want to generate accessors dynamically? Is keys %FIELDS the only way? Secondly, is there a more efficient way to pre-populate the fields in the constructor than looping through and assigning each parameter as I do above? Thanks!

    Read the article

  • Is there a single query that can update a "sequence number" across multiple groups?

    - by Drarok
    Given a table like the one below, is there a single-query way to update it from this:

        | id | type_id | created_at | sequence |
        |----|---------|------------|----------|
        | 1  | 1       | 2010-04-26 | NULL     |
        | 2  | 1       | 2010-04-27 | NULL     |
        | 3  | 2       | 2010-04-28 | NULL     |
        | 4  | 3       | 2010-04-28 | NULL     |

    to this (note that created_at is used for ordering, and sequence is "grouped" by type_id):

        | id | type_id | created_at | sequence |
        |----|---------|------------|----------|
        | 1  | 1       | 2010-04-26 | 1        |
        | 2  | 1       | 2010-04-27 | 2        |
        | 3  | 2       | 2010-04-28 | 1        |
        | 4  | 3       | 2010-04-28 | 1        |

    I've seen code before that used an @ variable like the following, which I thought might work:

        SET @seq = 0;
        UPDATE `log` SET `sequence` = @seq := @seq + 1 ORDER BY `created_at`;

    But that obviously doesn't reset the sequence to 1 for each type_id. If there's no single-query way to do this, what's the most efficient way? Data in this table may be deleted, so I'm planning to run a stored procedure after the user is done editing to re-sequence the table.

    Read the article

  • Help with a query

    - by stackoverflowuser
    Hi, based on the following table:

        ID  Effort  Name
        ----------------
        1   1       A
        2   1       A
        3   8       A
        4   10      B
        5   4       B
        6   1       B
        7   10      C
        8   3       C
        9   30      C

    I want to check whether the total effort for a name is less than 40; if so, add a row with effort = 40 - (total effort) for that name. The ID of the new row can be anything. If the total effort is greater than 40, truncate the effort in one of the rows to bring the total down to 40. So after applying the logic above, the table will be:

        ID  Effort  Name
        ----------------
        1   1       A
        2   1       A
        3   8       A
        10  30      A
        4   10      B
        5   4       B
        6   1       B
        11  25      B
        7   10      C
        8   3       C
        9   27      C

    I was thinking of opening a cursor, keeping a counter of the total effort, and based on that logic inserting the existing and new rows into another temporary table. I am not sure this is an efficient way to deal with it; I would like to learn if there is a better way.

    Read the article

  • Adding new "columns" to a CSV data file in Tcl

    - by George
    Hi all, I am dealing with "large" measurement data, approximately 30K key-value pairs. The measurement has a number of iterations, and after each iteration a data file (non-CSV) with 30K key-value pairs is created. I want to somehow create a CSV file of the form:

        Key1,value of iteration1,value of iteration2,...
        Key2,value of iteration1,value of iteration2,...
        Key3,value of iteration1,value of iteration2,...
        ...

    Now, I was wondering about an efficient way of adding each iteration's measurement data as a column to the CSV file in Tcl. So far it seems that either way I will need to load the whole CSV file into some variable (array/list) and work on each element by appending the new measurement data, which seems somewhat inefficient. Is there another way, perhaps?

    Read the article

  • How to very efficiently assign a lat/long to a city boundary described by a shapefile?

    - by watcherFR
    I have a huge shapefile of 36,000 non-overlapping polygons (city boundaries). I want to easily determine the polygon into which a given lat/long falls. What would be the best way, given that it must be extremely computationally efficient? I was thinking of creating a lookup table (tilex, tiley, polygon_id) where tilex and tiley are tile identifiers at zoom level 21 or 22. Yes, the lack of precision from using tile numbers and a planar projection is acceptable in my application. I would rather not use Postgres's GIS extension, and I am fine with a program that runs for 2 days to generate all the INSERT statements.
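
    A sketch of the tile-key computation for the proposed lookup table, using the standard slippy-map (Web Mercator) tiling formula; the coordinates in main are just an example:

        public class TileIndex {
            // Converts a lat/long to tile numbers at the given zoom; (x, y) is the lookup key.
            public static int[] toTile(double lat, double lon, int zoom) {
                double latRad = Math.toRadians(lat);
                int n = 1 << zoom;
                int x = (int) Math.floor((lon + 180.0) / 360.0 * n);
                int y = (int) Math.floor(
                        (1.0 - Math.log(Math.tan(latRad) + 1.0 / Math.cos(latRad)) / Math.PI) / 2.0 * n);
                return new int[] { x, y };
            }

            public static void main(String[] args) {
                int[] tile = toTile(48.8566, 2.3522, 21); // example point at zoom 21
                System.out.println(tile[0] + "," + tile[1]);
            }
        }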

    Read the article

  • Java technologies for web development

    - by Alex
    Hello. I'm a PHP programmer, but I'm extremely interested in learning Java, so I have decided to change my specialty from PHP to Java. At the moment I have the opportunity to build a fairly simple web application (it should contain 2-3 forms, several pages with information from the database, and an authorization module), and I can choose any technology I want. I also have about 3 months for this task. I've decided to develop the site with Java technologies for the purpose of studying. I've already read one book on Java ("Java 2: The Complete Reference" by P. Naughton) and currently I'm reading "Thinking in Java" by B. Eckel. I clearly understand this is not enough for efficient development, but I want at least to try. I would appreciate advice on which framework or technology to choose (Spring, Grails, etc.) and which primary aspects and technologies of Java I should pay attention to. Thank you in advance.

    Read the article

  • Any tips on how to handle hierarchical trees in a relational model?

    - by George
    Hello all. I have a tree structure that can be n levels deep, without restriction; that means each node can have any number of child nodes. What is the best way to retrieve a tree like that without issuing thousands of queries to the database? I have looked at a few other models, like the flat table model, the preorder tree traversal algorithm, and so on. Do you have any tips or suggestions for implementing an efficient tree model? My objective, in the end, is to have one or two queries that would spit out the whole tree for me. With enough processing I can display the tree in .NET, but that happens on the client machine, so it's not a big deal. Thanks for the attention.
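
    If the table is stored as a plain adjacency list (id, parent_id), a single query can fetch every row and the tree can be linked together in memory in one pass. A rough sketch of that assembly step in Java (the question targets .NET, so this is only illustrative):

        import java.util.ArrayList;
        import java.util.HashMap;
        import java.util.List;
        import java.util.Map;

        public class TreeBuilder {
            static class Node {
                final int id;
                final List<Node> children = new ArrayList<>();
                Node(int id) { this.id = id; }
            }

            // rows[i] = {id, parentId}; a parentId with no matching id (e.g. 0) marks a root here.
            public static List<Node> build(int[][] rows) {
                Map<Integer, Node> byId = new HashMap<>();
                for (int[] row : rows) {
                    byId.put(row[0], new Node(row[0]));
                }
                List<Node> roots = new ArrayList<>();
                for (int[] row : rows) {
                    Node node = byId.get(row[0]);
                    Node parent = byId.get(row[1]);
                    if (parent == null) {
                        roots.add(node);            // no parent found: treat as a root
                    } else {
                        parent.children.add(node);
                    }
                }
                return roots;
            }
        }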

    Read the article

  • Is shortening property names worth it?

    - by raam86
    In the "How to Node" tutorial "Blog rolling with node.js and mongoDB", the author mentions that it can be a good idea to shorten property names: "...oft-reported issue with mongoDB is the size of the data on the disk... each and every record stores all the field-names... This means that it can often be more space-efficient to have properties such as 't', or 'b' rather than 'title' or 'body', however for fear of confusion I would avoid this unless truly required!" I am aware of solutions for how to do it; I am more interested in when it is truly required.

    Read the article

  • Finding group maxes in SQL join result

    - by Gene
    Two SQL tables; one contestant has many entries:

        Contestants          Entries
        Id  Name             Id  Contestant_Id  Score
        --  --------         --  -------------  -----
        1   Fred             1   3              100
        2   Mary             2   3              22
        3   Irving           3   1              888
        4   Grizelda         4   4              123
                             5   1              19
                             6   3              50

    Low score wins. I need to retrieve the current best scores of all contestants, ordered by score:

        Best Entries Report
        Name      Entry_Id  Score
        --------  --------  -----
        Fred      5         19
        Irving    2         22
        Grizelda  4         123

    I can certainly get this done with many queries. My question is whether there's a way to get the result with one efficient SQL query. I can almost see how to do it with GROUP BY, but not quite. In case it's relevant, the environment is Rails ActiveRecord and PostgreSQL.

    Read the article

  • GQL Query with __key__ in List of KEYs

    - by bossylobster
    The GQL reference [1] encourages using the IN keyword with a list of values and allows constructing a Key by hand, and indeed the GQL query

        SELECT * FROM MyModel WHERE __key__ = KEY('MyModel', 'my_model_key')

    succeeds. However, using the form you would expect to work:

        SELECT * FROM MyModel WHERE __key__ IN (KEY('MyModel', 'my_model_key1'), KEY('MyModel', 'my_model_key2'))

    the Datastore Viewer complains of an "Invalid GQL query string." What is the correct way to format such a query?

    [1] http://code.google.com/appengine/docs/python/datastore/gqlreference.html

    PS: I know there are more efficient ways to do this in Python (without constructing a GQL query) and using the remote_api, but each call to the remote_api counts against quota. In an environment where quota is not (necessarily) free, quick and dirty queries are very helpful.

    Read the article

  • Index for wildcard match of end of string

    - by Anders Abel
    I have a table of phone numbers, storing the phone number as varchar(20). I have a requirement to support searching on both the entire number and only the last part of it, so a typical query will be:

        SELECT * FROM PhoneNumbers WHERE Number LIKE '%1234'

    How can I put an index on the Number column to make those searches efficient? Is there a way to create an index that sorts the records on the reversed string? Another option might be to reverse the numbers before storing them, which gives queries like:

        SELECT * FROM PhoneNumbers WHERE ReverseNumber LIKE '4321%'

    However, that requires every user of the database to always reverse the string. It could be solved by storing both the normal and the reversed number and having the reversed number updated by a trigger on insert/update, but that kind of solution is not very elegant. Any other suggestions?

    Read the article

  • C# GroupJoin effectiveness

    - by bsnote
    Without using GroupJoin:

        var playersDictionary = players.ToDictionary(
            player => player.Id,
            element => new PlayerDto { Rounds = new List<RoundDto>() });

        foreach (var round in rounds)
        {
            PlayerDto playerDto;
            playersDictionary.TryGetValue(round.PlayerId, out playerDto);
            if (playerDto != null)
            {
                playerDto.Rounds.Add(new RoundDto { });
            }
        }

        var playerDtoItems = playersDictionary.Values;

    Using GroupJoin:

        var playerDtoItems = from player in players
                             join round in rounds on player.Id equals round.PlayerId into playerRounds
                             select new PlayerDto
                             {
                                 Rounds = playerRounds.Select(playerRound => new RoundDto {})
                             };

    Which of these two pieces is more efficient?

    Read the article

  • Image expire time

    - by Jens
    The Google Page Speed tool recommends that I set 'Expires' headers for images, etc. But what is the most efficient way to set an Expires header for an image? I now redirect all image requests to an imagehandler.php using .htaccess:

        /* HTTP/1.1 404 Not Found, HTTP/1.1 400 Bad Request and content type detection stuff ... */
        header( "Content-Type: " . $content_type );
        header( "Cache-Control: public" );
        header( "Last-Modified: " . gmdate("D, d M Y H:i:s", filemtime($path)) . " GMT");
        header( "Expires: " . date("r", time() + (60*60*24*30)));
        readfile( $path );

    But of course this adds extra loading time for my images on the first request, and I was wondering if there is a better solution.

    Read the article
