Search Results

Search found 3758 results on 151 pages for 'efficient'.


  • Force download menu for remote files

    - by o-logn
    Hey, I would like users to be able to upload links on my site. When another user clicks a link (e.g. to a PDF file), I would like the download prompt to appear instead of the browser displaying the PDF inline. I know I can use Response.AddHeader/Response.WriteFile to achieve this, but WriteFile requires a virtual path, and the links uploaded by users will point to external servers. Can I still force the download prompt to show and, if so, what would be the most efficient way of doing it? Thanks for any advice.
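
    A common workaround is to proxy the remote file through your own server and attach a Content-Disposition: attachment header, since you cannot control the headers another server sends. Below is a minimal sketch of that idea in Python with Flask and requests (the question is ASP.NET, where Response.AddHeader would set the same header; the route name and filename handling here are illustrative assumptions):

        import requests
        from flask import Flask, Response, abort, request

        app = Flask(__name__)

        @app.route("/download")
        def download():
            url = request.args.get("url")
            if not url:
                abort(400)
            # Stream the remote file through this server so we control the headers.
            upstream = requests.get(url, stream=True, timeout=10)
            filename = url.rsplit("/", 1)[-1] or "download"
            return Response(
                upstream.iter_content(chunk_size=8192),
                content_type=upstream.headers.get("Content-Type",
                                                  "application/octet-stream"),
                # This header is what turns inline display into a download prompt.
                headers={"Content-Disposition": f'attachment; filename="{filename}"'},
            )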


  • Obtain patterns in one file from another using ack or awk, or a better way than grep?

    - by Rock
    Is there a way to look up the patterns listed in one file within another file, the way grep's -f option does, but using ack? I see ack also has a -f option, but it means something different from grep's. Perhaps an example will give you a better idea. Suppose I have file1:

        a
        c
        e

    and file2:

        a 1
        b 2
        c 3
        d 4
        e 5

    and I want to pull out every record of file2 whose key appears in file1, giving:

        a 1
        c 3
        e 5

    Can ack do this? Otherwise, is there a better way to handle the job (such as awk, or using a hash)? Both files have millions of records, and I really need an efficient way to finish. Thanks!
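
    For exact key matches, a hash lookup is the efficient route: load file1 into a set once, then stream file2 in a single pass, so the cost stays linear even with millions of records. A minimal sketch in Python (assuming, as in the example, that the key is the first whitespace-separated field):

        # Build a set of keys from file1 (O(1) membership tests).
        with open("file1") as f:
            keys = {line.strip() for line in f if line.strip()}

        # Stream file2 once and keep only the records whose key is in the set.
        with open("file2") as f:
            for line in f:
                fields = line.split()
                if fields and fields[0] in keys:
                    print(line, end="")

    The same idea is a short one-liner in awk; grep -f, by contrast, treats every line of file1 as a pattern to scan for, which can get slow as the pattern list grows.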


  • Performing regex on a stream

    - by takoi
    I have some large text files on which I'm going to perform consecutive matches (just capturing, not replacing). I don't think it's a good idea to keep the whole file in memory; I'd rather use a Reader. What I know about the input is that no match spans more than 5 lines. So my idea is to keep some sort of buffer of just those 5 lines or so, do the first search, and continue. For this to work, though, the buffer has to "know" where the previous regex match ended; e.g. if a match ends on line 2, the next search should start from there. Is it possible to do something like this efficiently?
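
    One way to realise this is a sliding window of lines plus an absolute "consumed" offset, so each search resumes where the previous match ended and nothing is reported twice as the window slides. A sketch in Python (the question sounds Java-flavoured, so treat this as pseudocode for the buffering scheme; it assumes no match spans more than window lines):

        import io
        import re

        def stream_finditer(lines, regex, window=5):
            """Yield regex matches from an iterable of lines without slurping it.

            Assumes no single match spans more than `window` lines.
            """
            buf = []        # sliding window of raw lines
            base = 0        # absolute offset of buf[0] within the stream
            consumed = 0    # absolute offset where the last reported match ended

            def scan_head():
                # Report matches that START inside the head line; later starts
                # are rediscovered once their own line reaches the head.
                nonlocal consumed
                text = "".join(buf)
                head_end = len(buf[0])
                pos = max(0, consumed - base)   # resume after the last match
                while True:
                    m = regex.search(text, pos)
                    if m is None or m.start() >= head_end:
                        return
                    yield m.group(0)
                    consumed = base + m.end()
                    pos = m.end()

            for line in lines:
                buf.append(line)
                if len(buf) == window:
                    yield from scan_head()
                    base += len(buf.pop(0))
            while buf:                          # drain the final partial window
                yield from scan_head()
                base += len(buf.pop(0))

        demo = io.StringIO("x\nBEGIN a\nb\nEND y\nz\n")
        for hit in stream_finditer(demo, re.compile(r"BEGIN.*?END", re.S)):
            print(hit)   # BEGIN a\nb\nEND

    Matches are reported only once their first line reaches the head of the window, which is what prevents duplicates as the buffer slides.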


  • What is the fastest method to calculate a substring?

    - by Misha Moroshko
    I have a huge "binary" string, like:

        1110 0010 1000 1111 0000 1100 1010 0111 ...

    Its length is a multiple of 4 and may reach 500,000. I also have a corresponding array:

        {14, 2, 8, 15, 0, 12, 10, 7, ...}

    (every number in the array corresponds to 4 bits in the string). Given this string, this array, and a number N, I need to compute the substring string.substr(4*N, 4); i.e. for N=0 the result should be 1110, and for N=1 it should be 0010. I need to perform this task many, many times, and my question is: what would be the fastest method to calculate this substring? One method is to take the substring straightforwardly: string.substr(4*N, 4). I'm afraid that is not efficient on such huge strings. Another method is to use array[N].toString(2) and then pad the result with zeros as needed; I'm not sure how fast that is. Do you have any other ideas?
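
    Since every 4-bit group is already available as a number, the lookup can skip the big string entirely: index the array and format the value as zero-padded binary, which is constant-time per query regardless of string length. A sketch in Python (the question's string.substr/toString(2) suggests JavaScript; format(n, '04b') plays the role of toString(2) plus zero-padding):

        nibbles = [14, 2, 8, 15, 0, 12, 10, 7]   # the array from the question

        def substr4(n):
            # 4-bit group N as a zero-padded binary string, e.g. 14 -> '1110'
            return format(nibbles[n], "04b")

        assert substr4(0) == "1110"
        assert substr4(1) == "0010"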


  • Most efficient way to limit rows returned from a union query - TSQL

    - by stephen776
    Hey guys, I have a simple stored proc with two queries joined by a union:

        SELECT name AS 'result' FROM product WHERE ...
        UNION
        SELECT productNum AS 'result' FROM product WHERE ...

    I want to limit this to the TOP 10 results overall; if I put TOP 10 in each separate query, I get 20 results in total. What is the most efficient way to limit the total number of results to 10? I don't want to do TOP 5 in each, because I may end up in a situation where I have something like 7 names and 3 product numbers.
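
    The usual T-SQL pattern is to wrap the union in a derived table and apply TOP (with an ORDER BY) on the outside, so the limit sees the combined result. A sketch of the same shape run against SQLite from Python, where a trailing LIMIT applies to the whole compound query (table and column names are from the question; the WHERE clauses are omitted as placeholders):

        import sqlite3

        conn = sqlite3.connect(":memory:")
        conn.execute("CREATE TABLE product (name TEXT, productNum TEXT)")
        conn.executemany("INSERT INTO product VALUES (?, ?)",
                         [(f"name{i}", f"pn{i}") for i in range(12)])

        # T-SQL equivalent:
        #   SELECT TOP 10 * FROM (SELECT name AS result FROM product
        #                         UNION
        #                         SELECT productNum FROM product) AS u
        #   ORDER BY u.result;
        rows = conn.execute("""
            SELECT name AS result FROM product
            UNION
            SELECT productNum FROM product
            ORDER BY result
            LIMIT 10
        """).fetchall()
        print(len(rows))   # 10, drawn from both halves of the union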


  • Which tool should I use for finding out my memory allocation in Perl?

    - by Colin Newell
    I've slurped in a big file using File::Slurp, but given the size of the file I can see that I must be holding it in memory twice, or perhaps it's getting inflated by being turned into 16-bit Unicode. How can I best diagnose that sort of problem in Perl? The file I pulled in is 800 MB, and my perl process analysing that data has roughly 1.6 GB allocated at runtime. I realise I may be wrong about the cause of the problem, but I'm not sure of the most efficient way to prove or disprove my theory.


  • The best way to make a CodeIgniter website multi-language: calling from lang arrays depending on lang session

    - by artmania
    Hi friends, I've been researching for hours and hours, but I could not find any clear, efficient way to do this. I have a CodeIgniter-based website in English, and I now have to add a Polish version. What is the best way to serve my site in two languages depending on the visitor's selection? Is there a way to create array files for each language and load them in the view files depending on a session value set by the language selection? I don't want to use a database. I appreciate any help; I'm running up against a deadline. Thanks!!
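
    The array-per-language idea works fine without a database: keep one file of message arrays per language, pick the language once per request from the session, and fall back to the default when a key is missing. A language-agnostic sketch in Python (CodeIgniter itself ships a language class built along these lines; the names here are illustrative):

        # One dictionary per language; in CodeIgniter these would live in
        # per-language files under application/language/.
        LANG = {
            "english": {"welcome": "Welcome!", "contact": "Contact us"},
            "polish":  {"welcome": "Witamy!",  "contact": "Kontakt"},
        }

        def line(key, session):
            language = session.get("language", "english")
            # Fall back to English for untranslated keys.
            return LANG.get(language, {}).get(key) or LANG["english"][key]

        session = {"language": "polish"}
        print(line("welcome", session))   # Witamy!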


  • Manage dirty rect efficiently

    - by Tianzhou Chen
    Hi all, I am implementing a view system and I want to keep track of all the dirty rects. It seems my dirty-rect management is a bottleneck for the whole system. Invalidating the bounding box of the dirty region is an easy approach, but consider a situation like this: say I have a client area of 100x100, one dirty rect at (0, 0, 1, 1), and another dirty rect at (99, 99, 1, 1). Invalidating the bounding box, which turns out to be the whole 100x100 area, is not efficient at all. So I want to ask if someone can give a hint, or a link to the related literature. Thanks in advance!
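
    A standard alternative is to keep a small list of disjoint dirty rects: coalesce an incoming rect only with the rects it actually overlaps, and collapse to a single bounding box only past some cap. A rough sketch in Python (rects are (x, y, w, h) tuples; the cap and the single merge pass are simplifying assumptions, and a production version would iterate the merge to a fixed point or use a region data structure):

        from functools import reduce

        def union(a, b):
            x, y = min(a[0], b[0]), min(a[1], b[1])
            return (x, y,
                    max(a[0] + a[2], b[0] + b[2]) - x,
                    max(a[1] + a[3], b[1] + b[3]) - y)

        def overlaps(a, b):
            return (a[0] < b[0] + b[2] and b[0] < a[0] + a[2] and
                    a[1] < b[1] + b[3] and b[1] < a[1] + a[3])

        def add_dirty(rects, new, cap=16):
            """Add `new` to the dirty list, merging it with rects it touches."""
            merged, keep = new, []
            for r in rects:
                if overlaps(r, merged):
                    merged = union(r, merged)   # coalesce touching regions
                else:
                    keep.append(r)
            keep.append(merged)
            # Guard against fragmentation: past the cap, fall back to one box.
            return [reduce(union, keep)] if len(keep) > cap else keep

        rects = add_dirty([], (0, 0, 1, 1))
        rects = add_dirty(rects, (99, 99, 1, 1))
        print(rects)   # two tiny rects, not one 100x100 bounding box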


  • Debugging ASP.NET in VS

    - by negligible
    A lot of what I'm doing at the moment is figuring out other people's code and adding or adapting functions, so currently I am debugging more than I am writing code of my own. I'm still new to this (junior developer), and I am always finding new ways to improve what I am doing. For example, I recently found this guide, which had some excellent tips, such as overriding the ToString() method in your classes so that child objects are readable from their parents. So I am looking for any other tips or tricks to make my debugging more efficient, which you more experienced programmers may have picked up or found, since I recognise debugging as a big part of programming. Anything is appreciated; I can read websites just fine, so no need to explain it yourself if you have a good link!


  • To create a new DB connection or not?

    - by Yeti
    I'm running a cron job every 15 minutes which takes about a minute to execute. It makes lots of API calls and stores data in the database. Right now I create one mysql connection at the beginning and use the same connection throughout the code; most of the time is spent making the API calls. Would it be more efficient to create a new database connection only when it's time to store the data, like this?

        1. Kill the last connection
        2. Wait for the API call to complete
        3. Create a new DB connection
        4. Execute the query
        5. Go to 1
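
    Opening a MySQL connection is cheap next to a minute of API calls, and a connection left idle that long risks being dropped by the server (MySQL's wait_timeout), so a reasonable compromise is to open the connection lazily, just before the write, or to ping and reconnect after the slow part. A sketch of the lazy variant in Python (sqlite3 stands in for the MySQL driver; fetch_from_api and the schema are placeholders):

        import sqlite3   # stand-in driver; the pattern is the same for MySQL

        def fetch_from_api():
            return [("key", "value")]        # placeholder for the slow API calls

        def store(rows):
            # Connect only when there is something to write, so the connection
            # never idles long enough to be timed out by the server.
            conn = sqlite3.connect("cron.db")
            with conn:   # commits on success
                conn.execute("CREATE TABLE IF NOT EXISTS results (k TEXT, v TEXT)")
                conn.executemany("INSERT INTO results VALUES (?, ?)", rows)
            conn.close()

        for _ in range(4):                   # each batch: slow fetch, quick write
            store(fetch_from_api())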


  • Java technologies for web development

    - by Alex
    Hello. I'm a PHP programmer, but I'm extremely interested in learning Java, so I have decided to change speciality from PHP to Java. At the moment I have an opportunity to build a fairly simple web application (it should contain 2-3 forms, several pages with information from the database, and an authorization module), and I also have the chance to choose any technology I want, with about 3 months for the task. I've decided to develop the site with Java technologies for the purpose of studying. I've already read one Java book ("Java 2 Complete Reference" by P. Naughton) and am currently reading "Thinking in Java" by B. Eckel. I clearly understand this is not enough for efficient development, but I want at least to try. I would really appreciate advice on which framework or technology to choose (Spring, Grails, etc.) and which primary aspects and technologies of Java I should pay attention to. Thank you in advance.


  • Is there a better way to do SELECT queries in MySQL and sort them in PHP than this?

    - by Kent
    I am just learning PHP/MySQL. One thing I have to do a lot is display data that was previously inserted into the database in the user's browser. So I am doing this:

        $select = mysql_query('SELECT * FROM pages');
        while ($return = mysql_fetch_assoc($select)) {
            $title   = $return['title'];
            $author  = $return['author'];
            $content = $return['content'];
        }

    and then I can use these variables throughout the page. Doing it this way isn't an issue when I only have 3 columns in the table, but what if I am dealing with a huge table with many more columns? I have a nagging feeling that the pros do it in some more efficient way, where they maybe loop through the columns of the table they are selecting from and associate them with variables automatically. Is that the case, or is the above how you do it too?
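
    Most production code skips the per-column variables entirely and works with the associative row directly; PHP also has extract(), which turns a row's keys into variables in one call, though it is easy to abuse. Here is the row-as-dictionary pattern sketched with Python's sqlite3, as a language-neutral illustration rather than the PHP API:

        import sqlite3

        conn = sqlite3.connect(":memory:")
        conn.row_factory = sqlite3.Row   # rows become dict-like, keyed by column
        conn.execute("CREATE TABLE pages (title TEXT, author TEXT, content TEXT)")
        conn.execute("INSERT INTO pages VALUES ('Hi', 'Kent', 'Hello world')")

        for row in conn.execute("SELECT * FROM pages"):
            # Every column is already addressable by name; nothing to copy,
            # no matter how many columns the table grows.
            print(row["title"], row["author"], row["content"])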


  • Data format for content heavy iPhone app - Plist or XML?

    - by Toby
    Hello, I'm building an iPhone app that is essentially a book; it will be bundled with a lot of text-heavy content. I considered bundling the data as XML and loading it when the application starts, but the XML would contain a lot of nested structures and be a bit of a pain to parse. Would it be better to use a plist? I'm concerned about memory usage, and plists are loaded entirely into memory; can they be parsed in chunks? Is there a maximum size to a plist, and how efficient are they? I'm not sure how big the bundled content is going to be yet, but I should imagine it could be anywhere from 500 KB to 4 MB. Thanks in advance.
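
    For comparison: a plist is loaded as one object graph, while event-driven XML parsing (NSXMLParser on the iPhone) hands you elements one at a time, so memory stays bounded however large the book gets. The incremental idea, sketched with Python's iterparse purely as a concept demo:

        import xml.etree.ElementTree as ET
        from io import BytesIO

        book = BytesIO(b"<book><chapter>one</chapter><chapter>two</chapter></book>")

        # Handle each chapter as its end tag arrives, then free it, so the
        # whole document is never held in memory at once.
        for event, elem in ET.iterparse(book, events=("end",)):
            if elem.tag == "chapter":
                print(elem.text)
                elem.clear()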


  • CSV parser in C++

    - by User1
    All I need is a good CSV file parser for C++. At this point it can really just be a comma-delimited parser (i.e. don't worry about escaping newlines and commas). The main need is a line-by-line parser that returns a vector for the next line each time the method is called. I found this article, which looks quite promising: http://www.boost.org/doc/libs/1_35_0/libs/spirit/example/fundamental/list_parser.cpp. I've never used Boost's Spirit, but I am willing to try it. Is it overkill/bloated, or is it fast and efficient? Does anyone have faster algorithms using the STL or anything else? Thanks!
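
    For reference, the interface being described (one call, one parsed row back) is exactly how CSV readers behave in higher-level languages, which can serve as a model for the C++ class. Python's built-in reader as a behavioural sketch, not a C++ answer:

        import csv
        import io

        data = io.StringIO("a,b,c\n1,2,3\n")

        reader = csv.reader(data)   # line-by-line: each iteration is one row
        for row in reader:
            print(row)              # ['a', 'b', 'c'] then ['1', '2', '3']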


  • Index for wildcard match of end of string

    - by Anders Abel
    I have a table of phone numbers, storing the phone number as varchar(20). I have a requirement to implement searching on both the entire number and on only the last part of it, so a typical query will be:

        SELECT * FROM PhoneNumbers WHERE Number LIKE '%1234'

    How can I put an index on the Number column to make those searches efficient? Is there a way to create an index that sorts the records on the reversed string? Another option might be to reverse the numbers before storing them, which gives queries like:

        SELECT * FROM PhoneNumbers WHERE ReverseNumber LIKE '4321%'

    However, that requires all users of the database to always reverse the string. It might be solved by storing both the normal and the reversed number and having the reversed number updated by a trigger on insert/update, but that kind of solution is not very elegant. Any other suggestions?
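
    Storing the reversed string in its own indexed column turns the suffix search into an index-friendly prefix search, and on engines with expression indexes (e.g. PostgreSQL's reverse()) or indexed persisted computed columns (SQL Server's REVERSE()) the extra column maintains itself. A small sketch of the two-column variant using SQLite from Python, with the reversal done in application code since SQLite has no built-in reverse():

        import sqlite3

        conn = sqlite3.connect(":memory:")
        conn.execute("CREATE TABLE PhoneNumbers (Number TEXT, ReverseNumber TEXT)")
        conn.execute("CREATE INDEX idx_rev ON PhoneNumbers (ReverseNumber)")

        def add_number(number):
            # Keep the reversed copy in sync at write time (a trigger could
            # do this inside the database instead).
            conn.execute("INSERT INTO PhoneNumbers VALUES (?, ?)",
                         (number, number[::-1]))

        add_number("5551234")
        add_number("5559999")

        # The suffix search on Number becomes a prefix search on ReverseNumber,
        # which an ordinary B-tree index can serve.
        suffix = "1234"
        rows = conn.execute(
            "SELECT Number FROM PhoneNumbers WHERE ReverseNumber LIKE ?",
            (suffix[::-1] + "%",)).fetchall()
        print(rows)   # [('5551234',)]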


  • Is it faster to count down than it is to count up?

    - by Bob
    Our computer science teacher once said that for some reason it is more efficient to count down than to count up. For example, if you need to use a FOR loop and the loop index is not used anywhere (like printing a line of N *'s to the screen), I mean code like this:

        for (i = N; i >= 0; i--)
            putchar('*');

    is better than:

        for (i = 0; i < N; i++)
            putchar('*');

    Is it really true? And if so, does anyone know why?


  • Hudson: where to download files, and how to stop specific running builds?

    - by Kim Jong Woo
    I have a file that is generated inside the Hudson server, at /var/lib/hudson/jobs/jobtitle/1/out.txt. I need to fetch this file, but a GET request for http://myhudson:8090/job/jobtitle/1/out.txt doesn't actually locate it. Basically, I have another box that will grab this file from the Hudson server and make out.txt available for download. The build-number directories are another challenge: how can I use the Hudson API to stop or delete specific running builds? I am forced to iterate through all the build numbers and send a STOP or DELETE API call from PHP, using wget to make the REST call, which is not very efficient:

        for ($i = 0; $i < 3000; $i++) {
            exec('wget -O /dev/null "http://myhudson:8090/job/jobtitle/' . $i . '/stop"');
        }
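
    On the build-control side, Hudson's remote API can tell you which builds actually exist, so you only touch real build numbers instead of probing thousands; as for the file, builds are only served over HTTP if the job archives them as artifacts (they then appear under .../<build>/artifact/...). A sketch using Python's standard library (the /api/json listing and the /stop action are the Hudson/Jenkins remote-API endpoints; host and job name are from the question):

        import json
        from urllib.request import Request, urlopen

        BASE = "http://myhudson:8090/job/jobtitle"

        # Ask Hudson which builds exist rather than iterating 0..3000.
        with urlopen(BASE + "/api/json") as resp:
            builds = json.load(resp)["builds"]   # [{'number': ..., 'url': ...}, ...]

        for build in builds:
            # An empty body makes this a POST; <build url>/stop aborts the
            # build if it is still running.
            urlopen(Request(build["url"] + "stop", data=b""))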


  • Warn the user when new data is inserted into the database

    - by João Menighin
    I don't know how to search for this, so I'm kind of lost (the two topics I saw here were closed). I have a news website, and I want to notify the user when new data is inserted into the database. I want to do it like here on Stack Overflow, where we are notified without reloading the page, or like on Facebook, where you are told about new messages/notifications without a reload. What is the best way to do that? Is it some kind of listener with a timeout that constantly checks the database? That doesn't sound efficient... Thanks in advance.
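
    The page never watches the database directly: the browser periodically asks the server "anything newer than the last id I saw?" and redraws only when the answer is non-empty (long polling, where the request parks on the server until there is news, is the lower-latency refinement sites of that era used). A minimal polling endpoint sketched with Python's standard library (the news.db schema and URL shape are assumptions):

        import json
        import sqlite3
        from http.server import BaseHTTPRequestHandler, HTTPServer
        from urllib.parse import parse_qs, urlparse

        class Poll(BaseHTTPRequestHandler):
            def do_GET(self):
                qs = parse_qs(urlparse(self.path).query)
                last_id = int(qs.get("after", ["0"])[0])
                # Return only what the client hasn't seen yet.
                conn = sqlite3.connect("news.db")   # assumes a news(id, title) table
                rows = conn.execute(
                    "SELECT id, title FROM news WHERE id > ? ORDER BY id",
                    (last_id,)).fetchall()
                body = json.dumps([{"id": i, "title": t} for i, t in rows]).encode()
                self.send_response(200)
                self.send_header("Content-Type", "application/json")
                self.end_headers()
                self.wfile.write(body)

        # The browser polls /?after=<highest id seen> every few seconds and
        # shows a notification only when the returned array is non-empty.
        HTTPServer(("", 8000), Poll).serve_forever()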


  • Adding new "columns" to csv data file in Tcl

    - by George
    Hi all, I am dealing with "large" measurement data, approximately 30K key-value pairs. The measurement runs for a number of iterations, and after each iteration a data file (non-CSV) with 30K key-value pairs is created. I want to end up with a CSV file of the form:

        Key1,value of iteration1,value of iteration2,...
        Key2,value of iteration1,value of iteration2,...
        ...

    Now, I was wondering about an efficient way of adding each iteration's measurement data as a column of the CSV file in Tcl. So far it seems that either way I will need to load the whole CSV file into some variable (array/list) and work on each element by appending the new measurement data, which seems somewhat inefficient. Is there another way, perhaps?
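
    Rewriting the CSV after every iteration is the expensive part; accumulating the values in memory, one list per key, and writing the file once at the end avoids it entirely (30K keys times a handful of iterations is small). The idea sketched in Python rather than Tcl, where a dict of lists plays the same role:

        import csv
        from collections import defaultdict

        columns = defaultdict(list)   # key -> [iteration1 value, iteration2 value, ...]

        def add_iteration(pairs):
            """Fold one iteration's key-value pairs into the running table."""
            for key, value in pairs:
                columns[key].append(value)

        # One call per iteration's data file (parsing of the non-CSV input omitted).
        add_iteration([("Key1", 10), ("Key2", 20)])
        add_iteration([("Key1", 11), ("Key2", 21)])

        with open("measurements.csv", "w", newline="") as f:
            writer = csv.writer(f)
            for key, values in columns.items():
                writer.writerow([key, *values])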


  • Getting list of fields back from 'use fields' pragma?

    - by makenai
    So I'm familiar with the use fields pragma in Perl, which can be used to restrict the fields that are stored in a class:

        package Fruit;
        use fields qw( color shape taste );

        sub new {
            my ( $class, $params ) = @_;
            my $self = fields::new( $class ) unless ref $class;
            foreach my $name ( keys %$params ) {
                $self->{ $name } = $params->{ $name };
            }
            return $self;
        }

    My question is: once I've declared the fields at the top, how can I get the list back, say, because I want to generate accessors dynamically? Is keys %FIELDS the only way? Secondarily, is there a more efficient way to pre-populate the fields in the constructor than looping through and assigning each parameter as I do above? Thanks!


  • Help with a query

    - by stackoverflowuser
    Hi, based on the following table:

        ID   Effort   Name
        -------------------
         1        1   A
         2        1   A
         3        8   A
         4       10   B
         5        4   B
         6        1   B
         7       10   C
         8        3   C
         9       30   C

    I want to check whether the total effort against a name is less than 40; if it is, add a row with effort = 40 - (total effort) for that name (the ID of the new row can be anything). If the total effort is greater than 40, reduce the value of one of the rows so that the total becomes 40. So after applying the logic above, the table will be:

        ID   Effort   Name
        -------------------
         1        1   A
         2        1   A
         3        8   A
        10       30   A
         4       10   B
         5        4   B
         6        1   B
        11       25   B
         7       10   C
         8        3   C
         9       27   C

    I was thinking of opening a cursor, keeping a running total of the effort, and, based on the logic, inserting the existing and new rows into a temporary table. I am not sure that is an efficient way to deal with this, and I would like to learn whether there is a better one.
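
    Both halves can be done set-based, with no cursor: one INSERT ... SELECT adds the shortfall rows for names under 40, and one UPDATE trims the excess from a single row of each name over 40. A sketch against SQLite from Python (which row absorbs the trim is left open by the question; here it is the highest-effort row, matching the example output):

        import sqlite3

        conn = sqlite3.connect(":memory:")
        conn.execute("CREATE TABLE t (ID INTEGER PRIMARY KEY, Effort INT, Name TEXT)")
        conn.executemany("INSERT INTO t VALUES (?, ?, ?)", [
            (1, 1, 'A'), (2, 1, 'A'), (3, 8, 'A'),
            (4, 10, 'B'), (5, 4, 'B'), (6, 1, 'B'),
            (7, 10, 'C'), (8, 3, 'C'), (9, 30, 'C')])

        # Names totalling under 40: add one row holding the shortfall.
        conn.execute("""
            INSERT INTO t (Effort, Name)
            SELECT 40 - SUM(Effort), Name FROM t
            GROUP BY Name HAVING SUM(Effort) < 40""")

        # Names totalling over 40: take the excess out of the largest row.
        conn.execute("""
            UPDATE t
            SET Effort = Effort - (SELECT SUM(Effort) - 40 FROM t AS x
                                   WHERE x.Name = t.Name)
            WHERE ID = (SELECT ID FROM t AS y WHERE y.Name = t.Name
                        ORDER BY Effort DESC LIMIT 1)
              AND 40 < (SELECT SUM(Effort) FROM t AS z WHERE z.Name = t.Name)""")

        for row in conn.execute("SELECT * FROM t ORDER BY Name, ID"):
            print(row)   # matches the desired table, with new rows 10 and 11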


  • Stringification of a macro value

    - by SF.
    I've run into a problem: I need to use a macro value both as a string and as an integer.

        #define RECORDS_PER_PAGE 10

        /* ... */

        #define REQUEST_RECORDS \
            "SELECT Fields FROM Table WHERE Conditions" \
            " OFFSET %d * " #RECORDS_PER_PAGE \
            " LIMIT " #RECORDS_PER_PAGE ";"

        char result_buffer[RECORDS_PER_PAGE][MAX_RECORD_LEN];

        /* ...and some more uses of RECORDS_PER_PAGE, elsewhere... */

    This fails with a message about a "stray #", and even if it worked, I guess I'd get the macro name stringified, not the value. Of course I can feed the values to the final method ("LIMIT %d", page * RECORDS_PER_PAGE), but that is neither pretty nor efficient. It's times like this that I wish the preprocessor didn't treat strings in a special way and would process their contents just like normal code. For now, I've kludged it with

        #define RECORDS_PER_PAGE_TXT "10"

    but understandably I'm not happy about it. How do I get it right?


  • Is there a single query that can update a "sequence number" across multiple groups?

    - by Drarok
    Given a table like the one below, is there a single-query way to update it from this:

        | id | type_id | created_at | sequence |
        |----|---------|------------|----------|
        |  1 |       1 | 2010-04-26 |     NULL |
        |  2 |       1 | 2010-04-27 |     NULL |
        |  3 |       2 | 2010-04-28 |     NULL |
        |  4 |       3 | 2010-04-28 |     NULL |

    to this (note that created_at is used for ordering, and sequence is "grouped" by type_id):

        | id | type_id | created_at | sequence |
        |----|---------|------------|----------|
        |  1 |       1 | 2010-04-26 |        1 |
        |  2 |       1 | 2010-04-27 |        2 |
        |  3 |       2 | 2010-04-28 |        1 |
        |  4 |       3 | 2010-04-28 |        1 |

    I've seen code that used an @ variable, like the following, which I thought might work:

        SET @seq = 0;
        UPDATE `log` SET `sequence` = @seq := @seq + 1 ORDER BY `created_at`;

    But that obviously doesn't reset the sequence to 1 for each type_id. If there's no single-query way to do this, what's the most efficient way? Data in this table may be deleted, so I'm planning to run a stored procedure that re-sequences the table after the user is done editing.
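
    On MySQL 8+ (and most other current engines) this is one statement with ROW_NUMBER() OVER (PARTITION BY type_id ORDER BY created_at); on older MySQL a portable single-query fallback is a correlated count, where a row's sequence is the number of same-type_id rows at or before it. The fallback sketched against SQLite from Python (breaking created_at ties by id is an assumption):

        import sqlite3

        conn = sqlite3.connect(":memory:")
        conn.execute("""CREATE TABLE log (
            id INTEGER PRIMARY KEY, type_id INT, created_at TEXT, sequence INT)""")
        conn.executemany("INSERT INTO log VALUES (?, ?, ?, NULL)", [
            (1, 1, '2010-04-26'), (2, 1, '2010-04-27'),
            (3, 2, '2010-04-28'), (4, 3, '2010-04-28')])

        # Single statement: number each row within its type_id, ordered by
        # created_at (id breaks ties).
        conn.execute("""
            UPDATE log SET sequence = (
                SELECT COUNT(*) FROM log AS x
                WHERE x.type_id = log.type_id
                  AND (x.created_at < log.created_at
                       OR (x.created_at = log.created_at AND x.id <= log.id)))""")

        for row in conn.execute("SELECT * FROM log ORDER BY id"):
            print(row)   # sequences 1, 2, 1, 1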


  • Efficient way of string splitting in C#

    - by openidsujoy
    I have a combined string like this:

        N:Pay in Cash++RGI:40++R:200++T:Purchase++IP:N++IS:N++PD:PC++UCP:598.80++UPP:0.00++TCP:598.80++TPP:0.00++QE:1++QS:1++CPC:USD++PPC:Points++D:Y++E:Y++IFE:Y++AD:Y++IR:++MV:++CP:~ ~N:ERedemption++RGI:42++R:200++T:Purchase++IP:N++IS:N++PD:PC++UCP:598.80++UPP:0.00++TCP:598.80++TPP:0.00++QE:1++QS:1++CPC:USD++PPC:Points++D:Y++E:Y++IFE:Y++AD:Y++IR:++MV:++CP:

    It is a list of POs (payment options) separated by "~ ~", and the list may contain one or more POs. Each PO contains only key-value pairs, with the key separated from the value by ":" and the spaces between pairs denoted by "++". I need to extract the values for the keys "RGI" and "N". I can do it with a for loop, but I want a more efficient way to do this. Any help on this?
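
    Splitting in three stages, records on "~ ~", then pairs on "++", then key from value on the first ":", yields a dictionary per PO, after which every key is a constant-time lookup. A sketch in Python (in C# the same shape falls out of String.Split; the data is abbreviated from the question):

        raw = ("N:Pay in Cash++RGI:40++R:200++T:Purchase"
               "~ ~N:ERedemption++RGI:42++R:200++T:Purchase")

        payment_options = []
        for record in raw.split("~ ~"):
            # partition(":") splits on the first colon only, so values such as
            # "Pay in Cash" survive intact and empty values become "".
            pairs = (field.partition(":") for field in record.split("++"))
            payment_options.append({key: value for key, _, value in pairs})

        for po in payment_options:
            print(po["N"], po["RGI"])   # Pay in Cash 40 / ERedemption 42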


  • Is shortening property names worth it?

    - by raam86
    In "How To Node: Blog rolling with node.js and mongoDB", the author mentions that it's a good idea to shorten property names: "...[an] oft-reported issue with mongoDB is the size of the data on the disk... each and every record stores all the field-names... This means that it can often be more space-efficient to have properties such as 't' or 'b' rather than 'title' or 'body', however for fear of confusion I would avoid this unless truly required!" I am aware of solutions for how to do it; I am more interested in when it is truly required.
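
    Because the field names are repeated in every document, the overhead is easy to measure for your own schema: encode a representative document both ways and compare sizes. A quick check using the bson package that ships with pymongo (assuming it is installed; exact numbers depend on content):

        import bson   # ships with pymongo

        long_doc  = {"title": "Hello", "body": "x" * 100}
        short_doc = {"t": "Hello", "b": "x" * 100}

        saving = len(bson.encode(long_doc)) - len(bson.encode(short_doc))
        print(saving, "bytes saved per document")

        # The saving is a fixed number of bytes per document, so it matters
        # only when documents are tiny and numerous; on a multi-kilobyte
        # article it is noise.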

