Search Results

Search found 3758 results on 151 pages for 'efficient'.

Page 100/151

  • Warn user when new data is inserted into the database

    - by João Menighin
    I don't know how to search for this so I'm kinda lost (the two topics I saw here were closed). I have a news website and I want to warn the user when new data is inserted into the database. I want to do that like here on StackOverflow, where we are warned without reloading the page, or like on Facebook, where you are warned about new messages/notifications without reloading. What is the best way to do that? Is it some kind of listener with a timeout that is constantly checking the database? It doesn't sound efficient... Thanks in advance.
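
    One common approach is short polling with a "last seen" marker: the client periodically asks the server only for items newer than the newest one it already has, so each check is a single cheap indexed query rather than a full scan. A minimal server-side sketch in Python; the Flask route, database file, and column names are hypothetical, not taken from the question:

      import sqlite3
      from flask import Flask, jsonify, request

      app = Flask(__name__)

      @app.route("/news/updates")
      def updates():
          # The client sends the highest id it already has; return only newer rows.
          last_id = int(request.args.get("since", 0))
          conn = sqlite3.connect("news.db")
          rows = conn.execute(
              "SELECT id, title FROM news WHERE id > ? ORDER BY id", (last_id,)
          ).fetchall()
          conn.close()
          return jsonify({"items": [{"id": r[0], "title": r[1]} for r in rows]})

    The browser would call this endpoint every few seconds; long polling or WebSockets remove the fixed interval, but the "only send what's newer" idea stays the same.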

    Read the article

  • Debugging ASP.NET in VS

    - by negligible
    A lot of what I'm doing at the moment is figuring out other people's code and adding or adapting functions, so currently I am debugging more than I am writing code of my own. I'm still new to this, a Junior Developer, and I am always finding new ways to improve what I am doing. For example, I recently found This Guide, which had some excellent tips, such as overriding the ToString() method in your classes so that objects display readably when you inspect their parents in the debugger. So I am looking for any other tips or tricks that you more experienced programmers may have picked up or found to make my debugging more efficient, as I recognise it as a big part of programming. Anything is appreciated; I can read websites just fine, so no need to explain it yourself if you have a good link!

    Read the article

  • What is the fastest method to calculate a substring?

    - by Misha Moroshko
    I have a huge "binary" string, like: 1110 0010 1000 1111 0000 1100 1010 0111... Its length is a multiple of 4 and may reach 500,000. I also have a corresponding array: {14, 2, 8, 15, 0, 12, 10, 7, ...} (every number in the array corresponds to 4 bits in the string). Given this string, this array, and a number N, I need to calculate the substring string.substr(4*N, 4), i.e.: for N=0 the result should be 1110, for N=1 the result should be 0010. I need to perform this task many, many times, and my question is: what would be the fastest method to calculate this substring? One method is to calculate the substring straightforwardly: string.substr(4*N, 4). I'm afraid this one is not efficient for such huge strings. Another method is to use array[N].toString(2) and then pad the result with zeros if needed. I'm not sure how fast this is. Maybe you have other ideas?
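
    Since every group of 4 bits is already available as a number in the array, the substring can be produced from the array entry alone: format it in base 2 and left-pad to 4 digits. The question looks like JavaScript; this Python sketch only illustrates the array-lookup idea:

      nibbles = [14, 2, 8, 15, 0, 12, 10, 7]   # mirrors "1110 0010 1000 1111 ..."

      def substring(n):
          # Equivalent to string.substr(4*n, 4): the n-th value as 4 binary digits.
          return format(nibbles[n], "04b")

      assert substring(0) == "1110"
      assert substring(1) == "0010"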

    Read the article

  • The best way to make a CodeIgniter website multi-language: calling from lang arrays depending on the lang selection

    - by artmania
    Hi friends, I've been researching for hours and hours, but I could not find any clear, efficient way to do it :/ I have a CodeIgniter-based website in English and I have to add a Polish language now. What is the best way to make my site work in 2 languages depending on the visitor's selection? Is there a way to create array files for each language and call them in view files depending on a Session value set by the language selection? I don't want to use a database. Appreciate any help! I'm running up against a deadline :/ Thanks!!

    Read the article

  • Which tool should I use for finding out my memory allocation in Perl?

    - by Colin Newell
    I've slurped in a big file using File::Slurp, but given the size of the file I can see that I must have it in memory twice, or perhaps it's getting inflated by being turned into 16-bit Unicode. How can I best diagnose that sort of problem in Perl? The file I pulled in is 800 MB in size, and my Perl process that's analysing that data has roughly 1.6 GB allocated at runtime. I realise that I may be wrong about the reason for the problem, but I'm not sure of the most efficient way to prove or disprove my theory.

    Read the article

  • Is there a better way to do SELECT queries in MySQL and sort them in PHP than this way?

    - by Kent
    I am just learning PHP/MySQL, and one thing I am having to do a lot is display data that was previously inserted into the database in the user's browser. So I am doing this: $select = mysql_query('SELECT * FROM pages'); while ($return = mysql_fetch_assoc($select)) { $title = $return['title']; $author = $return['author']; $content = $return['content']; } Then I can use these variables throughout the page. Now, doing it the above way isn't an issue when I only have 3 columns in a table, but what if I am dealing with a huge database with many more columns? I have a nagging feeling that the pros do it in some more efficient way, where they maybe loop through the table they are selecting from to find all the columns it has and associate them with variables automatically. Is that the case, or is the above how you guys do it too?
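
    The usual answer is to stop naming each column by hand and treat every fetched row as an associative structure, looping over whatever columns it contains (in PHP, the $return array from mysql_fetch_assoc already is that structure). A sketch of the idea in Python, only to illustrate the pattern; the database file is hypothetical:

      import sqlite3

      conn = sqlite3.connect("site.db")          # hypothetical database file
      conn.row_factory = sqlite3.Row             # rows behave like dicts keyed by column

      for row in conn.execute("SELECT * FROM pages"):
          page = {key: row[key] for key in row.keys()}   # every column, no hard-coding
          print(page["title"], page["author"])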

    Read the article

  • What's a good algorithm for searching arrays N and M, in order to find elements in N that also exist in M?

    - by GenTiradentes
    I have two arrays, N and M. They are both arbitrarily sized, though N is usually smaller than M. I want to find out which elements in N also exist in M, in the fastest way possible. To give you an example of one possible instance of the program, N is an array 12 units in size, and M is an array 1,000 units in size. I want to find which elements in N also exist in M. (There may not be any matches.) The more parallel the solution, the better. I used to use a hash map for this, but it's not quite as efficient as I'd like it to be. Typing this out, I just thought of running a binary search of M on sizeof(N) independent threads (using CUDA). I'll see how this works, though other suggestions are welcome.
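
    For a CPU baseline, hashing the larger array once and probing it with each element of the smaller one is hard to beat at these sizes; the binary-search-per-thread idea additionally requires M to be sorted, but the probes are independent and so parallelise naturally. A quick Python sketch of both approaches (the question itself targets C/CUDA):

      import bisect

      N = [3, 17, 42, 999]
      M = list(range(1000))

      # Hash-based: build a set from M once, probe with each element of N.
      m_set = set(M)
      matches = [x for x in N if x in m_set]

      # Binary-search-based: M must be sorted; each probe is O(log len(M)).
      M_sorted = sorted(M)
      def in_m(x):
          i = bisect.bisect_left(M_sorted, x)
          return i < len(M_sorted) and M_sorted[i] == x
      matches_bs = [x for x in N if in_m(x)]

      assert matches == matches_bs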

    Read the article

  • Performing regex on a stream

    - by takoi
    I have some large text files on which I'm going to perform consecutive matching (just capturing, not replacing). I'm thinking it's not such a good idea to keep the whole file in memory, but rather to use a Reader. What I know about the input is that if there's a match, it's not going to span more than 5 lines. So my idea was to have some sort of buffer which just keeps those 5 lines, or so, do the first search, and continue. But it has to "know" where the regex match ended for this to work, e.g. if the match ends at line 2 it should start the next search from there. Is it possible to do something like this in an efficient way?
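
    A sketch of that sliding-window idea in Python (the question doesn't name a language, and the pattern here is made up): keep at most 5 lines in a buffer, search the buffer, and after a match throw away everything up to the end of the match so the next search starts right there.

      import re
      from collections import deque

      MAX_SPAN = 5                                   # a match never spans more lines
      pattern = re.compile(r"BEGIN.*?END", re.S)     # hypothetical pattern

      def matches(lines):                            # 'lines' can be an open file
          window = deque()
          for line in lines:
              window.append(line)
              if len(window) > MAX_SPAN:
                  window.popleft()                   # a match can no longer need this line
              text = "".join(window)
              m = pattern.search(text)
              if m:
                  yield m.group(0)
                  # keep only what follows the match, so the next search starts there
                  window.clear()
                  window.extend(text[m.end():].splitlines(keepends=True))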

    Read the article

  • Manage dirty rect efficiently

    - by Tianzhou Chen
    Hi all, I am implementing a view system and I want to keep track of all the dirty rects. It seems my dirty rect management is a bottleneck for the whole system. On one hand, invalidating the bounding box of the dirty region seems to be an easy approach. But consider a situation like this: say I have a client area of 100x100, one dirty rect at (0, 0, 1, 1) and another dirty rect at (99, 99, 1, 1). Invalidating the bounding box, which turns out to be 100x100, is not efficient at all. So I want to ask if someone can give any hints or a link to related literature. Thanks in advance!
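
    A common middle ground between one bounding box and an unbounded rect list is to keep a small list of dirty rects and merge a new rect into an existing one only when the union would not waste much area, collapsing the cheapest pair once the list grows too long. A rough Python sketch of that bookkeeping; the thresholds are arbitrary:

      def area(r):
          x, y, w, h = r
          return w * h

      def union(a, b):
          x = min(a[0], b[0]); y = min(a[1], b[1])
          return (x, y,
                  max(a[0] + a[2], b[0] + b[2]) - x,
                  max(a[1] + a[3], b[1] + b[3]) - y)

      def add_dirty(rects, new, max_rects=8, max_waste=64):
          for i, r in enumerate(rects):
              u = union(r, new)
              if area(u) - area(r) - area(new) <= max_waste:
                  rects[i] = u                     # cheap merge: little wasted area
                  return
          rects.append(new)                        # far away: keep it separate
          if len(rects) > max_rects:
              # too many fragments: merge the pair whose union is smallest
              i, j = min(((i, j) for i in range(len(rects))
                                 for j in range(i + 1, len(rects))),
                         key=lambda p: area(union(rects[p[0]], rects[p[1]])))
              rects[i] = union(rects[i], rects[j])
              del rects[j]

    With the question's example, (0, 0, 1, 1) and (99, 99, 1, 1) stay as two separate rects because their union would waste almost the whole 100x100 area.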

    Read the article

  • Detecting similar words among n text documents

    - by javanes
    Hi; I have n documents and want to find common words that are included in these documents. For example, I want to be able to say that (n-3) documents include the word "web". Certainly I can do this with basic data structures, but there may be a more efficient algorithm, or a way to handle the same word with different suffixes. Is there an algorithm for such purposes? I am unfamiliar with the data-mining world. More generally, is there a term for the effort of finding similarities between different documents? If there is, it will make my research easier. Thanks.
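
    Counting document frequency needs nothing fancier than a per-document set of words added into one counter; suffixes are normally handled by stemming each word first. A Python sketch with made-up documents and a deliberately naive stand-in for a real stemmer (e.g. Porter's):

      from collections import Counter

      docs = ["the web is big",
              "web pages and webs of links",
              "nothing relevant here"]

      def normalize(word):
          return word.lower().rstrip("s")        # crude; a real stemmer does this properly

      doc_freq = Counter()
      for text in docs:
          doc_freq.update({normalize(w) for w in text.split()})   # set: one count per doc

      print(doc_freq["web"])                     # -> 2 documents contain "web"/"webs"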

    Read the article

  • Is shortening property names worth it?

    - by raam86
    In the How To Node article "Blog rolling with node.js and mongoDB", the author mentions it's a good idea to shorten property names: "...oft-reported issue with mongoDB is the size of the data on the disk... each and every record stores all the field-names... This means that it can often be more space-efficient to have properties such as 't', or 'b' rather than 'title' or 'body', however for fear of confusion I would avoid this unless truly required!" I am aware of solutions for how to do it; I am more interested in: when is it truly required?

    Read the article

  • Force download menu for remote files

    - by o-logn
    Hey, I would like users to upload links on my site. When another user clicks on a link (e.g. to a PDF file), I would like the download popup to show instead of actually displaying the PDF in the browser. I know I can use Response.AddHeader/Response.WriteFile to achieve this, but the WriteFile method requires a virtual path. However, the links uploaded by the user will point to external servers. Can I still force the download popup to show and, if so, what would be the most efficient way of doing it? Thanks for any advice
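
    The usual pattern is to proxy the remote file through your own server and send it back with a Content-Disposition: attachment header, which is what triggers the save dialog; in ASP.NET that would mean Response.AddHeader plus streaming the remote bytes instead of WriteFile. A sketch of the idea in Python/Flask, with hypothetical route and parameter names (a real version would also validate the URL):

      import requests
      from flask import Flask, Response, request

      app = Flask(__name__)

      @app.route("/download")
      def download():
          url = request.args["url"]                          # the externally hosted file
          upstream = requests.get(url, stream=True)
          headers = {
              # This header makes the browser offer a download instead of rendering.
              "Content-Disposition": 'attachment; filename="file.pdf"',
              "Content-Type": upstream.headers.get("Content-Type",
                                                   "application/octet-stream"),
          }
          return Response(upstream.iter_content(chunk_size=8192), headers=headers)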

    Read the article

  • Data format for content heavy iPhone app - Plist or XML?

    - by Toby
    Hello, I'm building an iPhone app that is essentially a book; it will be bundled with a lot of text-heavy content. I considered bundling the data as XML and loading it when the application starts, but the XML would contain a lot of nested structures and be a bit of a pain to parse. Would it be better to use a plist? I'm concerned about memory usage, and plists are loaded entirely into memory - can they be parsed in chunks? Is there a maximum size to a plist, and how efficient are they? I'm not sure how big the bundled content is going to be yet, but I should imagine it could be anywhere from 500 KB to 4 MB. Thanks in advance.

    Read the article

  • To create a new DB connection or not?

    - by Yeti
    I'm running a cron job (every 15 minutes) which takes about a minute to execute. It makes lots of API calls and stores data to the database. Right now I create a MySQL connection at the beginning and use the same connection throughout the code. Most of the time is spent making the API calls. Will it be more efficient to create a new database connection only when it's time to store the data, like this?
    1. Kill the last connection
    2. Wait for the API call to complete
    3. Create a new DB connection
    4. Execute the query
    5. Go to 1

    Read the article

  • Java technologies for web development

    - by Alex
    Hello. I'm a PHP programmer, but I'm extremely interested in learning Java, so I decided to change speciality from PHP to Java. At the moment I have an opportunity to try to make a quite simple web application (it should contain 2-3 forms, several pages with information from the database, and an authorization module), and I have a chance to choose any technology I want. Besides, I have about 3 months for this task. I've decided to develop the site with Java technologies for the purpose of studying. I've already read a book about Java ("Java 2: The Complete Reference" by P. Naughton) and currently I'm reading "Thinking in Java" by B. Eckel. I clearly understand it's not enough for efficient development, but I want, at least, to try. I would very much appreciate advice on which framework or technology to choose (Spring, Grails, etc.) and which primary aspects and technologies of Java I should pay attention to. Thank you in advance.

    Read the article

  • Stringification of a macro value

    - by SF.
    I faced a problem - I need to use a macro value both as a string and as an integer.

      #define RECORDS_PER_PAGE 10
      /* ... */
      #define REQUEST_RECORDS \
          "SELECT Fields FROM Table WHERE Conditions" \
          " OFFSET %d * " #RECORDS_PER_PAGE \
          " LIMIT " #RECORDS_PER_PAGE ";"

      char result_buffer[RECORDS_PER_PAGE][MAX_RECORD_LEN];
      /* ...and some more uses of RECORDS_PER_PAGE, elsewhere... */

    This fails with a message about a "stray #", and even if it worked, I guess I'd get the macro names stringified, not their values. Of course I can feed the values to the final method ("LIMIT %d ", page*RECORDS_PER_PAGE), but it's neither pretty nor efficient. It's times like this when I wish the preprocessor didn't treat strings in a special way and would process their content just like normal code. For now, I kludged it with #define RECORDS_PER_PAGE_TXT "10" but understandably, I'm not happy about it. How do I get it right?

    Read the article

  • CSV parser in C++

    - by User1
    All I need is a good CSV file parser for C++. At this point it can really just be a comma-delimited parser (i.e. don't worry about escaping newlines and commas). The main need is a line-by-line parser that will return a vector for the next line each time the method is called. I found this article which looks quite promising: http://www.boost.org/doc/libs/1_35_0/libs/spirit/example/fundamental/list_parser.cpp I've never used Boost's Spirit, but I am willing to try it. Is it overkill/bloated, or is it fast and efficient? Does anyone have faster algorithms using the STL or anything else? Thanks!

    Read the article

  • Getting list of fields back from 'use fields' pragma?

    - by makenai
    So I'm familiar with the use fields pragma in Perl that can be used to restrict the fields that are stored in a class:

      package Fruit;
      use fields qw( color shape taste );

      sub new {
          my ( $class, $params ) = @_;
          my $self = fields::new( $class ) unless ref $class;
          foreach my $name ( keys %$params ) {
              $self->{ $name } = $params->{ $name };
          }
          return $self;
      }

    My question is: once I've declared the fields at the top, how can I get the list back, say because I want to generate accessors dynamically? Is keys %FIELDS the only way? Secondarily, is there a more efficient way to pre-populate the fields in the constructor than looping through and assigning each parameter as I am above? Thanks!

    Read the article

  • Displaying objects based on whether a user is logged in or not

    - by MaxMackie
    I'm learning about PHP sessions for user authentication on my website. I know how to restrict the viewing of a complete page using sessions (simply check if the 'uid' session variable is set; if it is, show the content, and if not, redirect to an error). However, I'm trying to figure out the best way to selectively show and hide different objects (divs, text, images) based on whether a user is logged in or not. Is it as simple as checking for the 'uid' session variable and displaying based on whether it is set or not? Is there a more efficient way of doing this if there are a lot of conditional elements on a page?

    Read the article

  • Efficiently Reshaping/Reordering Numpy Array to Properly Ordered Tiles (Image)

    - by Phelix
    I would like to be able to somehow reorder a numpy array for efficient processing of tiles. What I got:

      >>> A = np.array([[1,2],[3,4]]).repeat(2,0).repeat(2,1)
      >>> A   # image-like array
      array([[1, 1, 2, 2],
             [1, 1, 2, 2],
             [3, 3, 4, 4],
             [3, 3, 4, 4]])
      >>> A.reshape(2,2,4)
      array([[[1, 1, 2, 2],
              [1, 1, 2, 2]],
             [[3, 3, 4, 4],
              [3, 3, 4, 4]]])

    What I want is X:

      >>> X
      array([[[1, 1, 1, 1],
              [2, 2, 2, 2]],
             [[3, 3, 3, 3],
              [4, 4, 4, 4]]])

    so that I'm able to do something like:

      >>> X[X.sum(2) > 12] -= 1
      >>> X
      array([[[1, 1, 1, 1],
              [2, 2, 2, 2]],
             [[3, 3, 3, 3],
              [3, 3, 3, 3]]])

    Is this possible without a slow Python loop? Bonus: conversion back from X to A. Edit: How can I get X from A?
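
    The tile reordering can be done without a Python loop by splitting each axis into (tile index, offset inside tile) and swapping the two middle axes. A sketch assuming, as in the question, 2x2 tiles on a 4x4 array:

      import numpy as np

      A = np.array([[1, 2], [3, 4]]).repeat(2, 0).repeat(2, 1)   # 4x4 image-like array

      # Split rows and columns into (tile index, offset inside tile), group the two
      # tile indices together, then flatten each 2x2 tile to 4 values.
      X = A.reshape(2, 2, 2, 2).swapaxes(1, 2).reshape(2, 2, 4)

      # Bonus: going back from X to A is the same dance in reverse.
      A_back = X.reshape(2, 2, 2, 2).swapaxes(1, 2).reshape(4, 4)
      assert (A_back == A).all()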

    Read the article

  • Help with a query

    - by stackoverflowuser
    Hi, based on the following table:

      ID      Effort    Name
      -------------------------
      1       1         A
      2       1         A
      3       8         A
      4       10        B
      5       4         B
      6       1         B
      7       10        C
      8       3         C
      9       30        C

    I want to check whether the total effort against a name is less than 40; if so, add a row with effort = 40 - (total effort) for that name. The ID of the new row can be anything. If the total effort is greater than 40, then truncate the data in one of the rows to make the total 40. So after applying the logic above, the table will be:

      ID      Effort    Name
      -------------------------
      1       1         A
      2       1         A
      3       8         A
      10      30        A
      4       10        B
      5       4         B
      6       1         B
      11      25        B
      7       10        C
      8       3         C
      9       27        C

    I was thinking of opening a cursor, keeping a counter of the total effort, and based on the logic inserting existing and new rows into another temporary table. I am not sure if this is an efficient way to deal with this. I would like to learn if there is a better way.
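
    In SQL this can usually be done set-based rather than with a cursor: group by Name to get SUM(Effort), insert a padding row where the total falls short of 40, and update one row where it overshoots. The bookkeeping itself is simple, as this Python sketch of the same logic shows (same sample data; it assumes the excess never exceeds the row being truncated):

      rows = [(1, 1, "A"), (2, 1, "A"), (3, 8, "A"),
              (4, 10, "B"), (5, 4, "B"), (6, 1, "B"),
              (7, 10, "C"), (8, 3, "C"), (9, 30, "C")]
      TARGET = 40
      next_id = max(r[0] for r in rows) + 1

      totals = {}
      for _id, effort, name in rows:
          totals[name] = totals.get(name, 0) + effort

      for name, total in sorted(totals.items()):
          if total < TARGET:                       # pad with a new row
              rows.append((next_id, TARGET - total, name))
              next_id += 1
          elif total > TARGET:                     # truncate the last row for the name
              i = max(k for k, r in enumerate(rows) if r[2] == name)
              rid, effort, _ = rows[i]
              rows[i] = (rid, effort - (total - TARGET), name)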

    Read the article

  • Effective way of string splitting in C#

    - by openidsujoy
    I have a complete string like this:

      N:Pay in Cash++RGI:40++R:200++T:Purchase++IP:N++IS:N++PD:PC++UCP:598.80++UPP:0.00++TCP:598.80++TPP:0.00++QE:1++QS:1++CPC:USD++PPC:Points++D:Y++E:Y++IFE:Y++AD:Y++IR:++MV:++CP:~ ~N:ERedemption++RGI:42++R:200++T:Purchase++IP:N++IS:N++PD:PC++UCP:598.80++UPP:0.00++TCP:598.80++TPP:0.00++QE:1++QS:1++CPC:USD++PPC:Points++D:Y++E:Y++IFE:Y++AD:Y++IR:++MV:++CP:

    The structure is as follows:
    - it's a list of POs (Payment Options), which are separated by ~~
    - the list may contain one or more POs
    - a PO contains only key-value pairs, which are separated by :
    - spaces are denoted by ++

    I need to extract the values for the keys "RGI" and "N". I can do it via a for loop, but I want an efficient way to do this. Any help on this?
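
    A single pass of splits covers it: split on the PO separator, split each PO on ++, and split each field on the first :. The question is C#, where string.Split and a Dictionary do the same job; this Python sketch (with a trimmed sample, assuming ~~ separates POs and ++ separates the pairs as in the example) just shows the shape of the parse:

      raw = ("N:Pay in Cash++RGI:40++R:200++T:Purchase"
             "~~"
             "N:ERedemption++RGI:42++R:200++T:Purchase")

      payment_options = []
      for po in raw.split("~~"):
          pairs = (field.split(":", 1) for field in po.split("++") if ":" in field)
          payment_options.append(dict(pairs))

      for po in payment_options:
          print(po.get("N"), po.get("RGI"))     # -> "Pay in Cash 40", "ERedemption 42"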

    Read the article

  • Index for wildcard match of end of string

    - by Anders Abel
    I have a table of phone numbers, storing the phone number as varchar(20). I have a requirement to implement searching on both entire numbers and on only the last part of a number, so a typical query will be: SELECT * FROM PhoneNumbers WHERE Number LIKE '%1234' How can I put an index on the Number column to make those searches efficient? Is there a way to create an index that sorts the records on the reversed string? Another option might be to reverse the numbers before storing them, which would give queries like: SELECT * FROM PhoneNumbers WHERE ReverseNumber LIKE '4321%' However, that would require all users of the database to always reverse the string. It might be solved by storing both the normal and the reversed number and having the reversed number updated by a trigger on insert/update. But that kind of solution is not very elegant. Any other suggestions?

    Read the article

  • Hudson: where to download a file and how to stop specific running builds?

    - by Kim Jong Woo
    I have a file that is generated on the Hudson server at /var/lib/hudson/jobs/jobtitle/1/out.txt. I need to fetch this file, but doing a GET request for http://myhudson:8090/job/jobtitle/1/out.txt doesn't actually locate the file. Basically, I have another box that will grab this file from the Hudson server and make out.txt available for download. Another challenge is the build number directories. How would I be able to use the Hudson API to stop or delete specific running builds? I am forced to iterate through all build numbers and send a STOP or DELETE API call from PHP, using wget to make the REST call. This is not very efficient. for ($i = 0; $i < 3000; $i++) { exec('wget -O /dev/null "http://myhudson:8090/job/jobtitle/' . $i . '/stop"'); }

    Read the article

  • Efficiently compare two BitArrays of the same length

    - by BobTurbo
    How would I do this? I am trying to count the positions at which both arrays have the value TRUE/1 at the same index. As you can see, my code has multiple bit arrays and is looping through each one, comparing it with comparisonArray in another loop. It doesn't seem to be very efficient and I need it to be. foreach (var bitArrayTuple in bitArrayList) { for (int i = 0; i < arrayLength; i++) if (bitArrayTuple.Item2[i] && comparisonArray[i]) bitArrayTuple.Item1++; } where Item1 is the count and Item2 is a BitArray.
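
    The standard trick is to AND the two bit sets and count the set bits of the result instead of testing every index; in C# that would be BitArray.And plus a popcount over the underlying words. A Python sketch of the same idea, using plain integers as the bit sets (made-up values):

      comparison = 0b1011011001001110
      bit_arrays = [0b1010001001000110, 0b0001011000001010]

      # AND the sets, then count the surviving 1 bits (the popcount).
      counts = [bin(bits & comparison).count("1") for bits in bit_arrays]
      print(counts)   # -> [6, 5]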

    Read the article
