Search Results

Search found 1091 results on 44 pages for 'efficiency'.

Page 11/44 | < Previous Page | 7 8 9 10 11 12 13 14 15 16 17 18  | Next Page >

  • Generics and Performance question.

    - by Tarmon
    Hey Everyone, I was wondering if anyone could look over a class I wrote, I am receiving generic warnings in Eclipse and I am just wondering if it could be cleaned up at all. All of the warnings I received are surrounded in ** in my code below. The class takes a list of strings in the form of (hh:mm AM/PM) and converts them into HourMinute objects in order to find the first time in the list that comes after the current time. I am also curious about if there are more efficient ways to do this. This works fine but the student in me just wants to find out how I could do this better. public class FindTime { private String[] hourMinuteStringArray; public FindTime(String[] hourMinuteStringArray){ this.hourMinuteStringArray = hourMinuteStringArray; } public int findTime(){ HourMinuteList hourMinuteList = convertHMStringArrayToHMArray(hourMinuteStringArray); Calendar calendar = new GregorianCalendar(); int hour = calendar.get(Calendar.HOUR_OF_DAY); int minute = calendar.get(Calendar.MINUTE); HourMinute now = new HourMinute(hour,minute); int nearestTimeIndex = findNearestTimeIndex(hourMinuteList, now); return nearestTimeIndex; } private int findNearestTimeIndex(HourMinuteList hourMinuteList, HourMinute now){ HourMinute current; int position = 0; Iterator<HourMinute> iterator = **hourMinuteList.iterator()**; while(iterator.hasNext()){ current = (HourMinute) iterator.next(); if(now.compareTo(current) == -1){ return position; } position++; } return position; } private static HourMinuteList convertHMStringArrayToHMArray(String[] times){ FindTime s = new FindTime(new String[1]); HourMinuteList list = s.new HourMinuteList(); String[] splitTime = new String[3]; for(String time : times ){ String[] tempFirst = time.split(":"); String[] tempSecond = tempFirst[1].split(" "); splitTime[0] = tempFirst[0]; splitTime[1] = tempSecond[0]; splitTime[2] = tempSecond[1]; int hour = Integer.parseInt(splitTime[0]); int minute = Integer.parseInt(splitTime[1]); HourMinute hm; if(splitTime[2] == "AM"){ hm = s.new HourMinute(hour,minute); } else if((splitTime[2].equals("PM")) && (hour < 12)){ hm = s.new HourMinute(hour + 12,minute); } else{ hm = s.new HourMinute(hour,minute); } **list.add(hm);** } return list; } class **HourMinuteList** extends **ArrayList** implements RandomAccess{ } class HourMinute implements **Comparable** { int hour; int minute; public HourMinute(int hour, int minute) { setHour(hour); setMinute(minute); } int getMinute() { return this.minute; } String getMinuteString(){ if(this.minute < 10){ return "0" + this.minute; }else{ return "" + this.minute; } } int getHour() { return this.hour; } void setHour(int hour) { this.hour = hour; } void setMinute(int minute) { this.minute = minute; } @Override public int compareTo(Object aThat) { if (aThat instanceof HourMinute) { HourMinute that = (HourMinute) aThat; if (this.getHour() == that.getHour()) { if (this.getMinute() > that.getMinute()) { return 1; } else if (this.getMinute() < that.getMinute()) { return -1; } else { return 0; } } else if (this.getHour() > that.getHour()) { return 1; } else if (this.getHour() < that.getHour()) { return -1; } else { return 0; } } return 0; } } If you have any questions let me know. Thanks, Rob

    Read the article

  • What are some typing patterns using a standard QWERTY keyboard that work well for you as a programmer?

    - by OrbMan
    After hunting and pecking for about 35 years, I have decided to learn to type. I am learning QWERTY and have learned about 2/3 of the letters so far. While learning, I have noticed how asymmetrical the keyboard is, which really bothers me. (I will probably switch to a symmetrical keyboard eventually, but for now am trying to do everything as standard and "correct" as possible.) Although I am not there yet in my lessons, it seems that many of the keys I am going to use as a C# web developer are supposed to be typed by the pinky of my right hand. Are there any typing patterns you have developed that are more ergonomic (or faster) when typing large volumes of code rife with braces, colons, semi-colons and quotes? Or should I just accept the fact that every other key is going to be hit with my right pinky? It is not that speed is such a huge concern, as much as that it seems so inefficient to rely on one finger so much... As an example, some of the conventions I use as a hunt-and-peck typist, like typing the open and close braces right away with my index and middle finger and then hitting the left arrow key to fill in the inner content, don't seem to work as well with just a pinky. What are some typing patterns using a standard QWERTY keyboard that work really well for you as a programmer? Update: US layout, and I use the home row. Update 2: Despite my best efforts to the contrary, people are interpreting this question as "how do I learn to type" or "what keyboard should I use". Take it as a given that I will learn to type, and that I will be doing so on a standard QWERTY layout keyboard, not Dvorak. I am interested in acquiring a skill that will be useful wherever I go.

    Read the article

  • How do I efficiently write a "toggle database value" function in AJAX?

    - by AmbroseChapel
    Say I have a website which shows the user ten images and asks them to categorise each image by clicking on buttons. A button for "funny", a button for "scary", a button for "pretty" and so on. These buttons aren't exclusive. A picture can be both funny and scary. The user clicks the "funny" button. An AJAX request is sent off to the database to mark that image as funny. The "funny" button lights up, by assigning a class in the DOM to mark it as "on". But the user made a mistake. They meant to hit the next button over. They should click "funny" again to turn it off, right? At this point I'm not sure what's the most efficient way to proceed. The database knows that the "funny" flag is set, but it's inefficient to query the database every time a button is clicked to ask whether the flag is set or not, and then go on with a second database call to toggle it. Should I infer the state of the database flag from the DOM, i.e. if that button has the class "on" then the flag must be set, and branch at that point? Or would it be better to have a data structure in Javascript in the page which duplicates the state of each image in the database, so that every time I set the database flag to true, I also set the value in the Javascript data to true and so on?
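
    One hedged sketch of the idea implied above: instead of asking the server whether the flag is currently set, let the client derive the desired state from the button's "on" class and send that explicit value, so the server performs a single idempotent write with no read-before-write. This is only an illustration with a Python DB-API connection; the table name images, the handler name set_image_flag and the flag whitelist are assumptions, not part of the original question.

    ```python
    # Hypothetical handler (names and schema are assumptions): the client sends
    # the explicit desired state, derived from whether the button already has
    # the "on" class, so one idempotent UPDATE replaces a read-then-toggle pair.
    def set_image_flag(conn, image_id, flag_name, is_set):
        allowed = {"funny", "scary", "pretty"}          # whitelist of flag columns
        if flag_name not in allowed:
            raise ValueError("unknown flag: %r" % flag_name)
        cur = conn.cursor()
        # The value and id are parameterised; the column name comes from the whitelist.
        cur.execute("UPDATE images SET %s = %%s WHERE id = %%s" % flag_name,
                    (1 if is_set else 0, image_id))
        conn.commit()
        cur.close()
    ```

    The same shape works on the JavaScript side: toggle the class first, then post the resulting boolean, so neither side ever needs to read the flag before writing it.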

    Read the article

  • What container type provides better (average) performance than std::map?

    - by Truncheon
    In the following example a std::map structure is filled with 26 values from A - Z (for key) and 0 - 26 for value. The time taken (on my system) to lookup the last entry (10000000 times) is roughly 250 ms for the vector, and 125 ms for the map. (I compiled using release mode, with O3 option turned on for g++ 4.4) But if for some odd reason I wanted better performance than the std::map, what data structures and functions would I need to consider using? I apologize if the answer seems obvious to you, but I haven't had much experience in the performance critical aspects of C++ programming. UPDATE: This example is rather trivial and hides the true complexity of what I'm trying to achieve. My real world project is a simple scripting language that uses a parser, data tree, and interpreter (instead of a VM stack system). I need to use some kind of data structure (perhaps map) to store the variables names created by script programmers. These are likely to be pretty randomly named, so I need a lookup method that can quickly find a particular key within a (probably) fairly large list of names. #include <ctime> #include <map> #include <vector> #include <iostream> struct mystruct { char key; int value; mystruct(char k = 0, int v = 0) : key(k), value(v) { } }; int find(const std::vector<mystruct>& ref, char key) { for (std::vector<mystruct>::const_iterator i = ref.begin(); i != ref.end(); ++i) if (i->key == key) return i->value; return -1; } int main() { std::map<char, int> mymap; std::vector<mystruct> myvec; for (int i = 'a'; i < 'a' + 26; ++i) { mymap[i] = i - 'a'; myvec.push_back(mystruct(i, i - 'a')); } int pre = clock(); for (int i = 0; i < 10000000; ++i) { find(myvec, 'z'); } std::cout << "linear scan: milli " << clock() - pre << "\n"; pre = clock(); for (int i = 0; i < 10000000; ++i) { mymap['z']; } std::cout << "map scan: milli " << clock() - pre << "\n"; return 0; }

    Read the article

  • MySQLi Prepared Statement Fetch Row

    - by pws5068
    I'm looking for an efficient way to select a specific row with a PHP prepared statement. $iDB = new mysqliDB(); $stmt = $iDB->prepare($sql); $stmt->bind_param('s',$searhStr); $stmt->execute(); $stmt->store_result(); $stmt->bind_result($objID,$typeID,$year,$updated); // Now jump to row $rowNumber echo("Obj ID = {$objID}"); Any suggestions? Thanks!
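
    A possible answer, sketched here in Python DB-API terms since the asker's $sql isn't shown: let the database do the jumping by adding LIMIT offset, 1 to the query, rather than fetching the whole result set and skipping rows client-side. The table name objects, the name column and the ORDER BY below are assumptions; in MySQLi the change is simply appending LIMIT ?, 1 to $sql and binding $rowNumber.

    ```python
    def fetch_row_number(conn, search_str, row_number):
        """Ask the server for exactly one row instead of skipping client-side."""
        offset = int(row_number)                        # zero-based row to jump to
        cur = conn.cursor()
        cur.execute(
            "SELECT objID, typeID, year, updated FROM objects "
            "WHERE name LIKE %s ORDER BY updated DESC "
            "LIMIT {0}, 1".format(offset),
            (search_str,),
        )
        row = cur.fetchone()                            # None if offset is past the end
        cur.close()
        return row
    ```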

    Read the article

  • Database design and foreign keys: Where should they be added in related tables?

    - by Carvell Fenton
    I have a relatively simple subset of tables in my database for tracking something called sessions. These are academic sessions (think offerings of a particular program). The tables representing a session's information are: sessions, session_terms, session_subjects, session_mark_item_info, session_marks. All of these tables have their own primary keys, and they form a tree: sessions have terms, terms have subjects, subjects have mark items, etc. So each one would have at least its "parent's" foreign key. My question is: design-wise, is it a good idea to also include the session's primary key in the other tables as a foreign key, so related session items can be selected easily, or is that too much redundancy? If I include the session foreign key (or all parent foreign keys from tables up the hierarchy) in all the tables, I can easily select all the marks for a session with something like SELECT mark FROM session_marks WHERE sessionID=... If I don't, then I would have to combine selects with something like WHERE something IN (SELECT... Which approach is "more correct" or efficient? Thanks in advance!
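
    For concreteness, the two query shapes being weighed look roughly like this (a sketch only; the intermediate id/foreign-key column names are assumptions based on the described hierarchy):

    ```python
    # Sketch of the two shapes; intermediate id/FK column names are assumptions.
    DIRECT = "SELECT mark FROM session_marks WHERE sessionID = %s"

    NESTED = """
    SELECT mark FROM session_marks
    WHERE mark_item_id IN (
        SELECT id FROM session_mark_item_info WHERE subject_id IN (
            SELECT id FROM session_subjects WHERE term_id IN (
                SELECT id FROM session_terms WHERE sessionID = %s)))
    """

    def marks_for_session(conn, session_id, denormalised=True):
        cur = conn.cursor()
        cur.execute(DIRECT if denormalised else NESTED, (session_id,))
        rows = cur.fetchall()
        cur.close()
        return rows
    ```

    The denormalised sessionID column costs a little redundancy but turns the lookup into one indexed scan; the nested form stays fully normalised but walks the whole tree on every query.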

    Read the article

  • Searching with LINQ

    - by Phil
    I have a collection of objects, each with an int Frame property. Given an int, I want to find the object in the collection that has the closest Frame. Here is what I'm doing so far: public static void Search(int frameNumber) { var differences = (from rec in _records select new { FrameDiff = Math.Abs(rec.Frame - frameNumber), Record = rec }).OrderBy(x => x.FrameDiff); var closestRecord = differences.FirstOrDefault().Record; //continue work... } This is great and everything, except there are 200,000 items in my collection and I call this method very frequently. Is there a relatively easy, more efficient way to do this?
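
    One common way to avoid building 200,000 difference objects per call, assuming the frame numbers can be kept sorted between calls: binary-search for the insertion point and compare the two neighbours. Sketched here in Python with the standard bisect module (the question is C#, where a sorted List<int> plus BinarySearch plays the same role); a mapping from frame number back to the original record is assumed to exist.

    ```python
    import bisect

    def closest_frame_index(sorted_frames, frame_number):
        """Index (into sorted_frames) of the frame closest to frame_number."""
        i = bisect.bisect_left(sorted_frames, frame_number)
        if i == 0:
            return 0
        if i == len(sorted_frames):
            return len(sorted_frames) - 1
        before, after = sorted_frames[i - 1], sorted_frames[i]
        return i if (after - frame_number) < (frame_number - before) else i - 1
    ```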

    Read the article

  • PHP Serialize Function - Adding serialized data to MySQL and then fetching and displaying it

    - by Abhilash Shukla
    I want to know whether the PHP serialize function is 100% secure, and also, if we store serialized data in a database and want to do something with it after fetching it, whether that is a good approach. For example: I have a website with different user privileges, and I want to store the permission settings for a particular privilege in my database (this data would be stored through PHP's serialize function). When a user logs in, I want to fetch this data and set the privileges for that user. I am able to do this; what I want to know is whether it is the best way, or whether something more efficient can be done. Also, I was going through the PHP manual and found this code. Can anybody explain a bit of what's happening in it (especially why base64_encode is used)? <?php function mySerialize( $obj ) { return base64_encode(gzcompress(serialize($obj))); } function myUnserialize( $txt ) { return unserialize(gzuncompress(base64_decode($txt))); } ?> Also, if somebody can provide their own code showing the most efficient way to do this, that would be great. Thanks.
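
    On the manual snippet: serialize produces a string, gzcompress shrinks it to binary bytes, and base64_encode turns those bytes into plain ASCII that is safe to store in a TEXT column or pass around without escaping problems; decoding reverses the layers in the opposite order. The Python analogue below is only an illustration of the same layering (and, like unserialize, pickle must never be fed untrusted input, which is the real security caveat behind the "100% secure" question).

    ```python
    import base64, pickle, zlib

    def my_serialize(obj):
        # serialize -> compress -> base64: compression shrinks the payload, and
        # base64 converts the binary compressed bytes into plain ASCII text.
        return base64.b64encode(zlib.compress(pickle.dumps(obj)))

    def my_unserialize(txt):
        return pickle.loads(zlib.decompress(base64.b64decode(txt)))
    ```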

    Read the article

  • PHP News Feed Database & Design

    - by pws5068
    I'm designing a News Feed system using PHP/MySQL similar to facebook's. I have asked a similar question before but now I've changed the design and I'm looking for feedback. Example News: User_A commented on User_B's new album. "Hey man nice pictures!" User_B added a new Photo to [his/her] profile. [show photo thumbnail] Initially, I implemented this using excessive columns for Obj1:Type1 | Obj2:Type2 | etc.. Now the design is set up using a couple special keywords, and actor/receiver relationships. My database uses a table of messages joined on a table containing userid,actionid,receiverid,receiverObjectTypeID, Here's a condensed version of what it will look like once joined: News_ID | User_ID | Message | Timestamp 2643 A %a commented on %o's new %r. SomeTimestamp 2644 B %a added a new %r to [his/her] profile. SomeTimestamp %a = the User_ID of the person doing the action %r = the receiving object %o = the owner of the receiving object (for example the owner of the album) (NULL if %r is a user) Questions: Is this a smart (efficient/scalable) way to move forward? How can I show messages like: "User_B added 4 new photos to his profile."?
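
    A hedged sketch for the second question ("User_B added 4 new photos to his profile"): roll identical actor/template rows up at read time and let the renderer choose a plural wording when the count is greater than one. The table and column names below are assumptions, not the asker's exact schema.

    ```python
    ROLLUP = """
    SELECT User_ID, Message, COUNT(*) AS n, MAX(Timestamp) AS latest
    FROM   news_messages
    WHERE  Timestamp > %s
    GROUP  BY User_ID, Message, DATE(Timestamp)
    ORDER  BY latest DESC
    """
    ```

    Each returned row then renders with the existing %a/%r substitution, switching to a plural template such as "added %d new %rs" whenever n > 1.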

    Read the article

  • What is a faster way of merging the values of this Python structure into a single dictionary?

    - by jcoon
    I've refactored how the merged-dictionary (all_classes) below is created, but I'm wondering if it can be more efficient. I have a dictionary of dictionaries, like this: groups_and_classes = {'group_1': {'class_A': [1, 2, 3], 'class_B': [1, 3, 5, 7], 'class_c': [1, 2], # ...many more items like this }, 'group_2': {'class_A': [11, 12, 13], 'class_C': [5, 6, 7, 8, 9] }, # ...and many more items like this } A function creates a new object from groups_and_classes like this (the function to create this is called often): all_classes = {'class_A': [1, 2, 3, 11, 12, 13], 'class_B': [1, 3, 5, 7, 9], 'class_C': [1, 2, 5, 6, 7, 8, 9] } Right now, there is a loop that does this: all_classes = {} for group in groups_and_classes.values(): for c, vals in group.iteritems(): for v in vals: if all_classes.has_key(c): if v not in all_classes[c]: all_classes[c].append(v) else: all_classes[c] = [v] So far, I changed the code to use a set instead of a list since the order of the list doesn't matter and the values need to be unique: all_classes = {} for group in groups_and_classes.values(): for c, vals in group.iteritems(): try: all_classes[c].update(set(vals)) except KeyError: all_classes[c] = set(vals) This is a little nicer, and I didn't have to convert the sets to lists because of how all_classes is used in the code. Question: Is there a more efficient way of creating all_classes (aside from building it at the same time groups_and_classes is built, and changing everywhere this function is called)?
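
    Building on the set-based version: collections.defaultdict removes the try/except entirely, and set.update already accepts any iterable, so the explicit set(vals) conversion isn't needed. A minimal sketch, keeping the Python 2 iterator methods used in the question:

    ```python
    from collections import defaultdict

    def merge_classes(groups_and_classes):
        """Union the per-group lists into one dict of sets in a single pass."""
        all_classes = defaultdict(set)
        for group in groups_and_classes.itervalues():   # .values() on Python 3
            for name, vals in group.iteritems():        # .items() on Python 3
                all_classes[name].update(vals)          # update accepts any iterable
        return all_classes
    ```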

    Read the article

  • Best way to simplify this code and make it more efficient

    - by Derek
    My question is: is there a way to make this code more efficient or write it in a simpler way? It's JavaScript, by the way. switch (tempvar1) { case 1: currentSlide = 'slide1'; showaslide('ppslide1'); break; case 2: currentSlide = 'slide2'; showaslide('ppslide2'); break; case 3: currentSlide = 'slide3'; showaslide('ppslide3'); break; case 4: currentSlide = 'slide4'; showaslide('ppslide4'); break; case 5: currentSlide = 'slide5'; showaslide('ppslide5'); break; case 6: currentSlide = 'slide6'; showaslide('ppslide6'); break; // 20 total cases }

    Read the article

  • Technique to remove common words (and their plural versions) from a string

    - by Jake M
    I am attempting to find tags(keywords) for a recipe by parsing a long string of text. The text contains the recipe ingredients, directions and a short blurb. What do you think would be the most efficient way to remove common words from the tag list? By common words, I mean words like: 'the', 'at', 'there', 'their' etc. I have 2 methodologies I can use, which do you think is more efficient in terms of speed and do you know of a more efficient way I could do this? Methodology 1: - Determine the number of times each word occurs(using the library Collections) - Have a list of common words and remove all 'Common Words' from the Collection object by attempting to delete that key from the Collection object if it exists. - Therefore the speed will be determined by the length of the variable delims import collections from Counter delim = ['there','there\'s','theres','they','they\'re'] # the above will end up being a really long list! word_freq = Counter(recipe_str.lower().split()) for delim in set(delims): del word_freq[delim] return freq.most_common() Methodology 2: - For common words that can be plural, look at each word in the recipe string, and check if it partially contains the non-plural version of a common word. Eg; For the string "There's a test" check each word to see if it contains "there" and delete it if it does. delim = ['this','at','them'] # words that cant be plural partial_delim = ['there','they',] # words that could occur in many forms word_freq = Counter(recipe_str.lower().split()) for delim in set(delims): del word_freq[delim] # really slow for delim in set(partial_delims): for word in word_freq: if word.find(delim) != -1: del word_freq[delim] return freq.most_common()
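
    A sketch of methodology 1 with the delim/delims naming untangled and the import corrected (the question's "import collections from Counter" should read "from collections import Counter"). Membership tests against a frozenset are constant time, so the stopword list can grow without slowing the loop; plural forms are usually better handled by normalising words (for example with a stemmer) than by substring checks, which can delete unrelated words.

    ```python
    from collections import Counter

    STOP_WORDS = frozenset([
        "the", "at", "there", "there's", "theres", "their",
        "they", "they're", "this", "them",
        # ...the list can grow; set lookups stay O(1)
    ])

    def tag_candidates(recipe_str):
        # Filter while counting: cost is O(number of words) regardless of how
        # long STOP_WORDS becomes.
        words = (w.strip(".,!?") for w in recipe_str.lower().split())
        word_freq = Counter(w for w in words if w and w not in STOP_WORDS)
        return word_freq.most_common()
    ```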

    Read the article

  • What's the most efficient query?

    - by Aaron Carlino
    I have a table named Projects that has the following relationships: has many Contributions has many Payments In my result set, I need the following aggregate values: Number of unique contributors (DonorID on the Contribution table) Total contributed (SUM of Amount on Contribution table) Total paid (SUM of PaymentAmount on Payment table) Because there are so many aggregate functions and multiple joins, it gets messy do use standard aggregate functions the the GROUP BY clause. I also need the ability to sort and filter these fields. So I've come up with two options: Using subqueries: SELECT Project.ID AS PROJECT_ID, (SELECT SUM(PaymentAmount) FROM Payment WHERE ProjectID = PROJECT_ID) AS TotalPaidBack, (SELECT COUNT(DISTINCT DonorID) FROM Contribution WHERE RecipientID = PROJECT_ID) AS ContributorCount, (SELECT SUM(Amount) FROM Contribution WHERE RecipientID = PROJECT_ID) AS TotalReceived FROM Project; Using a temporary table: DROP TABLE IF EXISTS Project_Temp; CREATE TEMPORARY TABLE Project_Temp (project_id INT NOT NULL, total_payments INT, total_donors INT, total_received INT, PRIMARY KEY(project_id)) ENGINE=MEMORY; INSERT INTO Project_Temp (project_id,total_payments) SELECT `Project`.ID, IFNULL(SUM(PaymentAmount),0) FROM `Project` LEFT JOIN `Payment` ON ProjectID = `Project`.ID GROUP BY 1; INSERT INTO Project_Temp (project_id,total_donors,total_received) SELECT `Project`.ID, IFNULL(COUNT(DISTINCT DonorID),0), IFNULL(SUM(Amount),0) FROM `Project` LEFT JOIN `Contribution` ON RecipientID = `Project`.ID GROUP BY 1 ON DUPLICATE KEY UPDATE total_donors = VALUES(total_donors), total_received = VALUES(total_received); SELECT * FROM Project_Temp; Tests for both are pretty comparable, in the 0.7 - 0.8 seconds range with 1,000 rows. But I'm really concerned about scalability, and I don't want to have to re-engineer everything as my tables grow. What's the best approach?
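
    A third shape that is often worth benchmarking alongside the two above, shown here as a plain query string: aggregate each child table once in a derived table and join the results, which avoids running a correlated subquery per project row and needs no temporary table. Column names follow the question; only the aliases are invented.

    ```python
    QUERY = """
    SELECT p.ID,
           COALESCE(pay.total_paid, 0)     AS TotalPaidBack,
           COALESCE(con.donor_count, 0)    AS ContributorCount,
           COALESCE(con.total_received, 0) AS TotalReceived
    FROM Project p
    LEFT JOIN (SELECT ProjectID, SUM(PaymentAmount) AS total_paid
               FROM Payment GROUP BY ProjectID) AS pay ON pay.ProjectID = p.ID
    LEFT JOIN (SELECT RecipientID,
                      COUNT(DISTINCT DonorID) AS donor_count,
                      SUM(Amount)             AS total_received
               FROM Contribution GROUP BY RecipientID) AS con ON con.RecipientID = p.ID
    """
    ```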

    Read the article

  • Minify an Entire Directory While Keeping Element/Style/Script Relationships?

    - by Jonathan Sampson
    Do any solutions currently exist that can minify an entire project directory? More importantly, do any solutions exist that can shorten class names and IDs, and keep them consistent throughout all documents? Something that can turn this: Index.html --- <div class="fooBar"> <!-- Code --> </div> Styles.css --- .fooBar { // Comments and Messages background-color:#000000; } Index.js --- $(".fooBar").click(function(){ /* More Comments */ alert("fooBar"); }); Into this: Index.html --- <div class="a"></div> Styles.css --- .a{background-color:#000;} Index.js --- $(".a").click(function(){alert("fooBar");});

    Read the article

  • How Can I Improve/Speed Up This Frequent Function in C?

    - by Peter Lee
    Hi folks, How can I improve / speed up this frequent function? #include <math.h> #include <stdio.h> #include <stdlib.h> #include <time.h> // Assumptions: x, y, z, a, b and c are all array of 10. // Requirement: return the value of ret; // get all elements of array c float fnFrequentFunction(const float* x, const float* y, const float* z, const float *a, const float *b, float *c, int M) { register float tmp; register float sum; register float ret = 0; int i; for (i = 0; i < M; i++) // M == 1, 2, 4, or 8 { sum = 0; tmp = x[0] - y[0]; sum += tmp * tmp * z[0]; tmp = x[1] - y[1]; sum += tmp * tmp * z[1]; tmp = x[2] - y[2]; sum += tmp * tmp * z[2]; tmp = x[3] - y[3]; sum += tmp * tmp * z[3]; tmp = x[4] - y[4]; sum += tmp * tmp * z[4]; tmp = x[5] - y[5]; sum += tmp * tmp * z[5]; tmp = x[6] - y[6]; sum += tmp * tmp * z[6]; tmp = x[7] - y[7]; sum += tmp * tmp * z[7]; tmp = x[8] - y[8]; sum += tmp * tmp * z[8]; tmp = x[9] - y[9]; sum += tmp * tmp * z[9]; ret += (c[i] = log(a[i] * b[i]) + sum); } return ret; } int main() { float x[10] = {0.001251f, 0.563585f, 0.193304f, 0.808741f, 0.585009f, 0.479873f, 0.350291f, 0.895962f, 0.622840f, 0.746605f}; float y[10] = {0.864406f, 0.709006f, 0.091433f, 0.995727f, 0.227180f, 0.902585f, 0.659047f, 0.865627f, 0.846767f, 0.514359f}; float z[10] = {0.866817f, 0.581347f, 0.175542f, 0.620197f, 0.781823f, 0.778588f, 0.938688f, 0.721610f, 0.940214f, 0.811353f}; float a[10] = {0.870205f, 0.733879f, 0.711386f, 0.588244f, 0.484176f, 0.852962f, 0.168126f, 0.684286f, 0.072573f, 0.632160f}; float b[10] = {0.871487f, 0.998108f, 0.798608f, 0.134831f, 0.576281f, 0.410779f, 0.402936f, 0.522935f, 0.623218f, 0.193030f}; float c[8]; int i; int n = 10000000; long start; // Speed test here: start = clock(); while(--n) fnFrequentFunction(x, y, z, a, b, c, 8); printf("Time used: %ld\n", clock() - start); printf("fnFrequentFunction == %f\n", fnFrequentFunction(x, y, z, a, b, c, 8)); for(i = 0; i < 8; ++i) printf(" c[%d] == %f\n", i, c[i]); printf("\n"); return 0; } Any suggestions are welcome :-)
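
    One observation that matters more than micro-tuning: the ten tmp/sum lines use only x, y and z, so the whole weighted squared-distance term is identical for every i and can be computed once before the loop instead of M times. The restatement below is in Python purely to show the algebra; the same hoisting applies directly to the C function (c is assumed to be a pre-sized list of length at least m).

    ```python
    import math

    def frequent_function(x, y, z, a, b, c, m):
        # The weighted squared-distance term depends only on x, y and z, so it
        # is identical for every i: compute it once instead of m times.
        sq = sum((xi - yi) ** 2 * zi for xi, yi, zi in zip(x, y, z))
        ret = 0.0
        for i in range(m):
            c[i] = math.log(a[i] * b[i]) + sq
            ret += c[i]
        return ret
    ```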

    Read the article

  • MySQL - One large query vs individual AJAX queries

    - by Mark
    Hi guys, I guess no one will have a definitive answer to this, but considered predictions would be appreciated. I am in the process of developing a MySQL database for a web application and my question is: is it more efficient to make a single AJAX query that returns a single row, or to request 100 - 700 rows when the user will likely only ever use the results of two or three? Really I am asking which is heavier for the server: 2-3 requests with one result each, or 1 request with 100 - 700 results? Thanks, Mark

    Read the article

  • Reading large excel file with PHP

    - by Itamar Bar-Lev
    I'm trying to read a 17MB Excel file (2003) with PHPExcel 1.7.3c, but it crashes while loading the file, after exceeding the 120-second limit I have. Is there another library that can do this more efficiently? I have no need for styling; I only need it to support UTF-8. Thanks for your help

    Read the article

  • Why is Perl commonly used for CGI scripts?

    - by Michael Vasquez
    I plan to add a better search feature to my site, so I thought that I would write it in C and use the CGI as a means to access it. But it seems that Perl is the most popular language when it comes to CGI-based stuff. Why is that? Wouldn't it be faster programmed in C or machine code? What advantages, if any, are there to writing it in a scripting language? Thanks.

    Read the article

  • Perl - CodeGolf - Nested loops & SQL inserts

    - by CheeseConQueso
    I had to make a really small and simple script that would fill a table with string values according to these criteria: 2 characters long 1st character is always numeric (0-9) 2nd character is (0-9) but also includes "X" Values need to be inserted into a table on a database The program would execute: insert into table (code) values ('01'); insert into table (code) values ('02'); insert into table (code) values ('03'); insert into table (code) values ('04'); insert into table (code) values ('05'); insert into table (code) values ('06'); insert into table (code) values ('07'); insert into table (code) values ('08'); insert into table (code) values ('09'); insert into table (code) values ('0X'); And so on, until the total 110 values were inserted. My code (just to accomplish it, not to minimize and make efficient) was: use strict; use DBI; my ($db1,$sql,$sth,%dbattr); %dbattr=(ChopBlanks => 1,RaiseError => 0); $db1=DBI->connect('DBI:mysql:','','',\%dbattr); my @code; for(0..9) { $code[0]=$_; for(0..9) { $code[1]=$_; insert(@code); } insert($code[0],"X"); } sub insert { my $skip=0; foreach(@_) { if($skip==0) { $sql="insert into table (code) values ('".$_[0].$_[1]."');"; $sth=$db1->prepare($sql); $sth->execute(); $skip++; } else { $skip--; } } } exit; I'm just interested to see a really succinct & precise version of this logic.
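
    For reference, the enumeration itself is tiny when written as a comprehension; this is Python rather than the requested Perl, so it only illustrates the logic (ten leading digits times eleven trailing characters, '00' through '9X', gives the 110 codes).

    ```python
    codes = ["%d%s" % (i, ch) for i in range(10) for ch in "0123456789X"]
    assert len(codes) == 110                    # '00', '01', ..., '09', '0X', ..., '9X'

    # With a DB-API cursor the inserts collapse to one call, e.g.:
    # cur.executemany("insert into table (code) values (%s)", [(c,) for c in codes])
    ```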

    Read the article

  • Simultaneous connections with PHP and SOAP?

    - by Dov
    I'm new to using SOAP and understanding the utmost basics of it. I create a client resource/connection, I then run some queries in a loop and I'm done. The issue I am having is when I increase the iterations of the loop, ie: from 100 to 1000, it seems to run out of memory and drops an internal server error. How could I possibly run either a) multiple simaltaneous connections or b) create a connection, 100 iterations, close connection, create connection.. etc. "a)" looks to be the better option but I have no clue as to how to get it up and running whilst keeping memory (I assume opening and closing connections) at a minimum. Thanks in advance! index.php <?php // set loops to 0 $loops = 0; // connection credentials and settings $location = 'https://theconsole.com/'; $wsdl = $location.'?wsdl'; $username = 'user'; $password = 'pass'; // include the console and client classes include "class_console.php"; include "class_client.php"; // create a client resource / connection $client = new Client($location, $wsdl, $username, $password); while ($loops <= 100) { $dostuff; } ?> class_console.php <?php class Console { // the connection resource private $connection = NULL; /** * When this object is instantiated a connection will be made to the console */ public function __construct($location, $wsdl, $username, $password, $proxyHost = NULL, $proxyPort = NULL) { if(is_null($proxyHost) || is_null($proxyPort)) $connection = new SoapClient($wsdl, array('login' => $username, 'password' => $password)); else $connection = new SoapClient($wsdl, array('login' => $username, 'password' => $password, 'proxy_host' => $proxyHost, 'proxy_port' => $proxyPort)); $connection->__setLocation($location); $this->connection = $connection; return $this->connection; } /** * Will print any type of data to screen, where supported by print_r * * @param $var - The data to print to screen * @return $this->connection - The connection resource **/ public function screen($var) { print '<pre>'; print_r($var); print '</pre>'; return $this->connection; } /** * Returns a server / connection resource * * @return $this->connection - The connection resource */ public function srv() { return $this->connection; } } ?>

    Read the article

  • Efficient list compacting

    - by Patrik
    Suppose you have a list of unsigned ints. Suppose some elements are equal to 0 and you want to push them back. Currently I use this code (list is a pointer to a list of unsigned ints of size n for (i = 0; i < n; ++i) { if (list[i]) continue; int j; for (j = i + 1; j < n && !list[j]; ++j); int z; for (z = j + 1; z < n && list[z]; ++z); if (j == n) break; memmove(&(list[i]), &(list[j]), sizeof(unsigned int) * (z - j))); int s = z - j + i; for(j = s; j < z; ++j) list[j] = 0; i = s - 1; } Can you think of a more efficient way to perform this task? The snippet is purely theoretical, in the production code, each element of list is a 64 bytes struct EDIT: I'll post my solution. Many thanks to Jonathan Leffler. void RemoveDeadParticles(int * list, int * n) { int i, j = *n - 1; for (; j >= 0 && list[j] == 0; --j); for (i = 0; i < j; ++i) { if (list[i]) continue; memcpy(&(list[i]), &(list[j]), sizeof(int)); list[j] = 0; for (; j >= 0 && list[j] == 0; --j); if (i == j) break; } *n = i + 1; }
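
    A stable single-pass alternative, sketched in Python since the idea is language-agnostic: keep a write index, copy each live element forward at most once, and zero-fill the tail. In the C version this becomes the same two-index loop with a struct assignment (or memcpy) per surviving element, and it avoids the nested scans and block-move bookkeeping of the original.

    ```python
    def compact(items, zero=0):
        """Stable single-pass compaction: live entries keep their order and move
        to the front; the tail is zero-filled; each element is copied at most once."""
        write = 0
        for read in range(len(items)):
            if items[read] != zero:
                if write != read:
                    items[write] = items[read]
                write += 1
        for i in range(write, len(items)):
            items[i] = zero
        return write                            # count of live entries, like the updated *n
    ```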

    Read the article

  • Most efficient approach for multilingual PHP website

    - by alexteg
    I am working on a large multilingual website and I am considering different approaches for making it multilingual. The possible alternatives I can think of are: The Gettext functions with generation of .po files One MySQL table with the translations and a unique string ID for each text PHP-files with arrays containing the different translations with unique string IDs As far as I have understood the Gettext functions should be most efficient, but my requirement is that it should be possible to change a text string in the original reference language (English) without the other translations of that string automatically reverting back to English just because a couple of words changed. Is this possible with Gettext? What is the least resource demanding solution? Is using the Gettext functions or PHP files with arrays more or less equally resource demanding? Any other suggestions for more efficient solutions?
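
    A minimal sketch of alternative 3 (per-language arrays keyed by a stable string ID), written in Python only for brevity: because the key is an ID rather than the English sentence itself, editing the English source text never orphans the other translations, which is exactly the behaviour that makes the Gettext msgid-as-source-string convention awkward here. The IDs and strings below are invented examples.

    ```python
    TRANSLATIONS = {
        "en": {"greeting": "Welcome back!", "logout": "Log out"},
        "sv": {"greeting": "Välkommen tillbaka!"},   # missing keys fall back to English
    }

    def t(string_id, lang):
        return TRANSLATIONS.get(lang, {}).get(string_id) or TRANSLATIONS["en"][string_id]
    ```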

    Read the article

  • Ruby: Why is Array.sort slow for large objects?

    - by David Waller
    A colleague needed to sort an array of ActiveRecord objects in a Rails app. He tried the obvious Array.sort! but it seemed surprisingly slow, taking 32s for an array of 3700 objects. So just in case it was these big fat objects slowing things down, he reimplemented the sort by sorting an array of small objects, then reordering the original array of ActiveRecord objects to match - as shown in the code below. Tada! The sort now takes 700ms. That really surprised me. Does Ruby's sort method end up copying objects about the place rather than just references? He's using Ruby 1.8.6/7. def self.sort_events(events) event_sorters = Array.new(events.length) {|i| EventSorter.new(i, events[i])} event_sorters.sort! event_sorters.collect {|es| events[es.index]} end private # Class used by sort_events class EventSorter attr_reader :sqn attr_reader :time attr_reader :index def initialize(index, event) @index = index @sqn = event.sqn @time = event.time end def <=>(b) @time != b.time ? @time <=> b.time : @sqn <=> b.sqn end end
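
    The EventSorter class is essentially a hand-rolled Schwartzian transform: extract a cheap sort key once per element, sort those, then reorder. In Ruby, sort_by does the same thing and avoids running a comparison that touches two ActiveRecord-sized objects O(n log n) times; the Python equivalent below is only an illustration of the key-extraction idea.

    ```python
    def sort_events(events):
        # key= is computed once per element, so each comparison works on two small
        # tuples instead of two full record objects.
        return sorted(events, key=lambda e: (e.time, e.sqn))
    ```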

    Read the article

  • Efficiently Serving Dynamic Content in Google App Engine

    - by awegawef
    My app on google app engine returns content items (just text) and comments on them. It works like this (pseudo-ish code): query: get keys of latest content #query to datastore for each item in content if item_dict in memcache: use item_dict else: build_item_dict(item) #by fetching from datastore store item_dict in memcache send all item_dicts to template Sorry if the code isn't understandable. I get all of the content dictionaries and send them to the template, which uses them to create the webpage. My problem is that if the memcache has expired, for each item I want to display, I have to (1) lookup item in memcache, (2) since no memcache exists I must fetch item from the datastore, and (3) store the item in memcache. These calls build up quickly. I don't set an expire time for the entries to the memcache, so this really only happens once in the morning, but the webpage takes long enough to load (~1 sec) that the browser reports it as not existing. Regularly, my webpages take about 50ms to load. This approach works decently for frequent visits, but it has its flaws as shown above. How can I remedy this? The entries are dynamic enough that I don't think it would be in my best interest to cache my initial request. Thanks in advance
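
    One way to take the sting out of a cold cache, assuming the App Engine Python memcache API: batch the lookups with get_multi and the writes with set_multi, so a fully cold page costs one memcache read, one datastore fetch and one memcache write instead of three RPCs per item. build_item_dict and fetch_items_from_datastore below stand in for the app's own code and are assumed to return items in key order.

    ```python
    from google.appengine.api import memcache

    def get_item_dicts(keys):
        str_keys = [str(k) for k in keys]
        cached = memcache.get_multi(str_keys)            # one round trip for all hits
        missing = [k for k in keys if str(k) not in cached]
        if missing:
            fresh = {}
            for k, item in zip(missing, fetch_items_from_datastore(missing)):
                fresh[str(k)] = build_item_dict(item)    # hypothetical helpers
            memcache.set_multi(fresh)                    # one write covers all misses
            cached.update(fresh)
        return [cached[str(k)] for k in keys]
    ```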

    Read the article

< Previous Page | 7 8 9 10 11 12 13 14 15 16 17 18  | Next Page >