Search Results

Search found 8875 results on 355 pages for 'optimized solutions'.


  • A more efficient millisecond conversion method?

    - by cube
    I am currently using this method to convert milliseconds to min:sec:1/10sec. However, it does not seem efficient at all. Would anyone know of a faster, more efficient and optimized way of accomplishing the same?

        mills.prototype.formatTime = function(time) {
            var elapsedTime = (time * 1000);

            // Minutes
            var elapsedM = (elapsedTime / 60000) | 0;
            var remaining = elapsedTime - (elapsedM * 60000);
            // add a leading zero if it's a single digit number
            if (elapsedM < 10) { elapsedM = "0" + elapsedM; }

            // Seconds
            var elapsedS = ((remaining / 1000) | 0);
            remaining -= (elapsedS * 1000);
            // add leading zero
            if (elapsedS < 10) { elapsedS = "0" + elapsedS; }

            // Hundredths
            var elapsedFractions = ((remaining / 10) | 0);
            if (elapsedFractions < 10) { elapsedFractions = "0" + elapsedFractions; }

            // display results nicely
            var time_data = elapsedM + ":" + elapsedS + ":" + elapsedFractions;
            //return time_data;
            return [time_data, elapsedM, elapsedS, elapsedFractions];
        };

    Read the article
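
    For the question above, a minimal sketch of one alternative, assuming the same (time * 1000) scaling as the original and an environment with String.prototype.padStart (ES2017); the padding replaces the manual leading-zero branches. Whether it is measurably faster should be checked with a profiler, since the string building usually dominates the arithmetic.

        mills.prototype.formatTime = function (time) {
            var ms = time * 1000;
            var minutes = Math.floor(ms / 60000);
            var seconds = Math.floor((ms % 60000) / 1000);
            var hundredths = Math.floor((ms % 1000) / 10);

            // padStart handles the leading zeros that the original adds by hand
            var m = String(minutes).padStart(2, "0");
            var s = String(seconds).padStart(2, "0");
            var f = String(hundredths).padStart(2, "0");

            return [m + ":" + s + ":" + f, m, s, f];
        };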

  • A question about the basics of how LINQ to SQL works

    - by Alex
    I just started learning LINQ to SQL, and so far I'm impressed with the ease of use and good performance. I used to think that a LINQ query like

        from Customer in DB.Customers where Customer.Age > 30 select Customer

    would get all customers from the database ("SELECT * FROM Customers"), move them into the Customers array and then search that array using .NET methods. This is very inefficient: what if there are hundreds of thousands of customers in the database? Making such big SELECT queries would kill the web application.

    Now, after experiencing how fast LINQ to SQL actually is, I am starting to suspect that when doing the query I just wrote, LINQ somehow converts it to a SQL query string

        SELECT * FROM Customers WHERE Age > 30

    and only runs the query when necessary. So my question is: am I right? And when is the query actually run?

    The reason I'm asking is not only because I want to understand how it works in order to build well-optimized applications, but because I came across the following problem. I have two tables; one of them is Books, the other has information on how many books were sold on certain days. My goal is to select books that had at least 50 sales/day in the past 10 days. It's done with this simple query:

        from Book in DB.Books
        where (from Sale in DB.Sales
               where Sale.SalesAmount >= 50
               and Sale.DateOfSale >= DateTime.Now.AddDays(-10)
               select Sale.BookID).Contains(Book.ID)
        select Book

    The point is, I have to use the checking part in several queries, so I decided to create an array with the IDs of all popular books:

        var popularBooksIDs = from Sale in DB.Sales
                              where Sale.SalesAmount >= 50
                              and Sale.DateOfSale >= DateTime.Now.AddDays(-10)
                              select Sale.BookID;

    BUT when I try to do the query now:

        from Book in DB.Books where popularBooksIDs.Contains(Book.ID) select Book

    it doesn't work! That's why I think we can't use these kinds of shortcuts in LINQ to SQL queries, just as we can't use them in real SQL. We have to create straightforward queries, am I right?

    Read the article
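
    A minimal sketch of the deferred-execution behaviour the question above asks about, written against stand-in Book/Sale classes and plain IQueryable parameters (assumptions, not the asker's generated DataContext). With LINQ to SQL, books and sales would be db.Books and db.Sales, and the intent is that the composed query, including the Contains() subquery, is only translated and executed when it is enumerated.

        using System;
        using System.Linq;

        // Minimal stand-in types (assumptions, not the asker's real mapping classes).
        class Book { public int ID; }
        class Sale { public int BookID; public int SalesAmount; public DateTime DateOfSale; }

        static class Demo
        {
            static void Run(IQueryable<Book> books, IQueryable<Sale> sales)
            {
                // No SQL is sent here: this only builds an expression tree.
                IQueryable<int> popularBookIds =
                    from s in sales
                    where s.SalesAmount >= 50 && s.DateOfSale >= DateTime.Now.AddDays(-10)
                    select s.BookID;

                // Still nothing executed; Contains() on an IQueryable subquery is meant
                // to be folded into the outer query by the provider.
                var popularBooks =
                    from b in books
                    where popularBookIds.Contains(b.ID)
                    select b;

                // The query is translated and run here, when it is enumerated.
                foreach (var book in popularBooks)
                    Console.WriteLine(book.ID);
            }
        }

    If the composed version fails in a real project, the concrete exception message is what tells you whether the provider could not translate the subquery or something else went wrong.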

  • Optimization in Common Declaration

    - by Pratik
    It's a 3-tier ASP.NET website project. In the data layer there is a class "Common Declaration" in which a lot of common things are defined, something like this:

        public class CommonDeclartion
        {
            #region Common Messages
            public const string RECORD_INSERT_MSG = "Record Inserted Successfully ";
            public const string RECORD_UPDATE_MSG = "Record Updated Successfully";
            public const string RECORD_DELETE_MSG = "Record Deleted Successfully";
            public const string ERROR_MSG = "Error Ocuured while Perfoming This Action.";
            public const string UserID_Incorrect = "Please Enter The Correct User ID.";
            public const string RECORD_ALREADY_EXIT = "Record Already Exit";
            public const string NO_RECORD = "No Record found.";
            #endregion
        }

    Can this be more optimized in terms of:

    1. Performance
    2. Security (if any)
    3. Code readability or reusability

    I thought of using an enum but can't figure that out:

        enum CommonMessages
        {
            RECORD_INSERT_MSG "Record Inserted Successfully.",
            RECORD_UPDATE_MSG "Record Updated Successfully.",
            RECORD_DELETE_MSG "Record Deleted Successfully.",
            ERROR_MSG "Error Ocuured while Perfoming This Action.",
            UserID_Incorrect "Please Enter The Correct User ID.",
            RECORD_ALREADY_EXIT "Record Already Exit.",
            NO_RECORD "No Record found.",
        }

    Or should I keep them in some collection like a Dictionary/NameValueCollection, or keep them in XML in the form of key/value pairs and retrieve from there? What would be a better way, keeping in mind:

    1. Performance
    2. Security (if any)
    3. Code readability or reusability

    Read the article
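
    One commonly used pattern for the question above, shown as a sketch rather than a recommendation (the names below are made up): an enum for the keys plus a static readonly dictionary for the text. Since const strings are resolved at compile time, the difference is mainly readability and reusability rather than performance; ASP.NET resource (.resx) files are another usual home for such strings.

        using System.Collections.Generic;

        public enum CommonMessage
        {
            RecordInserted,
            RecordUpdated,
            RecordDeleted,
            Error,
            UserIdIncorrect,
            RecordAlreadyExists,
            NoRecord
        }

        public static class CommonMessages
        {
            // readonly dictionary keyed by the enum; looked up at run time
            private static readonly Dictionary<CommonMessage, string> Text =
                new Dictionary<CommonMessage, string>
                {
                    { CommonMessage.RecordInserted,      "Record Inserted Successfully." },
                    { CommonMessage.RecordUpdated,       "Record Updated Successfully." },
                    { CommonMessage.RecordDeleted,       "Record Deleted Successfully." },
                    { CommonMessage.Error,               "Error Occurred while Performing This Action." },
                    { CommonMessage.UserIdIncorrect,     "Please Enter The Correct User ID." },
                    { CommonMessage.RecordAlreadyExists, "Record Already Exists." },
                    { CommonMessage.NoRecord,            "No Record found." }
                };

            public static string Get(CommonMessage key)
            {
                return Text[key];
            }
        }

        // Usage: string msg = CommonMessages.Get(CommonMessage.RecordInserted);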

  • Optimization of Function with Dictionary and Zip()

    - by eWizardII
    Hello, I have the following function:

        def filetxt():
            word_freq = {}
            lvl1 = []
            lvl2 = []
            total_t = 0
            users = 0
            text = []

            for l in range(0, 500):
                # Open File
                if os.path.exists("C:/Twitter/json/user_" + str(l) + ".json") == True:
                    with open("C:/Twitter/json/user_" + str(l) + ".json", "r") as f:
                        text_f = json.load(f)
                        users = users + 1
                        for i in range(len(text_f)):
                            text.append(text_f[str(i)]['text'])
                            total_t = total_t + 1
                else:
                    pass

            # Filter
            occ = 0
            import string
            for i in range(len(text)):
                s = text[i]  # Sample string
                a = re.findall(r'(RT)', s)
                b = re.findall(r'(@)', s)
                occ = len(a) + len(b) + occ
                s = s.encode('utf-8')
                out = s.translate(string.maketrans("", ""), string.punctuation)

                # Create Wordlist/Dictionary
                word_list = text[i].lower().split(None)
                for word in word_list:
                    word_freq[word] = word_freq.get(word, 0) + 1
                keys = word_freq.keys()

                numbo = range(1, len(keys) + 1)
                WList = ', '.join(keys)
                NList = str(numbo).strip('[]')
                WList = WList.split(", ")
                NList = NList.split(", ")
                W2N = dict(zip(WList, NList))

                for k in range(0, len(word_list)):
                    word_list[k] = W2N[word_list[k]]
                for i in range(0, len(word_list) - 1):
                    lvl1.append(word_list[i])
                    lvl2.append(word_list[i + 1])

    I have used the profiler and found that it seems the greatest CPU time is spent on the zip() function and the join and split parts of the code. I'm looking to see if there is any way I have overlooked to clean up the code and make it more optimized, since the greatest lag seems to be in how I am working with the dictionaries and the zip() function. Any help would be appreciated, thanks!

    Read the article
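
    A sketch of the restructuring hinted at in the question above, under the assumption that the word-to-number mapping only needs to be built once: collections.Counter tracks frequencies and a dict built with enumerate replaces the join/split/zip round-trip through strings. The paths and per-tweet JSON layout are copied from the question; mapping words to integers rather than number strings is a deliberate simplification.

        import json
        import os
        import re
        from collections import Counter

        def filetxt(json_dir="C:/Twitter/json", n_users=500):
            """Count words once with Counter, then map words to integer IDs
            with a dict built by enumerate -- no join/split/zip round-trips."""
            texts = []
            for l in range(n_users):
                path = os.path.join(json_dir, "user_%d.json" % l)
                if os.path.exists(path):
                    with open(path, "r") as f:
                        data = json.load(f)
                    texts.extend(data[str(i)]['text'] for i in range(len(data)))

            # count RT and @ occurrences in one pass
            occ = sum(len(re.findall(r'RT|@', s)) for s in texts)

            word_freq = Counter()
            tokenized = []
            for s in texts:
                words = s.lower().split()
                word_freq.update(words)
                tokenized.append(words)

            # Build the word -> ID mapping exactly once.
            word_to_id = {w: i for i, w in enumerate(word_freq, start=1)}

            lvl1, lvl2 = [], []
            for words in tokenized:
                ids = [word_to_id[w] for w in words]
                lvl1.extend(ids[:-1])
                lvl2.extend(ids[1:])

            return word_freq, occ, lvl1, lvl2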

  • Queueing method calls - any idea how?

    - by TomTom
    I write a heavily asynchronous application. I am looking for a way to queue method calls, similar to what BeginInvoke / EndInvoke does... but on my OWN queue. The reason is that I have my own optimized message queueing system using a thread pool, while at the same time making sure every component is single threaded in the requests (i.e. one thread only handles messages for a component).

    I have a lot of messages going back and forth. For limited use, I would really love to be able to just queue a method call with parameters, instead of having to define my own parameter and method wrapping / unwrapping just for the sake of doing a lot of administrative calls. I also do not always want to bypass the queue, and I definitely do not want the sending service to wait for the other service to respond.

    Anyone knows of a way to intercept a method call? Some way to utilize TransparentProxy / Virtual Proxy for this? ;) ServicedComponent? I would like this to be as little overhead as possible ;)

    Read the article
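
    A sketch of one low-overhead way to get what the question above describes, with no proxies involved: capture the call and its parameters in an Action (a closure) and enqueue the delegate on the component's own queue, drained by a single worker thread. BlockingCollection (.NET 4) is assumed here purely for brevity; the same shape works with a hand-rolled Queue plus a lock and a wait handle.

        using System;
        using System.Collections.Concurrent;
        using System.Threading;

        public sealed class ComponentQueue : IDisposable
        {
            private readonly BlockingCollection<Action> _queue = new BlockingCollection<Action>();
            private readonly Thread _worker;

            public ComponentQueue()
            {
                // One thread per component keeps the component single-threaded.
                _worker = new Thread(Drain) { IsBackground = true };
                _worker.Start();
            }

            private void Drain()
            {
                foreach (Action call in _queue.GetConsumingEnumerable())
                    call();                      // execute the queued method call
            }

            // "Queue a method call with parameters": the closure captures them,
            // and the caller returns immediately without waiting for a response.
            public void Post(Action call)
            {
                _queue.Add(call);
            }

            public void Dispose()
            {
                _queue.CompleteAdding();
            }
        }

        // Usage (illustrative): queue.Post(() => component.HandleMessage(msg, senderId));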

  • Java: fastest way to do random reads on huge disk file(s)

    - by cocotwo
    I've got a moderately big set of data, about 800 MB or so, that is basically some big precomputed table that I need in order to speed up some computation by several orders of magnitude (creating that file took days on several multi-core computers, using an optimized and multi-threaded algorithm... I really do need that file). Now that it has been computed once, that 800 MB of data is read only.

    I cannot hold it in memory. As of now it is one big 800 MB file, but splitting it into smaller files isn't a problem if that can help. I need to read about 32 bits of data here and there in that file a lot of times. I don't know beforehand where I'll need to read these data: the reads are uniformly distributed.

    What would be the fastest way in Java to do my random reads in such a file or files? Ideally I should be doing these reads from several unrelated threads (but I could queue the reads in a single thread if needed). Is Java NIO the way to go? I'm not familiar with 'memory mapped file': I think I don't want to map the 800 MB in memory. All I want is the fastest random reads I can get to access these 800 MB of disk-based data.

    By the way, in case people wonder, this is not at all the same as the question I asked not long ago: http://stackoverflow.com/questions/2346722/java-fast-disk-based-hash-set

    Read the article
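
    A sketch of the positional-read approach for the question above (the file name and 4-byte record layout are assumptions). FileChannel.read(ByteBuffer, position) leaves the channel's own position untouched, so several threads can share one channel; FileChannel.open is Java 7, but new RandomAccessFile(path, "r").getChannel() gives the same kind of channel on older JDKs. Memory-mapping is the other usual candidate, and mapping does not copy 800 MB into the heap, the OS pages it in on demand.

        import java.io.IOException;
        import java.nio.ByteBuffer;
        import java.nio.channels.FileChannel;
        import java.nio.file.Paths;
        import java.nio.file.StandardOpenOption;

        public class RandomReads {

            private final FileChannel channel;

            public RandomReads(String path) throws IOException {
                channel = FileChannel.open(Paths.get(path), StandardOpenOption.READ);
            }

            /** Reads one 32-bit value at the given byte offset; safe to call from several threads. */
            public int readIntAt(long offset) throws IOException {
                ByteBuffer buf = ByteBuffer.allocate(4);
                while (buf.hasRemaining()) {
                    if (channel.read(buf, offset + buf.position()) < 0) {
                        throw new IOException("offset past end of file: " + offset);
                    }
                }
                buf.flip();
                return buf.getInt();
            }

            public static void main(String[] args) throws IOException {
                RandomReads r = new RandomReads("precomputed.table");  // assumed file name
                System.out.println(r.readIntAt(1234L * 4));            // assumed 4-byte records
            }
        }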

  • Creating an object in the loop

    - by Jacob
        std::vector<double> C(4);
        for (int i = 0; i < 1000; ++i)
            for (int j = 0; j < 2000; ++j) {
                C[0] = 1.0;
                C[1] = 1.0;
                C[2] = 1.0;
                C[3] = 1.0;
            }

    is much faster than

        for (int i = 0; i < 1000; ++i)
            for (int j = 0; j < 2000; ++j) {
                std::vector<double> C(4);
                C[0] = 1.0;
                C[1] = 1.0;
                C[2] = 1.0;
                C[3] = 1.0;
            }

    I realize this happens because the std::vector is repeatedly constructed and destroyed inside the loop, but I was under the impression this would be optimized away. Is it completely wrong to keep variables local in a loop whenever possible? I was under the (perhaps false) impression that this would provide optimization opportunities for the compiler.

    EDIT: I use VC++ 2005 (release mode) with full optimization (/Ox).

    Read the article
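
    Two sketches of the usual workarounds for the loop above, assuming the element count of 4 really is fixed: a stack-based std::array (C++11) can stay local to the loop without any heap traffic, and when the size is only known at run time, hoisting the vector and refilling it reuses its allocation.

        #include <array>
        #include <vector>

        int main() {
            // Option 1: std::array lives on the stack, so declaring it inside
            // the loop costs no heap allocation.
            for (int i = 0; i < 1000; ++i) {
                for (int j = 0; j < 2000; ++j) {
                    std::array<double, 4> C;
                    C[0] = C[1] = C[2] = C[3] = 1.0;
                }
            }

            // Option 2: hoist the vector and refill it; assign() typically
            // reuses the existing capacity instead of reallocating.
            std::vector<double> D(4);
            for (int i = 0; i < 1000; ++i) {
                for (int j = 0; j < 2000; ++j) {
                    D.assign(4, 1.0);
                }
            }
            return 0;
        }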

  • Slightly different execution times between python2 and python3

    - by user557634
    Hi. Lately I wrote a simple generator of permutations in Python (an implementation of the "plain changes" algorithm described by Knuth in "The Art... 4"). I was curious about the differences in its execution time between python2 and python3. Here is my function:

        def perms(s):
            s = tuple(s)
            N = len(s)
            if N <= 1:
                yield s[:]
                raise StopIteration()
            for x in perms(s[1:]):
                for i in range(0, N):
                    yield x[:i] + (s[0],) + x[i:]

    I tested both using the timeit module. My tests:

        $ echo "python2.6:" && ./testing.py && echo "python3:" && ./testing3.py
        python2.6:
        args  time[ms]
        1     0.003811
        2     0.008268
        3     0.015907
        4     0.042646
        5     0.166755
        6     0.908796
        7     6.117996
        8     48.346996
        9     433.928967
        10    4379.904032
        python3:
        args  time[ms]
        1     0.00246778964996
        2     0.00656183719635
        3     0.01419159912
        4     0.0406293644678
        5     0.165960511097
        6     0.923101452814
        7     6.24257639835
        8     53.0099868774
        9     454.540967941
        10    4585.83498001

    As you can see, for a number of arguments less than 6, python3 is faster, but then the roles are reversed and python2.6 does better. As I am a novice in Python programming, I wonder why that is so. Or maybe my script is more optimized for python2? Thank you in advance for a kind answer :)

    Read the article

  • Limit CPU usage of a process

    - by jb
    I have a service running which periodically checks a folder for a file and then processes it (reads it, extracts the data, stores it in SQL). So I ran it on a test box and it took a little longer than expected. The file had 1.6 million rows, and it was still running after 6 hours (then I went home).

    The problem is that the box it is running on is now absolutely crippled: remote desktop was timing out, so I can't even get on it to stop the process, or attach a debugger to see how far through it is, etc. It's solidly using 90%+ CPU, and all other running services or apps are suffering.

    The code is (from memory, may not compile):

        List<ItemDTO> items = new List<ItemDTO>();
        using (StreamReader sr = fileInfo.OpenText())
        {
            while (!sr.EndOfFile)
            {
                string line = sr.ReadLine();
                try
                {
                    string s = line.Substring(0, 8);
                    double y = Double.Parse(line.Substring(8, 7));

                    // If the item isn't already in the collection, add it.
                    if (items.Find(delegate(ItemDTO i) { return (i.Item == s); }) == null)
                        items.Add(new ItemDTO(s, y));
                }
                catch { /* Crash */ }
            }
            return items;
        }

    So I am working on improving the code (any tips appreciated). But it still could be a slow affair, which is fine; I've no problem with it taking a long time as long as it's not killing my server. So what I want from you fine people is: 1) Is my code hideously un-optimized? 2) Can I limit the amount of CPU my code block may use? Cheers all.

    Read the article
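
    A sketch of the two quickest wins for the code above (ItemDTO and the fixed-width line layout come from the question; everything else is an assumption): a Dictionary keyed on the 8-character code makes the duplicate check an O(1) lookup instead of an O(n) List.Find per row, and a lowered thread priority plus an occasional short sleep keeps the rest of the box responsive.

        using System;
        using System.Collections.Generic;
        using System.IO;
        using System.Threading;

        public class ItemDTO                      // minimal stand-in for the asker's type
        {
            public string Item;
            public double Value;
            public ItemDTO(string item, double value) { Item = item; Value = value; }
        }

        public static class FileImporter
        {
            public static List<ItemDTO> ReadItems(FileInfo fileInfo)
            {
                var items = new Dictionary<string, ItemDTO>(StringComparer.Ordinal);
                Thread.CurrentThread.Priority = ThreadPriority.BelowNormal;

                long rows = 0;
                using (StreamReader sr = fileInfo.OpenText())
                {
                    string line;
                    while ((line = sr.ReadLine()) != null)
                    {
                        if (line.Length < 15) continue;                 // skip malformed rows
                        string s = line.Substring(0, 8);
                        double y;
                        if (!Double.TryParse(line.Substring(8, 7), out y)) continue;

                        if (!items.ContainsKey(s))
                            items.Add(s, new ItemDTO(s, y));

                        if (++rows % 10000 == 0)
                            Thread.Sleep(1);                            // give other processes some CPU
                    }
                }
                return new List<ItemDTO>(items.Values);
            }
        }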

  • STL vectors with uninitialized storage?

    - by Jim Hunziker
    I'm writing an inner loop that needs to place structs in contiguous storage. I don't know how many of these structs there will be ahead of time. My problem is that STL's vector initializes its values to 0, so no matter what I do, I incur the cost of the initialization plus the cost of setting the struct's members to their values.

    Is there any way to prevent the initialization, or is there an STL-like container out there with resizeable contiguous storage and uninitialized elements? (I'm certain that this part of the code needs to be optimized, and I'm certain that the initialization is a significant cost.) Also, see my comments below for a clarification about when the initialization occurs.

    SOME CODE:

        void GetsCalledALot(int* data1, int* data2, int count) {
            int mvSize = memberVector.size();
            memberVector.resize(mvSize + count); // causes 0-initialization

            for (int i = 0; i < count; ++i) {
                memberVector[mvSize + i].d1 = data1[i];
                memberVector[mvSize + i].d2 = data2[i];
            }
        }

    Read the article
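
    A sketch of the reserve()-plus-push_back pattern for the code above, with a made-up two-int element type standing in for the real struct: only elements that are actually appended get constructed, so nothing is value-initialized and then overwritten.

        #include <vector>

        struct Element {   // stand-in for the real struct
            int d1;
            int d2;
        };

        static std::vector<Element> memberVector;

        void GetsCalledALot(const int* data1, const int* data2, int count) {
            // Grows capacity without constructing any elements. If this is called
            // very often with small counts, reserving a larger chunk up front
            // avoids repeated reallocation.
            memberVector.reserve(memberVector.size() + count);

            for (int i = 0; i < count; ++i) {
                Element e = { data1[i], data2[i] };
                memberVector.push_back(e);       // constructs exactly one element
            }
        }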

  • Syncing Data to Remote Services, Best Practices for Caching?

    - by viatropos
    I want to be able to publish events to Eventbrite, Eventful, and Google Calendar for my Google Apps. Each service has slightly different properties for events... I will be syncing many other things too, such as users with Google Contacts and MailChimp, documents with Google Docs and some other services, etc.

    So I'm wondering: what is the recommended way of retrieving the data for the end user so that it's reasonably maintainable and optimized? Here are the options I'm considering and having trouble with:

    1. My app keeps a central database of all the models (Event, Document, User, Form, etc.), and whenever an Admin creates an object (e.g. through Eventbrite or through our Admin panel), we sync it and store a copy in our local database. When a User goes to the site /events, the app retrieves the events from the database.

    2. Read events from a target feed, such as the Eventbrite or Eventful feed, and scrap the local database.

    Basically, I'm wondering: if we're storing all of the data on a remote service, do we really need to have a local database copy of the data? When would we need to have a local database, and when wouldn't we?

    Read the article

  • JAR files, don't they just bloat and slow Java down?

    - by Josamoto
    Okay, the question might seem dumb, but I'm asking it anyway. After struggling for hours to get a Spring + BlazeDS project up and running, I discovered that I was having problems with my project as a result of not including the right dependencies for Spring etc. There were .jars missing from my WEB-INF/lib folder, yes, silly me. After a while, I managed to get all the .jar files where they belong, and it comes to a whopping 12.5 MB at that, and there are more than 30 of them! Which concerns me, though it probably and hopefully shouldn't.

    How does Java operate in terms of these JAR files? They do take up quite a bit of hard drive space, taking into account that it's compressed, compiled source code, so it could really quickly populate a lot of RAM in an instant. My questions are:

    1. Does Java load an entire .jar file into memory when, say for instance, a class in that .jar is instantiated? What about stuff in the .jar that never gets used?
    2. Do .jars get cached somehow, for optimized application performance?
    3. When a single .jar is loaded, I understand that the thing sits in memory and is available across multiple HTTP requests (i.e. for the lifetime of the server instance running), unlike PHP where objects are created on the fly with each request. Is this assumption correct?
    4. When using Spring, I had to include all those fiddly .jars; wouldn't I be better off just using native Java, with, say, at least an ORM solution like Hibernate? So far, Spring just took extra time configuring, extra hard drive space, extra memory and CPU consumption, so I'm concerned that the framework is going to cost too much application performance just to get, for example, IoC implemented with my BlazeDS server. There is still an ORM, a unit testing framework and bits and pieces here and there to come. It's just so easy to bloat up a project quickly and irresponsibly. Where do I draw the line?

    Read the article

  • optimize output value using a class and public member

    - by wiso
    Suppose you have a function, you call it a lot of times, and every time the function returns a big object. I've optimized the problem using a functor that returns void and stores the returning value in a public member:

        #include <vector>

        const int N = 100;

        std::vector<double> fun(const std::vector<double> & v, const int n) {
            std::vector<double> output = v;
            output[n] *= output[n];
            return output;
        }

        class F {
        public:
            F() : output(N) {};
            std::vector<double> output;
            void operator()(const std::vector<double> & v, const int n) {
                output = v;
                output[n] *= n;
            }
        };

        int main() {
            std::vector<double> start(N, 10.);
            std::vector<double> end(N);
            double a;

            // first solution
            for (unsigned long int i = 0; i != 10000000; ++i)
                a = fun(start, 2)[3];

            // second solution
            F f;
            for (unsigned long int i = 0; i != 10000000; ++i) {
                f(start, 2);
                a = f.output[3];
            }
        }

    Yes, I could use inline or optimize this problem in another way, but here I want to stress this point: with the functor I declare and construct the output variable only once, while with the function I do that every time it is called. The second solution is two times faster than the first with g++ -O1 or g++ -O2. What do you think about it? Is it an ugly optimization?

    Read the article
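
    A common middle ground for the comparison above, shown as a sketch rather than a claim about the measured numbers: pass the output vector in by reference, so the caller constructs it once and reuses it across calls without exposing a public member.

        #include <vector>

        void fun(const std::vector<double>& v, int n, std::vector<double>& output) {
            output = v;               // copies into output's existing storage when capacity suffices
            output[n] *= output[n];
        }

        int main() {
            const int N = 100;
            std::vector<double> start(N, 10.0);
            std::vector<double> output(N);       // constructed once, outside the loop
            double a = 0.0;

            for (unsigned long i = 0; i != 10000000; ++i) {
                fun(start, 2, output);
                a = output[3];
            }
            return a > 0.0 ? 0 : 1;              // use 'a' so the loop is not trivially removable
        }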

  • Is it acceptable to wrap PHP library functions solely to change the names?

    - by Carson Myers
    I'm going to be starting a fairly large PHP application this summer, on which I'll be the sole developer (so I don't have any coding conventions to conform to aside from my own). PHP 5.3 is a decent language IMO, despite the stupid namespace token. But one thing that has always bothered me about it is the standard library and its lack of a naming convention.

    So I'm curious: would it be seriously bad practice to wrap some of the most common standard library functions in my own functions/classes to make the names a little better? I suppose it could also add or modify some functionality in some cases, although at the moment I don't have any examples (I figure I will find ways to make them OO or make them work a little differently while I am working). If you saw a PHP developer do this, would you think "Man, this is one shoddy developer"?

    Additionally, I don't know much (or anything) about if/how PHP is optimized, and I know that usually PHP performance doesn't matter. But would doing something like this have a noticeable impact on the performance of my application?

    Read the article

  • What hash algorithms are parallelizable? Optimizing the hashing of large files on multi-core CPUs

    - by DanO
    I'm interested in optimizing the hashing of some large files (optimizing wall clock time). The I/O has been optimized well enough already, and the I/O device (a local SSD) is only tapped at about 25% of capacity, while one of the CPU cores is completely maxed out. I have more cores available, and in the future will likely have even more.

    So far I've only been able to tap into more cores if I happen to need multiple hashes of the same file, say an MD5 AND a SHA256 at the same time. I can use the same I/O stream to feed two or more hash algorithms, and I get the faster algorithms done for free (as far as wall clock time goes). As I understand most hash algorithms, each new bit changes the entire result, and it is inherently challenging/impossible to compute in parallel.

    Are any of the mainstream hash algorithms parallelizable? Are there any non-mainstream hashes that are parallelizable (and that have at least a sample implementation available)? As future CPUs will trend toward more cores and a leveling off in clock speed, is there any way to improve the performance of file hashing (other than liquid-nitrogen-cooled overclocking)? Or is it inherently non-parallelizable?

    Read the article
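
    One family of approaches to the question above, shown as a sketch and deliberately not a drop-in replacement, since the result differs from a plain MD5/SHA-256 of the file: split the file into fixed-size chunks, hash the chunks on separate cores, then hash the concatenation of the chunk digests (the chunked/tree-hashing idea). Chunk size and worker count below are arbitrary choices.

        import hashlib
        import os
        import sys
        from concurrent.futures import ProcessPoolExecutor

        CHUNK_SIZE = 8 * 1024 * 1024  # 8 MiB leaves; an arbitrary choice

        def hash_chunk(args):
            """Hash one chunk of the file (runs in a worker process)."""
            path, offset, length = args
            h = hashlib.sha256()
            with open(path, "rb") as f:
                f.seek(offset)
                h.update(f.read(length))
            return h.digest()

        def tree_hash(path, workers=4):
            """Two-level tree hash: chunk digests in parallel, then hash the digests.
            Note: this is NOT the same value as a plain SHA-256 of the file."""
            size = os.path.getsize(path)
            tasks = [(path, off, min(CHUNK_SIZE, size - off))
                     for off in range(0, size, CHUNK_SIZE)]
            top = hashlib.sha256()
            with ProcessPoolExecutor(max_workers=workers) as pool:
                for digest in pool.map(hash_chunk, tasks):  # results come back in order
                    top.update(digest)
            return top.hexdigest()

        if __name__ == "__main__":
            print(tree_hash(sys.argv[1]))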

  • MySQL - Help me alter this query to apply AND logic instead of OR in searching?

    - by sandeepan-nath
    First execute these table and data dumps:

        CREATE TABLE IF NOT EXISTS `Tags` (
          `id_tag` int(10) unsigned NOT NULL auto_increment,
          `tag` varchar(255) default NULL,
          PRIMARY KEY (`id_tag`),
          UNIQUE KEY `tag` (`tag`),
          KEY `id_tag` (`id_tag`),
          KEY `tag_2` (`tag`),
          KEY `tag_3` (`tag`)
        ) ENGINE=InnoDB DEFAULT CHARSET=latin1 AUTO_INCREMENT=18;

        INSERT INTO `Tags` (`id_tag`, `tag`) VALUES
        (1, 'key1'),
        (2, 'key2');

        CREATE TABLE IF NOT EXISTS `Tutors_Tag_Relations` (
          `id_tag` int(10) unsigned NOT NULL default '0',
          `id_tutor` int(10) default NULL,
          KEY `Tutors_Tag_Relations` (`id_tag`),
          KEY `id_tutor` (`id_tutor`),
          KEY `id_tag` (`id_tag`)
        ) ENGINE=InnoDB DEFAULT CHARSET=latin1;

        INSERT INTO `Tutors_Tag_Relations` (`id_tag`, `id_tutor`) VALUES
        (1, 1),
        (2, 1);

    The following query finds all the tutors from the Tutors_Tag_Relations table which have a reference to at least one of the terms "key1" or "key2":

        SELECT td.*
        FROM Tutors_Tag_Relations AS td
        INNER JOIN Tags AS t ON t.id_tag = td.id_tag
        WHERE t.tag LIKE "%key1%" OR t.tag LIKE "%key2%"
        GROUP BY td.id_tutor
        LIMIT 10

    Please help me modify this query so that it returns all the tutors from the Tutors_Tag_Relations table which have a reference to both the terms "key1" and "key2" (AND logic instead of OR logic). Please suggest an optimized query considering a huge number of data records (the query should NOT individually fetch two sets of tutors matching each keyword and then find the intersection).

    Read the article
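
    A sketch of one common way to express the AND requirement above in a single pass over the dump given in the question: keep the OR so the join collects candidate rows once, then demand in HAVING that both distinct tags matched per tutor. This assumes, as in the dump, that each keyword corresponds to its own row in Tags; checking the plan with EXPLAIN on real data volumes is still worthwhile.

        SELECT td.id_tutor
        FROM Tutors_Tag_Relations AS td
        INNER JOIN Tags AS t ON t.id_tag = td.id_tag
        WHERE t.tag LIKE '%key1%' OR t.tag LIKE '%key2%'
        GROUP BY td.id_tutor
        HAVING COUNT(DISTINCT t.tag) = 2
        LIMIT 10;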

  • Cross-platform build UNC share (Windows->Linux) - possible to be case-sensitive on CIFS share?

    - by holtavolt
    To optimize builds between Windows and Linux (Ubuntu 10.04), I've got a UNC share of the source tree that is shared between systems, and all build output goes to local disk on each system. This mostly works great, as source updates and changes can quickly be tested on both systems, but there's one annoying limitation I can't find a way around: the Linux CIFS mount is case-insensitive. Consequently, a test compile of code that has an error like

        #include "Foo.h"

    for a file foo.h will not be caught by a test build (until a local compile is done on the Linux box, e.g. nightly builds).

    Is it possible to have case-sensitivity of the Windows UNC share on the Linux box? I've tried a variety of fstab and mount combinations with no success, as well as editing smb.conf to set "case sensitive = yes". Given what the Ubuntu man page states on this:

        nocase  Request case insensitive path name matching (case sensitive is the default if the server suports it).

    I suspect that this is a limitation on the Windows UNC side, and there's nothing to be done short of switching to some other mechanism (is NFS still viable anywhere?). If anyone has already solved this to support optimized cross-platform build environments, I'd appreciate hearing about it!

    Read the article

  • Web services or shared database for (game) server communication?

    - by jaaronfarr
    We have 2 server clusters: the first is made up of typical web applications backed by SQL databases. The second is highly optimized multiplayer game servers which keep all data in memory. Both clusters communicate with clients via HTTP (Ajax with JSON).

    There are a few cases in which we need to share data between the two server types; for example, reporting back and storing the results of a game (which should ultimately end up in the database). We're considering several approaches for inter-server communication:

    1. Just share the MySQL databases between clusters (introduce SQL to the game servers)
    2. Share data in a distributed key-value store like Memcache, Redis, etc.
    3. Use an RPC technology like Google ProtoBufs or Apache Thrift
    4. Use RESTful web services (the game server would POST back to the web servers, for example)

    At the moment, we're leaning towards web services or just sharing the database. Sharing the database seems easy, but we're concerned this adds extra memory and a new dependency to the game servers. Web services provide good separation of concerns and fit with the existing Ajax we use, but add complexity, overhead and many more ways for communication to fail.

    Are there any other good reasons not to use one or the other approach? Which would be easier to scale?

    Read the article

  • Phonegap: Will my mobile app 'feel' faster or slower once ported to phonegap?

    - by user15872
    So I'm designing everything in mobile Safari, and I know that PhoneGap is essentially a stripped webview, but...

    Question: Will my application run better in PhoneGap? (revised below)

    a) I imagine my navigation and core app will load faster, as the scripts and images are on the hard drive. Is this true?
    b) I assume, since they've been working on it for 2 years now, that they may have made some optimizations to make it quicker than just an average Safari window. Is this true?

    (Assuming both HTML5/JS/CSS code bases are pretty much the same and the app is running on iOS.)

    Update: Sorry, I meant to compare apples to slightly different apples.

    Question 1 revised: Will my app see any performance benefits running within a PhoneGap environment vs standard mobile Safari? (comparing mobile to mobile)

    1b) In what ways, other than loading time, has PhoneGap optimized performance over standard mobile Safari?

    Follow-ups:
    1) Are there any pitfalls, other than large libraries, that may cause PhoneGap to suffer a serious performance hit vs standard mobile Safari?
    2) Can I mix native and webview rendering? (i.e. the top half of my app is rendered with HTML/CSS/JS and the bottom half native)

    Read the article

  • threading in Python taking up too much CPU

    - by KevinShaffer
    I wrote a chat program and have a GUI running using Tkinter. To check when new messages have arrived, I create a new thread so Tkinter keeps doing its thing without locking up, while the new thread goes and grabs what I need and updates the Tkinter window. This however becomes a huge CPU hog, and my guess is that it has to do somehow with the fact that the thread is started and never really released when the function is done.

    Here's the relevant code (it's ugly and not optimized at the moment, but it gets the job done and itself does not use too much processing power; when I run it non-threaded it doesn't take up much CPU, but it locks up Tkinter). Note: this is inside of a class, hence the extra indentation.

        def interim(self):
            threading.Thread(target=self.readLog).start()
            self.after(5000, self.interim)

        def readLog(self):
            print 'reading'
            try:
                length = len(str(self.readNumber))
                f = open('chatlog' + str(myport), 'r')
                temp = f.readline().replace('\n', '')
                while (temp[:length] != str(self.readNumber)) or temp[0] == '<':
                    temp = f.readline().replace('\n', '')
                while temp:
                    if temp[0] != '<':
                        self.updateChat(temp[length:])
                        self.readNumber += 1
                    else:
                        self.updateChat(temp)
                    temp = f.readline().replace('\n', '')
                f.close()

    Is there a way to better manage the threading so I don't consume 100% of the CPU very quickly?

    Read the article
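
    A sketch of one way to restructure the polling above so CPU stays low (Python 3 module names, and the chat-log parsing is left out as an assumption): a single long-lived daemon thread sleeps between polls and pushes new lines onto a Queue, while the Tkinter thread drains the queue from after(), so no new thread is started every 5 seconds and the GUI never blocks on file I/O.

        import queue
        import threading
        import time
        import tkinter as tk

        class ChatWindow(tk.Frame):
            def __init__(self, master, logpath):
                super().__init__(master)
                self.pack()
                self.text = tk.Text(self)
                self.text.pack()
                self.incoming = queue.Queue()
                worker = threading.Thread(target=self._read_log, args=(logpath,), daemon=True)
                worker.start()
                self.after(500, self._drain_queue)

            def _read_log(self, logpath):
                """Runs in the worker thread: poll the file, then sleep."""
                seen = 0
                while True:
                    try:
                        with open(logpath, "r") as f:
                            lines = f.readlines()
                        for line in lines[seen:]:
                            self.incoming.put(line.rstrip("\n"))
                        seen = len(lines)
                    except OSError:
                        pass
                    time.sleep(5)          # the sleep is what keeps CPU usage low

            def _drain_queue(self):
                """Runs in the Tkinter thread: apply queued messages to the widget."""
                while not self.incoming.empty():
                    self.text.insert("end", self.incoming.get() + "\n")
                self.after(500, self._drain_queue)

        if __name__ == "__main__":
            root = tk.Tk()
            ChatWindow(root, "chatlog")    # "chatlog" path is an assumption
            root.mainloop()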

  • How to iteratively generate k-element subsets from a set of size n in Java?

    - by Bea Metitiri
    Hi, I'm working on a puzzle that involves analyzing all size-k subsets and figuring out which one is optimal. I wrote a solution that works when the number of subsets is small, but it runs out of memory for larger problems. Now I'm trying to translate an iterative function written in Python to Java so that I can analyze each subset as it's created and get only the value that represents how optimized it is, not the entire set, so that I won't run out of memory. Here is what I have so far, and it doesn't seem to finish even for very small problems:

        public static LinkedList<LinkedList<Integer>> getSets(int k, LinkedList<Integer> set) {
            int N = set.size();
            int maxsets = nCr(N, k);
            LinkedList<LinkedList<Integer>> toRet = new LinkedList<LinkedList<Integer>>();
            int remains, thresh;
            LinkedList<Integer> newset;

            for (int i = 0; i < maxsets; i++) {
                remains = k;
                newset = new LinkedList<Integer>();
                for (int val = 1; val <= N; val++) {
                    if (remains == 0) break;
                    thresh = nCr(N - val, remains - 1);
                    if (i < thresh) {
                        newset.add(set.get(val - 1));
                        remains--;
                    } else {
                        i -= thresh;
                    }
                }
                toRet.add(newset);
            }
            return toRet;
        }

    Can anybody help me debug this function or suggest another algorithm for iteratively generating size-k subsets?

    EDIT: I finally got this function working; I had to create a new variable that was the same as i to do the i and thresh comparison, because Python handles for-loop indexes differently.

    Read the article
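
    A sketch of the standard lexicographic next-combination walk the question above is after: only the current index array of size k is kept, so each subset can be scored as it is produced and nothing needs to be stored. The scoring itself is left as a comment since it depends on the puzzle.

        import java.util.Arrays;

        public class Combinations {

            /** Advances idx to the next k-combination of {0..n-1}; returns false when done. */
            static boolean nextCombination(int[] idx, int n) {
                int k = idx.length;
                int i = k - 1;
                while (i >= 0 && idx[i] == n - k + i) i--;   // rightmost index that can still grow
                if (i < 0) return false;                     // already at the last combination
                idx[i]++;
                for (int j = i + 1; j < k; j++) idx[j] = idx[j - 1] + 1;
                return true;
            }

            public static void main(String[] args) {
                int n = 5, k = 3;
                int[] idx = new int[k];
                for (int i = 0; i < k; i++) idx[i] = i;      // first combination: 0,1,2,...
                do {
                    // score the subset here instead of storing it
                    System.out.println(Arrays.toString(idx));
                } while (nextCombination(idx, n));
            }
        }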

  • Overhead of calling tiny functions from a tight inner loop? [C++]

    - by John
    Say you see a loop like this one:

        for(int i=0; i<thing.getParent().getObjectModel().getElements(SOME_TYPE).count(); ++i)
        {
            thing.getData().insert(
                thing.GetData().Count(),
                thing.getParent().getObjectModel().getElements(SOME_TYPE)[i].getName()
            );
        }

    If this was Java I'd probably not think twice. But in performance-critical sections of C++, it makes me want to tinker with it... however I don't know if the compiler is smart enough to make that futile. This is a made-up example, but all it's doing is inserting strings into a container. Please don't assume any of these are STL types; think in general terms about the following:

    1. Is the messy condition in the for loop going to get evaluated each time, or only once?
    2. If those get methods are simply returning references to member variables on the objects, will they be inlined away?
    3. Would you expect custom [] operators to get optimized at all?

    In other words, is it worth the time (in performance only, not readability) to convert it to something like:

        ElementContainer &source = thing.getParent().getObjectModel().getElements(SOME_TYPE);
        int num = source.count();
        Store &destination = thing.getData();
        for(int i=0;i<num;++i)
        {
            destination.insert(thing.GetData().Count(), source[i].getName());
        }

    Remember, this is a tight loop, called millions of times a second. What I wonder is whether all this will shave a couple of cycles per loop or something more substantial.

    Yes, I know the quote about "premature optimisation". And I know that profiling is important. But this is a more general question about modern compilers, Visual Studio in particular.

    Read the article

  • How to compare fields from two tables with another value in MySQL?

    - by I Like PHP
    I have two tables.

    table_school:

        school_open_time | school_close_time | school_day
        8:00 AM          | 9:00PM            | Monday
        10:00 AM         | 7:00PM            | Wednesday

    table_college:

        college_open_time | college_close_time | college_day
        10:00 AM          | 8:00PM             | Monday
        10:00 AM          | 9:00PM             | Tuesday
        10:00 AM          | 5:00PM             | Wednesday

    Now I want to select school_open_time, school_close_time, college_open_time and college_close_time according to today (meaning college_day = school_day = today), and if there is no row for a specific day in one of the tables, it should display a blank field (LEFT JOIN, I think I can use). Please suggest the best and most optimized query for this.

    UPDATE: if there is no open time and close time for the school, then college_open_time and college_close_time have to be returned as school_open_time and school_close_time (only returned, not filled into the database). And there must always be a college_open_time and college_close_time for a given day.

    MORE UPDATE: I am using the query below:

        SELECT college_open_time, college_close_time, school_open_time, school_close_time
        FROM tbl_college
        LEFT JOIN tbl_school ON school_owner_id = college_owner_id
        WHERE college_owner_id='".$_session['user_id']."'
        AND college_day='".date('l',time())."'";

    It returns a single row (left hand side having some values and right hand side blank) when there is no row for the given day in table_school, BUT it displays seven rows with the same values on the left hand side (college_open_time, college_close_time) and 6 blank rows on the right hand side (school_open_time and school_close_time) otherwise.

    I need only one row when both tables have a row for the given day, but the above query just takes the first row of the corresponding table_school where school_owner_id is 50 (say); it does not check the condition that school_day should be the given day.

    Read the article
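
    A sketch of the usual fix for the behaviour described above, reusing the table and column names from the last query (the bind placeholder and the CURDATE() weekday formatting are assumptions): the day match belongs in the LEFT JOIN's ON clause rather than the WHERE clause, and COALESCE supplies the college times when the school row is missing.

        SELECT c.college_open_time,
               c.college_close_time,
               COALESCE(s.school_open_time,  c.college_open_time)  AS school_open_time,
               COALESCE(s.school_close_time, c.college_close_time) AS school_close_time
        FROM tbl_college AS c
        LEFT JOIN tbl_school AS s
               ON s.school_owner_id = c.college_owner_id
              AND s.school_day = c.college_day        -- day condition must live in the ON clause
        WHERE c.college_owner_id = ?                  -- bind the user id instead of concatenating strings
          AND c.college_day = DATE_FORMAT(CURDATE(), '%W');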

  • What is a good RPC model for building a AJAX web app using PHP & JS?

    - by user366152
    I'm new to writing AJAX applications. I plan on using jQuery on the client side and PHP on the server side. I want to use something like XML-RPC to simplify my effort in calling server-side code. Ideally, I wouldn't care whether the transport layer uses XML or JSON or a format more optimized for the wire.

    If I were writing a console app, I'd use some tool to generate function stubs which I would then implement on the RPC server, while the client would natively call into those stubs. This provides a clean separation. Is there something similar available in the AJAX world?

    While on this topic, how would I proceed with session management? I would want it to be as transparent as possible. For example, if I try to hit an RPC endpoint which needs a valid session, it should reject the request if the client doesn't pass a valid session cookie. This would really ease my application development: I'd then have to simply handle the frontend using native JS functions, while on the backend I could simply implement the RPC functions.

    BTW, I don't wish to use Google Web Toolkit. My app won't be extremely heavy on AJAX.

    Read the article

  • Optimize INSERT / UPDATE / DELETE operation

    - by clime
    I wonder if the following script can be optimized somehow. It writes a lot to disk because it deletes possibly up-to-date rows and reinserts them. I was thinking about applying something like "INSERT ... ON DUPLICATE KEY UPDATE" and found some possibilities for single-row updates, but I don't know how to apply it in the context of an INSERT INTO ... SELECT query.

        CREATE OR REPLACE FUNCTION update_member_search_index() RETURNS VOID AS $$
        DECLARE
            member_content_type_id INTEGER;
        BEGIN
            member_content_type_id := (SELECT id FROM django_content_type
                                       WHERE app_label='web' AND model='member');

            DELETE FROM watson_searchentry WHERE content_type_id = member_content_type_id;

            INSERT INTO watson_searchentry (engine_slug, content_type_id, object_id,
                                            object_id_int, title, description, content,
                                            url, meta_encoded)
            SELECT 'default',
                   member_content_type_id,
                   web_member.id,
                   web_member.id,
                   web_member.name,
                   '',
                   web_user.email||' '||web_member.normalized_name||' '||web_country.name,
                   '',
                   '{}'
            FROM web_member
            INNER JOIN web_user ON (web_member.user_id = web_user.id)
            INNER JOIN web_country ON (web_member.country_id = web_country.id)
            WHERE web_user.is_active=TRUE;
        END;
        $$ LANGUAGE plpgsql;

    EDIT: Schemas of web_member, watson_searchentry, web_user, web_country: http://pastebin.com/3tRVPPVi

    (content_type_id, object_id_int) is a unique pair in watson_searchentry, but at the moment the index is not present (there is no use for it). This script should be run at most once a day for full rebuilds of the search index.

    Read the article
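
    For the question above, a sketch of how the INSERT INTO ... SELECT could become an upsert on PostgreSQL 9.5 or later, under two stated assumptions: a unique index on (content_type_id, object_id_int) exists (the question says the pair is unique but not yet indexed), and the statement still runs inside the function where member_content_type_id is defined. Rows for members that disappeared would still need a separate DELETE.

        -- requires: CREATE UNIQUE INDEX ON watson_searchentry (content_type_id, object_id_int);
        INSERT INTO watson_searchentry (engine_slug, content_type_id, object_id,
                                        object_id_int, title, description, content,
                                        url, meta_encoded)
        SELECT 'default',
               member_content_type_id,
               web_member.id,
               web_member.id,
               web_member.name,
               '',
               web_user.email || ' ' || web_member.normalized_name || ' ' || web_country.name,
               '',
               '{}'
        FROM web_member
        INNER JOIN web_user ON (web_member.user_id = web_user.id)
        INNER JOIN web_country ON (web_member.country_id = web_country.id)
        WHERE web_user.is_active = TRUE
        ON CONFLICT (content_type_id, object_id_int) DO UPDATE
        SET title   = EXCLUDED.title,
            content = EXCLUDED.content
        WHERE (watson_searchentry.title, watson_searchentry.content)
              IS DISTINCT FROM (EXCLUDED.title, EXCLUDED.content);  -- skip rows that did not change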
