Search Results

Search found 11409 results on 457 pages for 'large teams'.

  • Limiting object allocation over multiple threads

    - by John
    I have an application which retrieves and caches the results of a client's query. The client then requests different chunks of data, and the application sends the relevant results and removes them from the cache. A new requirement for this application is that there needs to be a run-time configurable maximum number of results which may be cached.

    I've taken the naive approach and implemented this by using a counter under a lock, which is incremented every time a result is cached and decremented whenever a result is removed from the cache. Unfortunately, this has drastically reduced the application's performance when processing a large number of concurrent requests. I have tried both a critical section lock and a spin-lock; the performance improves a bit with a spin-lock, but is still unacceptably slow. Is there a better way to solve this problem which may improve performance?

    Right now I have a thread pool that services requests, and each request is tied to a Request object which stores the cached results for that particular request. Here is a simplified pseudocode version of my current implementation:

        void ResultCallback( Result result, Request *request )
        {
            lock totalResultsCached
            lock cachedLimit
            if( totalResultsCached + 1 > cachedLimit )
            {
                unlock cachedLimit
                unlock totalResultsCached
                // cancel the request
                return;
            }
            ++totalResultsCached;
            unlock cachedLimit
            unlock totalResultsCached
            request.add(result)
        }

        void SendResults( int resultsToSend, Request *request )
        {
            while( resultsToSend > 0 )
            {
                send(request.remove())
                lock totalResultsCached
                --totalResultsCached
                unlock totalResultsCached
                --resultsToSend;
            }
        }
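
    One lock-free alternative is to make the counter a single atomic and reserve a slot with a compare-and-swap loop, so concurrent requests never serialize on a mutex. A minimal sketch, assuming C++11's <atomic> is available (the function names are illustrative, not from the original code):

        #include <atomic>

        std::atomic<int> totalResultsCached(0);
        std::atomic<int> cachedLimit(1000); // run-time configurable via store()

        // Returns true if a cache slot was reserved, false if the cache is full.
        bool TryReserveSlot()
        {
            int current = totalResultsCached.load();
            while (current < cachedLimit.load()) {
                // Claim the slot only if no other thread raced past us; on
                // failure, compare_exchange_weak refreshes 'current' and we retry.
                if (totalResultsCached.compare_exchange_weak(current, current + 1))
                    return true;
            }
            return false;
        }

        void ReleaseSlot()
        {
            totalResultsCached.fetch_sub(1);
        }

    ResultCallback would cancel the request when TryReserveSlot() returns false, and SendResults would call ReleaseSlot() once per result sent.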

  • Trouble getting QMainWindow to scroll

    - by random
    A minimal example:

        class MainWindow(QtGui.QMainWindow):
            def __init__(self, parent=None):
                QtGui.QMainWindow.__init__(self, parent)
                winWidth = 683
                winHeight = 784
                screen = QtGui.QDesktopWidget().availableGeometry()
                screenCenterX = (screen.width() - winWidth) / 2
                screenCenterY = (screen.height() - winHeight) / 2
                self.setGeometry(screenCenterX, screenCenterY, winWidth, winHeight)
                layout = QtGui.QVBoxLayout()
                layout.addWidget(FormA())
                mainWidget = QtGui.QWidget()
                mainWidget.setLayout(layout)
                self.setCentralWidget(mainWidget)

    FormA is a QFrame with a QVBoxLayout that can expand to an arbitrary number of entries. In the code posted above, if the entries in the forms can't fit in the window, the window itself grows. I'd prefer for the window to become scrollable.

    I've also tried the following. Replacing

        mainWidget = QtGui.QWidget()
        mainWidget.setLayout(layout)
        self.setCentralWidget(mainWidget)

    with

        mainWidget = QtGui.QScrollArea()
        mainWidget.setLayout(layout)
        self.setCentralWidget(mainWidget)

    results in the forms and entries shrinking if they can't fit in the window. Replacing it with

        mainWidget = QtGui.QWidget()
        mainWidget.setLayout(layout)
        scrollWidget = QtGui.QScrollArea()
        scrollWidget.setWidget(mainWidget)
        self.setCentralWidget(scrollWidget)

    results in the main widget (composed of the forms) being scrunched in the top-left corner of the window, leaving large blank areas on the right and bottom of it, and it still isn't scrollable. I can't set a limit on the size of the window because I wish for it to be resizable. How can I make this window scrollable?
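
    The last attempt is close; the usual missing piece is QScrollArea.setWidgetResizable, which lets the scroll area track the widget's size hint and show scrollbars once it no longer fits. A sketch, assuming PyQt4 as in the example above:

        mainWidget = QtGui.QWidget()
        mainWidget.setLayout(layout)

        scrollArea = QtGui.QScrollArea()
        scrollArea.setWidgetResizable(True)  # grow the widget to its size hint
        scrollArea.setWidget(mainWidget)     # setWidget(), not setLayout()
        self.setCentralWidget(scrollArea)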

  • How to add an object to an HTML string?

    - by Philippe Maes
    I'm trying to load several images by a drop action, then resize them and add them as thumbnails. The resize part is very important because the images can be very large and I want to keep the thumbnails small. Here is my code:

        loadingGif(drop);
        for (var i = 0; i < files.length; i++) {
            var file = files[i];
            var reader = new FileReader();
            reader.onload = function(e) {
                var src = e.target.result;
                var img = document.createElement('img');
                img.src = src;
                var scale = 100 / img.height;
                img.height = 100;
                img.width = scale * img.width;
                output.push('<div id="imagecontainer"><div id="image">' + img + '</div><div id="delimage"><img src="img/del.jpg" /></div></div>');
                if (output.length == files.length) {
                    drop.removeChild(drop.lastChild);
                    drop.innerHTML += output.join('');
                    output.length = 0;
                }
            }
            reader.readAsDataURL(file);
        }

    As you can probably tell, I insert a loading GIF image in my dropzone until all files are loaded (output.length == files.length). When a file is loaded, I add HTML to an array which I will print once the load is complete. The problem is I can't seem to add the img object (I need this object to resize the image) to the HTML string, which seems obvious as the img is an object... So my question to you guys is: how do I do this? :)
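
    Concatenating the element object just yields "[object HTMLImageElement]", but every element can serialize itself to markup via its outerHTML property. A sketch of the changed line, assuming the rest of the handler stays as above (two asides: repeated id values should really be classes, and depending on the browser the image's natural size may not be available until its own load event fires):

        // Serialize the resized <img> element into the markup string.
        output.push('<div class="imagecontainer"><div class="image">' + img.outerHTML +
                    '</div><div class="delimage"><img src="img/del.jpg" /></div></div>');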

  • Cleaner method for list comprehension clean-up

    - by Dan McGrath
    This relates to my previous question: Converting from nested lists to a delimited string.

    I have an external service that sends data to us in a delimited string format. It is lists of items, up to 3 levels deep. Level 1 is delimited by '|', level 2 is delimited by ';' and level 3 is delimited by ','. Each level or element can have 0 or more items. A simplified example is: a,b;c,d|e||f,g|h;;

    We have a function that converts this to nested lists, which is how it is manipulated in Python:

        def dyn_to_lists(dyn):
            return [[[c for c in b.split(',')] for b in a.split(';')] for a in dyn.split('|')]

    For the example above, this function results in the following:

        >>> dyn = "a,b;c,d|e||f,g|h;;"
        >>> print (dyn_to_lists(dyn))
        [[['a', 'b'], ['c', 'd']], [['e']], [['']], [['f', 'g']], [['h'], [''], ['']]]

    For lists, at any level, with only one item, we want the item as a scalar rather than a one-item list. For lists that are empty, we want just an empty string. I've come up with this function, which does work:

        def dyn_to_min_lists(dyn):
            def compress(x):
                return "" if len(x) == 0 else x if len(x) != 1 else x[0]
            return compress([compress([compress([item for item in mv.split(',')])
                                       for mv in attr.split(';')])
                             for attr in dyn.split('|')])

    Using this function on the example above, it returns:

        [[['a', 'b'], ['c', 'd']], 'e', '', ['f', 'g'], ['h', '', '']]

    Being new to Python, I'm not confident this is the best way to do it. Are there any cleaner ways to handle this? This will potentially have large amounts of data passing through it; are there any more efficient/scalable ways to achieve this?
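
    One cleaner shape is a single recursive function: it reads as one rule applied per level and extends to any number of delimiters without deeper nesting. A sketch under the same three-level assumption:

        DELIMS = ('|', ';', ',')  # one delimiter per nesting level

        def dyn_to_min_lists(dyn, depth=0):
            """Split on this level's delimiter, recurse, then collapse
            one-item lists to their single element."""
            if depth == len(DELIMS):
                return dyn
            parts = [dyn_to_min_lists(p, depth + 1) for p in dyn.split(DELIMS[depth])]
            return parts[0] if len(parts) == 1 else parts

    str.split always returns at least one element, so an empty field naturally collapses to '' (the scalar of a one-item list), matching the expected output above.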

  • Does IE6 really not allow me to set width/height from left/right/top/bottom?

    - by viatropos
    Building a site super quick and having it work on all my Mac browsers, I thought I'd take a gander on a friend's old Dell laptop with Windows XP and IE6. Nothing looks remotely correct. It's because I used lots of left/right/top/bottom (constraint) declarations to size elements proportionally to their parent's size. (I didn't use percent sizes because the percents refer to the parent's size before margins and padding are applied; left/right/top/bottom refer to them after, with position:absolute. I'm asking about that here :))

    I've read lots these past few weeks on how horrible IE6 (and IE in general) is, but because of all the reasons people give to support it (large market share, and the fear of installing better software), and because half the people in the company we're building the site for use IE6 (we're getting them to upgrade to Chrome slowly but surely), I thought that if I could just get IE6 to render my constraints, that might help.

    So I am messing around with simple layouts here, and they work fine in my latest versions of Firefox, Safari, Chrome, and Opera, but IE6 is basically saying: if you haven't set a width or height on me, I'm assuming it's zero. But position:absolute; left:0px; right:0px; top:0px; bottom:0px; on a container that's width:1000px; height:1000px; should be the same as setting width:1000px; height:1000px on the child, no?

    Taking a quick look at the source for this, why won't IE6 render the constraint-based absolutely positioned AND SIZED elements? (Note: I will be messing around with that file for a while.) Thanks
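
    IE6 indeed cannot size a box from two opposing offsets: it honors only one of left/right (and one of top/bottom) on a position:absolute element, so the computed size falls back to auto. A sketch of the common workaround for the full-bleed case (class names are illustrative):

        /* Modern browsers: stretch between opposing edges. */
        .child {
            position: absolute;
            left: 0; right: 0; top: 0; bottom: 0;
        }

        /* IE6 fallback: pin one corner and size to 100% of the parent instead.
           The leading underscore is the "underscore hack": only IE6 parses it. */
        .child {
            _width: 100%;
            _height: 100%;
        }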

  • C++, inject additional data in a method

    - by justik
    I am adding a new module to a large library. All methods here are implemented as static. Let me briefly describe a simplified model:

        typedef std::vector<double> TData;

        double test ( const TData &arg ) { return arg[0] * sin ( arg[1] ) + ...; }

        double ( *p_test ) ( const TData &arg ) = &test;

        class A
        {
        public:
            static T f1 (TData &input)
            {
                .... // some computations
                B::f2 (p_test);
            }
        };

    Inside f1() some computations are performed and a static method B::f2 is called. The f2 method is implemented by another author and represents some simulation algorithm (the example here is simplified):

        class B
        {
        public:
            static double f2 (double ( * p_test ) ( const TData &arg ) )
            {
                // difficult algorithm calling p_test many times
                double res = p_test(arg);
            }
        };

    The f2 method takes a pointer to some weight function (here p_test). But in my case, some additional parameters computed in f1 are required by the test() method:

        double test ( const TData &arg, const TData &arg2, char *arg3, ... ) { }

    How can I inject these parameters into test() (and so into f2) without changing the source code of the f2 method (that is not trivial), without a redesign of the library, and without dirty hacks :-) ? The most simple step is to overload f2:

        static double f2 (double ( * p_test ) ( const TData &arg ), const TData &arg2, char *arg3, ... )

    But what to do later? Consider that the methods are static, so there will be problems with objects. Thanks for your help.
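
    Since f2's raw function pointer leaves nowhere to smuggle state through, one sketch is to park the extra arguments in static storage and hand f2 a thin wrapper with the old signature (all names below are illustrative, and this is not thread-safe without making the statics thread-local):

        // Extra parameters for test(), set by f1 just before calling B::f2.
        struct TestContext
        {
            static const TData *arg2;
            static const char  *arg3;
        };
        const TData *TestContext::arg2 = 0;
        const char  *TestContext::arg3 = 0;

        double test_full ( const TData &arg, const TData &arg2, const char *arg3 );

        // Matches the pointer type f2 expects; forwards the stashed extras.
        double test_wrapper ( const TData &arg )
        {
            return test_full ( arg, *TestContext::arg2, TestContext::arg3 );
        }

        // Inside A::f1:
        //     TestContext::arg2 = &extraData;
        //     TestContext::arg3 = label;
        //     B::f2 ( &test_wrapper );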

  • What type of data store should I use for my iOS app?

    - by mwiederrecht
    I am pretty new to iOS and to using servers, so forgive me. I am building an iOS app for research. I need to monitor things that the user does and then push that up to a server for analysis (yes, with user and IRB permission). On the client side I need to keep quite a bit of data that won't really change except when pulling an updated version from the server, plus a minimal amount of user-specific data. Most of the data I collect needs to be pushed to a server for analysis and can then be deleted from the client side.

    I am struggling to figure out what kind of data store I need to use, especially since I am not quite sure how the pushing and pulling from the server works yet. Does it make sense to use Core Data? XML? SQLite? I like the Core Data idea, but I am not sure what kind of problems I will run into when I need to send large amounts of data to it and from it via the server. I imagine I might need to send data in a different form than it is stored in on either end, so what kind of overhead am I likely to run into in the process of converting that data? Is there a good format to save stuff in that would work well on both ends AND for sending the data?

    As you can probably tell, I could use some advice. Thanks!

  • Background loading JavaScript into an iframe using jQuery/Ajax?

    - by user210099
    I'm working on an offline-only help system which requires loading a large amount of search-related data into an iframe before the search functionality can be used. Due to the folder structure of the project, I am unable to use Ajax-related background load methods, since the files I need are loaded a few directories "up and over."

    I have written some code which delays the loading of the help data until the rest of the webpage is loaded. The help data consists of a bunch of JavaScript files which have information about the terms, etc. that exist in the help books installed on the system. The webpage works fine until I start to load this help data into a hidden iframe. While the JavaScript files are loading, I cannot use any of the webpage: links that require small files to be downloaded for hover-over effects don't show up, and JavaScript (switching tabs on the page) has no effect. I'm wondering if this is just a limitation of the way JavaScript works, or if there's something else going on here. Once all the files are loaded for the help system, the webpage works as expected.

        function test(){
            var MGCFrame = eval("parent.parent");
            if((ALLFRAMESLOADED == true)){
                t2 = MGCFrame.setTimeout("this.IHHeader.frames[0].loadData()",1);
            }
            else{
                t1 = MGCFrame.setTimeout("this.IHHeader.frames[0].test()",1000);
            }
        }

    loadData simply starts the data loading process. Thanks for any help you can provide.
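
    The page and the hidden iframe share one UI thread, so parsing a big batch of scripts blocks clicks and hover effects until it finishes. One mitigation sketch: inject the files one per turn of the event loop so the browser can service the UI in between (helpFiles and onDone are hypothetical names, and this assumes the browser fires load events on injected script elements):

        function loadHelpScripts(doc, helpFiles, onDone) {
            var i = 0;
            function loadNext() {
                if (i >= helpFiles.length) { onDone(); return; }
                var s = doc.createElement('script');
                s.onload = function() { setTimeout(loadNext, 0); }; // yield to the UI
                s.src = helpFiles[i++];
                doc.body.appendChild(s);
            }
            loadNext();
        }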

  • Random loss of precision in Python readline()

    - by jackyouldon
    Hi all,

    We have a process which takes a very large CSV (1.6GB) and breaks it down into pieces (in this case 3). This runs nightly and normally doesn't give us any problems. When it ran last night, however, the first of the output files had lost precision on the numeric fields in the data. The active ingredient in the script is the lines:

        while lineCounter <= chunk:
            oOutFile.write(oInFile.readline())
            lineCounter = lineCounter + 1

    and the normal output might be something like:

        StringField1; StringField2; StringField3; StringField4; 1000000; StringField5; 0.000054454

    etc. On this one occasion, and in this one output file, the numeric fields were all output with 6 zeros at the end, i.e.

        StringField1; StringField2; StringField3; StringField4; 1000000.000000; StringField5; 0.000000

    We are using Python v2.6 (and don't want to upgrade unless we really have to), but we can't afford to lose this data. Does anyone have any idea why this might have happened? If readline is doing some kind of implicit conversion, is there a way to do a binary read, because we really just want this data to pass through untouched? It is very weird to us that this only affected one of the output files generated by the same script, and when it was rerun the output was as expected.

    Thanks,
    Jack
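
    For what it's worth, readline does no numeric conversion at all: in Python 2 it returns the line's bytes as-is, so the reformatting had to come from whatever produced or post-processed that file. To rule out any translation along the way, a byte-for-byte pass-through can open both files in binary mode; a sketch assuming Python 2.6 (file names illustrative):

        oInFile = open('input.csv', 'rb')    # 'b': no newline or text translation
        oOutFile = open('chunk1.csv', 'wb')

        lineCounter = 1
        while lineCounter <= chunk:
            oOutFile.write(oInFile.readline())
            lineCounter += 1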

  • Simple Oracle File repository with folder hierarchy

    - by Ope
    I have an application that stores a large number of files (XML and binary) in folder hierarchies. Currently the main method is storing them in the file system or using a legacy CMS, which we want to get rid of. The CMS supports Oracle, and a customer wants to keep the files in Oracle because of enterprise policies (backup etc.).

    The question is: is there a simple implementation of a file repository with folder hierarchy for Oracle? What I am looking for is a small .Net component or example code (PL/SQL and/or .Net) that would have the following methods:

    - Create, Delete, Exists folder
    - CRUD file
    - Move and potentially Copy file or directory
    - Access to files and folders with paths like "/root/folder1/folder2/file.xml"
    - Ability to get all the files and folders in a folder, and potentially also the entire directory tree

    Tree traversal, getting the parent, all children etc. needs to be fast.

    I need the implementation in .Net, but if it were just the stored procedures, I could create the .Net calling code. I have pointers to generic articles on creating hierarchies in a DB, so if I need to do it from scratch, I know where to start. What I am asking here is: is there already an implementation that I could take without doing this from scratch? It seems like such a generic requirement...

    If the answer is a CMS, document management system or such, it should be open source or at least quite cheap (some hundreds per server), and it should be possible to deploy it XCopy-style - hopefully only a couple of DLLs. I do not need - or want - a full-featured big CMS with dozens of DLLs, and especially not an MSI installation.

    I have tried to google this, but the words "repository", "CMS", "file hierarchy" etc. give so many answers that the searches are pretty much useless.

    Thanks, Ope
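
    If it does come down to building it from scratch, the schema can stay small because Oracle has hierarchical queries built in. A sketch (table and column names are illustrative):

        -- One row per file or folder; the tree lives in parent_id.
        CREATE TABLE repo_node (
            id        NUMBER PRIMARY KEY,
            parent_id NUMBER REFERENCES repo_node(id),
            name      VARCHAR2(255) NOT NULL,
            is_folder CHAR(1) DEFAULT 'N' NOT NULL,
            content   BLOB
        );

        -- Whole directory tree, with paths, in one statement via CONNECT BY.
        SELECT id, name, LEVEL AS depth,
               SYS_CONNECT_BY_PATH(name, '/') AS full_path
        FROM repo_node
        START WITH parent_id IS NULL
        CONNECT BY PRIOR id = parent_id;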

  • jCarousel jQuery ajax loading 1000 records

    - by user1714862
    I'm using jCarousel to present a vertically scrolling list of ±1000 names. I am using Ajax to load the data 100 records at a time; when all the data has loaded, I just let the jCarousel loop in the DOM. I have the Ajax and the loop working, but I would like to make the code work no matter how large the total record count becomes.

    1) I'd like to eliminate the fixed number 1201 and use a variable.
    2) I currently loop on every record I see (carousel.first) to check whether it matches one of my reload positions (albeit the loop is only 12x, it still seems a little "loopy").

    Any suggestions on improving this?

        function mycarousel_itemLoadCallback(carousel, state)
        {
            //if (carousel.has(carousel.first, carousel.last)) {
            //    return;
            //}
            var getCount = 100;  // Number of records to grab at a time
            var maxCount = 1201; // total possible number of records
            var visible = 9;     // the number of records you can see in the window,
                                 // so this creates a pre-load by this many records
            for (var i = 1; i < maxCount; i += getCount)
            {
                if (carousel.first === 1 || carousel.first === (i - visible)) {
                    var getFrom = i;
                    var getTo = getFrom + (getCount - 1);
                    //alert('TOP Record =' + carousel.first + '\n Now GET ' + getFrom + '-' + getTo);
                    jQuery.get('#ajaxscript#',
                        { first: getFrom, last: getTo },
                        function(xml) {
                            mycarousel_itemAddCallback(carousel, getFrom, getTo, xml);
                        },
                        'xml'
                    );
                    break;
                }
            }
        };
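
    Both points can be addressed with arithmetic instead of a scan: a reload position is simply a spot where the first visible record sits 'visible' rows shy of a batch boundary, so the batch bounds can be computed directly. A sketch, assuming the total count (totalCount, a hypothetical variable) is supplied by the server:

        function mycarousel_itemLoadCallback(carousel, state) {
            var getCount = 100, visible = 9;
            var pos = (carousel.first === 1) ? 0 : carousel.first + visible - 1;
            if (pos % getCount !== 0 || pos >= totalCount) {
                return; // not at a reload point, or nothing left to fetch
            }
            var getFrom = pos + 1;
            var getTo = Math.min(pos + getCount, totalCount);
            jQuery.get('#ajaxscript#', { first: getFrom, last: getTo },
                function(xml) {
                    mycarousel_itemAddCallback(carousel, getFrom, getTo, xml);
                },
                'xml');
        }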

  • Slowing process creation under Java?

    - by oconnor0
    I have a single large-heap (up to 240GB, though in the 20-40GB range for most of this phase of execution) JVM [1] running under Linux [2] on a server with 24 cores. We have tens of thousands of objects that have to be processed by an external executable, and the data created by those executables is then loaded back into the JVM. Each executable produces about half a megabyte of data (on disk), which is of course larger once read back in after the process finishes.

    Our first implementation was to have each executable handle only a single object. This involved spawning twice as many executables as we had objects (since we called a shell script that called the executable). Our CPU utilization would start off high, but not necessarily at 100%, and slowly worsen. As we began measuring to see what was happening, we noticed that the process creation time [3] continually slows. While starting at sub-second times, it would eventually grow to take a minute or more. The actual processing done by the executable usually takes less than 10 seconds.

    Next we changed the executable to take a list of objects to process, in an attempt to reduce the number of processes created. With batch sizes of a few hundred (~1% of our current sample size), the process creation times start out around 2 seconds and grow to around 5-6 seconds.

    Basically, why is it taking so long to create these processes as execution continues?

    [1] Oracle JDK 1.6.0_22
    [2] Red Hat Enterprise Linux Advanced Platform 5.3, Linux kernel 2.6.18-194.26.1.el5 #1 SMP
    [3] Creation of the ProcessBuilder object, redirecting the error stream, and starting it.

  • PHP memory: how much is too much?

    - by Rob
    I'm currently re-writing my site using my own framework (it's very simple and does exactly what I need; I've no need for something like Zend or CakePHP). I've done a lot of work in making sure everything is cached properly: caching pages in files to avoid SQL queries, and generally limiting the number of SQL queries. Overall it looks very speedy. The average time taken for the front page (measured over 100 runs) is 0.046152 seconds.

    But one thing I'm not sure about is whether I've done enough to reduce PHP memory usage. The only time I've ever encountered problems with it is when uploading large files.

    Using memory_get_peak_usage(TRUE), which I THINK returns the highest amount of memory used while the script has been running, the average (taken over 100 runs) is 1572864 bytes.

    Is that good? I realise you don't know what it is I'm doing (it's rather simple: get the 10 latest articles, the comment count for each, get the user controls, popular tags in the sidebar, etc.). But would you be at all worried about a script using that sort of memory getting hit 50,000 times a day? Or once every second at peak times?

    I realise this is a very open-ended question. Hopefully you can understand that it's a bit of a stab in the dark, and I'm really just looking for some reassurance that it's not going to die horribly come re-launch day.
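
    For scale, the arithmetic is reassuring: 1,572,864 bytes is exactly 1.5 MiB (1.5 x 1024 x 1024), and 50,000 hits/day averages 50,000 / 86,400 ≈ 0.58 requests per second. Even if a peak second saw, say, 20 overlapping requests, that is on the order of 20 x 1.5 MiB = 30 MiB of peak PHP memory across them, far below PHP's usual 128 MB per-request memory_limit and well within ordinary server RAM.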

  • How to implement instance numbering?

    - by Joan Venge
    I don't know if the title is clear, but basically I am trying to implement something like this:

        public class Effect
        {
            public int InternalId ...
            public void ResetName() ...
        }

    When ResetName is called, it resets the name of the object to "Effect " + someIndex. So if I have 5 instances of Effect, they will be renamed to:

        "Effect 1"
        "Effect 2"
        "Effect 3"
        ...

    I have another method (ResetNames) in a manager/container type that calls ResetName for each instance. Right now I have to pass an integer to ResetName while keeping a counter myself inside ResetNames. But this feels less clean, and it prevents me from calling ResetName myself outside the manager class, which is valid. How can I do this better/cleaner?

    As for InternalId, it's just an id that stores the creation order for everything. I can't rely on those numbers directly, because they are large, like 32000, etc.

    EDIT: Container ResetNames code:

        int count = 1;
        var effects = this.Effects.OrderBy ( n => n.InternalId );
        foreach ( Effect effect in effects )
        {
            effect.ResetName ( count );
            ++count;
        }
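
    One way to make ResetName parameterless is to derive the number on demand: the index is just the effect's 1-based rank in creation order, which the effect can compute if it can reach its container. A sketch (the Manager back-reference and EffectManager shape are assumptions, not from the original code):

        using System.Collections.Generic;
        using System.Linq;

        public class EffectManager
        {
            public List<Effect> Effects { get; } = new List<Effect>();
        }

        public class Effect
        {
            public int InternalId { get; set; }
            public string Name { get; private set; }
            public EffectManager Manager { get; set; } // hypothetical back-reference

            public void ResetName()
            {
                // Rank of this effect among all effects, ordered by creation.
                int index = Manager.Effects
                                   .OrderBy(e => e.InternalId)
                                   .ToList()
                                   .IndexOf(this) + 1;
                Name = "Effect " + index;
            }
        }

    Each call is O(n log n), so the manager's bulk ResetNames loop remains the cheaper path when renaming everything at once.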

  • AJAX Uploading - Not waiting for response before continuing

    - by waxical
    I'm using Blueimp's jQuery uploader (very good it is too, btw) and an S3 handler to upload files and then transfer them to S3 via the S3 API (from the PHP SDK). It works. The problem is that on large files (1GB) it can take anything up to a few minutes to transfer (via create-object) onto S3. The PHP file that does this is hung up until the process is complete. The problem is, the uploader (which utilises the jQuery ajax method) seems to give up waiting and start again every time.

    I thought this was related to a PHP INI setting such as 'max_input_time', as it seemed to wait around 60 seconds, though this now appears to vary. I have upped max_input_time and related settings in the PHP INI, but got no further. I've also considered (the more likely case) that JS, either in the script or in the jQuery method, has a timeout. The developer (blueimp) has said there's no such timeout in the front-end script, nor have I seen any; and though 'timeout' is referenced in the jQuery ajax method options, it seems to affect the entire upload time rather than the wait for a response - so that's not much use.

    Any help or guidance gratefully received.

  • How to efficiently use LOCK_ESCALATION in MS SQL 2008

    - by Avias
    I'm currently having trouble with frequent deadlocks on a specific user table in MS SQL 2008. Here are some facts about this particular table:

    - It has a large number of rows (1 to 2 million).
    - All the indexes used on this table have only "use row lock" ticked in their options.
    - Rows are frequently updated by multiple transactions, but each row is unique (e.g. probably a thousand or more update statements are executed against different unique rows every hour).
    - The table does not use partitions.

    Upon checking the table in sys.tables, I found that lock_escalation is set to TABLE. I'm very tempted to set lock_escalation for this table to DISABLE, but I'm not really sure what side effects this would incur. From what I understand, using DISABLE will minimize escalating locks to TABLE level, which, combined with the row-lock settings of the indexes, should theoretically minimize the deadlocks I am encountering.

    From what I have read in "Determining threshold for lock escalation", it seems that locking automatically escalates when a single transaction fetches 5000 rows. What does a single transaction mean in this sense? A single session/connection getting 5000 rows through individual update/select statements? Or is it a single SQL update/select statement that fetches 5000 or more rows?

    Any insight is appreciated. BTW, n00b DBA here. Thanks
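
    For reference, escalation is evaluated per statement: SQL Server considers escalating when a single Transact-SQL statement acquires roughly 5,000 locks on one object, not when a session accumulates them across statements. The per-table switch itself is one ALTER (the table name below is illustrative):

        -- SQL Server 2008: stop escalating to table level for this table.
        ALTER TABLE dbo.UserActivity SET (LOCK_ESCALATION = DISABLE);

        -- Verify the setting.
        SELECT name, lock_escalation_desc
        FROM sys.tables
        WHERE name = 'UserActivity';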

  • PHP dynamic Page-level DocBlocks

    - by Obmerk Kronen
    I was wondering if there is a way to interact with page-level DocBlocks. My question is specifically about WordPress plugin development, but the question has also arisen in non-WordPress environments. The reason is mainly the possibility of easily changing versions and names throughout a large project, perhaps with a constant definition - but one that would also be reflected in the DocBlock.

    The following example DocBlock is from a WordPress plugin I'm writing:

        /*
        Plugin Name: o99 Auxilary Functions v0.4.7
        Plugin URI: http://www.myurl.com
        Description: some simple description that nobody reads.
        Version: 0.4.7
        Author: my cool name
        Author URI: http://www.ok-alsouri.com
        */

    Is there a way to transform it into:

        $ver = '0.4.7';
        $uri = 'http://www.myurl.com';
        $desc = 'some simple description that nobody reads.';
        $mcn = 'my cool name';
        // etc., etc.

        /*
        Plugin Name: o99 Auxilary Functions ($ver)
        Plugin URI: ($uri)
        Description: ($desc)
        Version: ($ver)
        Author: ($mcn)
        Author URI: ($uri)
        */

    Obviously, for echo to work I would need to break the DocBlock itself, and I cannot WRITE the DocBlock directly into its own file. In short: can I "generate" a DocBlock with PHP itself somehow? (I would think that the answer is "no" for the page itself... but maybe I am wrong and someone has a neat hack :-)) Is that even possible?
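
    At runtime the answer really is "no" for WordPress: the plugin header is read by scanning the file's raw text, before any PHP in that file could compute anything. What does work is generating the header at build/release time from one canonical set of values; a minimal sketch (file names are illustrative):

        <?php
        // build.php - run before release; WordPress only ever sees the output.
        $meta = array(
            '{VERSION}'     => '0.4.7',
            '{PLUGIN_URI}'  => 'http://www.myurl.com',
            '{DESCRIPTION}' => 'some simple description that nobody reads.',
            '{AUTHOR}'      => 'my cool name',
        );

        $template = file_get_contents('plugin.template.php');
        file_put_contents('o99-aux-functions.php', strtr($template, $meta));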

  • Good C string library

    - by chamakits
    Hello all. I recently got inspired to start a project I've been wanting to code for a while. I want to do it in C, because memory handling is key in this application. I was searching around for a good implementation of strings in C, since I know that doing it myself could lead to some messy buffer overflows, and I expect to be dealing with a fairly large number of strings.

    I found this article which gives details on each option, but each seems to have a good number of cons (don't get me wrong, the article is EXTREMELY helpful, but it still worries me that even if I were to choose one of those, I wouldn't be using the best I can get). I also don't know how up to date the article is, hence my current plea.

    What I'm looking for is something that can hold a large number of characters and simplifies the process of searching through the string. If it allows me to tokenize the string in any way, even better. Also, it should have good I/O performance. Printing, and formatted printing, isn't quite a top priority. I know I shouldn't expect a library to do all the work for me, but I was just wondering if there was a well-documented string library out there that could save me some time and some work. Any help is greatly appreciated. Thanks in advance!

    EDIT: I was asked about the license I prefer. Any sort of open-source license will do, but preferably GPL (v2 or v3).

    EDIT 2: I found the betterString (bstring) library and it looks pretty good: good documentation, a small yet versatile set of functions, and easy to mix with C strings. Anyone have any good or bad stories about it? The only downside I've read about is that it lacks Unicode support (again, I've only read about this, I haven't seen it face to face yet), but everything else seems pretty good.

    EDIT 3: Also, preferably pure C.
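
    For a feel of bstrlib's shape, a small usage sketch covering the search and tokenize requirements (assuming bstrlib.c/bstrlib.h are compiled in; C99 for the loop declaration):

        #include <stdio.h>
        #include "bstrlib.h"

        int main(void)
        {
            bstring s = bfromcstr("alpha,beta,gamma");
            bcatcstr(s, ",delta");                         /* append a C string */

            bstring needle = bfromcstr("gamma");
            printf("found at %d\n", binstr(s, 0, needle)); /* substring search */

            struct bstrList *parts = bsplit(s, ',');       /* tokenize on ',' */
            for (int i = 0; i < parts->qty; i++)
                printf("token: %s\n", bdata(parts->entry[i]));

            bstrListDestroy(parts);
            bdestroy(needle);
            bdestroy(s);
            return 0;
        }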

  • Python "callable" attribute (pseudo-property)

    - by mgilson
    In Python, I can alter the state of an instance by directly assigning to attributes, or by making method calls which alter the state of the attributes:

        foo.thing = 'baz'

    or:

        foo.thing('baz')

    Is there a nice way to create a class which accepts both of the above forms, and which scales to large numbers of attributes that behave this way? (Shortly, I'll show an example of an implementation that I don't particularly like.) If you're thinking that this is a stupid API, let me know, but perhaps a more concrete example is in order.

    Say I have a Document class. Document could have an attribute title. However, title may want to have some state as well (font, fontsize, justification, ...), but the average user might be happy enough just setting the title to a string and being done with it. One way to accomplish this would be:

        class Title(object):
            def __init__(self, text, font='times', size=12):
                self.text = text
                self.font = font
                self.size = size

            def __call__(self, *text, **kwargs):
                if text:
                    self.text = text[0]
                for k, v in kwargs.items():
                    setattr(self, k, v)

            def __str__(self):
                return '<title font={font}, size={size}>{text}</title>'.format(
                    text=self.text, size=self.size, font=self.font)

        class Document(object):
            _special_attr = set(['title'])

            def __setattr__(self, k, v):
                if k in self._special_attr and hasattr(self, k):
                    getattr(self, k)(v)
                else:
                    object.__setattr__(self, k, v)

            def __init__(self, text="", title=""):
                self.title = Title(title)
                self.text = text

            def __str__(self):
                return str(self.title) + '<body>' + self.text + '</body>'

    Now I can use this as follows:

        doc = Document()
        doc.title = "Hello World"
        print (str(doc))
        doc.title("Goodbye World", font="Helvetica")
        print (str(doc))

    This implementation seems a little messy though (with _special_attr). Maybe that's because this is a messed-up API; I'm not sure. Is there a better way to do this? Or did I leave the beaten path a little too far on this one?

    I realize I could use @property for this as well, but that wouldn't scale well at all if I had more than just one attribute which is to behave this way - I'd need to write a getter and setter for each, yuck.
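
    A custom descriptor scales better than either __setattr__ or per-attribute properties: one class captures the assign-or-call behavior, and each stateful attribute is declared in a single line. A sketch building on the Title class above:

        class CallableField(object):
            """Descriptor: after creation, 'doc.title = x' delegates to the
            stored object's __call__, so both forms keep working."""
            def __init__(self, factory, name):
                self.factory = factory
                self.slot = '_' + name        # per-instance storage, e.g. '_title'

            def __get__(self, obj, objtype=None):
                if obj is None:
                    return self
                return getattr(obj, self.slot)

            def __set__(self, obj, value):
                if hasattr(obj, self.slot):
                    getattr(obj, self.slot)(value)              # Title.__call__
                else:
                    object.__setattr__(obj, self.slot, self.factory(value))

        class Document(object):
            title = CallableField(Title, 'title')  # one line per special attribute

            def __init__(self, text="", title=""):
                self.title = title   # first assignment constructs the Title
                self.text = text

            def __str__(self):
                return str(self.title) + '<body>' + self.text + '</body>'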

  • Interpretation of range(n) and boolean list, one-to-one map, simpler?

    - by HH
        #!/usr/bin/python
        #
        # Description: bitwise factorization and then trying to find
        #              an elegant way to print numbers
        # Source: http://forums.xkcd.com/viewtopic.php?f=11&t=61300#p2195422
        # bug with large numbers such as 99, but main point in simplifying it
        #
        def primes(n):
            # all even numbers greater than 2 are not prime.
            s = [False]*2 + [True]*2 + [False,True]*((n-4)//2) + [False]*(n%2)
            i = 3
            while i*i < n:
                # get rid of ** and skip even numbers.
                s[i*i : n : i*2] = [False]*(1+(n-i*i)//(i*2))
                i += 2
                # skip non-primes
                while not s[i]:
                    i += 2
            return s

        # TRIAL: can you find a simpler way to print them?
        # feeling the overuse of assignments but cannot see a way to get it simpler
        p = 49
        boolPrimes = primes(p)
        numbs = range(len(boolPrimes))
        mydict = dict(zip(numbs, boolPrimes))
        print([numb for numb in numbs if mydict[numb]])

    Something I am looking for: can you get TRIAL to be of the extreme simplicity below? Any such method?

        a = [True, False, True]
        b = [1, 2, 3]
        b_a   # any such simple way to get it evaluated to [1, 3]?
              # the above is a crude way to do it in TRIAL
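
    That one-to-one boolean selection exists in the standard library: itertools.compress (Python 2.7/3.1 and later) pairs a data sequence with a selector sequence and keeps the items whose selector is true.

        from itertools import compress

        a = [True, False, True]
        b = [1, 2, 3]
        print(list(compress(b, a)))   # [1, 3]

        # Applied to TRIAL: no dict needed, the flags select the prime indices.
        boolPrimes = primes(49)
        print(list(compress(range(len(boolPrimes)), boolPrimes)))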

  • Java: multi-threaded maps: how do the implementations compare?

    - by user346629
    I'm looking for a good hash map implementation: specifically, one that's good for creating a large number of maps, most of them small. So memory is an issue. It should be thread-safe (though losing the odd put might be an OK compromise in return for better performance), and fast for both get and put. And I'd also like the moon on a stick, please, with a side order of justice.

    The options I know of are:

    - HashMap. Disastrously un-thread-safe.
    - ConcurrentHashMap. My first choice, but this has a hefty memory footprint: about 2k per instance.
    - Collections.synchronizedMap(HashMap). That's working OK for me, but I'm sure there must be faster alternatives.
    - Trove or Colt. I think neither of these is thread-safe, but perhaps the code could be adapted to be thread-safe.

    Any others? Any advice on what beats what and when? Any really good new hash map algorithms that Java could use an implementation of?

    Thanks in advance for your input!
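
    One detail worth knowing before ruling out ConcurrentHashMap: most of that fixed ~2k footprint comes from its default 16 lock segments, and the (real) three-argument constructor can dial that down. A sketch for many small, lightly contended maps:

        import java.util.Map;
        import java.util.concurrent.ConcurrentHashMap;

        public class SmallMaps {
            // initialCapacity 4, loadFactor 0.75, concurrencyLevel 1:
            // a single segment instead of 16 cuts the per-map overhead
            // while keeping full thread safety.
            public static <K, V> Map<K, V> newSmallConcurrentMap() {
                return new ConcurrentHashMap<K, V>(4, 0.75f, 1);
            }
        }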

  • Best Approach for Checking and Inserting Records

    - by nevets1219
    In one of our existing C programs, the purpose is:

        Open connection to DB
        for record in all_record:
            if record contains certain data:
                if record is NOT in table A:                      // see #1
                    insert record information into tables A and B // see #2
        Close connection to DB

    #1: select field from table where field=XXX
    #2: 2 inserts

    This is typically done every X months to sync everything up, or so I'm told. I've also been told that this process takes roughly a couple of days. There are (currently) at most 2.5 million records (though not necessarily all 2.5m will be inserted). One of the tables contains 10 fields and the other 5 fields. There isn't much to be done about iterating through the records, since that part can't be changed at the moment.

    What I would like to do is speed up the part where I query MySQL. I'm not sure if I have left out any important details - please let me know! I'm also no SQL expert, so feel free to point out the obvious. I've thought about:

    - Putting all the inserts into a transaction (at the moment I'm not sure how important it is for the transaction to be all-or-none, or whether this affects performance).
    - Using INSERT X WHERE NOT EXISTS Y.
    - LOAD DATA INFILE (but that would require I create a (possibly) large temp file).
    - I read that (hopefully someone can confirm) I should drop indexes so they aren't re-calculated.

    mysql Ver 14.7 Distrib 4.1.22, for sun-solaris2.10 (sparc) using readline 4.3
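
    For the check-then-insert round trip specifically, either of these collapses it into one statement, and both are available in MySQL 4.1 (table and column names below are illustrative):

        -- Option 1: declare the checked field unique, then let MySQL skip duplicates.
        ALTER TABLE table_a ADD UNIQUE INDEX idx_field (field);
        INSERT IGNORE INTO table_a (field, col1, col2)
        VALUES ('XXX', 'v1', 'v2');

        -- Option 2: no unique index required.
        INSERT INTO table_a (field, col1, col2)
        SELECT 'XXX', 'v1', 'v2' FROM DUAL
        WHERE NOT EXISTS (SELECT 1 FROM table_a WHERE field = 'XXX');

    Either way, the per-record SELECT from the C loop disappears, which is likely a large share of the two-day runtime.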

  • Why would gnu ld link order cause Signal 11 (SEGV) on startup?

    - by Benoit
    We are building a large application in C++ that includes the use of many (static) libraries. We have a problem where the application crashes on startup with a Signal 11 (SEGV), before we even reach main.

    After much debugging, we have observed that if we explicitly reference an object file so that its link order is early, the program crashes on startup. If the file is referenced later (or not referenced at all), the program does not crash. To be clear, there is NO code invoked directly from this object file. However, as it is C++, there might be static objects that do get constructed (it's a CORBA IDL generated file).

    We use the -Wl,--start-group ... --end-group arguments to multi-pass link the symbols, since the libraries are interdependent. Here is a representation of the linker's object file order:

        Order 1      Order 2      Order 3
        foo.o        foo.o        foo.o
        ...          ...          ...
        main.o       main.o       main.o
        crasher.o    libA.o       libA.o
        libA.o       LibB.o       LibB.o
        LibB.o       LibC.o       LibC.o
        LibC.o       crasher.o

        CRASH        NO CRASH     NO CRASH

    Does anyone have an idea why the link order has an effect on the crash? It would be nice if we could force crasher.o to link later, but we're really after an explanation. Also, is there a way to force the linker to place crasher.o towards the end?

    Just to add to the fun, crasher.o is actually part of a library in the --start/--end-group.
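
    The classic culprit for pre-main SEGVs that move with link order is the static initialization order fiasco: C++ gives no defined construction order for namespace-scope objects across translation units, and with GNU ld that order tends to follow link order, so an early crasher.o can run a static constructor that touches library state which does not exist yet. A sketch of the hazard and its standard fix (all names hypothetical):

        #include <iostream>

        struct Registry {
            Registry(int seed) { std::cout << "Registry(" << seed << ")\n"; }
        };

        int lookupService() { return 42; } // imagine this reads state that another
                                           // translation unit's static sets up

        // Hazard: constructed during startup, in an order the standard leaves
        // unspecified relative to whatever lookupService() depends on.
        static Registry g_registry(lookupService());

        // Fix (construct-on-first-use): the local static is built on the first
        // call, after its dependencies exist, regardless of link order.
        Registry &registry()
        {
            static Registry instance(lookupService());
            return instance;
        }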

  • Z-index vs Accessibility

    - by MetalAdam
    Here's a simplification of the code I'm having problems with, in regards to layering:

        <ul id="main_menu">
            <li>Option 1
                <ul id="submenu1">
                    <li>link</li>
                    <li>link</li>
                    <li>link</li>
                </ul>
            </li>
            <li>Option 2
                <ul id="submenu2">
                    <li>link</li>
                    <li>link</li>
                    <li>link</li>
                </ul>
            </li>
        </ul>

    My issue is that submenu2 seems to sit above Option 1. I have tried to give them appropriate z-indexes, but they don't seem to work... I'm assuming because submenu2 is a child of Option 2 and has no relevance to Option 1. Any idea of a workaround that would resolve my issue? I'm using large graphics for most of these links, so the overlapping is quite obvious.
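
    The assumption is on the right track: z-index is only compared between elements in the same stacking context, and a plain static li neither has a usable z-index nor creates a context for its submenu. Positioning the top-level items and raising the active one usually resolves it; a sketch (the .open class is hypothetical, toggled however the menu opens):

        #main_menu > li {
            position: relative;  /* z-index now applies; each li forms a stacking
                                    context containing its own submenu */
            z-index: 1;
        }
        #main_menu > li.open {
            z-index: 2;          /* the open item, submenu included, sits above
                                    its sibling menu items */
        }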

  • Generic file container for quick read of data

    - by DreamCodeR
    Since there are some major privacy issues with a lot of social networking sites, I am trying to think about alternatives. One is to let the user keep all their information stored in some kind of file container. Now, I haven't found a single type of container that can hold "generic" information; they exist only for audio/video.

    What I want is a container that can be read by PHP, with some kind of index file that lists the user's pictures in an image/ directory inside the container, FOAF files (or some alternative XML file describing the user's information, friends, etc.), and so on. My thought was to let the user keep all their information and data in a container that can be imported, exported and deleted from my server (the prototype social networking site I am trying to create), and then uploaded to another site that might use the same format (not that I think that will ever happen, but the user still keeps all their pictures, data, comments, messages, etc.).

    The only thing I have come up with so far is to create a tar archive with the Archive_Tar library, which extracts and creates tar archives, plus an index file describing which files hold the messages (there might be several, so each file won't be too large), what pictures are in the image/ folder, what names they have and what comments they have received, etc. Maybe also the permissions for viewing each type of content.

    Does there exist any generic file format for a container that I can use to keep all this information in one file, with a tree-like index file? Or must I try to create something like this myself?
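
    For what it's worth, the tar route can already cover most of this through PEAR's Archive_Tar API; a sketch of the container round trip (file names are illustrative):

        <?php
        require_once 'Archive/Tar.php';

        // Build the container: an index plus the payload directories.
        $tar = new Archive_Tar('container.tar');
        $tar->create(array('index.xml', 'image/pic1.jpg', 'messages/0001.xml'));

        // Later, read the index without unpacking the whole archive...
        $index = $tar->extractInString('index.xml');

        // ...or restore everything for export/import.
        $tar->extract('/tmp/export');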
