Search Results

Search found 6207 results on 249 pages for 'slow mtion'.


  • Serialization Performance and Google Android

    - by Jomanscool2
    I'm looking for advice on speeding up serialization performance, specifically on Google Android. For a project I am working on, I am trying to relay a couple hundred objects from a server to the Android app, and am going through various stages to get the performance I need. First I tried a terrible XML parser that I hacked together using Scanner specifically for this project, and that caused unbelievably slow performance when loading the objects (~5 minutes for a 300KB file). I then moved away from that and made my classes implement Serializable and wrote the ArrayList of objects I had to a file. Reading that file back into objects on the Android, with the file already downloaded mind you, was taking ~15-30 seconds for the ~100KB serialized file. I still find this completely unacceptable for an Android app, as my app requires loading the data when the application starts. I have read briefly about Externalizable and how it can increase performance, but I am not sure how one implements it with nested classes. Right now, I am trying to store an ArrayList of the following class, with the nested classes below it. public class MealMenu implements Serializable{ private String commonsName; private long startMillis, endMillis, modMillis; private ArrayList<Venue> venues; private String mealName; } And the Venue class: public class Venue implements Serializable{ private String name; private ArrayList<FoodItem> foodItems; } And the FoodItem class: public class FoodItem implements Serializable{ private String name; private boolean vegan; private boolean vegetarian; } If Externalizable is the way to go to increase performance, is there any information on how Java calls the methods on the objects when you try to write them out? I am not sure if I need to implement it in the parent class, nor how I would go about serializing the nested objects within each object.
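
    Not an answer from the original thread, but a minimal sketch of how the Externalizable pattern might look for these nested classes, assuming the String fields are never null (writeUTF cannot handle nulls): each class writes its own primitives plus a length-prefixed list, and the parent drives its children explicitly.

        import java.io.Externalizable;
        import java.io.IOException;
        import java.io.ObjectInput;
        import java.io.ObjectOutput;
        import java.util.ArrayList;

        public class MealMenu implements Externalizable {
            private String commonsName;
            private long startMillis, endMillis, modMillis;
            private ArrayList<Venue> venues;
            private String mealName;

            public MealMenu() { }  // Externalizable requires a public no-arg constructor

            public void writeExternal(ObjectOutput out) throws IOException {
                out.writeUTF(commonsName);
                out.writeLong(startMillis);
                out.writeLong(endMillis);
                out.writeLong(modMillis);
                out.writeUTF(mealName);
                out.writeInt(venues.size());      // length prefix...
                for (Venue v : venues) {
                    v.writeExternal(out);         // ...then each nested object explicitly
                }
            }

            public void readExternal(ObjectInput in) throws IOException, ClassNotFoundException {
                commonsName = in.readUTF();
                startMillis = in.readLong();
                endMillis = in.readLong();
                modMillis = in.readLong();
                mealName = in.readUTF();
                int n = in.readInt();
                venues = new ArrayList<Venue>(n);
                for (int i = 0; i < n; i++) {
                    Venue v = new Venue();
                    v.readExternal(in);
                    venues.add(v);
                }
            }
        }
        // Venue and FoodItem would follow the same pattern, each serializing its own
        // primitive fields and the length-prefixed contents of its nested list.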

    Read the article

  • Optimizing PHP require_once's for low disk i/o?

    - by buggedcom
    Q1) I'm designing a CMS (-who isn't!) but priority is being given to caching. Literally everything is cached: DB rows, DB id queries, configuration data, processed data, compiled templates. Currently it has two layers of caching. The first is an opcode cache or memory cache such as APC, eAccelerator, XCache or memcached. If an entry is not found in there, it is then searched for in the secondary slow cache, i.e. PHP includes. Are the opcode caches actually faster than doing a require_once of a PHP file with a var_export'd array of data in it? My tests are inconclusive, as my development box (XAMPP with PHP 5.3) keeps throwing errors installing any of the aforementioned programs. Q2) The CMS has numerous helper classes that are autoloaded on demand instead of loading all files. Mostly each has a require before it so no autoloading needs to take place; however, this is not the question. Because a page script can have up to 50-60 helper files included, I have a feeling that if the site were under pressure it would buckle because of all the I/O this incurs. Ignore for the moment that there is an output cache in place that would remove the need for what I am about to suggest, and also that opcode caches would render this moot. What I have tried to do is join all the helper files required for the script's execution into one single file. This is achievable and works well; however, it has the side effect of dramatically increasing memory usage even though technically the same code is being used. What are your thoughts and opinions on this?
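
    For reference, a hedged sketch (not from the thread) of the kind of secondary file cache described in Q1: a var_export'd array written to a plain PHP file and pulled back in with include. The helper names are made up; with an opcode cache loaded, the comparison in Q1 effectively becomes "APC user-cache fetch" versus "include of an already-compiled file".

        <?php
        // Write: dump a data array to a plain PHP file that returns it.
        function cache_store($file, array $data) {
            $php = '<?php return ' . var_export($data, true) . ';';
            file_put_contents($file, $php, LOCK_EX);
        }

        // Read: including the file yields the array (or false if missing).
        function cache_fetch($file) {
            return is_file($file) ? include $file : false;
        }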

    Read the article

  • How do we greatly optimize our MySQL database (or replace it) when using joins?

    - by jkaz
    Hi there, This is the first time I'm approaching an extremely high-volume situation. This is an ad server based on MySQL. However, the query that is used incorporates a lot of JOINs and is generally just slow. (This is Rails ActiveRecord, btw.) sel = Ads.find(:all, :select => '*', :joins => "JOIN campaigns ON ads.campaign_id = campaigns.id JOIN users ON campaigns.user_id = users.id LEFT JOIN countries ON countries.campaign_id = campaigns.id LEFT JOIN keywords ON keywords.campaign_id = campaigns.id", :conditions => [flashstr + "keywords.word = ? AND ads.format = ? AND campaigns.cenabled = 1 AND (countries.country IS NULL OR countries.country = ?) AND ads.enabled = 1 AND campaigns.dailyenabled = 1 AND users.uenabled = 1", kw, format, viewer['country'][0]], :order => order, :limit => limit) My questions: Is there an alternative database like MySQL that has JOIN support, but is much faster? (I know there's Postgres, still evaluating it.) Otherwise, would firing up a MySQL instance, loading a local database into memory and re-loading that every 5 minutes help? Otherwise, is there any way I could switch this entire operation to Redis or Cassandra, and somehow change the JOIN behavior to match the (non-JOIN-able) nature of NoSQL? Thank you!
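
    Before swapping databases, one hedged first step (not from the thread) is to make sure every join and filter column in that query is covered by an index and to check the plan with EXPLAIN; the index names below are made up, the columns are taken from the query above.

        ALTER TABLE ads       ADD INDEX idx_ads_campaign_format    (campaign_id, format, enabled);
        ALTER TABLE campaigns ADD INDEX idx_campaigns_user         (user_id, cenabled, dailyenabled);
        ALTER TABLE countries ADD INDEX idx_countries_campaign     (campaign_id, country);
        ALTER TABLE keywords  ADD INDEX idx_keywords_campaign_word (campaign_id, word);
        -- EXPLAIN SELECT ... will show whether the joins now use these indexes.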

    Read the article

  • Open source file upload with no timeout on IIS6 with ASP, ASP.NET 2.0 or PHP5

    - by Christopher Done
    I'm after a cross-platform cross-browser way of uploading files such that there is no timeout. Uploads aren't necessarily huge -- some just take a long time to upload because of the uploader's slow connection -- but the server times out anyway. I hear that there are methods to upload files in chunks so that somehow the server decides not to timeout the upload. After searching around all I can see is proprietary upload helpers and Java and Flash (SWFUpload) widgets that aren't cross-platform, don't upload in chunks, or aren't free. I'd like a way to do it in any of these platforms (ASP, ASP.NET 2.0 or PHP5), though I am not very clued up on all this .NET class/controller/project/module/visual studio/compile/etc stuff, so some kind of runnable complete project that runs on .NET 2.0 would be helpful. PHP and ASP I can assume will be more straight-forward. Unless I am completely missing something, which I suspect/hope I am, reasonable web uploads are bloody hard work in any language or platform. So my question is: how can I perform web browser uploads, cross-platform, so that they don't timeout, using free software? Is it possible at all?

    Read the article

  • What is the correct way to implement a massive hierarchical, geographical search for news?

    - by Philip Brocoum
    The company I work for is in the business of sending press releases. We want to make it possible for interested parties to search for press releases based on a number of criteria, the most important being location. For example, someone might search for all news sent to New York City, Massachusetts, or ZIP code 89134, sent from a governmental institution, under the topic of "traffic". Or whatever. The problem is, we've sent, literally, hundreds of thousands of press releases. Searching is slow and complex. For example, a press release sent to Queens, NY should show up in the search I mentioned above even though it wasn't specifically sent to New York City, because Queens is a subset of New York City. We may also want to implement "and" and "or" and negation and text search to the query to create complex searches. These searches also have to be fast enough to function as dynamic RSS feeds. I really don't know anything about search theory, or how it's properly done. The way we are getting by right now is using a data mart to store the locations the releases were sent to in a single table. However, because of the subset thing mentioned above, the data mart is gigantic with millions of rows. And we haven't even implemented cities yet, and there are about 50,000 cities in the United States, which will exponentially increase the size of the data mart by so much I'm afraid it just won't work anymore. Anyway, I realize this is not a simple question and there won't be a "do this" answer. However, I'm hoping one of you can point me in the right direction where I can learn about how massive searches are done? Because I really know nothing about it. And such a search engine is turning out to be incredibly difficult to make. Thanks! I know there must be a way because if Google can search the entire internet we must be able to search our own database :-)
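
    One hedged approach (not from the thread) to the subset problem: instead of materializing a data-mart row for every ancestor location, store each release against its most specific location and keep a separate ancestor/descendant ("closure") table, so a Queens release matches a New York City search at query time. The schema and names below are hypothetical.

        -- One row per (ancestor, descendant) pair, including (x, x) itself,
        -- so Queens is listed under New York City and under New York state.
        CREATE TABLE location_closure (
            ancestor_id   INT NOT NULL,
            descendant_id INT NOT NULL,
            PRIMARY KEY (ancestor_id, descendant_id)
        );

        -- "All releases sent to New York City (:nyc) about traffic":
        SELECT r.*
        FROM   releases r
        JOIN   release_locations rl ON rl.release_id = r.id
        JOIN   location_closure  lc ON lc.descendant_id = rl.location_id
        WHERE  lc.ancestor_id = :nyc
          AND  r.topic = 'traffic';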

    Read the article

  • Porting an IBXpress Interbase 6 app to the current Firebird platform, on Delphi 7?

    - by robsoft
    Just wondering if there are any gotchas to be wary of here. We have a legacy D7 app that we developed several years ago for a client, which uses IBXpress to talk to the open source Interbase 6 build. We're having a number of issues with that platform these days (very slow to connect/start-up on new hardware being the chief one) and the client has okayed spending some time/money moving the database over to Firebird. We really DON'T want to embark upon moving it to D2010 (or D2007 which would be my preference right now) as we figure that we might have to move the database layer from IBXpress to something else to best suit Firebird anyway. And at the end of the day, the client is only looking to lessen the database pain, not overhaul/upgrade/rewrite the app. Given the ancestry of Firebird, is it a fairly painless, well-understood path from IBXpress Interbase 6 to (whatever) with Firebird? We have quite a number of sprocs, triggers (and even datatypes) etc in the existing IB database already (and the client has a number of paying customers all using this platform) so we felt that going to Firebird was more likely to be a smoother move than moving to SQL Express (or another flavour of DB entirely). Note that we're not looking for 'embedded' DB advocacy - in many of our client's customers' installations, the software is used in a multi-user client-server way so keeping that kind of approach is important.

    Read the article

  • Overlaying 2D paths on UIImage without scaling artifacts

    - by tat0
    I need to draw a path along the shape of an image in such a way that it always matches its position on the image, independent of the image scale. Think of this like the hybrid view of Google Maps, where street names and roads are superimposed on top of the aerial pictures. Furthermore, this path will be drawn by the user's finger movements, and I need to be able to retrieve the path keypoints in the image's pixel coordinates. The user zooms in in order to set the path's location more precisely. I managed to somehow make it work using this approach: -Create a custom UIView called CanvasView that handles touch interaction and delivers scaling, rotation and translation values to either the UIImageView or the PathsView (see below) depending on a flag: deliverToImageOrPaths. -Create a UIImageView holding the base image. This is set as a child of CanvasView. -Create a custom UIView called PathsView that keeps track of the 2D path geometry and draws itself with a custom drawRect. This is set as a child of the UIImageView. So the hierarchy is: CanvasView > UIImageView > PathsView. In this way, when deliverToImageOrPaths is YES, finger gestures transform both the UIImageView and its child PathsView. When deliverToImageOrPaths is NO, the gestures affect only the PathsView, altering its geometry. So far so good. QUESTION: The problem I have is that when scaling the base UIImageView (via its .transform property) the PathsView is scaled with aliasing artifacts. drawRect is still being called on the PathsView, but I guess it's performing the drawing using the original buffer size and then interpolating. How can I solve this issue? Are there better ways to implement these features? PS: I tried changing the PathsView layer class to CATiledLayer with levelsOfDetailBias 4 and levelsOfDetail 4. It solves the aliasing problem to some extent, but it's unacceptably slow to render.
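
    One hedged idea, not from the thread and untested: once a zoom gesture settles, bump the PathsView's contentScaleFactor (and its layer's contentsScale) to match the current zoom, so drawRect renders into a backing buffer of the right resolution instead of being stretched. This assumes iOS 4+; the callback name is made up.

        // Sketch: re-render PathsView at the new scale after zooming ends.
        - (void)canvasDidZoomTo:(CGFloat)zoom {           // hypothetical callback from CanvasView
            pathsView.contentScaleFactor = zoom * [UIScreen mainScreen].scale;
            pathsView.layer.contentsScale = pathsView.contentScaleFactor;
            [pathsView setNeedsDisplay];                  // drawRect now gets a larger buffer
        }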

    Read the article

  • Thumbnails fadein fadeout specific div fade issues

    - by Omikron
    I am using this code to hide and show a div based on which thumbnail you roll over: $(document).ready(function(){ $('div.infodiv').hide(); $(".website_thumbs a").hover( function(){ var name = $(this).attr("name"); $(".infodiv").stop(); $("."+name).fadeIn(); }, function(){ var name = $(this).attr("name"); $("."+name).fadeTo(7000,1).fadeOut(); }); }); The script gets the name attribute from the thumbnail and displays the div with the corresponding class. Each div shares the .infodiv class but also has a class unique to each thumbnail. The functionality is basically where I want it, but when you scroll over the thumbnails fast, some of the divs get stuck in a kind of half faded-in state and stop working unless I roll over them once more, at which point they slowly fade in and are usable again. I am a bit new to jQuery and would appreciate any help.
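
    A hedged sketch (not from the thread) of the usual fix for that half-faded stuck state: call .stop(true, true) on the specific div so the interrupted animation's queue is cleared and jumped to its end before the next effect starts.

        $(".website_thumbs a").hover(
            function () {
                var name = $(this).attr("name");
                $("." + name).stop(true, true).fadeIn();        // clear queue, jump to end, fade in
            },
            function () {
                var name = $(this).attr("name");
                $("." + name).stop(true, true).fadeOut("slow"); // likewise on the way out
            }
        );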

    Read the article

  • Fast JSON serialization (and comparison with Pickle) for cluster computing in Python?

    - by user248237
    I have a set of data points, each described by a dictionary. The processing of each data point is independent and I submit each one as a separate job to a cluster. Each data point has a unique name, and my cluster submission wrapper simply calls a script that takes a data point's name and a file describing all the data points. That script then accesses the data point from the file and performs the computation. Since each job has to load the set of all points only to retrieve the point to be run, I wanted to optimize this step by serializing the file describing the set of points into an easily retrievable format. I tried using jsonpickle, with the following method, to serialize a dictionary describing all the data points to file:

        def json_serialize(obj, filename, use_jsonpickle=True):
            f = open(filename, 'w')
            if use_jsonpickle:
                import jsonpickle
                json_obj = jsonpickle.encode(obj)
                f.write(json_obj)
            else:
                simplejson.dump(obj, f, indent=1)
            f.close()

    The dictionary contains very simple objects (lists, strings, floats, etc.) and has a total of 54,000 keys. The JSON file is ~20 megabytes in size. It takes ~20 seconds to load this file into memory, which seems very slow to me. I switched to using pickle with the same exact object, and found that it generates a file that's about 7.8 megabytes in size, and can be loaded in ~1-2 seconds. This is a significant improvement, but it still seems like loading of a small object (less than 100,000 entries) should be faster. Aside from that, pickle is not human readable, which was the big advantage of JSON for me. Is there a way to use JSON to get similar or better speed-ups? If not, do you have other ideas on structuring this? (Is the right solution to simply "slice" the file describing each event into a separate file and pass that on to the script that runs a data point in a cluster job? It seems like that could lead to a proliferation of files.) Thanks.
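
    For comparison, a hedged sketch (not from the thread) of the pickle variant with an explicit binary protocol, which is usually what makes the pickle side fast; the default protocol 0 is ASCII and much slower.

        import cPickle as pickle

        def pickle_serialize(obj, filename):
            f = open(filename, 'wb')
            # HIGHEST_PROTOCOL selects the binary protocol (2 on Python 2.x)
            pickle.dump(obj, f, pickle.HIGHEST_PROTOCOL)
            f.close()

        def pickle_load(filename):
            f = open(filename, 'rb')
            obj = pickle.load(f)
            f.close()
            return obj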

    Read the article

  • F#: Tell me what I'm missing about using Async.Parallel

    - by JBristow
    OK, so I'm doing Project Euler Problem #14, and I'm fiddling around with optimizations in order to feel F# out. In the following code:

        let evenrule n = n / 2L
        let oddrule n = 3L * n + 1L
        let applyRule n = if n % 2L = 0L then evenrule n else oddrule n
        let runRules n =
            let rec loop a final =
                if a = 1L then final
                else loop (applyRule a) (final + 1L)
            n, loop (int64 n) 1L
        let testlist = seq { for i in 3 .. 2 .. 1000000 do yield i }
        let getAns sq = sq |> Seq.head
        let seqfil (a, acc) (b, curr) =
            if acc = curr then (a, acc)
            else if acc < curr then (b, curr)
            else (a, acc)
        let pmap f l =
            seq { for a in l do yield async { return f a } }
            |> Seq.map Async.RunSynchronously
        let pmap2 f l =
            seq { for a in l do yield async { return f a } }
            |> Async.Parallel
            |> Async.RunSynchronously
        let procseq f l =
            l |> f runRules |> Seq.reduce seqfil |> fst
        let timer = System.Diagnostics.Stopwatch()
        timer.Start()
        let ans1 = testlist |> procseq Seq.map
        printfn "%A\t%A" ans1 timer.Elapsed    // 837799 00:00:08.6251990
        timer.Reset()
        timer.Start()
        let ans2 = testlist |> procseq pmap
        printfn "%A\t%A" ans2 timer.Elapsed    // 837799 00:00:12.3010250
        timer.Reset()
        timer.Start()
        let ans3 = testlist |> procseq pmap2
        printfn "%A\t%A" ans3 timer.Elapsed    // 837799 00:00:58.2413990
        timer.Reset()

    Why does the Async.Parallel code run REALLY slow in comparison to the straight-up map? I know I shouldn't see that much of an effect, since I'm only on a dual-core Mac. Please note that I do NOT want help solving Problem #14, I just want to know what's up with my parallel code.
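
    Not part of the question, but a hedged note on what is usually going on: the per-item work here is tiny, so wrapping every one of the ~500,000 items in its own async makes scheduling overhead dominate, and the pmap version still runs the items one at a time because RunSynchronously is applied per element inside Seq.map. A sketch of an alternative that lets the library partition the work instead (assumes .NET 4 / FSharp.Core with the Array.Parallel module; untested):

        let pmap3 f (l: seq<_>) =
            l |> Seq.toArray |> Array.Parallel.map f |> Array.toSeq

        // usage: let ans4 = testlist |> procseq pmap3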

    Read the article

  • Should Development / Testing / QA / Staging environments be similar?

    - by Walter White
    Hi all, After much time and effort, we're finally using Maven to manage our application lifecycle for development. We still, unfortunately, use Ant to build an EAR before deploying to Test / QA / Staging. While we made that leap forward, developers are still free to do as they please when testing their code. One issue we have is that half our team is testing on Tomcat and the other half on Jetty. I prefer Jetty slightly over Tomcat, but regardless, we use WAS for all the other environments. My question is, should we develop on the same application server we're deploying to? We've had numerous bugs come up from these differences in environments. Tomcat, Jetty, and WAS are different under the hood. My opinion is that we all should develop on what we're deploying to production with, so we don't have the "well, it worked fine on my machine" problem. While I prefer Jetty, I would just as soon have us all work on the same environment, even if it means deploying to WAS, which is slow and cumbersome. What are your team dynamics like? Our lead developers stepped down from the team and development has been a free-for-all since then. Walter

    Read the article

  • priority queue with limited space: looking for a good algorithm

    - by SigTerm
    This is not homework. I'm using a small "priority queue" (implemented as an array at the moment) for storing the last N items with the smallest values. This is a bit slow - O(N) item insertion time. The current implementation keeps track of the largest item in the array and discards any items that wouldn't fit, but I would still like to reduce the number of operations further. I'm looking for a priority queue algorithm that matches the following requirements: the queue can be implemented as an array, which has a fixed size and _cannot_ grow. Dynamic memory allocation during any queue operation is strictly forbidden. Anything that doesn't fit into the array is discarded, but the queue keeps the N smallest elements ever encountered. O(log(N)) insertion time (i.e. adding an element into the queue should take at most O(log(N))). (Optional) O(1) access to the *largest* item in the queue (the queue stores the *smallest* items, so the largest item will be discarded first, and I'll need it to reduce the number of operations). Easy to implement/understand. Ideally, something similar to binary search - once you understand it, you remember it forever. Elements need not be sorted in any way. I just need to keep the N smallest values ever encountered. When I need them, I'll access all of them at once. So technically it doesn't have to be a queue, I just need the N smallest values to be stored. I initially thought about using binary heaps (they can be easily implemented via arrays), but apparently they don't behave well when the array can't grow anymore. Linked lists and arrays will require extra time for moving things around. The STL priority queue grows and uses dynamic allocation (I may be wrong about that, though). So, any other ideas?
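
    A hedged sketch (not from the thread): a binary heap actually does fit these constraints if the array never grows. Keep a max-heap of the current N smallest values; once full, a new value replaces the root only if it is smaller than the root. The class below uses std::push_heap/std::pop_heap over a fixed buffer, with no allocation during any operation.

        #include <algorithm>
        #include <cstddef>

        // Keeps the K smallest values ever offered, in fixed storage.
        template <typename T, std::size_t K>
        class SmallestK {
            T data[K];              // arranged as a max-heap of the kept values
            std::size_t count;
        public:
            SmallestK() : count(0) {}

            void offer(const T& value) {
                if (count < K) {                          // still filling: O(log K)
                    data[count++] = value;
                    std::push_heap(data, data + count);   // default comparator => max-heap
                } else if (value < data[0]) {             // smaller than the current largest
                    std::pop_heap(data, data + count);    // move largest to the back
                    data[count - 1] = value;              // overwrite it
                    std::push_heap(data, data + count);   // restore the heap: O(log K)
                }                                         // otherwise discard the value
            }

            const T& largest() const { return data[0]; }  // O(1) access to the largest kept
            const T* begin() const { return data; }       // unsorted access to everything kept
            const T* end() const { return data + count; }
            std::size_t size() const { return count; }
        };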

    Read the article

  • Why can't I fade out this table row in IE using jQuery?

    - by George Edison
    I can't get the table row to fade out in IE. It works in Chrome, but not IE. It just becomes really 'light' and stays on the screen. I tried IE8 with and without compatibility mode. <html> <head> <script type="text/javascript" src="jquery.js"></script> <script type="text/javascript"> function hideIt() { $('#hideme').fadeTo("slow", 0.0); } </script> </head> <body> <table> <tr id='hideme'> <td>Hide me!</td> </tr> </table> <button onclick='hideIt();'>Hide</button> </body> </html> Is there a workaround/solution for a smooth fade?
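
    A hedged sketch (not from the thread) of the usual IE workaround: IE handles opacity poorly on table rows, so fade the cells instead of the <tr> and hide the row once the fade completes.

        function hideIt() {
            $('#hideme td').fadeTo("slow", 0, function () {
                $('#hideme').hide();   // remove the row once its cells are fully transparent
            });
        }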

    Read the article

  • faster way to draw an image

    - by iHorse
    I'm trying to combine two images into a single image. Unfortunately this has to be done very quickly in response to a user sliding a UISlider. I can combine the two images into a single image no problem, but the way I'm doing it is horribly slow: the sliders stick and jump rather frustratingly. I don't think I can throw the code into a background thread because I would run out of threads too quickly. I haven't actually tried that yet. Below is my code; any ideas on how to speed it up would be helpful. UIGraphicsBeginImageContext(CGSizeMake(bodyImage.theImage.image.size.width * 1.2, bodyImage.theImage.image.size.height * 1.2)); [bodyImage.theImage.image drawInRect: CGRectMake(-2 + ((bodyImage.theImage.image.size.width * 1.2) - bodyImage.theImage.image.size.width)/2, kHeadAdjust, bodyImage.theImage.image.size.width * bodyImage.currentScale, bodyImage.theImage.image.size.height * bodyImage.currentScale)]; if(isCustHead) { [Head.theImage.image drawInRect: CGRectMake((bodyImage.theImage.image.size.width * 1.2 - headWidth)/2 - 11, 0, headWidth * 0.92, headWidth * (Head.theImage.image.size.height/Head.theImage.image.size.width) * 0.92)]; } else { [Head.theImage.image drawInRect: CGRectMake((bodyImage.theImage.image.size.width * 1.2 - (headWidth * defaultHeadAdjust))/2 - 10, 0, (headWidth * defaultHeadAdjust * 0.92), (headWidth * defaultHeadAdjust) * (Head.theImage.image.size.height/Head.theImage.image.size.width) * 0.92)]; } drawSurface.theImage.image = UIGraphicsGetImageFromCurrentImageContext(); UIGraphicsEndImageContext();

    Read the article

  • Solving problems involving more complex data structures with CUDA

    - by Nils
    So I read a bit about CUDA and GPU programming. I noticed a few things, such as that access to global memory is slow (therefore shared memory should be used) and that the execution paths of threads in a warp should not diverge. I also looked at the (dense) matrix multiplication example described in the programmer's manual, and at the n-body problem. And the trick with the implementation seems to be the same: arrange the calculation in a grid (which it already is in the case of the matrix multiplication); then subdivide the grid into smaller tiles; fetch the tiles into shared memory and let the threads calculate as long as possible, until they need to reload data from global memory into shared memory. In the case of the n-body problem, the calculation for each body-body interaction is exactly the same (page 682): bodyBodyInteraction(float4 bi, float4 bj, float3 ai) It takes two bodies and an acceleration vector. The body vector has four components: its position and its weight. When reading the paper, the calculation is understood easily. But what if we have a more complex object, with a dynamic data structure? For now, just assume that we have an object (similar to the body object presented in the paper) which has a list of other objects attached, and the number of attached objects is different in each thread. How could I implement that without having the execution paths of the threads diverge? I'm also looking for literature which explains how different algorithms involving more complex data structures can be effectively implemented in CUDA.
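
    Not from the paper, but a common way to keep warps largely coherent with variable-length attachments is to flatten the per-object lists into two arrays (CSR-style): an offsets array and one flat data array. A hedged CUDA sketch with made-up names and placeholder math:

        // Object i owns entries [offsets[i], offsets[i+1]) of the flat 'attached' array.
        __global__ void processObjects(const int *offsets,
                                       const float4 *attached,
                                       float3 *results,
                                       int numObjects)
        {
            int i = blockIdx.x * blockDim.x + threadIdx.x;
            if (i >= numObjects) return;

            float3 acc = make_float3(0.0f, 0.0f, 0.0f);
            int begin = offsets[i];
            int end   = offsets[i + 1];

            // Threads loop different trip counts, but every iteration executes the
            // same instructions, so divergence is confined to the loop-exit test.
            for (int j = begin; j < end; ++j) {
                float4 other = attached[j];
                acc.x += other.x; acc.y += other.y; acc.z += other.z;  // placeholder update
            }
            results[i] = acc;
        }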

    Read the article

  • rs232 communication, general timing question

    - by Sunny Dee
    Hi, I have a piece of hardware which sends out a byte of data representing a voltage signal at a frequency of 100Hz over the serial port. I want to write a program that will read in the data so I can plot it. I know I need to open the serial port and open an InputStream. But this next part is confusing me and I'm having trouble understanding the process conceptually: I create a while loop that reads in the data from the InputStream one byte at a time. How do I get the while loop timing right so that there is always a byte available to be read whenever it reaches the read-byte line? I'm guessing that I can't just put a sleep function inside the while loop to try and match it to the hardware sample rate. Is it just a matter of continuing to read the InputStream in the while loop, and if it's too fast then it won't do anything (since there's no new data), and if it's too slow then the data will accumulate in the InputStream's buffer? Like I said, I'm only trying to understand this conceptually, so any guidance would be much appreciated! I'm guessing the idea is independent of which programming language I'm using, but if not, assume it is for use in Java. Thanks!
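
    A hedged sketch (not from the thread) of the usual pattern: InputStream.read() blocks until the next byte arrives, so the loop needs no explicit timing, and the serial driver's buffer absorbs bytes that arrive while the previous one is being processed. The port-opening code is omitted and the voltage conversion is made up.

        import java.io.IOException;
        import java.io.InputStream;

        public class SerialReader {
            // 'in' is assumed to be the InputStream obtained from the opened serial port.
            public void readLoop(InputStream in) throws IOException {
                int b;
                while ((b = in.read()) != -1) {   // blocks until a byte is available
                    double volts = toVoltage(b);  // hand this off to the plotting code
                }
            }

            private double toVoltage(int rawByte) {
                return rawByte / 255.0 * 5.0;     // made-up scaling; depends on the hardware
            }
        }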

    Read the article

  • What specific features of LabView are frustrating to you?

    - by Underflow
    Please bear with me: this isn't a language debate or a flame. It's a real request for opinions. Occasionally, I have to help educate a traditional text coder in how to think in LabVIEW (LV). Often during this process, I get to hear about how LV sucks. Rarely is this insight accompanied by rational observations other than "Language X is just so much better!". While this statement is satisfying to them, it doesn't help me understand what is frustrating them. So, for those of you with LabVIEW and text language experience, what specific things about LV drive you nuts? ------ Summaries ------- Thanks for all the answers! Some of the issues are answered in the comments below, some exist on other sites, and some are just genuine problems with LV. In the spirit of the original question, I'm not going to try to answer all of these here: check LAVA (http://www.lavag.org) or NI's website, and you'll be pleasantly surprised at how many of these things can be overcome. Unintentional concurrency; No access to traditional text manipulation tools; Binary-only source code control; Difficult to branch and merge; Too many open windows; Text has cleaner/clearer/more expressive syntax; Clean coding requires a lot of time and manipulation; Large, difficult-to-access API/palette system; Mouse required; File namespacing: no duplicate files with the same name in memory; LV objects are natively by-value only; Requires dev environment to view code; Lack of zoom; Slow startup; Memory pig; "Giant" code is difficult to work with; UI lockup is easy to do; Trackpads and LV don't mix well; String manipulation is graphically bloated; Limited UI customization; "Hidden" primitives (yes, these exist); Lack of official metaprogramming capability (not for much longer, though); Lack of Unicode support.

    Read the article

  • Autocomplete and IE7 - slowness, sluggishness as overall pagesize grows?

    - by wchrisjohnson
    Hi, I have the autocomplete plugin (http://bassistance.de/jquery-plugins/jquery-plugin-autocomplete/, versions 1.1 and 1.0.2) on a project to add pieces of "equipment" to a "project". On a fresh project the plugin works great; the data returned from the database comes back FAST, you can scroll the list fast, and can select an item and move on to the next one. Once I have a project established with equipment on it, and I go to add equipment, the performance is pretty bad. It takes 4-5 seconds to get the list of data back from the server, scrolling the list is painful, and the cursor takes several seconds to settle on an item. Repainting the page after the list goes away is slow. This is occurring in IE7, latest version. FF3 and Chrome are fine, very snappy. The page size is about 40K overall. I'm thinking this is an issue with the IE7 JavaScript engine, or an edge case with this plugin and IE7; it works quickly enough in FF3+. I would appreciate any ideas, solutions, known issues, or thoughts on how to more specifically pin this down. I'd love to post sample code, but this is a corporate app, and I'm not sure how useful it would be given that the server-side piece cannot be shown; i.e., you can't pull it down and test it like a self-contained piece of code. Thanks in advance! Chris

    Read the article

  • jquery toggle and fade in one function?

    - by tony noriega
    I was wondering if toggle() and fadeIn() could be used in one function... I got this to work, but it only fades in after the second click, not on the first click of the toggle. $(document).ready(function() { $('#business-blue').hide(); $('a#biz-blue').click(function() { $('#business-blue').toggle().fadeIn('slow'); return false; }); /* hides the slickbox on clicking the noted link */ $('a#biz-blue-hide').click(function() { $('#business-blue').hide('fast'); return false; }); }); <a href="#" id="biz-blue">Learn More</a> <div id="business-blue" style="border:1px soild #00ff00; background:#c6c1b8; height:600px; width:600px; margin:0 auto; position:relative;"> <p>stuff here</p> </div>
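
    A hedged sketch (not from the thread): since the div starts hidden, chaining toggle() before fadeIn() makes the first click show it instantly (the toggle) so the fade only becomes visible from the second click onward; letting fadeIn()/fadeOut() manage visibility on their own is usually enough.

        $(document).ready(function() {
            $('#business-blue').hide();

            $('a#biz-blue').click(function() {
                $('#business-blue').fadeIn('slow');   // fadeIn already shows a hidden element
                return false;
            });

            $('a#biz-blue-hide').click(function() {
                $('#business-blue').fadeOut('fast');  // or .hide('fast') as in the original
                return false;
            });
        });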

    Read the article

  • MS Access form_current() firing multiple times

    - by Eric G
    I have a form with two subforms (on separate tab pages). It's an MDB project in Access 2003. When it initially opens, Form_Current on the active subform fires once, as it should. But when you move to another record (ie. from the main form), it fires Form_Current on the active subform 4 times. Then subsequent record-moves result in Form_Current firing 2 times. This is a pain, because the subforms have a lot of fields that get moved and/or hidden and so it jumps around for every Form_Current, not to mention being slow. I am opening the form with a filter via DoCmd.OpenForm (actually it sends the filter in via OpenArgs). FilterOn is only set once, in Form_Open on the main form, never in the subforms. Form_Current is not called explicitly anywhere else in the code. When I look at the call stack when Form_Current fires moving the first time, it looks like: my_subform.Form_Current [<Debug Window>] my_subform.Form_Current So it seems like something in Form_Current is triggering another Form_Current event. But only on the first record move. The code in Form_Current is somewhat complex, involving custom classes and event callbacks, but generally does not touch the table data. The only thing I can think might be triggering a Form_Current is that it checks OldValue on form controls - could this be causing it? Or anything else come to mind? Thanks. Eric
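
    Not from the thread, and only a guard against the symptom rather than the cause: a sketch of skipping the expensive layout work when Form_Current fires repeatedly for the same record. "ID" is a hypothetical primary-key field on the subform's record source.

        Private mLastID As Variant

        Private Sub Form_Current()
            If Not IsNull(Me!ID) Then
                If mLastID = Me!ID Then Exit Sub   ' same record again: skip the re-layout
                mLastID = Me!ID
            End If
            ' ...existing code that moves/hides controls runs once per record...
        End Sub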

    Read the article

  • What's the largest (most complex) PHP algorithm ever implemented in a single monolithic PHP script?

    - by Alex R
    I'm working on a tool which converts PHP code to Scala. As one of the finishing touches, I'm in need of a really good (er, somewhat biased) benchmark. By dumb luck my first benchmark attempt was with some code which uses bcmath extensively, which unfortunately is 1000x slower in Java, making the Scala code 22x slower overall than the original PHP. So I'm looking for some meaningful PHP benchmark with the following characteristics: The source needs to be in a single file. I need it to be simple to set up - no databases, hard-to-find input files, etc. Simple text input and output preferred. It should not use features that are slow in Java (BigInteger, trigonometric functions, etc). It should not use esoteric or dynamic PHP functions (e.g. no "eval" or "variable vars"). It should not over-rely on built-in libraries, e.g. MD5, crypt, etc. It should not be I/O bound. A CPU-bound, memory-hungry algorithm is preferred. Basically, intensive OO operations, integer and string manipulation, recursion, etc. would be great. Thanks

    Read the article

  • jQuery: Inserting li items one by one?

    - by Legend
    I wrote the following (part of a jQuery plugin) to insert a set of items from a JSON object into a <ul> element. ... query: function() { ... $.ajax({ url: fetchURL, type: 'GET', dataType: 'jsonp', timeout: 5000, error: function() { self.html("Network Error"); }, success: function(json) { /* process JSON */ $.each(json.results, function(i, item) { $("<li></li>") .html(mHTML) .attr('id', "div_li"+i) .attr('class', "divliclass") .prependTo("#" + "div_ul"); $(slotname + "div_li" + i).hide(); $(slotname + "div_li" + i).show("slow") } } }); } }); }, ... Doing this may be adding the <li> items one by one in theory, but when I load the page, everything shows up instantaneously. Instead, is there an efficient way to make them appear one by one, more slowly? I'll explain with a small example: if I have 3 items, this code makes all 3 items appear instantaneously (at least to my eyes). I want something like 1 fades in, then 2 fades in, then 3 (something like a news ticker perhaps). Anyone have a suggestion?
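
    A hedged sketch (not from the thread) of one way to stagger the items: hide each <li> and give it a per-index delay before fading in, so item 0 appears first, then item 1, and so on. This assumes jQuery 1.4+ for .delay(); the 400 ms step is arbitrary.

        $.each(json.results, function (i, item) {
            $("<li></li>")
                .html(mHTML)
                .attr('id', "div_li" + i)
                .attr('class', "divliclass")
                .hide()
                .prependTo("#div_ul")
                .delay(i * 400)       // each item waits 400 ms longer than the previous one
                .fadeIn('slow');
        });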

    Read the article

  • Locating memory leak in Apache httpd process, PHP/Doctrine-based application

    - by Sam
    I have a PHP application using these components: Apache 2.2.3-31 on CentOS 5.4, PHP 5.2.10, Xdebug 2.0.5 with Remote Debugging enabled, APC 3.0.19, Doctrine ORM for PHP 1.2.1 using Query Caching and Results Caching via APC, and MySQL 5.0.77 using Query Caching. I've noticed that when I start up Apache, I eventually end up with 10 child processes. As time goes on, each process will grow in memory until each one approaches 10% of available memory, which begins to slow the server to a crawl since together they grow to take up 100% of memory. Here is a snapshot of my top output:

          PID USER   PR NI  VIRT  RES SHR S %CPU %MEM   TIME+ COMMAND
         1471 apache 16  0  626m 201m 18m S  0.0 10.2 1:11.02 httpd
         1470 apache 16  0  622m 198m 18m S  0.0 10.1 1:14.49 httpd
         1469 apache 16  0  619m 197m 18m S  0.0 10.0 1:11.98 httpd
         1462 apache 18  0  622m 197m 18m S  0.0 10.0 1:11.27 httpd
         1460 apache 15  0  622m 195m 18m S  0.0 10.0 1:12.73 httpd
         1459 apache 16  0  618m 191m 18m S  0.0  9.7 1:13.00 httpd
         1461 apache 18  0  616m 190m 18m S  0.0  9.7 1:14.09 httpd
         1468 apache 18  0  613m 190m 18m S  0.0  9.7 1:12.67 httpd
         7919 apache 18  0  116m  75m 15m S  0.0  3.8 0:19.86 httpd
         9486 apache 16  0 97.7m  56m 14m S  0.0  2.9 0:13.51 httpd

    I have no long-running scripts (they all terminate eventually, the longest being maybe 2 minutes long), and I am working under the assumption that once each script terminates, the memory it uses gets deallocated. (Maybe someone can correct me on that.) My hunch is that it could be APC, since it stores data between requests, but at the same time, it seems weird that it would store data inside the httpd process. How can I track down which part of my app is causing the memory leak? What tools can I use to see how the memory usage is growing inside the httpd process and what is contributing to it?
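
    Not a diagnosis, but one hedged mitigation while hunting the leak is to let Apache recycle its children after a fixed number of requests so per-process growth stays bounded; an httpd.conf sketch for the prefork MPM (the values are made up):

        <IfModule prefork.c>
            StartServers          8
            MinSpareServers       5
            MaxSpareServers      10
            MaxClients           50
            # Recycle each child after 1000 requests so a per-request leak
            # cannot accumulate indefinitely.
            MaxRequestsPerChild 1000
        </IfModule>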

    Read the article

  • Is there a PHP benchmark that meets these specific criteria? [closed]

    - by Alex R
    I'm working on a tool which converts PHP code to Scala. As one of the finishing touches, I'm in need of a really good (er, somewhat biased) benchmark. By dumb luck my first benchmark attempt was with some code which uses bcmath extensively, which unfortunately is 1000x slower in Java, making the Scala code 22x slower overall than the original PHP. So I'm looking for some meaningful PHP benchmark with the following characteristics: The PHP source needs to be in a single file. It should solve a real-world problem. No silly looping over empty methods etc. I need it to be simple to set up - no databases, hard-to-find input files, etc. Simple text input and output preferred. It should not use features that are slow in Java (BigInteger, trigonometric functions, etc). It should not use esoteric or dynamic PHP functions (e.g. no "eval" or "variable vars"). It should not over-rely on built-in libraries, e.g. MD5, crypt, etc. It should not be I/O bound. A CPU-bound, memory-hungry algorithm is preferred. Basically, intensive OO operations, integer and string manipulation, recursion, etc. would be great. Thanks

    Read the article

  • Drawing to the canvas

    - by Mattl
    I'm writing an Android application that draws directly to the canvas in the onDraw event of a View. I'm drawing something that involves drawing each pixel individually; for this I use something like: for (int x = 0; x < xMax; x++) { for (int y = 0; y < yMax; y++){ MyColour = CalculateMyPoint(x, y); canvas.drawPoint(x, y, MyColour); } } The problem here is that this takes a long time to paint, as the CalculateMyPoint routine is quite an expensive method. Is there a more efficient way of painting to the canvas? For example, should I draw to a bitmap and then paint the whole bitmap to the canvas in the onDraw event? Or maybe evaluate my colours and fill in an array that the onDraw method can use to paint the canvas? Users of my application will be able to change parameters that affect the drawing on the canvas. This is incredibly slow at the moment.
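
    A hedged sketch (not from the thread) of the bitmap idea: compute the colours into an int[] once per parameter change (ideally off the UI thread), wrap it in a Bitmap, and let onDraw just blit it. calculateMyColour here is a hypothetical variant of CalculateMyPoint that returns a packed ARGB int rather than a Paint, and cachedBitmap is a made-up field.

        // Recompute only when the drawing parameters change:
        int[] pixels = new int[xMax * yMax];
        for (int y = 0; y < yMax; y++) {
            for (int x = 0; x < xMax; x++) {
                pixels[y * xMax + x] = calculateMyColour(x, y);   // packed ARGB int
            }
        }
        cachedBitmap = Bitmap.createBitmap(pixels, xMax, yMax, Bitmap.Config.ARGB_8888);

        // onDraw then becomes a single cheap call:
        @Override
        protected void onDraw(Canvas canvas) {
            canvas.drawBitmap(cachedBitmap, 0, 0, null);
        }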

    Read the article
