Search Results

Search found 6207 results on 249 pages for 'slow mtion'.


  • SlideDown then call custom function

    - by user275074
    Hi, I'm having difficulty understanding why my function is being called twice. I'm trying to detect when a radio button (intRev_yes) is clicked, check if a div is empty, slide down another div and then call a custom function which dynamically creates a date field. if(this.id=="intRev_yes"){ if($('div#mainField').is(':empty')){ $('.intRevSections').slideDown('slow',function(){ current=-1; addIrField(); }); } } When I alert at each stage, after getting to the end (i.e. addIrField()), the script seems to go back to the slideDown() section and calls addIrField() again. I can't understand why; there are no loops. current is used as an index for the date field.
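
    One likely cause (an assumption on my part, since the markup isn't shown): jQuery runs the slideDown() completion callback once per matched element, so if .intRevSections matches two elements, addIrField() fires twice. A minimal sketch of a guard, reusing the question's own selectors and globals:

    ```js
    // Sketch: run addIrField() only once per animation, no matter how many
    // elements ".intRevSections" matches. "current" and addIrField() are the
    // question's own globals; the guard flag is the only addition.
    if (this.id === "intRev_yes") {
        if ($('div#mainField').is(':empty')) {
            var done = false;                       // guard against per-element callbacks
            $('.intRevSections').slideDown('slow', function () {
                if (done) { return; }               // later elements' callbacks bail out
                done = true;
                current = -1;
                addIrField();
            });
        }
    }
    ```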

    Read the article

  • aio_write on linux with rtkaio is sometimes long

    - by Drakosha
    I'm using async I/O on Linux with the rtkaio library. In my tests everything works perfectly, but in my real application I see that aio_write, which is supposed to return very fast, is very slow. It can take more than 100 ms to write 128KB to an O_DIRECT padded file. Both my test and the application use the same I/O size, and I checked on the same file system (GFS). I added counters and see that about 50% of the async I/O operations are short (under 2 ms) and 50% are long (over 2 ms). I also checked that the test and the application both use the same rtkaio library. I'm pretty lost; does anyone have ideas where I should look? Another related question of mine: http://stackoverflow.com/questions/1799537/proc-sys-fs-aio-nr-is-never-higher-than-1024-aio-on-linux
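
    For what it's worth, a minimal sketch of how the per-call submission latency could be measured in isolation (plain POSIX aio_write from <aio.h>; the caller-owned aiocb, file descriptor and buffer are placeholders):

    ```c
    #include <aio.h>
    #include <time.h>

    /* Returns the wall-clock cost of the aio_write() call itself in ms.
     * The caller owns *cb (and the buffer it points at), which must stay
     * valid until the request completes (aio_error()/aio_return()). */
    static double timed_aio_write(struct aiocb *cb)
    {
        struct timespec t0, t1;
        clock_gettime(CLOCK_MONOTONIC, &t0);
        int rc = aio_write(cb);
        clock_gettime(CLOCK_MONOTONIC, &t1);
        if (rc != 0)
            return -1.0; /* submission failed, check errno */
        return (t1.tv_sec - t0.tv_sec) * 1e3 + (t1.tv_nsec - t0.tv_nsec) / 1e6;
    }
    ```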

    Read the article

  • 2D Spaceship movement math

    - by YAS
    Hi, I'm new here. I'm trying to make a top-down spaceship game and I want the movement to be somewhat realistic: 360 degrees with inertia, gravity, etc. I can make the ship move in 360 degrees with inertia without a problem, but what I need to do is impose a limit on how fast the engines can push the ship while not limiting the other forces pushing/pulling it. So, if the engines' maximum speed is 500 and the ship is already going 1000 from a gravity well, the ship should not go 1500 when its engines are on; but if it's pointing away from the direction it's moving, then the engines could slow it down. For what it's worth, I'm using Construct (www.scirra.com), and all I need is the math. Thanks for any help; I'm going bald from trying to figure this out.
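
    One common way to get that behaviour (a sketch of the math, not something from the question): apply thrust as an acceleration, but only while the ship's speed along the thrust direction is below the engine limit; all other forces stay unclamped.

    ```python
    import math

    def step(vx, vy, angle, thrust, max_engine_speed, gx, gy, dt):
        """One physics step. Thrust only accelerates the ship while its speed
        along the thrust direction is below max_engine_speed; gravity (gx, gy)
        is never limited, so a gravity well can still push the ship past it."""
        tx, ty = math.cos(angle), math.sin(angle)      # thrust direction (unit vector)
        along = vx * tx + vy * ty                      # current speed along that direction
        if along < max_engine_speed:                   # engines can still add speed
            vx += tx * thrust * dt
            vy += ty * thrust * dt
        vx += gx * dt                                  # external forces, unclamped
        vy += gy * dt
        return vx, vy
    ```

    Because only the velocity component along the thrust direction is checked, thrusting against the current direction of travel still slows the ship down, which matches the behaviour described above.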

    Read the article

  • Convert PDF to PNG using ImageMagick

    - by StackOverflowNewbie
    Using ImageMagick, what command should I use to convert a PDF to PNG? I need the highest quality and the smallest file size. This is what I have so far (very slow, by the way): convert -density 300 -depth 8 -quality 85 a.pdf a.png Looking at what Gmail does when a user views a PDF, the quality is excellent and the file size minimal, yet the DPI is just 96 (I have to set a density of 300 to get anything decent). Anyone know how Gmail does it? Thanks.
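
    A common trick (an assumption about what Gmail-style renderers do, not something the question confirms) is to rasterize at a high density and then downsample, which gives anti-aliased ~96 DPI output without the blockiness of rendering at 96 DPI directly; a sketch, with the 25% factor just an example to tune:

    ```sh
    # Render at 300 DPI for quality, then scale down to 25% so the final image
    # is effectively ~75-96 DPI but anti-aliased; -flatten drops transparency
    # so white shows instead of black in viewers that ignore alpha.
    convert -density 300 a.pdf -flatten -resize 25% -depth 8 a.png
    ```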

    Read the article

  • How do I get real integer overflows in MATLAB/Octave?

    - by marvin2k
    I'm working on a verification tool for some VHDL code in MATLAB/Octave. For that I need data types which generate "real" overflows: intmax('int32') + 1 ans = -2147483648 Later on, it would be helpful if I could define the bit width of a variable, but that is not so important right now. When I build a C-like example, where a variable gets incremented until it is smaller than zero, it spins forever and ever (MATLAB saturates integer types instead of wrapping): test = int32(2^30); while (test > 0) test = test + int32(1); end Another approach I tried was a custom "overflow" routine which was called every time a number changed. That approach was painfully slow, not practicable, and did not work in all cases. Any suggestions?
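
    A minimal sketch of one workaround (my own, not from the question): do the arithmetic in doubles and wrap explicitly with mod, which also makes the bit width a parameter; this is exact as long as intermediate values stay well below 2^53.

    ```matlab
    function y = wrap_signed(x, nbits)
      % Wrap x into the two's-complement range of an nbits-wide integer.
      % Arithmetic is done in doubles, so it is exact while |x| < 2^53.
      m = 2^nbits;
      y = mod(x + m/2, m) - m/2;
    end

    % Example (32-bit wraparound, like C):
    %   wrap_signed(double(intmax('int32')) + 1, 32)   % -> -2147483648
    ```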

    Read the article

  • Distributed datastore

    - by Julien Genestoux
    We're trying to add some kind of persistence to our app. The app generates about 250 entries per second. Each of these entries belongs to one of 2M files. For each file, we want to keep the last 10 entries, so we can look them up later. The way our client application works: it gets a stream of all the data, fetches the right file (GET), adds the new content, and saves the file back (PUT). We're looking for an efficient way to store this data that can scale horizontally, as the amount of data we're getting is doubling every few weeks. We initially looked at S3. It works fine, but becomes very expensive very fast ($1000 monthly just in PUT operations!). We then gave Riak a shot, but it seems we can't get more than 60 writes/sec on each node, which is very, very slow. Any other solutions out there?
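
    For reference, a minimal sketch of the read-modify-write the client performs per entry (the store interface here is hypothetical; the point is just the keep-last-10 pattern that any candidate datastore has to sustain at ~250 writes/sec):

    ```python
    def record_entry(store, file_id, entry, keep=10):
        """Fetch the file's entry list, append the new entry, trim to the
        last `keep` items, then write it back. `store` is any key-value
        client exposing get()/put() (hypothetical interface)."""
        entries = store.get(file_id) or []   # GET
        entries.append(entry)                # add the new content
        entries = entries[-keep:]            # keep only the last 10
        store.put(file_id, entries)          # PUT
        return entries
    ```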

    Read the article

  • jQuery - Add click event to Div and go to first link found

    - by LuckyShot
    Hi guys, I think I've spent too much time looking at this function and just got stuck trying to figure out the nice clean way to do it. It's a jQuery function that adds a click event to any div with a 'click' CSS class found in the page; when clicked, it redirects the user to the first link found in the div. function clickabledivs() { $('.click').each( function (intIndex) { $(this).bind("click", function(){ window.location = $( "#"+$(this).attr('id')+" a:first-child" ).attr('href'); }); } ); } The code works, although I'm pretty sure there is a better way to accomplish it; in particular the selector I am using, $( "#"+$(this).attr('id')+" a:first-child" ), looks too long and slow. Any ideas? Please let me know if you need more details. Thanks! PS: I've got a jQuery benchmarking reference from Projekt2k.de here: http://blog.projekt2k.de/2010/01/benchmarking-jquery-1-4/
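
    A sketch of a shorter version (my own, not from the question): inside the handler, this is already the clicked div, so the link can be found relative to it without going back through the id, and the .each() loop isn't needed because bind() already applies to every matched element.

    ```js
    // Sketch: bind one click handler to every .click div and jump to the
    // first <a> inside the clicked div. No id lookup required.
    function clickabledivs() {
        $('.click').bind('click', function () {
            window.location = $(this).find('a').first().attr('href');
        });
    }
    ```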

    Read the article

  • Question on dynamic URL parsing

    - by jerebear
    I see many, many sites that have URLs for individual pages such as http://www.mysite.com/articles/this-is-article-1 and http://www.mysite.com/galleries/575, and they don't redirect and don't run slowly... I know how to parse URLs; that's easy enough. But in my mind, that seems slow and cumbersome on a dynamic site. As well, if the pages are all statically built (hence the custom URL), then that means all components of the page are static as well... (which would be bad). I'd love to hear some ideas about how this is typically accomplished.
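
    The usual mechanism (a general sketch, not from the question) is URL rewriting: the web server maps the pretty URL onto a dynamic front controller, which looks the slug or id up in the database, so nothing is statically built. For example, with Apache mod_rewrite (the script and parameter names here are hypothetical):

    ```apache
    # Route pretty URLs to dynamic scripts; the scripts query the database
    # by slug or id, so the pages themselves stay fully dynamic.
    RewriteEngine On
    RewriteRule ^articles/([a-z0-9-]+)/?$  article.php?slug=$1 [L,QSA]
    RewriteRule ^galleries/([0-9]+)/?$     gallery.php?id=$1   [L,QSA]
    ```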

    Read the article

  • writing a fast parser in python

    - by panzi
    I've written a hand-rolled recursive pure-Python parser for a file format (ARFF) we use in one lecture. Running my exercise submission is now awfully slow, and it turns out by far the most time is spent in my parser. It's consuming a lot of CPU time; the HD is not the bottleneck. I wonder what performant ways there are to write a parser in Python? I'd rather not rewrite it in C. I tried Jython, but that decreased performance a lot! The files I parse are partially huge (150 MB) with very long lines. My current parser only needs a look-ahead of one character. I'd post the source here, but I don't know if that's such a good idea; after all, the submission deadline has not ended yet. But then, the focus of this exercise is not the parser: you can choose whatever language you want to use, and there already is a parser for Java.
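
    One thing that usually helps in pure Python (a general suggestion, not from the question): since ARFF is line-oriented outside of quoted values, push the inner loop down into built-in string methods or the C-implemented csv module instead of consuming one character at a time. A rough sketch of the idea:

    ```python
    import csv

    def parse_arff(path):
        """Very rough ARFF reader sketch: header lines are split with string
        methods, data rows are handed to the csv module instead of a
        per-character Python loop. Sparse rows and quoting edge cases are
        not handled here."""
        attributes, data = [], []
        in_data = False
        with open(path, newline='') as fh:
            for line in fh:
                line = line.strip()
                if not line or line.startswith('%'):          # comments / blanks
                    continue
                if not in_data:
                    lower = line.lower()
                    if lower.startswith('@attribute'):
                        attributes.append(line.split(None, 2)[1])
                    elif lower.startswith('@data'):
                        in_data = True
                else:
                    data.append(next(csv.reader([line])))     # one row per line
        return attributes, data
    ```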

    Read the article

  • importing content into drupal.

    - by anru
    I have a WordPress site with 5k posts, and each post has on average 25 comments, so 125k total nodes have to be added. I need to import those posts and comments into Drupal 6. I have written a script to import them via Drupal's cron service, but the cron run keeps timing out, because importing 125k nodes one by one is very slow. What can I do to improve Drupal's import speed? I am using Drupal's built-in node_save() and comment_save() functions to do it; I have not found a way to use customized SQL queries to increase import speed yet. I execute my script through Drupal's cron.php, which means that even though I have set 'max_execution_time' to unlimited, that only affects PHP; the Apache server has its own timeout setting.
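
    A minimal sketch of one way to stay under the timeout (my own suggestion, not from the question): import a small batch per cron run and remember the offset between runs. node_save(), variable_get() and variable_set() are Drupal 6 APIs; wordpress_fetch_posts() and the batch size are hypothetical.

    ```php
    <?php
    // Sketch: resume the import a little at a time on each cron run.
    function mymodule_cron() {
      $offset = variable_get('mymodule_import_offset', 0);
      $batch  = 200; // small enough to finish well inside one cron run
      $posts  = wordpress_fetch_posts($offset, $batch); // hypothetical source reader
      foreach ($posts as $post) {
        $node = new stdClass();
        $node->type  = 'story';
        $node->title = $post['title'];
        $node->body  = $post['body'];
        node_save($node);
      }
      variable_set('mymodule_import_offset', $offset + count($posts));
    }
    ```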

    Read the article

  • Penalty of using QGraphicsObject vs QGraphicsItem?

    - by Dutch
    I currently have a hierarchy of items based on QGraphicsItem. I want to move to QGraphicsObject instead so that I can put properties on my items. I will not be making use of signals/slots or any other features of QObject. I'm told that you shouldn't derive from QObject because it's "heavy" and "slow". To test the impact, I derived from QGraphicsObject, added a couple of properties to my items, and looked at the memory usage of the running app. I created 1000 items using both flavors and didn't notice more than about 10 KB of additional memory usage. Since all I am adding to my items are properties, is it safe to say that QObject only adds weight if you are using signals/slots?
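
    For context, a minimal sketch of the kind of change being described (the class and property names are made up); QGraphicsObject already carries the Q_OBJECT machinery, so only the Q_PROPERTY declarations are added on top of the usual QGraphicsItem overrides:

    ```cpp
    #include <QGraphicsObject>
    #include <QPainter>
    #include <QString>

    // Sketch: a QGraphicsObject-derived item exposing two properties.
    // No signals or slots are declared; only property metadata is added.
    class NodeItem : public QGraphicsObject
    {
        Q_OBJECT
        Q_PROPERTY(int nodeId READ nodeId WRITE setNodeId)
        Q_PROPERTY(QString label READ label WRITE setLabel)

    public:
        int nodeId() const { return m_nodeId; }
        void setNodeId(int id) { m_nodeId = id; }
        QString label() const { return m_label; }
        void setLabel(const QString &text) { m_label = text; }

        QRectF boundingRect() const { return QRectF(0, 0, 32, 32); }
        void paint(QPainter *painter, const QStyleOptionGraphicsItem *, QWidget *)
        { painter->drawRect(boundingRect()); }

    private:
        int m_nodeId = 0;
        QString m_label;
    };
    ```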

    Read the article

  • svn commit is hung at start of commit

    - by jwhitlock
    I'm committing a large changeset, including a large binary file (180 MB), over a slow VPN connection. It looks for all the world like it is stalled. How can I diagnose where it is stuck? The output is: $ svn commit -m "My commit message" Connecting to deprecated signal QDBusConnectionInterface::serviceOwnerChanged(QString,QString,QString) Local Subversion is 1.6.9 on Linux, KDE 4.3, and svn status shows: ML . L ws M ws/manage.py L ws/locales L ws/locales/ja_JP L ws/locales/ja_JP/LC_MESSAGES The process isn't using much of any resources. The server is Linux, served by Apache and mod_dav_svn, with the same Subversion 1.6.9. I can't see any process that is handling the commit.

    Read the article

  • Piping SoX in Python - subprocess alternative?

    - by Cochise Ruhulessin
    I use SoX in an application. The application uses it to apply various operations to audio files, such as trimming. This works fine: from subprocess import Popen, PIPE kwargs = {'stdin': PIPE, 'stdout': PIPE, 'stderr': PIPE} pipe = Popen(['sox','-t','mp3','-', 'test.mp3','trim','0','15'], **kwargs) output, errors = pipe.communicate(input=open('test.mp3','rb').read()) if errors: raise RuntimeError(errors) This will cause problems on large files however, since read() loads the complete file into memory; that is slow and may cause the pipe's buffer to overflow. A workaround exists: from subprocess import Popen, PIPE import tempfile import uuid import shutil import os kwargs = {'stdin': PIPE, 'stdout': PIPE, 'stderr': PIPE} tmp = os.path.join(tempfile.gettempdir(), uuid.uuid1().hex + '.mp3') pipe = Popen(['sox','test.mp3', tmp,'trim','0','15'], **kwargs) output, errors = pipe.communicate() if errors: raise RuntimeError(errors) shutil.copy2(tmp, 'test.mp3') os.remove(tmp) So the question stands as follows: are there any alternatives to this approach, aside from writing a Python extension to the SoX C API?
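
    One more option worth noting (my suggestion, not from the question): subprocess accepts real file objects for stdin/stdout, so SoX can stream through the pipes itself without Python ever buffering the whole file; a sketch, writing to a separate output file:

    ```python
    from subprocess import Popen, PIPE

    # Sketch: let SoX stream straight from/to file handles, so Python never
    # holds the whole file in memory. Paths follow the question's example.
    with open('test.mp3', 'rb') as src, open('trimmed.mp3', 'wb') as dst:
        pipe = Popen(['sox', '-t', 'mp3', '-', '-t', 'mp3', '-', 'trim', '0', '15'],
                     stdin=src, stdout=dst, stderr=PIPE)
        _, errors = pipe.communicate()
        if pipe.returncode != 0:
            raise RuntimeError(errors)
    ```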

    Read the article

  • Will pool the connection help threading in sqlite (and how)?

    - by mamcx
    I currently use a singleton to access my database (see related question), but now when I try to add some background processing everything falls apart. I read the SQLite docs and found that SQLite can work thread-safe, but each thread must have its own db connection. I tried using EGODatabase, which promises a SQLite wrapper with thread safety, but it is very buggy, so I returned to my old FMDB library and started to look at how to use it in a multi-threaded way. Because all my code is built around the singleton idea, changing everything would be expensive (and lots of open/close connections could become slow), so I wonder if, as the SQLite docs hint, building a pool of connections would help. If that is the case, how do I make it? How do I know which connection to take from the pool (because 2 threads can't share a connection)? I wonder if somebody has already used SQLite in multi-threading with NSOperation or similar; my searching only returns "yeah, it's possible" but leaves the details to my imagination...
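
    A minimal sketch of the simplest variant of that idea (one connection per thread rather than a shared pool); FMDatabase's databaseWithPath: and open are FMDB methods, while databaseForCurrentThread and the databasePath property are hypothetical names of mine:

    ```objc
    // Sketch: lazily create one FMDatabase per thread and cache it in the
    // thread's own dictionary, so no two threads ever share a connection.
    - (FMDatabase *)databaseForCurrentThread {
        NSMutableDictionary *threadDict = [[NSThread currentThread] threadDictionary];
        FMDatabase *db = [threadDict objectForKey:@"appDatabase"];
        if (db == nil) {
            db = [FMDatabase databaseWithPath:self.databasePath];
            [db open];
            [threadDict setObject:db forKey:@"appDatabase"];
        }
        return db;
    }
    ```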

    Read the article

  • How can I make a steady automata in JavaScript?

    - by RobertWHurst
    I'm working on a JavaScript game and I've got an automata system controlling game time and sprite animation, as well as helping the pathfinding system with timing and such. My problem is that on slow browsers the JavaScript loop I'm using for counting time is not very accurate; it tends to jump around a lot. Is there a way to force the loop to run consistently at 30 fps? The automata can drop frames if it needs to catch up, so if the solution requires dropping frames that's OK.
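
    The usual answer (a general pattern, not specific to the question's code) is a fixed-timestep accumulator: measure real elapsed time each tick and run as many 1/30 s updates as fit, so slow browsers drop rendered frames but the simulation clock stays steady. A sketch, where update() and render() are assumed to exist in the game:

    ```js
    // Sketch: fixed 30 fps logic updates driven by wall-clock time.
    // update() advances the automata by exactly one frame; render() draws.
    var STEP = 1000 / 30;           // one logic frame in ms
    var last = Date.now();
    var accumulator = 0;

    setInterval(function () {
        var now = Date.now();
        accumulator += now - last;  // real time elapsed since the last tick
        last = now;

        // Run however many whole frames fit; on slow ticks this loops more
        // than once, which is the "drop rendered frames to catch up" case.
        while (accumulator >= STEP) {
            update();               // advance game time by 1/30 s
            accumulator -= STEP;
        }
        render();
    }, STEP);
    ```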

    Read the article

  • removing duplicate strings from a massive array in java efficiently?

    - by Preator Darmatheon
    I'm considering the best possible way to remove duplicates from an (unsorted) array of strings - the array contains millions or tens of millions of strings. The array is already prepopulated, so the optimization goal is only removing dups, not preventing dups from populating it in the first place. I was thinking along the lines of doing a sort and then a binary search to get a log(n) search instead of an n (linear) search. That would give me n log n + n searches which, although better than an unsorted (n^2) search, still seems slow. (I was also considering hashing, but am not sure about the throughput.) Please help! I'm looking for an efficient solution that addresses both speed and memory, since there are millions of strings involved, without using the Collections API!
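
    A sketch of one way to do it within those constraints (arrays only, no Collections framework), with one note on the estimate above: once the array is sorted, duplicates are adjacent, so a single linear pass suffices and no binary search is needed.

    ```java
    import java.util.Arrays;

    public class Dedup {
        /** Sorts the array in place, then compacts unique strings to the front.
         *  Returns the new logical length; O(n log n) time and no extra array
         *  beyond the sort. Uses only arrays, no Collections API. */
        static int removeDuplicates(String[] a) {
            if (a.length == 0) return 0;
            Arrays.sort(a);                       // equal strings become adjacent
            int write = 1;                        // a[0] is always kept
            for (int read = 1; read < a.length; read++) {
                if (!a[read].equals(a[write - 1])) {
                    a[write++] = a[read];         // keep first occurrence of each value
                }
            }
            return write;                         // entries past 'write' are leftovers
        }
    }
    ```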

    Read the article

  • Agile and code release

    - by ring bearer
    Do you know of any agile process designed for code releases? One of the main themes of agile is frequent releases, yet each company/client has its own test/approval processes that control code releases, and most of the time these slow down the pace of "frequent releases". Currently we have a proprietary tool-based workflow: the team that needs a code promotion creates a promotion request to one of the final UAT servers; once this is complete and tests are done, certain customers and technical/non-technical managers need to approve, and then it goes into the production deploy stage. Meanwhile there is no sprint planning meeting or anything of that sort. What code release process (that is still agile) has worked for you?

    Read the article

  • Use jQuery to show a div only when scroll position is between 2 points

    - by Rik
    Hi, I'm trying to work out how to get a div (#tips) to appear when the user scrolls into the 2nd quarter of its containing div's height (#wrap), and then have it disappear when the user scrolls into the last quarter. So it would be like this: 1st quarter - #tips is hidden; 2nd quarter - #tips is visible; 3rd quarter - #tips is visible; 4th quarter - #tips is hidden. I'm almost completely new to jQuery, but what I've got so far is this: function addKeyboardNavigation(){ // get the height of #wrap var $wrapHeight = $('#wrap').outerHeight() // get 1/4 of wrapHeight var $quarterwrapHeight = ($wrapHeight)/4 // get 3/4 of wrapHeight var $threequarterswrapHeight = 3*($wrapHeight)/4 // check if we're over a quarter down the page if( $(window).scrollTop() > $quarterwrapHeight ){ // if we are show keyboardTips $("#tips").fadeIn("slow"); } } This is where I get confused. How can I check if the scroll position is > $quarterwrapHeight but < $threequarterswrapHeight? To make it run I've been using: // Run addKeyboardNavigation on scroll $(window).scroll(function(){ addKeyboardNavigation(); }); Any help or suggestions would be greatly appreciated! Thanks.
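
    A sketch of the two-sided check (the structure follows the question's own function; the fadeOut branch for the first and last quarters is my addition):

    ```js
    // Sketch: show #tips only while the scroll position sits between 1/4 and
    // 3/4 of #wrap's height; hide it in the first and last quarters.
    function addKeyboardNavigation() {
        var wrapHeight = $('#wrap').outerHeight();
        var quarter = wrapHeight / 4;
        var threeQuarters = 3 * wrapHeight / 4;
        var y = $(window).scrollTop();

        if (y > quarter && y < threeQuarters) {
            $('#tips').fadeIn('slow');
        } else {
            $('#tips').fadeOut('slow');
        }
    }

    $(window).scroll(addKeyboardNavigation);
    ```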

    Read the article

  • PostgreSQL 8.3 data types: xml vs varchar

    - by Sejanus
    There's an xml data type in Postgres; I've never used it before, so I'd like to hear opinions. What are the downsides and upsides versus using a regular varchar (or text) column to store XML? The text I'm going to store is XML, well-formed, UTF-8. There's no need to search by it (I've read that searching by xml is slow). This XML is actually data prepared for PDF generation with Apache FOP. The XML can be generated dynamically from data found elsewhere (other Postgres tables); it's stored as-is only so that I won't need to generate it twice - kind of a second backup for already generated PDF documents. Anything else to know? Good practices, performance, maintenance, etc.?
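
    For illustration, a minimal sketch of the xml-column variant (table and column names are mine); the practical difference from text/varchar is that Postgres checks well-formedness whenever a value is parsed into the xml type:

    ```sql
    -- Sketch: store the FOP input in an xml column; malformed documents are
    -- rejected at insert time, which a plain text/varchar column won't do.
    CREATE TABLE fop_source (
        id        serial PRIMARY KEY,
        created   timestamptz NOT NULL DEFAULT now(),
        doc       xml NOT NULL
    );

    INSERT INTO fop_source (doc)
    VALUES (XMLPARSE(DOCUMENT '<invoice><number>42</number></invoice>'));
    ```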

    Read the article

  • How to force oracle to use index range scan?

    - by wsb3383
    Hi, all. I have a series of extremely similar queries that I run against a table of 1.4 billion records (with indexes). The only problem is that at least 10% of those queries take 100x more time to execute than the others. I ran an explain plan and noticed that for the fast queries (roughly 90%) Oracle is using an index range scan (on my created index), while for the slow ones it's using a full index scan. Is there a way to force Oracle to do an index range scan? Thanks!
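
    One standard lever (general Oracle SQL, not from the question, and the table/index names are placeholders) is an INDEX hint naming the index behind the range scan; note that with the rule-based optimizer, adding a hint effectively switches that statement to the cost-based optimizer, so the resulting plan is worth re-checking.

    ```sql
    -- Sketch: ask for a specific index via a hint. "orders" and
    -- "orders_created_idx" are placeholders; substitute the real alias
    -- and index name, then confirm the plan with EXPLAIN PLAN.
    SELECT /*+ INDEX(o orders_created_idx) */
           o.id, o.created, o.status
    FROM   orders o
    WHERE  o.created >= DATE '2010-01-01'
    AND    o.created <  DATE '2010-02-01';
    ```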

    Read the article

  • Can SVG render partially if gzipped and chunk-transferred?

    - by Scott Stafford
    Hi - I have some large, dynamically generated SVGs that are being served over a relatively slow internet connection. I'm trying to optimize them to be viewable as fast as possible. If I set the server to Content-Encoding: gzip and Transfer-Encoding: chunked, will any SVG viewers take advantage of that and render the image partially, as it is transferred? If not, are there other ways to get it to render as it streams? I could break it up into several SVG pieces, but that would be a lot of work; I was hoping for server settings... The most common users use IE7 with the Adobe SVG Viewer plugin. I doubt it matters, but I'm serving with C#/ASP.NET and IIS6.

    Read the article

  • Will a MySQL query run slower if one of the tables involved has no index defined?

    - by lock
    There's this already-populated database which came from another dev. I'm not sure what went on in that dev's mind when he created the tables, but in one of our scripts there is this query involving 4 tables and it runs super slow: SELECT a.col_1, a.col_2, a.col_3, a.col_4, a.col_5, a.col_6, a.col_7 FROM a, b, c, d WHERE a.id = b.id AND b.c_id = c.id AND c.id = d.c_id AND a.col_8 = '$col_8' AND d.g_id = '$g_id' AND c.private = '1' NOTE: $col_8 and $g_id are variables from a form. It's only my theory that it's due to tables b and c not having an index, although I'm guessing the dev didn't think it was necessary since those tables only express relations between a and d (b says that the data in a belongs to a certain user, and c says that the user belongs to a group in d). As you can see, there's not even a JOIN or other extensive query feature used, but this query, which returns only around 100 rows, takes 2 minutes to execute. Anyway, my question is simply this post's title: will a MySQL query run slower if one of the tables involved has no index defined?
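
    In general, yes - joining on unindexed columns forces MySQL to scan the joined tables for every candidate row. A sketch of the usual first step (column names are taken from the query; verify with EXPLAIN before and after, and skip any column that already has a primary key):

    ```sql
    -- Sketch: index the columns used in the join/WHERE conditions so each
    -- lookup becomes an index probe instead of a table scan.
    ALTER TABLE b ADD INDEX idx_b_id   (id),
                  ADD INDEX idx_b_c_id (c_id);
    ALTER TABLE c ADD INDEX idx_c_id   (id);
    ALTER TABLE d ADD INDEX idx_d_c_id (c_id),
                  ADD INDEX idx_d_g_id (g_id);

    EXPLAIN SELECT a.col_1 FROM a, b, c, d
    WHERE a.id = b.id AND b.c_id = c.id AND c.id = d.c_id
      AND a.col_8 = '...' AND d.g_id = '...' AND c.private = '1';
    ```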

    Read the article

  • Will more CPUs/cores help with VS.NET build times?

    - by LoveMeSomeCode
    I was wondering if anyone knew whether Visual Studio .NET has a parallel build process or not. I have a solution with lots of projects; every project has lots of markup/code, lots of types, etc. Just sitting there with IntelliSense on runs it up to about 700MB. But the build times are really slow and only seem to max out one of my two CPU cores. Does this mean the build process is single-threaded? My solution's build dependency chain isn't linear, so I don't see why it couldn't build some of the projects in parallel. I remember Joel Spolsky blogging about his new SSD and how it didn't help with compile times, but he didn't mention which compiler he was using. We're using VS 2005. Anyone know how its compilation works? And is it any different/better in 2008/2010?

    Read the article

  • Using Spotlight as the "database" of an application

    - by vicvicvic
    I'm developing an OS X application to organize "things" (as iTunes is to music and iPhoto is to photos). Instead of having my own database and index, I'm considering using Spotlight to essentially serve this purpose. Has anyone tried this? Is it wise? The main benefit, as I see it, would be simplicity and avoiding redundancy; it seems a bit wasteful to implement my own index machinery when OS X comes with one built in. I have little experience working with Spotlight, however. From a user's perspective, I do know that it has been slow and imprecise in older versions of OS X. I also have a gut feeling that since it's aimed at searching the whole filesystem, using it for "local" purposes becomes hackish. Obviously, my application's index needs to be constantly up-to-date. Can mdimport be used for this?

    Read the article

  • Oracle EXECUTE IMMEDIATE changes explain plan of query.

    - by Gunny
    I have a stored procedure that I am calling using EXECUTE IMMEDIATE. The issue that I am facing is that the explain plan is different when I call the procedure directly vs when I use EXECUTE IMMEDIATE to call the procedure. This is causing the execution time to increase 5x. The main difference between the plans is that when I use execute immediate the optimizer isn't unnesting the subquery (I'm using a NOT EXISTS condition). We are using Rule Based Optimizer here at work. Example: Fast: begin package.procedure; end; / Slow: begin execute immediate 'begin package.' || proc_name || '; end;'; end; /

    Read the article
