Search Results

Search found 6587 results on 264 pages for 'slow motion'.


  • Looking for fast, minimal, preferably free disc cloning software [closed]

    - by Dave
    We have to test our application's installation and functionality on many Windows versions and languages (XP, Vista, Win7; English, Spanish, Portuguese, etc.; 32-bit & 64-bit). While we can do much of this in virtual machines, we have noticed that VMs sometimes hide problems or raise false bugs, so we need to do "bare metal" OS installation for much of our testing.

    I have been using Acronis True Image for the past year and am not impressed. It often gives random errors that require a reboot, and it is really slow. For example, when restoring an image it goes through a "Locking partition" cycle about three times (once after you click OK on each step of the wizard), each of which can take 5 minutes to complete. This all happens BEFORE it actually starts the image copy, which is sometimes quick (3-5 minutes), sometimes long (hours). The sizes of all of our images are roughly the same, so that is not the cause.

    So, anyway, I'm looking to switch to something else. I only need very basic functionality: creating images of entire discs, and then restoring those images onto the exact same hard drive at a later date. That's it. I'm not opposed to paying for a good piece of software, but if there is something free out there that does the job well, that would be a preference. The OS the imaging software would run on is Windows Vista, but bootable media (into a Linux flavor) would be fine too, as long as it's quick to use and reliable. Recommendations?

    (Also, moderators, if this should be a CW, I'll be happy to mark it as such; I'm unclear about the rules there.)

    Read the article

  • Best way to handle Many-to-Many relationships in PHP MySQL

    - by Jayrox
    I am looking for the best way to handle a database of many-to-many relationships in PHP and MySQL. Right now I have 2 tables:

        Users (id, user_name, first_name, last_name)
        Connections (id_1, id_2)

    In the Users table, id is auto-incremented on insert and user_name is unique but can be changed. Unfortunately, I don't have control over user_name and its ability to be changed, but I must account for it. The Connections table simply holds user1's and user2's ids. It needs to account for these possible relations:

        user1 --> user2   (user1 is friends with user2, but user2 is not friends with user1)
        user2 --> user1   (user2 is friends with user1, but user1 is not friends with user2)
        user1 <--> user2  (user1 and user2 are mutually friends)
        user1 <-!-> user2 (user1 and user2 are not friends)

    That part is not the problem. The problem is keeping these relations unique when they change in batches. Possible solution 1: delete all of user1's relations and re-add them from the updated list; I think this might be too slow for my needs. Solution 2? Has anyone else encountered this problem? How should I best handle it?

    Update, on distinguishing relationships: I handle relationships like this:

        user1, user2
        user1, user3
        user2, user1

    In that example the following is true: user1 follows user2 and user3; user2 only follows user1 and doesn't follow user3; user3 doesn't follow either user1 or user2.
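
    A minimal sketch (table and column names are assumptions, not from the post) of one way to keep directed relations unique while still updating in batches: let the database enforce uniqueness with a composite primary key, so a batch update becomes one INSERT plus one DELETE rather than delete-everything-and-re-add.

        -- Each directed relation can exist only once.
        CREATE TABLE connections (
            follower_id INT NOT NULL,
            followed_id INT NOT NULL,
            PRIMARY KEY (follower_id, followed_id)
        );

        -- Add any new relations; existing ones are skipped, not duplicated.
        INSERT IGNORE INTO connections (follower_id, followed_id)
        VALUES (1, 2), (1, 3);

        -- Drop the relations user 1 no longer has, in one statement.
        DELETE FROM connections
        WHERE follower_id = 1 AND followed_id NOT IN (2, 3);

    Because only ids are stored, user_name changes never touch this table.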

    Read the article

  • Optimization in Python - do's, don'ts and rules of thumb.

    - by JV
    Well, I was reading this post and then I came across this code:

        jokes = range(1000000)
        domain = [(0, (len(jokes)*2)-i-1) for i in range(0, len(jokes)*2)]

    I thought: wouldn't it be better to calculate the value of len(jokes) once, outside the list comprehension? Well, I tried it and timed three versions:

        jv@Pioneer:~$ python -m timeit -s 'jokes=range(1000000);domain=[(0,(len(jokes)*2)-i-1) for i in range(0,len(jokes)*2)]'
        10000000 loops, best of 3: 0.0352 usec per loop
        jv@Pioneer:~$ python -m timeit -s 'jokes=range(1000000);l=len(jokes);domain=[(0,(l*2)-i-1) for i in range(0,l*2)]'
        10000000 loops, best of 3: 0.0343 usec per loop
        jv@Pioneer:~$ python -m timeit -s 'jokes=range(1000000);l=len(jokes)*2;domain=[(0,l-i-1) for i in range(0,l)]'
        10000000 loops, best of 3: 0.0333 usec per loop

    Observing the marginal difference of 2.55% between the first and the second made me wonder: is the first list comprehension optimized internally by Python, or is 2.55% a big enough win, given that len(jokes) = 1000000? Which leads to: what are the other implicit/internal optimizations in Python, and what are developers' rules of thumb for optimization in Python?

    Edit 1: Since most of the answers are "don't optimize; do it later if it's slow", and I got some tips and links from Triptych and Ali A for the do's, I'll change the question a bit and ask for the don'ts. Can we have some experiences from people who faced real slowness, what the problem was, and how it was corrected?

    Edit 2: For those who haven't, here is an interesting read.

    Edit 3: There is incorrect usage of timeit in the question; please see dF's answer for the correct usage, and hence the correct timings for the three versions.
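
    A minimal sketch of the timeit mistake Edit 3 refers to: everything after -s is setup and runs only once, so the commands above were timing an empty statement. Moving the comprehension into the statement argument times the real work:

        import timeit

        setup = "jokes = range(1000000)"
        stmt = "[(0, (len(jokes)*2)-i-1) for i in range(0, len(jokes)*2)]"

        # number=1: each run builds a 2,000,000-element list, so keep it small.
        print(timeit.timeit(stmt, setup=setup, number=1))

        hoisted = "l = len(jokes)*2; [(0, l-i-1) for i in range(0, l)]"
        print(timeit.timeit(hoisted, setup=setup, number=1))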

    Read the article

  • MonoTouch threads, GC, WCF

    - by cvista
    Hi,

    This is a question about best practices, I guess, but it applies directly to my current MonoTouch project. I'm using WCF services to communicate with the server. To do this I do the following:

        services.MethodToCall(params);

    and the async:

        services.OnMethodToCallCompleted += delegate {
            // do stuff and ting
        };

    This can lead to issues if you're not careful, in that variables defined within the scope of the async callback can sometimes be cleaned up by the GC, and this can cause crashes. So I am making it a practice to declare these outside of the scope of the callback unless I am 100% sure they are not needed.

    Now, when "doing stuff and ting" implies changing the UI, I wrap it all in an InvokeOnMainThread call. I guess wrapping everything in this would slow the main thread down and defeat the point of having multiple threads. Even though I'm being careful about all this, I am still getting crashes and I have no idea why! I am certain it has something to do with threads and scope.

    The only thing I can think of, outside of updating the UI, that may need to happen inside of InvokeOnMainThread is that I have a singleton Database class, based on the version 5 code from this thread: http://www.yoda.arachsys.com/csharp/singleton.html. So now, if the service method returns data that needs to be added or updated in the Database class, I also wrap that inside an InvokeOnMainThread call. Still getting random crashes.

    So my question is this: I am new to thick-client dev; I'm coming from a web dev perspective, where we don't need to worry about threads so much :) Aside from what I have mentioned, are there any other things I should be aware of? Is the above stuff correct? Or am I misunderstanding something?

    Cheers
    w://
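
    A minimal sketch of the pattern described (service, event and control names are assumptions): keep the handler rooted in a field so the GC cannot collect it while the call is in flight, and keep the InvokeOnMainThread block down to the UI work alone.

        // Held in a field so the delegate stays reachable for the GC
        // for as long as the async call can still fire it.
        EventHandler completedHandler;

        void CallService()
        {
            completedHandler = delegate(object sender, EventArgs args)
            {
                string result = FetchResult(args);   // hypothetical helper, worker thread
                InvokeOnMainThread(delegate
                {
                    statusLabel.Text = result;       // only UI work on the main thread
                });
            };
            services.OnMethodToCallCompleted += completedHandler;
            services.MethodToCall();
        }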

    Read the article

  • Project Euler problem 45

    - by Peter
    Hi, I'm not yet a skilled programmer, but I thought this was an interesting problem and I thought I'd give it a go. The task description:

        Triangle, pentagonal, and hexagonal numbers are generated by the following formulae:
        Triangle   T(n) = n(n+1)/2    1, 3, 6, 10, 15, ...
        Pentagonal P(n) = n(3n-1)/2   1, 5, 12, 22, 35, ...
        Hexagonal  H(n) = n(2n-1)     1, 6, 15, 28, 45, ...
        It can be verified that T(285) = P(165) = H(143) = 40755.
        Find the next triangle number that is also pentagonal and hexagonal.

    I know that hexagonal numbers are a subset of triangle numbers, which means I only have to find a number where H(n) = P(n). But I can't seem to get my code to work. I only know the Java language, which is why I'm having trouble finding a solution on the net somewhere. Anyway, I hope someone can help. Here's my code:

        public class NextNumber {
            public NextNumber() {
                next();
            }

            public void next() {
                int n = 144;
                int i = 165;
                int p = i * (3 * i - 1) / 2;
                int h = n * (2 * n - 1);
                while (p != h) {
                    n++;
                    h = n * (2 * n - 1);
                    if (h == p) {
                        System.out.println("the next triangular number is " + h);
                    } else {
                        while (h > p) {
                            i++;
                            p = i * (3 * i - 1) / 2;
                        }
                        if (h == p) {
                            System.out.println("the next triangular number is " + h);
                            break;
                        } else if (p > h) {
                            System.out.println("bummer");
                        }
                    }
                }
            }
        }

    I realize it's probably very slow and inefficient code, but that doesn't concern me much at this point. I only care about finding the next number, even if it would take my computer years :)

    Peter
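
    A hedged sketch of the same idea with two common pitfalls removed: the intermediate product i * (3 * i - 1) overflows int before the division, so the sequence values need to be long, and the two sequences are simpler to advance in step than in nested loops.

        // Walks H(n) and P(i) upward together until they meet past 40755.
        public class NextNumber {
            public static void main(String[] args) {
                long n = 144, i = 166;       // first candidates past H(143) = P(165)
                long h = n * (2 * n - 1);
                long p = i * (3 * i - 1) / 2;
                while (h != p) {
                    if (h < p) { n++; h = n * (2 * n - 1); }
                    else       { i++; p = i * (3 * i - 1) / 2; }
                }
                System.out.println("the next such number is " + h);
            }
        }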

    Read the article

  • UIButton and UIControlEventState issue

    - by Typeoneerror
    I'm having a very specific "bug" in my iPhone application. I'm setting two images for the highlighted and normal states of a button. It works as expected when you press and then touch up at a slow pace, but if you tap quickly there's a noticeable flicker between states. Is this a known bug, or am I setting the states incorrectly? Here's the code that creates the buttons:

        UIImage *normalImage = [[UIImage imageNamed:@"btn-small.png"]
            stretchableImageWithLeftCapWidth:10.0f topCapHeight:0.0f];
        UIImage *highlightedImage = [[UIImage imageNamed:@"btn-small-down.png"]
            stretchableImageWithLeftCapWidth:10.0f topCapHeight:0.0f];

        [self setBackgroundColor:[UIColor clearColor]];
        [self setBackgroundImage:normalImage forState:UIControlStateNormal];
        [self setBackgroundImage:highlightedImage forState:UIControlStateDisabled];
        [self setBackgroundImage:highlightedImage forState:UIControlStateHighlighted];
        [self setAdjustsImageWhenDisabled:FALSE];
        [self setAdjustsImageWhenHighlighted:FALSE];

    When a button is tapped, it simply disables itself and enables the other button:

        - (IBAction)aboutButtonTouched:(id)sender {
            aboutButton.enabled = FALSE;
            rulesButton.enabled = TRUE;
        }

        - (IBAction)rulesButtonTouched:(id)sender {
            rulesButton.enabled = FALSE;
            aboutButton.enabled = TRUE;
        }

    Any thoughts on this quick-tap flicker?
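
    One hedged workaround sketch, an assumption rather than a confirmed fix: defer the enabled flip to the next run-loop pass, so the highlighted-to-normal transition finishes drawing before the background image swaps.

        // afterDelay:0 queues the change behind the current touch
        // handling and redraw instead of racing it.
        - (IBAction)aboutButtonTouched:(id)sender {
            [self performSelector:@selector(showRules) withObject:nil afterDelay:0.0];
        }

        - (void)showRules {
            aboutButton.enabled = NO;
            rulesButton.enabled = YES;
        }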

    Read the article

  • PostgreSQL, Foreign Keys, Insert speed & Django

    - by Miles
    A few days ago, I ran into an unexpected performance problem with a pretty standard Django setup. For an upcoming feature, we have to regenerate a table hourly, containing about 100k rows of data, 9 MB on disk, 10 MB of indexes according to pgAdmin.

    The problem is that inserting them by whatever method literally takes ages, up to 3 minutes of 100% disk-busy time. That's not something you want on a production site. It doesn't matter whether the inserts are in a transaction or issued via plain INSERT, multi-row INSERT, COPY FROM or even INSERT INTO t1 SELECT * FROM t2.

    After noticing this wasn't Django's fault, I followed a trial-and-error route, and hey, the problem disappeared after dropping all foreign keys! Instead of 3 minutes, the INSERT INTO ... SELECT FROM took less than a second to execute, which isn't too surprising for a table <= 20 MB on disk. What is weird is that PostgreSQL manages to slow down inserts by 180x just by using 3 foreign keys.

    Oh, and disk activity was pure writing, as everything is cached in RAM; only writes go to the disks. It looks like PostgreSQL is working very hard to touch every row in the referenced tables, as 3 MB/sec * 180 s is way more data than the 20 MB this new table takes on disk. There was no WAL for the 180 s case; I was testing in psql directly (in Django, add ~50% overhead for WAL logging). I tried @commit_on_success, same slowness; I had even implemented multi-row insert and COPY FROM with psycopg2. That's another weird thing: how can 10 MB worth of inserts generate 10x 16 MB log segments?

    Table layout: id serial primary key, a bunch of int32, and 3 foreign keys to:

        small table, 198 rows, 16 kB on disk
        large table, 1.2M rows, 59 MB data + 89 MB index on disk
        large table, 2.2M rows, 198 MB data + 210 MB index on disk

    So, am I doomed to either drop the foreign keys manually or use the table in a very un-Django way by defining bla_id x3 and skipping models.ForeignKey? I'd love to hear about some magical antidote / pg setting to fix this.
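
    A hedged sketch (constraint and table names assumed) of the manual route mentioned at the end, scoped to the hourly rebuild so the constraints still hold the rest of the time:

        BEGIN;
        ALTER TABLE hourly_table DROP CONSTRAINT hourly_table_small_id_fkey;
        -- ...same for the other two foreign keys...

        INSERT INTO hourly_table SELECT * FROM staging_table;

        -- Re-adding validates all rows in one pass instead of doing one
        -- referenced-table lookup per inserted row.
        ALTER TABLE hourly_table
            ADD CONSTRAINT hourly_table_small_id_fkey
            FOREIGN KEY (small_id) REFERENCES small_table (id);
        COMMIT;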

    Read the article

  • Implementing an Excel 2003 COM Add-in UDF in the async programming model using C# (VS 2005)

    - by Venu
    Hi: I am trying to implement a UDF in an Excel COM add-in (2003) with Visual Studio 2005 in C#. I would like to implement the UDF using async programming, as the UDF is a slow operation whose results are fetched from a server. As an illustration (not a real-world example), the following UDF works fine without any issue:

        public double mul(double number1, double number2)
        {
            return number1 * number2;
        }

    How can I do the same functionality in an async way? For example, I would like the UDF to return immediately, and later, when the results are available from the server, update the desired cells.

        // This method returns immediately.
        public object mul(double number1, double number2)
        {
            return "calculating";
        }

        // This method of a worker thread will update the results.
        public void OnResultsAvailable(object result)
        {
            // Question: how should I update the cells that triggered
            // the calculation above?
        }

    Constraint: I cannot use Excel RTD, as I have to work with an existing codebase written using the Excel C# COM add-in.

    Thanks for the help.
    -Venu

    Read the article

  • Suggested (simple) approach for drawing large numbers of visual elements in WPF?

    - by Ender
    I'm writing an interface that features a large (~50000px wide) "canvas"-type area that is used to display a lot of data in a fairly novel way. This involves lots of lines, rectangles, and text. The user can scroll around to explore the entire canvas.

    At the moment I'm just using a standard Canvas panel with various Shapes placed on it. This is nice and easy to do: construct a shape, assign some coordinates, and attach it to the Canvas. Unfortunately, it's pretty slow (to construct the children, not to do the actual rendering).

    I've looked into some alternatives, and it's a bit intimidating. I don't need anything fancy, just the ability to efficiently construct and place objects in a coordinate plane. If all I get are lines, colored rectangles, and text, I'll be happy. Do I need Geometry instances inside GeometryGroups inside GeometryDrawings inside some Panel container?

    Note: I'd like to include text and graphics (i.e. colored rectangles) in the same space, if possible.
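
    A minimal sketch of the usual lightweight alternative: one FrameworkElement hosting DrawingVisuals, which renders rectangles, lines and text without allocating a Shape object per element (assumes the System.Windows, System.Windows.Media and System.Globalization namespaces; names are illustrative):

        public class DataCanvas : FrameworkElement
        {
            private readonly VisualCollection visuals;

            public DataCanvas()
            {
                visuals = new VisualCollection(this);
                DrawingVisual visual = new DrawingVisual();
                using (DrawingContext dc = visual.RenderOpen())
                {
                    dc.DrawRectangle(Brushes.SteelBlue, null, new Rect(10, 10, 80, 40));
                    dc.DrawLine(new Pen(Brushes.Black, 1), new Point(0, 60), new Point(100, 60));
                    dc.DrawText(new FormattedText("label",
                        CultureInfo.InvariantCulture, FlowDirection.LeftToRight,
                        new Typeface("Verdana"), 12, Brushes.Black),
                        new Point(12, 14));
                }
                visuals.Add(visual);
            }

            // Plumbing WPF needs in order to find the hosted visuals.
            protected override int VisualChildrenCount { get { return visuals.Count; } }
            protected override Visual GetVisualChild(int index) { return visuals[index]; }
        }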

    Read the article

  • I need some pointers on how to implement inertia

    - by gargantaun
    Ok, so I've created a little plugin that takes a bunch of elements and creates a sort of never-ending list. I'll try to explain: I have a div, and it's got about 20 elements in it. When the user scrolls up, the top element moves out of view and is moved to the bottom of the list, and vice versa, so that when the user scrolls down, the bottom element is moved to the top of the list. This is specifically for Mobile Safari (iPad, iPhone) web content, and you can see the work in progress here:

        http://appliedworks.co.uk/files/times/SVGTests/drumView/drum.html

    You'll need an iPad or iPhone to see the scrolling in action. You can see the plugin code here:

        http://appliedworks.co.uk/files/times/SVGTests/drumView/drumView-0.1b.js

    What I would like to do is implement inertia, so the scrolling slows to a halt in response to how fast or slow the user is scrolling when their finger leaves the screen, just like the inertia commonly found in the iPhone/iPad UI. The problem is that every time an element moves to the top or the bottom of the list, the scrollTop value of the parent div is adjusted to make it look like all the elements are staying in the same place. This means the scrollTop value is never more than the top element's total height, so there's no value I can think of that I can keep manipulating to give the illusion of inertia. I'm stumped. Does anyone have any suggestions?
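
    A hedged sketch of one approach: measure the velocity from the touch events yourself and decay it on a timer, driving the plugin's own move logic rather than scrollTop (scrollListBy below is a hypothetical stand-in for whatever the plugin uses to shift content):

        var lastY = 0, lastTime = 0, velocity = 0;

        el.addEventListener('touchmove', function (e) {
            var y = e.touches[0].clientY, now = Date.now();
            if (lastTime) velocity = (y - lastY) / (now - lastTime); // px per ms
            lastY = y; lastTime = now;
        }, false);

        el.addEventListener('touchend', function () {
            (function coast() {
                velocity *= 0.95;                       // friction per tick
                if (Math.abs(velocity) < 0.01) return;  // come to rest
                scrollListBy(velocity * 16);            // hypothetical plugin hook
                setTimeout(coast, 16);                  // ~60fps
            })();
        }, false);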

    Read the article

  • Synchronizing screencasting (ffmpeg) and capturing from the webcam (OpenCV)

    - by lyuba
    As in my previous questions, I am trying to build a simple eye tracker. I decided to start with a Linux version (I run Ubuntu).

    To complete this task, one should organize screencasting and webcam capturing in such a way that frames from both streams exactly match each other and there is the same total number of frames in each. The screencasting fps fully depends on the camera's fps, so each time we get an image from the webcam we can potentially grab a screen frame and stay happy. However, all the tools for fast screencasting, like ffmpeg for example, return an .avi file as the result and require the fps to be known at start. On the other side, tools like Java+Robot or ImageMagick seem to require around 20 ms to return a .jpg screenshot, which is pretty slow for the task, but they may be invoked right after each webcam frame is grabbed and so provide the needed synchronization.

    So the sub-questions are:

        1. Does a USB camera's frame rate vary during a single session?
        2. Are there any tools which provide fast screencasting frame by frame?
        3. Is there any way to make ffmpeg push a new frame to the .avi file only when the program requests it?

    For my task I may use either C++ or Java. I am, actually, an interface designer, not a driver programmer, and this task seems to be pretty low-level. I would be grateful for any suggestion or tip!
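
    For sub-question 3, a hedged sketch (resolution, rate and pixel format are assumptions; bgr24 matches OpenCV's default channel order): have the program write one raw frame to ffmpeg's stdin per webcam frame, so ffmpeg only encodes when fed.

        # ffmpeg blocks on stdin, so a frame enters the .avi only when the
        # capture loop writes one.
        ffmpeg -f rawvideo -pix_fmt bgr24 -s 640x480 -r 30 -i - out.avi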

    Read the article

  • Trying to draw a dynamic rectangle in SVG

    - by Shaun
    To be more specific, here are the steps I need:

        onmousedown - set x and y of the rect to the mouse coordinates
        onmousemove - using the current x and y mouse coordinates, calculate the height and width of the rect, set these, and append it
        onmouseup   - remove the rectangle and call a function based on some calculations from the rect

    Here is what I have, but it isn't quite working (right now I have it drawing a line to make it simpler):

        // onmousedown: startbox(evt)
        function startbox(evt) {
            if (evt.button === 0) {
                x1 = evt.clientX + div.scrollLeft - 5;
                y1 = evt.clientY + div.scrollTop - 30;
                obj.setAttributeNS(null, "x1", x1);
                obj.setAttributeNS(null, "y1", y1);
                Root.setAttributeNS(null, "onmousemove", "updatebox(evt)");
            }
        }

        // onmousemove: updatebox(evt)
        function updatebox(evt) {
            if (evt.button === 0) {
                x2 = evt.clientX + div.scrollLeft - 5;
                y2 = evt.clientY + div.scrollTop - 30;
                Root.appendChild(obj);
                w = Math.abs(x2 - x1);
                h = Math.abs(y2 - y1);
                var strokecolor;
                if (w > 20 && h > 20) {
                    strokecolor = "green";
                    validbox = true;
                } else {
                    strokecolor = "red";
                    validbox = false;
                }
                var Attr = {
                    x2: x2,
                    y2: y2,
                    stroke: strokecolor
                };
                assignAttr(obj, Attr); // just loops through, adding multiple attributes
            }
        }

        // onmouseup: endbox()
        function endbox(evt) {
            if (evt.button === 0) {
                Root.setAttributeNS(null, "onmousemove", "");
                Root.removeChild(obj);
                if (validbox) {
                    // do stuff
                    validbox = !validbox;
                }
            }
        }

    Some of my problems with this are:

        It's slow in Chrome, making drawing the line/rect feel sluggish.
        It won't work two times in a row. This is the real problem that I can't fix.

    Any and all feedback is welcome.
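
    A hedged guess at the works-only-once symptom (an assumption, not a confirmed diagnosis): the removed element keeps its old x2/y2 values, so creating a fresh element on every mousedown gives each drag a clean start.

        function startbox(evt) {
            if (evt.button === 0) {
                // New element per drag instead of reusing the removed one.
                obj = document.createElementNS("http://www.w3.org/2000/svg", "line");
                x1 = evt.clientX + div.scrollLeft - 5;
                y1 = evt.clientY + div.scrollTop - 30;
                obj.setAttributeNS(null, "x1", x1);
                obj.setAttributeNS(null, "y1", y1);
                Root.setAttributeNS(null, "onmousemove", "updatebox(evt)");
            }
        }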

    Read the article

  • Ultra-Portable Laptop or Tablet PC for Development and Sketching

    - by Nelson LaQuet
    I am a software developer who primarily writes in PHP, [X]HTML, CSS, Javascript, C# and C++. I use Eclipse for web development, Visual Studio 2008 for C++ and C# work, TortoiseSVN, a Subversion server for local repositories, SQL Server Express, Apache and MySQL. I also use Office 2007 for word processing and spreadsheets, and Vista Ultimate 64 is my primary operating system. The only other things I do on my laptop are watch movies, surf the internet and listen to music.

    I currently have an Acer Aspire 5100 (1.4 GHz AMD Turion X2, 2 GB of RAM and a 15.4" screen). This thing does not cut it in performance or portability, and in addition, my DVD drive failed. And before anybody posts about Vista: I had XP Professional 32 on it for the last two years and recently upgraded to Vista 64. It is actually faster (with Aero disabled) than XP, so it is not the OS that is causing the laptop to be slow.

    I sketch a lot, for explaining things and for developing user interfaces and software architecture. Because of my requirements, I was thinking about a Lenovo X61 Tablet PC. It outperforms my current laptop, is significantly more portable, and... is a tablet. My question is: do any other software developers use this (or other tablets) for programming? Does it help to be able to sketch on the computer itself? And is it capable of being a good development machine? Will it handle the software listed above?

    If not, what is the best ultra-portable laptop that is good for programming? Or are ultra-portable laptops even good for programming? I could manage with my 15.4" screen, but I am spoiled by the two 19" monitors on my home desktop and my job's workstation.

    Read the article

  • 1k of Program Space, 64 bytes of RAM. Is 1 wire communication possible?

    - by Earlz
    (If you're lazy, see the bottom for the TL;DR.)

    Hello, I am planning to build a new (prototype) project dealing with physical computing. Basically, I have wires. These wires all need to have their voltage read at the same time; more than a few hundred microseconds of difference between the readings of each wire will completely screw it up. The Arduino takes about 114 microseconds per reading, so the most I could read is 2 or 3 wires before the latency would skew the accuracy of the readings.

    So my plan is to have an Arduino as the "master" of an array of ATtinys. The Arduino is pretty cramped for space, but it's a massive playground compared to the tinys. An ATtiny13A has 1 KB of flash ROM (program space), 64 bytes of RAM, and 64 bytes of (not-durable and slow) EEPROM. (I'm choosing it for price as well as size.)

    The ATtinys in my system will not do much. Basically, all they will do is wait for a signal from the master, read the voltage of 1 or 2 wires and store it in RAM (or possibly EEPROM if things are that cramped), and then send it to the master using only 1 wire for data (no room for more than that!). So far, all I should have to do is implement trivial voltage-reading code (using the built-in ADC). But it's this communication bit I'm worried about. Do you think a communication protocol (using just 1 wire!) could even be implemented in such constraints?

    TL;DR: In less than 1 KB of program space and 64 bytes of RAM (and 64 bytes of EEPROM), do you think it is possible to implement a 1-wire communication protocol? Would I need to drop to assembly to make it fit? I know that my current Arduino programs linking to the Wiring library are over 8 KB, so I'm a bit concerned.
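
    For scale, a hedged sketch (avr-gcc; the clock and baud rate are assumptions) of a bit-banged one-wire transmit, which compiles to a few dozen bytes of code, far inside the 1 KB budget:

        #define F_CPU 1200000UL          // ATtiny13A at 9.6 MHz / 8 prescaler
        #include <avr/io.h>
        #include <util/delay.h>

        #define TX_PIN  PB0
        #define BIT_US  104              // 1e6 / 9600 baud

        static void tx_byte(uint8_t b)
        {
            uint16_t frame = ((uint16_t)b << 1) | 0x200;  // start bit low, stop bit high
            for (uint8_t i = 0; i < 10; i++) {
                if (frame & 1) PORTB |= _BV(TX_PIN);
                else           PORTB &= ~_BV(TX_PIN);
                frame >>= 1;
                _delay_us(BIT_US);
            }
        }

        int main(void)
        {
            DDRB  |= _BV(TX_PIN);        // output, idle high
            PORTB |= _BV(TX_PIN);
            for (;;)
                tx_byte(0x55);           // test pattern for the master to decode
        }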

    Read the article

  • SQL statement to split a table based on a join

    - by williamjones
    I have a primary table for Articles that is linked by a join table Info to a table Tags that has only a small number of entries. I want to split the Articles table, either by deleting rows or by creating a new table with only the entries I want, based on the absence of a link to a certain tag. There are a few million articles. How can I do this? Not all of the articles have any tag at all, and some have many tags. Example:

        table Articles
            primary_key id
        table Info
            foreign_key article_id
            foreign_key tag_id
        table Tags
            primary_key id

    It was easy for me to segregate the articles that do have the match right off the bat, so I thought maybe I could do that and then use a NOT IN statement, but that runs so slowly it's unclear whether it will ever finish. I did it with these commands:

        INSERT INTO matched_articles
        SELECT * FROM articles a
        LEFT JOIN info i ON a.id = i.article_id
        WHERE i.tag_id = 5;

        INSERT INTO unmatched_articles
        SELECT * FROM articles a
        WHERE a.id NOT IN (SELECT m.id FROM matched_articles m);

    If it makes a difference, I'm on Postgres.
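
    A hedged alternative sketch: expressing the exclusion as an anti-join instead of NOT IN usually lets Postgres pick a hash or merge plan over the ids rather than re-probing the subquery per row.

        INSERT INTO unmatched_articles
        SELECT a.*
        FROM articles a
        LEFT JOIN matched_articles m ON m.id = a.id
        WHERE m.id IS NULL;   -- keep only articles with no match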

    Read the article

  • High memory usage for dummies

    - by zaf
    I've just restarted my Firefox web browser again because it started stuttering and slowing down. This happens every other day due to (my understanding) excessive memory usage. I've noticed it takes 40 MB when it starts, and then, by the time I notice the slowdown, it has grown to 1 GB and my machine has nothing more to offer unless I close other applications.

    I'm trying to understand the technical reasons why this is such a difficult problem to solve. Mozilla has a page about high memory usage:

        http://support.mozilla.com/en-US/kb/High+memory+usage

    But I'm looking for a slightly more in-depth and satisfying explanation. Not super technical, but enough to give the issue more respect and please the crowd here. Some questions I'm already pondering (they could be silly, so take it easy):

        When I close all tabs, why doesn't the memory usage go all the way down?
        Why are there no limits on extensions'/themes'/plugins' memory usage?
        Why does the memory usage increase if it's left open for long periods of time?
        Why are memory leaks so difficult to find and fix?

    App- and language-agnostic answers are also much appreciated.

    Read the article

  • Using a php://memory wrapper causes errors...

    - by HorusKol
    I'm trying to extend the PHP mailer class from Worx by adding a method which allows me to add attachments using string data rather than a path to a file. I came up with something like this:

        public function addAttachmentString($string, $name = '', $encoding = 'base64', $type = 'application/octet-stream')
        {
            $path = 'php://memory/' . md5(microtime());
            $file = fopen($path, 'w');
            fwrite($file, $string);
            fclose($file);
            $this->AddAttachment($path, $name, $encoding, $type);
        }

    However, all I get is a PHP warning:

        PHP Warning: fopen() [function.fopen]: Invalid php:// URL specified

    There aren't any decent examples in the original documentation, but I've found a couple around the internet (including one here on SO), and my usage appears correct according to them. Has anyone had any success with using this?

    My alternative is to create a temporary file and clean up, but that would mean writing to disc, and this function will be used as part of a large batch process where I want to avoid slow disc operations (old server) where possible. It's only a short file, but it has different information for each person the script emails.
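
    A hedged note on the warning: php://memory takes no path suffix, so the appended md5 segment makes the URL invalid; and even a bare php://memory stream is private to the handle that opened it, so a later fopen of the "same" path inside AddAttachment would see a fresh, empty stream. A sketch of the path-free alternative, under the assumption that the mailer keeps an attachment list it reads at send time (property names here are assumptions):

        public function addAttachmentString($string, $name = '', $encoding = 'base64', $type = 'application/octet-stream')
        {
            // Hand the mailer the raw bytes; the send path must then encode
            // from 'data' instead of reading a file.
            $this->attachments[] = array(
                'data'     => $string,
                'filename' => $name,
                'encoding' => $encoding,
                'type'     => $type,
                'isString' => true,
            );
        }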

    Read the article

  • How to handle dates that repeat indefinitely

    - by Addsy
    I am implementing a fairly simple calendar on a website using PHP and MySQL. I want to be able to handle dates that repeat indefinitely and am not sure of the best way to do it. For a time-limited repeating event, it seems to make sense to just add each event within the timeframe into my db table and group them with some form of recursion id. But when there is no limit to how often the event repeats, is it better to:

        a) put records in the db for a specific time frame (e.g. the next 2 years) and then periodically check and add new records as time goes by? The problem with this is that if someone is looking 3 years ahead, the event won't show up.

        b) not actually have records for each event, but instead, when I check in my PHP code for events within a specified time period, calculate whether a repeated event will occur within this time period? The problem with this is that there isn't a specific record for each event, which I can see being a pain when I then want to associate other info (attendance etc.) with that event. It also seems like it might be a bit slow.

    Has anyone tried either of these methods? If so, how did it work out? Or is there some other ingenious crafty method I'm missing?
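
    A hedged sketch of a hybrid of (a) and (b): store the recurrence rule once, evaluate it at query time for display, and materialize an occurrence row only when something concrete (attendance, an edit) needs to attach to a date. Table and column names are illustrative:

        CREATE TABLE events (
            id INT AUTO_INCREMENT PRIMARY KEY,
            title VARCHAR(255) NOT NULL,
            starts_at DATETIME NOT NULL,
            repeat_rule VARCHAR(20) NULL      -- e.g. 'daily', 'weekly'; NULL = one-off
        );

        -- Created lazily, only for dates that accumulate extra data.
        CREATE TABLE event_occurrences (
            id INT AUTO_INCREMENT PRIMARY KEY,
            event_id INT NOT NULL,
            occurs_on DATE NOT NULL,
            UNIQUE KEY uniq_occurrence (event_id, occurs_on)
        );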

    Read the article

  • Starting and stopping Firefox from C#

    - by Lucas Meijer
    When I start /Applications/Firefox.app/Contents/MacOS/firefox-bin on Mac OS X using Process.Start() under Mono, the id of the process that gets returned does not match the process that Firefox ends up running under. It looks like Firefox quickly decides to start another process and kill the current one. This makes it difficult to stop Firefox and to detect whether it is still running. I've tried starting Firefox with the -no-remote flag, to no avail.

    Is there a way to start Firefox so that it doesn't do this "I'll quickly make a new process for you" dance?

    The situation can somewhat be detected by making sure Firefox keeps running for at least 3 seconds after its start, and when it does not, scanning for other Firefox processes. However, this technique is shaky at best: on slow days it might take a bit more than 3 seconds, and then all tests depending on this behaviour fail.

    It turns out that this behaviour only happens when asking Firefox to start a specific profile using -P MyProfile (which I need to do, as I need to start Firefox with specific proxy-server settings). If I start Firefox "normally", it does stick to its process.
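
    A hedged sketch of the detection fallback without the fixed 3-second sleep: wait on the launched process briefly, and if it has already exited, adopt whichever firefox-bin process survived the handoff (how GetProcessesByName behaves under Mono on OS X is an assumption worth testing):

        using System.Diagnostics;

        static Process StartFirefox(string profile)
        {
            Process launched = Process.Start(
                "/Applications/Firefox.app/Contents/MacOS/firefox-bin",
                "-no-remote -P " + profile);

            // If the relauncher dance happens, the first process dies quickly.
            if (!launched.WaitForExit(5000))
                return launched;                  // still alive: no handoff

            Process[] survivors = Process.GetProcessesByName("firefox-bin");
            return survivors.Length > 0 ? survivors[0] : null;
        }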

    Read the article

  • Help with Assembly/SSE Multiplication

    - by Brett
    I've been trying to figure out how to gain some improvement in my code at a very crucial couple of lines:

        float x = a*b;
        float y = c*d;
        float z = e*f;
        float w = g*h;

    All of a, b, c... are floats. I decided to look into using SSE, but can't seem to find any improvement; in fact, it turns out to be twice as slow. My SSE code is:

        Vector4 abcd, efgh, result;
        abcd = [float a, float b, float c, float d];
        efgh = [float e, float f, float g, float h];
        _asm {
            movups xmm1, abcd
            movups xmm2, efgh
            mulps  xmm1, xmm2
            movups result, xmm1
        }

    I also attempted using standard inline assembly, but it doesn't appear that I can pack the register with the four floating points like I can with SSE. Any comments or help would be greatly appreciated; I mainly need to understand why my calculations using SSE are slower than the serial C++ code. I'm compiling in Visual Studio 2005, on Windows XP, using a Pentium 4 with HT, if that provides any additional information to assist. Thanks in advance!
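
    A hedged comparison sketch using the compiler intrinsics instead of _asm: aligned buffers let the loads compile to movaps instead of movups, and the compiler can schedule around the multiply instead of spilling through memory on every call.

        #include <xmmintrin.h>

        // Buffers must be 16-byte aligned, e.g. declared with
        // __declspec(align(16)) or allocated via _mm_malloc.
        void mul4(const float* in1, const float* in2, float* out)
        {
            __m128 v1 = _mm_load_ps(in1);      // aligned load
            __m128 v2 = _mm_load_ps(in2);
            _mm_store_ps(out, _mm_mul_ps(v1, v2));
        }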

    Read the article

  • What's the best way to select max over multiple fields in SQL?

    - by allyourcode
    The kind of thing I want to do is select max(f1, f2, f3). I know this doesn't work, but I think what I want should be pretty clear (see update 1). I was thinking of doing select max(concat(f1, '--', f2, ...)), but this has various disadvantages; in particular, the concat will probably slow things down. What's the best way to get what I want?

    Update 1: The answers I've gotten so far aren't what I'm after. max works over a set of records, but it compares them using only one value; I want max to consider several values, just like the way order by can consider several values.

    Update 2: Suppose I have the following table:

        id  class_name  order_by1  order_by2
        1   a           0          0
        2   a           0          1
        3   b           1          0
        4   b           0          9

    I want a query that will group the records by class_name. Then, within each "class", select the record that would come last if you ordered by order_by1 ascending then order_by2 ascending. The result set would consist of records 2 and 3. In my magical query language, it would look something like this:

        select max(* order by order_by1 ASC, order_by2 ASC)
        from table
        group by class_name
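
    A hedged sketch of the standard per-group-extreme pattern, matching the expected rows 2 and 3 (the example table is called t here): keep a row only if no other row in its class sorts after it on the two columns.

        SELECT t.*
        FROM t
        WHERE NOT EXISTS (
            SELECT 1
            FROM t AS other
            WHERE other.class_name = t.class_name
              AND (other.order_by1 > t.order_by1
                   OR (other.order_by1 = t.order_by1
                       AND other.order_by2 > t.order_by2))
        );

    Ties on both columns would return multiple rows per class; adding a unique id to the comparison closes that hole.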

    Read the article

  • Does Android AsyncTaskQueue or similar exist?

    - by Ben L.
    I read somewhere (and have observed) that starting threads is slow. I always assumed that AsyncTask created and reused a single thread, because it requires being started on the UI thread. The following (anonymized) code is called from a ListAdapter's getView method to load images asynchronously. It works well until the user moves the list quickly, and then it becomes "janky".

        final File imageFile = new File(getCacheDir().getPath() + "/img/" + p.image);
        image.setVisibility(View.GONE);
        view.findViewById(R.id.imageLoading).setVisibility(View.VISIBLE);

        (new AsyncTask<Void, Void, Bitmap>() {
            @Override
            protected Bitmap doInBackground(Void... params) {
                try {
                    Bitmap image;
                    if (!imageFile.exists() || imageFile.length() == 0) {
                        image = BitmapFactory.decodeStream(new URL(
                                "http://example.com/images/" + p.image).openStream());
                        image.compress(Bitmap.CompressFormat.JPEG, 85,
                                new FileOutputStream(imageFile));
                        image.recycle();
                    }
                    image = BitmapFactory.decodeFile(imageFile.getPath(), bitmapOptions);
                    return image;
                } catch (MalformedURLException ex) {
                    ex.printStackTrace();
                    return null;
                } catch (IOException ex) {
                    ex.printStackTrace();
                    return null;
                }
            }

            @Override
            protected void onPostExecute(Bitmap image) {
                if (view.getTag() != p) // The view was recycled.
                    return;
                view.findViewById(R.id.imageLoading).setVisibility(View.GONE);
                view.findViewById(R.id.image).setVisibility(View.VISIBLE);
                ((ImageView) view.findViewById(R.id.image)).setImageBitmap(image);
            }
        }).execute();

    I'm thinking that a queue-based method would work better, but I'm wondering whether one already exists or whether I should attempt to create my own implementation.
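
    A hedged sketch of the queue-based route: a single-thread ExecutorService acts as the queue, so a fast fling only ever costs an enqueue, never a thread start (java.util.concurrent and android.os.Handler assumed imported; the view-recycling check is left out for brevity).

        private final ExecutorService imageQueue = Executors.newSingleThreadExecutor();
        private final Handler uiHandler = new Handler();  // create on the UI thread

        void queueImageLoad(final File imageFile, final ImageView imageView) {
            imageQueue.execute(new Runnable() {
                public void run() {
                    // Decode (and, in the real code, download) off the UI thread.
                    final Bitmap bitmap = BitmapFactory.decodeFile(imageFile.getPath());
                    uiHandler.post(new Runnable() {
                        public void run() {
                            imageView.setImageBitmap(bitmap);  // back on the UI thread
                        }
                    });
                }
            });
        }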

    Read the article

  • Speeding up inner-joins and subqueries while restricting row size and table membership

    - by hiffy
    I'm developing an RSS feed reader that uses a Bayesian filter to filter out boring blog posts. The Stream table is meant to act as a FIFO buffer from which the webapp will consume 'entries'. I use it to store the temporary relationship between entries, users and Bayesian filter classifications. After a user marks an entry as read, it will be added to the metadata table (so that a user isn't presented with material they have already read) and deleted from the stream table. Every three minutes, a background process repopulates the Stream table with new entries (i.e. whenever the daemon adds new entries after checking the RSS feeds for updates).

    Problem: the query I came up with is hella slow. More importantly, the Stream table only needs to hold one hundred unread entries at a time; that would reduce duplication, make processing faster and give me some flexibility in how I display the entries. The query (takes about 9 seconds on 3600 items with no indexes):

        insert into stream(entry_id, user_id)
        select entries.id, subscriptions_users.user_id
        from entries
        inner join subscriptions_users
            on subscriptions_users.subscription_id = entries.subscription_id
        where subscriptions_users.user_id = 1
          and entries.id not in (select entry_id from metadata where metadata.user_id = 1)
          and entries.id not in (select entry_id from stream where user_id = 1);

    The query explained: insert into stream all of the entries from a user's subscription list (subscriptions_users) that the user has not read (i.e. that do not exist in metadata) and that do not already exist in the stream.

    Attempted solution: adding limit 100 to the end speeds up the query considerably, but upon repeated executions it keeps adding a different set of 100 entries that do not already exist in the table (with each successive query taking longer and longer). That is close, but not quite what I wanted to do. Does anyone have any advice (NoSQL?) or know a more efficient way of composing the query?
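
    A hedged rewrite sketch: the two NOT IN subqueries become anti-joins, indexes on the probed (user_id, entry_id) pairs do the filtering, and an ORDER BY makes the LIMIT deterministic, so repeated runs top the buffer up instead of grabbing an arbitrary hundred.

        insert into stream (entry_id, user_id)
        select e.id, su.user_id
        from entries e
        join subscriptions_users su on su.subscription_id = e.subscription_id
        left join metadata md on md.entry_id = e.id and md.user_id = su.user_id
        left join stream s on s.entry_id = e.id and s.user_id = su.user_id
        where su.user_id = 1
          and md.entry_id is null   -- not read
          and s.entry_id is null    -- not already buffered
        order by e.id
        limit 100;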

    Read the article

  • Technique to remove common words (and their plural versions) from a string

    - by Jake M
    I am attempting to find tags(keywords) for a recipe by parsing a long string of text. The text contains the recipe ingredients, directions and a short blurb. What do you think would be the most efficient way to remove common words from the tag list? By common words, I mean words like: 'the', 'at', 'there', 'their' etc. I have 2 methodologies I can use, which do you think is more efficient in terms of speed and do you know of a more efficient way I could do this? Methodology 1: - Determine the number of times each word occurs(using the library Collections) - Have a list of common words and remove all 'Common Words' from the Collection object by attempting to delete that key from the Collection object if it exists. - Therefore the speed will be determined by the length of the variable delims import collections from Counter delim = ['there','there\'s','theres','they','they\'re'] # the above will end up being a really long list! word_freq = Counter(recipe_str.lower().split()) for delim in set(delims): del word_freq[delim] return freq.most_common() Methodology 2: - For common words that can be plural, look at each word in the recipe string, and check if it partially contains the non-plural version of a common word. Eg; For the string "There's a test" check each word to see if it contains "there" and delete it if it does. delim = ['this','at','them'] # words that cant be plural partial_delim = ['there','they',] # words that could occur in many forms word_freq = Counter(recipe_str.lower().split()) for delim in set(delims): del word_freq[delim] # really slow for delim in set(partial_delims): for word in word_freq: if word.find(delim) != -1: del word_freq[delim] return freq.most_common()

    Read the article

  • Why does TeX/LaTeX not speed up in subsequent runs?

    - by Debilski
    I really wonder why even recent TeX/LaTeX systems do not use any caching to speed up later runs. Every time I fix a single comma*, calling LaTeX costs me about the same amount of time, because it needs to load and convert every single picture file.

    (* I know that even changing a tiny comma could affect the whole structure, but of course a well-written cache format could see the impact of that. Also, there might be situations where 100% correctness is not needed as long as it's fast.)

    Is there something in the TeX language which makes this complicated or impossible to accomplish, or is it just that the original implementation of TeX had no need for it (because it would have been slow anyway on the large computers of the time)? But then, on the other hand, why hasn't this annoyed other people enough that they've started a fork with some sort of caching (or a transparent conversion of TeX files to a format which is faster to parse)?

    Is there anything I can do to speed up subsequent runs of LaTeX, apart from putting all the content into chapterXX.tex files and then commenting them out?
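
    A hedged note on the comment-out workaround: \include plus \includeonly is the built-in version of it, reusing each chapter's .aux data so cross-references and page numbers survive while untouched chapters are skipped; the draft class option additionally stops image conversion and draws placeholder boxes instead.

        % Sketch: rebuild only chapter 2; the other chapters keep their
        % page numbers and references from the last full run.
        \documentclass[draft]{book}   % draft: graphicx renders outline boxes
        \includeonly{chapter02}
        \begin{document}
        \include{chapter01}
        \include{chapter02}
        \include{chapter03}
        \end{document}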

    Read the article
