Search Results

Search found 10033 results on 402 pages for 'execution speed'.

Page 75/402

  • JQuery Optimization: Is there any way to speed up the rendering of the FlexSelect control?

    - by Sephrial
    Greetings, I am new to jQuery, and I have a performance problem with the FlexSelect control where it takes about 5 seconds to render the dropdown control (in the renderDropdown() function). The dropdown list contains about 5,000 elements. I believe all the runtime is attributed to the following block of code: var list = this.dropdownList.html(""); $.each(this.results, function() { list.append($("<li/>").html(this.name)); }); Question: Are there any alternatives that would build this list of elements in a more efficient manner?
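
    One common way to speed a loop like this up (a sketch, not from the original post; it assumes this.name is already HTML-safe) is to build the markup as one string and touch the live DOM once, instead of calling append() 5,000 times:

        var items = [];
        $.each(this.results, function () {
            // build plain strings; no DOM work inside the loop
            items.push("<li>" + this.name + "</li>");
        });
        this.dropdownList.html(items.join(""));

    A detached document fragment achieves much the same thing; the key point is that each append() on a live element can trigger layout work, so batching the insertions is usually where most of the 5 seconds goes.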

    Read the article

  • speed of map() vs. list comprehension vs. numpy vectorized function in python

    - by mcstrother
    I have a function foo(i) that takes an integer and takes a significant amount of time to execute. Will there be a significant performance difference between any of the following ways of initializing 'a': a = [foo(i) for i in xrange(100)] , a = map(foo, range(100)) , and vfoo = numpy.vectorize(foo) vfoo(range(100)) ? (I don't care whether the output is a list or a numpy array). Is there some other better way of doing this? Thanks.
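
    A quick, self-contained way to compare the three (a minimal sketch; foo here is just a stand-in for the real slow function, and the Python 3 spellings of range/map are used):

        import timeit
        import numpy

        def foo(i):
            return i * i          # stand-in for the expensive function

        n = 100
        print(timeit.timeit(lambda: [foo(i) for i in range(n)], number=1000))
        print(timeit.timeit(lambda: list(map(foo, range(n))), number=1000))
        vfoo = numpy.vectorize(foo)
        print(timeit.timeit(lambda: vfoo(range(n)), number=1000))

    If foo itself dominates the cost, the three differ mostly in loop overhead; note that numpy.vectorize is documented as a convenience wrapper around a Python-level loop, so it is not a vectorized speedup in the numpy sense.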

    Read the article

  • How can I speed up line by line reading of an ASCII file? (C++)

    - by Jon
    Here's a bit of code that is a considerable bottleneck after doing some measuring: //----------------------------------------------------------------------------- // Construct dictionary hash set from dictionary file //----------------------------------------------------------------------------- void constructDictionary(unordered_set<string> &dict) { ifstream wordListFile; wordListFile.open("dictionary.txt"); string word; while( wordListFile >> word ) { if( !word.empty() ) { dict.insert(word); } } wordListFile.close(); } I'm reading in ~200,000 words and this takes about 240 ms on my machine. Is the use of ifstream here efficient? Can I do better? I'm reading about mmap() implementations but I'm not understanding them 100%. The input file is simply text strings with *nix line terminations.
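
    Two commonly suggested tweaks, shown here as a sketch (whether they actually help has to be measured on the target platform): pull the whole file into memory with one bulk read, and reserve the hash buckets up front so the set never rehashes while inserting ~200,000 words.

        #include <fstream>
        #include <sstream>
        #include <string>
        #include <unordered_set>
        using namespace std;

        void constructDictionary(unordered_set<string> &dict)
        {
            ifstream wordListFile("dictionary.txt", ios::in | ios::binary);
            stringstream buffer;
            buffer << wordListFile.rdbuf();   // one bulk read instead of many small ones
            dict.reserve(200000);             // avoid rehashing during insertion
            string word;
            while (buffer >> word)
                dict.insert(word);
        }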

    Read the article

  • Speed/expense of SQLite query vs. List.contains() for "in-set" icon on list rows

    - by kpdvx
    An application I'm developing requires that the app maintain a local list of things, let's say books, in a local "library." Users can access their local library of books and search for books using a remote web service. The app will be aware of other users of the app through this web service, and users can browse other users' lists of books in their library. Each book is identified by a unique bookId (represented as an int). When viewing books returned through a search result or when viewing another user's book library, the individual list row cells need to visually represent if the book is in the user's local library or not. A user can have at most 5,000 books in the library, stored in SQLite on the device (and synchronized with the remote web service). My question is, to determine if the book shown in the list row is in the user's library, would it be better to directly ask SQLite (via SELECT COUNT(*)...) or to maintain, in-memory, a List or int[] array of some sort containing the unique bookIds? So, on each row display do I query SQLite or check if the List or int[] array contains the unique bookId? Because the user can have at most 5,000 books, each bookId occupies 4 bytes so at most this would use ~ 20kB. In thinking about this, and in typing this out, it seems obvious to me that it would be far better for performance if I maintained a list or int[] array of in-library bookIds vs. querying SQLite (the only caveat to maintaining an int[] array is that if books are added or removed I'll need to grow or shrink the array by hand, so with this option I'll most likely use an ArrayList or Vector, though I'm not sure of the additional memory overhead of using Integer objects as opposed to primitives). Opinions, thoughts, suggestions?
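
    For the in-memory option, a HashSet of boxed ids gives O(1) lookups and handles add/remove without any manual array resizing. A minimal Android-flavoured sketch (the table and column names here are assumptions, not from the original post):

        import java.util.HashSet;
        import java.util.Set;

        public class LibraryIndex {
            private final Set<Integer> bookIds = new HashSet<Integer>();

            // load once at startup or after a sync; assumed table/column names
            public void load(android.database.sqlite.SQLiteDatabase db) {
                android.database.Cursor c = db.rawQuery("SELECT book_id FROM books", null);
                try {
                    while (c.moveToNext()) {
                        bookIds.add(c.getInt(0));
                    }
                } finally {
                    c.close();
                }
            }

            public boolean isInLibrary(int bookId) { return bookIds.contains(bookId); }
            public void add(int bookId)            { bookIds.add(bookId); }
            public void remove(int bookId)         { bookIds.remove(bookId); }
        }

    At 5,000 entries the boxing overhead is at worst a few hundred kilobytes, which is small next to the cost of issuing a SELECT COUNT(*) for every row the list adapter binds.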

    Read the article

  • How can I store large amount of data from a database to XML (speed problem, part three)?

    - by Andrija
    After getting some responses, the current situation is that I'm using this tip: http://www.ibm.com/developerworks/xml/library/x-tipbigdoc5.html (Listing 1. Turning ResultSets into XML), and XMLWriter for Java from http://www.megginson.com/downloads/ . Basically, it reads data from the database and writes it to a file as characters, using column names to create opening and closing tags. While doing so, I need to make two changes to the input stream, namely to the dates and numbers. // Iterate over the set while (rs.next()) { w.startElement("row"); for (int i = 0; i < count; i++) { Object ob = rs.getObject(i + 1); if (rs.wasNull()) { ob = null; } String colName = meta.getColumnLabel(i + 1); if (ob != null ) { if (ob instanceof Timestamp) { w.dataElement(colName, Util.formatDate((Timestamp)ob, dateFormat)); } else if (ob instanceof BigDecimal){ w.dataElement(colName, Util.transformToHTML(new Integer(((BigDecimal)ob).intValue()))); } else { w.dataElement(colName, ob.toString()); } } else { w.emptyElement(colName); } } w.endElement("row"); } The SQL that gets the results has the to_number command (e.g. to_number(sif.ID) ID ) and the to_date command (e.g. TO_DATE (sif.datum_do, 'DD.MM.RRRR') datum_do). The problems are that the returning date is a timestamp, meaning I don't get 14.02.2010 but rather 14.02.2010 00:00:000 so I have to format it to the dd.mm.yyyy format. The second problem is the numbers; for some reason, they are in the database as varchar2 and can have leading zeroes that need to be stripped; I'm guessing I could do that in my SQL with the trim function so the Util.transformToHTML is unnecessary (for clarification, here's the method): public static String transformToHTML(Integer number) { String result = ""; try { result = number.toString(); } catch (Exception e) {} return result; } What I'd like to know is a) Can I get the date in the format I want and skip additional processing thus shortening the processing time? b) Is there a better way to do this? We're talking about XML files that are in the 50 MB - 250 MB filesize category.
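
    On question (a): since TO_DATE and to_number appear in the query, the database looks like Oracle, and one commonly used route is to do the formatting in the SELECT itself so every column arrives as a plain string and the instanceof branches (and Util.transformToHTML) disappear. A sketch of the idea only, with the rest of the statement elided:

        SELECT TO_CHAR(TO_DATE(sif.datum_do, 'DD.MM.RRRR'), 'DD.MM.YYYY') AS datum_do,
               TO_CHAR(TO_NUMBER(sif.id))                                 AS id
          FROM ...

    With string columns the Java loop reduces to w.dataElement(colName, ob.toString()) for every non-null value, which also avoids creating a Timestamp and BigDecimal object per cell.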

    Read the article

  • Does allocation speed depend on the garbage collector being used?

    - by jkff
    My app is allocating a ton of objects (1 million per second; most objects are byte arrays of size ~80-100 and strings of the same size) and I think it might be the source of its poor performance. The app's working set is only tens of megabytes. Profiling the app shows that GC time is negligibly small. However, I suspect that perhaps the allocation procedure depends on which GC is being used, and some settings might make allocation faster or perhaps have a positive influence on cache hit rate, etc. Is that so? Or is allocation performance independent of GC settings under the assumption that garbage collection itself takes little time?
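
    One way to get an answer for this specific workload (a rough sketch rather than a proper benchmark harness; JIT escape analysis can eliminate some allocations, so treat the numbers as relative) is to time the same allocation loop under different collectors chosen with JVM flags:

        public class AllocBench {
            public static void main(String[] args) {
                byte[] last = null;
                long start = System.nanoTime();
                for (int i = 0; i < 10_000_000; i++) {
                    last = new byte[96];   // roughly the 80-100 byte objects from the question
                }
                long ms = (System.nanoTime() - start) / 1_000_000;
                System.out.println(ms + " ms, last length " + last.length);
            }
        }
        // run it under different collectors, e.g.:
        //   java -XX:+UseParallelGC AllocBench
        //   java -XX:+UseG1GC AllocBench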

    Read the article

  • Is learning ed worth it to boost my speed in VIM?

    - by Ksiresh
    I've learned the basic/intermediate levels of VIM (it's too vast to list). I often find that I slip back to my old ways and start using the mouse, holding down keys to get somewhere, and doing other stupid things that could be sped up. Would it be worth spending time to learn ed to break the habits learned from years in Windoze? Does using ed cultivate the right type of thinking that will transfer to VIM?

    Read the article

  • How to speed up saving a UIImagePickerController image from the camera to the filesystem via UIImagePNGRepresentation()?

    - by kazuhito0000
    I'm making an application that lets users take a photo and shows it both as a thumbnail and in a photo viewer. I have an NSManagedObject class called photo, and photo has a method that takes a UIImage, converts it to PNG using UIImagePNGRepresentation(), and saves it to the filesystem. After this operation, it resizes the image to thumbnail size and saves that as well. The problem here is that UIImagePNGRepresentation() and the image resizing seem to be really slow, and I don't know if this is the right way to do it. Does anyone know the best way to accomplish what I want to do? Thank you in advance.
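
    A common pattern for this situation (a sketch with hypothetical method and path names, not the poster's actual code): encode camera photos as JPEG instead of PNG and push the encoding and file write onto a background queue so the UI never waits on it.

        // hypothetical helper; 'path' and the 0.8 quality factor are placeholders
        - (void)savePhoto:(UIImage *)image toPath:(NSString *)path {
            dispatch_async(dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_DEFAULT, 0), ^{
                // JPEG encoding is far cheaper than PNG for full-size camera images
                NSData *data = UIImageJPEGRepresentation(image, 0.8);
                [data writeToFile:path atomically:YES];
                dispatch_async(dispatch_get_main_queue(), ^{
                    // back on the main thread: update the thumbnail / photo viewer here
                });
            });
        }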

    Read the article

  • How do I test the speed of a mySQL query?

    - by Chris
    I have a select query like the one below... $sql = "SELECT * FROM notifications WHERE to_id='".$userid."' AND (alert_read != '1' OR user_read != '1') ORDER BY alert_time DESC"; $result = mysql_query($sql); How do I test how long the query took to run?
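
    One quick way to measure it from the PHP side (a sketch using the same legacy mysql_* API as the question; note this measures the round trip from PHP, not just the time MySQL spends executing):

        $start = microtime(true);
        $result = mysql_query($sql);
        $elapsed = microtime(true) - $start;
        echo "query took " . number_format($elapsed * 1000, 2) . " ms";

    For server-side detail, running the same statement prefixed with EXPLAIN shows how MySQL plans to execute it (for example, whether an index on to_id is used), and the slow query log records statements that exceed a configured threshold.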

    Read the article

  • Speed of interpolation algorithms, C# and C++ working together.

    - by Kaminari
    Hello. I need a fast implementation of popular interpolation algorithms. I figured that for such simple algorithms C# will be much slower than C++, so I'm thinking of writing some native code and using it from my C# GUI. First of all I ran some tests: a few operations on a 1024x1024x3 matrix took 32 ms in C# and 4 ms in C++, and that's basically what I need. Interpolation, however, is not quite the right word, because I only need it for downscaling. The question is: will native code be faster than the C# methods in Drawing2D? Image outputImage = new Bitmap(destWidth, destHeight, PixelFormat.Format24bppRgb); Graphics grPhoto = Graphics.FromImage(outputImage); grPhoto.InterpolationMode = InterpolationMode.*; //all of them grPhoto.DrawImage(bmp, new Rectangle(0, 0, destWidth, destHeight), new Rectangle(0, 0, sourceWidth, sourceHeight), GraphicsUnit.Pixel); grPhoto.Dispose(); Some of these modes run in 20 ms and some in 80 ms. Is there a way to do it faster?
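
    Before dropping to native code, it can be worth pinning down what each GDI+ mode actually costs on the target images. A minimal C# sketch (bmp, destWidth and destHeight are the question's own variables; it assumes using System.Drawing, System.Drawing.Imaging, System.Drawing.Drawing2D and System.Diagnostics):

        static void TimeResizeModes(Bitmap bmp, int destWidth, int destHeight)
        {
            var modes = new[] { InterpolationMode.NearestNeighbor,
                                InterpolationMode.Bilinear,
                                InterpolationMode.HighQualityBicubic };
            foreach (var mode in modes)
            {
                var sw = Stopwatch.StartNew();
                using (var output = new Bitmap(destWidth, destHeight, PixelFormat.Format24bppRgb))
                using (var g = Graphics.FromImage(output))
                {
                    g.InterpolationMode = mode;
                    g.DrawImage(bmp, new Rectangle(0, 0, destWidth, destHeight),
                                new Rectangle(0, 0, bmp.Width, bmp.Height), GraphicsUnit.Pixel);
                }
                sw.Stop();
                Console.WriteLine("{0}: {1} ms", mode, sw.ElapsedMilliseconds);
            }
        }

    NearestNeighbor is typically the fastest but blockiest, HighQualityBicubic the slowest; if none of them is fast enough, a hand-written box filter in native code is a reasonable next step for pure downscaling.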

    Read the article

  • How to copy bytes from buffer into the managed struct?

    - by Chupo_cro
    I have a problem with getting the code to work in a managed environment (VS2008 C++/CLI Win Forms App). The problem is I cannot declare the unmanaged struct (is that even possible?) inside the managed code, so I've declared a managed struct but now I have a problem how to copy bytes from buffer into that struct. Here is the pure C++ code that obviously works as expected: typedef struct GPS_point { float point_unknown_1; float latitude; float longitude; float altitude; // x10000 float time; int point_unknown_2; int speed; // x100 int manually_logged_point; // flag (1 --> point logged manually) } track_point; int offset = 0; int filesize = 256; // simulates filesize int point_num = 10; // simulates number of records int main () { char *buffer_dyn = new char[filesize]; // allocate RAM // here, the file would have been read into the buffer buffer_dyn[0xa8] = 0x1e; // simulates the speed data (1e 00 00 00) buffer_dyn[0xa9] = 0x00; buffer_dyn[0xaa] = 0x00; buffer_dyn[0xab] = 0x00; offset = 0x90; // if the data with this offset is transfered trom buffer // to struct, int speed is alligned with the buffer at the // offset of 0xa8 track_point *points = new track_point[point_num]; points[0].speed = 0xff; // (debug) it should change into 0x1e memcpy(&points[0],buffer_dyn+offset,32); cout << "offset: " << offset << "\r\n"; //cout << "speed: " << points[0].speed << "\r\n"; printf ("speed : 0x%x\r\n",points[0].speed); printf("byte at offset 0xa8: 0x%x\r\n",(unsigned char)buffer_dyn[0xa8]); // should be 0x1e delete[] buffer_dyn; // release RAM delete[] points; /* What I need is to rewrite the lines 29 and 31 to work in the managed code (VS2008 Win Forms C++/CLI) What should I have after: array<track_point^>^ points = gcnew array<track_point^>(point_num); so I can copy 32 bytes from buffer_dyn to the managed struct declared as typedef ref struct GPS_point { float point_unknown_1; float latitude; float longitude; float altitude; // x10000 float time; int point_unknown_2; int speed; // x100 int manually_logged_point; // flag (1 --> point logged manually) } track_point; */ return 0; } Here is the paste to codepad.org so it can be seen the code is OK. What I need is to rewrite these two lines: track_point *points = new track_point[point_num]; memcpy(&points[0],buffer_dyn+offset,32); to something that will work in a managed application. I wrote: array<track_point^>^ points = gcnew array<track_point^>(point_num); and now trying to reproduce the described copying of the data from buffer over the struct, but haven't any idea how it should be done. Alternatively, if there is a way to use an unmanaged struct in the same way shown in my code, then I would like to avoid working with managed struct.
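
    One commonly used route in C++/CLI for this kind of blit (a sketch, not the only option): declare the managed type as a value struct with sequential layout, so an array<track_point> has the same 32-byte-per-element layout as the native one, then pin an element and memcpy into it.

        using namespace System::Runtime::InteropServices;

        [StructLayout(LayoutKind::Sequential)]
        public value struct track_point {
            float point_unknown_1;
            float latitude;
            float longitude;
            float altitude;               // x10000
            float time;
            int   point_unknown_2;
            int   speed;                  // x100
            int   manually_logged_point;  // flag (1 --> point logged manually)
        };

        // later, in place of the native 'new track_point[point_num]' and memcpy:
        array<track_point>^ points = gcnew array<track_point>(point_num);
        pin_ptr<track_point> dest = &points[0];          // pin so the GC cannot move the array
        memcpy(dest, buffer_dyn + offset, sizeof(track_point));

    The important difference from array<track_point^> in the question is that a value struct array stores its fields inline, so a raw memcpy lines up with the native layout; an array of handles stores references and cannot be filled that way.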

    Read the article

  • Slow loop in R, any suggestion to speed it up?

    - by cfceric
    I have a data frame "m" as shown below: I am trying to find each account's most active month (the one with the most V1 values). For example, for account "2" it will be "month 6", for account 3 it will be "month 1", .... I wrote the loop below; it works fine but takes a long time even though I only used 8,000 rows, and the whole data set has 250,000 rows, so the code below is not usable. Can anyone suggest a better way to achieve this? Thanks a lot.
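
    Neither the data frame nor the loop survived on this page, so as a sketch only (assuming columns named account, month and V1, and that "most V1" means the most rows per account and month), one grouped pass replaces the row-by-row loop:

        # count rows per account and month, then keep each account's busiest month
        act <- aggregate(V1 ~ account + month, data = m, FUN = length)
        act <- act[order(act$account, -act$V1), ]
        busiest <- act[!duplicated(act$account), ]
        # swap FUN = length for FUN = sum if "most V1" means the column total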

    Read the article

  • 10 Windows Tweaking Myths Debunked

    - by Chris Hoffman
    Windows is big, complicated, and misunderstood. You’ll still stumble across bad advice from time to time when browsing the web. These Windows tweaking, performance, and system maintenance tips are mostly just useless, but some are actively harmful. Luckily, most of these myths have been stomped out on mainstream sites and forums. However, if you start searching the web, you’ll still find websites that recommend you do these things. Erase Cache Files Regularly to Speed Things Up You can free up disk space by running an application like CCleaner, another temporary-file-cleaning utility, or even the Windows Disk Cleanup tool. In some cases, you may even see an old computer speed up when you erase a large amount of useless files. However, running CCleaner or similar utilities every day to erase your browser’s cache won’t actually speed things up. It will slow down your web browsing as your web browser is forced to redownload the files all over again, and reconstruct the cache you regularly delete. If you’ve installed CCleaner or a similar program and run it every day with the default settings, you’re actually slowing down your web browsing. Consider at least preventing the program from wiping out your web browser cache. Enable ReadyBoost to Speed Up Modern PCs Windows still prompts you to enable ReadyBoost when you insert a USB stick or memory card. On modern computers, this is completely pointless — ReadyBoost won’t actually speed up your computer if you have at least 1 GB of RAM. If you have a very old computer with a tiny amount of RAM — think 512 MB — ReadyBoost may help a bit. Otherwise, don’t bother. Open the Disk Defragmenter and Manually Defragment On Windows 98, users had to manually open the defragmentation tool and run it, ensuring no other applications were using the hard drive while it did its work. Modern versions of Windows are capable of defragmenting your file system while other programs are using it, and they automatically defragment your disks for you. If you’re still opening the Disk Defragmenter every week and clicking the Defragment button, you don’t need to do this — Windows is doing it for you unless you’ve told it not to run on a schedule. Modern computers with solid-state drives don’t have to be defragmented at all. Disable Your Pagefile to Increase Performance When Windows runs out of empty space in RAM, it swaps out data from memory to a pagefile on your hard disk. If a computer doesn’t have much memory and it’s running slow, it’s probably moving data to the pagefile or reading data from it. Some Windows geeks seem to think that the pagefile is bad for system performance and disable it completely. The argument seems to be that Windows can’t be trusted to manage a pagefile and won’t use it intelligently, so the pagefile needs to be removed. As long as you have enough RAM, it’s true that you can get by without a pagefile. However, if you do have enough RAM, Windows will only use the pagefile rarely anyway. Tests have found that disabling the pagefile offers no performance benefit. Enable CPU Cores in MSConfig Some websites claim that Windows may not be using all of your CPU cores or that you can speed up your boot time by increasing the amount of cores used during boot. They direct you to the MSConfig application, where you can indeed select an option that appears to increase the amount of cores used. In reality, Windows always uses the maximum amount of processor cores your CPU has. 
(Technically, only one core is used at the beginning of the boot process, but the additional cores are quickly activated.) Leave this option unchecked. It’s just a debugging option that allows you to set a maximum number of cores, so it would be useful if you wanted to force Windows to only use a single core on a multi-core system — but all it can do is restrict the amount of cores used. Clean Your Prefetch To Increase Startup Speed Windows watches the programs you run and creates .pf files in its Prefetch folder for them. The Prefetch feature works as a sort of cache — when you open an application, Windows checks the Prefetch folder, looks at the application’s .pf file (if it exists), and uses that as a guide to start preloading data that the application will use. This helps your applications start faster. Some Windows geeks have misunderstood this feature. They believe that Windows loads these files at boot, so your boot time will slow down due to Windows preloading the data specified in the .pf files. They also argue you’ll build up useless files as you uninstall programs and .pf files will be left over. In reality, Windows only loads the data in these .pf files when you launch the associated application and only stores .pf files for the 128 most recently launched programs. If you were to regularly clean out the Prefetch folder, not only would programs take longer to open because they won’t be preloaded, Windows will have to waste time recreating all the .pf files. You could also modify the PrefetchParameters setting to disable Prefetch, but there’s no reason to do that. Let Windows manage Prefetch on its own. Disable QoS To Increase Network Bandwidth Quality of Service (QoS) is a feature that allows your computer to prioritize its traffic. For example, a time-critical application like Skype could choose to use QoS and prioritize its traffic over a file-downloading program so your voice conversation would work smoothly, even while you were downloading files. Some people incorrectly believe that QoS always reserves a certain amount of bandwidth and this bandwidth is unused until you disable it. This is untrue. In reality, 100% of bandwidth is normally available to all applications unless a program chooses to use QoS. Even if a program does choose to use QoS, the reserved space will be available to other programs unless the program is actively using it. No bandwidth is ever set aside and left empty. Set DisablePagingExecutive to Make Windows Faster The DisablePagingExecutive registry setting is set to 0 by default, which allows drivers and system code to be paged to the disk. When set to 1, drivers and system code will be forced to stay resident in memory. Once again, some people believe that Windows isn’t smart enough to manage the pagefile on its own and believe that changing this option will force Windows to keep important files in memory rather than stupidly paging them out. If you have more than enough memory, changing this won’t really do anything. If you have little memory, changing this setting may force Windows to push programs you’re using to the page file rather than push unused system files there — this would slow things down. This is an option that may be helpful for debugging in some situations, not a setting to change for more performance. Process Idle Tasks to Free Memory Windows does things, such as creating scheduled system restore points, when you step away from your computer. 
It waits until your computer is “idle” so it won’t slow your computer and waste your time while you’re using it. Running the “Rundll32.exe advapi32.dll,ProcessIdleTasks” command forces Windows to perform all of these tasks while you’re using the computer. This is completely pointless and won’t help free memory or anything like that — all you’re doing is forcing Windows to slow your computer down while you’re using it. This command only exists so benchmarking programs can force idle tasks to run before performing benchmarks, ensuring idle tasks don’t start running and interfere with the benchmark. Delay or Disable Windows Services There’s no real reason to disable Windows services anymore. There was a time when Windows was particularly heavy and computers had little memory — think Windows Vista and those “Vista Capable” PCs Microsoft was sued over. Modern versions of Windows like Windows 7 and 8 are lighter than Windows Vista and computers have more than enough memory, so you won’t see any improvements from disabling system services included with Windows. Some people argue for not disabling services, however — they recommend setting services from “Automatic” to “Automatic (Delayed Start)”. By default, the Delayed Start option just starts services two minutes after the last “Automatic” service starts. Setting services to Delayed Start won’t really speed up your boot time, as the services will still need to start — in fact, it may lengthen the time it takes to get a usable desktop as services will still be loading two minutes after booting. Most services can load in parallel, and loading the services as early as possible will result in a better experience. The “Delayed Start” feature is primarily useful for system administrators who need to ensure a specific service starts later than another service. If you ever find a guide that recommends you set a little-known registry setting to improve performance, take a closer look — the change is probably useless. Want to actually speed up your PC? Try disabling useless startup programs that run on boot, increasing your boot time and consuming memory in the background. This is a much better tip than doing any of the above, especially considering most Windows PCs come packed to the brim with bloatware.     

    Read the article

  • Table Variables: an empirical approach.

    - by Phil Factor
    It isn’t entirely a pleasant experience to publish an article only to have it described on Twitter as ‘Horrible’, and to have it criticized on the MVP forum. When this happened to me in the aftermath of publishing my article on Temporary tables recently, I was taken aback, because these critics were experts whose views I respect. What was my crime? It was, I think, to suggest that, despite the obvious quirks, it was best to use Table Variables as a first choice, and to use local Temporary Tables if you hit problems due to these quirks, or if you were doing complex joins using a large number of rows. What are these quirks? Well, table variables have advantages if they are used sensibly, but this requires some awareness by the developer about the potential hazards and how to avoid them. You can be hit by a badly-performing join involving a table variable. Table Variables are a compromise, and this compromise doesn’t always work out well. Explicit indexes aren’t allowed on Table Variables, so one cannot use covering indexes or non-unique indexes. The query optimizer has to make assumptions about the data rather than using column distribution statistics when a table variable is involved in a join, because there aren’t any column-based distribution statistics on a table variable. It assumes a reasonably even distribution of data, and is likely to have little idea of the number of rows in the table variables that are involved in queries. However complex the heuristics that are used might be in determining the best way of executing a SQL query, and they most certainly are, the Query Optimizer is likely to fail occasionally with table variables, under certain circumstances, and produce a Query Execution Plan that is frightful. The experienced developer or DBA will be on the lookout for this sort of problem. In this blog, I’ll be expanding on some of the tests I used when writing my article to illustrate the quirks, and include a subsequent example supplied by Kevin Boles. A simplified example. We’ll start out by illustrating a simple example that shows some of these characteristics. We’ll create two tables filled with random numbers and then see how many matches we get between the two tables. We’ll forget indexes altogether for this example, and use heaps. We’ll try the same Join with two table variables, two table variables with OPTION (RECOMPILE) in the JOIN clause, and with two temporary tables. It is all a bit jerky because of the granularity of the timing that isn’t actually happening at the millisecond level (I used DATETIME). However, you’ll see that the table variable is outperforming the local temporary table up to 10,000 rows. Actually, even without a use of the OPTION (RECOMPILE) hint, it is doing well. What happens when your table size increases? The table variable is, from around 30,000 rows, locked into a very bad execution plan unless you use OPTION (RECOMPILE) to provide the Query Analyser with a decent estimation of the size of the table. However, if it has the OPTION (RECOMPILE), then it is smokin’. Well, up to 120,000 rows, at least. It is performing better than a Temporary table, and in a good linear fashion. What about mixed table joins, where you are joining a temporary table to a table variable? You’d probably expect that the query analyzer would throw up its hands and produce a bad execution plan as if it were a table variable. After all, it knows nothing about the statistics in one of the tables so how could it do any better? 
Well, it behaves as if it were doing a recompile. And an explicit recompile adds no value at all. (we just go up to 45000 rows since we know the bigger picture now)   Now, if you were new to this, you might be tempted to start drawing conclusions. Beware! We’re dealing with a very complex beast: the Query Optimizer. It can come up with surprises What if we change the query very slightly to insert the results into a Table Variable? We change nothing else and just measure the execution time of the statement as before. Suddenly, the table variable isn’t looking so much better, even taking into account the time involved in doing the table insert. OK, if you haven’t used OPTION (RECOMPILE) then you’re toast. Otherwise, there isn’t much in it between the Table variable and the temporary table. The table variable is faster up to 8000 rows and then not much in it up to 100,000 rows. Past the 8000 row mark, we’ve lost the advantage of the table variable’s speed. Any general rule you may be formulating has just gone for a walk. What we can conclude from this experiment is that if you join two table variables, and can’t use constraints, you’re going to need that Option (RECOMPILE) hint. Count Dracula and the Horror Join. These tables of integers provide a rather unreal example, so let’s try a rather different example, and get stuck into some implicit indexing, by using constraints. What unusual words are contained in the book ‘Dracula’ by Bram Stoker? Here we get a table of all the common words in the English language (60,387 of them) and put them in a table. We put them in a Table Variable with the word as a primary key, a Table Variable Heap and a Table Variable with a primary key. We then take all the distinct words used in the book ‘Dracula’ (7,558 of them). We then create a table variable and insert into it all those uncommon words that are in ‘Dracula’. i.e. all the words in Dracula that aren’t matched in the list of common words. To do this we use a left outer join, where the right-hand value is null. The results show a huge variation, between the sublime and the gorblimey. If both tables contain a Primary Key on the columns we join on, and both are Table Variables, it took 33 Ms. If one table contains a Primary Key, and the other is a heap, and both are Table Variables, it took 46 Ms. If both Table Variables use a unique constraint, then the query takes 36 Ms. If neither table contains a Primary Key and both are Table Variables, it took 116383 Ms. Yes, nearly two minutes!! If both tables contain a Primary Key, one is a Table Variables and the other is a temporary table, it took 113 Ms. If one table contains a Primary Key, and both are Temporary Tables, it took 56 Ms.If both tables are temporary tables and both have primary keys, it took 46 Ms. Here we see table variables which are joined on their primary key again enjoying a  slight performance advantage over temporary tables. Where both tables are table variables and both are heaps, the query suddenly takes nearly two minutes! So what if you have two heaps and you use option Recompile? If you take the rogue query and add the hint, then suddenly, the query drops its time down to 76 Ms. If you add unique indexes, then you've done even better, down to half that time. Here are the text execution plans.So where have we got to? Without drilling down into the minutiae of the execution plans we can begin to create a hypothesis. 
If you are using table variables, and your tables are relatively small, they are faster than temporary tables, but as the number of rows increases you need to do one of two things: either you need to have a primary key on the column you are using to join on, or else you need to use option (RECOMPILE) If you try to execute a query that is a join, and both tables are table variable heaps, you are asking for trouble, well- slow queries, unless you give the table hint once the number of rows has risen past a point (30,000 in our first example, but this varies considerably according to context). Kevin’s Skew In describing the table-size, I used the term ‘relatively small’. Kevin Boles produced an interesting case where a single-row table variable produces a very poor execution plan when joined to a very, very skewed table. In the original, pasted into my article as a comment, a column consisted of 100000 rows in which the key column was one number (1) . To this was added eight rows with sequential numbers up to 9. When this was joined to a single-tow Table Variable with a key of 2 it produced a bad plan. This problem is unlikely to occur in real usage, and the Query Optimiser team probably never set up a test for it. Actually, the skew can be slightly less extreme than Kevin made it. The following test showed that once the table had 54 sequential rows in the table, then it adopted exactly the same execution plan as for the temporary table and then all was well. Undeniably, real data does occasionally cause problems to the performance of joins in Table Variables due to the extreme skew of the distribution. We've all experienced Perfectly Poisonous Table Variables in real live data. As in Kevin’s example, indexes merely make matters worse, and the OPTION (RECOMPILE) trick does nothing to help. In this case, there is no option but to use a temporary table. However, one has to note that once the slight de-skew had taken place, then the plans were identical across a huge range. Conclusions Where you need to hold intermediate results as part of a process, Table Variables offer a good alternative to temporary tables when used wisely. They can perform faster than a temporary table when the number of rows is not great. For some processing with huge tables, they can perform well when only a clustered index is required, and when the nature of the processing makes an index seek very effective. Table Variables are scoped to the batch or procedure and are unlikely to hang about in the TempDB when they are no longer required. They require no explicit cleanup. Where the number of rows in the table is moderate, you can even use them in joins as ‘Heaps’, unindexed. Beware, however, since, as the number of rows increase, joins on Table Variable heaps can easily become saddled by very poor execution plans, and this must be cured either by adding constraints (UNIQUE or PRIMARY KEY) or by adding the OPTION (RECOMPILE) hint if this is impossible. Occasionally, the way that the data is distributed prevents the efficient use of Table Variables, and this will require using a temporary table instead. Tables Variables require some awareness by the developer about the potential hazards and how to avoid them. If you are not prepared to do any performance monitoring of your code or fine-tuning, and just want to pummel out stuff that ‘just runs’ without considering namby-pamby stuff such as indexes, then stick to Temporary tables. 
If you are likely to slosh about large numbers of rows in temporary tables without considering the niceties of processing just what is required and no more, then temporary tables provide a safer and less fragile means-to-an-end for you.
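
    For reference, the OPTION (RECOMPILE) pattern the article keeps returning to looks like this in outline (a sketch with made-up table and column names, not the article's own test script):

        DECLARE @a TABLE (id int PRIMARY KEY);
        DECLARE @b TABLE (id int PRIMARY KEY);
        -- ... populate @a and @b ...
        SELECT COUNT(*)
        FROM @a AS a
        JOIN @b AS b ON b.id = a.id
        OPTION (RECOMPILE);  -- lets the optimizer see the actual row counts at execution time

    Without the hint (or a useful PRIMARY KEY or UNIQUE constraint to join on), the optimizer has no statistics for the table variables and tends to assume very low row counts, which is how the poor plans described above arise.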

    Read the article

  • Real tortoises keep it slow and steady. How about the backups?

    - by Maria Zakourdaev
      … Four tortoises were playing in the backyard when they decided they needed hibiscus flower snacks. They pooled their money and sent the smallest tortoise out to fetch the snacks. Two days passed and there was no sign of the tortoise. "You know, she is taking a lot of time", said one of the tortoises. A little voice from just out side the fence said, "If you are going to talk that way about me I won't go." Is it too much to request from the quite expensive 3rd party backup tool to be a way faster than the SQL server native backup? Or at least save a respectable amount of storage by producing a really smaller backup files?  By saying “really smaller”, I mean at least getting a file in half size. After Googling the internet in an attempt to understand what other “sql people” are using for database backups, I see that most people are using one of three tools which are the main players in SQL backup area:  LiteSpeed by Quest SQL Backup by Red Gate SQL Safe by Idera The feedbacks about those tools are truly emotional and happy. However, while reading the forums and blogs I have wondered, is it possible that many are accustomed to using the above tools since SQL 2000 and 2005.  This can easily be understood due to the fact that a 300GB database backup for instance, using regular a SQL 2005 backup statement would have run for about 3 hours and have produced ~150GB file (depending on the content, of course).  Then you take a 3rd party tool which performs the same backup in 30 minutes resulting in a 30GB file leaving you speechless, you run to management persuading them to buy it due to the fact that it is definitely worth the price. In addition to the increased speed and disk space savings you would also get backup file encryption and virtual restore -  features that are still missing from the SQL server. But in case you, as well as me, don’t need these additional features and only want a tool that performs a full backup MUCH faster AND produces a far smaller backup file (like the gain you observed back in SQL 2005 days) you will be quite disappointed. SQL Server backup compression feature has totally changed the market picture. Medium size database. Take a look at the table below, check out how my SQL server 2008 R2 compares to other tools when backing up a 300GB database. It appears that when talking about the backup speed, SQL 2008 R2 compresses and performs backup in similar overall times as all three other tools. 3rd party tools maximum compression level takes twice longer. Backup file gain is not that impressive, except the highest compression levels but the price that you pay is very high cpu load and much longer time. Only SQL Safe by Idera was quite fast with it’s maximum compression level but most of the run time have used 95% cpu on the server. Note that I have used two types of destination storage, SATA 11 disks and FC 53 disks and, obviously, on faster storage have got my backup ready in half time. Looking at the above results, should we spend money, bother with another layer of complexity and software middle-man for the medium sized databases? I’m definitely not going to do so.  Very large database As a next phase of this benchmark, I have moved to a 6 terabyte database which was actually my main backup target. Note, how multiple files usage enables the SQL Server backup operation to use parallel I/O and remarkably increases it’s speed, especially when the backup device is heavily striped. 
SQL Server supports a maximum of 64 backup devices for a single backup operation but the most speed is gained when using one file per CPU, in the case above 8 files for a 2 Quad CPU server. The impact of additional files is minimal.  However, SQLsafe doesn’t show any speed improvement between 4 files and 8 files. Of course, with such huge databases every half percent of the compression transforms into the noticeable numbers. Saving almost 470GB of space may turn the backup tool into quite valuable purchase. Still, the backup speed and high CPU are the variables that should be taken into the consideration. As for us, the backup speed is more critical than the storage and we cannot allow a production server to sustain 95% cpu for such a long time. Bottomline, 3rd party backup tool developers, we are waiting for some breakthrough release. There are a few unanswered questions, like the restore speed comparison between different tools and the impact of multiple backup files on restore operation. Stay tuned for the next benchmarks.    Benchmark server: SQL Server 2008 R2 sp1 2 Quad CPU Database location: NetApp FC 15K Aggregate 53 discs Backup statements: No matter how good that UI is, we need to run the backup tasks from inside of SQL Server Agent to make sure they are covered by our monitoring systems. I have used extended stored procedures (command line execution also is an option, I haven’t noticed any impact on the backup performance). SQL backup LiteSpeed SQL Backup SQL safe backup database <DBNAME> to disk= '\\<networkpath>\par1.bak' , disk= '\\<networkpath>\par2.bak', disk= '\\<networkpath>\par3.bak' with format, compression EXECUTE master.dbo.xp_backup_database @database = N'<DBName>', @backupname= N'<DBName> full backup', @desc = N'Test', @compressionlevel=8, @filename= N'\\<networkpath>\par1.bak', @filename= N'\\<networkpath>\par2.bak', @filename= N'\\<networkpath>\par3.bak', @init = 1 EXECUTE master.dbo.sqlbackup '-SQL "BACKUP DATABASE <DBNAME> TO DISK= ''\\<networkpath>\par1.sqb'', DISK= ''\\<networkpath>\par2.sqb'', DISK= ''\\<networkpath>\par3.sqb'' WITH DISKRETRYINTERVAL = 30, DISKRETRYCOUNT = 10, COMPRESSION = 4, INIT"' EXECUTE master.dbo.xp_ss_backup @database = 'UCMSDB', @filename = '\\<networkpath>\par1.bak', @backuptype = 'Full', @compressionlevel = 4, @backupfile = '\\<networkpath>\par2.bak', @backupfile = '\\<networkpath>\par3.bak' If you still insist on using 3rd party tools for the backups in your production environment with maximum compression level, you will definitely need to consider limiting cpu usage which will increase the backup operation time even more: RedGate : use THREADPRIORITY option ( values 0 – 6 ) LiteSpeed : use  @throttle ( percentage, like 70%) SQL safe :  the only thing I have found was @Threads option.   Yours, Maria

    Read the article

  • Bouncing off a circular Boundary with multiple balls?

    - by Anarkie
    I am making a game like this : Yellow Smiley has to escape from red smileys, when yellow smiley hits the boundary game is over, when red smileys hit the boundary they should bounce back with the same angle they came, like shown below: Every 10 seconds a new red smiley comes in the big circle, when red smiley hits yellow, game is over, speed and starting angle of red smileys should be random. I control the yellow smiley with arrow keys. The biggest problem I have reflecting the red smileys from the boundary with the angle they came. I don't know how I can give a starting angle to a red smiley and bouncing it with the angle it came. I would be glad for any tips! My js source code : var canvas = document.getElementById("mycanvas"); var ctx = canvas.getContext("2d"); // Object containing some global Smiley properties. var SmileyApp = { radius: 15, xspeed: 0, yspeed: 0, xpos:200, // x-position of smiley ypos: 200 // y-position of smiley }; var SmileyRed = { radius: 15, xspeed: 0, yspeed: 0, xpos:350, // x-position of smiley ypos: 65 // y-position of smiley }; var SmileyReds = new Array(); for (var i=0; i<5; i++){ SmileyReds[i] = { radius: 15, xspeed: 0, yspeed: 0, xpos:350, // x-position of smiley ypos: 67 // y-position of smiley }; SmileyReds[i].xspeed = Math.floor((Math.random()*50)+1); SmileyReds[i].yspeed = Math.floor((Math.random()*50)+1); } function drawBigCircle() { var centerX = canvas.width / 2; var centerY = canvas.height / 2; var radiusBig = 300; ctx.beginPath(); ctx.arc(centerX, centerY, radiusBig, 0, 2 * Math.PI, false); // context.fillStyle = 'green'; // context.fill(); ctx.lineWidth = 5; // context.strokeStyle = '#003300'; // green ctx.stroke(); } function lineDistance( positionx, positiony ) { var xs = 0; var ys = 0; xs = positionx - 350; xs = xs * xs; ys = positiony - 350; ys = ys * ys; return Math.sqrt( xs + ys ); } function drawSmiley(x,y,r) { // outer border ctx.lineWidth = 3; ctx.beginPath(); ctx.arc(x,y,r, 0, 2*Math.PI); //red ctx.fillStyle="rgba(255,0,0, 0.5)"; ctx.fillStyle="rgba(255,255,0, 0.5)"; ctx.fill(); ctx.stroke(); // mouth ctx.beginPath(); ctx.moveTo(x+0.7*r, y); ctx.arc(x,y,0.7*r, 0, Math.PI, false); // eyes var reye = r/10; var f = 0.4; ctx.moveTo(x+f*r, y-f*r); ctx.arc(x+f*r-reye, y-f*r, reye, 0, 2*Math.PI); ctx.moveTo(x-f*r, y-f*r); ctx.arc(x-f*r+reye, y-f*r, reye, -Math.PI, Math.PI); // nose ctx.moveTo(x,y); ctx.lineTo(x, y-r/2); ctx.lineWidth = 1; ctx.stroke(); } function drawSmileyRed(x,y,r) { // outer border ctx.lineWidth = 3; ctx.beginPath(); ctx.arc(x,y,r, 0, 2*Math.PI); //red ctx.fillStyle="rgba(255,0,0, 0.5)"; //yellow ctx.fillStyle="rgba(255,255,0, 0.5)"; ctx.fill(); ctx.stroke(); // mouth ctx.beginPath(); ctx.moveTo(x+0.4*r, y+10); ctx.arc(x,y+10,0.4*r, 0, Math.PI, true); // eyes var reye = r/10; var f = 0.4; ctx.moveTo(x+f*r, y-f*r); ctx.arc(x+f*r-reye, y-f*r, reye, 0, 2*Math.PI); ctx.moveTo(x-f*r, y-f*r); ctx.arc(x-f*r+reye, y-f*r, reye, -Math.PI, Math.PI); // nose ctx.moveTo(x,y); ctx.lineTo(x, y-r/2); ctx.lineWidth = 1; ctx.stroke(); } // --- Animation of smiley moving with constant speed and bounce back at edges of canvas --- var tprev = 0; // this is used to calculate the time step between two successive calls of run function run(t) { requestAnimationFrame(run); if (t === undefined) { t=0; } var h = t - tprev; // time step tprev = t; SmileyApp.xpos += SmileyApp.xspeed * h/1000; // update position according to constant speed SmileyApp.ypos += SmileyApp.yspeed * h/1000; // update position according to constant speed for (var i=0; 
i<SmileyReds.length; i++){ SmileyReds[i].xpos += SmileyReds[i].xspeed * h/1000; // update position according to constant speed SmileyReds[i].ypos += SmileyReds[i].yspeed * h/1000; // update position according to constant speed } // change speed direction if smiley hits canvas edges if (lineDistance(SmileyApp.xpos, SmileyApp.ypos) + SmileyApp.radius > 300) { alert("Game Over"); } // redraw smiley at new position ctx.clearRect(0,0,canvas.height, canvas.width); drawBigCircle(); drawSmiley(SmileyApp.xpos, SmileyApp.ypos, SmileyApp.radius); for (var i=0; i<SmileyReds.length; i++){ drawSmileyRed(SmileyReds[i].xpos, SmileyReds[i].ypos, SmileyReds[i].radius); } } // uncomment these two lines to get every going // SmileyApp.speed = 100; run(); // --- Control smiley motion with left/right arrow keys function arrowkeyCB(event) { event.preventDefault(); if (event.keyCode === 37) { // left arrow SmileyApp.xspeed = -100; SmileyApp.yspeed = 0; } else if (event.keyCode === 39) { // right arrow SmileyApp.xspeed = 100; SmileyApp.yspeed = 0; } else if (event.keyCode === 38) { // up arrow SmileyApp.yspeed = -100; SmileyApp.xspeed = 0; } else if (event.keyCode === 40) { // right arrow SmileyApp.yspeed = 100; SmileyApp.xspeed = 0; } } document.addEventListener('keydown', arrowkeyCB, true); JSFiddle : http://jsfiddle.net/gj4Q7/
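
    For the bounce itself, the usual approach (a sketch written against the question's own fields, with the centre at 350,350 and radius 300 taken from the posted code) is to reflect the velocity about the circle's surface normal, v' = v - 2(v·n)n, which preserves the incoming angle:

        function bounceOffBoundary(s) {               // s is one entry of SmileyReds
            var nx = s.xpos - 350, ny = s.ypos - 350; // vector from circle centre to smiley
            var len = Math.sqrt(nx * nx + ny * ny);
            if (len + s.radius < 300) return;         // still inside the boundary
            nx /= len; ny /= len;                     // unit outward normal at the contact point
            var dot = s.xspeed * nx + s.yspeed * ny;
            s.xspeed -= 2 * dot * nx;                 // v' = v - 2(v.n)n
            s.yspeed -= 2 * dot * ny;
            var inside = 300 - s.radius;              // push it back inside so it can't stick
            s.xpos = 350 + nx * inside;
            s.ypos = 350 + ny * inside;
        }

    Calling bounceOffBoundary(SmileyReds[i]) inside the run() loop (where the code currently only checks the yellow smiley) handles the reflection, and a random launch needs only one angle: var a = Math.random() * 2 * Math.PI; then xspeed = speed * Math.cos(a) and yspeed = speed * Math.sin(a).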

    Read the article
