Search Results

Search found 14861 results on 595 pages for 'high speed computing'.


  • Speed/expense of SQLite query vs. List.contains() for "in-set" icon on list rows

    - by kpdvx
    An application I'm developing requires that the app maintain a local list of things, let's say books, in a local "library." Users can access their local library of books and search for books using a remote web service. The app is aware of other users of the app through this web service, and users can browse other users' lists of books in their libraries. Each book is identified by a unique bookId (represented as an int). When viewing books returned through a search result, or when viewing another user's book library, the individual list row cells need to visually represent whether the book is in the user's local library or not. A user can have at most 5,000 books in the library, stored in SQLite on the device (and synchronized with the remote web service).

    My question is: to determine whether the book shown in a list row is in the user's library, would it be better to ask SQLite directly (via SELECT COUNT(*)...) or to maintain, in memory, a List or int[] array containing the unique bookIds? In other words, on each row display do I query SQLite, or check whether the List or int[] array contains the unique bookId? Because the user can have at most 5,000 books and each bookId occupies 4 bytes, at most this would use ~20 kB.

    In thinking about this, and in typing it out, it seems obvious to me that it would be far better for performance to maintain a list or int[] array of in-library bookIds than to query SQLite. The only caveat to maintaining an int[] array is that if books are added or removed I'll need to grow or shrink the array by hand, so with this option I'll most likely use an ArrayList or Vector, though I'm not sure of the additional memory overhead of using Integer objects as opposed to primitives. Opinions, thoughts, suggestions?
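    A minimal sketch of the in-memory approach (not from the question; the class and method names are illustrative). A HashSet of boxed Integers costs more per element than a raw int[], but for 5,000 ids that is still well under a megabyte, and it gives O(1) membership tests per row instead of a per-row SELECT:

      import java.util.HashSet;
      import java.util.Set;

      // Hypothetical in-memory mirror of the "library" table.
      public class LibraryIndex {
          private final Set<Integer> bookIds = new HashSet<Integer>();

          // Load once from SQLite (e.g. SELECT book_id FROM library) at startup.
          public void load(Iterable<Integer> idsFromSqlite) {
              for (Integer id : idsFromSqlite) {
                  bookIds.add(id);
              }
          }

          // Keep the cache in sync when the user adds or removes a book.
          public void add(int bookId)    { bookIds.add(bookId); }
          public void remove(int bookId) { bookIds.remove(bookId); }

          // Called per list row to decide whether to draw the "in library" icon.
          public boolean isInLibrary(int bookId) {
              return bookIds.contains(bookId);
          }
      }

    Whether this actually beats SQLite on a given device still depends on how carefully the cache is kept in sync with the database, so treat it as a starting point rather than a verdict.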

    Read the article

  • How can I store large amount of data from a database to XML (speed problem, part three)?

    - by Andrija
    After getting some responses, the current situation is that I'm using this tip: http://www.ibm.com/developerworks/xml/library/x-tipbigdoc5.html (Listing 1. Turning ResultSets into XML), and XMLWriter for Java from http://www.megginson.com/downloads/ . Basically, it reads data from the database and writes it to a file as characters, using column names to create opening and closing tags. While doing so, I need to make two changes to the data, namely to the dates and the numbers.

      // Iterate over the set
      while (rs.next()) {
          w.startElement("row");
          for (int i = 0; i < count; i++) {
              Object ob = rs.getObject(i + 1);
              if (rs.wasNull()) {
                  ob = null;
              }
              String colName = meta.getColumnLabel(i + 1);
              if (ob != null) {
                  if (ob instanceof Timestamp) {
                      w.dataElement(colName, Util.formatDate((Timestamp) ob, dateFormat));
                  } else if (ob instanceof BigDecimal) {
                      w.dataElement(colName, Util.transformToHTML(new Integer(((BigDecimal) ob).intValue())));
                  } else {
                      w.dataElement(colName, ob.toString());
                  }
              } else {
                  w.emptyElement(colName);
              }
          }
          w.endElement("row");
      }

    The SQL that gets the results uses the to_number command (e.g. to_number(sif.ID) ID) and the to_date command (e.g. TO_DATE(sif.datum_do, 'DD.MM.RRRR') datum_do). The problems are that the returned date is a timestamp, meaning I don't get 14.02.2010 but rather 14.02.2010 00:00:000, so I have to format it to the dd.mm.yyyy format. The second problem is the numbers; for some reason they are in the database as varchar2 and can have leading zeroes that need to be stripped. I'm guessing I could do that in my SQL with the trim function, so Util.transformToHTML is unnecessary (for clarification, here's the method):

      public static String transformToHTML(Integer number) {
          String result = "";
          try {
              result = number.toString();
          } catch (Exception e) {}
          return result;
      }

    What I'd like to know is: a) Can I get the date in the format I want and skip the additional processing, thus shortening the processing time? b) Is there a better way to do this? We're talking about XML files in the 50 MB - 250 MB size range.
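    One way to answer (a), sketched here rather than taken from the thread: push the formatting into the Oracle query with TO_CHAR and TRIM (both standard Oracle functions; the table and column names simply echo the question), so the Java loop no longer needs the instanceof branches or the Util helpers. The snippet below reuses rs, w, count and meta from the code above, so it is a sketch rather than a standalone program:

      // Let the database return ready-formatted strings; 'sif' and its columns
      // come from the question's SQL, the rest is illustrative.
      String sql =
          "SELECT TO_CHAR(TO_DATE(sif.datum_do, 'DD.MM.RRRR'), 'DD.MM.YYYY') AS datum_do, " +
          "       TRIM(LEADING '0' FROM sif.id) AS id " +
          "  FROM sif";

      // The per-row handling then collapses to a single branch:
      while (rs.next()) {
          w.startElement("row");
          for (int i = 0; i < count; i++) {
              String value = rs.getString(i + 1);   // every column is already a string
              String colName = meta.getColumnLabel(i + 1);
              if (value != null) {
                  w.dataElement(colName, value);
              } else {
                  w.emptyElement(colName);
              }
          }
          w.endElement("row");
      }

    For (b), streaming row by row as above is already the memory-friendly approach for 50-250 MB output; wrapping the underlying FileWriter in a BufferedWriter is the other cheap win.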

    Read the article

  • Which relational databases exist with a public API for a high level language?

    - by Jens Schauder
    We typically interface with a RDBMS through SQL. I.e. we create a sql string and send it to the server through JDBC or ODBC or something similar. Are there any RDBMS that allow direct interfacing with the database engine through some API in Java, C#, C or similar? I would expect an API that allows constructs like this (in some arbitrary pseudo code): Iterator iter = engine.getIndex("myIndex").getReferencesForValue("23"); for (Reference ref: iter){ Row row = engine.getTable("mytable").getRow(ref); } I guess something like this is hidden somewhere in (and available from) open source databases, but I am looking for something that is officially supported as a public API, so one finds at least a note in the release notes, when it changes. In order to make this a question that actually has a 'best' answer: I prefer languages in the order given above and I will prefer mature APIs over prototypes and research work, although these are welcome as well.

    Read the article

  • Does allocation speed depend on the garbage collector being used?

    - by jkff
    My app is allocating a ton of objects (1 million per second; most objects are byte arrays of size ~80-100 and strings of the same size) and I think this might be the source of its poor performance. The app's working set is only tens of megabytes. Profiling the app shows that GC time is negligibly small. However, I suspect that the allocation procedure may depend on which GC is being used, and that some settings might make allocation faster or perhaps have a positive influence on the cache hit rate, etc. Is that so? Or is allocation performance independent of GC settings, under the assumption that garbage collection itself takes little time?
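    A rough way to test this directly (a sketch, not from the thread): run the same allocation loop under different collectors by switching JVM flags such as -XX:+UseSerialGC, -XX:+UseParallelGC or -XX:+UseG1GC, and compare the timings. Allocation in HotSpot is normally a cheap thread-local bump-the-pointer operation, but the experiment settles it for a specific workload:

      // Micro-benchmark sketch: run it several times per collector flag and compare.
      public class AllocBench {
          public static void main(String[] args) {
              byte[] sink = null;                  // keep a reference so the allocation isn't trivially removed
              long start = System.nanoTime();
              for (int i = 0; i < 10_000_000; i++) {
                  sink = new byte[96];             // roughly the 80-100 byte objects from the question
              }
              long elapsedMs = (System.nanoTime() - start) / 1_000_000;
              System.out.println(elapsedMs + " ms, last length " + sink.length);
          }
      }

    The absolute numbers from a loop like this are crude, but the relative differences between collector flags are what answer the question.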

    Read the article

  • 2D Game: Fast(est) way to find x closest entities for another entity - huge amount of entities, high

    - by Pygmy
    I'm working on a 2D game that has a huge number of dynamic entities. For fun's sake, let's call them soldiers, and let's say there are 50,000 of them (which I just randomly thought up; it might be much more or much less :)). All these soldiers are moving every frame according to rules - think boids / flocking / steering behaviour. For each soldier, to update its movement I need the X soldiers that are closest to the one I'm processing. What would be the best spatial hierarchy to store them in to facilitate calculations like this without too much overhead? (All entities are updated/moved every frame, so it has to handle dynamic entities very well.)
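    A common answer for fully dynamic entities is a uniform grid (spatial hash) that is rebuilt each frame, since insertion is O(1) and neighbour queries only touch the 3x3 block of cells around an entity instead of all 50,000. The sketch below is illustrative, not from the question: Soldier and cellSize are assumptions, and the cell size should be on the order of the query radius:

      import java.util.ArrayList;
      import java.util.HashMap;
      import java.util.List;
      import java.util.Map;

      class Soldier { float x, y; }

      class SpatialGrid {
          private final float cellSize;
          private final Map<Long, List<Soldier>> cells = new HashMap<>();

          SpatialGrid(float cellSize) { this.cellSize = cellSize; }

          // Pack the integer cell coordinates into one map key.
          private long key(float x, float y) {
              long cx = (long) Math.floor(x / cellSize);
              long cy = (long) Math.floor(y / cellSize);
              return (cx << 32) ^ (cy & 0xffffffffL);
          }

          void clear() { cells.clear(); }          // call once per frame before re-inserting

          void insert(Soldier s) {
              cells.computeIfAbsent(key(s.x, s.y), k -> new ArrayList<>()).add(s);
          }

          // Collect candidates from the surrounding 3x3 cells; sort/trim by distance afterwards.
          List<Soldier> neighbours(Soldier s) {
              List<Soldier> result = new ArrayList<>();
              long cx = (long) Math.floor(s.x / cellSize);
              long cy = (long) Math.floor(s.y / cellSize);
              for (long dx = -1; dx <= 1; dx++) {
                  for (long dy = -1; dy <= 1; dy++) {
                      List<Soldier> cell = cells.get(((cx + dx) << 32) ^ ((cy + dy) & 0xffffffffL));
                      if (cell != null) result.addAll(cell);
                  }
              }
              return result;
          }
      }

    For the "X closest" part, sort the returned candidates by squared distance and take the first X; if the 3x3 block comes back with too few soldiers, widen the search ring by one cell and repeat.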

    Read the article

  • set arraylist element as null

    - by Jessy
    The first index is set to null (empty), but it doesn't print the right output, why? //set the first index as null and the rest as "High" String a []= {null,"High","High","High","High","High"}; //add array to arraylist ArrayList<Object> choice = new ArrayList<Object>(Arrays.asList(a)); for(int i=0; i<choice.size(); i++){ if(i==0){ if(choice.get(0).equals(null)) System.out.println("I am empty"); //it doesn't print this output } }
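    A likely explanation (not stated in the excerpt): choice.get(0) returns null, and calling .equals() on a null reference throws a NullPointerException instead of evaluating to true, so the println is never reached. Comparing with == avoids the method call entirely; a minimal sketch:

      import java.util.ArrayList;
      import java.util.Arrays;

      public class NullCheckExample {
          public static void main(String[] args) {
              String[] a = {null, "High", "High", "High", "High", "High"};
              ArrayList<Object> choice = new ArrayList<Object>(Arrays.asList(a));

              if (choice.get(0) == null) {           // == is the correct null test
                  System.out.println("I am empty");  // now prints
              }
          }
      }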

    Read the article

  • What's the best CDN for image hosting on a high-volume web site?

    - by Mike
    Akamai is way too expensive. Photobucket is not reliable. Is there a great content delivery network that I can use just to host my images? We deploy images programmatically via FTP, so there is some programming behind the scenes. Having some sort of reporting about the reliability of the service, whether it's raw log files or a web-based admin screen that shows HTTP errors, would also be important. Has anyone worked with EdgeCast?

    Read the article

  • Is learning ed worth it to boost my speed in VIM?

    - by Ksiresh
    I've learned the basic/intermediate levels of VIM (it's too vast to list). I often find that I slip back to my old ways and start using the mouse, holding down keys to get somewhere, and doing other stupid things that could be sped up. Would it be worth spending time learning ed to break the habits learned from years in Windoze? Does using ed cultivate the right type of thinking that will transfer to VIM?

    Read the article

  • How to speed up saving a UIImagePickerController image from the camera to the filesystem via UIImagePNGRepresentation()?

    - by kazuhito0000
    I'm making an application that lets users take a photo and shows it both as a thumbnail and in a photo viewer. I have an NSManagedObject class called photo, and photo has a method that takes a UIImage, converts it to PNG using UIImagePNGRepresentation(), and saves it to the filesystem. After this operation, I resize the image to thumbnail size and save that as well. The problem is that UIImagePNGRepresentation() and the image resizing seem to be really slow, and I don't know if this is the right way to do it. Please tell me if anyone knows a better way to accomplish this. Thank you in advance.

    Read the article

  • How do I prevent a <td> from being too high?

    - by Cornflake
    It must be something stupid, but I can't figure it out so far... Here is my HTML:

      <table cellspacing="0" cellpadding="0" border="0">
        <tr>
          <td style="height: 8px"><img src="/media/note2.png" width="8" height="8" border="0"></td>
          <td style="height: 8px"></td>
          <td style="height: 8px"><img src="/media/note1.png" width="8" height="8" border="0"></td>
        </tr>
        <tr>
          <td class="NoteCell"></td>
          <td class="NoteCell">{{ text }}</td>
          <td class="NoteCell"></td>
        </tr>
        <tr>
          <td style="height: 8px"><img src="/media/note4.png" width="8" height="8" border="0"></td>
          <td style="height: 8px"></td>
          <td style="height: 8px"><img src="/media/note3.png" width="8" height="8" border="0"></td>
        </tr>
      </table>

    I'm expecting the first and third rows to have a height of 8 pixels, but for some reason they are much taller (as if there were text inside, but there is no text!). Puzzled... Any help will be appreciated!

    Read the article

  • How do I test the speed of a MySQL query?

    - by Chris
    I have a SELECT query like the one below...

      $sql = "SELECT * FROM notifications WHERE to_id='".$userid."' AND (alert_read != '1' OR user_read != '1') ORDER BY alert_time DESC";
      $result = mysql_query($sql);

    How do I test how long the query took to run?
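    One simple way is to take a timestamp before and after running the query. The question's code is PHP (where microtime(true) plays the same role), but here is the idea as a Java/JDBC sketch; the JDBC URL, credentials and user id are placeholders:

      import java.sql.Connection;
      import java.sql.DriverManager;
      import java.sql.PreparedStatement;
      import java.sql.ResultSet;

      public class QueryTimer {
          public static void main(String[] args) throws Exception {
              try (Connection con = DriverManager.getConnection(
                      "jdbc:mysql://localhost/app", "user", "password");
                   PreparedStatement ps = con.prepareStatement(
                      "SELECT * FROM notifications WHERE to_id = ? " +
                      "AND (alert_read != '1' OR user_read != '1') ORDER BY alert_time DESC")) {
                  ps.setInt(1, 42);                  // placeholder user id
                  long start = System.nanoTime();
                  try (ResultSet rs = ps.executeQuery()) {
                      while (rs.next()) { /* consume rows so fetch time is included */ }
                  }
                  System.out.printf("query took %.1f ms%n", (System.nanoTime() - start) / 1e6);
              }
          }
      }

    MySQL can also report timings itself: enable profiling with SET profiling = 1 and then run SHOW PROFILES after the query.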

    Read the article

  • Why doesn't list.get(0).equals(null) work?

    - by Jessy
    The first index is set to null (empty), but it doesn't print the right output, why? //set the first index as null and the rest as "High" String a []= {null,"High","High","High","High","High"}; //add array to arraylist ArrayList<Object> choice = new ArrayList<Object>(Arrays.asList(a)); for(int i=0; i<choice.size(); i++){ if(i==0){ if(choice.get(0).equals(null)) System.out.println("I am empty"); //it doesn't print this output } }

    Read the article

  • Speed of interpolation algorithms, C# and C++ working together.

    - by Kaminari
    Hello. I need a fast implementation of popular interpolation algorithms. I figured out that C# will be much slower than C++ for such simple algorithms, so I'm thinking of writing some native code and using it in my C# GUI. First of all I ran some tests: a few operations on a 1024x1024x3 matrix took 32 ms in C# and 4 ms in C++, and that's basically what I need. Interpolation, however, is not really the right word, because I need it only for downscaling. But the question is: will it be faster than the C# methods in Drawing2D?

      Image outputImage = new Bitmap(destWidth, destHeight, PixelFormat.Format24bppRgb);
      Graphics grPhoto = Graphics.FromImage(outputImage);
      grPhoto.InterpolationMode = InterpolationMode.*; //all of them
      grPhoto.DrawImage(bmp,
          new Rectangle(0, 0, destWidth, destHeight),
          new Rectangle(0, 0, sourceWidth, sourceHeight),
          GraphicsUnit.Pixel);
      grPhoto.Dispose();

    Some of these methods run in 20 ms and some in 80. Is there a way to do it faster?

    Read the article

  • How to copy bytes from buffer into the managed struct?

    - by Chupo_cro
    I have a problem with getting the code to work in a managed environment (VS2008 C++/CLI Win Forms App). The problem is I cannot declare the unmanaged struct (is that even possible?) inside the managed code, so I've declared a managed struct but now I have a problem how to copy bytes from buffer into that struct. Here is the pure C++ code that obviously works as expected: typedef struct GPS_point { float point_unknown_1; float latitude; float longitude; float altitude; // x10000 float time; int point_unknown_2; int speed; // x100 int manually_logged_point; // flag (1 --> point logged manually) } track_point; int offset = 0; int filesize = 256; // simulates filesize int point_num = 10; // simulates number of records int main () { char *buffer_dyn = new char[filesize]; // allocate RAM // here, the file would have been read into the buffer buffer_dyn[0xa8] = 0x1e; // simulates the speed data (1e 00 00 00) buffer_dyn[0xa9] = 0x00; buffer_dyn[0xaa] = 0x00; buffer_dyn[0xab] = 0x00; offset = 0x90; // if the data with this offset is transfered trom buffer // to struct, int speed is alligned with the buffer at the // offset of 0xa8 track_point *points = new track_point[point_num]; points[0].speed = 0xff; // (debug) it should change into 0x1e memcpy(&points[0],buffer_dyn+offset,32); cout << "offset: " << offset << "\r\n"; //cout << "speed: " << points[0].speed << "\r\n"; printf ("speed : 0x%x\r\n",points[0].speed); printf("byte at offset 0xa8: 0x%x\r\n",(unsigned char)buffer_dyn[0xa8]); // should be 0x1e delete[] buffer_dyn; // release RAM delete[] points; /* What I need is to rewrite the lines 29 and 31 to work in the managed code (VS2008 Win Forms C++/CLI) What should I have after: array<track_point^>^ points = gcnew array<track_point^>(point_num); so I can copy 32 bytes from buffer_dyn to the managed struct declared as typedef ref struct GPS_point { float point_unknown_1; float latitude; float longitude; float altitude; // x10000 float time; int point_unknown_2; int speed; // x100 int manually_logged_point; // flag (1 --> point logged manually) } track_point; */ return 0; } Here is the paste to codepad.org so it can be seen the code is OK. What I need is to rewrite these two lines: track_point *points = new track_point[point_num]; memcpy(&points[0],buffer_dyn+offset,32); to something that will work in a managed application. I wrote: array<track_point^>^ points = gcnew array<track_point^>(point_num); and now trying to reproduce the described copying of the data from buffer over the struct, but haven't any idea how it should be done. Alternatively, if there is a way to use an unmanaged struct in the same way shown in my code, then I would like to avoid working with managed struct.

    Read the article

  • Slow loop in R, any suggestion to speed it up?

    - by cfceric
    I have a data frame "m" as shown below: I am trying to find each account's most active month (the one with the most V1). For example, for account "2" it will be month 6, for account "3" it will be month 1, and so on. I wrote the loop below; it works fine but takes a long time even though I only used 8,000 rows, and the whole data set has 250,000 rows, so the code below is not usable. Can anyone suggest a better way to achieve this? Thanks a lot.
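    The data frame and loop were cut from this excerpt, so only the general idea can be sketched: replace the per-account loop with a single pass that counts V1 per (account, month) and keeps the best month for each account. The sketch below uses Java (the question is about R, where aggregate() or data.table would do the same job); the toy rows are made up:

      import java.util.HashMap;
      import java.util.Map;

      public class BusiestMonth {
          public static void main(String[] args) {
              // toy rows: {account, month} pairs, each standing for one V1 occurrence
              int[][] rows = { {2, 6}, {2, 6}, {2, 3}, {3, 1}, {3, 1}, {3, 2} };

              // one pass: account -> (month -> count)
              Map<Integer, Map<Integer, Integer>> counts = new HashMap<>();
              for (int[] r : rows) {
                  counts.computeIfAbsent(r[0], k -> new HashMap<>())
                        .merge(r[1], 1, Integer::sum);
              }

              // pick the month with the highest count per account
              for (Map.Entry<Integer, Map<Integer, Integer>> e : counts.entrySet()) {
                  int bestMonth = -1, best = -1;
                  for (Map.Entry<Integer, Integer> m : e.getValue().entrySet()) {
                      if (m.getValue() > best) { best = m.getValue(); bestMonth = m.getKey(); }
                  }
                  System.out.println("account " + e.getKey() + " -> month " + bestMonth);
              }
          }
      }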

    Read the article

  • 10 Windows Tweaking Myths Debunked

    - by Chris Hoffman
    Windows is big, complicated, and misunderstood. You’ll still stumble across bad advice from time to time when browsing the web. These Windows tweaking, performance, and system maintenance tips are mostly just useless, but some are actively harmful. Luckily, most of these myths have been stomped out on mainstream sites and forums. However, if you start searching the web, you’ll still find websites that recommend you do these things.

    Erase Cache Files Regularly to Speed Things Up

    You can free up disk space by running an application like CCleaner, another temporary-file-cleaning utility, or even the Windows Disk Cleanup tool. In some cases, you may even see an old computer speed up when you erase a large amount of useless files. However, running CCleaner or similar utilities every day to erase your browser’s cache won’t actually speed things up. It will slow down your web browsing as your web browser is forced to redownload the files all over again, and reconstruct the cache you regularly delete. If you’ve installed CCleaner or a similar program and run it every day with the default settings, you’re actually slowing down your web browsing. Consider at least preventing the program from wiping out your web browser cache.

    Enable ReadyBoost to Speed Up Modern PCs

    Windows still prompts you to enable ReadyBoost when you insert a USB stick or memory card. On modern computers, this is completely pointless — ReadyBoost won’t actually speed up your computer if you have at least 1 GB of RAM. If you have a very old computer with a tiny amount of RAM — think 512 MB — ReadyBoost may help a bit. Otherwise, don’t bother.

    Open the Disk Defragmenter and Manually Defragment

    On Windows 98, users had to manually open the defragmentation tool and run it, ensuring no other applications were using the hard drive while it did its work. Modern versions of Windows are capable of defragmenting your file system while other programs are using it, and they automatically defragment your disks for you. If you’re still opening the Disk Defragmenter every week and clicking the Defragment button, you don’t need to do this — Windows is doing it for you unless you’ve told it not to run on a schedule. Modern computers with solid-state drives don’t have to be defragmented at all.

    Disable Your Pagefile to Increase Performance

    When Windows runs out of empty space in RAM, it swaps out data from memory to a pagefile on your hard disk. If a computer doesn’t have much memory and it’s running slow, it’s probably moving data to the pagefile or reading data from it. Some Windows geeks seem to think that the pagefile is bad for system performance and disable it completely. The argument seems to be that Windows can’t be trusted to manage a pagefile and won’t use it intelligently, so the pagefile needs to be removed. As long as you have enough RAM, it’s true that you can get by without a pagefile. However, if you do have enough RAM, Windows will only use the pagefile rarely anyway. Tests have found that disabling the pagefile offers no performance benefit.

    Enable CPU Cores in MSConfig

    Some websites claim that Windows may not be using all of your CPU cores or that you can speed up your boot time by increasing the amount of cores used during boot. They direct you to the MSConfig application, where you can indeed select an option that appears to increase the amount of cores used. In reality, Windows always uses the maximum amount of processor cores your CPU has. (Technically, only one core is used at the beginning of the boot process, but the additional cores are quickly activated.) Leave this option unchecked. It’s just a debugging option that allows you to set a maximum number of cores, so it would be useful if you wanted to force Windows to only use a single core on a multi-core system — but all it can do is restrict the amount of cores used.

    Clean Your Prefetch To Increase Startup Speed

    Windows watches the programs you run and creates .pf files in its Prefetch folder for them. The Prefetch feature works as a sort of cache — when you open an application, Windows checks the Prefetch folder, looks at the application’s .pf file (if it exists), and uses that as a guide to start preloading data that the application will use. This helps your applications start faster. Some Windows geeks have misunderstood this feature. They believe that Windows loads these files at boot, so your boot time will slow down due to Windows preloading the data specified in the .pf files. They also argue you’ll build up useless files as you uninstall programs and .pf files will be left over. In reality, Windows only loads the data in these .pf files when you launch the associated application and only stores .pf files for the 128 most recently launched programs. If you were to regularly clean out the Prefetch folder, not only would programs take longer to open because they won’t be preloaded, Windows will have to waste time recreating all the .pf files. You could also modify the PrefetchParameters setting to disable Prefetch, but there’s no reason to do that. Let Windows manage Prefetch on its own.

    Disable QoS To Increase Network Bandwidth

    Quality of Service (QoS) is a feature that allows your computer to prioritize its traffic. For example, a time-critical application like Skype could choose to use QoS and prioritize its traffic over a file-downloading program so your voice conversation would work smoothly, even while you were downloading files. Some people incorrectly believe that QoS always reserves a certain amount of bandwidth and this bandwidth is unused until you disable it. This is untrue. In reality, 100% of bandwidth is normally available to all applications unless a program chooses to use QoS. Even if a program does choose to use QoS, the reserved space will be available to other programs unless the program is actively using it. No bandwidth is ever set aside and left empty.

    Set DisablePagingExecutive to Make Windows Faster

    The DisablePagingExecutive registry setting is set to 0 by default, which allows drivers and system code to be paged to the disk. When set to 1, drivers and system code will be forced to stay resident in memory. Once again, some people believe that Windows isn’t smart enough to manage the pagefile on its own and believe that changing this option will force Windows to keep important files in memory rather than stupidly paging them out. If you have more than enough memory, changing this won’t really do anything. If you have little memory, changing this setting may force Windows to push programs you’re using to the page file rather than push unused system files there — this would slow things down. This is an option that may be helpful for debugging in some situations, not a setting to change for more performance.

    Process Idle Tasks to Free Memory

    Windows does things, such as creating scheduled system restore points, when you step away from your computer. It waits until your computer is “idle” so it won’t slow your computer and waste your time while you’re using it. Running the “Rundll32.exe advapi32.dll,ProcessIdleTasks” command forces Windows to perform all of these tasks while you’re using the computer. This is completely pointless and won’t help free memory or anything like that — all you’re doing is forcing Windows to slow your computer down while you’re using it. This command only exists so benchmarking programs can force idle tasks to run before performing benchmarks, ensuring idle tasks don’t start running and interfere with the benchmark.

    Delay or Disable Windows Services

    There’s no real reason to disable Windows services anymore. There was a time when Windows was particularly heavy and computers had little memory — think Windows Vista and those “Vista Capable” PCs Microsoft was sued over. Modern versions of Windows like Windows 7 and 8 are lighter than Windows Vista and computers have more than enough memory, so you won’t see any improvements from disabling system services included with Windows. Some people argue for not disabling services, however — they recommend setting services from “Automatic” to “Automatic (Delayed Start)”. By default, the Delayed Start option just starts services two minutes after the last “Automatic” service starts. Setting services to Delayed Start won’t really speed up your boot time, as the services will still need to start — in fact, it may lengthen the time it takes to get a usable desktop as services will still be loading two minutes after booting. Most services can load in parallel, and loading the services as early as possible will result in a better experience. The “Delayed Start” feature is primarily useful for system administrators who need to ensure a specific service starts later than another service.

    If you ever find a guide that recommends you set a little-known registry setting to improve performance, take a closer look — the change is probably useless. Want to actually speed up your PC? Try disabling useless startup programs that run on boot, increasing your boot time and consuming memory in the background. This is a much better tip than doing any of the above, especially considering most Windows PCs come packed to the brim with bloatware.

    Read the article

  • Real tortoises keep it slow and steady. How about the backups?

    - by Maria Zakourdaev
    … Four tortoises were playing in the backyard when they decided they needed hibiscus flower snacks. They pooled their money and sent the smallest tortoise out to fetch the snacks. Two days passed and there was no sign of the tortoise. "You know, she is taking a lot of time", said one of the tortoises. A little voice from just outside the fence said, "If you are going to talk that way about me I won't go."

    Is it too much to ask that a quite expensive 3rd party backup tool be way faster than the SQL Server native backup? Or at least save a respectable amount of storage by producing really smaller backup files? By saying "really smaller", I mean at least getting a file half the size. After Googling the internet in an attempt to understand what other "sql people" are using for database backups, I see that most people are using one of three tools, which are the main players in the SQL backup area:

      - LiteSpeed by Quest
      - SQL Backup by Red Gate
      - SQL Safe by Idera

    The feedback about those tools is truly emotional and happy. However, while reading the forums and blogs I have wondered: is it possible that many became accustomed to using the above tools back in SQL 2000 and 2005? This can easily be understood, due to the fact that a 300GB database backup, for instance, using a regular SQL 2005 backup statement would have run for about 3 hours and produced a ~150GB file (depending on the content, of course). Then you take a 3rd party tool which performs the same backup in 30 minutes resulting in a 30GB file, leaving you speechless; you run to management persuading them to buy it because it is definitely worth the price. In addition to the increased speed and disk space savings you would also get backup file encryption and virtual restore - features that are still missing from SQL Server. But in case you, as well as me, don't need these additional features and only want a tool that performs a full backup MUCH faster AND produces a far smaller backup file (like the gain you observed back in the SQL 2005 days), you will be quite disappointed. The SQL Server backup compression feature has totally changed the market picture.

    Medium size database

    Take a look at the table below; check out how my SQL Server 2008 R2 compares to the other tools when backing up a 300GB database. It appears that when talking about backup speed, SQL 2008 R2 compresses and performs the backup in similar overall times as all three other tools. The 3rd party tools' maximum compression level takes twice as long. The backup file gain is not that impressive, except at the highest compression levels, but the price you pay is very high CPU load and a much longer run time. Only SQL Safe by Idera was quite fast at its maximum compression level, but for most of the run time it used 95% CPU on the server. Note that I have used two types of destination storage, SATA 11 disks and FC 53 disks, and, obviously, on the faster storage I got my backup ready in half the time. Looking at the above results, should we spend money and bother with another layer of complexity and a software middle-man for medium sized databases? I'm definitely not going to do so.

    Very large database

    As the next phase of this benchmark, I have moved to a 6 terabyte database, which was actually my main backup target. Note how using multiple files enables the SQL Server backup operation to use parallel I/O and remarkably increases its speed, especially when the backup device is heavily striped. SQL Server supports a maximum of 64 backup devices for a single backup operation, but the most speed is gained when using one file per CPU - in the case above, 8 files for a 2 Quad CPU server. The impact of additional files is minimal. However, SQLsafe doesn't show any speed improvement between 4 files and 8 files. Of course, with such huge databases every half percent of compression translates into noticeable numbers. Saving almost 470GB of space may turn the backup tool into quite a valuable purchase. Still, the backup speed and high CPU are the variables that should be taken into consideration. As for us, the backup speed is more critical than the storage, and we cannot allow a production server to sustain 95% CPU for such a long time. Bottom line, 3rd party backup tool developers: we are waiting for some breakthrough release. There are a few unanswered questions, like the restore speed comparison between the different tools and the impact of multiple backup files on the restore operation. Stay tuned for the next benchmarks.

    Benchmark server: SQL Server 2008 R2 SP1, 2 Quad CPU
    Database location: NetApp FC 15K Aggregate, 53 discs

    Backup statements: No matter how good the UI is, we need to run the backup tasks from inside SQL Server Agent to make sure they are covered by our monitoring systems. I have used extended stored procedures (command line execution is also an option; I haven't noticed any impact on the backup performance).

      SQL backup:
        backup database <DBNAME> to disk= '\\<networkpath>\par1.bak', disk= '\\<networkpath>\par2.bak', disk= '\\<networkpath>\par3.bak' with format, compression

      LiteSpeed:
        EXECUTE master.dbo.xp_backup_database @database = N'<DBName>', @backupname= N'<DBName> full backup', @desc = N'Test', @compressionlevel=8, @filename= N'\\<networkpath>\par1.bak', @filename= N'\\<networkpath>\par2.bak', @filename= N'\\<networkpath>\par3.bak', @init = 1

      SQL Backup:
        EXECUTE master.dbo.sqlbackup '-SQL "BACKUP DATABASE <DBNAME> TO DISK= ''\\<networkpath>\par1.sqb'', DISK= ''\\<networkpath>\par2.sqb'', DISK= ''\\<networkpath>\par3.sqb'' WITH DISKRETRYINTERVAL = 30, DISKRETRYCOUNT = 10, COMPRESSION = 4, INIT"'

      SQL safe:
        EXECUTE master.dbo.xp_ss_backup @database = 'UCMSDB', @filename = '\\<networkpath>\par1.bak', @backuptype = 'Full', @compressionlevel = 4, @backupfile = '\\<networkpath>\par2.bak', @backupfile = '\\<networkpath>\par3.bak'

    If you still insist on using 3rd party tools for the backups in your production environment with the maximum compression level, you will definitely need to consider limiting CPU usage, which will increase the backup operation time even more:

      - RedGate: use the THREADPRIORITY option (values 0 - 6)
      - LiteSpeed: use @throttle (percentage, like 70%)
      - SQL safe: the only thing I have found was the @Threads option.

    Yours, Maria

    Read the article
