Search Results

Search found 3296 results on 132 pages for 'executable compression'.


  • Helping to Reduce Page Compression Failures Rate

    - by Vasil Dimov
    When InnoDB compresses a page it needs the result to fit into its predetermined compressed page size (specified with KEY_BLOCK_SIZE). When the result does not fit, we call that a compression failure. In this case InnoDB needs to split up the page and try to compress again. Needless to say, compression failures are bad for performance and should be minimized.

    Whether the result of the compression will fit largely depends on the data being compressed, and some tables and/or indexes may contain more compressible data than others. So it would be nice if the compression failure rate, along with other compression stats, could be monitored on a per-table or even a per-index basis, wouldn't it?

    This is where the new INFORMATION_SCHEMA table in MySQL 5.6 kicks in. INFORMATION_SCHEMA.INNODB_CMP_PER_INDEX provides exactly this helpful information. It contains the following fields:

        +-----------------+--------------+------+
        | Field           | Type         | Null |
        +-----------------+--------------+------+
        | database_name   | varchar(192) | NO   |
        | table_name      | varchar(192) | NO   |
        | index_name      | varchar(192) | NO   |
        | compress_ops    | int(11)      | NO   |
        | compress_ops_ok | int(11)      | NO   |
        | compress_time   | int(11)      | NO   |
        | uncompress_ops  | int(11)      | NO   |
        | uncompress_time | int(11)      | NO   |
        +-----------------+--------------+------+

    This is similar to INFORMATION_SCHEMA.INNODB_CMP, but this time the data is grouped by "database_name,table_name,index_name" instead of by "page_size". So a query like

        SELECT database_name, table_name, index_name,
               compress_ops - compress_ops_ok AS failures
        FROM information_schema.innodb_cmp_per_index
        ORDER BY failures DESC;

    would reveal the most problematic tables and indexes, the ones with the highest compression failure rate. From there, the way to improve performance would be to try increasing the compressed page size, or changing the structure of the tables/indexes or the data being stored, and to see whether that has a positive impact on performance.

    Read the article

  • Using VB6 + WSH with Windows Compression

    - by OneNerd
    Having trouble with WSH and Windows Compression. My goal is to be able to zip up files (not folders, but individual files from various locations, which I have stored in an array) using the built-in Windows compression. I am using VB6. Here is my routine (VB6 code):

        Dim objShell
        Dim objFolder
        Set objShell = CreateObject("Shell.Application")
        Set objFolder = objShell.Namespace(savePath & "\export.zip")

        ' loop through array holding files to zip
        For i = 0 To filePointer
            objFolder.CopyHere (filesToZip(i))
        Next

        Set objShell = Nothing
        Set objFolder = Nothing

    It works, but issues arise when there are more than a few files. I start getting errors from Windows (presumably it's calling the compression too fast, and the zip file is locked). I can't seem to figure out how to wait until the CopyHere function completes before calling the next one, to avoid issues. Does anyone have any experience with this? Thanks
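
    Since CopyHere is asynchronous, the usual workaround is to poll the archive's item count between files. Below is a sketch of that idea in Python via pywin32 (assumed installed; the file list is hypothetical, and the empty-zip bootstrap writes the minimal 22-byte end-of-central-directory record); the same polling loop can be written in VB6 against the same objects:

        import os
        import time

        import win32com.client  # pywin32

        zip_path = os.path.abspath('export.zip')
        files_to_zip = [r'C:\data\a.txt', r'C:\data\b.txt']  # hypothetical

        # Shell.Application only treats the zip as a folder if it already
        # exists; an empty zip is just the end-of-central-directory record.
        if not os.path.exists(zip_path):
            with open(zip_path, 'wb') as f:
                f.write(b'PK\x05\x06' + b'\x00' * 18)

        shell = win32com.client.Dispatch('Shell.Application')
        zip_folder = shell.NameSpace(zip_path)

        for i, path in enumerate(files_to_zip, start=1):
            zip_folder.CopyHere(path)
            # CopyHere returns immediately; wait until the item count catches
            # up before queuing the next file, so the archive is never locked.
            while zip_folder.Items().Count < i:
                time.sleep(0.2)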

    Read the article

  • SQL Server 2008 Compression

    - by Peter Larsson
    Hi! Today I am going to talk about compression in SQL Server 2008. The data warehouse I currently design and develop holds historical data back to 1973. The data warehouse itself will get another blog post later, due to its complexity. The server has 60 GB of memory (of which 48 is dedicated to the SQL Server service), so not all data fits in memory, and the SAN is not the fastest one around. So I decided to give compression a go, since we use Enterprise Edition anyway. This is the code I use to generate the statements that compress all tables with PAGE compression:

        DECLARE @SQL VARCHAR(MAX)

        DECLARE curTables CURSOR FOR
                SELECT  'ALTER TABLE ' + QUOTENAME(OBJECT_SCHEMA_NAME(object_id))
                        + '.' + QUOTENAME(OBJECT_NAME(object_id))
                        + ' REBUILD PARTITION = ALL WITH (DATA_COMPRESSION = PAGE)'
                FROM    sys.tables

        OPEN    curTables

        FETCH   NEXT FROM curTables INTO @SQL

        WHILE @@FETCH_STATUS = 0
            BEGIN
                IF @SQL IS NOT NULL
                    RAISERROR(@SQL, 10, 1) WITH NOWAIT

                FETCH   NEXT FROM curTables INTO @SQL
            END

        CLOSE       curTables
        DEALLOCATE  curTables

    Copy and paste the result to a new code window and execute the statements. One thing I noticed when doing this is that the database grows by the same size as the table being rebuilt. If the database cannot grow by this size, the operation fails. For me, it first ended up with orphaned connections. Not good. And this is the code I use to create the index compression statements:

        DECLARE @SQL VARCHAR(MAX)

        DECLARE curIndexes CURSOR FOR
                SELECT      'ALTER INDEX ' + QUOTENAME(name)
                            + ' ON '
                            + QUOTENAME(OBJECT_SCHEMA_NAME(object_id))
                            + '.'
                            + QUOTENAME(OBJECT_NAME(object_id))
                            + ' REBUILD PARTITION = ALL WITH (FILLFACTOR = 100, DATA_COMPRESSION = PAGE)'
                FROM        sys.indexes
                WHERE       OBJECTPROPERTY(object_id, 'IsMSShipped') = 0
                            AND OBJECTPROPERTY(object_id, 'IsTable') = 1
                ORDER BY    CASE type_desc
                                WHEN 'CLUSTERED' THEN 1
                                ELSE 2
                            END

        OPEN    curIndexes

        FETCH   NEXT FROM curIndexes INTO @SQL

        WHILE @@FETCH_STATUS = 0
            BEGIN
                IF @SQL IS NOT NULL
                    RAISERROR(@SQL, 10, 1) WITH NOWAIT

                FETCH   NEXT FROM curIndexes INTO @SQL
            END

        CLOSE       curIndexes
        DEALLOCATE  curIndexes

    When this was done, I noticed that the 90 GB database was now only 17 GB. And most importantly, the complete database could now reside in memory! After this I took care of the administrative tasks, backups. Here I copied the code from Management Studio because I didn't want to spend too much time on this. The code looks like this (notice the COMPRESSION option):

        BACKUP DATABASE [Yoda]
        TO              DISK = N'D:\Fileshare\Backup\Yoda.bak'
        WITH            NOFORMAT,
                        INIT,
                        NAME = N'Yoda - Full Database Backup',
                        SKIP,
                        NOREWIND,
                        NOUNLOAD,
                        COMPRESSION,
                        STATS = 10,
                        CHECKSUM
        GO

        DECLARE @BackupSetID INT

        SELECT  @BackupSetID = Position
        FROM    msdb..backupset
        WHERE   database_name = N'Yoda'
                AND backup_set_id = (SELECT MAX(backup_set_id)
                                     FROM msdb..backupset
                                     WHERE database_name = N'Yoda')

        IF @BackupSetID IS NULL
            RAISERROR(N'Verify failed. Backup information for database ''Yoda'' not found.', 16, 1)

        RESTORE VERIFYONLY
        FROM    DISK = N'D:\Fileshare\Backup\Yoda.bak'
        WITH    FILE = @BackupSetID,
                NOUNLOAD,
                NOREWIND
        GO

    After running the backup, the file size was reduced even further, thanks to the zip-like compression algorithm used by SQL Server 2008 backup compression. The file size? Only 9 GB. //Peso

    Read the article

  • GZip compression with WCF hosted on IIS7

    - by joniba
    So I'm going to add my query to the small ocean of questions on the subject. I'm trying to enable GZip compression on large SOAP responses from a WCF service. So far, I've followed instructions here and in a variety of other places to enable dynamic compression on IIS. Here's my dynamicTypes section from the applicationHost.config:

        <dynamicTypes>
            <add mimeType="text/*" enabled="true" />
            <add mimeType="message/*" enabled="true" />
            <add mimeType="application/x-javascript" enabled="true" />
            <add mimeType="application/atom+xml" enabled="true" />
            <add mimeType="application/xaml+xml" enabled="true" />
            <add mimeType="application/xop+xml" enabled="true" />
            <add mimeType="application/soap+xml" enabled="true" />
            <add mimeType="*/*" enabled="false" />
        </dynamicTypes>

    And also:

        <urlCompression doDynamicCompression="true" dynamicCompressionBeforeCache="true" />

    though I'm not so clear on why that's needed. I threw some extra MIME types in there just in case. I've implemented IClientMessageInspector to add Accept-Encoding: gzip, deflate to my client's HTTP requests. Here's an example of a request header taken from Fiddler:

        POST http://[omitted]/TestMtomService/TextService.svc HTTP/1.1
        Content-Type: application/soap+xml; charset=utf-8
        Accept-Encoding: gzip, deflate
        Host: [omitted]
        Content-Length: 542
        Expect: 100-continue

    Now, this doesn't work. There's simply no compression happening, no matter what the size of the message (tried up to 1.5 MB). I've looked at this post, but have not run into the exception he describes, so I haven't tried the CodeProject implementation that he proposes. Also, I've seen a lot of other implementations that are supposed to get this to work, but I cannot make sense of them (e.g., MSDN's GZip encoder). Why would I need to implement the encoder, or the CodeProject solution? Shouldn't IIS take care of the compression? So what else do I need to do to get this to work? Joni

    Read the article

  • When not to do maximum compression in png?

    - by user1444680
    Intro: When saving PNG images through GIMP, I've always used level 9 (maximum) compression, as I knew it's lossless. Now I have to specify a compression level when saving PNG images through the GD extension of PHP. Question: Is there any case when I shouldn't compress a PNG to the maximum level? Any compatibility issues, for instance? If there's no problem, then why ask the user at all; why not automatically compress to the max?
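
    For what it's worth, the level only trades CPU time for size; PNG's zlib stage is lossless at every level, and level 9 often buys little over the default. A rough way to see the tradeoff with zlib directly (the raw pixel file is hypothetical):

        import time
        import zlib

        data = open('pixels.raw', 'rb').read()  # hypothetical uncompressed pixels

        for level in (1, 6, 9):
            start = time.perf_counter()
            out = zlib.compress(data, level)
            elapsed = time.perf_counter() - start
            print(f'level {level}: {len(out)} bytes in {elapsed:.3f}s')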

    Read the article

  • mp3 compression MPEG1 vs MPEG2

    - by Remus Rigo
    Hi all, I'm using CDex for converting WAV to MP3 and I wanted to ask you guys which version to use:

        MPEG I    has a max of 320 kbps
        MPEG II   has a max of 160 kbps
        MPEG II.5 has a max of 160 kbps

    I'm looking for better quality, and I want to know if it's better to use a later version which has a lower max bitrate (like MPEG II.5)... thanks

    Read the article

  • mod_deflate Supported Encodings for Compression

    - by sparc
    It seems to me that mod_deflate in Apache 2.2 will always return Content-Encoding: gzip and never Content-Encoding: deflate. It was explained to me that, although there may be a deflate algorithm, mod_deflate is named after a file format, in which the algorithm could be any of gzip, bzip, pkzip. Of those three, mod_deflate provides gzip. It seems as though gzip is the most popular and widely-supported algorithm in web browsers, but I know some web servers and proxies do return Content-Encoding: deflate. Aside from the confusion over the module's name, is it true that mod_deflate will only return Content-Encoding: gzip? Thank you.
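
    One way to see what a given server actually negotiates is to offer both tokens and inspect the response header; a small sketch (URL hypothetical; note that urllib sends no Accept-Encoding unless told to):

        import urllib.request

        req = urllib.request.Request('http://example.com/',
                                     headers={'Accept-Encoding': 'gzip, deflate'})
        with urllib.request.urlopen(req) as resp:
            # Per the question, mod_deflate is expected to answer 'gzip' here.
            print(resp.headers.get('Content-Encoding'))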

    Read the article

  • Windows command line built-in compression/decompression tool?

    - by Will Marcouiller
    I need to write a batch file to unzip files to their current folder from a given root folder:

        Folder 0
        |----- Folder 1
        |      |----- File1.zip
        |      |----- File2.zip
        |      |----- File3.zip
        |----- Folder 2
        |      |----- File4.zip
        |----- Folder 3
               |----- File5.zip
               |----- FileN.zip

    So, I wish my batch file to be launched like so:

        ocd.bat /d="Folder 0"

    Then, it should iterate from within the batch file through all of the subfolders and unzip the files exactly where the .zip files are located. So here's my question: does Windows (from XP on, at least) have a command-line interface for its embedded zip tool? Otherwise, shall I stick to a third-party util?
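
    If it turns out a third-party util is needed anyway, the traversal itself is small in any scripting language; a sketch of the same walk in Python with the standard-library zipfile module (root folder name taken from the question):

        import os
        import zipfile

        root = 'Folder 0'

        for dirpath, _dirnames, filenames in os.walk(root):
            for name in filenames:
                if name.lower().endswith('.zip'):
                    # Extract each archive into the folder where it was found.
                    with zipfile.ZipFile(os.path.join(dirpath, name)) as zf:
                        zf.extractall(dirpath)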

    Read the article

  • Video compression artifacts in Flash

    - by lvanderhart
    This only started happening in the past two days, which seems very odd to me. Everything worked flawlessly up until now, and I use my computer as my primary TV. Flash video from Hulu and Amazon, for no apparent reason, now has lots of artifacts in it. Some scenes are OK, but some are completely scrambled and unwatchable. My connection is 15 Mb FiOS, and bandwidth tests indicate my connection speed is fine. I've tried the latest production version of Flash, as well as the 10.1 RC4. Same problem. Enabling or disabling hardware acceleration in Flash makes no difference to the scrambling issue (overall quality is better with hardware). Using a different H.264 codec doesn't clean up the issue, although the scrambling does look different. I'm kind of stumped. The only thing I can think of now is to reinstall Windows, which is obviously a drastic step. Edit: Forgot to say: Windows 7, Athlon 64 X2, GeForce GTS 250.

    Read the article

  • Embed a JRE in a Windows executable?

    - by perp
    Suppose I want to distribute a Java application, and suppose I want to distribute it as a single executable. I could easily build a .jar with both the application and all its external dependencies in a single file (with some Ant hacking). Now suppose I want to distribute it as an .exe file on Windows. That's easy enough, given the nice tools out there (such as Launch4j and the like). But suppose now that I also don't want to depend on the end user having the right JRE (or any JRE at all, for that matter) installed. I want to distribute a JRE with my app, and my app should run on this JRE. It's easy enough to create a Windows installer executable and embed a folder with all the necessary JRE files in it. But then I'm distributing an installer and not a single-file app. Is there a way to embed both the application and a JRE into an .exe file acting as the application launcher (and not as an installer)?

    Read the article

  • How to determine the application subsystem from an executable file

    - by Luca
    I'm trying to detect console applications among the executable files installed on my computer. How can I implement this? Every application has a "subsystem" (Windows GUI application, console application, or library; specified as a linker option, I think). How can I detect it using only the executable file? Are there alternative methods for detecting an application's characteristics? Additionally, is there any method for detecting whether a file really is an executable file? Any issues with JAR executables?
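
    The subsystem is recorded in the PE optional header, so it can be read straight from the file; checking the MZ and PE signatures along the way also answers the "is this really an executable" part (a JAR, by contrast, is a zip archive starting with PK and is not a PE file at all). A minimal sketch, with offsets per the PE/COFF layout and no real validation beyond the signatures:

        import struct

        SUBSYSTEMS = {1: 'native', 2: 'Windows GUI', 3: 'Windows console',
                      7: 'POSIX console'}

        def pe_subsystem(path):
            with open(path, 'rb') as f:
                if f.read(2) != b'MZ':            # no DOS header: not a PE executable
                    return None
                f.seek(0x3C)                      # e_lfanew: file offset of the PE header
                pe_off, = struct.unpack('<I', f.read(4))
                f.seek(pe_off)
                if f.read(4) != b'PE\x00\x00':    # PE signature
                    return None
                # Skip the 20-byte COFF header; the Subsystem field sits at
                # offset 68 of the optional header for both PE32 and PE32+.
                f.seek(pe_off + 4 + 20 + 68)
                subsystem, = struct.unpack('<H', f.read(2))
                return SUBSYSTEMS.get(subsystem, str(subsystem))

        print(pe_subsystem(r'C:\Windows\System32\cmd.exe'))  # 'Windows console'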

    Read the article

  • How to get information about a Windows executable (.exe) using C++

    - by ereOn
    Hi, I have to create a piece of software that will scan several directories and extract information about the executables it finds. I need to do two things:

    1. Determine if a given file is an executable (.exe, .dll, and so on) - checking the extension is probably not good enough.
    2. Get information about this executable (the company name, the product name, and so on).

    I have never done this before and thus am not aware whether there is a Windows API (or a lightweight C/C++ library) to do this, or whether it is even possible. I guess it is, because explorer.exe does it. Do you guys know anything that could point me in the right direction? Thank you very much for your help.
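
    For the version-resource half, the relevant Win32 APIs are GetFileVersionInfoSize, GetFileVersionInfo, and VerQueryValue (in version.dll); they expose the same data Explorer shows in a file's properties. A quick sketch of the same calls through pywin32 (assumed installed; the property names and the explorer.exe path are just examples):

        import win32api  # pywin32

        def version_strings(path):
            # Ask the file which language/codepage pairs its version
            # resource declares, then read string properties from the first.
            pairs = win32api.GetFileVersionInfo(path, r'\VarFileInfo\Translation')
            if not pairs:
                return None
            lang, codepage = pairs[0]
            base = '\\StringFileInfo\\%04x%04x\\' % (lang, codepage)
            return {name: win32api.GetFileVersionInfo(path, base + name)
                    for name in ('CompanyName', 'ProductName', 'FileDescription')}

        print(version_strings(r'C:\Windows\explorer.exe'))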

    Read the article

  • Compression algorithm for IEEE-754 data

    - by David Taylor
    Does anyone have a recommendation for a good compression algorithm that works well with double-precision floating-point values? We have found that the binary representation of floating-point values results in very poor compression rates with common compression programs (e.g. Zip, RAR, 7-Zip, etc.). The data we need to compress is a one-dimensional array of 8-byte values sorted in monotonically increasing order. The values represent temperatures in Kelvin with a span typically under 100 degrees. The number of values ranges from a few hundred to at most 64K. Clarifications: All values in the array are distinct, though repetition does exist at the byte level due to the way floating-point values are represented. A lossless algorithm is desired since this is scientific data. Conversion to a fixed-point representation with sufficient precision (~5 decimals) might be acceptable provided there is a significant improvement in storage efficiency. Update: Found an interesting article on this subject. Not sure how applicable the approach is to my requirements. http://users.ices.utexas.edu/~burtscher/papers/dcc06.pdf
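
    One family of tricks that fits this data (sorted, small span) is to transform the raw 64-bit patterns before a general-purpose coder sees them: for positive doubles, sorting the values also sorts their bit patterns, so consecutive deltas are small and the high-order bytes become long runs of zeros. A lossless sketch of that idea (not the algorithm from the linked paper):

        import struct
        import zlib

        MASK64 = (1 << 64) - 1

        def pack_sorted_doubles(values):
            # Delta-encode the raw bit patterns, then let zlib exploit the
            # repetitive high-order bytes. Modular arithmetic keeps the
            # transform exactly reversible, hence lossless.
            bits = [struct.unpack('<Q', struct.pack('<d', v))[0] for v in values]
            deltas = [bits[0]] + [(b - a) & MASK64 for a, b in zip(bits, bits[1:])]
            return zlib.compress(b''.join(struct.pack('<Q', d) for d in deltas), 9)

        def unpack_sorted_doubles(blob):
            raw = zlib.decompress(blob)
            deltas = [struct.unpack_from('<Q', raw, off)[0]
                      for off in range(0, len(raw), 8)]
            bits = [deltas[0]]
            for d in deltas[1:]:
                bits.append((bits[-1] + d) & MASK64)
            return [struct.unpack('<d', struct.pack('<Q', b))[0] for b in bits]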

    Read the article

  • Byte-Pairing for data compression

    - by user1669533
    A question about byte-pairing for data compression. If byte pairing converts two byte values to a single byte value, splitting the file in half, then taking a gig file and recursing on it to shrink it 16-fold brings it down to 62,500,000 bytes. My question is: is byte-pairing really efficient? Is the creation of a 5,000,000-iteration loop, to be conservative, efficient? I would like some feedback and some incisive opinions, please. Dave, what I read was: "The US patent office no longer grants patents on perpetual motion machines, but has recently granted at least two patents on a mathematically impossible process: compression of truly random data." I was not inferring that the Patent Office was actually considering what I am inquiring about. I was merely commenting on the notion of a "mathematically impossible process." If someone has in some way created a method of having a "single" data byte act as a placeholder for 8 individual bytes of data, that would be a consideration for a patent. Now, about the mathematical impossibility of an 8-to-1 compression method: it is not so much a mathematical impossibility as a series of rules and conditions that can be created. As long as there is the rule of 8- or 16-bit representation for storing data on a medium, there are ways to manipulate data that mirror current methods, or creation by a new way of thinking.
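
    For reference, the classic form of this idea is byte-pair encoding (Gage, 1994): each pass replaces only the most frequent adjacent pair with a byte value unused in the input, so it never achieves a fixed 2:1 ratio per pass and stalls once all 256 byte values occur. A sketch of a single pass:

        from collections import Counter

        def bpe_pass(data: bytes):
            # Replace the most frequent adjacent byte pair with an unused
            # byte value; return the new data and the rule to reverse it.
            if len(data) < 2:
                return data, None
            free = [b for b in range(256) if b not in set(data)]
            if not free:
                return data, None     # no spare symbol left: cannot recurse further
            (pair, count), = Counter(zip(data, data[1:])).most_common(1)
            if count < 2:
                return data, None
            token, out, i = free[0], bytearray(), 0
            while i < len(data):
                if i + 1 < len(data) and (data[i], data[i + 1]) == pair:
                    out.append(token)
                    i += 2
                else:
                    out.append(data[i])
                    i += 1
            return bytes(out), (token, pair)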

    Read the article

  • Doubts in executable and relocatable object file

    - by bala1486
    Hello, I have written a simple Hello World program:

        #include <stdio.h>

        int main()
        {
            printf("Hello World");
            return 0;
        }

    I wanted to understand what the relocatable object file and the executable file look like. The object code corresponding to the main function is:

        0000000000000000 <main>:
           0:   55                      push   %rbp
           1:   48 89 e5                mov    %rsp,%rbp
           4:   bf 00 00 00 00          mov    $0x0,%edi
           9:   b8 00 00 00 00          mov    $0x0,%eax
           e:   e8 00 00 00 00          callq  13 <main+0x13>
          13:   b8 00 00 00 00          mov    $0x0,%eax
          18:   c9                      leaveq
          19:   c3                      retq

    Here the function call for printf is callq 13. One thing I don't understand is why it is 13. That means "call the function at address 13", right? But 13 holds the next instruction, right? Please explain what this means. The executable code corresponding to main is:

        00000000004004cc <main>:
          4004cc:   55                      push   %rbp
          4004cd:   48 89 e5                mov    %rsp,%rbp
          4004d0:   bf dc 05 40 00          mov    $0x4005dc,%edi
          4004d5:   b8 00 00 00 00          mov    $0x0,%eax
          4004da:   e8 e1 fe ff ff          callq  4003c0 <printf@plt>
          4004df:   b8 00 00 00 00          mov    $0x0,%eax
          4004e4:   c9                      leaveq
          4004e5:   c3                      retq

    Here it is callq 4003c0, but the instruction bytes are e8 e1 fe ff ff. There is nothing in them that corresponds to 4003c0. What am I getting wrong? Thanks. Bala
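
    The key fact, sketched below as arithmetic: the e8 opcode encodes a call relative to the next instruction, and the four bytes that follow are a signed little-endian displacement which the disassembler resolves for you. In the unlinked object file that displacement is still zeroed out (the relocation for printf has not been applied yet), which is why the call appears to target the very next instruction at 13.

        # Resolving the call target by hand; addresses taken from the
        # disassembly above.
        import struct

        next_insn = 0x4004df  # address of the instruction after the callq
        disp, = struct.unpack('<i', bytes([0xe1, 0xfe, 0xff, 0xff]))
        print(hex(next_insn + disp))  # 0x4003c0, i.e. printf@plt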

    Read the article

  • I need to choose a compression algorithm

    - by chiz
    I need to choose a compression algorithm to compress some data. I don't know the type of data I'll be compressing in advance (think of it as kinda like the WinRAR program). I've heard of the following algorithms but I don't know which one I should use. Can anyone post a short list of pros and cons? For my application the first priority is decompression speed; the second priority is space saved. Compression (not decompression) speed is irrelevant.

        Deflate
        Implode
        Plain Huffman
        bzip2
        lzma
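
    A quick way to rank the candidates on your own data is to measure exactly the two priorities named above; Python ships three of the listed codecs (Deflate via zlib, bzip2 via bz2, LZMA via lzma), so a rough sketch with a hypothetical sample file:

        import bz2
        import lzma
        import time
        import zlib

        sample = open('sample.bin', 'rb').read()  # hypothetical representative data

        codecs = [('Deflate', zlib.compress, zlib.decompress),
                  ('bzip2', bz2.compress, bz2.decompress),
                  ('LZMA', lzma.compress, lzma.decompress)]

        for name, compress, decompress in codecs:
            blob = compress(sample)
            start = time.perf_counter()
            decompress(blob)
            elapsed = time.perf_counter() - start
            print(f'{name}: ratio {len(blob) / len(sample):.3f}, '
                  f'decompress {elapsed * 1000:.1f} ms')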

    Read the article

  • Large number array compression

    - by gatapia
    Hi all, I've got a JavaScript application that sends a large amount of numerical data down the wire. This data is then stored in a database. I am having size issues (too much bandwidth, the database getting too big). I am now ready to sacrifice some performance for compression. I was thinking of implementing a base-62 number.toString(62) and parseInt(compressed, 62). This would certainly reduce the size of the data, but before I go ahead and do this I thought I would put it to the folks here, as I know there must be some outside-the-box solution I have not considered. The basic specs are:

    - Compress large number arrays into strings for JSONP transfer (so I think UTF is out)
    - Be relatively fast; I'm not expecting the same performance as I have now, but I also don't want gzip compression either.

    Any ideas would be greatly appreciated. Thanks, Guido Tapia
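
    One wrinkle with the plan as stated: JavaScript's Number.prototype.toString only accepts radixes up to 36, so a base-62 codec has to be hand-rolled anyway. A sketch of the codec idea (shown in Python for brevity; the same loops translate directly to JavaScript):

        ALPHABET = '0123456789abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ'

        def to_base62(n: int) -> str:
            if n == 0:
                return ALPHABET[0]
            digits = []
            while n:
                n, rem = divmod(n, 62)
                digits.append(ALPHABET[rem])
            return ''.join(reversed(digits))

        def from_base62(s: str) -> int:
            n = 0
            for ch in s:
                n = n * 62 + ALPHABET.index(ch)
            return n

        assert from_base62(to_base62(123456789)) == 123456789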

    Read the article

  • How reliable is HTTP compression using gzip?

    - by Liam
    YSlow has suggested that I use HTTP compression to improve the performance of my site. However, as noted by Yahoo, there are some problems: "There are known issues with browsers and proxies that may cause a mismatch in what the browser expects and what it receives with regard to compressed content. Fortunately, these edge cases are dwindling as the use of older browsers drops off. The Apache modules help out by adding appropriate Vary response headers automatically." I understand that the most common problem occurs with IE6 behind a proxy. But how common are these problems today? To quantify it, roughly what percentage of web users experience bugs with HTTP compression?

    Read the article

  • .NET Multipage Tiff with Lossy Compression

    - by Adam Berent
    I need a way to take several JPGs and convert them into a single multipage TIFF. I have that working using GDI+; however, it only works with LZW compression, which is lossless. This means that my three 50 KB JPGs turn into a 3 MB multipage TIFF file. That is not something I can accept for the software I am working on. I know that the TIFF image format can use a JPEG compression scheme, but GDI+ does not seem to support this. If anyone knows how to do this in .NET (C#), or of any component that does this conversion, please let me know.
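
    Not a .NET answer, but as a sanity check that the target file is achievable: Python's Pillow can write a multipage TIFF whose pages stay JPEG-compressed, which is handy for producing a reference file to compare against. A sketch (assumes Pillow built with libtiff; file names are hypothetical):

        from PIL import Image

        pages = [Image.open(p) for p in ('a.jpg', 'b.jpg', 'c.jpg')]
        # compression='jpeg' keeps each page lossy-compressed instead of LZW.
        pages[0].save('out.tif', save_all=True, append_images=pages[1:],
                      compression='jpeg')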

    Read the article
