Search Results

Search found 1285 results on 52 pages for 'lossless compression'.


  • Set compression level when generating a ZIP file using RubyZip

    - by Vincent Robert
    Hi, I have a Ruby program that zips a directory tree of XML files using the rubyzip gem. My problem is that the resulting archive is getting large, and I would like to increase the compression level, since compression time is not an issue. I could not find a way in the rubyzip documentation to specify the compression level for the created ZIP file. Does anyone know how to change this setting?
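
    Rubyzip specifics aside, the knob in question is the standard DEFLATE level (0-9) that most ZIP libraries expose in some form. As a cross-language illustration only (this sketch uses Python's zipfile, not rubyzip, and the directory name is a placeholder), zipping a tree at maximum compression looks like this:

      import os
      import zipfile

      SRC_DIR = "xml_tree"      # placeholder for the directory tree to zip
      OUT_ZIP = "xml_tree.zip"

      # compresslevel=9 is zlib's BEST_COMPRESSION: slowest, smallest output
      # (the compresslevel keyword requires Python 3.7+).
      with zipfile.ZipFile(OUT_ZIP, "w",
                           compression=zipfile.ZIP_DEFLATED,
                           compresslevel=9) as zf:
          for root, _dirs, files in os.walk(SRC_DIR):
              for name in files:
                  path = os.path.join(root, name)
                  # Store entries relative to the source directory.
                  zf.write(path, os.path.relpath(path, SRC_DIR))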

    Read the article

  • Sql compression and backing up in sql server 2005

    - by cagin
    Hi there, I want to back up my database with compression. This is my code: BACKUP DATABASE dbbbb TO DISK = N'C:\dbbb.bak' WITH COMPRESSION. This runs correctly in SQL Server 2008, but my server has SQL Server 2005 and COMPRESSION is not a recognized BACKUP option there. How can I compress my backup in 2005? Thank you for your help.

    Read the article

  • Partition Table and Exadata Hybrid Columnar Compression (EHCC)

    - by Bandari Huang
    Create an EHCC table:

      CREATE TABLE ... COMPRESS FOR [QUERY LOW|QUERY HIGH|ARCHIVE LOW|ARCHIVE HIGH];

      SELECT owner, table_name, compress_for FROM dba_tables WHERE compression = 'ENABLED';

    Convert a table/partition/subpartition to EHCC:

      ALTER TABLE table_name MOVE COMPRESS FOR [QUERY LOW|QUERY HIGH|ARCHIVE LOW|ARCHIVE HIGH] [PARALLEL <dop>];
      ALTER TABLE table_name MOVE PARTITION partition_name COMPRESS FOR [QUERY LOW|QUERY HIGH|ARCHIVE LOW|ARCHIVE HIGH] [PARALLEL <dop>];
      ALTER TABLE table_name MOVE SUBPARTITION subpartition_name COMPRESS FOR [QUERY LOW|QUERY HIGH|ARCHIVE LOW|ARCHIVE HIGH] [PARALLEL <dop>];

      SELECT owner, table_name, compress_for FROM dba_tables WHERE compression = 'ENABLED';
      SELECT table_owner, table_name, partition_name, compress_for FROM dba_tab_partitions WHERE compression = 'ENABLED';
      SELECT table_owner, table_name, subpartition_name, compress_for FROM dba_tab_subpartitions WHERE compression = 'ENABLED';

    Rebuild any unusable indexes:

      SELECT index_name FROM dba_indexes WHERE status = 'UNUSABLE';
      SELECT index_name, partition_name FROM dba_ind_partitions WHERE status = 'UNUSABLE';
      SELECT index_name, subpartition_name FROM dba_ind_subpartitions WHERE status = 'UNUSABLE';

      ALTER INDEX index_name REBUILD [PARALLEL <dop>];
      ALTER INDEX index_name REBUILD PARTITION partition_name [PARALLEL <dop>];
      ALTER INDEX index_name REBUILD SUBPARTITION subpartition_name [PARALLEL <dop>];

    Convert a table/partition/subpartition from EHCC to OLTP compression or to uncompressed format:

      ALTER TABLE table_name MOVE [NOCOMPRESS|COMPRESS FOR OLTP] [PARALLEL <dop>];
      ALTER TABLE table_name MOVE PARTITION partition_name [NOCOMPRESS|COMPRESS FOR OLTP] [PARALLEL <dop>];
      ALTER TABLE table_name MOVE SUBPARTITION subpartition_name [NOCOMPRESS|COMPRESS FOR OLTP] [PARALLEL <dop>];

      SELECT owner, table_name, compress_for FROM dba_tables WHERE compression = 'DISABLED';
      SELECT table_owner, table_name, partition_name, compress_for FROM dba_tab_partitions WHERE compression = 'DISABLED';
      SELECT table_owner, table_name, subpartition_name, compress_for FROM dba_tab_subpartitions WHERE compression = 'DISABLED';

    Rebuild any unusable indexes:

      SELECT index_name FROM dba_indexes WHERE status = 'UNUSABLE';
      SELECT index_name, partition_name FROM dba_ind_partitions WHERE status = 'UNUSABLE';
      SELECT index_name, subpartition_name FROM dba_ind_subpartitions WHERE status = 'UNUSABLE';

      ALTER INDEX index_name REBUILD [PARALLEL <dop>];
      ALTER INDEX index_name REBUILD PARTITION partition_name [PARALLEL <dop>];
      ALTER INDEX index_name REBUILD SUBPARTITION subpartition_name [PARALLEL <dop>];

    Read the article

  • IIS7 Compression

    - by Thomas
    Hi guys, I have searched around and haven't really found an answer anywhere, and this is still not working for me. I am using compression in IIS7 and it doesn't appear to be working. The configuration I am using is below:

      <urlCompression doStaticCompression="true" />
      <httpCompression cacheControlHeader="max-age=86400"
                       sendCacheHeaders="true"
                       expiresHeader="true"
                       minFileSizeForComp="0"
                       directory="%SystemDrive%\inetpub\temp\IIS Temporary Compressed Files">
        <scheme name="gzip" dll="%Windir%\system32\inetsrv\gzip.dll" />
        <staticTypes>
          <add mimeType="text/*" enabled="true" />
          <add mimeType="message/*" enabled="true" />
          <add mimeType="application/javascript" enabled="true" />
          <add mimeType="*/*" enabled="false" />
        </staticTypes>
      </httpCompression>

    However, my content is still not being gzipped. Any ideas why this is happening? Cheers

    Read the article

  • software for compression-decompression of pictures

    - by infant programmer
    I need to send hundreds of pictures via email. The total size is reaching hundreds of MBs, which is certainly a burden on the network. WinRAR and 7-Zip aren't helping. Could you suggest any software which can compress the pictures so that I can reduce the size, send them via email, and decompress them at the other end? Open-source or freeware is preferred, but paid versions are appreciated too. Edit: I am looking for an alternative to WinRAR and 7-Zip that compresses more efficiently, but still without losing any data.
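
    A side note on why archivers barely help here: typical photo formats (JPEG in particular) are already compressed, so what is left looks nearly random to a general-purpose, lossless compressor. A quick way to see that for yourself, as a sketch (the file path is a placeholder for one of your own pictures):

      import lzma
      import zlib
      from pathlib import Path

      photo = Path("IMG_0001.jpg")    # placeholder path
      raw = photo.read_bytes()

      for name, packed in (("zlib -9", zlib.compress(raw, 9)),
                           ("lzma", lzma.compress(raw))):
          ratio = len(packed) / len(raw)
          # Expect ratios close to 1.0: almost no lossless gain on JPEG data.
          print(f"{name}: {len(raw)} -> {len(packed)} bytes ({ratio:.2%})")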

    Read the article

  • How do I reinforce compression options?

    - by Gooberpatrol66
    Shortly after I got my computer, I enabled NTFS compression on it, selecting the option to compress "all files and subfolders". I recently noticed that several folders on my PC are not compressed anymore, including "Program Files" and "Windows". I suspect this happened when I installed Windows 8.1. The problem is, the only way I can think of to fix this would be to uncheck the tick box under "Properties" for my drive, thus decompressing everything on my drive, and then re-check it with the "all files and subfolders" option. Is there a way to compress all the uncompressed folders without first decompressing the compressed folders?

    Read the article

  • Why still use JPG compression? [closed]

    - by Torben Gundtofte-Bruun
    Back when the JPG image format was introduced, it made a lot of sense to reduce the file size, even accepting a loss in image quality, because files were being downloaded over a slow and expensive modem connection. In today's world, file size is no longer a concern, at least not regarding JPG where it seems silly to save 45kB on a photo. But my image editing apps still prompt me for the desired compression level when I save a file. Does it still make sense to go with the default 85? Why should I not crank it up to 100 for all files? Update based on comments: For web work, I might use PNG instead. But every smartphone and camera produces JPG files. The question arises when I save these edits. Audience is my own harddisk. We're talking photos, 2-5MB apiece. Chroma, subsampling, DCT - sorry, never heard of it. I'm a home user, not Photoshop guru. For the record, I use Paint Shop Pro on Win, and Gimp on Linux.
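
    One way to make the trade-off concrete is to re-save the same photo at a few quality settings and compare file sizes; near 100 the file typically grows sharply for a visual difference that is very hard to see. A sketch (using Pillow purely for illustration, not one of the editors mentioned above; the input path is a placeholder):

      import io
      from PIL import Image

      img = Image.open("photo.jpg")   # placeholder path

      for quality in (70, 85, 95, 100):
          buf = io.BytesIO()
          # Re-encode in memory and measure the resulting JPEG size.
          img.save(buf, "JPEG", quality=quality)
          print(f"quality={quality}: {buf.tell() / 1024:.0f} kB")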

    Read the article

  • Helping to Reduce Page Compression Failures Rate

    - by Vasil Dimov
    When InnoDB compresses a page it needs the result to fit into its predetermined compressed page size (specified with KEY_BLOCK_SIZE). When the result does not fit we call that a compression failure. In this case InnoDB needs to split up the page and try to compress again. That said, compression failures are bad for performance and should be minimized.

    Whether the result of the compression will fit largely depends on the data being compressed, and some tables and/or indexes may contain more compressible data than others. And so it would be nice if the compression failure rate, along with other compression stats, could be monitored on a per-table or even a per-index basis, wouldn't it?

    This is where the new INFORMATION_SCHEMA table in MySQL 5.6 kicks in. INFORMATION_SCHEMA.INNODB_CMP_PER_INDEX provides exactly this helpful information. It contains the following fields:

      +-----------------+--------------+------+
      | Field           | Type         | Null |
      +-----------------+--------------+------+
      | database_name   | varchar(192) | NO   |
      | table_name      | varchar(192) | NO   |
      | index_name      | varchar(192) | NO   |
      | compress_ops    | int(11)      | NO   |
      | compress_ops_ok | int(11)      | NO   |
      | compress_time   | int(11)      | NO   |
      | uncompress_ops  | int(11)      | NO   |
      | uncompress_time | int(11)      | NO   |
      +-----------------+--------------+------+

    similarly to INFORMATION_SCHEMA.INNODB_CMP, but this time the data is grouped by "database_name,table_name,index_name" instead of by "page_size". So a query like

      SELECT database_name, table_name, index_name,
             compress_ops - compress_ops_ok AS failures
      FROM information_schema.innodb_cmp_per_index
      ORDER BY failures DESC;

    would reveal the most problematic tables and indexes that have the highest compression failure rate. From there on, the way to improve performance would be to try to increase the compressed page size or change the structure of the table/indexes or the data being stored, and see if it has a positive impact on performance.

    Read the article

  • Using VB6 + WSH with Windows Compression

    - by OneNerd
    Having trouble with WSH and Windows Compression. My goal is to be able to zip up files (not folders, but individual files from various locations, which I have stored in an array) using the built-in Windows compression. I am using VB6. Here is my routine (VB6 code):

      Dim objShell
      Dim objFolder
      Set objShell = CreateObject("Shell.Application")
      Set objFolder = objShell.NameSpace(savePath & "\export.zip")

      ' Loop through the array holding the files to zip.
      ' Note: CopyHere returns immediately; the copy itself runs asynchronously.
      For i = 0 To filePointer
          objFolder.CopyHere (filesToZip(i))
      Next

      Set objShell = Nothing
      Set objFolder = Nothing

    It works, but issues arise when there are more than a few files. I start getting errors from Windows (presumably it's calling the compression too fast and the zip file is locked). I can't seem to figure out how to WAIT until the CopyHere call completes before issuing the next one, to avoid the issue. Does anyone have any experience with this? Thanks -

    Read the article

  • SQL Server 2008 Compression

    - by Peter Larsson
    Hi! Today I am going to talk about compression in SQL Server 2008. The data warehouse I currently design and develop holds historical data back to 1973. The data warehouse will get another blog post later due to its complexity. However, the server has 60 GB of memory (of which 48 is dedicated to the SQL Server service), so not all data fit in memory, and the SAN is not the fastest one around. So I decided to give compression a go, since we use Enterprise Edition anyway. This is the code I use to compress all tables with PAGE compression.

      DECLARE @SQL VARCHAR(MAX)

      DECLARE curTables CURSOR FOR
          SELECT  'ALTER TABLE ' + QUOTENAME(OBJECT_SCHEMA_NAME(object_id))
                  + '.' + QUOTENAME(OBJECT_NAME(object_id))
                  + ' REBUILD PARTITION = ALL WITH (DATA_COMPRESSION = PAGE)'
          FROM    sys.tables

      OPEN curTables

      FETCH NEXT FROM curTables INTO @SQL

      WHILE @@FETCH_STATUS = 0
          BEGIN
              IF @SQL IS NOT NULL
                  RAISERROR(@SQL, 10, 1) WITH NOWAIT

              FETCH NEXT FROM curTables INTO @SQL
          END

      CLOSE curTables
      DEALLOCATE curTables

    Copy and paste the result to a new code window and execute the statements. One thing I noticed when doing this is that the database grows by the same size as the table being rebuilt. If the database cannot grow by this amount, the operation fails. For me, it first ended up with an orphaned connection. Not good. And this is the code I use to create the index compression statements:

      DECLARE @SQL VARCHAR(MAX)

      DECLARE curIndexes CURSOR FOR
          SELECT      'ALTER INDEX ' + QUOTENAME(name)
                      + ' ON '
                      + QUOTENAME(OBJECT_SCHEMA_NAME(object_id))
                      + '.'
                      + QUOTENAME(OBJECT_NAME(object_id))
                      + ' REBUILD PARTITION = ALL WITH (FILLFACTOR = 100, DATA_COMPRESSION = PAGE)'
          FROM        sys.indexes
          WHERE       OBJECTPROPERTY(object_id, 'IsMSShipped') = 0
                      AND OBJECTPROPERTY(object_id, 'IsTable') = 1
          ORDER BY    CASE type_desc
                          WHEN 'CLUSTERED' THEN 1
                          ELSE 2
                      END

      OPEN curIndexes

      FETCH NEXT FROM curIndexes INTO @SQL

      WHILE @@FETCH_STATUS = 0
          BEGIN
              IF @SQL IS NOT NULL
                  RAISERROR(@SQL, 10, 1) WITH NOWAIT

              FETCH NEXT FROM curIndexes INTO @SQL
          END

      CLOSE curIndexes
      DEALLOCATE curIndexes

    When this was done, I noticed that the 90 GB database was now only 17 GB. And most important, the complete database could now reside in memory! After this I took care of the administrative tasks, backups. Here I copied the code from Management Studio because I didn't want to spend too much time on this. The code looks like this (notice the compression option):

      BACKUP DATABASE [Yoda]
      TO DISK = N'D:\Fileshare\Backup\Yoda.bak'
      WITH NOFORMAT,
           INIT,
           NAME = N'Yoda - Full Database Backup',
           SKIP,
           NOREWIND,
           NOUNLOAD,
           COMPRESSION,
           STATS = 10,
           CHECKSUM
      GO

      DECLARE @BackupSetID INT

      SELECT  @BackupSetID = Position
      FROM    msdb..backupset
      WHERE   database_name = N'Yoda'
              AND backup_set_id = (SELECT MAX(backup_set_id)
                                   FROM msdb..backupset
                                   WHERE database_name = N'Yoda')

      IF @BackupSetID IS NULL
          RAISERROR(N'Verify failed. Backup information for database ''Yoda'' not found.', 16, 1)

      RESTORE VERIFYONLY
      FROM    DISK = N'D:\Fileshare\Backup\Yoda.bak'
      WITH    FILE = @BackupSetID,
              NOUNLOAD,
              NOREWIND
      GO

    After running the backup, the file size was even more reduced due to the zip-like compression algorithm used in SQL Server 2008. The file size? Only 9 GB. //Peso

    Read the article

  • GZip compression with WCF hosted on IIS7

    - by joniba
    So I'm going to add my query to the small ocean of questions on the subject. I'm trying to enable GZip compression on large SOAP responses from a WCF service. So far, I've followed instructions here and in a variety of other places to enable dynamic compression in IIS. Here's my dynamicTypes section from applicationHost.config:

      <dynamicTypes>
        <add mimeType="text/*" enabled="true" />
        <add mimeType="message/*" enabled="true" />
        <add mimeType="application/x-javascript" enabled="true" />
        <add mimeType="application/atom+xml" enabled="true" />
        <add mimeType="application/xaml+xml" enabled="true" />
        <add mimeType="application/xop+xml" enabled="true" />
        <add mimeType="application/soap+xml" enabled="true" />
        <add mimeType="*/*" enabled="false" />
      </dynamicTypes>

    And also:

      <urlCompression doDynamicCompression="true" dynamicCompressionBeforeCache="true" />

    Though I'm not so clear on why that's needed. Threw some extra mime-types in there just in case. I've implemented IClientMessageInspector to add Accept-Encoding: gzip, deflate to my client's HTTP requests. Here's an example of a request header taken from Fiddler:

      POST http://[omitted]/TestMtomService/TextService.svc HTTP/1.1
      Content-Type: application/soap+xml; charset=utf-8
      Accept-Encoding: gzip, deflate
      Host: [omitted]
      Content-Length: 542
      Expect: 100-continue

    Now, this doesn't work. There's simply no compression happening, no matter what the size of the message (tried up to 1.5 MB). I've looked at this post, but have not run into the exception he describes, so I haven't tried the CodeProject implementation that he proposes. Also, I've seen a lot of other implementations that are supposed to get this to work, but cannot make sense of them (e.g., MSDN's GZip encoder). Why would I need to implement the encoder, or the CodeProject solution? Shouldn't IIS take care of the compression? So what else do I need to do to get this to work? Joni

    Read the article

  • When not to do maximum compression in png?

    - by user1444680
    Intro: When saving PNG images through GIMP, I've always used level 9 (maximum) compression, as I knew that it's lossless. Now I have to specify a compression level when saving a PNG image through the GD extension of PHP. Question: Is there any case when I shouldn't compress a PNG to the maximum level? Any compatibility issues, for example? If there's no problem, then why ask the user at all; why not automatically compress to the max?
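
    For what it's worth, the PNG compression level only trades encoding time against file size; the decoded pixels are identical at every level. A small way to verify that (a sketch assuming Pillow; the input path is a placeholder):

      import io
      from PIL import Image

      img = Image.open("input.png").convert("RGB")   # placeholder path

      sizes = {}
      for level in (1, 6, 9):
          buf = io.BytesIO()
          img.save(buf, "PNG", compress_level=level)
          sizes[level] = buf.tell()
          # Decoding the result gives back exactly the same pixels.
          decoded = Image.open(io.BytesIO(buf.getvalue()))
          assert list(decoded.getdata()) == list(img.getdata())

      print(sizes)   # only the byte counts differ, never the image content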

    Read the article

  • mp3 compression MPEG1 vs MPEG2

    - by Remus Rigo
    Hi all, I'm using CDex for converting WAV to MP3 and I wanted to ask you guys which version to use:

      MPEG I has a max of 320 kbps
      MPEG II has a max of 160 kbps
      MPEG II.5 has a max of 160 kbps

    I'm looking for better quality, and I want to know if it's better to use a later version which has a lower maximum bitrate (like MPEG II.5)... thanks

    Read the article

  • mod_deflate Supported Encodings for Compression

    - by sparc
    It seems to me that mod_deflate in Apache 2.2 will always return Content-Encoding: gzip and never Content-Encoding: deflate. It was explained to me that, although there is a deflate algorithm, mod_deflate is named after a file format, in which the algorithm could be any of gzip, bzip, or pkzip. Of those three, mod_deflate provides gzip. It seems as though gzip is the most popular and widely supported algorithm in web browsers, but I know some web servers and proxies do return Content-Encoding: deflate. Aside from the confusion over the module's name, is it true that mod_deflate will only return Content-Encoding: gzip? Thank you.

    Read the article

  • Windows command line built-in compression/decompression tool?

    - by Will Marcouiller
    I need to write a batch file to unzip files to their current folder from a given root folder.

      Folder 0
      |----- Folder 1
      |      |----- File1.zip
      |      |----- File2.zip
      |      |----- File3.zip
      |
      |----- Folder 2
      |      |----- File4.zip
      |
      |----- Folder 3
             |----- File5.zip
             |----- FileN.zip

    So I want my batch file to be launched like so:

      ocd.bat /d="Folder 0"

    Then, make it iterate from within the batch file through all of the subfolders to unzip the files exactly where the .zip files are located. So here's my question: does Windows (from XP at least) have a command-line interface for its embedded zip tool? Otherwise, shall I stick to another third-party util?
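
    As far as I know, the Explorer zip handler has no supported command-line interface on XP, so this usually ends up driving Shell.Application from a script or calling a third-party tool. As a cross-language illustration of the traversal itself (a sketch in Python rather than a batch file; the root folder comes from the command line):

      import sys
      import zipfile
      from pathlib import Path

      root = Path(sys.argv[1])    # e.g. "Folder 0"

      # Walk every subfolder and extract each archive next to itself.
      for archive in root.rglob("*.zip"):
          with zipfile.ZipFile(archive) as zf:
              zf.extractall(archive.parent)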

    Read the article

  • Video compression artifacts in Flash

    - by lvanderhart
    This only started happening in the past two days, which seems very odd to me. Everything worked flawlessly up until now, and I use my computer as my primary TV. Flash video from Hulu and Amazon, for no apparent reason, now has lots of artifacts in it. Some scenes are OK, but some are completely scrambled and unwatchable. My connection is a 15 Mb FiOS, and bandwidth tests indicate my connection speed is fine. I've tried the latest production version of Flash, as well as the 10.1 RC4. Same problem. Enabling or disabling hardware acceleration in Flash makes no difference with the scrambling issue (quality is better overall with hardware). Using a different H.264 codec doesn't clean up the issue, although the scrambling does look different. I'm kind of stumped. The only thing I can think of now is to reinstall Windows, which is obviously a drastic step. Edit: Forgot to say: Windows 7, Athlon 64 X2, GeForce GTS 250

    Read the article

  • Compression algorithm for IEEE-754 data

    - by David Taylor
    Anyone have a recommendation on a good compression algorithm that works well with double-precision floating point values? We have found that the binary representation of floating point values results in very poor compression rates with common compression programs (e.g. Zip, RAR, 7-Zip etc.). The data we need to compress is a one-dimensional array of 8-byte values sorted in monotonically increasing order. The values represent temperatures in Kelvin with a span typically under 100 degrees. The number of values ranges from a few hundred to at most 64K. Clarifications: All values in the array are distinct, though repetition does exist at the byte level due to the way floating point values are represented. A lossless algorithm is desired since this is scientific data. Conversion to a fixed-point representation with sufficient precision (~5 decimals) might be acceptable provided there is a significant improvement in storage efficiency. Update: Found an interesting article on this subject. Not sure how applicable the approach is to my requirements. http://users.ices.utexas.edu/~burtscher/papers/dcc06.pdf
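
    Since the values are sorted and span a narrow range, one common approach (a sketch only, not tied to any particular library) is to convert to fixed point, delta-encode, and hand the small deltas to a general-purpose compressor; the "~5 decimals" tolerance from the clarifications is taken as the assumed precision here:

      import struct
      import zlib

      def pack_temps(values, decimals=5):
          """Fixed-point + delta encoding, then DEFLATE.

          Assumes values are sorted in increasing order, as in the question.
          """
          scale = 10 ** decimals
          ints = [round(v * scale) for v in values]
          deltas = [ints[0]] + [b - a for a, b in zip(ints, ints[1:])]
          raw = struct.pack(f"<{len(deltas)}q", *deltas)
          return zlib.compress(raw, 9)

      def unpack_temps(blob, count, decimals=5):
          scale = 10 ** decimals
          deltas = struct.unpack(f"<{count}q", zlib.decompress(blob))
          ints, total = [], 0
          for d in deltas:
              total += d
              ints.append(total)
          return [i / scale for i in ints]

      # Example: monotonically increasing temperatures in Kelvin.
      temps = [273.15 + 0.01 * k for k in range(1000)]
      blob = pack_temps(temps)
      restored = unpack_temps(blob, len(temps))
      assert all(abs(a - b) < 1e-5 for a, b in zip(temps, restored))
      print(len(blob), "bytes vs", len(temps) * 8, "bytes of raw doubles")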

    Read the article

  • Byte-Pairing for data compression

    - by user1669533
    Question about byte-pairing for data compression. If byte pairing converts two byte values to a single byte value, splitting the file in half, then taking a gig file and recursing on it 16 times shrinks it to 62,500,000. My question is: is byte-pairing really efficient? Is the creation of a 5,000,000-iteration loop, to be conservative, efficient? I would like some feedback and some incisive opinions, please. Dave, what I read was: "The US patent office no longer grants patents on perpetual motion machines, but has recently granted at least two patents on a mathematically impossible process: compression of truly random data." I was not implying that the Patent Office was actually considering what I am inquiring about. I was merely commenting on the notion of a "mathematically impossible process." If someone has, in some way, created a method of having a "single" data byte as a placeholder for 8 individual bytes of data, that would be a consideration for a patent. Now, about the mathematical impossibility of an 8-to-1 compression method: it is not so much a mathematical impossibility as a series of rules and conditions that can be created. As long as there is the rule of 8- or 16-bit representation of storing data on a medium, there are ways to manipulate data that mirror current methods, or creation by a new way of thinking.
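
    For reference, classic byte-pair encoding does not halve a file per pass; each pass replaces only the single most frequent byte pair with one currently unused byte value, so the gain per pass is usually modest. A minimal sketch of one such pass (illustrative only):

      from collections import Counter

      def bpe_pass(data: bytes):
          """Replace the most frequent byte pair with an unused byte value.

          Returns (new_data, (token, pair)) or (data, None) when no unused
          byte value or no repeated pair is available.
          """
          unused = set(range(256)) - set(data)
          pairs = Counter(zip(data, data[1:]))
          if not unused or not pairs:
              return data, None
          (a, b), count = pairs.most_common(1)[0]
          if count < 2:
              return data, None
          token = unused.pop()
          out = bytearray()
          i = 0
          while i < len(data):
              if i + 1 < len(data) and data[i] == a and data[i + 1] == b:
                  out.append(token)
                  i += 2
              else:
                  out.append(data[i])
                  i += 1
          return bytes(out), (token, (a, b))

      packed, rule = bpe_pass(b"abababcdcdcd")
      print(packed, rule)   # one frequent pair collapsed into a spare byte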

    Read the article

  • I need to choose a compression algorithm

    - by chiz
    I need to choose a compression algorithm to compress some data. I don't know the type of data I'll be compressing in advance (think of it as kinda like the WinRAR program). I've heard of the following algorithms but I don't know which one I should use. Can anyone post a short list of pros and cons? For my application the first priority is decompression speed; the second priority is space saved. Compression (not decompression) speed is irrelevant.

      Deflate
      Implode
      Plain Huffman
      bzip2
      lzma
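
    When decompression speed is the first priority, it usually pays to measure on data that resembles yours rather than rely on a pros/cons table, since the ranking can shift with the input. A sketch using the three codecs in the Python standard library (zlib covers Deflate; Implode and plain Huffman have no stdlib module, so they are left out):

      import bz2
      import lzma
      import time
      import zlib

      def profile(name, compress, decompress, payload, repeats=20):
          packed = compress(payload)
          start = time.perf_counter()
          for _ in range(repeats):
              decompress(packed)
          elapsed = (time.perf_counter() - start) / repeats
          print(f"{name:5s} ratio={len(packed) / len(payload):.3f} "
                f"decompress={elapsed * 1000:.2f} ms")

      # Stand-in payload; replace with a sample of the real data.
      payload = open(__file__, "rb").read() * 200

      profile("zlib", lambda d: zlib.compress(d, 9), zlib.decompress, payload)
      profile("bz2", lambda d: bz2.compress(d, 9), bz2.decompress, payload)
      profile("lzma", lzma.compress, lzma.decompress, payload)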

    Read the article

  • Large number array compression

    - by gatapia
    Hi All, I've got a JavaScript application that sends a large amount of numerical data down the wire. This data is then stored in a database. I am having size issues (too much bandwidth, database getting too big). I am now ready to sacrifice some performance for compression. I was thinking of implementing a base-62 number.toString(62) and parseInt(compressed, 62). This would certainly reduce the size of the data, but before I go ahead and do this I thought I would put it to the folks here, as I know there must be some outside-the-box solution I have not considered. The basic specs are:

    - Compress large number arrays into strings for JSONP transfer (so I think UTF is out)
    - Be relatively fast; look, I'm not expecting the same performance as I have now, but I also don't want gzip compression either.

    Any ideas would be greatly appreciated. Thanks, Guido Tapia
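
    As a concreteness check on the base-62 idea (shown in Python rather than JavaScript; note that the built-in Number.prototype.toString and parseInt only accept radixes up to 36, so base 62 needs a hand-rolled alphabet like the one below):

      ALPHABET = "0123456789ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz"

      def encode62(n: int) -> str:
          if n == 0:
              return ALPHABET[0]
          digits = []
          while n:
              n, rem = divmod(n, 62)
              digits.append(ALPHABET[rem])
          return "".join(reversed(digits))

      def decode62(s: str) -> int:
          n = 0
          for ch in s:
              n = n * 62 + ALPHABET.index(ch)
          return n

      nums = [1234567890, 42, 9007199254740991]
      packed = ",".join(encode62(n) for n in nums)
      assert [decode62(s) for s in packed.split(",")] == nums
      print(packed)   # noticeably shorter than ",".join(str(n) for n in nums)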

    Read the article
