Search Results

Search found 1285 results on 52 pages for 'lossless compression'.

Page 18/52 | < Previous Page | 14 15 16 17 18 19 20 21 22 23 24 25  | Next Page >

  • Why is my GZipStream not writeable?

    - by Ozzah
    I have some GZ compressed resources in my program and I need to be able to write them out to temporary files for use. I wrote the following function to write the files out and return true on success or false on failure. In addition, I've put a try/catch in there which shows a MessageBox in the event of an error: private static bool extractCompressedResource(byte[] resource, string path) { try { using (MemoryStream ms = new MemoryStream(resource)) { using (FileStream fs = new FileStream(path, FileMode.Create, FileAccess.ReadWrite)) { using (GZipStream zs = new GZipStream(fs, CompressionMode.Decompress)) { ms.CopyTo(zs); // Throws exception zs.Close(); ms.Close(); } } } } catch (Exception ex) { MessageBox.Show(ex.Message); // Stream is not writeable return false; } return true; } I've put a comment on the line which throws the exception. If I put a breakpoint on that line and take a look inside the GZipStream then I can see that it's not writeable (which is what's causing the problem). Am I doing something wrong, or is this a limitation of the GZipStream class?
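    The same pitfall exists in java.util.zip: a decompressing stream wraps the compressed source and is read from, never written to. A rough Java sketch of the operation being attempted follows (class and method names are illustrative); in the C# code above the analogous fix is to wrap the MemoryStream in the GZipStream with CompressionMode.Decompress and copy from it into the FileStream.

      import java.io.ByteArrayInputStream;
      import java.io.FileOutputStream;
      import java.io.IOException;
      import java.util.zip.GZIPInputStream;

      public class ExtractGzResource {
          // Inflate a gzip-compressed in-memory resource out to a file on disk.
          static boolean extract(byte[] resource, String path) {
              try (GZIPInputStream gz = new GZIPInputStream(new ByteArrayInputStream(resource));
                   FileOutputStream out = new FileOutputStream(path)) {
                  byte[] buf = new byte[8192];
                  int n;
                  while ((n = gz.read(buf)) > 0) {
                      out.write(buf, 0, n);       // write the already-inflated bytes
                  }
                  return true;
              } catch (IOException e) {
                  return false;                   // could not inflate or write the temp file
              }
          }
      }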

    Read the article

  • How to decompress/inflate an XML response from ASP

    - by krisg
    Can anyone provide some insight into how I'd go about decompressing an XML response in classic ASP? We've been handed some code and asked to get it working: Set oXMLHttp = Server.CreateObject("MSXML2.ServerXMLHTTP") URL = HttpServer + re_domain + ".do;jsessionid=" + ue_session + "?" + data oXMLHttp.setTimeouts 5000, 60000, 1200000, 1200000 oXMLHttp.open "GET", URL, false oXMLHttp.setRequestHeader "Accept-Encoding", "gzip" oXMLHttp.send() if oXMLHttp.status = 200 Then if oXMLHttp.responseText = "" then htmlrequest_get = "Empty Response from Server" else htmlrequest_get = oXMLHttp.responseText end if else ... Apparently, now that the response is compressed using gzip, we have to un-compress the XML response before we can start to work with the data. How should I go about this?
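    For comparison, a rough Java sketch of the general approach (the URL and charset are placeholder assumptions): once you ask the server for gzip, the raw response bytes must be inflated before the XML can be parsed. In classic ASP the equivalent step would mean inflating oXMLHttp.responseBody (the raw bytes) with a COM component or custom code, since responseText treats the compressed bytes as text.

      import java.io.ByteArrayOutputStream;
      import java.io.IOException;
      import java.io.InputStream;
      import java.net.HttpURLConnection;
      import java.net.URL;
      import java.util.zip.GZIPInputStream;

      public class GzipXmlFetch {
          public static String fetch(String url) throws IOException {
              HttpURLConnection conn = (HttpURLConnection) new URL(url).openConnection();
              conn.setRequestProperty("Accept-Encoding", "gzip");
              InputStream body = conn.getInputStream();
              if ("gzip".equalsIgnoreCase(conn.getContentEncoding())) {
                  body = new GZIPInputStream(body);   // inflate before parsing the XML
              }
              ByteArrayOutputStream out = new ByteArrayOutputStream();
              byte[] buf = new byte[8192];
              int n;
              while ((n = body.read(buf)) != -1) {
                  out.write(buf, 0, n);
              }
              body.close();
              return out.toString("UTF-8");           // assumes UTF-8 encoded XML
          }
      }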

    Read the article

  • java: how to get a string representation of a compressed byte array ?

    - by Guillaume
    I want to put some compressed data into a remote repository. To put data on this repository I can only use a method that takes the name of the resource and its content as a String (like data.txt + "hello world"). The repository is mocking a filesystem but is not one, so I cannot use File directly. I want to be able to do the following: (1) client sends the server a file 'data.txt'; (2) server compresses 'data.txt' into a compressed file 'data.zip'; (3) server sends a string representation of data.zip to the repository; (4) repository stores data.zip; (5) client downloads data.zip from the repository and is able to open it with its favorite zip tool. The problem arises at step 3, when I try to get a string representation of my compressed file. Here is a sample class, using the zip*stream classes and emulating the repository, that showcases my problem. The created zip file is working, but after its 'serialization' it gets corrupted. (The sample class uses Jakarta commons-io.) Many thanks for your help. package zip; import java.io.File; import java.io.FileInputStream; import java.io.FileOutputStream; import java.io.IOException; import java.io.InputStream; import java.util.zip.ZipEntry; import java.util.zip.ZipInputStream; import java.util.zip.ZipOutputStream; import org.apache.commons.io.FileUtils; /** * Date: May 19, 2010 - 6:13:07 PM * * @author Guillaume AME. */ public class ZipMe { public static void addOrUpdate(File zipFile, File ... files) throws IOException { File tempFile = File.createTempFile(zipFile.getName(), null); // delete it, otherwise you cannot rename your existing zip to it. tempFile.delete(); boolean renameOk = zipFile.renameTo(tempFile); if (!renameOk) { throw new RuntimeException("could not rename the file " + zipFile.getAbsolutePath() + " to " + tempFile.getAbsolutePath()); } byte[] buf = new byte[1024]; ZipInputStream zin = new ZipInputStream(new FileInputStream(tempFile)); ZipOutputStream out = new ZipOutputStream(new FileOutputStream(zipFile)); ZipEntry entry = zin.getNextEntry(); while (entry != null) { String name = entry.getName(); boolean notInFiles = true; for (File f : files) { if (f.getName().equals(name)) { notInFiles = false; break; } } if (notInFiles) { // Add ZIP entry to output stream. out.putNextEntry(new ZipEntry(name)); // Transfer bytes from the ZIP file to the output file int len; while ((len = zin.read(buf)) > 0) { out.write(buf, 0, len); } } entry = zin.getNextEntry(); } // Close the streams zin.close(); // Compress the files if (files != null) { for (File file : files) { InputStream in = new FileInputStream(file); // Add ZIP entry to output stream. 
out.putNextEntry(new ZipEntry(file.getName())); // Transfer bytes from the file to the ZIP file int len; while ((len = in.read(buf)) > 0) { out.write(buf, 0, len); } // Complete the entry out.closeEntry(); in.close(); } // Complete the ZIP file } tempFile.delete(); out.close(); } public static void main(String[] args) throws IOException { final String zipArchivePath = "c:/temp/archive.zip"; final String tempFilePath = "c:/temp/data.txt"; final String resultZipFile = "c:/temp/resultingArchive.zip"; File zipArchive = new File(zipArchivePath); FileUtils.touch(zipArchive); File tempFile = new File(tempFilePath); FileUtils.writeStringToFile(tempFile, "hello world"); addOrUpdate(zipArchive, tempFile); //archive.zip exists and contains a compressed data.txt that can be read using winrar //now simulate writing of the zip into a in memory cache String archiveText = FileUtils.readFileToString(zipArchive); FileUtils.writeStringToFile(new File(resultZipFile), archiveText); //resultingArchive.zip exists, contains a compressed data.txt, but it can not //be read using winrar: CRC failed in data.txt. The file is corrupt } }
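    The corruption in the sample comes from reading the zip's bytes back through a character set: FileUtils.readFileToString decodes binary data as text and mangles it. A common way to round-trip the archive through a String-only API is Base64. A minimal sketch of that idea (java.util.Base64 needs Java 8+; commons-codec's Base64 class does the same job on older JDKs):

      import java.io.File;
      import java.io.IOException;
      import java.nio.file.Files;
      import java.util.Base64;

      public class ZipAsString {
          // Step 3: turn the finished data.zip into a String that survives storage.
          public static String toRepositoryString(File zip) throws IOException {
              byte[] raw = Files.readAllBytes(zip.toPath());     // exact binary content
              return Base64.getEncoder().encodeToString(raw);
          }

          // Step 5: recover a byte-identical archive from the stored String.
          public static void fromRepositoryString(String stored, File dest) throws IOException {
              Files.write(dest.toPath(), Base64.getDecoder().decode(stored));
          }
      }

    Swapping the readFileToString/writeStringToFile pair in main for these two calls keeps the archive byte-identical, so the resulting zip opens normally.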

    Read the article

  • Multi-part gzip file random access (in Java)

    - by toluju
    This may fall in the realm of "not really feasible" or "not really worth the effort" but here goes. I'm trying to randomly access records stored inside a multi-part gzip file. Specifically, the files I'm interested in are compressed Heritrix ARC files. (In case you aren't familiar with multi-part gzip files, the gzip spec allows multiple gzip streams to be concatenated in a single gzip file. They do not share any dictionary information; it is simple binary appending.) I'm thinking it should be possible to do this by seeking to a certain offset within the file, then scanning for the gzip magic header bytes (i.e. 0x1f8b, as per the RFC), and attempting to read the gzip stream from the following bytes. The problem with this approach is that those same bytes can appear inside the actual data as well, so scanning for those bytes can lead to an invalid position to start reading a gzip stream from. Is there a better way to handle random access, given that the record offsets aren't known a priori?
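    A rough Java sketch of the probe step described above (window size and I/O details are illustrative assumptions): after locating the magic bytes, actually trying to inflate a little data from that position weeds out most false positives, because a stray 0x1f8b inside compressed data will usually fail header validation or inflation immediately.

      import java.io.ByteArrayInputStream;
      import java.io.IOException;
      import java.io.RandomAccessFile;
      import java.util.zip.GZIPInputStream;

      public class GzipMemberProbe {
          // Returns true if a gzip member appears to start at 'offset'.
          static boolean probe(RandomAccessFile raf, long offset) throws IOException {
              byte[] window = new byte[64 * 1024];        // fixed window to validate against
              raf.seek(offset);
              int n = raf.read(window);
              if (n < 2 || (window[0] & 0xff) != 0x1f || (window[1] & 0xff) != 0x8b) {
                  return false;                           // no gzip magic here
              }
              try (GZIPInputStream gz =
                       new GZIPInputStream(new ByteArrayInputStream(window, 0, n))) {
                  byte[] buf = new byte[8192];
                  return gz.read(buf) > 0;                // header parsed and some data inflated
              } catch (IOException e) {
                  return false;                           // the magic bytes were a false positive
              }
          }
      }

    Since each ARC record's gzip member starts right where the previous one ends, a single sequential pass that records member start offsets gives an exact index and avoids probing altogether.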

    Read the article

  • Efficiently storing a list of prime numbers

    - by eSKay
    This article says: Every prime number can be expressed as 30k±1, 30k±7, 30k±11, or 30k±13 for some k. That means we can use eight bits per thirty numbers to store all the primes; a million primes can be compressed to 33,334 bytes. "That means we can use eight bits per thirty numbers to store all the primes" This "eight bits per thirty numbers" would be for k, correct? But each k value will not necessarily take up just one bit. Shouldn't it be eight k values instead? "a million primes can be compressed to 33,334 bytes" I am not sure how this is true. We need to indicate two things: (1) the VALUE of k (which can be arbitrarily large) and (2) the STATE, one of the eight states (-13, -11, -7, -1, 1, 7, 11, 13). I am not following how 33,334 bytes was arrived at, but I can say one thing: as the prime numbers become larger and larger in value, we will need more space to store the value of k. How, then, can we fix it at 33,334 bytes?
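    One reading of the article's arithmetic, as a rough Java sketch (assuming "a million primes" means "the primes below one million"): each byte covers a block of thirty integers, with one bit per allowed residue. The byte's position already encodes k, so no separate value of k is ever stored, and ceil(1,000,000 / 30) = 33,334 bytes covers the whole range.

      import java.util.BitSet;

      public class Mod30PrimeBitmap {
          // The eight residues mod 30 that a prime > 5 can have (30k +/- 1, 7, 11, 13).
          static final int[] RESIDUES = {1, 7, 11, 13, 17, 19, 23, 29};

          public static byte[] encode(int limit) {
              // Plain sieve of Eratosthenes; a set bit means "composite".
              BitSet composite = new BitSet(limit + 1);
              for (int p = 2; (long) p * p <= limit; p++) {
                  if (!composite.get(p)) {
                      for (long m = (long) p * p; m <= limit; m += p) {
                          composite.set((int) m);
                      }
                  }
              }
              byte[] out = new byte[(limit + 29) / 30];        // one byte per block of 30
              for (int block = 0; block < out.length; block++) {
                  for (int bit = 0; bit < 8; bit++) {
                      int n = block * 30 + RESIDUES[bit];
                      if (n > 1 && n <= limit && !composite.get(n)) {
                          out[block] |= (1 << bit);            // mark this residue as prime
                      }
                  }
              }
              return out;
          }
      }

    encode(1000000) returns a 33,334-byte array, matching the article's figure; the primes 2, 3 and 5 fall outside the eight residues and have to be noted separately.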

    Read the article

  • Compressing xls content with apache deflate module

    - by Clinton Bosch
    I am trying to compress an Excel spreadsheet being sent from my application using Apache's deflate module. I have added the following line to my sites-enabled file: AddOutputFilterByType DEFLATE text/html text/plain text/xml text/css text/javascript application/excel But it seems to make the response data bigger??? Using Firebug, without the module I downloaded the xls spreadsheet from the application and it downloaded 100 KB of data; the file size once on the filesystem was also 100 KB as expected. Once I enabled the deflate module as described above and repeated the process, the amount of data downloaded was 295 KB?? but the file was still only 100 KB once saved on the filesystem. As an experiment I manually gzipped the saved xls file and it compressed to 20 KB. What am I doing wrong here? Using deflate (Firebug output): 200 OK xxxxxxx.co.za 293 KB 4.43s Response Headers: Date: Tue, 03 Nov 2009 13:01:43 GMT, Server: Apache/2.2.4 (Ubuntu) mod_jk/1.2.23 PHP/5.2.3-1ubuntu6.4 mod_ssl/2.2.4 OpenSSL/0.9.8e, Content-Disposition: attachment; filename="Employee List.xls", Vary: Accept-Encoding, Content-Encoding: gzip, Content-Type: application/excel Without deflate (Firebug output): 200 OK xxxxxxxx.co.za 100 KB 3.46s Response Headers: Date: Tue, 03 Nov 2009 13:06:00 GMT, Server: Apache/2.2.4 (Ubuntu) mod_jk/1.2.23 PHP/5.2.3-1ubuntu6.4 mod_ssl/2.2.4 OpenSSL/0.9.8e, Content-Disposition: attachment; filename="Employee List.xls", Content-Length: 102912, Content-Type: application/excel

    Read the article

  • How do I compute the approximate entropy of a bit string?

    - by dreeves
    Is there a standard way to do this? Googling -- "approximate entropy" bits -- uncovers multiple academic papers but I'd like to just find a chunk of pseudocode defining the approximate entropy for a given bit string of arbitrary length. (In case this is easier said than done and it depends on the application, my application involves 16,320 bits of encrypted data (cyphertext). But encrypted as a puzzle and not meant to be impossible to crack. I thought I'd first check the entropy but couldn't easily find a good definition of such. So it seemed like a question that ought to be on StackOverflow! Ideas for where to begin with de-cyphering 16k random-seeming bits are also welcome...) See also this related question: http://stackoverflow.com/questions/510412/what-is-the-computer-science-definition-of-entropy
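    One common first cut, sketched roughly in Java (the chunk size of one byte is an assumption): estimate the Shannon entropy of the distribution of fixed-size chunks of the string and compare it to the maximum. Note that "approximate entropy" in the literature (Pincus's ApEn) is a different, more involved statistic computed over sliding windows.

      public class ChunkEntropy {
          // Shannon entropy of the byte distribution, in bits per byte (max 8.0).
          public static double bitsPerByte(byte[] data) {
              int[] counts = new int[256];
              for (byte b : data) {
                  counts[b & 0xff]++;
              }
              double h = 0.0;
              for (int c : counts) {
                  if (c == 0) continue;
                  double p = (double) c / data.length;
                  h -= p * (Math.log(p) / Math.log(2));   // -sum p * log2(p)
              }
              return h;
          }
      }

    A result close to 8.0 bits per byte means the 16,320 bits look uniformly random at the byte level; repeating the measurement over 2-bit or 4-bit chunks, or on the output of a general-purpose compressor, gives further rough estimates.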

    Read the article

  • Compressibility Example

    - by user285726
    From my algorithms textbook: The annual county horse race is bringing in three thoroughbreds who have never competed against one another. Excited, you study their past 200 races and summarize these as probability distributions over four outcomes: first (“first place”), second, third, and other. The outcome probabilities (Aurora, Whirlwind, Phantasm) are: first (0.15, 0.30, 0.20), second (0.10, 0.05, 0.30), third (0.70, 0.25, 0.30), other (0.05, 0.40, 0.20). Which horse is the most predictable? One quantitative approach to this question is to look at compressibility. Write down the history of each horse as a string of 200 values (first, second, third, other). The total number of bits needed to encode these track-record strings can then be computed using Huffman’s algorithm. This works out to 290 bits for Aurora, 380 for Whirlwind, and 420 for Phantasm (check it!). Aurora has the shortest encoding and is therefore in a strong sense the most predictable. How did they get 420 for Phantasm? I keep getting 400 bits, like so: combine first and other = 0.4, combine second and third = 0.6, and end up with 2 bits encoding each position. Is there something I've misunderstood about the Huffman encoding algorithm? Textbook available here: http://www.cs.berkeley.edu/~vazirani/algorithms.html (page 156).
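    A quick way to check the arithmetic is to rebuild each Huffman tree from the table's frequencies (probability times 200 races) and sum weight times codeword depth. The rough sketch below reproduces 290 and 380, and gives 400 for Phantasm's distribution, which is what an optimal Huffman tree yields and matches the asker's figure:

      import java.util.PriorityQueue;

      public class HorseHuffman {
          // Total encoded length in bits for the given outcome frequencies.
          static int huffmanBits(int[] counts) {
              PriorityQueue<Integer> pq = new PriorityQueue<Integer>();
              for (int c : counts) {
                  pq.add(c);
              }
              int total = 0;
              while (pq.size() > 1) {
                  int merged = pq.poll() + pq.poll();
                  total += merged;          // every leaf under this node gains one bit
                  pq.add(merged);
              }
              return total;
          }

          public static void main(String[] args) {
              System.out.println("Aurora:    " + huffmanBits(new int[]{30, 20, 140, 10}));  // 290
              System.out.println("Whirlwind: " + huffmanBits(new int[]{60, 10, 50, 80}));   // 380
              System.out.println("Phantasm:  " + huffmanBits(new int[]{40, 60, 60, 40}));   // 400
          }
      }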

    Read the article

  • Manual alternative to mod_deflate

    - by Bobby Jack
    Say I don't have mod_deflate compiled into apache, and I don't feel like recompiling right now. What are the downsides to a manual approach, e.g. something like: AddEncoding x-gzip .gz RewriteCond %{HTTP_ACCEPT_ENCODING} gzip RewriteRule ^/css/styles.css$ /css/styles.css.gz (Note: I'm aware that the specifics of that RewriteCond need to be tweaked slightly)

    Read the article

  • Updating gzip library in jre

    - by Sarmun
    Is there a way to update the gzip library that the JRE uses? There is a bug in the gzip library used by the latest JRE; it has been fixed in a later version of the library, so I would like to make the latest JRE work by updating just the gzip part.

    Read the article

  • How do you get Lighttpd to compress CodeIgniter's "clean urls"?

    - by ocdcoder
    I was looking at PageSpeed on my test website and noticed that Lighttpd wasn't compressing my HTML (but was compressing my JavaScript and CSS files). I'm assuming this is because I'm using CodeIgniter and its clean URL system: since the requests don't have file extensions, Lighttpd doesn't have a rule to compress them. That being the case, how do I get Lighttpd to compress my HTML? Is this something I shouldn't be doing? Or something I need to specially configure Lighttpd for?

    Read the article

  • Random access gzip stream

    - by jkff
    I'd like to be able to do random access into a gzipped file. I can afford to do some preprocessing on it (say, build some kind of index), provided that the result of the preprocessing is much smaller than the file itself. Any advice? My thoughts were: (1) Hack on an existing gzip implementation and serialize its decompressor state every, say, 1 megabyte of compressed data. Then to do random access, deserialize the decompressor state and read from the megabyte boundary. This seems hard, especially since I'm working with Java and I couldn't find a pure-java gzip implementation :( (2) Re-compress the file in chunks of 1 MB and do the same as above. This has the disadvantage of doubling the required disk space. (3) Write a simple parser of the gzip format that doesn't do any decompressing and only detects and indexes block boundaries (if there even are any blocks: I haven't yet read the gzip format description).
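    A rough Java sketch of the second idea (chunk size and file handling are illustrative assumptions): write the data as independently compressed, concatenated gzip members and remember the compressed offset at which each member starts. A lookup is then a seek to the recorded offset followed by a GZIPInputStream over just that member.

      import java.io.*;
      import java.util.ArrayList;
      import java.util.List;
      import java.util.zip.GZIPOutputStream;

      public class ChunkedGzipWriter {
          static final int CHUNK = 1 << 20;   // 1 MB of uncompressed data per member

          // Writes 'src' as concatenated gzip members; returns the compressed
          // offset at which each member (each CHUNK of uncompressed data) begins.
          public static List<Long> writeChunked(InputStream src, File dest) throws IOException {
              List<Long> memberOffsets = new ArrayList<Long>();
              byte[] buf = new byte[CHUNK];
              long written = 0;
              FileOutputStream out = new FileOutputStream(dest);
              try {
                  int n;
                  while ((n = readFully(src, buf)) > 0) {
                      memberOffsets.add(written);
                      ByteArrayOutputStream chunk = new ByteArrayOutputStream();
                      GZIPOutputStream gz = new GZIPOutputStream(chunk);
                      gz.write(buf, 0, n);
                      gz.close();                     // finish this member
                      chunk.writeTo(out);
                      written += chunk.size();
                  }
              } finally {
                  out.close();
              }
              return memberOffsets;                   // persist this index next to the file
          }

          // Reads until the buffer is full or the stream ends; returns bytes read.
          private static int readFully(InputStream in, byte[] buf) throws IOException {
              int off = 0;
              while (off < buf.length) {
                  int n = in.read(buf, off, buf.length - off);
                  if (n < 0) break;
                  off += n;
              }
              return off;
          }
      }

    Reading chunk i back is then a RandomAccessFile.seek to the recorded offset plus a GZIPInputStream over the bytes from that point, and the index itself is tiny (one long per megabyte of data). Since the members are plain concatenated gzip streams, the file as a whole is still valid gzip per the spec.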

    Read the article

  • J2ME Reduce Image color-depth/ Compress Image size

    - by updateraj
    Hi, I need to transmit the image from the mobile phone to the server. I am able to reduce the image screen size but not the memory size. I understand I have to deal with the color depth. J2ME does not seem to offer the scaling methods that are available in J2SE: Image rescaled = image.getScaledInstance(thumbWidth, thumbHeight, Image.SCALE_AREA_AVERAGING); BufferedImage biRescaled = toBufferedImage(rescaled, thumbWidth, thumbHeight, BufferedImage.TYPE_INT_RGB); How would I tackle this? I would like to reduce the image memory size before I transmit it to the server. Thank you
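    MIDP 2.0 has no getScaledInstance(), but the pixels can be resampled by hand with Image.getRGB and Image.createRGBImage. A rough sketch of a nearest-neighbour downscale (the class and loop structure are illustrative): note this only shrinks the pixel dimensions, so to shrink the transmitted byte count the result still has to be re-encoded (PNG/JPEG via a library, or server-side), since MIDP itself provides no image encoder.

      import javax.microedition.lcdui.Image;

      public class ImageScaler {
          public static Image scale(Image src, int newW, int newH) {
              int srcW = src.getWidth();
              int srcH = src.getHeight();
              int[] srcPixels = new int[srcW * srcH];
              src.getRGB(srcPixels, 0, srcW, 0, 0, srcW, srcH);

              int[] dstPixels = new int[newW * newH];
              for (int y = 0; y < newH; y++) {
                  int sy = y * srcH / newH;                    // nearest source row
                  for (int x = 0; x < newW; x++) {
                      int sx = x * srcW / newW;                // nearest source column
                      dstPixels[y * newW + x] = srcPixels[sy * srcW + sx];
                  }
              }
              return Image.createRGBImage(dstPixels, newW, newH, true);
          }
      }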

    Read the article

  • Oracle output: cursor, file, or very long string?

    - by Klay
    First, the setup: I have a table in an Oracle10g database with spatial columns. I need to be able to pass in a spatial reference so that I can reproject the geometry into an arbitrary coordinate system. Ultimately, I need to compress the results of this projection to a zip file and make it available for download through a Silverlight project. I would really appreciate ideas as to the best way to accomplish this. In the examples below, the SRID is the Spatial reference ID integer used to convert the geometric points into a new coordinate system. In particular, I can see a couple of possibilities. There are many more, but this is an idea of how I'm thinking: a) Pass SRID to a dynamic view -- perform projection, output a cursor -- send cursor to UTL_COMPRESS -- write output to a file (somehow) -- send URL to Silverlight app b) Use SRID to call Oracle function from Silverlight app -- perform projection, output a string -- build strings into a file -- compress file using SharpZipLib library in .NET -- send bytestream back to Silverlight app I've done the first two steps of b), and the conversion of 100 points took about 7 seconds, which is unacceptably slow. I'm hoping it would be faster doing the processing totally in Oracle. If anyone can see potential problems with either way of doing this, or can suggest a better way, it would be very helpful. Thanks!

    Read the article

  • is jQuery 1.4.2 compatible with Closure Compiler?

    - by Mohammad
    According to the official release statement, version 1.4 has been re-written to be compressed with Closure Compiler, yet when I use the online version of Closure Compiler I get 130 warnings. This is the code I use: // ==ClosureCompiler== // @compilation_level ADVANCED_OPTIMIZATIONS // @output_file_name default.js // @code_url http://ajax.googleapis.com/ajax/libs/jquery/1.4.2/jquery.js // ==/ClosureCompiler== And as far as I know you get the real benefit of Closure Compiler if you include the library with your code also, so it removes the unused functions. Yet my testing shows that I can't get any further than compressing the library itself. What am I doing wrong? Any kind of insight will be much appreciated.

    Read the article

  • Compile/use unrar C++ source for iphone app?

    - by greypoint
    Writing an app that will include the ability to decompress zip and rar files. I think I'm OK on how to handle the .zips but .rars seem a little more trouble. I noticed that rarlabs has source available but it's C++. Is there a way to compile, wrap or otherwise use this code within an iPhone app? Reference: http://www.rarlab.com/rar_add.htm Open to alternate suggestions on how to handle .rar files as well. I'm still pretty much a newbie so please explain in small words :)

    Read the article

  • Very basic question about Hadoop and compressed input files

    - by Luis Sisamon
    I have started to look into Hadoop. If my understanding is right, I could process a very big file and it would get split over different nodes; however, if the file is compressed then it cannot be split and would need to be processed by a single node (effectively destroying the advantage of running MapReduce over a cluster of parallel machines). My question is, assuming the above is correct: is it possible to split a large file manually into fixed-size chunks, or daily chunks, compress them, and then pass a list of compressed input files to perform a MapReduce job?

    Read the article

  • Counting common Bytes, Words and Double Words.

    - by Recursion
    I am scanning over a large amount of data and looking for common trends in it. Every time I meet a recurrence of a unit, I want to increment its count. What is the best data structure or way to hold this data? I need to be able to search it quickly, and also have a count with each unit of data.
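    A rough sketch of the usual structure for this (names are illustrative): a hash map from the unit's value to its count gives fast lookup with a count attached to every unit. For bytes and 16-bit words a flat array indexed by the value (256 or 65,536 slots) is even simpler; for 32-bit double words the map is normally the practical choice.

      import java.util.HashMap;
      import java.util.Map;

      public class UnitCounter {
          // Count how often each 32-bit double word occurs in the data.
          public static Map<Long, Long> countDwords(byte[] data) {
              Map<Long, Long> counts = new HashMap<Long, Long>();
              for (int i = 0; i + 4 <= data.length; i += 4) {
                  long dword = ((data[i] & 0xffL) << 24) | ((data[i + 1] & 0xffL) << 16)
                             | ((data[i + 2] & 0xffL) << 8) | (data[i + 3] & 0xffL);
                  Long prev = counts.get(dword);
                  counts.put(dword, prev == null ? 1L : prev + 1L);   // increment its count
              }
              return counts;
          }
      }

    Sorting the map entries by count at the end (or keeping a small heap of the top entries) then surfaces the most common units.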

    Read the article

  • Renaming ICSharpCode.SharpZipLib.dll

    - by John B.
    Hi, well I am having a problem renaming the ICSharpCode.SharpZipLib.dll file to anything else. I am trying to shorten the file name. I reference the assembly in the project, but when the program reaches the statements where I use the library, it spawns an error that it could not find the assembly or file 'ICSharpCode.SharpZipLib'. When I change the file name back to ICSharpCode.SharpZipLib.dll the application works normally. So, is there any way to change the file name? Also, am I allowed to change it without violating the license (I am going to use it in a commercial application)? Thanks.

    Read the article

  • Compressing plaintext in JavaScript?

    - by AgileMeansDoAsLittleAsPossible
    I have a simple Notepad-like web application I'm making for fun. When you save a document, the contents of a <textarea> are sent to the server via Ajax and persisted in a database. Let's just say for shits and giggles that we need to compress the contents of the <textarea> before sending it because we're on a 2800 baud modem. Are there JavaScript libraries to do this? How well does plain text compress in the first place?

    Read the article

  • Help with writing a php code that repeats itself per array value

    - by Mohammad
    Hi, I'm using Closure Compiler to compress and join a few JavaScript files. The syntax is something like this: $c = new PhpClosure(); $c->add("JavaScriptA.js") ->add("JavaScriptB.js") ->write(); How could I make it systematically add more files from an array? Let's say for each array element in $file = array('JavaScriptA.js','JavaScriptB.js','JavaScriptC.js',..) it would execute the following code: $c = new PhpClosure(); $c->add("JavaScriptA.js") ->add("JavaScriptB.js") ->add("JavaScriptC.js") ->add ... ->write(); Thank you so much in advance!

    Read the article

  • min or gzip, which is better?

    - by Nimbuz
    jquery-1.4.2.min.js is 71.8KB Same file compressed through this tool, with gzip enabled, becomes 32.9 KB Which is better? If latter, why doesn't jQuery provide a packed file too instead of just uncompressed and min versions? Thanks

    Read the article

  • Most flexible minimizer/compressor for ASP.NET MVC 2?

    - by AlexanderN
    From your experience, what's the most flexible minimizer/compressor (JS+CSS) for ASP.NET MVC you've dealt with? So far mbcompress doesn't seem to be too MVC friendly weboptimizer.codeplex.com lacks documentation clientdependency.codeplex.com is still in beta compress2 seems like a good candidate, but haven't tried it yet mvcscriptmanager only combines and compresses javascript but not CSS By flexible I mean Choose what should be compressed, minified, and combined Add exceptions. E.g. if debug don't compress XYZ.JS or don't minify ABC.CSS Caching In the end, it should help offer the best YSLOW score. If you know of any other assemblies out there, please list them also.

    Read the article

< Previous Page | 14 15 16 17 18 19 20 21 22 23 24 25  | Next Page >