Search Results

Search found 1269 results on 51 pages for 'lossy compression'.


  • Will HttpResponse.Filter buffer all the data before it starts sending?

    - by vtortola
    Hi, a user posted this article about how to use HttpResponse.Filter to compress large amounts of data. But what will happen if I try to transfer a 4 GB file? Will it load the whole file in memory in order to compress it, or will it compress it chunk by chunk? I mean, I'm doing this right now:

        public void GetFile(HttpResponse response)
        {
            String fileName = "example.iso";
            response.ClearHeaders();
            response.ClearContent();
            response.ContentType = "application/octet-stream";
            response.AppendHeader("Content-Disposition", "attachment; filename=" + fileName);
            response.AppendHeader("Content-Encoding", "deflate");
            using (FileStream fs = new FileStream(Path.Combine(HttpContext.Current.Server.MapPath("~/App_Data"), fileName), FileMode.Open))
            using (DeflateStream ds = new DeflateStream(response.OutputStream, CompressionMode.Compress))
            {
                Byte[] buffer = new Byte[4096];
                Int32 bytesRead;
                while ((bytesRead = fs.Read(buffer, 0, buffer.Length)) > 0)
                {
                    ds.Write(buffer, 0, bytesRead);
                    response.Flush();
                }
            }
        }

    So at the same time I'm reading, I'm compressing and sending. What I want to know is whether HttpResponse.Filter does the same thing, or whether it loads the whole file in memory in order to compress it. Cheers.

    Read the article

  • Problems compiling peazip on OSX

    - by Yansky
    I'm having some problems compiling PeaZip on OS X (10.6). I emailed the PeaZip developer and he said he probably couldn't help me too much, as the error seems to be OS X specific and he no longer has access to an OS X machine. The compiler I'm using is Lazarus, as the source is in Pascal. The actual compile seems to go OK, but when I run the peazip.app program launcher, I get the following error: http://img.photobucket.com/albums/v215/thegooddale/Screen-shot-2010-05-22-at-71907-PM.png Here is the app launcher that the compile made: http://forboden.com/coding/peazip.app.zip - you can use an unzip program to look at the files inside (i.e. unzip it twice). I also tried just running the peazip unix executable that was produced after the compile from the terminal, and I got this: http://img.photobucket.com/albums/v215/thegooddale/Screen-shot-2010-05-22-at-72148-PM.png Here are the messages from the Lazarus compile log for PeaZip: http://pastebin.com/qK4bdncL (I asked on the Lazarus forums and they said I can just ignore those "ld: warning: unknown stabs type" warnings). Here is the info from the project_peach.compiled file:

        <?xml version="1.0"?>
        <CONFIG>
          <Compiler Value="/usr/local/bin/ppc386" Date="1238949773"/>
          <Params Value=" -MObjFPC -Sgi -O1 -gl -k-framework -kCarbon -k-framework -kOpenGL -k'-dylib_file' -k'/System/Library/Frameworks/OpenGL.framework/Versions/A/Libraries/libGL.dylib:/System/Library/Frameworks/OpenGL.framework/Versions/A/Libraries/libGL.dylib' -WG -vewnhi -l -Fu/Users/yansky/Desktop/peazip-3.1.src/res/themes/crystalc/ -Fu/Developer/lazarus/components/synedit/units/i386-darwin/ -Fu/Developer/lazarus/ideintf/units/i386-darwin/ -Fu/Developer/lazarus/lcl/units/i386-darwin/ -Fu/Developer/lazarus/lcl/units/i386-darwin/carbon/ -Fu/Developer/lazarus/packager/units/i386-darwin/ -Fu/Users/yansky/Desktop/peazip-3.1.src/ -Fu. -opeazip -dLCL -dLCLcarbon project_peach.lpr"/>
        </CONFIG>

    I guess there's little chance anyone here has experience with Pascal and Lazarus, since it's not that popular a language and the compiler is still in beta, but I thought I would post in the hope that someone might point me in the right general direction as to where/how the peazip.app launcher is breaking.

    Read the article

  • Sending compressed data via socket

    - by Pizza
    Hi, I have to write a log server in Java, and one task is to send the data compressed. Right now I am sending it line by line in plain text, but I must compress it. The server handles "HTTP-like" requests. For example, I can get a log by sending "GET xxx.log". This establishes a TCP connection to the server; the server responds with a header and the log, then closes the connection. The client reads line by line and analyzes each log entry. I have tried several approaches without success. My main problem is that I don't know where each line ends (on the client side). Any ideas?
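
    One way to keep entry boundaries unambiguous is to frame each compressed record with its byte length. A minimal sketch of that idea in Java (the framing scheme and class name are my illustration, not from the thread): each log line is deflated on its own and sent as [4-byte length][compressed bytes], so the client always knows where one entry ends and the next begins.

        import java.io.*;
        import java.util.zip.DeflaterOutputStream;
        import java.util.zip.InflaterInputStream;

        public class FramedLogSender {

            // server side: compress one log line and write it as a length-prefixed frame
            public static void sendLine(DataOutputStream out, String line) throws IOException {
                ByteArrayOutputStream buf = new ByteArrayOutputStream();
                DeflaterOutputStream def = new DeflaterOutputStream(buf);
                def.write(line.getBytes("UTF-8"));
                def.close();                        // finishes the deflate stream
                byte[] compressed = buf.toByteArray();
                out.writeInt(compressed.length);    // frame header: payload size
                out.write(compressed);              // frame body: one compressed entry
                out.flush();
            }

            // client side: read exactly one frame and inflate it back to a line
            public static String readLine(DataInputStream in) throws IOException {
                byte[] compressed = new byte[in.readInt()];
                in.readFully(compressed);
                InflaterInputStream inf =
                        new InflaterInputStream(new ByteArrayInputStream(compressed));
                ByteArrayOutputStream plain = new ByteArrayOutputStream();
                byte[] chunk = new byte[512];
                int n;
                while ((n = inf.read(chunk)) > 0) {
                    plain.write(chunk, 0, n);
                }
                return plain.toString("UTF-8");
            }
        }

    Wrap the socket streams in DataOutputStream/DataInputStream on each side and the line boundaries survive compression untouched.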

    Read the article

  • Naming 'lossless' more appropriately

    - by LarsOn
    Setting technical details aside and discussing only the naming first: 'lossless' is a negatively framed word (a double negative, really), and for that reason alone it deserves a more appropriate name. 'Lossless' could just as well describe nothing at all. A name like 'all intact' would describe the property in a more concrete, physical way, the way terms for sounds do. Stated more technically, a copy might even contain more information than the original. Does a method like that have a name? If so, what should I refer to it as? If not, can we name it, or at least discuss how such handling relates?

    Read the article

  • Converting stream of jpg files to FLV stream

    - by Mark
    I work with a Panasonic hcm280a camera that can be controlled by my software. It generates a stream of JPEG files that are huge, and I want to convert this stream to an FLV stream, preferably with a good compression ratio. Does FFmpeg do that? I am basically looking for off-the-shelf open source software (or commercial software) that can generate that streaming media for me. Again, my input is a stream of JPEG files that come from the camera server. Any insight or comment would be greatly appreciated. Thanks

    Read the article

  • Compressing a web service response for jQuery

    - by SirDemon
    I'm attempting to gzip a JSON response from an ASMX web service so it can be consumed on the client side by jQuery. My web.config already has httpCompression set up like so (I'm using IIS 7):

        <httpCompression directory="%SystemDrive%\inetpub\temp\IIS Temporary Compressed Files"
                         staticCompressionDisableCpuUsage="90" staticCompressionEnableCpuUsage="60"
                         dynamicCompressionDisableCpuUsage="80" dynamicCompressionEnableCpuUsage="50">
          <dynamicTypes>
            <add mimeType="application/javascript" enabled="true" />
            <add mimeType="application/x-javascript" enabled="true" />
            <add mimeType="text/css" enabled="true" />
            <add mimeType="video/x-flv" enabled="true" />
            <add mimeType="application/x-shockwave-flash" enabled="true" />
            <add mimeType="text/javascript" enabled="true" />
            <add mimeType="text/*" enabled="true" />
            <add mimeType="application/json; charset=utf-8" enabled="true" />
          </dynamicTypes>
          <staticTypes>
            <add mimeType="application/javascript" enabled="true" />
            <add mimeType="application/x-javascript" enabled="true" />
            <add mimeType="text/css" enabled="true" />
            <add mimeType="video/x-flv" enabled="true" />
            <add mimeType="application/x-shockwave-flash" enabled="true" />
            <add mimeType="text/javascript" enabled="true" />
            <add mimeType="text/*" enabled="true" />
          </staticTypes>
          <scheme name="gzip" dll="%Windir%\system32\inetsrv\gzip.dll" />
        </httpCompression>
        <urlCompression doDynamicCompression="true" doStaticCompression="true" />

    Through Fiddler I can see that compression works fine for normal .aspx pages and other content. The jQuery AJAX request and response work as they should, only nothing gets compressed. What am I missing?

    Read the article

  • GZIP Java vs .NET

    - by Jim Jones
    I'm using the following Java code to compress/decompress byte[] to/from GZIP. First, text bytes to gzip bytes:

        public static byte[] fromByteToGByte(byte[] bytes) {
            ByteArrayOutputStream baos = null;
            try {
                ByteArrayInputStream bais = new ByteArrayInputStream(bytes);
                baos = new ByteArrayOutputStream();
                GZIPOutputStream gzos = new GZIPOutputStream(baos);
                byte[] buffer = new byte[1024];
                int len;
                while ((len = bais.read(buffer)) >= 0) {
                    gzos.write(buffer, 0, len);
                }
                gzos.close();
                baos.close();
            } catch (IOException e) {
                e.printStackTrace();
            }
            return (baos.toByteArray());
        }

    Then the method that goes the other way, compressed bytes to uncompressed bytes:

        public static byte[] fromGByteToByte(byte[] gbytes) {
            ByteArrayOutputStream baos = null;
            ByteArrayInputStream bais = new ByteArrayInputStream(gbytes);
            try {
                baos = new ByteArrayOutputStream();
                GZIPInputStream gzis = new GZIPInputStream(bais);
                byte[] bytes = new byte[1024];
                int len;
                while ((len = gzis.read(bytes)) > 0) {
                    baos.write(bytes, 0, len);
                }
            } catch (IOException e) {
                e.printStackTrace();
            }
            return (baos.toByteArray());
        }

    Do you think it has any effect that I'm not writing out to a gzip file? Also, I noticed that in the standard C# function, BitConverter reads the first four bytes, and then the MemoryStream Write function is called with a start point of 4 and a length of the input buffer length - 4. So does that affect the validity of the header? Jim
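
    The C# convention described sounds like the common MSDN-style sample that prefixes the gzip data with the uncompressed length as four little-endian bytes. On that assumption (an assumption about your C# code, not something GZipStream itself does), the Java side would skip those four bytes before inflating. A minimal sketch:

        import java.io.ByteArrayInputStream;
        import java.io.ByteArrayOutputStream;
        import java.io.IOException;
        import java.nio.ByteBuffer;
        import java.nio.ByteOrder;
        import java.util.zip.GZIPInputStream;

        public class DotNetGzipReader {
            // decodes a buffer laid out as [int32 uncompressed length][gzip bytes],
            // which is how the common .NET sample using BitConverter arranges it
            public static byte[] readDotNetBuffer(byte[] buffer) throws IOException {
                int expectedLength = ByteBuffer.wrap(buffer, 0, 4)
                        .order(ByteOrder.LITTLE_ENDIAN).getInt();  // BitConverter is little-endian
                GZIPInputStream gzis = new GZIPInputStream(
                        new ByteArrayInputStream(buffer, 4, buffer.length - 4));
                ByteArrayOutputStream out = new ByteArrayOutputStream(expectedLength);
                byte[] chunk = new byte[1024];
                int len;
                while ((len = gzis.read(chunk)) > 0) {
                    out.write(chunk, 0, len);
                }
                gzis.close();
                return out.toByteArray();
            }
        }

    Going the other way, you would prepend those four length bytes to whatever fromByteToGByte produces before handing it to the C# side.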

    Read the article

  • Machine Learning Algorithm for Predicting Order of Events?

    - by user213060
    Simple machine learning question; there are probably numerous ways to solve this. There is an infinite stream of 4 possible events: 'event_1', 'event_2', 'event_3', 'event_4'. The events do not come in completely random order. We will assume that there are some complex patterns to the order in which most events arrive, and that the rest of the events are just random. We do not know the patterns ahead of time, though. After each event is received, I want to predict what the next event will be, based on the order in which events have come in in the past. The predictor is then told what the next event actually was:

        predictor = new_predictor()
        prev_event = False
        while True:
            event = get_event()
            if prev_event is not False:
                predictor.last_event_was(prev_event)
            predicted_event = predictor.predict_next_event(event)
            prev_event = event

    The question arises of how long a history the predictor should maintain, since maintaining infinite history will not be possible. I'll leave this up to you to answer, but for practicality the answer can't be infinite. So I believe the predictions will have to be made with some kind of rolling history. Adding a new event and expiring an old one should therefore be rather efficient, and not require rebuilding the entire predictor model, for example. Specific code, instead of research papers, would add immense value to your responses. Python or C libraries are nice, but anything will do. Thanks! Update: what if more than one event can happen simultaneously on each round? Does that change the solution?
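
    As a starting point, here is a minimal sketch of a fixed-order Markov predictor with a crudely bounded memory (my suggestion, not from the post; the names are illustrative). It counts transitions from the last k events to the next one and predicts the most frequent continuation:

        import java.util.ArrayDeque;
        import java.util.Deque;
        import java.util.HashMap;
        import java.util.Map;

        public class MarkovPredictor {
            private final int order;                       // history length k
            private final int maxContexts;                 // crude cap on table size
            private final Deque<String> recent = new ArrayDeque<>();
            private final Map<String, Map<String, Integer>> counts = new HashMap<>();

            public MarkovPredictor(int order, int maxContexts) {
                this.order = order;
                this.maxContexts = maxContexts;
            }

            public String observeAndPredict(String event) {
                if (recent.size() == order) {
                    // record the transition: previous k events -> the event just seen
                    counts.computeIfAbsent(String.join("|", recent), c -> new HashMap<>())
                          .merge(event, 1, Integer::sum);
                    if (counts.size() > maxContexts) counts.clear();   // naive expiry
                    recent.removeFirst();                              // slide the window
                }
                recent.addLast(event);
                // predict the most frequent continuation of the current context
                Map<String, Integer> next = counts.get(String.join("|", recent));
                if (next == null) return event;            // unseen context: guess a repeat
                return next.entrySet().stream()
                           .max(Map.Entry.comparingByValue())
                           .get().getKey();
            }
        }

    Expiry here is deliberately crude (the table is simply cleared when it grows too large); a real rolling history would decrement counts as events fall out of the window. For simultaneous events, one cheap extension is to treat each distinct set of events as a single composite symbol.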

    Read the article

  • Smart reversing of compressed JavaScript with obscured variable & function names?

    - by Jerome WAGNER
    Hello, I want to know whether there exists a tool to help in reversing compressed JavaScript that has obscured variable names. I am not looking for a pretty-printing beautifier, but for a tool that actually knows how to change and propagate variable name choices. Let me be more specific:

    - some of the functions belong to the 'public' API, and I want to impose readable argument names in their prototypes
    - there are intermediary variables for document, window and other browser idioms

    I would like to give this knowledge to the tool and then let it create another JavaScript file in which that knowledge has been correctly propagated. Thanks, Jerome Wagner

    Read the article

  • Multiple Headers in asp.net

    - by digiguru
    I'm running code that seems to hit "AppendHeader" twice:

        Response.Filter = New DeflateStream(Response.Filter, CompressionMode.Compress, True)
        Response.AppendHeader("Content-encoding", "deflate")
        ...
        Response.AppendHeader("Content-encoding", "deflate")

    I have tried using the following:

        Response.Headers("Content-encoding") = "deflate"

    But it says "This operation requires IIS integrated pipeline mode." How do I check for a header's existence, and overwrite it rather than appending to it?

    Read the article

  • Using both chunked transfer encoding and gzip

    - by RadiantHeart
    I recently started using gzip on my site, and it worked like a charm on all browsers except Opera, which gives an error saying it could not decompress the content due to damaged data. From what I can gather from testing and googling, it might be a problem with using both gzip and chunked transfer encoding. The fact that there is no error when requesting small files, like CSS files, also points in that direction. Is this a known issue, or is there something else that I haven't thought about? Someone also mentioned that it could have something to do with sending a Content-Length header. Here is a simplified version of the most relevant part of my code:

        $contents = ob_get_contents();
        ob_end_clean();
        header('Content-Encoding: '.$encoding);
        print("\x1f\x8b\x08\x00\x00\x00\x00\x00");
        $size = strlen($contents);
        $contents = gzcompress($contents, 9);
        $contents = substr($contents, 0, $size);
        print($contents);
        exit();

    Read the article

  • Exclude debug javascript code during minification

    - by Tauren
    I'm looking into different ways to minify my JavaScript code, including the regular JSMin, Packer, and YUI solutions. I'm really interested in the new Google Closure Compiler, as it looks exceptionally powerful. I noticed that Dean Edwards' Packer has a feature to exclude lines of code that start with three semicolons. This is handy for excluding debug code. For instance:

        ;;; console.log("Starting process");

    I'm spending some time cleaning up my codebase and would like to add hints like this to easily exclude debug code. In preparation, I'd like to figure out whether this is the best solution or there are other techniques. Because I haven't chosen how to minify yet, I'd like to clean the code in a way that is compatible with whatever minifier I end up going with. So my questions are these (see also the sketch after this list):

    - Is using the semicolons a standard technique, or are there other ways to do it?
    - Is Packer the only solution that provides this feature?
    - Can the other solutions be adapted to work this way as well, or do they have alternative ways of accomplishing this?
    - I will probably start using Closure Compiler eventually. Is there anything I should do now that would prepare for it?
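
    Whichever minifier wins out, the three-semicolon convention is easy to honor in a standalone build step, which keeps you independent of any one tool. A minimal sketch (my illustration, not a feature of any particular minifier) that drops ";;;" lines before minification:

        import java.io.IOException;
        import java.nio.charset.StandardCharsets;
        import java.nio.file.Files;
        import java.nio.file.Path;
        import java.nio.file.Paths;
        import java.util.List;
        import java.util.stream.Collectors;

        public class DebugStripper {
            // copy source to target, dropping any line whose first non-blank chars are ";;;"
            public static void strip(Path source, Path target) throws IOException {
                List<String> kept = Files.readAllLines(source, StandardCharsets.UTF_8).stream()
                        .filter(line -> !line.trim().startsWith(";;;"))
                        .collect(Collectors.toList());
                Files.write(target, kept, StandardCharsets.UTF_8);
            }

            public static void main(String[] args) throws IOException {
                strip(Paths.get(args[0]), Paths.get(args[1]));
            }
        }

    Closure Compiler can also eliminate code that sits behind a compile-time constant it knows to be false, so debug blocks guarded that way disappear through ordinary dead-code removal.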

    Read the article

  • Best Way to automatically compress and minimize JavaScript files in an ASP.NET MVC app

    - by wgpubs
    So I have an ASP.NET MVC app that references a number of JavaScript files in various places (in the site master, and additional references in several views as well). I'd like to know if there is an automated way, and if so what the recommended approach is, for compressing and minimizing such references into a single .js file where possible. Such that this ...

        <script src="<%= ResolveUrl("~") %>Content/ExtJS/Ext.ux.grid.GridSummary/Ext.ux.grid.GridSummary.js" type="text/javascript"></script>
        <script src="<%= ResolveUrl("~") %>Content/ExtJS/ext.ux.rating/ext.ux.ratingplugin.js" type="text/javascript"></script>
        <script src="<%= ResolveUrl("~") %>Content/ExtJS/ext-starslider/ext-starslider.js" type="text/javascript"></script>
        <script src="<%= ResolveUrl("~") %>Content/ExtJS/ext.ux.dollarfield.js" type="text/javascript"></script>
        <script src="<%= ResolveUrl("~") %>Content/ExtJS/ext.ux.combobox.js" type="text/javascript"></script>
        <script src="<%= ResolveUrl("~") %>Content/ExtJS/ext.ux.datepickerplus/ext.ux.datepickerplus-min.js" type="text/javascript"></script>
        <script src="<%= ResolveUrl("~") %>Content/ExtJS/SessionProvider.js" type="text/javascript"></script>
        <script src="<%= ResolveUrl("~") %>Content/ExtJS/TabCloseMenu.js" type="text/javascript"></script>
        <script src="<%= ResolveUrl("~") %>Content/ActivityViewer/ActivityForm.js" type="text/javascript"></script>
        <script src="<%= ResolveUrl("~") %>Content/ActivityViewer/UserForm.js" type="text/javascript"></script>
        <script src="<%= ResolveUrl("~") %>Content/ActivityViewer/SwappedGrid.js" type="text/javascript"></script>
        <script src="<%= ResolveUrl("~") %>Content/ActivityViewer/Tree.js" type="text/javascript"></script>

    ... could be reduced to something like this ...

        <script src="<%= ResolveUrl("~") %>Content/MyViewPage-min.js" type="text/javascript"></script>

    Thanks

    Read the article

  • java: how to compress data into a String and uncompress data from the String

    - by Guillaume
    I want to put some compressed data into a remote repository. To put data on this repository I can only use a method that takes the name of the resource and its content as a String (like "data.txt" + "hello world"). The repository mocks a filesystem but is not one, so I cannot use File directly. I want to be able to do the following:

    - client sends a file 'data.txt' to the server
    - server compresses 'data.txt' into data.zip
    - server sends the content of data.zip to the repository
    - repository stores data.zip
    - client downloads data.zip from the repository and is able to open it with its favorite zip tool

    I have tried a lot of compression examples found on the web, but each time I send the data to the repository, my resulting zip file is corrupted. Here is a sample class, using the Zip*Stream classes, that emulates the repository and showcases my problem. The created zip file is working, but after its 'serialization' it gets corrupted. (The sample class uses jakarta commons-io.) Many thanks for your help.

        package zip;

        import java.io.File;
        import java.io.FileInputStream;
        import java.io.FileOutputStream;
        import java.io.IOException;
        import java.io.InputStream;
        import java.util.zip.ZipEntry;
        import java.util.zip.ZipInputStream;
        import java.util.zip.ZipOutputStream;

        import org.apache.commons.io.FileUtils;

        /**
         * Date: May 19, 2010 - 6:13:07 PM
         *
         * @author Guillaume AME.
         */
        public class ZipMe {

            public static void addOrUpdate(File zipFile, File... files) throws IOException {
                File tempFile = File.createTempFile(zipFile.getName(), null);
                // delete it, otherwise you cannot rename your existing zip to it
                tempFile.delete();
                boolean renameOk = zipFile.renameTo(tempFile);
                if (!renameOk) {
                    throw new RuntimeException("could not rename the file " + zipFile.getAbsolutePath()
                            + " to " + tempFile.getAbsolutePath());
                }
                byte[] buf = new byte[1024];
                ZipInputStream zin = new ZipInputStream(new FileInputStream(tempFile));
                ZipOutputStream out = new ZipOutputStream(new FileOutputStream(zipFile));
                ZipEntry entry = zin.getNextEntry();
                while (entry != null) {
                    String name = entry.getName();
                    boolean notInFiles = true;
                    for (File f : files) {
                        if (f.getName().equals(name)) {
                            notInFiles = false;
                            break;
                        }
                    }
                    if (notInFiles) {
                        // Add ZIP entry to output stream.
                        out.putNextEntry(new ZipEntry(name));
                        // Transfer bytes from the ZIP file to the output file
                        int len;
                        while ((len = zin.read(buf)) > 0) {
                            out.write(buf, 0, len);
                        }
                    }
                    entry = zin.getNextEntry();
                }
                // Close the streams
                zin.close();
                // Compress the files
                if (files != null) {
                    for (File file : files) {
                        InputStream in = new FileInputStream(file);
                        // Add ZIP entry to output stream.
                        out.putNextEntry(new ZipEntry(file.getName()));
                        // Transfer bytes from the file to the ZIP file
                        int len;
                        while ((len = in.read(buf)) > 0) {
                            out.write(buf, 0, len);
                        }
                        // Complete the entry
                        out.closeEntry();
                        in.close();
                    }
                    // Complete the ZIP file
                }
                tempFile.delete();
                out.close();
            }

            public static void main(String[] args) throws IOException {
                final String zipArchivePath = "c:/temp/archive.zip";
                final String tempFilePath = "c:/temp/data.txt";
                final String resultZipFile = "c:/temp/resultingArchive.zip";
                File zipArchive = new File(zipArchivePath);
                FileUtils.touch(zipArchive);
                File tempFile = new File(tempFilePath);
                FileUtils.writeStringToFile(tempFile, "hello world");
                addOrUpdate(zipArchive, tempFile);
                // archive.zip exists and contains a compressed data.txt that can be read using winrar

                // now simulate writing of the zip into an in-memory cache
                String archiveText = FileUtils.readFileToString(zipArchive);
                FileUtils.writeStringToFile(new File(resultZipFile), archiveText);
                // resultingArchive.zip exists and contains a compressed data.txt, but it can not
                // be read using winrar: CRC failed in data.txt. The file is corrupt
            }
        }

    Read the article

  • Why am I getting 404 in Django?

    - by alex
    After installing this Python Django module: http://code.google.com/p/django-compress/wiki/Installation I am getting 404s on my static media files. This Django module is supposed to "compress" the JavaScript/CSS files; that's probably why I'm getting the 404s. The problem is, I don't want this anymore. When I installed it, I ran "python setup.py install". How do I uninstall it? I just want to revert everything back to normal so I don't get any 404 errors.

    Read the article

  • Why is my GZipStream not writeable?

    - by Ozzah
    I have some GZ compressed resources in my program and I need to be able to write them out to temporary files for use. I wrote the following function to write the files out and return true on success or false on failure. In addition, I've put a try/catch in there which shows a MessageBox in the event of an error:

        private static bool extractCompressedResource(byte[] resource, string path)
        {
            try
            {
                using (MemoryStream ms = new MemoryStream(resource))
                {
                    using (FileStream fs = new FileStream(path, FileMode.Create, FileAccess.ReadWrite))
                    {
                        using (GZipStream zs = new GZipStream(fs, CompressionMode.Decompress))
                        {
                            ms.CopyTo(zs); // Throws exception
                            zs.Close();
                            ms.Close();
                        }
                    }
                }
            }
            catch (Exception ex)
            {
                MessageBox.Show(ex.Message); // Stream is not writeable
                return false;
            }
            return true;
        }

    I've put a comment on the line which throws the exception. If I put a breakpoint on that line and take a look inside the GZipStream, then I can see that it's not writeable (which is what's causing the problem). Am I doing something wrong, or is this a limitation of the GZipStream class?
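
    For contrast, here is the read-side pattern sketched in Java (an analogue of the situation above, not the .NET fix itself): the decompressing stream wraps the compressed source and is read from, while the plain output stream is the one written to.

        import java.io.ByteArrayInputStream;
        import java.io.FileOutputStream;
        import java.io.IOException;
        import java.util.zip.GZIPInputStream;

        public class ExtractResource {
            public static boolean extract(byte[] gzippedResource, String path) {
                try {
                    // the decompressor wraps the *source* bytes...
                    GZIPInputStream in =
                            new GZIPInputStream(new ByteArrayInputStream(gzippedResource));
                    FileOutputStream out = new FileOutputStream(path);
                    byte[] buffer = new byte[4096];
                    int n;
                    while ((n = in.read(buffer)) > 0) {
                        out.write(buffer, 0, n);   // ...and plain bytes go to the file
                    }
                    in.close();
                    out.close();
                    return true;
                } catch (IOException e) {
                    return false;
                }
            }
        }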

    Read the article

  • How to decompress/inflate an XML response from ASP

    - by krisg
    Can anyone provide some insight into how I'd go about decompressing an XML response in classic ASP? We've been handed some code and asked to get it working:

        Set oXMLHttp = Server.CreateObject("MSXML2.ServerXMLHTTP")
        URL = HttpServer + re_domain + ".do;jsessionid=" + ue_session + "?" + data
        oXMLHttp.setTimeouts 5000, 60000, 1200000, 1200000
        oXMLHttp.open "GET", URL, false
        oXMLHttp.setRequestHeader "Accept-Encoding", "gzip"
        oXMLHttp.send()
        if oXMLHttp.status = 200 Then
            if oXMLHttp.responseText = "" then
                htmlrequest_get = "Empty Response from Server"
            else
                htmlrequest_get = oXMLHttp.responseText
            end if
        else
        ...

    Apparently, now that the response is compressed using gzip, we have to uncompress the XML response before we can start to work with the data. How should I go about this?

    Read the article

  • java: how to get a string representation of a compressed byte array ?

    - by Guillaume
    I want to put some compressed data into a remote repository. To put data on this repository I can only use a method that takes the name of the resource and its content as a String (like "data.txt" + "hello world"). The repository mocks a filesystem but is not one, so I cannot use File directly. I want to be able to do the following:

    - client sends a file 'data.txt' to the server
    - server compresses 'data.txt' into a compressed file 'data.zip'
    - server sends a string representation of data.zip to the repository
    - repository stores data.zip
    - client downloads data.zip from the repository and is able to open it with its favorite zip tool

    The problem arises at step 3, when I try to get a string representation of my compressed file. Here is a sample class, using the Zip*Stream classes, that emulates the repository and showcases my problem. The created zip file is working, but after its 'serialization' it gets corrupted. (The sample class uses jakarta commons-io.) Many thanks for your help.

        package zip;

        import java.io.File;
        import java.io.FileInputStream;
        import java.io.FileOutputStream;
        import java.io.IOException;
        import java.io.InputStream;
        import java.util.zip.ZipEntry;
        import java.util.zip.ZipInputStream;
        import java.util.zip.ZipOutputStream;

        import org.apache.commons.io.FileUtils;

        /**
         * Date: May 19, 2010 - 6:13:07 PM
         *
         * @author Guillaume AME.
         */
        public class ZipMe {

            public static void addOrUpdate(File zipFile, File... files) throws IOException {
                File tempFile = File.createTempFile(zipFile.getName(), null);
                // delete it, otherwise you cannot rename your existing zip to it
                tempFile.delete();
                boolean renameOk = zipFile.renameTo(tempFile);
                if (!renameOk) {
                    throw new RuntimeException("could not rename the file " + zipFile.getAbsolutePath()
                            + " to " + tempFile.getAbsolutePath());
                }
                byte[] buf = new byte[1024];
                ZipInputStream zin = new ZipInputStream(new FileInputStream(tempFile));
                ZipOutputStream out = new ZipOutputStream(new FileOutputStream(zipFile));
                ZipEntry entry = zin.getNextEntry();
                while (entry != null) {
                    String name = entry.getName();
                    boolean notInFiles = true;
                    for (File f : files) {
                        if (f.getName().equals(name)) {
                            notInFiles = false;
                            break;
                        }
                    }
                    if (notInFiles) {
                        // Add ZIP entry to output stream.
                        out.putNextEntry(new ZipEntry(name));
                        // Transfer bytes from the ZIP file to the output file
                        int len;
                        while ((len = zin.read(buf)) > 0) {
                            out.write(buf, 0, len);
                        }
                    }
                    entry = zin.getNextEntry();
                }
                // Close the streams
                zin.close();
                // Compress the files
                if (files != null) {
                    for (File file : files) {
                        InputStream in = new FileInputStream(file);
                        // Add ZIP entry to output stream.
                        out.putNextEntry(new ZipEntry(file.getName()));
                        // Transfer bytes from the file to the ZIP file
                        int len;
                        while ((len = in.read(buf)) > 0) {
                            out.write(buf, 0, len);
                        }
                        // Complete the entry
                        out.closeEntry();
                        in.close();
                    }
                    // Complete the ZIP file
                }
                tempFile.delete();
                out.close();
            }

            public static void main(String[] args) throws IOException {
                final String zipArchivePath = "c:/temp/archive.zip";
                final String tempFilePath = "c:/temp/data.txt";
                final String resultZipFile = "c:/temp/resultingArchive.zip";
                File zipArchive = new File(zipArchivePath);
                FileUtils.touch(zipArchive);
                File tempFile = new File(tempFilePath);
                FileUtils.writeStringToFile(tempFile, "hello world");
                addOrUpdate(zipArchive, tempFile);
                // archive.zip exists and contains a compressed data.txt that can be read using winrar

                // now simulate writing of the zip into an in-memory cache
                String archiveText = FileUtils.readFileToString(zipArchive);
                FileUtils.writeStringToFile(new File(resultZipFile), archiveText);
                // resultingArchive.zip exists and contains a compressed data.txt, but it can not
                // be read using winrar: CRC failed in data.txt. The file is corrupt
            }
        }
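
    For what it's worth, the usual way around step 3 is to stop treating the zip bytes as text at all: encode them to Base64 before handing them to the string-only API and decode on the way back, so no charset conversion can mangle the binary data. A minimal sketch of that detour (my suggestion, not from the post; it assumes commons-codec is available alongside commons-io):

        import java.io.File;
        import java.io.IOException;
        import org.apache.commons.codec.binary.Base64;
        import org.apache.commons.io.FileUtils;

        public class ZipAsString {
            // before sending to the repository: zip bytes -> pure-ASCII String
            public static String encode(File zipFile) throws IOException {
                byte[] raw = FileUtils.readFileToByteArray(zipFile);
                return new String(Base64.encodeBase64(raw), "US-ASCII");
            }

            // after downloading from the repository: String -> original zip bytes
            public static void decode(String stored, File target) throws IOException {
                FileUtils.writeByteArrayToFile(target, Base64.decodeBase64(stored));
            }
        }

    The corruption in the sample comes from exactly this round trip: readFileToString/writeStringToFile run the zip bytes through a character encoding, which is lossy for arbitrary binary data.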

    Read the article

  • Multi-part gzip file random access (in Java)

    - by toluju
    This may fall in the realm of "not really feasible" or "not really worth the effort", but here goes. I'm trying to randomly access records stored inside a multi-part gzip file. Specifically, the files I'm interested in are compressed Heritrix ARC files. (In case you aren't familiar with multi-part gzip files, the gzip spec allows multiple gzip streams to be concatenated in a single gzip file. They do not share any dictionary information; it is simple binary appending.) I'm thinking it should be possible to do this by seeking to a certain offset within the file, then scanning for the gzip magic header bytes (i.e. 0x1f8b, as per the RFC) and attempting to read the gzip stream from the following bytes. The problem with this approach is that those same bytes can appear inside the actual data as well, so seeking for those bytes can lead to an invalid position to start reading a gzip stream from. Is there a better way to handle random access, given that the record offsets aren't known a priori?
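
    Absent an external index of member offsets, the scan can at least be hardened by validating each candidate: treat a magic-byte hit as a member boundary only if a decompressor actually accepts the bytes that follow. A rough sketch of that idea (my illustration; it shrinks the false-positive problem but cannot eliminate it entirely):

        import java.io.ByteArrayInputStream;
        import java.io.IOException;
        import java.io.RandomAccessFile;
        import java.util.zip.GZIPInputStream;

        public class GzipMemberScanner {
            // scan forward from `from` for a plausible gzip member start; -1 if none found
            public static long findMemberStart(RandomAccessFile file, long from) throws IOException {
                for (long pos = from; pos < file.length() - 1; pos++) {
                    file.seek(pos);
                    if (file.read() == 0x1f && file.read() == 0x8b && looksLikeMember(file, pos)) {
                        return pos;
                    }
                }
                return -1;
            }

            private static boolean looksLikeMember(RandomAccessFile file, long pos) {
                try {
                    file.seek(pos);
                    byte[] probe = new byte[8192];           // sample a window at the candidate
                    int n = file.read(probe);
                    if (n <= 0) return false;
                    GZIPInputStream in =
                            new GZIPInputStream(new ByteArrayInputStream(probe, 0, n));
                    in.read(new byte[1024]);                 // throws if header/stream is bogus
                    return true;
                } catch (IOException e) {
                    return false;                            // payload coincidence, keep scanning
                }
            }
        }

    For Heritrix ARC files specifically, each record's header line carries its length, so after locating one true boundary you can often hop from record to record instead of scanning.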

    Read the article

  • Efficiently storing a list of prime numbers

    - by eSKay
    This article says:

        Every prime number can be expressed as 30k±1, 30k±7, 30k±11, or 30k±13 for some k. That means we can use eight bits per thirty numbers to store all the primes; a million primes can be compressed to 33,334 bytes.

    "That means we can use eight bits per thirty numbers to store all the primes" - this "eight bits per thirty numbers" would be for k, correct? But each k value will not necessarily take up just one bit. Shouldn't it be eight k values instead? "A million primes can be compressed to 33,334 bytes" - I am not sure how this is true. We need to indicate two things:

    - the VALUE of k (which can be arbitrarily large)
    - the STATE, one of the eight states (-13, -11, -7, -1, 1, 7, 11, 13)

    I am not following how 33,334 bytes was arrived at, but I can say one thing: as the prime numbers become larger and larger in value, we will need more space to store the value of k. How, then, can we fix it at 33,334 bytes?
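
    One reading of the quoted scheme (my interpretation, to make the arithmetic concrete): k is never stored at all. Byte i of a bitmap stands for the block of numbers 30i .. 30i+29, and each of its eight bits flags whether one of the eight "possibly prime" residues in that block is actually prime, so the byte's position encodes k implicitly. Storing primality for every number below 1,000,000 then takes ceil(1,000,000/30) = 33,334 bytes, which suggests "a million primes" is loose wording for "the primes below a million". A sketch:

        public class Wheel30Bitmap {
            // the eight residues mod 30 that a prime > 5 can have (30k±1, ±7, ±11, ±13)
            static final int[] RESIDUES = {1, 7, 11, 13, 17, 19, 23, 29};

            // isPrime comes from any ordinary sieve; 2, 3 and 5 need separate handling
            static byte[] buildBitmap(int limit, boolean[] isPrime) {
                byte[] bits = new byte[(limit + 29) / 30];     // one byte per 30 numbers
                for (int block = 0; block < bits.length; block++) {
                    for (int b = 0; b < 8; b++) {
                        int n = 30 * block + RESIDUES[b];
                        if (n < limit && isPrime[n]) {
                            bits[block] |= (byte) (1 << b);    // bit b <=> residue RESIDUES[b]
                        }
                    }
                }
                return bits;   // limit = 1,000,000 gives 33,334 bytes
            }
        }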

    Read the article

  • Compressing xls content with apache deflate module

    - by Clinton Bosch
    I am trying to compress an Excel spreadsheet being sent from my application, using the Apache deflate module. I have added the following line to my sites-enabled file:

        AddOutputFilterByType DEFLATE text/html text/plain text/xml text/css text/javascript application/excel

    But it seems to make the response data bigger??? Using Firebug, without the module, I downloaded the XLS spreadsheet from the application and it downloaded 100 KB of data; the file size once on the filesystem was also 100 KB, as expected. Once I enabled the deflate module as described above and repeated the process, the amount of data downloaded was 295 KB?? But the file was still only 100 KB once saved on the filesystem. As an experiment, I manually gzipped the saved XLS file and it compressed to 20 KB. What am I doing wrong here?

    Using deflate (Firebug output):

        200 OK  xxxxxxx.co.za  293 KB  4.43s
        Response Headers:
        Date: Tue, 03 Nov 2009 13:01:43 GMT
        Server: Apache/2.2.4 (Ubuntu) mod_jk/1.2.23 PHP/5.2.3-1ubuntu6.4 mod_ssl/2.2.4 OpenSSL/0.9.8e
        Content-Disposition: attachment; filename="Employee List.xls"
        Vary: Accept-Encoding
        Content-Encoding: gzip
        Content-Type: application/excel

    Without deflate (Firebug output):

        200 OK  xxxxxxxx.co.za  100 KB  3.46s
        Response Headers:
        Date: Tue, 03 Nov 2009 13:06:00 GMT
        Server: Apache/2.2.4 (Ubuntu) mod_jk/1.2.23 PHP/5.2.3-1ubuntu6.4 mod_ssl/2.2.4 OpenSSL/0.9.8e
        Content-Disposition: attachment; filename="Employee List.xls"
        Content-Length: 102912
        Content-Type: application/excel

    Read the article

  • How do I compute the approximate entropy of a bit string?

    - by dreeves
    Is there a standard way to do this? Googling -- "approximate entropy" bits -- uncovers multiple academic papers, but I'd like to just find a chunk of pseudocode defining the approximate entropy for a given bit string of arbitrary length. (In case this is easier said than done and it depends on the application: my application involves 16,320 bits of encrypted data (ciphertext). But it's encrypted as a puzzle and not meant to be impossible to crack. I thought I'd first check the entropy, but couldn't easily find a good definition of it. So it seemed like a question that ought to be on StackOverflow! Ideas for where to begin with deciphering 16k random-seeming bits are also welcome...) See also this related question: http://stackoverflow.com/questions/510412/what-is-the-computer-science-definition-of-entropy
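
    In the absence of a canonical definition, one common pragmatic estimate is block-frequency Shannon entropy: split the string into fixed-size blocks, estimate each block value's probability by its observed frequency, and compute the entropy of that empirical distribution. A minimal sketch (my suggestion, not "the" approximate entropy from the papers):

        import java.util.HashMap;
        import java.util.Map;

        public class BitEntropy {
            // entropy in bits per block; compare against blockSize, the maximum possible
            public static double entropyPerBlock(String bits, int blockSize) {
                Map<String, Integer> freq = new HashMap<>();
                int blocks = 0;
                for (int i = 0; i + blockSize <= bits.length(); i += blockSize) {
                    freq.merge(bits.substring(i, i + blockSize), 1, Integer::sum);
                    blocks++;
                }
                if (blocks == 0) return 0.0;
                double h = 0.0;
                for (int count : freq.values()) {
                    double p = (double) count / blocks;
                    h -= p * (Math.log(p) / Math.log(2));   // Shannon: -sum p log2 p
                }
                return h;
            }

            public static void main(String[] args) {
                // perfectly periodic: every 2-bit block is "01", so the estimate is 0.0
                System.out.println(entropyPerBlock("0101010101010101", 2));
            }
        }

    Truly random bits approach blockSize bits per block; heavy structure pulls the estimate toward zero. Running this over the 16,320 ciphertext bits at a few block sizes gives a quick feel for how random they look.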

    Read the article

  • Compressibility Example

    - by user285726
    From my algorithms textbook:

        The annual county horse race is bringing in three thoroughbreds who have never competed against one another. Excited, you study their past 200 races and summarize these as probability distributions over four outcomes: first ("first place"), second, third, and other.

        Outcome    Aurora    Whirlwind    Phantasm
        first      0.15      0.30         0.20
        second     0.10      0.05         0.30
        third      0.70      0.25         0.30
        other      0.05      0.40         0.20

        Which horse is the most predictable? One quantitative approach to this question is to look at compressibility. Write down the history of each horse as a string of 200 values (first, second, third, other). The total number of bits needed to encode these track-record strings can then be computed using Huffman's algorithm. This works out to 290 bits for Aurora, 380 for Whirlwind, and 420 for Phantasm (check it!). Aurora has the shortest encoding and is therefore in a strong sense the most predictable.

    How did they get 420 for Phantasm? I keep getting 400 bits, like so: combine first and other into a node of weight 0.4, combine second and third into a node of weight 0.6, and you end up with 2 bits encoding each outcome, i.e. 200 × 2 = 400 bits. Is there something I've misunderstood about the Huffman encoding algorithm? The textbook is available here: http://www.cs.berkeley.edu/~vazirani/algorithms.html (page 156).
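
    A quick sketch to check the arithmetic (my code, not from the textbook), using the identity that the Huffman cost equals the sum of the merged weights created while building the tree. It reproduces 290 for Aurora and 380 for Whirlwind, but yields 400 rather than 420 for Phantasm, matching the hand computation above:

        import java.util.PriorityQueue;

        public class HuffmanCheck {
            // expected total bits to encode `symbols` outcomes drawn from `probs`
            static double totalBits(double[] probs, int symbols) {
                PriorityQueue<Double> pq = new PriorityQueue<>();
                for (double p : probs) pq.add(p);
                double cost = 0.0;
                while (pq.size() > 1) {
                    double merged = pq.poll() + pq.poll();  // merge the two lightest subtrees
                    cost += merged;                         // total equals sum(p_i * depth_i)
                    pq.add(merged);
                }
                return symbols * cost;
            }

            public static void main(String[] args) {
                System.out.println(totalBits(new double[]{0.15, 0.10, 0.70, 0.05}, 200)); // 290
                System.out.println(totalBits(new double[]{0.30, 0.05, 0.25, 0.40}, 200)); // 380
                System.out.println(totalBits(new double[]{0.20, 0.30, 0.30, 0.20}, 200)); // 400
            }
        }

    (Floating-point sums may print with tiny rounding noise.)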

    Read the article
