Search Results

Search found 139 results on 6 pages for 'decompress'.

Page 2/6 | < Previous Page | 1 2 3 4 5 6  | Next Page >

  • Zlib compression in boost::iostreams not compatible with zlib.NET

    - by Johan
    Hello, I want to send compressed data from my C# application to a C++ application in zlib format. In C++, I use the zlib_compressor/zlib_decompressor available in boost::iostreams. In C#, I am currently using the ZOutputStream available in the zlib.NET library. First of all, when I compress the same data using both libraries, the results look different:

        boost::iostreams::zlib_compressor:
        FF 13 49 48 00 00 01 00 01 00 00 00 63 61 60 60 F8 00 C4 C1 25 45 99 79 E9 23 87 04 00

        zlib.NET (zlib.ZOutputStream):
        FF 13 49 48 00 00 01 00 01 00 00 00 78 9C 63 61 60 60 F8 00 C4 C1 25 45 99 79 E9 23 87 04 00 4F 31 63 8D

    (Note the 78 9C pattern that is present in the zlib.NET output but not in the boost output.) Furthermore, when I decompress data in boost that I compressed with zlib.NET, I am not able to read from the stream, suggesting something is wrong. It does work when I decompress data compressed in boost. Does anybody know what is going wrong? Thank you, Johan
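
    For context, 78 9C is the standard two-byte zlib header (CMF/FLG), and a zlib stream also ends with a four-byte Adler-32 checksum; raw deflate data has neither. A mismatch like the one above usually means one side is emitting raw deflate while the other emits a zlib-wrapped stream, so matching the header settings on both sides (boost::iostreams exposes this via the noheader field of zlib_params) is the first thing to check. Below is a minimal C# sketch of the framing itself, assuming the input really is a zlib stream; ZlibToRawDeflate and InflateRaw are hypothetical helper names, and .NET's DeflateStream only understands the raw deflate portion:

        using System;
        using System.IO;
        using System.IO.Compression;

        static class ZlibInterop
        {
            // A zlib stream is: 2-byte header (e.g. 0x78 0x9C) + raw deflate + 4-byte Adler-32.
            // Strip the framing so DeflateStream can inflate the payload.
            public static byte[] ZlibToRawDeflate(byte[] zlibData)
            {
                if (zlibData.Length < 6 || (zlibData[0] & 0x0F) != 8)
                    throw new ArgumentException("Not a zlib stream");
                byte[] raw = new byte[zlibData.Length - 6];
                Buffer.BlockCopy(zlibData, 2, raw, 0, raw.Length);
                return raw;   // note: the trailing Adler-32 is discarded, not verified
            }

            public static byte[] InflateRaw(byte[] rawDeflate)
            {
                using (var input = new MemoryStream(rawDeflate))
                using (var inflater = new DeflateStream(input, CompressionMode.Decompress))
                using (var output = new MemoryStream())
                {
                    inflater.CopyTo(output);   // Stream.CopyTo requires .NET 4
                    return output.ToArray();
                }
            }
        }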

    Read the article

  • MultiWidget in MultiWidget: how to compress the first one?

    - by sacabuche
    I have two MultiWidgets, one inside the other, but the problem is that the contained MultiWidget's value never comes back through compress. How do I get the right value from the first widget, in this case from SplitTimeWidget?

        class SplitTimeWidget(forms.MultiWidget):
            """
            Widget written to split input into hours and minutes.
            """
            def __init__(self, attrs=None):
                widgets = (
                    forms.Select(attrs=attrs, choices=[(hour, hour) for hour in range(0, 24)]),
                    forms.Select(attrs=attrs, choices=[(minute, str(minute).zfill(2)) for minute in range(0, 60)]),
                )
                super(SplitTimeWidget, self).__init__(widgets, attrs)

            def decompress(self, value):
                if value:
                    return [value.hour, value.minute]
                return [None, None]

        class DateTimeSelectWidget(forms.MultiWidget):
            """
            A widget that splits a datetime into a date and hour/minute/second selects.
            """
            date_format = DateInput.format

            def __init__(self, attrs=None, date_format=None):
                if date_format:
                    self.date_format = date_format
                #if time_format:
                #    self.time_format = time_format
                hours = [(hour, str(hour) + ' h') for hour in range(0, 24)]
                minutes = [(minute, minute) for minute in range(0, 60)]
                seconds = minutes  # not used, always 0 s
                widgets = (
                    DateInput(attrs=attrs, format=self.date_format),
                    SplitTimeWidget(attrs=attrs),
                )
                super(DateTimeSelectWidget, self).__init__(widgets, attrs)

            def decompress(self, value):
                if value:
                    return [value.date(), value.time()]
                # was "[None, None, None]" with no return: there are only two
                # sub-widgets, and the value must actually be returned
                return [None, None]

    Read the article

  • Decompressing a very large serialized object and managing memory

    - by Mike_G
    I have an object that contains tons of data used for reports. To get this object from the server to the client, I first serialize it into a memory stream, then compress it using .NET's GZip stream. I then send the compressed object as a byte[] to the client. The problem is that on some clients, when they get the byte[] and try to decompress and deserialize the object, a System.OutOfMemoryException is thrown. I've read that this exception can be caused by new()-ing a bunch of objects, or by holding on to a bunch of strings. Both of these happen during the deserialization process. So my question is: how do I prevent the exception (any good strategies)? The client needs all of the data, and I've trimmed down the number of strings as much as I can. Edit: here is the code I am using to serialize/compress (implemented as extension methods):

        public static byte[] SerializeObject<T>(this object obj, T serializer) where T : XmlObjectSerializer
        {
            Type t = obj.GetType();
            if (!Attribute.IsDefined(t, typeof(DataContractAttribute)))
                return null;
            byte[] initialBytes;
            using (MemoryStream stream = new MemoryStream())
            {
                serializer.WriteObject(stream, obj);
                initialBytes = stream.ToArray();
            }
            return initialBytes;
        }

        public static byte[] CompressObject<T>(this object obj, T serializer) where T : XmlObjectSerializer
        {
            Type t = obj.GetType();
            if (!Attribute.IsDefined(t, typeof(DataContractAttribute)))
                return null;
            byte[] initialBytes = obj.SerializeObject(serializer);
            byte[] compressedBytes;
            using (MemoryStream stream = new MemoryStream(initialBytes))
            {
                using (MemoryStream output = new MemoryStream())
                {
                    using (GZipStream zipper = new GZipStream(output, CompressionMode.Compress))
                    {
                        Pump(stream, zipper);
                    }
                    compressedBytes = output.ToArray();
                }
            }
            return compressedBytes;
        }

        internal static void Pump(Stream input, Stream output)
        {
            byte[] bytes = new byte[4096];
            int n;
            while ((n = input.Read(bytes, 0, bytes.Length)) != 0)
            {
                output.Write(bytes, 0, n);
            }
        }

    And here is my code for decompress/deserialize:

        public static T DeSerializeObject<T, TU>(this byte[] serializedObject, TU deserializer) where TU : XmlObjectSerializer
        {
            using (MemoryStream stream = new MemoryStream(serializedObject))
            {
                return (T)deserializer.ReadObject(stream);
            }
        }

        public static T DecompressObject<T, TU>(this byte[] compressedBytes, TU deserializer) where TU : XmlObjectSerializer
        {
            byte[] decompressedBytes;
            using (MemoryStream stream = new MemoryStream(compressedBytes))
            {
                using (MemoryStream output = new MemoryStream())
                {
                    using (GZipStream zipper = new GZipStream(stream, CompressionMode.Decompress))
                    {
                        ObjectExtensions.Pump(zipper, output);
                    }
                    decompressedBytes = output.ToArray();
                }
            }
            return decompressedBytes.DeSerializeObject<T, TU>(deserializer);
        }

    The object that I am passing is a wrapper object; it just contains all the relevant objects that hold the data. The number of objects can be large (depending on the report's date range), and I've seen as many as 25k strings. One thing I forgot to mention is that I am using WCF, and since the inner objects are passed individually through other WCF calls, I am using the DataContract serializer and all my objects are marked with the DataContract attribute.
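
    One strategy that may help (a sketch, not the poster's code): deserialize directly from the GZipStream instead of materializing decompressedBytes first. XmlObjectSerializer.ReadObject pulls from the stream incrementally, so the fully decompressed XML never has to exist as one large byte[] on the client; only the compressed input, a small internal buffer, and the final object graph are in memory at once. DecompressObjectStreaming is a hypothetical name for this variant:

        using System.IO;
        using System.IO.Compression;
        using System.Runtime.Serialization;

        public static class StreamingExtensions
        {
            // Variant of DecompressObject<T, TU> above with no intermediate
            // decompressed buffer: compressed bytes -> gzip -> serializer.
            public static T DecompressObjectStreaming<T, TU>(this byte[] compressedBytes, TU deserializer)
                where TU : XmlObjectSerializer
            {
                using (MemoryStream stream = new MemoryStream(compressedBytes))
                using (GZipStream zipper = new GZipStream(stream, CompressionMode.Decompress))
                {
                    return (T)deserializer.ReadObject(zipper);
                }
            }
        }

    This does not shrink the final object graph itself, so if the graph alone exceeds available memory, the remaining options are paging the report data across several calls or deduplicating the repeated strings.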

    Read the article

  • Using the WordPress HTTP_API (wp_remote_get) with gzipped data.

    - by Volmar
    Hi, I'm working on a WordPress plugin where I'm getting data from a remote API. At first I used cURL, but after reading this blog post I started using the WordPress HTTP_API instead. But I've got one problem: the API answers are gzipped, and I haven't figured out how to decompress them. The Codex page mentions an argument called decompress, but I've tried it in a lot of ways and I can't get it right. I used this code with cURL:

        $curl = curl_init();
        curl_setopt($curl, CURLOPT_URL, $url);
        curl_setopt($curl, CURLOPT_RETURNTRANSFER, 1);
        curl_setopt($curl, CURLOPT_ENCODING, "gzip");
        $result = curl_exec($curl);
        curl_close($curl);

    Does anyone know a way to do the same thing with the HTTP_API?

    Read the article

  • Best method to compress a JSON string in terms of performance and compression ratio

    - by Eric Yin
    For a JSON string that contains all kinds of settings, numbers, strings, etc., the total size fairly consistently falls into the 10 KB to 50 KB range. I want to compress it before saving it to the database, so I wonder which compression method I should choose. I am using C# 4; I know I can choose gzip and deflate, but their compression ratio is not great (although the speed is good). To be more specific: compression can be a little slow (since it happens only once), but the output should be small. Decompression should be lightning fast, since it happens a lot. Please give some advice.
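
    For reference, a minimal sketch of the built-in option, using deflate rather than gzip: gzip is the same algorithm plus roughly 18 bytes of header and trailer, which matters slightly at these sizes. .NET 4's DeflateStream offers no compression-level knob; for a noticeably better ratio at the cost of speed, third-party libraries such as SharpZipLib or DotNetZip expose zlib level 9, and LZMA (the 7-Zip SDK) typically compresses JSON-like text tighter still:

        using System.IO;
        using System.IO.Compression;
        using System.Text;

        static class JsonCompression
        {
            public static byte[] Compress(string json)
            {
                byte[] raw = Encoding.UTF8.GetBytes(json);
                using (var output = new MemoryStream())
                {
                    using (var deflate = new DeflateStream(output, CompressionMode.Compress))
                        deflate.Write(raw, 0, raw.Length);
                    // Disposing the DeflateStream first guarantees the data is fully
                    // flushed; MemoryStream.ToArray still works on a closed stream.
                    return output.ToArray();
                }
            }

            public static string Decompress(byte[] compressed)
            {
                using (var input = new MemoryStream(compressed))
                using (var deflate = new DeflateStream(input, CompressionMode.Decompress))
                using (var reader = new StreamReader(deflate, Encoding.UTF8))
                    return reader.ReadToEnd();
            }
        }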

    Read the article

  • How to recover dpkg from corrupted downloads?

    - by rocker9455
    I just upgraded to the 10.10 RC earlier and had a few problems with graphics drivers (X didn't start), but I have remedied that now. When I run 'sudo apt-get install -f' I get this:

        will@UbuntuBox:/mnt/slax$ sudo apt-get install -f
        [sudo] password for will:
        Reading package lists... Done
        Building dependency tree
        Reading state information... Done
        Correcting dependencies... Done
        The following extra packages will be installed:
          libmono-wcf3.0-cil openoffice.org-calc openoffice.org-core
        The following NEW packages will be installed:
          libmono-wcf3.0-cil
        The following packages will be upgraded:
          openoffice.org-calc openoffice.org-core
        2 upgraded, 1 newly installed, 0 to remove and 0 not upgraded.
        16 not fully installed or removed.
        Need to get 0B/32.5MB of archives.
        After this operation, 1,929kB disk space will be freed.
        Do you want to continue [Y/n]? Y
        (Reading database ... 201565 files and directories currently installed.)
        Preparing to replace openoffice.org-calc 1:3.2.1-6ubuntu2~10.04.1 (using .../openoffice.org-calc_1%3a3.2.1-7ubuntu1_i386.deb) ...
        Unpacking replacement openoffice.org-calc ...
        xz: (stdin): Compressed data is corrupt
        dpkg-deb: subprocess <decompress> returned error exit status 1
        dpkg: error processing /var/cache/apt/archives/openoffice.org-calc_1%3a3.2.1-7ubuntu1_i386.deb (--unpack):
         short read on buffer copy for backend dpkg-deb during `./usr/lib/openoffice/basis3.2/program/libscfiltli.so'
        dpkg: regarding .../openoffice.org-core_1%3a3.2.1-7ubuntu1_i386.deb containing openoffice.org-core:
         openoffice.org-core conflicts with openoffice.org-calc (<< 1:3.2.1-7ubuntu1)
          openoffice.org-calc (version 1:3.2.1-6ubuntu2~10.04.1) is present and installed.
        dpkg: error processing /var/cache/apt/archives/openoffice.org-core_1%3a3.2.1-7ubuntu1_i386.deb (--unpack):
         conflicting packages - not installing openoffice.org-core
        Unpacking libmono-wcf3.0-cil (from .../libmono-wcf3.0-cil_2.6.7-3ubuntu1_all.deb) ...
        dpkg-deb (subprocess): data: internal gzip read error: '<fd:0>: data error'
        dpkg-deb: subprocess <decompress> returned error exit status 2
        dpkg: error processing /var/cache/apt/archives/libmono-wcf3.0-cil_2.6.7-3ubuntu1_all.deb (--unpack):
         subprocess dpkg-deb --fsys-tarfile returned error exit status 2
        Errors were encountered while processing:
         /var/cache/apt/archives/openoffice.org-calc_1%3a3.2.1-7ubuntu1_i386.deb
         /var/cache/apt/archives/openoffice.org-core_1%3a3.2.1-7ubuntu1_i386.deb
         /var/cache/apt/archives/libmono-wcf3.0-cil_2.6.7-3ubuntu1_all.deb
        E: Sub-process /usr/bin/dpkg returned an error code (1)

    Any idea how I can get the broken packages fixed? Cheers, Will

    Read the article

  • How to set up Hive on a single node?

    - by Harman
    I successfully set up Hadoop on Ubuntu 10.04 on a single node by following the steps in Michael Noll's tutorial (Running Hadoop On Ubuntu Linux (Single-Node Cluster)). Now I'm trying to set up Hive on the same machine, but I'm stuck on what to do after I decompress hive-0.8.1-bin.tar.gz and move it to /usr/local/hive. Any help would be appreciated, but as I'm new to Linux, it would be very helpful if someone could walk me through it step by step.

    Read the article

  • C# code to GZip and upload a string to Amazon S3

    - by BigJoe714
    Hello. I currently use the following code to retrieve and decompress string data from Amazon S3 in C#:

        GetObjectRequest getObjectRequest = new GetObjectRequest().WithBucketName(bucketName).WithKey(key);
        using (S3Response getObjectResponse = client.GetObject(getObjectRequest))
        {
            using (Stream s = getObjectResponse.ResponseStream)
            {
                using (GZipStream gzipStream = new GZipStream(s, CompressionMode.Decompress))
                {
                    StreamReader Reader = new StreamReader(gzipStream, Encoding.Default);
                    string Html = Reader.ReadToEnd();
                    parseFile(Html);
                }
            }
        }

    I want to reverse this code so that I can compress and upload string data to S3 without it being written to disk. I tried the following, but I am getting an exception:

        using (AmazonS3 client = Amazon.AWSClientFactory.CreateAmazonS3Client(AWSAccessKeyID, AWSSecretAccessKeyID))
        {
            string awsPath = AWSS3PrefixPath + "/" + keyName + ".htm.gz";
            byte[] buffer = Encoding.UTF8.GetBytes(content);
            using (MemoryStream ms = new MemoryStream())
            {
                using (GZipStream zip = new GZipStream(ms, CompressionMode.Compress))
                {
                    zip.Write(buffer, 0, buffer.Length);
                    PutObjectRequest request = new PutObjectRequest();
                    request.InputStream = ms;
                    request.Key = awsPath;
                    request.BucketName = AWSS3BuckenName;
                    using (S3Response putResponse = client.PutObject(request))
                    {
                        //process response
                    }
                }
            }
        }

    The exception I am getting is: "Cannot access a closed Stream." What am I doing wrong?
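
    One likely explanation (a sketch, not a verified fix): PutObject reads the MemoryStream while the GZipStream is still open, so the gzip trailer has not been flushed yet, and disposing the GZipStream then closes the MemoryStream underneath the upload. Constructing the GZipStream with leaveOpen set to true, disposing it, and rewinding the stream before the upload avoids both problems. A drop-in replacement for the inner block of the snippet above, reusing its variables:

        byte[] buffer = Encoding.UTF8.GetBytes(content);
        using (MemoryStream ms = new MemoryStream())
        {
            // leaveOpen: true -> disposing the GZipStream flushes the gzip
            // trailer without closing the underlying MemoryStream.
            using (GZipStream zip = new GZipStream(ms, CompressionMode.Compress, true))
            {
                zip.Write(buffer, 0, buffer.Length);
            }
            ms.Position = 0;   // rewind so PutObject reads from the start

            PutObjectRequest request = new PutObjectRequest();
            request.InputStream = ms;
            request.Key = awsPath;
            request.BucketName = AWSS3BuckenName;   // identifier as in the original snippet
            using (S3Response putResponse = client.PutObject(request))
            {
                // process response
            }
        }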

    Read the article

  • HttpWebRequest has an empty response when requesting a search from Bing

    - by Jarrod Maxwell
    I have the following code that sends an HttpWebRequest to Bing. When I request the URL below, though, it returns what appears to be an empty response when it should be returning a list of results.

        var response = string.Empty;
        var httpWebRequest = WebRequest.Create("http://www.bing.com/search?q=stackoverflow&count=100") as HttpWebRequest;
        httpWebRequest.Method = WebRequestMethods.Http.Get;
        httpWebRequest.Headers.Add("Accept-Language", "en-US");
        httpWebRequest.UserAgent = "Mozilla/4.0 (compatible; MSIE 6.0; Win32)";
        httpWebRequest.Headers.Add(HttpRequestHeader.AcceptEncoding, "gzip,deflate");

        using (var httpWebResponse = httpWebRequest.GetResponse() as HttpWebResponse)
        {
            Stream stream = null;
            using (stream = httpWebResponse.GetResponseStream())
            {
                if (httpWebResponse.ContentEncoding.ToLower().Contains("gzip"))
                    stream = new GZipStream(stream, CompressionMode.Decompress);
                else if (httpWebResponse.ContentEncoding.ToLower().Contains("deflate"))
                    stream = new DeflateStream(stream, CompressionMode.Decompress);
                var streamReader = new StreamReader(stream, Encoding.UTF8);
                response = streamReader.ReadToEnd();
            }
        }

    It's pretty standard code for requesting and receiving a web page. Any ideas why the response is empty? Thanks in advance. EDIT: I left off a query string parameter in the URL; I also had &count=100, which I have now corrected. It seems to work for values of 50 and below but returns nothing when larger. This works fine in the browser, but not for this web request. It makes me think the issue is that the response is large and HttpWebResponse is not handling that for me the way I have it set up. Just a guess though.
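
    A simpler variant worth trying (a sketch, not a confirmed fix for the count=100 case): let the framework negotiate and decode the compression via HttpWebRequest.AutomaticDecompression, which sets the Accept-Encoding header for you and removes the manual GZip/Deflate branching:

        using System;
        using System.IO;
        using System.Net;

        class BingFetch
        {
            static void Main()
            {
                var request = (HttpWebRequest)WebRequest.Create(
                    "http://www.bing.com/search?q=stackoverflow&count=100");
                request.Method = WebRequestMethods.Http.Get;
                request.UserAgent = "Mozilla/4.0 (compatible; MSIE 6.0; Win32)";
                request.Headers.Add("Accept-Language", "en-US");
                // The framework adds Accept-Encoding and decompresses transparently.
                request.AutomaticDecompression = DecompressionMethods.GZip | DecompressionMethods.Deflate;

                using (var response = (HttpWebResponse)request.GetResponse())
                using (var reader = new StreamReader(response.GetResponseStream()))
                {
                    Console.WriteLine(reader.ReadToEnd());
                }
            }
        }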

    Read the article

  • GZipStream not reading the whole file

    - by Ed
    I have some code that downloads gzipped files and decompresses them. The problem is, I can't get it to decompress the whole file; it only reads the first 4096 bytes and then about 500 more.

        Byte[] buffer = new Byte[4096];
        int count = 0;
        FileStream fileInput = new FileStream("input.gzip", FileMode.Open, FileAccess.Read, FileShare.Read);
        FileStream fileOutput = new FileStream("output.dat", FileMode.Create, FileAccess.Write, FileShare.None);
        GZipStream gzipStream = new GZipStream(fileInput, CompressionMode.Decompress, true);

        // Read from gzip stream
        while ((count = gzipStream.Read(buffer, 0, buffer.Length)) > 0)
        {
            // Write to output file
            fileOutput.Write(buffer, 0, count);
        }

        // Close the streams
        ...

    I've checked the downloaded file; it's 13MB when compressed, and contains one XML file. I've manually decompressed the XML file, and the content is all there. But when I do it with this code, it only outputs the very beginning of the XML file. Anyone have any ideas why this might be happening?
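
    The read loop itself looks correct, so a rewrite alone won't change the result, but for reference here is the same logic as a minimal sketch with using-blocks and Stream.CopyTo (.NET 4). If output.dat is still truncated, the usual suspects are a download that was itself cut short, or a file made of several concatenated gzip members, which the .NET Framework's GZipStream stops reading after the first of:

        using System.IO;
        using System.IO.Compression;

        class DecompressFile
        {
            static void Main()
            {
                // using-blocks guarantee the streams are flushed and closed
                // even on exceptions; CopyTo replaces the manual pump loop.
                using (var fileInput = new FileStream("input.gzip", FileMode.Open, FileAccess.Read, FileShare.Read))
                using (var gzipStream = new GZipStream(fileInput, CompressionMode.Decompress))
                using (var fileOutput = new FileStream("output.dat", FileMode.Create, FileAccess.Write, FileShare.None))
                {
                    gzipStream.CopyTo(fileOutput);
                }
            }
        }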

    Read the article

  • C# Compress Triple Byte Array

    - by Mark
    Hi. I currently have this code, which compresses byte arrays, but I need it rewritten so it can compress three-dimensional byte arrays ([,,]). Thanks!

        public static byte[] Compress(byte[] buffer)
        {
            MemoryStream ms = new MemoryStream();
            GZipStream zip = new GZipStream(ms, CompressionMode.Compress, true);
            zip.Write(buffer, 0, buffer.Length);
            zip.Close();
            ms.Position = 0;

            MemoryStream outStream = new MemoryStream();
            byte[] compressed = new byte[ms.Length];
            ms.Read(compressed, 0, compressed.Length);

            byte[] gzBuffer = new byte[compressed.Length + 4];
            Buffer.BlockCopy(compressed, 0, gzBuffer, 4, compressed.Length);
            Buffer.BlockCopy(BitConverter.GetBytes(buffer.Length), 0, gzBuffer, 0, 4);
            return gzBuffer;
        }

        public static byte[] Decompress(byte[] gzBuffer)
        {
            MemoryStream ms = new MemoryStream();
            int msgLength = BitConverter.ToInt32(gzBuffer, 0);
            ms.Write(gzBuffer, 4, gzBuffer.Length - 4);

            byte[] buffer = new byte[msgLength];
            ms.Position = 0;
            GZipStream zip = new GZipStream(ms, CompressionMode.Decompress);
            zip.Read(buffer, 0, buffer.Length);
            return buffer;
        }
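
    Multidimensional byte arrays are stored contiguously, so one approach (a sketch under that assumption; Compress3D/Decompress3D and the 12-byte dimension header are hypothetical, and the byte[] Compress/Decompress above are reused as-is) is to flatten the cube with Buffer.BlockCopy, record the three dimensions, and rebuild the shape on the way back:

        public static byte[] Compress3D(byte[,,] cube)
        {
            // Flatten: Buffer.BlockCopy works on multidimensional primitive arrays.
            byte[] flat = new byte[cube.Length];
            Buffer.BlockCopy(cube, 0, flat, 0, cube.Length);

            byte[] compressed = Compress(flat);

            // Prepend the three dimensions (3 x 4 bytes) so the shape can be rebuilt.
            byte[] result = new byte[compressed.Length + 12];
            Buffer.BlockCopy(BitConverter.GetBytes(cube.GetLength(0)), 0, result, 0, 4);
            Buffer.BlockCopy(BitConverter.GetBytes(cube.GetLength(1)), 0, result, 4, 4);
            Buffer.BlockCopy(BitConverter.GetBytes(cube.GetLength(2)), 0, result, 8, 4);
            Buffer.BlockCopy(compressed, 0, result, 12, compressed.Length);
            return result;
        }

        public static byte[,,] Decompress3D(byte[] data)
        {
            int d0 = BitConverter.ToInt32(data, 0);
            int d1 = BitConverter.ToInt32(data, 4);
            int d2 = BitConverter.ToInt32(data, 8);

            byte[] compressed = new byte[data.Length - 12];
            Buffer.BlockCopy(data, 12, compressed, 0, compressed.Length);

            byte[] flat = Decompress(compressed);
            byte[,,] cube = new byte[d0, d1, d2];
            Buffer.BlockCopy(flat, 0, cube, 0, flat.Length);
            return cube;
        }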

    Read the article

  • When is /etc/modules used?

    - by Dyno Fu
    "# /etc/modules: kernel modules to load at boot time." my question is when and where the module loading job done? my first guess is some init scripts in /etc/init.d/ but grep got none. then i think it might be the init ramdisk, but after decompress it, i found conf/modules which is different with /etc/modules. any idea? thanx.

    Read the article

  • How to make tar not save the directory hierarchy

    - by Nerian
    system("tar -czf #{RAILS_ROOT}/tmp/export-result #{RAILS_ROOT}/tmp/export") When I decompress the resulting file I get app/c3ec2057-7d3a-40d9-9a9d-d5c3fe3ffd6f/home/tmp/export/and_the_files I would like to just get: export_result/and_the_files How do I change my TAR call to accomplish this? solution: system("tar -czf #{RAILS_ROOT}/tmp/export.tgz --directory=#{RAILS_ROOT}/tmp export/")

    Read the article

  • How to flip a BC6/BC7 texture?

    - by postgoodism
    I have some code to load DDS image files into OpenGL textures, and I'd like to extend it to support the BC6 and BC7 compressed formats introduced in D3D11. Since DirectX and OpenGL disagree about whether a texture's origin is in the upper-left or lower-left corner, my DDS loader flips each image's pixels along the Y axis before passing the pixels to OpenGL. Flipping compressed textures presents an additional wrinkle: in addition to flipping each row of 4x4-pixel blocks, you also need to flip the pixels within each block. I found code here to flip BC1/BC2/BC3 blocks, and from the block diagrams on MSDN it was easy to adapt the BC3-flipping code to handle BC4 and BC5. The BC6 and BC7 formats look significantly more intimidating, though. Is there a similar bit-twiddling trick to flip these formats, or would I have to fully decompress and recompress each block?

    Read the article

  • Good practices when writing a parser for a standard file format (such as ePub)

    - by J-F L-R
    I am considering writing Android reader software that can read ePubs and display them. I checked the ePub standard documents; however, they contain a lot of information, so I am wondering what the process of implementing a standard for a file format looks like. What are the steps to get a working implementation without overlooking parts of the standard? Are there any best practices? Also, is it even possible to program this alone in a reasonable time? From what I have already found out, ePub is basically a zip archive. That means I could probably use zlib to decompress it. The content is in XHTML and CSS, so I believe it should be possible to display it in a WebView. The parts that are missing are the code that reads the metadata and manages the non-standard XHTML extensions.

    Read the article

  • Read an object from compressed file generated from ActionScript 3

    - by Last Chance
    I have made a simple game map editor and I want to save an array that contains the map tile info to a file, as below:

        var arr:Array = [.....2d tile info in it...];
        var ba:ByteArray = new ByteArray();
        ba.writeObject(arr);
        ba.compress();
        var file:File = new File();
        file.save(ba);

    I had successfully saved a compressed object to a file. Now the problem is my server side needs to read this file, decompress the array out of it, and convert it to a Python list. Is that possible?

    Read the article

  • Package operation failed?

    - by user95092
    While updating, I got this error in Update Manager:

        installArchives() failed: (Reading database ...
        [...]
        (Reading database ... 100%%
        (Reading database ... 168216 files and directories currently installed.)
        Preparing to replace libasound2 1.0.25-1ubuntu10 (using .../libasound2_1.0.25-1ubuntu10.1_i386.deb) ...
        Unpacking replacement libasound2 ...
        dpkg-deb (subprocess): data: internal gzip read error: '<fd:4>: data error'
        dpkg-deb: error: subprocess <decompress> returned error exit status 2
        dpkg: error processing /var/cache/apt/archives/libasound2_1.0.25-1ubuntu10.1_i386.deb (--unpack):
         subprocess dpkg-deb --fsys-tarfile returned error exit status 2
        No apport report written because MaxReports is reached already
        Errors were encountered while processing:
         /var/cache/apt/archives/libasound2_1.0.25-1ubuntu10.1_i386.deb
        Error in function:

    Read the article

  • untar filename.tar.gz to directory "filename"

    - by Jorre
    I would like to untar an archive, e.g. "tar123.tar.gz", to the directory "/myunzip/tar123/" using a shell command. tar -xf tar123.tar.gz will decompress the files, but into the same directory I'm working in. If the filename were "tar233.tar.gz", I would want it decompressed to "/myunzip/tar233/", so the destination directory would be based on the filename. Does anyone know if the tar command can do this?

    Read the article

  • iPhone application

    - by jaynaiphone
    I am working on a module in which I have to decompress data that comes from the server zipped in gzip format. I am using the NSData+Gzip.m file, which has a function named "gzipInflate" to unzip the data, but it gives me the error "Z_OK -3". What is the solution to that error? How can I solve it? Please reply :)

    Read the article

  • Compile/use unrar C++ source for iPhone app?

    - by greypoint
    I'm writing an app that will include the ability to decompress .zip and .rar files. I think I'm OK on how to handle the .zips, but .rars seem a little more trouble. I noticed that rarlabs has source available, but it's C++. Is there a way to compile, wrap, or otherwise use this code within an iPhone app? Reference: http://www.rarlab.com/rar_add.htm. Open to alternate suggestions on how to handle .rar files as well. I'm still pretty much a newbie, so please explain in small words :)

    Read the article

  • How to write and read browser cache from Flex

    - by yveslebeau
    Hi, I have a Flex application that makes use of the autocomplete control, and I use a web service to download the data after a successful login. My problem is that the data is about 4 MB and it takes quite a while to decompress in the application after downloading it every time. Is there a way to use the browser cache from Flex to store that data, to save time on downloading it each time? Regards, Yves

    Read the article

  • How to securely transfer

    - by michaeltk
    I have two servers: a backend server and a frontend server. Every night, the backend server generates static .html files, which are then compressed into .tar format. I need to write a script that resides on the backend server that will transfer the .tar file to the frontend server and then decompress it into the public web directory of the frontend server. What is the standard, secure way to do this? Thanks in advance.

    Read the article
